Almost algebraic actions of algebraic groups and
applications to algebraic representations
Uri Bader∗1 , Bruno Duchesne†2 , and Jean Lécureux‡3
1 Uri Bader, Weizmann Institute of Science, Rehovot, Israel.
2 Université de Lorraine, Institut Élie Cartan, B.P. 70239, 54506 Vandoeuvre-lès-Nancy Cedex, France.
3 Département de Mathématiques, Bâtiment 425, Faculté des Sciences d'Orsay, Université Paris-Sud 11, 91405 Orsay, France.
April 4, 2018
Abstract
Let G be an algebraic group over a complete separable valued field k. We
discuss the dynamics of the G-action on spaces of probability measures on
algebraic G-varieties. We show that the stabilizers of measures are almost
algebraic and the orbits are separated by open invariant sets. We discuss
various applications, including existence results for algebraic representations
of amenable ergodic actions. The latter provides an essential technical step
in the recent generalization of Margulis-Zimmer super-rigidity phenomenon
[BF13].
1 Introduction
This work concerns mainly the dynamics of an algebraic group acting on the space
of probability measures on an algebraic variety. Most (but not all) of our results are
known for local fields (most times, under a characteristic zero assumption). Our
main contribution is giving an approach which is applicable also to a more general
class of fields: complete valued fields. On our source of motivation, which stems
from ergodic theory, we will elaborate in §1.2, and in particular Theorem 1.16.
First we describe our objects of consideration and our main results, put in some
historical context.
Setup 1.1. For the entire paper (k, | · |) will be a valued field, which is assumed to be complete and separable as a metric space, and k̂ will be the completion of its algebraic closure, endowed with the extended absolute value.
∗ [email protected]
† [email protected]
‡ [email protected]
Note that k̂ is separable and complete as well (see the proof of Proposition 2.2).
The most familiar examples of separable complete valued fields are of course R and
C, but one may also consider the p-adic fields Qp , as well as their finite extensions.
Considering k = Cp, the completion of the algebraic closure of Qp, one may work over a field which is simultaneously complete, separable and algebraically closed. Another example of a complete valued field is given by fields of Laurent series K((t)), where K is any field (this field is local if and only if K is finite, and separable if and only if K is countable), or more generally the field of Hahn series K((t^Γ)), where Γ is a subgroup of R (see for example [Poo93]). This field is separable if and only if K is countable and Γ is discrete (see [hmb]).
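For orientation, we recall the standard formula computing this extension of the absolute value (see [BGR84, §3.2.4], cited again in §2.1); it is included here only as an illustration:

$$ |x| = \big| N_{K/k}(x) \big|^{1/[K:k]}, \qquad x \in K, \ K/k \ \text{a finite extension,} $$

where $N_{K/k}$ denotes the field norm. For instance, over $k = \mathbb{Q}_p$ a square root $x$ of $p$ has minimal polynomial $X^2 - p$, so $|x| = |{-p}|_p^{1/2} = p^{-1/2}$.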
Convention 1.2. Algebraic varieties over k will be identified with their k̂-points and will be denoted by boldface letters. Their k-points will be denoted by corresponding Roman letters. In particular we use the following.
Setup 1.3. We fix a k-algebraic group G and we denote G = G(k).
We are interested in algebraic dynamical systems, which we now briefly describe. For a formal, pedantic description see §2.1 and in particular Proposition 2.2. By an algebraic dynamical system we mean the action of G on V , where
V is the space of k-points of a k-algebraic variety V on which G acts k-morphically.
Such a dynamical system is Polish: G is a Polish group, V a Polish space and the
action map G × V → V is continuous (see §2.1 for proper definitions). The point
stabilizers of such an action are algebraic subgroups, and by a result of Bernstein-Zelevinski [BZ76], the orbits of such an action are locally closed (see Proposition
2.2).
Following previous works of Furstenberg and Moore, Zimmer found a surprising
result: for the action of an algebraic group G on an algebraic variety V , all defined
over R, consider now the action of G on the space Prob(V ) of probability measures
on V . Then the point stabilizers are again algebraic subgroups and the orbits are
locally closed. However, this result does not extend trivially to other fields. For
example, with k = C, consider the Haar measure on the circle S¹ < C∗. For the action of C∗ on itself, the stabilizer of that measure is S¹, which is not a C-algebraic subgroup. Similarly, for k = Qp, consider the Haar measure on the p-adic integers Zp < Qp. For the action of Qp on itself, the stabilizer of that measure is Zp, which is not a Qp-algebraic subgroup.
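To spell out the first of these computations (a routine verification): write m for the Haar probability measure on S¹. For z ∈ C∗ one has

$$ \operatorname{supp}(z_* m) = z S^1 = \{\, |z| e^{i\theta} \mid \theta \in [0, 2\pi) \,\}, $$

so $z_* m = m$ forces $|z| = 1$, while every $z \in S^1$ acts by a rotation and preserves $m$. Hence $\operatorname{Stab}_{\mathbb{C}^*}(m) = S^1$, an infinite proper subgroup of the one-dimensional irreducible variety $\mathbb{C}^*$, hence Zariski dense and in particular not C-algebraic. The computation for the pair $\mathbb{Z}_p < \mathbb{Q}_p$ is analogous.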
Definition 1.4. A closed subgroup L < G is called almost algebraic if there
exists a k-algebraic subgroup H < G such that L contains H = H(k) as a normal
cocompact subgroup. A continuous action of G on a Polish space V is called
almost algebraic if the point stabilizers are almost algebraic subgroups of G and
the collection of G-invariant open sets separates the G-orbits, i.e. the quotient
topology on G\V is T0 .
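To illustrate the definition: the two stabilizers computed above, while not algebraic, are almost algebraic with trivial algebraic part. Indeed, both

$$ L = S^1 < \mathbb{C}^* \qquad \text{and} \qquad L = \mathbb{Z}_p < \mathbb{Q}_p $$

are compact, so H = H(k) = {1} is a normal cocompact subgroup of L.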
Remark 1.5. If k is a local field then G is locally compact and by [Eff65, Theorem 2.6] the condition that G\V is T0 is equivalent to the (a priori stronger) condition
that every G-orbit is locally closed in V .
Remark 1.6. If k = R then every compact subgroup of G is the real points of
a real algebraic subgroup of G (see e.g. [VGO90, Chapter 4, Theorem 2.1]). It
follows that every almost algebraic subgroup is the real points of a real algebraic
subgroup of G. We get that a continuous action of G on a Polish space V is almost
algebraic if and only if the stabilizers are real algebraic and the orbits are locally
closed.
Two obvious classes of examples of almost algebraic actions are algebraic actions (by the previously mentioned result of Bernstein-Zelevinski) and proper actions (as the stabilizers are compact and the space of orbits is T2 , that is, Hausdorff). The notion of almost algebraic action is a natural common generalization.
It is an easy corollary of Prokhorov’s theorem (see Theorem 2.3 below) that if the
action of G on V is proper then so is its action on Prob(V ), see Lemma 2.7. The
main theorem of this paper is the following analogue.
Theorem 1.7. If the action of G on a Polish space V is almost algebraic then
the action of G on Prob(V ) is almost algebraic as well.
The following corollary was obtained by Zimmer, under the assumptions that
k is a local field of characteristic 0 and V is homogeneous, see [Zim84, Chapter 3].
Corollary 1.8. Assume G has a k-action on a k-variety V. Then the induced
action of G = G(k) on Prob(V(k)) is almost algebraic.
In the course of the proof of Theorem 1.7 we obtain in fact more precise
information. A k-G-variety is a k-variety with a k-action of G.
Proposition 1.9. Fix a closed subgroup L < G. Then there exists a k-subgroup
H0 < G which is normalized by L such that L has a precompact image in the
Polish group (NG(H0)/H0)(k) and such that for every k-G-variety V, any L-invariant finite measure on V(k) is supported on the subvariety of H0-fixed points, V^{H0} ∩ V(k).
This proposition is a generalization of one of the main results of Shalom [Sha99],
who proves it under the assumptions that k is local and L = G. For the case L = G
the following striking corollary is obtained.
Corollary 1.10. If for every strict k-algebraic normal subgroup H ◁ G, G(k)/H(k)
is non-compact, then every G-invariant measure on any k-G-algebraic variety
V(k) is supported on the G-fixed points.
In particular we can deduce easily the Borel density theorem.
Corollary 1.11. Let G be a k-algebraic group and Γ < G = G(k) be a closed
subgroup such that G/Γ has a G-invariant probability measure. If for every proper
k-algebraic normal subgroup H ◁ G, G(k)/H(k) is non-compact, then Γ is Zariski
dense in G.
To deduce the last corollary from the previous one, consider the map
G/Γ → (G/Γ̄^Z)(k),
where Γ̄^Z denotes the Zariski closure of Γ, and push forward the invariant measure from G/Γ to obtain a G-invariant measure on (G/Γ̄^Z)(k). The homogeneous space G/Γ̄^Z must contain a G-fixed point, hence must be trivial. That is, Γ̄^Z = G.
1.1 Applications: ergodic measures on algebraic varieties
A classical theme in ergodic theory is the attempt to classify all ergodic measures
classes, given a continuous action of a topological group on a Polish space. In this
regard, the axiom that the space of orbits is T0 has strong applications. Recall
that, given a group L acting by homeomorphisms on a Polish space V , a measure
on V is L-quasi-invariant if its class is L-invariant. The following proposition is
well known.
Proposition 1.12. Let V be a Polish G-space and assume that the quotient topology on G\V is T0 . Let L < G be a subgroup and µ an L-quasi-invariant ergodic
probability (or σ-finite) measure. Then there exists v ∈ V such that µ(V −Gv) = 0.
Indeed, G\V is second countable, as V is, and for a countable basis (Bi), denoting the push forward of µ to G\V by µ̄, the set
⋂{Bi | µ̄(Bi) = 1} ∩ ⋂{Bi^c | µ̄(Bi) = 0}
is clearly a singleton, whose preimage in V is an orbit of full measure.
In particular, we get that for a subgroup L < G and an algebraic dynamical
system of G, every L-invariant measure is supported on a single G-orbit. Another
striking result is that an algebraic variety cannot support a nontrivial weakly mixing probability measure. Recall that an L-invariant probability measure µ is weakly mixing
if and only if µ × µ is L-ergodic.
Corollary 1.13. Assume G has a k-action on the k-variety V. Fix a closed
subgroup L < G and let µ be an L-invariant weakly mixing probability measure on
V = V(k). Then there exists a point x ∈ V^L such that µ = δx.
This corollary follows at once from Proposition 1.9, as the action of L on
VH0 ∩ V(k) is via a compact group.
We end this subsection with the following useful application, obtained by composing Proposition 1.12 with Theorem 1.7. This corollary is in fact our main
motivation for developing the material in this paper. It deals with measures on
spaces of measures, and is the main tool in deriving Theorem 1.16 below.
Corollary 1.14. Assume G has a k-action on the k-variety V. Denote V =
V(k). Let L < G be a subgroup and ν be an L-ergodic quasi-invariant measure on
Prob(V ). Then there exists µ ∈ Prob(V ) such that ν(Prob(V ) − Gµ) = 0.
1.2 Applications to algebraic representations of ergodic actions.
A main motivation for us to extend the foundation outside the traditional local
field zone is the recent developments in the theory of algebraic representations of
ergodic actions, and in particular its applications to rigidity theory. In [BF13] the
following theorem, as well as various generalizations, are proven.
Theorem 1.15 ([BF13, Theorem 1.1], Margulis super-rigidity for arbitrary fields).
Let l be a local field. Let T be the l-points of a connected almost-simple algebraic
group defined over l. Assume that the l-rank of T is at least two. Let Γ < T be a
lattice.
Let k be a valued field. Assume that as a metric space k is complete. Let G be
the k-points of an adjoint simple algebraic group defined over k. Let δ : Γ → G be
a homomorphism. Assume δ(Γ) is Zariski dense in G and unbounded. Then there
exists a continuous homomorphism d : T → G such that δ = d|Γ .
The proofs in [BF13] are based on the following, slightly technical, theorem
which will be proven here.
Theorem 1.16. Let R be a locally compact group and Y be an ergodic, amenable
Lebesgue R-space. Let (k, | · |) be a valued field. Assume that as a metric space k is
complete and separable. Let G be a simple k-algebraic group. Let f : R×Y → G(k)
be a measurable cocycle.
Then either there exists a k-algebraic subgroup H ⪇ G and an f-equivariant measurable map φ : Y → G/H(k), or there exists a complete and separable metric space V on which G acts by isometries with bounded stabilizers and an f-equivariant measurable map φ′ : Y → V .
A more friendly, cocycle free, version is the following.
Corollary 1.17. Let R be a locally compact, second countable group. Let Y be an
ergodic, amenable R-space. Suppose that G is an adjoint simple k-algebraic group,
and there is a morphism R → G = G(k). Then:
• Either there exists a complete and separable metric space V , on which G acts
by isometries with bounded stabilizers, and an R-equivariant measurable map
Y →V;
• or there exists a strict k-algebraic subgroup H and an R-equivariant measurable map Y → G/H(k).
Taking Y to be a point in the above corollary, we obtain the following.
Corollary 1.18. Suppose R < GLn(k) is a closed amenable subgroup. Then the image of R in R̄^Z modulo its solvable radical is bounded.
Indeed, upon modding out the solvable radical of R̄^Z, the latter is a product of simple adjoint factors, and by the previous corollary the image of R in each factor is bounded.
Note that over various fields, such as Cp and F̄p ((t)), every bounded group
is amenable, being the closure of an ascending union of compact groups, while
for other fields there exist bounded groups which are not amenable. For example
SL2 (Q[[t]]), which is bounded in SL2 (Q((t))), factors over the discrete group SL2 (Q)
which contains a free group.
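To make the boundedness claim in this last example explicit (a routine check with the t-adic absolute value, normalized here, as we may, by |t| = 1/2): every element of Q[[t]] satisfies

$$ \Big| \sum_{n \ge 0} a_n t^n \Big| \le 1, $$

so the matrix entries of any element of SL₂(Q[[t]]) lie in the unit ball, and every regular function (a polynomial in the entries) is bounded on SL₂(Q[[t]]). Reduction modulo t is a continuous surjection SL₂(Q[[t]]) → SL₂(Q) onto a discrete group, and SL₂(Q) contains free subgroups (for example the subgroup generated by the two elementary matrices with off-diagonal entry 2), which rules out amenability.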
1.3 The structure of the paper.
The paper has two halves: the first half, consisting of §2 and §3, is devoted to the proof of Theorem 1.7, and the second half is devoted to the proof of Theorem 1.16.
In §2 we collect various needed preliminaries; in particular we discuss the Polish structure on algebraic varieties and on spaces of measures. The most important results in this section are Proposition 2.2, which discusses algebraic varieties, and Corollary 2.14, which uses disintegration as a replacement for a classical ergodic decomposition argument (which is not applicable in our context, due to the lack of compactness). The heart of the paper is §3, where the concept of almost algebraic action is discussed. Theorem 1.7 is proven in §3.4.
In §4, we give a thorough discussion of bounded subgroups of algebraic groups,
and in §5, we discuss a suitable replacement of a compactification of coset spaces.
In §6, we prove Theorem 1.16.
Acknowledgements
U.B was supported in part by the ERC grant 306706. B.D. was supported in part
by Lorraine Region and Lorraine University. B.D & J.L. were supported in part
by ANR grant ANR-14-CE25-0004 GAMME.
2 Preliminaries
2.1 Algebraic varieties as Polish spaces
Recall that a topological space is called Polish if it is separable and completely
metrizable. For a good survey on the subject we recommend [Kec95]. We mention
that the class of Polish spaces is closed under countable disjoint unions and countable products. A Gδ -subset of a Polish space is Polish so, in particular, a locally
closed subset of a Polish space is Polish. A Hausdorff space which admits a finite
open covering by Polish open sets is itself Polish. Indeed, such a space is clearly
metrizable (e.g. by the Urysohn metrization theorem [Kec95, Theorem 1.1]) so it is Polish by the Sierpinski theorem [Kec95, Theorem 8.19], which states that the image
of a continuous open map from a Polish space to a separable metrizable space is
Polish.
A topological group whose underlying topological space is Polish is called a Polish group. The Sierpinski theorem also implies that for a Polish group K and a
closed subgroup L, the quotient topology on K/L is Polish. Effros Lemma [Eff65,
Lemma 2.5] says that the quotient topology on K/L is the unique K-invariant
Polish topology on this space. Another important result of Effros concerning
Polish actions (that are continuous actions of Polish groups on Polish spaces) is
the following.
Theorem 2.1 (Effros theorem [Eff65, Theorem 2.1]). For a continuous action of
a Polish group G on a Polish space V the following are equivalent.
1. The quotient topology on G\V is T0 .
2. For every v ∈ V , the orbit map G/ StabG (v) → Gv is a homeomorphism.
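A standard example showing how both conditions may fail simultaneously is an irrational rotation:

$$ G = \mathbb{Z} \curvearrowright V = S^1, \qquad n \cdot v = e^{2\pi i n \alpha} v, \quad \alpha \notin \mathbb{Q}. $$

Every orbit is dense, so the only invariant open sets are ∅ and S¹ and the quotient topology on Z\S¹ is not T0; correspondingly, the orbit map Z = Z/StabZ(v) → Zv is a continuous bijection from a discrete group onto a set without isolated points, hence not a homeomorphism.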
Our basic class of Polish actions will be given by actions of algebraic groups
on algebraic varieties. As mentioned in Setups 1.1 & 1.3, we fixed a complete and
separable valued field (k, | · |), that is a field k with an absolute value | · | which
is complete and separable (in the sense of having a countable dense subset). See
[EP05, BGR84]¹ for a general discussion on these fields. It is a standard fact that a complete absolute value on a field F has a unique extension to its algebraic closure F̄ [BGR84, §3.2.4, Theorem 2] and Hensel's lemma implies that the completion of this algebraic closure is still algebraically closed [BGR84, §3.4.1, Proposition 3].
Recall that we identify each k-variety V with its set of k̂-points. In particular, this identification yields a topology on V. Identifying the affine space Aⁿ(k̂) with k̂ⁿ, any affine k-variety can be seen as a closed subset of Aⁿ(k̂). More generally, a k-variety has a unique topology making its affine charts homeomorphisms. Observe that with this topology, the set of k-points V of V is closed.
Topological notions, unless otherwise said, will always refer to this topology. In
particular, for the k-algebraic group G we fixed, G and G = G(k) are topological
groups. We note that V actually carries a structure of a k-analytic manifold, G is
a k-analytic group and the action of G on V is k-analytic. We will not make an
explicit use of the analytic structure here. The interested reader is referred to the
excellent text [Ser06], in which the theory of analytic manifolds and Lie groups
over complete valued fields is developed (see in particular [Ser06, Part II, Chapter
I]).
We will discuss the category of k-G-varieties. A k-G-variety is a k-variety
endowed with an algebraic action of G which is defined over k. A morphism of
such varieties is a k-morphism which commutes with the G-action.
Proposition 2.2. A k-variety V and its set of k-points V are Polish spaces. In
particular, G and G are Polish groups.
If V is a k-G-variety then the G-orbits in V are locally closed and the quotient
topology on G\V is T0 . For v ∈ V , the orbit Gv is a k-subvariety of V. There
exists a k-subgroup H < G contained in the stabilizer of v such that the orbit map
G/H → Gv is defined over k and the induced map G/H → Gv is a homeomorphism, where H = H(k), G/H is endowed with the quotient space topology and
Gv is endowed with the subspace topology.
Proof. Let us first explain how the extended absolute value makes k̂ Polish. In our situation k has a countable dense subfield k0. The algebraic closure k̄0 of k0 is still countable and thus its completion is separable and algebraically closed. By the universal property of the algebraic closure, k̄ embeds in this completion and by uniqueness of the extension of the absolute value, this embedding is an isometry. Thus k̂ is algebraically closed, complete and separable.
Since k̂ is Polish, so is the affine space Aⁿ(k̂) ≃ k̂ⁿ. It follows that V (respectively V ) is a Polish space, as this space is a Hausdorff space which admits a finite open covering by Polish open sets — the domains of its k-affine charts (respectively their k-points).
¹ In the second reference, the word valuation is used for what we call an absolute value.
The fact that the G-orbits in V are locally closed is proven in the appendix of
[BZ76]. Note that in [BZ76] the statement is claimed only for non-Archimedean
local fields, but the proof is actually correct for any field with complete non-trivial
absolute value, which is the setting of [Ser06, Part II, Chapter III] on which [BZ76]
relies. Another proof can be found in [GGMB13, §0.5]. It is then immediate that
the quotient topology on G\V is T0 .
For v ∈ V the orbit Gv is a k-subvariety of V by [Bor91, Proposition 6.7]. We set H to be the Zariski closure of StabG(v) in G (note that if char(k) = 0 then H = StabG(v)). By [Bor91, AG, Theorem 14.4], H is defined over k, and it is straightforward that H = H(k) = StabG(v). By [Bor91, Theorem 6.8] the orbit map G/H → Gv is defined over k, thus it restricts to a continuous map from G/H onto Gv. The fact that the latter map is a homeomorphism follows from Effros theorem (Theorem 2.1) since G\V is T0.
We emphasize that, as a special case of Proposition 2.2, we get that for every k-algebraic subgroup H of G, the embedding G/H → G/H(k) is a homeomorphism onto its image. We will use this fact freely in the sequel.
2.2 Spaces of measures as Polish spaces
In this subsection V denotes a Polish space. We let Prob(V ) be the set of Borel
probability measures on V , endowed with the weak*-topology (also called the topology of weak convergence). This topology comes from the embedding of Prob(V )
in the dual of the Banach space of bounded continuous functions on V . If d is a
complete metric on V which is compatible with the topology (the metric topology
coincides with the original topology on V ), the corresponding Prokhorov metric
d on Prob(V ) is defined as follows: for µ, ν ∈ Prob(V ), d(µ, ν) is the infimum of
ε > 0 such that for all Borel subsets A ⊆ V , µ(A) ≤ ν(A^ε) + ε and symmetrically ν(A) ≤ µ(A^ε) + ε, where A^ε is the ε-neighborhood (for d) around A. The following theorem summarizes some standard results, see Chapter 6 and Appendix III
of [Bil99].
Theorem 2.3 (Prokhorov). The metric space (Prob(V ), d) is complete and separable and the topology induced by d on Prob(V ) is the weak*-topology. In particular
the space Prob(V ) endowed with the weak*-topology is Polish.
A subset C in Prob(V ) is precompact if and only if it is tight: for every ε > 0 there exists a compact K ⊂ V such that for every µ ∈ C, µ(K) > 1 − ε. In particular Prob(V ) is compact if V is.
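As a sanity check on the definition of the Prokhorov metric, a routine computation gives its value on a pair of Dirac masses:

$$ d(\delta_x, \delta_y) = \min(d(x, y), 1), \qquad x, y \in V. $$

Indeed, taking A = {x}, the condition δx(A) ≤ δy(A^ε) + ε forces either ε ≥ 1 or d(x, y) < ε, while conversely every ε > min(d(x, y), 1) works for all Borel sets A.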
Remark 2.4. Replacing if necessary d by a bounded metric, we note that there is another metric on Prob(V ) with the same properties (metrizing the weak*-topology and being invariant under isometries): the Wasserstein metric [Vil09, Corollary 6.13].
We endow Homeo(V ) with the pointwise convergence topology. The following
is a standard application of the Baire category theorem, see [Kec95, Theorem 9.14].
Theorem 2.5. Assume G is acting by homeomorphisms on V . Then the action
map G × V → V is continuous if and only if the homomorphism G → Homeo(V )
is continuous.
Lemma 2.6. If G acts continuously on V then it also acts continuously on Prob(V ), and if the action G ↷ (V, d) is by isometries, the action G ↷ (Prob(V ), d) is also by isometries.
Proof. The fact that G acts by isometries on Prob(V ) when G acts by isometries
on V is straightforward from the definition of the Prokhorov metric. In order to
prove that G acts continuously on Prob(V ) when it acts continuously on V it is
enough, by Theorem 2.5, to show that for every µ ∈ Prob(V ) and every sequence
gn in G, gn → e in G implies gn µ → µ in Prob(V ). Fix µ ∈ Prob(V ) and assume
gn → e in G. For every bounded continuous function f on V , we have by Lebesgue
bounded convergence theorem
Z
Z
Z
f (x) d(gn µ)(x) = f (gn x) dµ(x) → f (x) dµ(x)
as for every x ∈ V , gn x → x. Thus, by the definition of the weak*-topology
gn µ → µ.
We observe that Lemma 2.6 and Proposition 2.2 show that if V is a k-G-variety
then G acts continuously on V = V(k) and on Prob(V ). The following is a nice
application of Prokhorov theorem (Theorem 2.3).
Lemma 2.7. If the action of G on V is proper then the action of G on Prob(V )
is proper as well.
Proof. For a compact C ⊂ Prob(V ) we can find a compact K ⊂ V with µ(K) >
1/2 for every µ ∈ C by Theorem 2.3. Then for g ∈ G and µ ∈ C such that gµ ∈ C
we get that both µ(K) > 1/2 and µ(gK) = gµ(K) > 1/2, thus gK ∩ K 6= ∅.
We conclude that {g ∈ G | gC ∩ C 6= ∅} is precompact, as it is a subset of the
precompact set {g ∈ G | gK ∩ K 6= ∅}.
2.3 Polish extensions and disintegration
Definition 2.8. A Polish fibration is a continuous map p : V → U where U is a
T0 -space and V a Polish space. An action of G on such a Polish fibration is a pair
of continuous actions on V and U such that p is equivariant.
Let p : V → U be a Polish fibration. Let ProbU (V ) be the set of probability
measures on V which are supported on one fiber. We denote by p• : ProbU(V ) → U
the natural map.
Lemma 2.9. The map p• is a Polish fibration. If the group G acts on the Polish
fibration V → U , then it also acts on p• .
Proof. Since U is T0 , fibers of p are separated by a countable family (Cn ) of closed
saturated subsets of V . A probability measure µ is supported on one fiber if
and only if for all n, µ(Cn )µ(V \ Cn ) = 0. The set {µ ∈ Prob(V ), µ(Cn ) = 1}
is closed and {µ ∈ Prob(V ), µ(V \ Cn ) = 1} is Gδ since for all 0 < r < 1,
{µ ∈ Prob(V ), µ(V \ Cn ) > r} is open. So ProbU (V ) is a Gδ -subset of Prob(V )
and thus Polish.
Let us show that p• is continuous. Assume µn → µ in ProbU (V ). Let u = p• (µ)
and un = p• (µn ). Let O ⊆ U be an open set containing u. For n large enough,
µn (p−1 (O)) > 1/2 and thus un ∈ O.
If G acts on V → U , it is clear that G acts on ProbU (V ). The continuity of
the action on ProbU (V ) follows from Lemma 2.6.
Let (U, ν) be a probability space and X a Polish space; we denote by L0(U, X) the space of classes of measurable maps from U to X, under the equivalence relation of equality ν-almost everywhere. Note that the dependence on ν is implicit in our notation. We endow that space with the topology of convergence in probability. Fixing a compatible metric d on X, this topology is metrized as follows: for φ, φ′ ∈ L0(U, X), the distance between φ and φ′ is
δ(φ, φ′) = ∫_U min(d(φ(v), φ′(v)), 1) dν(v).
This topology can be also defined using sequences: φn → φ if for any ε > 0, there
is A ⊆ U such that ν(A) > 1 − ε and for all n sufficiently large and all v ∈ A,
d(φ(v), φn (v)) < ε. We note that this topology on L0 (U, X) does not depend on
the choice of an equivalent metric on X. This turns L0(U, X) into a Polish space.
Lemma 2.10. Assume (αn ) is a sequence converging to α in probability in L0 (U, X).
Then there exists a subsequence α_{n_k} which converges ν-a.e. to α, that is, for ν-almost every u ∈ U , α_{n_k}(u) converges to α(u) in X.
The proof of the lemma is standard, but in most textbooks it appears only for
the cases X = R or X = C, see for example [Fol99, Theorem 2.30]. Even though
the standard proof works mutatis-mutandis, we give below a short argument, reducing the general case to the case X = R.
Proof. Observe that the sequence d(αn , α) (which denotes the map u 7→ d(αn (u), α(u)))
converges in probability to 0 in L0(U, R). Thus there exists a subsequence d(α_{n_k}, α) converging to 0 a.e., and we get that α_{n_k} converges to α a.e.
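The passage to a subsequence in Lemma 2.10 cannot be avoided, as the classical 'typewriter' sequence on the unit interval shows. Take (U, ν) = ([0, 1], Leb), X = R and, writing n = 2^k + j with 0 ≤ j < 2^k,

$$ \alpha_n = \mathbf{1}_{[j 2^{-k}, (j+1) 2^{-k}]}. $$

Then δ(αn, 0) = 2^{−k} → 0, so αn → 0 in probability, yet for every u ∈ [0, 1] one has αn(u) = 1 for infinitely many n, so (αn) converges at no point; the subsequence α_{2^k} = 1_{[0, 2^{−k}]} does converge to 0 almost everywhere.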
If p : V → U is a Polish fibration, and ν is a measure on U , we denote by L0_p(U, V ) the space of measurable sections of p (identified if they agree almost everywhere), i.e. maps which associate to u ∈ U a point in p⁻¹(u), endowed with the induced topology from L0(U, V ). If G acts on the Polish fibration p, it also acts on L0_p(U, V ) via the formula (gf)(u) = gf(g⁻¹u) where u ∈ U and f ∈ L0_p(U, V ).
The following theorem is a variation of the classical theorem of disintegration
of measures. It is essentially proven in [Sim12].
Theorem 2.11. Let p : V → U be a Polish fibration and ν be a probability measure on U . Let P = {µ ∈ Prob(V ) | p∗µ = ν}. For every α ∈ L0_{p•}(U, ProbU(V )) the formula ∫_U α(u) dν defines an element of P . The map thus obtained, L0_{p•}(U, ProbU(V )) → P , is a homeomorphism onto P .
Definition 2.12. For µ ∈ P , the element of L0_{p•}(U, ProbU(V )) obtained by applying to µ the inverse of the map α ↦ ∫_U α(u) dν is denoted u ↦ µu. It is called the disintegration of µ with respect to p : V → U .
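A basic example to keep in mind, consistent with Theorem 2.11: the disintegration of Lebesgue measure on the unit square over the first coordinate. With

$$ V = [0,1]^2, \quad U = [0,1], \quad p(x, y) = x, \quad \mu = \mathrm{Leb}^2, \quad \nu = p_* \mu = \mathrm{Leb}, $$

one has µu = δu ⊗ Leb ∈ Prob(p⁻¹(u)) for ν-almost every u, and indeed µ = ∫_U µu dν(u).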
Proof. We first claim that the map α ↦ ∫_U α(u) dν is continuous, and then we argue to show that it is invertible, and its inverse is continuous as well.
For the continuity, given a converging sequence αn → α in L0_{p•}(U, ProbU(V )) with µn = ∫_U αn(u) dν and µ = ∫_U α(u) dν, it is enough to show that every subsequence of µn has a subsequence that converges to µ. Since every sequence that converges in measure has a subsequence that converges almost everywhere, abusing our notation and denoting again αn and µn for the resulting sub-subsequences, we may assume that αn converges to α ν-almost everywhere. Picking an arbitrary continuous bounded function f on V , we obtain that for ν-a.e. u ∈ U , ∫_V f dαn(u) → ∫_V f dα(u). Thus by the Lebesgue bounded convergence theorem we get
∫_V f dµn = ∫_U (∫_V f dαn(u)) dν → ∫_U (∫_V f dα(u)) dν = ∫_V f dµ.
This shows that indeed µn → µ.
We now argue that the map α ↦ ∫_U α(u) dν is invertible and its inverse is continuous. Without loss of generality, we can assume that p is onto. Hence U is second countable. Since it is also T0, it follows that U is countably separated. By [Zim84, Proposition A.1], there exists a Borel embedding φ : U → [0, 1]. We consider [0, 1] with the measure φ∗ν. Precomposition by φ gives a homeomorphism L0_{(φ∘p)•}([0, 1], Prob_{[0,1]}(V )) → L0_{p•}(U, ProbU(V )). Thus, in what follows we may and do assume that U ⊂ [0, 1].² Under this assumption [Sim12, Theorem 2.1] guarantees that the map L0_{p•}(U, ProbU(V )) → P is invertible. We denote the preimage of µ ∈ P by u ↦ µu. We are left to show that this association is continuous. To this end we embed V in a compact metric space V′ and extend p by setting p(v′) = 1 for v′ ∈ V′ − V . Then [Sim12, Theorem 2.2] proves that for almost every u ∈ U , µu is obtained as the weak*-limit of the normalized restrictions, denoted by µ_{u,η}, of µ to p⁻¹((u − η, u + η)) as η → 0.
Assume that µn → µ is a converging sequence in P . We know that for ν-a.e. u, d(µ_{u,η}, µu) → 0 when η → 0 and similarly for all n ∈ N, d(µ^n_{u,η}, µ^n_u) → 0 when η → 0. Fix ε > 0. For n ∈ N, we set
An = {u ∈ U | ∃η0 > 0, ∀k ≥ n, ∀η ∈ (0, η0): d(µ^k_{u,η}, µu) ≤ ε}.
Then ν(∪An) = 1 and An ⊆ An+1. Thus there is n such that ν(An) ≥ 1 − ε and for u ∈ An, d(µu, µ^k_u) ≤ ε for all k ≥ n. This shows that the image sequence of (µn) in L0_{p•}(U, ProbU(V )) indeed converges to the image of µ.
² Since the embedding U → [0, 1] is only Borel, when we assume U ⊂ [0, 1], the fibration V → U cannot be assumed to be Polish anymore. Since our argument does not depend on the topology of U , this does not matter here.
We note that if G acts on the fibration V → U (that is, G acts on U and V and
p is equivariant) then the disintegration homeomorphism is also equivariant with
respect to the natural action of G on L0p (U, V ) given by (gf )(u) = g(f (g −1 u)).
Lemma 2.13. Let p : V → U be a Polish fibration with an action of G such that the G-action on U is trivial. Let ν be a probability measure on U , and let f ∈ L0_p(U, V ). Then there exists U1 ⊂ U of full measure such that
Stab(f ) = ⋂_{u∈U1} Stab(f (u)).
Proof. Let L be the stabilizer of f in G. If L0 is a countable dense subgroup of L, then there is a full measure subset U1 ⊂ U such that L0 ⊂ ⋂_{u∈U1} Stab(f (u)) (for any g ∈ L0, there is such a subset Ug ; choose U1 to be the intersection over L0). Since all these stabilizers are closed, and L0 is dense in L, we actually have L ⊂ ⋂_{u∈U1} Stab(f (u)). Since the reverse inclusion is clear, we conclude that
L = ⋂_{u∈U1} Stab(f (u)).
Corollary 2.14. Assume G acts continuously on the Polish space V and the
quotient topology on G\V is T0 . Let L < G be a closed subgroup and µ be an
L-invariant probability measure on V . Then there exist a point v ∈ V and an
L-invariant probability measure on G · v ' G/ Stab(v).
Proof. Set U = G\V , let p : V → U be the projection and let ν = p∗µ be the pushforward measure of µ on U . By Theorem 2.11, we may consider the disintegration of µ as an element (µu) ∈ L0_{p•}(U, ProbU(V )) and this element is clearly L-invariant. By Lemma 2.13, the stabilizer of (µu) is an intersection of stabilizers of the measures µu, for u in a subset of U . In particular L
stabilizes some µu , which is a measure supported on an orbit G · v. The latter
is equivariantly homeomorphic to G/ Stab(v) thanks to Effros theorem (Theorem 2.1).
3 Almost algebraic groups and actions
The goal of this section is the proof of Theorem 1.7. Starting with an almost algebraic action of G on a Polish space V , we aim to prove that the action G ↷ Prob(V ) is almost algebraic as well. So we have to prove that stabilizers of probability measures on V are almost algebraic and the quotient G\ Prob(V ) is T0. Going toward wider and wider generality, we prove the first point in §3.2 and the second one in §3.3.
3.1 Almost algebraic groups
Recall that by our setup 1.1, (k, | · |) is a fixed complete and separable valued field
and G is a fixed k-algebraic group. By Proposition 2.2, G = G(k) has the structure
of a Polish group. Recall that a closed subgroup L < G is called almost algebraic
if there exists a k-algebraic subgroup H < G such that L contains H = H(k) as
a normal cocompact subgroup (Definition 1.4).
Lemma 3.1. An arbitrary intersection of almost algebraic subgroups is again
almost algebraic.
More precisely, let (Li )i∈I be a collection of almost algebraic subgroups and Hi
algebraic subgroups such that Hi = Hi (k) is normal and cocompact in Li .
Then one can find a finite subset I0 such that, defining H = ∩i∈I0 Hi , we have
that H = ∩i∈I Hi and H(k) is normal and cocompact in ∩i∈I Li .
Proof. Let L = ∩Li and H = ∩Hi , which coincides with (⋂_{i∈I} Hi)(k). Then it is straightforward to check that H ◁ L. Thanks to the Noetherian property of G, there exists a finite subset I0 ⊂ I such that ⋂_{i∈I} Hi coincides with ⋂_{i∈I0} Hi .
Let L be the Zariski closure of L and Li the one of Li . The diagonal image of L(k) in ∏_{i∈I0} Li(k)/Hi is locally closed by Proposition 2.2 and it is a group. Thus it is actually closed. Moreover it is homeomorphic to L(k)/H. To conclude, it suffices to observe that L/H is closed in L(k)/H and lies in (L(k)/H) ∩ ∏_{i∈I0} Li/Hi , which is compact.
Remark 3.2. Actually the proof of this lemma shows that any almost algebraic
subgroup L has a minimal subgroup among all cocompact normal subgroups N
which can be written N = N(k) for some algebraic subgroup N ≤ G. This group
is actually the intersection of all such subgroups and it is invariant under the
normalizer NG (L) of L in G.
Lemma 3.3. Let H, L be closed subgroups of G such that H is almost algebraic,
H C L and L/H is compact. Then L is almost algebraic.
Proof. There is an algebraic subgroup N of G such that N = N(k) is normal and cocompact in H. Moreover, thanks to Remark 3.2, N may be chosen to be invariant under NG(H) and thus N is cocompact and normal in L.
3.2 Almost algebraicity of stabilizers of probability measures
Let V be a Polish space endowed with a continuous G-action. Recall that the
action G y V is called almost algebraic if the stabilizers are almost algebraic
subgroups of G and the quotient topology on G\V is T0 (Definition 1.4).
Remark 3.4. For a continuous action of G on a Polish space V , the action is
almost algebraic if and only if the stabilizers are almost algebraic and for every
v ∈ V and any sequence gn ∈ G, gn v → v implies gn → e in G/ StabG (v).
This equivalent definition is much easier to check, and we will allow ourselves to
use it freely in the sequel. The two definitions are indeed equivalent by Effros’
Theorem 2.1.
Example 3.5. Let I be a k-algebraic group and φ : G → I a k-morphism. Let L
be an almost algebraic group in I = I(k). Then the action of G on I/L is almost
algebraic. This fact is proved after Lemma 3.7.
Lemma 3.6. Let K be a compact group acting continuously on a T0 -space X.
Then the orbit space K\X is T0 as well.
Proof. Continuity of the action means that the action map K ×X → K ×X which
associates (k, kx) to (k, x) is a homeomorphism. Compactness of K implies that
the projection (k, x) 7→ x from K × X to X is closed. Composing the two yields
closedness of the map (k, x) 7→ kx. This implies that if F ⊂ X is closed, then KF
is again closed.
Let x, y ∈ X in different K-orbits. Let us consider Y = Kx ∪ Ky with the
induced topology. This is a compact T0 -space. Now, consider the set of closed
non-empty subspaces of Y with the order given by inclusion. By compactness any
decreasing chain has a non-empty intersection and thus Zorn’s Lemma implies
there are minimal elements, which are points since Y is T0. Thus Y has at least one closed point.
Without loss of generality we may and shall assume that {x} is closed in Y .
This means that there exists a closed subset F of X such that F ∩ Y = {x}. In
particular F ∩ Ky = ∅, and therefore Ky ∩ KF = ∅. Finally, KF is a closed
K-invariant set separating Kx from Ky.
Lemma 3.7. Let J be a topological group acting continuously on a topological space
X. If N is a closed normal subgroup of J, the induced action of J/N on N \X is
continuous and the orbits spaces J\X and (J/N )\(N \X) are homeomorphic.
Proof. The map (g, x) 7→ N gx from J ×X to N \X is continuous and goes through
the quotient space J/N ×N \X which is the orbit space of N ×N acting diagonally
on J × X. Thus, (gN, N x) 7→ N gx is continuous, that is the action of J/N on
N \X is continuous.
By the universal property of the topological quotient, the continuous map
x 7→ (J/N )N x from X to (J/N )\(N \X) induces a continuous map J\X →
(J/N )\(N \X). Conversely, the continuous map N \X → J\X induces also a continuous map (J/N )\(N \X) → J\X which is the inverse of the previous one.
Proof of Example 3.5. Since φ−1 (L) and its conjugates are almost algebraic in G,
it is clear that the stabilizers are almost algebraic. So we are left to prove that
the topology on G\I/L is T0 . Let H be a cocompact normal subgroup in L with
H = H(k) for some k-algebraic subgroup H of I. By Lemma 3.7 the orbit space
G\I/L is homeomorphic to the space of orbits of the action of G × (L/H) on I/H.
Note that the action of G on I/H ⊂ I/H(k) has locally closed orbits (and therefore
G\I/H is T0 ) by Proposition 2.2, as the action of G on I/H is k-algebraic. Now
the T0 property of G\I/L follows from Lemma 3.6 for the compact group L/H
acting continuously on the T0 -space G\I/H.
Lemma 3.8. Let J be a countable set and (Li)_{i∈J} a family of almost algebraic subgroups of G. Then the diagonal action of G on ∏_{i∈J} G/Li is almost algebraic.
Proof. Stabilizers of points in ∏_{i∈J} G/Li are intersections of almost algebraic subgroups of G. Hence by Lemma 3.1 they are almost algebraic. So we just have to prove that G\(∏_{i∈J} G/Li) is T0.
For i ∈ J, let Hi be an algebraic subgroup of G such that Hi = Hi(k) is a cocompact normal subgroup of Li . Consider V = ∏_{i∈J} G/Hi . We first prove that the topology on G\V is T0, by proving that orbit maps are homeomorphisms (Theorem 2.1). Let (hiHi)_{i∈J} be an element of V and (gn) be a sequence of elements of G such that gn · (hiHi) converges to (hiHi) in V .
Let H = ⋂_{i∈J} hiHihi⁻¹ = Stab((hiHi)_{i∈J}). We have to prove that gn converges to e in G/H (see Remark 3.4). By Noetherianity, there exists a finite J0 ⊂ J such that H = ⋂_{i∈J0} hiHihi⁻¹. Set V0 = ∏_{i∈J0} G/Hi . We see that, in V0, gn · (hiHi)_{i∈J0} converges to (hiHi)_{i∈J0}. By Proposition 2.2, it follows that gn converges to the identity in G/H.
Now let K be the compact group ∏_{i∈J} Li/Hi . The group K acts also continuously on V via the formula (liHi) · (giHi) = (gili⁻¹Hi) and this action commutes with the action of G. Thus we can apply Lemma 3.6 to K acting on G\V and get that the space of orbits for the G-action on V /K ≃ ∏_{i∈J} G/Li is T0, as desired.
Our main goal in this subsection is proving the following theorem, which is an
essential part of our main theorem, Theorem 1.7.
Theorem 3.9. Let V be a Polish space with an almost algebraic action of G.
Then stabilizers of probability measures on V are almost algebraic subgroups of G.
We first restate and prove Proposition 1.9, discussed in the introduction.
Proposition 3.10. Fix a closed subgroup L < G. Then there exists a k-subgroup
H0 < G which is normalized by L such that L has a precompact image in the Polish
group (NG (H0 )/H0 )(k) and such that for every k-G-variety V, any L-invariant
finite measure on V(k) is supported on the subvariety of H0 -fixed points.
Proof. Replacing G by the Zariski closure of L, we assume that L is Zariski-dense
in G and consider the collection
{H < G | H is a k-algebraic subgroup, Prob((G/H)(k))^L ≠ ∅}.
By the Noetherian property of G there exists a minimal element H0 in this collection. We let µ0 be a corresponding L-invariant measure on G/H0 (k).
We first claim that H0 is normal in G. Assuming not, we let N ⪇ G be the normalizer of H0 and consider the set
U = {(xH0, yH0) | y⁻¹x ∉ N} ⊂ G/H0 × G/H0 .
This set is a non-empty Zariski-open set which is invariant under the diagonal G-action, as its complement is the preimage of the diagonal under the natural map
G/H0 × G/H0 → G/N × G/N. Since the support of µ0 × µ0 in G/H0 × G/H0 is
invariant under L×L which is Zariski-dense in G×G we get that (µ0 ×µ0 )(U(k)) 6=
0. It follows from Corollary 2.14 that there exist u ∈ U(k) and an L-invariant
finite measure on G/ StabG (u) ⊂ (G/ StabG (u))(k). By the definition of U we
get a contradiction to the minimality of H0 , as point stabilizers in U are properly
contained in conjugates of H0 . This proves that H0 is normal in G.
Next we let V be a k-G-variety and µ be an L-invariant measure on V(k). We argue to show that µ is supported on V^{H0} ∩ V(k). Indeed, assume not. Let V0 be the Zariski-closure of V(k) ∩ V^{H0}, and V′0 = V − V0 . Then we see that V0 is defined over k [Bor91, AG, 14.4]. Furthermore, H0 acts on V0 trivially, so that we have V0(k) = V(k) ∩ V^{H0}. Hence by assumption we get that µ(V′0(k)) > 0. Replacing V by V′0 and restricting and normalizing the measure, we may and shall assume that V^{H0} ∩ V(k) = ∅.
We consider the variety G/H0 × V as a k-G-variety. The measure µ0 × µ is
an L-invariant measure on (G/H0 × V)(k). It follows from Corollary 2.14 that
there exists u ∈ (G/H0 × V)(k) and an L-invariant measure on G/ StabG (u). By
Proposition 2.2 there exist a k-algebraic subgroup H < G with H = H(k) = StabG(u) and an orbit map G/H → Gu inducing a homeomorphism G/H → G/ StabG(u). Thus we obtain an L-invariant probability measure on G/H(k).
Now, H is contained in some conjugate gH0 g −1 , for some g ∈ G. Hence we get
that g −1 Hg < H0 is such that G/g −1 Hg has an L-invariant probability measure.
By minimality, this implies that g −1 Hg = H0 , hence by normality of H0 , H = H0 .
Therefore u belongs to V(k) ∩ V^{H0}, which was assumed to be empty. Hence we get a contradiction. This proves that µ is supported on V^{H0}.
We set S = (G/H0 )(k) and let T be the closure of the image of L in S.
We are left to show that T is compact. S is a Polish group and T is a closed
subgroup. The quotient topology on T \S is Hausdorff, and in particular T0 . The
measure µ0 is an L-invariant finite measure on S, hence it is also T -invariant.
Substituting S = V and T = G = L in Corollary 2.14 we find a finite measure µ1
on S which is supported on a unique T -coset, T s. The measure (Rs )∗ µ1 , given
by pushing µ1 by the right translation by s−1 is then a T -invariant probability
measure on T . It is a well-known result due to A. Weil (see [Oxt46] where the result
is attributed to Ulam) that a Polish group that admits an invariant measure class is
locally compact, and a locally compact group that admits an invariant probability
measure is compact. Thus T is indeed compact.
Corollary 3.11. Fix a k-G-algebraic variety V, and set V = V(k). Let µ ∈
Prob(V ). Then Stab(µ) is almost algebraic.
Proof. Let L = Stab(µ). We may and shall assume L to be Zariski-dense in G,
and we can find H0 as in Proposition 1.9. We know that µ is supported on the set V^{H0}, thus H0 = H0(k) < L. Since G/H0 is acting on V^{H0} ∩ V(k) and the stabilizer of µ is closed in G/H0, we conclude that L has a closed image. We know
that the image of L is precompact, thus it is actually compact, and we conclude
that L is almost algebraic.
Lemma 3.12. Let L < G be an almost algebraic group, with H = H(k) a normal
cocompact algebraic subgroup of L. Then there is a G-equivariant continuous map
φ : Prob(G/L) → Prob(G/H). Furthermore, we have, for every µ ∈ Prob(G/L),
Stab(µ) = Stab(φ(µ)).
Proof. Let λ be a Haar probability measure on L/H. For a continuous bounded function f on G/H let f̄ be the continuous bounded function on G/L defined by f̄(gL) = ∫_{L/H} f(gh) dλ(h), and finally set φ(µ)(f ) = µ(f̄).
Then it is clear that φ is equivariant, and we deduce that Stab(µ) ⊂ Stab(φ(µ)).
In the other direction, we note that if π : G/H → G/L is the projection, we have
π∗ (φ(µ)) = µ. Hence the other inclusion is also clear.
To check the continuity, let µn → µ ∈ Prob(G/L), and take f a continuous bounded function on G/H. Then φ(µn)(f ) = µn(f̄) → µ(f̄) = φ(µ)(f ). Hence φ(µn) converges to φ(µ).
Proof of Theorem 3.9. Choose µ ∈ Prob(V ) and denote L = StabG(µ) and H = FixG(supp(µ)). Set U = G\V , and let ν = p∗µ, where p : V → U is the projection. Note that p is a Polish fibration. By Theorem 2.11, L is equal to the stabilizer of an element f ∈ L0_{p•}(U, ProbU(V )). By Lemma 2.13 there exists a ν-full measure set U1 ⊂ U such that L = ⋂_{u∈U1} Stab(f (u)). For a fixed u ∈ U1, f (u) is a measure on a G-orbit in V which we identify with G/L′ for some almost algebraic subgroup L′ < G. Let H′ < G be a k-algebraic subgroup such that H′ = H′(k) is a cocompact normal subgroup of L′. By Lemma 3.12, Stab(f (u)) is also the stabilizer of a probability measure on G/H′ ⊂ (G/H′)(k). By Corollary 3.11, it follows that Stab(f (u)) is almost algebraic. We conclude that L is almost algebraic by Lemma 3.1.
3.3 Separating orbits in the space of probability measures
In this subsection, we prove the following theorem.
Theorem 3.13. Let L < G be an almost algebraic subgroup. Then the action of
G on Prob(G/L) is almost algebraic.
The proof of Theorem 3.13 consists of several steps, proving particular cases
of the theorem, each of them using the previous one. First we start with the case
when L = G (Lemma 3.14). Then we treat the case when L is a normal algebraic
subgroup of G (Lemma 3.15). The main step is then to deduce the theorem when
L is any algebraic subgroup of G (Proposition 3.20), before concluding with the
general case.
Lemma 3.14. The G-action on Prob(G) is almost algebraic.
Proof. The regular action of G on itself is proper, so by Lemma 2.7 it follows that
the action of G on Prob(G) is proper. Any proper action is almost algebraic.
Lemma 3.15. Let H < G be a normal k-algebraic subgroup. Then the G-action
on Prob((G/H)(k)) is almost algebraic.
Proof. Denoting I = G/H and I = I(k), we know that the I-action on Prob(I)
is almost algebraic (Lemma 3.14). Since G/H is a subgroup of I, G stabilizes
each I-orbit. It is thus enough to show that G acts almost algebraically on each
I-orbit. We know that such an orbit is of the form I/L where L is almost algebraic
(Theorem 3.9), so this follows from Example 3.5.
An essential technical tool for proving Theorem 3.13 and Theorem 1.7 is given
by the following proposition.
Proposition 3.16. Let V be a Polish space, with a continuous action of G. Assume that
• The quotient topology on G\V is T0 , and
• For any v ∈ V , the action of G on Prob(Gv) is almost algebraic.
Then the quotient topology on G\ Prob(V ) is T0 .
The proposition will directly follow from the following lemma.
Lemma 3.17. Let p : V → U be a Polish fibration with an action of G, and let
ν be a probability measure on U . Assume that the action of G on U is trivial and
that the action of G on Prob(p−1 (u)) is almost algebraic for almost every u ∈ U .
Let P = {µ ∈ Prob(V ) | p∗ µ = ν}. Then the topology on G\P is T0 .
This proof is similar to the proof presented in [Zim84, Proof of Proposition
3.3.1]; see also [AB94, Lemma 6.7].
Proof. The set P is Polish, as a closed subset of Prob(V ). By Theorem 2.1 we
need to show that the orbit maps are homeomorphisms. By Theorem 2.11, P is
equivariantly homeomorphic to L0p• (U, ProbU (V )).
Fixing f ∈ L0p• (U, ProbU (V )) and letting gn ∈ G be such that gn f → f , we
will show that gn converges to the identity in G/ Stab(f ) by proving that every
subsequence of (gn ) has a sub-subsequence which converges to the identity in
G/ Stab(f ). Doing so, we are free to replace (gn ) by any subsequence. Relying on
Lemma 2.10, we replace (gn ) by a subsequence such that gn f (u) → f (u) for every
u in some ν-full subset U0 ⊂ U . Let U1 ⊂ U0 be a full measure subset such that
the action of G on p−1 (u) is almost algebraic for every u ∈ U1 .
Let u ∈ U1 . By definition, we know that f (u) ∈ Prob(p−1 (u)) and that the
action of G on Prob(p−1 (u)) is almost algebraic. By Proposition 2.2, the orbit
map G/ Stab(f (u)) → Gf (u) is a homeomorphism thus gn f (u) → f (u) implies
that gn converges to the identity in G/ Stab(f (u)). By Lemma 2.13, there is also a full measure subset U2, that we may and do assume to be contained in U1, such that
Stab(f ) = ⋂_{u∈U2} Stab(f (u))
and since G is second countable, one can find a countable subset U3 ⊂ U2 such that
Stab(f ) = ⋂_{u∈U3} Stab(f (u)).
By assumption, for every u ∈ U3, the group Stab(f (u)) is almost algebraic. Hence by Lemma 3.8, the action of G on ∏_{u∈U3} G/ Stab(f (u)) is almost algebraic. In particular, we see that gn converges to e in G/ Stab(f ).
Proof of Proposition 3.16. Let U = G\V and p : V → U be the projection. Consider the G-invariant continuous map p∗ : Prob(V ) → Prob(U ). Clearly the fibers of p∗ are closed and G-invariant, so it is enough to prove that for a given ν ∈ Prob(U ), the quotient space G\p∗⁻¹({ν}) has a T0 topology. This is precisely Lemma 3.17.
Let π : V → V′ be a continuous G-map between Polish spaces, µ ∈ Prob(V ) and ν = π∗µ. Then ν has a unique decomposition ν = νc + νd where νc and νd are the continuous and discrete parts of ν. Moreover νd can be written ∑_{λ∈Λ} λ ∑_{f∈Fλ} δf , where Λ = {λ ∈ R+ | ∃u ∈ V′, π∗µ({u}) = λ} and Fλ = {u ∈ V′ | ν({u}) = λ}. Defining µλ to be the restriction of µ to π⁻¹(Fλ) and µc = µ − ∑_{λ∈Λ} µλ , we have a unique decomposition µ = µc + ∑_{λ∈Λ} µλ , where π∗(µc) is non-atomic and each π∗(µλ) is a finitely supported, uniform measure of the form λ ∑_{f∈Fλ} δf .
Lemma 3.18. Let π : V → V′ be a continuous G-map between Polish spaces and µ ∈ Prob(V ). Using the above decomposition, we have Stab(µ) = Stab(µc) ∩ (⋂_λ Stab(µλ)). If gnµ → µ then gnµc → µc and for each λ ∈ Λ, gnµλ → µλ .
Proof. The statement about Stab(µ) is straightforward from the uniqueness of the
decomposition of µ. Let (gn ) be a sequence such that gn µ → µ. Once again, we
use a sub-subsequence argument: we prove that any subsequence of (gn ) contains
a sub-subsequence such that gn µλ → µλ for every λ. Hence we start by replacing
(gn ) by an arbitrary subsequence.
Observe that gnµ → µ implies gnν → ν because π∗ : Prob(V ) → Prob(V′) is continuous. Let K′ be a compact metrizable space in which V′ is continuously embedded as a Gδ-subset (see [Kec95, Theorem 4.14]). Then Prob(V′) embeds as a Gδ-subset in Prob(K′) as well [Kec95, Proof of Theorem 17.23]. We begin with the following observation. Assume νn is a sequence of probability measures converging to ν ∈ Prob(V′) and νn decomposes as δ_{un} + ν′n with un ∈ V′ and ν′n ∈ Prob(V′). Up to extraction un converges to some x ∈ K′ and thus ν({x}) > 0, which implies that x ∈ V′.
Let λ1 be the maximum of Λ. The above observation implies that up to extraction we may assume that for any f ∈ Fλ1, gnf converges to some l(f ) ∈ V′. Since gnν → ν, we have that l(f ) ∈ Fλ1, thus gnνλ1 converges to νλ1, where νλ1 = π∗(µλ1). An induction on Λ (countable and well ordered with the reverse order of R) shows that (after extraction) gnνλ → νλ for any λ ∈ Λ.
Once again, we embed V in some compact metrizable space K. Fix λ ∈ Λ and let µ′ be an adherent point of (gnµλ) in Prob(K). As π is G-equivariant, we have that π∗gnµλ = gnπ∗µλ = gnνλ, which converges to νλ. Hence π∗µ′ = νλ. Furthermore, we also see that µ′ is supported on π⁻¹(Fλ), hence µ′ ∈ Prob(V ).
The same argument proves that µ − µ′, which is an adherent point of gn(µ − µλ), is supported on V \ π⁻¹(Fλ).
As µ can be written uniquely as a sum of a measure supported on π⁻¹(Fλ) and a measure supported on V \ π⁻¹(Fλ), we see, writing µ = (µ − µ′) + µ′ = (µ − µλ) + µλ , that necessarily µ′ = µλ . This concludes the proof since µc = µ − ∑_{λ∈Λ} µλ .
Lemma 3.19. Let H < G be a k-algebraic subgroup. Set N = NG(H), H = H(k) and N = N(k). Let V = G/H, V′ = G/N, V = V(k) and V′ = V′(k). Consider the map π : V → V′. Let F ⊂ V′ be a finite set, ν = (1/|F|) ∑_{f∈F} δf and µ ∈ Prob(V ) be a measure with π∗µ = ν. Let (gi) be a sequence with giµ → µ. Then gi → e ∈ G/ Stab(µ).
Proof. Denote m = |F|. We know that (V′)^m/ Sym(m) is an algebraic variety, hence by Proposition 2.2, every G-orbit in (V′)^m/ Sym(m) is locally closed. It follows in particular that gi → e in G/ Stab(F ).
Again, it is enough to show that every subsequence of (gi) contains a subsequence which tends to e modulo Stab(µ). We start by extracting an arbitrary subsequence of (gi).
Let us number f1, f2, . . . , fm the elements of F and denote F′ = (f1, . . . , fm) ∈ (V′)^m. Since gi converges to e in G/ Stab(F ), it follows that, passing to a subsequence, there exists σ ∈ Sym(m) such that giF′ tends to σ(F′) = (fσ(1), . . . , fσ(m)). This means that σ(F′) lies in the closure of GF′, and thus, applying the coordinate permutation σ (which commutes with the diagonal G-action) repeatedly, cl(GF′) ⊃ cl(Gσ(F′)) ⊃ · · · ⊃ cl(Gσ^n(F′)) = cl(GF′), where n is the order of σ and cl denotes the closure. In particular cl(GF′) = cl(Gσ(F′)) and since orbits are locally closed we have that GF′ = Gσ(F′).
This shows that there exists g ∈ Stab(F ) such that gF′ = σ(F′). Hence we have giF′ → gF′, and by almost algebraicity of the action on (V′)^m it follows that gi tends to g modulo Stab(F′) = ⋂_{f∈F} Stab(f ).
Let us fix some notation. For f ∈ F we denote by µf the restriction of µ to π⁻¹({f}), we fix f̃ ∈ G such that f̃N = f and we denote by Hf ≤ G the conjugate of H by f̃. Observe that f̃Nf̃⁻¹ = StabG(f ), Hf ◁ StabG(f ) and π⁻¹({f}) ≃ StabG(f )/Hf , where π : G/H → G/N is the projection and StabG(f ) is the stabilizer of f under the action of G on G/N. We also denote µ′f = g⁻¹µ_{σ(f)} and g′i = g⁻¹gi . Since g′i → e ∈ G/⋂_{f∈F} Stab(f ) there exist ni ∈ ⋂_{f∈F} Stab(f ) such that g′i ni⁻¹ converges to e (in G). We observe that niµf = ni(g′i)⁻¹g′iµf . As g′iµf tends to µ′f and ni(g′i)⁻¹ tends to e, we have that niµf converges to µ′f . Those measures are supported on π⁻¹({f}) ≃ (StabG(f )/Hf)(k). By Lemma 3.15, Stab(f ) acts almost algebraically on Prob((StabG(f )/Hf)(k)). So we have that ni tends to some n in Stab(f )/ Stab(µf).
We conclude that g′i = g′i ni⁻¹ ni tends to n in G/ Stab(µf). Arguing similarly for every f , it follows that gi tends to gn in G/⋂_{f∈F} Stab(µf). Hence (gi) converges also in G/ Stab(µ), since ⋂_{f∈F} Stab(µf) ≤ Stab(µ). Let h be the limit point of (gi) modulo Stab(µ). Then we have that giµ converges to hµ by continuity of the action. Hence h ∈ Stab(µ), meaning that h = e modulo Stab(µ). In other words, gi converges to e in G/ Stab(µ).
Proposition 3.20. Let H < G be a k-algebraic subgroup and set H = H(k).
Then the action of G on Prob(G/H) is almost algebraic.
Proof. Assume the proposition fails for an algebraic subgroup H. We also assume, as we may, that H is minimal in the collection of k-subgroups of G with the property that the G-action on Prob(G/H) is not almost algebraic. By Theorem 3.9, G acts on Prob(G/H) with almost algebraic stabilizers. Hence we have to show that for every measure µ ∈ Prob(G/H) and every sequence gn with gnµ → µ, gn tends to e in G/ Stab(µ) (Remark 3.4). We fix such a measure µ and a sequence gn . We will achieve a contradiction by showing that gn does tend to e in G/ Stab(µ).
We set N = NG(H), N = N(k), V = G/H, V′ = G/N, V = V(k) and V′ = V′(k). We consider the natural inclusion G/H ⊂ V and view µ as a measure on V . We consider the projection map π : V → V′ and set ν = π∗µ. We use the notation introduced in the discussion before Lemma 3.18. The lemma gives: Stab(µ) = Stab(µc) ∩ (⋂_{λ∈Λ} Stab(µλ)), where Λ is a countable subset of [0, 1], gnµc → µc and for
each λ ∈ Λ, gnµλ → µλ . By Lemma 3.19, for each λ ∈ Λ, gn → e ∈ G/ Stab(µλ). Assume for the moment that also gn → e ∈ G/ Stab(µc). Since by Theorem 3.9 the groups Stab(µλ) and Stab(µc) are almost algebraic, we will get by Lemma 3.8 that the action of G on G/ Stab(µc) × ∏_λ G/ Stab(µλ) is almost algebraic. Hence,
gn → e ∈ G/(Stab(µc) ∩ (⋂_λ Stab(µλ))) = G/ Stab(µ),
achieving our desired contradiction. We are thus left to show that indeed gn → e ∈ G/ Stab(µc).
For the rest of the proof we will assume as we may µ = µc , that is ν ∈ Prob(V 0 )
is atom-free. We consider the measure µ × µ ∈ Prob(V × V ) and the subset
U = {(xH, yH) | y⁻¹x ∉ N} ⊂ G/H × G/H = V × V
defined and discussed in the proof of Proposition 1.9. We set U = U(k). Note
that the diagonal in V 0 × V 0 is ν × ν-null as ν is atom-free, thus U is µ × µ-full.
We view as we may µ × µ as a probability measure on U .
We now consider the G-action on U and claim that the G-orbits are locally
closed and for every u ∈ U , G acts almost algebraically on Prob(Gu). The fact that
the G-orbits are locally closed follows from Proposition 2.2, as U is a k-subvariety
of V. Fix now a point u = (xH, yH) ∈ U for some x, y ∈ G, and consider the G-action on Prob(Gu). By the definition of U, H ∩ H^{y⁻¹x} ⪇ H, thus by the minimality of H the G-action on Prob(G/(H ∩ H^{y⁻¹x})) ≃ Prob(G/(H^x ∩ H^y)) is almost algebraic. Since by Proposition 2.2 G/(H^x ∩ H^y) is equivariantly homeomorphic to Gu we conclude that indeed, G acts almost algebraically on Prob(Gu), and the claim is proved.
By Proposition 3.16, we conclude that G acts on Prob(U ) almost algebraically.
Hence Effros’ Theorem 2.1 implies that gn → e in G/ Stab(µ × µ) as gn (µ × µ) →
µ × µ. Observing that Stab(µ × µ) = Stab(µ), the proof is complete.
Proof of Theorem 3.13. By Theorem 3.9 we know that the point stabilizers in
Prob(G/L) are almost algebraic. We are left to show that for every µ ∈ Prob(G/L),
for every sequence gn ∈ G satisfying gn µ → µ we have gn → e modulo Stab(µ)
(see Remark 3.4). Fix µ ∈ Prob(G/L) and a sequence gn ∈ G satisfying gn µ → µ.
Let H < G be a k-algebraic subgroup with H = H(k) normal and cocompact in L, and recall that by Lemma 3.12 we can find a G-equivariant continous map φ : Prob(G/L) → Prob(G/H) such that Stab(µ) = Stab(φ(µ)). We
get that gn φ(µ) → φ(µ). By Proposition 3.20, the G-action on Prob(G/H) is
almost algebraic, thus gn → e modulo Stab(φ(µ)). This finishes the proof, as
Stab(µ) = Stab(φ(µ)).
3.4
Proof of Theorem 1.7
For the convenience of the reader we restate Theorem 1.7.
Theorem 3.21. If the action of G on V is almost algebraic then the action of G
on Prob(V ) is almost algebraic as well.
21
Proof. By Theorem 3.9, we know that the G-stabilizers in Prob(V ) are almost
algebraic. We need to show that the quotient topology on G\ Prob(V ) is T0 .
By Proposition 3.16, it is enough to check that the quotient topology on G\V is
T0 , which is guaranteed by the assumption that the action of G on V is almost
algebraic, and, as we will see, that for any v ∈ V , the action of G on Prob(Gv) is
almost algebraic. We note that by Effros theorem (Theorem 2.1), the orbit Gv is
equivariantly homeomorphic to the coset space G/ StabG (v), and thus Prob(Gv) '
Prob(G/ StabG (v)). Since StabG (v) is an almost algebraic subgroup of G, the fact
that the G-action on Prob(Gv) is almost algebraic now follows from Theorem 3.13.
4
On bounded subgroups
In this section, we essentially retain the setup 1.1 & 1.3: we fix a complete (k, | · |)
valued field and a k-algebraic group G. Nevertheless there is no need for us to
assume that (k, | · |) is separable, so we will refrain from doing so.
Definition 4.1. A subset of k is called bounded if its image under | · | is bounded
in R. For a k-variety V, a subset of V(k) is called bounded if its image by any
regular function is bounded in k.
Remark 4.2. Note that the collection of bounded sets on a k-variety forms a
bornology.
Remark 4.3. For a k-variety V it is clear that a subset of V(k) is bounded if and
only if its intersection with every k-affine open set is bounded, so in what follows
we will lose nothing by considering exclusively k-affine varieties. We will do so.
Remark 4.4. Note that if (k 0 , | · |0 ) is a field extension of k endowed with an
absolute value extension of | · | and V is a k-variety, we may regard V(k) as a
subset V(k 0 ) and, as one easily checks, a subset of V(k) is k-bounded if and only
if it is k 0 -bounded. Thus it causes no loss of generality assuming k is algebraically
closed since b
k is so. Nevertheless, we will not assume that.
It is clear that every k-regular morphism of k-varieties is a bounded map in
the sense that the image of a bounded set is bounded. For a k-closed immersion
of k-varieties f : U → V also the converse is true: a subset of U(k) is bounded
if and only if its image is bounded, as f ∗ : k[V] → k[U] is surjective. This is a
special case of the following lemma.
Lemma 4.5. For a finite k-morphism f : U → V a subset of U(k) is bounded if
and only if its image is bounded.
Proof. Assume there exists an unbounded set L in U(k) with f (L) being bounded
in V(k). Then we could find p ∈ k[U] and a sequence un ∈ L with |p(un )| → ∞.
The function p is integral over f ∗ k[V] so there exist q1 , . . . qm ∈ f ∗ k[V] with
P
m−i
pm + m
= 0. Thus,
i=1 qi p
1=
m
m
X
X
|qi (un )|
qi (un )
≤
→ 0,
i
p (un )
|pi (un )|
i=1
i=1
22
as the sequences qi (un ) are uniformly bounded. This is a contradiction.
Recall that a seminorm on a k-vector space E is a function k · k : E → [0, ∞)
satisfying
1. kαvk = |α|kvk, for α ∈ k, v ∈ E.
2. ku + vk ≤ kuk + kvk, for u, v ∈ E.
A seminorm on E is a norm if furthermore we have
3. kvk = 0 ⇔ v = 0, for v ∈ E.
Two norms on a vector space, k · k, k · k0 , are called equivalent if there exists
some C ≥ 1 such that
C −1 k · k ≤ k · k0 ≤ Ck · k.
It is a general fact that any linear map between two Hausdorff topological
(k, | · |)-vector spaces of finite dimensions is continuous [Bou87, I, §2,3 Corollary
2] and thus we get easily the following.
Theorem 4.6. All the norms on a finite dimensional k-vector space are equivalent.
Proof. It suffices to use that the identity map (E, || · ||) → (E, || · ||0 ) is continuous
and observe that every continuous linear map is bounded. The latter is an easy
exercise in case | · | is trivial, and standard if it is not.
Recall that, if (e1P
, . . . , en ) is a basis, then the norm || · ||∞ (relative to this
basis) is defined as || xi ei ||∞ = max{|xi |}.
Corollary 4.7. For a subset B ⊂ E = k n the following properties are equivalent.
1. B is a bounded set of An .
2. All elements of E ∗ are bounded on B.
3. All the coordinates of the elements of B are uniformly bounded.
4. The norm || · ||∞ is bounded on B.
5. Every norm on E is bounded on B.
6. Some norm on E is bounded on B.
Theorem 4.8. For a subgroup L of GLn (k) the following are equivalent:
1. L is bounded in GLn (k).
2. L is bounded as a subset of Mn (k).
3. L preserves a norm on k n .
4. L preserves a spanning bounded set in k n .
For a subgroup L of G = G(k) the following are equivalent:
23
1. L is bounded.
2. L preserves a norm in all k-linear representations of G.
3. L preserves a norm in some injective k-linear representation of G.
4. L preserves a spanning bounded set in some injective k-linear representation
of G.
Proof. Note that the second part of the theorem follows from the first once we
recall that any injective homomorphism of algebraic groups is a closed immersion.
We prove the equivalence of the first four conditions.
(1) ⇔ (2) : Clearly, if L is bounded in GLn (k) then it is bounded in Mn (k).
Assume L is bounded in Mn (k). Then it has a bounded image under both morphisms
ι
det
GLn →
− GLn ,→ Mn , GLn ,→ Mn −−→ A1 ,
ι
where GLn →
− GLn is the group inversion. We conclude that L has a bounded
image under the product morphism GLn → Mn ×A1 . But the latter morphism
id ⊕ det−1
is the composition of the isomorphism ι and the closed immersion GLn −−−−−−→
Mn ⊕A1 . Thus L is bounded in GLn (k).
(2) ⇔ (3) : If L is bounded in Mn (k) then, by Corollary 4.7(3) all its matrix
elements are uniformly bounded, hence for all v ∈ k n , supg∈L kgvk∞ is finite. This
expression forms an L-invariant norm on k n . On the other hand, if L preserves a
norm on k n , by the equivalence of this norm with k · k∞ , all matrix elements of L
are uniformly bounded, thus it is bounded in Mn (k).
(3) ⇔ (4) : If L preserves a norm then it preserves its unit ball which is a
bounded spanning set. If L preserves a bounded spanning set B than it also
preserves its symmetric convex hull:
( n
)
n
X
X
αi vi vi ∈ B, αi ∈ k,
|αi | ≤ 1 .
i=1
i=1
The latter is easily seen to be the unit ball of an L-invariant norm.
Note that if L is a compact subgroup of G then L is bounded, as the k-regular
functions of G are continuous on G.
Corollary 4.9. Every bounded subgroup of G admits a bi-invariant metric.
Proof. Let L be a bounded subgroup of G. Fix an injective k-linear representation
G → GL(V ) and consider L as a subset of Endk (V ). Endk (V ) is a representation
of G × G, hence admits a norm which is invariant under the bounded group L × L
by Theorem 4.8. This norm gives an L × L-invariant metric on Endk (V ) and on
its subset L.
Proposition 4.10. Assume V is an affine k-variety with a k-affine action of G.
Z
Let B ⊂ V(k) be a bounded set and denote by B its Zariski closure. Then the
Z
Z
image of StabG (B) is bounded in the k-algebraic group StabG (B )/ FixG (B ).
24
Z
Proof. Without loss of generality we may replace G by StabG (B ) and then asZ
Z
Z
sume V = B . We then may further assume G = StabG (B )/ FixG (B ). We do
so. By [Bor91, Proposition 1.12] there exists a k-embedding of V into some vector
space, which we may assume having a spanning image, equivariant with respect
to some k-representation G → GLn , which we thus may assume injective. The
proof then follows from the implication (4) =⇒ (1) in the second of equivalence
of Theorem 4.8.
Corollary 4.11. Let L < G be a bounded subgroup. Then NG (L)/ZG (L) is
bounded.
5
The space of norms and seminorms
In this section we study a compact space on which an algebraic group over a
complete valued field acts by homeomorphisms, the space of seminorms. This
space was already considered in the case when k is local, in [Wer04].
We fix a finite dimensional vector space E over k. Given two norms n, n0 on E
we denote
™
ß
n(y)n0 (x)
x,
y
∈
E
\
{0}
.
d(n, n0 ) = log sup
n0 (y)n(x)
This number is finite by the fact that n and n0 are equivalent norms. Recall
that two seminorms on E are called homothetic if they differ by a multiplicative
positive constant. The relation of being homothetic is an equivalence relation. We
denote the set consisting of all homothety classes of norms on E by I(E). Observe
that d(n, n0 ) only depends on the homothety classes of n and n0 and thus define a
function on I(E).
Lemma 5.1. The function d : I(E) × I(E) → [0, ∞) defines a metric on I(E).
The group PGL(E) acts continuously and isometrically on I(E) and the stabilizers
in PGL(E) of bounded subsets in I(E) are bounded as well.
Proof. The fact that d is a metric and PGL(E) acts by isometries on I(E) is
a straightforward verification. To prove the continuity part, it suffices to show
that the orbit map g 7→ gn is continuous for all n ∈ I(E). Fix a norm n on
E. Let (gi ) be a sequence converging to e in PGL(E). By an abuse of notation we identify gi with an element of GL(E) such that gi → e ∈ GL(E), and
also still
is n. Using that d(gi n, n) =
n denote n a norm whose homothety class o
log sup
n(gi−1 y)n(x)
n(gi−1 x)n(y)
| x, y ∈ E \ {0}, n(x), n(y) < 1 and that gi−1 converges uni-
formly to e on the unit ball of E with respect to n, we see that indeed d(gi n, n) → 0.
Let L be the stabilizer of some bounded set N ⊆ I(E). Fix v 6= 0 and identify
N with a set N 0 of norms on E satisfying n(v) = 1 for every n ∈ N 0 . The set
B = {x ∈ E | ∀n ∈ N 0 , n(x) ≤ 1} is clearly bounded in E. By Theorem 4.8, its
stabilizer L0 ∈ GL(E) is bounded, hence also its image in PGL(E), namely L.
Remark 5.2. The space I(E) actually contains the affine Bruhat-Tits building
I(E) associated to PGL(E) [Par00] and there is a metric d0 on I(E) such that
25
(I(E), d0 ) is CAT(0) —not necessarily complete. The metric d is similar to the
one considered by Goldman and Iwahori in [GI63]. The two metrics d and d0
are Lipschitz-equivalent. This can be checked first on an apartment and extended
to the whole building using that any two points actually lie in some apartment.
Thus, Lemma 5.1 and Theorem 4.8 are a reminiscence of the Bruhat-Tits fixed
point theorem.
Let S 0 (E) be the space of non-zero seminorms on E, and S(E) be its quotient
by homotheties. We endow S 0 (E) with the topology of pointwise convergence and
S(E) with the quotient topology.
Proposition 5.3. The space S(E) is compact and metrizable. The action of
PGL(E) on S(E) is continuous.
Proof. Let m be the dimension of E. Fix a basis (e1 , . . . , em ) of E. Let S1 (E) be
the set of all s ∈ S 0 (E) such that s(ei ) ≤ 1 for every 1 ≤ i ≤ d, and s(ej ) = 1 for
some j.
We first claim that the quotient map S 0 (E) → S(E) restricts to a surjection
S1 (E) → S(E). This follows from the fact that if a seminorm is zero on all the
vectors ei , then it is zero everywhere, by triangle inequality. Furthermore, the
map S1 (E) → S(E) is actually an injection. Indeed if s ∈ S1 (E) and λs ∈ S1 (E)
it is easy to conclude that λ = 1.
We now claim
Indeed, let k · k1 be the norm
P that the
Pspace S1 (E) is compact.
P
defined as k xi ei k1 = i |xi |. Let v =
xi ei . Then we see that s(v) ≤ kvk1
for every v ∈ E. So we get that S1 (E) is homeomorphic to a closed subset of
Q
v∈E [0, kvk1 ]. This proves the compactness of S1 (E) and therefore of S(E).
Note that it also proves that every element of S1 (E) is 1-Lipschitz with respect to
k · k1 .
It follows that S1 (E) is homeomorphic to S(E). The metrizability of S1 (E)
comes from the fact that S1 (E) is a closed subset of the space of continuous
functions on E, which is metrizable because E is separable.
Now, let (gn , sn ) be a sequence converging to (e, s) ∈ GL(E)×S1 (E) then gn sn
tends to s. Indeed, for every v ∈ E,
|sn (gn v)−s(v)| ≤ |sn (gn v)−sn (v)|+|sn (v)−s(v)| ≤ kgn v−vk1 +|sn (v)−s(v)| → 0.
Each non-zero seminorm s has a kernel ker(s) = {v ∈ E | s(v) = 0}, which is a
proper linear subspace of E depending only of the homothety class of s. The map
S(E) → N, s 7→ dim(ker(s)) is obviously PGL(E)-invariant. Denote by Sm (E)
the space of homothety classes of seminorms s such that dim(ker(s)) = m. Note
that S0 (E) = I(E). We denote by Grm (E) the Grassmannian of m-dimensional
linear subspaces of E. The map Sm (E) → Grm (E), s 7→ ker(s) is clearly PGL(E)equivariant. Grm (E) is the k-points of a k-algebraic variety, thus carries a Polish
topology by Proposition 2.2.
Proposition 5.4. The maps S(E) → N, s 7→ dim(ker(s)) and Sm (E) → Grm (E),
s 7→ ker(s) are measurable.
26
Proof. We first note that the space S(E) is covered by (countably many) open sets
which are homeomorphic images of sets of the form {s ∈ S 0 (E) | s(v) = 1}, for
v ∈ E, under the quotient map S 0 (E) → S(E). It is therefore enough to establish
0
that the corresponding maps S 0 (E) → N, Sm
(E) → Grm (E) are measurable
0
(where Sm (E) denotes the preimage of Sm (E)).
Fix a basis for E and a countable dense subfield k0 < k. Let E0 = E(k0 ) be
the k0 -span of the fixed basis of E. A subspace of E is said to be defined over k0
if it has a basis in E0 . E0 is a k0 -vector space and it is a countable dense subset
of E. Note that for every d, Grm (E0 ) is countable. Observe that for s ∈ S 0 (E),
dim(ker(s)) ≤ m if and only if we can find a codimension m subspace F < E which
is defined over k0 , such that s restricts to a norm on F . The latter condition is
equivalent by Theorem 4.6 to the condition that there exists n ∈ N such that for
every v ∈ F , s(v) ≥ |v|/n for some fixed norm | · |. Note that it is enough to check
this for every v ∈ F0 = F (k0 ), thus we obtain
[
[ \
{s ∈ S 0 (E) | dim(ker(s)) ≤ m} =
{s ∈ S 0 (E) | s(v) ≥ |v|/n}.
F0 ∈Grdim(E)−m (E0 ) n v∈F0
This shows that the map s 7→ dim(ker(s)) is measurable.
0
(E) → Grm (E) is measurable, we make two
In order to prove that the map Sm
observations. We first observe that the topologies of pointwise convergence and
uniform convergence give the same Borel structure on S 0 (E). In fact, for every
separable topological space X, the pointwise and uniform convergence topologies
on Cb (X) give the same Borel structure (as uniform balls are easily seen to be
Borel for the pointwise convergence topology), and S 0 (E) could be identified with
a closed (for both topologies) subspace of bounded continuous functions on the
unit ball of E. Our second observation is that we may identify Grm (E) with a
subset of the space of closed subsets of the unit ball of E. Endowing it with
the Hausdorff metric topology, we get a PGL(E)-invariant Polish topology on
Grm (E). Since the Polish group PGL(E) acts transitively on Grm (E), by Effros
Lemma [Eff65, Lemma 2.5] the quotient topology is the unique PGL(E)-invariant
Polish topology on this space, thus the topology on Grm (E) given by the Hausdorff
metric coincides with the one discussed in Proposition 2.2.
The proof is now complete, observing further that with respect to the uniform
0
convergence topology on Sm
(E) and the Hausdorff metric topology on Grm (E),
the map s 7→ ker(s) is in fact continuous (moreover, it is C-Lipschitz on {s ∈
0
Sm
(E) | s is C-Lipschitz}).
6
Existence of algebraic representations
This section is devoted to the proof of Theorem 1.16, which we restate below. The
reader who is unfamiliar with the notion of measurable cocycles and amenable
actions might consult with profit Zimmer’s book [Zim84, Chapter 4]. The following
theorem provides a so-called algebraic representation of the space R, thus allowing
to start the machinery developed in [BF13] and prove cocycle super-rigidity for
the group G.
27
Theorem 6.1. Let R be a locally compact group and Y an ergodic, amenable
Lebesgue R-space. Let (k, | · |) be a valued field. Assume that as a metric space k is
complete and separable. Let G be a simple k-algebraic group. Let f : R×Y → G(k)
be a measurable cocycle.
Then either there exists a k-algebraic subgroup H
G and an f -equivariant
measurable map φ : Y → G/H(k), or there exists a complete and separable metric space V on which G acts by isometries with bounded stabilizers and an f equivariant measurable map φ0 : Y → V .
Furthermore, in case k is a local field the G-action on V is proper and in case
k = R and G is non-compact the first alternative always occurs.
Proof. We first note that the isogeny G → G, where G is the adjoint group
associated to G, is a finite morphism. Thus, by Lemma 4.5 we may assume that
G is an adjoint group. We do so. By [Bor91, Proposition 1.10] we can find a
k-closed immersion from G into some GLn . By the fact that G is simple, we may
assume that this representation is irreducible. By the fact that G is adjoint, the
associated morphism G → PGLn is a closed immersion as well. We will denote for
convenience E = k n . Via this representation, G acts continuously and faithfully
on the metric space of homothety classes of norms, I(E), and on the compact
space of homothety classes of seminorms, S(E), introduced in §5.
By the amenability of the action of R on Y there exists a f -map, that is a
f -equivariant map, φ : Y → Prob(S(E)), which we now fix. By Proposition 5.4,
there is a measurable partition S(E) = ∪n−1
d=0 Sd (E), given by the dimension of
the kernels of the seminorms. For a given d, the function Y → [0, 1] given by
y 7→ φ(y)(Sd (E)) is R-invariant, hence almost everywherePequal to some constant,
by ergodicity. We denote this constant by αd . Note that n−1
d=0 αd = 1. We choose
d such that αd > 0 and define
ψ : Y → Prob(Sd (E)),
ψ(y) =
1
φ(y)|Sd (E) .
αd
Note that ψ is a f -map. We will consider two cases: either d > 0 or d = 0. This is
a first bifurcation leading to the two alternatives in the statement of the theorem.
We first consider the case d > 0. We use the map Sd (E) → Grd (E) discussed
in Proposition 5.4 to obtain the push forward map Prob(Sd (E)) → Prob(Grd (E)).
By post-composition, we obtain a f -map Ψ : Y → Prob(Grd (E)). By Theorem 1.7 the action of G on Prob(Grd (E)) is almost algebraic (as the action of
G on Grd (E) is almost algebraic by Proposition 2.2), and the quotient topology
on G\ Prob(Grd (E)) is T0 . We claim that there exists µ ∈ Prob(Grd (E)) such
that the set Ψ−1 (Gµ) has full measure in Y . The standard argument is similar to the prof of Proposition 1.12: for a countable basis Bi for the topology of
G\ Prob(Grd (E)), the set
\
\
{Bi | Ψ−1 (Bi ) is full in Y } ∩ {Bic | Ψ−1 (Bi ) is null in Y }
is clearly a singleton, whose preimage is of full measure in Y . Let µ be a preimage
of this singleton in Prob(Grd (E)).
28
By the fact that G acts almost algebraically on Prob(Grd (E)), we may identify
Gµ with a coset space G/L, for some almost algebraic subgroup L = StabG (µ) <
G, and view Ψ as an f -map from Y to G/L. By Proposition 1.9, there exists
a k-subgroup H0 < G which is normalized by L such that L has a precompact
image in the Polish group (NG (H0 )/H0 )(k) and such that µ is supported on the
subvariety of H0 fixed points in Grd (E). Note that by the irreducibility of the
representation G → GLn we have no G-fixed points in Grd (E), thus H0 G.
Assume moreover that H0 6= {e} and let H be the Zariski-closure of L. By
[Bor91, Theorem AG14.4], H is a k-subgroup of G. By the simplicity of G, H G,
as H normalizes H0 . Post-composing the f -map Ψ with the map G/L → G/H(k)
we obtain a k-algebraic subgroup H G and an f -equivariant measurable map
φ : Y → G/H(k), as desired.
Assume now H0 = {e}. In that case L is compact, and in particular bounded
in G. It follows by Theorem 4.8 that L fixes a norm on E. Thus we may map
the coset space G/L G-equivariantly into S0 (E) = I(E). Using the δ-measure
embedding I(E) ,→ Prob(I(E)) and obtain a new f -map Y → Prob(I(E)). We
are then reduced to the case d = 0, to be discussed below.
We consider now the case d = 0, that is we assume having an f -map Y →
Prob(I(E)). We set V = Prob(I(E)). By Lemma 5.1, G acts isometrically and
with bounded stabilizers on I(E). By Lemma 2.6, G acts isometrically on V .
Let us check that stabilizers are bounded. Fix µ ∈ Prob(I(E)), and let L be its
stabilizer in G. Since I(E) is Polish there is a ball B of I(E) such that µ(B) > 1/2.
It follows that for any g ∈ L, gB intersects B. Thus the set LB is bounded in I(E),
and by Lemma 5.1 its stabilizer is bounded in G. It follows that L is bounded.
Thus we have found an f -map from Y to a complete and separable metric space
V on which G acts by isometries with bounded stabilizers as desired.
Acknowledgement
U.B was supported in part by the ERC grant 306706. B.D. is supported in part
by Lorraine Region and Lorraine University. B.D & J.L. are supported in part by
ANR grant ANR-14-CE25-0004 GAMME.
References
[AB94]
N. A’Campo & M. Burger – “Réseaux arithmétiques et commensurateur d’après G. A. Margulis”, Invent. Math. 116 (1994), no. 1-3,
p. 1–25.
[BF13]
U. Bader & A. Furman – “Algebraic representations of ergodic
actions and super-rigidity”, (2013).
[BGR84]
S. Bosch, U. Güntzer & R. Remmert – Non-Archimedean analysis, Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences], vol. 261, Springer-Verlag, Berlin,
1984, A systematic approach to rigid analytic geometry.
29
[Bil99]
P. Billingsley – Convergence of probability measures, second éd.,
Wiley Series in Probability and Statistics: Probability and Statistics,
John Wiley & Sons, Inc., New York, 1999, A Wiley-Interscience Publication.
[Bor91]
A. Borel – Linear algebraic groups, second éd., Graduate Texts in
Mathematics, vol. 126, Springer-Verlag, New York, 1991.
[Bou87]
N. Bourbaki – Topological vector spaces. Chapters 1–5, Elements of
Mathematics (Berlin), Springer-Verlag, Berlin, 1987, Translated from
the French by H. G. Eggleston and S. Madan.
[BZ76]
I. N. Bernšteı̆n & A. V. Zelevinskiı̆ – “Representations of the
group GL(n, F ), where F is a local non-Archimedean field”, Uspehi
Mat. Nauk 31 (1976), no. 3(189), p. 5–70.
[Eff65]
E. G. Effros – “Transformation groups and C ∗ -algebras”, Ann. of
Math. (2) 81 (1965), p. 38–55.
[EP05]
A. J. Engler & A. Prestel – Valued fields, Springer Monographs
in Mathematics, Springer-Verlag, Berlin, 2005.
[Fol99]
G. B. Folland – Real analysis, second éd., Pure and Applied Mathematics (New York), John Wiley & Sons, Inc., New York, 1999, Modern
techniques and their applications, A Wiley-Interscience Publication.
[GGMB13] O. Gabber, P. Gille & L. Moret-Bailly – “Fibrés principaux
pour les corps valués henséliens”, (2013).
[GI63]
O. Goldman & N. Iwahori – “The space of p-adic norms”, Acta
Math. 109 (1963), p. 137–177.
[hmb]
L. M.-B. (http://mathoverflow.net/users/7666/laurentmoret bailly) – “When is a valued field second-countable?”, MathOverflow, URL:http://mathoverflow.net/q/96999 (version: 2012-0515).
[Kec95]
A. S. Kechris – Classical descriptive set theory, Graduate Texts in
Mathematics, vol. 156, Springer-Verlag, New York, 1995.
[Oxt46]
J. C. Oxtoby – “Invariant measures in groups which are not locally
compact”, Trans. Amer. Math. Soc. 60 (1946), p. 215–237.
[Par00]
A. Parreau – “Immeubles affines: construction par les normes et
étude des isométries”, in Crystallographic groups and their generalizations (Kortrijk, 1999), Contemp. Math., vol. 262, Amer. Math. Soc.,
Providence, RI, 2000, p. 263–302.
[Poo93]
B. Poonen – “Maximally complete fields”, Enseign. Math. (2) 39
(1993), no. 1-2, p. 87–106.
30
[Ser06]
J.-P. Serre – Lie algebras and Lie groups, Lecture Notes in Mathematics, vol. 1500, Springer-Verlag, Berlin, 2006, 1964 lectures given
at Harvard University, Corrected fifth printing of the second (1992)
edition.
[Sha99]
Y. Shalom – “Invariant measures for algebraic actions, Zariski dense
subgroups and Kazhdan’s property (T)”, Trans. Amer. Math. Soc. 351
(1999), no. 8, p. 3387–3412.
[Sim12]
D. Simmons – “Conditional measures and conditional expectation;
Rohlin’s disintegration theorem”, Discrete Contin. Dyn. Syst. 32
(2012), no. 7, p. 2565–2582.
[VGO90]
È. B. Vinberg, V. V. Gorbatsevich & A. L. Onishchik – “Structure of Lie groups and Lie algebras”, in Current problems in mathematics. Fundamental directions, Vol. 41 (Russian), Itogi Nauki i Tekhniki,
Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow,
1990, p. 5–259.
[Vil09]
C. Villani – Optimal transport, Grundlehren der Mathematischen
Wissenschaften [Fundamental Principles of Mathematical Sciences],
vol. 338, Springer-Verlag, Berlin, 2009, Old and new.
[Wer04]
A. Werner – “Compactification of the Bruhat-Tits building of PGL
by seminorms”, Math. Z. 248 (2004), no. 3, p. 511–526.
[Zim84]
R. J. Zimmer – Ergodic theory and semisimple groups, Monographs
in Mathematics, vol. 81, Birkhäuser Verlag, Basel, 1984.
31
| 4 |
arXiv:1707.09854v1 [math.KT] 27 Jul 2017
The IA-congruence kernel of high rank free
Metabelian groups
David El-Chai Ben-Ezra
August 1, 2017
Abstract
The congruence subgroup problem for a finitely generated group Γ
and G ≤ Aut(Γ) asks whether the map Ĝ → Aut(Γ̂) is injective, or more
generally, what is its kernel C (G, Γ)? Here X̂ denotes the profinite completion of X. In this paper we investigate C (IA(Φn ), Φn ), where Φn is a
free metabelian group on n ≥ 4 generators, and IA(Φn ) = ker(Aut(Φn ) →
GLn (Z)).
We introduce surjective representations of IA(Φn ) onto the group
x7→1
ker(GLn−1 (Z[x±1 ]) −→ GLn−1 (Z)) which come via the classical Magnus representation of IA(Φn ). Using this representations combined with
some methods and results from Algebraic K-theory, we prove that for every n ≥ 4, C (IA(Φn ), Φn ) contains a product of n copies of the congruence
\
\
±1 ])) which is central in IA(Φ
\n ).
kernel ker(SLn−1
(Z[x±1 ]) → SLn−1 (Z[x
It enables us to show that contrary to free nilpotent cases, C (IA(Φn ), Φn )
is not trivial and not even finitely generated.
We note that using some results of this paper we show in an upcoming
paper that actually, all the elements of C (IA(Φn ), Φn ) lie in the center
\n ).
of IA(Φ
Mathematics Subject Classification (2010): Primary: 19B37, 20H05, Secondary: 20E36, 20E18.
Key words and phrases: congruence subgroup problem, automorphism groups,
profinite groups, free metabelian groups.
Contents
1 Introduction
2
2 Some background in algebraic K-theory
6
3 IA (Φn ) and its subgroups
8
4 The subgroups Ci
13
1
5 The centrality of Ci
20
m
6 Some elementary elements of hIA (Φn ) i
7 A main lemma
7.1 Decomposing the proof . .
7.2 Some needed computations
7.3 Elements of form 2 . . . . .
7.4 Elements of form 3 . . . . .
7.5 Elements of form 4 . . . . .
1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
24
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
25
26
30
34
41
47
Introduction
The classical congruence subgroup problem (CSP) asks for, say, G = SLn (Z) or
G = GLn (Z), whether every finite index subgroup of G contains a principal congruence subgroup, i.e. a subgroup of the form G (m) = ker (G → GLn (Z/mZ))
for some 0 6= m ∈ Z. It is a classical 19th century result that the answer is
negative for n = 2. On the other hand, quite surprisingly, it was proved in the
sixties by Mennicke [Men] and by Bass-Lazard-Serre [BaLS] that for n ≥ 3 the
answer to the CSP is affirmative. A rich theory of the CSP for more general
arithmetic groups has been developed since then.
By the observation GLn (Z) ∼
= Aut (Zn ), the CSP can be generalized to
automorphism groups as follows: Let Γ be a group and G ≤ Aut (Γ). For a
finite index characteristic subgroup M ≤ Γ denote:
G (M ) = ker (G → Aut (Γ/M )) .
Such a G (M ) will be called a “principal congruence subgroup” of G and a
finite index subgroup of G which contains G (M ) for some M will be called a
“congruence subgroup”. The CSP for the pair (G, Γ) asks whether every finite
index subgroup of G is a congruence subgroup.
One can easily see that the CSP is equivalent to the question: Is the congruence map Ĝ = limG/U → limG/G (M ) injective? Here, U ranges over all
←−
←−
finite index normal subgroups of G, and M ranges over all finite index characteristic subgroups of Γ. When Γ is finitely generated, it has only finitely many
subgroups of given index m, and thus, the charateristic subgroups: Mm =
∩ {∆ ≤ Γ | [Γ : ∆] = m} are of finite index in Γ. Hence, one can write Γ̂ =
1
lim
←−m∈N Γ/Mm and have :
limG/G (M ) =
←−
≤
lim
G/G (Mm ) ≤ limm∈N Aut(Γ/Mm )
←−m∈N
←−
Aut(lim
(Γ/M
))
= Aut(Γ̂).
m∈N
m
←−
1 By the celebrated theorem of Nikolov and Segal which asserts that every finite index
subgroup of a finitely generated profinite group is open [NS], the second inequality is actually
an equality. However, we do not need it.
2
Therefore, when Γ is finitely generated, the CSP is equivalent to the question:
Is the congruence map: Ĝ → Aut(Γ̂) injective? More generally, the CSP asks
what is the kernel C (G, Γ) of this map. For G = Aut (Γ) we will also use the
simpler notation C (Γ) = C (G, Γ). The classical congruence subgroup result
n
mentioned above can therefore
be reformulated as C (Z ) = {e} for n ≥ 3, and
2
it is also known that C Z = F̂ω , where F̂ω is the free profinite group on a
countable number of generators (cf. [Mel], [L]).
Very few results are known when Γ is non-abelian. Most of the results are
related to Γ = π(Sg,n ), the fundamental group of the closed surface of genus g
with n punctures (see [DDH], [Mc], [A], [Bo1], [Bo2]). As observed in [BER], the
result of Asada in [A] actually gives an affirmative solution to the case Γ = F2 ,
G = Aut (F2 ) (see also [BL]). Note that for every n > 0, π(Sg,n ) ∼
= F2g+n−1 =
the free group on 2g + n − 1 generators. Hence, the above mentioned results
relate to various subgroups of the automorphism group of finitely generated free
groups. However, the CSP for the full Aut (Fd ) when d ≥ 3 is still unsettled.
Denote now the free metabelian group on n generators by Φn = Fn /Fn′′ .
Considering the metabelian case, it was shown in [BL] (see also [Be1]) that
C (Φ2 ) = F̂ω . In addition, it was proven there that C (Φ3 ) ⊇ F̂ω . The basic
motivation which led to this paper was to complete the picture in the free
metabelian case and investigate C (Φn ) for n ≥ 4. Now, denote IA(Φn ) =
ker(Aut(Φn ) → GLn (Z)). Then, the commutative exact diagram:
1 → IA (Φn ) → Aut (Φn )
ց
↓
Aut(Φ̂n )
→ GLn (Z) → 1
↓
→ GLn (Ẑ)
gives rise to the commutative exact diagram (see Lemma 2.1 in [BER]):
\
\
\
IA
(Φn ) → Aut
(Φn ) → GL
n (Z)
ց
↓
↓
Aut(Φ̂n ) → GLn (Ẑ)
→ 1
.
\
Hence, by using the fact that GL
n (Z) → GLn (Ẑ) is injective for n ≥ 3, one
can obtain that C (Φn ) is an image of C (IA (Φn ) , Φn ). Thus, for investigating
C (Φn ) it seems to be worthwhile to investigate C (IA (Φn ) , Φn ). The main
result of the paper is the following:
Theorem 1.1. For every n ≥ 4, C (IA (Φn ) , Φn ) contains a subgroup C which
satisfies the following properties:
Qn
1. C is isomorphic to a product C = i=1 Ci of n copies of:
\
\
±1 ])).
(Z[x±1 ]) → SLn−1 (Z[x
Ci ∼
= ker(SLn−1
\
2. C is contained in the center of IA
(Φn ).
3. C is a direct factor of C (IA (Φn ) , Φn ), i.e. there is a normal subgroup
N ⊳ C (IA (Φn ) , Φn ) such that C (IA (Φn ) , Φn ) = N × C.
3
We note that as for every n ≥ 4, IA (Φn ) is finitely generated [BM], the
\
action of Aut (Φn ) on IA (Φn ) by conjugation induce an action of Aut
(Φn )
\
\
on IA
(Φn ), such that the closer IA (Φn ) of IA (Φn ) in Aut
(Φn ) acts trivially
\
\
\
on Z(IA
(Φn )), the center of IA
(Φn ). Thus, as we have Aut
(Φn )/IA (Φn ) =
\
\
\
GLn (Z) we obtain a natural action of GLn (Z) on Z(IA (Φn )). It will be clear
from the description in the paper that the permutation matrices permute the
copies Ci through this natural action.
As by using the techniques of Kassabov and Nikolov in [KN] one can show
that Ci are not finitely generated, this gives the imediate following theorem as
a corollary:
Theorem 1.2. For every n ≥ 4, C (IA (Φn ) , Φn ) is not finitely generated.
It will be shown in an upcoming paper that when Γ is a finitely generated
free nilpotent group of class c, then C (IA (Γ) , Γ) = {e} is always trivial. So
the free metabelian cases behave completely different from free nilpotent cases.
The following problem is still open:
Qn
Problem 1.3. Is C (IA (Φn ) , Φn ) = i=1 Ci or it contains more elements?
We get close to solve Problem 1.3 in an upcoming result, which asserts [Be2]:
\
Theorem 1.4. For every n ≥ 4, C (IA (Φn ) , Φn ) is central in IA
(Φn ).
We note that considering our basic motivation, as C (Φn ) is an image of
C (IA (Φn ) , Φn ) we actually obtain that when n ≥ 4, the situation is dramatically different from the cases of n = 2, 3 described above, and:
Theorem 1.5. For every n ≥ 4, C (Φn ) is abelian.
We remark that despite the result of the latter theorem, we do not know
whether C (Φn ) is also not finitely generated. In fact we can not even prove at
this point that it is not trivial.
The main line of the proof of Theorem 1.1, is organized along the paper as
follows. For a ring R, ideal J ⊳ R and d ∈ N denote:
GLd (R, J) = ker(GLd (R) → GLd (R/J)).
±1
n
For n ∈ N denote also the ring Rn = Z[x±1
1 , . . . , xn ] = Z[Z ]. Using the
Magnus embedding of IA (Φn ), in which IA (Φn ) can be viewed as:
x1 − 1
x1 − 1
..
..
IA (Φn ) = A ∈ GLn (Rn ) | A
=
.
.
xn − 1
xn − 1
u
we obtain in 3, for every 1 ≤ i ≤ n, a natural embedding of:
GLn−1 (Rn , (xi − 1)Rn ) ֒→ IA (Φn )
4
and a surjective natural homomorphism:
ρi
IA (Φn ) ։ GLn−1 (Z[xi±1 ], (xi − 1)Z[x±1
i ])
in which the obvious copy of the subgroup GLn−1 (Z[xi±1 ], (xi − 1)Z[x±1
i ]) in
GLn−1 (Rn , (xi − 1)Rn ) is mapped onto itself via the composition map. This
description,
u after quoting some
u classical notions and results from Algebraic Ktheory in 2, enables us in 4 to show that for every n ≥ 4 and 1 ≤ i ≤ n,
C (IA (Φn ) , Φn ) contains a copy of:
Ci
∼
=
±1
\
ker(GLn−1 (Z[xi±1\
], (xi − 1)Z[x±1
i ]) → GLn−1 (Z[xi ]))
∼
=
±1
\
\
ker(SLn−1
(Z[x±1
i ]) → SLn−1 (Z[xi ]))
such that C (IA (Φn ) , Φn ) is maped onto Ci through the map ρ̂i which induced
by ρi . The second isomorphism in the obove equation is obtained by using a
main lemma which is left to the end of the paper and some classical results from
Algebraic K-theory. In particular, we get that for every 1 ≤ i ≤ n:
u
C (IA (Φn ) , Φn ) = (C (IA (Φn ) , Φn ) ∩ ker ρ̂i ) ⋊ Ci .
In 4 we aslo show that the copies Ci intersect each other trivially. Then,
following the techniques of Kassabov and Nikolov in [KN] we show that Ci is
not finitely generated, and therefore
deduce that C (IA (Φn ) , Φn ) is not finitely
u
generated either. Then, in 5 we show that the copies Ci lie in the center of
\
IA
(Φn ), using classical results from Algebraic K-theory and the main reminded
lemma. In particular we obtain that:
C (IA (Φn ) , Φn ) = (C (IA (Φn ) , Φn ) ∩ni=1 ker ρ̂i ) ×
n
Y
Ci
i=1
This complete
the proof of Theorem 1.1.
u
m
In 6 we compute some elements
u in hIA (Φn ) i which help us later to prove
the main reminded lemma. In 7, using classical results from algebraic Ktheory, we end the paper by proving a main lemma which asserts that for every
1 ≤ i ≤ n, we have:
m
GLn−1 (Rn , (xi − 1)Rn ) ∩ E n−1 Rn , Hn,m2 ⊆ hIA (Φn ) i
when:
• GLn−1 (Rn , (xi − 1)Rn ) denotes its appropriate copy in IA (Φn ) described
above.
• E n−1 Rn , Hn,m2 is the subgroup of En−1 (Rn ) = hIn−1 + rEi,j | r ∈ Rn i
which is generated as a normal subgroup by the elementary
matrices of the
form In−1 + hEi,j for h ∈ Hn,m2 = ker Rn → Zm2 [Znm2 ] , 1 ≤ i 6= j ≤ n.
Here, In−1 is the (n − 1) × (n − 1) unit matrix and Ei,j is the matrix which
has 1 in the (i, j)-th entry and 0 elsewhere.
5
• The intersection is obtained by viewing the copy of GLn−1 (Rn , (xi −1)Rn )
in IA(Φn ) as a subgroup of GLn−1 (Rn ).
Acknowledgements: I wish to offer my deepest thanks to my great supervisor
Prof. Alexander Lubotzky for his sensitive and devoted guidance, and to the
Rudin foundation trustees for their generous support during the period of the
research.
2
Some background in algebraic K-theory
In this section we fix some notations and recall some definitions and background
in algebraic K-theory which will be used through the paper. One can find more
general information in the references ([Ros], [Mi], [Bas]). In this section R will
always denote a commutative ring with identity. We start with recalling the
following notations: Let R be a commutative ring, H ⊳ R an ideal, and d ∈ N.
Then:
• GLd (R) = {A ∈ Mn (R) | det (A) ∈ R∗ }.
• SLd (R) = {A ∈ GLd (R) | det (A) = 1}.
• Ed (R) = hId + rEi,j | r ∈ R, 1 ≤ i 6= j ≤ di.
• GLd (R, H) = ker (GLd (R) → GLd (R/H)).
• SLd (R, H) = ker (SLd (R) → SLd (R/H)).
• Ed (R, H) = the normal subgroup of Ed (R), which is generated as a normal subgroup by the elementary matrices of the form Id + hEi,j for h ∈ H.
As for every d ≥ 3, Ed (R, H) is normal in GLd (R) (see Corollary 1.4 in [Su]),
one can have the following definitions:
K1 (R; d) = GLd (R) /Ed (R)
SK1 (R; d) = SLd (R) /Ed (R)
K1 (R, H; d) = GLd (R, H) /Ed (R, H)
SK1 (R, H; d) = SLd (R, H) /Ed (R, H) .
We now go ahead with the following definition:
Definition 2.1. Let R be a commutative ring, and 3 ≤ d ∈ N. We define
the “Steinberg group” Std (R) to be the group which generated by the elements
xi,j (r) for r ∈ R and 1 ≤ i 6= j ≤ d, under the relations:
• xi,j (r1 ) · xi,j (r2 ) = xi,j (r1 + r2 ).
• [xi,j (r1 ) , xj,k (r2 )] = xi,k (r1 · r2 ).
• [xi,j (r1 ) , xk,l (r2 )] = 1.
6
for every different 1 ≤ i, j, k, l ≤ d and every r1 , r2 ∈ R.
As the elementary matrices Id + rEi,j satisfy the relations which define
Std (R), the map xi,j (r) 7→ Id + rEi,j defines natural homomorphism φd :
Std (R) → E d (R). The kernel of this map is denoted by K2 (R; d) = ker (φd ).
Now, for two invertible elements u, v ∈ R∗ and 1 ≤ i 6= j ≤ d define the
“Steinberg symbol” by:
{u, v}i,j = hi,j (uv) hi,j (u)−1 hi,j (v)−1 ∈ Std (R)
when hi,j (u) = wi,j (u) wi,j (−1)
and
wi,j (u) = xi,j (u) xj,i −u−1 xi,j (u) .
One can show that {u, v}i,j ∈ K2 (R; d) and lie in the center of Std (R). In
addition, for every 3 ≤ d ∈ N, the Steinberg symbols {u, v}i,j do not depend
on the indices i, j, so they can be denoted simply by {u, v} (see [DS]). The
Steinberg symbols satisfy many identities. For example:
{uv, w} = {u, w} {v, w}
, {u, vw} = {u, v} {u, w} .
(2.1)
In the semi-local case we have the following ([SD], Theorem 2.7):
Theorem 2.2. Let R be a semi-local commutative ring and d ≥ 3. Then,
K2 (R; d) is generated by the Steinberg symbols {u, v} for u, v ∈ R∗ . In particular, K2 (R; d) is central in Std (R).
Let now R be a commutative ring, H ⊳ R ideal
and d ≥ 3. Denote R̄ = R/H.
Clearly, there is a natural map E d (R) → E d R̄ . It is easy to see that Ed (R, H)
is in the kernel of the latter map, so we have a map:
πd : Ed (R) /Ed (R, H) → Ed R̄ .
In addition, it is easy to see that we have a surjective map:
ψd : Std R̄ ։ Ed (R) /Ed (R, H)
defined by: xi,j (r̄) → Id + rEi,j , such that φd : Std R̄ → Ed R̄ satisfies:
φd = πd ◦ ψd . Therefore, we obtain the surjective map:
ψd
K2 R̄; d = ker (φd ) ։ ker (πd ) =
≤
(Ed (R) ∩ SLd (R, H)) /Ed (R, H)
SK1 (R, H; d) .
In particular, it implies that if Ed (R) = SLd (R), then we have a natural
surjective map:
K2 (R/H; d) ։ SK1 (R, H; d) .
From here one can easily deduce the following corollary, which will be needed
later in the paper.
7
Corollary 2.3. Let R be a commutative ring, H ⊳ R ideal of finite index and
d ≥ 3. Assume also that Ed (R) = SLd (R). Then:
1. SK1 (R, H; d) is a finite group.
2. SK1 (R, H; d) is central in GLd (R) /Ed (R, H).
3. Every element
of SK1 (R, H; d) has a representative in SLd (R, H) of the
A
0
such that A ∈ SL2 (R, H).
form
0 Id−2
Proof. The ring R̄ = R/H is finite. In particular,
R̄ is Artinian and hence semi
local. Thus, by Theorem 2.2, K2 R̄; d is an abelian group which generated
by the Steinberg symbols {u, v} for u, v ∈ R̄∗ . As R̄ is finite, so is the number
of the Steinberg symbols. By equation 2.1 we obtain that the order of any
abelian group
Steinberg symbol is finite. So K2 R̄; d is a finitely generated
d
is
finite.
Moreover,
as
which its generators are of finite order. Thus, K2 R̄;
R̄ is semi-local, Theorem 2.2 implies that K2 R̄; d is central Std R̄ . Now,
as we assume Ed (R) = SLd (R), SK1 (R, H; d) is the image of K2 R̄; d under
the surjective map
Std R̄ ։ Ed (R) /Ed (R, H) = SLd (R) /Ed (R, H) .
This implies part 1 and that SK1 (R, H; d) is central in SLd (R) /Ed (R, H).
Moreover, as d ≥ 3, we have {u, v} = {u, v}1,2 for every u, v ∈ R̄∗ . Now,
it is easy to check from the definition that the image of {u, v}1,2 under the
A
0
· Ed (R, H) for
map Std R̄ ։ SLd (R) /Ed (R, H) is of the form
0 Id−2
some A ∈ SL2 (R, H). So as SK1 (R, H; d) is generated by the images of the
Steinberg symbols, the same holds for every element in SK1 (R, H; d). So we
obtain part 3. Now, as d ≥ 3 we can write:
GLd (R) = SLd (R) · {Id + (r − 1) E3,3 | r ∈ R∗ }
∗
and as all the elements of theform Id + (r
− 1) E3,3 for r ∈ R commute with
A
0
for A ∈ SL2 (R, H), the centrality
all the elements of the form
0 Id−2
of SK1 (R, H; d) in SLd (R) /Ed (R, H) shows that actually SK1 (R, H; d) is
central in GLd (R) /Ed (R, H), as required in part 2.
3
IA (Φn ) and its subgroups
We start to deal with the IA-automorphism group of the free metabelian group,
G = IA (Φn ) = ker (Aut (Φn ) → Aut (Φn /Φ′n ) = GLn (Z)), by presenting some
of its properties and subgroups. We start with the following notations:
• Φn = Fn /Fn′′ = the free metabelian group on n elements. Here Fn′′ denotes
the second derivative of Fn , the free group on n elements..
′
m
′ m
• Φn,m = Φn /Mn,m , where Mn,m = (Φ′n Φm
n ) (Φn Φn ) .
8
• IGn,m = G(Mn,m ) = ker (IA (Φn ) → Aut (Φn,m )) .
• IAn,m = ∩ {N ⊳ IA (Φn ) | [IA (Φn ) : N ] | m}
±1
n
• Rn = Z[Zn ] = Z[x±1
1 , . . . , xn ] where x1 , . . . , xn are the generators of Z .
• Zm = Z/mZ.
• σi = xi − 1 for 1 ≤ i ≤ n. We also denote by ~σ the column vector which
has σi in its i-th entry.
Pn
• An = i=1 σi Rn = the augmentation ideal of Rn .
Pn
• Hn,m = ker (Rn → Zm [Znm ]) = i=1 (xm
i − 1) Rn + mRn .
By the well known Magnus embedding (see [Bi], [RS], [Ma]), one can identify
Φn with the matrix group:
)
(
n
X
g a 1 t1 + . . . + a n tn
ai (xi − 1)
| g ∈ Zn , ai ∈ Rn , g − 1 =
Φn =
0
1
i=1
where ti is a free basis for Rn -module, under the identification of the generators
of Φn with the matrices
xi ti
1 ≤ i ≤ n.
0 1
Moreover, for every α ∈ IA (Φn ), one can describe α by its action on the
generators of Φn , by:
xi ti
xi ai,1 t1 + . . . + ai,n tn
α:
7→
0 1
0
1
and this description gives an injective homomorphism (see [Bac], [Bi]):
IA (Φn ) ֒→
defined by α
7→
GLn (Rn )
a1,1 · · ·
..
.
an,1
a1,n
..
.
· · · an,n
which gives an identification of IA (Φn ) with the group:
IA (Φn ) =
=
{A ∈ GLn (Rn ) | A~σ = ~σ }
n
o
In + A ∈ GLn (Rn ) | A~σ = ~0 .
Proposition 3.1. Let In + A ∈ IA (Φn ) and denote
Pnthe entries of A by ak,l for
1 ≤ k, l ≤ n. Then, for every 1 ≤ k, l ≤ n, ak,l ∈ l6=i=1 σi Rn ⊆ An .
9
Proof. For a given 1 ≤ k ≤ n, the condition A~σ = ~0 gives the equality:
0 = ak,1 σ1 + ak,2 σ2 + . . . + ak,n σn .
Thus, for a given 1 ≤ l ≤ n, the map Rn → Z[x±1
l ] which defined by xi 7→ 1 for
±1
every i 6= l, maps 0 = ak,1 σ1 + ak,2 σ2 + . . . + ak,n σn 7→ āk,l σP
l ∈ Z[xl ]. Hence,
n
±1
±1
as Z[xl ] is a domain, āk,l = 0 ∈ Z[xl ]. Therefore: ak,l ∈ l6=i=1 σi Rn ⊆ An ,
as required.
Proposition 3.2.
Qn Let In + A ∈ IA (Φn ). Then det (In + A) is of the form:
det (In + A) = r=1 xsrr for some sr ∈ Z.
Qn
Proof. The invertible elements in Rn are the elements of the form ± i=1 xsrr
(see
Qn[CF], chapter 8). Thus, as In + A ∈ GLn (Rn ) we have: det (In + A) =
± i=1 xsrr . However, according to Proposition 3.1, for every entry ak,l of A
we have: ak,l ∈ An . Hence, under the
xi 7→ 1 for every 1 ≤ i ≤ n,
Q projection
sr
one has In + A 7→ In and thus, ± ni=1
x
=
det
(In + A) 7→ det (In ) = 1.
Qn r
Therefore, the option det (In + A) = − i=1 xsrr is impossible, as required.
Consider now the map:
Pn
g a 1 t1 + . . . + a n tn
n
Φn =
| g ∈ Z , ai ∈ Rn , g − 1 = i=1 ai (xi − 1)
0
1
↓
Pn
g a 1 t1 + . . . + a n tn
n
n
| g ∈ Zm , ai ∈ Zm [Zm ], g − 1 = i=1 ai (xi − 1)
0
1
which induced by the projections Zn → Znm , Rn = Z[Zn ] → Zm [Znm ]. Using
result of Romanovskiı̆ [Rom], it is shown in [Be1] that this map is surjective and
that Φn,m is canonically isomorphic to its image. Therefore, we can identify the
principal congruence subgroup of IA (Φn ), IGn,m , with:
IGn,m
=
=
{A ∈ ker (GLn (Rn ) → GLn (Zm [Znm ])) | A~σ = ~σ }
n
o
In + A ∈ GLn (Rn , Hn,m ) | A~σ = ~0 .
Let us step forward with the following definitions:
Definition 3.3. Let A ∈ GLn (Rn ), and for 1 ≤ i ≤ n denote by Ai,i the minor
which obtained from A by erasing its i-th row and i-th column. Now, for every
1 ≤ i ≤ n, define the subgroup IGLn−1,i ≤ IA (Φn ), by:
The i-th row of A is 0,
IGLn−1,i = In + A ∈ IA (Φn ) |
.
In−1 + Ai,i ∈ GLn−1 (Rn , σi Rn )
Proposition 3.4. For every 1 ≤ i ≤ n we have: IGLn−1,i ∼
= GLn−1 (Rn , σi Rn ).
Proof. The definition of IGLn−1,i gives us a natural projection from IGLn−1,i →
GLn−1 (Rn , σi Rn ) which maps an element In + A ∈ IGLn−1,i to In−1 + Ai,i ∈
10
GLn−1 (Rn , σi Rn ). Thus, all we need is to explain why this map is injective
and surjective.
Injectivity: Here, It is enough to show that given an element In + A ∈
IA (Φn ), every entry in the i-th column is determined uniquely by the other
entries in its row. Indeed, as A satisfies the condition A~σ = ~0,
P for every 1 ≤
− n
i6=l=1 ak,l σl
, i.e.
k ≤ n we have: ak,1 σ1 + ak,2 σ2 + . . . + ak,n σn = 0 ⇒ ak,i =
σi
we have a formula for ak,i by the other entries in its row.
Surjectivity: Without loss of generality we assume i = n. Let In−1 + σn C ∈
GLn−1 (Rn , σn Rn ), and denote by ~cl the column vectors of C. Define:
Pn−1
In−1 + σn C − l=1 σl~cl
∈ IGLn−1,n
0
1
and this is clearly a preimage of In−1 + σn C.
Under the above identification of IGLn−1,i with GLn−1 (Rn , σi Rn ), we will
use through the paper the following notations:
Definition 3.5. Let H ⊳ Rn . We define:
ISLn−1,i (H) = IGLn−1,i ∩ SLn−1 (Rn , H)
IEn−1,i (H) = IGLn−1,i ∩ E n−1 (Rn , H) ≤ ISLn−1,i (H) .
Observe now that as for every 1 ≤ i ≤ n, GLn−1 Z[xi±1 ], σi Z[x±1
i ] ≤
GLn−1 (Rn , σi Rn ), the latter
isomorphism gives also a natural embedding of
±1
],
σ
Z[x
]
as
a
subgroup of IA (Φn ). Actually:
GLn−1 Z[x±1
i
i
i
Proposition 3.6. For every 1 ≤ i ≤ n, there is a canonical surjective homomorphism:
ρi
±1
±1
±1
GLn−1 Z[x±1
i ], σi Z[xi ] ֒→ IA (Φn ) ։ GLn−1 Z[xi ], σi Z[xi ]
such that the composition map is the identity. Hence:
±1
IA (Φn ) = ker ρi ⋊ GLn−1 Z[x±1
i ], σi Z[xi ] .
Proof. Without loss of generality we assume i = n. First, consider the homomorphism IA (Φn ) → GLn Z[x±1
n ] , which induced by the projection Rn →
Z[x±1
]
which
defined
by
x
→
7
1
for every j 6= n. By Proposition 3.1, given
j
n
Pn−1
In + A ∈ IA (Φn ), all the entries of the n-th column of A are in j=1 σj Rn .
Hence, the above map IA (Φn ) → GLn Z[x±1
n ] is actually a map:
o
n
IA (Φn ) → In + Ā ∈ GLn Z[xn±1 ] | the n-th column of Ā is ~0 .
Observe now, that the right side of the above map is mapped naturally to
GLn−1 Z[x±1
n ] by erasing the n-th column and the n-th row from every element. Hence we obtain a map:
IA (Φn ) → GLn−1 Z[x±1
n ] .
11
Now, by Proposition 3.1, every entry of A such that In + A ∈ IA (Φn), is in
An . Thus, the entries of every Ā such that In−1 + Ā ∈ GLn−1 Z[x±1
n ] is an
image of In + A ∈ IA (Φn ), are all in σn Z[x±1
].
Hence,
we
actually
obtain a
n
homomorphism:
±1
ρn : IA (Φn ) → GLn−1 Z[x±1
n ], σn Z[xn ] .
Observing that the copy of GLn−1 Z[xn±1 ], σn Z[x±1
n ] in IGLn−1,n is mapped
isomorphically to itself by ρn , finishes the proof.
Let have now the following propositions:
∼
Proposition 3.7.
For 1 ≤ i ≤ n denote R̄ = Z[x±1
i ]. Then, viewing Im(ρi ) =
GLn−1 R̄, σi R̄ , for every m ∈ N one has:
Im(ρi ) ∩ IGn,m = GLn−1 R̄, σi ((xm
i − 1)R̄ + mR̄)
Proof. It is easy to see that the elements of IGL
n−1,i which correspond to
the elements of GLn−1 R̄, σi ((xm
i − 1)R̄ + mR̄) are clearly in Imρi ∩ IGn,m .
On the other hand, without loss of generality we assume that i = n and let
In + A ∈ Imρn ∩ IGn,m . Then In + A has the form:
Pn−1
In−1 + σn C − l=1 σl~cl
∈ IGLn−1,n
0
1
Pn−1
when the entries of C admit ck,l ∈ R̄ and
j=1 σl ck,l ∈ Hn,m . So ck,l ∈
m
Hn,m ∩ R̄ = (xn − 1)R̄ + mR̄, and the claim follows.
Proposition 3.8. For 1 ≤ i ≤ n denote R̄ = Z[xi±1 ]. Then, for every m ∈ N
one has:
ρi IGn,m2 ⊆ Im(ρi ) ∩ IGn,m ⊆ ρi (IGn,m )
Proof. As every element in Imρi ∩ IGn,m is mapped to itself via ρi we clearly
have Imρi ∩ IGn,m ⊆ ρi (IGn,m). On the other hand, if In + A ∈ IGn,m2 then
viewing Imρi ∼
= GLn−1 R̄, σi R̄ , the entries of ρi (In + A) = In−1 +B belong to
Pm−1
2
2
⊆ (xm
(xm
−1)
R̄+m
σi R̄. Observe now that we have: r=0 xmr
i
i −1)R̄+mR̄,
i
so:
2
xm
i
− 1 = σi
2
m
−1
X
r=1
xri = σi
m−1
X
r=0
xri
m−1
X
r=0
xmr
∈ σi (xm
i − 1)R̄ + mR̄ .
i
So by proposition 3.7 ρi (In + A) ∈ Imρi ∩ IGn,m as required.
Proposition 3.9. For every m ∈ N and 1 ≤ i ≤ n one has:
ρi (IAn,m ) = Im(ρi ) ∩ IAn,m .
12
Proof. As every element in the intersection Imρi ∩ IAn,m is mapped to itself via
ρi we clearly have Imρi ∩IAn,m ⊆ ρi (IAn,m ). For the opposite, assume that α ∈
IAn,m , and denote ρi (α) = β ∈ Imρi . We want to show that β ∈ IAn,m . So let
N ⊳ IA(Φn ) such that [IA(Φn ) : N ]|m. Then obviously [Imρi : (N ∩ Imρi )]|m.
−1
Thus, as ρi is surjective [IA(Φn ) : ρ−1
i (N ∩ Imρi )]|m so α ∈ ρi (N ∩ Imρi ) and
hence β = ρi (α) ∈ N ∩ Imρi ≤ N . As this is valid for every such N , we have:
β ∈ IAn,m as required.
We close this section with the following definition:
Definition 3.10. For every 1 ≤ i ≤ n, denote:
IGL′ n−1,i = {In + A ∈ IA (Φn ) | The i-th row of A is 0} .
Obviously, IGLn−1,i ≤ IGL′ n−1,i , and by the same injectivity argument as
in the proof of Proposition 3.4, one can deduce that:
Proposition 3.11. The subgroup IGL′ n−1,i ≤ IA (Φn ) is canonically embedded
in GLn−1 (R), by the map: In + A 7→ In−1 + Ai,i .
Remark 3.12. Note that in general IGLn−1,i IGL′ n−1,i . For example, I4 +
σ3 E1,2 − σ2 E1,3 ∈ IGL′ 3,4 \ IGL3,4 .
4
The subgroups Ci
In this section we define the subgroups Ci ≤ C (IA (Φn ) , Φn ), and we show that
C (IA (Φn ) , Φn ) can be viewed as a semi-direct product of each one of them.
We also show that when n ≥ 4:
±1 ]))
\
\
(Z[x±1 ]) → SLn−1 (Z[x
Ci ∼
= ker(SLn−1
and use it to show that C (IA (Φn ) , Φn ) is not finitely generated. Along the
section we will simplify the notations and write IAm = IAn,m and IGm =
IGn,m .
It is proven in [Be1] that Φ̂n = lim
←−Φn,m . So, as for every m ∈ N ker(Φn →
Φn,m ) is characteristic in Φn , we can write explicitly:
\
= ker(IA
(Φn ) → Aut(Φ̂n ))
\
= ker(IA
(Φn ) → lim
←−Aut (Φn,m ))
\
= ker(IA
(Φn ) → lim
←− (IA (Φn ) /IGm ))
\
= ker(IA
(Φn ) → lim (IA (Φn ) /IGm2 )).
←−
Now, as for every n ≥ 4 we know that IA (Φn ) is finitely generated (see [BM]),
\
we have IA
(Φn ) = lim
←− (IA (Φn ) /IAm ). Hence:
C(IA (Φn ) , Φn ) = ker(lim
←− (IA (Φn ) /IAm ) → lim
←− (IA (Φn ) /IGm ))
C (IA (Φn ) , Φn )
= lim (IAm · IGm /IAm )
←−
2
= lim
←− (IAm · IGm /IAm ) .
13
Observe now that for every 1 ≤ i ≤ n the composition map:
ρi
±1
±1
±1
GLn−1 Z[x±1
i ], σi Z[xi ] ֒→ IA (Φn ) ։ GLn−1 Z[xi ], σi Z[xi ]
induce a similar composition map for the profinite completions:
±1
±1
\
±1
±1
\
\ ρ̂i
GLn−1 Z[x
i ], σi Z[xi ] ֒→ IA (Φn ) ։ GLn−1 Z[xi ], σi Z[xi ] .
\
This enables us to write: IA (Φn ) = ker ρi ⋊ Imρi and IA
(Φn ) = ker ρ̂i ⋊ Imρ̂i .
Define now:
Definition 4.1. Ci = C (IA (Φn ) , Φn ) ∩ Imρ̂i = ker(Imρ̂i → Aut(Φ̂n )).
Proposition 4.2. If 1 ≤ i 6= j ≤ n, then Ci ∩ Cj = {e}.
Proof. By the above description for C(IA (Φn ) , Φn ), one can write:
Ci
=
ker(Imρ̂i → Aut(Φ̂n ))
=
2
ker(lim
←− (IAm · Imρi /IAm ) → lim
←− (IA (Φn ) /IGm ))
2
lim
←
− ((IAm · Imρi ) ∩ (IAm · IGm )) /IAm .
=
We claim now that:
(IAm · Imρi ) ∩ (IAm · IGm2 ) ⊆ IAm · (Imρi ∩ IGm ) .
So we have to show that if ar = bs such that a, b ∈ IAm , r ∈ Imρi and s ∈ IGm2 ,
then there exist c ∈ IAm and t ∈ Imρi ∩ IGm such that
ar = bs = ct. Indeed,
write: Imρi ∋ r = a−1 bs. Then: r = ρi (r) = ρi a−1 b ρi (s), and:
ρi a−1 b ∈ ρi (IAm ) = Imρi ∩ IAm
ρi (s) ∈ ρi (IGm2 ) ⊆ Imρi ∩ IGm .
Therefore, by defining c = a · ρi a−1 b , and t = ρi (s) we get the required
inclusion. Thus, we have:
Ci ∩ Cj ⊆ lim
←− ((IAm · (Imρi ∩ IGm ) ∩ IAm · (Imρj ∩ IGm )) /IAm ) .
We claim now that:
IAm · (Imρi ∩ IGm ) ∩ IAm · (Imρj ∩ IGm ) = IAm .
Indeed, if ar = bs such that a, b ∈ IAm , r ∈ Imρi ∩ IGm and s ∈ Imρj ∩ IGm ,
then:
ar = aρi (r) = aρi a−1 b ρi (s)
and we have aρi a−1 b ∈ IAm . I addition, it is not difficult to observe that:
ρi (s)
∈ ρi (Imρj ∩ IGm )
⊆ hIn + m(σi Ek,j − σj Ek,i ) | k 6= i, ji
m
= hIn + σi Ek,j − σj Ek,i | k 6= i, ji
Hence, ar ∈ IAm , and Ci ∩ Cj = {e} as required.
14
⊆ IAm .
The proof of the above proposition shows that for every 1 ≤ i ≤ n and for
every m ∈ N we have:
(IAm · Imρi )∩(IAm · IGm2 ) ⊆ IAm ·(Imρi ∩ IGm ) ⊆ (IAm · Imρi )∩(IAm · IGm )
and by proposition 3.8 we have also ρi (IGm2 ) ⊆ Im(ρi ) ∩ IGn,m ⊆ ρi (IGm ). It
ρ̂i
follows that Ci ֒→ C(IA (Φn ) , Φn ) ։ Ci such that the composition map is the
identity. Indeed:
Ci
=
=
=
֒→
ρ̂i
։
=
=
lim
←
− ((IAm · Imρi ) ∩ (IAm · IGm )) /IAm
lim
←
−IAm · (Imρi ∩ IGm ) /IAm
limIA · (Imρi ∩ IGm2 ) /IAm
←− m
2
lim
←−IAm · IGm /IAm = C(IA (Φn ) , Φn )
2
lim
←−ρi (IAm ) · ρi (IGm ) /ρi (IAm )
lim
←
− (Imρi ∩ IAm ) · (Imρi ∩ IGm ) / (Imρi ∩ IAm )
limIA · (Imρi ∩ IGm ) /IAm = Ci .
←− m
Hence, one gets the following corollary:
Corollary 4.3. For every 1 ≤ i ≤ n we have:
C (IA (Φn ) , Φn ) = (ker ρ̂i ∩ C (IA (Φn ) , Φn )) ⋊ Ci .
We would like now to compute Ci and to show that it is canonically isomorphic to:
\
\
±1 ])).
ker(SLn−1
(Z[x±1 ]) → SLn−1 (Z[x
So set n ≥ 4, 1 ≤ i0 ≤ n, denote: x = xi0 , σ = σi0 = xi0 − 1, R = Z[x±1 ] =
m
Z[x±1
i0 ] and for m ∈ N denote: Hm = (x − 1) R + mR. Denote also:
ρ
=
ρ̂
=
ρi0 : IA (Φn ) ։ GLn−1 (R, σR)
\
\
(Φn ) ։ GLn−1
(R, σR).
ρ̂i : IA
0
Now, write (the last equality is by proposition 3.7):
Ci0
=
=
ker(Imρ̂ → Aut(Φ̂n ))
\
ker(GLn−1
(R, σR) → Aut(Φ̂n ))
=
\
ker(GLn−1
(R, σR) → lim
←− (IA (Φn ) /IGm ))
=
\
ker(GLn−1
(R, σR) → lim
←− (GLn−1 (R, σR) · IGm /IGm ))
=
\
ker(GLn−1
(R, σR) → limGLn−1 (R, σR) / (GLn−1 (R, σR) ∩ IGm ))
←−
\
ker(GLn−1 (R, σR) → lim
←−GLn−1 (R, σR) /GLn−1 (R, σHm )).
=
15
Now, by the same computation as in Proposition 3.8 one can show that for every
m ∈ N one has (Hm2 ∩ σR) ⊆ σHm ⊆ (Hm ∩ σR), so the latter is equals to:
\
ker(GLn−1
(R, σR) → lim
←−GLn−1 (R, σR) / (GLn−1 (R, σR) ∩ GLn−1 (R, Hm )))
\
= ker(GLn−1
(R, σR) → lim
←−GLn−1 (R) /GLn−1 (R, Hm )
\
= ker(GLn−1
(R, σR) → limGLn−1 (R/Hm )).
←−
Now, if R̄ is a finite quotient of R, then as x is invertible in R, its image x̄ ∈ R̄
is invertible in R̄. Thus, there exists r ∈ N such that x̄r = 1R̄ . In addition, there
exists s ∈ N such that 1R̄ + . . . + 1R̄ = 0R̄ . Therefore, for m = r · s the map
|
{z
}
s
R → R̄ factorizes through Zm [Zm ] ∼
lim (R/Hm ),
= R/Hm . Thus, we have R̂ = ←
−
which implies that: GLn−1 (R̂) = ←
lim
GL
(R/H
).
Therefore:
n−1
m
−
\
(R, σR) → GLn−1 (R̂)).
Ci0 = ker(GLn−1
Now, the short exact sequence:
1 → GLn−1 (R, σR) → GLn−1 (R) → GLn−1 (Z) → 1
gives rise to the exact sequence (see [BER], Lemma 2.1):
\
\
GLn−1
(R, σR) → GL\
n−1 (R) → GLn−1 (Z) → 1
which gives rise to the commutative diagram:
\
\
GLn−1
(R, σR) → GL\
n−1 (R) → GLn−1 (Z)
ց
↓
↓
GLn−1 (R̂) → GLn−1 (Ẑ)
→ 1
→ 1
.
Assuming n ≥ 4 and using the affirmative answer to the classical congruence
subgroup problem ([Men], [BaLS]), the map: GL\
n−1 (Z) → GLn−1 (Ẑ) is injec\
tive. Thus, by diagram chasing we obtain that the kernel ker(GLn−1
(R, σR) →
\
GLn−1 (R̂)) is mapped onto ker(GLn−1 (R) → GLn−1 (R̂)). Let have now the
following lemma:
Lemma 4.4. Let $d \ge 3$ and denote $D_m = \langle I_d + (x^{k\cdot m} - 1)E_{1,1} \mid k \in \mathbb{Z}\rangle$ for $m \in \mathbb{N}$. Then:
$$\widehat{GL_d}(R) = \varprojlim \big(GL_d(R)/(D_m E_d(R, H_m))\big), \qquad \widehat{SL_d}(R) = \varprojlim \big(SL_d(R)/E_d(R, H_m)\big).$$
Proof. We will prove the first part; the second is similar but easier. We first claim that $D_m E_d(R, H_m)$ is a finite index normal subgroup of $GL_d(R)$: by a well-known result of Suslin [Su], $SL_d(R) = E_d(R)$. Thus, by Corollary 2.3, $SK_1(R, H_m; d) = SL_d(R, H_m)/E_d(R, H_m)$ is finite. As the subgroup $SL_d(R, H_m)$ is of finite index in $SL_d(R)$, so is $E_d(R, H_m)$. Now, it is not difficult to see that the group of invertible elements of $R$ equals $R^* = \{\pm x^k \mid k \in \mathbb{Z}\}$ (see [CF], chapter 8). So as $\{x^{k\cdot m} \mid k \in \mathbb{Z}\}$ is of finite index in $R^*$, $D_m SL_d(R)$ is of finite index in $GL_d(R)$. We deduce that $D_m E_d(R, H_m)$ is also of finite index in $GL_d(R)$. It remains to show that $D_m E_d(R, H_m)$ is normal in $GL_d(R)$. We already stated previously (see 2) that $E_d(R, H_m)$ is normal in $GL_d(R)$. Thus, it is easy to see that it is enough to show that the commutators of the elements of $D_m$ with any set of generators of $GL_d(R)$ are in $E_d(R, H_m)$. By the above mentioned result of Suslin and as $R^* = \{\pm x^r \mid r \in \mathbb{Z}\}$, $GL_d(R)$ is generated by the elements of the forms:
1. $I_d + (\pm x - 1)E_{1,1}$
2. $I_d + rE_{i,j}$, $r \in R$, $2 \le i \neq j \le d$
3. $I_d + rE_{1,j}$, $r \in R$, $2 \le j \le d$
4. $I_d + rE_{i,1}$, $r \in R$, $2 \le i \le d$.
Now, obviously, the elements of $D_m$ commute with the elements of the forms 1 and 2. In addition, for the elements of the forms 3 and 4, one can easily compute that:
$$[I_d + (x^{k\cdot m} - 1)E_{1,1},\ I_d + rE_{1,j}] = I_d + r(x^{k\cdot m} - 1)E_{1,j} \in E_d(R, H_m)$$
$$[I_d + (x^{k\cdot m} - 1)E_{1,1},\ I_d + rE_{i,1}] = I_d + r(x^{-k\cdot m} - 1)E_{i,1} \in E_d(R, H_m)$$
for every $2 \le i, j \le d$, as required.
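As a sanity check (ours, not part of the original argument), the two commutator identities can be verified symbolically, for instance with Python's sympy, writing $t$ for the unit $x^{k\cdot m}$ and using the convention $[A, B] = ABA^{-1}B^{-1}$:

```python
import sympy as sp

t, r = sp.symbols('t r')
d = 3
def E(i, j):  # elementary matrix unit E_{i,j} (1-indexed)
    M = sp.zeros(d)
    M[i - 1, j - 1] = 1
    return M

D = sp.eye(d) + (t - 1) * E(1, 1)   # I + (x^{km} - 1) E_{1,1}
B1 = sp.eye(d) + r * E(1, 2)        # I + r E_{1,j} with j = 2
B2 = sp.eye(d) + r * E(3, 1)        # I + r E_{i,1} with i = 3
comm = lambda A, B: A * B * A.inv() * B.inv()

assert sp.simplify(comm(D, B1) - (sp.eye(d) + r * (t - 1) * E(1, 2))) == sp.zeros(d)
assert sp.simplify(comm(D, B2) - (sp.eye(d) + r * (1 / t - 1) * E(3, 1))) == sp.zeros(d)
print("both commutator identities verified")
```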
Now, every finite index normal subgroup of $GL_d(R)$ contains $D_m$ for some $m \in \mathbb{N}$. In addition, it is not hard to show that when $d \ge 3$, every finite index normal subgroup $N \lhd GL_d(R)$ contains $E_d(R, H)$ for some finite index ideal $H \lhd R$ (see [KN], Section 1). Thus, as every finite index ideal $H \lhd R$ contains $H_m$ for some $m$, $\widehat{GL_d}(R) = \varprojlim (GL_d(R)/(D_m E_d(R, H_m)))$, as required.
We now have the following proposition:
Proposition 4.5. Let $n \ge 4$. Then the map $\widehat{GL_{n-1}}(R, \sigma R) \to \widehat{GL_{n-1}}(R)$ is injective. Hence, the surjective map:
$$C_{i_0} = \ker(\widehat{GL_{n-1}}(R, \sigma R) \to GL_{n-1}(\widehat R)) \twoheadrightarrow \ker(\widehat{GL_{n-1}}(R) \to GL_{n-1}(\widehat R))$$
is an isomorphism.
Proof. We showed in the previous lemma that
$$\widehat{GL_{n-1}}(R) = \varprojlim GL_{n-1}(R)/(D_m E_{n-1}(R, H_m))$$
where $D_m = \langle I_{n-1} + (x^{k\cdot m} - 1)E_{1,1} \mid k \in \mathbb{Z}\rangle$ and $H_m = (x^m - 1)R + mR$. Hence, the image of $\widehat{GL_{n-1}}(R, \sigma R)$ in $\widehat{GL_{n-1}}(R)$ is:
$$\varprojlim GL_{n-1}(R, \sigma R)/(GL_{n-1}(R, \sigma R) \cap D_m E_{n-1}(R, H_m))$$
which, using that $D_m \subseteq GL_{n-1}(R, \sigma R)$, equals:
$$\varprojlim GL_{n-1}(R, \sigma R)/(D_m(GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_m))).$$
Now, by the surjective map $\rho : IA(\Phi_n) \twoheadrightarrow GL_{n-1}(R, \sigma R)$ we have the surjections:
$$\langle IA(\Phi_n)^m\rangle \xrightarrow{\ \rho\ } \langle GL_{n-1}(R, \sigma R)^m\rangle, \qquad IE_{n-1,i_0}(H_{n,m}) \xrightarrow{\ \rho\ } GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_m).$$
So as by the main lemma (Lemma 7.1) we have $IE_{n-1,i_0}(H_{n,m^2}) \subseteq \langle IA(\Phi_n)^m\rangle$, we also have:
$$GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_{m^2}) \subseteq \langle GL_{n-1}(R, \sigma R)^m\rangle.$$
As obviously $D_{m^2} \subseteq \langle GL_{n-1}(R, \sigma R)^m\rangle$, we deduce the following natural surjective maps:
\begin{align*}
&\varprojlim GL_{n-1}(R, \sigma R)/(D_{m^2}(GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_{m^2})))\\
&\twoheadrightarrow \varprojlim GL_{n-1}(R, \sigma R)/\langle GL_{n-1}(R, \sigma R)^m\rangle\\
&\twoheadrightarrow \widehat{GL_{n-1}}(R, \sigma R)\\
&\twoheadrightarrow \varprojlim GL_{n-1}(R, \sigma R)/(D_m(GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_m)))\\
&= \varprojlim GL_{n-1}(R, \sigma R)/(D_{m^2}(GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_{m^2})))
\end{align*}
such that the composition gives the identity map. Hence, these maps are also injective, and in particular, the map:
$$\widehat{GL_{n-1}}(R, \sigma R) \to \varprojlim GL_{n-1}(R, \sigma R)/(D_m(GL_{n-1}(R, \sigma R) \cap E_{n-1}(R, H_m)))$$
is injective, as required.
Proposition 4.6. Let $d \ge 3$. Then, the natural embedding $SL_d(R) \le GL_d(R)$ induces a natural isomorphism:
$$\ker(\widehat{SL_d}(R) \to SL_d(\widehat R)) \cong \ker(\widehat{GL_d}(R) \to GL_d(\widehat R)).$$
Proof. By Lemma 4.4 we have:
\begin{align*}
\ker(\widehat{GL_d}(R) \to GL_d(\widehat R) = \varprojlim GL_d(R/H_m)) &= \ker\big(\varprojlim GL_d(R)/D_m E_d(R, H_m) \to \varprojlim GL_d(R)/GL_d(R, H_m)\big)\\
&= \varprojlim GL_d(R, H_m)/D_m E_d(R, H_m).
\end{align*}
Notice now that under the map $R \to \mathbb{Z}_m[\mathbb{Z}_m]$, every $A \in GL_d(R, H_m)$ is mapped to $I_d$ and hence $\det(A) \mapsto 1$, which implies that $\det(A) = x^{k\cdot m}$ for some $k \in \mathbb{Z}$ (provided $m > 2$; when $m = 2$, $\det(A) = \pm x^{2k}$, but this exception does not influence the inverse limit below). Thus $GL_d(R, H_m) = D_m SL_d(R, H_m)$ (for every $m > 2$). Therefore, since $D_m \cap SL_d(R, H_m) = \{I_d\}$, we deduce that:
\begin{align*}
\ker(\widehat{GL_d}(R) \to GL_d(\widehat R)) &= \varprojlim D_m SL_d(R, H_m)/D_m E_d(R, H_m)\\
&= \varprojlim SL_d(R, H_m)/E_d(R, H_m) = \ker(\widehat{SL_d}(R) \to SL_d(\widehat R)).
\end{align*}
The immediate corollary from Propositions 4.5 and 4.6 is:
Corollary 4.7. For every $n \ge 4$, $C_{i_0} \cong \ker(\widehat{SL_{n-1}}(R) \to SL_{n-1}(\widehat R))$.
We close the section by showing that $\ker(\widehat{SL_{n-1}}(R) \to SL_{n-1}(\widehat R))$ is not finitely generated, using the techniques in [KN]. It is known that the group ring $\mathbb{Z}[x^{\pm1}] = \mathbb{Z}[\mathbb{Z}]$ is Noetherian (see [I], [BrLS]). In addition, it is known that $\dim(\mathbb{Z}) = 1$ and thus $\dim(\mathbb{Z}[\mathbb{Z}]) = 2$ (see [Sm]). Therefore, by Proposition 1.6 in [Su], as $n - 1 \ge 3$, for every $H \lhd R$, the canonical map:
$$SK_1(R, H; n-1) \to SK_1(R, H) := \varinjlim_{d \in \mathbb{N}} SK_1(R, H; d)$$
is surjective. Hence, the canonical map (where $H \lhd R$ ranges over all finite index ideals of $R$):
$$\ker(\widehat{SL_{n-1}}(R) \to SL_{n-1}(\widehat R)) = \varprojlim (SL_{n-1}(R, H)/E_{n-1}(R, H)) = \varprojlim SK_1(R, H; n-1) \to \varprojlim SK_1(R, H)$$
is surjective, so it is enough to show that $\varprojlim SK_1(R, H)$ is not finitely generated.
By a result of Bass (see [Bas], chapter 5, Corollary 9.3), for every $H \lhd K \lhd R$ of finite index in $R$, the map $SK_1(R, H) \to SK_1(R, K)$ is surjective. Hence, it is enough to show that for every $l \in \mathbb{N}$ there exists a finite index ideal $H \lhd R$ such that $SK_1(R, H)$ is generated by at least $l$ elements. Now, as $SK_1(R) = 1$ [Su], we obtain the following exact sequence for every $H \lhd R$ (see Theorem 6.2 in [Mi]):
$$K_2(R) \to K_2(R/H) \to SK_1(R, H) \to SK_1(R) = 1.$$
In addition, by a classical result of Quillen ([Q], [Ros] Theorem 5.3.30), we have:
$$K_2(R) = K_2(\mathbb{Z}[x^{\pm1}]) = K_2(\mathbb{Z}) \oplus K_1(\mathbb{Z})$$
so by the classical facts $K_2(\mathbb{Z}) = K_1(\mathbb{Z}) = \{\pm1\}$ (see [Mi], chapters 3 and 10) we deduce that $K_2(R)$ is of order 4. Hence, it is enough to prove that for every $l \in \mathbb{N}$ there exists a finite index ideal $H \lhd R$ such that $K_2(R/H)$ is generated by at least $l$ elements. Following [KN], we state the following proposition (by the proof of Theorem 2.8 in [SD]):
Proposition 4.8. Let $p$ be a prime, $l \in \mathbb{N}$ and denote by $P \lhd \mathbb{Z}[y]$ the ideal generated by $p^2$ and $y^{p^l}$. Then, for $\bar R = \mathbb{Z}[y]/P$, $K_2(\bar R)$ is an elementary abelian $p$-group of rank $\ge l$.
Observe now that for every $l \ge 0$:
$$(y + 1)^{p^{l+1}} = (y^{p^l} + 1 + p\cdot a(y))^p = 1 \bmod P$$
so $y + 1$ is invertible in $\bar R$. Therefore we have a well defined surjective homomorphism $R \to \bar R$ defined by sending $x \mapsto y + 1$. In particular, $H = \ker(R \to \bar R)$ is a finite index ideal of $R$ which satisfies the above requirements. This shows that indeed $\ker(\widehat{SL_{n-1}}(R) \to SL_{n-1}(\widehat R))$ is not finitely generated, and $C(IA(\Phi_n), \Phi_n)$ is not finitely generated either.
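The congruence $(y+1)^{p^{l+1}} \equiv 1 \bmod P$ underlying this construction is easy to check by machine; the following sympy sketch (our own, with small sample values of $p$ and $l$) verifies it:

```python
import sympy as sp

y = sp.symbols('y')
p, l = 3, 1                      # small sample values
e = p ** (l + 1)
poly = sp.Poly((y + 1) ** e, y)
# reduce modulo y**(p**l): keep coefficients of degree < p**l, then reduce mod p**2
coeffs = [poly.coeff_monomial(y ** k) % p ** 2 for k in range(p ** l)]
assert coeffs[0] == 1 and all(c == 0 for c in coeffs[1:])
print("(y+1)^(p^(l+1)) = 1 in Z[y]/(p^2, y^(p^l)) for p=%d, l=%d" % (p, l))
```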
5 The centrality of C_i
In this section we will prove that for every $n \ge 4$, the copies $C_i$ lie in the center of $\widehat{IA(\Phi_n)}$. Throughout the section we fix $n \ge 4$, and we will prove the statement for $i = n$; by symmetry, it is then valid for every $i$. To simplify notation we will denote $R = R_n = \mathbb{Z}[x_1^{\pm1}, \ldots, x_n^{\pm1}]$ and $H_m = H_{n,m} = \sum_{i=1}^n (x_i^m - 1)R_n + mR_n$ (note that in the previous section we used these notations for different objects, i.e. for $\mathbb{Z}[x^{\pm1}]$ and $(x^m - 1)\mathbb{Z}[x^{\pm1}] + m\mathbb{Z}[x^{\pm1}]$, respectively). Here we also simplify the notation and write $IA_m = IA_{n,m}$ and $IG_m = IG_{n,m}$. We saw in Section 4 that we can write:
$$C(IA(\Phi_n), \Phi_n) = \ker\big(\varprojlim (IA(\Phi_n)/IA_m) \to \varprojlim (IA(\Phi_n)/IG_{m^2})\big) = \varprojlim \big(IA_m \cdot IG_{m^2}/IA_m\big).$$
Similarly, we can write:
$$C_n = \varprojlim \big(IA_m \cdot (\mathrm{Im}\,\rho_n \cap IG_{m^2})/IA_m\big) \le \varprojlim \big(IA_m \cdot IG_{m^2}/IA_m\big) \le \varprojlim (IA(\Phi_n)/IA_m) = \widehat{IA(\Phi_n)}.$$
Hence, if we want to show that $C_n$ lies in the center of $\widehat{IA(\Phi_n)}$, it suffices to show that for every $m \in \mathbb{N}$, the group $IA_m \cdot (\mathrm{Im}\,\rho_n \cap IG_{m^2})/IA_m$ lies in the center of $IA(\Phi_n)/IA_m$.
Denote $\bar R = \mathbb{Z}[x_n^{\pm1}]$, and recall that $\mathrm{Im}\,\rho_n \cap IG_{n,m^2} \simeq GL_{n-1}(\bar R, \sigma_n H_{m^2} \cap \bar R)$ (see Proposition 3.7). We claim that under this isomorphism one has:
$$IA_m \cdot (\mathrm{Im}\,\rho_n \cap IG_{n,m^2})/IA_m = IA_m \cdot SL_{n-1}(\bar R, \sigma_n H_{m^2} \cap \bar R)/IA_m.$$
Indeed, if $\alpha \in \mathrm{Im}\,\rho_n \cap IG_{m^2}$ then $\det(\alpha) \in 1 + \sigma_n H_{m^2} \cap \bar R$. In particular, it is in $1 + H_{m^2} \cap \bar R$ so it must be of the form $x_n^{m^2 s}$ for some $s \in \mathbb{Z}$ (see Proposition 3.1). Consider now the element
$$(I_n + \sigma_n E_{1,1} - \sigma_1 E_{1,n})^{m^2 s} \in IA_m \cap GL_{n-1}(\bar R, \sigma_n H_{m^2} \cap \bar R).$$
Then by replacing $\alpha$ with $(I_n + \sigma_n E_{1,1} - \sigma_1 E_{1,n})^{-m^2 s}\cdot\alpha$ we deduce that $\mathrm{Im}\,\rho_n \cap IG_{n,m^2} \subseteq IA_m \cdot SL_{n-1}(\bar R, \sigma_n H_{m^2} \cap \bar R)$ and we get the above equality. It follows that if we want to show that $C_n$ lies in the center of $\widehat{IA(\Phi_n)}$ it suffices to show that $IA_m \cdot SL_{n-1}(\bar R, \sigma_n H_{m^2} \cap \bar R)/IA_m$ lies in the center of $\widehat{IA(\Phi_n)}$. However, we are going to show even more. We will show that
$$IA_m \cdot ISL_{n-1,n}(\sigma_n H_{m^2})/IA_m$$
lies in the center of $IA(\Phi_n)/IA_m$.
Let $F_n$ be the free group on $f_1, \ldots, f_n$. It is a classical result of Magnus ([MKS], Chapter 3, Theorem N4) that $IA(F_n)$ is generated by the automorphisms of the form:
$$\alpha_{r,s,t} = \begin{cases} f_r \mapsto [f_t, f_s]f_r & \\ f_u \mapsto f_u & u \neq r\end{cases}$$
where $1 \le r,\ s \neq t \le n$ (notice that we may have $r = s$). In their paper [BM], Bachmuth and Mochizuki show that for every $n \ge 4$, $IA(\Phi_n)$ is generated by the images of these generators under the natural map $\mathrm{Aut}(F_n) \to \mathrm{Aut}(\Phi_n)$, i.e. $IA(\Phi_n)$ is generated by the elements of the form:
$$E_{r,s,t} = I_n + \sigma_t E_{r,s} - \sigma_s E_{r,t}, \qquad 1 \le r,\ s \neq t \le n.$$
Therefore, to show the centrality of $C_n$, it is enough to show that given:
• an element $\bar\lambda \in IA_m \cdot ISL_{n-1,n}(\sigma_n H_{m^2})/IA_m$,
• and one of the generators $E_{r,s,t} = I_n + \sigma_t E_{r,s} - \sigma_s E_{r,t}$ for $1 \le r,\ s \neq t \le n$,
there exists a representative $\lambda \in ISL_{n-1,n}(\sigma_n H_{m^2})$ of $\bar\lambda$ such that $[E_{r,s,t}, \lambda] \in IA_m$. So assume that we have an element $\bar\lambda \in IA_m \cdot ISL_{n-1,n}(\sigma_n H_{m^2})/IA_m$. Then, a representative $\lambda \in ISL_{n-1,n}(\sigma_n H_{m^2})$ for $\bar\lambda$ has the form:
$$\lambda = \begin{pmatrix} I_{n-1} + \sigma_n B & -\sum_{i=1}^{n-1}\sigma_i \vec b_i\\ 0 & 1 \end{pmatrix}$$
for some $(n-1)\times(n-1)$ matrix $B$ whose entries $b_{i,j}$ satisfy $b_{i,j} \in H_{m^2}$, with column vectors denoted by $\vec b_i$. We now have the following proposition:
Proposition 5.1. Let $\bar\lambda \in IA_m \cdot ISL_{n-1,n}(\sigma_n H_{m^2})/IA_m$. Then, for every $1 \le l < k \le n-1$, $\bar\lambda$ has a representative in $ISL_{n-1,n}(\sigma_n H_{m^2})$ of the following form (the following notation means that the matrix is similar to the identity matrix, except the entries in the $l$-th and $k$-th rows):
$$\begin{pmatrix}
I_{l-1} & 0 & 0 & 0 & 0 & 0\\
0 & 1+\sigma_n a & 0 & \sigma_n b & 0 & -\sigma_l a - \sigma_k b\\
0 & 0 & I_{k-l-1} & 0 & 0 & 0\\
0 & \sigma_n c & 0 & 1+\sigma_n d & 0 & -\sigma_l c - \sigma_k d\\
0 & 0 & 0 & 0 & I_{n-k-1} & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix} \tag{5.1}$$
(the second and fourth block rows being the $l$-th and $k$-th rows) for some $a, b, c, d \in H_{m^2}$.
Proof. We will demonstrate the proof in the case $l = 1$, $k = 2$; symmetrically, the arguments hold for arbitrary $1 \le l < k \le n-1$. Consider an arbitrary representative of $\bar\lambda$:
$$\lambda = \begin{pmatrix} I_{n-1} + \sigma_n B & -\sum_{i=1}^{n-1}\sigma_i\vec b_i\\ 0 & 1\end{pmatrix} \in ISL_{n-1,n}(\sigma_n H_{m^2}).$$
Then $I_{n-1} + \sigma_n B \in SL_{n-1}(R, \sigma_n H_{m^2})$. Consider now the ideal:
$$R \rhd H'_{m^2} = \sum_{r=1}^{n-1}(x_r^{m^2} - 1)R + \sigma_n(x_n^{m^2} - 1)R + m^2 R.$$
Observe that $\sigma_n H_{m^2} \lhd H'_{m^2} \lhd H_{m^2} \lhd R$ and that $H'_{m^2} \cap \sigma_n R = \sigma_n H_{m^2}$. In addition, by similar computations as in the proof of Proposition 3.8, for every $x \in R$ we have $x^{m^4} - 1 \in (x-1)(x^{m^2}-1)R + (x-1)m^2R$, and thus $H_{m^4} \subseteq H'_{m^2}$, so $H'_{m^2}$ is of finite index in $R$.
Now, $I_{n-1} + \sigma_n B \in SL_{n-1}(R, \sigma_n H_{m^2}) \subseteq SL_{n-1}(R, H'_{m^2})$. Thus, by the third part of Corollary 2.3, as $H'_{m^2}$ is an ideal of finite index in $R$, $n - 1 \ge 3$ and $E_{n-1}(R) = SL_{n-1}(R)$ [Su], one can write the matrix $I_{n-1} + \sigma_n B$ as:
$$I_{n-1} + \sigma_n B = AD \quad\text{where } A = \begin{pmatrix}A' & 0\\ 0 & I_{n-3}\end{pmatrix}$$
for some $A' \in SL_2(R, H'_{m^2})$ and $D \in \overline{E}_{n-1}(R, H'_{m^2})$. Now, consider the images of $D$ and $A$ under the projection $\sigma_n \mapsto 0$, which we denote by $\bar D$ and $\bar A$, respectively. Observe that obviously $\bar D \in \overline{E}_{n-1}(R, H'_{m^2})$. In addition, observe that:
$$AD \in GL_{n-1}(R, \sigma_n R) \Rightarrow \bar A\bar D = I_{n-1}.$$
Thus, we have $I_{n-1} + \sigma_n B = (A\bar A^{-1})(\bar D^{-1}D)$. Therefore, by replacing $D$ by $\bar D^{-1}D$ and $A$ by $A\bar A^{-1}$ we can assume that:
$$I_{n-1} + \sigma_n B = AD \quad\text{for } A = \begin{pmatrix}A' & 0\\ 0 & I_{n-3}\end{pmatrix}$$
where $A' \in SL_2(R, H'_{m^2}) \cap GL_2(R, \sigma_n R) = SL_2(R, \sigma_n H_{m^2})$, and:
$$D \in \overline{E}_{n-1}(R, H'_{m^2}) \cap GL_{n-1}(R, \sigma_n R) \subseteq \overline{E}_{n-1}(R, H_{m^2}) \cap GL_{n-1}(R, \sigma_n R) = IE_{n-1,n}(H_{m^2}).$$
Now, as we prove in the main lemma (Lemma 7.1) that $IE_{n-1,n}(H_{m^2}) \subseteq \langle IA(\Phi_n)^m\rangle \subseteq IA_m$, this argument shows that $\lambda$ can be replaced by a representative of the form (5.1).
We now return to our initial mission. Let $\bar\lambda \in IA_m \cdot ISL_{n-1,n}(\sigma_n H_{m^2})/IA_m$, and let $E_{r,s,t} = I_n + \sigma_t E_{r,s} - \sigma_s E_{r,t}$ for $1 \le r,\ s \neq t \le n$ be one of the above generators of $IA(\Phi_n)$. We want to show that there exists a representative $\lambda \in ISL_{n-1,n}(\sigma_n H_{m^2})$ of $\bar\lambda$ such that $[E_{r,s,t}, \lambda] \in IA_m$. We separate the treatment into two cases.
The first case is $1 \le r \le n-1$. In this case one can take an arbitrary representative $\lambda \in ISL_{n-1,n}(\sigma_n H_{m^2}) \cong SL_{n-1}(R, \sigma_n H_{m^2})$. Considering the embedding of $IGL'_{n-1,n}$ in $GL_{n-1}(R)$, we have $E_{r,s,t} \in IGL'_{n-1,n} \subseteq GL_{n-1}(R)$. Thus, as by Corollary 2.3
$$SK_1(R, H_{m^2}; n-1) = SL_{n-1}(R, H_{m^2})/E_{n-1}(R, H_{m^2})$$
is central in $GL_{n-1}(R)/E_{n-1}(R, H_{m^2})$, we have:
$$[E_{r,s,t}, \lambda] \in [GL_{n-1}(R), SL_{n-1}(R, \sigma_n H_{m^2})] \subseteq E_{n-1}(R, H_{m^2}).$$
In addition, as $SL_{n-1}(R, \sigma_n H_{m^2}) \le GL_{n-1}(R, \sigma_n R)$ and $GL_{n-1}(R, \sigma_n R)$ is normal in $GL_{n-1}(R)$, we have:
$$[E_{r,s,t}, \lambda] \in [GL_{n-1}(R), GL_{n-1}(R, \sigma_n R)] \subseteq GL_{n-1}(R, \sigma_n R).$$
Thus, we obtain from Lemma 7.1 that:
$$[E_{r,s,t}, \lambda] \in E_{n-1}(R, H_{m^2}) \cap GL_{n-1}(R, \sigma_n R) = IE_{n-1,n}(H_{m^2}) \subseteq \langle IA(\Phi_n)^m\rangle \subseteq IA_m.$$
The second case is $r = n$. This case is a bit more complicated than the previous one, as $E_{r,s,t}$ is not in $IGL'_{n-1,n}$. Here, by Proposition 5.1 one can choose $\lambda \in ISL_{n-1,n}(\sigma_n H_{m^2})$ whose $t$-th row equals the standard vector $\vec e_t$. As $t \neq r = n$, we obtain that both $\lambda$ and $E_{r,s,t}$ are in the canonical embedding of $IGL'_{n-1,t}$ in $GL_{n-1}(R)$. Moreover, considering the above mentioned embedding of $IGL'_{n-1,t}$ in $GL_{n-1}(R)$, we have:
$$E_{r,s,t} \in GL_{n-1}(R, \sigma_t R), \qquad \lambda \in SL_{n-1}(R, H_{m^2}).$$
Note that when considering $\lambda \in IGL'_{n-1,n} \hookrightarrow GL_{n-1}(R)$, i.e. when considering $\lambda$ as an element of $GL_{n-1}(R)$ through the embedding of $IGL'_{n-1,n}$ in $GL_{n-1}(R)$, we have $\lambda \in GL_{n-1}(R, \sigma_n H_{m^2}) \le GL_{n-1}(R)$. However, when we consider $\lambda \in IGL'_{n-1,t} \hookrightarrow GL_{n-1}(R)$ we do not necessarily have $\lambda \in GL_{n-1}(R, \sigma_n H_{m^2})$, but we still have $\lambda \in GL_{n-1}(R, H_{m^2})$.
Thus, by similar arguments as in the first case:
$$[E_{r,s,t}, \lambda] \in [GL_{n-1}(R, \sigma_t R), SL_{n-1}(R, H_{m^2})] \subseteq E_{n-1}(R, H_{m^2}) \cap GL_{n-1}(R, \sigma_t R) = IE_{n-1,t}(H_{m^2}) \subseteq \langle IA(\Phi_n)^m\rangle \subseteq IA_m.$$
This finishes the argument which shows that the $C_i$ are central in $\widehat{IA(\Phi_n)}$.
6 Some elementary elements of ⟨IA(Φ_n)^m⟩
In this section we compute some elements of $\langle IA(\Phi_n)^m\rangle$ which are needed throughout the proof of the main lemma. In addition to the previous notations, in this section, and also later on, we will use the notation:
$$\sigma_{r,m} = x_r^m - 1 \quad\text{for } 1 \le r \le n.$$
Proposition 6.1. Let $n \ge 4$, $1 \le u \le n$ and $m \in \mathbb{N}$. Denote by $\vec e_i$ the $i$-th row standard vector. Then, the elements of $IA(\Phi_n)$ of the form (the following notation means that the matrix is similar to the identity matrix, except for the entries in the $u$-th row):
$$\begin{pmatrix} I_{u-1} & 0 & 0\\ a_{u,1}\cdots a_{u,u-1} & 1 & a_{u,u+1}\cdots a_{u,n}\\ 0 & 0 & I_{n-u}\end{pmatrix}$$
where $(a_{u,1}, \ldots, a_{u,u-1}, 0, a_{u,u+1}, \ldots, a_{u,n})$ is a linear combination of the vectors:
1. $\{m(\sigma_i\vec e_j - \sigma_j\vec e_i) \mid i, j \neq u,\ i \neq j\}$
2. $\{\sigma_{k,m}(\sigma_i\vec e_j - \sigma_j\vec e_i) \mid i, j, k \neq u,\ i \neq j\}$
3. $\{\sigma_u\sigma_{u,m}(\sigma_i\vec e_j - \sigma_j\vec e_i) \mid i, j \neq u,\ i \neq j\}$
with coefficients in $R_n$, belong to $\langle IA(\Phi_n)^m\rangle$.
Proof. Without loss of generality, we assume that $u = 1$. Observe now that for every $a_i, b_i \in R_n$ for $2 \le i \le n$ one has:
$$\begin{pmatrix}1 & a_2 \cdots a_n\\ 0 & I_{n-1}\end{pmatrix}\begin{pmatrix}1 & b_2 \cdots b_n\\ 0 & I_{n-1}\end{pmatrix} = \begin{pmatrix}1 & a_2+b_2 \cdots a_n+b_n\\ 0 & I_{n-1}\end{pmatrix}.$$
Hence, it is enough to prove that the elements of the following forms belong to $\langle IA(\Phi_n)^m\rangle$ (when we write $a\vec e_i$ we mean that the entry of the $i$-th column in the first row is $a$):
1. $\begin{pmatrix}1 & mf(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}$, $i, j \neq 1$, $i \neq j$, $f \in R_n$
2. $\begin{pmatrix}1 & \sigma_{k,m}f(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}$, $i, j, k \neq 1$, $i \neq j$, $f \in R_n$
3. $\begin{pmatrix}1 & \sigma_1\sigma_{1,m}f(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}$, $i, j \neq 1$, $i \neq j$, $f \in R_n$
We start with the elements of form 1. Here we have:
$$\begin{pmatrix}1 & mf(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix} = \begin{pmatrix}1 & f(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}^m \in \langle IA(\Phi_n)^m\rangle.$$
We pass to the elements of form 2. In this case we have:
$$\langle IA(\Phi_n)^m\rangle \ni \left[\begin{pmatrix}1 & f(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}^{-1},\ \begin{pmatrix}x_k & -\sigma_1\vec e_k\\ 0 & I_{n-1}\end{pmatrix}^{m}\right] = \begin{pmatrix}1 & \sigma_{k,m} f(\sigma_i\vec e_j - \sigma_j\vec e_i)\\ 0 & I_{n-1}\end{pmatrix}.$$
We finish with the elements of form 3. The computation here is more complicated than in the previous cases, so we will demonstrate it for the special case $n = 4$, $i = 2$, $j = 3$. It is clear that symmetrically, with a similar argument, the same holds in general when $n \ge 4$ for every $i, j \neq 1$, $i \neq j$. By similar arguments as in the previous case we get:
$$\langle IA(\Phi_4)^m\rangle \ni \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & \sigma_3\sigma_{1,m}f & -\sigma_2\sigma_{1,m}f & 1\end{pmatrix}.$$
Therefore, we also have:
$$\langle IA(\Phi_4)^m\rangle \ni \left[\begin{pmatrix}x_4 & 0 & 0 & -\sigma_1\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix},\ \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & \sigma_3\sigma_{1,m}f & -\sigma_2\sigma_{1,m}f & 1\end{pmatrix}\right] = \begin{pmatrix}1 & -\sigma_3\sigma_1\sigma_{1,m}f & \sigma_2\sigma_1\sigma_{1,m}f & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix},$$
which is the required element of form 3, since $\langle IA(\Phi_4)^m\rangle$ is normal in $IA(\Phi_4)$.
7 A main lemma
In this section we prove the main lemma which asserts that:
Lemma 7.1. For every $n \ge 4$, $m \in \mathbb{N}$ and $1 \le i \le n$ we have:
$$IE_{n-1,i}(H_{n,m^2}) \subseteq \langle IA(\Phi_n)^m\rangle.$$
To simplify the proof and the notation, we will prove the lemma for the special case $i = n$; by symmetry, all the arguments are valid for every $1 \le i \le n$. Also in this section $n$ is fixed, so we can use the notations $IA_m = \langle IA(\Phi_n)^m\rangle$, $R = R_n$ and $H_m = H_{n,m} = \sum_{r=1}^n U_{r,m} + O_m$ where $U_{r,m} = \sigma_{r,m}R$ and $O_m = mR$.
In addition, using the isomorphism $IGL_{n-1,n} \cong GL_{n-1}(R, \sigma_n R)$ (Proposition 3.4), we will identify $IGL_{n-1,n}$ with $GL_{n-1}(R, \sigma_n R)$, and $IE_{n-1,n}(H_m)$ with $GL_{n-1}(R, \sigma_n R) \cap \overline{E}_{n-1}(R, H_m)$. So the goal of this section is to prove that
$$GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, H_{m^2}) \subseteq IA_m.$$
Throughout the proof we also use elements of $IGL'_{n-1,n}$; as $GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, H_{m^2}) \le GL_{n-1}(R, \sigma_n R) \le IGL'_{n-1,n} \le GL_{n-1}(R)$ (Proposition 3.11), in this section we will make all the computations in $GL_{n-1}(R)$, and omit the $n$-th row and column from each matrix.
7.1 Decomposing the proof
In this subsection we begin the proof of Lemma 7.1. At the end of the subsection a few tasks will remain, which will be accomplished in the forthcoming subsections. We start with the following definition:
Definition 7.2. For every $m \in \mathbb{N}$, define the following ideal of $R$:
$$T_m = \sum_{r=1}^n \sigma_r^2 U_{r,m} + \sum_{r=1}^n \sigma_r O_m + O_m^2.$$
By the observation that for every $x \in R$ we have $\sum_{j=0}^{m-1} x^j \in (x-1)R + mR$, one has the following computation:
\begin{align*}
x^{m^2} - 1 &= (x-1)\sum_{j=0}^{m^2-1} x^j = (x-1)\Big(\sum_{j=0}^{m-1} x^j\Big)\Big(\sum_{j=0}^{m-1} x^{jm}\Big)\\
&\in (x-1)((x-1)R + mR)((x^m-1)R + mR)\\
&\subseteq (x-1)^2(x^m-1)R + (x-1)^2 mR + (x-1)m^2 R
\end{align*}
which shows that $H_{m^2} \subseteq T_m$. Hence, it is enough to prove that:
$$GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, T_m) \subseteq IA_m.$$
Equivalently, it is enough to prove that the group:
$$(GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, T_m)) \cdot IA_m/IA_m$$
is trivial.
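The polynomial identity behind this computation, $x^{m^2}-1 = (x-1)\big(\sum_{j<m}x^j\big)\big(\sum_{j<m}x^{jm}\big)$, can be confirmed mechanically; here is a small sympy check (ours) for a few values of $m$:

```python
import sympy as sp

x = sp.symbols('x')
for m in (2, 3, 4):
    lhs = x ** (m ** 2) - 1
    rhs = (x - 1) * sum(x ** j for j in range(m)) * sum(x ** (j * m) for j in range(m))
    assert sp.expand(lhs - rhs) == 0
print("x^(m^2) - 1 = (x-1)(sum x^j)(sum x^(jm)) verified for m = 2, 3, 4")
```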
We continue with the following lemma, which is in fact a lemma of Suslin (Corollary 1.4 in [Su]) with some elaborations of Bachmuth and Mochizuki [BM]:
Lemma 7.3. Let $R$ be a commutative ring, $d \ge 3$, and $H \lhd R$ an ideal. Then, $E_d(R, H)$ is generated by the matrices of the form:
$$(I_d - fE_{i,j})(I_d + hE_{j,i})(I_d + fE_{i,j}) \tag{7.1}$$
for $h \in H$, $f \in R$ and $1 \le i \neq j \le d$.
Proof. By Suslin's proof of Corollary 1.4 in [Su], and the remark which follows Proposition 3.5 in [BM], $E_d(R, H)$ is generated by the matrices of the form:
$$I_d + h(f_1\vec e_i + f_2\vec e_j)^t(f_2\vec e_i - f_1\vec e_j)$$
for $f_1, f_2 \in R$, $h \in H$ and $1 \le i \neq j \le d$. So it is enough to show that these matrices are generated by the matrices of the form (7.1). We will show it for the case $i, j, d = 1, 2, 3$ and it will be clear that the general argument is similar. So we have the matrix:
$$I_3 + h(f_1\vec e_1 + f_2\vec e_2)^t(f_2\vec e_1 - f_1\vec e_2) = \begin{pmatrix}1 + hf_1f_2 & -hf_1^2 & 0\\ hf_2^2 & 1 - hf_1f_2 & 0\\ 0 & 0 & 1\end{pmatrix}$$
for some $f_1, f_2 \in R$ and $h \in H$, which equals:
$$= \begin{pmatrix}1 & 0 & -hf_1\\ 0 & 1 & -hf_2\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & hf_1\\ 0 & 1 & hf_2\\ -f_2 & f_1 & 1\end{pmatrix}
= \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & -hf_1\\ 0 & 1 & -hf_2\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -f_2 & f_1 & 1\end{pmatrix}\cdot\begin{pmatrix}1 & 0 & hf_1\\ 0 & 1 & hf_2\\ 0 & 0 & 1\end{pmatrix}.$$
As the matrix
$$\begin{pmatrix}1 & 0 & hf_1\\ 0 & 1 & hf_2\\ 0 & 0 & 1\end{pmatrix} = (I_3 + hf_1E_{1,3})(I_3 + hf_2E_{2,3})$$
is generated by the matrices of the form (7.1), it remains to show that
$$\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & -hf_1\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -f_2 & f_1 & 1\end{pmatrix}\cdot\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & -hf_2\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -f_2 & f_1 & 1\end{pmatrix}$$
is generated by the matrices of the form (7.1). Now:
$$\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & -hf_1\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -f_2 & f_1 & 1\end{pmatrix} = \begin{pmatrix}1 + hf_1f_2 & -hf_1^2 & -hf_1\\ 0 & 1 & 0\\ hf_1f_2^2 & -hf_1^2f_2 & 1 - hf_1f_2\end{pmatrix}$$
$$= \begin{pmatrix}1 & -hf_1^2 & 0\\ 0 & 1 & 0\\ 0 & -hf_1^2f_2 & 1\end{pmatrix}\begin{pmatrix}1 + hf_1f_2 & 0 & -hf_1\\ 0 & 1 & 0\\ hf_1f_2^2 & 0 & 1 - hf_1f_2\end{pmatrix}.$$
The first factor is a product of elementary matrices with entries in $H$, and the second factor equals
$$(I_3 + f_2E_{3,1})(I_3 - hf_1E_{1,3})(I_3 - f_2E_{3,1}),$$
so both are generated by the matrices of the form (7.1); and by a similar computation,
$$\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ f_2 & -f_1 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & -hf_2\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -f_2 & f_1 & 1\end{pmatrix}$$
is generated by these matrices as well.
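The key decomposition in the proof above, namely the generator of Suslin's lemma written as $CP^{-1}C^{-1}P$ with $C = I + f_2E_{3,1} - f_1E_{3,2}$ and $P = I + hf_1E_{1,3} + hf_2E_{2,3}$, can be verified symbolically; the following sympy sketch (ours) does so:

```python
import sympy as sp

f1, f2, h = sp.symbols('f1 f2 h')
I = sp.eye(3)
def E(i, j):
    M = sp.zeros(3)
    M[i - 1, j - 1] = 1
    return M

u = sp.Matrix([f1, f2, 0])      # column vector f1*e1 + f2*e2
v = sp.Matrix([[f2, -f1, 0]])   # row vector f2*e1 - f1*e2
M = I + h * u * v               # the generator from Suslin's lemma

C = I + f2 * E(3, 1) - f1 * E(3, 2)
P = I + h * f1 * E(1, 3) + h * f2 * E(2, 3)
assert sp.simplify(M - C * P.inv() * C.inv() * P) == sp.zeros(3)
print("decomposition of Lemma 7.3 verified")
```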
We proceed with the following lemma:
Lemma 7.4. Let $n \ge 4$. Then, every element of $GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, T_m)$ can be decomposed as a product of elements of the following four forms:
1. $A^{-1}(I_{n-1} + hE_{i,j})A$, $h \in \sigma_n O_m$
2. $A^{-1}(I_{n-1} + hE_{i,j})A$, $h \in \sigma_n^2 U_{n,m},\ \sigma_n\sigma_r^2 U_{r,m}$ for $1 \le r \le n-1$
3. $A^{-1}[(I_{n-1} + hE_{i,j}), (I_{n-1} + fE_{j,i})]A$, $h \in \bar O_m^2$, $f \in \sigma_n R$
4. $A^{-1}[(I_{n-1} + hE_{i,j}), (I_{n-1} + fE_{j,i})]A$, $h \in \sigma_r^2\bar U_{r,m},\ \sigma_r\bar O_m$ for $1 \le r \le n-1$, $f \in \sigma_n R$
where $A \in GL_{n-1}(R)$, $i \neq j$, and $\bar U_{r,m}, \bar O_m$ are the projections of $U_{r,m}, O_m$ to $R_{n-1} = \mathbb{Z}[x_1^{\pm1}, \ldots, x_{n-1}^{\pm1}]$ induced by the projection $\sigma_n \mapsto 0$.
Remark 7.5. Notice that as $GL_{n-1}(R, \sigma_n R)$ is normal in $GL_{n-1}(R)$, every element of the above forms is an element of $GL_{n-1}(R, \sigma_n R) \cong IGL_{n-1,n} \le IA(\Phi_n)$.
Proof (of Lemma 7.4). Let $B \in GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, T_m)$. We first claim that for proving the lemma, it is enough to show that $B$ can be decomposed as a product of the elements in the lemma (Lemma 7.4) and arbitrary elements in $GL_{n-1}(R_{n-1})$. Indeed, assume that we can write $B = C_1D_1\cdots C_nD_n$ for some $D_i$ of the forms in the lemma and $C_i \in GL_{n-1}(R_{n-1})$ (notice that $C_1$ or $D_n$ might be equal to $I_{n-1}$). Observe now that we can therefore write:
$$B = (C_1D_1C_1^{-1}) \cdot \ldots \cdot ((C_1\cdots C_n)D_n(C_1\cdots C_n)^{-1}) \cdot (C_1\cdots C_n)$$
and by definition, the conjugates of the $D_i$-s can also be considered as of the forms in the lemma. On the other hand, we have:
$$C_1\cdots C_n = ((C_1\cdots C_n)D_n^{-1}(C_1\cdots C_n)^{-1}) \cdot \ldots \cdot (C_1D_1^{-1}C_1^{-1}) \cdot B$$
and as the matrices of the forms in the lemma are all in $GL_{n-1}(R, \sigma_n R)$ (by Remark 7.5) we deduce that:
$$C_1\cdots C_n \in GL_{n-1}(R, \sigma_n R) \cap GL_{n-1}(R_{n-1}) = \{I_{n-1}\},$$
i.e. $C_1\cdots C_n = I_{n-1}$. Hence:
$$B = (C_1D_1C_1^{-1}) \cdot \ldots \cdot ((C_1\cdots C_n)D_n(C_1\cdots C_n)^{-1}),$$
i.e. $B$ is a product of matrices of the forms in the lemma, as required.
So let $B \in GL_{n-1}(R, \sigma_n R) \cap E_{n-1}(R, T_m)$. According to Lemma 7.3, as $B \in E_{n-1}(R, T_m)$ and $n - 1 \ge 3$, we can write $B$ as a product of elements of the form:
$$(I_{n-1} - fE_{i,j})(I_{n-1} + hE_{j,i})(I_{n-1} + fE_{i,j})$$
for some $f \in R$, $h \in T_m = \sum_{r=1}^n \sigma_r^2 U_{r,m} + \sum_{r=1}^n \sigma_r O_m + O_m^2$ and $1 \le i \neq j \le n-1$. We will now show that every element of the above form can be written as a product of the elements of the forms in the lemma and elements of $GL_{n-1}(R_{n-1})$. Observe first that as:
$$T_m = \sum_{r=1}^n \sigma_r^2 U_{r,m} + \sum_{r=1}^n \sigma_r O_m + O_m^2 \subseteq \sigma_n\Big(\sum_{r=1}^{n-1}\sigma_r^2 U_{r,m} + \sigma_n U_{n,m} + O_m\Big) + \Big(\sum_{r=1}^{n-1}\sigma_r^2\bar U_{r,m} + \sum_{r=1}^{n-1}\sigma_r\bar O_m + \bar O_m^2\Big)$$
we can decompose $h = \sigma_n h_1 + h_2$ for some $h_1 \in \sum_{r=1}^{n-1}\sigma_r^2 U_{r,m} + \sigma_n U_{n,m} + O_m$ and $h_2 \in \sum_{r=1}^{n-1}\sigma_r^2\bar U_{r,m} + \sum_{r=1}^{n-1}\sigma_r\bar O_m + \bar O_m^2$. Therefore, we can write:
$$(I_{n-1} - fE_{i,j})(I_{n-1} + hE_{j,i})(I_{n-1} + fE_{i,j}) = (I_{n-1} - fE_{i,j})(I_{n-1} + \sigma_n h_1E_{j,i})(I_{n-1} + fE_{i,j}) \cdot (I_{n-1} - fE_{i,j})(I_{n-1} + h_2E_{j,i})(I_{n-1} + fE_{i,j}).$$
Thus, as the matrix $(I_{n-1} - fE_{i,j})(I_{n-1} + \sigma_n h_1E_{j,i})(I_{n-1} + fE_{i,j})$ is clearly a product of elements of forms 1 and 2 in the lemma, it is enough to deal with the matrix:
$$(I_{n-1} - fE_{i,j})(I_{n-1} + h_2E_{j,i})(I_{n-1} + fE_{i,j})$$
when $h_2 \in \sum_{r=1}^{n-1}\sigma_r^2\bar U_{r,m} + \sum_{r=1}^{n-1}\sigma_r\bar O_m + \bar O_m^2$. Let us now write $f = \sigma_n f_1 + f_2$ for some $f_1 \in R$ and $f_2 \in R_{n-1}$, and write:
$$(I_{n-1} - fE_{i,j})(I_{n-1} + h_2E_{j,i})(I_{n-1} + fE_{i,j}) = (I_{n-1} - f_2E_{i,j})(I_{n-1} - \sigma_n f_1E_{i,j})(I_{n-1} + h_2E_{j,i})(I_{n-1} + \sigma_n f_1E_{i,j})(I_{n-1} + f_2E_{i,j}).$$
Now, as $(I_{n-1} \pm f_2E_{i,j}) \in GL_{n-1}(R_{n-1})$, it is enough to deal with the element:
$$(I_{n-1} - \sigma_n f_1E_{i,j})(I_{n-1} + h_2E_{j,i})(I_{n-1} + \sigma_n f_1E_{i,j})$$
which can be written as a product of elements of the form:
$$(I_{n-1} - \sigma_n f_1E_{i,j})(I_{n-1} + kE_{j,i})(I_{n-1} + \sigma_n f_1E_{i,j}), \qquad k \in \bar O_m^2,\ \sigma_r^2\bar U_{r,m},\ \sigma_r\bar O_m, \text{ for } 1 \le r \le n-1.$$
Finally, as for every such $k$ one can write:
$$(I_{n-1} - \sigma_n f_1E_{i,j})(I_{n-1} + kE_{j,i})(I_{n-1} + \sigma_n f_1E_{i,j}) = (I_{n-1} + kE_{j,i})[(I_{n-1} - kE_{j,i}), (I_{n-1} - \sigma_n f_1E_{i,j})]$$
and $(I_{n-1} + kE_{j,i}) \in GL_{n-1}(R_{n-1})$, we are done.
Corollary 7.6. For proving Lemma 7.1, it is enough to show that every element
in the forms of Lemma 7.4 is in IAm .
We start by dealing with the elements of form 1:
Proposition 7.7. The elements of the following form are in $IA_m$:
$$A^{-1}(I_{n-1} + hE_{i,j})A, \quad\text{for } A \in GL_{n-1}(R),\ h \in \sigma_n O_m \text{ and } i \neq j.$$
Proof. In this case we can write $h = \sigma_n mh'$ for some $h' \in R$. So as
$$A^{-1}(I_{n-1} + \sigma_n h'E_{i,j})A \in GL_{n-1}(R, \sigma_n R) \le IA(\Phi_n)$$
we obtain that:
$$A^{-1}(I_{n-1} + hE_{i,j})A = A^{-1}(I_{n-1} + \sigma_n mh'E_{i,j})A = \big(A^{-1}(I_{n-1} + \sigma_n h'E_{i,j})A\big)^m \in IA_m,$$
as required.
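The $m$-th power identity used in this proof is elementary; for completeness, here is a one-line sympy check (ours, with a sample exponent):

```python
import sympy as sp

s_n, h = sp.symbols('sigma_n hprime')
m = 5                                # a sample exponent
X = sp.eye(3)
X[0, 1] = s_n * h                    # I + sigma_n h' E_{1,2}
expected = sp.eye(3)
expected[0, 1] = m * s_n * h         # I + sigma_n (m h') E_{1,2}
assert X ** m == expected
print("power identity verified for m =", m)
```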
We will devote the remaining subsections to the elements of the three other forms. In these cases the proof will be much more difficult, and we will need the help of the following computations.
7.2 Some needed computations
Proposition 7.8. For every f, g ∈ R we have
1 − f g −f g
fg
1 + fg
0
0
the following equalities:
0
0
1
1
0 0
= fg 1 0
f g2 0 1
1 −f g
1
· 0
0 −f g 2
1
0
0
0 1 + fg
−f
2
0
fg
1 − fg
0
1 − fg 0
f
1 0
0 1
0
0
1
0
1
−f g 2 0 1 + f g
0 0
1
0 0
fg
1 0
−f g 2 0 1
1 −f g 0
1
0
· 0
0 f g2 1
=
1
0
f
1
· 0
0
=
=
1
0
0
0
1 + fg
−f g 2
1 − fg
0
f g2
0
1 − fg 0
0
0
1
1
−f
0
0
0
1 + fg
f g2
−f
1 − fg
0
1
f
1
0 0
1 − fg
0
1 0
0
−f −f 1
f
1
0
0
· 0 1 + f g −f g 2
0
f
1 − fg
0
1
0
0
f
1 − fg
−f
1
0
0
1 + fg
0
f g2
1 0
fg 1
0
1 + fg
0 0
2
1 −f g −f g
0
1
0
0
0
1
0 −f g 2
1
fg
1
0
0 1 + fg
0
2
1 −f g f g
0
1
0
0
0
1
Proof. Here is the computation for Equation 7.2:
1 − fg
−f g 0
1 0 f
fg
1 + f g 0 = 0 1 −f
0
0
1
g g 1
1 0 0
1 0 f
1
0 0
1 0
= 0 1 0 0 1 −f 0
g g 1
0 0 1
−g −g 1
−f
f
1
(7.3)
0 f
1 −f
0 1
0
−f g 2
1
(7.4)
0
0
1 f g2
0
1
(7.5)
1
0 −f
0
1
f
−g −g 1
1 0 −f
0 1 f
0 0 1
1 0 0
1 0 0
1
0 0
1 0
= 0 1 0 0 1 −f 0
g g 1
0 0 1
−g −g 1
1 0 0
1 0 f
1
0 0
1 0
1 0 0 1
· 0 1 0 0 1 0 0
g g 1
0 0 1
−g −g 1
0 0
(7.2)
−f
f
1
1
= fg
f g2
=
0
1 − fg
−f
0
1 − fg
−f g 2
0
1 + fg
f g2
1
0 0
fg 1 0
f g2 0 1
1 −f g
1
· 0
0 −f g 2
−f g
1
−f g 2
f
1
0
0
1 + fg
0
0 −f
1 f
0 1
1
0
0
0 1 + fg
−f
2
0
fg
1 − fg
0
1 − fg 0
f
1
0
0
0
1
0
1
−f g 2 0 1 + f g
0
0 −f
1 f .
0 1
Equation 7.3 is obtained similarly by changing the signs of f and g simultaneously. Here is the computation for Equation 7.4:
1 − fg
−f g 0
1 0 g
1
0 −g
fg
1 + f g 0 = 0 1 −g 0
1
g
0
0
1
f f 1
−f −f 1
1 0 0
1 0 g
1
0 0
1 0 −g
1 0 0 1 g
= 0 1 0 0 1 −g 0
f f 1
0 0 1
−f −f 1
0 0 1
=
1
0
f
1
· 0
0
1
= 0
f
=
0
1 0
0 0 1
1
0 0
g
1
−g 0
1
0
1 − fg
fg
−f
0
1
f
0
1
f
1
0
f
1
· 0
0
0
1
0
0
0
1
g
1 0
−g 0 1
1
−f 0
0 0
1 0
1 0 0 1
−f 1
0 0
0
f g2
1
1 −f g 2 0
0 1 + fg
0
0
1 − fg 0
0
0
1
1
−f
0
0
0
1 + fg
f g2
−f
1 − fg
0
1
f
0
1
0 0
1
0
−g
g
1
−f g
1 + fg
−f
f g2
1 0
fg 1
0
1 + fg
0 0
2
1 −f g −f g
0
1
0
0
0
1
0
1
0
−g
g
1
−f g 2
f g2
1 − fg
0
−f g 2
1
and Equation 7.5 is obtained similarly by changing the signs of f and g simultaneously.
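The opening step of the computation for Equation 7.2, which writes the left-hand side as a commutator of two of the displayed factors, can be verified symbolically; the following sympy sketch (ours) checks it:

```python
import sympy as sp

f, g = sp.symbols('f g')
I = sp.eye(3)
def E(i, j):
    M = sp.zeros(3)
    M[i - 1, j - 1] = 1
    return M

M = sp.Matrix([[1 - f*g, -f*g, 0], [f*g, 1 + f*g, 0], [0, 0, 1]])
C = I + g * E(3, 1) + g * E(3, 2)    # the bottom-row matrix of the computation
X = I + f * E(1, 3) - f * E(2, 3)    # the column matrix with entries f, -f
assert sp.simplify(M - C * X * C.inv() * X.inv()) == sp.zeros(3)
print("opening commutator identity of Equation 7.2 verified")
```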
In the following corollary, a $3 \times 3$ matrix $B \in GL_3(R)$ denotes the block matrix $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$.
Corollary 7.9. Let $n \ge 4$, $f \in \sigma_n\big(\sum_{r=1}^{n-1}\sigma_r U_{r,m} + U_{n,m} + O_m\big)$ and $g \in R$. Then, mod $IA_m$ we have the following equalities (the indices are intended to help us later to recognize forms of matrices: form 7, form 12, etc.):
1 − f g −f g 0
fg
1 + fg 0
(7.6)
0
0
1 13
1
0
0
1 − fg 0
f
−f
0
1
0
≡ 0 1 + fg
2
2
0
fg
1 − fg 1
−f g
0 1 + fg 2
1
0
0
1 − fg 0
−f
f
0
1
0
≡ 0 1 + fg
0 −f g 2 1 − f g 3
f g2
0 1 + fg 4
1 − fg 0
f g2
1
0
0
0 1 + fg
0
1
0
f g2
≡
−f
0 1 + fg 5
0
−f
1 − fg 6
2
1 − f g 0 −f g
1
0
0
0 1 + f g −f g 2
0
1
0
≡
f
0 1 + fg 7
0
f
1 − fg 8
and:
1 − fg
0
≡
f
fg
0
1 + fg 0
0
1 14
2
−f g
1
0
0 1 + fg
0
1 + fg 7
0
−f
1 − fg
−f g
0
0
1
0
(7.7)
0
f g2 .
1 − fg 6
Moreover (the inverse of a matrix is denoted by the same index - one can
observe that the inverse of each matrix in these equations is obtained by changing
the sign of f ):
1 − f g 0 −f g
0
1
0
(7.8)
fg
0 1 + f g 15
1
0
0
1 − fg
f
0
f g 2 −f g 2 1 + f g 1
≡ 0 1 − fg
0
−f
1 + fg 8
0
0
0 9
1
0
0
1 − fg
−f
0
1 + fg 0
≡ 0 1 − f g −f g 2 f g 2
0
f
1 + fg 6
0
0
1 10
1 − fg
f g2
0
1
0
0
1 + fg 0 0 1 − fg
−f
≡ −f
2
0
0
1 11
0
fg
1 + fg 3
and:
1 − fg
f
≡
0
and:
1 − fg
f
≡
0
0
1
0 0
1 12
0
0
fg
1
0
0 1 + f g 16
0
1
0
0 0 1 − fg
1 12
0
f g2
0
1 − fg
−f g 2
1 − fg
0
−f g
−f g 2
1 + fg
0
0
f
1 + fg 1
(7.9)
0
−f
1 + fg 3
and:
−f g 2
1 + fg
0
1 − fg
0
≡
−f
1
0
0
0 1 − fg
−f g
0
fg
1 + f g 17
0
f g2
1 + f g −f g 2
1
0
f
1 − fg
0 1 + fg 5
0
0
(7.10)
0
1
0 11
1 + fg
≡ −f g 2
0
1
0
0
0 1 − fg
fg
0 −f g 1 + f g 18
f
0
1 − fg 0
1 − fg 0
0
1
0
1 10
f g2
0
(7.11)
−f
0
1 + fg 4
Remark 7.10. We remark that as $f \in \sigma_n R$, every matrix which takes part in the above equalities is indeed in $GL_{n-1}(R, \sigma_n R) \cong IGL_{n-1,n} \le IA(\Phi_n)$.
Proof. As $f \in \sigma_n\big(\sum_{r=1}^{n-1}\sigma_r U_{r,m} + U_{n,m} + O_m\big)$, Equation 7.6 is obtained by applying Proposition 7.8 combined with Proposition 6.1. Equation 7.7 is obtained similarly by transposing all the computations which led to the first part of Equation 7.6. Similarly, by switching the roles of the second row and column with the third row and column, one obtains Equations 7.8 and 7.9. By switching one more time the roles of the first row and column with the second row and column, we obtain Equations 7.10 and 7.11 as well.
7.3 Elements of form 2
Proposition 7.11. The elements of the following form belong to $IA_m$:
$$A^{-1}(I_{n-1} + hE_{i,j})A$$
where $A \in GL_{n-1}(R)$, $h \in \sigma_n\sigma_r^2 U_{r,m},\ \sigma_n^2 U_{n,m}$ for $1 \le r \le n-1$ and $i \neq j$.
Notice that for every $n \ge 4$, the groups $E_{n-1}(\sigma_n^2 U_{n,m})$ and $E_{n-1}(\sigma_n\sigma_r^2 U_{r,m})$ for $1 \le r \le n-1$ are normal in $GL_{n-1}(R)$, and thus all the above elements are in $E_{n-1}(\sigma_n^2 U_{n,m})$ and $E_{n-1}(\sigma_n\sigma_r^2 U_{r,m})$. Hence, for proving Proposition 7.11, it is enough to show that for every $1 \le r \le n-1$ we have $E_{n-1}(\sigma_n^2 U_{n,m}), E_{n-1}(\sigma_n\sigma_r^2 U_{r,m}) \subseteq IA_m$. Therefore, by Lemma 7.3, for proving Proposition 7.11 it is enough to show that the elements of the following form are in $IA_m$:
$$(I_{n-1} - fE_{j,i})(I_{n-1} + hE_{i,j})(I_{n-1} + fE_{j,i})$$
when $h \in \sigma_n\sigma_r^2 U_{r,m},\ \sigma_n^2 U_{n,m}$ for $1 \le r \le n-1$, $f \in R$ and $i \neq j$. We will prove this in a few stages, starting with the following lemma.
Lemma 7.12. Let h ∈ σn σr Ur,m , σn Un,m for 1 ≤ r ≤ n − 1 and f1 , f2 ∈ R.
Assume that the elements of the forms:
(In−1 ± f1 Ej,i ) (In−1 + hEi,j ) (In−1 ∓ f1 Ej,i )
(In−1 ± f2 Ej,i ) (In−1 + hEi,j ) (In−1 ∓ f2 Ej,i )
for every 1 ≤ i 6= j ≤ n − 1, belong to IAm . Then, the elements of the form:
(In−1 ± (f1 + f2 ) Ej,i ) (In−1 + hEi,j ) (In−1 ∓ (f1 + f2 ) Ej,i )
for 1 ≤ i 6= j ≤ n − 1, also belong to IAm .
Proof. Observe first that by Proposition 6.1, all the matrices of the form $I_{n-1} + hE_{i,j}$ for $h \in \sigma_n\sigma_r U_{r,m},\ \sigma_n U_{n,m}$ belong to $IA_m$. We will use this in the following computations. Without loss of generality, we will show that for $i, j = 2, 1$ we have:
$$(I_{n-1} - (f_1+f_2)E_{1,2})(I_{n-1} + hE_{2,1})(I_{n-1} + (f_1+f_2)E_{1,2}) \in IA_m$$
and the general argument is similar. In the following computation we use the following notations:
- A matrix $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$ is denoted by $B \in GL_3(R)$.
- “=” denotes an equality between matrices in $GL_{n-1}(R)$.
- “≡” denotes an equality in $IA(\Phi_n)/IA_m$.
(In−1 − (f1 + f2 ) E1,2 ) (In−1 + hE2,1 ) (In−1 + (f1 + f2 ) E1,2 )
2
1 − h (f1 + f2 ) −h (f1 + f2 )
0
=
h
1 + h (f1 + f2 ) 0
0
0
1
1
0
− (f1 + f2 )
1
0
(f1 + f2 )
0
1
1
1
−1
= 0
−h −h (f1 + f2 )
1
h h (f1 + f2 )
1
1
= 0
−h
1
· 0
0
0
0
−h (f1 + f2 ) 1
0 − (f1 + f2 )
1
0
1
1
0
1
h
0
1
0
1
h (f1 + f2 )
0
1
0 0
1
0
0 (f1 + f2 )
.
1
−1
0
1
As the product of the elements in the square brackets is a conjugate of an element of $GL_{n-1}(R, \sigma_n R)$ it is also an element of $GL_{n-1}(R, \sigma_n R) \cong IGL_{n-1,n} \le IA(\Phi_n)$. In addition, by the remark in the beginning of the proof, the remaining element is in $IA_m$. So mod $IA_m$, the above expression equals:
1 0 − (f1 + f2 )
1
0
0
1 0 f1 + f2
0 1
0
1
−1
1
0 0 1
0 0
1
0 0
1
h h (f1 + f2 ) 1
=
=
1 0
0
1 0 −f1
1 0 −f2
0 1
1 0 1
0 0 1
0
0 0
1
0 0
1
0 hf2 1
1
0
0
1 0 f1
1 0 f2
· 0
1
0 0 1 −1 0 1 0
h hf1 1
0 0 1
0 0 1
1 0 −f2
1
0 1
0 0
0 0
1
0
1
1 0 f1
· 0 1 −1 0
h
0 0 1
1 0 −f1
1
1 0
· 0 1
0 0
1
h
−f1
1
0
0
1 0
1
0
1
0 hf2 1
0
0
1
0
1
0 0
1
hf1 1
−h −hf1
0 0
1 0 f1
1 0 0 1 −1
hf1 1
0 0 1
0
1
0
1
1 0 −f1
1 0 −f2
1 0
0 0 1
= 0 1
0 0
1
0 0
1
0
2
1
0
0
1 − hf1 −hf1
1
0
h
1 + hf1
· 0
h hf1 1
0
0
0
0
1
1
0
0
0 0
1 0
hf2 1
0
1 0
0 0 1
0 0
1
0 f2
1 0
0 1
1 0 f1
0 1 −1
0 0 1
f2
0
1
1
1 0 −f2
0 0
= 0 1
0
0 0
1
1
0
0
1
1
0 0
· 0
0
h hf1 1
1 0 −f2
1
= 0 1
0 0
0 0
1
0
1
0
0
1
1
0 0
· 0
h hf1 1
0
1
0 0
0 −f1
1 0 f1
1
1 0
1 0 0 1 −1
0
1
0 0 1
0 hf2 1
2
1 − hf1 −hf1
−hf1 f2
0 f2
h
1 + hf1
hf2
1 0
0 1
0
0
1
0 −f1
1
1
1 0
0
1
0
0 f2
1
1 0 0
0 1
0
0
1
0 0
1
0
−hf1 f2
hf2
1
0
1
hf2
0
1
0
f1
−1
1
0
1
0
1 − hf1
h
0
−hf12
1 + hf1
0
Notice now that by assumption, the right matrix in the above expression belongs to $IA_m$, and the matrix next to it also belongs to $IA_m$ by the remark in the beginning of the proof. So, mod $IA_m$, the latter expression equals:
1 0 −f2
1 0 −f1
1
0
0
0 1
0 0 1
1 0
1
0
0 0
1
0 0
1
0 hf2 1
1 0 f1
1
0
0
1 0 f2
1
0 0 1 0
· 0 1 −1 0
0 0 1
h hf1 1
0 0 1
1 0 −f2
1 0 −f1
1
0 0
1
0 0 1
1 0
1 0 0
= 0 1
0 0
1
0 0
1
0 hf2 1
0
1 0 f2
1 0 −f2
1
0
0
1
0 0
1
0 0
· 0 1 0 0 1
0 0 1
0 0
1
h hf1 1
0
=
=
1 0 − (f2 + f1 )
1
0
0 1
0
1
1
0 0
1
0 hf2
1 − hf2 −hf1 f2 −hf22
0
1
0
·
h
hf1
1 + hf2
0
1
0 0
1
0
0
1
0
0
1
0
f1
−1
1
f2
0
1
0 f2 + f1
1
−1
0
1
1
0
0
1 −hf2 (f1 + f2 ) hf2 (f1 + f2 )
0 1 + hf2
−hf2 0
1
0
0
0
1
0
hf2
1 − hf2
1 −hf1 f2 0
1 − hf2 0 −hf22
1
0
0
1
0
· 0
0
hf1
1
h
0 1 + hf2
0
0 .
1
1
0
≡ 0 1 + hf2
0
hf2
0
−hf2 .
1 − hf2
Consider now the last part of Equation 7.6 in Corollary 7.9, and switch the roles of $f, g$ by $h, f_2$ respectively. Then, by switching the roles of the first row and column with the third row and column in the equation, and transposing the expression, one deduces that:
1
0
0
0 1 + hf2
−hf2
0
hf2
1 − hf2
1 − hf2 −hf22 0
1 + hf2 0 −hf22
≡ In−1
h
1 + hf2 0
0
1
0
≡
0
0
1
h
0 1 − hf2
and this finishes the proof of the lemma.
We pass to the next stage:
Proposition 7.13. The elements of the following form belong to $IA_m$:
$$(I_{n-1} - fE_{j,i})(I_{n-1} + hE_{i,j})(I_{n-1} + fE_{j,i})$$
where $h \in \sigma_n\sigma_r^2 U_{r,m},\ \sigma_n^2 U_{n,m}$ for $1 \le r \le n-1$, $f \in \mathbb{Z}$ and $i \neq j$.
Proof. According to Lemma 7.12, it is enough to prove the proposition for $f = \pm1$. Without loss of generality, we will prove the proposition for $r = 1$, i.e. $h \in \sigma_n\sigma_1^2 U_{1,m}$; symmetrically, the same is valid for every $1 \le r \le n-1$. The case $h \in \sigma_n^2 U_{n,m}$ will be considered separately.
So let $h \in \sigma_n\sigma_1^2 U_{1,m}$ and write $h = \sigma_1 u$ for some $u \in \sigma_n\sigma_1 U_{1,m}$. We will prove the proposition for $i \neq j \in \{1, 2, 3\}$; as one can see below, we will do it simultaneously for all the options for $i \neq j \in \{1, 2, 3\}$. The treatment in the other cases in which $i \neq j \in \{1, k, l\}$ such that $1 < k \neq l \le n-1$ is obtained symmetrically, so we get that the proposition is valid for every $1 \le i \neq j \le n-1$. As before, we denote a given matrix of the form $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$ by $B \in GL_3(R)$. We now carry out the following computations (the indices of the matrices below are intended to match the appropriate form of matrix in Corollary 7.9, as will be explained below; we remind that the inverse of a matrix is denoted by the same index, and one can observe that the inverse of each indexed matrix is obtained by changing the sign of $u$). We remind that $u \in \sigma_n\sigma_1 U_{1,m} \subseteq \sigma_n R$. Thus, by Proposition 6.1 we have:
1 − σ1 u −σ12 u 0
x2 −σ1 0
1
0 0
u
1 + σ1 u 0
1
0 ux2 1 0
= 0
0
0
1 12
0
0
1
0
0 1
−1
−1
x2
x2 σ1 0
· 0
1
0 ∈ IAm
0
0
1
1 − σ1 u
0
u
0 −σ12 u
1
0
0 1 + σ1 u 7
0
0
1 + σ1 u
u
2
−σ1 u 1 − σ1 u 3
0
−σ12 u
1 + σ1 u 6
1
0
0
1
0
0 1 − σ1 u
0
u
1
0 0
x3 0 −σ1
0 0
1 0
= 0 1
0 0
1
ux3 0 1
−1
−1
x3
0 x3 σ1
∈ IAm
· 0
1
0
0
0
1
0 0
1 0
0 1
1
0
−σ2
1
0
σ2
0 0
1 0
0 1
1
−σ3
0
1
σ3
0
1
= uσ2
−uσ1 σ2
1 0 0
· 0 1 u
0 0 1
=
1
−uσ1 σ3
uσ3
1 0 0
· 0 1 0
0 u 1
0
0
1
0
−σ1 1
0 0
1 0 ∈ IAm
σ1 1
0
1
0
0
−σ1
1
0 0
1 σ1 ∈ IAm .
0 1
By switching the signs of σ1 , σ2 and σ3 in the two latter computations we obtain
also that:
1
0
0
1
0
0
0 1 − σ1 u
, 0 1 + σ1 u −σ12 u ∈ IAm .
u
2
0 −σ1 u 1 + σ1 u 1
0
u
1 − σ1 u 8
Consider now the identities which we got in Corollary 7.9, and switch the roles of $f, g$ in the corollary by $u, \sigma_1$, respectively. Remember now that $u \in \sigma_n\sigma_1 U_{1,m}$. Hence, as by the computations above the matrices which correspond to forms 7 and 8 belong to $IA_m$, we obtain from the last part of Equation 7.6 that form 13 also belongs to $IA_m$. Thus, as we showed that forms 1, 3, 6 also belong to $IA_m$, Equation 7.6 shows that forms 2, 4, 5 also belong to $IA_m$. Similar arguments show that Equations 7.6-7.11 give that all 18 forms belong to $IA_m$. In particular, the matrices which correspond to forms 13-18 belong to $IA_m$, and these matrices (and their inverses) are precisely the matrices of the form (we remind that $h = \sigma_1 u$):
(In−1 ± Ej,i ) (In−1 + hEi,j ) (In−1 ∓ Ej,i ) , i 6= j ∈ {1, 2, 3} .
Clearly, by similar arguments, the proposition holds for every $1 \le i \neq j \le n-1$ and every $h \in \sigma_n\sigma_r^2 U_{r,m}$ for $1 \le r \le n-1$. The case $h \in \sigma_n^2 U_{n,m}$ is a bit different, but easier. In this case one can consider the same computations we built for $r = 1$, with the following adjustments: firstly, write $h \in \sigma_n^2 U_{n,m}$ as $h = \sigma_n u$ for some $u \in \sigma_n U_{n,m}$; secondly, change $\sigma_1$ to $\sigma_n$, change $\sigma_2, \sigma_3$ to 0 and change $x_2, x_3$ to 1 on the right side of the above equations. It is easy to see that in this situation we obtain on the left side of the equations the same matrices, just with $\sigma_n$ instead of $\sigma_1$. From here we continue exactly the same.
Proposition 7.14. The elements of the following form belong to $IA_m$:
$$(I_{n-1} - fE_{j,i})(I_{n-1} + hE_{i,j})(I_{n-1} + fE_{j,i})$$
where $h \in \sigma_n^2 U_{n,m},\ \sigma_n\sigma_r^2 U_{r,m}$ for $1 \le r \le n-1$, $f \in \sigma_s R$ for $1 \le s \le n$ and $i \neq j$.
Proof. We will again use the result of Corollary 7.9, switching the roles of $f, g$ in the corollary by $h, \sigma_s u$ respectively, for some $u \in R$ and $1 \le s \le n$. Similarly to Proposition 7.13, we will prove it for $s = 1$, $i \neq j \in \{1, 2, 3\}$, and denote a given matrix of the form $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$ by $B \in GL_3(R)$. As $h \in \sigma_n\sigma_r^2 U_{r,m},\ \sigma_n^2 U_{n,m}$, we also have $\sigma_1 uh \in \sigma_n\sigma_r^2 U_{r,m},\ \sigma_n^2 U_{n,m}$. Hence, we obtain from the previous proposition that the matrices of the forms 13-18 belong to $IA_m$. In addition:
1
0
0
1
0 0
1
0
0
0 1 − uσ1 h
= −huσ2
h
1 0 0
1
0
0 −u2 σ12 h 1 + uσ1 h 1
−hu2 σ1 σ2 0 1
−uσ2 uσ1 1
1 0 0
1
0
0
1
0 ∈ IAm
· 0 1 h 0
0 0 1
uσ2 −uσ1 1
1
0
0
0 1 − uσ1 h −u2 σ12 h
0
h
1 + uσ1 h 6
=
1
0
−hu2 σ1 σ3 1
huσ3
0
1 0 0
· 0 1 0
0 h 1
0
1
0 uσ3
1
0
1
−uσ3
0
0
1
0
0
0
1 −uσ1
0
1
0
uσ1 ∈ IAm
1
and by switching the signs of $u$ and $h$ simultaneously, we also get forms 3 and 8. So we easily conclude from Corollary 7.9 (Equations 7.6 and 7.8) that the matrices of the other eight forms are in $IA_m$ as well. In particular, the matrices of the form:
$$(I_{n-1} - \sigma_1 uE_{j,i})(I_{n-1} + hE_{i,j})(I_{n-1} + \sigma_1 uE_{j,i}), \qquad i \neq j \in \{1, 2, 3\}$$
belong to $IA_m$. The treatment for every $i \neq j$ and $1 \le s \le n-1$ is similar, and the treatment in the case $s = n$ is obtained by replacing $\sigma_1$ by $\sigma_n$ and $\sigma_2, \sigma_3$ by 0 in the above equations.
Corollary 7.15. As every $f \in R$ can be decomposed as $f = \sum_{s=1}^n \sigma_s f_s + f_0$ for some $f_0 \in \mathbb{Z}$ and $f_s \in R$, we obtain from Lemma 7.12 and from the above two propositions that we have actually finished the proof of Proposition 7.11.
7.4 Elements of form 3
Proposition 7.16. The elements of the following form belong to $IA_m$:
$$A^{-1}[(I_{n-1} + hE_{i,j}), (I_{n-1} + fE_{j,i})]A$$
where $A \in GL_{n-1}(R)$, $f \in \sigma_n R$, $h \in \bar O_m^2$ and $i \neq j$, where $\bar O_m$ is the projection of $O_m$ to $R_{n-1} = \mathbb{Z}[x_1^{\pm1}, \ldots, x_{n-1}^{\pm1}]$ induced by the projection $\sigma_n \mapsto 0$.
We will prove the proposition in the case $i, j = 2, 1$; the same arguments are valid for arbitrary $i \neq j$. In this case one can write $h = m^2h'$ for some $h' \in R_{n-1}$, and thus our element is of the form:
$$A^{-1}\begin{pmatrix}1 - fm^2h' & f & 0\\ -f(m^2h')^2 & 1 + fm^2h' & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}\begin{pmatrix}1 & -f & 0\\ 0 & 1 & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}A$$
for some $A \in GL_{n-1}(R)$, $f \in \sigma_n R$ and $h' \in R_{n-1}$. The proposition will follow easily from the following lemma:
easily from the following lemma:
Lemma
7.17.
Let h1 , h2 ∈ R, f ∈ σn R and denote a given matrix
B
0
∈ GLn−1 (R) by B ∈ GL3 (R). Then:
0 In−4
1 − f m (h1 + h2 )
f
0
1 −f
2
A−1 −f (m (h1 + h2 )) 1 + f m (h1 + h2 ) 0 0 1
0 0
0
0
1
1 − f mh1
f
0
≡ A−1 −f (mh1 )2 1 + f mh1 0
0
0
1
1 − f mh2
f
0
1
· −f (mh2 )2 1 + f mh2 0 0
0
0
0
1
1 −f
0 1
0 0
−f
1
0
0
0
1
of the form
0
0 A
1
0
0 A mod IAm
1
Now, given the lemma, one can deduce that:
\begin{align*}
&A^{-1}\begin{pmatrix}1 - fm^2h' & f & 0\\ -f(m^2h')^2 & 1 + fm^2h' & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}\begin{pmatrix}1 & -f & 0\\ 0 & 1 & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}A\\
&\equiv \left(A^{-1}\begin{pmatrix}1 - fmh' & f & 0\\ -f(mh')^2 & 1 + fmh' & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}\begin{pmatrix}1 & -f & 0\\ 0 & 1 & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}A\right)^m \mod IA_m
\end{align*}
and as the latter element obviously belongs to $IA_m$ we obtain that:
$$A^{-1}\begin{pmatrix}1 - fh & f & 0\\ -fh^2 & 1 + fh & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}\begin{pmatrix}1 & -f & 0\\ 0 & 1 & 0\\ 0 & 0 & I_{n-3}\end{pmatrix}A \in IA_m$$
as required. So it is enough to prove Lemma 7.17.
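The factorisation of the form-3 commutator used here, $[I + hE_{2,1},\ I + fE_{1,2}]$ (with the convention $[X, Y] = XYX^{-1}Y^{-1}$) into the two displayed factors, can be checked by machine; here is a small sympy verification (ours):

```python
import sympy as sp

f, h = sp.symbols('f h')
X = sp.eye(3); X[1, 0] = h          # I + h E_{2,1}
Y = sp.eye(3); Y[0, 1] = f          # I + f E_{1,2}
comm = X * Y * X.inv() * Y.inv()
F1 = sp.Matrix([[1 - f*h, f, 0], [-f*h**2, 1 + f*h, 0], [0, 0, 1]])
F2 = sp.Matrix([[1, -f, 0], [0, 1, 0], [0, 0, 1]])
assert sp.simplify(comm - F1 * F2) == sp.zeros(3)
print("factorisation of the form-3 commutator verified")
```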
Proof (of Lemma 7.17). Notice, throughout the computation, the reminder (Remark 7.5) that as $GL_{n-1}(R, \sigma_n R)$ is normal in $GL_{n-1}(R)$, every conjugate of an element of $GL_{n-1}(R, \sigma_n R) \le IA(\Phi_n)$ by an element of $GL_{n-1}(R)$ belongs to $GL_{n-1}(R, \sigma_n R) \le IA(\Phi_n)$. We remark also that throughout the computation we will freely use the result of Proposition 7.7 and the three notations which we used in the proof of Lemma 7.12:
- A matrix $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$ is denoted by $B \in GL_3(R)$.
- “=” denotes an equality between matrices in $GL_{n-1}(R)$.
- “≡” denotes an equality in $IA(\Phi_n)/IA_m$.
1 − f m (h1 + h2 )
f
0
1 −f 0
2
A−1 −f (m (h1 + h2 )) 1 + f m (h1 + h2 ) 0 0 1 0 A
0 0 1
0
0
1
=
1
0
−f
0
1 −f m (h1 + h2 )
A−1
−m (h1 + h2 ) 1
1
1
0
f
0
1 f m (h1 + h2 )
·
m (h1 + h2 ) −1
1
1
0 0
0
1 0
= A−1
−m (h1 + h2 ) 1 1
1
0 0
1
·
0
1 0 0
m (h1 + h2 ) −1 1
0
1
0
0
0
1
0
1 −f
0 1
0 0
−f
−f m (h1 + h2 )
1
0
f
1 f m (h1 + h2 )
0
1
0
0 A
1
1 −f
0 1
0 0
0
0 A
1
1
0 0
1
0 0
1 0
−f
0
1 0
0
1 0 0 1 −f mh1
= A−1
−mh2 0 1
0 0
1
−mh1 1 1
1
0 0
1 0
f
1 0
−f
1
0
1 0 0 1 f mh1 0 1 −f mh1 0
1
· 0
mh1 −1 1
0 0
1
0 0
1
mh2 0
1
0 0
1 0
0
1
0
0
1 0 0 1 −f mh2
0
1
·A−1
−m (h1 + h2 ) 1 1
0 0
1
m (h1 + h2 ) −1
m
1 0
0
1 −f f
· A−1 0 1 f (h1 + h2 ) A A−1 0 1 0 A
0 0
1
0 0 1
0
0 A
1
0
0 A
1
1 − f mh1
f
0
1
0 0
0
1 0 −f (mh1 )2 1 + f mh1 0
≡ A−1
−mh2 0 1
0
0
1
1 0
−f
1
0 0
1 0 A
· 0 1 −f mh1 0
0 0
1
mh2 0 1
m
1
0 0
1 0
0
1
0 0
· A−1
0
1 0 0 1 −f h2
0
1 0 A
−m (h1 + h2 ) 1 1
0 0
1
m (h1 + h2 ) −1 1
1 −f f
·A−1 0 1 0 A
0 0 1
1
0
≡ A−1
−mh2
1
0 0
1 0
· 0
mh2 0 1
=
1 − f mh1
0 0
1 0 −f (mh1 )2
0 1
0
1 −f f
0 1 0 A
0 0 1
1
0
A−1
−mh2
1
0 0
1 0
· 0
mh2 0 1
1
0 0
1 0
· 0
mh2 0 1
1
0 0
1 0
· 0
mh2 0 1
f
1 + f mh1
0
0
1 0
0 1
0
0 0
1
1 − f mh1
f
0
0 0
1 0 −f (mh1 )2 1 + f mh1 0
0 1
0
0
1
1
0 0
1 0 −f
0
1 0 0 1 0
−mh2 0 1
0 0 1
1
0 0
1 0
0
0
1 0 0 1 −f mh1
−mh2 0 1
0 0
1
1 −f f
0 1 0 A
0 0 1
−f
−f mh1
1
m
1 − f mh1
f
0
1
0
0
0
1
0 A
= A−1 −f (mh1 )2 1 + f mh1 0 A · A−1
f mh1 h2 −f h2 1
0
0
1
1
0 0
1 0 −f
1
0 0
0
1 0 0 1 0 0
1 0 A
·A−1
−mh2 0 1
0 0 1
mh2 0 1
m
1
0 0
1 0
0
1
0 0
· A−1
0
1 0 0 1 −f h1 0
1 0 A
−mh2 0 1
0 0
1
mh2 0 1
1 −f f
·A−1 0 1 0 A
0 0 1
≡
A−1
1
· 0
0
1 − f mh1
f
0
1
0 0
0
1 0
−f (mh1 )2 1 + f mh1 0
−mh
0
0
1
2 0 1
0 −f
1
0 0
1 −f f
1 0 0
1 0 0 1 0 A
0 1
mh2 0 1
0 0 1
1 − f mh1
f
= A−1 −f (mh1 )2 1 + f mh1
0
0
1 f 0
1
0
·A−1 0 1 0
0 0 1
−mh2
1
0 0
1
0
1 0 0
·A−1
−mh2 0 1
0
1 − f mh1
= A−1 −f (mh1 )2
0
1
0
· A−1 0
1
0 f h2
1 − f mh2
−1
0
·A
2
f (mh2 )
0
1 −f 0
0 0 1 0
0 0 1
1
0 0
1 −f
1 0 0 1
0 1
0 0
f −f
1
1 0 0
0 1
mh2
f
1 + f mh1
0
m
0
0 A
1
f
1
−f mh2
A
0
1
0 0
1
mh2
0 0
1
1 0 0
0 1
0
0
1 −f
0 0 1
0 0
1
−f
1
0
0
0
1 + f mh2
0 0
1 0 A
0 1
−f f
1 0 A
0 1
0
0 A
1
−f
1
0
f
0 A
1
1 − f mh1
f
0
1 −f 0
A−1 −f (mh1 )2 1 + f mh1 0 0 1 0 A
0 0 1
0
0
1
m
1 f 0
1
0
0
1
0 A
·A−1 0 1 0 A · A−1 0
0 0 1
0 −f h2 1
1 − f mh2 0
−f
1 −f f
0 1 0 A
0
1
0
·A−1
2
0 0 1
f (mh2 )
0 1 + f mh2
≡
1 − f mh1
f
0
1 −f 0
1 f 0
≡ A−1 −f (mh1 )2 1 + f mh1 0 0 1 0 0 1 0
0 0 1
0 0 1
0
0
1
1 − f mh2 0
−f
1 −f 0
1 0 f
0 1 0 0 1 0 A
0
1
0
·
2
0 0 1
0 0 1
f (mh2 )
0 1 + f mh2
=
≡
1 − f mh1
f
0
A−1 −f (mh1 )2 1 + f mh1 0
0
0
1
2
1
f mh2
0
1
0 A
·A−1 0
2
2
0 −f (mh2 ) 1
1 − f mh2 0
−f
0
1
0
·A−1
2
f (mh2 )
0 1 + f mh2
1
0
0
−f
1
0
1 0
0 1
0 0
0
0 A
1
f
0 A
1
1 − f mh1
f
0
1 −f 0
A−1 −f (mh1 )2 1 + f mh1 0 0 1 0
0 0 1
0
0
1
1 − f mh2 0
−f
1 0 f
0 1 0 A.
0
1
0
·
2
0 0 1
f (mh2 )
0 1 + f mh2
Now, by similar computation as for Equation 7.5, we have (switch the roles
of f, g in the equation by f, mh2 respectively, and then switch the roles of the
first row and column with the third row and column):
1
0
0
1 −f −f
0 1 + f mh2
f mh2 = 0 1
0
0 −f mh2
1 − f mh2
0 0
1
f
1
f (mh2 )2
0
0
1 − f mh2
f
0
1
0
1 + f mh2 0
2
0
1
f (mh2 )
1 + f mh2
0
·
−f (mh2 )2
1 − f mh2
· −f (mh2 )2
0
0
1
0
Therefore, using Proposition 7.7, we have:
1
1
0
0
f mh2 A ≡ A−1 0
A−1 0 1 + f mh2
0
0 −f mh2
1 − f mh2
1 + f mh2
0
·
2
−f (mh2 )
Now, observe that:
1 − f mh2
0
f
−f (mh2 )2
1
0
0
0 1 − f mh2
1
A−1 0
0
0
1 + f mh2
−f mh2
1
0
= A−1 0 1 + f h2
0 −f h2
0
0
1 f mh2
0
1
0
0
1
0 .
−f mh2 1
−f
1
0
f
1 + f mh2
0
−f
0
1
0
0 A.
1
0
f mh2 A
1 − f mh2
m
0
f h2 A ∈ IAm .
1 − f h2
Hence we obtain that mod IAm we have:
1 + f mh2 0
f
1 −f −f
0
1
0
0
A−1 0 1
2
0 0
1
−f (mh2 ) 0 1 − f mh2
1 − f mh2
f
0
· −f (mh2 )2 1 + f mh2 0 A ≡ In−1 .
0
0
1
But as the inverse of every matrix in the above equation is obtained by replacing
f by −f , we have:
1 −f 0
1 0 −f
A−1 0 1 0 0 1 0 A
0 0 1
0 0 1
1 + f mh2
2
−1
≡A
f (mh2 )
0
−f
1 − f mh2
0
0
1 − f mh2
0
0
1
f (mh2 )2
0
−f
A
1
0
0 1 + f mh2
and thus:
1 − f mh2 0
−f
1 0 f
0 1 0
0
1
0
A−1
2
0 0 1
f (mh2 )
0 1 + f mh2
1 − f mh2
f
0
1 −f
≡ A−1 −f (mh2 )2 1 + f mh2 0 0 1
0 0
0
0
1
A
0
0 A.
1
This finishes the proof of the lemma, and hence, also the proof of Proposition
7.16.
7.5 Elements of form 4
Proposition 7.18. The elements of the following form belong to $IA_m$:
$$A^{-1}[(I_{n-1} + hE_{i,j}), (I_{n-1} + fE_{j,i})]A$$
where $A \in GL_{n-1}(R)$, $f \in \sigma_n R$, $h \in \sigma_r^2\bar U_{r,m},\ \sigma_r\bar O_m$ for $1 \le r \le n-1$ and $i \neq j$, where $\bar U_{r,m}, \bar O_m$ are the projections of $U_{r,m}, O_m$ to $R_{n-1} = \mathbb{Z}[x_1^{\pm1}, \ldots, x_{n-1}^{\pm1}]$ induced by the projection $\sigma_n \mapsto 0$.
As before, throughout the subsection we denote a given matrix of the form $\begin{pmatrix}B & 0\\ 0 & I_{n-4}\end{pmatrix} \in GL_{n-1}(R)$ by $B \in GL_3(R)$. We start the proof of this proposition with the following computation.
Lemma 7.19. Let f, g ∈ R and A ∈ GLn−1 (R). Then:
1 − fg
−f g 0
1 + fg 0 A
A−1 f g
0
0
1
1
0 0
1
0
0
−f
= A−1 f g 1 0 AA−1 0 1 + f g
2
2
fg 0 1
0
fg
1 − fg
1 0 0
1 −f g 0
1
0 AA−1
·A−1 0 1 −f AA−1 0
2
0 0 1
0 −f g 1
1 − fg 0
f
1
0
0
AA−1 f 2 g 2 1 −f 2 g
0
1
0
·A−1
−f g 2 0 1 + f g
0
0
1
0
f A
1
1 0 0
0 1 f A
0 0 1
1 0 −f
AA−1 0 1 0 A.
0 0 1
1 0
0 1
0 0
Proof. The lemma follows from the following computations, which are based on Proposition 7.8, Equation 7.2:
1 − f g −f g 0
fg
1 + fg 0
0
0
1
1
= fg
f g2
1
· 0
0
1
· 0
0
0 0
1
0
1 0 0 1 + fg
0 1
0
f g2
0 0
1 −f g
1 −f 0
1
0 1
0 −f g 2
0 0
1 − fg 0
1 −f
0
1
0 1
−f g 2 0
1
1
0 0
= fg 1 0 0
0
f g2 0 1
1 0 0
1
· 0 1 −f 0
0 0 1
0
1 − fg 0
f
0
1
0
·
−f g 2 0 1 + f g
0
−f
1 − fg
0
0
1
1
0
0
0 0
1 f
0 1
1 0 0
0 1 f
0 0 1
f
1 0 0
1 0
0 1 f 0 1
0
1 + fg
0 0 1
0 0
0
1 + fg
f g2
−f g
1
−f g 2
0
−f
1 − fg
0
0
1
1
f 2g2
0
0
1
0
1
0
0
1 0
0 1
0 0
−f
0
1
0 0
1 f
0 1
0
f
1
0
1 0
−f 2 g 0 1
1
0 0
−f
0 .
1
Observe now that if we have f ∈ σn R and instead of g we have h ∈
σr2 Ūr,m , σr Ōm for 1 ≤ r ≤ n − 1, then by Propositions 7.7 and 7.11, we have:
1 − f h −f h 0
1 + fh 0 A
A−1 f h
0
0
1
1 0 0
1 −f h 0
1 0 0
1
0 1 1 0 A ∈ IAm .
= A−1 −1 1 0 0
0 0 1
0
0
1
0 0 1
Therefore, by Propositions 7.7, 7.11 and the previous lemma, for every $A \in GL_{n-1}(R)$ we have the following equation mod $IA_m$:
1
0
0
1 0
−f 0 1
A−1 0 1 + f h
0
f h2
1 − fh
0 0
1 − fh
0
≡ A−1
−f h2
0
f
1 0
0 1
1
0
0 1 + fh
0 0
0
f A
1
−1
−f
0 A.
1
That is, we have the following corollary (notice the switching of the sign of $f$):
Corollary 7.20. For every h ∈ σr2 Ūr,m , σr Ōm for 1 ≤ r ≤ n − 1, f ∈ σn R and
A ∈ GLn−1 (R) , the following is equals mod IAm :
A−1 [(In−1 + hE3,2 ) , (In−1 + f E2,3 )] A ≡ A−1 [(In−1 − f E1,3 ) , (In−1 + hE3,1 )] A.
We now have the following proposition:
Proposition 7.21. Let h ∈ σ12 Ū1,m , σ1 Ōm and f ∈ σn R. Then:
[(In−1 + hE3,2 ) , (In−1 + f E2,3 )] ∈ IAm .
Proof. Denote $h = \sigma_1 u$ for some $u \in \sigma_1\bar U_{1,m},\ \bar O_m$. Then:
1
0
1
IAm ∋ 0
−σ2 u σ1 u
=
0
1 0 0
1
0
0
1 0
0 0 1 f 0
1
0 0 1
1
0 0 1
σ2 u −σ1 u 1
0 0
0
−f
1
1
0 0
f σ2 u
1 0
2
σ1 σ2 u f 0 1
1
0
0
1
1
0 0
· 0
0 σ1 u 1
0
and thus:
0 0
1
1 f 0
0 1
0
1
[(In−1 + hE3,2 ) , (In−1 + f E2,3 )] = 0
0
as required.
0
1
h
0
0
1
1
0 0
−σ1 u 1
0
0
1
0 , 0
1
0
0 0
1 −f
0 1
0 0
1 f ∈ IAm
0 1
We can now pass to the following proposition.
Proposition 7.22. Let h ∈ σ12 Ū1,m , σ1 Ōm , f ∈ σn R and A ∈ GLn−1 (R).
Then:
A−1 [(In−1 + hE3,2 ) , (In−1 + f E2,3 )] A ∈ IAm .
Proof. We will prove the proposition by induction. By a result of Suslin [Su], as $n - 1 \ge 3$, $SL_{n-1}(R)$ is generated by the elementary matrices of the form $I_{n-1} + rE_{l,k}$ for $r \in R$ and $1 \le l \neq k \le n-1$. So as the invertible elements of $R$ are the elements of the form $\pm\prod_{i=1}^n x_i^{s_i}$ for $s_i \in \mathbb{Z}$ (see [CF], chapter 8), $GL_{n-1}(R)$ is generated by the elementary matrices and the matrices of the form $I_{n-1} + (\pm x_i - 1)E_{1,1}$ for $1 \le i \le n$. Therefore, it is enough to show that if:
A−1 [(In−1 + hE3,2 ) , (In−1 + f E2,3 )] A ∈ IAm
and $E$ is one of the above generators, then mod $IA_m$ we have:
$$A^{-1}E^{-1}[(I_{n-1} + hE_{3,2}), (I_{n-1} + fE_{2,3})]EA \equiv A^{-1}[(I_{n-1} + hE_{3,2}), (I_{n-1} + fE_{2,3})]A. \tag{7.12}$$
So if $E$ is of the form $I_{n-1} + (\pm x_i - 1)E_{1,1}$, we obviously have property (7.12). If $E$ is an elementary matrix of the form $I_{n-1} + rE_{l,k}$ such that $l, k \notin \{2, 3\}$ then we also have property (7.12) in an obvious way. Consider now the case $l, k = 2, 3$. In this case, by Corollary 7.20 we have:
1 0 0
1 0 0
A−1 E −1 0 1 0 , 0 1 f EA
0 h 1
0 0 1
1 0 0
1 0 −f
1 0 0
1 0 0
≡ A−1 0 1 −r 0 1 0 , 0 1 0 0 1 r A
0 0 1
0 0 1
h 0 1
0 0 1
2 2
2
1 0 0
1 − hf + h f
0 −hf
1 0 0
0 1 r A
0
1
0
= A−1 0 1 −r
2
0 0 1
−h f
0 1 + hf
0 0 1
2 2
2
1 − hf + h f
0 −hf
1
0
0
AA−1 rh2 f 1 −rhf A
0
1
0
= A−1
2
−h f
0 1 + hf
0
0
1
1 0 −f
1 0 0
1
0
0
= A−1 0 1 0 , 0 1 0 AA−1 rh2 f 1 −rhf A.
0 0 1
h 0 1
0
0
1
So by applying Propositions 7.7, 7.11 and Corollary 7.20 once again, in the opposite direction, we obtain that this element belongs to $IA_m$ by the induction hypothesis. The other cases for $l, k$ are treated by similar arguments: if $l, k = 3, 2$ we do exactly the same, and if $l$ or $k$ is different from 2 and 3, then the situation is easier: we use similar arguments, but without passing to $[(I_{n-1} - fE_{1,3}), (I_{n-1} + hE_{3,1})]$ through Corollary 7.20.
Corollary 7.23. Let h ∈ σ12 Ū1,m , σ1 Ōm , f ∈ σn R and A ∈ GLn−1 (R). Then,
for every i 6= j we have:
A−1 [(In−1 + hEi,j ) , (In−1 + f Ej,i )] A ∈ IAm .
Proof. Denote by $P$ a permutation matrix whose conjugation action on $GL_{n-1}(R)$ moves $2 \mapsto j$ and $3 \mapsto i$. Then:
A−1 [(In−1 + hEi,j ) , (In−1 + f Ej,i )] A
= A−1 P −1 [(In−1 + hE3,2 ) , (In−1 + f E2,3 )] P A ∈ IAm .
Now, as one can see that, symmetrically, the above corollary is valid for every $h \in \sigma_r^2\bar U_{r,m},\ \sigma_r\bar O_m$ for $1 \le r \le n-1$, we have actually finished the proof of Proposition 7.18.
References
[A]
M. Asada, The faithfulness of the monodromy representations associated with certain families of algebraic curves, J. Pure Appl. Algebra 159
(2001), 123–147.
[Bac]
S. Bachmuth, Automorphisms of free metabelian groups, Trans. Amer.
Math. Soc. 118 (1965) 93-104.
[Bas]
H. Bass, Algebraic K-theory, W. A. Benjamin, Inc., New YorkAmsterdam, 1968.
[Be1]
D. E-C. Ben-Ezra, The congruence subgroup problem for the free
metabelian group on two generators. Groups Geom. Dyn. 10 (2016),
583–599.
[Be2]
D. E-C. Ben-Ezra, The congruence subgroup problem for the free
metabelian group on n ≥ 4 generators, arXiv:1701.02459.
[Bi]
J. S. Birman, Braids, links, and mapping class groups, Princeton University Press, Princeton, NJ, University of Tokyo Press, Toyko, 1975.
[Bo1]
M. Boggi, The congruence subgroup property for the hyperelliptic modular group: the open surface case, Hiroshima Math. J. 39 (2009),
351–362.
[Bo2]
M. Boggi, A generalized congruence subgroup property for the hyperelliptic modular group, arXiv:0803.3841v5.
[BER] K-U. Bux, M. V. Ershov, A. S. Rapinchuk, The congruence subgroup
property for Aut (F2 ): a group-theoretic proof of Asada’s theorem,
Groups Geom. Dyn. 5 (2011), 327–353.
[BL]
D. E-C. Ben-Ezra, A. Lubotzky, The congruence subgroup problem
for low rank free and free metabelian groups, J. Algebra (2017),
http://dx.doi.org/10.1016/j.jalgebra.2017.01.001.
[BaLS] H. Bass, M. Lazard, J.-P. Serre, Sous-groupes d’indice fini dans SLn (Z),
(French) Bull. Amer. Math. Soc. 70 (1964) 385–392.
[BrLS] K. A. Brown, T. H. Lenagan, J. T. Stafford, K-theory and stable structure of some Noetherian group rings, Proc. London Math. Soc. 42
(1981), 193–230.
[BM]
S. Bachmuth, H. Y. Mochizuki, Aut (F ) → Aut (F/F ′′ ) is surjective for
free group F of rank ≥ 4, Trans. Amer. Math. Soc. 292 (1985), 81–101.
[CF]
R. H. Crowell, R. H. Fox, Introduction to knot theory, Ginn and Co.,
Boston, Mass., 1963.
[DDH] S. Diaz, R. Donagi, D. Harbater, Every curve is a Hurwitz space, Duke
Math. J. 59 (1989), 737–746.
[DS]
R. K. Dennis, M. R. Stein, The functor K2 : a survey of computations
and problems, Algebraic K-theory, II: "Classical” algebraic K-theory
and connections with arithmetic, 243–280, Lecture Notes in Math., Vol.
342, Springer, Berlin, 1973.
[I]
S. V. Ivanov, Group rings of Noetherian groups, (Russian) Mat. Zametki
46 (1989), 61–66, 127; translation in Math. Notes 46 (1989), 929–933
(1990).
[KN]
M. Kassabov, M. Nikolov, Universal lattices and property tau, Invent.
Math. 165 (2006), 209–224.
[L]
A. Lubotzky, Free quotients and the congruence kernel of SL2 , J. Algebra 77 (1982), 411–418.
[Ma]
W. Magnus, On a theorem of Marshall Hall, Ann. of Math. 40 (1939),
764–768.
[Mc]
D. B. McReynolds, The congruence subgroup problem for pure braid
groups: Thurston’s proof, New York J. Math. 18 (2012), 925–942.
[Mel]
O. V. Mel´nikov, Congruence kernel of the group SL2 (Z), (Russian)
Dokl. Akad. Nauk SSSR 228 (1976), 1034–1036.
[Men] J. L. Mennicke, Finite factor groups of the unimodular group, Ann. of
Math. 81 (1965), 31–37.
[Mi]
J. Milnor, Introduction to algebraic K-theory, Annals of Mathematics
Studies, No. 72, Princeton University Press, Princeton, NJ; University
of Tokyo Press, Tokyo, 1971.
[MKS] W. Magnus, A. Karrass, D. Solitar, Combinatorial group theory: Presentations of groups in terms of generators and relations, Interscience
Publishers, New York-London-Sydney, 1966.
[NS]
N. Nikolov, D. Segal, Finite index subgroups in profinite groups, C. R.
Math. Acad. Sci. Paris 337 (2003), 303–308.
[Q]
D. Quillen, Higher algebraic K-theory. I, Algebraic K-theory, I: Higher
K-theories, 85–147, Lecture Notes in Math., Vol. 341, Springer, Berlin
1973.
[Rom] N. S. Romanovskiı̆, On Shmel´kin embeddings for abstract and profinite
groups, (Russian) Algebra Log. 38 (1999), 598-612, 639-640, translation
in Algebra and Logic 38 (1999), 326–334.
[Ros]
J. Rosenberg, Algebraic K-theory and its applications, Graduate Texts
in Mathematics 147, Springer-Verlag, New York, 1994.
[RS]
V. N. Remeslennikov, V. G. Sokolov, Some properties of a Magnus embedding, (Russian) Algebra i Logika 9 (1970), 566–578, translation in
Algebra and Logic 9 (1970), 342–349.
[Sm]
P. F. Smith, On the dimension of group rings, Proc. London Math. Soc.
25 (1972), 288–302.
[Su]
A. A. Suslin, The structure of the special linear group over rings of polynomials, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 41 (1977), 235–252,
477.
[SD]
M. R. Stein, R. K. Dennis, K2 of radical ideals and semi-local rings
revisited, Algebraic K-theory, II: "Classical” algebraic K-theory and
connections with arithmetic, 281–303, Lecture Notes in Math. Vol. 342,
Springer, Berlin, 1973.
Institute of Mathematics
The Hebrew University
Jerusalem, ISRAEL 91904
[email protected]
On Existence of Solutions to Structured Lyapunov Inequalities
Aivar Sootla and James Anderson
arXiv:1603.07686v1 [math.OC] 24 Mar 2016
A condensed version of this paper will appear
in the Proceedings of the 2016 American Control Conference
Abstract— In this paper, we derive sufficient conditions
on drift matrices under which block-diagonal solutions to
Lyapunov inequalities exist. The motivation for the problem
comes from a recently proposed basis pursuit algorithm. In
particular, this algorithm can provide approximate solutions to
optimisation programmes with constraints involving Lyapunov
inequalities using linear or second order cone programming.
This algorithm requires an initial feasible point, which we aim
to provide in this paper. Our existence conditions are based
on the so-called H matrices. We also establish a link between
H matrices and an application of a small gain theorem to the
drift matrix. We finally show how to construct these solutions
in some cases without solving the full Lyapunov inequality.
I. INTRODUCTION
Lyapunov equations and matrix inequalities play a central role in control theory, since they are used for, e.g., verifying stability of dynamical systems, optimal control, and model order reduction (cf. [2]). Lyapunov matrix inequalities with sparsity constraints on the decision variables are used in the context of distributed control [3], structured model reduction [4], etc. In such applications, a typical constraint on the decision variables is block-diagonality of a matrix.
The major bottleneck in solving optimisation programmes
with a Lyapunov inequality constraint is scalability, since it
is a semidefinite programme (SDP). There exist a number of
methods addressing scalability of SDPs (cf. [5], [6], [7]), and
in one of them, it was proposed to replace the constraints in
the cone of positive semidefinite matrices with conic inner-approximations [8], [9]. There are two main conic approximations: one which results in a linear programme (LP), and
another which results in a second order cone programme
(SOCP). Since we are dealing with inner approximations of
the cone of positive semidefinite matrices, even if the LP
or SOCP solution can be computed, this solution is usually
conservative with respect to the optimal SDP solution. This
limitation was partially addressed using the basis pursuit
algorithm [1], which iterates over LPs or SOCPs and
provides a guarantee of improvement with each iteration.
This algorithm requires an initial feasible point in order to start the iterations. Hence, major questions still remain concerning existence theorems and scalable computation of block-diagonal solutions to Lyapunov inequalities.

J. Anderson is with St John's College, Oxford and the Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, U.K. e-mail: [email protected]
A. Sootla is with the Montefiore Institute, University of Liège, B28, Liège, Belgium, B4000; e-mail: [email protected]. A. Sootla holds an F.R.S–FNRS fellowship. This paper is partially funded through the Belgian Network DYSCO, and the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office.
The authors would like to thank Prof. Amir Ali Ahmadi for valuable discussions, and specifically for pointing out the reference [1].
Necessary and sufficient conditions for block-diagonal
stability were described more than 20 years ago in [10].
However, these results do not provide a constructive way
to build block-diagonal Lyapunov functions. This perhaps
explains why these results are relatively unused in the control
theory literature. Besides some simple cases, such as the drift matrix being block-triangular (cf. [11]), it is
known that the closed loop interconnection of strictly passive
systems has a drift matrix which admits a block-diagonal
solution to Lyapunov inequalities [12]. It is also well-known
that stable Metzler matrices admit diagonal solutions to
the Lyapunov inequality [13]. Additional special cases are
covered in [14], [15] and revisited in what follows.
In this paper, we aim at identifying additional cases in which a block-diagonal solution to a Lyapunov inequality can be
found using algebraic methods or LPs. We start by studying
a generalisation of Metzler matrices known as H matrices.
Stable H matrices possess many properties of stable Metzler
matrices, for example they also admit diagonal solutions
to Lyapunov inequalities [16]. We provide another such
property, namely we show that for H matrices, diagonal
solutions to Lyapunov inequalities can be computed using
algebraic methods and/or LPs. We then investigate conditions
on specific blocks in block-partitioned matrices. We establish
a link between the H matrix conditions and a version of the
small gain theorem before extending this intuition to the block-partitioned case. In the 2 by 2 block-partitioned case, we provide an explicit way to construct block-diagonal solutions to the Lyapunov inequalities without the need to solve the full inequality. An extension to the n by n block-partitioned case is left for future work.
The rest of the paper is organised as follows. In Section II we cover some preliminaries, and in Section III we motivate our problem formulation. We show how to construct diagonal solutions to Lyapunov inequalities for H drift matrices in Section IV. We provide stability results for block-partitioned matrices and link the condition for H matrices with the small gain theorem in Section V. In Section VI we provide a large-scale numerical example, and we conclude in Section VII, where we discuss linear programming solutions to Lyapunov inequalities.
Notation: Our notation is mostly standard: ρ(A) stands for the spectral radius of a matrix A; A ≥ 0 (respectively, A > 0) means that all entries a_ij of A are nonnegative (respectively, positive); A ⪰ 0 (respectively, A ≻ 0) means that A is positive semidefinite (respectively, positive definite). Let S^n denote the set of symmetric n by n matrices, and S^n_+ the cone of positive semidefinite n by n matrices. Let ‖B‖_2 be the induced matrix norm, that is, ‖B‖_2 is equal to the maximum singular value of B, and let σ(B) denote the minimum singular value of B. The H∞ norm of a transfer matrix G is defined as ‖G‖_{H∞} = max_{s ∈ C: Re(s) ≥ 0} ‖G(s)‖_2, and ‖G‖_{H∞} = max_{ω ∈ R} ‖G(jω)‖_2 for stable G. For a space X, its dual is denoted as X*. Finally, I is the identity matrix of appropriate dimension.
II. PRELIMINARIES
Consider the linear time invariant dynamical system

ẋ(t) = A x(t),   x(0) = x0,   (1)

where x(t) ∈ R^n. An important concept associated with the system (1) is stability, which is typically verified by solving a linear matrix inequality (LMI).
Proposition 1: System (1) is stable if and only if there exists an X ≻ 0 that satisfies the LMI

A X + X A^T ≺ 0.   (2)
A matrix X which satisfies (2) defines a Lyapunov function of the form V(x) = x^T X^{-1} x for system (1). In this paper, we aim at describing some sufficient conditions for solvability of the LMI (2) when the decision variable X satisfies additional sparsity constraints.
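As a minimal sketch of Proposition 1 (our illustration, not part of the original paper), one can certify stability with SciPy's continuous Lyapunov solver; the drift matrix below is the 2 by 2 example used later in Section IV:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, -2.0],
              [ 2.0, -5.0]])
# Solve A X + X A^T = -I; for a Hurwitz A the solution is positive
# definite, so X certifies the (unstructured) LMI (2).
X = solve_continuous_lyapunov(A, -np.eye(2))
print(np.linalg.eigvalsh(X))                # all positive
print(np.linalg.eigvalsh(A @ X + X @ A.T))  # all negative
```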
In order to simplify the presentation, we say that a matrix A ∈ R^{N×N} has an α = {k1, . . . , kn}-partitioning with N = Σ_{i=1}^n k_i if the matrix A is written as

A = [ A11  A12  · · ·  A1n
      A21  A22  · · ·  A2n
       ⋮     ⋮    ⋱     ⋮
      An1  An2  · · ·  Ann ],
where Aij ∈ R^{ki×kj}. We say that A is α-diagonal if it is α-partitioned and Aij = 0 for i ≠ j, and α-lower triangular if Aij = 0 for i < j. We aim at characterising α-diagonally stable matrices A ∈ R^{N×N}, i.e., matrices such that there exists an α-diagonal positive definite X ∈ R^{N×N} satisfying (2). If α = {1, . . . , 1}, we say that an α-diagonal (respectively, α-lower triangular, α-diagonally stable) matrix A is diagonal (respectively, lower-triangular, diagonally stable).
We will make use of so-called scaled diagonally dominant
matrices.
Definition 1: A matrix A ∈ R^{n×n} is called strictly row scaled diagonally dominant if there exist positive scalars d1, . . . , dn such that

d_i |a_ii| > Σ_{j≠i} d_j |a_ij|

for all i = 1, . . . , n. The matrix A is strictly row diagonally dominant if d_i = 1 for all i.
A related class to scaled diagonally dominant matrices is
the class of H matrices. In order to define this class we
require the following definitions:
Definition 2 ([17]): Given an α-partitioned matrix A with nonsingular A_ii for all i, we define the α-comparison matrix Mα(A) as

M^α_{ij}(A) = ‖A_ii^{-1}‖_2^{-1}  if i = j,   M^α_{ij}(A) = −‖A_ij‖_2  otherwise.   (3)

When α = {1, . . . , 1}, we will simply write M(A).
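As a concrete reading of Definition 2, the following NumPy sketch (ours) builds Mα(A); `blocks` is an assumed list of index arrays describing the α-partitioning:

```python
import numpy as np

def alpha_comparison(A, blocks):
    n = len(blocks)
    M = np.zeros((n, n))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            Aij = A[np.ix_(bi, bj)]
            if i == j:
                # ||Aii^{-1}||_2^{-1} equals the smallest singular value of Aii
                M[i, j] = np.linalg.svd(Aij, compute_uv=False)[-1]
            else:
                M[i, j] = -np.linalg.norm(Aij, 2)
    return M

# Example with alpha = {2, 1} (hypothetical data):
A = np.array([[-3.0, 1.0, 0.5],
              [ 0.0, -2.0, 0.3],
              [ 0.4, 0.1, -1.0]])
print(alpha_comparison(A, [np.arange(2), np.arange(2, 3)]))
```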
Note that ‖A_ii^{-1}‖_2^{-1} = σ(A_ii). Hence, using a continuity argument, we can set ‖A_ii^{-1}‖_2^{-1} = 0 for a singular A_ii, and Definition 2 is well-posed. The α-partitioned matrices allow a version of the Gershgorin circle theorem:
Proposition 2 ([18]): For an α-partitioned matrix A ∈ R^{N×N}, where α = {k1, . . . , kn} and N = Σ_{i=1}^n k_i, every eigenvalue λ of A satisfies

‖(λI − A_ii)^{-1}‖_2^{-1} ≤ Σ_{j=1, j≠i}^n ‖A_ij‖_2

for at least one i, where i = 1, . . . , n.
Definition 3: A matrix A ∈ R^{n×n} is said to be Metzler if all the off-diagonal elements are nonnegative.
Definition 4: A matrix A ∈ R^{n×n} is said to be an H matrix if the minimal real part of the eigenvalues of M(A) is greater than or equal to zero.
It is clear that stable Metzler matrices are also H matrices.
It is also straightforward to show that A is strictly row and
column scaled diagonally dominant if and only if M(A)
has eigenvalues with positive real part [19]. We also refer the
reader to [20], [16] for additional information on H matrices.
Let DD+ denote the cone of matrices A such that A and A^T are strictly diagonally dominant and the elements on the diagonal of A are positive (that is, a_ii > 0). Similarly, let H+ denote the H matrices A with positive elements on the diagonal. If A is a symmetric DD+ matrix, then by Proposition 2 with α = {1, . . . , 1} it is easy to show that A ≻ 0. Moreover, the constraint A = A^T ∈ DD+ can be written as a set of linear constraints

a_ii > Σ_{j≠i} c_ij  ∀i,   −c_ij ≤ a_ij ≤ c_ij,  c_ij = c_ji  ∀i ≠ j.   (4)

Hence, the constraint A ⪰ 0 can be replaced by more restrictive but scalable linear constraints. This approach was proposed in [8], [9] to restrict some sum-of-squares optimisation problems, which are naturally SDPs, to LPs.
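Since c_ij = |a_ij| is always the tightest feasible choice in (4), membership of a symmetric matrix in DD+ reduces to strict diagonal dominance with a positive diagonal; a minimal check (our sketch):

```python
import numpy as np

def in_DD_plus(A):
    # Conditions (4) with the canonical choice c_ij = |a_ij|:
    # positive diagonal and strict row diagonal dominance; for a
    # symmetric A, row and column dominance coincide.
    d = np.diag(A)
    off = np.sum(np.abs(A), axis=1) - np.abs(d)
    return bool(np.all(d > 0) and np.all(d > off))

print(in_DD_plus(np.array([[2.0, -0.5], [-0.5, 1.0]])))   # True
print(in_DD_plus(np.array([[1.0, -2.0], [-2.0, 1.0]])))   # False
```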
A symmetric H+ matrix is also positive semidefinite, and this constraint can be imposed by a number of second order cone constraints [21]. That is, A = A^T ∈ H+ if and only if

A = Σ_{i=1}^N E_i X_i E_i^T,  with X_i ∈ S²_+,  E_i ∈ T2,

where E ∈ T2 if E ∈ R^{n×2} and every column of E has only one non-zero element, equal to one, and N = |T2|.
To summarise this subsection, we mention the following strong result on diagonal stability of H+ matrices, which we will revisit in the sequel.
Proposition 3 ([16]): Let −A be an H+ matrix. Then A is diagonally stable if and only if A is nonsingular.
III. MOTIVATION AND PROBLEM FORMULATION

A. Conic Programming

Conic optimisation problems take the generic form of optimising a linear functional over the intersection of an affine subspace and a proper cone. Typically, conic programmes have the following primal and dual formulations:

minimise c^T x          maximise b^T y
s.t.  Ax = b            s.t.  c − A^T y = s
      x ∈ K                   (y, s) ∈ (R^m, K*)

where K is a proper cone (i.e. closed, non-empty, pointed, convex) and K* is the dual cone of K defined as

K* := {y | ⟨y, x⟩ ≥ 0, ∀x ∈ K}.

It is well known that the cone of positive semidefinite matrices S^n_+ is self-dual, meaning that (S^n_+)* = S^n_+. The cone of symmetric DD+ matrices,

K_LP = {X ∈ S^n ∩ DD+},

however, is not self-dual, and its dual is larger than the cone S^n_+. More specifically:

K*_LP = {X ∈ S^n | v_i^T X v_i ≥ 0, ∀v_i ∈ T1},

where T1 is the set of all vectors in R^n with a maximum of two non-zero elements, each of which is ±1. The cone of symmetric H+ matrices, defined as

K_SOCP = {X ∈ S^n ∩ H+},

has the dual

K*_SOCP = {X ∈ S^n | E_i^T X E_i ⪰ 0, ∀E_i ∈ T2}.

We will make use of these cones and their duals in the remainder of the paper.

B. Structured Gramians via Basis Pursuit

The standard form primal SDP [22] is written as

min ⟨C, X⟩  s.t.  X ∈ S^n_+,  ⟨A_i, X⟩ = b_i,  i = 1, . . . , m,   (5)

where S^n_+ is the cone of n × n positive semidefinite matrices. The basis pursuit algorithm proceeds as follows: at each iteration the algorithm re-parameterizes the simpler cone that approximates S^n_+ and then solves an optimization problem over this cone, the solution of which is then used to update the cone for the next iteration. In particular, the algorithm specifies, for a fixed matrix L, the cone

K(L) = {X | X = L^T Q L, Q = Q^T ∈ DD+}.

Note that Z ∈ K(L) ⇒ Z ⪰ 0. The algorithm in [1] solves a sequence of optimization problems of the form (5), but with the conic constraint replaced by X ∈ K(L_k), where the sequence {L_k} is given by

L_0 = I,   L_{k+1} = decomp(X_k),

where decomp(X_k) is a Cholesky decomposition of X_k, the optimal solution decision variable from iteration k. In some cases, X_k can have singular values close to zero, thus creating numerical problems in the iterative procedure. In order to avoid such cases, we can remove the k-th column of L when its k-th entry is close to zero (we assume that L is lower triangular). One can also use an LDL decomposition to avoid dealing with negative eigenvalues of X_k. This approach also slightly improves the numerical complexity of the conic programme by lowering the number of decision variables and constraints. We finally note that the method relies on the fact that at the first iteration a feasible solution X_0 exists (a short code sketch of this cone update is given at the end of this section).

In many applications, it is desirable to solve the following problem using an LP or SOCP rather than the more natural SDP:

min trace(X)
such that:  AX + XA^T ≺ 0,  X = X^T is α-diagonal,   (6)

where A is Hurwitz and α is a given partitioning. Note that since A is Hurwitz, the condition X ≻ 0 is implied by the solvability of (6). The basis pursuit can be applied given an X satisfying the constraints of the programme (6). Therefore, we set up our problem: find X satisfying the constraints of (6) with algebraic or linear programming methods.

There are many practical applications where a problem of the form (6) appears; one of them is structured model reduction. For example, consider the boiler-header system described in [12] and schematically depicted in Figure 1.

Fig. 1. Block-diagram of the boiler-header system.

The state space can be partitioned according to the dimensions of the subsystems, which are {3, 3, 1}. The system always admits diagonal generalised Gramians, since the drift matrix of the closed loop system is a stable H+ matrix.

In order to perform structured model reduction, the authors of [12] computed a {3, 3, 1}-diagonal generalised controllability Gramian P = diag{P1, P2, P3} such that P1, P2 ∈ R^{3×3} and P3 ∈ R. The optimal trace of such a Gramian computed using semidefinite programming is equal to 1.2817 · 10^4. Using linear programming, we found a minimum trace of 2.4132 · 10^4, signifying a loss of quality of almost 100%. After solving one iteration of the basis pursuit algorithm we obtain an objective equal to 1.5893 · 10^4; an additional iteration of the algorithm gives 1.3172 · 10^4, and one more provides a value equal to 1.3093 · 10^4, which comes very close to the optimal value. Naturally, on this example we do not need basis pursuit or linear programming to obtain an optimal solution, due to the low complexity of the problem. However, this example indicates that the basis pursuit algorithm can be beneficial for obtaining approximate solutions of large scale Lyapunov inequalities using linear programmes.
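As a minimal illustration of the cone update L_{k+1} = decomp(X_k) described in Section III-B (our own sketch, not the authors' implementation), the following uses a symmetric eigenfactorisation instead of a raw Cholesky factorisation, so that near-singular or slightly indefinite iterates are handled by dropping the offending directions, mimicking the column-removal heuristic above:

```python
import numpy as np

def decomp(X, tol=1e-8):
    # Factor X ~= L^T L; directions with eigenvalue below `tol` are
    # dropped, playing the role of removing near-zero columns of a
    # Cholesky factor (or of an LDL^T factor with negative pivots).
    w, V = np.linalg.eigh(X)
    keep = w > tol
    return np.sqrt(w[keep])[:, None] * V[:, keep].T

# One basis pursuit step then optimises over K(L) = {L^T Q L : Q in DD+},
# a linear reparameterisation, so each iteration remains an LP (or SOCP).
X0 = np.eye(3)                 # a feasible starting point is assumed
L1 = decomp(X0)
assert np.allclose(L1.T @ L1, X0)
```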
IV. H MATRICES AND DIAGONAL STABILITY
The main result of this section concerns diagonal stability
of H matrices, where we sharpen the results from [16] by
providing an explicit diagonal Lyapunov function for a class
of H+ matrices.
Theorem 1: Let −A be an H+ matrix with a nonsingular M(A). Then the following conditions hold:
1) There exist positive vectors v = [v1 . . . vn]^T and w = [w1 . . . wn]^T such that M(A)v and w^T M(A) are also positive.
2) There exists a diagonal X such that −(AX + XA^T) is an H+ matrix. Moreover, we can choose it as X = Pv Pw^{-1}, where Pv = diag{v1, . . . , vn}, Pw = diag{w1, . . . , wn}, and v, w satisfy point 1).
3) There exists a diagonal positive definite matrix Y such that

−Pw A Pw^{-1} Y − Y Pw^{-1} A^T Pw ∈ DD+.   (7)
Proof: 1) By definition, −M(A) is a Metzler matrix whose eigenvalues have non-positive real parts; since M(A) is nonsingular by the premise, −M(A) is a Hurwitz Metzler matrix. Hence the claim follows by applying the results from [23].
2) Let X = Pv Pw^{-1}; then

(M(A)X + XM(A^T)) w = M(A)v + X M(A)^T w ≻ 0,

where the inequality follows since M(A)v and M(A)^T w are positive and X is nonnegative. Hence S = −M(A)X − XM(A^T) is a Metzler matrix, and there exists a positive vector w such that Sw is negative. This implies that S is a symmetric Hurwitz Metzler matrix, which means that M(A)X + XM(A^T) is positive definite.
Note that a_ii < 0 for all i, and compare the entries

(−AX − XA^T)_{ij} = −a_ij x_j − a_ji x_i,
(M(A)X + XM(A^T))_{ij} = −a_ij x_j − a_ji x_i if i = j, and −|a_ij| x_j − |a_ji| x_i if i ≠ j.

It is straightforward to show that M(A)X + XM(A^T) ≤ M(−AX − XA^T); moreover, the elements on the diagonal are equal. This means that we can write M(A)X + XM(A^T) = sI − R1 and M(−AX − XA^T) = sI − R2, where the matrices R1 and R2 satisfy R1 ≥ R2 ≥ 0. According to Wielandt's theorem, ρ(R1) ≥ ρ(R2) (cf. [16]). Therefore the minimal eigenvalue of M(A)X + XM(A^T) is smaller than or equal to the minimal eigenvalue of M(−AX − XA^T). This implies that M(−AX − XA^T) has eigenvalues with positive real part, hence −AX − XA^T is an H+ matrix.
3) Consider the matrix R = Pw M(A) Pv, and let e be the vector of ones. It is easy to see that Re ≻ 0:

Pw M(A) Pv e = Pw M(A) v ≻ 0.

This implies that the matrix Pw M(A) Pv is strictly row diagonally dominant. Similarly, we can show that Pw M(A) Pv is strictly column diagonally dominant. By definition, this implies that −Pw A Pv is a row and column diagonally dominant matrix with positive elements on the diagonal, i.e., a DD+ matrix. Hence the matrix −Pw A Pv − Pv A^T Pw is positive definite. Since we can set Y = Pv Pw, the result follows.
We showed that there exists a diagonal matrix X such that Z = −Pw A Pw^{-1} X − X Pw^{-1} A^T Pw is a DD+ matrix and hence positive definite. Note that the constraint Z = Z^T ∈ DD+ is linear, and if needed we can relax the sparsity constraints on X. This implies that, given an H drift matrix, we can compute an α-diagonal Lyapunov function with an arbitrary α using linear programming.
If the entries of the matrix A are poorly scaled, then solving a linear programme can be numerically challenging. Using our methods, this can be avoided if we compute an initial point using the right and left eigenvectors of M(A) instead of the positive vectors v and w satisfying point 1). Having an initial point re-scales the optimisation programme and can provide feasible points, as shown on a specific example in [24].
Theorem 1 is a direct generalisation of the similar result for Metzler matrices (cf. [23]), but our result can be applied to a broader class of matrices, including lower-triangular matrices. Using Theorem 1, other results for Metzler matrices can be extended, e.g., to the construction of sum- and max-separable Lyapunov functions (cf. [23]).
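The construction of Theorem 1 is straightforward to implement: since M(A) is a nonsingular M-matrix when −A ∈ H+, the vectors v = M(A)^{-1}e and w = M(A)^{-T}e are positive and satisfy point 1), so no LP is even needed in the simple sketch below (ours; the example matrix is hypothetical):

```python
import numpy as np

def comparison(A):
    # M(A) for alpha = {1, ..., 1}: |a_ii| on the diagonal, -|a_ij| elsewhere.
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def theorem1_diagonal_X(A):
    # v = M(A)^{-1} e > 0 and w = M(A)^{-T} e > 0 (M-matrix property),
    # and X = Pv Pw^{-1} solves the Lyapunov inequality diagonally.
    e = np.ones(A.shape[0])
    M = comparison(A)
    v = np.linalg.solve(M, e)
    w = np.linalg.solve(M.T, e)
    return np.diag(v / w)

A = np.array([[-2.0, 1.0, 0.0],
              [ 0.5, -3.0, 1.0],
              [ 0.2, 0.3, -1.0]])      # -A is an H+ matrix
X = theorem1_diagonal_X(A)
print(np.linalg.eigvalsh(A @ X + X @ A.T))   # all negative
```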
The state-space transformation Pw is essential in order to guarantee the diagonal dominance of the inequality. Consider the asymptotically stable matrix

A = [ −1  −2
       2  −5 ]

and a positive definite X = diag{x1, x2}. The matrix −A is an H+ matrix and A is stable. The diagonal dominance of AX + XA^T requires the following inequalities to be fulfilled:

2 x1 > 2 x2 + 2 x1,   2 · 5 x2 > 2 x2 + 2 x1,

for some positive x1, x2. The first inequality is equivalent to 0 > 2 x2, which is impossible to fulfil.
V. α-DIAGONAL STABILITY AND H+ MATRICES
A. A Motivating Example
In this section, we cover two main classes of results for diagonal stability and compare them on a classical example from [14] for cyclic systems. These classes stem from two arguments, based on passivity and on the small gain theorem. We will argue that the H+ matrix condition is an implicit constraint in these stability proofs. In order to explain our motivation, consider an example studied in [14] and let

A0n = [ 0_{1×(n−1)}          −β1
        diag{β2, . . . , βn}  0_{(n−1)×1} ] − diag{α1, . . . , αn},

where αi, βi are positive scalars. This matrix represents the dynamics of a negative feedback interconnection of a cascade of transfer functions Gi(s) = βi/(s + αi). First, let us consider the 2 by 2 case, which gives

A02 = [ −α1  −β1
         β2  −α2 ]

and two transfer functions G1 = β1/(s + α1) and G2 = β2/(s + α2). According to the small gain theorem, the system is stable if

‖β1/(s + α1)‖_{H∞} ‖β2/(s + α2)‖_{H∞} = β1β2/(α1α2) < 1.

This argument can be extended to an arbitrary size matrix, resulting in the condition

(β1 · · · βn)/(α1 · · · αn) < 1.   (8)

Surprisingly, it is straightforward to verify by definition that A0n is an H matrix if and only if (8) holds. Hence, on this loop, the H matrix condition is a small gain condition. Alternatively, using passivity arguments, it was shown in [14] that A0n is asymptotically stable if and only if

(β1 · · · βn)/(α1 · · · αn) < (sec(π/n))^n.   (9)

This in particular means that for n = 2 all matrices of the form A02 are not only stable, but also diagonally stable; however, they may not be H matrices. This analysis is based on passivity arguments and has been extended to less restrictive classes of systems in [15]. It is easy to verify that as n → ∞ the limit (sec(π/n))^n converges to one. Hence it appears (for this class of systems) that, for large dimensions, H matrices constitute a large subset of diagonally stable matrices. We will pursue the relation between H+ matrices and the small gain argument in the α-diagonal case in the remainder of the paper.
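A quick numerical sanity check of the equivalence above (our sketch, with hypothetical gains): the H matrix test on M(A0n) and the small gain condition (8) agree on the cyclic matrix A0n:

```python
import numpy as np

def cyclic_A(alpha, beta):
    # A0n: diagonal -alpha_i, -beta_1 feeding back from the last state,
    # beta_i (i >= 2) on the first subdiagonal.
    n = len(alpha)
    A = -np.diag(alpha)
    A[0, -1] = -beta[0]
    for i in range(1, n):
        A[i, i - 1] = beta[i]
    return A

def is_H_matrix(A):
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return bool(np.min(np.real(np.linalg.eigvals(M))) >= -1e-12)

alpha = np.array([1.0, 2.0, 1.5])
beta = np.array([0.8, 1.1, 1.2])
print(is_H_matrix(cyclic_A(alpha, beta)),
      np.prod(beta) / np.prod(alpha) < 1)   # both True (or both False)
```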
B. Passivity and Small Gain Conditions for α-Diagonal Stability

Let Gc be the closed loop transfer function depicted in Figure 2, which is an interconnection of two Linear Time Invariant (LTI) subsystems

Gi = [ Ai  Bi
       Ci  Di ],

where Ai ∈ R^{ki×ki}, Bi ∈ R^{ki×mi}, Ci ∈ R^{li×ki}, Di ∈ R^{li×mi} and m1 = l2, m2 = l1.

Fig. 2. Feedback interconnection of two stable systems G1, G2.

The closed loop transfer function from [u1, u2] to [y1, y2] has the following state-space realisation

Gc = [ Ac11  Ac12  Bc11  Bc12
       Ac21  Ac22  Bc21  Bc22
       Cc11  Cc12  Dc11  Dc12
       Cc21  Cc22  Dc21  Dc22 ],

where

Ac11 = A1 − B1 R21 D2 C1,   Ac12 = −B1 R21 C2,
Ac21 = B2 R12 C1,           Ac22 = A2 − B2 R12 D1 C2,   (10)

R12 = (I + D1 D2)^{-1}, R21 = (I + D2 D1)^{-1}, and the rest of the matrices are computed accordingly. For the sake of simplicity we assume that this realisation is minimal.
Passivity and small gain arguments can both be used to determine whether the closed loop system is stable, but we will focus on the small gain condition; passivity results in this direction will be addressed in future work (similar ideas were pursued in [25], [26]). It is straightforward to verify that stability of the system with inputs u1, u2 and outputs y1, y2 depends on stability of the transfer function L = (I − G2 G1)^{-1}.
Proposition 4 (Small Gain Theorem): Suppose B is a Banach algebra and Q ∈ B. If ‖Q‖ < 1, then (I − Q)^{-1} exists and

(I − Q)^{-1} = Σ_{k=0}^∞ Q^k.
Applying Proposition 4, we can verify that if ‖G2 G1‖_{H∞} ≤ ‖G2‖_{H∞} ‖G1‖_{H∞} < 1, then the function L, and hence the closed loop, is stable (cf. [27]). We could apply the small gain condition to the closed-loop transfer function, which would result in a condition for α-diagonal stability of the matrix Ac; however, given only a partitioning α = {k1, k2} and a realisation of the closed loop transfer function Gc, these conditions will be hard to verify. We can instead apply a small gain theorem in another way, namely to the matrix Ac directly. In this case, we do not need to know the realisations of the transfer functions G1 and G2; all we need to know is the matrix Ac and the partitioning α. The conditions for α-diagonal stability of Ac are established in the following proposition.
Proposition 5: Let Ac be α-partitioned with α = {k1, k2},

Ac = [ Ac11  Ac12
       Ac21  Ac22 ].

Let K1(s) = −(sI − Ac11)^{-1} Ac12 and K2(s) = (sI − Ac22)^{-1} Ac21, with Hurwitz Ac11, Ac22. If there exists a γ > 0 such that ‖K1‖_{H∞} < 1/γ and ‖K2‖_{H∞} < γ, then the matrix Ac is α-diagonally stable.
Proof: We need to show that there exists an α-diagonal Lyapunov function for the system ẋc = Ac xc. For the sake of clarity we drop the superscript c from Acij and simply write Aij. The inequality ‖K1‖_{H∞} < 1/γ and the Bounded Real Lemma imply that X1 ≻ 0 solves the Riccati equation

0 = A11 X1 + X1 A11^T + γ² X1 X1 + A12 A12^T
  = A11 X1 + X1 A11^T + [X1  A12] [γ² I_{k1}  0; 0  I_{k2}] [X1; A12^T],   (11)

which always has a solution since (I, A11) is a controllable pair (cf. [2]): the control matrix is equal to I, so we can control every state independently.
Again, due to the Bounded Real Lemma, the inequality ‖K2‖_{H∞} < µ is equivalent to

A22 Y2 + Y2 A22^T + µ^{−2} Y2 Y2 + A21 A21^T = 0,   (12)

where Y2 ≻ 0 since (I, A22) is a controllable pair (cf. [2]). Let µ = γ − ε for some ε > 0 such that µ > ‖K2‖_{H∞}, which implies that µ^{−2} Y2 Y2 ⪰ γ^{−2} Y2 Y2 and consequently

A22 Y2 + Y2 A22^T + γ^{−2} Y2 Y2 + A21 A21^T ≺ 0.

Multiplying this inequality by γ^{−2} and setting X2 = γ^{−2} Y2 gives

A22 X2 + X2 A22^T + X2 X2 + γ^{−2} A21 A21^T ≺ 0
⇔ A22 X2 + X2 A22^T + [A21  X2] [γ² I_{k1}  0; 0  I_{k2}]^{−1} [A21^T; X2] ≺ 0.   (13)

Combining the inequalities (11) and (13) yields

0 ≻ A11 X1 + X1 A11^T − (X1 A21^T + A12 X2)(X2 A22 + A22^T X2)^{−1}(A21 X1 + X2 A12^T).   (14)

Applying the Schur complement properties to (14) yields

[ A11 X1 + X1 A11^T   A12 X2 + X1 A21^T
  A21 X1 + X2 A12^T   A22 X2 + X2 A22^T ] ≺ 0,

that is, Ac X + X (Ac)^T ≺ 0 with the α-diagonal X = diag{X1, X2}, which completes the proof.
Our proof is constructive, and shows how to build an α-diagonal Lyapunov function by solving the two Riccati equations (11) and (12) instead of solving an LMI. Next, we link a simplified version of these conditions to H+ conditions on α-partitioned matrices.
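This construction can be sketched numerically (our code, not the authors'): the stabilizing solutions of (11) and (12) are obtained from the stable invariant subspace of the associated Hamiltonian matrices, and X = diag{X1, X2} is then checked against the Lyapunov inequality. The block data and the choice γ = 1, µ = 0.9 are assumptions consistent with the hypotheses of Proposition 5:

```python
import numpy as np
from scipy.linalg import block_diag

def riccati(A, R, Q):
    # Stabilizing solution of A X + X A^T + X R X + Q = 0 via the stable
    # invariant subspace of the Hamiltonian [[A^T, R], [-Q, -A]].
    n = A.shape[0]
    H = np.block([[A.T, R], [-Q, -A]])
    w, V = np.linalg.eig(H)
    stable = np.real(w) < 0
    U, W = V[:n, stable], V[n:, stable]
    X = np.real(W @ np.linalg.inv(U))
    return 0.5 * (X + X.T)

A11 = np.array([[-2.0, 0.0], [0.0, -3.0]])
A22 = np.array([[-4.0, 1.0], [0.0, -5.0]])
A12 = 0.5 * np.ones((2, 2))
A21 = 0.4 * np.ones((2, 2))
gamma, mu = 1.0, 0.9                  # mu = gamma - eps, as in the proof

X1 = riccati(A11, gamma**2 * np.eye(2), A12 @ A12.T)   # eq. (11)
Y2 = riccati(A22, mu**-2 * np.eye(2), A21 @ A21.T)     # eq. (12)
X = block_diag(X1, Y2 / gamma**2)

Ac = np.block([[A11, A12], [A21, A22]])
print(np.linalg.eigvalsh(Ac @ X + X @ Ac.T))   # all negative
```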
C. Conditions for α-Diagonal Stability via H+ Matrices
The authors in [18] showed that A is Hurwitz if it is
an α-partitioned matrix such that Mα (A) ∈ DD+ , and the
matrices Aii are Hurwitz and Metzler for all i. In particular,
this result shows that stability of A is implied by stability of
all the blocks Aii . We provide a generalisation of this result.
Lemma 1: Let A be an α-partitioned matrix and let Mα(A) be an H+ matrix. Let also the Aii be Hurwitz matrices, and let the Hamiltonian matrices

Hi = [ Aii   γi^{−2} I
       −I   −Aii^T ]   (15)

have no purely imaginary eigenvalues for γi = ‖Aii^{−1}‖_2 + ε, for all ε > 0. Then A is a Hurwitz matrix.
Proof: We prove the result by contradiction. Suppose A has an eigenvalue with a positive real part. Since Mα(A) is an H+ matrix, there exist positive scalars di such that for every i

‖Aii^{−1}‖_2^{−1} > Σ_{j≠i} (dj/di) ‖Aij‖_2.   (16)

The matrix A is unstable if and only if D^{−1} A D is unstable, with D = diag{d1 I_{k1}, . . . , dn I_{kn}}. Let λ be an eigenvalue of D^{−1} A D with a positive real part. By Proposition 2 there exists an index i such that

‖(λI − Aii)^{−1}‖_2^{−1} ≤ Σ_{j≠i} ‖(dj/di) Aij‖_2 = Σ_{j≠i} (dj/di) ‖Aij‖_2.   (17)

Now, since the Hamiltonian matrix Hi has no purely imaginary eigenvalues for all ε > 0 and Aii is Hurwitz, we have ‖(sI − Aii)^{−1}‖_{H∞} = ‖Aii^{−1}‖_2. Therefore the maximum of ‖(zI − Aii)^{−1}‖_2 over z with Re(z) ≥ 0 is equal to ‖Aii^{−1}‖_2, and in particular ‖(λI − Aii)^{−1}‖_2 ≤ ‖Aii^{−1}‖_2. Hence, due to (16),

‖(λI − Aii)^{−1}‖_2^{−1} ≥ ‖Aii^{−1}‖_2^{−1} > Σ_{j≠i} (dj/di) ‖Aij‖_2.

We arrive at a contradiction with (17), which completes the proof.
Lemma 1 allows us to determine the stability of A by verifying stability of the blocks Aii, subject to the condition (15) and Mα(A) being an H+ matrix. This, however, does not directly imply that there exists an α-diagonal Lyapunov function. In what follows, we only present the result for the α = {k1, k2} partitioning.
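The hypotheses of Lemma 1 are easy to test numerically; below is a sketch (ours) that combines the block comparison-matrix test with the Hamiltonian test (15) for a single small ε (the lemma requires it for all ε > 0); the comparison-matrix helper is duplicated so the snippet stays self-contained:

```python
import numpy as np

def block_comparison(A, blocks):
    n = len(blocks)
    M = np.zeros((n, n))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            Aij = A[np.ix_(bi, bj)]
            M[i, j] = (np.linalg.svd(Aij, compute_uv=False)[-1] if i == j
                       else -np.linalg.norm(Aij, 2))
    return M

def hamiltonian_test(Aii, eps=1e-6):
    # Hamiltonian (15) with gamma_i = ||Aii^{-1}||_2 + eps; no purely
    # imaginary eigenvalues certifies that the H-infinity norm of
    # (sI - Aii)^{-1} equals ||Aii^{-1}||_2.
    k = Aii.shape[0]
    gamma = np.linalg.norm(np.linalg.inv(Aii), 2) + eps
    H = np.block([[Aii, np.eye(k) / gamma**2], [-np.eye(k), -Aii.T]])
    return np.min(np.abs(np.real(np.linalg.eigvals(H)))) > 1e-9

def lemma1_holds(A, blocks):
    M = block_comparison(A, blocks)
    h_plus = (np.min(np.real(np.linalg.eigvals(M))) >= 0
              and np.all(np.diag(M) > 0))
    hurwitz = all(np.max(np.real(np.linalg.eigvals(A[np.ix_(b, b)]))) < 0
                  for b in blocks)
    return h_plus and hurwitz and all(hamiltonian_test(A[np.ix_(b, b)])
                                      for b in blocks)
```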
Theorem 2: Let A be α-partitioned with α = {k1, k2}. Then, under the premise of Lemma 1, the matrix A is α-diagonally stable.
Proof: The proof uses the small gain argument for the systems G1(s) = (sI − A11)^{−1} A12 and G2(s) = (sI − A22)^{−1} A21. We have that ‖G1‖_{H∞} ‖G2‖_{H∞} ≤ ∆, where

∆ := ‖A21‖_2 ‖(sI − A11)^{−1}‖_{H∞} ‖A12‖_2 ‖(sI − A22)^{−1}‖_{H∞}.

Under the premise of Lemma 1 we have that γ ‖A12‖_2 < ‖A11^{−1}‖_2^{−1} and γ^{−1} ‖A21‖_2 < ‖A22^{−1}‖_2^{−1}. Hence ‖G1‖_{H∞} < γ^{−1}, while ‖G2‖_{H∞} < γ. Proposition 5 proves the claim.
Note that if A is such that Mα(A), Mα(A^T) ∈ DD+, it is not generally true that Mα(A + A^T) ∈ DD+. This property holds for α = {1, . . . , 1} and was used in the proof of Theorem 1. The absence of this property for a general α is the major obstacle to extending Theorem 1 to the α-diagonal case.
VI. NUMERICAL EXAMPLE

Consider the one-dimensional heat equation in the form

∂T(t, x)/∂t = α ∂²T(t, x)/∂x² + u(x, t),   x ∈ (0, 1), t > 0,
T(0, t) = T(1, t) = 0,   t ≥ 0,
T(x, 0) = 0,   x ∈ [0, 1],

with α = −0.01, where T(t, x) denotes the temperature at time t at position x. Assume we want to heat (i.e., apply an input) at a point of the rod located at 1/3 of its length, and observe the temperature at a point located at 2/3 of its length. Then, as in [28], we can obtain the following spatially discretised model:

Ẋ(t) = A X(t) + B u(t),   X(0) = 0,
Y(t) = C X(t),

where X(t) ∈ R^n is the temperature at time t at each of the n spatial discretisation points, and

A = α(n + 1)² tridiag(−1, 2, −1) ∈ R^{n×n},   (18)

i.e., the tridiagonal matrix with 2 on the main diagonal and −1 on the first sub- and super-diagonals, scaled by α(n + 1)². The matrices B ∈ R^{n×1}, C ∈ R^{1×n} are equal to zero except for the entries ⌈n/3⌉ and ⌈2n/3⌉, respectively, which are equal to one.

TABLE I
TIME TO COMPUTE THE GENERALISED CONTROLLABILITY GRAMIAN

Size of the system      |   50 |  100 |   150 |  200
SDP solution            | 0.94 | 22.7 | 310.7 |   NA
SOCP relaxation         | 0.74 | 4.11 |  11.9 | 31.2
LP relaxation           | 0.01 | 0.02 |  0.05 | 0.10
LP relaxation w scaling | 0.01 | 0.03 |  0.05 | 0.10
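For reproducibility, a sketch of the discretised model (our code; it follows (18) and the B, C placement described above):

```python
import numpy as np

def heat_model(n, alpha=-0.01):
    # A = alpha (n+1)^2 tridiag(-1, 2, -1); with alpha < 0 the matrix is
    # Hurwitz and Metzler (negative diagonal, nonnegative off-diagonal).
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A = alpha * (n + 1)**2 * T
    B = np.zeros((n, 1)); B[int(np.ceil(n / 3)) - 1, 0] = 1.0
    C = np.zeros((1, n)); C[0, int(np.ceil(2 * n / 3)) - 1] = 1.0
    return A, B, C

A, B, C = heat_model(50)
print(np.max(np.real(np.linalg.eigvals(A))))   # negative: stable
```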
Our goal is to compute the diagonal controllability Gramians P for various n with a minimal trace, which we will do in the dual form:

max_Y  trace(B B^T Y)
s.t.  diag{Y A + A^T Y + I} = 0,
      Y ≺ 0.
In the dual form, we have an LP relaxation, where −Y belongs to the dual of the cone of symmetric DD+ matrices, and an SOCP relaxation, where −Y belongs to the dual of the cone of symmetric H+ matrices. We solve only the dual SDP formulation and the corresponding relaxations. Due to the structure of the system, the trace of its Gramians does not change much with the dimension, and we always get optimal values in the range between 6.5 and 6.6 for the SDP programme. Remarkably, the results for the SOCP relaxation are only slightly higher, and in the same range of values. This, however, is due to the structure of the system, where the drift is Metzler and the matrix BB^T has only one non-zero entry on the diagonal. The optimal solutions for the LP relaxation are in the range between 10.7 and 10.9; hence there is a drop in quality when using this relaxation.
In Table I, we provide the computational times for various system sizes n. The entry "NA" means that the programme terminated due to running out of memory. Note, however, that we do not take into account the time for parsing the constraints (that is, we report only the variable "solvertime" in Yalmip [29]). Since A is a Metzler matrix, it is straightforward to find a transformation T such that T A T^{-1} becomes a diagonally dominant matrix (see Theorem 1). We have implemented the LP relaxation while transforming the A, B matrices with such a transformation T. The optimal solutions for the trace vary between 7.1 and 7.3, thus drastically improving the quality of the relaxation with a mild loss (if any) in computational time.
VII. DISCUSSION AND CONCLUSION

We have provided some sufficient conditions on A which guarantee the existence of feasible points in (6), and we have interpreted these results as small-gain-like conditions. Moreover, our sufficient conditions also provide computationally cheap solutions; for example, Proposition 5 replaces an LMI constraint with the solution of two Riccati equations. If we drop the "X is α-diagonal" constraint and set Q = AX + XA^T, then the LMI (2) has a solution for any Q ≺ 0 if and only if λi(A) + λ̄j(A) ≠ 0. Since Q is an arbitrary negative definite matrix, we can replace the constraint AX + XA^T ≺ 0 with −AX − XA^T ∈ DD+. Thus our solvability LMI becomes a linear programme. Finally, we showed how our constructive proofs can be used to initialise a recently developed basis pursuit algorithm for solving large scale optimization problems.
REFERENCES
[1] A. A. Ahmadi and G. Hall, “Sum of squares basis pursuit with
linear and second order cone programming,” 2015, to appear in
Contemporary Mathematics.
[2] K. Zhou, J. C. Doyle, and K. Glover, Robust and optimal control.
Prentice Hall New Jersey, 1996, vol. 40.
[3] F. Lin, M. Fardad, and M. R. Jovanović, “Design of optimal sparse
feedback gains via the alternating direction method of multipliers,”
IEEE Trans Automat Control, vol. 58, no. 9, pp. 2426–2431, Sep 2013.
[4] H. Sandberg and R. M. Murray, “Model reduction of interconnected
linear systems,” Optimal control applications & methods, vol. 30,
no. 3, pp. 225–245, 2009.
[5] R. P. Mason and A. Papachristodoulou, “Chordal sparsity, decomposing SDPs and the Lyapunov equation,” in Proc Am Control Conf, 2014,
pp. 531–537.
[6] S. Kim, M. Kojima, M. Mevissen, and M. Yamashita, “Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite
matrix completion,” Mathematical programming, vol. 129, no. 1, pp.
33–68, 2011.
[7] M. Yamashita, K. Fujisawa, and M. Kojima, “SDPARA: Semidefinite
programming algorithm parallel version,” Parallel Computing, vol. 29,
no. 8, pp. 1053–1067, 2003.
[8] A. A. Ahmadi and A. Majumdar, “DSOS and SDSOS optimization:
LP and SOCP-based alternatives to sum of squares optimization,” in
Proc Conf Inform Sci Syst. Princeton University, 2014.
[9] A. Majumdar, A. A. Ahmadi, and R. Tedrake, “Control and verification
of high-dimensional systems with DSOS and SDSOS programming,”
in IEEE Conf Decision Control, 2014, pp. 394–401.
[10] D. Carlson, D. Hershkowitz, and D. Shasha, "Block diagonal semistability factors and Lyapunov semistability of block triangular matrices," Linear Algebra Appl, vol. 172, pp. 1–25, 1992.
[11] J. Anderson and A. Sootla, “Decentralised H2-norm estimation and
guaranteed error bounds using structured gramians,” in Proc Sym Math
Theory Netw Syst, Groningen, Netherlands, July. 2014.
[12] P. Trnka, C. Sturk, H. Sandberg, V. Havlena, and J. Rehor, “Structured
model order reduction of parallel models in feedback,” IEEE Trans
Control Systems Technology, vol. 21, no. 3, pp. 739–752, 2013.
[13] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. SIAM, 1994, vol. 9.
[14] M. Arcak and E. D. Sontag, “Diagonal stability of a class of cyclic
systems and its connection with the secant criterion,” Automatica,
vol. 42, no. 9, pp. 1531–1537, 2006.
[15] M. Arcak, “Diagonal stability on cactus graphs and application to
network stability analysis,” IEEE Trans Autom Control, vol. 56, no. 12,
pp. 2766–2777, 2011.
[16] D. Hershkowitz and H. Schneider, “Lyapunov diagonal semistability
of real H-matrices,” Linear Algebra Appl, vol. 71, pp. 119–149, 1985.
[17] S.-h. Xiang and Z.-y. You, "Weak block diagonally dominant matrices, weak block H-matrices and their applications," Linear Algebra Appl, vol. 282, no. 1, pp. 263–274, 1998.
[18] D. G. Feingold, R. S. Varga, et al., "Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem," Pacific J. Math, vol. 12, no. 4, pp. 1241–1250, 1962.
[19] R. S. Varga, “On recurring theorems on diagonal dominance,” Linear
Algebra Appl, vol. 13, no. 1, pp. 1–9, 1976.
[20] J. Liu and Y. Huang, "Some properties on Schur complements of H-matrices and diagonally dominant matrices," Linear Algebra Appl, vol. 389, pp. 365–380, 2004.
[21] E. G. Boman, D. Chen, O. Parekh, and S. Toledo, “On factor width and
symmetric H-matrices,” Linear Algebra Appl, vol. 405, pp. 239–248,
2005.
[22] S. Boyd, L. Ghaoui, E. Feron, and V. Balakrishnan, Linear matrix
inequalities in system and control theory. SIAM, 1994, vol. 15.
[23] A. Rantzer, “Scalable control of positive systems,” European Journal
of Control, vol. 24, pp. 72–80, 2015.
[24] A. Sootla and J. Anderson, “Structured projection-based
model reduction with application to stochastic biochemical
networks,” Submitted to IEEE Trans. Autom. Control, Oct. 2015,
http://arxiv.org/abs/1510.05784.
[25] C. Sturk, H. Sandberg, P. Trnka, V. Havlena, and J. Rehor, “Structured
model order reduction of boiler-header models,” in Proceedings of the
18th IFAC World Congress, vol. 18, 2011, pp. 3341–3347.
[26] J. Anderson, A. Teixeira, H. Sandberg, and A. Papachristodoulou,
“Dynamical system decomposition using dissipation inequalities,” in
IEEE Conf Decision Control, 2011, pp. 211–216.
[27] H. K. Khalil, Nonlinear systems. Prentice Hall, 2002.
[28] Y. Chahlaoui and P. V. Dooren, "A collection of benchmark examples for model reduction of linear time invariant dynamical systems," Université catholique de Louvain, SLICOT Working Note 2002-2, February 2002.
[29] J. Löfberg, “YALMIP: A toolbox for modeling and optimization in
MATLAB,” in Proceedings of the 2004 IEEE International Symposium
on Computer Aided Control Systems Design. IEEE, 2004, pp. 284–
289.
Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays
Lise Aubin, Mehdi Khamassi, and Benoı̂t Girard
Sorbonne Université, CNRS, Institut des Systèmes Intelligents
et de Robotique (ISIR), F-75005 Paris, France
[email protected]
Abstract. During sleep and awake rest, the hippocampus replays sequences of place cells that have been activated during prior experiences.
These have been interpreted as a memory consolidation process, but recent results suggest a possible interpretation in terms of reinforcement
learning. The Dyna reinforcement learning algorithms use off-line replays
to improve learning. Under limited replay budget, a prioritized sweeping
approach, which requires a model of the transitions to the predecessors,
can be used to improve performance. We investigate whether such algorithms can explain the experimentally observed replays. We propose
a neural network version of prioritized sweeping Q-learning, for which
we developed a growing multiple expert algorithm, able to cope with
multiple predecessors. The resulting architecture is able to improve the
learning of simulated agents confronted to a navigation task. We predict that, in animals, learning the world model should occur during rest
periods, and that the corresponding replays should be shuffled.
Keywords: Reinforcement Learning, Replays, DynaQ, Prioritized Sweeping, Neural Networks, Hippocampus, Navigation
1 Introduction
The hippocampus hosts a population of cells responsive to the current position
of the animal within the environment, the place cells (PCs), a key component of
the brain navigation system [1]. Since the seminal work of [2], it has been shown
that PCs are reactivated during sleep – obviously without any locomotion –
and that these reactivations are functionally linked with the improvement of the
learning performance of a navigation task [3]. Similar reactivations have been
observed in the awake state [4], while the animal is immobile, either consuming
food at a reward site, waiting at the departure site for the beginning of the
next trial or stopped at a decision point. These reactivations contain sequences
of PCs’ activations experienced in the awake state (forward reactivations) [5],
sequences played in the reverse order (backward reactivations) [4], and sometimes
never experienced sequences (resulting from the concatenation of experienced
sequences) [6]. These reactivations have been interpreted in the light of the
act
1
.6
.2
dist
0
T2
R1
T1
R2
.
.
.
0
0.2
0.6
1
φ= 0.6
0.2
0
.
.
1
0.5
Location
Memory
Fig. 1. Model of the rat experiment used in [6]. The maze is discretized into
32 positions (squares). The agent can use 4 discrete actions (N,E,S,W). The input
state φ is the concatenation of 32 location components and two reward memory components. The location part of φ represents the activation of 32 place cells co-located
with the maze discrete positions, their activity act depends on the Manhattan distance of the agent to the cell. All figures by Aubin & Girard, 2018; available at
https://doi.org/10.6084/m9.figshare.5822112.v2 under a CC-BY4.0 license.
memory consolidation theory [7]: they would have the role of copying volatile
hippocampal memories into the cortex [8] for reorganization and longer-term
storage [9]. Some recent results have however shown that these reactivations
also have a causal effect on reinforcement learning processes [3,10].
A number of reinforcement learning (RL) algorithms make use of reactivations of their inputs, reminiscent of hippocampal reactivations, that are thus
candidates to explain this phenomenon [11]. Among them, the Dyna family of
algorithms [12] is of special interest because it was specifically designed to make
the best possible use of alternation between on-line and off-line learning phases
(i.e. phases during which the agent acts in the real world or in simulation). We
concentrate here on the Q-learning version of Dyna (Dyna-Q). When operating
on-line, Dyna-Q is indistinguishable from the original model-free Q-learning algorithm: it computes reward prediction error signals, and uses them to update
the estimated values of the (state, action) couples, Q(s, a). In its original version
[12], when off-line, the Dyna algorithm reactivates randomly chosen quadruplets
composed of an initial state, a chosen action, and the predicted resulting state
and reward (produced by a learned world model, this phase being thus model-based), in order to refine the on-line estimated values. However, when the number
of reactivations is under a strict budget constraint, it is more efficient to select
those that will provide more information: those that effectively generated a large
reward prediction error in the last on-line phase, and those that are predicted
to do so by the world model, a principle called prioritized sweeping [13,14].
We are here interested in mimicking the process by which the basal ganglia,
which is central for RL processes [15], can use the state representations of the
world that are provided by the hippocampus. The manipulated state descriptor
will thus be a population activity vector, and we will represent the Q-values and
the world model with neural network approximators [16].
In the following, we describe the rat experimental setup proposed in [6],
and how we simulated it. In this task, a state can have multiple predecessor
states resulting from the execution of a single action; we thus present a modified
Dyna-Q learning algorithm, with a special stress on the neural-network algorithm we designed to learn to approximate binary relations (not restricted to
functions) with a growing approach: GALMO for Growing Algorithm to Learn
Multiple Outputs. Our results successively illustrate three main points. First,
because of interferences between consecutively observed states during maze experience, themselves due to the use of a neural-network function approximator,
the world model had to be learned with shuffled states during off-line replay.
Second, GALMO allows to efficiently solve the multiple predecessor problem.
Third, the resulting system, when faced with a training schedule similar to [6],
generates a lot of disordered state replays, but also a non-negligible set of varied
backward and forward replay sequences, without explicitly storing and replaying
sequences.
2 Methods
2.1 Experimental task
We aim at modeling the navigation task used in [6]: two successive T-mazes (T1
and T2 on Fig. 1), with lateral return corridors. The left and right rewarding
sites deliver food pellets with different flavors. The training involves daily changing contingencies, forcing rats to adapt their choice to turn either left or right at
the final choice (T2) based on the recent history of reward. These contingencies
are: 1) always turn right, while the left side of the maze is blocked; 2) always
turn left, while the right side of the maze is blocked; 3) always turn right; 4)
always turn left; 5) alternate between left and right on a lap-by-lap basis.
Rats attempting to run backward on the maze were physically prevented from doing so by the experimenter. They had forty trials the first day to learn task 1, and
forty trials the second day to learn task 2. Then, depending on their individual
learning speed, rats had between seventeen and twenty days to learn task 3, 4
and 5 (a single condition being presented each day). Once they reached at least
80% success rate on all tasks, rats were implanted with electrodes; after recovery,
recording sessions during task performance lasted for six days.
During the six recording sessions, the reward contingency was changed approximately midway through the session and hippocampal replays were analyzed
when rats paused at reward locations. Original analyses of replayed sequences
[6] revealed that: during same-side replays (i.e., replays representing sequences
of previously visited locations on the same arm of the maze as the current rat
position) forward and backward replays started from the current position; during opposite-side replays (i.e., representing locations on the opposite arm of the
maze) forward replays occurred mainly on the segment leading up to reward
sites, and backward replays covered trajectories ending near reward sites. In
general, the replay content did not seem to only reflect recently experienced trajectories, since trajectories experienced 10 to 15 minutes before were replayed
as well. Indeed, there were more opposite-side replays during task 3 and 4 than
during the alternation task. Finally, among all replays, a few were shortcuts
never experienced before which crossed a straight path on the top or bottom of
the maze between the reward sites.
2.2 Simulation
We have reproduced the T-maze configuration with a discrete environment
composed of 32 squares (Fig. 1, left), each of which represents a 10 × 10 cm area. States are represented by a vector φ concatenating place-cell activity and a memory of past rewards (Fig. 1, right). The modeled place cells are centered on the discrete positions; their activity (color-coded on Fig. 1) decreases with the Manhattan distance between the simulated rat's position and the position they
encode (top of Fig. 1). When a path is blocked (contingencies 1 and 2), the
activity field does not expand beyond walls and will thus shrink, as is the case
of real place cells [17]. To represent the temporal dimension, which is essential
during the alternation task, we have added two more components in the state’s
vector representation (Fig. 1, right): the left side reward memory (L) and the
right side reward memory (R). They take a value of 1 if the last reward was
obtained on that side, 0.5 if the penultimate reward was on that side, and 0 if
that side has not been rewarded during the last two reward events. Therefore,
after two successful laps, the task at hand can be identified by the agent based
on the value of this memory (Tab. 1). This ability to remember the side of the
rewards is supposed to be anchored both on the different position and flavor cues
that characterize each side. Since it has been shown that, beyond purely spatial
information, the hippocampus contains contextual information important for the
task at hand [18], we hypothesize that this memory is also encoded within the
hippocampus, along with the estimation of the agent’s current position.
The agent can choose between four actions: North, South, East and West.
As in the real experiment, the agent cannot run backward.
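A sketch of this state encoding (ours; the decay profile 1.0/0.6/0.2 is read off Fig. 1 and is an assumption, and the wall-induced field shrinking described above is ignored for simplicity):

```python
import numpy as np

def encode_state(agent_xy, positions, reward_memory, act=(1.0, 0.6, 0.2)):
    # positions: list of the 32 discrete (x, y) maze coordinates
    # (hypothetical layout); reward_memory: (L, R) with values in {0, 0.5, 1}.
    phi = np.zeros(len(positions) + 2)
    for i, (x, y) in enumerate(positions):
        d = abs(x - agent_xy[0]) + abs(y - agent_xy[1])  # Manhattan distance
        phi[i] = act[d] if d < len(act) else 0.0
    phi[-2:] = reward_memory
    return phi
```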
Table 1. State of the L and R memory components of φ and corresponding meaning in terms of task at hand, after two successful laps.

L   | R   | Task identification (after 2 laps)
1   | 0   | Always turn right (Tasks 1 & 3)
0   | 1   | Always turn left (Tasks 2 & 4)
0.5 | 1   | Alternation (Task 5), go left next time
1   | 0.5 | Alternation (Task 5), go right next time
2.3 Neural DynaQ with a prioritized sweeping algorithm
Our algorithm is based on a Dyna architecture [12] which means that, as
in model-based architectures, we need to learn a world model composed of a
reward and a transition model [19]. In order to implement prioritized sweeping
[13,14], the transition model must be designed so as to allow the prediction of
the predecessors of a state s given an action a, because it will be needed to backpropagate the reward prediction computed in state s to its predecessors. Hence,
our architecture is composed of two distinct parts: one dedicated to learning the
world model, and the other one to learning the Q-values.
Algorithm 1 LearnWM: learn the world model
collect S // a set of (φt, φt−1, a, r) quadruplets
for k ∈ {N, S, E, W} do
  S_P^k ← {(φt, φt−1) : (φt, φt−1, a, r) ∈ S and a = k}
  S_R^k ← {(φt, r) : (φt, φt−1, a, r) ∈ S and a = k}
  for f ∈ {P, R} do
    // P, R: Predecessor and Reward types of networks
    N_f^k ← null // list of networks (outputs)
    G_f^k ← null // list of networks (gates)
    create N_new^k ; append N_new^k to N_f^k
    create G_new^k ; append G_new^k to G_f^k
    GALMO(S_f^k, N_f^k, G_f^k) // refer to Algo 2 for this specific training procedure
  end for
end for
Learning the world model. Two sets of neural networks compose the world model. Four reward networks N_R^a, one for each action a, learn the association between (state, action) couples and rewards (N_R^a : s → r(s, a)). Four other networks N_P^a learn the states from which a transition to a given state s is produced after execution of action a, i.e., the predecessors of s (N_P^a : s → {s′}).
Owing to the nature of the task (navigation constrained by corridors) and
the states’ representation, the data that must be learned are not independent.
Indeed, successive state vectors are very similar due to the overlap between
place fields, and are always encountered in the same order during task execution (because the agent always performs the same stereotyped trajectories along
the different corridors). However, it is well known that the training of a neural
network is guaranteed to converge only if there is no correlation in the sequence
of samples submitted during learning, a condition that is often not respected
when performing on-line reinforcement learning [20]. We indeed observed that
in the task at hand, despite its simplicity, it was necessary to store the successive
observations and to train the world model off-line with a shuffled presentation of
the training samples (for the general scheme of the off-line training, see Algo. 1).
For that reason, we created a dataset S compiling all transitions, i.e., (φt, φt−1, a, r) quadruplets from all tasks. When there is no predecessor of φt by action a (as can be the case when this action would require passing through a wall), the transition is represented as (φt, 0, a, r): these "null" transitions allow the N_P^a networks to represent the fact that the transition does not exist.
Fig. 2. Example of multiple predecessors in the alternation task. The agent
first correctly goes to the right (left). It then goes to the left (middle) where, at the
reward site, its predecessor state has a (L = 0.5, R = 1) memory component. It then
makes a wrong decision and goes to the left again (right), but is not rewarded: at this
last position, the location component of the predecessor state (white mouse) is identical
but the memory component is different (L = 1, R = 0.5) from the previous lap. Violet
gradient: past trajectory; white mouse: previous position; gray mouse: current position;
white R: agent rewarded; black R: current position of the reward.
Despite its simplicity, the navigation task modeled here has some specificities: during task 5 (alternation), some states have more than one predecessor for a given action (see an example on Fig. 2); the algorithm must thus be capable of producing more than one output for the same input. To do that, we have created a growing type of algorithm, inspired by mixture-of-experts algorithms [21] (which we call here the GALMO algorithm, see Algo. 2), based on the following principles:
– The algorithm should allow the creation of multiple Ni networks (if needed) so that a single input can generate multiple outputs. Each of these networks is coupled with a gating network Gi, used after training to know if the output of Ni has to be taken into account when a given sample is presented.
– When a sample is presented, the algorithm should only train the Ni network that generates the minimal error (to enforce network specialization), and remember this training event by training Gi to produce 1 and the other Gk≠i to produce 0.
– The algorithm should track the statistics of the minimal training errors of each sample during an epoch, so as to detect outliers (samples whose error is much higher than the others'). GALMO assumes that these outliers are caused by inputs that should predict multiple outputs and are stuck in predicting the barycenter of the expected outputs. A sample is considered an outlier when its error is larger than a threshold θ, equal to the median of the current error distribution plus w times the amplitude of the third quartile (Q3 − median). When such a detection occurs, a new network is created on the fly, based on a copy of the network that produced the minimal error for the sample. The new network is then trained once on the sample at hand.
Algorithm 2 GALMO: Growing algorithm to learn multiple outputs
INPUT: S, N, G
OUTPUT: N, G
// S = ⟨(in0, out0), . . . , (inn, outn)⟩ : list of samples
// N = ⟨N0⟩ : list of neural networks (outputs)
// G = ⟨G0⟩ : list of neural networks (gates)
θ ← +∞
for nbepoch ∈ {1, . . . , maxepoch} do
  M ← null // M is a list of the minimal error per sample
  for each (in, out) ∈ S do
    E ← null // E is a list of errors for a sample
    for each N ∈ N do
      append ‖N(in) − out‖L1 to E
    end for
    if min(E) < θ then
      backprop(N_argmin(E), in, out)
      backprop(G_argmin(E), in, 1)
      for each G ∈ G with G ≠ G_argmin(E) do
        backprop(G, in, 0)
      end for
    else
      create Nnew ; Nnew ← copy(N_argmin(E)) ; append Nnew to N
      backprop(Nnew, input = in, target = out)
      create Gnew ; append Gnew to G
      backprop(Gnew, in, 1)
    end if
    append min(E) to M
  end for
  θ ← median(M) + w ∗ (Q3(M) − median(M))
end for
In principle, the algorithm could be modified to limit the maximal number
of created networks, or to remove the networks that are not used anymore, but
these additions were not necessary here.
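To make the procedure concrete, here is a minimal PyTorch sketch of Algo. 2 (our illustration; the layer sizes and sigmoid activations loosely follow Tab. 2, while the plain SGD updates and the L1 losses on the gates are simplifying assumptions, and `samples` is a list of (input, target) tensor pairs):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(n_in, n_hid, n_out):
    return nn.Sequential(nn.Linear(n_in, n_hid), nn.Sigmoid(),
                         nn.Linear(n_hid, n_out))

def sgd_step(net, x, target, lr=0.1):
    loss = F.l1_loss(net(x), target, reduction='sum')
    net.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in net.parameters():
            p -= lr * p.grad
    return loss.item()

def galmo(samples, n_in, n_out, max_epoch=100, w=3.0):
    nets, gates = [make_net(n_in, 26, n_out)], [make_net(n_in, 26, 1)]
    theta = float('inf')
    for _ in range(max_epoch):
        min_errors = []
        for x, y in samples:
            with torch.no_grad():
                errs = [F.l1_loss(n(x), y, reduction='sum').item()
                        for n in nets]
            k = min(range(len(errs)), key=errs.__getitem__)
            if errs[k] < theta:
                sgd_step(nets[k], x, y)
                sgd_step(gates[k], x, torch.ones(1))
                for i, g in enumerate(gates):
                    if i != k:
                        sgd_step(g, x, torch.zeros(1))
            else:   # outlier: duplicate the best expert, train the copy once
                nets.append(copy.deepcopy(nets[k]))
                sgd_step(nets[-1], x, y)
                gates.append(make_net(n_in, 26, 1))
                sgd_step(gates[-1], x, torch.ones(1))
            min_errors.append(min(errs))
        e = torch.tensor(min_errors)
        theta = (e.median() + w * (e.quantile(0.75) - e.median())).item()
    return nets, gates
```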
Neural Dyna-Q. The second part of the algorithm works as a classical neural-network-based Dyna-Q [16] with prioritized sweeping [13,14]. As in [16], the Q-values are represented by four 2-layer feedforward neural networks N_Q^a (one per action). During on-line phases, the agent makes decisions that drive its movements within the maze, and stores the samples in a priority queue; their priority is the absolute value of the reward prediction error, i.e., |δ|. Every time the agent receives a reward, similarly to rats, it stops, and replays are simulated with a budget B (Algo. 3): the samples with the highest priority are replayed first, their potential predecessors are then estimated and placed in the queue with their respective priorities, and so on until the replay budget is exhausted. The various parameters used in the simulations are summarized in Tab. 2.

Algorithm 3 Neural Dyna-Q with prioritized sweeping & multiple predecessors
INPUT: φt=0, N_P, G_P, N_R, G_R
OUTPUT: N_Q^{a∈{N,S,E,W}}
PQueue ← {} // PQueue: empty priority queue
nbTrials ← 0
repeat
  a ← softmax(N_Q(φt))
  take action a, receive r, φt+1
  backprop(N_Q^a, input = φt, target = r + γ max_a(N_Q(φt+1)))
  put φt in PQueue with priority |N_Q^a(φt) − (r + γ max_a(N_Q(φt+1)))|
  if r > 0 then
    nbReplays ← 0
    Pr ← ⟨⟩ // empty list of predecessors
    repeat
      φ ← pop(PQueue)
      for each G ∈ G_P do
        if G(φ) > 0 then
          k ← index(G)
          append N_P^k(φ) to Pr
        end if
      end for
      for each p ∈ Pr s.t. norm(p) > ε do
        for each a ∈ {N, S, E, W} do
          backprop(N_Q^a, input = p, target = N_R^a(p) + γ max_a(N_Q(φ)))
          put p in PQueue with priority |N_Q^a(p) − (N_R^a(p) + γ max_a(N_Q(φ)))|
          nbReplays ← nbReplays + 1
        end for
      end for
    until PQueue empty OR nbReplays ≥ B
  end if
  φt ← φt+1
  nbTrials ← nbTrials + 1
until nbTrials = maxNbTrials
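The priority queue and replay budget of Algo. 3 map naturally onto Python's heapq; a sketch (ours; `predecessors`, `reward_model` and `q_update` are assumed callables standing for the gated N_P networks, N_R, and the N_Q backprop returning the new prediction error δ):

```python
import heapq
import itertools
import numpy as np

counter = itertools.count()   # tie-breaker so states are never compared

def push(queue, priority, phi):
    heapq.heappush(queue, (-priority, next(counter), phi))  # min-heap: negate

def replay(queue, predecessors, reward_model, q_update, budget=20, eps=1e-3):
    n_replays = 0
    while queue and n_replays < budget:
        _, _, phi = heapq.heappop(queue)      # highest |delta| first
        for p in predecessors(phi):           # one output per active gate
            if np.linalg.norm(p) <= eps:      # skip "null" predecessors
                continue
            for a in range(4):                # actions N, S, E, W
                delta = q_update(p, a, reward_model(p, a), phi)
                push(queue, abs(delta), p)
                n_replays += 1
```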
3 Results
3.1 Learning the world model
Because of correlations in sample sequences, the world model is learned off-line:
the samples are presented in random order, so as to break temporal correlations.
We illustrate this necessity with the learning of the reward networks NR : when
trained on-line (Fig. 3, left), the reward networks make a lot of erroneous predictions for each possible task, while when trained off-line, with samples presented in randomized order, the predictions are correct (Fig. 3, right).

Table 2. Parameter values.

value                 | parameter
4000                  | maxepoch: number of epoch replays to train the world model
3                     | w: gain of the outlier detector threshold in GALMO
20                    | B: replay budget per stop at reward sites
2                     | number of layers in N_P, N_R and N_Q
10, 16, 26            | size of the hidden layers in N_Q, N_R and N_P (resp.)
±0.05, ±0.0045, ±0.1  | weight initialization bound in N_Q, N_R and N_P (resp.)
0.5, 0.1, 0.1         | learning rate in N_Q, N_R and N_P (resp.)
0.9, 1, 1             | sigmoid slope in N_P, N_R and N_Q (resp.) (hidden layer)
0.5, 0.4, 0.4         | sigmoid slope in N_P, N_R and N_Q (resp.) (output layer)

Fig. 3. Reward predictions are inaccurate when the model is trained on-line (left panel) and accurate when it is trained off-line (right panel). L, R: memory configuration. Note the use of a logarithmic scale, so as to make visible errors of small amplitude.
With a single set of NP networks, the error of the whole set of states decreases
steadily with learning, except for four states which have multiple predecessors
(Fig. 4, top left). With the GALMO algorithm, when the error of these states
reaches the threshold θ (in red on Fig. 4, top right), networks are duplicated
and specialized for each of the possible predecessors. We repeated the experiment 10 times. It always converged, with a number of final networks comprised
between 2 and 5 (3 times 2 networks, 1 time 3, 5 times 4, and 1 time 5).
3.2 Reinforcement learning with multiple predecessors
We compare the efficiency of the Dyna-Q model we developed with the corresponding Q-learning (i.e. the same architecture without replays with the world model). As expected, Q-learning is able to learn the task, measured here with the proportion of erroneous choices at the decision point T2 (Fig. 4, bottom left). On average, it does not fully converge before 1000 epochs of training. The Dyna-Q learns much faster, thanks to the replay mechanism (Fig. 4, bottom right), converging on average after 200 trials.
[Fig. 4 plots: two panels titled "Error evolution at state T2"; x-axis: #episodes (0–2000), y-axis: Errors (0–1).]
Fig. 4. Top: Learning error dynamics without (left) and with (right)
GALMO. Errors of all samples (gray) during epochs of training. GALMO allows
for the creation of multiple prediction networks to handle the states where multiple
outputs have to be generated. Bottom: Learning without (left) and with (right)
replays. Evolution of the proportion of decision errors at point T2 during the alternation task. Blue: 10 run average, light blue: standard deviation.
3.3 Preliminary analysis of generated replays
[Fig. 5 charts: panels "Always Right", "Always Left", "Alternate"; categories Same, Opposite, Central; replay types B, F, RND.]
Fig. 5. Type of replays. B: backward, F: forward, RND: random.
We analyze a posteriori the state reactivations caused by the prioritized
sweeping DynaQ algorithm in always turn right, always turn left and alternate
tasks. Prioritized sweeping does not rely on explicit replay of sequences, but it nevertheless favors them. We considered replayed sequences involving three or more consecutive steps; with 128 possible states, a 3-state sequence has a chance level
of 0.01% of being produced by a uniform random selection process. We observed
in all cases (Fig. 5) that a bit more than 80% of the state reactivations did not
correspond to actual sequences. Most of the sequences are backward, except for
the alternate task, which also generated 4.5% of forward ones. As in [6], we classified these sequences as being on the same side as the current agent location, on
the opposite side, or in the central part of the maze. There is no clear pattern
here, except that central reactivations were observed in the alternate task only.
4 Discussion
We proposed a new neural network architecture (GALMO) designed to associate
multiple outputs to a single input, based on the multiple expert principle [21]. We
implemented a neural version of the DynaQ algorithm [16], using the prioritized
sweeping principle [13,14], using GALMO to learn the world model. This was
necessary because the evaluation task, adapted from [6], contained some states that have multiple predecessors.
We showed that this system is able to learn the multiple-predecessor cases, and to solve the task faster than the corresponding Q-learning system (i.e., without replays). This required learning the world model off-line, with data presented in shuffled order, so as to break the sequential correlations between samples, which otherwise prevented the convergence of the learning process. A neuroscience prediction derives from this result (independently from the use of GALMO): if the learning principles of the rat brain are similar to those of gradient descent for artificial neural networks, then the world model has to be learned off-line, which would be compatible with non-sequential hippocampal replays. Besides, the part of the DynaQ algorithm that uses the world model to update the Q-values predicts a majority of non-sequential replays, but also 15 to 20% of sequential reactivations, both backward and forward.
Concerning GALMO, it has been tested with a quite limited set of data,
and should thus be evaluated against larger sets in future work. In our specific
case, the reward networks NRa did not require the use of GALMO; a single network could learn the full (s, a) → r mapping as rewards were deterministic. But
should they be stochastic, GALMO could be used to learn the multiple possible
outcomes. Note that, while it has been developed in order to learn a predecessor
model in a DynaQ architecture, GALMO is much more general, and would in
principle be able to learn any one-to-many mapping. Finally, in the model-based and Dyna reinforcement learning contexts, having multiple predecessors or successors is not an exceptional situation, especially in a robotic paradigm. The proposed approach is thus of interest beyond the task used here.
Acknowledgements
The authors would like to thank Olivier Sigaud for fruitful discussions. This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 640891 (DREAM Project). This work was performed within the Labex SMART (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02.
References
1. O’Keefe, J., Dostrovsky, J.: The hippocampus as a spatial map. preliminary evidence from unit activity in the freely-moving rat. Brain research 34(1) (1971)
171–175
2. Wilson, M.A., McNaughton, B.L., et al.: Reactivation of hippocampal ensemble
memories during sleep. Science 265(5172) (1994) 676–679
3. Girardeau, G., Benchenane, K., Wiener, S.I., Buzsáki, G., Zugaro, M.B.: Selective
suppression of hippocampal ripples impairs spatial memory. Nature neuroscience
12(10) (2009) 1222–1223
4. Foster, D.J., Wilson, M.A.: Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature 440(7084) (2006) 680–683
5. Lee, A.K., Wilson, M.A.: Memory of Sequential Experience in the Hippocampus
during Slow Wave Sleep. Neuron 36(6) (2002) 1183–1194
6. Gupta, A.S., van der Meer, M.A.A., Touretzky, D.S., Redish, A.D.: Hippocampal
Replay Is Not a Simple Function of Experience. Neuron 65(5) (2010) 695–705
7. Chen, Z., Wilson, M.A.: Deciphering neural codes of memory during sleep. Trends
in Neurosciences (2017)
8. Peyrache, A., Khamassi, M., Benchenane, K., Wiener, S.I., Battaglia, F.P.: Replay
of rule-learning related neural patterns in the prefrontal cortex during sleep. Nature
Neuroscience 12(7) (2009) 919–926
9. McClelland, J.L., McNaughton, B.L., O'Reilly, R.C.: Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review 102(3) (1995) 419
10. De Lavilléon, G., Lacroix, M.M., Rondi-Reig, L., Benchenane, K.: Explicit memory
creation during sleep demonstrates a causal role of place cells in navigation. Nature
neuroscience 18(4) (2015) 493–495
11. Cazé, R., Khamassi, M., Aubin, L., Girard, B.: Hippocampal replays under the
scrutiny of reinforcement learning models. submitted (2018)
12. Sutton, R.S.: Integrated architectures for learning, planning, and reacting based on
approximating dynamic programming. In: Proceedings of the seventh international
conference on machine learning. (1990) 216–224
13. Moore, A.W., Atkeson, C.G.: Prioritized sweeping: Reinforcement learning with
less data and less time. Machine learning 13(1) (1993) 103–130
14. Peng, J., Williams, R.J.: Efficient learning and planning within the dyna framework. Adaptive Behavior 1(4) (1993) 437–454
15. Khamassi, M., Lacheze, L., Girard, B., Berthoz, A., Guillot, A.: Actor-critic models of reinforcement learning in the basal ganglia: from natural to artificial rats. Adaptive Behavior 13 (2005) 131–148
16. Lin, L.J.: Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning 8(3/4) (1992) 293–321
17. Paz-Villagrán, V., Save, E., Poucet, B.: Independent coding of connected environments by place cells. European Journal of Neuroscience 20(5) (2004) 1379–1390
18. Eichenbaum, H.: Prefrontal–hippocampal interactions in episodic memory. Nature
Reviews Neuroscience 18(9) (2017) 547
19. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. Cambridge, MA:
MIT Press (1998)
20. Tsitsiklis, J.N., Van Roy, B.: Analysis of temporal-difference learning with function
approximation. In: Advances in neural information processing systems. (1997)
1075–1081
21. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local
experts. Neural computation 3(1) (1991) 79–87
arXiv:1711.11175v1 [] 30 Nov 2017

Towards Data Quality Assessment in Online Advertising

Sahin Cem Geyik1,+,∗, Jianqiang Shen2,∗, Shahriar Shariat3,+, Ali Dasdan4,+, Santanu Kolay2

1 LinkedIn Corporation, [email protected]
2 Turn Inc., {jianqiang.shen, santanu.kolay}@turn.com
3 Uber, [email protected]
4 Vida Health, [email protected]
ABSTRACT

In online advertising, our aim is to match the advertisers
with the most relevant users to optimize the campaign performance. In the pursuit of achieving this goal, multiple
data sources provided by the advertisers or third-party data
providers are utilized to choose the set of users according to
the advertisers’ targeting criteria. In this paper, we present
a framework that can be applied to assess the quality of such
data sources in large scale. This framework efficiently evaluates the similarity of a specific data source categorization
to that of the ground truth, especially for those cases when
the ground truth is accessible only in aggregate, and the
user-level information is anonymized or unavailable due to
privacy reasons. We propose multiple methodologies within
this framework, present some preliminary assessment results, and evaluate how the methodologies compare to each
other. We also present two use cases where we can utilize the data quality assessment results: the first use case is
targeting specific user categories, and the second one is forecasting the desirable audiences we can reach for an online
advertising campaign with pre-set targeting criteria.
1. INTRODUCTION

Online advertising strives to serve the most beneficial advertisement (ad) to the most relevant online users in the
appropriate context (a specific website, mobile application,
etc.). This typically results in attaining higher return-on-investment (ROI) for the advertisers [10], where the value
is generated either from a direct response such as a click or
conversion (e.g. the purchase of a product, subscription to
a newsletter, etc.), or through delivering a branding message. For this purpose, advertisers receive help from multiple entities in the domain. Supply-side platforms (SSP)
provide ad-space (inventory) on websites or mobile apps, to
serve ad impressions to users. Ad-exchanges run auctions
on available inventory from SSPs. Demand-side platforms
(DSP) act on behalf of the advertisers and aim to bid on
the most valuable inventory. Advertisers often get performance reports from an independent evaluation agency#.
For privacy reasons, these reports, in most cases, only contain aggregate metrics (e.g. click-through rate, percentage
of female audiences).
In order to reach the right audience usually defined by the
advertiser, which in general would improve direct response
and branding metrics, the advertisers need to utilize various data sources to label the users in the most accurate way
possible. Data management platforms (DMP) have been
emerging as a central hub to seamlessly collect, integrate
and manage large volumes of user data [6]. Such user data
could be first-party (i.e. historical user data collected by advertisers in their private customer relationship management
systems), or third-party (i.e. data provided by third-party
data partners, typically each specializing in a specific domain, e.g., demographics, credit scores, buying intentions).
While first-party data is proprietary to the advertiser and
free to utilize, third-party data often carries a pre-negotiated
cost per impression (ad served to a user in a website or application). In both cases, it is important for the advertiser
to know how accurate a data source is. That is, if a data
source has tagged a user to be in category ci (user property,
e.g. gender, age, income), how likely it is for the user to
actually be in that category.
In this paper, we are investigating the above problem
which we call data quality assessment in online advertising.
The main issue in evaluating the accuracy of a data source
Categories and Subject Descriptors
H.3.5 [Information Storage and Retrieval]: Online Information Services; I.2.1 [Artificial Intelligence]: Applications and Expert Systems
General Terms
Algorithms, Application
Keywords
Online Advertising; Data Quality; Targeting; Forecasting
∗ The authors contributed to this work equally.
+ This work was completed when the authors were at Turn.
This work has been presented in the KDD 2016 Workshop on Enterprise Intelligence.
# There are certain independent evaluation agencies in the online advertising domain, whose names we cannot list here to comply with the company policy. Advertisers trust these organizations to collect the ground truth.
Table 1: Confusion matrix of data source s for tagging users with category c.

                                 Ground Truth
Predicted by data source s    c       not c    Unknown
c                             n+,+    n+,−     n+,∅
not c                         n−,+    n−,−     n−,∅
Unknown                       n∅,+    n∅,−     n∅,∅
is the lack of ground truth in the user-level granularity. For
example, the advertisers, in reality, never have access to the
confusion matrix (Table 1) of a data source in either first or
third-party cases. Therefore, the only way for an advertiser
to evaluate the quality of a data source is to run an advertising campaign on a set of users and then evaluate the performance in hindsight. Even in those cases, the post-campaign
data is often constrained (mostly due to privacy concerns)
and in aggregate, that is, only the total number of users in
different categories is provided, and not a granular user-to-category assignment. If it were possible to have the granular
data, it would then be trivial to just use the ground truth
data source to come up with the accuracy metrics, e.g. filling
in the entries of the confusion matrix. Therefore, utilizing
the aggregate performance statistics makes the data quality evaluation task quite challenging, and somewhat similar
to aggregate learning tasks in machine learning [5], a few of which are also directly applicable to this problem.
The main contributions of this work are as follows:
• formal definition of data quality assessment problem,
and the challenges of solving it in online advertising
domain,
• multiple approaches for evaluating the quality of a data
source, which also take into account the efficiency requirements due to the large number of possible data
sources∗ to be evaluated,
• several use cases where data quality assessment comes
in handy for online advertising, and,
• initial evaluation of our methodology utilizing simulated
data and real-world advertising campaigns.
The rest of the paper is organized as follows. In Section 2, we
give a more formal definition of the data quality assessment
problem. Next, we discuss the literature that deals with either data quality assessment, or aggregate learning (which,
as aforementioned, is relevant to our problem) in Section 3.
We present our two proposed assessment methodologies in
Section 4 and Section 5, and later give some use cases on
how we can utilize our data quality assessment output in
Section 6. Finally, we present some initial results in Section 7 and conclude the paper along with some potential
future work in Section 8.
2. RESEARCH PROBLEMS
As we have explained in the previous section, we seek to
evaluate first or third-party data sources available for online
advertising using multiple accuracy metrics.
* While we cannot list the exact number due to the company
policy, there are currently over 200k active data sources in
Turn’s system.
Definition 1. A sound data source tags each virtual user
(cookie ID that might be specific to a browser and device)
with one and only one of the 3 labels – {Positive, Negative,
Unknown}.
The tagging process could be explicit or deductive, but cannot be self-contradictory. For example, a user can have a
positive tag – Age25, or a negative tag – NotAge25, but
cannot be tagged as both Age25 and NotAge25. The data
source can also simply indicate that it has no knowledge on
a user by tagging it as Unknown. In real-time bidding, the
positive tags are the most important, as advertisers usually
utilize them to target the desirable audiences.
The problem of data quality assessment is defined as the
following:
Definition 2. Given a sound data source S, its data quality assessment is defined as a measurement mS that has error no more than ε over user examples drawn from user set U, with probability of at least δ:

P( |E_{ui∈U}[Ω(ui, Si)] − mS| ≤ ε ) ≥ δ,

where Ω(ui, Si) is a metric to measure the granular targeting performance when this data source tags user ui with Si.
As an example, suppose we have a data source which we
utilize to tag a user as Male (positive example) or Not Male
(negative example). Consider two evaluation metrics, which
are Accuracy (percentage of correct taggings by our data
source), and True Positive Rate (percentage of positive examples, i.e. Males, that our data source also tags as males).
For the accuracy metric, we have the following Ω:
Ω(ui , S) = I(GT (ui ) == Si ),
which does an exact comparison of the ground truth tagging
of user ui (GT (ui )) against the tagging by data source S
(Si ). On the other hand, if we were to calculate true positive
rate, then we would have the following Ω:
Ω(ui , Si ) = I(GT (ui ) == Si == Male),
which counts only those cases where both ground truth and
the data source tag the user as Male.
Note that the above problem definition is a very general
formulation, which is typically used in evaluating Machine
Learning models [7, 8, 12]. As long as both the ground truth
category of a user and that of the data source are available,
one can come up with a perfect data quality assessment, i.e. E_{ui∈U}[Ω(ui, Si)] ≈ mS from Def. 2. The problem occurs
when we don’t have direct access to the ground truth category of every single user. Typically, Ω(ui , Si ) is unknown,
but rather the category distribution of groups of users is provided. The main reason is to protect the privacy of users [1].
In this kind of situation, especially in online advertising,
we may utilize a specific data source to make smart advertising decisions to choose the most appropriate set of users,
and in the end, we can receive an aggregated report from
a third-party evaluator, which is considered as the ground
truth and provides a non-granular distribution of the audience we have reached over many categories of interest. As
an example, the report may provide that over all users we
have 20% Male, and 80% Not Male. When this occurs, we
can no longer do a one-to-one comparison between ground
truth and data source in the user granularity, but rather
need to come up with alternative methods that can deal
with aggregated data, which is our main focus in this paper.
In many cases, we need to select the best data source
from a large set of candidates with the same semantic goal
and adopt it for targeting. For example, given a set of data
sources that tag users as male, female, or unknown, we may
care more about their relative performance and less about
their absolute measurements. The data quality assessment
can then be simplified as a ranking problem:
Definition 3. Given two sound data sources S^1 and S^2, and an accuracy metric Ω, a data quality ranking system outputs a rank measurement r^1 for S^1 and r^2 for S^2 such that

r^1 > r^2 ⟺ E_{ui∈U}[Ω(ui, S^1_i)] > E_{ui∈U}[Ω(ui, S^2_i)].
Once we have the rank measurements for each sound data
source, we can order them and select the best one.
3. PREVIOUS WORK
As we have explained in the problem definition, evaluation
of a data source can be taken as any other machine learning
model evaluation task, provided that we have the ground
truth information in the user granularity. A detailed evaluation of 18 performance metrics for classification problems
is given in [7]. These 18 metrics can be listed as accuracy,
kappa statistic, mean F-measure, macro average arithmetic,
macro average geometric, AUC of each class against the rest
(two variants), AUC of each class couples (two variants),
scored AUC, probabilistic AUC, macro average mean probability rate, mean probability rate, mean absolute error, mean
squared error, LogLoss, calibration loss, and calibration by
bins. The paper provides a detailed correlation analysis and
noise sensitivity analysis. Also, the survey by Gunawardana
et al. [8] discusses both the evaluation settings and proper
evaluation metrics for different classes of recommendation
problems, of which online advertising is a sub-problem.
When we only have access to aggregated ground truth
data, evaluation of a data source is much harder. There has
been significant work in aggregate learning tasks which utilize aggregate assignments of classes to groups of samples
to train a model. Our aim in this paper is significantly different from such works, since we already have a model (i.e.
data source), and we are trying to evaluate its performance
utilizing many campaigns and multiple aggregates of ground
truth data. Cheplygina et al. [5] provides an overview of aggregate learning methodologies, which may utilize granular
response variables/feature vectors (single instance) or aggregate response variables/feature vectors for groups (multiple
instance) to train their models, and later, testing them. Musicant et al. [11] utilizes aggregate outputs for the response
variables to specialize the training process of k-nearest neighbors, decision trees, and support vector machines. In [3], the
authors utilize aggregate views of data, which consist of a
choice of different combinations of features, response variables, and combining machine learning models learned from
these views. Another interesting work is presented in [16],
which gives error bounds on how a model learned from aggregate data can perform. They assert that a machine learning
model should minimize empirical proportion risk, and prove
that under certain assumptions for the class distributions,
learning in the aggregate setting can actually improve individual classification performance.
Finally, specific to the online advertising domain, we can
list [14] as being a relevant work to ours. In this paper,
similar to aggregate learning techniques, the authors aim
to learn a predictive model to decide whether a user is in
a specific ground truth category using the aggregate data
over many campaigns, by assigning the most likely label to
all users in the aggregate, or assigning a probabilistic single
label. They utilize logistic regression with L2 -norm regularization, where the response variables are the artificially
generated labels.
4. BRUTE FORCE EVALUATION
In this section we will present our first proposal for data
quality assessment, which includes setting up specialized
campaigns for a data source and utilizing the targeting results directly for evaluation.
Note that we typically rely on the independent survey
agencies to collect the ground truth analysis data on our
audience population. Such agencies use offline data (such as
credit card information) and online data (such as information filled in social networking websites) to profile an Internet user. Reports from these survey agencies are generally
considered as the ground truth by advertisers. Such reports
are aggregated statistics and contain no user-level information due to privacy reasons.
4.1 Performance Campaign for Data Source
An intuitive and straightforward way to evaluate a data
source is to set up a campaign that only targets certain
users which are tagged by data source S to be in category
c. This way we can calculate the quality of the data source as p(cg|cs) = N(cg)/N(cs), where N(cg) is the number of users in category c reached by this campaign as reported by the ground truth and N(cs) is the total number of users reached by the campaign via at least one impression. Note that we
can put a limit on the number of impressions to be served
to a user so that we can increase the unique user reach and
have more reliable results.
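As a minimal sketch (hypothetical names, not part of the original paper), the brute-force quality estimate reduces to a ratio of two aggregate counts reported after the campaign:

def brute_force_precision(n_ground_truth_in_c, n_reached):
    """p(c_g | c_s): of the users reached by the campaign (all tagged
    c by the data source), the fraction that the ground-truth report
    places in category c."""
    return n_ground_truth_in_c / n_reached

# e.g. 345 of 1000 reached users confirmed in c -> precision 0.345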
We applied this methodology to evaluate age and gender
categorizations of some well-established data providers in
online advertising. Table 2 demonstrates some results for
one of the better performing such data providers in age categorization. We anonymize the name of the data provider
and exact age ranges listed in the table to comply with the
company’s regulations. In Table 2, R1,· · · · R10,· represent
the age ranges such that they are mutually exclusive and
sorted in ascending order, e.g. R5 is the range that is the
immediate higher range after R4 (i.e. minimum age in range
R5 is one larger than maximum age in range R4 ), and the immediate lower range before R6 . Ri,p represents the predicted
results by the data source for age range i, while Ri,g stands
for ground truth data for the same age range. For example,
we can observe from the table that p(R4,g |R4,p ) = 0.345,
that is, when our data source classifies the user to be in age
range R4 , 34.5% of the time it is correct, or in other words,
34.5% of the users reached by campaign that targets category R4 , as predicted by the data source, were actually in
category R4 , as provided by ground truth. Note that for
this particular data source, the exact match, highlighted in
bold, is quite high compared to random.
Table 2: Ground truth distributions of anonymous data provider's category taggings.

Age Ranges  R1,g   R2,g   R3,g   R4,g   R5,g   R6,g   R7,g   R8,g   R9,g   R10,g
R1,p        0.414  0.245  0.033  0.018  0.012  0.032  0.043  0.064  0.075  0.03
R2,p        0.058  0.297  0.246  0.037  0.021  0.043  0.051  0.089  0.091  0.046
R3,p        0.031  0.053  0.298  0.227  0.049  0.042  0.042  0.049  0.132  0.054
R4,p        0.031  0.041  0.089  0.345  0.182  0.057  0.037  0.039  0.116  0.041
R5,p        0.037  0.057  0.056  0.063  0.337  0.212  0.047  0.038  0.082  0.05
R6,p        0.056  0.049  0.061  0.041  0.053  0.339  0.23   0.041  0.065  0.046
R7,p        0.08   0.067  0.054  0.035  0.03   0.041  0.332  0.204  0.077  0.048
R8,p        0.065  0.074  0.082  0.043  0.03   0.04   0.048  0.339  0.203  0.052
R9,p        0.05   0.044  0.066  0.065  0.048  0.033  0.048  0.044  0.488  0.101
R10,p       0.035  0.036  0.055  0.048  0.039  0.048  0.061  0.06   0.106  0.492

Data providers utilize various models and online/offline information to tag users. Please note that data sources from
the same data provider may have quite diverse accuracy values. For example, the accuracy results in Table 2 range
from 0.29 to 0.49. Thus we cannot evaluate one data source
and assume its sibling data sources have similar predictive
power. Each data source needs to be evaluated individually. This causes a significant disadvantage with the above
methodology, which is the fact that we need to set up a separate campaign for each category so that we can gather the
accuracy statistics. To remedy this problem, we propose
a second methodology in Section 5, which solves an optimization problem to come up with the best fitting accuracy
probabilities based on the aggregate reports.
4.2 Cost Analysis
In this subsection, we further analyze the cost of the brute
force method discussed in Section 4.1. It must be noted that
obtaining the ground truth, that is the aggregated labeled
data, is costly on its own right. However, in the following
lemma, we focus only on the ad serving costs to underline
the utility and benefit of our approach.
Lemma 1. Given a data source that can tag the user by one of the possible c categories (e.g. the data source gives a positive/negative output on one age group of possible c age groups), then to observe a significant difference between the calculated accuracy of one category versus others, we need at least ⌈(4202.969/c)(1 − 1/c)⌉ impressions.

Proof. Assuming a uniform distribution, we assume the average on-target rate is 1/c for each data source (although the intention of a data source is to increase this value). Each impression can be considered as a Bernoulli trial with 1/c probability of success. The sample variance is, thus, (1/c)(1 − 1/c). We would like to detect a significant difference between the prediction accuracy of the correct category versus the rest of possible tags. The industry standard accepts a 5% error. Then, for a significance level of α = 5% for a two-tailed hypothesis test and to attain at least 90% power, we must have

0.05 · √n / √((1/c)(1 − 1/c)) > z0.975 + z0.9,

where n stands for the number of users, and z0.975 and z0.9 are the values of the quantile function of the standard normal distribution for 97.5% and 90%, respectively [2]. Therefore, the number of users that receive the ad impressions must be more than ⌈(4202.969/c)(1 − 1/c)⌉ for each data source.
Based on Lemma 1, for d data sources that provide information on one of the possible c categories, we need at least d · ⌈(4202.969/c)(1 − 1/c)⌉ impressions. As we discussed before, it is necessary to evaluate each data source individually, considering the diversity of their predictive power. Given the very large number of data sources, this causes the brute-force approach to incur a very significant cost.
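The bound of Lemma 1 is straightforward to evaluate; the Python sketch below recomputes it from the normal quantiles rather than hard-coding the constant 4202.969:

import math
from scipy.stats import norm

def min_impressions(c, alpha=0.05, power=0.90):
    """Minimal impressions per data source to detect a significant
    accuracy difference between categories (two-tailed test at level
    alpha, with the given power; 5% error per the industry standard)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # z_0.975 + z_0.9
    # from (0.05 * sqrt(n)) / sqrt((1/c)(1 - 1/c)) > z:
    n = (z / 0.05) ** 2 * (1 / c) * (1 - 1 / c)
    return math.ceil(n)

# min_impressions(10) -> 379; with d data sources, multiply by d.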
5. ACCURACY INFERENCE
High-quality data sources can enable advertisers to reach
the right audience at the right moment. Because they have
become an important component of online advertising, more
and more online/offline data are being ingested into Turn’s
data management platform. As we have mentioned previously, there are currently over 200k active data sources in our
system. Lemma 1 established that explicitly evaluating each
of these data sources by running performance campaigns is
overwhelmingly costly: not only a large amount of money is
required to run the performance campaigns, but also enormous manual effort to set up and manage those campaigns is
essential as well. We need an efficient way to simultaneously
infer the accuracy of multiple data sources.
As we have presented in Section 2, our focus in this paper
is to calculate the accuracy metrics of a data source for single
or multiple categories. In essence, we are trying to calculate
a set of probabilities, which represent the likelihood of a data
source predicting correctly/incorrectly that a user belongs
to a category ci . In Figure 1, we have shown the set of
probabilities that we aim to predict. For representational
purposes, we have shown the accuracy probabilities of a data
source which denote its capabilities to tag a user as Male or
not, though the same logic follows for any category. The
probabilities in the figure can be listed as follows:
• α1 : The probability of the user actually being Male
when the data source tags it as Male. This value can
also be called precision or positive predictive value.
• α2 : The probability of the user actually being Not Male
when the data source tags it as Male. This value can
also be called false discovery rate.
• α3 : The probability of the user being Unknown (i.e.
ground truth does not exist) when the data source tags
it as Male.
• β1 : The probability of the user actually being Male
when the data source tags it as Not Male.
• β2 : The probability of the user actually being Not Male
when the data source tags it as Not Male. This value
can also be called negative predictive value.
• β3 : The probability of the user being Unknown (i.e.
ground truth does not exist) when the data source tags
it as Not Male.
• γ1 : The probability of the user actually being Male
when the data source tags it as Unknown.
• γ2 : The probability of the user actually being Not Male
when the data source tags it as Unknown.
• γ3 : The probability of the user being Unknown (i.e.
ground truth does not exist) when the data source tags
it as Unknown.
Figure 1: Objective probabilities of a data source S for a specific category (is user Male). [Diagram: the data-source tags Male (D1), Not Male (D2) and Unknown (D3) map to the ground-truth groups Male (N1), Not Male (N2) and Unknown (N3) with probabilities α1–α3, β1–β3 and γ1–γ3.]

As it can be seen from the figure, and trivial from the definitions, we have α1 + α2 + α3 = β1 + β2 + β3 = γ1 + γ2 + γ3 = 1. Among these nine variables, α1, i.e. precision, is often the most important value for the advertisers, since it denotes the goodness of a data source to be used for their advertising purposes. To calculate this value, we presented the methodology, which is based on creating specific campaigns to evaluate a singular data source for a specific category, in Section 4. In this section we are proposing an optimization scheme, which utilizes the aggregated category distributions over multiple campaigns, for both the data source we want to evaluate, and the ground truth. In the following subsections, we call the above nine variables the predictive values of a data source.

5.1 Setup for Inference

We propose to set up multiple performance campaigns without using any data source for targeting, so the audience will not be explicitly skewed by any data source. We compare the ground truth of each campaign against the hypothesis of each data source, and infer the quality of the data source.

We follow a set of rules to set up a performance campaign. First, the targeting criteria should be minimal and cannot be biased by any third-party data. For example, targeting the online users in U.S. is fine, since this is purely based on IP address and not biased; however, using a data source to limit audience to middle-aged men is not acceptable as the quality of this data source is what we want to assess. In general, only geographical location should be used as targeting criteria. Second, the targeted websites must be discriminative, that is, the population of the visiting users should be largely skewed towards one of our possible tags. This way, we will not mistakenly estimate a data source to be accurate, when in fact it is predicting the label in a random manner. For example, a website is beneficial for such an experiment if 70% of visitors are female and 30% of visitors are male, and not necessarily if the distribution is 50% to 50% (since, in such a case, a random prediction of female vs. male will closely fit the overall audience in aggregate). One should note that obtaining such knowledge is not always feasible before running the campaign. Therefore, in our system, we target only the websites that are known, based on our domain experience and verified by independent reports, to be popular among a certain group of audience. After creating such a campaign, we run it for a certain period to collect data. The ground truth is collected through independent agencies as described in Section 4.

We log the received report data along with the first-party campaign data into our in-house data warehousing system called Cheetah [4], which is built on top of the Hadoop framework [13]. Cheetah is designed specifically for our online advertising application to allow various simplifications and custom optimizations. Campaign facts are stored within nested relational data tables. Fast MapReduce jobs are designed to extract key features of the performance campaigns, compare them with the ground truth and infer the accuracy of a data source. Utilizing the collected campaign information, we present two approaches in this section to efficiently infer the quality of data sources: one that ranks data sources, and another which directly deduces precision (α1) of a data source.

5.2 Ranking Based Assessment
Ranking Data Sources. In many instances, we only
need to choose the best data source from a large set of candidates with similar semantic purposes. If this is the case,
then data quality assessment becomes a ranking problem.
A data source's absolute precision α1 is then of less importance; rather, its rank among others is critical.
Since the independent evaluation agency sends us the aggregate statistics on a campaign, we can similarly construct
such statistics using a data source. Note that this approach
will only represent the view of the data source, and not
the ground truth, unlike the independent agency case. We
can then evaluate this data source based on how close the
constructed statistics are to that of the ground truth, and
therefore rank data sources based on such closeness measure.
This logic is presented in Algorithm 1.
Algorithm 1: Measurement calculation for ranking.
Input: ψ: aggregation function, f: closeness function
Input: S: data source
Output: µ̄: score for quality of data source S
foreach performance campaign C do
  U ← Retrieve audience of C;
  R̂ ← ψ(U, S);
  R ← Retrieve the ground truth report;
  µC ← f(R̂, R);
end
µ̄ ← Calculate the average value of µC;
return µ̄;
There are multiple ways to design the closeness function f to compare two aggregated statistics. Since the positive taggings are the most valuable for online advertising purposes, we propose to compare the positive population distributions between the ground truth and the data source. We define the percentage of population marked as positive by a data source as

R̂ = count+(S, U) / (count+(S, U) + count−(S, U)),     (1)

where count+ counts the number of users in U marked as positive by S, and count− counts negatives. Given that R is the ground truth ratio of positive population, a simple way to calculate the closeness can be defined as:

|R̂ − R|.     (2)

However, this does not consider the scale of R̂ or R, which are usually quite small for a rare positive group. To make the measurements more comparable, instead we propose to calculate the relative error as the closeness:

RelativeErr = |count+(U) − ĉount+(S, U)| / count+(U),

where count+(U) is the number of positives reported by the ground truth, count−(U) is the number of ground truth negatives, and ĉount+(S, U) is the scaled number of positives marked by the given data source. We want to scale the number of positives of the data source, because the population recognized by S might be quite different from the one recognized by the ground truth. For example, the independent evaluation agency might have data on 10000 users, while S might only have data on 1000 users. Therefore, we need to extrapolate the population unrecognized by S to scale up the populations:
RelativeErr = |count+(U) − R̂ · (count+(U) + count−(U))| / count+(U)
            = |1 − R̂ / (count+(U) / (count+(U) + count−(U)))|
            = |1 − R̂/R|
            = |R − R̂| / R.     (3)
The above value reflects the potential error rate if we scale
the data source’s recognizable population to the size of the
ground truth population. Per Algorithm 1, we calculate
average relative error (RelativeErr) across all performance
campaigns for each data source. We can then rank data
sources based on their average relative errors.
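A minimal Python sketch of Algorithm 1 with the relative-error closeness of Eq. (3) could look as follows; the campaign report format (dicts of aggregate counts) is our own assumption for illustration:

def relative_err(gt_pos, gt_neg, r_hat):
    """Eq. (3): |R - R_hat| / R, with the data source's positive
    ratio R_hat scaled to the ground-truth population."""
    r = gt_pos / (gt_pos + gt_neg)
    return abs(r - r_hat) / r

def rank_sources(campaigns, sources):
    """campaigns: list of dicts with ground-truth counts 'gt_pos',
    'gt_neg' and per-source counts c['counts'][s] = (pos, neg).
    Returns the sources sorted from best (lowest mean error) to worst."""
    scores = {}
    for s in sources:
        errs = []
        for c in campaigns:
            pos, neg = c['counts'][s]
            r_hat = pos / (pos + neg)          # Eq. (1)
            errs.append(relative_err(c['gt_pos'], c['gt_neg'], r_hat))
        scores[s] = sum(errs) / len(errs)      # mean RelativeErr
    return sorted(sources, key=scores.get)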
Soundness Analysis. A ranking algorithm needs to be
sound to ensure the optimal assessment: Given two data
sources S 1 and S 2 , a ranking algorithm is sound if it outputs
measurements r 1 for S 1 and r 2 for S 2 , such that r 1 < r 2 if
and only if S 1 is more likely to perform better than S 2 , as
we defined in Def. 3. We will show that the RelativeErr
based ranking algorithm is sound in many cases. First, let
us define the notion of unbiasedness for a data source:
Definition 4. A data source S is unbiased if and only if
its positive predictive value equals to its negative predictive
value: α1 ≈ β2 .
Our experience suggests that many data sources we utilize
are on demographics and can be considered as unbiased. For
example, the accuracy that a data source claims someone
as male, in general, is close to the accuracy that it claims
someone as female (Not Male as in Figure 1). In real-time
bidding, better on-target metrics, i.e. improving the ratio of
audience that really have the data source’s claimed characteristics, is the endeavor of any data provider. We can show
that the RelativeErr based ranking algorithm is sound:
Lemma 2. Given a set of sound data sources ⟨S^1, .., S^k⟩, the RelativeErr based ranking algorithm is sound for precisions and orders the data sources based on their expected performance. In addition, if the data sources are all unbiased, the algorithm is sound for any on-target metrics.

Proof. Per definition, R̂ of the better data source is closer to the reality, thus |R − R̂| is smaller. Since R is constant, the order of RelativeErr is preserved for precisions. On-target metrics can be the precision, or the negative predictive value, or simply the micro or macro average of the two predictive values. Since the data sources are unbiased, the order of metrics for negatives is also preserved. Averaging is monotonic, therefore we can expand the previous statements to micro and macro averaging cases as well.

5.3 Precision Inference Approach
Although the ranking methodology is able to pinpoint the
highest performing data sources, the output ranking measurement is only a surrogate of precision. It correlates with
the underlying precision, but is inherently different. As we
will show later, in online advertising, it is often necessary
to forecast the campaign performance as well as evaluate
whether a third-party data source is worth the extra amount
of money that an advertiser has to pay in order to utilize
it. In such cases we need an accurate estimation of a data
source’s precision.
Direct Inference of Data Source Precision. We propose an efficient way to directly estimate the predictive values of a data source. As shown in Figure 1, a data source's hypothesis on the audience population can be mapped into the ground truth using its predictive values. Given a performance campaign C^i, let the size of Positive, Negative and Unknown audiences identified by data source S be D^i_+, D^i_− and D^i_∅ correspondingly (these are scaled values to cover the whole population of C^i), and the size of ground truth Positive, Negative and Unknown audiences be G^i_+, G^i_− and G^i_∅ correspondingly. When the audience population size is large, it is clear that we have

G^i_+ ≈ D^i_+ · α1 + D^i_− · β1 + D^i_∅ · γ1
G^i_− ≈ D^i_+ · α2 + D^i_− · β2 + D^i_∅ · γ2
G^i_∅ ≈ D^i_+ · α3 + D^i_− · β3 + D^i_∅ · γ3
Combining with the probability simplex constraint and the unbiasedness constraint, we can estimate a data source's predictive values by solving a quadratic optimization problem: given n performance campaigns ⟨C^1, C^2, .., C^n⟩, we search for predictive values α1, α2, α3, β1, β2, β3, γ1, γ2, γ3 so that

min_{α,β,γ} Σ_{i=1}^{n} [ (D^i_+ · α1 + D^i_− · β1 + D^i_∅ · γ1 − G^i_+)²
                        + (D^i_+ · α2 + D^i_− · β2 + D^i_∅ · γ2 − G^i_−)²
                        + (D^i_+ · α3 + D^i_− · β3 + D^i_∅ · γ3 − G^i_∅)² ]     (4)

s.t. 0 ≤ αj, βj, γj ≤ 1, ∀j = 1, 2, 3     (5)

     Σ_{j=1}^{3} αj = 1,  Σ_{j=1}^{3} βj = 1,  Σ_{j=1}^{3} γj = 1     (6)

     −ξ ≤ α1 − β2 ≤ ξ.     (7)
Here, (4) is our optimization objective which aims to find the
best mapping between the data source’s hypothesis and the
ground truth. Note that here we assume the size of audiences
of campaigns C i to be similar, which can be controlled at
campaign set up time. Otherwise we need to normalize by
the audience size of each campaign. (5) and (6) enforce
the probability simplex. (7) attempts to help us find the
unbiased solution, and predefined constant ξ controls our
confidence on the unbiasedness of the predictive values.
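As an illustration, the program (4)–(7) can be solved with an off-the-shelf solver; the sketch below uses SciPy's SLSQP (our choice for illustration, not necessarily the production solver) on per-campaign aggregates packed into two n × 3 arrays:

import numpy as np
from scipy.optimize import minimize

def infer_predictive_values(D, G, xi=0.05):
    """Estimate (alpha, beta, gamma) by solving (4)-(7).

    D, G: float arrays of shape (n, 3); row i holds the (+, -, unknown)
    audience sizes of campaign C^i according to the data source (D)
    and to the ground truth (G). The 9 unknowns are packed as
    x = (a1, a2, a3, b1, b2, b3, g1, g2, g3)."""
    def objective(x):
        A = x.reshape(3, 3)                       # rows: alpha, beta, gamma
        return float(((D @ A - G) ** 2).sum())    # objective (4)

    cons = [{'type': 'eq', 'fun': (lambda x, i=i: x[3*i:3*i+3].sum() - 1)}
            for i in range(3)]                    # simplex sums (6)
    cons += [{'type': 'ineq', 'fun': lambda x: xi - (x[0] - x[4])},
             {'type': 'ineq', 'fun': lambda x: xi + (x[0] - x[4])}]  # (7)
    x0 = np.full(9, 1.0 / 3.0)
    res = minimize(objective, x0, method='SLSQP',
                   bounds=[(0.0, 1.0)] * 9,       # bounds (5)
                   constraints=cons)
    return res.x.reshape(3, 3)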
We, therefore, can run a few performance campaigns, extract each data source’s hypothesis on those campaigns, compare with the ground truth and solve the above optimization
problem. As we will show, this will efficiently give us the estimated predictive values of data sources in batch (among
those, precision is the most valuable for online advertising).
Performance Analysis. The proposed inference approach is efficient, in terms of both computation complexity
and money. First, it is straightforward to show that the
quadratic programming problem has a semi-definite Hessian
with a bowl shape. The optimization problem is convex
and can be solved efficiently with polynomial time complexity. Additionally, we only need to run a limited number
of performance campaigns to simultaneously estimate the
predictive values of multiple data sources. In practice, it
is possible that a data source’s predictive values are slightly
different in different performance campaigns due to variance.
Given a campaign C i , it is natural to assume a data source’s
predictive values for this specific campaign are
αij = αj + ε1j , ∀j = 1, 2, 3,
βji = βj + ε2j , ∀j = 1, 2, 3,
γji = γj + ε3j , ∀j = 1, 2, 3,
where ε is normally distributed with zero mean. In such
cases, we can get the unbiased estimate of a data source’s
predictive values by running a limited number of performance campaigns:
Theorem 3. Given k ≥ 3 performance campaigns, our
direct inference method can get the unique and unbiased estimate of a data source’s predictive values. Furthermore,
given any predictive value αi and its estimation α̂i , we have
P( |αi − α̂i| ≤ si · Tδ/2,2k−6 ) ≥ 1 − δ,

where 0 < δ < 1 is a constant, si is the standard error of the estimation, and Tδ/2,2k−6 is the (1 − δ/2)-th quantile of the Student distribution with 2k − 6 degrees of freedom.
Proof. The optimization can be converted into a linear regression problem within a simplex search space. This
regression problem contains 6 free regressors and each campaign provides 2 points in the space. When we have k ≥ 3
campaigns, the quadratic matrix is positive-definite and we
will have a unique global optimal solution. A Bias-Variance-Noise
Since the errors are normally distributed, the sum of the
regression residuals is then distributed proportional to Student Distribution with 2k − 6 degrees of freedom:
t = (αi − α̂i )/si ∼ T2k−6
We then construct the confidence levels for the estimated
regressors.
By running more campaigns, we can quickly reduce the estimation errors and get highly reliable predictive value estimations of multiple data sources. Given its computational and
economic efficiency, we adopted the direct inference method
and utilize it continuously to generate the quality report on
data sources.
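For completeness, the confidence interval of Theorem 3 can be computed directly from the Student quantile (a minimal sketch):

from scipy.stats import t

def confidence_interval(alpha_hat, std_err, k, delta=0.05):
    """Theorem 3: (1 - delta) confidence interval for a predictive
    value estimated from k >= 3 performance campaigns (2k - 6 dof)."""
    half_width = std_err * t.ppf(1 - delta / 2, df=2 * k - 6)
    return alpha_hat - half_width, alpha_hat + half_width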
6. USE CASES
In this section, we will discuss some use cases where the
quality assessment of first or third-party data sources can
be useful. First we will talk about targeting in online advertising, and the amount that an advertiser should be willing
to pay for a data source. Then we will give a very general
use case in campaign forecasting, i.e. to predict, before an
online advertising campaign starts, what category of users
will actually be reached by a pre-set targeting criteria.
6.1 Targeting in Online Advertising
Advertisers aim to reach the best audiences to promote
their products, so that they can increase the likelihood of a
click or an action happening. The automated way of grouping users into beneficial and non-beneficial subsets is often
called audience segmentation. For an informative work on
how this kind of audience segmentation can improve click
rates, refer to [15]. In this paper, however, we focus on a different kind of targeting where the advertisers already have a
pre-defined set of users they want to target. As an example,
suppose that an advertiser wants to reach only female audiences within the age range 21-35. There are multiple data
sources this advertiser can utilize to reach this group, but
as discussed in this paper, none of these data sources gives
a definitive classification. Intuitively, an accurate prediction
of the quality of a data source is essential for advertisers to
choose it over others. Also, note that here we mostly care
about the precision or positive predictive value of a data
source (i.e. α1 , when a data source suggests that a user is in
category c, the likelihood that this user actually belongs to
category c), since this is the signal that the advertiser uses
to bid on a user.
Here, we would also like to discuss the consideration of
data cost. In general, when an advertiser wants to utilize a
third-party data source for bidding purposes, it should pay
the third-party provider a certain amount of money. This
cost is generally per impression served using this data source,
hence can have significant effect on the ROI of an advertiser
(i.e. advertiser needs to pick up extra clicks/conversions to
make up for the money paid to the third-party for the targeting information it provides). An important point for an
advertiser to consider is if using a data contract is “worth”
its price. We will give a simple calculation here for the case
when the advertiser utilizes no data sources to reach a specific audience (i.e. free targeting), and whether adding the
data source and paying for it makes sense. Our main argument is that, by paying for the data source to target a
specific audience, the reduced cost of the mis-targeted impressions (i.e. those impressions that are served to the audience that are out of our desired audience) should make up
for the data cost. In other words, for the same amount of
money we should get more of the desired impressions, although our total number of impressions is less due to data
cost. Please note that below we assume the effective cost
per impression (cpi) to be the same for both free targeting
and data source assisted targeting, just that the data source
has the additional data cost per impression (cpidata ):
(totalSpend / (cpi + cpidata)) · (1 − errorRate(dataSource)) ≥ (totalSpend / cpi) · (1 − errorRate(freeTargeting)),

i.e., α1,data / (cpi + cpidata) ≥ α1,freeTargeting / cpi.

Above, totalSpend is the amount of money that the campaign spends, cpi is the effective cost per impression, and cpidata is the data cost per impression (hence totalSpend/(cpi + cpidata) is the number of impressions picked up by data source assisted targeting, and totalSpend/cpi is the number of impressions that can be picked up via free targeting, i.e. no data cost). errorRate denotes the inaccuracy of a targeting scheme: for free targeting, the percentage of the reached audience that is not desired; for the data source, the percentage of the cases when it predicts a user to be in the desired audience while, in fact, it is not. In the second inequality, we translated (1 − errorRate) into α1 from Section 5. After further reorganizing the above inequality, we get the following:

cpidata ≤ cpi × (α1,data / α1,freeTargeting − 1).     (8)

This means that for a data source to be beneficial for a campaign, its data cost per impression should be less than cpi × (α1,data / α1,freeTargeting − 1). Please note that we have the
assumption here that effective cost per impression would be
the same for free targeting vs. data source which is not
always valid, i.e. we may have to pay more to show ads
(impression) to those users that the data source tagged to
be desirable. Also, it can be seen that the benefit of the
data source often depends on how expensive the impressions
are for a campaign, hence is campaign specific. Finally, the
above calculations do not take into account the cost of data
evaluation utilizing our proposed two methodologies, which
was also mentioned in Section 4.2. However, this evaluation
can be performed once for each data source, and hence is
not of significance for each campaign that utilizes it.
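The break-even condition (8) is easy to operationalize; a minimal sketch (the function name is ours):

def max_beneficial_data_cpi(cpi, alpha1_data, alpha1_free):
    """Eq. (8): largest data cost per impression at which targeting
    with the data source still beats free targeting, all else equal."""
    return cpi * (alpha1_data / alpha1_free - 1)

# e.g. cpi = 0.002, alpha1_data = 0.6, alpha1_free = 0.4
# -> at most 0.001 per impression may be paid for the data source.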
6.2 Forecasting
Forecasting the performance (return-on-investment), reach
(unique users we can show an ad to), and delivery (amount
of money we can spend on advertising given the targeting
criteria) of a campaign is a significant problem that has to
be dealt with in online advertising [9]. Here we show that by utilizing the accuracy metrics (i.e. α1→3, β1→3, and γ1→3 from
Section 5) over multiple data sources that may tag a user,
we can actually predict the expected number of users that
will fall into a specific audience/category, on top of the total
spend/reach as in the traditional forecasting problem, once
the advertising campaign goes live.
Here is how the forecasting process in online advertising
works. Once the advertiser sets some targeting criteria (filtering of users to show ads according to anonymous user
properties) and goals (in terms of clicks and conversions)
for a campaign, we can utilize our system as explained in [9]
to find out which users this campaign is likely to reach. We
can already calculate expected number of unique users and
delivery for the campaign from this information alone. Furthermore, one problem that we can solve for the advertisers
is the prediction of what percentage of these users will fall
into a specific user category/class c. Note that the approach
mentioned in [14] does work on this problem of predicting
likelihood of a user belonging to a specific category via utilizing many features, but their focus is on targeting rather
than forecasting. Here, we suggest that rather than training
a simple model for predicting the membership in a category,
we can utilize multiple data sources and their estimated predictive values to forecast the expected number of users that
will fall into a category. This information is not used in
bidding time, hence contrary to the targeting use case we
explained in Section 6.1, there is no data cost.
Figure 2: Forecasting Example. [Diagram: targeting criteria T filter down to the users which obey T.]

Figure 2 summarizes the overall idea. In the first step,
we communicate the targeting criteria set by the advertiser
to our set of users. For real-time forecasting, we often need
to use a sampled set of users [9]. Once we filter the users
that are appropriate for the targeting criteria, we can go
through each of these users and see whether these users are
tagged by a first or third-party category cd . Once we see
these taggings, we can calculate the probability of this user
belonging to a desired ground-truth category cg by the probability p(cg |cd ) which is the α1 from Section 5, i.e. precision.
If we assume that each user is tagged by one and only one
ci , we can forecast the expected number of users that will
belong to category cg as:
E[|cg|] = Σ_{u∈T} p(cg|u) = Σ_{u∈T} Σ_{ci∈C} I(u has ci) · p(cg|ci).     (9)
In (9), T is the set of users that belong to a certain set of
targeting criteria T. cg is the category that the advertiser desires to forecast how many of their targeted users will belong
to. ci is a first/third-party category from a list of categories
C for which we have the prediction values. I is an indicator
to see whether a user has category ci, and the above formula is
valid only because we assume that each user has only one of
possible ci s. If each user can have multiple first/third-party
categories (as is the case in real situations), we need to aggregate multiple p(cg |ci )s, where we can utilize combination
methods such as getting the maximum, minimum, average
or median of the precision values.
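A minimal Python sketch of the forecast (9), generalized to users with several taggings as discussed above (the input format is our own assumption):

def forecast_category_size(users, precision, combine=max):
    """Eq. (9), generalized to users with several taggings.

    users: iterable of per-user tag lists (first/third-party
    categories c_i); precision: dict c_i -> p(c_g | c_i), i.e. the
    alpha_1 values; combine: aggregator (max, min, a mean, ...) used
    when a user carries several tagged categories."""
    total = 0.0
    for tags in users:
        probs = [precision[c] for c in tags if c in precision]
        if probs:
            total += combine(probs)
    return total

# e.g. forecast_category_size([['age25', 'male'], ['male']],
#                             {'male': 0.6, 'age25': 0.3})  -> 1.2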
7. EXPERIMENTAL RESULTS
In this section we will give some preliminary results for
our optimization-based evaluation technique. We have already given some preliminary results for our first methodology (Section 5.1) in Table 2. As aforementioned, we do aim
to calculate all nine prediction values of a data source for
a category, but for purposes of online advertising, the most
important one is the precision (α1 ). Only if this value is
high we can reliably use this data source to reach a certain
category of users.
Simulation Results. To evaluate our methodology we
ran several simulated campaigns, where for each campaign
we create an audience of 100 users and assign them to predicted categories as follows:
• Random, disjoint sets of 20 users each are assigned to
category c, not c, and unknown,
• The remaining 40 users are assigned to category c, not c, or unknown in a uniform manner.

Figure 3: Number of campaigns versus difference between predicted and actual precision values. [Plot: x-axis: Number of Campaigns (1–10); y-axis: Difference (0–0.30); curves for High Quality and Low Quality sources.]

Figure 4: Noise versus difference between predicted and actual precision values for the six-campaign scenario. [Plot: x-axis: Noise Level (0–0.30); y-axis: Difference (0–0.12); curves for High Quality and Low Quality sources.]
Then, we generate the ground truth categories for two types of data sources with the following actual probability values:

• High Quality: This data set has the following underlying nine probability values: α1→3 = (0.8, 0.15, 0.05), β1→3 = (0.2, 0.7, 0.1), γ1→3 = (0.4, 0.5, 0.1); note that Σi αi = Σi βi = Σi γi = 1,

• Low Quality: This data set has the following underlying nine probability values: α1→3 = (0.4, 0.5, 0.1), β1→3 = (0.3, 0.6, 0.1), and γ1→3 = (0.5, 0.4, 0.1).
Please note that this kind of synthetic data generation is
quite counter-intuitive. We first create the predicted values using some pre-set distribution, and then generate these
users’ actual categories using the predictive values of the
two data sources. For example, if the user in our synthetically generated audience has a predicted category of not c by
High Quality data source, then we assign it to ground truth
category of c by probability β1 = 0.2, not c by probability
β2 = 0.7, and unknown by probability β3 = 0.1.
Once we generate the dataset, we actually have aggregated values of category counts for each data source. Using
these category counts, we can apply the data quality assessment method we described in Section 5.3, and examine the
difference between our computed predictive values and the
actual predictive values as given above.
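For concreteness, the generation procedure can be sketched in a few lines of Python. The probability triples below are the High Quality values quoted above; everything else (names, audience construction) is an illustrative assumption.

import random

# Rows: predicted category -> (p(actual c), p(actual not c), p(actual unknown)).
ALPHA = (0.8, 0.15, 0.05)   # predicted c        (High Quality source)
BETA  = (0.2, 0.7, 0.1)     # predicted not c
GAMMA = (0.4, 0.5, 0.1)     # predicted unknown

LABELS = ("c", "not c", "unknown")
ROW = {"c": ALPHA, "not c": BETA, "unknown": GAMMA}

def ground_truth(predicted):
    """Draw a user's actual category given its predicted category."""
    return random.choices(LABELS, weights=ROW[predicted])[0]

# 100-user audience: 20/20/20 fixed, remaining 40 assigned uniformly.
audience = (["c"] * 20 + ["not c"] * 20 + ["unknown"] * 20
            + [random.choice(LABELS) for _ in range(40)])
actual = [ground_truth(p) for p in audience]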
The results of the above described simulations are given in
Figures 3 and 4. In Figure 3, we estimate the nine predictive
values for each of the data sources using our methodology, by
utilizing multiple campaigns (the results are averaged over
100 trials at each value on the x-axis). As we have proven in
Section 5.3, we need at least three campaigns to get a
unique solution for all nine probabilities.

Figure 5: Pearson correlations between the estimated positive population and the ground truth.

We can observe
that starting with four campaigns, the difference between the real
values and our predicted values falls to zero. We have
plotted the difference between α1 s (precision values, since
α1 = p(actually positive | predicted positive)), but the results are similar for other α2→3 , β1→3 , and γ1→3 .
Next, we performed another experiment where we introduced uniform noise in [−ζ, +ζ] (where we changed ζ
between 0 and 0.35) to the above nine real predictive values, and then generated the ground truth assignments. We
tried to recover the nine predictive values using six campaigns and present the difference (averaged over 100 trials)
between real and predicted α1 values in Figure 4. We can
see that even under significant noise levels, our methodology
can recover the precision values accurately. Because α1 for
the high quality data source is higher, we can also observe
that the noise effect is slightly less.
Real-World Results. Following the methods we discussed in Section 5.1, we ran 156 performance campaigns,
each of which targeted a specific website. We used half of
the campaigns to calculate RelativeErr and predictive values for around 100 data sources. Then, we tried to estimate
the positive population in the rest of campaigns using these
100 data sources. We utilized the average of positives predicted by the sources, and calculated the correlation with the
ground truth positive sizes. For the direct inference method,
it is clear that for each campaign C^i, its estimated positive population is G^i_+ ≈ D^i_+ · α1 + D^i_− · β1 + D^i · γ1 (for a single data source). For the RelativeErr method, by deducing from (3), we can roughly estimate G^i_+ = M · τ̂ R̂/(1 − µ̄), where M is the population size, τ̂ is the percentage of the population recognized by the ground truth in the training set, and µ̄ is the average (R − R̂)/R (i.e., RelativeErr from Eq. 3) over the training set for a single data source.
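Both estimators are one-liners once the inputs are in hand. The Python sketch below mirrors the two formulas above with illustrative variable names; in particular, we read the D terms as the per-campaign counts of users predicted positive, negative, and unknown by the data source, which is our assumption about the notation.

def direct_inference_positive(d_pos, d_neg, d_unk, alpha1, beta1, gamma1):
    # G+ ~= D+ * alpha1 + D- * beta1 + D * gamma1, for a single data source.
    return d_pos * alpha1 + d_neg * beta1 + d_unk * gamma1

def relative_err_positive(m, tau_hat, r_hat, mu_bar):
    # G+ = M * tau_hat * r_hat / (1 - mu_bar), deduced from Eq. (3).
    return m * tau_hat * r_hat / (1.0 - mu_bar)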
Each method’s Pearson correlation coefficient is shown in
Figure 5. The direct inference method gives a significantly
more accurate estimate of the positive population (p < .001)
and it correlates well with the ground truth.
Next, we utilized two popular data sources S1 and S2 as
the targeting criterion and ran test campaigns individually.
The reported positive rates from the independent evaluation
agency can be treated as the ground truth of their precisions.
The ground-truth precisions and our estimated values are
listed in Figure 6. The direct inference method yields a
much closer estimation to the ground truth (p < .001), while
the ranking method preserves the orders but the values are
substantially different from the ground truth.
Figure 6: Reported measurements on the positive population from different methods.

Figure 7: Inferred precision of a data source over a three month period.

Our proposed approach was deployed into Turn's data management platform and generates weekly reports on the quality of our top data sources. We have received positive
feedback from our campaign optimization managers in the
field, commenting that the reported precisions are close to
the real campaign results. Interestingly, by evaluating our
data sources periodically, we are forming a positive reinforcement loop over their data quality: feeling the pressure,
data providers work consistently to improve their data quality. For example, the estimated precision of one data source
over a three month period is plotted in Figure 7. It is clear
that the data source’s quality has been improved over this
time period.
8. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel framework to
evaluate first or third-party data sources on user properties
for online advertising, which is a particularly challenging
task when the ground truth is reported in aggregate form.
We call this problem data quality assessment, and presented
two solutions, one utilizing the data sources directly in a
campaign, and another one, which utilizes outputs from multiple online advertising campaigns to optimize a set of probabilities which represent the “goodness” of a data source.
We have also presented some use cases on how these evaluations can be utilized in online advertising domain, mainly in
targeting, assessing the amount of money that an advertiser
should pay for a data source, and forecasting. Some preliminary simulation and real-world results were also presented
that show the effectiveness of our methodology, as well as
some results on the performance of a well-established actual
data provider for age categorization of users on multiple realworld advertising campaigns.
Possible future work mainly lies on the use cases of the
evaluation output of our methodologies, as given in Section 6. Our current focus is on accurate targeting of online
users, which also needs to take into account the problem of combining multiple data sources and their quality assessments to come up with a better model.

Acknowledgments
We thank many talented scientists and engineers at Turn for their help and feedback in this work.

9. REFERENCES
[1] N. R. Adam and J. Wortmann. Security-control
methods for statistical databases: A comparative
study. ACM Computing Surveys, 21:515–556, 1989.
[2] G. Casella and R. L. Berger. Statistical inference,
volume 2. Duxbury Pacific Grove, CA, 2002.
[3] B. Chen, L. Chen, R. Ramakrishnan, and D. R.
Musicant. Learning from aggregate views. In Proc.
IEEE ICDE, 2006.
[4] S. Chen. Cheetah: a high performance, custom data
warehouse on top of mapreduce. Proc. VLDB
Endowment, 3(1-2):1459–1468, 2010.
[5] V. Cheplygina, D. M. J. Tax, and M. Loog. On
classification with bags, groups and sets. Pattern
Recognition Letters, 59:11–17, 2015.
[6] H. Elmeleegy, Y. Li, Y. Qi, P. Wilmot, M. Wu,
S. Kolay, A. Dasdan, and S. Chen. Overview of turn
data management platform for digital advertising.
Proc. VLDB Endowment, 6(11):1138–1149, 2013.
[7] C. Ferri, J. Hernandez-Orallo, and R. Modroiu. An
experimental comparison of performance measures for
classification. Pattern Recognition Letters, 30:27–38,
2009.
[8] A. Gunawardana and G. Shani. A survey of accuracy
evaluation metrics of recommendation tasks. Journal
of Machine Learning Research, 10:2935–2962, 2009.
[9] A. Jalali, S. Kolay, P. Foldes, and A. Dasdan. Scalable
audience reach estimation in real-time online
advertising. In Proc. ICDMW, pages 629–637, 2013.
[10] K.-C. Lee, B. Orten, A. Dasdan, and W. Li.
Estimating conversion rate in display advertising from
past performance data. In Proc. ACM KDD, pages
768–776, 2012.
[11] D. R. Musicant, J. M. Christensen, and J. F. Olson.
Supervised learning by training on aggregate outputs.
In Proc. IEEE ICDM, pages 252–261, 2007.
[12] C. Parker. An analysis of performance measures for
binary classifiers. In Proc. IEEE ICDM, pages
517–526, 2011.
[13] T. White. Hadoop: The Definitive Guide. O’Reilly
Media, Sebastopol, CA, 2012.
[14] M. H. Williams, C. Perlich, B. Dalessandro, and
F. Provost. Pleasing the advertising oracle:
Probabilistic prediction from sampled, aggregated
ground truth. In Proc. ACM ADKDD, 2014.
[15] J. Yan, N. Liu, G. Wang, W. Zhang, Y. Jiang, and
Z. Chen. How much can behavioral targeting help
online advertising? In Proc. ACM WWW, pages
261–270, 2009.
[16] F. X. Yu, K. Choromanski, S. Kumar, T. Jebara, and
S. Chang. On learning from label proportions.
arXiv:1402.5902v2, 2015.
| 2 |
Artifact reduction for separable non-local means
Sanjay Ghosh and Kunal N. Chaudhury1
arXiv:1710.09552v1 [] 26 Oct 2017
Department of Electrical Engineering, Indian Institute of Science, Bangalore.
Abstract. It was recently demonstrated [J. Electron. Imaging, 25(2), 2016] that one can perform fast non-local means
(NLM) denoising of one-dimensional signals using a method called lifting. The cost of lifting is independent of the
patch length, which dramatically reduces the run-time for large patches. Unfortunately, it is difficult to directly extend
lifting for non-local means denoising of images. To bypass this, the authors proposed a separable approximation in
which the image rows and columns are filtered using lifting. The overall algorithm is significantly faster than NLM, and
the results are comparable in terms of PSNR. However, the separable processing often produces vertical and horizontal
stripes in the image. This problem was previously addressed by using a bilateral filter-based post-smoothing, which
was effective in removing some of the stripes. In this letter, we demonstrate that stripes can be mitigated in the first
place simply by involving the neighboring rows (or columns) in the filtering. In other words, we use a two-dimensional
search (similar to NLM), while still using one-dimensional patches (as in the previous proposal). The novelty is in
the observation that one can use lifting for performing two-dimensional searches. The proposed approach produces
artifact-free images, whose quality and PSNR are comparable to NLM, while being significantly faster.
Keywords: Denoising, non-local means, fast algorithm, lifting, artifact.
1 Introduction
We consider the problem of denoising grayscale images corrupted with additive white Gaussian
noise. A popular denoising method is the non-local means (NLM) algorithm,1 where image patches
are used to perform pixel aggregation. While NLM is no longer the state-of-the-art, it is still
used in the image processing community due to its simplicity, decent denoising performance, and
the availability
of fast implementations. The NLM of an image f = {f(i) : i ∈ Ω}, where Ω = {i = (i1, i2) : 1 ≤ i1, i2 ≤ N}, is given by1
\[ \mathrm{NLM}[f](i) \;=\; \frac{\sum_{j \in S(i)} w_{ij}\, f(j)}{\sum_{j \in S(i)} w_{ij}} \qquad (i \in \Omega), \tag{1} \]
where S(i) = i + [−S, S]2 is a search window around the pixel of interest. The weights wij are set
to be
\[ w_{ij} \;=\; \exp\Big( -\frac{1}{\alpha^2} \sum_{k \in P} \big( f(i+k) - f(j+k) \big)^2 \Big), \tag{2} \]
where α is a smoothing parameter and P = [−K, K]2 is a two-dimensional patch.
A direct implementation of (1) has the per-pixel complexity of O(S 2 K 2 ), where S and K are
typically in the range [7, 20] and [1, 3].1 Several computational tricks and approximations have been
proposed to speedup the direct implementation.2–8 A particular means to speed up NLM is using
a separable approximation, which in fact is a standard trick in the image processing literature.9–12
In separable filtering, the rows are processed first followed by the columns (or in the reverse
order). Of course, if the original filter is non-separable, then the output of separable filtering is
generally different from that of the original filter, since a natural image typically contains diagonal
details.12 This is the case with NLM since expression (2) is not separable. The present focus is on a
1 Email: [email protected], [email protected]
recent separable approximation of NLM.13 At the core of this proposal is a method called lifting,
which computes the NLM of a one-dimensional signal using O(S) operations per sample. In other
words, the complexity of lifting is independent of the patch length K. Extending lifting for NLM
denoising of images, however, turns out to be a difficult task. Therefore, we proposed a separable
approximation, called separable NLM (SNLM),13 in which the rows and columns of the image are
independently filtered using lifting. In particular, we separately computed the “rows-then-columns”
and “columns-then-rows” filtering, which were then optimally combined. The per-pixel complexity
of SNLM is O(S), which is a dramatic reduction compared to the O(S 2 K 2 ) complexity of NLM.
A flip side of SNLM (as is the case with other separable formulations14 ) is that often vertical
and horizontal stripes are induced in the processed image. The stripes are more prominent along the
last filtered dimension.14 In SNLM, this problem was alleviated using the optimal recombination
mentioned above followed by a bilateral filter-based post-smoothing. In this work, we demonstrate
that the stripes can be mitigated in the first place simply by involving the neighboring rows (or
columns) in the filtering. In other words, we use a two-dimensional search (similar to classical
NLM1 ), while still using one-dimensional patches (as done previously13 ). The present novelty is in
the observation that one can use lifting for performing a two-dimensional search. In particular, the
per-pixel complexity of the proposed approach is O(S 2 ), which is higher than our previous proposal,
but still substantially lower than that of classical NLM. Importantly, the proposed approach no
longer exhibits the visible artifacts that are otherwise obtained using SNLM.
The rest of the paper is organized as follows. We recall the SNLM algorithm in Section 2 and
its fast implementation using lifting. We also illustrate the artifact problem with an example. The
proposed solution is presented in Section 3, along with some algorithmic details. In Section 4, we
report the denoising performance of our approach and compare it with classical NLM and SNLM.
We end the paper with some concluding remarks in Section 5.
2 Separable Non-Local Means
To set up the context, we briefly recall the SNLM algorithm.13 Suppose we have a one-dimensional
signal g = {g(i) : 1 ≤ i ≤ N }, corresponding to a row or column. The one-dimensional analogue
of (1) is given by
\[ \mathrm{NLM1D}[g](i) \;=\; \frac{\sum_{j \in S(i)} w_{ij}\, g(j)}{\sum_{j \in S(i)} w_{ij}} \qquad (1 \le i \le N), \tag{3} \]
and
\[ w_{ij} \;=\; \exp\Big( -\frac{1}{\beta^2} \sum_{k=-K}^{K} \big( g(i+k) - g(j+k) \big)^2 \Big), \tag{4} \]
where S(i) = i + [−S, S] and β is a smoothing parameter. In other words, both the search window
and patch are one-dimensional in this case. It was observed in our previous work that the weights
{wij : 1 ≤ i ≤ N, i − S ≤ j ≤ i + S} can be computed using O(1) operations with respect to K.
In particular, consider the N × N matrices:
\[ F(i, j) = g(i)\, g(j) \qquad (1 \le i, j \le N), \tag{5} \]
and
\[ \bar{F}(i, j) = \sum_{k=-K}^{K} F(i+k,\, j+k). \tag{6} \]
(a) Clean. (b) Noisy, 20.19. (c) 1D search and 1D patch. (d) 2D search and 1D patch. (e) Proposed (RC), 26.67. (f) Proposed (CR), 26.66. (g) SNLM,13 26.57. (h) NLM,1 26.02.
Fig 1 Denoising of Peppers at noise standard deviation σ = 25. We see stripes in (c) in which both the patch and
search window are one-dimensional (both are along rows). As seen in (d), the stripes can however be reduced using
a two-dimensional search in place of the one-dimensional counterpart (though we still see some noise). The image
obtained by further processing (d) using a two-dimensional search and one-dimensional patches (along columns) is
shown in (e). The visual quality and PSNR (mentioned below each image) of (e) is comparable to NLM. In (f), we have
reversed the order of processing: we first use one-dimensional patches along columns and then along rows (the search
is two-dimensional). Notice that the order (RC/CR) has no visible impact on the final output. Also notice that residual
stripes can be seen in SNLM.
We see that F̄ is the smoothed version of F, obtained by box filtering F along its sub-diagonals.13 The important observation is that we can write
\[ \sum_{k=-K}^{K} \big( g(i+k) - g(j+k) \big)^2 \;=\; \bar{F}(i,i) + \bar{F}(j,j) - 2\,\bar{F}(i,j). \tag{7} \]
In particular, using this so-called lifting, we can compute the patch distance using just three samples of F̄, one multiplication, and two additions. The computational gain comes from the fact that the box filtering in (6) can be computed using O(1) operations with respect to K using recursions.13 Moreover, following the observation that not all samples of F̄ are used in (3), an efficient mechanism for computing (and storing) just the required samples was proposed.13 The per-pixel complexity of computing (3) using lifting reduces to O(S) from the brute-force complexity of O(SK). Unfortunately, extending lifting to handle two-dimensional patches turns out to be difficult. Instead, we proposed to use separable filtering, where the rows (columns) are filtered using (3) followed by the columns (rows). The two distinct outputs are then optimally combined to get the final image. In fact, the reason behind the averaging was to suppress artifacts in the form of
stripes arising from the separable filtering. This is demonstrated with an example in Fig. 1, where
we have compared NLM, SNLM, and the proposed approach. We used bilateral filtering to remove
the stripes in SNLM, at an additional cost. However, the final image still has some residual artifacts.
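To make the lifting idea concrete, the following is a small self-contained Python/NumPy sketch of the one-dimensional filter (3)–(4) computed through (5)–(7). It is an illustration of the technique, not the authors' released implementation, and the boundary handling (reflection padding) is our own assumption.

import numpy as np

def nlm1d_lifting(g, S, K, beta):
    """1-D NLM where patch distances come from box-filtered diagonals of
    F(i, j) = g(i) g(j); cost per sample is independent of K."""
    g = np.asarray(g, dtype=float)
    N = len(g)
    P = K + S                                    # assumes P < N
    p = np.pad(g, P, mode="reflect")             # p[P + i] == g[i]
    box = np.ones(2 * K + 1)

    # Fbar[d][i] = sum_{k=-K..K} g(i+k) g(i+d+k)  (Eq. (6) along diagonal d).
    Fbar = {}
    for d in range(-S, S + 1):
        prod = p[P - K:P + N + K] * p[P - K + d:P + N + K + d]
        Fbar[d] = np.convolve(prod, box, mode="valid")   # length N

    num = np.zeros(N)
    den = np.zeros(N)
    diag = Fbar[0]                               # Fbar(i, i)
    idx = np.arange(N)
    for d in range(-S, S + 1):
        j = idx + d
        ok = (j >= 0) & (j < N)
        jj = np.clip(j, 0, N - 1)
        # Eq. (7): distance = Fbar(i,i) + Fbar(j,j) - 2 Fbar(i,j).
        dist = np.where(ok, diag + diag[jj] - 2.0 * Fbar[d], np.inf)
        w = np.exp(-dist / beta**2)              # Eq. (4); w = 0 outside the signal
        num += w * g[jj]
        den += w
    return num / den                             # Eq. (3)

Filtering the rows and then the columns of an image with this routine is exactly the separable scheme discussed above; the next section keeps the one-dimensional patches but widens the search to two dimensions.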
3 Proposed Approach
We see fewer stripes in Fig. 1(d) precisely because we use a two-dimensional search. In other words,
we use a cross between classical NLM and SNLM in which we use (8) for the aggregation and (4)
for the weights. The two-dimensional search results in the averaging of pixels from across rows
(and columns). This does not happen in SNLM, which causes the stripes to appear in Fig. 1(c).
Fig 2 Illustration of the idea behind the proposed method (see text for details). (Schematic: the pixel of interest i = (i1, i2), a neighboring pixel j = (j1, j2) on another row, and q = (i1, q2) on the same row; the search window is of size (2S + 1) × (2S + 1) and the row-aligned patches are of length 2K + 1.)
The working of our proposal is explained in Fig 2. The pixel of interest in this case is the pixel
at position i = (i1 , i2 ) marked with a red dot. The search window of length 2S + 1 is marked with a
green bounding box. Two neighboring pixels at locations j = (j1 , j2 ) and q = (i1 , q2 ) are marked
with red dots. The former pixel is on a neighboring row, while the latter is on the same row as the
pixel of interest. Similar to SNLM,13 we can consider either horizontal or vertical patches. For our
example, the patches (of length 2K + 1) are aligned with the image rows; they are marked with
light blue rectangles. For our proposal, the denoising at i = (i1 , i2 ) is performed using the formula:
\[ \frac{\sum_{\ell \in S(i)} w_{i\ell}\, f(\ell)}{\sum_{\ell \in S(i)} w_{i\ell}}, \tag{8} \]
and
\[ w_{i\ell} \;=\; \exp\Big( -\frac{1}{\beta^2} \sum_{k=-K}^{K} \big( f(\ell_1, \ell_2 + k) - f(i_1, i_2 + k) \big)^2 \Big), \tag{9} \]
where S(i) = i + [−S, S]2 and ` = (`1 , `2 ). To compute (8), we group the neighboring patches
into two categories: (i) patches with row index i1 , e.g., patch q in Fig. 2, and (ii) patches with a
different row index, e.g., patch j in the figure. Let {u(t) : 1 ≤ t ≤ N } and {v(t) : 1 ≤ t ≤ N } be
the i1 -th and j1 -th row, where N is the length of a row (see Fig 2). Similar to (5) and (6), we define
the N × N matrices:
\[ F_{uu}(p, q) = u(p)\,u(q), \quad F_{vv}(p, q) = v(p)\,v(q), \quad \text{and} \quad F_{uv}(p, q) = u(p)\,v(q), \tag{10} \]
and the corresponding matrices F̄uu, F̄vv, and F̄uv, where, for example,
\[ \bar{F}_{uu}(p, q) = \sum_{k=-K}^{K} u(p+k)\,u(q+k) \qquad (1 \le p, q \le N). \tag{11} \]
As in (7), the (squared) distance between patches centered at i = (i1 , i2 ) and q = (i1 , q2 ) is
\[ \bar{F}_{uu}(i_2, i_2) + \bar{F}_{uu}(q_2, q_2) - 2\,\bar{F}_{uu}(i_2, q_2). \tag{12} \]
On the other hand, the distance between patches centered at i = (i1, i2) and j = (j1, j2) is
\[ \bar{F}_{uu}(i_2, i_2) + \bar{F}_{vv}(j_2, j_2) - 2\,\bar{F}_{uv}(i_2, j_2). \tag{13} \]
In other words, we can compute the distance between patches centered at i and q using F̄uu. To
compute the distance between patches centered at i and j, we require the matrices F̄uu, F̄vv, and F̄uv.
Moreover, using these matrices, we can compute patch distances for different i, j, and q, provided
the row index of i and q is i1, and the row index of j is j1. Thus, an efficient way of computing (8)
is to sequentially process the rows. For each row (fixed u), we compute F̄uu, F̄vv, and F̄uv, where v
corresponds to neighboring rows that are separated by at most S. We compute 2S + 1 matrices of
the form F̄vv and another 2S matrices of the form F̄uv. As mentioned in Section 2, we can compute
each matrix using O(1) operations with respect to K. Moreover, as per the sum in (3), we only
require entries within the diagonal band {1 ≤ i, j ≤ N : |i − j| ≤ S} of each matrix. The cost
of computing the banded entries is thus O(N S) for each matrix. The overall cost of processing N
rows is O(N 2 S 2 ). The per-pixel complexity of computing (8) using the proposed approach is thus
O(S 2 ). We can efficiently compute (and store) the banded entries using the method in Section 2.2
of the original paper.13 The main difference with SNLM is that we require a total of 4S + 1 matrices
for processing each row; whereas, just one matrix is required in SNLM. As shown in Fig. 1(d),
some residual noise can still be seen after the processing mentioned above. We perform a similar
processing once more, except this time we use one-dimensional patches along columns. The visual
quality and PSNR of the final image (Fig. 1(e)) are comparable to NLM (Fig. 1(h)). Moreover, we
see from Figs. 1(e) and 1(f) that if we first use one-dimensional patches along columns and then
along rows, then the outputs are similar. We empirically corroborate these observations in the next
section. Therefore, we propose to first process the rows using (8) and then process the columns of
the intermediate image using (8). A precise description of the proposed approach for processing
the (noisy) image along rows using lifting is provided in Algorithm 1. We then perform column
processing on the intermediate image to obtain the final output of our algorithm. That is, we simply
apply Algorithm 1 on the intermediate image, where we logically switch the rows and columns in
the algorithm. Suppose S1 and S2 are the corresponding search windows for the row-aligned and
column-aligned processing. Then we set the search parameter in Algorithm 1 as: S = S1 for the
row-aligned processing, and S = S2 for the column-aligned processing.
Data: Image f of size M × N, and parameters K, S, β.
Result: Row-processed image f̃ of size M × N given by (8).
for i1 = 1, . . . , M do
    % lifting
    for i2 = 1, . . . , N do
        u(i2) = f(i1, i2);
    end
    Compute matrices Fuu and F̄uu using (10) and (11);
    for j1 ∈ {i1 − S, . . . , i1 + S} \ {i1} do
        for j2 = 1, . . . , N do
            v(j2) = f(j1, j2);
        end
        Compute matrices Fvv, Fuv, F̄vv, and F̄uv using (10) and (11);
    end
    % weight computation and pixel aggregation
    for i2 = 1, . . . , N do
        Set P = 0 and Q = 0;
        for j1 = i1 do
            for k2 ∈ {i2 − S, . . . , i2 + S} do
                Compute weight w_{i2,k2} using (12) and (9);
                P = P + w_{i2,k2} f(j1, k2);
                Q = Q + w_{i2,k2};
            end
        end
        for j1 ∈ {i1 − S, . . . , i1 + S} \ {i1} do
            for j2 ∈ {i2 − S, . . . , i2 + S} do
                Compute weight w_{i2,j2} using (13) and (9);
                P = P + w_{i2,j2} f(j1, j2);
                Q = Q + w_{i2,j2};
            end
        end
        f̃(i1, i2) = P/Q.
    end
end
Algorithm 1: Proposed processing along rows using lifting.
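A direct, unoptimized Python rendering of this row pass is sketched below. It reuses the same convolution trick for the banded entries of F̄uu, F̄vv, and F̄uv, recomputes F̄vv redundantly for brevity (the paper instead stores the 4S + 1 matrices per row), and again assumes reflection padding; it is an illustration, not the authors' code.

import numpy as np

def banded_fbar(u, v, S, K):
    """Fbar_uv(i, i+d) for all i and |d| <= S, as a (2S+1, N) array."""
    N = len(u)
    P = K + S                                    # assumes P < N
    pu = np.pad(u, P, mode="reflect")
    pv = np.pad(v, P, mode="reflect")
    box = np.ones(2 * K + 1)
    out = np.empty((2 * S + 1, N))
    for d in range(-S, S + 1):
        prod = pu[P - K:P + N + K] * pv[P - K + d:P + N + K + d]
        out[d + S] = np.convolve(prod, box, mode="valid")
    return out

def row_pass(f, S, K, beta):
    """Row-aligned pass of Eq. (8): 2-D search window, 1-D (row) patches."""
    M, N = f.shape
    num = np.zeros((M, N))
    den = np.zeros((M, N))
    idx = np.arange(N)
    for i1 in range(M):
        u = f[i1].astype(float)
        Fuu = banded_fbar(u, u, S, K)
        for j1 in range(max(0, i1 - S), min(M, i1 + S + 1)):
            v = f[j1].astype(float)
            Fvv = banded_fbar(v, v, S, K)        # redundant here; cached in practice
            Fuv = banded_fbar(u, v, S, K)
            for d in range(-S, S + 1):
                j2 = idx + d
                ok = (j2 >= 0) & (j2 < N)
                jj = np.clip(j2, 0, N - 1)
                # Eq. (13); for j1 == i1 this reduces to Eq. (12).
                dist = np.where(ok, Fuu[S] + Fvv[S][jj] - 2.0 * Fuv[d + S], np.inf)
                w = np.exp(-dist / beta**2)
                num[i1] += w * v[jj]
                den[i1] += w
    return num / den

# The column pass is the same routine applied to the transposed image, e.g.:
# out = row_pass(noisy, S1, K, beta); out = row_pass(out.T, S2, K, beta).T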
4 Experiments
The denoising performance of the proposed method is compared with NLM and SNLM in Table 1.
We have used standard grayscale images from15, 16 for our experiments. The Matlab implementation
used to generate the results in this section is publicly available2 . The search windows for the
three methods were set as follows.

2 http://in.mathworks.com/matlabcentral/fileexchange/64856

Table 1 Comparison of the denoising performances on various images15 in terms of PSNR/SSIM at various noise standard deviations σ. The PSNRs are rounded to one decimal place, while the SSIMs (in %) are rounded to integers.

House (256 × 256) — σ = 5, 10, 20, 30, 50:
Noisy           34.1/83  28.1/60  22.1/34  18.6/22  14.2/12
NLM1            36.9/90  34.1/87  29.7/82  26.8/77  24.0/69
Darbon et al.4  36.1/90  31.4/75  26.1/51  22.8/36  18.6/21
SNLM13          36.6/89  33.6/86  29.4/81  26.5/76  23.7/69
Proposed        36.6/89  34.1/86  30.4/82  27.3/77  24.1/70
BM3D19          38.6/95  34.7/93  31.3/88  29.3/85  26.6/78

Montage (256 × 256) — σ = 5, 10, 20, 30, 50:
Noisy           34.2/83  28.1/61  22.1/36  18.6/25  14.1/15
NLM1            39.1/97  34.3/94  29.6/89  26.5/85  22.2/76
Darbon et al.4  38.4/89  30.9/76  25.8/52  22.6/38  18.6/24
SNLM13          39.3/97  34.6/94  29.7/89  26.7/84  22.5/77
Proposed        39.4/97  34.8/94  30.2/90  27.3/86  23.4/79
BM3D19          41.1/98  37.3/96  33.5/94  31.2/91  27.4/85

Boat (512 × 512) — σ = 5, 10, 20, 30, 50:
Noisy           34.1/97  28.1/90  22.1/73  18.6/59  14.2/41
NLM1            35.1/96  30.8/89  26.7/78  24.7/70  23.0/62
Darbon et al.4  34.4/97  30.3/94  25.4/82  22.4/71  18.4/53
SNLM13          35.0/96  30.7/89  26.6/77  24.5/69  22.7/61
Proposed        34.9/96  30.7/89  26.8/77  24.7/70  22.9/62
BM3D19          37.3/98  33.9/96  30.8/92  29.0/88  26.7/81

Man (1024 × 1024) — σ = 5, 10, 20, 30, 50:
Noisy           34.1/99  28.1/97  22.1/91  18.6/85  14.1/70
NLM1            35.3/98  31.1/95  27.5/88  25.8/83  24.2/76
Darbon et al.4  35.1/97  30.4/98  25.6/95  22.5/90  18.5/80
SNLM13          35.3/98  31.0/95  27.2/87  25.4/83  23.8/75
Proposed        35.1/98  31.1/95  27.5/88  25.8/84  24.2/78
BM3D19          37.3/99  34.1/98  31.2/96  29.5/94  27.4/89

Table 2 Comparison of the run-time (in seconds) of the proposed approach with classical NLM for a 256 × 256 image. The computations were performed using Matlab on a 3.40 GHz Intel quad-core machine with 32 GB memory.

                K = 2 (S = 7, 10, 12)      K = 3 (S = 7, 10, 12)
NLM1            44      87      124        45      88      126
Darbon et al.4  0.33    0.60    0.84       0.33    0.62    0.85
SNLM13          0.31    0.39    0.45       0.32    0.40    0.46
Proposed        1.20    2.30    3.20       1.30    2.40    3.30

Suppose S is the search window for NLM (which we take as
reference). Following the original proposal,13 the window for SNLM is also set as S. For a fair
comparison with NLM, we ensure that an equal number of pixels is averaged in both methods. This
is achieved if (2S1 + 1)2 + (2S2 + 1)2 = (2S + 1)2 . Moreover, following,14 we set S2 = S1 /2.
These equations uniquely determine S1 and S2 (up to an integer rounding). Moreover, we normalize
the smoothing parameters in (2) and (9) using the relation β² = α²/(2K + 1). For the results in Table 1, we set K = 3, S = 10, S1 = 9, S2 = 4, and α = 10σ. We can notice from Table 1 that the proposed approach gives comparable results in terms of PSNR and SSIM.17 A visual comparison of the denoising results is provided in Figs. 3 and 4. We can clearly see some stripes in the images obtained using SNLM, both with and without the post-processing (see the boxed areas). In contrast, there are hardly any artifacts present in the denoised image obtained using our method. A timing comparison is provided in Table 2. While the proposed method is slower than SNLM (this is the price we pay for removing the stripes), it is nevertheless significantly faster than NLM. We note that though Darbon et al.4 is generally faster than our current proposal, its denoising performance starts deteriorating with the increase in noise variance. This is evident from Table 1 and Fig. 4. We also note that NLM and SNLM fall short of KSVD18 and BM3D19 in terms of denoising performance. Nevertheless, NLM continues to be of interest due to its decent denoising capability, and importantly, the availability of fast approximations.20–23 As reported by other authors,24 NLM is quite effective in preserving fine details, while successfully removing noise.

(a) Clean (512 × 512). (b) Noisy, 18.6/60. (c) NLM,1 25.3/71. (d) SNLM, 25.3/71. (e) SNLM13 (smoothed), 24.9/69. (f) Proposed, 25.4/70.
Fig 3 Denoising of Man15 at σ = 30. Notice that stripes can be seen in (e) after smoothing (d) using a bilateral-filter. The PSNR/SSIM values with reference to the clean image are also provided. The result from our proposal (f) is visually similar to classical NLM (c). The runtime for NLM, SNLM, and the proposed method are 335, 1.7, and 9.8 seconds. We used the parameter settings mentioned in the main text. The PSNR/SSIM between the proposed approximation (f) and the classical NLM (c) are 40.58/70.22, whereas the values are 37.96/69.90 for SNLM (d).
(a) Clean (512 × 512). (b) Noisy, 28.1/83. (c) NLM,1 34.7/94. (d) SNLM,13 34.6/94. (e) BM3D,19 37.2/97. (f) KSVD,18 36.9/97. (g) Darbon et al.,4 30.9/91. (h) Proposed, 34.6/94.
Fig 4 Denoising of kodim2316 at σ = 10. The result obtained through our proposal (h) is visually similar to classical NLM (c). The runtime for NLM, SNLM, and the proposed method are 335, 1.7, and 9.8 seconds. The PSNR/SSIM between the proposed approximation (h) and the classical NLM (c) are 43.33/99.7, whereas these values are 42.4/99.4 for (d) and 30.85/89.3 for (g). We have zoomed the region around the beak in (c), (d), (g), and (h). We can see some artifacts in (d) and residual noise in (g); the zooms in (c) and (h) are visually indistinguishable.
5 Conclusion
We proposed a method that uses the idea of lifting from previous work13 to perform fast non-local
means denoising of images. The proposed method does not give rise to undesirable artifacts (as was
the case with the original proposal), and produces images whose denoising quality and PSNR/SSIM
are comparable to non-local means. While this comes at the expense of added computation, the
proposed method nevertheless is much faster than non-local means. In fact, the speedup is about
40x for practical parameter settings.
6 Acknowledgements
The last author was supported by a Startup Grant from IISc and EMR Grant SB/S3/EECE/281/2016
from DST, Government of India.
References
1 A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” Proc. IEEE
Conference on Computer Vision and Pattern Recognition, 2, pp. 60-65 (2005).
2 M. Mahmoudi and G. Sapiro, “Fast image and video denoising via nonlocal means of similar
neighborhoods,” IEEE Signal Processing Letters, 12(12), pp. 839-842 (2005).
3 J. Wang, Y. Guo, Y. Ying, Y. Liu, and Q. Peng, “Fast non-local algorithm for image denoising,”
Proc. IEEE International Conference on Image Processing, pp. 1429-1432 (2006).
4 J. Darbon, A. Cunha, T. F. Chan, S. Osher, and G. J. Jensen, “Fast nonlocal filtering applied to
electron cryomicroscopy,” Proc. IEEE International Symposium on Biomedical Imaging, pp.
1331-1334 (2008).
5 A. Dauwe, B. Goossens, H. Luong, and W. Philips, “A fast non-local image denoising algorithm,”
Proc. SPIE Electronic Imaging, 68(12), pp. 1331-1334 (2008).
6 J. Orchard, M. Ebrahimi, and A. Wong, “Efficient nonlocal-means denoising using the SVD,”
Proc. IEEE International Conference on Image Processing, pp. 1732-1735 (2008).
7 V. Karnati, M. Uliyar, and S. Dey, “Fast non-local algorithm for image denoising,” Proc. IEEE
International Conference on Image Processing, pp. 3873-3876 (2009).
8 L. Condat, “A simple trick to speed up and improve the non-local means,” Research Report,
HAL-00512801, (2010).
9 P. M. Narendra, “A separable median filter for image noise smoothing,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, 3, pp. 20-29 (1981).
10 N. Fukushima, S. Fujita, and Y. Ishibashi, “Switching dual kernels for separable edge-preserving
filtering,” IEEE International Conference on Acoustics, Speech and Signal Processing, (2015).
11 T. Q. Pham and L. J. Van Vliet, “Separable bilateral filtering for fast video preprocessing,” Proc.
IEEE International Conference on Multimedia and Expo, (2005).
12 Y. S. Kim, H. Lim, O. Choi, K. Lee, J. D. K. Kim, and C. Kim, “Separable bilateral non-local
means,” Proc. IEEE International Conference on Image Processing, pp. 1513-1516 (2011).
13 S. Ghosh and K. N. Chaudhury, “Fast separable nonlocal means,” SPIE Journal of Electronic
Imaging, 25(2), 023026 (2016).
14 E. S. Gastal and M. M. Oliveira. “Domain transform for edge-aware image and video processing,”
ACM Transactions on Graphics (ToG), 30(4), 69 (2011).
15 BM3D Image Database, http://www.cs.tut.fi/˜foi/GCF-BM3D.
16 KODAK Image Database, http://r0k.us/graphics/kodak/.
17 Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From
error visibility to structural similarity,” IEEE Transactions on Image Processing, 13(4), pp.
600-612 (2004).
18 M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over
learned dictionaries,” IEEE Transactions on Image Processing, 15(12), pp. 3736-3745 (2006).
19 K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transformdomain collaborative filtering,” IEEE Transactions on Image Processing, 16(8), pp. 2080-2095
(2007).
20 J. M. Batikian and M. Liebling, “Multicycle non-local means denoising of cardiac image
sequences,” IEEE International Symposium on Biomedical Imaging, pp. 1071-1074 (2014).
21 C. Chan, R. Fulton, R. Barnett, D.D. Feng, and S. Meikle, “Post-reconstruction nonlocal means
filtering of whole-body PET with an anatomical prior,” IEEE Transactions on Medical Imaging,
33(3), pp. 636-650 (2014).
22 G. Chen, P. Zhang, Y. Wu, D. Shen, and P.T. Yap, “Collaborative non-local means denoising
of magnetic resonance images,” IEEE International Symposium on Biomedical Imaging, pp.
564-567 (2015).
23 D. Zeng, J. Huang, H. Zhang, Z. Bian, S. Niu, Z. Zhang, Q. Feng, W. Chen, and J. Ma, “Spectral
CT image restoration via an average image-induced nonlocal means filter,” IEEE Transactions
on Biomedical Engineering, 63(5), pp. 1044-1057 (2016).
24 G. Treece, “The bitonic filter: linear filtering in an edge-preserving morphological framework,”
IEEE Transactions on Image Processing, 25(11), pp. 5199-5211 (2016).
| 1 |
Asynchronous Programming in a Prioritized Form
Mohamed A. El-Zawawy1,2
arXiv:1501.00669v1 [] 4 Jan 2015
1 College of Computer and Information Sciences
Al Imam Mohammad Ibn Saud Islamic University
Riyadh, Kingdom of Saudi Arabia
2 Department of Mathematics
Faculty of Science
Cairo University
Giza 12613, Egypt
[email protected]
Abstract: Asynchronous programming has appeared as a programming style that overcomes undesired properties
of concurrent programming. Typically in asynchronous models of programming, methods are posted into a post
list for later execution. The order of method executions is serial, but nondeterministic.
This paper presents a new and simple, yet powerful, model for asynchronous programming. The proposed model
consists of two components: a context-free grammar and an operational semantics. The model is expressive enough
to capture important applications. An advantage of our model over related work is that the model simplifies the way posted methods are assigned priorities. Another advantage is that the operational semantics uses
the simple concept of singly linked list to simulate the prioritized process of methods posting and execution. The
simplicity and expressiveness make it relatively easy for analysis algorithms to disclose the otherwise un-captured
programming bugs in asynchronous programs.
Key–Words: Prioritized posting, asynchronous programming, operational semantics, context-free grammar, concurrent programming.
1 Introduction
All contemporary appliances (mobile, desktop, or
web applications) require high responsiveness which
is conveniently provided by asynchronous programming. Hence application program interfaces (APIs)
enabling asynchronous, non-blocking tasks (such as
web access or file operations) are accommodated
in dominant programming languages. APIs provide
asynchronous programming but mostly in a hard way.
For example, consider the following situation. A
unique user interface (UI) task thread is typically used
to design and implement user interfaces. Hence events
on that thread simulate tasks that change the UI state.
Therefore, when the UI cannot be redrawn or respond,
it gets frozen. This makes it sensible, in order to keep
the application responding continuously to UI tasks,
to run blocking I/O commands and long-lasting CPU-bound operations asynchronously.
Asynchronous programming has multi-threaded
roots. This is so as APIs have been implemented using
multi-threaded programs with shared-memory. Software threads execution is not affected by the number
of processors in the system. This is justified by the
fact that the threads are executed as recursive sequential softwares running concurrently with interleaved
write and reads commands. The many possible interleavings in this case cause the complexity of models
of concurrent programming [8]. In a complex process,
atomic locking commands can be added for prevention and prediction of bad thread interleavings. The
non-deterministic style of interleaving occurrence creates rarely appearing programming-errors which are
typically very hard to simulate and fix. This difficulty lead researchers to design multi-threaded programs (of APIs) in the framework of asynchronous
programming models [6, 7].
The relative simplicity of asynchronous programming makes it a convenient choice to implement APIs
or reactive systems. This is proved by recent years intense use of asynchronous programming by servers,
desktop applications, and embedded systems. The
idea of asynchronous programming is to divide cumulative program executions into tasks that are brieflyrunning. Moreover accessing the shared memory,
each task is executed as a recursive sequential software that specifies (posts) new methods to be executed
later.
Many attempts were made to present formal asynchronous programming models. However few attempts [5] were made to express and formalize the
fact that posted tasks in asynchronous programs may
well have different execution priorities. A big disadvantage of existing work [5] that considers execution
priorities is the complexity of the models hosting such
priorities. For example the work in [5] considers the
execution priorities using several task-buffers which
makes the solution a bit involved.
This paper presents a simple, yet powerful, model
for asynchronous programming with priorities for task
posting. We call the proposed model AsynchP . The
paper also presents a novel and robust operational semantics for the constructs of AsynchP . A simple
singly linked list of prioritized posted tasks is used to
precisely capture the posting process. Our proposed
asynchronous model is to simplify analyzing asynchronous programs [4].
Motivating Example
A motivating example of designing our model comes
form the way hardware interactions take place in operating systems (more specifically in Windows) kernels.
Concepts of prioritized interrupt sets are used to simulate these hardware interactions in an asynchronous
style. For such applications a simple, yet powerful
and mathematically well founded, model for prioritized asynchronous programming is required.
Contribution
The contributions of this paper are:
2 Prioritized Asynchronous Programming Model
This section presents our model, AsynchP , for prioritized asynchronous programming. In AsynchP , each
posted method has an execution priority. An asynchronous program execution is typically divided into
quick-running methods (tasks). Tasks of higher priority get executed first and task of equal priorities are
executed using the first come first serviced strategy.
Asynchronous programming has an important application in reactive systems where a single task must
not be allowed to run too long and to prevent executing other (potentially) highly prioritized tasks.
Figure 1 presents the simple and powerful model
AsynchP for prioritized asynchronous programming.
Considering single local and global variables and using
free syntax of expressions does not cause any loss of generality. However, each expression is built using
consists of a single global variable g and a sequence
of methods denoted M1 , . . . , Mn . The Provided(e)
statement continues executing the program provided
that the expression e evaluates to a non-zero value.
Each method M is expressed as a structure
meth(m, l, S) of a method name, a single local variable l, and a top-level statement S. The sets of all program methods and statements are denoted by Meths
and Stmts, respectively. Intuitively, the asynchronous
call is modeled by the statement Synch(m(e),p)
where:
• the called method name is m,
• the calling parameter is the expression e, and
• the execution priority of this call is p.
• A prioritized asynchronous programming model;
AsynchP .
• A novel operational semantics for AsynchP programs.
Organization
The rest of the paper is organized as follows. Section 2
presents the proposed prioritized asynchronous programming model – AsynchP . The semantics of prioritized asynchronous programming model; AsynchP
is shown in Section 3 which is followed by Section 4
that reviews related work and presents directions for
future work. The last section (Section 5) of the paper
concludes it.
We assume three levels of execution priorities;
{high(1), medium(2), low(3)}.
3 Mathematical Framework for AsynchP
This section presents a novel operational semantics
for asynchronous programs built using AsynchP . Our
semantics is based on a singly linked list (which we call
Asynchronous Linked List (ALL)) to host the posted
methods. ALL is divided into three regions using
pointers. The first, the middle, and last regions of ALL
host posted methods that have high, medium, and low
execution priorities, respectively.
Definition 1 introduces formally the concept of
(Asynchronous Node (AN)) to be used to build ALL.
g ∈ G = Global variable names.
l ∈ L = Method local variable names.
p ∈ P = {high(1), medium(2), low(3)}
= The set of all synchronization priorities.
m ∈ M = The set of all method names.
v ∈ V al = the set of possible values of local and global variables.
S ∈ Stats
::=
S1 ; S2 | g := e | l := e | Provided e | if e then St else Sf |
while e do S | run m(e) | return() | Synch(m(e),p).
M ∈ Meths
::=
meth(m, l, S).
P ∈ Programs
::=
program(g, M ∗ ).
Figure 1: AsynchP : a language model for a simple prioritized asynchronous programming.
Definition 1 An asynchronous node (AN), n, is a single linked list node whose data contents are two locations containing:
• x1 : a method name, and
• x2 : a parameter expression.
For a method call m(e) in a AsynchP program, we let
Nodem(e) denotes the asynchronous node whose locations x1 and x2 contain m and e, respectively. The
set of all asynchronous nodes is denoted by NodesA .
Definition 2 introduces formally the concept of
Asynchronous Linked List (ALL) that is to be used to
accurately capturing the semantics of the constructs of
the proposed asynchronous model.
Definition 2 An asynchronous linked list (ALL),
li =< f, c, eh , em >,
(1)
is a singly linked list whose nodes are asynchronous
nodes (in NodesA ) such that:
• f is a pointer to the first node of the list,
• c is a pointer to the current node, and
• eh , em are pointers to the last node in the list
hosting a method of high and medium priorities,
respectively.
The set of all asynchronous linked lists is denoted by
ListsA .
Whenever a method gets posted, an asynchronous
node is created and inserted into an asynchronous list.
If the posted method is of priority h or m, the created node gets inserted after the nodes pointed to by
eh or em , respectively. If the posted method is of priority l, the created node gets inserted at the end of the
list. Whenever a posted method is to be executed, the
method corresponding to the head of an asynchronous
node is executed and that head gets removed form the
list. These two operations are assumed to be carried
out by the functions defined in Definition 3.
Definition 3 Let li =< f, c, eh , em > be a asynchronous linked list (in ListsA ). We let
• addA : NodesA ×P ×ListsA → ListsA denotes a
map that adds a given node n of a given priority
p after the node pointed to be li.ep in a given list
li1 .
• removeA : ListsA → NodesA × ListsA denotes
a map that removes the first node of a given list
li and returns the removed node and the resulting
linked list.
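As an illustration only (the paper defines the maps abstractly), the three-region list and the two maps of Definition 3 can be simulated faithfully with three FIFO segments. The Python sketch below, with assumed names, is behaviorally equivalent to a single linked list whose eh and em pointers mark the ends of the high- and medium-priority regions.

from collections import deque

HIGH, MEDIUM, LOW = 1, 2, 3   # the priorities of AsynchP

class AsynchronousLinkedList:
    """A simulation of ALL: high nodes first, then medium, then low."""
    def __init__(self):
        self._segments = {HIGH: deque(), MEDIUM: deque(), LOW: deque()}

    def add(self, method_name, arg_expr, priority):
        # add_A: insert Node(m(e)) after e_h / e_m, or at the end for low priority.
        self._segments[priority].append((method_name, arg_expr))

    def remove(self):
        # remove_A: pop the head of the overall list.
        for p in (HIGH, MEDIUM, LOW):
            if self._segments[p]:
                return self._segments[p].popleft()
        raise IndexError("empty asynchronous linked list")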
Definition 4 introduces the states of our proposed
operational semantics.
Definition 4 Let program(g, M1 , . . . , Mn ), where
Mi = meth(mi , li , Si ), be a program in AsynchP . An
asynchronous program state (APS) is a triple (s,li,sk),
where:
• s is a partial map from G ∪ (M × L) to Val.
• li is an asynchronous linked list.
• sk is stack of method names.
We let Mi .l and Mi .l denote li and Si , respectively.
Each semantic state is a triple of a partial map captures
the contents of global and local variables, an asynchronous linked list, and a stack of method names.
1 Note that p ∈ {h, m, l}. If p = l, el is the last node in the list.
li′ = addA(Node(m(e)), p, li)
──────────────────────────────────────────────── (synchs)
Synch(m(e), p) : (s, li, sk) → (s, li′, sk)

is-empty(sk) = false    sk′ = pop(sk)
──────────────────────────────────────────────── (returns)
return() : (s, li, sk) → (s, li, sk′)

m′ = peek(sk)    v = ‖e(s(g), s(m′.l))‖    s′′ = s[m.l ↦ v]    sk′′ = push(sk, m)    m.S : (s′′, li, sk′′) → (s′, li′, sk′)
──────────────────────────────────────────────── (runs)
run m(e) : (s, li, sk) → (s′, li′, sk′)

m′ = peek(sk)    v = ‖e(s.g, m′.l)‖
──────────────────────────────────────────────── (:=sg)
g := e : (s, li, sk) → (s[g ↦ v], li, sk)

m′ = peek(sk)    v = ‖e(s.g, m′.l)‖
──────────────────────────────────────────────── (:=sl)
l := e : (s, li, sk) → (s[m′.l ↦ v], li, sk)

m′ = peek(sk)    ‖e(s.g, m′.l)‖ ≠ 0
──────────────────────────────────────────────── (ps)
Provided e : (s, li, sk) → (s, li, sk)

m′ = peek(sk)    ‖e(s.g, m′.l)‖ = 0
──────────────────────────────────────────────── (w1s)
while e do S : (s, li, sk) → (s, li, sk)

m′ = peek(sk)    ‖e(s.g, m′.l)‖ ≠ 0    S : (s, li, sk) → (s′′, li′′, sk′′)    while e do S : (s′′, li′′, sk′′) → (s′, li′, sk′)
──────────────────────────────────────────────── (w2s)
while e do S : (s, li, sk) → (s′, li′, sk′)

m′ = peek(sk)    ‖e(s.g, m′.l)‖ = 0    Sf : (s, li, sk) → (s′, li′, sk′)
──────────────────────────────────────────────── (ifs1)
if e then St else Sf : (s, li, sk) → (s′, li′, sk′)

m′ = peek(sk)    ‖e(s.g, m′.l)‖ ≠ 0    St : (s, li, sk) → (s′, li′, sk′)
──────────────────────────────────────────────── (ifs2)
if e then St else Sf : (s, li, sk) → (s′, li′, sk′)

S1 : (s, li, sk) → (s′′, li′′, sk′′)    S2 : (s′′, li′′, sk′′) → (s′, li′, sk′)
──────────────────────────────────────────────── (;s)
S1 ; S2 : (s, li, sk) → (s′, li′, sk′)

Figure 2: Transition rules for statements.
The stack is meant to keep the order in which methods
call each other.
Figures 2 and 3 present the transition rules of the
proposed operational semantics. Some comments on
the rules are in order. The rule synchs creates an asynchronous node corresponding to the method m and the
parameter e. Using the map addA , the node then is
added to the asynchronous list li to get the new list li′ .
The rule (returns ), pops an element from the method
stack as the return statement means that the top element of the stack is executed. The rule (runs ) first
peeks the first element of the stack to get the local variable (m′ .l) of the currently active method. This local
variable is then used together with the global variable
to evaluate the expression e. The resulting value is
used to modify the local variable of the method (m)
that is to be executed. Then m is pushed into the stack
and the statement of m is executed.
The rule (progs ) first runs the statements of all
methods of the program being executed, then runs all
statements of the methods that are posted. The posted
statements are executed via the rules (⇒s1 ) and (⇒s2 ).
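Rules (progs), (⇒s1), and (⇒s2) together amount to the following dispatch loop, sketched in Python against the list simulation above. The run_method callback stands in for the statement semantics of Figures 2 and 3 and is assumed here, not given by the paper.

def execute_program(state, main_calls, post_list, run_method):
    """main_calls: the program's methods M_1 .. M_n as (name, arg) pairs."""
    for name, arg in main_calls:          # rule (progs): run M_1, ..., M_n
        state = run_method(state, name, arg, post_list)
    while True:                           # rules (=>s1) and (=>s2)
        try:
            name, arg = post_list.remove()
        except IndexError:                # empty list: (=>s1), execution ends
            return state
        state = run_method(state, name, arg, post_list)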
4 Related and Future Work
Parallel, distributed, reactive, and concurrent programming have been attracting much research activity. The asynchronous programming methodologies
include:
• multi-threaded light-weight orchestration programming [19],
• thread Join-based allocation,
• typed synchronous programming languages [20],
• functional sensible programming,
• promises, and
• co-methods and futures agents [21, 22].

──────────────────────────────────────────────── (⇒s1)
(s, empty, empty) ⇒ (s, empty, empty)

(n, li′) = removeA(li)    n.x1.S : (s, li′, empty) → (s′′, li′′, empty)    (s′′, li′′, empty) ⇒ (s′, empty, empty)
──────────────────────────────────────────────── (⇒s2)
(s, li, empty) ⇒ (s′, empty, empty)

sk′′ = push(m, sk)    S : (s, li, sk′′) → (s′, li′, sk′)
──────────────────────────────────────────────── (meths)
meth(m, l, S) : (s, li, sk) → (s′, li′, sk′)

∀1 ≤ i ≤ n. Mi : (si, lii, ski) → (si+1, lii+1, ski+1)    (sn+1, lin+1, empty) ⇒ (s′, empty, empty)
──────────────────────────────────────────────── (progs)
program(g, M1, . . . , Mn) : (s1, li1, sk1) → (s′, empty, empty)

Figure 3: Transition rules for methods and programs.
Event-based techniques for programming have been
using delimited, monadic continuations [17,
18]. Fork-join, task, async, and event functions appear
not to rely on a specific language design. There is
a big research debate about the relationship between
threads and events in systems research [16].
In an asynchronous program, executions containing context switches bounded by a user-specified limit
are explored by context-bounded verification [14, 15].
This context-bounding idea is not reasonable for programs with a big number of events. Several treatments
for this problem have been proposed. Without losing decidability, [13] proposed a
context-minimizing technique permitting unbounded
context switches. For asynchronous concurrent programs, in [12], the round-robin technique for scheduling is used to enable unbounded context switches.
Sequential techniques are also used to analyze
asynchronous programs. In [14], a source-to-source
technique building sequential programs from multithreaded programs was proposed via under approximating the possible set of executions of the input program. A novel source-to-source transformation providing for any context-bound, a context-bounded under approximation was presented in [11]. A main
issue of the work in [11] is that the resulting sequential program may host main states unreachable
in the given asynchronous program. Other techniques
like [10] treat this problem by repeatedly running the
code to the control points where user-defined values are needed. The work in [9] compares the techniques of asynchronous program verification that
use verification-condition-checking against that use
model-checking. One major result of this work is that
eager approaches outperform lazy ones. The work
in [8] uses the construction using a bound on the task
number, to reduce asynchronous programs into sequential programs via priority-preemptive schedulers.
The work presented in this paper is close to sequentialization [14]; the concept describing compositional reductions to get sequential programs from concurrent ones. Although sequentialization started by
checking multi-threaded programs with one contextswitch, it was developed later to treat a user-specified
number of context-switches. These switches occur
among a statically-specified group of threads running
in round-robin (RR) order [11]. In [2], a technique for treating context switches among an unspecified number of
dynamically-created tasks was presented. This technique (in [2]) hence explicitly treats event-oriented
asynchronous programs.
For future work, it is interesting to devise static
analyses for asynchronous programs using the model
AsynchP [1]. Initial experiments show that our proposed model is expected to support devising robust
and powerful analysis techniques. An example of
targeted analyses is dead-posting elimination which
aims at removing the unnecessary posting statements
from asynchronous programs.
5 Conclusion
The main reason to use asynchronous programming is
to overcome some problems of concurrent programming. The main idea of asynchronous programming
is to post methods into a post list for later execution.
The order of executing these methods is nondeterministically serial.
A new and simple, yet powerful, model for asynchronous programming was presented in this paper. More precisely, the paper proposed a contextfree grammar and an operational semantics for asynchronous programming. One important aspect of the
proposed model is supporting posting methods with
execution priorities.
References:
[1] El-Zawawy, M.A.: Detection of probabilistic
dangling references in multi-core programs using proof-supported tools. In: Murgante, B.,
Misra, S., Carlini, M., Torre, C.M., Nguyen, H.Q., Taniar, D., Apduhan, B.O., Gervasi, O. (eds.)
ICCSA 2013, Part V. LNCS, vol. 7975, Springer,
Heidelberg (2013), pp. 516–530.
[2] M. Emmi, S. Qadeer, and Z. Rakamaric. Delaybounded scheduling. In POPL 11: Proc. 38th
ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ACM, 2011,
pp. 411–422.
[3] N. Kidd, S. Jagannathan, and J. Vitek. One stack
to run them all: Reducing concurrent analysis
to sequential analysis under priority scheduling.
In SPIN 10: Proc. 17th International Workshop
on Model Checking Software, volume 6349 of
LNCS, Springer, 2010, pp. 245–261.
[4] M. F. Atig, A. Bouajjani, and T. Touili. Analyzing asynchronous programs with preemption. In
FSTTCS 08: Proc. IARCS Annual Conference
on Foundations of Software Technology and Theoretical Computer Science, volume 2 of LIPIcs,
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2008, pp. 37–48.
[5] Michael Emmi, Akash Lal, Shaz Qadeer: Asynchronous programs with prioritized task-buffers.
SIGSOFT FSE 2012,48.
[6] K. Sen and M. Viswanathan. Model checking multithreaded programs with asynchronous
atomic methods. In CAV 06: Proc. 18th International Conference on Computer Aided Verification, volume 4144 of LNCS, Springer,
2006,pp. 300–314.
[7] R. Jhala and R. Majumdar. Interprocedural analysis of asynchronous programs. In POPL 07:
Proc. 34th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,
ACM, 2007, pp. 339–350.
[8] N. Kidd, S. Jagannathan, and J. Vitek. One stack
to run them all: Reducing concurrent analysis
to sequential analysis under priority scheduling.
In SPIN ’10: Proc. 17th International Workshop
on Model Check- ing Software, volume 6349 of
LNCS, Springer, 2010, pp. 245–261.
[9] N. Ghafari, A. J. Hu, and Z. Rakamarifc. Contextbounded translations for concurrent software: An empirical evaluation. In SPIN ’10:
Proc. 17th Interna- tional Workshop on Model
Checking Software, volume 6349 of LNCS,
Springer, 2010, pp. 227–244.
[10] S. La Torre, P. Madhusudan, and G. Parlato. Reducing context-bounded concurrent reachability
to sequential reachability. In CAV ’09: Proc. 21st
International Conference on Computer Aided
Verification, volume 5643 of LNCS, Springer,
2009, pp. 477–492.
[11] A. Lal and T. W. Reps. Reducing concurrent
analysis under a context bound to sequential
analysis. Formal Methods in System Design,
35(1), 2009, pp. 73–97.
[12] S. La Torre, P. Madhusudan, and G. Parlato. Modelchecking parameterized concurrent
programs using linear interfaces. In CAV ’10:
Proc. 22nd International Conference on Computer Aided Verification, volume 6174 of LNCS,
Springer, 2010, pp. 629–644.
M. F. Atig, A. Bouajjani, and S. Qadeer. Contextbounded analysis for concurrent programs
with dynamic creation of threads. In TACAS ’09:
Proc. 15th International Conference on Tools
and Algorithms for the Construction and Analysis of Systems, volume 5505 of LNCS, Springer,
2009, pp. 107–123.
S. Qadeer and D.Wu. KISS: Keep it simple and
sequential. In PLDI ’04: Proc. ACM SIGPLAN
Conference on Programming Language Design
and Implementation, ACM, 2004, pp. 14–24.
S. Qadeer and J. Rehof. Context-bounded model
checking of concurrent software. In TACAS ’05:
Proc. 11th International Conference on Tools
and Algorithms for the Construction and Analysis of Systems, volume 3440 of LNCS, Springer,
2005, pp 93–107.
G. Kerneis, J. Chroboczek, ContinuationPassing C, compiling threads to events through
continuations, Higher-Order and Symbolic
Computation (LISP), 24(3), 2011, pp. 239–279.
A. Holzer, L. Ziarek, K. Jayaram, P Eugster,
Putting events in context: aspects for eventbased distributed programming, AOSD, 2011,
pp. 241–252.
L. Vaseux, F. Otero, T. Castle, C. Johnson,
Event-based graphical monitoring in the
EpochX genetic programming framework,
GECCO (Companion), 2013, pp. 1309–1316.
R. Ranjan, B. Benatallah, Programming Cloud
Resource Orchestration Framework: Operations
and Research Challenges, CoRR abs/1204, 2204
,2012.
J. Aguado, M. Mendler, R. Hanxleden,
I. Fuhrmann, Grounding Synchronous Deterministic Concurrency in Sequential Programming. ESOP, 2014, pp. 229–248.
[21] G. Gori, E. Johnsen, R. Schlatte, V. Stolz,
Erlang-Style Error Recovery for Concurrent Objects with Cooperative Scheduling. ISoLA, 2014,
pp. 5–21.
[22] J. Nelson, Co-ops: concurrent algorithmic skeletons for Erlang. Erlang Workshop, 2012, pp. 61–
62.
On the importance of graph search algorithms for
DRGEP-based mechanism reduction methods
Kyle E. Niemeyera , Chih-Jen Sungb,∗
arXiv:1606.07802v1 [] 24 Jun 2016
a Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
b Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
Abstract
The importance of graph search algorithm choice to the directed relation
graph with error propagation (DRGEP) method is studied by comparing
basic and modified depth-first search, basic and R-value-based breadth-first
search (RBFS), and Dijkstra’s algorithm. By using each algorithm with
DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only
Dijkstra’s algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal
mechanisms for n-heptane covering a comprehensive range of autoignition
conditions for pressure, temperature, and equivalence ratio. Dijkstra’s algorithm combined with a coefficient scaling approach is demonstrated to
produce the most compact skeletal mechanism with a similar performance
compared to larger skeletal mechanisms resulting from the other algorithms.
The computational efficiency of each algorithm is also compared by applying
the DRGEP method with each search algorithm on the large detailed mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and
8157 reactions. Dijkstra’s algorithm implemented with a binary heap priority queue is demonstrated as the most efficient method, with a CPU cost two
orders of magnitude less than the other search algorithms.
Keywords: Mechanism reduction, Skeletal mechanism, DRG, DRGEP,
Graph search algorithm, Dijkstra’s algorithm
∗ Corresponding author. Email address: [email protected] (Chih-Jen Sung)
Preprint submitted to Combustion and Flame, June 28, 2016
1. Introduction
The directed relation graph (DRG) method for the skeletal reduction of
large reaction mechanisms, originally developed by Lu and Law [1, 2], has
been shown to be applicable to relevant transportation fuel components [3, 4].
Development of this approach has since focused on variants of the method
[5–8], but DRG with error propagation (DRGEP) in particular has received
much attention [6, 8–13]. DRGEP differs from DRG mainly in that
it takes error propagation down graph pathways into account. Further improvement of the DRGEP method is motivated by the large and continually
increasing size of detailed reaction mechanisms for liquid transportation fuels
[14]; for instance, see the recent biodiesel surrogate mechanism of Herbinet
et al. [15] that contains 3299 species and 10806 reactions.
The DRG method maps the coupling of species onto a directed graph
and finds unimportant species for removal by eliminating weak graph edges
using an error threshold, where the graph nodes represent species and graph
edge weights indicate the dependence of one species on another. Following
a simple graph search initiated at certain preselected target species (e.g.,
fuel, oxidizer, important pollutants), species unreached by the search are
considered unimportant and removed from the skeletal mechanism. DRGEP
modifies this approach slightly by considering the dependence of a target
on other species due to a certain path as the product of intermediate edge
weights and the overall dependence is the maximum of all path dependencies.
One point that has not received much attention is the choice of graph
search algorithm used to calculate this overall dependence. While the DRG
method only needs to find connected graph nodes and can therefore use any
search algorithm, caution must be taken when selecting the method used
with DRGEP. The issue of calculating the overall dependence is actually a
single-source shortest path problem [16] where the “distance” of the path
is the product of intermediate edge weights rather than the sum, and the
“shortest” path is that which has the maximum overall dependence rather
than the minimum. Search algorithms that in general do not correctly calculate the overall dependence will underestimate the importance of species
and cause premature removal from the skeletal mechanism. Further, the resulting skeletal mechanism is independent of the order of the species in the
mechanism only if a specific algorithm can correctly capture and calculate
the overall dependence of species. Therefore, the reliability of various algorithms needs to be studied by determining whether each is dependent on the
order of species in a detailed mechanism.
The efficiency of the graph search algorithm is also an important factor
due to the recent use of DRGEP in dynamic skeletal reduction approaches
[10–12]. In the worst case, DRGEP is applied at every spatial grid location and each time step to generate locally relevant skeletal mechanisms,
although as Shi et al. [12] demonstrated this can be eased by combining dynamic DRGEP-based reduction with an adaptive multi-grid chemistry model.
Regardless, the computational cost of the search algorithm increases with the
number of species and must be considered when comparing algorithms due
to the large and ever-increasing size of detailed reaction mechanisms.
Most DRGEP studies reported using algorithms based on either depth-first search (DFS) [8] or breadth-first search (BFS) [10–13], but no comparison has been performed. Here we compare such methods with Dijkstra's
algorithm, the classical solution to the single-source shortest path problem
[16, 17], in order to demonstrate the weaknesses of DFS- and BFS-based
approaches and the subsequent effectiveness and reliability of Dijkstra’s algorithm. First, by randomly shuffling the order of species in the detailed
mechanism for n-heptane of Curran et al. [18], we show that DFS- and BFS-based algorithms can generate results dependent on the order of species in
the reaction mechanism while Dijkstra’s algorithm generates consistent results regardless of species order. Second, we demonstrate that, for a given
error limit, more compact skeletal mechanisms are possible with DRGEP by
using Dijkstra’s algorithm. This is done by comparing skeletal mechanisms
generated with different search algorithms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and
equivalence ratio. Third, we compare the computational efficiency of the
various search algorithms by calculating the CPU time required to perform
the graph search on the large detailed mechanism of Westbrook et al. [19]
covering oxidation of n-alkanes from n-octane to n-hexadecane containing
2115 species and 8157 reactions.
2. Methodology
2.1. DRGEP Method
In the current work we use the DRGEP method of Pepiot-Desjardins and
Pitsch [6], described here in brief. Accurate calculation of the production of a
species A that is strongly dependent on another species B requires the presence of species B in the reaction mechanism. This dependence is expressed
with the direct interaction coefficient (DIC):
$$r_{AB} = \frac{\left|\sum_{i=1}^{n_R} \nu_{A,i}\,\omega_i\,\delta_B^i\right|}{\max(P_A, C_A)}, \qquad (1)$$

where

$$P_A = \sum_{i=1}^{n_R} \max(0, \nu_{A,i}\,\omega_i), \qquad (2)$$

$$C_A = \sum_{i=1}^{n_R} \max(0, -\nu_{A,i}\,\omega_i), \qquad (3)$$

$$\delta_B^i = \begin{cases} 1 & \text{if reaction } i \text{ involves species } B, \\ 0 & \text{otherwise,} \end{cases} \qquad (4)$$
A and B represent the species of interest (with dependency in the A → B direction meaning that A depends on B), i the ith reaction, ν_{A,i} the stoichiometric coefficient of species A in the ith reaction, ω_i the overall reaction rate of the ith reaction, and n_R the total number of reactions.
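As a concrete illustration of Eqs. (1)–(4), the following NumPy sketch (ours, not from the paper; the function name, array layout, and the zero-flux guard are assumptions) evaluates the full DIC matrix at a single sampled state:

import numpy as np

def direct_interaction_coefficients(nu, omega, involves):
    # nu:       (n_species, n_R) stoichiometric coefficients nu_{A,i}
    # omega:    (n_R,) overall reaction rates omega_i
    # involves: (n_species, n_R) boolean delta_B^i, reaction i involves B
    flux = nu * omega                        # nu_{A,i} * omega_i terms of Eq. (1)
    P = np.maximum(0.0, flux).sum(axis=1)    # production rate P_A, Eq. (2)
    C = np.maximum(0.0, -flux).sum(axis=1)   # consumption rate C_A, Eq. (3)
    denom = np.maximum(P, C)
    denom[denom == 0.0] = np.inf             # inactive species get zero DICs (our convention)
    num = np.abs(flux @ involves.T.astype(float))  # |sum_i nu_{A,i} omega_i delta_B^i|
    r = num / denom[:, None]                 # r[A, B]: dependence of A on B, Eq. (1)
    np.fill_diagonal(r, 0.0)                 # no self-dependence
    return r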
After calculating the DIC for all species pairs, a graph search is performed
starting at user-selected target species to find the dependency paths for all
species from the targets. A path-dependent interaction coefficient (PIC)
represents the error propagation through a certain path and is defined as the
product of intermediate DICs between the target T and species B through
pathway p:
$$r_{TB,p} = \prod_{j=1}^{n-1} r_{S_j S_{j+1}}, \qquad (5)$$
where n is the number of species between T and B in pathway p and Sj is
a placeholder for the intermediate species j starting at T and ending at B.
An overall interaction coefficient (OIC) is then defined as the maximum of
all PICs between the target and each species of interest:
$$R_{TB} = \max_{\text{all paths } p} \left( r_{TB,p} \right). \qquad (6)$$
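A brute-force reference implementation of Eqs. (5) and (6) (ours; exponential in the number of species and therefore only usable on toy graphs) makes the "maximum product of intermediate DICs" definition concrete:

from itertools import permutations

def oic_brute_force(r, target):
    # r: dict-of-dicts of DIC edge weights, r[a][b] = r_AB (missing edge = 0)
    species = list(r)
    R = {s: 0.0 for s in species}
    R[target] = 1.0
    others = [s for s in species if s != target]
    for k in range(1, len(others) + 1):
        for path in permutations(others, k):     # every simple path from the target
            nodes = (target,) + path
            pic = 1.0
            for a, b in zip(nodes, nodes[1:]):
                pic *= r[a].get(b, 0.0)          # PIC: product of DICs, Eq. (5)
            R[path[-1]] = max(R[path[-1]], pic)  # OIC: maximum over paths, Eq. (6)
    return R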
Pepiot-Desjardins and Pitsch [6] also proposed a coefficient scaling procedure to better relate the OICs from different points in the reaction system
evolution that we adopt here. A pseudo-production rate of a chemical element a based on the production rates of species containing a is defined
as
$$P_a = \sum_{\text{all species } S} N_{a,S} \max(0, P_S - C_S), \qquad (7)$$
where Na,S is the number of atoms a in species S and PS and CS are the
production and consumption rates of species S as given by Eqs. 2 and 3,
respectively. The scaling coefficient for element a and target species T at
time t is defined as
$$\alpha_{a,T}(t) = \frac{N_{a,T}\,|P_T - C_T|}{P_a}. \qquad (8)$$
For the set of elements {E}, the global normalized scaling coefficient for
target T at time t is
$$\alpha_T(t) = \max_{a \in \{E\}} \frac{\alpha_{a,T}(t)}{\max\limits_{\text{all time}} \alpha_{a,T}(t)}. \qquad (9)$$
Given a set of kinetics data {D} and target species {T }, the overall importance of species S to the target species set is
$$R_S = \max_{k \in \{D\}} \; \max_{\substack{T \in \{T\} \\ \text{all time, } k}} \left( \alpha_T R_{TS} \right). \qquad (10)$$
We employ the same sampling method as given by Niemeyer et al. [8],
where constant volume autoignition simulations are performed using SENKIN
[20] with CHEMKIN-III [21]. Species are considered unimportant and removed if their RS value falls below a cutoff threshold εEP , which is selected
using an iterative procedure based on a user-defined error limit for ignition
delay prediction [8]. Reactions are eliminated if any participating species are
removed.
2.2. Graph search algorithms
In this study we compare the results of DRGEP using basic DFS, modified DFS, basic BFS, R-value-based BFS (RBFS), and Dijkstra’s algorithm.
Cormen et al. [16] presented detailed discussion of and pseudocode for DFS,
BFS, and Dijkstra’s algorithm, while the modified DFS used by Niemeyer et
al. [8] and RBFS of Liang et al. [10] differ only slightly from the basic DFS
and BFS.
2.2.1. DFS-based algorithms
The DFS initiates at a root node, in this case a target species, and explores
the graph edges connecting to other nodes. The first node found is added
to a last in, first out stack, then the search moves to this node and repeats.
In this manner, the search continues deeper down the graph pathway until
it either reaches a node with no connections or all the connecting nodes
have been explored, then backtracks one position up the stack. The search
is performed separately using each target species as the root node and the
maximum OIC is stored for each species. Lu and Law first introduced the
DFS in this context for the DRG method [1].
The modified DFS used by Niemeyer et al. [8] for the DRGEP method
(but not described in detail) differs from the basic DFS in that the OIC
values for all target species are set to unity before starting the search at the
first target. The search then only repeats starting at other target species
not discovered in the initial search. The resulting OIC values combine the
dependence of all targets on the species and this prevents the use of coefficient
scaling based on individual target species activity.
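The weakness examined in Section 3.1 below can be seen in a minimal sketch of basic-DFS OIC propagation (ours, not the authors' code; the first path that discovers a node fixes its coefficient, so the output depends on the ordering of the adjacency lists):

def dfs_oic_basic(r, adj, target):
    # r[u][v]: DIC edge weights; adj[u]: neighbor list of species u
    R = {target: 1.0}
    stack = [target]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in R:               # first discovery fixes the OIC
                R[v] = R[u] * r[u][v]
                stack.append(v)
    return R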
2.2.2. BFS-based algorithms
The BFS initiates at the root node and explores all adjacent graph edges,
adding connected nodes to a first in, first out queue. After discovering all
connected nodes, the search moves to the first node in the queue and restarts
the procedure, moving to the next node in the queue after discovering all
connected nodes. Previously discovered nodes are not repeated. As with the
DFS, the search initializes at each target separately and the maximum OIC
for each species is stored.
The R-value-based BFS (RBFS), first introduced by Liang et al. [10, 11]
and also used by Shi et al. [12, 13], differs from the standard BFS in that
PICs smaller than the error threshold εEP are not explored. In other words,
the search ignores graph pathways that would lead to an OIC below the cutoff value and only allows discovery of important pathways. This increases
efficiency by avoiding exploration of unimportant pathways but causes the
RBFS to depend on the value of εEP , unlike the other search methods. As
such, the primary application of this algorithm lies in dynamic reduction
where the threshold value is known a priori, rather than general comprehensive skeletal reduction where the threshold is to be determined iteratively
based on a user-defined error limit [8].
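One plausible reading of the RBFS pruning rule in code (ours, not the implementation of Liang et al.; pathways whose running PIC falls below the cutoff eps are never enqueued, while improved values are re-explored):

from collections import deque

def rbfs_oic(r, adj, target, eps):
    R = {s: 0.0 for s in adj}
    R[target] = 1.0
    queue = deque([target])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            pic = R[u] * r[u][v]
            if pic >= eps and pic > R[v]:   # prune paths below the cutoff
                R[v] = pic
                queue.append(v)
    return R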
2.2.3. Dijkstra’s algorithm
Dijkstra’s algorithm, originally introduced by Dijkstra [17] and discussed
in detail by Cormen et al. [16], differs from DFS, BFS, and their variants
because it was specifically designed to calculate the shortest paths from a
single source node to all other nodes rather than simply search the graph
to find connected nodes. As mentioned previously, calculating OIC values
is a shortest path problem where the “shortest” path is that with the maximum product of intermediate edge weights. This is the only modification
needed to apply Dijkstra’s algorithm to the current problem. In brief, Dijkstra’s algorithm functions similarly to BFS except that it starts with the
set of graph nodes stored in a max-priority queue. The algorithm finds and
removes the node with the maximum OIC value (initially, the root node)
from the queue and explores adjacent nodes to calculate or update OIC values. When all neighboring nodes are explored, the procedure restarts at the
node with the next-highest OIC in the queue. The search completes when
the queue is empty. As with previously-discussed search algorithms, this is
performed separately for each target species as root node. See the Appendix
for pseudocode describing the algorithm as applied to the DRGEP method.
Transportation and internet routing applications routinely use Dijkstra’s
algorithm and so much effort has been focused on optimizing its performance.
A naive implementation requires a running time of O(V²) where V is the number of nodes. For sparse graphs, i.e. where the number of edges E follows E ∼ O(V²/log V), the runtime can be improved significantly to O(E log V)
by using an adjacency list to store the immediate neighbors of nodes and a
binary max-heap priority queue [16]. Using a Fibonacci heap can reduce the
runtime further to O(E + V log V ) [16, 22], although in many cases binary
heaps prove more efficient [23]. These will be compared in order to determine
the most efficient implementation for DRGEP. Briefly, a binary max-heap is
a tree where each node has at most two child nodes and the associated key
(in this case, OIC) is always greater than or equal to the keys of the child
nodes. By storing the nodes in this manner, a costly search is not required
to find the node with the highest key during the operation of Dijkstra’s
algorithm. Fibonacci heaps function similarly, but rather than a single tree,
the heaps are made up of a collection of trees with more relaxed structures
designed to avoid unnecessary operations. Detailed discussion of the heap
implementations is beyond the scope of this paper and we refer interested
readers to Cormen et al. [16].
3. Results and discussion
3.1. Reliability of graph search algorithms
In order to demonstrate the dependence of DFS- and BFS-based methods on the order of species in a reaction mechanism, all five graph search
algorithms are used with DRGEP to generate skeletal mechanisms from the
detailed mechanism for n-heptane of Curran et al. [18], containing 561 species
and 2539 reactions, where the order of species is randomly shuffled. Basic
DFS, basic BFS, RBFS, and Dijkstra’s algorithm are used both with and
without the target-based coefficient scaling while the modified DFS is not
compatible with this and so is used without it. Autoignition chemical kinetics data are sampled from initial conditions at 1000 K, 1 atm, and equivalence
ratios of 0.5–1.5. Oxygen, nitrogen, n-heptane, and the hydrogen radical are
selected as target species.
Figure 1 shows the ignition delay error of skeletal mechanisms at varying
levels of detail generated by DRGEP using RBFS and Dijkstra’s algorithm
with coefficient scaling and the modified DFS. Basic DFS and BFS demonstrate similar dependence on species order to the modified DFS and thus are
omitted from Fig. 1 for clarity, but the skeletal mechanism sizes from each
may be different as will be shown. Similarly, the coefficient scaling comparison is also omitted because its application does not affect dependence on
species order, although the resulting skeletal mechanism sizes are different.
Comparison of results from the original mechanism and five mechanisms with
randomly shuffled species illustrates the dependence of the modified DFS on
the order of species in the mechanism while RBFS and Dijkstra’s algorithm
produce consistent results regardless of species order. This is because DFS
and BFS explore graphs based on the order of nodes while Dijkstra’s algorithm follows the strongest path independent of order [16]. In addition, Fig.
1 shows that both Dijkstra’s algorithm and RBFS produce smaller skeletal
mechanisms before the error becomes unacceptably large.
The dependence of DFS, modified DFS, and BFS on species order stems
from the fact that these algorithms do not in general calculate the correct OIC
for every species and as such can underestimate the importance of species,
causing unwarranted removal. RBFS produces the same results as Dijkstra’s
algorithm because it prevents exploration of unimportant paths, i.e. paths
with PICs (see Eq. 5) less than the threshold, and therefore only discovers
paths that lead to an OIC greater than the threshold value. While this value
may be smaller than the correct OIC as calculated by Dijkstra’s algorithm,
the species is nonetheless considered important and retained in the skeletal
mechanism. This subtle error could be significant, though, if the DRGEP
method is followed by a further reduction stage such as sensitivity analysis
that depends on the OIC values [8].
3.2. Effectiveness of graph search algorithms
Skeletal mechanisms for n-heptane are generated covering a comprehensive range of conditions using basic DFS, modified DFS, basic BFS, and
Dijkstra’s algorithm. Basic DFS, BFS, and Dijkstra’s algorithm are used
with and without the coefficient scaling in order to determine its effect as
well. Autoignition chemical kinetics data are sampled from initial conditions
at 600–1600 K, 1–20 atm, and equivalence ratios of 0.5–1.5. Oxygen, nitrogen, n-heptane, and the hydrogen radical are selected as target species, and
the error limit in ignition delay prediction is 30%. RBFS is not compared
here because it is not suited for a comprehensive skeletal reduction due to
its dependence on threshold value; additionally, based on the results in the
previous section we assume the resulting skeletal mechanism would match
that from Dijkstra’s algorithm.
The results using the original mechanism of Curran et al. [18] are summarized in Table 1. All search methods lead to skeletal mechanisms with similar
performance, but Dijkstra’s algorithm with coefficient scaling produces the
most compact skeletal mechanism. BFS and modified DFS produce similar
results while basic DFS is unable to generate a comparable skeletal mechanism for the given error limit. This weakness is most likely due to the fact
that basic DFS finds only the first path from target to species, which typically contains many intermediate species and severely underestimates the
OICs for some important species. The modified DFS performs better by
inserting the other targets into the pathways and therefore increasing the
OIC values for important species, while BFS finds the path with the shortest
number of intermediate nodes [16].
Table 1 also demonstrates the need for coefficient scaling. Both basic DFS
and Dijkstra’s algorithm generate more compact skeletal mechanisms with
the scaling, while BFS actually produces a slightly larger skeletal mechanism.
Without scaling, Dijkstra’s algorithm is unable to produce a more compact
skeletal mechanism than the other methods, despite its ability to generate
results independent of species order in the parent mechanism.
3.3. Efficiency of graph search algorithms
The computational costs of DFS, modified DFS, basic BFS, RBFS, and
Dijkstra’s algorithm are compared by applying the DRGEP method to the
detailed mechanism of Westbrook et al. [19] for n-alkanes covering n-octane
through n-hexadecane, which contains 2115 species and 8157 reactions. Kinetics data are sampled from constant volume autoignition of n-decane with
initial conditions covering 600–1600 K, 1–20 atm, and equivalence ratios of
0.5–1.5. The efficiency improvements available to Dijkstra’s algorithm including use of adjacency list, binary heap, and Fibonacci heap are also compared
to the naive implementation.
The average computational costs of DFS, modified DFS, BFS, Dijkstra’s
algorithm, and its improvements are listed in Table 2. In order to perform
a fair comparison, a version of Dijkstra’s algorithm modified to similarly depend on the threshold value is compared with RBFS. Figure 2 shows the
average computational cost of RBFS, Dijkstra’s algorithm, and its improvements as a function of threshold value. The most important result shown
in Table 2 and Fig. 2 is that Dijkstra’s algorithm implemented with binary
heaps is faster than all other search algorithms, and is in fact more efficient
than the same algorithm implemented with Fibonacci heaps even though the
theoretical time limit with binary heaps is higher. This result is consistent
with literature comparisons for sparse graphs [24, 25].
4. Concluding remarks
Graph search algorithms used in the DRGEP method for skeletal mechanism reduction are compared. Basic DFS, basic BFS, modified DFS, RBFS,
and Dijkstra’s algorithm are implemented in DRGEP and used to generate (1) skeletal mechanisms covering a limited range of conditions using a
randomly shuffled detailed mechanism for n-heptane to determine the dependence of results on species order and (2) skeletal mechanisms covering a
comprehensive range of conditions to determine the effectiveness of each algorithm. The RBFS algorithm is not used to generate a comprehensive skeletal
mechanism because it is not suitable for use when the cutoff threshold is not
known a priori. Both Dijkstra’s algorithm and RBFS are able to generate
consistent results independent of species order, and Dijkstra’s algorithm used
with target-based coefficient scaling generates a more compact skeletal mechanism than the other methods. Even with the improved search algorithm,
however, the size of the resulting skeletal mechanism (131 species and 651
reactions) is larger than that of DRGEP with sensitivity analysis (108 species
and 406 reactions) [8], suggesting that the post-DRGEP sensitivity analysis
is still required.
In addition to the reliability and effectiveness of the graph search algorithms, the efficiency is also compared due to its importance to dynamic
DRGEP approaches. Efficiency improvements available to Dijkstra’s algorithm are also implemented and compared to the other search algorithms.
Dijkstra’s algorithm with a binary heap priority queue runs two orders of
magnitude faster than the other search methods and is the most efficient implementation of Dijkstra’s algorithm. As such, this approach is recommended
for use in dynamic skeletal reduction using DRGEP. Dynamic approaches
that avoid entirely repeating the search by updating graph edge values have
also been developed [23, 26] and will be the focus of future work.
Acknowledgments
This work was supported by the National Science Foundation under Grant
No. 0932559 and the Department of Defense through the National Defense
Science and Engineering Graduate Fellowship program.
Appendix
The following pseudocode describes Dijkstra’s algorithm for use in the
DRGEP method to calculate the OICs for all species relative to a target
species T using the set of DIC values rAB , adapted from Cormen et al.
[16]. The adjacency list adj contains the list of adjacent (directly connected)
species in the directed graph. Operations are left in general form to be
applicable to other implementations of Dijkstra’s algorithm using heaps as
well as the basic version. The procedure MAKEQ (Q) generates the max-priority queue Q, MAXQ (R, Q) returns the node with the highest OIC
value, REMQ (u, Q) removes the node u from Q, and UPDATE (R, v, Rtmp )
increases the OIC value for node v to Rtmp .
subroutine Dijkstra(rAB, adj, T)
    R ← zeros
    R(T) ← 1.0
    call MAKEQ(Q)
    while Q ≠ ∅ do
        u ← MAXQ(R, Q)
        call REMQ(u, Q)
        for each node v ∈ adj(u) do
            Rtmp ← R(u) · r_uv
            if Rtmp > R(v) then
                call UPDATE(R, v, Rtmp)
            end if
        end for
    end while
    return R
end subroutine Dijkstra
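For reference, a runnable Python port of the above (ours, not the authors' implementation) can rely on the standard heapq module; since heapq provides a min-heap, OIC values are negated to obtain max-priority behaviour, and stale queue entries are skipped instead of performing an explicit UPDATE:

import heapq

def dijkstra_oic(r, adj, target):
    # r[u][v]: DIC edge weights in [0, 1]; adj[u]: neighbors of species u
    R = {s: 0.0 for s in adj}
    R[target] = 1.0
    queue = [(-1.0, target)]
    finalized = set()
    while queue:
        _, u = heapq.heappop(queue)        # node with the highest tentative OIC
        if u in finalized:
            continue                       # stale entry left by a later improvement
        finalized.add(u)
        for v in adj[u]:
            R_tmp = R[u] * r[u][v]         # extend the best path to u by edge u -> v
            if R_tmp > R[v]:
                R[v] = R_tmp
                heapq.heappush(queue, (-R_tmp, v))
    return R

The greedy finalization is valid here because every edge weight lies in [0, 1], so extending a path can never increase its product.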
References
[1] T. Lu, C. K. Law, Proc. Combust. Inst. 30 (2005) 1333–1341.
[2] T. Lu, C. K. Law, Combust. Flame 146 (2006) 472–483.
[3] T. Lu, C. K. Law, Combust. Flame 144 (2006) 24–36.
[4] T. Lu, C. K. Law, Combust. Flame 154 (2008) 153–163.
[5] X. L. Zheng, T. Lu, C. K. Law, Proc. Combust. Inst. 31 (2007) 367–375.
[6] P. Pepiot-Desjardins, H. Pitsch, Combust. Flame 154 (2008) 67–81.
[7] W. Sun, Z. Chen, X. Gou, Y. Ju, Combust. Flame 157 (2010) 1298–1307.
[8] K. E. Niemeyer, C. J. Sung, M. P. Raju, Combust. Flame 157 (2010)
1760–1770.
[9] I. G. Zsély, T. Nagy, J. M. Simmie, H. J. Curran, in: 4th European
Combustion Meeting, 810045, 2009.
[10] L. Liang, J. Stevens, J. T. Farrell, Proc. Combust. Inst. 32 (2009) 527–
534.
[11] L. Liang, J. G. Stevens, S. Raman, J. T. Farrell, Combust. Flame 156
(2009) 1493–1502.
[12] Y. Shi, L. Liang, H.-W. Ge, R. D. Reitz, Combust. Theor. Model. 14
(2010) 69–89.
[13] Y. Shi, H.-W. Ge, J. L. Brakora, R. D. Reitz, Energy Fuels 24 (2010)
1646–1654.
[14] T. Lu, C. K. Law, Prog. Energy Comb. Sci. 35 (2009) 192–215.
[15] O. Herbinet, W. J. Pitz, C. K. Westbrook, Combust. Flame 157 (2010)
893–908.
[16] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to
Algorithms, MIT Press, Cambridge, MA, 2nd edition, 2001.
[17] E. W. Dijkstra, Numer. Math. 1 (1959) 269–271.
[18] H. Curran, P. Gaffuri, W. Pitz, C. K. Westbrook, Combust. Flame 114
(1998) 149–177.
[19] C. K. Westbrook, W. J. Pitz, O. Herbinet, H. J. Curran, E. J. Silke,
Combust. Flame 156 (2009) 181–199.
[20] A. E. Lutz, R. J. Kee, J. A. Miller, SENKIN: A FORTRAN program
for predicting homogeneous gas phase chemical kinetics with sensitivity
analysis, Sandia National Laboratories Report No. SAND 87-8248, 1988.
[21] R. J. Kee, F. M. Rupley, E. Meeks, J. A. Miller, CHEMKIN-III:
A FORTRAN chemical kinetics package for the analysis of gas-phase
chemical and plasma kinetics, Sandia National Laboratories Report No.
SAND 96-8216, 1996.
[22] M. L. Fredman, R. E. Tarjan, J. ACM 34 (1987) 596–615.
[23] D. Wagner, T. Willhalm, Lect. Notes Comput. Sci. 4393 (2007) 23–36.
[24] B. Cherkassky, A. Goldberg, T. Radzik, Math. Program. 73 (1996) 129–
174.
[25] A. V. Goldberg, R. E. Tarjan, Expected performance of Dijkstra’s shortest path algorithm, Technical Report TR-530-96, NEC Research Institute, Princeton University, 1996.
[26] L. Roditty, U. Zwick, Lect. Notes Comput. Sci. 3221 (2004) 580–591.
Algorithm                  # Species    # Reactions    Max. Error
DFS (no scaling)           461          2304           19%
DFS (scaling)              449          2267           19%
mod. DFS                   173          868            28%
BFS (no scaling)           180          891            29%
BFS (scaling)              207          921            25%
Dijkstra (no scaling)      178          986            23%
Dijkstra (scaling)         131          651            17%
Table 1: Comparison of n-heptane skeletal mechanism sizes generated by the DRGEP
method using DFS, BFS, and Dijkstra’s algorithm with and without coefficient scaling
and modified DFS. The original mechanism contains 561 species and 2539 reactions [18].
Algorithm                    Mean cost (sec)    Cost normalized by Dijkstra
DFS                          1.48 × 10⁻¹        2.20
mod. DFS                     1.46 × 10⁻¹        2.18
BFS                          8.30 × 10⁻²        1.24
Dijkstra (naive)             6.72 × 10⁻²        1.00
Dijkstra (adj)               1.18 × 10⁻²        0.176
Dijkstra (binary heap)       1.5 × 10⁻³         0.028
Dijkstra (Fibonacci heap)    3.4 × 10⁻³         0.051
Table 2: Comparison of average CPU time costs using DFS, modified DFS, BFS, and
Dijkstra’s algorithm, which includes the naive implementation, use of adjacency list, binary heaps, and Fibonacci heaps. The kinetics data used are generated from n-decane
autoignition over a range of initial temperatures and pressures, and at varying equivalence
ratios, from a detailed mechanism consisting of 2115 species and 8157 reactions [19].
[Figure 1: plot; axes: Error (%) versus Number of species. Curves: modified DFS (original mechanism and five random mechanisms); Dijkstra's algorithm and RBFS (original + five random mechanisms).]
Figure 1: Error in autoignition delay prediction as a function of number of species for
skeletal mechanisms of n-heptane generated by DRGEP using Dijkstra’s algorithm and
RBFS with coefficient scaling and modified DFS. The solid lines indicate the results using
the original detailed mechanism while the dashed lines indicate the results obtained from
five versions with randomly-shuffled species. All 12 results using Dijkstra’s algorithm and
RBFS coincide.
[Figure 2: log-log plot; axes: CPU Time (sec) versus Threshold value. Curves: RBFS, Dijkstra, Dijkstra (adj), Dijkstra (binary), Dijkstra (Fibonacci).]
Figure 2: Comparison of average CPU time cost as a function of threshold value using
RBFS and Dijkstra’s algorithm, which includes the naive implementation, use of adjacency
list, binary heaps, and Fibonacci heaps. The kinetics data used are generated from ndecane autoignition over a range of initial temperatures and pressures, and at varying
equivalence ratios, from a detailed mechanism consisting of 2115 species and 8157 reactions
[19].
arXiv:1212.4590v1 [math.AP] 19 Dec 2012
RELATIVE PARAMETRIZATION OF
LINEAR MULTIDIMENSIONAL SYSTEMS
J.F. Pommaret
CERMICS, Ecole Nationale des Ponts et Chaussées,
6/8 Av. Blaise Pascal, 77455 Marne-la-Vallée Cedex 02, France
E-mail: [email protected], [email protected]
URL: http://cermics.enpc.fr/∼pommaret/home.html
ABSTRACT :
In the last chapter of his book ”The Algebraic Theory of Modular Systems ” published in 1916,
F. S. Macaulay developed specific techniques for dealing with ”unmixed polynomial ideals ” by
introducing what he called ”inverse systems ”. The purpose of this paper is to extend such a
point of view to differential modules defined by linear multidimensional systems, that is by linear
systems of ordinary differential (OD) or partial differential (PD) equations of any order, with any
number of independent variables, any number of unknowns and even with variable coefficients.
The first and main idea is to replace unmixed polynomial ideals by pure differential modules.
The second idea is to notice that a module is 0-pure if and only if it is torsion-free and thus if and
only if it admits an ” absolute parametrization ” by means of arbitrary potential like functions, or,
equivalently, if it can be embedded into a free module by means of an ” absolute localization ”.
The third idea is to refer to a difficult theorem of algebraic analysis saying that an r-pure module
can be embedded into a module of projective dimension r, that is a module admitting a projective
resolution with exactly r operators.
The fourth and final idea is to establish a link between the use of extension modules for such a
purpose and specific formal properties of the underlying multidimensional system through the use
of involution and a ”relative localization ” leading to a ” relative parametrization ”.
The paper is written in a rather effective self-contained way and we provide many explicit examples
that should become test examples for a future use of computer algebra.
KEY WORDS :
Unmixed ideal, algebraic analysis, homological algebra, extension module, projective dimension,
torsion-free module, pure module, characteristic variety, formal integrability, involution, Spencer
operator, inverse system.
1) INTRODUCTION :
Let D = K[d1 , ..., dn ] = K[d] be the ring of differential operators with coefficients in a differential field K with n commuting derivations ∂1 , ..., ∂n and commutation relations di a = adi +∂i a, ∀a ∈
K. If y 1 , ..., y m are m differential indeterminates, we may identify Dy 1 + ... + Dy m = Dy with
Dm and consider the finitely presented left differential module M with presentation Dp → Dm →
M → 0 determined by a given linear multidimensional system with n independent variables,
m unknowns and p equations. Applying the functor homD (•, D), we get the exact sequence
0 → homD (M, D) → Dm → Dp −→ N −→ 0 of right differential modules that can be transformed
by a side-changing functor to an exact sequence of finitely generated left differential modules.
This new presentation corresponds to the formal adjoint ad(D) of the linear differential operator
D determined by the initial presentation but now with p unknowns and m equations, obtaining
therefore a new finitely generated left differential module N and we may consider homD (M, D)
as the module of equations of the compatibility conditions (CC) of ad(D), a result not evident
at all (Compare to [24]). Using now a maximum free submodule 0 −→ Dl −→ homD (M, D)
and repeating this standard procedure while using the well known fact that ad(ad(D)) = D,
we obtain therefore an embedding 0 → homD (homD (M, D), D) → Dl of left differential modules for a certain integer 1 ≤ l < m because K is a field and thus D is a noetherian bimodule over itself, a result leading to l = rkD (homD (M, D)) = rkD (M ) < m as in ([19], p
178,201) (See section 3 for the definition of the differential rank rk_D). Now, the kernel of the map
ǫ : M → homD (homD (M, D), D) : m → ǫ(m)(f ) = f (m), ∀f ∈ homD (M, D) is the torsion submodule t(M ) ⊆ M and ǫ is injective if and only if M is torsion-free, that is t(M ) = 0. In that case,
we obtain by composition an embedding 0 → M → Dl of M into a free module that can also be
obtained by localization if we introduce the ring of fractions S −1 D = DS −1 when S = D − {0}.
This result is quite important for applications as it provides a (minimum) parametrization of the
linear differential operator D and amounts to the controllability of a classical control system when
n = 1 ([24], p 258). This parametrization will be called an ”absolute parametrization ” as it only
involves arbitrary ”potential-like ” functions (See [1], [9], [18], [19], [20], [24], [25] and [32] for more
details and examples, in particular that of Einstein equations).
The purpose of this paper is to extend such a result to a much more general situation, that
is when M is not torsion-free, by using unexpected results first found by F.S. Macaulay in 1916
through his study of ”inverse systems ” for ”unmixed polynomial ideals ”.
For this we define the purity filtration :
0 = tn (M ) ⊆ tn−1 (M ) ⊆ ... ⊆ t1 (M ) ⊆ t0 (M ) = t(M ) ⊆ M
by introducing tr (M ) = {m ∈ M | cd(Dm) > r} where the codimension of Dm is n minus
the dimension of the characteristic variety determined by m in the corresponding system for one
unknown. The module M is said to be r-pure if tr (M ) = 0, tr−1 (M ) = M or, equivalently, if
cd(M ) = cd(N ) = r, ∀N ⊂ M and a torsion-free module is a 0-pure module. Moreover, when
K = k = cst(K) is a field of constants and m = 1, a pure module is unmixed in the sense of
Macaulay, that is defined by an ideal having an equidimensional primary decomposition.
Example 1.1 : As an elementary example with K = k = Q, m = 1, n = 2, p = 2, the differential
module defined by d22 y = 0, d12 y = 0 is not pure because z ′ = d2 y satisfies d2 z ′ = 0, d1 z ′ = 0 while
z” = d1 y only satisfies d2 z” = 0 and ((χ2 )2 , χ1 χ2 ) = (χ1 ) ∩ (χ1 , χ2 )2 . We obtain therefore the
purity filtration 0 = t_2(M) ⊂ t_1(M) ⊂ t_0(M) = t(M) = M with strict inclusions as 0 ≠ z′ ∈ t_1(M) while z″ ∈ t_0(M) but z″ ∉ t_1(M).
From the few (difficult) references ([1],[9],[15],[18],[27]) dealing with extension modules ext^r(M) = ext^r_D(M, D) and purity in the framework of algebraic analysis, it is known that M is r-pure if and only if there is an embedding 0 → M → ext^r_D(ext^r_D(M, D), D). Indeed, the case r = 0 is exactly the one already considered because ext^0_D(M, D) = hom_D(M, D) and the ker/coker exact sequence:

$$0 \longrightarrow ext^1(N) \longrightarrow M \longrightarrow ext^0(ext^0(M)) \longrightarrow ext^2(N) \longrightarrow 0$$
allows to test the torsion-free property of M in actual practice by using the double-duality formula
t(M ) = ext1 (N ) as in ([19]). Also, when r ≥ 1, a similar construction that we shall recall and
illustrate in section 4 provides a finitely generated module L with projective dimension pdD (L) = r,
that is a minimum resolution of L with only r operators, and an embedding 0 → M → L that
allows to exhibit a relative parametrization of D because now the parametrizing potential-like functions are no longer arbitrary but must only depend on arbitrary functions of n − r variables.
Example 1.2 : With K = k = Q, m = 2, n = 3, r = 1, the differential module M defined by
the involutive system Φ1 ≡ d3 y 1 = 0, Φ2 ≡ d3 y 2 = 0, Φ3 ≡ d2 y 1 − d1 y 2 = 0 is 1-pure and admits
the resolution 0 −→ D −→ D3 −→ D2 −→ M −→ 0. The differential module L defined by
the system d3 z = 0 is also 1-pure and admits the resolution 0 −→ D −→ D −→ L −→ 0. We
finally obtain the relative parametrization y 1 = d1 z, y 2 = d2 z providing the strict inclusion M ⊂ L.
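Indeed, since derivatives commute and d_3 z = 0, one checks directly that this parametrization solves the system:

$$d_3 y^1 = d_1 d_3 z = 0, \qquad d_3 y^2 = d_2 d_3 z = 0, \qquad d_2 y^1 - d_1 y^2 = d_2 d_1 z - d_1 d_2 z = 0.$$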
In a simple way, this result can be considered as a measure of how far a module is from being
projective, recalling that a module P is projective if there exists another (projective) module Q
and a free module F such that P ⊕ Q ≃ F .
We adapt the ”relative localization” technique used by Macaulay and combine it with the ”involution” technique used in the formal theory of systems of partial differential equations in order
to obtain an explicit procedure for determining L when M is given. Many examples will illustrate these new methods that avoid the previous abstract arguments based on ”double duality”.
In particular, original non-commutative examples will also be presented. However, we point out
the fact that the latter method can be adapted without any change to the case of systems with
variable coefficients as it only depends on the use of adjoint operators but the following example
will explain by itself the type of difficulty involved.
Example 1.3 : Starting now with K = Q(x1 , x2 ), m = 2, n = 3, r = 1, the new differential module
M defined by d3 y 1 = 0, d3 y 2 = 0, d2 y 1 − d1 y 2 + x2 y 2 = 0 is also 1-pure and the differential module
L is again defined by d3 z = 0 as in the previous example. However we obtain the totally different
relative parametrization y 1 = d12 z − x2 d2 z + z, y 2 = d22 z providing the strict inclusion M ⊂ L.
More generally, we may consider a constant parameter a ∈ k = Q and consider the new system
d3 y^1 = 0, d3 y^2 = 0, d2 y^1 − d1 y^2 + a x_2 y^2 = 0 depending on a. For a = 0 we recover the case of the previous example and we let the reader wonder why the situation only changes when a ≠ 0.
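For the parametrization of this example, a direct check (using d_3 z = 0 and the Leibniz rule d_2(x_2 d_2 z) = x_2 d_{22} z + d_2 z) gives:

$$d_2 y^1 - d_1 y^2 + x_2 y^2 = (d_{122} z - x_2 d_{22} z - d_2 z + d_2 z) - d_{122} z + x_2 d_{22} z = 0,$$

while d_3 y^1 = d_3 y^2 = 0 follows from d_3 z = 0.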
The content of the paper is just following the introduction.
In section 2 we recall the definitions and results from the formal theory of systems of OD/PD
equations that will be crucially used in the sequel. We place a particular emphasis on the definition
of involution and the way to introduce the Spencer operator in this framework. We also study the
possibility and difficulty to use computer algebra in this framework.
In section 3 we recall the basic tools needed from module theory and homological algebra in
a way adapted to our purpose, in particular the definition of the extension modules, and provide
a few of their properties which, though well known by specialists of algebraic analysis, cannot be
found easily in the literature. Meanwhile, we provide a few links with the preceding section which
are not so well known. Many explicit examples will illustrate the main concepts in the commutative
(constant field k) and the non-commutative (differential field K) framework.
In section 4 we shall recall the proof of the theorem already quoted showing how to embed an
r-pure module M into another module L with projective dimension equal to r. We shall provide
for the first time explicit computations of this result in order to point out the difficulty encountered
in such a procedure as a motivation for avoiding it.
In section 5 we extend the work of Macaulay, showing why only pure modules can fit with
relative localization in a coherent way with what happens for torsion-free modules. Meanwhile, we
shall extend for the first time this work to the non-commutative framework, showing in particular
that the operator introduced by Macaulay ([11], §60) for studying inverse systems is nothing else
than the Spencer operator. Many explicit examples, including highly non-trivial ones provided by
Macaulay himself, will be fully treated in such a way that any engineer, even with a poor knowledge of homological algebra, will nevertheless become intuitively able to understand and apply
these new techniques without reading the previous sections, just comparing to the way the same
examples have been treated in section 4 by means of another approach.
2) TOOLS FROM SYSTEM THEORY :
If X is a manifold of dimension n with local coordinates (x) = (x1 , ...xn ), we denote as usual by
T = T (X) the tangent bundle of X, by T ∗ = T ∗ (X) the cotangent bundle, by ∧r T ∗ the bundle of
r-forms and by Sq T ∗ the bundle of q-symmetric tensors. More generally, let E be a vector bundle
over X, that is (roughly) a manifold with local coordinates (xi , y k ) for i = 1, ..., n and k = 1, ..., m
simply denoted by (x, y), projection π : E → X : (x, y) → (x) and changes of local coordinates
x̄ = ϕ(x), ȳ = A(x)y. If E and F are two vector bundles over X with respective local coordinates
(x, y) and (x, z), we denote by E×X F the fibered product of E and F over X as the new vector
bundle over X with local coordinates (x, y, z). We denote by f : X → E : (x) → (x, y = f (x)) a
global section of E, that is a map such that π ◦ f = idX but local sections over an open set U ⊂ X
may also be considered when needed. Under a change of coordinates, a section transforms like
f¯(ϕ(x)) = A(x)f (x) and the derivatives transform like:
$$\frac{\partial \bar{f}^l}{\partial \bar{x}^r}(\varphi(x))\,\partial_i \varphi^r(x) = (\partial_i A^l_k(x)) f^k(x) + A^l_k(x)\,\partial_i f^k(x)$$

We may introduce new coordinates (x^i, y^k, y^k_i) transforming like:

$$\bar{y}^l_r\,\partial_i \varphi^r(x) = (\partial_i A^l_k(x)) y^k + A^l_k(x)\,y^k_i$$

We shall denote by J_q(E) the q-jet bundle of E with local coordinates (x^i, y^k, y^k_i, y^k_{ij}, ...) = (x, y_q) called jet coordinates and sections f_q : (x) → (x, f^k(x), f^k_i(x), f^k_{ij}(x), ...) = (x, f_q(x)) transforming like the sections j_q(f) : (x) → (x, f^k(x), ∂_i f^k(x), ∂_{ij} f^k(x), ...) = (x, j_q(f)(x)) where both f_q and j_q(f) are over the section f of E. Of course J_q(E) is a vector bundle over X with projection π_q while J_{q+r}(E) is a vector bundle over J_q(E) with projection π_q^{q+r}, ∀r ≥ 0.
DEFINITION 2.1: A linear system of order q on E is a vector sub-bundle Rq ⊂ Jq (E) and a
solution of Rq is a section f of E such that jq (f ) is a section of Rq .
Let µ = (µ1 , ..., µn ) be a multi-index with length |µ| = µ1 + ... + µn , class i if µ1 = ... = µi−1 =
0, µ_i ≠ 0 and µ + 1_i = (µ_1, ..., µ_{i−1}, µ_i + 1, µ_{i+1}, ..., µ_n). We set y_q = {y^k_µ | 1 ≤ k ≤ m, 0 ≤ |µ| ≤ q}
with yµk = y k when |µ| = 0. If E is a vector bundle over X with local coordinates (xi , y k ) for
i = 1, ..., n and k = 1, ..., m, we denote by Jq (E) the q-jet bundle of E with local coordinates
simply denoted by (x, yq ) and sections fq : (x) → (x, f k (x), fik (x), fijk (x), ...) transforming like
the section jq (f ) : (x) → (x, f k (x), ∂i f k (x), ∂ij f k (x), ...) when f is an arbitrary section of E.
Then both fq ∈ Jq (E) and jq (f ) ∈ Jq (E) are over f ∈ E and the Spencer operator just allows
to distinguish them by introducing a kind of ”difference” through the operator D : Jq+1 (E) →
T^* ⊗ J_q(E) : f_{q+1} → j_1(f_q) − f_{q+1} with local components (∂_i f^k(x) − f^k_i(x), ∂_i f^k_j(x) − f^k_{ij}(x), ...) and more generally (Df_{q+1})^k_{µ,i}(x) = ∂_i f^k_µ(x) − f^k_{µ+1_i}(x). In a symbolic way, when changes of
coordinates are not involved, it is sometimes useful to write down the components of D in the
form di = ∂i − δi and the restriction of D to the kernel Sq+1 T ∗ ⊗ E of the canonical projection
πqq+1 : Jq+1 (E) → Jq (E) is minus the Spencer map δ = dxi ∧ δi : Sq+1 T ∗ ⊗ E → T ∗ ⊗ Sq T ∗ ⊗ E.
The kernel of D is made by sections such that f_{q+1} = j_1(f_q) = j_2(f_{q−1}) = ... = j_{q+1}(f). Finally, if R_q ⊂ J_q(E) is a system of order q on E locally defined by linear equations Φ^τ(x, y_q) ≡ a^{τµ}_k(x) y^k_µ = 0 and local coordinates (x, z) for the parametric jets up to order q, the r-prolongation R_{q+r} = ρ_r(R_q) = J_r(R_q) ∩ J_{q+r}(E) ⊂ J_r(J_q(E)) is locally defined when r = 1 by the linear equations Φ^τ(x, y_q) = 0, d_i Φ^τ(x, y_{q+1}) ≡ a^{τµ}_k(x) y^k_{µ+1_i} + ∂_i a^{τµ}_k(x) y^k_µ = 0 and has symbol g_{q+r} = R_{q+r} ∩ S_{q+r}T^* ⊗ E ⊂ J_{q+r}(E) if one looks at the top order terms. If f_{q+1} ∈ R_{q+1} is over f_q ∈ R_q, differentiating the identity a^{τµ}_k(x) f^k_µ(x) ≡ 0 with respect to x^i and subtracting the identity a^{τµ}_k(x) f^k_{µ+1_i}(x) + ∂_i a^{τµ}_k(x) f^k_µ(x) ≡ 0, we obtain the identity a^{τµ}_k(x)(∂_i f^k_µ(x) − f^k_{µ+1_i}(x)) ≡ 0 and thus the restriction D : R_{q+1} → T^* ⊗ R_q ([17],[18],[30]).
DEFINITION 2.2: R_q is said to be formally integrable when the restriction π^{q+r+1}_{q+r} : R_{q+r+1} → R_{q+r} is an epimorphism ∀r ≥ 0 or, equivalently, when all the equations of order q + r are obtained by r prolongations only, ∀r ≥ 0. In that case, R_{q+1} ⊂ J_1(R_q) is a canonical equivalent formally integrable first order system on R_q with no zero order equations, called the Spencer form.
Finding an intrinsic test has been achieved by D.C. Spencer in 1970 ([30]) along coordinate
dependent lines sketched by M. Janet in 1920 ([7]) and W. Gröbner in 1940 ([4],[6]). The key
ingredient, missing in the old approach, is provided by the following definition.
Let T ∗ be the cotangent vector bundle of 1-forms on X and ∧s T ∗ be the vector bundle of s-forms
on X with usual bases {dxI = dxi1 ∧ ... ∧ dxis } where we have set I = (i1 < ... < is ). Moreover, introducing the exterior derivative d : ∧s T ∗ −→ ∧s+1 T ∗ : ω = ωI (x)dxI −→ dω = ∂i ωI (x)dxi ∧ dxI ,
we have d2 = d ◦ d = 0 and may introduce the Poincaré sequence:
$$\wedge^0 T^* \xrightarrow{d} \wedge^1 T^* \xrightarrow{d} \wedge^2 T^* \xrightarrow{d} \cdots \xrightarrow{d} \wedge^n T^* \longrightarrow 0$$
PROPOSITION 2.3: There exists a map δ : ∧^s T^* ⊗ S_{q+1}T^* ⊗ E → ∧^{s+1} T^* ⊗ S_q T^* ⊗ E which restricts to δ : ∧^s T^* ⊗ g_{q+1} → ∧^{s+1} T^* ⊗ g_q and δ² = δ ∘ δ = 0.
Proof: Let us introduce the family of s-forms ω = {ω^k_µ = v^k_{µ,I} dx^I} and set (δω)^k_µ = dx^i ∧ ω^k_{µ+1_i}. We obtain at once (δ²ω)^k_µ = dx^i ∧ dx^j ∧ ω^k_{µ+1_i+1_j} = 0.
Q.E.D.
The kernel of each δ in the first case is equal to the image of the preceding δ but this may no
longer be true in the restricted case and we set:
DEFINITION 2.4: We denote by H^s_{q+r}(g_q) the cohomology at ∧^s T^* ⊗ g_{q+r} of the restricted δ-sequence which only depends on g_q. The symbol g_q is said to be s-acyclic if H^1_{q+r} = ... = H^s_{q+r} = 0, ∀r ≥ 0, involutive if it is n-acyclic and finite type if g_{q+r} = 0 becomes trivially involutive for r large enough.
DEFINITION 2.5: R_q is said to be involutive when it is formally integrable and its symbol g_q is involutive, that is to say all the sequences ... −δ→ ∧^s T^* ⊗ g_{q+r} −δ→ ... are exact ∀ 0 ≤ s ≤ n, ∀r ≥ 0.
Equivalently, the following procedure, where one may have to change linearly the independent
variables if necessary, is the heart towards the next effective definition of involution. It is intrinsic
even though it must be checked in a particular coordinate system called δ-regular ([17],[18],[29])
and is particularly simple for first order systems without zero order equations.
• Equations of class n: Solve the maximum number βqn of equations with respect to the jets of
order q and class n. Then call (x1 , ..., xn ) multiplicative variables.
− − − − − − − − − − − − − − −−
• Equations of class i ≥ 1: Solve the maximum number βqi of remaining equations with respect
to the jets of order q and class i. Then call (x1 , ..., xi ) multiplicative variables and (xi+1 , ..., xn )
non-multiplicative variables.
−−−−−−−−−−−−−−−−−
• Remaining equations of order ≤ q − 1: Call (x^1, ..., x^n) non-multiplicative variables.
In actual practice, we shall use a multiplicative board where the multiplicative ”variables” are represented by their index in upper left position while the non-multiplicative variables are represented
by dots in lower right position.
DEFINITION 2.6: A system of PD equations is said to be involutive if its first prolongation
can be achieved by prolonging its equations only with respect to the corresponding multiplicative
(q+n−i−1)!
−βqi for i = 1, ..., n and
variables. In that case, we may introduce the characters αiq = m (q−1)!((n−i)!
we have dim(gq+1 ) = α1q + ... + αnq . Moreover, one can exhibit the Hilbert polynomial dim(Rq+r )
in r with leading term (α/d!)rd with d ≤ n when α is the smallest non-zero character in the case
of an involutive symbol. Such a prolongation allows to compute in a unique way the principal
(pri) jets from the parametric (par) other ones. This definition may also be applied to nonlinear
systems as well.
REMARK 2.7: For an involutive system with β = βqn < m, then (y β+1 , ..., y m ) can be given
arbitrarily and may constitute the input variables in control theory, though it is not necessary
to make such a choice. In this case, the intrinsic number α = αnq = m − β > 0 is called the
n-character and is the system counterpart of the so-called ”differential transcendence degree” in
differential algebra. As we shall see in the next section, the smallest non-zero character and the
number of zero characters are intrinsic numbers that cannot be known without bringing the system
to involution and we have α1q ≥ ... ≥ αnq ≥ 0.
EXAMPLE 2.8: ([11], §38, p 40 where one can find the first intuition of formal integrability) The primary ideal q = ((χ1 )2 , χ1 χ3 − χ2 ) provides the system y11 = 0, y13 − y2 = 0 which
is neither formally integrable nor involutive. Indeed, we get d3 y11 − d1 (y13 − y2 ) = y12 and
d3 y12 − d2 (y13 − y2 ) = y22 , that is to say each first and second prolongation does bring a new second
order PD equation. Considering the new system y22 = 0, y12 = 0, y13 −y2 = 0, y11 = 0, the question
is to decide whether this system is involutive or not. One could use Janet or Gröbner algorithm but
with no insight towards involution. In such a simple situation, as there is no PD equation of class 3,
two evident permutations of coordinates (1, 2, 3) → (3, 2, 1) or (1, 2, 3) → (2, 3, 1) both provide one
equation of class 3, 2 equations of class 2 and 1 equation of class 1. It is then easy to check directly
that the first permutation brings the involutive system y33 = 0, y23 = 0, y22 = 0, y13 − y2 = 0 that
will be used in the sequel and we have α32 = 0, α22 = 0, α12 = 2.
EXAMPLE 2.9: With n = 4, m = 1, q = 1, K = Q(x1 , x2 , x3 , x4 ), let us consider the system R1 :
y4 − x3 y2 − y = 0,    y3 − x4 y1 = 0
Again, the reader will check easily that the subsystem R1′ ⊂ R1 :
u ≡ y4 − x3 y1 − y = 0        1 2 3 4
v ≡ y3 − x4 y1 = 0            1 2 3 •
w ≡ y2 − y1 = 0               1 2 • •

namely the projection R_1^{(1)} of R_2 to R_1, is formally integrable and even involutive with one equation of class 4, one equation of class 3 and one equation of class 2.
In the situation of the last remark, the following theorem will generalize for PD control systems the well known first order Kalman form of OD control systems where the derivatives of the
input do not appear ([27], VI,1.14, p 802). For this, we just need to modify the Spencer form and
we provide the procedure that must be followed in the case of a first order involutive system with
no zero order equation, for example an involutive Spencer form.
• Look at the equations of class n solved with respect to yn1 , ..., ynβ .
• Use integrations by parts like:
yn1 − a(x)ynβ+1 = dn (y 1 − a(x)y β+1 ) + ∂n a(x)y β+1 = ȳn1 + ∂n a(x)y β+1
• Modify y 1 , ..., y β to ȳ 1 , ..., ȳ β in order to ”absorb” the various ynβ+1 , ..., ynm only appearing in the
equations of class n.
We have the following unexpected result providing what we shall call reduced Spencer form:
THEOREM 2.10: The new equations of class n only contain yiβ+1 , ..., yim with 0 ≤ i ≤ n − 1
while the equations of class 1, ..., n − 1 no longer contain y β+1 , ..., y m and their jets. Accordingly,
as we shall see in the next section, any torsion element, if it exists, only depends on ȳ 1 , ..., ȳ β .
Proof: The first assertion comes from the absorption procedure. Now, if y^m or y^m_i should appear in an equation of class ≤ n−1, prolonging this equation with respect to the non-multiplicative variable x^n should bring y^m_n or y^m_{in} and (here involution is essential) we should get a linear combination of equations of various classes prolonged with respect to x^1, ..., x^{n−1} only, but this is
impossible and we get the desired reduced form.
Q.E.D.
When R_q is involutive, the linear differential operator D : E −j_q→ J_q(E) −Φ→ J_q(E)/R_q = F_0 of order
q with space of solutions Θ ⊂ E is said to be involutive and one has the canonical linear Janet
sequence ([17], p 144):
$$0 \longrightarrow \Theta \longrightarrow E \xrightarrow{\mathcal{D}} F_0 \xrightarrow{\mathcal{D}_1} F_1 \xrightarrow{\mathcal{D}_2} \cdots \xrightarrow{\mathcal{D}_n} F_n \longrightarrow 0$$
where each other operator is first order involutive and generates the compatibility conditions (CC)
of the preceding one. As the Janet sequence can be cut at any place, the numbering of the Janet
bundles has nothing to do with that of the Poincaré sequence for the exterior derivative, contrary
to what many physicists believe. Moreover, the dimensions of the Janet bundles can be computed
at once inductively from the board of multiplicative and non-multiplicative variables that can be
exhibited for D by working out the board for D1 and so on. For this, the number of rows of this
new board is the number of dots appearing in the initial board while the number nb(i) of dots in
the column i just indicates the number of CC of class i for i = 1, ..., n with nb(i) < nb(j), ∀i < j.
It follows that the successive first order operators D1 , ..., Dn are automatically in reduced Spencer
form.
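The inductive computation of the dimensions of the Janet bundles from the board can be mechanized; the following sketch (our plain Python illustration, not taken from the references) iterates the rule just stated, each dot in column i of the current board producing one CC of class i, that is one row with multiplicative variables 1, ..., i in the next board:

    def next_classes(classes, n):
        # a row of class c has its dots (non-multiplicative variables) in the
        # columns c+1, ..., n; each dot in column i yields one CC of class i
        return sorted(i for c in classes for i in range(c + 1, n + 1))

    def janet_dimensions(classes, n):
        # fiber dimensions of F0, F1, F2, ... for a first order involutive
        # operator whose equations have the given classes
        dims = [len(classes)]
        while classes:
            classes = next_classes(classes, n)
            if classes:
                dims.append(len(classes))
        return dims

    # board of the system of Example 2.9: one equation of each class 4, 3, 2
    print(janet_dimensions([4, 3, 2], n=4))     # [3, 3, 1], as in Example 2.15 below
    # board of the system of Example 4.2: classes 3, 3, 3, 2 with n = 4
    print(janet_dimensions([3, 3, 3, 2], n=4))  # [4, 4, 1]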
EXAMPLE 2.11: Coming back to Example 2.9 and changing slightly our usual notations, we get for D1 the following first order involutive system of CC in reduced Spencer form:

φ3 ≡ d4 v − d3 u + x4 d1 u − x3 d1 v − v = 0     1 2 3 4
φ2 ≡ d4 w − d2 u + d1 u − x3 d1 w − w = 0        1 2 3 4
φ1 ≡ d3 w − d2 v + d1 v − x4 d1 w = 0            1 2 3 •
as d4 u does not appear in φ2 and φ3 while u does not appear in φ1 .
We finally obtain for D2 the only CC:

ψ ≡ d4 φ1 − d3 φ2 + d2 φ3 − d1 φ3 + x4 d1 φ2 − x3 d1 φ1 − φ1 = 0
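Such identities being error prone, it is worth checking them by machine; the sketch below (ours, assuming sympy) substitutes arbitrary functions u, v, w into the φ's above and verifies that ψ vanishes identically:

    import sympy as sp

    x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
    u, v, w = [sp.Function(s)(x1, x2, x3, x4) for s in ('u', 'v', 'w')]
    d = sp.diff

    phi3 = d(v, x4) - d(u, x3) + x4*d(u, x1) - x3*d(v, x1) - v
    phi2 = d(w, x4) - d(u, x2) + d(u, x1) - x3*d(w, x1) - w
    phi1 = d(w, x3) - d(v, x2) + d(v, x1) - x4*d(w, x1)

    psi = (d(phi1, x4) - d(phi2, x3) + d(phi3, x2) - d(phi3, x1)
           + x4*d(phi2, x1) - x3*d(phi1, x1) - phi1)
    print(sp.expand(psi))   # prints 0: psi is indeed a CC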
DEFINITION 2.12: The Janet sequence is said to be locally exact at Fr if any local section of Fr killed by Dr+1 is the image by Dr of a local section of Fr−1. It is called locally exact if it is locally exact at each Fr for 0 ≤ r ≤ n. The Poincaré sequence is locally exact, that is a closed form is locally an exact form, but counterexamples may exist for more general Janet sequences ([18], p 373).
Equivalently, we have the involutive first Spencer operator D1 : C0 = Rq −j1→ J1(Rq) −→ J1(Rq)/Rq+1 ≃ T* ⊗ Rq/δ(gq+1) = C1 of order one induced by the Spencer operator D : Rq+1 −→ T* ⊗ Rq. Introducing the Spencer bundles Cr = ∧rT* ⊗ Rq/δ(∧r−1T* ⊗ gq+1), the first order involutive (r + 1)-Spencer operator Dr+1 : Cr −→ Cr+1 is induced by D : ∧rT* ⊗ Rq+1 −→ ∧r+1T* ⊗ Rq : α ⊗ ξq+1 −→ dα ⊗ ξq + (−1)^r α ∧ Dξq+1 and we obtain the canonical linear Spencer sequence ([17], p 150):

0 −→ Θ −jq→ C0 −D1→ C1 −D2→ C2 −D3→ ... −Dn→ Cn −→ 0
as the canonical Janet sequence for the first order involutive system Rq+1 ⊂ J1 (Rq ).
The canonical Janet sequence and the canonical Spencer sequence can be connected by a commutative diagram where the Spencer sequence is induced by the locally exact central horizontal sequence, which is at the same time the Janet sequence for jq and the Spencer sequence for Jq+1(E) ⊂ J1(Jq(E)) ([17], p 153), but this result will not be used in this paper (see [5],[20],[22],[23] for more details on Cosserat and Maxwell equations, and ([16]–[21]), in particular ([22],[23]), for applications to engineering and mathematical physics).
REMARK 2.13: We shall revisit Example 2.8 in order to explain the word "canonical" that has been used in the previous definitions. For this, starting with the inhomogeneous system y33 = u, y13 − y2 = v, we obtain easily the following inhomogeneous involutive system with its corresponding board of multiplicative and non-multiplicative variables:

Φ4 ≡ y33 = u                        1 2 3
Φ3 ≡ y23 = d1 u − d3 v              1 2 •
Φ2 ≡ y22 = d11 u − d13 v − d2 v     1 2 •
Φ1 ≡ y13 − y2 = v                   1 • •
Using the prolongations with respect to the 4 non-multiplicative variables involved should bring 4 first order CC for the right members and we could expect 4 third order CC involving u and v. Surprisingly, we only need the single CC Ψ ≡ d33 v − d13 u + d2 u = 0 and obtain the differential sequence:
0 −→ Θ −→ 1 −→ 2 −→ 1 −→ 0
as a single CC has no CC for itself (See ([18],p365) for the effective general procedure).
Such a differential sequence is quite different from the canonical Janet sequence:
0 −→ Θ −→ 1 −D→ 4 −D1→ 4 −D2→ 1 −→ 0
which is the only sequence that can provide the Spencer sequence as we already said and could not
be obtained by simply using Gröbner bases. This remark will become essential in mathematical
physics (foundations of continuum mechanics, gauge theory, general relativity) where only involutive operators must be used ([20],[22],[23]). We also check that the Euler–Poincaré characteristic, namely the alternating sum of the dimensions of the vector bundles involved, does not depend on the differential sequence used as we get 1 − 2 + 1 = 1 − 4 + 4 − 1 = 0 (see [18], p 378).
In the same spirit, using certain parametric jet variables as new unknowns, we may set z1 = y, z2 = y1, z3 = y2, z4 = y3 in order to obtain the following involutive first order system with no zero order equation, where we have separated the classes:

class 3:  d3 z1 − z4 = 0,  d3 z2 − z3 = 0,  d3 z3 = 0,  d3 z4 = 0      1 2 3
class 2:  d2 z1 − z3 = 0,  d2 z2 − d1 z3 = 0,  d2 z3 = 0,  d2 z4 = 0   1 2 •
class 1:  d1 z1 − z2 = 0,  d1 z4 − z3 = 0                              1 • •
Contrary to what could be believed, this operator does not describe the Spencer sequence that could be obtained from the previous Janet sequence. Indeed, introducing the trivial vector bundle E with local coordinates (x1, x2, x3, y), it follows that J1(E) has local coordinates (x1, x2, x3, z1, z2, z3, z4). Now, the involutive system R2 ⊂ J2(E) ⊂ J1(J1(E)) with involutive symbol g2 ⊂ S2T* ⊗ E ⊂ T* ⊗ T* ⊗ E ⊂ T* ⊗ J1(E) projects onto J1(E) but dim(R2) = 6 because par(R2) = {y, y1, y2, y3, y11, y12} while we have only 4 unknowns (z1, z2, z3, z4). Nevertheless, as R2 projects onto J1(E), we may construct a canonical Janet sequence for this new system where the successive Janet bundles involved will be the Spencer bundles Cr = ∧rT* ⊗ J1(E)/δ(∧r−1T* ⊗ g2) with a shift by one step in the numbering of the bundles as now C0 = J1(E) and the successive operators are induced by the composition of the inclusion R2 ⊂ J2(E) with the Spencer operator D : J2(E) −→ T* ⊗ J1(E) as in ([17], p 144, 150) or ([18], p 356). In any case, it is essential to notice that, both in the canonical Spencer sequence and in the canonical Janet sequence, any intermediate operator can be constructed explicitly without knowing the previous ones.
EXAMPLE 2.14: With m = 1, n = 4, q = 2, one could treat similarly the involutive system:
y44 = 0, y34 = 0, y33 = 0, y24 − y13 = 0 with one equation of class 4, two equations of class 3 and
one equation of class 2.
EXAMPLE 2.15: Coming back to the involutive system of Example 2.9 with variable coefficients, we let the reader prove that the Janet sequence is:

0 −→ Θ −→ 1 −D→ 3 −D1→ 3 −D2→ 1 −→ 0
EXAMPLE 2.16: Let us finally consider the following involutive system of PD equations with two independent variables (x1, x2) and three unknowns (y^1, y^2, y^3), where again a is an arbitrary constant parameter and we have set for simplicity y^k_i = di y^k:

y^2_2 − y^2_1 + y^3_2 − y^3_1 − a y^3 = 0     1 2
y^1_2 − y^2_1 − y^3_2 − y^3_1 − a y^3 = 0     1 2
y^1_1 − y^2_1 − 2 y^3_1 = 0                   1 •
Then the corresponding Janet sequence is:

0 −→ Θ −→ 3 −D→ 3 −D1→ 1 −→ 0
Finally, setting ȳ^1 = y^1 − y^3, ȳ^2 = y^2 + y^3, we obtain the new first order involutive system:

ȳ^2_2 − ȳ^2_1 − a y^3 = 0     1 2
ȳ^1_2 − ȳ^2_1 − a y^3 = 0     1 2
ȳ^1_1 − ȳ^2_1 = 0             1 •
with two equations of class 2 and one equation of class 1 in which y 3 surprisingly no longer appears.
If χ1, ..., χn are n algebraic indeterminates or, in a more intrinsic way, if χ = χi dxi ∈ T* is a covector and D : E −→ F : ξ −→ aτkµ(x) ∂µ ξk(x) is a linear involutive operator of order q, we may introduce the characteristic matrix a(x, χ) = (aτkµ(x) χµ | |µ| = q) and the resulting map σχ(D) : E −→ F is called the symbol of D at χ. Then there are two possibilities:
• If maxχ rk(σχ(D)) < m ⇔ α_q^n > 0: the characteristic matrix fails to be injective for any covector.
• If maxχ rk(σχ(D)) = m ⇔ α_q^n = 0: the characteristic matrix fails to be injective if and only if all the determinants of the m × m submatrices vanish. However, one can prove that this algebraic ideal a ⊂ K[χ] is not intrinsically defined and must be replaced by its radical rad(a), made by all polynomials having a power in a. This radical ideal is called the characteristic ideal of the operator.
DEFINITION 2.17: For each x ∈ X, the algebraic set defined by the characteristic ideal is
called the characteristic set of D at x and V = ∪x∈X Vx is called the characteristic set of D.
One has the following important theorem ([18],[29]) that will play an important part later on:
THEOREM 2.18: (Hilbert-Serre) The dimension d(V ) of the characteristic set, that is the maximum dimension of the irreducible components, is equal to the number of non-zero characters while
the codimension cd(V ) = n − d(V ) is equal to the number of zero characters, that is to the number
of ”full ” classes in the board of multiplicative variables of an involutive system.
EXAMPLE 2.19: Coming back to Remark 2.13, we obtain a = ((χ3)2, χ2χ3, (χ2)2, χ1χ3) =⇒ rad(a) = (χ2, χ3) and thus cd(V) = 2. However, if we only take into account Example 2.8, we should only get the radical ideal (χ3) and the wrong result cd(V) = 1. The reason for using the radical can be seen from the equivalent first order system that should provide b = ((χ3)4, ...) with homogeneous polynomials of degree 4 and thus b ⊂ a with a strict inclusion though rad(a) = rad(b). A similar situation can be obtained with Examples 1.1 and 2.9.
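For readers working with computer algebra, the claim on the radical can at least be tested through ideal membership; the sketch below (ours, assuming sympy) checks that χ2 and χ3 have powers in a while no power of χ1 shown lies in a, supporting (though of course not proving) rad(a) = (χ2, χ3):

    import sympy as sp

    chi1, chi2, chi3 = sp.symbols('chi1 chi2 chi3')
    # the characteristic ideal a of Example 2.19
    G = sp.groebner([chi3**2, chi2*chi3, chi2**2, chi1*chi3],
                    chi1, chi2, chi3, order='grevlex')
    print(G.contains(chi2**2), G.contains(chi3**2))   # True True
    print(G.contains(chi1), G.contains(chi1**5))      # False False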
3) TOOLS FROM MODULE THEORY :
We may roughly say that, if a reader familiar with Gröbner bases ([4],[6]) and computer algebra looks at the previous section, he will feel embarrassed because he will believe that "intrinsicness is always competing with complexity", as can be seen from Example 2.8 and Remark 2.13. However, even if he admits that it may be useful to have intrinsic and thus canonical procedures, then, looking at the existing literature on differential modules ([1],[9],[12]), he will really feel he is on another planet, as the main difficulty involved in the theory of differential modules is to understand why and where formal integrability and involution become essential tools to apply well before dealing with the homological background of "algebraic analysis" involving extension modules. This is the main reason for which the case of variable coefficients is rarely treated "by itself", always referring to Weyl algebras for examples, and the main difficulty we found when writing ([18], in particular Chapter IV). The central concept, essential for applications but well hidden in the literature dealing with filtered modules ([14], p 383) and totally absent from the use of Gröbner bases because it amounts to formal integrability by duality, is that of a "strict morphism". Accordingly, the purpose of this section will be to explain why such a definition, which seems to be purely technical, will be so important for studying extension modules and purity.
If P = aµ dµ ∈ D = K[d], the highest value of |µ| with aµ 6= 0 is called the order of the operator
P and the ring D with multiplication (P, Q) −→ P ◦ Q = P Q is filtred by the order q of the
operators. We have the filtration 0 ⊂ K = D0 ⊂ D1 ⊂ ... ⊂ Dq ⊂ ... ⊂ D∞ = D. Moreover, it
is clear that D, as an algebra, is generated by K = D0 and T = D1 /D0 with D1 = K ⊕ T if we
identify an element ξ = ξ i di ∈ T with the vector field ξ = ξ i (x)∂i of differential geometry, but
with ξ i ∈ K now. It follows that D = D DD is a bimodule over itself, being at the same time a left
D-module by the composition P −→ QP and a right D-module by the composition P −→ P Q.
We define the adjoint functor ad : D −→ Dop : P = aµ dµ −→ ad(P ) = (−1)|µ| dµ aµ and we have
ad(ad(P )) = P . It is easy to check that ad(P Q) = ad(Q)ad(P ), ∀P, Q ∈ D. Such a definition can
also be extended to any matrix of operators by using the transposed matrix of adjoint operators
(See [18],[19],[22] for more details and applications to control theory and mathematical physics).
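The adjoint can be computed effectively from the Leibniz formula dµ(a f) = Σν≤µ C(µ, ν) (∂^(µ−ν) a)(∂^ν f); the sketch below (our illustration, assuming sympy, with an operator stored as a dictionary multi-index −→ coefficient) implements ad and checks ad(ad(P)) = P on an operator with variable coefficients:

    import itertools, math
    import sympy as sp

    x = sp.symbols('x1 x2')   # n = 2 for the test

    def adjoint(P):
        # P = sum a_mu d_mu is stored as {mu: a_mu}, mu a tuple of exponents;
        # ad(P) = sum (-1)^|mu| d_mu a_mu, expanded by the Leibniz formula so
        # that the result is again of the form {nu: b_nu}
        Q = {}
        for mu, a in P.items():
            sign = (-1) ** sum(mu)
            for nu in itertools.product(*[range(m + 1) for m in mu]):  # nu <= mu
                binom = math.prod(math.comb(m, k) for m, k in zip(mu, nu))
                coeff = a
                for xi, m, k in zip(x, mu, nu):   # differentiate a by x^(mu-nu)
                    coeff = sp.diff(coeff, xi, m - k)
                Q[nu] = sp.simplify(Q.get(nu, 0) + sign * binom * coeff)
        return {nu: c for nu, c in Q.items() if c != 0}

    P = {(2, 0): x[0]*x[1], (0, 1): x[0]**2, (0, 0): sp.Integer(1)}
    assert adjoint(adjoint(P)) == P
    print(adjoint(P))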
Accordingly, if y = (y^1, ..., y^m) are differential indeterminates, then D acts on y^k by setting d_i y^k = y^k_i −→ d_µ y^k = y^k_µ with d_i y^k_µ = y^k_{µ+1_i} and y^k_0 = y^k. We may therefore use the jet coordinates in a formal way as in the previous section. Therefore, if a system of OD/PD equations is written in the form Φτ ≡ aτkµ y^k_µ = 0 with coefficients a ∈ K, we may introduce the free differential module Dy = Dy^1 + ... + Dy^m ≃ D^m and consider the differential submodule I = DΦ ⊂ Dy which is usually called the module of equations, both with the differential module M = Dy/DΦ or D-module, and we may set M = DM if we want to specify the ring of differential operators. The work of Macaulay only covers the case m = 1 with K replaced by k ⊆ cst(K). Again, we may introduce the formal prolongation with respect to d_i by setting d_i Φτ ≡ aτkµ y^k_{µ+1_i} + ∂_i aτkµ y^k_µ in order to induce maps d_i : M −→ M : ȳ^k_µ −→ ȳ^k_{µ+1_i} by residue if we denote the residue Dy −→ M : y^k −→ ȳ^k by a bar as in algebraic geometry. However, for simplicity, we shall not write down the bar when the background will indicate clearly whether we are in Dy or in M.
As a byproduct, the differential modules we shall consider will always be finitely generated (k = 1, ..., m < ∞) and finitely presented (τ = 1, ..., p < ∞). Equivalently, introducing the matrix of operators D = (aτkµ dµ) with m columns and p rows, we may introduce the morphism D^p −→ D^m : (Pτ) −→ (Pτ Φτ) : P −→ PΦ = PD over D, by letting D act on the left of these row vectors while the operator matrix D acts on their right, and the presentation of M is defined by the exact cokernel sequence D^p −→ D^m −→ M −→ 0. It is essential to notice that the presentation only depends on K, D and Φ or D, that is to say never refers to the concept of (explicit or formal) solutions. It is at this moment that we have to take into account the results of the previous section in order to understand that certain presentations will be much better than others, in particular to establish a link with formal integrability and involution.
It follows from its definition that M can be endowed with a quotient filtration obtained from
that of Dm which is defined by the order of the jet coordinates yq in Dq y. We have therefore the
inductive limit 0 ⊆ M0 ⊆ M1 ⊆ ... ⊆ Mq ⊆ ... ⊆ M∞ = M with di Mq ⊆ Mq+1 and M = DMq for
q ≫ 0 with prolongations Dr Mq ⊆ Mq+r , ∀q, r ≥ 0.
DEFINITION 3.1: ([14],p 383) If M and N are two differential modules and f : M −→ N is
a morphism over D compatible with the filtration, that is if f (Mq ) ⊂ Nq with induced morphism
fq : Mq −→ Nq , then f is a strict morphism if fq (Mq ) = f (M ) ∩ Nq , ∀q ≥ 0.
Equivalently, chasing in the following commutative and exact diagram:

     0            0
     ↓            ↓
     Mq  −−fq−→  Nq  −→ coker(fq) −→ 0
     ↓            ↓          ↓
     M   −−f−−→  N   −→ coker(f)  −→ 0

then f is strict if the induced morphism coker(fq) −→ coker(f) is a monomorphism ∀q ≥ 0.
DEFINITION 3.2: An exact sequence of morphisms finishing at M is said to be a resolution of
M . If the differential modules involved apart from M are free, we shall say that we have a free
resolution of M . Moreover, a sequence of strict morphisms is called a strict sequence.
LEMMA 3.3: If f is a strict morphism as in the last definition, there are exact sequences:

0 −→ coker(fq) −→ coker(fq+1), ∀q ≥ 0.
Proof: As f is compatible with the filtrations and Mq ⊆ Mq+1 , Nq ⊆ Nq+1 , we have an induced
morphism coker(fq ) −→ coker(fq+1 ). Now, as f is also strict, we have the following commutative
and exact diagram:
0 −→  coker(fq)   −→ coker(f)
          ↓              ‖
0 −→ coker(fq+1) −→ coker(f)
The lemma finally follows from an elementary chase.
Q.E.D.
Having in mind that K is a left D-module with the standard action (D, K) −→ K : (di , a) −→
∂i a and that D is a bimodule over itself, we have only two possible constructions:
DEFINITION 3.4: We define the system R = homK (M, K) = M ∗ and set Rq = homK (Mq , K) =
Mq∗ as the system of order q. We have the projective limit R = R∞ −→ ... −→ Rq −→ ... −→
R1 −→ R0 . It follows that fq ∈ Rq : yµk −→ fµk ∈ K with aτk µ fµk = 0 defines a section at order
q and we may set f∞ = f ∈ R for a section of R. For an arbitrary differential field K, such a
definition has nothing to do with the concept of a formal power series solution (care).
DEFINITION 3.5: We may define the right differential module homD (M, D).
PROPOSITION 3.6: When M is a left D-module, then R is also a left D-module.
Proof: As D is generated by K and T as we already said, let us define:

(a f)(m) = a f(m),            ∀a ∈ K, ∀m ∈ M
(ξ f)(m) = ξ f(m) − f(ξ m),   ∀ξ = a^i d_i ∈ T, ∀m ∈ M

In the operator sense, it is easy to check that d_i a = a d_i + ∂_i a and that ξη − ηξ = [ξ, η] is the standard bracket of vector fields. We finally get (d_i f)^k_µ = (d_i f)(y^k_µ) = ∂_i f^k_µ − f^k_{µ+1_i} and thus recover exactly the Spencer operator of the previous section though this is not evident at all. We also get (d_i d_j f)^k_µ = ∂_{ij} f^k_µ − ∂_i f^k_{µ+1_j} − ∂_j f^k_{µ+1_i} + f^k_{µ+1_i+1_j} =⇒ d_i d_j = d_j d_i, ∀i, j = 1, ..., n and thus d_i R_{q+1} ⊆ R_q =⇒ d_i R ⊂ R induces a well defined operator R −→ T* ⊗ R : f −→ dx^i ⊗ d_i f. This result has been discovered (up to sign) by Macaulay in 1916 ([11]). For more details on the Spencer operator and its applications, the reader may look at ([22],[23]).
Q.E.D.
As D is a bimodule over itself, it follows from this proposition that D* = homK(D, K) is a left D-module. Moreover, using Baer's criterion ([28]), it is known that D* is an injective D-module as there is a canonical isomorphism:

M* = homK(M, K) ≃ homD(M, D*)

where both sides are well defined ([2], Prop 11, p 18; [28], p 37).
DEFINITION 3.7: With any differential module M we shall associate the graded module G = gr(M) over the polynomial ring gr(D) ≃ K[χ] by setting G = ⊕_{q≥0} Gq with Gq = Mq/Mq−1 and we get gq = Gq* where the symbol gq is defined by the short exact sequences:

0 −→ Mq−1 −→ Mq −→ Gq −→ 0   =⇒   0 −→ gq −→ Rq −→ Rq−1 −→ 0
We have the short exact sequences 0 −→ Dq−1 −→ Dq −→ Sq T −→ 0 leading to grq (D) ≃ Sq T
and we may set as usual T ∗ = homK (T, K) in a coherent way with differential geometry. Moreover
any compatible morphism f : M −→ N induces a morphism gr(f ) : gr(M ) −→ gr(N ).
EXAMPLE 3.8: If K = Q(x), m = 1, n = 1, let us consider the system yxxx − yx = 0 for which we may exhibit the basis of sections {f = (1, 0, 0, 0, ...), f′ = (0, 1, 0, 1, ...), f″ = (0, 0, 1, 0, ...)} as in ([11], §59, p 67) or ([21]). We obtain dx f = 0, dx f′ = −f − f″, dx f″ = −f′ and check that all the sections can be generated by a single one, namely f″ which describes the power series of cosh(x) − 1. With now m = 2, let us consider the module defined by the system y^1_xx = 0, y^2_x = 0. Setting y = y^2 − x y^1, we successively get yx = −x y^1_x − y^1, yxx = −2 y^1_x, yxxx = 0 =⇒ y^1 = (x/2) yxx − yx, y^2 = (x2/2) yxx − x yx + y and a differential isomorphism with the module defined by the new system yxxx = 0. All the sections of the second system are easily seen to be generated by the single section f = (0, 0, 1, ...), a result leading to the only generating section f^1(x) = x/2, f^1_x = −1/2, f^1_xx = 0, ..., f^2(x) = x2/2, f^2_x = 0, ... of the initial system, but these sections do not describe solutions because ∂x f^1 − f^1_x = 1 ≠ 0 and ∂x f^2 − f^2_x = x ≠ 0. We do not know any reference in computer algebra dealing with sections (see [21] for more details).
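These computations on sections are easy to mechanize; the sketch below (ours, plain Python, sections truncated at order 5 as integer component lists) recovers dx f = 0, dx f′ = −f − f″, dx f″ = −f′ for the first system above:

    def spencer_dx(f):
        # (dx f)_mu = d/dx f_mu - f_(mu+1); the components here are constants,
        # so only the shift -f_(mu+1) survives and the truncation order drops by one
        return [-c for c in f[1:]]

    # truncated sections of y_xxx - y_x = 0: components (f_0, f_1, ..., f_5)
    # subject to f_(mu+3) = f_(mu+1)
    f  = [1, 0, 0, 0, 0, 0]
    fp = [0, 1, 0, 1, 0, 1]   # f'
    fs = [0, 0, 1, 0, 1, 0]   # f''

    print(spencer_dx(f))                                            # the zero section
    print(spencer_dx(fp) == [-(a + b) for a, b in zip(f, fs)][:5])  # dx f' = -f - f''
    print(spencer_dx(fs) == [-a for a in fp][:5])                   # dx f'' = -f'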
Coming back to the presentation of M under study, we notice that the morphism D involved is not compatible unless we shift the index of the filtration by ord(D) = q. In that case, we obtain im(D) = I ⊂ Dy and may set Iq+r = I ∩ Dq+r y, but we have in general Dr Iq ⊆ Iq+r only, that is the equations of order q + r may not be produced by r prolongations only. We have thus obtained:
PROPOSITION 3.9: The morphism induced by D is strict if and only if D is formally integrable. Accordingly, the module version of both the Janet sequence and the Spencer sequence are
strictly exact sequences.
Proof: Using q + r instead of q in Lemma 3.3 and applying homK (•, K), we obtain the epimorphisms Rq+r+1 −→ Rq+r −→ 0, ∀r ≥ 0.
Q.E.D.
The reader will find in ([18], IV, 3) more details on the relations existing between G and M which are needed in order to study the non-commutative situation, at least when K is a differential field, as such a case is hard enough. We obtain in particular the Hilbert polynomial dimK(Mq+r) = dimK(Rq+r) = (α/d!) r^d + ... where α is called the multiplicity of M, and we set cdD(M) = cd(M) = n − d, rkD(M) = rk(M) = α if cd(M) = 0 and 0 otherwise.
EXAMPLE 3.10: Coming back to the Example 2.8 of Macaulay, we obtain from Remark 2.13
the free resolution 0 −→ D −→ D2 −→ D −→ M −→ 0 but only the morphism on the left is strict
as for the morphism on the right we know that its image is indeed I2 = {y33 , y23 , y22 , y13 − y2 } and
not just {y33 , y13 −y2 }. However, bringing the system to involution, we get the strict free resolution
0 −→ D −→ D4 −→ D4 −→ D −→ M −→ 0 as the module version of the Janet sequence and we
let the reader exhibit the module version of the corresponding Spencer sequence as an exercise.
If M is a differential module over the ring D = K[d] of differential operators and m ∈ M , then
the differential submodule Dm ⊂ M is defined by a system of OD or PD equations for one unknown
and we may look for its codimension cd(Dm). A similar comment can be done for any differential
submodule M ′ ⊂ M . Sometimes, a single element m ∈ M , called differentially primitive element,
may generate M if Dm = M .
EXAMPLE 3.11: With K = Q, let us consider the differential module M defined by the system y^1_xx − y^1 = 0, y^2_x = 0. Then, setting y = y^1 − y^2, we get yx = y^1_x, yxx = y^1_xx = y^1, yxx − y = y^2 and thus yxxx − yx = 0 provides another way to describe M by means of a single element as in ([18], p 435). We have the following commutative and exact diagram:

0 −→ D2 −→ D2 −→ M −→ 0
      ↓     ↓     ‖
0 −→ D  −→ D  −→ M −→ 0

where the upper morphism is (P, Q) −→ (P(d2 − 1), Qd), the lower one is R −→ R(d3 − d) and the left vertical one is (P, Q) −→ Pd + Q. If now we consider the differential module M defined by y^1_xx − a y^1 = 0, y^2_x = 0 where a is a constant parameter, we cannot find a differentially primitive element when K = Q if a = 0 but we can when K = Q(x) for any value of a, as in Example 3.8.
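At the level of classical solutions the primitive element can be spot-checked; the sketch below (ours, assuming sympy) takes the general solution of y^1_xx − y^1 = 0, y^2_x = 0 and verifies that y = y^1 − y^2 satisfies yxxx − yx = 0 while y^1 = yxx and y^2 = yxx − y are recovered:

    import sympy as sp

    x, C1, C2, C3 = sp.symbols('x C1 C2 C3')
    y1 = C1*sp.exp(x) + C2*sp.exp(-x)   # general solution of y1'' = y1
    y2 = C3                             # general solution of y2' = 0
    y = y1 - y2

    print(sp.simplify(sp.diff(y, x, 3) - sp.diff(y, x)))   # 0
    print(sp.simplify(sp.diff(y, x, 2) - y1))              # 0: y1 = yxx
    print(sp.simplify(sp.diff(y, x, 2) - y - y2))          # 0: y2 = yxx - y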
We may check the following definition in a constructive way ([27]):
DEFINITION 3.12: tr (M ) = {m ∈ M | cd(Dm) > r} is the greatest differential submodule of
M having codimension > r.
PROPOSITION 3.13: cd(M) = cd(V) = r ⇐⇒ α_q^{n−r} ≠ 0, α_q^{n−r+1} = ... = α_q^n = 0 ⇐⇒ t_r(M) ≠ M, t_{r−1}(M) = ... = t_0(M) = t(M) = M, and this intrinsic result can be most easily checked by using the Spencer form of the system defining M.
We are now in a good position for defining and studying purity for differential modules.
DEFINITION 3.14: M is r-pure ⇐⇒ tr (M ) = 0, tr−1 (M ) = M ⇐⇒ cd(Dm) = r, ∀m ∈ M . In
particular, M is 0-pure if t(M ) = 0 and, if cd(M ) = r but M is not r-pure, we may call M/tr (M )
the pure part of M . It follows that tr−1 (M )/tr (M ) is equal to zero or is r-pure (See the picture in
[18], p 545). Finally, when tr−1 (M ) = tr (M ), we shall say that there is a gap in the purity filtration:
0 = tn (M ) ⊆ tn−1 (M ) ⊆ ... ⊆ t1 (M ) ⊆ t0 (M ) = t(M ) ⊆ M
PROPOSITION 3.15: tr (M ) does not depend on the presentation or on the filtration of M .
EXAMPLE 3.16: If K = Q and M is defined by the involutive system y33 = 0, y23 = 0, y13 = 0, then z = y3 satisfies d3z = 0, d2z = 0, d1z = 0 and cd(Dz) = 3 while z′ = y2 only satisfies d3z′ = 0 and cd(Dz′) = 1. We have the purity filtration 0 = t3(M) ⊂ t2(M) = t1(M) ⊂ t0(M) = t(M) = M with one gap and two strict inclusions.
We now recall the definition of the extension modules extiD(M, D), that we shall simply denote by exti(M), and the way to use their dimension or codimension. We point out once more that these numbers cannot be obtained without bringing the underlying systems to involution in order to get information on M from information on G. We divide the procedure into four steps that can be achieved by means of computer algebra ([27]):
• Construct a free resolution of M , say:
... −→ Fi −→ ... −→ F1 −→ F0 −→ M −→ 0
• Suppress M in order to obtain the deleted sequence:
... −→ Fi −→ ... −→ F1 −→ F0 −→ 0
• Apply homD (•, D) in order to obtain the dual sequence heading backwards:
... ←− homD (Fi , D) ←− ... ←− homD (F1 , D) ←− homD (F0 , D) ←− 0
• Define exti (M ) to be the cohomology at homD (Fi , D) in the dual sequence with ext0 (M ) =
homD (M, D).
The following nested chain of difficult propositions and theorems can be obtained, even in the non-commutative case, by combining the use of extension modules and bidualizing complexes in the framework of algebraic analysis. The main difficulty is to obtain first these results for the graded module G = gr(M) by using techniques from commutative algebra before extending them to the filtered module M as in ([1],[9],[18],[19]).
THEOREM 3.17: The extension modules do not depend on the resolution of M used.
PROPOSITION 3.18: Applying homD (•, D) provides right D-modules that can be transformed
to left D-modules by means of the side changing functor and vice-versa. Namely, if ND is a right
D-module, then D N = ∧n T ⊗K ND is the converted left D-module while, if D N is a left D-module,
then ND = ∧n T ∗ ⊗K D N is the converted right D-module.
PROPOSITION 3.19: Instead of applying homD (•, D) and the side changing functor in the
module framework, we may use ad in the operator framework. Namely, to any operator D : E −→ F
we may associate the formal adjoint ad(D) : ∧n T ∗ ⊗ F ∗ −→ ∧n T ∗ ⊗ E ∗ with the useful though
striking relation rkD (ad(D)) = rkD (D).
PROPOSITION 3.20: exti (M ) is a torsion module ∀1 ≤ i ≤ n but ext0 (M ) = homD (M, D)
may not be a torsion module.
EXAMPLE 3.21: When M is a torsion module, we have homD (M, D) = 0 (exercise). When
n = 3 and the torsion-free module M is defined by the formally surjective div operator, the formal
adjoint of div is −grad which defines a torsion module. Also, when n = 1 as in classical control
theory, a controllable system allows to define a torsion-free module M which is free in that case
and homD (M, D) is thus also a free module.
THEOREM 3.22: exti(M) = 0, ∀i ≥ n + 1.

THEOREM 3.23: cd(exti(M)) ≥ i.

PROPOSITION 3.24: exti(M) = 0, ∀i < cd(M).

THEOREM 3.25: cd(M) ≥ r ⇐⇒ exti(M) = 0, ∀i < r.

PROPOSITION 3.26: cd(M) = r =⇒ cd(extr(M)) = r and extr(M) is r-pure.

PROPOSITION 3.27: extr(extr(M)) is equal to 0 or is r-pure, ∀0 ≤ r ≤ n.
PROPOSITION 3.28: If we set t−1 (M ) = M , there are exact sequences ∀0 ≤ r ≤ n:
0 −→ tr (M ) −→ tr−1 (M ) −→ extr (extr (M ))
THEOREM 3.29: If cd(M ) = r, then M is r-pure if and only if there is a monomorphism
0 −→ M −→ extr (extr (M )) of left differential modules.
THEOREM 3.30: M is pure ⇐⇒ exts (exts (M )) = 0, ∀s 6= cd(M ).
The last two theorems are known to characterize purity but it is however evident that they are
not very useful in actual practice.
THEOREM 3.31: When M is r-pure, the characteristic ideal is thus unmixed, that is a finite
intersection of prime ideals having the same codimension r and the characteristic set is equidimensional, that is the union of irreducible algebraic varieties having the same codimension r.
REMARK 3.32: For the reader knowing more about commutative algebra, we add a few details about the localization used in the primary decomposition of a module which are not so well
known ([3],[18],[21],[31]). For simplicity, setting k = cst(K), we shall denote by A = k[χ] the
polynomial ring isomorphic to D = k[d] and consider a module M over A. We denote as usual
by spec(A) the set of proper prime ideals in A, by max(A) the subset of maximal ideals in A
and by ass(M) = {p ∈ spec(A) | ∃ 0 ≠ m ∈ M, p = annA(m)} the (finite) set {p1, ..., pt} of associated prime ideals, while we denote by {p1, ..., ps} the subset of minimal associated prime ideals. It is well known that M ≠ 0 =⇒ ass(M) ≠ ∅. We recall that an ideal q ⊂ A is p-primary if ab ∈ q, b ∉ q =⇒ a ∈ rad(q) = p ∈ spec(A). We say that a module Q is p-primary if am = 0, 0 ≠ m ∈ Q =⇒ a ∈ p = rad(q) ∈ spec(A) when q = annA(Q) or, equivalently, ass(Q) = {p}. Similarly, we say that a module P is p-prime if am = 0, 0 ≠ m ∈ P =⇒ a ∈ p ∈ spec(A) when p = annA(P). It follows that any p-prime or p-primary module is r-pure with n − r = trd(A/p), a result generalizing ([11], §4, p 43). Accordingly, a module M is r-pure if and only if a = annA(M) admits a primary decomposition a = q1 ∩ ... ∩ qs and rad(a) = p1 ∩ ... ∩ ps with cd(A/pi) = cd(M) = r, ∀i = 1, ..., s. In that case, the monomorphism 0 −→ M −→ ⊕p∈ass(M) Mp induces a monomorphism 0 −→ M −→ Q1 ⊕ ... ⊕ Qs called the primary embedding where the primary modules Qi are the images of the localization morphisms M −→ Mpi = S−1M with S = A − pi, inducing epimorphisms M −→ Qi −→ 0 for i = 1, ..., s. Macaulay was only considering the case
M = A/a with primary decomposition a = q1 ∩ ... ∩ qs .
EXAMPLE 3.33: With k = Q and n = 3, then a = rad(a) = (χ1 , χ2 χ3 ) = (χ1 , χ2 ) ∩ (χ1 , χ3 ) is
unmixed and M = A/a is 2-pure while a = rad(a) = (χ1 χ2 , χ1 χ3 ) = (χ1 )∩(χ2 , χ3 ) is mixed, though
an intersection of two minimum prime ideals and M = A/a is not 1-pure. On the contrary, if one
has the primary decomposition a = ((χ1 )2 , χ1 χ2 , χ1 χ3 , χ2 χ3 ) = (χ1 , χ2 ) ∩ (χ1 , χ3 ) ∩ (χ1 , χ2 , χ3 )2 =
p1 ∩ p2 ∩ m2 and M = A/a, then ass(M ) = {p1 , p2 , m} with pi ⊂ m for i = 1, 2, though
rad(a) = p1 ∩ p2 as before. In that case, there is an embedding 0 −→ M −→ Q1 ⊕ Q2 ⊕ Q3
where Qi = A/pi is the image of the localization morphism M −→ Mpi for i = 1, 2 because p1
is killed by χ3 ∈ A − p1 , p2 is killed by χ2 ∈ A − p2 and Q3 = A/m2 is m-primary because
rad(m2) = m ∈ max(A). We have also an embedding 0 −→ M −→ Mp1 ⊕ Mp2 ⊕ Mm but no element of the multiplicative set A − m (for example the elements 1 + a with a ∈ m) can kill any element of M and the image of M into Mm is thus isomorphic to M which is not a primary module. It is important to notice
that the example of Macaulay q = ((χ3 )2 , χ2 χ3 , (χ2 )2 , χ1 χ3 − χ2 ) provides a p-primary module
A/q with p = (χ3 , χ2 ) even though the annihilating ideal of G = gr(M ) is the homogeneous
ideal a = ((χ3 )2 , χ2 χ3 , (χ2 )2 , χ1 χ3 ) = ((χ2 )2 , χ3 ) ∩ (χ1 , χ2 , χ3 )2 which is a mixed ideal because
ass(A/a) = {(χ2 , χ3 ), (χ1 , χ2 , χ3 )}. However, we get rad(a) = (χ2 , χ3 ) in a coherent way.
PROBLEM: Is it possible to have a test for checking whether a differential module is pure or not without using the previous results?
4) MOTIVATION :
As we already said in the introduction and in the previous section, a torsion-free module M
is 0-pure because in that case t0 (M ) = t(M ) = 0. Accordingly, M can be embedded into a free
module F and the inclusion, which may not be strict when n > 1, provides a parametrization by
means of a finite number of potential-like arbitrary functions in the classical language of elasticity (Airy function) or electromagnetism (EM 4-potential). As it is clear that such a situation is
only a very particular case of purity, it remains to wonder what can happen for an r-pure module
whenever r ≥ 1. One has the following result ([9], [18], compare to [1], p494):
THEOREM 4.1: If M is an r-pure differential module with r ≥ 1, there exists a differential
module L with pd(L) ≤ r and an embedding M ⊆ L.
Proof: First of all we notice that we have r > 0 and thus any element m ∈ M is surely a torsion element because cd(Dm) > 0, that is M = t(M ) is a torsion module with ext0 (M ) = homD (M, D) =
0 because D is an integral domain. Let now ... −→ Fr −→ ... −→ F1 −→ F0 −→ N −→ 0 be a free
resolution of the right differential module N = extr (M ). According to Proposition 3.26, we have
cd(N) = r > 0 too and N is also a torsion module with ext0(N) = homD(N, D) = 0. Applying the functor homD(•, D) to the previous sequence or, equivalently, constructing the adjoint sequence in the operator framework while using the fact that exti(N) = 0, ∀i < r according to Theorem 3.25, we obtain the finite long exact sequence with exactly r morphisms because N is finitely presented and extr(N) ≠ 0:
0 −→ homD (F0 , D) −→ ... −→ homD (Fr−1 , D) −→ homD (Fr , D) −→ L −→ 0
where the left differential module L is the cokernel of the last morphism on the right. As
homD(F, D) is free whenever F is free because of the bimodule structure of D, the corresponding deleted complex is:
0 −→ homD (F0 , D) −→ ... −→ homD (Fr−1 , D) −→ homD (Fr , D) −→ 0
Applying again homD (•, D) and using the reflexivity of any free module F , that is the isomorphism
homD (homD (F, D), D) ≃ F , we obtain the dual sequence:
0 −→ Fr −→ ... −→ F1 −→ F0 −→ 0
and a similar procedure may be followed with operators as we shall see in the next illustrating
examples ([18],[27]). This sequence is exact everywhere but at Fr and at F0 where its cohomology
is just N by definition, that is to say extr (L) = N = extr (M ). Looking for the cohomology at
homD(Fr, D) in the sequence obtained by duality from the resolution of N with coboundary module Br and cocycle module Zr, we obtain the following commutative and exact diagram:

         0            0
         ↓            ↓
0 −→    Br     =     Br
         ↓            ↓
0 −→    Zr    −→  homD(Fr, D)
         ↓            ↓
0 −→ extr(N) −→       L       −→ 0
         ↓            ↓
         0            0
Finally, composing the bottom monomorphism with the monomorphism 0 −→ M −→ extr (N )
provided by Theorem 3.29, we get the desired embedding M ⊆ L. It must be noticed that such a
procedure can be followed equally well in the commutative and non-commutative framework, that
is when K is a field of constants or a true differential field.
Q.E.D.
EXAMPLE 4.2: With K = Q, m = 1, n = 4, q = 2, let us study the 2-pure differential module M defined by the involutive system:

Φ4 ≡ y44 = 0            1 2 3 4
Φ3 ≡ y34 = 0            1 2 3 •
Φ2 ≡ y33 = 0            1 2 3 •
Φ1 ≡ y24 − y13 = 0      1 2 • •
From the board of multiplicative variables we may construct at once the Janet sequence:

0 −→ Θ −→ 1 −D→ 4 −D1→ 4 −D2→ 1 −→ 0
where D1 is defined by the involutive system:
Ψ4 ≡ d4 Φ3 − d3 Φ4 = 0                1 2 3 4
Ψ3 ≡ d4 Φ2 − d3 Φ3 = 0                1 2 3 4
Ψ2 ≡ d4 Φ1 − d2 Φ4 + d1 Φ3 = 0        1 2 3 4
Ψ1 ≡ d3 Φ1 − d2 Φ3 + d1 Φ2 = 0        1 2 3 •
and D2 by the (trivially) involutive system:

Ω ≡ d4 Ψ1 − d3 Ψ2 + d2 Ψ4 − d1 Ψ3 = 0     1 2 3 4

We have therefore the resolution:

0 −→ D −→ D4 −→ D4 −→ D −→ M −→ 0

leading to pd(M) ≤ 3 and the deleted complex is:

0 −→ D −→ D4 −→ D4 −→ D −→ 0
Applying homD(•, D) to this sequence, we get the sequence:

0 ←− D ←− D4 ←− D4 ←− D ←− 0

which can be described by the following adjoint sequence:

0 ←− 1 ←ad(D)− 4 ←ad(D1)− 4 ←ad(D2)− 1 ←− 0
which is not a Janet sequence. As M is a torsion module, using now Theorem 3.25 we get ext0(M) = 0, ext1(M) = 0 and we check that N = ext2(M) ≠ 0. For this, dualizing Ψ by λ and Ω by θ, we have to look for the CC of the inhomogeneous system:

Ψ1 −→ −d4 θ = λ1
Ψ2 −→  d3 θ = λ2
Ψ3 −→  d1 θ = λ3
Ψ4 −→ −d2 θ = λ4

which are not already provided by the system:

Φ1 −→ −d4 λ2 − d3 λ1 = 0
Φ2 −→ −d4 λ3 − d1 λ1 = 0
Φ3 −→ −(d4 λ4 − d2 λ1) + (d3 λ3 − d1 λ2) = 0
Φ4 −→  d3 λ4 + d2 λ2 = 0
One can check that the torsion module N can be generated by {u = d2 λ3 + d1 λ4, v = d3 λ3 − d1 λ2} satisfying the involutive system:

φ4 ≡ d4 u − d1 v = 0      1 2 3 4
φ3 ≡ d4 v = 0             1 2 3 4
φ2 ≡ d3 u − d2 v = 0      1 2 3 •
φ1 ≡ d3 v = 0             1 2 3 •

with the two CC:

ψ2 ≡ d4 φ2 − d3 φ4 + d2 φ3 − d1 φ1 = 0     1 2 3 4
ψ1 ≡ d4 φ1 − d3 φ3 = 0                     1 2 3 4
Accordingly, we have the following strict free resolution of N :
0 −→ D2 −→ D4 −→ D2 −→ N −→ 0
with deleted complex:
0 −→ D2 −→ D4 −→ D2 −→ 0
Applying homD (•, D), we get the desired resolution of L, namely:
0 ←− L ←− D2 ←− D4 ←− D2 ←− 0
Dualizing ψ by z, we finally discover that L is defined by the involutive system:

−φ1 −→ d4 z1 − d1 z2 = 0      1 2 3 4
−φ2 −→ d4 z2 = 0              1 2 3 4
−φ3 −→ d3 z1 − d2 z2 = 0      1 2 3 •
 φ4 −→ d3 z2 = 0              1 2 3 •

and is therefore 2-pure with pd(L) ≤ 2 and a strict inclusion M ⊂ L defined by y = z1.
REMARK 4.3: In this example, we discover that, if L were also r-pure, we should therefore have an embedding 0 −→ L −→ extr(extr(L)) = extr(N) and thus an isomorphism extr(N) = L leading to an isomorphism Zr ≃ homD(Fr, D) and to Fr+1 = 0, as can be checked on this example with r = 2. It has been a challenge for the author for many months to find the following counter-example showing that sometimes L may not even be a torsion module.
EXAMPLE 4.4: According to the proof of the theorem, N = extr(M) does not depend on the resolution of M used while L does indeed depend on the resolution of N used. Coming back to the system studied in Example 2.8 and Remark 2.13 with r = 2, we may use the shortest finite free resolution of M already presented, namely 0 −→ D −→ D2 −→ D −→ M −→ 0. Therefore, taking the adjoint of the only CC found, we may define N by the system:

 v −→ d33 θ = 0
−u −→ d13 θ + d2 θ = 0

and obtain the corresponding involutive system:

φ4 ≡ d33 θ = 0             1 2 3
φ3 ≡ d23 θ = 0             1 2 •
φ2 ≡ d22 θ = 0             1 2 •
φ1 ≡ d13 θ + d2 θ = 0      1 • •
We obtain the first order involutive system of CC:

ψ4 ≡ d3 φ3 − d2 φ4 = 0            1 2 3
ψ3 ≡ d3 φ2 − d2 φ3 = 0            1 2 3
ψ2 ≡ d3 φ1 − d1 φ4 − φ3 = 0       1 2 3
ψ1 ≡ d2 φ1 − d1 φ3 − φ2 = 0       1 2 •

with the only CC:

ω ≡ d3 ψ1 − d2 ψ2 + d1 ψ4 + ψ3 = 0
We may therefore introduce in reverse order the corresponding adjoint operators of the ones involved in the Janet sequence we have just constructed:

ψ4 −→ −d1 λ = ν4
ψ3 −→     λ = ν3
ψ2 −→  d2 λ = ν2
ψ1 −→ −d3 λ = ν1

φ4 −→ µ4 ≡ d2 ν4 + d1 ν2 = 0
φ3 −→ µ3 ≡ −(d3 ν4 − d1 ν1) + (d2 ν3 − ν2) = 0
φ2 −→ µ2 ≡ −d3 ν3 − ν1 = 0
φ1 −→ µ1 ≡ −d3 ν2 − d2 ν1 = 0
This last operator is defining L but is not involutive. We have the two torsion elements:
µ5 ≡ d2 ν 3 − ν 2 ,
µ6 ≡ d1 ν 3 + ν 4
which are generating ext2 (N ) and are easily seen to satisfy the involutive system:
d3 µ6 − µ5 = 0,   d3 µ5 = 0,   d2 µ6 − d1 µ5 = 0,   d2 µ5 = 0
because d2 µ5 ≡ d2 µ3 + d3 µ4 + d1 µ1 = 0. Finally, using the first equation, we may eliminate µ5
and identify µ6 with y because we have indeed d33 µ6 = 0, d13 µ6 − d2 µ6 = 0 in order to obtain the
strict inclusion M ⊂ L. Equivalently, we may also eliminate ν 1 and ν 2 respectively from µ2 and
µ3 in order to obtain:
µ4 −→ d13(d1 ν3 + ν4) − d2(d1 ν3 + ν4) = 0,    µ1 −→ d33(d1 ν3 + ν4) = 0
but we may notice that L is not 2-pure and thus not even a torsion module because ν3 (similarly ν4) is not by itself a torsion element of L. Such a situation is well known in control theory with the SISO (single input u, single output y) system ẏ − u̇ = 0 because u (similarly y) is not by itself a torsion element but z = y − u is a torsion element because ż = 0 (see pages 9 and 10 of the introduction in [17] for more details on such a comment).
PROBLEM: Is it possible to find an analogue of the previous theorem or of the case r = 0, where L should also be r-pure with a free resolution having exactly r morphisms?
5) ABSOLUTE AND RELATIVE LOCALIZATIONS :
Surprisingly, the positive answer to such a problem has been given by Macaulay in ([11]) for differential modules defined by systems with constant coefficients and only one unknown. Our purpose in this section is to generalize this result to arbitrary differential modules defined by systems of PD equations with coefficients in a differential field.
Now we hope that, after reading the previous section, the reader is convinced that the use of
extension modules is a quite important though striking tool for studying linear multidimensional
systems. Of course, as for any new language, it is necessary to apply it on many explicit examples
before being familiar with it. However, it is evident that it should be even more important to have
a direct approach allowing to exhibit the purity filtration and, in particular, to recognize whether
a differential module is pure or not. The purpose of this section is to combine the module approach
with the system approach, while taking into account the specific properties of the Spencer form in
a way rather similar to the use of the Kalman form of a control system when testing controllability, namely to check that an ordinary differential module is 0-pure. For this, we shall divide the
procedure into a few successive constructive steps that will be illustrated on explicit examples.
• STEP 1: Whenever a system Rq ⊂ Jq(E) is given, there is no way to obtain information on the corresponding module without bringing this system to an involutive or at least formally integrable system by means of prolongations and projections as in the Example 2.8 of Macaulay where only the projection R2(2) ⊂ R2 of R4 to R2 is involutive. Of course, a homogeneous system with constant coefficients is automatically formally integrable and one only needs to use a finite number of prolongations in order to obtain an involutive symbol, though it is known that 2-acyclicity is sufficient to obtain first order generating CC ([17]). However, it is essential to notice that it is only with an involutive system that we are sure that the CC system is first order, together with the following ones in the Janet sequence.
EXAMPLE 5.1: With K = Q, m = 1, n = 3, q = 2, the homogeneous second order systems
y33 = 0, y23 − y11 = 0, y22 = 0 or y33 − y11 = 0, y23 = 0, y22 − y11 = 0 both have a 2-acyclic symbol
g3 of dimension 1 at order 3 (exercise) and a trivially involutive symbol g4 = 0 at order 4, such a
result leading to only one CC of order 2 with cd(M ) = 3 in both cases. We let the reader treat the
system y3 = 0, y12 = 0 similarly and conclude (Hint: (χ3 , χ1 χ2 ) = (χ3 , χ1 ) ∩ (χ3 , χ2 ) is unmixed).
It is however not evident that the homogeneous system y11 = 0, y12 = 0, y13 = 0, y23 = 0 of Example 3.33 is involutive.
Finally, according to sections 2 and 3, this first step provides the characters α_q^1 ≥ ... ≥ α_q^n ≥ 0 and the smallest non-zero character α = α_q^{n−r} ≠ 0 providing cd(M) = r, a result leading at once to t_r(M) ⊂ M with a strict inclusion while t_{r−1}(M) = ... = t_0(M) = t(M) = M. Of course, if α = α_q^n ≠ 0, then M cannot be a torsion module and t(M) ⊂ M with a strict inclusion. The following example proves nevertheless that it is much more delicate to study systems with variable coefficients.
EXAMPLE 5.2: With K = Q(x2), n = 3, m = 1, q = 1, let us consider the differential module M defined by the trivially involutive system y3 − x2 y1 = 0. We have cd(M) = 1 but we can only say that cd(Dz) ≥ 1, ∀z ∈ M. If we set z = y2, proceeding as in Remark 2.13, we get the involutive system:

y3 = x2 z3 − (x2)2 z1      1 2 3
y2 = z                     1 2 •
y1 = z3 − x2 z1            1 • •
The differential submodule Dz ⊂ M is defined by the second order involutive system:

z33 − 2 x2 z13 + (x2)2 z11 = 0      1 2 3
z23 − x2 z12 − 2 z1 = 0             1 2 •

and we get cd(Dz) = 1 exactly. However, even on such a very elementary example, it is not
evident that t0 (M ) = t(M ) = M is 1-pure. We also understand that the decoupling system for
any autonomous element in engineering sciences, like in magnetohydrodynamics, cannot be studied
without these new techniques if we want intrinsic results. Finally, if we denote by I the left ideal of
D = Dy generated by y3 −x2 y1 , we notice the relation ann(G) = (χ3 −x2 χ1 ) = gr(I) = rad(gr(I)).
However, we have ann(gr(Dz)) = ((χ3 − x2 χ1 )2 , χ2 (χ3 − x2 χ1 )) with radical equal to the prime
ideal (χ3 − x2 χ1) as before. Hence, in this example, the strict inclusion Dz ⊂ M does not imply gr(Dz) ⊂ gr(M) = G because otherwise we should get ann(G) ⊆ ann(gr(Dz)), and this is the reason for which only the radical must be considered as it does not depend on the filtration.
• STEP 2: Once we have obtained cd(M) = r, in order to check that M is r-pure, it remains to prove that t_r(M) = 0 as we already know that t_{r−1}(M) = M. For this, the second step will be to use the specific properties of the Spencer form Rq+1 ⊂ J1(Rq). More generally, it is possible to use any equivalent involutive first order system of the form R1 ⊂ J1(E) with no zero order equations, that is with an induced epimorphism R1 −→ E −→ 0, and such that the corresponding differential module is isomorphic and thus identified to the initial module as in Remark 2.13. We now have the characters α_1^1 ≥ ... ≥ α_1^n ≥ 0 and the smallest non-zero character is still α = α_1^{n−r} ≠ 0, providing of course the same codimension cd(M) = r as in the first step. Accordingly, the number n − r of non-zero characters and the number r of full classes are the same as in the previous step. However, it must be noticed that the filtration may be different and the following example explains once more why only the radical of the characteristic ideal must be used.
EXAMPLE 5.3: K = Q, n = 2, m = 1, q = 2. For the involutive system y22 = 0, y12 = 0, the characteristic ideal is a = ((χ2)2, χ1χ2) =⇒ rad(a) = (χ2) =⇒ r = 1. Setting z1 = y, z2 = y1, z3 = y2, we get the equivalent first order system d2z3 = 0, d2z2 = 0, d2z1 − z3 = 0, d1z1 − z2 = 0, d1z3 = 0 and the polynomial ideal generated by the 3 × 3 minors of the characteristic matrix is a = ((χ2)3, (χ2)2χ1, χ2(χ1)2). Hence the characteristic ideal is rad(a) = (χ2) and r = 1 too.
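The computation of the m × m minors is easily automated; the sketch below (ours, assuming sympy) builds the characteristic matrix of the first order system just displayed and lists its nonzero 3 × 3 minors, recovering the ideal ((χ2)3, (χ2)2χ1, χ2(χ1)2) up to sign:

    import itertools
    import sympy as sp

    chi1, chi2 = sp.symbols('chi1 chi2')
    # rows: d2z3, d2z2, d2z1 - z3, d1z1 - z2, d1z3 (zero order terms do not
    # enter the symbol); columns: z1, z2, z3
    sigma = sp.Matrix([[0,    0,    chi2],
                       [0,    chi2, 0   ],
                       [chi2, 0,    0   ],
                       [chi1, 0,    0   ],
                       [0,    0,    chi1]])
    minors = set()
    for rows in itertools.combinations(range(5), 3):
        det = sigma[list(rows), :].det()
        if det != 0:
            minors.add(sp.factor(det))
    print(minors)   # {chi2**3, chi1*chi2**2, chi1**2*chi2} up to sign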
EXAMPLE 5.4: For Example 2.8 we may set z1 = y, z2 = y1, z3 = y2, z4 = y3 and obtain the first order involutive system:

d3z4 = 0,  d3z3 = 0,  d3z2 − z3 = 0,  d3z1 − z4 = 0      1 2 3
d2z4 = 0,  d2z3 = 0,  d2z2 − d1z3 = 0,  d2z1 − z3 = 0    1 2 •
d1z4 − z3 = 0,  d1z1 − z2 = 0                            1 • •

with no zero order equation. We have α_1^3 = 4 − 4 = 0, α_1^2 = 4 − 4 = 0, α_1^1 = 4 − 2 = 2 =⇒ r = 2 too. We let the reader treat Example 4.2 similarly and obtain α_1^4 = 0, α_1^3 = 0, α_1^2 = 2 =⇒ r = 2.
It is at this moment that we discover that such systems have particular properties not held by
other systems, apart from the fact that a canonical sequence may be constructed exactly like the
Spencer sequence or the first order part of the Janet sequence.
Shrinking the board of multiplicative variables, we obtain from the definition of involutiveness:
PROPOSITION 5.5: For an involutive first order system with no zero order equations, solved with respect to the principal (pri) first order jets expressed by means of the parametric (par) other first order jets, the system obtained by keeping only the PD equations of class 1, ..., class i contains d1, ..., di only and is still involutive, ∀i = 1, ..., n, after adopting the ordering di+1, ..., dn, d1, ..., di.
EXAMPLE 5.6: Looking at Example 2.9, we notice that the systems:

y3 − x4 y1 = 0      4 1 2 3
y2 − y1 = 0         4 1 2 •

and

y2 − y1 = 0         3 4 1 2

are both involutive. Also, looking at Example 5.4, we notice that the system:

d2z4 = 0,  d2z3 = 0,  d2z2 − d1z3 = 0,  d2z1 − z3 = 0     3 1 2
d1z4 − z3 = 0,  d1z1 − z2 = 0                             3 1 •

is still involutive and we let the reader treat Example 4.2 similarly.
We shall denote the corresponding differential module by Mn−i and we notice that M = M0
is defined by more equations than Mn−i . Accordingly, we have an epimorphism (specialization)
Mn−i −→ M −→ 0 and similarly epimorphisms Mn−i −→ Mn−i−1 −→ 0. Finally, as cd(M ) = r,
we notice that the classes n − r + 1, ..., n are full and we find therefore (χn )m , ..., (χn−r+1 )m among
the m × m minors with lower powers of χ1 , ..., χn−r for the other minors because the numbers of
equations of the lower classes are decreasing and thus strictly smaller than m. The characteristic
ideal is thus (χn , ..., χn−r+1 ) if we set χ1 , ..., χn−r to zero, in a coherent way. Finally, choosing
i = n − r, we get an epimorphism Mr −→ M −→ 0. The background will always indicate clearly
the meaning of the lower index and cannot be confused with the filtration index of M .
• STEP 3: We are now in position to study Mr with more details as a system in n − r variables ([11], §77, p 86). Its defining system has β_1^{n−r} = β = m − α equations of strict class n − r, a smaller number of equations of class n − r − 1, ..., and eventually an even smaller number of equations of class 1. Studying this system for itself, we may look for t(Mr) exactly following the known techniques working for any differential module, in particular double duality as described in section 4. However, if one could find any (relative) torsion element z ∈ Mr, we could project it to an element z ∈ M and we have N = Dz ⊆ M where we do not put a residue bar on the new z for simplicity. Introducing the respective annihilator ideals a and b of M and N, we should have a ⊆ rad(a) ⊆ rad(b) as it is the only result not depending on the filtration of the modules. However, we know that z must be killed by at least one operator involving only d1, ..., dn−r, in addition to the operators involving separately (dn)^m + ..., ..., (dn−r+1)^m + ..., and we should obtain t_{r−1}(M) = M but t_r(M) ≠ 0, that is M should not be pure. Hence M is r-pure if and only if Mr is torsion-free as a differential module over K[d1, ..., dn−r]. In such a case, the system defining Mr can be parametrized by α arbitrary unknowns among {y1, ..., ym} by using a so-called minimum parametrization in the sense of ([24]). In actual practice, as shown on all the examples, one can use a relative localization with respect to (d1, ..., dn−r) only, by keeping (dn−r+1, ..., dn) untouched and replacing (d1, ..., dn−r) by (χ1, ..., χn−r) considered as (constant) "parameters" in the language of Macaulay ([11], §43, p 45 and §77, p 86 with r instead of n − r and a different ordering). Such a method therefore provides a quite useful and simple test for checking purity by linking it to involutivity. An important intermediate result is provided by the next proposition.
PROPOSITION 5.7: The partial localization "kills" the equations of class 1 up to class n − r − 1 (care) and finally only depends on the equations of strict class n − r.

Proof: Instead of using the column n − r in the multiplicative board, we provide the proof when r = 0 by using the column n. Working out as usual the first order CC, we only look at the p < m CC of class n for the equations Φ1, ..., Φp of class 1 up to class n − 1 if we order the Φ starting from the lower class involved. These p CC will be of a very specific form with a square p × p operator matrix for (Φ1, ..., Φp) with diagonal operators of the form dn + ... where the dots denote operators involving only d1, ..., dn−1 in a quasi linear way with coefficients in K, the remainder of the matrix only depending on d1, ..., dn−1 for the (Φp+1, ..., Φm) of strict class n. Therefore, if we have K = k = cst(K), the absolute localization is simply done by setting di −→ χi and the determinant of the square matrix is equal to (χn)^p + ... where the dots denote a polynomial of degree < p in χn with coefficients involving only χ1, ..., χn−1. It follows that each Φ1, ..., Φp can be expressed as a linear combination over k(χ1, ..., χn) of the Φ of strict class n. The result is similar for the variable coefficient case by using the graded machinery but needs much more work. In any case, setting the Φ of strict class n equal to zero, we should obtain a zero graded module for the Φ1, ..., Φp which must be equal to zero too. It must finally be noticed that the first order CC used are in reduced Spencer form as the dn of the Φ of strict class n do not appear in the CC we have used and these Φ do not appear in the other CC too.
Q.E.D.
EXAMPLE 5.8: Coming back to Example 2.11, we notice that the only CC is an identity in (u, v, w) and we may forget about u in order to obtain the new system for (v, w):

φ3 ≡ d4 v − x3 d1 v − v = 0                1 2 3 4
φ2 ≡ d4 w − x3 d1 w − w = 0                1 2 3 4
φ1 ≡ d3 w − x4 d1 w − d2 v + d1 v = 0      1 2 3 •
defining a new module M with cd(M ) = 1 as the class 4 is now full. The same CC as before can
be written again in the form:
(d4 − x3 d1 − 1)φ1 = (d3 − x4 d1 )φ2 − (d2 − d1 )φ3
that we can localize in (d1 , d2 , d3 ) with:
φ1 = 0 =⇒ (d3 − x4 d1 )w = (d2 − d1 )v =⇒ v = (d3 − x4 d1 )y, w = (d2 − d1 )y
We let the reader check that we have indeed:

φ3 ≡ (d4 − x3 d1 − 1)(d3 − x4 d1)y = (d3 − x4 d1)u = 0 =⇒ u = 0
φ2 ≡ (d4 − x3 d1 − 1)(d2 − d1)y = (d2 − d1)u = 0 =⇒ u = 0
We finally obtain for M a relative parametrization with the only constraint u ≡ (d4 −x3 d1 −1)y = 0
in a coherent way with Example 2.11.
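The key point making this computation legitimate is that the class 4 operator d4 − x3 d1 − 1 commutes with both d3 − x4 d1 and d2 − d1; the sketch below (ours, assuming sympy) checks these two commutation relations on an arbitrary function:

    import sympy as sp

    x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
    y = sp.Function('y')(x1, x2, x3, x4)

    A = lambda f: sp.diff(f, x4) - x3*sp.diff(f, x1) - f   # d4 - x3 d1 - 1
    B = lambda f: sp.diff(f, x3) - x4*sp.diff(f, x1)       # d3 - x4 d1
    C = lambda f: sp.diff(f, x2) - sp.diff(f, x1)          # d2 - d1

    print(sp.expand(A(B(y)) - B(A(y))))   # 0
    print(sp.expand(A(C(y)) - C(A(y))))   # 0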
• STEP 4: In this step we come back to the commutative situation with a field k of constants and generalize the results of Macaulay in the following theorem, which recapitulates the results obtained so far.
THEOREM 5.9: One has the commutative and exact diagram:

0 −→ t(Mr) −→ Mr −→ k(χ1, ..., χn−r) ⊗ Mr
         ↓        ↓                     ↓
0 −→ tr(M) −→ M  −→ k(χ1, ..., χn−r) ⊗ M
                  ↓                     ↓
                  0                     0
Proof: For simplifying the notations in this diagram of modules over D, we have identified Mr as a module over k[d1, ..., dn−r] with Mr as a module over k[dn−r+1, ..., dn, d1, ..., dn−r] = k[d1, ..., dn] = D while the localization of M just tells that the coefficients are now in the field k(χ1, ..., χn−r), exactly following Macaulay. Moreover, the central column is exact according to the definition of Mr and the right column is exact because localization preserves the exactness of a sequence.
For example, with k = Q, n = 2, m = 1, q = 2, r = 1, the differential module M defined by the involutive system y22 = 0, y12 = 0 may also be defined by the first order involutive system z1 = y, z2 = y1, z3 = y2 =⇒ d2z3 = 0, d2z2 = 0, d2z1 − z3 = 0, d1z3 = 0, d1z1 − z2 = 0. Then M1 is defined by the first order system d1z3 = 0, d1z1 − z2 = 0 with torsion module t(M1) generated by z3 and the tensor product of M by k(χ1) is defined by d2z3 = 0, d2z2 = 0, d2z1 = 0, d1z3 = 0, z3 = 0, χ1z1 − z2 = 0 after division by χ1, in a coherent way with Example 1.1.
Q.E.D.
Finally, when Mr is torsion-free as a differential module over k[d1, ..., dn−r], then tr(M) = 0 and we get the following generalization of a result provided by Macaulay ([11], §41, p 43):
COROLLARY 5.10: The differential module M is r-pure if and only if cd(M ) = r and there is
a monomorphism 0 −→ M −→ k(χ1 , ..., χn−r ) ⊗ M .
• STEP 5: The final idea is to embed Mr into a free module over K[d1, ..., dn−r] in order to parametrize the corresponding system and substitute into the equations of class n − r + 1, ..., n. However, if we look at Example 1.2, we should find after the substitution Φ1 ≡ z13 = 0, Φ2 ≡ z23 = 0 with one CC d2Φ1 − d1Φ2 = 0, that is on one side a module L which is not 1-pure and, on the other side, a module L having a finite free resolution with 2 operators. However, we forgot that M, being pure, may be identified with its embedding into its localization. Hence, we get in fact χ1z3 = 0, χ2z3 = 0 and thus only z3 = 0, which provides a convenient parametrizing module L.
Our purpose is to explain and illustrate this procedure for finding such an L in the general situation. Again, the main idea will be provided by this example. Indeed, we obtain the only CC Ψ ≡ d3Φ3 − d2Φ1 + d1Φ2 = 0. Substituting the parametrization, we get of course Φ3 = 0 ⇐⇒ χ1y2 = χ2y1, that is, among the two unknowns y1, y2 we are left with only one, say y1 and, similarly, among the two equations Φ1, Φ2 we are left with only one, say Φ1, because χ1Φ2 = χ2Φ1 from the CC, which is of course compatible with the localization, and we choose z3 = 0 as χ1z3 = 0 =⇒ z3 = 0.
The general situation may be treated similarly. Indeed, according to the previous step, we are only concerned with the equations of class n − r + 1, ..., class n while the localization has only to do with the β equations of strict class n − r (care), allowing to express β unknowns as linear combinations of the α remaining unknowns with coefficients in k(χ1, ..., χn−r). To each such equation are associated exactly r dots and each dot of index n − r + i provides a reduction of the respective equations of class n − r + i for i = 1, ..., r. It follows that we are left with α equations of each such class. When we "delocalize", replacing χi by di, we have to take into account the need to clear the denominators and may find a few "simplifications" as in the example just considered. Finally, the maximum number r − 1 (care again) of dots found for one equation is obtained for the equations of strict class n − r + 1 = n − (r − 1) and we have thus exhibited a system defining a module L which is r-pure and admits a free resolution with exactly (r − 1) + 1 = r operators. In any case, the reader must not forget that the localization of a module is useful only if we already know that this module is torsion-free by means of the double-duality formula t(M) = ext1(N) given in the introduction.
EXAMPLE 5.11: Let M be defined by the involutive system:

d3y4 + d1y2 − d1y1 = 0             1 2 3
d3y3 − d2y4 + d1y2 − d1y1 = 0      1 2 3
d3y2 + d1y2 = 0                    1 2 3
d3y1 − d1y4 + d1y2 = 0             1 2 3
d2y2 − d1y4 + d1y1 = 0             1 2 •
d2y1 − d1y3 + d1y1 = 0             1 2 •
with characters α_1^3 = 0, α_1^2 = α = 2, α_1^1 = 4. It follows that cd(M) = 1 as only the class 3 is full and we obtain the following relative localization showing that M is 1-pure:

y1 = χ1 y,   y2 = χ1 z,   y3 = (χ1 + χ2)y,   y4 = χ1 y + χ2 z
Substituting in the four equations of class 3, we only obtain the two equations:

d3 z + χ1 z = 0
d3 y + (χ1 − χ2)z − χ1 y = 0

after a division by χ1, χ2 and χ1 + χ2. The parametrizing module L is thus defined by the two equations:

d3 z + d1 z = 0
d3 y + (d1 − d2)z − d1 y = 0

which are differentially independent and we have the relative parametrization:

y1 = d1 y,   y2 = d1 z,   y3 = (d1 + d2)y,   y4 = d1 y + d2 z
Finally, M ⊂ L =⇒ ass(M ) ⊂ ass(L) =⇒ ass(M ) = {(d3 + d1 ), (d3 − d1 )} =⇒ annD (M ) =
(d3 + d1 ) ∩ (d3 − d1 ), a striking result showing that M can be embedded into the direct sum of two
primary differential modules according to Remark 3.32 (See [18] for more details).
EXAMPLE 5.12: With k = Q, m = 1, n = 3, let us consider the polynomial map χ1 = u^5 , χ2 = u^3 , χ3 = u^4 as in ([11], p 53). We have the exact sequence 0 −→ p −→ k[χ] −→ k[u] ⊂ k(u) showing that p = ((χ2 )^2 χ3 − (χ1 )^2 , (χ2 )^3 − χ1 χ3 , (χ3 )^2 − χ1 χ2 ) is a prime ideal ([18], p 126). The
corresponding prime differential module M is defined by the involutive system:
y333 − y123 = 0        1 2 3
y233 − y122 = 0        1 2 •
y223 − y11 = 0         1 2 •
y222 − y13 = 0         1 2 •
y133 − y112 = 0        1 • •
y33 − y12 = 0          • • •
and is 2-pure. The localized system is finite type over k(χ1 )[d2 , d3 ] with par = {y, y2 , y3 , y22 , y23 }.
One can prove that p^2 is not a primary ideal even though rad(p^2 ) = p.
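As a quick check of this example (added here for illustration, not part of the original text), the following Python sketch verifies that the three generators of p vanish identically under the substitution χ1 = u^5 , χ2 = u^3 , χ3 = u^4 ; sympy is assumed, and c1, c2, c3 stand for χ1, χ2, χ3:

# Sketch (sympy assumed; c1..c3 stand for chi1..chi3): check that the
# generators of p in Example 5.12 vanish under the substitution.
from sympy import symbols, expand

u, c1, c2, c3 = symbols('u c1 c2 c3')
generators = [c2**2*c3 - c1**2, c2**3 - c1*c3, c3**2 - c1*c2]
substitution = {c1: u**5, c2: u**3, c3: u**4}
for g in generators:
    assert expand(g.subs(substitution)) == 0   # each generator maps to 0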
EXAMPLE 5.13: Similarly but now with k = Q, m = 1, n = 4, let us consider the polynomial map χ1 = uv, χ2 = u, χ3 = uv^3 , χ4 = uv^4 as in ([11], p 47). We have the exact sequence 0 −→ p −→ k[χ] −→ k[u, v] ⊂ k(u, v) showing that p = (χ2 χ4 − χ1 χ3 , (χ1 )^3 − (χ2 )^2 χ3 , (χ3 )^3 − χ1 (χ4 )^2 , (χ1 )^2 χ4 − χ2 (χ3 )^2 ) is a prime ideal. It is not evident at all that the corresponding prime differential module M can be defined by the homogeneous involutive system (exercise):
y444 − y224 − y134 − y123 = 0      1 2 3 4
y344 − y111 = 0                    1 2 3 •
y334 − y114 − y112 = 0             1 2 3 •
y333 − y124 − y122 − y113 = 0      1 2 3 •
y244 + y224 − y123 = 0             1 2 • •
y234 − y133 + y111 = 0             1 2 • •
y144 + y124 − y113 = 0             1 • • •
y44 + y24 − y13 = 0                • • • •
and is thus also 2-pure. The localized system is finite type over k(χ1 , χ2 )[d3 , d4 ] with par = {y, y3 , y4 , y33 } and we have for example χ2 y34 − χ1 y33 + (χ1 )^3 y = 0 in a coherent way with the comments of Macaulay in ([11], §78, p 88, formula (A) and §88, 89, p 98).
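The same kind of check works for this example; the sketch below (sympy assumed, added for illustration, with c1..c4 standing for χ1..χ4) verifies the four generators of p under χ1 = uv, χ2 = u, χ3 = uv^3 , χ4 = uv^4 :

# Sketch (sympy assumed): the analogous check for Example 5.13.
from sympy import symbols, expand

u, v, c1, c2, c3, c4 = symbols('u v c1 c2 c3 c4')
generators = [c2*c4 - c1*c3,
              c1**3 - c2**2*c3,
              c3**3 - c1*c4**2,
              c1**2*c4 - c2*c3**2]
substitution = {c1: u*v, c2: u, c3: u*v**3, c4: u*v**4}
for g in generators:
    assert expand(g.subs(substitution)) == 0   # each generator maps to 0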
6) CONCLUSION :
In 1916, F.S. Macaulay discovered a new localization technique for studying unmixed polynomial ideals. We have been able to generalize this procedure for studying pure differential modules,
obtaining in particular a kind of relative parametrization generalizing the absolute parametrization already known for torsion-free modules and equivalent to controllability in classical control
theory. In the language of multidimensional systems theory, which is more intuitive, instead of
using arbitrary potential-like functions for the parametrization, the idea is now to use potential-like functions which must satisfy a kind of minimum differential constraint limiting, in some sense,
the number of independent variables appearing in these functions, in a way similar to the situation met in the Cartan–Kähler theorem of analysis. For such a purpose, we have exhibited new
links between purity and involutivity, providing also a new insight into the primary decomposition
of modules and ideals by means of tools from the formal theory of linear multidimensional systems.
7) BIBLIOGRAPHY :
[1] J.E. BJORK: Analytic D-modules and Applications, Kluwer, 1993.
[2] N. BOURBAKI: Algèbre homologique, chap. X, Masson, Paris, 1980.
[3] N. BOURBAKI: Algèbre commutative, chap. I-IV, Masson, Paris, 1985.
[4] B. BUCHBERGER: Gröbner Bases: An Algorithmic Method in Polynomial Ideal Theory, in:
Recent Trends in Multidimensional System Theory, N.K. Bose (Ed.), Reidel, Dordrecht, 1985, 184–232.
[5] E. COSSERAT, F. COSSERAT: Théorie des Corps Déformables, Hermann, Paris, 1909.
[6] W. GRÖBNER: Über die Algebraischen Eigenschaften der Integrale von Linearen Differentialgleichungen mit Konstanten Koeffizienten, Monatsh. der Math., 47, 1939, 247-284.
[7] M. JANET: Sur les Systèmes aux dérivées partielles, Journal de Math., 8, 3, 1920, 65-151.
[8] R.E. KALMAN, Y.C. HO, K.S. NARENDRA: Controllability of Linear Dynamical Systems, Contrib. Diff. Equations, 1, 2, 1963, 189–213.
[9] M. KASHIWARA: Algebraic Study of Systems of Partial Differential Equations, Mémoires de
la Société Mathématique de France 63, 1995, (Transl. from Japanese of his 1970 Master’s Thesis).
[10] E. KUNZ: Introduction to Commutative Algebra and Algebraic Geometry, Birkhäuser, 1985.
[11] F.S. MACAULAY: The Algebraic Theory of Modular Systems, Cambridge Tracts, vol. 19,
Cambridge University Press, London, 1916. Stechert-Hafner Service Agency, New-York, 1964.
[12] P. MAISONOBE, C. SABBAH: D-Modules Cohérents et Holonomes, Travaux en Cours, 45,
Hermann, Paris, 1993.
[13] D.G. NORTHCOTT: An Introduction to Homological Algebra, Cambridge University Press,
Cambridge, 1966.
[14] D.G. NORTHCOTT: Lessons on Rings, Modules and Multiplicities, Cambridge University
Press, Cambridge, 1968.
[15] V.P. PALAMODOV: Linear Differential Operators with Constant Coefficients, Grundlehren
der Mathematischen Wissenschaften 168, Springer, 1970.
[16] J.-F. POMMARET: Dualité Différentielle et Applications, C. R. Acad. Sci. Paris, 320, Série
I, 1995, 1225–1230.
[17] J.-F. POMMARET: Partial Differential Equations and Group Theory: New Perspectives for
Applications, Kluwer, 1994.
[18] J.-F. POMMARET: Partial Differential Control Theory, Kluwer, 2001.
[19] J.-F. POMMARET: Algebraic Analysis of Control Systems Defined by Partial Differential
Equations, in Advanced Topics in Control Systems Theory, Lecture Notes in Control and Information Sciences LNCIS 311, Chapter 5, Springer, 2005, 155-223.
[20] J.-F. POMMARET: Parametrization of Cosserat Equations, Acta Mechanica, 215, 2010, 43–55.
[21] J.-F. POMMARET: Macaulay Inverse Systems Revisited, Journal of Symbolic Computation,
46, 2011, 1049-1069.
[22] J.-F. POMMARET: Spencer Operator and Applications: From Continuum Mechanics to
Mathematical Physics, in ”Continuum Mechanics-Progress in Fundamentals and Engineering Applications”, Dr. Yong Gan (Ed.), ISBN: 978-953-51-0447–6, InTech, 2012, Available from:
http://www.intechopen.com/books/continuum-mechanics-progress-in-fundamentals-and-engineerin-applications/spen
[23] J.-F. POMMARET: A Pedestrian Approach to Cosserat/Maxwell/Weyl Theory, Preprint 2012
http://hal.archives-ouvertes.fr/hal-00740314
http://fr.arXiv.org/abs/1210.2675
[24] J.-F. POMMARET, A. QUADRAT: Localization and parametrization of linear multidimensional control systems, Systems & Control Letters 37 (1999) 247-260.
[25] J.-F. POMMARET, A. QUADRAT: Algebraic Analysis of Linear Multidimensional Control
Systems, IMA Journal of Mathematical Control and Informations, 16, 1999, 275-297.
[26] A. QUADRAT: Analyse Algébrique des Systèmes de Contrôle Linéaires Multidimensionnels,
Thèse de Docteur de l’Ecole Nationale des Ponts et Chaussées, 1999
(http://www-sop.inria.fr/cafe/Alban.Quadrat/index.html).
[27] A. QUADRAT: Une Introduction à l’Analyse Algébrique Constructive et à ses Applications,
INRIA Research Report 7354, AT-SOP Project, July 2010. Les Cours du CIRM, 1 no. 2: Journées
Nationales de Calcul Formel (2010), p. 281–471 (doi:10.5802/ccirm.11).
[28] J.J. ROTMAN: An Introduction to Homological Algebra, Pure and Applied Mathematics,
Academic Press, 1979.
[29] W. M. SEILER: Involution: The Formal Theory of Differential Equations and its Applications
to Computer Algebra, Springer, 2009, 660 pp. (See also doi:10.3842/SIGMA.2009.092 for a recent
presentation of involution, in particular sections 3 (p 6 and reference [11], [22]) and 4).
[30] D.C. SPENCER: Overdetermined Systems of Partial Differential Equations, Bull. Amer.
Math. Soc., 75, 1965, 1-114.
[31] O. ZARISKI, P. SAMUEL: Commutative Algebra, Van Nostrand, 1958.
[32] E. ZERZ: Topics in Multidimensional Linear Systems Theory, Lecture Notes in Control and
Information Sciences 256, Springer, 2000.
| 0 |
Data Augmentation of Railway Images
for Track Inspection
S. Ritika, Dattaraj Rao
General Electric
ABSTRACT
Regular maintenance of all assets is pivotal for the proper functioning of a railway. Manual maintenance
can be very cumbersome and leaves room for errors. Track anomalies like vegetation overgrowth and sun kinks
affect the track construct and result in unequal load transfer and imbalanced lateral forces on the tracks, which
cause further deterioration of the tracks and can ultimately result in derailment of the locomotive. Hence there
is a need to continuously monitor rail track health.
Track anomalies are rare with the skew as high as one anomaly in millions of good images. We propose a
method to build training data that will make our algorithms more robust and help us detect real world
track issues. The data augmentation will have a direct effect in making us detect better anomalies and
hence improve time for railroads that is spent in manual inspection.
This paper talks about a real-world use-case of detecting railway track defects from a camera mounted
on a moving locomotive and tracking their locations. The camera is engineered to withstand the
environment factors on a moving train and provide a consistent steady image at around 30 frames per
second. An image simulation pipeline of track detection, region of interest selection, augmenting image
for anomalies is implemented. Training images are simulated for sun kink and vegetation overgrowth.
Inception V3 model pretrained on Imagenet dataset is finetuned for a 2 class classification. For the case
of vegetation overgrowth, the model generalizes well on actual vegetation images, though it was trained
and validated solely on simulated images which might have different distribution than the actual
vegetation. Sun kink classifier can classify professionally simulated sun kink videos with a precision of
97.5%.
INTRODUCTION
Regular maintenance of all the assets is pivotal for proper functioning of railway. Track is a very important
asset for railway. Usually the maintenance is done manually and can be very cumbersome [1]. Also, this
leaves room for errors and misses which can cause further deterioration of locomotive as well as track
and might ultimately lead to derailment. Track is constructed in a particular way by stacking ties on layers
of ballast to ensure the load is transferred equally throughout the construct. But if there is growth of
vegetation it can reduce the efficiency of ballast in load transferring and can ultimately affect the track
[1]. During extreme climatic changes the track can deform. During summer, due to heat the tracks may
expand and if proper clearance is not provided it can get bent. This condition is termed sun kink. Also, if
locomotive isn’t designed properly, high imbalanced lateral forces on track from locomotive can be the
cause of sun kink. This is very detrimental as it causes high vibration in locomotive running on a sun kinked
track and can ultimately cause derailment. This problem has been attempted using conventional image
processing techniques in the past [2], [3], but these techniques fail to generalize to different environmental
and track conditions.
PROBLEM STATEMENT
Deep neural networks have the constraint that they require a lot of data to be trained without overfitting.
Fine tuning a model pre-trained on datasets like Imagenet, COCO, ILSVRC which have millions of labeled
images helps a lot to bring down the number of training images required. This is because the lower-level
features of a model which has been trained for classification of a particular type of image can be reused for
a completely new image dataset. Even after leveraging pre-trained weights, the minimum number of
images required might range from few hundreds to thousands depending on the complexity of the task
at hand. Hence the biggest challenge for image classification becomes availability of trainable data.
For the task of railway track health monitoring, we have a lot of good track data available. But
anomalies like vegetation overgrowth, sun kink are rare and difficult to find. These can be generated
manually using tools like paint but it can be a very cumbersome, labor-intensive process. Hence if
synthetic data can be generated for the anomalies mentioned above, it can ease the training process and
reduce the problem of overfitting.
APPROACH
The problem can be divided into two parts: finding the track and augmenting it to add anomalies in the
region of interest.
Detecting Track
Track is a very prominent feature in the image since it appears as a pair of straight converging lines. With the placement of
camera as seen in figures below, the track position doesn’t vary a lot. Hence, a region of interest (ROI) can
be demarcated where there is a high possibility of track to be present. Once the ROI is clipped, the exact
position of the track needs to be found. Since tracks are straight lines, edge detection can be used for this.
Canny edge detection [4] is found to give very good results once the thresholds are tuned properly. The image
can be filtered before edge detection to remove noise. Edge detection results in a cluster of many lines,
from which the tracks need to be extracted. Hough's line transform [5] can be used for this. Thus
the number of lines is reduced, but we still do not have the two lines corresponding to the track. The
feature that helps at this juncture is that tracks converge. Hence all the lines can be grouped depending
on whether their slope is positive or negative. The zero slope lines (mostly due to the track tie) can be
ignored in this process. This gives the coordinates of the two lines corresponding to the track. The ROI can
be specified utilizing these points.
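As a concrete illustration of this pipeline, the following minimal Python/OpenCV sketch clips an ROI, filters, runs Canny and the probabilistic Hough transform, and groups lines by slope sign. The ROI bounds and every threshold below are illustrative assumptions, not the exact values used in the experiments.

# Minimal sketch of the track-detection steps described above (OpenCV).
import cv2
import numpy as np

def detect_track_lines(frame):
    roi = frame[300:600, 200:500]                    # clipped ROI (assumed bounds)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # filter noise first
    edges = cv2.Canny(blurred, 50, 150)              # tuned thresholds
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=10)
    left, right = [], []
    if lines is None:
        return left, right
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue                                 # skip vertical lines
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.2:
            continue                                 # near-zero slope: ties
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right                               # the two rail groups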
Simulating Vegetation Overgrowth
Once the ROI is selected, vegetation needs to be simulated. Vegetation can disrupt the balance of the
ballast and can be detrimental to the track. The level of vegetation can be anything from sparse to high.
The region can be inside the tracks and over the rails of the track also. Vegetation is usually a cluster of
many small grass blobs. Hence the simulation can be generated in the same way. To generate the grass
blob, greenish pixels can be placed at random places iteratively. A more efficient way of doing this is to
draw the pixel offsets from a Gaussian distribution; normal and skewed distributions can also be experimented with. A random number
of such blobs can be placed in the ROI, and the number of blobs controls the level of vegetation.
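A sketch of the blob simulation follows (NumPy, BGR image assumed); the blob count, spread, and color ranges are illustrative assumptions.

# Sketch of the vegetation-blob simulation described above.
import numpy as np

def add_vegetation(image, x_range, y_range, n_blobs=20, blob_size=60):
    out = image.copy()
    for _ in range(n_blobs):
        cx = np.random.randint(*x_range)             # blob center in the ROI
        cy = np.random.randint(*y_range)
        # Gaussian-distributed offsets give a grass-like pixel cluster.
        px = np.clip(np.random.normal(cx, 4, blob_size).astype(int),
                     0, out.shape[1] - 1)
        py = np.clip(np.random.normal(cy, 4, blob_size).astype(int),
                     0, out.shape[0] - 1)
        green = (np.random.randint(0, 60),           # B
                 np.random.randint(120, 200),        # G
                 np.random.randint(0, 60))           # R
        out[py, px] = green
    return out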
Figure 1 Step by step process for simulating vegetation overgrowth
Figure 2 Examples of images simulated with vegetation overgrowth
Simulating Sun kink
Once the track coordinates are detected, the task can be divided into two parts: removing the track and
adding a kinked track. To remove the track, a part of ROI can be copied and shifted sideways. This will
ensure that the ties look complete. This step can also be used to generate missing rail condition. To
generate broken rail condition, two random points on the track can be connected with a dark line. To
generate the kinked track, cubic splines can be utilized. Depending on the size of kink, two random points
can be chosen on the track and a spline can be drawn utilizing these.
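A minimal sketch of the spline step is given below (SciPy assumed); the anchor-point layout and the lateral offset are illustrative, and the rail points are assumed ordered by strictly increasing row with the kink bounds strictly inside their range.

# Sketch of the sun-kink step. rail_pts is an (N, 2) array of (x, y)
# points along one detected rail, ordered by increasing y.
import numpy as np
from scipy.interpolate import CubicSpline

def kinked_rail(rail_pts, kink_start, kink_end, offset=8):
    ys = rail_pts[:, 1]
    mid = (kink_start + kink_end) // 2
    knots_y = np.array([ys.min(), kink_start, mid, kink_end, ys.max()])
    base_x = np.interp(knots_y, ys, rail_pts[:, 0])    # x of the straight rail
    knots_x = base_x + np.array([0, 0, offset, 0, 0])  # bend only the midpoint
    spline = CubicSpline(knots_y, knots_x)             # smooth kinked rail
    return np.column_stack([spline(ys), ys])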
Figure 3 Step by step process for simulating sun kink
Figure 4 Examples of images simulated for sun kinks
EXPERIMENTAL RESULTS
The experiments were done on the video feed from a front-facing camera mounted on the locomotive.
An ROI of 600×300 pixels was extracted from the image. The image was filtered using a Gaussian filter, and Canny edge
detection was applied, followed by Hough's line transform. The results of these steps are shown in
Figure 1 and Figure 3 above.
Vegetation Overgrowth
A balanced dataset of 500 images was created, comprising images with vegetation generated by the
process described above and healthy track images. The dataset was augmented by varying the brightness and
flipping the image horizontally. The Inception V3 [6] network pretrained on the Imagenet dataset was taken as the
baseline. The network was fit on the training data by finetuning the last classification layer to obtain a two-
class classifier. The performance of the model on the test data can be seen in Table 1. An important point
to note is that the validation set while training the data was solely the simulated image while the model
was tested on real vegetation images. The model was seen to give a good performance even though
validation and test data might have different distributions.
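For reference, a minimal Keras sketch of this fine-tuning setup is given below; the pooling head and optimizer settings are illustrative assumptions rather than the exact training configuration used here.

# Sketch of the fine-tuning setup (Keras/TensorFlow assumed).
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base.output)
out = Dense(2, activation='softmax')(x)      # healthy track vs anomaly
model = Model(base.input, out)

for layer in base.layers:
    layer.trainable = False                  # keep pretrained features fixed
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])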
Table 1 Test results on actual vegetation images by Inception V3 model trained solely on images simulated for vegetation
overgrowth
Actual\Predicted    Positive    Negative
Positive                  10           4
Negative                   1           5
Figure 5 Vegetation overgrowth predicted correctly by the model
Sun kink
A balanced dataset of 500 images was created for a 2-class classification using Inception V3 network
similar to the approach taken for vegetation overgrowth. The network was finetuned on the dataset with
augmentation of flipping and random brightness adjustments. The trained model was tested on track
video professionally simulated for sun kink.
Table 2 Results on professionally simulated videos of the Inception V3 model trained solely on simulated sun kink images
Actual\Predicted    Positive    Negative
Positive                   6           0
Negative                   1           5
Figure 6 Sun kink detected correctly on professionally simulated video
CONCLUSION
For the domain of rail data, training image generation for anomalies like vegetation and sun kink was
attempted using conventional image processing techniques. It was observed that using pre-trained
models and data augmentation brings down the number of training images required. The image
simulation pipeline used was: track detection, region of interest selection, augmenting image for
anomalies. A balanced dataset of 500 images containing healthy track as well as anomaly was
generated. Two separate models were trained for vegetation overgrowth and sun kink. Inception V3
model pretrained on Imagenet dataset was finetuned for 2 class classification. For the case of
vegetation overgrowth, the model generalized well on actual vegetation images, though it was trained
and validated on simulated images which might have different distribution than the actual vegetation.
Sun kink classifier could classify professionally simulated sun kink videos with a precision of 97.5%.
REFERENCES
1. Routine inspection for railroad tracks http://www.railroadtrackinspection.com/track_problems.html
2. Machine Vision Approach for Automating Vegetation Detection on Railway Tracks, Siril Yella et al,
Journal of Intelligent Systems
3. Automating Condition Monitoring of Vegetation on Railway Trackbeds and Embankments, Roger G.
Nyberg https://www.diva-portal.org/smash/get/diva2:929179/FULLTEXT01.pdf
4. A Computational Approach to Edge Detection, John Canny, Morgan Kaufmann Publishers
5. Use of the Hough Transformation To Detect Lines and Curves in Pictures,
https://www.cse.unr.edu/~bebis/CS474/Handouts/HoughTransformPaper.pdf
6. Going Deeper with Convolutions, Christian Szegedy et al. arXiv.org
| 1 |
arXiv:1703.01150v1 [] 3 Mar 2017
On the intersection graph of ideals of Zm ∗
S. Khojasteh
Department of Mathematics, Lahijan Branch, Islamic Azad University, Lahijan, Iran
Abstract
Let m > 1 be an integer, and let I(Zm )∗ be the set of all non-zero proper ideals of
Zm . The intersection graph of ideals of Zm , denoted by G(Zm ), is a graph with vertices
I(Zm )∗ and two distinct vertices I, J ∈ I(Zm )∗ are adjacent if and only if I ∩ J ≠ 0.
Let n > 1 be an integer and Zn be a Zm -module. In this paper, we introduce and
study a kind of graph structure of Zm , denoted by Gn (Zm ). It is the undirected graph
with the vertex set I(Zm )∗ , and two distinct vertices I and J are adjacent if and only
if IZn ∩ JZn ≠ 0. Clearly, Gm (Zm ) = G(Zm ). We obtain some graph theoretical
properties of Gn (Zm ) and we compute some of its numerical invariants, namely girth,
independence number, domination number, maximum degree and chromatic index.
We also determine all integer numbers n and m for which Gn (Zm ) is Eulerian.
1 Introduction
Let R be a commutative ring, and I(R)∗ be the set of all non-zero proper ideals of R.
There are many papers on assigning a graph to a ring R, for instance see [1], [2], [5] and
[6]. Also the intersection graphs of some algebraic structures such as groups, rings and
modules have been studied by several authors, see [3, 9, 10]. In [9], the intersection graph
of ideals of R, denoted by G(R), was introduced as the graph with vertices I(R)∗ and for
distinct I, J ∈ I(R)∗ , the vertices I and J are adjacent if and only if I ∩ J ≠ 0. Also
in [3], the intersection graph of submodules of an R-module M , denoted by G(M ), is
defined to be the graph whose vertices are the non-zero proper submodules of M and two
∗
Key words: intersection graph, independent set, dominating set, chromatic index, Eulerian graph.
2010 Mathematics Subject Classification: 05C15, 05C25, 05C45, 05C69.
E-mail addresses: s [email protected] (S. Khojasteh).
distinct vertices are adjacent if and only if they have non-zero intersection. Let n, m > 1
be integers and Zn be a Zm -module. In this paper, we associate a graph to Zm , in which
the vertex set is being the set of all non-zero proper ideals of Zm , and two distinct vertices
I and J are adjacent if and only if IZn ∩ JZn ≠ 0. We denote this graph by Gn (Zm ).
Clearly, if n = m, then Gn (Zm ) is exactly the same as the intersection graph of ideals of
Zm . This implies that Gn (Zm ) is a generalization of G(Zm ). As usual, Zm denotes the
integers modulo m.
Now, we recall some definitions and notations on graphs. Let G be a graph with the
vertex set V (G) and the edge set E(G). Then we say the order of G is |V (G)| and the
size of G is |E(G)|. Suppose that x, y ∈ V (G). If x and y are adjacent, then we write x
— y. We denote by deg(x) the degree of a vertex x in G. Also, we denote the maximum
degree of G by ∆(G). We recall that a path between x and y is a sequence x = v0 —
v1 — · · · — vk = y of vertices of G such that for every i with 1 ≤ i ≤ k, the vertices
vi−1 and vi are adjacent and vi ≠ vj , where i ≠ j. We say that G is connected if there
is a path between any two distinct vertices of G. For vertices x and y of G, let d(x, y)
be the length of a shortest path from x to y (d(x, x) = 0 and d(x, y) = ∞ if there is
no path between x and y). The diameter of G, diam(G), is the supremum of the set
{d(x, y) : x and y are vertices of G}. The girth of G, denoted by gr(G), is the length of
a shortest cycle in G (gr(G) = ∞ if G contains no cycles). We use n-cycle to denote the
cycle with n vertices, where n ≥ 3. Also, we denote the complete graph on n vertices by
Kn . A null graph is a graph containing no edges. We use K̄n to denote the null graph of
order n. The disjoint union of two vertex-disjoint graphs G1 and G2 , which is denoted by
G1 ∪ G2 , is a graph with V (G1 ∪ G2 ) = V (G1 ) ∪ V (G2 ) and E(G1 ∪ G2 ) = E(G1 ) ∪ E(G2 ).
An independent set is a subset of the vertices of a graph such that no vertices are adjacent.
The number of vertices in a maximum independent set of G is called the independence
number of G and is denoted by α(G). A dominating set is a subset S of V (G) such that
every vertex of V (G) \ S is adjacent to at least one vertex in S. The number of vertices in
a smallest dominating set denoted by γ(G), is called the domination number of G. Recall
that a k-edge coloring of G is an assignment of k colors {1, . . . , k} to the edges of G such
that no two adjacent edges have the same color, and the chromatic index of G, χ′ (G), is
the smallest integer k such that G has a k-edge coloring.
In [9], the authors were mainly interested in the study of intersection graph of ideals
of Zm . For instance, they determined the values of m for which G(Zm ) is connected,
complete, Eulerian or has a cycle. In this article, we generalize these results to Gn (Zm )
and also, we find some new results. In Section 2, we compute its girth, independence
number, domination number and maximum degree. We also determine all integer numbers
n and m for which Gn (Zm ) is a forest. In Section 3, we investigate the chromatic index of
Gn (Zm ). In the last section, we determine all integer numbers n and m for which Gn (Zm )
is Eulerian.
2 Basic Properties of Gn (Zm )
Let n, m > 1 be integers and Zn be a Zm -module. Clearly, Zn is a Zm -module if and
only if n divides m. Throughout the paper, without loss of generality, we assume that
m = p1^{α1} · · · ps^{αs} and n = p1^{β1} · · · ps^{βs} , where the pi are distinct primes, the αi are positive integers,
the βi are non-negative integers, and 0 ≤ βi ≤ αi for i = 1, . . . , s. Let S = {1, . . . , s},
S′ = {i ∈ S : βi ≠ 0}. The cardinality of S′ is denoted by s′ . Also, we denote the least
common multiple of integers a and b by [a, b]. We write a|b (a ∤ b) if a divides b (a does
not divide b). We begin with a simple example.
Example 1. Let m = 12. Then we have the following graphs.
[Figure: the graphs G(Z12 ), G2 (Z12 ), G3 (Z12 ) and G4 (Z12 ) on the vertex set {2Z12 , 3Z12 , 4Z12 , 6Z12 }. In G(Z12 ) the edges are 2Z12 –3Z12 , 2Z12 –4Z12 , 2Z12 –6Z12 and 3Z12 –6Z12 ; G2 (Z12 ) has no edges; G3 (Z12 ) has the single edge 2Z12 –4Z12 ; G4 (Z12 ) is the triangle on {2Z12 , 3Z12 , 6Z12 } together with the isolated vertex 4Z12 .]
Remark 1. It is easy to see that I(Zm ) = {dZm : d divides m} and |I(Zm )∗ | = ∏_{i=1}^{s} (αi + 1) − 2. Let Zn be a Zm -module. If n|d, then dZm is an isolated vertex of Gn (Zm ).
Obviously, d1 Zm and d2 Zm are adjacent in Gn (Zm ) if and only if n ∤ [d1 , d2 ]. This implies
that Gn (Zm ) is a subgraph of G(Zm ).
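For readers who want to experiment, the following minimal Python sketch (not part of the original text; Python 3.9+ for math.lcm) builds Gn (Zm ) directly from this adjacency criterion and reproduces Example 1:

# Sketch: build Gn(Zm) from Remark 1. Vertices are divisors d of m with
# 1 < d < m; dZm and d'Zm are adjacent iff n does not divide lcm(d, d').
from math import lcm

def vertices(m):
    return [d for d in range(2, m) if m % d == 0]

def edges(m, n):
    V = vertices(m)
    return [(d1, d2) for i, d1 in enumerate(V)
            for d2 in V[i + 1:] if lcm(d1, d2) % n != 0]

for n in (12, 2, 3, 4):        # reproduces Example 1 for m = 12
    print(n, edges(12, n))
# 12 -> [(2,3),(2,4),(2,6),(3,6)]; 2 -> []; 3 -> [(2,4)];
# 4 -> [(2,3),(2,6),(3,6)] with 4Z12 isolated.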
By [3, Theorem 2.5], we have gr(G(Zm )) ∈ {3, ∞}. We extend this result to Gn (Zm ).
Theorem 1. Let Zn be a Zm -module. Then gr(Gn (Zm )) ∈ {3, ∞}.
Proof. With no loss of generality assume that S ′ = {1, . . . , s′ }. Clearly, if s′ ≥ 3, then
p1 Zm — p2 Zm — p1 p2 Zm is a 3-cycle in Gn (Zm ). Therefore gr(Gn (Zm )) = 3. Now,
consider two following cases:
Case 1. s′ = 2. If s ≥ 3, then p1 Zm — p3 Zm — p1 p3 Zm is a 3-cycle in Gn (Zm ).
So we may assume that s = 2. If αi ≥ 3 for some i, i = 1, 2, then pi Zm — p2i Zm —
p3i Zm is a 3-cycle in Gn (Zm ). Also, if βi ≥ 2 for some i, i = 1, 2, then p1 Zm — p2 Zm —
p1 p2 Zm is a 3-cycle in Gn (Zm ). Now, assume that n = p1 p2 and α1 , α2 ≤ 2. It is easy
to see that gr(Gn (Zm )) = ∞. Note that G_{p1 p2}(Z_{p1 p2}) ≅ K̄2 , G_{p1 p2}(Z_{p1^2 p2}) ≅ K2 ∪ K̄2 , and G_{p1 p2}(Z_{p1^2 p2^2}) ≅ K2 ∪ K2 ∪ K̄3 .
Case 2. s′ = 1. If s ≥ 3, then p2 Zm — p3 Zm — p2 p3 Zm is a 3-cycle in Gn (Zm ). So
assume that s = 2. If β1 ≥ 2, then p1 Zm — p2 Zm — p1 p2 Zm is a 3-cycle in Gn (Zm ).
Also, if α2 ≥ 3, then p2 Zm — p22 Zm — p32 Zm is a 3-cycle in Gn (Zm ). Now, suppose that
n = p1 and m = p1^{α1} p2 or m = p1^{α1} p2^2 . Then gr(Gn (Zm )) = ∞, since G_{p1}(Z_{p1^{α1} p2}) ≅ K̄_{2α1} and G_{p1}(Z_{p1^{α1} p2^2}) ≅ K̄_{3α1 −1} ∪ K2 . Finally, assume that n = p1^{β1} and m = p1^{α1} . If β1 ≥ 4,
then p1 Zm — p1^2 Zm — p1^3 Zm is a 3-cycle in Gn (Zm ). It is easy to see that if β1 ≤ 3, then
gr(G_{p1^{β1}}(Z_{p1^{α1}})) = ∞.
As an immediate consequence of Theorem 1, we have the following corollary.
Corollary 1. Let Zn be a Zm -module. Then Gn (Zm ) is a forest if and only if one of the
following holds:
(i) n = p1 p2 , m = p1^{α1} p2^{α2} and α1 , α2 ≤ 2.
(ii) n = p1 , m = p1^{α1} p2^{α2} and α2 ≤ 2.
(iii) n = p1^{β1} , m = p1^{α1} and 1 ≤ β1 ≤ 3.
By [3, Theorem 3.4], we find that G(Zm ) is a tree if and only if G(Zm ) is a star. Now,
we have a similar result.
Corollary 2. Let Zn be a Zm -module. Then Gn (Zm ) is a tree if and only if Gn (Zm ) is
a star. In particular, Gn (Zm ) is a tree if and only if one of the following holds:
(i) n = m = p1^2 .
(ii) n = m = p1^3 .
(iii) n = p1 and m = p1^2 .
By Corollary 1, we can characterize the values of n and m for which Gn (Zm ) is a null
graph.
Corollary 3. Let Zn be a Zm -module. Then Gn (Zm ) is a null graph if and only if one of the following holds:
(i) n = p1 and m = p1^{α1} p2 .
(ii) n = p1 and m = p1^{α1} .
(iii) n = p1^2 and m = p1^{α1} .
(iv) n = m = p1 p2 .
Throughout the rest of this paper, we use A to denote the set of all isolated vertices
of Gn (Zm ).
Lemma 1. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then dZm is an isolated vertex of Gn (Zm ) if and only if n|d, except for the case n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, in which case Gn (Zm ) = K_{α1} ∪ K̄_{α1} .
Proof. Clearly, if n|d, then dZm is an isolated vertex of Gn (Zm ). For the other side suppose that Gn (Zm ) is not a null graph, d = p1^{r1} · · · ps^{rs} is a divisor of m and n ∤ d. Since n ∤ d, we may assume that r1 < β1 . If s ≥ 3, then dZm is adjacent to one of p2^{α2} · · · ps^{αs} Zm or p3^{α3} · · · ps^{αs} Zm . Now, suppose that s = 2. If r1 ≠ 0, then dZm and p2^{α2} Zm are adjacent. Hence r1 = 0 and d = p2^{r2} . If r2 ≥ 2, then dZm and p2 Zm are adjacent. Therefore d = p2 . If β1 ≥ 2, then dZm and p1 Zm are adjacent. Thus β1 = 1. If α2 ≥ 2, then dZm and p2^2 Zm are adjacent. So α2 = 1. Then m = p1^{α1} p2 and n = p1 or n = p1 p2 . If n = p1 , then by Corollary 3, Gn (Zm ) is a null graph, a contradiction. Therefore n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, in which case dZm is an isolated vertex of Gn (Zm ). Moreover, A = {p1^{r1} p2 Zm : 0 ≤ r1 ≤ α1 − 1} and V (Gn (Zm )) \ A = {p1^{r1} Zm : 1 ≤ r1 ≤ α1 }. Clearly, p1^{r1} Zm and p1^{r2} Zm are adjacent, where 1 ≤ r1 < r2 ≤ α1 . This implies that G_{p1 p2}(Z_{p1^{α1} p2}) = K_{α1} ∪ K̄_{α1} , where α1 ≥ 2. Next, suppose that s = 1. Since Gn (Zm ) is not a null graph, by Corollary 3, we conclude that β1 ≥ 3. Therefore dZm is adjacent to p1 Zm or p1^2 Zm . This completes the proof.
Lemma 2. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then |A| = α1 if n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, and |A| = ∏_{i=1}^{s} (αi − βi + 1) − 1 otherwise.
Proof. By Lemma 1, we know that if n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, then |A| = α1 . Otherwise, from Lemma 1, we find that dZm is an isolated vertex of Gn (Zm ) if and only if n divides d. So A = {dZm : n divides d, d divides m, d ≠ m} = {p1^{r1} · · · ps^{rs} Zm : βi ≤ ri ≤ αi } \ {0}. This implies that |A| = ∏_{i=1}^{s} (αi − βi + 1) − 1 and the proof is complete.
Corollary 4. Let Zn be a Zm -module. Then Gn (Zm ) contains no isolated vertex if and
only if n = m ≠ p1 , p1^2 , p1 p2 .
Proof. One side is obvious. For the other side assume that Gn (Zm ) contains no isolated
vertex. Hence n = m. By Corollary 3 and Lemma 2, it is clear that m ≠ p1 , p1^2 , p1 p2 .
Theorem 2. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then α(Gn (Zm )) = α1 + 1 if n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, and α(Gn (Zm )) = ∏_{i=1}^{s} (αi − βi + 1) − 1 + s′ otherwise.
Proof. By Lemma 1, it is clear that if n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, then α(Gn (Zm )) = α1 + 1. Otherwise, with no loss of generality we may assume that S′ = {1, . . . , s′ }. Let B = A ∪ { pj^{βj −1} ∏_{i≠j} pi^{αi} Zm : 1 ≤ j ≤ s′ }. Obviously, n ∤ pj^{βj −1} ∏_{i≠j} pi^{αi} , for every j, 1 ≤ j ≤ s′ . Hence by Lemma 1, pj^{βj −1} ∏_{i≠j} pi^{αi} Zm is not an isolated vertex, for every j, 1 ≤ j ≤ s′ . Also, it is easy to see that B is an independent set and so α(Gn (Zm )) ≥ |A| + s′ = ∏_{i=1}^{s} (αi − βi + 1) − 1 + s′ . Moreover, if C is an independent set of Gn (Zm ) \ A and |C| ≥ s′ + 1, then by the Pigeonhole Principle we conclude that there exist ∏_{i=1}^{s} pi^{ri} Zm , ∏_{i=1}^{s} pi^{r′i} Zm ∈ C such that rj , r′j < βj , for some j, 1 ≤ j ≤ s′ . This implies that ∏_{i=1}^{s} pi^{ri} Zm and ∏_{i=1}^{s} pi^{r′i} Zm are adjacent, which is impossible. Therefore
α(Gn (Zm )) = ∏_{i=1}^{s} (αi − βi + 1) − 1 + s′ .
We denote {i ∈ S : ri < βi } by Dd , where d = p1^{r1} · · · ps^{rs} is a divisor of m. Obviously, Dd ⊆ S′ .
Theorem 3. Let Zn be a Zm -module and d = p1^{r1} · · · ps^{rs} (≠ 1, m) be a divisor of m. If n divides d, then deg(dZm ) = 0 and otherwise
deg(dZm ) = ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1).
Proof. If n|d, then dZm is an isolated vertex and so deg(dZm ) = 0. Assume that n does not divide d. Then Dd is nonempty. Clearly, dZm and p1^{t1} · · · ps^{ts} Zm are not adjacent if and only if ti ≥ βi for each i ∈ Dd . Thus the number of vertices not adjacent to dZm is ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1) − 1 and hence the number of its neighbors is ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1).
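As a sanity check, the following Python sketch (sympy assumed; added for illustration, not part of the original text) compares brute-force degrees in Gn (Zm ) with the product formula of Theorem 3 for a sample m, n:

# Sketch: compare brute-force degrees with the formula of Theorem 3.
from math import lcm, prod
from sympy import factorint

def check_theorem3(m, n):
    alpha = factorint(m)                              # {p_i: alpha_i}
    beta = {p: factorint(n).get(p, 0) for p in alpha} # {p_i: beta_i}
    V = [d for d in range(2, m) if m % d == 0]
    for d in V:
        if d % n == 0:
            expected = 0
        else:
            r = factorint(d)
            Dd = [p for p in alpha if r.get(p, 0) < beta[p]]
            expected = (prod(a + 1 for a in alpha.values()) - 2
                        - prod(alpha[p] + 1 for p in alpha if p not in Dd)
                        * prod(alpha[p] - beta[p] + 1 for p in Dd))
        actual = sum(1 for e in V if e != d and lcm(d, e) % n != 0)
        assert actual == expected, (m, n, d)

check_theorem3(360, 12)   # m = 2^3 * 3^2 * 5, n = 2^2 * 3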
Theorem 4. Suppose that Zn is a Zm -module and Gn (Zm ) is not a null graph. If n = p1 · · · ps , then ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − (α1 + 1) ∏_{i=2}^{s} αi , where α1 ≥ · · · ≥ αs , and otherwise ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i=1}^{s} (αi − βi + 1).
Proof. First, suppose that n = p1 · · · ps . By Theorem 3, if d = p1^{r1} · · · ps^{rs} is a divisor of m and n does not divide d, then deg(dZm ) = ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} αi . If α1 ≥ · · · ≥ αs , then deg(dZm ) = ∆(Gn (Zm )) if and only if ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} αi = (α1 + 1) ∏_{i=2}^{s} αi . Let {i ∈ S : αi = α1 } = {1, . . . , k} for some k, 1 ≤ k ≤ s. In fact, {pi^{ri} Zm : 1 ≤ i ≤ k, 1 ≤ ri ≤ αi } is the set of all vertices with maximum degree. So ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − (α1 + 1) ∏_{i=2}^{s} αi .
Now, assume that there is an integer j such that βj ≠ 1. We claim that there exist some vertices adjacent to all non-isolated vertices and so ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i=1}^{s} (αi − βi + 1). If βj = 0, then pj Zm is adjacent to all non-isolated vertices. Otherwise, βj ≥ 2. With no loss of generality suppose that S′ = {1, . . . , s′ }. Let d = p1^{β1 −1} · · · ps′^{βs′ −1} . Then dZm is adjacent to all non-isolated vertices. The claim is proved.
Theorem 5.
Let Zn be a Zm -module. Then Gn (Zm ) has a vertex which is adjacent to
all other vertices if and only if n = m and αj ≥ 2, for some j, 1 ≤ j ≤ s.
Proof. If Gn (Zm ) has a vertex which is adjacent to all other vertices, then Gn (Zm )
has not any isolated vertex. This implies that n = m. By contradiction suppose that
α1 = · · · = αs = 1. Let dZm be a vertex of Gn (Zm ) such that it is adjacent to all other
vertices. With no loss of generality, we may assume that d = p1 · · · pt , where 1 ≤ t < s.
Let d′ = pt+1 · · · ps . It is easy to see that dZm and d′ Zm are non-adjacent, a contradiction.
Therefore αj ≥ 2, for some j, 1 ≤ j ≤ s. Conversely, if n = m and αj ≥ 2, for some j,
1 ≤ j ≤ s, then by Corollary 4, Gn (Zm ) has no isolated vertex. Also, in view of the proof
of Theorem 4, we find that Gn (Zm ) has a vertex which is adjacent to all other vertices.
The following corollary is a generalization of [9, Theorem 2.9].
Corollary 5. Let Zn be a Zm -module. Then Gn (Zm ) is a complete graph if and only if n = m = p1^{α1} and α1 ≥ 2.
Proof. Suppose that Gn (Zm ) is a complete graph. By Theorem 5, we find that n = m and αj ≥ 2, for some j, 1 ≤ j ≤ s. If s ≥ 2, then ps^{αs} Zm and ∏_{i=1}^{s−1} pi^{αi} Zm are two non-adjacent vertices, which is a contradiction. Therefore n = m = p1^{α1} and α1 ≥ 2. The
other side is obvious.
Theorem 6. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then γ(Gn (Zm )) = |A| + 1 if n ≠ p1 · · · ps or n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, and γ(Gn (Zm )) = |A| + 2 otherwise.
Proof. Suppose that n ≠ p1 · · · ps . In view of the proof of Theorem 4, Gn (Zm ) has a vertex which is adjacent to every non-isolated vertex. This implies that γ(Gn (Zm )) = |A| + 1. Next, assume that n = p1 · · · ps . If s = 1, then Gn (Zm ) is a null graph. Now, assume that s ≥ 2. If n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, then by Lemma 1, γ(Gn (Zm )) = |A| + 1. Otherwise, let B = {p1 Zm , p2 · · · ps Zm }. If dZm is a non-isolated vertex, then n does not divide d. If p1 ∤ d, then dZm is adjacent to p2 · · · ps Zm . Otherwise, pj ∤ d, for some j, 2 ≤ j ≤ s. Therefore pj ∤ [d, p1 ]. This yields that dZm is adjacent to p1 Zm . Therefore B is a dominating set for Gn (Zm ) \ A. Now, we claim that Gn (Zm ) has no vertex which is adjacent to every non-isolated vertex. By contradiction, suppose that dZm is adjacent to every non-isolated vertex. Since n ∤ d, we may assume that d = p1 · · · pt , where 1 ≤ t < s. Let d′ = p_{t+1} · · · ps . Clearly, dZm and d′ Zm are non-adjacent. Since n ∤ d′ , by Lemma 1, d′ Zm is not an isolated vertex, a contradiction. Thus γ(Gn (Zm )) = |A| + 2.
3 Chromatic Index of Gn (Zm )
In this section, we study the chromatic index of Gn (Zm ). First, we need the following
theorems:
Theorem A. [8, Theorem 17.4] (Vizing’s Theorem) If G is a simple graph, then either
χ′ (G) = ∆(G) or χ′ (G) = ∆(G) + 1.
Theorem B. [7, Corollary 5.4] Let G be a simple graph. Suppose that for every vertex
u of maximum degree, there exists an edge u — v such that ∆(G) − deg(v) + 2 is more
than the number of vertices with maximum degree in G. Then χ′ (G) = ∆(G).
Theorem C. [11, Theorem D] If G has order 2k and maximum degree 2k − 1, then
χ′ (G) = ∆(G). If G has order 2k + 1 and maximum degree 2k, then χ′ (G) = ∆(G) + 1 if
and only if the size of G is at least 2k^2 + 1.
Theorem 7. Suppose that Zn is a Zm -module and Gn (Zm ) is not a null graph. If n = p1^{β1} , then χ′ (Gn (Zm )) = ∆(Gn (Zm )) if and only if β1 ∏_{i=2}^{s} (αi + 1) is an odd integer.
Proof. Clearly, the set of all non-isolated vertices of Gn (Zm ) is equal to {dZm : d = p1^{r1} · · · ps^{rs} , d ≠ 1, 0 ≤ r1 ≤ β1 − 1 and 0 ≤ ri ≤ αi for i = 2, . . . , s}. So Gn (Zm ) \ A is a complete graph of order β1 ∏_{i=2}^{s} (αi + 1) − 1. The result now follows from [4, Theorem 5.11].
Theorem 8. Suppose that Zn is a Zm -module and Gn (Zm ) is not a null graph. If n = p1 p2 and m = p1^{α1} p2^{α2} , then χ′ (Gn (Zm )) = ∆(Gn (Zm )) if and only if max{α1 , α2 } is an even integer.
Proof. If α1 = α2 = 1, then Gn (Zm ) is a null graph. If α1 = 1 and α2 ≥ 2, then Gn (Zm ) ≅ K_{α2} ∪ K̄_{α2} . If α1 , α2 ≥ 2, then assume that Bi = {pi^{ri} Zm : 1 ≤ ri ≤ αi }, for i = 1, 2. Clearly, B1 ∪ B2 is the set of all non-isolated vertices of Gn (Zm ), every Bi forms a complete graph and Gn (Zm ) \ A is the disjoint union of them. Therefore Gn (Zm ) \ A ≅ K_{α1} ∪ K_{α2} . Now, [4, Theorem 5.11] completes the proof.
Theorem 9. Let Zn be a Zm -module. If n = p1 · · · ps and s ≥ 3, then χ′ (Gn (Zm )) = ∆(Gn (Zm )).
Proof. With no loss of generality assume that α1 ≥ · · · ≥ αs . Let {i ∈ S : αi = α1 } = {1, . . . , k} for some k, 1 ≤ k ≤ s. By Theorem 4, we know that {pi^{ri} Zm : 1 ≤ i ≤ k, 1 ≤ ri ≤ αi } is the set of all vertices with maximum degree. Hence Gn (Zm ) has kα1 vertices with maximum degree. On the other hand, we know that ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − (α1 + 1) ∏_{i=2}^{s} αi and deg(p1 · · · p_{s−1} Zm ) = ∏_{i=1}^{s} (αi + 1) − 2 − αs ∏_{i=1}^{s−1} (αi + 1). It is easy to check that if s ≥ 4, then ∆(Gn (Zm )) − deg(p1 · · · p_{s−1} Zm ) + 2 is more than α1 + · · · + αs and so ∆(Gn (Zm )) − deg(p1 · · · p_{s−1} Zm ) + 2 is more than the number of vertices with maximum degree. Note that deg(ps^{rs} Zm ) = ∆(Gn (Zm )) (1 ≤ rs ≤ αs ) if and only if α1 = · · · = αs . Also, if s ≥ 4, then ∆(Gn (Zm )) − deg(p2 · · · ps Zm ) + 2 is more than sα1 . Therefore by Theorem B, we find that χ′ (Gn (Zm )) = ∆(Gn (Zm )). Now, consider s = 3. If α3 ≥ 2, then it is easy to see that ∆(Gn (Zm )) − deg(p1 · · · p_{s−1} Zm ) + 2 is more than α1 + α2 + α3 and the proof is complete. Therefore assume that α3 = 1. There are the two following cases:
Case 1. α2 = 1. If α1 = 1, then n = m and so Gn (Zm ) = G(Zm ). One can easily check that χ′ (G(Z_{p1 p2 p3})) = ∆(G(Z_{p1 p2 p3})) = 4 (see [9, Fig. 3]). If α1 ≥ 2, then Gn (Zm ) has α1 vertices with maximum degree. Also, ∆(Gn (Zm )) − deg(p1 p3 Zm ) + 2 is more than α1 and the result follows from Theorem B.
Case 2. α2 ≥ 2. In this case, it is easy to see that ∆(Gn (Zm )) − deg(p2 p3 Zm ) + 2 is more than α1 + α2 . Also, ∆(Gn (Zm )) − deg(p1 p3 Zm ) + 2 is more than α1 + α2 and the result follows from Theorem B.
Theorem 10. Let Zn be a Zm -module. If n ≠ p1 · · · ps and s ≥ 2, then χ′ (Gn (Zm )) = ∆(Gn (Zm )).
Proof. By Theorem 4, we know that Gn (Zm ) has ∏_{i=1}^{s} (αi + 1) − 1 − ∏_{i=1}^{s} (αi − βi + 1) non-isolated vertices and ∆(Gn (Zm )) = ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i=1}^{s} (αi − βi + 1). Hence by Theorem C, we find that χ′ (Gn (Zm )) = ∆(Gn (Zm )), where ∏_{i=1}^{s} (αi + 1) − 1 − ∏_{i=1}^{s} (αi − βi + 1) is even. Next, assume that ∏_{i=1}^{s} (αi + 1) − 1 − ∏_{i=1}^{s} (αi − βi + 1) is odd. Since the size of a complete graph of order 2k + 1 is 2k^2 + k, if we prove that Gn (Zm ), in this case, loses at least k = ( ∏_{i=1}^{s} (αi + 1) − 2 − ∏_{i=1}^{s} (αi − βi + 1) )/2 edges, then by Theorem C, χ′ (Gn (Zm )) = ∆(Gn (Zm )). Let d = ∏_{i=1}^{s} pi^{ri} be a divisor of m such that n ∤ d and {i ∈ S′ : ri ≥ βi } ≠ ∅. Let {i ∈ S′ : ri ≥ βi } = {1, . . . , t}. Set d̄ = ∏_{i=t+1}^{s′} pi^{βi} ∏_{i=s′+1}^{s} pi^{ri} . It is easy to check that dZm and d̄Zm are not adjacent. So we conclude that Gn (Zm ) loses at least ( ∏_{i=1}^{s} (αi + 1) − ∏_{i=1}^{s} (αi − βi + 1) − ∏_{i=1}^{s′} βi ∏_{i=s′+1}^{s} (αi + 1) )/2 edges. We continue the proof in the following two cases:
Case 1. βi ≥ 2, for some i, 1 ≤ i ≤ s′ . With no loss of generality we may assume that β_{s′} ≥ 2. Suppose that B = { p1^{r1} ∏_{i=2}^{s′} pi^{βi} Zm : 0 ≤ r1 ≤ β1 − 1 } and C = { p1^{β1} ∏_{i=2}^{s} pi^{ri} Zm : 0 ≤ ri ≤ βi − 1 for 2 ≤ i ≤ s′ , and otherwise 0 ≤ ri ≤ αi }. Clearly, every element of B is not adjacent to any element of C. This implies that Gn (Zm ) loses at least ∏_{i=1}^{s′} βi ∏_{i=s′+1}^{s} (αi + 1) − ( ∏_{i=2}^{s′} βi + β1 − 1 ) new edges. Therefore Gn (Zm ) loses at least l = ( ∏_{i=1}^{s} (αi + 1) − ∏_{i=1}^{s} (αi − βi + 1) + ∏_{i=1}^{s′} βi ∏_{i=s′+1}^{s} (αi + 1) )/2 − ( ∏_{i=2}^{s′} βi + β1 − 1 ) edges. It suffices to prove that k ≤ l. One can easily see that k ≤ l if and only if 2(β1 − 2) ≤ ( β1 ∏_{i=s′+1}^{s} (αi + 1) − 2 ) ∏_{i=2}^{s′} βi . Since β_{s′} ≥ 2, we have 2(β1 − 2) ≤ ( β1 ∏_{i=s′+1}^{s} (αi + 1) − 2 ) ∏_{i=2}^{s′} βi and hence k ≤ l.
Case 2. β1 = · · · = β_{s′} = 1. Clearly, ∏_{i=2}^{s′} pi Zm and p1 ∏_{i=s′+1}^{s} pi^{ri} Zm are non-adjacent, where 0 ≤ ri ≤ αi for i = s′ + 1, . . . , s. This implies that Gn (Zm ) loses at least ∏_{i=s′+1}^{s} (αi + 1) − 1 new edges. Therefore Gn (Zm ) loses at least l = ( ∏_{i=1}^{s} (αi + 1) − ∏_{i=1}^{s} (αi − βi + 1) + ∏_{i=s′+1}^{s} (αi + 1) )/2 − 1 edges. In this case, k ≤ l is obvious and the proof is complete.
From the above theorems, we can deduce the next result.
Corollary 6. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then χ′ (Gn (Zm )) = ∆(Gn (Zm )), except in the following cases:
(i) n = p1^{β1} and β1 ∏_{i=2}^{s} (αi + 1) is even.
(ii) n = p1 p2 , m = p1^{α1} p2^{α2} and max{α1 , α2 } is odd.
4 Eulerian Tour in Gn (Zm )
An Eulerian tour in a graph is a closed trail including all the edges of the graph. A graph
is Eulerian if it has an Eulerian tour. By [8, Theorem 4.1], a simple connected graph is
Eulerian if and only if it has no vertices of odd degree. In this section, we determine all
integer numbers n and m for which Gn (Zm ) \ A is an Eulerian graph. We start with the
following theorem.
Theorem 11. Let Zn be a Zm -module. If Gn (Zm ) is not a null graph, then diam(Gn (Zm ) \ A) ≤ 4, except for the case n = p1 p2 , m = p1^{α1} p2^{α2} and α1 , α2 ≥ 2, in which case Gn (Zm ) \ A ≅ K_{α1} ∪ K_{α2} .
Proof. Assume that Gn (Zm ) is not a null graph. In view of the proof of Theorem 6, we conclude that if n ≠ p1 · · · ps , then Gn (Zm ) \ A is connected and diam(Gn (Zm ) \ A) ≤ 2. Now, suppose that n = p1 · · · ps . Since Gn (Zm ) is not a null graph, s ≠ 1. First assume that s ≥ 3. By the proof of Theorem 6, we know that {p1 Zm , p2 · · · ps Zm } is a dominating set for Gn (Zm ) \ A. Since s ≥ 3, p2 Zm is adjacent to both p1 Zm and p2 · · · ps Zm . This implies that Gn (Zm ) \ A is connected and diam(Gn (Zm ) \ A) ≤ 4. Next, assume that s = 2. By Lemma 1, if n = p1 p2 , m = p1^{α1} p2 and α1 ≥ 2, then diam(Gn (Zm ) \ A) = 1. Now, consider n = p1 p2 , m = p1^{α1} p2^{α2} and α1 , α2 ≥ 2. As we saw in the proof of Theorem 8, Gn (Zm ) \ A ≅ K_{α1} ∪ K_{α2} and the proof is complete.
Now, we are in a position to generalize Theorem 5.1 of [9].
Theorem 12. Suppose that Zn is a Zm -module and Gn (Zm ) is not a null graph. Then Gn (Zm ) \ A is an Eulerian graph if and only if one of the following holds:
(i) αi and βi are even integers for each i, 1 ≤ i ≤ s.
(ii) αi is an odd integer and βi is an even integer for some i, 1 ≤ i ≤ s.
(iii) n = p1 · · · ps and m = p1^{α1} · · · ps^{αs} , where the αi are odd integers and s ≥ 3.
(iv) n = p1 p2 and m = p1^{α1} p2 , where α1 > 1 is an odd integer.
Proof. By Theorem 3, Gn (Zm ) \ A is an Eulerian graph if and only if for each non-isolated vertex dZm of Gn (Zm ) both ∏_{i=1}^{s} (αi + 1) and ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1) are even or both are odd integers. So the “if” part of the theorem is obvious. For the converse, suppose that Gn (Zm ) \ A is an Eulerian graph. First assume that both ∏_{i=1}^{s} (αi + 1) and ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1) are odd integers for a vertex dZm of Gn (Zm ) \ A. Thus all the αi are even integers. If βi is an odd integer for some i, 1 ≤ i ≤ s, then there exists a vertex d′ Zm such that D_{d′} = {i}. (Note that Gn (Zm ) is not a null graph.) So ∏_{i∉D_{d′}} (αi + 1) ∏_{i∈D_{d′}} (αi − βi + 1) is an even integer, which implies that deg(d′ Zm ) is an odd integer, a contradiction. Therefore βi is an even integer for each i, 1 ≤ i ≤ s. Next, assume that both ∏_{i=1}^{s} (αi + 1) and ∏_{i∉Dd} (αi + 1) ∏_{i∈Dd} (αi − βi + 1) are even integers for a vertex dZm of Gn (Zm ) \ A. Hence αi is an odd integer for some i, 1 ≤ i ≤ s. With no loss of generality suppose that {i ∈ S : αi is an odd integer} = {1, . . . , t}, where 1 ≤ t ≤ s. Suppose that β1 , . . . , βt are odd integers. If n = p1 · · · ps and m = p1^{α1} · · · ps^{αs} , where the αi are odd integers, then we are done. (Note that Gn (Zm ) \ A is a connected graph.) Otherwise there exists a vertex d′ Zm such that D_{d′} = {1, . . . , t}. So ∏_{i∉D_{d′}} (αi + 1) ∏_{i∈D_{d′}} (αi − βi + 1) is an odd integer, which implies that deg(d′ Zm ) is an odd integer, a contradiction. Thus βi is an even integer for some i, 1 ≤ i ≤ t. The proof is complete.
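The parity condition in this proof is easy to test numerically; the following Python sketch (added for illustration, not part of the original text) checks that every non-isolated vertex has even degree for sample instances of cases (i) and (iv). Connectivity of Gn (Zm ) \ A, which Eulerianity also requires, is covered by Theorem 11 and is not checked here.

# Sketch: test the even-degree condition of Theorem 12 numerically.
from math import lcm

def all_degrees_even(m, n):
    V = [d for d in range(2, m) if m % d == 0]
    degrees = [sum(1 for e in V if e != d and lcm(d, e) % n != 0) for d in V]
    return all(deg % 2 == 0 for deg in degrees if deg > 0)

print(all_degrees_even(36, 36))      # case (i): alpha = beta = (2, 2)
print(all_degrees_even(24, 6))       # case (iv): m = p1^3 p2, n = p1 p2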
References
[1] S. Akbari, S. Khojasteh, Commutative rings whose cozero-divisor graphs are unicyclic or of bounded degree, Comm. Algebra, 42 (2014), 1594–1605.
[2] S. Akbari, S. Khojasteh, A. Yousefzadehfard, The proof of a conjecture in Jacobson
graph of a commutative ring, J. Algebra Appl., Vol. 14, No. 10 (2015) 1550107.
[3] S. Akbari, H.A. Tavallaee, S. Khalashi Ghezelahmad, Intersection graph of submodules of a module, J. Algebra Appl. 11 (2012), Article No. 1250019.
[4] I. Anderson, A First Course in Discrete Mathematics, Springer-Verlag, 2001.
[5] D.F. Anderson, P.S. Livingston, The zero-divisor graph of a commutative ring, J.
Algebra, 217 (1999), 434–447.
[6] S.E. Atani, A. Yousefian Darani, E.R. Puczylowski, On the diameter and girth
of ideal-based zero-divisor graphs, Publicationes Mathematicae Debrecen, 78:3-4
(2011) 607–612.
[7] L.W. Beineke, B.J. Wilson, Selected Topics in Graph Theory, Academic Press Inc.,
London, 1978.
[8] J.A. Bondy, U.S.R. Murty, Graph Theory, Graduate Texts in Mathematics, 244
Springer, New York, 2008.
[9] I. Chakrabarty, S. Ghosh, T.K. Mukherjee, M.K. Sen, Intersection graphs of ideals
of rings, Discrete Mathematics, 309 (2009), 5381–5392
[10] B. Csákány and G. Pollák, The graph of subgroups of a finite group (Russian),
Czech. Math. J. 19 (1969), 241–247.
[11] M.J. Plantholt, The chromatic index of graphs with large maximum degree, Discrete
Math. 47 (1981), 91–96.
| 0 |
Degrees of Freedom of the Broadcast Channel
with Hybrid CSI at Transmitter and Receivers
arXiv:1709.02884v1 [] 9 Sep 2017
Mohamed Fadel, Student Member, IEEE, and Aria Nosratinia, Fellow, IEEE
Abstract
In general the different links of a broadcast channel may experience different fading dynamics
and, potentially, unequal or hybrid channel state information (CSI) conditions. The faster the fading
and the shorter the fading block length, the more often the link needs to be trained and estimated at
the receiver, and the more likely that CSI is stale or unavailable at the transmitter. Disparity of link
fading dynamics in the presence of CSI limitations can be modeled by a multi-user broadcast channel
with both non-identical link fading block lengths as well as dissimilar link CSIR/CSIT conditions.
This paper investigates a MISO broadcast channel where some receivers experience longer coherence
intervals (static receivers) and have CSIR, while some other receivers experience shorter coherence
intervals (dynamic receivers) and do not enjoy free CSIR. We consider a variety of CSIT conditions
for the above mentioned model, including no CSIT, delayed CSIT, or hybrid CSIT. To investigate the
degrees of freedom region, we employ interference alignment and beamforming along with a product
superposition that allows simultaneous but non-contaminating transmission of pilots and data to different
receivers. Outer bounds employ the extremal entropy inequality as well as a bounding of the performance
of a discrete memoryless multiuser multilevel broadcast channel. For several cases, inner and outer
bounds are established that either partially meet, or the gap diminishes with increasing coherence times.
Index Terms
Broadcast channel, Channel state information, Coherence time, Coherence diversity, Degrees of
freedom, Fading channel, Multilevel broadcast channel, Product superposition.
The authors are with the Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 750830688 USA, E-mail: [email protected];[email protected]. This work was presented in part at the IEEE International
Symposium on Information Theory (ISIT), Germany, June 2017 [1].
I. INTRODUCTION
The performance of a broadcast channel depends on both the channel dynamics as well as
the availability and the quality of channel state information (CSI) on the two ends of each
link [2]–[5]. The two issues of CSI and the channel dynamics are practically related. The faster
the fading, the more often the channel needs training, thus consuming more channel resources,
while a very slow fading link requires infrequent training, therefore slow fading models often
assume that CSIR is available due to the cost of training being small when amortized over time.
In practice, in a broadcast channel some links may fade faster or slower than others. Recently,
it has been shown [6], [7], that the degrees of freedom of the broadcast channel are affected
by the disparity of link fading speeds, but existing studies have focused on a few simple and
uniform CSI conditions, e.g., neither CSIT nor CSIR were available in [6], [7] for any user. This
paper studies a broadcast channel where the links experience both disparate fading conditions
as well as non-uniform or hybrid CSI conditions.
A review of the relevant literature is as follows. Under perfect instantaneous CSI, the degrees
of freedom of a broadcast channel increase with the minimum of the transmit antennas and
the total number of receive antennas [8], [9]. However, due to the time-varying nature of the
channel and feedback impairments, perfect instantaneous transmit-side CSI (CSIT) may not be
available, and also receive-side CSI (CSIR) can be assumed for slow-fading channels only.
Broadcast channel with perfect CSIR has been investigated under a variety of CSIT conditions,
including imperfect, delayed, or no CSIT [2]–[4], [10]–[12]. In the absence of CSIT, Huang et
al. [2] and Vaze and Varanasi [3] showed that the degrees of freedom collapse to that of the
single-receiver, since the receivers are stochastically equivalent with respect to the transmitter.
For a MISO broadcast channel Lapidoth et al. [4] conjectured that as long as the precision of
CSIT is finite, the degrees of freedom collapse to unity. This conjecture was recently settled in
the positive by Davoodi and Jafar in [10]. Moreover, for a MISO broadcast channel under perfect
delayed CSIT Maddah-Ali and Tse in [11] showed using retrospective interference alignment
that the degrees of freedom are K/(1 + 1/2 + · · · + 1/K) > 1, where K is the number of the transmit antennas
and also the number of receivers. A scenario of mixed CSIT was investigated in [12], where the
transmitter has partial knowledge on the current channel in addition to delayed CSI.
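For concreteness, here is a short Python computation of this sum degrees of freedom K/(1 + 1/2 + · · · + 1/K) for small K (an illustration added here, not from the paper):

# Illustration: the Maddah-Ali-Tse sum degrees of freedom for small K.
from fractions import Fraction

def mat_sum_dof(K):
    harmonic = sum(Fraction(1, k) for k in range(1, K + 1))
    return K / harmonic

for K in (2, 3, 4):
    print(K, float(mat_sum_dof(K)))   # 1.333..., 1.636..., 1.92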
The potential variation between the quality of feedback links has led to the model of hybrid
CSIT, where the CSIT with respect to different links may not be identical [10], [13]–[15]. A
MISO broadcast channel with perfect CSIT for some receivers and delayed for the others was
studied by Tandon et al. in [13] and Amuru et al. in [14]. Davoodi and Jafar in [10] showed
that for a MISO two-receiver broadcast channel under perfect CSIT for one user and no CSIT
for the other, the degrees of freedom collapse to unity. Tandon et al. in [15] considered a MISO
broadcast channel with alternating hybrid CSIT to be perfect, delayed, or no CSIT with respect
to different receivers.
As mentioned earlier, investigation of broadcast channels under unequal link fading dynamics
is fairly recent. An achievable degrees of freedom region for one static and one dynamic receiver
was given in [16]–[19] via product superposition, producing a gain that is now known as
coherence diversity. Coherence diversity gain was further investigated in [6], [7] for a K-receiver
broadcast channel with neither CSIT nor CSIR. Also, a broadcast channel was investigated in
[20], where the receivers’ MIMO fading links experience nonidentical spatial correlation.
In this paper, we consider a multiuser model under a hybrid CSIR scenario where a group of
receivers, denoted static receivers, are assumed to have CSIR, and another group with shorter
link coherence time, denoted dynamic receivers, do not have free CSIR. We consider this model
under a variety of CSIT conditions, including no CSIT, delayed CSIT, and two hybrid CSIT
scenarios. In each of these conditions, we analyze the degrees of freedom region. A few new
tools are introduced, and inner and outer bounds are derived that partially meet in some cases.
The results of this paper are cataloged as follows.
In the absence of CSIT, an outer bound on the degrees of freedom region is produced via
bounding the rates of a discrete memoryless multilevel broadcast channel [21], [22] and then
applying the extremal entropy inequality [23], [24]. Our achievable degrees of freedom region
meets the outer bound in the limiting case where the coherence times of the static and dynamic
receivers are the same.
For delayed CSIT, we use the outdated CSI model that was used by Maddah-Ali and Tse [11]
under i.i.d. fading and assuming global CSIR at all nodes. Noting that our model does not have
uniform CSIR, we produced a technique with alignment over super-symbols to utilize outdated
CSIT but merge it together with product superposition to reuse the pilots of the dynamic receivers
for the purpose of transmission to static receivers. Moreover, we develop an outer bound that
is suitable for block-fading channels with different coherence times, by appropriately enhancing
Fig. 1. A broadcast channel with multiple static and multiple dynamic users (the transmitter Tx serves a static group and a dynamic group).
the channel to a physically-degraded broadcast channel and then applying the extremal entropy
inequality [23], [24]. For one static and one dynamic receiver, our achievable degrees of freedom
partially meet our outer bound, and furthermore the gap decreases with the dynamic receiver
coherence time T .
Under hybrid CSIT, we analyze two conditions: First, we consider perfect CSIT for the static
receivers and no CSIT with respect to the dynamic receivers. The achievable degrees of freedom
in this case are obtained using product superposition with the dynamic receiver’s pilots reused
and beamforming for the static receivers to avoid interference. Second, we consider perfect CSIT
with respect to the static receivers and delayed CSIT with respect to the dynamic receivers. An
achievable transmission scheme is proposed via a combination of beamforming, interference
alignment, and product superposition methodologies. The outer bounds for the two hybrid-CSIT
cases were based on constructing an enhanced physically degraded channel and then applying the
extremal entropy inequality. For one static receiver with perfect CSIT and one dynamic receiver
with delayed CSIT, the gap between the achievable and the outer sum degrees of freedom is
1/T .
II. SYSTEM MODEL
Consider a broadcast channel with multiple single-antenna receivers and the transmitter is
equipped with Nt antennas. The expressions “receiver” and “user” are employed without distinction throughout the paper, indicating the receiving terminals in the broadcast channel. The
channels of the users are modeled as Rayleigh block fading where the channel coefficients
remain constant over each block and change independently across blocks [25], [26]. As shown
in Fig. 1, the users are partitioned into two sets based on channel availability and the length of the
coherence interval: one set of dynamic users and another set of static users. The former contains
TABLE I
NOTATION

                                  Static Users           Dynamic Users
number of users                   m′                     m
MISO channel gains                g1 , . . . , gm′       h1 , . . . , hm
received signals (continuous)     y′1 , . . . , y′m′     y1 , . . . , ym
DMC receive variables             Y′1 , . . . , Y′m′     Y1 , . . . , Ym
transmission rates                R′1 , . . . , R′m′     R1 , . . . , Rm
messages                          M′1 , . . . , M′m′     M1 , . . . , Mm
degrees of freedom                d′1 , . . . , d′m′     d1 , . . . , dm
coherence time                    T′                     T

General Variables
X            transmit signal
ρ            signal-to-noise ratio
Ui , Vj , W  auxiliary random variables
H            set of all channel gains
Dx           vertex of degrees of freedom region
ei           canonical coordinate vector
m dynamic users having coherence time T and no free CSIR¹, and the latter contains m′ static
users having coherence time T ′ and perfect instantaneous CSIR. We consider the transmitter is
equipped with more antennas than the number of dynamic and static users, i.e., Nt ≥ m′ + m.
The received signals $y_j'(t)$, $y_i(t)$ at the static user j and the dynamic user i, respectively, at time instant t are

$$y_j'(t) = g_j^{\dagger}(t)\,x(t) + z_j'(t), \qquad j = 1,\dots,m',$$
$$y_i(t) = h_i^{\dagger}(t)\,x(t) + z_i(t), \qquad i = 1,\dots,m, \qquad (1)$$
where x(t) ∈ CNt is the transmitted signal, zj′ (t), zi (t) denote the corresponding additive i.i.d.
Gaussian noise of the users, and gj (t) ∈ CNt , hi (t) ∈ CNt denote the channels of the static
user j and the dynamic user i whose coefficients stay the same over T ′ and T time instances,
¹ This means that the cost of knowing CSI at the receiver, e.g., by channel estimation, is not ignored.
respectively. The distributions of $g_j$ and $h_i$ are globally known at the transmitter and at the users.² Having CSIR, the value of $g_j(t)$ is available instantaneously and perfectly at the static user j. Furthermore, the static user j obtains an outdated version of the dynamic users' channels $h_i$, and the dynamic user i obtains an outdated version of the static users' channels $g_j$ (completely stale) [11]. CSIT for each user can take one of the following forms:
• Perfect CSIT: the channel vectors $g_j(t)$, $h_i(t)$ are available at the transmitter instantaneously and perfectly.
• Delayed CSIT: the channel vectors $g_j(t)$, $h_i(t)$ are available at the transmitter only after they change independently in the following block (completely stale [11]).
• No CSIT: the channel vectors $g_j(t)$, $h_i(t)$ are not known at the transmitter.
We consider the broadcast channel with private messages for all users and no common messages. More specifically, we assume that the independent messages $M_j' \in [1:2^{nR_j'(\rho)}]$, $M_i \in [1:2^{nR_i(\rho)}]$ associated with rates $R_j'(\rho)$, $R_i(\rho)$ are communicated from the transmitter to the static user j and dynamic user i, respectively, at signal-to-noise ratio ρ. The degrees of freedom of the static and dynamic users achieving rates $R_j'(\rho)$, $R_i(\rho)$ can be defined as

$$d_j' = \lim_{\rho\to\infty} \frac{R_j'(\rho)}{\log(\rho)}, \quad j = 1,\dots,m', \qquad d_i = \lim_{\rho\to\infty} \frac{R_i(\rho)}{\log(\rho)}, \quad i = 1,\dots,m. \qquad (2)$$
The degrees of freedom region is defined as

$$\mathcal{D} = \Big\{ (d_1',\dots,d_{m'}', d_1,\dots,d_m) \in \mathbb{R}_+^{m'+m} :\ \exists\, (R_1'(\rho),\dots,R_{m'}'(\rho), R_1(\rho),\dots,R_m(\rho)) \in \mathcal{C}(\rho),\ d_j' = \lim_{\rho\to\infty} \tfrac{R_j'(\rho)}{\log(\rho)},\ d_i = \lim_{\rho\to\infty} \tfrac{R_i(\rho)}{\log(\rho)},\ j = 1,\dots,m',\ i = 1,\dots,m \Big\}, \qquad (3)$$
where $\mathcal{C}(\rho)$ is the capacity region at signal-to-noise ratio ρ. The sum degrees of freedom is defined as

$$d_{\text{sum}} = \lim_{\rho\to\infty} \frac{C_{\text{sum}}(\rho)}{\log(\rho)}, \qquad (4)$$

where

$$C_{\text{sum}}(\rho) = \max \sum_{j=1}^{m'} R_j'(\rho) + \sum_{i=1}^{m} R_i(\rho). \qquad (5)$$
² Also, the coherence times of all channels are globally known at the transmitter and at the users.
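As an aside, the pre-log limit in (2) can be checked numerically. The sketch below is ours, not part of the paper: it approximates $d = \lim R(\rho)/\log(\rho)$ by a finite-difference slope at large SNR, using two illustrative rate models (a coherent link and a non-coherent block-fading link with coherence time T).

```python
import numpy as np

# Numerical illustration of the degrees-of-freedom definition in (2):
# d = lim_{rho -> inf} R(rho) / log(rho), approximated by the slope of R
# against log(rho) at large SNR. Both rate models are our own toy choices.

def dof_slope(rate_fn, rho1=1e6, rho2=1e8):
    """Finite-difference estimate of the pre-log factor of rate_fn."""
    return (rate_fn(rho2) - rate_fn(rho1)) / (np.log(rho2) - np.log(rho1))

h = 0.7 + 0.2j                       # an arbitrary fixed channel coefficient
coherent = lambda rho: np.log(1.0 + rho * abs(h) ** 2)
print(dof_slope(coherent))           # ~1.0: one degree of freedom

# A non-coherent block-fading user loses one pilot symbol per block of
# length T, so its rate behaves like (1 - 1/T) log(rho) at high SNR.
T = 10
noncoh = lambda rho: (1 - 1 / T) * np.log(rho)
print(dof_slope(noncoh))             # ~0.9 = 1 - 1/T
```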
Fig. 2. Discrete memoryless multiuser multilevel broadcast channel.
In the sequel, we study the degrees of freedom of the above MISO broadcast channel under different CSIT scenarios: perfect, delayed, or no CSIT.
III. NO CSIT FOR ALL USERS
In this section, we study the broadcast channel defined in Section II when there is no CSIT for
all users. In particular, we give outer and achievable degrees of freedom regions in Section III-B
and Section III-C, respectively. The outer degrees of freedom region is based on the construction
of an outer bound on the rates of a multiuser multilevel discrete memoryless channel that is given
in Section III-A.
A. Multiuser Multilevel Broadcast Channel
The multilevel broadcast channel was introduced by Borade et al. [21] as a three-user discrete memoryless broadcast channel where two of the users are degraded with respect to each
other. The capacity of this channel under degraded message sets was established by Nair and El
Gamal [22]. Here, we study a multiuser multilevel broadcast channel with two sets of degraded
users (see Fig. 2). One set contains m′ users, with received signal $Y_j'$ at user j, and the other set contains m users, with received signal $Y_i$ at user i. Therefore,
$$X \to Y_1' \to Y_2' \to \dots \to Y_{m'}'$$
$$X \to Y_1 \to Y_2 \to \dots \to Y_m \qquad (6)$$
form two Markov chains. We consider a broadcast channel with (m′ + m) private messages and
no common message. An outer bound for the above multilevel broadcast channel is given in the
following theorem.
Theorem 1: The rate region of the multilevel broadcast channel with two sets of degraded
users (Eq. (6)) is outer bounded by the intersection of
$$R_1 \le I(U_{m'}, W; Y_1|V_1) - I(W; Y_{m'}'|U_{m'}), \qquad (7)$$
$$R_i \le I(V_{i-1}; Y_i|V_i), \quad i = 2,\dots,m, \qquad (8)$$
$$R_j' \le I(U_{j-1}; Y_j'|U_j), \quad j = 1,\dots,m'-1, \qquad (9)$$
$$R_{m'}' \le I(W; Y_{m'}'|U_{m'}) + I(X; Y_{m'}'|U_{m'}, W) - I(X; Y_{m'}'|U_{m'-1}), \qquad (10)$$

and

$$R_i \le I(\tilde{U}_{i-1}; Y_i|\tilde{U}_i), \quad i = 1,\dots,m-1, \qquad (11)$$
$$R_m \le I(\tilde{W}; Y_m|\tilde{U}_m) + I(X; Y_m|\tilde{U}_m, \tilde{W}) - I(X; Y_m|\tilde{U}_{m-1}), \qquad (12)$$
$$R_1' \le I(\tilde{U}_m, \tilde{W}; Y_1'|\tilde{V}_1) - I(\tilde{W}; Y_m|\tilde{U}_m), \qquad (13)$$
$$R_j' \le I(\tilde{V}_{j-1}; Y_j'|\tilde{V}_j), \quad j = 2,\dots,m', \qquad (14)$$

for some pmf

$$p(u_1,\dots,u_{m'}, \tilde{u}_1,\dots,\tilde{u}_m, v_1,\dots,v_m, \tilde{v}_1,\dots,\tilde{v}_{m'}, w, \tilde{w}, x), \qquad (15)$$

where

$$U_{m'} \to \dots \to U_1 \to X \to (Y_1,\dots,Y_m, Y_1',\dots,Y_{m'}')$$
$$V_m \to \dots \to V_1 \to (W, U_{m'}) \to X \to (Y_1,\dots,Y_m, Y_1',\dots,Y_{m'}')$$
$$\tilde{U}_m \to \dots \to \tilde{U}_1 \to X \to (Y_1,\dots,Y_m, Y_1',\dots,Y_{m'}')$$
$$\tilde{V}_{m'} \to \dots \to \tilde{V}_1 \to (\tilde{W}, \tilde{U}_m) \to X \to (Y_1,\dots,Y_m, Y_1',\dots,Y_{m'}') \qquad (16)$$

form Markov chains, and $U_0 = \tilde{U}_0 \triangleq X$.
Proof: See Appendix I.
Remark 1: Theorem 1 is an extension of the Körner-Marton outer bound [27, Theorem 5] to
more than two users, and it recovers the Körner-Marton bound when m = m′ = 1.
Remark 2: For the multiuser multilevel broadcast channel characterized by (6), we establish
the capacity for degraded message sets in Appendix II, where one common message is communicated to all receivers and one further private message is communicated to one receiver.
B. Outer Degrees of Freedom Region
In the sequel, we give an outer bound on the degrees of freedom of the broadcast channel
defined in Section II when there is no CSIT for all users. The outer bound development depends
on the results of Theorem 1 in Section III-A.
Theorem 2: An outer bound on the degrees of freedom region of the fading broadcast channel
characterized by Eq. (1) without CSIT is,
$$\sum_{j=1}^{m'} d_j' \le 1, \qquad (17)$$
$$\sum_{i=1}^{m} d_i \le 1 - \frac{1}{T}, \qquad (18)$$
$$\sum_{j=1}^{m'} d_j' + \sum_{i=1}^{m} d_i \le \begin{cases} 1, & T = T',\ \Delta T = 0, \\ \frac{4}{3}, & \text{otherwise}, \end{cases} \qquad (19)$$
where ∆T is the offset between the two coherence intervals.
Proof: Equations (17) and (18) are outer bounds for a broadcast channel whose users are
either all homogeneously static or all homogeneously dynamic [7], [18]. The remainder of the
proof is dedicated to establishing (19). We enhance the channel by giving all users global CSIR.
When T′ = T and ∆T = 0, (19) follows directly from [7], [18]. When T′ ≠ T or ∆T ≠ 0,
having no CSIT, the channel belongs to the class of multiuser multilevel broadcast channels in
Section III-A. We then use the two outer bounds developed for the multilevel broadcast channels
to generate two degrees of freedom bounds, and merge them to get the desired result.
We begin with the outer bound described in (7)-(10); we combine these equations to obtain partial sum-rate bounds on the static ($\sum_j R_j'$) and dynamic ($\sum_i R_i$) receivers:
$$\sum_{j=1}^{m'} R_j' \le \sum_{j=1}^{m'-1} I(U_{j-1}; y_j'|U_j, H) + I(W; y_{m'}'|U_{m'}, H) + I(x; y_{m'}'|U_{m'}, W, H) - I(x; y_{m'}'|U_{m'-1}, H)$$
$$= \sum_{j=1}^{m'-1} \big[ h(y_j'|U_j, H) - h(y_j'|U_{j-1}, H) \big] + I(W; y_{m'}'|U_{m'}, H) + h(y_{m'}'|U_{m'}, W, H) - h(y_{m'}'|U_{m'-1}, H) + o(\log(\rho)) \qquad (20)$$
$$= I(W; y_{m'}'|U_{m'}, H) + h(y_{m'}'|U_{m'}, W, H) + o(\log(\rho)), \qquad (21)$$
where H is the set of all channel vectors, (20) follows from the chain rule, h(yj′ |x, H) =
o(log(ρ)), and (21) follows since the received signals of all static users, yj′ , have the same
statistics [7], [18]. Also, using Theorem 1,
$$\sum_{i=1}^{m} R_i \le I(U_{m'}, W; y_1|V_1, H) - I(W; y_{m'}'|U_{m'}, H) + \sum_{i=2}^{m} I(V_{i-1}; y_i|V_i, H)$$
$$= h(y_1|V_1, H) - h(y_1|U_{m'}, W, H) - I(W; y_{m'}'|U_{m'}, H) + \sum_{i=2}^{m} \big[ h(y_i|V_i, H) - h(y_i|V_{i-1}, H) \big] \qquad (22)$$
$$= -h(y_1|U_{m'}, W, H) - I(W; y_{m'}'|U_{m'}, H) + h(y_m|V_m, H) + o(\log(\rho)) \qquad (23)$$
$$\le -h(y_1|U_{m'}, W, H) - I(W; y_{m'}'|U_{m'}, H) + \log(\rho) + o(\log(\rho)), \qquad (24)$$
where (22) follows from the chain rule, (23) follows since the $y_i$ have the same statistics, and (24) follows since $h(y_m|V_m, H) \le n\log(\rho) + o(\log(\rho))$. Define $Y_{j,k}'$ to be the received signal of user j at time instance k. From (21) and (24), we can obtain the bound (27) on the rates:
$$\frac{1}{2}\sum_{j=1}^{m'} R_j' + \sum_{i=1}^{m} R_i \le \frac{1}{2} I(W; y_{m'}'|U_{m'}, H) + \frac{1}{2} h(y_{m'}'|U_{m'}, W, H) - h(y_1|U_{m'}, W, H) - I(W; y_{m'}'|U_{m'}, H) + \log(\rho) + o(\log(\rho))$$
$$\le \frac{1}{2} h(y_{m'}'|U_{m'}, W, H) - h(y_1|U_{m'}, W, H) + \log(\rho) + o(\log(\rho))$$
$$\le \frac{1}{2} h(y_{m'}', y_1|U_{m'}, W, H) - h(y_1|U_{m'}, W, H) + \log(\rho) + o(\log(\rho)) \qquad (25)$$
$$\le \sum_{k=1}^{n} \Big[ \frac{1}{2} h(y_{m',k}', y_{1,k}\,|\,U_{m'}, W, H, y_{m',1}',\dots,y_{m',k-1}', y_{1,1},\dots,y_{1,k-1}) - h(y_{1,k}\,|\,U_{m'}, W, H, y_{m',1}',\dots,y_{m',k-1}', y_{1,1},\dots,y_{1,k-1}) \Big] + \log(\rho) + o(\log(\rho)) \qquad (26)$$
$$\le \max_{\mathrm{Tr}\{\Sigma_x\}\le\rho,\ \Sigma_x \succeq 0} \mathbb{E}_H \Big[ \frac{1}{2}\log|I + H\Sigma_x H^{\dagger}| - \log(1 + h_1^{\dagger}\Sigma_x h_1) \Big] + \log(\rho) + o(\log(\rho)), \qquad (27)$$
where (25) and (26) follow from the chain rule and the fact that conditioning does not increase differential entropy, and (27) follows from the extremal entropy inequality [23], [24], [28]. In order to bound (27), we use a specialization of [29, Lemma 3] as follows.
Lemma 1: Consider two random matrices $H_1 \in \mathbb{C}^{N_1\times N_t}$ and $H_2 \in \mathbb{C}^{N_2\times N_t}$, where $N_1 \ge N_2$. For a covariance matrix $\Sigma_x$, where $\mathrm{Tr}\{\Sigma_x\} \le \rho$, we have

$$\max_{\Sigma_x} \Big[ \frac{1}{\min\{N_t, N_1\}} \log|I + H_1\Sigma_x H_1^{\dagger}| - \frac{1}{\min\{N_t, N_2\}} \log|I + H_2\Sigma_x H_2^{\dagger}| \Big] \le o(\log(\rho)). \qquad (28)$$

The proof of Lemma 1 is omitted as it directly follows from [29, Lemma 3]. Lemma 1 yields the following outer bound on the degrees of freedom:
$$\frac{1}{2}\sum_{j=1}^{m'} d_j' + \sum_{i=1}^{m} d_i \le 1. \qquad (29)$$
We now repeat the exercise of bounding the sum rates and deriving degrees of freedom, this time starting from (11)-(14). By following bounding steps parallel to (21), (24), (27),

$$\sum_{j=1}^{m'} d_j' + \frac{1}{2}\sum_{i=1}^{m} d_i \le 1. \qquad (30)$$
Adding (29) and (30) yields the outer bound (19), completing the proof of Theorem 2.
C. Achievable Degrees of Freedom Region
Theorem 3: The fading broadcast channel described by Eq. (1) can achieve the following
degrees of freedom without CSIT:
$$\sum_{i=1}^{m} d_i \le 1 - \frac{1}{T}, \qquad (31)$$
$$\sum_{j=1}^{m'} d_j' + \sum_{i=1}^{m} d_i \le 1. \qquad (32)$$
Proof: The achievable scheme uses product superposition [17], [18], where the transmitter uses one antenna to send the super symbol to two users: one dynamic and one static,

$$x^{\dagger} = x_s x_d^{\dagger}, \qquad (33)$$

where $x_s \in \mathbb{C}$ is a symbol intended for the static user, and

$$x_d^{\dagger} = [x_{\tau},\ x_{\delta}^{\dagger}], \qquad (34)$$
where $x_{\tau} \in \mathbb{C}$ is a pilot and $x_{\delta} \in \mathbb{C}^{T-1}$ is a super symbol intended for the dynamic user. Since
degrees of freedom analysis is insensitive to the additive noise, we omit the noise component
in the following.
$$y^{\dagger} = h x_s [x_{\tau},\ x_{\delta}^{\dagger}] = [\tilde{h} x_{\tau},\ \tilde{h} x_{\delta}^{\dagger}], \qquad (35)$$

where $\tilde{h} = h x_s$. The dynamic user estimates the equivalent channel $\tilde{h}$ during the first time instance and then decodes $x_{\delta}$ coherently based on the channel estimate. The static receiver only utilizes the received signal during the first time instance:

$$y_1' = g x_s. \qquad (36)$$
Knowing its channel gain g, the static receiver can decode $x_s$. The achievable degrees of freedom of the two users are

$$(d', d) = \Big( \frac{1}{T},\ 1 - \frac{1}{T} \Big). \qquad (37)$$
We now proceed to prove that the degrees of freedom region characterized by (31) and (32) can be achieved via a combination of the two-user product superposition strategies outlined above and single-user strategies. For clarity of exposition we refer to (31), which describes the degrees of freedom constraints of the dynamic receivers, as the non-coherent bound, and to (32) as the coherent bound. The non-negativity of degrees of freedom restricts them to the non-negative orthant $\mathbb{R}_+^{m'+m}$. The intersection of the coherent bound and the non-negative orthant is an $(m'+m)$-simplex that has $m + m' + 1$ vertices. The non-coherent bound is a hyperplane that partitions the simplex, with $m'+1$ vertices on one side of the non-coherent bound and m on the other. Therefore the intersection of the simplex with the non-coherent bound produces a polytope with $(m'+1)(m+1)$ vertices.³ For illustration, see Fig. 3 showing the three-user degrees of freedom with two static users and Fig. 4 with one static user.

³ This can be verified with a simple counting exercise involving the number of edges of the simplex that cross the non-coherent bound.

Fig. 3. Achievable degrees of freedom region of one dynamic and two static users.
Fig. 4. Achievable degrees of freedom region of one static and two dynamic users.

We now verify that each of the $(m'+1)(m+1)$ vertices can be achieved with either a single-user strategy or a two-user product superposition strategy:
• m′ vertices corresponding to single-user transmission to each static user j, achieving one degree of freedom.
• m vertices corresponding to single-user transmission to each dynamic user i, achieving $(1 - \frac{1}{T})$ degrees of freedom.
• m′m vertices corresponding to product superposition applied to all possible pairs of static and dynamic users, achieving $\frac{1}{T}$ degrees of freedom for one static user and $(1 - \frac{1}{T})$ degrees of freedom for one dynamic user.
• One trivial vertex at the origin, corresponding to no transmission, achieving zero degrees of freedom for all users.
Hence, the number of the vertices is m′ + m + m′ m + 1 = (m + 1)(m′ + 1). This completes the
achievability proof of Theorem 3.
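The vertex enumeration in the proof can be mirrored in code. The following sketch is ours: it lists the $(m'+1)(m+1)$ vertices built from single-user and two-user product superposition strategies and checks each against the non-coherent bound (31) and the coherent bound (32).

```python
import numpy as np
from itertools import product

# Enumerate the vertices used in the proof of Theorem 3 and verify
# feasibility against (31) and (32). Purely illustrative; names are ours.

def vertices(m_static, m_dyn, T):
    vs = [np.zeros(m_static + m_dyn)]                    # origin
    for j in range(m_static):                            # single static user
        v = np.zeros(m_static + m_dyn); v[j] = 1.0; vs.append(v)
    for i in range(m_dyn):                               # single dynamic user
        v = np.zeros(m_static + m_dyn); v[m_static + i] = 1 - 1/T; vs.append(v)
    for j, i in product(range(m_static), range(m_dyn)):  # superposition pairs
        v = np.zeros(m_static + m_dyn)
        v[j], v[m_static + i] = 1/T, 1 - 1/T
        vs.append(v)
    return vs

m_s, m_d, T = 2, 1, 10
vs = vertices(m_s, m_d, T)
assert len(vs) == (m_s + 1) * (m_d + 1)          # vertex count from the proof
for v in vs:
    assert v[m_s:].sum() <= 1 - 1/T + 1e-12      # non-coherent bound, Eq. (31)
    assert v.sum() <= 1 + 1e-12                  # coherent bound, Eq. (32)
print(len(vs), "vertices, all feasible")
```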
Remark 3: When the static and dynamic users have the same coherence time, the inner and
outer bounds on degrees of freedom coincide. In this case it is degrees of freedom optimal to
serve two users at a time (one dynamic and one static).
IV. DELAYED CSIT FOR ALL USERS
Under delayed CSIT, the transmitter knows each channel gain only after it is no longer valid.
This condition is also known as outdated CSIT. We begin by proving inner and outer bounds
when transmitting only to static users, only to dynamic users, and to one static and one dynamic
user. We then synthesize this collection of bounds into an overall degrees of freedom region.
A. Transmission to Static Users
Theorem 4: The degrees of freedom region of the fading broadcast channel characterized by Eq. (1) with delayed CSIT and having m′ static users and no dynamic users is

$$d_j' \le \frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m'}}, \qquad j = 1, \dots, m'. \qquad (38)$$

Proof: The special case of fast fading (T′ = 1) was discussed by Maddah-Ali and Tse in [11], where the achievability was established by retrospective interference alignment that
aligns the interference using the outdated CSIT, and the converse was proved by generating an improved channel without CSIT whose degrees of freedom region is achieved by TDMA, according to the results in [2], [3]. For T′ ≥ 1, the achievability is established by employing
retrospective interference alignment presented in [11] over super symbols each of length T ′ .
The converse is proved by following the same procedures in [11] to generate a block-fading
improved channel without CSIT and with identical coherence intervals of length T ′ . According
to the results of [7], [18], TDMA achieves the degrees of freedom region of the improved channel.
B. Transmission to Dynamic Users
Theorem 5: The fading broadcast channel characterized by Eq. (1), with delayed CSIT and having m dynamic users and no static users, can achieve the degrees of freedom

$$d_i \le \frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m}} \Big( 1 - \frac{m}{T} \Big), \qquad i = 1, \dots, m. \qquad (39)$$
An outer bound on the degrees of freedom region is

$$d_i \le 1 - \frac{1}{T}, \qquad (40)$$
$$\sum_{i=1}^{m} d_i \le \frac{m}{1 + \frac{1}{2} + \dots + \frac{1}{m}}. \qquad (41)$$

Proof: The achievability part can be proved as follows. At the beginning of each super symbol, m pilots are sent for channel estimation. Then retrospective interference alignment in [11] over super symbols is employed during the remaining (T − m) instances, to achieve (39).
For the converse part, (41) is proved by giving the users global CSIR, and then applying
Theorem 4. Moreover, (40) is the single-user bound for each dynamic user that can be proved as
follows. For a single user with delayed CSIT, feedback does not increase the capacity [30], and
consequently the assumption of delayed CSIT can be removed. Hence, the single-user bound for
each dynamic user with delayed CSIT is the same as the single-user bound without CSIT [26].
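The harmonic-sum expressions in Theorems 4 and 5 are easy to tabulate. The helper below is a sketch of ours: it evaluates the per-user achievable degrees of freedom of (39) and the sum outer bound (41), making the vanishing gap as T grows explicit.

```python
# h_sum(m) is the harmonic sum 1 + 1/2 + ... + 1/m appearing in (38)-(41).

def h_sum(m):
    return sum(1.0 / k for k in range(1, m + 1))

def dyn_dof_achievable(m, T):
    """Per-user achievable DoF of (39): (1 - m/T) / (1 + 1/2 + ... + 1/m)."""
    return (1 - m / T) / h_sum(m)

def dyn_sum_dof_outer(m):
    """Sum-DoF outer bound (41): m / (1 + 1/2 + ... + 1/m)."""
    return m / h_sum(m)

m, T = 3, 30
print(m * dyn_dof_achievable(m, T))   # achievable sum DoF: (m/h_m)(1 - m/T)
print(dyn_sum_dof_outer(m))           # outer bound; the gap shrinks as T grows
```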
C. Transmission to One Static and One Dynamic User
Theorem 6: The fading broadcast channel characterized by Eq. (1), with delayed CSIT and having one static and one dynamic user, can achieve the following degrees of freedom

$$D_1: (d', d) = \Big( \frac{2}{3}\Big(1 + \frac{1}{T}\Big),\ \frac{2}{3}\Big(1 - \frac{2}{T}\Big) \Big), \qquad (42)$$
$$D_2: (d', d) = \Big( \frac{1}{T},\ 1 - \frac{1}{T} \Big). \qquad (43)$$

Furthermore, the achievable degrees of freedom region is the convex hull of the above degrees of freedom pairs.
Proof: From Section III-C, product superposition achieves the pair (43), which does not require CSIT for either of the two users. The remainder of the proof is dedicated to the achievability of the
pair (42). We provide a transmission scheme based on retrospective interference alignment [11]
along with product superposition.
1) The transmitter first emits a super-symbol intended for the static user:

$$X_1 = [X_{1,1}, \dots, X_{1,\ell}], \qquad (44)$$

where $\ell = \frac{T'}{T}$, and each $X_{1,n} \in \mathbb{C}^{2\times T}$ occupies T time instances and has the following structure:

$$X_{1,n} = [\bar{U}_n,\ \bar{U}_n U_n], \qquad n = 1, \dots, \ell, \qquad (45)$$
both the diagonal matrix $\bar{U}_n \in \mathbb{C}^{2\times 2}$ and $U_n \in \mathbb{C}^{2\times(T-2)}$ contain symbols intended for the static user. The components of $y_1'^{\dagger} = [y_{1,1}'^{\dagger}, \dots, y_{1,\ell}'^{\dagger}]$ are:

$$y_{1,n}'^{\dagger} = [g_1^{\dagger}\bar{U}_n,\ g_1^{\dagger}\bar{U}_n U_n] = [\tilde{g}_{1,n}^{\dagger},\ \tilde{g}_{1,n}^{\dagger} U_n], \qquad n = 1, \dots, \ell, \qquad (46)$$
where $\tilde{g}_{1,n}^{\dagger} = g_1^{\dagger}\bar{U}_n$. The static user by definition knows $g_1$, so it can decode $\bar{U}_n$, which yields $2\frac{T'}{T}$ degrees of freedom. The remaining $\frac{T'}{T}(T-2)$ observations in $\tilde{g}_{1,n}^{\dagger} U_n$ involve $2\frac{T'}{T}(T-2)$ unknowns, so they require a further $\frac{T'}{T}(T-2)$ independent observations for reliable decoding.
The components of $y_1^{\dagger} = [y_{1,1}^{\dagger}, \dots, y_{1,\ell}^{\dagger}]$ are

$$y_{1,n}^{\dagger} = [h_{1,n}^{\dagger}\bar{U}_n,\ h_{1,n}^{\dagger}\bar{U}_n U_n] = [\tilde{h}_{1,n}^{\dagger},\ \tilde{h}_{1,n}^{\dagger} U_n], \qquad n = 1, \dots, \ell, \qquad (47)$$

where $\tilde{h}_{1,n}^{\dagger} = h_{1,n}^{\dagger}\bar{U}_n$ is the equivalent channel estimated by the dynamic user. The dynamic user saves $\tilde{h}_{1,n}^{\dagger} U_n$ for interference cancellation in the upcoming steps.
2) The transmitter sends a second super symbol intended for the dynamic user:

$$X_2 = [X_{2,1}, \dots, X_{2,\ell}], \qquad (48)$$

where

$$X_{2,n} = [\tilde{U}_n,\ \tilde{U}_n V_n], \qquad n = 1, \dots, \ell, \qquad (49)$$
$\tilde{U}_n \in \mathbb{C}^{2\times 2}$ is diagonal and includes 2 independent symbols intended for the static user, and $V_n \in \mathbb{C}^{2\times(T-2)}$ contains independent symbols intended for the dynamic user. The components of $y_2^{\dagger} = [y_{2,1}^{\dagger}, \dots, y_{2,\ell}^{\dagger}]$ are

$$y_{2,n}^{\dagger} = [h_{2,n}^{\dagger}\tilde{U}_n,\ h_{2,n}^{\dagger}\tilde{U}_n V_n] = [\tilde{h}_{2,n}^{\dagger},\ \tilde{h}_{2,n}^{\dagger} V_n], \qquad n = 1, \dots, \ell, \qquad (50)$$

where $\tilde{h}_{2,n}^{\dagger} = h_{2,n}^{\dagger}\tilde{U}_n$ is the equivalent channel estimated by the dynamic user.
The dynamic user saves $\tilde{h}_{2,n}^{\dagger} V_n$, which includes $\frac{T'}{T}(T-2)$ independent observations about $2\frac{T'}{T}(T-2)$ unknowns, and hence an additional $\frac{T'}{T}(T-2)$ observations are needed to decode $V_n$. The components of $y_2'^{\dagger} = [y_{2,1}'^{\dagger}, \dots, y_{2,\ell}'^{\dagger}]$ are

$$y_{2,n}'^{\dagger} = [g_2^{\dagger}\tilde{U}_n,\ g_2^{\dagger}\tilde{U}_n V_n] = [\tilde{g}_{2,n}^{\dagger},\ \tilde{g}_{2,n}^{\dagger} V_n], \qquad n = 1, \dots, \ell, \qquad (51)$$

where $\tilde{g}_{2,n}^{\dagger} = g_2^{\dagger}\tilde{U}_n$ is the equivalent channel estimated by the static user; the static user saves $\tilde{g}_{2,n}^{\dagger} V_n$ for the upcoming steps. Knowing $g_2$, the static user achieves $2\frac{T'}{T}$ further degrees of freedom from decoding $\tilde{U}_n$.
3) The transmitter emits a third super symbol consisting of a linear combination of the signals generated from the first and the second super symbols:

$$X_3 = [X_{3,1}, \dots, X_{3,\ell}], \qquad (52)$$

where

$$X_{3,n} = [\hat{U}_n,\ \hat{U}_n(\tilde{h}_{1,n}^{\dagger} U_n + \tilde{g}_{2,n}^{\dagger} V_n)], \qquad n = 1, \dots, \ell, \qquad (53)$$

$\hat{U}_n \in \mathbb{C}^{2\times 2}$ is diagonal and contains 2 independent symbols intended for the static user, and hence the static user achieves a further $2\frac{T'}{T}$ degrees of freedom.
The static user cancels $\tilde{g}_{2,n}^{\dagger} V_n$, saved during the second super symbol, and obtains $\tilde{h}_{1,n}^{\dagger} U_n$, which includes the additional independent $\frac{T'}{T}(T-2)$ observations needed for decoding $U_n$. Therefore, the static user achieves $2\frac{T'}{T}(T-2)$ further degrees of freedom. The dynamic user estimates the equivalent channel $\tilde{h}_{3,n}^{\dagger} = h_{3,n}^{\dagger}\hat{U}_n$, cancels $\tilde{h}_{1,n}^{\dagger} U_n$, saved during the first super symbol, and obtains $\tilde{g}_{2,n}^{\dagger} V_n$, which contains the additional observations needed for decoding $V_n$. Hence, the dynamic user achieves $2\frac{T'}{T}(T-2)$ degrees of freedom.
In aggregate, over 3T′ time instants, the static and dynamic user decode symbol totals of

$$d' = 6\frac{T'}{T} + 2\frac{T'}{T}(T-2), \qquad d = 2\frac{T'}{T}(T-2), \qquad (54)$$

respectively; normalizing these totals by the 3T′ time instants yields the pair in (42). This completes the proof of Theorem 6.
Theorem 7: An outer bound on the degrees of freedom region of the fading broadcast channel characterized by Eq. (1), with one static and one dynamic user having delayed CSIT, is

$$\frac{d'}{2} + d \le 1, \qquad (55)$$
$$d' + \frac{d}{2} \le 1, \qquad (56)$$
$$d \le 1 - \frac{1}{T}. \qquad (57)$$
Proof: The inequality (57) represents the single-user outer bound [26]. We prove the
d≤1−
bound (55) as follows. We enhance the original channel by giving both users global CSIR. In
addition, the channel output of the dynamic user, y(t), is given to the static user. Therefore,
the channel outputs at time instant t are (y ′(t), y(t), H) at the static user, and (y(t), H) at the
dynamic user. The enhanced channel is physically degraded [31], [32], hence, removing the
delayed CSIT does not reduce the capacity [33]. Also,
$$R' \le I(x(t); y'(t), y(t)|U, H) = h(y'(t), y(t)|U, H) - h(y'(t), y(t)|U, x(t), H)$$
$$R \le I(U; y(t)|H) = h(y(t)|H) - h(y(t)|U, H), \qquad (58)$$
where U is an auxiliary random variable, and U → x → (y ′(t), y(t)) forms a Markov chain.
Therefore,

$$\frac{R'}{2} + R \le h(y(t)|H) + \frac{1}{2} h(y'(t), y(t)|U, H) - h(y(t)|U, H) + o(\log(\rho))$$
$$\le \log(\rho) + \frac{1}{2} h(y'(t), y(t)|U, H) - h(y(t)|U, H) + o(\log(\rho)) \qquad (59)$$
$$\le \log(\rho) + \max_{\mathrm{Tr}\{\Sigma_x\}\le\rho,\ \Sigma_x \succeq 0} \mathbb{E}_H \Big[ \frac{1}{2}\log|I + H\Sigma_x H^{\dagger}| - \log(1 + h^{\dagger}(t)\Sigma_x h(t)) \Big] + o(\log(\rho)) \qquad (60)$$
$$\le \log(\rho) + o(\log(\rho)), \qquad (61)$$
where (59) follows since $h(y(t)|H) \le \log(\rho) + o(\log(\rho))$ [34], (60) follows from the extremal entropy inequality [23], [24], [29], and (61) follows from Lemma 1. Hence, the bound (55) is proved. A similar argument, with the roles of the two users reversed, leads to the bound (56).
Remark 4: The inner and outer bounds obtained for the two-user case partially meet, with
the gap diminishing with the coherence time of the dynamic user as shown in Fig. 5 and Fig. 6
for T = 15 and T = 30, respectively.
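Under our reading of Theorems 6 and 7, the gap in Remark 4 can be computed explicitly: the corner $D_1$ of (42) attains sum degrees of freedom $\frac{4}{3} - \frac{2}{3T}$, while adding (55) and (56) caps the sum at $\frac{4}{3}$. The small script below (ours) evaluates the gap for the T values of Fig. 5 and Fig. 6.

```python
from fractions import Fraction as F

# Sum-DoF gap behind Remark 4: achievable corner D1 of (42) versus the
# outer bound implied by (55)-(56), whose best sum is 4/3.

def sum_dof_gap(T):
    achievable = F(2, 3) * (1 + F(1, T)) + F(2, 3) * (1 - F(2, T))  # D1 of (42)
    outer = F(4, 3)      # (55)+(56) give (3/2)(d' + d) <= 2, i.e. sum <= 4/3
    return outer - achievable

for T in (15, 30):
    print(T, sum_dof_gap(T))   # gap = 2/(3T): shrinks with the coherence time
```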
D. Transmission to an Arbitrary Number of Static and Dynamic Users
Theorem 8: The fading broadcast channel characterized by Eq. (1), with delayed CSIT, can achieve the multiuser degrees of freedom characterized by the vectors $D_i$:

$$D_1: \ \frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m'}} \sum_{j=1}^{m'} e_j^{\dagger}, \qquad (62)$$
$$D_2, \dots, D_{mm'+1}: \ \frac{2}{3}\Big(1 + \frac{1}{T}\Big) e_j^{\dagger} + \frac{2}{3}\Big(1 - \frac{2}{T}\Big) e_{m'+i}^{\dagger}, \qquad j = 1, \dots, m',\ i = 1, \dots, m, \qquad (63)$$
$$D_{mm'+2}, \dots, D_{mm'+m'+2}: \ \frac{m}{T}\, e_j^{\dagger} + \frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m}}\Big(1 - \frac{m}{T}\Big) \sum_{i=1}^{m} e_i^{\dagger}, \qquad j = 1, \dots, m', \qquad (64)$$

where $e_j$ is the canonical coordinate vector. Their convex hull characterizes an achievable degrees of freedom region.
Proof: The achievability of (62) was proved in Section IV-A via multiuser transmission to
static users. The achievability of (63) was proved in Section IV-C, via a two-user transmission
to a dynamic-static pair.
Fig. 5. One static and one dynamic user with delayed CSIT and T = 15.
Fig. 6. One static and one dynamic user with delayed CSIT and T = 30.
We now show the achievability of (64) via retrospective interference alignment [11] along with product superposition. Over a super symbol of length T, consider the following transmission:

$$X = [U,\ UV], \qquad (65)$$

where $U \in \mathbb{C}^{m\times m}$ is diagonal and includes m independent symbols intended for the static user j, and $V \in \mathbb{C}^{m\times(T-m)}$ is a super symbol containing independent symbols intended for the dynamic users according to retrospective interference alignment [11]. Therefore, the static user decodes U. Thus, over T time instants, the static user achieves m degrees of freedom and the dynamic users achieve $\frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m}}(T - m)$, hence (64) is achieved.
Theorem 9: An outer bound on the degrees of freedom of the fading broadcast channel characterized by Eq. (1), with delayed CSIT, is

$$\sum_{j=1}^{m'} \frac{d_j'}{m'+m} + \sum_{i=1}^{m} \frac{d_i}{m} \le 1, \qquad (66)$$
$$\sum_{j=1}^{m'} \frac{d_j'}{m'} + \sum_{i=1}^{m} \frac{d_i}{m'+m} \le 1, \qquad (67)$$
$$d_j' \le 1, \qquad \forall j = 1, \dots, m', \qquad (68)$$
$$d_i \le 1 - \frac{1}{T}, \qquad \forall i = 1, \dots, m. \qquad (69)$$

Proof: The inequalities (68), (69) represent the single-user bounds on the static and the dynamic users, respectively [26], [34]. The remainder of the proof is dedicated to establishing the bounds (66) and (67).
We enhance the channel by providing global CSIR as well as allowing full cooperation among
static users and full cooperation among dynamic users. The enhanced channel is equivalent to
a broadcast channel with two users: one static equipped with m′ antennas and one dynamic
′
equipped with m antennas. Define Y ′ ∈ Cm and Y ∈ Cm to be the received signals of the
static and the dynamic super-user, respectively, in the enhanced channel. We further enhance
the channel by giving Y to the static user, generating a physically degraded channel since
X → (Y ′ , Y) → Y forms a Markov chain. Feedback including delayed CSIT has no effect
on capacity [33], therefore we remove it from consideration. Subsequently, we can utilize the
Körner-Marton outer bound [27],
$$\sum_{j=1}^{m'} R_j' \le I(X; Y', Y|U, H), \qquad \sum_{i=1}^{m} R_i \le I(U; Y|H). \qquad (70)$$
Therefore, from applying the extremal entropy inequality [23], [29], [35] and Lemma 1,

$$\sum_{j=1}^{m'} \frac{R_j'}{m'+m} + \sum_{i=1}^{m} \frac{R_i}{m} \le \frac{1}{m'+m} I(X; Y', Y|U, H) + \frac{1}{m} I(U; Y|H)$$
$$= \frac{1}{m'+m} h(Y', Y|U, H) + o(\log(\rho)) + \frac{1}{m} h(Y|H) - \frac{1}{m} h(Y|U, H)$$
$$\le \log(\rho) + o(\log(\rho)). \qquad (71)$$
Therefore, the bound (66) is proved. Similarly, we can prove the bound (67) using the same
steps after switching the roles of the two users in the enhanced channel.
V. HYBRID CSIT: PERFECT CSIT FOR THE STATIC USERS AND NO CSIT FOR THE DYNAMIC USERS
Theorem 10: The fading broadcast channel characterized by Eq. (1), with perfect CSIT for the static users and no CSIT for the dynamic users, can achieve the following multiuser degrees of freedom,

$$D_1: \ \sum_{j=1}^{m'} e_j^{\dagger}, \qquad (72)$$
$$D_2, \dots, D_{m+1}: \ \frac{1}{T} \sum_{j=1}^{m'} e_j^{\dagger} + \Big(1 - \frac{1}{T}\Big) e_i^{\dagger}, \qquad i = 1, \dots, m. \qquad (73)$$
Therefore, their convex hull is also achievable.
Proof: $D_1$ is achieved by inverting the channels of the static users at the transmitter, so that every static user achieves one degree of freedom. $D_2, \dots, D_{m+1}$ in (73) are achieved using product superposition along with channel inversion as follows. The transmitted signal over T instants is

$$X = [u,\ uv^{\dagger}], \qquad (74)$$

where $u = \sum_{j=1}^{m'} b_j u_j$, $u_j$ is a symbol intended for the static user j, the beamformers satisfy $g_j^{\dagger} b_{j'} = 0$ for $j' \ne j$, and $v \in \mathbb{C}^{T-1}$ contains independent symbols intended for the dynamic user i. Each of the static users receives an interference-free signal during the first time instant, achieving one degree of freedom. The dynamic user estimates its equivalent channel during the first time instant and decodes v during the remaining (T − 1) time instants.
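A possible implementation of the beamformers in this proof is sketched below (ours): each $b_j$ is taken from the null space of the other static users' channels, computed via the SVD, so that static user j sees no inter-static interference in the first slot. The null-space construction is our implementation choice, not prescribed by the paper.

```python
import numpy as np

# Zero-forcing beamformers for the scheme of Theorem 10: b_j is orthogonal
# to the channels of all other static users.

def zf_beamformers(G):
    """G: (m_static x Nt) matrix whose row j is the channel row g_j^H."""
    m, Nt = G.shape
    B = np.zeros((Nt, m), dtype=complex)
    for j in range(m):
        others = np.delete(G, j, axis=0)          # channels to be nulled
        _, _, Vh = np.linalg.svd(others)
        null = Vh[others.shape[0]:].conj().T      # basis of null(others)
        b = null[:, 0]
        B[:, j] = b / (G[j] @ b)                  # normalize: g_j^H b_j = 1
    return B

rng = np.random.default_rng(1)
m, Nt = 3, 5
G = (rng.normal(size=(m, Nt)) + 1j * rng.normal(size=(m, Nt))) / np.sqrt(2)
B = zf_beamformers(G)
print(np.round(np.abs(G @ B), 6))   # ~identity: no inter-static interference
```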
Theorem 11: An outer bound on the degrees of freedom of the fading broadcast channel characterized by Eq. (1), with perfect CSIT for the static users and no CSIT for the dynamic users, is

$$\sum_{j=1}^{m'} \frac{d_j'}{m'+1} + \sum_{i=1}^{m} d_i \le 1, \qquad (75)$$
$$d_j' \le 1, \qquad \forall j = 1, \dots, m', \qquad (76)$$
$$\sum_{i=1}^{m} d_i \le 1 - \frac{1}{T}. \qquad (77)$$

Proof: The inequalities (76) represent single-user bounds for the static users [34], and (77) is a time-sharing outer bound for the dynamic users that was established in [7], [18]. It remains to prove (75), as follows.
We enhance the channel by giving global CSIR to all users and allowing full cooperation
between the static users. This gives rise to an equivalent static user with m′ antennas receiving
Y ′ over an equivalent channel G and noise Z′ . At this point, we have a multi-user system where
CSIT is available with respect to one user, but not others. We then bound the performance
of this system with that of another (similar) system that has no CSIT. To do so, we use the
local statistical equivalence property developed and used in [13], [15], [36]. First, we draw
G̃, Z̃ according to the distribution of G, Z′ and independent of them. We enhance the channel
by providing Ỹ = G̃X + Z̃ to the static receiver and G̃ to all receivers. Because we do not
provide G̃ to the transmitter, there is no CSIT with respect to Ỹ. According to [36], we have
h(Ỹ, Y ′|H) = h(Y ′ |H) + o(log(ρ)), where H = (G, G̃, h1 , . . . , hm ), therefore we can remove
Y ′ from the enhanced channel without reducing its degrees of freedom. This new equivalent
channel has one user with m′ antennas receiving (Ỹ, H), m single-antenna users receiving
$(y_i, H)$, and no CSIT.⁴ Having no CSIT, the enhanced channel is in the form of a multilevel broadcast channel studied in Section III-A, and hence using Theorem 1,

$$\sum_{j=1}^{m'} R_j' \le I(W; \tilde{Y}|U, H) + I(X; \tilde{Y}|U, W, H)$$
$$R_1 \le I(U, W; y_1|V_1, H) - I(W; \tilde{Y}|U, H)$$
$$R_i \le I(V_{i-1}; y_i|V_i, H), \qquad i = 2, \dots, m. \qquad (78)$$

⁴ In the enhanced channel after removal of $Y'$, the transmitter and receivers still share information about G, but this random variable is now independent of all (remaining) transmit and receive variables.
The received signals of the dynamic receivers have the same distribution. By following bounding steps parallel to (22), (23), (24),

$$\sum_{i=1}^{m} R_i \le \log(\rho) + o(\log(\rho)) - I(W; \tilde{Y}|U, H) - h(y_1|U, W, H). \qquad (79)$$
Therefore,

$$\sum_{j=1}^{m'} \frac{R_j'}{m'+1} + \sum_{i=1}^{m} R_i \le \log(\rho) + o(\log(\rho)) + \Big(\frac{1}{m'+1} - 1\Big) I(W; \tilde{Y}|U, H) + \frac{h(\tilde{Y}|U, W, H)}{m'+1} - h(y_1|U, W, H) \qquad (80)$$
$$\le \log(\rho) + o(\log(\rho)) + \frac{h(\tilde{Y}, y_1|U, W, H)}{m'+1} - h(y_1|U, W, H) \qquad (81)$$
$$\le \log(\rho) + o(\log(\rho)), \qquad (82)$$

where the last inequality follows from applying the extremal entropy inequality [23], [29], [35] and Lemma 1. This concludes the proof of the bound (75).
VI. HYBRID CSIT: PERFECT CSIT FOR THE STATIC USERS AND DELAYED CSIT FOR THE DYNAMIC USERS
We begin with inner and outer bounds for one static and one dynamic user, then extend the
result to multiple users. The transmitter knows the channel of the static users perfectly and
instantaneously, and an outdated version of the channel of the dynamic users.
A. Transmitting to One Static and One Dynamic User
Theorem 12: For the fading broadcast channel characterized by Eq. (1) with one static and one dynamic user, with perfect CSIT for the static user and delayed CSIT for the dynamic user, the achievable degrees of freedom region is the convex hull of the vectors

$$D_1: (d', d) = \Big(1 - \frac{1}{2T},\ \frac{1}{2} - \frac{1}{2T}\Big), \qquad (83)$$
$$D_2: (d', d) = \Big(\frac{1}{T},\ 1 - \frac{1}{T}\Big). \qquad (84)$$
Proof: The degrees of freedom (84) can be achieved by product superposition as discussed
in Section III, without CSIT. We proceed to prove the achievability of (83).
1) Consider $[u_1, \dots, u_{T-1}]$ to be a complex $2\times(T-1)$ matrix containing symbols intended for the static user, $[v_1, \dots, v_{T-1}]$ symbols intended for the dynamic user, and $b \in \mathbb{C}^2$ a beamforming vector such that $g^{\dagger} b = 0$. In addition we define $u_0 = 0$, $v_0 = 1$. Using these components, the transmitter constructs and transmits a super-symbol of length T, whose value at time t is:

$$x_1(t) = u_t + b\, v_t. \qquad (85)$$

Note that $x_1(0) = b$ does not carry any information for either user, and serves as a pilot. The received super symbol at the static user is:

$$y_1'^{\dagger} = [0,\ g^{\dagger}u_1, \dots, g^{\dagger}u_{T-1}]. \qquad (86)$$
The received super symbol at the dynamic user is

$$y_1^{\dagger} = [h_1^{\dagger}b,\ (h_1^{\dagger}u_1 + h_1^{\dagger}b\, v_1), \dots, (h_1^{\dagger}u_{T-1} + h_1^{\dagger}b\, v_{T-1})]. \qquad (87)$$

The dynamic user estimates its equivalent channel $h_1^{\dagger}b$ from the received value in the first time instant. The remaining terms include symbols intended for the dynamic user plus some interference, whose cancellation is the subject of the next step.
2) The transmitter next sends a second super symbol of length T,

$$x_2 = [\bar{u},\ \bar{u}(h_1^{\dagger}u_1), \dots, \bar{u}(h_1^{\dagger}u_{T-1})], \qquad (88)$$

where $\bar{u} \in \mathbb{C}$ is a symbol intended for the static user. Hence,

$$y_2^{\dagger} = [h_2\bar{u},\ h_2\bar{u}(h_1^{\dagger}u_1), \dots, h_2\bar{u}(h_1^{\dagger}u_{T-1})]. \qquad (89)$$
The dynamic user estimates the equivalent channel $h_2\bar{u}$ during the first time instant and then acquires $h_1^{\dagger}u_t$, the interference in (87). Therefore, using $y_1, y_2$, the dynamic user solves for $v_t$, achieving $(T-1)$ degrees of freedom. Furthermore,

$$y_2'^{\dagger} = [g_1\bar{u},\ g_1\bar{u}(h_1^{\dagger}u_1), \dots, g_1\bar{u}(h_1^{\dagger}u_{T-1})]. \qquad (90)$$

The static user solves for $\bar{u}$, achieving one degree of freedom, and also uses $h_1^{\dagger}u_t$ to solve for $u_t$, achieving a further $2(T-1)$ degrees of freedom.
In summary, during 2T instants, the static user achieves (2T − 1) degrees of freedom and the
dynamic user achieves (T − 1) degrees of freedom. This shows the achievability of (83).
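As with Theorem 6, the accounting can be checked mechanically. The sketch below is ours: it tallies the symbols decoded over the 2T instants of the two super symbols and confirms the pair $D_1$ in (83).

```python
from fractions import Fraction as F

# Symbol tally for the two-super-symbol scheme of Theorem 12: the static
# user decodes ubar plus the 2x(T-1) block [u_1,...,u_{T-1}]; the dynamic
# user decodes v_1,...,v_{T-1}. Normalizing over 2T instants recovers (83).

def theorem12_pair(T):
    static = 1 + 2 * (T - 1)     # ubar, then u_t in C^2 for t = 1..T-1
    dynamic = T - 1              # v_t for t = 1..T-1
    return F(static, 2 * T), F(dynamic, 2 * T)

T = 12
dp, d = theorem12_pair(T)
assert dp == 1 - F(1, 2 * T)           # 1 - 1/(2T), Eq. (83)
assert d == F(1, 2) - F(1, 2 * T)      # 1/2 - 1/(2T), Eq. (83)
print(dp, d)
```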
Theorem 13: For the fading broadcast channel characterized by Eq. (1) with one static and one dynamic user, where there is perfect CSIT for the static user and delayed CSIT for the dynamic user, an outer bound on the degrees of freedom region is

$$\frac{d'}{2} + d \le 1, \qquad (91)$$
$$d' \le 1, \qquad (92)$$
$$d \le 1 - \frac{1}{T}. \qquad (93)$$

Proof: The inequalities (92) and (93) represent the single-user outer bounds [26], [34]. It only remains to prove the outer bound (91), as follows.
1) We enhance the channel by giving global CSIR to both users and also give y to the static user. The enhanced channel is physically degraded, having $(Y', G)$ at the static user and $(y, G)$ at the dynamic user, where $Y' \triangleq (y', y)$ and $G \triangleq (h, g)$. In a physically degraded channel, causal feedback (including delayed CSIT) does not affect capacity [33], so we can remove the delayed CSIT with respect to the dynamic user.
2) We now use another enhancement with the motivation to remove the remaining CSIT (non-causal, with respect to the static user). This is accomplished, similarly to Theorem 11, via
local statistical equivalence property [13], [15], [36] in the following manner. We create a
channel G̃, and noise Z̃ with the same distribution but independently of the true channel
and noise, and a signal Ỹ = G̃X + Z̃. A genie will give Ỹ to the static receiver and G̃
to both receivers. It has been shown [36] that h(Ỹ, Y ′ |H) = h(Y ′ |H) + o(log ρ), where
H = (G, G̃), therefore we can remove Y ′ from the enhanced channel without reducing
its degrees of freedom.
3) The enhanced channel is still physically degraded, therefore [31], [32]

$$R' \le I(x; \tilde{Y}|U, H) = h(\tilde{Y}|U, H) + o(\log(\rho))$$
$$R \le I(U; y|H) = h(y|H) - h(y|U, H), \qquad (94)$$
where U is an auxiliary random variable, and U → x → (y ′ , y) forms a Markov chain.
Therefore,

$$\frac{1}{2}R' + R \le h(y|H) + \frac{1}{2}h(\tilde{Y}|U, H) - h(y|U, H) + o(\log(\rho)) \le \log(\rho) + o(\log(\rho)), \qquad (95)$$

where the last inequality follows from the extremal entropy inequality [23], [29], [35] and Lemma 1. This concludes the proof of the bound (91).

Fig. 7. One static and one dynamic user with hybrid CSIT and T = 15.
Remark 5: For the above broadcast channel with hybrid CSIT, the achievable sum degrees of freedom is $d_{\text{sum}} = \frac{3}{2} - \frac{1}{T}$, and the outer bound on the sum degrees of freedom is $d_{\text{sum}} \le \frac{3}{2}$. The gap decreases with the dynamic user coherence time (see Fig. 7 and 8).
B. Multiple Static and Dynamic Users
Theorem 14: The fading broadcast channel characterized by Eq. (1), with perfect CSIT for the static users and delayed CSIT for the dynamic users, can achieve the following degrees of freedom,

$$D_1: \ \sum_{j=1}^{m'} e_j^{\dagger}, \qquad (96)$$
$$D_2, \dots, D_{mm'+1}: \ \Big(1 - \frac{1}{2T}\Big) e_j^{\dagger} + \Big(\frac{1}{2} - \frac{1}{2T}\Big) e_i^{\dagger}, \qquad j = 1, \dots, m',\ i = 1, \dots, m, \qquad (97)$$
Fig. 8. One static and one dynamic user with hybrid CSIT and T = 30.
$$D_{mm'+2}, \dots, D_{mm'+m+2}: \ \frac{1}{T}\sum_{j=1}^{m'} e_j^{\dagger} + \Big(1 - \frac{1}{T}\Big) e_i^{\dagger}, \qquad i = 1, \dots, m, \qquad (98)$$
$$D_{mm'+m+3}: \ \frac{m}{T}\sum_{j=1}^{m'} e_j^{\dagger} + \frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m}}\Big(1 - \frac{m}{T}\Big)\sum_{i=1}^{m} e_i^{\dagger}. \qquad (99)$$
The achievable region consists of the convex hull of the above vectors.
Proof: D1 is achieved by inverting the channel of the static users at the transmitter, providing
one degree of freedom per static user. The achievability of D2 , . . . , Dmm′ +1 was established in
Section VI-A, and that of Dmm′ +2 , . . . , Dmm′ +m+2 was proved in Section V without CSIT for the
dynamic user, so it remains achievable with delayed CSIT. Dmm′ +m+3 is achieved by retrospective
interference alignment [11] along with product superposition, as follows. The transmitted signal
over T instants is
$$X = [\bar{U},\ \bar{U}V], \qquad (100)$$
where $\bar{U} \in \mathbb{C}^{m\times m}$ contains independent symbols intended for the static users, sent by inverting the channels of the static users. Therefore, during the first m time instants, each static user receives an interference-free signal and achieves m degrees of freedom, and furthermore the dynamic users estimate their equivalent channels. During the remaining time instants, each dynamic receiver
obtains coherent observations of $(T-m)$ transmit symbols, which are pre-processed, combined, and interference-aligned into super-symbols V according to the retrospective interference alignment techniques of [11]. Accordingly, each dynamic receiver achieves $\frac{1}{1 + \frac{1}{2} + \dots + \frac{1}{m}}\big(1 - \frac{m}{T}\big)$ degrees of freedom.
Theorem 15: An outer bound on the degrees of freedom region of the fading broadcast channel characterized by Eq. (1), with perfect CSIT for the static users and delayed CSIT for the dynamic users, is

$$\sum_{j=1}^{m'} \frac{d_j'}{m'+m} + \sum_{i=1}^{m} \frac{d_i}{m} \le 1, \qquad (101)$$
$$\sum_{i=1}^{m} d_i \le \frac{m}{1 + \frac{1}{2} + \dots + \frac{1}{m}}, \qquad (102)$$
$$d_j' \le 1, \qquad j = 1, \dots, m', \qquad (103)$$
$$d_i \le 1 - \frac{1}{T}, \qquad i = 1, \dots, m. \qquad (104)$$

Proof: The inequalities (103) and (104) represent the single-user outer bounds for the static and dynamic users, respectively [26], [34]. According to Theorem 5, (102) represents an outer bound for the dynamic users. It only remains to prove (101) as follows.
1) The original channel is enhanced by giving the users global CSIR. Furthermore, we assume
full cooperation between the static users and between the dynamic users. The resulting
enhanced channel is a broadcast channel with two users: one static user equipped with
m′ antennas, received signal Y′, channel G, and noise Z′, and one dynamic user
equipped with m antennas, received signal Y, channel H, and noise Z.
2) We further enhance the channel by giving Y to the static user, constructing a physically degraded channel. For the enhanced channel, the static receiver is equipped with $m'+m$ antennas and has received signal $\hat{Y} = [Y'^{\dagger}, Y^{\dagger}]^{\dagger}$, channel $\hat{G} = [G^{\dagger}, H^{\dagger}]^{\dagger}$, and noise $\hat{Z} = [Z'^{\dagger}, Z^{\dagger}]^{\dagger}$. Since any causal feedback (including delayed CSIT) does not affect the capacity of a physically degraded channel [33], the delayed CSIT for the dynamic receiver can be removed.
3) We now use another enhancement with the motivation to remove the remaining CSIT
(non-causal, with respect to the static user). We create an artificial channel and noise, G̃,
Z̃, with the same distribution but independent of Ĝ, Ẑ, and a signal Ỹ = G̃X + Z̃. A
genie will give Ỹ to the static receiver and G̃ to both receivers. It has been shown [36]
that h(Ỹ, Ŷ|H) = h(Ŷ|H) + o(log ρ), where H = (Ĝ, G̃), therefore we can remove Ŷ
from the enhanced channel without reducing its degrees of freedom.
4) The enhanced channel is physically degraded without CSIT, therefore [31], [32],
$$\sum_{j=1}^{m'} R_j' \le I(X; \tilde{Y}|U, H), \qquad \sum_{i=1}^{m} R_i \le I(U; Y|H). \qquad (105)$$
Hence,

$$\sum_{j=1}^{m'} \frac{R_j'}{m'+m} + \sum_{i=1}^{m} \frac{R_i}{m} \le \frac{1}{m'+m} h(\tilde{Y}|U, H) + \frac{1}{m} h(Y|H) - \frac{1}{m} h(Y|U, H) + o(\log(\rho))$$
$$\le \log(\rho) + o(\log(\rho)), \qquad (106)$$

where the last inequality follows from the extremal entropy inequality [23], [29], [35] and Lemma 1, and since $h(Y|H) \le m\log(\rho) + o(\log(\rho))$ [34]. This concludes the proof of the bound (101).
VII. CONCLUSION
A multiuser broadcast channel was studied where some receivers experience longer coherence
intervals and have CSIR while other receivers experience a shorter coherence interval and do not
enjoy free CSIR. The degrees of freedom were studied under delayed CSIT, hybrid CSIT, and
no CSIT. Among the techniques employed were interference alignment and beamforming along
with product superposition for the inner bounds. The outer bounds involved bounding the
rate region of the multiuser (discrete memoryless) multilevel broadcast channel. Some highlights
of the results are: for one static and one dynamic user with delayed CSIT, the achievable degrees
of freedom region partially meets the outer bound. For one static user with perfect CSIT and one
dynamic user with delayed CSIT, the gap between the achievable and the outer sum degrees of
freedom is inversely proportional to the dynamic user coherence time. For each of the considered
CSI conditions, inner and outer bounds were also found for an arbitrary number of users.
From these results we conclude that in the broadcast channel, coherence diversity delivers gains
that are distinct from, and augment, the gains from beamforming and interference alignment.
The authors anticipate that the tools and results of this paper can be helpful for future studies
of hybrid CSIT/CSIR in other multi-terminal networks.
APPENDIX I
PROOF OF THEOREM 1
Recall that $M_j'$, $M_i$ are the messages of users $j = 1,\dots,m'$ and $i = 1,\dots,m$, respectively. We enhance the channel by assuming that user $j = 1,\dots,m'$ knows the messages $M_{j+1}',\dots,M_{m'}'$ and $M_1,\dots,M_m$, and user $i = 1,\dots,m$ knows the messages $M_{i+1},\dots,M_m$. Using Fano's inequality, the chain rule, and the data processing inequality we can bound the rates of the static user $j = 1,\dots,m'$:
$$nR_j' \le I(M_j'; Y_{j,1}',\dots,Y_{j,n}'\,|\,M_{j+1}',\dots,M_{m'}', M_1,\dots,M_m) \qquad (107)$$
$$= \sum_{k=1}^{n} I(M_j'; Y_{j,k}'|U_{j,k}) \qquad (108)$$
$$\le \sum_{k=1}^{n} I(M_j', U_{j,k}, Y_{j-1,1}',\dots,Y_{j-1,k-1}'; Y_{j,k}'|U_{j,k}) \qquad (109)$$
$$= \sum_{k=1}^{n} I(U_{j-1,k}; Y_{j,k}'|U_{j,k}), \qquad (110)$$
where

$$U_{j,k} = \big( M_{j+1}',\dots,M_{m'}', M_1,\dots,M_m, Y_{j,1}',\dots,Y_{j,k-1}' \big),$$

$Y_{j,k}'$ denotes the received signal of user j at time instant k,

$$U_{m'} \to \dots \to U_1 \to X \to (Y_1',\dots,Y_{m'}', Y_1,\dots,Y_m)$$

forms a Markov chain, and $U_0 = X$. The rate of static user m′ can be bounded as
$$nR_{m'}' \le \sum_{k=1}^{n} I(U_{m'-1,k}; Y_{m',k}'|U_{m',k}) \qquad (111)$$
$$= \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m',k}) - \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m'-1,k}) \qquad (112)$$
$$\le \sum_{k=1}^{n} I(X_k, Y_{1,k+1},\dots,Y_{1,n}; Y_{m',k}'|U_{m',k}) - \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m'-1,k}) \qquad (113)$$
$$= \sum_{k=1}^{n} I(Y_{1,k+1},\dots,Y_{1,n}; Y_{m',k}'|U_{m',k}) + \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m',k}, Y_{1,k+1},\dots,Y_{1,n}) - \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m'-1,k}) \qquad (114)$$
$$= \sum_{k=1}^{n} I(W_k; Y_{m',k}'|U_{m',k}) + \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m',k}, W_k) - \sum_{k=1}^{n} I(X_k; Y_{m',k}'|U_{m'-1,k}), \qquad (115)$$

where $W_k = Y_{1,k+1}^{n}$. Similarly,
$$nR_i \le I(M_i; Y_{i,1},\dots,Y_{i,n}|M_{i+1},\dots,M_m) \qquad (116)$$
$$= \sum_{k=1}^{n} I(M_i; Y_{i,k}|V_{i,k}) \qquad (117)$$
$$= \sum_{k=1}^{n} I(M_i, V_{i,k}, Y_{i-1,k+1},\dots,Y_{i-1,n}; Y_{i,k}|V_{i,k}) \qquad (118)$$
$$= \sum_{k=1}^{n} I(V_{i-1,k}; Y_{i,k}|V_{i,k}), \qquad (119)$$
where we define $V_{i,k} \triangleq (M_{i+1},\dots,M_m, Y_{i,k+1},\dots,Y_{i,n})$, which leads to the Markov chain $V_m \to \dots \to V_1 \to (U_{m'}, W) \to X \to (Y_1',\dots,Y_{m'}', Y_1,\dots,Y_m)$. Using the chain rule and the Csiszár sum identity [37], we obtain the bound (125).
$$nR_1 \le \sum_{k=1}^{n} I(M_1,\dots,M_m; Y_{1,k}|V_{1,k}) \qquad (120)$$
$$\le \sum_{k=1}^{n} I(M_1,\dots,M_m, Y_{1,k+1},\dots,Y_{1,n}; Y_{1,k}|V_{1,k}) \qquad (121)$$
$$= \sum_{k=1}^{n} I(M_1,\dots,M_m, Y_{1,k+1},\dots,Y_{1,n}, Y_{m',1}',\dots,Y_{m',k-1}'; Y_{1,k}|V_{1,k}) \qquad (122)$$
$$\quad - \sum_{k=1}^{n} I(Y_{m',1}',\dots,Y_{m',k-1}'; Y_{1,k}|M_1,\dots,M_m, Y_{1,k+1},\dots,Y_{1,n}) \qquad (123)$$
$$= \sum_{k=1}^{n} I(U_{m',k}, W_k; Y_{1,k}|V_{1,k}) - \sum_{k=1}^{n} I(Y_{1,k+1},\dots,Y_{1,n}; Y_{m',k}'|U_{m',k}) \qquad (124)$$
$$= \sum_{k=1}^{n} I(U_{m',k}, W_k; Y_{1,k}|V_{1,k}) - \sum_{k=1}^{n} I(W_k; Y_{m',k}'|U_{m',k}). \qquad (125)$$
k=1
By introducing a time-sharing auxiliary random variable, Q, [38] and defining
X ,(XQ , Q),
March 26, 2018
′
Yj′ , (Yj,Q
, Q)
DRAFT
32
Yi ,(Yi,Q , Q),
Ui , (Ui,Q , Q)
Vj ,(Vj,Q , Q),
W , (WQ , Q),
(126)
we establish (7)-(10). Similarly, we can follow the same steps to prove (11)-(14) after switching
the role of the two sets of variables Y1′ , . . . , Ym′ ′ and Y1 , . . . , Ym . This completes the proof of
Theorem 1.
APPENDIX II
MULTILEVEL BROADCAST CHANNEL WITH DEGRADED MESSAGE SETS
Here, we study the capacity of the multiuser multilevel broadcast channel characterized by (6) with degraded message sets. In particular, $M_0 \in [1:2^{nR_0}]$ is to be communicated to all receivers, and furthermore $M_1 \in [1:2^{nR_1}]$ is to be communicated to receiver $Y_1$.⁵ A three-receiver special case was studied by Nair and El Gamal [22], where the idea of indirect decoding was introduced, and the capacity is the set of rate pairs $(R_1, R_0)$ such that

$$R_0 \le \min\big( I(U; Y_2),\ I(V; Y_1') \big),$$
$$R_1 \le I(X; Y_1|U),$$
$$R_0 + R_1 \le I(V; Y_1') + I(X; Y_1|V), \qquad (127)$$

⁵ For compactness of expression, here we refer to each receiver by the variable denoting its received signal.
for some pmf $p(u,v)p(x|v)$. In the sequel, we give a generalization of the Nair-El Gamal result to the multiuser multilevel broadcast channel.
Theorem 16: The capacity of the multiuser multilevel broadcast channel characterized by (6), with degraded message sets, is the set of rate pairs $(R_1, R_0)$ such that

$$R_0 \le \min\big( I(U; Y_m),\ I(V; Y_{m'}') \big),$$
$$R_1 \le I(X; Y_1|U),$$
$$R_0 + R_1 \le I(V; Y_{m'}') + I(X; Y_1|V), \qquad (128)$$

for some pmf $p(u,v)p(x|v)$.
Proof: The converse parallels the proof of the converse of the three-receiver case studied by Nair and El Gamal in [22] after replacing $Y_2, Y_1'$ with $Y_m, Y_{m'}'$, respectively. In particular, U and V are defined as follows:

$$U_k \triangleq (M_0, Y_{1,1},\dots,Y_{1,k-1}, Y_{m,k+1},\dots,Y_{m,n}),$$
$$V_k \triangleq (M_0, Y_{1,1},\dots,Y_{1,k-1}, Y_{m',k+1}',\dots,Y_{m',n}'),$$

$k = 1,\dots,n$, and let Q be a time-sharing random variable uniformly distributed over the set $\{1,\dots,n\}$ and independent of $X^n, Y_1^n, Y_{m,1},\dots,Y_{m,n}, Y_{m',1}',\dots,Y_{m',n}'$. We then set $U = (U_Q, Q)$, $V = (V_Q, Q)$, $X = X_Q$, $Y_1 = Y_{1,Q}$, $Y_m = Y_{m,Q}$, and $Y_{m'}' = Y_{m',Q}'$. This completes the converse part of the proof.
The achievability part uses superposition coding and indirect decoding as follows.
• Rate splitting: divide the private message $M_1$ into two independent messages, $M_{10}$ at rate $R_{10}$ and $M_{11}$ at rate $R_{11}$, where $R_1 = R_{10} + R_{11}$.
• Codebook generation: fix a pmf $p(u,v)p(x|v)$ and randomly and independently generate $2^{nR_0}$ sequences $u^n(m_0)$, $m_0 \in [1:2^{nR_0}]$, each according to $\prod_{k=1}^{n} p_U(u_k)$. For each $m_0$, randomly and conditionally independently generate $2^{nR_{10}}$ sequences $v^n(m_0, m_{10})$, $m_{10} \in [1:2^{nR_{10}}]$, each according to $\prod_{k=1}^{n} p_{V|U}(v_k|u_k(m_0))$. For each pair $(m_0, m_{10})$, randomly and conditionally independently generate $2^{nR_{11}}$ sequences $x^n(m_0, m_{10}, m_{11})$, $m_{11} \in [1:2^{nR_{11}}]$, each according to $\prod_{k=1}^{n} p_{X|V}(x_k|v_k(m_0, m_{10}))$.
• Encoding: to send the message pair $(m_0, m_1) = (m_0, m_{10}, m_{11})$, the encoder transmits $x^n(m_0, m_{10}, m_{11})$.
• Decoding at the users $Y_2, \dots, Y_m$: decoder i declares that $\hat{m}_{0i} \in [1:2^{nR_0}]$ is sent if it is the unique message such that $(u^n(\hat{m}_{0i}), y_i^n) \in \mathcal{T}_{\epsilon}^{(n)}$. Hence, by the law of large numbers and the packing lemma [38], the probability of error tends to zero as $n \to \infty$ if

$$R_0 < \min_{2\le i\le m} \{ I(U; Y_i) - \delta(\epsilon) \} = I(U; Y_m) - \delta(\epsilon), \qquad (129)$$

where the last equality follows from applying the data processing inequality on the Markov chain $U \to X \to Y_1 \to Y_2 \to \dots \to Y_m$.
• Decoding at $Y_1$: decoder 1 declares that $(\hat{m}_{01}, \hat{m}_{10}, \hat{m}_{11})$ is sent if it is the unique message triple such that $\big(u^n(\hat{m}_{01}), v^n(\hat{m}_{01}, \hat{m}_{10}), x^n(\hat{m}_{01}, \hat{m}_{10}, \hat{m}_{11}), y_1^n\big) \in \mathcal{T}_{\epsilon}^{(n)}$. Hence, by the law of large numbers and the packing lemma [38], the probability of error tends to zero as $n \to \infty$ if

$$R_{11} < I(X; Y_1|V) - \delta(\epsilon),$$
$$R_{10} + R_{11} < I(X; Y_1|U) - \delta(\epsilon),$$
$$R_0 + R_{10} + R_{11} < I(X; Y_1) - \delta(\epsilon). \qquad (130)$$
• Decoding at the users $Y_1', \dots, Y_{m'}'$: decoder j decodes $m_0$ indirectly by declaring that $\tilde{m}_{0j}$ is sent if it is the unique message such that $(u^n(\tilde{m}_{0j}), v^n(\tilde{m}_{0j}, m_{10}), y_j'^n) \in \mathcal{T}_{\epsilon}^{(n)}$ for some $m_{10} \in [1:2^{nR_{10}}]$. Hence, by the law of large numbers and the packing lemma, the probability of error tends to zero as $n \to \infty$ if

$$R_0 + R_{10} < \min_{1\le j\le m'} \{ I(U, V; Y_j') - \delta(\epsilon) \} = \min_{1\le j\le m'} \{ I(V; Y_j') - \delta(\epsilon) \} = I(V; Y_{m'}') - \delta(\epsilon), \qquad (131)$$

where the last two equalities follow from applying the chain rule and the data processing inequality on the Markov chain $U \to V \to X \to Y_1' \to Y_2' \to \dots \to Y_{m'}'$.
By combining the bounds in (129), (130), (131), substituting R10 + R11 = R1 , and eliminating
R10 and R11 by the Fourier-Motzkin procedure [22], the proof of the achievability is completed.
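The final Fourier-Motzkin step can be made concrete. The sketch below is ours: it implements a generic elimination routine and applies it to the constraints (129)-(131), with $R_1 = R_{10} + R_{11}$ encoded as two inequalities and placeholder numbers (our own) for the mutual-information terms. After discarding redundant rows, the printed inequalities on $(R_0, R_1)$ should match the region in (128).

```python
import itertools
import numpy as np

# Generic Fourier-Motzkin elimination on a system A x <= b.

def fm_eliminate(A, b, k):
    """One Fourier-Motzkin step: eliminate variable k from A x <= b."""
    pos = [(a, c) for a, c in zip(A, b) if a[k] > 1e-12]
    neg = [(a, c) for a, c in zip(A, b) if a[k] < -1e-12]
    zero = [(a, c) for a, c in zip(A, b) if abs(a[k]) <= 1e-12]
    comb = [(ap / ap[k] - an / an[k], cp / ap[k] - cn / an[k])
            for (ap, cp), (an, cn) in itertools.product(pos, neg)]
    rows = zero + comb
    return (np.array([np.delete(a, k) for a, _ in rows]),
            np.array([c for _, c in rows]))

# Placeholder values for I(U;Ym), I(X;Y1|V), I(X;Y1|U), I(X;Y1), I(V;Ym').
iu, ixv, ixu, ix, iv = 0.4, 0.9, 1.1, 1.6, 0.7

# Variables ordered (R0, R10, R11, R1); rows encode (129)-(131), the
# definition R1 = R10 + R11 (two inequalities), and non-negativity.
A = np.array([
    [1, 0, 0, 0],                    # (129): R0 <= I(U;Ym)
    [0, 0, 1, 0],                    # (130): R11 <= I(X;Y1|V)
    [0, 1, 1, 0],                    # (130): R10 + R11 <= I(X;Y1|U)
    [1, 1, 1, 0],                    # (130): R0 + R10 + R11 <= I(X;Y1)
    [1, 1, 0, 0],                    # (131): R0 + R10 <= I(V;Ym')
    [0, 1, 1, -1], [0, -1, -1, 1],   # R1 = R10 + R11
    [-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0],   # non-negativity
], dtype=float)
b = np.array([iu, ixv, ixu, ix, iv, 0, 0, 0, 0, 0])

A, b = fm_eliminate(A, b, 1)   # eliminate R10
A, b = fm_eliminate(A, b, 1)   # eliminate R11 (now at index 1)
for row, c in zip(A, b):       # remaining constraints on (R0, R1)
    print(np.round(row, 3), "<=", round(c, 3))
```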
REFERENCES
[1] M. Fadel and A. Nosratinia, “Block-fading broadcast channel with hybrid CSIT and CSIR,” in IEEE International
Symposium on Information Theory (ISIT), June 2017, pp. 1873–1877.
[2] C. Huang, S. Jafar, S. Shamai, and S. Vishwanath, “On degrees of freedom region of MIMO networks without channel
state information at transmitters,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 849–857, Feb. 2012.
[3] C. Vaze and M. Varanasi, “The degree-of-freedom regions of MIMO broadcast, interference, and cognitive radio channels
with no CSIT,” IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5354–5374, Aug. 2012.
[4] A. Lapidoth, S. Shamai, and M. Wigger, “On the capacity of fading MIMO broadcast channels with imperfect transmitter
side-information,” arXiv preprint cs/0605079, 2006.
[5] S. Jafar, “Blind interference alignment,” IEEE J. Sel. Topics Signal Process., vol. 6, no. 3, pp. 216–227, June 2012.
[6] M. Fadel and A. Nosratinia, “Broadcast channel under unequal coherence intervals,” in IEEE International Symposium on
Information Theory (ISIT), July 2016, pp. 275–279.
[7] ——, “Coherence disparity in broadcast and multiple access channels,” IEEE Trans. Inf. Theory, vol. 62, no. 12, pp.
7383–7401, Dec. 2016.
March 26, 2018
DRAFT
35
[8] G. Caire and S. Shamai, “On the achievable throughput of a multiantenna Gaussian broadcast channel,” IEEE Trans. Inf.
Theory, vol. 49, no. 7, pp. 1691–1706, July 2003.
[9] H. Weingarten, Y. Steinberg, and S. Shamai, “The capacity region of the Gaussian multiple-input multiple-output broadcast
channel,” IEEE Trans. Inf. Theory, vol. 52, no. 9, pp. 3936–3964, Sept. 2006.
[10] A. Davoodi and S. Jafar, “Aligned image sets under channel uncertainty: Settling a conjecture by Lapidoth, Shamai and
Wigger on the collapse of degrees of freedom under finite precision CSIT,” arXiv preprint arXiv:1403.1541, 2014.
[11] M. Maddah-Ali and D. Tse, “Completely stale transmitter channel state information is still very useful,” IEEE Trans. Inf.
Theory, vol. 58, no. 7, pp. 4418–4431, July 2012.
[12] T. Gou and S. Jafar, “Optimal use of current and outdated channel state information: Degrees of freedom of the MISO
BC with mixed CSIT,” IEEE Commun. Lett., vol. 16, no. 7, pp. 1084–1087, July 2012.
[13] R. Tandon, M. A. Maddah-Ali, A. Tulino, H. V. Poor, and S. Shamai, “On fading broadcast channels with partial channel
state information at the transmitter,” in International Symposium on Wireless Communication Systems (ISWCS), Aug. 2012,
pp. 1004–1008.
[14] S. Amuru, R. Tandon, and S. Shamai, “On the degrees-of-freedom of the 3-user MISO broadcast channel with hybrid
CSIT,” in IEEE International Symposium on Information Theory (ISIT), 2014, pp. 2137–2141.
[15] R. Tandon, S. Jafar, S. Shamai, and V. Poor, “On the synergistic benefits of alternating CSIT for the MISO broadcast
channel,” IEEE Trans. Inf. Theory, vol. 59, no. 7, pp. 4106–4128, July 2013.
[16] Y. Li and A. Nosratinia, “Product superposition for MIMO broadcast channels,” IEEE Trans. Inf. Theory, vol. 58, no. 11,
pp. 6839–6852, Nov. 2012.
[17] ——, “Coherent product superposition for downlink multiuser MIMO,” IEEE Trans. Wireless Commun., vol. PP, no. 99,
pp. 1–9, 2014.
[18] M. Fadel and A. Nosratinia, “Coherent, non-coherent, and mixed–CSIR broadcast channels: Multiuser degrees of freedom,”
in IEEE International Symposium on Information Theory (ISIT), June 2014, pp. 2574–2578.
[19] ——, “Coherence disparity in time and frequency,” in Proc. IEEE Global Telecommunication Conference (GLOBECOM’16), Dec. 2016, pp. 1–6.
[20] F. Zhang, M. Fadel, and A. Nosratinia, “Spatially correlated MIMO broadcast channel: Analysis of overlapping correlation
eigenspaces,” in IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 1097–1101.
[21] S. Borade, L. Zheng, and M. Trott, “Multilevel broadcast networks,” in IEEE International Symposium on Information
Theory (ISIT), June 2007, pp. 1151–1155.
[22] C. Nair and A. El Gamal, “The capacity region of a class of three-receiver broadcast channels with degraded message sets,”
IEEE Trans. Inf. Theory, vol. 55, no. 10, pp. 4479–4493, Oct. 2009.
[23] T. Liu and P. Viswanath, “An extremal inequality motivated by multiterminal information-theoretic problems,” IEEE Trans.
Inf. Theory, vol. 53, no. 5, pp. 1839–1851, May 2007.
[24] R. Liu, T. Liu, V. Poor, and S. Shamai, “A vector generalization of Costa’s entropy-power inequality with applications,”
IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1865–1879, Apr. 2010.
[25] T. Marzetta and B. Hochwald, “Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading,” IEEE
Trans. Inf. Theory, vol. 45, no. 1, pp. 139–157, Jan. 1999.
[26] L. Zheng and D. Tse, “Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple-antenna channel,” IEEE Trans. Inf. Theory, vol. 48, no. 2, pp. 359–383, Feb. 2002.
March 26, 2018
DRAFT
36
[27] K. Marton, “A coding theorem for the discrete memoryless broadcast channel,” IEEE Trans. Inf. Theory, vol. 25, no. 3,
pp. 306–311, May 1979.
[28] S. Yang, M. Kobayashi, D. Gesbert, and X. Yi, “Degrees of freedom of time correlated MISO broadcast channel with
delayed CSIT,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 315–328, Jan. 2013.
[29] X. Yi, S. Yang, D. Gesbert, and M. Kobayashi, “The degrees of freedom region of temporally correlated MIMO networks
with delayed CSIT,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 494–514, Jan. 2014.
[30] C. Shannon, “The zero error capacity of a noisy channel,” IEEE Trans. Inf. Theory, vol. 2, no. 3, pp. 8–19, Sept. 1956.
[31] P. Bergmans, “Random coding theorem for broadcast channels with degraded components,” IEEE Trans. Inf. Theory,
vol. 19, no. 2, pp. 197–207, Mar. 1973.
[32] ——, “A simple converse for broadcast channels with additive white Gaussian noise,” IEEE Trans. Inf. Theory, vol. 20,
no. 2, pp. 279–280, Mar. 1974.
[33] A. El Gamal, “The feedback capacity of degraded broadcast channels (corresp.),” IEEE Trans. Inf. Theory, vol. 24, no. 3,
pp. 379–381, May 1978.
[34] E. Telatar, “Capacity of multi-antenna Gaussian channels,” European transactions on telecommunications, vol. 10, no. 6,
pp. 585–595, 1999.
[35] H. Weingarten, T. Liu, S. Shamai, Y. Steinberg, and P. Viswanath, “The capacity region of the degraded multiple-input
multiple-output compound broadcast channel,” IEEE Trans. Inf. Theory, vol. 55, no. 11, pp. 5011–5023, Oct. 2009.
[36] P. Mukherjee, R. Tandon, and S. Ulukus, “Secure degrees of freedom region of the two-user MISO broadcast channel with
alternating CSIT,” IEEE Trans. Inf. Theory, vol. PP, no. 99, pp. 1–1, Apr. 2017.
[37] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Channels. Budapest: Akadémiai Kiadó, 1981.
[38] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2011.
Synchronization Patterns in Networks of Kuramoto Oscillators: A Geometric Approach for Analysis and Control

arXiv:1709.06193v1 [math.OC] 18 Sep 2017

Lorenzo Tiberi, Chiara Favaretto, Mario Innocenti, Danielle S. Bassett, and Fabio Pasqualetti
Abstract— Synchronization is crucial for the correct functionality of many natural and man-made complex systems. In
this work we characterize the formation of synchronization
patterns in networks of Kuramoto oscillators. Specifically, we
reveal conditions on the network weights and structure and on
the oscillators’ natural frequencies that allow the phases of a
group of oscillators to evolve cohesively, yet independently from
the phases of oscillators in different clusters. Our conditions
are applicable to general directed and weighted networks of
heterogeneous oscillators. Surprisingly, although the oscillators
exhibit nonlinear dynamics, our approach relies entirely on
tools from linear algebra and graph theory. Further, we develop
a control mechanism to determine the smallest (as measured
by the Frobenius norm) network perturbation to ensure the
formation of a desired synchronization pattern. Our procedure
allows us to constrain the set of edges that can be modified, thus
enforcing the sparsity structure of the network perturbation.
The results are validated through a set of numerical examples.
I. INTRODUCTION
Synchronization of coupled oscillators is everywhere in
nature [1], [2], [3] and in several man-made systems, including power grids [4] and computer networks [5]. While some
systems require complete synchronization among all the parts
to function properly [6], [7], others rely on cluster or partial
synchronization [8], where subsets of nodes exhibit coherent
behaviors that remain independent from the evolution of
other oscillators in the network. For example, while partial
synchronization patterns have been observed in healthy individuals [9], complete synchronization in neural systems is
associated with degenerative diseases including Parkinson’s
and Huntington’s diseases [10], [11], and epilepsy [12].
Cluster synchronization has received attention only recently,
and several fundamental questions remain unanswered, including the characterization of the network features enabling
the formation of a desired pattern, and the development of
control mechanisms to enforce the emergence of clusters.
In this paper we focus on networks of Kuramoto oscillators
[13], and we characterize intrinsic and topological conditions
that ensure the formation of desired clusters of oscillators.
This material is based upon work supported by NSF award #BCS-1430279 and ARO award 71603NSYIP. Lorenzo Tiberi and Fabio Pasqualetti are with the Mechanical Engineering Department, University of California at Riverside, [email protected], [email protected]. Chiara Favaretto is with the Department of Information Engineering, University of Padova, [email protected]. Danielle S. Bassett is with the Departments of Bioengineering and Electrical and Systems Engineering, University of Pennsylvania, [email protected]. Mario Innocenti is with the Department of Electrical Systems and Automation, University of Pisa, [email protected].
Our network model is motivated by a large body of literature showing broad applicability of the Kuramoto model to
virtually all systems that exhibit synchronization properties.
Although Kuramoto networks exhibit nonlinear dynamics,
we adopt tools from linear algebra and graph theory to
characterize network conditions enabling the formation of
a given synchronization pattern. Further, we design a control
mechanism to perturb (a subset of) the network weights so
as to enforce or prevent desired synchronization patterns.
Related work Complete synchronization in networks of
Kuramoto oscillators has been extensively studied, e.g.,
see [14], [15]. It has been shown that synchronization of
all nodes emerges when the coupling strength among the
agents is sufficiently larger than the heterogeneity of the
oscillators’ natural frequencies. Partial synchronization and
pattern formation have received considerably less attention,
with the literature being composed of only a few recent works.
In [16] it is shown how symmetry of the interconnections
may lead to partial synchronization. Methods based on graph
symmetry have also been used to find all possible clusters
in networks of Laplacian-coupled oscillators [17]. The relationship between clusterization and network topology has
been studied in [18] for unweighted interconnections. In [19],
the emergence and the stability of groups of synchronized
agents within a network has been studied for different
classes of dynamics, like delay-coupled laser models and
neuronal spiking models. Here, the master stability function approach has been used to characterize the results.
In [20], [21] the idea is put forth to study an approximate
notion of cluster synchronization in Kuramoto networks
via tools from linear systems theory. It is quantitatively
shown how cluster synchronization depends on strong intracluster and weak inter-cluster connections, similarity with
respect to the natural frequencies of the oscillators within
each cluster, and heterogeneity of the natural frequencies of
coupled oscillators belonging to different subnetworks. With
respect to this work, we focus on an exact notion of cluster
synchronization, identify necessary and sufficient conditions
on the network weights and oscillators’ natural frequencies
for the emergence of a desired synchronization pattern, and
exploit our analysis to design a structural control algorithm
for the formation of a desired synchronization pattern.
The work that is closer to this paper is [22], where the
authors relate cluster synchronization to the notion of an
external equitable partition in a graph. In fact, the notion
of an external equitable partition can be interpreted in terms
of invariant subspaces of the network adjacency matrix, a
notion that we exploit in our development. However, the
analysis in [22] is carried out with unweighted and undirected
networks and, as we show in this paper, the conditions in
[22] may not be necessary when dealing with directed and
weighted networks. Further, our approach relies on simple
notions from linear algebra, and leads to the development of
our control algorithm for the formation of desired patterns.
Paper contributions The contributions of this paper are
twofold. First, we consider a notion of exact cluster synchronization, where the phases of the oscillators within each
cluster remain equal to each other over time, and different
from the phases of the oscillators in different clusters. We
derive necessary and sufficient conditions for the formation
of a given synchronization pattern in directed and weighted
networks of Kuramoto oscillators. In particular we show
that cluster synchronization is possible if and only if (i)
the natural frequencies are equal within each cluster, and
(ii) for each cluster, the sum of the weights of the edges
from every separate group is the same for all nodes in
the cluster. Second, we leverage our characterization of
cluster synchronization to develop a control mechanism that
modifies the network weights so as to ensure the formation
of a desired synchronization pattern. Our control method
is optimal, in the sense that it determines the smallest
(measured by the Frobenius norm) network perturbation
for a given synchronization pattern, and it guarantees the
modification of only a desired subset of the edge weights.
Paper organization The rest of this paper is organized as
follows. Section II contains the problem setup and some
preliminary definitions. Section III contains our characterization of cluster synchronization, and Section IV contains
our structural control algorithm for the formation of a desired
synchronization pattern. Section V concludes the paper.
II. PROBLEM SETUP AND PRELIMINARY NOTIONS
Consider a network of heterogeneous Kuramoto oscillators described by the digraph G = (V, E), where V = {1, . . . , n} denotes the set of oscillators and E ⊆ V × V their interconnections. Let A = [aij] be the weighted adjacency matrix of G, where aij ∈ R if (i, j) ∈ E and aij = 0 otherwise. We assume that G is strongly connected [23]. Let θi ∈ R denote the phase of the i-th oscillator, whose dynamics reads as

θ̇i = ωi + Σ_{j=1}^{n} aij sin(θj − θi),
where ωi is the natural frequency of the i-th oscillator. The
dynamics is a generalized version of the classic Kuramoto
model [24]. Depending on the interconnection graph G, the
adjacency matrix A, and the oscillators' natural frequencies,
different oscillatory patterns are possible corresponding to
(partially) synchronized or chaotic states [25]. In this work
we are particularly interested in the case where the phases
of groups of oscillators evolve cohesively within each group,
yet independently from the phases of oscillators in different
groups. To formalize this discussion, let P = {P1 , . . . , Pm }
be a partition of V, that is, V = ∪m
i=1 Pi and Pi ∩ Pj = ∅ for
all i, j ∈ {1, . . . , m} with i 6= j. We restrict our attention
to the case m > 1. Throughout the paper we will assume
Fig. 1. A network of oscillators with partitions P1 = {1, 2, 3} and P2 = {4, 5, 6}. The sum of the weights of all edges (i, j) is equal for each node i of P1 (resp. P2), with j ∈ P2 (resp. j ∈ P1). In Section III we show that this is a necessary condition for phase synchronization of the partition P.
without loss of generality that, given P = {P1, . . . , Pm}, the oscillators are labeled so that Pi = {Σ_{j=1}^{i−1}|Pj| + 1, . . . , Σ_{j=1}^{i}|Pj|}, where |Pj| denotes the cardinality of the set Pj. While different notions of synchronization exist, we will use the following definitions.
Definition 1: (Phase synchronization) For the network of
oscillators G = (V, E), the partition P = {P1 , . . . , Pm }
is phase synchronizable if, for some initial phases
θ1 (0), . . . , θn (0), it holds
θi (t) = θj (t),
for all times t ∈ R≥0 and i, j ∈ Pk , with k ∈ {1, . . . , m}.
Definition 2: (Frequency synchronization) For the network of oscillators G = (V, E), the partition P =
{P1 , . . . , Pm } is frequency synchronizable if, for some initial
phases θ1 (0), . . . , θn (0), it holds
θ̇i (t) = θ̇j (t),
for all times t ∈ R≥0 and i, j ∈ Pk , with k ∈ {1, . . . , m}.
Clearly, phase synchronization implies frequency synchronization, while the converse statement typically fails to hold.
Finally, we define the characteristic matrix associated with
a partition P of the network nodes, which will be used to
derive our synchronization conditions in Section III.
Definition 3: (Characteristic matrix) For the network of oscillators G = (V, E) and the partition P = {P1, . . . , Pm}, the characteristic matrix of P is

VP = [v1 v2 · · · vm] ∈ Rⁿˣᵐ,

where each column satisfies viᵀ = [0 · · · 0 1 · · · 1 0 · · · 0], with the block of |Pi| ones preceded by Σ_{j=1}^{i−1}|Pj| zeros and followed by Σ_{j=i+1}^{m}|Pj| zeros.
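For concreteness, the characteristic matrix is immediate to assemble in code; the following is a minimal sketch of ours (not part of the paper), assuming 0-based node indices:

```python
import numpy as np

def characteristic_matrix(partition, n):
    """Assemble V_P: column i has ones exactly on the nodes of cluster P_i."""
    V = np.zeros((n, len(partition)))
    for i, cluster in enumerate(partition):
        V[cluster, i] = 1.0
    return V

# V_P for the partition P1 = {1, 2, 3}, P2 = {4, 5, 6} of Example 1
V_P = characteristic_matrix([[0, 1, 2], [3, 4, 5]], n=6)
```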
We conclude this section with an illustrative example.

Example 1: (Setup and definitions) Consider the network of Kuramoto oscillators in Fig. 1, with graph G = (V, E), V = {1, 2, 3, 4, 5, 6} and partition P = {P1, P2}. The graph G and the partition P are described by A and VP as follows:

A = [ 0 0 0 0 0 10
      0 0 0 5 0 5
      0 0 0 0 10 0
      9 0 0 0 0 0
      0 9 0 0 0 0
      0 7 2 2 0 0 ] ,   VP = [ 1 0
                                1 0
                                1 0
                                0 1
                                0 1
                                0 1 ] .

III. CONDITIONS FOR CLUSTER SYNCHRONIZATION

In this section we derive necessary and sufficient conditions ensuring phase (hence frequency) synchronization of a partition of oscillators. In particular, we show how synchronization of a partition depends both on the interconnection structure and weights, as well as the oscillators' natural frequencies. We make the following technical assumption.

(A1) For the partition P = {P1, . . . , Pm} there exists an ordering of the clusters Pi and an interval of time [t1, t2], with t2 > t1, such that for all times t ∈ [t1, t2]:

max_{i∈P1} θ̇i > max_{i∈P2} θ̇i > · · · > max_{i∈Pm} θ̇i.

Assumption (A1) requires the phases of the oscillators in different clusters to evolve with different frequencies, at least in some interval of time. This assumption is in fact not restrictive, as this is typically the case when the oscillators in different clusters have different natural frequencies. Two cases where this assumption is satisfied are presented in Fig. 2. A special case where Assumption (A1) is not satisfied is discussed at the end of this section.

Fig. 2. For the network in Example 1 with natural frequencies ω = [30, 30, 30, 10, 10, 10]ᵀ, panel (a) shows the frequencies of the oscillators in the clusters P1 = {1, 2, 3} and P2 = {4, 5, 6} as a function of time; Assumption (A1) is satisfied over the entire time interval. In panel (b) we let the frequencies be more homogeneous, ω = [19, 19, 19, 10, 10, 10]ᵀ; Assumption (A1) is satisfied within bounded time intervals, such as [t1, t2].

Theorem 3.1: (Cluster synchronization) For the network of oscillators G = (V, E), the partition P = {P1, . . . , Pm} is phase synchronizable if and only if the following conditions are simultaneously satisfied:
(i) the network weights satisfy Σ_{k∈Pℓ} (aik − ajk) = 0 for every i, j ∈ Pz and z, ℓ ∈ {1, . . . , m}, with z ≠ ℓ;
(ii) the natural frequencies satisfy ωi = ωj for every k ∈ {1, . . . , m} and i, j ∈ Pk.

Proof: (If) Let θi = θj for all i, j ∈ Pk, k = 1, . . . , m. Let i, j ∈ Pℓ, and notice that

θ̇i − θ̇j = Σ_{z≠ℓ} Σ_{k∈Pz} ( aik sin(θk − θi) − ajk sin(θk − θj) ) = Σ_{z≠ℓ} szℓ Σ_{k∈Pz} (aik − ajk) = 0,

where we have used conditions (i) and (ii), and where szℓ = sin(θz − θℓ) depends on the clusters z and ℓ but not on i, j, k. Thus, when conditions (i) and (ii) are satisfied, θ ∈ Im(VP) implies θ̇ ∈ Im(VP), the image of VP. Hence Im(VP) is invariant and the network is phase synchronizable (θ(0) ∈ Im(VP)).
(Only if) We first show that condition (i) is necessary for phase synchronization. Assume that the network is phase synchronized. Let i, j ∈ Pℓ. At all times it must hold that

0 = θ̈i − θ̈j = Σ_{z≠ℓ} Σ_{k∈Pz} aik cos(θk − θi)(θ̇k − θ̇i) − Σ_{z≠ℓ} Σ_{k∈Pz} ajk cos(θk − θj)(θ̇k − θ̇j)  (1)
  = Σ_{z≠ℓ} czℓ vzℓ dz,  with dz = Σ_{k∈Pz} (aik − ajk),

where czℓ = cos(θz − θℓ) and vzℓ = θ̇z − θ̇ℓ depend on the clusters z and ℓ, but not on i, j, k. From (A1), possibly after reordering the clusters, in some nontrivial interval we have

max_{i∈P1} θ̇i > max_{i∈P2} θ̇i > · · · > max_{i∈Pm} θ̇i.

Thus, (1) implies that either dz = 0 for all z (thus implying condition (i)), or the functions czℓ vzℓ must be linearly dependent at all times in the interval. Assume by contradiction that the functions czℓ vzℓ are linearly dependent at all times in the above interval. Then it must hold that

Σ_{z≠ℓ} dz (dⁿ/dtⁿ) czℓ vzℓ = 0,

for every nonnegative integer n, where dⁿ/dtⁿ denotes n-times differentiation. In other words, not only must the functions czℓ vzℓ be linearly dependent, but also all their derivatives, at some times in the above interval. Let d1 ≠ 0 (if d1 = 0, simply select the first nonzero coefficient), and i, j ∉ P1. Because of Assumption (A1), there exists an integer n such that |d1 (dⁿ/dtⁿ) c1ℓ v1ℓ| > |dz (dⁿ/dtⁿ) czℓ vzℓ| for all z ≠ 1. Thus, the functions czℓ vzℓ cannot be linearly dependent. We conclude that statement (i) is necessary for phase synchronization.
We now prove that, when the network is phase synchronized, statement (i) implies statement (ii). This shows that statement (ii) is necessary for phase synchronization. Let the network be phase synchronized, and let i, j ∈ Pℓ. We have

0 = θ̇i − θ̇j = ωi − ωj + Σ_{z≠ℓ} szℓ Σ_{k∈Pz} (aik − ajk) = ωi − ωj,

where szℓ = sin(θz − θℓ) does not depend on the indices i, j, k (see above), and where we have used that statement (i) is necessary for phase synchronization. To conclude, ωi = ωj, and statement (ii) is also necessary for phase synchronization.
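As a numerical sanity check of Theorem 3.1, one can integrate the Kuramoto dynamics for the network of Example 1 and verify that phases initialized in Im(VP) stay synchronized within each cluster. The sketch below is our own illustration (plain Euler integration; step size and horizon are arbitrary choices):

```python
import numpy as np

# Adjacency matrix of Example 1 (as reconstructed above); condition (i) holds,
# and the natural frequencies satisfy condition (ii): equal within clusters.
A = np.array([[0, 0, 0, 0, 0, 10],
              [0, 0, 0, 5, 0, 5],
              [0, 0, 0, 0, 10, 0],
              [9, 0, 0, 0, 0, 0],
              [0, 9, 0, 0, 0, 0],
              [0, 7, 2, 2, 0, 0]], dtype=float)
omega = np.array([30.0, 30.0, 30.0, 10.0, 10.0, 10.0])

theta = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # theta(0) in Im(V_P)
dt = 1e-3
for _ in range(20000):  # Euler integration of the Kuramoto dynamics
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + coupling)

# Phase spread within each cluster stays (numerically) zero.
print(np.ptp(theta[:3]), np.ptp(theta[3:]))
```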
Remark 1: (Necessity of Assumption (A1)) Consider a network of oscillators with adjacency matrix

A = [ 0   a12 0   0
      a21 0   a23 0
      0   a32 0   a34
      0   0   a43 0 ] ,

and natural frequencies ωi = ω̄ for all i ∈ {1, . . . , 4}. Notice that condition (i) in Theorem 3.1 is not satisfied. Let θ1(0) = θ2(0) and θ3(0) = θ4(0) = θ1(0) + π, and notice that θ̇i = ω̄ at all times and for all i ∈ {1, . . . , 4} (Assumption (A1) is not satisfied). In other words, the partition P = {P1, P2}, with P1 = {1, 2} and P2 = {3, 4}, is phase synchronized, independently of the interconnection weights among the oscillators. Thus, condition (i) in Theorem 3.1 may not be necessary when Assumption (A1) is not satisfied.
Let A ∘ B denote the Hadamard product between A and B [26], and Im(VP)⊥ the orthogonal subspace to Im(VP).

Corollary 3.2: (Matrix condition for synchronization) Condition (i) in Theorem 3.1 is equivalent to V̄PᵀĀVP = 0, where V̄P ∈ Rⁿˣ⁽ⁿ⁻ᵐ⁾ satisfies Im(V̄P) = Im(VP)⊥, and

Ā = A − A ∘ (VP VPᵀ).  (2)

Proof: Let Ā = [āij] and A = [aij]. Notice that āij = aij when i and j belong to different clusters, and āij = 0 when i and j belong to the same cluster. Thus,

[ĀVP]ij = Σ_{k∈Pj} aik if i ∉ Pj, and [ĀVP]ij = 0 if i ∈ Pj.

Select V̄P so that V̄P = [v̄1 · · · v̄n−m] and v̄iᵀx = xr − xs, with r, s ∈ Pℓ, for a vector x of compatible dimension. Then,

[V̄PᵀĀVP]ij = Σ_{k∈Pj} (ark − ask) if r, s ∉ Pj, and [V̄PᵀĀVP]ij = 0 if r, s ∈ Pj,

where r, s are the nonzero indices of v̄i.

IV. CONTROL OF CLUSTER SYNCHRONIZATION

In the previous section we derived conditions on the network of oscillators to guarantee phase and frequency synchronization. These conditions are rather stringent, and are typically not satisfied for arbitrary partitions and interconnection weights. In this section we develop a control mechanism to modify the oscillators' interconnection weights so as to guarantee synchronization of a given partition. Specifically, we study the following minimization problem:

min_∆  ‖∆‖²F  (3a)
s.t.  V̄Pᵀ(Ā + ∆)VP = 0,  (3b)
      ∆ ∈ H,  (3c)

where ‖∆‖F denotes the Frobenius norm of the matrix ∆, Ā is as in (2), and H encodes a desired sparsity pattern of the perturbation matrix ∆. For example, H may represent the set of matrices compatible with the graph G = (V, E), that is, H = {M : M ∈ R^{|V|×|V|} and mij = 0 if (i, j) ∉ E}. The constraint (3b) reflects the invariance condition in Corollary 3.2 and, together with condition (ii) in Theorem 3.1, ensures synchronization of the partition P. Thus, the minimization problem (3) determines the smallest perturbation of the interconnection weights that guarantees synchronization of a partition P and satisfies desired sparsity constraints. It should be observed that, given the solution ∆∗ to (3), the modified adjacency matrix is A + ∆∗ even if the constraint (3b) is expressed in terms of Ā. This follows from the fact that connections among nodes of the same cluster do not affect the synchronization properties of the partition P = {P1, . . . , Pm} (see Corollary 3.2).
To solve the minimization problem (3), we define the following minimization problem by including the sparsity constraints (3c) into the cost function:

min_∆  ‖∆ ⊘ H‖²F  (4a)
s.t.  V̄Pᵀ(Ā + ∆)VP = 0,  (4b)

where ⊘ denotes elementwise division, and H satisfies hij = 1 if there exists a matrix M ∈ H such that mij ≠ 0, and hij = 0 otherwise. Clearly, the minimization problems (3) and (4) are equivalent, in the sense that ∆∗ is a (feasible) solution to (3) if and only if it has finite cost in (4).
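The invariance condition of Corollary 3.2, which constraint (3b) enforces, is easy to test numerically. A sketch of ours (assuming SciPy is available for the orthogonal complement):

```python
import numpy as np
from scipy.linalg import null_space

def sync_condition_holds(A, V_P, tol=1e-9):
    """Test Corollary 3.2: V̄_P^T Ā V_P = 0, with Ā = A - A ∘ (V_P V_P^T)."""
    A_bar = A - A * (V_P @ V_P.T)   # zero out intra-cluster weights
    V_bar = null_space(V_P.T)       # columns span Im(V_P)^⊥
    return np.abs(V_bar.T @ A_bar @ V_P).max() < tol
```

For the matrices of Example 1 this returns True; perturbing a single inter-cluster weight breaks the equality of the row sums and the test fails.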
Theorem 4.1: (Synchronization via structured perturbation) Let T = [VP V̄P], and let

[ Ã11 Ã12
  Ã21 Ã22 ] = T⁻¹ĀT.

The minimization problem (3) has a solution if and only if there exists a matrix Λ satisfying

X = (V̄P ΛVPᵀ) ∘ H, and Ã21 = V̄PᵀXVP.

Moreover, if it exists, a solution ∆∗ to (3) is

∆∗ = T [ ∆̃∗11 ∆̃∗12
         ∆̃∗21 ∆̃∗22 ] T⁻¹,

where ∆̃∗11 = −VPᵀXVP, ∆̃∗12 = −VPᵀXV̄P, ∆̃∗21 = −Ã21, and ∆̃∗22 = −V̄PᵀXV̄P.

Proof: We adopt the method of Lagrange multipliers to derive the optimality conditions for the problem (4). The Lagrangian is

L(∆, Λ) = Σ_{i=1}^{n} Σ_{j=1}^{n} δ²ij h⁻¹ij + Σ_{i=1}^{m} λiᵀ V̄Pᵀ(Ā + ∆)vi,

where Λ = [λ1, . . . , λm] ∈ R⁽ⁿ⁻ᵐ⁾ˣᵐ is a matrix collecting vectors of Lagrange multipliers, and vi ∈ Rⁿ is the i-th column of VP. By equating the partial derivatives of L to zero we obtain the following optimality conditions:

∂L/∂λi = 0 ⇒ V̄Pᵀ(Ā + ∆)vi = 0,  (5a)
∂L/∂δij = 0 ⇒ 2δij h⁻¹ij + Σ_{k=1}^{m} λkᵀ v̄i vjk = 0,  (5b)

where v̄i is the i-th row of V̄P and vjk is the entry (j, k) of the matrix VP. Finally, (5a) and (5b) can be rewritten as

V̄Pᵀ(Ā + ∆)VP = 0,  (6a)
∆ ⊘ H + V̄P ΛVPᵀ = 0,  (6b)

where the factor 2 of (5b) has been included into the Lagrange multipliers. Applying the change of coordinates T = [VP V̄P], Ā = TÃT⁻¹ and ∆ = T∆̃T⁻¹, with Id the identity matrix of dimension d, equation (6a) becomes

V̄PᵀT(Ã + ∆̃)T⁻¹VP = [0 In−m] [ Ã11 + ∆̃11  Ã12 + ∆̃12
                                Ã21 + ∆̃21  Ã22 + ∆̃22 ] [Im; 0] = 0,

which leads to ∆̃∗21 = −Ã21. Equation (6b) is equivalent to ∆ + (V̄P ΛVPᵀ) ∘ H = 0, which can be decomposed as

(VP ∆̃11 VPᵀ − V̄P Ã21 VPᵀ + VP ∆̃12 V̄Pᵀ + V̄P ∆̃22 V̄Pᵀ) + (V̄P ΛVPᵀ) ∘ H = 0.  (7)

Let X = (V̄P ΛVPᵀ) ∘ H. Recall that V̄PᵀV̄P = In−m, VPᵀVP = Im, and V̄PᵀVP = 0. By pre-multiplying equation (7) by V̄Pᵀ and post-multiplying it by VP, we obtain

−Ã21 + V̄PᵀXVP = 0,

which is a system of linear equations that can be solved with respect to the unknown Λ. Following the same reasoning as above, we obtain the following other three equations that entirely determine the solution ∆̃:

∆̃11 + VPᵀXVP = 0,  ∆̃12 + VPᵀXV̄P = 0,  and  ∆̃22 + V̄PᵀXV̄P = 0.

Finally, the optimal matrix ∆∗, solution to the problem (4), is given in original coordinates as

∆∗ = T [ ∆̃∗11 ∆̃∗12
         −Ã21 ∆̃∗22 ] T⁻¹.

Theorem 4.1 characterizes the smallest (measured by the Frobenius norm) structured network perturbation that ensures synchronization of a given partition. Without constraints, the optimal perturbation has a straightforward expression.

Corollary 4.2: (Unconstrained minimization problem) Let H = {M : mij ≠ 0 for all i and j}. The minimization problem (3) is always feasible, and its solution is

∆∗ = −V̄P V̄PᵀĀVP VPᵀ.

Proof: Because hij = 1 for all i and j, the optimality condition (6b) becomes

∆ + V̄P ΛVPᵀ = 0.

We now pre- and post-multiply both sides of the above equality by V̄Pᵀ and VP, respectively, and obtain

Λ = V̄PᵀĀVP,  ∆∗ = −V̄P V̄PᵀĀVP VPᵀ,

where we have used (6a), VPᵀVP = I, and V̄PᵀV̄P = I.

We now present an example where we modify the network weights to ensure synchronization of a desired partition.

Example 2: (Enforcing synchronization of a partition) Consider the network in Fig. 3(a). The dashed edges and the solid edges represent constrained and unconstrained edges, respectively. The corresponding matrices Ā and H read as

Ā = [ 0 0 0 0 0 12
      0 0 0 5 0 0
      0 0 0 0 10 1
      9 0 0 0 0 0
      0 9 0 0 0 0
      0 7 2 0 0 0 ] ,   H = [ 0 1 1 0 0 0
                               0 0 1 0 1 0
                               1 1 0 0 1 1
                               0 1 1 0 1 1
                               0 1 1 1 0 1
                               1 0 0 1 1 0 ] .
Notice that H allows only a subset of interconnections to be
modified, specifically, those corresponding to its unit entries.
It can be shown that, because condition (i) in Theorem
3.1 is not satisfied (equivalently V̄PᵀĀVP ≠ 0), the network
is not phase synchronizable (see Fig. 3(b) and 3(c) for an
evolution of the oscillators’ phases and frequencies). From
Theorem 4.1 we obtain the optimal perturbation that ensures
synchronization, which leads to the network in Fig. 3(d).
Notice that the network in Fig. 3(d) satisfies condition (i)
in Theorem 3.1. In fact, when the natural frequencies are
equal within each cluster (condition (ii) in Theorem 3.1),
the clusters evolve cohesively; see Fig. 3(e) and 3(f).
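In the unconstrained case, the correction of Corollary 4.2 is a one-line computation. A sketch of ours (with VP renormalized so that VPᵀVP = I, as the proof assumes):

```python
import numpy as np
from scipy.linalg import null_space

def unconstrained_perturbation(A_bar, V_P):
    """Delta* = -V̄_P V̄_P^T Ā V_P V_P^T (Corollary 4.2, orthonormal bases)."""
    Q = V_P / np.linalg.norm(V_P, axis=0)   # orthonormal columns, Q^T Q = I
    V_bar = null_space(Q.T)                 # orthonormal basis of Im(V_P)^⊥
    return -V_bar @ V_bar.T @ A_bar @ Q @ Q.T

# By construction, V̄_P^T (Ā + Delta*) V_P = 0 up to round-off.
```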
V. CONCLUSION
In this work we study cluster synchronization in networks
of Kuramoto oscillators. We derive necessary and sufficient
conditions on the network interconnection weights and on the
oscillators’ natural frequencies to guarantee that the phases
of groups of oscillators evolve cohesively with one another,
yet independently from the phases of oscillators belonging to different groups. Additionally, we develop a control
mechanism to modify the edges of a network to ensure the
formation of desired clusters. Our control method is optimal,
as it determines the smallest perturbation (measured by the
Frobenius norm) for a desired synchronization pattern that is
compatible with a pre-specified set of structural constraints.
Fig. 3. Fig. (a) shows the network in Example 2, where the dashed (resp. solid) edges correspond to the zero (resp. unit) entries of H. The partition
P = {P1 , P2 }, with P1 = {1, 2, 3} and P2 = {4, 5, 6}, is not synchronizable because, for instance, the sum of the weights of the incoming edges to
nodes 1 and 2 is different (see Theorem 3.1). Fig. (b) and (c) show the phases and frequencies of the oscillators as a function of time. Fig. (d) shows the
modified network obtained from Theorem 4.1, which satisfies condition (i) in Theorem 3.1 and leads to a synchronizable partition P. When the natural
frequencies are selected to satisfy condition (ii) in Theorem 3.1, the oscillators’ phases and frequencies are synchronized as illustrated in Fig. (e) and (f).
REFERENCES
[1] F. L. Lewis, H. Zhang, K. Hengster-Movric, and A. Das. Introduction
to synchronization in nature and physics and cooperative control for
multi-agent systems on graphs. In Cooperative Control of Multi-Agent
Systems, pages 1–21. Springer, 2014.
[2] S. H. Strogatz. From Kuramoto to Crawford: exploring the onset
of synchronization in populations of coupled oscillators. Physica D:
Nonlinear Phenomena, 143(14):1–20, 2000.
[3] F. A. S. Ferrari, R. L. Viana, S. R. Lopes, and R. Stoop. Phase
synchronization of coupled bursting neurons and the generalized
Kuramoto model. Neural Networks, pages 107–118, 2015.
[4] F. Dörfler and F. Bullo. Synchronization in complex networks of phase
oscillators: A survey. Automatica, 50(6):1539–1564, 2014.
[5] T. Nishikawa and A. E. Motter. Comparative analysis of existing models for power-grid synchronization. New Journal of Physics, 17(1):015012, 2015.
[6] T. Danino, O. Mondragon-Palomino, L. Tsimring, and J. Hasty. A
synchronized quorum of genetic clocks. Nature, 463(7279):326–330,
Jan 2010.
[7] J. Kim, D. Shin, S. H. Jung, P. Heslop-Harrison, and K. Cho. A design
principle underlying the synchronization of oscillations in cellular
systems. Journal of Cell Science, 123(4):537–543, 2010.
[8] D. A. Paley, N. E. Leonard, R. Sepulchre, D. Grunbaum, and J. K.
Parrish. Oscillator models and collective motion. IEEE Control
Systems Magazine, 27(4):89–105, 2007.
[9] A. Schnitzler and J. Gross. Normal and pathological oscillatory
communication in the brain. Nature Reviews Neuroscience, 6(4):285–
296, Apr 2005.
[10] C. Hammond, H. Bergman, and P. Brown. Pathological synchronization in Parkinson's disease: networks, models and treatments. Trends in Neurosciences, 30(7):357–364, 2007.
[11] M. Banaie, Y. Sarbaz, M. Pooyan, S. Gharibzadeh, and F. Towhidkhah. Modeling Huntington's disease considering the theory of central pattern generators. In Advances in Computational Intelligence, pages 11–19. Springer, 2009.
[12] K. Lehnertz, S. Bialonski, M. T. Horstmann, D. Krug, A. Rothkegel,
M. Staniek, and T. Wagner. Synchronization phenomena in human epileptic brain networks. Journal of Neuroscience Methods,
183(1):42–48, 2009.
[13] Y. Kuramoto. Self-entrainment of a population of coupled non-linear
oscillators. In International symposium on mathematical problems in
theoretical physics, pages 420–422, Berlin, Heidelberg, 1975.
[14] J. Gómez-Gardeñes, Y. Moreno, and A. Arenas. Synchronizability
determined by coupling strengths and topology on complex networks.
Physical Review E, 75:066106, Jun 2007.
[15] Z. Zhang, A. Sarlette, and Z. Ling. Synchronization of Kuramoto oscillators with non-identical natural frequencies: a quantum dynamical
decoupling approach. In IEEE Conf. on Decision and Control, pages
4585–4590, December 2014.
[16] L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and
R. Roy. Cluster synchronization and isolated desynchronization in
complex networks with symmetries. Nature communications, 5, 2014.
[17] F. Sorrentino, L. M. Pecora, A. M. Hagerstrom, T. E. Murphy,
and R. Roy. Complete characterization of the stability of cluster
synchronization in complex dynamical networks. Science Advances,
2(4), 2016.
[18] W. Lu, B. Liu, and T. Chen. Cluster synchronization in networks of
coupled nonidentical dynamical systems. Chaos: An Interdisciplinary
Journal of Nonlinear Science, 20(1):013120, 2010.
[19] T. Dahms, J. Lehnert, and E. Schöll. Cluster and group synchronization
in delay-coupled networks. Physical Review E, 86:016202, Jul 2012.
[20] C. Favaretto, D. S. Bassett, A. Cenedese, and F. Pasqualetti. Bode
meets kuramoto: Synchronized clusters in oscillatory networks. In
IEEE American Control Conference, pages 2799–2804, Seattle, WA,
USA, May 2017.
[21] C. Favaretto, A. Cenedese, and F. Pasqualetti. Cluster synchronization
in networks of kuramoto oscillators. In IFAC World Congress, 2017.
To Appear.
[22] M. T. Schaub, N. O’Clery, Y. N. Billeh, J. Delvenne, R. Lambiotte,
and M. Barahona. Graph partitions and cluster synchronization in
networks of oscillators. Chaos, 26(9):094821, 2016.
[23] C. D. Godsil and G. F. Royle. Algebraic Graph Theory, volume 207
of Graduate Texts in Mathematics. Springer, 2001.
[24] Y. Kuramoto. Self-entrainment of a population of coupled non-linear
oscillators. In H. Araki, editor, Int. Symposium on Mathematical
Problems in Theoretical Physics, volume 39 of Lecture Notes in
Physics, pages 420–422. Springer, 1975.
[25] M. Mirchev, L. Basnarkov, F. Corinto, and L. Kocarev. Cooperative
phenomena in networks of oscillators with non-identical interactions and dynamics. IEEE Transactions on Circuits and Systems I,
61(3):811–819, 2014.
[26] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University
Press, 1985.
arXiv:1702.05462v2 [stat.ME] 7 Jan 2018

Bayesian Loss-based Approach to Change Point Analysis

Laurentiu Hinoveanu∗1, Fabrizio Leisen†1, and Cristiano Villa‡1
1 School of Mathematics, Statistics and Actuarial Science, University of Kent
Abstract
In this paper we present a loss-based approach to change point analysis. In particular, we look at the problem from two perspectives. The
first focuses on the definition of a prior when the number of change
points is known a priori. The second contribution aims to estimate
the number of change points by using a loss-based approach recently
introduced in the literature. The latter considers change point estimation as a model selection exercise. We show the performance of the
proposed approach on simulated data and real data sets.
Keywords: Change point; Discrete parameter space; Loss-based prior; Model
selection.
1 Introduction
There are several practical scenarios where it is inappropriate to assume
that the distribution of the observations does not change. For example, financial data sets can exhibit alternate behaviours due to crisis periods. In
this case it is sensible to assume changes in the underlying distribution. The
change in the distribution can be either in the value of one or more of the
parameters or, more in general, on the family of the distribution. In the
∗[email protected]  †[email protected]  ‡[email protected]
latter case, for example, one may deem appropriate to consider a normal
density for the stagnation periods, while a Student t, with relatively heavy
tails, may be more suitable to represent observations in the more turbulent
stages of a crisis. The task of identifying if, and when, one or more changes
have occurred is not trivial and requires appropriate methods to avoid detection of a large number of changes or, at the opposite extreme, seeing no
changes at all. The change point problem has been deeply studied from a
Bayesian point of view. Chernoff and Zacks (1964) focused on the change in
the means of normally distributed variables. Smith (1975) looked into the
single change point problem when different knowledge of the parameters of
the underlying distributions is available: all known, some of them known
or none of them known. Smith (1975) focuses on the binomial and normal
distributions. In Muliere and Scarsini (1985) the problem is tackled from
a Bayesian nonparametric perspective. The authors consider Dirichlet processes with independent base measures as underlying distributions. In this
framework, Petrone and Raftery (1997) have shown that the Dirichlet process prior could have a strong effect on the inference and may lead to wrong
conclusions in the case of a single change point. Raftery and Akman (1986)
have approached the single change point problem in the context of a Poisson
likelihood under both proper and improper priors for the model parameters.
Carlin et al. (1992) build on the work of Raftery and Akman (1986) by considering a two level hierarchical model. Both papers illustrate the respective
approaches by studying the well-known British coal-mining disaster data set.
In the context of multiple change points detection, Loschi and Cruz (2005)
have provided a fully Bayesian treatment for the product partitions model
of Barry and Hartigan (1992). Their application focused on stock exchange
data. Stephens (1994) has extended the Gibbs sampler introduced by Carlin et al. (1992) in the change point literature to handle multiple change
points. Hannart and Naveau (2009) have used Bayesian decision theory, in
particular 0-1 cost functions, to estimate multiple changes in homoskedastic
normally distributed observations. Schwaller and Robin (2017) extend the
product partition model of Barry and Hartigan (1992) by adding a graphical
structure which could capture the dependencies between multivariate observations. Fearnhead and Liu (2007) proposed a filtering algorithm for the
sequential multiple change points detection problem in the case of piecewise
regression models. Henderson and Matthews (1993) introduced a partial
Bayesian approach which involves the use of a profile likelihood, where the
aim is to detect multiple changes in the mean of Poisson distributions with an
application to haemolytic uraemic syndrome (HUS) data. The same data set
was studied by Tian et al. (2009), who proposed a method which treats the
change points as latent variables. Ko et al. (2015) have proposed an extension to the hidden Markov model of Chib (1998) by using a Dirichlet process
prior on each row of the regime matrix. Their model is semiparametric, as
the number of states is not specified in advance, but it grows according to
the data size. Heard and Turcotte (2017) have proposed a new sequential
Monte Carlo algorithm to infer multiple change points.
Whilst the literature covering change point analysis from a Bayesian perspective is vast when prior distributions are elicited, the documentation referring
to analysis under minimal prior information is limited, see Moreno et al.
(2005) and Girón et al. (2007). The former paper discusses the single change
point problem in a model selection setting, whilst the latter paper, which is
an extension of the former, tackles the multivariate change point problem
in the context of linear regression models. Our work aims to contribute to
the methodology for change point analysis under the assumption that the information about the number of change points and their location is minimal.
First, we discuss the definition of an objective prior for change point location, both for single and multiple changes, assuming the number of changes
is known a priori. Then, we define a prior on the number of change points via
a model selection approach. Here, we assume that the change point coincides
with one of the observations. As such, given X1 , X2 , . . . , Xn data points, the
change point location is discrete. To the best of our knowledge, the sole
general objective approach to define prior distributions on discrete spaces is
the one introduced by Villa and Walker (2015a).
To illustrate the idea, consider a probability distribution f (x|m), where
m ∈ M is a discrete parameter. Then, the prior π(m) is obtained by objectively measuring what is lost if the value m is removed from the parameter space, and it is the true value. According to Berk (1966), if a
model is misspecified, the posterior distribution asymptotically accumulates
on the model which is the most similar to the true one, where the similarity
is measured in terms of the Kullback–Leibler (KL) divergence. Therefore,
DKL (f (·|m)kf (·|m0 )), where m0 is the parameter characterising the nearest
model to f (x|m), represents the utility of keeping m. The objective prior is
then obtained by linking the aforementioned utility via the self-information
loss:
π(m) ∝ exp{ min_{m′≠m} DKL(f(·|m)‖f(·|m′)) } − 1,  (1)
where the Kullback–Leibler divergence (Kullback and Leibler, 1951) from the
sampling distribution with density f(x|m) to the one with density f(x|m′) is defined as:

DKL(f(·|m)‖f(·|m′)) = ∫_X f(x|m) · log( f(x|m) / f(x|m′) ) dx.
Throughout the paper, the objective prior defined in equation (1) will be
referenced as the loss-based prior. This approach is used to define an objective prior distribution when the number of change points is known a priori.
To obtain a prior distribution for the number of change points, we adopt a
model selection approach based on the results in Villa and Walker (2015b),
where a method to define a prior on the space of models is proposed. To
illustrate, let us consider k Bayesian models:
Mj = {fj(x|θj), πj(θj)},  j ∈ {1, 2, . . . , k},  (2)
where fj (x|θj ) is the sampling density characterised by θj and πj (θj ) represents the prior on the model parameter.
Assuming the prior on the model parameter, πj (θj ), is proper, the model
prior probability Pr(Mj ) is proportional to the expected minimum Kullback–
Leibler divergence from Mj , where the expectation is considered with respect
to πj (θj ). That is:
Pr(Mj) ∝ exp{ Eπj [ inf_{θi, i≠j} DKL(fj(x|θj)‖fi(x|θi)) ] },  j = 1, . . . , k.  (3)
The model prior probabilities defined in equation (3) can be employed to
derive the model posterior probabilities through:
Pr(Mi|x) = [ Σ_{j=1}^{k} (Pr(Mj)/Pr(Mi)) · Bji ]⁻¹,  (4)

where Bji is the Bayes factor between model Mj and model Mi, defined as

Bji = ( ∫ fj(x|θj)πj(θj) dθj ) / ( ∫ fi(x|θi)πi(θi) dθi ),

with i ≠ j ∈ {1, 2, . . . , k}.
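Numerically, equation (4) is a direct computation once the Bayes factors are available. The sketch below is our own illustration (it assumes B[j, i] stores Bji, with B[i, i] = 1):

```python
import numpy as np

def model_posteriors(prior, B):
    """Posterior model probabilities via equation (4)."""
    k = len(prior)
    return np.array([1.0 / sum(prior[j] / prior[i] * B[j, i]
                               for j in range(k))
                     for i in range(k)])
```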
This paper is structured as follows: in Section 2 we establish the way we set
objective priors on both single and multiple change point locations. Section
3 shows how we define the model prior probabilities for the number of change
point locations. Illustrations of the model selection exercise are provided in
Sections 4 and 5, where we work with simulated and real data, respectively.
Section 6 is dedicated to final remarks.
2 Objective Prior on the Change Point Locations
This section is devoted to the derivation of the loss-based prior when the
number of change points is known a priori. Specifically, let k be the number
of change points and m1 < m2 < . . . < mk their locations. We introduce the
idea in the simple case where we assume that there is only one change point
in the data set (see Section 2.1). Then, we extend the results to the more
general case where multiple change points are assumed (see Section 2.2).
A well-known objective prior for finite parameter spaces, in cases where there
is no structure, is the uniform prior (Berger et al., 2012). As such, a natural
choice for the prior on the change points location is the uniform (Koop and
Potter, 2009). The corresponding loss-based prior is indeed the uniform, as
shown below, which is a reassuring result as the objective prior for a specific
parameter space, if exists, should be unique.
2.1 Single Change Point

As mentioned above, we show that the loss-based prior for the single change point case coincides with the discrete uniform distribution over the set {1, 2, . . . , n − 1}.
Let X(n) = (X1 , . . . , Xn ) denote an n-dimensional vector of random variables,
representing the random sample, and m be our single change point location,
that is m ∈ {1, 2, . . . , n − 1}, such that

X1, . . . , Xm | θ̃1 ∼ f1(·|θ̃1) (i.i.d.),
Xm+1, . . . , Xn | θ̃2 ∼ f2(·|θ̃2) (i.i.d.).  (5)
Note that we assume that there is a change point in the series, as such the
space of m does not include the case m = n. In addition, we assume that
θ̃1 ≠ θ̃2 when f1 = f2. The sampling density for the vector of observations x⁽ⁿ⁾ = (x1, . . . , xn) is:

f(x⁽ⁿ⁾|m, θ̃1, θ̃2) = Π_{i=1}^{m} f1(xi|θ̃1) · Π_{i=m+1}^{n} f2(xi|θ̃2).  (6)
Let m′ ≠ m. Then, the Kullback–Leibler divergence between the model parametrised by m and the one parametrised by m′ is:

DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = ∫ f(x⁽ⁿ⁾|m, θ̃1, θ̃2) log( f(x⁽ⁿ⁾|m, θ̃1, θ̃2) / f(x⁽ⁿ⁾|m′, θ̃1, θ̃2) ) dx⁽ⁿ⁾.  (7)
Without loss of generality, consider m < m′. In this case, note that

f(x⁽ⁿ⁾|m, θ̃1, θ̃2) / f(x⁽ⁿ⁾|m′, θ̃1, θ̃2) = Π_{i=m+1}^{m′} f2(xi|θ̃2) / f1(xi|θ̃1),

leading to

DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = Σ_{i=m+1}^{m′} ∫ f2(xi|θ̃2) log( f2(xi|θ̃2) / f1(xi|θ̃1) ) dxi.  (8)

On the right hand side of equation (8), we can recognise the Kullback–Leibler divergence from density f2 to density f1, thus getting:

DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = (m′ − m) · DKL(f2(·|θ̃2)‖f1(·|θ̃1)).  (9)

In a similar fashion, when m > m′, we have that:

DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = (m − m′) · DKL(f1(·|θ̃1)‖f2(·|θ̃2)).  (10)
In this single change point scenario, we can consider m′ as a perturbation of the change point location m, that is m′ = m ± l where l ∈ N∗, such that 1 ≤ m′ < n. Then, taking into account equations (9) and (10), the Kullback–Leibler divergence becomes:

DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = l · DKL(f2(·|θ̃2)‖f1(·|θ̃1)) if m < m′, and l · DKL(f1(·|θ̃1)‖f2(·|θ̃2)) if m > m′,

and

min_{m′≠m} DKL(f(x⁽ⁿ⁾|m, θ̃1, θ̃2)‖f(x⁽ⁿ⁾|m′, θ̃1, θ̃2)) = min{ DKL(f2(·|θ̃2)‖f1(·|θ̃1)), DKL(f1(·|θ̃1)‖f2(·|θ̃2)) } · min_{m′≠m}{l},  (11)

where min_{m′≠m}{l} = 1. We observe that equation (11) is only a function of θ̃1 and θ̃2 and does not depend on m. Thus, π(m) ∝ 1 and, therefore,

π(m) = 1/(n − 1),  m ∈ {1, . . . , n − 1}.  (12)
This prior was used, for instance, in an econometric context by Koop and
Potter (2009) with the rationale of giving equal weight to every possible
change point location.
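The independence of (11) from m is easy to verify numerically; for instance, for two normal densities (our own illustration, using the closed-form Gaussian Kullback–Leibler divergence):

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

n = 20
kl_12 = kl_normal(0.0, 1.0, 1.0, 2.0)   # KL(f1 || f2)
kl_21 = kl_normal(1.0, 2.0, 0.0, 1.0)   # KL(f2 || f1)
for m in (5, 10, 15):
    divs = [abs(mp - m) * (kl_21 if mp > m else kl_12)
            for mp in range(1, n) if mp != m]
    print(m, min(divs))   # the same minimum for every interior m
```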
2.2 Multivariate Change Point Problem
In this section, we address the change point problem in its generality by
assuming that there are 1 ≤ k < n change points. In particular, for the data
x(n) = (x1 , . . . , xn ), we consider the following sampling distribution
f(x⁽ⁿ⁾|m, θ̃) = Π_{i=1}^{m1} f1(xi|θ̃1) · Π_{j=1}^{k−1} Π_{i=mj+1}^{m_{j+1}} f_{j+1}(xi|θ̃_{j+1}) · Π_{i=mk+1}^{n} f_{k+1}(xi|θ̃_{k+1}),  (13)
where m = (m1 , . . . , mk ), 1 ≤ m1 < m2 < . . . < mk < n, is the vector
of the change point locations and θ̃ = (θ̃1 , . . . , θ̃k , θ̃k+1 ) is the vector of the
parameters of the underlying probability distributions. Schematically:
X1, . . . , Xm1 | θ̃1 ∼ f1(·|θ̃1) (i.i.d.)
Xm1+1, . . . , Xm2 | θ̃2 ∼ f2(·|θ̃2) (i.i.d.)
. . .
Xmk−1+1, . . . , Xmk | θ̃k ∼ fk(·|θ̃k) (i.i.d.)
Xmk+1, . . . , Xn | θ̃k+1 ∼ fk+1(·|θ̃k+1) (i.i.d.).
If f1 = f2 = · · · = fk+1 , then it is reasonable to assume that some of the θ’s
are different. Without loss of generality, we assume that θ̃1 ≠ θ̃2 ≠ · · · ≠ θ̃k ≠ θ̃k+1. In a similar fashion to the single change point case, we cannot
assume mk = n since we require exactly k change points.
In this case, due to the multivariate nature of the vector m = (m1 , . . . , mk ),
the derivation of the loss-based prior is not as straightforward as in the one
dimensional case. In fact, the derivation of the prior is based on heuristic considerations supported by Theorem 1 below (the proof of which is in the Appendix). In particular, we are able to prove an analogue of equations (9) and (10) when only one component is arbitrarily perturbed. Let us define the following functions:

d⁺¹j(θ̃) = DKL(f_{j+1}(·|θ̃_{j+1})‖f_j(·|θ̃_j)),
d⁻¹j(θ̃) = DKL(f_j(·|θ̃_j)‖f_{j+1}(·|θ̃_{j+1})),

where j ∈ {1, 2, . . . , k}. The following Theorem is useful to understand the behaviour of the loss-based prior in the general case.

Theorem 1. Let f(x⁽ⁿ⁾|m, θ̃) be the sampling distribution defined in equation (13) and consider j ∈ {1, . . . , k}. Let m′ be such that m′i = mi for i ≠ j, and let the component m′j be such that m′j ≠ mj and m_{j−1} < m′j < m_{j+1}. Therefore,

DKL(f(x⁽ⁿ⁾|m, θ̃)‖f(x⁽ⁿ⁾|m′, θ̃)) = |m′j − mj| · d^S_j(θ̃),

where S = sgn(m′j − mj).
Note that, Theorem 1 states that the minimum Kullback–Leibler divergence
is achieved when m0j = mj + 1 or m0j = mj − 1. This result is not surprising
since the Kullback–Leibler divergence measures the degree of similarity between two distributions. The smaller the perturbation caused by changes in
one of the parameters is, the smaller the Kullback–Leibler divergence between
the two distributions is. Although Theorem 1 makes a partial statement
about the multiple change points scenario, it provides a strong argument for
supporting the uniform prior. Indeed, if now we consider the general case of
having k change points, it is straightforward to see that the Kullback–Leibler
divergence is minimised when only one of the components of the vector m is
perturbed by (plus or minus) one unit. As such, the loss-based prior depends
on the vector of parameters θ̃ only, as in the one-dimensional case, yielding
the uniform prior for m.
Therefore, the loss-based prior on the multivariate change point location is

π(m) = C(n − 1, k)⁻¹,  (14)
where m = (m1, . . . , mk), 1 ≤ m1 < m2 < . . . < mk < n, and C(n − 1, k) denotes the binomial coefficient. The denominator in equation (14) has the above form because, for every number of k change points, we are interested in the number of k-subsets from a set of n − 1 elements, which is C(n − 1, k). The same prior was also derived in a different way by Girón et al. (2007).
3 Loss-based Prior on the Number of Change Points
Here, we approach the change point analysis as a model selection problem.
In particular, we define a prior on the space of models, where each model
represents a certain number of change points (including the case of no change
points). The method adopted to define the prior on the space of models is the
one introduced in Villa and Walker (2015b).

Figure 1: Diagram showing the way we specify our models. The arrows indicate that the respective change point locations remain fixed from the previous model to the current one.

We proceed as follows. Assume
we have to select from k + 1 possible models. Let M0 be the model with no
change points, M1 the model with one change point and so on. Generalising,
model Mk corresponds to the model with k change points. The idea is that
the current model encompasses the change point locations of the previous
model. As an example, in model M3 the first two change point locations will
be the same as in the case of model M2 . To illustrate the way we envision our
models, we have provided Figure 1. It has to be noted that the construction
of the possible models from M0 to Mk can be done in a different way to the one described here. Obviously, the approach to define the model priors stays unchanged. Consistently with the notation used in Section 1,
θk = (θ̃1, . . . , θ̃k+1, m1, . . . , mk) if k = 1, . . . , n − 1, and θk = θ̃1 if k = 0,
represents the vector of parameters of model Mk , where θ̃1 , . . . , θ̃k+1 are the
model specific parameters and m1 , . . . , mk are the change point locations, as
in Figure 1.
Based on the way we have specified our models, which are in direct correspondence with the number of change points and their locations, we state
Theorem 2 (the proof of which is in the Appendix).
Theorem 2. Let DKL(Mi‖Mj) = DKL(f(x⁽ⁿ⁾|θi)‖f(x⁽ⁿ⁾|θj)). For any 0 ≤ i < j ≤ k integers, with k < n, and the convention mj+1 = n, we have the following:

DKL(Mi‖Mj) = Σ_{q=i+1}^{j} [ (m_{q+1} − m_q) · DKL(f_{i+1}(·|θ̃_{i+1})‖f_{q+1}(·|θ̃_{q+1})) ],

and

DKL(Mj‖Mi) = Σ_{q=i+1}^{j} [ (m_{q+1} − m_q) · DKL(f_{q+1}(·|θ̃_{q+1})‖f_{i+1}(·|θ̃_{i+1})) ].
The result in Theorem 2 is useful when the model selection exercise is implemented. Indeed, the Villa and Walker (2015b) approach requires the computation of the Kullback–Leibler divergences in Theorem 2. Recalling equation
(3), the objective model prior probabilities are then given by:
Pr(Mj) ∝ exp{ Eπj [ inf_{θi, i≠j} DKL(Mj‖Mi) ] },  j = 0, 1, . . . , k.  (15)
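The expectation in (15) can be approximated by Monte Carlo: sample θj from its prior and, for each draw, minimize the divergence over the competing model's parameters. Below is a sketch of ours for one such term, with a Geometric sampling density against a Poisson alternative; the truncation level, optimization bounds and prior parameters are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta, geom, poisson

def kl_geom_poisson(p, lam, support=200):
    """KL(Geometric(p) || Poisson(lam)) on {0, 1, ...}, truncated support."""
    x = np.arange(support)
    f = geom.pmf(x + 1, p)                        # geometric started at zero
    g = np.clip(poisson.pmf(x, lam), 1e-300, None)
    mask = f > 0
    return float(np.sum(f[mask] * np.log(f[mask] / g[mask])))

def expected_min_kl(a=2.0, b=2.0, draws=200, seed=1):
    """Monte Carlo estimate of E_pi[ inf_lam KL ]; the exponential of this
    value is proportional to the corresponding model prior in (15)."""
    rng = np.random.default_rng(seed)
    vals = [minimize_scalar(lambda lam: kl_geom_poisson(p, lam),
                            bounds=(1e-3, 50.0), method="bounded").fun
            for p in beta.rvs(a, b, size=draws, random_state=rng)]
    return float(np.mean(vals))
```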
For illustrative purposes, in the Appendix we derive the model prior probabilities to perform model selection among M0 , M1 and M2 .
It is easy to infer from equation (15) that model priors depend on the prior
distribution assigned to the model parameters, that is on the level of uncertainty that we have about their true values. For the change point location, a sensible choice is the uniform prior which, as shown in Section 2, corresponds to the loss-based prior. For the model specific parameters, we have several options. If one wishes to pursue an objective analysis, intrinsic priors (Berger and Pericchi, 1996) may represent a viable solution since they are proper. Nonetheless, the method introduced by Villa and Walker (2015b) does
not require, in principle, an objective choice as long as the priors are proper.
Given that we use the latter approach, here we consider subjective priors for
the model specific parameters.
Remark. In the case where the changes in the underlying sampling distribution are limited to the parameter values, the model prior probabilities
defined in (15) follow the uniform distribution. That is, Pr(Mj ) ∝ 1. In the
real data example illustrated in Section 5.1, we indeed consider a problem
where the above case occurs.
3.1 A special case: selection between M0 and M1
Let us consider the case where we have to estimate whether there is or not
a change point in a set of observations. This implies that we have to choose
between model M0 (i.e. no change point) and M1 (i.e. one change point).
Following our approach, we have:
Pr(M0) ∝ exp{ Eπ0 [ inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) ] },  (16)

and

Pr(M1) ∝ exp{ Eπ1 [ (n − m1) · inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1)) ] }.  (17)
Now, let us assume independence between the prior on the change point
location and the prior on the parameters of the underlying sampling distributions, that is π1 (m1 , θ̃1 , θ̃2 ) = π1 (m1 )π1 (θ̃1 , θ̃2 ). Let us further recall that,
as per equation (14), π1 (m1 ) = 1/(n−1). As such, we observe that the model
prior probability on M1 becomes:
Pr(M1) ∝ exp{ (n/2) · Eπ1(θ̃1,θ̃2) [ inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1)) ] }.  (18)
We notice that the model prior probability for model M1 is increasing when
the sample size increases. This behaviour occurs whether there is or not a
11
change point in the data. We propose to address the above problem by using
a non-uniform prior for m1 . A reasonable alternative, which works quite well
in practice, would be the following shifted binomial as prior:
m −1 n−m1 −1
n−2
n−1 1
1
π1 (m1 ) =
, 1 ≤ m1 ≤ n − 1.
(19)
m1 − 1
n
n
To argument the choice of (19), we note that, as n increases, the probability
mass will be more and more concentrated towards the upper end of the
support. Therefore, from equations (17) and (19) follows:
2n − 2
Pr(M1 ) ∝ exp
Eπ1 (θ̃1 ,θ̃2 ) inf DKL (f2 (·|θ̃2 )kf1 (·|θ̃1 )) . (20)
n
θ̃1
For the more general case where we consider more than two models, the
problem highlighted in equation (18) vanishes.
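A quick sketch of the prior in (19), and of how the factor (2n − 2)/n in (20) arises as n − Eπ1[m1], is given below (our own illustration; SciPy's binom provides the binomial pmf):

```python
from scipy.stats import binom

def shifted_binomial_prior(m1, n):
    """Equation (19): m1 - 1 ~ Binomial(n - 2, (n - 1)/n), 1 <= m1 <= n - 1."""
    return binom.pmf(m1 - 1, n - 2, (n - 1) / n)

n = 100
mean_m1 = 1 + (n - 2) * (n - 1) / n     # E[m1] under (19)
print(n - mean_m1, (2 * n - 2) / n)     # both equal (2n - 2)/n
```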
4 Change Point Analysis on Simulated Data
In this section, we present the results of several simulation studies based
on the methodologies discussed in Sections 2 and 3. We start with a scenario involving discrete distributions in the context of the one change point
problem. We then show the results obtained when we consider continuous
distributions for the case of two change points. The choice of the underlying
sampling distributions is in line with Villa and Walker (2015b).
4.1 Single sample
Scenario 1. The first scenario concerns the choice between models M0 and M1. Specifically, for M0 we have:

X1, X2, . . . , Xn | p ∼ Geometric(p) (i.i.d.),

and for M1 we have:

X1, X2, . . . , Xm1 | p ∼ Geometric(p) (i.i.d.),
Xm1+1, Xm1+2, . . . , Xn | λ ∼ Poisson(λ) (i.i.d.).

Let us denote with f1(·|p) and f2(·|λ) the probability mass functions of the Geometric and the Poisson distributions, respectively. The priors for the parameters of f1 and f2 are p ∼ Beta(a, b) and λ ∼ Gamma(c, d).
In the first simulation, we sample n = 100 observations from model M0 with
p = 0.8. To perform the change point analysis, we have chosen the following
parameters for the priors on p and λ: a = 2, b = 2, c = 3 and d = 1.
Applying the approach introduced in Section 3, we obtain Pr(M0 ) ∝ 1.59
and Pr(M1 ) ∝ 1.81. These model priors yield the posterior distribution
probabilities (refer to equation (4)) Pr(M0 |x(n) ) = 0.92 and Pr(M1 |x(n) ) =
0.08. As expected, the selection process strongly indicates the true model
as M0 . Table 1 reports the above probabilities including other information,
such as the appropriate Bayes factors.
The second simulation looked at the opposite setup, that is we sample n =
100 observations from M1 , with p = 0.8 and λ = 3. We have sampled 50 data
points from the Geometric distribution and the remaining 50 data points from
the Poisson distribution. In Figure 2, we have plotted the simulated sample,
where it is legitimate to assume a change in the underlying distribution.
Using the same prior parameters as above, we obtain Pr(M0 |x(n) ) = 0.06 and
Pr(M1 |x(n) ) = 0.94. Again, the model selection process is assigning heavy
posterior mass to the true model M1 . These results are further detailed in
Table 1.
Figure 2: Scatter plot of the data simulated from model M1 in Scenario 1.
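Samples of this kind are straightforward to generate; a sketch of ours for the second simulation (NumPy's geometric counts trials, so one is subtracted to place the support at {0, 1, . . .} — the support convention is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
p, lam, m1, n = 0.8, 3.0, 50, 100

x = np.concatenate([rng.geometric(p, size=m1) - 1,      # Geometric segment
                    rng.poisson(lam, size=n - m1)])     # Poisson segment
```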
Scenario 2. In this scenario we consider the case where we have to select among three models, that is model M0:

X1, X2, . . . , Xn | λ, κ ∼ Weibull(λ, κ) (i.i.d.),  (21)

model M1:

X1, X2, . . . , Xm1 | λ, κ ∼ Weibull(λ, κ) (i.i.d.),
Xm1+1, Xm1+2, . . . , Xn | µ, τ ∼ Log-normal(µ, τ) (i.i.d.),  (22)
                True model
                M0      M1
Pr(M0)          0.47    0.47
Pr(M1)          0.53    0.53
B01             12.39   0.08
B10             0.08    12.80
Pr(M0|x⁽ⁿ⁾)     0.92    0.06
Pr(M1|x⁽ⁿ⁾)     0.08    0.94

Table 1: Model prior, Bayes factor and model posterior probabilities for the change point analysis in Scenario 1. We considered samples from, respectively, model M0 and model M1.
with 1 ≤ m1 ≤ n − 1 being the location of the single change point, and model M2:

X1, X2, . . . , Xm1 | λ, κ ∼ Weibull(λ, κ) (i.i.d.),
Xm1+1, Xm1+2, . . . , Xm2 | µ, τ ∼ Log-normal(µ, τ) (i.i.d.),
Xm2+1, Xm2+2, . . . , Xn | α, β ∼ Gamma(α, β) (i.i.d.),  (23)
with 1 ≤ m1 < m2 ≤ n − 1 representing the locations of the two change
points, such that m1 corresponds exactly to the same location as in model
M1 . Analogously to the previous scenario, we sample from each model in
turn and perform the selection to detect the number of change points.
Let f1 (·|λ, κ), f2 (·|µ, τ ) and f3 (·|α, β) represent the Weibull, Log-normal and
Gamma densities, respectively, with θ̃1 = (λ, κ), θ̃2 = (µ, τ ) and θ̃3 = (α, β).
We assume a Normal prior on µ and Gamma priors on all the other parameters as follows:
λ ∼ Gamma(1.5, 1)
κ ∼ Gamma(5, 1)
τ ∼ Gamma(16, 1)
α ∼ Gamma(10, 1)
µ ∼ Normal(0.05, 1),
β ∼ Gamma(0.2, 0.1).
In the first exercise, we have simulated n = 100 observations from model
M0 , where we have set λ = 1.5 and κ = 5. We obtain the following model
priors: Pr(M0 ) ∝ 1.09, Pr(M1 ) ∝ 1.60 and Pr(M2 ) ∝ 1.37, yielding the
posteriors Pr(M0 |x(n) ) = 0.96, Pr(M1 |x(n) ) = 0.04 and Pr(M2 |x(n) ) = 0.00.
We then see that the approach assigns high mass to the true model M0 .
Table 2 reports the above probabilities and the corresponding Bayes factors.
The second simulation was performed by sampling 50 observations from a
Weibull with parameter values as in the previous exercise, and the remaining
50 observations from a Log-normal density with location parameter µ = 0.05 and scale parameter τ = 16. The data is displayed in Figure 3.

                True model
                M0           M1            M2
Pr(M0)          0.27         0.27          0.27
Pr(M1)          0.39         0.39          0.39
Pr(M2)          0.34         0.34          0.34
B01             36.55        3.24 × 10⁻⁴   4.65 × 10⁻⁴⁰
B02             1.84 × 10³   0.02          1.27 × 10⁻⁴⁵
B12             50.44        55            2.72 × 10⁻⁶
Pr(M0|x⁽ⁿ⁾)     0.96         0.00          0.00
Pr(M1|x⁽ⁿ⁾)     0.04         0.98          0.00
Pr(M2|x⁽ⁿ⁾)     0.00         0.02          1.00

Table 2: Model prior, Bayes factor and model posterior probabilities for the change point analysis in Scenario 2. We considered samples from, respectively, model M0, model M1 and model M2.
Figure 3: Scatter plot of the observations simulated from model M1 in Scenario 2.
The model posterior probabilities are Pr(M0 |x(n) ) = 0.00, Pr(M1 |x(n) ) = 0.98
and Pr(M2 |x(n) ) = 0.02, which are reported in Table 2. In this case as well,
we see that the model selection procedure indicates M1 as the true model,
as expected.
15
Figure 4: Scatter plot of the observations simulated from model M2 in Scenario 2.
Finally, for the third simulation exercise we sample 50 and 20 data points
from, respectively, a Weibull and a Log-normal with parameter values as
defined above, and the last 30 observations are sampled from a Gamma
distribution with parameters α = 10 and β = 2. From Table 2, we note that
the posterior distribution on the model space accumulates on the true model
M2 .
4.2 Frequentist Analysis
In this section, we perform a frequentist analysis of the performance of the
proposed prior by drawing repeated samples from different scenarios. In
particular, we look at a two change points problem where the sampling distributions are Student-t with different degrees of freedom. In this scenario,
we perform the analysis with 60 repeated samples generated by different
densities with the same mean values.
Then, we repeat the analysis of Scenario 2 by selecting 100 samples for
n = 500 and n = 1500. We consider different sampling distributions with
the same mean and variance. In this scenario, where we added the further
constraint of the equal variance, it is interesting to note that the change in
distribution is captured when we increase the sample size, meaning that we
learn more about the true sampling distributions.
We also compare the performance of the loss-based prior with that of the uniform prior when we analyse the scenario with different sampling distributions, namely Weibull/Log-normal/Gamma. It is interesting to note that the uniform prior is unable to capture the change in distribution even for a large sample size. On the contrary, the loss-based prior is able to detect the number of change points when n = 1500. Furthermore, for n = 500, even though neither prior detects the change points most of the time, the loss-based prior has a higher frequency of success than the uniform prior.
Scenario 3. In this scenario, we consider the case where the sampling
distributions belong to the same family, that is Student-t, where the true
model has two change points. In particular, let f1 (·|ν1 ), f2 (·|ν2 ) and f3 (·|ν3 )
represent the densities of three standard t distributions, respectively. We
assume that ν1, ν2 and ν3 are positive integers strictly greater than one, so that each density has a well-defined mean. Note that this allows us to compare
distributions of the same family with equal mean. The priors assigned to the
number of degrees of freedom assume a parameter space of positive integers
strictly larger than 1. As such, we define them as follows:
ν1 ∼ 2 + Poisson(30)
ν2 ∼ 2 + Poisson(3)
ν3 ∼ 2 + Poisson(8).
In this experiment, we consider 60 repeated samples, each of size n = 300
and with the following structure:
• X1 , . . . , X100 from a Student-t distribution with ν1 = 30,
• X101 , . . . , X200 from a Student-t distribution with ν2 = 3,
• X201 , . . . , X300 from a Student-t distribution with ν3 = 8.
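As an illustration, one such replication can be generated as follows (a sketch, not the authors' code; scipy's standard Student-t is assumed):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = np.concatenate([
    stats.t.rvs(df=30, size=100, random_state=rng),  # X_1,   ..., X_100
    stats.t.rvs(df=3,  size=100, random_state=rng),  # X_101, ..., X_200
    stats.t.rvs(df=8,  size=100, random_state=rng),  # X_201, ..., X_300
])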
Table 3 reports the frequentist results of the simulation study. First, note
that Pr(M0) = Pr(M1) = Pr(M2) = 1/3, as per the Remark in Section 3. For
all the simulated samples, the loss-based prior yields a posterior with the
highest probability assigned to the true model M2 . We also note that the
above posterior is on average 0.75 with a variance of 0.02, making the inferential
procedure extremely accurate.
              Mean posterior   Variance posterior   Freq. true model
Pr(M0|x⁽ⁿ⁾)   0.01             3.84 × 10⁻⁴          0/60
Pr(M1|x⁽ⁿ⁾)   0.24             0.0160               0/60
Pr(M2|x⁽ⁿ⁾)   0.75             0.0190               60/60
Table 3: Average model posterior probabilities, variance and frequency of true model for
the Scenario 3 simulation exercise.
Scenario 4. In this scenario, we perform repeated sampling from the setup
described in Scenario 2 above, where the true model has two change points. In
particular, we draw 100 samples with n = 500 and n = 1500. For n = 500, the
loss-based prior probabilities are P (M0 ) = 0.18, P (M1 ) = 0.16 and P (M2 ) =
0.66. For n = 1500, the loss-based prior probabilities are P (M0 ) = 0.015,
P (M1 ) = 0.014 and P (M2 ) = 0.971. The simulation results are reported,
respectively, in Table 4 and in Table 5. The two change point locations
for n = 500 are at the 171st and 341st observations. For n = 1500, the
first change point is the 501st observation, while the second is at the 1001st
observation. We note that there is a considerable improvement in detecting the true model, using the loss-based prior, when the sample size increases. In particular, the frequency of correct detection moves from 30% to 96%.
              Mean posterior   Variance posterior   Freq. true model
Pr(M0|x⁽ⁿ⁾)   9.88 × 10⁻⁴      2.60 × 10⁻⁵          0/100
Pr(M1|x⁽ⁿ⁾)   0.63             0.0749               70/100
Pr(M2|x⁽ⁿ⁾)   0.37             0.0745               30/100
Table 4: Average model posterior probabilities, variance and frequency of true model for
the Scenario 4 simulation exercise with n = 500 and the loss-based prior.
              Mean posterior   Variance posterior   Freq. true model
Pr(M0|x⁽ⁿ⁾)   1.33 × 10⁻¹³     1.76 × 10⁻²⁴         0/100
Pr(M1|x⁽ⁿ⁾)   0.08             0.0200               4/100
Pr(M2|x⁽ⁿ⁾)   0.92             0.0200               96/100
Table 5: Average model posterior probabilities, variance and frequency of true model for
the Scenario 4 simulation exercise with n = 1500 and the loss-based prior.
To compare the loss-based prior with the uniform prior we have run the
simulation on the same data samples used above. The results for n = 500
and n = 1500 are in Table 6 and in Table 7, respectively. Although we can
observe an improvement when the sample size increases, the uniform prior
does not lead to a clear detection of the true model for either sample size.
              Mean posterior   Variance posterior   Freq. true model
Pr(M0|x⁽ⁿ⁾)   16 × 10⁻⁴        7.15 × 10⁻⁵          0/100
Pr(M1|x⁽ⁿ⁾)   0.82             0.0447               91/100
Pr(M2|x⁽ⁿ⁾)   0.18             0.0443               9/100
Table 6: Average model posterior probabilities, variance and frequency of true model for
the Scenario 4 simulation exercise with n = 500 and the uniform prior.
              Mean posterior   Variance posterior   Freq. true model
Pr(M0|x⁽ⁿ⁾)   8.64 × 10⁻¹²     7.45 × 10⁻²¹         0/100
Pr(M1|x⁽ⁿ⁾)   0.501            0.1356               49/100
Pr(M2|x⁽ⁿ⁾)   0.499            0.1356               51/100
Table 7: Average model posterior probabilities, variance and frequency of true model for
the Scenario 4 simulation exercise with n = 1500 and the uniform prior.
Figure 5: The densities of Weibull(λ, κ), Log-normal(µ, τ) and Gamma(α, β) with the same mean (equal to 5) and the same variance (equal to 2.5).
Finally, we conclude this section with a remark. One may wonder why the change point detection requires an increase in the sample size; the answer can be inferred from Figure 5, which displays the density functions of the distributions employed in this scenario. As can be observed, the densities are quite similar, which is not surprising since these distributions have the same means and the same variances. The similarity can also be appreciated in terms of the Hellinger distance; see Table 8. In other words, from Figure 5 we can see that the main differences between the underlying distributions are in the tail areas. It is therefore necessary to have a relatively large number of observations in order to discern differences in the densities, since only then do we have a sufficient representation of the whole distribution.
Hellinger distances
                    Weibull(λ, κ)   Log-normal(µ, τ)   Gamma(α, β)
Weibull(λ, κ)       —               0.1411996          0.09718282
Log-normal(µ, τ)                    —                  0.04899711

Table 8: Hellinger distances between all the pairs formed from a Weibull(λ, κ), Log-normal(µ, τ) and Gamma(α, β). The six hyperparameters are such that the distributions have the same mean (5) and the same variance (2.5).
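These distances can be checked numerically. The sketch below (not the authors' code) matches each distribution's parameters to mean 5 and variance 2.5 by moment matching, since the paper does not restate the six hyperparameter values, and evaluates H(f, g)² = 1 − ∫ √(f(x)g(x)) dx by quadrature.

import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gamma as G

mean, var = 5.0, 2.5

# Moment matching: Gamma(10, 2); Log-normal with sigma^2 = ln(1 + var/mean^2);
# the Weibull shape is solved from the squared coefficient of variation.
gam = stats.gamma(a=10.0, scale=0.5)
s2 = np.log(1.0 + var / mean**2)
logn = stats.lognorm(s=np.sqrt(s2), scale=mean * np.exp(-s2 / 2.0))
k = brentq(lambda k: G(1 + 2 / k) / G(1 + 1 / k)**2 - 1 - var / mean**2, 1.0, 20.0)
weib = stats.weibull_min(c=k, scale=mean / G(1 + 1 / k))

def hellinger(f, g):
    """Hellinger distance between two densities on (0, inf)."""
    bc, _ = quad(lambda x: np.sqrt(f.pdf(x) * g.pdf(x)), 0.0, np.inf)
    return np.sqrt(max(0.0, 1.0 - bc))

print(hellinger(weib, logn), hellinger(weib, gam), hellinger(logn, gam))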
5 Change Point Analysis on Real Data
In this section, we illustrate the proposed approach applied to real data. We
first consider a well known data set which has been extensively studied in
the literature of the change point analysis, that is the British coal-mining
disaster data (Carlin et al., 1992). The second set of data we consider refers
to the daily returns of the S&P 500 index observed over a period of four years.
The former data set will be investigated in Section 5.1, while the latter in
Section 5.2.
5.1 British Coal-Mining Disaster Data
The British coal-mining disaster data consists of the yearly number of deaths
for the British coal miners over the period 1851-1962. It is believed that the
change in the working conditions, and in particular, the enhancement of the
security measures, led to a decrease in the number of deaths. This calls for
a model which can take into account a change in the underlying distribution
around a certain observed year. With the proposed methodology we wish
to detect if the assumption is appropriate. In particular, if a model with
one change point is more suitable to represent the data than a model where
no changes in the sampling distribution are assumed. Figure 6 shows the
number of deaths per year in the British coal-mining industry from 1851 to
1962. As in Chib (1998), we assume a Poisson sampling distribution with a
possible change in the parameter value. That is
X1, X2, . . . , Xm | φ1 ∼ i.i.d. Poisson(φ1)
Xm+1, Xm+2, . . . , Xn | φ2 ∼ i.i.d. Poisson(φ2),    (24)
Figure 6: Scatter plot of the British coal-mining disaster data.
where m is the unknown location of the single change point, such that 1 ≤
m ≤ n, and a Gamma(2, 1) prior is assumed for both φ1 and φ2. The case m = n
corresponds to the scenario with no change point, that is model M0 . The
case m < n assumes one change point, that is model M1 .
Let f1 (·|φ1 ) and f2 (·|φ2 ) be the Poisson distributions with parameters φ1 and
φ2 , respectively. Then, the analysis is performed by selecting between model
M0 , that is when the sampling distribution is f1 , and model M1 , where the
sampling distribution is f1 up to a certain m < n and f2 from m + 1 to n.
As highlighted in the Remark at the end of Section 3, the prior on the model
space is the discrete uniform distribution, that is Pr(M0 ) = Pr(M1 ) = 0.5.
The proposed model selection approach leads to the Bayes factors B01 = 1.61 × 10⁻¹³ and B10 = 6.20 × 10¹², where it is obvious that the odds are strongly in favour of model M1. Indeed, we have Pr(M1|x⁽ⁿ⁾) ≈ 1.
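A minimal sketch of this computation (not the authors' code), assuming the conjugate Gamma(2, 1) priors above, a uniform prior over the change point location m < n, and equal prior odds for the two models, is given below. With the conjugate prior, each segment's marginal likelihood is available in closed form, so no sampling is needed, and log_B10 applied to the yearly death counts should be of the order of the value reported above.

import numpy as np
from scipy.special import gammaln

def log_marginal_poisson(x, a=2.0, b=1.0):
    """log marginal likelihood of x_i ~ Poisson(phi), phi ~ Gamma(a, b)."""
    x = np.asarray(x)
    n, s = len(x), x.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + n) - gammaln(x + 1.0).sum())

def log_B10(x):
    """log Bayes factor of M1 (one change point) against M0 (none)."""
    n = len(x)
    lm0 = log_marginal_poisson(x)
    parts = [log_marginal_poisson(x[:m]) + log_marginal_poisson(x[m:])
             for m in range(1, n)]
    lm1 = np.logaddexp.reduce(parts) - np.log(n - 1)  # uniform prior on m < n
    return lm1 - lm0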
5.2 Daily S&P 500 Absolute Log-Return Data
The second real data analysis aims to detect change points in the absolute
value of the daily logarithmic returns of the S&P500 index observed from
14/01/2008 to 31/12/2011 (see Figure 7). As underlying sampling
distributions we consider the Weibull and the Log-normal (Yu, 2001), and
the models among which we select are as follows. M0 is a Weibull(λ, κ), M1 is
formed by a Weibull(λ, κ) and a Log-normal(µ1 , τ1 ) and, finally, M2 is formed
by a Weibull(λ, κ), a Log-normal(µ1 , τ1 ) and a Log-normal(µ2 , τ2 ). An interesting particularity of this problem is that we will consider a scenario where
the changes are in the underlying distribution or in the parameter values of
the same distribution. As suggested in Section 4.1.3 of Kass and Raftery
(1995), due to the large sample size of the data set, we could approximate
the Bayes factor by using the Schwarz criterion. Therefore, in this case the
specification of the priors for the parameters of the underlying distributions
is not necessary.

Figure 7: Absolute daily log-returns of the S&P500 index from 14/01/08 to 30/12/11.

From the results in Table 9, we see that the model indicated by the proposed approach is M2. In other words, there is a very strong indication that there are two change points in the data set.
Pr(M0)        0.36
Pr(M1)        0.32
Pr(M2)        0.32
B01           7.72 × 10¹⁸
B02           3.30 × 10⁻³
B12           4.28 × 10⁻²²
Pr(M0|x⁽ⁿ⁾)   0.00
Pr(M1|x⁽ⁿ⁾)   0.00
Pr(M2|x⁽ⁿ⁾)   1.00

Table 9: Model prior, Bayes factor and model posterior probabilities for the S&P500 change point analysis.
From Table 9, we note that the priors on models M1 and M2 assigned by the proposed method are the same. This is not surprising, as the only difference between the two models is an additional Log-normal distribution with different parameter values.
6 Conclusion
Bayesian inference in change point problems under the assumption of minimal
prior information has not been deeply explored in the past, as the limited
literature on the matter shows.
We contribute to the area by deriving an objective prior distribution to detect change point locations when the number of change points is known a priori. As a change point location can be interpreted as a discrete parameter, we apply recent results in the literature (Villa and Walker, 2015a) to make inference. The resulting prior distribution, the discrete uniform distribution, is not new in the literature (Girón et al., 2007), and its recovery here can be considered a validation of the proposed approach.
A second major contribution is in defining an objective prior on the number
of change points, which has been approached by considering the problem
as a model selection exercise. The results of the proposed method on both
simulated and real data show the strength of the approach in estimating the
number of change points in a series of observations. A point to note is the
generality of the scenarios considered. Indeed, we consider situations where
the change is in the value of the parameter(s) of the underlying sampling
distribution, or in the distribution itself. Of particular interest is the last real data analysis (S&P 500 index), where both types of change occur: a change in distribution at the first change point, and a change in the parameters of the distribution at the second.
The aim of this work was to set up a novel approach to address change point
problems. In particular, we have selected prior densities for the parameters
of the models to reflect a scenario of equal knowledge, in the sense that
the model priors are close to a uniform distribution. Two remarks are
necessary here. First, in case prior information about the true value of the parameters is available, and one wishes to exploit it, the prior densities will need to reflect it and, obviously, the model prior will be affected by that choice. Second, in applications it is recommended that some sensitivity analysis is performed, so as to investigate whether and how the choice of the parameter densities affects the selection process.
Acknowledgements
Fabrizio Leisen was supported by the European Community’s Seventh Framework Programme [FP7/2007-2013] under grant agreement no: 630677. Cristiano Villa was supported by the Royal Society Research Grant no: RG150786.
References
Barry, D., Hartigan, J. A., 1992. Product Partition Models for Change Point
Problems. The Annals of Statistics 20 (1), 260–279.
Berger, J. O., Bernardo, J. M., Sun, D., 2012. Objective Priors for Discrete Parameter Spaces. Journal of the American Statistical Association
107 (498), 636–648.
Berger, J. O., Pericchi, L. R., 1996. The Intrinsic Bayes Factor for Model
Selection and Prediction. Journal of the American Statistical Association
91 (433), 109–122.
Berk, R. H., 1966. Limiting Behavior of Posterior Distributions when the
Model is Incorrect. The Annals of Mathematical Statistics 37 (1), 51–58.
Carlin, B. P., Gelfand, A. E., Smith, A. F. M., 1992. Hierarchical Bayesian
Analysis of Changepoint Problems. Journal of the Royal Statistical Society.
Series C (Applied Statistics) 41 (2), 389–405.
Chernoff, H., Zacks, S., 1964. Estimating the Current Mean of a Normal
Distribution which is Subjected to Changes in Time. The Annals of Mathematical Statistics 35 (3), 999–1018.
Chib, S., 1998. Estimation and Comparison of Multiple Change-point Models. Journal of Econometrics 86 (2), 221–241.
Fearnhead, P., Liu, Z., 2007. On-line Inference for Multiple Changepoint
Problems. Journal of the Royal Statistical Society: Series B (Statistical
Methodology) 69 (4), 589–605.
Girón, F. J., Moreno, E., Casella, G., 2007. Objective Bayesian Analysis of
Multiple Changepoints for Linear Models (with discussion). In: Bernardo,
J.M., Bayarri, M.J., Berger, J.O., Dawid, A.P., Heckerman, D., Smith,
A.F.M., West, M. (Eds.), Bayesian Statistics 8. Oxford University Press,
London, pp. 227–252.
Hannart, A., Naveau, P., 2009. Bayesian Multiple Change Points and Segmentation: Application to Homogenization of Climatic Series. Water Resources Research 45 (10), W10444.
Heard, N. A., Turcotte, M. J. M., 2017. Adaptive Sequential Monte Carlo for
Multiple Changepoint Analysis. Journal of Computational and Graphical
Statistics 26 (2), 414–423.
24
Henderson, R., Matthews, J. N. S., 1993. An Investigation of Changepoints in
the Annual Number of Cases of Haemolytic Uraemic Syndrome. Journal of
the Royal Statistical Society. Series C (Applied Statistics) 42 (3), 461–471.
Kass, R. E., Raftery, A. E., 1995. Bayes Factors. Journal of the American
Statistical Association 90 (430), 773–795.
Ko, S. I. M., Chong, T. T. L., Ghosh, P., 2015. Dirichlet Process Hidden
Markov Multiple Change-point Model. Bayesian Analysis 10 (2), 275–296.
Koop, G., Potter, S. M., 2009. Prior Elicitation in Multiple Change-Point
Models. International Economic Review 50 (3), 751–772.
Kullback, S., Leibler, R. A., 1951. On Information and Sufficiency. The Annals of Mathematical Statistics 22 (1), 79–86.
Loschi, R.H., Cruz, F.R.B., 2005. Extension to the Product Partition Model:
Computing the Probability of a Change. Computational Statistics & Data
Analysis 48 (2), 255–268.
Moreno, E., Casella, G., Garcia-Ferrer, A., 2005. An Objective Bayesian
Analysis of the Change Point Problem. Stochastic Environmental Research
and Risk Assessment 19 (3), 191–204.
Muliere, P., Scarsini, M., 1985. Change-point Problems: A Bayesian Nonparametric Approach. Aplikace matematiky 30 (6), 397–402.
Petrone, S., Raftery, A. E., 1997. A Note on the Dirichlet Process Prior in
Bayesian Nonparametric Inference with Partial Exchangeability. Statistics
& Probability Letters 36 (1), 69–83.
Raftery, A. E., Akman, V. E., 1986. Bayesian Analysis of a Poisson Process
with a Change-Point. Biometrika 73 (1), 85–89.
Schwaller, L., Robin, S., 2017. Exact Bayesian Inference for Off-line Changepoint Detection in Tree-structured Graphical Models. Statistics and Computing 27 (5), 1331–1345.
Smith, A. F. M., 1975. A Bayesian Approach to Inference about a ChangePoint in a Sequence of Random Variables. Biometrika 62 (2), 407–416.
Stephens, D. A., 1994. Bayesian Retrospective Multiple-Changepoint Identification. Journal of the Royal Statistical Society. Series C (Applied Statistics) 43 (1), 159–178.
25
Tian, G.-L., Ng, K. W., Li, K.-C., Tan, M., 2009. Non-iterative Sampling-based Bayesian Methods for Identifying Changepoints in the Sequence of
Cases of Haemolytic Uraemic Syndrome. Computational Statistics and
Data Analysis 53 (9), 3314–3323.
Villa, C., Walker, S., 2015a. An Objective Approach to Prior Mass Functions for Discrete Parameter Spaces. Journal of the American Statistical
Association 110 (511), 1072–1082.
Villa, C., Walker, S., 2015b. An Objective Bayesian Criterion to Determine
Model Prior Probabilities. Scandinavian Journal of Statistics 42 (4), 947–
966.
Yu, J., 2001. Chapter 6 - Testing for a Finite Variance in Stock Return
Distributions. In: Knight, J., Satchell, S. E. (Eds.), Return Distributions
in Finance (Quantitative Finance). Butterworth-Heinemann, Oxford, pp.
143–164.
Appendix
A Model prior probabilities to select among models M0, M1 and M2
Here, we show how model prior probabilities can be derived for the relatively
simple case of selecting among scenarios with no change points (M0 ), one
change point (M1 ) or two change points (M2 ). First, by applying the result
in Theorem 2, we derive the Kullback–Leibler divergences between any two
models. That is:
• the prior probability for model M0 depends on the following quantities:

  DKL(M0‖M1) = (n − m1) · DKL(f1(·|θ̃1)‖f2(·|θ̃2))
  DKL(M0‖M2) = (m2 − m1) · DKL(f1(·|θ̃1)‖f2(·|θ̃2)) + (n − m2) · DKL(f1(·|θ̃1)‖f3(·|θ̃3))

• the prior probability for model M1 depends on the following quantities:

  DKL(M1‖M2) = (n − m2) · DKL(f2(·|θ̃2)‖f3(·|θ̃3))
  DKL(M1‖M0) = (n − m1) · DKL(f2(·|θ̃2)‖f1(·|θ̃1))

• the prior probability for model M2 depends on the following quantities:

  DKL(M2‖M1) = (n − m2) · DKL(f3(·|θ̃3)‖f2(·|θ̃2))
  DKL(M2‖M0) = (m2 − m1) · DKL(f2(·|θ̃2)‖f1(·|θ̃1)) + (n − m2) · DKL(f3(·|θ̃3)‖f1(·|θ̃1))
The next step is to derive the minimum Kullback–Leibler divergence computed at each model:

• for model M0:

  inf DKL(M0‖M1) = inf_{m1≠n} (n − m1) · inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) = inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)),

  since inf_{m1≠n} (n − m1) = 1, and

  inf DKL(M0‖M2) = inf_{m1≠m2} (m2 − m1) · inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) + inf_{m2≠n} (n − m2) · inf_{θ̃3} DKL(f1(·|θ̃1)‖f3(·|θ̃3))
                 = inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) + inf_{θ̃3} DKL(f1(·|θ̃1)‖f3(·|θ̃3)),

  since both infima over the change point locations equal one;

• for model M1:

  inf DKL(M1‖M2) = inf_{m2≠n} (n − m2) · inf_{θ̃3} DKL(f2(·|θ̃2)‖f3(·|θ̃3)) = inf_{θ̃3} DKL(f2(·|θ̃2)‖f3(·|θ̃3))
  inf_{θ0=θ̃1} DKL(M1‖M0) = (n − m1) · inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1))

• for model M2:

  inf DKL(M2‖M1) = (n − m2) · inf_{θ̃2} DKL(f3(·|θ̃3)‖f2(·|θ̃2))
  inf_{θ0=θ̃1} DKL(M2‖M0) = (m2 − m1) · inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1)) + (n − m2) · inf_{θ̃1} DKL(f3(·|θ̃3)‖f1(·|θ̃1))
Therefore, the model prior probabilities can be computed through equation (15), so that:

• the model prior probability Pr(M0) is proportional to the exponential of the minimum between

  Eπ0 [ inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) ]   and
  Eπ0 [ inf_{θ̃2} DKL(f1(·|θ̃1)‖f2(·|θ̃2)) + inf_{θ̃3} DKL(f1(·|θ̃1)‖f3(·|θ̃3)) ];

• the model prior probability Pr(M1) is proportional to the exponential of the minimum between

  Eπ1 [ inf_{θ̃3} DKL(f2(·|θ̃2)‖f3(·|θ̃3)) ]   and
  Eπ1 [ (n − m1) · inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1)) ];

• the model prior probability Pr(M2) is proportional to the exponential of the minimum between

  Eπ2 [ (n − m2) · inf_{θ̃2} DKL(f3(·|θ̃3)‖f2(·|θ̃2)) ]   and
  Eπ2 [ (m2 − m1) · inf_{θ̃1} DKL(f2(·|θ̃2)‖f1(·|θ̃1)) + (n − m2) · inf_{θ̃1} DKL(f3(·|θ̃3)‖f1(·|θ̃1)) ].
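As an illustration of how these quantities can be evaluated in practice, the sketch below (our example, not the authors' code) approximates inf over θ̃2 of DKL(f1(·|θ̃1)‖f2(·|θ̃2)) for a fixed Weibull f1 (with the Scenario 2 values λ = 1.5, κ = 5) and a Log-normal f2, computing the divergence by quadrature and minimizing over (µ, σ):

import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import minimize

f1 = stats.weibull_min(c=5.0, scale=1.5)  # f1 = Weibull(lambda = 1.5, kappa = 5)

def kl_f1_to_lognormal(params):
    mu, log_sigma = params
    f2 = stats.lognorm(s=np.exp(log_sigma), scale=np.exp(mu))
    val, _ = quad(lambda x: f1.pdf(x) * (f1.logpdf(x) - f2.logpdf(x)),
                  1e-9, np.inf)
    return val

res = minimize(kl_f1_to_lognormal, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.fun)  # approximate inf over theta2 of KL(f1 || f2)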
B Proofs
Proof of Theorem 1
We distinguish two cases: S = +1 and S = −1. When S = +1, which is equivalent to mj < m′j:

DKL(f(x⁽ⁿ⁾|m, θ̃)‖f(x⁽ⁿ⁾|m′, θ̃))
  = ∫ f(x⁽ⁿ⁾|m, θ̃) · ln( f(x⁽ⁿ⁾|m, θ̃) / f(x⁽ⁿ⁾|m′, θ̃) ) dx⁽ⁿ⁾
  = ∫ f(x⁽ⁿ⁾|m, θ̃) · Σ_{i=mj+1}^{m′j} ln( fj+1(xi|θ̃j+1) / fj(xi|θ̃j) ) dx⁽ⁿ⁾
  = Σ_{i=mj+1}^{m′j} ∫ f(x⁽ⁿ⁾|m, θ̃) · ln( fj+1(xi|θ̃j+1) / fj(xi|θ̃j) ) dx⁽ⁿ⁾
  = Σ_{i=mj+1}^{m′j} { 1^{n−1} · ∫ fj+1(xi|θ̃j+1) · ln( fj+1(xi|θ̃j+1) / fj(xi|θ̃j) ) dxi }
  = Σ_{i=mj+1}^{m′j} DKL(fj+1(xi|θ̃j+1)‖fj(xi|θ̃j))
  = (m′j − mj) · DKL(fj+1(·|θ̃j+1)‖fj(·|θ̃j))
  = (m′j − mj) · d_j^{+1}(θ̃),    (25)

where the factor 1^{n−1} arises because the n − 1 coordinates not involved in the logarithm integrate to one. When S = −1, which is equivalent to mj > m′j, in a similar fashion we get

DKL(f(x⁽ⁿ⁾|m, θ̃)‖f(x⁽ⁿ⁾|m′, θ̃)) = (mj − m′j) · d_j^{−1}(θ̃).    (26)

From equations (25) and (26), we get the result in Theorem 1.
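Equation (25) is easy to check by Monte Carlo. In the sketch below (an illustration, not part of the paper), fj = N(0, 1) and fj+1 = N(1, 1), whose Kullback–Leibler divergence is 1/2; with four coordinates differing between m and m′, the estimate should be close to 2.

import numpy as np

rng = np.random.default_rng(0)
mj, mjp = 3, 7                 # m_j < m'_j: four coordinates differ
draws = 200_000

# Only coordinates m_j+1, ..., m'_j contribute: the shared coordinates cancel
# in the log ratio. Sample them from f_{j+1} = N(1, 1) and average the ratio.
x = rng.normal(1.0, 1.0, size=(draws, mjp - mj))
log_ratio = (-0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2).sum(axis=1)
print(log_ratio.mean())        # approx (m'_j - m_j) * KL = 4 * 0.5 = 2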
Proof of Theorem 2
We recall that the model parameter θi is the vector (m1 , m2 , . . . , mi , θ̃1 , θ̃2 , . . . , θ̃i+1 ),
where i = 0, 1, . . . , k. Here, θ̃1 , θ̃2 , . . . , θ̃i+1 represent the parameters of the
underlying sampling distributions considered under model Mi and m1 , m2 , . . . , mi
are the respective i change point locations. In this setting,
f(x⁽ⁿ⁾|θi) = ∏_{r=1}^{m1} f1(xr|θ̃1) · ∏_{t=1}^{i−1} ∏_{r=m_t+1}^{m_{t+1}} f_{t+1}(xr|θ̃_{t+1}) · ∏_{r=mi+1}^{n} f_{i+1}(xr|θ̃_{i+1}).    (27)
We proceed to the computation of DKL(Mi‖Mj), that is, the Kullback–Leibler divergence introduced in Section 3. Similarly to the proof of Theorem 1, we obtain the following result:

DKL(Mi‖Mj) = Σ_{r=m_{i+1}+1}^{m_{i+2}} ∫ f(x⁽ⁿ⁾|θi) ln( f_{i+1}(xr|θ̃_{i+1}) / f_{i+2}(xr|θ̃_{i+2}) ) dx⁽ⁿ⁾
           + Σ_{r=m_{i+2}+1}^{m_{i+3}} ∫ f(x⁽ⁿ⁾|θi) ln( f_{i+1}(xr|θ̃_{i+1}) / f_{i+3}(xr|θ̃_{i+3}) ) dx⁽ⁿ⁾
           + · · · + Σ_{r=m_j+1}^{n} ∫ f(x⁽ⁿ⁾|θi) ln( f_{i+1}(xr|θ̃_{i+1}) / f_{j+1}(xr|θ̃_{j+1}) ) dx⁽ⁿ⁾.
Given equation (27), if we integrate out the variables not involved in the
logarithms, we obtain
DKL(Mi‖Mj) = (m_{i+2} − m_{i+1}) · DKL(f_{i+1}(·|θ̃_{i+1})‖f_{i+2}(·|θ̃_{i+2}))
           + (m_{i+3} − m_{i+2}) · DKL(f_{i+1}(·|θ̃_{i+1})‖f_{i+3}(·|θ̃_{i+3}))
           + · · · + (n − m_j) · DKL(f_{i+1}(·|θ̃_{i+1})‖f_{j+1}(·|θ̃_{j+1})).

In a similar fashion, it can be shown that

DKL(Mj‖Mi) = (m_{i+2} − m_{i+1}) · DKL(f_{i+2}(·|θ̃_{i+2})‖f_{i+1}(·|θ̃_{i+1}))
           + (m_{i+3} − m_{i+2}) · DKL(f_{i+3}(·|θ̃_{i+3})‖f_{i+1}(·|θ̃_{i+1}))
           + · · · + (n − m_j) · DKL(f_{j+1}(·|θ̃_{j+1})‖f_{i+1}(·|θ̃_{i+1}))
arXiv:1705.09704v1 [] 26 May 2017
Lock-step simulation is child’s play
Experience Report – Extended Version
JOACHIM BREITNER, University of Pennsylvania
CHRIS SMITH, Google
Implementing multi-player networked games by broadcasting the player’s input and letting each client
calculate the game state – a scheme known as lock-step simulation – is an established technique. However,
ensuring that every client in this scheme obtains a consistent state is infamously hard and in general requires
great discipline from the game programmer. The thesis of this report is that in the realm of functional
programming – in particular with Haskell’s purity and static pointers – this hard problem becomes almost
trivially easy.
We support this thesis by implementing lock-step simulation under very adverse conditions. We extended
the educational programming environment CodeWorld, which is used to teach math and programming to
middle school students, with the ability to create and run interactive, networked multi-user games. Despite
providing a very abstract and high-level interface, and without requiring any discipline from the programmer,
we can provide consistent lock-step simulation with client prediction.
ACM Reference format:
Joachim Breitner and Chris Smith. 2017. Lock-step simulation is child’s play. Proc. ACM Program. Lang. 1, 1,
Article 1 (January 2017), 18 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
Networked multi-user games must tackle the challenge of ensuring that all participating players on
a network with potentially significant latency still see the same game state. In some circumstances,
an appealing choice is lock-step simulation. In this scheme, which dates back to the age of Doom,
the state of the game itself is never transmitted over the network. Instead, the clients exchange
information about their player’s interactions – as abstract game moves or just the actual user input
events – and each client independently calculates the state of the game.
Of course, this only works as intended if all clients end up with the same state. The technique is
fraught with danger if the programmer is not very careful and disciplined about managing that
state. Terrano and Bettner (2001), who implemented the network code for the real time strategy
games Age of Empires 1 & 2, report:
As much as we check-summed the world, the objects, the pathfinding, targeting and
every other system – it seemed that there was always one more thing that slipped
just under the radar. [. . . ] Part of the difficulty was conceptual – programmers were
not used to having to write code that used the same number of calls to random
within the simulation.
More drastic words were voiced by Smith (2011), also a video game software engineer:
One of the most vile bugs in the universe is the desync bug. They’re mean sons of
bitches. The grand assumption to the entire engine architecture is all players being
fully synchronous. What happens if they aren’t? What if the simulations diverge?
Chaos. Anger. Suffering.
© 2017 ACM. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The
definitive Version of Record was published in Proc. ACM Program. Lang., http://dx.doi.org/10.1145/nnnnnnn.nnnnnnn.
The pitfalls facing a programmer implementing lock-step simulation include reading the system clock, querying the random number generator, other I/O, uninitialized
memory, and local or hidden statefulness. In short: side
effects! What if we chose a programming language without such side-effects? Would these problems disappear?
Intuitively, we expect that pure functional programming makes lock-step simulation easy.
This experience report corroborates our expectation.
We have implemented lock-step simulation in Haskell
under very adverse conditions. The authors of the quotes
above are professional programmers working on notable
games. They can be expected to maintain a certain level
of programming discipline, and to tolerate additional
complexity. Our implementation is part of CodeWorld¹, an educational, web-based programming environment used to teach mathematics and coding to students as early as middle school. These children, who are just learning to code, can write and run arbitrary game logic, using a simple API, without adhering to any additional requirements or coding discipline. Nevertheless, we still guarantee consistent lock-step simulation and avoid the dreaded desync bug.

Fig. 1. The Snake game
The main contributions of this experience report are:
• With a bold disregard for pesky implementation detail, we design a natural extension to
CodeWorld’s existing interfaces that can describe multi-user interactive programs in as
straightforward, simple and functional a manner as possible (Section 2.1).
• We identify a complication – unwanted capture of free variables – which can thwart
consistency of such a program. We solve it either using the module system (Section 2.2)
or the Haskell language extension static pointers (Section 2.3).
• We explain how to implement this interface. Despite its abstractness, we present an
eventually consistent implementation that works for arbitrary client code, and includes
client prediction to react immediately to local input while still reconciling delayed input
from other users (Section 3).
• We share lessons learned in stress-testing the system (Section 4.1). Testing was successful,
but we identified an inconsistency in floating point transcendental functions. Replacing these with deterministic approximations recovers the consistency that we rely upon
(Section 4.2).
• We show that, even with no knowledge of the structure of the program’s state, our approach
still allows us to smooth out artifacts that arise due to network latency (Section 4.3).
• Overall, we show that pure functional programming makes lock-step simulation easy.
1 https://code.world/haskell
1 CODEWORLD
In this section, we give a brief overview of how students interact with the CodeWorld environment,
the programming interfaces that are provided by CodeWorld and how student programs are
executed. Many of the figures illustrating this paper are created by students. These and more can
be found in the CodeWorld gallery at https://code.world/gallery.html.
To ease deployment, students need only a web browser to use CodeWorld. They write their
code with an integrated editor inside the browser. Programs are written in Haskell, which the
CodeWorld server compiles to JavaScript using GHCJS (Stegeman and Mackenzie 2017) and sends
that back to browser to execute in a canvas beside the editor. These programs are always graphical:
students create static pictures, then animations, and finally interactive games and other activities.
1.1 Two flavors of Haskell
The standard Haskell language is not an ideal vessel for the children in CodeWorld’s target audience.
Therefore, CodeWorld by default provides a specially tailored educational environment. In this
mode, a custom prelude is used to help students avoid common obstacles. Graphics primitives are
available without an import, to create appealing visual programs. Functions of multiple arguments
are not curried but rather take their arguments in a tuple, both to improve error messages and
match mathematical notation that students are already learning. Finally, a single Number type
(isomorphic to Double) is provided to avoid the need for type classes, and the RebindableSyntax
language extension makes literals monomorphic. Compiler error messages are post-processed to
make them more intelligible to the students. Nevertheless, the code students write is still Haskell,
and is accepted by GHC.
However, at https://code.world/haskell instead of https://code.world/, one finds a standard Haskell
environment, with full access to the standard library. In this paper we focus on the latter variant.
1.2 API design principles
An important principle of CodeWorld is to provide students with the simplest possible abstraction for a given
task. This allows them to concentrate on the ideas they
want to express and think clearly about the meaning of
their code, and hides as many low-level details as possible.
The first and simplest task that students face is to produce a static drawing. This is done with the abstract data
type Picture, with a simple compositional API (Figure 3)
which was heavily inspired by the Gloss library (Lippmeier 2017). Complex pictures are built by combining
and transforming simple geometric objects. The entry
point used for this has the very simple type
drawingOf :: Picture → IO ()
Fig. 2. Smiley

This function takes care of the details of displaying the student's picture on the screen, redrawing upon window size changes and so on. So all it takes for a student to get the computer to smile like in Figure 2 is to write
import CodeWorld
smiley = translated (−4) 4 (solidCircle 2) & translated 4 4 (solidCircle 2) &
thickArc 2 (−pi) 0 6 & colored yellow (solidCircle 10)
main = drawingOf smiley
data Picture –– abstract
–– Various geometric shapes (circle, rectangle, arc, polygon etc.) are
–– available, and can either be filled or equipped with a thickness. E.g.:
–– Parameter: radius
solidCircle :: Double → Picture
–– Parameters: thickness, radius, start angle and end angle
thickArc :: Double → Double → Double → Double → Picture
–– Pictures can be transformed and overlaid
colored :: Color →
(Picture → Picture)
translated :: Double → Double → (Picture → Picture)
rotated :: Double →
(Picture → Picture)
scaled
:: Double → Double → (Picture → Picture)
(&)
:: Picture → Picture → Picture
Fig. 3. An excerpt of CodeWorld’s Picture API
As a next step, the students can create animations and
simulations to make their pictures move, before eventually making their programs react to user input in interactions. The game in Figure 4 is a typical interaction, where
the player saves flying Grandma from various obstacles
by attaching balloons or parachutes to her wheelchair.
These are created by calling the following interface:
interactionOf :: world
→ (Double → world → world)
→ (Event → world → world)
→ (world → Picture)
→ IO ()
Fig. 4. Yo Grandma, by Sophia (6th grade)

In a typical call

main = interactionOf start step handle draw
the student passes four arguments, namely:
(1) an initial state, start,
(2) a time step function, step, which calculates an updated state as time passes,
(3) an event handler function, handle, which calculates an updated state when the user interacts
with the program and
(4) a visualization function, draw, to depict the current state as a Picture.
The Event type, shown in Figure 5, is a simple algebraic data type that describes the press or
release of a key or mouse button, or a movement of the mouse pointer.
The type of the state, world, is chosen by the user and consists of the domain-specific data needed
by the program. The world type is completely unconstrained, and this will be an important factor
influencing our design. It need not even be serializable, nor comparable for equality. In particular,
the state may contain first-class functions and infinite lazy data structures. One way that students
commonly make use of this capability is by defining infinite lazy lists of anticipated future events,
based on a random number source fetched before the simulation begins.
type Point = (Double, Double)
data Event = KeyPress Text
| KeyRelease Text
| MousePress MouseButton Point
| MouseRelease MouseButton Point
| MouseMovement Point
data MouseButton = LeftButton | MiddleButton | RightButton
Fig. 5. The Event type
2 AN INTERFACE FOR MULTI-PLAYER GAMES
We would like students to extend their programming to networked multi-user programs, so that
they can invite their friends to join over the internet and collaborate together on a drawing, fight
each other in a fierce duel of Snake, or interact in any other way the student designs and implements.
In this section, we turn our attention to choosing an API for such a task.
2.1 Wishful thinking
Let us apply “API design by wishful thinking”, and ask: What is the most convenient abstract model
of a multi-player game we can hope for, independent of implementation concerns or constraints?
As experienced programmers, our thoughts might drift
to network protocols or message passing between independent program instances, each with its own local state.
Our students, though, care about none of this, and ideally
we would not burden them with it. In fact, motivated
students have already implemented games to be played
with classmates, using different keys on the same device.
An example is shown in Figure 6, where the red player
uses the keys W A S D and the blue player the
keys ↑ ← ↓ → , in a race to consume more dots.
Their games, which they have already designed, are described in terms of one shared global state. Why should
the programming model change drastically simply because of one detail – that the code will now run on multiple nodes communicating over a network?
Fig. 6. Dot Grab, by Adrian (7th grade)
We conclude, then, that an interactive multi-user program is a generalization of an interactive single-user program, and the centerpiece of the API is still
a single, global state, which is mutually acted upon by all players. Basing the API on interactionOf,
we make only minimal changes to adapt to the new environment:
• A new first parameter specifies the number of players.
• The parameters start and step remain as they are.
• The handle parameter, though, ought to know which user pressed a certain button or moved
their mouse, so it receives the player number (a simple Int) as an additional parameter.
• Different players may also see different views of the state, so the draw function also receives
the player number for which it should render the screen – but it is free to ignore that
parameter, of course.
All together, we arrive at the following “ideal” interface that we call collaborations, which allows
students to build networked multi-player games and other activities:
collaborationOf :: Int
→ world
→ (Double → world → world)
→ (Int → Event → world → world)
→ (Int → world → Picture)
→ IO ()
A small example will clarify how this interface is used.
The following code traces the mouse movements of two
players using colored, fading circles, and Figure 7 shows
this program in action. The green player is a bot that
simply mirrors the red player’s movements.
Fig. 7. Two players' mouse movements

import CodeWorld

type World = [(Color, Double, Double, Double)]

step :: Double → World → World
step dt dots = [(c, exp (−dt) ∗ r, x, y) | (c, r, x, y) ← dots, r > 0.1]
handle :: Int → Event → World → World
handle 0 (MouseMovement (x, y)) dots = (red, 1, x, y) : dots
handle 1 (MouseMovement (x, y)) dots = (green, 1, x, y) : dots
handle _ _ dots = dots
draw :: Int → World → Picture
draw _ dots = mconcat [translated x y (colored c (solidCircle r)) | (c, r, x, y) ← dots]
main :: IO ()
main = collaborationOf 2 [ ] step handle draw
A collaboration begins with a lobby, featuring buttons to create or join a game. Upon creating a
new game, the player is given a four-letter code to be shared with friends. Those friends may enter
the four-letter code to join the game. Once enough players have joined, the game begins.
2.2 Solving random problems with the module system
Like interactionOf before it, the parameters of collaborationOf provide enough information to
completely determine the behavior of the program from the sequence of time steps and UI events
that occur. Unlike interactionOf, however, a collaboration involves more than one use of the
collaborationOf API, as the function is executed by each participating player. To ensure that there
is a single, well-defined behavior, it is essential that all players run collaborationOf with the same
arguments. Obviously, we need to ensure that all clients run the same program, and the CodeWorld
server does so. But even with the same code, the arguments to collaborationOf can differ from
client to client:
main = do
r ← randomRIO (0, 1)
collaborationOf numPlayers start step (handle r) draw
The event handling function now depends on I/O – specifically, the choice of a random number –
and it is very unlikely that all clients happen to pick the same random number. Despite sharing the
same code, the clients will disagree about the correct behavior of the system.
The problem is not the use of random numbers per se, but rather the unconstrained flow of
client-specific state resulting from any I/O into the collaborationOf API via free variables in its
parameters. Since most of the parameters to collaborationOf have function types, we cannot just
compare them to establish consistency at runtime.
We solve this problem in two ways: one in the educational environment, and the other in the
standard Haskell environment.
In the former, we have tight control over the set of library functions available to the student.
No packages are exposed except for a custom standard library with a heavily customized Prelude
module, and this library simply does not provide any functions to compose IO operations, such
as the monadic bind operators (>>=, >>). This also rules out the use of Haskell’s do-notation,
which under the regime of RebindableSyntax requires an operator called (>>=) to be in scope. A
valid Haskell program requires a top-level function main :: IO (), and since the only available
way to obtain an IO () is through our API entry points (drawingOf, interactionOf, and so on), we
know that all CodeWorld collaborations are of essentially the form main = collaborationOf . . .
In particular, no I/O can be executed prior to the collaboration, and hence no client-dependent
behavior is possible.
2.3 Solving random problems syntactically
This solution is not suitable for the standard Haskell environment, where we do not want to restrict
the user’s access to the standard library. We can still prevent the user from using the results of
client-specific I/O in arguments to collaborationOf. To accomplish this, we creatively use the work
of Epstein et al. (2011), who sought to bring Erlang-like distributed computing to Haskell. They
had to exchange functions over the network, which is possible by passing code references, as long
as no potentially unserializable values are captured from the environment. To guarantee that, they
introduced a Haskell language extension, static pointers, which introduces:
• a new type constructor StaticPtr a, which wraps values of type a,
• a new syntactic construct static foo, such that for any expression foo of type a, the
expression static foo has type StaticPtr a, but is only valid if foo does not contain any
locally bound free variables,
• a pure function deRefStaticPtr :: StaticPtr a → a, to unwrap the static pointer, and
• a pure function staticKey :: StaticPtr a → StaticKey which produces a key that – within
one program – uniquely identifies a static pointer.
The requirement that StaticPtr values cannot have locally bound free variables turns out to be
exactly what we need to prevent programs from smuggling client-specific state obtained with I/O
actions into collaborations. We therefore further refine the API to require its arguments to be static
pointers:
collaborationOf :: Int
→ StaticPtr world
→ StaticPtr (Double → world → world)
→ StaticPtr (Int → Event → world → world)
→ StaticPtr (Int → world → Picture)
→ IO ()
The mouse tracing program in Figure 7 must now change its definition of main to
main = collaborationOf 2 (static [ ]) (static step) (static handle) (static draw).
On the other hand, writing static (handle r) to smuggle in a randomly drawn number r, as in the
example above, will fail at compile time. Requiring the static keyword here admittedly muddies
the clarity of the API a bit. We believe that the target audience of CodeWorld’s standard Haskell
mode can handle this. Beginners working within the educational mode need not deal with this
slight complication.
A somewhat more clever attempt, though, still causes problems:
main = do
coinFlip ← randomIO
let step = if coinFlip then static step1 else static step2
collaborationOf 2 (static [ ]) step (static handle) (static draw)
This program is accepted by the compiler because the arguments to collaborationOf are indeed
StaticPtr values of the right types, yet it raises the same questions when clients disagree on the
choice of step function. While we cannot prevent this case at compile time, we can at least detect it
at runtime. Static pointers can be serialized using the function staticKey :: StaticPtr a → StaticKey.
Before a game starts, the participating clients compare the keys of their arguments to check that
they match. This is a subtly different use of static pointers from the original intent of sending
functions over a network in a message-passing protocol. We need not actually receive the original
values on the remote end of our connections, but instead use the serialized keys only to check for
consistency.
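A sketch of such a check follows. The helper names and the surrounding protocol are our invention for illustration; only staticKey and the StaticKey type come from GHC.StaticPtr, and StaticKey (a Fingerprint) supports equality, which is all the check needs.

{-# LANGUAGE StaticPointers #-}
import GHC.StaticPtr (StaticPtr, StaticKey, staticKey)

-- Collect the keys of the four static arguments of a collaboration.
argumentKeys :: StaticPtr a → StaticPtr b → StaticPtr c → StaticPtr d → [StaticKey]
argumentKeys w s h d = [staticKey w, staticKey s, staticKey h, staticKey d]

-- Compare our keys against those announced by a peer before the game starts.
checkConsistent :: [StaticKey] → [StaticKey] → Either String ()
checkConsistent mine theirs
  | mine == theirs = Right ()
  | otherwise = Left "clients disagree on the game functions"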
With this check in place – short of using unsafe features such as unsafePerformIO – we are
confident that every client is indeed running the same functions. However, this forces our games
to be entirely deterministic. This is a problem, since many games involve an element of chance! To
restore the possibility of random behavior, we supply a random number source to use in building
the initial state, with a consistent seed in all clients. The type of the start parameter is now
StaticPtr (StdGen → world). (This is not entirely new: CodeWorld’s educational environment has
never exported a random number generator, and its simulations and interactions have always been
initialized with an infinite list of random numbers.)
This completes our derivation of collaborationOf, which in its final form is
collaborationOf :: Int
→ StaticPtr (StdGen → world)
→ StaticPtr (Double → world → world)
→ StaticPtr (Int → Event → world → world)
→ StaticPtr (Int → world → Picture)
→ IO ()
3 FROM WISHFUL THINKING TO RUNNING CODE
How can we implement this interface? It turns out that our implementation options are severely
narrowed down by the following requirements:
(1) We need to handle any code using the API. Given the educational setting of CodeWorld,
we cannot require any particular discipline.
(2) The players need to see an eventually consistent state. They may have different ideas about
the state of the world, but only until everybody receives information about everybody’s
interactions.
(3) The effects of a player’s own interactions are immediately visible to that player. Even a
“local” interaction, such as selecting a piece in a game of Chess, will have to represented in
the game state, and any latency here would make the user interface sluggish.
The first requirement in particular implies that the game state is completely opaque to us. This
already rules out the usual client-server architecture, where only the central server manages the
game state and the clients send abstract moves (e.g., “white moves the knight to e8”) and render
the game state that they receive from the server. We have neither insight into what constitutes an
abstract move, nor how to serialize and transmit the game state.
We could avoid this problem by sending the raw UI Event instead of an abstract move to the
server, and letting the server respond to each client with the Picture to show. This “dumb terminal”
approach however would run afoul of our third requirement, as every user interaction would be
delayed by the time it takes messages to travel to the server and back.
The requirement of immediate responsiveness implies that every client needs to manage its own
copy of the game state, and being abstract in the game state implies that there is nothing else
but the UI events that the clients can transmit to synchronize the state. In other words, lock-step
simulation is the only way for us.
This approach assumes the integrity of client code. Since all clients track the entire game state,
malicious players could trick CodeWorld into running a modified version of the program which,
among other things, could then reveal hidden parts of the game state. Given the educational goals
of CodeWorld, we are willing to trade this security for a cleaner API.
3.1 Types and messages
We seek, then, to implement the API by exchanging UI events between clients. For the purposes
of this paper, it does not matter how events are transmitted from client to client. The CodeWorld
implementation uses a very simple relay server that broadcasts messages from one client to the
others via WebSockets (a full-duplex server-client protocol for web applications), but peer-to-peer
communication using WebRTC (a peer-to-peer protocol for web applications) or other methods
would work equally well, as long as they deliver events reliably and in order.
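For concreteness, here is a minimal sketch of such a relay, written against the websockets package; it illustrates the idea, and is not CodeWorld's actual server (which also handles the lobby, player numbers and game codes).

{-# LANGUAGE OverloadedStrings #-}
import Control.Concurrent.MVar
import Control.Monad (forM_, forever)
import qualified Data.Text as T
import qualified Network.WebSockets as WS

main :: IO ()
main = do
  clients ← newMVar ([ ] :: [(Int, WS.Connection)])
  nextId ← newMVar (0 :: Int)
  WS.runServer "127.0.0.1" 9160 $ λpending → do
    conn ← WS.acceptRequest pending
    me ← modifyMVar nextId (λi → return (i + 1, i))   -- fresh client number
    modifyMVar_ clients (return . ((me, conn) :))
    forever $ do
      msg ← WS.receiveData conn :: IO T.Text
      peers ← readMVar clients
      -- broadcast the raw message, unmodified, to every other client
      forM_ [c | (i, c) ← peers, i /= me] $ λc → WS.sendTextData c msg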
Every such message obviously needs to contain the actual Event and the player number. In
addition, it must contain a timestamp, so that each client applies the event at the same time despite
differences in network latency. Otherwise – assuming a time-sensitive game with a non-trivial step
function – the various clients would obtain different views of the world. Timestamps are Double
values, measured in seconds since the start of the game.
type Timestamp = Double
type Player
= Int
type Message = (Timestamp, Player, Event)
3.2 Resettable state
Having fixed the message type still leaves open the question of what to do with these messages,
which is non-trivial due to the network latency.
Assume that 23.5 seconds into a real-time strategy game, I send my knights to attack the other
player. My client sends the corresponding message (23.500, 0, MousePress LeftButton (20, 30)) to
the other player. The message arrives, say, 100ms later. As mentioned before, the other player
cannot simply let my knights set out a bit later. What else?
The classical solution (Terrano and Bettner 2001) is to not act on local events immediately, but
add a delay of, say, 200ms. The message would be (23.700, 0, MousePress LeftButton (20, 30)), and
assuming it reaches all other players in time, all are able to apply the event at precisely the same
moment. This solution works well if the UI can somehow respond to the user’s actions immediately,
e.g. by letting the knight audibly confirm the command, thus hiding this delay from the user.
The luxury of such a separation is not available to us – according to the third requirement, each
client must immediately apply its own events – and the message really has to have the timestamp
23.500. This leaves the other player, when it receives the message 100ms later, with no choice but
to roll back the game state to time 23.500, apply my event, and replay the following 100ms. While
rollback and replay are hard to implement in imperative programming paradigms, where every
piece of data can have local mutable state, they are easy in Haskell, where we know that the value
of type world really holds all relevant bits of the program’s state.
One way of allowing such recalculation is to simply not store the state at all, and re-calculate it
every time we draw the game screen. The function to do so would expect the game specification,
the current time and the list of messages that we have seen so far, including the locally generated
ones, and would calculate the game state. Its type signature would thus be
currentState :: Game world ⇒ Timestamp → [Message] → world
where the hypothetical type class Game captures the user-defined game logic; we introduce it here
to avoid obscuring the following code listings by passing it explicitly around as an argument:
class Game world where
start :: world
step :: Double → world → world
handle :: Player → Event → world → world
Assume, for a short while, that there was no step function, i.e. the game state changes only when
there is an actual event. Then the timestamps are only required to put the events into the right
order and to disregard events which are not yet to be applied (which can happen if the player’s
game time started at slightly different points in time):
currentState :: Game world ⇒ Timestamp → [Message] → world
currentState now messages = applyEvents to_apply start
  where to_apply = takeWhile (λ(t, _, _). t ≤ now) (sortMessages messages)

sortMessages :: [Message] → [Message]
sortMessages = sortOn (λ(t, p, _). (t, p))

applyEvents :: Game world ⇒ [Message] → world → world
applyEvents messages w = foldl apply w messages
  where apply w (_, p, e) = handle p e w
Eventually, every client receives the same list of messages, up to the interleaving of events from
different players. After a stable sort by timestamp and player, the lists of events will be identical, so
all clients will calculate the same game state.
3.3 A few more steps
This is nice and simple, but ignores the step function, which models the evolution of the state
as time passes. Clearly, we have to call step before each event, and again at the end. In order to
calculate the time passed since the last event, we also have to keep track of which timestamp a
snapshot of the game state corresponds to:
currentState :: Game world ⇒ Timestamp → [Message] → world
currentState now messages = step (now − t) world
  where to_apply = takeWhile (λ(t, _, _). t ≤ now) (sortMessages messages)
        (t, world) = applyEvents to_apply (0, start)

applyEvents :: Game world ⇒ [Message] → (Timestamp, world) → (Timestamp, world)
applyEvents messages ts = foldl apply ts messages
  where apply (t0, world) (t1, p, e) = (t1, handle p e (step (t1 − t0) world))
Unfortunately, students would not be quite happy with this implementation. The step function
is commonly used to calculate a single step in a physics simulation, which requires that it is called
often enough to achieve a decent simulation frequency.
Fig. 8. The evolution of a simple multi-player program with latency in the messages
For instance, when simulating a projectile, a common technique is to adjust the position linearly
along the velocity vector, and the velocity linearly according to forces like gravity or drag. The
result is a stepwise-linear approximation, the precision of which depends on the sampling frequency.
Another common technique is to do collision detection only once per time step, and again the
result depends on the frequency of steps. It is important, then, that the step function is called at a
reasonably high frequency.
We could leave students to resolve this themselves, by dividing time steps into multiple finer
steps, if necessary, in their step implementation. However, imposing that burden would violate our
first requirement: not requiring any discipline from the user. Therefore, we have to ensure that the
step function is called often enough, even if there is no user event for a while.
In simulations and interactions, the implemented behavior is to evaluate the step function as
quickly as possible between animation frames. Thus, simulations running on faster computers
may take smaller steps and be more accurate. The need for eventual consistency precludes this
strategy here. Instead, the desired step length for collaborationOf is defined globally and set to
one-sixteenth of a second:
gameRate :: Double
gameRate = 1 / 16
We can obtain the desired resolution by wrapping the student’s step function in one that iterates
step on time steps larger than the desired rate:
gameStep :: Game world ⇒ Double → world → world
gameStep dt world | dt ≤ 0        = world
                  | dt > gameRate = gameStep (dt − gameRate) (step gameRate world)
                  | otherwise     = step dt world
Replacing step with gameStep in the implementation of currentState and applyEvents above yields
a correct solution.
To see this code in action, we construct the following program: As the time passes, a column
grows on the screen, from bottom to top. Initially, it is gray. When a player presses a number key,
the column begins to grow in a different color. Additionally, whenever step is called, this current
height of the column is marked with a black line.
Because the program output is one-dimensional, we can use the horizontal dimension to show
in Figure 8 how the players’ displays evolve over time. The dashed arrows indicate the transfer
of each packet to the other player, which is not instant. When a message from the other player
arrives, the state is updated to reflect this change. Because this game essentially records its history,
these delayed updates result in a “flicker” as the client updates the state. In many cases the effect
will be less noticeable than it is here. We can see that the algorithm achieved eventual consistency,
as the right edge of the drawing looks identical for both clients.
3.4 Limiting time travel
In the course of a game, quite a large number of events occur. As time goes by, the cost of calculating
the current state from scratch grows without bound, and will eventually become too large to be
completed between each frame, and animations will stop being smooth. Clearly, some of that
computation is quite pointless to repeat.
Our message transport guarantees that messages from each client are delivered in order, so
that when we receive a message, we know that we have seen all messages from the sender up to
that timestamp. If we call this the client’s commit time, then we know that no new events will be
received before the earliest commit time of any client, which we call the commit horizon. We can
now precompute the game state up to the commit horizon, forget all older state and events, and
use this as the basis for future state recalculations.
In the following we will explain the data structure and associated operations that CodeWorld
uses to keep track of the committed state, the pending events and each player’s commit time. The
main data type is
data Log world = Log { committed :: (Timestamp, world),
                       events    :: [Message],
                       latest    :: [(Player, Timestamp)] }
Initially, there are no events, and everything is at timestamp zero:
initLog :: Game world ⇒ [Player] → Log world
initLog ps = Log (0, start) [ ] [(p, 0) | p ← ps]
When an event comes in, the message is added to events via the public addEvent function.
addEvent :: Game world ⇒ Message → Log world → Log world
addEvent (t, p, e) log = recordActivity t p (log { events = events’ })
where events’ = sortMessages (events log ++ [(t, p, e)])
Then, the client’s commit time in latest is updated.
recordActivity :: Game world ⇒ Timestamp → Player → Log world → Log world
recordActivity t p log | t < t_old = error "Messages out of order"
                       | otherwise = advanceCommitted (log { latest = latest’ })
  where latest’ = (p, t) : delete (p, t_old) (latest log)
        Just t_old = lookup p (latest log)
This might have moved the commit horizon, and if some of the messages from the list events are
from before the commit horizon, we can integrate them into the committed state.
advanceCommitted :: Game world ⇒ Log world → Log world
advanceCommitted log = log { events    = to_keep,
                             committed = applyEvents to_commit (committed log) }
  where (to_commit, to_keep) = span (λ(t, _, _). t < commitHorizon log) (events log)
commitHorizon :: Log world → Timestamp
commitHorizon log = minimum [t | (p, t) ← latest log]
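As a quick illustration (hypothetical values, our own example): with two players whose last-seen timestamps are 5.0 and 3.2, no event with an earlier timestamp can still arrive, so 3.2 is the horizon:
-- commitHorizon (Log (0, start) [ ] [(1, 5.0), (2, 3.2)])  ==  3.2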
The final public function is used to query the current state of the game. Starting from the committed
state, it applies the pending events.
currentState :: Game world ⇒ Timestamp → Log world → world
currentState now log | now < commitHorizon log = error "Cannot look into the past"
currentState now log = gameStep (now − t) world
  where past_events = takeWhile (λ(t, _, _). t ≤ now) (events log)
        (t, world) = applyEvents past_events (committed log)
This algorithm, printed in Figure 9 in its entirety, relies on these assumptions:
(1) The list of players provided to initLog is correct.
(2) For each player, events are added in order, with monotonically increasing timestamps.
(3) The state is never queried at a time that lies before commitHorizon.
The first assumption is ensured by the CodeWorld framework. The second is ensured by using a
monotonic time source to create the timestamps, and by using an order-preserving communication
channel. The third follows from the fact that every client’s own timestamps are always in that
player’s past, and therefore the argument to currentState is later than the commit horizon.
If one of the players were to stop interacting with the program, that client would not send
any messages. In this case, no events can be committed and the list of events to be processed by
currentState would again grow without bound. To avoid this, each client sends empty messages
(“pings”) whenever the user has not produced input for a certain amount of time. When such a
ping is received, the addPing function advances the latest field without adding a new event:
addPing :: Game world ⇒ (Timestamp, Player) → Log world → Log world
addPing (t, p) log = recordActivity t p log
This way, the number of events in the events field is bounded by
max input rate × (max network delay + max time between events or pings) × (number of players − 1),
which is independent of how long the game has been running.
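On the sending side, the rule can be as simple as the following sketch (our own; the names maybePing, pingInterval and the idle threshold are hypothetical and not part of CodeWorld’s published interface):
-- Hypothetical client-side ping rule: emit a ping once the user has
-- been idle longer than some fixed interval.
maybePing :: Timestamp → Timestamp → Player → Maybe (Timestamp, Player)
maybePing lastSent now me
  | now − lastSent > pingInterval = Just (now, me)
  | otherwise                     = Nothing
  where pingInterval = 1  -- seconds; an assumed value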
More tweaks are possible. In the CodeWorld implementation, we also cache the current state, so
that querying the current state again, when no new events were received, is much cheaper. When
an input event from another player comes in, we discard this cached value and recalculate it based
on the committed state and the stored events.
The main property of the code in Figure 9 is: No matter the interleaving of events from the
various players, the result of currentState is the same. To increase our confidence that this property
holds, we used the QuickCheck library to randomly generate pairs of lists of events with monotonically increasing timestamps, consider all possible interleavings, and check that the resulting Log world data structures are identical.
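A minimal sketch of such a property follows (our reconstruction, not the actual test suite; it assumes Test.QuickCheck in scope, an Eq instance for Log world, concrete event generators, and that the two lists model the streams of players 1 and 2):
-- Sketch of the interleaving property: every interleaving of two
-- per-player event streams must produce the same log.
prop_interleave :: [Message] → [Message] → Property
prop_interleave es1 es2 =
  monotone es1 && monotone es2 ==>
    allEqual [ foldl (flip addEvent) (initLog [1, 2]) i
             | i ← interleavings (tag 1 es1) (tag 2 es2) ]
  where tag p ms = [(t, p, e) | (t, _, e) ← ms]
        allEqual xs = and (zipWith (==) xs (drop 1 xs))
        monotone ms = and (zipWith (≤) ts (drop 1 ts))
          where ts = [t | (t, _, _) ← ms]
        interleavings [ ] ys = [ys]
        interleavings xs [ ] = [xs]
        interleavings (x : xs) (y : ys) =
          map (x :) (interleavings xs (y : ys)) ++
          map (y :) (interleavings (x : xs) ys)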
4 EXPERIENCES AND DISCUSSION
The interface from Section 2 allows the creation of multiuser applications with great ease, and with the algorithms
in Section 3, CodeWorld can provide a smooth user experience. The reader may wonder, though, how well this
works in practice, and what the drawbacks are for this
approach.
4.1 Early experience
For a first practical evaluation of the system, the second author organized a stress test, involving four colleagues, a selection of games with different styles, and small prizes for winners.
Fig. 10. The tank game
type Timestamp = Double
type Player    = Int
type Message   = (Timestamp, Player, Event)
data Log world = Log { committed :: (Timestamp, world),
                       events    :: [Message],
                       latest    :: [(Player, Timestamp)] }
-- Public interface
initLog :: Game world ⇒ [Player] → Log world
initLog ps = Log (0, start) [ ] [(p, 0) | p ← ps]
addEvent :: Game world ⇒ Message → Log world → Log world
addEvent (t, p, e) log = recordActivity t p (log { events = events’ })
  where events’ = sortMessages (events log ++ [(t, p, e)])
addPing :: Game world ⇒ (Timestamp, Player) → Log world → Log world
addPing (t, p) log = recordActivity t p log
currentState :: Game world ⇒ Timestamp → Log world → world
currentState now log | now < commitHorizon log = error "Cannot look into the past"
currentState now log = gameStep (now − t) world
  where past_events = takeWhile (λ(t, _, _). t ≤ now) (events log)
        (t, world) = applyEvents past_events (committed log)
-- Internal functions
gameRate :: Double
gameRate = 1 / 16
sortMessages :: [Message] → [Message]
sortMessages = sortOn (λ(t, p, _). (t, p))
recordActivity :: Game world ⇒ Timestamp → Player → Log world → Log world
recordActivity t p log | t < t_old = error "Messages out of order"
                       | otherwise = advanceCommitted (log { latest = latest’ })
  where latest’ = (p, t) : delete (p, t_old) (latest log)
        Just t_old = lookup p (latest log)
advanceCommitted :: Game world ⇒ Log world → Log world
advanceCommitted log = log { committed = applyEvents to_commit (committed log),
                             events    = to_keep }
  where (to_commit, to_keep) = span (λ(t, _, _). t < commitHorizon log) (events log)
commitHorizon :: Log world → Timestamp
commitHorizon log = minimum [t | (p, t) ← latest log]
applyEvents :: Game world ⇒ [Message] → (Timestamp, world) → (Timestamp, world)
applyEvents messages ts = foldl apply ts messages
  where apply (t0, world) (t1, p, e) = (t1, handle p e (gameStep (t1 − t0) world))
gameStep :: Game world ⇒ Double → world → world
gameStep dt world | dt ≤ 0        = world
                  | dt > gameRate = gameStep (dt − gameRate) (step gameRate world)
                  | otherwise     = step dt world
Fig. 9. The complete client prediction code discussed in Section 3.4
During the event, participants play-tested the games, hoping to uncover any bugs or unexpected quirks of the format. The games involved, which can be played at https://code.world/gallery-icfp17.html, include:
• The Dot Grab game (Figure 6), which was originally written by a student as a single-computer interaction. Since the API for games is a straightforward extension of the one for
interactions, it was trivial to make this game networked.
• The game “Snake” (Figure 1), where a player has to move across the playing field while
avoiding the other player’s trails and the walls.
• A tank game (Figure 10) where each player steers a tank using the keyboard, aims using
the mouse and fires bullets that explode after a certain time. Here the game evolves over
time and manages a larger number of moving parts – tanks, bullets, and explosions.
Manual testing showed that the system is nicely responsive and that the artifacts due to network
latency are noticeable, but not irritating. The system handled the more complex tank game well. A
separate test over a high-latency satellite connection remained playable, but with more pronounced
latency-related artifacts, as expected.
We plan to introduce the API to students in the Spring semester of 2017.
4.2 Floating point calculation
A dominant concern in the implementation in Section 3 was to guarantee eventual consistency of
all clients, so that game states would always converge over time. We achieve that requirement,
on the assumption that the code passed to collaborationOf consists of pure functions. This result
relies on a strong notion of pure function, though, which requires that outputs are predictable
even between instances of the code running on different machines, operating systems, and runtime
environments. In this sense, even functions in Haskell may not always be pure!
A notable source of nondeterminism in Haskell is underspecified floating point operations.
The Double type in Haskell is implementation-defined, and “should cover IEEE double-precision”
(Marlow 2010). Our interest is limited to the Haskell-to-JavaScript compiler GHCJS (Stegeman and
Mackenzie 2017), which inherits the floating point operation semantics from JavaScript. The ECMA
standard (ECMA International 2015) specifies a JavaScript number to be a “double-precision 64-bit
binary format IEEE 754-2008 value” – which is, fortunately, already quite specific. We are
optimistic that the basic arithmetic operations are deterministic, and this optimism is supported by
anecdotal reports from a game developer with Gas Powered Games (Emerson 2009):
We have never had a problem with the IEEE standard across any PC CPU, AMD
and Intel, with this approach. None of our [. . . ] customers have had problems with
their machines either, and we are talking over 1 million customers here. We would
have heard if there was a problem with the FPU not having the same results as
replays or multi-player mode wouldn’t work at all.
However, transcendental functions (exp, sin, cos, log, etc.) are not completely specified by IEEE-754,
and different browser/system combinations are allowed to yield slightly different results here.
We tested this with a double pendulum simulation, which makes heavy use of sin and cos in
every simulation step. The double pendulum is a well-known example of a chaotic system, and
we expect it to quickly magnify any divergence in state. Indeed, after running the program on
two different browsers (Firefox and Chrome, on the same Linux machine) for several minutes, the
simulations take different paths, confirming the worries about these functions.
If, however, we use a custom implementation of sin – based on a quadratic curve approximation
– the simulation runs consistently. We tested this variant on multiple JavaScript engines (Chrome,
Firefox, and Microsoft Edge), on different operating systems (Windows, Linux, Android, and
ChromeOS) and on different CPUs (Intel and ARM), and did not uncover any more consistency
issues. The tests confirm again that, apart from inconsistent implementations of transcendental functions, basic floating point operations are reliably deterministic in practice.
Fig. 11. Smoothing the effect of late events
We can deploy a fix to transcendental functions in two ways. In CodeWorld’s educational
mode, where we have implemented a custom standard library, it is easy to just substitute new
implementations of these functions. In the plain Haskell variant, however, we would like to allow
the programmer to make use of existing libraries, which may use standard floating point functions.
To achieve this, we can instead replace these operations at the JavaScript level, ensuring that even
third-party Haskell libraries are deterministic.
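To make the replacement concrete, here is a minimal sketch of a deterministic sine (our own reconstruction of the idea; the exact curve CodeWorld uses may differ). It is built only from +, −, *, / and comparisons, which IEEE 754 pins down exactly:
-- Deterministic sine via a quadratic (parabola) approximation, after
-- range reduction to [−π, π]; exact at 0, ±π/2 and ±π.
deterministicSin :: Double → Double
deterministicSin x = go (reduce x)
  where tau = 2 * pi
        reduce a = a − tau * fromIntegral (round (a / tau) :: Integer)
        go a | a ≥ 0     = (4 / pi) * a − (4 / (pi * pi)) * a * a
             | otherwise = negate (go (negate a))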
In the future, we also plan to automate checks for synchronization problems like this. We cannot
directly compare program states in our implementation, since they are of arbitrary type. However,
we can compare the generated pictures – or a hash thereof – to achieve essentially the same effect.
4.3 Interpolating the effects of delayed messages
Another trick in the game programming toolbox is interpolation to smooth out artifacts that result
from corrections to the game state. These artifacts can be clearly seen in Figure 8: The moment the
message 2 reaches the first player, the top segment of the growing column abruptly changes
from green to red. Similarly, in a game like the tank-fighting game in Figure 10, an opponent can
appear to teleport to a new location. In this situation, many games would instead interpolate the
position smoothly over a fraction of a second. This can introduce new anomalies of its own, such
as characters passing through walls, or tanks moving sideways, but in most cases, it is hoped the
result will appear more realistic than the alternative.
By providing an API that is completely abstract in the game state, it seems that we have shut the
door on implementing this trick. We lack the ability to look inside the state and adjust positions.
Surprisingly, though, a form of interpolation is possible. All that is needed is a sort of change of
coordinates. While we cannot interpolate in space, we can interpolate in time! When a delayed
event arrives, we initially treat it as if its timestamp is “now” and then slide it backward in time
over a short interpolation period until it reaches its actual time.
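A sketch of this sliding schedule follows (our own; the function name and signature are hypothetical, not CodeWorld’s API):
-- Effective timestamp of a late event: starts at the arrival time and
-- slides linearly back to the event’s true timestamp over an
-- interpolation period of `period` seconds.
effectiveTime :: Double → Timestamp → Timestamp → Timestamp → Timestamp
effectiveTime period arrival actual now
  | now ≥ arrival + period = actual
  | otherwise              = actual + (1 − (now − arrival) / period) * (arrival − actual)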
Usually, the step function is approximately continuous, and as a result, moving an event backwards in time gives a smooth interpolation in the state as well. This can be seen in Figure 11: After
the message 2 arrives at Player 1, the column smoothly changes its color from green to red, from
the tip downwards, until the correct state is reached. Like all interpolation, though, anomalies can
still happen. This scheme introduces abrupt artifacts as we slide a delayed message past another
event with a non-commuting effect. In Figure 11 the second player smoothly integrates the delayed
3 message, and the top of the column changes color from blue to yellow. But the moment this
event is pushed before the local event 4 , the column abruptly changes its color back to blue.
This is an elegant trick to recover the ability to do interpolation. However, it is not clear whether
interpolation is always the best experience, and a jerky, abrupt update may be preferred for certain
games.
4.4 Irreversible updates
In some cases, the visual artifacts due to delayed messages, whether smooth or jerky, pose a serious
problem. Consider, for example, a card game in which both players click to draw cards from the
same deck. Suppose player 1 clicks to draw a card first, but the message from player 1 to player 2
arrives after player 2 clicks as well. For a brief moment before the message is received, player 2
sees the top card, even though it ultimately ends up in the first player’s hand! This is an example of
a case where eventual consistency in the game state is not good enough.
This problem is hard to avoid, given our constraints and the third requirement of responding
immediately to local events. It can be mitigated by the game programmer, by adding a short delay
before major events such as those that reveal secrets. The delay can sometimes be creatively hidden
by animations or effects. This trick dodges the problem as long as network latency is shorter than
this delay, but it provides no guarantee. A complete solution to this problem must involve the
programmer in a way that is undesirable in our setting, since only the programmer understands
which state changes represent a significant enough event to postpone.
4.5 Lock-step simulation and CRDTs
Our approach to lock-step simulation may remind some readers of conflict-free replicated data
types (CRDTs), introduced by Shapiro et al. (2011) as a lightweight approach to providing strong
consistency guarantees in distributed systems, even in the face of network failure, partition or
out-of-order event delivery. These data types come in two forms: “convergent” replicated data
types (CvRDTs) are based on transmitting state directly, while “commutative” replicated data types
(CmRDTs), are based on transmitting operations that act on that state. Despite the similarity, our
game state does not form a CmRDT, as these require that update operations on the game state are
commutative. This limits the types of data that can be used in such an approach and is inconsistent
with our first requirement of supporting arbitrary game state.
We find, however, that the Log type defined in Figure 9 forms a CmRDT. The addEvent operations from different players commute, as each just adds its event to the set. The theory of CRDTs hence
provides another argument that the resulting game state is eventually consistent (in fact, strongly
so).
5 CONCLUSIONS
By implementing lock-step simulation with client prediction generically in the educational programming environment CodeWorld, we have demonstrated once more that pure functional
programming excels at abstraction and modularity. In addition, this work will directly support the
education of our next generation of programmers.
ACKNOWLEDGMENTS
The first author has been supported by the National Science Foundation under Grant No. CCF-1319880 and Grant No. 14-519. We thank Stephanie Weirich, Zach Kost-Smith, Sterling Stein and
Justin Hsu for helpful comments on a draft of this paper, as well as the reviewers for their comments.
REFERENCES
ECMA International. 2015. ECMAScript Language Specification (6th ed.). Geneva. http://www.ecma-international.org/ecma-262/6.0/ECMA-262.pdf
Elijah Emerson. 2009. How to make Box2D more deterministic? (2009). http://www.box2d.org/forum/viewtopic.php?t=1800&start=10#p16662
Jeff Epstein, Andrew P. Black, and Simon L. Peyton Jones. 2011. Towards Haskell in the cloud. In Proceedings of the 4th ACM
SIGPLAN Symposium on Haskell, Haskell 2011, Tokyo, Japan, 22 September 2011, Koen Claessen (Ed.). ACM, 118–129. DOI:
http://dx.doi.org/10.1145/2034675.2034690
Ben Lippmeier. 2017. gloss. http://gloss.ouroborus.net/. (2017).
Simon Marlow (Ed.). 2010. Haskell 2010 Language Report.
Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski. 2011. A comprehensive study of Convergent and
Commutative Replicated Data Types. Research Report RR-7506. Inria – Centre Paris-Rocquencourt ; INRIA. 50 pages.
https://hal.inria.fr/inria-00555588
Forrest Smith. 2011. Synchronous RTS Engines and a Tale of Desyncs. (2011). https://blog.forrestthewoods.com/synchronous-rts-engines-and-a-tale-of-desyncs-9d8c3e48b2be
Luite Stegeman and Hamish Mackenzie. 2017. GHCJS. https://github.com/ghcjs/ghcjs. (2017).
Mark Terrano and Paul Bettner. 2001. 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond. In
Proceedings of the 15th Games Developers Conference. http://www.gamasutra.com/view/feature/3094/1500_archers_on_a_288_network_.php
Some asymptotic results for fiducial and confidence distributions
Piero Veronese 1 and Eugenio Melilli
Bocconi University, via Röntgen, 1, 20136, Milan, Italy
[email protected], [email protected]
Abstract. Under standard regularity assumptions, we provide simple approximations for specific classes of
fiducial and confidence distributions and discuss their connections with objective Bayesian posteriors. For a real
parameter the approximations are accurate at least to order O(n−1 ). For the mean parameter µ = (µ1 , . . . , µk )
of an exponential family, our fiducial distribution is asymptotically normal and invariant to the importance
ordering of the µi ’s.
Keywords: ancillary statistic, confidence curve, coverage probability, natural exponential family, matching
prior, reference prior.
1 Introduction
Confidence and fiducial distributions, often confused in the past, have recently received renewed attention from statisticians thanks to several contributions which clarify the concepts within a purely frequentist setting
and overcome the lack of rigor and completeness typical of the original formulations. For a wide and comprehensive presentation of the theory of confidence distributions and a rich bibliography we refer the reader
to the book by Schweder & Hjort (2016) and to the review paper by Xie & Singh (2013). The latter also
highlights the importance of this theory in meta-analysis, see also Liu et al. (2015). For what concerns fiducial
distributions Hannig and his coauthors, starting from the original idea of Fisher, have developed in several
papers a generalized fiducial inference which is suitable for a large range of situations; see Hannig et al. (2016)
for a complete review on the topic and updated references.
Given a random vector S (representing the observations or a sufficient statistic) with distribution indexed
by η = (θ, λ), where θ is the real parameter of interest, a confidence distribution (CD) for θ is a function C
of S and θ such that: i) C(s, ·) is a distribution function on R for any fixed realization s of S and ii) C(S, θ)
has a uniform distribution on (0, 1), whatever the true value of η. The second condition is crucial because it
implies that the coverage of the intervals derived from C is exact. If it is satisfied only for the sample size
tending to infinity, C is an asymptotic CD and the coverage is correct only approximately. Given a CD, it is
1 Corresponding author
possible to define the confidence curve ccs (θ) = |1 − 2C(s, θ)|, which displays the confidence intervals induced
by C for all levels, see Schweder & Hjort (2016, Sec. 1.6).
A fiducial distribution (FD) for a parameter θ has been obtained by several authors starting from a data-generating equation S = G(U, θ), with U a random vector with known distribution, which allows one to transfer
randomness from S to θ. In particular Hannig (2009, 2016) derives an explicit expression for the density of a
FD which coincides with that originally proposed by Fisher (1930), namely hs (θ) = |∂Fθ (s)/∂θ|, when both θ
and S are real and G(U, θ) = Fθ−1 (U ), with Fθ distribution function of S and U uniform in (0, 1).
In this paper we consider the specific definition of FD given in Veronese & Melilli (2016), recalled in Section
3, which for a real parameter and a continuous S again simplifies to the Fisher’s formula. In particular, we
assume that the FD function is
Hs(θ) = 1 − Fθ(s) = Prθ(S > s)   (1)
with Fθ(s) decreasing and differentiable in θ and with limits 0 and 1 when θ tends to the boundaries of its parameter space. These conditions always hold, for example, if Fθ belongs to a regular real natural
exponential family (NEF). This FD is also a CD (asymptotically in the discrete case). For the multi-parameter
case a peculiar aspect of our FD is its dependence on the inferential importance ordering of the parameters,
similarly to what happens for the objective Bayesian posterior obtained from a reference prior. The connections
between our definition and Hannig’s setup are discussed in Veronese & Melilli (2016).
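As a classic illustration (our addition, the standard textbook case rather than an example from this paper): if S ∼ N(θ, 1), then (1) gives Hs(θ) = 1 − Φ(s − θ) = Φ(θ − s), so the fiducial distribution of θ is N(s, 1), which is also an exact CD.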
In Section 2.1, extending a result proved in Veronese & Melilli (2015) for a NEF, we give a second order
asymptotic expansion of our FD/CD in the real parameter case based only on the maximum likelihood estimator (MLE). This expansion does not require any regularity conditions other than the standard ones usually assumed in maximum likelihood asymptotic theory. Furthermore, we show that it coincides with the expansion of the Bayesian posterior induced by the Jeffreys prior. This fact establishes a connection with objective
Bayesian inference, whose aim is to produce posterior distributions free of any subjective prior information. In
Section 2.2, starting from the well known p∗ -formula of Barndorff-Nielsen (1980, 1983), we propose and discuss
a FD/CD which, using an ancillary statistic in addition to the MLE, has good asymptotic behavior. Higher
order asymptotics for generalized fiducial distributions have been discussed, to our knowledge, only in the unpublished paper Pal Majumder & Hannig (2016). However, its focus is different, being devoted to identifying data-generating equations with desirable properties. In Section 3 we consider a NEF with a multidimensional
parameter and show that, without any further regularity conditions, the asymptotic FD of the mean parameter
is normal, it does no longer depend on the inferential ordering of the parameters and coincides with the cor-
responding asymptotic Bayesian posterior. Some examples illustrate the good properties and performances of
the various proposed FD/CD, with emphasis on coverage and expected length of confidence intervals. Finally,
the Appendix includes the proofs of all the theorems and propositions stated in the paper.
2 Asymptotics for fiducial and confidence distributions: the real parameter case
2.1 An expansion with error of order O(n^{−1})
In Veronese & Melilli (2015) an Edgeworth expansion with an error of order O(n−1 ) of the FD/CD for the
mean parameter of a real NEF was derived. Here we generalize this result to an arbitrary regular model.
Let X = (X1 , . . . , Xn ) be an i.i.d. sample of size n from a density (with respect to the Lebesgue measure)
parameterized by θ belonging to an open set Θ ⊆ R. Let θ̂ be the MLE of θ based on X and denote by
pθ (θ̂) its density. Let ℓ(θ) = n−1 log pθ (θ̂) and let ℓ′′ (θ̂) and ℓ′′′ (θ̂) be the second and the third derivative of
ℓ(θ) with respect to θ, evaluated in θ̂. Then the expected and observed Fisher information of θ̂ (per unit)
are I(θ) = −n^{−1} Eθ(∂² log pθ(θ̂)/∂θ²) and −ℓ′′(θ̂), respectively. Let b = b(θ̂) = −1/ℓ′′(θ̂). Consider now Z = √(n/b) (θ − θ̂), which is an approximate standardized version of θ in the FD/CD-setup, and let Hn,θ̂(z)
be its FD/CD derived from the sampling distribution of θ̂. If θ̂ is sufficient, Hn,θ̂ (z) is exact, otherwise
it is a natural approximation of the exact one, see e.g. Schweder & Hjort (2016). To prove our result we
resort to the expansion of the frequentist probability Prθ (Z ≤ z) provided in Datta & Ghosh (1995) or in
Mukerjee & Ghosh (1997). Thus we need the regularity assumptions used in these papers, see also Ghosh
(1994, Ch. 8) and Bickel & Ghosh (1990) for a precise statement. Notice that the conditions required for the
frequentist expansion of the distribution of the MLE are rarely reported in a rigorous way in books and papers.
However, what is important here is that, in order to prove our result, we do not need any further assumption
and this fact allows an immediate and fair comparison between MLE- and FD/CD-asymptotic theory.
Theorem 1 Let X be an i.i.d. sample of size n from a density pθ, θ ∈ Θ ⊆ R. Then, under the regularity assumptions cited above, the distribution function Hn,θ̂(z) of the FD/CD for Z = √(n/b)(θ − θ̂) has the expansion
Hn,θ̂(z) = Φ(z) − φ(z) [(1/6) b^{3/2} ℓ′′′(θ̂)(z² − 1)] n^{−1/2} + O(n^{−1}).   (2)
If pθ also satisfies the conditions for the expansion of a Bayesian posterior, see e.g. Johnson (1970, Theorem
2.1, with K = 1), we have the following
Corollary 1 In the same setting of Theorem 1, let π J (θ) ∝ I(θ)1/2 be the Jeffreys prior for θ. If π J is
improper, assume that there exists an n0 ≥ 1 such that the posterior distribution π J (θ|z) of θ is proper for
n ≥ n0 , almost surely for all θ. Then the expansion of π J (θ|z) coincides with that of Hn,θ̂ (z) given in (2).
Theorem 1 and Corollary 1 confirm the idea that the Jeffreys posterior is really free of any subjective prior
information. Furthermore, they naturally establish a connection between FD/CD-theory and matching priors,
i.e. priors that ensure approximate frequentist validity of posterior credible sets. More precisely, a prior π for
which Prθ (θ ≤ q1−α (X, π)) = 1 − α + o(n−r/2 ), where q1−α (X, π) denotes the (1 − α)th posterior quantile of
θ, is called a matching prior of order r, see Datta & Mukerjee (2004) for a general review and references. For
a regular model indexed by a real parameter it is well known, see Datta & Mukerjee (2004, Theorem 2.5.1),
that the Jeffreys prior π J is the unique first order matching prior and is also a second order matching prior if
and only if the model satisfies the following condition:
I(θ)^{−3/2} Eθ[(∂ℓ(θ)/∂θ)³] is a constant free of θ.   (3)
Veronese & Melilli (2015) study the existence of a prior (named fiducial prior ) which induces a Bayesian
posterior coinciding with the FD, extending a result given by Lindley (1958) for a continuous univariate
sufficient statistic, see also Taraldsen & Lindqvist (2015) for a generalization to multivariate group models.
Because a FD/CD realizes the exact matching, we immediately have the following
Corollary 2 If a fiducial prior π F exists, then it coincides with the Jeffreys prior π J . Furthermore, the
condition (3) is necessary for the existence of π F .
Notice that for a model belonging to a NEF, with mean parameter µ and variance function V (µ), condition
(3) becomes: “2V ′ (µ)V (µ)−1/2 is constant”. The solution of this differential equation is V (µ) = (c1 µ + c2 )2 ,
i.e. a fiducial prior for a parameter of a NEF may exist only if its variance function is quadratic. This result
was found for the first time in Veronese & Melilli (2015), using a totally different approach.
Example 1 (Exponential distribution). Let (X1, . . . , Xn) be an i.i.d. sample from an exponential distribution with mean µ. The MLE µ̂ of µ is the sample mean and b = b(µ̂) = µ̂². Then the expansions of the FD/CD for Z = (√n/µ̂)(µ − µ̂) in (2) and that of the standardized MLE W = −Z = (√n/µ̂)(µ̂ − µ), see (A.3), are respectively
Φ(z) − φ(z) [2(z² − 1)/3] n^{−1/2} + O(n^{−1}),   (4)
Φ(w) − φ(w) [−(2w² + 1)/3] n^{−1/2} + O(n^{−1}).   (5)
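Where the coefficient 2(z² − 1)/3 in (4) comes from can be checked directly; the following brief computation is our own addition. Since µ̂ = X̄ has a gamma(n, n/µ) distribution, ℓ(µ) = −log µ − µ̂/µ + const, so ℓ′′(µ) = 1/µ² − 2µ̂/µ³ and ℓ′′′(µ) = −2/µ³ + 6µ̂/µ⁴. Evaluating at µ = µ̂ gives ℓ′′(µ̂) = −1/µ̂² (hence b = µ̂²) and ℓ′′′(µ̂) = 4/µ̂³, so that (1/6) b^{3/2} ℓ′′′(µ̂)(z² − 1) = (1/6) µ̂³ (4/µ̂³)(z² − 1) = 2(z² − 1)/3, as in (4).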
Figure 1: Coverages and expected lengths for the 90% intervals with n = 15 based on: exact
FD/CD (red), Normal approximation (green), expansions of FD/CD (black) and of MLE (blue).
It follows that the confidence intervals obtained from (4) and (5) are different, contrary to what happens for
those based only on the normal approximation. Their coverages and expected lengths are reported in Figure 1
for a sample of size n = 15 and confidence level 0.9. Notice that the coverage of the FD/CD-intervals is much
closer to the nominal level than that of the intervals based on the MLE, while the expected lengths are quite
similar. For the sake of comparison Figure 1 reports also the coverage and the expected length of the intervals
based on the exact FD/CD, which is an inverse-gamma(n, nµ̂), see Veronese & Melilli (2015, Tab.1). The
latter intervals are clearly exact, but wider. Finally, by Corollary 1, the expansion (4) coincides with that of
the Jeffreys posterior. It is easy to verify, according to Corollary 2, that the fiducial prior exists and coincides
with π J (µ) ∝ 1/µ.
⋄
Example 2 (Fisher’s gamma hyperbola-Nile problem). Fisher (1973, Sec.VI.9) considers a sample of size n
from a curved exponential family obtained by two independent gamma distributions with means constrained
on a hyperbola. Following Efron & Hinkley (1978), we directly start with the sufficient statistic S = (S1 , S2 ),
with S1 and S2 distributed according to ga(n, e−η ) and ga(n, eη ), respectively. Here ga(α, β) denotes a gamma
distribution with shape parameter α and mean α/β. It follows that the likelihood of the model is Lη (s) =
exp{−e^{−η} s1 − e^{η} s2} and that the MLE of η is η̂ = (1/2) log(S1/S2). Even if an exact inference on η cannot be performed using only η̂ (the minimal sufficient statistic S is indeed bivariate), an asymptotic FD/CD for η, based on η̂, can be easily obtained from Theorem 1. Since ℓ′′′(η̂) = 0, it follows from (2) that, in this case, the normal distribution N(η̂, b/n), with b = −1/ℓ′′(η̂) = n/(2√(s1 s2)), is an approximate FD/CD of η with error
of order O(n−1 ). Figure 2 reports the plot of its density compared with the exact FD/CD based on S, which
will be derived in the next section. It shows the goodness of the approximation even for a very small sample
size (n = 5 in the plot). Finally, it is easy to check that Eη [(∂ℓ(η)/∂η)3 ] = 0, thus condition (3) holds and, by
Corollary 2, a fiducial prior might exist for this model. Indeed it exists and we will find it in the next section.⋄
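For the data reported in Figure 2 the approximation is easy to make explicit (our own arithmetic): with n = 5, s1 = 17.321 and s2 = 0.116 one gets η̂ = (1/2) log(s1/s2) ≈ 2.50 and b/n = 1/(2√(s1 s2)) ≈ 0.353, so the approximate FD/CD is roughly N(2.50, 0.353).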
Figure 2: Approximate (black) and exact (red) fiducial densities for η in the Fisher’s gamma hyperbola for a sample size n = 5, s1 = 17.321, s2 = 0.116.
Another criterion to define matching priors studied in the Bayesian literature is based directly on the distribution functions, see Datta & Mukerjee (2004, Sec. 3.2). Because Hn,θ̂(z) is stochastic in a frequentist setup, as occurs for a posterior distribution, we can consider the matching between Eθ(Hn,θ̂(z)) = Eθ Prθ̂{√(n/b)(θ − θ̂) ≤ z} and Prθ{√(n/b)(θ − θ̂) ≤ z}. Clearly quantiles and distribution functions are
strongly connected and thus it is not surprising that the conditions for the existence of matching priors in the
two criteria are related. Indeed, the first order matching conditions are the same, while this is not true for
the second order ones. Notice that the matching in terms of quantiles is obtained using the quantity Z which
can be seen as an approximate pivotal quantity. This is meaningful in an asymptotic setting, but it is not
appropriate for small sample sizes. In this case, the FD/CD realizes an exact matching if we replace Z with
the pivotal quantity given by the distribution function of θ̂, namely Fθ (θ̂). Indeed, we have
Eθ Prθ̂ {Fθ (θ̂) ≤ z} = Eθ Prθ̂ {1 − Hθ̂ (θ) ≤ z} =
Eθ Prθ̂ {Hθ̂ (θ) ≥ 1 − z} = 1 − Eθ Prθ̂ {Hθ̂ (θ) ≤ 1 − z} = 1 − Eθ (1 − z) = z,
and because Prθ {Fθ (θ̂) ≤ z} = z, the exact matching for distribution functions holds. However, an exact
FD/CD does not always exist and thus it is natural to look for approximations which have nice asymptotic
properties. Furthermore, in a multiparameter case quantiles are not well defined and thus the study of the
frequentist properties of a multivariate FD/CD can be conducted along the lines developed for matching
distribution functions.
2.2 An approximation based on the Barndorff-Nielsen p∗-formula
Consider a sample X whose distribution depends on a real parameter θ. In the previous section we have
obtained an approximate FD/CD for θ starting from the distribution of the MLE θ̂. However, if θ̂ is not
sufficient, the approximation of the FD/CD can be improved adding the remaining information included in
the sample. This can be done resorting to the “conditionality resolution” of the statistical model, i.e. the
construction of an ancillary statistic A and of an approximate conditional distribution of θ̂ given A = a. We
refer to Barndorff-Nielsen (1980, 1983) for a detailed discussion on the topic and recall here only some useful
facts. His well known approximate distribution of θ̂ given A = a is
p∗θ(θ̂|a) = c(a, θ) |j(θ̂)|^{1/2} L(θ; x)/L(θ̂; x),   (6)
where L(θ; x) is the likelihood function, j(θ̂) is the observed Fisher information and c(a, θ) is the normalizing
constant which does not depend on θ in many important cases. Formula (6) is quite simple, is generally
accurate to order O(n−1 ), or even O(n−3/2 ), and exact in specific cases. Here the term approximation refers
to one of the two following situations: i) there exists an ancillary statistic A, but it is not possible to construct
the exact conditional distribution of θ̂ given A = a; ii) an exact ancillary statistic does not exist and an
approximate one is used. It is worth remarking that the p∗ formula is invariant to reparameterizations and is exact
for transformation models. Furthermore, under repeated sampling from a real NEF, where no conditioning is
involved, p∗ is often of order O(n−3/2 ) and is exact for normal (known variance), gamma (known shape) and
inverse-gaussian (known shape) distributions.
If Fθ∗ (θ̂|a) denotes the distribution function corresponding to p∗θ (θ̂|a) and satisfies the conditions reported
after (1), we can derive an approximate FD/CD for θ as h∗x (θ) = |∂Fθ∗ (θ̂|a)/∂θ|. This construction of a
FD/CD is not different in essence from the widespread procedure used to derive a Bayesian posterior starting
from an approximate (e.g. profile, pseudo or composite) likelihood. A similar approach based on approximate
likelihood is used also by Schweder & Hjort (2016) to construct a CD.
The next result concerning a real NEF is useful when the exact distribution of θ̂ is difficult to obtain.
Proposition 1 If θ̂ is the MLE of θ based on an i.i.d. sample from a real regular NEF, with density pθ (x) =
exp{θx − M (θ)}, then h∗θ̂ (θ) = |∂Fθ∗ (θ̂)/∂θ| is an exact FD/CD for θ based on p∗θ (θ̂). It is an approximate
FD/CD based on the whole sample and its order of approximation depends on that of p∗θ (θ̂).
The following examples, concerning curved exponential families, i.e. NEFs in which a constraint on the
natural parameter space is imposed, illustrate another typical case in which formula (6) can be fruitfully
applied to construct a FD/CD.
Example 2 ctd. As previously observed, the MLE η̂ is not sufficient and thus the exact FD/CD can be
obtained starting from the conditional distribution of η̂ given the ancillary statistic A = √(S1 S2)/n, proposed
by Fisher (1973, Sec. VI.10-11). After some calculations, one obtains
pη(η̂|a) = exp{−2na cosh(η̂ − η)}/(2K0(2na)),   (7)
where K0(w) = ∫₀^∞ exp{−w cosh(z)} dz is the modified Bessel function of the second kind of order 0, evaluated at w.
As observed by Efron & Hinkley (1978), it is easy to see from (7) that this example involves a translation (and
thus a transformation) model, so that pη (η̂|a) = p∗η (η̂|a). Thus the exact FD for η is hη̂,a (η) = −∂Fη∗ (η̂|a)/∂η
and, because η is a location parameter, it equals the posterior obtained from the Jeffreys prior π J (η) ∝ 1, see
Veronese & Melilli (2016, Prop.8). The nature of the parameter η also implies that inferences based on MLE
and hη̂,a(η) coincide. ⋄
Figure 3: Confidence curves for a sample size n = 15, generated from ρ = 0.3 with s1 = 19.248 and s2 = 4.827, r = 0.414 and ρ̂ = 0.209. Left graph: ccr (green), ccrstab (brown), cc0 (blue) and cc1 (red). Right graph: cc1 (red), cc∗ (black), ccJ (orange). The horizontal line identifies the 95% confidence intervals.
Example 3 (Bivariate normal model). Consider an i.i.d. sample (Xi , Yi ), i = 1, . . . , n, from a bivariate
normal distribution with expectations 0, variances 1 and correlation coefficient ρ. This is a simple curved
exponential model with sufficient statistics S1 = Σ_{i=1}^n (Xi² + Yi²)/2 and S2 = Σ_{i=1}^n Xi Yi, but the inference
on ρ is a challenging problem as shown in Fosdick & Raftery (2012) and Fosdick & Perlman (2016). Both
Efron & Hinkley (1978) and Barndorff-Nielsen (1980) use this example to illustrate the construction of an
approximate ancillary statistic in a conditional inference setting. Their proposals essentially coincide and lead
to consider the “affine” ancillary A = (S1 − n)/√(n(1 + ρ̂²)), where ρ̂ is the MLE of ρ.
To discuss the performance of h∗ obtained starting from p∗ , we compare it with other possible asymptotic
FDs and with the Bayesian posterior obtained from the Jeffreys prior π J (ρ) ∝ (ρ2 + 1)1/2 /(1 − ρ2 ). In
particular, we consider the following FDs: hr and hrstab obtained from the sample correlation coefficient r
and its stabilizing transformation which, as well known, improves the inferential performance of r, see also
Schweder & Hjort (2016, pag. 209 and 224); h0 and h1 obtained considering the first one or the first two terms
of (2), respectively. We assume a sample size n = 15 because a larger value of n, e.g. 50, produces essentially
the same (good) results for all choices. The left graph of Figure 3 reports an example of the confidence curves
ccr , ccrstab , cc0 and cc1 corresponding to the previous FDs. The curves present different behaviors because
they are based on the two estimators r and ρ̂ of ρ, which assume quite different values in the sample. The
right graph compares cc1 with cc∗ and ccJ obtained from h∗ and Jeffreys posterior, respectively. As expected,
the last two curves, both based on the sufficient statistics S1 and S2 , are very similar and induce confidence
intervals narrower than those induced by cc1 . To better appreciate the good behavior of h∗ , we compare
the corresponding coverage and expected length with those of hr , hrstab and the Jeffreys posterior. Figure
4 confirms the very bad inferential performance of hr . The intervals corresponding to h∗ have the coverage
closest to the nominal one, while those obtained by hrstab present an over-coverage. However, these latter
intervals have a uniformly larger expected length. Finally, Bayesian intervals show an intermediate behavior
in terms of both coverage and expected length. The same example is discussed by Pal Majumder & Hannig
(2016), but they have a different aim and consider different FDs.
Figure 4: Coverages and expected lengths of the 95% intervals with n = 15 based on: h∗ (black), hrstab (brown), π J (orange) and hr (green).
3 Asymptotics for fiducial distributions: the multidimensional parameter case
For a parameter θ in Rd , inspired by the step-by-step procedure proposed by Fisher (1973), Veronese & Melilli
(2016) give a simple and quite general definition of FD, which we summarize here. We refer to the latter
paper for details, examples, relationships with objective Bayesian analysis performed using reference priors
and a comparison with Hannig’s fiducial approach. Notice that for a multidimensional parameter there is not
a unique definition of CD, see Schweder & Hjort (2016, Ch.9), so that in the following we refer only to FDs.
Given a random vector S, representing the sample or a sufficient statistic, with dimension m ≥ d and density
pθ , consider the partition S = (S[d] , S−[d] ), where S[d] = (S1 , . . . , Sd ) and S−[d] = (Sd+1 , . . . , Sm ), and suppose
that S−[d] is ancillary for θ. Clearly, if d = m, S−[d] disappears. Thus, the density pθ of S can be written
as pθ (s[d] |s−[d] )p(s−[d] ) and the information on θ provided by the whole sample is included in the conditional
distribution of S[d] given S−[d] . Assume now that there exists a one-to-one smooth reparameterization from θ
to φ = (φ1 , . . . , φd ), with the φi ’s ordered with respect to their inferential importance, such that
pφ(s[d] | s−[d]) = ∏_{k=1}^d pφd−k+1(sk | s[k−1], s−[d]; φ[d−k]),   (8)
with obvious meaning for s[0] and φ[0] . If, for each k, the one-dimensional conditional distribution function
of Sk is monotone and differentiable in φk and has limits 0 and 1 when φk tends to the boundaries of its
parameter space (this is always true, for example, if this distribution belongs to a regular real NEF), it is
possible to define the joint fiducial density of φ as
hs(φ) = ∏_{k=1}^d hs[k],s−[d](φd−k+1 | φ[d−k]),   (9)
where
hs[k],s−[d](φd−k+1 | φ[d−k]) = |(∂/∂φd−k+1) Fφd−k+1(sk | s[k−1], s−[d]; φ[d−k])|   (10)
is inspired by the definition of the FD for a real parameter. Some remarks useful in the sequel follow.
i) When m = d = 1, so that an ancillary statistic is not needed, formulas (9) and (10) reduce to hs (φ) =
|∂Fφ (s)/∂φ|, the original proposal of Fisher (1930).
ii) When d > 1 but the parameter of interest is φ1 only, it follows from (9) that its FD is simply given by
hs(φ1) = |(∂/∂φ1) Fφ1(sd | s[d−1], s−[d])|,
which is based on the whole sample and is also a CD. A typical choice for Sd is given by the MLE φb1 of φ1
and thus, when φb1 is not sufficient, one has to consider the distribution of φb1 given the ancillary statistic s−[d]
as done in Section 2.2.
iii) The FD in (9) is generally not invariant under a reparameterization of the model unless the transformation
from φ to λ = (λ1 , . . . , λd ) say, maintains the same increasing order of importance in the components of the two
vectors and λk is a function of φ1 , . . . , φk , for each k = 1, . . . , d, i.e. φ(λ) is a lower triangular transformation.
In Veronese & Melilli (2015) it is shown that the univariate FD/CD for a real NEF is asymptotically
normal. Because the multivariate FD defined in (9) is a product of one-dimensional conditional FDs, it is
quite natural to expect that also the FD for a d-dimensional NEF is asymptotically normal.
Theorem 2 Let X = (X1, . . . , Xn) be an i.i.d. sample from a regular NEF on Rd with Xi having density pθ(xi) = exp{Σ_{k=1}^d θk xk − M(θ)}, mean vector µ = µ(θ) and variance function V(µ) = Varµ(Xi). Furthermore, let x̄ be the observed value of the sample mean X̄ = n^{−1} Σ_{i=1}^n Xi. If Xi admits a bounded density with respect to the Lebesgue measure or is supported by a lattice, then the fiducial distribution of µ is asymptotically order-invariant and asymptotically normal with mean x̄ and covariance matrix V(x̄)/n.
Since V(x̄) coincides with the inverse of both the observed and the estimated expected Fisher information
matrix, recalling standard results about asymptotic Bayesian posterior distributions, see e.g. Johnson & Ladalla
(1979), the following corollary immediately holds.
Corollary 3 Consider the statistical model specified in Theorem 2. If we assume a positive prior for µ
having continuous first partial derivatives, then the asymptotic Bayesian posterior for µ coincides with the
asymptotically normal fiducial distribution.
The asymptotic normality for multidimensional generalized fiducial distributions has been proved by Sonderegger & Hannig (2014) under a set of regularity assumptions. We remark that the previous two results are specific to our
definition of FD and hold for NEFs without any extra regularity condition. Furthermore, the proof of Theorem
2, given in the Appendix, is completely different from the standard ones used to show asymptotic normality
in frequentist, Bayesian or generalized fiducial settings. It is based on the convergence of the conditional
distributions determined by the importance ordering of the parameters, it heavily relies on the properties of
the mixed parametrization of the NEF and consequently the result is given in terms of the mean parameter,
which is more interpretable than the natural one.
Consider now a parameter λ = g(µ), with g a one-to-one lower triangular continuously differentiable
function. From Veronese & Melilli (2016, Prop. 1), it follows that the FD for λ can be obtained from that for
µ by the standard change of variable technique and thus we can construct the asymptotic FD in the same way.
However, Theorem 2 states that the asymptotic FD for µ is order invariant and hence it could be interesting
to investigate whether this is true also for an arbitrary parameter. This conjecture might be reasonable in light of what happens in Bayesian theory, where the asymptotic (reference) posteriors do not depend on the order of the parameter components. The following example illustrates this point.
Example 4. Consider a sample of size n from a multinomial experiment with outcome probability vector
p = (p1, . . . , pd), with Σ_{k=1}^d pk ≤ 1. Then, the vector of counts S = (S1, . . . , Sd), with Σ_{k=1}^d Sk ≤ n, is
distributed according to a multinomial distribution with parameters n and p. Using the step-by-step procedure
described above, Veronese & Melilli (2016, formula 25) have proved that the FD for p is a generalized Dirichlet
distribution which depends on the specific fixed ordering of the pi ’s. Assume now d = 2 and consider the
transformation φ1 = p1 /p2 and φ2 = p2 which is not lower triangular. The FD of φ = (φ1 , φ2 ) in this order is
hs(φ) ∝ φ1^{s1−1/2} (1 + φ1)^{−1/2} φ2^{s1+s2−1/2} (1 − (1 + φ1)φ2)^{n−s1−s2−1/2}.   (11)
The latter is different from the FD induced by that of p but coincides with the posterior distribution obtained
from the reference prior for φ, see Veronese & Melilli (2016, Sec. 5.4).
Consider now the asymptotic setting. From Theorem 2 it follows that the asymptotic FD of p = (p1 , . . . , pd )
is N(x̄, V(x̄)/n), with x̄ = s/n and where the elements of V(x̄) are vkk = x̄k (1 − x̄k ) and vkr = −x̄k x̄r ,
k ≠ r. It is easy to verify that for d = 2 it induces on φ a normal distribution with means x̄1/x̄2 and x̄2, variances x̄1(x̄1 + x̄2)/(n x̄2³) and x̄2(1 − x̄2)/n, and covariance −x̄1/(n x̄2). This distribution coincides with the asymptotic distribution corresponding to (11) (derived for example using standard results on Bayesian theory) and this
fact supports our conjecture that asymptotic FDs are invariant to the importance ordering of the parameters
and can always be derived through the standard delta method.
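The delta-method computation behind these values is short (our own check, not in the original text): with g(p) = (p1/p2, p2) and V = V(x̄)/n, the gradients at x̄ are ∇g1 = (1/x̄2, −x̄1/x̄2²)^T and ∇g2 = (0, 1)^T, so Var(φ1) ≈ ∇g1^T V ∇g1 = x̄1(x̄1 + x̄2)/(n x̄2³), Var(φ2) ≈ ∇g2^T V ∇g2 = x̄2(1 − x̄2)/n and Cov(φ1, φ2) ≈ ∇g1^T V ∇g2 = −x̄1/(n x̄2).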
Appendix
Proof of Theorem 1. For the sake of clarity, in this proof we denote by Θ̂ the MLE of a parameter θ and by
θ̂ the corresponding estimate. If Fθ (θ̂) is the distribution function of Θ̂, assumed decreasing in θ, let 1 − Fθ (θ̂)
be the FD for θ. If Fθ (θ̂) is increasing the proof is similar with 1 − Fθ (θ̂) replaced by Fθ (θ̂). Then
Hn,θ̂(z) = Prθ̂{√(n/b)(θ − θ̂) ≤ z} = Prθ̂{θ ≤ θn} = 1 − Prθn{Θ̂∗n ≤ θ̂},   (12)
where θn = z√(b/n) + θ̂ and Θ̂∗n is the MLE based on n i.i.d. random variables X∗n,i, i = 1, . . . , n, belonging
, i = 1, . . . , n, belonging
to the same family of distributions of Xi , but with parameter θn . Note that θn converges to θ for n → ∞,
because θ̂ converges to the “true” value θ for almost all sequences (x1 , x2 , . . .) and Θ is an open interval. Thus
θn belongs to Θ for n large enough and for each z ∈ R. Starting from (12), we can also write
Hn,θ̂(z) = 1 − Prθn{√(n/b)(Θ̂∗n − θn) ≤ √(n/b)(θ̂ − θn)}
= Prθn{√(n/b)(Θ̂∗n − θn) ≥ −z} = Prθn{√(n/b)(θn − Θ̂∗n) ≤ z}.
Thus, the asymptotic expansion of Hn,θ̂(z) can be derived by expanding the frequentist distribution function of √(n/b)(θn − Θ̂∗n). This expansion can be directly obtained by standard results, even if {X∗n,i, i = 1, 2, . . . , n; n = 1, 2, . . .} is a triangular array, because we consider only random variables and a first order approximation, see e.g. García-Soidán (1998) and Petrov (1995, Theorem 5.22). The frequentist expansion of Z = √(n/b)(θ − Θ̂)
has been provided in several papers about matching priors under a set of regularity assumptions. Using formula
(3.2.3) in Datta & Mukerjee (2004) with θ = θn and recalling that Θ̂∗n is the MLE of θn , we obtain
Prθn{√(n/b)(θn − Θ̂∗n) ≤ z} = Φ(z) − φ(z) [(1/2) I′(θn)/I(θn)^{3/2} + (1/6) Eθn(ℓ′′′(θn)/(−ℓ′′(θn))^{3/2})(z² + 2)] (1/√n) + O(1/n).   (13)
Now, because −ℓ′′(θ̂) − I(θ̂) = Op(n^{−1/2}) (see e.g. Severini, 2000, Sec. 3.5.3) and θn − θ̂ = z√(b/n) = Op(n^{−1/2}),
we have I(θn ) = −ℓ′′ (θ̂) + Op (n−1/2 ) = 1/b + Op (n−1/2 ). Moreover, applying the delta method to the
expectation in (13), this expansion becomes
Prθn{√(n/b)(θn − Θ̂∗n) ≤ z} = Φ(z) − φ(z) [−(1/2) b^{3/2} ℓ′′′(θ̂) + (1/6) b^{3/2} ℓ′′′(θ̂)(z² + 2)] n^{−1/2} + O(n^{−1})
= Φ(z) − φ(z) [(1/6) b^{3/2} ℓ′′′(θ̂)(z² − 1)] n^{−1/2} + O(n^{−1}),
and the theorem is proved.
⋄
Proof of Corollary 1. The result follows immediately using the expansion of the posterior distribution provided
by Johnson (1970, Theorem 2.1 and formulae (2.25) and (2.26)), assuming π J (θ) ∝ I(θ)1/2 as prior. Notice
that under the stated conditions on the posterior, this result can be used even if the prior is improper, as
observed in Ghosh et al. (2006, pag. 106).
⋄
Proof of Proposition 1. Recalling that for a real NEF x̄ = M ′ (θ̂), we can write
p∗θ (θ̂) = exp{n(θM ′ (θ̂) − M ∗ (θ))},
where M∗(θ) = log(∫ exp{n θ M′(θ̂)} dν(θ̂)), with ν(θ̂) denoting the dominating measure of the density of
θ̂. Thus p∗θ (θ̂) belongs to a regular real NEF and the result follows immediately by Veronese & Melilli (2015,
Theorem 1).
⋄
Proof of Theorem 2. Given a square d × d matrix A, we use Ak[r] to denote the vector of the first r elements of
the k-th row of A and A[k][k] to denote the matrix identified by the first k rows and columns of A. Moreover,
A^T denotes the transpose of A.
In order to determine the asymptotic FD of µ we apply the step-by-step procedure introduced in Section
3 to the conditional distribution of X̄k given X̄[k−1] = x̄[k−1] for each k. Clearly for k = 1, we have the
marginal distribution of X̄1 . Since the covariance matrix V(µ) of Xi is finite, by the central limit theorem
X̄ is asymptotically N(µ, n−1 V(x̄)) and thus the marginal distribution of X̄[k] is also asymptotically normal
with E(X̄[k] ) = µ[k] and V ar(X̄[k] ) = n−1 V(x̄)[k][k] . Let
λk = µk + V(x̄)k[k−1] [V(x̄)[k−1][k−1]]^{−1} (x̄[k−1] − µ[k−1])   (14)
and
qk = V(x̄)kk − V(x̄)k[k−1] [V(x̄)[k−1][k−1]]^{−1} V(x̄)k[k−1]^T.   (15)
Using known results about the convergence of conditional distributions, see Steck (1957, Theorem 2.4) or
Barndorff-Nielsen & Cox (1979, Sec.4), it follows that the conditional distribution of X̄k given X̄[k−1] = x̄[k−1]
is asymptotically N(λk , n−1 qk ).
Now recall that for a NEF it is always possible to consider the so called “mixed parameterization”
(µ[k] , θ−[k] ) which is one-to-one with the natural parameter θ, see e.g. Brown (1986, ch. 3). For θ−[k]
fixed, the distribution of X̄[k] belongs to a NEF with parameter θ [k] and thus the conditional distribution of
X̄k given X̄[k−1] = x̄[k−1] depends only on θk . The same must be true of course for the corresponding asymptotic distribution, so that its mean parameter λk depends only on θk and hence only on µk . Considering now
the alternative mixed parameter (µ[k−1] , θk , θ −[k] ), it follows that there exists a one-to-one correspondence
between θk and µk , for µ[k−1] and θ−[k] fixed. As a consequence µ[k−1] can be fixed arbitrarily in the mixed
parameterizations (µ[k−1] , θk , θ−[k] ) with no effect on the conditional distribution and we specifically assume
µ[k−1] = x̄[k−1] . Using the parameter (x̄[k−1] , µk , θ−[k] ), we have that λk coincides with µk , see (14). Summing
up, each of the three parameters λk , θk and µk represents a possible parameterization of the asymptotic conditional distribution of X̄k given X̄[k−1] = x̄[k−1] , for fixed x̄[k−1] and θ−[k] . Thus we can find the asymptotic
FD of λk. Consider now a random vector X̄∗ with distribution belonging to the same family as that of X̄, with mixed parameter (x̄[k−1], µ∗k, θ−[k]), where µ∗k = x̄k + zk/√n, with zk ∈ R, as in the proof of Theorem 1.
Notice that the marginal distributions of X̄∗[k−1] and of X̄[k−1] are equal. Such a µ∗k is well defined for large
n since (x̄[k−1] , x̄k , θ−[k] ) is a possible value for the mixed parameter in the distribution of the whole vector,
because the NEF is regular and thus the parameter space is open.
For n varying and fixed k, the sequence of marginal sample means X̄∗[k] derives from random vectors whose
mean parameter depends on n, so that it forms a triangular array. In order to determine the FD of λk , we can
consider the quantity √n(λk − x̄k), which is a sort of standardization of λk in our fiducial context. Using (1), similarly to what is done in (12), we can write
Prx̄k
zk
λk ≤ x̄k + √ X̄∗ [k−1] = x̄[k−1] , θ−[k]
n
∗
(16)
= 1 − Prλ∗k X̄k ≤ x̄k |X̄∗ [k−1] = x̄[k−1] , θ−[k] ,
√
n(λk − x̄k ) ≤ zk |X̄∗ [k−1] = x̄[k−1] , θ−[k] = Prx̄k
√
where λ∗k = x̄k + zk / n. Since V ar(X̄∗[k] ) is a continuous function of µ∗ = E(X̄∗ ), it converges to a positive
definite matrix for each k when µ∗ converges to the “true” value of µ, for n → ∞. Then, using the result on
the convergence of a conditional distribution presented at the beginning of the proof with µ replaced by µ∗ ,
we have that X̄k∗ given X̄[k−1] = x̄[k−1] is asymptotically N(λ∗k , qk /n). Notice that from the existence of the
second moment of each component of X̄∗[k−1] , it follows that the condition required by Steck (1957, Theorem
2.4, formula (28)), for the case of triangular arrays, is satisfied. Thus, the asymptotic normality of X̄k∗ given
X̄[k−1] = x̄[k−1] implies, for n → +∞,
sup Prλ∗k X̄k∗ ≤ x̄k |X̄[k−1] = x̄[k−1] , θ −[k] − Φ
zk
r
n
(x̄k − λ∗k ) → 0 a.s.
qk
Recalling the expression of λ∗k , we obtain
√
sup Prx̄k +zk /√n X̄k ≤ x̄k |X̄[k−1] = x̄[k−1] , θ−[k] − Φ (−zk / qk ) → 0
zk
14
a.s.
which, using (16), gives
sup Prx̄k
zk
√
√
n(λk − x̄k ) ≤ zk |X̄[k−1] = x̄[k−1] , θ−[k] − Φ (zk / qk ) → 0
a.s.
We can conclude that the conditional FD of λk given θ−[k] is asymptotically normal with mean x̄k and
variance n−1 qk , and thus it does not depend on θ−[k] . Recalling the one-to-one correspondence between θk
and λk , for fixed θ−[k] , and in particular that λd is a one-to-one function of θd , it follows that λ1 , λ2 , . . . , λd
are asymptotically independent, so that the full vector λ = (λ1 , λ2 , . . . , λd ) is asymptotically N(x̄, n−1 Q(x̄)),
where Q(x̄) is the diagonal matrix with k-th element qk .
To obtain the asymptotic FD of µ we consider the one-to-one lower-triangular transformation µ = g(λ),
with λ = g−1 (µ) given by (14) for k = 1, . . . , d. Consider now the lower d × d triangular matrix A = A(x̄)
whose k-th row is made up by the vector −V(x̄)k[k−1] [V(x̄)[k−1][k−1] ]−1 , in the first k−1 positions, 1 in the k-th
position and 0 elsewhere. Thus we can write λ = Aµ + (I − A)x̄ and µ = A−1 λ + (I − A−1 )x̄, with I denoting
the identity matrix of order d. By applying the Cramér delta method it follows that µ is asymptotically normal
with (asymptotic) mean and covariance matrix A−1 x̄ + (I − A−1 )x̄ = x̄ and n−1 A−1 Q(x̄)A−1 T , respectively.
We now show that A−1 Q(x̄)A−1 T = V(x̄) or, equivalently, Q(x̄) = AV(x̄)AT . By direct computation it is
easy to see that the (k, h)-th element of AV(x̄), k, h = 1, 2, . . . , d, is
T
V(x̄)kh − V(x̄)k[k−1] [V(x̄)[k−1][k−1] ]−1 V(x̄)h[k−1] .
(17)
Notice that (17) is 0 for k > h because the product of its last two factors gives a (k −1)-dimensional vector with
1 in the h-th position and 0 otherwise. The matrix AV(x̄)AT is of course symmetric, so that it is sufficient
to proceed only for k ≥ h. On its diagonal we have
T
V(x̄)kk − V(x̄)k[k−1] [V(x̄)[k−1][k−1] ]−1 V(x̄)k[k−1] ,
k = 1, . . . , d,
(18)
because the only nonzero element in the product of the k-th row of AV(x̄) and the k-th column of AT is the
product of (17), with h = k, and 1. For k > h, the (k, h)-th element of AV(x̄)AT is 0, because the first k − 1
components of the k-th row of AV(x̄) and the last d − h components of the h-th column of AT are zero. Thus
the matrix AV(x̄)AT coincides with Q(x̄) and this completes the proof of the theorem.
15
⋄
Acknowledgments
This research was supported by grants from Bocconi University.
References
Barndorff-Nielsen, O. (1980). Conditionality resolutions. Biometrika, 67, 293–310.
Barndorff-Nielsen, O. & Cox, D. R. (1979). Edgeworth and saddle-point approximation with statistical applications. J. R. Stat. Soc. Ser. B, 41, 279–312.
Bickel, P. J. & Ghosh, J. (1990). A decomposition for the likelihood ratio statistic and the bartlett correction–a
bayesian argument. The Annals of Statistics, (pp. 1070–1090).
Brown, L. D. (1986). Fundamentals of statistical exponential families with applications in statistical decision
theory. Lecture Notes-Monograph Series, 9, 1–279.
Datta, G. S. & Ghosh, J. K. (1995). On priors proving frequentist validity for Bayesian inference. Biometrika,
82, 37–45.
Datta, G. S. & Mukerjee, R. (2004). Probability matching priors: higher order asymptotics. Lecture Notes in
Statistics, 178, 1–126.
Efron, B. & Hinkley, D. V. (1978). Assessing the accuracy of the maximum likelihood estimator: Observed
versus expected Fisher information. Biometrika, 65, 457–482.
Fisher, R. A. (1930). Inverse probability. Proceedings of the Cambridge Philosophical Society, 26, 528–535.
Fisher, R. A. (1973). Statistical methods and scientific inference. Hafner Press: New York.
Fosdick, B. K. & Perlman, M. D. (2016). Variance-stabilizing and confidence-stabilizing transformations for
the normal correlation coefficient with known variances. Comm. Statist. Simulation Comput., 45, 1918–1935.
Fosdick, B. K. & Raftery, A. E. (2012). Estimating the correlation in bivariate normal datat with known
variances and small sample size. The American Statistician, 66, 34–41.
Garcı́a-Soidán, P. H. (1998). Edgeworth expansions for triangular arrays. Communications in Statistics-Theory
and Methods, 27(3), 705–722.
Ghosh, J. K. (1994). Higher order asymptotic. Institute of Mathematical Statistics and American Statistical
Association: Hayward, California.
16
Ghosh, J. K., Delampady, M., & Samanta, T. (2006). An introduction to Bayesian analysis. Springer: New
York.
Hannig, J. (2009). On generalized fiducial inference. Statist. Sinica, 19, 491–544.
Hannig, J., Iyer, H. K., Lai, R. C. S., & Lee, T. C. M. (2016). Generalized fiducial inference: A review and
new results. J. American Statist. Assoc., 44, 476–483.
Johnson, R. A. (1970). Asymptotic expansions associated with posterior distributions. Ann. Math. Statist,
41, 851–864.
Johnson, R. A. & Ladalla, J. N. (1979). The large sample behaviour of posterior distributions when sampling
from multiparameter exponential family models, and allied results. Sankhyā, Series B, 41, 196–215.
Lindley, D. V. (1958). Fiducial distributions and Bayes theorem. J. R. Stat. Soc. Ser. B, 20, 102–107.
Liu, D., Liu, R. Y., & Xie, M. (2015). Multivariate meta-analysis of heterogeneous studies using only summary
statistics: efficiency and robustness. J. Amer. Statist. Assoc., 110, 326–340.
Mukerjee, R. & Ghosh, M. (1997). Second-order probability matching priors. Biometrika, 84, 970–975.
Pal Majumder, A. & Hannig, J. (2016). Higher order asymptotics of Generalized Fiducial Distribution.
arXiv:1608.07186 [], (pp. 1–33).
Petrov, V. V. (1995). Limit theorems of probability theory. Clarendom Press: Oxford.
Schweder, T. & Hjort, N. L. (2016). Confidence, likelihood and probability. London: Cambridge University
Press.
Severini, T. A. (2000). Likelihood methods in statistics, volume 22. Oxford University Press, Oxford.
Sonderegger, D. L. & Hannig, J. (2014). Fiducial theory for free-knot splines. In Contemporary Developments
in Statistical Theory (pp. 155–189). Springer, New York.
Steck, G. P. (1957). Limit theorems for conditional distributions. Univ. California Publ. Statist, 2, 237–284.
Taraldsen, G. & Lindqvist, B. H. (2015). Fiducial and posterior sampling. Communications in Statistics Theory and Methods, 44, 3754–3767.
Veronese, P. & Melilli, E. (2015). Fiducial and confidence distributions for real exponential families. Scand.
J. Stat., 42, 471–484.
17
Veronese, P. & Melilli, E. (2016). Objective bayesian and fiducial inference: some results and comparisons. To
appear in Journal of Statistical Planning and Inference; arXiv:1612.01882 [], (pp. 1–37).
Xie, M. & Singh, K. (2013). Confidence distribution, the frequentist distribution estimator of a parameter: a
review. Internat. Stat. Rev, 81, 3–39.
18
| 10 |
A Generalization of the maximal-spacings in several
dimensions and a convexity test.
Catherine Aarona , Alejandro Cholaquidisb , Ricardo Fraimanb
a
Université Blaise-Pascal Clermont II.
b
Universidad de la República.
arXiv:1411.2482v2 [] 5 May 2016
Abstract
The notion of maximal-spacing in several dimensions was introduced and studied
by Deheuvels (1983) for data uniformly distributed on the unit cube. Later on,
Janson (1987) extended the results to data uniformly distributed on any bounded
set, and obtained a very fine result, namely, he derived the asymptotic distribution
of different maximal-spacings notions. These results have been very useful in many
statistical applications.
We extend Janson’s results to the case where the data are generated from
a Hölder continuous density that is bounded from below and whose support is
bounded. As an application, we develop a convexity test for the support of a distribution.
Key words:maximal spacing, convexity test, non-parametric density estimation.
1
Introduction
The notion of spacings, which for one dimensional data are just the differences between
two consecutive order statistics, have been extensively studied in the one dimensional
setting; see, e.g., the review papers [18, 19]. Many important applications to testing
and estimation problems have been derived from the study of the asymptotic behaviour
of the spacings. Applications to testing problems date back to [20], who address the
asymptotic theory of a class of tests for Increasing Failure Rate. For estimation problems,
[21] propose the maximum spacing estimation method to estimate the parameters of a
univariate statistical model.
In the multidimensional case, several different notions of maximal-spacing have been
proposed. Most of them are based on the nearest-neighbors balls (see for instance [14]
or [2]) or on the Voronoi tessellation [16], (a comparison can be found in [22]), but they
do not capture the key idea of ‘largest set missing the observations’. In contrast, this is
the case with the different and global notion proposed in [7], and generalized in [12].
In [7], the notion of maximal-spacing is defined and studied
Q for iid data uniformly
distributed in [0, 1]d as the maximal length a of a cube C = [xi , xi + a], included in
[0, 1]d that does not contain any of the observations. This notion has been extended in
[12], in which the uniformity assumption remains but the support of the distribution is
no longer assumed to be [0, 1]d but may be any compactum S. Moreover, C is allowed
to be any compact and convex set. Finally, while in [7] only bounds are given, in [12]
the asymptotic distribution for the maximal spacing is provided.
The notion of maximal multivariate spacing, and in particular Janson’s result, has
been used to solve different statistical problems. In set estimation (see, for instance, [3]
1
and [4] ), it is used to prove the optimality of the rates of convergence.
The aim of this paper is to extend Janson’s result to Hölder continuous densities,
and develop, using that extension, a test to decide whether the support is convex or not.
It is organized as follows: Section 2 is devoted to the extension of Janson’s results. A
new definition (which includes Janson’s as a particular case) is given and the associated
theoretical results are presented. The proofs of these results are given in Appendix
A. Section 3 is dedicated to the problem of testing the convexity of the support. The
corresponding proofs are given in Appendix B. Just to mention some application of this
test, let us recall that when dealing with support estimation, if the support is known
to be convex, the convex hull of the observations provides a consistent and well studied
(see for instance [23],[24] and [27]) estimator of S which does not require any smoothing
parameter. Also, a convexity test can be used to, a posteriori, select a tuning parameter.
In [6] a test for convexity is also proposed, and applied to choose the parameter of the
ISOMAP (see [26]) method for dimensionality reduction. The convexity test based on
Janson’s extension allows us to provide an estimation of the p-value, whereas, in [6], it
has to be estimated via the Monte Carlo method.
2
Main definitions and results
We start by fixing some notation that will be used throughout the paper.
Given a set S ⊂ Rd , we denote by ∂S, S̊, S, diam(S), |S|, and H(S), the boundary,
interior, closure, diameter, Lebesgue measure, and convex hull of S, respectively. We
denote by k · k and h·, ·i the euclidean norm and the inner product respectively. We
write N (S, ε) for the inner covering number of S (i.e.: the minimum number of balls
of radius ε and centred in S required to cover S). Recall that if S is compact, there
exists CS such that N (S, ε) ≤ CS ε−d . We denote by B(x, ε) the closed ball in Rd ,
of radius ε, centered at x. We set ωd = |B(0, 1)|. Given λ ∈ R, A, C ⊂ Rd , we set
λA = {λa : a ∈ A}, A ⊕ C = {a + c : a ∈ A, c ∈ C}, and A C = {x : {x} ⊕ C ⊂ A}. For
the sake of simplicity, we use the notation x + C, instead of {x} ⊕ C. If λ ≥ 0, we set
Aλ = A ⊕ λB(0, 1) and A−λ = A λB(0, 1). Given A, C ⊂ Rd two non-empty compact
sets, the Hausdorff (or Pompeiu–Hausdorff) distance between them is given by
dH (A, C) = max max d(a, C), max d(c, A) ,
a∈A
c∈C
where d(a, C) = inf{ka − ck : c ∈ C}.
Let A ⊂ Rd be a compact convex set with |A| = 1, let v be a vector of Rd , and let
αA (v) be the constant defined in equation (2.4) of [12]:
Z
Z
d
1
αA (v) =
...
Det n(yi ) i=1 dω(y1 ) . . . dω(yd ),
(1)
d!
where ω denotes the d − 1 dimensional Hausdorff measure, and for y ∈ ∂A, n(y) denotes
the exterior unit normal vector to A at y. The integral in (1) is over all y1 , . . . , yd ∈ ∂A
2
such that v is a linear combination of n(y1 ), . . . , n(yd ) with positive coefficients, and
Det(n(yi ))di=1 is the determinant of the vectors n(yi ) in an orthonormal basis. Corollary
7.4 in [13] proves that αA (v) is almost everywhere independent of v, so that there can
be defined an αA such that αA (v) = αA almost everywhere.
If A is the unit cube, then αA = 1; while if B is the unit ball, then
1
αB =
d!
√
πΓ
Γ
d
2 +1
d+1
2
!d−1
.
Lastly, let U be a random variable such that P(U ≤ t) = exp − exp(−t) .
2.1
Janson’s result and its extension
Let S ⊂ Rd be a bounded set with |S| = 1 and |∂S| = 0. Let ℵn = {X1 , . . . , Xn } be iid
random vectors uniformly distributed on S, and A a bounded convex set. In [12], the
maximal-spacing is defined as
n
o
∆∗ (ℵn ) = sup r : ∃x such that x + rA ⊂ S \ ℵn .
To generalize the results of [12] to the non-uniform case, we need to extend the
definition of maximal-spacing. When the sample is drawn according to a probability
measure PX , we consider the probability measure of the largest set λA missing ℵn . If
|A| = 1, ∆∗ (ℵn )d is the Lebesgue measure of the largest set x + rA ⊂ S \ ℵn . When
the sample is drawn from a non-uniform probability measure, is natural to use the
same definition, replacing the Lebesgue measure by the true underling distribution PX .
If PX has continuous density f , then PX (x + rA) ∼ f (x)rd for sufficiently small r,
so one can define the maximal-spacing as the largest r such that there exists x with
x + f (x)r 1/d A ⊂ S \ ℵn .
Definition 1. Let ℵn = {X1 , . . . , Xn } be an iid random sample of points in Rd , drawn
according to a density f with bounded support S. Let A ⊂ Rd be a convex and compact
set such that |A| = 1 and its barycentre is the origin of Rd . We define
n
o
r
∆(ℵn ) = sup r : ∃x such that x +
A
⊂
S
\
ℵ
,
n
f (x)1/d
V (ℵn ) = ∆d (ℵn ),
and
U (ℵn ) = n∆d (ℵn ) − log(n) − (d − 1) log log(n) − log(αA ).
The following result can be found in [12].
Theorem 1. Let S ⊂ Rd be a bounded set such that |S| = 1 and |∂S| = 0. Let
ℵn = {X1 , . . . , Xn } be iid random vectors uniformly distributed on S. Then,
3
i)
L
U (ℵn ) −→ U
when n → ∞,
ii)
lim inf
n→+∞
nV (ℵn ) − log(n)
= d − 1 a.s.,
log(log(n))
iii)
lim sup
n→+∞
nV (ℵn ) − log(n)
= d + 1 a.s.
log(log(n))
A rescaling extends these results to the case where |S| =
6 1.
Corollary 1. Let S ⊂ Rd be a bounded set such that |∂S| = 0 and |S| > 0. Let
ℵn = {X1 , . . . , Xn } be iid random vectors uniformly distributed on S. Then,
i)
L
U (ℵn ) −→ U
when n → ∞,
ii)
lim inf
n→+∞
nV (ℵn ) − log(n)
= d − 1 a.s.,
log(log(n))
iii)
lim sup
n→+∞
nV (ℵn ) − log(n)
= d + 1 a.s.
log(log(n))
Janson’s result does not require any condition on the shape of the support S, while in
our extension it will be required that the inside covering number of ∂S is such that there
exists C∂S > 0 and κ < d satisfying N (∂S, ) ≤ C∂S −κ . Note that this is a very mild
hypothesis: if ∂S is smooth enough (for instance, a C1 (d − 1)−dimensional manifold), it
is fulfilled for κ = d − 1. More generally, it also holds for any set S with finite Minkowski
content of the boundary, for κ = d − 1 (see, for instance [15]). With respect to the
distribution of the sample, we require that the density is Hölder continuous on S (i.e.
there exists Kf and β ∈ (0, 1] such that for all x, y ∈ S, |f (x) − f (y)| ≤ Kf kx − ykβ )
and bounded from below on it support, by a positive constant f0 .
This is stated in our main theorem, given below.
Theorem 2. Let ℵn = {X1 , . . . , Xn } be iid random vectors distributed according to a
distribution PX whose density f with respect to Lebesgue measure is Hölder continuous
and bounded from below on its support S. Let us assume that S is compact, and there
exists κ < d and C∂S > 0 such that N (∂S, ε) ≤ C∂S ε−κ Then, we have that
L
U (ℵn ) −→ U
lim inf
n→+∞
when n → ∞,
nV (ℵn ) − log(n)
≥ d − 1 a.s.,
log(log(n))
4
(2)
(3)
lim sup
n→+∞
nV (ℵn ) − log(n)
≤ d + 1 a.s.
log(log(n))
(4)
The proof is given in Appendix A.
3
3.1
A new test for convexity
The semi-parametric case
In this section we propose, using the concept of maximal-spacing defined in Section 2,
a consistent hypothesis test based on an iid sample {X1 , . . . , Xn } uniformly distributed
on a compact set S, to decide whether S is convex or not.
The main idea is the following: if S is convex and the sample is uniformly distributed on S, then H(ℵn ) is a good approximation to S and |H(ℵn )|−1 IH(ℵn ) is a good
approximation of the uniform law. As a result,
n
o
˜ n ) = sup r : ∃x such that x + r|H(ℵn )|1/d B(0, 1) ⊂ H(ℵn ) \ ℵn
∆(ℵ
is a plug-in estimator of the maximal spacing and should converge to 0. On the other
˜ n ) is expected to converge to a positive constant (that
hand, if S is not convex, ∆(ℵ
depends on the shape of S). In order to unify notation, let us first define the maximal
inner radius.
Definition 2. Let S ⊂ Rd be a bounded set satisfying S̊ 6= ∅. We define the maximal
inner radius of S as
R(S) = sup r : ∃x ∈ S such that B(x, r) ⊂ S .
1/d
˜ n ) = R(H(ℵn )\ℵn )ω 1/d |H(ℵn )|1/d .
Remark 1. We have ∆(ℵn ) = R(S\ℵn )ωd |S|1/d and ∆(ℵ
d
˜ n ), we
When testing the convexity of the support using a test statistic based on ∆(ℵ
only obtain, in general, an upper asymptotic bound on the test level. However, if the
boundary of the support is smooth enough, we have a converging estimation of the level.
The regularity condition is the following.
Condition (P): For all x ∈ ∂S there exists a unique vector ξ = ξ(x) with kξk = 1,
such that hy, ξi ≤ hx, ξi for all y ∈ S, and
kξ(x) − ξ(y)k ≤ lkx − yk
∀ x, y ∈ ∂S,
where l is a constant. We will denote by CP the class of convex subsets that satisfy
condition (P).
The convexity test and its asymptotic behaviour is given in the following theorem.
5
Theorem 3. Let S ⊂ Rd be compact with non-empty interior. Let ℵn = {X1 , . . . , Xn }
be a set of iid random vectors uniformly distributed on S. For the following decision
problem,
(
H0 : the set S is convex
(5)
H1 : the set S is not convex,
d
the test based on the statistic Ṽn = |H(ℵn )|ωd R H(ℵn ) \ ℵn with critical region given
by
RC = Ṽn > cn,γ ,
where
cn,γ =
1
− log − log(1 − γ) + log(n) + (d − 1) log log(n) + log(αB ) ,
n
and αB is the constant defined in (1), is asymptotically of level less than or equal to γ.
Moreover, if S ∈ CP , the asymptotic level is γ. If S is not convex, the test has power
one for all sufficiently large n.
The proof of Theorem 3 is given in Appendix B.
3.2
The non-parametric case
We now assume that we have a sample ℵn = {X1 , . . . , Xn } of iid random vectors in Rd
drawn according to an unknown density f . As in the semi-parametric case, the idea is
to estimate the maximal-spacing and use this estimation as a test statistic. As before,
H(ℵn ) is proposed as an estimator of S. To ensure that the test proposed in Theorem 3
allows determining whether the support is convex or not, the density estimator should
have a non-conventional behaviour: it is expected to converge toward the unknown
density when the support is convex, but not when the support is not convex. That is
why we propose the following density estimator.
Definition 3. Let Vor(Xi ) be the Voronoi cell of the point Xi (i.e. Vor(X
R i) = x :
K is a kernel function (i.e. K ≥ 0, K = 1 and
kx
R − Xi k = miny∈ℵn kx − yk ). 1 IfP
uK(u)du = 0) and fn (x) = nhd
K((x − Xi )/hn ) denotes the usual kernel density
n
estimator, we define
fˆn (x) =
max
fn (Xi )Ix∈H(ℵn ) .
(6)
i:x∈Vor(Xi )
We propose to test the convexity using the following plug-in estimator of ∆(ℵn ):
r
δ̂ H(ℵn ) \ ℵn = sup r : ∃x such that x +
A ⊂ H(ℵn ) \ ℵn ,
fˆn (x)1/d
−1/d
with A = ωd B(0, 1), and reject H0 (the support is convex) if δ̂(H(ℵn ) \ ℵn ) is sufficiently large.
The proof of Theorem 4 makes use of Theorem 2.3 in [11]. In order to apply that
result, we will introduce some technical hypotheses on the kernel function.
6
Definition 4. Let K be the set of kernel functions K(u) = φ(p(u)), where p is a
polynomial
and φ a is bounded real function of bounded variation, such that cK =
R
kukK(u)du < ∞, K ≥ 0 and there exists rK and c0K > 0 such that K(x) ≥ c0K
for all x ∈ B(0, rK ).
Note that, for example, the Gaussian and the uniform kernel are in K.
Definition 5. A set S is standard if there exist positive numbers r0 , cS such that
|B(x, r) ∩ S| ≥ cS ωd rd for all r ≤ r0 . We write C for the class of compact convex
sets with non-empty interior, and A for the class of all compact standard sets.
Finally it is also necessary to impose some conditions on the density.
Condition (B): A density f with support S fulfils condition B if its restriction to S
is Lipschitz continuous (i.e. there exists kf such that ∀x, y ∈ S, |f (x)−f (y)| ≤ kf kx−yk)
and there exists f0 > 0 such that f (x) ≥ f0 for all x ∈ S. We denote f1 = maxx∈S f (x).
Remark 2. The condition f (x) ≥ f0 > 0 for all x ∈ S is a necessary condition to test
convexity, as indeed is mentioned in [6]: ‘...an assumption like the density being bounded
away from zero on its support is necessary for consistent decision rules.’
Theorem 4. Let K ∈ K, and let fˆn be as defined in (3). Assume that hn = O(n−β ) for
some 0 < β < 1/d. Assume also that the unknown density fulfils condition B. For the
following decision problem,
(
H0 : S ∈ C
(7)
H1 : S ∈
/ C,
a) the test based on the statistic V̂n = δ̂ H(ℵn )\ℵn
cn,γ }, where
cn,γ =
d
with critical region RC = {V̂n ≥
1
− log(− log(1 − γ)) + log(n) + (d − 1) log(log(n)) + log(αB ) ,
n
has an asymptotic level less than γ.
b) Moreover, if S ∈ A is not convex, the power is 1 for sufficiently large n.
Remark 3. Notice that the ‘optimal’ kernel sequence size, hn = h0 n1/(d+4) , satisfies the
hypothesis of our theorem, so that any bandwidth selection method should be suitable for
testing for convexity.
However, in the semi-parametric case it is possible to derive the asymptotic behaviour
for the level under regularity conditions on the support. In this more general setup, we
will not have a convergent level estimation but only a bound for the level (the price to
pay for estimating the density). The proof of Theorem 4 is given in Appendix B.
7
3.3
Simulations
We have performed two simulation studies to assess the behaviour of our test in the
scenarios described in Sections 3.1 and 3.2. For the first study, the data were drawn
uniformly from sets S ⊂ R2 , and we will perform the test defined in Section 3.1 to
obtain estimations of the power and the level. In the second study, the non-parametric
case, the data can be not uniformly drawn drawn, and we estimate the density using the
estimator given by (6). In this case, we consider the same sets and density as in [6].
3.3.1
Semi-Parametric case
The data were generated uniformly from the sets Sϕ = [0, 1]2 \ Tϕ , where Tϕ is the
isosceles triangle with height 1/2 (see Figure 1) whose angle at the vertex (1/2, 1/2)
is equal to ϕ. If we have a random sample from Sϕ , it is clear that as ϕ increases, it
should be easier to detect the non-convexity of the set. The results of the simulations
are summarized in Table 1.
ϕ = π/4
n
β̂
100
.4
130 .636
160 .835
200 .946
300 .997
ϕ = π/6
n
β̂
200 .565
250 .787
300 .926
400 .996
500
1
ϕ = π/8
n
β̂
300 .543
350 .679
400 .846
500 .976
600 .997
Table 1: Power estimated over 1000 replications, for different values of ϕ, when the
sample is uniformly distributed on [0, 1]2 \ Tϕ , where Tϕ is an isosceles triangle, (see
Figure 1).
φ
Figure 1: [0, 1]2 \ Tϕ where Tϕ is an isosceles triangle with height 1/2.
3.3.2
Non-parametric case
We performed a simulation study for the same sets used in [6]. Consider the curves
γR,θ = R(cos(θ), sin(θ)) with θ ∈ [ 3π(R−1)
, 32 π] and the reflections of those curves along
2R
the y axis (which will be denoted by ζR,θ ). We consider ΓR = T(0,R) (γR,θ ) ∪ T(0,−R) (ζR,θ )
with θ ∈ [ 3π(R−1)
, 23 π], where Tv is the translation along the vector v. It is easy to
2R
8
see that the length of every ΓR is 32 π. We will consider, for different values of R, the
S-shaped sets (see the first row in Figure 2)
SR = T(0,R)
[
γr,θ ∪ T(0,−R)
R−0.6≤r≤R+0.6
[
ζr,θ .
R−0.6≤r≤R+0.6
Observe that when R approaches infinity, the sets S converge to a rectangle (which
corresponds to the convex case). We have generated the data according to two different
densities. The first one is the same as that considered in [6]: that is, along the orthogonal
direction of ΓR , we choose a random variable with normal density (with zero mean and
standard deviation σ = 0.15) truncated to 0.6 (the truncation is performed to ensure
that we obtain a point in the set SR ). In the second case, we consider a random variable
along the orthogonal direction of ΓR but uniformly distributed on [−0.6, 0.6]. In Tables
2 and 3, we have summarized the results of the simulations, for different sample sizes
(we performed the test B = 100 times). The results are quite encouraging and slightly
better that those obtained in [6] since the non-convexity is better detected (see Fig. 7 in
[6] for comparison) with no need for the decision rule to be calibrated.
R
1
1.5
3
6
12
24
∞
N=100
np unif
.13
.44
.98
1
.38
.24
.08
.09
.01
.05
0
.07
0
.04
N=250
np unif
.55
.99
1
1
1
1
.41
.66
.02
.08
.01
.05
0
.09
N=500
np unif
1
1
1
1
1
1
1
1
.39
.68
0
.09
0
.04
N=1000
np unif
1
1
1
1
1
1
1
1
.98
1
.07
.48
.01
.05
Table 2: Power estimated over B replications, for different values of R, when the sample
is uniformly distributed along the orthogonal direction of ΓR .
R
1
1.5
3
6
12
24
∞
N=100
np unif
1
1
1
1
1
.99
.67
.41
.25
.19
.1
.30
0
.33
N=250
np unif
1
1
1
1
1
1
.99
1
.62
.98
.30
.92
.04
.92
N=500
np unif
1
1
1
1
1
1
1
1
.85
1
.38
1
.06
1
N=1000
np unif
1
1
1
1
1
1
1
1
.94
1
.48
1
.04
1
Table 3: Power estimated over B replications, for different values of R, when the sample
is drawn according to a truncated normal distribution along the orthogonal direction of
ΓR .
9
Figure 2: SR for different values of R together with the sample drawn with a uniform
radial noise (top) and with a truncated Gaussian noise (bottom)
.
4
Appendix A
In Appendix A we proof the main result on the generalization of the maximal-spacing,
given in Theorem 2. First we settle some preliminary lemmas, then we prove a weaker
version of Theorem 2, for the case of piecewise constant densities on disjoint sets. We
continue by considering piecewise constant densities, and finally we prove the result for
Hölder continuous densities.
4.1
Preliminary Lemmas
As we mentioned before, the proof of Corollary 1 follows from a simple rescaling in the
lemmas stated in [12], used to prove Theorem 1. In particular the following rescaled
lemma will be used in this section.
Lemma 1. Let S ⊂ Rd be a bounded set, |S| > 0, |∂S| = 0, and ℵn = {X1 , . . . , Xn }
iid random vectors with uniform distribution on S. Then, there exists aS− = aS− (w, n),
aS+ = aS+ (w, n) such that aS− → αA and aS+ → αA as w → ∞ and w/n → 0, such that if
n d−1 −w
γ = |S|
w e ,
exp(−γaS+ |S|) ≤ P nV (ℵn ) < w ≤ exp(−γaS− |S|).
(8)
The functions aS+ and aS− only depend on the “shape” of S (i.e. are invariant by similarity
transformations). Without loss of generality they can be chosen such that, for all w0 ≥ w
and n0 ≥ n: aS+ (w0 , n0 ) ≤ a+ (w, n) and aS( − w0 , n0 ) ≥ a− (w, n).
Next we settle two lemmas whose proofs are quite similar. The first one (Lemma
2) gives a first rough upper bound for the maximal spacing. The second one (Lemma
3) bounds a sort “constrained” maximal-spacing (the centre x of the largest set x + rA
missing the observation is constrained to be in a “small” subset of the support).
Recall first (see [1]) that, since A is convex, there exist ε0 > 0 such that,
for all ε ≤ ε0 , A−ε 6= ∅ and |A−ε | = |A| − ε|∂A|d−1 + o(ε).
10
(9)
It can also be proved easily that,
for all r > 0, and for all x ∈ B(0, ε0 /r), x + (rA)−kxk ⊂ rA.
(10)
Lemma 2. Let ℵn = {X1 , . . . , Xn } be iid random vectors in Rd , with common density
f . Assume that f has bounded support S and there exist 0 < f0 < f1 < ∞ such that
f0 ≤ f (x) ≤ f1 for all x ∈ S. Then, for all rf > 0 such that rfd > 2f1 /f0 we have,
∆(ℵn ) ≤ rf
log(n)
n
1/d
eventually almost surely.
Proof. First observe that S can be covered with N (S, n−1/d ) ≤ CS n balls of radius n−1/d
1/d
with rfd > 2f1 /f0 .
centered at some points {x1 , . . . , xνn } ⊂ S. Denote wn = rf log(n)
n
First observe that ∆(ℵn ) ≥ wn ⇔ ∃x ∈ S, such that x + wn f (x)−1/d A ⊂ S \ ℵn , then
−1/d
∆(ℵn ) ≥ wn ⇒ ∃x ∈ S, such that x+wn f1
A ⊂ S \ℵn . Applying (10), for sufficiently
large n (that is possible because n−1/d wn ) we get
−1/d
∆(ℵn ) ≥ wn ⇒ ∃xi such that xi + (wn f1
1/d
A)−1/n
⊂ S \ ℵn .
(11)
Next notice that,
−1/d
P xi + wn f1
A
−1/n1/d
⊂ S \ ℵn =
≤
≤
1 − PX
−1/d −1/n1/d
wn f1
A
1 − f0
1−
−1/d −1/n1/d
xi + wn f1
A
f
0
f1
wnd
−
f0
d−1
!n
!n
wnd−1 n−1/d (1
+ o(1))
f1 d
The last inequality is obtained using (9). Since wn n−1/d , we finally get
n
f0 d
−1/d −1/n1/d
P xi + wn f1
A
⊂ S \ ℵn ≤ 1 − wn (1 + o(1)) .
f1
From this inequality and (11) it follows that,
n
1/d
f0
P ∆(ℵn ) ≥ rf log(n)n/n
≤ N (S, n−1/d ) 1 − wnd (1 + o(1))
f1
f
0
≤ CS n exp − nwnd (1 + o(1)) ,
f1
and therefore,
d
P ∆(ℵn ) ≥ rf (log(n)/n)1/d ≤ CS n1−rf f0 /f1 +o(1) .
11
!n
.
P
Finally, since rfd > 2f1 /f0 we have
P ∆(ℵn ) ≥ rf (log(n)/n)1/d < ∞. Thus, the
Borel-Cantelli Lemma entails that ∆(ℵn ) ≤ rf (log(n)/n)1/d eventually almost surely.
Lemma 3. Let ℵn = {X1 , . . . , Xn } be iid random vectors in Rd with common distribution
distribution PX supported on a compact set S and density f continuous on S. Assume
that there exists f0 > 0 such that f (x) ≥ f0 ∀x ∈ S. Let Gn be a sequence of sets included
in S, with the following property: there exist C such that N (Gn , n−1/d ) ≤ Cn1−a (log(n))b
for some a > 0 and b > 0. Let A be a compact and convex set with |A| = 1 such that its
barycenter is the origin of Rd . Let us denote
n
o
r
∆(ℵn , Gn ) = sup r : ∃x ∈ Gn such that x +
A
⊂
S
\
ℵ
,
n
f (x)1/d
V (ℵn , Gn ) = ∆d (ℵn , Gn ),
U (ℵn , Gn ) = nV (ℵn , Gn ) − log(n) − (d − 1) log(log(n)) − log(αA ).
Then, P U (ℵn , Gn ) ≥ − log(log(n)) → 0.
Proof. Let us first cover Gn with νn = N (Gn , n−1/d ) balls of radius n−1/d , centred at
A ) 1/d
)
some points {x1 , . . . , xνn } belonging to S, and choose wn = ( log(n)+(d−2) log(log(n))+log(α
n
(observe that wn (1/n)1/d ). As in the proof of Lemma 2 we have,
∆(ℵn ) ≥ wn ⇔ ∃x ∈ Gn , such that x + wn f (x)−1/d A ⊂ S \ ℵn .
which implies,
∆(ℵn ) ≥ wn ⇒ ∃xi ∃x ∈ B(xi , n−1/d ), such that xi + (wn f (x)−1/d A)−1/n
1/d
⊂ S \ ℵn .
Therefore,
P xi + (wn f (x)−1/d A)
−1/n1/d
⊂ S \ ℵn =
1 − PX
−(1/n1/d )
xi + wn f (x)−1/d A
!n
.
With rough bounds on the density,
−1/n1/d mint∈S∩(xi +wn f1−1/d A) f (t) d
PX xi + wn f (x)−1/d A
≥
w (1 + o(1)).
maxt∈S∩B(xi ,n−1/d ) f (t) n
Since f is uniformly continuous on S, A is bounded, wn → 0 and n−1/d → 0, for all
c < 1 there exist nc such that for all n ≥ nc
n
1/d
P xi + (wn f (x)−1/d A)−1/n ⊂ S \ ℵn ≤ 1 − cwnd (1 + o(1)) ,
then,
P ∆(ℵn , Gn ) ≥ wn ≤ Cn1−a (log n)b (1 − cwnd (1 + o(1)))n .
12
Taking c = 1 − a/2, we finally get that
−1+a/2 −a/2
P U (ℵn , Gn ) ≥ − log(log(n)) ≤ CαA
n
(log(n))b−(1−a/2)(d−2) (1 + o(1)) → 0.
The next lemma relates the behaviour of the maximal-spacing for two different densities having the same support.
Lemma 4. Let us consider f and h, two densities with compact support S such that,
h(x) > h0 for all x ∈ S and maxx∈S |f (x) − h(x)| ≤ εh0 for a given ε ∈ (0, 1/2). Denote
by n0 = bn(1 − 2ε)c and n1 = dn(1 + 2ε)e the floor and ceiling of n(1 − 2ε) and n(1 + 2ε)
−1 )
respectively. For any w ∈ R, let us define wn,0 = w(1−2ε−n
and wn,1 = w(1−ε)
1+2ε . Then,
(1+ε)
P n0 V (Yn0 ) ≤ wn,0
and
1−
1 − ε
≤ P nV (ℵn ) ≤ w ,
nε
1 + 2ε + n−1 −1
,
P nV (ℵn ) ≤ w ≤ P n1 V (Yn1 ) ≤ wn,1 1 −
(nε + 1)(1 + ε)
(12)
(13)
where Yn0 = {Y1 , . . . , Yn0 } and Yn1 = {Y1 , . . . , Yn1 } are iid random vectors on Rd , with
density h, and ℵn = {X1 , . . . , Xn } are iid random vectors with density f .
Proof. We first prove (12). Observe that X can be generated from the following mixture:
with probability p = 1 − ε, X is drawn with density h, and, with probability 1 − p, X
IS (x). Let us denote
is drawn with the law given by the density g(x) = f (x)−h(x)(1−ε)
ε
∗
by N0 the number of points drawn according to h on S and ℵN0 = {Y1 , . . . , YN0 } the
associated sample. Let us recall that
o
n
r
A
⊂
S
\
ℵ
.
∆(ℵn ) = sup r : ∃x such that x +
n
f (x)1/d
Observe that
sup h0 |f (x)/h(x) − 1| ≤ sup h(x)|f (x)/h(x) − 1| ≤ h0 ε,
x
x
so f (x)/h(x) ≤ 1 + ε. Then we have,
n
∆(ℵn ) ≤ (1 + ε)1/d sup r : ∃x such that x +
From the inclusion ℵ∗N0 ⊂ ℵn we get
n
1/d
∆(ℵn ) ≤ (1 + ε) sup r : ∃x such that x +
o
r
A
⊂
S
\
ℵ
.
n
h(x)1/d
o
r
∗
A ⊂ S \ ℵ N0 ,
h(x)1/d
and therefore ∆(ℵn ) ≤ (1 + ε)1/d ∆(ℵ∗N0 ), which entails that V (ℵn ) ≤ (1 + ε)V (ℵ∗N0 ).
Then, for all w > 0,
P nV (ℵn ) ≤ w ≥ P (1 + ε)nV (ℵ∗N0 ) ≤ w ,
13
and
P nV (ℵn ) ≤ w ≥ P (1 + ε)nV (ℵ∗N0 ) ≤ w ∩ N0 ≥ n0 .
For N0 ≥ n0 , let us denote by Yn0 = {Y1 , . . . , Yn0 } the n0 first values of ℵ∗N0 . Clearly
we have V (ℵ∗N0 ) ≤ V (Yn0 ) so,
PN0 ≥n0 (1 + ε)nV (ℵ∗N0 ) ≤ w ≥ P (1 + ε)nV (Yn0 ) ≤ w ,
where PN0 ≥n0 denotes the conditional probability given N0 ≥ n0 . Therefore,
wn0
P nV (ℵn ) ≤ w ≥ P n0 V (Yn0 ) ≤
P(N0 ≥ n0 ).
(1 + ε)n
On the other hand, since N0 ∼ Bin(n, 1 − ε), we obtain,
P(N0 < n0 ) = P N0 − (1 − ε)n < n0 − (1 − ε)n ≤ P(N0 − (1 − ε)n ≤ −εn).
Since n0 = bn(1 − 2ε)c, n0 − n(1 − ε) ≤ −εn, which together with Chebyshev’s inequality
entails that
nε(1 − ε)
(1 − ε)
=
,
P(N0 < n0 ) ≤
2
2
n ε
nε
and then,
(1 − ε)
P(N0 ≥ n0 ) ≥ 1 −
.
nε
−1 )
Let us denote by wn,0 = w(1−2ε−n
(1+ε)
from where it follows that
. Since n(1 − 2ε) − 1 ≤ n0 we have wn,0 ≤
P nV (ℵn ) ≤ w ≥ P n0 V (Yn0 ) ≤ wn,0
1−ε
1−
nε
wn0
(1+ε)n ,
.
Equation (13) is proved in the same way. We just provide a sketch of the proof. The
key point for the proof of (12) was to think the law of a random variable Y drawn with the
1
density h as the following mixture: with probability p = 1+ε
, Y as a random variable with
(x)
density f , and, with probability 1−p, Y is drawn with density g(x) = h(x)(1+ε)−f
IS (x).
ε
Next, we consider a sample Yn1 = {Y1 , . . . , Yn1 } of iid copies of Y , (that follows a law
given by h). Denote by N the number of the points that drawn according to the density
f and Y∗N = {X1 , . . . XN } these points. The rest of the proof follows using the same
argument to prove (12).
4.2
Uniform mixture on disjoint supports
Proposition 1. Let E1 , . . . , Ek be subsets of Rd such that for all i 6= j ⇒ Ei ∩ Ej = ∅,
and 0 < |Ei | < ∞ for all i. Let ℵn = {X1 , . . . , Xn } be iid random vectors in S = ∪i Ei
with density
k
X
f (x) =
pi IEi (x),
i=1
14
where p1 , . . . , pk are positive real numbers. Then,
L
U (ℵn ) −→ U
when n → ∞.
Proof. First let us introduce some notation, for i = 1, . . . , k
• Ni = #{ℵn ∩ Ei } denotes the number of data points in Ei . Notice that Ni ∼
Bin(n, pi |Ei |).
• ℵiNi = {Xi1 , . . . , XiNi } denotes the subsample of ℵn that falls in Ei . Observe that
they all are uniformly distributed.
P
• ai = P
pi |Ei | for i = 1, . . . , k, that fulfils
ai = 1, a0 = mini ai , A0 = maxi ai and
1−ai
C=
.
ai
• εn,i =
Ni −ai n
nai .
Since the support of f is ∪i Ei , and by assumption i 6= j ⇒ Ei ∩ Ej = ∅, we have
n
o
r
∆(ℵn ) = sup r : ∃x∃i such that x + 1/d A ⊂ Ei \ ℵn ,
pi
so
while
o
n
r|Ei |1/d
i
∆(ℵn ) = max sup r : ∃x such that x +
A
⊂
E
\
ℵ
i
Ni ,
i
(|Ei |pi )1/d
(14)
o
n
∆(ℵiNi ) = sup r0 : ∃x ∈ Ei such that x + r0 |Ei |1/d A ⊂ Ei \ ℵiNi .
(15)
From (14) and (15) we derive that
n
o
n
o
∆(ℵn ) = max (|Ei |pi )1/d ∆(ℵiNi ) and V (ℵn ) = max |Ei |pi V (ℵiNi ) ,
i
i
which entails that
Y
wNi
i
P(nV (ℵn ) ≤ w) =
P Ni V (ℵNi ) ≤
.
ai n
i
−
→
Let P n (A) = P(A|N1 = n1 , . . . , Nk = nk ) stand for the conditional probability given the
number of points that fall in each Ei . We have that,
P
−
→
n
nV (ℵn ) ≤ w =
k
Y
P
−
→
n
n|Ei |pi V
(ℵini )
≤w =
i=1
Now, taking wn,i =
exp
−
k
X
wni
n|Ei |pi ,
k
Y
P
ni V (ℵini ) ≤
−
→
n
i=1
γn,i =
i
γn,i aE
+ |Ei |
d−1 −wn,i
ni wn,i
e
|Ei |
−
→
n
≤P
and applying Lemma 1 we obtain,
nV (ℵn ) ≤ w ≤ exp
i=1
−
k
X
i=1
15
wni
n|Ei |pi
.
i
γn,i aE
− |Ei |
.
On the other hand,
k
X
i
γn,i aE
+ |Ei |
=
i=1
=
k
X
i=1
k
X
ni
wni
n|Ei |pi
d−1
wni
i
aE
exp −
+
n|Ei |pi
i
ni wd−1 (1 + εi )d−1 exp(−w(1 + εi ))aE
+ (wn,i , ni ).
i=1
E
Let εn = maxi |εn,i | and εa+ = maxi
k
X
|a+i (wn,i ,ni )−αA |
,
αA
then we have
d−1
i
γn,i aE
exp(−w)αA (1 + εn )d−1 exp(wε)(1 + εa+ ).
+ |Ei | ≤ nw
i=1
Taking w = wn = x + log(n) + (d − 1) log(log(n)) + log(αA ), we obtain that nV ≤ w ⇔
U ≤ x, which implies that
−
→
P n U (ℵn ) ≤ x ≥ exp −e−x (1 + ε)d−1 exp(log(n)εn )(1 + εa+ )(1 + on (1)) .
In the same way it can be proved,
−
→
P n U (ℵn ) ≤ x ≤ exp −e−x (1 − εn )d−1 exp(− log(n)εn )(1 − εa− )(1 + on (1)) .
Ei
(wn,i ,ni )−α
|a
|
A
where we denoted εa− = maxi −
.
αA
2
Suppose that εn = max |εn,i | ≤ 1/ log(n) , then, if n ≥ 5, a0 n/2 ≤ Ni ≤ n for all
i, which imply that for all i,wn,i ≥ log(n)a0 /(2A0 ) → ∞ and wn,i /n ≤ x + log(n) +
(d − 1) log(log(n)) + log(αA ) /(na0 ) → 0. Then εa− and εa+ converges to 0, according
to Lemma 4. Therefore
2
Pεn ≤1/ log(n) U (ℵn ) ≤ x → exp(− exp(−x)) when n → ∞.
(16)
Since
P max |εn,i | ≥
i
k
[
1 X
1
1
=
P
|ε
|
≥
≤
P
|ε
|
≥
,
n,i
n,i
log(n)2
log(n)2
log(n)2
i
i=1
from Chebyshev’s inequality we obtain
1
P |εn,i | ≥
≤ log(n)4 V(ε2n,i )
log(n)2
and therefore
where
V(ε2n,i ) =
4
log(n)
1
≤C
.
P εn ≥
log(n)2
n
Finally, from equations (16) and (17) we get,
P U (ℵn ) ≤ x → exp(− exp(−x)) when n → ∞,
which concludes the proof.
16
1 − ai
,
nai
(17)
4.3
Uniform mixture
Proposition 2. Let E1 , . . . , Ek be subsets of Rd such that,
1) i 6= j ⇒ |Ei ∩ Ej | = 0.
2) 0 < |Ei | < ∞ for i = 1, . . . , k.
3) There exists a constant C > 0 such that, for all i, for all ε > 0, N (∂Ei , ε) ≤
Cε−d+1 .
Let ℵn = {X1 , . . . , Xn } be iid random vectors with density
f (x) =
k
X
pi IE̊i ,
i=1
where p1 , . . . , pk are positive real numbers. If there exist constants r0 > 0 and c > 1−1/d
such that, for all r ≤ r0 and all x ∈ ∪i E̊i ,
mint∈S∩B(x,r) f (t)
≥ c,
maxt∈S∩B(x,r) f (t)
then,
L
U (ℵn ) −→ U
when
n → ∞.
Proof. We start by introducing some definitions and notation.
n
o
r
˚ n ) = sup r : ∃x∃i, such that x +
∆(ℵ
A
⊂
E̊
\
ℵ
,
i
n
f (x)1/d
˚ d (ℵn ),
V̊ (ℵn ) = ∆
Ů (ℵn ) = nV̊ (ℵn ) − log(n) − (d − 1) log log(n) − log(αA )
With the same ideas used to prove Proposition 1 (and the fact that |Ei | = |E̊i |) it
L
follows that Ů ℵn −→ U . Let Fn (x) = P Ů ℵn ≤ x . Clearly U (ℵn ) ≥ Ů ℵn , and
therefore
P U (ℵn ) ≤ x ≤ Fn (x) → exp(− exp(−x)).
(18)
In order to prove the other inequality let us define
S
• G = i,j Ei ∩ Ej .
• p0 = mini pi .
• ρA = maxx∈A kxk.
1/d
1/d
1/d
with rf such that ∆(ℵn ) ≤ rf log(n)/n
even• ρn = (rf ρA /p0 ) log(n)/n
tually almost surely (whose existence follows from Lemma 2). Notice that condition
3) ensures that N (Gρn , n−1/d ) = O(n1−1/d (log(n))1/d ).
17
o
r
A
⊂
S
\
ℵ
.
n
f (x)1/d
o
r
such that x +
A
⊂
S
\
ℵ
n .
f (x)1/d
n
• ∆ ℵn , S \ Gρn = sup r : ∃x ∈ S \ Gρn such that x +
n
• ∆ ℵn , Gρn = sup r : ∃x ∈ Gρn
Clearly we have that,
n
o
∆(ℵn ) = max ∆ ℵn , S \ Gρn , ∆ ℵn , Gρn .
(19)
For the chosen ρn , we are going to prove that,
˚ ℵn ≥ ∆ ℵn , S \ Gρn eventually almost surely.
∆
(20)
Let us suppose first that ∆ ℵn , S \ Gρn ≤ rf (log(n)/n)1/d (which holds, e.a.s., due to
Lemma 2) then
∆ ℵn , S \ Gρn − ε
ρn
for all ε > 0 there exists xε ∈ S \ G such that xε +
A ⊂ S \ ℵn ,
f (xε )1/d
and
∆ ℵ n , S \ Gρ n − ε
rf (log(n)/n)1/d − ε
xε +
A ⊂ B xε , ρA
⊂ B(xε , ρn ).
1/d
f (xε )1/d
p
0
S
∆ ℵn ,S\Gρn −ε
A ⊂ i E̊i \ ℵn . Then, for all ε > 0,
From d(xε , G) ≥ ρn we get xε +
f (xε )1/d
˚ ℵn ≥ ∆ ℵn , S \ Gρn − ε e.a.s., which concludes the proof of (20).
∆
By (19), introducing U (ℵn , Gρn ) = n∆(ℵn , Gρn )d − log(n) − (d − 1) log(log(n)) −
log(αA ), one can bound P(U (ℵn ) ≥ x) for all x, as follows:
P(U (ℵn ) ≥ x) ≤ P(Ů (ℵn ) ≥ x) + P(U (ℵn , Gρn ) ≥ − log(log(n))) if x ≥ − log(log(n))
P(U (ℵn ) ≥ x) ≤ 1
if x ≤ − log(log(n))
By Lemma 3 we obtain:
P(U (ℵn ) ≤ x) ≥ Fn (x) + o(1)
P(U (ℵn ) ≤ x) ≥ 0
if x ≥ − log(log(n))
(21)
if x ≤ − log(log(n)).
Finally using (18), (21) and that P U ≤ − log(log(n)) = 1/n → 0 we conclude the
proof.
Proof of Theorem 2
1
Let cn = (log(n)/n) 3d . Take a “mesh” of Rd with small squares of side cn ,
d h
i
Y
ki cn , (ki + 1)cn
i=1
18
with ki ∈ N,
and denote by mn ≤ |S|c−d
n the number of these squares {C1 , . . . , Cmn } that are included
in S.
Like in the proof of Proposition 2 let us denote,
n
o
r
˚ n ) = sup r : ∃x∃i, such that x +
∆(ℵ
A
⊂
C̊
\
ℵ
,
i
n
f (x)1/d
˚ d (ℵn ),
V̊ (ℵn ) = ∆
Ů (ℵn ) = nV̊ (ℵn ) − log(n) − (d − 1) log log(n) − log(αA ).
S n
From the inclusion m
C̊
⊂
S
it
follows
that,
P
U
(ℵ
)
≤
x
≤
P
Ů
(ℵ
)
≤
x
.
i
n
n
i=1
Like in the proof of Proposition 1 let us denote, for i = 1, . . . , mn .
• Ni = #{ℵn ∩ Ci },
R
P 1−ai
• ai = Ci f (t)dt; a0 = mini ai ; A0 = maxi ai and C =
ai . Observe that
P
d
ai = 1 and a0 ≥ f0 cn .
• ℵiNi = {Xi1 , . . . , XiNi }, the subsample of ℵn that belongs to Ci . Observe that Xij
for j = 1, . . . , Ni has density fi (x) = f (x)/ai ICi (x).
• εn,i =
Ni −ai n
nai ,
εn = maxi εi .
We start with some asymptotic properties about εn,i and εn . If we bound |ai | ≥ f0 |cn |d
and apply Hoeffding’s inequality we get
P(log(n)|εn,i | ≥ t) ≤ 2 exp − 2t2 f02 (log(n))−4/3 n1/3 ,
and,
P(log(n)|εn | ≥ t) ≤
2|S|n1/3
2 2
−4/3 1/3
.
exp
−
2t
f
(log(n))
n
0
(log n)1/3
a.s.
Borel-Cantelli Lemma entails (log(n))εn → 0. Then, with probability 1, for n large
enough,
f0
(log n)1/3 n2/3 ≤ Ni ≤ 2f1 (log n)1/3 n2/3 for i = 1, . . . , mn .
2
In what follows n is large enough so that (22) is fulfilled.
Proceeding exactly as in the proof of Proposition 1 we can derive that
n
˚ n ) = max sup r : ∃x such that x +
∆(ℵ
i
1/d
o
rai
i
A
⊂
C̊
\
ℵ
i
Ni ,
(ai fi (x))1/d
and therefore
n
o
˚ n ) = max a1/d ∆(ℵiN )
∆(ℵ
i
i
i
and
19
n
o
V (ℵn ) = max ai V (ℵiNi ) .
i
(22)
First we to bound P(Un (ℵn ) ≥ x) from above. As in Proposition 1
mn
Y
wn Ni
d
P nV (ℵn ) ≤ wn ≤ P nV̊ (ℵn ) ≤ wn =
P Ni ∆ (ℵNi ) ≤
.
ai n
(23)
i=1
At any of the small squares Ci , by Hölder continuity, the density is close to the
uniform density, that will allow us to apply Lemma 4 with h = 1/|Ci |ICi . More precisely:
for all i an for all y ∈ Ci ,
Z
Z
Z
1
f (y)
1
|Ci | − 1 =
fi (y)|Ci | − 1 =
f (y)dt −
f (t)dt ≤ Kf
|y − t|β dt.
ai
ai Ci
ai
Ci
Ci
√
Since |y − t| ≤ dcn , if we denote Af = Kf f0−1 dβ/2 we derive that
fi (y)|Ci | − 1 ≤
√ β
1
Kf d cd+β
≤ Af cβn
n
ai
Let Ni0 = dNi (1 + 2Af cβn )e, wn0 = wn
1−Af cβ
n
1+2Af cβ
n
∀y ∈ Ci .
and YNi0 a sample of Ni0 variables
uniformly drawn on Ci , then Lemma 4 implies that
!−1
1 + 2Af cβn + Ni−1
wn0 Ni0
wn Ni
0 d
d
≤ P Ni ∆ (YNi0 ) ≤
1−
.
P Ni ∆ (ℵNi ) ≤
ai n
ai n
(Ni Af cβn )(1 + Af cβn )
(24)
On the other hand, by (22), with probability one, for n large enough we have that,
!−1
−1
1 + 2Af cβn + Ni−1
2
1
≤ 1−
1−
for all i,
f0 Af (log n)1/3+β/3d n2/3−β/3d (1 + o(1))
(Ni Af cβn )(1 + Af cβn )
(25)
and,
!−1
−1
1 + 2Af cβn + Ni−1
1
1 + o(1)
1−
≥ 1−
for all i.
2f1 Af (log n)1/3+β/3d n2/3−β/3d
(Ni Af cβn )(1 + Af cβn )
(26)
Let us prove that,
!−1
mn
Y
1 + 2Af cβn + Ni−1
a.s.
1−
→ 1.
(27)
β
β
(Ni Af cn )(1 + Af cn )
i=1
Since the right hand side of (25) can be express, for n large enough, as
exp − C(log n)1/3+β/3d n2/3−β/3d (1 + o(1)) ,
being C a positive constant, we get, for n large enough,
!−1
mn
Y
1 + 2Af cβn + Ni−1
1/3+β/3d 2/3−β/3d
1−
≤
exp
−Cm
(log
n)
n
(1+o(1))
→ 1,
n
(Ni Af cβn )(1 + Af cβn )
i=1
20
where the limit follows from mn (log n)−1/3−β/3d n−2/3+β/3d ≤ (log n)−β/3d n−1/3+β/3d →
0. (27) is obtained doing the same with (26).
0 N0
Q n 0 d
wn
i
0
P
N
∆
(Y
)
≤
Now let us study the asymptotic behaviour of m
. If we
Ni
i
i=1
ai n
apply Lemma 1, (observe that the functions aS− only depends on the shape of S) we get,
0 0 d−1
0 0
!
mn
mn
0 N0
0ω0
X
Y
d
N
ω
N
ω
w
N
[0,1]
i n
i n
Ni0
P Ni0 ∆d (YNi0 ) ≤ n i ≤ exp −
exp − i n a−
, Ni0
.
ai n
ai n
ai n
ai n
i=1
i=1
Let, ε0n,i =
0
Ni wn
ai n wn
0
n
− 1 for i = 1, . . . , mn . Observe that ε0n,i = (1 + εn,i ) w
wn − 1 =
a.s.
1−Af cβ
n
− 1 and ε0n = maxi |ε0n,i |. Since (log(n))εn → 0, ε0n fulfils
1+2Af cβ
n
all i, Ni0 ≥ Ni . The previous equation together with (23) entails,
(1 + εn,i )
and for
a.s.
log(n)ε0n → 0
[0,1]d
P(nV (ℵn ) ≤ wn ) ≤ exp −nωnd−1 (1−ε0n )d−1 exp −wn (1+ε0n ) a− (min wn (1−ε0n ), min Ni ) .
i
(28)
If we choose wn = x + n log(n) + (d − 1) log(log(n)) + log(αA ) in (28) we get,
P(U (ℵn ) ≤ x) ≤ exp(− exp(−x)) + o(1).
(29)
In order to conclude the proof of (2) we have to bound P(U (ℵn ) ≤ x) from below. We
provide just a sketch of the proof since the arguments are similar to those in Proposition
2, using Lemma 4 as in the proof of (29). Let us denote,
• ρn =
rf ρA log(n) 1/d
1/d
n
f
with ρA = maxx∈A kxk.
0
n
• Gn = ∪m
i6=j (Ci ∩ Cj ).
cn
n
• Hn = S \ (∪m
i Ci ), notice that Hn ⊂ ∂S .
Proceeding as in Proposition 2 we have
n
o
U (ℵn ) ≤ max Ů ℵn , U (ℵn , Gρnn ), U (ℵn , Hn ) eventually almost surely,
and
P(Ů (ℵn ) ≤ x) ≥ exp(− exp(−x)) + o(1).
Then, reasoning as in Proposition 2 we get
P(U (ℵn ) ≤ x) ≥ exp(− exp(−x)) + o(1).
Finally, in order to conclude the proof of (2) it suffices to prove that Gρnn and Hn satisfies
the hypothesis of Lemma 3.
Gn is the union of less than mn 2d (d − 1)−dimensional cubes of size cn . Each
−d+1 balls of radius ρ (with a a positive
of them can be cover by less than a1 cd−1
n
1
n ρn
21
S
constant), centered at some points {xij }i,j . Since Gρnn ⊂ i,j B(xij , 2ρn ), and every
B(xij , 2ρn ) can be covered by a2 ρdn n balls of radius n−1/d , Gρnn can be covered by less
2
2
−d+1 a ρd n = O(n1− 3d (log n) 3d ) balls of radius n−1/d .
than νn = mn a1 cd−1
2 n
n ρn
d+κ
d+κ
In the same way it can be proved that Hn can be covered with O(n1− 3d log(n) 3d )
balls of radius n−1/d . Indeed, cover ∂S with O(c−κ
n ) balls of radius cn , apply triangular inequality
to
obtain
that
the
union
of
the
balls
with the same centre but with a
√
√
dc
n
radius cn d covers ∂S
and then covers Hn , finally cover every of these balls by
O((cn n1/d )−d ) balls of radius n−1/d .
Proof of (3)
In Equation (28), let
√ us choose wn = log(n) + c log(log(n)) with c < (d − 1) and introduce Nk = dexp( k)e, like in equation (3.12) in [12], we obtain for k large enough,
d−1−c
P(Nk V (ℵNk ) ≤ wNk ) ≤ exp(−αA k 2 /2) and, Borel-Cantelli Lemma implies that,
with probability one, for k large enough Nk V (ℵNk ) ≤ wNk . The rest of the proof is
exactly the same as in Lemma 5 in [12]
Proof of (4):
For any u > 0 let us introduce the sequence rn =
1
log(n) log(log(n)) .
log n+(d+1+u) log(log(n))
n
1/d
and εn =
−d
Cover S with νn ≤ CS ε−d
n rn balls of radius εn rn . Reasoning like in [12]
we get
n
nV (ℵn ) − log(n)
P
≥ d + 1 + u = P(∆n ≥ rn ) ≤ νn 1 − rnd (1 − εn )d (1 − 2Kf rn diam(A)) ,
log(log(n))
that implies,
nV (ℵn ) − log(n)
(log(log(n)))d
P
≥ d + 1 + u ≤ CS
.
log(log(n))
(log(n))2+u+o(1)
√
From Borel-Cantelli Lemma, taking Nk = dexp( k)e it follows that, with probability
Nk V (ℵNk )−log(Nk )
≤ d + 1 + u. Now
log(log(Nk ))
Nk V (ℵNk )−log(Nk )
≤ d + 1 + u. We have,
log(log(Nk ))
one, for k large enough
and suppose that
Nk+1
nV (ℵn ) ≤
V (ℵNk ) ≤ exp
Nk
take n ≥ Nk and n ≤ Nk+1
1
√ (1 + o(1)) (log(Nk ) + (d + 1 + u) log(log(Nk ))),
2 k
that entails,
nV (ℵn ) ≤ 1 +
1
(1 + o(1))) (log(n) + (d + 1 + u) log(log(n))).
2 log(n)
Finally for n large enough,
nV (ℵn ) ≤ log(n) + (d + 1 + u) log(log(n)) + 1,
22
so that
nV (ℵn ) − log(n)
1
≤d+1+u+
,
log(log(n))
log(log n)
which concludes the proof of (4).
5
Appendix B
5.1
Proof of Theorem 3
The proof make use of the following two propositions. The first one gives conditions
under which the maximal-spacing of two compact sets are close. The second one shows
that if the set S is not convex, then R(H(S) \ S) > 0.
Proposition 3. Let A and B be bounded and non-empty subsets of Rd . If dH (A, B) ≤ ε
and dH (∂A, ∂B) ≤ ε. Then R(A) − R(B) ≤ 2ε.
Proof. First we introduce A0 = x ∈ A : d(x, ∂A) > 2ε and prove that A0 ⊂ B by
contradiction. Suppose that there exists x ∈ A such that d(x, ∂A) > 2ε and x ∈
/ B.
Since dH (A, B) ≤ ε we have A ⊂ B ε , then x ∈ B ε \ B, so d(x, ∂B) ≤ ε. Now, as
dH (∂A, ∂B) ≤ ε, by the triangular inequality, d(x, ∂A) ≤ 2ε. which is a contradiction.
From A0 ⊂ B it follows that R(A0 ) ≤ R(B). Now for all r < R(A) there exist x ∈ A
such that B(x, r) ⊂ A so that B(x, r − 2ε) ⊂ A0 which entails R(A0 ) ≥ R(A) − 2ε and,
finally, R(B) ≥ R(A) − 2ε. Proceeding in the same way, we get R(A) ≥ R(B) − 2ε that
conclude the proof.
d
Proposition
4. Let S ⊂ R be a non–convex, closed set with non-empty interior. Then,
R H(S) \ S > 0.
Proof. Since S is closed and non-convex there exists x ∈ H(S) \ S with d(x, S) = r > 0.
˚ so that, for all ε > 0, there exists
By Corollary 7.1 in [10] we know that H(S) = H(S)
xε and νε > 0 such that |xε − x| ≤ ε and B(xε , νε ) ⊂ H(S). Taking ε = r/2 and
ρ = min(νr/2 , r/2) > 0 we conclude that R H(S) \ S ≥ ρ > 0.
Now we can prove Theorem 3.
First observe that, if S is convex R H(ℵn ) \ ℵn ≤ R S \ ℵn and |H(ℵn )| ≤ |S| so
that V˜n ≤ V (ℵn ). Then, from Corollary 1 we obtain that P V˜n > cn,γ ≤ γ + o(1), and
the test is asymptotically of level smaller than γ.
Now we prove that if S ∈ CP , then PH0 Ṽn > cn,γ → γ. Recall that, if S ⊂ Rd is
convex and ℵn = {X1 , . . . , Xn } is an iid random sample, uniformly drawn on S ∈ CP , in
23
[9] it is proved that, almost surely:
2/(d+1)
2/(d+1)
dH H(ℵn ), S = O log(n)/n
and dH ∂H(ℵn ), ∂S = O log(n)/n
.
(30)
2/(d+1)
Thus, by Proposition 3, we have that R H(ℵn )\ℵn −R S\ℵn = O log(n)/n
almost surely. Therefore
1/d
2/(d+1)
˜ n (ℵn ) = |H(ℵn )|
∆
∆(ℵ
)
+
O
log(n)/n
a.s.
n
|S|1/d
2/(d+1)
The second equation in (30) also provides that |H(ℵn )| = |S|+O log(n)/n
almost surely. Finally we have,
2/(d+1)
1+O
Ṽn = V (ℵn ) 1 + O log(n)/n
2/(d+1) !!d
log(n)/n
a.s.
∆(ℵn )
Observe that from Corollary 1 ii) and iii) ∆(ℵn ) = (log(n)/n)1/d (1 + o(1)) almost
surely, then
1+ d−1
d(d+1)
a.s.
Ṽn = V (ℵn ) + O log(n)/n
This, together with cn,γ = O(log(n)/n) entails that P(Ṽn ≥ cn,γ ) → γ, as desired.
To conclude the proof of the theorem consider now that S is not convex. First we
prove that, if εn = dH (S, ℵn ), then dH (H(S), H(ℵn )) ≤ 2εn . Indeed, for all x ∈ H(S)
there exist x1 ∈ S, x2 ∈ S and λ ∈ [0, 1] such that x = λx1 + (1 − λ)x2 . Since εn =
dH (S, ℵn ) there exist Xi ∈ ℵn and Xj ∈ ℵn such that kx1 −Xi k ≤ εn and kx2 −Xj k ≤ εn
so that y = λXi +(1−λ)Xj belongs to H(ℵn ). By the triangular inequality kx−yk ≤ 2εn .
Since H(ℵn ) ⊂ H(S) we also have that dH (∂H(S), ∂H(ℵn )) ≤ 2εn , which implies that
kH(ℵn )|−|H(S)k ≤ O(εn ). On the other hand we have dH (H(S)\ℵn , H(ℵn )\ℵn ) ≤ 2εn
and dH (∂(H(S) \ ℵn ), ∂(H(ℵn ) \ ℵn )) ≤ 2εn . By Proposition 3 we get, R(H(ℵn ) \ ℵn ) ≥
R(H(S) \ ℵn ) − 2εn . Since H(S) \ S ⊂ H(S) \ ℵn it follows,
R(H(ℵn ) \ ℵn ) ≥ R(H(S) \ S) − 2εn .
a.s.
(31)
Since εn → 0 almost surely, |H(ℵn )| → |H(S)| and R(H(ℵn ) \ ℵn ) ≥ R(H(S) \ S)
eventually almost surely. Then, there exists CS a positive constant that depends on S
such that, Ṽn ≥ CS eventually almost surely. Finally, since cn,γ → 0 we conclude the
proof.
5.2
Proof of Theorem 4
The proof of Theorem 4 is based on the following lemma
24
Lemma 5. Assume that the unknown density f fulfils condition B and S ∈ A. Take
K ∈ K, and hn = O(n−β ) with β ∈ (0, 1/d).
Let fˆn (x) be the density estimator introduced in Definition 3. Then,
+
(i) there exists a sequence ε+
n such that log(n)εn → 0 and for all x ∈ S,
1−
ε+
n
f (x)
fˆn (x)
1/d
≥
e.a.s.
(ii) there exist a sequence ε−
n → 0 and a constant λ0 > 0 such that for all x ∈ H(ℵn ),
(fˆn (x))1/d ≥ λ0 − ε−
.
e.a.s.
n
Proof. We start the proof by establishing some useful preliminary results. First notice
that, for S ∈ A, with exactly the same kind of calculation we did to prove Lemma 2,
1/d
n
choosing ρn = f04clog
we have,
S ωd n
P(dH (ℵn , S) ≥ ρn ) ≤ CS n−2
for n large enough.
(32)
Notice that, since K ∈ K, S ∈ A, and K is bounded from below on a neighbourhood of
the origin, there exist c00K > 0 and rK > 0 such that,
Z
0
K((u − x)/r)du ≥ c00K rd for all x ∈ S and r ≤ rK
.
(33)
S
We have, for all x ∈ S,
Z
Efn (x) =
K(u)f (x + uhn )du.
{u:x+uhn ∈S}
Using that f is Lipschitz and
Z
Efn (x) ≤
R
Rd
K(u)du = 1, we get, for all x ∈ S
K(u) f (x) + kf kukhn du ≤ f (x) + kf hn cK .
(34)
{u:x+uhn ∈S}
From (33) and the condition f (x) > f0 for all x ∈ S, it follows that,
Efn (x) ≥ f0 c0K
for all x ∈ S.
We start by proving (i). The triangular inequality entails that,
max fˆn (x) − f (x) ≤ sup fˆn (x) − Efˆn (x) + sup Efˆn (x) − f (x) .
x∈S
(35)
(36)
x∈S
x∈S
In order to deal with the first term on the right hand side of (36), observe that, since
K ∈ K and hn = O(n−β ) with β ∈ (0, 1/d) we can apply Theorem 2.3 in [11]. Then,
there exists a constant C1 such that, with probability one, for n large enough,
s
nhdn
sup fn (x) − Efn (x) ≤ C1 .
− log(hn ) x∈Rd
25
Thus
s
nhdn
sup fn (x) − Efn (x) ≤ C1 ,
− log(hn ) x∈ℵn
and therefore,
s
nhdn
sup fˆn (x) − Efˆn (x) ≤ C1 .
− log(hn ) x∈S
(37)
Now let us bound supx∈S (Efˆn (x) − f (x)) from above. For all x ∈ S we have,
E(fˆn (x)) = E(fˆn (x)|dH (ℵn , S) ≤ ρn )P(dH (ℵn , S) ≤ ρn )+
E(fˆn (x)|dH (ℵn , S) > ρn )P(dH (ℵn , S) > ρn ). (38)
Since {(x, y) ∈ S 2 , kx − yk ≤ hn } is compact, the Lebesgue dominate convergence
theorem entails that there exist y0 ∈ S such that kx − y0 k ≤ ρn , and a sequence yk with
yk → y0 , kyk − y0 k ≤ ρn , such that for n large enough, with probability one
E(fˆn (x)|dH (S, ℵn ) ≤ ρn ) ≤ sup E
x∈S
lim sup
fn (y) =
y∈S:kx−yk≤ρn
sup E lim fn (yk ) = sup lim (E(fn (yk ))) ≤ sup
x∈S
yk →y0
x∈S yk →y0
sup
E(fn (y)).
x∈S y∈S:kx−yk≤ρn
Now applying (34) and the Lipschitz continuity of f we obtain,
E(fˆn (x)|dH (S, ℵn ) ≤ ρn ) ≤
max
y∈S,kx−yk≤ρn
{f (y) + kf hn cK } ≤ f (x) + kf ρn + kf hn cK .
(39)
With the same kind of argument it can be proved that,
E(fˆn (x)|dH (ℵn , S) ≥ ρn ) ≤ sup Efn (y) ≤ f1 + kf hn cK .
(40)
y∈S
From equations (38),(39),(40) and (32) we get,
sup(E(fˆn (x)) − f (x)) ≤ kf ρn + kf hn cK + (f1 + kf hn cK )CS n−2 .
(41)
x∈S
1/2
nhdn
Take now εn = kf ρn + kf hn cK + (f1 + kf hn cK )CS n−2 + C1 − log(h
. Which
n)
fulfils log(n)εn → 0. From equations (36), (37), (41), we obtain that, with probability
one, for n large enough,
max fˆn (x) − f (x) ≤ εn .
x∈S
Then, for all x ∈ S, fˆn (x) − f (x) ≤ f (x)εn /f0 , and thus,
lently,
!1/d
εn −1/d
f (x)
≥ 1+
.
f0
fˆn (x)
26
fˆn (x)
f (x)
≤ 1+
εn
f0 ,
or equiva-
Finally, taking ε+
= (1 − (1 + εn /f0 )−1/d ) ∼ εn /(df0 ) (observe that we have ε+
n log(n) →
n
1/d
0) then maxx∈S ˆf (x)
≥ 1 − ε+
n eventually almost surely, which concludes the proof
fn (x)
of (i).
In order to prove (ii), observe that
min fˆn (x) ≥ min Efˆn (x) − max Efˆn (x) − fˆn (x) .
x∈Rd
x∈Rd
x∈Rd
Since we have already proved that maxx∈Rd Efˆn (x)− fˆn (x) → 0 a.s., it remains to prove
that minx∈Rd Efˆn (x) is bounded from below by a positive constant. From minx∈Rd Efˆn (x) =
minx∈ℵn Efn (x) and (35), we get
min Efˆn (x) ≥ min Efn (x) ≥ f0 c0K .
x∈S
x∈Rd
Now we are ready to prove Theorem 4.

Proof of Theorem 4 a). Recall that
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) = \sup\Big\{r : \exists x \text{ such that } x + \frac{r}{\hat f_n(x)^{1/d}}\, A \subset H(\aleph_n)\setminus\aleph_n\Big\},$$
with $A$ the ball $B(O, \omega_d^{-1/d})$. Under $H_0$ ($S$ is convex),
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) \le \sup\Big\{r : \exists x \text{ such that } x + \frac{r}{\hat f_n(x)^{1/d}}\, A \subset S\setminus\aleph_n\Big\}.$$
If we apply Lemma 5 (i) (notice that all convex sets are in $\mathcal{A}$) we get
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) \le \sup\Big\{r : \exists x \text{ such that } x + \frac{r}{f(x)^{1/d}}\,(1-\varepsilon_n^+)\, A \subset S\setminus\aleph_n\Big\}.$$
Equivalently, $\Delta(\aleph_n) \ge (1-\varepsilon_n^+)\,\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big)$, and therefore $\mathbb{P}(\hat V_n \ge c_{n,\gamma}) \le \mathbb{P}\big(V(\aleph_n) \ge (1-\varepsilon_n^+)^d c_{n,\gamma}\big)$, from where it follows that $\mathbb{P}(\hat V_n \ge c_{n,\gamma})$ can be majorized by
$$\mathbb{P}\Big(U(\aleph_n) \ge -(1-\varepsilon_n^+)^d\log\big(-\log(1-\gamma)\big) + \big((1-\varepsilon_n^+)^d - 1\big)\big(\log(n) + (d-1)\log(\log(n)) + \log(\alpha_B)\big)\Big).$$
Therefore, by Theorem 2 (using that $\log(n)\,\varepsilon_n^+ \to 0$) we get that
$$\mathbb{P}(\hat V_n \ge c_{n,\gamma}) \le \mathbb{P}\big(U(\aleph_n) \ge -\log(-\log(1-\gamma)) + o(1)\big) \to \gamma.$$
Proof of Theorem 4 b). From Lemma 5 (ii), we have that
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) \ge (\lambda_0 - \varepsilon_n^-)\, R\big(H(\aleph_n)\setminus\aleph_n\big),$$
where $\varepsilon_n^- \to 0$ a.s. Then, under $H_1$ ($S$ is not convex), from (31), we obtain
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) \ge (\lambda_0 - \varepsilon_n^-)\Big(R\big(H(S)\setminus S\big) - 2\, d_H(S,\aleph_n)\Big).$$
Since $S \in \mathcal{A}$, $d_H(S,\aleph_n) \to 0$ a.s. (see [4]), and $R\big(H(S)\setminus S\big) > 0$ (see Proposition 4), then with probability one, for $n$ large enough,
$$\hat\delta\big(H(\aleph_n)\setminus\aleph_n\big) \ge \tfrac{1}{2}\,\lambda_0\, R\big(H(S)\setminus S\big),$$
and Theorem 4 b) follows from the fact that $c_{n,\gamma} \to 0$.
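To make the structure of the statistic concrete, the following sketch approximates the largest empty ball contained in $H(\aleph_n)\setminus\aleph_n$ on simulated data. It is not the implementation behind the results above: the half-annulus support, the grid search, and the omission of the plug-in factor $\hat f_n(x)^{1/d}$ and of the normalizing ball $A$ are all simplifying assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, cKDTree

rng = np.random.default_rng(1)

# Non-convex support: a half-annulus ("C" shape), sampled area-uniformly.
n = 2000
theta = rng.uniform(0, np.pi, n)
rad = np.sqrt(rng.uniform(0.7 ** 2, 1.0, n))
pts = np.c_[rad * np.cos(theta), rad * np.sin(theta)]

hull = ConvexHull(pts)
membership = Delaunay(pts[hull.vertices])   # point-in-hull test for H(aleph_n)
tree = cKDTree(pts)

# Grid search for the largest ball in H(aleph_n) \ aleph_n: at each grid point
# inside the hull, the radius is limited by the nearest sample point and by
# the hull boundary (min over facet-plane distances).
g = np.linspace(-1.1, 1.1, 300)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
grid = grid[membership.find_simplex(grid) >= 0]
A, b = hull.equations[:, :2], hull.equations[:, 2]   # facets: A x + b <= 0 inside
d_boundary = -(grid @ A.T + b).max(axis=1)
d_sample = tree.query(grid)[0]
print("approx. largest empty ball radius:", np.minimum(d_boundary, d_sample).max())
```

Under $H_1$ this radius stays bounded away from zero (the hole of the "C" lies inside the hull but far from the sample), which is exactly the mechanism used in the proof of b).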
References
[1] Ambrosio, L., Colesanti, A., and Villa, E. (2008). Outer Minkowski content for some
classes of closed sets. Math. Ann. 342, 727–748.
[2] Baryshnikov, Y., Penrose, M., and Yukich, J.E. (2009) Gaussian limits for generalized spacings. Ann. Appl. Prob. 19(1) 158–185.
[3] Cuevas, A., and Fraiman, R. (1997) A plug-in approach to support estimation. Ann.
Statist. 25, 2300–2312.
[4] Cuevas, A., and Rodriguez-Casal, A. (2004) On boundary estimation. Adv. Appl.
Probab. 36, 340–354.
[5] Cuevas, A., Fraiman, R., and Pateiro-López, B. (2012) On statistical properties of sets fulfilling rolling-type conditions. Adv. Appl. Probab. 44, 311–329.
[6] Delicado, P., Hernández, A., and Lugosi, G. (2014) Data-based decision rules about
the convexity of the support of a distribution. Electron. J. Statist. 8, 96–129.
[7] Deheuvels, P. (1983). Strong Bounds for Multidimensional Spacings. Probab. Theory
Related Fields 64 (4) 411–424
[8] Devroye, L. (1981). Laws of the iterated logarithm for order statistics of uniform
spacings. Ann. Probab. 9 860–867.
[9] Dümbgen, L., and Walther, G. (1996) Rates of convergence for random approximations of convex sets. Adv. Appl. Probab. 28, 384–393.
[10] Gallier, J. (2011). Geometric Methods and Applications: For Computer Science and
Engineering, 2nd Edition. Springer-Verlag.
[11] Giné, E., and Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators. Annales de l’Institut Henri Poincare (B). Probability
and Statistics 38(6) 907–921.
[12] Janson, S. (1987). Maximal spacings in several dimensions. Ann. Prob. 15 274–280.
[13] Janson, S. (1986). Random coverings in several dimensions. Acta Math. 156 82–118.
[14] Leonenko, N., Pronzato, L., and Savani, V. (2008) A class of Rényi information
estimators for multidimensional densities. Ann. Stat. 36(5) 2153–2182.
[15] Mattila, P. (1995) Geometry of sets and measures in Euclidean spaces, Cambridge
Univ. Press.
[16] Miller, E.G. (2003) A new class of entropy estimators for multi-dimensional densities. International Conference on Acoustics, Speech, and Signal Processing.
[17] Møller, J., and Waagepetersen R. P. (2004) Statistical inference and simulation for
spatial point processes. Chapman & Hall/CRC
[18] Pyke, R. (1965). Spacings. J. Roy. Statist. Soc. Ser. B 27 395–449.
[19] Pyke, R. (1972). Spacings revisited. Proc. Sixth Berkeley Symp. Math. Statist.
Probab. 1 417–427.
[20] Proschan, F. and Pyke, R. (1967). Tests for monotone failure rate. Proc. Fifth
Berkeley Symp. on Math. Statist. and Prob. 3 293–312.
[21] Ranneby, B. (1984). The maximal spacing method. An estimation method related
to maximum likelihood method. Scand. J. Statist. 11 93–112.
[22] Ranneby, B., Jammalamadaka, S.R., and Teterukovskiy, A. (2005) The maximum
spacing estimation for multivariate observations. Journal of Statistical Planning and
Inference 129 (1–2) 427–446.
[23] Rényi, A., and Sulanke, R. (1963) Über die konvexe Hülle von n zufällig gewählten
Punkten. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete. 2, 75–84.
[24] Rényi, A., and Sulanke, R. (1964) Über die konvexe Hülle von n zufällig gewählten
Punkten (II). Z. Wahrscheinlichkeitstheorie und Verw. Gebiete. 3, 138–147.
[25] Stevens, W. L. (1939) Solution to a geometrical problem in probability. Ann. Eugenics 9 315–320
[26] Tenenbaum, J.B., de Silva, V., and Langford, J.C. (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290 2319–2323.
[27] Dümbgen, L., and Walther, G.(1996) Rates of convergence for random approximations of convex sets. Adv. Appl. Probab. 28 384–393.
Multimodal Deep Learning for Robust RGB-D Object Recognition
Andreas Eitel
Jost Tobias Springenberg
Luciano Spinello
Martin Riedmiller
Wolfram Burgard
arXiv:1507.06821v2 [] 18 Aug 2015
Abstract— Robust object recognition is a crucial ingredient
of many, if not all, real-world robotics applications. This paper
leverages recent progress on Convolutional Neural Networks
(CNNs) and proposes a novel RGB-D architecture for object
recognition. Our architecture is composed of two separate
CNN processing streams – one for each modality – which are
consecutively combined with a late fusion network. We focus
on learning with imperfect sensor data, a typical problem in
real-world robotics tasks. For accurate learning, we introduce
a multi-stage training methodology and two crucial ingredients
for handling depth data with CNNs: first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets; second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset [15] and show recognition in challenging RGB-D real-world noisy settings.
I. I NTRODUCTION
RGB-D object recognition is a challenging task that is at
the core of many applications in robotics, indoor and outdoor.
Nowadays, RGB-D sensors are ubiquitous in many robotic
systems. They are inexpensive, widely supported by open
source software, do not require complicated hardware and
provide unique sensing capabilities. Compared to RGB data,
which provides information about appearance and texture,
depth data contains additional information about object shape
and it is invariant to lighting or color variations.
In this paper, we propose a new method for object recognition from RGB-D data. In particular, we focus on making recognition robust to imperfect sensor data, a scenario typical of many robotics tasks. Our approach builds on recent advances from the machine learning and computer vision communities. Specifically, we extend classical convolutional neural networks (CNNs), which have recently
been shown to be remarkably successful for recognition
on RGB images [13], to the domain of RGB-D data. Our
architecture, which is depicted in Fig. 1, consists of two
convolutional network streams operating on color and depth
information respectively. The network automatically learns
to combine these two processing streams in a late fusion
approach. This architecture bears similarity to other recent
multi-stream approaches [21], [23], [11]. Training of the
individual stream networks as well as the combined architecture follows a stage-wise approach. We start by separately
training the networks for each modality, followed by a third
training stage in which the two streams are jointly fine-tuned, together with a fusion network that performs the final
All authors are with the Department of Computer Science, University
of Freiburg, Germany. This work was partially funded by the DFG under
the priority programm “Autonomous Learning” (SPP 1527). {eitel, springj,
riedmiller, spinello, burgard}@cs.uni-freiburg.de
Fig. 1: Two-stream convolutional neural network for RGB-D object recognition. The input of the network is an RGB
and depth image pair of size 227 × 227 × 3. Each stream
(blue, green) consists of five convolutional layers and two
fully connected layers. Both streams converge in one fully
connected layer and a softmax classifier (gray).
classification. We initialize both the RGB and depth stream
network with weights from a network pre-trained on the
ImageNet dataset [19]. While initializing an RGB network
from a pre-trained ImageNet network is straight-forward,
using such a network for processing depth data is not. Ideally,
one would want to directly train a network for recognition
from depth data without pre-training on a different modality
which, however, is infeasible due to lack of large scale
labeled depth datasets. Due to this lack of labeled training
data, a pre-training phase for the depth-modality – leveraging
RGB data – becomes of key importance. We therefore
propose a depth data encoding to enable re-use of CNNs
trained on ImageNet for recognition from depth data. The
intuition – proved experimentally – is to simply encode
a depth image as a rendered RGB image, spreading the
information contained in the depth data over all three RGB
channels and then using a standard (pre-trained) CNN for
recognition.
In real-world environments, objects are often subject to
occlusions and sensor noise. In this paper, we propose a data
augmentation technique for depth data that can be used for
robust training. We augment the available training examples
by corrupting the depth data with missing data patterns
sampled from real-world environments. Using these two
techniques, our system can both learn robust depth features
and implicitly weight the importance of the two modalities.
We tested our method to support our claims: first, we
report on RGB-D recognition accuracy, then on robustness
with respect to real-world noise. For the first, we show that
our work outperforms the current state of the art on the
RGB-D Object dataset of Lai et al. [15]. For the second, we
show that our data augmentation approach improves object
recognition accuracy in a challenging real-world and noisy
environment using the RGB-D Scenes dataset [16].
II. R ELATED W ORK
Our approach is related to a large body of work on both
convolutional neural networks (CNNs) for object recognition
as well as applications of computer vision techniques to
the problem of recognition from RGB-D data. Although a
comprehensive review of the literature on CNNs and object
recognition is out of the scope of this paper, we will briefly
highlight connections and differences between our approach
and existing work with a focus on recent literature.
Among the many successful algorithms for RGB-D object
recognition a large portion still relies on hand designed
features such as SIFT in combination with multiple shape
features on the depth channel [15], [14]. However, following
their success in many computer vision problems, unsupervised feature learning methods have recently been extended
to RGB-D recognition settings. Blum et al. [3] proposed an
RGB-D descriptor that relies on a K-Means based feature
learning approach. More recently Bo et al. [5] proposed
hierarchical matching pursuit (HMP), a hierarchical sparse-coding method that can learn features from multiple channel
input. A different approach pursued by Socher et al. [22] relies on combining convolutional filters with a recursive neural
network (a specialized form of recurrent neural network) as
the recognition architecture. Asif et al. [1] report improved
recognition performance using a cascade of Random Forest
classifiers that are fused in a hierarchical manner. Finally,
in recent independent work Schwarz et al. [20] proposed to
use features extracted from CNNs pre-trained on ImageNet
for RGB-D object recognition. While they also make use
of a two-stream network they do not fine-tune the CNN
for RGB-D recognition, but rather just use the pre-trained
network as is. Interestingly, they also discovered that simple
colorization methods for depth are competitive with more
involved preprocessing techniques. In contrast to their work,
ours achieves higher accuracy by training our fusion CNN
end-to-end: mapping from raw pixels to object classes in a
supervised manner (with pre-training on a related recognition
task). The features learned in our CNN are therefore by
construction discriminative for the task at hand. Using CNNs
trained for object recognition has a long history in computer
vision and machine learning. While they have been known
to yield good results on supervised image classification tasks
such as MNIST for a long time [17], recently they were
not only shown to outperform classical methods in large
scale image classification tasks [13], object detection [9]
and semantic segmentation [8] but also to produce features
that transfer between tasks [7], [2]. This recent success story
has been made possible through optimized implementations
for high-performance computing systems, as well as the
availability of large amounts of labeled image data through,
e.g., the ImageNet dataset [19].
While the majority of work in deep learning has focused
on 2D images, recent research has also been directed towards
using depth information for improving scene labeling and
object detection [6], [10]. Among them, the work most
similar to ours is the one on object detection by Gupta
et al. [10], who introduce a generalization of the
R-CNN detector [9] that can be applied to depth data.
Specifically, they use large CNNs already trained on RGB
images to also extract features from depth data, encoding
depth information into three channels (HHA encoding).
Specifically, they encode for each pixel the height above
ground, the horizontal disparity and the pixelwise angle
between a surface normal and the gravity direction. Our
fusion network architecture shares similarities with their
work in the usage of pre-trained networks on RGB images.
Our method differs in both the encoding of depth into color
image data and in the fusion approach taken to combine
information from both modalities. For the encoding step, we
propose an encoding method for depth images (’colorizing’
depth) that does not rely on complicated preprocessing and
results in improved performance when compared to the
HHA encoding. To accomplish sensor fusion we introduce
additional layers to our CNN pipeline (see Fig. 1) allowing us
to automatically learn a fusion strategy for the recognition
task – in contrast to simply training a linear classifier on
top of features extracted from both modalities. Multi-stream
architectures have also been used for tasks such as action
recognition [21], detection [11] and image retrieval [23]. An
interesting recent overview of different network architectures
for fusing depth and image information is given in Saxena
et al. [18]. There, the authors compared different models
for multimodal learning: (1) early fusion, in which the input
image is concatenated to the existing image RGB channels
and processed alongside; (2) an approach we denote as
late fusion, where features are trained separately for each
modality and then merged at higher layers; (3) combining
early and late fusion; concluding that late fusion (2) and the
combined approach perform best for the problem of grasp
detection. Compared to their work, our model is similar to the
late fusion approach but widely differs in training – Saxena
et al. [18] use a layer-wise unsupervised training approach –
and scale (the size of both their networks and input images
is an order of magnitude smaller than in our settings).
III. M ULTIMODAL ARCHITECTURE FOR RGB-D OBJECT
RECOGNITION
An overview of the architecture is given in Fig. 1. Our
network consists of two streams (top-blue and bottom-green
part in the figure) – processing RGB and depth data independently – which are combined in a late fusion approach.
Each stream consists of a deep CNN that has been pretrained for object classification on the ImageNet database
(we use the CaffeNet [12] implementation of the CNN from
Krizhevsky et al. [13]). The key reason behind starting from
Fig. 2: Different approaches for color encoding of depth images. From left to right: RGB, depth-gray, surface normals [5],
HHA [10], our method.
Fig. 3: CNNs require a fixed size input. Instead of the widely
used image warping approach (middle), our method (bottom)
preserves shape information and ratio of the objects. We
rescale the longer side and create additional image context,
by tiling the pixels at the border of the longer side, e.g., 1.
We assume that the depth image is already transformed to
three channels using our colorization method.
a pre-trained network is to enable training a large CNN
with millions of parameters using the limited training data
available from the Washington RGB-D Object dataset (see,
e.g., Yosinski et al. [25] for a recent discussion). We first
pre-process data from both modalities to fully leverage the
ImageNet pre-training. Then, we train our multimodal CNN
in a stage-wise manner. We fine-tune the parameters of each
individual stream network for classification of the target data
and proceed with the final training stage in which we jointly
train the parameters of the fusion network. The different steps
will be outlined in the following sections.
A. Input preprocessing
To fully leverage the power of CNNs pre-trained on
ImageNet, we pre-process the RGB and depth input data
such that it is compatible with the kind of original ImageNet
input. Specifically, we use the reference implementation
of the CaffeNet [12] that expects 227 × 227 pixel RGB
images as input which are typically randomly cropped from
larger 256 × 256 RGB images (see implementation details
on data augmentation). The first processing step consists
of scaling the images to the appropriate image size. The
simplest approach to achieve this is to use image warping by
directly rescaling the original image to the required image
dimensions, disregarding the original object ratio. This is
depicted in Fig. 3 (middle). We found in our experiments that
this process is detrimental to object recognition performance
– an effect that we attribute to a loss of shape information
(see also Section IV-C). We therefore devise a different
preprocessing approach: we scale the longest side of the
original image to 256 pixels, resulting in a 256 × N or an N
× 256 sized image. We then tile the borders of the longest
side along the axis of the shorter side. The resulting RGB
or depth image shows an artificial context around the object
borders (see Fig. 3). The same scaling operation is applied
to both RGB and depth images.
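A minimal sketch of this rescaling step, assuming a three-channel image and OpenCV for the resize; the text specifies only that border pixels of the longer side are tiled, so the symmetric split of the padding is our assumption.

```python
import cv2
import numpy as np

def rescale_and_tile(img, out=256):
    """Scale the longest side to `out` pixels, then tile the border pixels of
    the longer side to fill the shorter side (no warping). Assumes a 3-channel
    image; depth images are colorized before this step."""
    h, w = img.shape[:2]
    if h >= w:
        img = cv2.resize(img, (int(round(w * out / h)), out))
        pad = out - img.shape[1]
        left = np.tile(img[:, :1], (1, pad // 2, 1))
        right = np.tile(img[:, -1:], (1, pad - pad // 2, 1))
        return np.concatenate([left, img, right], axis=1)
    img = cv2.resize(img, (out, int(round(h * out / w))))
    pad = out - img.shape[0]
    top = np.tile(img[:1], (pad // 2, 1, 1))
    bottom = np.tile(img[-1:], (pad - pad // 2, 1, 1))
    return np.concatenate([top, img, bottom], axis=0)
```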
While the RGB images can be directly used as inputs
for the CNNs after this processing step, the rescaled depth
data requires additional steps. To realize this, recall that a
network trained on ImageNet has been trained to recognize
objects in images that follow a specific input distribution
(that of natural camera images) that is incompatible with
data coming from a depth sensor – which essentially encodes
distance of objects from the sensor. Nonetheless, by looking
at a typical depth image from a household object scene (c.f.,
Fig. 4) one can conclude that many features that qualitatively
appear in RGB images – such as edges, corners, shaded
regions – are also visible in, e.g., a grayscale rendering of
depth data. This realization has previously led to the idea of
simply using a rendered version of the recorded depth data
as an input for CNNs trained on ImageNet [10]. We compare
different such encoding strategies for rendering depth to
images in our experiments. The two most prevalent such
encodings are (1) rendering of depth data into grayscale and
replicating the grayscale values to the three channels required
as network input; (2) using surface normals where each
dimension of a normal vector corresponds to one channel in
the resulting image. A more involved method, called HHA
encoding [10], encodes in the three channels the height above
ground, horizontal disparity and the pixelwise angle between
a surface normal and the gravity direction.
We propose a fourth, effective and computationally inexpensive, encoding of depth to color images, which we found
to outperform the HHA encoding for object recognition. Our
method first normalizes all depth values to lie between 0
and 255. Then, we apply a jet colormap on the given image
that transforms the input from a single to a three channel
image (colorizing the depth). For each pixel (i, j) in the
depth image d of size W × H, we map the distance to color
values ranging from red (near) over green to blue (far), essen-
tially distributing the depth information over all three RGB
channels. Edges in these three channels often correspond to
interesting object boundaries. Since the network is designed
for RGB images, the colorization procedure provides enough
common structure between a depth and an RGB image
to learn suitable feature representations (see Fig. 2 for a
comparison between different depth preprocessing methods).
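A minimal sketch of the depth-jet colorization, assuming matplotlib's jet colormap; the exact normalization used by the authors may differ.

```python
import numpy as np
from matplotlib import cm

def colorize_depth(depth):
    """Depth-jet encoding: normalize depth, then spread the single channel
    over three RGB channels via the jet colormap."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)
    # matplotlib's jet maps 0 -> blue and 1 -> red; flip the input so that
    # near -> red and far -> blue, as described above.
    rgb = cm.jet(1.0 - d)[..., :3]          # drop the alpha channel
    return (rgb * 255).astype(np.uint8)
```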
B. Network training

Fig. 4: Kitchen scene in the RGB-D Scenes dataset showing objects subjected to noise and occlusions.

Fig. 5: We create synthetic training data by inducing artificial patterns of missing depth information in the encoded image.
Let D = {(x1 , d1 , y1 ), . . . , (xN , dN , yN )} be the labeled
data available for training our multimodal CNN; with xi , di
denoting the RGB and pre-processed depth image respectively and yi corresponding to the image label in one-hot
encoding – i.e., yi ∈ RM is a vector of dimensionality M (the
number of labels) with yki = 1 for the position k denoting
the image label. We train our model using a three-stage
approach, first training the two stream networks individually
followed by a joint fine-tuning stage.
1) Training the stream networks: We first proceed by
training the two individual stream networks (c.f., the blue and
green streams in Fig. 1). Let g I (xi ; θI ) be the representation
extracted from the last fully connected layer (fc7) of the CaffeNet – with parameters θI – when applied to an RGB image
xi . Analogously, let g D (di ; θD ) be the representation for the
depth image. We will assume that all parameters θI and θD
(the network weights and biases) are initialized by copying
the parameters of a CaffeNet trained on the ImageNet dataset.
We can then train an individual stream network by placing
a randomly initialized softmax classification layer on top of
$g_D$ and $g_I$ and minimizing the negative log likelihood $L$ of
the training data. That is, for the depth image stream network
we solve
$$\min_{W_D,\,\theta_D}\ \sum_{i=1}^{N} L\Big(\mathrm{softmax}\big(W_D\, g_D(d_i;\theta_D)\big),\, y_i\Big), \qquad (1)$$
where $W_D$ are the weights of the softmax layer mapping from $g(\cdot)$ to $\mathbb{R}^M$, the softmax function is given by $\mathrm{softmax}(z) = \exp(z)/\|\exp(z)\|_1$ and the loss is computed as $L(s, y) = -\sum_k y_k \log s_k$. Training the RGB stream network then can be performed by an analogous optimization. After training, the resulting networks can be used to perform separate classification of each modality.
2) Training the fusion network: Once the two individual stream networks are trained we discard their softmax weights, concatenate their – now fine-tuned – last layer responses $g_I(x_i;\theta_I)$ and $g_D(d_i;\theta_D)$ and feed them through an additional fusion stream $f([g_I(x_i;\theta_I), g_D(d_i;\theta_D)];\theta_F)$ with parameters $\theta_F$. This fusion network again ends in a softmax classification layer. The complete setup is depicted in Fig. 1, where the two fc7 layers (blue and green) are concatenated and merge into the fusion network (here the inner product layer fc1-fus depicted in gray). Analogous to Eq. (1), the fusion network can therefore be trained by jointly optimizing all parameters to minimize the negative log likelihood
$$\min_{W_f,\,\theta_I,\,\theta_D,\,\theta_F}\ \sum_{i=1}^{N} L\Big(\mathrm{softmax}\big(W_f\, f([g_I, g_D];\theta_F)\big),\, y_i\Big), \qquad (2)$$
where $g_I = g_I(x_i;\theta_I)$, $g_D = g_D(d_i;\theta_D)$. Note that in this stage training can also be performed by optimizing only the weights of the fusion network (effectively keeping the weights from the individual stream training intact).
C. Robust classification from depth images

Finally, we are interested in using our approach in real
world robotics scenarios. Robots are supposed to perform
object recognition in cluttered scenes where the perceived
sensor data is subject to changing external conditions (such
as lighting) and sensor noise. Depth sensors are especially
affected by a non-negligible amount of noise in such setups.
This is mainly due to the fact that reflective properties of materials as well as their coating, often result in missing depth
information. An example of noisy depth data is depicted in
Fig. 4. In contrast to the relatively clean training data from
the Washington RGB-D Object dataset, the depicted scene
contains considerable amounts of missing depth values and
partial occlusions (the black pixels in the figure). To achieve
robustness against such unpredictable factors, we propose a
new data augmentation scheme that generates new, noised
training examples for training and is tailored specifically to
robust classification from depth data.
Our approach utilizes the observation that noise in depth
data often shows a characteristic pattern and appears at
object boundaries or object surfaces. Concretely, we sampled
a representative set of noise patterns P = {P1 , . . . , PK }
that occur when recording typical indoor scenes through a
Kinect sensor. For sampling the noise patterns we used the
RGB-D SLAM dataset [24]. First, we extract 33,000 random
noise patches of size 256 × 256 from different sequences at
varying positions and divide them into five groups, based on
the number of missing depth readings they contain. Those
noise patches are 2D binary mask patterns. We randomly
sample pairs of noise patches from two different groups that
are randomly added or subtracted and optionally inverted to
produce a final noise mask pattern. We repeat this process
until we have collected K = 50, 000 noise patterns in total.
Examples of the resulting noise patterns and their application
to training examples are shown in Fig. 5.
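A sketch of this sampling procedure, assuming `groups` holds the five lists of 256 × 256 binary patches bucketed by the amount of missing depth; the text states only "randomly added or subtracted and optionally inverted", so the clipping back to {0, 1} and the 50% choices are assumptions.

```python
import numpy as np

def sample_noise_pattern(groups, rng):
    """Combine two binary noise patches from different groups by random
    addition or subtraction, with optional inversion."""
    i, j = rng.choice(len(groups), size=2, replace=False)
    p = groups[i][rng.integers(len(groups[i]))].astype(np.int8)
    q = groups[j][rng.integers(len(groups[j]))].astype(np.int8)
    mask = np.clip(p + q if rng.random() < 0.5 else p - q, 0, 1)
    if rng.random() < 0.5:
        mask = 1 - mask
    return mask.astype(np.uint8)   # 1 = depth reading kept, 0 = removed
```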
Training the depth network with artificial noise patterns then proceeds by minimizing the objective from Eq. (1), in which each depth sample $d^i$ is randomly replaced with a noised variant with probability 50%. Formally,
$$d^i = \begin{cases} d^i & \text{if } p = 1 \\ P_k \circ d^i & \text{else} \end{cases} \quad\text{with}\quad p \sim \mathcal{B}\{0.5\},\ \ k \sim \mathcal{U}\{1, K\}, \qquad (3)$$
where ◦ denotes the Hadamard product, B the Bernoulli
distribution and U the discrete uniform distribution.
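Applying Eq. (3) during training can be sketched as follows; the 3-channel broadcast assumes the depth image has already been colorized.

```python
import numpy as np

def maybe_noise(d_i, patterns, rng):
    """Eq. (3): keep the encoded depth image with probability 0.5, otherwise
    take the Hadamard product with a uniformly drawn noise pattern P_k."""
    if rng.random() < 0.5:               # p ~ B{0.5}, p = 1: keep d_i
        return d_i
    k = rng.integers(len(patterns))      # k ~ U{1, K}
    # Broadcast the 2D binary mask over the three encoded depth channels.
    return patterns[k][..., None] * d_i
```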
IV. E XPERIMENTS
We evaluate our multimodal network architecture on the
Washington RGB-D Object Dataset [15] which consists of
household objects belonging to 51 different classes. As an
additional experiment – to evaluate the robustness of our
approach for classification in real-world environments – we
considered classification of objects from the RGB-D Scenes
dataset whose class distribution partially overlaps with the
RGB-D Object Dataset.
A. Experimental setup
All experiments were performed using the publicly available Caffe framework [12]. As described previously we use
the CaffeNet as the basis for our fusion network. It consists
of five convolutional layers (with max-pooling after the first,
second and fifth convolution layer) followed by two fully
connected layers and a softmax classification layer. Rectified
linear units are used in all but the final classification layer.
We initialized both stream networks with the weights and
biases of the first eight layers from this pre-trained network,
discarding the softmax layer. We then proceeded with our
stage-wise training. In the first stage (training the RGB and
depth streams independently) the parameters of all layers
were adapted using a fixed learning rate schedule (with initial
learning rate of 0.01 that is reduced to 0.001 after 20K
iterations and training is stopped after 30K iterations). In
the second stage (training the fusion network, 20k iterations,
mini-batch size of 50) we experimented with fine-tuning all
weights but found that fixing the individual stream networks
(by setting their learning rate to zero) and only training the
fusion part of the network resulted in the best performance.
The number of training iterations was chosen based on the validation performance on a training/validation split in a preliminary experiment.

TABLE I: Comparisons of our fusion network with other approaches reported for the RGB-D dataset. Results are recognition accuracy in percent. Our multi-modal CNN outperforms all the previous approaches.

Method              | RGB        | Depth      | RGB-D
Nonlinear SVM [15]  | 74.5 ± 3.1 | 64.7 ± 2.2 | 83.9 ± 3.5
HKDES [4]           | 76.1 ± 2.2 | 75.7 ± 2.6 | 84.1 ± 2.2
Kernel Desc. [14]   | 77.7 ± 1.9 | 78.8 ± 2.7 | 86.2 ± 2.1
CKM Desc. [3]       | N/A        | N/A        | 86.4 ± 2.3
CNN-RNN [22]        | 80.8 ± 4.2 | 78.9 ± 3.8 | 86.8 ± 3.3
Upgraded HMP [5]    | 82.4 ± 3.1 | 81.2 ± 2.3 | 87.5 ± 2.9
CaRFs [1]           | N/A        | N/A        | 88.1 ± 2.4
CNN Features [20]   | 83.1 ± 2.0 | N/A        | 89.4 ± 1.3
Ours, Fus-CNN (HHA) | 84.1 ± 2.7 | 83.0 ± 2.7 | 91.0 ± 1.9
Ours, Fus-CNN (jet) | 84.1 ± 2.7 | 83.8 ± 2.7 | 91.3 ± 1.4

A fixed momentum value of 0.9
and a mini-batch size of 128 was used for all experiments
if not stated otherwise. We also adopted the common data
augmentation practices of randomly cropping 227 × 227
sub-images from the larger 256 × 256 input examples and
performing random horizontal flipping. Training of a single network stream takes ten hours, using an NVIDIA 780 graphics
card.
B. RGB-D Object dataset
The Washington RGB-D Object Dataset consists of 41,877
RGB-D images containing household objects organized into
51 different classes and a total of 300 instances of these
classes which are captured under three different viewpoint
angles. For the evaluation every 5th frame is subsampled. We
evaluate our method on the challenging category recognition
task, using the same ten cross-validation splits as in Lai et
al. [15]. Each split consists of roughly 35,000 training images
and 7,000 images for testing. From each object class one
instance is left out for testing and training is performed on
the remaining 300−51 = 249 instances. At test time the task
of the CNN is to assign the correct class label to a previously
unseen object instance.
Table I shows the average accuracy of our multi-modal
CNN in comparison to the best results reported in the literature. Our best multi-modal CNN, using the jet colorization (Fus-CNN jet), yields an overall accuracy of 91.3 ± 1.4%
when using RGB and depth (84.1 ± 2.7% and 83.8 ± 2.7%
when only the RGB or depth modality is used respectively),
which – to the best of our knowledge – is the highest
accuracy reported for this dataset to date. We also report
results for combining the more computationally intensive
HHA with our network (Fus-CNN HHA). As can be seen
in the table, this did not result in an increased performance.
The depth colorization method slightly outperforms the HHA
fusion network (Fus-CNN HHA) while being computationally cheaper. Overall our experiments show that a pretrained CNN can be adapted for recognition from depth data
using our depth colorization method. Apart from the results
reported in the table, we also experimented with different
fusion architectures. Specifically, performance slightly drops
to 91% when the intermediate fusion layer (fc1-fus) is
removed from the network. Adding additional fusion layers
also did not yield an improvement. Finally, Fig. 6 shows the per-class recall, where roughly half of the objects achieve a recall of ≈ 99%.

Fig. 6: Per-class recall of our trained model on all test-splits. The worst class recall belongs to mushrooms and peaches.
C. Depth domain adaptation for RGB-D Scenes
To test the effectiveness of our depth augmentation
technique in real world scenes, we performed additional
recognition experiments on the more challenging RGB-D
Scenes dataset. This dataset consists of six object classes
(which overlap with the RGB-D Object Dataset) and a large
amount of depth images subjected to noise.
For this experiment we trained two single-stream depth-only networks using the Object dataset and used the Scenes dataset for testing. Further, we assume that the ground-truth
bounding box is given in order to report only on recognition
performance. The first “baseline” network is trained by
following the procedure described in Section III-B.1, with the
total number of labels M = 6. The second network is trained
by making use of the depth augmentation outlined in III-C.
The results of this experiment are shown in Table II (middle
and right column) that reports the recognition accuracy for
each object class averaged over all eight video sequences.
As is evident from the table, the adapted network (right
column) trained with data augmentation outperforms the
baseline model for all classes, clearly indicating that additional domain adaptation is necessary for robust recognition
in real world scenes. However, some classes (e.g., cap, bowl,
soda can) benefit more from noise aware training than others
(e.g., flashlight, coffe mug). The kitchen scene depicted in
Fig. 4 gives a visual intuition for this result. On the one
hand, some objects (e.g., soda cans) often present very noisy
object boundaries and surfaces, thus they show improved
recognition performance using the adapted approach. On
the other hand, small objects (e.g. a flashlight), which are
often captured lying on a table, are either less noisy or
just small, and hence susceptible to being completely erased by the
noise from our data augmentation approach. Fig. 7 shows
several exemplary noisy depth images from the test set that
are correctly classified by the domain-adapted network while
the baseline network labels them incorrectly. We also tested
the effect of different input image rescaling techniques –
previously described in Fig. 3 – in this setting. As shown in
the left column of Table II, standard image warping performs
poorly, which supports our intuition that shape information
gets lost during preprocessing.

TABLE II: Comparison of the domain adapted depth network with the baseline: six-class recognition results (in percent) on the RGB-D Scenes dataset [16] that contains everyday objects in real-world environments.

Class      | Ours, warp. | Ours, no adapt. | Ours, adapt.
flashlight | 93.4        | 97.5            | 96.4
cap        | 62.1        | 68.5            | 77.4
bowl       | 57.4        | 66.5            | 69.8
soda can   | 64.5        | 66.6            | 71.8
cereal box | 98.3        | 96.2            | 97.6
coffee mug | 61.9        | 79.1            | 79.8
class avg. | 73.6 ± 17.9 | 79.1 ± 14.5     | 82.1 ± 12.0
D. Comparison of depth encoding methods
Finally, we conducted experiments to compare the different depth encoding methods described in Fig. 2. For rescaling the images, we use our proposed preprocessing method described in Fig. 3 and tested the different depth encodings. Two scenarios are considered: (1) training from scratch using single-channel depth images; (2) for each encoding method, fine-tuning the network using the procedure described in Section III-B.1. When training from scratch, the initial learning rate is set to 0.01, changed to 0.001 after 40K iterations, and training is stopped after 60K iterations. Training with
more iterations did not further improve the accuracy. From
the results, presented in Table III, it is clear that training the
network from scratch – solely on the RGB-D Dataset – is
inferior to fine-tuning. In the latter setting, the results suggest
that the simplest encoding method (depth-gray) performs
considerably worse than the other three methods. Among
these other encodings (which all produce colorized images),
surface normals and HHA encoding require additional image
preprocessing – whereas colorizing depth using our depth-jet encoding has negligible computational overhead. One
potential reason why the HHA encoding underperforms in
this setup is that all objects are captured on a turntable
with the same height above the ground. The height channel
used in the HHA encoding therefore does not encode any
additional information for solving the classification task.
In this experiment, using surface normals yields slightly
better performance than the depth-jet encoding. Therefore,
we tested the fusion architecture on the ten splits of the RGB-D Object Dataset using the surface normals encoding, but this
did not further improve the performance. Specifically, the
recognition accuracy on the test-set was 91.1 ± 1.6 which
is comparable to our reported results in Table I.
V. C ONCLUSION
We introduce a novel multimodal neural network architecture for RGB-D object recognition, which achieves state-of-the-art performance on the RGB-D Object dataset [15].
Our method consists of a two-stream convolutional neural
network that can learn to fuse information from both RGB
and depth automatically before classification. We make use
of an effective encoding method from depth to image data
that allows us to leverage large CNNs trained for object
recognition on the ImageNet dataset. We present a novel
depth data augmentation that aims at improving recognition in noisy real-world setups, situations typical of many robotics scenarios. We present extensive experimental results and confirm that our method is accurate and is able to learn rich features from both domains. We also show robust object recognition in real-world environments and prove that noise-aware training is effective and improves recognition accuracy on the RGB-D Scenes dataset [16].

Fig. 7: Objects from the RGB-D Scenes test-set for which the domain adapted CNN predicts the correct label, while the baseline (no adapt.) CNN fails. Most of these examples are subject to noise or partial occlusion: a) bowl, b) cap, c) soda can, d) coffee mug.

TABLE III: Comparison of different depth encoding methods on the ten test-splits of the RGB-D Object dataset.

Depth Encoding                            | Accuracy
Depth-gray (single channel), from scratch | 80.1 ± 2.6
Depth-gray                                | 82.0 ± 2.8
Surface normals                           | 84.7 ± 2.3
HHA                                       | 83.0 ± 2.7
Depth-jet encoding                        | 83.8 ± 2.7
REFERENCES

[1] U. Asif, M. Bennamoun, and F. Sohel, "Efficient rgb-d object categorization using cascaded ensembles of randomized decision trees," in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2015.
[2] H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson, "From generic to specific deep representations for visual recognition," arXiv preprint arXiv:1406.5774, 2014.
[3] M. Blum, J. T. Springenberg, J. Wuelfing, and M. Riedmiller, "A learned feature descriptor for object recognition in rgb-d data," in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2012.
[4] L. Bo, K. Lai, X. Ren, and D. Fox, "Object recognition with hierarchical kernel descriptors," in IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2011.
[5] L. Bo, X. Ren, and D. Fox, "Unsupervised feature learning for rgb-d based object recognition," in Proc. of the Int. Symposium on Experimental Robotics (ISER), 2012.
[6] C. Couprie, C. Farabet, L. Najman, and Y. LeCun, "Indoor semantic segmentation using depth information," in Int. Conf. on Learning Representations (ICLR), 2013.
[7] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, "Decaf: A deep convolutional activation feature for generic visual recognition," arXiv preprint arXiv:1310.1531, 2013.
[8] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, "Learning hierarchical features for scene labeling," TPAMI, pp. 1915–1929, 2013.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2014.
[10] S. Gupta, R. Girshick, P. Arbeláez, and J. Malik, "Learning rich features from rgb-d images for object detection and segmentation," in European Conference on Computer Vision (ECCV), 2014.
[11] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik, "Simultaneous detection and segmentation," in European Conference on Computer Vision (ECCV), 2014.
[12] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," arXiv preprint arXiv:1408.5093, 2014.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (NIPS), 2012.
[14] L. Bo, X. Ren, and D. Fox, "Depth kernel descriptors for object recognition," in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2011.
[15] K. Lai, L. Bo, X. Ren, and D. Fox, "A large-scale hierarchical multi-view rgb-d object dataset," in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2011.
[16] K. Lai, L. Bo, X. Ren, and D. Fox, "Detection-based object labeling in 3d scenes," in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," in Proc. of the IEEE, 1998.
[18] I. Lenz, H. Lee, and A. Saxena, "Deep learning for detecting robotic grasps," in Proc. of Robotics: Science and Systems (RSS), 2013.
[19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," 2014.
[20] M. Schwarz, H. Schulz, and S. Behnke, "RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features," in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2015.
[21] K. Simonyan and A. Zisserman, "Two-stream convolutional networks for action recognition in videos," in Advances in Neural Information Processing Systems (NIPS), 2014.
[22] R. Socher, B. Huval, B. Bhat, C. D. Manning, and A. Y. Ng, "Convolutional-recursive deep learning for 3d object classification," in Advances in Neural Information Processing Systems (NIPS), 2012.
[23] N. Srivastava and R. R. Salakhutdinov, "Multimodal learning with deep boltzmann machines," in Advances in Neural Information Processing Systems (NIPS), 2012.
[24] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of rgb-d slam systems," in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2012.
[25] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?" in Advances in Neural Information Processing Systems (NIPS), 2014.
arXiv:1709.06531v1 [] 19 Sep 2017

Learning to Detect Violent Videos using Convolutional Long Short-Term Memory

Swathikiran Sudhakaran1,2 and Oswald Lanz2
1 University of Trento, Trento, Italy
2 Fondazione Bruno Kessler, Trento, Italy
{sudhakaran,lanz}@fbk.eu
Abstract
Developing a technique for the automatic analysis of
surveillance videos in order to identify the presence of violence is of broad interest. In this work, we propose a
deep neural network for the purpose of recognizing violent
videos. A convolutional neural network is used to extract
frame level features from a video. The frame level features
are then aggregated using a variant of the long short term
memory that uses convolutional gates. The convolutional
neural network along with the convolutional long short term
memory is capable of capturing localized spatio-temporal
features which enables the analysis of local motion taking
place in the video. We also propose to use adjacent frame
differences as the input to the model thereby forcing it to encode the changes occurring in the video. The performance
of the proposed feature extraction pipeline is evaluated on
three standard benchmark datasets in terms of recognition
accuracy. Comparison of the results obtained with the state
of the art techniques revealed the promising capability of
the proposed method in recognizing violent videos.
1. Introduction
Nowadays, the amount of public violence has increased dramatically. This can range from a terror attack involving one or more persons wielding guns to a knife attack by a single person. This has resulted in the ubiquitous use of surveillance cameras, which have helped authorities identify violent attacks and take the necessary steps to
nowadays require manual human inspection of these videos
for identifying such scenarios, which is practically infeasible and inefficient. It is in this context that the proposed
study becomes relevant. Having such a practical system that
can automatically monitor surveillance videos and identify
the violent behavior of humans will be of immense help and
assistance to the law and order establishment. In this work,
we will be considering aggressive human behavior as violence rather than the presence of blood or fire.
The development of several deep learning techniques,
brought about by the availability of large datasets and computational resources, has resulted in a landmark change
in the computer vision community. Several techniques
with improved performance for addressing problems such
as object detection, recognition, tracking, action recognition, caption generation, etc. have been developed as a
result. However, despite the recent developments in deep
learning, very few deep learning based techniques have
been proposed to tackle the problem of violence detection
from videos. Almost all the existing techniques rely on
hand-crafted features for generating visual representations
of videos. The most important advantage of deep learning
techniques compared to the traditional hand-crafted feature
based techniques is the ability of the former to achieve a
high degree of generalization. Thus they are able to handle unseen data in a more effective way compared to handcrafted features. Moreover, no prior information about the
data is required in the case of a deep neural network and
they can be inputted with raw pixel values without much
complex pre-processing. Also, deep learning techniques
are not application specific unlike the hand-crafted feature
based methods since a deep neural network model can be
easily applied for a different task without any significant
changes to the architecture. Owing to these reasons, we
choose to develop a deep neural network for performing violent video recognition.
Our contributions can be summarized as follows:
• We develop an end-to-end trainable deep neural network model for performing violent video classification
• We show that a recurrent neural network capable of encoding localized spatio-temporal changes generates a
better representation, with less number of parameters,
for detecting the presence of violence in a video
• We show that a deep neural network trained on the
frame difference performs better than a model trained
on raw frames
• We experimentally validate the effectiveness of the
proposed method using three widely used benchmarks
for violent video classification
The rest of the document is organized as follows. Section
2 discusses some of the relevant techniques for performing
violent video recognition followed by a detailed explanation
of the proposed deep neural network model in Section 3.
The details about the various experiments conducted as part
of this research are given in Section 4 and the document is
concluded in Section 5.
2. Related Works
Several techniques have been proposed by researchers
for addressing the problem of violence detection from
videos. These include methods that use the visual content [21, 2], audio content [23, 12] or both [33, 1]. In this
section, we will be concentrating on methods that use the
visual cues alone since it is more related to the proposed
approach and moreover audio data is generally unavailable
with surveillance videos. All the existing techniques can be
divided into two classes depending on the underlying idea
1. Inter-frame changes: Frames containing violence undergo massive variations because of fast motion due to
fights [28, 5, 4, 8]
2. Local motion in videos: The motion change patterns taking place in the video is analyzed [6, 3, 7, 21, 15, 32, 20, 24,
13, 2, 11, 34]
Vasconcelos and Lippman [28] used the tangent distance
between adjacent frames for detecting the inter-frame variations. Clarin et al. improves this method in [5] by finding the regions with skin and blood and analyzing these regions for fast motion. Chen et al. [4] uses the motion vector
encoded in the MPEG-1 video stream for detecting frames
with high motion content and then detects the presence of
blood for classifying the video as violent. Deniz et al. [8]
proposes to use the acceleration estimate computed from
the power spectrum of adjacent frames as an indicator of
fast motion between successive frames.
Motion trajectory information and the orientation of
limbs of the persons present in the scene is proposed as a
measure for detecting violence by Datta et al. [6]. Several
other methods follow the techniques used in action recognition, i.e., to identify spatio-temporal interest points and extract features from these points. These include Harris corner
detector [3], Space-time interest points (STIP) [7], motion
scale-invariant feature transform (MoSIFT) [21, 32]. Hassner et al. [15] introduces a new feature descriptor called
violent flows (ViF), which is the flow magnitude over time
of the optical flow between adjacent frames, for detecting
violent videos. This method is improved by Gao et al. [11]
by incorporating the orientation of the violent flow features
resulting in oriented violent flows (OViF) features. Substantial derivative, a concept in fluid dynamics, is proposed by
Mohammadi et al. [20] as a discriminative feature for detecting violent videos. Gracia et al. [13] proposes to use the
blob features, obtained by subtracting adjacent frames, as
the feature descriptor. The improved dense trajectory features commonly used in action recognition is used as a feature vector by Bilinski et al. in [2]. They also propose an
improved Fisher encoding technique that can encode spatiotemporal position of features in a video. Zhang et al. [34]
proposes to use a modified version of motion Weber local
descriptor (MoIWLD) followed by sparse representation as
the feature descriptor.
The hand-crafted feature based techniques used methods
such as bag of words, histogram, improved Fisher encoding, etc. for aggregating the features across the frames.
Recently various models using long short term memory
(LSTM) RNNs [16] have been developed for addressing
problems involving sequences such as machine translation
[27], speech recognition [14], caption generation [31, 29]
and video action recognition [9, 26]. The LSTM was introduced in 1997 to combat the effect of vanishing gradient
problem which was plaguing the deep learning community.
The LSTM incorporates a memory unit which contains information about the inputs the LSTM unit has seen and is
regulated using a number of fully-connected gates. The
same idea of using LSTM for feature aggregation is proposed by Dong et al. in [10] for violence detection. The
method consisted of extracting features using a convolutional neural network from raw pixels, optical flow images
and acceleration flow maps followed by LSTM based encoding and a late fusion.
Recently, Xingjian et al. [30] replaced the fully-connected gate layers of the LSTM with convolutional layers and used this improved model for precipitation nowcasting from radar images, with improved performance. This newer model of the LSTM is named the convolutional LSTM (convLSTM). Later, it has been used
for predicting optical flow images from videos [22] and
for anomaly detection in videos [18]. By replacing the
fully-connected layers in the LSTM with convolutional layers, the convLSTM model is capable of encoding spatiotemporal information in its memory cell.
3. Proposed method
The goal of the proposed study was to develop an end-to-end trainable deep neural network model for classifying videos into violent and non-violent ones. The block diagram of the proposed model is illustrated in Figure 1. The network consists of a series of convolutional layers followed by max pooling operations for extracting discriminant features, and a convolutional long short-term memory (convLSTM) for encoding the frame-level changes, characteristic of violent scenes, present in the video.

Figure 1. Block diagram of the proposed model. The model consists of alternating convolutional (red), normalization (grey) and pooling (blue) layers. The hidden state of the ConvLSTM (green) at the final time step is used for classification. The fully-connected layers are shown in brown colour.
3.1. ConvLSTM
Videos are sequences of images. For a system to identify if a fight is taking place between the humans present
in the video, it should be capable of identifying the locations of the humans and understanding how the motion of these humans changes with time. Convolutional neural networks (CNNs) are capable of generating a good representation of each video frame. For encoding the temporal changes, a recurrent neural network (RNN) is required.
Since we are interested in changes in both the spatial and
temporal dimensions, convLSTM will be a suitable option.
Compared to LSTM, the convLSTM will be able to encode
the spatial and temporal changes using the convolutional
gates present in them. This will result in generating a better
representation of the video under analysis. The equations of
the convLSTM model are given in equations 1-6.
$$i_t = \sigma(w_{xi} * I_t + w_{hi} * h_{t-1} + b_i) \qquad (1)$$
$$f_t = \sigma(w_{xf} * I_t + w_{hf} * h_{t-1} + b_f) \qquad (2)$$
$$\tilde c_t = \tanh(w_{x\tilde c} * I_t + w_{h\tilde c} * h_{t-1} + b_{\tilde c}) \qquad (3)$$
$$c_t = \tilde c_t \odot i_t + c_{t-1} \odot f_t \qquad (4)$$
$$o_t = \sigma(w_{xo} * I_t + w_{ho} * h_{t-1} + b_o) \qquad (5)$$
$$h_t = o_t \odot \tanh(c_t) \qquad (6)$$
In the above equations, ‘*’ represents convolution operation and ‘⊙’ represents the Hadamard product. The hidden
state ht , the memory cell ct and the gate activations it , ft
and ot are all 3D tensors in the case of convLSTM.
For a system to identify a video as violent or non-violent,
it should be capable of encoding localized spatial features
and the manner in which they change with time. Handcrafted features are capable of achieving this with the downside of having increased computational complexity. CNNs
are capable of generating discriminant spatial features but
existing methods use the features extracted from the fully-connected layers for temporal encoding using LSTM. The
output of the fully-connected layers represents a global descriptor of the whole image. Thus the existing methods fail
to encode the localized spatial changes. As a result, they resort to methods involving addition of more streams of data
such as optical flow images [10] which results in increased
computational complexity. It is in this context that the use
of convLSTM becomes relevant as it is capable of encoding the convolutional features of the CNN. Also, the convolutional gates present in the convLSTM are trained to encode the temporal changes of local regions. In this way,
the whole network is capable of encoding localized spatiotemporal features.
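To make Eqs. (1)–(6) concrete, here is a minimal ConvLSTM cell in PyTorch (an assumption on our part — the paper's implementation is in Torch). Fusing the four gate convolutions into a single convolution over the concatenated input and hidden state is mathematically equivalent to the separate weights $w_{x\cdot}$ and $w_{h\cdot}$ above, just more compact.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """ConvLSTM cell implementing Eqs. (1)-(6): all four gates computed by one
    convolution over the concatenated input and previous hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, g, o = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)  # Eqs. (1), (2), (5)
        g = torch.tanh(g)                                               # Eq. (3), c-tilde
        c = g * i + c * f                                               # Eq. (4)
        h = o * torch.tanh(c)                                           # Eq. (6)
        return h, c
```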
3.2. Network Architecture
Figure 1 illustrates the architecture of the network used
for identifying violent videos. The convolutional layers
are trained to extract hierarchical features from the video
frames and are then aggregated using the convLSTM layer.
The network functions as follows: The frames of the video
under consideration are applied sequentially to the model.
Once all the frames are applied, the hidden state of the
convLSTM layer in this final time step contains the representation of the input video frames applied. This video representation, in the hidden state of the convLSTM, is then
applied to a series of fully-connected layers for classification.
In the proposed model, we used the AlexNet model [17]
pre-trained on the ImageNet database as the CNN model for
extracting frame level features. Several studies have found
out that networks trained on the ImageNet database are capable of better generalization and result in improved performance for tasks such as action recognition [25], [19].

Table 1. Classification accuracy obtained with the hockey fight dataset for different models

Input                                              | Classification Accuracy
Video Frames (random initialization)               | 94.1 ± 2.9%
Video Frames (ImageNet pre-trained)                | 96 ± 0.35%
Difference of Video Frames (random initialization) | 95.5 ± 0.5%
Difference of Video Frames (ImageNet pre-trained)  | 97.1 ± 0.55%
In the convLSTM, we used 256 filters in all the gates with a
filter size of 3 × 3 and stride 1. Thus the hidden state of the
convLSTM consists of 256 feature maps. A batch normalization layer is added before the first fully-connected layer.
Rectified linear unit (ReLU) non-linear activation is applied
after each of the convolutional and fully-connected layers.
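The overall wiring could be sketched as follows, reusing the ConvLSTMCell above; the fully-connected head width is an assumption for illustration (the 256 × 6 × 6 feature shape is what torchvision's AlexNet convolutional layers produce for 224 × 224 inputs), not the paper's exact configuration.

```python
import torch.nn as nn
from torchvision import models

class ViolenceNet(nn.Module):
    """Sketch: AlexNet conv features -> convLSTM -> BN -> FC classifier."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        # Pre-trained AlexNet convolutional layers for frame-level features.
        self.features = models.alexnet(pretrained=True).features
        self.convlstm = ConvLSTMCell(input_dim=256, hidden_dim=hidden_dim)
        self.bn = nn.BatchNorm2d(hidden_dim)
        # Fully-connected head; the 1000-unit width is an illustrative guess.
        # A single output logit suits the binary cross-entropy loss.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(hidden_dim * 6 * 6, 1000), nn.ReLU(),
            nn.Linear(1000, 1))

    def forward(self, frames):  # frames: (batch, time, 3, 224, 224)
        h = c = None
        for t in range(frames.size(1)):
            f = self.features(frames[:, t])
            if h is None:
                h = f.new_zeros(f.size(0), self.bn.num_features,
                                f.size(2), f.size(3))
                c = h.clone()
            h, c = self.convlstm(f, h, c)
        return self.classifier(self.bn(h))
```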
In the network, instead of applying the input frames as such, the difference between adjacent frames is given as input. In this way, the network is forced to model the changes taking place between adjacent frames rather than the frames themselves. This is inspired by the technique proposed by Simonyan and Zisserman in [25] of using optical flow images as input to a neural network for action recognition. The difference image can be considered a crude approximation of the optical flow image. So, in the proposed method, the difference between adjacent video frames is applied as input to the network. As a result, the computational complexity involved in optical flow generation is avoided. The network is trained to minimize the binary cross-entropy loss.
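A minimal sketch of this frame-difference preprocessing (function and variable names are illustrative):

```python
import torch

def frame_differences(frames):
    """Turn a (time, channels, H, W) clip into adjacent-frame differences.

    A crude, cheap stand-in for optical flow: the network only sees
    what changed between consecutive frames.
    """
    return frames[1:] - frames[:-1]
```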
4. Experiments and Results

4.1. Experimental Settings

The network is implemented using the Torch library. From each video, N frames equally spaced in time are extracted and resized to a dimension of 256 × 256 for training. This avoids the redundant computation involved in processing all the frames, since adjacent frames contain overlapping information. The number of frames selected is based on the average duration of the videos present in each dataset. The network is trained using the RMSprop algorithm with a learning rate of 10⁻⁴ and a batch size of 16. The model weights are initialized using the Xavier algorithm. Since the number of videos present in the datasets is limited, data augmentation techniques such as random cropping and horizontal flipping are used during the training stage. During each training iteration, a portion of the frame of size 224 × 224 is cropped, from the four corners or from the center, and is randomly flipped before being applied to the network (a sketch of this step is given below). Note that the same augmentation is applied to all the frames of a video. The network is run for 7500 iterations during the training stage. In the evaluation stage, the video frames are resized to 224 × 224 and applied to the network for classification as violent or non-violent. All the training video frames in a dataset are normalized to zero mean and unit variance.
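A sketch of the augmentation described above, assuming clips stored as (time, channels, H, W) tensors; the function name and layout are illustrative:

```python
import random
import torch

def augment_clip(frames, crop=224):
    """Apply the same random crop and flip to every frame of a clip.

    frames: tensor of shape (time, channels, H, W) with H, W >= crop.
    """
    _, _, h, w = frames.shape
    # Pick one of the four corners or the center, as described above.
    positions = [(0, 0), (0, w - crop), (h - crop, 0),
                 (h - crop, w - crop), ((h - crop) // 2, (w - crop) // 2)]
    top, left = random.choice(positions)
    clip = frames[:, :, top:top + crop, left:left + crop]
    if random.random() < 0.5:
        clip = torch.flip(clip, dims=[3])  # horizontal flip
    return clip
```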
4.2. Datasets
To evaluate the effectiveness of the proposed approach
in classifying violent videos, three benchmark datasets are
used and the classification accuracy is reported.
The performance of the proposed method is evaluated on three standard public datasets namely, Hockey
Fight Dataset [21], Movies Dataset [21] and Violent-Flows
Crowd Violence Dataset [15]. They contain videos captured
using mobile phones, CCTV cameras and high resolution
video cameras.
Hockey Fight Dataset: Hockey fight dataset is created by
collecting videos of ice hockey matches and contains 500
fighting and non-fighting videos. Almost all the videos
in the dataset have a similar background and subjects (humans). 20 frames from each video are used as inputs to the
network.
Movies Dataset: This dataset consists of fight sequences
collected from movies. The non-fight sequences are collected from other publicly available action recognition
datasets. The dataset is made up of 100 fight and 100 non-fight videos. As opposed to the hockey fight dataset, the videos of the movies dataset are substantially different in their content. 10 frames from each video are used as inputs to the
network.
Violent-Flows Dataset: This is a crowd violence dataset, as the number of people taking part in the violent events is very large. Most of the videos present in this dataset are
collected from violent events taking place during football
matches. There are 246 videos in this dataset. 20 frames
from each video are used as inputs to the network.
4.3. Results and Discussions
Performance evaluation is done using a 5-fold cross-validation scheme, which is the technique followed in the existing literature. The model architecture selection was done by evaluating the performance of the different models on the hockey fight dataset. The classification accuracies obtained for the two cases, video frames as input and difference of frames as input, are listed in Table 1. From the table, it can also be seen that using a network that is pre-trained on the ImageNet dataset (we used BVLC AlexNet from the Caffe model zoo) results in better performance compared to using a network that is randomly initialized.
Table 2. Comparison of classification results

Method                          Hockey Dataset   Movies Dataset   Violent-Flows Dataset
MoSIFT+HIK [21]                 90.9%            89.5%            -
ViF [15]                        82.9±0.14%       -                81.3±0.21%
MoSIFT+KDE+Sparse Coding [32]   94.3±1.68%       -                89.05±3.26%
Deniz et al. [8]                90.1±0%          98.0±0.22%       -
Gracia et al. [13]              82.4±0.4%        97.8±0.4%        85.43±0.21%
Substantial Derivative [20]     -                96.89±0.21%      96.4%
Bilinski et al. [2]             93.4%            99%              -
MoIWLD [34]                     96.8±1.04%       -                93.19±0.12%
ViF+OViF [11]                   87.5±1.7%        -                88±2.45%
Three streams + LSTM [10]       93.9%            -                -
Proposed                        97.1±0.55%       100±0%           94.57±2.34%
Based on these results, we decided to use the frame difference as input and to use a pre-trained network in the model. Table 2 lists the classification accuracies obtained for the various datasets considered in this study, compared against ten state-of-the-art techniques. From the table, it can be seen that the proposed method improves upon the results of the existing techniques on the hockey fight dataset and the movies dataset.
As mentioned earlier, this study considers aggressive behavior as violent. The biggest problem with this definition occurs in the case of sports. For instance, in the hockey dataset, the fight videos consist of players colliding against each other and hitting one another. So one easy way to detect violent scenes is to check whether one player moves closer to another. But the non-violent videos also contain players hugging each other or doing high fives as part of a celebration, and it is highly likely that such videos could be mistaken as violent. The proposed method is able to avoid this, which suggests that it is capable of encoding the motion of localized regions (motion of limbs, reaction of the involved persons, etc.). However, on the violent-flows dataset, the proposed method is not able to best the previous state-of-the-art technique (it came second in terms of accuracy). Analyzing the dataset, we found that in most of the violent videos only a small part of the crowd is involved in aggressive behavior while a large part remains as spectators. This pushes the network to mark such videos as non-violent, since the majority of the people present behave normally. Further studies are required to devise techniques that alleviate this problem with crowd videos. One technique that can be considered is to divide the frame into sub-regions, predict the output for each region separately, and mark the video as violent if the network outputs violent for any of the regions.
In order to compare the advantage of the convLSTM over the traditional LSTM, a different model that uses an LSTM in its place is trained and tested on the hockey fights dataset.
Table 3. Comparison between convLSTM and LSTM models in terms of classification accuracy obtained in the hockey fights dataset and number of parameters

Model      Accuracy      No. of Parameters
convLSTM   97.1±0.55%    9.6M (9,619,544)
LSTM       94.6±1.19%    77.5M (77,520,072)
The new model consists of the AlexNet architecture followed by an LSTM RNN layer. The output of the last fully-connected layer (fc7) of AlexNet is applied as input to an LSTM with 1000 units. The rest of the architecture is similar to the one that uses the convLSTM. The results obtained with this model and the number of trainable parameters associated with it are compared against the proposed model in Table 3. The table clearly shows the advantage of using the convLSTM over the LSTM, and the capability of the convLSTM to generate a useful video representation. It is also worth mentioning that the number of parameters that must be optimized is far smaller in the case of the convLSTM (9.6M vs 77.5M). This helps the network generalize better without overfitting when data is limited. The proposed model is capable of processing 31 frames per second on an NVIDIA K40 GPU.
5. Conclusions
This work presents a novel end-to-end trainable deep
neural network model for addressing the problem of violence detection in videos. The proposed model consists of
a convolutional neural network (CNN) for frame level feature extraction followed by feature aggregation in the temporal domain using convolutional long short term memory
(convLSTM). The proposed method is evaluated on three
different datasets and resulted in improved performance
compared to the state-of-the-art methods. It is also shown that a network trained to model changes between frames (frame differences) performs better than a network trained using frames as inputs. A comparative study between the traditional fully-connected LSTM and the convLSTM is also carried out, and the results show that the convLSTM model is capable of generating a better video representation than the LSTM with fewer parameters, thereby avoiding overfitting.
References
[1] E. Acar, F. Hopfgartner, and S. Albayrak. Breaking down
violence detection: Combining divide-et-impera and coarse-to-fine strategies. Neurocomputing, 208:225–237, 2016. 2
[2] P. Bilinski and F. Bremond. Human violence recognition and
detection in surveillance videos. In AVSS, 2016. 2, 5
[3] D. Chen, H. Wactlar, M.-y. Chen, C. Gao, A. Bharucha, and
A. Hauptmann. Recognition of aggressive human behavior
using binary local motion descriptors. In International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBS), 2008. 2
[4] L.-H. Chen, H.-W. Hsu, L.-Y. Wang, and C.-W. Su. Violence
detection in movies. In International Conference on Computer Graphics, Imaging and Visualization (CGIV), 2011. 2
[5] C. Clarin, J. Dionisio, and M. Echavez. Dove: Detection of
movie violence using motion intensity analysis on skin and
blood. Technical report, University of the Philippines, 01
2005. 2
[6] A. Datta, M. Shah, and N. D. V. Lobo. Person-on-person
violence detection in video data. In ICPR, 2002. 2
[7] F. D. De Souza, G. C. Chavez, E. A. do Valle Jr, and A. d. A.
Araújo. Violence detection in video using spatio-temporal
features. In Conference on Graphics, Patterns and Images
(SIBGRAPI), 2010. 2
[8] O. Deniz, I. Serrano, G. Bueno, and T.-K. Kim. Fast violence
detection in video. In International Conference on Computer
Vision Theory and Applications (VISAPP), 2014. 2, 5
[9] J. Donahue, L. Anne Hendricks, S. Guadarrama,
M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual
recognition and description. In CVPR, 2015. 2
[10] Z. Dong, J. Qin, and Y. Wang. Multi-stream deep networks
for person to person violence detection in videos. In Chinese
Conference on Pattern Recognition, 2016. 2, 3, 5
[11] Y. Gao, H. Liu, X. Sun, C. Wang, and Y. Liu. Violence detection using oriented violent flows. Image and Vision Computing, 48:37–41, 2016. 2, 5
[12] T. Giannakopoulos, A. Pikrakis, and S. Theodoridis. A
multi-class audio classification method with respect to violent content in movies using bayesian networks. In IEEE
Workshop on Multimedia Signal Processing (MMSP), 2007.
2
[13] I. S. Gracia, O. D. Suarez, G. B. Garcia, and T.-K. Kim. Fast
fight detection. PloS one, 10(4):e0120448, 2015. 2, 5
[14] A. Graves, N. Jaitly, and A.-r. Mohamed. Hybrid speech
recognition with deep bidirectional lstm. In IEEE Workshop on Automatic Speech Recognition and Understanding
(ASRU), 2013. 2
[15] T. Hassner, Y. Itcher, and O. Kliper-Gross. Violent flows:
Real-time detection of violent crowd behavior. In CVPR
Workshops, June 2012. 2, 4, 5
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory.
Neural computation, 9(8):1735–1780, 1997. 2
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet
classification with deep convolutional neural networks. In
NIPS, 2012. 3
[18] J. R. Medel and A. Savakis. Anomaly detection in video
using predictive convolutional long short-term memory networks. arXiv preprint arXiv:1612.00390, 2016. 2
[19] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and learn:
unsupervised learning using temporal order verification. In
ECCV, 2016. 4
[20] S. Mohammadi, H. Kiani, A. Perina, and V. Murino. Violence detection in crowded scenes using substantial derivative. In AVSS, 2015. 2, 5
[21] E. B. Nievas, O. D. Suarez, G. B. García, and R. Sukthankar. Violence detection in video using computer vision
techniques. In International Conference on Computer Analysis of Images and Patterns. Springer, 2011. 2, 4, 5
[22] V. Pătrăucean, A. Handa, and R. Cipolla. Spatio-temporal
video autoencoder with differentiable memory. In ICLR
Workshop, 2016. 2
[23] S. Pfeiffer, S. Fischer, and W. Effelsberg. Automatic audio
content analysis. In ACM International Conference on Multimedia, 1997. 2
[24] P. Rota, N. Conci, N. Sebe, and J. M. Rehg. Real-life violent
social interaction detection. In ICIP, 2015. 2
[25] K. Simonyan and A. Zisserman. Two-stream convolutional
networks for action recognition in videos. In NIPS, 2014. 4
[26] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using lstms. In
ICML, 2015. 2
[27] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence
learning with neural networks. In NIPS, 2014. 2
[28] N. Vasconcelos and A. Lippman. Towards semantically
meaningful feature spaces for the characterization of video
content. In ICIP, 1997. 2
[29] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney,
T. Darrell, and K. Saenko. Sequence to sequence-video to
text. In ICCV, 2015. 2
[30] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong,
and W.-c. Woo. Convolutional lstm network: A machine
learning approach for precipitation nowcasting. In NIPS,
2015. 2
[31] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell:
Neural image caption generation with visual attention. In
ICML, 2015. 2
[32] L. Xu, C. Gong, J. Yang, Q. Wu, and L. Yao. Violent video
detection based on mosift feature and sparse coding. In
ICASSP, 2014. 2, 5
[33] W. Zajdel, J. D. Krijnders, T. Andringa, and D. M. Gavrila.
Cassandra: audio-video sensor fusion for aggression detection. In AVSS, 2007. 2
[34] T. Zhang, W. Jia, X. He, and J. Yang. Discriminative dictionary learning with motion weber local descriptor for violence detection. IEEE Transactions on Circuits and Systems
for Video Technology, 27(3):696–709, 2017. 2, 5
Logical Methods in Computer Science
Vol. 12(1:7)2016, pp. 1–32
www.lmcs-online.org
DOI: 10.2168/LMCS-12(1:7)2016

Submitted: Sep. 10, 2015
Published: Mar. 31, 2016
HISTORY-REGISTER AUTOMATA
RADU GRIGORE AND NIKOS TZEVELEKOS

University of Kent
e-mail address: [email protected]

Queen Mary University of London
e-mail address: [email protected]
Abstract. Programs with dynamic allocation are able to create and use an unbounded
number of fresh resources, such as references, objects, files, etc. We propose History-Register
Automata (HRA), a new automata-theoretic formalism for modelling such programs. HRAs
extend the expressiveness of previous approaches and bring us to the limits of decidability
for reachability checks. The distinctive feature of our machines is their use of unbounded
memory sets (histories) where input symbols can be selectively stored and compared with
symbols to follow. In addition, stored symbols can be consumed or deleted by reset. We
show that the combination of consumption and reset capabilities renders the automata
powerful enough to imitate counter machines, and yields closure under all regular operations
apart from complementation. We moreover examine weaker notions of HRAs which strike
different balances between expressiveness and effectiveness.
1. Introduction
Program analysis faces substantial challenges due to its aim to devise finitary methods and
machines which are required to operate on potentially infinite program computations. A
specific such challenge stems from dynamic generative behaviours such as, for example,
object or thread creation in Java, or reference creation in ML. A program engaging in such
behaviours is expected to generate a possibly unbounded amount of distinct resources, each
of which is assigned a unique identifier, a name. Hence, any machine designed for analysing
such programs is expected to operate on an infinite alphabet of names. The latter need has
brought about the introduction of automata over infinite alphabets in program analysis,
starting from prototypical machines for mobile calculi [27] and variable programs [23], and
recently developing towards automata for verification tasks such as equivalence checks of
ML programs [29, 30], context-bounded analysis of concurrent programs [10, 3] and runtime
program monitoring [20].
The literature on automata over infinite alphabets is rich in formalisms each based
on a different approach for tackling the infiniteness of the alphabet in a finitary manner
(see e.g. [36] for an overview). A particularly intuitive such model is that of Register
2012 ACM CCS: [Theory of computation]: Formal languages and automata theory.
Key words and phrases: register automata, automata over infinite alphabets, infinite systems reachability,
freshness, counter automata.
The automaton starts at state q0 with empty history and nondeterministically makes a transition to state P or O, accepting the respective symbol. From state P, it accepts any input name a which does not appear in any of its histories (this is what ∅ stands for), puts it in history number 1, and moves back to q0. From state O, it accepts any input name a which appears in history number 1, puts it in history number 2, and moves back to q0.

Figure 1: History-register automaton accepting L′0 (state diagram omitted: states q0, P and O, with transition labels ∅,1 and 1,2).
Automata (RA) [23, 31], which are machines built around the concept of an ordinary finite-state automaton equipped with a fixed finite number of registers. The automaton can store
in its registers names coming from the input, and make control decisions by comparing new
input names with those already stored. Thus, by talking about addresses of its memory
registers rather than actual names, a so finitely-described automaton can tackle the infinite
alphabet of names. Driven by program analysis considerations, register automata have been
recently extended with the feature of name-freshness recognition [38], that is, the capability
of the automaton to accept specific inputs just if they are fresh — they have not appeared
before during computation. Those automata, called Fresh-Register Automata (FRA), can
account for languages like the following,
L0 = {a1 . . . an ∈ N∗ | ∀i ≠ j. ai ≠ aj}
which captures the output of a fresh-name generator (N is a countably infinite set of names).
FRAs are expressive enough to model, for example, finitary fragments of languages like the
π-calculus [38] or ML [29].
The freshness oracle of FRAs administers the automata with perhaps too restricted an
access to the full history of the computation: it allows them to detect name freshness, but
not non-freshness. Consider, for instance, the following simple language,
L′0 = {w ∈ ({O, P} × N)∗ | each letter of w appears exactly once in it
      ∧ each (O, a) in w is preceded by some (P, a)}
where the alphabet is made of pairs containing an element from the set {O, P } and a name
(O and P can be seen as different processes or agents exchanging names). The language L′0
represents a paradigmatic scenario of a name generator P coupled with a name consumer O:
each consumed name must have been created first, and no name can be consumed twice. It
can capture e.g. the interaction of a process which creates new files with one that opens
them, where no file can be opened twice. The inability of FRAs to detect non-freshness, as
well as the fact that names in their history cannot be removed from it, do not allow them
to express L′0. More generally, the notion of re-usage or consumption of names is beyond
the reach of those machines. Another limitation of FRAs is the failure of closure under
concatenation, interleaving and Kleene star.
Aiming at providing a stronger theoretical tool for analysing computation with names,
in this work we further capitalise on the use of histories by effectively upgrading them
to the status of registers. That is, in addition to registers, we equip our automata with
a fixed number of unbounded sets of names (histories) where input names can be stored
and compared with names to follow. As histories are internally unordered, the kind of
Figure 2: Expressiveness of history-register automata compared to previous models (in italics); the diagram (omitted) relates RA, FRA, unary HRA, non-reset HRA, HRA and DA/CMA. The inclusion M −→ M′ means that for each A ∈ M we can effectively construct an A′ ∈ M′ accepting the same language as A. All inclusions are strict.
name comparison we allow for is name belonging (does the input name belong to the i-th
history?). Moreover, names can be selected and removed from histories, and individual
histories can be emptied/reset. We call the resulting machines History-Register Automata
(HRA). For example, L′0 is accepted by the HRA with 2 histories depicted in Figure 1, where
by convention we model pairs of symbols by sequences of two symbols.1
The strengthening of the role of histories substantially increases the expressive power of
our machines. More specifically, we identify three distinctive features of HRAs:
(1) the capability to reset histories;
(2) the use of multiple histories;
(3) the capability to select and remove individual names from histories.
Each feature allows us to express one of the paradigmatic languages below, none of which
are FRA-recognisable.
L1 = {a0 w1 . . . a0 wn ∈ N∗ | ∀i. wi ∈ N∗ ∧ a0 wi ∈ L0}   for given a0
L2 = {a1 a′1 . . . an a′n ∈ N∗ | a1 . . . an, a′1 . . . a′n ∈ L0}
L3 = {a1 . . . an a′1 . . . a′n′ ∈ N∗ | a1 . . . an, a′1 . . . a′n′ ∈ L0 ∧ ∀i. ∃j. a′i = aj}
Apart from the gains in expressive power, the passage to HRAs yields a more well-rounded
automata-theoretic formalism for generative behaviours as these machines enjoy closure under
all regular operations apart from complementation. On the other hand, the combination of
features (1-3) above enables us to use histories as counters and simulate counter machines. We
therefore obtain non-primitive recursive bounds for checking language emptiness. Given that
language containment and universality are undecidable already for register automata [31],
HRAs are fairly close to the decidability boundary for properties of languages over infinite
alphabets. Nonetheless, starting from HRAs and weakening them in each of the first two
factors (1,2) we obtain automata models which are still highly expressive but computationally
more tractable. Overall, the expressiveness hierarchy of the machines we examine is depicted
in Figure 2 (weakening in (1) and (2) respectively occurs in the second column of the figure).
Motivation and related work. The motivation for this work stems from semantics and
verification. In semantics, the use of names to model resource generation originates in the
work of Pitts and Stark on the ν-calculus [32] and Stark’s PhD [37]. Names have subsequently
been incorporated in the semantics literature (see e.g. [21, 4, 1, 24]), especially after the advent
1Although, technically speaking, the machines we define below do not handle constants (as e.g. O, P ), the
latter are encoded as names appearing in initial registers, in standard fashion.
of Nominal Sets [18], which provided formal foundations for doing mathematics with names.
Moreover, recent work in game semantics has produced algorithmic representations of game
models using extensions of fresh-register automata [29, 30, 28], thus achieving automated
equivalence checks for fragments of ML and Java. In a parallel development, a research
stream on automated analysis of dynamic concurrent programs has developed essentially
the same formalisms, this time stemming from trace-based operational techniques [10, 3].
This confluence of different methodologies is exciting and encourages the development of
stronger automata for a wider range of verification tasks, and just such an automaton we
examine herein.
Although our work is driven by program analysis, the closest existing automata models
to ours come from XML database theory and model checking. Research in the latter area has
made great strides in the last years on automata over infinite alphabets and related logics
(e.g. see [36] for an overview from 2006). As we show in this paper, history-register automata
fit very well inside the big picture of automata over infinite alphabets (cf. Figure 2) and in
fact can be seen as closely related to Data Automata (DA) [7] or, equivalently, Class Memory
Automata (CMA) [5]. A crucial difference lies in the reset capabilities of our machines, which
allow us to express languages like L1 that cannot be expressed by DA/CMAs. On the other
hand, the local termination conditions of DA/CMAs allow them to express languages that
HRAs cannot capture. We find the correspondence between HRAs and DAs particularly
pleasing as it relates two seemingly very different kind of machines, with distant operational
descriptions and intuitions.
A recent strand of research in foundations of atom-based computation [6, 9, 8] has
examined nominal variants of classical machine models, ranging from finite-state automata
to Turing machines. Finally, since the publication of the conference version of this paper [39],
there has been work in nested DA/CMAs [14, 13], which can be seen as extensions of
non-reset HRAs whereby the histories satisfy some nesting relations. The latter are a clean
extension of our machines, leading to higher reachability complexities.
This article is the journal version of [39], with strengthened results and with full proofs.
Section 4 is new: it collects all results that show how registers can be simulated by other
means. Many upper bounds are tighter: Propositions 4.1, 4.4, 5.4, 6.2, 6.8. Some results are
new: Propositions 4.5, 4.7, 5.6, 6.10. Example 4.3 is new. Most proofs have been revised.
Section 7 is new: it collects in one place the main properties of HRAs. Also, we relate our
work to what has been done after the conference version was published.
Overview. In Section 2 we introduce HRAs and their basic properties. In Section 3 we
examine regular closure properties of HRAs. In Section 4 we explain how registers can
be simulated by other means, such as histories. In Section 5 we prove that emptiness is
Ackermann-complete. In Section 6 we introduce weaker models, and study their properties.
In Section 7 we summarize the main properties of HRAs. In Section 8 we connect HRAs to
existing automata formalisms. We conclude by discussing future directions which emanate
from this work.
2. Definitions and first properties
We start by fixing some notation. Let N be a countably infinite alphabet of names, over
which we range by a, b, c, etc. For any pair of natural numbers i ≤ j, we write [i, j] for the
set {i, i+1, . . . , j}, and for each i we let [i] be the set {1, . . . , i}. For any set S, we write |S|
for the cardinality of S, we write P(S) for the powerset of S, we write Pfn(S) for the set of finite subsets of S, and we write P≠∅(S) for the set of nonempty subsets of S. We write id : S → S for the identity function on S, and img(f) for the image of f : S → T.
We define automata which are equipped with a fixed number of registers and histories where they can store names. Each register is a memory cell where one name can be stored at a time; each history can hold an unbounded set of names. We use the term place to refer to both histories and registers. Transitions are of two kinds: name-accepting transitions and reset transitions. Those of the former kind have labels of the form (X, X′), for sets of places X and X′; and those of the latter carry labels with single sets of places X. A transition labelled (X, X′) means:
• accept name a if it is contained precisely in places X, and
• update places in X and X′ so that a be contained precisely in places X′ after the transition (without touching other names).
By a being contained precisely in places X we mean that it appears in every place in X, and in no other place. In particular, the label (∅, X′) signifies accepting a fresh name (one which does not appear in any place) and inserting it in places X′. On the other hand, a transition labelled by X resets all the places in X; that is, it updates each of them to the empty set (registers are modelled as sets with at most one element). Reset transitions do not accept names; they are ε-transitions from the outside. Note that the label (X, ∅) has different semantics from the label X: the former stipulates that a name appearing precisely in X be accepted and then removed from X; whereas the latter clears all the contents of places in X, without accepting anything.
2.1. Definitions. Formally, let us fix positive integers m and n which will stand for the
default number of histories and registers respectively in the machines we define below. The
set Asn of assignments and the set Lab of labels are:
Asn = { H : [m+n] → Pfn(N) | ∀i > m. |H(i)| ≤ 1 }
Lab = P([m+n])² ∪ P([m+n])
For example, {(i, ∅) | i ∈ [m+n]} is the empty assignment.² We range over elements of Asn by H and variants, and over elements of Lab by ℓ and variants.
Let H ∈ Asn be an assignment, let a ∈ N be a name, let S ⊆ N be a set of names, and
let X ⊆ [m + n] be a set of places. We introduce the following notation:
• We set H@X to be the set of names which appear precisely in places X in H; that is, H@X = ⋂_{i∈X} H(i) \ ⋃_{i∉X} H(i). In particular, H@∅ = N \ ⋃_i H(i) is the set of names which do not appear in H.
• H[X ↦ S] is the update H′ of H so that all places in X are mapped to S; that is, H′ = {(i, H(i)) | i ∉ X} ∪ {(i, S) | i ∈ X}. E.g., H[X ↦ ∅] resets all places in X.
• H[a in X] is the update of H which removes name a from all places and inserts it back in X; that is, H[a in X] is the assignment:
  {(i, H(i) ∪ {a}) | i ∈ X ∩ [m]} ∪ {(i, {a}) | i ∈ X \ [m]} ∪ {(i, H(i) \ {a}) | i ∉ X}
Note above that operation H[a in X] acts differently in the case of histories (i ≤ m) and registers (i > m) in X: in the former case, the name a is added to the history H(i), while in the latter the register H(i) is set to {a} and its previous content is cleared.
2We represent functions as sets of pairs.
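To make these operations concrete, the following is a small Python sketch of assignments and the operations above (a hypothetical encoding: places indexed 1..m+n, names as strings; only membership in H@X is computed, since H@∅ is an infinite set of fresh names):

```python
from typing import Dict, Set

Assignment = Dict[int, Set[str]]  # place index (1..m+n) -> finite set of names

def in_at(H: Assignment, a: str, X: Set[int]) -> bool:
    """a ∈ H@X: the name a appears in every place of X and in no other."""
    return all((a in H[i]) == (i in X) for i in H)

def reset(H: Assignment, X: Set[int]) -> Assignment:
    """H[X ↦ ∅]: empty all places in X."""
    return {i: (set() if i in X else set(s)) for i, s in H.items()}

def put(H: Assignment, a: str, X: Set[int], m: int) -> Assignment:
    """H[a in X]: remove a from all places, then insert it back in X.

    Histories (i <= m) accumulate a; registers (i > m) are overwritten.
    """
    out: Assignment = {}
    for i, s in H.items():
        if i not in X:
            out[i] = s - {a}
        elif i <= m:
            out[i] = s | {a}
        else:
            out[i] = {a}
    return out
```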
We can now define our automata.
Definition 2.1. A history-register automaton (HRA) of type (m, n) is a tuple A = ⟨Q, q0, H0, δ, F⟩ where:
• Q is a finite set of states, q0 is the initial state, F ⊆ Q are the final ones,
• H0 ∈ Asn is the initial assignment, and
• δ ⊆ Q × Lab × Q is the transition relation.
For brevity, we shall call A an (m, n)-HRA.
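In the same illustrative Python encoding, an HRA can be packaged as follows (labels are pairs (X, X′) for name-accepting transitions and plain sets X for resets):

```python
from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple, Union

Places = FrozenSet[int]
Label = Union[Tuple[Places, Places], Places]  # (X, X') or a reset set X

@dataclass
class HRA:
    m: int                                # number of histories
    n: int                                # number of registers
    states: Set[str]
    initial: str
    H0: "Assignment"                      # initial assignment (see sketch above)
    delta: Set[Tuple[str, Label, str]]    # transition relation
    final: Set[str]
```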
We write transitions in the forms q −(X,X′)→ q′ and q −X→ q′, for each kind of transition label. In diagrams, we may unify different transitions with common source and target; for example, q −(X,X′)→ q′ and q −(Y,Y′)→ q′ may be written q −(X,X′ / Y,Y′)→ q′. Moreover, we shall lighten notation and write i for the singleton {i}, and ij for {i, j}.
We already gave an overview of the semantics of HRAs. This is formally defined by means of configurations representing the current computation state of the automaton. A configuration of A is a pair (q, H) ∈ Q̂, where Q̂ = Q × Asn. From the transition relation δ we obtain the configuration graph of A as follows.

Definition 2.2. Let A be an (m, n)-HRA as above. Its configuration graph (Q̂, −→), where −→ ⊆ Q̂ × (N ∪ {ε}) × Q̂, is constructed by setting (q, H) −x→ (q′, H′) if and only if one of the following conditions is satisfied.
• x = a ∈ N and there is q −(X,X′)→ q′ ∈ δ such that a ∈ H@X and H′ = H[a in X′].
• x = ε and there is q −X→ q′ ∈ δ such that H′ = H[X ↦ ∅].

The language accepted by A is

L(A) = { w ∈ N∗ | (q0, H0) =w⇒ (q, H) and q ∈ F }

where =⇒ is the reflexive transitive closure of −→ (i.e. q̂ =x1...xk⇒ q̂′ if q̂ −x1→ · · · −xk→ q̂′). Note that we use ε both for the empty sequence and the empty transition so, in particular, when writing sequences of the form x1 . . . xk we may implicitly consume ε's. It is worth noting here that our formulation follows M-automata [23] in that multiple places can be mentioned at each transition.
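Combining the pieces above, the following is a naive Python sketch of the configuration graph and word acceptance; it is a brute-force illustration of Definition 2.2 on small automata, not a decision procedure (emptiness and its complexity are the subject of Section 5):

```python
from itertools import chain

def step(hra, q, H, a):
    """All configurations reachable from (q, H) by accepting name a."""
    for (p, lab, p2) in hra.delta:
        if p == q and isinstance(lab, tuple):      # name-accepting transition
            X, X2 = lab
            if in_at(H, a, set(X)):
                yield p2, put(H, a, set(X2), hra.m)

def eps_closure(hra, configs):
    """Close a set of configurations under reset (ε) transitions.

    Terminates because resets only empty places, so only finitely many
    assignments are reachable from a given one.
    """
    seen = list(configs)
    i = 0
    while i < len(seen):
        q, H = seen[i]
        i += 1
        for (p, lab, p2) in hra.delta:
            if p == q and not isinstance(lab, tuple):  # reset transition
                c = (p2, reset(H, set(lab)))
                if c not in seen:
                    seen.append(c)
    return seen

def accepts(hra, word):
    """Naive nondeterministic search: does the HRA accept the word?"""
    configs = eps_closure(hra, [(hra.initial, hra.H0)])
    for a in word:
        nxt = list(chain.from_iterable(step(hra, q, H, a) for q, H in configs))
        configs = eps_closure(hra, nxt)
    return any(q in hra.final for q, _ in configs)
```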
Example 2.3. The language L1 of the Introduction is recognised by the following (1,1)-HRA (leftmost below), with initial assignment {(1, ∅), (2, {a0})}. The automaton starts by accepting a0, leaving it in register 2, and moving to state q1. There, it loops accepting fresh names (appearing in no place) which it stores in history 1. From q1 it goes back to q0 by resetting its history.

[Three state diagrams omitted: a (1,1)-HRA recognising L1, a (2,0)-HRA recognising L2, and a (1,0)-HRA recognising L3.]

We can also see that the other two HRAs, of type (2,0) and (1,0), accept the languages L2 and L3 respectively. Both automata start with empty assignments.
Finally, the automaton we drew in Figure 1 is, in fact, a (2,2)-HRA where its two registers
initially contain the names O and P respectively. The transition label O corresponds to
(3, 3), and P to (4, 4).
As mentioned in the introductory section, HRAs build upon (Fresh) Register Automata [23, 31, 38]. The latter can be defined within the HRA framework as follows.³

Definition 2.4. A Register Automaton (RA) of n registers is a (0, n)-HRA with no reset transitions. A Fresh-Register Automaton (FRA) of n registers is a (1, n)-HRA A = ⟨Q, q0, H0, δ, F⟩ such that H0(1) = ⋃_i H0(i) and:
• for all (q, ℓ, q′) ∈ δ, there are X, X′ such that ℓ = (X, X′) and 1 ∈ X′;
• for all (q, ({1}, X′), q′) ∈ δ, there is also (q, (∅, X′), q′) ∈ δ.
Thus, in an FRA all the initial names must appear in its history, and the same holds for all the names the automaton accepts during computation (1 ∈ X′). As, in addition, no reset transitions are allowed, the history effectively contains all names of a run. On the other hand, the automaton cannot recognise non-freshness: if a name appearing only in the history is to be accepted at any point then a totally fresh name can also be accepted in the same way. Now, from [38] we have the following.
Lemma 2.5. The languages L1 , L2 and L3 are not FRA-recognisable.
Proof. L1 was explicitly examined in [38]. For L2 and L3 we use a similar argument as the one for showing that L0 ∗ L0 is not FRA-recognisable [38].
2.2. Bisimulation. Bisimulation equivalence, also called bisimilarity, is a useful tool for
relating automata, even from different paradigms. It implies language equivalence and is
generally easier to reason about than the latter. We will be using it avidly in the sequel.
Definition 2.6. Let Ai = ⟨Qi, q0i, H0i, δi, Fi⟩ be (m, n)-HRAs, for i = 1, 2. A relation R ⊆ Q̂1 × Q̂2 is called a simulation on A1 and A2 if, for all (q̂1, q̂2) ∈ R,
• if q̂1 =⇒ q̂1′ and π1(q̂1′) ∈ F1 then q̂2 =⇒ q̂2′ for some π1(q̂2′) ∈ F2, where π1 is the first projection function and =⇒ stands for a (possibly empty) sequence of ε-transitions;
• if q̂1 =⇒ · −a→ q̂1′ then q̂2 =⇒ · −a→ q̂2′ for some (q̂1′, q̂2′) ∈ R.
R is called a bisimulation if both R and R⁻¹ are simulations. We say that A1 and A2 are bisimilar, written A1 ∼ A2, if ((q01, H01), (q02, H02)) ∈ R for some bisimulation R.
Lemma 2.7. If A1 ∼ A2 then L(A1 ) = L(A2 ).
2.3. Determinism. We close our presentation here by describing the deterministic class of
HRAs. We defined HRAs in such a way that, at any given configuration (q, H) and for any
input symbol a, there is at most one set of places X that can match a, i.e. such that a ∈ H@X.
As a result, the notion of determinism in HRAs can be ensured by purely syntactic means.
Below we write q =X⇒ q′ ∈ δ if there is a sequence of reset transitions q −X1→ · · · −Xn→ q′ in δ such that X = X1 ∪ · · · ∪ Xn. In particular, q =∅⇒ q ∈ δ.
3The definitions given in [23, 31, 38] are slightly different but can routinely be shown equivalent.
Definition 2.8. We say that an HRA A is deterministic when, for any reachable configuration q̂ and any name a, if q̂ =⇒ · −a→ q̂1 and q̂ =⇒ · −a→ q̂2 then q̂1 = q̂2. We say that an HRA A is strongly deterministic when, for any state q and any sets X, X1, X2, Y1, Y2, if q =Y1⇒ · −(X\Y1, X1)→ q1 ∈ δ and q =Y2⇒ · −(X\Y2, X2)→ q2 ∈ δ then q1 = q2, Y1 = Y2 and X1 = X2.
Even if A is deterministic, it is still possible to have multiple paths in the configuration graph that are labeled by the same word. However, such paths may only differ in their ε-transitions. In the definition of ‘strongly deterministic’, the set X is guessing where the name a occurs.
Lemma 2.9. If A is strongly deterministic then it is deterministic.
3. Closure properties
History-register automata enjoy good closure properties with respect to regular language
operations. In particular, they are closed under union, intersection, concatenation and
Kleene star, but not closed under complementation.
In fact, the design of HRAs is such that the automata for union and intersection
come almost for free through a straightforward product construction which is essentially
an ordinary product for finite-state automata, modulo reindexing of places to account for
duplicate labels (cf. [23]). The constructions for Kleene star and concatenation are slightly
more involved as there is need for passing through intermediate automata which do not
touch their initial names.
We shall need the following technical gadget. Given an (m, n)-HRA A and a sequence
w of k distinct names, we construct a bisimilar (m, n+k)-HRA, denoted A fix w, in which
the names of w appear exclusively in the additional k registers, which, moreover, remain
unchanged during computation. The construction will allow us, for instance, to create
feedback loops in automata ensuring that after each feedback transition the same initial
configuration occurs.
Lemma 3.1. Let A be an (m, n)-HRA with initial assignment H0 and w = a1 . . . ak a sequence of distinct names. We can effectively construct an (m, n+k)-HRA A fix w with initial assignment H0′ such that A fix w ∼ A and:
• H0′(m+n+i) = {ai} for all i ∈ [k], and H0′(i) = H0(i) \ {a1, . . . , ak} for all i ∈ [m+n];
• for all reachable configurations (q, H) of A fix w and all i > m+n, H(i) = H0′(i).
Proof. We construct A fix w = ⟨Q′, q0′, H0′, δ′, F′⟩ as follows. First, we insert/move all names of w to the new registers (places [m+n+1, m+n+k]), i.e. we set H0′(i) = H0(i) \ {a1, . . . , ak} for all i ∈ [m+n], and H0′(m+n+i) = {ai} for each i ∈ [k]. The role of the new registers is to constantly store the names in w and act on behalf of other places when the latter intend to use those names: during computation, whenever an ai is captured by a transition of the initial automaton A, in A fix w it will instead be simulated by a transition involving the new registers. In order for the simulation to be accurate, we shall inject inside states information specifying the intended location of the ai's in the places of A. Thus, the states of the new automaton are pairs (q, f), where q ∈ Q and f is a function recording, for each of the new registers, where the name of the register would appear in the original automaton A. That is,

Q′ = Q × {f : [k] → P([m+n]) | ∀j ≠ j′. f(j) ∩ f(j′) ⊆ [m]}

while q0′ = (q0, {(i, {j | ai ∈ H0(j)}) | i ∈ [k]}) and F′ = {(q, f) ∈ Q′ | q ∈ F}. Finally, δ′ operates just like δ albeit taking into account the f's of states to figure out the intended positions of the ai's and, at the same time, update the f's after each transition. We therefore include in δ′ precisely the following transitions. Below we write i° for m+n+i. For each (q, f) ∈ Q′ and q −(X,X′)→ q′ ∈ δ,
• add a transition (q, f) −(X,X′)→ (q′, f);
• if f(i) = X for some i then add (q, f) −({i°},{i°})→ (q′, f′) where f′ = f[i ↦ X′].
Moreover, for each q −X→ q′ ∈ δ include (q, f) −X→ (q′, f′) where f′ = {(j, f(j) \ X) | j ∈ [k]}.
Following the above line of reasoning, we can show that the relation

{((q, H), ((q, f), H′)) | ∀i ∈ [m+n]. H(i) = H′(i) ∪ {aj | i ∈ f(j)}}

with (q, H), ((q, f), H′) reachable configurations, is a bisimulation.
We write L ◦ L′ for concatenation of languages, and L∗ for Kleene closure of a language. We use the same definitions as the standard ones for languages over finite alphabets: L ◦ L′ is { ww′ | w ∈ L ∧ w′ ∈ L′ }, and L∗ is the least fixed-point of the equations ε ∈ L∗ and L∗ ◦ L ⊆ L∗, where ε is the empty word.
Proposition 3.2. Languages recognised by HRAs are closed under union, intersection,
concatenation and Kleene star.
Proof. We show concatenation and Kleene star only. For the former, consider HRAs Ai = ⟨Qi, q0i, H0i, δi, Fi⟩, i = 1, 2, and assume wlog that they have common type (m, n). Let w be an enlistment of all names in H02 and construct Ai′ = Ai fix w, for i = 1, 2. Then, the concatenation L(A1) ◦ L(A2) is the language recognised by connecting A1′ and A2′ serially, that is, the automaton obtained by connecting each final state of A1′ to the initial state of A2′ with a transition labelled [m+n], and with initial/final states those of A1′/A2′ respectively. Finally, given an (m, n)-HRA A and an enlistment w of its initial names, we construct an automaton A′ by connecting the final states of A fix w to its initial state with a transition labelled [m+n]. We can see that L(A′) = L(A)∗.
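In the illustrative encoding above, the serial connection used for concatenation could be sketched as follows; the sketch glosses over the A fix w preprocessing and simply assumes the two automata share type (m, n) and have disjoint state sets:

```python
def concatenate(a1: HRA, a2: HRA) -> HRA:
    """Serial connection for concatenation (sketch, ignoring `fix w`)."""
    assert (a1.m, a1.n) == (a2.m, a2.n)
    all_places = frozenset(range(1, a1.m + a1.n + 1))
    delta = set(a1.delta) | set(a2.delta)
    # Connect each final state of a1 to the initial state of a2 with a
    # reset transition labelled [m+n], clearing all places.
    for q in a1.final:
        delta.add((q, all_places, a2.initial))
    return HRA(a1.m, a1.n, a1.states | a2.states, a1.initial,
               a1.H0, delta, set(a2.final))
```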
As we shall next see, while universality is undecidable for HRAs, their emptiness problem
can be decided by reduction to coverability for transfer-reset vector addition systems. In
combination, these results imply that HRAs cannot be effectively complemented. In fact,
there are HRA-languages whose complements are not recognisable by HRAs. This can be
shown via the following example, adapted from [26].
Lemma 3.3. HRAs are not closed under complementation.
Example 3.4. Consider L4 = {w ∈ N∗ | not all names of w occur exactly twice in it}, which is accepted by the (2, 0)-HRA below.

[State diagram omitted: states q0, q1, q2, q3, each carrying a self-loop labelled ∅,1 / 1,1, with transitions labelled ∅,2 (from q0 to q1), 2,2 (from q1 to q2) and 2,1 (from q2 to q3).]
The automaton non-deterministically selects an input name which either appears only once in the input or at least three times. We claim that the complement of L4, the language of all words whose names occur exactly twice in them, is not HRA-recognisable.
10
R. GRIGORE AND N. TZEVELEKOS
Proof. Suppose it were recognisable (wlog, Proposition 4.1) by an (m, 0)-HRA A with k states. Then, A would accept the word w = a1 . . . ak a1 . . . ak where all ai's are distinct and do not appear in the initial assignment of A. Let p = p1 p2 be the path in A through which w is accepted, with each pi corresponding to one of the two halves of w. Since all ai's are fresh for A, the non-reset transitions of p1 must carry labels of the form (∅, X), for some sets X. Let q be a state appearing twice in p1, say p1 = p11 (q) p12 (q) p13. Consider now the path p′ = p1′ p2 where p1′ is the extension of p1 which repeats p12, that is, p1′ = p11 (q) p12 (q) p12 (q) p13. We claim that p′ is an accepting path in A. Indeed, by our previous observation on the labels of p1, the path p1′ does not block, i.e. it cannot reach a transition q1 −(X,Y)→ q2, with X ≠ ∅, in some configuration (q1, H1) such that H1@X = ∅. We need to show that p2 does not block either (in p′). Let us denote (q, H1) and (q, H2) the configurations in each of the two visits of q in the run of p on w; and let us write (q, H3) for the third visit in the run of p1′, given that for the other two visits we assume the same configurations as in p. Now observe that, for each nonempty X ⊆ [m], repeating p12 cannot reduce the number of names appearing precisely in X, therefore |H2@X| ≤ |H3@X|. The latter implies that, since p does not block, p′ does not block either. Now observe that any word accepted along p′ is not in the complement of L4, as p1′ accepts more than k distinct names, a contradiction.
4. Removing Registers
Although registers are convenient for expressing some languages (Example 4.2 and Example 4.3), when reasoning about HRAs it is more convenient to focus on histories only. In
this section, we show that this approach is sound and in particular we present three ways of
removing registers:
• we can construct a bisimilar automaton if we are allowed to use extra histories and resets;
• we can preserve language if we are allowed to use extra histories (but no resets);
• we can preserve emptiness if we are allowed to use extra states (but no histories nor
resets).
Each of these constructions will be useful in the sequel, e.g. for devising emptiness checks; more generally, they demonstrate that registers can be considered a derivative notion.
4.1. Simulating Registers with Histories and Resets. The semantics of registers is
very similar to that of histories. The main difference is that registers are forced to contain
at most one name. To simulate registers with histories, we reset histories before inserting
names. Resetting histories might cause the automaton to forget names that are necessary
for deciding how to proceed. The solution is to use two histories for each register: one holds
the old name, the other holds the new name. Only the history where the new name will be
written needs to be reset.
Proposition 4.1. Let A = ⟨Q, q0, H0, δ, F⟩ be an (m, n)-HRA. We can construct an (m+2n, 0)-HRA A′ = ⟨Q′, q0′, H0′, δ′, F′⟩ that is bisimilar to A. We have |Q′| ∈ O(2^n |Q|) and |δ′| ∈ O(2^n |δ|). The construction can be done in O((m+n)(|Q′| + |δ′|)) time.
Proof. For each q in Q, we include 2^n states (q, f) in Q′, where f : [n] → [2n] is such that f(i) ∈ {i, i+n} for all i. The name that A holds in register i will be found in history m+f(i) of A′. We set f̄ to be the complement of f; that is, f̄(i) ≜ n + 2i − f(i). Let f†(i) be i if i ∈ [m] and m + f(i) otherwise. Moreover, q0′ = (q0, id), F′ = {(q, f) | q ∈ F}, and H0′ is H0 extended so that H0′(i) = ∅ for all i > m+n. Finally, we include in δ′ precisely the following transitions.
• For each q −(X,X′)→ q′ ∈ δ, add (q, f) −Y→ · −(f†(X), f̄†(X′))→ (q′, f′) where Y = [m+1, m+2n] \ img(f), and f′ is given by: f′(i) = f̄(i) if i ∈ X ∪ X′ and f′(i) = f(i) otherwise. (Note that we need a few extra states in Q′, which remain nameless in this proof.)
• For each q −X→ q′ ∈ δ, add (q, f) −f†(X)→ (q′, f).
The relation { ((q, H), ((q, f), H′)) | H = H′ ∘ f† } witnesses bisimilarity.
4.2. Replacing Registers with Histories using Colours. The result of Proposition 4.1
comes at the cost of introducing reset transitions even if the original automaton does not
have such transitions. Reset transitions are undesirable because they increase the complexity
of the emptiness problem (Section 5). We can avoid introducing reset transitions by using the
colouring technique of [5]. The construction is more involved and the resulting automaton is
not bisimilar to the original, but it is language equivalent.
Before we proceed with the proof, let us illustrate the technique on two examples.
Example 4.2 illustrates how to check that a name is not in the simulated register; Example 4.3
illustrates also how to check that a name is in the simulated register.
Example 4.2. The language
L5 = { a1 . . . an | ai ≠ ai+1 for all i }

is recognized by both of the automata below (with histories initially empty):

[Two state diagrams omitted. Left: a (0,1)-HRA with a single state q0 and a self-loop labelled ∅,1. Right: a (2,0)-HRA with states q0, q1, a self-loop on q0 labelled ∅,1 / 2,1, a self-loop on q1 labelled ∅,2 / 1,2, a transition q0 → q1 labelled ∅,2 / 2,2, and a transition q1 → q0 labelled ∅,1 / 1,1.]
The one on the left is a (0, 1)-HRA, and we can straight away see its accepted language
is L5 . The one on the right is a (2, 0)-HRA for which it is less clear why it accepts L5 . The
reason is the following invariant:
• H(1) ⊎ H(2) is a partition of all names seen so far; and
• if the state is qk then the last seen name is in H(k+1), for k ∈ {0, 1}; and
• all possible partitions of the names are realisable.
The first two points are easy to check. Because all transitions have labels of the form (X, {i}), all seen names are remembered in precisely one history. Because all transitions incoming into qk have labels of the form (X, {k+1}), the last seen name is remembered in H(k+1). The consequence of these first two points is that the automaton on the right accepts a name only if it is different from the last seen name. Indeed, all transitions outgoing from qk have labels of the form (X, X′) with k+1 ∉ X. Thus, the first two points should be seen as
lemmas which let us establish that all words accepted by the automaton on the right belong
to L5 .
Informally, the third point is the key lemma that lets us establish the converse, that all
words in L5 are accepted. However, to see why this is so, we need to rephrase it in a more
formal way. Given a word a1 . . . an ∈ L5, we consider an arbitrary partition H(1) ⊎ H(2) of the set {a1, . . . , an}. Without loss of generality, assume an ∈ H(1). The claim is that
there exists a run of the automaton on the right that accepts the word a1 . . . an and ends in
configuration (q0 , H). We can prove this by induction. If an−1 ∈ H(k + 1) then the previous
state in the run must have been qk , for k ∈ {0, 1}. Suppose an−1 ∈ H(2); the other case is
symmetric. Then, by the induction hypothesis, we know that there is a run that accepts the
word a1 . . . an−1 and ends in configuration (q1, H′), where we choose

H′(1) ≜ H(1) \ {an} if an ∉ {a1, . . . , an−1},  H′(1) ≜ H(1) ∪ {an} if an ∈ {a1, . . . , an−1},  and H′(2) ≜ H(2) \ {an}.
It is clear that H′(1) ⊎ H′(2) is a partition of {a1, . . . , an−1}, as required. Finally, we need to show that (q1, H′) −an→ (q0, H) belongs to the configuration graph. If an ∉ {a1, . . . , an−1}, then this is true because of q1 −(∅,1)→ q0; if an ∈ {a1, . . . , an−1}, then this is true because of q1 −(1,1)→ q0.
Example 4.3. The language

L6 = { a1 . . . an | ai = ai+1 iff i is odd }

is recognized by both of the following automata:

[Two state diagrams omitted. Left: a (0,1)-HRA with states q0, q1, a transition q0 → q1 labelled ∅,1 and a transition q1 → q0 labelled 1,1. Right: a (3,0)-HRA with states q0, q1, q2, q3 and transitions q0 → q1 labelled ∅,1, q1 → q2 labelled 1,2, q1 → q3 labelled 1,3, q2 → q1 labelled ∅,1 / 3,1, and q3 → q1 labelled ∅,1 / 2,1.]
The one on the left is a (0, 1)-HRA, while the one on the right is a (3, 0)-HRA. As in
the previous example, the fact that the (3, 0)-HRA accepts L6 is not immediately clear.
Informally, the reason is the following invariant:
• H(1) ⊎ H(2) ⊎ H(3) is a partition of the names seen so far;
• if the state is qk then the last seen name is in H(k), for k ∈ {1, 2, 3};
• |H(1)| = 1 in q1 , and |H(1)| = 0 otherwise; and
• for each partition H(1) ⊎ H(2) ⊎ H(3) that is compatible with the previous constraints,
there is a nondeterministic run that realises it.
The first two points hold for the same reasons as in Example 4.2. The third point holds
because all incoming transitions of q1 insert a name in H(1), all outgoing transitions of q1
remove a name from H(1), and all runs alternate between state q1 and some other state.
After an odd number of names was processed, the automaton is in state q1 and H(1) contains
only the last name seen. All outgoing transitions from q1 accept a name only if it is in
H(1) — in other words, if it equals the last seen name. After an even and positive number
of names was processed, the automaton is in state q2 or q3 . All outgoing transitions from q2
accept a name only if it is not in H(2), where the last seen name is; q3 acts symmetrically.
Thus, the first three points let us establish that all words accepted by the automaton on the
right belong to L6 .
As in Example 4.2, the last point lets us establish that all words in L6 are accepted.
Given that the proof of the last point is very similar to the one in Example 4.2, let us
only sketch it. Given a word a1 . . . an ∈ L6 with n > 0, we consider an arbitrary partition
H(1) ⊎ H(2) ⊎ H(3) of the set {a1, . . . , an} such that H(1) ⊆ {an}. Let k be such that
an ∈ H(k). Formally, the claim of the last point is that there exists a run of the automaton
on the right that is labelled by the word a1 . . . an and ends in the configuration (qk, H). The case n odd and > 1 is similar to the previous example: we pick k′ such that an−1 ∈ H(k′) and

H′(1) ≜ ∅,  H′(k′) ≜ H(k′) \ {an},  H′(5−k′) ≜ H(5−k′) \ {an} if an ∉ {a1, . . . , an−1},  H′(5−k′) ≜ H(5−k′) ∪ {an} if an ∈ {a1, . . . , an−1}.

Then we invoke the induction hypothesis to show there is a run that accepts a1 . . . an−1 and ends in configuration (qk′, H′). In the case n even, we pick

H′(1) ≜ {an},  H′(2) ≜ H(2) \ {an},  H′(3) ≜ H(3) \ {an}

and then invoke the induction hypothesis to show that there is a run that accepts a1 . . . an−1 and ends in configuration (q1, H′). We skip the case n = 1 in this proof sketch.
We are now ready for the general result, which we prove in two steps. The main
construction will be presented first and only applies to HRAs with initially empty registers.
(The correctness of the main construction requires that certain graphs are 2-colourable. The
arguments in the previous two examples can be seen as giving explicit colouring algorithms
that work in special cases.) At a second stage, we show how to initially simulate nonempty
registers at the expense of some additional histories.
Proposition 4.4. Let A = ⟨Q, q0, H0, δ, F⟩ be an (m, n)-non-reset-HRA with registers initially empty. We can construct an (m+3n, 0)-non-reset-HRA A′ = ⟨Q′, q0′, H0′, δ′, F′⟩ that accepts the same language as A. We have |Q′| ∈ O(2^{2n} · |Q|) and |δ′| ∈ O(2^{3.3n} · |δ|).
Proof. Each register i will be simulated by three histories, named iR, iB and iY respectively.⁴ Each state q ∈ Q will be simulated by several states (q, f) ∈ Q′, where f : [m+1, m+n] → {∅, R, B, Y}. The construction will ensure the following invariant:
• H(iR) ⊎ H(iB) ⊎ H(iY) is a partition of the names that have been written to register i and have subsequently been rewritten by other names or are still in register i;⁵
• |H(iR)| = 1 if f(i) = R, and |H(iR)| = 0 otherwise; and
• if f(i) ≠ ∅ then the current name of register i is in H(i_{f(i)}); register i is empty otherwise.
Thus, according to the last point above, f records in which of the histories iR, iB, iY the current name of register i has been stored. We shall instrument A′ in such a way that it will only store that name, say a, in iR if the next time register i is invoked by A is for reading a. This way we shall ensure that iR never contains more than one name. Otherwise, i.e. if A next invokes i for overwriting its contents, f will map i to one of iB, iY. These can be seen as garbage-collecting histories: they contain all names that have passed through register i and will not be immediately read from it. The reason why we need two of these, iB and iY, is to be able to reuse old names of register i without running the risk of confusing them with its current name a.
To simulate one transition q −(X,X′)→ q′, accepting say a name a, we shall use several transitions of the form (q, f) −(Z,Z′)→ (q′, f′) where f and f′ agree outside X ∪ X′. Let us consider an arbitrary such pair (f, f′), and see how to pick Z and Z′. On histories, X and Z coincide: X ∩ [m] = Z ∩ [m]. For each register i ∈ X \ [m], we need that a be equal to the
⁴ B and Y are the black and yellow colours of [5]; R stands for ‘read’.
⁵ Note that a name can also be transferred out of register i (via a transition with label (X, Y) where i ∈ X \ Y), instead of being directly rewritten, in which case we would not store it in H(iR) ⊎ H(iB) ⊎ H(iY).
name currently written in register i. But, we know the current name in register i only if
Z,Z 0
f (i) = R. That is why we include a transition (q, f ) −→ (q 0 , f 0 ) only if X \ [m] ⊆ f −1 (R),
and we include { iR | i ∈ X \ [m] } in Z. For each register i ∈ [m + 1, m + n] \ X, we must
ensure that a is not equal to the current name in register i, which resides in H(if (i) ). Hence,
we pick all Z such that
Z = (X ∩ [m]) ] Z1 ] Z0
where Z1 = { iR | i ∈ X \ [m] } and Z0 ⊆ { ix | i ∈ [m + 1, m + n] \ X ∧ x ∈ {B, Y} ∧ x =
6 f (i) }
is such that, for all i, |Z0 ∩ {iB, iY}| ≤ 1. Now we must write the current name to histories X′ ∩ [m], and we must simulate writing the current name to registers X′ \ [m]. For each i ∈ X′ \ [m] we shall nondeterministically write the current name to one of iR, iB, iY by guessing whether register i will next be used for reading or not. The place where we write is given by f′(i), that is,
Z′ = (X′ ∩ [m]) ⊎ { i_{f′(i)} | i ∈ X′ \ [m] }.
Finally, we make sure that f′ maps all i ∈ ([m+1, m+n] ∩ X) \ X′ to ∅, as these registers are now empty, i.e. we impose f′(([m+1, m+n] ∩ X) \ X′) ⊆ {∅}.
Thus, in summary, we take Q′ = Q × ([m+1, m+n] → {∅, R, B, Y}) and:
• q0′ = (q0, {(i, ∅) | i ∈ [m+1, m+n]}) and F′ = F × ([m+1, m+n] → {∅, R, B, Y});
• H0′ = H0 ∪ {(i, ∅) | i ∈ [m+n+1, m+3n]};
• we include (q, f) −(Z,Z′)→ (q′, f′) in δ′ just if there is some q −(X,X′)→ q′ in δ such that:
  – X \ [m] ⊆ f⁻¹(R),
  – for all i ∈ [m+1, m+n] \ (X ∪ X′), f′(i) = f(i),
  – for all i ∈ ([m+1, m+n] ∩ X) \ X′, f′(i) = ∅,
  – Z = (X ∩ [m]) ⊎ {iR | i ∈ X \ [m]} ⊎ Z0 with Z0 ⊆ {ix | i ∈ [m+1, m+n] \ X ∧ x ≠ f(i)},
  – Z′ = (X′ ∩ [m]) ⊎ { i_{f′(i)} | i ∈ X′ \ [m] }.
Let us now see what is the size of the HRA A′ so constructed. We have |Q′| ∈ O(4^n · |Q|). For each transition q −(X,X′)→ q′ in A, we introduce several transitions (q, f) −(Z,Z′)→ (q′, f′) in A′. Let us count how many: there are ≤ 4^n choices for f; there are ≤ 3^n choices for Z0, because for each i we pick iB, or iY, or none of them; and there are ≤ 3^n choices for f′, because f′(X′) ⊆ {R, B, Y} and f′ is uniquely determined outside X′. In summary, |δ′| ∈ O(2^{2(1+log₂3)n} · |δ|).
Finally, we show that L(A) = L(A′). Let first w ∈ L(A′) have an accepting transition path p′ in A′ with edges (q_k, f_k) −(Z_k, Z′_k)→ (q_{k+1}, f_{k+1}), for k = 1, ..., N. Reading the definition of δ′ backwards, this yields an accepting transition path p in A with edges q_k −(X_k, X′_k)→ q_{k+1} where
• X_k = (Z_k ∩ [m]) ∪ { i | iR ∈ Z_k },
• X′_k = (Z′_k ∩ [m]) ∪ { i | {iR, iB, iY} ∩ Z′_k ≠ ∅ }.
To see that p accepts w, suppose that p′ yields a sequence of configurations ((q_k, f_k), H′_k). Then, by induction, we can show that p yields a sequence of configurations (q_k, H_k), where:
• for all i ∈ [m], H_k(i) = H′_k(i);
• for all i ∈ [m+1, m+n], if f_k(i) ≠ ∅ then H_k(i) = {a} for some a ∈ H′_k(i_{f_k(i)}), otherwise H_k(i) = ∅;
• for all names a, if a ∈ H′_k@Z_k then a ∈ H_k@X_k.
Hence, w ∈ L(A).
Conversely, let w = a_1 ··· a_N ∈ L(A) have an accepting transition path p in A with edges q_k −(X_k, X′_k)→ q_{k+1} for k = 1, ..., N. We construct a corresponding accepting path p′ in A′ with edges (q_k, f_k) −(Z_k, Z′_k)→ (q_{k+1}, f_{k+1}) as follows. We have that f_0 = {(i, ∅) | i ∈ [m+1, m+n]}. Moreover, Z_k = (X_k ∩ [m]) ∪ W_k and Z′_k = (X′_k ∩ [m]) ∪ W′_k where:
(a) For each position k such that the previous appearance of a_k in w is some a_{k′} = a_k with k′ < k, we set W_k = W′_{k′}. If there is no previous appearance, we set W_k = ∅.
(b) For each position k and i ∈ X′_k \ [m] such that the next appearance of a_k in w is some a_{k′} with i ∈ X_{k′}, we include iR in W′_k.
(c) For each position k and i ∈ X′_k \ [m] such that the next appearance of a_k in w is some a_{k′} with i ∉ X_{k′}, we include in W′_k one of iB, iY. We do the same also if there is no next appearance of a_k in w.
The above specifications determine the values of all W_k, W′_k, modulo the choice between B and Y in case (c). Clearly, if the path p′ can be so constructed then A′ accepts w. It remains to show that p′ can indeed be implemented in A′. The form of the f_k's is derived from (a–c) according to the definition of δ′. But note that the definition of δ′ imposes the following condition:
(d) For each position k and iY ∈ W′_k such that the next appearance of any ix in p′ is in some W_{k′} (with k < k′), we must have iB ∈ W_{k′}. Dually if iB ∈ W′_k.
For example, if iY ∈ W′_2 but none of iB, iY, iR occurs in any of W_3, W′_3, W_4, W′_4, W_5, W′_5, then {iB, iY, iR} ∩ W_6 ⊆ {iB}. This condition stems from the interdiction to include i_{f(i)} in W_k when f(i) ≠ R. The consequence of condition (d) is that in case (c) above we cannot pick B and Y arbitrarily.
We need to show that a choice of “colours” (B and Y) satisfying both (c) and (d) can be made. We achieve this by applying a graph colouring argument. Let us define a labelled graph G with:
• vertices (k, i) and (k, i)′ for each k ∈ [0, N−1] and i ∈ [m+1, m+n];
• for each i and k < k′ as in (c) above, an edge between (k, i)′ and (k′, i) labelled with “=”;
• for each i and k < k′ as in (d) above, an edge between (k, i)′ and (k′, i) labelled with “≠”.
Then, a valid choice of colours can be made as long as G can be coloured with B and Y in such a way that =-connected vertices have matching colours, while ≠-connected vertices have different colours. For the latter, it suffices to show that the graph obtained by merging =-connected vertices can be 2-coloured, for which it is enough to show that G contains no cycles. Suppose G contained a cycle. Then, by definition of the edge relation of the graph, it must be the case that the leftmost vertex (i.e. the one with the least k index) in the cycle is some (k, i)′. The vertex (k, i)′ has two outgoing edges, one for each label. The ≠-edge in particular connects to some (k′, i) such that k < k′, obtained from condition (d). Since (k′, i) is part of the cycle, it must have an outgoing =-edge to some vertex (k′′, i)′ with k′′ < k′. But note that condition (d) stipulates that there is no mention of i between k and k′ in p′, and therefore k′′ ≤ k. Moreover, k′′ = k is not an option as it would imply that register i was not rewritten between steps k and k′ in p, in which case k and k′ would fall under case (b) above. Hence, k′′ < k, which contradicts our assumption that (k, i)′ was the leftmost vertex in the cycle.
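To make the colouring step concrete, the following is a minimal sketch (in Python, with names of our own choosing; the paper itself gives no algorithm) of how such a B/Y assignment can be computed: merge the =-connected vertices with a union-find structure, then 2-colour the resulting forest by breadth-first search.

    from collections import defaultdict, deque

    def two_colour(vertices, eq_edges, neq_edges):
        # Union-find: merge vertices connected by "=" edges.
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in eq_edges:
            parent[find(u)] = find(v)
        # Graph of "!=" constraints between the merged classes.
        adj = defaultdict(set)
        for u, v in neq_edges:
            ru, rv = find(u), find(v)
            adj[ru].add(rv)
            adj[rv].add(ru)
        # 2-colour each component by BFS; by the acyclicity argument
        # above, no conflict can arise for the graphs G of this proof.
        colour = {}
        for v in vertices:
            r = find(v)
            if r in colour:
                continue
            colour[r] = 'B'
            queue = deque([r])
            while queue:
                x = queue.popleft()
                for y in adj[x]:
                    if y not in colour:
                        colour[y] = 'Y' if colour[x] == 'B' else 'B'
                        queue.append(y)
                    elif colour[y] == colour[x]:
                        return None  # odd cycle: impossible here
        return {v: colour[find(v)] for v in vertices}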
Note that the above result can be extended to handle the case in which registers are
not initially empty by simply making use of Lemma 3.1. However, the construction in
that lemma leads to a doubly exponential blow-up in size, which can be avoided by the
alternative approach that follows.
Proposition 4.5. Let A = ⟨Q, q0, H0, δ, F⟩ be an (m,n)-non-reset-HRA. We can construct a bisimilar (m+n, n)-non-reset-HRA A′ = ⟨Q′, q0′, H0′, δ′, F′⟩ such that, for all i ∈ [m+n+1, m+2n], H0′(i) = ∅. Moreover, we have |Q′| ∈ O(2^n · |Q|) and |δ′| ∈ O(2^{2n} · |δ|).
Proof. The main idea behind the construction of A′ is to use the additional n histories to store just the initial names of the registers in A. Once these names have been used in the computation, they are transferred to their actual registers (if any). We will also need to track which of the registers in A are still simulated by histories in A′. Thus, we set
Q′ = Q × ([m+1, m+n] → {0, 1})
and q0′ = (q0, {(i, 1) | i ∈ [m+1, m+n]}), H0′ = H0 ∪ {(m+n+i, ∅) | i ∈ [n]} and F′ = F × ([m+1, m+n] → {0, 1}). Moreover, for each q −(X,X′)→ q′ in δ and map f, we include in δ′ a transition (q, f) −(Y,Y′)→ (q′, f′) where:
• Y = (X ∩ [m]) ⊎ Y1 ⊎ Y0, where
Y1 = { i ∈ X | f(i) = 1 } ∪ { n+i | i ∈ X ∧ f(i) = 0 }
and Y0 ⊆ { i ∈ [m+1, m+n] | i ∉ X ∧ f(i) = 0 };
• Y′ = (X′ ∩ [m]) ∪ { n+i | i ∈ X′ \ [m] };
• f′ = f[i ↦ 0 | i ∈ X ∪ X′].
Then, taking R to be the relation
R = { ((q, H), ((q, f), H′)) | H↾[m] = H′↾[m] ∧ ∀i ∈ [m+1, m+n].
  (f(i) = 1 ⟹ H′(n+i) = ∅ ∧ H′(i) = H(i))
  ∧ (f(i) = 0 ⟹ H′(n+i) = H(i))
  ∧ ∀j ∈ [m+1, m+n]. H′(i) ∩ H′(n+j) = ∅ }
we can show that R is a bisimulation.
Let us now see what is the size of the HRA A′ we constructed. We have |Q′| ∈ O(2^n · |Q|). For each transition q −(X,X′)→ q′ in A, we introduce several transitions (q, f) −(Y,Y′)→ (q′, f′) in A′: there are ≤ 2^n choices for f; and there are ≤ 2^n choices for Y0. In summary, |δ′| ∈ O(2^{2n} · |δ|).
Hence, the general case follows.
Corollary 4.6. Let A = ⟨Q, q0, H0, δ, F⟩ be an (m,n)-non-reset-HRA. We can construct an (m+4n, 0)-non-reset-HRA A′ = ⟨Q′, q0′, H0′, δ′, F′⟩ that accepts the same language as A. We have |Q′| ∈ O(2^{3n} · |Q|) and |δ′| ∈ O(2^{5.3n} · |δ|).
4.3. Simulating Registers Symbolically. So far we saw how to simulate registers using
histories. If we are interested only in emptiness/reachability rather than language equivalence,
we can actually simulate the behaviour of registers without the inclusion of additional histories.
This alternative is going to be crucial in Section 6.2, where the number of histories will be
fixed to just one.
We next describe how this simulation can be done. Given an assignment H with m
histories and n registers, we can represent H symbolically as follows:
• we map each name stored in the registers of H to a number from the set [n];
• we subsequently replace in H all these names by their number.
For example, consider the assignment
1 ↦ {a, b, c}, 2 ↦ {d}, 3 ↦ ∅, 4 ↦ {a}, 5 ↦ {d}
of a (1,4)-HRA. We can simulate it symbolically by mapping d to 1, and a to 2. This results in the symbolic representation:
1 ↦ {2, b, c}, 2 ↦ 1, 3 ↦ ∅, 4 ↦ 2, 5 ↦ 1
where the nominal part has been curtailed to the fact that H(1) contains the names b and c.
We can now employ this representation technique to represent configurations of (m,n)-HRAs by corresponding ones belonging to (m,0)-HRAs. In particular, given the configuration
(q, 1 ↦ {a, b, c}, 2 ↦ {d}, 3 ↦ ∅, 4 ↦ {a}, 5 ↦ {d})
of a (1,4)-HRA, we map it to the configuration
((q, 1 ↦ {2}, 2 ↦ 1, 3 ↦ ∅, 4 ↦ 2, 5 ↦ 1), 1 ↦ {b, c})
of a (1,0)-HRA which incorporates the non-nominal part of our representation scheme in its state. Clearly, the state space of the new automaton in this simulation will experience an exponential blowup, as the next result shows. However, no additional histories will be needed, which is the main target here.
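As an illustration of the scheme, here is a small sketch (Python; the function and variable names are ours, not the paper's) that computes the symbolic representation of an assignment: register names are numbered, and their occurrences in histories are replaced by those numbers.

    def symbolic(H, m, n):
        # H maps places 1..m+n to contents: a set of names for each
        # history, and a name or None for each register.
        sym = {}  # name -> symbol in [1..n]
        for i in range(m + 1, m + n + 1):
            name = H[i]
            if name is not None and name not in sym:
                sym[name] = len(sym) + 1
        skeleton, hist = {}, {}
        for i in range(1, m + 1):
            skeleton[i] = {sym[a] for a in H[i] if a in sym}
            hist[i] = {a for a in H[i] if a not in sym}
        for i in range(m + 1, m + n + 1):
            skeleton[i] = {sym[H[i]]} if H[i] is not None else set()
        return skeleton, hist

    # The (1,4)-HRA example from the text:
    H = {1: {'a', 'b', 'c'}, 2: 'd', 3: None, 4: 'a', 5: 'd'}
    skel, hist = symbolic(H, m=1, n=4)
    # skel == {1: {2}, 2: {1}, 3: set(), 4: {2}, 5: {1}}
    # hist == {1: {'b', 'c'}}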
Proposition 4.7. Let A = ⟨Q, q0, H0, δ, F⟩ be an (m,n)-HRA. We can construct an (m,0)-HRA A′ = ⟨Q′, q0′, H0′, δ′, F′⟩ that is empty if and only if A is empty. We have |Q′| ∈ O(2^{mn} n B_n |Q|) and |δ′| ∈ O(2^{mn} n B_n |δ|), where B_n is the nth Bell number. Moreover, A′ contains reset transitions if and only if A contains reset transitions.
Proof. Each state q ∈ Q will be simulated by several states (q, f) ∈ Q′, where f : [m+n] → P([n]) will be called an assignment skeleton. Such a skeleton f is valid when:
• |f(i)| ≤ 1 for registers i ∈ [m+1, m+n];
• f(i) ⊆ ⋃_{j=1}^{n} f(m+j) for histories i ∈ [m];
• for all k ∈ [n] there is a k′ ∈ [k] such that ⋃_{i=1}^{k} f(m+i) = [k′].
The latter condition essentially stipulates that f is a partition function on the set [m+1, m+n]: the elements of the set are uniquely assigned numbers which can be seen as class indices; two elements are assigned the same number iff they belong to the same class. There is also a special class in this partition, namely of all elements of [m+1, m+n] to which f assigns ∅.
We can now define the rest of A′. First, we let q0′ = (q0, f0), where (f0, H0′) is the symbolic representation of H0. In order to construct δ′ we define a transition relation on skeletons, which is very similar to the configuration graph of HRAs (Definition 2.2) except that it allows symbols to be permuted after the transition is taken. We write f −(X,X′)→ f′ when there exists a permutation π on [n] and a k ∈ [n] such that k ∈ f@X and f′ = π ∘ (f[k in X′]). We write f −(X)→ f′ when there exists a permutation π such that f′ = π ∘ (f[X ↦ ∅]).
To simulate one transition of the form q −(X,X′)→ q′ from δ, we use several transitions of the form (q, f) −(ℓ)→ (q′, f′) in δ′. Let us consider an arbitrary pair (f, f′) of valid skeletons, and see how to pick ℓ. There are four cases, depending on whether X and X′ mention registers or not.
• Case X ⊆ [m] and X′ ⊆ [m]. It must be that f −(∅,∅)→ f′, and we pick ℓ = (X, X′).
• Case X ⊆ [m] and X′ ⊈ [m]. It must be that f −(∅,X′)→ f′, and we pick ℓ = (X, ∅).
• Case X ⊈ [m] and X′ ⊆ [m]. It must be that f −(X,∅)→ f′, and we pick ℓ = (∅, X′).
• Case X ⊈ [m] and X′ ⊈ [m]. It must be that f −(X,X′)→ f′, and we pick ℓ = (∅, ∅).
Similarly, each reset transition q −(X)→ q′ from δ yields several transitions of the form (q, f) −(Z)→ (q′, f′) in δ′. Given an arbitrary pair (f, f′) of valid skeletons, we pick Z as follows. If X ⊆ [m] then it must be that f′ = f and we pick Z = X. Otherwise, f −(X)→ f′ and we pick Z = ∅.
To estimate |Q′| it suffices to count how many valid skeletons there are. The values f(m+1), ..., f(m+n) of a valid skeleton correspond to a partition of the registers and a selection of a class (if any) whose registers are empty. There are B_n possible partitions and ≤ (n+1) possible selections, which gives ≤ (n+1)B_n cases. For the values f(1), ..., f(m) of a valid skeleton there are ≤ 2^{mn} possibilities. In total, |Q′| ≤ 2^{mn}(n+1)B_n|Q|.
To estimate |δ′|, note that once f is fixed in the construction above, the constraints on f′ determine it uniquely. So, the number of transitions increases by the same factor as the number of states.
Since log B_n ∈ Θ(n log n), we have that log(2^{mn}(n+1)B_n) ∈ Θ(mn + n log n).
5. Emptiness and Universality

5.1. Emptiness. Here we show that deciding emptiness is Ackermann-complete. We work by reducing from and to state reachability problems in counter systems (similarly e.g. to [7, 5]). For the upper bound, we reduce nonemptiness of HRAs to control-state reachability of T-VASSs. For the lower bound, we reduce control-state reachability of R-VASSs to nonemptiness of HRAs. Recall that the nonemptiness problem for HRAs asks, given an HRA A with initial state q0 and initial assignment H0, whether (q0, H0) −(w)→→ (qF, HF) for some word w, final state qF and assignment HF.
The configurations of a k-dimensional TR-VASS (Transfer-Reset Vector Addition System with States) have the form (q, ~v), where q is a state from a finite set, and ~v is a k-dimensional vector of nonnegative counters. A VASS has moves that shift the counter vector, changing ~v into ~v + ~v′, where ~v′ comes from some finite and fixed subset of Z^k. An R-VASS also has moves that reset a counter, changing ~v into ~v[i ↦ 0] for some counter i. A T-VASS also has moves that transfer the content of one counter into another counter, changing ~v into ~v[j ↦ ~v(i) + ~v(j)][i ↦ 0] for some i ≠ j. A TR-VASS is the obvious combination of the above, and is formally defined as follows.
Definition 5.1 (TR-VASS). A k-dimensional Transfer-Reset Vector Addition System with States A is a pair ⟨Q, δ⟩, where Q is a finite set of states, and δ ⊆ Q × (Z^k ⊎ [k]² ⊎ [k]) × Q is a transition relation. A configuration of A is a pair (q, ~v) of a state q and a vector ~v ∈ N^k of counter values. The configuration graph of A is constructed by including an arc (q, ~v) → (q′, ~v′) when one of the following holds:
• there is some (q, ~v′′, q′) ∈ δ such that ~v′ = ~v + ~v′′;
• there is some (q, (i, j), q′) ∈ δ such that ~v′ = ~v[i ↦ 0][j ↦ ~v(i) + ~v(j)] and i ≠ j;
• there is some (q, (i, i), q′) ∈ δ and ~v′ = ~v;
• there is some (q, i, q′) ∈ δ such that ~v′ = ~v[i ↦ 0].
The control-state reachability problem for A asks whether, given states q0, qF and an initial vector ~v0, there is some ~vF such that (q0, ~v0) →→ (qF, ~vF).
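For concreteness, a one-step successor function for TR-VASSs might look as follows (a Python sketch under our own encoding of δ; not from the paper).

    def successors(config, delta):
        # delta: transitions (q, op, q2) with op one of
        # ('add', vec), ('transfer', i, j), ('reset', i);
        # counters are 0-indexed here, unlike the definition.
        q, v = config
        for (p, op, p2) in delta:
            if p != q:
                continue
            if op[0] == 'add':
                w = [a + b for a, b in zip(v, op[1])]
                if all(a >= 0 for a in w):    # counters stay in N
                    yield (p2, tuple(w))
            elif op[0] == 'transfer':
                _, i, j = op
                w = list(v)
                if i != j:                     # (i, i) leaves v unchanged
                    w[j] += w[i]
                    w[i] = 0
                yield (p2, tuple(w))
            elif op[0] == 'reset':
                w = list(v)
                w[op[1]] = 0
                yield (p2, tuple(w))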
The reduction from an (m,0)-HRA to a T-VASS of dimension 2^m − 1 is done by mapping each nonempty set of histories X into a counter X̃ of the corresponding T-VASS. Then, name-accepting transitions are mapped into counter decreases and increases, while resets result in transfers between the counters. Let ·̃ : P([m]) → [0, 2^m − 1] be a bijection such that ∅̃ = 0; for instance, one could take X̃ = Σ_{i∈X} 2^{i−1}. Further, given an assignment H, let H̃ denote the vector (h_1, ..., h_{2^m−1}) ∈ N^{2^m−1} such that h_{X̃} = |H@X|, for all nonempty X ⊆ [m]; that is, h_{X̃} counts how many names occur in exactly the histories indexed by X.
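A small sketch of this encoding (Python, with our own names; not from the paper), taking X̃ = Σ_{i∈X} 2^{i−1}:

    def tilde(X):
        # counter index of a nonempty set of histories X ⊆ {1,...,m}
        return sum(1 << (i - 1) for i in X)

    def vec(H, m):
        # component tilde(X) counts the names occurring in exactly
        # the histories X; component 0 (the empty set) is dropped
        v = [0] * (1 << m)
        for a in set().union(*H.values()):
            X = {i for i in range(1, m + 1) if a in H[i]}
            v[tilde(X)] += 1
        return v[1:]

    # e.g. vec({1: {'a', 'b'}, 2: {'b'}}, 2) == [1, 0, 1]:
    # 'a' lies in exactly {1} (index 1), 'b' in exactly {1,2} (index 3).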
Lemma 5.2. Given an (m,0)-HRA A it is possible to construct a T-VASS A′ of dimension 2^m − 1 such that, for all q, q′, H, H′,
∃w. (q, H) −(w)→→_A (q′, H′)   if and only if   (q, H̃) →→_{A′} (q′, H̃′).
Let Q and δ be the states and the transitions of A, and let Q′ and δ′ be the states and the transitions of A′. We have that |Q′| ∈ O(2^m|Q|) and |δ′| ∈ O(2^m|δ|). Moreover, the construction takes O(|Q′| + m|δ′|) time. If there are no reset transitions in A, then A′ is a |δ|-dimensional VASS with Q′ = Q and |δ′| = |δ| that uses only increments and decrements.
Let ~0 be the all-zero vector (0, ..., 0). Let ~δ_i be ~0[i ↦ 1] for i ∈ [2^m − 1], and ~δ_0 be ~0.
Proof. For each transition q −(X,X′)→ q′ of the HRA, we construct a transition q −(~δ_{X̃′} − ~δ_{X̃})→ q′ in the T-VASS. For each reset transition q −(X)→ q′ of the HRA, we construct a path
q −(1, j_1)→ · −(2, j_2)→ · −(3, j_3)→ ··· −(2^m−2, j_{2^m−2})→ · −(2^m−1, j_{2^m−1})→ q′
in the T-VASS such that j_{Ỹ} = Z̃, where Z = Y \ X. To construct such a path we iterate through the 2^m − 1 nonempty sets Y, and for each we compute Y \ X in O(m) time.
Lemma 5.2 implies that nonemptiness of an HRA reduces to control-state reachability of a T-VASS. We shall describe an algorithm that solves control-state reachability for the
T-VASS constructed in Lemma 5.2. The analysis of this algorithm depends on the so-called
Length Function Theorem, which is phrased in terms of the Fast Growing Hierarchy and
bad sequences. We define these next.
The Fast Growing Hierarchy consists of classes F_0, F_1, F_2, ... of functions, where F_0 = F_1 contain the linear functions, F_2 contains the elementary functions, primitive recursive functions are in F_k for some finite k, and F_ω is the Ackermann complexity class. The classes F_k are defined in terms of the following functions:
F_0(x) ≜ x + 1      F_{n+1}(x) ≜ (F_n ∘ ··· ∘ F_n)(x) = F_n^{x+1}(x)  (x+1 compositions of F_n)      F_ω(x) ≜ F_x(x)
For k ≥ 2, (a) f ∈ F_k if and only if f ∈ O(F_k^n) for some n; and (b) a nondeterministic algorithm using space bounded by some function in F_k can be transformed into a deterministic algorithm using time bounded by some (other) function in F_k.
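To get a feel for the growth, here is a direct transcription of these definitions (Python sketch; only sensible for tiny arguments).

    def F(n, x):
        # F_0(x) = x + 1;  F_{n+1}(x) = F_n applied x+1 times to x
        if n == 0:
            return x + 1
        for _ in range(x + 1):
            x = F(n - 1, x)
        return x

    assert F(1, 3) == 7     # F_1 roughly doubles
    assert F(2, 3) == 63    # F_2 is already exponential
    # F(3, x) is a tower of exponentials; F_omega(x) = F(x, x).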
Let X be a partially ordered set with some size function |·| : X → N. We say that a sequence x_0, x_1, x_2, ... of elements of X is a bad sequence when x_i ≰ x_j for all i < j. Given a strictly increasing function g : N → N, we say that the sequence is controlled by g when |x_{i+1}| ≤ g(|x_i|) for all i. We will consider such sequences of VASS configurations, where the order is given by
(q, ~v) ≤ (q′, ~v′)   iff   q = q′ ∧ ~v(1) ≤ ~v′(1) ∧ ~v(2) ≤ ~v′(2) ∧ ~v(3) ≤ ~v′(3) ∧ ...
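Badness under this order can be checked directly, as in the following sketch (Python, with our own names):

    def leq(c1, c2):
        (q1, v1), (q2, v2) = c1, c2
        return q1 == q2 and all(a <= b for a, b in zip(v1, v2))

    def is_bad(seq):
        # bad: no earlier configuration is <= a later one
        return all(not leq(seq[i], seq[j])
                   for i in range(len(seq))
                   for j in range(i + 1, len(seq)))

    assert is_bad([('q', (2, 0)), ('q', (1, 1)), ('q', (0, 2))])
    assert not is_bad([('q', (1, 0)), ('q', (1, 1))])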
Lemma 5.3 (Length Function Theorem [34]). Let q̂_0, q̂_1, q̂_2, ... be a bad sequence of k-dimensional VASS configurations. If the sequence is controlled by some function g ∈ F_γ with γ ≥ 1, then its length is bounded by f(|q̂_0|) for some function f ∈ F_{γ+k}.
We can now describe and analyze an algorithm for deciding emptiness of an (m,0)-HRA.
Proposition 5.4. The emptiness problem for (m,0)-HRAs is in F_{2^m} when m > 0. Thus, the emptiness problem is in F_ω when m is part of the input.
Proof. Let A be the given HRA, and let A′ be the T-VASS constructed as in Lemma 5.2. We use the backward coverability algorithm [34, Sections 1.2.2 and 2.2.2], which explores all bad sequences (q_0, ~v_0), (q_1, ~v_1), ..., (q_L, ~v_L) such that
• (q_0, ~v_0) is a minimal final configuration, and
• (q_k, ~v_k) is a minimal configuration out of those that can reach a configuration ≥ (q_{k−1}, ~v_{k−1}).
The constraint that (q_0, ~v_0) is a minimal final configuration simply means that q_0 is final and ~v_0 = ~0. To construct such sequences, we need an effective way of generating all possible (q_k, ~v_k), given a fixed (q_{k−1}, ~v_{k−1}). For this, we enumerate all transitions of A′ that go to q_{k−1}. There are two types of such transitions: q_k −(~δ_i − ~δ_j)→ q_{k−1} and q_k −(i, j)→ q_{k−1}. For q_k −(~δ_i − ~δ_j)→ q_{k−1}, we let ~v_k be max(~v_{k−1} − ~δ_i + ~δ_j, ~0), where max is taken pointwise. For q_k −(i, j)→ q_{k−1} with i = j, we take ~v_k to equal ~v_{k−1}. For q_k −(i, j)→ q_{k−1} with i ≠ j, we may have multiple choices for ~v_k. Assuming ~v_{k−1}(i) = 0, it could be that ~v_k(i) is any of 0, 1, ..., ~v_{k−1}(j); otherwise, if ~v_{k−1}(i) ≠ 0, the transition could not have been taken. In all the cases from above, we keep only those choices of ~v_k that ensure the sequence is bad.
only those choices of ~vk that ensure the sequence is bad.
To show that the sequences so constructed are finite, we use Lemma 5.3. Let |(q, ~v )| be the
number bits in a concrete representation of (q, ~v ): we encode q with ∼ log2 |Q0 | = m log2 |Q|
bits, then we write each ~v (i) in binary, precede each of its bits by 1 and mark the end
with 0. (For example, we represent 5 by 1110110.) For the sequences constructed as in the
previous paragraph, we have |(qk , ~vk )| ≤ 2 · |(qk−1 , ~vk−1 )|. Thus, the sequences are controlled
by g(x) = 2x, which is a function in F1 . As A0 has dimension 2m − 1, Lemma 5.3 gives us
that the length L of the sequence is bounded by some function in F2m .
A nondeterministic algorithm can repeatedly guess the correct successor in the sequence,
using 2L · |(q0 , ~0)| space. If m ≥ 1, then this is bounded by f (m log |Q|) for some function
f ∈ F_{2^m}, and we are in a situation where the distinctions time/space and deterministic/nondeterministic are irrelevant.
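The self-delimiting counter encoding used in the proof can be sketched as follows (Python; the function name is ours):

    def encode_counter(v):
        # precede each binary digit by 1, then mark the end with 0
        return ''.join('1' + bit for bit in format(v, 'b')) + '0'

    assert encode_counter(5) == '1110110'   # 5 = 101 in binary
    # The encoding is prefix-free, so the counters of a configuration can
    # simply be concatenated; a value v costs 2*len(format(v, 'b')) + 1
    # bits, which is why one step at most doubles the representation size.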
It is possible to modify the algorithm described in the previous proof so that it works
directly on the HRA representation, without appealing to Lemma 5.2. Similarly, it is possible
to extend the algorithm described in the previous proof to handle registers directly, without
appealing to Proposition 4.1. Such improvements may be worthwhile in an implementation,
but the complexity upper bound remains Ackermannian.
Doing the opposite reduction we show that deciding emptiness is Ackermann-hard
even for strongly deterministic HRAs. In this direction, each R-VASS of dimension m can
be simulated by an (m, 0)-HRA so that the value of each counter i of the former is the same
as the number of names appearing precisely in history i of the latter. In order to extend the
bound to strongly deterministic HRAs one can choose to reduce from a restricted class of
R-VASSs, so that the image of the reduction can be made strongly deterministic, or resolve
nondeterminacy at the level of HRAs by appropriate obfuscation. We follow the latter,
simpler solution.
Proposition 5.5. The emptiness problem for strongly deterministic HRAs is Ackermann-hard.
Proof. Let A be an m-dimensional R-VASS whose additive transitions only increment or decrement single counters: for each transition q −(~v)→ q′, we have ~v = ±~δ_i for some i. By [35], control-state reachability for such R-VASSs is Ackermann-hard. We construct an (m,0)-HRA A′ with the same states as A, and we map: each q −(+~δ_i)→ q′ to q −(∅,{i})→ q′, each q −(−~δ_i)→ q′ to q −({i},∅)→ q′, and each reset q −(i)→ q′ to q −({i})→ q′. We can see that A′ simulates the behaviour of A by storing the value of each counter i as |H@{i}|. Hence, L(A′) is nonempty if and only if qF is reachable, from (q0, ~v0), in the R-VASS A.
We observe that A′ may not be strongly deterministic. Suppose that the size of the transition function of A is n. We can then impose strong determinacy on A′ by enriching it with n registers and prefixing each transition of the above translation with a transition reading from one of the additional registers. We thus obtain an (m,n)-HRA that is strongly deterministic and simulates A as above.
Proposition 5.6. The emptiness problem of HRAs is Ackermann-complete.
Proof. Combine Proposition 4.1 with Proposition 5.4 and Proposition 5.5.
5.2. Universality. We finally consider universality and language containment. Note first
that our machines inherit undecidability of these properties from register automata [31].
However, these properties are decidable in the deterministic case.
In order to simplify our analysis, we shall be reducing HRAs to the following compact form where reset transitions are incorporated inside name-accepting ones. As we show below, no expressiveness is lost by this packed form.
A packed (m,0)-HRA is a tuple A = ⟨Q, q0, δ, H0, F⟩ defined exactly as an (m,0)-HRA, with the exception that now:
δ ⊆ Q × P([m]) × P([m]) × P([m]) × Q
We shall write q −(Y; X, X′)→ q′ for (q, Y, X, X′, q′) ∈ δ. The semantics of such a transition is the same as that of a pair of transitions q −(Y)→ · −(X,X′)→ q′ of an ordinary HRA. Formally, configurations of packed HRAs are pairs (q, H), like in HRAs, and the configuration graph of a packed HRA A like the above is constructed as follows. We set (q, H) −(a)→ (q′, H′) if there is some q −(Y; X, X′)→ q′ in δ such that, setting HY = H[Y ↦ ∅], we have a ∈ HY@X and H′ = HY[a in X′].
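A one-step sketch of this semantics (Python, with our own encoding of assignments as maps from histories to sets of names; not from the paper):

    def packed_step(q, H, a, delta):
        # delta: packed transitions (q, Y, X, X2, q2)
        for (p, Y, X, X2, p2) in delta:
            if p != q:
                continue
            HY = {i: (set() if i in Y else s) for i, s in H.items()}
            occurs = {i for i, s in HY.items() if a in s}
            if occurs == set(X):   # a appears in exactly the histories X
                H2 = {i: (s | {a}) if i in X2 else (s - {a})
                      for i, s in HY.items()}
                yield (p2, H2)     # a reassigned to exactly X2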
Lemma 5.7. Let A be an (m,0)-HRA. There is a packed (m,0)-HRA A′ such that A ∼ A′.
Proof. Let A = ⟨Q, q0, δ, H0, F⟩. We set A′ = ⟨Q, q0, δ′, H0, F′⟩ where:
F′ = {q′ ∈ Q | ∃q ∈ F, Y. q′ −(Y)→→ q ∈ δ}
δ′ = {(q, Y, X, X′, q′) | q −(Y)→→ · −(X,X′)→ q′ ∈ δ}
Bisimilarity of A and A′ is witnessed by the identity on configurations, which means that R = { ((q, H), (q, H)) | q ∈ Q ∧ H ∈ Asn } is a bisimulation.
We shall decide language containment via complementation. In particular, given a deterministic packed HRA A, the automaton A′ accepting the language N∗ \ L(A) can be constructed in a way analogous to that for deterministic finite-state automata, namely by obfuscating the automaton with all missing transitions and swapping final with non-final states.
Lemma 5.8. Deterministic packed HRAs are closed under complementation.
Proof. Let A = ⟨Q, q0, δ, H0, F⟩ be a packed (m,0)-HRA. Following the above rationale, we construct a packed (m,0)-HRA A′ = ⟨Q ⊎ {qF}, q0, δ ∪ δ′, H0, F′⟩, where F′ = {qF} ∪ (Q \ F) and δ′ is given as follows. For each q ∈ Q and all X such that there is no q −(Y; X\Y, X′)→ q′, we add a transition q −(∅; X, ∅)→ qF in δ′. In addition, δ′ contains a transition qF −([m]; ∅, ∅)→ qF.
We claim that L(A′) = N∗ \ L(A). Indeed, if s ∈ L(A′) and s is accepted at a state in Q \ F then, since A is deterministic, we have s ∉ L(A). Otherwise, if s = s′as′′ with a the point where a transition to the sink state is taken then, upon acceptance of s′ by A, a appears precisely in some histories X such that A has no transition to accept a at that point. Thus, s ∉ L(A).
Conversely, if s ∈ N∗ \ L(A) then either s induces a configuration in A which does not end in a final state, or s = s′as′′ where s′ is accepted by A but at that point a is not a possible transition. We can see that, in each case, s ∈ L(A′).
Proposition 5.9. Language containment and universality are undecidable for (general)
HRAs and Ackermann-complete for strongly deterministic HRAs.
Proof. Undecidability in the general case is inherited from RAs.
Now consider two HRAs A and A′ such that we can compute the complement of A′. Then, we can decide the language containment L(A) ⊆ L(A′) by checking whether the product of A with the complement of A′ is empty. The product construction is polynomial, and the emptiness check is in Ackermann (Proposition 5.4). Thus, language containment is in Ackermann if computing the complement of A′ is in Ackermann. This is the case because (a) removing registers can be done while preserving determinism with only an exponential increase in size (Proposition 4.1), and (b) complementing deterministic HRAs
without registers takes polynomial time (Lemma 5.8). For hardness, note that emptiness
and universality are equally hard in the deterministic case (Lemma 5.8), and emptiness is
Ackermann-hard (Proposition 5.5).
We showed that language containment is in Ackermann and universality is Ackermann-hard. Finally, note that there is a trivial reduction from universality to language
containment.
6. Weakening HRAs
Since the complexity of HRAs is substantially high, e.g. for deciding emptiness, it is useful to seek restrictions thereof which allow us to trade expressiveness for efficiency. As
the encountered complexity stems from the fact that HRAs can simulate computations of
R-VASSs, our strategy for producing weakenings is to restrict the functionalities of the
corresponding R-VASSs. We follow two directions:
(a) We remove reset transitions. This corresponds to removing counter transfers and resets
and drops the complexity of control-state reachability to exponential space.
(b) We restrict the number of histories to just one. We thus obtain polynomial space
complexity as the corresponding counter machines are simply one-counter automata.
This kind of restriction is also a natural extension of FRAs with history resets.
Observe that each of the aspects of HRAs targeted above corresponds to features (1,2) we
identified in the Introduction, witnessed by the languages L1 and L2 respectively. We shall
see that each restriction leads to losing the corresponding language.
6.1. Non-reset HRAs. We first weaken our automata by disallowing resets. We show that
the new machines retain all their closure properties apart from Kleene-star closure. The
latter is concretely manifested in the fact that language L1 of the Introduction is lost. On
the other hand, the emptiness problem reduces in complexity to exponential space.
Definition 6.1. A non-reset HRA of type (m,n) is an (m,n)-HRA A = ⟨Q, q0, H0, δ, F⟩ such that there is no reset transition q −(X)→ q′ ∈ δ.
Closure properties. Of the closure constructions of Section 3 we can see that union and
intersection readily apply to non-reset HRAs, while the construction for concatenation needs
some amendments.
More specifically, of the two constructions presented in the proof of Proposition 3.2, the one for concatenation can be adapted to non-reset HRAs as follows. We add empty transitions from the final states of A1′ to the initial state of a version of A2′ which keeps the places used by A1′ untouched and uses its own separate copy of places, obfuscating its own transitions so as to capture accidental matchings of the legacy names of A1′. This solution cannot be used for Kleene closure as in each loop the automaton needs to find a fresh copy of its initial configuration, and be able to use it (in the previous construction, the final assignment of A1′ is lost).
On the other hand, using an argument similar to that of [5, Proposition 7.2], we can
show that the language L1 is not recognised by non-reset HRAs and, hence, the latter are
not closed under Kleene star. Finally, note that the HRA constructed for the language L4
in Example 3.4 is a non-reset HRA, which implies that non-reset HRAs are not closed under
complementation.
Emptiness. In the general case we saw an upper bound of F_{2^m} (Proposition 5.4), by a reduction to T-VASS followed by the backward coverability algorithm. For a non-reset HRA, the same reduction yields a VASS, without transfers. In the absence of transfers, better bounds are known for the backward coverability algorithm [11]. More generally, it has been known for some time that coverability for VASS is ExpSpace-complete [12, 25].
The following result refers to the number N of bits used to represent an HRA. Of course,
N depends on the exact representation being used. Still, we do not make this representation
explicit because the result holds for a wide variety of possible representations. We only
require that the representation obeys m, n, |δ|, log |Q| ∈ O(N ).
Proposition 6.2. The emptiness problem for a non-reset HRA is in ExpSpace. More precisely, it is in NSpace(2^{O(N log N)}), where N is the number of bits used to represent the HRA.
Proof. We start with an (m,n)-non-reset-HRA A = ⟨Q, q0, δ, H0, F⟩. We use Proposition 4.7 to construct an (m,0)-non-reset-HRA A′ = ⟨Q′, q0′, δ′, H0′, F′⟩ that preserves emptiness. Moreover, log|Q′| ∈ O(mn + n log n + log|Q|). Using Lemma 5.2, we reduce A′ to a (|δ|+1)-dimensional VASS with |Q′| states that uses only increments/decrements. (Lemma 5.2 creates an m-dimensional VASS where m is the number of sets labelling transitions. Proposition 4.7 puts in A′ only sets that already occurred in A, with the possible exception of ∅.) Now we apply the backward coverability algorithm, as described in the proof of Proposition 5.4. By [11, Theorem 2], the algorithm will only consider counter values less than some V = (3|Q′|)^{2^{O(|δ| log |δ|)}}. In the nondeterministic version of the algorithm, we guess the next configuration, which means we only need space O((|δ|+1) log V) to store a couple of configurations. We have
log log V = O(|δ| log|δ| + log log|Q′|) = O(|δ| log|δ| + log(mn + n log n + log|Q|)).
Since m, n, |δ|, log|Q| ∈ O(N), we conclude log|δ| + log log V ∈ O(N log N). This implies that a nondeterministic version of the backward coverability algorithm works in NSpace(2^{O(N log N)}).
The previous proposition has a couple of obvious consequences. First, emptiness is also in DSpace(2^{O(N log N)}), by Savitch's theorem. Second, emptiness is also in DTime(2^{2^{O(N log N)}}), by a standard easy argument [19, Theorem 5.3]. In fact, one can show that the time bound applies to the backward coverability algorithm, without invoking generic constructions from complexity theory: by [11, Theorem 2], the runtime of the backward coverability algorithm, like the counter values, is also upper bounded by some T = (3|Q′|)^{2^{O(|δ| log |δ|)}}. The rest of the argument is as in the proof of Proposition 6.2.
Proposition 6.3. The emptiness problem for non-reset HRAs is ExpSpace-hard.
Proof. By [25], the control-state reachability problem for VASS is ExpSpace-hard even if
all the transitions are restricted to have labels of the form ±~δi . (More precisely, Lipton
proves that certain parallel programs of size poly(k) can simulate any Turing machine that uses < 2^k space. Then, [25, Lemma 2] asserts that reachability in these programs reduces
to reachability in VAS, the full proof being: ‘We omit a detailed proof of this lemma. It
should, however, be clear that parallel programs can be encoded as vector addition systems.’
Similarly, we claim, without proof, that it should be clear how Lipton’s programs reduce to
the control-state reachability problem for VASSs whose transitions only increment/decrement
single counters.) We shall reduce the control-state reachability problem for such VASSs to
the emptiness problem for non-reset HRAs.
Let m be the dimension of the VASS. We construct an HRA with m′ histories, where m′ is the smallest integer such that m ≤ 2^{m′} − 1. As a result, there exists an injection φ : [m] → P≠∅([m′]); we fix arbitrarily one such injection.
• For each transition q −(+~δ_i)→ q′ in the VASS, we include a transition q −(∅, φ(i))→ q′ in the HRA.
• For each transition q −(−~δ_i)→ q′ in the VASS, we include a transition q −(φ(i), ∅)→ q′ in the HRA.
This construction maintains the invariant |H@φ(i)| = ~v(i). To establish the invariant, we pick the initial history assignment H0 accordingly. Finally, we set as final the state in whose reachability we are interested.
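The injection φ is easy to realise concretely; for instance (a Python sketch, with our own choice of φ), one can read off the binary representation of the counter index:

    def phi(i, m_prime):
        # counter i (with 1 <= i <= 2**m_prime - 1) as a nonempty
        # subset of [m_prime]
        return {j + 1 for j in range(m_prime) if (i >> j) & 1}

    # Counter i is then held as the names lying in exactly the
    # histories phi(i): a label (emptyset, phi(i)) increments and a
    # label (phi(i), emptyset) decrements, so |H@phi(i)| = v(i).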
The reduction described above is clearly polynomial, from which it follows that emptiness
of non-reset HRAs (even without registers) is ExpSpace-hard.
Proposition 6.4. The emptiness problem for non-reset HRAs is ExpSpace-complete.
Proof. Immediate from Proposition 6.2 and Proposition 6.3.
6.2. Unary HRAs. Our second restriction concerns allowing resets but bounding the
number of histories to just one. Thus, these automata are closer to the spirit of FRAs
and, in fact, extend them by rounding up their history capabilities. We show that these
automata require polynomial space complexity for emptiness and retain all their closure
properties apart from intersection. The latter is witnessed by failing to recognise L2 from
the Introduction. By extending this example to multiple interleavings, we can show that intersection is in general incompatible with bounding the number of histories.
Definition 6.5. A (1, n)-HRA is called a unary HRA of n registers.
In other words, unary HRAs are extensions of FRAs where names can be selectively
inserted or removed from the history and, additionally, the history can be reset. These
capabilities give us in fact a strict extension.
Example 6.6. The automata used in Example 2.3 for L1 and L3 were unary HRAs. Note
that neither of those languages is FRA-recognisable. On the other hand, in order to recognise
L2 , an HRA would need to use at least two histories: one history for the odd positions of
the input and another for the even ones. We can formalise an argument to show that L2 is
not recognisable by unary HRAs as follows.
Proof. Suppose L2 = L(A) for some unary HRA A of n registers and let
w = a1 b1 ··· ak bk b1 a1 ··· bk ak
for k = n + 1 and some pairwise distinct names a1 , b1 , . . . , ak , bk . As w ∈ L2 , there is a path,
say p, in A which accepts w. We divide p as p1 p2 with p2 accepting the second half of w. Let
p̂ = p̂1 p̂2 be the corresponding configuration path and let (q′, H′) be the first configuration in p̂2. We set S = {a1, b1, ..., ak, bk} \ {a | a ∈ H′(i) ∧ i > 1} and do a case analysis on the labels of the form (X, X′) which appear in p2 and accept names from S. Since names in S do not appear in any H′(i), for i > 1, it must be that each such X is either {1} or ∅. We have the following cases.
• There are two such labels, say ({1}, Xi) and ({1}, Xj), accepting names ai and bj respectively. But this would imply that A also accepts w′, where w′ is w with these occurrences of ai and bj swapped, contradicting L(A) = L2 (as w′ ∉ L2).
• There are two such labels, say (∅, Xi) and (∅, Xj), accepting names ai and bj respectively. In order for A not to accept w′ (w′ as above), it is necessary that a reset transition with label Y ∋ 1 occurs between the two transitions. Suppose i < j. Then, since k > n, there is a name ai′ which does not appear in any place after clearing Y. Thus, (∅, Xj) can accept ai′ and complete the path p by accepting a word w′ ∉ L2. Dually if j ≤ i.
• Each ai ∈ S is accepted by a label ({1}, X′), and each bj ∈ S by a label (∅, X′). Let ai ∈ S be the last such accepted in p2. This means that the rest of the path has length at most 2n. Therefore, since k > n, there is a bj ∈ S accepted in p2 before ai. Let (q, H) be the configuration just before accepting bj. In order for A not to accept any ai′ at that point, it must be that all ai′ ∈ S appear in H. Since |S| > n + 1, there exists ai′ ∈ H(1) ∩ S such that ai′ ≠ ai. But then, the transition accepting ai can accept ai′ instead and lead to acceptance of a word w′ ∉ L2.
We therefore reach a contradiction in every case.
Closure properties. The closure constructions of Section 3 readily apply to unary HRAs, with one exception: intersection. For the latter, we can observe that L2 = L(A1) ∩ L(A2), where L(A1) = {a1 a1′ ... an an′ ∈ N∗ | a1 ... an ∈ L0} and L(A2) = {a1 a1′ ... an an′ ∈ N∗ | a1′ ... an′ ∈ L0}, and A1 and A2 are the unary (1,0)-HRAs depicted on the side, with empty initial assignments. [Diagram: each of A1 and A2 has two states q0, q1, with one transition between them labelled (∅, 1), feeding the history, and the remaining transitions labelled (∅, ∅)/(1, 1), i.e. unconstrained.] On the other hand, unary HRAs are not closed under complementation either, as otherwise one could construct unary HRAs accepting the complements of L(A1) and L(A2), and then take the complement of their union to obtain a unary HRA for L2.
Emptiness. In the case of just one history, the results on TR-VASS reachability [35, 16] from Section 5 provide rather rough bounds. It is therefore useful to do a direct analysis. We reduce nonemptiness for unary HRAs to control-state reachability for one-dimensional R-VASSs. Our analysis below shows that the minimal path has length at most quadratic, from which it follows that nonemptiness has polynomial space complexity.
The following result applies to any R-VASS representation for which |Q| log |Q| ∈ O(N ),
where |Q| is the number of states of the R-VASS, and N is the number of bits used to
represent the R-VASS. Note that the condition is true if all the states are listed in the
R-VASS representation, something all reasonable representations would do.
Lemma 6.7. Control-state reachability for one-dimensional R-VASSs is in NL, provided that non-reset transitions increase and decrease the counter by at most 1.
Proof. Let A = ⟨Q, δ⟩ be an R-VASS of dimension 1. The proof relies on two observations:
Fact 1: If (q, i) →→ (q′, i′) is a configuration path of A then, for each k > 0, there is a path (q, i+k) →→ (q′, i′′) of the same length.
Fact 2: If (q, i) →→ (q′, i′) is a configuration path of A in which there are no reset transitions and the counter never becomes less than some k > 0, then there is a path (q, i−k) →→ (q′, i′′) of the same length.
Consider an instance (A, q0, i0, qF) of the control-state reachability problem: is the state qF reachable in A starting from configuration (q0, i0)? Let p be a configuration path of minimal length from (q0, i0) to some configuration whose state is qF. Let us see if a state can appear repeatedly in p. By Fact 1, p is non-decreasing: any path segment (q, i) →→ (q, i′) can be circumvented if i ≥ i′. Now suppose that p contains a segment (q, i) →→ (q, i+k) for some k > 0. Consider the segment (q, i+k) →→ (q′′, i′′) that follows, uses only non-reset transitions, and is maximal. By Fact 2, if the counter never becomes < k in the latter segment, then there exists a path (q, i) →→ (q′′, i′′) of the same length. Since p is minimal, this is a contradiction, and therefore the counter must become < k somewhere after (q, i+k). Let p′ be the segment (q, i+k) →→ (q′, k−1). Since non-reset transitions decrease the counter by ≤ 1, it must be that all the values i+k, i+k−1, ..., k−1 occur in p′. When one of these values is reached for the first time, it must be paired with a state that was not used for the bigger values. It follows that (i+k) − (k−1) + 1 ≤ |Q|, and so i ≤ |Q| − 2.
This gives us a bound on the counter value of any state that can be repeated in p. Thus, each state can appear in p at most |Q| times. This implies that the length of p is at most |Q|² and that in p the counter does not exceed the value i0 + |Q|².
We can therefore answer the instance (A, q0, i0, qF) of the control-state reachability problem as follows. Note first that, by Facts 1 and 2 and because the length of a minimal reaching path is ≤ |Q|², we can replace i0 by min(i0, |Q|²). Because we only consider initial counter values ≤ |Q|² and because the minimal path has length ≤ |Q|², we can store one configuration on the minimal path using O(log|Q|) bits. Since |Q| log|Q| ∈ O(N), we have log|Q| ∈ O(log N), and therefore O(log N) bits suffice to represent a configuration of the minimal path. Finally, we note that a nondeterministic algorithm can guess the next configuration on the minimal path.
We remark that an NL upper bound follows from an analysis of the backward coverability
algorithm as well. However, the proof from above has the advantage that it is self-contained.
We now give an upper bound for the emptiness problem of unary HRAs. The result
holds for all representations that obey several weak requirements. Let ⟨Q, q0, δ, H0, F⟩ be a unary HRA with n registers, represented with N bits. We require that
• n ∈ O(N), which is justified because there are > 2^n possible labels on transitions;
• |δ| ∈ O(N ), which is justified because we expect each transition to require at least a bit;
• |Q| log |Q| ∈ O(N ), which is justified because we expect each state to be mentioned at
least once in the representation. (This last point implies that log |Q| ∈ O(log N ).)
Proposition 6.8. The emptiness problem for unary HRAs is in PSpace. More precisely,
it is in NSpace(N log N ), where N is the number of bits used to represent the HRA.
Proof. Let A = ⟨Q, q0, δ, H0, F⟩ be the given unary HRA. Using Proposition 4.7, we build a (1,0)-HRA A′ that preserves emptiness, has O(B_n 2^n n |δ|) transitions, and has O(B_n 2^n n |Q|) states. Using the construction from Lemma 5.2, we reduce the emptiness of A′ to control-state reachability in an R-VASS A′′. Specialized to our case, the construction says that
• for each transition q −(∅,{1})→ q′ in A′, we include a transition q −(+1)→ q′ in A′′;
• for each transition q −({1},∅)→ q′ in A′, we include a transition q −(−1)→ q′ in A′′; and
• for each reset transition q −({1})→ q′ in A′, we include a reset transition q −(reset)→ q′ in A′′.
According to Lemma 6.7, the control-state reachability problem for A′′ is in NSpace(log N′′), where N′′ is the number of bits used to represent A′′. Thus, it remains to compute N′′ as a function of N. For this, we pick one particular representation of A′′, namely a list of transitions. For such a representation we have N′′ = O(B_n 2^n n|δ| · log(B_n 2^n n|Q|)). Thus,
log N′′ = O(n log n + log|δ| + log(n log n + log|Q|)) = O(N log N).
The last step assumes that n, |Q| log|Q|, |δ| ∈ O(N). We require the representation of A to satisfy these assumptions.
Proposition 6.9. The emptiness problem for unary HRAs is PSpace-hard.
Proof. By [15, Theorem 5.1a], the nonemptiness problem of register automata is PSpace-hard. Register automata are a special case of unary HRAs.
Proposition 6.10. The emptiness problem for unary HRAs is PSpace-complete.
Proof. Immediate from Proposition 6.8 and Proposition 6.9.
7. Summary of Main Results
The theorems in this section summarize the main results proved in the previous sections.
Theorem 7.1. Languages recognised by HRAs are closed under union, intersection, concatenation, and Kleene star, but not under complementation. Also,
• if resets are banned, then closure under Kleene star is lost;
• if the number of histories is bounded, then closure under intersection is lost.
Proof. Immediate from Proposition 3.2, Lemma 3.3, and the closure results of Section 6.
Theorem 7.2. Deciding emptiness of an (m, n)-HRA has the following complexity:
(a) NL-complete if m = n = 0;
(b) NP-complete if m = 0 and all sets labelling transitions are singletons;
(c) PSpace-complete if m ≤ 1;
(d) ExpSpace-complete if there are no reset transitions; and
(e) Ackermann-complete in the general case.
Proof. (a) When m = n = 0, nonemptiness is equivalent to reachability in a directed graph,
which is a standard NL-complete problem. (b) In this case, HRAs are equivalent to RAs
that disallow repetitions of values in registers, as they were originally defined [23]. For such
RAs, nonemptiness is known to be NP-complete [33, Theorem 4]. (c) Proposition 6.10.
(d) Proposition 6.4. (e) Proposition 5.6.
For universality and language inclusion, see Proposition 5.9.
8. Connections with existing formalisms
We have already seen that HRAs strictly extend FRAs. In this section, we compare HRAs
with CMAs (class memory automata). Like HRAs and FRAs, CMAs work on infinite
alphabets, and have a decidable nonemptiness problem. Implicitly, we also compare with
other formalisms: CMAs have been shown to express the same languages as data automata [5,
Proposition 3.7]; and data automata have been shown to express the same languages as the
two-variable fragment of existential monadic second order logic with data equality, position
successor, and class successor [7, Proposition 14].
Definition 8.1. A Class Memory Automaton (CMA) is a tuple A = ⟨Q, q0, φ0, δ, F1, F2⟩ where Q is a finite set of states, q0 ∈ Q is initial, F1 ⊆ F2 ⊆ Q are sets of final states and the transition relation is of type δ ⊆ Q × (Q ∪ {⊥}) × Q. Moreover, φ0 is an initial class memory function, that is, a function φ : N → Q ∪ {⊥} with finite domain ({ a | φ(a) ≠ ⊥ } is finite).
The semantics of a CMA A is given as follows. Configurations of A are pairs of the form (q, φ), where q ∈ Q and φ is a class memory function. The configuration graph of A is constructed by setting (q, φ) −(a)→ (q′, φ′) just if there is (q, φ(a), q′) ∈ δ and φ′ = φ[a ↦ q′]. The initial configuration is (q0, φ0), while a configuration (q, φ) is accepting just if q ∈ F1 and, for all a ∈ N, φ(a) ∈ F2 ∪ {⊥}.
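A one-step sketch of this semantics (Python, with our own encoding: φ is a finite map defaulting to ⊥, represented by None; not from the paper):

    def cma_step(q, phi, a, delta):
        # delta: transitions (q, mem, q2) with mem a state or None (⊥)
        for (p, mem, p2) in delta:
            if p == q and phi.get(a) == mem:
                phi2 = dict(phi)
                phi2[a] = p2          # class memory of a becomes q2
                yield (p2, phi2)

    def accepting(q, phi, F1, F2):
        return q in F1 and all(s in F2 for s in phi.values()
                               if s is not None)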
Thus, CMAs resemble HRAs in that they store input names in “histories”, only that
histories are identified with states: for each state q there is a corresponding history q (note
notation overloading), and a transition which accepts a name a and leads to a state q must
store a in the history q. Moreover, each name appears in at most one history (hence the
type of φ), while the finality conditions for configurations allow us to impose that, at the
end, all names must appear in specific histories, if they appear in any. For instance, the
language L4 of Example 3.4, which we know cannot be recognized by HRAs (Lemma 3.3),
can be recognized by the following CMA on the left (with F1 = F2 = {q0 }).
[Diagram: on the left, the CMA, over states q0 and q1, with ⊥-labelled transitions reading fresh names and q1-labelled transitions reading names whose class memory is q1; on the right, an analogous HRA over states q0 and q1 with transitions labelled (∅, 1) and (1, 2).]
Each name is put in history q1 when seen for the first time, and in history q0 when seen for the second time. The automaton accepts if all its names are in q0. This latter condition is what makes the essential difference to HRAs, namely the capability to check where the names reside for acceptance. For example, the HRA on the right above would accept the same language if we were able to impose the condition that accepting configurations (q, H) satisfy a ∈ H@{2} for all names a ∈ ⋃_i H(i). Note though that extending HRAs with such finality conditions would render their nonemptiness problem reducible from reachability of R-VASS (i.e. the question whether a specific state and counter content can be reached), a problem known to be undecidable [2].
The above example proves that HRAs cannot express the same languages as CMAs.
Conversely, as shown in [5, Proposition 7.2], the fact that CMAs lack resets does not allow
them to express languages like, for example, L1 . As a result, the languages expressed by
CMAs are closed under intersection, union and concatenation, but not under Kleene star.
In the latter sections of [5] several extensions of CMAs are considered, one of which does involve resets. However, the resets considered there do not seem directly comparable to the reset capability of HRAs.
On the other hand, a direct comparison can be made with non-reset HRAs. We already
saw in Proposition 4.4 that, in the latter idiom, histories can be used for simulating register
behaviour. In the absence of registers, CMAs differ from non-reset HRAs solely in their
constraint of relating histories to states (and their termination behaviour, which is more
expressive). As the latter can be easily counterbalanced by obfuscating the set of states, we
obtain the following.
Proposition 8.2. For each non-reset HRA A there is a CMA A′ such that L(A) = L(A′).
9. Further directions
Our goal is to apply automata with histories in static and runtime verification. For static
verification, the complexity results derived in this paper may seem discouraging at first.
However, they are based on very specific representations of hard problems; in practice, we
expect programs to yield automata of simpler complexities. Experience with tools based
on coverability of TR-VASSs, like e.g. BFC [22], positively testifies in that respect. Another
solution, already pursued herein, is to explore constrained versions of our machines. A
specific such variant we envisage to consider is one with restricted resets, in analogy to
e.g. [17]. In a related direction, we aim to look at abstractions that would allow us to
attack the model-checking problem for these automata, and also look at temporal logics that
capture part or all of the expressivity of HRAs.
In this work we examined nondeterministic automata but did not look at alternating
variants. This is justified by the undecidability of universality already at the level of register
automata. However, if one is willing to restrict the number of registers and histories, there
may still be room for decidability. In the case of register automata, it has been shown [15]
that alternating register automata with one register are decidable for emptiness, and become
undecidable at two registers. While these automata cannot capture languages that inherently
require more than one register, they can use alternation to express name freshness and
e.g. capture the languages L0 , L2 of the Introduction, and also a variant of L1 which uses
constants for tokenizing the input (instead of a0 ). It would be useful to examine whether a
similar restriction can yield decidable alternating HRAs, and what would their expressivity
be. Finally, a problem left open here is decidability and complexity of bisimilarity. (In a
private communication, Piotrek Hofman sketched a proof that bisimilarity is decidable.)
References
[1] S. Abramsky, D. R. Ghica, A. S. Murawski, C.-H. L. Ong, and I. D. B. Stark. Nominal games and full
abstraction for the nu-calculus. In Logic in Computer Science (LICS), 2004.
[2] T. Araki and T. Kasami. Some decision problems related to the reachability problem for Petri nets.
Theoretical Computer Science (TCS), 1976.
[3] M. Faouzi Atig, A. Bouajjani, and S. Qadeer. Context-bounded analysis for concurrent programs with
dynamic creation of threads. Logical Methods in Computer Science (LMCS), 2011.
[4] N. Benton and B. Leperchey. Relational reasoning in a nominal semantics for storage. In Typed Lambda
Calculi and Applications (TLCA), 2005.
[5] H. Björklund and T. Schwentick. On notions of regularity for data languages. Theoretical Computer
Science (TCS), 2010.
[6] M. Bojanczyk, L. Braud, B. Klin, and S. Lasota. Towards nominal computation. In Principles of
Programming Languages (POPL), 2012.
[7] M. Bojanczyk, C. David, A. Muscholl, T. Schwentick, and L. Segoufin. Two-variable logic on data words.
Transactions on Computational Logic (TOCL), 2011.
[8] M. Bojanczyk, B. Klin, and S. Lasota. Automata theory in nominal sets. Logical Methods in Computer
Science (LMCS), 2014.
[9] M. Bojanczyk, B. Klin, S. Lasota, and S. Torunczyk. Turing machines with atoms. In Logic in Computer
Science (LICS), 2013.
[10] A. Bouajjani, S. Fratani, and S. Qadeer. Context-bounded analysis of multithreaded programs with
dynamic linked structures. In Computer Aided Verification (CAV), 2007.
[11] L. Bozzelli and P. Ganty. Complexity analysis of the backward coverability algorithm for VASS. In
Reachability Problems (RP), 2011.
[12] C. Rackoff. The covering and boundedness problems for vector addition systems. Theoretical Computer Science (TCS), 1978.
[13] C. Cotton-Barratt, A. S. Murawski, and C.-H. Luke Ong. Weak and nested class memory automata. In
Language and Automata Theory and Applications (LATA), 2015.
[14] N. Decker, P. Habermehl, M. Leucker, and D. Thoma. Ordered navigation on multi-attributed data
words. In Concurrency Theory (CONCUR), 2014.
[15] S. Demri and R. Lazić. LTL with the freeze quantifier and register automata. Transactions on Computational Logic (TOCL), 2009.
[16] D. Figueira, S. Figueira, S. Schmitz, and P. Schnoebelen. Ackermannian and primitive-recursive bounds
with Dickson’s lemma. In Logic in Computer Science (LICS), 2011.
[17] A. Finkel and A. Sangnier. Mixing coverability and reachability to analyze VASS with one zero-test. In
Current Trends in Theory and Practice of Computer Science (SOFSEM), 2010.
[18] M. J. Gabbay and A. M. Pitts. A new approach to abstract syntax with variable binding. Formal Aspects
of Computing, 2002.
[19] O. Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge University Press, 2008.
[20] R. Grigore, D. Distefano, R. L. Petersen, and N. Tzevelekos. Runtime verification based on register
automata. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2013.
[21] A. Jeffrey and J. Rathke. Towards a theory of bisimulation for local names. In Logic in Computer Science
(LICS), 1999.
[22] A. Kaiser, D. Kroening, and T. Wahl. Efficient coverability analysis by proof minimization. In Concurrency
Theory (CONCUR), 2012.
[23] M. Kaminski and N. Francez. Finite-memory automata. Theoretical Computer Science (TCS), 1994.
[24] J. Laird. A fully abstract trace semantics for general references. In Automata, Languages, and Programming (ICALP), 2007.
[25] R. J. Lipton. The reachability problem requires exponential space. Technical report, Yale University,
1976.
[26] A. Manuel and R. Ramanujam. Class counting automata on datawords. Foundations of Computer
Science (IJFCS), 2011.
[27] U. Montanari and M. Pistore. An introduction to history dependent automata. Electronic Notes in
Theoretical Computer Science (ENTCS), 1997.
[28] A. S. Murawski, S. J. Ramsay, and N. Tzevelekos. Game semantic analysis of equivalence in IMJ. In
Automated Technology for Verification and Analysis (ATVA), 2015.
[29] A. S. Murawski and N. Tzevelekos. Algorithmic nominal game semantics. In European Symposium on
Programming (ESOP), 2011.
[30] A. S. Murawski and N. Tzevelekos. Algorithmic games for full ground references. In Automata, Languages,
and Programming (ICALP), 2012.
[31] F. Neven, T. Schwentick, and V. Vianu. Finite state machines for strings over infinite alphabets.
Transactions on Computational Logic (TOCL), 2004.
[32] A. M. Pitts and I. Stark. On the observable properties of higher order functions that dynamically create
local names, or: What’s new? In Mathematical Foundations of Computer Science (MFCS), 1993.
[33] H. Sakamoto and D. Ikeda. Intractability of decision problems for finite-memory automata. Theoretical
Computer Science (TCS), 2000.
32
R. GRIGORE AND N. TZEVELEKOS
[34] S. Schmitz and P. Schnoebelen. Algorithmic aspects of WQO theory. Lecture Notes hcel-00727025v2i,
2012.
[35] P. Schnoebelen. Revisiting Ackermann-hardness for Lossy Counter Machines and Reset Petri Nets. In
Mathematical Foundations of Computer Science (MFCS), 2010.
[36] L. Segoufin. Automata and logics for words and trees over an infinite alphabet. In Computer Science
Logic (CSL), 2006.
[37] I. D. B. Stark. Names and Higher-Order Functions. PhD thesis, University of Cambridge Computing
Laboratory, 1995.
[38] N. Tzevelekos. Fresh-register automata. In Principles of Programming Languages (POPL), 2011.
[39] N. Tzevelekos and R. Grigore. History-register automata. In Foundations of Software Science and
Computation Structures (FoSSaCS), 2013.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a
copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a
letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or
Eisenacher Strasse 2, 10777 Berlin, Germany
| 6 |
1
arXiv:1512.07700v1 [] 24 Dec 2015

Energy Storage Sharing in Smart Grid: A Modified Auction Based Approach

Wayes Tushar, Member, IEEE, Bo Chai, Chau Yuen, Senior Member, IEEE, Shisheng Huang, Member, IEEE, David B. Smith, Member, IEEE, H. Vincent Poor, Fellow, IEEE, and Zaiyue Yang, Member, IEEE
Abstract—This paper studies joint energy storage (ES) ownership sharing between multiple shared facility controllers (SFCs) and the residential units (RUs) dwelling in a residential community. The main objective is to enable the RUs to decide on the fraction of their ES capacity that they want to share with the SFCs of the community in order to assist them in storing electricity, e.g., for fulfilling the demand of various shared facilities. To this end, a modified auction-based mechanism is designed that captures the interaction between the SFCs and the RUs, so as to determine the auction price and the allocation of the ES shared by the RUs that governs the proposed joint ES ownership. The fraction of its ES capacity that each RU decides to put into the market to share with the SFCs and the auction price are determined by a noncooperative Stackelberg game formulated between the RUs and the auctioneer. It is shown that the proposed auction possesses the incentive compatibility and individual rationality properties, which are established via the unique Stackelberg equilibrium (SE) solution of the game. Numerical experiments are provided to confirm the effectiveness of the proposed scheme.
Index Terms—Smart grid, shared energy storage, auction theory, Stackelberg equilibrium, strategy-proof, incentive compatibility.
I. INTRODUCTION

ENERGY storage (ES) devices are expected to play a
significant role in the future smart grid due to their
capabilities of giving more flexibility and balance to the grid
by providing a back-up to the renewable energy [1]–[9]. ES
can improve the electricity management in a distribution network, reduce the electricity cost through opportunistic demand
response, and improve the efficient use of energy [10]. The
distinct features of ES make it a perfect candidate to assist in
residential demand response by altering the electricity demand due to changes in the balance between supply and demand.

W. Tushar and C. Yuen are with Singapore University of Technology and Design (SUTD), 8 Somapah Road, Singapore 487372 (Email: {wayes tushar, yuenchau}@sutd.edu.sg).
B. Chai is with the State Grid Smart Grid Research Institute, Beijing, 102211, China (Email: [email protected]).
S. Huang is with the Ministry of Home Affairs, Singapore (Email: [email protected]).
D. B. Smith is with the National ICT Australia (NICTA), ACT 2601, Australia and adjunct with the Australian National University (Email: [email protected]).
H. V. Poor is with the School of Engineering and Applied Science at Princeton University, Princeton, NJ, USA (Email: [email protected]).
Z. Yang is with the State Key Laboratory of Industrial Control Technology at Zhejiang University, Hangzhou, China (Email: [email protected]).
This work is supported in part by the Singapore University of Technology and Design (SUTD) through the Energy Innovation Research Program (EIRP) Singapore NRF2012EWT-EIRP002-045 and IDC grant IDG31500106, and in part by the U.S. National Science Foundation under Grant ECCS-1549881. D. B. Smith's work is supported by NICTA, which is funded by the Australian Government through the Department of Communications and the Australian Research Council.
Particularly, in a residential community setting, where each
household is equipped with an ES, the use of ES devices can significantly improve the efficiency of energy flows within the community, in terms of reducing costs, decarbonizing the electricity grid, and enabling effective demand response (DR).
However, energy storage requires space. In particular, for large consumers like the shared facility controllers (SFCs) of large apartment buildings [11], the energy requirements are very high, which consequently necessitates the installation of a very large energy storage capacity. The investment cost of such storage can be substantial, whereas, due to the random usage of the facilities (depending on the usage patterns of different residents), some of the storage may remain unused. Furthermore, the use of ESs by RUs is very limited for two reasons [10]: firstly, the installation cost of ES devices is very high and the costs are entirely borne by the users; secondly, the ESs are mainly used to save electricity costs for the RUs rather than to offer any support to the local energy authorities, which further makes their use economically unattractive. Hence, there
is a need for solutions that will capture both the problems
related to space and cost constraints of storage for SFCs and
the benefit to RUs for supporting third parties.
To this end, numerous recent studies have focused on energy
management systems with ES devices as we will see in the
next section. However, most of these studies overlook the
potential benefits that local energy authorities such as SFCs
can attain by jointly sharing the ES devices belonging to the
RUs. Particularly due to recent cost reduction of small-scale
ES devices, sharing of ES devices installed in the RUs by the
SFCs has the potential to benefit both the SFCs and the RUs
of the community as we will see later. In this context, we
propose a scheme that enables joint ES ownership in smart
grid. During the sharing, each RU leases a fraction of its ES device to the SFCs, and charges and discharges the rest of its ES capacity for its own purposes. In contrast, each SFC exclusively uses the portion of the ES devices leased from the RUs. This work is motivated by [10], in which the
authors discussed the idea of joint ownership of ES devices
between domestic customers and local network operators, and
demonstrated the potential system-wide benefits that can be
obtained through such sharing. However, no policy has been
developed in [10] to determine how the fraction of battery
capacity, which is shared by the network operators and the
domestic users, is decided.
Note that, as the owner of an ES device, each RU can decide whether or not to take part in the joint ownership scheme with the SFCs and what fraction of its ES can be shared with the
the SFCs and what fraction of the ES can be shared with the
SFCs. Hence, there is a need for solutions that can capture this
decision making process of the RUs by interacting with the
SFCs of the network. In this context, we propose a joint ES
ownership scheme in which by participating in storage sharing
with the SFCs, both the RUs and SFCs benefit economically.
Due to the interactive nature of the problem, we are motivated
to use auction theory to study this problem [12].
Exploiting the two-way communications aspects, auction
mechanisms can exchange information between users and electricity providers, meet users’ demands at a lower cost, and thus
contribute to the economic and environmental benefits of smart
grid¹ [13]. In particular, 1) we modify the Vickrey auction technique [14] by integrating a Stackelberg game between the auctioneer and the RUs, and show that the modified scheme leads to a desirable joint ES ownership solution for the RUs and the SFCs. To do this, we modify the auction price derived from the Vickrey auction so as to benefit the owners of the ES, through the adaptation of the adopted game, while keeping the cost savings to the SFCs at their maximum; 2) we study the attributes of the technique and show that the proposed auction scheme possesses both the incentive compatibility and the individual rationality properties, established by the unique equilibrium solution of the game; 3) we propose an algorithm for the Stackelberg game that can be executed in a distributed fashion by the RUs and the auctioneer, and the algorithm is shown to be guaranteed to reach the desired solution; we also discuss how the proposed scheme can be extended to the time-varying case; and 4) finally, we provide numerical examples to show the effectiveness of the proposed scheme.
The importance and necessity of the proposed study with
respect to actual operation of smart grid lies in assisting
the SFCs of large apartment buildings in smart communities
to reduce space requirements and investment costs of large
energy storage units. Furthermore, by participating in storage
sharing with the SFCs, the RUs can benefit economically,
which can consequently influence them to efficiently schedule
their appliances and thus reduce the excess use of electricity.
We stress that multi-agent energy management schemes are
not new in the smart grid paradigm and have been discussed
in [11], [15] and [16]. However, the scheme discussed in the
paper differs from these existing approaches in terms of the
considered system model, chosen methodology and analysis,
and the use of the set of rules to reach the desired solution.
The remainder of the paper is organized as follows. We
provide a comprehensive literature review of the related work
in Section II followed by the considered system model in
Section III. Our proposed modified auction-based mechanism
is demonstrated in Section IV where we also discuss how the
scheme can be adopted in a time varying environment. The
numerical case studies are discussed in Section V, and finally
we draw some concluding remarks in Section VI.
II. STATE-OF-THE-ART

In recent years, there has been an extensive research effort to understand the potential of ES devices for residential energy management [17]. This is mainly due to their
capabilities in reducing the intermittency of renewable energy
generation [18] as well as lowering the cost of electricity [19].
The related studies can be divided into two general categories.
The first category, comprising studies such as [20], [21], assumes that the ESs are installed within each RU's premises and are used solely by the owners to perform different energy management tasks such as optimal placement, sizing and control of the charging and discharging of storage devices.
The second category deals with ES devices that are not installed within the RUs but located elsewhere, such as in electric vehicles (EVs). Here, the ESs of EVs are
used to provide ancillary services for RUs [22]–[24] and local
energy providers [25]–[27]. Furthermore, another important
impact of ES devices on residential distribution grids is studied
in [28] and [29]. In particular, these studies focus on how
the use of ES devices can bring benefits for the stakeholders
in external energy markets. In [28], the authors propose a
multi-objective optimization method for siting and sizing of
ESs of a distribution grid to capture the trade-offs between
the storage stakeholders and the distribution system operators.
Furthermore, in [29], optimal storage profiles for different
stakeholders such as distribution grid operators and energy
traders are derived based on case studies with real data. Studies
of other aspects of smart grid can be found in [30]–[36].
As can be seen from the above discussion, the use of
ES devices in smart grid is not only limited to address the
intermittency of renewable generation [18] and assisting users
to take part in energy management to reduce their cost of
electricity [19], [21] but also extends to assisting the grid
(or, other similar energy entities such as an SFC) [37] and
generating revenues for stakeholders [28], [29]. However, one
similarity between most of the above mentioned literature is
that only one entity owns the ES and uses it according to its
requirements. Nonetheless, this might not always be the case if there is a large number of RUs² in a community. In this regard,
considering the potential benefits of ES sharing, as discussed
in [10], this paper investigates the case in which the SFCs
in a smart community are allowed to share some fraction of
the ESs owned by the RUs through a third party such as an
auctioneer or a community representative.
The proposed modified auction scheme differs from the
existing techniques for multi-agent energy management such
as those in [11], [15], [16] in a number of ways. Particularly, in
contrast to these studies, the proposed auction scheme captures
the interaction between the SFCs and the RUs, whereby the
decision on the auction price is determined via a Stackelberg
game. By exploiting the auction rules, including the determination rule, payment rule, and allocation rule, the interaction between the SFCs and RUs is greatly simplified. For instance, the determination rule can easily identify the number of RUs that are participating in the auction process, which further leverages the determination of the auction price via the Stackelberg game in the payment rule. Furthermore, on the one hand, the work here complements the existing works focusing on the potential of ES for energy management in smart grid. On the other hand, the proposed work has the potential to open new research opportunities in terms of the control of energy dispatch from ES, the sizing of ES, and the exploration of other interactive techniques, such as cooperative games and bi-level optimization, for ES sharing.

¹Please note that such a technique can be applied in a real distribution network, for example at electric vehicle charging stations, by using the two-way information and power flow infrastructure of smart grids [3].
²Each RU may participate as a single entity or as a group of RUs connected via an aggregator [38].
III. SYSTEM MODEL

Let us consider a smart community that consists of a large number of RUs. Each RU can be an individual home, a single unit of a large apartment complex, or a large number of units connected via an aggregator that acts as a single entity [38]–[40]. Each RU is equipped with an ES device that the RU can use to store electricity from the main grid or from its renewable energy sources, if there are any, or can use to perform DR management according to the real-time price offered by the grid. The ES device can be a storage device installed within each RU's premises or the ES used by the RU's electric vehicles. The entire community is considered to be divided into a number of blocks, where each block consists of a number of RUs and an SFC. Each SFC m ∈ M, where M is the set of all SFCs and M = |M|, is responsible for controlling the electrical equipment and machines, such as lifts, parking lot lights and gates, water pumps, and lights in the corridor areas, of a particular block of the community, which are shared and used by the residents of that block on a regular basis. Each SFC is assumed to have its own renewable energy generation and is also connected to the main electricity grid with appropriate communication protocols.

Considering the fact that the nature of energy generation and consumption is highly sporadic [41], let us assume that the SFCs in the community need some extra ES to store their electricity after meeting the demand of their respective shared facilities at a particular time of the day. This can be either due to the fact that some SFCs do not have their own ESs [11] or because the ESs of the SFCs are not large enough to store all the excess energy at that time. It is important to note that the ES requirement of the SFCs can stem from any type of intermittent generation profile that the SFCs or RUs may adopt. For example, one can consider that the proposed scheme is based on a hybrid generation profile comprising both solar and wind generation. However, the proposed technique is equally suitable for other types of intermittent generation as well. We assume that there are N = |N| RUs, where N is the set of all RUs in the system, that are willing to share some part of their ES with the SFCs of the network. The battery capacity of each RU i ∈ N is s_i^cap, and each RU i wants to put an xi fraction of its ES on the market to share with the SFCs, where

xi ≤ bi = s_i^cap − di.  (1)

Here, bi is the maximum amount of battery space that the RU can share with the SFCs if the cost-benefit tradeoff of the sharing is attractive for it, and di is the amount of ES that the RU does not want to share but rather uses for its own needs, e.g., to run essential loads in the future if there is any electricity disruption within the RU or if the price of electricity is very high.

Fig. 1: The fraction of the ES capacity that an RU i is willing to share with the SFCs of the community. (The figure splits each RU's total capacity s_i^cap into the reserved amount di and the shareable amount bi = s_i^cap − di, and shows the decision flow: each RU i with reservation price ri and each SFC m with reservation bid am either takes part in the ES sharing or leaves the market, depending on the sharing price pt.)

To this end, to offer an ES space xi, on the one hand, each RU i decides on a reservation price ri per unit of energy. Hereinafter, we will use ES space and energy interchangeably to refer to the ES space that each RU might share with the SFCs. However, if the price pt that an RU receives for sharing its ES is lower than ri, the RU i removes its ES space xi from the market, as the expected benefit from the joint sharing of ES is not economically attractive for it. On the other hand, each SFC m ∈ M that needs to share ES space with the RUs to store its energy decides on a reservation bid am, which represents the maximum unit price the SFC m is willing to pay for sharing each unit of ES with the RUs in the smart community, in order to enter the sharing market. And, if am < pt, the SFC removes its commitment to the joint ES ownership with the RUs from the market, for the same reason as mentioned for the RUs. A graphical representation of the concept of ES sharing and of the decision-making process through which each RU i shares its ES space with the SFCs is shown in Fig. 1. Please note that, to keep the formulation simple, we do not include any specific storage model in the scheme. However, by suitably modeling some related parameters, such as the storage capacity s_i^cap and the parameters di and bi, the proposed scheme can be adapted to specific ES devices.

The interaction that arises from the choice of the ES sharing price between the SFCs and the RUs, as well as the need of the SFCs to share ES space to store their energy and the profits that the RUs can reap from allowing their ESs to be shared, gives rise to a market of ES sharing between the RUs and the SFCs in the smart grid. In this market, the involved N RUs and M SFCs will interact with each other to decide how many of them will take part in sharing the ESs between themselves, and also to agree on the ES sharing parameters, such as the trading price pt and the amount of ES space to
be shared. In the considered model, the RUs not only decide on the reservation prices ri, but also on the amounts of ES space xi that they are willing to share with the SFCs. The amount xi is determined by the trade-off between the economic benefit that the RU i expects to obtain from giving the SFCs joint ownership of its ES device and the associated reluctance αi of the RU toward such sharing. The reluctance of the RUs to share their ESs may arise from many factors. For instance, sharing would enable frequent charging and discharging of the ESs, which reduces the lifespan³ of an ES device [42]. Hence, an RU i may set its αi higher so as to increase its reluctance to participate in the ES sharing. However, if the RU is more interested in earning revenue than in increasing the ES lifetime, it can reduce its αi and thus obtain a larger net benefit from sharing its storage. Therefore, for a given set of bids am, ∀m and storage requirements qm, ∀m by the SFCs, the maximum amount of ES xi that each RU i will decide to put up for sharing is strongly affected by the trading price pt and the reluctance parameter⁴ αi of each RU i ∈ N during the sharing process. In this context, we develop an auction-based joint ES ownership scheme in the next section. We understand that the proposed scheme involves different types of users, such as auctioneers, SFCs, and RUs; therefore, the communication protocol used by them could be asynchronous. However, in our study we assume that the communication between the different entities of the system is synchronous. This is mainly because we assume that our algorithm is executed once per considered time slot, and the duration of this time slot can be one hour [43]. Therefore, synchronization is not a significant issue for the considered case and the communication complexity is affordable. For example, the auctioneer can wait for five minutes until it receives all the data from the SFCs and the RUs, and then the algorithm proposed in Section IV-C can be executed.

IV. AUCTION-BASED ES OWNERSHIP

The Vickrey auction is a type of sealed-bid auction in which the bidders submit their written bids to the auctioneer without knowing the bids of the others participating in the auction [14]. The highest bidder wins the auction but pays the second-highest bid price. Nevertheless, in this paper, we modify the classical Vickrey auction [14] to model the joint ES ownership scheme for a smart community consisting of multiple customers (i.e., the SFCs) and multiple owners of ES devices (i.e., the RUs). The modification is motivated by the following factors: 1) unlike in the classical Vickrey auction, the modified scheme enables the multiple owners and customers to decide simultaneously and independently whether to take part in the joint ES sharing, through the determination rule of the proposed auction process, as we will see shortly; 2) the modification of the auction provides each participating RU i with the flexibility of choosing the amount of ES space that it may want to share with the SFCs in cases when the auction price⁵ pt is lower than its expected reservation price ri; and 3) finally, the proposed auction scheme provides solutions that satisfy both the incentive compatibility and the individual rationality properties, as we will see later, which are desirable in any mechanism that adopts auction theory [41].

To this end, the proposed auction process, as shown in Fig. 2, consists of three elements:
1) Owner: The RUs in set N, which own the ES devices and expect to earn some economic benefit, e.g., through maximizing a utility function, by letting the SFCs share some fraction of their ES spaces.
2) Customer: The SFCs in set M, which are in need of ES in order to store some excess electricity at a particular time of the day. The SFCs offer the RUs a price with a view to jointly owning some fraction of their ES devices.
3) Auctioneer: A third party (e.g., an estate or building manager) that controls the auction process between the owners and the customers according to some predefined rules.

Fig. 2: Energy management in a smart community through an auction process consisting of multiple RUs with ES devices, an auctioneer and a number of SFCs.

The proposed auction policy consists of A) a determination rule, B) a payment rule and C) a storage allocation rule. Here, the determination rule allows the auctioneer to determine the maximum limit pmax_t for the auction price and the number of SFCs and RUs that will actively take part in the ES sharing scheme once the auction process is initiated. The payment rule enables the auctioneer to decide on the price that the customers need to pay to the owners for sharing their ES devices, which in turn allows the RUs to decide how much storage space they will put on the market to share with the SFCs. Finally, the auctioneer allocates the ES spaces to be shared by each SFC following the allocation rule of the proposed auction. It is important to note that, although the customers and owners do not have any access to each other's private information, such as the amount of ES to be shared by an RU or the energy space required by any SFC, the rules of the auction are known to all the participants of the joint ownership process.

³Please note that the lifetime degradation due to charging and discharging may not hold for all electromechanical systems, such as redox-flow systems.
⁴The reluctance parameter refers to the opposite of a preference parameter [38].
⁵Hereinafter, pt will be used to refer to the auction price instead of the sharing or trading price.
The proposed scheme initially determines the sets of SFCs ⊂ M and RUs ⊂ N that will effectively take part in the auction mechanism once the upper bound pmax_t of the auction price is determined. Eventually, the payment and allocation rules are executed in the course of the auction.

A. Determination Rule

The determination rule of the proposed scheme is executed through the following steps (inspired by [44]):
i) The RUs of set N, i.e., the owners of the ESs, declare their reservation prices ri, ∀i in increasing order, which we can consider, without loss of generality, as
r1 < r2 < … < rN.  (2)
The RUs submit the reservation prices to the auctioneer, along with the amounts xi of ES that they are interested in sharing with the SFCs.
ii) The SFCs' bidding prices am, ∀m are arranged in decreasing order, i.e.,
a1 > a2 > … > aM.  (3)
The SFCs submit these to the auctioneer, along with the quantities qm, ∀m of ES that they require.
iii) Once the auctioneer receives the ordered information from the RUs and the SFCs, it generates the aggregated supply curve (the reservation prices of the RUs versus the amount of ES the RUs are interested in sharing) and demand curve (the reservation bids am versus the quantities of ES qm needed) using (2) and (3) respectively.
iv) The auctioneer determines the numbers of participating SFCs K and RUs J that satisfy aK ≥ rJ from the intersection of the two curves, using any standard numerical method [44].

As soon as SFC K ≤ M and RU J ≤ N are determined from the intersection point, as shown in Fig. 3, an important aspect of the auction mechanism is to determine the number of SFCs and RUs that will take part in the joint ownership of ESs. We note that once the numbers of SFCs K and RUs J are determined, the following relationship holds for the rest of the SFCs and RUs in the network:

am < ri; ∀m ∈ M\{1, 2, …, K}, ∀i ∈ N\{1, 2, …, J}.  (4)

Hence, the joint ownership of ES would be a detrimental choice for the RUs and the SFCs within the sets N\{1, 2, …, J} and M\{1, 2, …, K} respectively, which consequently removes them from the proposed auction process. Now, one desirable property of any auction mechanism is that no participating agent will cheat once the payment and allocation rules have been established. To this end, we propose that, once J and K are determined, K − 1 SFCs and J − 1 RUs will be engaged in the joint ES sharing process, which is a necessary condition for matching total demand and supply while maintaining a truthful auction scheme [44]. Nevertheless, if a truthful auction is not a necessity, SFC K and RU J can also be allowed to participate in the joint ES ownership auction.

Fig. 3: Determination of the Vickrey price, the maximum auction price, and the number of participating RUs and SFCs in the auction process. (The figure plots the increasing reservation-price curve of the owners against the decreasing bid curve of the customers over the storage amount; the intersection determines the maximum auction price pmax_t, with the Vickrey price pmin_t below it.)
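As an illustration of the determination rule, the following minimal sketch (ours, not from the paper; the function name and the sweep logic are assumptions) finds the numbers of participating RUs and SFCs by walking along the aggregated supply and demand curves built from (2) and (3) until the next bid no longer covers the next reservation price:

```python
def determine_participants(r, x, a, q):
    """Sweep the aggregated supply curve (reservation prices r, sorted
    increasingly, with offered amounts x) against the aggregated demand curve
    (bids a, sorted decreasingly, with requested amounts q) and return the
    numbers (J, K) of RUs and SFCs on the trading side of the intersection."""
    J = K = 0
    supply = demand = 0.0
    while J < len(r) and K < len(a) and a[K] >= r[J]:
        if supply <= demand:      # supply curve lags: admit the next cheapest RU
            supply += x[J]
            J += 1
        else:                     # demand curve lags: admit the next highest bidder
            demand += q[K]
            K += 1
    return J, K

# Example: r = [20, 30, 40, 60], a = [65, 50, 35, 25] with unit amounts yields
# J = K = 2, so per the text J - 1 = 1 RU and K - 1 = 1 SFC trade.
print(determine_participants([20, 30, 40, 60], [1, 1, 1, 1],
                             [65, 50, 35, 25], [1, 1, 1, 1]))
```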
B. Payment Rule

We note that the intersection of the demand and supply curves determines the highest reservation price pmax_t for the participating J − 1 RUs. According to the Vickrey auction mechanism [14], the auction price for sharing the ES devices would be the second-highest reservation price, i.e., the Vickrey price, which will be denoted by pmin_t hereinafter. However, we note that this second-highest price might not be particularly beneficial for all the RUs participating in the auction scheme. In contrast, if pt is set to pt = pmax_t, the price could be detrimental for some of the SFCs. Therefore, to make the auction scheme attractive and beneficial to all the participating RUs and, at the same time, cost effective for all the SFCs, we strike a balance between pmax_t and pmin_t. To do so, we propose a scheme for deciding on both the auction price pt and the amounts of ES xi that the RUs will put on the market for sharing according to pt. In particular, we propose a Stackelberg game between the auctioneer, which decides on the auction price pt so as to maximize the average cost savings of the SFCs as well as satisfy their needs for ES, and the RUs, which decide on the vector x = [x1, x2, …, xJ−1] of the amounts of ES that they would like to put on the market for sharing, such that their benefits are maximized. Please note that the proposed problem formulation can also be solved by other distributed algorithms, e.g., algorithms designed via the bi-level optimization technique [45].

Stackelberg game: A Stackelberg game is a multi-level decision-making process in which the leader of the game takes the first step by choosing its strategy. The followers, on the other hand, choose their strategies in response to the decision made by the leader. In the proposed game, we take the auctioneer to be the leader and the RUs to be the followers. Hence, the game can be seen as a single-leader multiple-follower Stackelberg game (SLMFSG). We propose that the auctioneer, as the leader of the SLMFSG Γ, takes the first step by choosing a suitable auction price pt from the range [pmin_t, pmax_t]. Meanwhile, each RU i ∈ {1, 2, …, J − 1}, as a follower of the game, plays its best strategy by choosing a suitable xi ∈ [0, bi] in response to the price pt offered by the auctioneer. The best
response strategy of each RU i stems from a utility function Ui, which captures the benefit that an RU i can gain from deciding on the amount of ES xi to be shared at the offered price, whereas the auctioneer chooses the price pt with a view to maximizing the average cost savings Z of the SFCs in the network. Now, to capture the interaction between the
auctioneer and the RUs, we formally define the SLMFSG Γ as

Γ = {{1, 2, …, J − 1} ∪ {Auctioneer}, {Ui}i∈{1,2,…,J−1}, {Xi}i∈{1,2,…,J−1}, Z, pt},  (5)

which consists of: i) the set of RUs {1, 2, …, J − 1} participating in the auction scheme and the auctioneer; ii) the utility Ui that each RU i reaps from choosing a suitable strategy xi in response to the price pt announced by the auctioneer; iii) the strategy set Xi of each RU i ∈ {1, 2, …, J − 1}; iv) the average cost savings Z that accrue to each SFC m ∈ {1, 2, …, K − 1} from the strategy chosen by the auctioneer; and v) the strategy pt ∈ [pmin_t, pmax_t] of the auctioneer.
In the proposed approach, each RU i iteratively responds to the strategy pt chosen by the auctioneer, independently of the other RUs in the set {1, 2, …, J − 1}\{i}. The response of RU i is affected by the offered price pt, its reluctance parameter αi and its initial reservation price ri.
However, we note that the auctioneer does not have any control over the decision-making process of the RUs. It only sets the auction price pt with a view to maximizing the cost savings Z, with respect to the cost at the initial bidding prices, for the SFCs. To this end, the target of the auctioneer is assumed to be the maximization of the average cost savings

Z = [Σ_{m=1}^{K−1}(am − pt)/(K − 1)] Σ_{i=1}^{J−1} xi,  (8)

obtained by choosing an appropriate price pt to offer to each RU from the range [pmin_t, pmax_t]. Here, Σ_{m=1}^{K−1}(am − pt)/(K − 1) is the average saving in auction price that the SFCs pay to the RUs for sharing the ESs, and Σi xi is the total amount of ES that all the SFCs share from the RUs. From Z, we note that the cost savings will be larger if pt is lower, for all m ∈ {1, 2, …, K − 1}. However, this is conflicted by the fact that a lower pt may lead to the choice of a lower xi, ∀i ∈ {1, 2, …, J − 1} by the RUs, which in turn will affect the cost to the SFCs. Hence, to reach a desirable solution set (x∗, p∗t), the auctioneer and the RUs continue to interact with each other until the game reaches a Stackelberg equilibrium (SE).
Now, the utility function Ui, which defines the benefit that an RU i can attain from sharing an amount xi of its ES with the SFCs, is proposed to be

Ui(xi) = (pt − ri)xi − αi xi², xi ≤ bi,  (6)

where αi is the reluctance parameter of RU i, and ri is the reservation price set by RU i. Ui mainly consists of two parts. The first part, (pt − ri)xi, is the utility in terms of the revenue that an RU i obtains from sharing the xi portion of its ES device. The second part, αi xi², on the other hand, is the negative impact, in terms of liability, on the RU i stemming from sharing its ES with the SFCs. This is mainly due to the fact that once an RU decides to share an amount xi of its storage space with an SFC, the RU can only use an amount s_i^cap − xi of storage for its own purposes. The term αi xi² captures this restriction on the RU's usage of its own ES. In (6), the reluctance parameter αi is introduced as a design parameter to measure the degree of unwillingness of an RU to take part in energy sharing. In particular, a higher value of αi refers to the case when an RU i is more reluctant to take part in the ES sharing, and thus, as can be seen from (6), it attains a lower net benefit even for the same amount of shared ES. Thus, Ui can be seen as the net benefit to RU i from sharing its ES. The utility function is based on the assumption of a non-increasing marginal utility, which is suitable for modeling the benefits of power consumers, as explained in [46]. In addition, the proposed utility function also possesses the following properties: i) the utility of any RU increases as the price pt paid to it for sharing each unit of ES increases; ii) as the reluctance parameter αi increases, the RU i becomes more reluctant to share its ES, and consequently its utility decreases; and iii) for a particular price pt, the more an RU shares with the SFCs, the less interested it becomes in sharing more for the joint ownership. To that end, for a particular price pt and reluctance parameter αi, the objective of RU i is

max_{0 ≤ xi ≤ bi} (pt − ri)xi − αi xi².  (7)
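Since Ui in (6) is concave, the constrained objective (7) has the closed-form maximizer (pt − ri)/(2αi) clipped to [0, bi]. The following minimal sketch (ours, for illustration; the function name and the numbers are assumptions) computes this best response:

```python
def best_response(p_t, r_i, alpha_i, b_i):
    """Best response x_i* of RU i to the auction price p_t, per (6)-(7):
    U_i(x) = (p_t - r_i)*x - alpha_i*x**2 is concave, so the maximizer is the
    stationary point (p_t - r_i)/(2*alpha_i) clipped to the interval [0, b_i]."""
    x_unconstrained = (p_t - r_i) / (2.0 * alpha_i)
    return min(max(x_unconstrained, 0.0), b_i)

# Example: with r_i = 30, alpha_i = 0.05 and b_i = 20 kWh, a price p_t = 32
# gives min(max((32 - 30)/0.1, 0), 20) = 20, i.e. the RU offers its full b_i.
print(best_response(32.0, 30.0, 0.05, 20.0))
```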
Definition 1. Let us consider the game Γ as described in (5), where the utility of each RU i and the average utility per SFC are described by Ui and Z respectively. Now, Γ reaches an SE (x∗, p∗t) if and only if the solution of the game satisfies the following set of conditions:

Ui(x∗i, x∗−i, p∗t) ≥ Ui(xi, x∗−i, p∗t), ∀i ∈ {1, 2, …, J − 1}, ∀xi ∈ Xi, p∗t ∈ [pmin_t, pmax_t],  (9)

and

[Σ_{m=1}^{K−1}(am − p∗t)/(K − 1)] Σi x∗i ≥ [Σ_{m=1}^{K−1}(am − pt)/(K − 1)] Σi x∗i,  (10)

where x−i = [x1, x2, …, xi−1, xi+1, …, xJ−1].
According to (9) and (10), both the RUs and the SFCs achieve their best possible outcomes at the SE, and neither the RUs nor the auctioneer has any incentive to change its strategy once the game Γ reaches the SE. However, achieving an equilibrium solution in pure strategies is not
always guaranteed in non-cooperative games [38]. Therefore,
we need to investigate whether the proposed Γ possesses an
SE or not.
Theorem 1. There always exists a unique SE solution for
the proposed SLMFSG Γ between the auctioneer and the
participating RUs in set {1, 2, . . . , J − 1}.
Proof: Firstly, we note that the strategy set of the auctioneer is non-empty and continuous within the range [pmin_t, pmax_t]. Hence, there will always be a non-empty strategy for the auctioneer that will enable the RUs to offer some part of their
ES, within their limits, to the SFCs. Secondly, for any price pt, the utility function Ui in (6) is strictly concave with respect to xi, ∀i ∈ {1, 2, …, J − 1}, i.e., ∂²Ui/∂xi² < 0. Hence, for any price pt ∈ [pmin_t, pmax_t], each RU will have a unique xi, chosen from the bounded range [0, bi], that maximizes Ui. Therefore, it is evident that as soon as the scheme finds a unique p∗t such that the average utility Z per SFC attains its maximum value, the SLMFSG Γ will consequently reach its unique SE.

To this end, first we note that the amount of ES x∗i at which the RU i achieves its maximum utility in response to a price pt can be obtained from (6) as

x∗i = (pt − ri)/(2αi).  (11)
Now, replacing the value of x∗i in (8) and doing some simple arithmetic, the auction price p∗t that maximizes the average cost savings to the SFCs can be found as

p∗t = [Σ_{m=1}^{K−1} am · Σ_{i=1}^{J−1} 1/(2αi) + Σ_{i=1}^{J−1} ri(K − 1)/(2αi)] / [Σ_{i=1}^{J−1} (K − 1)/αi],  (12)

where am for any m ∈ {1, 2, …, K − 1} and αi for any i ∈ {1, 2, …, J − 1} are exclusive. Therefore, p∗t is unique for Γ, and thus Theorem 1 is proved.
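The closed form (12) can be sanity-checked numerically. The sketch below (ours; the parameter values are assumptions, and the [0, bi] caps are ignored exactly as in the derivation of (12)) computes p∗t and compares it with a brute-force sweep of Z from (8) under the best responses (11):

```python
import numpy as np

def stackelberg_price(a, r, alpha):
    """Closed-form leader price p_t* of eq. (12); a holds the K-1 bids,
    r and alpha the reservation prices and reluctances of the J-1 RUs."""
    K1 = len(a)  # K - 1
    num = a.sum() * np.sum(1.0 / (2.0 * alpha)) + np.sum(r * K1 / (2.0 * alpha))
    den = np.sum(K1 / alpha)
    return num / den

a = np.array([60.0, 55.0, 50.0])       # bids of K - 1 = 3 SFCs
r = np.array([25.0, 30.0, 35.0])       # reservation prices of J - 1 = 3 RUs
alpha = np.array([0.05, 0.05, 0.05])   # reluctance parameters

p_star = stackelberg_price(a, r, alpha)            # 42.5 for these numbers
grid = np.linspace(r.min(), a.max(), 10001)        # brute-force check of (8)
Z = (a.mean() - grid) * np.sum((grid[:, None] - r) / (2.0 * alpha), axis=1)
print(p_star, grid[np.argmax(Z)])                  # agree up to the grid step
```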
Algorithm 1: Algorithm for the SLMFSG to reach the SE
1: Initialization: p∗t = pmin_t, Z∗ = 0.
2: for auction price pt from pmin_t to pmax_t do
3:   for each RU i ∈ {1, 2, …, J − 1} do
4:     RU i adjusts the amount of ES xi that it shares according to
       x∗i = arg max_{0 ≤ xi ≤ bi} [(pt − ri)xi − αi xi²].  (13)
5:   end for
6:   The auctioneer computes the average cost savings to the SFCs:
       Z = [Σ_{m=1}^{K−1}(am − pt)/(K − 1)] Σ_{i=1}^{J−1} x∗i.  (14)
7:   if Z ≥ Z∗ then
8:     The auctioneer records the desirable price and the maximum average cost savings:
       p∗t = pt, Z∗ = Z.  (15)
9:   end if
10: end for
The SE (x∗, p∗t) is achieved.

Theorem 2. The algorithm proposed in Algorithm 1 is always guaranteed to reach the SE of the proposed SLMFSG Γ.

Proof: In the proposed algorithm, we note that the choice of strategies by the RUs emanates from the choice of pt by the auctioneer, which, as shown in (12), will always attain a non-empty single value p∗t at the SE due to its bounded strategy set [pmin_t, pmax_t]. On the other hand, as Algorithm 1 is designed, in response to p∗t each RU i will choose its strategy xi from the bounded range [0, bi] in order to maximize its utility function Ui. To that end, due to the bounded strategy set and the continuity of Ui with respect to xi, it is confirmed that each RU i will always reach a fixed point x∗i for the given p∗t. Therefore, the proposed Algorithm 1 is always guaranteed to reach the unique SE of the SLMFSG.
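A compact, runnable sketch of Algorithm 1 (ours; the discretized price grid and the toy parameters are assumptions — the paper sweeps pt over [pmin_t, pmax_t] without prescribing a step size):

```python
import numpy as np

def slmfsg_auction(a, r, alpha, b, p_min, p_max, steps=2000):
    """Algorithm 1: sweep p_t over [p_min, p_max]; each RU plays the best
    response (13); the auctioneer keeps the price that maximizes Z of (14)."""
    p_star, z_star, x_star = p_min, 0.0, np.zeros_like(r)
    for p_t in np.linspace(p_min, p_max, steps):
        x = np.clip((p_t - r) / (2.0 * alpha), 0.0, b)  # best responses (13)
        z = (a - p_t).mean() * x.sum()                  # cost savings (14)
        if z >= z_star:                                 # update rule (15)
            p_star, z_star, x_star = p_t, z, x.copy()
    return p_star, z_star, x_star

a = np.array([60.0, 55.0, 50.0])        # bids of the K - 1 SFCs
r = np.array([25.0, 30.0, 35.0])        # reservation prices of the J - 1 RUs
alpha = np.full(3, 0.05)                # reluctance parameters
b = np.full(3, 500.0)                   # offer caps b_i (large, so none binds)
print(slmfsg_auction(a, r, alpha, b, p_min=36.0, p_max=55.0))
```

With caps bi large enough that none binds, the swept price agrees with the closed form (12), i.e., 42.5 for these numbers, up to the grid step.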
C. Algorithm for Payment
To attain the SE, the auctioneer, which has the information
of am , m = {1, 2, 3, . . . , K − 1}, needs to communicate with
each RU. It is considered that the auctioneer does not have
any knowledge of the private information of the RUs such
as αi , ∀i. In this regard, in order to decide on a suitable
auction price pt that will be beneficial for both the RUs and
the SFCs, the auctioneer and the RUs interact with one another.
To capture this interaction, we design an iterative algorithm,
which can be implemented by the auctioneer and the RUs in a
distributed fashion to reach the unique SE of the proposed
SLMFSG. The algorithm is initiated by the auctioneer, which sets the auction price pt to pmin_t and the optimal average cost saving per SFC, Z∗, to 0. Now, in each iteration, after receiving the information on the auction price offered by the auctioneer, each RU i plays its best response xi ≤ bi and submits its choice to the auctioneer. The auctioneer, on the other
hand, receives the information on x = [x1 , x2 , . . . , xJ−1 ]
from all the participating RUs and determines the average
cost savings per SFC Z from its knowledge on the reservation
bids [a1 , a2 , . . . , aK−1 ] and using (8). Then, the auctioneer
compares Z with Z∗. If Z > Z∗, the auctioneer updates the optimal auction price to the one most recently offered and sends a new candidate price to the RUs in the next iteration. However, if Z ≤ Z∗, the auctioneer keeps the previously stored price as the optimum and offers another new candidate price to the RUs in the next iteration.
The iteration process continues until the conditions in (9) and
(10) are satisfied, and hence the SLMFSG reaches the SE. We
show the step-by-step process of the proposed algorithm in
Algorithm 1.
D. Allocation Rule

Now, once the amount of ES x∗i that each RU i ∈ {1, 2, …, J − 1} decides to put on the market for sharing in response to the auction price p∗t is determined, the auctioneer allocates the quantity Qi to be jointly shared by each RU i and the SFCs according to the following rule [44]:

Qi(x) = x∗i, if Σ_{i=1}^{J−1} x∗i ≤ Σ_{m=1}^{K−1} qm;
Qi(x) = (x∗i − ηi)+, if Σ_{i=1}^{J−1} x∗i > Σ_{m=1}^{K−1} qm,  (16)

where (f)+ = max(0, f) and ηi is the allotment of the excess ES Σ_{i=1}^{J−1} x∗i − Σ_{m=1}^{K−1} qm that RU i must endure. Essentially, the rule in (16) states that if the requirements of the SFCs exceed the available ES space from the RUs, each RU i will allow the SFCs to share all of the ES x∗i that it put on the market. However, if the available ES exceeds the total demand of the SFCs, then each RU i will have to absorb a fraction of the oversupply Σ_{i=1}^{J−1} x∗i − Σ_{m=1}^{K−1} qm. Nonetheless, this burden, if there is any, can be distributed in different ways among the participating RUs. For instance, the burden can be distributed either proportionally to the amount of ES x∗i that each RU i shared with the SFCs or proportionally to
the reservation price⁶ ri of each RU. Alternatively, the total
burden can also be shared equally by the RUs in the auction
scheme [44].
1) Proportional allocation: In proportional allocation [47], a fraction ηi of the total burden is allocated to each RU i in proportion to its reservation price ri (or to x∗i) such that Σi ηi = Σ_{i=1}^{J−1} x∗i − Σ_{m=1}^{K−1} qm, which can be implemented as follows:

ηi = [Σ_{i=1}^{J−1} x∗i − Σ_{m=1}^{K−1} qm] ri / Σi ri, i = 1, 2, …, J − 1.  (17)
By replacing ri with x∗i in (17), the burden allocation can be
determined in proportion to the shared ES by each RU.
2) Equal allocation: According to equal allocation [44], each RU bears an equal burden

ηi = [1/(J − 1)] (Σ_{i=1}^{J−1} x∗i − Σ_{m=1}^{K−1} qm), i = 1, 2, …, J − 1,  (18)

of the oversupply.
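A small sketch of the allocation rule (16) with the two burden splits (17) and (18) (ours, for illustration; names and numbers are assumptions):

```python
import numpy as np

def allocate(x_star, q_total, mode="equal", weights=None):
    """Allocation rule (16): share everything when demand covers supply;
    otherwise subtract a burden eta_i, split equally (18) or proportionally
    to the given weights, e.g. the reservation prices r_i (17)."""
    excess = x_star.sum() - q_total
    if excess <= 0:                       # SFC demand >= RU supply
        return x_star.copy()
    if mode == "equal":                   # eq. (18)
        eta = np.full_like(x_star, excess / len(x_star))
    else:                                 # eq. (17)
        w = np.asarray(weights, dtype=float)
        eta = excess * w / w.sum()
    return np.maximum(x_star - eta, 0.0)  # (f)+ = max(0, f) in (16)

x_star = np.array([40.0, 25.0, 15.0])     # SE offers of J - 1 = 3 RUs
print(allocate(x_star, q_total=60.0))                       # equal split of 20
print(allocate(x_star, q_total=60.0, mode="proportional",
               weights=np.array([25.0, 30.0, 35.0])))       # split prop. to r_i
```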
Here it is important to note that, although proportional
allocation allows the distribution of oversupply according to
some properties of the RUs, equal allocation is more suitable
to make the auction scheme strategy proof [44]. Strategy
proofness is important for designing auction mechanisms as
it encourages the participating players not to lie about their
private information such as reservation price [41], which
is essential for the acceptability and sustainability of such
mechanisms in energy markets. Therefore, we will use equal
allocation of (18) for the rest of the paper.
E. Properties of the Auction Process

We note that once the auction process is executed, there is always a possibility that the owners of the ES might cheat about the amount of storage that they wanted to put on the market during the auction [13]. In this context, we need to investigate whether the proposed scheme is beneficial enough, i.e., individually rational, for the RUs that they are not motivated to cheat, i.e., whether it is incentive compatible, once the auction is executed.

Now, for the individual rationality property, first we note that all the players, i.e., the RUs and the auctioneer on behalf of the SFCs, take part in the SLMFSG to maximize their benefits, in terms of their respective utilities, through their choice of strategies. The choice of the RUs is to determine the vector of ES x∗ such that each of the RUs benefits maximally. On the other hand, the strategy of the auctioneer is to choose a price pt that maximizes the savings of the SFCs. Accordingly, once both the RUs and the auctioneer reach a point of the game at which neither the owners nor the customers can benefit more from choosing another strategy, the SLMFSG reaches the SE. To this end, it has already been proven in Theorem 1 that the proposed Γ in this auction process possesses a unique SE. Therefore, as a subsequent outcome of Theorem 1, it is clear that all the participants in the proposed auction scheme are individually rational, which leads to the following Corollary 1.

Corollary 1. The proposed auction technique possesses the individual rationality property, in which the J − 1 rational owners and K − 1 rational customers actively participate in the mechanism to gain a higher utility.

Theorem 3. The proposed auction mechanism is incentive compatible, i.e., truthful bidding is the best strategy for any RU i ∈ {1, 2, …, J − 1} and SFC m ∈ {1, 2, …, K − 1}.

Proof: To validate Theorem 3, first we note that the choice of strategies by the RUs is always guaranteed to converge to a unique SE, i.e., x∗ = [x∗1, x∗2, …, x∗J−1], as proven in Theorem 1 and Theorem 2, which confirms the stability of their selections. Now, according to [44], once the owners in an auction process, i.e., the RUs in the proposed case, decide on a stable amount of the commodity, i.e., x∗i ∀i ∈ {1, 2, …, J − 1}, to supply to or share with the customers, the auction process always converges to a strategy-proof auction if the allocation of the commodity is conducted according to the rules described in (16) and (18). Therefore, neither any RU nor any SFC will have any intention to falsify its allocation once (16) and (18) are adopted [44] for sharing the storage space of the RUs from their SE amounts. Therefore, the auction process is incentive compatible, and thus Theorem 3 is proved.

⁶Please note that the reservation price ri indicates how much each RU i wants to be paid for sharing its ES with the SFCs, and thus it affects the determination of the total Σ x∗i and the total burden.
⁷Certain loads, such as lifts and water pumps in large apartment buildings, are not easy to schedule, as they are shared by different users of the buildings. Hence, we focus on the time variation of the storage-sharing process of the RUs of the considered system.

F. Adaptation to Time-Varying Case

To extend the proposed scheme to a time-varying case, we assume that the ES sharing scheme works in a time-slotted fashion, where each time slot has a suitable duration based on the type of application, e.g., 1 hour [43]. It is considered that in each time slot all the RUs and SFCs take part in the proposed ES sharing scheme to decide on parameters such as the auction price and the amount of ES that needs to be shared. However, in a time-varying case, the amount of ES that an RU shares at time slot t may be affected by the burden that the RU had to bear in the previous time slot t − 1. To this end, first we note that once the number of participating RUs and SFCs is decided for a particular time slot via the determination rule, the rest of the procedures, i.e., the payment and allocation rules, are executed following the descriptions in Sections IV-B and IV-D respectively for the respective time slot. Now, if the total number of RUs and SFCs is fixed, the RUs and SFCs that participate in the modified auction scheme in any time slot are determined by their respective reservation and bidding prices for that time slot. Further, the proposed auction process may evolve across different time slots based on the changes in the amount of ES that each participating RU i may want to share and in the total amount of ES required by the SFCs in different time slots. Now, before discussing how the proposed modified auction scheme can be extended to a time-varying environment⁷, first we define the
following parameters:
t: index of the time slot.
T: total number of time slots.
ri,t: the reservation price of RU i ∈ N at time slot t.
ri = [ri,1, ri,2, …, ri,T]: the reservation price vector of RU i ∈ N.
xi,t: the fraction of ES space that RU i wants to share with the SFCs at time slot t.
xi = [xi,1, xi,2, …, xi,T]: the vector of ES space shared by RU i with the SFCs over the considered horizon.
bi,t: the maximum ES of RU i available for sharing at time slot t.
am,t: the bidding price of SFC m ∈ M at time slot t.
am = [am,1, am,2, …, am,T]: the bidding price vector of SFC m ∈ M.
qm,t: the ES space required by SFC m at time slot t.
pt,t: the auction price at time slot t.
Ui,t: the benefit that RU i achieves at time slot t.
Zt: the average cost saving per SFC at time slot t.
ηi,t: the burden shared by each participating RU at time slot t.
Kt: the number of SFCs participating in the modified auction scheme at time slot t.
Jt: the number of RUs participating in the modified auction scheme at time slot t.
To this end, the utility function Ui,t of each RU i and the average cost savings Zt per SFC at time slot t can be defined as

Ui,t(xi,t) = (pt,t − ri,t)xi,t − αi x²i,t  (19)

and

Zt = [Σ_{m=1}^{Kt−1}(am,t − pt,t)/(Kt − 1)] Σ_{i=1}^{Jt−1} xi,t,  (20)

respectively⁸.
Now, at time slot t, the determination rule of the proposed
scheme determines the number of participating RUs and SFCs
based on their reservation and bidding prices for that time slot.
The number of participants is also influenced by the available
ES space of each RU and the requirement of each SFC.
However, unlike the static case, in a time-varying environment
the offered ES space by an RU at time slot t is influenced by
its contribution to the auction process in the previous time
slot. For instance, if an RU i receives a burden ηi,t−1 in time
slot t − 1, its willingness to share ES space xi,t at time slot
t may reduce. xi,t is also affected by the maximum amount
of ES bi,t available to RU i at t. For simplicity, we assume
that bi,t and αi,t do not change over different time slots.
Therefore, an RU i can offer to share the same amount of
ES space xi,t to the SFCs at time slot t if it did not share
any amount in time slot t − 1. An analogous example of such an arrangement can be found in the feed-in tariff (FIT) scheme with ES devices, in
which households are equipped with a dedicated battery to sell the stored electricity to the grid [48]. Nonetheless, xi,t is also affected by the amount of burden ηi,t−1 that an RU had to bear due to an oversupply of ES space, if there was any, in the previous time slot. To this end, the amount of ES space that an RU i can offer to the SFCs at t can be defined as

xi,t = xi,t−1, if i ∉ Jt−1;
xi,t = max(bi,t − (xi,t−1 − ηi,t−1), 0), otherwise.  (21)

The SFC m, on the other hand, decides on the amount of ES qm,t that it needs to share from the RUs at t based on the random requirement of the shared facilities at t, the shared ES space qm,t−1 available from time slot t − 1, and the random generation of the renewable energy sources, where appropriate. Hence,

qm,t = f(qm,t−1, renewables, facility requirement).  (22)

Now, if we assume that the fraction of shared ES available from the previous time slot is negligible, i.e., qm,t−1 ≈ 0, the requirement qm,t can be assumed to be random for each time slot t, considering the random nature of both the renewable generation and the energy requirements of the shared facilities. Note that this assumption is particularly valid if the SFC uses all of its shared ES from the previous time slot to meet the demand of the shared facilities and cannot use it in the considered time slot. Nonetheless, please note that this assumption does not imply that the inter-temporal relationship between the auction processes across different time slots is non-existent. The auction process in one time slot still depends on other time slots due to the inter-temporal dependency of xi,t via (21).

To this end, for the modeled xi,t ∀i ∈ N and qm,t ∀m ∈ M, the proposed modified auction scheme studied in Section IV can be adopted in each time slot t = 1, 2, …, T with a view to maximizing (19) and (20) ∀t. It is important to note that the reservation price vector ri of each RU i ∈ N and the bidding price vector am of each SFC m ∈ M can be modeled through any existing time-varying pricing scheme, such as time-of-use pricing [3]. Now, p∗t = [p∗t,1, p∗t,2, …, p∗t,T] and x∗ = [x∗1, x∗2, …, x∗N] constitute the solution of the proposed modified auction scheme in a time-varying setting if x∗ comprises the solution vectors of all the ES spaces shared by the participating RUs in each time slot t = 1, 2, …, T for the auction price vector p∗t. Further, all the auction rules adopted in each time slot of the proposed time-varying case are similar to the rules discussed in Section IV. Hence, the solution of the proposed modified auction scheme for a time-varying environment also possesses the incentive compatibility and individual rationality properties in each time slot.

⁸Please note that in each time slot t, (19) and (20) are related to each other in a similar manner as (7) and (8) are related in the static case. However, unlike in the static case, the execution of the auction process in each time slot t is affected by the values of parameters such as xi,t and pt,t for that particular time slot.
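A minimal sketch of the inter-temporal offer update (21) (ours; the function name and the numbers are illustrative assumptions):

```python
def next_offer(b_t, x_prev, eta_prev, participated):
    """Offer update of eq. (21): an RU that sat out slot t-1 repeats its
    previous offer; one that shared discounts its new offer by what it
    actually contributed last slot, i.e. x_prev minus the burden eta_prev."""
    if not participated:                  # i was not in J_{t-1}
        return x_prev
    return max(b_t - (x_prev - eta_prev), 0.0)

# Example: an RU with b_t = 30 kWh that offered 20 kWh and bore a 5 kWh burden
# in slot t-1 offers max(30 - (20 - 5), 0) = 15 kWh in slot t.
print(next_offer(30.0, 20.0, 5.0, True))
```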
TABLE I: Change of average utility achieved by each SFC and each RU in
the network (according to Algorithm 1) due to the change of the reluctance
of each RU for sharing one kWh ES with the SFC.
RU5
300
RU3
RU2
RU4
Reluctance Parameter
!
0.001
0.01
0.1
1
200
RU1
100
0
0
5
10
RU6, RU7, RU8
15
20
Number of iteration
25
30
Average utility per RU
(Net benefit)
3450.6
2883.6 (-16.43%)
1117.3 (-67.6%)
142.37 (-95.3%)
Average utility for SFC
(Average cost savings)
7500
4578.3 (-38.9%)
1671.7 (-77.7%)
259.03 (-96%)
4
Average utility for SFC
8
x 10
6
RU to put into the market for sharing. As can be seen from
the figure, on the one hand, RU 1, RU 2, and RU 3 reach the
SE much quicker than RU 4 and RU 5. On the other hand, no
interest for sharing any ES is observed for RU 4, 5 and 6.
This is due to the fact that as the interaction between the
auctioneer and the RUs continues, the auction price pt is
updated in each iteration. In this regard, once the auction price
for any RU becomes larger than its reservation price, it put all
its reserve ES to the market with an intention to be shared by
the SFCs. Due to this reason, RU 1, RU 2, and RU 3 put their
ESs in the market much sooner, i.e., after the 2nd iteration,
than RU 4 and RU 5 with higher reservation prices, whose
interest for sharing ES reaches the SE once the auction price
is encouraging enough for them to share their ESs after the
5th and 20th iterations. Unfortunately, the utilities of RU 6, 7,
and 8 are not convenient enough to take part in the auction
process, and therefore their shared ES fractions are 0.
We note that the demonstrated convergence of the
SLMFSG to a unique SE also substantiates Theorem 1, Theorem 2,
Theorem 3 and Corollary 1, which are strongly related to the SE
as explained in the previous section. Now, we would like to
investigate how the reluctance parameters of the RUs may affect
their average utility from Algorithm 1, and thus affect their
decisions to share ES.
To this end, we first determine the average utility that is
experienced by each RU and SFC for a reluctance parameter of
αi = 0.001 ∀i. Then considering the outcome as a benchmark,
we show the effect of different reluctance parameters on the
achieved average benefits of each SFC and RU in Table I. The
demonstration of this property is necessary in order to better
understand the working principle of the designed technique
for ES sharing.
According to Table I, as the reluctance of each RU increases,
it becomes less attractive, i.e., yields lower utility, for the RU to put its
ES in the market to be jointly owned by the SFCs. As a consequence,
the average utility achieved by each SFC is also affected. As shown
in Table I, the reductions in average utility per RU are 16.43%, 67.6%
and 95.3% respectively, compared to the average utility achieved by
an RU at αi = 0.001, for every tenfold increase in the reluctance
parameter. For similar settings, the reductions in average utility for the SFCs are
38.9%, 77.7% and 96% at αi = 0.01, 0.1 and 1 respectively.
Therefore, the proposed scheme will enable the RUs to put
more storage in the auction market if the related reluctance for
this sharing is small. Note that although the current investment
cost of batteries is very high compared to their relatively short
lifetimes, it is expected that battery costs will go down in
the near future [10] and that batteries will become very popular for addressing
Fig. 4: Convergence of Algorithm 1 to the SE. At the SE, the average utility per
SFC reaches its maximum, and the ES that each RU wants to put into the
market for sharing reaches a steady-state level that maximizes its benefit.
reservation price p_t^max according to the determination rule. So,
in this paper, we limit ourselves to around 6-10 RUs. However,
having 6-10 RUs can in fact cover a large community, e.g.,
through aggregation such as discussed in [38], [39]. Here,
each RU is assumed to be a group of [5, 25] households,
where each household is equipped with a battery of capacity
25 kilowatt-hours (kWh) [49]. The reluctance parameters of
all RUs are assumed to be similar and are taken from the range
[0, 0.1]. It is important to note that αi is considered as a
design parameter in the proposed scheme, which we use to
map the reluctance of each RU to share its ES with the SFCs.
Such reluctance of sharing can be affected by parameters like
ES capacity, the condition of the environment (if applicable)
and the RU’s own requirement. Now, considering the different
system parameters in our proposed scheme, we capture these
two extremes with 0 (not reluctant) and 0.1 (highly reluctant).
The required electricity storage for each SFC is assumed to be
within the range of [100, 500] kWh. Nevertheless, the required
ES for sharing could be different if the usage pattern by the
users changes. Since the types of ES (and their associated
costs) used by different RUs can vary significantly [50], the
choices of reservation price to share their ESs with the SFCs
can vary considerably as well. In this context, we consider
that the reservation price set by each RU and SFC is taken
from a range of [20, 70]. It is important to note that all chosen
parameter values are particular to this study only, and may vary
according to the availability and number of RUs, the requirements of
the SFCs, the trading policy, the time of the day/year and the country.
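For concreteness, the following Python sketch samples one instance of the case-study parameters just described; all names and distributions are our own reading of the stated ranges, not code from the paper.

    import random

    # Hypothetical sampler for the case-study setup described above.
    def sample_case_study(num_rus=8, num_sfcs=5, seed=None):
        rng = random.Random(seed)
        rus = [{
            "households": rng.randint(5, 25),        # households per RU
            "battery_kwh": 25,                       # per household [49]
            "alpha": rng.uniform(0.0, 0.1),          # reluctance parameter
            "reservation_price": rng.uniform(20, 70),
        } for _ in range(num_rus)]
        sfcs = [{
            "required_kwh": rng.uniform(100, 500),
            "bidding_price": rng.uniform(20, 70),
        } for _ in range(num_sfcs)]
        return rus, sfcs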
Now, we first show the convergence of Algorithm 1 to the
SE of the SLMFSG in Fig. 4. For this case study, we assume
that there are five SFCs in the smart grid community that are
taking part in an auction process with eight RUs. From Fig. 4,
first we note that the proposed SLMFSG reaches the SE after 20
iterations, when the average cost savings per SFC reaches its
maximum. Hence, the convergence speed, which is just a few
seconds, is reasonable. Nonetheless, an interesting property
can be observed when we examine the choice of ES by each
for the SFCs, it would put a higher burden on the RUs to
carry. As a consequence, the relative utility from auction is
lower. Nevertheless, if the requirement of the SFCs is higher,
the sharing brings significant benefits to the RUs as can be
seen from Fig. 5. On the other hand, for higher reluctance,
RUs tend to share a lower ES amount, which then enables
them to endure a lower burden in case of lower demands from
the SFCs. This consequently enhances their achieved utility.
Nonetheless, if the requirement from the SFCs is higher, their
utility falls below that of the RUs with lower
reluctance parameters. Thus, from observing the effects of
different αi ’s on the average utility per RU in Fig. 5, we
understand that, if the total required ES is smaller, RUs with
higher reluctance benefit more and vice versa. This illustrates
the fact that even RUs with high unwillingness to share their
ESs can be beneficial for SFCs of the system if their required
ESs are small. However, for a higher requirement, SFCs would
benefit more from having RUs with lower reluctances as they
will be interested in sharing more to achieve higher average
utilities.
Now, we discuss the computational complexity of the proposed scheme, which is greatly reduced by the determination
rule of the modified auction scheme as this rule determines
the actual number of participating RUs and SFCs in the
auction. We also note that after determining the number of
participating SFCs and RUs, the auctioneer iteratively interacts
with each of the RUs and sets the auction price with a
view to increase the average savings for the SFC. Therefore,
the main computational complexity of the modified auction
scheme stems from the interactions between the auctioneer
and the participating RUs to decide on the auction price. In
this context, the computational complexity of the problem falls
within the category of a single-leader multiple-follower
Stackelberg game, whose computational complexity can be
approximated to increase linearly with the number of
followers [38] and has been shown to be reasonable in numerous
studies such as [11] and [38]. Hence, the computational
complexity is feasible for adopting the proposed scheme.
Having an insight into the properties of the proposed auction
scheme, we now demonstrate how the technique can benefit
the RUs of the smart network compared to existing ES
allocation schemes such as equal distribution (ED) [38] and
FIT schemes [48]. ED is essentially an allocation scheme
that allows the SFCs to meet their total storage requirements
by sharing the total requirement equally from each of the
participating RUs. We assume that if the shared ES amount
exceeds the total amount of reservation storage that an RU puts
into the market, the RU will share its full reservation amount.
In FIT, which is a popular scheme for energy trading between
consumers and the grid, we assume that each RU prefers to
sell the same amount of stored energy to the grid at an FIT
price rate, e.g., 22 cents/kWh [52], instead of sharing the same
fraction of storage with the SFC. To this end, the resulting
average utilities that each RU can achieve from sharing its ES
space with the SFCs by adopting the proposed, ED, and FIT
schemes are shown in Table II.
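Before turning to the numbers, the two baselines can be sketched in a few lines of Python; the capping behavior is exactly the assumption stated above, while everything else (names, proportions) is illustrative.

    # Sketch of the ED and FIT baselines described above.
    def ed_allocation(total_req, offers):
        """Equal distribution: each RU supplies an equal share of the total
        requirement, capped at its own reserved amount."""
        share = total_req / len(offers)
        return [min(share, x) for x in offers]

    def fit_revenue(amount_kwh, rate=0.22):
        """FIT baseline: sell the same amount to the grid at the FIT price
        rate, e.g., 22 cents/kWh [52]."""
        return amount_kwh * rate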
From Table II, first we note that as the amount of ES
required by the SFCs increases, the average utility achieved per
Fig. 5: Effect of change of required ES amount by the SFCs on the achieved
average utility per RU. [Plot: average utility achieved by the RUs versus
required battery space by the SFCs (kWh), for α = 0.001 (more willing to
share) and α = 0.01 (less willing to share); η > 0 where supply > demand,
and η = 0 where supply < demand.]
the intermittency of renewables [51]. We foresee such a
near future, in which our proposed scheme will be applicable to
gain the benefit of storage sharing and thus motivate the RUs
to keep their αi ∀i small. According to the observation from
Table I, it can further be said that if the reluctance parameters
of RUs change over either different days or different time slots,
the performance of the system in terms of average utility per
RU and average cost savings per SFC will change accordingly
for the given system parameters.
Once all the participating RUs put their ES amount into
the auction market, they are distributed according to the
allocation rule described in (16) and (18). In this regard,
we investigate how the average utility of each RU is altered as the total storage amount required by the SFCs
in the network changes. For this particular case, the
considered total ES requirement of the SFCs is assumed to
be 100, 150, 200, 250, 300, 350, 400, 450, 500, 550 and 600 kWh. In
general, as shown in Fig. 5, the average utility of each RU
initially increases with the increase in the ES required by the SFCs and
eventually becomes saturated to a stable value. This is due to
the fact that as the required amount of ES increases, the RU
can share more of the reserved ES that it put into the market
with the SFCs, at the auction price determined by the
SLMFSG. Hence, its utility increases. However, each RU has
a particular fixed ES amount that it puts into the market to
share. Consequently, once the shared ES amount reaches its
maximum, even with the increase of requirement by the SFCs
the RU cannot share more, i.e., ηi = 0. Therefore, its utility
becomes stable without any further increment. Interestingly,
the proposed scheme, as can be seen in Fig. 5, favors the
RUs with higher reluctance more when the ES requirement
by the SFCs is relatively lower and favors the RUs with
lower reluctance during higher demands. This is due to the
way we have designed the proposed allocation scheme, which
is dictated by the burden in (18) and the allocation of ES
through (16). We note that, according to (11), if αi is lower,
RU i will put a higher amount of ES in the market to
share. However, if the total required amount of ES is lower
TABLE II: Comparison of the change of average utility per RU in the smart grid system as the total amount of energy storage required by the SFCs
varies.

Required ES space by the SFCs (kWh)                             | 200    | 250    | 300    | 350    | 400    | 450
Average utility (net benefit) of RU for equal distribution (ED) | 536.52 | 581.85 | 624.52 | 669.85 | 715.19 | 757.85
Average utility (net benefit) of RU for FIT scheme              | 537.83 | 583.16 | 626.83 | 673.16 | 717.50 | 759.16
Average utility (net benefit) of RU for proposed scheme         | 629.82 | 789.82 | 944.26 | 960.09 | 960.09 | 960.09
Percentage improvement (%) compared to ED scheme                | 17.4   | 35.74  | 51.19  | 43.32  | 34.24  | 26.68
Percentage improvement (%) compared to FIT scheme               | 17.1   | 35.43  | 50.63  | 42.61  | 33.81  | 26.46
RU also increases for all the cases. The reason for this
increment is explained in Fig. 5. Also, in all the studied
cases, the proposed scheme shows a considerable performance
improvement compared to the ED and FIT schemes. An
interesting trend of performance improvement can be observed
if we compare the performance of the proposed scheme with
the ED and FIT performances for each of the ES requirements.
In particular, the performance of the proposed scheme is higher
as the requirement of the ES increases from 200 to 350.
However, the improvement is relatively less significant as the
ES requirement switches from 400 to 450. This change in
performance can be explained as follows:
In the proposed scheme, as we have seen in Fig. 5, the
amount of ES shared by each participating RU is influenced
by their reluctance parameters. Hence, even though the demand of
the SFCs could be larger, the RUs may choose not to share
more of their ES space, as they are limited by their reluctance. In
this regard, the RUs in the current case study increase their
share of ES as the requirement by the SFCs increases, which
in turn produces higher revenue for the RUs. Furthermore,
once the RUs' choices of ES reach saturation, a further increase
in demand, i.e., from 400 to 450 in this case, does not affect
their share. As a consequence, the performance improvement
is not as noticeable as in the previous four cases. Nonetheless, for
all the considered cases, the auction process performs superior
to the ED scheme with an average performance improvement
of 34.76%, which clearly shows the value of the proposed
methodology to adopt joint ES sharing in smart grid. The
performance improvement with respect to the FIT scheme,
which is 34.34% on average, is due to the difference between
the determined auction price and the price per unit of energy
for the FIT scheme.
Finally, we show how the decision making process of each
RU in the system is affected by its decision in the previous
time slot and the total storage requirement by the SFCs. The
total number of time slots that are considered to show this
performance analysis is four. In this context, we assume that
there are five RUs in the system with ES of 100, 200, 300, 200
and 200 kWh respectively to share with the SFCs. The total
ES requirements of the SFCs for the four considered time slots
are 500, 250, 500, and 100 kWh. Please note that these numbers are
considered for this case study only and may have different
values for different scenarios. Now, in Fig. 6, we show the
available ES to each of the RUs at the beginning of each time
slot and how much they are going to share if the modified
auction scheme is adopted in each time slot. For a simple
analysis, we assume that once an RU shares its total available
ES, it cannot share its ES for the remainder of the time slots.
Fig. 6: Demonstration of how the proposed modified auction scheme can be
extended to a time-varying system. [Two plots over time slots 1-4 for
RU1-RU5: ES available to share (kWh) and ES shared by each RU (kWh).]
The reservation ES amounts of the RUs vary between time slots based on
their sharing amounts in the previous time slot. The total required storage
by the SFCs is chosen randomly due to the reasons explained in Section IV-F.
The reservation prices are considered to change from one time slot
to the next based on a predefined time-of-use price scheme.
Now, as can be seen from Fig. 6, in time slot 1, RU1 and RU2
share all their available ES with the SFC, whereas the other RUs
do not share their ES due to the reasons explained in Fig. 4.
Since the total requirement is 500 kWh, neither RU1 nor RU2
needs to carry any burden. In time slot 2, only RU3
shares its ES of 300 kWh to meet the requirement. As the SFC’s
requirement is lower than the supply, RU3 needs to carry a
burden of 50 kWh. Similarly, in time slots 3 and 4, all of RU3,
RU4 and RU5 take part in the energy auction scheme as they
have enough ES to share with the SFC. However, the ES to
share in time slot 4 stems from the burden of oversupply from
time slot 3. The scheme is not shown beyond time slot 4, as
the available ES from all RUs is already shared by the
SFCs by the end of time slot 4. Thus, the proposed modified
auction scheme can successfully capture the time variation if
the scheme is modified as given in Section IV-F.
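The burden bookkeeping in this example can be sketched as follows; the proportional split is an assumption standing in for (18), and the printed value reproduces the 50 kWh burden of time slot 2.

    # Hedged sketch of the oversupply burden in the example above.
    def slot_burden(offers, total_req):
        supply = sum(offers)
        surplus = max(supply - total_req, 0.0)
        # Split the surplus in proportion to each RU's offer (an assumption;
        # the paper's exact rule is (18)).
        return [surplus * x / supply if supply else 0.0 for x in offers]

    # Time slot 2: only RU3 shares 300 kWh against a 250 kWh requirement,
    # so it carries the whole 50 kWh burden.
    print(slot_burden([300.0], 250.0))  # -> [50.0]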
VI. CONCLUSION
In this paper, we have modeled a modified auction based
joint energy storage ownership scheme between a number of
residential units (RUs) and shared facility controllers (SFCs)
in smart grid. We have designed a system and discussed the
determination, payment and allocation rule of the auction,
where the payment rule of this scheme is facilitated by a
Single-leader-multiple-follower Stackelberg game (SLMFSG)
between the auctioneer and the RUs. The properties of the
auction scheme and the SLMFSG have been studied, and it has
been shown that the proposed auction possesses the individual
rationality and the incentive compatibility properties leveraged
by the unique Stackelberg equilibrium of the SLMFSG. We
have proposed an algorithm for the SLMFSG, which has been
shown to reach the SE and which also enables
the auctioneer and the RUs to decide on the auction price as
well as the amount of ES to be put into the market for joint
ownership.
A compelling extension of the proposed scheme would be
to study the feasibility of scheduling loads, such as
lifts and water machines, in shared spaces. Another interesting
research direction would be to determine how a very large
number of SFCs or RUs with different reservation and bidding
prices can take part in such a modified auction scheme. One
potential way to look at this problem can be from a cooperative
game-theoretic point-of-view in which the SFCs and RUs may
cooperate to decide on the amount of reservation ES and
bidding price they would like to put into the market so as
to participate in the auction and benefit from sharing. Another
very important, yet interesting, extension of this work would
be to investigate how to quantify the reluctance of each RU to
participate in the ES sharing. Such quantification of reluctance
(or, convenience) will also enable the practical deployment of
many energy management schemes already described in the
literature.
REFERENCES

[1] M. L. D. Silvestre, G. Graditi, and E. R. Sanseverino, “A generalized framework for optimal sizing of distributed energy resources in microgrids using an indicator-based swarm approach,” IEEE Trans. Ind. Inform., vol. 10, no. 1, pp. 152–162, Feb 2014.
[2] P. García, C. A. García, L. M. Fernández, F. Llorens, and F. Jurado, “ANFIS-based control of a grid-connected hybrid system integrating renewable energies, hydrogen and batteries,” IEEE Trans. Ind. Inform., vol. 10, no. 2, pp. 1107–1117, May 2014.
[3] X. Fang, S. Misra, G. Xue, and D. Yang, “Smart grid - The new and improved power grid: A survey,” IEEE Commun. Surveys Tuts., vol. 14, no. 4, pp. 944–980, Oct 2012.
[4] Y. Liu, C. Yuen, S. Huang, N. U. Hassan, X. Wang, and S. Xie, “Peak-to-average ratio constrained demand-side management with consumer’s preference in residential smart grid,” IEEE J. Sel. Topics Signal Process., vol. PP, no. 99, pp. 1–14, Jun 2014.
[5] Y. Liu, C. Yuen, N. U. Hassan, S. Huang, R. Yu, and S. Xie, “Electricity cost minimization for a microgrid with distributed energy resources under different information availability,” IEEE Trans. Ind. Electron., vol. 62, no. 4, pp. 2571–2583, Apr. 2015.
[6] N. U. Hassan, Y. I. Khalid, C. Yuen, and W. Tushar, “Customer engagement plans for peak load reduction in residential smart grids,” IEEE Trans. Smart Grid, vol. 6, no. 6, pp. 3029–3041, Nov. 2015.
[7] N. U. Hassan, Y. I. Khalid, C. Yuen, S. Huang, M. A. Pasha, K. L. Wood, and S. G. Kerk, “Framework for minimum user participation rate determination to achieve specific demand response management objectives in residential smart grids,” Elsevier International Journal of Electrical Power & Energy Systems, vol. 74, pp. 91–103, Jan. 2016.
[8] W. Tushar, C. Yuen, S. Huang, D. B. Smith, and H. V. Poor, “Cost minimization of charging stations with photovoltaics: An approach with EV classification,” IEEE Trans. Intell. Transp. Syst., vol. pre-print, Aug. 2015, (doi: 10.1109/TITS.2015.2462824).
[9] S. Huang, W. Tushar, C. Yuen, and K. Otto, “Quantifying economic benefits in the ancillary electricity market for smart appliances in Singapore households,” Elsevier Sustainable Energy, Grids and Networks, vol. 1, pp. 53–62, Mar. 2015.
[10] Z. Wang, C. Gu, F. Li, P. Bale, and H. Sun, “Active demand response using shared energy storage for household energy management,” IEEE Trans. Smart Grid, vol. 4, no. 4, pp. 1888–1897, Dec 2013.
[11] W. Tushar, B. Chai, C. Yuen, D. B. Smith, K. L. Wood, Z. Yang, and H. V. Poor, “Three-party energy management with distributed energy resources in smart grid,” IEEE Trans. Ind. Electron., vol. 62, no. 4, Apr. 2015.
[12] P. Klemperer, “Auction theory: A guide to the literature,” Journal of Economic Surveys, vol. 13, no. 3, pp. 227–286, July 1999.
[13] J. Ma, J. Deng, L. Song, and Z. Han, “Incentive mechanism for demand side management in smart grid using auction,” IEEE Trans. Smart Grid, vol. 5, no. 3, pp. 1379–1388, May 2014.
[14] W. Vickrey, “Counterspeculation, auctions, and competitive sealed tenders,” The Journal of Finance, vol. 16, no. 1, pp. 8–37, Mar 1961.
[15] B. Chai, J. Chen, Z. Yang, and Y. Zhang, “Demand response management with multiple utility companies: A two-level game approach,” IEEE Trans. Smart Grid, vol. 5, no. 2, pp. 722–731, March 2014.
[16] S. Maharjan, Q. Zhu, Y. Zhang, S. Gjessing, and T. Başar, “Dependable demand response management in the smart grid: A Stackelberg game approach,” IEEE Trans. Smart Grid, vol. 4, no. 1, pp. 120–132, March 2013.
[17] P. Siano, “Demand response and smart grids - A survey,” Elsevier Renewable and Sustainable Energy Reviews, vol. 30, pp. 461–478, Feb 2014.
[18] P. Denholm, E. Ela, B. Kirby, and M. Milligan, “The role of energy storage with renewable electricity generation,” National Renewable Energy Laboratory (NREL), Colorado, USA, Technical Report, Jan 2010.
[19] Y. Cao, T. Jiang, and Q. Zhang, “Reducing electricity cost of smart appliances via energy buffering framework in smart grid,” IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 9, pp. 1572–1582, Sep 2012.
[20] M. Sechilariu, B. Wang, and F. Locment, “Building integrated photovoltaic system with energy storage and smart grid communication,” IEEE Trans. Ind. Electron., vol. 60, no. 4, pp. 1607–1618, April 2013.
[21] G. Carpinelli, G. Celli, S. Mocci, F. Mottola, F. Pilo, and D. Proto, “Optimal integration of distributed energy storage devices in smart grids,” IEEE Trans. Smart Grid, vol. 4, no. 2, pp. 985–995, June 2013.
[22] B.-G. Kim, S. Ren, M. van der Schaar, and J.-W. Lee, “Bidirectional energy trading and residential load scheduling with electric vehicles in the smart grid,” IEEE J. Sel. Areas Commun., vol. 31, no. 7, pp. 1219–1234, July 2013.
[23] J. V. Roy, N. Leemput, F. Geth, R. Salenbien, J. Buscher, and J. Driesen, “Apartment building electricity system impact of operational electric vehicle charging strategies,” IEEE Trans. Sustain. Energy, vol. 5, no. 1, pp. 264–272, Jan 2014.
[24] R. Yu, J. Ding, W. Zhong, Y. Liu, and S. Xie, “PHEV charging and discharging cooperation in V2G networks: A coalition game approach,” IEEE Internet Things J., vol. 1, no. 6, pp. 578–589, Dec 2014.
[25] J. Lin, K.-C. Leung, and V. Li, “Optimal scheduling with vehicle-to-grid regulation service,” IEEE Internet Things J., vol. 1, no. 6, pp. 556–569, Dec 2014.
[26] J. Tan and L. Wang, “Integration of plug-in hybrid electric vehicles into residential distribution grid based on two-layer intelligent optimization,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 1774–1784, July 2014.
[27] L. Igualada, C. Corchero, M. Cruz-Zambrano, and F.-J. Heredia, “Optimal energy management for a residential microgrid including a vehicle-to-grid system,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 2163–2172, July 2014.
[28] F. Geth, J. Tant, E. Haesen, J. Driesen, and R. Belmans, “Integration of energy storage in distribution grids,” in IEEE Power and Energy Society General Meeting, Minneapolis, MN, July 2010, pp. 1–6.
[29] S. Nykamp, M. Bosman, A. Molderink, J. Hurink, and G. Smit, “Value of storage in distribution grids – Competition or cooperation of stakeholders?” IEEE Trans. Smart Grid, vol. 4, no. 3, pp. 1361–1370, Sep 2013.
[30] W. Tushar, C. Yuen, D. B. Smith, and H. V. Poor, “Price discrimination for energy trading in smart grid: A game theoretic approach,” IEEE Trans. Smart Grid, Dec. 2015, (to appear).
[31] W.-T. Li, C. Yuen, N. U. Hassan, W. Tushar, C.-K. Wen, K. L. Wood, K. Hu, and X. Liu, “Demand response management for residential smart grid: From theory to practice,” IEEE Access - Special Section on Smart Grids: A Hub of Interdisciplinary Research, vol. 3, Nov. 2015.
[32] A. Naeem, A. Shabbir, N. U. Hassan, C. Yuen, A. Ahmed, and W. Tushar, “Understanding customer behavior in multi-tier demand response management program,” IEEE Access - Special Section on Smart Grids: A Hub of Interdisciplinary Research, vol. 3, Nov. 2015.
[33] Y. Liu, C. Yuen, R. Yu, Y. Zhang, and S. Xie, “Queuing-based energy consumption management for heterogeneous residential demands in smart grid,” IEEE Trans. Smart Grid, vol. pre-print, June 2015, (doi: 10.1109/TSG.2015.2432571).
[34] X. Wang, C. Yuen, X. Chen, N. U. Hassan, and Y. Ouyang, “Cost-aware
demand scheduling for delay tolerant applications,” Elsevier Journal of
Networks and Computer Applications, vol. 53, pp. 173–182, July 2015.
[35] Y. Zhang, S. He, and J. Chen, “Data gathering optimization by dynamic
sensing and routing in rechargeable sensor network,” IEEE/ACM Trans.
Netw., vol. pre-print, June 2015, (doi: 10.1109/TNET.2015.2425146).
[36] H. Zhang, P. Cheng, L. Shi, and J. Chen, “Optimal DoS attack scheduling
in wireless networked control system,” IEEE Trans. Control Syst.
Technol., vol. pre-print, Aug. 2015, (doi: 10.1109/TCST.2015.2462741).
[37] Z. Wang and S. Wang, “Grid power peak shaving and valley filling
using vehicle-to-grid systems,” IEEE Trans. Power Del., vol. 28, no. 3,
pp. 1822–1829, July 2013.
[38] W. Tushar, W. Saad, H. V. Poor, and D. B. Smith, “Economics of electric
vehicle charging: A game theoretic approach,” IEEE Trans. Smart Grid,
vol. 3, no. 4, pp. 1767–1778, Dec 2012.
[39] L. Gkatzikis, I. Koutsopoulos, and T. Salonidis, “The role of aggregators
in smart grid demand response markets,” IEEE J. Sel. Areas Commun.,
vol. 31, no. 7, pp. 1247–1257, July 2013.
[40] W. Tushar, J. A. Zhang, D. Smith, H. V. Poor, and S. Thiébaux,
“Prioritizing consumers in smart grid: A game theoretic approach,” IEEE
Trans. Smart Grid, vol. 5, no. 3, pp. 1429–1438, May 2014.
[41] W. Saad, Z. Han, H. V. Poor, and T. Başar, “A noncooperative game for
double auction-based energy trading between PHEVs and distribution
grids,” in Proc. IEEE Int’l Conf. Smart Grid Commun. (SmartGridComm), Brussels, Belgium, Oct. 2011, pp. 267–272.
[42] T. H. Bradley and A. A. Frank, “Design, demonstrations and sustainability impact assessments for plug-in hybrid electric vehicles,” Renewable
and Sustainable Energy Reviews, vol. 13, no. 1, pp. 115–128, Jan 2009.
[43] O. Derin and A. Ferrante, “Scheduling energy consumption with local
renewable micro-generation and dynamic electricity prices,” in Proc. 1st
Workshop Green Smart Embedded Syst. Technol.: Infrastruct., Methods,
Tools, Stockholm, Sweden, Apr 2010, pp. 1–6.
[44] P. Huang, A. Scheller-Wolf, and K. Sycara, “Design of a multi-unit
double auction e-market,” Computational Intelligence, vol. 18, no. 4,
pp. 596–617, Feb 2002.
[45] V. Oduguwa and R. Roy, “Bi-level optimisation using genetic algorithm,”
in Proc. IEEE International Conference on Artificial Intelligence Systems, Geelong, Australia, Feb 2002, pp. 322–327.
[46] P. Samadi, A.-H. Mohsenian-Rad, R. Schober, V. Wong, and J. Jatskevich, “Optimal real-time pricing algorithm based on utility maximization
for smart grid,” in Proc. IEEE Int’l Conf. Smart Grid Commun. (SmartGridComm), Gaithersburg, MD, Oct. 2010, pp. 415–420.
[47] X. Guojun, L. Yongsheng, H. Xiaoqin, X. Xicong, W. Qianggang,
and Z. Niancheng, “Study on the proportional allocation of electric
vehicles with conventional and fast charge methods when in distribution network,” in Proc. China International Conference on Electricity
Distribution (CICED), Shanghai, China, Sept 2012, pp. 1–5.
[48] G. Krajačić, N. Duića, A. Tsikalakis, M. Zoulias, G. Caralis, E. Panteri,
and M. da Graça Carvalho, “Feed-in tariffs for promotion of energy
storage technologies,” Energy Policy, vol. 39, no. 3, pp. 1410–1425,
Mar 2011.
[49] Fraunhofer-Gesellschaft, “Breakthrough in electricity storage: New
large and powerful redox flow battery,” Science Daily,
March 2013, retrieved August 31, 2014. [Online]. Available:
www.sciencedaily.com/releases/2013/03/130318105003.htm
[50] H. Ali and S. Ali-Oettinger, “Advancing li-ion,” pv magazine, May 2012. [Online]. Available: http://www.pv-magazine.com/archive/articles/beitrag/advancing-li-ion-100006681/501/axzz3Vkp0jXnS
[51] N. Garun, “At 38,000 reservations, Tesla’s powerwall is already sold
out until mid-2016,” 2015, accessed May 7, 2015. [Online]. Available:
http://thenextweb.com/gadgets/2015/05/07/powerwall-is-sold-out/
[52] LIPA, “Proposal concerning modifications to lipa’s tariff for electric
service,” 2010, accessed on April 3, 2015. [Online]. Available:
http://www.lipower.org/pdfs/company/tariff/proposal feedin.pdf
Published as a conference paper at ICLR 2018
MODELING LATENT ATTENTION WITHIN NEURAL NETWORKS
arXiv:1706.00536v2 [] 30 Dec 2017
Christopher Grimm
[email protected]
Department of Computer Science & Engineering
University of Michigan
Dilip Arumugam, Siddharth Karamcheti,
David Abel, Lawson L.S. Wong,
Michael L. Littman
{dilip_arumugam,
siddharth_karamcheti, david_abel,
lsw, michael_littman}@brown.edu
Department of Computer Science
Brown University
Providence, RI 02912
ABSTRACT
Deep neural networks are able to solve tasks across a variety of domains and
modalities of data. Despite many empirical successes, we lack the ability to clearly
understand and interpret the learned mechanisms that contribute to such effective
behaviors and more critically, failure modes. In this work, we present a general
method for visualizing an arbitrary neural network’s inner mechanisms and their
power and limitations. Our dataset-centric method produces visualizations of how
a trained network attends to components of its inputs. The computed “attention
masks” support improved interpretability by highlighting which input attributes are
critical in determining output. We demonstrate the effectiveness of our framework
on a variety of deep neural network architectures in domains from computer vision
and natural language processing. The primary contribution of our approach is
an interpretable visualization of attention that provides unique insights into the
network’s underlying decision-making process irrespective of the data modality.
1 INTRODUCTION
Machine-learning systems are ubiquitous, even in safety-critical areas. Trained models used in
self-driving cars, healthcare, and environmental science must not only strive to be error-free but,
in the face of failures, must be amenable to rapid diagnosis and recovery. This trend toward real-world applications is largely being driven by recent advances in the area of deep learning. Deep
neural networks have achieved state-of-the-art performance on fundamental domains such as image
classification (Krizhevsky et al., 2012), language modeling (Bengio et al., 2000; Mikolov et al., 2010),
and reinforcement learning from raw pixels (Mnih et al., 2015). Unlike traditional linear models, deep
neural networks offer the significant advantage of being able to learn their own feature representation
for the completion of a given task. While learning such a representation removes the need for manual
feature engineering and generally boosts performance, the resulting models are often hard to interpret,
making it significantly more difficult to assign credit (or blame) to the model’s behaviors. The use of
deep learning models in increasingly important application areas underscores the need for techniques
to gain insight into their failure modes, limitations, and decision-making mechanisms.
Substantial prior work investigates methods for increasing interpretability of these systems. One body
of work focuses on visualizing various aspects of networks or their relationship to each datum they
take as input Yosinski et al. (2015); Zeiler & Fergus (2015). Other work investigates algorithms for
eliciting an explanation from trained machine-learning systems for each decision they make Ribeiro
et al. (2016); Baehrens et al. (2010); Robnik-Šikonja & Kononenko (2008). A third line of work, of
which our method is most aligned, seeks to capture and understand what networks focus on and what
they ignore through attention mechanisms.
Attention-based approaches focus on network architectures that specifically attend to regions of
their input space. These “explicit” attention mechanisms were developed primarily to improve
network behavior, but additionally offer increased interpretability of network decision making
through highlighting key attributes of the input data (Vinyals et al., 2015; Hermann et al., 2015; Oh
et al., 2016; Kumar et al., 2016). Crucially, these explicit attention mechanisms act as filters on the
input. As such, the filtered components of the input could be replaced with reasonably generated
noise without dramatically affecting the final network output. The ability to selectively replace
irrelevant components of the input space is a direct consequence of the explicit attention mechanism.
The insight at the heart of the present work is that it is possible to evaluate the property of “selective
replaceability” to better understand a network that lacks any explicit attention mechanism. An
architecture without explicit attention may still depend more on specific facets of its input data when
constructing its learned, internal representation, resulting in a “latent” attention mechanism.
In this work, we propose a novel approach for indirectly measuring latent attention mechanisms
in arbitrary neural networks using the notion of selective replaceability. Concretely, we learn an
auxiliary, “Latent Attention Network” (LAN), that consumes an input data sample and generates a
corresponding mask (of the same shape) indicating the degree to which each of the input’s components
are replaceable with noise. We train this LAN by corrupting the inputs to a pre-trained network
according to generated LAN masks and observing the resulting corrupted outputs. We define a loss
function that trades off maximizing the corruption of the input while minimizing the deviation between
the outputs generated by the pre-trained network using the true and corrupted inputs, independently.
The resultant LAN masks must learn to identify the components of the input data that are most critical
to producing the existing network’s output (i.e., those regions that are given the most attention by the existing network).
We empirically demonstrate that the LAN framework can provide unique insights into the inner
workings of various pre-trained networks. Specifically, we show that classifiers trained on a Translated
MNIST domain learn a two-stage process of first localizing a digit within the image before determining
its class. We use this interpretation to predict regions on the screen where digits are less likely to be
properly classified. Additionally, we use our framework to visualize the latent attention mechanisms
of classifiers on both image classification (to learn the visual features most important to the network’s
prediction), and natural language document classification domains (to identify the words most relevant
to certain output classes). Finally, we examine techniques for generating attention masks for specific
samples, illustrating the capability of our approach to highlight salient features in individual members
of a dataset.
2
R ELATED W ORK
We now survey relevant literature focused on understanding deep neural networks, with a special
focus on approaches that make use of attention.
Attention has primarily been applied to neural networks to improve performance Mnih et al. (2014);
Gregor et al. (2015); Bahdanau et al. (2014). Typically, the added attention scheme provides an
informative prior that can ease the burden of learning a complex, highly structured output space (as in
machine translation). For instance, Cho et al. (2015) survey existing content-based attention models
to improve performance in a variety of supervised learning tasks, including speech recognition,
machine translation, image caption generation, and more. Similarly, Yang et al. (2016) apply stacked
attention networks to better answer natural language questions about images, and Goyal et al. (2016)
investigate a complementary method for networks specifically designed to answer questions about
visual content; their approach visualizes which content in the image is used to inform the network’s
answer. They use a strategy similar to that of attention to visualize what a network focuses on when
tasked with visual question answering problems.
Yosinski et al. (2015) highlight an important distinction for techniques that visualize aspects of
networks: dataset-centric methods, which require a trained network and data for that network, and
network-centric methods, which target visualizing aspects of the network independent of any data.
In general, dataset-centric methods for visualization have the distinct advantage of being network
agnostic. Namely, they can treat the network to visualize entirely as a black box. All prior work
for visualizing networks, of both dataset-centric and network-centric methodologies, is specific to
particular network architectures (such as convolutional networks). For example, Zeiler & Fergus
(2015) introduce a visualization method for convolutional neural networks (CNNs) that illustrates
which input patterns activate feature maps at each layer of the network. Their core methodology
Figure 1: Diagram of the Latent Attention Network (LAN) framework. [Diagram: the input x is fed to the LAN A to produce the mask A(x); the mask combines x with noise η to form the corrupted input x̃; the black-box network F is applied to both x and x̃, yielding F(x) and F(x̃).]
is to project activations of nodes at any layer of the network back to the input pixel space using a
Deconvolutional Network introduced by Zeiler et al. (2011), resulting in highly interpretable feature
visualizations. An exciting line of work has continued advancing these methods, as in Nguyen et al.
(2016); Simonyan et al. (2013), building on the earlier work of Erhan et al. (2009) and Berkes &
Wiskott (2005).
A different line of work focuses on strategies for eliciting explanations from machine learning
systems to increase interpretability Ribeiro et al. (2016); Baehrens et al. (2010); Robnik-Šikonja
& Kononenko (2008). Lei et al. (2016) forces networks to output a short “rationale” that (ideally)
justifies the network’s decision in Natural Language Processing tasks. Bahdanau et al. (2014) advance
a similar technique in which neural translation training is augmented by incentivizing networks to
jointly align and translate source texts. Lastly, Zintgraf et al. (2017) describe a method for eliciting
visualizations that offer explanation for decisions made by networks by highlighting regions of the
input that are considered evidence for or against a particular decision.
In contrast to all of the discussed methods, we develop a dataset-centric method for visualizing
attention in an arbitrary network architecture. To the best of our knowledge, the approach we develop
is the first of its kind in this regard. One similar class of methods is sensitivity analysis, introduced
by Garson (1991), which seeks to understand input variables’ contribution to decisions made by the
network Wang et al. (2000); Gedeon (1997); Gevrey et al. (2006). Sensitivity analysis has known
limitations Mazurowski & Szecowka (2006), including failures in highly dependent input spaces and
restriction to ordered, quantitative input spaces Montano & Palmer (2003).
3 METHOD
A key distinguishing feature of our approach is that we assume minimal knowledge about the network
to be visualized. We only require that the network F : R^d → R^ℓ be provided as a black-box
function (that is, we can provide input x to F and obtain output F (x)) through which gradients
can be computed. Since we do not have access to the network architecture, we can only probe the
network either at its input or its output. In particular, our strategy is to modify the input by selectively
replacing components via an attention mask, produced by a learned Latent Attention Network (LAN).
3.1 LATENT ATTENTION NETWORK FRAMEWORK
A Latent Attention Network is a function A : R^d → [0, 1]^d that, given an input x (for the original
network F ), produces an attention mask A(x) of the same shape as x. The attention mask seeks
to identify input components of x that are critical to producing the output F (x). Equivalently, the
attention mask determines the degree to which each component of x can be corrupted by noise while
minimally affecting F (x). To formalize this notion, we need two additional design components:
L_F : R^ℓ × R^ℓ → R,   a loss function in the output space of F,
H : R^d → R,   a noise probability density over the input space of F.    (1)
We can now complete the specification of the LAN framework. As illustrated in Figure 1, given an
input x, we draw a noisy vector η ∼ H and corrupt x according to A(x) as follows:
x̃ = A(x) · η + (1 − A(x)) · x,    (2)
where 1 denotes a tensor of ones with the same shape as A(x), and all operations are performed
element-wise. Under this definition of x̃, the components of A(x) that are close to 0 indicate that
the corresponding components of x represent signal/importance, and those close to 1 represent
noise/irrelevance. Finally, we can apply the black-box network F to x̃ and compare the output F (x̃)
to the original F (x) using the loss function LF .
An ideal attention mask A(x) replaces/corrupts as many input components as possible (it has A(x)
components close to 1), while minimally distorting the original output F (x), as measured by LF .
Hence we train the LAN A by minimizing the following training objective for each input x:
L_LAN(x) = E_{η∼H} [ L_F(F(x̃), F(x)) − β·Ā(x) ],    (3)
where Ā(x) denotes the mean value of the attention mask for a given input, x̃ is a function of both η
and A(x) as in Equation 2, and β > 0 is a hyperparameter for weighting the amount of corruption
applied to the input against the reproducibility error with respect to LF , for more information about
this trade-off see Section E in the Appendix.
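A minimal PyTorch sketch of Eqs. (2)-(3) is given below; F, A, the noise sampler, the loss L_F and β are assumed to be supplied by the caller, and F's parameters are assumed frozen (only A is trained).

    import torch

    # Minimal sketch of the LAN training objective in Eqs. (2)-(3).
    # F, A: torch modules; sample_noise(x) draws eta ~ H with x's shape;
    # loss_F compares the two network outputs; beta weights the corruption.
    def lan_loss(F, A, x, sample_noise, loss_F, beta):
        mask = A(x)                              # A(x) in [0, 1]^d
        eta = sample_noise(x)                    # eta ~ H
        x_tilde = mask * eta + (1.0 - mask) * x  # Eq. (2)
        # Reward corruption (mean mask value) while matching F's output.
        return loss_F(F(x_tilde), F(x).detach()) - beta * mask.mean()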
3.2 LATENT ATTENTION NETWORK DESIGN
To specify a LAN, we provide two components: the loss function LF and the noise distribution H.
The choice of these two components depends on the particular visualization task. Typically, the loss
function LF is the same as the one used to train F itself, although it is not necessary. For example, if
a network F was pre-trained on some original task but later applied as a black-box within some novel
task, one may wish to visualize the latent attention with respect to the new task’s loss to verify that F
is considering expected parts of the input.
The noise distribution H should reflect the expected space of inputs to F , since input components’
importance is measured with respect to variation determined by H. In the general setting, H could
be a uniform distribution over R^d; however, we often operate in significantly more structured spaces
(e.g. images, text). In these structured cases, we suspect it is important to ensure that the noise vector
η lies near the manifold of the input samples.
Based on this principle, we propose two methods of defining H via the generating process for η:
• Constant noise η^const: In domains where input features represent quantities with a default
value c (e.g. 0 word counts in a bag of words, 0 in binary-valued images), set η = c1, where 1
is a tensor of ones with the appropriate shape and c ∈ R.
• Bootstrapped noise η^boot: Draw uniform random samples from the training dataset.
We expect that the latter approach is particularly effective in domains where the data occupies a small
manifold of the input space. For example, consider that the set of natural images is much smaller
than the set of possible images. Randomly selecting an image guarantees that we will be near that
manifold, whereas other basic forms of randomness are unlikely to have this property.
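A short sketch of these two generators, continuing the PyTorch convention above (the function names and the dataset-tensor layout are our own assumptions):

    import torch

    def constant_noise(x: torch.Tensor, c: float = 0.0) -> torch.Tensor:
        """eta^const: a tensor filled with the default value c, shaped like x."""
        return torch.full_like(x, c)

    def bootstrapped_noise(x: torch.Tensor, dataset: torch.Tensor) -> torch.Tensor:
        """eta^boot: a training example drawn uniformly at random, so the
        noise stays near the data manifold."""
        idx = torch.randint(len(dataset), (1,)).item()
        return dataset[idx]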
3.3 SAMPLE-SPECIFIC LATENT ATTENTION MASKS
In addition to optimizing whole networks that map arbitrary inputs to attention masks, we can also
directly estimate the attention scheme of a single input. This sample-specific approach simplifies
a LAN from a whole network to just a single, trainable variable that is the same shape as the input.
This translates to the following optimization procedure:
L^x_SSL = E_{η∼H} [ L_F(F(x̃), F(x)) − β·Ā(x) ]    (4)
where A(x) represents the attention mask learned specifically for sample x and x̃ is a function of η,
A and x defined in Eq. (2).
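A direct way to implement Eq. (4), continuing the PyTorch sketch above, is to optimize a single mask variable for one input; the sigmoid parameterization and the optimizer settings are our own choices.

    import torch

    # Sketch of the sample-specific objective in Eq. (4): the "LAN" reduces
    # to one trainable tensor (mask logits) for a single input x. F is
    # treated as a fixed black box; only the mask is updated.
    def fit_sample_mask(F, x, sample_noise, loss_F, beta, steps=500, lr=1e-2):
        a = torch.zeros_like(x, requires_grad=True)   # mask logits
        opt = torch.optim.Adam([a], lr=lr)
        with torch.no_grad():
            y = F(x)                                  # fixed reference output
        for _ in range(steps):
            mask = torch.sigmoid(a)                   # keep A(x) in [0, 1]
            x_tilde = mask * sample_noise(x) + (1.0 - mask) * x  # Eq. (2)
            loss = loss_F(F(x_tilde), y) - beta * mask.mean()    # Eq. (4)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.sigmoid(a).detach()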
Figure 2: Visualization of attention maps for different translated MNIST digits. For each pair
of images, the original translated MNIST digit is displayed on the top, and a visualization of the
attention map is displayed on the bottom (where warmer colors indicate more important regions to the
pre-trained classifier). Notice the blobs of network importance around each digit, and the seemingly
constant “gridding” pattern present in each of the samples.
4 EXPERIMENTS
To illustrate the wide applicability of the LAN framework, we conduct experiments in a variety
of typical learning tasks, including digit classification and object classification in natural images.
The goal of these experiments is to demonstrate the effectiveness of LANs to visualize latent
attention mechanisms of different network types. Additionally, we conduct an experiment in a topic-modeling task to demonstrate the flexibility of LANs across multiple modalities. While LANs can be
implemented with arbitrary network architectures, we restrict our focus here to fully-connected LANs
and leave investigations of more expressive LAN architectures to future work. More specifically, our
LAN implementations range from 2–5 fully-connected layers each with fewer than 1000 hidden units.
At a high level, these tasks are as follows (see supplementary material for training details):
Translated MNIST
Data : A dataset of 28 × 28 grayscale images with MNIST digits, scaled down to 12 × 12, are
placed in random locations. No modifications are made to the orientation of the digits.
Task : We train a standard deep network for digit classification.
CIFAR-10
Data : A dataset of 3-channel 32 × 32 color images of objects or animals, each belonging to one
of ten unique classes. The images are typically centered around the classes they depict.
Task : We train a standard CNN for object detection.
Newsgroup-20
Data : A dataset consisting of news articles belonging to one of twenty different topics. The list
of topics includes politics, electronics, space, and religion, amongst others.
Task : We train a bag-of-words neural network, similar to the Deep Averaging Network (DAN) of
Iyyer et al. (2015) to classify documents into one of the twenty different categories.
For each experiment, we train a network F (designed for the given task) to convergence. Then, we
train a Latent Attention Network A on F. For all experiments conducted with image data, we used
bootstrapped noise while our exploratory experiment with natural language used constant noise. Since
LANs capture attention in the input space, the result of the latter training procedure is to visualize the
attention mechanism of F on any sample in the input. For a detailed description of all experiments
and associated network architectures, please consult the supplementary material.
5 RESULTS

5.1 TRANSLATED MNIST RESULTS
Results are shown in Figure 2. We provide side-by-side visualizations of samples from the Translated
MNIST dataset and their corresponding attention maps produced by the LAN network. In these
attention maps, there are two striking features: (1) a blob of attention surrounding the digit and (2) an
unchanging grid pattern across the background. This grid pattern is depicted in Figure 3a.
In what follows, we support an interpretation of the grid effect illustrated in Figure 3a. Through
subsequent experiments, we demonstrate that our attention masks have illustrated that the classifier
network operates in two distinct phases:
1. Detect the presence of a digit somewhere in the input space.
2. Direct attention to the region in which the digit was found to determine its class.
Under this interpretation, one would expect classification accuracy to decrease in regions not spanned
by the constant grid pattern. To test this idea, we estimated the error of the classifier on digits centered
at various locations in the image. We rescaled the digits to 7 × 7 pixels to make it easier to fit them in
the regions not spanned by the constant grid. Visualizations of the resulting accuracies are displayed
in Figure 3b. Notice how the normalized accuracy falls off around the edges of the image (where
the constant grid is least present). This effect is particularly pronounced with smaller digits, which
would be harder to detect with a fixed detection grid.
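The location-wise accuracy map in Figure 3b can be estimated with a short routine like the one below; the canvas and digit sizes follow the text, while the tensor layout and function names are assumptions.

    import torch

    # Sketch of the diagnostic above: paste small digits at each possible
    # center and record the classifier's mean accuracy per location.
    def accuracy_map(classifier, digits, labels, canvas=28, size=7):
        """digits: (N, size, size) tensor; labels: (N,) tensor of classes."""
        acc = torch.zeros(canvas, canvas)
        half = size // 2
        for cy in range(half, canvas - half):
            for cx in range(half, canvas - half):
                imgs = torch.zeros(len(digits), 1, canvas, canvas)
                imgs[:, 0, cy - half:cy - half + size,
                           cx - half:cx - half + size] = digits
                preds = classifier(imgs).argmax(dim=1)
                acc[cy, cx] = (preds == labels).float().mean()
        return acc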
To further corroborate our hypothesis, we conducted an additional experiment with a modified version
of the Translated MNIST domain. In this new domain, digits are scaled to 12 × 12 pixels and
never occur in the bottom right 12 × 12 region of the image. Under these conditions, we retrained
our classifier and LAN, obtaining the visualization of the constant grid pattern and probability
representation presented in Figure 3(c-d). Notice how the grid pattern is absent from the bottom
right-hand corner where digits never appeared at training time. Consequently, the accuracy of the
classifier falls off if tested on digits in this region.
Through these results, we showcase the capability of LANs to produce attention masks that not
only provide insights into the inner workings of a trained network but also serve as a diagnostic for
predicting likely failure modes.
Figure 3: (a) Constant grid pattern observed in the attention masks on Translated MNIST. (b)
Accuracy of the pre-trained classifier on 7 × 7 digits centered at different pixels. Each pixel in the
images is colored according to the estimated normalized accuracy on digits centered at that pixel
where warmer colors indicate higher normalized accuracy. Only pixels that correspond to a possible
digit center are represented in these images, with other pixels colored dark blue. (c–d) Duplicate of
(a) and (b) for a pre-trained network on a modified Translated MNIST domain where no digits can
appear in the bottom right hand corner.
5.2 CIFAR-10 CNN
In Figure 4, we provide samples of original images from the CIFAR-10 dataset alongside the
corresponding attention masks produced by the LAN. Notice that, for images belonging to the same
class, the resulting masks capture common visual features such as tail feathers for birds or hulls/masts
for ships. The presence of these features in the mask suggests that the underlying classifier learns a
canonical representation of each class to discriminate between images and to confirm its classification.
We further note that, in addition to revealing high level concepts in the learned classifier, the LAN
appears to demonstrate the ability to compose those concepts so as to discriminate between classes.
This property is most apparent between the horse and deer classes, both of which show extremely
similar regions of attention for capturing legs while deviating in their structure to confirm the presence of a head or antlers, respectively.
Figure 4: Each frame pairs an input image (left) with its LAN attention mask (right). Each column
represents a different category: horse, plane, truck, bird, ship, and deer.
5.3 NEWSGROUP-20 DOCUMENT CLASSIFICATION RESULTS
Tables 1 and 2 contrast words present in documents against the 15 most important words, as determined by the corresponding attention mask, for topic classification. We note that these important
words generally tend to be either in the document itself (highlighted in yellow) or closely associated
with the category that the document belongs to. The absence of important words from other classes is
explained by our choice of η_0 noise, which produces more visually appealing attention masks, but
doesn’t penalize the LAN for ignoring such words. We suspect that category-associated words not
present in the document occur due to the capacity limitations of the fully-connected LAN architecture on a high-dimensional and poorly structured bag-of-words input space. Future work will further
explore the use of LANs in natural language tasks.
Document Topic: comp.sys.mac.hardware
Document Words (Unordered): ralph, rutgers, rom, univ, mac, gonzalez, gandalf, work, use, you, phone, drives, internet, camden, party, floppy, science, edu, roms, drive, upgrade, disks, computer
15 Most Important Words: mac, drive, computer, problem, can, this, drives, disk, use, controller, UNK, memory, for, boot, fax

Table 1: A visualization of the attention mask generated for a specific document in the Newsgroup-20 Dataset. The document consists of the words above, and is labeled under the category “comp.sys.mac.hardware”, which consists of topics relating to Apple Macintosh computer hardware. Note the top 15 words identified by the LAN mask, and how they seem to pick out important words relevant to the true class of the given document.
5.4 SAMPLE-SPECIFIC ATTENTION MASKS
In all of the previous results, there is a strong sense in which the resultant attention masks are highly
correlated with the pre-trained network outputs and less sensitive to variations in the individual
input samples. Here we present results on the same datasets (see Figure 5 and Tables 3 and 4) using the
sample-specific objective defined in Eq. (4). We notice that these learned attention masks are more
representative of nuances present in each individual sample. This increase in contained information
seems reasonable when considering the comparative ease of optimizing a single attention mask for a
single sample rather than a full LAN that must learn to map from all inputs to their corresponding
attention masks.
6 CONCLUSION
As deep neural networks continue to find application to a growing collection of tasks, understanding
their decision-making processes becomes increasingly important. Furthermore, as this space of tasks
Document Topic: soc.religion.christian
Document Words (Unordered): UNK, death, university, point, complaining, atheists, acs, isn, since, doesn, never, that, matters, god, incestuous, atterlep, rejection, forever, hell, step, based, talk, vela, eternal, edu, asked, worse, you, tread, will, not, and, rochester, fear, opinions, die, faith, fact, earth, oakland, lot, don, christians, alan, melissa, rushing, angels, comparison, heaven, terlep
15 Most Important Words: UNK, clh, jesus, this, church, christians, interested, lord, christian, answer, will, heaven, find, worship, light

Table 2: Another visualization of the attention mask generated for a specific document in the Newsgroup-20 Dataset. This document consists of the words above, and is labeled under the category “soc.religion.christian”, which consists of topics relating to Christianity. The presence of UNK as an important word in this religious document could be attributed to a statistically significant number of references to people and places from Abrahamic texts, which are converted to UNK due to their relative uncommonness in the other document classes.
Figure 5: Each image pair contains a CIFAR-10 image and its corresponding sample-specific attention
mask. Each column contains images from a different category: car, cat, deer, dog, horse and ship.
Notice how these sample-specific attention masks retain the class specific features mentioned in
Section 5.2 while more closely tracking the subjects of the images.
Document Topic: comp.sys.ibm.pc.hardware
Document Words (Unordered): UNK, video, chip, used, color, card, washington, drivers, name, edu, driver, chipset, suffice, functions, for, type, cica
15 Most Important Words: card, chip, video, drivers, driver, type, used, cica, edu, washington, bike, functions, time, sale, color

Table 3: A visualization of the sample-specific attention mask generated for a specific document in the Newsgroup-20 Dataset. The document consists of the words above and is labeled under the category “comp.sys.ibm.pc.hardware”, which consists of topics relating to personal computing and hardware. Words that are both in the document and detected by the sample-specific attention mask are highlighted in yellow.
grows to include areas where there is a small margin for error, the ability to explore and diagnose
problems within erroneous models becomes crucial.
Document Topic: talk.religion.misc
Document Words (Unordered): newton, jesus, spread, died, writes, truth, ignorance, bliss, sandvik, not, strength, article, that, good, apple, kent
15 Most Important Words: ignorance, died, sandvik, kent, newton, bliss, jesus, truth, good, can, strength, for, writes, computer, article
Table 4: A visualization of the sample-specific attention mask generated for a specific document in the Newsgroup-20 Dataset. The document consists of the words above and is labeled under the category “talk.religion.misc”, which consists of topics relating to religion. Words that are both in the document and detected by the sample-specific attention mask are highlighted in yellow.
In this work, we proposed Latent Attention Networks as a framework for capturing the latent attention
mechanisms of arbitrary neural networks that draws parallels between noise-based input corruption
and attention. We have shown that the analysis of these attention measurements can effectively
diagnose failure modes in pre-trained networks and provide unique perspectives on the mechanism
by which arbitrary networks perform their designated tasks.
We believe there are several interesting research directions that arise from our framework. First, there
are interesting parallels between this work and the popular Generative Adversarial Networks (Goodfellow et al., 2014). It may be possible to simultaneously train F and A as adversaries. Since both F
and A are differentiable, one could potentially exploit this property and use A to encourage a specific
attention mechanism on F , speeding up learning in challenging domains and otherwise allowing for
novel interactions between deep networks. Furthermore, we explored two types of noise for input corruption: ηconst and ηboot. It may be possible to make the process of generating noise a part of the
network itself by learning a nonlinear transformation and applying it to some standard variety of noise
(such as Normal or Uniform). Since our method depends on being able to sample noise that is similar
to the “background noise” of the domain, better mechanisms for capturing noise could potentially
enhance the LAN’s ability to pick out regions of attention and eliminate the need for choosing a
specific type of noise at design time. Doing so would allow the LAN to pick up more specific features
of the input space that are relevant to the decision-making process of arbitrary classifier networks.
R EFERENCES
David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803–1831, 2010.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic
language model. In Journal of Machine Learning Research, 2000.
Pietro Berkes and Laurenz Wiskott. Slow feature analysis yields a rich repertoire of complex cell
properties. Journal of vision, 5(6):9–9, 2005.
Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. Describing multimedia content using
attention-based encoder-decoder networks. IEEE Transactions on Multimedia, 17(11):1875–1886,
2015.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer
features of a deep network. University of Montreal, 1341:3, 2009.
David G Garson. Interpreting neural network connection weights. 1991.
Tamás D Gedeon. Data mining of inputs: analysing magnitude and functional measures. International
Journal of Neural Systems, 8(02):209–218, 1997.
Muriel Gevrey, Ioannis Dimopoulos, and Sovan Lek. Two-way interaction of input variables in the
sensitivity analysis of neural network models. Ecological modelling, 195(1):43–50, 2006.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Yash Goyal, Akrit Mohapatra, Devi Parikh, and Dhruv Batra. Towards transparent ai systems:
Interpreting visual question answering models. arXiv preprint arXiv:1608.08974, 2016.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A
recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa
Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Mohit Iyyer, Varun Manjunatha, Jordan L. Boyd-Graber, and Hal Daumé. Deep unordered composition rivals syntactic methods for text classification. In ACL, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor
Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for
natural language processing. In ICML, 2016.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. NAACL, 2016.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural
network acoustic models. 2013.
Maciej A Mazurowski and Przemyslaw M Szecowka. Limitations of sensitivity analysis for neural
networks in cases with dependent inputs. In Computational Cybernetics, 2006. ICCC 2006. IEEE
International Conference on, pp. 1–5. IEEE, 2006.
Tomas Mikolov, Martin Karafiát, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent
neural network based language model. In INTERSPEECH, 2010.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In
Advances in neural information processing systems, pp. 2204–2212, 2014.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
JJ Montano and A Palmer. Numeric sensitivity analysis applied to feedforward neural networks.
Neural Computing & Applications, 12(2):119–125, 2003.
Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncovering the
different types of features learned by each neuron in deep neural networks. CoRR, abs/1602.03616,
2016.
Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory,
active perception, and action in minecraft. In ICML, 2016.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should i trust you?: Explaining the
predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM, 2016.
Marko Robnik-Šikonja and Igor Kononenko. Explaining classifications for individual instances.
IEEE Transactions on Knowledge and Data Engineering, 20(5):589–600, 2008.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks:
Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015.
Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton.
Grammar as a foreign language. In NIPS, 2015.
Wenjia Wang, Phillis Jones, and Derek Partridge. Assessing the impact of input features in a
feedforward neural network. Neural Computing & Applications, 9(2):101–112, 2000.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks
for image question answering. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 21–29, 2016.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural
networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
Eliezer Yudkowsky. Artificial intelligence as a positive and negative factor in global risk. Global
catastrophic risks, 1(303):184, 2008.
Matthew Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. pp. 1–11, 2015. ISSN 1611-3349.
Matthew D Zeiler, Graham W Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid
and high level feature learning. In Computer Vision (ICCV), 2011 IEEE International Conference
on, pp. 2018–2025. IEEE, 2011.
Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network
decisions: Prediction difference analysis. arXiv preprint arXiv:1702.04595, 2017.
In the following experiment subsections we describe network architectures by sequentially listing
their layers using an abbreviated notation:
Conv(⟨Num Filters⟩, ⟨Stride⟩, ⟨Filter Dimensions⟩, ⟨Activation Function⟩)
ConvTrans(⟨Num Filters⟩, ⟨Stride⟩, ⟨Filter Dimensions⟩, ⟨Activation Function⟩)
FC(⟨Num Hidden Units⟩, ⟨Activation Function⟩)    (5)
for convolutional, convolutional-transpose and fully connected layers, respectively. In all network architectures, ℓ-ReLU denotes the leaky ReLU of Maas et al. (2013).
We now describe each experiment in greater detail.
A TRANSLATED MNIST HANDWRITTEN DIGIT CLASSIFIER
Here we investigate the attention masks produced by a LAN trained on a digit classifier. We show
how LANs provide intuition about the particular method a neural network uses to complete its task
and highlight failure-modes. Specifically, we construct a “translated MNIST” domain, where the
original digits are scaled down from 28 × 28 to 12 × 12 and positioned in random locations in the
original 28 × 28 image. The network F is a classifier, outputting the probability of each digit being
present in a given image.
The pre-trained network F has the following architecture: Conv(10, 2, (4 × 4), ℓ-ReLU), Conv(20, 2, (4 × 4), ℓ-ReLU), FC(10, softmax). F is trained with the Adam Optimizer for 100,000 iterations with a learning rate of 0.001 and with L_F = −∑_i y_i log F(x)_i, where y ∈ R^10 is a one-hot vector indicating the digit class.
The latent attention network A has the following architecture: FC(100, ℓ-ReLU), FC(784, sigmoid), with its output reshaped to a 28 × 28 image. A is trained with the Adam Optimizer for 100,000 iterations with a learning rate of 0.0001. We use β = 5.0 and η = ηboot for this experiment. Both architectures are rendered in the sketch below.
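The abbreviated notation of Eq. (5) translates directly into standard layers. The following PyTorch rendering of F and A is an illustrative sketch only: the paper does not state its framework, padding scheme, or leaky-ReLU slope, so PyTorch defaults and zero padding are assumptions. With 28 × 28 inputs, the two stride-2, 4 × 4 convolutions produce 13 × 13 and then 5 × 5 feature maps.

import torch.nn as nn

# F: Conv(10, 2, (4 x 4), l-ReLU), Conv(20, 2, (4 x 4), l-ReLU), FC(10, softmax)
F = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=4, stride=2), nn.LeakyReLU(),
    nn.Conv2d(10, 20, kernel_size=4, stride=2), nn.LeakyReLU(),
    nn.Flatten(),
    nn.Linear(20 * 5 * 5, 10), nn.Softmax(dim=1),
)

# A: FC(100, l-ReLU), FC(784, sigmoid), with the output reshaped to a 28 x 28 mask
A = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 100), nn.LeakyReLU(),
    nn.Linear(100, 784), nn.Sigmoid(),
    nn.Unflatten(1, (1, 28, 28)),
)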
B CIFAR-10 CNN
In this experiment we demonstrate that the LAN framework can illuminate the decision making of a classifier (based on the AlexNet architecture) on natural images. To avoid overfitting, we augment the
CIFAR-10 dataset by applying small random affine transformations to the images at train time. We
used β = 5.0 for this experiment.
The pre-trained network F has the following architecture: Conv(64, 2, (5 × 5), ℓ-ReLU), Conv(64, 2, (5 × 5), ℓ-ReLU), Conv(64, 1, (3 × 3), ℓ-ReLU), Conv(64, 1, (3 × 3), ℓ-ReLU), Conv(32, 2, (3 × 3), ℓ-ReLU), FC(384, tanh), FC(192, tanh), FC(10, softmax), where dropout and local response normalization are applied at each layer. F is trained with the Adam Optimizer for 250,000 iterations with a learning rate of 0.0001 and with L_F = −∑_i y_i log F(x)_i, where y ∈ R^10 is a one-hot vector indicating the image class.
The latent attention network, A has the following architecture: FC(500, `-ReLU), FC(500, `-ReLU),
FC(500, `-ReLU), FC(1024, sigmoid), with its output being reshaped to a 32 × 32 × 1 image and
tiled 3 times on the channel dimension to produce a mask over the pixels. A is trained with the Adam
Optimizer for 250, 000 iterations with a learning rate of 0.0005. We used β = 7.0 and η = ηboot for
this experiment.
C 20 NEWSGROUPS DOCUMENT CLASSIFICATION
In this experiment, we extend the LAN framework for use on non-visual tasks. Namely, we show
that it can be used to provide insight into the decision-making process of a bag-of-words document
classifier, and identify individual words in a document that inform its predicted class label.
To do this, we train a Deep Averaging Network (DAN) (Iyyer et al., 2015) for classifying documents
from the Newsgroup-20 dataset. The 20 Newsgroups Dataset consists of 18,821 total documents, partitioned into a training set of 11,293 documents, and a test set of 7,528 documents. Each document
belongs to 1 of 20 different categories, including topics in religion, sports, computer hardware, and
12
Published as a conference paper at ICLR 2018
politics, to name a few. In our experiments, we utilize the version of the dataset with stop words
(common words like “the”, “his”, “her”) removed.
The DAN architecture is simple: each document is represented with a bag-of-words histogram vector, with dimension equal to the number of unique words in the dataset (the size of the vocabulary). This bag-of-words vector is then multiplied with an embedding matrix and divided by the number of words in the document, to generate a low-dimensional normalized representation. This vector is then passed through two separate hidden layers (with dropout), and then a final softmax layer, to produce a distribution over the 20 possible classes. In our experiments we use an embedding size of 50, and hidden layer sizes of 200 and 150 units, respectively. We train the model for 1,000,000 mini-batches, with a batch size of 32. As in our previous experiments, we use the Adam Optimizer (Kingma & Ba, 2014), with a learning rate of 0.00005. A minimal sketch of this architecture is given below.
The latent attention network, A has the following architecture: FC(100, `-ReLU), FC(1000, `-ReLU),
FC(vocab-size, sigmoid). A is trained with the Adam Optimizer for 100, 000 iterations with a
learning rate of 0.001. We used β = 50.0 and η = ηconst with a constant value of 0.
D SAMPLE SPECIFIC EXPERIMENTS
In the sample specific experiments the same pre-trained networks are used as in the standard CIFAR-10 and Newsgroup-20 experiments. To train the sample-specific masks, we used a learning rate of
0.001 and 0.05 for the Newsgroup-20 and CIFAR-10 experiments respectively. Both experiments
used the Adam Optimizer and each mask is trained for 10, 000 iterations. We used β = 50.0 and
η = ηboot for both experiments.
E BETA HYPERPARAMETERS
In this section, we illustrate the role of the β hyperparameter in the sample specific experiments.
As stated earlier, β controls the trade-off between the amount of corruption in the input and the
similarity of the corrupted and uncorrupted inputs (measured as a function of the respective pre-trained network outputs). Based on the loss functions presented in Eqs. (3) and (4), high values of
β encourage a larger amount of input corruption and weight the corresponding term more heavily in
the loss computation. Intuitively, we would like to identify the minimal number of dimensions in
the input space that most critically affect the output of the pre-trained network. The small amount
of input corruption that corresponds to a low value of β would make the problem of reconstructing
pre-trained network outputs too simple, resulting in attention masks that deem all or nearly all input
dimensions as important; conversely, a heavily corrupted input from a high value of β could make
output reconstruction impossible. We illustrate this for individual images from the CIFAR-10 dataset
below:
[Figure: sample-specific CIFAR-10 attention masks for β values of 0.25, 0.75, 2, 4, and 5.]
F IMAGENET RESULTS
To demonstrate the capacity of our technique to scale up, we visualize attention masks learned on top
of the Inception network architecture introduced in Szegedy et al. (2015) and trained for the ILSVRC
2014 image classification challenge. We utilize a publicly available Tensorflow implementation of
the pre-trained model (https://github.com/tensorflow/models/tree/master/research/slim). In these experiments we learned sample-specific attention masks for different
settings of β on the images shown below. For our input corruption, we used uniformly sampled
RGB noise: η ∼ Uniform(0, 1). We note that the attention masks produced on this domain seem
to produce much richer patterns of contours than in the CIFAR-10 experiments. We attribute this
difference to both the increased size of the images and the increased number of classes between the
two problems (1000 vs. 10).
[Figure: sample-specific ImageNet attention masks for β values of 1.5, 3, and 5.]
G MOTIVATING EXAMPLE: TANK DETECTION
In this section, we motivate our approach by demonstrating how it identifies failure modes within a
real-world application of machine learning techniques.
There is a popular and (allegedly) apocryphal story, relayed in Yudkowsky (2008), that revolves
around the efforts of a US military branch to use neural networks for the detection of camouflaged
tanks within forests. Naturally, the researchers tasked with this binary classification problem collected
images of camouflaged tanks and empty forests in order to compile a dataset. After training, the
neural network model performed well on the testing data and yet, when independently evaluated by
other government agencies, failed to achieve performance better than random chance. It was later
determined that the original dataset only collected positive tank examples on cloudy days, leading to a
classifier that discriminated based on weather patterns rather than the presence of camouflaged tanks.
We design a simple domain for highlighting the effectiveness of our approach, using the aforementioned story as motivation. The problem objective is to train an image classifier for detecting the
presence of tanks in forests. Our dataset is composed of synthetic images generated through the
random placement of trees, clouds and tanks. A representative image from this dataset is provided
below on the left with its component objects highlighted and displayed on the right:
As in the story, our training and testing datasets are generated such that clouds are only present in
positive examples of camouflaged tanks. A simple convolutional neural network architecture is then
trained on this data and treated as the pre-trained network in our LAN framework. Unsurprisingly,
this classifier suffers from the same problems outlined in Yudkowsky (2008); despite high accuracy
on testing data, the classifier fails to detect tanks without the presence of clouds.
We now observe the sample-specific attention masks trained from this classifier:
The resulting attention mask (β = 1.0 with bootstrapped noise η = ηboot ) assigns high importance to
the pixels associated with the cloud while giving no importance to the region of the image containing
the tank. With this example, we underscore the utility of our methods in providing a means of
visualizing the underlying “rationale” of a network for producing a given output. Our attention masks help us recognize that the classifier's basis for detecting tanks incorrectly rests on the presence of clouds.
arXiv:1708.07818v2 [] 1 Sep 2017
ON BOUNDARIES OF RELATIVELY HYPERBOLIC
RIGHT-ANGLED COXETER GROUPS
MATTHEW HAULMARK, HOANG THANH NGUYEN, AND HUNG CONG TRAN
Abstract. We give “visual descriptions” of cut points and non-parabolic
cut pairs in the Bowditch boundary of relatively hyperbolic right-angled
Coxeter groups. We also prove necessary and sufficient conditions for a
relatively hyperbolic right-angled Coxeter group whose defining graph
has a planar flag complex with minimal peripheral structure to have
the Sierpinski carpet or the 2-sphere S2 as its Bowditch boundary. We
apply these results to the problem of quasi-isometry classification of
right-angled Coxeter groups. Additionally, we study right-angled Coxeter groups with isolated flats whose CAT(0) boundaries are the Menger curve.
1. Introduction
For each finite simplicial graph Γ the associated right-angled Coxeter
group GΓ has generating set S equal to the vertices of Γ, relations s^2 = 1 for
each s in S and relations st = ts whenever s and t are adjacent vertices. In
geometric group theory, groups acting on CAT(0) cube complexes are fundamental objects, and right-angled Coxeter groups provide a rich source of such groups. The coarse geometry of right-angled Coxeter groups was
studied by Caprace [Cap09, Cap15], Dani-Thomas [DT15, DT], Dani-Stark-Thomas [DST], Behrstock-Hagen-Sisto [BHS17], Levcovitz [Lev] and others.
In this paper, we will study boundaries of relatively hyperbolic right-angled
Coxeter groups.
The notion of a relatively hyperbolic group was introduced by Gromov
[Gro87] to generalize both word hyperbolic and geometrically finite Kleinian
groups. Bowditch [Bow12] introduced a boundary for relatively hyperbolic groups. The Bowditch boundary generalizes the Gromov boundary of a word hyperbolic group and the limit set of a geometrically finite
Kleinian group. Under modest hypotheses on the peripheral subgroups, the
homeomorphism type of the Bowditch boundary is known to be a quasi-isometry invariant of the group (see Groff [Gro13]). Combining the work of
Groff [Gro13] and Behrstock-Druţu-Mosher [BDM09], we elaborate on the
homeomorphism between Bowditch boundaries induced by a quasi-isometry
between two relatively hyperbolic groups whose peripheral subgroups are
not relatively hyperbolic (see Theorem 2.12).
Date: September 4, 2017.
2000 Mathematics Subject Classification. 20F67, 20F65.
In [Cap09, Cap15], Caprace gives necessary and sufficient conditions on
the defining graph Γ for the associated right-angled Coxeter group GΓ to
be relatively hyperbolic with respect to a collection of finitely generated
subgroups. Behrstock-Hagen-Sisto [BHS17] develop further the work of
Caprace to describe the minimal peripheral structure of a relatively hyperbolic right-angled Coxeter group (see Definition 2.8). In Sections 3 and 4,
we study Bowditch boundaries of relatively hyperbolic right-angled Coxeter
groups with minimal peripheral structure. We use our results concerning the
Bowditch boundary to study quasi-isometry classification of certain classes
of right-angled Coxeter groups. Lastly, we will also study CAT(0) boundaries of right-angled Coxeter groups with isolated flats.
1.1. Visual descriptions of cut points and non-parabolic cut pairs.
Cut points and non-parabolic cut pairs are topological features of the Bowditch
boundary of a relatively hyperbolic group. In relatively hyperbolic right-angled Coxeter groups, we are able to visualize these features via defining
graphs. Our first main result of Section 3 is the following theorem relating
cut points in the Bowditch boundary of a relatively hyperbolic right-angled
Coxeter group to the defining graph.
Theorem 1.1. Let Γ be a simplicial graph and J be a collection of induced
proper subgraphs of Γ. Assume that the right-angled Coxeter group GΓ
is one-ended, hyperbolic relative to the collection P = { GJ | J ∈ J }, and
suppose each subgroup in P is also one-ended. Then each parabolic point
vgGJ0 is a global cut point if and only if some induced subgraph of J0 separates
the graph Γ.
Therefore, we obtain a new quasi-isometry invariant among all relatively
hyperbolic right-angled Coxeter groups.
Corollary 1.2. Let Γ1 and Γ2 be simplicial graphs such that GΓ1 and GΓ2
are both one-ended. Assume that each graph Γi has a peripheral structure
Ji that consists of proper subgraphs of Γi such that each subgroup in Pi =
{ GJ | J ∈ Ji } is one-ended and not relatively hyperbolic. If GΓ1 and GΓ2
are quasi-isometric, then for each graph K ∈ J1 there is a graph L ∈ J2 such
that GK and GL are quasi-isometric and vice versa. Moreover, if K has
an induced subgraph that separates Γ1 , then L also has an induced subgraph
which separates Γ2 .
We refer the reader to Example 3.7 for an illustration of the application of Theorem 1.1 to the quasi-isometry classification of right-angled Coxeter groups. In Section 3 we also visualize non-parabolic cut pairs of the
Bowditch boundary of a relatively hyperbolic group via its defining graph.
Theorem 1.3. Let Γ be a simplicial graph and J be a collection of induced
proper subgraphs of Γ. Assume that the right-angled Coxeter group GΓ is
one-ended and hyperbolic relative to the collection P = { GJ | J ∈ J }, and
suppose each subgroup in P is one-ended. If the Bowditch boundary ∂(GΓ , P)
has a non-parabolic cut pair, then Γ has a separating complete subgraph
suspension. Moreover, if Γ has a separating complete subgraph suspension
whose non-adjacent vertices do not lie in the same subgraph J ∈ J, then the
Bowditch boundary ∂(GΓ , P) has a non-parabolic cut pair.
We remark that Theorem 1.3 can help us prove that Bowditch boundaries
of two distinct groups GΓ1 and GΓ2 are not homeomorphic in certain cases
by detecting non-parabolic cut pairs in their Bowditch boundaries. Therefore, we can conclude that these groups are not quasi-isometric when their Bowditch boundaries are taken with respect to their minimal peripheral structures (see Example
3.10).
1.2. Relatively hyperbolic right-angled Coxeter groups with Sierpinski carpet or sphere Bowditch boundary. In Section 4, we also give
necessary and sufficient conditions for a relatively hyperbolic right-angled
Coxeter group whose defining graph has a planar flag complex to have the
Sierpinski carpet or S2 as its Bowditch boundary. We refer the reader to
the beginning of Section 4 for definitions of the graph theoretic terms used
in the statement of the following theorem.
Theorem 1.4. Let Γ be a graph whose flag complex is planar. Assume
that Γ has a non-trivial peripheral structure J and let P = { GJ | J ∈ J }.
The Bowditch boundary ∂(GΓ , P) is S2 or the Sierpinski carpet if and only
if Γ is inseparable and each graph in J is a strongly non-separating 4-cycle
extension graph.
In addition, the Bowditch boundary ∂(GΓ , P) is the Sierpinski carpet if
and only if Γ contains a strongly non-separating induced n–cycle extension
K (n ≥ 5) satisfying the following properties:
(1) K does not contain a pair of non-adjacent vertices of an induced 4-cycle extension;
(2) No vertex outside K is adjacent to two non-adjacent vertices of K.
A graph is inseparable if it is connected, has no separating complete subgraph, no cut pair, and no separating complete subgraph suspension. We
note that if the defining graph Γ is inseparable, distinct from a complete
graph, and has a planar flag complex distinct from a triangulation of S2 as in
Theorem 1.4, then the CAT(0) boundary of a Davis complex ΣΓ is a Sierpinski carpet (see Świa̧tkowski [Ś]). With the additional assumption that
Γ has a non-trivial peripheral structure, the group GΓ becomes a relatively
hyperbolic group and Theorem 1.4 gives a classification of such right-angled
Coxeter groups. More importantly, the classification in Theorem 1.4 also
contributes to the problem of quasi-isometry classification of right-angled
Coxeter groups as we demonstrate in Example 4.15.
We now restrict Theorem 1.4 to the case of 2–dimensional right-angled
Coxeter groups GΓ (i.e. Γ is triangle free and has at least one edge) and
we obtain a slightly simpler characterization of relatively hyperbolic right-angled Coxeter groups whose boundaries are the Sierpinski carpet or the
sphere.
Corollary 1.5. Let Γ be a triangle free, planar graph. Assume that Γ has a
non-trivial peripheral structure J and let P = { GJ | J ∈ J }. The Bowditch
boundary ∂(GΓ , P) is S2 or the Sierpinski carpet if and only if Γ is inseparable
and each graph in J is a strongly non-separating 4-cycle.
In addition, the Bowditch boundary ∂(GΓ , P) is the Sierpinski carpet if
and only if Γ contains a strongly non-separating induced n–cycle K (n ≥ 5)
satisfying the following properties:
(1) K does not contain a pair of non-adjacent vertices of an induced 4-cycle;
(2) No vertex outside K is adjacent to two non-adjacent vertices of K.
We remark that in [DT] Dani-Thomas study 2–dimensional hyperbolic
right-angled Coxeter groups. The above corollary can be considered an
extension of their work to relatively hyperbolic right-angled Coxeter groups.
We also note that the above corollary is no longer true if we drop the planar
condition on the defining graph (see Remark 5.9).
1.3. Non-hyperbolic right-angled Coxeter groups with Menger curve
boundary. Hyperbolic groups with 1-dimensional boundary generically
have a boundary which is homeomorphic to the Menger curve [KK00, DGP11].
For non-hyperbolic groups, examples of groups with Menger curve boundary are scarce. In fact, prior to [Haua] there were no known techniques for
constructing non-hyperbolic groups with Menger curve CAT(0) boundary.
The first example of a non-hyperbolic group with Menger curve boundary
is constructed in [DHW] using 3-manifolds. In Section 5 we use the main
result of [Haua] and a theorem of Mihalik-Tschantz [MT09] to prove the
following:
Proposition 1.6. Let Γ be a triangle free, non-planar graph with a non-trivial peripheral structure J that consists of induced 4-cycles. If the graph
Γ is inseparable and the CAT(0) boundary ∂ΣΓ of the right-angled Coxeter
Davis complex ΣΓ is not a Sierpinski carpet, then ∂ΣΓ is a Menger curve.
Inspired by the examples in Dani-Haulmark-Walsh [DHW], we use Proposition 1.6 to construct examples of non-hyperbolic right-angled Coxeter
groups with Menger curve CAT(0) boundary. We then use the Bowditch
boundary to show that the constructed examples are not quasi-isometric
(see Lemma 5.7).
Acknowledgments. First, all three authors would like to thank Chris
Hruska for suggestions and insights. The first author would also like to thank
Genevieve Walsh for many helpful conversations. Lastly, the authors would
like to thank Jason Behrstock for a correction to Corollary 1.2.
2. Preliminaries
In this section, we review some concepts in geometric group theory:
CAT(0) spaces, δ–hyperbolic spaces, CAT(0) spaces with isolated flats, relatively hyperbolic groups, CAT(0) boundaries, Gromov boundaries, Bowditch
boundaries, and peripheral splitting of relatively hyperbolic groups. We
also use the work of Behrstock-Druţu-Mosher [BDM09] and Groff [Gro13]
to prove that the Bowditch boundary is a quasi-isometry invariant among relatively hyperbolic groups with non-relatively hyperbolic peripheral subgroup
structures. We review right-angled Coxeter groups and discuss the work of
Caprace [Cap09, Cap15] and Behrstock-Hagen-Sisto [BHS17] on peripheral
structures of relatively hyperbolic right-angled Coxeter groups.
2.1. CAT(0) spaces, δ–hyperbolic spaces, and relatively hyperbolic
groups. We first discuss CAT(0) spaces, δ–hyperbolic spaces, Gromov
boundaries, and CAT(0) boundaries. We refer the reader to the book [BH99]
for more details.
Definition 2.1. We say that a geodesic triangle ∆ in a geodesic space X satisfies the CAT(0) inequality if d(x, y) ≤ d(x̄, ȳ) for all points x, y on the edges of ∆ and the corresponding points x̄, ȳ on the edges of the comparison triangle ∆̄ in Euclidean space E2.
Definition 2.2. A geodesic space X is said to be a CAT (0) space if every
triangle in X satisfies the CAT(0) inequality.
If X is a CAT(0) space, then the CAT (0) boundary of X, denoted ∂X, is
defined to be the set of all equivalence classes of geodesic rays in X, where
two rays c and c′ are equivalent if the Hausdorff distance between them is
finite.
We note that for any x ∈ X and ξ ∈ ∂X there is a unique geodesic ray αx,ξ : [0, ∞) → X with αx,ξ(0) = x and [αx,ξ] = ξ. The CAT(0) boundary has a natural topology with basis given by the sets U(x, ξ, R, ε) = { ξ′ ∈ ∂X | d(αx,ξ(R), αx,ξ′(R)) ≤ ε }, where x ∈ X, ξ ∈ ∂X, R > 0 and ε > 0.
Definition 2.3. A geodesic metric space (X, d) is δ–hyperbolic if every geodesic triangle with vertices in X is δ–thin in the sense that each side lies
in the δ–neighborhood of the union of the other sides. If X is a δ–hyperbolic space, then one can construct the Gromov boundary of X, denoted ∂X, in the
same way as for a CAT(0) space. That is, the Gromov boundary of X is
defined to be the set of all equivalence classes of geodesic rays in X, where
two rays c and c′ are equivalent if the Hausdorff distance between them is
finite. However, the topology on it is slightly different from the topology on
the boundary of a CAT(0) space (see for example [BH99, Section III.3] for
details).
We now review relatively hyperbolic groups and related concepts.
Definition 2.4 (Combinatorial horoball [GM08]). Let T be any graph with vertex set V. We define the combinatorial horoball based at T, denoted H (= H(T)), to be the following graph:
(1) H^(0) = V × ({0} ∪ N).
(2) H^(1) = { ((t, n), (t, n + 1)) } ∪ { ((t1, n), (t2, n)) | dT(t1, t2) ≤ 2^n }. We call edges of the first set vertical and edges of the second set horizontal.
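The definition above is entirely combinatorial, so the 1-skeleton of H(T) can be enumerated mechanically. The following Python sketch is for illustration only (the function name and the dist_T interface are ours, not from [GM08]); note that at depth 0 the threshold 2^0 = 1 makes the horizontal edges exactly the edges of T.

from itertools import combinations

def horoball_edges(vertices, dist_T, max_depth):
    # vertices: the vertex set V of the base graph T.
    # dist_T(u, v): the graph metric of T.
    # Vertices of H(T) are pairs (t, n), with n the depth.
    edges = []
    for n in range(max_depth + 1):
        # Vertical edges join consecutive depths.
        edges.extend(((t, n), (t, n + 1)) for t in vertices)
        # Horizontal edges at depth n join base vertices at T-distance <= 2**n.
        edges.extend(((u, n), (v, n))
                     for u, v in combinations(vertices, 2)
                     if dist_T(u, v) <= 2 ** n)
    return edges

Since the distance threshold doubles with each level, two depth-0 vertices can be joined by climbing to depth roughly log2 dT(t1, t2), the combinatorial analogue of the exponential contraction in a hyperbolic horoball.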
Remark 2.5. In [GM08], the combinatorial horoball is described as a 2-complex, but we will only require the 1-skeleton of the horoball in this paper.
Definition 2.6 ([GM08]). Let H be the horoball based at some graph T .
Let D : H → [0, ∞) be defined by extending the map on vertices (t, n) → n
linearly across edges. We call D the depth function for H and refer to
vertices v with D(v) = n as vertices of depth n or depth n vertices.
Because T × {0} is homeomorphic to T, we identify T with D^{-1}(0).
Definition 2.7 (Cusped space [GM08]). Let G be a finitely generated group
and P a finite collection of finitely generated subgroups of G. Let S be a
finite generating set of G such that S ∩ P generates P for each P ∈ P. For
each left coset gP of subgroup P ∈ P let H(gP ) be the horoball based at a
copy of the subgraph TgP with vertex set gP of the Cayley graph Γ(G, S).
The cusped space X(G, P, S) is the union of Γ(G, S) with H(gP ) for every
left coset of P ∈ P, identifying the subgraph TgP with the depth 0 subset
of H(gP ). We suppress mention of S and P when they are clear from the
context.
Definition 2.8 (Relatively hyperbolic group [GM08]). Let G be a finitely
generated group and P a finite collection of finitely generated proper subgroups of G. Let S be a finite generating set of G such that S ∩ P generates
P for each P ∈ P. If the cusped space X(G, P, S) is δ–hyperbolic then we
say that G is hyperbolic relative to P or that (G, P) is relatively hyperbolic. The collection P is called a peripheral structure, each group P ∈ P is a peripheral subgroup, and its left cosets are peripheral left cosets. The peripheral structure
P is minimal if for any other peripheral structure Q on G, each P ∈ P is
conjugate into some Q ∈ Q.
Remark 2.9. Replacing S with another finite generating set S′ may change the value of δ, but the cusped space remains δ′–hyperbolic for some δ′ (see [GM08]). Consequently, the concept of a relatively hyperbolic group does not depend on the choice of finite generating set.
We say that a finitely generated group is not relatively hyperbolic if it is
not relatively hyperbolic with respect to any collection of proper subgroups.
2.2. The Bowditch boundary. We now discuss the Bowditch boundary of
a relatively hyperbolic group and prove that it is a quasi-isometry invariant.
We also recall peripheral splitting of relatively hyperbolic groups.
Definition 2.10 (Bowditch boundary [Bow12]). Let (G, P) be a finitely
generated relatively hyperbolic group. Let S be a finite generating set of
G such that S ∩ P generates P for each P ∈ P. The Bowditch boundary,
denoted ∂(G, P), is the Gromov boundary of the associated cusped space,
X(G, P, S).
Remark 2.11. There is a natural topological action of G on the Bowditch
boundary ∂(G, P) that satisfies certain properties (see [Bow12]).
Bowditch has shown that the Bowditch boundary does not depend on the
choice of finite generating set (see [Bow12]). More precisely, if S and T are finite generating sets for G as in the above definition, then the Gromov boundaries of the cusped spaces X(G, P, S) and X(G, P, T ) are G–equivariantly
homeomorphic (see [Bow12]).
For each peripheral left coset gP the limit set of the associated horoball
H(gP) consists of a single point in ∂(G, P), called the parabolic point vgP. The stabilizer of the point vgP is the subgroup gP g−1. We call each infinite subgroup of gP g−1 a parabolic subgroup and the subgroup gP g−1 a maximal parabolic subgroup.
The homeomorphism type of the Bowditch boundary was already known
to be a quasi-isometry invariant of the group (see Groff [Gro13]) under modest hypotheses on the peripheral subgroups. However, we combine the work
of Groff [Gro13] and Behrstock-Druţu-Mosher [BDM09] to elaborate on the
homeomorphism between Bowditch boundaries induced by a quasi-isometry
between two relatively hyperbolic groups whose peripheral subgroups are
not relatively hyperbolic.
Theorem 2.12. Let (G1 , P1 ) and (G2 , P2 ) be finitely generated relatively
hyperbolic groups such that all peripheral subgroups of both (G1 , P1 ) and
(G2 , P2 ) are not relatively hyperbolic. If G1 and G2 are quasi-isometric,
then there is a homeomorphism f from ∂(G1 , P1 ) to ∂(G2 , P2 ) that maps
the set of parabolic points of ∂(G1 , P1 ) bijectively onto the set of parabolic
points of ∂(G2 , P2 ). Moreover, if a parabolic point v of ∂(G1 , P1 ) is labelled
by some peripheral left coset g1 P1 in G1 and the parabolic point f (v) of
∂(G2 , P2 ) is labelled by some peripheral left coset g2 P2 in G2 , then P1 and
P2 are quasi-isometric.
Proof. Fix generating sets S1 and S2 as in Definition 2.10 for G1 and G2
respectively, then there is a quasi-isometry q : Γ(G1 , S1 ) → Γ(G2 , S2 ). By
Theorem 4.1 in [BDM09], the map q takes a peripheral left coset g1 P1 of G1
to within a uniformly bounded distance of the corresponding peripheral left coset g2 P2 of G2. In particular, P1 and P2 are quasi-isometric. Using the proof of Theorem 6.3 in [Gro13], we can extend q to a quasi-isometry q̂ : X(G1 , P1 , S1 ) → X(G2 , P2 , S2 ) between cusped spaces such that q̂ restricts to a quasi-isometric embedding on each individual horoball of X(G1 , P1 , S1 )
and the image of the horoball lies in some neighborhood of a horoball of
X(G2 , P2 , S2 ). Therefore, there is a homeomorphism f induced by q̂ from
∂(G1 , P1 ) to ∂(G2 , P2 ) that maps the set of parabolic points of ∂(G1 , P1 )
bijectively onto the set of parabolic points of ∂(G2 , P2 ). Moreover, if a
parabolic point v of ∂(G1 , P1 ) is labelled by some peripheral left coset g1 P1
in G1 and the parabolic point f (v) of ∂(G2 , P2 ) is labelled by some peripheral
left coset g2 P2 in G2, then by the above observation P1 and P2 are quasi-isometric.
Definition 2.13 ([Bow01]). Let G be a group. By a splitting of G, over
a given class of subgroups, we mean a presentation of G as a finite graph
of groups, where each edge group belongs to this class. Such a splitting is
said to be relative to another class P of subgroups if each element of P is
conjugate into one of the vertex groups. A splitting is said to be trivial if
there exists a vertex group equal to G.
Assume G is hyperbolic relative to a collection P. A peripheral splitting of (G, P)
is a representation of G as a finite bipartite graph of groups, where P consists
precisely of the (conjugacy classes of) vertex groups of one color. Obviously,
any peripheral splitting of (G, P) is relative to P and over subgroups of
elements of P. Peripheral splittings of (G, P) are closely related to cut points
in the Bowditch boundary ∂(G, P) ([Bow01]).
Definition 2.14. Given a compact connected metric space X, a point x ∈ X
is a global cut point (or just simply cut point) if X − {x} is not connected.
If {a, b} ⊂ X contains no cut points and X − {a, b} is not connected, then
{a, b} is a cut pair. A point x ∈ X is a local cut point if X − {x} is not
connected, or X − {x} is connected and has more than one end.
2.3. CAT(0) spaces with isolated flats. In this section, we discuss the
work of Hruska-Kleiner [HK05] on CAT(0) spaces with isolated flats.
Definition 2.15. A k–flat in a CAT(0) space X is an isometrically embedded copy of Euclidean space Ek for some k ≥ 2. In particular, note that a
geodesic line is not considered to be a flat.
Definition 2.16. Let X be a CAT(0) space, G a group acting geometrically
on X, and F a G–invariant set of flats in X. We say that X has isolated
flats with respect to F if the following two conditions hold.
(1) There is a constant D such that every flat F ⊂ X lies in a D–
neighborhood of some F ′ ∈ F.
(2) For each positive r < ∞ there is a constant ρ = ρ(r) < ∞ so that for any two distinct flats F, F′ ∈ F we have diam(Nr(F) ∩ Nr(F′)) < ρ.
We say X has isolated flats if it has isolated flats with respect to some
G–invariant set of flats.
Theorem 2.17 ([HK05]). Suppose X has isolated flats with respect to F.
For each F ∈ F the stabilizer StabG (F ) is virtually abelian and acts cocompactly on F . The set of stabilizers of flats F ∈ F is precisely the set
of maximal virtually abelian subgroups of G of rank at least two. These
stabilizers lie in only finitely many conjugacy classes.
Theorem 2.18 ([HK05]). Let X have isolated flats with respect to F. Then
G is relatively hyperbolic with respect to the collection of all maximal virtually
abelian subgroups of rank at least two.
The previous theorem also has the following converse.
Theorem 2.19 ([HK05]). Let G be a group acting geometrically on a CAT(0)
space X. Suppose G is relatively hyperbolic with respect to a family of virtually abelian subgroups. Then X has isolated flats.
A group G that admits an action on a CAT(0) space with isolated flats has
a “well-defined” CAT(0) boundary, often denoted by ∂G, by the following
theorem.
Theorem 2.20 ([HK05]). Let G act properly, cocompactly, and isometrically on CAT(0) spaces X and Y . If X has isolated flats, then so does Y ,
and there is a G–equivariant homeomorphism ∂X → ∂Y .
2.4. Right-angled Coxeter groups and their relatively hyperbolic
structures. In this section, we review the concepts of right-angled Coxeter groups and Davis complexes. We also review the work of Caprace
[Cap09, Cap15] and Behrstock-Hagen-Sisto [BHS17] on peripheral structures of relatively hyperbolic right-angled Coxeter groups.
Definition 2.21. Given a finite simplicial graph Γ, the associated right-angled Coxeter group GΓ is generated by the set S of vertices of Γ and has relations s^2 = 1 for all s in S and st = ts whenever s and t are adjacent
vertices.
Let S1 be a subset of S. The subgroup of GΓ generated by S1 is a right-angled Coxeter group GΓ1 , where Γ1 is the induced subgraph of Γ with
vertex set S1 (i.e. Γ1 is the union of all edges of Γ with both endpoints in
S1 ). The subgroup GΓ1 is called a special subgroup of GΓ .
Definition 2.22. Given a finite simplicial graph Γ, the associated Davis
complex ΣΓ is a cube complex constructed as follows. For every k–clique,
T ⊂ Γ, the special subgroup GT is isomorphic to the direct product of k
copies of Z2 . Hence, the Cayley graph of GT is isomorphic to the 1–skeleton
of a k–cube. The Davis complex ΣΓ has 1–skeleton the Cayley graph of GΓ ,
where edges are given unit length. Additionally, for each k–clique, T ⊂ Γ,
and coset gGT , we glue a unit k–cube to gGT ⊂ ΣΓ . The Davis complex ΣΓ
is a CAT(0) space and the group GΓ acts properly and cocompactly on the
Davis complex ΣΓ (see [Dav08]).
Theorem 2.23 (Theorem A’ in [Cap09, Cap15]). Let Γ be a simplicial
graph and J be a collection of induced subgraphs of Γ. Then the right-angled
Coxeter group GΓ is hyperbolic relative to the collection P = { GJ | J ∈ J }
if and only if the following three conditions hold:
(1) If σ is an induced 4-cycle of Γ, then σ is an induced 4-cycle of some
J ∈ J.
(2) For all J1 , J2 in J with J1 ≠ J2 , the intersection J1 ∩ J2 is empty or
J1 ∩ J2 is a complete subgraph of Γ.
(3) If a vertex s commutes with two non-adjacent vertices of some J in
J, then s lies in J.
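All three conditions are finite checks on the defining graph, so a candidate collection J can be tested mechanically. The following Python sketch is an illustration only and is not from [Cap09, Cap15]; the encoding of Γ as a dictionary of neighbour sets and all function names are ours. It verifies only relative hyperbolicity of a given collection; certifying minimality would additionally require the thickness criterion of Theorem 2.27 below.

from itertools import combinations

def satisfies_caprace_conditions(adj, J):
    # adj: dict mapping each vertex of the graph to its set of neighbours.
    # J: list of vertex sets of induced subgraphs (the candidate collection).
    V = set(adj)

    def is_complete(S):
        return all(v in adj[u] for u, v in combinations(S, 2))

    def induced_4_cycles():
        for a, b, c, d in combinations(V, 4):
            # The three possible cyclic orders on four vertices.
            for w, x, y, z in [(a, b, c, d), (a, c, b, d), (a, b, d, c)]:
                if (x in adj[w] and y in adj[x] and z in adj[y]
                        and w in adj[z]
                        and y not in adj[w] and z not in adj[x]):
                    yield {w, x, y, z}

    # (1) Every induced 4-cycle lies in some member of J.
    cond1 = all(any(C <= set(Jg) for Jg in J) for C in induced_4_cycles())
    # (2) Distinct members of J meet in the empty set or a complete subgraph.
    cond2 = all(is_complete(set(J1) & set(J2))
                for J1, J2 in combinations(J, 2))
    # (3) A vertex adjacent to two non-adjacent vertices of some J lies in J.
    cond3 = all(s in Jg
                for Jg in map(set, J)
                for u, v in combinations(Jg, 2) if v not in adj[u]
                for s in adj[u] & adj[v])
    return cond1 and cond2 and cond3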
Theorem 2.24 (Theorem B in [Cap09, Cap15]). Let Γ be a simplicial graph.
If GΓ is relatively hyperbolic with respect to finitely generated subgroups
H1 , · · · , Hm , then each Hi is conjugate to a special subgroup of GΓ .
Theorem 2.25 (Theorem I in [BHS17]). Let T be the class consisting of the
finite simplicial graphs Λ such that GΛ is strongly algebraically thick. Then
for any finite simplicial graph Γ either: Γ ∈ T , or there exists a collection J
of induced subgraphs of Γ such that J ⊂ T and GΓ is hyperbolic relative to
the collection P = { GJ | J ∈ J } and this peripheral structure is minimal.
Remark 2.26. In Theorem 2.25 we use the notion of strong algebraic thickness which is introduced in [BD14] and is a sufficient condition for a group
to be non-hyperbolic relative to any collection of proper subgroups. We
refer the reader to [BD14] for more details. The following theorem from
[BHS17] characterizes all strongly algebraically thick right-angled Coxeter
groups and it will prove useful for studying peripheral subgroups of relatively
hyperbolic right-angled Coxeter groups.
Theorem 2.27 (Theorem II in [BHS17]). Let T be the class of finite simplicial graphs whose corresponding right-angled Coxeter groups are strongly
algebraically thick. Then T is the smallest class of graphs satisfying the
following conditions:
(1) The 4-cycle lies in T .
(2) Let Γ ∈ T and let Λ ⊂ Γ be an induced subgraph which is not a
complete graph. Then the graph obtained from Γ by coning off Λ is
in T .
(3) Let Γ1 , Γ2 ∈ T and suppose there exists a graph Γ, which is not a
complete graph, and which arises as a subgraph of each of the Γi .
Then the union Λ of Γ1 , Γ2 along Γ is in T , and so is any graph
obtained from Λ by adding any collection of edges joining vertices in
Γ1 − Γ to vertices of Γ2 − Γ.
Definition 2.28. The graphs in the collection T from Theorems 2.25 and
2.27 are called thick graphs. Let Γ be a simplicial, non-thick graph. The collection J of induced subgraphs of Γ from Theorem 2.25 is called a peripheral
structure of Γ and each graph in J is called a peripheral subgraph of Γ. A
peripheral structure J of a graph Γ is non-trivial if J is non-empty and does
not contain Γ. A graph Γ has a non-trivial peripheral structure if and only
if GΓ is a non-trivially relatively hyperbolic group but GΓ is not hyperbolic.
3. Bowditch boundaries of relatively hyperbolic right-angled
Coxeter groups
In this section, we give “visual” descriptions of cut points and non-parabolic cut pairs of Bowditch boundaries of relatively hyperbolic right-angled Coxeter groups. The Bowditch boundary is a quasi-isometry invariant, so these results can be applied, in certain cases, to differentiate two relatively hyperbolic RACGs up to quasi-isometry.
In [Tra13], the third author investigates the connection between the Bowditch
boundary of a relatively hyperbolic group (G, P) and the boundary of a
CAT(0) space X on which G acts geometrically. For relatively hyperbolic
right-angled Coxeter groups, the relevant result from [Tra13] can be stated
as follows:
Theorem 3.1 (Tran [Tra13]). Let Γ be a finite simplicial graph. Assume
that the right-angled Coxeter group GΓ is relatively hyperbolic with respect
to a collection P of its subgroups. Then the Bowditch boundary ∂(GΓ , P) is
obtained from the CAT(0) boundary ∂ΣΓ by identifying the limit set of each
peripheral left coset to a point. Moreover, this quotient map is GΓ–equivariant.
We now introduce some definitions concerning defining graphs of right-angled Coxeter groups that we will use to visualize cut points and non-parabolic cut pairs in the Bowditch boundary.
Definition 3.2. Let Γ1 and Γ2 be two graphs, the join of Γ1 and Γ2 , denoted
Γ1 ∗Γ2 , is the graph obtained by connecting every vertex of Γ1 to every vertex
of Γ2 by an edge. If Γ2 consists of distinct vertices u and v, then the join
Γ1 ∗ {u, v} is the suspension of Γ1 .
Definition 3.3. Let Γ be a simplicial graph. A pair of non-adjacent vertices
{a, b} in Γ is called a cut pair if {a, b} separates Γ. An induced subgraph Γ1
of Γ is a complete subgraph suspension if Γ1 is a suspension of a complete
subgraph σ of Γ. If σ is a single vertex, then Γ1 is a vertex suspension.
An induced subgraph Γ1 of Γ is separating if Γ1 separates Γ. In this way,
we can also consider a cut pair as a separating complete subgraph suspension
which is a suspension of the empty graph.
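Both kinds of separating subgraphs in Definition 3.3 reduce to a connectivity test after deleting a vertex set, which the following sketch (ours, for illustration only, in the same encoding as the earlier sketch) makes explicit; a cut pair {a, b} is then a non-adjacent pair with is_separating(adj, {a, b}) returning True.

def is_separating(adj, S):
    # adj: dict mapping each vertex to its set of neighbours; S: vertex set.
    remaining = set(adj) - set(S)
    if not remaining:
        return False
    # Traverse the subgraph induced on the remaining vertices.
    start = next(iter(remaining))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for w in adj[v] & remaining:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen != remaining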
We will need the following lemma in order to visualize cut points in the
Bowditch boundary of a relatively hyperbolic right-angled Coxeter group.
Lemma 3.4. Let Γ be a simplicial graph and J a collection of induced proper
subgraphs of Γ. Assume that the right-angled Coxeter group GΓ is one-ended, hyperbolic relative to the collection P = { GJ | J ∈ J }, and suppose
each subgroup in P is one-ended. Let J0 be an element in J such that some
induced subgraph of J0 separates the graph Γ. Then we can write Γ = Γ1 ∪Γ2
such that the following conditions hold:
(1) Γ1 , Γ2 are both proper induced subgraphs of Γ;
(2) Γ1 ∩ Γ2 is an induced subgraph of J0 .
(3) For each J in J, J lies completely inside either Γ1 or Γ2 .
Proof. Since each subgroup in P is one-ended, each graph J in J is connected and J has no separating complete subgraph. Let L be an induced
subgraph of J0 that separates the graph Γ. Let Γ′1 be L together with some
of the components of Γ − L, and let Γ′2 be L together with the remaining
components of Γ − L. Then, Γ′1 , Γ′2 are both proper induced subgraphs of
Γ, L = Γ′1 ∩ Γ′2 , and Γ = Γ′1 ∪ Γ′2 . Since J0 is a proper subgraph of Γ,
Γ′1 − J0 ≠ ∅ or Γ′2 − J0 ≠ ∅ (say Γ′1 − J0 ≠ ∅).
Let Γ1 = Γ′1 , Γ2 = Γ′2 ∪ J0 . Then, Γ1 is an induced proper subgraph of
Γ. We now prove that Γ2 is also an induced proper subgraph. Choose a
vertex w ∈ Γ1 − J0 = Γ′1 − J0 . Since Γ1 ∩ Γ2 = Γ′1 ∩ (Γ′2 ∪ J0 ) ⊂ J0 , the
vertex w does not belong to Γ2 . Therefore, Γ2 is a proper subgraph of Γ.
We now prove that Γ2 is induced. Let e be an arbitrary edge with endpoints
u and v in Γ2 . If e is an edge of Γ′2 , then e is an edge of Γ2 . Otherwise,
e is an edge of Γ′1 = Γ1 , because Γ = Γ′1 ∪ Γ′2 . In particular, u and v are
also the vertices of Γ1 . Again, Γ1 ∩ Γ2 is a subgraph of J0 . Then u and v
are vertices of J0 . Therefore, e is an edge of J0 , because J0 is an induced
subgraph. Thus, e is also an edge of Γ2 . Thus, Γ2 is an induced subgraph.
This implies that Γ1 ∩ Γ2 is an induced subgraph. We already checked that
Γ1 ∩ Γ2 is a subgraph of J0 , and by construction Γ = Γ1 ∪ Γ2 .
We now prove that for each J in J, J lies completely inside either Γ1 or
Γ2 . By the construction, J0 is a subgraph of Γ2 . Therefore, we only need to
check the case where J ≠ J0 . It suffices to show that J lies completely inside either Γ′1 or Γ′2 . By Theorem 2.23, for each J ≠ J0 in J the intersection J ∩ J0 is empty or it is a complete subgraph of Γ. Also the intersection J ∩ L is an induced subgraph of J ∩ J0 if J ∩ L ≠ ∅. Therefore, J ∩ L is empty or it
is a complete subgraph of Γ. Recall that each graph in J is connected and
has no separating complete subgraph. Therefore, J − L = J − (J ∩ L) is
connected for each J ≠ J0 in J. By the construction J − L lies completely
inside either Γ′1 or Γ′2 . Thus, J also lies completely inside either Γ′1 or Γ′2 .
Therefore, J lies completely inside either Γ1 or Γ2 .
We can now give the visual description of cut points in the Bowditch
boundary of relatively hyperbolic right-angled Coxeter groups.
Theorem 3.5. Let Γ be a simplicial graph and J a collection of induced
proper subgraphs of Γ. Assume that the right-angled Coxeter group GΓ
is one-ended, hyperbolic relative to the collection P = { GJ | J ∈ J }, and
suppose each subgroup in P is also one-ended. Then each parabolic point
vgGJ0 is a global cut point if and only if some induced subgraph of J0 separates
the graph Γ.
Proof. We first assume that some induced subgraph of J0 separates the
graph Γ. By Lemma 3.4, we can write Γ = Γ1 ∪ Γ2 such that the following
conditions hold:
(1) Γ1 , Γ2 are both proper induced subgraphs of Γ;
(2) K = Γ1 ∩ Γ2 is an induced subgraph of J0 .
(3) For each J in J, J lies completely inside either Γ1 or Γ2 .
This implies that GΓ = GΓ1 ∗GK GΓ2 , GΓ1 ≠ GΓ , and GΓ2 ≠ GΓ . Since
J lies completely inside either Γ1 or Γ2 for each J ∈ J, each peripheral
subgroup in P must be a subgroup of GΓ1 or GΓ2 . Therefore, GΓ splits
non-trivially relative to P over the parabolic subgroup GK ≤ GJ0 . By the
claim following Theorem 1.2 of [Bow01] the parabolic point vGJ0 labelled by
GJ0 is a global cut point of ∂(G, P). Also, the group GΓ acts topologically
on ∂(G, P) and gvGJ0 = vgGJ0 . Thus, each parabolic point vgGJ0 is also a
global cut point.
We now assume that some parabolic point vgGJ0 is a global cut point.
Again, the group GΓ acts topologically on ∂(G, P) and gvGJ0 = vgGJ0 .
Therefore, vGJ0 is also a global cut point. So, by Theorem 3.3 of [Haub]
GΓ splits non-trivially over a subgroup H of GJ0 . Theorem 1 of [MT09]
implies that there is some induced subgraph K of Γ which separates Γ such
that GK is contained in some conjugate of H. Therefore, GK is also contained in some conjugate of the peripheral subgroup GJ0 . Moreover, GK
and GJ0 are both special subgroups of GΓ . Thus, K is an induced subgraph
of J0 . (This is a standard fact, and we leave the details to the reader.)
The following corollary follows directly from Theorems 2.12 and 3.5. This
corollary can be used to distinguish between quasi-isometry classes of certain
relatively hyperbolic right-angled Coxeter groups.
Corollary 3.6. Let Γ1 and Γ2 be simplicial graphs such that GΓ1 and GΓ2
are both one-ended. Assume that each graph Γi has a peripheral structure
Ji that consists of proper subgraphs of Γi such that each subgroup in Pi =
{ GJ | J ∈ Ji } is one-ended and not relatively hyperbolic. If GΓ1 and GΓ2
are quasi-isometric, then for each graph K ∈ J1 there is a graph L ∈ J2
such that GK and GL are quasi-isometric and vice versa. Moreover, if K
has some induced subgraph that separates Γ1 , then L also has some induced
subgraph that separates Γ2 .
We now discuss a few examples related to cut points in Bowditch boundaries of relatively hyperbolic right-angled Coxeter groups. These examples
illustrate an application of Theorem 3.5 to the problem of quasi-isometry
classification of right-angled Coxeter groups.
Example 3.7. Let Γ1 , Γ2 , and Γ3 be the graphs in Figure 1. Observe that
all groups GΓi are one-ended. We will prove that groups GΓ1 , GΓ2 , and GΓ3
are not pairwise quasi-isometric by investigating their peripheral structures.
In Γ1, let K1^(1) and K1^(2) be the induced subgraphs generated by {a1, a2, a3, a4, a5} and {a6, a7, a8, a9, a10}, respectively. It is easy to see that Γ1 has only six induced 4-cycles which are not subgraphs of K1^(1) and K1^(2). Denote these cycles by L1^(i) (i = 1, 2, · · · , 6). Let J1 be the set of all graphs L1^(i) and K1^(j). By Theorems 2.23 and 2.25, J1 is a peripheral structure of Γ1. Moreover, no induced subgraph of a graph in J1 separates Γ1. Therefore by Theorem 3.5, GΓ1 is hyperbolic relative to the collection P1 = { GJ | J ∈ J1 } and the Bowditch boundary ∂(GΓ1, P1) has no global cut point.
Figure 1. The three groups GΓ1, GΓ2, and GΓ3 are pairwise not quasi-isometric because the Bowditch boundaries with respect to their minimal peripheral structures are pairwise not homeomorphic. (The figure itself, not reproduced here, shows the three defining graphs with vertex labels a1–a10, b1–b10, and c1–c10.)
Similarly, let K2^(1) and K2^(2) be the induced subgraphs of Γ2 generated by
{b1 , b2 , b3 , b4 , b5 } and {b6 , b7 , b8 , b9 , b10 }, respectively. It is easy to see that Γ2
has only four induced 4-cycles, denoted L2^(i) (i = 1, 2, . . . , 4), such that each
of them is not a subgraph of K2^(1) and K2^(2). Let J2 be the set of all graphs
L2^(i) and K2^(j). Then by Theorems 2.23 and 2.25, J2 is a peripheral structure
of Γ2 . Moreover, K2^(1) and K2^(2) are the only graphs in J2 which contain
induced subgraphs that separate Γ2 . Therefore, GΓ2 is hyperbolic relative
to the collection P2 = { GJ | J ∈ J2 }, the Bowditch boundary ∂(GΓ2 , P2 )
has global cut points, and each of them is labelled by some left coset of G_{K2^(1)}
or G_{K2^(2)} by Theorem 3.5.
Finally, let K3^(1) be the induced subgraph of Γ3 generated by {c6 , c7 , c8 , c9 , c10 }.
It is easy to see that Γ3 has only three induced 4-cycles, denoted L3^(i)
(i = 1, 2, 3), such that each of them is not a subgraph of K3^(1). Assume
that L3^(1) is the induced 4-cycle generated by {c1 , c2 , c5 , c4 }. Let J3 be the
set of all graphs L3^(i) and K3^(1). Again, by Theorems 2.23 and 2.25 we have
that J3 is a peripheral structure of Γ3 . Moreover, K3^(1) and L3^(1) are the only
graphs in J3 which contain induced subgraphs that separate Γ3 . Therefore, GΓ3 is hyperbolic relative to the collection P3 = { GJ | J ∈ J3 }, the
Bowditch boundary ∂(GΓ3 , P3 ) has global cut points, and each of them is
labelled by some left coset of G_{K3^(1)} or G_{L3^(1)} by Theorem 3.5.
Note that all the groups in Pi are one-ended. The Bowditch boundary
∂(GΓ1 , P1 ) has no global cut point, but the Bowditch boundaries ∂(GΓ2 , P2 )
and ∂(GΓ3 , P3 ) do. So, GΓ1 cannot be quasi-isometric to GΓ2 and GΓ3 .
Additionally, the Bowditch boundary ∂(GΓ3 , P3 ) has global cut points labelled by some left coset of G_{L3^(1)}. Meanwhile, no global cut point of the
Bowditch boundary ∂(GΓ2 , P2 ) is labelled by the left coset of a peripheral
subgroup that is quasi-isometric to G_{L3^(1)}. Therefore, GΓ2 and GΓ3 are not
quasi-isometric.
In the remainder of this section, we work on the description of non-parabolic cut pairs in Bowditch boundaries of relatively hyperbolic right-angled Coxeter groups in terms of their defining graphs.
Proposition 3.8. Let Γ be a simplicial graph and J be a collection of induced proper subgraphs of Γ. Assume that the right-angled Coxeter group
GΓ is one-ended, hyperbolic relative to the collection P = { GJ | J ∈ J },
and suppose each subgroup in P is also one-ended. If Γ has a separating
complete subgraph suspension whose non-adjacent vertices do not lie in the
same subgraph J ∈ J, then the CAT(0) boundary ∂ΣΓ has a cut pair and the
Bowditch boundary ∂(GΓ , P) has a non-parabolic cut pair.
Proof. Let K be a separating complete subgraph suspension of Γ whose nonadjacent vertices u and v do not both lie in the same subgraph J ∈ J. Let
T be the set of all vertices of Γ adjacent to both u and v. Then T is the
vertex set of a complete subgraph σ of Γ: otherwise, two non-adjacent vertices
of T together with u and v would span an induced 4-cycle, so the two vertices
u and v would both lie in the same 4-cycle. Thus, u and v would lie in the same subgraph
J ∈ J, a contradiction.
Let K̄ = σ ∗ {u, v}. We can easily verify the following properties of K̄.
(1) For each J ∈ J the intersection K̄ ∩ J is empty or a complete subgraph.
(2) No vertex outside K̄ is adjacent to the unique pair of non-adjacent vertices {u, v} of K̄.
(3) K is an induced subgraph of K̄.
Therefore, the collection J̄ = J ∪ {K̄} satisfies all the conditions of Theorem
2.23, which implies that GΓ is hyperbolic relative to the collection P̄ = { GJ |
J ∈ J̄ }.
Using an argument similar to that of Lemma 3.4, we can write Γ = Γ1 ∪ Γ2
such that the following conditions hold:
(1) Γ1 , Γ2 are both proper induced subgraphs of Γ;
(2) Γ1 ∩ Γ2 is an induced subgraph L of K̄;
(3) for each J in J̄, J lies completely inside either Γ1 or Γ2 .
Therefore, we can prove that the Bowditch boundary ∂(GΓ , P̄) has a global
cut point v_{GK̄} stabilized by the subgroup GK̄ by using an argument similar
to the one in Theorem 3.5.
By Theorem 3.1, the Bowditch boundary ∂(GΓ , P̄) is obtained from the
CAT(0) boundary ∂ΣΓ by identifying the limit set of each peripheral left
coset of a subgroup in P̄ to a point. Let f be this quotient map. Since GK̄ is
two-ended, its limit set consists of two points w1 and w2 in ∂ΣΓ . Therefore,
f(w1 ) = f(w2 ) = v_{GK̄} and f(∂ΣΓ − {w1 , w2 }) = ∂(GΓ , P̄) − {v_{GK̄}}. Since
∂(GΓ , P̄) − {v_{GK̄}} is not connected, the space ∂ΣΓ − {w1 , w2 } is also not
connected. This implies that {w1 , w2 } is a cut pair of the CAT(0) boundary
∂ΣΓ .
Again, by Theorem 3.1 the Bowditch boundary ∂(GΓ , P) is obtained from
the CAT(0) boundary ∂ΣΓ by identifying the limit set of each peripheral left
coset of a subgroup in P to a point. Let g be this quotient map. The two
points w1 and w2 do not lie in limit sets of peripheral left cosets of subgroups
in P. Therefore, g(w1 ) ≠ g(w2 ) and both are non-parabolic points in the
Bowditch boundary ∂(GΓ , P). Moreover, g(∂ΣΓ − {w1 , w2 }) = ∂(GΓ , P) −
{g(w1 ), g(w2 )}, and the limit set of each peripheral left coset of a subgroup in P
lies completely inside ∂ΣΓ − {w1 , w2 }.
We observe that for any two points s1 , s2 ∈ ∂ΣΓ − {w1 , w2 } satisfying
g(s1 ) = g(s2 ), the two points s1 and s2 both lie in some limit set C of a
peripheral left coset of a subgroup in P. Also, each subgroup in P is one-ended, so C is connected. Therefore, s1 and s2 lie in the same connected
component of ∂ΣΓ − {w1 , w2 }. This implies that if U and V are different
components of ∂ΣΓ − {w1 , w2 }, then g(U) ∩ g(V) = ∅. Therefore, ∂(GΓ , P) −
{g(w1 ), g(w2 )} is not connected. This implies that {g(w1 ), g(w2 )} is a non-parabolic cut pair of the Bowditch boundary ∂(GΓ , P).
Theorem 3.9. Let Γ be a simplicial graph and J be a collection of induced
proper subgraphs of Γ. Assume that the right-angled Coxeter group GΓ is
one-ended and hyperbolic relative to the collection P = { GJ | J ∈ J }, and
suppose each subgroup in P is one-ended. If the Bowditch boundary ∂(GΓ , P)
has a non-parabolic cut pair, then Γ has a separating complete subgraph
suspension. Moreover, if Γ has a separating complete subgraph suspension
whose non-adjacent vertices do not lie in the same subgraph J ∈ J, then the
Bowditch boundary ∂(GΓ , P) has a non-parabolic cut pair.
Proof. Since GΓ is one-ended, the Bowditch boundary ∂(GΓ , P) is connected.
If the Bowditch boundary ∂(GΓ , P) is a circle, then GΓ is virtually a surface
group, and the peripheral subgroups are the boundary subgroups of that
surface by Theorem 6B in [Tuk88]. This is a contradiction because each peripheral subgroup is one-ended. Therefore, the Bowditch boundary ∂(GΓ , P)
is not a circle. We now assume that the Bowditch boundary ∂(GΓ , P) has
a non-parabolic cut pair {u, v}. Then u is obviously a non-parabolic local
cut point. Therefore by Theorem 1.1 in [Haub], GΓ splits over a two-ended
subgroup H.
Since GΓ splits over a two-ended subgroup H, there is an induced subgraph K of Γ which separates Γ such that GK is contained in some conjugate
of H by Theorem 1 in [MT09]. Because the group GΓ is one-ended, the group
GK is two-ended. This implies that K is a complete subgraph suspension.
The remaining conclusion is obtained from Proposition 3.8.
[Figure 2. Graph Γ1 has no separating complete subgraph suspension, while graph Γ2 has a cut pair (u, v) such that u and v do not lie in the same 4-cycle.]
Example 3.10. Let Γ1 and Γ2 be the graphs in Figure 2. Then GΓ1 and
GΓ2 are both one-ended. Let J1 and J2 be the sets of all induced 4-cycles
of Γ1 and Γ2 , respectively. By Theorems 2.23 and 2.25, the collection Ji
is a peripheral structure of Γi for each i. Also, subgroups in each Pi =
{ GJ | J ∈ Ji } are virtually Z2 and thus one-ended. Moreover, for each i
no induced subgraph of a graph in Ji separates Γi . Thus by Theorem 3.5,
both Bowditch boundaries ∂(GΓ1 , P1 ) and ∂(GΓ2 , P2 ) have no global cut points. So
in this case, we cannot use cut points to differentiate GΓ1 and GΓ2 up to
quasi-isometry. However, the graph Γ1 has no separating complete subgraph
suspension, so by Theorem 3.9 the Bowditch boundary ∂(GΓ1 , P1 ) has no
non-parabolic cut pair. Meanwhile, Γ2 has a cut pair (u, v) such that u
and v do not lie in the same subgraph in J2 . Again, by Theorem 3.9 the
Bowditch boundary ∂(GΓ2 , P2 ) has a non-parabolic cut pair. By Theorem
2.12, GΓ1 and GΓ2 are not quasi-isometric.
4. Relatively hyperbolic right-angled Coxeter groups with
Sierpinski carpet or sphere Bowditch boundary
In this section, we give necessary and sufficient conditions for a relatively
hyperbolic right-angled Coxeter group whose defining graph has planar flag
complex to have the Sierpinski carpet or sphere as its Bowditch boundary.
We begin by recalling topological descriptions of the Sierpinski carpet and
Menger curve due to Whyburn [Why58] and Anderson [And58a, And58b],
respectively. Anderson's result will be used in Section 5 to study non-hyperbolic right-angled Coxeter groups with Menger curve CAT(0) boundary.
Theorem 4.1 ([Why58], [And58a, And58b]). A compact space Σ is homeomorphic to the Sierpinski carpet if and only if it is 1–dimensional, planar,
connected, locally connected, and has no local cut points.
Let Σ be the Sierpinski carpet; then the non-separating embedded circles
(called peripheral circles) are pairwise disjoint and form a countably infinite set.
A compact space Σ is homeomorphic to the Menger curve if and only if it
is 1–dimensional, connected, locally connected, has no local cut points, and
no non-empty open subset of Σ is planar.
We begin by introducing some terminology that we will use to characterize
a certain class of relatively hyperbolic right-angled Coxeter groups whose
boundaries are the Sierpinski carpet or 2-sphere. Some of these terms will
also be used in Section 5.
Definition 4.2. Let Γ be a finite simplicial graph. The flag complex of Γ
is the simplicial complex with 1-skeleton Γ in which every complete subgraph of Γ
is the 1-skeleton of some simplex. Obviously, if Γ is triangle free, the flag
complex of Γ is Γ itself.
Definition 4.3. A graph is inseparable if it is connected, has no separating complete subgraph, no cut pair, and no separating complete subgraph
suspension. Obviously, a triangle free graph is inseparable if and only if it
is connected, has no separating vertex, no separating edge, no cut pair, and
no separating vertex suspension.
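For triangle free graphs, the conditions of Definition 4.3 are finitely checkable. The following brute-force Python sketch is one possible way to test them; the helper names are ours, no attempt is made at efficiency, and the vertex-suspension test encodes a single vertex joined to a non-adjacent pair as described above.

```python
# Brute-force check of inseparability for a triangle free graph.
from itertools import combinations
import networkx as nx

def separates(G, vertices):
    H = G.copy()
    H.remove_nodes_from(vertices)
    return H.number_of_nodes() > 0 and not nx.is_connected(H)

def is_inseparable_triangle_free(G):
    if not nx.is_connected(G):
        return False
    # no separating vertex
    if any(separates(G, [v]) for v in G.nodes):
        return False
    for u, v in combinations(G.nodes, 2):
        # covers both a separating edge (u, v adjacent) and a cut pair
        # (u, v non-adjacent) in one sweep
        if separates(G, [u, v]):
            return False
    # no separating vertex suspension: a vertex s joined to a
    # non-adjacent pair {u, v} of its neighbors
    for s in G.nodes:
        nbrs = list(G.neighbors(s))
        for u, v in combinations(nbrs, 2):
            if not G.has_edge(u, v) and separates(G, [s, u, v]):
                return False
    return True
```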
Definition 4.4. A graph K is an n–cycle extension if K is an n–cycle or K
is a join graph between an n–cycle and a complete graph. Note that if K is
an n–cycle extension subgraph of a triangle free graph, then K is an n–cycle.
An induced n–cycle extension subgraph K is strongly non-separating if no
induced subgraph of K separates the ambient graph.
Theorem 4.5 (Theorem A.2 in [DT]). Let K be a finite simplicial graph.
Then the right-angled Coxeter group GK is a hyperbolic virtual surface group
if and only if K is an n–cycle extension (n ≥ 5).
The following lemma is an easy exercise and is left to the reader.
Lemma 4.6. Let K be a finite simplicial graph. Then the right-angled
Coxeter group GK is a virtually Z2 group if and only if K is a 4–cycle
extension.
We first prove the sufficient conditions of the characterization (see Theorem 1.4). Recall from Theorem 3.1 that the Bowditch boundary of a relatively
hyperbolic right-angled Coxeter group GΓ is obtained from the CAT(0)
boundary of the Davis complex ΣΓ by identifying the limit set of each peripheral left coset to a point. We therefore recall the following result due to
Świa̧tkowski [Ś] about CAT(0) boundaries of right-angled Coxeter groups.
We then investigate the limit set of each peripheral left coset in the CAT(0)
boundary and compute the quotient space ∂(GΓ , P).
Theorem 4.7 (Corollary 1.4 in [Ś]). Let Γ be a graph whose flag complex is
planar. Then the CAT(0) boundary ∂ΣΓ is the Sierpinski carpet if and only
if Γ is inseparable, distinct from a complete graph, and the flag complex of
Γ is distinct from a triangulation of S 2 .
Lemma 4.8. Let Γ be a graph whose flag complex is planar. Assume that
the graph Γ is inseparable and Γ has a non-trivial peripheral structure J that
consists of induced 4-cycle extension subgraphs. Let K be a strongly non-separating induced 4-cycle extension, or let K be a strongly non-separating induced
n–cycle extension (n ≥ 5) with the following properties:
(1) K does not contain any non-adjacent vertices of an induced 4-cycle
extension;
(2) no vertex outside K is adjacent to two non-adjacent vertices of K.
Then the limit set of the subgroup GK is a peripheral circle in the Sierpinski
carpet ∂ΣΓ .
Proof. Let J̄ = J ∪ {K} and P̄ = { GJ | J ∈ J̄ }. By Theorem 2.23, the group
GΓ is relatively hyperbolic with respect to the collection P̄. Assume, for
contradiction, that the limit set of the subgroup GK is a separating circle C of the Sierpinski
carpet ∂ΣΓ . By Theorem 3.1, the Bowditch boundary ∂(GΓ , P̄) is obtained
from the CAT(0) boundary ∂ΣΓ by identifying the limit set of each peripheral left coset of a subgroup in P̄ to a point. Let f be this quotient map. Let
v_{GK} be the point in ∂(GΓ , P̄) that is the image of C under the map f. Since no
induced subgraph of K separates Γ, the point v_{GK} is not a global cut point
of ∂(GΓ , P̄) by Theorem 3.5.
We observe that for any two points u, v ∈ ∂ΣΓ − C satisfying f (u) = f (v)
the two points u and v both lie in some circle C1 ⊂ ∂ΣΓ − C that is the
limit set of some peripheral subgroup. Therefore, u and v lie in the same
connected component of ∂ΣΓ − C. This implies that if U and V are different
components of ∂ΣΓ − C, then f(U) ∩ f(V) = ∅. Therefore, v_{GK} is a global
cut point, which is a contradiction. Thus, the limit set of the subgroup GK
is a peripheral circle in the Sierpinski carpet ∂ΣΓ .
Proposition 4.9. Let Γ be a graph whose flag complex is planar. Assume
that Γ is inseparable and Γ has a non-trivial peripheral structure J that
consists of strongly non-separating induced 4-cycle extension graphs. Then
the group GΓ is relatively hyperbolic with respect to the collection P = { GJ |
J ∈ J } and the Bowditch boundary ∂(GΓ , P) is S2 or the Sierpinski carpet.
Proof. By Theorem 4.7, the CAT(0) boundary ∂ΣΓ of the Davis complex ΣΓ
is the Sierpinski carpet. Also, by Lemma 4.8 the limit set of each peripheral
left coset is a peripheral circle of the Sierpinski carpet ∂ΣΓ . By Theorem
3.1, the Bowditch boundary ∂(GΓ , P) is obtained from the CAT(0) boundary
∂ΣΓ by identifying the limit set of each peripheral left coset to a point.
We now consider two cases. If all peripheral circles of the Sierpinski carpet ∂ΣΓ
are limit sets of peripheral left cosets, then the Bowditch boundary ∂(GΓ , P)
is S2 . Otherwise, it is clear that the Bowditch boundary ∂(GΓ , P) is still the
Sierpinski carpet.
We now work on the necessary conditions for the characterization (see
Theorem 1.4).
Theorem 4.10 (Theorem 0.3 and Corollary 0.4 in [Dah05]). If (G, P) is
a relatively hyperbolic group whose boundary is the Sierpinski carpet or S2 ,
then each subgroup in P is virtually a surface group. Moreover, if (G, P) is a
minimal relatively hyperbolic structure, then each subgroup in P is virtually
Z2 .
Proposition 4.11. Let Γ be a graph whose flag complex is planar. Assume
that Γ has a non-trivial peripheral structure J and let P = { GJ | J ∈ J }.
If the Bowditch boundary ∂(GΓ , P) is S2 or the Sierpinski carpet, then Γ is
inseparable and each graph in J is a strongly non-separating 4-cycle extension
graph.
Proof. If the graph Γ is not connected or Γ has a separating complete
subgraph, then GΓ is not one-ended. Therefore, the Bowditch boundary
∂(GΓ , P) is not connected, which is a contradiction. Thus, the graph Γ is
connected and has no separating complete subgraph. By Theorem 4.10,
each subgroup in P is virtually Z2 . Therefore by Lemma 4.6, each subgraph
in J must be a 4-cycle extension. Also, ∂(GΓ , P) has no global cut point.
Therefore by Theorem 3.5, each graph in J is strongly non-separating. Also, there is no cut pair
and no separating complete subgraph suspension of the ambient graph Γ
which is a subgraph of a graph in J. By way of contradiction, we assume
that the graph Γ has a cut pair or a separating complete subgraph suspension.
Then the non-adjacent vertices of the cut pair (or of the separating complete subgraph suspension) do not lie in the same J ∈ J. Therefore by
Proposition 3.8, the Bowditch boundary ∂(GΓ , P) has a non-parabolic cut
pair, a contradiction. Thus, Γ is inseparable.
In the following proposition, we further differentiate between the cases of
Sierpinski carpet and S2 Bowditch boundaries.
Proposition 4.12. Let Γ be a graph whose flag complex is planar. Assume
that Γ is an inseparable graph and Γ has a non-trivial peripheral structure J that
consists of strongly non-separating induced 4-cycle extension graphs. Let
P = { GJ | J ∈ J }. Then the Bowditch boundary ∂(GΓ , P) is the Sierpinski
carpet if and only if Γ contains a strongly non-separating induced n–cycle
extension K (n ≥ 5) satisfying the following properties:
(1) K does not contain any nonadjacent vertices of an induced 4-cycle
extension;
(2) No vertex outside K is adjacent to two non-adjacent vertices of K.
Proof. Let S be the vertex set of Γ and P = ∪_{P ∈P} P . We first assume that
Γ contains a strongly non-separating induced n–cycle extension K (n ≥ 5)
with all the properties as above. We will prove that the Bowditch boundary ∂(GΓ , P) is the Sierpinski carpet. By Theorem 4.7 and Lemma 4.8, the
CAT(0) boundary ∂ΣΓ is the Sierpinski carpet and the limit set of the subgroup GK is a peripheral circle in ∂ΣΓ . This implies that not all peripheral
circles of the Sierpinski carpet ∂ΣΓ are limit sets of peripheral left cosets.
Therefore, the Bowditch boundary ∂(GΓ , P) is still the Sierpinski carpet as
in the proof of Proposition 4.9.
We now assume that the Bowditch boundary ∂(GΓ , P) is the Sierpinski
carpet. As in the proof of Proposition 4.9, there is some peripheral circle
C in ∂ΣΓ which is not the limit set of a peripheral left coset. Therefore,
this circle C still survives after identifying the limit set of each peripheral
left coset in ∂ΣΓ to a point to obtain the Bowditch boundary ∂(GΓ , P).
Moreover, the quotient map is GΓ –equivariant (see Theorem 3.1), so the
stabilizers of C via the actions of GΓ on ∂ΣΓ and ∂(GΓ , P) are the same.
Call this group Stab(C).
By the proof of Proposition 3.7 in [Dah05], Stab(C) acts as a uniform convergence group on C, which is clearly its limit set. Moreover, g·Stab(C)·g^{−1} ∩
Stab(C) is finite for each g ∉ Stab(C). Therefore by Theorem 9.9 of [Hru10],
Stab(C) is generated by a finite set T and the inclusion Stab(C) ↪ GΓ induces a quasi-isometric embedding Γ(Stab(C), T) ↪ Γ(GΓ , S ∪ P). By Theorem 1.5 in [Osi06], GΓ is hyperbolic relative to the collection P ∪ {Stab(C)}.
Therefore by Theorem 2.24, Stab(C) is a conjugate of some special subgroup GK of GΓ . Thus, the group GΓ is relatively hyperbolic with respect
to the collection P̄ = P ∪ {GK }, and GK is the stabilizer of some translate
C1 of C.
Since the group GΓ is relatively hyperbolic with respect to the collection
P̄ = P ∪ {GK }, the subgraph K satisfies Conditions (1) and (2) of the proposition. Also, the limit set of GK is the peripheral circle C1 in ∂ΣΓ . Arguing
as in Proposition 4.9, we have that the Bowditch boundary ∂(GΓ , P̄) is either
S2 or the Sierpinski carpet. In particular, ∂(GΓ , P̄) has no global cut point.
Therefore, no induced subgraph of K separates Γ. Also, GK is virtually a
surface group by Theorem 4.10, and GK is not virtually Z2. Therefore by
Theorem 4.5, K is a strongly non-separating n–cycle extension (n ≥ 5).
The following theorem is obtained from Propositions 4.9, 4.11, and 4.12.
With the planarity condition imposed on the flag complex of the defining graph, this
theorem gives a complete characterization of relatively hyperbolic right-angled Coxeter groups whose Bowditch boundaries are the Sierpinski carpet
or S2.
Theorem 4.13. Let Γ be a graph whose flag complex is planar. Assume
that Γ has a non-trivial peripheral structure J and let P = { GJ | J ∈ J }.
The Bowditch boundary ∂(GΓ , P) is S2 or the Sierpinski carpet if and only
if Γ is inseparable and each graph in J is a strongly non-separating 4-cycle
extension graph.
[Figure 3. The three groups GΓ1 , GΓ2 , and GΓ3 all have homeomorphic CAT(0) boundaries (Sierpinski carpet), but they have different Bowditch boundaries with respect to their minimal peripheral structures.]
In addition, the Bowditch boundary ∂(GΓ , P) is the Sierpinski carpet if
and only if Γ contains a strongly non-separating induced n–cycle extension
K (n ≥ 5) satisfying the following properties:
(1) K does not contain any nonadjacent vertices of an induced 4-cycle
extension;
(2) No vertex outside K is adjacent to two non-adjacent vertices of K.
We now restrict Theorem 4.13 to the case of 2–dimensional right-angled
Coxeter groups GΓ (i.e. Γ is triangle free and has at least one edge) to obtain a slightly simpler characterization of relatively hyperbolic right-angled
Coxeter groups whose boundaries are the Sierpinski carpet or S2 .
Corollary 4.14. Let Γ be a triangle free, planar graph. Assume that Γ
has a non-trivial peripheral structure J and let P = { GJ | J ∈ J }. The
Bowditch boundary ∂(GΓ , P) is S2 or the Sierpinski carpet if and only if Γ
is inseparable and each graph in J is a strongly non-separating 4-cycle.
In addition, the Bowditch boundary ∂(GΓ , P) is the Sierpinski carpet if
and only if Γ contains an induced strongly non-separating n–cycle K (n ≥ 5)
satisfying the following properties:
(1) K does not contain any nonadjacent vertices of an induced 4-cycle;
(2) No vertex outside K is adjacent to two non-adjacent vertices of K.
We now discuss some examples of Bowditch boundaries of relatively hyperbolic right-angled Coxeter groups whose defining graphs are triangle free
and planar.
Example 4.15. Let Γ1 , Γ2 , and Γ3 be the graphs in Figure 3. Let J1 , J2 ,
and J3 be the sets of all induced 4-cycles of Γ1 , Γ2 , and Γ3 , respectively. By
Theorems 2.23 and 2.25, for each i the collection Ji is a peripheral structure
of Γi . Therefore, each group GΓi is hyperbolic relative to the collection
Pi = { GJ | J ∈ Ji }. Moreover, each Davis complex ΣΓi is a CAT(0) space
with isolated flats and its CAT(0) boundary is the Sierpinski carpet by
Theorem 4.7. However, their Bowditch boundaries are pairwise distinct.
In fact, the Bowditch boundary ∂(GΓ1 , P1 ) is S2 and the Bowditch boundary ∂(GΓ2 , P2 ) is the Sierpinski carpet by Theorem 4.13. In particular, this
implies that GΓ1 and GΓ2 are not quasi-isometric. Also, the graph Γ3 has
a separating induced 4-cycle. Then, the Bowditch boundary ∂(GΓ3 , P3 ) has
global cut points. Therefore, GΓ3 is not quasi-isometric to the other two
groups.
5. Non-hyperbolic right-angled Coxeter groups with Menger
curve boundary
In this section, we study the CAT(0) boundary of 2–dimensional right-angled Coxeter groups with isolated flats. In particular, we give conditions
which guarantee that the CAT(0) boundary of such a group will be the
Menger curve (see Proposition 5.3). We conclude the section with a pair of
examples of right-angled Coxeter groups with Menger curve CAT(0) boundary and show in Lemma 5.7 that these examples have non-homeomorphic
Bowditch boundaries.
Our work in this section is mainly based on the following results of the
first author in [Haua] and Mihalik-Tschantz [MT09].
Theorem 5.1 (Theorem 1.2 in [Haua]). Let Γ be a group acting geometrically on a CAT(0) space X with isolated flats. Assume ∂X is 1–dimensional.
If Γ does not split over a virtually cyclic subgroup then one of the following
holds:
(1) ∂X is a circle;
(2) ∂X is a Sierpinski carpet;
(3) ∂X is a Menger curve.
Theorem 5.2 ([MT09]). Let Γ be a triangle free, connected graph with no
separating vertex and no separating edge. The right-angled Coxeter group
GΓ splits over a two-ended subgroup H if and only if Γ has a cut pair {u, v}
or has a separating vertex suspension σ. Moreover, the special subgroup
generated by the cut pair {u, v} or by the separating vertex suspension σ is
contained in some conjugate of H.
We are now ready for the main proposition of this section.
Proposition 5.3. Let Γ be a triangle free, non-planar graph with a non-trivial peripheral structure J that consists of induced 4-cycles, and assume
that Γ is inseparable. Let ΣΓ be the Davis complex of the right-angled Coxeter
group defined by Γ. If the CAT(0) boundary ∂ΣΓ is not a Sierpinski carpet,
then ∂ΣΓ is a Menger curve.
Proof. We first observe that ΣΓ is a CAT(0) space with isolated flats and
the group GΓ is hyperbolic relative to the collection P = { GJ | J ∈ J }. The
Davis complex is one-ended and 2–dimensional, because Γ is triangle free,
connected with no separating vertex, and has no separating edge. Therefore,
∂ΣΓ is connected and 1–dimensional (see [GO07]).
[Figure 4. The two groups GΩ1 and GΩ2 have homeomorphic CAT(0) boundaries (Menger curve), but they have different Bowditch boundaries with respect to their standard peripheral structures.]
The graph Γ has no cut pair and no separating vertex suspension, so by Theorem 5.2 the group GΓ
does not split over a two-ended subgroup. Because Γ is inseparable, and
hence contains no separating complete subgraph, we also have that GΓ does not split over
a finite group. Therefore by Theorem 5.1, ∂ΣΓ must be a circle, a Sierpinski
carpet, or a Menger curve.
Since Γ is not a 4–cycle and Γ has a non-trivial peripheral structure J, the
CAT(0) boundary ∂ΣΓ contains infinitely many disjoint circles. Thus, ∂ΣΓ
is not a circle. By hypothesis ∂ΣΓ is not a Sierpinski carpet, so ∂ΣΓ must
be a Menger curve.
We now provide two examples illustrating Proposition 5.3. These examples were inspired by an example from [DHW]. Dani-Haulmark-Walsh take
three copies of a 3–manifold glued along a common boundary component to construct the first example of a non-hyperbolic group whose CAT(0) boundary
is the Menger curve. We apply this idea to right-angled Coxeter groups,
thereby constructing the first examples of non-hyperbolic right-angled Coxeter groups whose CAT(0) boundaries are the Menger curve.
Construction 5.4. Let K1 , K2 , and K3 be copies of the graph Γ1 in Figure
3. In each graph Ki we fix an induced 4-cycle Ci and glue all 4-cycles Ci to
a 4-cycle C to obtain the graph Ω1 (see Figure 4). Let J1 be the collection
of all induced 4-cycles of Ω1 . The graph Ω1 is triangle free and inseparable.
Using Theorems 2.23 and 2.25, it is not hard to check that J1 is a peripheral
structure of Ω1 . Next we show that the CAT(0) boundary of ΣΩ1 cannot be
the Sierpinski carpet by showing that ∂ΣΩ1 contains a non-planar subspace; a mechanical sketch of the gluing appears below.
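For concreteness, the gluing in Construction 5.4 can be carried out mechanically. The following Python sketch (with networkx) glues several relabeled copies of a base graph along a distinguished 4-cycle; the base edge list and the cycle are hypothetical placeholders rather than the actual Γ1 and its 4-cycle Ci.

```python
# Sketch of gluing three copies of a base graph along a common 4-cycle.
import networkx as nx

def glue_along_cycle(base_edges, cycle, copies=3):
    """Union of `copies` relabeled copies of the base graph, with the
    distinguished cycle identified across all copies."""
    omega = nx.Graph()
    for i in range(copies):
        # Vertices on the common cycle keep their names; all other vertices
        # get a copy-specific suffix so the copies stay disjoint away from C.
        relabel = lambda v, i=i: v if v in cycle else f"{v}_{i}"
        omega.add_edges_from((relabel(u), relabel(v)) for u, v in base_edges)
    return omega

if __name__ == "__main__":
    CYCLE = {"c1", "c2", "c3", "c4"}                       # placeholder 4-cycle
    BASE_EDGES = [("c1", "c2"), ("c2", "c3"), ("c3", "c4"),
                  ("c4", "c1"), ("c1", "x"), ("x", "c3")]  # placeholder graph
    Omega1 = glue_along_cycle(BASE_EDGES, CYCLE)
    print(Omega1.number_of_nodes(), Omega1.number_of_edges())
```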
Lemma 5.5. Let Ω1 be the graph constructed above. Then the CAT(0)
boundary ∂ΣΩ1 is the Menger curve.
Proof. For each graph Ki let ΣKi be the associated Davis complex in ΣΩ1
that contains the identity e. Then all spaces ΣKi share the Davis complex
ΣC . We remind the reader that C is the 4-cycle on which each 4-cycle Ci
of the graph Ki is glued, and point out that the limit set of ΣC is a circle.
Therefore, all limit sets ∂ΣKi meet in the limit set ∂ΣC . By Theorem 4.7
and Lemma 4.8, each limit set ∂ΣKi is a Sierpinski carpet and ∂ΣC is a
peripheral circle of ∂ΣKi . Therefore, the union of all limit sets ∂ΣKi is nonplanar (see [DHW]). Since ∂ΣΩ1 contains a non-planar subspace, it is not
the Sierpinski carpet. Thus by Proposition 5.3, ∂ΣΩ1 is a Menger curve.
We now construct a variation Ω2 of the graph Ω1 such that the CAT(0)
boundaries of the two right-angled Coxeter groups GΩ1 and GΩ2 are homeomorphic, but their Bowditch boundaries with respect to minimal peripheral
structures are not.
Construction 5.6. Let L1 , L2 , and L3 be copies of the graph Γ2 in Figure 3. In each graph Li we fix a 5-cycle Di and we glue all 5-cycles Di
to a 5-cycle D to obtain the graph Ω2 (see Figure 4). Let J2 be the collection of all induced 4-cycles of Ω2 . Again, it is not hard to check that
J2 is a peripheral structure of Ω2 and that the graph Ω2 satisfies all the
conditions in Proposition 5.3. We now prove that the CAT(0) boundaries
of the right-angled Coxeter groups GΩ1 and GΩ2 are the same (the Menger
curve), but their Bowditch boundaries ∂(GΩ1 , P1 ) and ∂(GΩ2 , P2 ) are not
homeomorphic, where Pi = { GJ | J ∈ Ji }.
Lemma 5.7. Let Ω1 and Ω2 be the graphs constructed above. Then, the
CAT(0) boundaries of the two right-angled Coxeter groups GΩ1 and GΩ2 are
the Menger curve, but their Bowditch boundaries ∂(GΩ1 , P1 ) and ∂(GΩ2 , P2 )
are not homeomorphic where each peripheral structure Pi of GΩi is constructed above. Moreover, the Bowditch boundary ∂(GΩ2 , P2 ) is not homeomorphic to S2 or the Sierpinski carpet.
Proof. For each graph Li let ΣLi be the associated Davis complex in ΣΩ2
that contains the identity e, and let ΣD be the associated Davis complex for
the common 5-cycle D. The proof that ∂ΣΩ2 is a Menger curve is similar to
the proof of Lemma 5.5. We note that by Lemma 4.8 the limit set ∂ΣD is a
peripheral circle in each Sierpinski carpet ∂ΣLi . However, the limit set ∂ΣD
is not the limit set of a peripheral left coset because D is not an induced
4–cycle of Ω2 .
We now claim that the Bowditch boundaries ∂(GΩ1 , P1 ) and ∂(GΩ2 , P2 )
are not homeomorphic, where Pi = { GJ | J ∈ Ji }. Indeed, the Bowditch
boundary ∂(GΩ1 , P1 ) has a global cut point since the induced 4-cycle C
separates the graph Ω1 . Meanwhile, there is no induced subgraph of an
induced 4–cycle in Ω2 that separates Ω2 . This implies the Bowditch boundary ∂(GΩ2 , P2 ) has no global cut point. Therefore, the Bowditch boundaries
∂(GΩ1 , P1 ) and ∂(GΩ2 , P2 ) are not homeomorphic.
We now prove that the Bowditch boundary ∂(GΩ2 , P2 ) is not homeomorphic to S2 or the Sierpinski carpet. By Theorem 3.1, the Bowditch boundary
∂(GΩ2 , P2 ) is obtained from the CAT(0) boundary ∂ΣΩ2 by identifying the
limit set of each peripheral left coset to a point; let f be the quotient map.
We will prove that f(∂ΣL1 ) ∪ f(∂ΣL2 ) ∪ f(∂ΣL3 ) is also a triple of
Sierpinski carpets glued along a peripheral circle.
We first observe that for each 4-cycle J ∈ J2 the intersection J ∩ L1 is
a single edge or the whole 4-cycle J. Therefore, the intersection between
the limit set ∂ΣL1 and the limit set ∂(gGJ ) of a peripheral left coset gGJ is
empty or the whole limit set ∂(gGJ ). Moreover, the second case occurs if and
only if J is a 4-cycle in L1 and g is an element in GL1 . The image f (∂ΣL1 )
is obtained from ∂ΣL1 by identifying each peripheral circle on ∂ΣL1 which
is the limit set of a peripheral left coset gGJ , where g ∈ GL1 and J ⊂ L1 , to
a point. Therefore, the image f (∂ΣL1 ) is still a Sierpinski carpet. Similarly,
the images f(∂ΣL2 ) and f(∂ΣL3 ) are also Sierpinski carpets.
Lastly, all limit sets ∂ΣLi share the limit set ∂ΣD as a common subspace.
The limit set ∂ΣD is a peripheral circle in each Sierpinski carpet ∂ΣLi , but it
is not the limit set of a peripheral left coset. Therefore, the images f (∂ΣL1 ),
f (∂ΣL2 ), and f (∂ΣL3 ) pairwise intersect in the circle ∂ΣD . This implies
that the union f(∂ΣL1 ) ∪ f(∂ΣL2 ) ∪ f(∂ΣL3 ) is a triple of Sierpinski
carpets glued along a peripheral circle. Since f(∂ΣL1 ) ∪ f(∂ΣL2 ) ∪ f(∂ΣL3 ) cannot
be topologically embedded into S2 or the Sierpinski carpet, the Bowditch
boundary ∂(GΩ2 , P2 ) is not homeomorphic to S2 or the Sierpinski carpet.
Corollary 5.8. The groups GΩ1 and GΩ2 are not quasi-isometric.
Remark 5.9. In Corollary 4.14, with the additional assumption that the
defining graph is planar, we characterized 2-dimensional relatively hyperbolic right-angled Coxeter groups GΓ whose Bowditch boundary ∂(GΓ , P) is
the Sierpinski carpet or S2. A natural question to ask is whether one can remove
"planar" as a hypothesis on the graph Γ in Corollary 4.14. However, we
have seen that for the graph Ω2 above the Bowditch boundary ∂(GΩ2 , P2 ) is
neither the Sierpinski carpet nor S2. Thus we cannot disregard the planarity
hypothesis in Corollary 4.14.
References
[And58a] R. D. Anderson. A characterization of the universal curve and a proof of its
homogeneity. Ann. of Math. (2), 67:313–324, 1958.
[And58b] R. D. Anderson. One-dimensional continuous curves and a homogeneity theorem. Ann. of Math. (2), 68:1–16, 1958.
[BD14] Jason Behrstock and Cornelia Druţu. Divergence, thick groups, and short conjugators. Illinois J. Math., 58(4):939–980, 2014.
[BDM09] Jason Behrstock, Cornelia Druţu, and Lee Mosher. Thick metric spaces, relative
hyperbolicity, and quasi-isometric rigidity. Math. Ann., 344(3):543–595, 2009.
[BH99] Martin R. Bridson and André Haefliger. Metric spaces of non-positive curvature,
volume 319 of Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.
[BHS17] Jason Behrstock, Mark F. Hagen, and Alessandro Sisto. Thickness, relative hyperbolicity, and randomness in Coxeter groups. Algebr. Geom. Topol.,
17(2):705–740, 2017. With an appendix written jointly with Pierre-Emmanuel
Caprace.
[Bow01] B. H. Bowditch. Peripheral splittings of groups. Trans. Amer. Math. Soc.,
353(10):4057–4082, 2001.
[Bow12] B. H. Bowditch. Relatively hyperbolic groups. Internat. J. Algebra Comput.,
22(3):1250016, 66, 2012.
[Cap09] Pierre-Emmanuel Caprace. Buildings with isolated subspaces and relatively hyperbolic Coxeter groups. Innov. Incidence Geom., 10:15–31, 2009.
[Cap15] Pierre-Emmanuel Caprace. Erratum to “Buildings with isolated subspaces and
relatively hyperbolic Coxeter groups” [ MR2665193]. Innov. Incidence Geom.,
14:77–79, 2015.
[Dah05] François Dahmani. Parabolic groups acting on one-dimensional compact spaces.
Internat. J. Algebra Comput., 15(5-6):893–906, 2005.
[Dav08] Michael W. Davis. The geometry and topology of Coxeter groups, volume 32 of
London Mathematical Society Monographs Series. Princeton University Press,
Princeton, NJ, 2008.
[DGP11] François Dahmani, Vincent Guirardel, and Piotr Przytycki. Random groups do
not split. Math. Ann., 349(3):657–673, 2011.
[DHW] Pallavi Dani, Matthew Haulmark, and Genevieve Walsh. Planarity and Menger
curve boundaries. In preparation.
[DST] Pallavi Dani, Emily Stark, and Anne Thomas. Commensurability for certain
right-angled Coxeter groups and geometric amalgams of free groups. Submitted.
arXiv:1610.06245.
[DT] Pallavi Dani and Anne Thomas. Bowditch’s JSJ tree and the quasi-isometry
classification of certain Coxeter groups. Preprint. arXiv:1402.6224.
[DT15] Pallavi Dani and Anne Thomas. Divergence in right-angled Coxeter groups.
Trans. Amer. Math. Soc., 367(5):3549–3577, 2015.
[GM08] Daniel Groves and Jason Fox Manning. Dehn filling in relatively hyperbolic
groups. Israel J. Math., 168:317–429, 2008.
[GO07] Ross Geoghegan and Pedro Ontaneda. Boundaries of cocompact proper CAT(0)
spaces. Topology, 46(2):129–137, 2007.
[Gro87] M. Gromov. Hyperbolic groups. In Essays in group theory, volume 8 of Math.
Sci. Res. Inst. Publ., pages 75–263. Springer, New York, 1987.
[Gro13] Bradley W. Groff. Quasi-isometries, boundaries and JSJ-decompositions of relatively hyperbolic groups. J. Topol. Anal., 5(4):451–475, 2013.
[Haua] Matthew Haulmark. Boundary classification and 2-ended splittings of groups
with isolated flats. Submitted. arXiv:1704.07937.
[Haub] Matthew Haulmark. Local cut points and splittings of relatively hyperbolic
groups. Submitted. arXiv:1708.02855.
[HK05] G. Christopher Hruska and Bruce Kleiner. Hadamard spaces with isolated flats.
Geom. Topol., 9:1501–1538, 2005. With an appendix by the authors and Mohamad Hindawi.
[Hru10] G.C. Hruska. Relative hyperbolicity and relative quasiconvexity for countable
groups. Algebr. Geom. Topol., 10(3):1807–1856, 2010.
[KK00] Michael Kapovich and Bruce Kleiner. Hyperbolic groups with low-dimensional
boundary. Ann. Sci. École Norm. Sup. (4), 33(5):647–669, 2000.
[Lev] Ivan Levcovitz. Divergence of CAT(0) cube complexes and Coxeter groups. Submitted. arXiv:1611.04378.
[MT09] Michael Mihalik and Steven Tschantz. Visual decompositions of Coxeter groups.
Groups Geom. Dyn., 3(1):173–198, 2009.
[Osi06] Denis V. Osin. Elementary subgroups of relatively hyperbolic groups and
bounded generation. Internat. J. Algebra Comput., 16(1):99–118, 2006.
[Ś] Jacek Świa̧tkowski. Right-angled Coxeter groups with n-dimensional Sierpiński
compacta as boundaries. Submitted. arXiv:1603.02152.
[Tra13] Hung Cong Tran. Relations between various boundaries of relatively hyperbolic
groups. Internat. J. Algebra Comput., 23(7):1551–1572, 2013.
[Tuk88] Pekka Tukia. Homeomorphic conjugates of Fuchsian groups. J. Reine Angew.
Math., 391:1–54, 1988.
[Why58] G. T. Whyburn. Topological characterization of the Sierpiński curve. Fund.
Math., 45:320–324, 1958.
Department of Mathematics, 1326 Stevenson Center, Vanderbilt University, Nashville, TN 37240 USA
E-mail address: [email protected]
Department of Mathematical Sciences, University of Wisconsin-Milwaukee,
P.O. Box 413, Milwaukee, WI 53201, USA
E-mail address: [email protected]
Department of Mathematics, The University of Georgia, 1023 D. W. Brooks
Drive, Athens, GA 30605, USA
E-mail address: [email protected]
Boosted KZ and LLL Algorithms
arXiv:1703.03303v4 [] 10 Oct 2017
Shanxiang Lyu and Cong Ling, Member, IEEE
Abstract—There exist two issues among popular lattice reduction (LR) algorithms that should cause our concern. The first
is that Korkine-Zolotarev (KZ) and Lenstra–Lenstra–Lovász (LLL)
algorithms may increase the lengths of basis vectors. The other is
that KZ reduction suffers much worse performance than Minkowski
reduction in terms of providing short basis vectors, despite its
superior theoretical upper bounds. To address these limitations,
we improve the size reduction steps in KZ and LLL to set up two
new efficient algorithms, referred to as boosted KZ and LLL,
for solving the shortest basis problem (SBP) with exponential
and polynomial complexity, respectively. Both of them offer
better actual performance than their classic counterparts, and
the performance bounds for KZ are also improved. We apply
them to designing integer-forcing (IF) linear receivers for multi-input multi-output (MIMO) communications. Our simulations
confirm their rate and complexity advantages.
Index Terms—lattice reduction, KZ, LLL, shortest basis problem, integer-forcing
I. INTRODUCTION
LATTICE reduction (LR) is a process that, given a lattice basis as input, ascertains another basis with
short and nearly orthogonal vectors [1]. Their applications
in signal processing include global positioning system (GPS)
[2], color space estimation in JPEG images [3], and data detection/precoding in wireless communications [4], [5]. Recent
advances in LR algorithms are mostly made in wireless communications and cryptography [6], [7]. Popular LR algorithms
with exponential complexity include Korkine-Zolotarev (KZ)
[8], [9] and Minkowski reductions [10], which have set the
benchmarks for the best possible performance in LR aided
successive interference cancellation (SIC) and zero-forcing
(ZF) detectors for multi-input multi-output (MIMO) systems
[10]. In MIMO detection problems, KZ and Minkowski reductions are preferable when the channel coefficients stay
fixed for a long time frame so that their high complexity
can be shared across time. Among polynomial or fixed
complexity algorithms, the celebrated Lenstra–Lenstra–Lovász
(LLL) [11] algorithm has been well studied and many new
variants have been proposed. Typical variants of LLL in
wireless communications can be summarized into two types:
either sacrificing the execution of full size reductions [12],
[13], or controlling the implementation order of swaps and size
reductions [14], [15], [16], [17]. The reason to establish the
first type variants is that a full size reduction has little influence
on the performance of LR aided SIC detectors. Variants of the
second type, e.g., fixed complexity LLL [14], [15] and greedy
LLL [16], [17], serve the purpose of enhancing the system
This work was supported in part by the Royal Society and in part by the
China Scholarship Council.
S. Lyu and C. Ling are with the Department of Electrical and Electronic
Engineering, Imperial College London, London SW7 2AZ, United Kingdom
(e-mail: [email protected], [email protected]).
performance especially when the number of LLL iterations
is restrained. It is also noteworthy to introduce the block KZ
(BKZ) reduction [18] as a tradeoff between KZ and LLL. BKZ
is scarcely probed in MIMO but more often in cryptography.
Many records in the shortest vector problem (SVP) challenge
hall of fame [19] are set by using BKZ although no good
upper bound on the complexity of BKZ is known.
In this work, we point out two issues among popular LR
algorithms which were rarely investigated before. The first one
is that KZ and LLL may elongate basis vectors. This issue
was discovered when we applied LLL to Gaussian random
matrices of dimensions higher than 40. The second one is that
KZ reduction practically suffers much worse performance than
Minkowski reduction in terms of providing short basis vectors,
while Nguyen and Stehle conjectured in [20, P. 46:7] that KZ
may be stronger than Minkowski in high dimensions because
theoretically all vectors of a KZ reduced basis are known
to be closer to the successive minima than Minkowski’s.
So engineers may be quite confused about the discrepancies
between theory and practice.
The contributions of this work are twofold. First, we propose
improved algorithms to address the above limitations of KZ
and LLL, and they are in essence suitable for any application
that needs to solve the shortest basis problem (SBP). Second,
we show that our algorithms can be applied to the design
of integer forcing (IF) linear MIMO receivers [21] to obtain
some gains in rates.
The first algorithm is referred to as boosted KZ. It harnesses
the strongest length reduction every time after the shortest
vector in a projected lattice basis has been found, and such
an operation is proved to be valid. We improve analysis on
the best known bounds for the lengths of basis vectors and
Gram-Schmidt vectors via boosted KZ. After choosing sphere
decoding as subroutines, the total complexity of boosted KZ
is shown to be close to that of conventional KZ.
The second algorithm, called boosted LLL, also discards the
conventional size reduction conditions in LR while deploying a
flexible effort to perform length reduction. In order to maintain
the Siegel condition [22], two criteria for doing length reductions before/after testing the necessity of swaps are proposed,
which guarantee that the basis potential decreases after swaps
and that the lengths of vectors shrink to the largest extent. With
our scheme, bounds on basis lengths and orthogonal defects
can also be obtained. An optimal principle of choosing Lovász
constants is proposed as well. The complexity of this algorithm
is O(L·n^{4+c} ln n) if the condition number of the input basis
is of O(ln n), where L is the number of routes in boosted LLL
and c > 1 is a constant.
IF is a new MIMO receiver architecture that attempts to
decode an integer combination of lattice codes [21]. It can
be thought of as a special case of compute and forward
2
[23] because this design has full cooperation among receive
antennas. We apply our algorithms to IF because it represents
the kind of applications that need to find multiple short lattice
vectors, as opposed to LR aided SIC receivers [24], lattice
Gaussian samplers [25], and those searching the shortest or
closest vectors [26], [27]. This receiver is more general than
LR aided minimum mean square error (MMSE) receiver in that
it allows concise evaluation on rates, owing to lattice coding
and dithering. In [21], the performance of IF receiver is shown
to outperform conventional ZF and MMSE receivers, and the
optimality in diversity-multiplexing gain tradeoff (DMT) is
also proved. We will elaborate on the IF architecture and
the SBP interface where boosted KZ and LLL turn out to
be beneficial. Simulations will verify the advantages of our
algorithms in terms of rates and complexity.
The rest of this paper is organized as follows. Backgrounds
about lattices and lattice reduction algorithms are reviewed
in Section II. After that, we provide a motivating example to
indicate the drawback of KZ and LLL. The boosted KZ and
LLL algorithms are subsequently constructed and analyzed
in Sections III and IV, respectively. After introducing the
IF framework, exemplary simulation results are then shown
in Section V to emphasize that the proposed algorithms can
deliver higher rates. We mention some open questions in Section
VI.
Notation: Matrices and column vectors are denoted by
uppercase and lowercase boldface letters. For a matrix D,
Di:j,i:j denotes the submatrix of D formed by rows and
columns i, i+1, . . . , j. When referring to the (i, j)th element
of D, we simply write di,j . In and 0n denote the n×n identity
matrix and n × 1 zero vector, and the operation (·)⊤ denotes
transposition. For an index set Γi = {1, . . . , i − 1}, DΓi
denotes the columns of D indexed by Γi . span(DΓi ) denotes
the vector space spanned by vectors in DΓi . π_{DΓi}(x) and
π⊥_{DΓi}(x) denote the projections of x onto span(DΓi ) and onto the
orthogonal complement of span(DΓi ), respectively. ⌊x⌉ denotes rounding
x to the nearest integer, |x| denotes getting the absolute value
of x, and ‖x‖ denotes the Euclidean norm of the vector x. The
set of all n × n matrices with determinant ±1 and integer
coefficients will be denoted by GLn (Z).
II. PRELIMINARIES
A. Lattices
A full rank n-dimensional lattice L is a discrete additive
subgroup in Rn . The lattice generated by a basis D =
[d1 , . . . , dn ] ∈ R^{n×n} can be written as
L(D) = { v | v = Σ_{i∈[n]} c_i d_i , c_i ∈ Z };
its dual lattice L(D̃) has the basis D̃ = D^{−⊤}. If the lattice basis
is clear from the context, we omit D and simply write L.
Definition 1. SBP is, given a lattice basis D0 of rank n, find
min_{D : L(D)=L(D0 )} l(D),
where l(D) = max_i ‖d_i‖, D ranges over all possible bases
of L(D0 ), and l(D) is referred to as basis length.
The Gram-Schmidt orthogonalization (GSO) vectors of a basis D can be found by: d*_1 = d_1 , d*_i = π⊥_{DΓi}(d_i ) = d_i − Σ_{j=1}^{i−1} μ_{i,j}·d*_j for i = 2, . . . , n, where μ_{i,j} = ⟨d_i , d*_j⟩/‖d*_j‖². In matrix notation, the GSO vectors can be written as D = [d*_1 , . . . , d*_n][μ_{i,j}]^⊤, where [μ_{i,j}] is a lower-triangular matrix with unit diagonal elements. In relation to the QR decomposition, let Λ be a diagonal matrix with diagonal entries ‖d*_1‖, . . . , ‖d*_n‖; then we have [d*_1 , . . . , d*_n]Λ^{−1} = Q and Λ[μ_{i,j}]^⊤ = R, whose diagonal elements reflect the lengths of the GSO vectors.
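The GSO recursion above is straightforward to implement. The following numpy sketch computes the d*_i and μ_{i,j} and checks the stated relation D = [d*_1, . . . , d*_n][μ_{i,j}]^⊤; it is meant only as an illustration of the definitions.

```python
# Minimal GSO sketch: mu[i, j] = <d_i, d*_j> / ||d*_j||^2.
import numpy as np

def gso(D):
    """Columns of D are the basis vectors; returns (Dstar, mu)."""
    n = D.shape[1]
    Dstar = np.zeros_like(D, dtype=float)
    mu = np.eye(n)
    for i in range(n):
        Dstar[:, i] = D[:, i]
        for j in range(i):
            mu[i, j] = D[:, i] @ Dstar[:, j] / (Dstar[:, j] @ Dstar[:, j])
            Dstar[:, i] -= mu[i, j] * Dstar[:, j]
    return Dstar, mu

if __name__ == "__main__":
    D = np.array([[1.0, 1.0], [0.0, 2.0]])
    Dstar, mu = gso(D)
    # Consistency with the text: D = Dstar @ mu.T, and |r_ii| = ||d*_i||.
    assert np.allclose(D, Dstar @ mu.T)
    print(np.linalg.norm(Dstar, axis=0))  # lengths of the GSO vectors
```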
The ith successive minimum of an n dimensional lattice
L(D) is the smallest real number r such that L contains i
linearly independent vectors of length at most r:
λ_i = inf { r | dim(span(L ∩ B(0, r))) ≥ i } ,
in which B(t, r) denotes a ball centered at t with radius r.
We also write λi as λi (D) to distinguish different lattices.
Hermite’s constant γn is defined by
γ_n = sup_{D∈R^{n×n}} λ1(D)² / |det(D)|^{2/n} .
Exact values for γn are known for n ≤ 8 and n = 24.
With Minkowski’s convex body theorem, we can obtain
γ_n ≤ (4/π)·Γ(1 + n/2)^{2/n}, which yields γ_n ≤ 2n/3 for n ≥ 2
[28]. It also follows from the work of Blichfeldt [29] that
γ_n ≤ (2/π)·Γ(2 + n/2)^{2/n}, whose asymptotic value is n/(πe).
The open Voronoi cell of lattice L with center v is the set
V_v(D) = { x ∈ R^n | ‖x − v‖ < ‖x − v − v′‖ , ∀ v′ ∈ L \ {0} } ,
in which the outer radius of the Voronoi cell centered at
the origin is denoted as “covering radius”, i.e., ρ(D) =
maxt∈span(L) dist(t, L).
The orthogonality defect (OD), ξ(D), can alternatively
quantify the goodness of a basis:
ξ(D) = ( Π_{i=1}^{n} ‖d_i‖ ) / √(det(D⊤ D)) .    (1)
It has a lower bound ξ(D) ≥ 1 in accordance with Hadamard’s
inequality.
B. Lattice reduction algorithms
In this subsection, we review three popular LR metrics where the lengths of basis vectors can be upper
bounded by scaled versions of the successive minima. Operations/transforms to reach these metrics are referred to as
the corresponding algorithms. Let R be the R matrix of a QR
decomposition on D, with elements ri,j ’s.
Definition 2. A basis D is called LLL reduced if the following
two conditions hold [11]:
1. |r_{i,j}/r_{i,i}| ≤ 1/2, 1 ≤ i ≤ n, j > i. (Size reduction conditions)
2. δ‖π⊥_{DΓi}(d_i)‖² ≤ ‖π⊥_{DΓi}(d_{i+1})‖², 1 ≤ i ≤ n − 1. (Lovász conditions)
In the definition, δ ∈ (1/4, 1] is called the Lovász constant.
If D is LLL reduced, it has [11]
‖d_i‖ ≤ β^{n−1}·λ_i(D), 1 ≤ i ≤ n,    (2)
in which β = 1/√(δ − 1/4) ∈ (2/√3, ∞).
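For reference, the two conditions of Definition 2 can be verified directly on the R factor, using the standard identities ‖π⊥_{DΓi}(d_i)‖ = |r_{i,i}| and ‖π⊥_{DΓi}(d_{i+1})‖² = r_{i,i+1}² + r_{i+1,i+1}². A minimal Python checker (a verifier only, not a reduction algorithm) might read:

```python
# Check the LLL conditions of Definition 2 on the R factor of a QR.
import numpy as np

def is_lll_reduced(D, delta=0.75):
    R = np.linalg.qr(D)[1]
    n = R.shape[1]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(R[i, j] / R[i, i]) > 0.5:        # size reduction conditions
                return False
    for i in range(n - 1):
        # Lovász condition in R form: delta r_ii^2 <= r_{i,i+1}^2 + r_{i+1,i+1}^2
        if delta * R[i, i] ** 2 > R[i, i + 1] ** 2 + R[i + 1, i + 1] ** 2:
            return False
    return True
```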
Definition 3. A basis D is called KZ reduced if it satisfies the size reduction conditions, and π⊥_{DΓi}(d_i) is the shortest vector of the projected lattice π⊥_{DΓi}([d_i , . . . , d_n]) for 1 ≤ i ≤ n [28]. (Projection conditions)
For a KZ reduced basis, it satisfies [28]
‖d_i‖ ≤ (√(i+3)/2)·λ_i(D), 1 ≤ i ≤ n.    (3)
Definition 4. A lattice basis D is called Minkowski reduced
if for any integers c1 , ..., cn such that ci , ..., cn are altogether
coprime, it has ‖d_1 c_1 + · · · + d_n c_n‖ ≥ ‖d_i‖ for 1 ≤ i ≤ n
[10].
For a Minkowski reduced basis, it satisfies [10]
‖d_i‖ ≤ max{ 1, (5/4)^{(i−4)/2} }·λ_i(D), 1 ≤ i ≤ n.    (4)
When n ≤ 4, Minkowski reduction is optimal as it reaches
all the successive minima. Its bounds on lengths are however
exponential for n > 4.
III. BOOSTED KZ
In this section, we propose to improve KZ by abandoning
its size reduction conditions, as well as employing the exact
closest vector problem (CVP) oracles to reduce di with
L(DΓi ) after the projection condition has been met at each
time i. Better theoretical results can be obtained via boosted
KZ, and the implication is that using CVP for LR can be better than
solely relying on SVP. This should not be a surprise because
CVP is generally believed to be harder than SVP [1].
A. Replacing size reduction with CVP
We first show that imposing size reduction conditions in KZ
and LLL may lengthen basis vectors, thus enlarging ODs.
Proposition 1. There always exist real-valued bases of rank
n, n ≥ 3, such that KZ and LLL algorithms lengthen basis
vectors. 1
Proof: We prove this by constructing examples in dimension n = 3 because bases of higher ranks can be built by
concatenating another identity matrix in the diagonal direction.
Consider the following matrix
      [ 1  c1  0  ]
R =   [ 0  1   c2 ] ,    (5)
      [ 0  0   1  ]
where |c1 | < 1/2, |c2 | > 1/2. Since |r1,2 /r1,1 | < 1/2 and
|r2,3 /r2,2 | > 1/2, it follows from the definition of KZ or LLL
that r1 and r2 will remain unchanged, while size reducing r3
by r2 yields a new vector r′3 = [−c1 ⌊c2 ⌉, c2 − ⌊c2 ⌉, 1]⊤ . If
⌊c2⌉ = ±1, then r′3 cannot be further reduced by r1 . So we
can assume ‖r′3‖² > ‖r3‖² and solve this inequality for
c2 , which yields |c2| < (1 + c1²)/2. Therefore, there exist at
least matrices like (5) with |c1| < 1/2 and 1/2 < |c2| <
(1 + c1²)/2 such that KZ/LLL lengthens basis vectors.
¹This proposition is inspired by [20, Lem. 2.2.3].
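The proof can be checked numerically. Taking, say, c1 = 0.4 and c2 = 0.55 (so that 1/2 < |c2| < (1 + c1²)/2 = 0.58), the size reduction of r3 by r2 indeed lengthens r3:

```python
# Numeric check of the counterexample in Proposition 1.
import numpy as np

c1, c2 = 0.4, 0.55
r3 = np.array([0.0, c2, 1.0])
r3_new = np.array([-c1 * round(c2), c2 - round(c2), 1.0])  # reduce r3 by r2
print(np.linalg.norm(r3) ** 2)      # 1.3025
print(np.linalg.norm(r3_new) ** 2)  # 1.3625 > 1.3025: the vector got longer
```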
To avoid such problems in KZ, we shall review the process
of the KZ reduction algorithm [10]. In the beginning, the
projection conditions are met by finding the shortest lattice
vectors of the projected lattices and carrying them to the
lattice basis. The size reduction conditions are subsequently
addressed by using Babai points v’s in L(DΓi ) to reduce di
by di ← di − v for all i. Concerning the above procedure,
what we try to ameliorate are the size reduction operations.
The “di ← di − v” step is redefined as length reduction, in
which the optimal update needs to solve a CVP.
Definition 5. CVP is a problem that, given a vector y ∈ R^n and a lattice basis D of rank n, finds a vector v ∈ L(D) such that ‖y − v‖² ≤ ‖y − w‖², ∀ w ∈ L(D).
An algorithm solving CVP, which quantizes any input to a
lattice point, is denoted as v = QL(D) (y). It is evident that
Q_{L(DΓi)}(π_{DΓi}(d_i)) = Q_{L(DΓi)}(d_i). To obtain explicit properties from the length reductions, we first establish Proposition 2 to show
‖d_i − Q_{L(DΓi)}(π_{DΓi}(d_i))‖ < ‖d_i‖
if Q_{L(DΓi)}(π_{DΓi}(d_i)) ≠ 0 for all i. The proof is given in Appendix A.
Proposition 2. If π_{DΓi}(d_i) lies outside the Voronoi region V_0(DΓi), i.e., v ≜ Q_{L(DΓi)}(π_{DΓi}(d_i)) ≠ 0, then we can replace d_i with d_i − v because ‖d_i − v‖ < ‖d_i‖.
Together with the case of Q_{L(DΓi)}(π_{DΓi}(d_i)) = 0, we conclude that
‖d_i − Q_{L(DΓi)}(π_{DΓi}(d_i))‖ ≤ ‖d_i‖    (6)
for all i, which means, during the length reductions, all
solutions provided by CVP can be treated as effective updates.
We call them effective because each updated d_i is the shortest vector
that can be extended to a basis for L([DΓi , d_i]), and the length
reductions never increase the lengths of the d_i ’s.
After executing these length reduction operations as di ←
di − QL(DΓi ) (πDΓi (di )), all πDΓi (di )’s must lie inside the
Voronoi regions V0 (DΓi )’s, so that
‖d_i‖² ≤ ‖π⊥_{DΓi}(d_i)‖² + ρ(DΓi)²    (7)
for all i, where ρ(DΓi ) is the covering radius of L(DΓi ).
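To make the length reduction step concrete, the following toy Python sketch replaces d_i by d_i − Q_{L(DΓi)}(d_i), solving each CVP by naive enumeration of integer coefficients in a small box; this is for illustration only (Algorithm 1 uses SE enumeration instead), and the box size is an assumption that happens to suffice for the toy input.

```python
# Illustrative brute-force length reduction d_i <- d_i - Q_{L(D_Gamma_i)}(d_i).
from itertools import product
import numpy as np

def brute_force_cvp(B, y, box=3):
    """Closest vector to y in the lattice spanned by the columns of B,
    searching coefficients in {-box, ..., box}^k (assumed large enough)."""
    best, best_dist = None, np.inf
    for c in product(range(-box, box + 1), repeat=B.shape[1]):
        v = B @ np.array(c, dtype=float)
        d = np.linalg.norm(y - v)
        if d < best_dist:
            best, best_dist = v, d
    return best

def length_reduce(D):
    D = D.astype(float).copy()
    for i in range(1, D.shape[1]):
        v = brute_force_cvp(D[:, :i], D[:, i])
        # eq. (6): the update never lengthens d_i (c = 0 is in the search box)
        assert np.linalg.norm(D[:, i] - v) <= np.linalg.norm(D[:, i])
        D[:, i] -= v
    return D

if __name__ == "__main__":
    D = np.array([[1.0, 0.4, 0.0], [0.0, 1.0, 0.55], [0.0, 0.0, 1.0]])
    print(length_reduce(D))
```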
B. Algorithm description
The concrete steps of boosted KZ are presented in Algorithm 1. This algorithm can be briefly explained as follows. In
line 4, the Schnorr and Euchner (SE) enumeration algorithm
[18] is applied to solve SVP over L(R_{i:n,i:n}), in that if R_{i:n,i} is the shortest vector of L(R_{i:n,i:n}), then π⊥_{DΓi}(d_i) is the shortest vector of the projected lattice π⊥_{DΓi}([d_i , . . . , d_n]).
Lines 5 to 7 are designed to plug new vectors found into the
lattice basis, and the basis expansion method in [10] can do
this efficiently. Other basis expansion methods include using
LLL reduction [30] or employing the Hermite normal form
of the coefficient matrix [1, Lem. 7.1], but both of them have
higher complexity than the one in [10]. Lines 8 to 10 restore
the upper triangular property of R, and these can be alternatively
implemented by performing another QR decomposition. Line
11 is the unique new design of boosted KZ, i.e., to reduce
R1:n,i by using its closest vector in L(R1:n,1:i−1 ).2
A direct application of the above proposition also shows that a boosted KZ reduced basis has length
l(D) ≤ (√(n+2)/2)·λ_n(D).    (10)
Algorithm 1: The boosted KZ algorithm.
Input: original lattice basis D ∈ R^{n×n}, Lovász constant δ.
Output: reduced basis D, unimodular matrix T
1  [Q, R] = qr(D);  ⊲ the QR decomposition of D
2  T = I;
3  for i = 1 : n do
4    find the shortest vector R_{i:n,i:n}c1 in L(R_{i:n,i:n}) by LLL aided SE enumeration;  ⊲ SVP subroutine
5    construct an (n − i + 1) × (n − i + 1) unimodular matrix U whose first column is c1 ;
6    R_{1:n,i:n} ← R_{1:n,i:n}U;
7    T_{1:n,i:n} ← T_{1:n,i:n}U;
8    define G as a unitary matrix that can restore the upper triangular property of R;
9    R ← GR;
10   Q ← QG^⊤;
11   find the closest vector R_{1:n,1:i−1}c2 in L(R_{1:n,1:i−1}) to R_{1:n,i} with SE enumeration;  ⊲ CVP subroutine
12   R_{1:n,i} ← R_{1:n,i} − R_{1:n,1:i−1}c2 ;
13   T_{1:n,i} ← T_{1:n,i} − T_{1:n,1:i−1}c2 ;
14   D ← QR.

Remark 1. Our results of (8), (9) and (10) are better than those of KZ and Minkowski reductions. If we assume all the successive minima are available, then there exists a polynomial-time transformation that generates a basis with l(D) ≤ max{1, √n/2}·λ_n(D) [1, Lem. 7.1].
Proposition 4. Suppose a basis D is boosted KZ reduced,
then this basis satisfies
λ1(D)² ≤ (8i/9)·(i − 1)^{ln(i−1)/2}·r_{i,i}² ,    (11)
‖d_i‖² ≤ ( 1 + (2i/9)·(i − 1)^{1+ln(i−1)/2} )·r_{i,i}²    (12)
for 1 ≤ i ≤ n.
C. Properties of boosted KZ
Based on Algorithm 1, a lattice basis D is called boosted KZ reduced if π⊥_{DΓi}(d_i) is the shortest vector of the projected lattice π⊥_{DΓi}([d_i , . . . , d_n]), and π_{DΓi}(d_i) ∈ V_0(DΓi) for all i.
In boosted KZ, all length reductions are the strongest, and they can help us to deliver better bounds for the lengths of basis vectors, as given in Proposition 3. The proof is given in Appendix B. We have ‖d_n‖ ≤ max{1, √n/2}·λ_n(D), outperforming the ‖d_n‖ ≤ (√(n+3)/2)·λ_n(D) bound in [28, Thm. 2.1], which was conjectured not to be tight in that work.

Proposition 3. Suppose a basis D is boosted KZ reduced; then this basis satisfies
‖d_i‖ ≤ min{ (√(i+3)/2)·λ_i(D), max{1, √i/2}·λ_i(DΓ_{i+1}) }    (8)
for 1 ≤ i < n, and
‖d_n‖ ≤ max{1, √n/2}·λ_n(D).    (9)
²Since we only modify the size reduction steps in KZ, one may employ
the same as those of KZ reduction, so readily we can claim that
2
2
λ1 (D)2 ≤ i1+ln(i) ri,i
and kdi k2 ≤ i2+ln(i) ri,i
as given in [28,
Prop. 4.2], where ri,i denotes the (i, i)th entry of the R matrix
of a QR decomposition on D. As another contribution, now
we show these two bounds can be improved in Proposition 4,
whose proof is given in Appendix C.
we only modify the size reduction steps in KZ, one may employ
any improved KZ implementation, e.g., [9], to make the boosted KZ faster.
We adhere to the current version for making a fair complexity comparison
with Minkowski’s reduction which employs similar subroutines [10].
The relaxed versions of (11) and (12) can be read as λ1(D)² ≤ i^{1+ln(i)/2}·r_{i,i}² and ‖d_i‖² ≤ i^{2+ln(i)/2}·r_{i,i}². Proposition 4 can be either applied to bound the complexity of boosted
KZ, or to achieve the best explicit bounds for the proximity
factors of lattice reduction aided decoding, i.e., updating Eqs.
(41) and (45) of [24]. Moreover, (12) leads to an alternative
bound for OD,
ξ(D) ≤
n
Y
i=1
i1+ln(i)/4 ≤ nn+ln(n!)/4 .
A better bound on ξ(D) follows from applying Minkowski's second theorem [31, P. 202] to (8) and (9),

ξ(D) ≤ (√n/2) ( ∏_{i=1}^{n−1} (√(i+3)/2) ) (2n/3)^{n/2}.  (13)
Remark 2. The properties |r_{i,j}/r_{i,i}| ≤ 1/2 for 1 ≤ i ≤ n, j > i, are no longer guaranteed in boosted KZ. Of independent interest, we have another attribute in Proposition 5: each pair (d1, di) of a boosted KZ reduced basis is Lagrange reduced [7, P. 41] for all i, which may not hold for the conventional KZ. The proof can be found in Appendix D.

Proposition 5. Suppose a basis D is boosted KZ reduced; then this basis satisfies |r_{1,i}/r_{1,1}| ≤ 1/2, and ‖d1‖, ‖di‖ attain the first and second successive minima of L([d1, di]) for 2 ≤ i ≤ n.
D. Implementation and complexity
The complexity of boosted KZ is dominated by its SVP
and CVP subroutines, in which the SE enumeration algorithm
[18] will be adopted for our implementations and complexity
analysis. The total complexity is assessed by counting the
number of floating-point operations (flops).
1) Complexity of CVP subroutines: It suffices to discuss the complexity of the most time-consuming nth round of reducing R_{1:n,n}, which represents an (n−1)-dimensional CVP problem. First of all, the complexity of SE is directly proportional to the number of nodes in the search tree. In the kth layer of the enumeration, the number of nodes is

N_k(s) = |{ x_{k:n−1} ∈ Z^{n−k} : ‖R_{k:n−1,n} − R_{k:n−1,k:n−1} x_{k:n−1}‖² ≤ s² }|,

where s refers to the radius of a specified sphere and R_{1:n−1,n} is the projection of R_{1:n,n} onto span(R_{1:n−1,1:n−1}). From [32], N_k(s) can be estimated by N_k(s) ≈ V_{n−k}(1) s^{n−k} / (|r_{k,k}| ⋯ |r_{n−1,n−1}|), where V_n(1) = π^{n/2}/Γ(1+n/2) ∼ (1/√(πn)) (2eπ/n)^{n/2} stands for the volume of an n-dimensional unit ball. Since lim_{n→∞} V_n(1) = 0, we can cancel this term in the asymptotic analysis. By summing the nodes from layer 1 to n−1, the total number of nodes in the (n−1)-dimensional CVP is

N_{CVP,n−1}(s) = Σ_{k=1}^{n−1} V_{n−k}(1) s^{n−k} / (|r_{k,k}| ⋯ |r_{n−1,n−1}|).

For the visit of each node, the operations of updating the residual, the outer radius, etc. cost around 2k+7 flops in layer k, so the complexity of the (n−1)-dimensional CVP problem, F_{CVP,n−1}(s), can be assessed as

F_{CVP,n−1}(s) = Σ_{k=1}^{n−1} N_k(s)(2k+7) ≤ Σ_{k=1}^{n−1} V_{n−k}(1) s^{n−k} ( ∏_{j=k}^{n−1} j^{1/2+ln(j)/4} ) (2k+7) / λ1(D)^{n−k},  (14)

in which Proposition 4 has been used to get the inequality.

Then we present a general strategy to choose s. It starts from s = |r_{1,1}|/2, which equals the packing radius because λ1(R_{1:n,1:k−1}) = |r_{1,1}|, and improves s to (1/2)√(Σ_{j=1}^{k} r²_{j,j}) with k = 2, …, n−1 gradually, until at least one node can be found inside the search sphere. For a random basis, one may expect s = |r_{1,1}|/2 to work well with high probability, though we can also use the worst-case criterion s = (1/2)√(Σ_{j=1}^{n−1} r²_{j,j}) (i.e., larger than the covering radius). In the worst case, let C1 = √(n−1) λn(D) / (2 λ1(D)). Since V_n(1)(2n+7) also vanishes for large n, (14) can be written as

F_{CVP,n−1}(C1 λ1(D)) ≤ n C1^n n^{n/2+ln(n!)/4}.  (15)

2) Complexity of SVP subroutines: Among the SVP subroutines of boosted KZ, the first round of finding λ1(D) is the most difficult one. By invoking the Siegel condition |r_{i−1,i−1}| ≤ β|r_{i,i}| due to LLL [22], we have 1/∏_{j=k}^{n}|r_{j,j}| ≤ β^{(n+k−2)(n−k+1)/2}/λ1(D)^{n−k+1}, so the number of flops spent by the first-round SVP subroutine can be similarly bounded as

F_{SVP,n}(s) ≤ Σ_{k=1}^{n} V_{n−k+1}(1) s^{n−k+1} β^{(n+k−2)(n−k+1)/2} (2k+7) / λ1(D)^{n−k+1}.  (16)

A practical principle for choosing s is to set s = ‖R_{1:n,1}‖, which is no smaller than λ1(D). It follows from the LLL-reduced basis property ‖R_{1:n,1}‖ ≤ β^{n−1} λ1(D) that (16) becomes

F_{SVP,n}(β^{n−1} λ1(D)) ≤ n β^{n(n−1)} β^{n(n−1)/2} = n β^{3n(n−1)/2}.  (17)

3) Total complexity in flops: The complexity of the other operations in Algorithm 1 can be counted as well. In the ith round, Lines 5 to 10, implemented by the method described in [10, Fig. 3], cost O(n(n−i)). The total complexity of boosted KZ is therefore upper bounded as

F_{boostKZ} ≤ (n−1) F_{CVP,n−1}(C1 λ1(D)) + F_{SVP,n}(β^{n−1} λ1(D)) + O(n² − n) + (4/3)n³ + (2n−1)n².  (18)

By plugging (15) and (17) into (18), we can explicitly obtain

F_{boostKZ} ≤ C1^n n^{n/2+ln(n!)/4+O(ln n)} + β^{3n(n−1)/2+O(ln n)}.
A few comments are in order regarding the above analysis. Firstly, it provides a worst-case analysis for strong lattice reductions like KZ and boosted KZ, which is broad enough to include many applications. A byproduct of our analysis is that we can replace the term F_{CVP,n−1}(C1 λ1(D)) in (18) with O(n³) to get the worst-case complexity of KZ, i.e., F_{KZ} ≤ β^{3n(n−1)/2+O(ln n)}. This complements the expected complexity analysis in [10, Sec. III.C], which hinges on Gaussian lattice bases. Secondly, we can observe from (18) how much harder boosted KZ becomes by using CVP. If λ1(D) is of the same order as λn(D), we can put C1 ≈ √(n−1)/2 into (18) to conclude that boosted KZ is not much more complicated than KZ. Actually, if the lattices are random (see [33, Sec. 2] for more details about random lattices), then the Gaussian heuristic implies λ1(D) ≈ ⋯ ≈ λn(D) [34]. In the application to IF, our claim that boosted KZ is not much harder than KZ will be supported by simulations in Section V.
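As an illustration of the node-counting argument behind (14), the sketch below evaluates the Gaussian-heuristic estimate N_k(s) ≈ V_{n−k}(1) s^{n−k}/(|r_{k,k}| ⋯ |r_{n−1,n−1}|) and sums it over the layers. The function names are ours, and the snippet is an assumption-laden illustration rather than the paper's code.

```python
import math

def unit_ball_volume(n):
    # V_n(1) = pi^(n/2) / Gamma(1 + n/2), the volume of the unit n-ball.
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

def cvp_node_estimate(r_diag, s):
    # Estimated total node count of the (n-1)-dimensional CVP tree:
    # sum over layers of V_{n-k}(1) s^(n-k) / (|r_kk| ... |r_{n-1,n-1}|).
    # r_diag holds |r_11|, ..., |r_{n-1,n-1}| (0-indexed).
    n = len(r_diag) + 1
    total, denom = 0.0, 1.0
    for k in range(n - 2, -1, -1):   # layer k+1 in the paper's indexing
        denom *= abs(r_diag[k])
        dim = n - 1 - k
        total += unit_ball_volume(dim) * s ** dim / denom
    return total

print(cvp_node_estimate([1.0] * 19, s=0.5))  # e.g. n = 20, s = |r_11|/2
```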
IV. BOOSTED LLL

In the same spirit of extending size reductions to length reductions, we revamp LLL towards better performance in this section. In a nutshell, the boosted LLL algorithm implements its length reduction via the parallel nearest plane (PNP) algorithm [35, Sec. 4] and rejection. PNP can be regarded as a compromise between Babai's nearest plane algorithm and a CVP oracle: if PNP has route number L = 1, it becomes equivalent to Babai's algorithm, while setting L infinitely large solves CVP. The complexity of boosted LLL is about L times that of LLL. In our algorithm, setting L = 1 means only imposing a rejection operation.
A. Replacing size reduction with PNP and rejection

First of all, the classic LLL algorithm consists of two sequential phases, i.e., size reductions by using Babai points, and swaps based on testing the Lovász conditions. To reduce d_i with DΓi, the sharpest reduction should utilize the closest vector of π_{DΓi}(d_i) in L(DΓi), as shown in Proposition 2. In order to devote flexible effort to these length reductions, we shall investigate the success probability of a Babai point being optimal. Generally, assuming π_{DΓi}(d_i) is uniformly distributed over L(DΓi), the probability of the Babai point being the closest vector is

∫_{x∈V0(DΓi)} I(x ∈ P(D*_{Γi})) dx / |V0(DΓi)| = |V0(DΓi) ∩ P(D*_{Γi})| / |V0(DΓi)|,  (19)

in which P(D*_{Γi}) = { Σ_{k=1}^{i−1} c_k d*_k | −1/2 ≤ c_k ≤ 1/2 } is the parallelepiped of the GSO vectors d*_1, …, d*_{i−1}, and I(·) denotes an indicator function. One evident observation from Eq. (19) is that, by updating d*_1 ← p_1 d*_1, …, d*_{i−1} ← p_{i−1} d*_{i−1}, the probability in Eq. (19) rises if we choose some constants p_1 > 1, …, p_{i−1} > 1. Another implication of Eq. (19) is that, if π_{DΓi}(d_i) belongs to both the exterior of P(D*_{Γi}) and the interior of V0(DΓi), then the Babai point should be rejected; otherwise it elongates d_i. An example with i = 3 is shown in Fig. 1.
Fig. 1. The rejected region (orange dots) of the possible projections π_{DΓ3}(d3) in L(DΓ3) with respect to V0(DΓ3) (red hexagon) and P(d*1, d*2) (black rectangle), where the size reduction of LLL elongates d3. The four blue triangles are the region whose Babai point is the origin and size reductions cannot alter a suboptimal d3.

With the above demonstrations, we propose to amplify the success probabilities of Babai points with minimal effort and to reject operations that elongate current basis vectors. Either lattice Gaussian sampling [25] or PNP suffices for the first objective, but we shall adhere to PNP because it is deterministic, and this feature will be employed by (27). We detail the length reductions in boosted LLL as follows.

Assume we are working on the R matrix of a QR decomposition and trying to reduce r_i by r_{i−1}, …, r_1. Let PNP be abstracted by a parameter L = ∏_{k=1}^{i−1} p_k indicating the total number of routes it consists of, where (p_{i−1}, …, p_1) ∈ (Z+)^{i−1}. Each route of PNP can then be marked by a label (q_{i−1}, …, q_1) ∈ {1, …, p_{i−1}} × ⋯ × {1, …, p_1}. At layer i−1 of each route, let c_{i−1,(q_{i−1},…,q_1)} be the q_{i−1}th closest integer to ⌊r_{i−1,i}/r_{i−1,i−1}⌉; with r_{i,(q_{i−1},…,q_1)} = r_i, we set r_{i,(q_{i−1},…,q_1)} ← r_{i,(q_{i−1},…,q_1)} − c_{i−1,(q_{i−1},…,q_1)} r_{i−1} and repeat this process down to layer 1, resulting in L pairs of coefficient vectors c_{(q_{i−1},…,q_1)} = [−c_{1,(q_{i−1},…,q_1)}, …, −c_{i−1,(q_{i−1},…,q_1)}, 1] and residuals r_{i,(q_{i−1},…,q_1)} = R_{Γ_{i+1}} c_{(q_{i−1},…,q_1)}. We also mark the old r_i by r_{i,(0,…,0)} = r_i and c_{(0,…,0)} = [0, …, 0, 1]. At this stage, we choose the shortest vector among all the L+1 candidates as the reduced version of r_i, i.e., r_i = r_{i,(z*_{i−1},…,z*_1)}, where

(z*_{i−1}, …, z*_1) = arg min_{(z_{i−1},…,z_1)} ‖r_{i,(z_{i−1},…,z_1)}‖.  (20)

If one also intends to export the unimodular transformation matrix T, it can be updated simultaneously inside PNP, which means t_{i,(z*_{i−1},…,z*_1)} = T_{Γ_{i+1}} c_{(z*_{i−1},…,z*_1)} and T_{1:n,i} = t_{i,(z*_{i−1},…,z*_1)}.

Since |r_{k,i}/r_{k,k}| < 1/2 for k < i is no longer guaranteed, the length reductions together with the Lovász condition may destroy the Siegel condition [22], r²_{i−1,i−1} ≤ (4/3 + ε) r²_{i,i} for 2 ≤ i ≤ n with some small ε > 0. For this reason, we relax the Lovász condition to the diagonal reduction (DR) condition [13].

Definition 6 (DR condition [13]). An upper triangular lattice basis R satisfies the DR condition with parameter δ (1/2 < δ < 1) if

δ r²_{i−1,i−1} ≤ r²_{i,i} + ( r_{i−1,i} − ⌊r_{i−1,i}/r_{i−1,i−1}⌉ r_{i−1,i−1} )²  (21)

for all 2 ≤ i ≤ n, where δ is still referred to as the Lovász constant.

If (21) holds, the Siegel condition must be true, so we let i ← i+1 and safely go to the next iteration. However, if (21) fails, one should also investigate whether a swap can fix such cases. Consider the sublattice L(R_{Γ_{i+1}}) generated by the first i vectors and define the potential of a basis R [11] as

Pot(R) = ∏_{i=1}^{n} det(L(R_{Γ_{i+1}}))² = ∏_{i=1}^{n} r_{i,i}^{2(n−i+1)}.  (22)

If the DR condition fails in π⊥_{R_{Γi}}(R_{1:n,i−1:i}) and we swap r_{i−1} and r_i, then the potential of the basis should decrease for the sake of bounding the number of iterations. After the swap, R_{i−1:i,i−1:i} becomes

[ r_{i−1,i}   r_{i−1,i−1} ]
[ r_{i,i}          0     ].  (23)

Let G be the 2 × 2 unitary matrix

G = [  r_{i−1,i}/√(r²_{i,i}+r²_{i−1,i})    r_{i,i}/√(r²_{i,i}+r²_{i−1,i})   ]
    [ −r_{i,i}/√(r²_{i,i}+r²_{i−1,i})    r_{i−1,i}/√(r²_{i,i}+r²_{i−1,i}) ];  (24)

clearly, G R_{i−1:i,1:n} can restore the upper triangular property, and (23) transforms to

[ √(r²_{i,i}+r²_{i−1,i})    r_{i−1,i} r_{i−1,i−1}/√(r²_{i,i}+r²_{i−1,i}) ]
[ 0                        −r_{i,i} r_{i−1,i−1}/√(r²_{i,i}+r²_{i−1,i})  ].  (25)
From (22) and (25), one can obtain the potential ratio between two consecutive bases R′ and R as

Pot(R′)/Pot(R) = [ ( √(r²_{i,i}+r²_{i−1,i}) )^{2(n−i+2)} ( r_{i,i} r_{i−1,i−1}/√(r²_{i,i}+r²_{i−1,i}) )^{2(n−i+1)} ] / [ r_{i−1,i−1}^{2(n−i+2)} r_{i,i}^{2(n−i+1)} ]
 ≤ δ (r²_{i,i}+r²_{i−1,i}) / ( r²_{i,i} + (r_{i−1,i} − ⌊r_{i−1,i}/r_{i−1,i−1}⌉ r_{i−1,i−1})² ),  (26)

where the last inequality comes from (21). Based on (26), Pot(R′)/Pot(R) ≤ δ if and only if ⌊r_{i−1,i}/r_{i−1,i−1}⌉ = 0. As a result, preparing the pairs t_{i,(z*_{i−1},…,z*_1)} and r_{i,(z*_{i−1},…,z*_1)} based on (20) is only suitable for reductions before checking the DR condition. In case this condition fails, we should also prepare t_{i,(z′_{i−1},…,z′_1)} and r_{i,(z′_{i−1},…,z′_1)} that make ⌊r_{i−1,i}/r_{i−1,i−1}⌉ = 0:

(z′_{i−1}, …, z′_1) = arg min_{(z_{i−1},…,z_1)} { ‖r_{i,(z_{i−1},…,z_1)}‖ : ⌊r_{i−1,i,(z_{i−1},…,z_1)}/r_{i−1,i−1}⌉ = 0 },  (27)

in which r_{i−1,i,(z_{i−1},…,z_1)} denotes the (i−1)th component of r_{i,(z_{i−1},…,z_1)}. In such a manner, if a vector is swapped to the front, it is not only a short vector but also one that decreases the basis potential, so this kind of swap cannot happen too many times.
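The following is a minimal numpy sketch of the PNP route expansion and the selection rule (20). The helper nearest_ints and the branch bookkeeping are our own illustrative choices, under the assumption that p[k] candidate integers are kept at layer k; this is a sketch, not the authors' implementation.

```python
import numpy as np

def nearest_ints(x, q):
    # The q integers closest to x, ordered by distance.
    base = int(np.floor(x))
    cands = sorted(range(base - q, base + q + 2), key=lambda z: abs(x - z))
    return cands[:q]

def pnp_reduce_column(R, i, p):
    # PNP for column i of an upper-triangular R (0-indexed): keep p[k]
    # branches at layer k for k = i-1, ..., 0, then pick the shortest
    # residual among all routes and the unreduced column, as in Eq. (20).
    branches = [(R[:i + 1, i].copy(), np.zeros(i, dtype=int))]
    for k in range(i - 1, -1, -1):
        nxt = []
        for r, c in branches:
            for cand in nearest_ints(r[k] / R[k, k], p[k]):
                r2, c2 = r.copy(), c.copy()
                r2[:k + 1] -= cand * R[:k + 1, k]
                c2[k] = cand
                nxt.append((r2, c2))
        branches = nxt
    # rejection: the old r_i competes against all PNP routes
    branches.append((R[:i + 1, i].copy(), np.zeros(i, dtype=int)))
    return min(branches, key=lambda rc: float(rc[0] @ rc[0]))

R = np.array([[1.0, 0.4, 0.0],
              [0.0, 1.0, 0.52],
              [0.0, 0.0, 1.0]])
best_r, best_c = pnp_reduce_column(R, 2, p=[1, 3])  # L = 3 routes
print(best_r)  # [0, 0.52, 1]: rejection keeps the already-short column
```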
Algorithm 2: The boosted LLL algorithm.
Input: original lattice basis D ∈ R^{n×n}, Lovász constant δ, list number L.
Output: reduced basis D, unimodular matrix T
1  [Q, R] = qr(D);   ⊲ The QR decomposition of D
2  i = 2, T = I;
3  while i ≤ n do
4      use (20) to get [r_{i,(z*_{i−1},…,z*_1)}, t_{i,(z*_{i−1},…,z*_1)}] and use (27) to get [r_{i,(z′_{i−1},…,z′_1)}, t_{i,(z′_{i−1},…,z′_1)}];
5      R_{1:n,i} = r_{i,(z*_{i−1},…,z*_1)}, T_{1:n,i} = t_{i,(z*_{i−1},…,z*_1)};
6      if condition (21) fails then
7          R_{1:n,i} = r_{i,(z′_{i−1},…,z′_1)}, T_{1:n,i} = t_{i,(z′_{i−1},…,z′_1)};
8          define G as in (24);
9          swap R_{1:n,i} and R_{1:n,i−1}, T_{1:n,i} and T_{1:n,i−1};
10         R_{i−1:i,1:n} ← G R_{i−1:i,1:n};
11         Q_{1:n,i−1:i} ← Q_{1:n,i−1:i} G⊤;
12         i ← max(i − 1, 2);
13     else
14         i ← i + 1;
15 D ← QR.

B. Algorithm description

Combining the length reduction process above, the procedure of boosted LLL is given in Algorithm 2. Inside the loops, it employs a fixed-structure column traverse strategy rather than parallel traversing [12], [13], so that a theoretical O(n) factor in bounding the number of loops can be saved. In Line 4, the PNP algorithm and the rejection prepare two versions of reduced vectors. The stronger version r_{i,(z*_{i−1},…,z*_1)} is used before testing the DR condition (Line 6), so that the new r_i is the shortest candidate among the L routes of PNP and the old r_i. If it cannot pass this test, the weaker version r_{i,(z′_{i−1},…,z′_1)} is used in Line 7, which takes the same value as the Babai point in the first layer and offers p_{i−2} × ⋯ × p_1 routes in the remaining layers. Lastly, Line 10 restores the upper triangular structure of R via a lightweight 2×2 Givens rotation matrix, and Line 11 updates the unitary matrix accordingly. The toy example below may help in understanding the algorithm.

Example 1. Suppose we are reducing a matrix

R = [ 1   0.4   0    ]
    [ 0   1     0.52 ]
    [ 0   0     1    ]

in round i = 3 and executing from Line 4 of Algorithm 2. For the PNP algorithm, we set the L = 3 routes as p_2 × p_1 = 3 × 1. The three nearest integers to r_{2,3}/r_{2,2} are 1, 0, 2, so the corresponding PNP routes are

r_{3,(1,1)} = [−0.4, −0.48, 1]⊤,  r_{3,(2,1)} = [0, 0.52, 1]⊤,  r_{3,(3,1)} = [−0.8, −1.48, 1]⊤,

and the rejection operation marking r_3 is r_{3,(0,0)} = [0, 0.52, 1]⊤. Eq. (20) would choose the shortest among the above four routes; let it be r_{3,(2,1)} (or r_{3,(0,0)}), which is employed by Line 5. Eq. (27) can only choose from r_{3,(1,1)}. Then we test the DR condition and it succeeds, so the while loop stops.
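A direct transcription of the DR test (21), checked here on the matrix of Example 1 (a sketch; the 0-indexed helper and its name are ours):

```python
import numpy as np

def dr_condition(R, i, delta):
    # Diagonal reduction test (21) for columns i-1 and i of an
    # upper-triangular R (0-indexed); delta is the Lovász constant.
    mu = R[i - 1, i] / R[i - 1, i - 1]
    resid = R[i - 1, i] - round(mu) * R[i - 1, i - 1]
    return delta * R[i - 1, i - 1] ** 2 <= R[i, i] ** 2 + resid ** 2

R = np.array([[1.0, 0.4, 0.0],
              [0.0, 1.0, 0.52],
              [0.0, 0.0, 1.0]])
print(dr_condition(R, 2, delta=0.75))  # True: no swap, as in Example 1
```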
In essence, this algorithm attempts to minimize the basis length while keeping the Siegel condition, and the PNP algorithm offers flexible effort to do so. If we replace Lines 4 and 5 by using the Babai point and delete Line 7, Algorithm 2 degrades to the classic LLL algorithm [11].
C. Properties of boosted LLL

When the boosted LLL algorithm terminates, δ r²_{i−1,i−1} − ( r_{i−1,i}/r_{i−1,i−1} − ⌊r_{i−1,i}/r_{i−1,i−1}⌉ )² r²_{i−1,i−1} ≤ r²_{i,i} holds for all 2 ≤ i ≤ n, which ensures that the Siegel property holds:

|r_{i−1,i−1}| ≤ β|r_{i,i}|.  (28)

Assume the PNP algorithm has parameters (p_{k−1}, …, p_1) for 2 ≤ k ≤ n; then π_{RΓi}(r_i) is contained in V0(RΓi) ∪ P(q_1 r*_1, …, q_{i−1} r*_{i−1}), where r*_1, …, r*_{i−1} are the GSO vectors of RΓi. Though this region can be much larger than P(r*_1, …, r*_{i−1}), we have

‖r_i‖² ≤ r²_{i,i} + (1/4) Σ_{j<i} r²_{j,j}  (29)

if π_{RΓi}(r_i) ∉ P(r*_1, …, r*_{i−1}). If π_{RΓi}(r_i) ∈ P(r*_1, …, r*_{i−1}), we can always find the Babai point r′_i such that ‖r_i‖² ≤ ‖r′_i‖² ≤ r²_{i,i} + (1/4) Σ_{j<i} r²_{j,j} due to (20) and (27), so condition (29) always holds in boosted LLL. With (28) and (29), the classical properties of LLL can be proved in the standard way:

l(D) ≤ β^{n−1} λn(D),  (30)

ξ(D) ≤ β^{n(n−1)/2}.  (31)

Since we have devoted much effort to implementing the length reductions, (30) and (31) are the least bounds that we should expect from boosted LLL. However, moving any step forward seems difficult, because even using CVP as the length reduction still fails to generate a better explicit bound than (29). The difficulty of improving bounds on lengths exists in all variants of LLL, including LLL with deep insertions (LLL-deep) [18]. In this regard, boosted LLL only serves as an ameliorated practical algorithm.
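Since (31) bounds the orthogonality defect of a boosted-LLL-reduced basis, a small helper for measuring ξ(D) empirically (a sketch with our own function name):

```python
import numpy as np

def orthogonality_defect(D):
    # OD: prod_i ||d_i|| / |det(D)|; equals 1 iff the basis is orthogonal.
    return float(np.prod(np.linalg.norm(D, axis=0)) / abs(np.linalg.det(D)))

D = np.random.randn(8, 8)
print(orthogonality_defect(D))  # >= 1 for any basis
```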
D. Implementation and Complexity

The total number of loops K in Algorithm 2 equals the number of tests of condition (21), whose numbers of positive and negative tests are denoted by K+ and K−, respectively. The total number of negative tests satisfies

K− ≤ log_{1/δ}( Pot(D0)/Pot(DK) ) = (1/ln(1/δ)) ln( Pot(D0)/Pot(DK) ),  (32)

where D0 and DK are the initial basis and the basis after K loops, respectively. In the fixed traversing strategy, we also have K+ ≤ K− + n − 1. We first show how to choose δ such that the boosted LLL algorithm has the best performance while 1/ln(1/δ) remains a polynomial number. After that, Pot(D0)/Pot(DK) is evaluated to complete our complexity analysis.

1) Optimal δ: In the literature, δ is often chosen arbitrarily close to 1 while explanations are lacking. In Micciancio's book [1, Lem. 2.9], it is shown that if δ = 1/4 + (3/4)^{n/(n−1)}, then 1/ln(1/δ) ≤ n^c for all c > 1. More generally, we can define an optimal principle for choosing δ, i.e.,

δ(a*, n) = 1/a* + ((a* − 1)/a*)^{n/(n−1)},

where a* = 1/(1 − e^{−1}). With such settings, three distinctive properties hold: 1/ln(1/δ) ≤ n^c for all n; δ is asymptotically close to 1 so that the algorithm has the best performance; and it is the smallest value satisfying the previous two attributes (the fastest one among the class with the best performance). Proposition 6 justifies these claims, and the proof is given in Appendix E.

Proposition 6. For arbitrary constants a > 1, c > 1, if δ(a, n) = 1/a + ((a−1)/a)^{n/(n−1)}, then for all n,

1/ln(1/δ) ≤ n^c.  (33)

Let a* = lim_{n→∞} arg min_a δ(a, n) be defined as the universal good constant; then

a* = 1/(1 − e^{−1}).  (34)

2) Total complexity in flops: Further define ψ(D) = min_i r²_{i,i} and Ψ(D) = max_i r²_{i,i}. Since our length reduction does not change r_{i,i}, while (25) shows that any swap can narrow the gap between r_{i−1,i−1} and r_{i,i}, the number of negative tests between the initial basis D0 and the final basis DK is

K− ≤ n^c ln( Pot(D0)/Pot(DK) ) ≤ ( n^{c+1}(n+1)/2 ) ln( Ψ(D0)/ψ(D0) ).  (35)

With reference to [36], we have Ψ(D0)/ψ(D0) ≤ κ(D0), where κ(D0) is the condition number of D0. So if the condition number of the input basis satisfies ln κ(D0) = O(ln n), then the number of iterations in boosted LLL is K ≤ 2K− + n − 1 = O(n^{c+2} ln n), where c > 1 is a constant arbitrarily close to 1. By further counting the number of flops inside and outside the loop of Algorithm 2, the total complexity of boosted LLL is O(L n^{4+c} ln n).

Remark 3. The complexity analysis above is quite general. For instance, if D0 is Gaussian, then it follows from [36] that E(ln κ(D0)) ≤ ln n + 2.24. In the application to IF [21], we can also take a detour to employ this property of Gaussian matrices. Firstly, the condition number of the input basis D0 would increase if the signal-to-noise ratio (SNR) rises, so it suffices to investigate the case of infinite SNR. The target then becomes the dual of a Gaussian random matrix, which has the same condition number, so E(ln κ(D0)) ≤ ln n + 2.24 also holds in IF.

V. APPLICATION TO INTEGER FORCING

In the context of optimizing the achievable rates of IF, some results based on LR have been presented in [37], where the difference between KZ and Minkowski is not obvious because the system size is small (2 × 2 or 4 × 4). Since we have improved the classic KZ and LLL, we will verify our boosted algorithms in IF by showing their performance in terms of ergodic rates, orthogonal defects (inversely proportional to sum rates), and complexity in flops.

A. IF and SBP

In this subsection, the IF transceiver architecture is reviewed using real-valued representations for simplicity. In a MIMO system of size n × n, each antenna has a message

w_i = [w_i(1), w_i(2), …, w_i(k)]⊤,

where i ∈ {1, …, n}, w_i ∈ F_p^k, and F_p is a finite field of size p. As the conversion from the message layer to the physical layer, an encoder E_i : F_p^k → R^T maps the length-k message w_i into a lattice codeword

x_i = [x_i(1), x_i(2), …, x_i(T)]⊤,

where ‖x_i‖² ≤ TP, T stands for the code length and P stands for the SNR. All encoders operate on the same lattice with the same rate:

R_TX = (k/T) log2 p.
Let x_i(j) be the jth symbol of x_i; we may write the transmitted vector across all antennas at time j as x[j] = [x_1(j), …, x_n(j)]⊤. An observation y[j] ∈ R^n can subsequently be written as

y[j] = Hx[j] + z[j],  (36)

in which H ∈ R^{n×n} denotes the MIMO channel matrix and z[j] is the additive white Gaussian noise (AWGN) with z_i[j] ∼ N(0, 1). Let Y, X, and Z be the concatenations of y[j], x[j] and z[j] from time slots 1 to T. In a linear receiver architecture, the receiver projects Y with a matrix B = [b_1, …, b_n]⊤ ∈ R^{n×n} to get the useful information AX for further decoding,

BY = AX (useful information) + (BH − A)X + BZ (effective noise).  (37)

We choose A = [a_1, …, a_n]⊤ ∈ Z^{n×n} because these lattice codewords are closed under integer combinations. A should also be full rank to avoid losing information.

For a preprocessing matrix B, the following computation rate can be obtained on the ith effective channel if the coding lattices satisfy goodness for channel coding and quantization [21]:

R(H, a_i, b_i) = (1/2) log2+ ( P / ( ‖b_i‖² + P ‖H⊤b_i − a_i‖² ) ),  (38)

where log2+(x) = max{log2(x), 0}.
The first step towards maximizing the rates is to set ∂{ ‖b_i‖² + P ‖H⊤b_i − a_i‖² }/∂b_i = 0 for a fixed IF coefficient matrix A, which leads to

b_i = (HH⊤ + (1/P) I)^{−1} H a_i.

Plugging this into (38) and using the Woodbury matrix identity for the matrix inverse, we have

R(H, a_i) = (1/2) log2+ ( P / ‖D a_i‖² ),  (39)

where D = Λ^{−1/2} V⊤ and VΛV⊤ = H⊤H + (1/P) I is the eigendecomposition. Achieving the optimum rate is therefore equivalent to solving the SIVP on the lattice L(D):

arg min_{A ∈ Z^{n×n}, rank(A)=n} max_i ‖D a_i‖²,  (40)

in which min_{A ∈ Z^{n×n}, rank(A)=n} max_i ‖D a_i‖² = λn(D)².
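A compact numpy sketch of the chain from H to the effective lattice basis D of (39) and the per-stream rates; the integer matrix A would normally come from a lattice reduction, and all names here are ours.

```python
import numpy as np

def if_rates(H, P, A):
    # D = Lambda^{-1/2} V' with V Lambda V' = H'H + (1/P) I, as in (39);
    # then R(H, a_i) = 0.5 * log2^+( P / ||D a_i||^2 ) per column a_i of A.
    n = H.shape[1]
    lam, V = np.linalg.eigh(H.T @ H + np.eye(n) / P)
    D = np.diag(lam ** -0.5) @ V.T
    rates = []
    for a in A.T:
        v = D @ a
        rates.append(0.5 * max(np.log2(P / float(v @ v)), 0.0))
    return D, np.array(rates)

H = np.random.randn(4, 4)
D, rates = if_rates(H, P=100.0, A=np.eye(4, dtype=int))  # A = I as placeholder
```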
Now we explain how to obtain the estimates of the messages. Upon quantizing BY to the fine lattice and taking modulo the coarse lattice in a row-wise manner [23], a converter D_i : R^n → F_p^k maps the physical-layer codeword to a message under finite field representations, i.e., û_i = [W⊤a_i] mod p, W = [w_1, …, w_n]⊤. These combinations are then collected, so as to decode the messages as

[ŵ_1, …, ŵ_n]⊤ = A_p^{−1} [û_1, …, û_n]⊤,

where A_p is a full-rank matrix over Z_p and A_p^{−1} is taken over the same field.

With the above demonstrations in mind, there are at least two reasons for us to restrain SIVP to the SBP

arg min_{A ∈ GL_n(Z)} max_i ‖D a_i‖².  (41)
The first reason is about flexibility. With SBP, we can choose
among lattice reduction algorithms from polynomial to exponential complexity with guaranteed properties, and these
algorithms are still efficient when SNR is high. The second
reason is about complexity, where the inverse of A over finite
fields is much easier to calculate when A ∈ GLn (Z), and
algorithms for SIVP or the successive minima problem (SMP)
are generally more complicated than those of SBP [37], [38].
For instance, we can observe that for the enumeration routines
of SMP, Minkowski reduction and boosted KZ reduction, one
needs to verify the linear independence of a new vector with
previous lattice vectors for SMP, while Minkowski reduction
only needs to check the greatest common divisor of the
enumerated coefficients [10] and boosted KZ does not require
such inspections.
B. Simulation results
This subsection examines the rates and complexity performance when applying the proposed boosted KZ and boosted
LLL algorithms for IF receivers. We show the achievable rates
rather than the bit error rates of IF MIMO receivers, since
the latter depend on which capacity-approaching code for the
AWGN channel is used at the transmitter. All simulations are
performed on real matrices with random entries drawn from
i.i.d. Gaussian distributions N (0, 1). Results in the figures are
all averaged from 103 Monte Carlo runs.
The boosted LLL algorithm is referred to as "b-LLL-L", with L being the total number of branches in the PNP algorithm, i.e., L = p_{i−1}p_{i−2}⋯p_1 remains unchanged across columns i. If L = 1, this version amounts to only adding a rejection operation to the classic LLL algorithm [11]. When L = 3 or L = 9, we expand 3 branches in the first or first two layers of the PNP algorithm. Regarding other typical variants of LLL,
such as the effective LLL [12] and greedy LLL [17], they all
boil down to the same performance as LLL if we implement
a full size reduction at the end of their algorithms, so we omit
comparing our algorithms with these variants.
The boosted KZ algorithm (“b-KZ”) is implemented as
described in Algorithm 1. To ensure a fair comparison, the
KZ algorithm follows the same routine as Algorithm 1 except replacing the “CVP subroutine” with a size reduction.
Minkowski reduction is also included as our reference with
the label “Minkow”, whose implementation follows [10, Sec.
V].
1) Achievable rate: The actual achievable rate of the IF receivers can be quantitatively evaluated by the ergodic rate defined in [37],

R_E = E[ n min_i R(H, a_i) ],

where the expectation is taken over different realizations of H, and R(H, a_i) was defined in (39).
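A Monte-Carlo sketch of R_E is given below; reduce_fn is a placeholder hook where a lattice reduction (LLL, boosted KZ, etc.) would supply the integer matrix A, and the identity used by default is for illustration only.

```python
import numpy as np

def ergodic_rate(n, P, trials=200, reduce_fn=None):
    # Estimates R_E = E[ n * min_i R(H, a_i) ] over random H with
    # i.i.d. N(0, 1) entries, as in Section V-B.
    vals = []
    for _ in range(trials):
        H = np.random.randn(n, n)
        lam, V = np.linalg.eigh(H.T @ H + np.eye(n) / P)
        D = np.diag(lam ** -0.5) @ V.T
        A = reduce_fn(D) if reduce_fn is not None else np.eye(n, dtype=int)
        r = [0.5 * max(np.log2(P / float((D @ a) @ (D @ a))), 0.0) for a in A.T]
        vals.append(n * min(r))
    return float(np.mean(vals))

print(ergodic_rate(n=4, P=100.0))
```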
Fig. 2. SNR versus ergodic rate for different LR algorithms.
Fig. 3. Dimension versus ergodic rate for different LR algorithms.
Fig. 4. SNR versus OD for different LR algorithms.
Fig. 5. Dimension versus OD for different LR algorithms.

In Fig. 2, we have plotted the rate performance of different LR algorithms in a 20 × 20 real-valued MIMO channel, in which the channel capacity (1/2) log2(det(I + P HH⊤)) serves
as an upper bound. In the figure, the b-LLL-1 algorithm achieves higher rates than KZ and LLL, and the improvements after increasing the list number to L = 3, 9 can still be spotted in this crowded figure. The b-KZ method attains almost the same rates as Minkowski reduction. KZ reduction does not offer better rates than LLL because KZ only guarantees to yield a basis with the smallest potential, and both of them are subject to the curse of Proposition 1.

In Fig. 3, we fix the SNR at 20 dB and study how the size of the system affects the ergodic rates. From this graph, the rate differences among the LR methods grow as the dimension n increases, and their mutual relations are the same as those in Fig. 2.
2) Orthogonal defect: The ergodic rate R_E is only determined by the basis length l(D). To evaluate the sum rates of all data streams, ODs can be employed, which are proportional to the products of the basis vector lengths. Such a quantity reveals the gaps between different algorithms more vividly.

In Figs. 4 (fixed size of 20 × 20) and 5 (fixed SNR of 20 dB), we have plotted the SNR-versus-OD and dimension-versus-OD relations for the distinct lattice reduction algorithms. From these two figures, several phenomena can be observed. Boosted KZ cannot surpass Minkowski reduction but remains close to it. The performance improvements from b-LLL-1 to b-LLL-3 and b-LLL-9 are approximately proportional to the increment in the list size L. One interesting observation from Fig. 4 is that the performance gaps between boosted and non-boosted algorithms become larger as P rises. Since D = Λ^{−1/2}V⊤ and VΛV⊤ = H⊤H + (1/P)I, the increment of P intrinsically changes the goodness of the corresponding minimal basis. It also suggests that the probability of size reduction being suboptimal increases as the lattice bases become more random. Lastly, Fig. 5 shows an evident "Minkow < b-KZ < b-LLL-9 < b-LLL-3 < b-LLL-1 < KZ < LLL" relation in OD, and the measured OD values are much better than their theoretical bounds (see, e.g., Eqs. (13) and (31)).
3) Complexity: In addition to our theoretical analysis on the complexity of the proposed algorithms, we further compare their empirical costs by the expected number of flops, shown in Fig. 6. Not surprisingly, the b-KZ algorithm spends about 1.5 times the effort of KZ in the dimensions depicted in Fig. 6, and the b-LLL-1, 3, 9 algorithms cost around 1, 1.5, and 3 times the effort of LLL, respectively. Both b-KZ and KZ reductions require dramatically fewer flops than Minkowski reduction. Moreover, the boosted LLL algorithms have much smaller complexity than KZ while reducing the bases more effectively, as Figs. 2 to 5 have revealed.

To sum up, concerning the complexity-performance tradeoffs as well as the theoretical bounds, the boosted KZ and LLL algorithms can be ideal candidates for reducing lattice bases in IF with exponential and polynomial complexity, respectively.
Fig. 6. Dimension versus complexity for different LR algorithms.

VI. OPEN QUESTIONS

We have only demonstrated the theoretical superiority of boosted KZ over KZ and Minkowski, while Minkowski reduction still yields shorter vectors in our simulations. One interesting open question is whether there exist better performance bounds for Minkowski reduction. It is also of sufficient interest to improve the performance analysis of boosted LLL.

APPENDIX A
PROOF OF PROPOSITION 2

Proof: First of all, the reducible condition ‖d_i − v‖² < ‖d_i‖² can be reformulated as ‖π_{DΓi}(d_i) − v‖² < ‖π_{DΓi}(d_i)‖², which is equivalent to

‖v‖²/2 < ⟨π_{DΓi}(d_i), v⟩.  (42)

It is necessary to show ⟨π_{DΓi}(d_i), v⟩ > 0 for v ≠ 0 so that the inequality we pursue makes sense. We give a proof by contradiction. Suppose that θ(π_{DΓi}(d_i), v) > π/2. By the symmetry of the Voronoi cell V_v(DΓi), there exists a point d′_i, symmetric to π_{DΓi}(d_i) with respect to v, such that θ(d′_i, v) > π/2. Define the half-space of v as H_v = {x ∈ R^n | ⟨v, x⟩ > 0}; then the convex combination among {V_v(DΓi) ∩ H_v, π_{DΓi}(d_i), d′_i} must include the origin. Then there are two lattice points (0 and v) inside V_v(DΓi), which contradicts the basic property of a Voronoi cell, i.e., there can be only one lattice point inside a Voronoi region.

We proceed to prove (42). Since π_{DΓi}(d_i) is quantized to v, their difference π_{DΓi}(d_i) − v lies inside the Voronoi cell V_0(DΓi), which yields ⟨π_{DΓi}(d_i) − v, w⟩ < ‖w‖²/2 for all w ∈ L(DΓi) with w ≠ 0. As v − π_{DΓi}(d_i) ∈ V_0(DΓi), choosing the instance w = v ≠ 0 in ⟨v − π_{DΓi}(d_i), w⟩ < ‖w‖²/2 then yields (42).

APPENDIX B
PROOF OF PROPOSITION 3

Proof: Regarding (9), first recall the fact that we cannot produce n independent vectors using a lattice of rank n−1. Hence, among λ1(D), …, λn(D), at least one of them, say λ_{i′}(D), corresponds to v = x_n d_n + Σ_{i=1}^{n−1} x_i d_i, where Σ_{i=1}^{n−1} x_i d_i ∈ L(DΓn) and x_n ∈ Z \ {0}. With the QR decomposition [Q, R] = qr(D), the coefficient of q_n in v is x_n r_{n,n}, i.e.,

v = (terms in q_1, …, q_{n−1}) + x_n r_{n,n} q_n.

Notice that −v also corresponds to λ_{i′}(D), so we can confine x_n > 0 and consider the following two cases.

1) If x_n > 1: since the q_i's are orthonormal, ‖v‖ ≥ x_n |r_{n,n}|, which yields

|r_{n,n}|² ≤ (1/x_n²) λ_{i′}(D)² ≤ (1/4) λn(D)².  (43)

We then proceed to bound the last term in (7). For 1 ≤ i ≤ n, the covering radius of the lattice L(DΓn) is

ρ(DΓn) = max_x dist(x, L(DΓn)) ≤ (1/2) √( Σ_{k=1}^{n−1} r²_{k,k} ),  (44)

where the inequality is obtained by choosing x as a "deep hole" [39, P. 33] and solving this CVP with Babai's nearest plane algorithm [40]. Since boosted KZ still assures |r_{k,k}| = λ1(π⊥_{DΓk}([d_k, …, d_n])), and the projection of the kth successive minimum of D onto the orthogonal complement of DΓk must have at least one non-zero coefficient on [d_k, …, d_n], we have |r_{k,k}| ≤ λ_k(D); plugging this into (44),

ρ(DΓn) ≤ (1/2) √( Σ_{k=1}^{n−1} λ_k(D)² ).  (45)

Putting (43) and (45) into (7), we get ‖d_n‖² ≤ (1/4) λn(D)² + (1/4) Σ_{k=1}^{n−1} λ_k(D)² ≤ (n/4) λn(D)².

2) If x_n = 1: recall that our length reduction by CVP (Line 11) ensures d_n is the shortest vector in the set { d_n + Σ_{i=1}^{n−1} z_i d_i | z_i ∈ Z }, so ‖d_n‖ ≤ λ_{i′}(D) ≤ λn(D) in this scenario.

Combining 1) and 2) proves (9).

As for (8), since all sublattices L(DΓ_{i+1}), 1 ≤ i ≤ n, are also boosted KZ reduced, it follows from the proved (9) that ‖d_i‖ ≤ max{1, √i/2} λ_i(DΓ_{i+1}). With |r_{i,i}| ≤ λ_i(D) and the bound on the covering radius, we also have ‖d_i‖ ≤ (√(i+3)/2) λ_i(D) for all i. Choosing the minimum of the two bounds yields (8).
APPENDIX C
PROOF OF PROPOSITION 4

Proof: Since |r_{i,i}| = λ1(R_{i:n,i:n}), we apply Minkowski's second theorem [31, P. 202] to the lattices L(R_{i:n,i:n}) with 1 ≤ i ≤ n−1; then we have r²_{n−j+1,n−j+1} ≤ γ_j ( ∏_{k=1}^{j} r²_{n−k+1,n−k+1} )^{1/j}. As in [28, Prop. 4.2], we cancel the duplicated terms in this inequality and use induction from L(R_{n−1:n,n−1:n}) to L(R_{1:n,1:n}); then

r²_{n−j+1,n−j+1} ≤ γ_j ( ∏_{k=2}^{j} γ_k^{1/(k−1)} ) r²_{n,n}.  (46)

As γ_j ≤ 2j/3, we define g(j) = (2j/3) ∏_{k=2}^{j} (2k/3)^{1/(k−1)} to evaluate this term. Letting z = k−1, it can be shown that

g(j) = (8j/9) exp( Σ_{z=2}^{j−1} (1/z) ln((2z+2)/3) )
 ≤ (8j/9) exp( Σ_{z=2}^{j−1} ln(z)/z )   (a)
 ≤ (8j/9) exp( ∫_1^{j−1} (ln z)/z dz )   (b)
 = (8j/9) (j−1)^{ln(j−1)/2},

where the relaxation in (a) avoids evaluating Spence's function in the integration and the Riemann integral has been used in (b). Plugging this back into (46), we have

r²_{n−j+1,n−j+1} ≤ (8j/9) (j−1)^{ln(j−1)/2} r²_{n,n}  (47)

for 2 ≤ j ≤ n; this is the condition in boosted KZ that corresponds to the Siegel condition in LLL. Applying (47) to each of the lattices L(R_{1:n,1:i}) for 2 ≤ i ≤ n, we obtain λ1(D)² ≤ (8i/9)(i−1)^{ln(i−1)/2} r²_{i,i}, so (11) is proved. By further incorporating (47) and the relation ‖d_i‖² ≤ r²_{i,i} + (1/4) Σ_{k=1}^{i−1} r²_{k,k}, it yields

‖d_i‖² ≤ ( 1 + (1/4) Σ_{j=2}^{i} (8j/9)(j−1)^{ln(j−1)/2} ) r²_{i,i}
 ≤ ( 1 + (2i/9)(i−1)^{1+ln(i−1)/2} ) r²_{i,i}

for 1 ≤ i ≤ n, so (12) is proved.

APPENDIX D
PROOF OF PROPOSITION 5

Proof: It is equivalent to characterize d1, di by two numbers u, v in the complex plane C, i.e., u = c and v = d1 + √−1 d2. For an "acute basis" [u, v] [7, P. 76], where ℜ(v/u) ≥ 0, the basis attains the first and second successive minima if and only if

|v/u| ≥ 1 and 0 ≤ ℜ(v/u) ≤ 1/2.  (48)

All bases can be evaluated via Eq. (48) because either [u, v] or [u, −v] must be acute. In the boosted KZ algorithm, if we cannot reduce the length of di with only d1, then |cd1|/(d1² + d2²) < 1/2. In the other direction, reducing d1 by di is also impossible because d1 is already the shortest, so |d1/c| < 1/2. Combining these two non-reducible conditions with the acute condition d1/c ≥ 0, the requirements in (48) are met.

APPENDIX E
PROOF OF PROPOSITION 6

Proof: The proof of (33) follows that of [1, Lem. 2.9]. To prove (33), it suffices to show

lim_{n→∞} ( 1 − e^{−(1/n^c)} ) / ( (a−1)/a − ((a−1)/a)^{n/(n−1)} ) ≤ 1,  (49)

whose l.h.s. is an indeterminate form. Substituting n = (x+1)/x and applying L'Hôpital's rule, the l.h.s. of (49) becomes

lim_{x→0} ( 1 − e^{−1/(1+1/x)^c} ) / ( (a−1)/a − ((a−1)/a)^{x+1} )
 = lim_{x→0} [ c (x/(x+1))^{c−1} (1/(x+1)²) e^{−1/(1+1/x)^c} ] / [ −((a−1)/a)^{x+1} ln((a−1)/a) ] = 0,

and thus (49) is proved.

As for (34), setting ∂δ(a, n)/∂a = n/((n−1)a²) (1 − 1/a)^{1/(n−1)} − 1/a² = 0, we obtain the stationary point of δ(a, n) as a′ = 1/(1 − (1−1/n)^{n−1}), where ∂δ(a, n)/∂a < 0 if a ∈ (1, a′) and ∂δ(a, n)/∂a > 0 if a ∈ (a′, ∞). Noticing that (1−1/n)^{n−1} = e^{(n−1) ln(1−1/n)} and using L'Hôpital's rule again, we have

lim_{n→∞} 1/(1 − (1−1/n)^{n−1}) = 1/(1 − e^{−1}).
R EFERENCES
[1] D. Micciancio and S. Goldwasser, Complexity of Lattice Problems, pp.
1–228. Boston, MA: Springer US, 2002.
[2] A. Hassibi and S. Boyd, “Integer parameter estimation in linear models
with applications to gps,” IEEE Trans. Signal Process., vol. 46, no. 11,
pp. 2918–2925, 1998.
[3] R. Neelamani, R. Baraniuk, and R. de Queiroz, “Compression color
space estimation of JPEG images using lattice basis reduction,” in 2001
Int. Conf. Image Process., vol. 1. IEEE, 2001, pp. 890–893.
[4] H. Yao and G. Wornell, “Lattice-reduction-aided detectors for MIMO
communication systems,” in 2002 Glob. Telecommun. Conf., vol. 1.
IEEE, 2002, pp. 424–428.
[5] C. Windpassinger, R. F. H. Fischer, and J. B. Huber, “Lattice-reductionaided broadcast precoding,” IEEE Trans. Commun., vol. 52, no. 12, pp.
2057–2060, 2004.
[6] D. Wubben, D. Seethaler, J. Jalden, and G. Matz, “Lattice Reduction,”
IEEE Signal Process. Mag., vol. 28, no. 3, pp. 70–91, may 2011.
13
[7] P. Q. Nguyen and B. Vallée, Eds., The LLL Algorithm, ser. Information Security and Cryptography, pp. 1–503. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010.
[8] A. Korkine and G. Zolotareff, "Sur les formes quadratiques positives," Math. Ann., vol. 11, no. 2, pp. 242–292, jun 1877.
[9] J. Wen and X.-W. Chang, “A modified KZ reduction algorithm,”
in 2015 IEEE Int. Symp. Inf. Theory, no. 7. IEEE, jun 2015, pp.
451–455.
[10] W. Zhang, S. Qiao, and Y. Wei, “HKZ and Minkowski Reduction
Algorithms for Lattice-Reduction-Aided MIMO Detection,” IEEE
Trans. Signal Process., vol. 60, no. 11, pp. 5963–5976, nov 2012.
[11] A. K. Lenstra, H. W. Lenstra, and L. Lovász, “Factoring polynomials
with rational coefficients,” Math. Ann., vol. 261, no. 4, pp. 515–534,
1982.
[12] C. Ling, W. H. Mow, and N. Howgrave-Graham, “Reduced and FixedComplexity Variants of the LLL Algorithm for Communications,” IEEE
Trans. Commun., vol. 61, no. 3, pp. 1040–1050, mar 2013.
[13] W. Zhang, S. Qiao, and Y. Wei, “A Diagonal Lattice Reduction
Algorithm for MIMO Detection,” IEEE Signal Process. Lett., vol. 19,
no. 5, pp. 311–314, may 2012.
[14] H. Vetter, V. Ponnampalam, M. Sandell, and P. A. Hoeher, “Fixed
complexity LLL algorithm,” IEEE Trans. Signal Process., vol. 57, no. 4,
pp. 1634–1637, 2009.
[15] Q. Wen, Q. Zhou, and X. Ma, “An enhanced fixed-complexity LLL
algorithm for MIMO detection,” 2014 IEEE Glob. Commun. Conf.
GLOBECOM 2014, pp. 3231–3236, 2014.
[16] X. W. Chang, X. Yang, and T. Zhou, “MLAMBDA: A modified
LAMBDA method for integer least-squares estimation,” J. Geod.,
vol. 79, no. 9, pp. 552–565, 2005.
[17] Q. Wen and X. Ma, “Efficient Greedy LLL Algorithms for Lattice
Decoding,” IEEE Trans. Wirel. Commun., vol. 15, no. 5, pp. 3560–3572,
may 2016.
[18] C. P. Schnorr and M. Euchner, “Lattice basis reduction: Improved
practical algorithms and solving subset sum problems,” Math. Program.,
vol. 66, no. 1-3, pp. 181–199, aug 1994.
[19] M. Schneider and N. Gama, (2010). “SVP Challenge.” [Online].
Available: http://latticechallenge.org/svp-challenge/index.php
[20] P. Q. Nguyen and D. Stehle, “Low-dimensional lattice basis reduction
revisited (extended abstract),” Algorithmic Number Theory - Proc.
ANTS-VI, vol. 5, no. 4, pp. 1–20, 2012.
[21] J. Zhan, B. Nazer, U. Erez, and M. Gastpar, “Integer-Forcing Linear
Receivers,” IEEE Trans. Inf. Theory, vol. 60, no. 12, pp. 7661–7685,
dec 2014.
[22] N. Gama, N. Howgrave-Graham, H. Koy, and P. Q. Nguyen, “Rankin’s
Constant and Blockwise Lattice Reduction,” Crypto, vol. 4117, pp. 112–
130, 2006.
[23] B. Nazer and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Trans. Inf. Theory, vol. 57, no. 10,
pp. 6463–6486, 2011.
[24] C. Ling, “On the proximity factors of lattice reduction-aided decoding,”
IEEE Trans. Signal Process., vol. 59, no. 6, pp. 2795–2808, 2011.
[25] S. Liu, C. Ling, and D. Stehlé, “Decoding by sampling: A randomized
lattice algorithm for bounded distance decoding,” IEEE Trans. Inf.
Theory, vol. 57, no. 9, pp. 5933–5945, 2011.
[26] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, “Closest point search in
lattices,” IEEE Trans. Inf. Theory, vol. 48, no. 8, pp. 2201–2214, 2002.
[27] B. Hassibi and H. Vikalo, “On the sphere-decoding algorithm I.
Expected complexity,” IEEE Trans. Signal Process., vol. 53, no. 8, pp.
2806–2818, aug 2005.
[28] J. C. Lagarias, H. W. Lenstra, and C. P. Schnorr, “Korkin-Zolotarev
bases and successive minima of a lattice and its reciprocal lattice,”
Combinatorica, vol. 10, no. 4, pp. 333–348, 1990.
[29] H. F. Blichfeldt, “A new principle in the geometry of numbers, with
some applications,” Trans. Am. Math. Soc., vol. 15, no. 3, pp. 227–227,
1914.
[30] Y. Chen and P. Q. Nguyen, "BKZ 2.0: Better lattice security estimates," Asiacrypt, vol. 7073, pp. 1–20, 2011.
[31] J. W. S. Cassels, An Introduction to the Geometry of Numbers, pp.
1–343. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997.
[32] X. W. Chang, J. Wen, and X. Xie, “Effects of the LLL reduction on the
success probability of the babai point and on the complexity of sphere
decoding,” IEEE Trans. Inf. Theory, vol. 59, no. 8, pp. 4915–4926, 2013.
[33] P. Q. Nguyen and D. Stehlé, “LLL on the average,” Algorithmic
Number Theory, vol. 4076, pp. 1–17, 2006.
[34] N. Gama and P. Q. Nguyen, “Predicting Lattice Reduction,” Eurocrypt,
vol. 4965, pp. 31–51, 2008.
[35] R. Lindner and C. Peikert, “Better key sizes (and Attacks) for LWEbased encryption,” Lect. Notes Comput. Sci. (including Subser. Lect.
Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 6558 LNCS, pp.
319–339, 2011.
[36] J. Jalden, D. Seethaler, and G. Matz, “Worst- and average-case
complexity of LLL lattice reduction in MIMO wireless systems,” in
2008 IEEE Int. Conf. Acoust. Speech Signal Process. IEEE, mar
2008, pp. 2685–2688.
[37] A. Sakzad, J. Harshan, and E. Viterbo, “Integer-forcing MIMO linear
receivers based on lattice reduction,” IEEE Trans. Wirel. Commun.,
vol. 12, no. 10, pp. 4905–4915, 2013.
[38] L. Ding, K. Kansanen, Y. Wang, and J. Zhang, “Exact SMP Algorithms
for Integer-Forcing Linear MIMO Receivers,” IEEE Trans. Wirel. Commun., vol. 14, no. 12, pp. 6955–6966, 2015.
[39] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and
Groups, ser. Grundlehren der mathematischen Wissenschaften, pp.
1–690. New York, NY: Springer New York, 1999, vol. 290.
[40] L. Babai, “On Lovasz lattice reduction and the nearest lattice point
problem,” Combinatorica, vol. 6, no. 1, pp. 1–13, 1986.
Sparse CCA: Adaptive Estimation and Computational Barriers∗

Chao Gao¹, Zongming Ma², and Harrison H. Zhou¹

¹Yale University
²University of Pennsylvania

∗The research of C. Gao and H. H. Zhou is supported in part by NSF Career Award DMS-0645676 and NSF FRG Grant DMS-0854975. The research of Z. Ma is supported in part by NSF Career Award DMS-1352060.
Abstract
Canonical correlation analysis is a classical technique for exploring the relationship
between two sets of variables. It has important applications in analyzing high-dimensional datasets originating from genomics, imaging and other fields. This paper considers
adaptive minimax and computationally tractable estimation of leading sparse canonical
coefficient vectors in high dimensions. First, we establish separate minimax estimation
rates for canonical coefficient vectors of each set of random variables under no structural
assumption on marginal covariance matrices. Second, we propose a computationally feasible estimator to attain the optimal rates adaptively under an additional sample size
condition. Finally, we show that a sample size condition of this kind is needed for any
randomized polynomial-time estimator to be consistent, assuming hardness of certain
instances of the Planted Clique detection problem. The result is faithful to the Gaussian models used in the paper. As a byproduct, we obtain the first computational lower
bounds for sparse PCA under the Gaussian single spiked covariance model.
Keywords. Convex programming, group-Lasso, Minimax rates, Computational complexity, Planted Clique, Sparse CCA (SCCA), Sparse PCA (SPCA)
1 Introduction
Canonical correlation analysis (CCA) [23] is a classical and important tool in multivariate
statistics [1, 31]. For two random vectors X ∈ Rp and Y ∈ Rm , at the population level, CCA
finds successive vectors uj ∈ Rp and vj ∈ Rm (called canonical coefficient vectors) that solve
max_{a,b} a′Σxy b,  subject to a′Σx a = b′Σy b = 1, a′Σx u_l = b′Σy v_l = 0, ∀ 0 ≤ l ≤ j−1,  (1)

where Σx = Cov(X), Σy = Cov(Y), Σxy = Cov(X, Y), u_0 = 0, and v_0 = 0. Since our primary interest lies in the covariance structure between X and Y, we assume that their means are zero from here on. Then the linear combinations (u′_j X, v′_j Y) are the j-th pair of canonical variates. This technique has been widely used in various scientific fields to explore the
relationship between two sets of variables. In practice, one does not have knowledge about the population covariance, and Σx, Σy, and Σxy are replaced by their sample versions Σ̂x, Σ̂y, and Σ̂xy in (1).
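For reference, a numpy sketch of the classical sample CCA just described, via the SVD of Σ̂x^{−1/2} Σ̂xy Σ̂y^{−1/2}; it is exactly this estimator that breaks down when p or m are large relative to n, and the function names here are ours.

```python
import numpy as np

def inv_sqrt(S):
    # Inverse principal square root of a symmetric positive definite matrix.
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

def sample_cca(X, Y, r):
    # X: n x p, Y: n x m (mean zero). Returns the leading r sample
    # canonical coefficient vectors and canonical correlations.
    n = X.shape[0]
    Sx, Sy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Ax, Ay = inv_sqrt(Sx), inv_sqrt(Sy)
    Pm, lam, Qt = np.linalg.svd(Ax @ Sxy @ Ay)
    return Ax @ Pm[:, :r], Ay @ Qt.T[:, :r], lam[:r]

X, Y = np.random.randn(500, 5), np.random.randn(500, 3)
U_hat, V_hat, correlations = sample_cca(X, Y, r=2)
```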
Recently, there have been growing interests in applying CCA to analyzing high-dimensional
datasets, where the dimensions p and m could be much larger than the sample size n. It has
been well understood that classical CCA breaks down in this regime [25, 4, 18]. Motivated
by genomics, neuroimaging and other applications, people have become interested in seeking
sparse leading canonical coefficient vectors. Various estimation procedures imposing sparsity on canonical coefficient vectors have been developed in the literature, which are usually
termed sparse CCA. See, for example, [42, 43, 33, 22, 27, 38, 3].
The theoretical aspect of sparse CCA has also been investigated in the literature. A useful
model for studying sparse CCA is the canonical pair model proposed in [14]. In particular,
suppose there are r pairs of canonical coefficient vectors (and canonical variates) among the
two sets of variables, then the model reparameterizes the cross-covariance matrix as
Σxy = Σx U Λ V′ Σy,  where U′Σx U = V′Σy V = I_r.  (2)
Here U = [u1 , ..., ur ] and V = [v1 , ..., vr ] collect the canonical coefficient vectors and Λ =
diag(λ1 , . . . , λr ) with 1 > λ1 ≥ · · · ≥ λr > 0 are the ordered canonical correlations. Let
Su = supp(U ) and Sv = supp(V ) be the indices of nonzero rows of U and V . One way to
impose sparsity on the canonical coefficient vectors is to require the sizes of Su and Sv to be
small, namely |Su | ≤ su and |Sv | ≤ sv for some su ≤ p and sv ≤ m. Under this model, Gao
et al. [18] showed that the minimax rate for estimating U and V under the joint loss function
‖ÛV̂′ − UV′‖²_F is

(1/(nλ²_r)) ( r(s_u + s_v) + s_u log(ep/s_u) + s_v log(em/s_v) ).  (3)
However, to achieve the rate, Gao et al. [18] used a computationally infeasible and nonadaptive procedure, which requires exhaustive search of all possible subsets with the given
cardinality and the knowledge of su and sv . Moreover, it is unclear from (3) whether the
estimation error of U depends on the sparsity and the ambient dimension of V and vice versa.
The goal of the present paper is to study three fundamental questions in sparse CCA: (1)
What are the minimax rates for estimating the canonical coefficient vectors on the two sets
of variables separately? (2) Is there a computationally efficient and sparsity-adaptive method
that achieves the optimal rates? (3) What is the price one has to pay to achieve the optimal
rates in a computationally efficient way?
1.1 Main contributions
We now introduce the main contributions of the present paper from three different viewpoints
as suggested by the three questions we have raised.
Separate minimax rates The joint loss ‖ÛV̂′ − UV′‖²_F studied by [18] characterizes the joint estimation error of both U and V. In this paper, we provide a finer analysis by studying
individual estimation errors of U and V under a natural loss function that can be interpreted
as prediction error of canonical variates. The exact definition of the loss functions is given in
Section 2. Separate minimax rates are obtained for U and V . In particular, we show that the
minimax rate of convergence in estimating U depends only on n, r, λr , p and su , but not on
2
either m or sv . Consequently, if U is sparser than V , then convergence rate for estimating U
can be faster than that for estimating V . Such a difference is not reflected by the joint loss,
since its minimax rate (3) is determined by the slower of the rates of estimating U and V .
Adaptive estimation As pointed out in [14] and [18], sparse CCA is a more difficult
problem than the well-studied sparse PCA. A naive application of sparse PCA algorithm to
sparse CCA can lead to inconsistent results [14]. The additional difficulty in sparse CCA
mainly comes from the presence of the nuisance parameters Σx and Σy , which cannot be
estimated consistently in a high-dimensional regime in general. Therefore, our goal is to
design an estimator that is adaptive to both the nuisance parameters and the sparsity levels.
Under the canonical pair model, we propose a computationally efficient algorithm. The
algorithm has two stages. In the first stage, we propose a convex program for sparse CCA
based on a tight convex relaxation of a combinatorial program in [18] by considering the
smallest convex set containing all matrices of the form AB′ with both A and B being rank-r orthogonal matrices. The convex program can be efficiently solved by the Alternating
Direction Method with Multipliers (ADMM) [16, 11]. Based on the output of the first stage,
we formulate a sparse linear regression problem in the second stage to improve estimation
accuracy, and the final estimators Û and V̂ can be obtained via a group-Lasso algorithm [45]. Under the sample size condition that

n ≥ C s_u s_v log(p+m)/λ²_r  (4)

for some sufficiently large constant C > 0, we show Û and V̂ recover the true canonical coefficient matrices U and V within optimal error rates adaptively with high probability.
Computational lower bound We require the sample size condition (4) for the adaptive
procedure to achieve optimal rates of convergence. Assuming hardness of certain instances of
the Planted Clique detection problem, we provide a computational lower bound to show that a
condition of this kind is unavoidable for any computationally feasible estimation procedure to
achieve consistency. Up to an asymptotically equivalent discretization which is necessary for
computational complexity to be well-defined, our computational lower bound is established
directly for the Gaussian canonical pair model used throughout the paper.
An analogous sample size condition has been imposed in the sparse PCA literature [24,
29, 13, 37], namely n ≥ Cs2 log p/λ2 where s is the sparsity of the leading eigenvector
and λ the gap between the leading eigenvalue and the rest of the spectrum. Berthet and
Rigollet [7] showed that if there existed a polynomial-time algorithm for a generalized sparse
PCA detection problem while such a condition is violated, the algorithm could be made (in
randomized polynomial-time) into a detection method for the Planted Clique problem in a
regime where it is believed to be computationally intractable. However, both the null and the
alternative hypotheses in the sparse PCA detection problem were generalized in [7] to include
all multivariate distributions whose quadratic forms satisfy certain uniform tail probability
bounds and so the distributions need not be Gaussian or having a spiked covariance structure
[24]. The same remark also applies to the subsequent work on sparse PCA estimation [39].
Hence, the computational lower bound in sparse PCA was only established for such enlarged
parameter spaces. As a byproduct of our analysis, we establish the desired computational
lower bound for sparse PCA in the Gaussian single spiked covariance model.
1.2 Organization
After an introduction to notation, the rest of the paper is organized as follows. In Section
2, we formulate the sparse CCA problem by defining its parameter space and loss function.
Section 3 presents separate minimax rates for estimating U and V . Section 4 proposes a
two-stage adaptive estimator that is shown to be minimax rate optimal under an additional
sample size condition. Section 5 shows a condition of this kind is necessary for any randomized polynomial-time estimator to achieve consistency by establishing new computational lower
bounds for sparse PCA and sparse CCA. Section 6 presents proofs of theoretical results in
Section 4. Implementation details of the adaptive procedure, numerical studies, additional
proofs and technical details are deferred to the supplement [19].
1.3 Notation

For any t ∈ Z+, [t] denotes the set {1, 2, ..., t}. For any set S, |S| denotes its cardinality and S^c its complement. For any event E, 1{E} denotes its indicator function. For any a, b ∈ R, ⌈a⌉ denotes the smallest integer no smaller than a, ⌊a⌋ the largest integer no larger than a, a ∨ b = max(a, b) and a ∧ b = min(a, b). For a vector u, ‖u‖ = √(Σ_i u_i²), ‖u‖_0 = Σ_i 1{u_i ≠ 0}, and ‖u‖_1 = Σ_i |u_i|. For any matrix A = (a_ij) ∈ R^{p×k}, A_i· denotes its i-th row and supp(A) = {i ∈ [p] : ‖A_i·‖ > 0}, the index set of nonzero rows, is called its support. For any subset J ⊂ [p] × [k], A_J = (a_ij 1{(i,j)∈J}) ∈ R^{p×k} is obtained by keeping all entries in J and replacing all entries in J^c with zeros. We write A_{J1 J2} for A_{J1×J2} and A_{(J1 J2)^c} for A_{(J1×J2)^c}. Note that A_{J1 ∗} = A_{J1×[k]} ∈ R^{p×k}, while A_{J1 ·} stands for the corresponding nonzero submatrix of size |J1| × k. In addition, P_A ∈ R^{p×p} stands for the projection matrix onto the column space of A, O(p, k) denotes the set of all p × k orthogonal matrices and O(k) = O(k, k). Furthermore, σ_i(A) stands for the i-th largest singular value of A, and σ_max(A) = σ_1(A), σ_min(A) = σ_{p∧k}(A). The Frobenius norm and the operator norm of A are ‖A‖_F = √(Σ_{i,j} a_ij²) and ‖A‖_op = σ_1(A), respectively. The ℓ_1 norm and the nuclear norm are ‖A‖_1 = Σ_{ij} |a_ij| and ‖A‖_* = Σ_i σ_i(A), respectively. If A is a square matrix, its trace is Tr(A) = Σ_i a_ii. For two square matrices A and B, we write A ⪯ B if B − A is positive semidefinite. For any positive semidefinite matrix A, A^{1/2} denotes its principal square root, which is positive semidefinite and satisfies A^{1/2}A^{1/2} = A. The trace inner product of two matrices A, B ∈ R^{p×k} is ⟨A, B⟩ = Tr(A′B). Given a random element X, L(X) denotes its probability distribution. The symbol C and its variants C_1, C′, etc. are generic constants and may vary from line to line, unless otherwise specified. The symbols P and E stand for generic probability and expectation when the distribution is clear from the context.
2 Problem Formulation

2.1 Parameter space

Consider a canonical pair model where the observed pairs of measurement vectors (X′_i, Y′_i)′, i = 1, …, n are i.i.d. from a multivariate Gaussian distribution N_{p+m}(0, Σ), where

Σ = [ Σx   Σxy ]
    [ Σyx  Σy  ],
with the cross-covariance matrix Σxy satisfying (2). We are interested in the situation where the leading canonical coefficient vectors are sparse. One way to quantify the level of sparsity is to bound how many nonzero rows there are in the U and V matrices. This notion of sparsity has been used previously in both sparse PCA [13, 37] and sparse CCA [18] problems when one seeks multiple sparse vectors simultaneously.

Recall that for any matrix A, supp(A) collects the indices of nonzero rows in A. Adopting the above notion of sparsity, we define F(s_u, s_v, p, m, r, λ; M) to be the collection of all covariance matrices Σ with the structure (2) satisfying

1. U ∈ R^{p×r} and V ∈ R^{m×r} with |supp(U)| ≤ s_u and |supp(V)| ≤ s_v;
2. σ_min(Σx) ∧ σ_min(Σy) ≥ M^{−1} and σ_max(Σx) ∨ σ_max(Σy) ≤ M;  (5)
3. λ_r ≥ λ and λ_1 ≤ 1 − M^{−1}.
The probability space we consider is

P(n, s_u, s_v, p, m, r, λ; M) = { L((X′_1, Y′_1)′, …, (X′_n, Y′_n)′) : (X′_i, Y′_i)′ iid∼ N_{p+m}(0, Σ) with Σ ∈ F(s_u, s_v, p, m, r, λ; M) },  (6)

where n is the sample size. We shall allow s_u, s_v, p, m, r, λ to vary with n, while M > 1 is restricted to be an absolute constant.
2.2 Prediction loss

From now on, the presentation of definitions and results will focus on U only, since those for V can be obtained via symmetry. Given an estimator Û = [û_1, …, û_r] of the leading canonical coefficient vectors for X, a natural way of assessing its quality is to see how well it predicts the values of the canonical variables U′X⋆ ∈ R^r for a new observation X⋆, which is independent of and identically distributed as the training sample used to obtain Û. This leads us to consider the following loss function

L(Û, U) = inf_{W∈O(r)} E⋆ ‖W′Û′X⋆ − U′X⋆‖²,  (7)

where E⋆ means taking expectation only over X⋆, and so L(Û, U) is still a random quantity due to the randomness of Û. Since L(Û, U) is the expected squared error for predicting the canonical variables U′X⋆ via Û′X⋆, we refer to it as the prediction loss from now on. It is worth noting that the introduction of an r × r orthogonal matrix W is unavoidable. To see this, we can simply consider the case where λ_1 = ⋯ = λ_r = λ in (2); then we can replace the pair (U, V) in (2) by (UW, VW) for any W ∈ O(r). In other words, the canonical coefficient vectors are only determined up to a joint orthogonal transform. If we work out the E⋆ part in the definition (7), then the loss function can be equivalently defined as

L(Û, U) = inf_{W∈O(r)} Tr[(ÛW − U)′ Σx (ÛW − U)].  (8)

By symmetry, we can define L(V̂, V) by simply replacing U, Û, X⋆ and Σx in (7) and (8) with V, V̂, Y⋆ and Σy.
A related loss function is kPUb −PU k2F measuring the difference between two subspaces. By
b , U ) is a stronger
Proposition 9.2 in the supplementary material [19], the prediction loss L(U
2
b
loss function. That is, kPUb − PU kF ≤ CL(U , U ) for some constant C > 0 only depending on
b , U ) is strictly stronger. To see this, let Σx = Ip , U ∈ O(p, r) and U
b = 2U .
M . Actually, L(U
2
b , U ) = inf W ∈O(r) kΣ1/2
b
Then, kPUb − PU k2F = 0, while L(U
x (U W − U )kF = inf W ∈O(r) (5r −
b , U ), and provide brief
Tr(W )) = r > 0. In this paper, we will focus on the stronger loss L(U
2
remarks on results for kPUb − PU kF .
3
Minimax Rates
We first provide a minimax upper bound using a combinatorial optimization procedure, and
then show that the resulting rate is optimal by further providing a matching minimax lower
bound.
Let (Xi′ , Yi′ )′ ∈ Rp+m, i = 1, . . . , n, be i.i.d. observations following Np+m (0, Σ) for some
Σ ∈ F(su , sv , p, m, r, λ; M ). For notational convenience, we assume the sample size is divisible
by three, i.e., n = 3n0 for some n0 ∈ N.
Procedure To obtain minimax upper bound, we propose a two-stage combinatorial op0
timization procedure. We split the data into three equal size batches D0 = {(Xi′ , Yi′ )′ }ni=1
,
2n0
′
′
′
′
′
′
n
D1 = {(Xi , Yi ) }i=n0 +1 and D2 = {(Xi , Yi ) }i=2n0 +1 , and denote the sample covariance mab (j)
b (j)
b (j)
trices computed on each batch by Σ
x , Σy and Σxy for j ∈ {0, 1, 2}.
b (0) , Vb (0) ) which solves the following program:
In the first stage, we find (U
max
L∈Rp×r ,R∈Rm×r
b (0) R),
Tr(L′ Σ
xy
b (0) L = R′ Σ
b (0) R = Ir , and
subject to L′ Σ
x
y
(9)
|supp(L)| ≤ su , |supp(R)| ≤ sv .
b (1) solving
In the second stage, we further refine the estimator for U by finding U
min
L∈Rp×r
′ b (1) b (0)
b (1)
Tr(L′ Σ
)
x L) − 2 Tr(L Σxy V
(10)
subject to |supp(L)| ≤ su .
b (1) , defined as
The final estimator is a normalized version of U
b =U
b (1) ((U
b (1) )′ Σ
b (2)
b (1) )−1/2 .
U
x U
(11)
The purpose of sample splitting employed in the above procedure is to facilitate the proof.
Theory and discussion The program (9) was first proposed in [18] as a sparsity constrained version of the classical CCA formulation. However, the resulting estimator will have
a convergence rate that involves the sparsity level sv and the ambient dimension m of the V
6
matrix [18, Theorem 1], which is sub-optimal. The second stage in the procedure is thus proposed to further pursue the optimal estimation rates. First, if we were given the knowledge
of V , then the least square solution of regressing V ′ Y ∈ Rr on X ∈ Rp is
U Λ = argmin EkY ′ V − X ′ Lk2
L∈Rp×r
= argmin Tr(L′ Σx L) − 2 Tr(L′ Σxy V ) + Tr(V ′ Σy V )
L∈Rp×r
(12)
= argmin Tr(L′ Σx L) − 2 Tr(L′ Σxy V ),
L∈Rp×r
where the expectation is with respect to the distribution (X ′ , Y ′ )′ ∼ Np+m (0, Σ). The second
equality results from taking expectation over each of the three terms in the expansion of the
square Euclidean norm, and the last equality holds since Tr(V ′ Σy V ) does not involve the
argument to be optimized over. In fact, from the canonical pair model, one can easily derive
a regression interpretation of CCA, V ′ Y = ΛU ′ X + E, where E ∼ N (0, Ir − Λ2 ). Then, (10)
is a least square formulation of the regression interpretation. However, CCA is different from
regression because the response V ′ Y depends on an unknown V . Comparing (10) with (12),
it is clear that (10) is a sparsity constrained version of (12) where the knowledge of V and the
covariance matrix Σ are replaced by the initial estimator Vb (0) and sample covariance matrix
b (1) can be viewed as an estimator of U Λ. Hence,
from an independent sample. Therefore, U
a final normalization step is taken in (11) to transform it to an estimator of U .
We now state a bound for the final estimator (11).
Theorem 3.1. Assume
ep
em
1
r(su + sv ) + su log
+ sv log
≤c
n
su
sv
(13)
for some sufficiently small constant c > 0. Then there exist constants C, C ′ > 0 only depending on c such that
b , U ) ≤ C su r + log ep ,
L(U
(14)
nλ2
su
with P-probability at least 1 − exp (−C ′ (su + log(ep/su ))) − exp (−C ′ (sv + log(em/sv ))) uniformly over P ∈ P(n, su , sv , p, m, r, λ; M ).
Remark 3.1. The paper assumes that M is a constant. However, it is worth noting that the
minimax upper bound of Theorem 3.1 does not depend on M even if M is allowed to grow
with n. To be specific, assume the eigenvalues of Σx are bounded
in the interval [M1 , M2 ].
b , U ) would still be 1 2 su r + log ep , because the dependence on
The convergence rate of L(U
su
nλ
M1 , M2 has been implicitly built into the prediction
loss.
On
the other hand, a convergence
1
ep
M2
2
rate for the loss kPUb − PU kF would be M1 nλ2 su r + log su , with an extra factor of the
condition number of Σx .
Under assumption (13), Theorem 3.1 achieves a convergence rate for the prediction loss
in U that does not depend on any parameter related to V . Note that the probability tail still
′
involves m and sv . However, it can be shown that exp (−C ′ (sv + log(em/sv ))) ≤ m−C /2 ,
and so the corresponding term in the tail probability goes to 0 as long as m → ∞. The
optimality of this upper bound can be justified by the following minimax lower bound.
7
v
Theorem 3.2. Assume that r ≤ su ∧s
2 . Then there exists some constant C > 0 only depending on M and an absolute constant c0 > 0, such that
C
ep
b
inf sup P L(U , U ) ≥ c0 ∧
su r + log
≥ 0.8,
b P∈P
nλ2
su
U
where P = P(n, su , sv , p, m, r, λ; M ).
By Theorem 3.1 and Theorem 3.2, the rate in (14), whenever it is upper bounded by a
constant, is the minimax rate of the problem.
4
Adaptive and Computationally Efficient Estimation
Section 3 determines the minimax rates for estimating U under the prediction loss. However,
there are two drawbacks of the procedure (9) – (11). One is that it requires the knowledge
of the sparsity levels su and sv . It is thus not adaptive. The other is that in both stages
one needs to conduct exhaustive search over all subsets of given sizes in the optimization
problems (9) and (10), and hence the computation cost is formidable.
In this section, we overcome both drawbacks by proposing a two-stage convex program
approach towards sparse CCA. The procedure is named CoLaR, standing for Convex program
with group-Lasso Refinement. It is not only computationally feasible but also achieves the
minimax estimation error rates adaptively over a large collection of parameter spaces under an
additional sample size condition. The issues related to this additional sample size condition
will be discussed in more detail in the subsequent Section 5.
4.1
Estimation scheme
The basic principle underlying the computationally feasible estimation scheme is to seek tight
convex relaxations of the combinatorial programs (9) – (10). In what follows, we introduce
convex relaxations for the two stages in order. As in Section 3, we assume that the data is
b (j)
b (j)
b (j)
split into three batches D0 , D1 and D2 of equal sizes and for j = 0, 1, 2, let Σ
x , Σy and Σxy
be defined as before.
First stage By the definition of trace inner product, the objective function in (9) can be
b xy R) = hΣ
b xy , LR′ i. Since it is linear in F = LR′ , this suggests treating
rewritten as Tr(L′ Σ
′
LR as a single argument rather than optimizing over L and R separately. Next, the support
size constraints |supp(L)| ≤ su , |supp(R)| ≤ sv imply that the vector ℓ0 norm kLR′ k0 ≤ su sv .
Applying the convex relaxation of ℓ0 norm by ℓ1 norm and including it as a Lagrangian term,
we are led to consider a new objective function
b (0) , F i − ρ||F ||1 ,
max hΣ
xy
F ∈Rp×m
(15)
P
where F serves as a surrogate for LR′ , kF k1 = i∈[p],j∈[m] |Fij | denotes the vector ℓ1 norm
of the matrix argument, and ρ is a penalty parameter controlling sparsity. Note that (15)
is the maximization problem of a concave function, which becomes a convex program if the
8
constraint set is convex. Under the identity F = LR′ , the normalization constraint in (9)
reduces to
1/2
1/2
b (0)
b (0)
(Σ
F (Σ
∈ Or = {AB ′ : A ∈ O(p, r), B ∈ O(m, r)}.
x )
y )
(16)
Cr = {G ∈ Rp×m : kGk∗ ≤ r, kGkop ≤ 1} = conv(Or )
(17)
1/2 F (Σ
1/2 ∈ C where
b (0)
b (0)
Naturally, we relax it to (Σ
x )
y )
r
is the smallest convex set containing Or . The relation (17) is stated in the proof of Theorem
3 of [40]. Combining (15) – (17), we use the following convex program for the first stage in
our adaptive estimation scheme:
max
F ∈Rp×m
b (0) , F i − ρ||F ||1
hΣ
xy
b (0) )1/2 F (Σ
b (0) )1/2 k∗ ≤ r, k(Σ
b (0) )1/2 F (Σ
b (0) )1/2 kop ≤ 1.
subject to k(Σ
x
y
x
y
(18)
Implementation of (18) is discussed in Section 10 in the supplement [19].
Remark 4.1. A related but different convex relaxation was proposed in [37] for the sparse
PCA problem, where the set of all rank r projection matrices (which are symmetric) is relaxed
to its convex hull – the Fantope {P : Tr(P ) = r, 0 P Ip }. Such an idea is not directly
applicable in the current setting due to the asymmetric nature of the matrices included in
the set Or in (16).
Remark 4.2. The risk of the solution to (18) for estimating U V ′ is sub-optimal compared
to the optimal rates determined in [18]. See Theorem 4.1 below. Nonetheless, it leads to
a reasonable estimator for the subspaces spanned by the first r left and right canonical
coefficient vectors under a sample size condition, which is sufficient for achieving the optimal
estimation rates for U and V in a further refinement stage to be introduced below. Although
it is possible that some other penalty function rather than the ℓ1 penalty in (18) could also
achieve this goal, ℓ1 is appealing due to its simplicity.
Second stage Now we turn to the convex relaxation to (10) in the second stage. By the
discussion following Theorem 3.1, if we view the rows of L as groups, then (10) becomes
a least square problem with a constrained number of active groups. A well-known convex
relaxation for such problems is the group-Lasso [45], where the number of active groups
constraint is relaxed by bounding the sum of ℓ2 norms of the coefficient vector of each group.
b (0) (resp. Vb (0) ) be the matrix consisting of its first r left
b be the solution to (18) and U
Let A
(resp. right) singular vectors. Thus, in the second stage of the adaptive estimation scheme,
we propose to solve the following group-Lasso problem:
b (1) L) − 2 Tr(L′ Σ
b (1) Vb (0) ) + ρu
min Tr(L′ Σ
x
xy
L∈Rp×m
p
X
j=1
kLj· k,
(19)
P
where pj=1 kLj· k is the group sparsity penalty, defined as the sum of the ℓ2 norms of all the
row vectors in L, and ρu is a penalty parameter controlling sparsity. Note that the group
sparsity penalty is crucial, since if one uses an ℓ1 penalty instead, only a sub-optimal rate can
9
b (1) , then our final estimator in the adaptive
be achieved. Suppose the solution to (19) is U
estimation scheme is its normalized version
b =U
b (1) ((U
b (1) )′ Σ
b (2)
b (1) )−1/2 .
U
x U
(20)
As before, sample splitting is only used for technical arguments in the proof. Simulation
results in Section 11 in the supplement [19] show that using the whole dataset repeatedly
in (18) – (20) yields satisfactory performance and the improvement by the second stage is
considerable.
4.2
Theoretical guarantees
b to the convex program (18).
We first state the upper bound for the solution A
Theorem 4.1. Assume that
su sv log(p + m)
,
(21)
λ2
for some sufficiently large constant C1 > 0. Then there p
exist positive constants γ1 , γ2 and
′
C, C only depending on M and C1 , such that when ρ = γ log(p + m)/n for γ ∈ [γ1 , γ2 ],
n ≥ C1
b − U V ′ k2 ≤ C
kA
F
su sv log(p + m)
,
nλ2
with P-probability at least 1 − exp(−C ′ (su + log(ep/su ))) − exp(−C ′ (sv + log(em/sv ))) for any
P ∈ P(n, su , sv , p, m, r, λ; M ).
Note that the error bound in Theorem 4.1 can be much larger than the optimal rate for
joint estimation of U V ′ established in [18]. Nonetheless, under the sample size condition (21),
b is close to U V ′ in Frobenius norm distance. This fact, together with
it still ensures that A
the proposed refinement scheme (19) – (20), guarantees the optimal rates of convergence for
the estimator (20) as stated in the following theorem.
Theorem 4.2. Assume (21) holds for some sufficiently large C1 ≥ 0. p
Then there exist
′ [log(p + m)]/n
constants γ and
γ
only
depending
on
C
and
M
such
that
if
we
set
ρ
=
γ
u
1
p
and ρu = γu′ (r + log p)/n for any γ ′ ∈ [γ, C2 γ] and γu′ ∈ [γu , C2 γu ] for some absolute
constant C2 > 0, there exist a constants C, C ′ > 0 only depending on C1 , C2 and M , such
that
b, U) ≤ C
L(U
su (r + log p)
,
nλ2
with P-probability at least 1 − exp(−C ′ (su + log(ep/su ))) − exp(−C ′ (sv + log(em/sv ))) −
exp(−C ′ (r + log(p ∧ m))) uniformly over P ∈ P(n, su , sv , p, m, r, λ; M ).
Remark 4.3. The result of Theorem 4.2 assumes a constant M . Explicit dependence on the
eigenvalues of the marginal covariance can be tracked even when M is diverging. Assuming
b, U)
the eigenvalues of Σx all lie in the interval [M1 , M2 ], then the convergence rate of L(U
2
3
su (r+log p)
su (r+log p)
M2
M2
and a convergence rate of kPUb − PU k2F would be M
.
would be M
nλ2
nλ2
1
1
M2 2
Compared with Remark 3.1, there is an extra factor M1 , which is also present for the Lasso
error bounds [8, 32]. Evidence has been given in the literature that such an extra factor can
be intrinsic to all polynomial-time algorithms [46].
10
Although both Theorem 4.1 and Theorem 4.2 assume Gaussian distributions, a scrutiny
of the proofs shows that the same results hold if the Gaussian assumption is weakened to
subgaussian. By Theorem 3.2, the rate in Theorem 4.2 is optimal. By Theorem 4.1 and
Theorem 4.2, the choices of the penalty parameters ρ and ρu in (18) and (19) do not depend
on su or sv . Therefore, the proposed estimation scheme (18) – (20) achieves the optimal rate
adaptively over sparsity levels. A full treatment of adaptation to M is beyond the scope of
the current paper, though it seems possible in view of the recent proposals in [6, 35, 12]. A
careful examination of the proofs shows that the dependence of ρ and ρu on M is through
1/2
1/2
kΣx kop kΣy kop and kΣx kop , respectively. When p and m are bounded from above by a
constant multiple of n, we can upper bound the operator norms by the sample counterparts
to remove the dependence of these penalty parameters on M . We conclude this section with
two more remarks.
Remark 4.4. The group sparsity penalty used in the second stage (19) plays an important
role in achieving the optimal rate su (r + log p)/(nλ2 ). Except for the extra λ−2 term, this
convergence rate is a typical one for group Lasso [28]. If we simply use an ℓ1 penalty, then
we will obtain the rate rsu log p/(nλ2 ), which is clearly sub-optimal.
Remark 4.5. Comparing Theorem 3.1 with Theorem 4.2, the adaptive estimation scheme
achieves the optimal rates of convergence for a smaller collection of parameter spaces of
interest due to the more restrictive sample size condition (21). We examine the necessity of
this condition in more details in Section 5 below.
5
Computational Lower Bounds
In this section, we provide evidence that the sample size condition (21) imposed on the
adaptive estimation scheme in Theorems 4.1 and 4.2 is probably unavoidable for any computationally feasible estimator to be consistent. To be specific, we show that for a sequence of
parameter spaces in (5) – (6), if the condition is violated, then any computationally efficient
consistent estimator of sparse canonical coefficients leads to a computationally efficient and
statistically powerful test for the Planted Clique detection problem in a regime where it is
believed to be computationally intractable.
Planted Clique Let N be a positive integer and k ∈ [N ]. We denote by G(N, 1/2) the
Erdős-Rényi graph on N vertices where each edge is drawn independently with probability 1/2, and by G(N, 1/2, k) the random graph generated by first sampling from G(N, 1/2)
and then selecting k vertices uniformly at random and forming a clique of size k on these
vertices. For an adjacency matrix A ∈ {0, 1}N ×N of an instance from either G(N, 1/2) or
G(N, 1/2, k), the Planted Clique detection problem of parameter (N, k) refers to testing the
following hypotheses
H0G : A ∼ G(N, 1/2)
v.s. H1G : A ∼ G(N, 1/2, k).
(22)
It is widely believed that when k = O(N 1/2−δ ), the problem (22) cannot be solved by any
randomized polynomial-time algorithm. In the rest of the paper, we formalize the conjectured
hardness of Planted Clique problem into the following hypothesis.
11
Hypothesis A. For any sequence k = k(N ) such that lim supN →∞
randomized polynomial-time test ψ,
2
lim inf PH G ψ + PH G (1 − ψ) ≥ .
0
1
N →∞
3
log k
log N
<
1
2
and any
Evidence supporting this hypothesis has been provided in [34, 17]. Computational lower
bounds in several statistical problems have been established by assuming the above hypothesis
and its close variants, including sparse PCA detection [7] and estimation [39] in classes defined
by a restricted covariance concentration condition, submatrix detection [30] and community
detection [21].
Necessity of the sample size condition (21) Under Hypothesis A, the necessity of
condition (21) is supported by the following theorem.
Theorem 5.1. Suppose that Hypothesis A holds and that as n → ∞, p = m satisfying
2n ≤ p ≤ na for some constant a > 1, su = sv , n(log n)5 ≤ cs4u for some sufficiently small
su sv
c > 0, and λ = 7290n(log(12n))
2 . If for some δ ∈ (0, 1),
lim inf
n→∞
(su sv )1−δ log(p + m)
> 0,
nλ2
then for any randomized polynomial-time estimator u
b,
n
1 o 1
> .
lim inf
sup
P L(b
u, u) >
n→∞ P∈P(n,s ,s ,p,m,1,λ;3)
300
4
u v
(23)
(24)
Comparing (21) with (23), we see that subject to a sub-polynomial factor, the condition (21) is necessary to achieve consistent sparse CCA estimation within polynomial time
complexity.
Remark 5.1. The statement in Theorem 5.1 is rigorous only if we assume the computational
complexities of basic arithmetic operations on real numbers and sampling from univariate
continuous distributions with analytic density functions are all Θ(1) [10]. To be rigorous under
the probabilistic Turing machine model [2], we need to introduce appropriate discretization
of the problem and be more careful with the complexity of random number generation.
To convey the key ideas in our computational lower bound construction, we focus on the
continuous case throughout this section and defer the formal discretization arguments to
Section 8 in the supplement [19].
In what follows, we divide the reduction argument leading to Theorem 5.1 into two parts.
In the first part, we show Hypothesis A implies the computational hardness of the sparse
PCA problem under the Gaussian spiked covariance model. In the second part, we show
computational hardness of sparse PCA implies that of sparse CCA as stated in Theorem 5.1.
5.1
Hardness of sparse PCA under Gaussian spiked covariance model
Gaussian single spiked model [24] refers to the distribution Np (0, Σ) where Σ = τ θθ ′ + Ip .
Here, θ is the eigenvector of unit length and τ > 0 is the eigenvalue. Define the following
12
Gaussian single spiked model parameter space for sparse PCA
iid
Q(n, s, p, λ) = {L(W1 , . . . , Wn ) :Wi ∼ Np (0, τ θθ ′ + Ip ),
kθk0 ≤ s, τ ∈ [λ, 3λ]}.
(25)
ep
The minimax estimation rate for θ under the loss kPθb − Pθ k2F is λ+1
nλ2 s log s . See, for instance,
[13]. However, to achieve the above minimax rate via computationally efficient methods such
as those proposed in [9, 29, 13], researchers have required the sample size to satisfy n ≥
2
p
C s λlog
for some sufficiently large constant C > 0. Moreover, no computationally efficient
2
estimator is known to achieve consistency when the sample size condition is violated. As a first
step toward the establishment of Theorem 5.1, we show that Hypothesis A implies hardness
2−δ
p
>0
of sparse PCA under Gaussian spiked covariance model (25) when lim inf n→∞ s nλlog
2
for some δ > 0.
We note that previous computational lower bounds for sparse PCA in [7, 39] cannot be
used here directly because they are only valid for parameter spaces defined via the restricted
covariance concentration (RCC) condition. As pointed out in [39], such parameter spaces
include (but are not limited to) all subgaussian distributions with sparse leading eigenvectors
and the covariance matrices need not be of the spiked form Σ = τ θθ ′ + Ip . Therefore,
the Gaussian single spiked model parameter space defined in (25) only constitutes a small
subset of such RCC parameter spaces. The goal of the present subsection is to establish the
computational lower bound for the Gaussian single spiked model directly.
b 1 , . . . , Wn ) of the leading sparse eigenvector, we
Suppose we have an estimator θb = θ(W
propose the following reduction scheme to transform it into a test for (22). To this end, we
first introduce some additional notation. Consider integers k and N . Define
δN =
k
,
N
ηN =
k
.
45N (log N )2
(26)
For any µ ∈ R, let φµ denote the density function of the N (µ, 1) distribution, and let
φ̄µ =
1
(φµ + φ−µ )
2
(27)
e0
denote the density function of the Gaussian mixture 12 N (µ, 1) +√12 N (−µ,√1). Next, let Φ
be the √
restriction of the N (0, 1) distribution on the interval [−3 log N , 3 log N ]. For any
|µ| ≤ 3 ηN log N , define two probability distributions Fµ,0 and Fµ,1 with densities
−1
[φ̄µ (x) − φ0 (x)] 1{|x|≤3√log N } ,
fµ,0 (x) = M0 φ0 (x) − δN
(28)
−1
(29)
[φ̄µ (x) − φ0 (x)] 1{|x|≤3√log N } ,
fµ,1 (x) = M1 φ0 (x) + δN
R
where the Mi ’s are normalizing constants such that R fµ,i = 1 for i = 0, 1. √It can be
verified that fµ,i are properly defined probability density function when |µ| ≤ 3 ηN log N .
For details, see Lemma 7.4 in the supplement [19].
With the foregoing definition, the proposed reduction scheme can be summarized as
Algorithm 1. Here, the starting point is the adjacency matrix A of the random graph, and
the reduction is well defined for all instances of N ≥ 12n and p ≥ 2n.
13
Algorithm 1: Reduction from Planted Clique to Sparse PCA (in Gaussian Single
Spiked Model)
Input:
1. Graph adjacency matrix A ∈ {0, 1}N ×N ;
2. Estimator θb for the leading eigenvector θ.
Output: A solution to the hypothesis testing problem (22).
e 0 . Set
1 Initialization. Generate i.i.d. random variables ξ1 , . . . , ξ2n ∼ Φ
1/2
µi = ηN ξi ,
2
i = 1, . . . , 2n.
(30)
Gaussianization. Generate two matrices B0 , B1 ∈ R2n×2n where conditioning on the
µi ’s, all the entries are mutually independent satisfying
L((B0 )ij |µi ) = Fµi ,0
and
L((B1 )ij |µi ) = Fµi ,1 .
(31)
Let A0 ∈ {0, 1}2n×2n be the lower–left 2n × 2n submatrix of the matrix A. Generate a
′ ]′ ∈ R2n×p where for each i ∈ [2n], if j ∈ [2n], we set
matrix W = [W1′ , . . . , W2n
Wij = (B0 )ij (1 − (A0 )ij ) + (B1 )ij (A0 )ij .
3
(32)
If 2n < j ≤ p, we let Wij be an independent draw from N (0, 1).
b 1 , . . . , Wn ) be the estimator of the leading
Test Construction. Let θb = θ(W
n
eigenvector by treating {Wi }i=1 as data. It is normalized to be a unit vector. We
reject H0G if
1
θb′ (
n
2n
X
i=n+1
1
Wi Wi′ )θb ≥ 1 + k ηN .
4
(33)
We now explain how the reduction achieves its goal. For simplicity, focus on the case where
p = 2n. Let ǫ = (ǫ1 , ..., ǫ2n ) ∈ {0, 1}2n where ǫi is the indicator of whether the i-th row of A0
(defined in Step 2 of Algorithm 1) belongs to the planted clique or not, and γ = (γ1 , ..., γ2n )
the indicators of the columns of A0 . In what follows, we discuss the distributions of W when
A ∼ H0G and H1G , respectively.
When A ∼ H0G , the ǫi ’s and γj ’s are all zeros. In this case, we can verify that the entries
of W are mutually independent and for each (i, j) the marginal distribution of Wij is close
to the N (0, 1) distribution (c.f., Lemma 7.1 in the supplement [19]). Hence, the rows of W
b 1 , . . . , Wn ) is
are close to i.i.d. random vectors from the Np (0, Ip ) distribution. Since θb = θ(W
2n
2
independent of {Wi }i=n+1 , the LHS of (40) is close in distribution to a χn random variable
scaled by p
n which concentrates around its expected value one. Indeed, it is upper bounded
by 1 + O( log(n)/n) with high probability.
If A ∼ H1G , then the (i, j)-th entry of A0 is an edge in the planted clique if and only if
ǫi = γj = 1. Moreover, the joint distribution of {ǫ1 , . . . , ǫ2n , γ1 , . . . , γ2n } is close to that of
4n i.i.d. Bernoulli random variables {e
ǫ1 , . . . , e
ǫ2n , γ
e1 , . . . , γ
e2n } with success probability δN =
k/N . For simplicity, suppose that these indicators are indeed i.i.d. Bernoulli(δN ) variables
14
{e
ǫ1 , . . . , e
ǫ2n , γ
e1 , . . . , γe2n }. Then, one can show that conditioning on γ
ej = 0, for any i ∈ [2n], the
conditional distribution of (Wij |e
γj = 0), after integrating over the conditional distribution
of e
ǫi , µi and (A0 )ij , is approximately N (0, 1). In contrast, conditioning on γ
ej = 1, for
any i ∈ [2n], the conditional distribution of (Wij |e
γj = 1) is approximately N (0, 1 + ηN ).
Therefore, conditioning on γ
e the distribution of the Wi ’s is close to that of 2n i.i.d. random
vectors sampled from
Np (0, τ θθ ′ + Ip ),
where θ = e
γ /ke
γ k and τ = ηN ke
γ k2 ,
i.e., a Gaussian spiked covariance
P model in (25). Here, the leading eigenvector θ has sparsity
level |supp(θ)| = |supp(e
γ )| = j γ
ej , which concentrates around its mean value nδN ≍ k if
b
N ≍ n. Thus, if θ estimates θ well, then the LHS of (40) approximately
follows a non-central
p
χn distribution scaled by n, which should exceed 1 + O( log(n)/n) with high probability
under the alternative hypothesis. Hence, Algorithm 1 is expected to yield a test with small
error for the Planted Clique problem (22) when θb is a good estimator.
The materialization of the foregoing discussion leads to the following result which demonstrates quantitatively that a decent estimator of the leading sparse eigenvector results in a
good test (by applying the reduction (30) – (33)) for the Planted Clique detection problem
(22).
5
N)
≤ c, cN ≤ n ≤
Theorem 5.2. For some sufficiently small constant c > 0, assume N (log
k4
b
N/12 and p ≥ 2n. Then, for any θ such that
n
1o
≤ β,
(34)
sup
Q kPθb − Pθ k2F >
3
Q∈Q(n,3k/2,p,kηN /2)
the test ψ defined by (30) – (33) satisfies
PH G ψ + PH G (1 − ψ) < β +
0
1
4n
′
+ C(n−1 + N −1 + e−C k ),
N
for sufficiently large n with some constants C, C ′ > 0.
If the estimator θb is uniformly consistent over Q(n, 3k/2, p, kηN /2), then β is close to zero.
Hence the conclusion of Theorem 5.2 implies that for appropriate growing sequences of n, N
and k, the testing error for (22) can be made smaller than any fixed nonzero probability.
Further invoking Hypothesis A, we obtain the following computational lower bounds for
sparse PCA.
Theorem 5.3. Suppose that Hypothesis A holds and that as n → ∞, 2n ≤ p ≤ na for some
s2
constant a > 1, n(log n)5 ≤ cs4 for some sufficiently small c > 0, and λ = 2430n(log(12n))
2 . If
for some δ ∈ (0, 2),
s2−δ log p
lim inf
> 0,
(35)
n→∞
nλ2
b
then for any randomized polynomial-time estimator θ,
lim inf
n→∞
n
1o 1
> .
Q kPθb − Pθ k2F >
3
4
Q∈Q(n,s,p,λ)
sup
15
(36)
In addition to estimation, we can also consider the following sparse PCA detection problem: Let Q denote the joint distribution of W1 , . . . , Wn , and we want to test
H0 : Q ∈ Q(n, s, p, 0),
v.s. H1 : Q ∈ Q(n, s, p, λ).
(37)
iid
Note that Q(n, s, p, 0) contains only one distribution Q0 where Wi ∼ Np (0, Ip ). Given any
testing procedure φ = φ(W1 , . . . , Wn ), we can obtain a solution to (22) by replacing the third
step in Algorithm 1 with the direct testing result of φ(W1 , . . . , Wn ). Following the lines of
the proof of Theorem 5.3, we have the following theorem.
Theorem 5.4. Under the same condition of Theorem 5.3, for any randomized polynomialtime test φ for testing (37),
1
(38)
sup
Q(1 − φ) ≥ .
lim inf Q0 φ +
n→∞
4
Q∈Q(n,s,p,λ)
Remark 5.2. Theorems 5.3–5.4 are the first computational lower bounds for sparse PCA
that are valid in the setting of Gaussian single spiked covariance models (25).
5.2
Hardness of sparse CCA
In the second step, we show that computational hardness of sparse PCA under Gaussian
spiked covariance model implies the desired result in Theorem 5.1. To this end, we propose
the following reduction.
Algorithm 2: Reduction from Sparse PCA to Sparse CCA
Input:
1. Observations W1 , . . . , Wn ∈ Rp ;
2. Estimator u
b of the first leading canonical correlation coefficient u.
Output: An estimator θb of the leading eigenvector of L(W1 ).
1 Generate i.i.d. random vectors Z1 , . . . , Zn ∼ Np (0, Ip ). Set
1
Xi = √ (Wi + Zi ),
2
2
1
Yi = √ (Wi − Zi ),
2
i = 1, . . . , n.
Compute u
b=u
b(X1 , Y1 , . . . , Xn , Yn ). Set
b 1 , . . . , Wn ) = u
θb = θ(W
b/kb
uk.
(39)
(40)
iid
To see why Algorithm 2 is effective, one can verify that if Wi ∼ Np (0, τ θθ ′ + Ip ), then
iid
(Xi′ , Yi′ )′ ∼ Np+m (0, Σ) where
Σx = Σy =
with u = v = √
θ
,
τ /2+1
λ=
τ /2
τ /2+1 .
τ ′
θθ + Ip , Σxy = Σx (λuv ′ )Σy
2
(41)
This is a special case of the Gaussian canonical pair model
(2). Thus, the leading eigenvector of Wi aligns with the leading canonical coefficient vectors
of (Xi , Yi ). Exploiting this connection, we obtain the following theorem.
16
Theorem 5.5. Consider p = m, su = sv and λ ≤ 1. Then for any u
b such that
n
1 o
≤ β,
P L(b
u, u) >
300
P∈P(n,su ,sv ,p,m,1,λ/3;3)
sup
(42)
the estimator θb defined by Algorithm 2 satisfies
n
1o
≤ β.
Q kPθb − Pθ k2F >
3
Q∈Q(n,s,p,λ)
sup
If we start with an estimator u
b of the leading canonical coefficient vector, then we can
construct the reduction from Planted Clique to sparse CCA directly by essentially following
the steps in Algorithm 1 while using Algorithm 2 to construct θb from u
b in the third step.
Finally, the desired Theorem 5.1 is a direct consequence of Theorems 5.3 and 5.5.
6
Proofs
This section presents proofs of Theorems 4.1 and 4.2. The proofs of the other theoretical
results are given in the supplement [19].
6.1
Proof of Theorem 4.1
Before presenting the proof, we state some technical lemmas. The proofs of all the lemmas
are given in Section 9.3 in the supplement [19]. First, note that the estimator is normalized
b (0)
b (0)
with respect to Σ
x and Σy , while the truth U and V is normalized with respect to Σx and
b (0)
b (0)
Σy . To address this issue, we normalize the truth with respect to Σ
x and Σy to obtain
−1/2 and V
−1/2 . Also define Λ
1/2 Λ(V ′ Σ
1/2 .
e = U (U ′ Σ
b (0)
e = V (V ′ Σ
b (0)
e = (U ′ Σ
b (0)
b (0)
U
x U)
y V)
x U)
y V)
For notational convenience, define
r
r
ep
em
1
1
su + log
sv + log
, ǫn,v =
.
(43)
ǫn,u =
n
su
n
sv
The following lemma bounds the normalization effect.
Lemma 6.1. Assume ǫ2n,u + ǫ2n,v ≤ c for some sufficiently small constant c ∈ (0, 1). Then
there exist some constants C, C ′ > 0 only depending on c such that
1/2 e
e
e
kΣ1/2
x (U − U )kop ≤ Cǫn,u , kΣy (V − V )kop ≤ Cǫn,v , kΛ − Λkop ≤ C(ǫn,u + ǫn,v ),
with probability at least 1 − exp (−C ′ (su + log(ep/su ))) − exp (−C ′ (sv + log(em/sv ))).
e and Ve , let us state the following lemma, which asserts that the
Using the definitions of U
′
e
e
e
matrix A = U V is feasible to the optimization problem (18).
e=U
e Ve ′ . When A
e exists, we have
Lemma 6.2. Define A
1/2 e b (0) 1/2
b (0)
k(Σ
A(Σy ) k∗ = r
x )
and
17
1/2 e b (0) 1/2
b (0)
k(Σ
A(Σy ) kop = 1.
x )
As was argued in Section 4.1, the set Cr is the convex hull of Or . The following curvature
lemma shows that the relaxation Cr preserves the restricted strong convexity of the objective
function.
Lemma 6.3. Let F ∈ O(p, r), G ∈ O(m, r), K ∈ Rr×r and D = diag(d1 , ..., dr ) with
d1 ≥ ... ≥ dr > 0. If E satisfies kEkop ≤ 1 and kEk∗ ≤ r, then
hF KG′ , F G′ − Ei ≥
Define
dr
kF G′ − Ek2F − kK − DkF kF G′ − EkF .
2
′ b (0)
e xy = Σ
b (0)
Σ
x U ΛV Σy .
(44)
(45)
Lemma 6.4 is instrumental in determining the proper value of the tuning parameter required
in the program (18).
p
Lemma 6.4. Assume r [log(p + m)]/n ≤ c for some sufficiently small constant c ∈ (0, 1).
b (0)
Then there p
exist some constants C, C ′ > 0 only depending on M and c such that ||Σ
xy −
′
−C
e
Σxy ||∞ ≤ C [log(p + m)]/n, with probability at least 1 − (p + m)
.
We also need a lemma on restricted eigenvalue. For any p.s.d. matrix B, define
φB
max (k) =
u′ Bu
,
||u||0 ≤k,u6=0 u′ u
φB
min (k) =
max
u′ Bu
.
||u||0 ≤k,u6=0 u′ u
min
The following lemma is adapted from Lemma 12 in [18], and its proof is omitted.
Lemma 6.5. Assume n1 (ku ∧ p) log(ep/(ku ∧ p)) + (kv ∧ m) log(em/(kv ∧ m)) ≤ c for some
sufficiently small constant c > 0. Then
C ′ > 0 only depending
q there exist some constants C,q
on M and c such that for δu (ku ) =
we have
(ku ∧p) log(ep/(ku ∧p))
n
b (j)
b (j)
b (j)
Σ
b (j)
Σ
and δv (kv ) =
(kv ∧m) log(em/(kv ∧m))
,
n
Σx
x
M −1 − Cδu (ku ) ≤ φΣ
min (ku ) ≤ φmax (ku ) ≤ M + Cδu (ku ),
y
y
(kv ) ≤ φmax
(kv ) ≤ M + Cδv (kv ),
M −1 − Cδv (kv ) ≤ φmin
withprobability at least 1−exp −C ′ (ku ∧p) log(ep/(ku ∧p)) −exp −C ′ (kv ∧m) log(em/(kv ∧
m)) , for j = 0, 1, 2.
Finally, we need a result on subspace distance. Recall that for a matrix F , PF denotes
the projection matrix onto its column subspace.
Lemma 6.6. For any matrix F ∈ O(d, r) and any matrix G ∈ Rd×r , we have
inf kF − GW k2F =
W
1
kPF − PG k2F .
2
If further G ∈ O(d, r), then inf W ∈O(r,r) kF − GW k2F = 21 kPF − PG k2F .
Proofs of Lemma 6.1-6.6 are given in Section 9.3.1 of the supplement [19].
18
b (0)
b (0)
b (0)
b b
Proof of Theorem 4.1. In the rest of this proof, we denote Σ
x , Σy and Σxy by Σx , Σy and
b xy for notational convenience. We also let ∆ = A
b − A.
e The proof consists of two steps.
Σ
b x1/2 ∆Σ
b y1/2 kF . In the second
In the first step, we are going to derive an upper bound for kΣ
b 1/2
b 1/2
step, we derive a generalized cone condition and use it to lower bound kΣ
x ∆Σy kF by a
b 1/2
b 1/2
constant multiple of k∆kF and hence the upper bound on kΣ
x ∆Σy kF leads to an upper
bound on k∆kF .
e and Ve are well-defined with high probability. Thus, A
e is
Step 1.
By Lemma 6.1, U
well-defined with high probability, and we have
′
1/2
e
kΣ1/2
x (A − U V )Σy kop ≤ C(ǫn,u + ǫn,v ).
(46)
with probability at least 1 − exp (−C ′ (su + log(ep/su ))) − exp (−C ′ (sv + log(em/sv ))). Ace is feasible. Then, by the definition of A,
b we have
cording to Lemma 6.2, A
b xy , Ai
b − ρ||A||
b 1 ≥ hΣ
b xy , Ai
e − ρ||A||
e 1.
hΣ
After rearrangement, we have
b xy − Σ
e xy , ∆i,
e xy , ∆i ≤ ρ ||A||
e 1 − ||A
e + ∆||1 + hΣ
− hΣ
(47)
e xy is defined in (45). For the first term on the right hand side of (47), we have
where Σ
e 1 − ||A
e + ∆||1 = ||A
eSu Sv ||1 − ||A
eSu Sv + ∆Su Sv ||1 − ||∆(S S )c ||1
||A||
u v
≤ ||∆Su Sv ||1 − ||∆(Su Sv )c ||1 .
b xy − Σ
e xy , ∆i ≤ ||Σ
b xy −
For the second term on the right hand side of (47), we have hΣ
e xy ||∞ ||∆||1 . Thus when
Σ
b xy − Σ
e xy ||∞ ,
ρ ≥ 2||Σ
(48)
we have
e xy , ∆i ≤ 3ρ ||∆Su Sv ||1 − ρ ||∆(S S )c ||1 .
− hΣ
u v
2
2
Using Lemma 6.3, we can lower bound the left hand side of (49) as
e xy , ∆i = hΣ
b x1/2 U ΛV ′ Σ
b 1/2
b 1/2 e b b 1/2
− hΣ
y , Σx (A − A)Σy i
b x1/2 U
eΛ
e Ve ′ Σ
b 1/2
b 1/2 e b b 1/2
= hΣ
y , Σx (A − A)Σy i
≥
1
b 1/2
e b b 1/2 2
b 1/2 e b b 1/2
λr kΣ
x (A − A)Σy kF − δkΣx (A − A)Σy kF ,
2
e − ΛkF . Combining (49) and (50), we have
where δ = kΛ
(49)
(50)
b 1/2 ∆Σ
b 1/2 k2 ≤ 3ρ||∆Su Sv ||1 − ρ||∆(S S )c ||1 + 2δkΣ
b 1/2 ∆Σ
b 1/2 kF
λr kΣ
x
y
F
x
y
u v
1/2 b 1/2
b
≤ 3ρ||∆Su Sv ||1 + 2δkΣx ∆Σy kF .
(51)
2
2
b 1/2
b 1/2 2
kΣ
x ∆Σy kF ≤ 6ρ||∆Su Sv ||1 /λr + 4δ /λr .
(53)
(52)
Solving the quadratic equation (52) by Lemma 2 of [13], we have
19
Combining (51) and (53), we have
b 1/2
b 1/2 2
0 ≤ 3ρ||∆Su Sv ||1 − ρ||∆(Su Sv )c ||1 + δ2 /λr + λr kΣ
x ∆Σy kF
≤ 9ρ||∆Su Sv ||1 − ρ||∆(Su Sv )c ||1 + 5δ2 /λr ,
(54)
which gives rise to the generalized cone condition that we are going to use in Step 2. Finally,
√
by the bound ||∆Su Sv ||1 ≤ su sv ρk∆Su Sv kF and (53), we have
√
2
2
b 1/2
b 1/2 2
kΣ
x ∆Σy kF ≤ 6 su sv ρk∆Su Sv kF /λr + 4δ /λr ,
(55)
which completes the first step.
Step 2. By (54), we have obtained the following condition
||∆(Su Sv )c ||1 ≤ 9||∆Su Sv ||1 + 5δ2 /(ρλr ).
(56)
Due to the existence of the extra term 5δ2 /(ρλr ) on the RHS, we call it a generalized cone
b x1/2 ∆Σ
b 1/2
condition. In this step, we are going to lower bound kΣ
y kF by k∆kF on the generalized
cone. Motivated by the argument in [8], let the index set J1 = {(ik , jk )}tk=1 in (Su × Sv )c
correspond to the entries with the largest absolute values in ∆, and we define the set Je =
(Su × Sv ) ∪ J1 . Now we partition Jec into disjoint subsets J2 , ..., JK of size t (with |JK | ≤ t),
such that Jk is the set of
(double) indices corresponding to the entries of t largest absolute
Sk−1
values in ∆ outside Je ∪ j=2
Jj . By triangle inequality,
b 1/2
b 1/2
b 1/2
b 1/2
kΣ
x ∆Σy kF ≥ kΣx ∆JeΣy kF −
≥
q
bx
φΣ
min (su
+
by
Σ
(sv
t)φmin
K
X
k=2
+ t)k∆JekF −
b 1/2
b 1/2
kΣ
x ∆Jk Σy kF
q
K
X
by
bx
Σ
φΣ
(t)φ
(t)
k∆Jk kF .
max
max
k=2
By the construction of Jk , we have
K
X
k=2
k∆Jk kF ≤
K
K
X
√ X
||∆Jk−1 ||1 ≤ t−1/2 ||∆(Su Sv )c ||1
||∆Jk ||∞ ≤ t−1/2
t
k=2
k=2
r
su sv
5δ2
5δ2
−1/2
√ ,
≤t
9||∆Su Sv ||1 +
k∆JekF +
≤9
ρλr
t
ρλr t
(57)
where we have used the generalized cone condition (56). Hence, we have the lower bound
√
2
b 1/2
b 1/2
kΣ
x ∆Σy kF ≥ κ1 k∆JekF − κ2 δ /(ρλr t),
with
κ1 =
q
bx
φΣ
min (su
+
by
Σ
(sv
t)φmin
q
by
bx
Σ
κ2 = 5 φΣ
max (t)φmax (t).
r
q
by
su sv
bx
Σ
+ t) − 9
φΣ
max (t)φmax (t),
t
20
(58)
Taking t = c1 su sv for some sufficiently large constant c1 > 1, with high probability, κ1 can
be lower bounded by a positive constant κ0 only depending
p that by
p on M . To see this, note
−1
Lemma 6.5, (58) can be lower bounded by the difference of M − Cδu (2c1 su sv ) M −1 − Cδv (2c1 su sv )
p
−1/2 p
and 9c1
M + Cδu (c1 su sv ) M + Cδv (c1 su sv ), where δu and δv are defined as in Lemma
6.5. It is sufficient to show that δu (2c1 su sv ), δv (2c1 su sv ), δu (c1 su sv ) and δv (c1 su sv ) are sufficiently small to get a positive absolute constant κ0 . For the first term, when 2c1 su sv ≤ p,
it is bounded by 2c1 su svnlog(ep) and is sufficiently small under the assumption (13). When
2c1 su sv > p, it is bounded by 2c1 nsu sv and is also sufficiently small under (13). The same argument also holds for the other terms. Similarly, κ2 can be upper bounded by some constant.
Together with (55), this brings the inequality
√
√
k∆Jek2F ≤ C1 ( su sv ρ/λr )k∆JekF + C2 δ2 /λ2r + (δ2 /(ρλr t))2 .
Solving this quadratic equation, we have
k∆Jek2F ≤ C
s s ρ2
δ2 δ2 2
u v
√
.
+
+
λ2r
λ2r
ρλr t
(59)
By (57), we have
r
5δ2
su sv
√ .
k∆JekF +
k∆Jk kF ≤ 9
k∆Jec kF ≤
t
ρλ
t
r
k=2
K
X
(60)
Summing (59)
p and (60), we obtain a bound for k∆kF . According to Lemma 6.4, we may
choose ρ = γ [log(p
p + m)]/n for some large γ, so that
√ (48) holds with high probability. By
′
Lemma 6.1, δ ≤ C r(su + sv + log(p + m))/n ≤ C ρ t with high probability. Hence,
√
k∆kF ≤ C su sv ρ/λr ,
(61)
with high probability. This completes the second step. Finally, the triangle inequality leads
b − U V ′ kF ≤ k∆kF + kA
e − U V ′ kF . By (46) and (61), the proof is complete.
to kA
6.2
Proof of Theorem 4.2
b (1) − U ∗ .
Define U ∗ = U ΛV ′ Σy Vb (0) and ∆ = U
r+log p
≤ c for some sufficiently small constant c ∈
n
′
C, C > 0 only depending on M and c such that max
Lemma 6.7. Assume
(0, 1). Then there
b (1) b (0) −
exist some constants
1≤j≤p ||[Σxy V
p
∗
′
b (1)
Σ
x U ]j· || ≤ C (r + log p)/n, with probability at least 1 − exp − C (r + log p) .
The proof of Lemma 6.7 is given in Section 9.3.1 of the supplement [19].
b (1)
b (1)
b (1)
b b
Proof of Theorem 4.2. In the rest of this proof, we denote Σ
x , Σy and Σxy by Σx , Σy
b xy for simplicity of notation. Note that they depends on D1 , while the estimator Vb (0)
and Σ
depends on D0 . Hence, Vb (0) is independent of the sample covariance matrices occurring
in this proof. The proof consists of three steps. In the first step, we derive a bound for
b x ∆). In the second step, we derive a cone condition and use it to obtain a bound for
Tr(∆′ Σ
21
b x ∆) upper bounds k∆kF . In the last step, we derive the desired
k∆kF by arguing that Tr(∆′ Σ
b , U ).
bound for L(U
b (1) , we have Tr((U
b (1) )′ Σ
b xU
b (1) )−2 Tr((U
b (1) )′ Σ
b xy Vb (0) )+ρu Pp ||U
b (1) || ≤
Step 1. By definition of U
j=1
j·
b x U ∗ ) − 2 Tr((U ∗ )′ Σ
b xy Vb (0) ) + ρu Pp ||U ∗ ||. After rearrangement, we have
Tr((U ∗ )′ Σ
j·
j=1
′b
Tr(∆ Σx ∆) ≤ ρu
p h
X
j=1
i
h
i
b xy Vb (0) − Σ
b xU ∗ ) .
||Uj·∗ || − ||Uj·∗ + ∆j· || + 2 Tr ∆′ (Σ
(62)
For the first term on the right hand side of (62), we have
p
X
j=1
X
X
X
||Uj·∗ + ∆j· || −
||∆j· ||
||Uj·∗ || −
||Uj·∗ || − ||Uj·∗ + ∆j· || =
≤
X
j∈Su
j∈Suc
j∈Su
j∈Su
||∆j· || −
X
j∈Suc
||∆j· ||.
For the second term on the right hand side of (62), we have
p
X
′ b
(0)
∗
b xy Vb (0) − Σ
b x U ∗ ]j· ||,
b
b
||∆j· || max ||[Σ
Tr ∆ (Σxy V − Σx U ) ≤
1≤j≤p
j=1
where [·]j· means the j-th row of the corresponding matrix. When
b xy Vb (0) − Σ
b x U ∗ ]j· ||,
ρu ≥ 4 max ||[Σ
1≤j≤p
we have
Since
P
j∈Su ||∆j· || ≤
b x ∆) ≤
Tr(∆′ Σ
ρu X
3ρu X
||∆j· || −
||∆j· ||.
2
2
c
j∈Su
(63)
(64)
j∈Su
√ qP
2
su
j∈Su ||∆j· || , (64) can be upper bounded by
√
s
3 s u ρu X
Tr(∆ Σx ∆) ≤
||∆j· ||2 .
2
′b
(65)
j∈Su
This completes the first step.
Step 2. The inequality (64) implies the cone condition
X
X
||∆j· ||.
||∆j· || ≤ 3
j∈Suc
(66)
j∈Su
Let the index set J1 = {j1 , ..., jt } in Suc correspond to the rows with the largest ℓ2 norm in ∆,
and we define the extended support Seu = Su ∪ J1 . Now we partition Seuc into disjoint subsets
J2 , ..., JK of size t (with |JK | ≤ t), such that Jk is the set of indices corresponding to the t
22
Sk−1
b x ∆) = kn−1/2 X∆k2 ,
rows with largest ℓ2 norm in ∆ outside Seu ∪ j=2
Jj . Note that Tr(∆′ Σ
F
where X = [X1 , ..., Xn ]′ ∈ Rn×p denotes the data matrix. By triangle inequality, we have
X
kn−1/2 X∆kF ≥ kn−1/2 X∆Seu ∗ kF −
kn−1/2 X∆Jk ∗ kF
≥
q
k≥2
b
x
φΣ
min (su + t)k∆Seu ∗ kF −
q
b
x
φΣ
max (t)
X
k≥2
k∆Jk ∗ kF ,
where for a subset B ⊂ [p], ∆B∗ = (∆ij 1{i∈B,j∈[r]} ), and
X
k≥2
√ X
√ X1 X
t
max ||∆j· || ≤ t
||∆j· ||
j∈Jk
t
k≥2
k≥2 j∈Jk−1
X
X
||∆j· ||
≤ t−1/2
||∆j· || ≤ 3t−1/2
k∆Jk ∗ kF ≤
j∈Suc
(67)
j∈Su
r s
r
su X
su
2
≤ 3
||∆j· || ≤ 3
k∆Seu ∗ kF .
t
t
(68)
j∈Su
In the above derivation, we have used the construction
of Jk and the
qcone condition (66).
q
p
bx
b
Σ
x
Hence, kn−1/2 X∆kF ≥ κk∆Seu ∗ kF with κ = φmin
(su + t) − 3 stu φΣ
max (t). In view of
Lemma 6.5, taking t = c1 su for some sufficiently large constant c1 , with high probability, κ
can be lower bounded by a positive constant κ0 only depending on M . Combining with (65),
we have
√
k∆Seu ∗ kF ≤ C su ρu /(2κ20 ).
(69)
By (67)-(68), we have
k∆(Seu )c ∗ kF ≤
X
p
−1/2
k∆Jk ∗ kF ≤ 3 su /tk∆Seu ∗ kF ≤ 3c1 k∆Seu ∗ kF .
(70)
k≥2
√
and
(70),
we
have
k∆k
≤
C
su ρ. By Lemma 6.7, we may choose ρu ≥
Summing
(69)
F
q
p
for some large γu so that (63) holds with high probability. Hence, k∆kF ≤
γu r+log
p n
C su (r + log p)/n with high probability. This completes the second step.
Step 3. Using the same argument in Step 2 of the proof of Theorem 3.1 (see supplementary
b , U ). The proof is complete.
material [19]), we obtain the desired bound for L(U
References
[1] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, 1958.
[2] S. Arora and B. Barak. Computational complexity: a modern approach. Cambridge
University Press, 2009.
[3] B. B. Avants, P. A. Cook, L. Ungar, J. C. Gee, and M. Grossman. Dementia induces
correlated reductions in white matter integrity and cortical thickness: a multivariate
neuroimaging study with sparse canonical correlation analysis. NeuroImage, 50(3):1004–
1016, 2010.
23
[4] Z. Bao, J. Hu, G. Pan, and W. Zhou. Canonical correlation coefficients of highdimensional normal vectors: finite rank case. arXiv preprint arXiv:1407.7194, 2014.
[5] S. R. Becker, E. J. Candès, and M. C. Grant. Templates for convex cone problems with
applications to sparse signal recovery. Mathematical Programming Computation, 3(3):
165–218, 2011.
[6] A. Belloni, V. Chernozhukov, and L. Wang. Square-root lasso: pivotal recovery of sparse
signals via conic programming. Biometrika, 98(4):791–806, 2011.
[7] Q. Berthet and P. Rigollet. Complexity theoretic lower bounds for sparse principal
component detection. In Conference on Learning Theory, pages 1046–1066, 2013.
[8] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig
selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[9] A. Birnbaum, I.M. Johnstone, B. Nadler, and D. Paul. Minimax bounds for sparse PCA
with noisy high-dimensional data. The Annals of Statistics, 41(3):1055–1084, 2013.
[10] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer,
2012.
[11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and
statistical learning via the alternating direction method of multipliers. Foundations and
Trends in Machine Learning, 3(1):1–122, 2011.
[12] F. Bunea, J. Lederer, and Y. She. The group square-root lasso: Theoretical properties
and fast algorithms. Information Theory, IEEE Transactions on, 60(2):1313–1325, 2014.
[13] T. T. Cai, Z. Ma, and Y. Wu. Sparse PCA: Optimal rates and adaptive estimation. The
Annals of Statistics, 41(6):3074–3110, 2013.
[14] M. Chen, C. Gao, Z. Ren, and H. H. Zhou. Sparse CCA via precision adjusted iterative
thresholding. arXiv preprint arXiv:1311.6186, 2013.
[15] P. Diaconis and D. Freedman. Finite exchangeable sequences. The Annals of Probability,
pages 745–764, 1980.
[16] J. Douglas and H. Rachford. On the numerical solution of heat conduction problems in
two and three space variables. Transactions of the AMS, pages 421–439, 1956.
[17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms
and a lower bound for detecting planted cliques. In Proceedings of the forty-fifth annual
ACM symposium on Theory of computing, pages 655–664. ACM, 2013.
[18] C. Gao, Z. Ma, Z. Ren, and H. H. Zhou. Minimax estimation in sparse canonical
correlation analysis. The Annals of Statistics, 43(5):2168–2197, 2015.
[19] C. Gao, Z. Ma, and H. H. Zhou. Supplement to “Sparse CCA: Adaptive estimation and
computational barriers”. 2015.
24
[20] G. H. Golub and C. F. Van Loan. Matrix Computations (3rd ed.). The Johns Hopkins
University Press, 1996.
[21] B. Hajek, Y. Wu, and J. Xu. Computational lower bounds for community detection on
random graphs. arXiv preprint arXiv:1406.6625, 2014.
[22] D. R. Hardoon and J. Shawe-Taylor. Sparse canonical correlation analysis. Machine
Learning, 83(3):331–353, 2011.
[23] H. Hotelling. Relations between two sets of variates. Biometrika, 28:321–377, 1936.
[24] I. M. Johnstone and A. Y. Lu. On consistency and sparsity for principal components
analysis in high dimensions. Journal of the American Statistical Association, 104(486):
682–693, 2009.
[25] I.M. Johnstone. Multivariate analysis and Jacobi ensembles: Largest eigenvalue, TracyWidom limits and rates of convergence. Ann. Stat., 36:2638–2716, 2008.
[26] L. Le Cam. Asymptotic Methods in Statistical Theory. Springer-Verlag, 1986.
[27] K.-A. Lê Cao, P. G. P. Martin, C. Robert-Granié, and P. Besse. Sparse canonical
methods for biological data integration: application to a cross-platform study. BMC
Bioinformatics, 10(1):34, 2009.
[28] K. Lounici, M. Pontil, S. van de Geer, and A. B. Tsybakov. Oracle inequalities and
optimal inference under group sparsity. The Annals of Statistics, 39(4):2164–2204, 2011.
[29] Z. Ma. Sparse principal component analysis and iterative thresholding. The Annals of
Statistics, 41(2):772–801, 2013.
[30] Z. Ma and Y. Wu. Computational barriers in minimax submatrix detection. arXiv
preprint arXiv:1309.5914, 2013.
[31] K. Mardia, J. Kent, and J. Bibby. Multivariate Analysis. Academic Press, 1979.
[32] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified
framework for high-dimensional analysis of m-estimators with decomposable regularizers.
Statistical Science, 27(4):538–557, 2012.
[33] E. Parkhomenko, D. Tritchler, and J. Beyene. Sparse canonical correlation analysis
with application to genomic data integration. Statistical Applications in Genetics and
Molecular Biology, 8(1):1–34, 2009.
[34] B. Rossman. Average-case complexity of detecting cliques. PhD thesis, Massachusetts
Institute of Technology, 2010.
[35] T. Sun and C.-H. Zhang. Scaled sparse linear regression. Biometrika, 99(4):879–898,
2012.
[36] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv
preprint arXiv:1011.3027, 2010.
25
[37] V. Q. Vu, J. Cho, J. Lei, and K. Rohe. Fantope projection and selection: A near-optimal
convex relaxation of sparse pca. In Advances in Neural Information Processing Systems,
pages 2670–2678, 2013.
[38] S. Waaijenborg and A. H. Zwinderman. Sparse canonical correlation analysis for identifying, connecting and completing gene-expression networks. BMC Bioinformatics, 10
(1):315, 2009.
[39] T. Wang, Q. Berthet, and R. J. Samworth. Statistical and computational trade-offs in
estimation of sparse principal components. arXiv preprint arXiv:1408.5369, 2014.
[40] G. A. Watson. On matrix approximation problems with Ky Fan k norms. Numerical
Algorithms, 5(5):263–272, 1993.
[41] P.-Å. Wedin. Perturbation bounds in connection with singular value decomposition. BIT
Numerical Mathematics, 12(1):99–111, 1972.
[42] A. Wiesel, M. Kliger, and A. O. Hero III. A greedy approach to sparse canonical
correlation analysis. arXiv preprint arXiv:0801.2748, 2008.
[43] D. M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with
applications to sparse principal components and canonical correlation analysis. Biostatistics, 10(3):515–534, 2009.
[44] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435.
Springer, 1997.
[45] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables.
Journal of the Royal Statistical Society: Series B, 68(1):49–67, 2006.
[46] Y. Zhang, M. J. Wainwright, and M. I. Jordan. Lower bounds on the performance of
polynomial-time algorithms for sparse linear regression. arXiv preprint arXiv:1402.1918,
2014.
26
Supplement to “Sparse CCA: Adaptive Estimation and Computational
Barriers”
7
Proofs of Results in Section 5
In this section, we present the proofs of Theorems 5.1–5.5. Here we do not consider the
issue of discretization. The main purpose is to help the readers get the intuition behind the
problem without worrying about rigor at the theoretical computer science level. A rigorous
treatment of the computational lower bounds is deferred to Section 8 where the asymptotic
equivalent discretization and the statement of rigorous results for the discretized models will
be presented.
7.1
Proof of Theorem 5.2
The reason for introducing the two distributions (28)–(29) is to match specific mixtures of
them to φ0 and φ̄µ respectively as summarized in the following lemma. For two probability
distributions P and Q, the total variation distance is defined as TV(P, Q) = supB |P(B) −
Q(B)|. We also write TV(p, q) if p and q are the densities of P and Q, respectively.
Lemma 7.1. There exists
√ an absolute constant C > 0, such that for all integers N ≥ 12,
k ≤ N/12 and all |µ| ≤ 3 ηN log N ,
TV(hµ,0 , φ0 ) ≤ CN −3
and
TV(hµ,1 , φ̄µ ) ≤ CN −3 ,
where hµ,0 = 12 (fµ,0 + fµ,1 ) and hµ,1 = δN fµ,1 + (1 − δN ) 21 (fµ,0 + fµ,1 ).
The proof of the above lemma will be given in Section 7.4. To facilitate the proof of Theorem 5.2, we need to state and prove another two lemmas which characterize the distributions
of the Wi ’s under H0G and H1G respectively. Let L({Wi }2n
i=1 ) denote the joint distribution of
2n
′
{Wi }i=1 . In addition, denote Np (0, τ θθ + Ip ) by Qθ,τ . When τ = 0, Np (0, Ip ) is denoted by
Q0 . The first lemma shows that under H0G , the joint distribution of {Wi }2n
i=1 is close in total
variation to that of a random sample of size 2n from Q0 .
Lemma 7.2. Suppose A ∼ G(N, 1/2). There exists an absolute constant C > 0 such that
2n
−1
TV(L({Wi }2n
.
i=1 ), Q0 ) ≤ CN
Proof. Recall ηN defined in (26) and hµ,0 in Lemma√7.1. Let ν be
√ N (0, ηN ), and ν̄ be the
distribution obtained by restricting ν on the set [−3 ηN log N , 3 ηN log N ]. Then the µi ’s
in (30) are i.i.d. r.v.’s following the distribution ν̄.
For each i ∈ [2n] and each j ∈ [2n], define i.i.d. random variables W ij ∼ N (0, 1). For each
i ∈ [2n] and 2n < j ≤ p, define W ij = Wij . Let W i = (W i1 , ..., W ip )′ . It is straightforward
2n
to verify that L({W i }2n
i=1 ) = Q0 . By the data-processing inequality, we have
2n
n
n
TV(L({Wi }i=1
), Q2n
0 ) ≤ TV(L({Wi }i=1 ), L({W i }i=1 )).
27
Hence, it is sufficient to bound TV(L({Wi }ni=1 ), L({W i }ni=1 )). Conditioning on µi , Wij follows
hµi ,0 when A ∼ G(N, k). Therefore,
Z
TV(Wij , W ij ) = TV( hµi ,0 dν̄(µi ), φ0 )
≤
sup
√
|µi |≤3 ηN log N
TV(hµi ,0 , φ0 ) ≤ CN −3 .
Here the
last inequality is due to Lemma 7.1. Applying Lemma 7 of [30], we obtain TV(L({Wi }ni=1 ), L({W i }ni=1 ))
P
n Pn
−1 . This completes the proof.
i=1
j=1 TV(Wij , W ij ) ≤ CN
The next lemma shows that the joint distribution {Wi }2n
i=1 is close in total variation to a
mixture of the joint distribution of a random sample of size 2n from Qθ,τ . Here, the mixture
is defined by a prior distribution π on the (θ, τ ) pair, which is supported on a region where θ
is sparse and τ is bounded away from zero. For notational convenience, for any distribution
R
Pβ indexed by parameter β ∈ B and any probability
R measure π on B, we let Pβ dν(β)
denote the probability measure P defined by P(E) = Pβ (E)dν(β) for any event E. When
β ∼ ν Ris a random variable and Pβ = L(W |β) is the conditional distribution of W |β, we also
write L(W |β)dν(β) to represent the marginal distribution of W after integrating out β.
Lemma 7.3. Suppose A ∼ G(N, 1/2, k). There exists a distribution π supported on the set
(θ, τ ) : θ ∈ S p−1 , |supp(θ)| ≤ 3k/2, τ ∈ [kηN /2, 3kηN /2] ,
(71)
such that for some absolute constants C1 , C2 > 0,
Z
4n
1
−C2 k
2n
2n
.
+
+
TV(L({Wi }i=1 ), Qθ,τ dπ(θ, τ )) ≤ C1 e
N
N
Proof. Recall ηN defined in (26) and hµ,0 and hµ,1 defined in Lemma 7.1. As in the proof of
Lemma
√ 7.2, let ν√ be N (0, ηN ), and ν̄ the distribution obtained by restricting ν on the set
[−3 ηN logRN , 3 ηN log N ]. Then the µi ’s in (30) are i.i.d. r.v.’s followingRν̄. Simple calculus
shows that φ0 (x)dν(µ) = φ0 (x) is the density function of N (0, 1), and φ̄µ (x)dν(µ) gives
the density function of N (0, 1 + ηN ).
We first focus on the case p = 2n. The case of p ≥ 2n will be treated at the end of the
proof. Recall that (ǫ1 , ..., ǫ2n ) are the indicators of the rows of A0 whether the corresponding
vertices belong to the planted clique, and (γ1 , ..., γp ) are the corresponding indicators of the
columns of A0 . Let (e
ǫ1 , ..., e
ǫ2n ) and (e
γ1 , ..., γep ) be i.i.d. Bernoulli random variables with
e
e0 )ij = 1 if e
mean δN = k/N . Define a matrix A0 , where an entry (A
ǫi = γ
ej = 1 and is an
f
independent instantiation of the Bernoulli(1/2) distribution otherwise. Then, we define W
with entries
fij = (B0 )ij (1 − (A
e0 )ij ) + (B1 )ij (A
e0 )ij .
W
Then, by Theorem 4 of [15] and the data-processing inequality, we have
f ), L(W )) ≤ TV(L(e
TV(L(W
ǫ, γ
e), L(ǫ, γ)) ≤
4n
.
N
(72)
f , conditioning on µi and
Recall hµ,0 and hµ,1 defined in Lemma 7.1. By the definition of W
fij ∼ hµ ,0 , while conditioning on µi and γ
fij ∼ hµ ,1 .
γ
ej = 0, W
ej = 1, W
i
i
28
Further define W ij by setting
W ij |(e
γj = 0, µi ) ∼ φ0 ,
W ij |(e
γj = 1, µi ) ∼ φ̄µi ,
where φ̄µi is √
defined according to (27). By Lemma 7.1 and Lemma 7 of [30], uniformly over
maxi |µi | ≤ 3 ηN log N, we have
p
2n X
X
f |e
fij |e
TV L(W
γ , µ), L(W |e
TV L(W
γj , µi ), L(W ij |e
γ , µ) ≤
γj , µi ) ≤ CN −1
i=1 j=1
for some constant C > 0.
Next, we integrate the above bound over µ. To this end, first note that
Z
Z
dν(µ) =
φ0 (x)dx ≤ CN −4 .
TV(ν, ν̄) =
√
√
|x|>3 log N
|µ|>3 ηN log N
R
R
f |e
f |e
With slight abuse of notation, let L(W
γ , µ)dν̄(µ) (resp.
L(W
γ , µ)dν(µ)) denote the
f |e
conditional distribution
of
W
γ
if
the
coordinates
of
µ
=
(µ
,
.
.
.
,
µ
1
2n ) were i.i.d. following
R
R
ν̄ (resp. ν), and let L(W |e
γ , µ)dν̄(µ) and L(W |e
γ , µ)dν(µ) be analogously defined. Then,
conditioning on γ
e, we obtain
Z
Z
f
TV( L(W |e
γ , µ)dν̄(µ), L(W |e
γ , µ)dν(µ))
Z
Z
f |e
≤ TV( L(W
γ , µ)dν̄(µ), L(W |e
γ , µ)dν̄(µ))
Z
Z
+ TV( L(W |e
γ , µ)dν̄(µ), L(W |e
γ , µ)dν(µ))
≤
sup
√
maxi |µi |≤3 ηN log N
f |e
TV(L(W
γ , µ), L(W |e
γ , µ)) + CnTV(ν̄, ν) ≤ CN −1 .
Here, the first inequality comes from the triangle inequality, the second
Pnfrom thePdefinition
of total variation distance. For each given γ
e = (e
γ1 , ..., γen ), define s = j=1 γ
ej = nj=1 γ
ej2 =
ke
γj k2 , θ = s−1/2 γ
e and τ = sηN . Note that both θ and τ are functions of γ
e. Then observe
R
R
f |e
f |e
,
which
implies
for
L(
W
γ
)
=
L(
W
γ
,
µ)dν̄(µ),
γ , µ)dν(µ) = Q2n
that L(W |e
θ,τ
−1
f |e
.
TV L(W
γ ), Q2n
θ,τ ≤ CN
Define the event Q = {e
γ : |s − k| ≤ k/2}. Then, by Bernstein’s inequality, P(Qc ) ≤ e−Ck . Let
π
e be the joint distribution of (θ, τ ), and π be the distribution obtained from renormalizing
the restriction of π
e on {(θ(e
γ ), τ (e
γ )) : γ
e ∈ Q} which is exactly the set in (71). Then we have
c
−Ck
f |e
f |θ, τ ) since there
TV(π, π
e) ≤ CP(Q ) ≤ Ce
. In addition, we note that L(W
γ ) = L(W
exists one-to-one identification between the pair (θ, τ ) and e
γ . Therefore, we have
Z
Z
f ), L(W
f |θ, τ )dπ(θ, τ ))
f ), Q2n dπ(θ, τ )) ≤ TV(L(W
TV(L(W
θ,τ
Z
Z
f |θ, τ )dπ(θ, τ ), Q2n dπ(θ, τ ))
+TV( L(W
θ,τ
f |θ, τ ), Q2n )
≤ TV(e
π , π) + sup TV(L(W
θ,τ
θ,τ
−Ck
≤ C e
29
+N
−1
.
R
f ) = L(W
f |θ, τ )de
Here, the Rsecond inequality holds since L(W
π(θ, τ ). Hence, by (72),
4n
−Ck
−1
2n
+N
+ N . Note that on the support of π, the
TV(L(W ), Qθ,τ dπ(θ, τ )) ≤ C e
parameter (θ, τ ) belongs to the set (71). An application of data-processing inequality leads
to the conclusion. When p ≥ 2n, we may first analyze the distribution of the first 2n coordinates using the above arguments. The remaining 2n − p coordinates are exact, and the total
variation bound is zero.
P
′
b
Proof of Theorem 5.2. Abbreviate n1 2n
i=n+1 Wi Wi by Σ. We can rewrite the testing function
ψ as
n
o
b θb ≥ 1 + kηN /4 .
ψ(W ) = ψ(A, µ, B0 , B1 ) = 1 θb′ Σ
Here, µ = (µ1 , . . . , µ2n ) collects the random variables in (30). Thus, it is clear that ψ is a
randomized test for the Planted Clique detection problem (22). Note that for any (θ, τ ) in
the support of π, we have
3k
kηN
.
(73)
Qnθ,τ ∈ Q n, , p,
2
2
We now bound the testing errors. For Type-I error, Lemma 7.2 implies
PH G ψ ≤ Qn0 ψ + CN −1 .
0
b are independent. Conditioning on θb and using Bernstein’s
Note that under Qn0 , θb and Σ
inequality, we have
2n
X
b 2 > 1 + kηN ,
b θb = 1 + 1
|θb′ Wi |2 − ||θ||
θb′ Σ
n
4
i=n+1
4
b
with probability at most exp − N 2Cnk
(log N )4 . Integrating over θ, we have
PH G ψ ≤ exp −
0
Cnk 4
+ CN −1 ≤ C(n−1 + N −1 ),
N 2 (log N )4
where the last inequality holds under the assumptions
some sufficiently small constant c > 0.
Turn to the Type-II error. Lemma 7.3 implies
N (log N )5
k4
(74)
≤ c and cN ≤ n ≤ N/12 for
4n
,
(75)
PH G (1 − ψ) ≤ Qπ (1 − ψ) + C e−Ck + N −1 +
1
N
R
where we have used the notation Qπ = Qnθ,τ dπ. For each Qnθ,τ in the support of π, Wn+i has
√
representation Wn+i = τ gi θ + ǫi , where the gi ’s and the ǫi ’s are independently distributed
according to N (0, 1) and Np (0, Ip ), and are independent across i = 1, ..., n, and τ ≥ kηN /2.
Therefore,
√
n
n
n
1X
1 X
2
τ b′ X ′ b
2
′
2
′
′
2
b θb = τ |θb θ|
gi ǫi θ.
gi +
|θb ǫi | +
θθ
θb Σ
n
n
n
i=1
i=1
30
i=1
After rearrangement, we have
n
b θb − (1 + τ )
θb′ Σ
≤
1X 2
(gi − 1) + τ min{|(θb − θ)′ θ|2 , |(θb + θ)′ θ|2 }
n
+
i=1
n
X
1
n
i=1
n
(|θb′ ǫi |2 − 1) +
2X
b ,
gi (ǫ′i θ)
n
i=1
where min{|(θb − θ)′ θ|2 , |(θb + θ)′ θ|2 } is bounded by min{|(θb − θ)′ θ|2 , |(θb + θ)′ θ|2 } ≤ min ||θb −
θ||2 , ||θb + θ||2 ≤ kPθb − Pθ k2F . Together with (34), the above bound implies that for each
(θ, τ ) pair in the support of π,
n
1o
≤ β.
Qnθ,τ min{|(θb − θ)′ θ|2 , |(θb + θ)′ θ|2 } >
3
(76)
By Bernstein’s inequality, we have
Qnθ,τ
r
n
n
n
n 1X
1 X b′ 2
2X
′
log n o
2
′b
≤ n−C .
(gi − 1) +
(|θ ǫi | − 1) +
gi (ǫi θ) > C
n
n
n
n
i=1
i=1
i=1
Combining the above analysis and using the assumptions that
N/12, we have
N (log N )5
k4
≤ c and cN ≤ n ≤
′
Qnθ,τ (1 − ψ) ≤ β + n−C .
(77)
Integrating over (θ, τ ) according to the prior π and applying (75), we obtain
4n
′
.
PH G (1 − ψ) ≤ β + n−C + C e−Ck + N −1 +
1
N
Summing up the Type-I and Type-II errors, we have
PH G ψ + PH G (1 − ψ) ≤ β +
0
1
4n
′
+ C(n−1 + N −1 + e−C k ).
N
(78)
Thus, the proof is complete.
7.2
Proofs of Theorems 5.3, 5.4 and 5.1
Consider a Planted Clique detection problem with N = 12n and
k = ⌊2s/3⌋. Then the
assumptions of Theorem 5.2 are satisfied. Thus, supQ∈Q(n,s,p,λ) Q kPθb − Pθ k2F > 13 ≤ 41
′
implies a testing error bounded by 41 + 31 + C(n−1 + N −1 + e−C k ), which is smaller than 2/3
as n → ∞. This contradicts the condition (35) that implies the condition of Hypothesis A.
Hence, we must have (36) and the proof of Theorem 5.3 is complete. To prove Theorem 5.4,
note that according to the proof of Theorem 5.2, a testing error bound 1/4 for the sparse PCA
′
problem implies a testing error bound 14 + 31 + C(n−1 + N −1 + e−C k ) for the Planted Clique
detection problem. The same argument we have just used leads to the desired conclusion.
Finally, Theorem 5.1 can be derived from Theorem 5.5 and Theorem 5.2 by applying a similar
argument.
31
7.3
Proof of Theorem 5.5
Let Wi ∼ Np (0, τ θθ ′ + Ip ), then (Xi′ , Yi′ )′ ∼ Np+m (0, Σ) with Σ given in (41). We complete
the proof by noting
o 16 min kb
n
u − uk2 , kb
u + uk2
kPθb − Pθ k2F ≤ 4 min kθb − θk2 , kθb + θk2 ≤
kuk2
3 2
σ 2 (Σx )
L(b
u, u) ≤ 16 1 +
≤ 16 max
L(b
u, u).
2
2
σmin (Σx )
7.4
Proof of Lemma 7.1
√
We first verify that (28)–(29) are proper density functions when |µ| ≤ 3 ηN log N , which is
a corollary of the following lemma.
√
√
Lemma 7.4. If k ≤ N/12, |µ| ≤ 3 ηN log N and |x| ≤ 3 log N , then
4
−1
φ̄µ (x) − φ0 (x) ≤ φ0 (x).
δN
5
Proof. By definition,
µ2
µ2
−1
φ̄µ (x) − φ0 (x) = (2δN )−1 φ0 (x) exp µx −
δN
+ exp − µx −
−2 .
2
2
Under the conditions of the lemma, we have |µx| +
µ2
2
≤ 12 , and so
µ2
4
µ4
µ2
+ exp − µx −
− 2 ≤ µ2 + |µx|2 +
≤ 8µ2 log N.
exp µx −
2
2
3
3
We complete the proof by combining the last two displays.
The following lemma controls the rescaling constants in (28) and (29).
Lemma 7.5. There exists an absolute constant C > 0 such that for any |µ| ≤ 1, |Mi − 1| ≤
CN −4 for i = 0, 1.
Proof. Note that
Z
Z
1 = fµ,0 (x)dx = M0
−M0
Z
−1
(φ̄µ (x) − φ0 (x)) dx
φ0 (x) − δN
√
|x|>3 log N
= M0 − M0
Z
−1
(φ̄µ (x) − φ0 (x)) dx
φ0 (x) − δN
√
|x|>3 log N
The integral on the RHS is upper bounded by
Z
Z
−1
−1
φ
(x)dx
+
δ
1 + δN
0
N
√
−1
(φ̄µ (x) − φ0 (x)) dx.
φ0 (x) − δN
√
|x|>3 log N
|x|>3 log N
φ̄µ (x)dx ≤ CN −4 ,
where the last inequality comes from standard Gaussian tail bounds. This readily implies
|M0 − 1| ≤ CN −4 The desired bound on M1 follows from similar arguments.
32
Proof of Lemma 7.1. Define
−1
(φ̄µ (x) − φ0 (x)),
gi (x) = φ0 (x) − (−1)i δN
for i = 0, 1.
Then we have for i = 0 and 1,
fµ,i (x) = gi (x) − (1 − Mi 1{|x|≤3√log N } )gi (x),
By Lemma 7.4 and Lemma 7.5,
Z
Z
(1 − Mi 1{|x|≤3√log N } )gi (x) dx
|fµ,i (x) − gi (x)|dx ≤
Z
Z
≤ |1 − Mi | |gi (x)|dx + Mi
√
|x|>3 log N
|gi (x)|dx ≤ CN −3 .
(79)
Therefore, we have
Z
1
1
1
TV
(fµ,0 + fµ,1 ), φ0 =
φ0 (x) − (fµ,0 (x) + fµ,1 (x)) dx
2
2
2
Z
Z
1
1
1 X
≤
|fµ,i (x) − gi (x)|dx ≤ CN −3 ,
φ0 (x) − (g0 (x) + g1 (x)) dx +
2
2
4
i=0,1
where the last inequality is due to the identity φ0 = 12 (g0 + g1 ) and (79). In addition, we
have
1
TV δN fµ,1 + (1 − δN ) (fµ,0 + fµ,1 ) , φ̄µ
2
Z
1 − δN
1
(fµ,0 (x) + fµ,1 (x)) − φ̄µ (x) dx
δN fµ,1 (x) +
=
2
2
Z
1
1 − δN
≤
(g0 (x) + g1 (x)) − φ̄µ (x) dx
δN g1 (x) +
2
2
X 1 − (−1)i δN Z
+
|fµ,i (x) − gi (x)|dx
4
≤ CN
i=0,1
−3
.
Here, the last inequality is due to the identity δN g1 +
completes the proof.
8
1−δN
2
(g0 + g1 ) = φ̄µ and (79). This
Discretization and Computational Lower bounds
To formally address the computational complexity issue in a continuous statistical model,
we adopt the framework in [30]. After introducing the asymptotically equivalent discretized
models, we state the computational lower bounds for sparse PCA and sparse CCA under
the discretized models in Section 8.1. The necessary modifications to Algorithms 1 and 2
are spelled out in Section 8.2 to ensure that they are truly of randomized polynomial time
complexity.
33
For any t ∈ N, define the function [·]t : R → 2−t Z by
[x]t = 2−t 2t x .
(80)
For any matrix, the function is defined component-wise. Let
(p,n)
EM
iid
= L(X1 , . . . , Xn ) : Xi ∼ Np (µ, Σ), 1/M ≤ σmin (Σ) ≤ σmax (Σ) ≤ M
be the class of joint distributions of n i.i.d. samples from all multivariate Gaussian distributions with spectrum contained in [1/M, M ], and
(p,n,t)
EM
(p,n)
= {L([X1 ]t , . . . , [Xn ]t ) : L(X1 , . . . , Xn ) ∈ EM
}
be its discretized counterpart. The following lemma bounds the Le Cam distance [26] between
the two classes of distributions. Its proof is given below in Section 8.3.
(p,n)
Lemma 8.1. When 2t t−1/2 ≥ 2(pM )3/2 , the Le Cam distance between EM
(p,n) (p,n,t)
) ≤ n(pM )3/2 t1/2 2−t .
satisfies ∆(EM , EM
(p,n,t)
and EM
For any t ∈ N, we define discretized sparse PCA parameter space as
Qt (n, s, p, λ) = {L([W ]t ) : L(W ) ∈ Q(n, s, p, λ)} ,
and discretized sparse CCA probability space as
P t (n, su , sv , p, m, r, λ; M ) = {L([X]t , [Y ]t ) : L(X, Y ) ∈ P(n, su , sv , p, m, r, λ; M )} .
In view of Theorem 5.1, we are primarily interested in P(n, su , sv , p, m, 1, λ; 3) and its discretized counterpart.
For the sparse PCA parameter spaces, with the choice of n, s, p and λ in Theorem
(p,n)
(p,n,t)
5.3, under condition (35), Q(n, s, p, λ) ⊂ E4
and Qt (n, s, p, λ) ⊂ E4
. Thus, if we
set the discretization level at t = ⌈4 log2 (p + n)⌉, then Q(n, s, p, λ) and Qt (n, s, p, λ) are
asymptotically equivalent. Similarly, with the choice of n, su , sv , p, m, λ in Theorem 5.1, un(p+m,n)
and
der condition (23), when t = ⌈4 log2 (p + m + n)⌉, P(n, su , sv , p, m, 1, λ; 3) ⊂ E5
(p+m,n,t)
t
P (n, su , sv , p, m, 1, λ; 3) ⊂ E5
are also asymptotically equivalent. Therefore, with the
foregoing discretization levels, the statistical difficulties of the original sparse PCA and CCA
problems are asymptotically equivalent to those of the discretized problems. In particular, the
conditions for any procedure to be consistent are the same for the original and the discretized
parameter spaces.
8.1
Computational lower bounds for discretized models
We now state computational lower bounds for the discretized sparse PCA and sparse CCA
problems. The meaning of “randomized polynomial-time estimators” is now based on the
probabilistic Turing machine computation model rather than the computation model mentioned in Remark 5.1.
34
Theorem 8.1. Let t = ⌈4 log2 (p + n)⌉. Under the condition of Theorem 5.3, for any ranb
domized polynomial-time estimator θ,
n
1o 1
> .
(81)
lim inf
sup
Q kPθb − Pθ k2F >
n→∞ Q∈Qt (n,s,p,λ)
3
4
Theorem 8.2. Let t = ⌈4 log2 (p + m + n)⌉. Under the condition of Theorem 5.1, for any
randomized polynomial-time estimator u
b,
n
1 o 1
lim inf
sup
P L(b
u, u) >
> .
(82)
n→∞ P∈P t (n,s ,s ,p,m,1,λ;3)
300
4
u v
To prove these theorems, we need to modify Algorithms 1 and 2 which are not compatible
with the Turing machine computation model. The details are spelled out in the next subsection. After these modifications, the proofs can be obtained by essentially following the lines
of the proofs of their continuous counterparts while controlling some additional negligible
terms in total variation bounds due to additional truncation. The details are omitted.
8.2
Randomized polynomial-time reduction for discretized models
We first introduce a way to approximately sample with polynomial time complexity from a
distribution obtained from discretizing a continuous distribution with density [30, Section
4.2]. The modifications to Algorithms 1 and 2 then follow.
For any w, K ∈ N and K + w + 1 < b ∈ N, define the discrete distribution Aw,b,K [F] with
probability mass function Aw,b,K [f ] as
R K −w
−2 +i2
f
(x)dx
i − 1
−2K +(i−1)2−w
,
(83)
Aw,b,K [f ] − 2K + w =
R 2K
2
K f (x)dx
−2
b
for i ∈ [2K+w+1 − 1], and let
K
−w
Aw,b,K [f ](2 − 2
) =1−
2K+w+1
X −1
i=1
Aw,b,K [f ](−2K + (i − 1)2−w ).
(84)
In (83), [·]b is the quantization defined previously in (80), and (84) ensures that Aw,b,K [F] is
a proper probability distribution. By the definition of total variation distance, it is straightforward to verify that the approximation error in total variation distance by Aw,b,K [F] to the
distribution of [U 1{U ∈[−2K ,2K ]} ]w with U ∼ F is upper bounded by 2K+w+1−b . As discussed
in Section 4.2 of [30], regardless of the original distribution F, the computational complexity
of drawing a random number from Aw,b,K (F) is O(b2K+w ). This fact is crucial in ensuring
the modified reduction below is of randomized polynomial-time.
Randomized polynomial-time reduction For any t ∈ N, we let
p
w = t + ⌈4 log2 p⌉, K = ⌈log2 (3 log(N + p))⌉, b = w + K + 1 + ⌈4 log2 p⌉.
(85)
The reduction in Algorithm 1 is modified to Algorithm 3 and the reduction in Algorithm 2 is
modified to Algorithm 4. As in the continuous case, a direct reduction from Planted Clique
35
Algorithm 3: Reduction from Planted Clique to Sparse PCA (in Discretized Gaussian
Single Spiked Model)
Input:
1. Graph adjacency matrix A ∈ {0, 1}N ×N ;
2. Estimator θb for the leading eigenvector θ.
Output: A solution to the hypothesis testing problem (22).
e 0 ]. Set
1 Initialization. Generate i.i.d. random variables ξ1 , . . . , ξ2n ∼ Aw,b,K [Φ
1/2
µi = [ηN ]w ξi ,
2
i = 1, . . . , 2n.
Gaussianization. Generate two matrices B0 , B1 ∈ R2n×2n where conditioning on the
µi ’s, all the entries are mutually independent satisfying
L((B0 )ij |µi ) = Aw,b,K [Fµi ,0 ] and
L((B1 )ij |µi ) = Aw,b,K [Fµi ,1 ].
Let A0 ∈ {0, 1}2n×2n be the lower–left 2n × 2n submatrix of the matrix A. Generate a
′ ]′ ∈ R2n×p where for each i ∈ [2n], if j ∈ [2n], we set
matrix W = [W1′ , . . . , W2n
Wij = (B0 )ij (1 − (A0 )ij ) + (B1 )ij (A0 )ij .
3
If 2n < j ≤ p, we let Wij be an independent draw from Aw,b,K [N (0, 1)].
b 1 ]t , . . . , [Wn ]t ) be the estimator of the leading
Test Construction. Let θb = θ([W
eigenvector by treating {[Wi ]t }ni=1 as data. It is normalized to be a unit vector. We
reject H0G if
θb′
2n
1 X
n
[Wi ]t [Wi ]′t
i=n+1
1
θb ≥ 1 + [ k ηN ]w .
w
4
to discretized sparse CCA can be obtained by constructing the estimator θb in the third step
of Algorithm 3 from Algorithm 4.
By (85) and the discussion following (84), the complexity for sampling any random variable in the above reduction is O(p8 (log p)3/2 ), and in total, we need to generate no more than
O(n(p + n)) random variables. Hence, the total complexity for random number generation
is O(p10 (log p)3/2 ) in view of the condition p ≥ 2n. On the other hand, it is straightforward
b have complexity
to verify that all the other computations (except for the estimator u
b or θ)
10
3/2
no more than O(p (log p) ). Since the conditions of Theorems 8.2 and 8.1 ensure that for
some constant a > 1, 2n ≤ p ≤ na and n ≤ N/12, we obtain that the additional computational complexity induced by the proposed reductions is O(N 10a (log N )3/2 ). Therefore, they
are of randomized polynomial-time complexity.
8.3
Proof of Lemma 8.1
We need the following lemma for the proof.
36
Lemma 8.2. For X ∼ Np (µ, Σ) with M −1 ≤ σmin (Σ) ≤ σmax (Σ) ≤ M and U = (U1 , . . . , Up )′
iid
where Ui ∼ Unif[0, 1], we have for any t−1/2 2t ≥ 2(pM )3/2 ,
TV(X, [X]t + 2−t U ) ≤ (pM )3/2 t1/2 2−t .
(p,n,t)
comes from discretizing a corresponding distribution in
Since each distribution in EM
(p,n,t)
(p,n)
(p,n)
) = 0. On the other hand,
EM on a grid with equal spacing 2−t , we have δ(EM , EM
(p,n)
(p,n,t)
, EM ) ≤ n(pM )3/2 t1/2 2−t . This completes
Lemma 8.2 and Lemma 7 of [30] lead to δ(EM
the proof.
−t
Proof of Lemma 8.2. Let f and g denote the density functions of X and [X]
Qpt + 2−t U , respectively. Then g is a piecewise constant function. For any (x1 , ..., xp ) ∈ B = i=1 [2 ij , 2−t (ij +
1)), where ij ∈ Z, we have
Z
1
f (x1 , ..., xp )dx1 ...dxp ,
g(x1 , ..., xp ) =
ν(B) B
where ν is the Lebesgue measure. Hence,
sup
||x−µ||∞ ≤K
≤
≤
g(x)
−1 ≤
f (x)
′
e|(x−µ) Σ
sup
sup
||x−µ||∞ ≤K
||x−y||∞ ≤2−t
f (x)
−1
f (y)
−1 (x−µ)−(y−µ)′ Σ−1 (y−µ)|/2
||x−µ||∞ ≤K
||x−y||∞ ≤2−t
ekΣ
sup
−1
−1 k k(x−µ)(x−µ)′ −(y−µ)(y−µ)′ k /2
F
F
||x−µ||∞ ≤K
||x−y||∞ ≤2−t
−1
≤ exp p3/2 M K2−t − 1
≤
(87)
3 3/2
p M K2−t ,
2
Algorithm 4: Reduction from Discretized Sparse PCA to Discretized Sparse CCA
Input:
1. Observations W1 , . . . , Wn ∈ (2−t Z)p ;
2. Estimator u
b of the first leading canonical correlation coefficient u.
Output: An estimator θb of the leading eigenvector of L(W1 ).
1 Generate i.i.d. random vectors Zi = (Zi1 , . . . , Zip )′ for i ∈ [n] with
iid
Zij ∼ Aw,b,K [N (0, 1)]. Set
h 1 i
(Wi + Zi ),
Xi = √
2 w
2
(86)
h 1 i
Yi = √
(Wi − Zi ),
2 w
Compute u
b=u
b([X1 ]t , [Y1 ]t , . . . , [Xn ]t , [Yn ]t ). Set
b 1 , . . . , Wn ) = [b
θb = θ(W
u/kb
uk]w .
37
i = 1, . . . , n.
whenever p3/2 M K2−t ≤ 21 . The inequality (86) holds since
|(x − µ)′ Σ−1 (x − µ) − (y − µ)′ Σ−1 (y − µ)|
= Tr Σ−1 [(x − µ)(x − µ)′ − (y − µ)(y − µ)′ ]
≤ kΣ−1 kF k(x − µ)(x − µ)′ − (y − µ)(y − µ)′ kF
√
by Cauchy-Schwarz inequality. The inequality (87) holds because kΣ−1 kF ≤ pkΣ−1 kop ≤
√
pM and k(x − µ)(x − µ)′ − (y − µ)(y − µ)′kF ≤ p||x − y||∞ (||x − µ||∞ + ||y − µ||∞) ≤ 2pK2−t .
Note that
Z
Z
Z
g(x)
− 1 dx.
f (x)
|f (x) − g(x)|dx +
|f − g| ≤
f (x)
||x−µ||∞ ≤K
||x−µ||∞ >K
According to Gaussian tail probability, the first term can be bounded by 2p
q
√
2
2 M − (K−1)
2M
.
π K−1 e
The second term is bounded by 23 p3/2 M K2−t according to our previous analysis. Choosing
√
K = 2M t log 2 + 1, we obtain the bound 2(pM )3/2 t1/2 2−t for Rall t−1/2 2t ≥ 2(pM )3/2 . The
conclusion follows the simple fact that TV(X, [X]t + 2−t U ) = 21 |f − g|.
9
9.1
Additional Proofs
Proof of Theorem 3.1
We first present a bound for the estimator defined by (9) under the joint loss.
Theorem 9.1. Assume (13) for some sufficiently small c > 0. Then there exist constants
C, C ′ > 0 only depending on c such that
b (0) (Vb (0) )′ − U V ′ Σ1/2 k2 ≤ C r(su + sv ) + su log ep + sv log em ,
kΣ1/2
U
y
F
x
nλ2
su
sv
with P-probability at least 1 − exp (−C ′ (su + log(ep/su ))) − exp (−C ′ (sv + log(em/sv ))) uniformly over P ∈ P(n, su , sv , p, m, r, λ; M ).
Theorem 9.1 is similar to Theorem 1 of [18], except that the loss function depends on the
marginal covariances so that the error bound is independent of M . Its proof is omitted given
the similarity with that of Theorem 1 of [18].
∗
b (1)
b (1)
b (1)
From now on, we omit the superscript in Σ
x , Σy and Σxy for simplicity. Let U =
b (1) − U ∗ . The following lemmas are needed in the proof of Theorem
U ΛV ′ Σy Vb (0) and ∆ = U
3.1. Their proofs are given in Section 9.3.2.
u)
Lemma 9.1. Assume su log(ep/s
≤ c for some sufficiently small constant c ∈ (0, 1). Then,
n
there exist some constants C, C ′ > 0 only depending on c, such that with probability at least
1 − exp(−C ′ su log(ep/su )),
r
r
s
log(ep/s
)
su log(ep/su )
u
u
b (0) kop ≤ 1 + C
, k(Vb (0) )′ Σy Vb (0) − Ikop ≤ C
.
kΣ1/2
y V
n
n
38
u)
Lemma 9.2. Assume su log(ep/s
≤ c for some sufficiently small constant c > 0. Then, there
n
′
exist some constants C, C > 0 only depending on c, such that
′
2
′
1/2
2
b 1/2 2
(1 − δC
)kΣ1/2
x ∆kF ≤ kΣx ∆kF ≤ (1 + δC )kΣx ∆kF ,
q
u)
′
′
.
with probability at least 1 − exp(−C su log(ep/su )), with δC = C su log(ep/s
n
Lemma 9.3. Assume n1 (sv log(em/sv ) + su log(ep/su ) + rsu ) ≤ c for some sufficiently small
constant c > 0. Then, there exist some constants C, C ′ > 0 only depending on c, such that
r
rsu + su log(ep/su ) 1/2
′ b
(0)
kΣx ∆kF ,
Tr ∆ (Σxy − Σxy )Vb
≤C
n
with probability at least 1 − exp (−C ′ (su log(ep/su ) + rsu )) − exp(−C ′ sv log(em/sv )).
Lemma 9.4. Assume n1 (sv log(em/sv ) + su log(ep/su ) + rsu ) ≤ c for some sufficiently small
constant c > 0. Then, there exist some constants C, C ′ > 0 only depending on c, such that
r
b x − Σx )U ∗ ≤ C rsu + su log(ep/su ) kΣ1/2 ∆kF ,
Tr ∆′ (Σ
x
n
with probability at least 1 − exp (−C ′ (su log(ep/su ) + rsu )) − exp(−C ′ sv log(em/sv )).
1/2
Proof. The proof consists of two steps. First, we derive a bound for kΣx ∆kF . Next, we
b , U ).
derive the desired bound for L(U
Step 1. By the definition of the estimator, we have
b (1) )′ Σ
b xU
b (1) ) − 2 Tr((U
b (1) )′ Σ
b xy Vb (0) ) ≤ Tr((U ∗ )′ Σ
b x U ∗ ) − 2 Tr((U ∗ )′ Σ
b xy Vb (0) ).
Tr((U
After rearrangement, we have
b x ∆) ≤ 2 Tr ∆′ (Σ
b xy Vb (0) − Σ
b xU ∗ )
Tr(∆′ Σ
b xy − Σxy )Vb (0) + 2 Tr ∆′ (Σ
b x − Σx )U ∗ .
≤ 2 Tr ∆′ (Σ
Using Lemma 9.2, Lemma 9.3 and Lemma 9.4, we have
r
1 1/2 2
rsu + su log(ep/su ) 1/2
kΣx ∆kF ≤ 4C
kΣx ∆kF ,
2
n
1/2
with high probability, which immediately implies a bound for kΣx ∆k2F . This completes
Step 1.
Step 2. We claim that
−1
′
b (0) ≤ C ,
σmin
Σ1/2
x U ΛV Σy V
λr
b − Σ1/2 U
b (1) ((U
b (1) )′ Σx U
b (1) )−1/2 kF ≤ C rsu + su log(ep/su ) ,
kΣx1/2 U
x
n
39
(88)
(89)
with high probability. The two claims (88) and (89) will be proved in the end. We bound
b , U ) by
L(U
q
b , U ) = inf kΣ1/2
b
L(U
x (U W − U )kF
W ∈O(r,r)
1/2 b (1) b (1) ′
b
b (1) )−1/2 kF
≤ kΣ1/2
((U ) Σx U
x U − Σx U
b (1) ((U
b (1) )′ Σx U
b (1) )−1/2 W − Σ1/2 U kF .
+ inf kΣ1/2 U
W ∈O(r,r)
x
x
With high probability, we could further bound the rightmost side by
r
1
rsu + su log(ep/su )
C
+ √ kPΣ1/2 Ub − PΣ1/2 U kF
x
x
n
2
r
rsu + su log(ep/su )
′
−1
b (0) kΣ1/2
≤ C
Σ1/2
+ Cσmin
x ∆kF
x U ΛV Σy V
n
r
rsu + su log(ep/su )
≤ C
.
nλ2
(90)
(91)
The bound (90) is due to the claim (89), Lemma 6.6 and the fact that PΣ1/2 Ub = PΣ1/2 Ub (1) .
The inequality (91) is derived from the sin-theta theorem [41]. Thus, we have obtained
b , U ). To finish the proof, we need to prove (88) and (89). Since
the desired bound for L(U
1/2
Σx U ∈ O(p, r), we have
−1
′
b (0) ≤ λ−1 k(V ′ Σy Vb (0) )−1 kop .
σmin
Σ1/2
x U ΛV Σy V
Thus, it is sufficient to bound k(V ′ Σy Vb (0) )−1 kop . By Theorem 9.1 and sin-theta theorem [41],
kPΣ1/2 Vb (0) − PΣ1/2 V kF is sufficiently small. In view of Lemma 6.6, there exists W ∈ O(r, r),
y
y
such that
kΣy1/2 Vb (0) (Vb (0) Σy Vb (0) )−1/2 − Σ1/2
y V W kF
is sufficiently small. Therefore, together with Lemma 9.1,
kV ′ Σy Vb (0) − W kop
≤ kV ′ Σy Vb (0) − V ′ Σy V W (Vb (0) Σy Vb (0) )1/2 kop + kW kop k(Vb (0) Σy Vb (0) )1/2 − Ikop
b (0) Σy Vb (0) )1/2 kop + k(Vb (0) Σy Vb (0) )1/2 − Ikop
≤ kΣy1/2 Vb (0) (Vb (0) Σy Vb (0) )−1/2 − Σ1/2
y V W kF k(V
is also sufficiently small. By Weyl’s inequality [20, p.449], |σmin (V ′ Σy Vb (0) )−1| ≤ kV ′ Σy Vb (0) −
W kop is sufficiently small. Hence, k(V ′ Σy Vb (0) )−1 kop ≤ 2 with high probability, which implies
the desired bound in (88). Finally, we need to prove (89). We have
b − Σ1/2
b (1) ((U
b (1) )′ Σx U
b (1) )−1/2 kF
kΣx1/2 U
x U
b (1) kF k((U
b (1) )′ Σ
b (2) U
b (1) )−1/2 − ((U
b (1) )′ Σx U
b (1) )−1/2 kop
≤ kΣx1/2 U
x
b (1) )′ (Σ
b (2)
b (1) kop .
≤ C kΣx1/2 U ΛV ′ Σy Vb (0) kF + kΣ1/2
x − Σx )U
x ∆kF k(U
1/2
1/2
We have already shown that kΣx ∆kF is sufficiently small. The term kΣx U ΛV ′ Σy Vb (0) kF
√
√
√
is bounded by rkV ′ Σy Vb (0) kop ≤ r(1 + kV ′ Σy Vb (0) − W kop ) ≤ C r by using the bound
40
b (1) )′ (Σ
b (2)
b (1) kop , note that Σ
b (2)
derived for kV ′ Σy Vb (0) − W kop . To bound k(U
x − Σx )U
x only
(1)
b
depends on D2 and is independent of U . Using union bound and an ǫ-net argument (see,
1/2
for example, [36]) and the fact q
that r ≤ su (which is implied by Σx U ∈ O(p, r)), we have
b (1) )′ (Σ
b (2)
b (1) kop ≤ C rsu +su log(ep/su ) with high probability. Hence, the proof is
k(U
x − Σx )U
n
complete.
9.2
Proof of Theorem 3.2
For any probability
measures P, Q, define the Kullback-Leibler(KL) divergence by D(P||Q) =
R
dP
log dQ dP. The following result is Lemma 14 in [18]. It gives explicit formula for the KL
divergence between distributions generated by a special kind of covariance matrices.
"
#
′
Ip
λU(i) V(i)
Lemma 9.5. For i = 1, 2, let Σ(i) =
with λ ∈ (0, 1), U(i) ∈ O(p, r)
′
λV(i) U(i)
Im
and V(i) ∈ O(m, r). Let P(i) denote the distribution of a random i.i.d. sample of size n from
the Np+m (0, Σ(i) ) distribution. Then
D(P(1) ||P(2) ) =
nλ2
′
kU V ′ − U(2) V(2)
k2F .
2(1 − λ2 ) (1) (1)
The main tool for our proof is the following Fano’s lemma [44, Lemma 3].
Proposition 9.1. Let (Θ, ρ) be a metric space and {Pθ : θ ∈ Θ} a collection of probability
measures. For any totally bounded T ⊂ Θ, denote by M(T, ρ, ǫ) the ǫ-packing number of T
with respect to ρ, i.e., the maximal number of points in T whose pairwise minimum distance
in ρ is at least ǫ. Define the KL diameter of T by dKL (T ) , supθ,θ′ ∈T D(Pθ ||Pθ′ ). Then
ǫ2
dKL (T ) + log 2
2 b
.
inf sup Pθ ρ θ(X), θ ≥
≥1−
4
log M(T, ρ, ǫ)
b
θ θ∈Θ
(92)
Finally, we lower bound the prediction loss by the squared subspace distance. Its proof
is given in Section 9.3.2.
Proposition 9.2. Suppose the eigenvalues of Σx lie in the interval [M1 , M2 ]. Then, we have
kPUb − PU k2F ≤
A similar inequality holds for L(Vb , V ).
√ M2
b , U ).
2
L(U
M1
Proof of Theorem 3.2. Let us first give an outline of the proof. By Proposition 9.2, we have
b , U ) ≥ Cǫ2 ≥ inf sup P kP b − PU k2F ≥ C1 ǫ2 ,
inf sup P L(U
U
b P∈P
U
b P∈P
U
for any rate ǫ2 . Therefore, it is sufficient to derive a lower bound for the loss kPUb − PU k2F .
Without loss of generality, we assume su /3 is an integer and su ≤ 3p/4. The case su >
41
3p/4 is harder and thus it shares the same lower bound. The subset of covariance class
F(p, m, su , sv , r, λ; M ) we consider is
e 0
Ip
λU V0′
U
e ∈ B,
:U =
T = Σ=
,U
λV0 U ′
Im
0 ur
p−2su /3
ur ∈ R
, ||ur || = 1, |supp(ur )| ≤ su /3 ,
′
where V0 = Ir 0′ ∈ O(m, r) and B is a subset of O(2su /3, r − 1) to be specified later.
e and the vector ur . As U
e and ur vary, we
From the construction, U depends on the matrix U
∗
always have U ∈ O(p, r). We use T (ur ) to denote a subset of T where ur = u∗r is fixed, and
e ∗ ) to denote a subset of T where U
e =U
e ∗ is fixed.
use T (U
rsu
∗
The proof has three steps. In the first step, we derive the part nλ
2 using the subset T (ur )
s
log(ep/s
)
u
u
for some particular u∗r . In the second step, we derive the other part
using the
nλ2
∗
∗
e
e
subset T (U ) for some fixed U . Finally, we combine the two results in the third step.
e0 = Ir−1 0′ ′ ∈
Step 1. Let u∗r = (1, 0, ..., 0)′ , and we consider the subset T (u∗r ). Let U
√
O(2su /3, r − 1) and ǫ0 ∈ (0, r] to be specified later. Define
By Lemma 9.5,
e ∈ O(2su /3, r − 1) : kU
e −U
e0 kF ≤ ǫ0 }.
B = B(ǫ0 ) = {U
dKL (T (u∗r )) =
2 2
nλ2
e(1) − U
e(2) k2 ≤ 2nλ ǫ0 .
k
U
F
2(1 − λ2 )
1 − λ2
∈B(ǫ0 )
sup
e(i)
U
(93)
Here, the equality is due to the definition of V0 and the inequality due to the definition of
B(ǫ0 ). We now establish a lower bound for the packing number of T (u∗r ). For some α ∈ (0, 1)
e(1) , . . . , U
e(N ) } ⊂ O(2su /3, r − 1) be a maximal set such that for
to be specified later, let {U
any i 6= j ∈ [N ],
√
e(i) U
e′ − U
e0 U
e0′ kF ≤ ǫ0 ,
e(i) U
e′ − U
e(j) U
e ′ kF ≥ 2αǫ0 .
(94)
kU
kU
(i)
(i)
(j)
Then by [13, Lemma 1], for some absolute constant C > 1,
N≥
1
Cα
(r−1)(2su /3−r+1)
.
e(i) U
e′ −
It is easy to see that the loss function kPU(i) − PU(j) k2F on the subset T (u∗r ) equals kU
(i)
√
e(j) U
e ′ k2 . Thus, for ǫ = 2αǫ0 with sufficiently small α, log M(T (u∗ ), ρ, ǫ) ≥ (r−1)(2su /3−
U
r
(j) F
1
1
1
1
r+1) log Cα
≥ (r−1)( 16 su −1) log Cα
≥ 12
rsu log Cα
when r is sufficiently large and r ≤ su /2.
rsu
2
Taking ǫ0 = c1 nλ2 for sufficiently small c1 , we have
ǫ2
2
inf sup P kPUb − PU kF ≥
≥1−
4
b T (u∗ )
U
r
2c1 rsu
1−λ2 + log 2
1 .
1
12 rsu log Cα
(95)
Since λ is bounded away from 1, we may choose sufficiently small c1 and α, so that the right
hand side of (95) can be lower bounded by 0.9. This completes the first step.
42
u)
can be obtained from the rank-one argument spelled out in
Step 2. The part su log(ep/s
nλ2
e ∗ ) with U
e ∗ = Ir−1 0′ ′ ∈ O(2su /3, r − 1).
[14]. To be rigorous, consider the subset T (U
e ∗ ), the loss function is
Restricting on the set T (U
kPU(i) − PU(j) k2F = kur,(i) u′r,(i) − ur,(j) u′r,(j) k2F .
Let X = [X1 X2 ] with X1 ∈ Rn×(r−1) and X2 ∈ Rn×(p−r+1) , and Y = [Y1 Y2 ] with
Y1 ∈ Rn×(r−1) and Y2 ∈ Rn×(m−r+1) . Then it is further equivalent to estimating u1 under projection loss based on the observation (X2 , Y2 ), because (X2 , Y2 ) is a sufficient statistic
for ur . Applying the argument in [14, Appendix G] and choosing the appropriate constant,
we have
su log(ep/su )
∧
c
inf sup P kPUb − PU k2F ≥ C
(96)
0 ≥ 0.9,
nλ2
b
U
e ∗)
T (U
for some constant C > 0. This completes the second step.
Step 3. For any P ∈ P, by union bound, we have
P kPUb − PU k2F ≥ ǫ21 ∨ ǫ22
≥ 1 − P kPUb − PU k2F < ǫ21 − P kPUb − PU k2F < ǫ22
= P kPUb − PU k2F ≥ ǫ21 + P kPUb − PU k2F ≥ ǫ22 − 1.
rsu
Taking supT (u∗ )∪T (Ue ∗ ) on both sides of the inequality, and letting ǫ21 = C1 nλ
2 in (95) and
r
u)
ǫ22 = C2 su log(ep/s
∧ c0 in (96), we have
nλ2
sup P kPUb − PU k2F ≥ ǫ21 ∨ ǫ22 ≥ 0.9 + 0.9 − 1 = 0.8,
P∈P
e
where we have used the identity supU∈T
e
e ∗ ) (f (ur ) + g(U )) = supur ∈T (U
e ∗ ) f (ur ) +
(u∗r ),ur ∈T (U
e
supUe ∈T (u∗ ) g(U ). Careful readers may notice that we have assume sufficiently large r in Step
r
1. For r which is not sufficiently large, a similar rank-one argument as in Step 2 gives the
desired lower bound. Thus, the proof is complete.
9.3
Proofs of technical lemmas
This section gathers the proofs of all technical results used in the above sections. The proofs
are organized according to the order of their first appearance. To simplify notation, we denote
b x(j) , Σ
b (j)
b (j)
b b
b
Σ
y and Σxy by Σx , Σy and Σxy for j ∈ {0, 1, 2} whenever there is no confusion from
the context.
9.3.1
Proofs of lemmas in Section 6
In order to prove Lemma 6.1, we need an auxiliary result.
43
Lemma 9.6. Assume n1 (su + sv + log(ep/su ) + log(em/sv )) ≤ c for some sufficiently small
constant c ∈ (0, 1). Then there exist some constants C, C ′ > 0 only depending on c such that
s
ep
1
′b
′b
1/2
kU Σx U − Ikop ∨ k(U Σx U ) − Ikop ≤ C
,
su + log
n
su
s
em
1
′b
′b
1/2
,
kV Σy V − Ikop ∨ k(V Σy V ) − Ikop ≤ C
sv + log
n
sv
with probability at least 1 − exp(−C ′ (su + log(ep/su ))) − exp(C ′ (sv + log(em/sv ))).
Proof. Using the definition of operator norm and the sparsity of U , we have
b x U − Ir kop = kU ′ (Σ
b x − Σx )U kop = k(USu ∗ )′ (Σ
b xSu Su − ΣxSu Su )USu ∗ kop
kU ′ Σ
−1/2
b xSu Su − ΣxSu Su )(USu ∗ v) ≤ kΣ1/2 USu ∗ k2 kΣ−1/2 Σ
b
= sup (USu ∗ v)′ (Σ
op
xSu Su xSu Su ΣxSu Su − Ikop ,
xSu Su
||v||=1
1/2
−1/2
−1/2
b xSu Su Σ
where kΣxSu Su USu ∗ k2op ≤ 1 and kΣxSu Su Σ
xSu Su − Ikop is bounded by the desired rate
b x U )1/2 −
with high probability according to Lemma 16 in [18]. Lemma 15 in [18] implies k(U ′ Σ
b x U − Ikop , and thus k(U ′ Σ
b x U )1/2 − Ikop also shares same upper bound. The
Ikop ≤ CkU ′ Σ
′
′
b y V − Ikop ∨ k(V Σ
b y V )1/2 − Ikop can be derived by the same argument.
upper bound for kV Σ
Hence, the proof is complete.
Proof of Lemma 6.1. According to the definition, we have
1/2
′b
1/2
e
b x U )−1/2 kop ,
kΣ1/2
− Ikop k(U ′ Σ
x (U − U )kop ≤ kΣx U kop k(U Σx U )
1/2
′b
1/2
e
b y V )−1/2 kop ,
kΣ1/2
− Ikop k(V ′ Σ
y (V − V )kop ≤ kΣy V kop k(V Σy V )
e − Λkop ≤ k(U ′ Σ
b x U )1/2 − Ikop kΛ(V ′ Σ
b y V )1/2 kop
kΛ
b y V )1/2 − Ikop .
+kΛkop k(V ′ Σ
Applying Lemma 9.6, the proof is complete.
e , we have U
e ′Σ
b xU
e = I, and thus Σ
b 1/2
e
Proof of Lemma 6.2. By the definition of U
x U ∈ O(p, r).
b 1/2
e
Similarly Σ
y V ∈ O(m, r). Thus,
b 1/2
e b 1/2
b 1/2 e
b 1/2 e
kΣ
x AΣy kop ≤ kΣx U kop kΣy V kop ≤ 1.
(97)
b y V )−1 (V ′ Σ
b y V )) = r.
Tr(Q′ Q) = Tr((V ′ Σ
(98)
′
b 1/2
e b 1/2
e
Now let us use the notation Q = Σ
x AΣy . Then, by the definition of A, we have Q Q =
′b
−1 ′ b 1/2
b 1/2
Σ
y V (V Σy V ) V Σy , and
Combining (97) and (98), it is easy to see that all eigenvalues of Q′ Q are 1. Thus, we have
kQk∗ = r and kQkop = 1. The proof is complete.
44
Proof of Lemma 6.3. Denote F = [f1 , ..., fr ], G = [g1 , ..., gr ] and cj = fj′ Egj . By kEkop ≤ 1,
we have |cj | ≤ 1. The left hand side of (44) is lower bounded by hF KG′ , F G′ − Ei ≥
hF DG′ , F G′ − Ei − P
kK − DkF kF G′ − EkF , where hF DG′ , F G′ − Ei = hD, I − F ′ EGi =
P
r
r
l=1 dl (1 − cl ) ≥ dr
l=1 (1 − cl ). The first term on the right hand side of (44) is
dr
dr
kF G′ k2F + kEk2F − 2 Tr(F ′ EG)
kF G′ − Ek2F =
2
2
r
X
dr
≤
cj
Tr(F ′ F G′ G) + kEkop kEk∗ − 2
2
j=1
≤ dr
r
X
j=1
(1 − cj ).
This completes the proof.
b xy − Σ
e xy ||∞ can be upper bounded by the
Proof of Lemma 6.4. Using triangle inequality, ||Σ
following sum,
b xy − Σxy ||∞ + ||(Σ
b x − Σx )U ΛV ′ Σy ||∞
||Σ
b y − Σy )||∞ + ||(Σ
b x − Σx )U ΛV ′ (Σ
b y − Σy )||∞ .
+||Σx U ΛV ′ (Σ
The first term can be bounded by the desired rate by union bound and Bernstein’s inequality
[36, Prop. 5.16]. For the second term, it can be written as
n
1X
max
(Xij [Xi′ U ΛV ′ Σy ]k − EXij [Xi′ U ΛV ′ Σy ]k ) ,
j,k n
i=1
where Xij is the j-th element of Xi and the notation [·]k means the k-th element of the
referred vector. Thus, it is a maximum over average of centered sub-exponential random
variables. Then, by Bernstein’s inequality and union bound, it is also bounded by the desired
rate.
we can bound the third term. For the last term, it can be bounded by
Pr Similarly,
′ b
′ b
b
b
λ
||(
Σ
−
Σ
x
x )ul vl (Σy − Σy )||∞ , where for each l, ||(Σx − Σx )ul vl (Σy − Σy )||∞ can be
l=1 l
written as
n
n
1 X
1 X
max
(Xij Xi′ ul − EXij Xi′ ul )
(Yik Yi′ vl − EYik Yi′ vl ) .
j,k
n
n
i=1
i=1
with the desired probability using union bound and
It can be bounded by the rate log(p+m)
n
. Under the
Bernstein’s inequality. Hence, the last term can be bounded by λ1 r log(p+m)
n
q
log(p+m)
assumption that r
is bounded by a constant, it can further be bounded by the
n
q
rate log(p+m)
with high probability. Combining the bounds of the four terms, the proof is
n
complete.
Proof of Lemma 6.6. By the property of least squares, we have
inf kF − GW k2F = kF − G(G′ G)− G′ F k2F = kF − PG F k2F = r − Tr(PF PG ).
W
Since kPF − PG k2F = 2r − 2 Tr(PF PG ), the proof is complete.
45
Proof of Lemma 6.7. By the definition of U ∗ , we have Σxy Vb (0) = Σx U ∗ . Thus,
b xy Vb (0) − Σ
b x U ∗ ]j· || ≤ max ||[(Σ
b xy − Σxy )Vb (0) ]j· || + max ||[(Σ
b x − Σx )U ∗ ]j· ||.
max ||[Σ
1≤j≤p
1≤j≤p
1≤j≤p
b x − Σx )U ∗ ]j· ||. Note that the sample covariance can be
Let us first bound max1≤j≤p ||[(Σ
written as
n
X
1/2 1
b
Zi Zi′ Σ1/2
Σx = Σx
x ,
n
i=1
where
1/2
{Zi }ni=1
are i.i.d. Gaussian vectors distributed as N (0, Ip ). Let Tj′ be the j-th row of
Σx , and then we have
n
b x − Σx )U ∗ ]j· =
[(Σ
1X ′
∗
′ 1/2 ∗
(Tj Zi Zi′ Σ1/2
x U − Tj Σx U ).
n
i=1
For each i and j, define vector
(j)
Wi
1/2
#
Tj′ Zi
.
=
1/2
(U ∗ )′ Σx Zi
"
(j)
(j)
b x −Σx )U ∗ ]j· || ≤ k 1
Since Tj′ Zi Zi′ Σx U ∗ is a submatrix of Wi (Wi )′ , we have ||[(Σ
n
(j)
(j)
EWi (Wi )′ )kop . Hence, for any t > 0, we have
n
o
b x − Σx )U ∗ ]j· || > t
P max ||[(Σ
Pn
(j)
(j) ′
i=1 (Wi (Wi ) −
1≤j≤p
p
n
o
n 1X
X
(j)
(j)
(j)
(j)
(Wi (Wi )′ − EWi (Wi )′ )kop > t
P k
≤
n
≤
(j)
j=1
p
X
j=1
i=1
n
exp C1 r − C2 n min
t
t2
,
kW (j) kop kW (j) k2op
o
,
(99)
(j)
where W (j) = EWi (Wi )′ , and we have used concentration inequality for sample covariance
[36, Thm. 5.39]. Since kW (j) kop ≤ C3 for some constant C3 only depending on M , (99) can
be bounded by
exp C1′ (r + log p) − C2′ n(t ∧ t2 ) .
p
Take t2 = C4 r+log
for some sufficiently large C4 , and under the assumption n−1 (r + log p) ≤
n
q
b x −Σx )U ∗ ]j· || ≤ C r+log p with probability at least 1−exp(−C ′ (r+log p)).
C1 , max1≤j≤p ||[(Σ
n
b xy − Σxy )Vb (0) ]j· ||. Let us sketch the
Similar arguments lead to the bound of max1≤j≤p ||[(Σ
proof. Note that we may write
n
1X ′
(0)
b
b
[(Σxy − Σxy )V ]j =
Tj Zi Yi′ Vb (0) − E(Tj′ Zi Yi′ Vb (0) ) .
n
i=1
46
(j)
Then, define Hi
=
Tj′ Zi
, and we have
(Vb (0) )′ Yi
b xy − Σxy )Vb (0) ]j || ≤ max k 1
max ||[(Σ
1≤j≤p n
1≤j≤p
n
X
i=1
(j)
Using the same argument, we can bound this term by C
1 − exp(−C ′ (r + log p)). Thus, the proof is complete.
9.3.2
(j)
(j)
(j)
(Hi (Hi )′ − EHi (Hi )′ )kop .
q
r+log p
n
with probability at least
Proofs of lemmas in Section 9
1/2
(0)
Proof of Lemma 9.1. Let Tv = Sbv ∪Sv , where Sbv = supp(Vb (0) ). First, let us bound kΣyTv Tv VbTv ∗ kop .
b y Vb (0) = Ir , we have
Since (Vb (0) )′ Σ
1/2
(0)
1/2
b −1/2
b 1/2 Vb (0) kop ≤ kΣ1/2 Σ
b −1/2 kop kΣ
kΣyTv Tv VbTv ∗ kop ≤ kΣyTv Tv Σ
yTv Tv yTv Tv kop
yTv Tv
yTv Tv
−1/2 −1/2
1/2
b
b −1 Σ1/2 k1/2 = σmin (Σ−1/2 Σ
= kΣyTv Tv Σ
yTv Tv yTv Tv ΣyTv Tv )
yTv Tv yTv Tv op
r
−1/2
su log(ep/su )
−1/2 b
−1/2
≤ 1 − kΣyTv Tv ΣyTv Tv ΣyTv Tv − Ikop
≤ 1+C
,
n
with probability at least 1 − exp(−C ′ su log(ep/su )), where the last inequality is by Lemma
12 of [18]. Hence,
b yTv Tv )Vb (0) kop
b y )Vb (0) kop = k(Vb (0) )′ (ΣyTv Tv − Σ
k(Vb (0) )′ Σy Vb (0) − Ikop = k(Vb (0) )′ (Σy − Σ
Tv ∗
Tv ∗
r
su log(ep/su )
−1/2 b
−1/2
(0)
1/2
,
≤ kΣyTv Tv VbTv ∗ k2op kΣyTv Tv Σ
yTv Tv ΣyTv Tv − Ikop ≤ 4C
n
with probability at least 1 − exp(−C ′ su log(ep/su )). The proof is completed by realizing
1/2
(0)
1/2
kΣyTv Tv VbTv ∗ kop = kΣy Vb (0) kop .
b ). Using the definition of FrobeProof of Lemma 9.2. Let Tu = Sbu ∪ Su , where Sbu = supp(U
nius norm, we have
2
′ b
′ b
b 1/2 2
kΣ1/2
x ∆kF − kΣx ∆kF = Tr(∆ (Σx − Σx )∆) = Tr((∆Tu ∗ ) (ΣxTu Tu − ΣxTu Tu )∆Tu ∗ )
r
su log(ep/su ) 1/2 2
−1/2 b
−1/2
≤ kΣxTu Tu ∆Tu ∗ k2F kΣxTu Tu Σ
kΣx ∆kF ,
xTu Tu ΣxTu Tu − Ikop ≤ C
n
1/2
with high probability, where we have used kΣxTu Tu ∆Tu ∗ k2F = kΣx ∆k2F and Lemma 12 in
[18] in the last inequality. After rearrangement, the proof is complete.
b x is constructed from D0 and Σ
b y is constructed from D1 .
Proof of Lemma 9.3. In this proof, Σ
b ) and Sbv = supp(Vb (0) ).
We use the notation Tu = Su ∪Sbu and Tv = Sv ∪Sbv , where Sbu = supp(U
Note that Tu depends on D1 and Tv depends on D0 . We first condition on D0 , and then we
have
b xy − Σxy )Vb (0) = hΣ
b xyTu Tv − ΣxyTu Tv , ∆Tu ∗ (Vb (0) )′ i
Tr ∆′ (Σ
Tv ∗
1/2
(0)
1/2
−1/2
−1/2
b xyTu Tv − ΣxyTu Tv )Σ
≤ kΣyTv Tv VbTv ∗ kop kΣxTu Tu ∆Tu ∗ kF hΣxTu Tu (Σ
yTv Tv , KTu i
1/2
(0)
1/2
−1/2 b
−1/2
≤ kΣyTv Tv VbTv ∗ kop kΣxTu Tu ∆Tu ∗ kF sup hΣxT T (Σ
xyT Tv − ΣxyT Tv )ΣyTv Tv , KT i
T
47
where T ranges over all subsets with cardinality bounded by 2su , and for each such T ,
1/2
1/2
(0)
1/2
b (0) ′ 1/2
KT = kΣxT T ∆T ∗ (VbTv ∗ )′ ΣyTv Tv k−1
F ΣxT T ∆T ∗ (VTv ∗ ) ΣyTv Tv satisfying kKT kF = 1. We do not
put Tv in the subscript of K because conditioning on D0 , Tv is fixed. For each T , we can
−1/2 b
−1/2
use Lemma 7 in [18] to bound hΣxT T (Σ
xyT Tv − ΣxyT Tv )ΣyTv Tv , KT i . A direct union bound
argument leads to
r
rsu + su log(ep/su )
−1/2 b
−1/2
sup hΣxT T (ΣxyT Tv − ΣxyT Tv )ΣyTv Tv , KT i ≤ C
,
n
T
(0)
1/2
with probability at least 1−exp (−C ′ (su log(ep/su ) + rsu )). By Lemma 9.1, we have kΣyTv Tv VbTv ∗ kop =
1/2
1/2
1/2
kΣy Vb (0) kop ≤ 2 with high probability. Finally, observing that kΣxTu Tu ∆Tu ∗ kF = kΣx ∆kF ,
we have completed the proof.
Proof of Lemma 9.4. It is omitted due to similarity to that of Lemma 9.3.
Proof of Proposition 9.2. Let the singular value decomposition of U be U = ΘRH ′ . Then
we have HRΘ′ Σx ΘRH ′ = U ′ Σx U = I, from which we derive Θ′ Σx Θ = R−2 . Using Lemma
6.6, we have
√
b W − ΘkF
kPUb − PU kF = 2 inf kU
W
√
√
b W HR−1 − ΘRH ′ HR−1 kF ≤ 2 inf kU
b W − U kF kR−1 kop
≤ 2 inf kU
W
W
√
−1/2
′
1/2
b
inf kΣ1/2
≤ 2M1
x (U W − U )kF kΘ Σx Θkop
W
√
b
≤ 2(M2 /M1 )1/2 inf kΣ1/2
x (U W − U )kF
W
√
b
≤ 2(M2 /M1 )1/2 inf kΣ1/2
x (U W − U )kF .
W ∈O(r,r)
1/2
b W − U )k2 = Tr((U
b W − U )′ Σx (U
b W − U )), the proof is complete.
Finally, by kΣx (U
F
10
Implementation of (18)
To implement the convex programming (18), we turn to the Alternating Direction Method
b x and Σ
b y for Σ
b (0)
of Multipliers (ADMM) [16, 11]. In the rest of this section, we write Σ
x and
(0)
b
Σy for notational convenience.
First, note that (18) can be rewritten as
minimize
subject to
where
f (F ) + g(G),
b 1/2 F Σ
b 1/2 − G = 0,
Σ
x
b xy , F i + ρkF k1 ,
f (F ) = −hΣ
g(G) = ∞1{kGk∗ >r} + ∞1{kGkop >1} .
48
(100)
y
(101)
(102)
Thus, the augmented Lagrangian form of the problem is
b 1/2 F Σ
b 1/2 − Gi + η kΣ
b 1/2 F Σ
b 1/2 − Gk2 .
Lη (F, G, H) = f (F ) + g(G) + hH, Σ
x
y
y
F
2 x
(103)
Following the generic algorithm spelled out in Section 3 of [11], suppose after the kth iteration, the matrices are (F k , Gk , H k ), then we update the matrices in the (k + 1)-th
iteration as follows:
F k+1 = argmin Lη (F, Gk , H k ),
(104)
Gk+1 = argmin Lη (F k+1 , G, H k ),
(105)
k+1 b 1/2
b 1/2
H k+1 = H + η(Σ
Σy − Gk+1 ).
x F
(106)
F
G
k
The algorithm iterates over (104) – (106) till some convergence criterion is met. It is clear
that the update (106) for the dual variable H is easy to calculate. Moreover the updates
(104) and (105) can be solved easily and have explicit meaning in giving solution to sparse
CCA. We are going to show that (104) can be viewed as a Lasso problem. Thus, this step
targets at the sparsity of the matrix U V ′ . The update (105) turns out to be equivalent to
a singular value capped soft thresholding problem, and it targets at the low-rankness of the
1/2
1/2
matrix Σx U V ′ Σy . In what follows, we study in more details the updates for F and G.
First, we note that (104) is equivalent to
b 1/2 F Σ
b 1/2 − Gk k2
b 1/2 F Σ
b 1/2 i + η kΣ
F k+1 = argmin f (F ) + hH k , Σ
x
y
F
x
y
2
F
1 k 1 b −1/2 b b −1/2 2
η b 1/2 b 1/2
k
Σxy Σy )kF + ρkF k1 .
= argmin kΣ
x F Σy − (G − H + Σx
2
η
η
F
(107)
Thus, it is clear that the update of F in (104) can be viewed as a Lasso problem as summarized
in the following proposition. Here and after, for any positive semi-definite matrix A, A−1/2
denotes the principal square root of its pseudo-inverse.
Proposition 10.1. Let vec be the vectorization operation of any matrix and ⊗ the Kronecker
product. Then vec(F k+1 ) is the solution to the following standard Lasso problem
min kΓx − bk2F +
x
2ρ
kxk1
η
b −1/2 Σ
b xy Σ
b −1/2
b y1/2 ⊗ Σ
b x1/2 and b = vec(Gk − 1 H k + 1 Σ
).
where Γ = Σ
y
η
η x
Remark 10.1. It is worth mentioning that the vectorized formulation in Proposition 10.1
is for illustration only. In practice, we solve the problem in (107) directly, since the vectorized version, especially the Kronecker product, would great increase the computation cost.
The solver to (107) can be easily implemented in standard software packages for convex
programming, such as TFOCS [5].
Since each update of F is the solution of some Lasso problem, it should be sparse in the
sense that its vector ℓ1 norm is well controlled.
49
Turning to the update for G, we note that (105) is equivalent to
η b 1/2 k+1 b 1/2
Gk+1 = argmin g(G) − hH k , Gi + kΣ
Σy − Gk2F
x F
2
G
1
η
k+1 b 1/2 2
b 1/2
Σy )kF
= argmin kG − ( H k + Σ
x F
2
η
G
+ ∞1{kGk∗ >r} + ∞1{kGkop >1}
1
b 1/2 F k+1 Σ
b 1/2 )k2
= argmin kG − ( H k + Σ
x
y
F
η
G
+ ∞1{kGk∗ >r} + ∞1{kGkop >1} .
(108)
The solution to the last display has a closed form according to the following result.
Proposition 10.2. Let G∗ be the solution to the optimization problem:
minimize kG − W kF
subject to kGk∗ ≤ r, kGkop ≤ 1.
P
′
Let the SVDPof W be W = m
i=1 ωi ai bi with ω1 ≥ · · · ≥ ωm ≥ 0 the ordered singular values.
m
Then G∗ = i=1 gi ai b′i where for any i, gi = 1 ∧ (ωi − γ ∗ )+ for some γ which is the solution
to
m
X
1 ∧ (ωi − γ)+ ≤ r.
minimize γ,
subject to γ > 0,
i=1
Proof. The proof essentially follows that of Lemma 4.1 in [37]. In addition to the fact that
the current problem deals
P with asymmetric matrix, the only difference that we now have an
inequality constraint i gi ≤ r rather than an equality constraint as in [37]. The asymmetry
of the current problem does not matter since it is orthogonally invariant.
Here and after, we call the operation in Proposition 10.2 singular value capped soft thresholding (SVCST) and write G∗ = SVCST(W ). Thus, any update for G results from the SVCST
operation of some matrix, and so it has well controlled singular values.
In summary, the convex program (18) is implemented as Algorithm 5.
11
Numerical Studies
This section presents numerical results demonstrating the competitive finite sample performance of the proposed adaptive estimation procedure CoLaR on simulated datasets.
Simulation settings We consider three simulation settings. In all these settings, we set
p = m, Σx = Σy = Σ, and r = 2 with λ1 = 0.9 and λ2 = 0.8. Moreover, the nonzero rows of
both U and V are set at {1, 6, 11, 16, 21}. The values at the nonzero coordinates are obtained
from normalizing (with respect to Σ) random numbers drawn from the uniform distribution
on the finite set {−2, 1, 0, 1, 2}. The choices of Σ in the three settings are as follows:
1. Identity: Σ = Ip .
50
Algorithm 5: An ADMM algorithm for SCCA
Input:
b x, Σ
b y and Σ
b xy ,
1. Sample covariance matrices Σ
2. Penalty parameter ρ,
3. Rank r,
4. ADMM parameter η and tolerance level ǫ.
1
2
3
4
5
6
b
Output: Estimated sparse canonical correlation signal A.
b xy ), G0 = 0, H 0 = 0.
Initialize: k = 0, F 0 = SVCST(Σ
repeat
Update F k+1 as in (104)
(Lasso) ;
k+1
−1
k+1 Σ
b 1/2
b 1/2
Update G
← SVCST(η H k + Σ
(SVCST) ;
x F
y )
1/2 k+1 b 1/2
k+1
k
k+1
b
Update H
← H + η(Σx F
Σy − G ) ;
k ←k+1 ;
until max{kF k+1 − F k kF , ρkGk+1 − Gk kF } ≤ ǫ;
b = F k.
Return A
2. Toeplitz: Σ = (σij ) where σij = 0.3|i−j| for all i, j ∈ [p]. In other words, Σx and Σy
are Toeplitz matrices.
q
0 / σ 0 σ 0 ). We set Σ0 = (σ 0 ) = Ω−1 where Ω = (ω ) with
3. SparseInv: Σ = (σij
ij
ij
ii jj
ωij = 1{i=j} + 0.5 × 1{|i−j|=1} + 0.4 × 1{|i−j|=2} ,
i, j ∈ [p].
In other words, Σx and Σy have sparse inverse matrices.
In all three settings, we normalize the variance of each coordinate to be one.
Implementation details The proposed CoLaR estimator in Section 4.1 has two stages.
The convex program (18) in the first stage can be solved via an ADMM algorithm [11]. The
details of the ADMM approach are presented in Section 10. The optimization problem (19)
in the second stage can be solved by a standard group-Lasso algorithm [45].
In
p all numerical results reported in this section, we used the same penalty level ρ =
0.55 log(p + m)/n in (18) and we used η = 2 in (106). Inp(19), we used five-fold cross
validation to select a common penalty parameter ρu = ρv = b (r + log p)/n. In particular,
test , Y test ) and the other four
for l = 1, . . . , 5, we use one fold of the data as the test set (X(l)
(l)
train , Y train ). For any choice of b, we solved (19) on (X train , Y train )
folds as the training set (X(l)
(l)
(l)
(l)
b(l) , Vb(l) ). Then we computed the sum of canonical correlations between
to obtain estimates (U
b(l) ∈ Rn×r and Y test Vb(l) ∈ Rn×r to obtain CVl (b). Finally, CV(b) = P5 CVl (b).
X test U
(l)
l=1
(l)
Among all the candidate penalty parameters, we select the b value such that CV(b) is maximized. The candidate b values used in the simulation below are {0.5, 1, 1.5, 2}. Throughout
the simulation, we used all the sample {(Xi , Yi )}ni=1 to form the sample covariance matrices
used in (18) – (20).
51
In addition to the performance of CoLaR, we also report that of the method proposed
in [43] (denoted by PMA here and on). The PMA seeks the solution to the optimization
problem
b xy v, subject to ||u|| ≤ 1, ||v|| ≤ 1, ||u||1 ≤ c1 , ||v||1 ≤ c2 .
max u′ Σ
u,v
The solution is used to estimate the first canonical pair (b
u1 , vb1 ). Then the same procedure is
′
′
b
b
b
repeated after Σxy is replaced by Σxy − (b
u1 Σxy vb1 )b
u1 vb1 , and the solution gives the estimator
of the second canonical pair (b
u2 , vb2 ). This process is repeated until u
br , vbr is obtained. Note
that the normalization constraint kuk ≤ 1 and kvk ≤ 1 implicitly assumes that the marginal
covariance matrices Σx and Σy are identity matrices. We used the R implementation of the
method (function CCA in the PMA package in R) by the authors of [43]. To remove undesired
amplification of error caused by normalization, we renormalized each individual u
bj with
b x and each individual vbj with respect to Σ
b y before calculating the error under
respect to Σ
the loss (7). For each simulated dataset, we set the sparsity penalty parameters penaltyx
and penaltyz of the function CCA at each of the eleven different values {0.6l : l = 0, 1, . . . , 10}
and only the smallest estimation error out of all eleven trials was used to compute the error
reported in the tables below.
Results Tables 1 – 3 report, in each of the three settings, the medians of the prediction
errors of CoLaR and PMA out of 100 repetitions for four different configurations of (p, m, n)
values.
In each table, the columns U -PMA and V -PMA report the medians of the smallest estimation errors out of the eleven trials on each simulated dataset. The columns U -init and
V -init report the median estimation errors of the renormalized r left singular vectors and
right singular vectors of the solutions to the initialization step (18), where the renormalization is the same as in (20) and in both (18) and renormalization we used all the n pairs
of observations. Last but not least, the columns U -CoLaR and V -ColaR report the median
estimation errors of the CoLaR estimators where both stages were carried out.
In all simulation settings, both the renormalized initial estimators and the CoLaR estimators consistently outperform PMA. Comparing the last four columns within each table,
we also find that the CoLaR estimators with both stages carried out significantly improve
over the renormalized initial estimators, which is in accordance with our theoretical results
in Section 4.
In summary, the proposed method delivers consistent and competitive performance in all
three covariance settings across all dimension and sample size configurations, and its behavior
agrees well with the theory.
(p, m, n)
(300, 300, 200)
(600, 600, 200)
(300, 300, 500)
(600, 600, 500)
U -PMA
2.1316
3.4154
0.2683
2.0335
V -PMA
2.1297
3.3584
0.2701
2.0368
U -init
0.2653
0.3167
0.1207
0.1448
V -init
0.1712
0.2087
0.0665
0.0817
U -CoLaR
0.0498
0.0671
0.0135
0.0166
V -CoLaR
0.0646
0.0776
0.0159
0.0203
Table 1: Prediction errors (Identity): Median in 100 repetitions.
52
(p, m, n)
(300, 300, 200)
(600, 600, 200)
(300, 300, 500)
(600, 600, 500)
U -PMA
2.1853
3.4247
0.2358
2.1214
V -PMA
2.1840
3.4852
0.2191
2.0889
U -init
0.2885
0.3236
0.1202
0.1408
V -init
0.1706
0.2004
0.0664
0.0811
U -CoLaR
0.0511
0.0638
0.0135
0.0176
V -CoLaR
0.0601
0.0764
0.0166
0.0209
Table 2: Prediction errors (Toeplitz): Median in 100 repetitions.
(p, m, n)
(300, 300, 200)
(600, 600, 200)
(300, 300, 500)
(600, 600, 500)
U -PMA
2.9697
4.6908
2.3967
2.8707
V -PMA
2.9619
4.3339
2.0620
2.8609
U -init
0.5552
0.5596
0.2695
0.3068
V -init
0.5718
0.6133
0.1917
0.2368
U -CoLaR
0.1568
0.2123
0.0242
0.0338
V -CoLaR
0.1194
0.1572
0.0219
0.0271
Table 3: Prediction errors (SparseInv): Median in 100 repetitions.
Model misspecification We now examine the performance of our estimator when the
model is misspecified. To this end, we consider the case where there are three pairs of nontrivial canonical correlations present in the data but we set r = 2 in our algorithm. As before,
we consider three different types of marginal covariance matrices: Identity, Toeplitz and
SparseInv. In addition, we generate the first two pairs of canonical correlation vectors
in the same way as before. For the generation of the third pair of canonical directions,
we consider two different scenarios. In the first scenario, the support of the third pair of
canonical direction vectors are set at {1, 6, 11, 16, 21} and so they are the same as those of
the first two pairs. In the second scenario, we put no constraint on the support of these
vectors. For both scenarios, we set λ3 = 0.3. Table 4 reports the prediction errors of the first
two pairs of canonical correlations in both scenarios when (p, m, n) = (300, 300, 500). The
implementation details are exactly the same as before. The first two columns contain results
in the first scenario, and the third and the fourth columns the second scenario. Comparing
these results with their counterparts in correctly specified models (the last two cells in the
second last rows of Tables 1–3), we have found that the performance of our estimator was
robust to model misspecification in both scenarios.
Covariance
Identity
Toeplitz
SparseInv
Scenario 1
U -CoLaR V -CoLaR
0.0190
0.0195
0.0197
0.0197
0.0348
0.0263
Scenario 2
U -CoLaR V -CoLaR
0.0144
0.0160
0.0138
0.0147
0.0221
0.0328
Table 4: Prediction errors with misspecified models: Median in 100 repetitions. (p, m, n) =
(300, 300, 500).
53
| 10 |
EQUIVARIANT MODELS OF SPHERICAL VARIETIES
MIKHAIL BOROVOI
WITH AN APPENDIX BY
arXiv:1710.02471v3 [math.AG] 17 Feb 2018
GIULIANO GAGLIARDI
Abstract. Let G be a connected semisimple group over an algebraically closed field
k of characteristic 0. Let Y = G/H be a spherical homogeneous space of G, and let
Y ′ be a spherical embedding of Y . Let k0 be a subfield of k. Let G0 be a k0 -model
(k0 -form) of G. We show that if G0 is an inner form of a split group and if the subgroup
H of G is spherically closed, then Y admits a G0 -equivariant k0 -model. If we replace
the assumption that H is spherically closed by the stronger assumption that H coincides
with its normalizer in G, then both Y and Y ′ admit G0 -equivariant k0 -models, and these
models are unique.
Contents
0.
Introduction
2
1.
Semi-morphisms of k-schemes
4
2.
Semi-morphisms of G-varieties
9
3.
Quotients
11
4.
Semi-morphisms of homogeneous spaces
12
5.
k-automorphisms of homogeneous spaces
14
6.
Equivariant models of G-varieties
15
7.
Spherical homogeneous spaces and their combinatorial invariants
17
8.
Action of an automorphism of the base field
on the combinatorial invariants of a spherical homogeneous space
21
9.
Equivariant models of automorphism-free spherical homogeneous spaces
24
10.
Equivariant models of spherically closed spherical homogeneous spaces
25
11.
Equivariant models of spherical embeddings
of automorphism-free spherical homogeneous spaces
31
Appendix A. Algebraically closed descent
for spherical homogeneous spaces
Appendix B.
The action of the automorphism group on the colors
of a spherical homogeneous space
References
32
34
36
Date: February 20, 2018.
2010 Mathematics Subject Classification. 14M27, 14M17, 14G27, 20G15.
Key words and phrases. Spherical variety, spherical homogeneous space, spherical embedding, color,
model, form, semi-automorphism.
This research was partially supported by the Hermann Minkowski Center for Geometry and by the
Israel Science Foundation (grant No. 870/16).
1
2
MIKHAIL BOROVOI
WITH AN APPENDIX BY
GIULIANO GAGLIARDI
0. Introduction
Let G be a connected semisimple group over an algebraically closed field k of characteristic 0. Let Y be a G-variety, that is, an (irreducible) algebraic variety over k together
with a morphism
θ : G ×k Y → Y
defining an action of G on Y . We say that (Y, θ) is a G-k-variety or just that Y is a
G-k-variety.
Let k0 ⊂ k be a subfield. Let G0 be a k0 -model (k0 -form) of G, that is, an algebraic
group over k0 together with an isomorphism of algebraic k-groups
∼
κG : G0 ×k0 k → G.
By a G0 -equivariant k0 -model of the G-k-variety (Y, θ) we mean a G0 -k0 -variety (Y0 , θ0 )
∼
together with an isomorphism κY : Y0 ×k0 k → Y such that the diagram (26) commutes,
see Section 6 below.
From now on till the end of the Introduction we assume that Y is a spherical homogeneous space of G. This means that Y = G/H (with the natural action of G) for some
algebraic subgroup H ⊂ G and that a Borel subgroup B of G has an open orbit in Y .
Let Y ֒→ Y ′ be a spherical embedding of Y = G/H. This means that Y ′ is a G-kvariety, that Y ′ is a normal variety, and that Y ′ contains Y as an open dense G-orbit.
Then B has an open dense orbit in Y ′ .
Inspired by the works of Akhiezer and Cupit-Foutou [ACF14], [Akh15], [CF15], for a
given k0 -model G0 of G we ask whether there exist a G0 -equivariant k0 -model Y0 of Y and
a G0 -equivariant k0 -model Y0′ of Y ′ .
Since char k = 0, by a result of Alexeev and Brion [AB05, Theorem 3.1], see Knop’s
MathOverflow answer [Kn17b] and Appendix A below, the spherical subgroup H of G
is conjugate to some (spherical) subgroup defined over the algebraic closure of k0 in k.
Therefore, from now on we assume that k is an algebraic closure of k0 . We set Γ =
Gal(k/k0 ) (the Galois group of k over k0 ).
Let T be a maximal torus of G contained in a Borel subgroup B. We consider the
Dynkin diagram Dyn(G) = Dyn(G, T, B). The k0 -model G0 of G defines the so-called
∗-action of Γ = Gal(k/k0 ) on the Dynkin diagram Dyn(G), see Tits [Tits66, Section 2.3,
p. 39]. In other words, we obtain a homomorphism
ε : Γ → Aut Dyn(G).
The k0 -group G0 is called an inner form (of a split group) if the ∗-action is trivial, that
is, if εγ = id for all γ ∈ Γ. For example, if G is a simple group of any of the types
A1 , Bn , Cn , E7 , E8 , F4 , G2 , then any k0 -model G0 of G is an inner form, because in these
cases Dyn(G) has no nontrivial automorphisms. If G is a split k-group, then of course G
is an inner form.
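As an illustration (ours, not from the text; the classification facts below are standard): take k0 = R, k = C, Γ = {1, γ} with γ complex conjugation, and G = SL_{n,C} with n ≥ 3, so that Aut Dyn(G) ≅ S2. Then the real models of G sort into inner and outer forms as follows.

\[
% Assumed example: real forms of SL_{n,C}, n >= 3.
\varepsilon_\gamma=\mathrm{id}\ \ \text{for}\ \ G_0=\mathrm{SL}_{n,\mathbf R}\ \text{and}\ G_0=\mathrm{SL}_m(\mathbf H)\ (n=2m)\quad(\text{inner forms}),
\]
\[
\varepsilon_\gamma\neq\mathrm{id}\ \ \text{for}\ \ G_0=\mathrm{SU}(p,q),\ p+q=n\quad(\text{outer forms}).
\]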
Let D(Y ) denote the set of colors of Y = G/H, that is, the (finite) set of the closures of
B-orbits of codimension one in Y . A spherical subgroup H ⊂ G is called spherically closed
if the automorphism group AutG (Y ) = NG (H)/H acts on D = D(Y ) faithfully, that is, if
the homomorphism
AutG (Y ) → Aut(D)
is injective. Here NG (H) denotes the normalizer of H in G.
Example 0.1. Let k = C, G = PGL2,C , H = T (a maximal torus), Y = G/T . Then
|NG (T )/T | = 2, and the spherical homogeneous space Y of G has exactly two colors, which
are swapped by the non-unit element of NG (T )/T . We see that the subgroup H = T of
G is spherically closed.
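To make the two colors in Example 0.1 explicit (a standard computation that we include only as an illustration; B below denotes the image in PGL2 of the upper-triangular matrices, with fixed point ∞ ∈ P¹):

\[
Y=G/T\;\cong\;\{(p,q)\in\mathbf P^1\times\mathbf P^1\mid p\neq q\},
\qquad
gT\mapsto(g\cdot 0,\,g\cdot\infty),
\]
\[
D^{+}=\overline{\{p=\infty\}},\qquad D^{-}=\overline{\{q=\infty\}}.
\]

The nontrivial element of NG(T)/T acts by (p, q) ↦ (q, p) and therefore swaps D⁺ and D⁻, as asserted above.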
Theorem 0.2. Let G be a connected semisimple group over an algebraically closed field
k of characteristic 0. Let Y = G/H be a spherical homogeneous space of G. Let k0 be a
subfield of k such that k is an algebraic closure of k0 . Let G0 be a k0 -model of G. Assume
that:
(i) G0 is an inner form,
(ii) H is spherically closed.
Then Y admits a G0 -equivariant k0 -model Y0 .
Theorem 0.2 is a special case of the more general Theorem 10.2 below, where instead of
assuming that G0 is an inner form, we assume only that for all γ ∈ Γ the automorphism εγ
of Dyn(G) preserves the combinatorial invariants (Luna-Losev invariants) of the spherical
homogeneous space Y . Note that we have to assume that char k = 0 because in the proof
we use Losev’s uniqueness theorem [Lo09, Theorem 1], which has been proved only in
characteristic 0. Note also that Y = G/H might have no G0 -equivariant k0 -models if H
is not spherically closed, see Example 10.11 below.
Theorem 0.2 was inspired by Theorem 1.1 of Akhiezer [Akh15] and by Corollary 1 of
Cupit-Foutou [CF15, Section 2.5].
Note that the G0 -equivariant k0 -model Y0 in Theorem 0.2 is in general not unique. The
following theorem is a special case of the more general Theorem 10.13 below.
Theorem 0.3. In Theorem 0.2 the set of isomorphism classes of G0 -equivariant k0 -models
of Y = G/H is naturally a principal homogeneous space of the abelian group
H^1(Γ, AutG(Y)) ≃ (Hom(Γ, S2))^{Ω^{(2)}}.
Here S2 is the symmetric group on two symbols (isomorphic to Z/2Z), Ω(2) = Ω(2)(Y) is the finite set defined in Section 7 below (before Definition 7.3), and ( . )^{Ω^{(2)}} denotes the group of maps from the set Ω(2) to the group in the parentheses.
In particular, for k0 = R we have Hom(Γ, S2 ) = S2 , and therefore, the number of these
isomorphism classes is 2s , where s = |Ω(2) |. For G and Y as in Example 0.1 we have s = 1,
hence for each of the two R-models of G there are exactly two non-isomorphic equivariant
R-models of Y , see Example 10.15 below.
Corollary 0.4 (Akhiezer’s theorem). In Theorem 0.2 instead of (ii) assume that
(ii′ ) H is self-normalizing, that is, NG (H) = H.
Then Y = G/H admits a G0 -equivariant k0 -model Y0 , and this model is unique up to a
unique isomorphism.
Indeed, since H is self-normalizing, it is spherically closed. By Theorem 0.2 Y admits
a G0 -equivariant k0 -model. The uniqueness assertion is obvious because AutG (Y ) = {1}.
Corollary 0.4 generalizes Theorem 1.1 of Akhiezer [Akh15], where the case k0 = R was
considered.
Theorem 0.5. Under the assumptions of Corollary 0.4, any spherical embedding Y ′ of
Y = G/H admits a G0 -equivariant k0 -model Y0′ . This k0 -model Y0′ is compatible with the
unique G0 -equivariant k0 -model Y0 of Y from Corollary 0.4, and hence is unique up to a
unique isomorphism.
Theorem 0.5 generalizes Theorem 1.2 of Akhiezer [Akh15], who proved in the case
k0 = R that the wonderful embedding of Y admits a unique G0 -equivariant R-model.
Our proof of Theorem 0.5 uses results of Huruguen [Hu11]. Note that in Theorem 0.5 we
do not assume that Y ′ is quasi-projective.
Theorems 0.2, 0.3, and 0.5 seem to be new even in the case k0 = R.
The plan of the rest of the paper is as follows. In Sections 1–6 we consider semilinear
morphisms and models for general G-varieties and homogeneous spaces of G, not necessarily spherical. In Sections 7–8 we consider combinatorial invariants of spherical homogeneous spaces. Following ideas of Akhiezer [Akh15, Theorem 1.1] and Cupit-Foutou [CF15,
Theorem 3(1), Section 2.2], for γ ∈ Γ = Gal(k/k0 ) we give a criterion of isomorphism of
a spherical homogeneous space Y = G/H and the “conjugate” variety γ∗ Y = G/γ(H)
in terms of the action of γ on the combinatorial invariants of G/H. In Sections 9–11 we
prove Corollary 0.4, Theorem 0.2, Theorem 0.3, and Theorem 0.5. In Appendix A for a
connected reductive group G0 defined over an algebraically closed field k0 of characteristic
0 and for an algebraically closed extension k ⊃ k0 , it is proved that any spherical subgroup
H of the base change G = G0 ×k0 k is conjugate to a (spherical) subgroup defined over k0 .
In Appendix B, following Friedrich Knop’s MathOverflow answer [Kn17a] to the author’s
question, Giuliano Gagliardi gives a proof of an unpublished theorem of Ivan Losev that
describes the image of AutG (G/H) = NG (H)/H in the group of permutations of D(G/H).
Our proofs of Theorems 0.2, 0.3, and 0.5 use this result of Losev.
Acknowledgements. The author is very grateful to Friedrich Knop for answering the
author’s numerous MathOverflow questions, especially for the answer [Kn17a], to Giuliano
Gagliardi for writing Appendix B, and to Roman Avdeev for suggesting Example 10.9 and
proving Proposition 10.10. It is a pleasure to thank Michel Brion for very helpful e-mail
correspondence. The author thanks Dmitri Akhiezer, Stéphanie Cupit-Foutou, Cristian
D. González-Avilés, David Harari, Boris Kunyavskiı̆, and Stephan Snigerov for helpful
discussions. This preprint was written during the author’s visits to the University of La
Serena (Chile) and to the Paris-Sud University, and he is grateful to the departments of
mathematics of these universities for support and excellent working conditions.
Notation and assumptions.
k is a field. In Section 2, and everywhere starting from Section 4, k is algebraically closed.
k0 is a subfield of the algebraically closed field k such that k is a Galois extension of k0
(except for Appendix A), hence k0 is perfect.
A k-variety is a reduced separated scheme of finite type over k, not necessarily irreducible.
An algebraic k-group is a smooth k-group scheme of finite type over k, not necessarily
connected. All algebraic k-subgroups are assumed to be smooth.
1. Semi-morphisms of k-schemes
Let k be a field and let Spec k denote the spectrum of k. By a k-scheme we mean a pair
(Y, pY ), where Y is a scheme and pY : Y → Spec k is a morphism of schemes. Let (Y, pY )
and (Z, pZ ) be two k-schemes. By a k-morphism
λ : (Y, pY ) → (Z, pZ )
we mean a morphism of schemes λ : Y → Z such that the following diagram commutes:
          λ
     Y ──────→ Z
     │pY        │pZ
     ↓          ↓
  Spec k ──id──→ Spec k
Let γ : k → k be an automorphism of k (we write γ ∈ Aut(k)). Let
γ ∗ := Spec γ : Spec k → Spec k
denote the induced automorphism of Spec k, then (γγ ′ )∗ = (γ ′ )∗ ◦ γ ∗ .
Let (Y, pY ) be a k-scheme. By abuse of notation we write just that Y is a k-scheme. We
define the γ-conjugated k-scheme γ∗ (Y, pY ) = (γ∗ Y, γ∗ pY ) to be the base change of (Y, pY )
from Spec k to Spec k via γ ∗ . By abuse of notation we write just γ∗ Y for γ∗ (Y, pY ).
Lemma 1.1. Let (Y, pY ) be a k-scheme, and let γ ∈ Aut(k). Then the γ-conjugated
k-scheme γ∗ (Y, pY ) is canonically isomorphic to (Y, (γ ∗ )−1 ◦ pY )
Proof. Write (X, pX ) = γ∗ (Y, pY ), then X comes with a canonical morphism λ : X → Y
such that the following diagram commutes:
          λ
     X ──────→ Y
     │pX        │pY
     ↓          ↓
  Spec k ──γ*──→ Spec k
Since (γ −1 )∗ (γ∗ (Y, pY )) is canonically isomorphic to (Y, pY ), one can easily see that λ is
an isomorphism of schemes. From the above diagram we obtain a commutative diagram
          λ
     X ──────→ Y
     │pX        │(γ*)⁻¹ ∘ pY
     ↓          ↓
  Spec k ──id──→ Spec k

which gives a canonical isomorphism of k-schemes (X, pX) → (Y, (γ*)⁻¹ ∘ pY).
We define an action of γ : k → k on k-points. Let y be a k-point of Y , that is, a
morphism y : Spec k → Y such that pY ◦ y = idSpec k . We denote
(1)    γ!(y) = y ∘ γ* : Spec k → Spec k → Y,

then an easy calculation shows that γ!(y) is a k-point of γ∗Y. Thus we obtain a bijection

(2)    γ! : Y(k) → (γ∗Y)(k),   y ↦ γ!(y).
Let G be a k-group scheme. Following Flicker, Scheiderer, and Sujatha [FSS98, (1.2)],
we define the k-group scheme γ∗ G to be the base change of G from Spec k to Spec k via
γ ∗ . Then the map (2)
γ! : G(k) → (γ∗ G)(k)
is an isomorphism of groups (because for any field extension λ : k ↪ k′ the corresponding
map on rational points
λ! : G(k) → (G ×k k′ )(k′ )
is a homomorphism). If H ⊂ G is a k-group subscheme, then γ∗ H is naturally a k-group
subscheme of γ∗ G (because a base change of a group subscheme is a group subscheme).
From the commutative diagram
  H(k) ──γ!──→ (γ∗H)(k)
    ∩              ∩
  G(k) ──γ!──→ (γ∗G)(k)

we see that

(3)    (γ∗H)(k) = γ!(H(k)) ⊂ (γ∗G)(k).
Let (Y, θ) be a G-k-scheme (a G-scheme over k), where
θ : G ×k Y → Y,
is an action of G on Y . By abuse of notation we write just that Y is a G-k-scheme. Again
we define the γ∗ G-k-scheme γ∗ (Y, θ) = (γ∗ Y, γ∗ θ) to be the base change of (Y, θ) from
Spec k to Spec k via γ ∗ .
Definition 1.2. Let (Y, pY ) and (Z, pZ ) be two k-schemes. A semilinear morphism
(γ, ν) : (Y, pY ) → (Z, pZ )
is a pair (γ, ν) where γ : k → k is an automorphism of k, and ν : Y → Z is a morphism of
schemes such that the following diagram commutes:
          ν
     Y ──────→ Z
     │pY        │pZ
     ↓          ↓
  Spec k ──(γ*)⁻¹──→ Spec k
We shorten “semilinear morphism” to “semi-morphism”. We write “ν : (Y, pY ) → (Z, pZ )
is a γ-semi-morphism” if (γ, ν) : (Y, pY ) → (Z, pZ ) is a semi-morphism. Then by abuse of
notation we write just that ν : Y → Z is a γ-semi-morphism.
Note that if we take γ = idk, then an idk-semi-morphism (Y, pY) → (Z, pZ) is just a
morphism of k-schemes.
Lemma 1.3. If (γ, ν) : (Y, pY ) → (Z, pZ ) is a semi-morphism of nonempty k-schemes,
then the morphism of schemes ν : Y → Z uniquely determines γ.
Proof. We may and shall assume that Y and Z are affine, Y = Spec RY , Z = Spec RZ .
Then we have a commutative diagram
(4)
    RY ←──ν*──── RZ
     ↑             ↑
     k ←──γ⁻¹──── k
Since k is a field, the vertical arrows are injective, and therefore, the homomorphism of
rings ν ∗ uniquely determines the automorphism γ −1 .
We define an action of a semi-morphism (γ, ν) : (Y, pY ) → (Z, pZ ) on k-points. If
y : Spec k → Y is a k-point of (Y, pY ), we set
(5)    (γ, ν)(y) = ν ∘ y ∘ γ* : Spec k → Z,
which is a k-point of (Z, pZ ). This formula is compatible with the usual formula for
the action of a k-morphism on k-points. By abuse of notation we write ν(y) instead of
(γ, ν)(y).
Definition 1.4. By a γ-semi-isomorphism ν : (Y, pY) → (Z, pZ) we mean a γ-semi-morphism ν : (Y, pY) → (Z, pZ) for which the morphism of schemes ν : Y → Z is an isomorphism. By a γ-semi-automorphism of a k-scheme (Y, pY) we mean a γ-semi-isomorphism μ : (Y, pY) → (Y, pY).
Let us fix γ ∈ Aut(k). The commutative diagram
(6)
              ν
       Y ─────────→ Z
       │ ╲           │
    pY │  ╲ γ∗pY     │ pZ
       ↓   ↘         ↓
    Spec k ──(γ*)⁻¹──→ Spec k
shows that
(7)    (γ, ν) : (Y, pY) → (Z, pZ)

is a semi-morphism (that is, ν : (Y, pY) → (Z, pZ) is a γ-semi-morphism) if and only if

(8)    (idk, ν) : γ∗(Y, pY) → (Z, pZ)

is a semi-morphism (that is, ν : γ∗(Y, pY) → (Z, pZ) is a k-morphism). For brevity we write

(9)    ν♮ : γ∗Y → Z

for the k-morphism (8); then the k-morphism ν♮ acts on k-points as follows:

(10)    (y′ : Spec k → γ∗Y) ↦ (ν ∘ y′ : Spec k → Z).
Example 1.5. Let (Y, pY ) be a k-scheme. Recall that γ∗ (Y, pY ) = (Y, (γ ∗ )−1 ◦ pY ). The
commutative diagram
          idY
     Y ──────→ Y
     │pY        │(γ*)⁻¹ ∘ pY
     ↓          ↓
  Spec k ──(γ*)⁻¹──→ Spec k
shows that (γ, idY ) : Y → γ∗ Y is a γ-semi-isomorphism. We denote this γ-semi-isomorphism
by
γ! : Y → γ∗ Y.
Comparing formulas (1) and (5), we see that the γ-semi-isomorphism γ! : Y → γ∗ Y acts
on k-points as the bijective map γ! : Y (k) → (γ∗ Y )(k) defined by formula (1).
Now let ν : Y → Z be a γ-semi-morphism. The commutative diagram (6) shows that
(11)    ν = ν♮ ∘ γ! = (idk, ν) ∘ ((γ*)⁻¹, idY) : Y ──γ!──→ γ∗Y ──ν♮──→ Z,
where γ! is a γ-semi-isomorphism and ν♮ is a k-morphism (an idk -semi-morphism). It
follows that
(12)    ν(y) = ν♮(γ!(y))   for y ∈ Y(k)
(this follows also from comparing formulas (1), (5), and (10)).
Example 1.6. Let Y0 be a k0 -scheme, where k0 is a subfield of k. Let γ ∈ Aut(k/k0 ),
that is, γ is an automorphism of k that fixes all elements of k0 . Consider
Y := Y0 ×k0 k = Y0 ×Spec k0 Spec k
and
µγ = idY0 × (γ ∗ )−1 : Y → Y.
It follows from the construction of µγ that the following diagram commutes:
          μγ
     Y ──────→ Y
     │pY        │pY
     ↓          ↓
  Spec k ──(γ*)⁻¹──→ Spec k
We see that µγ is a γ-semi-automorphism of Y .
Let Y be an affine k-variety, Y = Spec RY , then RY is the ring of regular functions on
Y . If f ∈ RY , then for any y ∈ Y (k) the value f (y) ∈ k is defined.
Lemma 1.7. Let ν : (Y, pY ) → (Z, pZ ) be a γ-semi-isomorphism of affine k-varieties,
where γ : k → k is an automorphism of k. Let Y = Spec RY , Z = Spec RZ , and let
ν ∗ : RZ → RY denote the morphism of rings corresponding to ν. Let fZ ∈ RZ . Then
(13)    fZ(ν(y)) = γ((ν*fZ)(y))   for all y ∈ Y(k).
Proof. The assumption that ν : Y → Z is a γ-semi-morphism means that the diagram (4) commutes. A k-point y ∈ Y(k) corresponds to a homomorphism of k-algebras ϕy : RY → k, and the following diagram commutes:

    RY ←──ν*──── RZ
     │ϕy           │ϕν(y)
     ↓             ↓
     k ←──γ⁻¹──── k

hence ϕν(y) = γ ∘ ϕy ∘ ν*. We set fY = ν*fZ ∈ RY, then fY(y) = ϕy(fY), and (13) means that

(γ ∘ ϕy ∘ ν*)(fZ) = γ(ϕy(ν*fZ)),

which is obvious.
Now let ν : (Y, pY ) → (Z, pZ ) be a γ-semi-isomorphism of irreducible k-varieties, where
γ : k → k is an automorphism of k. Then the isomorphism of schemes ν : Y → Z induces
an isomorphism of the fields of rational functions
ν∗ : K(Y) → K(Z),   f ↦ ν∗f.
For f ∈ K(Y ) and y ∈ Y (k), the value f (y) ∈ k ∪ {∞} of f at y is defined, where we
write f (y) = ∞ if f is not regular at y.
Corollary 1.8. With the above notation and assumptions we have
(ν∗ fY )(z) = γ(fY (ν −1 (z)))
for all fY ∈ K(Y ), z ∈ Z(k).
Proof. We consider the isomorphism
ν* = (ν∗)⁻¹ : K(Z) → K(Y),   fZ ↦ ν*fZ,
where fZ ∈ K(Z).
Set fZ = ν∗ fY , then fY = ν ∗ fZ . We must prove that (13) holds. We may and shall
assume that Y and Z are affine varieties, Y = Spec RY , Z = Spec RZ , fZ ∈ RZ , and
that the morphism ν corresponds to a homomorphism of rings ν ∗ : RZ → RY . Now the
corollary follows from Lemma 1.7.
Remark 1.9 (Classical language). In this remark we describe the variety γ∗Y and the map γ! : Y(k) → (γ∗Y)(k) in the language of classical algebraic geometry. First, consider the affine space A^n_k; then A^n_k(k) = k^n. Let k0 be the prime subfield of k, that is, the subfield generated by 1; then A^n_k = A^n_{k0} ×_{k0} k. Let γ ∈ Aut(k) = Aut(k/k0), then γ induces a γ-semi-automorphism μγ : A^n_k → A^n_k, see Example 1.6. For i = 1, . . . , n, let fi denote the i-th coordinate function on A^n_k, which is a regular function. Since fi comes from a regular function on A^n_{k0}, we have (μγ)*fi = fi, and by Lemma 1.7 we have

fi(μγ(x)) = γ(fi(x))   for x ∈ A^n_k(k) = k^n.

If we write x = (x_i)_{i=1}^n ∈ k^n, where x_i = fi(x) ∈ k, then

μγ(x) = (γ(x_i))_{i=1}^n.

Now let Y ⊂ A^n_k be an affine variety. Let ι : Y ↪ A^n_k denote the inclusion morphism; then γ induces a k-morphism

γ∗ι : γ∗Y → γ∗A^n_k.

We have a k-isomorphism

(μγ)♮ : γ∗A^n_k → A^n_k,

and we obtain a k-morphism

(γ∗ι)′ = (μγ)♮ ∘ γ∗ι : γ∗Y → A^n_k.

From the commutative diagram

    Y(k) ──γ!──→ (γ∗Y)(k)
     │ι             │γ∗ι
     ↓              ↓
  A^n_k(k) ──γ!──→ (γ∗A^n_k)(k)

we see that

(γ∗ι)(γ!(y)) = γ!(ι(y))   for y ∈ Y(k),

hence

(γ∗ι)′(y) = (μγ)♮(γ!(ι(y))) = μγ(ι(y)).

Now we assume that k is algebraically closed. As usual in classical algebraic geometry, we identify Y with the algebraic set

Y(k) ⊂ A^n_k(k) = k^n.

Furthermore, we identify γ∗Y with the algebraic set

(γ∗ι)′(Y(k)) = μγ(Y(k)) ⊂ k^n.

We see that

γ∗Y = μγ(Y) = {(γ(y_i))_{i=1}^n | (y_i)_{i=1}^n ∈ Y},

and that the map γ! : Y(k) → (γ∗Y)(k) sends a point y with coordinates (y_i)_{i=1}^n to the point with coordinates (γ(y_i))_{i=1}^n. If Y ⊂ k^n is defined by a family of polynomials (P_α)_{α∈A}, then γ∗Y ⊂ k^n is defined by the family (γ(P_α))_{α∈A}, where γ(P_α) is the polynomial obtained from P_α by applying γ to the coefficients.
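For instance (our illustration of the last sentence above, with k = C and γ complex conjugation, so that γ(Pα) conjugates the coefficients):

\[
% A hypothetical affine curve Y in C^2 and its conjugate variety:
Y=\{(x,y)\in\mathbf C^2\mid y^2=x^3+ix\}
\;\Longrightarrow\;
\gamma_*Y=\{(x,y)\in\mathbf C^2\mid y^2=x^3-ix\},
\qquad
\gamma_!\colon(x,y)\mapsto(\bar x,\bar y).
\]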
2. Semi-morphisms of G-varieties
2.1. In this section k is an algebraically closed field, and Y is a k-variety, that is, a reduced
separated scheme of finite type over k.
Let G be an algebraic group over k (we write also “an algebraic k-group”), that is, a
smooth group scheme of finite type over k. Let (Y, θ) be a G-k-variety, that is, a k-variety
Y together with an action
θ : G ×k Y → Y
of G on Y. If g ∈ G(k) and y ∈ Y(k), we write just g ∗_Y y or g · y for θ(g, y) ∈ Y(k).
Definition 2.2 (cf. [FSS98, (1.2)]). Let γ ∈ Aut(k). A γ-semi-automorphism of an
algebraic k-group G is a γ-semi-automorphism of k-schemes τ : G → G such that the
corresponding isomorphism of k-varieties τ♮ : γ∗ G → G, see (9), is an isomorphism of algebraic k-groups. This condition is the same as to require that certain diagrams containing
τ commute, see [Brv93, 1.2].
Let H ⊂ G be an algebraic k-subgroup. By Definition 2.2 we have τ(H) = τ♮(γ∗H), hence τ(H) is a k-subgroup of G. We have (τ(H))(k) = τ(H(k)).
We shall always assume that G = G0 ×k0 k, where G0 is an algebraic group defined over
a subfield k0 of k, and that γ ∈ Aut(k/k0 ), that is, γ is an automorphism of k fixing all
elements of k0 . Then we have a γ-semi-automorphism
σγ = idG0 × (γ ∗ )−1 : G → G,
compare Example 1.6. If α is any k-automorphism of G, then
τ := α ◦ σγ : G → G
is a γ-semi-automorphism of G, and all γ-semi-automorphisms of G (for given γ) can be
obtained in this way.
Definition 2.3. Let G be an algebraic k-group, and let (Y, θY) and (Z, θZ) be two G-k-varieties. Let γ ∈ Aut(k), and let τ : G → G be a γ-semi-automorphism of G. A τ-equivariant γ-semi-morphism
ν : (Y, θY ) → (Z, θZ )
is a γ-semi-morphism ν : Y → Z such that the following diagram commutes:
(14)
              θY
    G ×k Y ──────→ Y
      │τ×ν          │ν
      ↓             ↓
    G ×k Z ──────→ Z
              θZ
where we write τ × ν for the product of τ and ν over the automorphism (γ ∗ )−1 of Spec k.
Since k is algebraically closed, G is smooth (reduced), and Y and Z are reduced, we see
that the diagram (14) commutes if and only if
ν(g · y) = τ (g) · ν(y) for all g ∈ G(k), y ∈ Y (k).
Construction 2.4. Let G be an algebraic k-group, and let (Y, θY ) be a G-k-variety. The
group γ∗ G naturally acts on γ∗ Y : the action θ : G ×k Y → Y gives an action
(15)    γ∗θ : γ∗G ×k γ∗Y → γ∗Y.
By definition, a γ-semi-automorphism τ of G defines an isomorphism of algebraic k-groups
τ♮ : γ∗ G → G.
We identify G and γ∗ G via τ♮ and obtain from (15) an action
τ ∗ γ∗θ : G ×k γ∗Y → γ∗Y,   (g, y′) ↦ (γ∗θ)(τ♮⁻¹(g), y′)   for g ∈ G(k), y′ ∈ (γ∗Y)(k).
By abuse of notation we write τ ∗ θ for τ ∗ γ∗ θ and we write γ∗ Y for the G-k-variety
(γ∗ Y, τ ∗ γ∗ θ). We write
g ∗τ y ′
for (τ ∗ θ)(g, y ′ ),
where g ∈ G(k), y ′ ∈ (γ∗ Y )(k).
By formula (12) we have
τ (g) = τ♮ (γ! (g)),
hence
(16)    g ∗τ y′ = (γ∗θ)(τ♮⁻¹(g), y′) = γ!(θ(γ!⁻¹(τ♮⁻¹(g)), γ!⁻¹(y′)))
              = γ!(θ(τ⁻¹(g), γ!⁻¹(y′))) = γ!(τ⁻¹(g) ∗_Y γ!⁻¹(y′)).
Lemma 2.5. Let G be an algebraic k-group, and let (Y, θ) be a G-k-variety. Let γ ∈
Aut(k), and let τ : G → G be a γ-semi-automorphism of G. Let y (0) ∈ Y (k) be a k-point,
and write H = StabG (y (0) ). Consider the action
τ ∗ θ : G ×k γ∗ Y → γ∗ Y.
Then the stabilizer in G(k) of the point γ! (y (0) ) ∈ (γ∗ Y )(k) under the action τ ∗ θ is
τ (H(k)) = (τ (H))(k).
Proof. By formula (16) we have
g ∗τ γ! (y (0) ) = γ! (τ −1 (g) · y (0) ).
Since the stabilizer in G(k) of y (0) ∈ Y (k) is H(k), the lemma follows.
Note that ν in Definition 2.3 defines a k-morphism
ν♮ : γ∗ Y → Z,
see (9).
Lemma 2.6. Let γ ∈ Aut(k) and let τ : G → G be a γ-semi-automorphism of G. Let (Y, θY) and (Z, θZ) be two G-k-varieties. A morphism of schemes ν : Y → Z is a τ-equivariant γ-semi-morphism if and only if ν♮ : γ∗Y → Z is a G-equivariant morphism of k-varieties.
Proof. By (6) the morphism of schemes ν is a γ-semi-morphism Y → Z if and only if it is
a k-morphism γ∗ Y → Z.
Let g ∈ G(k), y′ ∈ (γ∗Y)(k). Using formula (16) we obtain

ν♮(g ∗τ y′) = ν(γ!⁻¹(g ∗τ y′)) = ν(τ⁻¹(g) ∗_Y γ!⁻¹(y′)).

We have also

g ∗_Z ν(γ!⁻¹(y′)) = g ∗_Z ν♮(y′).

If ν is τ-equivariant, then

ν(τ⁻¹(g) ∗_Y γ!⁻¹(y′)) = g ∗_Z ν(γ!⁻¹(y′)),

and we obtain that

ν♮(g ∗τ y′) = g ∗_Z ν♮(y′)   for all g ∈ G(k), y′ ∈ (γ∗Y)(k),

hence ν♮ is G-equivariant.

Conversely, if ν♮ is G-equivariant, we obtain from the above calculations that

ν(τ⁻¹(g) ∗_Y γ!⁻¹(y′)) = ν♮(g ∗τ y′) = g ∗_Z ν♮(y′) = g ∗_Z ν(γ!⁻¹(y′)).

Set y = γ!⁻¹(y′), g′ = τ⁻¹(g); then we obtain that

ν(g′ ∗_Y y) = τ(g′) ∗_Z ν(y)   for all g′ ∈ G(k), y ∈ Y(k).

Thus ν is τ-equivariant.
Corollary 2.7. Let γ be an automorphism of k, and let τ : G → G be a γ-semi-automorphism
of G. Let (Y, θ) be a G-k-variety. There exists a τ -equivariant γ-semi-automorphism
µ : Y → Y if and only if the G-k-variety (γ∗ Y, τ ∗ θ) is isomorphic to (Y, θ).
Proof. We take Z = Y in Lemma 2.6.
3. Quotients
Let k be a field (not necessarily algebraically closed). By an algebraic scheme over k
we mean a scheme of finite type over k. By an algebraic group scheme over k we mean a
group scheme over k whose underlying scheme is of finite type over k.
Let H be an algebraic group subscheme of an algebraic group k-scheme G. A quotient
of G by H is an algebraic scheme Y over k equipped with an action θ : G ×k Y → Y
and a point y (0) ∈ Y (k) fixed by H satisfying certain properties (a) and (b), see Milne
[Mi18, Definition 5.20].
By [Mi18, Theorem 5.28] there exists a quotient of G by H. By [Mi18, Proposition 5.22]
this quotient (Y, θ, y (0) ) has the following universal property:
(U) Let Z be a k-scheme on which G acts, and let z (0) ∈ Z(k) be a point fixed by H. Then
there exists a unique G-equivariant map Y → Z making the following diagram commute:
         g ↦ g·y^(0)
     G ─────────────→ Y
       ╲              │
        ╲ g ↦ g·z^(0) │
         ↘            ↓
                      Z
Clearly the universal property (U) uniquely determines the quotient up to a unique
isomorphism, so we may take (U) as a definition of the quotient.
We return to our setting: k is an algebraically closed field, G is a linear algebraic k-group (a smooth affine group k-scheme) and H is a smooth algebraic k-subgroup of G. Since G is smooth, so is the quotient Y, see [Mi18, Corollary 5.26]. Since G is smooth and affine, the quotient Y is a separated algebraic scheme, see [Mi18, Theorem 7.18]. Thus Y is a k-variety, and therefore, in the universal property (U) defining Y we may assume that Z is a k-variety. Since k is algebraically closed and H is smooth, the condition “fixed by H” is equivalent to “fixed by H(k)”. Thus we arrive at the following definition of Springer:
Definition 3.1 (cf. Springer [Sp98, Section 5.5]). Let k be an algebraically closed field,
and let G be a linear algebraic k-group. Let H ⊂ G be a smooth k-subgroup. A quotient
of G by H is a pointed G-k-variety (Y, θ : G ×k Y → Y, y^(0) ∈ Y(k)) such that H(k) fixes y^(0), with the following universal property:
(U′ ) For any pointed G-k-variety (Z, θZ , z (0) ) such that the k-point z (0) ∈ Z(k) is fixed by
H(k), there exists a unique morphism of pointed G-k-varieties (Y, θ, y (0) ) → (Z, θZ , z (0) ).
For G and H as in Definition 3.1, let (Y, θ, y (0) ) be a quotient of G by H. The action
of G on Y induces a G-k-morphism
(17)    G → Y,   g ↦ g·y^(0),
where G acts on itself by left translations.
As usual, we write G/H for Y and g · H or gH for g · y (0) , where g ∈ G(k). In
particular, we write 1 · H for y (0) . The G-equivariant morphism G/H = Y → Z of (U′ )
sends 1·H ∈ (G/H)(k) to z (0) , hence for any g ∈ G(k) it sends the k-point gH ∈ (G/H)(k)
to g · z (0) ∈ Z(k). Thus the quotient G/H has the following universal property:
(U′′ ) For any pointed G-k-variety (Z, θZ , z (0) ) such that the k-point z (0) is fixed by H(k),
there exists a unique G-k-morphism G/H → Z sending gH to g · z (0) for any g ∈ G(k).
By [Mi18, Definition 5.20(a)] the morphism (17) induces an injective map G(k)/H(k) →
(G/H)(k) sending g · H(k) to gH. By [Mi18, Proposition 5.25] the morphism (17) is
faithfully flat, and therefore, since k is algebraically closed, we see that the induced map
G(k)/H(k) → (G/H)(k) is surjective. We conclude that this map is bijective. Thus any
k-point of G/H is of the form gH, where g ∈ G(k).
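A standard instance of these constructions (added for illustration; it is not used later): for G = SL2,k and H = B the Borel subgroup of upper-triangular matrices, the quotient is the projective line,

\[
G/B\;\cong\;\mathbf P^1_k,
\qquad
gB\mapsto g\cdot[1:0],
\]

and (U′′) produces, for every pointed G-k-variety (Z, θZ, z^(0)) with z^(0) fixed by B(k), the unique G-k-morphism gB ↦ g·z^(0).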
4. Semi-morphisms of homogeneous spaces
Let k be an algebraically closed field.
Lemma 4.1 (well-known). Let G be a linear algebraic k-group over an algebraically closed
field k, and let H1 , H2 be two k-subgroups. Then Y1 = G/H1 and Y2 = G/H2 are isomorphic as G-k-varieties if and only if the subgroups H1 and H2 are conjugate. To be more
precise, for a ∈ G(k) the following two assertions are equivalent:
(i) There exists an isomorphism of G-k-varieties φa : G/H1 → G/H2 taking g · H1 to
ga−1 · H2 for g ∈ G(k);
(ii) H1 = a−1 H2 a.
Proof. (i)⇒(ii). Clearly StabG(k) (1 · H1 ) = H1 (k) and StabG(k) (a−1 · H2 ) = a−1 · H2 (k) · a.
Since φa (1 · H1 ) = a−1 · H2 , these stabilizers coincide, whence (ii).
(ii)⇒(i). Set Y2 = G/H2, y2^(0) = 1·H2 ∈ Y2(k), y′ = a⁻¹·y2^(0) ∈ Y2(k), then
StabG(k) (y ′ ) = a−1 H2 (k)a = H1 (k), so by the property (U′′ ) of the quotient G/H1 there
exists a unique morphism of G-varieties φa : G/H1 → G/H2 such that
φa (g · H1 ) = g · a−1 · H2
for g ∈ G(k).
Similarly, since the stabilizer in G(k) of a·H1 ∈ (G/H1)(k) is H2(k), there exists a unique morphism of G-varieties ψa : G/H2 → G/H1 such that

ψa(g·H2) = g·a·H1   for g ∈ G(k).
Clearly these two morphisms are mutually inverse, hence both φa and ψa are isomorphisms.
4.2. Let k be an algebraically closed field. Let G be a linear algebraic group over k. Let
γ ∈ Aut(k). Let τ : G → G be a γ-semi-automorphism of G.
Let H ⊂ G be a smooth k-subgroup. Set Y = G/H, then we have a morphism
θ : G ×k Y → Y defining the action of G on Y . Furthermore, the variety Y has a k-point
y (0) = 1 · H such that StabG(k) (y (0) ) = H(k), and the group of k-points G(k) acts on Y (k)
transitively.
Consider the variety γ∗ Y , the action γ∗ θ : γ∗ G ×k γ∗ Y → γ∗ Y of γ∗ G on γ∗ Y , and the
k-point γ! (y (0) ) ∈ (γ∗ Y )(k). As in Construction 2.4 we obtain an action
τ ∗ θ : G ×k γ∗ Y → γ∗ Y.
Lemma 4.3. Let k, G, γ, τ , H be as in Subsection 4.2. Set Y = G/H and let θ : G×k Y →
Y denote the canonical action. Consider the map on k-points
(18)    (G/H)(k) → (G/τ(H))(k),   g·H ↦ τ(g)·τ(H)   for g ∈ G(k).
Then the following assertions hold:
(i) The pointed G-k-variety (γ∗ Y, τ ∗ θ, γ! (y (0) )) is isomorphic to G/τ (H);
(ii) the map (18) is induced by some γ-semi-isomorphism ν : G/H → G/τ (H).
Proof. Let (Z, θZ , z (0) ) be a pointed G-k-variety, and assume that τ (H(k)) fixes z (0) . Consider the pointed G-k-variety
((γ⁻¹)∗Z, (τ⁻¹)∗θZ, γ!⁻¹(z^(0))),

where the action

(19)    (τ⁻¹)∗θZ : G ×k (γ⁻¹)∗Z → (γ⁻¹)∗Z

is defined as in Construction 2.4, but for the pair (γ⁻¹, τ⁻¹) instead of (γ, τ). By Lemma 2.5, H(k) fixes γ!⁻¹(z^(0)) ∈ (γ⁻¹)∗Z. For any morphism of pointed G-k-varieties

(20)    κ : (Y, θ, y^(0)) → ((γ⁻¹)∗Z, (τ⁻¹)∗θZ, γ!⁻¹(z^(0)))

we obtain a morphism of pointed G-k-varieties

(21)    γ∗κ : (γ∗Y, τ∗θ, γ!(y^(0))) → (Z, θZ, z^(0)).

We see that the map κ ↦ γ∗κ is a bijection between the set of morphisms as in (20) and the set of morphisms as in (21). Since Y = G/H and H(k) fixes γ!⁻¹(z^(0)) under the action (19), we conclude by the universal property (U′′) for the quotient Y = G/H that
the former set contains exactly one element. It follows that the latter set contains exactly
one element, that is, the triple (γ∗ Y, τ ∗ θ, γ! (y (0) )) has the universal property (U′′ ). This
means that (γ∗ Y, τ ∗ θ, γ! (y (0) )) is a quotient of G by τ (H), which proves (i). It follows
that there exists an isomorphism of G-k-varieties
(22)    λ : γ∗Y → G/τ(H)   such that   g ∗τ γ!(y^(0)) ↦ g·τ(H).

We set

(23)    ν = λ ∘ γ! : G/H = Y ──γ!──→ γ∗Y ──λ──→ G/τ(H),
where γ! : Y → γ∗ Y is the γ-semi-morphism of Example 1.5. Then we have ν♮ = λ. Since ν♮
is an isomorphism of G-k-varieties, by Lemma 2.6 ν is a τ -equivariant γ-semi-isomorphism.
Since
ν(1 · H) = ν(y (0) ) = λ(γ! (y (0) )) = 1 · τ (H)
by (22), we have
ν(g·H) = ν(g·(1·H)) = τ(g)·ν(1·H) = τ(g)·τ(H),
which proves (ii).
Corollary 4.4. Let G be a linear algebraic k-group and H ⊂ G be an algebraic k-subgroup.
Set Y = G/H. Let γ ∈ Aut(k) and let τ : G → G be a γ-semi-automorphism of G. The
following three conditions are equivalent:
(i) There exists a τ -equivariant γ-semi-automorphism µ : Y → Y ;
(ii) The G-k-variety G/τ (H) is isomorphic to G/H;
(iii) The algebraic subgroup τ (H) ⊂ G is conjugate to H.
Proof. By Corollary 2.7 there exists µ : Y → Y as in (i) if and only if the G-k-variety
(γ∗ Y, τ ∗ θ) is isomorphic to (Y, θ). By construction (Y, θ) = G/H, and by Lemma 4.3
(γ∗Y, τ∗θ) ≅ G/τ(H). Thus (i)⇔(ii). By Lemma 4.1, (ii)⇔(iii).
4.5. With the assumptions of Subsection 4.2, assume also that G is connected, then
the homogeneous spaces Y1 = G/H and Y2 = G/τ (H) of G are irreducible k-varieties.
We consider the fields of rational functions K(Y1 ) and K(Y2 ). The γ-semi-isomorphism
ν : Y1 → Y2 of (23) induces an isomorphism of fields

ν∗ : K(Y1) → K(Y2),   f1 ↦ f2 = ν∗f1,

and by Corollary 1.8 we have

(24)    f2(y2) = γ(f1(ν⁻¹(y2)))   for y2 ∈ Y2(k).
5. k-automorphisms of homogeneous spaces
Let G be a linear algebraic group over an algebraically closed field k. Let Y be a G-k-variety. We denote by AutG(Y) the group of G-equivariant k-automorphisms of Y, that
is, of k-automorphisms ψ : Y → Y such that
ψ(g · y) = g · ψ(y)
for g ∈ G, y ∈ Y.
We assume that Y is a homogeneous space of G, that is, Y = G/H, where H is a
k-subgroup of G. Set N = NG (H), the normalizer of H in G.
Lemma 5.1 (well-known). For n ∈ N (k) we define a map on k-points
n∗ : G/H → G/H,   gH ↦ gn⁻¹H
for g ∈ G(k).
Then
(i) The map n∗ is induced by some automorphism φn ∈ AutG (G/H);
(ii) The map
(25)    φ : N(k) → AutG(G/H),   n ↦ φn

is a homomorphism inducing an isomorphism

N(k)/H(k) ≅ AutG(G/H).
Proof. By assumption n−1 Hn = H, and by Lemma 4.1 there exists an isomorphism
φn : G/H → G/H such that φn (g · H) = gn−1 · H, which proves (i).
Clearly the map φ of (25) is a homomorphism with kernel H(k). To prove (ii) it remains
to show that φ is surjective.
Let ψ ∈ AutG (G/H). Write ψ(1 · H) = a · H with a ∈ G(k). Since ψ is an isomorphism
of G-k-varieties, we have StabG(k) (a · H) = StabG(k) (1 · H) = H(k). On the other hand,
StabG(k) (a · H) = a · H(k) · a−1 . Thus a · H · a−1 = H, hence a ∈ N (k). We have
ψ(g · H) = ga · H. Write a = n−1 , then n ∈ N (k) and ψ = φn . Thus the homomorphism
φ is surjective, as required.
Corollary 5.2. If NG (H) = H, then AutG (G/H) = {1}.
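For instance, in Example 0.1 (G = PGL2, H = T; an illustration we spell out): Lemma 5.1 gives

\[
\mathrm{Aut}^G(G/T)\;\cong\;N_G(T)(k)/T(k)\;\cong\;\mathbf Z/2\mathbf Z,
\]

generated by φn, where n represents the nontrivial element of NG(T)/T; this automorphism swaps the two colors of G/T.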
6. Equivariant models of G-varieties
Let k be an algebraically closed field, and k0 ⊂ k be a subfield such that k is a Galois
extension of k0 , that is, k0 is a perfect field and k is an algebraic closure of k0 . We write
Γ = Gal(k/k0 ) := Aut(k/k0 ).
Let Y be a k-variety. Let γ, γ′ ∈ Γ. If μ is a γ-semi-automorphism of Y, and μ′ is a γ′-semi-automorphism of Y, then μ ∘ μ′ is a γ ∘ γ′-semi-automorphism of Y and μ⁻¹ is a γ⁻¹-semi-automorphism of Y.
We denote by SAutk/k0 (Y ) or just by SAut(Y ) the group of all γ-semi-automorphisms
µ of Y where γ runs over Γ = Gal(k/k0 ).
A k0 -model of Y is a k0 -variety Y0 together with an isomorphism of k-varieties
κY : Y0 ×k0 k → Y.
Note that γ ∈ Γ defines a γ-semi-automorphism of Y0 ×k0 k
idY0 × (γ ∗ )−1 : Y0 ×Spec k0 Spec k → Y0 ×Spec k0 Spec k
and thus, via κY , a γ-semi-automorphism µγ of Y . We obtain a homomorphism
Γ → SAut(Y),   γ ↦ μγ.
Conversely:
Lemma 6.1 (Borel and Serre [BS64, Lemma 2.12]). Let k, k0 , Γ, Y be as above. Assume
that for any γ ∈ Γ we have a γ-semi-automorphism µγ of Y such that
(i) the map Γ → SAutk/k0(Y), γ ↦ μγ, is a homomorphism,
(ii) the restriction of this map to Gal(k/k1 ) for some finite Galois extension k1 /k0 in
k comes from a k1 -model Y1 of Y ,
(iii) Y is quasi-projective.
Then there exists a k0-model Y0 of Y that defines this homomorphism γ ↦ μγ.
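To illustrate Lemma 6.1 (our example, with k = C, k0 = R, Γ = {1, γ}, and Y = P¹_C, which is projective, so condition (iii) holds): two different homomorphisms γ ↦ μγ yield two non-isomorphic R-models.

\[
% (a) coordinatewise conjugation; (b) a fixed-point-free twist (both satisfy mu_gamma ∘ mu_gamma = id):
\mu_\gamma([u:v])=[\bar u:\bar v]\ \rightsquigarrow\ Y_0=\mathbf P^1_{\mathbf R};
\qquad
\mu_\gamma([u:v])=[-\bar v:\bar u]\ \rightsquigarrow\ Y_0=\{x^2+y^2+z^2=0\}\subset\mathbf P^2_{\mathbf R},
\]

the latter being a form of P¹ with no R-points.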
6.2. Let G be a linear algebraic group over k. We assume that we are given a k0 -model of
G, that is, a linear algebraic group G0 over k0 together with an isomorphism of algebraic k-groups κG : G0 ×k0 k → G. For γ ∈ Γ the automorphism (γ*)⁻¹ of Spec k induces a
γ-semi-automorphism idG0 × (γ ∗ )−1 of G0 ×Spec k0 Spec k. We identify G with G0 ×Spec k0
Spec k via κG , then for any γ ∈ Γ we obtain a γ-semi-automorphism σγ : G → G. The
map
Γ → SAut(G),   γ ↦ σγ
is a homomorphism. We identify γ∗ G with G using (σγ )♮ : γ∗ G → G.
Let (Y, θ) be a G-k-variety. By a G0 -equivariant k0 -model of the G-k-variety Y we mean
a G0-k0-variety (Y0, θ0) together with an isomorphism κY : Y0 ×k0 k → Y such that the
following diagram commutes:
(26)
                 θ0,k
    G0,k ×k Y0,k ──────→ Y0,k
        │κG×κY             │κY
        ↓                  ↓
    G ×k Y ───────θ──────→ Y
where G0,k := G0 ×k0 k and Y0,k := Y0 ×k0 k. For a given k0 -model G0 of G we ask whether
there exists a G0 -equivariant k0 -model Y0 of Y .
Let (Y, θ) be a G-k-variety. We write g · y for θ(g, y). Recall (Definition 2.3) that a
γ-semi-automorphism µ of Y is σγ -equivariant if the following diagram commutes:
            θ
    G ×k Y ────→ Y
      │σγ×μ       │μ
      ↓           ↓
    G ×k Y ────→ Y
            θ
Since k is algebraically closed, G is smooth (reduced) and Y is reduced, this is the same
as to require that
µ(g · y) = σγ (g) · µ(y) for all g ∈ G(k), y ∈ Y (k).
We ask whether there exists such µ.
A G0 -equivariant k0 -model of Y defines a homomorphism
Γ → SAutk/k0(Y),   γ ↦ μγ,
where for any γ ∈ Γ, the γ-semi-automorphism µγ of Y is σγ -equivariant. Conversely:
Lemma 6.3. Let k, k0 , Γ = Gal(k/k0 ), G, (Y, θ) be as above and let G0 be a k0 -model
of G. Assume that for any γ ∈ Γ we have a γ-semi-automorphism µγ of Y such that the
following conditions are satisfied:
(i) the map Γ → SAutk/k0(Y), γ ↦ μγ is a homomorphism,
(ii) the restriction of this map to Gal(k/k1 ) for some finite Galois extension k1 /k0 in
k comes from a G1 -equivariant k1 -model Y1 of Y , where G1 = G0 ×k0 k1 ,
(iii) Y is quasi-projective,
(iv) for any γ ∈ Γ, the γ-semi-automorphism µγ is σγ -equivariant.
Then there exists a G0 -equivariant k0 -model Y0 of Y that defines this homomorphism
γ 7→ µγ .
Proof. By Lemma 6.1 the homomorphism
Γ → SAut(Y),   γ ↦ μγ
defines a k0 -model Y0 of Y . Using Galois descent for morphisms (see e.g. Jahnel [Ja00,
Proposition 2.8]) we obtain from condition (iv) that θ comes from some morphism θ0 : G0 ×k0
Y0 → Y0 , and the k0 -model (Y0 , θ0 ) of (Y, θ) is G0 -equivariant.
Remark 6.4. If in Lemma 6.3 we do not assume that Y is quasi-projective, then we obtain
a k0 -model Y0 in the category of algebraic k0 -spaces (see Wedhorn [We15, Proposition
8.1]), but not necessarily in the category of k0 -schemes (even when k = C and k0 = R,
see Huruguen [Hu11, Theorem 2.35]).
Lemma 6.5. Let k, k0, Γ = Gal(k/k0), G, (Y, θ) be as above and let G0 be a k0-model of G. Assume that AutG(Y) = {1}. Assume that for any γ ∈ Γ there exists a γ-semi-automorphism μγ of Y satisfying condition (iv) of Lemma 6.3. Then such μγ is unique, and the map γ ↦ μγ satisfies conditions (i) and (ii) of Lemma 6.3.

Proof. If μ′γ is another such γ-semi-automorphism, then μγ⁻¹μ′γ ∈ AutG(Y) = {1}, hence μ′γ = μγ. If γ, δ ∈ Γ, then μγδ⁻¹μγμδ ∈ AutG(Y) = {1}, hence μγδ = μγμδ, hence the map γ ↦ μγ is a homomorphism, that is, condition (i) holds.
Since G and Y are of finite type over k, there exists a finite Galois extension k1/k0 in k and a G1-equivariant k1-model (Y1, θ1) of (Y, θ), where G1 := G0 ×k0 k1. This k1-model defines a homomorphism

(27)    γ ↦ μ′γ : Gal(k/k1) → SAut(Y)

such that μ′γ is σγ-equivariant for all γ ∈ Gal(k/k1). Since a σγ-equivariant γ-semi-automorphism is unique, we see that for all γ ∈ Gal(k/k1) we have μγ = μ′γ, and hence, the restriction of the map

(28)    γ ↦ μγ : Γ → SAut(Y)

to Gal(k/k1) comes from the k1-model (Y1, θ1) of (Y, θ), that is, condition (ii) of Lemma
6.3 is satisfied.
7. Spherical homogeneous spaces and their combinatorial invariants
Let G be a connected reductive group over an algebraically closed field k. We describe
combinatorial invariants (invariants of Luna and Losev) of a spherical homogeneous space
Y = G/H of G.
We start with combinatorial invariants of G. We fix T ⊂ B ⊂ G, where B is a Borel
subgroup and T is a maximal torus. Let BRD(G) = BRD(G, T, B) denote the based root
datum of G. We have
BRD(G, T, B) = (X, X ∨ , R, R∨ , S, S ∨ )
where
X = X∗ (T ) := Hom(T, Gm,k ) is the character group of T ;
X ∨ = X∗ (T ) := Hom(Gm,k , T ) is the cocharacter group of T ;
R = R(G, T ) ⊂ X is the root system;
R∨ ⊂ X ∨ is the coroot system;
S = S(G, T, B) ⊂ R is the system of simple roots (the basis of R) defined by B;
S ∨ ⊂ R∨ is the system of simple coroots.
There is a canonical pairing X × X∨ → Z, (χ, x) ↦ ⟨χ, x⟩, and a canonical bijection α ↦ α∨ : R → R∨ such that S∨ = {α∨ | α ∈ S}. See Springer [Sp79, Sections 1 and 2] for details.
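A minimal example (ours, for concreteness): for G = SL2 with T the diagonal torus and B the upper-triangular Borel subgroup,

\[
X=X^*(T)=\mathbf Z\chi,\quad \chi(\mathrm{diag}(t,t^{-1}))=t,
\qquad
R=\{\pm\alpha\},\ \alpha=2\chi,\quad S=\{\alpha\},\quad \langle\alpha,\alpha^\vee\rangle=2,
\]

and Dyn(G) is a single vertex of type A1.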
We consider also the Dynkin diagram Dyn(G) = Dyn(G, T, B), which is a graph with
the set of vertices S. The edge between two simple roots α, β ∈ S is described in terms of
the integers ⟨α, β∨⟩ and ⟨β, α∨⟩.
We call a pair (T, B) as above a Borel pair. If (T ′ , B ′ ) is another Borel pair, then by
Theorem 11.1 and Theorem 10.6(4) in Borel’s book [B91], there exists g ∈ G(k) such that
(29)    g·T·g⁻¹ = T′,   g·B·g⁻¹ = B′.
This element g induces an isomorphism

g∗ : BRD(G, T′, B′) → BRD(G, T, B).

If g′ ∈ G(k) is another element as in (29), then g′ = gt for some t ∈ T(k), and therefore, the isomorphism

(g′)∗ : BRD(G, T′, B′) → BRD(G, T, B)
coincides with g∗ . Thus we can canonically identify BRD(G, T ′ , B ′ ) with BRD(G, T, B)
and write BRD(G) for BRD(G, T, B). We say that BRD(G) is the canonical root datum
of G. We see that the based root datum BRD(G) is an invariant of G. In particular,
the character lattice X = X∗ (T ) with the subset S ⊂ X is an invariant, and the Dynkin
diagram Dyn(G) is an invariant.
Now we describe combinatorial invariants of a homogeneous spherical G-variety Y =
G/H. Let K(Y ) denote the field of rational functions of Y . The group G(k) acts on K(Y )
by
(g · f )(y) = f (g−1 · y) for f ∈ K(Y ), g ∈ G(k), and y ∈ Y (k).
For χ ∈ X∗(B) let K(Y)^{(B)}_χ denote the space of χ-eigenfunctions in K(Y), that is, the k-space of rational functions f ∈ K(Y) such that

b · f = χ(b) · f   for all b ∈ B(k).

Since B has an open dense orbit in Y, the k-dimension of K(Y)^{(B)}_χ is ≤ 1. Let X = X(Y) ⊂ X∗(B) denote the set of characters χ of B such that K(Y)^{(B)}_χ ≠ 0, which is a subgroup of X∗(B) called the weight lattice of Y. We set

V = V(Y) = HomZ(X, Q).
Let Val(K(Y )) denote the set of Q-valued valuations of the field K(Y ) that are trivial
on k. The group G(k) naturally acts on K(Y ) and on Val(K(Y )). We will consider the
set ValB (K(Y )) of B(k)-invariant valuations, and the set ValG (K(Y )) of G(k)-invariant
valuations. We have a canonical map
ρ : ValB(K(Y)) → V,   v ↦ (χ ↦ v(fχ)),

where v ∈ ValB(K(Y)), χ ∈ X, fχ ∈ K(Y)^{(B)}_χ, fχ ≠ 0. It is known, see Knop [Kn89, Corollary 1.8], that the restriction of ρ to ValG(K(Y)) is injective. We denote by
V = V(Y ) := ρ(ValG (K(Y ))) ⊂ V
the image of ValG (K(Y )) in V . It is a cone in V called the valuation cone of Y .
Let D = D(Y ) denote the set of colors of Y , that is, the set of closures of B-orbits of
codimension 1 in Y ; it is finite. Each D ∈ D is a B-invariant divisor, which defines a
B-invariant valuation of K(Y ) that we denote by val(D). Thus we obtain a map
val : D → ValB (K(Y )).
By abuse of notation we denote ρ(val(D)) ∈ V by ρ(D). Thus we obtain a map
ρ : D → V,
which in general is not injective (for example, it is not injective for G and Y as in Example
0.1).
For D ∈ D, let StabG (D) denote the stabilizer of D ⊂ Y in G. Clearly StabG (D) ⊃
B, hence StabG (D) is a parabolic subgroup of G. For α ∈ S, let Pα ⊃ B denote the
corresponding minimal parabolic subgroup of G containing B. Let ς(D) denote the set of
α ∈ S for which Pα is not contained in StabG (D). We obtain a map
ς : D → P(S),
where P(S) denotes the set of all subsets of S.
Lemma 7.1. Any fiber of the map ς has ≤ 2 elements.
Proof. Let D ∈ D. Since Y is a homogeneous G-variety, clearly StabG (D) 6= G, hence
ς(D) 6= ∅. We see that there exists α ∈ ς(D). Consider the set D(α) consisting of those
D ∈ D for which α ∈ ς(D). By Proposition B.2 in Appendix B below we have |D(α)| ≤ 2,
and the lemma follows.
Consider the map
ρ × ς : D → V × P(S).
Corollary 7.2. Any fiber of the map ρ × ς has ≤ 2 elements.
Consider the subset Ω := im(ρ × ς) ⊂ V × P(S). Let Ω(2) (resp. Ω(1) ) denote the subset
of Ω consisting of the elements with two preimages (resp. with one preimage) in D. We
obtain two subsets Ω(1) , Ω(2) ⊂ V × P(S), and by Corollary 7.2 we have Ω = Ω(1) ∪ Ω(2)
(disjoint union).
Definition 7.3. Let G be a connected reductive group over an algebraically closed field
k. Let Y = G/H be a spherical homogeneous space of G. By the combinatorial invariants
of Y we mean
X ⊂ X∗ (B),
V ⊂ V := HomZ (X , Q), and Ω(1) , Ω(2) ⊂ V × P(S).
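For G and Y as in Example 0.1 (G = PGL2, Y = G/T) these invariants can be made explicit. The following values are a standard computation that we add purely as an illustration; they are consistent with two facts recorded in this paper, namely that ρ is not injective for this Y and that |Ω(2)(Y)| = 1:

\[
% Combinatorial invariants of Y = PGL_2/T (illustrative; here S = {alpha}):
\mathcal X = \mathbf Z\alpha,
\qquad
\mathcal V = \{\,v\in V\mid \langle v,\alpha\rangle\le 0\,\}\subset V\cong\mathbf Q,
\]
\[
\mathcal D=\{D^+,D^-\},\quad
\rho(D^+)=\rho(D^-),\quad
\varsigma(D^\pm)=\{\alpha\},
\qquad
\Omega=\Omega^{(2)},\ \ |\Omega^{(2)}|=1,\ \ \Omega^{(1)}=\varnothing.
\]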
7.4. Let G be a connected reductive k-group. Let H1 ⊂ G be a spherical subgroup, then
we set Y1 = G/H1 . We consider the set of colors D(Y1 ) and the canonical maps
ρ1 : D(Y1 ) → V (Y1 ),
ς1 : D(Y1 ) → P(S).
If H2 ⊂ G is another spherical subgroup, then we set Y2 = G/H2 and consider the set of
colors D(Y2 ) and the canonical maps
ρ2 : D(Y2 ) → V (Y2 ),
ς2 : D(Y2 ) → P(S).
Now assume that there exists a ∈ G(k) such that H2 = aH1 a−1 . Then we have an
isomorphism of G-varieties of Lemma 4.1
φa : Y1 → Y2 ,
g · H1 ↦ ga⁻¹ · H2.

It follows that X (Y1) = X (Y2), V (Y1) = V (Y2), V(Y1) = V(Y2). Moreover, the G-equivariant map φa : Y1 → Y2 induces a bijection
(φa )∗ : D(Y1 ) → D(Y2 )
satisfying
(30)    ρ2 ∘ (φa)∗ = ρ1,   ς2 ∘ (φa)∗ = ς1.
It follows that
Ω(1) (Y1 ) = Ω(1) (Y2 ) and
Ω(2) (Y1 ) = Ω(2) (Y2 ).
Conversely:
Proposition 7.5 (Losev’s Uniqueness Theorem [Lo09, Theorem 1]). Let G be a connected
reductive group over an algebraically closed field k of characteristic 0. Let H1 , H2 ⊂ G
be two spherical subgroups, and let Y1 = G/H1 and Y2 = G/H2 be the corresponding
spherical homogeneous spaces. If X (Y1 ) = X (Y2 ), V(Y1 ) = V(Y2 ), Ω(1) (Y1 ) = Ω(1) (Y2 ),
and Ω(2) (Y1 ) = Ω(2) (Y2 ), then there exists a ∈ G(k) such that H2 = aH1 a−1 .
7.6. Consider the group AutG(Y) = NG(H)/H; this group acts on D. We consider the
surjective map
ζ = ρ × ς : D → Ω.
By Corollary 7.2 each of the fibers of ζ has either one or two elements. We denote by
AutΩ (D) the group of permutations π : D → D such that ζ ◦ π = ζ. It is clear that
the group AutG (Y ), when acting on D, acts on the fibers of the map ζ, so we obtain a
homomorphism
AutG (Y ) → AutΩ (D).
Theorem 7.7 (Losev, unpublished). Let G be a connected reductive group over an algebraically closed field k of characteristic 0. Let Y = G/H be a spherical homogeneous space
of G. Then, with the above notation, the homomorphism
(31)    AutG(Y) → AutΩ(D)

is surjective.
This theorem will be proved in Appendix B, see Theorem B.4.
Corollary 7.8 (Strong version of Losev’s Uniqueness Theorem). Let G, H1 , H2 , Y1 =
G/H1 , Y2 = G/H2 be as in Proposition 7.5, in particular, X (Y1 ) = X (Y2 ), V(Y1 ) = V(Y2 ),
Ω(1) (Y1 ) = Ω(1) (Y2 ), and Ω(2) (Y1 ) = Ω(2) (Y2 ). Let ϕ : D(Y1 ) → D(Y2 ) be any bijection
satisfying
ρ2 ◦ ϕ = ρ1 , ς2 ◦ ϕ = ς1
(such a bijection exists because Ω(1) (Y1 ) = Ω(1) (Y2 ) and Ω(2) (Y1 ) = Ω(2) (Y2 )). Then there
exists a′ ∈ G(k) such that H2 = a′ H1 (a′ )−1 and
(φa′ )∗ = ϕ : D(Y1 ) → D(Y2 ).
Proof. By Proposition 7.5 there exists a ∈ G(k) such that H2 = aH1 a−1 . This element a
defines a map
(φa )∗ : D(Y1 ) → D(Y2 )
satisfying (30). Set

ψ = (φa)∗⁻¹ ∘ ϕ : D(Y1) → D(Y1),
then ψ satisfies
ρ1 ◦ ψ = ρ1 ,
ς1 ◦ ψ = ς1 ,
hence ψ ∈ AutΩ D(Y1 ). By Theorem 7.7 there exists n ∈ NG (H1 ) such that
(φn )∗ = ψ : D(Y1 ) → D(Y1 ).
We set a′ = an, then a′ H1 (a′ )−1 = H2 , φa′ = φa ◦ φn , and
(φa′ )∗ = (φa )∗ ◦ (φn )∗ = (φa )∗ ◦ ψ = ϕ.
Corollary 7.9. If in Theorem 7.7 H is spherically closed, then the homomorphism (31)
is an isomorphism.
Proof. Indeed, since H is spherically closed, the homomorphism (31) is injective, and by
Theorem 7.7 it is surjective, hence it is bijective, as required.
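In Example 0.1 this bijectivity can be seen directly (an illustration we add): the map ζ : D = {D⁺, D⁻} → Ω collapses both colors to the single element of Ω = Ω(2), so

\[
\mathrm{Aut}_\Omega(\mathcal D)\cong S_2,
\qquad
\mathrm{Aut}^G(Y)=N_G(T)/T\;\longrightarrow\;\mathrm{Aut}_\Omega(\mathcal D)
\]

is an isomorphism, the nontrivial element on each side being the swap of the two colors.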
Note that

AutΩ(D) = ∏_{ω∈Ω} Aut(ζ⁻¹(ω)),

where Aut(ζ⁻¹(ω)) is the group of permutations of the set ζ⁻¹(ω). It is clear that for any ω ∈ Ω, the restriction homomorphism

(32)    AutΩ(D) → Aut(ζ⁻¹(ω))

is surjective.
Corollary 7.10. If in Theorem 7.7 NG (H) = H, then the surjective map ζ is bijective,
hence D injects into V × P(S).
Proof. It follows from Theorem 7.7 and the surjectivity of the homomorphism (32), that
the group AutG (Y ) = NG (H)/H acts transitively on the fiber ζ −1 (ω) for any ω ∈ Ω.
Since by assumption NG (H)/H = {1}, we conclude that each fiber of ζ has exactly one
element, hence ζ is bijective, as required.
8. Action of an automorphism of the base field
on the combinatorial invariants of a spherical homogeneous space
8.1. Let k be an algebraically closed field, G be a connected reductive group over k, H1 ⊂ G
be a spherical subgroup, and Y1 = G/H1 be the corresponding spherical homogeneous
space.
Let k0 ⊂ k be a subfield and let γ ∈ Aut(k/k0 ). We assume that “G is defined over k0 ”,
that is, we are given a k0 -model G0 of G. Then we have a γ-semi-automorphism σγ of G,
see Subsection 6.2. Set H2 = σγ (H1 ) ⊂ G and denote by Y2 := G/H2 the corresponding
spherical homogeneous space.
We wish to know whether the spherical homogeneous spaces Y1 and Y2 are isomorphic
as G-varieties. For this end we compare their combinatorial invariants.
We fix a Borel pair (T, B), then T ⊂ B ⊂ G. Consider
σγ (T ) ⊂ σγ (B) ⊂ G.
Then (σγ (T ), σγ (B)) is again a Borel pair, hence there exists gγ ∈ G(k) such that
gγ · σγ (T ) · gγ−1 = T,
gγ · σγ (B) · gγ−1 = B.
Set τ = inn(gγ ) ◦ σγ : G → G, then τ is a γ-semi-automorphism of G, and
(33)    τ(T) = T,   τ(B) = B.

Set H2′ = τ(H1) ⊂ G and Y2′ = G/H2′. We have H2′ = gγ·H2·gγ⁻¹, so by Lemma 4.1 Y2 and Y2′ are isomorphic, and we wish to know whether Y1 and Y2′ are isomorphic.
By (33), τ acts on the characters of T and B; we denote the corresponding automorphism
by εγ . By definition
(34)    εγ(χ)(b) = γ(χ(τ⁻¹(b)))   for χ ∈ X∗(B), b ∈ B(k),
and the same for the characters of T (recall that X∗ (B) = X∗ (T )). Since τ (B) = B, we see
that εγ , when acting on X∗ (T ), preserves S = S(G, T, B) ⊂ X∗ (T ). It is well known (see
e.g. [BKLR14, 3.2 and Proposition 3.1(a)]) that the automorphism εγ does not depend
on the choice of gγ and that the map

ε : Aut(k/k0) → Aut(X∗(T), S),   γ ↦ εγ

is a homomorphism. Since εγ acts on X∗(B) and on S, one can define εγ(X(Y1)), εγ(V(Y1)), εγ(Ω(1)(Y1)), εγ(Ω(2)(Y1)).
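A concrete case (a standard fact that we state only as an illustration): let k0 = R, k = C, Γ = {1, γ}, G = SL3,C with S = {α1, α2}, and let G0 = SU(2,1), the quasi-split special unitary group. Then

\[
\varepsilon_\gamma(\alpha_1)=\alpha_2,
\qquad
\varepsilon_\gamma(\alpha_2)=\alpha_1,
\]

that is, the ∗-action is the nontrivial automorphism of Dyn(A2), and G0 is an outer form.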
Following Akhiezer [Akh15], we compute the combinatorial invariants of the spherical
homogeneous space Y2′ . We define a map
(35)    Y1(k) → Y2′(k),   g·H1 ↦ τ(g)·H2′,   where g ∈ G(k).
By Lemma 4.3 the map (35) is induced by some τ -equivariant γ-semi-isomorphism
ν : Y1 → Y2′ ,
and so we obtain an isomorphism of the function fields
ν∗ : K(Y1 ) → K(Y2′ ).
Lemma 8.2. Let χ1 ∈ X∗(B) and assume that f1 ∈ K(Y1)^{(B)}_{χ1}. Then ν∗f1 ∈ K(Y2′)^{(B)}_{χ2}, where χ2 = εγ(χ1).
Proof. By assumption
f1 (b−1 y1 ) = χ1 (b) · f1 (y1 ) for all y1 ∈ Y1 (k), b ∈ B(k).
We write f2′ = ν∗ f1 ∈ K(Y2′ ). Since ν : Y1 → Y2′ is a γ-semi-isomorphism, by Corollary 1.8
we have
f2′ (y2′ ) = γ(f1 (ν −1 (y2′ ))) for y2′ ∈ Y2′ (k).
Note that τ⁻¹ : G → G is a γ⁻¹-semi-automorphism of G, and ν⁻¹ : Y2′ → Y1 is a τ⁻¹-equivariant γ⁻¹-semi-isomorphism. Moreover, τ⁻¹(T) = T and τ⁻¹(B) = B. We compute:
f2′ (b−1 · y2′ ) = γ(f1 (ν −1 (b−1 · y2′ ))) = γ(f1 ((τ −1 (b))−1 · ν −1 (y2′ )))
= γ(χ1 (τ −1 (b))) · γ(f1 (ν −1 (y2′ ))) = εγ (χ1 )(b) · f2′ (y2′ ).
Thus f2′ ∈ K(Y2′)^{(B)}_{χ2}, where χ2 = εγ(χ1), as required.
Corollary 8.3. X (Y2′ ) = εγ (X (Y1 )).
Proof. By Lemma 8.2 we have εγ (X (Y1 )) ⊂ X (Y2′ ). Applying Lemma 8.2 to the triple
(γ −1 , τ −1 , ν −1 ), we obtain that εγ −1 (X (Y2′ )) ⊂ X (Y1 ), hence X (Y2′ ) ⊂ εγ (X (Y1 )). Thus
X (Y2′ ) = εγ (X (Y1 )), as required.
Let v1 ∈ ValB (K(Y1 )). We define ν∗ v1 ∈ ValB (K(Y2′ )) by
(ν∗v1)(f2′) = v1(ν∗⁻¹(f2′))   for f2′ ∈ K(Y2′).
We consider the maps
ρ1 : ValB (K(Y1 )) → V (Y1 ) and
ρ′2 : ValB (K(Y2′ )) → V (Y2′ ).
Lemma 8.4. For any v1 ∈ ValB (K(Y1 )) we have
ρ′2 (ν∗ v1 ) = εγ (ρ1 (v1 )).
Proof. See Huruguen [Hu11, Proposition 2.18].
Corollary 8.5. V(Y2′ ) = εγ (V(Y1 )).
Let D1 ∈ D(Y1 ) be a color, that is, D1 is the closure of a B-orbit of codimension one in
Y1 . We set D2′ := ν(D1 ) ⊂ Y2′ , then D2′ ∈ D(Y2′ ). We also write ν∗ D1 for ν(D1 ).
Let D1 ∈ D(Y1 ) and D2′ ∈ D(Y2′ ), then we denote by val1 (D1 ) ∈ ValB (K(Y1 )) and
val′2 (D2′ ) ∈ ValB (K(Y2′ )) the corresponding B-invariant valuations.
Lemma 8.6. For any D1 ∈ D(Y1 ) we have
val′2 (ν∗ D1 ) = ν∗ (val1 (D1 )).
Proof. See Huruguen [Hu11, Proposition 2.19].
Remark 8.7. Propositions 2.18 and 2.19 of Huruguen [Hu11, Section 2.2] are proved in
his paper under certain additional assumptions. Namely, Huruguen assumes that
k/k0 is a Galois extension, that the triple (G, Y, θ) has a k0 -model (G0 , Y0 , θ0 ), and that
Y0 has a k0 -point y (0) . Those assumptions are not used in his proof.
By abuse of notation, if D1 ∈ D(Y1 ) and D2′ ∈ D(Y2′ ), we write ρ1 (D1 ) for ρ1 (val1 (D1 )) ∈
V (Y1 ) and ρ′2 (D2′ ) for ρ′2 (val′2 (D2′ )) ∈ V (Y2′ ).
Corollary 8.8 (from Lemma 8.4 and Lemma 8.6). For any D1 ∈ D(Y1 ) we have
ρ′2 (ν∗ D1 ) = εγ (ρ1 (D1 )).
Lemma 8.9. For any D1 ∈ D(Y1 ) we have:
(i) StabG (ν∗ D1 ) = τ (StabG (D1 ));
(ii) ς(ν∗ D1 ) = εγ (ς(D1 )).
Proof. (i) follows from the fact that the map ν : Y1 (k) → Y2′ (k) is τ -equivariant, and (ii)
follows from (i).
Corollary 8.10 (from Corollary 8.8 and Lemma 8.9).
Ω(1) (Y2′ ) = εγ (Ω(1) (Y1 ))
and
Ω(2) (Y2′ ) = εγ (Ω(2) (Y1 )).
Proposition 8.11.
X (Y2 ) = εγ (X (Y1 )),
V(Y2 ) = εγ (V(Y1 )),
Ω(1) (Y2 ) = εγ (Ω(1) (Y1 )),
Ω(2) (Y2 ) = εγ (Ω(2) (Y1 )).
Proof. Since H2′ and H2 are conjugate, by Lemma 4.1 the G-varieties Y2′ and Y2 are
isomorphic, hence they have the same combinatorial invariants, and the proposition follows
from Corollaries 8.3, 8.5, and 8.10.
Note that Proposition 8.11 generalizes Propositions 5.2, 5.3, and 5.4 of Akhiezer [Akh15].
Namely, in the case when γ 2 = 1, our Proposition 8.11 is equivalent to those results of
Akhiezer. Our proofs are similar to his.
Proposition 8.12. With the notation and assumptions of Subsection 8.1, if the subgroups
H1 and H2 = σγ (H1 ) are conjugate, then εγ preserves the combinatorial invariants of Y1 ,
that is
(36)    εγ(X(Y1)) = X(Y1),   εγ(V(Y1)) = V(Y1),   εγ(Ω(1)(Y1)) = Ω(1)(Y1),   εγ(Ω(2)(Y1)) = Ω(2)(Y1).
Conversely, if equalities (36) hold and char k = 0, then H1 and H2 are conjugate.
Proposition 8.12 generalizes Theorem 3(1) of Cupit-Foutou [CF15], where the case k0 =
R was considered.
Proof. If H1 and H2 are conjugate, then by Lemma 4.1 the homogeneous spaces Y1 = G/H1
and Y2 = G/H2 are isomorphic as G-varieties, hence they have the same combinatorial
invariants, that is,
(37)    X(Y2) = X(Y1),   V(Y2) = V(Y1),   Ω(1)(Y2) = Ω(1)(Y1),   Ω(2)(Y2) = Ω(2)(Y1),
and (36) follows from Proposition 8.11. Conversely, if equalities (36) hold and char k =
0, then by Proposition 8.11 the equalities (37) hold, and by Proposition 7.5 (Losev’s
Uniqueness Theorem) the subgroups H1 and H2 are conjugate.
Corollary 8.13. With the notation and assumptions of Subsection 8.1, if there exists
a σγ -equivariant γ-semi-automorphism µ : Y1 → Y1 , then εγ preserves the combinatorial
invariants of Y1 , that is, equalities (36) hold. Conversely, if equalities (36) hold and
char k = 0, then there exists a σγ -equivariant γ-semi-automorphism µ : Y1 → Y1 .
Proof. By Corollary 4.4 there exists a σγ -equivariant γ-semi-automorphism µ : Y1 → Y1 if
and only if the subgroup σγ (H1 ) of G is conjugate to H1 . Now the corollary follows from
Proposition 8.12.
9. Equivariant models of automorphism-free spherical homogeneous spaces
9.1. Let k0 be a perfect field and let k be a fixed algebraic closure of k0 with Galois group
Γ = Gal(k/k0 ).
Let G be a connected reductive group over k. Let T ⊂ B ⊂ G be as in Section 7. We
consider the based root datum BRD(G) = BRD(G, T, B).
Let G0 be a k0 -model of G. For any γ ∈ Γ, this model defines a γ-semi-automorphism
σγ : G → G,
which induces an automorphism εγ ∈ AutBRD(G), see Section 8. We obtain a homomorphism
ε : Γ → AutBRD(G),   γ ↦ εγ.
Let Y = G/H be a spherical homogeneous space of G. We consider the combinatorial
invariants of Y :
X = X (Y ) ⊂ X∗ (T ),
V = V(Y ) ⊂ HomZ (X , Q),
Ω(1) = Ω(1) (Y ),
Ω(2) = Ω(2) (Y ) ⊂ HomZ (X , Q) × P(S),
see Section 7. Since εγ acts on BRD(G), we can define
εγ (X ), εγ (V), εγ (Ω(1) ), εγ (Ω(2) ).
Recall that Y = G/H. By Lemma 4.3(i) we have γ∗Y ≅ G/σγ(H).
Proposition 9.2. If Y = G/H admits a G0 -equivariant k0 -model Y0 , then for all γ ∈ Γ,
εγ preserves the combinatorial invariants of Y , that is
(38)    εγ(X) = X,   εγ(V) = V,   εγ(Ω(1)) = Ω(1),   εγ(Ω(2)) = Ω(2).
Proposition 9.2 follows from formulas of Huruguen [Hu11, Section 2.2]. For the reader’s
convenience we prove it here.
Proof. A G0-equivariant k0-model Y0 of Y defines, for any γ ∈ Γ, a σγ-equivariant γ-semi-automorphism μγ of Y, hence an isomorphism of G-k-varieties (μγ)♮ : G/σγ(H) = γ∗Y →
Y . We see that the G-varieties G/H and G/σγ (H) are isomorphic, hence they have the
same combinatorial invariants. By Proposition 8.11 the combinatorial invariants of the
spherical homogeneous space G/σγ (H) are
εγ (X ), εγ (V), εγ (Ω(1) ), εγ (Ω(2) ),
and (38) follows.
The next theorem is a partial converse of Proposition 9.2.
Theorem 9.3. Let k, k0 , Γ, G, H, G0 be as in 9.1. Assume that:
(i) For all γ ∈ Γ, εγ preserves the combinatorial invariants of Y , that is, equalities
(38) hold;
(ii) NG (H) = H;
(iii) char k = 0.
Then Y = G/H admits a G0 -equivariant k0 -model Y0 . This k0 -model is unique up to a
unique isomorphism.
Proof. Let γ ∈ Γ. Since char k = 0 and εγ preserves the combinatorial invariants of Y , by
Corollary 8.13 there exists a σγ -equivariant γ-semi-automorphism
µγ : Y → Y.
Thus condition (iv) of Lemma 6.3 is satisfied.
Since NG(H) = H, by Corollary 5.2 AutG(Y) = {1}, and by Lemma 6.5 conditions (i) and (ii) of Lemma 6.3 are satisfied.
The variety Y = G/H is quasi-projective, hence condition (iii) of Lemma 6.3 is satisfied.
By Lemma 6.3 there exists a G0 -equivariant k0 -model Y0 of Y . Since AutG (Y ) = {1},
for any given γ ∈ Γ the γ-semi-automorphism µγ is unique, hence the model Y0 is unique
up to a unique isomorphism.
Recall that a k0 -model G0 of a connected reductive k-group G is called an inner form
(of a split group) if εγ = 1 for all γ ∈ Γ = Gal(k/k0 ).
Lemma 9.4. Let k, k0 , Γ, G, H, G0 be as in 9.1. Then each of the conditions below
implies condition (i) of Theorem 9.3.
(i) G0 is an inner form;
(ii) G0 is absolutely simple (that is, G is simple) of one of the types A1 , Bn , Cn , E7 ,
E8 , F4 , G2 ;
(iii) G0 is split.
Proof. (i) If G0 is an inner form, then εγ = 1 for any γ ∈ Γ, hence condition (i) of Theorem
9.3 is clearly satisfied.
(ii) In these cases Dyn(G) has no nontrivial automorphisms, hence Γ acts trivially on
Dyn(G). We see that (ii) implies (i).
(iii) In this case clearly εγ = 1 for all γ ∈ Γ.
Corollary 9.5. If char k = 0, NG (H) = H, and at least one of the conditions (i–iii) of
Lemma 9.4 is satisfied, then Y admits a G0 -equivariant k0 -model, and this k0 -model is
unique.
Remark 9.6. Assume that k = R and NG (H) = H. The assertion that if condition (iii)
of Lemma 9.4 is satisfied, then Y has a unique G0 -equivariant R-model Y0 , is Theorem
4.12 of Akhiezer and Cupit-Foutou [ACF14]. The similar assertion when only condition
(i) of Lemma 9.4 is satisfied, is Theorem 1.1 of Akhiezer [Akh15]. Our paper is inspired
by this result of Akhiezer, and our proof of Theorem 9.3 is similar to his proof.
10. Equivariant models of spherically closed spherical homogeneous spaces
In this section we do not assume that NG (H) = H.
Let k, G, H, Y = G/H, T ⊂ B ⊂ G be as in Section 7, in particular, k is algebraically
closed, and we assume that char k = 0. The group AutG (Y ) = NG (H)/H acts on Y and
on the set D of colors of Y .
Definition 10.1. A spherical homogeneous space Y = G/H is called spherically closed if
NG (H)/H acts on D faithfully, that is, if the homomorphism
AutG (Y ) → Aut(D)
is injective. (Here Aut(D) denotes the group of permutations of the finite set D.)
Let k0 ⊂ k be a subfield such that k is an algebraic closure of k0 . Let G0 be a k0 -model
of G, and for γ ∈ Γ := Gal(k/k0 ) let σγ : G → G be the γ-semi-automorphism defined by
G0 . Let εγ : X∗ (T ) → X∗ (T ) be as in (34).
Theorem 10.2. Let k be an algebraically closed field. Let G, H, Y = G/H be as in
Section 7. Let G0 be a k0 -model of G, where k0 ⊂ k is a subfield such that k is an
algebraic closure of k0 . Assume that
(i) εγ preserves the combinatorial invariants of Y for all γ ∈ Γ,
(ii) Y is spherically closed,
(iii) char k = 0.
Then Y admits a G0 -equivariant k0 -model.
Theorem 10.2 generalizes the existence assertion of Theorem 9.3. It was inspired by
Corollary 1 of Cupit-Foutou [CF15, Section 2.5], where the case k0 = R was considered.
In order to prove the theorem we need a few lemmas.
Lemma 10.3. Let ζ : D → Ω be a mapping of nonempty finite sets. Let Γ be a group
acting on Ω by a homomorphism
s : Γ → Aut(Ω),
γ 7→ sγ .
Assume that for any γ ∈ Γ there exists a permutation mγ : D → D covering sγ , that is, such that the following diagram commutes (equivalently, ζ ◦ mγ = sγ ◦ ζ):

    D --mγ--> D
    |ζ        |ζ
    v         v
    Ω --sγ--> Ω
Then there exists a homomorphism m′ : Γ → Aut(D) such that:
(i) for any γ ∈ Γ the permutation m′γ covers sγ ;
(ii) for any γ ∈ Γ we have m′γ = aγ ◦ mγ , where aγ ∈ AutΩ (D);
(iii) m′γ = idD for all γ ∈ ker s.
Proof. We may and shall assume that Γ acts transitively on Ω. Let ω, ω ′ ∈ Ω, then there
exists γ ∈ Γ such that sγ (ω) = ω ′ . By hypotheses there exists mγ ∈ Aut(D) covering
sγ , then mγ induces a bijection ζ −1 (ω) → ζ −1 (ω ′ ), hence the cardinalities of ζ −1 (ω) and
ζ −1 (ω ′ ) are equal. We see that ω 7→ |ζ −1 (ω)| is a constant function on Ω; we denote its
value by n. For each ω ∈ Ω we fix some bijection between ζ −1 (ω) and the set {1, . . . , n}; we denote the element of ζ −1 (ω) ⊂ D corresponding to i ∈ {1, . . . , n} by dω(i) . Then we define m′γ ∈ Aut(D) by

m′γ (dω(i) ) = dsγ(ω)(i) .

Since s : γ 7→ sγ is a homomorphism, we see that m′ : γ 7→ m′γ is a homomorphism, and clearly m′γ covers sγ , which proves (i). Set aγ = m′γ ◦ mγ−1 ; then clearly (ii) holds, and assertion (iii) holds by construction.
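The proof above is constructive and can be phrased as a small algorithm: number each fibre of ζ once and for all, and let each sγ act on the labels. The following Python sketch makes this explicit; the finite action and all names in it are hypothetical illustrations, not data from the text, and it assumes (as the proof shows) that all fibres have the same cardinality.

```python
def lift_homomorphism(D, Omega, zeta, s_gamma):
    """Given zeta: D -> Omega and a permutation s_gamma of Omega that is
    covered by some permutation of D, return the lift m' constructed in
    the proof of Lemma 10.3: fibres are numbered once, s acts on labels."""
    # fix a numbering of each fibre zeta^{-1}(omega)
    fibre = {w: sorted(d for d in D if zeta[d] == w) for w in Omega}
    m = {}
    for w in Omega:
        target = fibre[s_gamma[w]]
        for i, d in enumerate(fibre[w]):
            m[d] = target[i]          # d_w^(i) |-> d_{s(w)}^(i)
    return m

# hypothetical example: two fibres of size 2, s swaps the two base points
D = ['d1', 'd2', 'd3', 'd4']
Omega = ['w1', 'w2']
zeta = {'d1': 'w1', 'd2': 'w1', 'd3': 'w2', 'd4': 'w2'}
print(lift_homomorphism(D, Omega, zeta, {'w1': 'w2', 'w2': 'w1'}))
```

Because the numbering of the fibres is fixed independently of γ, the assignment γ 7→ m′γ is automatically multiplicative, which is the point of the lemma.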
10.4. Write
ζ = ρ × ς : D → V × P(S),
Ω = im ζ,
sγ = εγ |Ω : Ω → Ω,
then the map
Γ → Aut(Ω),
γ 7→ sγ
is a homomorphism. Assume that for all γ ∈ Γ there exists a σγ -equivariant γ-semi-automorphism µ : Y → Y , that is, a γ-semi-automorphism satisfying
(39)
µ(g · y) = σγ (g) · µ(y) for all g ∈ G(k), y ∈ Y (k).
The following lemma is obvious:
Lemma 10.5. If γ, δ ∈ Aut(k/k0 ), µ is a σγ -equivariant γ-semi-automorphism, and ν is a
σδ -equivariant δ-semi-automorphism, then µν is a σγδ -equivariant γδ-semi-automorphism
and µ−1 is a σγ −1 -equivariant γ −1 -semi-automorphism.
10.6. Consider σγ (T ) ⊂ σγ (B) ⊂ G. There exists gγ ∈ G(k) such that if we set σγ′ = inn(gγ ) ◦ σγ , then
(40)
σγ′ (T ) = T and σγ′ (B) = B.
For µ : Y → Y as in (39), we define a γ-semi-automorphism
µ′ = gγ ◦ µ : Y → Y,
y 7→ gγ · µ(y)
for y ∈ Y (k).
Then for g ∈ G(k), y ∈ Y (k) we have
(41)
µ′ (g · y) = gγ · µ(g · y) = gγ · σγ (g) · µ(y)
= (gγ · σγ (g) · gγ−1 ) · (gγ · µ(y)) = σγ′ (g) · µ′ (y).
Let D ∈ D = D(Y ) be a color; this means that D is the closure of a codimension one
B-orbit in Y . Since σγ′ (B) = B, it follows from (41) that the divisor µ′ (D) in Y is the
closure of a codimension one B-orbit, that is, a color. We obtain a permutation
mµ : D → D,
(42)
D 7→ µ′ (D),
covering sγ . Since gγ for which (40) holds is defined uniquely up to multiplication on the
left by an element t ∈ T (k) ⊂ B(k), we see that mµ depends only on µ and does not
depend on the choice of gγ .
Lemma 10.7. The map µ 7→ mµ is a homomorphism: for γ, µ, δ, ν as in Lemma 10.5,
we have mµ◦ν = mµ ◦ mν .
Proof. Let (γ, µ) and (δ, ν) be as in Lemma 10.5. Choose gγ , gδ ∈ G(k) such that
(43)
gγ · σγ (T, B) · gγ−1 = (T, B),
gδ · σδ (T, B) · gδ−1 = (T, B).
Set
µ′ = gγ ◦ µ : Y → Y,
ν ′ = gδ ◦ ν : Y → Y.
Then for y ∈ Y (k) we have
(44)
(µ′ ν ′ )(y) = µ′ (ν ′ (y)) = gγ · µ(gδ · ν(y)) = gγ · σγ (gδ ) · (µν)(y).
On the other hand, from (43) we obtain
gγ · σγ (gδ · σδ (T, B) · gδ−1 ) · gγ−1 = (T, B),
hence
gγ σγ (gδ ) · σγδ (T, B) · (gγ σγ (gδ ))−1 = (T, B).
Thus we may set
(µν)′ = (gγ σγ (gδ )) ◦ µν ,
that is,
(µν)′ (y) = gγ σγ (gδ ) · (µν)(y)
for y ∈ Y (k).
Comparing with (44), we see that with this (µν)′ we have
(µν)′ = µ′ ◦ ν ′ ,
hence
mµν = mµ ◦ mν ,
as required.
Proof of Theorem 10.2. Let γ ∈ Γ. Since char k = 0 and εγ preserves the combinatorial invariants of Y = G/H, by Corollary 8.13 there exists a σγ -equivariant γ-semi-automorphism
µγ : Y → Y . Set
mγ = mµγ ∈ Aut(D),
see Subsection 10.6, then mγ covers sγ , where sγ ∈ Aut(Ω) is the restriction of εγ to Ω.
By Lemma 10.3 there exists a homomorphism
m′ : Γ → Aut(D),
γ 7→ m′γ
such that for any γ ∈ Γ the permutation m′γ ∈ Aut(D) covers sγ (property (i)) and
we have m′γ = aγ ◦ mγ , where aγ ∈ AutΩ (D) (property (ii)). By Theorem 7.7 there exists an automorphism ãγ ∈ AutG (Y ) inducing aγ on D. We set µ′γ = ãγ ◦ µγ ; then µ′γ is a σγ -equivariant γ-semi-automorphism of Y , and by Lemma 10.7, µ′γ acts on D by aγ ◦ mγ = m′γ .
Let γ, δ ∈ Γ; then µ′γ µ′δ (µ′γδ )−1 ∈ AutG (Y ) and by Lemma 10.7 it acts on D by m′γ m′δ (m′γδ )−1 = idD . Since Y is spherically closed, we conclude that µ′γ µ′δ (µ′γδ )−1 = idY , hence µ′γδ = µ′γ ◦ µ′δ . Thus the map γ 7→ µ′γ is a homomorphism.
It is easy to see that Y admits a Gk2 -equivariant k2 -model Y2 over some finite Galois extension k2 /k0 in k. Let Γ2 = Gal(k/k2 ), and for γ ∈ Γ2 let µ′′γ denote the γ-semi-automorphism of Y defined by the k2 -model Y2 . After passing to a finite extension, we
may assume that for γ ∈ Γ2 we have sγ = idΩ , and by property (iii) of Lemma 10.3 we
have m′γ = idD . Moreover, we may assume that for γ ∈ Γ2 the semi-automorphism µ′′γ
acts trivially on D. It follows that (µ′′γ )−1 µ′γ acts trivially on D, and clearly (µ′′γ )−1 µ′γ ∈
AutG (Y ). Since Y is spherically closed, we conclude that (µ′′γ )−1 µ′γ = idY , and hence,
µ′γ = µ′′γ for γ ∈ Γ2 . We see that the homomorphism γ 7→ µ′γ satisfies condition (iii) of
Lemma 6.3. Note that Y = G/H is quasi-projective, that is, condition (iv) of Lemma 6.3
is satisfied as well. By Lemma 6.3 the homomorphism γ 7→ µ′γ defines a G0 -equivariant
k0 -model Y0 of Y , which completes the proof of the theorem.
10.8. In Example 0.1 we considered a spherically closed spherical variety Y = G/H,
where G = SL2,k and H = T , a maximal torus in G. In this case it is obvious that for
any k0 -model G0 of G there exists a G0 -equivariant k0 -model Y0 of Y = G/H. Indeed,
there exists a maximal torus T0 ⊂ G0 defined over k0 , and it is clear that Y0 := G0 /T0 is a
G0 -equivariant k0 -model of Y = G/T . In the following example we consider a spherically
closed spherical subgroup that is not conjugate to a subgroup defined over k0 .
Example 10.9. Let k = C, k0 = R. Following a suggestion of Roman Avdeev, we take
G = SO2n+1,C , where n ≥ 2, and we take for H a Borel subgroup of SO2n,C , where
SO2n,C ⊂ SO2n+1,C = G. By Proposition 10.10 below, H is a spherically closed spherical
subgroup of G and NG (H) ≠ H. Take G0 = SO2n+1,R ; then G0 is an anisotropic (compact)
R-model of G. Since the Dynkin diagram of G has no nontrivial automorphisms, G0 is an
inner form. We wish to show that Y = G/H admits a G0 -equivariant R-model. Clearly
H is not conjugate to any subgroup of G0 defined over R because H is not reductive,
and therefore, we cannot argue as in Subsection 10.8. Since NG (H) ≠ H, we cannot
apply Theorem 9.3 either. However, since H is spherically closed, by Theorem 10.2 the
homogeneous variety Y = G/H does admit a G0 -equivariant R-model Y0 .
Proposition 10.10 (Roman Avdeev, private communication). Let G = SO2n+1,C , where
n ≥ 2. Let H be a Borel subgroup of SO2n,C , where SO2n,C ⊂ SO2n+1,C = G. Then H is
a spherically closed spherical subgroup of G, and NG (H) ≠ H.
Proof. Set g = Lie(G). Choose a Borel subgroup B ⊂ G and a maximal torus T ⊂ B. Let
X = X∗ (T ) denote the character lattice of T and let R = R(G, T ) ⊂ X be the root system.
The Borel subgroup B defines a set of positive roots R+ ⊂ R and the corresponding set of
simple roots S ⊂ R+ ⊂ R. Let U denote the unipotent radical of B and put u = Lie(U ).
We have
g = Lie(T ) ⊕ ⊕β∈R gβ ,    u = ⊕β∈R+ gβ ,
where gβ is the root subspace corresponding to a root β.
Let Rl ⊂ R denote the root subsystem consisting of the long roots. Observe that R is
a root system of type Bn , and Rl is a root system of type Dn . Set Rl+ = R+ ∩ Rl . We set
gl = Lie(T ) ⊕ ⊕β∈Rl gβ ,    ul = ⊕β∈Rl+ gβ .
Let Gl (resp., Ul ) be the connected algebraic subgroup of G with Lie algebra gl (resp., ul ).
Set H = T Ul . Then Gl ≃ SO2n,C and H is a Borel subgroup of Gl .
It is well known that H is a spherical subgroup of G. For example, this fact follows from
Theorem 1 of Avdeev [Av11] (to apply this theorem one needs to check that all the short
positive roots in R are linearly independent). By [Av15, Proposition 5.25] H is spherically
closed.
We consider the Weyl group W = W (G, T ) = W (R). Let r ∈ W = NG (T )/T denote
the reflection with respect to the short simple root, and let ρ be a representative of r
in NG (T ). Since r preserves Rl+ , we see that ρ ∈ NG (H). Since ρ ∈ NG (T ) ∖ T and NG (T ) ∩ B = T , we see that ρ ∉ B. By construction H ⊂ B, and we conclude that ρ ∉ H, hence NG (H) ≠ H. In fact, NG (H) = H ∪ ρH by [Av13, Theorem 3].
The following example shows that G/H might have no G0 -equivariant k0 -model when
H is not spherically closed.
Example 10.11. Let k = C, k0 = R. Let n ≥ 1, G = Sp2n,C ×C Sp2n,C , Y = Sp2n,C ; the group G acts on Y by
(g1 , g2 ) ∗ y = g1 y g2−1 ,
g1 , g2 , y ∈ Sp2n (C).
Let H denote the stabilizer in G of 1 ∈ Sp2n (C) = Y (C), then H = Sp2n,C embedded
diagonally in G. We have Y = G/H, and Y is a spherical homogeneous space of G. We
have NG (H) = Z(G)·H, where Z(G) denotes the center of G. It follows that NG (H)/H ≃
{±1} ≠ {1}. Clearly NG (H)/H acts trivially on D(G/H), so H is not spherically closed.
Consider the following real model of G:
G0 = Sp2n,R ×R Sp(n),
where Sp(n) is the compact real form of Sp2n . We show that Y cannot have a G0 -equivariant real model, although G0 is an inner form of a split group.
Indeed, assume for the sake of contradiction that such a real model Y0 of Y exists.
We have Y = Sp2n,C , and Y0 is simultaneously a principal homogeneous space of Sp2n,R
and of Sp(n). Since H¹(R, Sp2n,R ) = 1, we see that Y0 (R) is not empty. It follows that
the topological space Y0 (R) is simultaneously a principal homogeneous space of Sp(2n, R)
and of Sp(n). Thus Y0 (R) is simultaneously homeomorphic to the noncompact Lie group
Sp(2n, R) and to the compact Lie group Sp(n), which is clearly impossible. Thus, there
is no G0 -equivariant real model Y0 of Y .
10.12. Let k, k0 , Γ, G, H, G0 be as in Subsection 9.1, in particular, Γ = Gal(k/k0 ).
We do not assume that char k = 0. We assume that H is spherically closed and that
Y = G/H admits a G0 -equivariant k0 -model Y0 . Then by Corollary 8.13 εγ preserves
the combinatorial invariants of Y for all γ ∈ Γ, in particular, Γ acts on the finite set
Ω(2) = Ω(2) (Y ). Let U1 , U2 , . . . , Ur be the orbits of Γ in Ω(2) . For each i = 1, 2, . . . , r, let
us choose a point ui ∈ Ui . Set Γi = StabΓ (ui ).
Theorem 10.13. With the notation and assumptions of 10.12 we have:
(i) The set of isomorphism classes of G0 -equivariant k0 -models of Y is canonically a principal homogeneous space of the abelian group H¹(Γ, AutΩ (D));
(ii) H¹(Γ, AutΩ (D)) ≃ ∏_{i=1}^{r} Hom(Γi , S2 ), where S2 is the symmetric group on two symbols.
Note that S2 ≅ {±1} ≅ Z/2Z.
Corollary 10.14. In Theorem 10.13 assume that |Γ| = 2. Then the number of isomorphism classes of G0 -equivariant k0 -models of Y = G/H is 2^s , where s is the number of fixed points of Γ in Ω(2) .
Proof. Let Ui be an orbit of Γ in Ω(2) . If |Ui | = 2, then Γi = {1}, hence |Hom(Γi , S2 )| = 1.
If |Ui | = 1, then Γi = Γ, and hence, |Hom(Γi , S2 )| = 2. Now the corollary follows from the
theorem.
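As a purely arithmetic illustration (with a hypothetical orbit structure, not one computed from a specific Y ): suppose |Γ| = 2 and Γ acts on Ω(2) with two fixed points and one orbit of size 2, so that r = 3 and s = 2. Then by Theorem 10.13,

H¹(Γ, AutΩ (D)) ≃ Hom(Γ, S2 ) × Hom(Γ, S2 ) × Hom({1}, S2 ),

a group of order 2 · 2 · 1 = 4 = 2^s , in agreement with Corollary 10.14.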
Proof of Theorem 10.13. Since Y is quasi-projective, the set of the isomorphism classes in the theorem is in a canonical bijection with the pointed set H¹(Γ, AutG (Y )); see Serre [Se97], Proposition 5 in Section III.1.3. By Theorem 2 of Losev [Lo09] (see also Theorem B.1 below), the group AutG (Y ) is abelian, hence H¹(Γ, AutG (Y )) is an abelian group, and the set of isomorphism classes in the theorem is canonically a principal homogeneous space of this abelian group. By Corollary 7.9 there is a canonical isomorphism of abelian groups AutG (Y ) ≅ AutΩ (D), and (i) follows.
We compute H¹(Γ, AutΩ (D)). Recall that we have a surjective map ζ : D → Ω. Set D (2) = ζ −1 (Ω(2) ); then clearly

AutΩ (D) = AutΩ(2) (D (2) ) = ∏_{ω∈Ω(2)} S2 = ∏_{i=1}^{r} ∏_{ω∈Ui} S2 ,

hence

H¹(Γ, AutΩ (D)) = ∏_{i=1}^{r} H¹(Γ, ∏_{ω∈Ui} S2 ).

Since Γ acts on Ui transitively, by the lemma of Faddeev and Shapiro, see Serre [Se97, I.2.5, Proposition 10], we have

H¹(Γ, ∏_{ω∈Ui} S2 ) ≃ H¹(Γi , S2 ) = Hom(Γi , S2 ).

Thus

H¹(Γ, AutΩ (D)) ≃ ∏_{i=1}^{r} Hom(Γi , S2 ),

which proves (ii).
Example 10.15. Let G = SO3,C ≃ PGL2,C . Let T ⊂ G be a maximal torus. We take
H = T and consider Y = G/H = G/T , which is a spherical homogeneous space of G. We
have
AutG (G/T ) = NG (T )/T ≃ {±1}.
Let G0 be an R-form of G, and let T0 be a maximal torus of G0 (defined over R), then
clearly G0 /T0 is a G0 -equivariant R-model of Y = G/T (so we do not have to refer to
Theorem 10.2 in order to see that Y admits a G0 -equivariant real model). Since
H¹(Γ, AutG (G/T )) = Hom(Γ, {±1}) ≃ {±1},
Y has exactly two G0 -equivariant R-models. We describe such models for each R-model
of G = SO3,C .
Consider the indefinite real quadratic form in three variables
F2,1 (x1 , x2 , x3 ) = x21 + x22 − x23 ,
xi ∈ R.
Set G0 = SO(F2,1 ) = SO2,1 , which is a noncompact (split) R-model of G. We consider the affine quadric Y2,1+ ⊂ A³R given by the equation F2,1 (x) = +1, and the affine quadric Y2,1− ⊂ A³R given by the equation F2,1 (x) = −1. Then Y2,1+ and Y2,1− are SO2,1 -equivariant R-models of Y = G/T . It is well known that Y2,1+ (R) is a hyperboloid of one sheet, hence it is connected, while Y2,1− (R) is a hyperboloid of two sheets, hence it is not connected. It follows that the R-varieties Y2,1+ and Y2,1− are two non-isomorphic SO2,1 -equivariant R-models of Y = G/T .
Now consider the positive definite real quadratic form in three variables
F3 (x1 , x2 , x3 ) = x21 + x22 + x23 ,
xi ∈ R.
Set G0 = SO(F3 ) = SO3,R , which is a compact (anisotropic) R-model of G. We consider
the affine quadric Y3+ ⊂ A³R given by the equation F3 (x) = +1, and the affine quadric Y3− ⊂ A³R given by the equation F3 (x) = −1. Then Y3+ and Y3− are SO3,R -equivariant
R-models of Y = G/T . Clearly, Y3+ (R) is the unit sphere in R3 , hence it is nonempty,
while Y3− (R) is empty. It follows that the R-varieties Y3+ and Y3− are two non-isomorphic
SO3,R -equivariant R-models of Y = G/T .
11. Equivariant models of spherical embeddings
of automorphism-free spherical homogeneous spaces
In this section we assume that NG (H) = H.
Theorem 11.1. Let k, G, H, Y = G/H, k0 , Γ, G0 be as in Subsection 9.1. We assume
that
(i) G0 is an inner form of a split group,
(ii) NG (H) = H,
(iii) char k = 0.
Let Y ↪ Y ′ be an arbitrary spherical embedding of Y . Then Y ′ admits a G0 -equivariant
k0 -model Y0′ . This model is compatible with the k0 -model Y0 of Y from Theorem 9.3 and
is unique up to a canonical isomorphism.
This theorem generalizes Theorem 1.2 of Akhiezer [Akh15], who considered the case
k0 = R. Note that Akhiezer considered only the wonderful embedding of Y , while we
consider an arbitrary spherical embedding, so our result is new even in the case k0 = R.
Proof. We show that a k0 -model of Y ′ , if it exists, is unique. Indeed, let Y0′ be such a k0 -model. For γ ∈ Γ := Gal(k/k0 ), let µ′γ : Y ′ → Y ′ be the corresponding γ-semi-automorphism of Y ′ . Since Y is the only open G-orbit in Y ′ , it is stable under µ′γ for all
γ ∈ Γ. Since k0 is a perfect field, this defines a G0 -equivariant k0 -model Y0 of Y , which is
unique because NG (H) = H and hence AutG (Y ) = {1}. Since Y is Zariski-dense in Y ′ , we
conclude that the model Y0′ of Y ′ is unique. (This argument does not use the assumption
that char k = 0.)
We prove the existence. By Theorem 9.3 Y admits a unique G0 -equivariant k0 -model
Y0 . The model Y0 defines an action of Γ on the finite set D, see e.g. Huruguen [Hu11,
2.2.5]. Namely, for any γ ∈ Γ we have a σγ -equivariant γ-semi-automorphism µγ , which
induces an automorphism mγ : D → D covering sγ : Ω → Ω, see (42). We show that this
action of Γ on D is trivial. Indeed, since NG (H) = H, by Corollary 7.10 the surjective
map
ζ: D → Ω
is bijective. Since by assumption G0 is an inner form, for all γ ∈ Γ we have εγ = 1, hence
sγ = 1. Thus Γ acts trivially on Ω and on D.
Let CF(Y ′ ) denote the colored fan of Y ′ (see Knop [Kn89] or Perrin [Pe14, Definition
3.1.9]) which is a set of colored cones (C, F) ∈ CF(Y ′ ), where C ⊂ V and F ⊂ D. We
know that Γ acts trivially on V = HomZ (X , Q) and on D. It follows that for any γ ∈ Γ
and for any colored cone (C, F) ∈ CF(Y ′ ), we have
(45)
γ∗ (C) = C,
γ∗ (F) = F.
It follows that the colored fan CF(Y ′ ) is Γ-stable. Moreover, it follows from (45) that the
hypothesis of Theorem 2.26 of Huruguen [Hu11] is satisfied, that is, Y ′ has a covering by
G-stable and Γ-stable open quasi-projective subvarieties. By this theorem Y ′ admits a
G0 -equivariant k0 -model compatible with Y0 .
Remark 11.2. Huruguen [Hu11] assumes that Y0 has a k0 -point, but he does not use
that assumption.
Remark 11.3. In Theorem 11.1 we do not assume that Y ′ is quasi-projective.
Appendix A. Algebraically closed descent
for spherical homogeneous spaces
The proofs in this appendix were communicated to the author by experts.
Theorem A.1. Let G0 be a connected reductive group defined over an algebraically
closed field k0 of characteristic 0. Let k ⊃ k0 be a larger algebraically closed field. We set
G = G0 ×k0 k, the base change of G0 from k0 to k. Let H ⊂ G be a spherical subgroup of
G (defined over k). Then H is conjugate to a (spherical) subgroup defined over k0 .
Proof. The theorem will be proved in five steps.
1) Let X0 be a variety equipped with an action of G0 . Then X0 is the disjoint union
of locally closed G0 -stable subvarieties X0m consisting of all orbits of a fixed dimension m.
The orbits of maximal (resp. minimal) dimension form an open (resp. closed) subvariety.
To see this, consider the product G0 × X0 and the subvariety Y0 consisting of the pairs
(g, x) such that gx = x, equipped with the projection to X0 . By a theorem of Chevalley, see
EGA [Gr66, 13.1.3], the dimension of fibers of this projection is an upper semicontinuous
function on Y0 . Restricting this function to X0 (viewed as the closed subvariety of Y0 on
which g = e), it follows that the dimension of the stabilizer is an upper semicontinuous function on X0 , hence the dimension of the G0 -orbit is a lower semicontinuous function on X0 .
2) Take for X0 the variety of Lie subalgebras of g0 = Lie G0 of a fixed codimension, say
r, and let X be the k-variety obtained from X0 by scalar extension. Then X is the variety
of Lie subalgebras of codimension r in g = Lie G. Moreover, the stabilizer of a k-point h
in X is the normalizer of h (viewed as a Lie algebra) in G. So the dimension of the orbit
of h is
dim(G) − dim NG (h) = dim(G) − dim(h) − (dim NG (h) − dim(h)) = r − dim ng (h)/h,
where ng (h) denotes the normalizer of h in g. Thus, if there exists h such that h = ng (h),
then the Lie subalgebras h satisfying this property are the k-points of the open subset of
orbits of maximal dimension. Note that ng (h) is the Lie algebra of NG (h). So if h = ng (h),
then h is an algebraic Lie algebra.
3) Let H be a spherical subgroup of G with Lie algebra h such that ng (h) = h. We
claim that the orbit G · h in X is open.
To prove this, recall that G · h has maximal dimension among G-orbits in X. Since h is
a spherical Lie subalgebra, all Lie subalgebras h′ in an open neighborhood U of h in X are
spherical and their orbits are of the same (maximal) dimension; thus, NG (h′ ) is a spherical
subgroup of G, with Lie algebra h′ by Step 2. By Theorem 3.1 of Alexeev and Brion [AB05],
only finitely many conjugacy classes of spherical subgroups of the form NG (h′ ) for h′ ∈ U
are obtained in this way; let H1 , . . . , Hr be representatives of the conjugacy classes. We
write hi = Lie Hi ; then we see that any spherical Lie algebra h′ ∈ U is conjugate to one of h1 , . . . , hr ; in particular, U meets only finitely many G-orbits in X,
and all these orbits are of the same dimension. It follows that all these orbits are open, in
particular, the orbit G · h in X is open.
4) By Step 3, the Lie algebras h of spherical subgroups H of G such that ng (h) = h
form finitely many G-orbits, and the closures of these orbits are irreducible components
of the variety X, which is defined over k0 . It follows that every such orbit is defined over
k0 . Since k0 is algebraically closed, every such G-orbit has a k0 -point, which proves the
theorem for spherical subgroups such that ng (h) = h. Also
NG (h) = NG (H⁰) = NG (H),
where the latter equality follows from Corollary 5.2 of Brion and Pauer [BP87]. Thus the
condition that ng (h) = h is equivalent to the condition that NG (H)/H is finite.
5) To handle the case of an arbitrary spherical k-subgroup H of G, consider the spherical
closure of H, that is, the algebraic subgroup H̄ of NG (H) containing H such that
H̄/H = ker [NG (H)/H → Aut D(G/H)] .
By Corollary A.3 below, the spherical closure H̄ is spherically closed, that is, NG (H̄)/H̄ acts faithfully on the finite set of colors of G/H̄, hence the group NG (H̄)/H̄ is finite, and therefore, ng (Lie H̄) = Lie H̄. By Step 4 we may assume that H̄ is defined over k0 . Now H is an intersection of kernels of characters of H̄ (since the quotient H̄/H is diagonalizable) and every such character is defined over k0 . Thus H is defined over k0 , as required.
An alternative proof, also based on Theorem 3.1 of Alexeev and Brion [AB05], is
sketched in Knop’s MathOverflow answer [Kn17b].
From now on till the end of this appendix we follow Avdeev [Av15]. Let G be a
connected reductive group over an algebraically closed field k of characteristic 0. Fix a
finite covering group G̃ → G such that G̃ is a direct product of a torus with a simply connected semisimple group. For every simple G̃-module V , the corresponding projective space P(V ) has the natural structure of a G-variety. Every G-variety arising in this way is said to be a simple projective G-space.
Proposition A.2 (Bravi and Luna [BL11, Lemma 2.4.2], see also Avdeev [Av15, Corollary
3.24]). For any spherical subgroup H of a connected reductive group G over an algebraically
closed field k of characteristic 0, the spherical closure H̄ of H is the common stabilizer in
G of all H-fixed points in all simple projective G-spaces.
Corollary A.3 (well known). Let H be a spherical subgroup of a connected reductive
group G defined over an algebraically closed field k of characteristic 0. Let H ′ denote the
spherical closure of H, and let H ′′ denote the spherical closure of H ′ . Then H ′′ = H ′ ,
that is, H ′ is spherically closed.
This result was stated without proof in Section 6.1 of Luna [Lu01] (see also Avdeev
[Av15, Corollary 3.25]).
Deduction of Corollary A.3 from Proposition A.2. Let P(V ) be a simple projective G-space.
Let P(V )^H denote the set of fixed points of H in P(V ). Since H ⊂ H ′ , we have P(V )^{H ′} ⊂ P(V )^H . By Proposition A.2 applied to H, we have P(V )^{H ′} ⊃ P(V )^H . Thus P(V )^{H ′} = P(V )^H .
By Proposition A.2 applied to H ′ , the group H ′′ (k) is the set of g ∈ G(k) that fix P(V )^{H ′} pointwise for all simple projective G-spaces P(V ). By Proposition A.2 applied to H, the group H ′ (k) is the set of g ∈ G(k) that fix P(V )^H pointwise for all simple projective G-spaces P(V ). Since P(V )^{H ′} = P(V )^H , we have H ′′ = H ′ , as required.
Appendix B. The action of the automorphism group on the colors
of a spherical homogeneous space
By Giuliano Gagliardi
In this appendix we prove Theorem 7.7, which we restate below as Theorem B.4. Our
proof is based on Friedrich Knop’s MathOverflow answer [Kn17a] to Borovoi’s question.
Knop writes that Theorem B.4 was communicated to him by Ivan Losev.
Let G be a connected reductive group over an algebraically closed field k of characteristic 0. Let Y = G/H be a spherical homogeneous space.
We present results of Knop [Kn96] and Losev [Lo09] describing AutG (Y ), using the
notation of Section 7. Consider the uniquely determined set Σ ⊂ X of linearly independent
primitive elements in the lattice X such that
V = ⋂γ∈Σ {v ∈ V : ⟨γ, v⟩ ≤ 0}.
The elements of Σ are called the spherical roots of Y . For the description of AutG (Y ) a related subset Σ̄ ⊂ X is used, see Losev [Lo09, Introduction, before Theorem 2]. Losev has shown how Σ̄ can be computed from Σ and D.
Theorem B.1 (Knop [Kn96, Theorem 5.5]). For every φ ∈ AutG (Y ) and every χ ∈ X
there exists aφ,χ ∈ k× such that
φ|K(Y )χ(B) = aφ,χ · id.
The resulting homomorphism
AutG (Y ) → Hom(X , k× )
is injective and its image is {ψ ∈ Hom(X , k× ) : ψ(Σ̄) = {1}}.
For α ∈ S, let D(α) denote the set of colors D ∈ D such that the parabolic subgroup
Pα moves D, that is, α ∈ ς(D). We need the following results of Luna [Lu97], [Lu01]:
Proposition B.2. Let α ∈ S.
(1) We have |D(α)| ≤ 2. Moreover, |D(α)| = 2 if and only if α ∈ Σ ∩ S.
(2) Assume |D(α)| = 2 and write D(α) = {Dα+ , Dα− }. If ρ(Dα+ ) = ρ(Dα− ), then:
(i) ⟨ρ(Dα+ ), χ⟩ = ⟨ρ(Dα− ), χ⟩ = ½ ⟨α∨ , χ⟩ for all χ ∈ X , where α∨ ∈ X∗ (T ) is the corresponding simple coroot;
(ii) we have ς(Dα+ ) = ς(Dα− ) = {α}.
Proof. For (1), see Luna [Lu97, Sections 2.6 and 2.7] or Timashev [Tim11, 30.10]. For
(2), we use that [Lu01, Theorem 2] or [Tim11, Theorem 30.22] implies that the invariants
of a spherical homogeneous space satisfy the axioms of a homogeneous spherical datum.
These axioms are stated in [Lu01, Sections 2.1 and 2.2] and [Tim11, Definition 30.21]. In
particular, we have ρ(Dα+ ) + ρ(Dα− ) = α∨ |X and for every β ∈ ς(Dα± ) we have β ∈ X and
⟨ρ(Dα± ), β⟩ = 1. With the assumption ρ(Dα+ ) = ρ(Dα− ), we obtain (i) and then (ii).
We need the following result of Losev:
Proposition B.3. The set Σ̄ has the following properties:
(1) The set Σ̄ can be obtained from Σ by replacing some elements γ ∈ Σ by 2γ and leaving the other elements γ ∈ Σ unchanged.
(2) If α ∈ Σ ∩ S and ⟨ρ(Dα+ ), χ⟩ = ⟨ρ(Dα− ), χ⟩ = ½ ⟨α∨ , χ⟩ for all χ ∈ X , then 2α ∈ Σ̄ (hence α ∉ Σ̄).
Proof. See Losev [Lo09, Theorem 2 and Definition 4.1.1(1)].
The following theorem is the main result of this appendix:
Theorem B.4 (Losev, unpublished). The homomorphism
(46)
AutG (Y ) → AutΩ (D)
is surjective.
Proof of Theorem B.4. Let A denote the set of simple roots α ∈ S such that |D(α)| = 2
and ρ(Dα+ ) = ρ(Dα− ). By Proposition B.2, for every α ∈ A we have ς(Dα+ ) = ς(Dα− ) = {α},
hence the map α 7→ {Dα+ , Dα− } is a bijection between A and the set of unordered pairs
{Dα+ , Dα− } such that (ρ × ς)(Dα+ ) = (ρ × ς)(Dα− ). Note that there is a canonical bijection
A → Ω(2) ,
α 7→ (ρ × ς)(Dα+ ).
By Proposition B.2(1), for every α ∈ A we have α ∈ S ∩ Σ ⊂ X , hence there exists fα ∈ K(Y )α(B) , fα ≠ 0. Moreover, from Propositions B.2 and B.3 we obtain 2α ∈ Σ̄ (and α ∉ Σ̄).
We want to show that for any α ∈ A there exists φα ∈ AutG (Y ) such that φα swaps Dα+ and Dα− , but fixes all Dβ+ and Dβ− for β ∈ A, β ≠ α.
Let α ∈ A. Since the field k is algebraically closed and the set Σ ⊂ X is linearly
independent, we can construct a homomorphism ψα : X → k× with ψα (α) = −1 and
ψα (γ) = 1 for every γ ∈ Σ ∖ {α}, and such that ψα is of finite order in the group Hom(X , k× ). Then we have ψα (Σ̄) = {1}. By Theorem B.1 there exists an automorphism of finite order φα ∈ AutG (Y ) with
(47)
φα (fβ ) = −fβ for β = α,   and   φα (fβ ) = fβ for β ∈ A ∖ {α}.
Let H̃ ⊂ NG (H) denote the subgroup containing H such that
H̃/H = ⟨φα ⟩ ⊂ NG (H)/H = AutG (Y ),
where ⟨φα ⟩ denotes the finite subgroup generated by φα . We set Ỹ = G/H̃. We use the same notation for the combinatorial objects associated to the spherical homogeneous space Ỹ as for Y , but with a tilde above the respective symbol. The morphism of G-varieties Y → Ỹ induces an embedding K(Ỹ ) ↪ K(Y ), and K(Ỹ ) is the fixed subfield of φα . Since K(Ỹ ) is a G-invariant subfield of K(Y ), we have K(Ỹ )(B) ⊂ K(Y )(B) and X̃ ⊂ X .
By (47) we have φα (fα ) = −fα ≠ fα . We see that fα ∉ K(Ỹ ), hence α ∈ X ∖ X̃, in particular α ∉ Σ̃. By Proposition B.2(1) we have |D̃(α)| ≤ 1, hence the two colors in D(α) are mapped to one color by the map Y → Ỹ , that is, φα swaps Dα+ and Dα− .
On the other hand, for any β ∈ A ∖ {α}, by (47) we have φα (fβ ) = fβ , hence fβ ∈ K(Ỹ ) and β ∈ X̃. Since β is a primitive element of X , it is a primitive element of X̃ ⊂ X . The natural map V → Ṽ induced by Y 7→ Ỹ is bijective and identifies V and Ṽ (see Knop [Kn89, Section 4]). Since β ∈ Σ is dual to a wall of −V, it is dual to a wall of −Ṽ = −V.
It follows that β ∈ S ∩ Σ̃, hence |D̃(β)| = 2, and the two colors in D(β) are mapped to distinct colors under Y → Ỹ , that is, φα fixes Dβ+ and Dβ− .
References
[Akh15] Dmitri Akhiezer, Satake diagrams and real structures on spherical varieties, Internat. J. Math. 26 (2015), no. 12, 1550103, 13 pp.
[ACF14] Dmitri Akhiezer and Stéphanie Cupit-Foutou, On the canonical real structure on wonderful varieties, J. reine angew. Math. 693 (2014), 231–244.
[AB05] Valery Alexeev and Michel Brion, Moduli of affine schemes with reductive group action, J. Algebraic Geom. 14 (2005), 83–117.
[Av11] Roman Avdeev, On solvable spherical subgroups of semisimple algebraic groups, Trans. Moscow Math. Soc. 2011, 1–44.
[Av13] Roman Avdeev, Normalizers of solvable spherical subgroups, Math. Notes 94 (2013), 20–31.
[Av15] Roman Avdeev, Strongly solvable spherical subgroups and their combinatorial invariants, Selecta Math. (N.S.) 21 (2015), 931–993.
[B91] Armand Borel, Linear Algebraic Groups, Second edition, Graduate Texts in Mathematics 126, Springer-Verlag, New York, 1991.
[BS64] Armand Borel et Jean-Pierre Serre, Théorèmes de finitude en cohomologie galoisienne, Comm. Math. Helv. 39 (1964), 111–164.
[Brv93] Mikhail Borovoi, Abelianization of the second nonabelian Galois cohomology, Duke Math. J. 72 (1993), 217–239.
[BKLR14] M. Borovoi, B. Kunyavskiĭ, N. Lemire, and Z. Reichstein, Stably Cayley groups in characteristic zero, Internat. Math. Res. Notices 2014, 5340–5397.
[BL11] Paolo Bravi and Domingo Luna, An introduction to wonderful varieties with many examples of type F4, J. Algebra 329 (2011), 4–51.
[BP87] Michel Brion et Franz Pauer, Valuations des espaces homogènes sphériques, Comment. Math. Helv. 62 (1987), 265–285.
[CF15] Stéphanie Cupit-Foutou, Anti-holomorphic involutions and spherical subgroups of reductive groups, Transform. Groups 20 (2015), 969–984.
[FSS98] Yuval Z. Flicker, Claus Scheiderer, and R. Sujatha, Grothendieck's theorem on non-abelian H² and local-global principles, J. Amer. Math. Soc. 11 (1998), 731–750.
[Gr66] Alexandre Grothendieck, Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas, Troisième partie, Inst. Hautes Études Sci. Publ. Math. No. 28, 1966.
[Hu11] Mathieu Huruguen, Toric varieties and spherical embeddings over an arbitrary field, J. Algebra 342 (2011), 212–234.
[Ja00] Jörg Jahnel, The Brauer-Severi variety associated with a central simple algebra: a survey, https://www.math.uni-bielefeld.de/LAG/man/052.pdf.
[Kn89] Friedrich Knop, The Luna-Vust theory of spherical embeddings, Proceedings of the Hyderabad Conference on Algebraic Groups, December 1989, Madras: Manoj Prakashan (1991), 225–249.
[Kn96] Friedrich Knop, Automorphisms, root systems, and compactifications of homogeneous varieties, J. Amer. Math. Soc. 9 (1996), 153–174.
[Kn17a] Friedrich Knop (https://mathoverflow.net/users/89948/friedrich-knop), Action of N(H)/H on the colors of a spherical homogeneous space G/H, URL (version: 2017-06-09): https://mathoverflow.net/q/271795.
[Kn17b] Friedrich Knop (https://mathoverflow.net/users/89948/friedrich-knop), Is any spherical subgroup conjugate to a subgroup defined over a smaller algebraically closed field?, URL (version: 2017-08-01): https://mathoverflow.net/q/277708.
[Lo09] Ivan Losev, Uniqueness property for spherical homogeneous spaces, Duke Math. J. 147 (2009), 315–343.
[Lu97] Domingo Luna, Grosses cellules pour les variétés sphériques, in: Algebraic groups and Lie groups, Austral. Math. Soc. Lect. Ser., vol. 9, Cambridge Univ. Press, Cambridge, 1997, pp. 267–280.
[Lu01] Domingo Luna, Variétés sphériques de type A, Publ. Math. Inst. Hautes Études Sci. 94 (2001), 161–226.
[Mi18] James S. Milne, Algebraic Groups. The Theory of Group Schemes of Finite Type over a Field, Cambridge Studies in Advanced Mathematics 170, Cambridge University Press, Cambridge, 2017.
[Pe14] Nicolas Perrin, On the geometry of spherical varieties, Transform. Groups 19 (2014), 171–223.
[Se97] Jean-Pierre Serre, Galois cohomology, Springer-Verlag, Berlin, 1997.
[Sp79] Tonny A. Springer, Reductive groups, in: Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, pp. 3–27, Proc. Sympos. Pure Math. 33, Amer. Math. Soc., Providence, RI, 1979.
[Sp98] Tonny A. Springer, Linear Algebraic Groups, Second ed., Progress in Math. 9, Birkhäuser, Boston, MA, 1998.
[Tim11] Dmitry A. Timashev, Homogeneous spaces and equivariant embeddings, Encyclopaedia of Mathematical Sciences 138, Springer, Berlin, 2011.
[Tits66] Jacques Tits, Classification of algebraic semisimple groups, in: Algebraic Groups and Discontinuous Subgroups (Proc. Sympos. Pure Math. 9, Boulder, Colo., 1965), pp. 33–62, Amer. Math. Soc., Providence, R.I., 1966.
[We15] Thorsten Wedhorn, Spherical spaces, to appear in Annales de l'Institut Fourier, arXiv:1512.01972 [math.AG].
Raymond and Beverly Sackler School of Mathematical Sciences, Tel Aviv University,
6997801 Tel Aviv, Israel
E-mail address: [email protected]
Simulation studies on the design of optimum PID
controllers to suppress chaotic oscillations in a family of
Lorenz-like multi-wing attractors
Saptarshi Dasa,b, Anish Acharyac,d, and Indranil Pana
a) Department of Power Engineering, Jadavpur University, Salt Lake Campus, LB-8,
Sector 3, Kolkata-700098, India.
b) Communications, Signal Processing and Control (CSPC) Group, School of Electronics
and Computer Science, University of Southampton, Southampton SO17 1BJ, United
Kingdom.
c) Department of Instrumentation and Electronics Engineering, Jadavpur University, Salt
Lake Campus, LB-8, Sector 3, Kolkata-700098, India.
d) Department of Electrical and Computer Science Engineering, University of California
Irvine, CA 92697-2620, USA.
Authors’ Emails:
[email protected], [email protected] (S. Das)
[email protected] (A. Acharya)
[email protected], [email protected] (I. Pan)
Abstract:
Multi-wing chaotic attractors are highly complex nonlinear dynamical systems with
higher number of index-2 equilibrium points. Due to the presence of several equilibrium
points, randomness and hence the complexity of the state time series for these multi-wing
chaotic systems is much higher than that of the conventional double-wing chaotic
attractors. A real-coded Genetic Algorithm (GA) based global optimization framework
has been adopted in this paper as a common template for designing optimum
Proportional-Integral-Derivative (PID) controllers in order to control the state trajectories
of four different multi-wing chaotic systems among the Lorenz family viz. Lu system,
Chen system, Rucklidge (or Shimizu Morioka) system and Sprott-1 system. Robustness
of the control scheme for different initial conditions of the multi-wing chaotic systems
has also been shown.
Keywords - chaos control; chaotic nonlinear dynamical systems; Lorenz family; multiwing attractor; optimum PID controller
1. Introduction
Chaos is a field in mathematical physics which has found wide applications,
ranging from weather prediction to geology, mathematics, biology, computer science,
economics, philosophy, sociology, population dynamics, psychology, robotics etc. [29].
Chaos theory studies the behavior of dynamical systems which are nonlinear, highly
sensitive to changes in initial conditions, and have deterministic (rather than
probabilistic) underlying rules which dictate the evolution of the future states of a system. Such systems exhibit aperiodic oscillations in the time series of the state
variables and long term prediction of such systems is not possible due to the sensitive
dependence on initial conditions. Theoretical investigations of chaotic time series have
percolated into real world applications like those in computer networks, data encryption,
information processing systems, pattern recognition, economic forecasting, stock market
prediction etc. [29]. Historically, chaos was first observed in natural convection [16],
weather prediction and subsequently in several other fluid mechanics related problems
[2]. Survey of a wide variety of physical systems exhibiting chaos and pattern formation
in nature ranging from convection, mixing of fluids, chemical reaction, electrohydrodynamics to solidification and nonlinear optics have been summarized in [10].
An interesting ramification of chaos theory is the investigation of synchronization
and control of such systems [19]. Even with their unpredictable dynamics, proper control
laws can be designed to suppress the undesirable excursions of the state variables or
make the state variables follow a definite trajectory [35]. These chaos control schemes
have a variety of engineering applications and hence control of chaotic systems has
received increased attention in last few years [5]. In chaos control, the prime objective is
to suppress the chaotic oscillations completely or reduce them to regular oscillations.
Recent chaos control techniques include open loop control methods, adaptive control
methods, traditional linear and nonlinear control methods, fuzzy control techniques etc.
among many others [5]. Most chaos control techniques exploit the fact that any chaotic
attractor containing infinite number of unstable periodic orbits can be modified using an
external control action to produce a stable periodic orbit. The chaotic system’s states
never remain in any of these unstable orbits for a long time; rather, the system continuously switches from one orbit to the other, which gives rise to this unpredictable, random
wandering of the state variables over a longer period of time. The control scheme
basically tries to achieve stabilization, by means of small system perturbations, of one of
these unstable periodic orbits. The result is to render an otherwise chaotic motion more
stable and predictable. The perturbation must be tiny, to avoid significant modification of
the system’s natural dynamics. Several techniques have been used to control chaos, but
most of them are developments of two basic approaches: the OGY (Ott, Grebogi and
Yorke) method [20], and Pyragas continuous control method [22]. Both methods require
a prior determination of the unstable periodic orbits of the chaotic system before the
control algorithm can be designed. The basic difference between the OGY and Pyragas
methods of chaos control is that the former relies on the linearization of the Poincare map
and the latter is based on time delay feedback.
PID controller based designs have been popular in the control engineering community for several decades, due to their design simplicity, ease of use, and implementation feasibility. PID type controller designs can be found in the recent literature for synchronization of chaotic systems with different initial guesses [1, 3, 7-9, 14, 15].
However, except for a few attempts like [4], optimum PID controller design for chaos control
is not well investigated yet, especially for the control of highly complex chaotic systems
like multi-wing attractors, as attempted in this paper. The rationale behind particularly
choosing PID controllers is due to its simplicity, ease of implementation and tuning
methods, compared to the unavailability of Lyapunov based stabilization and sliding mode
techniques [19, 35] for multi-wing chaotic attractors. The classical approach of dealing
with chaos control and synchronization is to formally eliminate the nonlinear terms for
designing a suitable nonlinear control scheme, in order to make the error dynamical
system linear and then find a suitable Lyapunov based stabilization scheme [19]. The
present approach needs no additional requirement of canceling complex multi-segment
nonlinear terms by a nonlinear control scheme for each chaotic system. The success of
the present approach lies in the fact that a generalized linear PID control framework can
stabilize a family of complex multi-wing chaotic system when only the second state
variable needs to be sensed and manipulated using an external PID control action. The
controller gains are optimized to damp the oscillations as early as possible and the control
scheme is also tested for required robustness against system’s initial condition variation.
Previously, the design of PID type controllers equipped with fractional order integro-differential operators and fuzzy logic has been researched for chaos synchronization
[11] and chaos control [21], within a common global optimization framework without
paying much attention to the specific nonlinear structure and complexity of the chaotic
system. This idea has been extended here for the control of highly complex nonlinear
dynamical systems i.e. multi-wing chaotic attractors in the Lorenz family.
Much research has focused on theoretical and experimental studies on the
synchronization and control of classical double-wing chaotic and hyper-chaotic systems
from the Lorenz family like the Lorenz, Chen and Lu attractors [19, 35, 5, 12]. However,
there is almost no significant result for relatively complex chaotic systems like the multi-scroll and multi-wing attractors, which is addressed in the present paper. Multi-scroll
attractors first emerged as an extension of the Chua’s circuit [31] and it has been found to
be more suitable than the classical double-wing chaotic attractors in applications like
secure communication and data encryption. The multi-scroll attractors are produced by
suitably modifying the nonlinear terms of the governing differential equations of the
chaotic system which has been successfully applied to produce 1D, 2D and 3D scroll grid
attractors in [17]. The reason behind obtaining highly complex time-series out of multi-scroll and multi-wing attractors is that they have more equilibrium points than
the classical double-wing attractors. Recent research has shown that such highly complex
time-series and phase space behavior can also be obtained from typical chaotic and
hyper-chaotic systems with no equilibrium point [27] and infinite number of equilibrium
points [36]. Theoretical studies on the complexity analysis of such multi-wing attractors
have been reported in He et al. [13] using spectral entropy and statistical complexity
measure. Among the two major families of complex chaotic systems, the control and
synchronization of multi-scroll attractors are proposed in [30] but there is almost no result for
the multi-wing attractors.
The purpose of the present study is to develop a common template to suppress highly
complex chaotic oscillations in the Lorenz family of multi-wing attractors using a simple
controller structure like the PID and a global optimization based gain tuning mechanism. An
objective function for the controller design problem is framed in terms of the state variables
of the chaotic system and the desired state trajectories of the system. The real coded GA
based optimization is then employed to find out the values of the PID controller gains.
2. Basics of the Lorenz family of multi-wing chaotic systems
Four classical examples of symmetric double-wing chaotic attractors are chosen
as the base cases here among the Lorenz family to study the control of multi-wing
attractors [32]. In spite of having several literatures on the generation of multi-scroll
attractors as an extension of Chua’s circuit, the first extension of the Lorenz family of
attractors from double-wing to multi-wing was proposed by Yu et al. [32]. It is also
observed that in the Lorenz family of chaotic systems, similar attractors could be
generated by replacing the cross-products with quadratic terms. The common
characteristics of the source of nonlinearity in the state equations of the original double-wing Lorenz family of chaotic systems are due to having either a square and/or cross-terms of the state variables. These particular terms can be replaced by a multi-segment parameter adjustable quadratic function (1) to generate a multi-wing attractor with
additional flexibility of modifying the numbers and locations of index-2 equilibrium
points. As reported in the pioneering works of multi-wing chaos [32]-[33], the segment
characteristics like the slope and width can be adjusted using the parameters { F0 , Fi , Ei } of
equation (1). This typical function increases the number of index-2 equilibrium points
along a particular axis for Lorenz family of chaotic systems from two to ( 2 N + 2 ) ,
thereby increasing the randomness or complexity of the state trajectories of the nominal
(double-wing) chaotic system which is difficult to control by analytical methods.
f ( x ) = F0 x² − ∑_{i=1}^{N} Fi [1 + 0.5 sgn( x − Ei ) − 0.5 sgn( x + Ei )]        (1)

where

sgn( x ) = 1 for x > 0,  0 for x = 0,  −1 for x < 0        (2)

and N is a positive integer responsible for the number of wings of the chaotic attractor.
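For concreteness, a minimal Python sketch of the multi-segment quadratic function (1)–(2) is given below. The parameter values in the demo are those quoted for the multi-wing Lu system in (5); the vectorized form and the sample points printed are illustrative choices of ours, not part of the original model.

```python
import numpy as np

def multi_segment_f(x, F0, F, E):
    """Multi-segment quadratic function of (1).

    F and E hold the parameters F_i and E_i (i = 1..N); sgn is the
    signum of (2). The bracketed term equals 1 for |x| > E_i and 0
    for |x| < E_i, so each (F_i, E_i) pair carves an extra segment.
    """
    x = np.asarray(x, dtype=float)
    fx = F0 * x**2
    for Fi, Ei in zip(F, E):
        fx = fx - Fi * (1.0 + 0.5 * np.sign(x - Ei) - 0.5 * np.sign(x + Ei))
    return fx

# Parameters of (5) for the multi-wing Lu system (N = 4)
F = [10.0, 12.0, 16.67, 18.18]
E = [0.3, 0.45, 0.6, 0.75]
print(multi_segment_f([0.2, 0.5, 1.0], 100.0, F, E))
```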
In this paper, two multi-wing chaotic systems among the Lorenz family are
obtained by replacing the cross-product terms (in Lu and Chen system) by the above
mentioned multi-segment function. In the other two systems among the Lorenz family,
the quadratic terms (in Rucklidge and Sprott-1 system) are replaced by the multi-segment
function (1).
2.1. Chaotic multi-wing Lu system
The double-wing chaotic Lu system [18] is represented by (3).
ẋ = −ax + ay
ẏ = cy − xz        (3)
ż = xy − bz
The typical parameter settings for chaotic double-wing Lu attractor is given as
a = 36, b = 3, c = 20 . The equilibrium points of the Lu system are located at
(0, 0, 0) and (±√(bc), ±√(bc), c). The state equations of the multi-wing chaotic Lu attractor
whose states are to be controlled using a PID controller are given by (4).
ẋ = −ax + ay
ẏ = cy − (1/P) xz + u        (4)
ż = f ( x ) − bz
Here, P reduces the dynamic range of the attractors so as to facilitate hardware
realization. The suggested parameters for N = 4 are given in (5) [32] for which the chaotic
Lu system exhibits multi-wing attractors in the phase portraits.
P = 0.05, F0 = 100, F1 = 10, F2 = 12, F3 = 16.67, F4 = 18.18,
(5)
E1 = 0.3, E2 = 0.45, E3 = 0.6, E4 = 0.75
Here, the original cross product ( xy ) is replaced by the multi-segment quadratic function
f ( x ) . Fig. 1 shows the projections of the 3-dimensional (x-y-z) phase space dynamics
(Fig. 1d) of the multi-wing Lu system on the x-y (Fig. 1a), y-z (Fig. 1b), x-z (Fig. 1c)
planes. Also, in (4) the PID control action is added to the second state variable to
suppress the chaotic oscillations and the control action is given by (6).
u = Kp e + Ki ∫ e dt + Kd (de/dt),   e = (r − y)        (6)
Here, { K p , Ki , K d } are the PID controller gains which are to be found out by a suitable
optimization technique for the reference signal ( r ) as a unit step.
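To make the closed loop concrete, a minimal Python sketch of (4) under the PID law (6) is given below, integrated with a simple fixed-step Euler scheme. The step size, horizon and initial condition are assumptions of ours for illustration; the gains are the optimized values reported later in Table 1, the multi-segment function is reused from the sketch after equation (2), and the other three systems of this section can be simulated analogously by swapping in their vector fields.

```python
import numpy as np

def simulate_controlled_lu(Kp, Ki, Kd, a=36.0, b=3.0, c=20.0, P=0.05,
                           r=1.0, x0=(0.1, 0.1, 0.1), dt=1e-4, T=10.0):
    """Fixed-step Euler integration of the PID-controlled Lu system (4)+(6).

    Only the second state y is sensed and manipulated, as in the text.
    Reuses multi_segment_f and the parameters F, E of (5) defined above.
    """
    n = int(T / dt)
    x, y, z = x0
    integ, e_prev = 0.0, r - y
    traj = np.empty((n, 3))
    for k in range(n):
        e = r - y                                          # error of the second state
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt   # PID law (6)
        dx = -a * x + a * y
        dy = c * y - (1.0 / P) * x * z + u
        dz = float(multi_segment_f(x, 100.0, F, E)) - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        e_prev = e
        traj[k] = (x, y, z)
    return traj

# Gains from the Lu system row of Table 1 below
trajectory = simulate_controlled_lu(Kp=3.156, Ki=27.562, Kd=1.449)
print(trajectory[-1])
```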
Fig. 1. Uncontrolled phase plane portraits for multi-wing Lu system [32].
2.2. Chaotic multi-wing Chen system
The double-wing Chen system [6] is represented by (7).
ẋ = −ax + ay
ẏ = (c − a) x + cy − xz        (7)
ż = xy − bz
The typical parameter settings for chaotic double-wing Chen attractor is given as
a = 35, b = 3, c = 28 . The equilibrium points of the Chen system are located at
(0, 0, 0) and (±√(b(2c − a)), ±√(b(2c − a)), 2c − a).
The state equations of the multi-wing
chaotic Chen attractor whose states are to be controlled are given by (8).
ẋ = −ax + ay
ẏ = (c − a) x + cy − (1/P) xz + u        (8)
ż = f ( x ) − bz
The suggested parameters for N = 4 are given in (9) for which the chaotic Chen system
exhibits multi-wing attractors in phase portraits.
P = 0.05, F0 = 100, F1 = 12, F2 = 12, F3 = 16.67, F4 = 18.75,
(9)
E1 = 0.3, E2 = 0.45, E3 = 0.6, E4 = 0.75
The multi-wing Chen system in Fig. 2 shows similar phase space behavior to that of the
multi-wing Lu system in Fig. 1, except that more trajectories are observed away from the equilibrium points. Similar to the previous case of the multi-wing Lu
system, the nonlinear cross-product ( xy ) in the third state variable is replaced by f ( x ) .
Fig. 2. Uncontrolled phase plane portraits for multi-wing Chen system [32].
2.3. Chaotic Multi-wing Rucklidge system
The double-wing Shimizu-Morioka system [24] is given by (10).
ẋ = −ax + by − yz
ẏ = x        (10)
ż = y² − z
The typical parameter settings for chaotic double-wing Shimizu-Morioka attractor is
given as a = 2, b = 7.7 . The equilibrium points of the Shimizu-Morioka system are located
at (0, 0, 0) and (0, ±√b, b). The state equations of the multi-wing chaotic Shimizu-Morioka
attractor whose states are to be controlled are given by (11).
ẋ = −ax + by − (1/P) yz
ẏ = x + u        (11)
ż = f ( y ) − z
The above mentioned multi-wing chaotic Shimizu-Morioka system [34] is also known as
modified Rucklidge system [23]. The suggested parameters for N = 3 are given in (12)
[32] for which the chaotic Rucklidge system exhibits multi-wing attractors in the phase
portraits (Fig. 3).
(12)
P = 0.5, F0 = 4, F1 = 9.23, F2 = 12, F3 = 18.18, E1 = 1.5, E2 = 2.25, E3 = 3.0
Here, in the case of the Rucklidge system, the quadratic term ( y² ) (unlike the cross-product
of states in previous cases of Lu and Chen system) is replaced by the multi-segment
function f ( y ) in the third state equation.
Fig. 3. Uncontrolled phase plane portraits for multi-wing Rucklidge (Shimizu-Morioka)
system [32].
2.4. Chaotic Multi-wing Sprott-1 system
The double-wing Sprott-1 system [25]-[26] is given by (13).
ẋ = yz
ẏ = x − y        (13)
ż = 1 − x²
The equilibrium points of the Sprott-1 system are located at ( ±1, ±1, 0 ) . State equations of
the multi-wing chaotic Sprott-1 attractor whose states are to be controlled are given by
(14).
ẋ = yz
ẏ = x − y + u        (14)
ż = 1 − f ( x )
The suggested parameters for N = 4 are given by (15) [32] for which the chaotic Sprott-1
system exhibits multi-wing attractors in phase portraits (Fig. 4).
(15)
F0 = 1, F1 = 5, F2 = 5, F3 = 6.67, F4 = 8.89, E1 = 2, E2 = 3, E3 = 4, E4 = 5
Here, the quadratic term ( x² ) is replaced by the multi-segment function f ( x ), similar to the previous case of the Rucklidge system. It is evident from the phase portraits of all four multi-wing chaotic systems that the wandering of the states in phase space is highly complex, indicating that the corresponding state time series are highly jittery and hard to regularize using an analytically derived external control action. Here, we have shown
that a simple PID type linear controller structure which is widely used in industrial
process control is capable of suppressing chaotic oscillations in such complex dynamical
systems.
Fig. 4. Uncontrolled phase plane portraits for multi-wing Sprott-1 system [32].
3. Optimum PID control to suppress chaotic oscillations in multi-wing attractors
Each of the above four classes of multi-wing chaotic systems is to be controlled
using a PID controller (6) which will enforce the second state variable ( y ) to track a unit
reference step input ( r ). Instead of simple error minimization criteria for PID controller
tuning, the well-known Integral of Time multiplied Absolute Error (ITAE) criterion has
been used as the performance index J (16) in order to ensure fast tracking of the second
state variable with lesser oscillations.
J = ∫₀ᵀ t · |e ( t )| dt ,   e ( t ) = r ( t ) − y ( t )        (16)
For time domain simulations, the upper time limit of the above integral ( T ) is
restricted to realistic values depending on the speed of the chaotic time series to ensure
that all oscillations in the state variables have died down due to introduction of the PID
control action in the second state. It has also been shown through simulation examples
that controlling the second state variable with PID automatically damps chaotic
oscillations in the other two state variables for the Lorenz family of multi-wing attractors.
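A direct discrete approximation of (16) from uniformly sampled simulation data could look as follows; this is a sketch, and the uniform-sampling assumption is ours.

```python
import numpy as np

def itae(t, y, r=1.0):
    """Discrete ITAE of (16): J = integral over [0, T] of t*|e(t)| dt, e = r - y."""
    t = np.asarray(t, dtype=float)
    e = r - np.asarray(y, dtype=float)
    return np.trapz(t * np.abs(e), t)
```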
Tuning of the PID controller gains has been done in this study using the widely
used population based global optimizer known as the real coded genetic algorithm [28].
The GA is a stochastic optimization process which can be used to minimize a chosen
objective function. A solution vector is initially randomly chosen from the search space
and undergoes reproduction, crossover and mutation, in each generation, to give rise to a
better population of solution vectors in the next generation. Reproduction implies that
solution vectors with higher fitness values can produce more copies of themselves in the
next generation. Crossover refers to information exchange based on probabilistic
decisions between solution vectors. In mutation a small randomly selected part of a
solution vector is occasionally altered, with a very small probability. This way the
solution is refined iteratively until the objective function is minimized below a certain
tolerance level or the maximum number of generations are exceeded. In this study, the
population size in GA is chosen to be 20. The crossover and mutation fraction are chosen
to be 0.8 and 0.2 respectively for the present minimization problem using (16).
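Schematically, the tuning loop couples the simulator and the ITAE index sketched above to a population-based global optimizer. The sketch below uses SciPy's differential evolution as a convenient stand-in for the real-coded GA described here (so the stated crossover and mutation fractions are not reproduced); the gain search bounds and iteration budget are assumptions of ours, and a run of it will not exactly reproduce the gains of Table 1.

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(gains):
    Kp, Ki, Kd = gains
    dt, T = 1e-4, 10.0
    traj = simulate_controlled_lu(Kp, Ki, Kd, dt=dt, T=T)  # sketch above
    t = np.arange(traj.shape[0]) * dt
    return itae(t, traj[:, 1])          # ITAE (16) on the second state y

# population-based global search over assumed gain bounds
result = differential_evolution(cost,
                                bounds=[(0.0, 50.0), (0.0, 50.0), (0.0, 5.0)],
                                popsize=20, maxiter=30, seed=1)
print(result.x, result.fun)
```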
Due to the randomness of the chaotic time series of the multi-wing attractors, the
error signal with respect to step command input also becomes highly jittery and will
contain several minima which justifies the application of GA in such controller tuning
problems as compared to other gradient based optimization algorithms. GA being a
global optimization algorithm is able to get out of the local minima, whereas the other
gradient based optimization algorithms often get stuck in the local minima and cannot
give good solutions. For the control and synchronization of chaotic systems, an
evolutionary and swarm based PID controller design with other time domain performance
index optimization has been used previously, as reported in [11], [21]. But for the sake of simplicity, we have restricted the study to ITAE based PID design only, to handle
multi-wing attractors in chaotic nonlinear dynamical systems. The GA based
optimization results for the PID controller parameters (gains) are given in Table 1 for the
four respective multi-wing attractors among the Lorenz family. Also, in order to ensure
that the best possible solution is found in the global optimization process, the algorithm
has been run several times and only the controller gains, resulting in the fastest tracking
performance (and hence the lowest value of Jmin) for respective systems are reported here.
Table 1: GA Based Optimum PID Controller Settings for Chaos Suppression in Multi-wing Attractors

Multi-wing Chaotic systems | Jmin | Kp | Ki | Kd
Lu system | 244.986 | 3.156 | 27.562 | 1.449
Chen system | 307.709 | 3.305 | 21.274 | 1.591
Rucklidge system | 1.161 | 19.435 | 30.097 | 0.237
Sprott-1 system | 1468.193 | 0.272 | 0.433 | 0.393
In all cases except the Sprott-1 system, the integral gains (Ki) take high values,
signifying that the controller gives extra effort to reduce the steady-state error using the
integral action. Also, due to the sluggish nature of the uncontrolled dynamics of the
Rucklidge system, the proportional gain (Kp) also takes a high value to make the overall
closed-loop system faster. Such controller gains are consistent with the final objective of
ITAE (16), which puts an extra penalty on sluggish controlled response and pushes the
system to reduce oscillations as soon as possible. A simpler Integral of Absolute Error
(IAE) criterion, which unlike ITAE lacks the temporal weighting in the objective
function, would have yielded different controller parameters, since it only enforces
tracking of the set-point but not the fastest possible set-point tracking.
3.1. PID Control of Multi-wing Lu System
Fig. 5. Uncontrolled and PID controlled response of the state variable for multi-wing Lu
system (a) state-x (b) state-y (c) state-z.
Simulation results for the uncontrolled and PID controlled state variables of the
multi-wing Lu system (4) have been shown in Fig. 5, with the corresponding control signal
and error of the second state depicted in Fig. 6. In Fig. 5(a)-(c), it can be observed that the
chaotic evolution of the state variables in the uncontrolled case disappears when the PID
controller is applied to the system. All the controlled states evolve to a steady-state value.
It is evident that the second state variable of the multi-wing Lu system not only tracks the
unit step reference but the randomness in the other two state variables is also stabilized.
As can be observed from Fig. 5, the controller achieves its design objective of
suppressing the oscillations in the second state and making it track the desired trajectory
of a unit step. Although the oscillations in the other states x and z are suppressed, the
final value at which they settle depends on the controller parameters and cannot be
obtained a priori. A more customized control action could have been incorporated, like
that in [11], utilizing the sum of the errors for all three state variables with the intention of
achieving set-point tracking in all three state variables. But it would unnecessarily
increase the control action, which would create difficulty in implementing the control
scheme in an analog circuit. A closer look at the set-point tracking error in Fig. 6 shows
that the error asymptotically goes to zero and correspondingly the PID control action
stabilizes to its final value. This essentially implies that in the steady state the controller
has to continuously give a constant signal to the second state of the chaotic system so that
the error remains zero and the oscillations are suppressed.
Fig. 6. Control signal and the second state error for the PID controlled multi-wing Lu
system.
Also, the time series of the controlled states in Fig. 5(a)-(c) show initial
oscillations which get damped quite fast even with a simple GA-based optimum PID
controller having a linear structure, though better performance can be expected at the cost
of implementing complex nonlinear controller structures and the associated
computational complexity, like that used in [21].
3.2. PID control of multi-wing Chen system
Fig. 8. Uncontrolled and PID controlled response of the state variables for multi-wing
Chen system (a) state-x (b) state-y (c) state-z.
Fig. 8 shows the controlled three state trajectories of the multi-wing Chen system
(8). The associated error and control signals (Fig. 9) are quite similar to those of the
Lu system. Damping of the chaotic oscillations in the PID controlled phase space is
shown in Fig. 10 for this particular system of the Lorenz family. Similar to the case
of the Lu system, control of the Chen system also shows an initial jerk in the state
trajectories which finally settles down asymptotically. A comparison can be made between
the control efforts of Fig. 6 and Fig. 9. It can be observed that the area under the curve is
smaller for Fig. 9(a) than for Fig. 6(a). This implies that the overall control effort required
for controlling the multi-wing Chen system is less than that required for controlling the
multi-wing Lu system.
Fig. 9. Control signal and the second state error for the PID controlled multi-wing Chen
system.
3.3. PID control of multi-wing Rucklidge system
A similar nature of chaos control can be obtained in the multi-wing Rucklidge
system (11) with the GA-based optimum PID controller, which enforces fast tracking
of the second state variable using the ITAE criterion. The irregular oscillations of this
system are found to be more sluggish compared to those of the Lu system. Also, the
controlled state trajectories are smoother at the initial stages, unlike those of the Lu and
Chen systems. Here, Fig. 11 shows the respective uncontrolled and controlled three state
trajectories. The control and error signals are shown in Fig. 12. A comparison can be made
between Fig. 11 and Figs. 5 and 8. It can be seen that the oscillations in the multi-wing
Rucklidge system are suppressed within a few seconds of applying the control signal,
unlike in the two former cases. A comparison of the control and error signals in this case
(Fig. 12) with the previous cases (Figs. 6 and 9) shows that the steady-state control signal
required in this case is much smaller than in the former two. The error goes to zero within
the first 10 seconds, while it takes almost 200 seconds in the former two cases. Therefore
it can be concluded that the
Rucklidge system is comparatively easier to control using a simple linear PID controller
structure than the previously studied chaotic systems, i.e., the Lu and Chen systems.
Fig. 11. Uncontrolled and PID controlled response of the state variables for multi-wing
Rucklidge system (a) state-x (b) state-y (c) state-z.
Fig. 12. Control signal and the second state error for the PID controlled multi-wing
Rucklidge system.
3.4. PID control of multi-wing Sprott-1 system
For the multi-wing Sprott-1 system (14), the ITAE-based GA-tuned PID enforces
fast reference tracking and simultaneously damps chaotic oscillations (Fig. 14) in an
efficient way. It can be noted that in this case, although the original objective of making
the second state (y) follow a step function is achieved, there are small oscillations in the x
and z states. This is unlike the previous three cases, where controlling one of the states
resulted in controlling all the other states automatically. If the oscillations in the other
two states are to be minimized, then they have to be explicitly taken into account
in the objective function itself.
It is also evident from the control and error signals in Fig. 15 (which shows the
settling of the error signal to zero and the control action approaching its final value) that
this system is easier to control than the multi-wing Chen and Lu systems. But it is more
difficult to control than the multi-wing Rucklidge system, in the sense that it takes more
control effort and also much more time to settle.
Fig. 14. Uncontrolled and PID controlled response of the state variables for multi-wing
Sprott-1 system (a) state-x (b) state-y (c) state-z.
It is well known that chaotic systems are highly sensitive to the initial conditions
of the states. Since in the presented approach only a single initial condition of the state
variables is assumed in tuning the GA-based PID controllers with minimum ITAE for the
second state, the effectiveness of damping of chaotic oscillations under different initial
conditions needs to be investigated. Therefore, a study of the robustness of the present
PID control scheme is presented in the next section for each of the four classes of
multi-wing chaotic systems.
Fig. 15. Control signal and the second state error for the PID controlled multi-wing
Sprott-1 system.
4. Test of robustness of the PID control scheme under different initial conditions of the state variables
Fig. 17. Robustness of the PID controller for chaos suppression in multi-wing Lu system
for different initial conditions.
In this section, the proposed GA-based PID control scheme has also been tested
for its robustness with variation in the initial conditions of the multi-wing chaotic systems.
A question may arise as to whether the optimum PID control scheme would be able to
stabilize the system for other initial values of the state variables, since the controller is
tuned with only one initial guess of the states. This is particularly important to investigate
since the proposed approach does not rely on classical Lyapunov-criterion-based analytical
stabilization, which has been extensively studied for double-wing attractors [19], [35], [5].
Fig. 18. Robustness of the PID controller for chaos suppression in multi-wing Chen
system for different initial conditions.
Figs. 17-20 show that, in the phase portraits, the chaotic oscillations get suppressed
along different trajectories for the four complex nonlinear dynamical systems, even if the
initial conditions for the first two states are gradually decreased from unity to zero. In
spite of the high sensitivity to initial conditions of the states in all chaotic systems, the
proposed PID control scheme is capable of suppressing the wandering of the states in
phase space for the Lu and Chen systems, which are quite similar, as shown in Figs. 17-18.
For the Rucklidge system, as shown in Fig. 19, the controlled phase-space trajectories
converge towards a particular direction even with variation in the initial conditions of the
system's states. The controlled phase portraits for different initial conditions are much
more complex for the Sprott-1 system, as portrayed in Fig. 20, but finally converge to a
stable equilibrium point, similar to the other multi-wing systems.
A comparison of Figs. 17 and 18 can be made with Fig. 19. It can be seen
that in the case of the Rucklidge system in Fig. 19, the state trajectories from the different
initial conditions to the final equilibrium solution are much shorter than those of the Lu
or Chen systems (in Figs. 17 and 18, respectively). This also provides an insight into
the amount of controller effort and settling time required to suppress the chaotic
oscillations in these systems. As discussed previously, the Lu and Chen systems are
more difficult to control, and this can also be understood from the phase portraits of the
trajectories in Figs. 17 and 18. In these figures the trajectories evolve through multiple
small scrolls and hence take more time than those of the Rucklidge system in Fig. 19. The
trajectories of the controlled systems in Fig. 20 show that they take larger excursions than
the Rucklidge system but do not get trapped in small scrolls, therefore making them
easier to control than the Lu and Chen systems, but more difficult than the Rucklidge
system.
Fig. 19. Robustness of the PID controller for chaos suppression in multi-wing Rucklidge
system for different initial conditions.
Fig. 20. Robustness of the PID controller for chaos suppression in multi-wing Sprott-1
system for different initial conditions.
5. Discussion
It is well known that it is difficult to find a quadratic Lyapunov function for such
complicated nonlinear dynamical systems as multi-wing chaotic attractors, except for a few
attempts like that in [30]. The present approach has been shown to work reliably well,
even though analytical stabilization has not yet been addressed for such systems due to
high system complexity. The proposed optimum PID control scheme has another
advantage over the conventional Lyapunov-function-based stabilization approach, i.e., the
optimum tracking of the desired state variable, which is difficult to achieve with the
classical Lyapunov approach. With slight modification of the objective function, different
states or even more than one state variable can easily be enforced to track different
reference inputs [11], unlike with the Lyapunov-based approach. The Lyapunov and sliding
mode approaches enforce guaranteed stabilization but not optimum set-point tracking, and
the nonlinear control law needs to be changed every time depending on the structure
of the chaotic system, which restricts the designer from having a common controller
structure (like a simple PID here) to control all the multi-wing systems in the Lorenz
family. Thus, to the best of the authors' knowledge, this is the first approach to suppress
chaotic oscillations in the multi-wing Lorenz family of chaotic systems with a simple
control scheme augmented with a global optimization framework.
Another important aspect of the present control scheme is the course of the control
action used to derive the control law, i.e., the effect of the upper limit of the integral
objective function (16). The trajectory of control for different chaotic systems also plays an
important role here with respect to the quality of the control. Depending on the number of
equilibrium points and the speed of oscillations for a particular chaotic system, the control
action and the associated controller gains would vary widely within the same common
PID structure. Also, the upper limit of the time-domain performance index needs to be
chosen suitably, keeping in mind the number of oscillations within the same window. For
example, the multi-wing Rucklidge system has fewer irregular oscillations within the
chosen 100 sec simulation time window and thus settles down very quickly, whereas
both the multi-wing Lu and multi-wing Chen systems produce more irregular oscillations
(in the uncontrolled mode) within that chosen window, which take a much longer time to
get damped in the controlled mode. The multi-wing Sprott-1 system has the slowest
dynamics compared to the other three chaotic systems, and thus it also settles down
relatively fast.
The particular improvements of the present paper over state-of-the-art techniques
are highlighted below.
• Due to the structure-specific nature of Lyapunov stabilization schemes for a particular
chaotic system, the proposed PID control scheme is better suited, since the control
scheme does not need to be changed every time; only the PID controller gains do.
• Deriving an analytical control action to suppress chaotic oscillations is based on
the choice of a quadratic Lyapunov candidate, which guarantees stabilization but
not the fastest possible stabilization. The optimum PID controller not only
suppresses the oscillations but also enforces optimum tracking of one or multiple state
variables (with a suitable choice of the objective function, as in [11]).
• Most of the popular analytical control schemes for a particular chaotic system rely
on manipulating all the state variables [19], [35], [5]. The present PID control
scheme only senses the second state variable and manipulates it to perform the
same stabilization task. This is practical from the hardware implementation point
of view, as fewer sensors are required. Also, not all the states of the chaotic
system may be measurable. In such cases, this scheme scores over the others.
• The control scheme is particularly useful even in the presence of noise in the
measured variable, as previously studied in [21], which is difficult to tackle
using analytical methods of nonlinear state-feedback law design [19], [35], [5].
The PID control scheme, not being dependent on the system structure and
working directly on the sensed state time series instead of the system model, is
capable of stabilizing such nonlinear dynamical systems [12]. This is difficult
to achieve with the classical state-feedback controller.
In spite of these several advantages, it is well known that stabilization of chaotic
systems cannot be guaranteed by simulation alone due to their sensitive dependence on
initial conditions. Although the simulations presented in section 4 to demonstrate the
robustness of PID control for multi-wing chaotic systems show promising results, even
after testing with thousands of different initial conditions the stabilization is not
theoretically guaranteed, unlike with the Lyapunov-based approach. Apart from the lack
of guaranteed stabilization for all possible initial conditions of the states, the proposed
control scheme enjoys more flexibility of controller design within a common template and
can easily be tweaked by modifying the objective function (16) and the global optimizer
used in this context.
6. Conclusion
A real-coded Genetic Algorithm based optimum PID controller design has been
proposed in this paper to suppress chaotic oscillations in the highly complex
multi-wing Lorenz family of chaotic systems. The PID controller enforces fast tracking
of the second state due to the presence of the ITAE criterion in the controller design phase,
which also removes chaotic oscillations in the other state variables. The present approach is
based on a heuristic optimization framework which can easily be modified to enforce
optimum tracking performance in the other state variables, individually or in combination.
Using credible numerical examples, the optimum PID control scheme has been
shown to be robust to different initial conditions, even for such highly complex
nonlinear dynamical systems as the multi-wing Lorenz family of chaotic systems.
As mentioned earlier, obtaining an analytically guaranteed stabilization scheme is quite
challenging for such complex dynamical systems and has been left as a scope for future
research.
References
[1] R. Aguilar-Lopez and R. Martinez-Guerra, Partial synchronization of different chaotic
oscillators using robust PID feedback, Chaos, Solit. & Fract. 33 (2) (2007) 572-581.
[2] A. Brandstäter, J. Swift, H.L. Swinney, A. Wolf, J.D. Farmer, E. Jen, and P.J.
Crutchfield, Low-dimensional chaos in a hydrodynamic system, Phys. Rev. Let. 51 (16)
(1983) 1442-1445.
[3] W.-D. Chang, PID control for chaotic synchronization using particle swarm
optimization, Chaos, Solit. & Fract. 39 (2) (2009) 910-917.
[4] W.-D. Chang and J.-J. Yan, Adaptive robust PID controller design based on a sliding
mode for uncertain chaotic systems, Chaos, Solit. & Fract. 26 (1) (2005) 167-175.
[5] G. Chen and X. Yu, Chaos Control: Theory and Applications, Springer, Berlin, 2003.
[6] G. Chen and T. Ueta, Yet another chaotic attractor, Intern. J. of Bifurc. and Chaos 9
(7) (1999) 1465-1466.
[7] H.-C. Chen, J.-F. Chang, J.-J. Yan and T.-L. Liao, EP-based PID control design for
chaotic synchronization with application in secure communication, Expert Syst. with
Appl. 34 (2) (2008) 1169-1177.
[8] L.S. Coelho and R.B. Grebogi, Chaotic synchronization using PID control combined
with population based incremental learning algorithm, Expert Syst. with Appl. 37 (7)
(2010) 5347-5352.
[9] L.S. Coelho and D.L.A. Bernert, PID control design for chaotic synchronization using
a tribes optimization approach, Chaos, Solit. & Fract. 42 (1) (2009) 634-640.
[10] M.C. Cross and P.C. Hohenberg, Pattern formation outside of equilibrium, Rev. of
Mod. Phys. 65 (3) (1993) 851-1112.
[11] S. Das, I. Pan, S. Das, and A. Gupta, Master-slave chaos synchronization via optimal
fractional order PIλDµ controller with bacterial foraging algorithm, Nonlin. Dynam. 69
(4) (2012) 2193-2206.
[12] A.L. Fradkov and R.J. Evans, Control of chaos: methods and applications in
engineering, Ann. Rev. in Cont. 29 (1) (2005) 33-56.
[13] S.-B. He, K.-H. Sun, and C.-X. Zhu, Complexity analyses of multi-wing chaotic
systems, Chinese Phys. B 22 (5) (2013) 050506.
[14] M.-L. Hung, J.-S. Lin, J.-J. Yan and T.-L. Liao, Optimal PID control design for
synchronization of delayed discrete chaotic systems, Chaos, Solit. & Fract. 35 (4) (2008)
781-785.
[15] C.L. Kuo, H.T. Yau and Y.C. Pu, Design and implementation of a digital PID
controller for a chaos synchronization system by evolutionary programming, J. of Appl.
Scien. 8 (13) (2008) 2420-2427.
[16] E.N. Lorenz, Deterministic nonperiodic flow, J. of the Atmospher. Sci. 20 (2) (1963)
130-141.
[17] J. Lü, F. Han, X. Yu, and G. Chen, Generating 3-D multi-scroll chaotic attractors: A
hysteresis series switching method, Automatica 40 (10) (2004) 1677-1687.
[18] J. Lu and G. Chen, A new chaotic attractor coined, Intern. J. of Bifurc. and Chaos 12
(3) (2002) 659-661.
[19] J.M.G. Miranda, Synchronization and Control of Chaos: an Introduction for
Scientists and Engineers, Imperial College Press, London, 2004.
[20] E. Ott, C. Grebogi, and J.A. Yorke, Controlling chaos, Phys. Rev. Lett. 64 (11)
(1990) 1196-1199.
[21] I. Pan, A. Korre, S. Das, and S. Durucan, Chaos suppression in a fractional order
financial system using intelligent regrouping PSO based fractional fuzzy control policy in
the presence of fractional Gaussian noise, Nonlin. Dynam. 70 (4) (2012) 2445-2461.
[22] K. Pyragas, Continuous control of chaos by self-controlling feedback, Phys. Lett. A
170 (6) (1992) 421-428.
[23] A.M. Rucklidge, Chaos in models of double convection, J. of Fluid Mech. 237
(1992) 209-229.
[24] T. Shimizu and N. Morioka, On the bifurcation of a symmetric limit cycle to an
asymmetric one in a simple model, Phys. Lett. 76 (3-4) (1980) 201-204.
[25] J.C. Sprott, Some simple chaotic flows, Phys. Rev. E 50 (2) (1994) R647-R650.
[26] J.C. Sprott, Simple chaotic systems and circuits, Americ. J. of Phys. 68 (8) (2000)
758-763.
[27] Z. Wang, S. Cang, E.O. Ochola, and Y. Sun, A hyperchaotic system without
equilibrium, Nonlin. Dynam. 69 (1-2) (2012) 531-537.
[28] P. Wang and D.P. Kwok, Optimal design of PID process controllers based on
genetic algorithms, Contr. Eng. Pract. 2 (4) (1994) 641-648.
[29] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos,
Springer, New-York, 2003.
[30] F. Xu and P. Yu, Chaos control and chaos synchronization for multi-scroll chaotic
attractors generated using hyperbolic functions, J. of Math. Anal. and Appl. 362 (1)
(2010) 252-274.
[31] S. Yu, J. Lu, W.K.S. Tang, and G. Chen, A general multiscroll Lorenz system family
and its realization via digital signal processors, Chaos 16 (3) (2006) 033126.
[32] S. Yu, W.K.S. Tang, J. Lu, and G. Chen, Generating 2n-wing attractors from
Lorenz-like systems, Intern. J. of Circuit Theory and Appl. 38 (2010) 243-258.
[33] S. Yu, W.K.S. Tang, J. Lu, and G. Chen, Multi-wing butterfly attractors from the
modified Lorenz systems, IEEE Intern. Symp. on Circuits and Syst., ISCAS 2008 (2008)
768-771, Seattle.
[34] S. Yu, W.K.S. Tang, J. Lu, and G. Chen, Generation of n×m-wing Lorenz-like
attractors from a modified Shimizu–Morioka model, IEEE Trans. on Circuits and Syst. II:
Expr. Briefs 55 (11) (2008) 1168-1172.
[35] H. Zhang, D. Liu, and Z. Wang, Controlling Chaos: Suppression, Synchronization
and Chaotification, Springer-Verlag, London, 2009.
[36] P. Zhou and F. Yang, Hyperchaos, chaos, and horseshoe in a 4D nonlinear system
with an infinite number of equilibrium points, Nonlin. Dynam. (2013) (In Press).
Pseudometrically Constrained Centroidal Voronoi Tessellations: Generating
uniform antipodally symmetric points on the unit sphere with a novel acceleration
strategy and its applications to Diffusion and 3D radial MRI
Cheng Guan Koay
Department of Medical Physics
University of Wisconsin School of Medicine and Public Health
Madison, WI 53705
Corresponding author:
Cheng Guan Koay, PhD
Department of Medical Physics
University of Wisconsin School of Medicine and Public Health
1161 Wisconsin Institutes for Medical Research (WIMR)
1111 Highland Avenue
Madison, WI 53705
E-mail: [email protected]
Word Count: 4953
Keywords: antipodal symmetry, uniform point set, Centroidal Voronoi Tessellations,
diffusion MRI, 3D radial MRI, uniform points on the sphere.
ABSTRACT
Purpose: The purpose of this work is to investigate the hypothesis that uniform sampling measurements
that are endowed with antipodal symmetry play an important role when the raw data and image
data are related through the Fourier relationship, as in q-space diffusion MRI and 3D radial MRI.
Currently, it is extremely challenging to generate large uniform antipodally symmetric point sets
suitable for 3D radial MRI. A novel approach is proposed to solve this important and long-standing problem.
Methods: The proposed method is based upon constrained centroidal Voronoi tessellations of
the upper hemisphere with a novel pseudometric. A geometrically intuitive approach to
tessellating the upper hemisphere is also proposed.
Results: The average time complexity of the proposed centroidal tessellations was shown to be
effectively linear. For small sample sizes, the proposed method was comparable to the state-of-the-art
iterative method in terms of uniformity. For large sample sizes, for which the state-of-the-art
method is infeasible, the reconstructed images from the proposed method have less
streak and ringing artifact compared to those of the commonly used methods.
Conclusion: This work solved a long-standing problem on generating uniform sampling points
for 3D radial MRI.
INTRODUCTION
A centroidal Voronoi tessellation (2) is a Voronoi tessellation (3) in which the center of mass (or
centroid) of each Voronoi region is also its generator; it has been found useful in many
applications, from the analysis of cellular patterns (4), neuronal density (5) and territorial behavior of
animals (6) in the biological sciences, to optimal data acquisition, data quantization, compression and
clustering (7) in the engineering and physical sciences. However, it remains a challenge to
generate these tessellations efficiently via Lloyd's algorithm, even though Lloyd's algorithm has been found
to be very robust (8).
The main computational bottleneck in the computation of centroidal Voronoi
tessellations has been the reconstruction step, in which the Voronoi tessellations are
reconstructed anew from the newly computed centroids at each iteration. From the
conventional point of view, if the time complexity of the Voronoi tessellation of a specific
manifold, e.g., a plane, sphere or other esoteric surface, is O(n log n), then the time complexity
of the centroidal Voronoi tessellation on this manifold must be O(m n log n), i.e., invoking the
O(n log n) algorithm m times. Note that m is the number of iterations needed to reach
convergence and n is the number of generators.

We have discovered strong heuristic strategies to reduce the average time complexity of
the problem from O(m n log n) to O(m n) by innovatively using a less optimal, i.e., O(n^2),
but simpler algorithm, and will present these strategies in the context of generating uniform
antipodally symmetric points on the sphere. The approach used in this work for accelerating the
centroidal Voronoi tessellations is completely novel, representing a clear departure from the
current paradigms.
The problem of generating uniform points on the sphere was posed by J.J. Thomson
(9) more than a century ago and has spurred many theoretical and computational investigations,
and applications (1,10-15). A variant of the Thomson problem is that of generating uniform
antipodally symmetric points on the sphere (16); the resultant point set plays a particularly
important role in MRI, which will be discussed next. It is interesting to note that the problem of
generating nearly uniform antipodally symmetric points on the sphere via a deterministic method
was solved only recently (13,14).

Conceptually, diffusion MRI (17) and 3D radial MRI (18) share many similar features. The
most notable feature relevant to this work is the data redundancy associated with symmetries
inherent in each of these imaging techniques. Specifically, antipodal symmetry of the diffusion
propagator (19) and Hermitian symmetry due to the real-valuedness of the object in the image
domain influence how data are acquired in diffusion MRI and 3D radial MRI, respectively.
Hermitian symmetry and antipodal symmetry are closely related to each other and both
symmetries are defined through parity transformation or spatial inversion, with Hermitian
symmetry having an additional operation, which is complex conjugation. Specifically, a real-valued
function f possesses antipodal symmetry if \( f(\mathbf{x}) = f(-\mathbf{x}) \), and a complex-valued
function g possesses Hermitian symmetry if it is a Hermitian function, i.e., \( g(\mathbf{x}) = \overline{g(-\mathbf{x})} \), where
complex conjugation is denoted by the overhead bar. Due to these symmetries, it is sufficient to
sample only half of the space, e.g., q-space in diffusion MRI or k-space in MRI. However,
certain constraints on the imaging gradients (20) force the sampling trajectory in 3D radial MRI
to take the usual diametrical line or a curve consisting of two straight radial lines adjoined with a
slight non-smooth bend at the center of k-space (21); the former sampling strategy cannot take
advantage of the data redundancy, because sampling along a diametrical line through k-space
is equivalent to acquiring repeated measurements and not new measurements, while the
latter sampling strategy is more desirable, with the potential of acquiring new and different points
in k-space.
The use of uniformity and antipodal symmetry in the sampling scheme was advocated by
Jones (16) in diffusion tensor imaging on the basis that the diffusion tensor is symmetric. Here,
we argue that uniformity and antipodal symmetry should be incorporated into sampling design in
diffusion MRI (q-space formalism) and 3D radial MRI (k-space formalism) because both the q-space
and k-space possess Hermitian symmetry. Uniform sampling measurements endowed
with antipodal symmetry can maximize sampling coverage and at the same time take advantage
of the inherent symmetry of the imaging systems. To the best of our knowledge, none of
the works in 3D radial MRI (18,21-28) used antipodally symmetric point sets to acquire k-space
data. In fact, many studies (22,23,25,27) used the method of Wong and Roos (15) or the
generalized spiral scheme (10,11). The point sets generated from any of these methods, i.e., the
generalized spiral scheme, the method of Wong and Roos and the method of Bauer (12), have
the spiral pattern, although each of these methods was formulated and derived differently.
Antipodally symmetric point sets may not have been used in any 3D radial MRI study for
the following reasons: the lack of a robust iterative method or a deterministic method capable of
generating large numbers of antipodally symmetric points on the sphere, and the availability of
other deterministic methods (1,11,12,15,29) for generating non-antipodally symmetric point sets.
An ad hoc strategy of collecting half of the non-antipodally symmetric points on one of the
hemispheres is usually adopted to come up with an antipodally symmetric point set. We should
note that the diffusion MRI sampling method based on vertices of a multifold-tessellated
icosahedral hemisphere was used by Tuch et al. (30), but this method is inflexible and limited in
that it cannot generate point sets of arbitrary size, only certain "magic" numbers, e.g., 6, 26, 46, 91,
126, 196, 246, 341, 406, 526 and so on. Note that a fivefold-tessellated icosahedral hemisphere
produces 126 points on the upper hemisphere, see Tuch (30). Due to the above-mentioned
limitation, this sampling method used by Tuch et al. may not be very appealing to the
practitioners of 3D radial MRI.
Even with the advent of deterministic methods capable of generating nearly uniform
antipodally symmetric points on the sphere (13,14), it is still extremely challenging to generate
highly uniform antipodally symmetric point sets for use in 3D radial MRI through iterative function
minimization, even when the initial solution is taken from the deterministic methods. The number
of points (on the sphere) required in 3D radial MRI and diffusion MRI is in the thousands (4000
to 20000 or more) and in the hundreds (6 to 500), respectively. Therefore, the number of
parameters in the spherical coordinate representation to be optimized is twice as many (8000 to
40000), and optimization methods that rely on the approximate or exact Hessian or its inverse are
completely infeasible. While some of the iteratively optimized antipodally symmetric point sets
(with sample sizes in the hundreds) have been tabulated and made available for public use (31), the
lack of highly uniform and antipodally symmetric point sets applicable to 3D radial MRI
motivates the present investigation.
Here, we propose a constrained centroidal Voronoi tessellation endowed with a
pseudometric to robustly and efficiently generate uniform antipodally symmetric point sets on
the unit sphere. The uniformity of the point sets generated from the proposed method is shown
to be comparable to that of the state-of-the-art iterative method (31) for small sample sizes relevant to
diffusion MRI but not to 3D radial MRI. For large point sets with sample sizes in the range of
thousands, which are relevant to 3D radial MRI and other applications (32), the state-of-the-art
method is completely infeasible but the proposed approach remains feasible. Further, we used
our recently developed three-dimensional analytical MRI phantom in the Fourier domain (33),
which is based upon the three-dimensional version of the famous Shepp-Logan phantom
(34,35) in the image domain, to compare and contrast the qualitative features of the
reconstructed images obtained through the proposed method, the method of Wong and Roos,
the generalized spiral scheme and the tessellated icosahedral scheme of the upper hemisphere (the method
used by Tuch et al.). We found that the reconstructed image from the proposed method has less
streak and ringing artifacts compared to those of the other methods.
METHODS
A centroidal Voronoi tessellation (2) is a Voronoi tessellation in which the center of mass (or
centroid) of each Voronoi region is also its generator; the Voronoi regions can be prescribed
with a density function. To the best of our knowledge, a centroidal Voronoi tessellation capable
of generating uniform antipodally symmetric points on the unit sphere has never been proposed.
Instead of dealing with a density function that is constant and invariant with respect to spatial
inversion or antipodal symmetry, it is more convenient and efficient to introduce a novel
pseudometric into the centroidal Voronoi tessellation so as to make generating uniform antipodally
symmetric points on the unit sphere possible.
The spherical Voronoi regions, \(\{V_i\}_{i=1}^{N}\), on the upper hemisphere are characterized by
the following properties:

- No two distinct regions share the same point. That is, the intersection between any two
distinct regions is empty, \(V_i \cap V_j = \emptyset\) with \(i \neq j\). Points on the boundary between any
two regions belong to the closure of these regions, which is denoted by an overhead bar,
e.g., \(\overline{V_i}\).
- The union of all the closures of the spherical Voronoi regions covers the upper
hemisphere, denoted here by \(S^2_+\).
- \(V_i = \{\mathbf{x} \in S^2_+ \;|\; d(\mathbf{x}, \mathbf{g}_i) < d(\mathbf{x}, \mathbf{g}_j) \text{ for } j = 1, \ldots, N \text{ and } j \neq i\}\).    [1]

Each of the unit vectors, \(\{\mathbf{g}_i\}_{i=1}^{N}\), on \(S^2_+\) is called the generator of its respective Voronoi
region. The pseudometric, \(d\), will be defined later.
The center of mass of each Voronoi region does not necessarily coincide with the
generator of that region. An iterative method such as the Lloyd algorithm (8,36) may be used to
bring the generators, in each successive iteration, closer and closer to the centers of mass of their
respective regions. The resultant tessellations are known as centroidal Voronoi tessellations (2).
The center of mass of a spherical Voronoi region, \(V_i\), can be expressed as:

\[ \hat{\mathbf{g}}_i = \frac{\int_{\sigma_i} \mathbf{v}\, d\sigma_i}{\left\| \int_{\sigma_i} \mathbf{v}\, d\sigma_i \right\|}. \]    [2]

Note that \(\sigma_i\) is the spherical surface of \(V_i\), \(\mathbf{v}\) represents the unit vector normal to the spherical
surface element \(d\sigma_i\), and \(\|\cdot\|\) denotes the Euclidean norm, which is needed to ensure \(\hat{\mathbf{g}}_i\) is of unit
length. In practice, the computation of \(\hat{\mathbf{g}}_i\) is based on the discretized version of Eq.[2].
Specifically, it is obtained through the sum of the products between the area of each spherical
triangle, formed by the generator and each pair of consecutive vertices at the boundary
surrounding the generator in counterclockwise order, and the centroid direction of that triangle.
We further note that the density function,
which usually appears as a factor in the integrand of Eq.[2], is taken to be a unit constant
function in order to ensure uniformity of the generators.
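As an illustrative aid (not code from our implementation), the following Python sketch carries out this discretized computation. The area of each spherical triangle is obtained from the Van Oosterom-Strackee solid-angle formula, and the weighting of each triangle's flat centroid direction by its spherical area is our reading of the discretization described above.

import numpy as np

def spherical_triangle_area(a, b, c):
    # solid angle of the spherical triangle (a, b, c) via Van Oosterom-Strackee
    num = np.abs(np.dot(a, np.cross(b, c)))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

def normalized_centroid(g, vertices):
    # g: generator (unit vector); vertices: CCW-ordered Voronoi vertices of its region
    g = np.asarray(g, dtype=float)
    V = np.asarray(vertices, dtype=float)
    acc = np.zeros(3)
    for v1, v2 in zip(V, np.roll(V, -1, axis=0)):
        acc += spherical_triangle_area(g, v1, v2) * (g + v1 + v2) / 3.0
    return acc / np.linalg.norm(acc)       # normalize back onto the sphere, cf. Eq.[2]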
The distance measure or metric, \(d(\cdot,\cdot)\), used in this work is a novel extension of the
modified electrostatic potential energy term studied in Ref.(37). For completeness, we will
introduce the notion of real and virtual points for manipulating the antipodally symmetric point
set suggested in Ref.(37). Due to the constraint of antipodal symmetry, we classify points as
real and their corresponding antipodal points as virtual. If we have \(N\) real points (on the upper
hemisphere), denoted by unit vectors \(\mathbf{r}_i\) with \(i = 1, \ldots, N\), then the total electrostatic energy for
the complete configuration (37) of \(2N\) points of both real and virtual points on the whole sphere
is given by:

\[ E = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left( \frac{1}{r_{ij}} + \frac{1}{\sqrt{4 - r_{ij}^{2}}} \right), \]    [3]

with \(r_{ij} = \|\mathbf{r}_i - \mathbf{r}_j\|\). Note that Eq.[3] is expressed solely in terms of real points. If we define

\[ S(\mathbf{r}_i, \mathbf{r}_j) = \frac{1}{r_{ij}} + \frac{1}{\sqrt{4 - r_{ij}^{2}}}, \]

then \(S(\mathbf{r}_i, \mathbf{r}_j)\) may be thought of as a reciprocal metric (or reciprocal of
the distance measure) between two real points. Here, we exploit this reciprocal metric fully by
defining our pseudometric, \(d(\cdot,\cdot)\), as

\[ d(\mathbf{r}_i, \mathbf{r}_j) = 1 / S(\mathbf{r}_i, \mathbf{r}_j). \]    [4]

Note that \(d(\mathbf{r}_j, \mathbf{r}_j) = 0 = d(\mathbf{r}_j, -\mathbf{r}_j)\), hence the term pseudometric. Please refer to Appendix B
for the proof that \(d(\cdot,\cdot)\) is indeed a pseudometric.
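For concreteness, Eq.[4] can be transcribed directly, as in the following sketch (ours, not the Java implementation). Note that S is unchanged when \(r_{ij}\) is exchanged with \(\sqrt{4 - r_{ij}^{2}} = \|\mathbf{r}_i + \mathbf{r}_j\|\), since the two squared chord lengths of unit vectors sum to 4; either chord therefore yields the same value of d.

import numpy as np

def pseudometric(ri, rj):
    # d of Eq.[4]; coincident and antipodal points are both at distance zero
    r = np.linalg.norm(np.asarray(ri, dtype=float) - np.asarray(rj, dtype=float))
    ra = np.sqrt(max(4.0 - r * r, 0.0))    # chord length to the antipode of rj
    if r == 0.0 or ra == 0.0:
        return 0.0
    return 1.0 / (1.0 / r + 1.0 / ra)      # reciprocal of S(ri, rj)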
We should note that the implementation of spherical Voronoi tessellations is not a trivial
computational task (38,39), and our approach is different from the existing methods,
Refs.(38,39). To compare and contrast our proposed approach, we first present an outline of
the Lloyd-type algorithm below:

Pseudometrically Constrained Centroidal Voronoi Tessellation:
Let n be the number of desired points on the upper hemisphere.
Step 1. Deterministically generate 2n highly uniform points on the unit sphere
via the Analytically Exact Spiral Scheme (1) and select those points that are
on the upper hemisphere as the generators.
Step 2. Construct the spherical Voronoi regions of the upper hemisphere with the
chosen generators.
Step 3. Compute the normalized centroids using the discretized version of Eq.[2].
Step 4. Adopt the normalized centroids as the generators.
Step 5. Iterate Steps 2, 3, 4 until convergence is reached.
We would like to point out that the time complexity of Step 2 above is O(n log n), see Ref.(39).
Therefore, the time complexity of the centroidal Voronoi tessellation is O(m n log n), where m is
the number of iterations needed to reach convergence.
Our approach to generating the Voronoi tessellations of the upper hemisphere is based
on the following steps and observations; a sketch of Step 3 is given after the list:

1. The Voronoi region of a generator can be found by first identifying surrounding
generators (approximately neighbors of neighbors) within a spherical cap of a
prescribed radius, which depends on the total number of generators, away from
the generator of interest, see Figure 1A. This step can always be done because
the initial point set obtained from the Analytically Exact Spiral Scheme (1) is
already reasonably uniform. When the total number of generators is large, the
number of surrounding generators remains roughly constant and is around 20. The time
complexity of this step for all the generators is O(n^2).
2. The generator and its surrounding generators are rotated in such a way that the
generator is situated along the z-axis, see Figure 1A. The time complexity of this
step for all the generators is O(n).
3. To compute the vertices of the Voronoi region, the smallest convex region formed
by the surrounding generators around the generator is needed, see Figure
1B. Finding this smallest convex region is equivalent to finding the boundary
points of the convex hull of the surrounding generators after these generators
have been stereographically projected (40) onto the x-y plane, see Figure 1C. The
validity of this step hinges on the angle-preserving (or conformal) property of
stereographic projection (41). The boundary points of the convex hull of a set of
points on the plane are then computed through Graham's scan (42). The time
complexity of Graham's scan is O(k log k), where k is the number of points in the
convex hull, i.e., the number of surrounding generators, which is about 20.
Therefore, the time complexity of this step for all the generators is O(n) because k
is a constant.
4. The vertices of the Voronoi region and the areas of the spherical triangles formed by the
Voronoi vertices and the generator can be computed in O(n) for all the generators.
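The projection-and-hull part of Step 3 can be sketched as follows (helper names are ours; scipy's ConvexHull is used as a convenient stand-in for Graham's scan and, for planar points, it likewise returns the hull boundary in counterclockwise order):

import numpy as np
from scipy.spatial import ConvexHull

def stereographic(points):
    # project unit vectors (rotated so the generator sits at +z) from the
    # south pole (0, 0, -1) onto the plane z = 0; the map is conformal
    p = np.asarray(points, dtype=float)
    return p[:, :2] / (1.0 + p[:, 2:3])

def bounding_neighbor_indices(surrounding_generators):
    # CCW indices of the surrounding generators whose projections lie on the
    # convex hull boundary; these are the neighbors bounding the Voronoi region
    return ConvexHull(stereographic(surrounding_generators)).vertices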
The steps proposed above are intuitive and geometrically motivated, but we should note that
Step 1 in our proposed approach is O(n^2), which would be the main bottleneck of our approach if this
step were to be invoked at every iteration. In what follows, we describe a simple way to
avoid tessellating at every iteration and provide empirical evidence that our approach is
effectively O(n) per iteration for generating the centroidal Voronoi tessellations. We can compute the
distance between a generator at the current iteration and the same generator from the previous
iteration. This information has already been used to determine convergence, i.e., Ref.(8). The
novelty of our proposal is in using this information to compute the cumulative sum of Euclidean
distances traveled by each generator. If the maximum value of these cumulative sums is less
than some prescribed value, see Appendix A, then Step 1 will not be invoked. Otherwise, Step 1
will be invoked and the cumulative sums are reset to zero. When Step 1 is not invoked, the
connectivity network between a generator and its surrounding generators is not altered, but
the coordinates of the generator and its surrounding generators will likely be different from one
iteration to the next.
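A minimal sketch of this acceleration strategy is given below; find_neighbors stands for the O(n^2) Step 1 and lloyd_update for Steps 2-4, both hypothetical callables rather than the actual Java routines.

import numpy as np

def lloyd_accelerated(G, threshold, find_neighbors, lloyd_update, tol=1e-12, max_iter=100000):
    # G: (n, 3) array of generators on the upper hemisphere
    neighbors = find_neighbors(G)              # O(n^2) Step 1, invoked sparingly
    drift = np.zeros(len(G))                   # cumulative per-generator displacement
    for _ in range(max_iter):
        G_new = lloyd_update(G, neighbors)     # Steps 2-4 with fixed connectivity
        step = np.linalg.norm(G_new - G, axis=1)
        drift += step
        if drift.max() > threshold:            # connectivity may have changed
            neighbors = find_neighbors(G_new)
            drift[:] = 0.0                     # reset the cumulative sums
        G = G_new
        if step.max() < tol:                   # local convergence criterion
            break
    return G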
Here, we discuss other important implementation details. Note that some of the vertices
around the equator may be located on the lower hemisphere and the resultant centroids may
also be on the lower hemisphere. Therefore, the centroids or generators should be reoriented
onto the upper hemisphere at each iteration. Finally, we note that the convergence of the
proposed algorithm is based on the local deviation as described in Ref.(8).
RESULTS
We implemented the proposed pseudometrically constrained centroidal Voronoi tessellation in
Java. We conducted four tests to illustrate the proposed method. The first two tests were run on
a machine with an Intel® Core™ i7 CPU at 1.73 GHz and 8 GB of RAM, and the last two
tests were run on a different machine with four 8-core 2.3 GHz AMD Opteron processors (6134)
and 128 GB of RAM.
The first test is to show that the performance of the point sets generated by the proposed
method is of comparable quality, in terms of modified electrostatic energy, to that of the state-of-the-art
iterative method, and to show that the proposed centroidal Voronoi algorithm is O(m n)
through empirical analysis of the execution time per iteration as a function of the number of
generators (or points on the upper hemisphere), which shows a linear trend, see Figure 2.

Figure 2A shows the percent relative error, in terms of the modified electrostatic energy,
of the initial deterministic generators, which were obtained from our recent analytically exact spiral
scheme (1), and of the final generators obtained by the proposed method, with respect to the
state-of-the-art method. At the level of percent relative error achieved by the proposed
method, the uniformity of the points between the proposed method and the state-of-the-art
method is indistinguishable visually or quantitatively. However, we observed that the point sets
generated by centroidal Voronoi tessellations are consistently higher in modified
electrostatic energy than those obtained through the iterative scheme. We believe this observation
has to do with an intrinsic property of centroidal Voronoi tessellations, that is, the fundamental
centroidal constraint itself. The centroidal constraint forces generators into geometrically
frustrated positions, even though the effect of the centroidal constraint on geometric frustration is
very slight.
Figure 2B shows the performance of the proposed method in terms of execution time (in
seconds) per iteration as a function of the number of points on the upper hemisphere. It is clear
that the execution time per iteration has a linear trend with respect to N.

Figure 2C shows the frequency of invocation of Step 1, which is an O(N^2) algorithm, as a
function of N, the number of points on the upper hemisphere, and Figure 2D shows the total number
of iterations as a function of N; the instances of invocation of Step 1 that occurred within
these iterations are shown in red. Note that the highest peak is at (N = 324, 4161 iterations).
The second test gives an example of a point set with sample size large enough that
the state-of-the-art method is currently not feasible. In this example, we used a sample size of
888. Figure 3A shows the initial generators and their Voronoi regions as obtained from the
analytically exact spiral scheme (1). Figures 3B and 3C show the centroids and centroidal
Voronoi regions on the upper and lower hemispheres, respectively. Figure 3D shows the
centroids on the whole sphere by combining Figures 3B and 3C.

The third test is intended to show the feasibility of the proposed method in generating point
sets of sample size large enough for 3D radial MRI applications and beyond. Figure 4 shows the
Voronoi regions on the sphere with 16000 generators on the upper hemisphere. It took 10.38
minutes to generate the centroidal Voronoi tessellations (at a tolerance level of 10^-12) on the
machine with four 8-core 2.3 GHz AMD Opteron processors (6134).
The final test is intended to show the qualitative features of the reconstructed images of
the three-dimensional analytical MRI phantom (33) obtained from the proposed method, the
method of Wong and Roos, the generalized spiral scheme and the tessellated icosahedral scheme of
the upper hemisphere. Note that the reconstruction algorithm is based upon the following
non-uniform discrete Fourier transform for the image-domain signal at \(\mathbf{r}\):

\[ I(\mathbf{r}) = \sum_{i=1}^{m} S(\mathbf{k}_i)\, |\mathbf{k}_i|^{2} \exp(2\pi i\, \mathbf{k}_i \cdot \mathbf{r}), \]

where \(S(\mathbf{k}_i)\) is the signal value of the three-dimensional Shepp-Logan phantom evaluated
analytically at \(\mathbf{k}_i\), see (33). The term \(|\mathbf{k}_i|^{2}\) comes from the spherical coordinate
transformation in the Fourier expansion. Two target matrix sizes of the reconstructed imaging
volume were chosen: 128 x 128 x 128 (standard resolution) and 256 x 256 x 256
(high resolution). We have \(m = 526 \times 64 = 33664\), \(k_{max} = 32\) (arbitrary units) and \(\Delta k = 0.5\)
(arbitrary units) at the standard resolution, and \(m = 526 \times 128 = 67328\), \(k_{max} = 64\) (arbitrary
units) and \(\Delta k = 0.5\) (arbitrary units) at the high resolution. Figure 5 shows the reconstructed
images generated from the different schemes studied in this work at the standard resolution (the
first two rows) and at the high resolution (the last two rows). At the standard resolution of
128 x 128 x 128 with z = -0.181, the images generated by the method of Wong and Roos and the
generalized spiral scheme have more noticeable ringing artifacts around the bright region than
that of the proposed method. The image generated by the tessellated icosahedral scheme has
more streak artifacts in the dark regions than that of the proposed method. At the same
resolution with z = 0.228, the images generated from these methods have more ringing artifacts
than that of the proposed method. Similar patterns of artifacts showed up more noticeably at the
higher resolution of 256 x 256 x 256. At this resolution, the images generated from the proposed
method have less ringing artifact than those from the other methods.
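For concreteness, a brute-force Python sketch of this reconstruction sum is shown below; it forms the full (voxels x samples) Fourier kernel in memory and is therefore suitable only for small test volumes, consistent with the testing-only role of the algorithm noted in the Discussion.

import numpy as np

def reconstruct(S, K, R):
    # direct non-uniform DFT: I(r) = sum_i S(k_i) |k_i|^2 exp(2*pi*1j * k_i . r)
    # S: (m,) complex k-space samples; K: (m, 3) k-space points; R: (p, 3) voxel positions
    w = np.sum(K * K, axis=1)                  # |k_i|^2 from the spherical coordinates
    kernel = np.exp(2j * np.pi * (R @ K.T))    # (p, m) Fourier kernel
    return kernel @ (S * w)                    # complex image values at the p voxels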
DISCUSSION
Even though our recent works (13,14) were the first to solve the problem of deterministically
distributing antipodally symmetric points on the unit sphere, the generated points are structurally
very regular and are prone to causing degeneracy in triangulation (43), i.e., when four points
rather than three sit on the same circumcircle; hence, point sets from these works could not
be used as the initial generators for the proposed method. The point sets generated from the
analytically exact spiral scheme did not have the above-mentioned defect and contributed
immensely to the feasibility of the present approach by providing a highly efficient technique for
generating nearly uniform points on the sphere, half of which are then used as the initial
generators in the present approach.
The novelty of this work lies in the realization that the reciprocal of the reciprocal metric
can be treated as a pseudometric in the antipodally symmetric space, in the extension of the
centroidal Voronoi tessellations to the antipodally symmetric space, in the heuristic strategies
suggested above for accelerating centroidal Voronoi tessellations without tessellating at every
iteration, and in the utilization of the proposed constrained centroidal Voronoi tessellations to
generate uniform antipodally symmetric points on the sphere. It should be clear that the
pseudometric used in this work can also be used in any K-means clustering algorithm of discrete
data that are endowed with antipodal symmetry. The results showed that an antipodally symmetric
uniform sampling strategy does play a positive role in image quality, and such a strategy should
be incorporated into sampling design considerations of any 3D radial MRI study.
One of the most exciting results of this work is the set of heuristic strategies used to accelerate
centroidal Voronoi tessellations. This result highlights the emergence of strong heuristics in
which a traditionally optimal algorithm that is invoked iteratively may not outperform a less optimal
algorithm that is invoked sparingly in an innovative way to accomplish the same computational
task. For small sample sizes, the point sets generated from the proposed method are
comparable to those generated from the state-of-the-art method in all the cases tested. One of
the important highlights of this work is that the proposed method is capable of generating large
uniform point sets relevant to 3D radial MRI applications; the state-of-the-art iterative method is
completely infeasible in this case, particularly when the desired number of points is in the
thousands.

We must note that the reconstruction algorithm adopted in this work is for testing
purposes, as it is computationally expensive, but it serves the present purpose very well because
whatever differences appear in the reconstructed images have to come from the
uniformity, or lack thereof, of each of the methods studied in this work. It is not fortuitous that the
reconstructed image from the proposed method has less streak and ringing artifacts compared
to those of the other methods studied in this work.
Uniform antipodally symmetric point sets have continued to play a major role in MRI
acquisition design (14,16,23,37,44-49) and are beginning to make an impact in other fields (32).
We hope this work contributes to further advances to these scientific endeavors beyond
diffusion and 3D radial MRI.
Acknowledgment
The author dedicates this work to Lean Choo Khoo. He would like to thank Drs. M. Elizabeth
Meyerand, Charles A. Mistretta and Peter J. Basser for the encouragement. Thanks go to Dr.
Orhan Unal, Director of Medical Physics Computing, for making available the computing
resources needed for this work and for providing the specifications of the Opteron Processors.
The author would like to thank Drs. Steven R. Kecskemeti, Kevin M Johnson, and Michael A.
Speidel for helpful discussion on image reconstruction and image quality assessment. This work
was supported in part by the National Institutes of Health IRCMH090912. The software related
to this work will be made freely available for research use at URL:
http://sites.google.com/site/hispeedpackets/ .

Appendix A
In this appendix, we present a method, which is a special case of our previous work (50), to
compute the radius of a spherical cap so as to cover a sufficient number of surrounding
generators for the purpose of computing a Voronoi region. We also mention the criterion adopted
to decide whether to invoke Step 1 or not.

If the center of a circle of radius r on a plane tangent to the unit sphere lies at the point
(0,0,1), i.e., on the z-axis, then its inverse gnomonic projection is a spherical cap, see Figure 2A
of Ref.(50). Since the area of the upper hemisphere of unit radius is \(2\pi\), the area associated
with a generator can be approximated by \(2\pi/N\), where N is the total number of generators
on the upper hemisphere. This approximation works well for large N, i.e., 15 and above. The
unnormalized areal measure, Eq.(33) of Ref.(50), of a spherical cap with area equal to \(2\pi/N\)
is related to r by the following expression:

\[ \frac{2\pi}{N} = 2\pi\left(1 - \frac{1}{\sqrt{1+r^{2}}}\right). \]

Therefore, \( r = \sqrt{\left(\frac{N}{N-1}\right)^{2} - 1} \).

The angle subtended by the longest arc passing through the interior of the spherical cap is given
by:

\[ \psi = 2\tan^{-1}(r). \]

This angle is also the approximate spherical distance between two generators. Therefore, any
generators that are within \(5\psi/2\) in spherical distance from the generator of interest will be
classified as surrounding generators.
Previously, we mentioned that if the maximum value of the cumulative sums of distances
made by the generators in successive iterations is less than some prescribed threshold value,
which is a small fraction of the prescribed radius in Step 1, then Step 1 will not be invoked. The
prescribed value is based on Euclidean distance for simplicity, as the local convergence criterion
is also based on Euclidean distance. The prescribed threshold value is given by
\( C\sqrt{2\left(1 - \cos(\tan^{-1}(r))\right)} \), where C is a positive real number less than unity. The specific value of
C adopted in this study was C = 0.15. Note that if C assumes too high a value, one may run
into the problem in which the initial surrounding generators may no longer be the current surrounding
generators, which may cause errors in the Voronoi tessellations. If C assumes too low a value,
Step 1 may be invoked many more times than necessary.
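These quantities can be computed directly, as in the following sketch (the symbol psi for the cap angle and the function name are ours):

import numpy as np

def step1_parameters(N, C=0.15):
    # N: number of generators on the upper hemisphere; C: positive fraction < 1
    r = np.sqrt((N / (N - 1.0))**2 - 1.0)       # planar radius of the tangent disk
    psi = 2.0 * np.arctan(r)                    # cap angle ~ generator spacing
    search_radius = 2.5 * psi                   # spherical search distance of Step 1
    threshold = C * np.sqrt(2.0 * (1.0 - np.cos(np.arctan(r))))  # Euclidean drift bound
    return search_radius, threshold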
Appendix B: The proposed pseudometric for the unit sphere
The distance measure, \(d(\cdot,\cdot)\), proposed in this work is a novel extension of the modified
electrostatic potential energy term studied in (37). For the sake of completeness, the notion of
real and virtual points for manipulating the antipodally symmetric point set as suggested in (37)
will be reintroduced here.

Due to the constraint of antipodal symmetry, we classify points as real and their
corresponding antipodal points as virtual. If we have N real points (say on the upper
hemisphere), denoted by unit vectors \(\mathbf{r}_i\) with \(i = 1, \ldots, N\), then the total electrostatic energy for
the whole configuration (37) of 2N points of both real and virtual points on the whole sphere is
given by:

\[ E = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left( \frac{1}{r_{ij}} + \frac{1}{\sqrt{4 - r_{ij}^{2}}} \right), \]    [B1]

with

\[ r_{ij} = \begin{cases} \|\mathbf{r}_i - \mathbf{r}_j\| & \text{if } \arccos(\mathbf{r}_i \cdot \mathbf{r}_j) \le \pi/2, \\ \|\mathbf{r}_i + \mathbf{r}_j\| & \text{if } \arccos(\mathbf{r}_i \cdot \mathbf{r}_j) > \pi/2. \end{cases} \]    [B2]

We should note that we could have defined \(r_{ij}\) to be simply \(\|\mathbf{r}_i - \mathbf{r}_j\|\), which is in fact the actual
definition used in the implementation of the proposed algorithm, and the expression in Eq.[B1]
would still be valid; however, the definition of \(r_{ij}\) in Eq.[B2] brings much clarity to the proof later.
Note that Eq.[B1] is expressed solely in terms of real points. The interaction term in Eq.[B1],

\[ I(\mathbf{r}_i, \mathbf{r}_j) = \frac{1}{r_{ij}} + \frac{1}{\sqrt{4 - r_{ij}^{2}}}, \]    [B3]

was first postulated in (37) as a reciprocal metric (or reciprocal of the distance measure)
between two real points. Here, we prove that \(m(\cdot,\cdot)\), given by

\[ m(\mathbf{r}_i, \mathbf{r}_j) = 1 / I(\mathbf{r}_i, \mathbf{r}_j) = \frac{1}{\dfrac{1}{r_{ij}} + \dfrac{1}{\sqrt{4 - r_{ij}^{2}}}}, \]    [B4]

is indeed a pseudometric.
Before we begin the proof to show that m (,) is a pseudometric on the unit sphere, we
first list all the necessary information about (1) the pseudometric conditions on the unit sphere,
(2) concave functions, (3) sub-additive functions and (4) a well known property about the
vertices of any spherical triangle on the unit sphere that is endowed with antipodal symmetry.
Definition 2.1. Let S 2 be the three-dimensional unit sphere. A pseudometric on S 2 is a
mapping, d : S 2 S 2 [0,2] , such that
(a) d (ri , r j ) 0 ,
(b) d (ri , r j ) d (r j , ri ) ,
(c) ri r j implies d (ri , r j ) 0 , and
(d) d (ri , r j ) d (ri , rk ) d (r j , rk ) .
Note that r_i, r_j, and r_k are unit vectors. Note further that d(·,·) is considered a metric if it satisfies the above conditions and also satisfies the converse of (c), i.e., d(r_i, r_j) = 0 implies r_i = r_j. In brief, a metric is automatically a pseudometric but the converse is not generally true. It is not hard to see that m(·,·) satisfies the first three conditions trivially by virtue of Eq.[B2]. The proof will focus on showing that m(·,·) also satisfies the last condition, i.e., the triangle inequality.
Definition 2.2. A function g is concave on an interval [a, b] if, for any x and y in [a, b] and for any λ ∈ [0, 1], we have

g(λx + (1 − λ)y) ≥ λ g(x) + (1 − λ) g(y).    [B5]
Lemma 2.1. If a function g is concave on an interval [0, b] for some b > 0 and g(0) ≥ 0, then g is subadditive, i.e., g(x + y) ≤ g(x) + g(y) for any x, y ≥ 0 with x + y ≤ b.

Proof. By setting y = 0 and invoking the fact that g is concave and g(0) ≥ 0, Eq.[B5] reduces to

g(λx) ≥ λ g(x).    [B6]

Therefore,

g(x + y) = (x/(x + y)) g(x + y) + (y/(x + y)) g(x + y) ≤ g((x/(x + y))(x + y)) + g((y/(x + y))(x + y)) = g(x) + g(y),

where the inequality follows from Eq.[B6] with λ = x/(x + y) and λ = y/(x + y), respectively.
The following lemma is well known and is stated for completeness.
Lemma 2.2. For any three points, r_i, r_j, and r_k, on S² that are endowed with antipodal symmetry, two of the eight triplets (±r_i, ±r_j, ±r_k) have vertices bounded by a spherical octant. Consequently, any one of these two triplets, say (r̃_i, r̃_j, r̃_k), satisfies the following properties:

arccos( (r̃_k − r̃_i)·(r̃_j − r̃_i) / (‖r̃_k − r̃_i‖ ‖r̃_j − r̃_i‖) ) ≤ π/2,
arccos( (r̃_k − r̃_j)·(r̃_i − r̃_j) / (‖r̃_k − r̃_j‖ ‖r̃_i − r̃_j‖) ) ≤ π/2, and
arccos( (r̃_j − r̃_k)·(r̃_i − r̃_k) / (‖r̃_j − r̃_k‖ ‖r̃_i − r̃_k‖) ) ≤ π/2.
If we define a new function based on Eq.[B4] as

f(r) = 1 / [ 1/r + 1/√(4 − r²) ],    [B7]

it should be clear that f is concave on [0, 2] because f is continuous and twice-differentiable, and d²f/dr² ≤ 0 for any r ∈ (0, 2). Further, f(0) = 0. Therefore, f is sub-additive. We should note that f is an increasing function on [0, √2] and a decreasing function on [√2, 2].
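These properties of f are easy to confirm numerically; a minimal spot check (illustrative only) follows.

```python
import numpy as np

# Numerical spot check of the claimed properties of f in Eq. [B7]
r = np.linspace(1e-6, 2 - 1e-6, 200001)
f = 1.0 / (1.0 / r + 1.0 / np.sqrt(4 - r**2))

d2f = np.diff(f, 2)                  # discrete second difference
print(np.all(d2f <= 1e-12))          # concavity: f'' <= 0 on (0, 2)
print(r[np.argmax(f)])               # maximum near sqrt(2) ~ 1.4142
```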
To prove the triangle inequality, we invoke Lemma 2.2 to focus on one of the triplets, i.e., r̃_i, r̃_j, and r̃_k, and write

‖r̃_i − r̃_j‖ = a + b ≤ 2, with a = (r̃_i − r̃_j)·(r̃_k − r̃_j)/‖r̃_i − r̃_j‖ ≥ 0, and b = (r̃_j − r̃_i)·(r̃_k − r̃_i)/‖r̃_i − r̃_j‖ ≥ 0.

By Lemma 2.1, f(‖r̃_i − r̃_j‖) = f(a + b) ≤ f(a) + f(b).
By Lemma 2.2 again, we have

(r̃_i − r̃_j)·(r̃_k − r̃_j) / (‖r̃_i − r̃_j‖ ‖r̃_k − r̃_j‖) ≤ 1 and (r̃_j − r̃_i)·(r̃_k − r̃_i) / (‖r̃_i − r̃_j‖ ‖r̃_k − r̃_i‖) ≤ 1,

and therefore,

a ≤ ‖r̃_k − r̃_j‖ ≤ √2 and b ≤ ‖r̃_k − r̃_i‖ ≤ √2.
Note that

f(a) + f(b) ≤ f(‖r̃_k − r̃_j‖) + f(‖r̃_k − r̃_i‖)

because f is an increasing function on [0, √2]. Therefore,

f(‖r̃_i − r̃_j‖) ≤ f(‖r̃_k − r̃_j‖) + f(‖r̃_k − r̃_i‖).
Since r_ij, and hence m(·,·), is invariant under replacing any point by its antipode, the last inequality carries over to the original points r_i, r_j, and r_k. We conclude that m(·,·) satisfies the triangle inequality and, therefore, it is a pseudometric.
References
1. Koay CG. Analytically exact spiral scheme for generating uniformly distributed points on the unit sphere. Journal of Computational Science 2011;2(1):88-91.
2. Du Q, Faber V, Gunzburger M. Centroidal Voronoi Tessellations: Applications and Algorithms. SIAM Review 1999;41(4):637-676.
3. Okabe A, Boots B, Sugihara K, Chiu DSN. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams: John Wiley & Sons; 2009.
4. Honda H. Description of cellular patterns by Dirichlet domains: The two-dimensional case. Journal of Theoretical Biology 1978;72(3):523-543.
5. Duyckaerts C, Godefroy G, Hauw J-J. Evaluation of neuronal numerical density by Dirichlet tessellation. Journal of Neuroscience Methods 1994;51(1):47-69.
6. Barlow GW. Hexagonal territories. Animal Behaviour 1974;22, Part 4(0):876-IN871.
7. Gray RM, Kieffer JC, Linde Y. Locally optimal block quantizer design. Information and Control 1980;45(2):178-198.
8. Du Q, Emelianenko M, Ju L. Convergence of the Lloyd Algorithm for Computing Centroidal Voronoi Tessellations. SIAM Journal on Numerical Analysis 2006;44(1):102-119.
9. Thomson JJ. On the structure of the atom: an investigation of the stability and periods of oscillation of a number of corpuscles arranged at equal intervals around the circumference of a circle; with application of the results to the theory of atomic structure. Philosophical Magazine 1904;7(39):237-265.
10. Rakhmanov E, Saff E, Zhou Y. Minimal discrete energy on the sphere. Mathematical Research Letters 1994;1:647-662.
11. Saff E, Kuijlaars A. Distributing many points on a sphere. The Mathematical Intelligencer 1997;19:5-11.
12. Bauer R. Distribution of points on a sphere with application to star catalogs. Journal of Guidance, Control, and Dynamics 2000;23:130-137.
13. Koay CG. A simple scheme for generating nearly uniform distribution of antipodally symmetric points on the unit sphere. Journal of Computational Science 2011(2):376-380.
14. Koay CG, Hurley SA, Meyerand ME. Extremely efficient and deterministic approach to generating optimal ordering of diffusion MRI measurements. Med Phys 2011;38(8):4795-4801.
15. Wong S, Roos M. A strategy for sampling on a sphere applied to 3D selective RF pulse design. Magnetic Resonance in Medicine 1994;32:778-784.
16. Jones DK, Horsfield MA, Simmons A. Optimal strategies for measuring diffusion in anisotropic systems by magnetic resonance imaging. Magnetic Resonance in Medicine 1999;42(3):515-525.
17. Basser PJ, Mattiello J, Le Bihan D. MR diffusion tensor spectroscopy and imaging. Biophys J 1994;66(1):259-267.
18. Lai CM, Lauterbur PC. True three-dimensional image reconstruction by nuclear magnetic resonance zeugmatography. Phys Med Biol 1981;26(5):851-856.
19. Callaghan PT. Principles of Nuclear Magnetic Resonance Microscopy. New York: Oxford University Press; 1991.
20. Lustig M, Seung-Jean K, Pauly JM. A fast method for designing time-optimal gradient waveforms for arbitrary k-space trajectories. IEEE Transactions on Medical Imaging 2008;27(6):866-873.
21. Liu J, Redmond MJ, Brodsky EK, Alexander AL, Lu A, Thornton FJ, Schulte MJ, Grist TM, Pipe JG, Block WF. Generation and visualization of four-dimensional MR angiography data using an undersampled 3-D projection trajectory. IEEE Transactions on Medical Imaging 2006;25(2):148-157.
22. Mistretta CA, Wieben O, Velikina J, Block W, Perry J, Wu Y, Johnson K. Highly constrained backprojection for time-resolved MRI. Magnetic Resonance in Medicine 2006;55(1):30-40.
23. Barger AV, Block WF, Toropov Y, Grist TM, Mistretta CA. Time-resolved contrast-enhanced imaging with isotropic resolution and broad coverage using an undersampled 3D projection trajectory. Magnetic Resonance in Medicine 2002;48(2):297-305.
24. Stehning C, Börnert P, Nehrke K, Eggers H, Dössel O. Fast isotropic volumetric coronary MR angiography using free-breathing 3D radial balanced FFE acquisition. Magnetic Resonance in Medicine 2004;52(1):197-203.
25. Nagel AM, Laun FB, Weber M-A, Matthies C, Semmler W, Schad LR. Sodium MRI using a density-adapted 3D radial acquisition technique. Magnetic Resonance in Medicine 2009;62(6):1565-1573.
26. Rahmer J, Börnert P, Groen J, Bos C. Three-dimensional radial ultrashort echo-time imaging with T2 adapted sampling. Magnetic Resonance in Medicine 2006;55(5):1075-1082.
27. Nielles-Vallespin S, Weber M-A, Bock M, Bongers A, Speier P, Combs SE, Wöhrle J, Lehmann-Horn F, Essig M, Schad LR. 3D radial projection technique with ultrashort echo times for sodium MRI: Clinical applications in human brain and skeletal muscle. Magnetic Resonance in Medicine 2007;57(1):74-81.
28. Piccini D, Littmann A, Nielles-Vallespin S, Zenge MO. Spiral phyllotaxis: The natural way to construct a 3D radial trajectory in MRI. Magnetic Resonance in Medicine 2011;66(4):1049-1056.
29. Ahmad R, Deng Y, Vikram DS, Clymer B, Srinivasan P, Zweier JL, Kuppusamy P. Quasi Monte Carlo-based isotropic distribution of gradient directions for improved reconstruction quality of 3D EPR imaging. Journal of Magnetic Resonance 2007;184(2):236-245.
30. Tuch DS, Reese TG, Wiegell MR, Makris N, Belliveau JW, Wedeen VJ. High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity. Magnetic Resonance in Medicine 2002;48(4):577-582.
31. Cook PA, Bai Y, Nedjati-Gilani S, Seunarine KK, Hall MG, Parker GJ, Alexander DC. Camino: Open-source Diffusion-MRI Reconstruction and Processing. 2006; 14th Scientific Meeting of the International Society for Magnetic Resonance in Medicine, Seattle, WA, USA. p 2759.
32. King C, Brown WR, Geller MJ, Kenyon SJ. Identifying star streams in the Milky Way halo. Astrophysical Journal 2012;750(1).
33. Koay CG, Sarls JE, Ozarslan E. Three-dimensional analytical magnetic resonance imaging phantom in the Fourier domain. Magnetic Resonance in Medicine 2007;58(2):430-436.
34. Shepp LA, Logan BF. Reconstructing interior head tissue from X-ray transmissions. IEEE Transactions on Nuclear Science 1974;21(1):228-236.
35. Shepp LA. Computerized tomography and nuclear magnetic resonance. Journal of Computer Assisted Tomography 1980;4(1):94-107.
36. Lloyd S. Least squares quantization in PCM. IEEE Transactions on Information Theory 1982;28(2):129-137.
37. Koay CG, Özarslan E, Johnson KM, Meyerand ME. Sparse and optimal acquisition design for diffusion MRI and beyond. Med Phys 2012;39(5):2499-2511.
38. Augenbaum JM, Peskin CS. On the construction of the Voronoi mesh on a sphere. Journal of Computational Physics 1985;59(2):177-192.
39. Renka RJ. Algorithm 772: STRIPACK: Delaunay triangulation and Voronoi diagram on the surface of a sphere. ACM Trans Math Softw 1997;23(3):416-434.
40. Coxeter HSM. Introduction to Geometry. New York: Wiley; 1989.
41. Pólya G, Latta G. Complex Variables. New York: Wiley; 1974.
42. Graham RL. An efficient algorithm for determining the convex hull of a finite planar set. Information Processing Letters 1972;1(4):132-133.
43. Beichl I. Dealing with degeneracy in triangulation. Computing in Science & Engineering 2002;4(6):70-74.
44. Dubois J, Poupon C, Lethimonnier F, Le Bihan D. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets. Magnetic Resonance Materials in Physics, Biology and Medicine 2006;19(3):134-143.
45. Cook PA, Symms M, Boulby PA, Alexander DC. Optimal acquisition orders of diffusion-weighted MRI measurements. Journal of Magnetic Resonance Imaging 2007;25(5):1051-1058.
46. Koay CG, Chang L-C, Carew JD, Pierpaoli C, Basser PJ. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging. Journal of Magnetic Resonance 2006;182(1):115-125.
47. Koay CG, Özarslan E, Basser PJ. A signal transformational framework for breaking the noise floor and its applications in MRI. Journal of Magnetic Resonance 2009;197(4):108-119.
48. Deriche R, Calder J, Descoteaux M. Optimal real-time Q-ball imaging using regularized Kalman filtering with incremental orientation sets. Medical Image Analysis 2009;13(4):564-579.
49. Nett EJ, Johnson KM, Frydrychowicz A, Del Rio AM, Schrauben E, Francois CJ, Wieben O. Four-dimensional phase contrast MRI with accelerated dual velocity encoding. Journal of Magnetic Resonance Imaging 2012;35(6):1462-1471.
50. Koay CG, Nevo U, Chang LC, Pierpaoli C, Basser PJ. The elliptical cone of uncertainty and its normalized measures in diffusion tensor imaging. IEEE Transactions on Medical Imaging 2008;27(6):834-846.
FIGURE CAPTIONS

Figure 1. (A) A generator (in blue) and its surrounding generators (in red) within a spherical cap (blue closed curve) of predetermined radius. (B) The smallest convex region formed by the surrounding generators that contains the generator. (C) Finding the smallest convex region that contains the generator is equivalent to finding the convex hull of the surrounding generators in the stereographic projection.

Figure 2. (A) Percent relative error, in terms of the modified electrostatic energies of antipodally symmetric point sets with N ranging from 16 to 356, of the initial generators obtained from the analytically exact spiral scheme and of the final generators (centroids) obtained from the proposed method, compared to the state-of-the-art method. (B) The execution time (in seconds) per iteration of the proposed method as a function of the number of points on the upper hemisphere has a linear trend with respect to N. (C) The frequency of invocation of Step 1, which is an O(N²) algorithm, as a function of N, the number of points on the upper hemisphere. (D) Total number of iterations as a function of N, with instances of invocation of Step 1 throughout the iterations shown in red. The highest peak is at (N = 324, 4161 iterations).

Figure 3. (A) 888 deterministic generators taken from the analytically exact spiral scheme and their Voronoi regions on the upper hemisphere. 888 final (B) real and (C) virtual generators from the proposed method. (D) 1776 antipodally symmetric points on the sphere obtained from (B) and (C).

Figure 4. Pseudometrically constrained Voronoi tessellations of the sphere with 16000 generators (not shown in the figure) on the upper hemisphere.

Figure 5. Reconstructed images of the analytical phantom generated from different sampling schemes at various slice locations. At the standard resolution of 128×128×128 with z = −0.181, the images generated by the method of Wong and Roos and by the generalized spiral scheme have more noticeable ringing artifacts around the bright region than that of the proposed method. The image generated by the tessellated icosahedral scheme has more streak artifacts in the dark regions than that of the proposed method. At the same resolution with z = 0.228, the images generated from these methods have more ringing artifacts than that of the proposed method. Similar patterns of artifacts show up more noticeably at the high resolution of 256×256×256. At this resolution, the images generated from the proposed method have fewer ringing artifacts than those from other methods.
NON-ASYMPTOTIC RESULTS FOR CORNISH-FISHER
EXPANSIONS
arXiv:1604.00539v1 [] 2 Apr 2016
V.V. ULYANOV, M. AOSHIMA, AND Y. FUJIKOSHI
Abstract. We obtain computable error bounds for generalized Cornish-Fisher expansions for quantiles of statistics, provided that computable error bounds for the Edgeworth-Chebyshev type expansions of the distributions of these statistics are known. The results are illustrated by examples.
1. Introduction and main results
In statistical inference it is of fundamental importance to obtain
the sampling distribution of statistics. However, we often encounter
situations, where the exact distribution cannot be obtained in closed
form, or even if it is obtained, it might be of little use because of its
complexity. One practical way of getting around the problem is to
provide reasonable approximations of the distribution function and its
quantiles, along with extra information on their possible errors. This can be done with the help of Edgeworth–Chebyshev and Cornish–Fisher type expansions. Recently, interest in Cornish–Fisher type expansions has been stirred up by the intensive study of VaR (Value at Risk) models in financial mathematics and financial risk management (see, e.g., [14] and [15]).
Mainly, the asymptotic behavior of the expansions mentioned above has been studied. This means that the accuracy of the approximation for the distribution of a statistic or its quantiles is given as O(·), that is, in the form of an order with respect to some parameter(s) (usually, w.r.t. n as the number of observations and/or p as the dimension of the observations). In this paper we construct non-asymptotic error bounds, in other words, computable error bounds, for Cornish–Fisher type expansions; that is, for the error of approximation we prove upper bounds with closed-form dependence on n and/or p and, possibly, on some moment characteristics of the observations. We obtain these bounds under the condition that similar non-asymptotic results are already known for the accuracy of approximation of the distributions of the statistics by Edgeworth–Chebyshev type expansions.
Let X be a univariate random variable with a continuous distribution
function F . For α : 0 < α < 1, there exists x such that F (x) = α,
Key words and phrases. computable bounds, non-asymptotic results, Cornish-Fisher expansions.
This work was supported by RSCF grant No. 141100364.
which is called the (lower) 100α% point of F . If F is strictly increasing,
the inverse function F −1 (·) is well defined and the 100α% point is
uniquely determined. We also speak of “quantiles” without reference
to particular values of α meaning the values given by F −1 (·). Even
in the general case, when F (x) is not necessarily continuous nor strictly increasing, we can define its inverse function by the formula
F −1 (u) = inf{x; F (x) > u}.
This is a right-continuous nondecreasing function defined on the interval (0, 1) and F (x0 ) ≥ u0 if x0 = F −1 (u0 ).
Let Fn (x) be a sequence of distribution functions and let each Fn
admit the Edgeworth-Chebyshev type expansion (ECE) in the powers
of ǫ = n−1/2 or n−1 :
(1)
Fn (x) = Gk,n (x) + Rk (x) with Rk (x) = O(ǫk ) and
Gk,n (x) = G(x) + (ǫa1 (x) + · · · + ǫk−1 ak−1 (x)) g(x),
where g(x) is a density function of the limiting distribution function
G(x). An important approach to the problem of approximating the
quantiles of Fn is to use their asymptotic relation to those of G’s. Let
x and u be the corresponding quantiles of Fn and G, respectively. Then
we have
(2)
Fn (x) = G(u).
Write x(u) and u(x) to denote the solutions of (2) for x in terms of u
and u in terms of x, respectively [i.e. u(x) = G−1 (Fn (x)) and x(u) =
Fn−1 (G(u))]. Then we can use the ECE (1) to obtain formal solutions
x(u) and u(x) in the form
(3)
x(u) = u + ǫb1 (u) + ǫ2 b2 (u) + · · ·
and
(4)
u(x) = x + ǫc1 (x) + ǫ2 c2 (x) + · · · .
Cornish and Fisher in [3] and [6] obtained the first few terms of these
expansions when G is the standard normal distribution function (i.e.,
G = Φ). Both (3) and (4) are called the Cornish–Fisher expansions (CFE). Concerning CFE for random variables obeying limit laws from
the family of Pearson distributions see, e.g., [1]. Hill and Davis in [13]
gave a general algorithm for obtaining each term of CFE when G is an
analytic function.
Usually the CFE are applied in the following form with k = 1, 2 or
3:
(5)
xk (u) = u + Σ_{j=1}^{k−1} ǫj bj (u) + R̂k (u) with R̂k (u) = O(ǫk ).
It is known (see, e.g., [15]) how to find the explicit expressions for b1 (u)
and b2 (u) as soon as we have (1). By Taylor’s expansions for G, g, and
a1 , we obtain
(6)
b1 = −a1 (u),
b2 = (1/2){g′(u)/g(u)} a1²(u) − a2 (u) + a′1 (u)a1 (u),
provided that g and a1 are smooth enough functions.
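As an illustration, the relations (1), (5) and (6) can be verified symbolically; the following sketch uses hypothetical concrete choices of G, a1 and a2 (standard normal limit and two polynomial test functions) and checks that Fn(x(u)) = G(u) + O(ǫ³).

```python
import sympy as sp

u, eps = sp.symbols('u epsilon')

# Concrete instances for a spot check: G = standard normal cdf,
# a1, a2 = arbitrary smooth test functions (hypothetical choices).
G  = lambda t: (1 + sp.erf(t/sp.sqrt(2)))/2
g  = lambda t: sp.exp(-t**2/2)/sp.sqrt(2*sp.pi)      # g = G'
a1 = lambda t: t**2 - 1
a2 = lambda t: t**3 - 3*t

Fn = lambda t: G(t) + (eps*a1(t) + eps**2*a2(t))*g(t)   # ECE (1), k = 3

# Cornish-Fisher coefficients from Eq. (6)
b1 = -a1(u)
b2 = sp.Rational(1, 2)*(sp.diff(g(u), u)/g(u))*a1(u)**2 - a2(u) \
     + sp.diff(a1(u), u)*a1(u)

xu = u + eps*b1 + eps**2*b2
residual = sp.series(Fn(xu) - G(u), eps, 0, 3).removeO()
print(sp.simplify(residual))   # prints 0: Fn(x(u)) = G(u) + O(eps^3)
```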
In the following theorems we show how xk (u) from (5) can be expressed in terms of u. Moreover, we show what kind of bounds we can get for R̂k (u) as soon as we have some bounds for Rk (x) from (1).
Theorem 1. Suppose that for the distribution function of a statistic U we have
(7)
F (x) ≡ Pr{U ≤ x} = G(x) + R1 (x),
where for the remainder term R1 (x) there exists a constant c1 such that
|R1 (x)| ≤ d1 ≡ c1 ǫ.
Let xα and uα be the upper 100α% points of F and G respectively, that is,
(8)
Pr{U ≤ xα } = G(uα ) = 1 − α.
Then for any α such that 1 − c1 ǫ > α > c1 ǫ > 0 we have
(i): uα+d1 ≤ xα ≤ uα−d1 ;
(ii): |xα − uα | ≤ c1 ǫ/g(u(1) ), where g is the density function of the limiting distribution G and
g(u(1) ) = min{ g(u) : u ∈ [uα+d1 , uα−d1 ] }.
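A minimal numerical illustration of Theorem 1 with G = Φ follows; the values of c1, ǫ and α are hypothetical.

```python
from scipy.stats import norm

# Illustration of Theorem 1 with G = Phi; c1, eps, alpha are hypothetical.
c1, eps, alpha = 0.8, 0.01, 0.05
d1 = c1*eps                              # note 1 - c1*eps > alpha > c1*eps

u_hi = norm.ppf(1 - (alpha - d1))        # u_{alpha - d1} (upper quantile)
u_lo = norm.ppf(1 - (alpha + d1))        # u_{alpha + d1}
# Part (i): the true upper quantile x_alpha lies in [u_lo, u_hi].
# Part (ii): |x_alpha - u_alpha| <= c1*eps / min density over [u_lo, u_hi];
# for alpha = 0.05 both endpoints are positive and phi is decreasing there.
g_min = min(norm.pdf(u_lo), norm.pdf(u_hi))
print(u_lo, u_hi, d1/g_min)
```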
Theorem 2. In the notation of Theorem 1 we assume that
F (x) ≡ Pr{U ≤ x} = G(x) + ǫg(x)a(x) + R2 (x),
where for the remainder term R2 (x) there exists a constant c2 such that
|R2 (x)| ≤ d2 ≡ c2 ǫ2 .
Let T = T (u) be a monotone increasing transform such that
Pr{T (U) ≤ x} = G(x) + R̃2 (x) with |R̃2 (x)| ≤ d̃2 ≡ c̃2 ǫ2 .
Let x̃α and uα be the upper 100α% points of Pr{T (U) ≤ x} and G, respectively. Then for any α such that
1 − c̃2 ǫ2 > α > c̃2 ǫ2 > 0,
we have
(9)
|x̃α − uα | ≤ c̃2 ǫ2 /g(u(2) ),
where
g(u(2) ) = min{ g(u) : u ∈ [uα+d̃2 , uα−d̃2 ] }.
Theorem 3. We use the notation of Theorem 2. Let b(x) be the function inverse to T , i.e., b(T (x)) = x. Then xα = b(x̃α ), and for α such that 1 − c̃2 ǫ2 > α > c̃2 ǫ2 we have
(10)
|xα − b(uα )| ≤ c̃2 (|b′ (u∗ )|/g(u(2) )) ǫ2 ,
where
|b′ (u∗ )| = max{ |b′ (u)| : u ∈ [uα+d̃2 , uα−d̃2 ] }.
Moreover,
(11)
b(x) = x − ǫa(x) + O(ǫ2 ).
Remark 1. The main assumption of the theorems is that for the distributions of statistics and for the distributions of transformed statistics we have approximations with computable error bounds. There are not many papers with this kind of non-asymptotic result, because it requires techniques different from the methods used for asymptotic results (cf., e.g., [10] and [20]). In a series of papers [7], [8], [10], [11], [2], [16], [18], [19] we obtained non-asymptotic results for a wide class of statistics, including multivariate scale mixtures and MANOVA tests. We also considered the case of high dimensions, that is, the case when the dimension of the observations and the sample size are comparable. The results were included in the book [9]. See also [5].
Remark 2. The results of Theorems 1–3 cannot be extended to the whole range of α ∈ (0, 1). This follows from the fact that the Cornish-Fisher expansion does not converge uniformly in 0 < α < 1; see the corresponding example in Section 2.5 of [12].
Remark 3. In Theorem 2 we required the existence of a monotone increasing transform T (z) such that the distribution of the transformed statistic T (U) is approximated by some limit distribution G(x) better than the distribution of the original statistic U. We call this transformation T (z) the Bartlett type correction; see the corresponding examples in Section 3.
Remark 4. According to (10) and (11), the function b(uα ) in Theorem 3 can be considered as an “asymptotic expansion” for xα up to order O(ǫ2 ).
2. Proofs of main results
Proof of Theorem 1. By the mean value theorem,
|G(xα ) − G(uα )| ≥ |xα − uα | · min_{0<θ<1} g(uα + θ(xα − uα )).
From (7) and the definition of xα and uα in (8), we get
|G(xα ) − G(uα )| = |G(xα ) − Pr{U ≤ xα }|
= |R1 (xα )| ≤ d1 .
Therefore,
(12)
|xα − uα | ≤ d1 / min_{0<θ<1} g(uα + θ(xα − uα )).
On the other hand, it follows from (7) that
G(xα ) = 1 − α − R1 (xα ) ≤ 1 − (α − d1 ) = G(uα−d1 ).
This implies that xα ≤ uα−d1 . Similarly, we have uα+d1 ≤ xα . Thus we have proved Theorem 1 (i).
It follows from Theorem 1 (i) that
min_{u∈[uα+d1 , uα−d1 ]} g(u) ≤ min_{0<θ<1} g(uα + θ(xα − uα )).
Thus, using (12) we get the statement of Theorem 1 (ii).
Proof of Theorem 2. It is easy to see that it is sufficient to apply Theorem 1 (ii) to the transformed statistic T (U).
Proof of Theorem 3. Using now (9) and the mean value theorem we obtain
(13)
x̃α − uα = b−1 (xα ) − b−1 (b(uα )) = (b−1 )′ (x∗ ) (xα − b(uα )),
where x∗ is a point in the interval ( min{xα , b(uα )} , max{xα , b(uα )} ).
By Theorem 1 (i) we have
uα+d̃2 ≤ x̃α ≤ uα−d̃2 .
Therefore, for xα = b(x̃α ) we get
(14)
[ min{b−1 (xα ), uα } , max{b−1 (xα ), uα } ] ⊆ [ uα+d̃2 , uα−d̃2 ].
Since, by the properties of derivatives of inverse functions,
(b−1 )′ (z) = 1/b′ (b−1 (z)) = 1/b′ (y)
for z = b(y), the relations (13) and (14) imply (10). Representation (11) for b(x) follows from (6) and (10).
3. Examples
In [17] we gave sufficient conditions for a transformation T (x) to be the Bartlett type correction (see Remark 3 above) for a wide class of statistics U allowing the following representation:
(15)
Pr{U ≤ x} = Gq (x) + (1/n) Σ_{j=0}^{k} aj Gq+2j (x) + R2k ,
where R2k = O(n−2 ), Gq (x) is the distribution function of the chi-squared distribution with q degrees of freedom, and the coefficients aj satisfy the relation Σ_{j=0}^{k} aj = 0. Some examples of the statistic U are
as follows: for k = 1, the likelihood ratio test statistic; for k = 2, the
Lawley-Hotelling trace criterion and the Bartlett-Nanda-Pillai trace
criterion, which are test statistics for the multivariate linear hypothesis under normality; for k = 3, the score test statistic and Hotelling’s T² statistic under nonnormality. The results of [17] were extended in [4]
and [5].
In [10] we were interested in the null distribution of Hotelling’s generalized T0² statistic defined by
T0² = n tr Sh Se−1 ,
where Sh and Se are independently distributed as Wishart distributions Wp (q, Ip ) and Wp (n, Ip ) with identity operator Ip in Rp , respectively. In Theorem 4.1 (ii) in [10] we proved (15) for all n ≥ p with k = 3 and a computable error bound:
|Pr(T0² ≤ x) − Gr (x) − (r/(4n)){(q − p − 1)Gr (x) − 2qGr+2 (x) + (q + p + 1)Gr+4 (x)}| ≤ cp,q /n² ,
where r = pq and for the constant cp,q we gave an explicit formula with dependence on p and q.
Therefore, according to [17] we can take in this case the Bartlett type correction T (z) as
T (z) = (a − 1)/(2b) + √( ((a − 1)/(2b))² + z/b ),
where
a = (1/(2n)) p(q − p − 1),   b = (1/(2n)) p(q + p + 1)(q + 2)−1 .
It is clear that T (z) is invertible and we can apply Theorem 3.
Other examples and numerical calculations and comparisons of approximation accuracy see in [4] and [5].
One more example is connected with the sample correlation coefficient. Let X⃗ = (X1 , ..., Xn )T and Y⃗ = (Y1 , ..., Yn )T be two vectors from an n-dimensional normal distribution N(0, In ) with zero mean and identity covariance matrix In , and let the sample correlation coefficient be
R = R(X⃗, Y⃗) = Σ_{k=1}^{n} Xk Yk / √( Σ_{k=1}^{n} Xk² · Σ_{k=1}^{n} Yk² ).
In [2] it was proved for n ≥ 7 and N = n − 2.5:
sup_x | Pr{√N R ≤ x} − Φ(x) − x³ϕ(x)/(4N) | ≤ Bn /N² ,
with Bn ≤ 2.2. It is easy to see that we can take T (z) as the Bartlett type correction in the form T (z) = z + z³/(4N). Then the inverse function b(z) = T −1 (z) is defined by the formula
b(z) = ( 2Nz + √((2Nz)² + (4N/3)³) )^{1/3} − ( −2Nz + √((2Nz)² + (4N/3)³) )^{1/3}
     = z − z³/(4N) + (3/16) z⁵/N² + O(N −3 ).
Now we can apply Theorem 3.
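A minimal numerical sketch of this example follows; the values of n and z are hypothetical, and the code only checks that b inverts T and matches the series above.

```python
import numpy as np

# Sketch: Bartlett-type correction for the sample correlation example.
n = 50
N = n - 2.5
T = lambda z: z + z**3/(4*N)

def b(z):
    # Closed-form inverse of T via Cardano's formula (see display above)
    s = np.sqrt((2*N*z)**2 + (4*N/3)**3)
    return np.cbrt(2*N*z + s) - np.cbrt(-2*N*z + s)

z = 1.3
print(T(b(z)) - z)                                   # ~0: b inverts T
print(b(z) - (z - z**3/(4*N) + 3*z**5/(16*N**2)))    # O(N^-3) residual
```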
References
[1] L.N. Bol’shev, ”Asymptotically Pearson transformations”, Theor. Probab.
Appl., 8, 121–146 (1963).
[2] G. Christoph, V.V. Ulyanov, and Y. Fujikoshi, ”Accurate approximation of
correlation coefficients by short Edgeworth-Chebyshev expansion and its statistical applications”, Springer Proceedings in Mathematics and Statistics, 33,
239–260 (2013).
[3] E.A. Cornish and R.A. Fisher, ”Moments and cumulants in the specification
of distributions”, Rev. Inst. Internat. Statist., 4, 307–320 (1937).
[4] H. Enoki and M. Aoshima, ”Transformations with improved chi-squared approximations”, Proc. Symp., Res. Inst. Math. Sci., Kyoto Univ., 1380, 160–
181 (2004).
[5] H. Enoki and M. Aoshima, ”Transformations with improved asymptotic approximations and their accuracy”, SUT Journal of Mathematics, 42, no.1,
97–122 (2006).
[6] R.A. Fisher and E.A. Cornish, ”The percentile points of distributions having
known cumulants”, J. Amer. Statist. Assoc., 80, 915–922 (1946).
[7] Y. Fujikoshi, and V.V. Ulyanov, ”Error bounds for asymptotic expansions of
Wilks’ lambda distribution”, Journal of Multivariate Analysis, 97, no.9, 1941–
1957 (2006).
[8] Y. Fujikoshi, and V.V. Ulyanov, ”On accuracy of approximations for location
and scale mixtures”, Journal of Mathematical Sciences, 138, no.1, 5390–5395
(2006).
Y. Fujikoshi, V.V. Ulyanov, and R. Shimizu, Multivariate Statistics: High-Dimensional and Large-Sample Approximations, Wiley Series in Probability
and Statistics, John Wiley & Sons, Hoboken, N.J., (2010).
[10] Y. Fujikoshi, V.V. Ulyanov, and R. Shimizu, ”L1 -norm error bounds for asymptotic expansions of multivariate scale mixtures and their applications to
Hotelling’s generalized T02 ”, Journal of Multivariate Analysis, 96, no.1, 1–19
(2005).
[11] Y. Fujikoshi, V.V. Ulyanov, and R. Shimizu, ”Error bounds for asymptotic
expansions of the distribution of multivariate scale mixture”, Hiroshima Mathematical Journal, 35, no.3, 453–469 (2005).
[12] P. Hall, The Bootstrap and Edgeworth Expansion, Springer-Verlag, New York
(1992).
[13] G.W. Hill and A.W. Davis, ”Generalized asymptotic expansions of Cornish-Fisher type”, Ann. Math. Statist., 39, 1264–1273 (1968).
[14] S. Jaschke, ”The Cornish-Fisher-expansion in the context of delta-gamma-normal approximations”, J. Risk, 4, no.4, 33–52 (2002).
[15] V.V. Ulyanov, ”Cornish-Fisher Expansions”. In International Encyclopedia of
Statistical Science, Ed. M.Lovric, Springer-Verlag, Berlin–Heidelberg, 312–315
(2011).
[16] V.V. Ulyanov, G. Christoph, and Y. Fujikoshi, ”On approximations of transformed chi-squared distributions in statistical applications”, Siberian Mathematical Journal, 47, no.6, 1154–1166 (2006).
[17] V.V. Ulyanov and Y. Fujikoshi, ”On accuracy of improved χ2 -approximations”,
Georgian Mathematical Journal, 8, no.2, 401–414 (2001).
[18] V.V. Ulyanov, Y. Fujikoshi, and R. Shimizu, ”Nonuniform error bounds in asymptotic expansions for scale mixtures under mild moment conditions”, Journal of Mathematical Sciences, 93, no.4, 600–608 (1999).
[19] V.V. Ulyanov, H. Wakaki, and Y. Fujikoshi, ”Berry-Esseen bound for high dimensional asymptotic approximation of Wilks’ lambda distribution”, Statistics
and Probability Letters, 76, no.12, 1191–1200 (2006).
[20] H. Wakaki, Y. Fujikoshi, and V.V. Ulyanov, ”Asymptotic Expansions of the
Distributions of MANOVA Test Statistics when the Dimension is Large”, Hiroshima Mathematical Journal, 44, no.3, 247–259 (2014).
V.V. Ulyanov, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, 119991, Russia, and
National Research University Higher School of Economics (HSE),
Moscow, 101000, Russia
E-mail address: [email protected]
M. Aoshima, Institute of Mathematics, University of Tsukuba,
Tsukuba, Ibaraki 305-8571, Japan
E-mail address: [email protected]
Y. Fujikoshi, Department of Mathematics, Hiroshima University,
Higashi-Hiroshima, 739-8526, Japan
E-mail address: fujikoshi [email protected]
Millimeter Wave MIMO Prototype:
Measurements and Experimental Results
arXiv:1710.09449v1 [] 25 Oct 2017
Vasanthan Raghavan, Andrzej Partyka, Ashwin Sampath, Sundar Subramanian,
Ozge Hizir Koymen, Kobi Ravid, Juergen Cezanne, Kiran Mukkavilli, Junyi Li
Abstract
Millimeter-wave multi-input multi-output (mm-Wave MIMO) systems are one of the candidate schemes
for 5G wireless standardization efforts. In this context, the main contributions of this article are threefold. 1) We describe parallel sets of measurements at identical transmit-receive location pairs with
2.9, 29 and 61 GHz carrier frequencies in indoor office, shopping mall, and outdoor settings. These
measurements provide insights on propagation, blockage and material penetration losses, and the key
elements necessary in system design to make mm-Wave systems viable in practice. 2) One of these
elements is hybrid beamforming necessary for better link margins by reaping the array gain with large
antenna dimensions. From the class of fully-flexible hybrid beamformers, we describe a robust class
of directional beamformers towards meeting the high data-rate requirements of mm-Wave systems. 3)
Leveraging these design insights, we then describe an experimental prototype system at 28 GHz that
realizes high data-rates on both the downlink and uplink and robustly maintains these rates in outdoor
and indoor mobility scenarios. In addition to maintaining large signal constellation sizes in spite of
radio frequency challenges, this prototype leverages the directional nature of the mm-Wave channel to
perform seamless beam switching and handover across mm-Wave base-stations thereby overcoming the
path losses in non-line-of-sight links and blockages encountered at mm-Wave frequencies.
Index Terms
Millimeter-wave, experimental prototype, MIMO, channel measurements, beamforming, handover, RF.
The authors are with Qualcomm Corporate R&D, Bridgewater, NJ 08807, USA and Qualcomm Corporate R&D, San Diego, CA 92121, USA.

I. INTRODUCTION

Millimeter-wave multi-input multi-output (mm-Wave MIMO) systems are one of the candidates for the physical layer (PHY) in the currently ongoing standardization efforts for the Fifth Generation (5G) air link specifications. Over the last few years, there has been an exploding interest in mm-Wave systems (see, e.g., [1]–[3] and references therein). Most of these works
either present expectations from mm-Wave systems, or use-case analysis, or channel measurements, or performance studies assuming certain PHY abstractions. With this backdrop, the focus
of this paper is on understanding the implementation of mm-Wave systems in practice and on presenting a complete picture, from channel measurements to system design implications to
prototype performance.
Towards this goal, we first study electromagnetic propagation in the mm-Wave regime using
a number of parallel measurements at the same transmit-receive location pairs in different use-cases (indoor office, shopping mall and outdoor) at 2.9, 29 and 61 GHz. Such a parallel set of
measurements minimizes the number of confounding factors and allows a direct comparison of
propagation across different carrier frequencies. A limited number of such measurement studies
at the same location pairs are available in the literature. While our studies show that losses at
mm-Wave frequencies are typically higher than with sub-6 GHz systems, these losses are not
substantially worse. Nevertheless, additional losses due to hand/human blockages and material
penetration can be significantly detrimental to the link margins and are expected to play the role
of a serious differentiator for a mm-Wave chipset solution.
The above observations motivate the use of beamforming to overcome these losses. We then
briefly describe the radio frequency (RF) component challenges that impact the design of a
practical mm-Wave chipset. The use of near-optimal beamforming structures needed to improve
the link margin in both single- and multi-user contexts in a practical mm-Wave chipset is
not viable due to the cost and complexity challenges of RF components. Thus,
secondly, to overcome these constraints, we are motivated to use a certain subset of directional
beamformers for MIMO transmissions. In addition to low complexity, the proposed approaches
also enjoy advantages such as robustness to phase changes across paths (an issue of immense
importance for small wavelength systems such as mm-Wave) and a simpler system design for
initial user equipment (UE) discovery and subsequent beam refinement.
With this background, thirdly, we describe our prototype system operating at 28 GHz that
realizes a robust directional beamforming solution leading to high data-rates on both the downlink
and uplink. We describe various experiments performed with the prototype in both outdoor and
indoor scenarios and provide unique insights into the operations of a practical mm-Wave system.
The key elements tested by these experiments include robustness of mm-Wave links in nonline-of-sight (NLOS) settings via beam switching in response to mobility and blockage, interbase-station handover, and interference management. Comparable prototypes in the literature
such as [4] mostly emphasize peak throughputs in line-of-sight (LOS) settings and do not
provide lessons applicable for practical deployments. Other prototypes of importance in practical
deployments include the CAP-MIMO architecture [5] and [6]–[8] that apply lens array techniques
for steering multiple beams from the base-station.
II. MILLIMETER-WAVE CHANNEL MEASUREMENTS AND SYSTEM IMPLICATIONS
The focus of this section is on reporting mm-Wave channel measurements at 2.9, 29 and
61 GHz, which are representative of the three most-likely commercial offerings in the 2018-20
time-frame and likely to be compared against each other in terms of performance: sub-6 GHz
5G-NR, mm-Wave MIMO 5G-NR, and 802.11ad/ax/ay. While a number of mm-Wave channel
measurement campaigns have been reported in the literature, the novelty of this work is on
channel propagation comparisons across these three carrier frequencies at identical transmit-receive location pairs in different use-cases. Such studies are important as they eliminate most
confounding factors that prevent a direct comparison across frequencies.
Towards this goal, channel sounding is performed with both omni-directional antennas as well
as directional horn antennas. For directional measurements, an azimuthal scan (a 360° view) with a 10 dBi gain horn antenna producing 39 directional slices, and a spherical scan (360° view in azimuth and −30° to 90° view in elevation) with a 20 dBi gain horn antenna producing 331 directional slices are generated. The time-resolution of the channel sounder is approximately
5 ns. An Agilent E8267D signal generator is used to generate a pseudo-noise (PN) sequence at
a chip rate of 100 Mc/s, which is then used to sound the channel. At the receiver, an Agilent
N9030A signal analyzer is used for acquisition and the PN chip sequence is despread using a
sampler at 200 MHz and with 16 bit resolution.
These sounding measurements are obtained for the indoor office setting (two floors of the
Qualcomm building in Bridgewater, NJ), indoor shopping mall setting (Bridgewater Commons
Mall, Bridgewater, NJ), outdoor settings (open areas outside the Qualcomm building), including
the suburban setting (residential location in Bedminster, NJ) and the Urban Micro setting (New
Brunswick, NJ), etc., as well as emulation of stadium deployments. These measurement scenarios
are representative of typical applications considered for future deployment efforts.
While macroscopic channel properties such as path loss are studied with omni-directional
scans, other properties such as delay spread, path diversity, etc., are studied with both omni-directional and directional scans. Processing of these measurements leads to the following observations and implications on system design for mm-Wave channels. More technical details on
these studies can be found in [9].
Path Loss: Measurements in different deployments are used to study macroscopic properties of
LOS and NLOS links. We use a frequency-dependent path loss model with a close-in free space
reference distance of d0 = 1 m where the path loss (in dB) at a distance of d m is modeled as
(1)
PL(d) = PL(d0 ) + α · 10 log10 (d/d0 ) + X,   X ∼ N(0, σX² ).
The path loss exponents (PLEs), denoted as α, and the shadowing factors (σX ) for different
types of links (LOS/NLOS) in different use-cases are learned with a least-squares fitting of the
model in (1) to the measured data. These parameters are listed in Table I and they show that
PLEs and shadowing factors for NLOS links generally increase with frequency. For LOS links,
PLEs are generally smaller than those for NLOS links and in indoor settings can be smaller
than the freespace PLE of 2. A plausible explanation for this observation is the waveguide effect,
where long enclosures such as walkways/corridors, dropped/false ceilings, etc., tend to propagate
electromagnetic energy via alternate modes/more reflective paths decreasing the PLE. Shadowing
factors show inconsistent behavior with frequency. From Table I, we conclude that while mm-Wave systems experience higher path losses than sub-6 GHz systems, the differential impact of
the PLEs and shadowing factors on link margin at higher carrier frequencies is not dramatic.
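For illustration, a minimal sketch of the least-squares fit behind Table I follows; the measured distances and path losses shown are hypothetical placeholders, and the fit constrains the intercept to the free-space reference loss at d0 = 1 m, which is one common convention for the close-in model.

```python
import numpy as np

# Sketch: least-squares fit of the close-in path loss model (1),
# PL(d) = PL(d0) + 10*alpha*log10(d/d0) + X, with d0 = 1 m.
# d (m) and pl_db are hypothetical measured link data.
d = np.array([5., 12., 30., 55., 90., 140.])
pl_db = np.array([75.2, 84.1, 92.7, 99.8, 104.5, 110.3])

pl_d0 = 61.4                     # free-space reference at d0 = 1 m (assumed)
x = 10*np.log10(d)               # regressor: 10*log10(d/d0)
alpha = np.sum(x*(pl_db - pl_d0))/np.sum(x**2)   # LS slope through PL(d0)
sigma_x = np.std(pl_db - pl_d0 - alpha*x)        # shadowing std (dB)
print(alpha, sigma_x)
```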
Delay Spread: The delay spread of the channel is an important metric to understand the
system overhead (in terms of the cyclic prefix length for a multi-carrier design). In this context,
frequency-dependent delay spreads are observed in NLOS settings both with omni-directional
and directional antennas. While omni-directional delay spreads are small in most scenarios (for
example, the essential spread is on the order of 30-50 ns in indoor office, 50-90 ns in indoor
shopping mall and 150-300 ns in outdoor street canyon settings), there are also scenarios where a
significantly large delay spread is seen (e.g., even up to 800 ns in outdoor open square settings).
These extreme scenarios can be explained with the radar cross-section effect, where seemingly
small objects that do not participate in electromagnetic propagation at lower frequencies show
TABLE I
PATH LOSS MODEL PARAMETERS IN DIFFERENT USE-CASES

                        Indoor office                       Indoor shopping mall
                   LOS                NLOS                LOS                NLOS
fc (GHz)       2.9   29    61     2.9   29    61      2.9   29    61     2.9   29    61
PLE (α)       1.62  1.46  1.59   3.08  3.46  4.17    1.93  1.98  2.05   2.61  2.76  2.98
σX (in dB)    5.49  4.25  4.81   6.60  8.31 13.83    5.32  3.56  4.29   9.08  9.47 12.86

                   Urban Micro street canyon             Outdoor open areas
                   LOS                NLOS                LOS                NLOS
fc (GHz)       2.9   29    61     2.9   29    61      2.9   29    61     2.9   29    61
PLE (α)       2.18  2.19  2.22   2.95  3.07  3.27    2.41  2.73  2.83   3.01  3.39  3.42
σX (in dB)    4.41  4.37  4.84   7.82  8.16 10.70    4.60  5.73  6.78   4.00  8.03  1.97
up at higher frequencies. Such behavior happens as the wavelength approaches the roughness
of surfaces (e.g., walls, light poles, etc.). Supporting these extremes without incurring a high
fixed system overhead is important. In most indoor scenarios, sparse scattering implies that the
beamformed delay spread is comparable with the omni-directional delay spread. While the same
trend holds for most scenarios in the outdoor setting, the beamformed delay spread can be
significantly smaller than the omni-directional delay spread for the tail values.
Blockage: An important feature that makes mm-Wave propagation significantly different from
propagation at sub-6 GHz frequencies is that a large area of the UE can be easily covered
and blocked by parts of the human body, other humans, vehicles, foliage, etc. Additional link
impairments due to these blockers (not seen at sub-6 GHz) are observed at mm-Wave frequencies
and the practical viability of mm-Wave systems is more dependent on blockage than the path
losses reported in Table I. For example, a typical UE design with multiple linear subarray units of
four antenna elements (on the Top and Long edges of the device) is presented for the Landscape
mode in Fig. 1(a). Corresponding to this UE antenna design, Figs. 1(b)-(d) show the received
gain in azimuth and elevation in Freespace, and with hand blocking in the Landscape and Portrait
modes, respectively. While almost the entire sphere is covered around the UE in the Freespace
mode, the presence of the hand leads to an angular blockage region of 160° × 75° (blue areas) in the
Landscape mode. The region in blue stretches from behind the palm to the thumb. In this setting,
the Top edge subarray is not useful and the Long edge subarray allows good signal reception.
Fig. 1. (a) A typical UE design with multiple subarrays (Top and Long edges) in Landscape mode. Received gain as a function of azimuth and elevation angles for the UE design at 28 GHz in (b) Freespace mode, and with hand blocking in (c) Landscape and (d) Portrait modes.
Furthermore, in the Portrait mode, a blockage region of 120° × 80° (blue areas) is seen and the
Long edge subarray does not play an important role in signal reception as it is blocked with
the fingers resulting in significantly deteriorated antenna efficiencies. However, the Top edge
subarray is not affected much with the presence of the hand. These observations suggest that
subarray diversity in UE design is critical in overcoming near-field obstructions as well as to
ensure coverage at the UE side over the entire sphere.
Penetration Loss: For the outdoor-to-indoor coverage scenario, material measurements show
that penetration loss generally increases with frequency. Further, periodic notches that are several GHz wide, often with more than 30 dB of loss, are seen. These losses are attributed to
changing material properties with frequency due to which signals constructively/destructively
interfere from different surfaces that make the material. While a similar trend is observed across
these experiments for both polarizations and different choices of incidence angles, the precise
loss at a frequency and the depth of the notches depend on the material, incidence angle and
polarization. This observation motivates the need for designs that support both frequency and
spatial diversity.
Path Diversity: Low pre-beamforming signal-to-noise ratios (SNRs) are typically the norm when
mm-Wave path losses and additional blockage/penetration losses are incorporated with typical
equivalent isotropically radiated power (EIRP) constraints. Thus, a viable system design has to
overcome these huge losses with beamforming array gains from the packing of a large number
of antennas within the same array aperture [3]–[5], [10]–[13]. In this context, a small number
(at most 4-6) of well-spread-out (in direction) clusters/paths is typically observed, rendering multi-mode/multi-layer signaling viable. The viability of multiple modes suggests the use of both
single-user MIMO strategies for increasing the peak rate as well as multi-user MIMO strategies
for increasing the sum-rate [11], [14]. Such modes also offer robustness against blockages via
intra-base-station beam switching. In addition to the likelihood of multiple viable paths to a
certain base-station, there are also viable paths to multiple base-stations. These observations
suggest the criticality of a dense deployment of base-stations for robust mm-Wave operation and
inter-base-station handover to leverage these paths. Integrated access and backhaul operation is
highly desirable for small cell deployment, which also leads to the need to study inter-base-station
interference management issues more carefully.
III. EXPERIMENTAL PROTOTYPE DESCRIPTION
Motivated by the system design intuition developed in the previous section, we now describe
our experimental prototype system operating in a time-division duplexing framework at 28 GHz.
In this setup, baseband analog in-phase and quadrature (IQ) signals are routed to/from the modem
to an IQ modulator/demodulator at 2.75 GHz center frequency. The 2.75 GHz intermediate
frequency signal is translated to 28 GHz using a 25.25 GHz tunable local oscillator (with a 100
MHz step size). The bandwidth supported is 240 MHz at a sampling rate of 240 Msps. ADCs
with an effective number of bits (ENOB) resolution of 8 bits are used at both ends. At the base-station end, the 28 GHz signal is routed to a 16 × 8 element planar array (a waveguide design)
and analog beamforming is applied using tunable four bit phase shifters and gain controllers.
The prototype uses a transmit power dynamic range of 19 dB with a maximum EIRP of 55 dBm.
As motivated earlier, to overcome blockage, the UE end is made of four selectable subarrays,
each a four element phased array of either dipoles or patches as in Fig. 1(a).
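A back-of-the-envelope link-budget sketch, using the prototype parameters quoted above together with a Table I path loss exponent, may help fix ideas; the remaining values (distance, noise figure, ideal array gains) are illustrative assumptions.

```python
import numpy as np

# Rough link-margin sketch with Table I parameters (UMi NLOS PLE at 29 GHz)
# and the prototype's quoted numbers; all other values are illustrative.
eirp_dbm  = 55.0                      # maximum EIRP (includes TX array gain)
fc_hz, d  = 28e9, 100.0               # carrier and hypothetical range (m)
pl_d0     = 20*np.log10(4*np.pi*fc_hz/3e8)   # free-space loss at d0 = 1 m
alpha     = 3.07                              # UMi street canyon NLOS PLE
pl_db     = pl_d0 + 10*alpha*np.log10(d)
ue_gain   = 10*np.log10(4)            # ideal 4-element UE subarray gain (dB)
noise_dbm = -174 + 10*np.log10(240e6) + 7     # 240 MHz BW, 7 dB NF assumed
print(eirp_dbm - pl_db + ue_gain - noise_dbm) # post-beamforming SNR (dB)
```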
With beamforming being a central component in meeting the mm-Wave link budget, RF
component and architecture-driven challenges (e.g., cost, power, complexity, form factor, regulatory constraints, etc.) play a principal role in determining practically viable hybrid beamforming
solutions. In this context, the sparse and directional channel structure suggests the use of a certain
subset of directional beamforming strategies along the dominant clusters/paths at both ends [5],
[12]–[14] relative to optimal beamforming along the dominant eigen-modes/singular vectors of
the channel matrix. Directional beamforming structures offer robustness to small perturbations
in the channel matrix and also allow a tradeoff between peak beamforming gain and initial UE
discovery latency (with minimal loss relative to the optimal schemes) via the construction of a
hierarchy of directional codebooks [13]. The directional channel structure can be leveraged for
scheduling and can also be generalized to multi-user beamforming design with the following
solutions: i) beam steering to each UE (with complete agnosticism of the interference caused
to other users), ii) zeroforcing (where each user’s beamforming vector steers a beam null to the
other users), and iii) generalized eigenvector precoding (that performs a weighted combination
of beam steering and beam nulling). These solutions can result in substantial performance
improvement over single-user solutions. The readers are referred to [14] for technical details
on these constructions as well as performance studies in outdoor and indoor deployments.
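As a rough illustration of directional codebook beamforming, consider the following sketch; the DFT-style codebook, the array size and the toy channel are illustrative stand-ins for the broadened codebooks of [13], not the prototype's actual design.

```python
import numpy as np

# Sketch: a simple DFT-style directional beam codebook for an M-element
# half-wavelength ULA, plus exhaustive beam selection by beamforming gain.
M, n_beams = 16, 16
# Beams uniformly spaced in sin-space (half-step offset to avoid endfire)
angles = np.arcsin(np.linspace(-1, 1, n_beams, endpoint=False) + 1/n_beams)
codebook = np.exp(1j*np.pi*np.outer(np.arange(M), np.sin(angles)))/np.sqrt(M)

# Toy LOS channel arriving from 17 degrees; pick the best codeword
h = np.exp(1j*np.pi*np.arange(M)*np.sin(np.deg2rad(17)))
best = np.argmax(np.abs(codebook.conj().T @ h))
print(np.rad2deg(angles[best]))   # selected beam direction, near 17 degrees
```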
Motivated by the robustness of directional beamformers, this solution is implemented in the
prototype by leveraging the beam broadening principles described in [13, Sec. IVB] to construct
static analog beam codebooks to be used at both the base-station and UE ends. The experimental
system implements mm-Wave beamforming by initially determining the best beam direction to be
used at either end. After this, the system continuously evaluates all possible beam directions from
all available transmitters and switches to the best beam and the best transmitter (handover) with
little to no performance degradation. In addition, the system adjusts its parameters to optimize
for the link type (LOS/NLOS) by SNR control, allowing up to 64-QAM operation.
The seamless beam switching and capability to maintain high SNR enable the experimental
system to realize high rates (on both the downlink and uplink) and robustly maintain these rates
despite channel variations. That said, the main focus behind the experimental system is not the
optimization of data-rates, but to study the various fundamental difficulties in realizing mm-Wave
systems in practice, especially with NLOS links. In the next two sections, we describe some
experimental results illustrating the versatility of the prototype.
Fig. 2. (a) Elevated gNB and UE inside the testing vehicle used in outdoor testing. (b) Aerial layout of the testing range including gNB locations, achieved rates and important features in rates as the UE is driven over the trajectory.
IV. OUTDOOR MOBILITY STUDIES
An outdoor mobility testing experiment (see Fig. 2(a)) is conducted in the parking spaces
adjacent to the Qualcomm building (see aerial layout in Fig. 2(b)). One base-station is mounted
on a mast elevated 14 feet in a testing vehicle and located in the parking lot (marked gNB1 ),
and another base-station is mounted on the sixth floor of the Qualcomm building facing the window and elevated 5 feet from the ground (marked gNB2). gNB1 and gNB2 have 90° and 110° downtilts, respectively. The UE is mounted on the dashboard of another testing vehicle and
testing is done by driving through the parking spaces at 10-15 mph speeds. Fig. 2(b) plots the
achieved throughput (in Mbps) as a function of the driving trajectory. From this plot, we note that
a high throughput close to 600 Mbps is realized (Scenario 1) when gNB2 has an unobstructed
LOS path to the UE. As the UE moves over the trajectory (Scenario 2), seamless handover is
realized between gNB2 and gNB1 . Further, as the UE is driven on this trajectory, the LOS path
from gNB1 is obstructed and communication is realized through a reflected path first (Scenario
3) and other paths subsequently (Scenario 4), leading to a drop in throughput (gradations in the heat map). This experiment illustrates the prototype’s capability to maintain mm-Wave links
robustly in outdoor scenarios with intra- and inter-base-station beam switching and handover.
Fig. 3.
Left: Outdoor aerial layout of experiment including key geographical features. Right: Achieved rates as a function of
outdoor trajectory and time as well as beam indices at gNB and subarray indices at UE.
In a second study illustrated in Fig. 3, an outdoor mobility experiment around the Qualcomm
building is conducted. This environment is mostly a tree-lined open square-type setting with
some street canyon-type features. Specific points-of-interest include parking lots and structures
with bordering buildings having glass window panes, foliage (a mix of pine and spruce trees),
a large shopping mall in close vicinity (Bridgewater Commons Mall), highways (US Rt. 202),
etc. For the specific experiment reported here, a testing vehicle is driven for a period of ≈ 55
seconds through the exit lane of US Rt. 202 at a speed of 20-30 mph, onto the ramp and into a
side street enveloping the Qualcomm building (see trajectory in red in Fig. 3, Left side). In terms
of notable observations from this experiment, a base-station mounted on a raised platform at 24
feet with a 90° downtilt (marked gNB1) offers a LOS path to the UE as it starts exiting from Rt.
202 (throughput of 375 Mbps). However, as the UE traverses the exit lane, a small hillock-like
feature blocks the LOS path, leading to a coverage hole that cannot be bridged with any reasonable
NLOS path from this base-station and a significant deterioration in rate. This observation points
to the necessity of sufficient base-station density to enhance mm-Wave coverage under blockages.
For example, a base-station on the opposite side of the highway (Rt. 202) could have provided
coverage to the UE over this coverage hole. As the UE crosses this feature, a beam recovery
process recovers the LOS path albeit with a different subarray offering complementary coverage
in the LOS direction leading to an improved throughput of 375 Mbps. Further, as the testing
vehicle enters the ramp, blockage loss due to foliage results in a throughput drop (125-250
Mbps) and two subarrays turn out to be useful over this period. As the UE exits the ramp onto
the adjoining street, the LOS path is recovered leading to a throughput of over 375 Mbps. The
distance between the gNB and UE varies from 50-100 m over the whole experiment.
Fig. 4. (a) Typical UE testing with pedestrian mobility in an indoor scenario. (b) Indoor layout and building plan along with two gNB locations and coverage areas. (c) Achieved rate along with key features in the rate trajectory over a certain indoor segment.
V. INDOOR MOBILITY STUDIES
Complementary to the above discussion, an indoor mobility study (see Fig. 4(a)) on the third
floor of the Qualcomm building (see building layout in Fig. 4(b)) is now described. The floor
plan is mostly comprised of cubicles along the edge with walled offices and conference rooms
towards the center. Two base-stations (marked gNB1 and gNB2 ) are placed at the far corners of
the floor plan and the UE is moved at pedestrian speeds through the layout. From our studies,
these two base-stations are sufficient to guarantee adequate coverage with at least 1 bps/Hz
spectral efficiency as the UE is traversed through the floor plan (coverage areas with each gNB
marked in red and blue of Fig. 4(b), respectively). Nevertheless, the coverage area corresponding
to each gNB does not lead to a well-defined cell boundary and is clearly dependent on the
environment, material properties, etc. This observation points to necessity of further system
coverage studies with irregular cell boundaries and base-station density to overcome coverage
holes in such scenarios. As a particular illustration of this study, the throughput achieved with
an ≈ 90 second trajectory is illustrated in Fig. 4(c) which illustrates both link drops due to
penetration loss through obstructions (concrete, elevator area, wall, metallic material, etc.) and
link variability due to changing material properties. Such link drops can be mitigated with
enhanced beamforming, fast subarray switching, network densification, etc.
Fig. 5. (a) A successful handover from eNB1 to eNB2 in an indoor setup. (b) Layout of another indoor coverage experiment with downlink/uplink rates using the proposed beamforming solutions.
More indoor mobility experiments can be seen in the video demonstration at [15]. In one
experiment, illustrated in Fig. 5(a) and seen in [15], a LOS link is initially established between
the transmitter (labeled eNB1 ) and receiver (labeled UE) by beam scanning at both ends. With a
high SNR from the LOS link, a high rate is established on both the downlink and uplink. As the
UE is moved across the long edge of the hallway (see relative positions of eNB1 , eNB2 and UE
in the bottom left inset of Fig. 5(a)), the link connecting eNB1 with the UE becomes NLOS with
increasing path loss as the distance between eNB1 and UE increases. As the UE turns the corner
at the short edge, the NLOS1 link between eNB2 and UE becomes better than the NLOS link
between eNB1 and UE and a successful handover (illustrating robustness to blockage) happens
from eNB1 to eNB2 , as shown in Fig. 5(a).
Figure 5(b) illustrates the layout of yet another indoor experiment where the UE is moved from
an initial position (marked “1” in a white circle) towards its final destination (marked “1” in an
orange square) via the dashed white-line trajectory. As the UE is moved over the trajectory, the
achieved downlink and uplink rates show many disruptions as the connected path is blocked by
the pillars in the layout (marked as gray squares). For example, when the pillar blocks the LOS
path, connection is established to the dominant NLOS path (again through reflections) leading
to the first disruption in rate(s). Connection is re-established to the LOS path as the UE moves
past the pillar until the next pillar is reached where the second disruption happens. The third
disruption corresponds to the switch from the re-established LOS path to the dominant NLOS
path as the UE turns the corner. Thus, these examples illustrate the robustness of our proposed
beamforming solutions to blockages in real deployments.
VI. CONCLUDING REMARKS
A. Summary
This article provides a brief overview of mm-Wave channel measurements and the implications of these measurements for system design. An immediate consequence of the blockage and penetration losses inherent and specific to mm-Wave systems is the poor link margin. This motivates the need to reap spatial array gains via the use of near-optimal beamforming solutions over large antenna arrays. Prominent challenges towards this goal include the limited range and performance of mm-Wave components, as well as the robustness of the beamforming solution to spatio-temporal channel variations and its impact on the overall system design. Further, cost considerations may allow only a small number of RF chains at either end, and the beamforming solutions should thus be adaptive to changes in the RF architecture(s). Towards this goal, directional beamforming approaches can be used as robust, low-complexity, near-optimal solutions that help overcome the high propagation losses at mm-Wave frequencies. Such solutions are demonstrated with our experimental prototype, illustrating
the viability of mm-Wave systems for high data-rate requirements. In particular, the prototype
system demonstrates: i) beamforming and beam scanning, ii) outdoor coverage and mobility,
iii) resilience to blockage of paths, iv) inter-base-station handover, v) indoor mobility, and vi)
interference management in both outdoor and indoor settings.
B. Future Research Directions
Important issues that require further study include: i) more exhaustive studies on realistic channel modeling for mm-Wave propagation, ii) models for spatio-temporal channel variations, iii) models for impairments such as hand/body/human blockage, phase noise, etc., iv) advanced MIMO techniques for both single- and multi-user multi-carrier transmissions, v) the impact of mm-Wave channel properties on mm-Wave system/network design issues such as coverage and network latency tradeoffs, mm-Wave handover, interworking with sub-6 GHz bands and applications, and integrated access-backhaul solutions, vi) advanced MIMO RF architectures such as [5]-[8] for prototype studies and real deployments, and vii) RF tradeoffs in form-factor UE design, etc.
REFERENCES
[1] F. Boccardi et al., “Five disruptive technology directions for 5G,” IEEE Commun. Magaz., vol. 52, no. 2, Feb. 2014, pp.
74–80.
[2] T. S. Rappaport et al., “Millimeter wave mobile communications for 5G cellular: It will work!,” IEEE Access, vol. 1,
2013, pp. 335–349.
[3] R. W. Heath, Jr. et al., “An overview of signal processing techniques for millimeter wave MIMO systems,” IEEE Journ.
Sel. Topics in Sig. Proc., vol. 10, no. 3, Apr. 2016, pp. 436–453.
[4] W. Roh et al., “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: Theoretical
feasibility and prototype results,” IEEE Commun. Magaz., vol. 52, no. 2, Feb. 2014, pp. 106–113.
[5] J. Brady, N. Behdad, and A. M. Sayeed, “Beamspace MIMO for millimeter-wave communications: System architecture,
modeling, analysis and measurements,” IEEE Trans. Ant. Propagat., vol. 61, no. 7, July 2013, pp. 3814–3827.
[6] Y. Zeng and R. Zhang, “Millimeter wave MIMO with lens antenna array: A new path division multiplexing paradigm,”
IEEE Trans. Commun., vol. 64, no. 4, Apr. 2016, pp. 1557–1571.
[7] J. A. Laurinaho et al., “2-D beam-steerable integrated lens antenna system for 5G E-band access and backhaul,” IEEE
Trans. on Microwave Theory and Tech., vol. 64, no. 7, July 2016, pp. 2244–2255.
[8] T. Kwon et al., “RF lens-embedded massive MIMO systems: Fabrication issues and codebook design,” IEEE Trans. on
Microwave Theory and Tech., vol. 64, no. 7, July 2016, pp. 2256–2271.
[9] V. Raghavan et al., “Millimeter wave channel measurements and implications for PHY layer design,” IEEE Trans. Ant.
Propagat., vol. 65, no. 12, Dec. 2017.
[10] S. Hur et al., “Millimeter wave beamforming for wireless backhaul and access in small cell networks,” IEEE Trans.
Commun., vol. 61, no. 10, Oct. 2014, pp. 4391–4403.
[11] S. Sun et al., “MIMO for millimeter wave wireless communications: Beamforming, spatial multiplexing, or both?,” IEEE
Commun. Magaz., vol. 52, no. 12, Dec. 2014, pp. 110–121.
[12] O. El Ayach et al., “Spatially sparse precoding in millimeter wave MIMO systems,” IEEE Trans. Wireless Commun., vol.
13, no. 3, Mar. 2014, pp. 1499–1513.
[13] V. Raghavan et al., “Beamforming tradeoffs for initial UE discovery in millimeter-wave MIMO systems,” IEEE Journ.
Sel. Topics in Sig. Proc., vol. 10, no. 3, Apr. 2016, pp. 543–559.
[14] V. Raghavan et al., “Single-user vs. multi-user precoding for millimeter wave MIMO systems,” IEEE Journ. Sel. Areas
in Commun., vol. 35, no. 6, June 2017, pp. 1387–1401.
[15] Qualcomm, "5G mm-Wave demonstration," https://www.qualcomm.com/videos/5g-mmwave-demonstration, accessed Sept. 29, 2017.
Preprint submitted to Energy and Buildings, November 2013
Pseudo Dynamic Transitional Modeling of Building Heating Energy Demand Using Artificial Neural Network

Subodh Paudel a,b,c, Mohamed Elmtiri b, Wil L. Kling c, Olivier Le Corre a*, Bruno Lacarrière a

a Department of Energy System and Environment, Ecole des Mines, Nantes, GEPEA, CNRS, UMR 6144, France
b Environnement Recherche et Innovation, Veolia, France
c Department of Electrical Engineering, Technische Universiteit Eindhoven, Netherlands

*Corresponding author. Tel.: +33 2 51 85 82 57
E-mail: [email protected]
Abstract
This paper presents a building heating demand prediction model that accounts for occupancy profile and operational heating power level characteristics over a short time horizon (a couple of days), using an artificial neural network. In addition, a novel pseudo dynamic transitional model is introduced, which considers the time dependent attributes of the operational power level characteristics, and its effect on the overall model performance is outlined. The pseudo dynamic model is applied to a case study of a French institutional building, and its results are compared with static and other pseudo dynamic neural network models. The results show coefficients of correlation for the static and pseudo dynamic neural network models of 0.82 and 0.89 (with an energy consumption error of 0.02%) during the learning phase, and of 0.61 and 0.85 during the prediction phase, respectively. Further, an orthogonal array design is applied to the pseudo dynamic model to check the schedule of the occupancy profile and the operational heating power level characteristics. The results yield a new schedule and provide a robust design for the pseudo dynamic model. Owing to its short prediction horizon, the model finds application for Energy Service Companies (ESCOs) in managing the heating load for dynamic control of the heat production system.

Keywords: Building Energy Prediction; Short term building energy forecasting; Operational Heating Characteristics; Occupancy Profile; Artificial Neural Network; Orthogonal Arrays
1. Introduction
The global concerns of climate change and the regulation of energy-related emissions have drawn increasing attention from researchers and industry to the design and implementation of energy systems for low energy buildings. According to IEA statistics [1], total global energy use accounts for around 7200 Mtoe (million tonnes of oil equivalent). Residential and commercial buildings consume 40% of final energy use in the world, and in European countries 76% of building energy consumption goes towards thermal comfort. Small deviations in the design parameters of buildings can have a large adverse effect on energy efficiency and, additionally, result in large emissions from the buildings. It is estimated that improving the energy efficiency of buildings in the European Union by 20% would save at least 60 billion Euro annually [2]. Research is therefore very active in driving towards sustainable/low energy buildings. In order to accomplish this and to ensure thermal comfort, it is essential to know the energy flows and the energy demand of buildings for the control of heating and cooling energy production from plant systems. The energy demand of a building system thus depends on the physical and geometrical parameters of the building, the operational characteristics of the heating and cooling energy plant systems, weather conditions, appliance characteristics and internal gains.
There are various approaches to predict building energy demand, based on physical methods and on data-driven methods (statistical and regression methods and artificial intelligence methods), as mentioned by Zhao et al. [3]. Physical methods are based on physical engineering principles and use thermodynamics and heat transfer characteristics to determine the energy demand of the building. Numerous physical simulation tools have been developed, such as EnergyPlus [4], ESP-r [5], IBPT [6], SIMBAD [7], TRNSYS [8] and CARNOT [9], to compute the building energy demand. A simplified physical model based on physical, geometrical, climatic and occupant models was presented by Duanmu et al. [10] to reduce the complexity of collecting the large amount of physical data required by simulation tools. Other possible approaches for building energy prediction are semi-physical models such as the response factor method, the transfer function method, the frequency analysis method and the lumped method [11]. Though the methodologies adopted to estimate the energy demand of buildings differ between physical and semi-physical models, both are highly parameterized. In addition, the physical parameters of buildings are not always known, and sometimes data are even missing. Moreover, these models are computationally too expensive for Energy Service Companies (ESCOs) to manage heating and cooling loads for control applications.
Other approaches to predict building energy demand with limited physical parameters are data-driven methods, which depend strongly on measured historical data. Statistical and regression methods seem more feasible for predicting building energy demand with limited physical parameters. Statistical approaches have been used by Girardin et al. [12] to determine the best model parameters by fitting actual data. Different approaches (physical and behavioural characteristics based on statistical data) were presented by Yao et al. [13] to bridge the gap between semi-physical and statistical methods. In their work, the statistical daily load profile was grounded on energy consumption per capita and a human behaviour factor, while the semi-physical method was based on a thermal resistance-capacitance network. Nevertheless, these statistical models use linear relationships between input and output variables to evaluate the building parameters and are not adapted to non-linear energy demand behaviour. Regression models [14-15] have also been used to predict energy demand, but they are not accurate enough for short time horizon prediction (a couple of days) with hourly (or few-minute) sampling times. In order to find the best fit to the actual data, this kind of model requires significant effort and time.
In recent years, there has been a growth of research in the field of artificial intelligence (AI), such as artificial neural networks [3, 16] and support vector machines [3, 17-18]. These methods are known for solving the complex non-linear functions of energy demand models with limited physical parameters. The neural network method has shown better performance than physical, statistical and regression methods. Several authors [19-20] used static neural networks to predict the energy demand of buildings and compared the results with physical models. For instance, Kalogirou et al. [19] used climate variables (mean and maximum solar radiation, wind speed, and other parameters such as wall and roof type) coupled with an artificial neural network (ANN) to predict the daily heating and cooling load of buildings. In their work, the results obtained using the ANN were similar to those given by the physical modelling tool TRNSYS. Neto et al. [20] presented a comparison of the neural network approach with the physical simulation tool EnergyPlus, using climate variables such as external dry temperature, relative humidity and solar radiation as input variables to predict the daily consumption of a building; the results showed that the neural network is slightly more accurate than EnergyPlus when compared against real data. The static neural network model proposed by Shilin et al. [21] considers climate variables such as dry bulb temperature, together with information regarding the holiday schedule, to predict the cooling power of residential buildings. Dong et al. [17] used a support vector machine (SVM) to predict monthly building energy consumption from dry bulb temperature, relative humidity and global solar radiation. The performances of the SVM and a neural network model were compared, and the results showed that the SVM was the better predictor.
Various authors [22-26] performed hourly building energy prediction using ANN. Mihalakakou et al. [22] performed hourly prediction for residential buildings with solar radiation and multiple delays of air temperature predictions as input variables. Ekici et al. [23] used building parameters (window transmittivity, building orientation and insulation thickness), and Dombayci [24] used time series information on hour, day and month together with the energy consumption of the previous hour, to predict hourly heating energy consumption. Gonzalez et al. [25] used time series information on hour and day, the current energy consumption and predicted values of temperature as input variables to predict the hourly energy consumption of a building system. Popescu et al. [26] used climate variables (solar radiation, wind speed, outside temperature of the previous 24 hours) and other variables (mass flow rate of hot water over the previous 24 hours and the hot water temperature at the plant outlet) to predict the hourly space heat consumption of buildings. Li et al. [18] used an SVM to predict the hourly cooling load of an office building using climate variables (solar radiation, humidity and outdoor temperature). In their work, the SVM was compared with a static neural network, and the results showed the SVM to be better in terms of model performance. A dynamic neural network method including time dependence was presented by Kato et al. [27] to predict the heating load of a district heating and cooling system based on maximum and minimum air temperature. Kalogirou et al. [28] used a Jordan-Elman recurrent dynamic network to predict the energy consumption of a passive solar building based on seasonal information, masonry thickness and thermal insulation.
For many authors [29-31], the occupancy profile has a significant impact on building energy consumption. Sun et al. [29] mentioned that the occupancy profile period has a significant impact on the initial temperature requirement of the building in the morning. In their work, a reference day (the targeted prediction day, which depends on the previous day and the beginning of the following day based on occupancy and non-occupancy periods) was calculated from the occupancy profile period. In addition to this value, correlated weather data and the prediction errors of the previous 2 hours were used as input variables to predict the hourly cooling load. Yun et al. [30] used an ARX (autoregressive with exogenous, i.e. external, inputs) time- and temperature-indexed model with an occupancy profile to predict the hourly heating and cooling load of a building system, and compared it with the results given by a neural network. The results showed that the occupancy profile contributes significantly to the determination of the autoregressive terms during different time intervals, and produces a corresponding variation in the building heating and cooling energy consumption; the proposed ARX model showed performance similar to the neural network. A sensitivity analysis of heating, cooling, hot water, equipment and lighting energy consumption with respect to the occupancy profile was performed by Azar et al. [31] for office buildings of different sizes. They found that, for small buildings, heating energy consumption has the highest sensitivity compared to cooling, hot water, equipment and lighting. The results also showed that heating energy consumption is highly influenced by the occupancy profile for medium and small buildings during the occupancy period. Moreover, few studies have focused on operational power level characteristics (the schedule of heating and cooling energy used to manage energy production from the plant system). For example, Leung et al. [32] used climate variables and the operational characteristics of electrical power demand (power information of lighting, air-conditioning and office equipment, which implicitly depends on the occupancy schedule of electrical power demand) to predict hourly and daily building cooling load using a neural network.
In conclusion, physical and semi-physical models [4-11], though they give precise predictions of building energy, are highly parameterized and computationally too expensive for ESCOs to manage energy for control applications. Data-driven methods, which depend on measured historical data, are not effective during the early stages of building construction and operation, since measurement data are not available at those stages. When building energy data are available, data-driven methods can be considered provided the measurements are accurate and reliable, as such models can be sensitive to the quality of the measured data; the accuracy of data-driven models thus depends on the measurement data. Data-driven models based on statistical and regression methods [12-15, 26] cannot precisely represent a short time horizon (a couple of days) with hourly (or few-minute) sampling time, though they can predict the energy consumption of buildings with limited physical parameters; they also require significant effort and time to compute the best fit to the actual data. Static neural network models have been used for daily [19-21] and hourly [22-25] prediction of building energy consumption. Though dynamic neural network models [27-28] give better precision than static neural networks, they do not consider the occupancy profile and the operational power level characteristics of the plant system, and are therefore not adapted for ESCOs to manage energy production for control applications. Important features, namely the transition and time dependent attributes of the operational power level characteristics of the plant system, are still missing, although the authors of [29-30] consider the occupancy profile and [32] considers the operational characteristics of electrical power demand. The variables and applications of the models developed in the reviewed literature are summarized in Table 1.
Table 1: Summary of variables and application models in the literature

[Table 1 lists, for each reviewed study (Girardin et al. 2009; Yao et al. 2005; Catalina et al. 2008; Wan et al. 2012; Dong et al. 2005; Kalogirou et al. 2001; Neto et al. 2008; Shilin et al. 2010; Mihalakakou et al. 2002; Ekici et al. 2009; Dombayci 2010; Gonzalez et al. 2005; Popescu et al. 2009; Kato et al. 2008; Kalogirou et al. 2000; Li et al. 2010; Sun et al. 2013; Yun et al. 2012; Leung et al. 2012; Duanmu et al. 2013): the type of model (statistical, regression, thermal and statistical, static NN, dynamic NN, SVM, autoregressive with exogenous inputs, physical); the input variables used (climate variables such as outside and inner temperature, ambient dry bulb and wet bulb temperature, global solar radiation, wind speed and relative humidity; occupancy profile; operational characteristics; other parameters); the forecast horizon (hourly, daily, monthly or annually); and the type of building application.]

Remarks:
1*: Nominal temperature of heating, cooling and hot water system; threshold heating and cooling temperature
2*: Appliances model
3*: Climate index based on principal components
4*: Multiple lagged output predictions of ambient air temperature
5*: Transmittivity, orientation and insulation thickness
6*: Heating degree hour method
7*: Predicted value of temperature, present electricity load, hour and day
8*: Outside temperature and mass flow rate in the previous 24 hours, hot water temperature
9*: Highest and lowest open air temperature
10*: Season, insulation, wall thickness, heat transfer coefficient
11*: Multiple lags of dry bulb temperature and solar radiation
12*: Reference day of each day based on occupancy schedule
13*: Correlated weather data based on reference day and calibrated prediction errors of the previous 2 hours
14*: Occupancy profile represented by space electrical power demand
15*: Clearness of sky, rainfall, cloudiness conditions
16*: Physical and geometrical parameters, hourly cooling load factor
None of these studies has evaluated the transition and time dependent effects of the operational power level characteristics of the heating plant system, or predicted building heating energy demand over a short time horizon (a couple of days). This short term prediction is important to ESCOs for the dynamic control of the heat plant system. This paper bridges the gap between static and dynamic neural network methods using the occupancy profile and the operational power level characteristics of the heating plant system. It introduces a novel pseudo dynamic model which incorporates the time dependent attributes of the operational power level characteristics. Its effects on neural network model performance are compared to a static neural network for building heating demand. Orthogonal arrays are applied to the proposed pseudo dynamic model for a robust design and to confirm the schedule of the occupancy profile and operational heating power level characteristics obtained from the ESCO. The proposed method allows short term horizon prediction (around 4 days with a sampling interval of 15 minutes) to support decisions (e.g. the management of a wood power plant) by ESCOs. The next section describes the methodology, including the scope of the study, the design of the transitional and pseudo dynamic characteristics, the neural network model and the orthogonal arrays. Finally, a case study is presented, and results and discussion are provided to analyze the performance of the different static and pseudo dynamic models, along with the robustness of the proposed pseudo dynamic model for heating demand prediction of the building.
2. Methodology
The development and implementation of the models proposed in this work are based on the collection of real building heating demand, operational heating power level characteristics, climate variables and approximated occupancy profile data (see Appendix A for the selection of relevant input variables). An outline of the methodology presented in this paper is shown in figure (1). The inputs to this methodology are time-series climate and building heating energy data. The other input data are the occupancy profile and the operational heating power level characteristics for working and off-days over 24 hours. The dynamics of the building heating demand, which include the settling and steady state times and are estimated from real building data, are also an input to the methodology. Based on the operational heating power level and the dynamics of the building characteristics, the transitional and pseudo dynamic models are designed. Finally, neural networks for the static and pseudo dynamic models are designed to predict the heating demand over a short time horizon (a couple of days). For the robustness of the pseudo dynamic model, the occupancy profile and operational power level characteristics are analyzed over different time intervals to confirm, through the orthogonal arrays, the occupancy schedule and the operation of the plant system. The pseudo dynamic model after the optimum orthogonal array design is used for the final prediction of the building heating demand. The scope of this study and the details of the transitional and pseudo dynamic model, the neural network model and the orthogonal arrays are described in sections 2.1-2.4.
Figure 1: Outline of the proposed methodology on heating demand prediction
2.1 Scope of Study
The scope of this paper is heating demand prediction over a short time horizon for a large building. The overall objective is to support energy service decisions (e.g. the management of a wood power plant) by ESCOs. The assumptions made in this study are as follows:
1. The winter period is studied.
2. An existing building is considered, and the space heating demand of this building is fed from a heat network to a central substation. Domestic hot water (DHW) is out of scope.
3. The heating demand data were recorded in a data acquisition system database, and thermal comfort inside the building was managed on the basis of this database. Thus, the effects of ventilation and air-conditioning on heating are already included in these data.
4. A simplified occupancy profile of the building is approximated to assist the ESCO in scheduling its heat production system. In such a system, individual occupant behaviour or a precise occupancy profile is not considered. The modeling constraints are thus close to the operational conditions under which ESCOs estimate the heat demand.
5. Wind speed and direction are not taken into consideration. This is due to the fact that present weather variables are taken from the data acquisition system, while future weather values come from an atmospheric modeling system whose mesh size can be 15 km (ARPEGE, see [33]), 10 km (ALADIN, see [34]) or 2.5 km (AROME, see [35]). In such a case, the wind impact on the heating demand prediction of a specific building located inside the mesh is very difficult or even impossible to account for precisely. Furthermore, heating energy demand is highly dependent on the outside temperature, and other climate variables have a less significant impact on heat energy [36].
2.2 Transitional and Pseudo Dynamic Model
The operational heating power level characteristics give the operational features of the plant system; however, they do not convey information about the transition attributes of the operational heating power level, which are illustrated through an example in figure (2). The y-axis represents the set power level of the production system and the x-axis represents the operation schedule.
Figure 2: Operational heating power level characteristics of the plant system (for a day)
In figure (2), the operational power levels are identified by different states and transition levels, and each level has its own significant effect on the operational power level characteristics. A state denotes consistency of the power level from one operation schedule to the next, while a transition denotes a change in power level from one operation schedule to the next in the heat production system. Transition levels 0, 1, 2 and 3 share a similar transitional power level feature in the overall operational performance; however, the power level required for the transition from point 2 to 3, point 4 to 5, point 6 to 7 and point 8 to 9 is different for each level. If the power level of states 0, 1, 2, 3 and 4 in the operational heating power level characteristics is represented by α_uv, then the power required for the transition from point v to point u can be represented as β_uv in the transitional characteristics, as shown in figure (3). Thus, the power level transition in the transitional characteristics corresponding to the operational characteristics can be written as:

β_uv = β_(u−2)(v−2) + 2Δβ |α_uv − α_(u−2)(v−2)|,  ∀ u = 4, 6, 8, ..., v = 3, 5, 7, ...    (1)

with β_uv = β_0 for u = 2, v = 1, where β_0, Δβ and |·| represent the initial power level, the step size of the transition power level and the absolute value respectively. Each level (β_21, β_43, β_65, β_87 and β_109) represents a transitional level and depends on the power level of the operational characteristics.
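To make this concrete, the following minimal Python sketch chains the recurrence of equation (1) over the point pairs of figure (2). The grouping of terms follows the reconstruction above, and the α values used are hypothetical placeholders rather than values taken from the paper (the case study of section 3 sets β_0 = Δβ = 25).

def transitional_level(beta_prev, alpha_uv, alpha_prev, d_beta=25.0):
    # Equation (1), as reconstructed above:
    # beta_uv = beta_(u-2)(v-2) + 2*d_beta*|alpha_uv - alpha_(u-2)(v-2)|
    return beta_prev + 2.0 * d_beta * abs(alpha_uv - alpha_prev)

beta_0 = 25.0                           # initial power level (case-study value)
alpha = [10.0, 90.0, 75.0, 40.0, 10.0]  # hypothetical state power levels (%)
beta = [beta_0]                         # beta_21 = beta_0
for a_prev, a_cur in zip(alpha, alpha[1:]):
    beta.append(transitional_level(beta[-1], a_cur, a_prev))
print(beta)                             # beta_21, beta_43, beta_65, beta_87, beta_109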
Figure 3: Transitional and Pseudo dynamic characteristics (for a day)
The transitional characteristics explicate the power transition levels of the operational characteristics; however, dynamic information about the power level attributes is still lacking. That is, in the operational characteristics of figure (2) the power content of segment 1-1' is not equal to that of 2-2', 3-3' is not equal to 4-4', 5-5' is not equal to 6-6', 7-7' is not equal to 8-8' and 9-9' is not equal to 10-10'. Dynamic transition information is therefore necessary in a model that considers the dynamic characteristics of the building. The simple first order dynamics of the building characteristics are shown in figure (4), where τ represents the time constant.
Figure 4: Dynamics of building characteristics
In figure (4), the delay represents the time it takes for heat from the plant system to reach the building, after which the power is sufficient to cover the heating demand. The time constant τ corresponds to 63% of the power being transferred from the plant system to the building heating system. Another dynamic quantity to incorporate is the settling time (Ts), the time elapsed for the heating power to reach and remain within a specified error band; it is equal to [2τ, 5τ] and behaves almost like the steady state time. The steady state time corresponds to [3τ, 6τ]. Thus τ, the settling time (Ts) and the steady state time (Tsteady) give information about the dynamic characteristics of the heating demand. This dynamic information of the building depends on the transitional attributes of the power level; it is not fully dynamic but pertains to the appearance of dynamic behaviour, hence the name pseudo dynamic. The pseudo dynamic behaviour is thus simply a lag of the transitional attribute information, and further depends on the time constant τ, or on the range between the settling and steady state times of the dynamic building heating characteristics. The simplified pseudo dynamic lag (PDL) is calculated from equation (2), where ts represents the sampling time of the building data and Tu represents the new, unknown time which lies between the settling and steady state times. The precise value of Tu depends on the dynamics of the heating demand, and the pseudo dynamic characteristics can be seen in figure (3), where PDL is the pseudo dynamic lag.

Ts ≤ Tu ≤ Tsteady, where Ts ∈ [2τ, 5τ]; Tsteady ∈ [3τ, 6τ]
PDL ∈ (τ/ts)·[3, 6]    (2)
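For instance, with the case-study values of section 3 (ts = 15 min, Ts ≈ 45 min, Tsteady ≈ 1 h), equation (2) bounds the lag between 3 and 4 samples; a minimal Python sketch of this computation is:

import math

def pdl_bounds(ts, settling, steady):
    # PDL = Tu/ts with Ts <= Tu <= Tsteady (equation (2)), so the lag
    # ranges over [Ts/ts, Tsteady/ts] sampling periods.
    return math.ceil(settling / ts), math.floor(steady / ts)

# Case-study values from section 3: ts = 15 min, Ts ~ 45 min, Tsteady ~ 60 min.
print(pdl_bounds(15, 45, 60))   # -> (3, 4)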
2.3 Neural Network Model
A neural network consists of neurons interconnecting the inputs, the model parameters and the activation functions; each interconnection between neurons carries a model parameter. The input-output mapping in the neural network is based on linear and non-linear activation functions. From the input and target data, the model parameters are adjusted to minimize the error, i.e. the difference between the actual values and the predicted values produced by the network. Learning/training is repeated until there is no significant change in the model parameters, and only then does the training stop. This type of learning is called supervised learning, since the predicted values of the model are guided by the actual values.
There are numerous ANN models, such as the feed-forward multilayer perceptron (MLP), radial basis function (RBF) networks, recurrent networks and self-organizing maps (SOM) [37]. Each of these networks has its own learning algorithm to learn and generalize. In this paper, the MLP is taken as the neural network model, since the pseudo dynamic model is not fully dynamic (in its time behaviour). There are two learning mechanisms in neural networks: sequential learning and batch learning. In sequential learning, the cost function is computed and the model parameters are adjusted after each input is applied to the network. In batch learning, all the inputs are fed to the network before the model parameters are updated, so the parameter adjustment is done at the end of each epoch (one complete presentation of the learning data); batch learning is used in this paper. An MLP network consists of three layers: an input layer, a hidden layer and an output layer, and there can be more than one hidden layer. However, according to Kolmogorov's theorem [38], a single hidden layer is sufficient to map the function provided there are suitably many hidden neurons; a single hidden layer is therefore used in this paper, as shown in figure (5). The hidden layer helps to solve non-linearly separable problems.
Figure 5: Neural Network Architecture
In figure (5), x_i, w_k and y represent the input neurons (i = 0 to q), the hidden neurons (k = 0 to p) and the output neuron respectively. The z^-1 operator signifies a transition lag of 1 and z^-M signifies the transition lag corresponding to the PDL, where the maximum value of M is M_max = PDL, i.e. M ∈ {1, 2, ..., PDL}. The MLP uses a logistic or hyperbolic tangent threshold function in the hidden layer. It has been identified empirically [39] that networks using logistic functions tend to converge more slowly during the learning phase than those using the hyperbolic tangent activation function. In this paper, the hyperbolic tangent activation function is chosen for the hidden layer and a pure linear activation function for the output layer; the hyperbolic tangent function is shown in equation (3), where θ represents the model parameters and T the matrix transpose.
Dividing the input and output data into learning, validation and testing sets improves the generalization of the model. The learning datasets are used to learn the behaviour of the input data and to adjust the model parameters. The validation data are used to limit overfitting: they are not used to adjust the model parameters, but to verify whether an increase in accuracy on the learning dataset actually yields an increase in accuracy on data the network has not learned before. The testing datasets are used to confirm the actual prediction of the neural network model on data previously unknown to it. In this paper, the data are divided into learning, validation and testing sets. Normalization of the input data is also important for fast convergence to the desired performance goal: if the input data are poorly scaled during the learning process, there is a risk of inaccuracy and slow convergence. It is therefore essential to standardize the input data before applying them to the neural network. There are various methods for normalizing the input and output variables; in this paper, normalization to zero mean and unit standard deviation is performed as shown in equation (4), where x̄, X and m represent the mean of the input variable, the normalized vector of the input variable and the number of datasets respectively (the same applies to the output variable).
h(θ, x) = (e^(θᵀx) − e^(−θᵀx)) / (e^(θᵀx) + e^(−θᵀx))    (3)

X_i = (x_i − x̄) / sqrt( (1/(m−1)) · Σ_i (x_i − x̄)² )    (4)
The cost function of the MLP network is computed in equation (5):

J(θ) = (1/(2m)) · Σ_{l=1..m} [y^(l) − y_a^(l)]²    (5)

where y, y_a, l and J(θ) represent the predicted values produced by the network, the actual values of the given datasets, an individual datum among the m datasets and the cost function of the neural network model respectively. Further, the output y of the network is computed as:

y = Σ_{k=1..p} θ_k · h( Σ_{i=0..q} θ_ki · x_i )    (6)
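A minimal Python sketch of equations (3)-(6), assuming numpy and a bias term x_0 = 1 folded into the input vector, is:

import numpy as np

def normalize(x):
    # Zero-mean, unit-standard-deviation scaling of equation (4)
    # (sample standard deviation with the 1/(m-1) factor).
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def mlp_forward(theta_hidden, theta_out, x):
    # Equations (3) and (6): tanh activations in the hidden layer,
    # pure linear output; x is assumed to carry the bias term x_0 = 1.
    h = np.tanh(theta_hidden @ x)    # hidden activations, equation (3)
    return theta_out @ h             # linear output, equation (6)

def cost(y_pred, y_actual):
    # Cost function of equation (5).
    return np.sum((y_pred - y_actual) ** 2) / (2 * len(y_actual))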
In order to update the model parameters towards a higher degree of approximation of the unknown non-linear function during the learning process, different methods exist, such as gradient descent, Newton's method and so on [37]. Gradient descent is too slow to converge, and computing the Hessian matrix in Newton's method is also time-consuming. The Levenberg-Marquardt algorithm, which uses an approximation of the Hessian matrix within the Newton framework, is used in this paper; the model parameter update θ_(t+1) is given as:

θ_(t+1) = θ_t − [LᵀL + µI]^(−1) Lᵀ J(θ)    (7)

In equation (7), the Hessian matrix is approximated as LᵀL and the gradient is computed as LᵀJ(θ), where L is the Jacobian matrix, J(θ) is the vector of cost function values, θ_t is the current model parameter vector, µ is a suitably chosen scalar and I is the identity matrix. The model parameter update thus depends on the cost function and the scalar value µ.
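A minimal Python sketch of one update in the spirit of equation (7) is given below; passing the vector of residuals in place of the cost vector J(θ) is an illustrative simplification of the usual Levenberg-Marquardt formulation, not the authors' implementation.

import numpy as np

def lm_step(theta, jacobian, residuals, mu):
    # One Levenberg-Marquardt update:
    # theta <- theta - (L^T L + mu*I)^(-1) L^T r,
    # where L is the Jacobian of the residuals r with respect to theta.
    L = jacobian
    H = L.T @ L + mu * np.eye(L.shape[1])   # damped Hessian approximation
    g = L.T @ residuals                     # gradient direction
    return theta - np.linalg.solve(H, g)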
2.3.1 Stopping Criteria
There are different criteria for stopping the training of a neural network model. In this paper, the stopping criteria depend on the number of epochs used to learn the network, the performance goal, the maximum range of µ and the maximum number of validation failures. The performance goal (PG) is given as:

PG = 0.01 · Σ_{l=1..m} y_a^(l)    (8)

The maximum number of validation failures (accuracy over the validation datasets) is defined so as to stop the learning process if the accuracy on the learning datasets increases while the validation accuracy stays the same or decreases.
2.3.2 Model Performance
The performance of the models is characterized by the mean square error (MSE) and the coefficient of correlation (R²), calculated as:

MSE = (1/m) · Σ_{l=1..m} [y^(l) − y_a^(l)]²    (9)

R² = 1 − Σ_{l=1..m} [y^(l) − y_a^(l)]² / Σ_{l=1..m} (y_a^(l))²    (10)
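A direct Python rendering of equations (9) and (10), under the reconstruction above, is:

import numpy as np

def mse(y_pred, y_actual):
    # Mean square error of equation (9).
    return np.mean((y_pred - y_actual) ** 2)

def r2(y_pred, y_actual):
    # Coefficient of correlation of equation (10), as reconstructed above.
    return 1.0 - np.sum((y_pred - y_actual) ** 2) / np.sum(y_actual ** 2)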
2.3.3 Degree of Freedom Adjustment
One of the issues with neural network models is over-learning of the network. Increasing the number of hidden neurons can increase model performance, but it will lead the neural network to over-learn. Validation accuracy and degree of freedom (DOF) adjustments are used in this paper to avoid overfitting. The number of learning equations that the model can deliver is given by equation (11), where Le is the number of learning equations of the network and Ly is the length of the output neuron vector (y), in this case equal to 1 since there is only the heating demand load.

Le = m · Ly    (11)

The number of model parameters for a single-hidden-layer MLP neural network is given by equation (12), where Lθ, Lx and Lw represent the number of model parameters, the vector length of the input neurons (x_i) and the vector length of the hidden neurons (w_k) respectively.

Lθ = (Lx + 1) · Lw + (Lw + 1) · Ly    (12)
The DOF of the neural network model is the difference between the number of learning equations and the number of model parameters in the network. It should always be >> 1 and depends on the optimum size of the hidden layer. The DOF and the maximum number of hidden neurons are given by equations (13) and (14), where δ is a scalar constant that depends on the DOF required for the design and Wmax is the maximum number of hidden neurons.

DOF = Le − Lθ    (13)

Wmax ≅ (Le − Ly) / (δ · (Lx + Ly + 1))    (14)

The performance goal, modified according to the degree of freedom adjustment, is given as:

PG = (0.01 · DOF / Le) · Σ_{l=1..m} y_a^(l)    (15)
The model performance measures are also modified based on the degree of freedom adjustment. The modified MSE and R² can be calculated as:

MSE_modified = Le · Σ_{l=1..m} [y^(l) − y_a^(l)]² / (DOF · m)    (16)

R²_modified = 1 − Le · Σ_{l=1..m} [y^(l) − y_a^(l)]² / (DOF · Σ_{l=1..m} (y_a^(l))²)    (17)
For each number of hidden neurons, the optimal MSE_modified and the maximum R²_modified for learning and validation are calculated over several randomly initialized parameter sets. R²_modified and MSE_modified of each model are then evaluated for learning and validation over different numbers of hidden neurons, and on this basis the optimal configuration of the model is identified for the final prediction.
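As an illustration of equations (11)-(14), the short Python sketch below computes the learning-equation count, the parameter count, the DOF and the approximate Wmax. The value m = 19 × 96 is inferred from the 19 learning days at 15-minute sampling described in section 3, and the reconstruction of equation (14) above is assumed.

def model_sizes(m, Lx, Lw, Ly=1, delta=8):
    # Learning equations (11), parameter count (12), DOF (13) and the
    # approximate maximum hidden-neuron count of equation (14).
    Le = m * Ly
    Ltheta = (Lx + 1) * Lw + (Lw + 1) * Ly
    dof = Le - Ltheta
    w_max = (Le - Ly) / (delta * (Lx + Ly + 1))
    return Le, Ltheta, dof, int(w_max)

# Case-study scale: m = 19 days * 96 samples/day; model 5 has Lx = 9, Lw = 13.
print(model_sizes(m=19 * 96, Lx=9, Lw=13))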
2.4 Orthogonal Arrays
It is essential to know whether the schedule of the occupancy profile and operational characteristics obtained from the ESCO is reliable for the robust design of the pseudo dynamic model. The transition periods of the occupancy profile and operational characteristics thus play an important role in the model performance, and if all of these transition periods were considered in the search for the best robust model, the computation would take a long time. Orthogonal arrays (OA) identify the main effects with a minimum number of trials to find the best design. They are applied in various fields, such as mechanical and aerospace engineering [40], electromagnetic propagation [41] and signal processing [42], for robust model design.

An orthogonal array allows the effects of several parameters, each with different levels, to be studied in order to find the best design. It can be defined as a matrix whose columns represent the parameters with the different settings to be studied and whose rows represent the experiments. In orthogonal arrays, the parameters are called factors and the parameter settings are called levels. In general, an orthogonal array is denoted OA(N, k, s, t), where N, k, s and t represent the number of experiments, the number of design parameters, the number of levels and the strength respectively. There are different methods, such as Latin squares [43], juxtaposition [44] and finite geometries [45], for creating orthogonal arrays of different strengths and levels. Orthogonal arrays with different numbers of design parameters, levels and strengths are available from OA databases or libraries; the orthogonal array used in this paper is taken from the OA library [46].
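The following Python sketch illustrates how one row of an orthogonal array can be translated into a candidate schedule, using the level convention of Table 3 (level 1 = t − 15 min, level 2 = t, level 3 = t + 15 min). The nominal transition times are those of the case study in section 3, and the OA row itself would be read from an OA(729, 10, 3, 5) table such as [46]; this is an illustrative sketch, not the authors' implementation.

SHIFT = {1: -15, 2: 0, 3: +15}                   # Table 3 level convention, minutes
NOMINAL = {"f1": 8*60, "f2": 12*60, "f3": 13*60+30, "f4": 17*60+45,
           "f5": 6*60, "f6": 12*60, "f7": 14*60, "f8": 20*60,
           "f9": 6*60, "f10": 20*60}             # nominal transition times, minutes

def experiment_schedule(oa_row):
    # One OA row of ten levels -> shifted transition times (minutes).
    return {f: t + SHIFT[lvl] for (f, t), lvl in zip(NOMINAL.items(), oa_row)}

print(experiment_schedule((2,) * 10))            # all-nominal schedule (experiment 1)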
3. Case Study
The methodology is applied to a case study at Ecole des Mines de Nantes, a French institution. The building has a floor area of 25,000 m². It hosts 600 students and 200 employees and consists of 120 research and administration rooms, 30 class rooms, 3 laboratories and 8 seminar halls. The class rooms have different sizes and can accommodate 18 to 28 students. The 2 big seminar halls can hold 250 students and the 6 small seminar halls can hold 80 students. Each laboratory has a floor area of 600 m².
The data are taken from the data acquisition system and consist of day/month/time, solar radiation, outside air temperature and heating demand from mid-January to February 2013, with a sampling interval of 15 minutes. 70% of the data (outside temperature, solar radiation and heating demand, as shown in figure (5)) are used for the learning phase (i.e. m in the equations of the neural network, see section 2.3), equivalent to 19 days at a 15-minute sampling time; 15% of the data (4 days at a 15-minute sampling time) are used for each of the validation and testing phases. The outside temperature in this study has minimum, average and maximum values of 1.2 °C, 8.95 °C and 15.3 °C respectively. The global solar radiation has average and maximum values of 7 W/m² and 438 W/m² respectively.
The simplified/theoretical occupancy profile and operational heating power level characteristics for working and off-days over 24 hours are shown in figures (6) and (7).
Figure 6: Occupancy profiles for working and off-day
Figure 7: Operational heating power level characteristics for working and off-day
The power demand and occupancy profile during a working day are depicted in figure (8). The occupancy profile conveys most of the information about the power demand characteristics; however, from 18:00 onwards the power demand is not in accordance with the occupancy profile. This further shows that the simplified occupancy profile alone is not sufficient to characterize the heating demand.
Figure 8: Heating power demand and occupancy profile during working days
Different neural network models are designed based on climate variables (outside temperature and solar radiation), work/off-day information, occupancy profile and operational characteristics, as shown in figure (5). For this case study, a value of 10 represents a working day and 5 an off day in the work/off-day input of the neural network model. Static neural network model 1 uses the operational characteristics, occupancy profile, external temperature and solar radiation as input variables and the heating power demand as output variable; thus the vector length of input neurons (Lx) in equation (12) equals 5. Model 2 adds the transitional characteristics to model 1, so Lx equals 6. For this case study, the sampling time (ts) of the real building data is 15 minutes, the settling time (Ts) is estimated at approximately 45 minutes and the steady state time (Tsteady) at approximately 1 hour. The PDL is then calculated from equation (2): the PDL corresponding to the settling and steady state times is nearly equal to 3 and 4 respectively. Since the pseudo dynamic model depends on the transition lag of the operational heating power level and the building dynamic characteristics, the PDL should be varied from 3 to 4; to understand the phenomenon of the pseudo dynamic lag, the PDL is varied from 1 to 4. Model 3 comprises model 2 with the additional parameter of one PDL, i.e. Lx equals 7; model 4 comprises model 2 with two PDL, i.e. Lx equals 8; model 5 comprises model 2 with three PDL, i.e. Lx equals 9; and model 6 comprises model 2 with four PDL in the transitional characteristics, i.e. Lx equals 10. The transitional and pseudo dynamic characteristics with four lags during a working day are shown in figure (9). The transition levels in figure (9) are calculated from equation (1); for this case study, 25 is chosen for both β0 and Δβ. In figure (9), lag 0 denotes the static model containing transition attributes, lag 1 the pseudo dynamic model with transition lag 1 (PDL = 1), lag 2 the pseudo dynamic model with transition lag 2 (PDL = 2), and so on. The effects of the transitional and pseudo dynamic attributes on the heating demand can further be understood from figure (10): the information hidden in the heating demand which the climate variables cannot explain can be justified by the transitional and pseudo dynamic attributes of the operational characteristics. The summary of the models is shown in table (2).
Figure 9: Transitional and pseudo dynamic characteristics during working day
Figure 10: Pseudo dynamic transitional effects on heating demand
Table 2: Summary of models

Model No. | Type of Model | Input Variables | Remarks
Model 1 | Static | Climate variables, occupancy profile and operational characteristics | No lag
Model 2 | Static | Model 1 with transitional characteristics | No lag
Model 3 | Pseudo dynamic | Model 2 with pseudo dynamic transition in the dead band | Lag 1
Model 4 | Pseudo dynamic | Model 2 with pseudo dynamic transition in the time constant τ | Lag 2
Model 5 | Pseudo dynamic | Model 2 with pseudo dynamic transition in the settling time | Lag 3
Model 6 | Pseudo dynamic | Model 2 with pseudo dynamic transition in the steady state time | Lag 4
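As a sketch of how the input matrices of models 1-6 can be assembled, the Python fragment below stacks the base inputs with lagged copies of the transitional characteristic (the z^-M terms of figure (5)). The column ordering and the use of numpy.roll are illustrative assumptions, not the authors' implementation.

import numpy as np

def model_inputs(temp, solar, workday, occupancy, op_level, transition, pdl):
    # Columns 1-5 give model 1 (Lx = 5); adding the transitional
    # characteristic gives model 2 (Lx = 6); pdl = 1..4 lagged copies of it
    # give models 3-6 (Lx = 7..10).
    cols = [temp, solar, workday, occupancy, op_level, transition]
    cols += [np.roll(transition, lag) for lag in range(1, pdl + 1)]
    return np.column_stack(cols)[pdl:]   # drop rows affected by the wrap-around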
For each model, the cost function J(θ) of equation (5) is computed iteratively, for up to 1000 epochs, for each hidden-neuron count between the minimum and maximum. The maximum number of hidden neurons is calculated from equation (14), where δ is chosen as 8 since this gives flexibility in the number of model parameters; the minimum number of hidden neurons is chosen as 3 for this case study. The hidden-neuron count (Lw) is thus varied from 3 to Wmax. The performance of the model at each iteration (epoch) is computed from equations (16) and (17), and the model parameters are updated according to equation (7), where the initial value of µ is chosen as 0.01 and its value is increased by a factor of 10 or decreased by a factor of 0.1; the maximum value of µ is set to 1e10. The neural network training in this study is stopped when the number of epochs reaches 1000 or the performance goal reaches the value given by equation (15).
Within the scope of this study (see subsection 2.1), accuracy in the number of occupants is not relevant; however, it is essential to know, within the sampling time, when the staff and students arrive at and leave the building. It is necessary to check whether the occupancy and operational power level schedules provided by the ESCO are correct for the robust design of the model, and the main controlling factors for the robust design are the transition times of the occupancy and operational characteristics. From figure (6), it is clear that there is no occupancy transition during an off-day, but there are occupancy transitions at 8:00, 12:00, 13:30 and 17:45 during working days; these are represented by the factors t1, t2, t3 and t4 respectively. Similarly, there are transitions in the operational characteristics for working and off-days, as shown in figure (7); these transition factors are represented by t5, t6, t7 and t8 for working days at 6:00, 12:00, 14:00 and 20:00, and by t9 and t10 for off-days at 6:00 and 20:00. Since the sampling interval of this case study is 15 minutes, three levels are used in the orthogonal arrays so that the model represents 15 minutes before and after each scheduled occupancy and operational transition. The control factors and their levels are summarized in table (3), where OSW denotes the occupancy schedule on a work day, OCSW the operational characteristics schedule on a work day and OCSO the operational characteristics schedule on an off day.
Table 3: Summary of control factors and their levels

Factor | Level 1 | Level 2 | Level 3
OSW at 8 hour (f1) | t1 − 15 min | t1 | t1 + 15 min
OSW at 12 hour (f2) | t2 − 15 min | t2 | t2 + 15 min
OSW at 13:30 hour (f3) | t3 − 15 min | t3 | t3 + 15 min
OSW at 17:45 hour (f4) | t4 − 15 min | t4 | t4 + 15 min
OCSW at 6 hour (f5) | t5 − 15 min | t5 | t5 + 15 min
OCSW at 12 hour (f6) | t6 − 15 min | t6 | t6 + 15 min
OCSW at 14 hour (f7) | t7 − 15 min | t7 | t7 + 15 min
OCSW at 20 hour (f8) | t8 − 15 min | t8 | t8 + 15 min
OCSO at 6 hour (f9) | t9 − 15 min | t9 | t9 + 15 min
OCSO at 20 hour (f10) | t10 − 15 min | t10 | t10 + 15 min
Thus, there are 10 factors with 3 levels each that govern the robustness of the model; if a full factorial design were used to generalize the model, it would require 3^10 = 59049 experiments. The orthogonal array reduces the number of experiments to 729 at strength 5; OA(729, 10, 3, 5) is therefore applied to the proposed pseudo dynamic model in this case study.
4. Results and Discussion

The optimal configuration of each model is based on the maximum R²_modified and the minimum MSE_modified over different randomly initialized parameters. For each hidden-neuron count, five random parameter initializations are used for the learning phase, and the configuration with minimum MSE_modified and maximum R²_modified for learning and validation is chosen from these initializations. The optimal configuration of each model is then selected from the maximum R²_modified and minimum MSE_modified obtained on the learning and validation datasets across different numbers of hidden neurons. Figures (11) and (12) show the R²_modified and MSE_modified performance for learning, validation and testing over different hidden-neuron sizes of model 5, from which the optimal configuration is chosen as the best-performing one. It is clear from figures (11) and (12) that the maximum R²_modified and minimum MSE_modified are achieved at a hidden-neuron size of 13, which is the optimal configuration of this model. It can also be noticed that although the testing R² increases at a hidden-neuron size of 15, the R² for validation and learning does not increase correspondingly. Model 5 is just an example; the same process is repeated for each model to find its optimal configuration. The optimal configurations of the different neural network models are summarized in table (4).
Figure 11: Coefficient of correlation performance (Model 5)
Figure 12: Mean Square Error performance (Model 5)
Table 4: Optimal configuration of models

Model | Hidden Neurons | R² Learning | R² Validation | R² Testing | MSE Learning | MSE Validation | MSE Testing
Model 1 | 10 | 0.82 | 0.81 | 0.61 | 0.18 | 0.18 | 0.40
Model 2 | 19 | 0.87 | 0.85 | 0.80 | 0.13 | 0.15 | 0.21
Model 3 | 7 | 0.88 | 0.86 | 0.75 | 0.12 | 0.14 | 0.25
Model 4 | 9 | 0.89 | 0.87 | 0.82 | 0.12 | 0.13 | 0.18
Model 5 | 13 | 0.89 | 0.87 | 0.83 | 0.11 | 0.13 | 0.18
Model 6 | 9 | 0.89 | 0.87 | 0.85 | 0.11 | 0.13 | 0.15
Table (4) shows that with the static neural network model 1, the best R²_modified for learning and validation reaches 0.82 and 0.81. From this, it is clear that the occupancy profile and operational characteristics alone are not enough to determine and generalize the unknown function of the building heating demand. As the transitional attributes of the operational characteristics are introduced in model 2, the R²_modified performance increases significantly, from 0.82 to 0.87 for the learning phase and from 0.81 to 0.85 for the validation phase, and correspondingly MSE_modified decreases in contrast to model 1. The pseudo dynamic transitional attributes in model 3 and the time constant τ in model 4 lead to a further increase in model performance. Moreover, the dynamics of settling time and steady state play an important role in characterizing the neural network model: R²_modified increases from 0.87 to 0.89 for learning and from 0.85 to 0.87 for validation in model 5 compared to model 2, even though the transition attributes are already introduced in model 2, and in addition the hidden-neuron size is reduced from 19 to 13. The learning and validation performances remain the same in model 6 compared to model 5. The optimal choice of the model thus lies between the settling and steady state times.
It can further be seen that model 5 and model 6 show reasonable and consistent model performances. However, a minimal hidden-neuron size together with maximal learning performance is essential for the overall network generalization. Since the hidden-neuron size decreases from 13 to 9 while the model performance R²_modified remains the same (0.89) in model 6 compared to model 5, model 6 is chosen as the best configuration of all the models. The choice between model 5 and model 6 can be delineated by the percentage error in energy consumption (kWh) between actual and predicted values for the learning and validation phases. The heating energy consumption error in the learning phase is 0.02% in model 6, compared to 0.32% in model 5; for the validation phase, it is 2.39% in model 6 compared to 2.57% in model 5. From these energy consumption errors, it is clear that model 6 has a smaller heating energy consumption error than model 5 during the learning and validation phases, so one can conclude that model 6 is the optimal configuration of the overall model. Model 6 thus bridges the gap between static and dynamic neural network models, in the sense that it is better than the static model and raises the performance to a level comparable to a dynamic neural network model.
For the robustness of the pseudo dynamic model, orthogonal arrays are applied to determine the highest coefficient of correlation for learning and validation with the optimal 9-hidden-neuron configuration of model 6. Table 5 shows OA(729,10,3,5) together with the coefficients of correlation for the learning and validation phases. The schedule obtained from the ESCOs corresponds to experiment 1, and the optimal schedule for model 6 found from the orthogonal array is experiment 398. The orthogonal array thus indicates occupancy transitions at 7:45, 12:00, 13:45 and 18:00 instead of 8:00, 12:00, 13:30 and 17:45 in the existing case. For the operational characteristics, there are transitions at 5:45, 11:45, 14:00 and 17:45 instead of 6:00, 12:00, 14:00 and 17:45 on working days, and at 5:45 and 20:00 instead of 6:00 and 20:00 on off days. After the orthogonal array design, the coefficient of correlation is 0.90 for learning, 0.88 for validation and 0.86 for testing. A remaining limitation of the overall model is that it is difficult to increase the coefficient of correlation beyond 0.90, owing to the 15-minute sampling time: with such a short sampling time it is very difficult to learn datasets that change within a single 15-minute sample. Nonetheless, for good generalization of the model, an R²modified value of 0.90 during the learning phase is acceptable.
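A hypothetical sketch of this orthogonal-array search: each row of the OA(729,10,3,5) table is one candidate schedule, and `evaluate_schedule` stands in for retraining model 6 with that schedule and returning its learning-phase coefficient of correlation (both names are ours, not from the paper).

```python
import numpy as np

def best_oa_experiment(oa_table, evaluate_schedule):
    # oa_table: array of shape (729, 10), one row of factor levels (1..3)
    # per experiment; evaluate_schedule: assumed user-supplied callback.
    scores = [evaluate_schedule(row) for row in oa_table]
    best = int(np.argmax(scores))
    return best + 1, scores[best]   # 1-based experiment id, as in Table 5
```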
The coefficients of correlation of the linear regression between actual and predicted heating demand for the learning, validation and testing phases of model 6, after the optimal orthogonal array design, are 0.95, 0.95 and 0.93 respectively. The prediction of heating demand for model 6 during the validation phase after the optimal orthogonal array design is shown in figure 13. The prediction gives the heating power demand, and the area under the curve gives the heating energy demand. Figure 13 shows that the heating demand rises sharply, to approximately 990 kW, during the third and fourth days, and the pseudo dynamic model is able to learn and predict this behaviour. However, there is a fluctuation in the power demand in the morning of each of the four consecutive days, and datasets that transition rapidly in actual power demand are difficult to learn. The prediction of heating demand for model 6 during the testing phase after the optimal orthogonal array design is shown in figure 14. The pseudo dynamic model is clearly able to predict the heating demand; however, during the third day it does not reach the 1.1 MW peak demand. This is because the neural network never learned this maximum heating demand during the learning phase, as such information is not available in the database. The learning data therefore need to be improved through feature extraction techniques. Nonetheless, the prediction of the pseudo dynamic model (model 6) agrees with the actual target except for some rapid transitions in the actual target. In summary, the pseudo dynamic transition attributes in model 6, after orthogonal array design, lead to the best prediction of heating demand.
Table 5: OA(729,10,3,5) and coefficient of correlation for learning and validation for model 6

Experiment   f1 f2 f3 f4 f5 f6 f7 f8 f9 f10   Learning  Validation  Testing
…            (rows for experiments 1–12 not recoverable from the extracted source)
394          2  3  1  3  1  1  3  3  3  3      0.89      0.87        0.80
395          1  3  1  3  1  1  3  2  2  2      0.90      0.87        0.76
396          3  3  1  3  1  1  3  1  1  1      0.90      0.87        0.76
397          2  2  3  3  1  1  2  2  2  3      0.89      0.88        0.81
398          1  2  3  3  1  1  2  1  1  2      0.90      0.88        0.86
399          3  2  3  3  1  1  2  3  3  1      0.90      0.87        0.70
400          2  1  3  3  1  1  3  2  1  1      0.90      0.87        0.77
401          1  1  3  3  1  1  3  1  3  3      0.89      0.88        0.84
402          3  1  3  3  1  1  3  3  2  2      0.87      0.88        0.74
…
725          1  1  3  3  3  3  2  1  2  3      0.89      0.87        0.80
726          3  1  3  3  3  3  2  3  1  2      0.90      0.87        0.84
727          2  3  3  3  3  3  3  2  2  2      0.90      0.87        0.80
728          1  3  3  3  3  3  3  1  1  1      0.89      0.88        0.78
729          3  3  3  3  3  3  3  3  3  3      0.89      0.88        0.61
Figure 13: Prediction of heating demand in model 6 during validation phase (after optimum orthogonal array design). [Plot of actual vs. predicted heating demand (kW) over 96 hours.]
Figure 14: Prediction of heating demand in model 6 during testing phase (after optimum orthogonal array design). [Plot of actual vs. predicted heating demand (kW) over 96 hours.]
4. Conclusion

This paper introduces a pseudo dynamic transitional model for building heating demand prediction over a short time horizon using an artificial neural network. Occupancy profile and operational heating power level characteristics are included in the model, and the dynamic characteristics of the building are used to determine the pseudo dynamic transition lag. Settling time and steady state time of the heating demand increase the precision of the model; the choice of model, however, depends on the actual time between settling and steady state. The results are based on a case study in which the occupancy profile is already known, and may vary for buildings with more fluctuating occupancy. The coefficient of correlation increases from 0.82 to 0.89 for learning, from 0.81 to 0.87 for validation and from 0.61 to 0.85 for testing in the pseudo dynamic model compared to the static neural network model. The hidden neuron size is also further reduced, which lowers complexity and improves generalization of the model. Moreover, the pseudo dynamic model achieves a minimal energy consumption error of 0.02% for the learning phase and 2.39% for the validation phase. Further, orthogonal arrays are applied to the optimal pseudo dynamic model to confirm the schedule of the occupancy profile and operational level characteristics, and the robustness of the model. The orthogonal array design increases the coefficient of correlation of the pseudo dynamic model and confirms a new schedule for the occupancy profile and operational level characteristics. The major contribution of this paper is thus the introduction of transitional and novel time dependent attributes of the operational heating power level characteristics, the dominant factor in building heating demand. The orthogonal array design also provides the flexibility to cross-check the schedules of the occupancy profile and operational heating power level characteristics obtained from the ESCOs, making the model robust. The prediction is over a short time horizon (4 days) with a sampling interval of 15 minutes, and is thus useful for dynamic control of building heating demand.

Further research will focus on feature extraction of the data before the learning phase of the neural network, so that abnormalities in the data can be corrected in the learning phase, and on adaptive, real-time learning criteria with seasonal behaviour.
Acknowledgement
This research has been done in collaboration with Ecole des Mines, Nantes, Technische Universiteit
Eindhoven and VEOLIA Environnement Recherche et Innovation, funded through Erasmus Mundus
Joint Doctoral Programme SELECT+, the support of which is gratefully acknowledged.
Appendix A

The influence of the input variables on the model output is evaluated using correlation analysis. Correlation measures the strength or weakness of the linear relationship between two variables. Several coefficients measure the degree of correlation; Pearson's correlation coefficient is used here to determine the relevance of the input variables. Pearson's correlation coefficient is calculated by dividing the covariance of two variables by the product of their standard deviations, as shown in equations (A.1)–(A.2), where r is Pearson's correlation coefficient, cov(x, y) is the covariance representing the strength of the linear relationship between variables x and y, x̄ and ȳ are the mean values of x and y, s_x and s_y are their standard deviations, and n is the number of data points.

r = cov(x, y) / (s_x s_y)   (A.1)

cov(x, y) = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ)   (A.2)
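As a sketch, equations (A.1)–(A.2) transcribe directly to code:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson's correlation coefficient per (A.1)-(A.2).
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)
    return cov_xy / (x.std(ddof=1) * y.std(ddof=1))
```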
The correlation coefficient ranges from −1 to +1:

r = 1: perfect positive linear correlation
r = −1: perfect negative linear correlation
0.1 < r < 0.25: small positive linear correlation
0.25 < r < 0.6: medium positive linear correlation
0.6 < r < 1: strong positive linear correlation
−1 < r < 0: negative linear correlation
Climatic conditions (outside temperature and solar radiation), operational power level characteristics and the approximate occupancy profile are evaluated, based on the case study data, as candidate variables affecting the building heat demand. The pseudo dynamic transitional attributes, which capture the dynamics of the building characteristics, are not considered in this relevance analysis, since they only encode the time and phase interval of heating power transitions.

The results show that the linear coefficients of correlation of outside air temperature, solar radiation, occupancy profile and operational power level characteristics with the heat load are −0.84, −0.40, 0.32 and 0.35 respectively. The climatic conditions (outside temperature and solar radiation) are thus relevant input variables for predicting the heat load. It is also clear that the occupancy profile and operational power level characteristics have a medium positive correlation with the heat load, showing their relevance for characterizing the heat demand behaviour.
Experimental Biological Protocols with Formal
Semantics
Alessandro Abate2 , Luca Cardelli1,2 , Marta Kwiatkowska2 , Luca Laurenti2 ,
and Boyan Yordanov1
1
2
Microsoft Research Cambridge
Department of Computer Science, University of Oxford
Abstract. Both experimental and computational biology are becoming increasingly automated. Laboratory experiments are now performed automatically on high-throughput machinery, while computational models are synthesized or inferred automatically from data. However, integration between automated tasks in the process of biological discovery is still lacking, largely due to incompatible or missing formal representations. While theories are expressed formally as computational models, existing languages for encoding and automating experimental protocols often lack formal semantics. This makes it challenging to extract novel understanding by identifying when theory and experimental evidence disagree due to errors in the models or in the protocols used to validate them. To address this, we formalize the semantics of a core protocol language as a stochastic hybrid process, which provides a unified description of the models of biochemical systems being experimented on, together with the discrete events representing the liquid-handling steps of biological protocols. Such a representation captures uncertainties in equipment tolerances, making it a suitable tool for both experimental and computational biologists. We illustrate how the proposed protocol language can be used for automated verification and synthesis of laboratory experiments on case studies from the fields of chemistry and molecular programming.
1 Introduction
The classical cycle of observation, hypothesis formulation, experimentation, and
falsification, which has driven scientific and technical progress since the scientific
revolution, is lately becoming automated in all its separate components. Data
gathering is conducted by high-throughput machinery. Models are automatically
synthesized, at least in part, from data [6,11,4]. Experiments are selected to maximize knowledge acquisition. Laboratory protocols are run under reproducible
and auditable software control. However, integration between these automated
components is lacking. Theories are not placed in the same formal context as the
(coded) protocols that are supposed to test them. Theories talk about changes in physical quantities, while protocols talk about steps carried out by machines:
neither knows about the other, although they both try to describe the same
process. The consequence is that often it is hard to tell what happened when
experiments and models do not match: was it an error in the model, or an error
in the protocol? Often both the model and the protocol have unknown parameters: do we use the experimental data to fit the model or to fit the protocol?
When most activities are automated, we need a way to answer those questions
that is equally automated.
In this paper, we present a core language to model experimental biological
protocols that gives an integrated description of the protocol and of the underlying molecular process. A basic example of experimental biological protocol is
shown in Figure 1.
From this integrated representation, both the model of a phenomenon (for possibly automated mathematical analysis) and the steps carried out to test it (for automated execution by lab equipment) can be separately extracted. This is essential to perform automated model synthesis and falsification while taking into account uncertainties in both the model structure and the equipment tolerances. We map our language into a Piecewise Deterministic Markov Process (PDMP), that is, a class of Markov stochastic hybrid processes where the continuous variables evolve according to ordinary differential equations (ODEs) and the discrete variables evolve by means of random jumps [12]. The discrete dynamics are used to map the discrete operations of a lab protocol, while the continuous dynamics model the evolution of the physical variables. In our language, physical variables are described with Chemical Reaction Networks (CRNs), a widely used formalism to model molecular interactions [16].

Fig. 1: Graphical representation of an acid-base titration protocol. The protocol is initialized with samples A (containing H+ and Cl−) and B (containing Na+ and OH−). Some fraction of each sample (p1 and p2) is mixed together and the resulting sample is left to equilibrate for t seconds.

Our goal is to define a simple core language and to focus on formalizing its semantics. We then show how our language can easily be extended to collect observations of the process and to model more complicated protocols. Giving the language a formal semantics in terms of a PDMP allows us to include in the semantics the uncertainties intrinsic to the discrete operations of an experimental protocol; such uncertainties have also been standardized (standards ISO 17025 and 8655). On examples from chemistry and molecular programming, we demonstrate how our integrated representation allows one to perform analysis and synthesis of both the discrete steps of the protocol and the underlying physical system.
Related Work Several factors contribute to the growing need for a formalization of experimental protocols in biology. First, better record-keeping of experimental operations is recognized as a step towards tackling the reproducibility
crisis in biology [17]. Second, the emergence of cloud labs [18] creates a need for
precise, machine-readable descriptions of the experimental steps to be executed.
To address these needs, frameworks allowing protocols to be recorded, shared,
and reproduced locally or in a remote lab have been proposed. These frameworks
introduce different programming languages for experimental protocols including
BioCoder [3], Autoprotocol, and Antha [24]. These languages provide expressive, high-level protocol descriptions but consider each experimental sample as
a labelled black-box. This makes it challenging to study a protocol together
with the biochemical systems it manipulates in a common framework. In contrast, we consider a simpler set of protocol operations but capture the details of
experimental samples, enabling us to track properties of chemical species (e.g.
amounts, concentrations, etc) as they react during the execution of a protocol.
This allows us to formalize and verify requirements for the correct execution of
a protocol or to optimize various protocol or system parameters to satisfy these
specifications.
2 Background
We first introduce the formalism of PDMP, which we use to model experimental
protocols. Then, we introduce Chemical Reaction Networks (CRN), which are
used to model the underlying physical process.
2.1 Piecewise Deterministic Markov Process
The syntax of a PDMP is given as follows.
Definition 1 A Piecewise Deterministic Markov Process (PDMP) H is a tuple H = (Q, d, G, F, Λ, R), where

– Q = {q_1, ..., q_{|Q|}} is the set of discrete modes;
– d : Q → N is a map such that R^{d(q)} is the state space of the continuous dynamics in mode q. The hybrid state space is defined as S = ∪_{q∈Q} {q} × R^{d(q)};
– G : Q × R^{d(q)} → {0, 1} is a set of guards;
– F : Q × R^{d(q)} → R^{d(q)} is a family of vector fields;
– Λ : S × Q → R_{≥0} is an intensity function, where for (q_i, x) ∈ S and q_j ∈ Q we define Λ((q_i, x), q_j) = λ_{i,j}(x), and Σ_{q_j ≠ q_i} λ_{i,j}(x) = λ_{q_i}(x);
– R : B(S) × S → [0, 1] is the reset function, which assigns to each (q, x) ∈ S a measure R(·, (q, x)) on (S, B(S)).

Here B(S) denotes the smallest σ-algebra on S containing all the sets of the form ∪_{q∈Q} {q} × A_q, where A_q is a Borel subset of R^{d(q)}. For t ∈ R_{≥0}, q ∈ Q, x ∈ R^{d(q)},
we call Φ(q, t, x) the solution of the differential equation

dΦ(q, t, x)/dt = F(q, Φ(q, t, x)),   Φ(q, 0, x) = x.
The solution of a PDMP is a stochastic process Y = (α, X), whose semantics is
classically defined according to the notion of execution (see Definition 2 below)
[13]. In order to introduce such a notion, we define the exit time t∗ (q, x, G) as
t∗ (q, x, G) = inf{t ∈ R≥0 | G(q, Φ(q, t, x)) = 1}
(1)
and the survival function

f(q, t, x) = exp(−∫_0^t λ_q(Φ(q, τ, x)) dτ)  if t < t∗(q, x, G),  and  f(q, t, x) = 0  otherwise.
Here t∗ (q, x, G) represents the first time instant, starting from state (q, x), when
the guard set is reached by a solution of the process; further f (q, t, x) denotes
the probability that the system remains within q, starting from x, at time t
[12], which depends on random arrivals induced by the intensity function Λ. The
semantics of a PDMP is provided next.
Definition 2 (Execution of PDMP H)
  Set t := 0
  Set (α(0), X(0)) := (q0, x0)
  While t < ∞
    Extract an R_{≥0}-valued random variable T such that Prob(T > t̄) = f(α(t), t̄, X(t))
    ∀τ ∈ [t, t + T): Set (α(τ), X(τ)) := (α(t), Φ(α(t), τ − t, X(t)))
    If t + T < ∞
      Extract (α(t + T), X(t + T)) according to R(·, (α(t), Φ(α(t), T, X(t))))
    End If
    Set t := t + T
  End While
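A minimal sketch of this execution loop, under simplifying assumptions (a state-independent jump intensity per mode and user-supplied callbacks; all names below are ours, not from the paper):

```python
import numpy as np

def execute_pdmp(q0, x0, flow, rate, reset, horizon, rng):
    # flow(q, dt, x): the deterministic flow Phi(q, dt, x);
    # rate(q): assumed constant jump intensity in mode q;
    # reset(q, x): samples the reset kernel R(. | (q, x)).
    t, q, x = 0.0, q0, x0
    while t < horizon:
        lam = rate(q)
        dwell = rng.exponential(1.0 / lam) if lam > 0 else horizon - t
        dwell = min(dwell, horizon - t)
        x = flow(q, dwell, x)          # deterministic flow between jumps
        t += dwell
        if t < horizon:
            q, x = reset(q, x)         # random jump at the arrival time
    return q, x
```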
Let T < t∗ (qi , x̄, G) be the dwelling time in state qi ∈ Q, with x̄ such that
Φ(qi , T, x̄) = x, (qj , A) ∈ B(S). Assume that x is such that G(x, qi ) = 0. Then,
the reset has the following form, which results from a transition that is not due
to the crossing of a guard:
R((q_j, A), (q_i, x)) = Rc(A | q_j, (q_i, x)) · Λ^T_{q_i,q_j}(Φ, x) / Σ_{q_k ≠ q_i} Λ^T_{q_i,q_k}(Φ, x),   (2)

where Λ^T_{q_i,q_j}(Φ, x) = ∫_0^T λ_{i,j}(Φ(s, q_i, x)) ds, and

Rc(A | q_j, (q_i, x)) = Prob(X(T) ∈ A | α(T) = q_j, (α(0) = q_i, X(0) = x))
is the conditional reset of the continuous dynamics.
2.2 Chemical Reaction Networks

A CRN C = (A, R) is a pair of finite sets, where A denotes a set of chemical species, |A| is its cardinality, and R denotes a set of reactions. A reaction τ ∈ R is a triple τ = (r_τ, p_τ, k_τ), where r_τ ∈ N^{|A|} is the source complex, p_τ ∈ N^{|A|} is the product complex and k_τ ∈ R_{>0} is the coefficient associated with the rate of the reaction. The quantities r_τ and p_τ represent the stoichiometry of reactants and products. Given a reaction τ_1 = ([1, 0, 1], [0, 2, 0], k_1), we often write it as τ_1 : A_1 + A_3 →^{k_1} 2A_2. The net change associated to τ is defined by υ_τ = p_τ − r_τ.
Many models have been introduced to study CRNs [9,7,15,8]. Here we consider the reaction rate equations [15], which describe the time evolution of the concentrations of the species in C, in a sample of temperature T and volume V, as follows:

dΦ(t)/dt = F(t) = Σ_{τ∈R} υ_τ · γ_S(Φ(t), k_τ, V, T),   (3)

where γ_S(Φ(t), k_τ, V, T) is the propensity rate; in the case of mass action kinetics we have

γ_S(Φ(t), k_τ, V, T) = k_τ(T) Π_{S∈A} Φ_S(t)^{r_{S,τ}},

where Φ_S and r_{S,τ} are the components of the vectors Φ and r_τ relative to species S, and where writing k_τ(T) makes explicit the dependence on the temperature T.
Definition 3 (Chemical Reaction System) A chemical reaction system (CRS) C = (A, R, x_0) is defined as a tuple, where (A, R) is a CRN and x_0 ∈ N^{|A|} represents its initial condition.

Example 1. Consider the CRS C = (A, R, x_0), evolving in a volume V and at temperature T, where A = {H2O, Na+, OH−, Cl−, H+} and R is composed of the following reaction:

Na+ + OH− + H+ + Cl− →^k H2O + Na+ + Cl−

where k = 2.81e−10 is the rate at temperature T = 298 Kelvin. Then, according to Equation (3), the state of H+ is given by the solution of the following ordinary differential equation:

dH+(t)/dt = −k Na+(t) OH−(t) H+(t) Cl−(t),

with H+(0) = x_{0,H+}/V, where x_{0,H+} is the component of x_0 relative to H+.
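As a concrete sketch, the rate equation of Example 1 can be integrated numerically as follows; the initial concentrations and the time horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.81e-10  # rate constant at T = 298 K, from Example 1

def rhs(t, y):
    na, oh, cl, h = y                 # track Na+, OH-, Cl-, H+ only
    v = k * na * oh * h * cl          # mass-action propensity of the reaction
    return [0.0, -v, 0.0, -v]         # Na+ and Cl- act catalytically

y0 = [0.1, 0.1, 0.1, 0.1]             # assumed initial concentrations (M)
sol = solve_ivp(rhs, (0.0, 100.0), y0)
print(sol.y[3, -1])                   # H+ concentration at the final time
```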
3 A Language for Experimental Biological Protocols
In this section we introduce the syntax of a language for modelling experimental protocols. A formal semantics of the language, based on denotational semantics [25], is then discussed. We model the physical process underlying a biological experimental protocol as a CRS. As a consequence, in order to introduce formal semantics for experimental protocols, we first need to define semantics for a CRS, which was introduced only informally in the previous section. Let S = R^{|A|} × R_{≥0} × R_{≥0} be the set of samples. We define the semantics for a CRS as follows.
Definition 4 (CRS Semantics) Let C = (A, R) be a CRN, and let x_0 ∈ R_{≥0}^{|A|}, V, T ∈ R_{≥0} be the initial concentrations (moles), volume (liters) and temperature (degrees Kelvin). Call F(V, T) : R^{|A|} → R^{|A|} the drift at volume V and temperature T for C. Then, the semantics of the CRS (A, R, x_0) at volume V, temperature T and time t, for a time horizon H ∈ R_{≥0} ∪ {∞},

[[·]] : (CRS × R_{≥0} × R_{≥0}) → R_{≥0} ∪ {∞} → R_{≥0} → S

is defined as

[[((A, R, x_0), V, T)]](H)(t) =
  let G : [0, H) → R^{|A|} be the solution of G(t′) = x_0 + ∫_0^{t′} F(V, T)(G(s)) ds
  (G(t), V, T)

where the above reads as follows: 'first line' = 'third line', with G defined as in the 'second line'. If for such an H the function G is not unique, then we say that [[((A, R, x_0), V, T)]](H)(t) is ill posed.
In Definition 4 we have explicitly introduced a dependence on a time horizon H,
because it may happen that the solution of the rate equations is defined only for
a finite time horizon [15].
3.1 A Language for Experimental Protocols

Our goal is to build a simple core language that gives an integrated representation of a discrete protocol together with the physical process it operates on. We consider the following language modelling the basic operations of a lab protocol.

Definition 5 (Syntax of a Protocol) Given a set of variables Var, the syntax of a protocol P for a given fixed CRN C = (A, R) is

P ::= x                                  (sample variable)
    | (x0, V, T)                         (initial condition)
    | Mix(P1, P2)                        (mix samples)
    | let x = P1 in P2                   (define variable)
    | let x, y = Dispense(P1, p) in P2   (dispense samples)
    | Equilibrate(P, t)                  (let time pass)
    | Dispose(P)                         (discard P)

where T, V, t ∈ R_{≥0}, x, y ∈ Var, p ∈ (0, 1). Moreover, let-bound variables must occur exactly once (that is, be free) in P2.
A protocol P yields a sample, which is the result of the operations Equilibrate, Mix, Dispose and Dispense over a CRS. This syntax allows one to create and manipulate new samples using the Mix (put different samples together), Dispense (separate samples) and Dispose (discard samples) operations. Note that the CRN is common to all samples; however, different samples may have different initial conditions. The single-occurrence (linearity) restriction implies that a sample cannot be duplicated or eliminated from the pool.
Example 2. We use let x, = Dispense(P1, p) in P2 as a short-hand for let x, y = Dispense(P1, p) in Mix(Dispose(y), x). The protocol (call it Pro1) represented graphically in Figure 1 is defined formally as

Pro1 = let A = ([(H+, 0.1M); (Cl−, 0.1M)], 1.0mL, 25.0°C) in
       let B = ([(Na+, 0.1M); (OH−, 0.1M)], 1.0mL, 25.0°C) in
       let a, = Dispense(A, p1) in
       let b, = Dispense(B, p2) in
       Equilibrate(Mix(a, b), t).
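For illustration (not part of the paper), this syntax can be encoded as a small abstract syntax tree in Python; all class and field names below are our own choices.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Init:
    x0: list          # initial concentrations, one entry per species
    volume: float
    temp: float

@dataclass
class Mix:
    p1: "Protocol"
    p2: "Protocol"

@dataclass
class Let:
    x: str
    p1: "Protocol"
    p2: "Protocol"

@dataclass
class Dispense:
    x: str
    y: str
    p1: "Protocol"
    frac: float       # the fraction p
    p2: "Protocol"

@dataclass
class Equilibrate:
    p: "Protocol"
    t: float

@dataclass
class Dispose:
    p: "Protocol"

Protocol = Union[Var, Init, Mix, Let, Dispense, Equilibrate, Dispose]
```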
In order to define the semantics of a protocol we introduce the following
definitions.
Definition 6 (Free Variables) The set of Free Variables (FV) of a protocol P
is defined inductively as follows:
F V (x) = {x}
F V ((x0 , V, T )) = {}
F V (M ix(P1 , P2 )) = F V (P1 ) ∪ F V (P2 )
F V (let x = P1 in P2 ) = F V (P1 ) ∪ (F V (P2 ) − {x})
F V (let x, y = Dispense(P1 , p) in P2 ) = F V (P1 ) ∪ (F V (P2 ) − {x, y})
F V (Equilibrate(P, t)) = F V (P )
F V (Dispose(P )) = F V (P ).
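Using the illustrative AST sketched above, Definition 6 transcribes directly:

```python
def free_vars(p):
    # Free variables of a protocol term, following Definition 6.
    if isinstance(p, Var):
        return {p.name}
    if isinstance(p, Init):
        return set()
    if isinstance(p, Mix):
        return free_vars(p.p1) | free_vars(p.p2)
    if isinstance(p, Let):
        return free_vars(p.p1) | (free_vars(p.p2) - {p.x})
    if isinstance(p, Dispense):
        return free_vars(p.p1) | (free_vars(p.p2) - {p.x, p.y})
    if isinstance(p, (Equilibrate, Dispose)):
        return free_vars(p.p)
    raise TypeError(f"not a protocol term: {p!r}")
```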
We define the operation of substitution of a protocol into a variable as follows.

Definition 7 (Substitution) P1{x ← P2} is defined inductively as follows:

x{x ← P} = P
y{x ← P} = y, for x ≠ y
Mix(P1, P2){x ← P3} = Mix(P1{x ← P3}, P2{x ← P3})
(let x = P1 in P2){x ← P3} = (let x = P1{x ← P3} in P2)
(let y = P1 in P2){x ← P3} = (let y = P1{x ← P3} in P2{x ← P3}), for x ≠ y and y ∉ FV(P3)
Equilibrate(P, t){x ← P1} = Equilibrate(P{x ← P1}, t)
Dispose(P){x ← P1} = Dispose(P{x ← P1}).
The following equivalences can be shown structurally, based on the definitions
above.
Proposition 1. (Equivalence Relationships)
let x = P1 in P2 = P2 {x ← P1 }
let x = P1 in P2 = let y = P1 in (P2 {x ← y}) for y 6∈ F V (P2 ).
3.2 Deterministic Semantics of a Protocol
We introduce the deterministic semantics for a protocol. Then, in the next Section, we extend such a semantics in order to take into account errors and inaccuracies within the protocol, which in practice may be quite relevant.
The deterministic semantics of a protocol P for a CRN C = (A, R), under
a given environment ρ : V ar → S, is a function [[P ]]ρ : (V ar → S) → S that is
defined inductively as follows.
Definition 8 (Deterministic Semantics of a Protocol) Let S = R^{|A|} × R_{≥0} × R_{≥0}; then the deterministic semantics of a protocol P for CRN C = (A, R), under environment ρ : Var → S, is defined inductively as follows:

[[x]]ρ = ρ(x)

[[(x0, V, T)]]ρ = (x0, V, T)

[[Mix(P1, P2)]]ρ =
  let (x10, V1, T1) = [[P1]]ρ
  let (x20, V2, T2) = [[P2]]ρ
  ((x10·V1 + x20·V2)/(V1 + V2), V1 + V2, (T1·V1 + T2·V2)/(V1 + V2))

[[let x = P1 in P2]]ρ =
  let (x0, V, T) = [[P1]]ρ
  let ρ1 = ρ{x ← (x0, V, T)}
  [[P2]]ρ1

[[let x, y = Dispense(P1, p) in P2]]ρ =
  let (x0, V, T) = [[P1]]ρ
  let ρ1 = ρ{x ← (x0, V·p, T), y ← (x0, V·(1−p), T)}
  [[P2]]ρ1

[[Equilibrate(P, t)]]ρ =
  let (x0, V, T) = [[P]]ρ
  [[((A, R, x0), V, T)]](H)(t)

[[Dispose(P)]]ρ = (0^{|A|}, 0, 0),

where H ∈ R_{≥0} is such that for any Equilibrate(P, t), [[((A, R, x0), V, T)]](H)(t) is well posed. If such an H does not exist, we say that P is ill posed.
The above semantics maps a protocol to the concentrations of the species, the volume, and the temperature of the sample at the final time.
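Continuing the illustrative AST above, a minimal evaluator for Definition 8 might look as follows; `equilibrate_fn` stands in for the ODE integration of Definition 4 and is an assumed user-supplied callback.

```python
import numpy as np

def evaluate(p, env, equilibrate_fn):
    # A sample is (x0, V, T): concentration vector, volume, temperature.
    if isinstance(p, Var):
        return env[p.name]
    if isinstance(p, Init):
        return (np.asarray(p.x0, float), p.volume, p.temp)
    if isinstance(p, Mix):
        x1, v1, t1 = evaluate(p.p1, env, equilibrate_fn)
        x2, v2, t2 = evaluate(p.p2, env, equilibrate_fn)
        v = v1 + v2
        return ((x1 * v1 + x2 * v2) / v, v, (t1 * v1 + t2 * v2) / v)
    if isinstance(p, Let):
        env2 = dict(env)
        env2[p.x] = evaluate(p.p1, env, equilibrate_fn)
        return evaluate(p.p2, env2, equilibrate_fn)
    if isinstance(p, Dispense):
        x0, v, temp = evaluate(p.p1, env, equilibrate_fn)
        env2 = dict(env)
        env2[p.x] = (x0, v * p.frac, temp)
        env2[p.y] = (x0, v * (1.0 - p.frac), temp)
        return evaluate(p.p2, env2, equilibrate_fn)
    if isinstance(p, Equilibrate):
        x0, v, temp = evaluate(p.p, env, equilibrate_fn)
        return (equilibrate_fn(x0, v, temp, p.t), v, temp)
    if isinstance(p, Dispose):
        x0, _, _ = evaluate(p.p, env, equilibrate_fn)
        return (np.zeros_like(x0), 0.0, 0.0)
    raise TypeError(f"not a protocol term: {p!r}")
```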
3.3 Deterministic Semantics of a Protocol as a PDMP

Given a protocol P and an environment ρ, [[P]]ρ induces semantics that correspond to the solution of a PDMP H = (Q, d, G, F, Λ, R) as per Definitions 1 and 2. In the corresponding PDMP model H, Q represents the set of discrete operations, and d(q) = |A| + 1 is the continuous dimension (the number of continuous variables). The vector field F is given by Definition 4, with an additional clock variable time evolving as d(time)/dt = 1. For each Equilibrate(P, t) step there is a guard set defined as time ≥ t: when a trajectory enters this guard, the associated reset is the identity function. The resulting process is a PDMP without random jumps (that is, with Λ(q, x) = 0 for all q ∈ Q, x ∈ R^{|A|+1}) and with non-probabilistic resets R. As such, the resulting PDMP is also a (non-probabilistic) hybrid model [2]. We elucidate this with the next example.
Example 3. Consider the protocol Pro1 introduced in Example 2. The CRN of the system comprises the reactions given in Example 1. According to Definition 8, the state of variable H+ over time is given by the solution of the following equation:

H+(t) = H+(0) − ∫_0^t k Na+(s) OH−(s) H+(s) Cl−(s) ds,

where H+(0) = (p1 · 0.1 + p2 · 10^{−7.4})/(p1 + p2). Then, given an environment ρ, [[Pro1]]ρ is the solution of the PDMP H = (Q, d, G, F, Λ, R), where Q = {q}; d(q) = 8, so that the continuous state space is R^8; G(q, x) = 1 iff x_time ≥ t, where x_time is the component of x relative to the variable time; and F is given by Definition 8. Notice again that Λ(q, x) = 0 for any x, and R(q, x) = δ(x), where δ(x) is the Dirac delta centered at x.
3.4 Stochastic Semantics of a Protocol, and Interpretation as a PDMP

The semantics of Definition 8 is fully deterministic, and indeed was shown to map into a fully non-probabilistic PDMP model. However, the operations Dispense and Equilibrate are often stochastic in nature, because they are performed by humans and are subject to experimental inaccuracies of the lab equipment. In what follows, we encompass these features by extending the previously defined semantics with stochasticity. More precisely:

– in the Equilibrate(P, t) step, time is sampled from a distribution;
– the resulting volume after a Dispense step is sampled from a distribution.

The first characteristic models the fact that in real experiments the system is not equilibrated for exactly t seconds, as it may be started or stopped at different time instants, and accounts for the fact that after a mix of samples well-mixed conditions are not reached instantaneously; the second takes into account the error of pipetting devices, whose ranges and parameters have been standardized (standard ISO 8655). For the first feature, consider the function T(t0, t) = e^{−t0/t}, defined for t0, t ∈ R_{≥0}. This function is the survival function of an exponential random variable, modelling random arrivals. For the second feature, let B(R^m_{≥0}) be the Borel sigma-algebra over R^m_{≥0}, m > 0. We consider a function D : B((0, 1)) × R_{≥0} × [0, 1] → [0, 1], which assigns to D(·, V, p) a probability measure on B((0, 1)). The function D is used to reset the volume randomly after a discrete operation. (As an anticipation, notice that both functions T and D can be mapped to elements of a PDMP model.)
We define the stochastic semantics of a protocol as an extension of the deterministic one of Definition 8. For the sake of compactness, we explicitly write only the operators that differ from the earlier ones.

Definition 9 (Stochastic Semantics of a Protocol) Let S = R^{|A|} × R_{≥0} × R_{≥0}; then the semantics of a protocol P for CRN C = (A, R), under environment ρ : Var → S and functions T, D as defined above, is defined inductively as follows:

[[let x, y = Dispense(P1, p) in P2]]ρ =
  let (x0, V, T) = [[P1]]ρ
  let p0 be sampled from D(·, V, p)
  let ρ1 = ρ{x ← (x0, V·p0, T), y ← (x0, V·(1−p0), T)}
  [[P2]]ρ1

[[Equilibrate(P, t)]]ρ =
  let (x0, V, T) = [[P]]ρ
  let I be an R_{≥0}-valued random variable such that for s ∈ R_{≥0}, Prob(I > s) = T(s, t)
  [[((A, R, x0), V, T)]](H)(I),

where H ∈ R_{≥0} is such that for any Equilibrate(P, t) and any random variable I with Prob(I > s) = T(s, t), [[((A, R, x0), V, T)]](H)(I) is well posed with probability 1. If such an H does not exist, we say that [[P]]ρ is ill posed.

D is a transition kernel that depends only on the current state of the system, and T is the survival function of an exponentially distributed random variable. As a consequence, according to Definition 2, [[P]]ρ induces semantics that are again the solution of a PDMP. Here, however, T determines the probability of changing discrete state and D acts as a probabilistic reset; there are no guards, and the continuous dynamics evolve according to the ODE in Definition 4.
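A sketch of how the two stochastic primitives could be sampled in practice, assuming SciPy's truncated normal for D (as in Example 4 below) and the exponential survival function T(s, t) = e^(−s/t), which has mean t; the function names are ours.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def sample_equilibrate_time(t):
    # I with Prob(I > s) = exp(-s/t), i.e. exponential with mean t.
    return rng.exponential(t)

def sample_dispense_fraction(p, sigma, lo=0.0, hi=1.0):
    # p0 from a Gaussian centred at p, truncated to (lo, hi): one common
    # choice for the kernel D(., V, p).
    a, b = (lo - p) / sigma, (hi - p) / sigma
    return truncnorm.rvs(a, b, loc=p, scale=sigma, random_state=rng)
```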
Next, we leverage results from the analysis of PDMP models and export them to the protocol language. The following assumptions guarantee that the solution of the PDMP induced by [[P]]ρ exists, establish that it is a strong Markov process, and allow us to exclude pathological Zeno behaviours [12,19].

Assumption 1

– Let A0, A1 ∈ B([0, 1]) be the smallest sets in B([0, 1]) containing, respectively, 0 and 1. Then D(A0, V, p) = D(A1, V, p) = 0 for any p ∈ (0, 1), V ≠ 0. That is, the volume after a dispense is zero with probability zero.
– Let F : R^{|A|} → R^{|A|} be the drift term of the rate equations (Equation (3)). Then F is a globally Lipschitz function.
– For any Equilibrate(·, t) we have that t > 0.

Let us interpret these assumptions over the protocol language. The first assumption guarantees that the volume of a non-empty sample is almost surely non-zero. The second guarantees that the solution of (3) exists and does not hit infinity in finite time; this excludes non-physical reactions like X + X → X + X + X. The third guarantees a finite number of jumps over a finite time, thus excluding Zeno behaviours [12,13].
Example 4. Consider the protocol introduced in Example 2. For σ1 > 0 and A ⊆ [0.1, 0.8], assume that

D(A, p, V̄) = (∫_A e^{−(x−p)²/(2σ1²)} dx) / (∫_{0.1}^{0.8} e^{−(x−p)²/(2σ1²)} dx).

That is, D(·, p, V) is a truncated Gaussian measure centered at p and independent of the volume. Then, according to Definition 9, we have the following stochastic semantics:

H+(I) = H+(0) − ∫_0^I k Na+(s) OH−(s) H+(s) Cl−(s) ds,

with H+(0) = (V1 · 0.1 + V2 · 10^{−7.4})/(V1 + V2). Here I is a random variable with exponential distribution of mean t, V1 is a random variable sampled from D(·, p1, 1), and V2 is a random variable sampled from D(·, p2, 1).
4 Extending the Protocol Language with Observations

The language introduced in Section 3 can be extended in a number of directions, according to the specific scenarios envisioned for the protocols. A common task is to take observations of the state of the protocol; that is, it is often useful to store the state of the system at different times or when a particular event happens. As some of the events may be stochastic, it is in general not possible to know before the simulation starts when a particular event will happen. Consequently, observations need to be included in the language.
Definition 10 (Extended Syntax). Given a set of variables Var, the syntax of a protocol for a given fixed CRN C = (A, R) and idn ∈ N is

P ::= x                                  (sample variable)
    | (x0, V, T)                         (initial condition)
    | Mix(P1, P2)                        (mix samples)
    | let x = P1 in P2                   (define variable)
    | let x, y = Dispense(P1, p) in P2   (dispense samples)
    | Equilibrate(P, t)                  (let time pass)
    | Dispose(P)                         (discard P)
    | Observe(P, idn)                    (observe sample)

where T, V, t ∈ R_{≥0}, x, y ∈ Var, p ∈ (0, 1). Moreover, let-bound variables must occur exactly once (that is, be free) in P2.
Observe(P, idn) makes an observation of protocol P after its execution, and tags that observation with the identifier idn. To include observations we extend the semantics as detailed next, considering in detail just the deterministic semantics and focusing on a few key operators; the other operators and the stochastic semantics follow similarly.
Definition 11 (Extended Deterministic Semantics) For CRN C = (A, R) let S = R^{|A|} × R_{≥0} × R_{≥0}, let Obs = R_{≥0} × N × R^{|A|}, let Obs* be a (possibly empty) list of observations, and let M = S × Obs* × R_{≥0}. The semantics of a protocol P, under environment ρ : Var → M, is a function [[P]] : (Var → M) × R_{≥0} → M defined inductively as follows:

[[Mix(P1, P2)]]ρ,t =
  let ((x10, V1, T1), Obs1, t1) = [[P1]]ρ,t
  let ((x20, V2, T2), Obs2, t2) = [[P2]]ρ,t
  (((x10·V1 + x20·V2)/(V1 + V2), V1 + V2, (T1·V1 + T2·V2)/(V1 + V2)), Obs1 :: Obs2, max(t1, t2))

[[Observe(P, idn)]]ρ,t =
  let ((x0, V, T), Obs, t1) = [[P]]ρ,t
  let O = (x0, idn, t1)
  ((x0, V, T), Obs ∪ O, t1)

[[Equilibrate(P, t)]]ρ,t0 =
  let ((x0, V, T), Obs, t1) = [[P]]ρ,t0
  ([[((A, R, x0), V, T)]](H)(t), Obs, t1 + t),

where H ∈ R_{≥0} is such that for any Equilibrate(P, t), [[((A, R, x0), V, T)]](H)(t) is well posed. If no such H exists, we say that P is ill posed.
Note that the above syntax does not prevent the programmer from assigning the same identifier to two distinct observations. We further stress that observations of the state of an experiment are often not exact, but corrupted by sensing noise; this is the case, for instance, with noisy fluorescence measurements. Such noise can easily be taken into account at the semantic level by sampling an observation from a distribution with added noise, where the noise level depends on the particular measurement technique or instrumentation. Finally, we can likewise extend the semantics to take into account noise in Dispense operations.
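In an evaluator like the sketch of Section 3.2, the Observe case reduces to appending to a log without altering the sample; a minimal illustration (names are ours):

```python
def eval_observe(sample, idn, obs_log, t_now):
    # Observe case of Definition 11: record (x0, idn, t) and return the
    # sample unchanged.
    x0, v, temp = sample
    obs_log.append((x0, idn, t_now))
    return sample
```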
5 Case Study

As a case study we consider an experimental protocol for DNA strand displacement. DNA strand displacement (DSD) is a design paradigm for DNA nano-devices [10]. In this paradigm, single-stranded DNA acts as signals and double-stranded (or more complex) DNA structures act as gates. The interactions between signals and gates make it possible to generate computational mechanisms that can operate autonomously at the molecular level [26]. The DSD programming language has been developed as a means of formally programming and analyzing such devices [20,10]. Here, we consider an AND circuit implemented in DSD, which can be represented by the reactions in Figure 2b. Strands Input1 = <1∗ 2> and Input2 = <3 4∗> represent the two inputs, while strand Output = <2 3> is the output. Strand Gate = {1∗}[2 3]{4∗} is an auxiliary strand. The Output strand is released only if both inputs react with the Gate. We consider the protocol in Figure 2a, which can be written formally as follows, where we again use let x, = Dispense(P1, p) in P2 as a short-hand for let x, y = Dispense(P1, p) in Mix(Dispose(y), x):
P1 = let In1 = ((Input1, 100.0nM), 0.1mL, 25.0°C) in
     let In2 = ((Input2, 100.0nM), 0.1mL, 25.0°C) in
     let GA = ((Output, 100.0nM), 0.1mL, 25.0°C) in
     let GB = ((GateB, 100.0nM), 0.1mL, 25.0°C) in
     let sGA, = Dispense(GA, p1) in
     let sGB, = Dispense(GB, p2) in
     let sIn1, = Dispense(In1, p3) in
     let sIn2, = Dispense(In2, p4) in
     Observe(Equilibrate(Mix(Mix(Equilibrate(Mix(sGA, sGB), t1), sIn1), sIn2), t2), idn).
The protocol proceeds as follows: the Output and GateB strands are dispensed from the original samples and left to evolve for t1 seconds to form Gate strands. Then the two inputs are dispensed from their samples, everything is mixed, and the resulting solution evolves for t2 seconds. Finally, we collect the final sample, observe the result, and associate the identifier 'idn' to the observation.

Fig. 2: (A) Graphical representation of the protocol. (B) Graphical representation of the reactions between the different DNA strands in the considered solution. For example, in the second reaction, strand {1∗}[2 3]{4∗} reacts with <1∗ 2> at rate 0.0003, and there is an inverse reaction with rate 0.1126.

According to the standard ISO 8655, for a volume of 1 mL the maximum standard deviation of a pipetting device is 0.3 µL per single operation. To incorporate such an error in our model, we make use of the stochastic semantics; thus, the concentration of the Output strand at the end of the protocol is a random variable. Moreover, the reaction rates of the physical system are commonly not known exactly and may be affected by extrinsic noise [23], leading to another source of uncertainty in the output of the protocol. To estimate the distribution of the output under both sources of noise, we extend our semantics to sample the rate of each reaction from a normal distribution with variance equal to half of its mean (sub-Poisson noise). In Figure 3a we plot 4500 executions of the protocol. The figure shows how the different sources of noise can have distinctive effects on the final outcome of the experiments.
In many experimental protocols, one of the key challenges is to synthesize optimal discrete parameters, so as to maximize the probability of obtaining a desired behaviour. Here, we assume perfect knowledge of the reaction rates of the physical system and set p1 = p2 = 0.4. Our goal is to see how the concentration of the Output changes as (p3, p4) varies over [0.45, 0.65] × [0.45, 0.65]. We are interested in the following property:

P_Safe([3.0·10^{−4}, 3.5·10^{−4}]) = Prob(Output(t0) ∈ [3.0·10^{−4}, 3.5·10^{−4}] | t0 = t_final),

where t_final is the final time of the protocol. This probability is estimated using statistical model checking [22], which in this context reduces to Monte Carlo sampling; the result is shown in Figure 3b.

Fig. 3: (A): (red) 1500 executions of the protocol assuming the physical model is fully known, so that the only source of noise is the discrete parameters of the protocol (p1, p2, p3, p4); (yellow) 1500 executions of the protocol when the rates of the physical system are sampled from a sub-Poisson distribution and the discrete operations are exact; (blue) 1500 executions of the protocol when both sources of noise are active. (B): P_Safe([3.0·10^{−4}, 3.5·10^{−4}]) as a function of p3 and p4. Each cell is estimated from 20000 executions of the protocol.

From Figure 3b it is easy to infer that the optimal value for this property is not unique (it is attained at values along the yellow band) and is obtained, for instance, at (p3, p4) = (0.5, 0.54).
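A sketch of the statistical model checking estimate used here, where `run_protocol` is an assumed stand-in for one stochastic execution of the protocol returning the final Output concentration:

```python
import numpy as np

def estimate_psafe(run_protocol, lo=3.0e-4, hi=3.5e-4, n=20000, seed=0):
    # Monte-Carlo estimate of Prob(Output(t_final) in [lo, hi]).
    rng = np.random.default_rng(seed)
    hits = sum(lo <= run_protocol(rng) <= hi for _ in range(n))
    return hits / n
```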
6 Discussion

We presented a language to model experimental biological protocols, and provided semantics for protocols in terms of PDMPs; that is, to each experimental biological protocol we associate a particular instance of a stochastic hybrid process. Our language provides a unified description of the model of the system being experimented on, together with the discrete events representing the sample-handling parts of biological protocols. Moreover, we allow the modeller to take into account uncertainties in both the model structure and the equipment tolerances. This makes our language a suitable tool for both experimental and computational biologists. Our objective has been to provide a basic language yielding an integrated representation of an experimental biological protocol. To this end, we have kept the language as simple as possible, showing how different extensions can easily be integrated. For instance, in our denotational semantics the dynamics of the physical process are given by a set of ODEs. This is accurate when the number of molecules involved is large enough, as in the discussed example of DNA strand displacement (DSD). However, in other scenarios, such as localized computation or gene expression, this might be unsatisfactory as stochasticity becomes important [5,14]; nevertheless, the semantics presented here can easily be extended to incorporate such stochasticity, for example by considering more general classes of stochastic hybrid processes, such as switching diffusions [21,27]. Another relatively simple extension is to include finite loops or operations conditioned on concentrations.

One of the main advantages of providing a formal semantics for experimental protocols is that protocols can now be analyzed quantitatively and inexpensively in silico, and classical problems in the analysis of CRNs, such as parameter estimation [6], can be addressed within this framework while taking into account the discrete operations of the protocol, which influence the dynamics of the system. An additional target is to provide automated techniques to synthesise optimal protocols, or protocols that are guaranteed to perform as desired. This can be attained by tapping into the mature literature on formal verification and strategy synthesis for PDMPs, or for any more specialized model that a given protocol can be mapped onto. Notions of finite-state abstractions [28] and probabilistic bisimulations [1], as well as algorithms for probabilistic model checking of stochastic hybrid models [21], will be relevant towards this goal.
References
1. A. Abate. Probabilistic bisimulations of switching and resetting diffusions. In
Proceedings of the 49th IEEE Conference of Decision and Control, pages 5918–
5923, 2010.
2. R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin,
A. Olivero, J. Sifakis, and S. Yovine. The algorithmic analysis of hybrid systems.
Theoretical computer science, 138(1):3–34, 1995.
3. V. Ananthanarayanan and W. Thies. Biocoder: A programming language for standardizing and automating biology protocols. Journal of Biological Engineering,
4(1):13, Nov 2010.
4. A. Andreychenko, L. Mikeev, D. Spieler, and V. Wolf. Parameter identification for
markov models of biochemical reactions. In Computer Aided Verification, pages
83–98. Springer, 2011.
5. A. Arkin, J. Ross, and H. H. McAdams. Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics,
149(4):1633–1648, 1998.
6. L. Cardelli, M. Češka, M. Fränzle, M. Kwiatkowska, L. Laurenti, N. Paoletti, and
M. Whitby. Syntax-guided optimal synthesis for chemical reaction networks. In International Conference on Computer Aided Verification, pages 375–395. Springer,
2017.
7. L. Cardelli, M. Kwiatkowska, and L. Laurenti. Programming discrete distributions
with chemical reaction networks. In International Conference on DNA-Based Computers, pages 35–51. Springer, 2016.
8. L. Cardelli, M. Kwiatkowska, and L. Laurenti. Stochastic analysis of chemical
reaction networks using linear noise approximation. Biosystems, 149:26–33, 2016.
9. L. Cardelli, M. Kwiatkowska, and L. Laurenti. A stochastic hybrid approximation
for chemical kinetics based on the linear noise approximation. In International Conference on Computational Methods in Systems Biology, pages 147–167. Springer,
2016.
10. Y.-J. Chen, N. Dalchau, N. Srinivas, A. Phillips, L. Cardelli, D. Soloveichik, and
G. Seelig. Programmable chemical controllers made from DNA. Nature nanotechnology, 8(10):755–762, 2013.
11. N. Dalchau, N. Murphy, R. Petersen, and B. Yordanov. Synthesizing and tuning
chemical reaction networks with specified behaviours. In International Workshop
on DNA-Based Computers, pages 16–33. Springer, 2015.
12. M. H. Davis. Piecewise-deterministic Markov processes: A general class of nondiffusion stochastic models. Journal of the Royal Statistical Society. Series B
(Methodological), pages 353–388, 1984.
13. M. H. Davis. Markov Models & Optimization, volume 49. CRC Press, 1993.
14. K. E. Dunn, F. Dannenberg, T. E. Ouldridge, M. Kwiatkowska, A. J. Turberfield,
and J. Bath. Guiding the folding pathway of dna origami. Nature, 525(7567):82,
2015.
15. S. N. Ethier and T. G. Kurtz. Markov processes: characterization and convergence,
volume 282. John Wiley & Sons, 2009.
16. M. Feinberg. Chemical reaction network structure and the stability of complex
isothermal reactors – The deficiency zero and deficiency one theorems. Chemical
Engineering Science, 42(10):2229–2268, 1987.
17. L. P. Freedman, I. M. Cockburn, and T. S. Simcoe. The economics of reproducibility in preclinical research. PLOS Biology, 13(6):1–9, 06 2015.
18. E. C. Hayden. The automated lab. Nature, 516(7529):131–132, 12 2014.
19. P. Kouretas, K. Koutroumpas, J. Lygeros, and Z. Lygerou. Stochastic hybrid
modeling of biochemical processes. Stochastic Hybrid Systems, 24(9083), 2006.
20. M. R. Lakin, S. Youssef, F. Polo, S. Emmott, and A. Phillips. Visual DSD: a
design and analysis tool for DNA strand displacement systems. Bioinformatics,
27(22):3211–3213, 2011.
21. L. Laurenti, A. Abate, L. Bortolussi, L. Cardelli, M. Ceska, and M. Kwiatkowska.
Reachability computation for switching diffusions: Finite abstractions with certifiable and tuneable precision. In Proceedings of the 20th International Conference
on Hybrid Systems: Computation and Control, pages 55–64. ACM, 2017.
22. A. Legay, B. Delahaye, and S. Bensalem. Statistical model checking: An overview.
RV, 10:122–135, 2010.
23. J. Paulsson. Summing up the noise in gene networks. Nature, 427(6973):415–418,
2004.
24. M. I. Sadowski, C. Grant, and T. S. Fell. Harnessing qbd, programming languages,
and automation for reproducible biology. Trends in Biotechnology, 34(3):214–227,
10 2017.
25. D. S. Scott and C. Strachey. Toward a mathematical semantics for computer
languages, volume 1. Oxford University Computing Laboratory, Programming
Research Group, 1971.
26. G. Seelig, D. Soloveichik, D. Y. Zhang, and E. Winfree. Enzyme-free nucleic acid
logic circuits. science, 314(5805):1585–1588, 2006.
27. G. Yin and C. Zhu. Hybrid switching diffusions: properties and applications, volume 63. Springer New York, 2010.
28. M. Zamani and A. Abate. Symbolic models for randomly switched stochastic
systems. Systems and Control Letters, 69:38–46, 2014.
arXiv:1604.01946v1 [cs.LG] 7 Apr 2016
Optimizing Performance of Recurrent Neural
Networks on GPUs
Jeremy Appleyard⋆ Tomáš Kočiský†‡ Phil Blunsom†‡
⋆
NVIDIA † University of Oxford ‡ Google DeepMind
[email protected]
{tomas.kocisky,phil.blunsom}@cs.ox.ac.uk
Abstract
As recurrent neural networks become larger and deeper, training times for single
networks are rising into weeks or even months. As such there is a significant
incentive to improve the performance and scalability of these networks. While
GPUs have become the hardware of choice for training and deploying recurrent
models, the implementations employed often make use of only basic optimizations
for these architectures. In this article we demonstrate that by exposing parallelism
between operations within the network, an order of magnitude speedup across a
range of network sizes can be achieved over a naive implementation. We describe
three stages of optimization that have been incorporated into the fifth release of
NVIDIA’s cuDNN: firstly optimizing a single cell, secondly a single layer, and
thirdly the entire network.
1 Introduction
Recurrent neural networks have become a standard tool for modelling sequential dependencies in
discrete time series and have underpinned many recent advances in deep learning, from generative
models of images [1, 2] to natural language processing [3, 4, 5, 6, 7, 8, 9]. A key factor in these
recent successes has been the availability of powerful Graphics Processing Units (GPUs) which are
particularly effective for accelerating the large matrix products at the heart of recurrent networks.
However as recurrent networks become deeper [10] and their core units more structured [11, 12],
it has become increasingly difficult to maximally utilise the computational capacity of the latest
generation of GPUs.
There have been several studies optimizing the implementation of neural networks for GPUs, particularly in the case of convolutional neural networks [13, 14]. While GPUs are already widely used
to compute RNNs [15, 16, 17, 18], there has been less work on the optimization of RNN runtime.
In this article we present a number of options for going beyond straightforward RNN GPU implementations that allow us to achieve close to maximum computational throughput for common network architectures. These enhancements are implemented in the fifth version of NVIDIA’s cuDNN
library for Simple RNN, GRU, and LSTM architectures.
2 Implementation
In this section we will consider the performance of the forward and backward propagation passes through an LSTM network [11]. This is a standard four-gate LSTM network without peephole connections. The equations to compute the output at timestep t in the forward pass of this LSTM are
given below:
it = σ(Wi xt + Ri ht−1 + bi )
ft = σ(Wf xt + Rf ht−1 + bf )
ot = σ(Wo xt + Ro ht−1 + bo )
c′t = tanh(Wc xt + Rc ht−1 + bc )
ct = ft ◦ ct−1 + it ◦ c′t
ht = ot ◦ tanh(ct )

for l in layers
    for j in iterations
        i(l,j)' = A_i(l) * i(l,j)
        h(l,j)' = A_h(l) * h(l,j)
        for pointwiseOp in pointwiseOps
            do pointwiseOp

Listing 1: Pseudocode demonstrating the starting point for optimization of the forward pass.
Most of the same strategies found to be beneficial to LSTM performance are easily transferable to
other types of RNN.
2.1 Naive implementation
There are many ways to naively implement a single propagation step of a recurrent neural network.
As a starting point we will consider an implementation where each individual operation (ie. matrix multiplication, sigmoid, point-wise addition, etc.) is implemented as a separate kernel. While the
GPU executes the operations within each kernel in parallel, the kernels are executed back-to-back
sequentially. The forward pass performance of this implementation is poor, achieving approximately 0.4 TFLOPS on a test case with hidden state size 512 and minibatch 64, less than 10% of the peak performance of the hardware (approximately 5.8 TFLOPS when running at base clocks).¹

¹ All runtime and FLOP measurements reported are based off the mean of 100 executions on an NVIDIA M40 GPU (https://images.nvidia.com/content/tesla/pdf/nvidia-teslam40-datasheet.pdf), at default application clocks with auto-boost disabled. The host CPU is an Intel Xeon CPU E5-2690 v2 @ 3.00GHz.
A widely used optimization is to combine matrix operations sharing the same input into a single
larger matrix operation. In the forward pass the standard formulation of an LSTM leads to eight
matrix-matrix multiplications: four operating on the recurrent input (R∗ ht−1 ), four operating on
the input from the previous layer (W∗ xt ). In these groups of four the input is shared, although the
weights are not. As such, it is possible to reformulate a group of four matrix multiplications into a
single matrix multiplication of four times the size. As larger matrix operations are more parallel (and
hence more efficient), this roughly doubles forward pass throughput to 0.8 TFLOPS in the test case
described above. This reformulation is very easy to implement in most deep learning frameworks
leading to its wide use. A similar optimization is possible for GRU units, with two groups of three
matrices able to be grouped. Single gate RNNs cannot benefit from this optimization as they have
only one matrix multiplication at each input. The backward pass also benefits from this optimization
as four inputs are transformed into one output. Pseudocode for this implementation is given in
Listing 1.
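To make the grouping concrete, the following is a minimal cuBLAS sketch (our own illustration, not cuDNN source; the names are hypothetical). W_cat stacks the four gate weight matrices W_i, W_f, W_o, W_c row-wise into one (4*hidden) x input matrix in column-major order, the cuBLAS convention, so a single SGEMM produces all four gate pre-activations for the whole minibatch:

#include <cublas_v2.h>

// One grouped GEMM in place of four quarter-sized gate GEMMs.
void gate_preactivations(cublasHandle_t handle,
                         int hidden, int input, int minibatch,
                         const float* W_cat,  // (4*hidden) x input
                         const float* x_t,    // input x minibatch
                         float* gates)        // (4*hidden) x minibatch
{
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                4 * hidden, minibatch, input,
                &alpha, W_cat, 4 * hidden,
                x_t, input,
                &beta, gates, 4 * hidden);
}

The same call with the R matrices stacked analogously covers the recurrent side.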
While some of the optimizations that follow have been implemented before, those that have are by
no means universal nor standard practice.
2.2 Single Cell
2.2.1 Streamed matrix operations
for l in layers
    for j in iterations
        set stream 0
        i(l,j)' = A_i(l) * i(l,j)
        set stream 1
        h(l,j)' = A_h(l) * h(l,j)
        wait for stream 0
        do pointwise ops

Listing 2: Pseudocode demonstrating forward pass single cell optimizations.

The matrix multiplications performed by RNNs often have insufficient parallelism for optimal performance on the GPU. Current state-of-the-art GEMM kernels are implemented with each CUDA block computing a rectangular tile of the output. The dimensions of these tiles are typically in the
range 32 to 128. Partitioning the matrix multiplications required for the forward pass of a hidden
state size 512, minibatch 64 LSTM with 128x64 tiles gives a total of 16 CUDA blocks. As blocks
reside on a single GPU streaming multiprocessor (SM), and modern top-of-the-range GPUs (eg.
our M40) currently have 24 streaming multiprocessors, this matrix multiplication will use at most
two thirds of the available GPU performance. As it is desirable to have multiple blocks per SM to maximise latency hiding, it is clear that to achieve better performance we must increase parallelism.
One easy way to increase the parallelism of a single RNN cell is to execute both matrix multiplications on the GPU concurrently. By using CUDA streams we can inform the hardware that the matrix multiplications are independent. This doubles the amount of parallelism available to the GPU, increasing performance by up to 2x for small matrix multiplications. For larger matrix multiplications, streaming is still useful as it helps to minimize the so-called "tail effect". If the number of blocks launched to the GPU is only sufficient to fill its SMs a few times, they can be thought of as passing through the GPU in waves. All of the blocks in the first wave finish at approximately the same time, and as they finish the second wave begins. This continues until there is no more work. If the number of waves is small, the last wave will often have less work to do than the others, creating a "tail" of low performance. By increasing parallelism this tail can be overlapped with another operation, reducing the performance penalty.
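As an illustration of this streaming (ours, not cuDNN's implementation; names and shapes are hypothetical), one recurrent step can issue its two independent GEMMs on separate CUDA streams, with an event so that the pointwise work waits only on the data it needs:

#include <cublas_v2.h>
#include <cuda_runtime.h>

// Wx = W*x on stream s0 and Rh = R*h on stream s1 can overlap; work
// launched on s1 afterwards waits for the s0 GEMM via the event.
void streamed_step(cublasHandle_t handle,
                   cudaStream_t s0, cudaStream_t s1, cudaEvent_t done0,
                   int m, int n, int k,
                   const float* W, const float* x, float* Wx,
                   const float* R, const float* h, float* Rh)
{
    const float alpha = 1.0f, beta = 0.0f;

    cublasSetStream(handle, s0);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, W, m, x, k, &beta, Wx, m);
    cudaEventRecord(done0, s0);

    cublasSetStream(handle, s1);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, R, m, h, k, &beta, Rh, m);

    // The fused pointwise kernel is launched on s1 after this call.
    cudaStreamWaitEvent(s1, done0, 0);
}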
2.2.2 Fusion of point-wise operations
Although parallelism comes naturally to point-wise operations, it was found that they were being
executed inefficiently. This is for two reasons: firstly because there is a cost associated with launching a kernel to the GPU; secondly because it is inefficient to move the output of one point-wise
operation all the way out to GPU main memory before reading it in again moments later for the
next. By their nature point-wise operations are independent, and as such, it is possible to fuse all of
the point-wise kernels into one larger kernel.
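A minimal sketch of such a fused kernel for the LSTM equations of Section 2 is given below (our illustration; cuDNN's actual kernel is not public). It assumes the four gate pre-activations, with biases already added by the GEMMs, are stacked per column in gates:

// Launch example: lstm_pointwise<<<(hidden*minibatch + 255)/256, 256>>>(...)
__global__ void lstm_pointwise(int hidden, int minibatch,
                               const float* gates,   // (4*hidden) x minibatch
                               const float* c_prev,  // hidden x minibatch
                               float* c, float* h)   // hidden x minibatch
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= hidden * minibatch) return;
    int row = idx % hidden, col = idx / hidden;
    const float* g = gates + col * 4 * hidden;

    float i  = 1.0f / (1.0f + expf(-g[row]));               // input gate
    float f  = 1.0f / (1.0f + expf(-g[row + hidden]));      // forget gate
    float o  = 1.0f / (1.0f + expf(-g[row + 2 * hidden]));  // output gate
    float cc = tanhf(g[row + 3 * hidden]);                  // candidate c't

    float c_new = f * c_prev[idx] + i * cc;  // ct = ft o ct-1 + it o c't
    c[idx] = c_new;
    h[idx] = o * tanhf(c_new);               // ht = ot o tanh(ct)
}

Every intermediate value stays in registers; only gates, c_prev, c and h touch GPU main memory.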
2.3 Single Layer
A single recurrent layer comprises many cells, the recurrent input of each depending on the output
of the previous. The input from the previous layer may not have such a dependency and it is often
possible to concatenate the inputs for multiple time steps producing a larger, more efficient, matrix
multiplication. Selecting the number of steps to concatenate over is not trivial: more steps leads to a more efficient matrix multiplication, but fewer steps reduces the time for which a recurrent operation may be left waiting on its input. The exact number of steps will depend not only on the hyper-parameters, but also on the target hardware.
Another operation that is possible, when considering a layer in its entirety, is re-ordering the layout
of the weight matrices. As the same weight matrices are used repeatedly over the course of a
layer the cost of reordering is typically small compared to the cost of operating on the matrices.
In our tests it was found that pre-transposing the weight matrix lead to noticeable performance
improvements. Note that because the transpose of the weight matrix is used in the backward pass
this pre-transposition must be performed every pass through the network.
for l in layers
    A_it(l) = transpose(A_i(l))
    A_ht(l) = transpose(A_h(l))
    for j in iterations, step s
        set stream 0
        i(l,j:j+s)' = A_it(l) * i(l,j:j+s)
        for k in 1,s
            set stream 1
            h(l,j+k)' = A_ht(l) * h(l,j+k)
            wait for stream 0 to complete operation on j+k
            do pointwise ops
Listing 3: Pseudocode demonstrating optimizations across the forward pass of a layer.
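The transpose(.) steps in Listings 3 and 4 can be realized once per pass with cublasSgeam, cuBLAS's matrix addition routine (C = alpha*op(A) + beta*op(B)), which with beta = 0 acts as an out-of-place transpose. A minimal sketch with hypothetical names:

#include <cublas_v2.h>

// One-off pre-transpose; its cost is amortized over every timestep.
void pretranspose(cublasHandle_t handle, int rows, int cols,
                  const float* A,  // rows x cols, column-major
                  float* At)       // cols x rows
{
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgeam(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                cols, rows,        // dimensions of the result At
                &alpha, A, rows,
                &beta, At, cols,   // B is ignored since beta == 0
                At, cols);
}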
for l in layers
    A_it(l) = transpose(A_i(l))
    A_ht(l) = transpose(A_h(l))
while not Complete
    l,it = get next task
    for j in it->it+s
        set stream 0+2*l
        i(l,j:j+s)' = A_it(l) * i(l,j:j+s)
        for k in 1,s
            set stream 1+2*l
            h(l,j+k)' = A_ht(l) * h(l,j+k)
            wait for stream 0+2*l to complete operation on j+k
            do pointwise ops
Listing 4: Pseudocode demonstrating the final optimized forward pass.
2.4 Multiple Layers
It is becoming increasingly common for RNNs to feature multiple recurrent layers “stacked” such
that each recurrent cell feeds its output directly into a recurrent cell in the next layer. In this situation,
it is possible to exploit the parallelism between recurrent layers: the completion of a recurrent cell
not only resolves the dependency on the next iteration of the current layer, but also on the current
iteration of the next layer. This allows multiple layers to be computed in parallel, greatly increasing
the amount of work the GPU has at any given time.
2.4.1 Scheduling
As launching work to the GPU takes a small, but not insignificant, amount of time, it is important to consider the order in which kernels are launched to the GPU. For example, if GPU resources
are available it is almost always preferable to launch a kernel with all of its dependencies resolved,
rather than a kernel which may be waiting some time for its dependencies to be cleared. In this way
as much parallelism as possible can be exposed. In order to do this we chose a simple scheduling
rule whereby the next work to be scheduled is that with the fewest edges to traverse before reaching
the “first” recurrent cell. If one considers a recurrent network as a 2D grid of cells, this leads to a
diagonal “wave” of launches propagating from the first cell.
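A small host-side sketch of this rule (our illustration): cell (l, j) depends on cells (l, j-1) and (l-1, j), so its distance to the first cell is l + j, and all cells sharing a value of l + j are mutually independent. Sorting launches by this value produces exactly the diagonal wave:

#include <utility>
#include <vector>

// Enumerate cells wave by wave; cells within a wave may run concurrently.
std::vector<std::pair<int, int>> launch_order(int layers, int iterations) {
    std::vector<std::pair<int, int>> order;
    for (int wave = 0; wave < layers + iterations - 1; ++wave)
        for (int l = 0; l < layers; ++l) {
            int j = wave - l;
            if (j >= 0 && j < iterations)
                order.push_back({l, j});  // fewest edges to the first cell go first
        }
    return order;
}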
2.5 Performance
The impact of each of the optimizations described on the forward pass of a 1000 step, four layer
LSTM network with hidden state size 512 and an input minibatch of 64 is shown in Table 1. For this
network we achieve an ~11x speedup over a completely naive implementation, and a ~6x speedup
over an implementation with the standard GEMM grouping optimization.
Optimization                   Time per cell (us)   Speedup
Naive                          777                  (1.0x)
#1 Grouped GEMMs               400                  1.9x
#2 Streamed GEMMs              280                  2.8x
#3 Fused point-wise            146                  5.3x
#4 Pre-transpose               125                  6.2x
#5 Batching inputs (2-way)     119                  6.5x
#6 Overlapping layers          70                   11.1x
Table 1: LSTM forward pass performance. Each optimization was applied on top of the previous.
These measurements were made on a 100 iteration, four layer LSTM with a hidden state size of 512
and a minibatch of 64 using cuBLAS 7.5.
Given sufficient recurrent steps there are three variables that are expected to significantly influence
performance of an RNN implementation. These are: hidden state size, minibatch size and number of
layers. Fixing the number of layers to four, Figure 1 shows the impact of each of these optimizations
across a wide range of hidden state sizes and minibatch sizes.
In some cases increasing the number of layers from one to four doubles throughput (ie. 4x the
work in only 2x the time). This performance improvement is particularly high in low parallelism
cases, where the minibatch is small, the hidden state size is small, or both are small. One feature
of note is the reduction of performance in the minibatch 32 case at high hidden state size. This
is attributable to cuBLAS choosing a different path of execution for matrix multiplications due to
an internal heuristic. As cuBLAS is unaware of the algorithm it is being used for, this is arguably
reasonable behaviour, and performance for a single layer continues to climb as the problem size
increases.
The most significant speedup is seen when the minibatch is 64. As the minibatch size increases
so does the amount of parallelism already available to the GPU, so optimization strategies focused
around increasing parallelism are less effective. Speedup is also lower with larger hidden layer sizes,
for the same reason. Despite this, even for the largest problem benchmarked (minibatch 256, hidden
state size 4096) the increase in parallelism due to layer overlapping still brings better performance.
At larger minibatches it becomes less clear that some of the individual optimizations bring improvement. Batching inputs is actually found to be detrimental in many cases for minibatch sizes other
than 32, likely due to the trade-off discussed in Section 2.3. In other cases, changes can bring a significant improvement for particular problem sizes, while causing a slowdown for others. This makes it hard to say for certain that the combination of all optimizations will give the fastest runtime for a given set of hyperparameters; however, excepting batched inputs, each optimization helps in most cases.
2.6 Weight Update
The above optimizations only apply to propagation steps. By completing the gradient propagation
before starting the weight update, the weight update becomes very efficient. A single large matrix
multiplication can be used to update each matrix with no dependencies and this will usually achieve
close to peak performance. Updating the bias weights is very cheap in comparison to updating the
matrices.
3 cuDNN
The optimizations described in Section 2 have been implemented in the fifth version of NVIDIA’s
cuDNN library for single gate RNNs, GRUs and LSTMs. The performance of this implementation is
shown in Figure 2. For this implementation it was possible to interact at a lower level with cuBLAS
than is available via the current interface, and to tune the heuristics used to determine the mode of
operation of cuBLAS to this use-case. In particular, cuBLAS will often pick a more parallel but less
resource efficient route if it detects that the GPU is likely to be underused by a call. As we know the
expected amount of parallelism at a higher level than cuBLAS, overriding this behaviour to favour
more resource efficient paths in cases of high streamed parallelism sometimes resulted in an overall speedup. It is hoped that an interface to allow this sort of manual tuning will be incorporated into a future release of cuBLAS.

Figure 1: Impact of optimizations on the forward pass of a four layer LSTM network. Each panel plots TFLOPS against hidden state size (128 to 4096) for one minibatch size (32, 64, 128 or 256), with curves for the naive implementation, optimizations #1-#4, and 1 layer vs. 4 layers. The peak performance of the M40 GPU used at fixed base clocks is approximately 5.8 TFLOPS. [Plot data not recoverable from the text extraction.]

Figure 2: Forward and backward performance of different network types (Simple RNN, GRU, LSTM) using cuDNN v5 RC, for 1 and 4 layers, plotted as TFLOPS against hidden state size. The peak performance of the M40 GPU used at fixed base clocks is approximately 5.8 TFLOPS. [Plot data not recoverable from the text extraction.]
4 Conclusions
We have presented a method by which recurrent neural networks can be executed on GPUs at high efficiency. While previous implementations exist achieving good acceleration on GPUs [19, 20],
to our knowledge none achieve the levels of performance we achieve using the methods discussed
above. The primary strategy used was to expose as much parallelism to the GPU as possible so as
to maximize the usage of hardware resources. The methods are particularly efficient when working
on smaller deeper recurrent networks where individual layers have less inherent parallelism.
One feature of the problem that we do not exploit for performance benefit is the reuse of the parameters between recurrent iterations. It is conceivable that these parameters could be stored in a lower
level of GPU memory and reused from iteration to iteration. In bandwidth bound regimes this could
potentially greatly improve performance. There are several drawbacks to this method: firstly, the
amount of storage for parameters is limited, and hence there would be an upper limit on the number
of parameters. Secondly, any implementation would have to make assumptions which are invalid in
the CUDA programming model, and hence would be prone to unexpected failure.
Source code able to reproduce the forward pass timings for each optimization step is available at https://github.com/parallel-forall/code-samples/blob/master/posts/rnn/LSTM.cu. This code closely mirrors the code used to write the core RNN functionality from version 5 of NVIDIA's cuDNN library, which is available for download at https://developer.nvidia.com/cudnn.
References
[1] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra.
DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages
1462–1471, 2015.
[2] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In Proceedings of the International Conference on Learning Representations, ICLR, Puerto Rico, 2016.
[3] Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings
of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–
1709, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.
[4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by
jointly learning to align and translate. In Proceedings of the International Conference on
Learning Representations, ICLR, San Diego, 2015.
[5] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27. 2014.
[6] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural
image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition,
CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156–3164, 2015.
[7] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption
generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2048–2057, 2015.
[8] Tomas Mikolov, Martin Karafiát, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference
of the International Speech Communication Association, Makuhari, Chiba, Japan, September
26-30, 2010, pages 1045–1048, 2010.
[9] Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transitionbased dependency parsing with stack long short-term memory. In Proceedings of the 53rd
Annual Meeting of the Association for Computational Linguistics and the 7th International
Joint Conference on Natural Language Processing, pages 334–343, Beijing, China, July 2015.
Association for Computational Linguistics.
[10] Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385 of
Studies in Computational Intelligence. Springer, 2012.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation,
9(8):1735–1780, November 1997.
[12] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8,
Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–
111, Doha, Qatar, October 2014. Association for Computational Linguistics.
8
[13] Andrew Lavin. Fast algorithms for convolutional neural networks. CoRR, abs/1509.09308,
2015.
[14] Nicolas Vasilache, Jeff Johnson, Michaël Mathieu, Soumith Chintala, Serkan Piantino, and
Yann LeCun. Fast convolutional nets with fbfft: A GPU performance evaluation. CoRR,
abs/1412.7580, 2014.
[15] Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan C. Catanzaro,
Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel,
Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley,
Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman,
Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang,
Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. Deep speech 2: End-to-end speech
recognition in English and Mandarin. CoRR, abs/1512.02595, 2015.
[16] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring
the limits of language modeling. CoRR, abs/1602.02410, 2016.
[17] Nicholas Léonard, Sagar Waghmare, Yang Wang, and Jin-Hwa Kim. rnn: Recurrent library
for torch. CoRR, abs/1511.07889, 2015.
[18] Felix Weninger. Introducing currennt: The Munich open-source CUDA recurrent neural network toolkit. Journal of Machine Learning Research, 16:547–551, 2015.
[19] Tomas Kocisky. oxnn. https://github.com/tkocisky/oxnn, 2015.
[20] Justin Johnson. Torch-rnn. https://github.com/jcjohnson/torch-rnn, 2016.
arXiv:1702.00458v4 [] 21 Nov 2017
Convergence Results for Neural Networks via Electrodynamics
Rina Panigrahy
Google Inc.
Mountain View, CA
[email protected]
Ali Rahimi
Google Inc.
Mountain View, CA
[email protected]
Sushant Sachdeva∗
University of Toronto
Toronto, Canada
[email protected]
Qiuyi Zhang†
University of California Berkeley,
Berkeley, CA
[email protected]
November 22, 2017
Abstract
We study whether a depth two neural network can learn another depth two network using
gradient descent. Assuming a linear output node, we show that the question of whether gradient
descent converges to the target function is equivalent to the following question in electrodynamics:
Given k fixed protons in Rd , and k electrons, each moving due to the attractive force from the
protons and repulsive force from the remaining electrons, whether at equilibrium all the electrons
will be matched up with the protons, up to a permutation. Under the standard electrical force,
this follows from the classic Earnshaw’s theorem. In our setting, the force is determined by
the activation function and the input distribution. Building on this equivalence, we prove the
existence of an activation function such that gradient descent learns at least one of the hidden
nodes in the target network. Iterating, we show that gradient descent can be used to learn the
entire network one node at a time.
1 Introduction
Deep learning has resulted in major strides in machine learning applications including speech
recognition, image classification, and ad-matching. The simple idea of using multiple layers of nodes
with a non-linear activation function at each node allows one to express any function. To learn a
certain target function we just use (stochastic) gradient descent to minimize the loss; this approach
has resulted in significantly lower error rates for several real world functions, such as those in the
above applications. Naturally the question remains: how close are we to the optimal values of
the network weight parameters? Are we stuck in some bad local minima? While there are several
recent works [CHM+ 15, DPG+ 14, Kaw16] that have tried to study the presence of local minima,
the picture is far from clear.
There has been some work studying how well neural networks can learn certain synthetic function classes (e.g. polynomials [APVZ14], decision trees). In this work we study how well neural networks can learn neural networks with gradient descent. Our focus here, via the framework of proper learning, is to understand if a neural network can learn a function from the same class (and hence achieve vanishing error).

∗ This work was done when the author was a Research Scientist at Google, Mountain View, CA.
† Part of this work was done when the author was an intern at Google, Mountain View, CA.
Specifically, if the target function is a neural network with randomly initialized weights, and we
attempt to learn it using a network with the same architecture, then, will gradient descent converge
to the target function?
Experimental simulations (see Figure 1 and Section 5 for further details) show that for depth
2 networks of different widths, with random network weights, stochastic gradient descent of a
hypothesis network with the same architecture converges to a squared ℓ2 error that is a small
percentage of a random network, indicating that SGD can learn these shallow networks with random
weights. Because our activations are sigmoidal from -1 to 1, the training error starts from a value of
about 1 (random guessing) and diminishes quickly to under 0.002. This seems to hold even when
the width, the number of hidden nodes, is substantially increased (even up to 125 nodes), but depth
is held constant at 2.
In this paper, we attempt to understand this phenomenon theoretically. We prove that, under
some assumptions, depth-2 neural networks can learn functions from the same class with vanishingly
small error using gradient descent.
Figure 1: Test Error of Depth 2 Networks of Varying Width.
1.1 Results and Contributions.
We theoretically investigate the question of convergence for networks of depth two. Our main
conceptual contribution is that for depth 2 networks where the top node is a sum node, the question
of whether gradient descent converges to the desired target function is equivalent to the following
question in electrodynamics: Given k fixed protons in Rd , and k moving electrons, with all the
electrons moving under the influence of the electrical force of attraction from the protons and
repulsion from the remaining electrons, at equilibrium, are all the electrons matched up with all the
fixed protons, up to a permutation?
In the above, k is the number of hidden units, d is the number of inputs, the position of each fixed charge is the input weight vector of a hidden unit in the target network, and the initial positions
of the moving charges are the initial values of the weight vectors for the hidden units in the learning
network. The motion of the charges essentially tracks the change in the network during gradient
descent. The force between a pair of charges is not given by the standard electrical force of 1/r²
(where r is the distance between the charges), but by a function determined by the activation and
the input distribution. Thus the question of convergence in these simplified depth two networks
can be resolved by studying the equivalent electrodynamics question with the corresponding force
function.
Theorem 1.1 (informal statement of Theorem 2.3). Applying gradient descent for learning the
output of a depth two network with k hidden units with activation σ, and a linear output node, under
squared loss, using a network of the same architecture, is equivalent to the motion of k charges in
the presence of k fixed charges where the force between each pair of charges is given by a potential
function that depends on σ and the input distribution.
Based on this correspondence we prove the existence of an activation function such that the
corresponding gradient descent dynamics under standard Gaussian inputs result in learning at least
one of the hidden nodes in the target network. We then show that this allows us to learn the
complete target network one node at a time. For more realistic activation functions, we only obtain
partial results. Throughout, we work with the population loss, i.e., we assume the number of samples is effectively infinite.
Theorem 1.2 (informal statement of Theorem 4.1). There is an activation function such that
running gradient descent for minimizing the squared loss along with `2 regularization for standard
Gaussian inputs, at convergence, we learn at least one of the hidden weights of the target neural
network.
We prove that the above result can be iterated to learn the entire network node-by-node using
gradient descent (Theorem 4.6). Our algorithm learns a network with the same architecture and
number of hidden nodes as the target network, in contrast with several existing improper learning
results.
In the appendix, we show some weak results for more practical activations. For the sign activation,
we show that for the loss with respect to a single node, the only local minima are at the hidden
target nodes with high probability if the target network has a randomly picked top layer. For the
polynomial activation, we derive a similar result under the assumption that the hidden nodes are
orthonormal.
Name of Activation    Potential Φ(θ, w)              Convergence?
Almost λ-harmonic     Complicated (see Lem 4.2)      Yes, Thm 4.6
Sign                  1 − (2/π) cos−1 (θT w)         Yes for d = 2, Lem G.2
Polynomial            (θT w)m                        Yes, for orthonormal wi , Lem G.3
Table 1: Activation, Potentials, and Convergence Results Summary
1.2 Intuition and Techniques.
Note that for the standard electric potential function given by Φ = 1/r where r is the distance
between the charges, it is known from Earnshaw’s theorem that an electrodynamic system with some
fixed protons and some moving electrons is at equilibrium only when the moving electrons coincide
with the fixed protons. Given our translation above between electrodynamic systems and depth 2
networks (Section 2), this would imply learnability of depth 2 networks under gradient descent under
`2 loss, if the activation function corresponds to the electrostatic potential. However, there exists no
activation function σ corresponding to this Φ.
The proof of Earnshaw’s theorem is based on the fact that the electrostatic potential is harmonic,
i.e, its Laplacian (trace of its Hessian) is identically zero. This ensures that at every critical point,
there is direction of potential reduction (unless the hessian is identically zero). We generalize
these ideas to potential functions that are eigenfunctions of the Laplacians, λ-harmonic potentials
(Section 3). However, these potentials are unbounded. Subsequently, we construct a non-explicit
activation function such that the corresponding potential is bounded and is almost λ-harmonic,
i.e., it is λ-harmonic outside a small sphere (Section 4). For this activation function, we show at
a stable critical point, we must learn at least one of the hidden nodes. Gradient descent (possibly
with some noise, as in the work of Ge et al. [GHJY15]) is believed to converge to stable critical
points. However, for simplicity, we descend along directions of negative curvature to escape saddle
points. Our activation lacks some regularity conditions required in [GHJY15]. We believe the results
in [JGN+ 17] can be adapted to our setting to prove that perturbed gradient descent converges to
stable critical points.
There is still a large gap between theory and practice. However, we believe our work can offer
some theoretical explanations and guidelines for the design of better activation functions for gradient-based training algorithms. For example, better accuracy and training speed were reported when
using the newly discovered exponential linear unit (ELU) activation function in [CUH15, SKS+ 16].
We hope for more theory-backed answers to these and many other questions in deep learning.
1.3 Related Work.
If the activation functions are linear or if some independence assumptions are made, Kawaguchi
shows that the only local minima are the global minima [Kaw16]. Under the spin-glass and other
physical models, some have shown that the loss landscape admits well-behaved local minima that usually occur when the overall error is small [CHM+ 15, DPG+ 14]. When only training error is considered, some have shown that a global minimum can be achieved if the neural network contains
sufficiently many hidden nodes [SC16]. Recently, Daniely has shown that SGD learns the conjugate
kernel class [Dan17]. Under simplifying assumptions, some results for learning ReLUs with gradient
descent are given in [Tia17, BG17]. Our research is inspired by [APVZ14], where the authors show
that for polynomial target functions, gradient descent on neural networks with one hidden layer
converges to low error, given a large number of hidden nodes, and under complex perturbations,
there are no robust local minima. Even more recently, similar results about the convergence of SGD
for two-layer neural networks have been established for a polynomial activation function under a
more complex loss function [GLM17]. In [LY17], the authors study the same problem as ours with the ReLU activation, in the setting where the lower layer of the network is close to the identity and the upper layer has all weights equal to one. This corresponds to the case where each electron is close to a distinct proton;
under these assumptions they show that SGD learns the true network.
Under worst case assumptions, there have been hardness results for even simple networks. A neural
network with one hidden unit and sigmoidal activation can admit exponentially many local minima
[AHW96]. Backpropagation has been proven to fail in a simple network due to the abundance of bad
local minima [BRS89]. Training a 3-node neural network with one hidden layer is NP-complete
[BR88]. But, these and many similar worst-case hardness results are based on worst case training
data assumptions. However, by using a result in [KS06] that learning a neural network with threshold
activation functions is equivalent to learning intersection of halfspaces, several authors showed that
under certain cryptographic assumptions, depth-two neural networks are not efficiently learnable
with smooth activation functions [LSSS14, ZLWJ15, ZLJ16].
Due to the difficulty of analyzing non-convex gradient descent in deep learning, many
have turned to improper learning and the study of non-gradient methods to train neural networks.
Janzamin et al. use tensor decomposition methods to learn the shallow neural network weights,
provided access to the score function of the training data distribution [JSA15]. Eigenvector and
tensor methods are also used to train shallow neural networks with quadratic activation functions in
[LSSS14]. Combinatorial methods that exploit layerwise correlations in sparse networks have also been
rigorously analyzed in [ABGM14]. Kernel methods, ridge regression, and even boosting were explored
for regularized neural networks with smooth activation functions in [SSSS11, ZLWJ15, ZLJ16].
Non-smooth activation functions, such as the ReLU, can be approximated by polynomials and are
also amenable to kernel methods [GKKT16]. These methods, however, are very different from the simple and popular SGD.
2 Deep Learning, Potentials, and Electron-Proton Dynamics

2.1 Preliminaries.
We will work in the space M = R^d. We denote the gradient and Hessian of f as ∇_{R^d} f and ∇²_{R^d} f respectively. The Laplacian is defined as Δ_{R^d} f = Tr(∇²_{R^d} f). If f is multivariate with variable x_i, then let f_{x_i} be the restriction of f to the variable x_i with all other variables fixed. Let ∇_{x_i} f and Δ_{x_i} f be the gradient and Laplacian, respectively, of f_{x_i} with respect to x_i. Lastly, we say x is a critical point of f if ∇f does not exist or ∇f = 0.
We focus on learning depth two networks with a linear activation on the output node. If the network takes inputs x ∈ R^d (say from some distribution D), then the network output, denoted f(x), is a sum over k = poly(d) hidden units with weight vectors w_i ∈ R^d, activation σ(x, w) : R^d × R^d → R, and output weights b_i ∈ R. Thus, we can write f(x) = Σ_{i=1}^k b_i σ(x, w_i). We denote this concept class C_{σ,k}. Our hypothesis concept class is also C_{σ,k}.
Let a = (a_1, ..., a_k) and θ = (θ_1, ..., θ_k); similarly for b, w. Our guess is f̂(x) = Σ_{i=1}^k a_i σ(x, θ_i). We define Φ, the potential function corresponding to the activation σ, as

    Φ(θ, w) = E_{X∼D}[σ(X, θ) σ(X, w)].
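As a quick numerical sanity check of this definition (our illustration, not part of the paper), the potential for a given activation can be estimated by Monte Carlo over Gaussian inputs. The sketch below does this for the sign activation and compares against the closed form 1 − (2/π) cos⁻¹(θᵀw) that appears in Table 1 for unit vectors:

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const int d = 3, n = 1000000;
    const double theta[d] = {1.0, 0.0, 0.0}, w[d] = {0.6, 0.8, 0.0};  // unit vectors
    std::mt19937 gen(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    double sum = 0.0;
    for (int s = 0; s < n; ++s) {
        double pt = 0.0, pw = 0.0;   // projections theta.X and w.X
        for (int i = 0; i < d; ++i) {
            double xi = gauss(gen);
            pt += theta[i] * xi;
            pw += w[i] * xi;
        }
        sum += (pt >= 0 ? 1.0 : -1.0) * (pw >= 0 ? 1.0 : -1.0);
    }
    double dot = theta[0] * w[0] + theta[1] * w[1] + theta[2] * w[2];
    const double pi = std::acos(-1.0);
    std::printf("monte carlo: %f   closed form: %f\n",
                sum / n, 1.0 - (2.0 / pi) * std::acos(dot));
}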
We work directly with the true squared loss error L(a, θ) = E_{x∼D}[(f − f̂)²]. To simplify L, we re-parametrize a by −a and expand:

    L(a, θ) = E_{X∼D}[ ( Σ_{i=1}^k a_i σ(X, θ_i) + Σ_{i=1}^k b_i σ(X, w_i) )² ]
            = Σ_{i=1}^k Σ_{j=1}^k ( a_i a_j Φ(θ_i, θ_j) + 2 a_i b_j Φ(θ_i, w_j) + b_i b_j Φ(w_i, w_j) ),    (1)
Given D, the activation function σ, and the loss L, we attempt to show that we can use some variant
of gradient descent to learn, with high probability, an ε-approximation of w_j for some (or all) j. Note that our loss is not jointly convex, though it is quadratic (and hence convex) in a.
In this paper, we restrict our attention to translationally invariant activations and potentials. Specifically, we may write Φ(θ, w) = h(θ − w) for some function h. Furthermore, a translationally invariant potential is radial if it is a function of r = ‖θ − w‖.
Remark: Translationally symmetric potentials satisfy that Φ(θ, θ) is a positive constant. We normalize Φ(θ, θ) = 1 for the rest of the paper.
We assume that our input distribution D = N (0, Id×d ) is fixed as the standard Gaussian in Rd .
This assumption is not critical and a simpler distribution might lead to better bounds. However, for
arbitrary distributions, there are hardness results for PAC-learning halfspaces [KS06].
We call a potential function realizable if it corresponds to some activation σ. The following
theorem characterizes realizable translationally invariant potentials under standard Gaussian inputs.
Proofs and a similar characterization for rotationally invariant potentials can be found in Appendix B.
Theorem 2.1. Let M = R^d, let Φ be square-integrable, and let F(Φ) be integrable. Then Φ is realizable under standard Gaussian inputs if F(Φ)(ω) ≥ 0, and the corresponding activation is σ(x) = (2π)^{d/4} e^{xᵀx/4} F⁻¹(√(F(Φ)))(x), where F is the generalized Fourier transform in R^d.
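A concrete worked instance, consistent with the sign row of Table 1: for the sign activation σ(x, w) = sign(wᵀx) and unit vectors θ, w, rotation invariance of the Gaussian gives P[sign(θᵀX) ≠ sign(wᵀX)] = ∠(θ, w)/π, so

    Φ(θ, w) = E[sign(θᵀX) sign(wᵀX)] = 1 − 2∠(θ, w)/π = 1 − (2/π) cos⁻¹(θᵀw).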
2.2 Electron-Proton Dynamics
By interpreting the pairwise potentials as electrostatic attraction potentials, we notice that our
dynamics is similar to electron-proton type dynamics under potential Φ, where wi are fixed point
charges in Rd and θi are moving point charges in Rd that are trying to find wi . The total force on
each charge is the sum of the pairwise forces, determined by the gradient of Φ. We note that standard
dynamics interprets the force between particles as an acceleration vector. In gradient descent, it is
interpreted as a velocity vector.
Definition 2.2. Given a potential Φ and particle locations θ_1, ..., θ_k ∈ R^d along with their respective charges a_1, ..., a_k ∈ R, we define Electron-Proton Dynamics under Φ with some subset S ⊆ [k] of fixed particles to be the solution (θ_1(t), ..., θ_k(t)) to the following system of differential equations: for each pair (θ_i, θ_j), there is a force from θ_j exerted on θ_i given by F_i(θ_j) = a_i a_j ∇_{θ_i} Φ(θ_i, θ_j), and

    −dθ_i/dt = Σ_{j≠i} F_i(θ_j)

for all i ∉ S, with θ_i(0) = θ_i. For i ∈ S, θ_i(t) = θ_i.
For the following theorem, we assume that a is fixed.

Theorem 2.3. Let Φ be a symmetric potential and L be as in (1). Running continuous gradient descent on (1/2)L with respect to θ, initialized at (θ_1, ..., θ_k), produces the same dynamics as Electron-Proton Dynamics under 2Φ with fixed particles at w_1, ..., w_k with respective charges b_1, ..., b_k and moving particles at θ_1, ..., θ_k with respective charges a_1, ..., a_k.
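To make the correspondence tangible, here is a minimal explicit-Euler sketch of Electron-Proton Dynamics in the plane (our illustration; the paper's potentials and dimensions differ, and the radial potential Φ(r) = e^{−r} is chosen purely for concreteness):

#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y; };

// -dθi/dt = Σ_{j≠i} Fi(θj): protons w (charges b) fixed, electrons θ move.
int main() {
    std::vector<Vec> w  = {{ 1.0, 0.0}, {-1.0, 0.0}};   // fixed protons
    std::vector<double> b = { 1.0, 1.0};
    std::vector<Vec> th = {{ 0.3, 0.8}, {-0.2, -0.9}};  // moving electrons
    std::vector<double> a = {-1.0, -1.0};               // re-parametrized charges

    auto dphi = [](double r) { return -std::exp(-r); }; // Φ'(r) for Φ(r) = e^{-r}
    const double dt = 1e-2;

    for (int step = 0; step < 20000; ++step) {
        std::vector<Vec> vel(th.size(), {0.0, 0.0});
        auto add_force = [&](size_t i, Vec p, double qq) {
            double dx = th[i].x - p.x, dy = th[i].y - p.y;
            double r = std::sqrt(dx * dx + dy * dy) + 1e-12;
            vel[i].x -= qq * dphi(r) * dx / r;  // -q_i q_j ∇_{θi} Φ
            vel[i].y -= qq * dphi(r) * dy / r;
        };
        for (size_t i = 0; i < th.size(); ++i) {
            for (size_t j = 0; j < w.size(); ++j) add_force(i, w[j], a[i] * b[j]);
            for (size_t j = 0; j < th.size(); ++j)
                if (j != i) add_force(i, th[j], a[i] * a[j]);
        }
        for (size_t i = 0; i < th.size(); ++i) {
            th[i].x += dt * vel[i].x;
            th[i].y += dt * vel[i].y;
        }
    }
    for (const Vec& t : th)
        std::printf("electron at (%.3f, %.3f)\n", t.x, t.y);
}

With opposite-sign charges, the electrons are attracted to the protons and repelled from each other, mirroring gradient descent on the loss.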
3 Earnshaw's Theorem and Harmonic Potentials
When running gradient descent on a non-convex loss, we often can and do get stuck at a local minimum. In this section, we use second-order information to deduce that for certain classes of potentials, there are no spurious local minima. The potentials in this section are often unbounded and unrealizable.
However, in the next section, we apply insights developed here to derive similar convergence results
for approximations of these potentials.
Earnshaw's theorem in electrodynamics shows that there are no stable local minima for electron-proton dynamics. This hinges on the property that the electric potential Φ(θ, w) = ‖θ − w‖^{2−d}, d ≠ 2, is harmonic, with d = 3 being the natural setting. If d = 2, we instead have Φ(θ, w) = −ln(‖θ − w‖). First,
we notice that this is a symmetric loss, and our usual loss in (1) has constant terms that can be
dropped to further simplify.
    L(a, θ) = 2 Σ_{i<j} a_i a_j Φ(θ_i, θ_j) + 2 Σ_{i=1}^k Σ_{j=1}^k a_i b_j Φ(θ_i, w_j)    (2)
Definition 3.1. Φ(θ, w) is a harmonic potential on Ω if ∆θ Φ(θ, w) = 0 for all θ ∈ Ω, except
possibly at θ = w.
Definition 3.2. Let Ω ⊆ R^d and consider a function f : Ω → R. A critical point x* ∈ Ω is a local minimum if there exists ε > 0 such that f(x* + v) ≥ f(x*) for all ‖v‖ ≤ ε. It is a strict local minimum if the inequality is strict for all 0 < ‖v‖ ≤ ε.
Fact 3.3. Let x* be a critical point of a function f : Ω → R such that f is twice differentiable at x*. If x* is a local minimum, then λ_min(∇²f(x*)) ≥ 0. Moreover, if λ_min(∇²f(x*)) > 0, then x* is a strict local minimum.
Note that if λ_min(∇²f(x*)) < 0, then moving along the direction of the corresponding eigenvector decreases f locally. If Φ is harmonic, then it can be shown that the trace of its Hessian is 0, so if there is any nonzero eigenvalue then at least one eigenvalue is negative. This idea results in the following known theorem (see the full proof in the supplementary material), which is applicable to the electric potential function 1/r in 3 dimensions since it is harmonic. It implies that a configuration of n electrons and n protons cannot be in a strict local minimum even if one of the mobile charges is isolated (however, note that this potential function goes to ∞ at r = 0 and may not be realizable).
Theorem 3.4. (Earnshaw’s Theorem. See [AKN85]) Let M = Rd and let Φ be harmonic and L be
as in (2). Then, L admits no differentiable strict local minima.
Note that the Hessian of a harmonic potential can be identically zero. To avoid this possibility
we generalize harmonic potentials.
3.1 λ-Harmonic Potentials
In order to relate our loss function with its Laplacian, we consider potentials that are non-negative
eigenfunctions of the Laplacian operator. Since the zero eigenvalue case simply gives rise to harmonic
potentials, we restrict our attention to positive eigenfunctions.
Definition 3.5. A potential Φ is λ-harmonic on Ω if there exists λ > 0 such that for every θ ∈ Ω,
∆θ Φ(θ, w) = λΦ(θ, w), except possibly at θ = w.
Note that there are realizable versions of these potentials; for example, Φ(a, b) = e^{−‖a−b‖₁} in R¹. In the next section, we construct realizable potentials that are λ-harmonic almost everywhere, except when θ and w are very close.
Theorem 3.6. Let Φ be λ-harmonic and L be as in (1). Then, L admits no local minima (a, θ),
except when L(a, θ) = L(0, θ) or θi = wj for some i, j.
Proof. Let (a, θ) be a critical point of L. For the sake of contradiction, assume that θ_i ≠ w_j for all i, j. WLOG, we can partition [k] into S_1, ..., S_r such that for all u ∈ S_i, v ∈ S_j, we have θ_u = θ_v iff i = j. Let S_1 = {θ_1, ..., θ_l}. We consider changing all θ_1, ..., θ_l by the same v and define H(a, v) = L(a, θ_1 + v, ..., θ_l + v, θ_{l+1}, ..., θ_k).
The optimality conditions on a are 0 = ∂L/∂a_i = 2 Σ_j a_j Φ(θ_i, θ_j) + 2 Σ_{j=1}^k b_j Φ(θ_i, w_j). Thus, by the definition of λ-harmonic potentials, we may differentiate as θ_i ≠ w_j and compute the Laplacian as

    Δ_v H = λ Σ_{i=1}^l a_i ( 2 Σ_{j=1}^k b_j Φ(θ_i, w_j) + 2 Σ_{j=l+1}^k a_j Φ(θ_i, θ_j) )
          = λ Σ_{i=1}^l a_i ( −2 Σ_{j=1}^l a_j Φ(θ_i, θ_j) ) = −2λ Σ_{i=1}^l a_i Σ_{j=1}^l a_j = −2λ ( Σ_{i=1}^l a_i )²,

where the second equality uses the optimality condition and the last uses Φ(θ_i, θ_j) = 1 for i, j ≤ l (these θ's coincide). If Σ_{i=1}^l a_i ≠ 0, then we conclude that the Laplacian is strictly negative, so we are not at a local minimum. Similarly, we can conclude that for each S_i, Σ_{u∈S_i} a_u = 0. In this case, since Σ_{i=1}^k a_i σ(θ_i, x) = 0, we get L(a, θ) = L(0, θ).
4 Realizable Potentials with Convergence Guarantees
In this section, we derive convergence guarantees for realizable potentials that are almost λ-harmonic,
specifically, they are λ-harmonic outside of a small neighborhood around the origin. First, we prove
the existence of activation functions such that the corresponding potentials are almost λ-harmonic.
Then, we reason about the Laplacian of our loss, as in the previous section, to derive our guarantees.
We show that at a stable minimum, each of the θ_i is close to some w_j in the target network. We may end up with a many-to-one mapping of the learned hidden weights to the true hidden weights, instead of a bijection. To make sure that ‖a‖ remains controlled throughout the optimization process, we add a quadratic regularization term to L and instead optimize G = L + ‖a‖².
Our optimization procedure is a slightly altered version of gradient descent, where we incorporate a second-order method (which we call Hessian descent, Algorithm 1) that is used when the gradient is small and progress is slow. The descent algorithm (Algorithm 2) allows us to converge to points with small gradient and small negative curvature. Namely, for smooth functions, in poly(1/ε) iterations, we reach a point in M_{G,ε}, where

    M_{G,ε} = { x ∈ M : ‖∇G(x)‖ ≤ ε and λ_min(∇²G(x)) ≥ −ε }.

We show that if (a, θ) is in M_{G,ε} for ε small, then θ_i is close to w_j for some j. Finally, we show how to initialize (a^(0), θ^(0)) and run second-order GD to converge to M_{G,ε}, proving our main theorem.
Algorithm 1 x = HD(L, x0, T, α)
Input: L : M → R; x0 ∈ M; T ∈ N; α ∈ R
    Initialize x ← x0
    for i = 1 to T do
        Find the unit eigenvector v_min corresponding to λ_min(∇²L(x))
        β ← −α λ_min(∇²L(x)) sign(∇L(x)ᵀ v_min)
        x ← x + β v_min
    return x
Theorem 4.1. Let M = R^d for d ≡ 3 mod 4 and k = poly(d). For all ε ∈ (0, 1), we can construct an activation σ_ε such that if w_1, ..., w_k ∈ R^d with w_i randomly chosen from w_i ∼ N(0, O(d log d) I_{d×d}) and b_1, ..., b_k randomly chosen uniformly from [−1, 1], then with high probability, we can choose an initial point (a^(0), θ^(0)) such that after running SecondGD (Algorithm 2) on the regularized objective G(a, θ) for at most (d/ε)^{O(d)} iterations, there exist i, j such that ‖θ_i − w_j‖ < ε.
We start by stating a lemma concerning the construction of an almost λ-harmonic function on R^d. The construction is given in Appendix B and uses a linear combination of realizable potentials that correspond to an activation function given by the indicator of an n-sphere. By using Fourier analysis and Theorem 2.1, we can finish the construction of our almost λ-harmonic potential.
Lemma 4.2. Let M = R^d for d ≡ 3 mod 4. Then, for any ε ∈ (0, 1), we can construct a radial activation σ_ε(r) such that the corresponding radial potential Φ_ε(r) is λ-harmonic for r ≥ ε. Furthermore, we have Φ_ε^{(d−1)}(r) ≥ 0 for all r > 0, and Φ_ε^{(k)}(r) ≥ 0 and Φ_ε^{(k+1)}(r) ≤ 0 for all r > 0 and d − 3 ≥ k ≥ 0 even.

When λ = 1, |Φ_ε^{(k)}(r)| ≤ O((d/ε)^{2d}) for all 0 ≤ k ≤ d − 1. And when r ≥ ε, Ω(e^{−r} r^{2−d} (d/ε)^{−2d}) ≤ Φ_ε(r) ≤ O((1 + r)^d e^{1−r} (εr)^{2−d}) and Ω(e^{−r} r^{1−d} (d/ε)^{−2d}) ≤ |Φ_ε′(r)| ≤ O((d + r)(1 + r)^d e^{1−r} r^{1−d}).
Our next lemma uses the almost λ-harmonic properties to show that at an almost stationary
point of G, we must have converged close to some wj as long as our charges ai are not too small.
The proof is similar to Theorem 3.6. Then, the following lemma relates the magnitude of the charges
ai to the progress made in the objective function.
Lemma 4.3. Let M = R^d for d ≡ 3 mod 4 and let G be the regularized loss corresponding to the activation σ_ε given by Lemma 4.2 with λ = 1. For any ε ∈ (0, 1) and δ ∈ (0, 1), if (a, θ) ∈ M_{G,δ}, then for all i, either 1) there exists j such that ‖θ_i − w_j‖ < kε, or 2) a_i² < 2kdδ.

Lemma 4.4. Assume the conditions of Lemma 4.3. If √G(a, θ) ≤ √G(0, 0) − δ and (a, θ) ∈ M_{G, δ²/(2k³d)}, then there exist some i, j such that ‖θ_i − w_j‖ < kε.
Finally, we guarantee that our initialization substantially decreases our objective function.
Together with our previous lemmas, it will imply that we must be close to some wj upon convergence.
This is the overview of the proof of Theorem 4.1, presented below.
Lemma 4.5. Assume the conditions of Theorem 4.1 and Lemma 4.3. With high probability, we can initialize (a^(0), θ^(0)) such that √G(a^(0), θ^(0)) ≤ √G(0, 0) − δ with δ = (d/ε)^{−O(d)}.
Proof of Theorem 4.1. Let our potential Φ/k be the one constructed in Lemma 4.2 that is 1-harmonic for all r ≥ ε/k and, as always, k = poly(d). First, by Lemma 4.5, we can initialize (a^(0), θ^(0)) such that √G(a^(0), θ^(0)) ≤ √G(0, 0) − δ for δ = (d/ε)^{−O(d)}. If we set α = (d/ε)^{−O(d)} and η = γ = δ²/(2k³d), then running Algorithm 2 will terminate and return some (a, θ) in at most (d/ε)^{O(d)} iterations. This is because our algorithm ensures that our objective function decreases by at least min(αη²/2, α²γ³/2) at each iteration, G(0, 0) is bounded by O(k), and G ≥ 0 is non-negative.

Let θ = (θ_1, ..., θ_k). If there exist θ_i, w_j such that ‖θ_i − w_j‖ < ε, then we are done. Otherwise, we claim that (a, θ) ∈ M_{G, δ²/(2k³d)}. For the sake of contradiction, assume otherwise. By our algorithm's termination conditions, it must be that after one step of gradient or Hessian descent from (a, θ), we reach some (a′, θ′) with G(a′, θ′) > G(a, θ) − min(αη²/2, α²γ³/2).
Algorithm 2 x = SecondGD(L, x0, T, α, η, γ)
Input: L : M → R; x0 ∈ M; T ∈ N; α, η, γ ∈ R
    for i = 1 to T do
        if ‖∇L(x_{i−1})‖ ≥ η then x_i ← x_{i−1} − α∇L(x_{i−1})
        else x_i ← HD(L, x_{i−1}, 1, α)
        if L(x_i) ≥ L(x_{i−1}) − min(αη²/2, α²γ³/2) then return x_{i−1}
Algorithm 3 Node-wise Descent Algorithm
Input: (a, θ) = (a_1, ..., a_k, θ_1, ..., θ_k), a_i ∈ R, θ_i ∈ M; T ∈ N; L; α, η, γ ∈ R
    for i = 1 to k do
        Initialize (a_i, θ_i)
        (a_i, θ_i) = SecondGD(L_{a_i,θ_i}, (a_i, θ_i), T, α, η, γ)
    return a = (a_1, ..., a_k), θ = (θ_1, ..., θ_k)
Now, Lemma 4.2 ensures that all first three derivatives of Φ/k are bounded by O((dk/ε)^{2d}), except at w_1, ..., w_k. Furthermore, since there do not exist θ_i, w_j such that ‖θ_i − w_j‖ < ε, G is three-times continuously differentiable within a α(dk/ε)^{2d} = (d/ε)^{−O(d)} neighborhood of θ. Therefore, by Lemmas D.1 and D.2 in the appendix, we must have G(a′, θ′) ≤ G(a, θ) − min(αη²/2, α²γ³/2), a contradiction. Lastly, since our algorithm maintains that our objective function is decreasing, we have √G(a, θ) ≤ √G(0, 0) − δ. Finally, we conclude by Lemma 4.4.
4.1 Node-by-Node Analysis
We cannot easily analyze the convergence of gradient descent to the global minima when all θi are
simultaneously moving since the pairwise interaction terms between the θi present complications,
even with added regularization. Instead, we run a greedy node-wise descent (Algorithm 3) to learn
the hidden weights, i.e. we run a descent algorithm with respect to (ai , θi ) sequentially. The main
idea is that after running SGD with respect to θ_1, θ_1 should be close to some w_j for some j. Then, we can carefully induct and show that θ_2 must be close to some other w_k for k ≠ j, and so on.
Let L_1(a_1, θ_1) be the objective L restricted to a_1, θ_1 being variable, with a_2, ..., a_k = 0 fixed. The tighter control on the movements of θ_1 allows us to remove our regularization. While our previous guarantees allow us to reach an ε-neighborhood of w_j when running SGD on L_1, we will strengthen our guarantees to reach a (d/ε)^{−O(d)}-neighborhood of w_j by reasoning about the first derivatives of our potential in an ε-neighborhood of w_j. By similar argumentation as before, we will be able to derive the following convergence guarantees for node-wise training.
Theorem 4.6. Let M = R^d with d ≡ 3 mod 4, let L be as in (1), and let k = poly(d). For all ε ∈ (0, 1), we can construct an activation σ_ε such that if w_1, ..., w_k ∈ R^d with w_i randomly chosen from w_i ∼ N(0, O(d log d) I_{d×d}) and b_1, ..., b_k randomly chosen uniformly from [−1, 1], then with high probability, after running node-wise descent (Algorithm 3) on the objective L for at most (d/ε)^{O(d)} iterations, (a, θ) is in a (d/ε)^{−O(d)} neighborhood of the global minima.
5 Experiments
For our experiments, our training data is given by (xi , f (xi )), where xi are randomly chosen from a
standard Gaussian in Rd and f is a randomly generated neural network with weights chosen from
a standard Gaussian. We run gradient descent (Algorithm 4) on the empirical loss, with stepsize around α = 10⁻⁵, for T = 10⁶ iterations. The nonlinearity used at each node is a sigmoid from −1 to 1,
including the output node, unlike the assumptions in the theoretical analysis. A random guess for
the network will result in a mean squared error of around 1. Our experiments (see Fig 1) show that
for depth-2 neural networks, even with non-linear outputs, the training error diminishes quickly to
under 0.002. This seems to hold even when the width, the number of hidden nodes, is substantially
increased (even up to 125 nodes), but depth is held constant; although as the number of nodes
increases, the rate of decrease is slower. This substantiates our claim that depth-2 neural networks are learnable.

            Width 5    Width 10   Width 20   Width 40
Depth 2     0.0015     0.0017     0.0018     0.0019
Depth 3     0.0033     0.0264     0.1503     0.2362
Depth 5     0.0036     0.0579     0.2400     0.4397
Depth 9     0.0085     0.1662     0.4171     0.6071
Depth 17    0.0845     0.3862     0.4934     0.5777

Table 2: Test Error of Learning Neural Networks of Various Depth and Width
However, it seems that for depth greater than 2, the test error becomes significant when width
is high (see Fig 2). Even for depth 3 networks, the increase in depth impedes the learnability of
the neural network and the training error does not get close enough to 0. It seems that for neural
networks with greater depth, positive convergence results in practice are elusive. We note that we
are using training error as a measure of success, so it’s possible that the true underlying parameters
are not learned.
Figure 2: Test Error of Varying-Depth Networks vs. Width
References
[ABGM14] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some
deep representations. In ICML, pages 584–592, 2014.
[AHW96] Peter Auer, Mark Herbster, and Manfred K. Warmuth. Exponentially many local minima for single neurons. In Advances in Neural Information Processing Systems, pages 316–322, 1996.
[AKN85]
Vladimir I Arnold, Valery V Kozlov, and Anatoly I Neishtadt. Mathematical aspects of classical
and celestial mechanics. Encyclopaedia Math. Sci, 3:1–291, 1985.
[APVZ14] Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with
neural networks. In International Conference on Machine Learning, pages 1908–1916, 2014.
[BG17]
Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian
inputs. arXiv preprint arXiv:1702.07966, 2017.
[BR88]
Avrim Blum and Ronald L. Rivest. Training a 3-node neural network is NP-complete. pages 9–18,
1988.
[BRS89]
Martin L Brady, Raghu Raghavan, and Joseph Slawny. Back propagation fails to separate where
perceptrons succeed. IEEE Transactions on Circuits and Systems, 36(5):665–674, 1989.
[CHM+ 15] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The
loss surfaces of multilayer networks. In AISTATS, 2015.
[CUH15]
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network
learning by exponential linear units (elus). CoRR, abs/1511.07289, 2015.
[Dan17] Amit Daniely. SGD learns the conjugate kernel class of the network. arXiv preprint arXiv:1702.08503, 2017.

[DFS16] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances in Neural Information Processing Systems, pages 2253–2261, 2016.
[DPG+ 14] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua
Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex
optimization. In Advances in neural information processing systems, pages 2933–2941, 2014.
[GHJY15] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points-online stochastic
gradient for tensor decomposition. In COLT, pages 797–842, 2015.
[GKKT16] Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in
polynomial time. arXiv preprint arXiv:1611.10258, 2016.
[GLM17]
Rong Ge, Jason D Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape
design. arXiv preprint arXiv:1711.00501, 2017.
[JGN+ 17] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape
saddle points efficiently. arXiv preprint arXiv:1703.00887, 2017.
[JSA15]
Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity:
Guaranteed training of neural networks using tensor methods. CoRR, abs/1506.08473, 2015.
[Kaw16]
Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information
Processing Systems, pages 586–594, 2016.
[KS06]
Adam R Klivans and Alexander A Sherstov. Cryptographic hardness for learning intersections
of halfspaces. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science
(FOCS’06), pages 553–562. IEEE, 2006.
[LSSS14]
Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training
neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014.
[LY17]
Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation.
arXiv preprint arXiv:1705.09886, 2017.
[SC16]
Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees
for multilayer neural networks. CoRR, abs/1605.08361, 2016.
12
[SKS+ 16]
Anish Shah, Eashan Kadam, Hena Shah, Sameer Shinde, and Sandip Shingade. Deep residual
networks with exponential linear unit. In Proceedings of the Third International Symposium on
Computer Vision and the Internet, pages 59–65. ACM, 2016.
[SSSS11]
Shai Shalev-Shwartz, Ohad Shamir, and Karthik Sridharan. Learning kernel-based halfspaces
with the 0-1 loss. SIAM Journal on Computing, 40(6):1623–1646, 2011.
[Tia17]
Yuandong Tian. An analytical formula of population gradient for two-layered relu network and
its applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560, 2017.
[ZLJ16]
Yuchen Zhang, Jason D Lee, and Michael I Jordan. l1-regularized neural networks are improperly
learnable in polynomial time. In International Conference on Machine Learning, pages 993–1001,
2016.
[ZLWJ15] Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, and Michael I. Jordan. Learning halfspaces
and neural networks with random initialization. CoRR, abs/1511.07948, 2015.
13
A Electron-Proton Dynamics

Theorem 2.3. Let Φ be a symmetric potential and L be as in (1). Running continuous gradient descent on (1/2)L with respect to θ, initialized at (θ_1, ..., θ_k), produces the same dynamics as Electron-Proton Dynamics under 2Φ, with fixed particles at w_1, ..., w_k with respective charges b_1, ..., b_k and moving particles at θ_1, ..., θ_k with respective charges a_1, ..., a_k.
Proof. The initial values are the same. Notice that continuous gradient descent on L(a, θ) with respect to θ produces dynamics given by dθ_i(t)/dt = −∇_{θ_i} L(a, θ). Therefore,

dθ_i(t)/dt = −2 Σ_{j≠i} a_i a_j ∇_{θ_i} Φ(θ_i, θ_j) − 2 Σ_{j=1}^{k} a_i b_j ∇_{θ_i} Φ(θ_i, w_j),

and gradient descent does not move the w_i. By definition, the dynamics corresponds to Electron-Proton Dynamics as claimed.
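For intuition, here is a minimal numerical rendering of this correspondence, integrating the gradient flow with forward Euler. The Gaussian kernel below is an assumed stand-in for a realizable potential Φ (not the potential constructed later), and the charge values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, dt = 3, 4, 1e-2

def grad_phi(u, v):
    """Gradient in u of the Gaussian potential exp(-||u - v||^2 / 2)."""
    diff = u - v
    return -np.exp(-diff @ diff / 2) * diff

w = rng.standard_normal((k, d))            # fixed "protons" with charges b_j
b = rng.uniform(-1, 1, size=k)
theta = rng.standard_normal((k, d))        # moving "electrons" with charges a_i
a = -b.copy()                              # illustrative choice of moving charges

for _ in range(5000):                      # forward-Euler integration of the flow
    g = np.zeros_like(theta)
    for i in range(k):
        for j in range(k):
            if j != i:
                g[i] += 2 * a[i] * a[j] * grad_phi(theta[i], theta[j])
            g[i] += 2 * a[i] * b[j] * grad_phi(theta[i], w[j])
    theta -= dt * g                        # dtheta_i/dt = -grad_{theta_i} L(a, theta)
```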
B Realizable Potentials

B.1 Activation-Potential Calculations
The dual of a function f : R → R is defined to be

f̂(ρ) = E_{X,Y∼N(ρ)}[f(X) f(Y)],

where N(ρ) is the bivariate normal distribution in which X, Y have unit variance and covariance ρ. This is as in [DFS16].
Lemma B.1. Let M = S^{d−1} and let σ be our activation function; then σ̂ is the corresponding potential function.
Proof. If u, v have norm 1 and X is a standard Gaussian in R^d, then note that X_1 = u^T X and X_2 = v^T X are both standard Gaussian variables in R, and their covariance is E[X_1 X_2] = u^T v. Therefore, the dual of the activation gives us the potential function:

E_X[σ(u^T X) σ(v^T X)] = E_{X,Y∼N(u^T v)}[σ(X) σ(Y)] = σ̂(u^T v).
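This identity is straightforward to verify by Monte Carlo; the sketch below checks it for the sign activation, whose dual σ̂(ρ) = 1 − 2 cos⁻¹(ρ)/π also appears in Section G.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 8, 1_000_000

u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)

X = rng.standard_normal((n, d))
lhs = np.mean(np.sign(X @ u) * np.sign(X @ v))   # E[sigma(u^T X) sigma(v^T X)]
rhs = 1 - 2 * np.arccos(u @ v) / np.pi           # dual of sign evaluated at rho = u^T v
print(lhs, rhs)                                  # agree up to ~n^{-1/2} Monte Carlo error
```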
By Lemma B.1, the calculations of the activation-potential correspondence for the sign, ReLU, Hermite, and exponential activations are given in [DFS16]. For the Gaussian and Bessel activation functions, we can calculate directly. In both cases, we notice that we may write the integral as a product of integrals in each dimension; therefore, it suffices to check the following 1-dimensional identities.

∫_{−∞}^{∞} √2 e^{x²/4} e^{−(x−θ)²} · √2 e^{x²/4} e^{−(x−w)²} · (1/√(2π)) e^{−x²/2} dx = √(2/π) ∫_{−∞}^{∞} e^{−(x−θ)²} e^{−(x−w)²} dx = e^{−(θ−w)²/2}

∫_{−∞}^{∞} (2/π)^{3/2} e^{x²/2} K_0(|x−θ|) K_0(|x−w|) · (1/√(2π)) e^{−x²/2} dx = (2/π²) ∫_{−∞}^{∞} K_0(|x−θ|) K_0(|x−w|) dx = e^{−|θ−w|}

The last equality follows by Fourier uniqueness and taking the Fourier transform of both sides, which both equal √(2/π)(ω² + 1)^{−1}.
B.2 Characterization Theorems

Theorem 2.1. Let M = R^d, let Φ be square-integrable, and let F(Φ) be integrable. Then Φ is realizable under standard Gaussian inputs if F(Φ)(ω) ≥ 0, and the corresponding activation is σ(x) = (2π)^{d/4} e^{x^T x/4} F^{−1}(√(F(Φ)))(x), where F is the generalized Fourier transform in R^d.
Proof. Since Φ is square-integrable, its Fourier transform exists. Let h(x) = F^{−1}(√(F(Φ)))(x); this is well-defined since the Fourier transform is non-negative everywhere, and the Fourier inverse exists since √(F(Φ)) is square-integrable. Now let σ(x, w) = (2π)^{d/4} e^{‖x‖²/4} h(x − w). Realizability follows by the Fourier inversion theorem:

E_{X∼N}[σ(X, w) σ(X, θ)] = ∫_{R^d} h(x − w) h(x − θ) dx = ∫_{R^d} h(x) h(x − (θ − w)) dx = F^{−1}(F(h ∗ h)(θ − w)) = F^{−1}(F(h)²(θ − w)) = F^{−1}(F(Φ)(θ − w)) = Φ(θ − w),

where ∗ denotes function convolution.
When our relevant space is M = S^{d−1}, we let Π_M be the projection operator onto M. The simplest way to define the gradient on S^{d−1} is ∇_{S^{d−1}} f(x) = ∇_{R^d} f(x/‖x‖), where ‖·‖ denotes the l₂ norm and x ∈ S^{d−1}. The Hessian and Laplacian are defined analogously, and the subscripts are usually dropped where clear from context.
We say that a potential Φ on M = S^{d−1} is rotationally invariant if for all θ, w ∈ S^{d−1} we have Φ(θ, w) = h(θ^T w) for some function h.
Theorem B.2. Let M = S^{d−1} and Φ(θ, w) = f(θ^T w). Then Φ is realizable if f has non-negative Taylor coefficients, c_i ≥ 0, and the corresponding activation σ(x) = Σ_{i=1}^{∞} √(c_i) h_i(x) converges almost everywhere, where h_i(x) is the i-th Hermite polynomial.
Proof. By Lemma B.1 and the orthogonality of Hermite polynomials, if f = Σ_i a_i h_i, where h_i(x) is the i-th Hermite polynomial, then

f̂(ρ) = Σ_i a_i² ρ^i.

Therefore, any function with non-negative Taylor coefficients is a valid potential function, with the corresponding activation function determined by the sum of Hermite polynomials; the sum is bounded almost everywhere by assumption.
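The orthogonality fact used here — E[h_m(X) h_n(Y)] = δ_{mn} ρ^n for ρ-correlated standard Gaussians and unit-norm Hermite polynomials (Mehler's formula) — can be checked numerically; a minimal sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(3)
n, rho = 1_000_000, 0.6

X = rng.standard_normal(n)
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # corr(X, Y) = rho

def h(i, x):
    """i-th probabilists' Hermite polynomial, normalized to unit Gaussian norm."""
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(i))

for i in range(4):
    print(i, np.mean(h(i, X) * h(i, Y)), rho**i)   # each pair of numbers should agree
```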
B.3 Further Characterizations

To apply Theorem 2.1, we need to check that the Fourier transform of our function is non-negative. Not only is this not straightforward to check, many of our desired potentials do not satisfy this criterion. In this section, we derive a stronger characterization of realizable potentials, allowing us to construct realizable potentials that approximate a desired potential.
Definition B.3. We say Φ is a positive semidefinite function if for all x_1, ..., x_n, the matrix A_{ij} = Φ(x_i − x_j) is positive semidefinite.

Lemma B.4. Let M = R^d. If Φ(θ, w) = f(θ − w) is realizable, then it is positive semidefinite.
Proof. If Φ is realizable, then there exists σ such that Φ(θ, w) = E_{X∼N}[σ(X, w) σ(X, θ)]. For x_1, ..., x_n, we note that the quadratic form satisfies

Σ_{i,j} Φ(x_i, x_j) v_i v_j = Σ_{i,j} E_{X∼N}[σ(X, x_i) σ(X, x_j)] v_i v_j = E_{X∼N}[(Σ_i v_i σ(X, x_i))²] ≥ 0.

Since Φ is translationally symmetric, we conclude that Φ is positive semidefinite.
Definition B.5. A potential Φ is F-integrable if it is square-integrable and F(Φ)(ω) is integrable, where F is the standard Fourier transform.
Lemma B.6. Let w(x) ≥ 0 be a weighting function such that ∫_a^b w(x) dx is bounded. If Φ_x is a parametrized family of F-integrable realizable potentials, then ∫_a^b w(x) Φ_x dx is F-integrable realizable.

Proof. Let Φ = ∫_a^b w(x) Φ_x dx. From the linearity of the Fourier transform and the boundedness of ∫_a^b w(x) dx, we know that Φ is F-integrable. Since the Φ_x are realizable, they are positive semidefinite by Lemma B.4, and by Bochner's theorem their Fourier transforms are non-negative. Since w(x) ≥ 0, we conclude by linearity and continuity of the Fourier transform that F(Φ) ≥ 0. By Theorem 2.1, we conclude that Φ is realizable.
Lemma B.7. Let M = R^d for d ≡ 3 mod 4. Then, for any ε, t > 0, there exists an F-integrable realizable Φ such that Φ^{(d−1)}(r) = t − r for t ≥ r > ε and Φ^{(d−1)}(r) = ((t−ε)/ε) r for r ≤ ε. Furthermore, Φ^{(k)}(r) = 0 for r > t, for all 0 ≤ k ≤ d.
Proof. Our construction is based on the radial activation function h_t(x, θ) = 1_{‖θ−x‖≤t/2}, the indicator of the ball of radius t/2. This function, re-weighted as σ_t(x, θ) = (2π)^{d/4} e^{‖x‖²/4} h_t(x, θ), gives rise to a radial potential function that is simply the convolution of h_t with itself, measuring the volume of the intersection of two balls of radius t/2 centered at θ and w:

Φ_t(θ, w) = E_X[σ_t(X, θ) σ_t(X, w)] = { C ∫_{‖θ−w‖/2}^{t/2} ((t/2)² − x²)^{(d−1)/2} dx   if ‖θ−w‖ ≤ t;   0   otherwise. }

Therefore, as a function of r = ‖θ−w‖, we see that when r ≤ t, Φ_t(r) = C ∫_{r/2}^{t/2} ((t/2)² − x²)^{(d−1)/2} dx and Φ_t′(r) = −C′((t/2)² − (r/2)²)^{(d−1)/2}. Since d ≡ 3 mod 4, we notice that Φ_t′ has a positive coefficient on the leading r^{d−1} term, and since it is a function of r², it has a zero r^{d−2} term. Therefore, we can scale Φ_t such that

Φ_t^{(d−1)}(r) = { r   if r ≤ t;   0   otherwise. }
Φ_t is clearly realizable, and we now claim that it is F-integrable. First, Φ_t is bounded with compact support, so it is square-integrable. Now, since Φ_t = h_t ∗ h_t can be written as a convolution, F(Φ_t) = F(h_t)². Since h_t is square-integrable, by Parseval's theorem F(h_t) is square-integrable, allowing us to conclude that Φ_t is F-integrable.
Now, for any ε > 0, let us construct our desired Φ by taking a positive combination of the Φ_x and appealing to Lemma B.6. Consider

Φ(r) = ∫_ε^t (1/x²) Φ_x(r) dx.

First, note that the total weight ∫_ε^t (1/x²) dx is bounded. Then, when r ≥ t, since Φ_x(r) = 0 for x ≤ t, we conclude that Φ^{(k)}(r) = 0 for any k. Otherwise, for ε < r < t, we can apply the dominated convergence theorem to get

Φ^{(d−1)}(r) = ∫_ε^r (1/x²) Φ_x^{(d−1)}(r) dx + ∫_r^t (1/x²) Φ_x^{(d−1)}(r) dx = 0 + ∫_r^t (r/x²) dx = 1 − r/t.

Scaling by t gives our desired claim. For r ≤ ε, we integrate similarly and scale by t to conclude.
Lemma B.8. Let M = R^d for d ≡ 3 mod 4, and let Φ(r) be a radial potential such that Φ^{(k)}(r) ≥ 0 and Φ^{(k+1)}(r) ≤ 0 for all r > 0 and k ≥ 0 even, and lim_{r→∞} Φ^{(k)}(r) = 0 for all 0 ≤ k ≤ d.
Then, for any ε > 0, there exists an F-integrable realizable potential Φ_ε such that Φ_ε^{(k)}(r) = Φ^{(k)}(r) for all 0 ≤ k ≤ d−1 and r ≥ ε. Furthermore, we have Φ_ε^{(d−1)}(r) ≥ 0 for all r > 0, and Φ_ε^{(k)}(r) ≥ 0 and Φ_ε^{(k+1)}(r) ≤ 0 for all r > 0 and d−3 ≥ k ≥ 0 even.
Lastly, for r < ε and 0 ≤ k ≤ d−1,

|Φ_ε^{(d−1−k)}(r)| ≤ |Φ^{(d−1−k)}(ε)| + Σ_{j=1}^{k} ((ε−r)^{k−j+1}/(k−j+1)!) |Φ^{(d−j)}(ε)|.
Proof. By Lemma B.7, we can find Φ_x such that

Φ_x^{(d−1)}(r) = { ((x−ε)/ε) r   for 0 ≤ r ≤ ε;   x − r   for ε < r ≤ x;   0   for r > x. }

Furthermore, Φ_x^{(k)}(r) = 0 for r > x, for all 0 ≤ k ≤ d. Therefore, we consider

Φ_ε(r) = ∫_ε^∞ Φ^{(d+1)}(x) Φ_x(r) dx.

Note that this is a positive combination with ∫_ε^∞ Φ^{(d+1)}(x) dx = −Φ^{(d)}(ε) < ∞. By the non-negativity of our summands, we can apply the dominated convergence theorem and Fubini's theorem to get, for r ≥ ε,

Φ_ε^{(d−1)}(r) = ∫_ε^∞ Φ^{(d+1)}(x) Φ_x^{(d−1)}(r) dx = ∫_r^∞ Φ^{(d+1)}(x) Φ_x^{(d−1)}(r) dx = ∫_r^∞ Φ^{(d+1)}(x) ∫_r^x 1 dy dx = ∫_r^∞ ∫_y^∞ Φ^{(d+1)}(x) dx dy = ∫_r^∞ −Φ^{(d)}(y) dy = Φ^{(d−1)}(r).

Now, since Φ_ε^{(d−1)}(r) = Φ^{(d−1)}(r) for r ≥ ε and lim_{r→∞} Φ_ε^{(k)}(r) = lim_{r→∞} Φ^{(k)}(r) = 0 for 0 ≤ k ≤ d−1, repeated integration gives us our claim.
For the second claim, notice that for r ≤ ε we get

Φ_ε^{(d−1)}(r) = ∫_ε^∞ Φ^{(d+1)}(x) Φ_x^{(d−1)}(r) dx = r ∫_ε^∞ Φ^{(d+1)}(x) ((x−ε)/ε) dx = Cr.

Note that our constant C ≥ 0, since the summands are non-negative. Therefore, we conclude that Φ_ε^{(d−1)}(r) ≥ 0 for all r > 0. Repeated integration, noting that lim_{r→∞} Φ_ε^{(k)}(r) = 0 for 0 ≤ k ≤ d−1, gives us our claim.
Lastly, we prove the last claim of the lemma by induction on k. It holds trivially for k = 0, since Φ_ε^{(d−1)}(r) ≤ Φ_ε^{(d−1)}(ε) = Φ^{(d−1)}(ε) for r ≤ ε. Then, assume we have the inequality for k < d−1. By integration, we have

|Φ_ε^{(d−k−2)}(r)| ≤ |Φ_ε^{(d−k−2)}(ε)| + ∫_r^ε |Φ_ε^{(d−1−k)}(y)| dy
≤ |Φ^{(d−k−2)}(ε)| + ∫_r^ε |Φ^{(d−1−k)}(ε)| dy + ∫_r^ε Σ_{j=1}^{k} ((ε−y)^{k−j+1}/(k−j+1)!) |Φ^{(d−j)}(ε)| dy
≤ |Φ^{(d−k−2)}(ε)| + Σ_{j=1}^{k+1} ((ε−r)^{k−j+2}/(k−j+2)!) |Φ^{(d−j)}(ε)|.

Therefore, we conclude by induction.
Lemma 4.2. Let M = R^d for d ≡ 3 mod 4. Then, for any ε ∈ (0,1), we can construct a radial activation σ_ε(r) such that the corresponding radial potential Φ_ε(r) is λ-harmonic for r ≥ ε. Furthermore, we have Φ_ε^{(d−1)}(r) ≥ 0 for all r > 0, and Φ_ε^{(k)}(r) ≥ 0 and Φ_ε^{(k+1)}(r) ≤ 0 for all r > 0 and d−3 ≥ k ≥ 0 even.
When λ = 1, |Φ_ε^{(k)}(r)| ≤ O((d/ε)^{2d}) for all 0 ≤ k ≤ d−1. And when r ≥ ε, Ω(e^{−r} r^{2−d} (d/ε)^{−2d}) ≤ Φ_ε(r) ≤ O((1+r)^d e^{1−r} r^{2−d}) and Ω(e^{−r} r^{1−d} (d/ε)^{−2d}) ≤ |Φ_ε′(r)| ≤ O((d+r)(1+r)^d e^{1−r} r^{1−d}).

Proof. This is a special case of the following lemma.
Lemma B.9. Let M = R^d for d ≡ 3 mod 4. Then, for any 1 > ε > 0, we can construct a radial activation σ_ε(r) with corresponding normalized radial potential Φ_ε(r) that is λ-harmonic when r ≥ ε. Furthermore, we have Φ_ε^{(d−1)}(r) ≥ 0 for all r > 0, and Φ_ε^{(k)}(r) ≥ 0 and Φ_ε^{(k+1)}(r) ≤ 0 for all r > 0 and d−3 ≥ k ≥ 0 even.
Also, |Φ_ε^{(k)}(r)| ≤ 3(2d+√λ)^{2d} ε^{−2d} e^{√λ} for all 0 ≤ k ≤ d−1. And for r ≥ ε,

e^{−√λ r} r^{2−d} (2d+√λ)^{−2d} ε^{2d}/3 ≤ Φ_ε(r) ≤ (1 + r√λ)^d e^{√λ(1−r)} r^{2−d},

and

e^{−√λ r} r^{1−d} (2d+√λ)^{−2d} ε^{2d}/3 ≤ |Φ_ε′(r)| ≤ (d + √λ r)(1 + r√λ)^d e^{√λ(1−r)} r^{1−d}.
√
Proof. Consider a potential of the form Φ(r) = p(r)e− λr /rd−2 . We claim that there exists a
polynomial p of degree k = (d − 3)/2 with non-negative coefficients and p(0)
√ = 1 such that Φ is
λ-harmonic. Furthermore, we will also show along√the way that p(r) ≤ (1 + λr)d .
When d = 3, it is easy to check that Φ(r) = e(− λ)r /r is our desired potential. Otherwise, by our
formula for the radial Laplacian in d dimensions, we want to solve the following differential equation:
∆Φ =
1
rd−1
∂ d−1 ∂Φ
(r
) = λΦ
∂r
∂r
18
Solving this gives us the following second-order differential equation on p
√
√
rp00 − (d − 3 + 2 λr)p0 + λ(d − 3)p = 0
P
Let us write p(r) = ki=0 ai ri . Then, substituting into our differential equation gives us the
following equations by setting each √
coefficient of ri to zero:
i
r : ai+1 (i + 1)(i − (d − 3)) = ai λ(2i − (d − 3))
rk : (−2k + d − 3)ak = 0
The last equation explains why we chose k = (d − 3)/2, so that it is automatically zero. Thus,
setting a0 = 1 and running the recurrence gives us our desired polynomial. Note that the recurrence
is valid and produces positive coefficients since i√< k = (d
√ − 3)/2. Our claim follows and
√ Φ is
λ-harmonic. And furthermore, notice that ai+1 ≤ λai ≤ ( λ)i+1 . Therefore, p(r) ≤ (1 + r λ)d .
Lastly, we assert that Φ(j) (r) is non-negative for j even and non-positive for √
j odd. To prove our
−
assertion, we note that it suffices to show that if Φ is of the form Φ(r) √
= p(r)e λr /rl for some p of
degree k < l and p has non-negative coefficients, then Φ0 (r) = −q(r)e− λr /rl+1 for some q of degree
k + 1 with non-negative coefficients.
Differentiating Φ gives:
√
e−r
Φ0 = l+1 (rp0 (r) − (l + λr)p(r))
r
√
It is clear that if p has degree k, then q(r) = (l + λr)p(r) − rp0 (r) has degree k + 1, so it
suffices to show that it has non-negative coefficients. Let p0 , ..., pk be the non-negative coefficients of
p. Then, by our formula, we see that
q0 = lp0
√
√
qi = lpi √
− ipi + λpi−1 = (l − i)pi + λpi−1
qk+1 = λpk
Since i ≤ k < l, we conclude that q has non-negative coefficients. Finally, our assertion follows
with induction since Φ(0) (r) is non-negative and has our desired form with k = (d − 3)/2 < d − 2.
By Lemma B.8, our primary claim follows: we can construct a realizable radial potential Φ_ε(r) that is λ-harmonic when r ≥ ε and has alternating-signed derivatives.
Lastly, we prove the following preliminary bound on Φ_ε^{(k)}(r) for k ≤ d: |Φ_ε^{(k)}(r)| ≤ 3(2d+√λ)^{2d} ε^{−2d} for all 0 ≤ k ≤ d−1. First, notice that by the results of Lemma B.8, Φ_ε^{(k)}(r) is monotone and lim_{r→∞} Φ_ε^{(k)}(r) = 0, so it follows that we just have to bound |Φ_ε^{(k)}(0)|. From our construction, Φ^{(k)}(ε) = p_k(ε) e^{−√λ ε} ε^{2−d−k} for some polynomial p_k. Furthermore, from our construction, we have the recurrence p_k(ε) = (d−2+k+√λ ε) p_{k−1}(ε) − ε p′_{k−1}(ε). Therefore, we conclude that for k ≤ d,

p_k(ε) ≤ (2d+√λ)^k p_0(ε) ≤ (2d+√λ)^k (1+√λ)^d ≤ (2d+√λ)^{2d}.

Therefore, we can bound |Φ^{(k)}(ε)| ≤ (2d+√λ)^{2d} ε^{−2d}. Finally, by Lemma B.8,

|Φ_ε^{(d−1−k)}(0)| ≤ |Φ^{(d−1−k)}(ε)| + Σ_{j=1}^{k} (ε^{k−j+1}/(k−j+1)!) |Φ^{(d−j)}(ε)| ≤ (2d+√λ)^{2d} ε^{−2d} (1 + Σ_{j=1}^{k} ε^{k−j+1}/(k−j+1)!) ≤ (2d+√λ)^{2d} ε^{−2d} e ≤ 3(2d+√λ)^{2d} ε^{−2d}.

And for r ≥ ε, we see that |Φ_ε(r)| = |Φ(r)| ≤ |p(r)| e^{−√λ r}/r^{d−2} ≤ (1 + r√λ)^d e^{−√λ r} r^{2−d}, and |Φ_ε′(r)| = |Φ′(r)| ≤ |p_1(r)| e^{−√λ r}/r^{d−1} ≤ (d + √λ r)(1 + r√λ)^d e^{−√λ r} r^{1−d}.
Finally, we consider the normalized potential Φ̃_ε = Φ_ε/Φ_ε(0). Note that since Φ_ε is monotonically decreasing, we can lower bound Φ_ε(0) ≥ Φ_ε(ε) ≥ e^{−√λ}. Therefore, we can derive the following upper bounds: |Φ̃_ε^{(k)}(r)| ≤ 3(2d+√λ)^{2d} ε^{−2d} e^{√λ}, and for r ≥ ε, Φ̃_ε(r) ≤ (1 + r√λ)^d e^{√λ(1−r)} r^{2−d}, with derivative bounded by |Φ̃_ε′(r)| ≤ (d + √λ r)(1 + r√λ)^d e^{√λ(1−r)} r^{1−d}.
And lastly, we derive lower bounds on Φ̃_ε and its first derivative when r ≥ ε, by using the upper bound on Φ_ε(0): Φ̃_ε(r) ≥ Φ_ε(r)(2d+√λ)^{−2d} ε^{2d}/3 ≥ e^{−√λ r} r^{2−d} (2d+√λ)^{−2d} ε^{2d}/3. For the derivative, we get |Φ̃_ε′(r)| ≥ e^{−√λ r} r^{1−d} (2d+√λ)^{−2d} ε^{2d}/3.
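The coefficient recurrence in this proof is concrete enough to exercise numerically; the sketch below builds p for d = 7 (an assumed example dimension with d ≡ 3 mod 4) and verifies that Φ(r) = p(r)e^{−√λ r}/r^{d−2} satisfies the radial equation Φ″ + ((d−1)/r)Φ′ = λΦ.

```python
import numpy as np

d, lam = 7, 1.0                      # assumed example dimension, d = 3 mod 4
k, s = (d - 3) // 2, np.sqrt(lam)

# Coefficients from a_{i+1}(i+1)(i-(d-3)) = a_i sqrt(lam)(2i-(d-3)), with a_0 = 1.
a = [1.0]
for i in range(k):
    a.append(a[-1] * s * (2 * i - (d - 3)) / ((i + 1) * (i - (d - 3))))

def Phi(r):
    p = sum(c * r**i for i, c in enumerate(a))
    return p * np.exp(-s * r) / r**(d - 2)

r, h = np.linspace(0.5, 5.0, 200), 1e-4
lap = (Phi(r + h) - 2 * Phi(r) + Phi(r - h)) / h**2 \
      + (d - 1) / r * (Phi(r + h) - Phi(r - h)) / (2 * h)
print(np.max(np.abs(lap - lam * Phi(r))))   # ~0 up to finite-difference error
```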
Lemma B.10. The λ-harmonic radial potential Φ(r) = e−r /r in 3-dimensions is realizable by the
activation σ(r) = K1 (r)/r.
Proof. The activation is obtained from the potential function by first taking its Fourier transform, then taking its square root, and then taking the inverse Fourier transform. Since the functions in consideration are radially symmetric, the Fourier transform F(y) of f(x) (and its inverse) are obtained by the Hankel transform: y F(y) = ∫_0^∞ x f(x) J_{1/2}(xy) √(xy) dx. Plugging in f(x) = e^{−x}/x, from the Hankel transform tables we get y F(y) = cy/(1+y²), giving F(y) = c/(1+y²). So we wish to find the inverse Fourier transform of √(1/(1+y²)). The inverse f(x) is given by x f(x) = ∫_0^∞ y F(y) J_{1/2}(xy) √(xy) dy = c K_1(x). So σ(r) = K_1(r)/r.
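A numerical sanity check of this proof, using the 3-dimensional radial Fourier transform F(ω) = (4π/ω)∫_0^∞ r f(r) sin(ωr) dr: if σ(r) = K₁(r)/r realizes Φ(r) = e^{−r}/r, then the ratio F_σ(ω)²/F_Φ(ω) should be constant in ω (the constant absorbs c). A sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

def radial_ft(f, w):
    """3-d Fourier transform of a radial f: F(w) = (4 pi / w) * int_0^inf r f(r) sin(w r) dr."""
    val, _ = quad(lambda r: r * f(r) * np.sin(w * r), 0, 50, limit=500)
    return 4 * np.pi / w * val

for w in [0.5, 1.0, 2.0, 4.0]:
    F_phi = radial_ft(lambda r: np.exp(-r) / r, w)   # analytically 4*pi/(1 + w^2)
    F_sig = radial_ft(lambda r: k1(r) / r, w)
    print(w, F_sig**2 / F_phi)                       # constant in w, as required
```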
C Earnshaw's Theorem

Theorem 3.4 (Earnshaw's Theorem; see [AKN85]). Let M = R^d, let Φ be harmonic, and let L be as in (2). Then L admits no differentiable strict local minima.
Proof. If (a, θ) is a differentiable strict local minimum, then for any i we must have

∇_{θ_i} L = 0 and Tr(∇²_{θ_i} L) > 0.

Since Φ is harmonic, we also have

Tr(∇²_{θ_i} L(θ_1, ..., θ_n)) = Δ_{θ_i} L = 2 Σ_{j≠i} a_i a_j Δ_{θ_i} Φ(θ_i, θ_j) + 2 Σ_{j=1}^{k} a_i b_j Δ_{θ_i} Φ(θ_i, w_j) = 0,

which is a contradiction. (In the first term, there is a factor of 2 by symmetry.)
D Descent Lemmas and Iteration Bounds

Algorithm 4: x = GD(L, x_0, T, α)
Input: L : M → R; x_0 ∈ M; T ∈ N; α ∈ R
Initialize x = x_0
for i = 1 to T do
    x = x − α∇L(x)
    x = Π_M x
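A runnable rendering of Algorithm 4 (a sketch: the projection Π_M is the identity for M = R^d and normalization for M = S^{d−1}; the example objective is illustrative):

```python
import numpy as np

def gd(grad_L, x0, T, alpha, proj=lambda x: x):
    """Algorithm 4: projected gradient descent with stepsize alpha for T iterations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        x = proj(x - alpha * grad_L(x))
    return x

# Illustrative use: minimize ||x - c||^2 over the unit sphere (Pi_M = normalization).
c = np.array([3.0, 4.0, 0.0])
x = gd(lambda x: 2 * (x - c), [1.0, 0.0, 0.0], T=1000, alpha=0.01,
       proj=lambda x: x / np.linalg.norm(x))
print(x)   # converges to c/||c|| = (0.6, 0.8, 0.0)
```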
Lemma D.1. Let f : Ω → R be a thrice-differentiable function such that |f(y)| ≤ B_0, ‖∇f(y)‖ ≤ B_1, ‖∇²f(y)‖ ≤ B_2, and ‖∇²f(z) − ∇²f(y)‖ ≤ B_3‖z − y‖ for all y, z in an (αB_1)-neighborhood of x. If ‖∇f(x)‖ ≥ η and x′ is reached after one iteration of gradient descent (Algorithm 4) with stepsize α ≤ 1/B_2, then ‖x′ − x‖ ≤ αB_1 and f(x′) ≤ f(x) − αη²/2.
Proof. The gradient descent step is given by x′ = x − α∇f(x). The bound on ‖x′ − x‖ is clear since ‖∇f(x)‖ ≤ B_1. By a Taylor expansion,

f(x′) ≤ f(x) − α∇f(x)^T ∇f(x) + α² (B_2/2) ‖∇f(x)‖² ≤ f(x) − (α − α² B_2/2) η².

For 0 ≤ α ≤ 1/B_2, we have α − α² B_2/2 ≥ α/2, and our lemma follows.
Lemma D.2. Let f : Ω → R be a thrice-differentiable function such that |f(y)| ≤ B_0, ‖∇f(y)‖ ≤ B_1, ‖∇²f(y)‖ ≤ B_2, and ‖∇²f(z) − ∇²f(y)‖ ≤ B_3‖z − y‖ for all y, z in an (αB_2)-neighborhood of x. If λ_min(∇²f(x)) ≤ −γ and x′ is reached after one iteration of Hessian descent (Algorithm 1) with stepsize α ≤ 1/B_3, then ‖x′ − x‖ ≤ αB_2 and f(x′) ≤ f(x) − α²γ³/2.
Proof. The Hessian descent step is given by x′ = x + βv_min, where v_min is the unit eigenvector corresponding to λ_min(∇²f(x)) and β = −αλ_min(∇²f(x)) sgn(∇f(x)^T v_min). Our bound on ‖x′ − x‖ is clear since |λ_min(∇²f(x))| ≤ B_2. By a Taylor expansion,

f(x′) ≤ f(x) + β∇f(x)^T v_min + β² v_min^T ∇²f(x) v_min + (B_3/6)|β|³‖v_min‖³ ≤ f(x) − |β|²γ + (B_3/6)|β|³.

The last inequality holds since the sign of β is chosen so that β∇f(x)^T v_min ≤ 0. Now, since |β| = αγ ≤ γ/B_3, we have −|β|²γ + (B_3/6)|β|³ ≤ −α²γ³/2.
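For concreteness, here is a sketch of the single Hessian descent step analyzed in this lemma, applied to an assumed toy saddle (Algorithm 1 itself also interleaves gradient steps, which are omitted here):

```python
import numpy as np

def hessian_descent_step(grad_f, hess_f, x, alpha):
    """One Hessian descent step: move along the minimum-curvature eigenvector,
    with the sign chosen against the gradient and |beta| = alpha * |lambda_min|."""
    eigvals, eigvecs = np.linalg.eigh(hess_f(x))
    lam_min, v_min = eigvals[0], eigvecs[:, 0]
    beta = -alpha * lam_min * np.sign(grad_f(x) @ v_min)
    return x + beta * v_min

# Toy saddle f(x, y) = x^2 - y^2 at the origin: the step escapes along y.
grad = lambda x: np.array([2 * x[0], -2 * x[1]])
hess = lambda x: np.diag([2.0, -2.0])
print(hessian_descent_step(grad, hess, np.array([0.0, 1e-6]), alpha=0.1))
```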
E Convergence of Almost λ-Harmonic Potentials

Lemma 4.3. Let M = R^d for d ≡ 3 mod 4, and let G be the regularized loss corresponding to the activation σ given by Lemma 4.2 with λ = 1. For any ε ∈ (0,1) and δ ∈ (0,1), if (a, θ) ∈ M_{G,δ}, then for all i, either 1) there exists j such that ‖θ_i − w_j‖ < kε, or 2) a_i² < 2kdδ.
Proof. The proof is similar to that of Theorem 3.6. Let Φ_ε be the realizable potential of Lemma 4.2 that is λ-harmonic for r ≥ ε, with λ = 1. Note that Φ_ε(0) = 1 is normalized. Let (a, θ) ∈ M_{G,δ}.
WLOG, consider θ_1 and an initial set S_0 = {θ_1} containing it. For a finite set of points S and a point x, define d(x, S) = min_{y∈S} ‖x − y‖. Then we consider the following set-growing process: if there exist θ_i, w_i ∉ S_j such that d(θ_i, S_j) < ε or d(w_i, S_j) < ε, add θ_i, w_i to S_j to form S_{j+1}; otherwise, we stop the process. We grow S_0 until the process terminates, obtaining the grown set S.
If there is some w_j ∈ S, then it must be the case that there exist j_1, ..., j_q such that ‖θ_1 − θ_{j_1}‖ < ε, ‖θ_{j_i} − θ_{j_{i+1}}‖ < ε, and ‖θ_{j_q} − w_j‖ < ε for some w_j. So there exists j such that ‖θ_1 − w_j‖ < kε.
Otherwise, notice that for each θ_i ∈ S, ‖w_j − θ_i‖ ≥ ε for all j, and ‖θ_i − θ_j‖ ≥ ε for all θ_j ∉ S. WLOG, let S = {θ_1, ..., θ_l}.
We consider changing all θ_1, ..., θ_l by the same v and define

H(a, v) = G(a, θ_1+v, ..., θ_l+v, θ_{l+1}, ..., θ_k).

The optimality conditions on a are

|∂H/∂a_i| = |4a_i + 2 Σ_{j≠i} a_j Φ_ε(θ_i, θ_j) + 2 Σ_{j=1}^{k} b_j Φ_ε(θ_i, w_j)| ≤ δ.

Next, since Φ_ε(r) is λ-harmonic for r ≥ ε, we may calculate the Laplacian of H as

Δ_v H = Σ_{i=1}^{l} λ (2 Σ_{j=1}^{k} a_i b_j Φ_ε(θ_i, w_j) + 2 Σ_{j=l+1}^{k} a_i a_j Φ_ε(θ_i, θ_j))
≤ Σ_{i=1}^{l} λ (−4a_i² − 2 Σ_{j=1, j≠i}^{l} a_i a_j Φ_ε(θ_i, θ_j)) + δλ Σ_{i=1}^{l} |a_i|
= −2λ E[(Σ_{i=1}^{l} a_i σ(θ_i, X))²] − 2λ Σ_{i=1}^{l} a_i² + δλ Σ_{i=1}^{l} |a_i|.

The second line follows from our optimality conditions, and the third line follows from completing the square. Since (a, θ) ∈ M_{G,δ}, we have Δ_v H ≥ −2kdδ. Let S = Σ_{i=1}^{l} a_i². Then, by Cauchy-Schwarz, we have −2λS + δλ√k √S ≥ −2kdδ. When S ≥ δ²k, we see that −λS ≥ −2λS + δλ√k √S ≥ −2kdδ; therefore, S ≤ 2kdδ/λ.
We conclude that S ≤ max(δ²k, 2kdδ/λ) ≤ 2kdδ/λ, since δ ≤ 1 ≤ 2d/λ and λ = 1. Therefore, a_i² ≤ 2kdδ.
Lemma 4.4. Assume the conditions of Lemma 4.3. If √(G(a, θ)) ≤ √(G(0,0)) − δ and (a, θ) ∈ M_{G,δ²/(2k³d)}, then there exist some i, j such that ‖θ_i − w_j‖ < kε.

Proof. If there do not exist i, j such that ‖θ_i − w_j‖ < kε, then by Lemma 4.3 this implies a_i² < δ²/k² for all i. Now, for an integrable function f(x), ‖f‖_X = √(E_X[f(X)²]) is a norm. Therefore, if f(x) = Σ_i b_i σ(w_i, x) is our true target function, we conclude by the triangle inequality that

√(G(a, θ)) ≥ ‖Σ_{i=1}^{k} a_i σ(θ_i, x) − f(x)‖_X ≥ ‖f(x)‖_X − Σ_{i=1}^{k} ‖a_i σ(θ_i, x)‖_X > √(G(0,0)) − δ.

This gives a contradiction, so we conclude that there must exist i, j such that θ_i is in a kε-neighborhood of w_j.
Lemma 4.5. Assume the conditions of Theorem 4.1 and Lemma 4.3. With high probability, we can initialize (a^{(0)}, θ^{(0)}) such that √(G(a^{(0)}, θ^{(0)})) ≤ √(G(0,0)) − δ with δ = (d/ε)^{−O(d)}.
Proof. Consider choosing θ_1 = 0 and then optimizing a_1. Given θ_1, the loss decrease is

G(a_1, 0) − G(0,0) = min_{a_1} (2a_1² + 2 Σ_{j=1}^{k} a_1 b_j Φ_ε(0, w_j)) = −(1/2)(Σ_{j=1}^{k} b_j Φ_ε(0, w_j))².

Because the w_j are random Gaussians with variance O(d log d), we have ‖w_j‖ ≤ O(d log d) with high probability for all j. By Lemma 4.2, our potential satisfies Φ_ε(0, w_j) ≥ (d/ε)^{−O(d)}. And since the b_j are uniformly chosen in [−1, 1], we conclude that with high probability over the choices of b_j, (1/2)(Σ_{j=1}^{k} b_j Φ_ε(θ_1, w_j))² ≥ (d/ε)^{−O(d)}, by appealing to Chebyshev's inequality on the squared term.
Therefore, we conclude that with high probability, G(a_1, 0) ≤ G(0,0) − (1/2)(d/ε)^{−O(d)}. Let √(G(a_1, 0)) = √(G(0,0)) − ∆ ≥ 0. Squaring and rearranging gives ∆ ≥ (1/(4√(G(0,0))))(d/ε)^{−O(d)}. Since G(0,0) ≤ O(k) = O(poly(d)), we are done.
E.1 Node by Node Analysis

The first few lemmas are similar to the ones proven before in the simultaneous case. The proofs are presented for completeness because the regularization terms are removed. Note that our loss function is quadratic in a; therefore, let a_1^*(θ_1) denote the optimal value of a_1 that minimizes our loss.
Lemma E.1. Let M = R^d for d ≡ 3 mod 4, and let L_1 be the loss restricted to (a_1, θ_1), corresponding to the activation function σ given by Lemma 4.2 with λ = 1. For any ε ∈ (0,1) and δ ∈ (0,1), we can construct σ such that if (a_1, θ_1) ∈ M_{L_1,δ}, then either 1) there exists j such that ‖θ_1 − w_j‖ < ε, or 2) a_1² < 2dδ.
Proof. The proof is similar to that of Lemma 4.3. Let Φ_ε be the realizable potential of Lemma 4.2 that is λ-harmonic for r ≥ ε. Note that Φ_ε(0) = 1 is normalized. Let (a_1, θ_1) ∈ M_{L_1,δ}. Assume there does not exist w_j such that ‖θ_1 − w_j‖ < ε.
The optimality condition on a_1 is

|∂L_1/∂a_1| = |2a_1 + 2 Σ_{j=1}^{k} b_j Φ_ε(θ_1, w_j)| ≤ δ.

Next, since Φ_ε(r) is λ-harmonic for r ≥ ε, we may calculate the Laplacian of L_1 as

Δ_{θ_1} L_1 = λ (2 Σ_{j=1}^{k} a_1 b_j Φ_ε(θ_1, w_j)) ≤ −2λa_1² + δλ|a_1|.

The inequality follows from our optimality condition. Since (a_1, θ_1) ∈ M_{L_1,δ}, we have Δ_{θ_1} L_1 ≥ −2dδ. When a_1² ≥ δ², we see that −λa_1² ≥ −2λa_1² + δλ|a_1| ≥ −2dδ; therefore, a_1² ≤ 2dδ/λ. We conclude that a_1² ≤ max(δ², 2dδ/λ) ≤ 2dδ/λ for δ ≤ 2d ≤ 2d/λ, since λ = 1. Therefore, a_1² ≤ 2dδ.
Lemma E.2. Assume the conditions of Lemma E.1. If √(L_1(a_1, θ_1)) ≤ √(L_1(0,0)) − δ and (a_1, θ_1) ∈ M_{L_1,δ²/(2d)}, then there exists some j such that ‖θ_1 − w_j‖ < ε.

Proof. The proof follows similarly from Lemma 4.4.
Now, our main observation is below, showing that in a neighborhood around wj , descending
along the gradient direction will move θ1 closer to wj . Our tighter control of the gradient of Φ
around wj will eventually allow us to show that θ1 converges to a small neighborhood around wj .
Lemma E.3. Assume the conditions of Theorem E.5 and Lemma E.1. If ‖θ_1 − w_j‖ ≤ d, |b_j| ≥ 1/poly(d), |a_1 − a_1^*(θ_1)| ≤ (d/ε)^{−O(d)} is almost optimal, and ‖w_i − w_j‖ ≥ Ω(d log d) for all i ≠ j, then −∇_{θ_1} L_1 = ζ (w_j − θ_1)/‖θ_1 − w_j‖ + ξ, with ζ ≥ (1/poly(d))(d/ε)^{−8d} and ‖ξ‖ ≤ (d/ε)^{−O(d)}.
Proof. Throughout the proof, we assume k = poly(d). Now, our gradient with respect to θ_1 is

∇_{θ_1} L_1 = 2a_1 b_j Φ_ε′(‖θ_1 − w_j‖) (θ_1 − w_j)/‖θ_1 − w_j‖ + 2 Σ_{i≠j} a_1 b_i Φ_ε′(‖θ_1 − w_i‖) (θ_1 − w_i)/‖θ_1 − w_i‖.

Since ‖θ_1 − w_j‖ ≤ d, we may lower bound |Φ_ε′(‖θ_1 − w_j‖)| ≥ e^{−√λ d} d^{1−d} (2d+√λ)^{−2d} ε^{2d}/3 ≥ Ω((d/ε)^{−4d}). Similarly, Φ_ε(‖θ_1 − w_j‖) ≥ Ω((d/ε)^{−4d}). On the other hand, since ‖w_i − w_j‖ ≥ Ω(d log d) for all i ≠ j, we may upper bound |Φ_ε(‖θ_1 − w_i‖)| ≤ (d/ε)^{−O(d)} and |Φ_ε′(‖θ_1 − w_i‖)| ≤ (d/ε)^{−O(d)}. Together, we conclude that ∇_{θ_1} L_1 = 2a_1 b_j Φ_ε′(‖θ_1 − w_j‖) (θ_1 − w_j)/‖θ_1 − w_j‖ + 2a_1 ξ, where ‖ξ‖ ≤ (d/ε)^{−O(d)}.
By assumption, |a_1 − a_1^*(θ_1)| ≤ (d/ε)^{−O(d)}, so

|∂L_1/∂a_1| = |2a_1 + 2b_j Φ_ε(‖θ_1 − w_j‖) + 2 Σ_{i≠j} b_i Φ_ε(‖θ_1 − w_i‖)| ≤ (d/ε)^{−O(d)}.

By a similar argument as for the derivative, we see that a_1 = −b_j Φ_ε(‖θ_1 − w_j‖) + (d/ε)^{−O(d)}. Therefore, the direction of −∇_{θ_1} L_1 moves θ_1 closer to w_j, since

−∇_{θ_1} L_1 = b_j² Φ_ε(‖θ_1 − w_j‖) Φ_ε′(‖θ_1 − w_j‖) (θ_1 − w_j)/‖θ_1 − w_j‖ + (d/ε)^{−O(d)},

and we know Φ_ε > 0 and Φ_ε′ < 0, so −b_j² Φ_ε(‖θ_1 − w_j‖) Φ_ε′(‖θ_1 − w_j‖) ≥ (1/poly(d))(d/ε)^{−8d}.
Lemma E.4 (Node-wise Initialization). Assume the conditions of Theorem E.5 and Lemma E.1. With high probability, we can initialize (a_1^{(0)}, θ_1^{(0)}) such that √(L_1(a_1^{(0)}, θ_1^{(0)})) ≤ √(L_1(0,0)) − δ with δ = (1/poly(d))(d/ε)^{−18d}, in time log(d)^{O(d)}.
Proof. By our conditions, there must exist some b_j such that |b_j| ≥ 1/poly(d) and, for all i, ‖w_i − w_j‖ ≥ Ω(d log d). Note that if we randomly sample points in a ball of radius O(d log d), we will land in a d-neighborhood of w_j with probability log(d)^{−O(d)}, since ‖w_j‖ ≤ O(d log d).
Let θ_1 be such that ‖θ_1 − w_j‖ ≤ d; then we can solve for a_1 = a_1^*(θ_1), since we are simply minimizing a quadratic in one variable. Then, by Lemma E.3, we see that ‖∇_{θ_1} L_1‖ ≥ (1/poly(d))(d/ε)^{−8d}. Finally, by Lemma 4.2, we know that the Hessian is bounded by poly(d)(d/ε)^{2d}. So, by Lemma D.1, we conclude that by taking a stepsize of α = (1/poly(d))(d/ε)^{−2d} to reach (a_1′, θ_1′), we can guarantee that L_1(a_1′, θ_1′) ≤ L_1(a_1^*(θ_1), θ_1) − (1/poly(d))(d/ε)^{−18d}. But since L_1(a_1^*(θ_1), θ_1) ≤ L_1(0,0), we conclude that L_1(a_1′, θ_1′) ≤ L_1(0,0) − (1/poly(d))(d/ε)^{−18d}. Let √(L_1(a_1′, θ_1′)) = √(L_1(0,0)) − ∆ ≥ 0. Squaring and rearranging gives ∆ ≥ (1/(4√(L_1(0,0))))(1/poly(d))(d/ε)^{−18d}. Since L_1(0,0) ≤ O(k) = O(poly(d)), we are done.
Lemma E.5. Assume the conditions of Lemma E.1. Also, assume b_1, ..., b_k are any numbers in [−1, 1] and w_1, ..., w_k ∈ R^d satisfy ‖w_i‖ ≤ O(d log d) for all i, and that there exists some |b_j| ≥ 1/poly(d) with ‖w_i − w_j‖ ≥ Ω(d log d) for all i ≠ j.
Then with high probability, we can choose an initial point (a_1^{(0)}, θ_1^{(0)}) such that after running SecondGD (Algorithm 2) on the restricted regularized objective L_1(a_1, θ_1) for at most (d/ε)^{O(d)} iterations, there exists some w_j such that ‖θ_1 − w_j‖ < ε. Furthermore, if |b_j| ≥ 1/poly(d) and ‖w_i − w_j‖ ≥ Ω(d log d) for all i ≠ j, then ‖θ_1 − w_j‖ < (d/ε)^{−O(d)} and |a_1 + b_j| < (d/ε)^{−O(d)}.
Proof. First, by Lemma E.4, we can initialize (a_1^{(0)}, θ_1^{(0)}) such that √(L_1(a_1^{(0)}, θ_1^{(0)})) ≤ √(L_1(0,0)) − δ for δ = (1/poly(d))(d/ε)^{−18d}. If we set α = (d/ε)^{−O(d)} and η = γ = λδ²/(2d), then running Algorithm 2 will terminate and return some (a_1, θ_1) in at most (d/ε)^{O(d)} iterations. This is because our algorithm ensures that the objective decreases by at least min(αη²/2, α²γ³/2) at each iteration, G(0,0) is bounded by O(k), and G ≥ 0 is non-negative.
Assume there does not exist w_j such that ‖θ_1 − w_j‖ < (d/ε)^{−O(d)}. Then we claim that (a_1, θ_1) ∈ M_{L,λδ²/(2d)}. For the sake of contradiction, assume otherwise. By our algorithm's termination conditions, it must be that after one step of gradient or Hessian descent from (a_1, θ_1), we reach some (a′, θ′) with L_1(a′, θ′) > L_1(a_1, θ_1) − min(αη²/2, α²γ³/2). Now, Lemma 4.2 ensures that all first three derivatives of Φ_ε are bounded by (d/ε)^{2d}, except at w_1, ..., w_k. Furthermore, since there does not exist w_j such that ‖θ_1 − w_j‖ < (d/ε)^{−O(d)}, L_1 is three-times continuously differentiable within an α(d/ε)^{2d} = (d/ε)^{−O(d)} neighborhood of θ_1. Therefore, by Lemmas D.1 and D.2, we know that L_1(a′, θ′) ≤ L_1(a_1, θ_1) − min(αη²/2, α²γ³/2), a contradiction.
So, it must be that (a_1, θ_1) ∈ M_{L,λδ²/(2d)}. Since our algorithm maintains that our objective function is decreasing, √(L_1(a_1, θ_1)) ≤ √(L_1(0,0)) − δ. So, by Lemma E.2, there must be some w_j such that ‖θ_1 − w_j‖ ≤ ε.
Now, if |b_j| ≥ 1/poly(d) and ‖w_i − w_j‖ ≥ Ω(d log d) for all i, then since (a_1, θ_1) ∈ M_{L,λδ²/(2d)} and ‖θ_1 − w_j‖ ≤ ε, by Lemma E.3 we have ‖∇_{θ_1} L_1‖ ≥ (1/poly(d))(d/ε)^{−8d} > δ²/(2d), a contradiction. Therefore, we must conclude that our original assumption was false, and ‖θ_1 − w_j‖ < (d/ε)^{−O(d)} for some w_j.
Finally, we see that the charges also converge, since a_1 = −b_j Φ_ε(‖θ_1 − w_j‖) + (d/ε)^{−O(d)} and ‖θ_1 − w_j‖ = (d/ε)^{−O(d)}. By noting that Φ_ε(0) = 1 and Φ_ε is O((d/ε)^{2d})-Lipschitz, we conclude.
Finally, we prove our main theorem.

Theorem 4.6. Let M = R^d with d ≡ 3 mod 4, let L be as in (1), and let k = poly(d). For all ε ∈ (0,1), we can construct an activation σ such that if w_1, ..., w_k ∈ R^d are randomly chosen with w_i ∼ N(0, O(d log d) I_{d×d}) and b_1, ..., b_k are chosen uniformly at random from [−1, 1], then with high probability, after running nodewise descent (Algorithm 3) on the objective L for at most (d/ε)^{O(d)} iterations, (a, θ) is in a (d/ε)^{−O(d)} neighborhood of the global minimum.
Proof. Let our potential Φ_ε be the one constructed in Lemma 4.2 that is λ-harmonic for all r ≥ ε, with λ = 1. Let (a_i, θ_i) be the i-th node to be initialized and subjected to second-order gradient descent. We want to show that the nodes (a_i, θ_i) converge, in a node-wise fashion, to some permutation of {(b_1, w_1), ..., (b_k, w_k)}.
First, with high probability we know that 1 − 1/poly(d) ≥ |b_j| ≥ 1/poly(d), ‖w_i‖ ≤ O(d log d), and ‖w_i − w_j‖ ≥ Ω(d log d) for all i, j. By Lemma E.5, we know that with high probability (a_1, θ_1) will converge to some (d/ε)^{−O(d)} neighborhood of (b_{π(1)}, w_{π(1)}) for some function π : [k] → [k]. Now, we treat (a_1, θ_1) as one of the fixed charges and note that |a_1| ≤ 1 and ‖θ_1‖ ≤ O(d log d); as long as k > 1 (if k = 1, we are done), there exists |b_j| ≥ 1/poly(d) with ‖w_i − w_j‖ ≥ Ω(d log d) for all i and ‖θ_1 − w_j‖ ≥ Ω(d log d).
Then, by Lemma E.4, we can initialize (a_2^{(0)}, θ_2^{(0)}) such that √(L_2(a_2^{(0)}, θ_2^{(0)})) ≤ √(L_2(0,0)) − δ, with δ = (1/poly(d))(d/ε)^{−18d}. Then, by Lemma E.5, we know that (a_2, θ_2) will converge to some w_{π(2)} such that ‖θ_2 − w_{π(2)}‖ < ε (or ‖θ_2 − θ_1‖ < ε, but then θ_2 is still ε-close to w_{π(1)}). We claim that π(1) ≠ π(2).
By the optimality conditions on a_2, we see that

a_2^*(θ_2) = a_1 Φ_ε(‖θ_2 − θ_1‖) + b_j Φ_ε(‖θ_1 − w_j‖) + Σ_{i≠j} b_i Φ_ε(‖θ_1 − w_i‖).

If w_{π(1)} = w_{π(2)}, then note that ‖θ_1 − w_i‖ ≥ Ω(d log d) for all i ≠ π(1). Therefore, Σ_{i≠j} b_i Φ_ε(‖θ_1 − w_i‖) = (d/ε)^{−O(d)}. And by our convergence guarantees and the (d/ε)^{2d}-Lipschitzness of Φ_ε, a_1 Φ_ε(‖θ_2 − θ_1‖) + b_j Φ_ε(‖θ_1 − w_j‖) ≤ (d/ε)^{−O(d)}. Therefore, a_2^*(θ_2) ≤ (d/ε)^{−O(d)}.
However, we see that L_2(a_2, θ_2) ≥ L_2(a_2^*(θ_2), θ_2) = L_2(0,0) − (1/2) a_2^*(θ_2)² ≥ L_2(0,0) − (d/ε)^{−O(d)}. But since L_2 is non-increasing, this contradicts our initialization; therefore, π(1) ≠ π(2). Our claim is proved, and by Lemma E.5, since |b_{π(2)}| ≥ 1/poly(d) and, for all i, ‖w_i − w_{π(2)}‖ ≥ Ω(d log d) and ‖θ_1 − w_{π(2)}‖ ≥ Ω(d log d), we conclude that (a_2, θ_2) is in a (d/ε)^{−O(d)} neighborhood of (b_{π(2)}, w_{π(2)}). Finally, we induct, and by similar reasoning, π is a permutation. Our theorem follows.
F Convergence of Almost Strictly Subharmonic Potentials

Definition F.1. Φ(θ, w) is a strictly subharmonic potential on Ω if it is differentiable and Δ_θ Φ(θ, w) > 0 for all θ ∈ Ω, except possibly at θ = w.

An example of such a potential is Φ(θ, w) = ‖θ − w‖^{2−d−ε} for any ε > 0. Although this potential is unbounded at θ = w for most d, we remark that it is bounded when d = 1. Furthermore, the signs of the output weights a_i, b_i matter in determining the sign of the Laplacian of our loss function; therefore, we need to make suitable assumptions in this framework.
Under Assumption 1, we are working with an even simpler loss function:

L(θ) = 2 Σ_{i<j} Φ(θ_i, θ_j) − 2 Σ_{i=1}^{k} Σ_{j=1}^{k} Φ(θ_i, w_j).    (3)
Theorem F.2. Let Φ be a symmetric strictly subharmonic potential on M with Φ(θ, θ) = ∞. Let
Assumption 1 hold and let L be as in (3). Then, L admits no local minima, except when θi = wj for
some i, j.
Proof. First, let Φ be translationally invariant and M = R^d. Let θ be a critical point. Assume, for the sake of contradiction, that θ_i ≠ w_j for all i, j. If the θ_i are not distinct, separating them shows that we are not at a local minimum, since Φ(θ_i, θ_j) = ∞ there and finite elsewhere.
The main technical detail is to remove the interaction terms between pairwise θ_i by considering a correlated movement, where each θ_i is moved along the same direction v. In this case, notice that our objective, as a function of v, is simply

H(v) = L(θ_1+v, θ_2+v, ..., θ_k+v) = 2 Σ_{i<j} Φ(θ_i+v, θ_j+v) − 2 Σ_{i=1}^{k} Σ_{j=1}^{k} Φ(θ_i+v, w_j).

Note that the first term is constant as a function of v, by translational invariance. Therefore,

∇²_v H = −2 Σ_{i=1}^{k} Σ_{j=1}^{k} ∇² Φ(θ_i, w_j).

By the subharmonic condition, Tr(∇²_v H) = −2 Σ_{i=1}^{k} Σ_{j=1}^{k} Δ_{θ_i} Φ(θ_i, w_j) < 0. Therefore, we conclude that θ is not a local minimum of H, and hence not of L. We conclude that θ_i = w_j for some i, j.
The above technique generalizes to the case of rotationally invariant Φ by working in spherical coordinates, where correlated translations are simply rotations. Note that we can change to spherical coordinates (without the radius parameter) and let θ̃_1, ..., θ̃_k be the standard spherical representations of θ_1, ..., θ_k.
We then consider a correlated translation in the spherical coordinate space, which is simply a rotation on the sphere. Let v be a vector in R^{d−1}; our objective is simply

H(v) = L(θ̃_1+v, ..., θ̃_k+v).

We then apply the same proof, since Φ(θ̃_i+v, θ̃_j+v) is constant as a function of v by rotational invariance.
Corollary F.3. Assume the conditions of Theorem F.2, with Φ(θ, θ) < ∞. Then L admits no local minima, except at the global minima.

Proof. By the same proof as in Theorem F.2, we conclude that there must exist i, j such that θ_i = w_j. Then, since Φ(θ, θ) < ∞, notice that θ_i and w_j cancel each other out, and by dropping θ_i, w_j from the loss function, we obtain a new loss function L with k−1 variables. Then, using induction, we see that θ_i = w_{π(i)} at the local minima for some permutation π.
For concreteness, we will focus on a specific potential function with this property: the Gaussian kernel Φ(θ, w) = exp(−c‖θ − w‖²/2), which corresponds to a Gaussian activation. In R^d, the Laplacian is ΔΦ = (c‖θ − w‖² − d) exp(−c‖θ − w‖²/2), which becomes positive when ‖θ − w‖² ≥ d/c. Thus, Φ is strictly subharmonic outside a ball of radius √(d/c). Note that the Gaussian potential restricted to S^{d−1} gives rise to the exponential activation function, so we can show convergence similarly.
Theorem F.4. Let M = R^d, let Φ(θ, w) = e^{−c‖θ−w‖²/2}, and let Assumption 1 hold. Let L be as in (3) and ‖w_j‖ ≤ poly(d) for all j.
If c = O(d/ε) and (a, θ) ∈ M_{e^{−poly(d,1/ε)}}, then there exist i, j such that ‖θ_i − w_j‖² ≤ ε.
Proof. Consider again a correlated movement, where each θ_i is moved along the same direction v. As before, this drops the pairwise θ_i terms. If ‖θ_i − w_j‖² > ε for all i, j, then we see that Δ_{θ_i} Φ = (c‖θ_i − w_j‖² − d) exp(−c‖θ_i − w_j‖²/2) > e^{−poly(d,1/ε)}, so

Tr(∇² L) = −2 Σ_{i=1}^{k} Σ_{j=1}^{k} Δ_{θ_i} Φ(θ_i, w_j) < −e^{−poly(d,1/ε)}.

Therefore, ∇² L must admit a strictly negative eigenvalue less than −e^{−poly(d,1/ε)}, which implies our claim (we drop the poly(d, k) terms).
G Common Activations

First, we consider the sign activation function. Under restrictions on the size of the input dimension or the number of hidden units, we can prove convergence results for the sign activation function, as it gives rise to a harmonic potential.

Assumption 1. All output weights are b_i = 1, and therefore the output weights a_i = −b_i = −1 are fixed throughout the learning algorithm.
Lemma G.1. Let M = S¹ and let Assumption 1 hold. Let L be as in (2) and let σ be the sign activation function. Then L admits no strict local minima, except at the global minima.
We cannot simply analyze the convergence of GD on all θ_i simultaneously since, as before, the pairwise interaction terms between the θ_i present complications. Therefore, we now only consider the convergence guarantee of gradient descent on the first node, θ_1, to some w_j, while the other nodes are inactive (i.e., a_2 = ... = a_k = 0). In essence, we are working with the following simplified loss function:

L(a_1, θ_1) = a_1² Φ(θ_1, θ_1) + 2 Σ_{j=1}^{k} a_1 b_j Φ(θ_1, w_j).    (4)
Lemma G.2. Let M = S¹, let L be as in (4), and let σ be the sign activation function. Then, almost surely over random choices of b_1, ..., b_k, all local minima of L are at ±w_j.

For the polynomial activation and potential functions, we can also show convergence under orthogonality assumptions on the w_j. Note that the realizability of polynomial potentials is guaranteed in Section B.

Theorem G.3. Let M = S^{d−1}, let w_1, ..., w_k be orthonormal vectors in R^d, and let Φ be of the form Φ(θ, w) = (θ^T w)^l for some fixed integer l ≥ 3. Let L be as in (4). Then, no critical point of L is a local minimum, except when θ_1 = w_j for some j.
G.1 Convergence of Sign Activation

Lemma G.1. Let M = S¹ and let Assumption 1 hold. Let L be as in (2) and let σ be the sign activation function. Then L admits no strict local minima, except at the global minima.
Proof. We will first argue that unless all the electrons and protons have matched up as a permutation, we cannot be at a strict local minimum, and then argue that the global minimum is a strict local minimum.
First note that if some electron and proton have merged, we can remove such pairs and argue
about the remaining configuration of charges. So WLOG we assume there are no such overlapping
electron and proton.
First consider the case when there is an isolated electron e and there is no charge diagonally
opposite to it. In this case look at the two semicircles on the left and the right half of the circle
around the isolated electron – let q1 and q2 be the net charges in the left and the right semi-circles.
Note that q1 =
6 q2 since they are integers and q1 + q2 = +1 which is odd. So by moving the electron
slightly to the side with the larger charge you decrease the potential.
If there is a proton opposite the isolated electron, the argument becomes simpler, as the proton benefits the motion of the electron in either the left or right direction. So the only way the electron does not benefit by moving in either direction is if q_1 = −1 and q_2 = −1, which is impossible.
If there is an electron opposite the isolated electron, then the combination of these two diagonally opposing electrons has zero net effect on every other charge. So it is possible to rotate this pair jointly, keeping them opposed, in any way without changing the potential. So this is not a strict local minimum.
Next, if there is a clump of isolated electrons with no charge at the diagonally opposite point, then again, as before, if q_1 ≠ q_2 we are done. If q_1 = q_2, then the electrons in the clump are locally unaffected by the remaining charges. So now, by splitting the clump into two groups and moving them apart infinitesimally, we will decrease the potential.
Now, if there are only protons in the position diagonally opposite an isolated electron, again we are done, as in the case where one electron is diagonally opposite one proton.
Finally, if there are only electrons diagonally opposite a clump of electrons, again we are done, as we have found at least one pair of opposing electrons that can be jointly rotated in any way.
Next, we will argue that a permutation matching is a strict local minimum. For this, we will assume that no two protons are diagonally opposite each other (as such pairs can be removed without affecting the function). Now, given a perfect matching of electrons and protons, if we perturb the electrons in any way infinitesimally, then any isolated clump of electrons can be moved slightly to the left or right to improve the potential.
Lemma G.2. Let M = S¹, let L be as in (4), and let σ be the sign activation function. Then, almost surely over random choices of b_1, ..., b_k, all local minima of L are at ±w_j.

Proof. On S¹, notice that the pairwise potential function is Φ(θ, w) = 1 − 2cos⁻¹(θ^T w)/π = 1 − 2α/π, where α is the angle between θ and w. So, let us parameterize in polar coordinates, calling our true parameters w̃_1, ..., w̃_k ∈ [0, 2π] and rewriting our loss as a function of θ̃ ∈ [0, 2π].
Since Φ is a linear function of the angle between θ and w_j, each w_j exerts a constant gradient on θ̃ towards w̃_j, with discontinuities at w̃_j and π + w̃_j. Almost surely over b_1, ..., b_k, the gradient is non-zero almost everywhere, except at the discontinuities, which are at w̃_j or π + w̃_j for some j.
G.2 Convergence of Polynomial Potentials

Theorem G.3. Let M = S^{d−1}, let w_1, ..., w_k be orthonormal vectors in R^d, and let Φ be of the form Φ(θ, w) = (θ^T w)^l for some fixed integer l ≥ 3. Let L be as in (4). Then, no critical point of L is a local minimum, except when θ_1 = w_j for some j.
Proof. WLOG, we can consider w_1, ..., w_d to be the basis vectors e_1, ..., e_d. Note that this is a manifold optimization problem, so our optimality conditions are given by introducing a Lagrange multiplier λ, as in [GHJY15]:

∂L/∂a = 2 Σ_{i=1}^{d} b_i (θ^i)^l + 2a = 0,
(∇_θ L)_i = 2a b_i l (θ^i)^{l−1} − 2λ θ^i = 0,

where λ is chosen to minimize

λ = argmin_λ Σ_i (a b_i l (θ^i)^{l−1} − λ θ^i)² = Σ_i a b_i l (θ^i)^l.

Therefore, either θ^i = 0 or b_i (θ^i)^{l−2} = λ/(al). From [GHJY15], we consider the constrained Hessian, which is a diagonal matrix with diagonal entries

(∇² L)_{ii} = 2a b_i l(l−1)(θ^i)^{l−2} − 2λ.

Assume that there exist θ^i, θ^j ≠ 0; then we claim that θ is not a local minimum. First, our optimality conditions imply b_i (θ^i)^{l−2} = b_j (θ^j)^{l−2} = λ/(al). So,

(∇² L)_{ii} = (∇² L)_{jj} = 2a b_i l(l−1)(θ^i)^{l−2} − 2λ = 2(l−2)λ = −2(l−2)l a².

Now, there must exist a vector v ∈ S^{d−1} such that v^k = 0 for k ≠ i, j and v^T θ = 0, so v is in the tangent space at θ. Finally, v^T (∇² L) v = −2(l−2)l a² < 0, implying that θ is not a local minimum when a ≠ 0. Note that a = 0 occurs with probability 0, since our objective function is non-increasing throughout the gradient descent algorithm and is almost surely initialized to be negative, with a optimized upon initialization, as observed before.
Under a node-wise descent algorithm, we can show polynomial-time convergence to global minima
under orthogonality assumptions on wj for these polynomial activations/potentials. We will not
include the proof but it follows from similar techniques presented for nodewise convergence in
Section E.
H Proof of Sign Uniqueness

For the sign activation function, we can show a related result.

Theorem H.1. Let M = S^{d−1}, let σ be the sign activation function, and let b_2 = ... = b_k = 0. If the loss (1) at (a, θ) is less than O(1), then there must exist θ_i such that w_1^T θ_i > Ω(1/√k).
Proof. WLOG let w_1 = e_1. Notice that our loss can be bounded below by Jensen's inequality:

E_X[(Σ_{i=1}^{k} a_i σ(θ_i^T X) − σ(X_1))²] ≥ E_{X_1}[(E_{X_2,...,X_d}[Σ_{i=1}^{k} a_i σ(θ_i^T X) − σ(X_1)])²],

where X is a standard Gaussian in R^d. Now,

E_{X_2,...,X_d}[Σ_{i=1}^{k} a_i σ(θ_i^T X)] = Σ_{i=1}^{k} a_i E_{X_2,...,X_d}[σ(θ_{i1} X_1 + Σ_{j>1} θ_{ij} X_j)] = Σ_{i=1}^{k} a_i E_Y[σ(θ_{i1} X_1 + √(1−θ_{i1}²) Y)] = Σ_{i=1}^{k} a_i E_Y[σ(θ_{i1} X_1/√(1−θ_{i1}²) + Y)],

where Y is an independent standard Gaussian, and for any small δ, if p(y) is the standard Gaussian density,

E_Y[σ(δ + Y)] = ∫_{−δ}^{δ} p(y) dy = 2p(0)δ + O(δ²).

If w_1^T θ_i = θ_{i1} < ε for all i, then notice that with high probability on X_1 (say, conditioning on |X_1| ≤ 1),

E_Y[σ(θ_{i1} X_1/√(1−θ_{i1}²) + Y)] = 2p(0) θ_{i1} X_1/√(1−θ_{i1}²) + O(ε² X_1²).

Therefore, since ε < O(1/√k),

E_{X_2,...,X_d}[Σ_{i=1}^{k} a_i σ(θ_i^T X)] = X_1 Σ_{i=1}^{k} 2p(0) a_i θ_{i1}/√(1−θ_{i1}²) + O(kε² X_1²) = cX_1 + O(1).

Finally, our error bound is now

E_{X_1}[(E_{X_2,...,X_d}[Σ_{i=1}^{k} a_i σ(θ_i^T X) − σ(X_1)])²] ≥ E_{|X_1|≤1}[(cX_1 + O(1) − σ(X_1))²],

and the final expression is always larger than some constant, regardless of c.
An Optimal Polarization Tracking Algorithm for
Lithium-Niobate-based Polarization Controllers
Joaquim D. Garcia and Gustavo C. Amaral
arXiv:1603.06751v1 [] 12 Mar 2016
February 27, 2018
Abstract

We present an optimal algorithm for arbitrary polarization tracking using lithium-niobate-based polarization controllers, organized in three stages: device calibration, polarization state rotation, and stabilization. The theoretical model representing the lithium-niobate-based polarization controller is derived and the methodology is successfully applied. Results are numerically simulated in the MATLAB environment.
1 Introduction
Keeping the polarization state stable in long-haul fiber optic communication links is a difficult task due to local changes in the silica structure along the fiber, which induce birefringence and, therefore, variation of the polarization state [1, 2]. Although until the early 1990s most optical communication links did not account for it, Polarization Mode Dispersion (PMD) has become a more serious threat to modern optical links as bit rates grow [3]. In this context, polarization control has gained much attention, especially with the development of PMD-compensation techniques [4] and of the so-called Polarization Shift Keying [5].
Lithium niobate (LiNbO₃) is a material capable of altering its refractive index upon application of a potential difference between its terminals [6]. This device represented a huge step in polarization stabilization and control technology, since it allowed extremely fast polarization controlling and tracking devices to be developed, as no mechanical structures were necessary [7]. In this work, we present a complete algorithm for polarization control and stabilization that relies on the aforementioned LiNbO₃ structures, more specifically the EOSpace Polarization Controller Module [8]. The polarization control itself is composed of two main steps: first, an analytical rotation along the Poincaré sphere relying on basic analytic geometry [9] and quaternion arithmetic [10]; second, a stabilization method based on adaptive filtering to achieve fine adjustment [11]. A state estimator that is fundamental for the good functioning of the rotation algorithm is also presented.
2 Mathematical representation of polarization
The state of polarization of light has been represented mathematically by Jones vectors and Stokes vectors [12]. We shall stick to the Stokes representation, since visualization on the Poincaré sphere is direct; nevertheless, the conversion between these two representations requires only straightforward computations. Stokes vectors are 4-dimensional vectors that carry information about the State of Polarization (SOP) of light. Since the first component (S₀) is associated with the total light intensity, it is common to normalize the Stokes vector by dividing it by S₀. In the case of coherent light, the 3-dimensional Stokes vector formed from the remaining three normalized components of the former 4-dimensional vector has norm 1. Since we have 3-dimensional normalized vectors representing the SOP, it is usual to represent them graphically on a 3-sphere known as the Poincaré sphere. Since all SOPs are mapped bijectively onto the 3-sphere, we shall treat, from now on, an SOP as a point on the 3-sphere.
SOP changes that do not affect the light intensity can be represented by rotations in 3-space. These rotations are a class of unitary transformations and can be represented by orthonormal matrices [13]. Polarizers, on the other hand, do affect the light intensity and, as such, cannot be represented by orthonormal matrices; they can, however, be represented by projection matrices. Rotation matrices in 3-space have a very interesting characterization via the Spectral Theorem: they always have 3 eigenvalues (since they are normal); all of the eigenvalues have norm 1; one of the eigenvalues is always equal to 1, and its eigenvector is the rotation axis, e; the remaining eigenvalues are complex conjugate numbers whose real part corresponds to the cosine of the rotation angle, θ, and whose imaginary part corresponds to the sine of the rotation angle.
The quaternions are a number system that extends the complex numbers. Since the unit quaternions map homeomorphically to the 3-space rotation matrices [14], it is possible to use them to perform 3-space rotations, for which they have been shown to be far more stable [15]. For the aforementioned reasons, our algorithm relies on the quaternion representation instead of the matrix representation.
3 Lithium-Niobate-based Polarization Controller Characteristics
The literature on polarization control is rich, and diverse techniques have been proposed and verified over the years: [16] shows that three elements, two quarter-wave plates and one half-wave plate, are sufficient to reach any SOP from any other SOP; other methods appear in [17, 18, 19]. As mentioned before, our methodology focuses on the electro-optic LiNbO₃ EOSpace Polarization Controller Module (PCM). Such a device is available commercially as a multi-stage component but, for simplicity, we develop an algorithm that uses a single stage. The algorithm is easily extended to account for the multi-stage version, to reduce the input voltage, which may vary within a ±70 V range.
A single stage of the PCM has 3 electrodes [8] and realizes an arbitrary Linear Retarder: a linear wave plate that induces a relative phase difference between the TE and TM modes of the propagating electromagnetic field, due to the variation of the refractive index along both axes as a function of the applied potential difference, which causes birefringence and thus alters the polarization state. Linear retarders have a main polarization axis, also known as the eigen-mode, e ∈ {v ∈ R³ | v_z = 0}, and a characteristic phase delay, θ ∈ [0, 2π). It is possible to show that, by changing the eigen-mode and the phase delay of a linear retarder, one can shift from any SOP to any other SOP. The proof of this is in fact constructive, and our algorithm provides such a construction. In order to set the eigen-mode to e = (cos(α/2), sin(α/2), 0) and the phase delay to θ = 2πδ, the electrode voltages must be set to:
V_a = 2V₀ δ sin(α) − Vπ δ cos(α) + V_ab    (1)

V_b = 0    (2)

V_c = 2V₀ δ sin(α) − Vπ δ cos(α) + V_cb    (3)
where Vπ is the voltage required to induce a 180° phase shift between the TE and TM modes for a single stage, V₀ is the voltage required to rotate all power from the TE to the TM mode, or vice versa, for a single stage, and V_ab and V_cb are the bias voltages required on electrodes A and C, respectively, in order to achieve zero birefringence between the TE and TM modes [8]. Even though the data-sheet of the device provides the ranges within which V₀, Vπ, V_ab, and V_cb should lie, their actual values for an arbitrary stage must be determined via a calibration procedure; the one adopted in our methodology is presented in [20].
4 Analytical Rotation Algorithm
The feasible eigen-modes of linear retarders are linear SOPs (hence their name), so the rotation axis must lie in the s₁-s₂ plane, i.e., along the equator of the Poincaré sphere. Therefore, given two polarization states represented by s_in and s_target, we must find a rotation axis lying in the s₁-s₂ plane and a rotation angle such that the corresponding linear transformation converts s_in into s_target. Since the possibility of measuring the output SOP after a transformation is of paramount importance to our adaptive algorithm, we assume, throughout the development, that a polarimeter such as the one described in [21] is included in the control loop.
We start the rotation step by defining the following vectors: v₁, orthogonal to the rotation axis; v₂, the normalized cross-product of v₁ and s₃ (supposing v₁ is not parallel to s₃); and v₃, the centre of the rotation, i.e., the point in the rotation plane that intersects the line parallel to the rotation axis. In case v₁ is parallel to s₃, we use the first of s_in, s_target, or s₁ that is not parallel to v₁. Now we set v₄ = s_in − v₃ and v₅ = s_target − v₃, where one can imagine v₄ and v₅ as two clock hands whose oriented angle is the rotation angle, θ. The cosine of θ is given by:
cos(θ) = ⟨v₄, v₅⟩ / (‖v₄‖₂ ‖v₅‖₂)    (4)
Note that the angle −θ has the same cosine as θ. To determine which rotation direction is the right one,
we take the sign of the cross product between v4 and v5 with the following implication: if its negative,
then θ = −θ and v3 = −v3 ; if its positive, then do nothing. After all those computations, θ is the rotation
angle and v3 is the rotation axis. Fig. 1 presents the graphical interpretation on the 3-space Poincaré
Sphere of the determination of both rotation axis and rotation angle given two arbitrary SOPs for sin
and starget .
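A minimal sketch of this construction is given below, assuming unit Stokes vectors as NumPy arrays. The names v3, v4, v5 mirror the text; the specific choice of the equatorial axis (the normalized cross product of ŝ3 with starget − sin) is our reading of the geometric constraints, not the authors' published code.

```python
import numpy as np

def rotation_from_sops(s_in, s_target, eps=1e-9):
    """Find an equatorial rotation axis and an oriented angle theta such
    that rotating s_in about the axis by theta yields s_target."""
    d = s_target - s_in
    # The axis must lie in the s1-s2 plane and be orthogonal to d so that
    # both SOPs sit on the same rotation circle: axis = z_hat x d.
    axis = np.array([-d[1], d[0], 0.0])
    n = np.linalg.norm(axis)
    if n < eps:                        # degenerate: SOPs differ only in s3
        axis = np.array([1.0, 0.0, 0.0])
    else:
        axis = axis / n
    v3 = np.dot(axis, s_in) * axis     # centre of the rotation circle
    v4, v5 = s_in - v3, s_target - v3  # the two "clock arrows"
    cos_t = np.dot(v4, v5) / (np.linalg.norm(v4) * np.linalg.norm(v5))  # Eq. (4)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    if np.dot(np.cross(v4, v5), axis) < 0:   # orient the rotation direction
        theta = -theta
    return axis, theta
```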
Figure 1: Rotation example: input SOP in green, target SOP in black, output SOP in red, rotation axis
in black, rotation circle in pink. The rotation arc is the smaller arc (least angular distance) defined by
the points in the pink circle.
5 Stabilization algorithm
Generally, non-linear optimization problems such as the one presented here suffer from an intrinsic issue: finding a rotation that changes one polarization state into another is extremely useful when the SOPs are distant, but becomes highly unstable when sin and starget are close to each other. Taking the Levenberg-Marquardt procedure as inspiration [], we devise a two-step control algorithm that alternates between rotation and stabilization depending on the distance between SOPs. The stabilization step also helps eliminate problems involving measurement errors and numerical approximations that may influence stability. For this step, we resort to a classic optimization algorithm known as Gradient Descent, which has been successfully applied to adaptive filtering [22] and machine learning [23].
Given a cost function J(x) to be minimized, where x is a n-dimensional vector, the Gradient Descent
algorithm updates x in the direction of the negative (descending) gradient, in search for the minimum of
the functional, with the rule x = x − α∇J(x), where α is a step size parameter that must be calibrated
[24]. Since the cost function depends non-linearly on the input and target SOPs, there is no simple
analytical form for the gradient and it must be estimated by measurements with intrinsic error. This
measurement error is the reason why we employ a variant of the algorithm known as the Stochastic
Gradient Descent (SGD) [25].
It is possible to estimate all the components of the gradient ∇J(x) by using the well-known secant method. Given ei, the i-th element of the canonical basis of Rⁿ, we can perturb the current value x by a small value ε and, accounting for measurement imprecisions by evaluating the cost function m times for a better estimate, the i-th component of the estimate ∇̂J(x) is approximated by:

∇̂J(x)ᵢ ≈ (1/m) · Σ_{k=1}^{m} [ Jᵏ(x + ε·eᵢ) − Jᵏ(x − ε·eᵢ) ]    (5)
When the gradient is estimated instead of being computed analytically, the descending path is far less smooth, and it is usual to see some roaming around the local minimum []. For our problem, x = [Va, Vc]ᵀ and J(x) = ‖sout(x) − starget‖₂², since the 3-space output Stokes vector is a function of the linear retarder input voltages. The algorithm was simulated numerically in MATLAB with the step α empirically set as directly proportional to the error between starget and sout.
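A sketch of the stabilization loop follows. Here measure_J stands for one noisy polarimeter-based evaluation of the cost; it, the function name and the default parameters are hypothetical placeholders, and the step-size rule implements the proportional heuristic described above.

```python
import numpy as np

def sgd_stabilize(measure_J, x0, alpha0=0.1, eps=0.01, m=4, n_iter=200):
    """Stochastic gradient descent on x = [Va, Vc] with the symmetric
    secant gradient estimate of Eq. (5), averaged over m noisy readings."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            grad[i] = np.mean([measure_J(x + e) - measure_J(x - e)
                               for _ in range(m)])     # Eq. (5)
        step = alpha0 * measure_J(x)   # step proportional to the current error
        x = x - step * grad
    return x
```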
6 State estimation
For a given calibration, we have estimates of the values V0, Vπ, Vab and Vcb. Thus, at any time, if we know the voltages Va and Vc applied to the PCM stage's electrodes, we can obtain α and δ by solving a 2 × 2 non-linear system which has an analytical solution. With the pair (α, δ), we can easily obtain the pair (e, θ), figure out the rotation implemented by the stage and, by taking its inverse and applying it to sout, obtain an estimate of sin. Let A = 2V0, B = −Vπ, C = Va − Vab, and D = Vc − Vcb.
Then:
δ = sqrt( ((C + D)/(2A))² + ((C − D)/(2B))² )    (6)

α = f( (C + D)/(2Aδ), (C − D)/(2Bδ) )    (7)

where f(·,·) is the function that recovers α from its sine and cosine components (here (C + D)/(2Aδ) = sin(α) and (C − D)/(2Bδ) = cos(α)). It is worth emphasizing that an analogous procedure can be performed if the calibration relies on LUTs [26].
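A sketch of this inversion, with f realized via atan2 and under the sign conventions of Eqs. (1)–(3); the function name is ours.

```python
import numpy as np

def estimate_stage_state(Va, Vc, V0, Vpi, Vab, Vcb):
    """Recover (alpha, delta) for one stage from its applied voltages,
    following Eqs. (6)-(7)."""
    A, B = 2.0 * V0, -Vpi
    C, D = Va - Vab, Vc - Vcb
    delta = np.hypot((C + D) / (2 * A), (C - D) / (2 * B))   # Eq. (6)
    if delta == 0.0:
        return 0.0, 0.0        # zero retardance: the eigen-mode is undefined
    # Eq. (7): f(sin, cos) -> alpha, realized with atan2
    alpha = np.arctan2((C + D) / (2 * A * delta), (C - D) / (2 * B * delta))
    return alpha, delta
```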
The complete three-stage methodology to control and stabilize polarization, making use of the rotation algorithm, the stochastic gradient descent algorithm and the state estimator, is presented in Fig. 2 in the form of a workflow chart. It contains two main loops that compute the voltages to be applied to the Polarization Controller stage's electrodes. The choice of which pair of voltages to use depends on the distance ε between the output SOP and the target SOP: when sout and starget are distant, the SGD algorithm may take a long time to reach stability, so the rotation algorithm is employed; on the other hand, if they are close enough, the rotation algorithm can be unstable, so the SGD is employed.
7 Simulation Results
In our simulation procedure, two issues were taken into account: the polarization drift of sin, which is inherent to fiber-optic communication systems; and the necessity of shifting between two or more starget's. To clearly depict the algorithm's performance through the simulation results, we present, in Fig. 3, the Poincaré Sphere; in it, the green dots represent the wandering state of polarization at the input of the controlling apparatus and the red dots represent the output state of polarization. The algorithm is capable of maintaining the polarization stable in the vicinity of the set of starget defined by the four main linear polarization states: horizontal, vertical, diagonal, and anti-diagonal.
In Fig. 4, we present the values of each component of sout and starget , as well as the associated
error between starget and sout , as a function of time. We observe that the algorithm uses the rotation
Figure 2: Workflow chart diagram of the proposed three-stage control algorithm.
step only after the shifts in starget , while the SGD is responsible for stabilizing the polarization around
its value while sin drifts. This result corresponds to the expected behaviour of the algorithm, and confirms its good performance and applicability, since the associated error is very small: even though the vicinity within which the algorithm keeps the output state of polarization relative to the target polarization state seems large in Fig. 3, the output remains, for the majority of the time, inside a smaller vicinity defined as the algorithm's error tolerance.
Figure 4: Simulation of real-time polarization control considering both the inherent polarization drift in
the output of the controller and the target polarization shift. All graphs are in the same horizontal scale
displayed at the bottom of the figure. The first three graphs display the values of the Stokes parameters
that compose the SOP vector. The bottom two graphs display the associated error between sout and
starget , where the second one is a re-scaled version of the first to clarify that the error does not exceed
the threshold between SGD and the rotation step except in the cases of target polarization shift.
The time scale was determined based on the time responses of off-the-shelf Analog-to-Digital and
Digital-to-Analog Converters (ADC and DAC, respectively), of General Photonic’s Polarimeter module,
and of the EOSpace Polarization Controller Module [27, 28, 29, 8]. It yielded an overall control loop
Figure 3: Simulation of real-time polarization control considering both the inherent polarization drift in
the output of the controller and the target polarization shift. The green dots represent the wandering
state of polarization at the input of the controlling apparatus and the red dots represent the output state
of polarization. The algorithm is capable of maintaining the polarization stable in the vicinity of starget .
iteration of approximately 1 µs. The polarization shift was set to match a Polarization Shift Keying
system working at 50 × 10³ symbols per second. The drift in polarization was set at 6 krad/s, a value that attempts to mimic the operation of an optical communication system under severe conditions.
8 Conclusion
We presented a three-step methodology to calibrate a Lithium-Niobate-based polarization controller
module, to control the polarization and to stabilize the output SOP. The calibration, the rotation algorithm, the Stochastic Gradient Descent and the state estimation algorithm were successfully tested in MATLAB simulations. The algorithms presented take simple forms and are readily embeddable in an FPGA or a micro-controller unit; we leave this as a future point of investigation.
Acknowledgment
The authors would like to thank the Brazilian agency CNPq for financial support. The authors are indebted to F. Calliari and V. Lima for help with the polarimeter unit and with the electronic circuitry, especially the analog-to-digital and digital-to-analog converters.
Supplemental Material
The authors provide digital supplemental material to accompany the article: a video file depicting the
simulation run of the algorithm can be accessed in [30]; and the Matlab source files necessary for running
the algorithm and the simulation are available at [31].
References
[1] G. P. Agrawal, Fiber-Optic Communication Systems, 2nd ed. John Wiley & Sons, 2002.
[2] D. Derickson, Fiber Optics: Tests and Measurements, 1st ed. Prentice Hall, 1998.
[3] N. Gisin, J. P. von der Weid, J.-P. Pellaux et al., “Polarization mode dispersion of short and long
single-mode fibers,” Lightwave Technology, Journal of, vol. 9, no. 7, pp. 821–827, 1991.
[4] J. Gordon and H. Kogelnik, “Pmd fundamentals: Polarization mode dispersion in optical fibers,”
Proceedings of the National Academy of Sciences, vol. 97, no. 9, pp. 4541–4550, 2000.
[5] S. Benedetto and P. Poggiolini, “Theory of polarization shift keying modulation,” Communications,
IEEE Transactions on, vol. 40, no. 4, pp. 708–721, 1992.
[6] A. J. Van Haasteren, J. J. Van der Tol, M. O. Van Deventer, and H. J. Frankena, “Modeling and
characterization of an electrooptic polarization controller on linbo 3,” Lightwave Technology, Journal
of, vol. 11, no. 7, pp. 1151–1157, 1993.
[7] B. Koch, A. Hidayat, H. Zhang, V. Mirvoda, M. Lichtinger, D. Sandel, and R. Noé, “Optical endless
polarization stabilization at 9 krad/s with fpga-based controller,” Photonics Technology Letters,
IEEE, vol. 20, no. 12, pp. 961–963, 2008.
[8] EOSpace, “Lithium niobate polarization controller,” http://www.hanamuraoptics.com/device/
EOSPACE/PC030123 EO.pdf, accessed: 2016-01-22.
[9] S. K. Stein, “Calculus and analytic geometry,” AMC, vol. 10, p. 12, 1977.
[10] J. B. Kuipers, Quaternions and Rotation Sequences. Princeton University Press, 1999, vol. 66.
[11] S. Haykin, "Adaptive filters," Signal Processing Magazine, vol. 6, 1999.
[12] B. E. Saleh, M. C. Teich, and B. E. Saleh, Fundamentals of Photonics. Wiley, New York, 1991, vol. 22.
[13] G. Strang, Linear Algebra and Its Applications. Wellesley, MA: Wellesley-Cambridge Press, 2009.
[14] M. D. Crossley, Essential Topology. Springer Science & Business Media, 2006.
[15] M. Karlsson and M. Petersson, “Quaternion approach to pmd and pdl phenomena in optical fiber
systems,” Lightwave Technology, Journal of, vol. 22, no. 4, pp. 1137–1146, 2004.
[16] F. Heismann, “Analysis of a reset-free polarization controller for fast automatic polarization stabilization in fiber-optic transmission systems,” Lightwave Technology, Journal of, vol. 12, no. 4, pp.
690–699, 1994.
[17] T. Imai, K. Nosu, and H. Yamaguchi, “Optical polarisation control utilising an optical heterodyne
detection scheme,” Electronics Letters, vol. 21, no. 2, pp. 52–53, 1985.
[18] R. Noé, H. Heidrich, and D. Hoffmann, “Automatic endless polarization control with integratedoptical ti: Linbo 3 polarization transformers,” Optics letters, vol. 13, no. 6, pp. 527–529, 1988.
[19] F. Heismann, “Integrated-optic polarization transformer for reset-free endless polarization control,”
Quantum Electronics, IEEE Journal of, vol. 25, no. 8, pp. 1898–1906, 1989.
[20] L. Xi, X. Zhang, X. Tang, X. Weng, and F. Tian, “A novel method to calibrate linbo3-based
polarization controllers,” Chinese Optics Letters, vol. 8, no. 8, pp. 804–806, 2010.
[21] F. Calliari, Electrical Engineering, PUC-Rio 2014 Monography, “Desenvolvimento de interface gráfica
para análise do estado de polarização da luz através de plataforma fpga,” http://www.maxwell.vrac.
puc-rio.br/23790/23790.PDF, accessed: 2016-01-22.
[22] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson Jr, “Stationary and nonstationary
learning characteristics of the lms adaptive filter,” Proceedings of the IEEE, vol. 64, no. 8, pp.
1151–1162, 1976.
[23] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, “Learning
to rank using gradient descent,” in Proceedings of the 22nd international conference on Machine
learning. ACM, 2005, pp. 89–96.
[24] S. Haykin, Adaptive Filter Theory, ser. Prentice-Hall information and system sciences series.
Prentice Hall, 1996. [Online]. Available: https://books.google.com.br/books?id=l78QAQAAMAAJ
[25] J. J. Shynk, Probability, random variables, and random processes: theory and signal processing
applications. John Wiley & Sons, 2012.
[26] A. Hidayat, B. Koch, H. Zhang, V. Mirvoda, M. Lichtinger, D. Sandel, and R. Noé, “High-speed
endless optical polarization stabilization using calibrated waveplates and field-programmable gate
array-based digital controller,” Optics express, vol. 16, no. 23, pp. 18 984–18 991, 2008.
[27] Texas Instruments, “Adc324x dual-channel, 14-bit, 25-msps to 125-msps, analog-to-digital converters,” http://www.ti.com/lit/ds/symlink/adc3242.pdf, accessed: 2016-01-22.
[28] General Photonics, “High-speed polarimeter – poladetect: Pod-201,” http://www.generalphotonics.
com/wp-content/uploads/2015/05/POD-201-5-8-15.pdf, accessed: 2016-01-22.
[29] Texas Instruments, “Dac3484 quad-channel, 16-bit, 1.25 gsps digital-to-analog converter,” http:
//www.ti.com/lit/ds/symlink/dac3484.pdf, accessed: 2016-01-22.
[30] Optoelectronics Laboratory – Gustavo C. Amaral and Joaquim D. Garcia, “Polarization tracking algorithm – simulation run in youtube,” https://www.youtube.com/channel/
UCcfPvWdcMmyhIS7lmXIhm-w, accessed: 2016-02-01.
[31] Optoelectronics Laboratory – Joaquim D. Garcia and Gustavo C. Amaral, “Matlab source files,”
https://github.com/joaquimg/PolarizationControl, accessed: 2016-02-01.
Biological and Shortest-Path Routing Procedures for Transportation
Network Design
arXiv:1803.03528v1 [physics.soc-ph] 7 Mar 2018
François Queyroi∗
CNRS, UMR8504 Géographie-cités (CNRS/Université, Paris-1,
Panthéon-Sorbonne/Université Paris Diderot)
Abstract
Background. The design of efficient transportation networks is an important challenge in many research areas. Among the most promising recent methods, biological routing mimics local rules found in nature. However, comparisons with other methods are rare.
Methods. In this paper we define a common framework to compare network design methods. We use it to compare biological and shortest-path routing approaches.
Results. We find that biological routing explores a more efficient set of solutions when designing a network for uniformly distributed transfers. However, the difference between the two approaches is less important for a skewed distribution of transfers.
1 Introduction
Transportation networks. The transportation network design corresponds to the problem of selecting a set of possible links between locations (for example cities) in order for transfers (for example
of goods, people, etc.) to be made possible [8]. The automated design of transportation networks has a range of applications, going from solving transshipment problems [9] to the computation of space
trajectories [17]. In the social sciences, researchers want to compare efficient simulated networks with
the real ones (railroads, railways, etc.) in order to assess the existence and nature of suboptimal
choices [18]. Transportation networks are also important for the simulation of the development of a
city system [23].
The design of transportation networks as a computational problem is part of a large domain of inquiry
which is known as network flow problems [1]. Transportation network design can be here described
as a multi-objective variant of the multi-commodity flow problem [10] with unlimited edge capacities.
While most analytical research focus on cost minimisation, we are looking for transportation
networks that are efficient with respect to multiple criteria. The choice therefore often involves a
cost/benefits analysis. The most common criteria are the time performance (how quickly can we
travel using the network), the cost (the size or total length of the network) and the tolerance to fault
(how travel is affected by random perturbations). Obviously, finding a good balance between these
∗
[email protected]
criteria make the problem hardly solvable analytically and solutions are often explored using heuristics.
Networks computation models. Several computation models can be used to explore potential
solutions. The first approach is what we call the biological approach. The reason for this name is
that those methods are derived from actual natural phenomena and the most cited one is probably
the behaviour of the Physarum polycephalum organism [27]. This slime mould is indeed capable of solving mazes or discovering shortest paths in difficult terrain [25]. Experiments involving
this organism caught a lot of attention in the scientific press. One important achievement was the
simulation of the Greater Tokyo Area transportation network [26] where the authors introduce an algorithm replicating the behaviour of the organism. It should be noted however that similar behaviours
can be found in other natural phenomena such as ant colonies [7] or current in an electrical network [16].
The second possible approach to the problem of network design is what we call the shortest-path
routing method. We should stress that this category has seen fewer extensions and led to fewer applications than the previous one. To the best of our knowledge, Levinson and Yerra [14] are
the first to study this model of computation. Their objective was to show that a hierarchy of routes
can emerge in the transportation network from a uniform distribution of transfers [28]. This method
has been rediscovered by others in the domain of Information Visualisation [13].
Other methods, such as greedy algorithms, could be used. The idea here is to incrementally
build a network by adding at each iteration the links that contribute the most to the performance of
the network [5, 22]. A common variant is to start from a minimum cost spanning tree covering all
transfers and then iteratively adding the best alternative routes. The Physarum simulation was actually compared to this approach in [26]. It is also possible to start with a more complete network and
prune the least used paths. This last method actually mimics the development of neural networks [21].
Routing and Reinforcement. Both the biological and shortest-path approaches actually rely
on two common mechanisms. First, the routing of goods (food in the case of the slime mould) is
done by assigning them to paths depending on their “attractiveness” (the diameter of the tubes for the slime mould). Then, the paths' attractiveness is updated according to the amount of goods using these
paths (the slime mould’s tubes expand or shrink due to the pressure). We call this second phase reinforcement. By repeating those steps we can mimic a continuous process where the network gradually
appears from a starting grid as least used paths in the grid gradually disappear while others gather
more and more transfers.
Biological routing uses local rules of dispersion. Transfers will be assumed to behave like a liquid
flowing through pipes of various size to reach a sink. On the other hand, the shortest-path method
routes the transfers along the shortest-path between their source and destination. Biological routing
therefore explores different paths even if they are longer, while the shortest-path routing only selects
the best paths.
Contribution. The aim of this study is to analyse the differences between biological and shortest-path approaches. We introduce a common framework for the two algorithms. Most of the previous
studies (in particular with the biological models) focus on uniform transfers between locations (either
there is an exchange of commodities or not). However this setting may not be appropriate for different
applications (in urban planning for example) since locations may differ in term of attraction potential.
We therefore expand previous definitions of the algorithms in order to take into account arbitrary
transfers distributions between several sources and destinations.
This common framework allows us to compare biological and shortest-path routing only focusing on
the way transfers are routed without the interference of other minor differences. Moreover, we use as
experiments a random generated set of grid-embedded locations and transfers while previous studies
use a few small toy examples or real world configurations.
Biological network design such as the one simulating the Physarum polycephalum organism has
been shown to produce efficient network but was never, to the best of our knowledge, compared to the
shortest-path approach. Our hypothesis is that the shortest-path method while not being as popular
as biologically based approaches in the literature may be worthwhile to pursue if the transfers between
location are modelled according to distributions found in human mobility (such as the gravity model).
2 Reinforced Routing Procedure Overview
Notations. Throughout this paper, we call the support graph G = (V, E, l) a graph (or network)
with nodes (or vertices) set V , edge set E and edge length l : E → R+ . Let n = |V | the number of
nodes and m = |E| the number of edges. We call F : V × V → R+ the transfer matrix between nodes
in G and we call Q : E → R+ a flow distribution on the network (i.e. the way transfers in F are distributed along the edges of G). We write F̄ = Σ_{s,t} F(s,t) for the total amount of transfers between the nodes of G. A transportation network on the support graph G is simply a subgraph N = (V, E′, l)
of G with E′ ⊆ E. For edge-defined functions such as Q or l and an edge e = (u, v), we may write
Q(u, v) or l(u, v) to refer to Q(e) or l(e).
Network flows are mostly defined for transfers between a source node and one or several destination nodes. Here Q corresponds to the distribution of the total F̄ units among E, i.e. if all transferred units travel through the network G to reach their destination according to F, then Q corresponds to the number of units that went through each edge. Notice that, for a given G and F, there are multiple possible values of Q. Although we use the same terminology of flow, we do not expect Q to respect the classic network flow rules [1], and we consider G to be undirected without loss of generality.
Main procedure backbone. We design the procedure so that the differences between biological
and shortest-path approaches only rest on the way transfers are routed. As previously explained, the
procedure can be divided into two parts: the routing of the transfers in F (which gives Q) and the
adaptation of edge length according to the flow Q. The routing depends on the length of the edges of
the network l. Algorithm 1 details this generic procedure. Here, the reinforcement modifies the length
of network edges by modifying σ which can be interpreted as edges’ “diameter” of tubes in the slime
mould organism, the “speed” in a railway network or the “resistance” in an electrical circuit. Notice
that Algorithm 1 does not directly produce a transportation network. Rather, the σ values indicate
whether an edge is likely to be part of an efficient transportation network. For the experiments, we create a network by selecting edges with a σ value higher than ε.
Flow routing. The function FlowRouting (line 5) will depend on the chosen method (Biological
or Shortest-Paths routing). We assume that, in both cases, the result Q is a normalized flow i.e. such
Algorithm 1: Reinforced Routing of flow F along network G
Input: G = (V, E, l), F : V × V → R+, α > 0, µ > 1, δ ∈ ]0, 1], ε > 0
Output: σ : E → [0, 1]
1  σ⁰(e) ← 1, e ∈ E;
2  n ← 0;
3  do
4      G′ ← G(V, E, l/σⁿ);
5      Q ← FlowRouting(G′, F);
6      σⁿ⁺¹ ← δ·f(Q; µ, α) + (1 − δ)·σⁿ;
7      n ← n + 1;
8  while max |σⁿ⁺¹ − σⁿ| > ε;
9  return σⁿ;
that Q ∈ [0, 1]^m. To do this we simply divide the number of transferred units going through a given edge by F̄. The two following sections describe each routing procedure in detail.
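A compact sketch of Algorithm 1 follows, with edges indexed by position in a NumPy array of lengths. Here flow_routing stands for either of the two routines described next; the helper names are ours, not part of the paper.

```python
import numpy as np

def reinforced_routing(lengths, F, flow_routing, alpha=2.0, mu=1.8,
                       delta=0.5, eps=5e-4, max_iter=10000):
    """Algorithm 1: alternate routing and reinforcement until the sigma
    values stabilize. `lengths` holds l(e) per edge; `flow_routing`
    returns a normalized flow Q in [0, 1]^m on the reweighted graph."""
    def f(Q):  # reinforcement response, Eq. (1)
        return (1.0 + alpha) * Q**mu / (1.0 + alpha * Q**mu)

    sigma = np.ones(len(lengths))                       # line 1
    for _ in range(max_iter):
        Q = flow_routing(lengths / sigma, F)            # lines 4-5
        new_sigma = delta * f(Q) + (1 - delta) * sigma  # line 6
        if np.max(np.abs(new_sigma - sigma)) <= eps:    # line 8
            return new_sigma
        sigma = new_sigma
    return sigma
```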
Reinforcement. First, notice that Q is an aggregation of transfers with different sources and destinations. Previous works [26] used to adapt the network edges after the routing of a single line of F (the transfers connecting a single node s) or a single exchange in F (the transfers between s and t). This obviously requires taking the lines or elements of F in a random order. This approach may speed up the convergence of the procedure, but we believe this addition of randomness is of little interest in the context of our study.
In Algorithm 1, support network’s edges’ resistance σ is modified according to a reinforcement
function f and an update rate δ (line 6). The latter is a parameter that influences the convergence
rate of the algorithm. In previous studies, this parameter was often implicit: the reinforcement is
defined as a change in resistance over time in a continuous fashion so the convergence speed is given
by the way time is split into steps.
The reinforcement function (sometimes called “value function”) f we use is a logit-like function frequently used in the literature [25]:

f(Q; µ, α) = (1 + α)·Q^µ / (1 + α·Q^µ)    (1)
Function f is a response function where parameter µ > 1 controls the slope of the curve while
parameter α > 0 controls the inflection point of the function. Indeed, the higher α is, the greater the response value of small signals is (see examples of curves with various values of α in Fig. 1). In the biology analogy, this function models the responses to the flow going through the Physarum tubes.
We initialize σ 0 (e) = 1, ∀e ∈ E (line 1) and, since f is dimensionless with f (0) = 0 and f (1) = 1, we
have σⁿ ∈ [0, 1]^m at each iteration of the algorithm. The edge lengths of the modified support graph G′ will increase (line 4), which affects the next routing phase.
Running time. The stopping condition of the algorithm (line 8) depends on the difference in the distribution of σ values between successive steps. The parameter ε is used to set the precision of the algorithm (in this study we used ε = 5·10⁻⁴). Previous definitions of the Physarum algorithm use an arbitrary number of iterations instead.

Figure 1: f(q; µ, α) for µ = 1.8 and different values of α

The convergence of the resistance σ to a stable solution has been studied in the case of the Physarum solver [19, 11]. In our case, Algorithm 1 seems to always converge to a solution whether we use biological or shortest-path routing, although the number of steps required by the shortest-path routing procedure is lower.
The running time of function FlowRouting may also be affected by the ordering of the vertices.
This is true whether we adopt the biological or the shortest-path routing procedure (described below).
In both cases, we only need to route the transfers connected to the minimal vertex cover of the graph
formed with the transfers in F higher than 0. The reason is that the flow we compute is an aggregation
of different “commodities” (we cannot exchange a person travelling from a to b with one going from c to d). Both routing procedures will route one “commodity” (the people travelling from a to b or from a to c) at a time. We can find a small enough vertex cover by taking vertices in decreasing order of
their number of strictly positive transfers.
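A minimal greedy version of this cover computation is sketched below (it recomputes the counts after each pick, a slight variation on the fixed ordering described in the text); F is assumed to be a dict keyed by (source, target) pairs.

```python
def transfer_vertex_cover(F):
    """Greedy vertex cover of the graph formed by strictly positive
    transfers: routing then needs one computation per cover vertex."""
    remaining = {(s, t) for (s, t), v in F.items() if v > 0}
    cover = []
    while remaining:
        degree = {}
        for s, t in remaining:
            degree[s] = degree.get(s, 0) + 1
            degree[t] = degree.get(t, 0) + 1
        best = max(degree, key=degree.get)   # vertex covering most transfers
        cover.append(best)
        remaining = {p for p in remaining if best not in p}
    return cover
```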
An implementation of the two procedures is available at https://github.com/fqueyroi/tulip_
plugins/tree/master/TransportationNetworks as plugins for the Tulip software [2] http://tulip.
labri.fr.
2.1 Shortest-path routing
Using the shortest-path model, the flow going through support network’s edges is given by what we
call the flow betweenness. Informally, flow betweenness corresponds to the total number of travellers
going through a given edge if the travellers choose the fastest (shortest) route. In the general case, we
should therefore have:

Q(e) = Σ_{(s,t)∈V×V} [πₑ(s,t) / π(s,t)] · F(s,t)    (2)
where πe (s, t) is the number of shortest-paths from node s to t going through edge e and π(s, t) is the
total number of shortest paths from node s to t. In practice, if we use continuous edge length on a
connected network, we can assume that πe (s, t) ∈ {0, 1} and π(s, t) = 1 for all pairs (s, t). This means
that there is only one shortest-path between each pair of nodes in G.
One way to compute Eq. 2 is therefore to compute all pairwise shortest-paths between the locations involved in the transfers using a shortest-path algorithm such as Dijkstra's algorithm [6]. However, notice that if F(s,t) = 1 for all (s,t) ∈ V × V, then Eq. 2 corresponds to the betweenness centrality measure [3]. This measure is well known in network analysis as it allows us to identify vital elements of the network (hubs). It is possible to generalize the algorithm proposed in [3] in order to take into account transfer values different from 1. The computation of FlowRouting therefore runs in O(nm + n log m). In practice it involves computing as many shortest-path trees as the size of the minimal vertex cover of transfers higher than 0 (which is at most n).
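A simple, non-optimized sketch of this routing with networkx is shown below, relying on the uniqueness assumption for shortest paths; the Brandes-style accumulation mentioned above (one tree per cover vertex) would be the faster choice. The 'length' edge attribute is our assumption.

```python
import networkx as nx

def shortest_path_flow(G, F):
    """Normalized flow betweenness of Eq. (2): route each transfer F[s, t]
    along its (assumed unique) weighted shortest path."""
    Q = {frozenset(e): 0.0 for e in G.edges()}
    total = sum(F.values())
    for (s, t), amount in F.items():
        if amount <= 0:
            continue
        path = nx.shortest_path(G, s, t, weight="length")
        for u, v in zip(path, path[1:]):
            Q[frozenset((u, v))] += amount / total
    return Q
```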
2.2 Biological routing
As previously explained, “biological” routing is designed to mimic several known physical phenomena that can be found in biological organisms such as the slime mould Physarum polycephalum. As such, the routing of flow in biological models corresponds to a flow Q that respects the classic flow constraints and minimizes the total energy of the model:

ξ_σ(Q) = Σ_{e∈E} [l(e)/σ(e)] · Q(e)²    (3)
This biological routing of flow is also commonly introduced as the solution of a linear equation system:

Q(u,v) = [σ(u,v)/l(u,v)] · (p(u) − p(v))    (4)

Eq. 4 corresponds to Ohm's law for an electrical circuit, where σ(u,v)/l(u,v) is the conductance of edge (u,v). This routing is therefore also close to other network analysis measures such as Eigenvector centrality or PageRank [4]. The potential p(u) of a node u to attract flows depends on u's neighbours' potentials w.r.t. the conductance (speed or diameter) of the adjacent edges. However, in our case, we are not interested in the potential values of the nodes but only in the flow going through each edge.
The computation of the flow Q in the biological model corresponds to the solution of a linear equation system. This computation can be cumbersome due to the number of variables. We adopt the approximation algorithm described in [12]. It starts with a suboptimal solution where the flows are routed on a low-stretch spanning tree (a distance-preserving tree). Then, we randomly select a “short-cut” edge (a, b) that does not belong to that tree and send a portion of the flow going from a to b in the tree through that short-cut. Those modifications are called “cycle updates”. The number of cycle updates performed is set so that the distribution of flow is an ε-approximation of the solution of the linear equation system. This approach leads to a near-linear time computation of O(m log² n log log n log(ε⁻¹)). Note however that a solution Q to Eq. 3 can only be found if all transfers in F have as source a single unique vertex (in order to respect the flow constraints). Again, we need to find as many solutions as the size of the minimal vertex cover of non-null transfers (which is at most n). The running time of the biological routing therefore highly depends on the density of transfers.
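For small graphs, the electrical flow of Eqs. (3)–(4) can also be obtained by a direct dense solve, which is how one would prototype it before switching to the near-linear solver of [12]. The sketch below aggregates one solution per source; the least-squares handling of the singular Laplacian and all names are our choices.

```python
import numpy as np

def biological_flow(nodes, edges, lengths, sigma, F):
    """Solve L p = b per source with conductances sigma(e)/l(e), then read
    the per-edge flows from the potential differences (Eq. 4)."""
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = np.zeros((n, n))
    cond = [sigma[k] / lengths[k] for k in range(len(edges))]
    for k, (u, v) in enumerate(edges):
        i, j = idx[u], idx[v]
        L[i, i] += cond[k]; L[j, j] += cond[k]
        L[i, j] -= cond[k]; L[j, i] -= cond[k]
    Q = np.zeros(len(edges))
    for s in {src for (src, t) in F if F[src, t] > 0}:
        b = np.zeros(n)
        for (src, t), amount in F.items():
            if src == s and amount > 0:
                b[idx[s]] += amount    # injection at the source
                b[idx[t]] -= amount    # extraction at the sinks
        p = np.linalg.lstsq(L, b, rcond=None)[0]  # L is singular
        for k, (u, v) in enumerate(edges):
            Q[k] += abs(cond[k] * (p[idx[u]] - p[idx[v]]))
    return Q / sum(F.values())
```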
3 Experiment details
We present in this section the choices made for comparing the two transportation network construction
models described above.
Fitness Measures. We introduce here the fitness measures used. We use concepts found previously in the literature [26] (the three dimensions: performance, fault-tolerance and cost). The
differences with previous definitions of performance or fault-tolerance come from the fact that we
consider arbitrary transfers distributions.
Even though Algorithm 1 outputs a real vector of speeds/diameters σ, we take as transportation network N = (V, {e ∈ E : σ(e) > ε}, l) for simplicity's sake. We call d_N the diameter of N (the length of the longest shortest-path) and F̄ = Σ_{s,t} F(s,t) the total amount of transfers on the network. We define the following indicators:
1. Performance P corresponds to the total time taken for units to go from their source to their destination in N:

P(N, F) = 1 − [1/(F̄·d_N)] · Σ_{s,t} F(s,t)·d_N(s,t)    (5)

where d_N(s,t) is the distance between s and t in N (i.e. the sum of the lengths of the edges along the shortest-path from s to t). Notice that we have P(N) ∈ [0, 1] and we say the network N has good performance when P(N) is close to 1. The main difference with previous definitions of performance is that the amount of transfers is taken into account. Indeed, the measure P corresponds to the mean time taken by travellers to reach their destination, while previous definitions (using uniform transfers) correspond to the mean travel time over each pair of locations.
2. Fault Tolerance FT corresponds to the proportion of transfers still able to reach their destination after the removal of a random edge in N:

FT(N, F) = [1/|E(N)|] · Σ_{e∈E(N)} (1/F̄) · Σ_{(s,t)∈V×V} F(s,t) · 1_{P_{N∖e}(s,t) ≠ ∅}    (6)

where 1_{P_{N∖e}(s,t) ≠ ∅} is equal to 1 if there is still a path between s and t in N after the removal of edge e, and 0 otherwise. Using this normalisation, we have FT(N) ∈ [0, 1] and we say that N is highly tolerant to fault when FT(N) is close to 1. Notice that FT is a generalisation of the classic metric that just focuses on whether the network is disconnected or not (a measure used in [26]). If an area is loosely connected to the network, it may not affect FT much if the amount of transfers with this area is relatively low.
3. Cost C corresponds to the normalized sum of the lengths of the edges in N:

C(N) = [Σ_{e∈E(N)} l(e)] / [Σ_{e∈E(G)} l(e)]    (7)

Using this normalisation, we have C(N) ∈ [0, 1] and we say that N is a costly network when C(N) is close to 1.
Obviously, a costly network is more likely to have higher fault tolerance and performance. It is therefore interesting to look at the ratios P/C and FT/C for comparison purposes; a naive implementation of the three indicators is sketched below.
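The sketch uses networkx, with edge lengths stored in a 'length' attribute (an assumption of ours); N is the selected subgraph of the support graph G and F a dict of transfers.

```python
import networkx as nx

def fitness(N, G, F):
    """Performance (Eq. 5), fault tolerance (Eq. 6) and cost (Eq. 7)."""
    total = sum(F.values())
    d = dict(nx.all_pairs_dijkstra_path_length(N, weight="length"))
    diam = max(max(row.values()) for row in d.values())
    P = 1 - sum(a * d[s][t] for (s, t), a in F.items()) / (total * diam)
    surviving = 0.0
    for e in N.edges():
        M = N.copy()
        M.remove_edge(*e)
        ok = sum(a for (s, t), a in F.items() if nx.has_path(M, s, t))
        surviving += ok / total
    FT = surviving / N.number_of_edges()
    C = (sum(l for _, _, l in N.edges(data="length"))
         / sum(l for _, _, l in G.edges(data="length")))
    return P, FT, C
```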
Samples. In order to compare the two algorithms, we first use synthetic data generated randomly. We sample a set of 150 points in the [0, 1]² plane and connect the points using a standard Delaunay triangulation. This random grid corresponds to the possible adjacencies of the future network (the support graph G). We then select a subset of 8 points that will correspond to the sources and destinations of transfers. The matrix F is then generated using two different models:

1. Uniform distribution: set F(s,t) = 1 for all (s,t). It is the same as in [26].

2. Gravity model: set F(s,t) = P(s)·P(t) / d_E(s,t)^γ, where P : V → R is the population of the nodes and d_E(s,t) is the Euclidean distance between s and t. Here we set γ = 1.2. This model is often used in urban geography to model human mobility [24]. The population P is generated using a Zipf exponential model, i.e. the population decreases exponentially with the rank of the points by a factor of 1.5 (the ranks of the 8 locations being chosen randomly). This model produces a few important centres of attraction and isolated areas (a generator for this model is sketched below).
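A sketch of such a generator under our reading of the Zipf population rule (the function name is hypothetical; points is the array of the 8 selected locations):

```python
import numpy as np

def gravity_transfers(points, gamma=1.2, zipf_factor=1.5):
    """Gravity-model transfer matrix with exponentially decaying
    populations assigned to randomly permuted ranks."""
    k = len(points)
    ranks = np.random.permutation(k).astype(float)
    pop = zipf_factor ** (-ranks)        # population decays with rank
    F = np.zeros((k, k))
    for s in range(k):
        for t in range(k):
            if s != t:
                dE = np.linalg.norm(points[s] - points[t])
                F[s, t] = pop[s] * pop[t] / dE**gamma
    return F
```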
The way transfers are distributed corresponds to two experimental settings. In addition, we develop a third and a fourth setting, using as locations cities of the French region Pays-de-la-Loire (west of France). The resulting grid can be seen in Fig. 4 (red nodes represent important cities in the region). We also analyse the difference between a uniform and a gravity-like distribution of transfers. For the latter, we use as location weights the actual populations of the cities according to the 1999 national census.
Parameter choices. Algorithm 1 has many different parameters. The most influential, however, is the parameter α. For the experiments, we take for α values powers of 2 (see the different behaviours of f in Fig. 1). In [26], the authors choose to modify the total amount of transfers (which here corresponds to F̄), since their reinforcement function does not include a parameter similar to α. The effect is however similar. Indeed, the higher α is, the more the less-used routes are given a high weight. In [26], the larger the transfer amount is, the more those routes are likely to be used. The α value can be used to influence the final cost of the network. Small α values are likely to result in a tree-like organisation of the transportation network, while higher α values are more likely to produce a grid-like organisation [15].
Regarding the other parameters, we use an update rate δ = 0.5, a reinforcement slope µ = 1.8 (similar to the one in Fig. 1) and a precision of ε = 5·10⁻⁴ (i.e. the algorithm stops when the greatest difference in σ values is smaller than 5·10⁻⁴).
Hypothesis on the difference between the two approaches. Previous studies show that biological routing is efficient when compared, for example, to greedy approaches [26]. Efficiency here involves finding a good compromise between performance and fault tolerance. These results were obtained using uniformly distributed transfers between locations. Using the same distribution, we expect shortest-path routing to perform worse than biological routing. The reason is that shortest-paths explore fewer solutions and the procedure could quickly fall into a local minimum. However, we could expect the results to be different when modifying the distribution of transfers. A skewed distribution of transfers should favour the shortest-path routing approach, since the direct routing of the most important transfers along the fastest route will have the most impact.
4 Results
In this section, we discuss the results of the various experiments using the indicators of performance (P), fault tolerance (FT) and cost (C). We first look at the statistical results for the synthetic experiments, then we discuss some qualitative aspects of the results in the case of the geographical grid. For clarity, we use SP and BIO to refer to the shortest-path routing and the biological routing respectively. The two types of transfer distributions are referred to as UM for the uniformly distributed transfers and GM for the gravity model.
We generated 200 instances of random grids and applied the transfer distribution models described above. We shall first stress that the BIO computation is very slow: it requires several minutes where SP takes only a second.
The distributions of the indicators according to α can be found in Fig. 2. Note that we do not have
to compare each algorithm using the same α value. Therefore, the evolution of the ratio P/C and
F T /C can be found in Fig. 3. For the geographical grid, we report the statistics computed for some
values of α for U M (Table 1) and GM (Table 2). A representation of the computed networks can be
seen in Figures 4 and 5.
Here are the conclusions that can be drawn from these results:
1. BIO seems to achieve better results than SP if we look at the ratios. In the UM experiments and for α ∈ [1, 4], the ratios P/C and FT/C are both greater for BIO than for SP with any α values. The same can be said in GM with α < 4. However, the variation of the indicators is important (height of the boxplots in Fig. 2). It means there are configurations where SP routing may find a better network. It is actually the case for the geographical grid with both UM and GM.

2. BIO explores a broader set of results. A trade-off between performance and fault-tolerance is clearly visible in UM with both BIO and SP. This phenomenon was already observed for BIO in [26] using different fitness indicators. One important difference between the two procedures is that the set of networks that can be found using BIO covers a wide range of costs. This is more limited for SP, as the networks found for various values of α may not differ a lot.

3. There are important differences in the results between the two transfer distribution models. The trade-off between performance and fault-tolerance is not clearly apparent in the GM case. We can observe that the behaviour of the ratios P/C and FT/C is similar for BIO and SP. It can be explained by the fact that the simulated transfers create a single
Figure 2: Distribution of the various indicators according to α values (x-axis) for the four experimental
settings (see Legend). Bottom-right plot: similarity between the results of SP and BIO routing
procedures (proportion of edges found in both networks).
Table 1: Results for the geographical grid (with a uniform distribution of transfers)

Routing | α  | Perf | FT   | Cost | Perf/Cost | FT/Cost
SP      | 2  | .469 | .793 | .034 | 13.794    | 23.324
SP      | 16 | .479 | .797 | .035 | 13.686    | 22.771
SP      | 64 | .489 | .822 | .035 | 13.971    | 23.486
BIO     | 2  | .471 | .782 | .035 | 13.457    | 22.343
BIO     | 16 | .487 | .956 | .042 | 11.595    | 22.762
BIO     | 64 | .548 | .998 | .054 | 10.148    | 18.481
important centre of attraction in the grid (the most “populated” location). In this context,
achieving good performance and fault-tolerance is not hard: we just need to connect this centre
to the periphery. Accordingly, we observe that the networks found with BIO and SP are more
similar in this model (see Fig. 2). Moreover, the values of the ratios P/C and FT/C are higher for
both algorithms. Still, further expansions of the network (using higher α values) are less and
less cost-effective since they connect locations whose transfers between them are exponentially
smaller.
4. The differences in performance or fault tolerance seem to correspond to different behaviours when looking at the geographical example (Figs. 4 and 5).
In the U M case, networks found using SP have a tree-like structure for most values of α. BIO
finds similar networks using small α value. However, it can also find more costly networks
with higher α value such as a grid-like network (see Fig. 4b). Table 1 shows this trade-off
between P/C and F T /C. Notice however that BIO produces a lot of alternative paths that are
actually parallel routes going between the same locations. This type of behaviour can explain
the important increase of cost for higher α values.
In the GM case, the results are very different. Small α values still correspond to tree-like structures for both BIO and SP. For higher α values however, SP produces alternative paths. This is
for both BIO and SP . For higher α values however, SP produces alternative paths. This is
also the case for BIO but again with redundant routes. This apparently strange behaviour from
BIO could be traced back to the “double edges” often found in experimental settings using a
real Physarum polycephalum organism [20]. In our case, it seems that the redundant edges do
not add much in terms of fault-tolerance since alternative (but longer) routes already exist.
5 Conclusion and Related Questions
In this paper we compared two different approaches to transportation network design. We provide a common analysis framework and an implementation of the algorithms. The methods were compared based on synthetic random grids using statistical indicators. From a quantitative point of view, we can conclude that the biologically-inspired approach is overall better than the routing of flows based on shortest-paths. However, the difference between the two is not always significant. The biologically-inspired model can be used to explore a wider range of solutions. Contrary to our hypothesis, those observations are still valid when using a more realistic non-uniform model of transfers, even though the difference in terms of performance and fault-tolerance is even smaller than in the uniform case.
Figure 3: Evolution of the median values of the ratios Perf/Cost and FT/Cost (middle of the corresponding boxplots in Fig. 2) according to α values (point labels).
Table 2: Results for the geographical network (with a gravity model distribution of transfers)

Routing | α    | Perf | FT   | Cost | Perf/Cost | FT/Cost
SP      | 0    | .644 | .851 | .031 | 20.799    | 27.483
SP      | 128  | .661 | .928 | .034 | 19.369    | 27.201
SP      | 256  | .661 | .928 | .034 | 19.161    | 26.913
SP      | 512  | .677 | .981 | .04  | 16.734    | 24.242
SP      | 1024 | .689 | .987 | .053 | 12.968    | 18.559
BIO     | 0    | .578 | .831 | .033 | 17.343    | 24.914
BIO     | 128  | .578 | .831 | .033 | 17.343    | 24.914
BIO     | 256  | .627 | .938 | .037 | 16.743    | 25.05
BIO     | 512  | .637 | .967 | .045 | 14.15     | 21.469
BIO     | 1024 | .689 | .987 | .053 | 10.759    | 15.398
(a) Bio routing – α = 4
(b) Bio routing – α = 64
(c) SP routing – α = 4
(d) SP routing – α = 64
Figure 4: Comparison of the result on the geographical example with uniform transfers. Red nodes
correspond to major cities in the regions (préfecture and sous-préfecture).
Our experiments still have important limitations, since numerous dimensions remain unexplored. One could assess, for example, the influence of the size of the grid. First results tend to reveal a similar behaviour, but the discrepancy between the two methods may be amplified. The influence of other parameters, such as the update rate δ or the reinforcement curve slope µ, should also be investigated, but we expect they will have less of an impact.
Our study is also limited by the fact that our evaluation relies on objective functions related to performance, fault-tolerance and cost. However, geographers, biologists or urban planners might want to know whether or not simulated networks are close to real-world networks. In this context, the objective functions are useful, but it is not possible to infer the closeness between two networks based on the proximity of these function values.
What we learn from this study is that biologically-inspired routing may be better suited for the
researcher since it allows an exploration of a broader set of solutions. However, taking into account
(a) Bio routing – α = 0
(b) Bio routing – α = 512
(c) SP routing – α = 0
(d) SP routing – α = 512
Figure 5: Comparison of the result on the geographical example with transfers following a gravity
model. Red nodes correspond to major cities in the regions (préfecture and sous-préfecture). Node
size is proportional to the population of the cities.
smaller and smaller flows (with increasing α value) leads to networks with high cost but without much
additional performance or fault-tolerance. One of our experiments reveals, for example, that biological
routing may produce parallel short routes that are not efficient.
References
[1] Ravindra K Ahuja, Thomas L Magnanti, and James B Orlin. Network flows. Elsevier, 2014.
[2] David Auber, Romain Bourqui, Maylis Delest, Antoine Lambert, Patrick Mary, Guy Melançon,
Bruno Pinaud, Benjamin Renoust, and Jason Vallet. TULIP 4. Research report, LaBRI - Laboratoire Bordelais de Recherche en Informatique, September 2016.
[3] Ulrik Brandes. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology,
25(2):163–177, 2001.
[4] Fan Chung and Wenbo Zhao. Pagerank and random walks on graphs. In Fete of combinatorics
and computer science, pages 43–62. Springer, 2010.
[5] Erik D Demaine and Morteza Zadimoghaddam. Minimizing the diameter of a network using
shortcut edges. In Algorithm Theory-SWAT 2010, pages 420–431. Springer, 2010.
[6] Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische mathematik,
1(1):269–271, 1959.
[7] Audrey Dussutour, Vincent Fourcassie, Dirk Helbing, and Jean-Louis Deneubourg. Optimal
traffic organization in ants under crowded conditions. Nature, 428(6978):70, 2004.
[8] Terry L. Friesz. Transportation network equilibrium, design and aggregation: Key developments
and research opportunities. Transportation Research Part A: General, 19(5):413 – 427, 1985.
Special Issue Transportation Research: The State of the Art and Research Opportunities.
[9] Cai Gao, Chao Yan, Daijun Wei, Yong Hu, Sankaran Mahadevan, and Yong Deng. A biologically
inspired model for transshipment problem. arXiv preprint arXiv:1401.2181, 2014.
[10] Andrew V Goldberg, Jeffrey D Oldham, Serge Plotkin, and Cliff Stein. An implementation of a
combinatorial approximation algorithm for minimum-cost multicommodity flow. In International
Conference on Integer Programming and Combinatorial Optimization, pages 338–352. Springer,
1998.
[11] Kentaro Ito, Anders Johansson, Toshiyuki Nakagaki, and Atsushi Tero. Convergence properties
for the physarum solver. arXiv preprint arXiv:1101.5249, 2011.
[12] Jonathan A Kelner, Lorenzo Orecchia, Aaron Sidford, and Zeyuan Allen Zhu. A simple, combinatorial algorithm for solving sdd systems in nearly-linear time. In Proceedings of the forty-fifth
annual ACM symposium on Theory of computing, pages 911–920. ACM, 2013.
[13] Antoine Lambert, Romain Bourqui, and David Auber. Winding roads: Routing edges into bundles. In Computer Graphics Forum, volume 29, pages 853–862. Wiley Online Library, 2010.
[14] David Levinson and Bhanu Yerra. Self-organization of surface transportation networks. Transportation Science, 40(2):179–188, 2006.
[15] Rémi Louf, Pablo Jensen, and Marc Barthelemy. Emergence of hierarchy in cost-driven growth
of spatial networks. Proceedings of the National Academy of Sciences, 110(22):8824–8829, 2013.
[16] Qi Ma, Anders Johansson, Atsushi Tero, Toshiyuki Nakagaki, and David JT Sumpter. Currentreinforced random walks for constructing transport networks. Journal of the Royal Society Interface, 10(80):20120864, 2013.
[17] Luca Masi and Massimiliano Vasile. A multidirectional physarum solver for the automated design
of space trajectories. In Evolutionary Computation (CEC), 2014 IEEE Congress on, pages 2992–
2999. IEEE, 2014.
[18] Christophe Mimeur, François Queyroi, Arnaud Banos, and Thomas Thévenin. Revisiting the
structuring effect of transportation infrastructure: an empirical approach with the French Railway
Network from 1860 to 1910. Historical Methods: A Journal of Quantitative and Interdisciplinary
History, 2017.
[19] Tomoyuki Miyaji and Isamu Ohnishi. Physarum can solve the shortest path problem on riemannian surface mathematically rigourously. International Journal of Pure and Applied Mathematics,
47(3):353–369, 2008.
[20] Tomoyuki Miyaji, Isamu Ohnishi, Atsushi Tero, and Toshiyuki Nakagaki. Failure to the shortest path decision of an adaptive transport network with double edges in plasmodium system.
International Journal of Dynamical Systems and Differential Equations, 1(3):210–219, 2008.
[21] Saket Navlakha, Alison L Barth, and Ziv Bar-Joseph. Decreasing-rate pruning optimizes
the construction of efficient and robust distributed networks. PLoS computational biology,
11(7):e1004347, 2015.
[22] N Parotisidis, Evaggelia Pitoura, and Panayiotis Tsaparas. Selecting shortcuts for a smaller
world. In SIAM International Conference on Data Mining (SDM), 2015.
[23] Juste Raimbault, Arnaud Banos, and René Doursat. A hybrid network/grid model of urban
morphogenesis and optimization. CoRR, abs/1612.08552, 2016.
[24] Jean-Paul Rodrigue, Claude Comtois, and Brian Slack. The geography of transport systems.
Routledge, 2009.
[25] Atsushi Tero, Ryo Kobayashi, and Toshiyuki Nakagaki. A mathematical model for adaptive
transport network in path finding by true slime mold. Journal of theoretical biology, 244(4):553–
564, 2007.
[26] Atsushi Tero, Seiji Takagi, Tetsu Saigusa, Kentaro Ito, Dan P Bebber, Mark D Fricker, Kenji
Yumiki, Ryo Kobayashi, and Toshiyuki Nakagaki. Rules for biologically inspired adaptive network
design. Science, 327(5964):439–442, 2010.
[27] David Vogel and Audrey Dussutour. Direct transfer of learned behaviour via cell fusion in nonneural organisms. In Proc. R. Soc. B, volume 283, page 20162382. The Royal Society, 2016.
[28] Bhanu M Yerra and David M Levinson. The emergence of hierarchy in transportation networks.
The Annals of Regional Science, 39(3):541–553, 2005.
Finite element method for thermal analysis of concentrating solar receivers
Stanko Shtrakov and Anton Stoilov
South-West University, Blagoevgrad, Bulgaria
The application of the finite element method and a heat-conduction model to the calculation of the temperature distribution in a receiver for a dish-Stirling concentrating solar system is described. The method yields discretized equations that are entirely local to the elements and provides complete geometric flexibility. A computer program solving the finite element problem was created and a great number of numerical experiments were carried out. Illustrative numerical results are given for an array of triangular elements in a receiver for a dish-Stirling system.
1. Introduction.
Cavity receivers for solar concentrating systems absorb concentrated solar energy, convert it to heat, and transfer the heat to the working gas in a power unit (Stirling engine, steam turbine). The heat flux in such elements is great and the material stress is very high. Different shapes and materials have been used to optimize the performance of cavity receivers.
Besides natural experimental tests, there are possibilities to use serious theoretical studies of the thermal behavior of cavity receivers. This would be very useful, because the processes in the cavity are complicated by the high energy intensity, and a real measurement of the temperature distribution is difficult and expensive.
Because of the special form of solar receivers, it is not suitable to use ordinary mathematical techniques, such as analytic methods or numerical solutions with finite difference approximations. The finite element method is now one of the most popular ways to solve complicated mathematical problems, especially those with irregular definition areas. There are many works treating heat conduction problems using finite element methods, and there is practical experience in this field.
This paper presents a mathematical model of the heat conduction processes in a cavity receiver for a dish-Stirling system, using the finite element method for solving the differential equations. The studies and numerical tests are made for a typical cylindrical cavity receiver, but the algorithm and computer program created for the calculation of the temperature distribution in the receiver are suited to different receiver geometries.
The simple construction scheme of a typical tube-type cavity receiver for a dish-Stirling system is shown in Fig. 1. It comprises an absorber plate (a tube system for gas heating), the cavity space, the walls, the aperture and the insulation on the external surfaces of the walls.

Fig. 1. Cavity receiver for a dish-Stirling system (labelled parts: absorber, cavity, walls, aperture, insulation)

The absorbing surface is usually placed behind the focal point of the concentrator so that the flux density on the absorbing surface is reduced. The size of the absorber and cavity walls is typically kept to a minimum to reduce heat loss and receiver cost.
Concentrated radiation entering the receiver
aperture diffuses inside the cavity. Most of the energy is directly absorbed by the absorber, and most of
the remainder is reflected or reradiated within the cavity and is eventually absorbed by absorber and
cavity walls. Heat converted by the walls is conducted to the absorber plate and is transferred to the working gas. The major advantage of a cavity receiver is that the size of the absorber may be different from the size of the aperture. With a cavity receiver, the concentrator's focus is placed at the cavity aperture and the highly concentrated flux spreads inside the cavity before encountering the larger absorbing surface area. This spreading reduces the flux incident on the absorber surface. When the incident flux on the absorbing surface is high, it is difficult to transfer heat through the surface without thermally overstressing the materials.
The bottom of the receiver is a heat exchanger for the Stirling engine. Part of the solar energy strikes directly the bottom wall of the receiver, and by conduction it delivers heat to the exchange pipes. The cylindrical walls absorb the other part of the solar energy and conduct it to the bottom of the receiver. The external cylindrical surfaces are well insulated to prevent heat losses. Natural convection in the cavity produces air circulation and convective losses. Heat losses are further caused by radiation from the cavity aperture to the ambient. These losses are not large, because of the small dimensions of the receiver. Nevertheless, all heat losses can be taken into account because they influence the temperature distribution in the receiver.
2. Theoretical Model
A mathematical model of the processes in the cavity receiver can be derived from a theoretical treatment of the heat conduction in the receiver walls, the heat transfer to the working gas through the absorber plate, and the heat losses to the ambient. Because of symmetry, the problem domain can be simply presented as in Fig. 2. The main mathematical equation of this problem is the heat conduction equation for the receiver construction, written in a cylindrical coordinate system:

∂/∂r (λ ∂T/∂r) + ∂/∂z (λ ∂T/∂z) + (λ/r) ∂T/∂r = λ∇²T = 0    (1)

where T is the temperature in the walls, r and z are the space variables in the radial and height directions (Fig. 2), and λ is the heat conductivity coefficient [W/m K]. ∇² is the Laplace operator, used for the vector-form description of the equation.
The boundary conditions needed for the solution of eqn (1) can be obtained from consideration of the energy balance for the different surfaces of the cavity receiver (labelled A–E in Fig. 2).

Fig. 2. Theoretical scheme of the receiver (surfaces A, B, C, D, E in the r–z plane)

The external surfaces of the receiver (D) are insulated, and the heat transfer to the ambient can be represented by the following equation:

λ ∂T/∂r = K_D (T − T_a)    (2)

Here T_a is the ambient temperature, T the temperature of the external surface, and K_D the overall heat transfer coefficient from the receiver external surface to the ambient air (including the insulation).
The inner surfaces of the receiver (A and C) exchange heat energy with the air in the cavity and absorb the solar radiation entering the receiver space. The boundary condition for these surfaces can be written as:
λ
(
)
∂Τ
= α B Τ − Τf + q
∂r
(3)
where αB is a convective transfer coefficient in receiver cavity [W/m2 K], Tf – air temperature in cavity,
T – temperature of inner receiver surface and q – solar energy flux [W/m2].
Surface E can be modeled as a heat exchanger heating the working gas of the Stirling engine. The boundary condition in this case is:

λ ∂T/∂r = K_W (T − T_wg)    (4)
The heat transfer coefficient K_W depends on the parameters of convective heat transfer to the working fluid. The temperature of the working fluid, T_wg, is defined from the thermodynamic treatment of the Stirling process. In general, the boundary conditions can be written in the common form

λ ∂T/∂n = h^e T + C^e    (5)

where h^e is the convection coefficient and C^e combines the convection coefficient with the external temperature and the solar flux. The superscript e refers to the surface index. The derivative ∂T/∂n is taken along the outward normal direction n for each boundary surface.
The differential problem (1)-(4) can be solved by numerical methods. The special form of the problem domain (Fig. 2) calls for an unstructured grid for the discretization of the calculation area. This is the reason for using the finite element method to solve the problem.
3. Finite Element Method
In the finite element method (FEM), the problem domain is discretized and represented by an assembly of finite elements. The method yields discretized equations that are entirely local to the element. As a result, the discrete equations are developed in isolation and are independent of the mesh configuration. In this way, finite elements can readily accommodate unstructured and complex grids, unlike numerical methods such as the finite difference method, which require a structured (rows-and-columns) format. The finite element method provides complete geometric flexibility.
Various approaches to the finite element formulation are used, but the most prevalent is the Galerkin method [1,2]. It provides the correct number of basis functions and the same resulting equations as methods based on other (more complicated) types of formulations.
Galerkin's method selects the weight functions equal to the basis functions (shape functions) of the approximate solution. It is demonstrated here on the example of the two-dimensional thermal conduction problem in the cavity receiver for the dish-Stirling system, i.e. equation (1) with boundary conditions (2)-(4).
Commonly encountered elements in two-dimensional configurations include the linear triangle, the bilinear rectangle, and the bilinear quadrilateral. Triangular elements are well suited to irregular boundaries, and spatial and scalar interpolation can be accommodated in terms of a linear polynomial.
Nodes are assigned to locations in the element at which the unknown function, such as temperature, is to be determined. Nodes are often placed only at the corners of the elements (Fig. 3), but additional nodes can be placed internally or along the element boundaries.

Fig. 3. Linear triangle with local nodes 1, 2 and 3 at coordinates (r1, z1), (r2, z2) and (r3, z3).

The main problem in the finite element method is to choose shape (interpolation) functions that approximate the variation of the dependent variable within each element between the nodal values. The simplest shape function is linear. Each shape function is a local interpolation function that is defined only within the elements containing a particular node. The linear triangle element (Fig. 3) uses linear interpolation along a side and within the element. For an interpolation involving a scalar quantity, the shape function N(r,z) within a linear triangle can be described as:

N(r,z) = a + b r + c z    (6)

where the unknown coefficients a, b, c can be determined by substituting the nodal values. For example, at node 1 the position is (r1, z1) and the scalar shape function is N1.
By using the shape functions, an approximate solution T̂(r,z) for T(r,z) is assumed in the form

T̂(r,z) = Σ_{j=1..N} T_j N_j(r,z)    (7)

where T_j are the temperatures at the nodes desired for the solution, and N_j are the shape functions, each equal to 1 at its own node. When this approximation is substituted into the energy equation (1), there is a residual that depends on r and z:
∂/∂r (λ ∂T̂/∂r) + ∂/∂z (λ ∂T̂/∂z) + (λ/r) ∂T̂/∂r = λ∇²T̂ = Res(r,z)    (8)
A solution is sought that, in an average sense over the entire domain, is as close as possible to the exact solution. Variational principles are applied to minimize the residual: a set W_i(r,z) of independent weighting functions is introduced, and the residual is made orthogonal to each of the weighting functions. This provides the following integral, evaluated for each of the independent weighting functions:

∫∫_S Res(r,z) W_i(r,z) dS    (9)

The integration is over the whole domain in which the solution is being obtained. In the Galerkin method the weighting functions are chosen to be the same function set as the shape functions. Equation (9) provides the Galerkin form of the energy equation for each of the weighting functions:
∫∫_S [λ Σ_{j=1..N} T_j ∇²N_j(r,z)] N_i(r,z) dS    (10)
Boundary conditions can be incorporated through the boundary integral term in the above equation. Considering the general form of the boundary conditions (5) and using integration by parts, the following form of equation (10) is obtained [1]:

∫∫_S λ Σ_{j=1..N} [ (∂N_i/∂r)(∂N_j/∂r) + (∂N_i/∂z)(∂N_j/∂z) + (1/r)(∂N_j/∂r) N_i ] T_j dS = ∫_n N_i(r,z) (h^e T_i + C^e) dn    (11)
Evaluating equation (11) for each i provides N simultaneous algebraic equations for the nodal temperatures T_j. In matrix form, this is:

[K_ij][T_j] = [F_j]    (12)
Since each shape function N_j(r,z) (and hence each weighting function in the Galerkin method) is zero except within an element containing node j, the resulting matrix [K_ij] for solving the simultaneous equations for T_j is sparse and usually banded along the diagonal (a banded matrix). The finite element method yields discretized equations that are entirely local to the element, and hence the global matrix [K_ij] is a simple combination (sum) of the local matrices of the individual elements.

Fig. 4. Discretization of the domain: numbered triangular elements and nodes.
By solving the system of algebraic equations (12), the unknown temperatures T_j are obtained. Because of the special form of the matrix [K_ij], different numerical techniques have been developed [1,2] for solving the system (12).
4. Numerical solution techniques
The technique outlined above is demonstrated on the example of the cavity receiver (Fig. 1). Using triangular elements, the problem domain can be discretized as shown in Fig. 4. Each element is numbered and the node coordinates (r_i, z_i) are specified.
The interpolation of the temperature in a triangular element can be written in terms of the shape functions N_i, N_j, N_k as follows:

T^(e)(r,z) = T_i N_i^(e)(r,z) + T_j N_j^(e)(r,z) + T_k N_k^(e)(r,z)

where each function N_i^(e) = a_i^(e) + b_i^(e) r + c_i^(e) z (i = 1, 2, 3) is determined from the conditions:

N_i^(e)(r_i, z_i) = 1,    N_i^(e)(r_j, z_j) = N_i^(e)(r_k, z_k) = 0.

The superscript e refers to the element number. The coefficients a, b and c of the shape functions have the following form:
a_i = (r_j z_k − r_k z_j)/2S    b_i = (z_j − z_k)/2S    c_i = (r_k − r_j)/2S
a_j = (r_k z_i − r_i z_k)/2S    b_j = (z_k − z_i)/2S    c_j = (r_i − r_k)/2S    (13)
a_k = (r_i z_j − r_j z_i)/2S    b_k = (z_i − z_j)/2S    c_k = (r_j − r_i)/2S

where S is the area of the triangular element.
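As an illustration of (13), the following C++ sketch computes the shape-function coefficients and the element area from the node coordinates; it assumes the nodes are listed counterclockwise, and all names are ours, not the paper's.

// Sketch of equation (13): coefficients of the linear shape functions
// N_i = a_i + b_i r + c_i z for a triangle with nodes (r[0],z[0]),
// (r[1],z[1]), (r[2],z[2]) listed counterclockwise (an assumption).
struct ShapeCoeffs { double a[3], b[3], c[3], S; };

ShapeCoeffs triangleCoeffs(const double r[3], const double z[3]) {
    ShapeCoeffs sc;
    // Area of the triangle from the cross product of two edge vectors.
    sc.S = 0.5 * ((r[1] - r[0]) * (z[2] - z[0]) - (r[2] - r[0]) * (z[1] - z[0]));
    for (int i = 0; i < 3; ++i) {
        int j = (i + 1) % 3, k = (i + 2) % 3;  // cyclic node indices
        sc.a[i] = (r[j] * z[k] - r[k] * z[j]) / (2.0 * sc.S);
        sc.b[i] = (z[j] - z[k]) / (2.0 * sc.S);
        sc.c[i] = (r[k] - r[j]) / (2.0 * sc.S);
    }
    return sc;
}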
The finite element method is a well-established numerical technique for heat conduction problems in Cartesian coordinates. The heat conduction problem in the cylindrical coordinate system (equation (1)) differs from the Cartesian problem by the following additional integral in equation (11):

λ ∫∫_S (1/r) (∂N_j/∂r) N_i dr dz    (14)
The shape function derivatives in equation (11) are constants: ∂N_i/∂r = b_i and ∂N_i/∂z = c_i. This gives a simple form of the local matrix for element n (part of the global matrix [K_ij]) in Cartesian coordinates (without the contribution of integral (14)). With these values of the derivatives, and without considering the boundary conditions, the matrix can be written in the following form [1]:
k^(n) = λ ·
[ (b_i^n)² + (c_i^n)²          b_i^n b_j^n + c_i^n c_j^n    b_i^n b_k^n + c_i^n c_k^n
  b_i^n b_j^n + c_i^n c_j^n    (b_j^n)² + (c_j^n)²          b_j^n b_k^n + c_j^n c_k^n
  b_i^n b_k^n + c_i^n c_k^n    b_j^n b_k^n + c_j^n c_k^n    (b_k^n)² + (c_k^n)² ]    (15)
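A minimal C++ sketch of forming (15) follows; it writes the element area S explicitly, as in standard FEM derivations [1], and reuses the ShapeCoeffs structure from the sketch above.

// Sketch of equation (15): the 3x3 local matrix of a linear triangle in
// Cartesian form, k_ab = lambda * (b_a*b_b + c_a*c_b) * S, where the area
// factor S comes from integrating the constant gradients over the element.
void localMatrix(const ShapeCoeffs& sc, double lambda, double k[3][3]) {
    for (int a = 0; a < 3; ++a)
        for (int b = 0; b < 3; ++b)
            k[a][b] = lambda * (sc.b[a] * sc.b[b] + sc.c[a] * sc.c[b]) * sc.S;
}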
Integral (14) is more complicated because of the 1/r factor and because N_i has the form (6) in the coordinates r and z. Integral (14) can be divided into three parts:

I = λ b_j ∫∫_S (N_i/r) dr dz = λ b_j [ ∫∫_S (a_i/r) dr dz + ∫∫_S b_i dr dz + ∫∫_S c_i (z/r) dr dz ] = λ b_j [I_1 + I_2 + I_3]    (16)

Integral I_2 is easy to calculate because b_i is constant: I_2 = b_i S, where S is the area of the triangular element.
The other integrals can be calculated by integrating over the triangle area (Fig. 5). By using triangular elements with the particular disposition shown in Fig. 4 and the dimensions presented in Fig. 5, the integrals I_1 and I_3 can be presented as:

I_1 = (a_i r_2/μ) [1 + ν(ln ν − 1)]

I_3 = (c_i/μ) [ r_1 ln ν (z_2 + r_1/4) − z_2 ∆r (3 r_1 r_2/4 − r_1²/4 − r_1/(2μ)) ]    (17)

where μ = ∆r/∆z and ν = r_2/r_1.
Fig. 5. Triangle elements: panels (a), (b) and (c) show triangles of different disposition in the r-z plane (with r_2 = r_3 or r_1 = r_3).
These values must be added to the corresponding members in matrix [Kij].
The cylindrical heat conduction problem can also be solved by approximate methods; this is necessary in the case of a general disposition and form of the triangular elements. One such method is to carry out the integration in (14) treating the coordinates r and z as constants equal to the coordinates of the mass center of the element, r_m and z_m. The resulting approximate solution approaches the exact one as the elements become smaller. Integral (16) can then be presented as:

I = b_j S_e [ a_i/r_m + b_i + c_i z_m/r_m ]    (18)
Another approximate method is obtained by rearranging the original equation (1) in the form:

∂/∂r (λr ∂T/∂r) + ∂/∂z (λr ∂T/∂z) = 0    (19)

This form resembles the conduction equation written in Cartesian coordinates if the product λr is interpreted as a modified conductivity coefficient. It varies between elements and depends on the radius r. For the integration in (11) an approximate approach can be adopted, for example using the constant radius r_m of the mass center of the element. In this case the matrix (15) can be used, but the coefficient λ will be different for each finite element.
The boundary conditions (5) contribute to the matrix entries by adding values for the triangle sides that touch the boundaries of the receiver: surfaces A, B, C, D, E (Fig. 2). The contribution is determined by integrating the last terms in (11); it differs for entries with equal indices and with different indices:

g_ii = g_jj = g_kk = h^e ∆r/3;    g_ij = g_jk = g_ik = h^e ∆r/6    (20)
The values in the right-hand side of the matrix equation are determined for the triangle sides that contact the ambient or the heat-exchange elements of the receiver. They are formed by integrating the terms in the last part of (11) that do not contain the temperature:

F_i = (1/2) C^e ∆r  or  F_i = (1/2) C^e ∆z    (21)
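The following sketch shows how (20) and (21) could be accumulated for one boundary side; the node indices, the side length ds (∆r or ∆z), and the containers are illustrative assumptions.

#include <vector>

// Sketch of (20)-(21): contributions of a boundary side of length ds
// joining global nodes n1 and n2, with convection coefficient he and
// constant term Ce from the general boundary condition (5).
void addBoundarySide(int n1, int n2, double he, double Ce, double ds,
                     std::vector<std::vector<double>>& K,
                     std::vector<double>& F) {
    K[n1][n1] += he * ds / 3.0;  K[n2][n2] += he * ds / 3.0;  // g_ii of (20)
    K[n1][n2] += he * ds / 6.0;  K[n2][n1] += he * ds / 6.0;  // g_ij of (20)
    F[n1] += 0.5 * Ce * ds;      F[n2] += 0.5 * Ce * ds;      // F_i of (21)
}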
5. Programming and result
A computer program solving the finite element problem has been created, and a great number of numerical experiments have been carried out. The program comprises two main parts. The first part includes program modules for automatic discretization and for numbering the elements and nodes in the mesh configuration. These modules are specific to the particular problem and domain configuration. The main result produced by these modules is a matrix with the numbers and coordinates of the triangular element nodes.
The second part of the program uses the data created by the first part to form the basic matrix [K_ij] of equation (12). A special procedure for solving the banded matrix is used to calculate the requested parameters (temperatures, heat flux, etc.). This part is standard and can be used for solving different tasks.
The program and the mathematical treatment of the finite element method were verified using the exact analytical solution of a thermal conduction problem. It is known that, for a simple configuration of the domain, the heat conduction problem has an analytical solution; such a configuration is a cylindrical domain. For this simple configuration the finite element method gives results that are very close to the analytical solution (the difference is below 0.1%).
Fig. 7 presents the temperature distribution in the receiver for the dish-Stirling system, calculated by the computer program. Some of the data used for the boundary conditions are assessed only approximately, which means that the temperature distribution in the receiver must be interpreted only in rough figures. When the conditions of receiver performance are defined more precisely, the results for the temperature distribution will be more accurate.

Fig. 7. Temperature distribution in the receiver (isotherm labels between roughly 700 and 858).
6. Conclusion
The finite element method has been applied to the heat transfer problem in a receiver for a dish-Stirling system. It uses unstructured and complex grids, which are suited to the special forms of the absorber element, and provides complete geometric flexibility.
The presented mathematical model and program algorithm are universal and can be used for different tasks. This paper discusses only the principal aspects of using the finite element method for heat transfer processes in receivers for concentrating solar systems. The numerical experiments carried out in this research show that the technique gives very good approximations to the real thermal processes. It can be used successfully to investigate different conditions and forms of receivers for concentrating solar systems and other thermal equipment.
References:
1. Zienkiewicz O.C., Morgan K., Finite Elements and Approximation, John Wiley & Sons, New York, 1983.
2. Zienkiewicz O.C., The Finite Element Method in Engineering Science, McGraw-Hill, London, 1971.
3. Naterer G.F., Heat Transfer in Single and Multiphase Systems, Taylor & Francis, New York, 1997.
4. Siegel R., Howell J.R., Thermal Radiation Heat Transfer, Taylor & Francis, New York, 1998.
5. Strang G., Fix G.J., An Analysis of the Finite Element Method, Prentice-Hall, Inc., Englewood Cliffs, 1973.
6. Norrie D.H., de Vries G., An Introduction to Finite Element Analysis, Calgary, Alberta, Canada.
7. Kreith F., Black W.B., Basic Heat Transfer, Harper and Row, Publishers, New York, 1980.
Lisco: A Continuous Approach in LiDAR Point-cloud Clustering
arXiv:1711.01853v1 [] 6 Nov 2017
Hannaneh Najdataei, Yiannis Nikolakopoulos, Vincenzo Gulisano, Marina Papatriantafilou
Chalmers University of Technology
Gothenburg, Sweden
{hannajd,ioaniko,vinmas,ptrianta}@chalmers.se
Abstract—The light detection and ranging (LiDAR) technology allows sensing surrounding objects with fine-grained resolution over large areas. Its data (aka point clouds), generated continuously at very high rates, can provide information to support automated functionality in cyber-physical systems.
Clustering of point clouds is a key problem to extract this
type of information. Methods for solving the problem in a
continuous fashion can facilitate improved processing in e.g.
fog architectures, allowing continuous, streaming processing
of data close to the sources. We propose Lisco, a single-pass continuous Euclidean-distance-based clustering of LiDAR point clouds that maximizes the granularity of the data processing pipeline. Besides its algorithmic analysis, we provide
a thorough experimental evaluation and highlight its up to
3x improvements and its scalability benefits compared to the
baseline, using both real-world datasets as well as synthetic
ones to fully explore the worst-cases.
Keywords-streaming, clustering, pointcloud, LiDAR
I. INTRODUCTION
Active sensors that are able to measure properties of the surrounding environment with very fine-grained time resolution are utilized more and more in cyber-physical systems, such as autonomous vehicles, digitalized automated industrial environments and more. These sensors can produce large streams of readings, with the LiDAR (light detection and ranging) sensor being a prominent example. A LiDAR sensor commonly mounts several lasers on a rotating column; at each rotation step, these lasers shoot light rays and, based on the time the reflected rays take to reach back the sensor, they produce a stream of distance readings at high rates, in the realm of millions of readings per second.
As common in big data applications, one of the challenges
in leveraging the information carried by such large streams
is the need for efficient methods that can rapidly distill the
valuable information from the raw measurements [5], [8],
[17], [28]. A common problem in the analysis of the LiDAR
sensor data is clustering of the raw distance measurements,
in order to detect objects surrounding the sensor [22].
This can, for instance, enable the detection of surrounding
obstacles and prevent accidents (e.g. avoiding pedestrians
in autonomous driving) or study the motion feasibility of
objects in factories’ production paths [1].
Challenges and contributions
The processing time incurred by clustering of raw measurements (aka point clouds) represents one of the main
challenges in this context because of the high rates and the
need for the clustering outcome to be available in a timely
manner for it to be useful. Furthermore, the accuracy of the
clustering is challenging as well, since readings from objects
that are at different distances from the sensor can vary a lot
in density.
A key performance enabler for high-rate data streaming analysis is the pipelining of its composing tasks. Nevertheless, common state-of-the-art approaches for clustering of LiDAR data (cf. [27], [21], [24] and more detail in § VII elaborating on related work) first organize the points to be clustered (e.g. sorting them so that points close in space are also close in the data structure maintaining them) and only then perform the clustering by querying the organized data (e.g. by running neighbor queries as discussed in § II). By doing this, they introduce a batch-based processing that affects the clustering performance.
To overcome this, we target a single-pass analysis that enables fine-grained pipelining in processing the data. We propose a new method for LiDAR data-point clustering, called Lisco, that boosts processing throughput by maximizing the internal pipelining of the analysis steps, through a key idea that exploits the inner ordering of the data generated by a LiDAR sensor.
In more detail, we make the following contributions:
1) We introduce Lisco, a new algorithm for Euclidean-distance-based clustering of LiDAR point clouds, that maximizes the granularity of the data processing pipeline, without the need for supporting sorting data structures.
2) We provide a fully implemented prototype and we discuss Lisco's complexity in connection to the state-of-the-art Euclidean-distance-based clustering method in the Point Cloud Library (PCL), which we adopt as baseline due to its known accuracy, efficiency and wide user base [22].1
3) We perform a thorough comparative evaluation, using
both real-world datasets as well as synthetic ones
to fully explore the worst-cases and the spectrum of
trade-offs. We achieve a significant improvement, up
to 3 times faster than the baseline and we also show
significant scalability benefits.
1 Available through: http://pointclouds.org
The rest of the paper is organized as follows.
§ II overviews the LiDAR sensor, the data it produces
and clustering-related techniques that exist for such data.
§ III presents the main idea, the outline and argues about
the properties of the proposed Lisco method, while the
algorithmic implementation is given in § IV. We evaluate
the proposed method in § VI. Finally, we discuss related
work and conclude in § VII and § VIII, respectively.
II. PRELIMINARIES
In this section, we give details about the key properties of
LiDAR sensors and the data they generate. We also provide
a detailed problem description and evaluation criteria of
solutions. Finally, we briefly describe the Euclidean-distance
based clustering method in PCL, which we use as baseline
as explained in the introduction.
A. LiDAR - sensor and data
The light detection and ranging (LiDAR) technology allows sensing surrounding objects with fine-grained resolution over large areas.
The LiDAR sensor mounts L lasers in a column, each
measuring the distance from the target by means of the
time difference from emitted and reflected light pulses. The
column of lasers performs R rotations per second, each
consisting of S steps, producing a set of n points, also called
point cloud. The number of points reported by the LiDAR
sensor for each rotation can be lower than L×S, since some
of the emitted pulses might not hit any obstacle. We refer to
the angle in the x-y plane between two consecutive steps2
as ∆α (e.g. measured in radians) and to the elevation angle
from the x-y plane between two consecutive lasers as ∆θ.
Each point p is described through the attributes ⟨d, l, s⟩, where d, l, s are the measured distance, the laser index and the step index. The measured distance d is a value greater than or equal to 0; a value of 0 for d indicates no reflection for that point. In the following, we use the notation p_x to refer to attribute x of point p (i.e., p_d refers to the distance reported for point p).
Let us consider in the following examples that L = 8 and
S = 8. We visually present the points by unfolding them
as a 2D matrix, where each column contains L rows and
each rotation step is a column. We assume that new data is
delivered from the physical sensor with the granularity of
one rotation step (Figure 1).
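To make the data layout concrete, the following C++ sketch shows one possible representation of a point ⟨d, l, s⟩ and its conversion to Cartesian coordinates; the assumption that laser l lies at elevation l·∆θ and step s at azimuth s·∆α from a reference direction is ours, as are all names.

#include <cmath>
#include <vector>

struct Point { float d; int l, s; };   // distance, laser index, step index
struct XYZ   { float x, y, z; };

// Spherical-to-Cartesian conversion under the stated angle assumptions;
// a point with d == 0 (no reflection) should be skipped by the caller.
XYZ toCartesian(const Point& p, float dAlpha, float dTheta) {
    float alpha = p.s * dAlpha, theta = p.l * dTheta;
    return { p.d * std::cos(theta) * std::cos(alpha),
             p.d * std::cos(theta) * std::sin(alpha),
             p.d * std::sin(theta) };
}

// One rotation as a 2D matrix M: L rows (lasers) by S columns (steps).
using Rotation = std::vector<std::vector<Point>>;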
B. Problem formulation: from point clouds to clusters
Given a set of points corresponding to LiDAR measurements, we want to identify disjoint groups of them that can
be potential objects in the environment surrounding the sensor. A natural criterion, commonly used in the literature and
applications, is the distance between points. In particular, adopting the problem definition in [20] (Chapter 4), which we paraphrase here for ease of reference:

Definition 1. [Euclidean-distance based clustering] Given n points in 3D space, we seek to partition them into some (unknown) number of clusters C using the Euclidean-distance metric, such that every cluster contains at least a predefined number of points (minPts), that is ∀j, |C_j| ≥ minPts, and all clusters are disjoint, that is C_i ∩ C_j = ∅, ∀i ≠ j. Two points p_i and p_j should be clustered together if their Euclidean distance ||p_i − p_j||_2 is at most ε, with ε being a predefined threshold.

To facilitate the presentation of the baseline algorithm and our proposed one, we introduce the ε-neighborhood of a point p: the set of points of the input whose Euclidean distance from p is at most ε. A set of points closed under the union of their ε-neighborhoods is characterized as noise if its cardinality is less than minPts.

2 If the horizontal angle between steps is not constant and lasers are not perfectly aligned in the x-y plane, then ∆α refers to the minimum such angle. Similarly, if the vertical angle between lasers is not constant, ∆θ refers to the minimum such angle.
It should be noticed that, when clustering data from scenarios like the vehicular one, a pre-processing task is usually needed to filter out points that belong to the ground, since many objects laying on it would otherwise be clustered together. Since ground removal can be implemented as an inexpensive and continuous filtering operation [10] (e.g. by removing points below a certain threshold, as we do in § VI) we do not discuss it further in the remainder.
C. Euclidean-distance based clustering in PCL
PCL [22] provides a set of tools based on a collection
of state-of-the-art algorithms to process 3D data, including
filtering, clustering, surface reconstruction and more. In this
section we review its method for cluster extraction, which
is an Euclidean-distance based clustering and we use it as a
baseline. For brevity we call this algorithm PCL E Cluster
in the rest of this document.
PCL E Cluster works on batches of data points. It first
builds a kd-tree to facilitate finding the nearest neighbors of
points. Subsequently, it proceeds as described by algorithm
1 to extract the clusters.
Algorithm 1 Main loop of PCL E Cluster
1: clusters = ∅
2: for p ∈ P do
3:     Q = ∅
4:     if p.status ≠ processed then
5:         Q.add(p)
6:         for q ∈ Q do
7:             q.status = 'processed'
8:             N = GetNeighbors(q, ε)
9:             Q.addAll(N)
10:        if size(Q) ≥ minPts then clusters.add(Q)
Starting from any arbitrary unprocessed point p, the algorithm adds it to an empty list Q (line 5).

Figure 1: Top and side views of the LiDAR's emitted light pulses showing steps and lasers, together with the resulting 2D view. In the 2D view, non-reflected pulses are white while reflected ones are coloured.
Then, PCL E Cluster adds all the points of the ε-neighborhood of each member of Q to the list. After processing all members of the list Q, if its size is greater than or equal to minPts, the list is returned as a cluster. The algorithm continues with the next unprocessed point, to explore another cluster. The procedure terminates when all input points have been processed. We discuss the computational complexity of Algorithm 1 in § V.
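For concreteness, the following C++ fragment shows a typical invocation of this class, following PCL's documented usage pattern; the parameter values are examples only.

#include <vector>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<pcl::PointIndices> extractClusters(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    // The kd-tree is built over the whole batch before clustering starts.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(cloud);

    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.4);   // epsilon, in meters (example value)
    ec.setMinClusterSize(10);      // minPts (example value)
    ec.setSearchMethod(tree);
    ec.setInputCloud(cloud);

    std::vector<pcl::PointIndices> clusterIndices;
    ec.extract(clusterIndices);    // runs Algorithm 1
    return clusterIndices;
}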
III. Lisco
We present Lisco in this section. We first discuss the intuition behind the algorithm, i.e. a continuous, single-pass approach to clustering in contrast to the existing batch-based methods discussed in § II. Subsequently, we focus on the challenges and trade-offs that continuous clustering introduces.
A. Towards continuous clustering
Based on the clustering requirements introduced in Definition 1, each point p reported by the LiDAR sensor is temporarily clustered with each neighbor point p′ within distance ε from p. A cluster of points is eventually delivered if it contains at least minPts points; otherwise its points are characterized as noise. As discussed in § II-C, implementations such as the one provided by PCL limit the pipelining of the analysis because of processing stages that cannot be executed concurrently. More concretely, they first traverse all the points to populate a supporting data structure that facilitates finding the points in the ε-neighborhood of each point p (a kd-tree in the case of PCL) and subsequently they traverse all the points a second time to cluster them.
As shown in Figure 2, the intuition behind Lisco is that the search space for the ε-neighborhood of point p can be translated into a set of readings given by certain steps (Figure 2.A) and lasers (Figure 2.B) around p, and within a certain distance range from the LiDAR's emitting sensors. It should be noted that these constraints describe a region of space that may also contain points whose distance from p is greater than ε. Nevertheless, if points within distance ε from p exist, they will be returned by one of these steps and lasers and will fall within the given distance range, as we further explain in § V.
Thus, in order to discover the ε-neighborhood of point p, by leveraging the sorted delivery of tuples from the LiDAR sensor step after step, it is enough to explore a neighbor mask centered at p, as shown in Figure 2.C. In this way, we eliminate the need for a search-optimized data structure like kd-trees, and we allow the algorithm to process data as they are received from the sensor.
B. Coping with the challenges of continuous clustering
The continuous one-pass analysis of Lisco introduces several challenges that are not found in batch-based approaches. We discuss them, and how Lisco addresses them, in the following.
1) Partial view of neighbor mask: Algorithm 2 shows how the neighbor mask is computed, i.e. the number of previous and subsequent steps σ and the number of upper and lower lasers λ that possibly contain points within distance ε from p. As discussed in § II, ∆α and ∆θ refer to the minimum angle differences between two consecutive steps and lasers, respectively.
Algorithm 2 Given point p, compute the number of previous steps σ and upper and lower lasers λ bounding at least all the points within distance ε from p.
1: procedure getNeighborMask(p)
2:     λ ← ⌈|arcsin(ε/p_d)|/∆θ⌉
3:     σ ← ⌈|arcsin(ε/p_d)|/∆α⌉
4:     return λ, σ
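A direct C++ transcription of Algorithm 2 could look as follows (a sketch with our own names, reusing the Point struct sketched in § II):

#include <cmath>

struct Mask { int lambda, sigma; };   // lasers above/below, steps before/after

// Sketch of Algorithm 2: both counts grow as p gets closer to the sensor,
// since a fixed eps then spans a larger angle; requires eps <= p.d.
Mask getNeighborMask(const Point& p, float eps, float dAlpha, float dTheta) {
    float angle = std::fabs(std::asin(eps / p.d));
    return { static_cast<int>(std::ceil(angle / dTheta)),    // lambda
             static_cast<int>(std::ceil(angle / dAlpha)) };  // sigma
}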
Algorithm 3 Main loop of Lisco
1: subclusters = ∅
2: upon event reception of step s do
3:     for l ∈ 1, . . . , L do
4:         p = M[l, s]
5:         if p_d > 0 then
6:             λ, σ = getNeighborMask(p)
7:             cluster(p, σ, λ, subclusters)
8: upon event all steps processed do
9:     for subcluster ∈ subclusters do
10:        if size(subcluster) ≥ minPts then return subcluster
Figure 2: Top and side views (A and B) showing which steps and lasers, respectively, to include in the neighbor mask (C) for the latter to contain at least all the points within distance ε from p.
The main loop of Lisco, Algorithm 3, processes the points in M, the 2D matrix of input points (described in § II), in step and laser order (Line 4). Each point is processed only if its distance is greater than 0 (that is, if the LiDAR's pulse has been reflected for the point's step and laser index) (Line 5). Once all the points have been processed, all the clusters containing at least minPts points are delivered. As can be noticed, the parameter minPts does not affect the complexity of the algorithm, since it is only used to filter the delivered clusters at the very end of the clustering process. We describe how clusters are discovered and managed within the function cluster in the following.
Given points p1 and p2 within distance ε, p1's neighbor mask will contain p2 and vice versa. To avoid comparing each pair of neighboring points twice, it is enough to consistently traverse half the neighbor mask. Lisco explores the half containing the λ lasers above and below p and the σ steps on p's left side. This allows points in p's step to be processed as soon as they are delivered (points on p's right side are yet to be delivered upon reception of p's step points). Take into account that, for a minority of steps, not all the points on p's left side lie in columns with a lower index than p's (i.e., they are not stored on the left side of the M matrix). For instance, if a point in column 2 should be compared with 3 columns on the left (σ = 3), then it should be compared with columns S−1, S and 1. In such a case, some comparisons must be postponed until such steps are delivered. A point p′ on the left side of p and within distance ε lies at a lower index than p if 0 ≤ p_s − p′_s ≤ σ. On the other hand, if p′_s + σ > S, then p′ is on the left side of, and within distance ε from, points in steps 1, . . . , p′_s + σ − S. In both cases, the clustering semantics defined in § II require p and p′ to be compared, as we do in Algorithm 4.
2) Continuous cluster management: A second challenge brought by the continuous nature of Lisco is that subclusters evolve as more steps arrive. Hence, a cluster identified once all the points in a rotation are processed might be the union of several previously discovered subclusters, as seen in Figure 3.

Figure 3: Example showing how the points of two subclusters, which evolve concurrently in Lisco's continuous analysis, may end up in the same cluster at a subsequent step. A: 4 out of 8 steps have been received; two subclusters C1 and C2 have been identified so far. B: 8 out of 8 steps have been received; one cluster C is eventually found.

Figure 3.A shows the subclusters found when
half of the points in the rotation have been processed. In the
example, 5 points have been clustered in C1 and 4 points
have been clustered in C2. The other non-colored points have not been clustered since they had no neighbors within distance ε. Figure 3.B shows the clusters found when all the
points in the rotation have been processed. At this stage,
the points previously clustered in different subclusters are
now clustered together. The point marked with x represents
the point that has one neighbor in each of the two disjoint
subclusters found by Lisco. Once x is processed, these two
subclusters should be merged.
Based on this observation, we introduce the following
informal notions to facilitate the detailed description of
Lisco. A subcluster is a set of points that have been clustered
together during the processing of the previously received
steps of the input. A cluster is a set of at least minPts points
that have been clustered together once all the steps of the
input have been processed, i.e. a subcluster with cardinality
at least minPts is characterized as a cluster after all the steps
have been processed. Finally, we consider each subcluster to
have a unique identifier called head.
Based on the above, one can notice that a subcluster can
contain points that previously belonged to two or more subclusters. Because subclusters are found continuously while
the points of a rotation are being processed, the clustering
algorithm requires methods to: (1) retrieve the head of the
points belonging to a subcluster and (2) merge two subclusters together in order for the final cluster to be delivered
as a single item. In order to do this, we use in Algorithm 4
(overviewing the clustering process applied to each incoming
point p) the functions Head H = getH(Point p) and
merge(Head H1,Head H2). Once two subclusters are
merged invoking function merge, we expect the function
getH to return the same head for any point of the two
subclusters. Without loss of generality, we assume this head
to be H1 in the following. Because of this we define a third
function setH(point p,Head H). Finally, we define
the function createH() to allow for newly discovered
subclusters to be instantiated.
Algorithm 4 Given point p, cluster it together with all the points already received from the LiDAR sensor that are within distance ε from it.
1: procedure cluster(p, λ, σ, subclusters)
2:     for p′ | (0 ≤ p_s − p′_s ≤ σ ∨ 1 ≤ p′_s ≤ p_s + σ − S) ∧ |p_l − p′_l| ≤ λ ∧ |p_d − p′_d| ≤ ε do
3:         H1 = getH(p)
4:         H2 = getH(p′)
5:         if H1 ≠ H2 ∧ ||p − p′||_2 ≤ ε then
6:             if H1 = ∅ ∧ H2 = ∅ then
7:                 H = createH()
8:                 setH(p, H)
9:                 setH(p′, H)
10:                subclusters.add(H)
11:            else if H1 = ∅ ∧ H2 ≠ ∅ then
12:                setH(p, H2)
13:            else if H1 ≠ ∅ ∧ H2 = ∅ then
14:                setH(p′, H1)
15:            else
16:                subclusters.remove(H2)
17:                merge(H1, H2)
As shown in Algorithm 4, four different cases should be checked for two points within distance ε that do not belong to the same subcluster (a minimal code sketch of these cases follows the list):
• Line 6: Neither of the two points belongs to a subcluster. In this case, a new subcluster head is created and set for both points.
• Line 11: Point p does not belong to a subcluster while point p′ does. In this case, point p will refer to the same head as point p′.
• Line 13: Point p belongs to a subcluster while point p′ does not. In this case, point p′ will refer to the same head as point p.
• Line 15: Both points p and p′ have been clustered, but to different subclusters. In this case these two subclusters are merged together.
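The sketch below is a self-contained C++ rendering of this case analysis, not the authors' code: heads are plain pointers, getH is a direct field read (the O(1) variant of § IV), and the larger-base merge policy is deferred to § IV.

#include <algorithm>
#include <vector>

struct Pt;                                    // a point, as in § II
struct Head { std::vector<Pt*> members; };    // one subcluster
struct Pt { float x, y, z; Head* head = nullptr; };

bool withinEps(const Pt& a, const Pt& b, float eps) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz <= eps * eps;  // exact Euclidean test
}

void setH(Pt& p, Head* h) { p.head = h; h->members.push_back(&p); }

void merge(Head* base, Head* other) {         // re-point the other subcluster
    for (Pt* m : other->members) m->head = base;
    base->members.insert(base->members.end(),
                         other->members.begin(), other->members.end());
}

void clusterPair(Pt& p, Pt& q, std::vector<Head*>& subclusters, float eps) {
    Head *h1 = p.head, *h2 = q.head;          // getH: a direct O(1) read
    if ((h1 != h2 || h1 == nullptr) && withinEps(p, q, eps)) {
        if (!h1 && !h2) {                     // line 6: new subcluster
            Head* h = new Head;
            setH(p, h); setH(q, h);
            subclusters.push_back(h);
        } else if (!h1) { setH(p, h2);        // line 11
        } else if (!h2) { setH(q, h1);        // line 13
        } else {                              // line 15: merge subclusters
            subclusters.erase(
                std::find(subclusters.begin(), subclusters.end(), h2));
            merge(h1, h2);
            delete h2;
        }
    }
}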
IV. ALGORITHMIC IMPLEMENTATION
We discuss the details of the algorithmic implementation
of Lisco in this section.
Data points are kept in a 2D matrix, M . The number of
rows and columns in the matrix is equal to the number of
lasers and steps respectively. Upon reception of a column of
points from LiDAR, which contains the reflected points of
all lasers in one step, we store them in the corresponding column of the matrix. By using the laser and the step number,
all the attributes of a point can be extracted in constant time.
Each entry in M holds the attributes of the corresponding
point and a pointer (initially set to NULL) to the head of its
subcluster. The head of a subcluster is defined as the point
with the lowest indices in lexicographical order of steps and
lasers during the creation of a new subcluster. When two
subclusters are merged, the head from the subcluster with
the largest number of members is maintained.
Subclusters is a hash map used to keep track of subclusters and their corresponding members. It is implemented as a linked list of arrays, where each key is the head of a subcluster and its members are stored in the array. If the size of a subcluster exceeds the size of the array, a new array is linked to the tail of the current array, so that subclusters can grow without restriction. At the end of the clustering procedure, we use subclusters to traverse through the subcluster heads. Each subcluster that has more than minPts members is announced as a cluster; otherwise it is characterized as noise.
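One way to realize such a linked list of arrays in C++ is sketched below (the chunk size and names are our assumptions, not the paper's):

#include <cstddef>
#include <list>
#include <vector>

// A member list that grows chunk by chunk: appending never moves the
// elements already stored, and growth is unbounded, as described above.
template <typename T>
class ChunkedList {
    static const std::size_t kChunk = 256;   // assumed chunk capacity
    std::list<std::vector<T>> chunks;
public:
    void push_back(const T& v) {
        if (chunks.empty() || chunks.back().size() == kChunk) {
            chunks.emplace_back();
            chunks.back().reserve(kChunk);
        }
        chunks.back().push_back(v);
    }
    std::size_t size() const {
        std::size_t n = 0;
        for (const std::vector<T>& c : chunks) n += c.size();
        return n;
    }
};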
To keep Lisco's time complexity low (discussed in § V), we aim at an efficient time complexity for the main methods used in our algorithms. As shown in Algorithm 2, the function getNeighborMask is executed in a constant number of steps, since it boils down to a fixed number of numerical operations. Similarly, the functions createH and setH can also be implemented to incur O(1) complexity, as we discuss in the following. The algorithmic implementation of getH and merge induces the following trade-off. On the one hand, merge can be implemented to induce an O(1) time-cost; this can be done by maintaining a hierarchy of subclusters being part of the same subcluster, while incurring a higher cost for getH, linear in the number of subclusters.
Figure 4 shows how some of the points clustered together once all the data is processed point to head H1 via head H2. For these points the getH method has a cost higher than that of the points directly pointing to H1, which depends on the chains induced by the data structure maintaining the hierarchy. In the proposed implementation we opt for an O(1) cost for the method getH and a higher cost for merge, as seen in Figure 5 and § V. The reason, as can be seen in Algorithm 4 and based also on our empirical evaluation in § VI, is that getH is executed twice for each pair of points being compared, while merge is executed significantly less often.
In detail, the implementations of the functions used in Algorithm 4 are as follows:
- createH: This function gets two points that do not belong to any subcluster, and returns the one with the lower index-pair for step and laser. Since the returned point will be the head of a subcluster, a new node is created in the subclusters map and the head is mapped to it.

Figure 4: Possible implementation in which the merge method is made O(1) by hierarchically linking heads of subclusters belonging to the same subcluster (after merging, one subcluster's head points to the other). Notice that the complexity of getH is then no longer O(1) but linear in the number of subclusters for some of the points.

Figure 5: Possible implementation in which the getH method can run in O(1) steps by having all points directly linked with the subcluster head. Notice that the head of all the points of subcluster C2 has been updated when the subcluster has been merged with subcluster C1.
- setH: This function sets the pointer of a point to the head
point. By calling this function, we are adding a point to a
subcluster with an identified head. The point also needs to
be added as the last element in the array of the mapped
head.
- getH: This function reads the pointer to get the head point.
If two points are in the same subcluster, they get the same
head point as the result of this function.
- merge: If two points belong to different subclusters, Lisco merges the two subclusters by calling this function. It chooses the subcluster with the bigger number of members as the base subcluster, and merges the other one into it by changing the head of its members to the base subcluster's head. After merging the subclusters, it is also necessary to remove the merged subcluster's head from the subclusters hash map and append its array to the base subcluster's array.
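A sketch of this size-based policy, reusing the types of the sketch in § III, could read (illustrative, not the authors' code):

#include <utility>

// The subcluster with more members is kept as the base, so every point's
// head changes at most O(log n) times over all merges (cf. § V).
void mergeBySize(Head* h1, Head* h2) {
    if (h1->members.size() < h2->members.size()) std::swap(h1, h2);
    for (Pt* m : h2->members) m->head = h1;   // re-point the smaller one
    h1->members.insert(h1->members.end(),
                       h2->members.begin(), h2->members.end());
    h2->members.clear();                      // h2 is then dropped from the map
}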
V. ANALYSIS
A. Correctness
Based on Lisco's functionality and algorithmic implementation, we discuss here why Lisco's outcome satisfies Definition 1.
Claim 1. If two points p and p′ are in the ε-neighborhood of each other, they will either be in the same cluster at the end of the Lisco procedure or be characterized as noise, in the same way as given in Definition 1.
Proof: (sketch) Consider that, w.l.o.g., p is processed second. As argued in the paragraph on getNeighborMask in the previous section, p′ will be found to belong to the ε-neighborhood of p. This implies that they will be merged/inserted into the same subcluster. Unless that subcluster is eventually found to contain fewer points than minPts, it will be returned as a final cluster at the end of the main loop of Lisco.
B. Complexity
In this section, we discuss the complexity analysis of
Lisco and compare it with PCL E Cluster.
Regarding PCL E Cluster, the required processing work volume is similar to that of the DBSCAN algorithm, i.e. building a spatial index (kd-tree) and using it to execute region queries for each point, resulting in an overall expected time complexity of O(n log n) processing steps [4], [19], [25].
Claim 2. Lisco's time complexity is linear in the number of points, multiplied by a factor that depends on the size of the clusters in the set of data points. In the worst case, where there is a big cluster of O(n) points, it can take O(n log n) processing steps for Lisco to complete.
Proof: (sketch) Overall, the time complexity of Lisco is the number of iterations in the main loop (i.e. n, the number of points), times the work in each iteration, i.e. for each point (i) finding its ε-neighborhood, and (ii) working with each point in the neighborhood.
Part (i) from above induces an asymptotically constant cost, depending on ε, as it is performed through the comparisons implied by the masking operation getNeighborMask. Part of the ε-neighborhood of a point p, i.e. the points with smaller step index, is compared with p through step 2 of Algorithm 4 on behalf of p, while for each of the remaining points p′ in p's ε-neighborhood, p will be identified as part of the ε-neighborhood of p′ when the respective step is executed on behalf of p′.
Regarding part (ii), the algorithmic implementations of the functions createH and setH incur a constant number of processing steps each, as we explain in § IV. Moreover, as explained in the aforementioned section, getH induces an O(1) time-cost, as a point can identify its head in O(1) (e.g. with a direct link). This implies a cost that is O(x) for merge, where x is the size of the smaller subcluster, since the merging itself needs to update the head for all the points of the smaller of the two subclusters being merged.
Since the merge function chooses the subcluster with the bigger number of points as the base subcluster, the worst case is a clustering with a huge subcluster of O(n) points; an unlikely scenario for constructing it might require Lisco to merge roughly equally-sized subclusters at each of the merge operations leading to the big subcluster (any other combination of subclusters would lead to at most an equal cost). Since halving O(n) points can be done at most O(log n) times, we can observe that the worst-case total number of merge-related processing steps will be dominated by a sequence of O(n log n) steps, which is the dominating cost in the worst-case complexity of Lisco.
VI. EVALUATION
In this section, we present our experimental methodology and the results of Lisco, and compare them with those of the PCL E Cluster algorithm. Since the clustering outcomes of PCL E Cluster and Lisco are the same, we do not need to compare the clusters and can focus on the completion time of each approach.
A. Evaluation setup
To run PCL E Cluster we use the EuclideanClusterExtraction class from the PCL library, which is designed to cluster 3D point clouds and is implemented in C++. We also implemented Lisco in C++11 and compiled both algorithms with gcc-4.8.4 using the -O3 optimization flag. All the experiments have been run on the same system, running Linux with a 2.00GHz Intel(R) Xeon(R) E5-2650 processor and 64GB RAM.
B. Data
We used both synthetic and real-world datasets. The real-world dataset has been collected from the Ford Campus dataset [18], and the synthetic ones have been generated using the Webots simulator [15]. We use synthetic datasets to explore the effect of the data (e.g. different total numbers of points collected by the LiDAR, different densities and distances of objects) on the performance of the algorithms.
There are five scenarios for the synthetic datasets. SCEN1 and SCEN2 have the same, small number of objects, but the objects are positioned near to and far from the LiDAR, respectively. Near objects reflect more points, while far objects reflect fewer points, with a larger gap between two nearby points. Similarly, SCEN3 and SCEN4 have the same number of objects (more than in the previous scenarios) and only the position of the objects is changed. Finally, SCEN5 represents a high-density environment with a lot of objects. Figure 6 shows the five simulated environments for these scenarios. In all the environments, a Velodyne HDL-64E is used to collect data points and generate a dataset within one physical rotation.
Table I summarizes the properties of the synthetic
datasets. Since we used the same specifications for the LiDAR in all scenarios, the number of steps and lasers for one physical rotation is the same, so the total number of points (including NULL points and ground points) is the same and equal to 72000. After removing the ground points and eliminating the NULL points, we obtain the number of reflected points. As shown in Table I, with the same number of objects in the environment (e.g. SCEN1 and SCEN2), changing the position of the objects (near or far) yields different numbers of reflected points.

Name  | # Points after removing NULL points and ground points
SCEN1 | 26891
SCEN2 | 16218
SCEN3 | 39028
SCEN4 | 18229
SCEN5 | 64518

Table I: Properties of the synthetic datasets and the effect of the number of objects and their distances from the LiDAR on the number of points after ground removal.

C. Performance evaluation
The execution time is measured from the time instant the first data point of the dataset is received until the time instant when the clustering algorithm has processed all data points of one full physical rotation. A higher value of ε implies a larger ε-neighborhood for each point; hence the clustering algorithm needs more time to search the neighborhood.
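As a small illustration of this measurement convention (our own sketch, not the evaluation harness used in the paper), the elapsed time can be taken around the processing of one full rotation:

#include <chrono>

// Returns the wall-clock seconds from the first received data point until
// the clustering of one full physical rotation completes.
template <typename ProcessRotationFn>
double timeOneRotation(ProcessRotationFn&& processRotation) {
    auto t0 = std::chrono::steady_clock::now();
    processRotation();                       // feed and cluster the rotation
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}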
1) Synthetic datasets: Figure 7 shows the average execution time, with a 99% confidence level over 20 runs, for different values of ε and a constant value of minPts = 10. Since the maximum margin of error at the 99% confidence level is small, the confidence intervals are not clearly distinguishable in the figure. We chose the range [0.1, 1] meters for ε, so that, for example, if ε = 0.4, all objects whose closest points are at least 40 centimetres apart are detected as separate objects. While clustering with smaller values of ε finds at least one cluster for each object, bigger values increase the probability of clustering distinct objects together. For example, with ε = 1 in SCEN3, all the cars on each side of the black car are clustered together, which leads to incorrect segmentation.
As expected, by increasing the value of ε, the execution time increases for both algorithms. As can be seen, when the number of points is high, regardless of the value of ε, Lisco is always faster than PCL. Only for a dataset with a relatively small number of points, and when ε is set to a high value, does PCL have slightly better performance than Lisco (Figure 7, SCEN2 and SCEN4). The effect of the number of points is also shown in Figure 8. As discussed in § V, building a kd-tree and using it to find nearest neighbors becomes a bottleneck for PCL when the number of points is high.
2) Real-world dataset: The Ford Campus dataset is collected by an autonomous ground vehicle testbed with a Velodyne HDL-64E LiDAR scanner [18]. The vehicle path trajectory in this dataset contains several large and small objects (e.g. buildings, vehicles, pedestrians, vegetation, etc.). We have tested PCL E Cluster and Lisco on 2280 rotations of this dataset and compare their execution times with confidence level 99%.

Figure 6: Different scenarios for simulated environments. (a) SCEN1: a sparse environment in which objects are close to the LiDAR, located on the black car. (b) SCEN2: a sparse environment in which objects are far from the LiDAR, located on the black car. (c) SCEN3: a dense environment in which objects are close to the LiDAR, located on the black car. (d) SCEN4: a dense environment in which objects are far from the LiDAR, located on the black car. (e) SCEN5: a simulated room for a high-density environment; the LiDAR is located on the purple column.

Figure 7: The average execution time on the synthetic datasets (SCEN1-SCEN5) with confidence level 99% over 20 runs, for minPts = 10 and different values of ε.

Figure 8: Scalability of Lisco and PCL with respect to the number of points.
Figure 9 shows the results of the comparison for ε values 0.3, 0.4, and 0.7. Among all the rotations, the minimum number of reflected points after ground removal is 5000, the maximum is 75550, and the average is 50225. As shown in the figure, Lisco outperforms PCL E Cluster on the real-world dataset. In real-world data there are generally more objects around the LiDAR, and therefore more reflected points besides the ground. Since Lisco processes points upon receiving them from the LiDAR, it saves more time and thus achieves better throughput.

Figure 9: The average execution time of running PCL and Lisco over 2280 rotations of the real-world dataset, for minPts = 10 and different values of ε.
VII. OTHER RELATED WORK
Data clustering has been studied for several decades, and existing algorithms have been categorized into four classes: density-based, partition-based, hierarchy-based, and grid-based [9]. Due to their ability to find arbitrarily shaped clusters without requiring the number of clusters to be known a priori, density-based methods are widely used in different applications. Well-known algorithms of this class include DBSCAN [4] and OPTICS [2]. However, to use these algorithms in big data applications and overcome their performance bottleneck in dealing with extremely large datasets, there have been several attempts to parallelize DBSCAN [13], [19]. In parallel models, the clustering procedure is divided into three steps: 1) data distribution (e.g. using a kd-tree); 2) local clustering (split across several machines); 3) merging of local clusters. Although the efficiency is improved by splitting the clustering across several machines, the pipelining of the steps has not been studied yet.
Rusu et al. [21] introduce a Euclidean-distance-based clustering, a partition-based clustering method that produces arbitrarily shaped clusters. This approach is designed for unorganized data points; so, to facilitate searching for nearest neighbors, a kd-tree is first built over the dataset and then the clustering is performed. In other works [24], [27], an octree is used to identify the neighbors before starting the clustering procedure.
Since LiDAR data points are implicitly ordered, organizing them (e.g. in a tree) may be avoided, similar to the spirit of this paper. Specifically, the characteristics of the sensor data can be used to establish neighborhood relations between points [12], [16]. Klasing et al. [12] proposed a clustering method for 2D laser scans that rotate with an independent motor to cover a 3D environment. While the proposed method compares points across different scans, similarly to the problem studied in this work, the semantics of Definition 1 are not enforced, resulting in a lower accuracy. Moosmann et al. [16] proposed an approach that turns the scan into an undirected graph to retrieve the neighborhood information of each point during clustering, but they have not studied pipelining the building of the graph and the clustering.
Zermas et al. [29] recently proposed a clustering method specific to the structure of LiDAR data points. This approach processes one scan-line (a layer of points produced by the same laser during a rotation) at a time and merges nearby clusters from different scan-lines. However, the entire rotation is needed, as the algorithm does more than one pass over the data. Also, this approach, similarly to previous works, still relies on a kd-tree for some necessary nearest-neighbor searches. Moreover, the neighborhood criterion for points clustered in the same scan-line does not take into account the distance of the point from the sensor, and thus does not guarantee the semantics of Definition 1.
Clustering of LiDAR data points is used in a wide range of applications [11], [14], [23]. Among these, autonomous vehicle applications are some of the most challenging, since they need fast and accurate results [3], [10], [26], [29]. In [3], a set of voxelisation and meshing segmentation methods is presented. Wang et al. [26] first separate the data into foreground and background; then a clustering procedure is conducted only on the foreground segments.
VIII. CONCLUSIONS AND FUTURE WORK
This work is about one of the challenges common in big data applications, namely leveraging the information carried by high-rate streams through efficient methods that can rapidly distill the valuable information from the raw measurements. A common problem in the analysis of LiDAR sensor data, generated at rates of megabytes per second, is the clustering of the raw distance measurements, in order to facilitate the detection of objects surrounding the sensor.
Lisco represents a streaming approach that processes the LiDAR points while the data is being collected. This characteristic facilitates the extraction of clusters in a continuous fashion and contributes to real-time processing. By keeping track of the different subcluster heads, Lisco can deliver subclusters to the user at any time on request, i.e. provide continuous information.
Important follow-up questions include the parallelization of Lisco's processing pipeline to take advantage of the computing architectures of the corresponding deployment environments. This necessitates algorithmic implementations on a variety of processing architectures, such as manycores/GPUs, SIMD systems, single-board devices and high-end servers, to explore Lisco's properties in a broad range of cloud and fog architectures and to evaluate its impact on applications that can be deployed on such systems. In addressing such questions it will be useful to leverage the benefits of efficient fine-grained synchronization methods in streaming-centered and bulk-operations-enabled data structures, as proposed in [6]-[8], [17].
R EFERENCES
[1] Annual report, point cloud achievements, profile project,
department of production. Technical report, Fraunhofer-Chalmers Research Center for Industrial Mathematics (FCC),
2015. [http://www.fcc.chalmers.se/mediadir/2014/09/fcc vb
2015.pdf; accessed 18-August-2017].
[2] Mihael Ankerst, Markus M Breunig, Hans-Peter Kriegel, and
Jörg Sander. Optics: ordering points to identify the clustering
structure. In ACM Sigmod record, volume 28, pages 49–60.
ACM, 1999.
[3] Bertrand Douillard, James Underwood, Noah Kuntz,
Vsevolod Vlaskine, Alastair Quadros, Peter Morton, and
Alon Frenkel. On the segmentation of 3d lidar point
clouds. In Robotics and Automation (ICRA), 2011 IEEE
International Conference on, pages 2798–2805. IEEE, 2011.
[4] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu,
et al. A density-based algorithm for discovering clusters in
large spatial databases with noise. In Kdd, pages 226–231,
1996.
[5] Zhang Fu, Magnus Almgren, Olaf Landsiedel, and Marina
Papatriantafilou. Online temporal-spatial analysis for detection of critical events in cyber-physical systems. In Big Data
(Big Data), 2014 IEEE International Conference on, pages
129–134. IEEE, 2014.
[6] Vincenzo Gulisano, Yiannis Nikolakopoulos, Daniel Cederman, Marina Papatriantafilou, and Philippas Tsigas. Efficient data streaming multiway aggregation through concurrent
algorithmic designs and new abstract data types. CoRR,
abs/1606.04746, 2016.
[7] Vincenzo Gulisano, Yiannis Nikolakopoulos, Marina Papatriantafilou, and Philippas Tsigas. Scalejoin: A deterministic,
disjoint-parallel and skew-resilient stream join. IEEE Transactions on Big Data, 2016.
[8] Vincenzo Gulisano, Yiannis Nikolakopoulos, Ivan Walulya,
Marina Papatriantafilou, and Philippas Tsigas. Deterministic
real-time analytics of geospatial data streams through scalegate objects. In Proceedings of the 9th ACM International
Conference on Distributed Event-Based Systems, DEBS ’15,
pages 316–317, New York, NY, USA, 2015. ACM.
[9] Jiawei Han, Jian Pei, and Micheline Kamber. Data mining:
concepts and techniques. Elsevier, 2011.
[10] Michael Himmelsbach, Felix V Hundelshausen, and H-J
Wuensche. Fast segmentation of 3d point clouds for ground
vehicles. In Intelligent Vehicles Symposium (IV), 2010 IEEE,
pages 560–565. IEEE, 2010.
[11] Klaas Klasing, Dirk Wollherr, and Martin Buss. A clustering
method for efficient segmentation of 3d laser data. In Robotics
and Automation, 2008. ICRA 2008. IEEE International Conference on, pages 4043–4048. IEEE, 2008.
[12] Klaas Klasing, Dirk Wollherr, and Martin Buss. Realtime
segmentation of range data using continuous nearest neighbors. In Robotics and Automation, 2009. ICRA’09. IEEE
International Conference on, pages 2431–2436. IEEE, 2009.
[13] Sonal Kumari, Poonam Goyal, Ankit Sood, Dhruv Kumar,
Sundar Balasubramaniam, and Navneet Goyal. Exact, fast
and scalable parallel dbscan for commodity platforms. In Proceedings of the 18th International Conference on Distributed
Computing and Networking, page 14. ACM, 2017.
[14] Wenkai Li, Qinghua Guo, Marek K Jakubowski, and Maggi
Kelly. A new method for segmenting individual trees from the
lidar point cloud. Photogrammetric Engineering & Remote
Sensing, 78(1):75–84, 2012.
[15] Olivier Michel. Cyberbotics ltd. webots: professional mobile
robot simulation. International Journal of Advanced Robotic
Systems, 1(1):5, 2004.
[16] Frank Moosmann, Oliver Pink, and Christoph Stiller. Segmentation of 3d lidar data in non-flat urban environments
using a local convexity criterion. In Intelligent Vehicles
Symposium, 2009 IEEE, pages 215–220. IEEE, 2009.
[17] Yiannis Nikolakopoulos, Marina Papatriantafilou, Peter
Brauer, Martin Lundqvist, Vincenzo Gulisano, and Philippas
Tsigas. Highly concurrent stream synchronization in many-core embedded systems. In Proceedings of the Third ACM
International Workshop on Many-core Embedded Systems,
MES ’16, pages 2–9, New York, NY, USA, 2016. ACM.
[18] Gaurav Pandey, James R McBride, and Ryan M Eustice. Ford
campus vision and lidar data set. The International Journal
of Robotics Research, 30(13):1543–1552, 2011.
[19] Mostofa Ali Patwary, Diana Palsetia, Ankit Agrawal, Wei-keng Liao, Fredrik Manne, and Alok Choudhary. A new
scalable parallel dbscan algorithm using the disjoint-set data
structure. In Proceedings of the International Conference
on High Performance Computing, Networking, Storage and
Analysis, page 62. IEEE Computer Society Press, 2012.
[20] Radu Bogdan Rusu. Semantic 3d object maps for everyday
manipulation in human living environments. KI-Künstliche
Intelligenz, 24(4):345–348, 2010.
[21] Radu Bogdan Rusu, Nico Blodow, Zoltan Csaba Marton, and
Michael Beetz. Close-range scene segmentation and reconstruction of 3d point cloud maps for mobile manipulation in
domestic environments. In Intelligent Robots and Systems,
2009. IROS 2009. IEEE/RSJ International Conference on,
pages 1–6. IEEE, 2009.
[22] Radu Bogdan Rusu and Steve Cousins. 3d is here: Point cloud
library (pcl). In Robotics and Automation (ICRA), 2011 IEEE
International Conference on, pages 1–4. IEEE, 2011.
[23] Aparajithan Sampath and Jie Shan. Segmentation and reconstruction of polyhedral building roofs from aerial lidar point
clouds. IEEE Transactions on geoscience and remote sensing,
48(3):1554–1567, 2010.
[24] Anh-Vu Vo, Linh Truong-Hong, Debra F Laefer, and Michela
Bertolotto. Octree-based region growing for point cloud
segmentation. ISPRS Journal of Photogrammetry and Remote
Sensing, 104:88–100, 2015.
[25] Ingo Wald and Vlastimil Havran. On building fast kd-trees
for ray tracing, and on doing that in o (n log n). In Interactive
Ray Tracing 2006, IEEE Symposium on, pages 61–69. IEEE,
2006.
[26] Dominic Zeng Wang, Ingmar Posner, and Paul Newman.
What could move? finding cars, pedestrians and bicyclists
in 3d laser data. In Robotics and Automation (ICRA), 2012
IEEE International Conference on, pages 4038–4044. IEEE,
2012.
[27] H Woo, E Kang, Semyung Wang, and Kwan H Lee. A
new segmentation method for point cloud data. International
Journal of Machine Tools and Manufacture, 42(2):167–178,
2002.
[28] Nikos Zacheilas, Vana Kalogeraki, Yiannis Nikolakopoulos,
Vincenzo Gulisano, Marina Papatriantafilou, and Philippas
Tsigas. Maximizing determinism in stream processing under
latency constraints. In Proceedings of the 11th ACM International Conference on Distributed and Event-based Systems,
DEBS ’17, pages 112–123, New York, NY, USA, 2017. ACM.
[29] Dimitris Zermas, Izzat Izzat, and Nikolaos Papanikolopoulos.
Fast segmentation of 3d point clouds: A paradigm on lidar
data for autonomous vehicle applications. In Robotics and
Automation (ICRA), 2017 IEEE International Conference on,
pages 5067–5073. IEEE, 2017.
arXiv:1612.07625v5 [] 17 Oct 2017

Hardware for Machine Learning:
Challenges and Opportunities
(Invited Paper)

Vivienne Sze, Yu-Hsin Chen, Joel Emer, Amr Suleiman, Zhengdong Zhang
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract—Machine learning plays a critical role in extracting
meaningful information out of the zettabytes of sensor data
collected every day. For some applications, the goal is to analyze
and understand the data to identify trends (e.g., surveillance,
portable/wearable electronics); in other applications, the goal is
to take immediate action based on the data (e.g., robotics/drones,
self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred
over the cloud due to privacy or latency concerns, or limitations
in the communication bandwidth. However, at the sensor there
are often stringent constraints on energy consumption and cost in
addition to throughput and accuracy requirements. Furthermore,
flexibility is often required such that the processing can be
adapted for different applications or environments (e.g., update
the weights and model in the classifier). In many applications,
machine learning often involves transforming the input data into
a higher dimensional space, which, along with programmable
weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can
be addressed at various levels of hardware design ranging from
architecture, hardware-friendly algorithms, mixed-signal circuits,
and advanced technologies (including memories and sensors).
I. I NTRODUCTION
This is the era of big data. More data has been created in
the past two years than in the entire history of the human race [1].
This is primarily driven by the exponential increase in the use
of sensors (10 billion per year in 2013, expected to reach 1
trillion by 2020 [2]) and connected devices (6.4 billion in 2016,
expected to reach 20.8 billion by 2020 [3]). These sensors and
devices generate hundreds of zettabytes (10^21 bytes) of data
per year — petabytes (10^15 bytes) per second [4].
Machine learning is needed to extract meaningful, and ideally
actionable, information from this data. A significant amount
of computation is required to analyze this data, which often
happens in the cloud. However, given the sheer volume and
rate at which data is being generated, and the high energy
cost of communication and often limited bandwidth, there is
an increasing need to perform the analysis locally near the
sensor rather than sending the raw data to the cloud. Embedding
machine learning at the edge also addresses important concerns
related to privacy, latency and security.
II. APPLICATIONS

Many applications can benefit from embedded machine
learning, ranging from multimedia to the medical space. We
will provide a few examples of areas that researchers have
investigated; however, this paper will primarily focus on
computer vision, specifically image classification, as a driving
example.

Fig. 1. Image classification: machine learning (inference) maps an input
image to scores per class, e.g., Dog (0.7), Cat (0.1), Bike (0.02), Car (0.02),
Plane (0.02), House (0.04).
A. Computer Vision
Video is arguably the biggest of the big data. It accounts for
over 70% of today’s Internet traffic [5]. For instance, over 800
million hours of video is collected daily worldwide for video
surveillance [6]. In many applications (e.g., measuring wait
times in stores, traffic patterns), it would be desirable to use
computer vision to extract the meaningful information from
the video right at the image sensor rather than in the cloud to
reduce the communication cost. For other applications such
as autonomous vehicles, drone navigation and robotics, local
processing is desired since the latency and security risk of
relying on the cloud are too high. However, video involves a
large amount of data, which is computationally complex to
process; thus, low cost hardware to analyze video is challenging
yet critical to enabling these applications.
In computer vision, there are many different artificial
intelligence (AI) tasks [7]. In this paper, we focus on image
classification (Fig. 1), where the entire image is provided and
the task is to determine which class of objects is in the image.
B. Speech Recognition
Speech recognition has significantly improved our ability to
interact with electronic devices, such as smartphones. While
currently most of the processing for applications such as Apple
Siri and Amazon Alexa voice services is in the cloud, it
is desirable to perform the recognition on the device itself
to reduce latency and dependence on connectivity, and to
increase privacy. Speech recognition is the first step before
many other AI tasks such as machine translation, natural
language processing, etc. Low power hardware for speech
recognition is explored in [8, 9].
C. Medical

There is a strong clinical need to be able to monitor patients
and to collect long-term data to help either detect/diagnose
various diseases or monitor treatment. For instance, constant
monitoring of ECG or EEG signals would be helpful in
identifying cardiovascular diseases or detecting the onset of a
seizure for epilepsy patients, respectively. In many cases, the
devices are either wearable or implantable, and thus the energy
consumption must be kept to a minimum. Using embedded
machine learning to extract meaningful physiological signals
and process them locally is explored in [10–12].

III. MACHINE LEARNING BASICS

Machine learning is a form of artificial intelligence (AI)
that can perform a task without being specifically programmed.
Instead, it learns from previous examples of the given task
during a process called training. After learning, the task is
performed on new data through a process called inference.
Machine learning is particularly useful for applications where
the data is difficult to model analytically.

Training involves learning a set of weights from a dataset.
When the data is labelled, it is referred to as supervised learning,
which is currently the most widely-used approach. Inference
involves performing a given task using the learned weights
(e.g., classify an object in an image)¹. In many cases, training
is done in the cloud. Inference can also happen in the cloud;
however, as previously discussed, for certain applications this
is not desirable from the standpoint of communication, latency
and privacy. Instead it is preferred that the inference occur
locally on a device near the sensor. In these cases, the trained
weights are downloaded from the cloud and stored on the device.
Thus, the device needs to be programmable in order to support a
reasonable range of tasks.

¹ Machine learning can be used in a discriminative or generative manner.
This paper focuses on the discriminative use.

A typical machine learning pipeline for inference can be
broken down into two steps as shown in Fig. 2: Feature
Extraction and Classification. Approaches such as deep neural
networks (DNN) blur the distinction between these steps.

Fig. 2. Inference pipeline: pixels → feature extraction (hand-crafted features,
e.g., HOG, or learned features, e.g., DNN, using trained weights w) →
features (x) → classification (w^T x) → scores per class (select class based
on max or threshold).

A. Feature Extraction

Feature extraction is used to transform the raw data into
meaningful inputs for the given task. Traditionally, feature
extraction was designed through a hand-crafted process by
experts in the field. For instance, for object recognition in
computer vision, it was observed that humans are sensitive
to edges (i.e., gradients) in an image. As a result, many well-known computer vision algorithms use image gradient-based
features such as Histogram of Oriented Gradients (HOG) [13]
and Scale Invariant Feature Transform (SIFT) [14]. The
challenge in designing these features is to make them robust
to variations in illumination and noise.

B. Classification

The output of feature extraction is represented by a vector,
which is mapped to a score using a classifier. Depending on
the application, the score is either compared to a threshold
to determine if an object is present, or compared to the other
scores to determine the object class.

Techniques often used for classification include linear
methods such as support vector machine (SVM) [15] and
Softmax, and non-linear methods such as kernel-SVM [15]
and Adaboost [16]. In many of these classifiers, the computation
of the score is effectively a dot product of the features (x)
and the weights (w) (i.e., Σ_i w_i x_i). As a result, much of the
hardware research has been focused on reducing the cost of a
multiply and accumulate (MAC).
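As a concrete illustration (a minimal sketch of our own, not code from any cited design): each class score is an accumulation of multiply-adds over the feature vector, so the inference cost scales directly with the MAC count, i.e., the feature dimension times the number of classes.

import numpy as np

def classify(x, W, b):
    # Per-class scores as dot products w_i . x + b_i; each score costs
    # D MAC operations for a D-dimensional feature vector.
    scores = W @ x + b                 # num_classes x D MACs in total
    return int(np.argmax(scores))      # select the class with the highest score

# Toy usage: 4096-dimensional features, 1000 classes -> ~4.1M MACs per inference.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
W = rng.standard_normal((1000, 4096))
b = rng.standard_normal(1000)
print(classify(x, W, b))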
C. Deep Neural Networks (DNN)
Rather than using hand-crafted features, the features can
be learned directly from the data, similar to the weights in
the classifier, such that the entire system is trained end-to-end.
These learned features are used in a popular form of machine
learning called deep neural networks (DNN), also known as
deep learning [17]. DNN delivers higher accuracy than hand-crafted features on a variety of tasks [18] by mapping inputs
to a high-dimensional space; however, it comes at the cost of
high computational complexity.
There are many forms of DNN (e.g., convolutional neural
networks, recurrent neural networks, etc.). For computer vision
applications, DNNs are composed of multiple convolutional
(CONV) layers [19] as shown in Fig. 3. With each layer,
a higher-level abstraction of the input data, called a feature
map, is extracted to preserve essential yet unique information.
Modern DNNs are able to achieve superior performance by
employing a very deep hierarchy of layers.
Fig. 4 shows an example of a convolution in DNNs. The
3-D inputs to each CONV layer are 2-D feature maps (W × H)
with multiple channels (C). For the first layer, the input would
be the 2-D image itself with three channels, typically the red,
green and blue color channels. Multiple 3-D filters (M filters
with dimension R × S × C) are then convolved with the input
feature maps, and each filter generates a channel in the output
3-D feature map (E × F with M channels). The same set of
M filters is applied to a batch of N input feature maps. Thus
there are N input feature maps and N output feature maps. In
addition, a 1-D bias is added to the filtered result.
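To make the loop structure and the cost of this computation explicit, the following is a minimal, unoptimized sketch using the notation above (the stride of 1 and absence of padding are simplifying assumptions of ours); it shows why one CONV layer performs N·M·E·F·R·S·C MACs.

import numpy as np

def conv_layer(inputs, filters, bias):
    # inputs:  (N, C, H, W)  N input fmaps with C channels
    # filters: (M, C, R, S)  M filters shared across the batch
    # bias:    (M,)          1-D bias, one value per output channel
    # returns: (N, M, E, F)  with E = H - R + 1, F = W - S + 1
    N, C, H, W = inputs.shape
    M, _, R, S = filters.shape
    E, F = H - R + 1, W - S + 1
    out = np.zeros((N, M, E, F))
    for n in range(N):                 # batch of input fmaps
        for m in range(M):             # output channels (one per filter)
            for e in range(E):         # output rows
                for f in range(F):     # output columns
                    # one output value = R * S * C MACs
                    out[n, m, e, f] = bias[m] + np.sum(
                        inputs[n, :, e:e+R, f:f+S] * filters[m])
    return out

# Toy usage: 3-channel 8x8 input, four 3x3 filters -> output shape (1, 4, 6, 6).
print(conv_layer(np.ones((1, 3, 8, 8)), np.ones((4, 3, 3, 3)), np.zeros(4)).shape)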
Fig. 3. Deep Neural Networks are composed of several convolutional layers
followed by fully connected layers. (Modern deep CNNs: 5–1000 CONV
layers, each followed by non-linearity, normalization and pooling, extracting
low-level to high-level features; 1–3 FC layers map the high-level features
to classes.)

Fig. 4. Computation of a convolution in DNN: N input fmaps (W × H, C
channels) are convolved with M filters (R × S, C channels) to produce N
output fmaps (E × F, M channels).

TABLE I
Summary of popular DNNs [20, 21, 24, 26, 27]. Accuracy measured based
on Top-5 error on ImageNet [18].

Metrics          LeNet-5   AlexNet    VGG-16     GoogLeNet (v1)  ResNet-50
Dataset          MNIST     ImageNet   ImageNet   ImageNet        ImageNet
Accuracy         n/a       16.4       7.4        6.7             5.3
CONV Layers      2         5          16         21              49
  Weights        2.6k      2.3M       14.7M      6.0M            23.5M
  MACs           283k      666M       15.3G      1.43G           3.86G
FC Layers        2         3          3          1               1
  Weights        58k       58.6M      124M       1M              2M
  MACs           58k       58.6M      124M       1M              2M
Total Weights    60k       61M        138M       7M              25.5M
Total MACs       341k      724M       15.5G      1.43G           3.9G

The output of the final CONV layer is processed by fully-connected (FC) layers for classification purposes. In FC layers,
the filter and input feature map are the same size, so that
there is a different weight for each input pixel. The number
of FC layers has been reduced from three to one in most
recent DNNs [20, 21]. In between CONV and FC layers,
additional layers can be optionally added, such as the pooling
and normalization layers [22]. Each of the CONV and FC layers
is also immediately followed by an activation layer, such as
a rectified linear unit (ReLU) [23]. Convolutions account for
over 90% of the run-time and energy consumption in DNNs.
Table I compares modern DNNs with a popular neural net
from the 1990s, LeNet-5 [24], in terms of number of layers (depth),
number of filter weights, and number of operations (i.e.,
MACs). Today’s DNNs are several orders of magnitude larger
in terms of compute and storage. A more detailed discussion
on DNNs can be found in [25].
D. Complexity versus Difficulty of Task
It is important to factor in the difficulty of the task when
comparing different machine learning methods. For instance,
the task of classifying handwritten digits from the MNIST
dataset [28] is much simpler than classifying an object into
one of 1000 classes as is required for the ImageNet
dataset [18] (Fig. 5). It is expected that the size of the classifier
or network (i.e., number of weights) and the number of MACs
will be larger for the more difficult task than the simpler task
and thus require more energy. For instance, LeNet-5 [24] is
designed for digit classification, while AlexNet [26], VGG-16 [27], GoogLeNet [20], and ResNet [21] are designed for the
1000-class image classification.

Fig. 5. MNIST (10 classes, 60k training, 10k testing) [28] vs. ImageNet
(1000 classes, 1.3M training, 100k testing) [18] datasets.
IV. CHALLENGES
The key metrics for embedded machine learning are accuracy,
energy consumption, throughput/latency, and cost.
The accuracy of the machine learning algorithm should be
measured on a sufficiently large dataset. There are many widely-used, publicly-available datasets that researchers can use (e.g.,
ImageNet).
Programmability is important since the weights need to
be updated when the environment or application changes.
In the case of DNNs, the processor must also be able to
support different networks with varying number of layers,
filters, channels and filter sizes.
The high dimensionality and need for programmability both
result in an increase in computation and data movement. Higher
dimensionality increases the amount of data generated and
programmability means that the weights also need to be read and
stored. This poses a challenge for energy-efficiency since data
movement costs more than computation [29]. In this paper, we
will discuss various methods that reduce data movement to
minimize energy consumption.
The throughput is dictated by the amount of computation,
which also increases with the dimensionality of the data. In
this paper, we will discuss various methods that the data can
be transformed to reduce the number of required operations.
The cost is dictated by the amount of storage required on
the chip. In this paper, we will discuss various methods to
reduce storage costs such that the area of the chip is reduced,
while maintaining low off-chip memory bandwidth.
Fig. 6. Highly-parallel compute paradigms: temporal architectures
(SIMD/SIMT), in which many ALUs share one control unit and memory
hierarchy (register file), and spatial architectures (dataflow processing), in
which each processing engine (PE, an ALU with its own local register file
and control) can pass data directly to neighboring PEs through the memory
hierarchy of an accelerator (global buffer and off-chip DRAM).

Fig. 7. Memory hierarchy and data movement energy [34]: relative to a
MAC in the ALU (1×, reference), an access to the local register file (RF)
costs about 1×, to a neighboring PE 2×, to the global buffer 6×, and to
off-chip DRAM 200×.
Finally, training requires a significant amount of labeled data
(particularly for DNNs) as well as computation for multiple
iterations of back-propagation to determine the value of the
weights. There is on-going research on training in the cloud
using CPUs, GPUs, FPGAs and ASICs. However, this is beyond
the scope of this paper.
Currently, state-of-the-art DNNs consume orders of magnitude higher energy than other forms of embedded processing
(e.g., video compression). We must exploit opportunities
at multiple levels of hardware design to address all these
challenges and close this energy gap.
V. OPPORTUNITIES IN ARCHITECTURES

The MAC operations in both the feature extraction (CONV
layer in a DNN) and classification (for both DNN and hand-crafted features) can be easily parallelized. Two common highly-parallel compute paradigms are shown in Fig. 6 with multiple
arithmetic logic units (ALU).
A. CPU and GPU Platforms
CPUs and GPUs use temporal architectures such as SIMD or
SIMT to perform the MACs in parallel. All the ALUs share the
same control and memory (register file). On these platforms,
all classifications are represented by a matrix multiplication.
The CONV layer in a DNN can also be mapped to a matrix
multiplication using the Toeplitz matrix. There are software
libraries designed for CPUs (e.g., OpenBLAS, Intel MKL,
etc.) and GPUs (e.g., cuBLAS, cuDNN, etc.) that optimize for
matrix multiplications. The matrix multiplication is tiled to the
storage hierarchy of these platforms, which are on the order
of a few megabytes at the higher levels.
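A minimal sketch of this CONV-to-matrix-multiplication mapping (often called im2col; the helper below is our own illustration rather than the API of any particular library): each R × S × C receptive field is unrolled into a column, so the convolution becomes a single filter-matrix by input-matrix product that BLAS-style libraries can tile.

import numpy as np

def conv_as_matmul(x, filters):
    # x: (C, H, W) input fmap; filters: (M, C, R, S); stride 1, no padding.
    C, H, W = x.shape
    M, _, R, S = filters.shape
    E, F = H - R + 1, W - S + 1
    # Toeplitz-style matrix: one unrolled receptive field per column,
    # shape (R*S*C, E*F).
    cols = np.stack([x[:, e:e+R, f:f+S].ravel()
                     for e in range(E) for f in range(F)], axis=1)
    Wmat = filters.reshape(M, -1)          # (M, R*S*C) filter matrix
    return (Wmat @ cols).reshape(M, E, F)  # back to an (M, E, F) output fmap

# Toy usage:
x = np.random.default_rng(1).standard_normal((3, 8, 8))
filt = np.random.default_rng(2).standard_normal((4, 3, 3, 3))
print(conv_as_matmul(x, filt).shape)       # (4, 6, 6)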
The matrix multiplications on these platforms can be further
sped up by applying transforms to the data to reduce the
number of multiplications. The Fast Fourier Transform (FFT) [30,
31] is a well-known approach that reduces the number of
multiplications from O(No^2 Nf^2) to O(No^2 log2 No), where the
output size is No × No and the filter size is Nf × Nf; however,
the benefits of FFTs decrease with filter size. Other approaches
include Strassen [32] and Winograd [33], which rearrange the
computation such that the number of multiplications scales from
O(N^3) to O(N^2.807), and is reduced by 2.25× for a 3 × 3 filter,
respectively, at the cost of reduced numerical stability, increased
storage requirements, and specialized processing depending on the size
of the filter.
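A quick back-of-the-envelope comparison based on the expressions above (illustrative only; it ignores constant factors and the cost of computing the transforms themselves) shows why the benefit shrinks for small filters:

import math

def direct_muls(No, Nf):
    return No**2 * Nf**2          # direct convolution: ~No^2 * Nf^2 multiplies

def fft_muls(No):
    return No**2 * math.log2(No)  # FFT-based: on the order of No^2 * log2(No)

for No, Nf in [(56, 11), (56, 3)]:
    speedup = direct_muls(No, Nf) / fft_muls(No)
    print(f"output {No}x{No}, filter {Nf}x{Nf}: ~{speedup:.1f}x fewer multiplies")
# An 11x11 filter gains ~20x, a 3x3 filter only ~1.5x, which is why the
# benefits of FFTs decrease as the filter gets smaller.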
B. Accelerators
Accelerators provide an opportunity to optimize the data
movement (i.e., dataflow) in order to minimize accesses from
the expensive levels of the memory hierarchy as shown in Fig. 7.
In particular, for DNNs we investigate dataflows that exploit
three forms of data reuse (convolutional, filter and image). We
use a spatial architecture (Fig. 6) with local memory (register
file) at each ALU processing element (PE) on the order of
0.5 – 1.0kB and a shared memory (global buffer) on the order
of 100 – 500kB. The global buffer communicates with the
off-chip memory (e.g., DRAM). Data movement is allowed
between the PEs using an on-chip network (NoC) to reduce
accesses to the global buffer and the off-chip memory. Three
types of data movement include input pixels, filter weights and
partial sums (i.e., the product of pixels and weights) that are
accumulated for the output.
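The amount of reuse available can be read off directly from the layer shape. The small illustrative calculation below (our own, using the notation of Fig. 4 and assuming a stride of 1) counts how many MACs each datum participates in, which is what a dataflow tries to capture at the cheap levels of the memory hierarchy:

def reuse_factors(N, M, C, E, F, R, S):
    return {
        # convolutional + filter reuse: each weight is used at E*F output
        # positions, for each of the N input fmaps in the batch
        "weight_reuse": E * F * N,
        # convolutional + image reuse: each input pixel feeds up to R*S
        # output positions, for each of the M filters
        "input_reuse": R * S * M,
        # each output partial sum accumulates R*S*C products before it is final
        "psum_accumulations": R * S * C,
    }

# AlexNet-like first CONV layer: 96 filters of 11x11x3, 55x55 output, batch of 4.
print(reuse_factors(N=4, M=96, C=3, E=55, F=55, R=11, S=11))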
Recent work [35–46] has proposed solutions for DNN
acceleration, but it is difficult to compare their performance
directly due to differences in implementation and design
choices. The following taxonomy (Fig. 8) can be used to
classify these existing DNN dataflows based on their data
handling characteristics [34]:
• Weight stationary (WS): The weights are stored in the
register file at the PE and remain stationary to minimize
the movement cost of the weights (Fig. 8(a)). The inputs
and partial sums must move through the spatial array and
global buffer. Examples are found in [35–40].
• Output stationary (OS): The outputs are stored in the
register file at the PE and remain stationary to minimize
the movement cost of the partial sums (Fig. 8(b)). The
inputs and weights must move through the spatial array
and global buffer. Examples are found in [41–43].
• No local reuse (NLR): While small register files are
efficient in terms of energy (pJ/bit), they are inefficient in
terms of area (µm²/bit). In order to maximize the storage
capacity, and minimize the off-chip memory bandwidth,
no local storage is allocated to the PE and instead all
that area is allocated to the global buffer to increase its
capacity (Fig. 8(c)). The trade-off is that there will be
increased traffic on the spatial array and to the global
buffer for all data types. Examples are found in [44–46].
• Row stationary (RS): In order to increase reuse of
all types of data (weights, pixels, partial sums), a row
stationary approach is proposed in [34]. A row of the filter
convolution remains stationary within a PE to exploit
1-D convolutional reuse within the PE. Multiple 1-D
rows are combined in the spatial array to exhaustively
exploit all convolutional reuse (Fig. 9), which reduces
accesses to the global buffer. Multiple 1-D rows from
different channels and filters are mapped to each PE to
reduce partial sum data movement and exploit filter reuse,
respectively. Finally, multiple passes across the spatial
array allow for additional image and filter reuse using the
global buffer. This dataflow is demonstrated in [47].

Fig. 8. Dataflows for DNNs: (a) Weight Stationary, (b) Output Stationary,
(c) No Local Reuse.

Fig. 9. Row Stationary Dataflow [34]: 1-D rows of the filter (e.g., Rows 1–3)
remain stationary across PEs 1–9 and are convolved with rows of the input
fmap to produce the rows of the output.

The dataflows are compared on a spatial array with the
same number of PEs (256), area cost and DNN (AlexNet).
Fig. 10 shows the energy consumption of each approach. The
row stationary approach is 1.4× to 2.5× more energy-efficient
than the other dataflows for the convolutional layers. This
is due to the fact that the energy of all types of data is
reduced. Furthermore, both the on-chip and off-chip energy is
considered.

Fig. 10. Energy breakdown of dataflows [34]: normalized energy per MAC
for the WS, OSA, OSB, OSC, NLR and RS dataflows, shown (a) across
types of data (pixels, weights, psums) and (b) across levels of the memory
hierarchy (ALU, RF, NoC, buffer, DRAM).

VI. OPPORTUNITIES IN JOINT ALGORITHM AND HARDWARE DESIGN

There is on-going research on modifying the machine
learning algorithms to make them more hardware-friendly while
maintaining accuracy; specifically, the focus is on reducing
computation, data movement and storage requirements.

A. Reduce Precision

The default size for programmable platforms such as
CPUs and GPUs is often 32 or 64 bits with floating-point
representation. While this remains the case for training, during
inference, it is possible to use a fixed-point representation and
substantially reduce the bitwidth for energy and area savings,
and an increase in throughput. Retraining is typically required to
maintain accuracy when pushing the weights and features to
lower bitwidth.

In hand-crafted approaches, the bitwidth can be drastically
reduced to below 16-bits without impacting the accuracy. For
instance, in object detection using HOG, each 36-dimension
feature vector only requires 9-bits per dimension, and each
weight of the SVM uses only 4-bits [48]; for object detection
using deformable parts models (DPM) [49], only 11-bits are
required per feature vector and only 5-bits are required per
SVM weight [50].

Similarly for DNN inference, it is common to see accelerators
support 16-bit fixed point [45, 47]. There has been significant
row stationary approach is 1.4× to 2.5× more energy-efficient support 16-bit fixed point [45, 47]. There has been significant
Fig. 11.
Sparse weights after basis projection [50].
a result, compression can be applied to exploit data statistics
to reduce data movement and storage cost.
Various forms of lightweight compression have been explored to reduce data movement. Lossless compression can be
used to reduce the transfer of data on and off chip [11, 53, 64].
Simple run-length coding of the activations in [65] provides
up to 1.9× bandwidth reduction, which is within 5-10% of the
theoretical entropy limit. Lossy compression such as vector
quantization can also be used on feature vectors [50] and
weights [8, 12, 66] such that they can be stored on-chip at low
cost. Generally, the cost of the compression/decompression is
on the order of a few thousand kgates with minimal energy
overhead. In the lossy compression case, it is also important
to evaluate the impact on performance accuracy.
VII. O PPORTUNITIES IN M IXED -S IGNAL C IRCUITS
research on exploring the impact of bitwidth on accuracy [51].
In fact, recently commercial hardware for DNN reportedly
support 8-bit integer operations [52]. As bitwidths can vary by
layer, hardware optimizations have been explored to exploit
the reduced bitwidth for 2.56× energy savings [53] or 2.24×
increase in throughput [54] compared to a 16-bit fixed point
implementation. With more significant changes to the network,
it is possible to reduce bitwidth down to 1-bit for either
weights [55] or both weights and activations [56, 57] at the cost
of reduced accuracy. The impact of 1-bit weights on hardware
is explored in [58].
B. Sparsity
For SVM classification, the weights can be projected onto
a basis such that the resulting weights are sparse for a 2×
reduction in number of multiplications [50] (Fig. 11). For
feature extraction, the input image can be made sparse by preprocessing for a 24% reduction in power consumption [48].
For DNNs, the number of MACs and weights can be reduced
by removing weights through a process called pruning. This
was first explored in [59] where weights with minimal impact
on the output were removed. In [60], pruning is applied to
modern DNNs by removing small weights. However, removing
weights does not necessarily lead to lower energy. Accordingly,
in [61] weights are removed based on an energy-model to
directly minimize energy consumption. The tool used for energy
modeling can be found at [62].
Specialized hardware has been proposed in [47, 50, 63,
64] to exploit sparse weights for increased speed or reduced
energy consumption. In Eyeriss [47], the processing elements
are designed to skip reads and MACs when the inputs are
zero, resulting in a 45% energy reduction. In [50], by using
specialized hardware to avoid sparse weights, the energy and
storage cost are reduced by 43% and 34%, respectively.
C. Compression
Data movement and storage are important factors in both
energy and cost. Feature extraction can result in sparse data
(e.g., gradient in HOG and ReLU in DNN) and the weights
used in classification can also be made sparse by pruning. As
Most of the data movement is in between the memory
and processing element (PE), and also the sensor and PE.
In this section, we discuss how this is addressed using mixedsignal circuit design. However, circuit non-idealities should
also be factored into the algorithm design; these circuits can
benefit from the reduced precision algorithms discussed in
Section VI. In addition, since the training often occurs in the
digital domain, the ADC and DAC conversion overhead should
also be accounted for when evaluating the system.
While spatial architectures bring the memory closer to the
computation (i.e., into the PE), there have also been efforts to
integrate the computation into the memory itself. For instance,
in [67] the classification is embedded in the SRAM. Specifically,
the word line (WL) is driven by a 5-bit feature vector using
a DAC, while the bit-cells store the binary weights ±1. The
bit-cell current is effectively a product of the value of the
feature vector and the value of the weight stored in the bit-cell;
the currents from the column are added together to discharge
the bitline (BL or BLB). A comparator is then used to compare
the resulting dot product to a threshold, specifically sign
thresholding of the differential bitlines. Due to the variations
in the bitcell, this is considered a weak classifier, and boosting
is needed to combine the weak classifiers to form a strong
classifier [68]. This approach gives 12× energy savings over
reading the 1-bit weights from the SRAM.
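A purely behavioral sketch of the computation such an array approximates (illustrative Python of ours that abstracts away all circuit effects; the additive 'noise' term loosely stands in for bit-cell variation, and the boosted combination mirrors the idea of [68]):

import numpy as np

rng = np.random.default_rng(3)

def weak_classifier(x, w, noise=0.1):
    # One column: dot product of the feature vector with +/-1 weights,
    # followed by sign thresholding; noise models bit-cell variation.
    s = np.dot(x, w) + noise * rng.standard_normal()
    return 1 if s >= 0 else -1

def strong_classifier(x, weak_ws, alphas):
    # Boosting: weighted vote over several noisy weak column decisions.
    votes = sum(a * weak_classifier(x, w) for w, a in zip(weak_ws, alphas))
    return 1 if votes >= 0 else -1

# Toy usage: 5-bit features (values 0..31) and three weak columns.
x = rng.integers(0, 32, size=128)
weak_ws = [rng.choice([-1, 1], size=128) for _ in range(3)]
print(strong_classifier(x, weak_ws, alphas=[0.5, 0.3, 0.2]))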
Recent work has also explored the use of mixed-signal
circuits to reduce the computation cost of the MAC. It was
shown in [69] that performing the MAC using switched
capacitors can be more energy-efficient than digital circuits
despite ADC and DAC conversion overhead. Accordingly,
the matrix multiplication can be integrated into the ADC as
demonstrated in [70], where the most significant bits of the
multiplications for Adaboost classification are performed using
switched capacitors in an 8-bit successive approximation format.
This is extended in [71] to not only perform multiplications,
but also the accumulation in the analog domain. It is assumed
that 3-bits and 6-bits are sufficient to represent the weights
and input vectors, respectively. This enables the computation
to move closer to the sensor and reduces the number of ADC
conversions by 21×.
To further reduce the data movement from the sensor, [72]
proposed performing the entire convolution layer (including
convolution, max pooling and quantization) in the analog
domain at the sensor. Similarly, in [73], the entire HOG
feature is computed in the analog domain to reduce the sensor
bandwidth by 96.5%.
VIII. OPPORTUNITIES IN ADVANCED TECHNOLOGIES

In the previous section, we discussed how data movement
can be reduced by moving the processing near the memory or
the sensor using mixed-signal circuits. In this section, we will
discuss how this can be achieved with advanced technologies.

The use of advanced memory technologies such as embedded
DRAM (eDRAM) and Hybrid Memory Cube (HMC) is
explored in [46] and [74], respectively, to reduce the energy
access cost of the weights in DNN. There has also been a lot
of work that investigates integrating the multiplication directly
into advanced non-volatile memories by using them as resistive
elements. Specifically, the multiplications are performed where
the conductance is the weight, the voltage is the input, and
the current is the output (note: this is the ultimate form of
weight stationary, as the weights are always held in place);
the addition is done by summing the current using Kirchhoff's
current law. In [75], memristors are used to compute a 16-bit
dot product operation with 8 memristors each storing 2-bits;
a 1-bit×2-bit multiplication is performed at each memristor,
where a 16-bit input requires 16 cycles to complete. In [76],
ReRAM is used to compute the product of a 3-bit input and
4-bit weight. Similar to the mixed-signal circuits, the precision
is limited, and the ADC and DAC conversion overhead must
be considered in the overall cost, especially when the weights
are trained in the digital domain. The conversion overhead can
be avoided by training directly in the analog domain as shown
for the fabricated memristor array in [77].

Finally, it may be feasible to embed the computation into
the sensor itself. This is useful for image processing where
the bandwidth to read the data from the sensor accounts for
a significant portion of the system energy consumption. For
instance, an Angle Sensitive Pixels sensor can be used to compute
the gradient of the input, which along with compression,
reduces the sensor bandwidth by 10× [78]. A sensor that
outputs gradients can also reduce the computation and energy
consumption of the subsequent processing engine [48, 79].

IX. HAND-CRAFTED VERSUS LEARNED FEATURES

Hand-crafted approaches give higher energy efficiency at
the cost of reduced accuracy as compared with learned
features such as DNNs. For hand-crafted features, the amount
of computation is less, and reduced bit-width is supported.
Furthermore, less data movement is required since the weights
are not required for the features. The classification weights for
both approaches must however remain programmable. Fig. 12
compares the energy consumption of HOG feature extraction
versus the convolution layers in AlexNet and VGG-16, based
on measured results from fabricated 65nm chips [50] and [47],
respectively. Note that HOG feature extraction consumes around
the same energy as video compression (under 1nJ/pixel [80]
for real-time high-definition video), which serves as a good
benchmark of what is acceptable for energy consumption near
the sensor; however, DNNs currently consume several orders
of magnitude more. A more detailed comparison can be found
in [81]. We hope that the many design opportunities that we
have highlighted in this paper will help close this gap.

Fig. 12. Energy vs. accuracy comparison of hand-crafted and learned
features: energy per pixel (nJ; measured in 65nm, feature extraction only,
not including data augmentation, ensemble and classification energy) vs.
accuracy (average precision, measured on the VOC 2007 dataset). HOG
[Suleiman, VLSI 2016] with DPM v5 [Girshick, 2012]; AlexNet and VGG-16
[Chen, ISSCC 2016] with Fast R-CNN [Girshick, CVPR 2015].

X. SUMMARY

Machine learning is an important area of research with
many promising applications and opportunities for innovation
at various levels of hardware design. During the design process,
it is important to balance the accuracy, energy, throughput and
cost requirements.

Since data movement dominates energy consumption, the
primary focus of recent research has been to reduce the data
movement while maintaining accuracy, throughput and cost.
This means selecting architectures with favorable memory
hierarchies like a spatial array, and developing dataflows that
increase data reuse at the low-cost levels of the memory
hierarchy. With joint design of algorithm and hardware, reduced
bitwidth precision, increased sparsity and compression are used
to minimize the data movement requirements. With mixed-signal
circuit design and advanced technologies, computation
is moved closer to the source by embedding computation near
or within the sensor and the memories.

One should also consider the interactions between these
different levels. For instance, reducing the bitwidth through
hardware-friendly algorithm design enables reduced precision
processing with mixed-signal circuits and non-volatile memory.
Reducing the cost of memory access with advanced technologies
could result in more energy-efficient dataflows.

ACKNOWLEDGMENT

Funding provided by DARPA YFA, MIT CICS, TSMC
University Shuttle, and gifts from Texas Instruments and Intel.
REFERENCES
[1] B. Marr, “Big Data: 20 Mind-Boggling Facts Everyone Must
Read,” Forbes.com, October 2015.
[2] “For a Trillion Sensor Road Map,” TSensorSummit, October
2013.
[3] “Gartner Says 6.4 Billion Connected ”Things” Will Be in Use
in 2016, Up 30 Percent From 2015,” Gartner.com, November
2015.
[4] “Cisco Global Cloud Index: Forecast and Methodology, 2015 2020,” Cisco, June 2016.
[5] “Complete Visual Networking Index (VNI) Forecast,” Cisco,
June 2016.
[6] J. Woodhouse, “Big, big, big data: higher and higher resolution
video surveillance,” technology.ihs.com, January 2016.
[7] R. Szeliski, Computer vision: algorithms and applications.
Springer Science & Business Media, 2010.
[8] M. Price, J. Glass, and A. P. Chandrakasan, “A 6 mW, 5,000-Word Real-Time Speech Recognizer Using WFST Models,”
IEEE J. Solid-State Circuits, vol. 50, no. 1, pp. 102–112, 2015.
[9] R. Yazdani, A. Segura, J.-M. Arnau, and A. Gonzalez, “An
ultra low-power hardware accelerator for automatic speech
recognition,” in MICRO, 2016.
[10] N. Verma, A. Shoeb, J. V. Guttag, and A. P. Chandrakasan,
“A micro-power EEG acquisition SoC with integrated seizure
detection processor for continuous patient monitoring,” in Sym.
on VLSI, 2009.
[11] T.-C. Chen, T.-H. Lee, Y.-H. Chen, T.-C. Ma, T.-D. Chuang, C.-J.
Chou, C.-H. Yang, T.-H. Lin, and L.-G. Chen, “1.4µW/channel
16-channel EEG/ECoG processor for smart brain sensor SoC,”
in Sym. on VLSI, 2010.
[12] K. H. Lee and N. Verma, “A low-power processor with
configurable embedded machine-learning accelerators for high-order and adaptive analysis of medical-sensor signals,” IEEE J.
Solid-State Circuits, vol. 48, no. 7, pp. 1625–1637, 2013.
[13] N. Dalal and B. Triggs, “Histograms of oriented gradients for
human detection,” in CVPR, 2005.
[14] D. G. Lowe, “Object recognition from local scale-invariant
features,” in ICCV, 1999.
[15] N. Cristianini and J. Shawe-Taylor, An introduction to support
vector machines and other kernel-based learning methods.
Cambridge university press, 2000.
[16] R. E. Schapire and Y. Freund, Boosting: Foundations and
algorithms. MIT press, 2012.
[17] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature,
vol. 521, no. 7553, pp. 436–444, May 2015.
[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg,
and L. Fei-Fei, “ImageNet Large Scale Visual Recognition
Challenge,” International Journal of Computer Vision (IJCV),
vol. 115, no. 3, pp. 211–252, 2015.
[19] Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional
networks and applications in vision,” in ISCAS, 2010.
[20] C. Szegedy and et al., “Going Deeper With Convolutions,” in
CVPR, 2015.
[21] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning
for Image Recognition,” in CVPR, 2016.
[22] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating
deep network training by reducing internal covariate shift,” in
ICML, 2015.
[23] V. Nair and G. E. Hinton, “Rectified Linear Units Improve
Restricted Boltzmann Machines,” in ICML, 2010.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based
learning applied to document recognition,” Proc. IEEE, vol. 86,
no. 11, pp. 2278–2324, Nov 1998.
[25] Emer, Joel and Sze, Vivienne and Chen, Yu-Hsin, “Tutorial
on Hardware Architectures for Deep Neural Networks,” http:
//eyeriss.mit.edu/tutorial.html, 2016.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet
Classification with Deep Convolutional Neural Networks,” in
NIPS, 2012.
[27] K. Simonyan and A. Zisserman, “Very Deep Convolutional
Networks for Large-Scale Image Recognition,” in ICLR, 2015.
[28] Y. LeCun, C. Cortes, and C. J. C. Burges, “THE MNIST DATABASE
of handwritten digits,” http://yann.lecun.com/exdb/mnist/.
[29] M. Horowitz, “Computing’s energy problem (and what we can
do about it),” in ISSCC, 2014.
[30] M. Mathieu, M. Henaff, and Y. LeCun, “Fast training of
convolutional networks through FFTs,” in ICLR, 2014.
[31] C. Dubout and F. Fleuret, “Exact acceleration of linear object
detectors,” in ECCV, 2012.
[32] J. Cong and B. Xiao, “Minimizing computation in convolutional
neural networks,” in ICANN, 2014.
[33] A. Lavin and S. Gray, “Fast algorithms for convolutional neural
networks,” in CVPR, 2016.
[34] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural
Networks,” in ISCA, 2016.
[35] M. Sankaradas, V. Jakkula, S. Cadambi, S. Chakradhar, I. Durdanovic, E. Cosatto, and H. P. Graf, “A Massively Parallel
Coprocessor for Convolutional Neural Networks,” in ASAP,
2009.
[36] V. Sriram, D. Cox, K. H. Tsoi, and W. Luk, “Towards an
embedded biologically-inspired machine vision processor,” in
FPT, 2010.
[37] S. Chakradhar, M. Sankaradas, V. Jakkula, and S. Cadambi, “A
Dynamically Configurable Coprocessor for Convolutional Neural
Networks,” in ISCA, 2010.
[38] V. Gokhale, J. Jin, A. Dundar, B. Martini, and E. Culurciello,
“A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks,”
in CVPRW, 2014.
[39] S. Park, K. Bong, D. Shin, J. Lee, S. Choi, and H.-J. Yoo, “A
1.93TOPS/W scalable deep learning/inference processor with
tetra-parallel MIMD architecture for big-data applications,” in
ISSCC, 2015.
[40] L. Cavigelli, D. Gschwend, C. Mayer, S. Willi, B. Muheim, and
L. Benini, “Origami: A Convolutional Network Accelerator,” in
GLVLSI, 2015.
[41] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan,
“Deep Learning with Limited Numerical Precision,” in ICML,
2015.
[42] Z. Du and et al., “ShiDianNao: Shifting Vision Processing Closer
to the Sensor,” in ISCA, 2015.
[43] M. Peemen, A. A. A. Setio, B. Mesman, and H. Corporaal,
“Memory-centric accelerator design for Convolutional Neural
Networks,” in ICCD, 2013.
[44] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing FPGA-based Accelerator Design for Deep Convolutional
Neural Networks,” in FPGA, 2015.
[45] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam,
“DianNao: A Small-footprint High-throughput Accelerator for
Ubiquitous Machine-learning,” in ASPLOS, 2014.
[46] Y. Chen and et al., “DaDianNao: A Machine-Learning Supercomputer,” in MICRO, 2014.
[47] Y.-H. Chen and et al., “Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks,”
in ISSCC, 2016.
[48] A. Suleiman and V. Sze, “Energy-efficient HOG-based object
detection at 1080HD 60 fps with multi-scale support,” in SiPS,
2014.
[49] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based
models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9,
pp. 1627–1645, 2010.
[50] A. Suleiman, Z. Zhang, and V. Sze, “A 58.6 mW real-time
programmable object detector with multi-scale multi-object
support using deformable parts model on 1920× 1080 video at
30fps,” in Sym. on VLSI, 2016.
[51] P. Gysel, M. Motamedi, and S. Ghiasi, “Hardware-oriented
Approximation of Convolutional Neural Networks,” in ICLR,
2016.
[52] S. Higginbotham, “Google Takes Unconventional Route with
Homegrown Machine Learning Chips,” Next Platform, May
2016.
[53] B. Moons and M. Verhelst, “A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets,” in Sym.
on VLSI, 2016.
[54] P. Judd, J. Albericio, and A. Moshovos, “Stripes: Bit-serial deep
neural network computing,” IEEE Computer Architecture Letters,
2016.
[55] M. Courbariaux, Y. Bengio, and J.-P. David, “Binaryconnect:
Training deep neural networks with binary weights during
propagations,” in NIPS, 2015.
[56] M. Courbariaux and Y. Bengio, “Binarynet: Training deep neural
networks with weights and activations constrained to +1 or -1,”
arXiv preprint arXiv:1602.02830, 2016.
[57] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural
Networks,” in ECCV, 2016.
[58] R. Andri and et al., “YodaNN: An Ultra-Low Power Convolutional Neural Network Accelerator Based on Binary Weights,”
in ISVLSI, 2016.
[59] Y. LeCun, J. S. Denker, and S. A. Solla, “Optimal Brain Damage,”
in NIPS, 1990.
[60] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both Weights
and Connections for Efficient Neural Network,” in NIPS, 2015.
[61] T.-J. Yang and et al., “Designing Energy-Efficient Convolutional
Neural Networks using Energy-Aware Pruning,” CVPR, 2017.
[62] “DNN Energy Estimation,” http://eyeriss.mit.edu/energy.html.
[63] J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. E. Jerger,
and A. Moshovos, “Cnvlutin: ineffectual-neuron-free deep neural
network computing,” in ISCA, 2016.
[64] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz,
and W. J. Dally, “EIE: efficient inference engine on compressed
deep neural network,” in ISCA, 2016.
[65] Y.-H. Chen, T. Krishna, J. Emer, and V. Sze, “Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional
Neural Networks,” IEEE J. Solid-State Circuits, vol. 51, no. 1,
2017.
[66] S. Han, H. Mao, and W. J. Dally, “Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization
and Huffman Coding,” in ICLR, 2016.
[67] J. Zhang, Z. Wang, and N. Verma, “A machine-learning classifier
implemented in a standard 6T SRAM array,” in Sym. on VLSI,
2016.
[68] Z. Wang, R. Schapire, and N. Verma, “Error-adaptive classifier
boosting (EACB): Exploiting data-driven training for highly
fault-tolerant hardware,” in ICASSP, 2014.
[69] B. Murmann, D. Bankman, E. Chai, D. Miyashita, and L. Yang,
“Mixed-signal circuits for embedded machine-learning applications,” in Asilomar, 2015.
[70] J. Zhang, Z. Wang, and N. Verma, “A matrix-multiplying ADC
implementing a machine-learning classifier directly with data
conversion,” in ISSCC, 2015.
[71] E. H. Lee and S. S. Wong, “A 2.5 GHz 7.7 TOPS/W switched-capacitor matrix multiplier with co-designed local memory in
40nm,” in ISSCC, 2016.
[72] R. LiKamWa, Y. Hou, J. Gao, M. Polansky, and L. Zhong, “RedEye: analog ConvNet image sensor architecture for continuous
mobile vision,” in ISCA, 2016.
[73] J. Choi, S. Park, J. Cho, and E. Yoon, “A 3.4-µW object-adaptive
CMOS image sensor with embedded feature extraction algorithm
for motion-triggered object-of-interest imaging,” IEEE J. Solid-State Circuits, vol. 49, no. 1, pp. 289–300, 2014.
[74] D. Kim, J. Kung, S. Chai, S. Yalamanchili, and S. Mukhopadhyay, “Neurocube: A programmable digital neuromorphic architecture with high-density 3D memory,” in ISCA, 2016.
[75] A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P.
Strachan, M. Hu, R. S. Williams, and V. Srikumar, “ISAAC: A
Convolutional Neural Network Accelerator with In-Situ Analog
Arithmetic in Crossbars,” in ISCA, 2016.
[76] P. Chi, S. Li, Z. Qi, P. Gu, C. Xu, T. Zhang, J. Zhao, Y. Liu,
Y. Wang, and Y. Xie, “PRIME: A Novel Processing-In-Memory
Architecture for Neural Network Computation in ReRAM-based
Main Memory,” in ISCA, 2016.
[77] M. Prezioso, F. Merrikh-Bayat, B. Hoskins, G. Adam, K. K.
Likharev, and D. B. Strukov, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors,”
Nature, vol. 521, no. 7550, pp. 61–64, 2015.
[78] A. Wang, S. Sivaramakrishnan, and A. Molnar, “A 180nm CMOS
image sensor with on-chip optoelectronic image compression,”
in CICC, 2012.
[79] H. Chen, S. Jayasuriya, J. Yang, J. Stephen, S. Sivaramakrishnan,
A. Veeraraghavan, and A. Molnar, “ASP Vision: Optically
Computing the First Layer of Convolutional Neural Networks
using Angle Sensitive Pixels,” in CVPR, 2016.
[80] T.-J. Lin, C.-A. Chien, P.-Y. Chang, C.-W. Chen, P.-H. Wang,
T.-Y. Shyu, C.-Y. Chou, S.-C. Luo, J.-I. Guo, T.-F. Chen et al.,
“A 0.48 V 0.57 nJ/pixel video-recording SoC in 65nm CMOS,”
in ISSCC, 2013.
[81] A. Suleiman, Y.-H. Chen, J. Emer, and V. Sze, “Towards Closing
the Energy Gap Between HOG and CNN Features for Embedded
Vision,” in ISCAS, 2017.
MAMADROID: Detecting Android Malware by Building Markov
Chains of Behavioral Models (Extended Version)∗

Lucky Onwuzurike1, Enrico Mariconti1, Panagiotis Andriotis2,
Emiliano De Cristofaro1, Gordon Ross1, and Gianluca Stringhini1

1 University College London
2 University of the West of England

arXiv:1711.07477v1 [cs.CR] 20 Nov 2017

∗ A preliminary version of this paper appears in the 24th Network and Distributed System Security Symposium (NDSS 2017) [36].
Abstract

As Android becomes increasingly popular, so does malware
targeting it, thus motivating the research community to propose
many different detection techniques. However, the constant
evolution of the Android ecosystem, and of malware itself,
makes it hard to design robust tools that can operate for long
periods of time without the need for modifications or costly
re-training. Aiming to address this issue, we set to detect malware from a behavioral point of view, modeled as the sequence
of abstracted API calls. We introduce MAMADROID, a static-analysis based system that abstracts an app's API calls to their
class, package, or family, and builds a model, as Markov
chains, from their sequences obtained from the call graph of the app.
This ensures that the model is more resilient to API
changes and the features set is of manageable size. We evaluate MAMADROID using a dataset of 8.5K benign and 35.5K
malicious apps collected over a period of six years, showing
that it effectively detects malware (with up to 0.99 F-measure)
and keeps its detection capabilities for long periods of time (up
to 0.87 F-measure two years after training). We also show that
MAMADROID remarkably improves over DROIDAPIMINER,
a state-of-the-art detection system that relies on the frequency
of (raw) API calls. Aiming to assess whether MAMADROID's
effectiveness mainly stems from the API abstraction or from
the sequencing modeling, we also evaluate a variant of it that
uses frequency (instead of sequences) of abstracted API calls.
We find that it is not as accurate, failing to capture maliciousness when trained on malware samples including API calls
that are equally or more frequently used by benign apps.

1 Introduction

Malware running on mobile devices can be particularly lucrative, as it may enable attackers to defeat two-factor authentication for financial and banking systems [52] and/or trigger the leakage of sensitive information [25]. As a consequence, the number of malware samples has skyrocketed in
recent years, and, due to its increased popularity, cybercriminals have increasingly targeted the Android ecosystem [15].
Detecting malware on mobile devices presents additional challenges compared to desktop/laptop computers: smartphones
have limited battery life, making it impossible to use traditional approaches requiring constant scanning and complex
computation [43]. Thus, Android malware detection is typically performed in a centralized fashion, i.e., by analyzing
apps submitted to the Play Store using Bouncer [40]. However, many malicious apps manage to avoid detection [56, 38],
and manufacturers as well as users can install apps that come
from third parties, which might not perform any malware
checks at all [67].

As a result, the research community has proposed a number
of techniques to detect malware on Android. Previous work
has often relied on the permissions requested by apps [18, 47],
using models built from malware samples. This, however, is
prone to false positives, since there are often legitimate reasons
for benign apps to request permissions classified as dangerous [18]. Another approach, used by DROIDAPIMINER [1],
is to perform classification based on API calls frequently used
by malware. However, relying on the most common calls observed during training prompts the need for constant retraining, due to the evolution of malware and the Android API
alike. For instance, “old” calls are often deprecated with new
API releases, so malware developers may switch to different
calls to perform similar actions.

In this paper, we present a novel malware detection system for Android that relies on the sequence of abstracted API
calls performed by an app rather than their use or frequency,
aiming to capture the behavioral model of the app. We design MAMADROID to abstract API calls to either the class
name (e.g., java.lang.Throwable) of the call, or its package
name (e.g., java.lang), or its source (e.g., java, android,
google), which we refer to as family.
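As a concrete illustration of the three abstraction levels, consider the following minimal sketch (a simplification of ours, not MAMADROID's actual implementation; e.g., it assumes calls of the form package.Class.method() and ignores families that span multiple prefixes, such as com.google):

def abstract_call(api_call, mode):
    # e.g. 'java.lang.Throwable.getMessage()' ->
    #   class:   'java.lang.Throwable'
    #   package: 'java.lang'
    #   family:  'java'
    parts = api_call.split("(")[0].split(".")
    if mode == "class":
        return ".".join(parts[:-1])    # drop the method name
    if mode == "package":
        return ".".join(parts[:-2])    # drop the class and method names
    if mode == "family":
        return parts[0]                # top-level source, e.g. java, android
    raise ValueError(mode)

print(abstract_call("java.lang.Throwable.getMessage()", "package"))  # java.lang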
Abstraction provides resilience to API changes in the Android framework, as families and packages are added and removed less frequently than single API calls. At the same time,
this does not abstract away the behavior of an app: for instance, packages include classes and interfaces used to perform similar operations on similar objects, so we can model
the types of operations from the package name alone. For example, the java.io package is used for system I/O and access
to the file system, even though there are different classes and
interfaces provided by the package for such operations.

After abstracting the calls, MAMADROID analyzes the sequence of API calls performed by the app, aiming to model
Our intuition is
that malware may use calls for different operations, and
in a different order, than benign apps. For example, android.media.MediaRecorder can be used by any app that
has permission to record audio, but the call sequence may
reveal that malware only uses calls from this class after
calls to getRunningTasks(), which allows recording conversations [65], as opposed to benign apps where calls from the
class may appear in any order. Relying on the sequence of abstracted calls allows us to model behavior in a more complex
way than previous work, which only looked at the presence or
absence of certain API calls or permissions [1, 4], while still
keeping the problem tractable [30]. M A M A D ROID then builds
a statistical model to represent the transitions between the API
calls performed by an app as Markov chains, and uses them to
extract features. Finally, it classifies an app as either malicious
or benign using the features it extracts from the app.
We present a detailed evaluation of the classification accuracy (using F-measure, precision, and recall) and runtime
performance of M A M A D ROID, using a dataset of almost 44K
apps (8.5K benign and 35.5K malware samples). We include
a mix of older and newer apps, from October 2010 to May
2016, verifying that our model is robust to changes in Android malware samples and APIs. To the best of our knowledge, this is the largest malware dataset used to evaluate an
Android malware detection system in a research paper. Our
experimental analysis shows that M A M A D ROID can effectively model both benign and malicious Android apps, and
efficiently classify them. Compared to other systems such as
D ROIDAPIM INER [1], our approach allows us to account for
changes in the Android API, without the need to frequently
retrain the classifier. Moreover, to assess the impact of abstraction and of Markov chain modeling on MaMaDroid, we not only compare to DroidAPIMiner, but also build a variant (called FAM) that still abstracts API calls but builds a model from their frequency rather than from their sequences, similar to DroidAPIMiner.
Overall, we find that M A M A D ROID can effectively detect
unknown malware samples not only in the "present" (with F-measure up to 0.99), but also consistently over the years (i.e.,
when the system is trained on older samples and evaluated
over newer ones), as it keeps an average detection accuracy,
evaluated in terms of F-measure, of 0.87 after one year and
0.75 after two years (as opposed to 0.46 and 0.42 achieved
by D ROIDAPIM INER [1] and 0.81 and 0.76 by FAM). We
also highlight that, when the system is no longer effective (i.e., when the test set is newer than the training set by more than two years), this is a result of MaMaDroid having low recall while maintaining high precision. We also do the opposite,
i.e., training on newer samples and verifying that the system
can still detect old malware. This is particularly important as
it shows that M A M A D ROID can detect newer threats, while
still identifying malware samples that have been in the wild
for some time.
Summary of Contributions. This paper makes several contributions. First, we introduce a novel approach, implemented in a tool called MaMaDroid, to detect Android malware
by abstracting API calls to their class, package, and family,
and model the behavior of the apps through the sequences of
API calls as Markov chains. Second, we can detect unknown samples from the same year as the training set with an F-measure of 0.99, but also years after training the system, meaning that
M A M A D ROID does not need continuous re-training. Compared to previous work [1], M A M A D ROID achieves higher
accuracy with reasonably fast running times, while also being more robust to evolution in malware development and
changes in the Android API. Third, by abstracting API calls
and using frequency analysis we still perform better than a
system that also uses frequency analysis but without abstraction (D ROIDAPIM INER). Finally, we explore the detection
performance of a finer-grained abstraction and show that abstracting to classes does not perform better than abstracting to
packages.
Paper Organization. The rest of the paper is organized as fol-
lows. The next section presents M A M A D ROID, then, Section 3 introduces the datasets used throughout the paper. In
Section 4, we evaluate M A M A D ROID in family and package modes, while, in Section 5, we explore the effectiveness
of finer-grained abstraction (i.e., class mode). In Section 6,
we present and evaluate the variant using a frequency analysis model (FAM), while we analyze runtime performances in
Section 7. Section 8 further discusses our results as well as the limitations of our approach. After reviewing related work in Section 9, the
paper concludes in Section 10.
2 The MaMaDroid System
In this section, we introduce M A M A D ROID, an Android malware detection system that relies on the transitions between
different API calls performed by Android apps.
2.1 Overview
M A M A D ROID builds a model of the sequence of API calls
as Markov chains, which are in turn used to extract features
for machine learning algorithms to classify apps as benign or
malicious.
Abstraction. M A M A D ROID does not use the raw API calls,
but abstracts each call to its family, package, or class. For instance, the API call getMessage() in Figure 1 is parsed to, respectively, java, java.lang, and java.lang.Throwable.
java.lang.Throwable: String getMessage()
Figure 1: Example of an API call and its family (java), package (java.lang), and class (java.lang.Throwable).
Given the three different types of abstraction, MaMaDroid operates in one of three modes, each using one of the types of abstraction. Naturally, we expect that the higher the abstraction, the lighter the system is, although possibly less accurate.
Figure 2: Overview of MaMaDroid's operation. In (1), it extracts the call graph from an Android app; next, it builds the sequences of (abstracted) API calls from the call graph (2). In (3), the sequences of calls are used to build a Markov chain and a feature vector for that app. Finally, classification is performed in (4), labeling the app as benign or malicious.
    package com.fa.c;

    import android.content.Context;
    import android.os.Environment;
    import android.util.Log;
    import com.stericson.RootShell.execution.Command;
    import com.stericson.RootShell.execution.Shell;
    import com.stericson.RootTools.RootTools;
    import java.io.File;

    public class RootCommandExecutor {
        public static boolean Execute(Context paramContext) {
            paramContext = new Command(0, new String[] {
                "cat " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + Utilities.GetWatchDogName(paramContext) + " > /data/" + Utilities.GetWatchDogName(paramContext),
                "cat " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + Utilities.GetExecName(paramContext) + " > /data/" + Utilities.GetExecName(paramContext),
                "rm " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + Utilities.GetWatchDogName(paramContext),
                "rm " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + Utilities.GetExecName(paramContext),
                "chmod 777 /data/" + Utilities.GetWatchDogName(paramContext),
                "chmod 777 /data/" + Utilities.GetExecName(paramContext),
                "/data/" + Utilities.GetWatchDogName(paramContext) + " " + Utilities.GetDeviceInfoCommandLineArgs(paramContext) + " /data/" + Utilities.GetExecName(paramContext) + " " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + Utilities.GetExchangeFileName(paramContext) + " " + Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + " " + Utilities.GetPhoneNumber(paramContext) });
            try {
                RootTools.getShell(true).add(paramContext);
                return true;
            }
            catch (Exception paramContext) {
                Log.d("CPS", paramContext.getMessage());
            }
            return false;
        }
    }

Figure 3: Code from a malicious app (com.g.o.speed.memboost) executing commands as root.
Building Blocks. M A M A D ROID’s operation goes through four
phases, as depicted in Figure 2. First, we extract the call graph
from each app using static analysis (1); then, we obtain the sequences of API calls using all unique nodes, after which we abstract each call to its class, package, or family (2). Next,
we model the behavior of each app by constructing Markov
chains from the sequences of API calls for the app (3), with
the transition probabilities used as the feature vector to classify
the app as either benign or malware using a machine learning
classifier (4). In the rest of this section, we discuss each of
these steps in detail.
2.2 Call Graph Extraction

The first step in MaMaDroid is to extract the app's call graph. We do so by performing static analysis on the app's apk, i.e., the standard Android archive file format containing all the files, including the Java bytecode, making up the app. We use a Java optimization and analysis framework, Soot [51], to extract call graphs, and FlowDroid [5] to ensure that contexts and flows are preserved.

To better clarify the different steps involved in our system, we employ, throughout this section, a "running example" based on a real-world malware sample. Figure 3 lists a class extracted from the decompiled apk of malware disguised as a memory booster app (with package name com.g.o.speed.memboost), which executes commands (rm, chmod, etc.) as root.1 To ease presentation, we focus on the portion of the code executed in the try/catch block. The resulting call graph of the try/catch block is shown in Figure 4. For simplicity, we omit calls for object initialization, return types, and parameters, as well as implicit calls in a method. Additional calls that are invoked when getShell(true) is called are not shown, except for the add() method, which is directly called by the program code, as shown in Figure 3.

2.3 Sequence Extraction and Abstraction

In its second phase, MaMaDroid extracts the sequences of API calls from the call graph and abstracts the calls according to one of three modes.

Sequence Extraction. Since MaMaDroid uses static analysis, the graph obtained from Soot represents the sequence of functions that are potentially called by the app. However, each execution of the app could take a specific branch of the graph and only execute a subset of the calls. For instance, when running the code in Figure 3 multiple times, the Execute method could be followed by different calls, e.g., getShell() in the try block only, or getShell() and then getMessage() in the catch block.
1 https://www.hackread.com/ghost-push-android-malware/
Figure 4: Call graph of the API calls in the try/catch block of Figure 3 (return types and parameters omitted to ease presentation).
Figure 5: Sequence of API calls extracted from the call graph in Figure 4, with the corresponding class/package/family abstraction in square brackets.
Thus, in this phase, M A M A D ROID operates as follows.
First, it identifies a set of entry nodes in the call graph, i.e.,
nodes with no incoming edges (for example, the Execute
method in the snippet from Figure 3 is the entry node if there is
no incoming edge from any other call in the app). Then, it enumerates the paths reachable from each entry node. The set of all paths identified during this phase constitutes the sequences of API calls that will be used to build the Markov chain behavioral model and to extract features. In Figure 5, we show the sequence of API calls obtained from the call graph in Figure 4. We also report, in square brackets, the family, package, and class to which each call is abstracted.
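For illustration, the enumeration of paths from entry nodes can be sketched as a depth-first traversal; the following Python sketch is our own simplification (it sidesteps cycles by never revisiting a node within a path, whereas the actual implementation has to cope with recursion and much larger graphs):

    def extract_sequences(call_graph):
        """call_graph: dict mapping each node to the list of nodes it calls.
        Returns all call paths starting from entry nodes (nodes with no
        incoming edges)."""
        callees = {c for targets in call_graph.values() for c in targets}
        entry_nodes = [n for n in call_graph if n not in callees]
        paths = []

        def dfs(node, path):
            path = path + [node]
            successors = [s for s in call_graph.get(node, []) if s not in path]
            if not successors:                   # leaf: one complete sequence
                paths.append(path)
            for s in successors:
                dfs(s, path)

        for entry in entry_nodes:
            dfs(entry, [])
        return paths

    graph = {"Execute": ["getShell", "Log.d"], "getShell": ["add"]}
    print(extract_sequences(graph))
    # [['Execute', 'getShell', 'add'], ['Execute', 'Log.d']]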
2.4 Markov-chain Based Modeling
Next, M A M A D ROID builds feature vectors, used for classification, based on the Markov chains representing the sequences
of abstracted API calls for an app. Before discussing this in
detail, we first review the basic concepts of Markov chains.
Markov Chains. Markov Chains are memoryless models where
the probability of transitioning from a state to another only
depends on the current state [39]. They are often represented
as a set of nodes, each corresponding to a different state, and
a set of edges connecting one node to another labeled with
the probability of that transition. The sum of all probabilities
associated to all edges from any node (including, if present,
an edge going back to the node itself) is exactly 1. The set
of possible states of the Markov chain is denoted by S. If S_j and S_k are two connected states, P_jk denotes the probability of transition from S_j to S_k. P_jk is given by the number of occurrences O_jk of state S_k after state S_j, divided by the number of occurrences of all states following S_j, i.e., P_jk = O_jk / Σ_{i∈S} O_ji.
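As an illustration, the transition probabilities could be estimated from the abstracted sequences along these lines (a minimal Python sketch of the formula above, not MaMaDroid's actual code; the example sequence is hypothetical):

    from collections import defaultdict

    def transition_probabilities(sequences):
        """Estimate P_jk = O_jk / sum_i(O_ji) from abstracted call sequences."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for s_j, s_k in zip(seq, seq[1:]):            # consecutive call pairs
                counts[s_j][s_k] += 1                     # O_jk
        probs = {}
        for s_j, outgoing in counts.items():
            total = sum(outgoing.values())                # sum over all i of O_ji
            probs[s_j] = {s_k: o / total for s_k, o in outgoing.items()}
        return probs

    seq = ["self-defined", "android", "self-defined", "java"]
    print(transition_probabilities([seq]))
    # {'self-defined': {'android': 0.5, 'java': 0.5}, 'android': {'self-defined': 1.0}}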
API Call Abstraction. Rather than analyzing raw API calls
from the sequence of calls, we build M A M A D ROID to work at
a higher level, and operate in one of three modes by abstracting each call to its family, package, or class. The intuition is
to make M A M A D ROID resilient to API changes and achieve
scalability. In fact, our experiments, presented in Section 3, show that, from a dataset of 44K apps, we extract more than 10 million unique API calls, which, depending on the approach used to model each app, may result in very sparse feature vectors. When operating in family mode,
we abstract an API call to one of the nine Android families,
i.e., android, google, java, javax, xml, apache, junit,
json, dom, which correspond to the android.*, com.google.*,
java.*, javax.*, org.xml.*, org.apache.*, junit.*, org.json, and
org.w3c.dom.* packages. Whereas, in package mode, we abstract the call to its package name using the list of Android
packages from the documentation2 consisting of 243 packages
as of API level 24 (the version as of September 2016), as well
as 95 from the Google API.3 In class mode, we abstract each
call to its class name using a whitelist of all class names in the
Android and Google APIs, which consist, respectively, of 4,855 and 1,116 classes.4
In all modes, we abstract developer-defined (e.g., com.stericson.roottools) and obfuscated (e.g., com.fa.a.b.d) API calls as, respectively, self-defined and obfuscated. Note that we label an API call as obfuscated if we cannot tell what its class implements, extends, or inherits, due to identifier mangling. Overall, there are 11 (9+2) families, 340 (243+95+2) possible packages, and 5,973 (4,855+1,116+2) possible classes.
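To illustrate the three abstraction levels, the following Python sketch (ours; the whitelists are tiny stand-ins for the full Android/Google package and class lists, and the detection of obfuscated calls is omitted for brevity) maps a raw call to its class, package, or family:

    FAMILIES = ["android", "com.google", "java", "javax", "org.xml",
                "org.apache", "junit", "org.json", "org.w3c.dom"]
    KNOWN_PACKAGES = {"java.lang", "java.io", "android.util", "android.media"}
    KNOWN_CLASSES = {"java.lang.Throwable", "android.util.Log"}

    def abstract_call(api_call, mode):
        """Abstract a call such as 'java.lang.Throwable: String getMessage()'."""
        cls = api_call.split(":")[0].strip()              # fully qualified class name
        if mode == "class":
            return cls if cls in KNOWN_CLASSES else "self-defined"
        if mode == "package":
            parts = cls.split(".")
            for i in range(len(parts) - 1, 0, -1):        # longest known package prefix
                if ".".join(parts[:i]) in KNOWN_PACKAGES:
                    return ".".join(parts[:i])
            return "self-defined"
        if mode == "family":
            for fam in FAMILIES:
                if cls == fam or cls.startswith(fam + "."):
                    return fam.split(".")[-1]             # e.g., 'com.google' -> 'google'
            return "self-defined"

    # abstract_call("java.lang.Throwable: String getMessage()", "family")  -> 'java'
    # abstract_call("java.lang.Throwable: String getMessage()", "package") -> 'java.lang'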
Building the model. For each app, M A M A D ROID takes as in-
put the sequence of abstracted API calls of that app (classes,
packages or families, depending on the selected mode of operation), and builds a Markov chain where each class/package/family is a state and the transitions represent the probability
of moving from one state to another. For each Markov chain,
state S0 is the entry point from which other calls are made
in a sequence. As an example, Figure 6 illustrates the Markov
chains built using classes, packages, and families, respectively,
from the sequences reported in Figure 5.
We argue that considering single transitions is more robust
against attempts to evade detection by inserting useless API
calls in order to deceive signature-based systems [33]. In
fact, M A M A D ROID considers all possible calls – i.e., all the
branches originating from a node – in the Markov chain, so
adding calls would not significantly change the probabilities
of transitions between nodes (specifically, families, packages,
or classes depending on the operational mode) for each app.
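To make this argument concrete, here is a small self-contained Python example (ours, with made-up call names) showing that inserting a single extra call into a long sequence barely perturbs the estimated transition probabilities:

    from collections import Counter

    def transitions(seq):
        """Transition probabilities estimated from a single call sequence."""
        pairs = Counter(zip(seq, seq[1:]))
        outgoing = Counter(seq[:-1])                       # occurrences as a source state
        return {(a, b): c / outgoing[a] for (a, b), c in pairs.items()}

    base = ["android", "java"] * 50                        # 100 abstracted calls
    padded = base[:50] + ["javax"] + base[50:]             # one useless call inserted
    print(transitions(base)[("java", "android")])          # 1.0
    print(transitions(padded)[("java", "android")])        # ~0.98: barely changed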
Feature Extraction. Next, we use the probabilities of transitioning from one state (abstracted call) to another in the Markov chain as the feature vector of each app.
Figure 6: Markov chains originating from the call sequence in Figure 5 when using classes (a), packages (b), or families (c).
2 https://developer.android.com/reference/packages.html
3 https://developers.google.com/android/reference/packages
4 https://developer.android.com/reference/classes.html
States that are not present in a chain are represented as 0 in the feature vector. The vector derived from the Markov chain depends on the operational mode of MaMaDroid. With families, there are 11 possible states, thus 121 possible transitions in each chain; when abstracting to packages, there are 340 states and 115,600 possible transitions; and with classes, there are 5,973 states and therefore 35,676,729 possible transitions.
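A sketch of how such a fixed-length feature vector could be assembled (the helper below is ours; the state names follow the family-mode convention described in Section 2.3):

    def feature_vector(probs, states):
        """Flatten a Markov chain into a len(states)**2 feature vector;
        transitions absent from the chain are represented as 0."""
        return [probs.get(s_j, {}).get(s_k, 0.0)
                for s_j in states for s_k in states]

    # In family mode there are 11 states (9 families + self-defined + obfuscated),
    # hence 121 features per app.
    states = ["android", "google", "java", "javax", "xml", "apache",
              "junit", "json", "dom", "self-defined", "obfuscated"]
    # vec = feature_vector(probs, states)   # probs as estimated from the sequences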
We also apply Principal Component Analysis (PCA) [29], which performs feature selection by transforming the feature space into a new space made of components that are a linear combination of the original features. The first components contain as much variance (i.e., amount of information) as possible. The variance is given as a percentage of the total amount of information of the original feature space. We apply PCA to the feature set in order to select the principal components, as PCA transforms the feature space into a smaller one where the variance is represented with as few components as possible, thus considerably reducing computation/memory complexity. Furthermore, PCA can improve classification accuracy by removing, from the feature space, features that would otherwise make the classifier perform worse.
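As an illustration, assuming a scikit-learn workflow (a library choice of ours; the data below is a random placeholder), the reduction could look as follows:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.rand(1000, 121)                # placeholder: one 121-feature row per app
    pca = PCA(n_components=10)                   # keep the 10 most important components
    X_reduced = pca.fit_transform(X)
    print(pca.explained_variance_ratio_.sum())   # fraction of the variance retained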
2.5 Classification
The last step is to perform classification, i.e., labeling apps as
either benign or malware. To this end, we test M A M A D ROID
using different classification algorithms: Random Forests, 1Nearest Neighbor (1-NN), 3-Nearest Neighbor (3-NN), and
Support Vector Machines (SVM). Note that since both accuracy and speed are worse with SVM, we omit results obtained
with it.
Each model is trained using the feature vector obtained from
the apps in a training sample. Results are presented and discussed in Section 4, and have been validated by using 10-fold
cross validation. Also note that, due to the different number
of features used in different modes, we use two distinct configurations for the Random Forests algorithm. Specifically,
when abstracting to families, we use 51 trees with maximum
depth 8, while, with classes and packages, we use 101 trees of
maximum depth 64. To tune Random Forests we follow the
methodology applied in [6].
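A minimal sketch of this setup, assuming scikit-learn (the hyper-parameters come from the text above; the feature matrix here is a random placeholder):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(200, 64)              # placeholder: 64 family-mode features per app
    y = np.random.randint(0, 2, 200)         # placeholder labels: 0 = benign, 1 = malware

    # Family mode: 51 trees with maximum depth 8;
    # package and class modes would instead use 101 trees of maximum depth 64.
    clf = RandomForestClassifier(n_estimators=51, max_depth=8)
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")   # 10-fold cross validation
    print(scores.mean())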
5 https://archive.org/details/playdrone-apk-e8
6 https://github.com/egirault/googleplay-api
7 https://virusshare.com/
8 https://github.com/androguard/androguard

3 Dataset

In this section, we introduce the datasets used in the evaluation of MaMaDroid (presented later in Section 4), which include 43,940 apk files, specifically, 8,447 benign and 35,493 malware samples. We include a mix of older and newer apps, ranging from October 2010 to May 2016, as we aim to verify that MaMaDroid is robust to changes in Android malware samples as well as in APIs. To the best of our knowledge, we are leveraging the largest dataset of malware samples ever used in a research paper on Android malware detection.

Benign Samples. Our benign dataset consists of two sets of samples: (1) one, which we denote as oldbenign, includes 5,879 apps collected by PlayDrone [54] between April and November 2013, and published on the Internet Archive5 on August 7, 2014; and (2) another, newbenign, obtained by downloading the top 100 apps in each of the 29 categories on the Google Play store as of March 7, 2016, using the googleplay-api tool.6 Due to errors encountered while downloading some apps, we have actually obtained 2,843 out of 2,900 apps. Note that 275 of these belong to more than one category; therefore, the newbenign dataset ultimately includes 2,568 unique apps.

Android Malware Samples. The set of malware samples includes apps that were used to test Drebin [4], dating back to October 2010 – August 2012 (5,560), which we denote as drebin, as well as more recent ones uploaded to the VirusShare7 site over the years. Specifically, we gather from VirusShare, respectively, 6,228, 15,417, 5,314, and 2,974 samples from 2013, 2014, 2015, and 2016. We consider each of these datasets separately in our analysis.
Category | Name      | Date Range          | #Samples | #Samples (API Calls) | #Samples (Call Graph)
Benign   | oldbenign | Apr 2013 – Nov 2013 |    5,879 |                5,837 |                 5,572
Benign   | newbenign | Mar 2016 – Mar 2016 |    2,568 |                2,565 |                 2,465
Benign   | Total     |                     |    8,447 |                8,402 |                 8,037
Malware  | drebin    | Oct 2010 – Aug 2012 |    5,560 |                5,546 |                 5,512
Malware  | 2013      | Jan 2013 – Jun 2013 |    6,228 |                6,146 |                 6,091
Malware  | 2014      | Jun 2013 – Mar 2014 |   15,417 |               14,866 |                13,804
Malware  | 2015      | Jan 2015 – Jun 2015 |    5,314 |                5,161 |                 4,451
Malware  | 2016      | Jan 2016 – May 2016 |    2,974 |                2,802 |                 2,555
Malware  | Total     |                     |   35,493 |               34,521 |                32,413

Table 1: Overview of the datasets used in our experiments.
Figure 7: CDFs of the number of API calls per app in each dataset (a), and of the percentage of android (b) and google (c) family calls.
API Calls. For each app, we extract all API calls using Androguard8, since, as explained in Section 4.5, these constitute the features used by DroidAPIMiner [1] (against which we compare our system), as well as by a variant of MaMaDroid that is based on frequency analysis (see Section 6). Due to Androguard failing to decompress some of the apks, bad CRC-32 redundancy checks, and errors during unpacking, we are not able to extract the API calls for all the samples, but only for 40,923 (8,402 benign, 34,521 malware) out of the 43,940 apps in our datasets.

Call Graphs. To extract the call graph of each apk, we use Soot. Note that for some of the larger apks, Soot requires a non-negligible amount of memory to extract the call graph, so we allocate 16GB of RAM to the Java VM heap space. We find that for 2,472 (364 benign + 2,108 malware) samples, Soot is not able to complete the extraction, due to it failing to apply the jb phase or reporting an error in opening some zip files (i.e., the apks). The jb phase is used by Soot to transform Java bytecode into the jimple intermediate representation (the primary IR of Soot) for optimization purposes. Therefore, we exclude these apps from our evaluation and discuss this limitation further in Section 8.3. In Table 1, we provide a summary of our seven datasets, reporting the total number of samples per dataset, as well as those for which we are able to extract the API calls (second-to-last column) and the call graphs (last column).

Dataset Characterization. Aiming to shed light on the evolution of API calls in Android apps, we also performed some measurements over our datasets. In Figure 7(a), we plot the Cumulative Distribution Function (CDF) of the number of unique API calls in the apps in different datasets, highlighting that newer apps, both benign and malicious, use more API calls overall than older apps. This indicates that, as time goes by, Android apps become more complex. When looking at the fraction of API calls belonging to specific families, we discover some interesting aspects of Android apps developed in different years. In particular, we notice that API calls to the android family become less prominent as time passes (Figure 7(b)), both in benign and malicious datasets, while google calls become more common in newer apps (Figure 7(c)). In general, we conclude that benign and malicious apps show the same evolutionary trends over the years. Malware, however, appears to reach the same characteristics (in terms of level of complexity and fraction of API calls from certain families) as legitimate apps with a few years of delay.
Principal Component Analysis. Finally, we apply PCA to select the two most important components of the feature space. We plot and compare the positions of the samples along these two components for benign (Figure 8(a)) and malicious samples (Figure 8(b)). As PCA combines the features into components that maximize the variance of the distribution of the samples, plotting the positions of the samples along these components shows that benign apps tend to be located in different areas of the component space depending on the dataset, while malware samples occupy similar areas, but with different densities. These differences highlight a behavioral difference between benign and malicious samples, which the machine learning algorithms used for classification should also pick up.
4 MaMaDroid Evaluation
We now present an experimental evaluation of MaMaDroid in family and package mode; later, in Section 5, we evaluate it in class mode. We use the datasets summarized in Table 1, and evaluate MaMaDroid with respect to (1) its accuracy on benign and malicious samples developed around the same time;
Figure 8: Positions of benign vs malware samples in the feature space of the first two components of the PCA (family mode): (a) benign, (b) malware.
and (2) its robustness to the evolution of malware as well as of the Android framework, by using older datasets for training and newer ones for testing, and vice-versa.
4.1 Experimental Settings

To assess the accuracy of the classification, we use the standard F-measure metric, calculated as F = 2 · (precision · recall)/(precision + recall), where precision = TP/(TP+FP) and recall = TP/(TP+FN). TP denotes the number of samples correctly classified as malicious, while FP and FN indicate, respectively, the number of samples mistakenly identified as malicious and as benign.

Note that all our experiments perform 10-fold cross validation using at least one malicious and one benign dataset from Table 1. In other words, after merging the datasets, the resulting set is shuffled and divided into ten equal-size random subsets. Classification is then performed ten times, using nine subsets for training and one for testing, and results are averaged over the ten experiments.

When implementing MaMaDroid in family mode, we exclude the json and dom families because they are almost never used across our datasets, and junit, which is primarily used for testing. In package mode, in order to avoid mislabeling when self-defined APIs have "android" in the name, we split the android package into its two classes, i.e., android.R and android.Manifest. Therefore, in family mode, there are 8 possible states, thus 64 features, whereas, in package mode, we have 341 states and 116,281 features (cf. Section 2.4).

4.2 MaMaDroid's Performance (Family and Package Mode)

We start by evaluating the performance of MaMaDroid when it is trained and tested on datasets from the same year. In Figure 9, we plot the F-measure achieved by MaMaDroid in family and package mode, using datasets from the same year for training and testing, with three different classifiers. As already discussed in Section 2.4, we apply PCA, as it allows us to transform a large feature space into a smaller one. When operating in package mode, PCA could be particularly beneficial in reducing computation and memory complexity, since MaMaDroid originally has to operate over 116,281 features. Hence, in Table 2 we report the precision, recall, and F-measure achieved by MaMaDroid in both modes, with and without the application of PCA, using the Random Forest classifier. We report results for Random Forest only because it outperforms both 1-NN and 3-NN (Figure 9), while also being very fast. In package mode, we find that only 67% of the variance is accounted for by the 10 most important PCA components, whereas in family mode at least 91% of the variance is covered by the 10 most important components. As shown in Table 2, the F-measure with PCA is only slightly lower (by up to 3%) than with the full feature set. In general, MaMaDroid performs better in package mode on all datasets, with F-measure ranging from 0.92 to 0.99, compared to 0.88 to 0.98 in family mode. This is a result of the increased granularity, which enables MaMaDroid to identify more differences between benign and malicious apps. On the other hand, this likely reduces the efficiency of the system, as many of the states derived from the abstraction are used only a few times. The differences in runtime performance between the two modes are analyzed in detail in Section 7.

4.3 Detection Over Time

As Android evolves over the years, so do the characteristics of both benign and malicious apps. Such evolution must be taken into account when evaluating Android malware detection systems, since their accuracy might be significantly affected as newer APIs are released and/or as malicious developers modify their strategies to avoid detection. Evaluating this aspect constitutes one of our research questions, and is one of the reasons why our datasets span multiple years (2010–2016).

Recall that MaMaDroid relies on the sequence of API calls extracted from the call graphs and abstracted to either the package or the family level. Therefore, it is less susceptible to changes in the Android API than other classification systems, such as DroidAPIMiner [1] and Drebin [4].
Figure 9: F-measure of MaMaDroid classification with datasets from the same year, using three different classifiers ((a) family mode; (b) package mode).
Dataset            | Family (P/R/F)     | Package (P/R/F)    | Family, PCA (P/R/F) | Package, PCA (P/R/F)
drebin, oldbenign  | 0.82 / 0.95 / 0.88 | 0.95 / 0.97 / 0.96 | 0.84 / 0.92 / 0.88  | 0.94 / 0.95 / 0.94
2013, oldbenign    | 0.91 / 0.93 / 0.92 | 0.98 / 0.95 / 0.97 | 0.93 / 0.90 / 0.92  | 0.97 / 0.95 / 0.96
2014, oldbenign    | 0.88 / 0.96 / 0.92 | 0.93 / 0.97 / 0.95 | 0.87 / 0.94 / 0.90  | 0.92 / 0.96 / 0.94
2014, newbenign    | 0.97 / 0.99 / 0.98 | 0.98 / 1.00 / 0.99 | 0.96 / 0.99 / 0.97  | 0.97 / 1.00 / 0.99
2015, newbenign    | 0.89 / 0.93 / 0.91 | 0.93 / 0.98 / 0.95 | 0.87 / 0.93 / 0.90  | 0.91 / 0.97 / 0.94
2016, newbenign    | 0.87 / 0.91 / 0.89 | 0.92 / 0.92 / 0.92 | 0.86 / 0.88 / 0.87  | 0.88 / 0.89 / 0.89

Table 2: Precision (P), Recall (R), and F-measure (F) obtained by MaMaDroid when trained and tested with datasets from the same year, in family and package mode, using Random Forests, with and without PCA.
Since these rely on the use, or the frequency, of certain API calls to classify malware vs benign samples, they need to be retrained following new API releases. On the contrary, retraining is not needed as often with MaMaDroid, since families and packages represent more abstract functionalities that change less over time. Consider, for instance, the android.os.health package: released with API level 24, it contains a set of classes helping developers track and monitor system resources.9 Classification systems built before this release – as is the case for DroidAPIMiner [1] (released in 2013, when the Android API was up to level 20) – need to be retrained if this package is more frequently used by malicious apps than by benign ones, whereas MaMaDroid only needs to add a new state to its Markov chain when operating in package mode, and no additional state when operating in family mode.
Older training, newer testing. To verify this hypothesis, we test MaMaDroid using older samples as training sets and newer ones as test sets. Figure 10(a) reports the average F-measure of the classification in this setting, with MaMaDroid operating in family mode. The x-axis reports the difference in years between training and test malware data. We obtain a 0.86 F-measure when we classify apps one year newer than the samples on which we train. Classification is still relatively accurate, at 0.75, even after two years. Then, from Figure 10(b), we observe that the average F-measure does not significantly change when operating in package mode. Both modes of operation are affected by one particular condition, already discussed in Section 3: in our models, benign datasets seem to "anticipate" malicious ones by 1–2 years in the way they use certain API calls. As a result, we notice a drop in accuracy when classifying future samples while using drebin (with samples from 2010 to 2012) or 2013 as the malicious training set and oldbenign (late 2013/early 2014) as the benign training set. More specifically, we observe that MaMaDroid correctly detects benign apps, while it starts missing true positives and incurring more false negatives, i.e., achieving lower recall.

Newer training, older testing. We also set out to verify whether older malware samples can still be detected by the system; if not, it would obviously become vulnerable to older (and possibly popular) attacks. Therefore, we also perform the "opposite" experiment, i.e., training MaMaDroid with newer benign (March 2016) and malware (early 2014 to mid 2016) datasets, and checking whether it is able to detect malware developed years before. Specifically, Figures 11(a) and 11(b) report results when training MaMaDroid with samples from a given year and testing it on samples that are up to 4 years older: MaMaDroid retains similar F-measure scores over the years. Specifically, in family mode, it varies from 0.93 to 0.96, whereas, in package mode, from 0.95 to 0.97 with the oldest samples.
4.4 Case Studies of False Positives and Negatives

The experimental analysis presented above shows that MaMaDroid detects Android malware with high accuracy. As with any detection system, however, it makes a small number of incorrect classifications, incurring some false positives and false negatives. Next, we discuss a few case studies, aiming to better understand these misclassifications. We focus on the experiments with the newer datasets, i.e., 2016 and newbenign.
9 https://developer.android.com/reference/android/os/health/package-summary.html
Figure 10: F-measure achieved by MaMaDroid using older samples for training and newer samples for testing ((a) family mode; (b) package mode). The x-axis shows the difference in years between the training and test data.

Figure 11: F-measure achieved by MaMaDroid using newer samples for training and older samples for testing ((a) family mode; (b) package mode). The x-axis shows the difference in years between the training and test data.
False Positives. We analyze the manifests of 164 apps mistakenly detected as malware by MaMaDroid, finding that most of them use "dangerous" permissions [3]. In particular, 67% write to external storage, 32% read the phone state, and 21% access the device's fine location. We further analyze the apps (5%) that use the READ_SMS and SEND_SMS permissions: even though they are not SMS-related apps, they can read and send SMSs as part of the services they provide to users. In particular, an "in case of emergency" app is able to send messages to several contacts from its database (possibly added by the user), which is a typical behavior of Android malware in our dataset, ultimately leading MaMaDroid to flag it as malicious.

False Negatives. We also check the 114 malware samples missed by MaMaDroid when operating in family mode, using VirusTotal.10 We find that 18% of the false negatives are actually not classified as malware by any of the antivirus engines used by VirusTotal, suggesting that these are legitimate apps mistakenly included in the VirusShare dataset. Moreover, 45% of MaMaDroid's false negatives are adware, typically repackaged apps in which the advertisement library has been substituted with a third-party one that creates monetary profit for the developers. Since they do not perform any clearly malicious activity, MaMaDroid is unable to identify them as malware. Finally, we find that 16% of the false negatives reported by MaMaDroid are samples sending text messages or starting calls to premium services. We also perform a similar analysis of the false negatives when abstracting to packages (74 samples), with similar results: there are a few more adware samples (53%), but similar percentages of potentially benign apps (15%) and of samples sending SMSs or placing calls (11%).

4.5 MaMaDroid vs DroidAPIMiner

We also compare the performance of MaMaDroid to previous work using API features for Android malware classification, specifically to DroidAPIMiner [1], because: (i) it uses API calls and their parameters to perform classification; (ii) it reports a high true positive rate (up to 97.8%) on almost 4K malware samples obtained from McAfee and Genome [66], plus 16K benign samples; and (iii) its source code has been made available to us by the authors.
10 https://www.virustotal.com
In DroidAPIMiner, permissions that are requested more frequently by malware samples than by benign apps are used to perform a baseline classification. Then, the system also applies frequency analysis on the list of API calls, after removing API calls from ad libraries, using the 169 most frequent API calls in the malware samples (occurring at least 6% more often in malware than in benign samples). Finally, data flow analysis is applied to the API calls that are frequent in both benign and malicious samples, but do not occur at least 6% more often in the malware set. Using the top 60 parameters, the 169 most frequent calls change, and the authors report a precision of 97.8%.

After obtaining DroidAPIMiner's source code, as well as a list of packages (i.e., ad libraries) used for feature refinement, we re-implement the system by modifying the code to reflect recent changes in Androguard (used by DroidAPIMiner for API call extraction), extract the API calls for all apps in the datasets listed in Table 1, and perform a frequency analysis on the calls. Recall that Androguard fails to extract calls for about 2% (1,017) of the apps; thus, DroidAPIMiner is evaluated over the samples in the second-to-last column of Table 1. We also implement classification, which is missing from the code provided by the authors, using k-NN (with k=3), since it achieves the best results according to the paper. We use 2/3 of the dataset for training and 1/3 for testing, as done by the authors.

In Table 3, we report the results of DroidAPIMiner compared to MaMaDroid on different combinations of datasets. Specifically, we report results for experiments similar to those carried out in Section 4.3, as we evaluate its performance on datasets from the same year and over time. First, we train it using older datasets composed of oldbenign combined with each of the three oldest malware datasets (drebin, 2013, and 2014), and test on all malware datasets. Testing on all datasets ensures that the model is evaluated on datasets from the same year and newer. With this configuration, the best result (with 2014 and oldbenign as training sets) is a 0.62 F-measure when tested on the same dataset. The F-measure drops to 0.33 and 0.39, respectively, when tested on samples one year into the future and into the past. If we use the same configurations with MaMaDroid, in package mode, we obtain up to 0.97 F-measure (using 2013 and oldbenign as training sets), dropping to 0.73 and 0.94, respectively, one year into the future and into the past. For the datasets where DroidAPIMiner achieves its best result (i.e., 2014 and oldbenign), MaMaDroid achieves an F-measure of 0.95, which drops to, respectively, 0.78 and 0.93 one year into the future and the past. The F-measure is stable even two years into the future and the past, at 0.75 and 0.92 respectively. As a second set of experiments, we train DroidAPIMiner using a dataset composed of newbenign (March 2016) combined with each of the three most recent malware datasets (2014, 2015, and 2016). Again, we test DroidAPIMiner on all malware datasets. The best result is obtained when the dataset (2014 and newbenign) is used for both training and testing, yielding an F-measure of 0.92, which drops to 0.67 and 0.75 one year into the future and past respectively. Likewise, we use the same datasets for MaMaDroid, with the best results achieved on the same dataset as DroidAPIMiner. In package mode, MaMaDroid achieves an F-measure of 0.99, which is maintained more than two years into the past, but drops to, respectively, 0.85 and 0.81 one and two years into the future.

As summarized in Table 3, MaMaDroid achieves significantly higher performance than DroidAPIMiner in all but one experiment, which occurs when the malicious training set is much older than the malicious test set.

5 Finer-grained Abstraction

In Section 4, we have shown that building models from abstracted API calls allows MaMaDroid to achieve high accuracy, as well as to retain it over the years, which is crucial due to the continuous evolution of the Android ecosystem. Our experiments have focused on operating MaMaDroid in family and package mode (i.e., abstracting calls to family or package). In this section, we investigate whether a finer-grained abstraction, namely, to classes, performs better in terms of detection accuracy. Recall that our system performs better in package mode than in family mode, owing to the finer-grained and more numerous features it uses in the former to distinguish between malware and benign samples; thus, we set out to verify whether one can trade off higher computational and memory complexity for better accuracy. To this end, as discussed in Section 2.3, we abstract each API call to its corresponding class name using a whitelist of all classes in the Android API, which consists of 4,855 classes (as of API level 24), and in the Google API, with 1,116 classes, plus self-defined and obfuscated.
5.1 Reducing the size of the problem
Since there are 5,973 classes, processing the Markov chain transitions that result in this mode increases the memory requirements. Therefore, to reduce the complexity, we cluster classes based on their similarity. To this end, we build a co-occurrence matrix that counts the number of times a class is used with other classes in the same sequence, across all datasets.
More specifically, we build a co-occurrence matrix C of size (5,973 · 5,973)/2, where C_{i,j} denotes the number of times the i-th and the j-th classes appear in the same sequence, over all apps in all datasets. From the co-occurrence matrix, we compute the cosine similarity (i.e., cos(x, y) = (x · y) / (||x|| · ||y||)), and use k-means
to cluster the classes based on their similarity into 400 clusters
and use each cluster as the label for all the classes it contains.
Since we do not cluster classes abstracted to self-defined and
obfuscated, we have a total of 402 labels.
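A compact sketch of this clustering step (our own illustration: the co-occurrence matrix is a random placeholder and its size is reduced; the paper's matrix covers 5,973 classes):

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import KMeans

    n_classes = 500                                   # 5,973 in the paper; reduced here
    C = np.random.default_rng(0).random((n_classes, n_classes))  # placeholder counts
    sim = cosine_similarity(C)                        # pairwise cosine similarity of rows
    labels = KMeans(n_clusters=400, n_init=10).fit_predict(sim)
    # Each class is then relabeled with the id of its cluster; self-defined and
    # obfuscated are kept apart, giving 402 labels in total.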
5.2 Class Mode Accuracy
In Table 4, we report the resulting F-measure in class mode using the above clustering approach. Once again, we also report
the corresponding results from package mode for comparison
Testing Sets (each cell: Droid / MaMa F-measure)

Training Sets       | drebin, oldbenign | 2013, oldbenign | 2014, oldbenign | 2015, oldbenign | 2016, oldbenign
drebin & oldbenign  | 0.32 / 0.96       | 0.35 / 0.95     | 0.34 / 0.72     | 0.30 / 0.39     | 0.33 / 0.42
2013 & oldbenign    | 0.33 / 0.94       | 0.36 / 0.97     | 0.35 / 0.73     | 0.31 / 0.37     | 0.33 / 0.28
2014 & oldbenign    | 0.36 / 0.92       | 0.39 / 0.93     | 0.62 / 0.95     | 0.33 / 0.78     | 0.37 / 0.75

Training Sets       | drebin, newbenign | 2013, newbenign | 2014, newbenign | 2015, newbenign | 2016, newbenign
2014 & newbenign    | 0.76 / 0.98       | 0.75 / 0.98     | 0.92 / 0.99     | 0.67 / 0.85     | 0.65 / 0.81
2015 & newbenign    | 0.68 / 0.97       | 0.68 / 0.97     | 0.69 / 0.99     | 0.77 / 0.95     | 0.65 / 0.88
2016 & newbenign    | 0.33 / 0.96       | 0.35 / 0.98     | 0.36 / 0.98     | 0.34 / 0.92     | 0.36 / 0.92

Table 3: Classification performance of DroidAPIMiner (Droid) [1] vs MaMaDroid (MaMa) in package mode using Random Forests.
Dataset            | Class (P/R/F)      | Package (P/R/F)
drebin, oldbenign  | 0.95 / 0.97 / 0.96 | 0.95 / 0.97 / 0.96
2013, oldbenign    | 0.98 / 0.95 / 0.97 | 0.98 / 0.95 / 0.97
2014, oldbenign    | 0.93 / 0.97 / 0.95 | 0.93 / 0.97 / 0.95
2014, newbenign    | 0.98 / 1.00 / 0.99 | 0.98 / 1.00 / 0.99
2015, newbenign    | 0.93 / 0.98 / 0.95 | 0.93 / 0.98 / 0.95
2016, newbenign    | 0.91 / 0.92 / 0.92 | 0.92 / 0.92 / 0.92

Table 4: MaMaDroid's Precision (P), Recall (R), and F-measure (F) when trained and tested with datasets from the same year, using Random Forests, with API calls abstracted to classes and packages.
(cf. Section 4.2). Overall, we find that class abstraction does not provide significantly higher accuracy: compared to package mode, abstracting to classes only yields an average increase in F-measure of 0.0012.

5.3 Detection over time
We also compare accuracy when MaMaDroid is trained and tested on datasets from different years (Figure 12). We find that, when MaMaDroid operates in class mode, it achieves an F-measure of 0.95 and 0.99, respectively, when trained with datasets one and two years newer than the test sets, as reported in Figure 12(a). Likewise, when trained on datasets one and two years older than the test sets, the F-measure reaches 0.84 and 0.59, respectively (see Figure 12(b)).

Overall, comparing the results in Figure 10 to those in Figure 12(b), we find that finer-grained abstraction actually performs worse over time when older samples are used for training and newer ones for testing. We note that this may be due to a number of reasons: (1) newer classes or packages in recent API releases cannot be captured in the behavioral model built from older samples, whereas families can; and (2) the evolution of malware, either as a result of changes in the API, the patching of vulnerabilities, or the presence of newer vulnerabilities that allow for stealthier malicious activities.
On the contrary, Figures 11 and 12(a) show that finer-grained abstraction performs better when the training samples are more recent than the test samples. This is because recent samples allow us to capture the full behavioral model of older samples. However, our results indicate that there is a threshold for the level of abstraction beyond which finer granularity yields no significant improvement in detection accuracy. This is because API calls in older releases are subsets of those in subsequent releases. For instance, when the training samples are two years newer than the test samples, MaMaDroid achieves an F-measure of 0.99, 0.97, and 0.95, respectively, in class, package, and family mode; when they are three years newer, the F-measure is, respectively, 0.97, 0.97, and 0.96 in class, package, and family mode.
6 Frequency Analysis Model (FAM)

MaMaDroid mainly relies on (1) API call abstraction and (2) behavioral modeling via the sequence of calls. As shown above, it outperforms state-of-the-art Android detection techniques, such as DroidAPIMiner [1], that are based on the frequency of non-abstracted API calls. In this section, we aim to assess whether MaMaDroid's effectiveness mainly stems from the API abstraction or from the sequence modeling. To this end, we implement and evaluate a variant that uses the frequency, rather than the sequence, of abstracted API calls. More precisely, we perform frequency analysis on the API calls extracted using Androguard, after removing ad libraries, as also done in DroidAPIMiner. In the rest of this section, we denote this variant as FAM (Frequency Analysis Model).

We again use the datasets in Table 1 to evaluate FAM's accuracy when training and testing on datasets from the same year and from different years, and we evaluate how it compares to the standard MaMaDroid. Although we have also implemented FAM in class mode, we do not discuss all of its results here due to space limitations.
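A sketch of how FAM's feature selection could be implemented (our own illustration; function and variable names are hypothetical):

    from collections import Counter

    def fam_features(apps, labels, states):
        """apps: abstracted-call lists, one per app; labels: 1 = malware, 0 = benign.
        Keep only the states used more often in malware than in benign apps, and
        represent each app by how frequently it uses each of the kept states."""
        mal, ben = Counter(), Counter()
        for calls, label in zip(apps, labels):
            (mal if label else ben).update(calls)
        kept = sorted(s for s in states if mal[s] > ben[s])
        return [[calls.count(s) for s in kept] for calls in apps], kept

    apps = [["java", "android", "java"], ["android"]]
    features, kept = fam_features(apps, [1, 0], ["java", "android"])
    # kept == ['java']; features == [[2], [0]]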
6.1 FAM Accuracy

We start our evaluation by measuring how well FAM detects
malware by training and testing using samples that are developed around the same time. Figure 13 reports the F-measure
achieved in family and package modes using three different
classifiers. Also, Table 5 reports the precision, recall, and F-measure achieved by FAM on each dataset combination, when
operating in family and package mode, using Random Forests.
We only report the results from the Random Forest classifier
because it outperforms both the 1-NN and 3-NN classifiers.
Family mode. Due to the small number of possible families (i.e., 11), FAM builds a model from all the families that occur more often in our malware datasets than in the benign ones. Note that in this modeling approach we also remove the junit family, as it is mainly used for testing.
Figure 12: F-measure achieved by MaMaDroid in class mode when using newer (older) samples for training and older (newer) samples for testing ((a) Newer/Older; (b) Older/Newer). The x-axis shows the difference in years between the training and test data.
Figure 13: F-measure for the FAM variant, over same-year datasets, with different classifiers ((a) family mode; (b) package mode; (c) class mode).
When the drebin and 2013 malware datasets are used in combination with the oldbenign dataset, there are no families that are more frequently used in these datasets than in the benign dataset. As a result, FAM does not yield any result with these datasets, as it builds a model only from API calls that are more frequently used in malware than in benign samples. With the other datasets, there are two (2016), four (2014), and five (2015) families that occur more frequently in the malware dataset than in the benign one.
From Figure 13(a), we observe that the F-measure is always at least 0.5 with Random Forests and, when testing on the 2014 (malware) dataset, it reaches 0.87. In general, lower F-measures are due to increased false positives. This follows a trend similar to that observed in Section 4.3.

Package mode. When FAM operates in package mode, it builds a model using the minimum of: all API calls that occur more frequently in malware, or the top 172 API calls used more frequently in malware than in benign apps. We use the top 172 API calls as we attempt to build the model, where possible, with packages from at least two families (the android family has 171 packages). In our dataset, there are at least two (2013) and at most 39 (2016) packages that are used more frequently in malware than in benign samples. Hence, all packages that occur more often in malware than in benign apps are always used to build the model. Classification performance improves in package mode, with F-measure ranging from 0.53 (with 2016 and newbenign) to 0.89 (with 2014 and newbenign), using Random Forests. Figure 13(b) shows that Random Forests generally provides better results also in this case. Similar to family mode, the drebin and 2013 datasets have, respectively, only five and two packages that occur more often than in the oldbenign dataset. Hence, the results on these datasets are poor due to the limited number of features.

Class mode. When FAM operates in class mode, we build a model using the minimum of: all API calls that occur more frequently in malware than in benign samples, or the top 336 API calls that occur more often in malware than in benign apps. We use the top 336 classes so as to build a model with classes from at least two different packages, as the highest number of classes in a single package (java.util) is 335. In our datasets, there are at least three classes that occur more often in malware (2013) than in benign samples, and at most 862 classes that occur more frequently in malware (2015) than in the benign dataset. In Figure 13(c), we report the F-measure achieved by FAM in class mode. It achieves its best result (0.89 F-measure) when trained and tested with the 2014 and newbenign datasets. Also, we show in Table 5 how the three modes compare when trained and tested on datasets from the same year.
Dataset            | Family (P/R/F)     | Package (P/R/F)    | Class (P/R/F)
drebin, oldbenign  | -                  | 0.51 / 0.57 / 0.54 | 0.50 / 0.47 / 0.49
2013, oldbenign    | -                  | 0.53 / 0.57 / 0.55 | 0.58 / 0.57 / 0.58
2014, oldbenign    | 0.71 / 0.76 / 0.73 | 0.73 / 0.73 / 0.73 | 0.73 / 0.76 / 0.75
2014, newbenign    | 0.85 / 0.90 / 0.87 | 0.88 / 0.89 / 0.89 | 0.88 / 0.89 / 0.89
2015, newbenign    | 0.64 / 0.70 / 0.67 | 0.68 / 0.66 / 0.67 | 0.68 / 0.67 / 0.68
2016, newbenign    | 0.51 / 0.49 / 0.50 | 0.53 / 0.52 / 0.53 | 0.54 / 0.54 / 0.54

Table 5: Precision (P), Recall (R), and F-measure (F) (with Random Forests) of FAM when trained and tested with datasets from the same year, in family, package, and class mode. A dash indicates that no model could be built, since no family occurs more often in malware than in benign samples.
Overall, we find that abstraction to classes only slightly improves over package mode.
Take-Away. Although we discuss the performance of the FAM variant vs the standard MaMaDroid in more detail in Section 6.3, we can already observe that the former does not yield as robust a model, mostly because in some cases no abstracted API calls occur more often in malware than in benign samples.
6.2 Detection Over Time

Once again, we evaluate detection accuracy over time, i.e., we train FAM using older samples and test it on newer samples, and vice versa. We report the F-measure as the average over multiple dataset combinations; e.g., when training with newer samples and testing on older samples, the F-measure after three years is the average of the F-measure obtained when training with (2015, newbenign) and (2016, newbenign), respectively, and testing on drebin and 2013.

Older training, newer testing. In Figure 14(a), we show the F-measure when FAM operates in family mode and is trained with datasets older than the test datasets. The x-axis reports the difference in years between training and test data. We obtain an F-measure of 0.97 when training with samples one year older than those in the test set. As mentioned in Section 6.1, there is no result when the drebin and 2013 datasets are used for training; hence, after three years the F-measure is 0. In package mode, the F-measure is 0.81 after one year, and 0.76 after two (Figure 14(b)). In class mode, the F-measure after one and two years is, respectively, 0.85 and 0.70 (Figure 14(c)).
While FAM appears to perform better in family mode than
in package and class modes, note that the detection accuracy
after one and two years in family mode does not include results
when the training set is (drebin, oldbenign) or (2013,
oldbenign) (cf Section 6.1). We believe this is as a result of
FAM performing best when trained on the 2014 dataset in all
modes and performing poorly in package mode when trained
with (drebin, oldbenign) and (2013, oldbenign) due
to limited features. For example, accuracy after two years
is the average of the F-measure when training with (2014,
oldbenign/newbenign) datasets and testing on the 2016
dataset. Whereas, in package mode, accuracy is the average Fmeasure obtained from training with (drebin, oldbenign),
(2013, oldbenign), and (2014, oldbenign/newbenign)
datasets and testing with respectively, 2014, 2015, and 2016.
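To make the averaging protocol above concrete, the sketch below computes the reported "F-measure after N years"; the numbers and dataset keys are placeholders, not results from our experiments, and drebin is treated as a 2012 dataset.

```python
from statistics import mean

# Placeholder F-measures per (training set, test set) pair.
results = {
    ("2015,newbenign", "drebin"): 0.92,
    ("2016,newbenign", "2013"):   0.69,
    ("2016,newbenign", "drebin"): 0.75,
}
train_year = {"2015,newbenign": 2015, "2016,newbenign": 2016}
test_year = {"drebin": 2012, "2013": 2013}

def f_measure_after(gap_years):
    """Average F-measure over all train/test pairs `gap_years` apart."""
    vals = [f for (tr, te), f in results.items()
            if train_year[tr] - test_year[te] == gap_years]
    return mean(vals) if vals else None

# e.g., the three-year value averages (2015 vs. drebin) and (2016 vs. 2013)
print(f_measure_after(3))
```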
6.3    Comparing Frequency Analysis vs. Markov Chain Model
We now compare the detection accuracy of FAM – a variant of MaMaDroid that is based on a frequency analysis model – to the standard MaMaDroid, which is based on a Markov chain model using the sequence of abstracted API calls.
Detection Accuracy of malware from the same year. In Table 6, we report the accuracy of FAM and MaMaDroid when they are trained and tested on samples from the same year, using Random Forests in all modes. For completeness, we also report results from DroidAPIMiner, showing that MaMaDroid outperforms FAM and DroidAPIMiner in all tests. Both FAM and DroidAPIMiner perform best when trained and tested with (2014, newbenign), with F-measures of 0.89 (package mode) and 0.92, respectively. Overall, MaMaDroid achieves a higher F-measure than FAM and DroidAPIMiner due to both API call abstraction and Markov chain modeling of the sequence of calls, which successfully captures the behavior of the app. In addition, MaMaDroid is more robust: with some datasets, frequency analysis fails to build a model with abstracted calls, when the abstracted calls occur equally or more frequently in benign samples.
Detection Accuracy of malware from different years. We also compare FAM with MaMaDroid when they are trained and tested with datasets across several years. In Table 7, we report the F-measures achieved by MaMaDroid and FAM in package mode using Random Forests, and show how they compare with DroidAPIMiner using two different sets of experiments.
Figure 14: F-measure achieved by FAM using older samples for training and newer samples for testing (Random Forests, 1-NN, and 3-NN; (a) family, (b) package, (c) class mode). The x-axis shows the difference in years between the training and test data.

Figure 15: F-measure achieved by FAM using newer samples for training and older samples for testing (Random Forests, 1-NN, and 3-NN; (a) family, (b) package, (c) class mode). The x-axis shows the difference in years between the training and test data.
We report the results for package mode only because, with both systems, when the training sets are older than the test sets, package mode performs better than class mode (although worse than family mode; however, there are no results for FAM with two of our training datasets in family mode), and only slightly worse when the training sets are newer than the test sets.
In the first set of experiments, we train MaMaDroid, FAM, and DroidAPIMiner using samples comprising the oldbenign dataset and one of the three oldest malware datasets (drebin, 2013, 2014) each, and test on all malware datasets. MaMaDroid and FAM both outperform DroidAPIMiner in all experiments in this set, showing that abstracting the API calls improves the detection accuracy of our systems. FAM outperforms MaMaDroid in nine out of the 15 experiments, largely when the training set comprises the drebin/2013 and oldbenign datasets. Recall that when the drebin and 2013 malware datasets are used for training FAM in package mode, only five and two packages, respectively, are used to build the model. It is possible that these packages are the principal components (as in PCA) that distinguish malware from benign samples.

In the second set of experiments, we train MaMaDroid, FAM, and DroidAPIMiner using samples comprising the newbenign dataset and one of the three most recent malware datasets (2014, 2015, 2016) each, and test on all malware datasets. In this setting, MaMaDroid outperforms both FAM and DroidAPIMiner in all but one experiment, where FAM is only slightly better. Comparing DroidAPIMiner and FAM shows that DroidAPIMiner performs better than FAM in only two out of 15 experiments. In these two experiments, FAM was trained and tested on samples from the same year and resulted in a slightly lower precision, thus increasing false positives.

Overall, we find that the Markov chain based model achieves higher detection accuracy in both family and package modes when MaMaDroid is trained and tested on datasets from the same year (Table 6) and across several years (Table 7).
7    Runtime Performance

We now analyze the runtime performance of MaMaDroid and the FAM variant, when operating in family, package, or class mode, as well as of DroidAPIMiner. We run our experiments on a desktop with a 40-core 2.30GHz CPU and 128GB of RAM, but only use one core and allocate 16GB of RAM for evaluation.
F-measure
                     Family            Package           Class
Dataset              FAM   MaMaDroid   FAM   MaMaDroid   FAM   MaMaDroid   DroidAPIMiner
drebin, oldbenign    -     0.88        0.54  0.96        0.49  0.96        0.32
2013, oldbenign      -     0.92        0.55  0.97        0.58  0.97        0.36
2014, oldbenign      0.73  0.92        0.73  0.95        0.75  0.95        0.62
2014, newbenign      0.87  0.98        0.89  0.99        0.89  0.99        0.92
2015, newbenign      0.67  0.91        0.67  0.95        0.68  0.95        0.77
2016, newbenign      0.50  0.89        0.53  0.92        0.54  0.92        0.36

Table 6: F-measure of FAM and MaMaDroid in all modes using Random Forests, as well as of DroidAPIMiner [1], when trained and tested on datasets from the same year.
                                                Testing Sets
                   drebin, oldbenign    2013, oldbenign      2014, oldbenign      2015, oldbenign      2016, oldbenign
Training Sets      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa
drebin, oldbenign  0.32  0.54 0.96      0.35  0.50 0.96      0.34  0.50 0.79      0.30  0.50 0.42      0.33  0.51 0.43
2013, oldbenign    0.33  0.90 0.93      0.36  0.55 0.97      0.35  0.95 0.74      0.31  0.87 0.36      0.33  0.82 0.29
2014, oldbenign    0.36  0.95 0.92      0.39  0.99 0.93      0.62  0.73 0.95      0.33  0.81 0.79      0.37  0.82 0.78

                   drebin, newbenign    2013, newbenign      2014, newbenign      2015, newbenign      2016, newbenign
Training Sets      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa      Droid FAM  MaMa
2014, newbenign    0.76  0.99 0.99      0.75  0.99 0.99      0.92  0.89 0.99      0.67  0.86 0.89      0.65  0.82 0.83
2015, newbenign    0.68  0.92 0.98      0.68  0.84 0.98      0.69  0.95 0.99      0.77  0.67 0.95      0.65  0.91 0.90
2016, newbenign    0.33  0.83 0.97      0.35  0.69 0.97      0.36  0.91 0.99      0.34  0.86 0.93      0.36  0.53 0.92

Table 7: F-measure of MaMaDroid (MaMa) vs. our variant using frequency analysis (FAM) vs. DroidAPIMiner (Droid) [1].
7.1    MaMaDroid

We envision MaMaDroid to be integrated in offline detection systems, e.g., run by the app store. Recall that MaMaDroid consists of different phases, so in the following, we review the computational overhead incurred by each of them, aiming to assess the feasibility of real-world deployment.

MaMaDroid's first step involves extracting the call graph from an apk, and the complexity of this task varies significantly across apps. On average, it takes 9.2s±14 (min 0.02s, max 13m) to complete for samples in our malware sets. Benign apps usually yield larger call graphs, and the average time to extract them is 25.4s±63 (min 0.06s, max 18m) per app.

Next, we measure the time needed to extract call sequences while abstracting to families, packages, or classes, depending on MaMaDroid's mode of operation. In family mode, this phase completes in about 1.3s on average (and at most 11.0s) with both benign and malicious samples. Abstracting to packages takes slightly longer, due to the use of 341 packages in MaMaDroid. On average, this extraction takes 1.67s±3.1 for malicious apps and 1.73s±3.2 for benign samples. Recall that in class mode, after abstracting to classes, we cluster the classes to a smaller set of labels due to its size. Therefore, in this mode it takes on average 5.84s±2.1 and 7.3s±4.2, respectively, to first abstract the calls from malware and benign apps to classes, and 2.74s per app to build the co-occurrence matrix from which we compute the similarity between classes. Finally, clustering and abstracting each call to its corresponding class label takes 2.38s and 3.4s, respectively, for malware and benign apps. In total, it takes 10.96s to abstract calls from malware apps to their corresponding class labels and 13.44s for benign apps.

MaMaDroid's third step includes Markov chain modeling and feature vector extraction. With malicious samples, it takes on average 0.2s±0.3, 2.5s±3.2, and 1.49s±2.39 (and at most 2.4s, 22.1s, and 46.10s), respectively, with families, packages, and classes, whereas with benign samples, it takes 0.6s±0.3, 6.7s±3.8, and 2.23s±2.74 (at most 1.7s, 18.4s, and 43.98s).

Finally, the last step is classification, and performance depends on both the machine learning algorithm employed and the mode of operation. More specifically, running times are affected by the number of features for the app to be classified, and not by the initial dimension of the call graph, or by whether the app is benign or malicious. Regardless, in family mode, Random Forests, 1-NN, and 3-NN all take less than 0.01s per app. With packages, classification takes, respectively, 0.65s, 1.05s, and 0.007s per app with 1-NN, 3-NN, and Random Forests, whereas it takes, respectively, 1.02s, 1.65s, and 0.05s per app with 1-NN, 3-NN, and Random Forests in class mode.

Overall, when operating in family mode, malware and benign samples take on average 10.7s and 27.3s, respectively, to complete the entire process, from call graph extraction to classification. In package mode, the average completion times for malware and benign samples are 13.37s and 33.83s, respectively, whereas in class mode, the average completion times are 21.7s and 41.12s, respectively, for malware and benign apps. In all modes of operation, time is mostly (>80%) spent on call graph extraction.
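As an illustration of the Markov chain modeling and feature extraction step, the following Python sketch builds an app's transition matrix from its sequence of abstracted calls and flattens it into a feature vector. The state list shown is a toy excerpt (the real package mode uses 341 packages plus the self-defined and obfuscated labels), and the function names are ours, not MaMaDroid's code.

```python
import numpy as np

def markov_features(call_sequence, states):
    """Row-normalized transition matrix over `states`, flattened into a
    feature vector with one entry per possible transition."""
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for src, dst in zip(call_sequence, call_sequence[1:]):
        counts[index[src], index[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                      where=row_sums > 0)
    return probs.ravel()

# usage: 341 package states would yield a 341*341-dimensional vector
states = ["java.lang", "android.os", "self-defined", "obfuscated"]
vec = markov_features(["java.lang", "android.os", "java.lang"], states)
```

Note that the state list must cover every abstracted label that can occur in a sequence, which is why catch-all labels such as "self-defined" and "obfuscated" are part of it.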
7.2    FAM
Recall that FAM is a variant of MaMaDroid comprising three phases. The first one, API call extraction, takes 0.7s±1.5 (min 0.01s, max 28.4s) per app in our malware datasets and 13.2s±22.2 (min 0.01s, max 222s) per benign app. The second phase includes API call abstraction, frequency analysis, and feature extraction. While API call abstraction depends on the dataset and the mode of operation, frequency analysis and feature extraction depend only on the mode of operation and are very fast in all modes. In particular, it takes on average 1.32s, 1.69s±3.2, and 5.86s±2.1, respectively, to process a malware app in family, package, and class modes, whereas it takes on average 1.32s±3.1, 1.75s±3.2, and 7.32s±2.1, respectively, for a benign app in family, package, and class modes. The last phase, classification, is very fast regardless of dataset, mode of operation, and classifier used. Specifically, it takes less than 0.01s to classify each app in all modes using the three different classifiers. Overall, it takes in total 2.02s, 2.39s, and 6.56s, respectively, to classify a malware app in family, package, and class modes, while with benign apps, the totals are 14.52s, 14.95s, and 20.52s, respectively, in family, package, and class modes.
7.3    DroidAPIMiner

Finally, we evaluate the runtime performance of DroidAPIMiner [1]. Its first step, i.e., extracting API calls, takes 0.7s±1.5 (min 0.01s, max 28.4s) per app in our malware datasets, whereas it takes on average 13.2s±22.2 (min 0.01s, max 222s) per benign app. The second phase, i.e., frequency and data flow analysis, takes on average 4.2s per app. Finally, classification using 3-NN is very fast: 0.002s on average. Therefore, in total, DroidAPIMiner takes, respectively, 17.4s and 4.9s for a complete execution on one app from our benign and malware datasets, which, while faster than MaMaDroid, comes with significantly lower accuracy. In comparison to MaMaDroid, DroidAPIMiner takes 5.8s and 9.9s less on average to analyze and classify a malicious and a benign app when MaMaDroid operates in family mode, and 8.47s and 16.43s less on average in package mode.

7.4    Take Away

In conclusion, our experiments show that our prototype implementation of MaMaDroid is scalable enough to be deployed. Assuming that, every day, a number of apps on the order of 10,000 is submitted to Google Play, and using the average execution time of benign samples in family (27.3s), package (33.83s), and class (41.12s) modes, we estimate that it would take less than an hour and a half to complete execution of all apps submitted daily in family and package modes, with just 64 cores. Note that we could not find accurate statistics reporting the number of apps submitted every day, but only the total number of apps on Google Play (see http://www.appbrain.com/stats/number-of-android-apps). On average, this number increases by a couple of thousands per day, and although we do not know how many apps are removed, we believe 10,000 apps submitted every day is likely an upper bound.
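The estimate above is simple arithmetic; here is a quick sanity check of it, assuming the 10,000 daily submissions and 64 cores stated above, with the average per-app times for benign samples reported earlier:

```python
apps_per_day = 10_000  # assumed submission volume (upper bound)
cores = 64
avg_seconds = {"family": 27.3, "package": 33.83, "class": 41.12}

for mode, t in avg_seconds.items():
    hours = apps_per_day * t / cores / 3600
    print(f"{mode}: {hours:.2f} hours per day")  # roughly 1.2 to 1.8 hours
```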
8    Discussion

We now discuss the implications of our results with respect to the feasibility of modeling app behavior using static analysis and Markov chains, discuss possible evasion techniques, and highlight some limitations of our approach.

8.1    Lessons Learned
Our work yields important insights about the use of API calls in malicious apps, showing that, by abstracting the API calls to higher levels and modeling these abstracted calls, we can obtain high detection accuracy and retain it over several years, which is crucial due to the continuous evolution of the Android ecosystem.
As discussed in Section 3, the use of API calls changes over
time, and in different ways across malicious and benign samples. From our newer datasets, which include samples up to
Spring 2016 (API level 23), we observe that newer APIs introduce more packages, classes, and methods, while also deprecating some. Figure 7(a) shows that benign apps use more
calls than malicious ones developed around the same time. We
also notice an interesting trend in the use of Android (Figure 7(b)) and Google (Figure 7(c)) APIs: malicious apps follow the same trend as benign apps in the way they adopt certain APIs, but with a delay of some years. This might be a side
effect of Android malware authors repackaging benign apps,
adding their malicious functionalities onto them.
Given the frequent changes in the Android framework and the continuous evolution of malware, systems like DroidAPIMiner [1] – being dependent on the presence or the use of certain API calls – become increasingly less effective with time. As shown in Table 3, malware that uses API calls released after those used by the samples in the training set cannot be identified by these systems. On the contrary, as shown in Figure 10, MaMaDroid detects malware samples that are 1 year newer than the training set, obtaining an F-measure of 0.86 (as opposed to 0.46 with DroidAPIMiner) when the apps are modeled as Markov chains. After 2 years, the value is still 0.75 (0.42 with DroidAPIMiner), dropping to 0.51 after 4 years.
We argue that the effectiveness of MaMaDroid's classification remains relatively high "over the years" owing to the Markov models capturing the apps' behavior. These models tend to be more robust to malware evolution because abstracting to, e.g., packages makes the system less susceptible to the introduction of new API calls. To verify this, we developed a variant of MaMaDroid named FAM that abstracts API calls but is based on frequency analysis similar to DroidAPIMiner. Although the addition of API call abstraction results in an improvement of the detection accuracy of the system (an F-measure of 0.81 and 0.76 after one and two years, respectively), it also results in scenarios where there are no API calls that are more frequently used in malware than in benign apps.
In general, abstraction allows MaMaDroid to capture newer classes/methods added to the API, since these are abstracted to already-known families or packages. In case newer packages are added to the API and these packages start being used by malware, MaMaDroid only requires adding a new state to the Markov chains, and the probabilities of a transition from a state to this new state in old apps would be 0. In
reality though, methods and classes are more frequently added than packages with new API releases. Hence, we also evaluate whether MaMaDroid still performs as well as in package mode when we abstract API calls to classes, and we measure the overall overhead increase. Results from Figures 12, 14, and 15 indicate that finer-grained abstraction is less effective as time passes when older samples are used for training and newer samples for testing, while it is more effective when samples from the same year as, or newer than, the test sets are used for training. However, while all three modes of abstraction perform relatively well, we believe abstraction to packages is the most effective, as it generally performs better than family mode – though it is less lightweight – and as well as class mode while being more efficient.
8.2    Evasion

Next, we discuss possible evasion techniques and how they can be addressed. One straightforward evasion approach could be to repackage a benign app with small snippets of malicious code added to a few classes. However, it is difficult to embed malicious code in such a way that, at the same time, the resulting Markov chain looks similar to a benign one. For instance, our running example from Section 2 (malware posing as a memory booster app and executing unwanted commands as root) is correctly classified by MaMaDroid; although most functionalities in this malware are the same as in the original app, the injected API calls generate some transitions in the Markov chain that are not typical of benign samples.

The opposite procedure, i.e., embedding portions of benign code into a malicious app, is also likely ineffective against MaMaDroid, since, for each app, we derive the feature vector from the transition probabilities between calls over the entire app. A malware developer would have to embed benign code inside the malware in such a way that the overall sequence of calls yields transition probabilities similar to those in a benign app, but this is difficult to achieve because if the sequences of calls have to be different (otherwise there would be no attack), then the models will also be different.

An attacker could also try to create an app with a Markov chain similar to that of a benign app. Because the chain is derived from the sequence of abstracted API calls in the app, it is actually very difficult to create sequences resulting in Markov chains similar to benign apps while, at the same time, actually engaging in malicious behavior.

Moreover, attackers could try using reflection, dynamic code loading, or native code [42]. Because MaMaDroid uses static analysis, it fails to detect malicious code when it is loaded or determined at runtime. However, MaMaDroid can detect reflection when a method from the reflection package (java.lang.reflect) is executed. Therefore, we obtain the correct sequence of calls up to the invocation of the reflection call, which may be sufficient to distinguish between malware and benign apps. Similarly, MaMaDroid can detect the usage of class loaders and package contexts that can be used to load arbitrary code, but it is not able to model the code loaded; likewise, native code that is part of the app cannot be
modeled, as it is not Java and is not processed by Soot. These limitations are not specific to MaMaDroid, but are a problem of static analysis in general, and they can be mitigated by using MaMaDroid alongside dynamic analysis techniques.

Another evasion approach could be using dynamic dispatch: a class X in package A is created to extend class Y in package B, so that static analysis reports a call to root() defined in Y as X.root(), whereas at runtime Y.root() is executed. This can be addressed, however, with a small increase in MaMaDroid's computational cost, by keeping track of self-defined classes that extend or implement classes in the recognized APIs, abstracting polymorphic functions of such a self-defined class to the corresponding recognized package, while at the same time abstracting overridden functions in the class as self-defined.
Finally, identifier mangling and other forms of obfuscation could be used to obfuscate code and hide malicious actions. However, since classes in the Android framework cannot be obfuscated by obfuscation tools, malware developers can only do so for self-defined classes. MaMaDroid labels obfuscated calls as obfuscated, so, ultimately, these would be captured in the behavioral model (and the Markov chain) of the app. In our sample, we observe that benign apps use significantly less obfuscation than malicious apps, indicating that obfuscating a significant number of classes is not a good evasion strategy, since this would likely make the sample more easily identifiable as malicious. Malware developers might also attempt to evade MaMaDroid by naming their self-defined packages so that they look similar to those of the android or google APIs, e.g., java.lang.reflect.malware; this is easily prevented by first abstracting to classes before abstracting to any further mode, as we already do in Section 5.
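A minimal sketch of the kind of abstraction guard described above; the package list, helper names, and the obfuscation heuristic are ours and purely illustrative of the idea, not MaMaDroid's actual code.

```python
ANDROID_PACKAGES = {"java.lang", "java.lang.reflect", "android.os"}  # excerpt

def is_obfuscated(class_name):
    # toy heuristic: short identifiers as produced by common obfuscators
    return all(len(part) <= 2 for part in class_name.split("."))

def abstract_call(class_name, known_api_classes):
    """Abstract a fully qualified class name to a package-mode label."""
    if is_obfuscated(class_name):
        return "obfuscated"
    # membership is checked on whole classes, so the self-defined class
    # java.lang.reflect.malware is not mistaken for java.lang.reflect
    if class_name not in known_api_classes:
        return "self-defined"
    return class_name.rsplit(".", 1)[0]

print(abstract_call("java.lang.reflect.malware",
                    known_api_classes={"java.lang.reflect.Method"}))
# -> "self-defined"
```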
8.3    Limitations
MaMaDroid requires a sizable amount of memory to perform classification when operating in package mode, as it works with more than 100,000 features per sample. The number of features, however, can be further reduced using dimensionality reduction algorithms such as PCA. As explained in Section 6, when we use 10 components from the PCA, the system performs almost as well as the one using all the features; moreover, using PCA comes with a much lower memory footprint for running the machine learning algorithms, because the number of dimensions of the feature space in which the classifier operates is remarkably reduced.
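A sketch of such a pipeline with scikit-learn, using random data as a stand-in for the package-mode feature matrices (real vectors have on the order of 100,000 transition features per sample):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 1000)), rng.integers(0, 2, 100)
X_test = rng.random((20, 1000))

# 10 principal components in place of the full feature space
model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(n_estimators=100))
model.fit(X_train, y_train)
pred = model.predict(X_test)
```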
Soot [51], which we use to extract call graphs, fails to analyze some apks. In fact, we were not able to extract call graphs for a fraction (4.6%) of the apps in the original datasets, due to scripts either failing to apply the jb phase, which is used to transform Java bytecode into Soot's primary intermediate representation (i.e., jimple), or being unable to open the apk. Even though this does not really affect the results of our evaluation, one could avoid it by using a different or custom intermediate representation for the analysis, or by using different tools to extract the call graphs.
In general, static analysis methodologies for malware detection on Android could fail to capture the runtime environment
context, code that is executed more frequently, or other effects
stemming from user input [4]. These limitations can be addressed using dynamic analysis, or by recording function calls
on a device. Dynamic analysis observes the live performance
of the samples, recording what activity is actually performed
at runtime. Through dynamic analysis, it is also possible to
provide inputs to the app and then analyze the reaction of the
app to these inputs, going beyond static analysis limits. To this
end, we plan to integrate dynamic analysis to build the models
used by MaMaDroid as part of future work.
9    Related Work

Over the past few years, Android security has attracted a wealth of work by the research community. In this section, we review (i) program analysis techniques focusing on general security properties of Android apps, and then (ii) systems that specifically target malware on Android.

9.1    Program Analysis

Previous work on program analysis applied to Android security has used both static and dynamic analysis. With the former, the program's code is decompiled in order to extract features without actually running the program, usually employing tools such as Dare [41] to obtain Java bytecode. The latter involves real-time execution of the program, typically in an emulated or protected environment.

Static analysis techniques include work by Felt et al. [19], who analyze API calls to identify over-privileged apps, while Kirin [18] is a system that examines permissions requested by apps to perform a lightweight certification, using a set of security rules that indicate whether or not the security configuration bundled with the app is safe. RiskRanker [26] aims to identify zero-day Android malware by assessing potential security risks caused by untrusted apps. It sifts through a large number of apps from Android markets and examines them to detect certain behaviors, such as encryption and dynamic code loading, which form malicious patterns and can be used to detect stealthy malware. Other methods, such as CHEX [34], use data flow analysis to automatically vet Android apps for vulnerabilities. Static analysis has also been applied to the detection of data leaks and malicious data flows from Android apps [5, 32, 62, 31].

DroidScope [59] and TaintDroid [17] monitor run-time app behavior in a protected environment to perform dynamic taint analysis. DroidScope performs dynamic taint analysis at the machine code level, while TaintDroid monitors how third-party apps access or manipulate users' personal data, aiming to detect sensitive data leaving the system. However, as it is unrealistic to deploy dynamic analysis techniques directly on users' devices, due to the overhead they introduce, these are typically used offline [45, 67, 49]. ParanoidAndroid [44] employs a virtual clone of the smartphone, running in parallel in the cloud and replaying activities of the device; however, even if minimal execution traces are actually sent to the cloud, this still takes a non-negligible toll on battery life. Recently, hybrid systems like IntelliDroid [55] have also been proposed that use input generators, producing inputs specific to dynamic analysis tools. Other work combining static and dynamic analysis includes [23, 28, 58, 7].

9.2    Android Malware Detection
A number of techniques have used signatures for Android malware detection. NetworkProfiler [16] generates network profiles for Android apps and extracts fingerprints based on such traces, ASTROID [20] uses common subgraphs among samples as signatures for identifying unknown malware samples, while work in [10] obtains resource-based metrics (CPU, memory, storage, network) to distinguish malware activity from benign activity. [13] extract statistical features, such as permissions and API calls, and extend their vectors to add dynamic behavior-based features. While their experiments show that their solution outperforms, in terms of accuracy, other antivirus systems, [13] indicate that the quality of their detection model critically depends on the availability of representative benign and malicious apps for training. MADAM [46] also extracts features at four layers, which are used to build a behavioral model for apps, and uses two parallel classifiers to detect malware. Similarly, ScanMe Mobile [64] uses the Google Cloud Messaging Service (GCM) to perform static and dynamic analysis on apks found on the device's SD card, while TriFlow [37] ranks apps based on potential risks by using the observed and possible information flows in the apps.
The sequences of system calls have also been used to detect malware in both desktop and Android environments. Hofmeyr et al. [27] demonstrate that short sequences of system calls can be used as a signature to discriminate between normal and abnormal behavior of common UNIX programs. Signature-based methods, however, can be evaded using polymorphism and obfuscation, as well as by call re-ordering attacks [33], even though quantitative measures, such as similarity analysis, can be used to address some of these attacks [48]. MaMaDroid inherits the spirit of these approaches, proposing a statistical method to model app behavior that is more robust against evasion attempts.
In the Android context, [9] use the sequences of three system calls (extracted from the execution traces of apps under analysis) to detect malware. This approach models specific malware families, aiming to identify additional samples belonging to such families. In contrast, MaMaDroid's goal is to detect previously-unseen malware, and we also show that our system can detect new malware families that appear even years after the system has been trained. In addition, using strict sequences of system or API calls can be easily evaded by malware authors, who could add unnecessary calls to effectively evade detection. Conversely, MaMaDroid builds a behavioral model of an Android app, which makes it robust to this type of evasion.

Dynamic analysis has also been applied to detect Android malware by using predefined scripts of common inputs that will be performed when the device is running. However, this
might be inadequate due to the low probability of triggering
malicious behavior, and can be side-stepped by knowledgeable adversaries, as suggested by Wong and Lie [55]. Other
approaches include random fuzzing [35, 63] and concolic testing [2, 24]. Dynamic analysis can only detect malicious activities if the code exhibiting malicious behavior is actually running during the analysis. Moreover, according to [53], mobile
malware authors often employ emulation or virtualization detection strategies to change malware behavior and eventually
evade detection.
Machine learning techniques have also been applied to assist Android malware detection. Droidmat [57] uses API call tracing and manifest files to learn features for malware detection, Teufl et al. [50] apply knowledge discovery processes and lean statistical methods on app metadata extracted from the app market, while [22] rely on embedded call graphs. DroidMiner [60] studies the program logic of sensitive Android/Java framework API functions and resources, and detects malicious behavior patterns. MAST [12] statically analyzes apps using features such as permissions, presence of native code, and intent filters, and measures the correlation between multiple qualitative data. Crowdroid [8] relies on crowdsourcing to distinguish between malicious and benign apps by monitoring system calls. AppContext [61] models security-sensitive behavior, such as activation events or environmental attributes, and uses SVM to classify these behaviors, while RevealDroid [21] employs supervised learning and obfuscation-resilient methods targeting API usage and intent actions to identify malware families.
Drebin [4] deduces detection patterns and identifies malicious software directly on the device, performing a broad static analysis. This is achieved by gathering numerous features from the manifest file as well as the app's source code (API calls, network addresses, permissions). Malevolent behavior is reflected in patterns and combinations of extracted features from the static analysis: for instance, the existence of both the SEND_SMS permission and the android.hardware.telephony component in an app might indicate an attempt to send premium SMS messages, and this combination can eventually constitute a detection pattern.
In Section 4.5, we have compared our system against DroidAPIMiner [1]. This system relies on the top-169 API calls that are used more frequently in the malware than in the benign set, along with data flow analysis on calls that are frequent in both benign and malicious apps but occur up to 6% more often in the latter. As shown in our evaluation, using the most common calls observed during training requires constant retraining, due to the evolution of both malware and the Android API. On the contrary, MaMaDroid can effectively model both benign and malicious Android apps and perform an efficient classification of them. Compared to DroidAPIMiner, our approach is more resilient to changes in the Android framework, resulting in a less frequent need to re-train the classifier.
Overall, compared to state-of-the-art systems like Drebin [4] and DroidAPIMiner [1], MaMaDroid is more generic and robust, as its statistical modeling does not depend on specific app characteristics, but can actually be run on any app created for any Android API level.
Finally, also related to MaMaDroid are Markov-chain based models for Android malware detection. [14] dynamically analyze system- and developer-defined actions from intent messages (used by app components to communicate with each other at runtime), and probabilistically estimate whether an app is performing benign or malicious actions at runtime, but obtain low accuracy overall. Canfora et al. [11] use a Hidden Markov Model (HMM) to identify malware samples belonging to previously observed malware families, whereas MaMaDroid can detect previously unseen malware without relying on specific malware families.
10    Conclusion
This paper presented MaMaDroid, an Android malware detection system based on modeling the sequences of API calls as Markov chains. Our system is designed to operate in one of three modes, with different granularities, by abstracting API calls to either families, packages, or classes. We ran an extensive experimental evaluation using, to the best of our knowledge, the largest malware dataset used in an Android malware detection research paper, aiming to assess both the accuracy of the classification (using F-measure, precision, and recall) and runtime performance. We showed that MaMaDroid effectively detects unknown malware samples developed around the same time as the samples on which it is trained (F-measure up to 0.99). It also maintains good detection performance over time: an F-measure of 0.86 one year after the model has been trained, and 0.75 after two years.
We compared MaMaDroid to DroidAPIMiner [1], a state-of-the-art system based on API calls frequently used by malware, showing that not only does MaMaDroid outperform DroidAPIMiner when trained and tested on datasets from the same year, but it is also much more resilient over the years to changes in the Android API. We also developed a variant of MaMaDroid, called FAM, that performs API call abstraction but is based on frequency analysis, in order to evaluate whether MaMaDroid's high detection accuracy is based solely on the abstraction. We found that FAM improves on DroidAPIMiner, but, while abstraction is important for a high detection rate and resilience to API changes, abstraction combined with a modeling approach based on frequency analysis is not as robust as MaMaDroid, especially in scenarios where API calls are not more frequent in malware than in benign apps.
Overall, our results demonstrate that the statistical behavioral models introduced by MaMaDroid – in particular, abstraction and Markov chain modeling of API call sequences – are more robust than traditional techniques, highlighting how our work can form the basis of more advanced detection systems in the future. As part of future work, we plan to further investigate the resilience to possible evasion techniques, focusing on repackaged malicious apps as well as on the injection of API calls to maliciously alter Markov models. We also plan to explore the possibility of seeding the behavioral modeling performed by MaMaDroid with dynamic instead of static analysis.
Acknowledgments. We wish to thank Yousra Aafer for sharing the DroidAPIMiner source code and Yanick Fratantonio for his comments on an early draft of the paper. This research was supported by the EPSRC under grant EP/N008448/1, by an EPSRC-funded "Future Leaders in Engineering and Physical Sciences" award, a Xerox University Affairs Committee grant, and by a small grant from GCHQ. Enrico Mariconti was supported by the EPSRC under grant 1490017, while Lucky Onwuzurike was funded by the Petroleum Technology Development Fund (PTDF).
References

[1] Y. Aafer, W. Du, and H. Yin. DroidAPIMiner: Mining API-Level Features for Robust Malware Detection in Android. In International Conference on Security and Privacy in Communication Networks (SecureComm), 2013.
[2] S. Anand, M. Naik, M. J. Harrold, and H. Yang. Automated Concolic Testing of Smartphone Apps. In ACM Symposium on the Foundations of Software Engineering (FSE), 2012.
[3] P. Andriotis, M. A. Sasse, and G. Stringhini. Permissions snapshots: Assessing users' adaptation to the Android runtime permission model. In IEEE Workshop on Information Forensics and Security (WIFS), 2016.
[4] D. Arp, M. Spreitzenbarth, M. Hubner, H. Gascon, and K. Rieck. DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket. In Annual Symposium on Network and Distributed System Security (NDSS), 2014.
[5] S. Arzt, S. Rasthofer, C. Fritz, E. Bodden, A. Bartel, J. Klein, Y. Le Traon, D. Octeau, and P. McDaniel. FlowDroid: Precise Context, Flow, Field, Object-sensitive and Lifecycle-aware Taint Analysis for Android Apps. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2014.
[6] S. Bernard, S. Adam, and L. Heutte. Using random forests for handwritten digit recognition. In Ninth International Conference on Document Analysis and Recognition, 2007.
[7] R. Bhoraskar, S. Han, J. Jeon, T. Azim, S. Chen, J. Jung, S. Nath, R. Wang, and D. Wetherall. Brahmastra: Driving Apps to Test the Security of Third-Party Components. In USENIX Security Symposium, 2014.
[8] I. Burguera, U. Zurutuza, and S. Nadjm-Tehrani. Crowdroid: Behavior-based Malware Detection System for Android. In ACM Workshop on Security and Privacy in Smartphones and Mobile Devices (SPSM), 2011.
[9] G. Canfora, E. Medvet, F. Mercaldo, and C. A. Visaggio. Detecting Android Malware Using Sequences of System Calls. In Workshop on Software Development Lifecycle for Mobile, 2015.
[10] G. Canfora, E. Medvet, F. Mercaldo, and C. A. Visaggio. Acquiring and Analyzing App Metrics for Effective Mobile Malware Detection. In International Workshop on Security and Privacy Analytics (IWSPA), 2016.
[11] G. Canfora, F. Mercaldo, and C. A. Visaggio. An HMM and Structural Entropy based Detector for Android Malware: An Empirical Study. Computers & Security, 61, 2016.
[12] S. Chakradeo, B. Reaves, P. Traynor, and W. Enck. MAST: Triage for Market-scale Mobile Malware Analysis. In ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2013.
[13] S. Chen, M. Xue, Z. Tang, L. Xu, and H. Zhu. StormDroid: A Streaminglized Machine Learning-Based System for Detecting Android Malware. In ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2016.
[14] Y. Chen, M. Ghorbanzadeh, K. Ma, C. Clancy, and R. McGwier. A hidden Markov model detection of malicious Android applications at runtime. In Wireless and Optical Communication Conference (WOCC), 2014.
[15] J. Clay. Continued Rise in Mobile Threats for 2016. http://blog.trendmicro.com/continued-rise-in-mobile-threats-for-2016/, 2016.
[16] S. Dai, A. Tongaonkar, X. Wang, A. Nucci, and D. Song. NetworkProfiler: Towards automatic fingerprinting of Android apps. In IEEE International Conference on Computer Communications (INFOCOM), 2013.
[17] W. Enck, P. Gilbert, S. Han, V. Tendulkar, B.-G. Chun, L. P. Cox, J. Jung, P. McDaniel, and A. N. Sheth. TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones. ACM Transactions on Computer Systems, 32(2), 2014.
[18] W. Enck, M. Ongtang, and P. McDaniel. On Lightweight Mobile Phone Application Certification. In ACM Conference on Computer and Communications Security (CCS), 2009.
[19] A. P. Felt, E. Chin, S. Hanna, D. Song, and D. Wagner. Android Permissions Demystified. In ACM Conference on Computer and Communications Security (CCS), 2011.
[20] Y. Feng, O. Bastani, R. Martins, I. Dillig, and S. Anand. Automated synthesis of semantic malware signatures using maximum satisfiability. In Annual Symposium on Network and Distributed System Security (NDSS), 2017.
[21] J. Garcia, M. Hammad, B. Pedrood, A. Bagheri-Khaligh, and S. Malek. Obfuscation-resilient, efficient, and accurate detection and family identification of Android malware. Department of Computer Science, George Mason University, Tech. Rep., 2015.
[22] H. Gascon, F. Yamaguchi, D. Arp, and K. Rieck. Structural Detection of Android Malware Using Embedded Call Graphs. In ACM Workshop on Artificial Intelligence and Security (AISec), 2013.
[23] X. Ge, K. Taneja, T. Xie, and N. Tillmann. DyTa: Dynamic Symbolic Execution Guided with Static Verification Results. In International Conference on Software Engineering (ICSE), 2011.
[24] P. Godefroid, N. Klarlund, and K. Sen. DART: Directed Automated Random Testing. ACM SIGPLAN Notices, 40(6), 2005.
[25] M. I. Gordon, D. Kim, J. H. Perkins, L. Gilham, N. Nguyen, and M. C. Rinard. Information Flow Analysis of Android Applications in DroidSafe. In Annual Symposium on Network and Distributed System Security (NDSS), 2015.
[26] M. Grace, Y. Zhou, Q. Zhang, S. Zou, and X. Jiang. RiskRanker: Scalable and Accurate Zero-day Android Malware Detection. In International Conference on Mobile Systems, Applications, and Services (MobiSys), 2012.
[27] S. A. Hofmeyr, S. Forrest, and A. Somayaji. Intrusion detection using sequences of system calls. Journal of Computer Security, 6(3), 1998.
[28] Y. Z. X. Jiang. Detecting passive content leaks and pollution in Android applications. In Annual Symposium on Network and Distributed System Security (NDSS), 2013.
[29] I. Jolliffe. Principal Component Analysis. John Wiley & Sons, 2002.
[30] M. J. Kearns. The Computational Complexity of Machine Learning. MIT Press, 1990.
[31] J. Kim, Y. Yoon, K. Yi, J. Shin, and S. Center. ScanDal: Static analyzer for detecting privacy leaks in Android applications. In MoST, 2012.
[32] W. Klieber, L. Flynn, A. Bhosale, L. Jia, and L. Bauer. Android Taint Flow Analysis for App Sets. In SOAP, 2014.
[33] C. Kolbitsch, P. M. Comparetti, C. Kruegel, E. Kirda, X.-y. Zhou, and X. Wang. Effective and Efficient Malware Detection at the End Host. In USENIX Security Symposium, 2009.
[34] L. Lu, Z. Li, Z. Wu, W. Lee, and G. Jiang. CHEX: Statically Vetting Android Apps for Component Hijacking Vulnerabilities. In ACM Conference on Computer and Communications Security (CCS), 2012.
[35] A. Machiry, R. Tahiliani, and M. Naik. Dynodroid: An Input Generation System for Android Apps. In Joint Meeting on Foundations of Software Engineering (ESEC/FSE), 2013.
[36] E. Mariconti, L. Onwuzurike, P. Andriotis, E. De Cristofaro, G. Ross, and G. Stringhini. MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models. In Annual Symposium on Network and Distributed System Security (NDSS), 2017.
[37] O. Mirzaei, G. Suarez-Tangil, J. Tapiador, and J. M. de Fuentes. TriFlow: Triaging Android applications using speculative information flows. In ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2017.
[38] D. Morris. An Extremely Convincing WhatsApp Fake Was Downloaded More Than 1 Million Times From Google Play. http://fortune.com/2017/11/04/whatsapp-fake-google-play/, 2017.
[39] J. R. Norris. Markov Chains. Cambridge University Press, 1998.
[40] J. Oberheide and C. Miller. Dissecting the Android Bouncer. In SummerCon, 2012.
[41] D. Octeau, S. Jha, and P. McDaniel. Retargeting Android Applications to Java Bytecode. In ACM Symposium on the Foundations of Software Engineering (FSE), 2012.
[42] S. Poeplau, Y. Fratantonio, A. Bianchi, C. Kruegel, and G. Vigna. Execute This! Analyzing Unsafe and Malicious Dynamic Code Loading in Android Applications. In Annual Symposium on Network and Distributed System Security (NDSS), 2014.
[43] I. Polakis, M. Diamantaris, T. Petsas, F. Maggi, and S. Ioannidis. Powerslave: Analyzing the Energy Consumption of Mobile Antivirus Software. In DIMVA, 2015.
[44] G. Portokalidis, P. Homburg, K. Anagnostakis, and H. Bos. Paranoid Android: Versatile Protection for Smartphones. In Annual Computer Security Applications Conference (ACSAC), 2010.
[45] V. Rastogi, Y. Chen, and X. Jiang. DroidChameleon: Evaluating Android Anti-malware Against Transformation Attacks. In ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2013.
[46] A. Saracino, D. Sgandurra, G. Dini, and F. Martinelli. MADAM: Effective and efficient behavior-based Android malware detection and prevention. IEEE Transactions on Dependable and Secure Computing, 2016.
[47] B. P. Sarma, N. Li, C. Gates, R. Potharaju, C. Nita-Rotaru, and I. Molloy. Android Permissions: A Perspective Combining Risks and Benefits. In ACM Symposium on Access Control Models and Technologies (SACMAT), 2012.
[48] M. K. Shankarapani, S. Ramamoorthy, R. S. Movva, and S. Mukkamala. Malware detection using assembly and API call sequences. Journal in Computer Virology, 7(2), 2011.
[49] K. Tam, S. J. Khan, A. Fattori, and L. Cavallaro. CopperDroid: Automatic Reconstruction of Android Malware Behaviors. In Annual Symposium on Network and Distributed System Security (NDSS), 2015.
[50] P. Teufl, M. Ferk, A. Fitzek, D. Hein, S. Kraxberger, and C. Orthacker. Malware detection by applying knowledge discovery processes to application metadata on the Android Market (Google Play). Security and Communication Networks, 9(5), 2016.
[51] R. Vallée-Rai, P. Co, E. Gagnon, L. Hendren, P. Lam, and V. Sundaresan. Soot - a Java Bytecode Optimization Framework. In Conference of the Centre for Advanced Studies on Collaborative Research, 1999.
[52] D. Venkatesan. Android.Bankosy: All ears on voice call-based 2FA. http://www.symantec.com/connect/blogs/androidbankosy-all-ears-voice-call-based-2fa, 2016.
[53] T. Vidas and N. Christin. Evading Android runtime analysis via sandbox detection. In ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2014.
[54] N. Viennot, E. Garcia, and J. Nieh. A measurement study of Google Play. ACM SIGMETRICS Performance Evaluation Review, 42(1), 2014.
[55] M. Y. Wong and D. Lie. IntelliDroid: A Targeted Input Generator for the Dynamic Analysis of Android Malware. In Annual Symposium on Network and Distributed System Security (NDSS), 2016.
[56] B. Woods. Google Play has hundreds of Android apps that contain malware. http://www.trustedreviews.com/news/malware-apps-downloaded-google-play, 2016.
[57] D.-J. Wu, C.-H. Mao, T.-E. Wei, H.-M. Lee, and K.-P. Wu. DroidMat: Android Malware Detection through Manifest and API Calls Tracing. In Asia JCIS, 2012.
[58] M. Xia, L. Gong, Y. Lyu, Z. Qi, and X. Liu. Effective Real-Time Android Application Auditing. In IEEE Symposium on Security and Privacy (S&P), 2015.
[59] L. K. Yan and H. Yin. DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis. In USENIX Security Symposium, 2012.
[60] C. Yang, Z. Xu, G. Gu, V. Yegneswaran, and P. Porras. DroidMiner: Automated mining and characterization of fine-grained malicious behaviors in Android applications. In European Symposium on Research in Computer Security (ESORICS), 2014.
[61] W. Yang, X. Xiao, B. Andow, S. Li, T. Xie, and W. Enck. AppContext: Differentiating Malicious and Benign Mobile App Behaviors Using Context. In International Conference on Software Engineering (ICSE), 2015.
[62] Z. Yang, M. Yang, Y. Zhang, G. Gu, P. Ning, and X. S. Wang. AppIntent: Analyzing Sensitive Data Transmission in Android for Privacy Leakage Detection. In ACM Conference on Computer and Communications Security (CCS), 2013.
[63] H. Ye, S. Cheng, L. Zhang, and F. Jiang. DroidFuzzer: Fuzzing the Android Apps with Intent-Filter Tag. In International Conference on Advances in Mobile Computing and Multimedia (MoMM), 2013.
[64] H. Zhang, Y. Cole, L. Ge, S. Wei, W. Yu, C. Lu, G. Chen, D. Shen, E. Blasch, and K. D. Pham. ScanMe Mobile: A Cloud-based Android Malware Analysis Service. SIGAPP Applied Computing Review, 16(1), 2016.
[65] N. Zhang, K. Yuan, M. Naveed, X. Zhou, and X. Wang. Leave me alone: App-level protection against runtime information gathering on Android. In IEEE Symposium on Security and Privacy (S&P), 2015.
[66] Y. Zhou and X. Jiang. Dissecting Android Malware: Characterization and Evolution. In IEEE Symposium on Security and Privacy (S&P), 2012.
[67] Y. Zhou, Z. Wang, W. Zhou, and X. Jiang. Hey, You, Get Off of My Market: Detecting Malicious Apps in Official and Alternative Android Markets. In Annual Symposium on Network and Distributed System Security (NDSS), 2012.
| 2 |
Journal of Computational Physics 00 (2017) 1–33
arXiv:1708.08741v1 [] 29 Aug 2017
Coupled Multiphysics Simulations of Charged Particle
Electrophoresis for Massively Parallel Supercomputers
Dominik Bartuschat a,∗, Ulrich Rüde a,b

a Lehrstuhl für Systemsimulation, Friedrich-Alexander Universität Erlangen-Nürnberg, Cauerstrasse 11, 91058 Erlangen, Germany
b Parallel Algorithms Group, CERFACS, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France
Abstract
The article deals with the multiphysics simulation of electrokinetic flows. When charged particles are immersed in a
fluid and are additionally subjected to electric fields, this results in a complex coupling of several physical phenomena.
In a direct numerical simulation, the dynamics of moving and geometrically resolved particles, the hydrodynamics of
the fluid, and the electric field must be suitably resolved and their coupling must be realized algorithmically. Here
the two-relaxation-time variant of the lattice Boltzmann method is employed together with a momentum-exchange
coupling to the particulate phase. For the electric field that varies in time according to the particle trajectories,
a quasistatic continuum model and its discretization with finite volumes is chosen. This field is coupled to the
particulate phase in the form of an acceleration due to electrostatic forces and conversely via the respective charges
as boundary conditions for the electric potential equation. The electric field is also coupled to the fluid phase by
modeling the effect of the ion transport on fluid motion. With the multiphysics algorithm presented in this article,
the resulting multiply coupled, interacting system can be simulated efficiently on massively parallel supercomputers.
This algorithm is implemented in the waLBerla framework, whose modular software structure naturally supports
multiphysics simulations by allowing different models to be combined flexibly. The largest simulation of the complete system reported here performs more than 70 000 time steps on more than five billion (5 × 10^9) mesh cells for both
the hydrodynamics, as represented by a D3Q19 lattice Boltzmann automaton, and the scalar electric field. The
computations are executed in a fully scalable fashion on up to 8192 processor cores of a current supercomputer.
© 2017 Published by Elsevier Ltd.
Keywords: Parallel simulation; Electrokinetic flow; Electrophoresis; Fluid-particle interaction; MPI.
1. Introduction
1.1. Motivation
The motion of charged particles in fluids under the influence of electric fields occurs in a wide range
of industrial, medical, and biological processes. When the charged particles are immersed in liquids, their
migration caused by electric fields is termed electrophoresis. Due to the complex interplay of the physical
effects involved in such particle-laden electrokinetic flows, numerical simulations are required to analyze,
∗ Corresponding author
Email address: [email protected] (Dominik Bartuschat)
predict, and optimize the behavior of these processes. To this end, we present a parallel multiphysics
algorithm for direct numerical simulations of electrophoretic particle motion.
Industrial applications that involve electrophoretic effects are electrofiltration [1, 2, 3] and electrodewatering [4]. Moreover, electrophoresis is utilized in electrophoretic deposition techniques for fabricating
advanced materials [5] and especially ceramic coatings [6, 7] in material science. Electrophoresis and electric
fields are also applied in many medical and biological applications. The trend towards miniaturization of
analysis processes has led to the development of micro total analysis systems. Due to their high portability,
reduced costs, fast operation, and high sensitivity [8, 9], the design of such lab-on-a-chip systems has been a
highly active area of research for many years. These microfluidic systems require only small samples of liquid
and particles, which are transported, manipulated, and analyzed in structures of length scales from several
nm to 100 µm. Therefore, microfluidic separation and sorting of particles and cells are important steps of
diagnostics in such systems [10, 11]. Many of the employed techniques utilize electric fields to manipulate,
separate, and sort biological particles and macromolecules [8], such as cells [9, 10] or DNA [12].
At the small scales of microfluidic analysis systems, flow measurements are difficult or even impossible. Moreover, the complex coupling of hydrodynamic and electrostatic effects involved in electrophoretic
processes makes predictions of electrophoretic motion challenging, especially for large numbers of particles.
Therefore, numerical simulations are essential to aid the design and optimization of electrophoretic systems.
The different physical effects in electrophoretic deposition can be better understood from insight gained
in simulations. By means of such simulations, electrophoretic sorting in lab-on-a-chip systems can be optimized for maximal throughput, sorting efficiency, and sorting resolution. A review of simulation methods for
electrophoretic separation of macromolecules is given in [13]. Also industrial applications of electrophoretic
deposition can be optimized with the help of simulations, as presented in [14] for a coating process.
1.2. Multiphysics Coupling Strategy
For simulations of electrokinetic flows with electrophoretic particle motion, the coupling between three
system components must be modeled: charged objects, fluid flow, and electric effects. The interacting
physical effects are sketched in Fig. 1. In the simulation method introduced in this article, the motion of the
Figure 1: Coupled physical effects of electrophoresis simulated with waLBerla and pe. (Diagram: electro-quasistatics, rigid body dynamics, and fluid dynamics are coupled via charge density and object motion, electrostatic forces, and hydrodynamic forces.)
rigid, charged particles is modeled with Newtonian mechanics. The motion of the surrounding fluid, which
exerts hydrodynamic forces on the particles, is described by the incompressible Navier-Stokes equation. To
capture fluid-particle interactions, the hydrodynamic forces and the influence of the particle motion on the
fluid are modeled, based on the momentum exchange between fluid and particles. In this way, long-range
hydrodynamic interactions between individual particles and between particles and walls are recovered.
Moreover, electrostatic forces exerted by applied electric fields on the charged particles are modeled, which
cause the electrophoretic motion. The varying positions of the charged particles in return affect the electric
potential distribution in the simulation domain, based on their surface charge. Such a charge is carried by
most biomolecules such as cells, proteins, and DNA [15]. In fact, most substances acquire surface charges
when they get into contact with an aqueous medium [16] or electrolyte solution [17].
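The coupling just outlined can be summarized as a time-stepping loop. The following Python-style pseudocode is purely schematic: all object and method names are invented for exposition and do not correspond to the waLBerla or pe APIs.

```python
def coupled_time_step(fluid, particles, potential):
    """One time step of the coupled electrophoresis algorithm (schematic)."""
    # (1) Solve the electric potential; particle positions and surface
    #     charges enter as boundary conditions.
    potential.solve(boundary_charges=particles)

    # (2) Electrostatics: force on each charged particle, plus a body
    #     force on the fluid wherever net ion charge is present.
    for p in particles:
        p.add_force(p.charge * potential.field_at(p.position))
    fluid.set_body_force(potential.ion_body_force())

    # (3) Hydrodynamics: lattice Boltzmann step; momentum exchange at the
    #     resolved particle surfaces yields the hydrodynamic forces.
    fluid.lbm_step()
    for p in particles:
        p.add_force(fluid.momentum_exchange_force(p))

    # (4) Rigid body dynamics: advance particles under the total forces,
    #     resolving collisions.
    for p in particles:
        p.integrate_motion()
```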
Electrostatic forces additionally act on the ions in the fluid and affect the fluid motion via body forces
present in locations of net charge. This net charge originates from the repulsion of co-ions in the fluid of
the same polarity as the surface charge and from the attraction of counter-ions. The ions in the fluid are
transported with the fluid flow, which in turn alters the electric potential distribution. In general, the motion
of ions in the fluid due to an electric field is governed by the Nernst-Planck equation, an advection-diffusion
equation that employs a continuum description and treats the ions as point charges. For flows in which
diffusion strongly dominates over advection and therefore quasi-thermodynamic equilibrium can be assumed
to hold, as considered in this paper, the electric potential due to the ion charge distribution is governed by
the Poisson-Boltzmann equation.
The region of the particle’s surface charge and of excess counter-ions in the fluid is denoted as electric
double layer (EDL). According to the Stern model [18] employed in this article, this double layer comprises
a region of ions attached to the surface and a diffuse part in which the ion concentration follows a Boltzmann distribution. At the surface of shear between the particle and the surrounding diffuse double layer,
the characteristic ζ-potential is defined. The employed equilibrium considerations based on the Poisson-Boltzmann equation capture the dominant retardation effect of electrophoretic motion. This effect describes
the retardation of the charged particle motion by the action of the applied electric field on the opposite net
charge in the surrounding EDL. At high ζ-potentials, additionally the weaker relaxation effect occurs [16]
that is caused by a distortion of the EDL and can be captured by the Nernst-Planck equation.
In this article, we present an efficient parallel multiphysics simulation algorithm for electrophoresis on
a fixed Eulerian grid with a Lagrangian representation of moving particles. The particles are represented
by the physics engine pe [19, 20] as geometrically fully resolved three-dimensional objects. Dependent
on the electrostatic and hydrodynamic forces acting on the particles, the pe computes their trajectories
by rigid body dynamics and additionally resolves particle collisions. The pe is coupled to waLBerla
[21, 22, 23], a massively parallel simulation framework for fluid flow applications that employ the lattice
Boltzmann method (LBM) [24, 25]. By means of a modular software structure that avoids dependencies
between modules, functionality from different modules can be combined flexibly. To model fluid-particle
interactions, the LBM momentum exchange method [26, 27] implemented in waLBerla [28] is applied. For
the electrophoresis simulations, the LBM is performed with the two-relaxation time collision operator [29, 30]
and an appropriate forcing term for the electric body force due to the ions in the fluid. The electric potential
is represented by the Debye-Hückel approximation of the Poisson-Boltzmann equation that is discretized
with finite volumes whose mesh structure naturally conforms to the lattice Boltzmann (LB) grid. This
discretization also facilitates accommodating variable and discontinuous dielectricity values that vary in
time according to the particle positions, as required for simulating dielectrophoretic effects. By means of the
waLBerla solver module introduced in [31] together with the cell-centered multigrid solver implemented
therein, the resulting linear system of equations is solved. Since the counter-ions lead to a quicker decay of
the electric potential compared to the long-range electric potentials modeled in [31], the parallel successive
over-relaxation (SOR) method implemented in this module is an adequate choice.
In a previous article, we have shown that the implemented fluid-particle interaction algorithm for arbitrarily shaped objects efficiently recovers hydrodynamic interactions also for elongated particles [32]. Moreover,
we presented a parallel multiphysics algorithm for charged particles in the absence of ions in the fluid in
[31]. We have therein shown that several millions of charged particles with long-range hydrodynamic and
electrostatic interactions can be simulated with excellent parallel performance on a supercomputer. The
present paper extends these simulation algorithms by also considering ions in the EDL around the particles and their effect on the fluid motion, and presenting suitable parallel coupling techniques. Together
with the full four-way coupling of the fluid-particle interaction [31], the coupling with the quasi-equilibrium
representation of the electric effects results in a 7.5-way interaction.
1.3. Related Work
In the following, we give an overview of numerical methods for simulations of electrophoretic phenomena
that have been developed for different resolution levels. At the coarsest modeling scale, both fluid and
solid phase are described by an Eulerian approach. These continuum models represent the charged species
in terms of concentrations and are well suited for simulations with large numbers of unresolved particles.
In [33] two-dimensional electrophoresis simulations of biomolecule concentrations are presented. These
finite element simulations consider the effect of reactive surfaces and the electric double layer is included
through the biomolecule charge. Also three-dimensional parallel simulations of electrophoretic separation
with continuum approaches have been reported. The finite element simulations in [34] consider the buffer
composition and the ζ-potential at channel walls. In [35, 36] mixed finite element and finite difference
simulations of protein separation were performed on up to 32 processes.
At the finest level of resolution, fluid, ions, and particles are simulated by Lagrangian approaches.
These explicit solvent methods [37] typically apply coarse-grained molecular dynamics (MD) models to
describe the motion of fluid molecules and incorporate Brownian motion [38]. The mesoscale dissipative
particle dynamics method is applied in [39] to simulate electrophoresis of a polyelectrolyte in a nanochannel.
Another explicit solvent method is presented in [40] for simulating DNA electrophoresis, modeling DNA as
a polymer. In both methods the polymer is represented by bead-spring chains with beads represented by
a truncated Lennard-Jones potential and connected by elastic spring potentials. Explicit solvent models,
however, are computationally very expensive, especially for large numbers of fluid molecules due to pairwise
interactions [40]. Moreover, the resolution of solvent, macromolecules, and ions on the same scale limits the
maximal problem sizes that can be simulated [41]. Also the mapping of measurable properties from colloidal
suspensions to these particle-based methods is problematic [37].
The high computational effort is significantly reduced in implicit particle-based methods that incorporate
hydrodynamic interactions into the inter-particle forces. Such methods are applied in [37] and [42] to simulate
electrophoretic deposition under consideration of Brownian motion and van der Waals forces. Nevertheless,
these methods are restricted to few particle shapes and hydrodynamic interactions in Stokes flow.
Euler-Lagrange methods constitute the intermediate level of resolution. These approaches employ Eulerian methods to simulate the fluid phase, whereas the motion of individual particles is described by Newtonian mechanics. For simulations of particles in steady-state motion, the resolved particles can be modeled
as fixed while the moving fluid is modeled by an Eulerian approach. In [43] the finite volume method is
applied to simulate electrophoresis of up to two stagnant toroids in a moving fluid, employing the Hückel
approximation for a fixed ζ-potential and different electrical double layer thicknesses. The steady-state electrophoretic motion of particles with low surface potentials under a weak applied electric field in a charged
cylindrical pore is simulated in [44] for a single cylinder and in [45] for two identical spheres. In both cases,
a two-dimensional simulation with a finite element method for Stokes flow and Hückel approximation is
performed, exploiting the axial symmetry of the problem.
For electrophoresis at steady state, perturbation approaches can be employed that are based on the assumption that the double layer is only slightly distorted from the equilibrium distribution for weak applied
electric fields (w.r.t. the field in the EDL). In addition to the equilibrium description based on the Poisson-Boltzmann equation, small perturbations in the equilibrium EDL are considered in terms of linear correction
terms in the applied electric field for the ion distribution and the electric potential (see e. g. [46]). Using a
perturbation approach with finite elements, the electrophoresis of two identical spheres along the symmetry
axis of a cylindrical domain at pseudo steady-state was studied in [47]. In addition to the hydrodynamic
and electric interactions, these axisymmetric simulations consider van der Waals forces for particles in close
proximity. In [48] a perturbation approach is applied to simulate a single colloid in a rest frame with periodic boundary conditions. The zeroth-order perturbation corresponds to the Poisson-Boltzmann equation,
which is solved by a constrained variational approach suggested in [49]. For the first-order perturbation,
the stationary Stokes equation is solved by a surface element method, the convection-diffusion for ionic
concentrations by a finite volume solver, and the Poisson equation by a fast Fourier Transform method.
More sophisticated Euler-Lagrange methods include direct numerical simulation (DNS) models that
represent the moving particles as geometrically fully resolved objects. These methods for particulate flows
include approaches with body-fitted moving meshes and fixed meshes. Moving meshes can be represented by
the Arbitrary-Lagrangian-Eulerian (ALE) formulation [50, 51] that employs moving unstructured meshes for
fluid-particle interaction problems. Such an ALE method is applied in [52] to simulate electrophoresis of a
single particle surrounded by a thin electrical double layer. The moving-mesh techniques require re-meshing
when the distortion of the updated mesh becomes too high, and subsequent projection of the solution onto
the new mesh. This overhead is circumvented in fixed-mesh techniques that allow the use of regular grids
and therefore the application of efficient solvers. The fluid particle dynamics method [53] falls into the latter
category, solving the Navier-Stokes and continuity equation on a fixed lattice, and representing the moving
solid particles by fluid particles of very high viscosity. By means of a concentration field that represents the
particle distribution, the particles affect the fluid viscosity and, together with forces acting on the particles,
the body force term of the Navier-Stokes equation. The rigid particles are geometrically modeled by the
Lennard-Jones potential and their motion is described by Newtonian mechanics [53]. This technique is
applied to simulate electrophoretic deposition of two charged particles in [54] and electrophoretic separation
in [55]. In these simulations the electrostatic interactions of the particles are modeled in terms of the body
force field, together with the advection and diffusion of ions and the resulting effect of the applied electric field
on the fluid motion. With this method, however, the particle rigidity is imposed by very high viscosity values
that restrict the time step size [56], and the Lennard-Jones potential restricts the particle shapes to spheres.
The smoothed profile method (SPM) [56] circumvents this time-step constraint by directly modeling the
particles as solid objects. Inside the particles and at solid-fluid boundaries that are represented by diffuse
interfaces, a body force is imposed on the fluid to model the effect of the particle motion on the fluid.
The fluid is again modeled on a fixed Cartesian grid and the particle motion with Newtonian mechanics,
where particle overlaps are typically prevented by a truncated Lennard-Jones potential. With this method,
electrophoresis of charged spherical particles is simulated in [57] for a constant, uniform electric field and
in [58] for an oscillating electric field. In both articles, the ion number concentration is modeled by an
advection-diffusion equation to recover non-equilibrium double layer effects. The SPM is also applied in [59]
to simulate electrophoresis of single cylinders and microtubules, employing the equilibrium representation of
the EDL. A further fixed-mesh technique is the immersed boundary method [60, 61, 62], where the rigid body
motion is imposed on the flow by body forces applied at the particle boundaries. This method, combined
with a finite volume method for solving the steady-state Poisson-Nernst-Planck equation system, is applied
in [63] to simulate the electrophoretic motion of up to three spherical particles in a two-dimensional setup.
Lattice-Boltzmann based methods are very well suited for parallel direct numerical simulations of fluid-particle interactions on fixed Cartesian grids. Both the Lagrangian particles and ions are often explicitly
modeled by molecular dynamics approaches that represent the rigid objects by repulsive potentials. In [64]
the electrophoresis of a colloidal sphere immersed in a fluid with counter-ions is simulated, modeling the
solvent by a lattice Boltzmann method. The charged sphere modeled with molecular dynamics is represented
by a raspberry model that comprised several beads connected by the finitely extensible nonlinear-elastic
(FENE) potential. Using a modified raspberry model with two spherical shells of beads solidly attached
to a larger spherical particle, this method is extended in [65] to simulate the electrophoresis of a spherical
Janus particle. The partially uncharged particle is surrounded by anions and cations represented by charged
beads. Electrophoresis simulations for a single highly charged spherical macro-ion in an electrolyte solution
with explicitly modeled positive and negative micro-ions are presented in [66]. Since the coupling of fluid
and macro-ion is performed via several particle boundary points, a single spherical particle is sufficient
to represent the macro-ion. A similar LB-MD method is applied in [67] to simulate the stretching of a
charged polyelectrolyte between parallel plates. The polyelectrolyte immersed in a liquid with explicitly
modeled counter- and co-ions is modeled by beads bonded together by the FENE potential. In all these
LB-MD simulations with explicitly modeled ions, hydrodynamic interactions are simulated with the LBM
and thermal fluctuations are added to both the fluid and the MD objects. The high computational effort
for modeling each individual ion by means of molecular dynamics, however, restricts the maximum feasible
problem size. With these approaches, only a limited number of ions per colloidal particle can be simulated
and the colloid radius is typically restricted to one order of magnitude larger than the ion size [48]. Therefore,
approaches based on continuum descriptions of the suspended ions are more practical for simulations of many
or of larger charged particles.
Alternatively to the continuum approach based on the Poisson-Boltzmann equation employed in this
article, electrophoresis can be simulated with the link-flux method [68] that models the advection and
diffusion of ions in terms of Nernst-Planck equations. The link-flux method employs the LBM for fluid
dynamics and models ion motion in terms of fluxes between lattice cells. In [69] this method is compared to
a LB-Poisson-Boltzmann approach for a fixed spherical particle in a periodic three-dimensional domain. The
aim is to examine the influence of particle motion and counter-ion concentration on the ζ-potential, leading to
the conclusion that for weakly perturbing electric fields or low Péclet numbers the equilibrium and dynamic
ζ-potentials are indistinguishable. The link-flux method is extended in [41] to support moving particles in
combination with the LB momentum exchange method. To ensure charge conservation in electrophoresis
simulations, appropriate moving boundary conditions for the solute fluxes are introduced. This method is
verified in [41] by electrophoresis simulations of up to eight particles.
1.4. Objectives and Outline
The primary goal of this paper is the introduction of a parallel multiphysics algorithm for electrophoresis
simulations together with validations of the physical correctness of the coupled algorithm for different particle
sizes. For this algorithm, waLBerla is augmented by an efficient boundary handling method that is able to
treat electric potential boundary conditions on the moving particles. Moreover, a joint parameterization for
the different coupled numerical methods is introduced. To achieve excellent computational performance, a
matrix-free representation of the linear system based on a stencil paradigm is used in the solver module [31].
For the linear Debye-Hückel approximation it is systematically exploited that these stencils are almost
uniformly identical throughout the simulation domain. The validation runs were performed on up to 8192
parallel processes of a modern supercomputer. Moreover, simulation results for the electrophoretic motion of
a single particle in a microchannel are presented, including visualizations of the electric potential distribution
and of the resulting flow field around the particle.
The equilibrium considerations in the present paper recover the predominant retardation effect due to
an opposing electrostatic force on the net opposite charge in the electrical double layer that counteracts the
particle motion. For the presented method, a computationally cheap and flexible SOR method is sufficient
to solve the electric potential equations. With our approach we aim for simulations of millions of charged
particles as in [31]. For these large numbers of particles the dynamics of an electrical double layer as in [41]
is computationally too expensive, even on modern supercomputers.
This paper is structured as follows: The physical background of fluid-particle interactions and electrophoresis are described in Sec. 2 and Sec. 3, respectively. In Sec. 4, the employed LB-momentum exchange
method for fluid-particle interactions is outlined, together with the finite volume discretization and the common parameterization concept for the coupled multiphysics methods. Then the extension of the waLBerla
framework for the electrophoresis algorithm is described in Sec. 5. Finally validation results for the electrophoretic motion of a spherical particle and visualizations of the resulting flow field and electric potential
distribution are presented in Sec. 6 before conclusions are drawn in Sec. 7.
2. Fluid-Particle Interaction
The macroscopic description of fluid behavior is based on the continuum hypothesis (cf. Batchelor [70])
that allows a fluid to be considered as a continuum, irrespective of the underlying molecular structure. In this case,
fluid properties can be represented by macroscopic quantities like density ρf , velocity ~u, and pressure p,
as functions of space and time. In terms of these quantities, fluid dynamics is described by conservation
laws for mass, momentum, and energy. In this article isothermal flows are considered, and therefore the
energy equation does not have to be solved. Moreover, non-continuum effects that become relevant for gas
flows at very small scales [71] are assumed to be negligible. Therefore, no-slip boundary conditions are
assumed to hold at solid-fluid interfaces, and slip velocities due to non-continuum Knudsen layer effects are
not considered.
Conservation of mass is described by the continuity equation. This equation can be derived by considering
a fixed control volume in the fluid relative to a stationary observer (Eulerian view). For incompressible fluids
the density of a fluid element is not affected by pressure changes [70, 72]. Then the density is spatially and
temporally constant (i. e. ρf = const), and the continuity equation reads as
\[ \nabla\cdot\vec{u} = 0. \quad (1) \]
Conservation of momentum in a viscous, compressible fluid can be described in terms of the momentum
flux density tensor Π [72]. The temporal change of momentum in a control volume is balanced by the net
momentum flux through the surface of this volume and by external body forces f~b acting on the volume as
\[ \frac{\partial(\rho_f\vec{u})}{\partial t} = -\nabla\cdot\Pi + \vec{f}_b. \quad (2) \]
The second-order tensor Π comprises a term for the convective transport of momentum and the total stress
tensor σ for the momentum transfer due to pressure and viscosity
\[ \Pi = \rho_f\,\vec{u}\,\vec{u}^{\top} - \sigma. \quad (3) \]
The total stress tensor σ can be decomposed into a part representing normal stresses related to the pressure
and a viscous part related to shear stresses. For incompressible Newtonian fluids, the stress tensor reads as
\[ \sigma = -p\,\mathrm{I} + \mu_f\left(\nabla\vec{u} + (\nabla\vec{u})^{\top}\right), \quad (4) \]
where the first term with the second-rank identity tensor I contains the thermodynamic pressure p defined
according to Landau & Lifshitz [72] as used in the LBM literature [73]. The second term with dynamic
viscosity µf represents the shear stresses that are proportional to the rate of deformation [74] and result from
molecular transport of momentum [75]. With this stress tensor the incompressible Navier-Stokes equation
results from Eqn. (2), together with basic vector calculus and the continuity equation for compressible
fluids [76], as
\[ \rho_f\underbrace{\left(\frac{\partial\vec{u}}{\partial t} + (\vec{u}\cdot\nabla)\vec{u}\right)}_{\text{inertial forces}} = \underbrace{-\nabla p}_{\text{pressure stress}} + \underbrace{\mu_f\Delta\vec{u}}_{\text{viscous stress}} + \underbrace{\vec{f}_b}_{\text{external body force}}. \quad (5) \]
This equation describes the balance of momentum change and the net force acting on a control volume
in terms of Newton’s second law. The left-hand side represents inertial forces acting on a fluid volume.
It comprises a term for the local change of velocity and a term for convective acceleration, i. e., the change of
velocity in space [77]. The right-hand side represents surface forces and body forces acting on the fluid
volume. Surface forces are short-range forces acting on the surface of the fluid element and are equivalent
to stress in the fluid [70]. Body forces, such as gravity or electrostatic force, act on the center of mass and
are represented by the force per unit volume of fluid element (or force density) f~b .
An important dimensionless quantity to characterize fluid flows is the Reynolds number Re = UL/ν_f, with
kinematic viscosity ν_f = µ_f/ρ_f, characteristic velocity U, and characteristic length scale L. Flows in the regime
of creeping motion, where Re ≪ 1 and thus inertial forces are negligible, are termed Stokes flow.
The Stokes equations for incompressible Newtonian fluids resulting from Eqns. (5) and (1) read as
\[ -\nabla p + \mu_f\Delta\vec{u} + \vec{f}_b = 0, \qquad \nabla\cdot\vec{u} = 0. \quad (6) \]
These equations are linear in both velocity and pressure. Due to this linearity, the superposition principle
holds, which is often utilized for fluid-particle interaction in Stokes flow and is employed for the validations
in Sec. 6.3.
Particles immersed in a fluid experience a force in case of relative fluid motion or a pressure gradient
in the fluid (e. g. due to gravity). This force exerted by the surrounding fluid can be calculated from the
stresses in the fluid next to the particle by integrating the stress tensor σ over the particle surface Γp [74, 46]
\[ \vec{F}_{part} = \int_{\Gamma_p} \sigma\cdot\vec{n}\;dA. \quad (7) \]
Here, dA denotes the surface area elements and ~n the associated normal vector pointing into the fluid.
For simple cases, such as spherical bodies moving in unbounded Stokes flow, or single bodies moving in a
confined domain, analytical solutions for the particle motion are known as described in the following. More
complex fluid-structure interaction problems must be solved numerically, e. g. by the LBM.
For the computation of particle motion in incompressible fluids, hydrostatic effects that result e. g. from
gravitation acting on the fluid, do not have to be considered explicitly in the momentum equation (if the
buoyancy and gravitational force are directly applied to the particle). In this case, the force exerted by the
fluid on the particle according to Eqns. (7) and (4) is the drag force given by
\[ \vec{F}_d = -\int_{\Gamma_p} p\,\vec{n}\;dA + \int_{\Gamma_p} \mu_f\left(\nabla\vec{u} + (\nabla\vec{u})^{\top}\right)\cdot\vec{n}\;dA, \]
where p is the hydrodynamic part of the total pressure.
The resistance to the motion of a sphere in an unbounded fluid at very low Reynolds numbers can be
calculated analytically from the above expression for the drag force and the Stokes equations (6) that govern
the fluid flow w.r.t. the imposed boundary conditions (BCs). For a sphere located at ~x = ~0, the unbounded
fluid is represented by the BC ~u → 0 as ~x → ∞, imposed on the fluid velocity in the Stokes equations (6). The resulting drag force acting on a rigid sphere of radius R that moves at constant velocity ~U was derived by Stokes [78] as
\[ \vec{F}_d = -6\pi\mu_f R\,\vec{U}. \quad (8) \]
This equation is commonly referred to as Stokes’ law.
For a sphere moving in a fluid subject to a constant force F~ , Stokes’ law relates the terminal steady-state
velocity of the sphere to the drag force exerted by the fluid. Such a constant force may e. g. be the Coulomb
force that acts on a charged particle in an electric field. The terminal sphere velocity is then obtained from
the balance of the external force and the drag force, F~ + F~d = ~0, as
\[ \vec{U} = \frac{1}{6\pi\mu_f R}\,\vec{F}. \quad (9) \]
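As a quick numerical illustration of Eqns. (8) and (9), the following Python sketch computes the terminal velocity of a charged sphere driven by a Coulomb force; all parameter values are illustrative assumptions, not taken from this article.

```python
import math

# Illustrative parameters (assumed values)
mu_f = 1.0e-3   # dynamic viscosity of water [Pa s]
R = 1.0e-6      # sphere radius [m]
q = 1.6e-17     # particle charge [C], roughly 100 elementary charges
E = 1.0e4       # applied electric field strength [V/m]

F = q * E                            # Coulomb force on the particle [N]
U = F / (6.0 * math.pi * mu_f * R)   # terminal velocity from Stokes' law, Eqn. (9)
print(f"terminal velocity: {U:.3e} m/s")
```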
A particle moving in a confined domain experiences a retardation caused by surrounding walls. Consequently, the drag force on a sphere that moves in a viscous fluid limited by walls is higher than the force
according to Stokes’ law. The effect of walls on a moving particle in Stokes regime can be determined by
means of the method of reflections, as described in detail in [74]. Happel & Bart [79] employed this method
to obtain a first-order correction to the drag force on a sphere settling in a long square duct with no-slip
walls. Miyamura et al. [80] found polynomial expressions for the increased drag by fitting the coefficients to
experimentally obtained settling velocities of spheres in different confining geometries. The correctness of
the wall effect recovered in LBM simulations with the fluid-particle interaction algorithm employed in this
article was verified in [76] against these expressions.
3. Electrokinetic Flows
The transport of ions in fluids subject to electric fields that occurs in electrokinetic flows can be modeled
by means of a continuum theory, similar to the description of fluid dynamics by the Navier-Stokes equation.
Instead of modeling individual ions and their interactions, local ion concentrations ni and fluxes ~ji of the
different ionic species i are considered. Based on these macroscopic quantities the ion transport in dilute
electrolyte can be described by the Nernst-Planck equation combined with the law for the conservation of
ionic species in a solution
\[ \frac{\partial n_i}{\partial t} = -\nabla\cdot\vec{j}_i \quad (10) \]
in the absence of chemical reactions. Here n_i denotes the number density or concentration that is related to the molar concentration as c_i = n_i/N_A with Avogadro's number N_A. The total ionic flux ~j_i of species i
comprises an advective flux with a common mass average velocity ~u for all species and fluxes relative to the
advective flux due to diffusion and electric migration [46]. This relation is expressed by the Nernst-Planck
equation
\[ \vec{j}_i = \underbrace{n_i\vec{u}}_{\text{advective flux}} \;\underbrace{-\,D_i\nabla n_i}_{\text{diffusive flux}} \;\underbrace{-\,n_i\mu_i^*\nabla\Phi}_{\text{migration flux}}, \quad (11) \]
where D_i and µ_i^* represent the spatially homogeneous diffusion coefficient and ionic mobility of species i,
respectively, and ∇Φ the local electric potential gradient. The ionic mobility is defined as µ_i^* := D_i z_i e/(k_B T),
where e denotes the elementary charge, z_i the valence of a given ionic species, k_B the Boltzmann constant,
and T the temperature.
To model the influence of the charged ions on the electric potential governed by the Poisson equation
\[ -\Delta\Phi(\vec{x}) = \frac{\rho_e}{\varepsilon_e} \quad (12) \]
for spatially uniform fluid permittivity ε_e, the ion charge distribution is considered in terms of the local
mean macroscopic charge density as
\[ \rho_e = \sum_i e\,z_i n_i. \quad (13) \]
The Poisson-Nernst-Planck equation system Eqns. (10)–(12) is highly nonlinear, and solving the overall system is computationally very expensive, especially for electrophoresis of many particles. Therefore
the problem is simplified by restriction to equilibrium considerations based on the Boltzmann distribution
that capture the dominant electrophoretic effects. The resulting Poisson-Boltzmann equation holds for
(quasi-)thermodynamic equilibrium when the ion distribution is not affected by fluid flow or by externally
applied electric fields. Therefore the electric potential ψ resulting from the non-uniform ion distribution in
the EDL is considered in the following, instead of the total electric potential Φ = ψ + ϕ that additionally
comprises the potential ϕ of the externally applied electric field.
The Boltzmann distribution for ions can be derived from the Nernst-Planck equation, as outlined in [46].
Considering the Nernst-Planck equation (11) in one dimension and at equilibrium, i. e., for zero macroscopic
fluid velocity u = 0 and ionic flux ji = 0, results in
\[ \frac{d n_i}{dx} = -n_i\,\frac{z_i e}{k_B T}\,\frac{d\psi}{dx} \quad (14) \]
for the above definition of µ_i^*. In this case, the Péclet number Pe = UL/D for mass transfer, relating the
advection rate to the diffusion rate, becomes zero. Here U is the fluid speed, L a characteristic length scale, and
D the diffusion coefficient. Applying the chain rule to the left-hand side of Eqn. (14) and integrating from
a reference point in the bulk with potential ψ∞ and concentration ni∞ , yields
\[ n_i = n_{i\infty}\, e^{-\frac{z_i e(\psi - \psi_\infty)}{k_B T}}. \quad (15) \]
Setting the reference potential ψ∞ in the electroneutral bulk solution to zero recovers the Boltzmann distribution with the number density ni∞ at the location of the neutral state.
From Poisson’s equation (12) with net charge density according to Eqn. (13) and the obtained Boltzmann
distribution, the Poisson-Boltzmann equation follows as
\[ -\Delta\psi = \frac{e}{\varepsilon_e}\sum_i z_i n_{i\infty}\, e^{-\frac{z_i e\psi}{k_B T}}, \quad (16) \]
relating the electric potential ψ to the ion concentrations at equilibrium. For binary, symmetric electrolyte
solutions comprising two species of valence z = −z− = z+ , the Poisson-Boltzmann equation takes the form
\[ -\Delta\psi = -\frac{2\,z e\,n_\infty}{\varepsilon_e}\,\sinh\!\left(\frac{z e\psi}{k_B T}\right). \quad (17) \]
For low ζ-potentials compared to the thermal voltage k_B T/e, the term zeψ/(k_B T) in Eqn. (17) becomes
smaller than unity. At room temperature this is fulfilled for ζ ≪ 25.7/z mV [81]. In this case the approximation
sinh(x) ≈ x is accurate, up to a small error of order O(x³) (cf. Taylor's expansion). With this linearization,
the symmetric Poisson-Boltzmann equation simplifies to the Debye-Hückel approximation (DHA)
\[ -\Delta\psi = -\frac{2\,e^2 z^2 n_\infty}{\varepsilon_r\varepsilon_0 k_B T}\,\psi = -\kappa^2\psi. \quad (18) \]
This equation was originally derived by Debye & Hückel [82] for strong electrolytes [81]. The parameter κ,
defined by
\[ \kappa := \sqrt{\frac{2\,e^2 z^2 n_\infty}{\varepsilon_r\varepsilon_0 k_B T}}, \quad (19) \]
is commonly referred to as Debye-Hückel parameter. Moreover, the charge density in the fluid is then given
by
\[ \rho_e(\psi) = -\kappa^2\varepsilon_e\psi. \quad (20) \]
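Since a uniform applied field exerts the force density ρ_e ~E on the net charge in the fluid, Eqn. (20) directly supplies the electric body force acting on the fluid. The following NumPy sketch illustrates this; the helper name and the assumption of a uniform applied field are ours, and the sketch covers only the applied-field contribution to the body force.

```python
import numpy as np

def electric_body_force(psi, E_ext, kappa, eps_e):
    """Electric body force density f_b = rho_e(psi) * E_ext, with rho_e from Eqn. (20).

    psi   : electric potential field in the EDL, shape (nx, ny, nz)
    E_ext : uniform applied electric field vector, shape (3,) (sketch assumption)
    """
    rho_e = -kappa**2 * eps_e * psi        # net charge density in the fluid, Eqn. (20)
    return rho_e[..., np.newaxis] * E_ext  # force per unit volume, shape (nx, ny, nz, 3)
```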
In this article, we consider spherical particles with uniform ζ-potential distribution as depicted in Fig. 2.

[Figure 2: Electrophoresis setup of a stationary (negatively) charged sphere of radius R with surface potential ψ = ζ, surrounded by a double layer of thickness λ_D and subject to an applied electric field E∞ in opposing fluid flow ~u = −U~e_x. Similar to [16].]

The electric potential ψ for such a particle of radius R is represented by the Debye-Hückel equation in
spherical-polar coordinates as
\[ \frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{d\psi}{dr}\right) = \kappa^2\psi, \quad (21) \]
with radial distance r from the sphere center and subject to the Dirichlet BCs
\[ \psi = \zeta \;\text{ at }\; r = R, \qquad \psi \to 0 \;\text{ as }\; r \to \infty. \quad (22) \]
Solving this equation subject to these BCs results in the electric potential distribution in the surrounding
EDL (and beyond) as [81]
\[ \psi(r) = \zeta\,\frac{R}{r}\,e^{-\kappa(r-R)} \quad\text{for } r \ge R. \quad (23) \]
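For reference, the following Python sketch evaluates the EDL potential of Eqn. (23) together with the Debye-Hückel parameter of Eqn. (19); the function names and example values are our own illustrative assumptions.

```python
import math

# Physical constants
E_CHARGE = 1.602e-19   # elementary charge [C]
K_B = 1.381e-23        # Boltzmann constant [J/K]
EPS_0 = 8.854e-12      # vacuum permittivity [F/m]

def kappa(z, n_inf, eps_r, T=298.0):
    """Debye-Hueckel parameter, Eqn. (19)."""
    return math.sqrt(2.0 * E_CHARGE**2 * z**2 * n_inf / (eps_r * EPS_0 * K_B * T))

def psi_edl(r, R, zeta, kap):
    """EDL potential around a sphere, Eqn. (23), valid for r >= R."""
    return zeta * (R / r) * math.exp(-kap * (r - R))

# Example (assumed values): 1 mM 1:1 electrolyte in water, 1 micron sphere, zeta = -25 mV
kap = kappa(z=1, n_inf=6.022e23, eps_r=78.5)   # n_inf for 1 mol/m^3
print(f"Debye length: {1.0/kap:.3e} m")        # about 9.6 nm
print(f"psi at r = 1.1 R: {psi_edl(1.1e-6, 1.0e-6, -0.025, kap):.3e} V")
```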
For the Debye-Hückel approximation, an analytical solution for the electrophoretic velocity of a single
spherical particle at steady state and arbitrary EDL thickness has been derived by Henry [83]. Initially the
problem was formulated to account for finite conductivities of particle and medium, and the potential in
the EDL was described by the Poisson-Boltzmann equation. The final results, however, were provided for
insulating spheres, and sufficiently low ζ-potentials for the Debye-Hückel approximation to hold. Therefore
the Debye-Hückel approximation is employed in the following to represent the ion distribution around the
particles under electrophoretic motion.
Instead of modeling a sphere moving at terminal speed U , Henry [83] considered the analogous problem of
a stationary sphere in a steadily moving liquid. To this end, the opposing velocity −U was imposed on the
overall system by setting ~u|_{r→∞} = −U~e_x in the liquid far from the particle. As shown in Fig. 2, a spherical-polar coordinate system fixed at the particle center with radial distance r and polar angle θ was used. Under
the assumption that the electric potential in the EDL is not distorted from its equilibrium distribution by
the applied field and the fluid flow, the potentials ϕ and ψ were linearly superimposed. Therefore, the
electric potential ψ in the diffuse double layer is described by the Poisson-Boltzmann equation and the
applied potential ϕ by a Laplace equation. The BCs for the Laplace equation applied by Henry represent
the insulating particle by homogeneous Neumann BCs at the particle surface and impose the applied field
by the inhomogeneous Neumann condition ∂ϕ/∂x|_{r→∞} = −E∞. For the Poisson-Boltzmann equation, the
ζ-potential at the hydrodynamic radius R and the decaying potential were imposed as given in Eqn. (22).
Making use of the equations for the electric potential, the Stokes equations for steady-state creeping flow
with body force term on the right-hand side
\[ -\mu_f\Delta\vec{u} + \nabla p = -\rho_e\nabla(\varphi + \psi), \quad (24) \]
\[ \nabla\cdot\vec{u} = 0, \quad (25) \]
were solved by Henry [83]. In addition to the BC imposing the opposing velocity far from the particle to
bring the whole system to rest, the no-slip condition ~u|_{r=R} = 0 was applied at the particle surface.
From the flow field around the particle, the force acting on the particle was obtained by integrating
the normal stresses over the sphere surface. To the resulting force that comprises Stokes drag and electric
components, the electrostatic force on the particle due to its fixed surface charge was added. The total force
must vanish at steady motion and was thus equated with zero, resulting in the electrophoretic velocity
\[ \vec{U}_{EP} = \frac{\varepsilon_e}{\mu_f}\underbrace{\left[\psi_R + R^3\left(5R^2\int_\infty^R \frac{\psi}{r^6}\,dr \;-\; 2\int_\infty^R \frac{\psi}{r^4}\,dr\right)\right]}_{=\,\zeta f(\kappa R)}\vec{E}_{ext} \quad (26) \]
obtained by Henry for an insulating particle. The function f (κR) introduced in [83] is usually referred to
as Henry’s function. In [84] the following expression is derived that approximates the integral equations as
\[ f(\kappa R) = \frac{2}{3}\left[1 + \frac{1}{2\left(1 + \frac{2.5}{\kappa R\,(1 + 2e^{-\kappa R})}\right)^{3}}\right], \quad (27) \]
with a relative error below 1 % for all values of κR [84].
In terms of Henry’s function f (κR) [83], the electrophoretic mobility of a spherical, non-conducting
particle reads as
\[ \mu_{EP} = \frac{\varepsilon_e}{\mu_f}\,\zeta\, f(\kappa R). \quad (28) \]
This solution is correct to the first order of the ζ-potential, since the relaxation effect is neglected [85]. With
the definition of the electrophoretic mobility µ_EP := U_EP/E_ext as electrophoretic speed per unit applied field,
the electrophoretic velocity of the particle can be obtained.
Henry’s analytical solution for the electrophoretic velocity of a spherical particle of radius R in an
unbounded electrolyte solution of dynamic viscosity µf with Debye-Hückel parameter κ, subject to an
applied field of strength ~E_ext, reads as
\[ \vec{U}_{EP} = \frac{2\varepsilon_e\zeta}{3\mu_f}\left[1 + \frac{1}{2\left(1 + \frac{2.5}{\kappa R\,(1 + 2e^{-\kappa R})}\right)^{3}}\right]\vec{E}_{ext}, \quad (29) \]
according to the expression for the electrophoretic mobility of Ohshima given in Eqn. (27).
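The closed-form expressions of Eqns. (27) and (28) are straightforward to evaluate. The following Python sketch (our own function names) computes Henry's function in Ohshima's approximation and the resulting electrophoretic velocity; the example values are assumptions.

```python
import math

def henry_f(kappa_R):
    """Ohshima's approximation of Henry's function, Eqn. (27); relative error below 1%."""
    inner = 1.0 + 2.5 / (kappa_R * (1.0 + 2.0 * math.exp(-kappa_R)))
    return (2.0 / 3.0) * (1.0 + 1.0 / (2.0 * inner**3))

def electrophoretic_velocity(eps_e, zeta, mu_f, kappa_R, E_ext):
    """Electrophoretic velocity via Eqn. (28): U_EP = (eps_e/mu_f) * zeta * f(kappa R) * E_ext."""
    return (eps_e / mu_f) * zeta * henry_f(kappa_R) * E_ext

# Limiting cases: f -> 2/3 (Hueckel) for kappa*R -> 0, f -> 1 (Smoluchowski) for kappa*R -> infinity
print(henry_f(1e-6))   # ~0.6667
print(henry_f(1e6))    # ~1.0
# Example with assumed values: water, zeta = -25 mV, kappa*R = 10, E = 10 kV/m
print(electrophoretic_velocity(78.5 * 8.854e-12, -0.025, 1.0e-3, 10.0, 1.0e4))
```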
For the electrophoresis simulations in this article, the particle charge must be known to compute the
electrostatic force on the particle. Analytical solutions for electrophoretic motion such as Henry’s equation,
however, are typically given in terms of the ζ-potential, which is defined at the slip surface between the
compact and diffuse EDL layer. Since the particle charge is acquired as a surface charge, for a given ζ-potential the surface charge (density) enclosed by the slip surface is therefore needed in the simulations.
The surface charge density is hereby obtained from the overall surface charge bound at the fluid-particle
interface and in the Stern layer. This approach is justified by the fact that the electric potential at the Stern
surface and the ζ-potential can in general be assumed to be identical [86].
The relation of the surface density σs to the ζ-potential is obtained from the Neumann BC on the surface
of the insulating particle [85]
\[ \sigma_s = -\varepsilon_e\left.\frac{d\psi}{dr}\right|_{r=R} \quad (30) \]
in case these electrical properties do not vary in angular direction. This condition holds for insignificant
permittivity of the insulating particle compared to the fluid permittivity εe . Alternatively, this relation can
be derived from the electroneutrality condition [46]. With the spatial distribution of ψ around the spherical
particle according to Eqn. (23) the ζ–σs relationship follows from Eqn. (30) as
\[ \sigma_s = \frac{q_s}{4\pi R^2} = \varepsilon_e\,\zeta\,\frac{1 + \kappa R}{R}. \quad (31) \]
For a spherical particle with an EDL potential ψ described by the spherical symmetric Poisson-Boltzmann
equation, the more general ζ–σs relationship
\[ \sigma_s = \frac{2\varepsilon_e\kappa k_B T}{z e}\,\sinh\!\left(\frac{z e\zeta}{2 k_B T}\right)\sqrt{1 + \frac{2}{\kappa R}\,\frac{1}{\cosh^2\!\left(\frac{z e\zeta}{4 k_B T}\right)} + \frac{8\ln\!\left[\cosh\!\left(\frac{z e\zeta}{4 k_B T}\right)\right]}{(\kappa R)^2\,\sinh^2\!\left(\frac{z e\zeta}{2 k_B T}\right)}} \quad (32) \]
for 1-1 and 2-1 electrolyte solutions is derived in [87]. The relative error of this approximation w.r.t. the
exact numerical results computed by [88] is below 1 % for 0.5 ≤ κR < ∞ [85]. This relationship is applied in
the electrophoresis simulation validation in Sec. 6.3 to compute the particle charge for a given ζ-potential.
The applied ζ–σs relationship is derived for electric potentials governed by the spherical Poisson-Boltzmann
equation and is thus more general than the ζ–σs relationship (31) for the Debye-Hückel approximation. For
the simulation parameters used in Sec. 6.2, the deviation of σs for the general relationship w.r.t. the exact
value of the Debye-Hückel approximation is about 0.2 % and hence negligible.
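To illustrate how a particle charge is obtained for a prescribed ζ-potential, the following Python sketch implements both the Debye-Hückel relationship of Eqn. (31) and the more general relationship of Eqn. (32); the function names and example values are our own assumptions.

```python
import math

E_CHARGE = 1.602e-19  # elementary charge [C]
K_B = 1.381e-23       # Boltzmann constant [J/K]

def sigma_dh(eps_e, zeta, kappa, R):
    """Surface charge density from the Debye-Hueckel relation, Eqn. (31)."""
    return eps_e * zeta * (1.0 + kappa * R) / R

def sigma_pb(eps_e, zeta, kappa, R, z=1, T=298.0):
    """Surface charge density from the Poisson-Boltzmann-based relation, Eqn. (32)."""
    a = z * E_CHARGE * zeta / (2.0 * K_B * T)
    b = z * E_CHARGE * zeta / (4.0 * K_B * T)
    root = math.sqrt(1.0 + 2.0 / (kappa * R * math.cosh(b)**2)
                     + 8.0 * math.log(math.cosh(b)) / ((kappa * R)**2 * math.sinh(a)**2))
    return 2.0 * eps_e * kappa * K_B * T / (z * E_CHARGE) * math.sinh(a) * root

# Example (assumed values): zeta = -25 mV, Debye length 9.6 nm, R = 1 micron
eps_e, zeta, kap, R = 78.5 * 8.854e-12, -0.025, 1.0 / 9.6e-9, 1.0e-6
q_dh = sigma_dh(eps_e, zeta, kap, R) * 4.0 * math.pi * R**2  # total charge q_s
q_pb = sigma_pb(eps_e, zeta, kap, R) * 4.0 * math.pi * R**2
print(q_dh, q_pb)  # close for low zeta and kappa*R >> 1
```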
4. Numerical Modeling
4.1. Lattice Boltzmann Method with Forcing
The LBM is a mesoscopic method for the numerical simulation of fluid dynamics based on kinetic theory
of gases. This method statistically describes the dynamics of ensembles of fluid molecules in terms of particle
distribution functions (PDFs) that represent the spatial and velocity distribution of molecules in phase space
over time. The temporal and spatial variation of PDFs, balanced by molecular collisions, is described by
the Boltzmann equation. The solution of this equation converges towards the Maxwell-Boltzmann velocity
distribution of molecules in local thermodynamic equilibrium. For small deviations from this equilibrium,
the Navier-Stokes equations can be derived from the Boltzmann equation by means of a Chapman-Enskog
expansion (cf. [89, 90]).
In the LBM, the phase space is discretized into a Cartesian lattice Ω_dx ⊂ R^D of dimension D with
spacing dx, and a set of Q discrete velocities ~c_q ∈ R^D, q ∈ {1, . . . , Q}. Moreover, time is discretized as
T_dt = {t_n : n = 0, 1, 2, . . .} ⊂ R₀⁺, with a time increment of dt = t_{n+1} − t_n. The velocities ~c_q are chosen
such that within a time increment, molecules can move to adjacent lattice sites or stay at a site. Associated
with each of these velocities is a PDF f_q : Ω_dx × T_dt ↦ R. A forward-difference discretization in time and
with each of these velocities is a PDF fq : Ωdx × Tdt 7→ R. A forward-difference discretization in time and
an upwind discretization in space [91] result in the discrete lattice Boltzmann equation
\[ f_q(\vec{x}_i + \vec{c}_q dt,\, t_n + dt) - f_q(\vec{x}_i, t_n) = dt\,C_q + dt\,F_q, \quad (33) \]
with lattice site positions ~x_i, discrete collision operator C_q, and discrete body-force term F_q. This equation
describes the advection of PDFs between neighboring lattice sites and subsequent collisions.
In general, the collision operator can be represented in terms of the collision matrix S as dt C_q = Σ_j S_qj (f_j − f_j^eq) (cf. [92]), with the vector ~f := (f_0, f_1, . . . , f_{Q−1})^⊤ of the PDFs f_q, and ~f^eq of the equilibrium distributions f_q^eq. The latter are obtained from a low Mach number expansion of the Maxwell-Boltzmann distribution [89]. With a representation of the macroscopic fluid density as ρ_f = ρ_0 + δρ in
terms of a reference density ρ_0 and a density fluctuation δρ, the equilibrium distribution function for the
incompressible LBM is derived in [93] as
\[ f_q^{eq}(\rho_0, \vec{u}) = w_q\left[\rho_f + \rho_0\left(\frac{\vec{c}_q\cdot\vec{u}}{c_s^2} + \frac{(\vec{c}_q\cdot\vec{u})^2}{2c_s^4} - \frac{\vec{u}\cdot\vec{u}}{2c_s^2}\right)\right], \quad (34) \]
where ‘·’ denotes the standard Euclidean scalar product. This distribution function depends on ρf , the
macroscopic fluid velocity ~u, and lattice-model dependent weights wq . At each instant of time, ρf and ~u are
given by moments of PDFs as
\[ \rho_f(\vec{x}_i, t) = \sum_q f_q(\vec{x}_i, t), \qquad \vec{u}(\vec{x}_i, t) = \frac{1}{\rho_0}\sum_q \vec{c}_q f_q(\vec{x}_i, t). \quad (35) \]
Moreover, the pressure p is given as p(~x_i, t) = c_s² ρ_f(~x_i, t) according to the equation of state for an ideal
gas. We employ the D3Q19 model of [94] with thermodynamic speed of sound c_s = c/√3 for the lattice
velocity c = dx/dt. For this model, the weights w_q are: w_1 = 1/3, w_{2,...,7} = 1/18, and w_{8,...,19} = 1/36. As
discussed in [93], f_q^eq recovers the incompressible Navier-Stokes equation with at least second-order accuracy
in the Mach number Ma := |~u|/c_s, i. e., O(Ma²). In LBM simulations, the kinematic fluid viscosity ν_f is generally
determined by a dimensionless relaxation time τ of the collision operator as
\[ \nu = \left(\tau - \frac{1}{2}\right) c_s^2\, dt. \quad (36) \]
As shown in [95], with this definition the LBM is second-order accurate in space and time.
Among the different collision operators available for the LBM, we employ the two-relaxation-time (TRT)
collision operator of [29, 30]
\[ \sum_j S_{qj}\left(f_j - f_j^{eq}\right) = \lambda_e\left(f_q^e - f_q^{eq,e}\right) + \lambda_o\left(f_q^o - f_q^{eq,o}\right), \quad (37) \]
with the relaxation parameters λe and λo for even- and odd-order non-conserved moments, respectively.
Alternative collision operators either have disadvantages regarding stability and accuracy [96] such as the
BGK model [97] or are computationally more costly such as the MRT operator [92] or the cumulant operator
[98]. To ensure stability, both relaxation parameters should be within the interval ] − 2, 0[, cf. [99, 30].
The even relaxation parameter is related to the dimensionless relaxation time by λ_e = −1/τ and therefore
determines the fluid viscosity. The free parameter λo is set to λo = −8(2 − 1/τ )/(8 − 1/τ ) in this article,
which prevents τ -dependent boundary locations for no-slip BCs as they arise for the BGK operator. Instead,
walls aligned with the lattice dimensions are fixed half-way between two lattice sites, as shown in [100]. For
the TRT operator, the PDFs are decomposed as f_q = f_q^e + f_q^o into their even and odd components
\[ f_q^e = \tfrac{1}{2}(f_q + f_{\bar{q}}), \qquad f_q^o = \tfrac{1}{2}(f_q - f_{\bar{q}}), \qquad f_q^{eq,e} = \tfrac{1}{2}(f_q^{eq} + f_{\bar{q}}^{eq}), \qquad f_q^{eq,o} = \tfrac{1}{2}(f_q^{eq} - f_{\bar{q}}^{eq}), \]
with ~c_{q̄} = −~c_q. The local equilibrium distribution function in Eqn. (34) is then given by
\[ f_q^{eq,e} = w_q\left[\rho_f - \frac{\rho_0}{2c_s^2}(\vec{u}\cdot\vec{u}) + \frac{\rho_0}{2c_s^4}(\vec{c}_q\cdot\vec{u})^2\right], \quad (38) \]
\[ f_q^{eq,o} = w_q\,\frac{\rho_0}{c_s^2}(\vec{c}_q\cdot\vec{u}). \quad (39) \]
At each time step t_n ∈ T_dt the lattice Boltzmann method performs a collide and a stream step:
\[ \tilde{f}_q(\vec{x}_i, t_n) = f_q(\vec{x}_i, t_n) + \lambda_e\left[f_q^e(\vec{x}_i, t_n) - f_q^{eq,e}(\vec{x}_i, t_n)\right] + \lambda_o\left[f_q^o(\vec{x}_i, t_n) - f_q^{eq,o}(\vec{x}_i, t_n)\right] \quad (40) \]
\[ f_q(\vec{x}_i + \vec{e}_q,\, t_n + dt) = \tilde{f}_q(\vec{x}_i, t_n) + dt\,F_q, \quad (41) \]
where f˜q denotes the post-collision state and ~eq = ~cq dt a discrete lattice direction.
In the stream step, the product of dt and the forcing term Fq is added to the post-collision PDFs. The
term Fq considers the external effect of body forces acting on the fluid. In this article, the discrete forcing
term according to Luo [101] is employed as
\[ F_q = w_q\left(\frac{\vec{c}_q - \vec{u}}{c_s^2} + \frac{(\vec{c}_q\cdot\vec{u})}{c_s^4}\,\vec{c}_q\right)\cdot\vec{f}_b, \quad (42) \]
with f~b representing the electrical body force per unit volume. The forcing terms lead to an additional
term in the continuity equation that arises for spatially varying external forces, as shown in [102, 103]. This
additional term can be removed by incorporating the external force in the momentum density definition as
\[ \vec{u} = \frac{1}{\rho_0}\left(\sum_q f_q\vec{c}_q + \frac{dt}{2}\vec{f}_b\right). \quad (43) \]
Thus, [102] suggests using the forcing term (42) in combination with the modified momentum density
definition in Eqn. (43) for the BGK. We therefore use this forcing term with second-order accuracy, together
with the modified momentum density for the resulting velocity ~u.
To increase the computational efficiency of the implementation, the compute-intensive collide step and
the memory-intensive stream step are fused to a stream-collide step. In the simulations presented in this
article, no-slip and free-slip BCs are applied as described in [32].
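To make the TRT update of Eqns. (38) to (41) concrete, the following NumPy sketch performs the collision for a single lattice cell. It is a minimal illustration with our own names and assumed D3Q19-style inputs, not the waLBerla implementation.

```python
import numpy as np

def trt_collide(f, f_eq, inv_dirs, lambda_e, lambda_o):
    """One TRT collision, Eqns. (38)-(40), for the Q PDFs of a single cell.

    f, f_eq  : arrays of shape (Q,), current and equilibrium PDFs
    inv_dirs : inv_dirs[q] is the index q-bar with c_{q-bar} = -c_q
    """
    f_bar, f_eq_bar = f[inv_dirs], f_eq[inv_dirs]
    f_e, f_o = 0.5 * (f + f_bar), 0.5 * (f - f_bar)               # even/odd PDF parts
    f_eq_e, f_eq_o = 0.5 * (f_eq + f_eq_bar), 0.5 * (f_eq - f_eq_bar)
    return f + lambda_e * (f_e - f_eq_e) + lambda_o * (f_o - f_eq_o)

# Relaxation parameters as described in the text (tau from Eqn. (36))
tau = 0.8
lambda_e = -1.0 / tau
lambda_o = -8.0 * (2.0 - 1.0 / tau) / (8.0 - 1.0 / tau)
```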
4.2. Momentum Exchange Approach
To model the fluid-particle interaction and the resulting hydrodynamic interactions of the particles, the
momentum exchange approach suggested by [27] is employed in this article. This approach computes the
momentum exchange between the fluid and the suspended rigid particles from PDFs surrounding these
solid objects. The implementation of this approach in waLBerla has recently been applied to simulate
fluid-particle interactions also at Reynolds numbers beyond the Stokes regime, as presented in [104, 105, 106].
For the momentum exchange approach, the solid particles are mapped onto the lattice by treating
each cell whose center is overlapped by a particle as a moving obstacle cell. All other lattice cells are fluid cells,
resulting in a staircase approximation of the particle surfaces. These surfaces are represented by surface cells
indicated by subscript s in the following. On the fluid cells denoted by subscript F , the LBM is applied. To
model the momentum transfer from the particles to the fluid, the velocity bounce-back BC
\[ f_{\bar{q}}(\vec{x}_F,\, t_n + dt) = \tilde{f}_q(\vec{x}_F, t_n) - 2\,\frac{\omega_q}{c_s^2}\,\rho_0\,\vec{c}_q\cdot\vec{u}_s \quad (44) \]
is applied at fluid cells with position ~xF = ~xs + ~eq̄ adjacent to a surface cell at ~xs . This boundary condition
introduced in [26] matches the fluid velocity to the local particle surface velocity ~us .
From the sum of all force contributions due to the momentum transfer from fluid cells to neighboring
surface cells (cf. [32]), the overall hydrodynamic force on the particle can be obtained according to [27] as
\[ \vec{F}_h = \sum_s\sum_{q\in D_s}\left(2\tilde{f}_q(\vec{x}_F, t_n) - 2\,\frac{\omega_q}{c_s^2}\,\rho_0\,\vec{c}_q\cdot\vec{u}_s\right)\vec{c}_q\,\frac{dx^3}{dt}. \quad (45) \]
Here, D_s is the set of direction indices q in which a given particle surface cell s is accessed from an adjacent
fluid cell. Analogously, the overall torque ~M_h is given by substituting ~c_q × (~x_s − ~x_C) for the last ~c_q term in
Eqn. (45), with ~x_C representing the particle's center of mass.
The mapping of the solid particles onto the lattice results in fluid cells appearing and disappearing as
the particles move. Therefore, at uncovered lattice sites the PDFs must be reconstructed. We set the PDFs
at those fluid cells to the equilibrium distribution f^eq(ρ_0, ~u_s(~x_s(t_n − dt))) according to Eqn. (34), dependent
on the local particle surface velocity from the previous time step.
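The per-link structure of Eqns. (44) and (45) can be summarized in a few lines. The Python sketch below is our own simplified illustration (the link list, array layout, and names are assumptions, not the waLBerla data structures); it applies the velocity bounce-back while accumulating the hydrodynamic force.

```python
import numpy as np

def momentum_exchange(f_post, links, c, w, inv_dirs, u_s, rho0, cs2, dx, dt):
    """Apply Eqn. (44) at all particle-boundary links and sum the force, Eqn. (45).

    f_post : post-collision PDFs, shape (num_cells, Q)
    links  : iterable of (i_fluid, q) pairs; direction q points from the fluid cell into the particle
    c      : lattice velocities, shape (Q, 3); w : lattice weights, shape (Q,)
    u_s    : local particle surface velocity (assumed constant per particle in this sketch)
    """
    force = np.zeros(3)
    for i_fluid, q in links:
        corr = 2.0 * w[q] / cs2 * rho0 * np.dot(c[q], u_s)
        # velocity bounce-back, Eqn. (44): reflected PDF with surface-velocity correction
        # (written in place as a simplification of the streaming of f_qbar)
        f_post[i_fluid, inv_dirs[q]] = f_post[i_fluid, q] - corr
        # momentum exchanged across this link contributes to the force, Eqn. (45)
        force += (2.0 * f_post[i_fluid, q] - corr) * c[q] * dx**3 / dt
    return force
```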
4.3. Finite Volume Discretization for Electric Potential Equations
To solve the Debye-Hückel approximation, a cell-centered finite volume scheme is applied on the Cartesian
lattice Ω_dx introduced in Sec. 4.1. Associated with this lattice of spacing dx that represents the computational
domain Ω ⊂ R³ is the three-dimensional cell-centered grid G_dx defined (cf. [107]) as
\[ G_{dx} := \left\{ \vec{x}_i \in \Omega \;\middle|\; \vec{x}_i = \begin{pmatrix} j - 1/2 \\ k - 1/2 \\ m - 1/2 \end{pmatrix} dx,\; i \in I,\; (j, k, m) \in J \right\}, \quad (46) \]
with Ω_dx = Ω ∩ G_dx. For indexing of lattice cells by tuples (j, k, m) of cell indices in the three spatial
dimensions, the index set J := {(j, k, m) | j = 1, . . . , l_x; k = 1, . . . , l_y; m = 1, . . . , l_z} is introduced. Here
l_x, l_y, and l_z represent the numbers of cells in x-, y-, and z-direction, respectively. The set J is related
to the set of single cell indices I := {i | i = 1, . . . , l_x · l_y · l_z} used for the LBM by a bijective mapping
g : I → J, according to Eqn. (46).
The finite volume discretization of the Debye-Hückel approximation Eqn. (18) includes volume integration
over each lattice cell Ω_i = Ω_jkm := [~x_{j−1,k,m}, ~x_{j,k,m}] × [~x_{j,k−1,m}, ~x_{j,k,m}] × [~x_{j,k,m−1}, ~x_{j,k,m}] and applying
the divergence theorem to the Laplace operator ∆ = ∇·∇, resulting in
\[ -\oint_{\partial\Omega_i} \nabla\psi(\vec{x})\cdot d\vec{\Gamma}_i + \kappa^2\int_{\Omega_i} \psi(\vec{x})\,d\vec{x} = 0 \quad \forall\,\Omega_i \in \Omega_{dx}. \quad (47) \]
Here, ∂Ω_i denotes the closed surface of the cell, and d~Γ_i a directed surface element. The cell surface
consists of six planar faces with constant outward unit normal vectors ~niq (q = 1, . . . , 6). Therefore, the
surface integral can be decomposed into a sum of integrals [108] as
\[ -\oint_{\partial\Omega_i} \nabla\psi(\vec{x})\cdot d\vec{\Gamma}_i = -\sum_{q=1}^{6}\int_{\partial\Omega_{iq}} \nabla\psi(\vec{x})\cdot\vec{n}_{iq}\,d\Gamma_{iq}, \quad (48) \]
where q represents LBM direction indices introduced in Sec. 4.1, and ∂Ωiq is the common face with the neighboring cell in direction q. The gradients ∇ψ(~xi ) · ~niq in normal direction of the faces ∂Ωiq are approximated
at the face centers by central differences of ψ from a neighboring and the current cell as
\[ \left.\nabla\psi(\vec{x}_i)\cdot\vec{n}_{iq}\right|_{\vec{x}_i + \frac{1}{2}\vec{e}_q} \approx \frac{\psi(\vec{x}_i + \vec{e}_q) - \psi(\vec{x}_i)}{dx}. \quad (49) \]
Here, ~eq represents the corresponding lattice direction introduced in Sec. 4.1.
Substituting the approximation of the normal fluxes across the surfaces of area dΓiq = dx2 , Eqn. (49) into
Eqn. (48) results in
\[ -\oint_{\partial\Omega_i} \nabla\psi(\vec{x})\cdot d\vec{\Gamma}_i \approx -\sum_{q=1}^{6} \frac{\psi(\vec{x}_i + \vec{e}_q) - \psi(\vec{x}_i)}{dx}\,dx^2. \quad (50) \]
Applying the above finite volume discretization to the linear term of the Debye-Hückel approximation
results in
\[ \kappa^2\int_{\Omega_i} \psi(\vec{x})\,d\vec{x} \approx \kappa^2\,\psi(\vec{x}_i)\,dx^3. \quad (51) \]
This additional term enters the central element of the resulting seven-point stencil Ξ_dx^DHA as
\[ \Xi_{dx}^{DHA} = \frac{1}{dx^2}\left[\; \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},\; \begin{pmatrix} 0 & -1 & 0 \\ -1 & 6 + \kappa^2 dx^2 & -1 \\ 0 & -1 & 0 \end{pmatrix},\; \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \right], \quad (52) \]
and the right-hand side for each unknown is zero.
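To illustrate how the stencil of Eqn. (52) is used by an iterative solver, the following NumPy sketch performs one SOR sweep for the homogeneous Debye-Hückel system on interior cells; the halo treatment and names are our own simplifying assumptions, not the waLBerla solver module.

```python
import numpy as np

def sor_sweep_dha(psi, kappa_dx, omega=1.6):
    """One SOR sweep for the seven-point stencil of Eqn. (52) with zero right-hand side.

    psi      : potential field with a one-cell halo, shape (nx+2, ny+2, nz+2)
    kappa_dx : dimensionless product kappa*dx
    """
    center = 6.0 + kappa_dx**2  # central stencil element times dx^2
    nx, ny, nz = (s - 2 for s in psi.shape)
    for j in range(1, nx + 1):
        for k in range(1, ny + 1):
            for m in range(1, nz + 1):
                neigh = (psi[j-1, k, m] + psi[j+1, k, m] + psi[j, k-1, m]
                         + psi[j, k+1, m] + psi[j, k, m-1] + psi[j, k, m+1])
                psi_new = neigh / center              # local solve of the 7-point equation
                psi[j, k, m] += omega * (psi_new - psi[j, k, m])
    return psi
```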
4.4. Parametrization for Electrokinetic Flows
Numerical simulations are typically performed in terms of dimensionless parameters. To ensure consistently good numerical accuracy independent of the simulated scales, the physical quantities are mapped to a
computationally reasonable numerical value range. In LBM simulations, the quantities are usually expressed
in lattice units. Therefore, physical values must be converted to lattice values before the simulation and
vice versa to obtain physical values from simulation results. In the following the lattice unit (LU) system
employed in this article is presented, providing a common parameterization also for further (electrokinetic)
flow scenarios [76].
Physical simulation parameters are given in terms of the international system of quantities (ISQ) associated with the international system of units (Système International d'Unités, SI [109]) [110]. The value of a
quantity is defined as product of a numerical value and a unit, e. g. 10−6 m. The SI system comprises the
base units displayed in Tab. 1, together with the corresponding mutually independent base quantities length,
time, mass, electric current, thermodynamic temperature, amount of substance, and luminous intensity.
Moreover, derived units such as N = kg·m/s² (Newton) are defined in the SI system as products of powers of
base units [109].
Table 1: Physical and LBM base quantities for electrokinetic simulations.

| kind of quantity | length | time | mass | electr. | thermodyn. | chem. | photometr. |
|---|---|---|---|---|---|---|---|
| ISQ quantity | x | t | m | I | T | n | Iv |
| SI unit | m | s | kg | A | K | mol | cd |
| LBM quantity | x | t | ρ | Φ | – | – | – |
| lattice unit | dx | dt | ρ0 | V | – | – | – |

For LBM simulations the base quantities are length, time, and mass density. The corresponding lattice
base units are the (physical) lattice spacing dx, time increment dt, and fluid reference density ρ0. Therefore,
the numerical values of these quantities in lattice units become unity, as shown in Tab. 2. The lattice
parameters representing these dimensionless numerical values in LUs are indicated with subscript L in
the following, e. g. dxL for the lattice spacing. Performing LBM computations on such normalized lattice
parameters saves numerical operations: Additional scaling factors are avoided, e. g. in the LBM stream-collide step, when computing the equilibrium distribution function given in Eqn. (34) (cf. c_L = dx_L/dt_L = 1)
or the macroscopic velocity according to Eqn. (35) (cf. ρ0,L = 1).
For the electrokinetic simulations, the electric potential Φ is chosen as electric base quantity. This quantity, however, is not scaled on the lattice but keeps its numerical value that typically lies in a range not too
far from unity. In contrast to the SI system, the LU system for electrokinetic simulations requires no base
units corresponding to the thermodynamic temperature T or the amount of substance n:
In the simulations, temperature appears only in combination with the Boltzmann constant as energy,
i. e., E = kB T , with the derived unit [E] = [m] [x]2 /[t]2 . With the relation of mass and mass density
[m] = [ρ0 ] [x]3 , this unit can be represented in lattice base units (see Tab. 1).
Moreover, by representing the molar concentration with unit [c_i] = mol/l in terms of the number density as
n_i = c_i N_A with Avogadro's number N_A = 6.022 14 · 10²³ mol⁻¹, the unit 'mol' cancels out, yielding [n_i] = [x]⁻³.
With the choice of the potential Φ as base quantity, the electric current I becomes a derived quantity
in the LU system. The unit of I can be derived from the energy in electrical units [E] = [I] [Φ] [t] and the
above definition of energy in terms of lattice units [E] = [ρ0 ] [x]5 /[t]2 . Equating both relations results in the
derived lattice unit
\[ [I] = \frac{[\rho_0]\,[x]^5}{[\Phi]\,[t]^3}. \quad (53) \]
In Tab. 2 different physical quantities and their SI units are displayed, as well as their representation by
(dimensionless) lattice parameters and their lattice units. The conversion of physical quantities to lattice
Table 2: Relation of physical quantities and lattice parameters for electrokinetic simulations.

| physical quantity | SI unit | lattice parameter | numerical value | lattice unit |
|---|---|---|---|---|
| dx (lattice spacing) | m | dx_L | (1/dx)·dx (= 1) | dx |
| dt (time increment) | s | dt_L | (1/dt)·dt (= 1) | dt |
| ρ0 (fluid density) | kg/m³ | ρ0,L | (1/ρ0)·ρ0 (= 1) | ρ0 |
| Φ (electr. potential) | V | Φ_L | (1/V)·Φ | V |
| L (length) | m | L_L | (1/dx)·L | dx |
| ν (kinem. viscosity) | m²/s | ν_L | (dt/dx²)·ν | dx²/dt |
| ~u (velocity) | m/s | ~u_L | (dt/dx)·~u | dx/dt |
| m (mass) | kg | m_L | (1/(ρ0 dx³))·m | ρ0 dx³ |
| F~ (force) | kg m/s² | F~_L | (dt²/(ρ0 dx⁴))·F~ | ρ0 dx⁴/dt² |
| I (electr. current) | A | I_L | (V dt³/(ρ0 dx⁵))·I | ρ0 dx⁵/(dt³ V) |
| e (elem. charge) | A s | e_L | (V dt²/(ρ0 dx⁵))·e | ρ0 dx⁵/(dt² V) |
| ε0 (vacuum permittivity) | A s/(V m) | ε0,L | (V² dt²/(ρ0 dx⁴))·ε0 | ρ0 dx⁴/(V² dt²) |
| ~E (electr. field) | V/m | ~E_L | (dx/V)·~E | V/dx |
| E (energy) | J | E_L | (dt²/(ρ0 dx⁵))·E | ρ0 dx⁵/dt² |
units requires their division by powers of the LBM base quantities with the corresponding SI units. Since the
potential Φ has the same numerical value in physical and lattice units, the LBM base unit of the potential is
simply '1 V'. Therefore, the derived lattice unit 'Ampere' for the electric current is given by A = ρ0 dx⁵/(V dt³) (see
Eqn. (53)). The corresponding scale factors for converting the physical parameters (e. g. ν) to the associated
lattice parameters (e. g. νL ) are shown in Tab. 2. Multiplication with the inverse scale factors converts the
lattice parameters back to physical quantities.
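The scale factors of Tab. 2 translate directly into code. The following Python sketch (our own helper with assumed example values) converts a kinematic viscosity and an electric field to lattice units and derives the relaxation time from Eqn. (36):

```python
# LBM base quantities (assumed example values)
dx = 1.0e-7        # lattice spacing [m]
dt = 1.0e-9        # time increment [s]
rho0 = 1.0e3       # fluid reference density [kg/m^3]
# The electric potential keeps its numerical value (base unit 1 V)

nu_phys = 1.0e-6   # kinematic viscosity of water [m^2/s]
E_phys = 1.0e4     # applied electric field [V/m]

nu_L = nu_phys * dt / dx**2   # scale factor dt/dx^2 from Tab. 2
E_L = E_phys * dx             # scale factor dx/V from Tab. 2 (numerical value of V unchanged)

# Relaxation time from Eqn. (36) with c_s^2 = 1/3 in lattice units
tau = 3.0 * nu_L + 0.5
print(f"nu_L = {nu_L}, E_L = {E_L}, tau = {tau}")
```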
5. Extension of waLBerla for Electrophoresis Simulations
Electrophoresis simulations require the mutual coupling of fluid dynamics, rigid body dynamics, and
electro-statics, as shown in Fig. 1. In addition to electrostatic and hydrodynamic interactions, the applied
field acts on the EDL charge and thereby affects fluid flow and particle motion. For the electrophoresis
algorithm presented below, the equilibrium description of the EDL potential in terms of the linear Debye-Hückel equation is employed. Therefore, the predominant retardation effect is recovered in the simulations.
The applied potential ϕ and the EDL potential ψ are linearly superimposed (cf. Henry’s equation, Sec. 3),
which is valid for weak applied fields when the EDL distortion by the field is negligible. Thus, the applied
electric field can be imposed directly, without solving the associated Laplace equation. In the following the
main concepts of the waLBerla framework are described, together with the functionality for electrophoresis
simulations implemented therein.
5.1. Design Concepts of waLBerla
WaLBerla is a framework for massively parallel multiphysics simulations with an MPI-based distributed
memory parallelization that is specifically designed for supercomputers. The main software design goals of
this framework are flexibility to combine models of different effects, extensibility to allow the incorporation
of further effects and details, e. g., for electrokinetic flows, and generality to support further applications [76].
These goals are reached by integrating the coupled simulation models into waLBerla in a modular fashion
that avoids unnecessary dependencies between the modules. This way, the modules can be augmented
by more sophisticated models or models tailored to a certain application, and functionality from different
modules can be combined flexibly. The modular code structure also provides excellent maintainability, since
modifications of the code in one module do not affect other modules. Developers can therefore efficiently
locate faulty modules and find bugs inside these modules, also by systematically utilizing automatic tests.
In addition to modules, the waLBerla code structure comprises a core for sequence control that initializes data structures, performs the time stepping, and finalizes the simulation on each parallel process.
By means of applications, multiphysics simulations can be defined by assembling the associated functionality and coupled models from the modules. The coupling strategy for multiphysics simulations is based
on accessing mutually dependent data structures (see [31] for more details). These data structures are
defined in the modules that implement models for the different physical effects. Also infrastructural and
utility functionality is encapsulated in modules, e. g., for domain setup, MPI communication, BC handling,
parameterization, or simulation data output. For parallel simulations the discretized simulation domain is
decomposed into equally sized blocks of cells that can be assigned to different parallel processes. In this block
concept, each block contains a layer of surrounding ghost cells that is needed for BC treatment and parallelization. For parallelization, cell data of neighboring processes is copied to the ghost layer, dependent on
the data dependencies of the unknowns located on a given block. Moreover, metadata of a block specifies its
location in the simulation domain or its rank for MPI communication. The communication concept provides
a simple and flexible communication mechanism tailored to simulations on Cartesian grids and facilitates
various communication patterns. Individual work steps of a simulation algorithm are specified as sweeps that
are executed on a block-parallel level. The sweep concept defines a structure in which callable objects (i. e.
kernels) implemented in the modules can be specified at compile time. By means of dynamic application
switches, specific kernels tailored to a given computer architecture or implementing a desired model variant
can be selected at run time. For time-dependent simulations, the sweeps are organized in a timeloop that
specifies the order in which the individual work steps are executed at each time step. To facilitate iterative
solvers, sweeps can also be nested to repeatedly perform a grid traversal until a termination criterion is met.
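As an illustration of the sweep and timeloop concept, a strongly simplified stand-alone sketch is given below; the types and names are hypothetical and do not reproduce the actual waLBerla API:

    #include <functional>
    #include <string>
    #include <utility>
    #include <vector>

    // A sweep is a callable work step that is executed once per time step
    // (in waLBerla, once per block on each parallel process).
    using Sweep = std::function<void()>;

    struct Timeloop {
        std::vector<std::pair<std::string, Sweep>> sweeps;
        void add(std::string name, Sweep s) { sweeps.emplace_back(std::move(name), std::move(s)); }
        void run(int timeSteps) {
            for (int t = 0; t < timeSteps; ++t)
                for (auto& [name, sweep] : sweeps) { (void)name; sweep(); }  // fixed order per time step
        }
    };

    int main() {
        Timeloop loop;
        loop.add("set potential BCs",  [] { /* map particles, set BC flags and values */ });
        loop.add("SOR solver",         [] { /* nested sweep until termination criterion */ });
        loop.add("LBM stream-collide", [] { /* fused kernel with forcing term */ });
        loop.run(10);
    }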
The boundary condition concept for handling multiple physical fields, numerical methods, and governing
equations, introduced in [31], is applied in this article for moving obstacles with electric BCs. This concept
relies on flags to indicate for each boundary cell the kind of boundary treatment that is to be performed,
with an individual BC flag for each condition. Moreover, cells adjacent to a boundary are indicated with a
nearBC flag and non-boundary cells with a nonBC flag. Individual sets of these flags are defined for each
governing equation. Due to specific LBM requirements, the boundary handling is performed such that the
BCs are fulfilled when a boundary cell is accessed from a nearBC cell in the subsequent sweep. The abstract
boundary handling is implemented in the bc module and provides functionality for all BCs, which are handled either with direct or direction-dependent treatment.
either as direct or direction-dependent treatment. In direction-dependent treatment the BC value is set
at a boundary cell dependent on the value at a neighboring cell, whereas in direct BC treatment the BC
value is directly set at a boundary cell [31]. The actual boundary handling functionality is implemented in
corresponding BC classes whose functions are executed when the associated BC flag is found.
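A reduced sketch of such flag-based dispatch is shown below (stand-alone and hypothetical; the flag names follow the text, everything else is simplified):

    #include <cstdint>
    #include <vector>

    // One flag per kind of cell, as described above; stored per lattice cell.
    enum CellFlag : std::uint8_t { nonBC = 1, nearBC = 2, dirichletBC = 4, neumannBC = 8 };

    // Direct treatment: the BC value is set directly at every Dirichlet boundary cell.
    void treatBoundaries(std::vector<std::uint8_t>& flags, std::vector<double>& phi,
                         double dirichletValue) {
        for (std::size_t i = 0; i < flags.size(); ++i)
            if (flags[i] & dirichletBC) phi[i] = dirichletValue;
        // Direction-dependent treatment would instead set the value based on the
        // neighboring nearBC cell in the direction of the BC flag.
    }

    int main() {
        std::vector<std::uint8_t> flags(8, nonBC);
        flags[0] = dirichletBC; flags[1] = nearBC;
        std::vector<double> phi(8, 0.0);
        treatBoundaries(flags, phi, -0.01);  // e.g. a prescribed potential of -10 mV
    }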
The parameterization concept for multiphysics simulations introduced in [76] is based on the conversion
of physical parameters to lattice units before the simulation, as described in Sec. 4.4. This approach ensures
consistent parameters in all waLBerla modules, independent of the underlying physics. Individual modules
can therefore be developed independently w.r.t. the common lattice unit system.
Simulation parameters and BCs are typically provided to waLBerla through an input file. To ensure
a consistent parameter set and a correct mapping of the physical quantities to lattice units, the class
PhysicalCheck has been introduced in waLBerla in [111]. This class checks the simulation parameter set
specified in the input file for completeness and physical validity and converts the parameters to lattice units.
Since the quantities are converted based on the SI system, the unit Ampere is re-defined for PhysicalCheck
according to Eqn. (53) to support simulations including electric effects.
5.2. Overview of waLBerla Modules for Electrophoresis Simulations
In the following an overview of the modules relevant for electrophoresis simulations is given. For fluid
simulations with the LBM, the lbm module implements various kernels for the stream-collide step with the
different collision operators and forcing terms described in Sec. 4.1. The classes for treating the corresponding
BCs are provided by the associated lbm bc module. In the lbm module block-fields of cells are provided for the
PDFs, the velocity, the density, and an external force. The external force field is used for coupling the LBM
to other methods, e. g. via the forces exerted by electric fields on the EDL and the fluid in electrophoresis
simulations. The PDF and velocity field are accessed by moving obstacle module functions for the simulation
of moving particles.
The moving obstacle module facilitates simulations of fluid-particle interactions with the momentum
exchange method by implementing kernels for moving obstacle sweeps and providing the corresponding
data structures. This module furthermore provides setup functions for initializing and connecting the pe
to waLBerla. For the moving obstacle handling, an obstacle-cell relation field is provided that stores for
each lattice cell overlapped by a pe object a pointer to this object. Moreover, from the lbm module the PDF
source field is accessed for the hydrodynamic force computation and the reconstruction of PDFs. In the lbm
velocity field, body velocities are stored and accessed in the moving boundary treatment of the LBM.
For the computation of the electric potential distribution, the lse solver module described in Sec. 5.3 is
employed. This module has been implemented for solving large sparse linear systems of equations as they
arise from the discretization of the electric potential equations (see Sec. 4.3). The corresponding BC classes
are implemented in the pot bc module described in Sec. 5.4. The data structures defined in the lse solver
module, accessed by the application and other modules, include the stencil field representing the system
matrix as well as the scalar fields for the solution and for the RHS.
The electrokin flow module was designed for facilitating electrokinetic flow simulations. This module
provides kernels for coupling the involved methods as well as the setup and parameterization of simulations
such as electrophoresis. The setup includes initializing the stencils and RHS from the lse solver module
according to the finite volume discretization of the Debye-Hückel equation presented in Sec. 4.3. The kernels
for imposing the electric potential BCs on the moving particles and for computing the electrostatic forces
on fluid and particles are described in Sec. 5.5 and Sec. 5.6, respectively. Finally, the coupled algorithm for
electrophoresis provided by the electrokin flow module is presented in Sec. 5.7.
5.3. Solver for Linear Elliptic PDEs with Moving Boundaries
The solver module lse solver in waLBerla for large linear systems of equations has been designed as
an efficient and robust black-box solver on block-structured grids [112, 31, 76]. To specify the problem to
be solved, the application sets up the system matrix, right-hand side, and BC handling.
In the lse solver module, solver sweeps are pre-defined that perform the iterations at the position where
they are added to the timeloop. For a given simulation setup, the employed solver is selected via the input
file where also the solver parameters and BCs are specified. An iterative solver requires a nested sweep
that is executed until a specific convergence criterion is satisfied. For all implemented solvers, these sweeps
share a common structure. This structure is displayed for the SOR sweep solveTimeVaryingBCSOR for moving
boundaries and linear PDEs such as the Debye-Hückel approximation in Fig. 3 as an activity diagram.

Figure 3: Activity diagram for SOR sweep solveTimeVaryingBCSOR with time-varying boundary conditions (BCs), e.g., due to moving particles. The sweep first adapts the stencils and RHS to the BCs; it then iterates, until the maximum number of iterations is reached or the termination criterion is met, over the computation of the residual and its norm and the updates of the 'red' and 'black' unknowns.

For
the employed discretization of the electric potential equations, the system matrix is represented by D3Q7
stencils. These stencils comprise one entry each for the center and the six cardinal directions.
In the setup function for this SOR sweep, the communication for the ghost layer exchange of the solution
field in the initialization phase is set up first. Then the SOR solver sweep is added to the timeloop, and the
kernels for relaxation, communication, and BC treatment are specified as solver sub-sweeps. For parallel
execution, the SOR algorithm is implemented in red-black order.
The filled circle at the top of the diagram in Fig. 3 indicates the starting point of the sweep in the
timeloop. At the beginning of the sweep solveTimeVaryingBCSOR for moving boundaries, the stencils are
constructed to incorporate the BCs according to the present boundary locations. Furthermore, the RHS
is adapted to these BCs before the solver iterations start. For this purpose, a sub-sweep with a kernel
for re-setting the stencils and RHS is executed before the iteration loop, followed by the BC treatment
functions adaptStencilsBC and adaptRHSBC described in Sec. 5.4. Then the standard parallel Red-Black SOR
sweep is performed until the termination criterion is met. This sweep comprises a sub-sweep for computing
the residual and its L2 -norm for the termination criterion, as well as two solver sub-sweeps for the SOR
update of the ‘red’ and the ‘black’ unknowns, respectively. In these sub-sweeps, the quasi-constant stencil
optimization technique introduced in [31] is employed. Based on the residual L2 -norm, the termination
criterion for the simulations performed in this article is provided as residual reduction factor RESRF w.r.t.
the initial norm of the simulation.
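A stripped-down serial version of one such red-black SOR sub-sweep for the D3Q7 stencil might read as follows; this is a sketch only (residual computation, blocks, ghost-layer communication, and the quasi-constant stencil optimization of [31] are omitted, and all names are hypothetical):

    #include <array>
    #include <vector>

    // Solution, RHS, and one 7-entry stencil (center + 6 cardinal neighbors) per cell.
    struct Grid {
        int nx, ny, nz;
        std::vector<double> phi, rhs;
        std::vector<std::array<double, 7>> st;  // [0]=center, [1..6]=W,E,S,N,B,T
        int idx(int i, int j, int k) const { return (k * ny + j) * nx + i; }
    };

    // One SOR half-sweep: update all interior cells of the given color,
    // where the color of cell (i,j,k) is (i+j+k) % 2 (red-black ordering).
    void sorSweep(Grid& g, double omega, int color) {
        for (int k = 1; k < g.nz - 1; ++k)
            for (int j = 1; j < g.ny - 1; ++j)
                for (int i = 1; i < g.nx - 1; ++i) {
                    if ((i + j + k) % 2 != color) continue;
                    const int c = g.idx(i, j, k);
                    const auto& s = g.st[c];
                    const double nb = s[1] * g.phi[g.idx(i-1,j,k)] + s[2] * g.phi[g.idx(i+1,j,k)]
                                    + s[3] * g.phi[g.idx(i,j-1,k)] + s[4] * g.phi[g.idx(i,j+1,k)]
                                    + s[5] * g.phi[g.idx(i,j,k-1)] + s[6] * g.phi[g.idx(i,j,k+1)];
                    const double gaussSeidel = (g.rhs[c] - nb) / s[0];
                    g.phi[c] += omega * (gaussSeidel - g.phi[c]);  // SOR relaxation
                }
    }

    int main() {
        const int n = 16;
        Grid g{n, n, n, {}, {}, {}};
        g.phi.assign(n * n * n, 0.0);
        g.rhs.assign(n * n * n, 1.0);
        g.st.assign(n * n * n, {6.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0});  // 7-point Laplacian
        for (int it = 0; it < 100; ++it) { sorSweep(g, 1.7, 0); sorSweep(g, 1.7, 1); }
    }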
For multigrid solvers as applied in [31] to charged particle simulations in absence of ions in the fluid, the
red-black update sub-sweeps in Fig. 3 are replaced by solver sub-sweeps of a V-cycle. To apply the SOR
sweep to varying stencils of linearized PDEs, such as the symmetric Poisson-Boltzmann equation, only the
sub-sweep for adapting the stencils and RHS to the BCs is performed at the beginning of each iteration
instead of once before the iterations start (see [76]).
5.4. Boundary Condition Handling for Solver Module
The implicit BC handling used and initiated by the solver module has been introduced in [31]. This
boundary handling is based on incorporating the BCs into the stencils and right-hand side of the finite
volume discretization. That way, at an iterative update of a near-boundary value, the method implicitly
uses the new values for the BCs. For Dirichlet boundaries, the boundary values are linearly extrapolated to
the boundary cell and for Neumann BCs the boundary values are approximated by central differences. For
both the stencils and the right-hand side, a direction-dependent BC treatment is used.
The functions for this BC treatment are implemented in the pot bc module. This module employs its own nonBC and nearBC flags for the BC handling of scalar potentials. Moreover, for each BC class in this module an associated BC flag is defined. For the employed cell-centered discretization, the module contains one class each for Neumann and Dirichlet BCs.
For incorporating the BCs into the stencils the kernel adaptStencilsBC is implemented. This kernel
iterates over all lattice cells to find scalar potential nearBC cells. At each cell with nearBC flag, the kernel
employs the D3Q7 stencil directions to iterate over the neighboring cells. In directions of a cell with scalar
potential BC flag, the stencil entry of the nearBC cell, associated with the direction of the BC flag, is
adapted accordingly.
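The effect of the stencil adaptation can be illustrated for a single direction. Assuming the discretized equation of a nearBC cell is written as s[0]·φc + Σd s[d]·φd = rhs and the Dirichlet value φb is linearly extrapolated to the boundary cell as described above (i.e., φboundary = 2φb − φc), folding the BC into stencil and RHS amounts to the following schematic sketch (not the waLBerla kernel):

    #include <array>

    // Fold a Dirichlet BC in stencil direction d (1..6) into center entry and RHS.
    // Substituting phi_boundary = 2*phi_b - phi_c into s[0]*phi_c + sum_d s[d]*phi_d = rhs
    // eliminates the boundary neighbor from the system.
    void adaptDirichlet(std::array<double, 7>& s, double& rhs, int d, double phiB) {
        s[0] -= s[d];               // center absorbs the negated neighbor coupling
        rhs  -= 2.0 * s[d] * phiB;  // boundary value moves to the right-hand side
        s[d]  = 0.0;                // the boundary cell is no longer referenced
    }

    int main() {
        std::array<double, 7> s{6.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0};
        double rhs = 0.0;
        adaptDirichlet(s, rhs, 2, -0.01);  // east neighbor is a boundary cell, phi_b = -10 mV
    }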
The function adaptRHSBC employs the standard boundary handling of the bc module to invoke the pot bc
kernels for adapting the RHS to the BCs. To this end, the direction-dependent BC treatment kernels in
the corresponding BC classes implement the RHS adaption depending on the BC value. The BC value
is specified in the input file for static BCs, or in a previous time step for BCs of moving particles (see
Sec. 5.5). To facilitate such complex boundaries, the BC classes store the BC values and the corresponding
boundary cell ranges. The latter are stored in a memory-efficient way either as cell intervals or as cell vectors.
Moreover, to allow the computation of scalar potential gradients directly from the solution field, the BC
values are set in this field at boundary cells when the RHS is adapted. From these BC values and from the
solution at the nearBC cell, the value at the boundary cell required for the gradient can be extrapolated.
5.5. Electric Potential Boundary Condition Handling for Moving Particles
Prior to the EDL potential distribution computation by the lse solver module, uniform ζ-potentials
or surface charge densities are imposed at the moving particles by means of scalar potential BCs. These
electrical surface properties are specified in the input file for different particle types with a common uid
(unique identifier) defined in the pe. To this end, the sweep function setPotBC_ChargParticles is implemented
in the electrokin flow module that maps the charged particles onto the lattice and sets the electric potential
BC values and the associated BC handling flags at the corresponding cells. The function first overwrites the
values of all cells in the RHS field with zero to remove the values from the previous BC treatment. Then
the mapping is performed for all movable rigid bodies located in the subdomain of each process. For each
particle the mapping is conducted in an extended axis-aligned bounding box that surrounds this rigid body
and the associated nearBC flags. The mapping is realized in three steps:
1) First, all scalar potential nearBC and BC flags from the previous time step are removed, and the
scalar potential nonBC flags are set. Moreover, the previous BC values and the associated cells in the
BC class instances are removed.
2) Then, for particles with prescribed electric BCs, the BC handling for the lse solver module (see Sec. 5.4)
is prepared. For each rigid body with a uid for which a surface property is specified in the input file,
the associated BC is obtained. The cells overlapped by this particle are gathered and are added
together with the BC value to the corresponding BC class instance. Moreover, the BC flag is set at
the overlapped cells.
3) Finally, the nearBC flag is set at all cells adjacent to a BC cell.
Each step is performed for all bodies on a process before the next step begins, to prevent that for particles
with overlapping bounding boxes the flags from a previous step are overwritten.
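A schematic serial version of this three-step mapping for spheres could look as follows (a hypothetical stand-alone sketch; the waLBerla implementation works per block on the extended bounding box and additionally registers the cells and BC values in the BC class instances):

    #include <array>
    #include <cstdint>
    #include <vector>

    enum Flag : std::uint8_t { nonBC = 1, nearBC = 2, potBC = 4 };

    struct Sphere { std::array<double, 3> c; double r; };  // center and radius in cells

    void mapParticles(std::vector<std::uint8_t>& f, int nx, int ny, int nz,
                      const std::vector<Sphere>& spheres) {
        auto idx = [=](int i, int j, int k) { return (k * ny + j) * nx + i; };
        // 1) remove old BC/nearBC flags and reset all cells to nonBC
        for (auto& fl : f) fl = nonBC;
        // 2) set the BC flag at all cells whose center lies inside a particle
        for (const auto& s : spheres)
            for (int k = 0; k < nz; ++k) for (int j = 0; j < ny; ++j) for (int i = 0; i < nx; ++i) {
                const double dx = i + 0.5 - s.c[0], dy = j + 0.5 - s.c[1], dz = k + 0.5 - s.c[2];
                if (dx * dx + dy * dy + dz * dz <= s.r * s.r) f[idx(i, j, k)] = potBC;
            }
        // 3) mark all remaining cells adjacent to a BC cell as nearBC
        const int off[6][3] = {{-1,0,0},{1,0,0},{0,-1,0},{0,1,0},{0,0,-1},{0,0,1}};
        for (int k = 1; k < nz - 1; ++k) for (int j = 1; j < ny - 1; ++j) for (int i = 1; i < nx - 1; ++i)
            if (f[idx(i, j, k)] & nonBC)
                for (const auto& o : off)
                    if (f[idx(i + o[0], j + o[1], k + o[2])] & potBC) { f[idx(i, j, k)] = nearBC; break; }
    }

    int main() {
        std::vector<std::uint8_t> flags(32 * 32 * 32);
        const std::vector<Sphere> spheres = { { {16.0, 16.0, 16.0}, 6.0 } };
        mapParticles(flags, 32, 32, 32, spheres);
    }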
5.6. Computing Electrostatic Forces on Fluid and Particles
The electric forces acting on the ions in the fluid are incorporated into the incompressible Navier-Stokes equation by the body force term f~b = −ρe(ψ) ∇(ϕ + ψ), which coincides with the corresponding term employed in Henry's solution (see Sec. 3). Due to the linear superposition of the electric potential components, the gradient is applied to both components separately. Since the applied field E~ext = −∇ϕ is given, only the EDL potential gradient must be computed.
For the computation of the electric field due to the EDL, the electrokin flow module provides a kernel
that performs the gradient computation at all scalar potential nonBC cells. The gradient of the electric
potential is computed, as previously introduced in [31], by means of finite differences that provide O(dx²)
accuracy. Where possible, an isotropy-preserving D3Q19 stencil is used (cf. [113]) instead of a D3Q7 stencil.
With the LBM D3Q19 stencil, the gradient can be computed using wq-weighted differences of neighboring values in 18 directions ~eq as
\[
\nabla\psi(\vec{x}_b) \;\approx\; \frac{1}{w_1}\sum_{q=2}^{19} w_q\,\psi(\vec{x}_b + \vec{e}_q)\cdot\frac{\vec{e}_q}{dx^2}. \tag{54}
\]
At nearBC cells the D3Q7 stencil is applied to compute the gradient of ψ from the BC values stored at
particle and boundary cells in the BC treatment (see Sec. 5.4). The obtained electric field is stored in a field
of cells that is accessed in the body force computation.
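A direct transcription of Eqn. (54) for a single cell is sketched below, assuming the standard D3Q19 weights (w1 = 1/3 for the center, 1/18 for the six axis directions, 1/36 for the twelve diagonals) and lattice links of physical length dx:

    #include <array>
    #include <functional>

    // The 18 non-center D3Q19 directions e_q (q = 2..19 in Eqn. (54)), in lattice units.
    constexpr int e[18][3] = {
        {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1},
        {1,1,0},{-1,-1,0},{1,-1,0},{-1,1,0},{1,0,1},{-1,0,-1},
        {1,0,-1},{-1,0,1},{0,1,1},{0,-1,-1},{0,1,-1},{0,-1,1}};
    constexpr double w1 = 1.0 / 3.0;

    std::array<double, 3> gradPsi(const std::function<double(int,int,int)>& psi,
                                  int i, int j, int k, double dx) {
        std::array<double, 3> g{0.0, 0.0, 0.0};
        for (int q = 0; q < 18; ++q) {
            const double wq = (q < 6) ? 1.0 / 18.0 : 1.0 / 36.0;
            const double p  = psi(i + e[q][0], j + e[q][1], k + e[q][2]);
            for (int d = 0; d < 3; ++d)
                g[d] += wq * p * (e[q][d] * dx) / (dx * dx);  // inner sum of Eqn. (54)
        }
        for (double& gd : g) gd /= w1;  // prefactor 1/w1
        return g;
    }

    int main() {
        auto psi = [](int i, int, int) { return 0.01 * i; };  // linear test field
        const auto g = gradPsi(psi, 8, 8, 8, 1e-8);           // recovers the exact slope
        (void)g;
    }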
For the computation of the electric body force and of the electrostatic force exerted on the particles, a
further kernel is implemented in the electrokin flow module:
The kernel first iterates on each parallel process over all lattice cells to compute the body force at scalar
potential nonBC cells. This force is computed as the product of the charge density ρe(ψ) and the total electric field E~total = E~ext − ∇ψ. The relation of charge density and EDL potential follows Eqn. (20). The obtained
electric body force is written to the external force field of the lbm module that is accessed by the LBM
kernels with forcing. Then the kernel iterates over all non-fixed particles residing on the current parallel
process to compute the electrostatic force. For each of these particles the force is computed from the particle
charge and the applied field as F~C = qe E~ext, and is then added to the particle.
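Schematically, the two force computations of this kernel amount to the following (a sketch with placeholder values; ρe(ψ) according to Eqn. (20) is passed in as a precomputed number):

    #include <array>

    using Vec3 = std::array<double, 3>;

    // Electric body force density at a fluid cell: f_b = rho_e(psi) * (E_ext - grad(psi)).
    Vec3 bodyForce(double rhoE, const Vec3& Eext, const Vec3& gradPsi) {
        return { rhoE * (Eext[0] - gradPsi[0]),
                 rhoE * (Eext[1] - gradPsi[1]),
                 rhoE * (Eext[2] - gradPsi[2]) };
    }

    // Electrostatic force on a particle of charge q in the applied field only: F_C = q * E_ext.
    Vec3 coulombForce(double q, const Vec3& Eext) {
        return { q * Eext[0], q * Eext[1], q * Eext[2] };
    }

    int main() {
        const Vec3 Eext{0.0, 1.0e6, 0.0};
        const Vec3 f = bodyForce(1.0, Eext, {0.0, 0.0, 0.0});
        const Vec3 F = coulombForce(1.0e-17, Eext);
        (void)f; (void)F;
    }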
5.7. Algorithm for Electrophoresis
The overall parallel algorithm for electrophoresis simulations with waLBerla is shown in Alg. 1. After
the setup and initialization phase, the electric BCs for the EDL potential are set at the moving charged
particles by means of setPotBC_ChargParticles at each time step. Then the Debye-Hückel approximation is
solved by means of the SOR sweep solveTimeVaryingBCSOR. The iterations are performed until the specified
termination criterion for the residual is met.
From the obtained EDL potential distribution and the applied field the electric body force exerted on the
fluid is computed, as described in Sec. 5.6. First the kernel computing the electric field caused by the EDL is
applied. Then the kernel for the electric body force computation from the charge density distribution in the
fluid and the total electric field is invoked. This kernel additionally applies the electrostatic force exerted
by the applied field to the particles.
Then the rigid body mapping sweep described in Sec. 4.2 is performed, imposing the particle velocities
for the subsequent LBM sweep. In that parallel sweep, an LBM kernel with fused stream-collide step and
forcing term (see Sec. 4.1) is employed to compute the fluid motion influenced by the moving particles and
by the electrostatic force exerted on ions in the EDL.
After the LBM sweep, the hydrodynamic forces on the particles are computed by the momentum exchange
method. The obtained hydrodynamic force contributions and the electrostatic forces are then aggregated by
the pe. From the resulting forces and torques, the new particle velocities and positions are computed in the subsequent pe simulation step by the PFFD algorithm [114] that additionally resolves rigid body collisions.

Algorithm 1: Electrophoresis Algorithm

    foreach time step do
        // solve Debye-Hückel approximation (DHA):
        set electric BCs of particles
        while residual ≥ tol do
            apply SOR iteration to DHA
        // couple potential solver and LBM:
        compute electric field due to EDL
        compute charge density in fluid
        compute electric body force
        apply electrostatic force to particles
        // solve lattice Boltzmann equation with forcing, considering particle velocities:
        set velocity BCs of particles
        perform fused stream-collide step
        // couple potential solver and LBM to pe:
        apply hydrodynamic force to particles
        pe moves particles depending on forces
6. Electrophoresis Simulations
In the following, the electric potential computation is validated for a sphere with uniform surface charge
surrounded by an EDL. This sphere is placed in a micro-channel subject to an applied electric field. Moreover, the flow field caused by the electrophoretic motion of the sphere in the micro-channel is visualized, together with the electric potential and the surrounding ions, to qualitatively show the correctness of the simulations. Then the electrophoretic velocity and the retardation by the counter-ions in the EDL are validated w.r.t. Henry's solution for different sphere sizes and values of κR.
The simulation setup for the validation experiments is depicted in Fig. 4.

Figure 4: Setup for electrophoresis of spheres in square duct with different BCs. A sphere in a cuboid domain (axes x, y, z; extents Lx, Ly, Lz) is driven by a force Fy in y-direction; the lateral walls carry the indicated BCs.

A sphere is placed on the
longitudinal axis of a cuboid domain of size Lx × Ly × Lz at an initial position of yinit , and an electrostatic
force in y-direction acts on the sphere. The sphere is suspended in a symmetric aqueous electrolyte solution
of kinematic viscosity νf = 1.00 · 10⁻⁶ m²/s, density ρf = 1.00 · 10³ kg/m³, permittivity εe = 78.54 · ε0, and ions of valence z = 1 at a temperature of T = 293 K.
In all simulations, the EDL thickness λD is on the order of the particle diameter. The electric potential
around the sphere due to the EDL is represented by the Debye-Hückel approximation. For this approximation, analytical solutions are known for the potential distribution and for the electrophoretic velocity.
The LBM is employed with TRT operator and second-order forcing term by Luo and re-defined momentum
density (see Sec. 4.1) in all simulations. The linear system of equations resulting from the finite volume
discretization of the Debye-Hückel equation is solved by the SOR method that is sufficient for the quickly
decaying electric potential due to the counter-ions in the EDL. For the SOR, a relaxation parameter of ωSOR = 1.7 is applied in all simulations.
6.1. Electric Potential in an EDL around a Sphere
To validate the computation of the EDL potential ψ around a charged particle, a sphere of radius
R with uniform ζ-potential is simulated in a large domain. The analytical solution of the Debye-Hückel
equation (21) representing the EDL potential around the sphere is given by Eqn. (23). For the validation,
a spherical particle of radius RL = 12 with initial position yinit = 64 dx is chosen and a domain size of
128 dx × 256 dx × 128 dx. The lattice spacing dx is displayed in Tab. 3, together with the employed simulation parameters related to the electric potential.

Table 3: Parameters for sphere with uniform surface charge in micro-channel.

| ζ/mV | c∞/(mol/l) | κ/(1/m) | dx/m | λD,L |
|------|------------|---------|------|------|
| −10.0 | 5.00·10⁻⁶ | 7.41·10⁶ | 10·10⁻⁹ | 13.49 |

The ζ-potential is chosen sufficiently low to approximate the
Poisson-Boltzmann equation by the Debye-Hückel approximation. To obtain the displayed values of κ and
of the characteristic EDL thickness λD , a symmetric aqueous electrolyte solution with ions of the valence
z and the bulk concentration c∞ shown in Tab. 3 is simulated. The EDL thickness is greater than the 12
lattice sites required for a sufficient resolution as observed in the electro-osmotic flow simulations in [76].
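The tabulated values can be reproduced from the standard expression for the Debye-Hückel parameter of a symmetric electrolyte, κ² = 2 n∞ z² e²/(εe kB T), with the number density n∞ obtained from c∞; a small verification sketch (not part of the simulation code):

    #include <cmath>
    #include <iostream>

    int main() {
        const double NA  = 6.02214e23;           // Avogadro's number [1/mol]
        const double e   = 1.602177e-19;         // elementary charge [A s]
        const double kB  = 1.380649e-23;         // Boltzmann constant [J/K]
        const double eps = 78.54 * 8.854188e-12; // permittivity of the electrolyte [A s/(V m)]
        const double T = 293.0, z = 1.0;
        const double cInf = 5.00e-6 * 1.0e3;     // bulk concentration [mol/m^3]
        const double nInf = cInf * NA;           // number density [1/m^3]
        const double kappa = std::sqrt(2.0 * nInf * z * z * e * e / (eps * kB * T));
        const double dx = 10e-9;
        std::cout << "kappa = " << kappa << " 1/m"            // ~7.41e6 1/m (Tab. 3)
                  << ", lambda_D,L = " << 1.0 / (kappa * dx)  // ~13.49 (Tab. 3)
                  << "\n";
    }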
Despite its quick decay, the analytical solution of the electric potential at the domain boundaries differs from zero. Thus, the values of ψ at these boundaries are set to the analytical solution by means of
Dirichlet conditions. To solve the Debye-Hückel equation subject to these BCs, a residual reduction factor
of RESRF = 2 · 10−7 is employed as termination criterion for the SOR method.
The analytical (ψ) and numerical (ψ∗) solution at the initial particle position are depicted in Fig. 5 along a line in x-direction through the sphere center.

Figure 5: Analytical and numerical solution for EDL potential of sphere with uniform surface charge, plotted as ψ (in V, scale 10⁻³) over the lattice coordinate xL.

Both graphs agree very well, showing the correctness of the
finite volume discretization and the applied SOR solver, as well as the boundary handling at the particle
surface. Inside the insulating particle, the electric potential is not computed explicitly, due to the uniform
surface potential and the resulting symmetric distribution of ψ in the sphere. The electrostatic force needed
to compute the particle motion is computed directly from the applied field and the particle charge, instead
of the electric potential gradient as in [31].
6.2. Electrophoresis of a Sphere in a Micro-Channel
The application of the external electric field in the micro-channel setup described in Sec. 6.1 gives rise to
an electrophoretic motion of the sphere. For the simulation of this scenario, the parameters listed in Tab. 4
are employed in addition to the parameters in Tab. 3.

Table 4: Parameters for electrophoretic motion of sphere in micro-channel.

| Ey/(V/m) | qs/(A s) | dt/s | UEP,L |
|----------|----------|------|-------|
| −4.7·10⁷ | −19.9·10⁻¹⁸ | 200·10⁻¹² | 4.49·10⁻³ |

The particle is suspended in an electrolyte solution
with the parameters introduced at the beginning of Sec. 6. From the ζ-potential and the parameters in Tab. 3, the sphere's charge qs in Tab. 4 is obtained from the surface charge density σs according to the ζ–σs relationship (32), multiplied by the surface area of the sphere, as qs = 4πR²σs [81].
The high electric field strength Ey in Tab. 4 is chosen to keep the number of simulation time steps at a minimum. Due to the applied field and the resulting electrostatic force FC,y = 933 · 10⁻¹² N, the particle moves in y-direction, retarded by the channel walls and the opposing force on the EDL. For the chosen parameters, the terminal particle speed of UEP = 224.5 mm/s is obtained for free space according to Henry's solution (see Eqn. (29)), corresponding to a particle Reynolds number of Rep,d = 0.054.
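For the Debye-Hückel approximation, the exact ζ–σs relation of a sphere is σs = εe ζ (1 + κR)/R [81]; the general relationship (32) used in the article is not reproduced here. A small verification sketch based on this exact relation:

    #include <iostream>

    int main() {
        const double pi    = 3.14159265358979;
        const double eps   = 78.54 * 8.854188e-12;  // [A s/(V m)]
        const double zeta  = -10.0e-3;              // [V]   (Tab. 3)
        const double kappa = 7.41e6;                // [1/m] (Tab. 3)
        const double R     = 12.0 * 10e-9;          // R_L = 12, dx = 10 nm
        const double sigma = eps * zeta * (1.0 + kappa * R) / R;  // surface charge density
        const double qs    = 4.0 * pi * R * R * sigma;            // q_s = 4 pi R^2 sigma
        std::cout << "q_s = " << qs << " A s\n";    // ~ -19.9e-18 A s (Tab. 4)
    }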
In the simulation, periodic BCs are applied in y-direction. At all other walls, no-slip conditions are
applied for the LBM and homogeneous Neumann conditions for the electric potential. At each time step,
the Debye-Hückel equation is solved by SOR with the termination criterion from Sec. 6.1. The LBM is
run with τ = 6.5, resulting in the time increment dt listed in Tab. 4 for the chosen viscosity νf and dx
(cf. Eqn. (36)). With dt and dx, the electrophoretic speed in lattice units UEP,L attains the values given
in Tab. 4. Gravitational effects are neglected in the simulation to ensure that the particle motion is driven solely by electric forces. Thus, the employed particle density ρp = 1195 kg/m³ only has an impact on the time required to reach steady-state.
In Fig. 6, the results of the electrophoresis simulation are visualized at different time steps. The EDL
potential ψ in the y-z plane through the domain center is displayed, together with semi-transparent equipotential surfaces representing the excess counter-ions in the EDL. The flow field around the moving sphere is
visualized in the x-z plane through the domain center. Arrows of uniform length indicate the flow direction,
while the velocity magnitude is represented by the shown color-scale and by twelve white isosurface contour
lines with logarithmic intervals in the range of 13.67 · 10⁻³ m/s to 6.08 · 10⁻⁶ m/s.
The flow field around the moving particle shown in Fig. 6(a) is fully developed after 5001 time steps, and
the particle has already attained its terminal velocity. Due to the periodicity of the domain, a channel flow in
axial direction has emerged from the particle motion, as indicated by the contour lines. Moreover, a vortex
has formed between the sphere and the surrounding no-slip walls. The flow field moves with the particle
that translates along the channel centerline (see Fig. 6(b)). Because of the equilibrium representation of the
EDL, the distribution of ψ is at all time steps symmetric w.r.t. the sphere center, almost up to the boundary.
6.3. Validation of Electrophoretic Motion of a Sphere
To quantitatively validate the overall electrophoresis simulation algorithm, the electrophoretic velocity
of spheres with uniform surface charge is compared to Henry’s solution Eqn. (29) for a spherical particle in
an unbounded electrolyte solution.
Figure 6: Electrophoresis of spherical particle in micro-channel with insulating no-slip walls. Visualization of flow field in x-y plane, EDL potential in y-z plane, and ion charge distribution around charged particle. (a) Results after 5001 time steps. (b) Results after 30 001 time steps.
In the simulation experiments, a sphere with uniform ζ-potential is moving under the influence of an applied field of strength E~ext in a large domain filled with an electrolyte solution. The simulations are
performed for different values of κR by varying the sphere radius while keeping the EDL thickness constant.
For validation, the relative deviation ∆r U of the obtained terminal sphere velocity from the theoretical
velocity in Eqn. (29) is evaluated.
The simulation parameters are chosen such that the electrophoretic motion is in the Stokes regime, and thus the nonlinear inertial term of the Navier-Stokes equation can be neglected. Moreover, the EDL potential
distribution is governed by the linear Debye-Hückel approximation. Therefore, the superposition principle is
assumed to hold. Thus, from the obtained relative deviation from the analytical solution in an unbounded
domain ∆r U , the relative deviation due to wall effects and volume mapping errors ∆r UStokes will be subtracted. These relative deviations were examined in [76] for several domain sizes and sphere radii. For no-slip
BCs, the wall effect was shown to comply with analytical and experimental results. In the experiments,
domain sizes are used for which the relative deviations ∆r UStokes for free-slip BCs are close to −3 %.
For all simulations, the parameters in Tab. 5 are used. In each experiment, a sphere is suspended in a
symmetric aqueous electrolyte solution with the parameters introduced at the beginning of Sec. 6 and a bulk
concentration of c∞ = 1.60 · 10⁻⁵ mol/l. These parameters result in the Debye-Hückel parameter κ shown in Tab. 5 and, together with the displayed lattice spacing dx, in the EDL thickness λD of approximately 15
lattice sites.

Table 5: Parameters for electrophoretic velocity validation w.r.t. Henry's solution.

| ζ/mV | κ/(1/m) | Ey/(V/m) | dx/m | λD,L |
|------|---------|----------|------|------|
| 10.0 | 13.3·10⁶ | 99.0·10⁶ | 5.00·10⁻⁹ | 15.08 |

The ζ-potential has the same absolute value as in the electrophoresis simulations in a micro-channel from Sec. 6.2. Moreover, as in Sec. 6.2, a high electric field strength Ey is chosen to keep the number
of simulation time steps low. The resulting electrostatic forces FC = qs Ey for the different sphere radii
are displayed in Tab. 6, together with the electrophoretic charges qs on the spheres. These surface charges
associated with the ζ-potential are obtained from the general ζ–σs relationship (32) as in Sec. 6.2. For all
simulation parameters, the maximum relative deviation of σs for the general relationship from the exact
value for the Debye-Hückel approximation is below 0.2 %. These charges are applied as particle charges to
drive the spheres by the electrostatic force due to the electric field.
For the different sphere radii and the associated values of κR, the electrophoretic velocities according
to Henry’s solution are displayed in Tab. 6. These velocities correspond to particle Reynolds numbers Rep,d
from 0.018 to 0.057 for the particle diameters of 40 nm to 120 nm.
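For reference, the tabulated velocities UEP can be reproduced from Henry's solution. The sketch below evaluates Henry's function with Ohshima's closed-form approximation, which is an assumption of this sketch (the article itself refers to Eqn. (29)):

    #include <cmath>
    #include <iostream>

    // Ohshima's approximation to Henry's function f(kappa*R) for a sphere.
    double henryF(double kR) {
        const double d = 1.0 + 2.5 / (kR * (1.0 + 2.0 * std::exp(-kR)));
        return 1.0 + 1.0 / (2.0 * d * d * d);
    }

    int main() {
        const double eps  = 78.54 * 8.854188e-12;  // [A s/(V m)]
        const double zeta = 10.0e-3;               // [V]   (Tab. 5)
        const double mu   = 1.0e-3;                // dynamic viscosity [Pa s]
        const double Ey   = 99.0e6;                // [V/m] (Tab. 5)
        const double kappa = 13.3e6;               // [1/m] (Tab. 5)
        for (const double RL : {4.0, 6.0, 8.0, 9.0, 12.0}) {
            const double R = RL * 5.0e-9;          // dx = 5 nm (Tab. 5)
            const double U = 2.0 * eps * zeta * Ey * henryF(kappa * R) / (3.0 * mu);
            std::cout << "kappa*R = " << kappa * R
                      << ", U_EP = " << U * 1.0e3 << " mm/s\n";  // ~461 to 471 (Tab. 6)
        }
    }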
To quantify the retardation by the opposing force on the EDL, the variable EPRet = (UEP − UEM)/UEM is introduced. This variable represents the relative deviation of the electrophoretic velocity of a particle with charge qs in presence of the electric double layer from the migration velocity UEM of a particle with the same charge in absence of surrounding ions. For the examined sphere radii, this retardation is in the range of 20 % to 42 %.
Table 6: Electrophoresis parameters and domain sizes dependent on sphere size. Shown are the sphere radii RL in lattice units, the corresponding charges qs, and the electrostatic forces FC. For relation of sphere radius to EDL thickness κR, theoretical electrophoretic velocities UEP, Reynolds numbers Rep,d, and electrophoretic retardation parameter EPRet are given. Listed in lower part are domain sizes per dimension Lx,y,z, initial sphere positions yinit, process numbers per dimension #procx,y,z, and relative deviations of sphere velocities in free-slip domain from Stokes velocity ∆rUStokes.

| RL | 4 | 6 | 8 | 9 | 12 |
|----|---|---|---|---|----|
| qs/(10⁻¹⁸ A s) | 2.21 | 3.67 | 5.36 | 6.29 | 9.43 |
| FC/(10⁻¹² N) | 219 | 363 | 530 | 622 | 934 |
| κR | 0.265 | 0.398 | 0.530 | 0.597 | 0.796 |
| UEP/(10⁻³ m/s) | 461 | 464 | 466 | 468 | 471 |
| Rep,d | 0.018 | 0.028 | 0.037 | 0.042 | 0.057 |
| EPRet/% | −20.6 | −27.7 | −33.6 | −36.2 | −42.8 |
| Lx,z/dx | 864 | 1280 | 1632 | 1632 | 1632 |
| Ly/dx | 1248 | 1536 | 2048 | 2048 | 2048 |
| yinit/dx | 392 | 588 | 784 | 882 | 1176 |
| #procx,y,z | 8×16×8 | 8×16×16 | 16×16×32 | 16×16×32 | 16×16×32 |
| ∆rUStokes/% | −3.04 | −2.65 | −2.98 | −2.89 | −3.00 |
Then the domain sizes for which the relative deviation from Stokes velocity is about ∆rUStokes = −3 % are listed in Tab. 6, together with the initial positions yinit in movement direction, which correspond to 98 × RL.
The LBM with TRT collision operator is employed with the relaxation time τ = 6, resulting in the time increment dt = 45.8 · 10⁻¹² s. As in the micro-channel electrophoresis simulations, gravitational effects are neglected, and the insulating particles have a density of ρp = 1195 kg/m³.
For the electric potential, homogeneous Neumann BCs are applied at the walls in x- and z-direction as \(\partial\Phi/\partial\vec{n}\,|_{\Gamma_W \cup \Gamma_E \cup \Gamma_B \cup \Gamma_T} = 0\). To improve convergence [31], homogeneous Dirichlet BCs are applied at the walls in y-direction as \(\Phi\,|_{\Gamma_N \cup \Gamma_S} = 0\,\mathrm{V}\). The particle is located at sufficient distance from these walls for the EDL potential to decay to approximately zero, and therefore these BCs do not affect the electric potential distribution. As termination criterion of the SOR solver for the Debye-Hückel equation, a residual reduction factor of RESRF = 1 · 10⁻⁶ is used.
The parallel simulations are run on SuperMUC1 of the Leibniz Supercomputing Centre LRZ in Garching
(Germany) on 64 to 512 nodes with the numbers of processes listed in Tab. 6. Within the execution time
of 37 h to 48 h, a number of 70 296 to 74 000 time steps are performed, and the spheres cover a distance
of about 300 dx. The high numbers of time steps are chosen to ensure that the spheres reach steady-state
motion, and that large numbers of sampling values are available to compute the terminal velocities.
The sphere velocities are sampled every 20 time steps during the simulation. In the second half of the simulation, the terminal particle velocity is reached. Thus, the mean particle velocity U∗ and the velocity fluctuations δU = (U∗max − U∗min)/U∗ due to volume mapping effects are computed from the last 50 % of the output values. In the considered range of time steps, additionally the number of time steps between two SOR calls and the number of SOR iterations is monitored. The average number of time steps between two SOR calls decreases from ∆TSSOR = 24 to ∆TSSOR = 3 as RL increases from 4 to 12. Likewise, the average number of SOR iterations per solver call decreases from 451 iterations to 198 iterations for the respective sphere radii.
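The evaluation of the sampled velocities is straightforward; a possible stand-alone post-processing sketch (hypothetical sample values):

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        // Sampled terminal velocities U* from the second half of the simulation.
        const std::vector<double> u = {4.10e-3, 4.13e-3, 4.12e-3, 4.11e-3};
        const double mean = std::accumulate(u.begin(), u.end(), 0.0) / u.size();
        const auto [mn, mx] = std::minmax_element(u.begin(), u.end());
        const double deltaU = (*mx - *mn) / mean;  // delta_U = (U*_max - U*_min) / U*
        std::cout << "U* = " << mean << ", delta_U = " << 100.0 * deltaU << " %\n";
    }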
The obtained terminal velocities in lattice units UL∗ and the fluctuations are displayed in Tab. 7.

Table 7: Simulation results of electrophoretic velocity validation for different sphere sizes. Shown are the theoretical velocities UEP,L for unbounded domains in lattice units, the obtained velocities UL∗ and fluctuations δU, the relative deviations ∆rU of UL∗ from UEP,L, and the relative deviations ∆rUEP corrected by hydrodynamic wall and mapping effects.

| RL | 4 | 6 | 8 | 9 | 12 |
|----|---|---|---|---|----|
| UEP,L/10⁻³ | 4.227 | 4.249 | 4.274 | 4.286 | 4.320 |
| UL∗/10⁻³ | 4.119 | 4.144 | 4.120 | 4.135 | 4.151 |
| δU/% | 2.52 | 1.52 | 0.83 | 0.68 | 0.29 |
| ∆rU/% | −2.56 | −2.48 | −3.59 | −3.51 | −3.91 |
| ∆rUEP/% | 0.5 | 0.2 | −0.6 | −0.6 | −0.9 |

As expected,
the fluctuations decrease with increasing sphere resolution. Moreover, the relative deviation of the obtained velocity U∗ from the theoretical electrophoretic velocity UEP, given by ∆rU = (U∗ − UEP)/UEP, is listed in Tab. 7. For all examined sphere radii, the obtained velocities in the confined domain are 2.5 % to 3.9 % lower than the theoretical values of UEP for a particle in an unbounded electrolyte solution. The effect of the confinement on the particle velocity is removed by subtracting the relative deviation from Stokes velocity ∆rUStokes in Tab. 6 for the corresponding domain sizes from the relative electrophoretic velocity deviation ∆rU obtained in the electrophoresis simulation. From the resulting relative deviations ∆rUEP = ∆rU − ∆rUStokes, the inaccuracies due to electric effects in the electrophoresis simulation are assessed. As can be seen from the values of ∆rUEP in Tab. 7, the simulation results agree with the theoretical values with relative deviations of less than 1 %.
7. Conclusion
In this article, a coupled multiphysics algorithm for parallel simulations of the electrophoretic motion
of geometrically resolved particles in electrolyte solutions is presented. The physical effects are simulated
by means of the lattice Boltzmann method for fluid dynamics, a physics engine for rigid body dynamics,
and a scalar iterative solver for electric potentials. These components are integrated into the parallel
software framework waLBerla to simulate the electrophoretic motion of fully resolved charged particles in
1 www.lrz.de/services/compute/supermuc/
microfluidic flow. The simulations include fluid-particle and electrostatic particle interactions. Additionally
electric effects on ions around the homogeneously charged particles are recovered.
The current work is an extension of [31], where the electrical migration of charged particles without ions in the fluid was validated and excellent parallel performance was shown for more than seven million interacting charged particles. In the present article, the opposite net charge in the electric double layer
(EDL) around the charged particles due to ions in the fluid is considered, together with its effect on the fluid
motion that counteracts particle motion. To this end, the electric potential distribution in the fluid due to
the EDLs is computed that causes an electric body force on the fluid. This quasi-equilibrium distribution
recovers the motion of ions in the fluid along with the charged particles while neglecting EDL distortion.
The overall electrophoresis algorithm is introduced and an overview of the coupled functionality implemented in the involved waLBerla modules is given. For the simulations, a solver sweep for time-varying
boundary conditions has been developed that is presented here for the parallel SOR method employed to
solve the EDL potential equation. Based on the multiphysics boundary handling concept [31] an efficient
parallel algorithm is implemented to impose electric potential boundary conditions on the moving particles. These methods can also be employed for other governing equations with spatially varying boundary
conditions that model physical effects different from electric fields. The presented parallel electrophoresis simulations are also facilitated by a joint parameterization concept for the different coupled governing
equations and numerical methods implemented in waLBerla. This concept is based on lattice Boltzmann
requirements and is applicable and extensible to further multiphysics simulations.
For the electrophoresis simulations in this article, the electric potential in the double layer is shown to
coincide with analytical solutions. The obtained terminal electrophoretic velocities comply with analytical
solutions for different proportions of the particle radii to double layer thickness. These validation results
verify the correctness of the implementation and the coupling of the different methods. Moreover, the
observed relative errors in the modeling of electric effects are below 1 %. The retardation effect caused
by the presence of the EDL is shown to be significant for the examined sphere radii, reducing the sphere velocity by up to 42 %. For the electrophoretic motion in a micro-channel, the flow field and the electric potential
distribution are visualized, including the ion charge distribution in the EDL surrounding the particle.
The presented algorithm can be applied to find design parameters in industrial and medical applications,
e. g., for optimal separation efficiency of charged biological particles in lab-on-a-chip devices, depending on
fluid, particle, and electrolyte properties. Our algorithms were shown to correctly recover fluid-particle
interactions for elongated particles in [32]. These future simulations may therefore include suspended,
possibly charged particles of various shapes including spheres, spherocylinders, and particles of more complex
shapes, e. g., to represent different biological particles. Also pairwise van-der-Waals forces can be added
easily, to facilitate simulations of electrophoretic deposition in material science applications.
The electrophoresis algorithm introduced here is well suited for massively parallel simulations. In the
current implementation of this algorithm, the EDL thickness is restricted to values in the order of the particle
radius. Therefore, adaptive lattice refinement as in [23] may be employed to allow for thinner double layers
relative to the particle size. For the incorporation of transient effects in the simulations including EDLs,
the link-flux method implemented into waLBerla in [115] and [41] may be employed. This method was
extended in [41] to simulate electrophoresis, enabling the simulation of non-equilibrium ion distributions in
the EDL. Due to the higher computational complexity of the link-flux method compared to the equilibrium
approach in this article, the maximum number of particles will be lower than in our approach.
Acknowledgements
The authors are grateful to the LRZ for providing the computational resources on SuperMUC.
References
[1] K. J. Ptasinski, P. J. A. M. Kerkhof, Electric Field Driven Separations: Phenomena and Applications, Sep. Sci. Technol.
27 (8-9) (1992) 995–1021. doi:10.1080/01496399208019021.
[2] S. Zhang, R. Tan, K. Neoh, C. Tien, Electrofiltration of Aqueous Suspensions, J. Colloid Interface Sci. 228 (2) (2000)
393–404. doi:10.1006/jcis.2000.6966.
[3] Y.-H. Weng, K.-C. Li, L. H. Chaung-Hsieh, C. Huang, Removal of humic substances (HS) from water by electromicrofiltration (EMF), Water Res. 40 (9) (2006) 1783–1794. doi:10.1016/j.watres.2006.02.028.
[4] A. Mahmoud, J. Olivier, J. Vaxelaire, A. Hoadley, Electrical field: A historical review of its application and contributions
in wastewater sludge dewatering, Water Res. 44 (8) (2010) 2381–2407. doi:10.1016/j.watres.2010.01.033.
[5] L. Besra, M. Liu, A review on fundamentals and applications of electrophoretic deposition (EPD), Prog. Mater. Sci.
52 (1) (2007) 1 – 61. doi:10.1016/j.pmatsci.2006.07.001.
[6] I. Zhitomirsky, Cathodic electrodeposition of ceramic and organoceramic materials. Fundamental aspects, Adv. Colloid
Interface Sci. 97 (1–3) (2002) 279–317. doi:10.1016/S0001-8686(01)00068-9.
[7] P. Sarkar, D. De, T. Uchikochi, L. Besra, Electrophoretic Deposition (EPD): Fundamentals and Novel Applications in
Fabrication of Advanced Ceramic Microstructures, Springer, 2012, Ch. 5, pp. 181–215. doi:10.1007/978-1-4419-9730-2_
5.
[8] Y. Kang, D. Li, Electrokinetic motion of particles and cells in microchannels, Microfluid Nanofluid. 6 (4) (2009) 431–460.
doi:10.1007/s10404-009-0408-7.
[9] A. A. S. Bhagat, H. Bow, H. W. Hou, S. J. Tan, J. Han, C. T. Lim, Microfluidics for cell separation, Med. Biol. Eng.
Comput. 48 (10) (2010) 999–1014. doi:10.1007/s11517-010-0611-4.
[10] D. R. Gossett, W. M. Weaver, A. J. Mach, S. C. Hur, H. T. K. Tse, W. Lee, H. Amini, D. Di Carlo, Label-free
cell separation and sorting in microfluidic systems, Anal. Bioanal. Chem. 397 (8) (2010) 3249–3267. doi:10.1007/
s00216-010-3721-9.
[11] N. Pamme, Continuous flow separations in microfluidic devices, Lab Chip 7 (2007) 1644–1659. doi:10.1039/B712784G.
[12] D. G. Hert, C. P. Fredlake, A. E. Barron, Advantages and limitations of next-generation sequencing technologies: A
comparison of electrophoresis and non-electrophoresis methods, Electrophoresis 29 (23) (2008) 4618–4626. doi:10.1002/
elps.200800456.
[13] G. W. Slater, C. Holm, M. V. Chubynsky, H. W. de Haan, A. Dube, K. Grass, O. A. Hickey, C. Kingsburry, D. Sean,
T. N. Shendruk, L. Zhan, Modeling the separation of macromolecules: A review of current computer simulation methods,
Electrophoresis 30 (5) (2009) 792–818. doi:10.1002/elps.200800673.
[14] F. Keller, H. Nirschl, W. Dörfler, E. Woldt, Efficient numerical simulation and optimization in electrophoretic deposition
processes, J. Eur. Ceram. Soc. 35 (9) (2015) 2619–2630. doi:10.1016/j.jeurceramsoc.2015.02.031.
[15] D. Sheehan, Physical Biochemistry: Principles and Applications, John Wiley & Sons, 2013.
[16] R. F. Probstein, Physicochemical Hydrodynamics: An Introduction, 2nd Edition, Butterworths series in chemical engineering, Butterworth Publishers, 1989.
[17] H. C. Chang, L. Y. Yeo, Electrokinetically Driven Microfluidics and Nanofluidics, Cambridge Univ. Press, 2009.
[18] O. Stern, ZUR THEORIE DER ELEKTROLYTISCHEN DOPPELSCHICHT, Z. Elektrochem. 30 (21-22) (1924) 508–
516. doi:10.1002/bbpc.192400182.
[19] T. Preclik, U. Rüde, Ultrascale Simulations of non-smooth granular dynamics, Comp. Part. Mech. 2 (2) (2015) 173–196.
doi:10.1007/s40571-015-0047-6.
[20] K. Iglberger, U. Rüde, Massively Parallel Rigid Body Dynamics Simulation, Comp. Sci. Res. Dev. 23 (3-4) (2009) 159–167.
doi:10.1007/s00450-009-0066-8.
[21] C. Feichtinger, S. Donath, H. Köstler, J. Götz, U. Rüde, WaLBerla: HPC software design for computational engineering
simulations, J. Comput. Sci. 2 (2) (2011) 105–112. doi:10.1016/j.jocs.2011.01.004.
[22] C. Godenschwager, F. Schornbaum, M. Bauer, H. Köstler, U. Rüde, A Framework for Hybrid Parallel Flow Simulations
with a Trillion Cells in Complex Geometries, in: Proc. Int. Conf. on High Performance Computing, Networking, Storage
and Analysis, SC ’13, ACM, 2013, pp. 35:1–35:12. doi:10.1145/2503210.2503273.
[23] F. Schornbaum, U. Rüde, Massively Parallel Algorithms for the Lattice Boltzmann Method on NonUniform Grids, SIAM
J. Sci. Comput. 38 (2) (2016) C96–C126. arXiv:1508.07982, doi:10.1137/15M1035240.
[24] S. Chen, G. D. Doolen, Lattice Boltzmann Method for Fluid Flows, Annu. Rev. Fluid Mech. 30 (1) (1998) 329–364.
doi:10.1146/annurev.fluid.30.1.329.
[25] C. K. Aidun, J. R. Clausen, Lattice-Boltzmann method for complex flows, Annu. Rev. Fluid Mech. 42 (1) (2010) 439–472.
doi:10.1146/annurev-fluid-121108-145519.
[26] A. J. C. Ladd, Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 1. Theoretical
foundation, J. Fluid Mech. 271 (1994) 285–309. doi:10.1017/S0022112094001771.
[27] N. Q. Nguyen, A. J. C. Ladd, Lubrication corrections for lattice-Boltzmann simulations of particle suspensions, Phys.
Rev. E 66 (4) (2002) 046708. doi:10.1103/PhysRevE.66.046708.
[28] J. Götz, K. Iglberger, M. Stürmer, U. Rüde, Direct Numerical Simulation of Particulate Flows on 294912 Processor
Cores, in: Proc. 2010 ACM/IEEE Proc. Int. Conf. for High Performance Computing, Networking, Storage and Analysis,
SC ’10, IEEE, 2010, pp. 1–11. doi:10.1109/SC.2010.20.
[29] I. Ginzburg, J.-P. Carlier, C. Kao, Lattice Boltzmann approach to Richards’ equation, in: W. G. G. C. T. Miller,
M. W. Farthing, G. F. Pinder (Eds.), Computational Methods in Water Resources, Vol. 55 of Developments in Water
Science, Elsevier, 2004, pp. 583–595. doi:10.1016/S0167-5648(04)80083-2.
[30] I. Ginzburg, F. Verhaeghe, D. d’Humières, Two-relaxation-time lattice Boltzmann scheme: About parametrization,
velocity, pressure and mixed boundary conditions, Commun. Comput. Phys. 3 (2) (2008) 427–478.
[31] D. Bartuschat, U. Rüde, Parallel Multiphysics Simulations of Charged Particles in Microfluidic Flows, J. Comput. Sci.
8 (0) (2015) 1–19. doi:10.1016/j.jocs.2015.02.006.
[32] D. Bartuschat, E. Fischermeier, K. Gustavsson, U. Rüde, Two Computational Models for Simulating the Tumbling
Motion of Elongated Particles in Fluids, Comput. Fluids 127 (2016) 17–35. doi:10.1016/j.compfluid.2015.12.010.
[33] S. Jomeh, M. Hoorfar, Study of the effect of electric field and electroneutrality on transport of biomolecules in microreactors, Microfluid Nanofluid. 12 (1) (2012) 279–294. doi:10.1007/s10404-011-0871-9.
[34] P. Kler, C. Berli, F. Guarnieri, Modeling and high performance simulation of electrophoretic techniques in microfluidic
chips, Microfluid Nanofluid. 10 (1) (2011) 187–198. doi:10.1007/s10404-010-0660-x.
[35] M. Chau, T. Garcia, P. Spiteri, Asynchronous grid computing for the simulation of the 3D electrophoresis coupled
problem, Adv. Eng. Softw. 60–61 (2013) 111–121. doi:10.1016/j.advengsoft.2012.11.010.
[36] M. Chau, P. Spiteri, H. C. Boisson, Parallel numerical simulation for the coupled problem of continuous flow electrophoresis, Int. J. Numer. Meth. Fluids 55 (10) (2007) 945–963. doi:10.1002/fld.1502.
[37] B. Giera, L. A. Zepeda-Ruiz, A. J. Pascall, J. D. Kuntz, C. M. Spadaccini, T. H. Weisgraber, Mesoscale particle-based
model of electrophoresis, J. Electrochem. Soc. 162 (11) (2015) D3030–D3035. doi:10.1149/2.0161511jes.
[38] R. G. M. van der Sman, Simulations of confined suspension flow at multiple length scales, Soft Matter 5 (2009) 4376–4387.
doi:10.1039/B915749M.
[39] J. Smiatek, F. Schmid, Mesoscopic Simulations of Electroosmotic Flow and Electrophoresis in Nanochannels, Comput.
Phys. Commun. 182 (9) (2011) 1941 – 1944, computer Physics Communications Special Edition for Conference on
Computational Physics Trondheim, Norway, June 23-26, 2010. doi:10.1016/j.cpc.2010.11.021.
[40] R. Wang, J.-S. Wang, G.-R. Liu, J. Han, Y.-Z. Chen, Simulation of DNA electrophoresis in systems of large number
of solvent particles by coarse-grained hybrid molecular dynamics approach, J. Comput. Chem. 30 (4) (2009) 505–513.
doi:10.1002/jcc.21081.
[41] M. Kuron, G. Rempfer, F. Schornbaum, M. Bauer, C. Godenschwager, C. Holm, J. de Graaf, Moving charged particles
in lattice Boltzmann-based electrokinetics, J. Chem. Phys. 145 (21) (2016) 214102. doi:10.1063/1.4968596.
[42] J. S. Park, D. Saintillan, Direct Numerical Simulations of Electrophoretic Deposition of Charged Colloidal Suspensions,
in: R. C. A. R. Boccaccini, O. Van der Biest, J. Dickerson (Eds.), Key Engineering Materials, Vol. 507, Trans Tech Publ,
2012, pp. 47–51. doi:10.4028/www.scientific.net/KEM.507.47.
[43] J.-P. Hsu, C.-H. Chou, C.-C. Kuo, S. Tseng, R. Wu, Electrophoresis of an arbitrarily oriented toroid in an unbounded
electrolyte solution, Colloids Surf., B 82 (2) (2011) 505–512. doi:10.1016/j.colsurfb.2010.10.009.
arXiv:1801.10562v1 [q-bio.QM] 31 Jan 2018

Feature Decomposition Based Saliency Detection in Electron Cryo-Tomograms

Bo Zhou^1, Qiang Guo^2, Xiangrui Zeng^3, and Min Xu^3,∗

^1 Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
^2 Max Planck Institute for Biochemistry, Martinsried, Germany
^3 Computational Biology Department, Carnegie Mellon University, Pittsburgh, USA
∗ Corresponding author
Abstract
Electron Cryo-Tomography (ECT) allows 3D visualization of subcellular structures at the
submolecular resolution in close to the native state. However, due to the high degree of structural complexity and imaging limits, the automatic segmentation of cellular components from
ECT images is very difficult. To complement and speed up existing segmentation methods, it
is desirable to develop a generic cell component segmentation method that is 1) not specific to
particular types of cellular components, 2) able to segment unknown cellular components, 3)
fully unsupervised and does not rely on the availability of training data. As an important step
towards this goal, in this paper, we propose a saliency detection method that computes the likelihood that a subregion in a tomogram stands out from the background. Our method consists
of four steps: supervoxel over-segmentation, feature extraction, feature matrix decomposition,
and computation of saliency. The method produces a distribution map that represents the
regions’ saliency in tomograms. Our experiments show that our method can successfully label
most salient regions detected by a human observer, and is able to filter out regions not containing cellular components. Therefore, our method can remove the majority of the background
region, and significantly speed up the subsequent processing of segmentation and recognition
of cellular components captured by ECT.
Keywords: saliency detection, Electron Cryo-Tomography, super-voxel segmentation, 3D
Gabor filter, robust PCA
1 Introduction
The development of Electron Cryo-Tomography (ECT) has enabled the detailed inspection and
visualization of subcellular structures at sub-molecular resolution and in near-native state[19, 24].
With this cellular imaging technique, subcellular components can be systematically analyzed at
unprecedented level of detail and faithfulness. Recent studies have shown this in situ visualization
enables the discovery of numerous important structural features in complex viruses [10, 11], as well as
in prokaryotic and eukaryotic cells[4, 9, 14]. ECT has established its position as one of the leading
techniques for visualizing the subcellular and macromolecular organization of single cells[5].
In principle, an ECT tomogram captures structural information of all cellular components in
the field of view. However, several factors, such as the low signal-to-noise ratio (SNR), the limited
tilt projection range (missing wedge effect) and the crowded nature of intracellular structures,
make the systematic structural analysis of cellular components captured by ECT very difficult. In
addition, tomograms are large gray-scale 3D images. A typical raw tomogram can have a
size of 6000 × 6000 × 1500 voxels, which is very computationally expensive to process. In order
to analyze the cellular components captured by ECT tomograms, these cellular components must
first be segmented and recognized from these tomograms. Currently, most of the segmentation
is performed either manually or automatically. The manual segmentation of cellular components
in such 3D images is very laborious, even with the aid of computational segmentation tools like
watershed transform and thresholding [25]. On the other hand, most of the automatic segmentation
is very computationally intensive and often limited to specific types of objects. Many of these
automatic segmentation methods rely on matching of the geometrical model of the specific cellular
structure, such as filaments, microtubules and membranes [18, 23, 16]. Recently, supervised methods
have been developed that segment specific cellular structures using classification models trained
on annotated training images of the specific cellular structures of interest [8, 17]. Such supervised
segmentation methods are restricted to segmenting specific structures in tomograms that have already been characterized by a human annotator.
A generic segmentation algorithm that is able to segment general cellular components including
unknown cellular components, regardless of the availability of training data, is needed to
complement the existing segmentation methods. As an important step towards this goal, we propose
a saliency detection method and address the problem of automatically extracting the salient region
from an unknown background in ECT. The saliency of an image subregion is the likelihood that
it stands out relative to its background. Given that the perceptual, cognitive and computational
resources are limited, to facilitate such segmentation, it is important to ignore the background
regions which occupy the majority portion of the tomogram.
To solve the saliency detection problem, we propose a method that consists of following steps:
supervoxel over-segmentation, feature extraction, feature matrix decomposition, and computation
of saliency. The method produces a map that represents the saliency of subregions of a tomogram.
Our experiments show that 1) our method can successfully perform saliency detection compared
with human annotation, and 2) the number of detected salient voxels is significantly smaller than
the total number of voxels. As a result, our approach can substantially reduce the background
region, and accelerate the succeeding segmentation and analysis of the cellular components in the
ECT.
Our main contributions are summarized as follows:
• We design an easy-to-implement salient region detection method based on feature decomposition.
• With the candidate regions of cellular objects in ECT, the computational cost of tasks like
object detection and segmentation of specific structure is greatly reduced, facilitating the
isolation of different cellular structure of interest.
2 Methods
Our method comprises five major steps, including data pre-processing for de-noising, supervoxel
over-segmentation, feature extraction, feature matrix construction, and computation of saliency.
Figure 1: General flow diagram of the method for salient object detection and segmentation.

The flow diagram of the method is shown in Figure 1. The 1st stage is intended to enhance the image contrast to improve the performance of supervoxel over-segmentation in the 2nd stage, as
well as the quality of the extracted feature in the 3rd stage. The different stages are described
in detail in the following sections. The procedure is based on the intuitive assumption that, in terms of
image features, the salient regions are the minority regions in the tomograms.
The details of the de-noising stage are described in Appendix 6.1. In this paper, we assume that
all the salient objects have a size bigger than the assigned kernel scale. The scale σ is chosen by
visually evaluating the processed data using different scales. Given the majority of noise is smaller
than the scale selected, this step can filter out the noise and preserve the useful information in the
tomogram. We chose σ = 1.8 for all experiments in this paper.
2.1 Supervoxel over-segmentation
The geometrical features are important information for analyzing the tomograms. Geometry is
the primary feature to be considered when researchers choose a region of interest in a tomogram
to analyze. In this step, we use a supervoxel over-segmentation method, simple linear iterative
clustering (SLIC)[3], to over-segment the tomogram into sub-volumes which generate 3D geometric
boundaries and clusters that enclose small volumes with similar density within a certain distance
of the neighborhood. In ECT tomogram, low gray-scale levels represent the electron-dense region.
We generalize the method in R. Achanta’s paper[3] to three dimensions (x, y, z) and gray-scale
(I). Each voxel corresponds to a vector in R^4. The vectors are used for clustering. One of the method's parameters is n, the desired number of equally sized supervoxels.

The clustering procedure begins by initializing n cluster centers at points spaced S voxels apart on a regular grid, where S = \sqrt{N/n} and N is the number of voxels in the tomogram. There are approximately S voxels in each supervoxel. Then each voxel is assigned to the nearest cluster center whose search region overlaps its spatial location. The raw distance measurement D′ between a voxel and a cluster center is defined as

D′ = \sqrt{ w_I (d_I / m)^2 + w_s (d_s / S)^2 }    (1)

where the spatial distance is

d_s = \sqrt{ (x_c − x_i)^2 + (y_c − y_i)^2 + (z_c − z_i)^2 }    (2)

and the intensity distance is

d_I = \sqrt{ (I_c − I_i)^2 }.    (3)

D′ is further simplified as

D = \sqrt{ d_I^2 + w (d_s / S)^2 m^2 }    (4)

where the weight coefficient w, the compactness of the clusters, is the second parameter of the 3D SLIC clustering. The voxel clusters for a tomogram are generated by iterating over every voxel in it. In short, there are two parameters to specify: the number of clusters n and the compactness of clusters w. A higher value of n and a lower value of w should be chosen if the potential content is small and morphologically complex, which produces spatially denser clusters and more deformable shapes that enclose the regions containing potential objects, and vice versa. A sketch of the simplified distance computation follows.
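The simplified distance of Eq. (4) is straightforward to implement. The following is a minimal Python sketch (not the authors' code); the packing of position and gray level into a single 4-vector is an illustrative assumption.

```python
# Minimal sketch of the simplified 3D SLIC distance of Eq. (4).
# Assumption: voxel and center are arrays (x, y, z, I) combining the spatial
# coordinates and the gray level; w, S, and m are as defined in the text.
import numpy as np

def slic_distance(voxel, center, w, S, m):
    d_s = np.sqrt(np.sum((voxel[:3] - center[:3]) ** 2))    # spatial distance, Eq. (2)
    d_I = np.abs(voxel[3] - center[3])                      # intensity distance, Eq. (3)
    return np.sqrt(d_I ** 2 + w * (d_s / S) ** 2 * m ** 2)  # simplified distance, Eq. (4)
```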
2.2 Feature extraction
After the supervoxel over-segmentation, for each supervoxel, we perform feature extraction and
construct a feature vector with a size of 30. Two groups of the features are calculated:
2.2.1 3D Gabor filter based features
Previous studies have shown that the Gabor function provides a useful and reasonably accurate
description of most spatial aspects of simple receptive fields[15].
The Gabor function is a product of a Gaussian kernel and a complex sinusoid function. The derivation extending the 2D Gabor function to 3D is shown in Appendix 6.2. We assume the Gaussian envelope has the same scale along all three axes; thus the shape of this Gaussian can only be spherical, though it can have different scales. By modifying the sinusoid frequency in three orientations and the scale parameters,
we generate 24 3D Gabor filters by rotating 0, 45, 90 and 135 degrees about the x, y, and z-axes,
as well as two different Gaussian scales. Each Gabor feature is calculated by taking the mean of
the filter response inside the supervoxels.
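As an illustration, the following Python sketch builds one such 3D Gabor kernel as the product of a spherical Gaussian and a complex sinusoid (see Appendix 6.2); the kernel radius and the frequency triple (U, V, W) are illustrative assumptions, not the exact filter bank used in our experiments.

```python
# Minimal sketch of one 3D Gabor kernel; radius and frequencies are assumptions.
import numpy as np

def gabor_kernel_3d(sigma, U, V, W, radius=8):
    ax = np.arange(-radius, radius + 1, dtype=float)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    # Spherical Gaussian envelope with scale sigma (Appendix 6.2).
    gauss = np.exp(-(x**2 + y**2 + z**2) / (2 * sigma**2)) / (
        sigma**3 * (2 * np.pi) ** 1.5)
    # Complex sinusoid with 3D frequencies (U, V, W).
    sinusoid = np.exp(2j * np.pi * (U * x + V * y + W * z))
    return gauss * sinusoid
```

The real and imaginary parts of such a kernel can be applied with scipy.ndimage.convolve, and each Gabor feature is then the mean filter response inside a supervoxel.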
2.2.2 Density features
A tomogram is reconstructed from multiple electron beam projections at different angles. Higher
density material causes more energy attenuation of the electron beam when arriving at the energy
detector. This results in a lower voxel intensity corresponding to a denser cellular structure. Cellular
structure density features are inversely related to the intensity value in the tomogram. Given this
information, the second group of features consists of the intensity distribution in the supervoxel.
The intensity’s dynamic range could vary between ECT tomograms due to different settings of the
imaging system and reconstruction parameters. Therefore, we first normalize the dynamic range to [0, 600] and generate a six-bin histogram ranging from 0 to 600. The number of voxels
counted in each bin is used as one density feature. There are six density features extracted from
the histogram of each supervoxel.
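A minimal sketch of this feature group is shown below; the exact rescaling formula used to map intensities into [0, 600] is an assumption, as the text does not spell it out.

```python
# Minimal sketch of the six-bin density features of one supervoxel.
import numpy as np

def density_features(tomo, labels, sv_id):
    # Assumed normalization: linearly map the whole tomogram's intensities to [0, 600].
    lo, hi = float(tomo.min()), float(tomo.max())
    scaled = (tomo - lo) / max(hi - lo, 1e-12) * 600.0
    # Six-bin histogram over [0, 600] restricted to the voxels of this supervoxel.
    counts, _ = np.histogram(scaled[labels == sv_id], bins=6, range=(0.0, 600.0))
    return counts
```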
After segmenting the tomogram into candidate supervoxels, we compute the feature vector for
each supervoxel, and construct a feature matrix with each row corresponding to the feature vector
of a supervoxel. The feature matrix with the size of n by 30 will be used in the next step, where n
is the number of supervoxels.
2.3 Feature matrix decomposition
Previous studies have shown that subspace estimation by sparse representation and rank minimization is an excellent unsupervised method for separating the salient image regions from the image
background [6, 22]. We decompose the feature matrix constructed in the last section into a low-rank matrix and a sparse matrix using robust principal component analysis (RPCA) via principal
component pursuit (PCP)[7]. Each row of the sparse matrix represents the saliency of individual
supervoxels, whereas the low-rank matrix represents the common background information. The
feature matrix F is represented as F = L + S, where L is a low-rank matrix and S is a sparse matrix; ||S||_0 denotes the L0 norm of S. The minimization of ||S||_0 enforces S to be a sparse matrix with a small
fraction of nonzero entries. The decomposition is an optimization process in the following form:
min_{L,S} rank(L) + β||S||_0   s.t.   F − L − S = 0    (5)

where β ≥ 0 is a hyperparameter. The minimization of Equation 5 is NP-hard. Instead of directly
solving Equation 5, PCP transforms Equation 5 to an equivalent convex optimization problem[7]:
min_{L,S} ||L||_∗ + β||S||_1   s.t.   F − L − S = 0    (6)

where ||L||_∗ is the nuclear norm of L, calculated from the singular values of L, and ||S||_1 is the L1 norm
of S. The RPCA-PCP method can effectively recover the low-rank and the sparse matrix for our
feature matrix by optimizing Equation 6 [7]. In an ideal decomposition of the feature space, most
of the image background should lie in a low dimensional subspace so that they can be represented
as a low-rank matrix L shown above.
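For concreteness, the following Python sketch implements PCP with the standard ADMM updates of singular-value thresholding and soft thresholding; the default values of β and μ and the stopping rule are common choices from the RPCA literature [7], not parameters taken from our experiments.

```python
# Minimal ADMM sketch of RPCA via principal component pursuit (Eq. (6)).
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)  # soft thresholding

def rpca_pcp(F, beta=None, mu=None, tol=1e-7, max_iter=500):
    n, d = F.shape
    beta = beta if beta is not None else 1.0 / np.sqrt(max(n, d))  # common default
    mu = mu if mu is not None else n * d / (4.0 * np.abs(F).sum())
    S = np.zeros_like(F)
    Y = np.zeros_like(F)
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt    # singular-value thresholding
        S = shrink(F - L + Y / mu, beta / mu)   # sparse (saliency) part
        R = F - L - S
        Y += mu * R                             # dual update
        if np.linalg.norm(R) <= tol * np.linalg.norm(F):
            break
    return L, S
```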
2.4 Calculation of saliency map and salient region segmentation
Given the sparse matrix S decomposed from the feature matrix F, each supervoxel's saliency can be represented by the corresponding row in the sparse matrix S. In this section, we refer to it as the saliency vector of the supervoxel. After obtaining the saliency vector of the corresponding supervoxel,
we calculate the representation of saliency by taking the mean of the saliency vector, and the
saliency is assigned to the corresponding supervoxel’s spatial location which generates a saliency
volume.
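This step reduces to a simple gather operation, sketched below under the assumption that the supervoxel labels are stored as an integer volume.

```python
# Minimal sketch of Section 2.4: one saliency value per supervoxel, broadcast
# back to the voxel grid. labels is an assumed integer volume of supervoxel ids.
import numpy as np

def saliency_volume(S, labels):
    scores = S.mean(axis=1)   # mean of each supervoxel's saliency vector
    return scores[labels]     # assign each voxel its supervoxel's saliency
```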
2.5 Evaluation of salient region segmentation
2.5.1 Obtaining ground truth through manual annotation
Ground-truth salient region annotation is obtained by having a human annotator select supervoxel regions that the annotator believes stand out from the background. Anchor points are dropped by the annotator to select such supervoxel regions. The annotator has no previous knowledge of the saliency map or any other output produced by our method.
2.5.2 ROC based performance measure
Binary salient region segmentation is generated by applying a threshold value to the saliency map volume (Figure 3). The performance of the segmentation using the saliency volume is evaluated using receiver operating characteristic (ROC) analysis [21]. A true positive in our ROC analysis occurs when a manually annotated anchor point falls within the salient region segmentation. Three tomogram slices in three different datasets are evaluated. The mean area under the curve (AUC) is also calculated for each dataset (Figure 4).
3 Results
Three different experimental tomograms are included to test the method’s performance. Tomogram 1 and tomogram 3 are obtained from the EMPIAR public image archive [13]. Tomogram 1
(EMPIAR ID: 10045) captures a sub-region of the purified S. cerevisiae ribosomes[2]. Tomogram
2 contains a sub-region of a rat’s primary neuron, which was collected at Max Planck Institute
of Biochemistry[12]. Tomogram 3 (EMPIAR ID: 10048) captures a sub-region of the Chlamydia
trachomatis secretion system[1].
3.1 Supervoxel over-segmentation via SLIC clustering
The optimal scale space for different tomograms can vary due to the noise introduced by different
imaging data acquisition parameters and reconstruction methods. Using a fixed imaging and reconstruction protocol, an optimal scale can be chosen. De-noised ECT tomograms obtained with 3D Gaussian filters of different scales are shown in Appendix 6.1 and Figure A.1. For this tomogram, we choose the scale σ = 1.8 voxels, which best preserves the cellular structural information and rules out noise. A bigger or smaller filter scale would over-blur or under-de-noise the tomogram. The tomogram de-noised with the optimal Gaussian filter is used for all results shown later.
Using the iterative clustering method discussed in Section 2.1, we cluster a tomogram into different numbers of supervoxels. The compactness of the supervoxels is set by w: a bigger w constrains the deformation of the supervoxels more strongly. Figure 2 shows the supervoxel over-segmentation using different combinations of n and w.
In principle, to avoid missing small and complex salient regions in a tomogram, we choose a bigger number of supervoxels n and a smaller compactness coefficient w to make sure these complex regions are clustered and enclosed in supervoxels so that the saliency can be detected in later steps. We
choose n = 10000 and w = 0.025 as the optimal parameters for SLIC clustering in the three test
tomograms.
3.2 Extraction of feature and construction of saliency map
By applying the 3D Gabor filters with different orientations to the tomogram, we extract the
corresponding feature in each voxel. Given the supervoxels' spatial information, each voxel's 3D Gabor features are assigned to the corresponding supervoxel. Examples of feature extraction results with different 3D Gabor filters are shown in Appendix 6.2 and Figure B.1. The supervoxels' density features are extracted by generating the six-bin histogram for each supervoxel; each bin represents one density feature. Cellular structures like ribosomes, membranes, and microtubules usually have higher density, so the first three density features of a region containing such cellular structures will have higher values, whereas the last three density features will be high if a supervoxel contains low-density cellular structures.
Figure 2: SLIC supervoxel over-segmentation in tomogram 2 with different numbers of supervoxels and compactness settings. First column: one slice from the de-noised tomogram. Second to fourth columns: visualization of supervoxels' margins in this slice with n = 1000, 5000, 10000 (columns) and w = 0.025, 0.05 (rows).

Given the feature vectors of every supervoxel in the tomogram, we construct the feature matrix and apply RPCA to decompose it into the low-rank matrix and the saliency matrix. The saliency of each supervoxel is calculated using the rows in the saliency matrix and assigned to the spatial location of the supervoxels.
3.3 Evaluation of saliency detection
Our method is tested and evaluated on three different ECT tomograms consisting of simple and complex cellular contents. Figure 3 shows the saliency results generated from these three tomograms. The structural compositions of tomogram 1 and tomogram 3 are relatively simple. As we can see, our method can efficiently detect the salient regions. Tomogram 2 contains cellular structures of various sizes, densities, and shapes. More details of the salient region visualization
are shown in Appendix 6.3. Our method can effectively represent the saliency of different subregions
enriched with compact cellular structures in this tomogram.
The performance of salient region segmentation using the saliency map is evaluated on the three tomograms mentioned above. Three slices with ground-truth anchor annotations in these three tomograms are used to generate the ROC analysis. The ROC curves for these tomogram slices are shown in Figure 4. The average AUCs for tomogram 1 and tomogram 2 are approximately 0.89 and 0.863, and the average AUC for tomogram 3 is about 0.815. The performance of salient region segmentation is better in tomogram 1, compared to segmentation in tomograms 2 and 3, due to its simple and uniform structural content.
The numbers of detected salient supervoxels and voxels are recorded using the optimal cutoff threshold obtained from the ROC experiment shown above. The results are shown in Table 1. The numbers of salient supervoxels and salient voxels are significantly smaller than the total numbers of supervoxels and voxels. The background voxels are filtered out by this method. Therefore, the automatic segmentation and subsequent processing steps can be significantly sped up by only
processing such a small number of salient voxels.
Figure 3: Visualization of example slices of the saliency map (saliency levels below ½(Smax + Smin) set to 0) along with corresponding tomogram slices and human-annotated salient region anchors in 3 different tomograms.
Figure 4: Segmentation evaluation with ROC curve using saliency volume.
Table 1: Salient supervoxels selected from each tomogram

                                      Tomogram 1    Tomogram 2    Tomogram 3
Number of supervoxels                      10000         10000         10000
Number of salient supervoxels                869          1324           330
Percentage of selected supervoxels         8.69%        13.24%         3.30%
Number of voxels                     321,553,500   134,522,880    37,796,240
Number of salient voxels              25,768,154    14,756,210     1,564,276
Percentage of selected voxels              8.01%        10.96%         4.14%

4 Discussion
Efficient and fully unsupervised automatic segmentation of all cellular components in a tomogram
is extremely challenging, because of the high degree of structural complexity and the imaging
limitations in the cellular electron cryo-tomograms. As an important step towards this goal, we have
proposed an efficient salient region detection method in ECT that mimics the human visual system
for detecting a tomogram’s salient sub-regions that contain cellular components. The quantitative
results obtained suggest that our method is useful for salient region detection in studies using the ECT technique.
A direct application of our saliency detection is template-free particle picking in ECT tomograms. A tomogram usually contains macromolecules of diverse structures and sizes. Currently, the main computational approach for selecting such diverse macromolecules without the use of a structural template is Difference-of-Gaussian (DoG) filtering based template-free particle picking [26, 20]. However, such approaches tend to select macromolecules with globular structures within a certain size range. By contrast, the connected salient supervoxels in the binarized saliency map generated by our approach can directly serve as candidate macromolecules or other subcellular components without structure and size constraints. Therefore our saliency detection approach can
potentially be used as a more powerful and flexible template-free particle selector.
Future efforts will combine our method with automatic analysis systems for systematically analyzing the detected objects using adapted structural pattern mining methods [e.g., 29, 28, 30]. Such a combination will allow us to tremendously reduce the time needed for laborious manual annotation of tomograms and to discover the underlying correlations between structures.
5 Acknowledgements
We thank Dr. Robert F. Murphy for suggestions. This work was supported in part by U.S. National
Institutes of Health (NIH) grant P41 GM103712. MX acknowledges support from the Samuel and Emma Winters Foundation.
6 Appendix
6.1 De-noising
The noise in a tomogram can directly impact every subsequent step in our method. Thus, denoising and preserving useful information for the input tomogram is critical. According to the
scale-space theory, we can isolate information according to the spatial scale. Given a scale, all
the features with a size smaller than the designed scale can be filtered out, whereas the others
are preserved[27]. In this step, we implement the scale-space theory with a sampled 3D Gaussian
kernel, which aims for isolating useful information from noise by directly convolving the Gaussian
kernel with the tomogram. In three dimensions, the continuous 3D Gaussian kernel with scale σ is defined by

G[x; σ] = (1 / (σ^3 (2π)^{3/2})) exp(−(x_1^2 + x_2^2 + x_3^2) / (2σ^2)).

The tomogram T is then convolved with G, generating the de-noised volume V: V[x; σ] = G[x; σ] ∗ T.
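In practice this convolution can be done with an off-the-shelf sampled Gaussian filter; a minimal sketch using SciPy is shown below (the truncation length is an implementation assumption).

```python
# Minimal sketch of the de-noising step V = G * T with a sampled 3D Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(tomo, sigma=1.8):
    # sigma = 1.8 voxels is the scale used for all experiments in this paper.
    return gaussian_filter(tomo.astype(np.float32), sigma=sigma, truncate=4.0)
```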
Figure A.1: Gaussian-filtered tomograms with 3D Gaussian filters of different scales. Three de-noised tomograms are produced by convolving 3D Gaussian filters of different scales with the original tomogram. The de-noised tomogram with scale σ = 1.8 better preserves detailed structural information compared to the de-noised tomograms with scales σ = 4 and σ = 8.
6.2 3D Gabor filter and feature
Here, we extend the 2-D Gabor function to a 3-D function. The 3D Gabor function (H) can be
written as H[x, y, z] = G[x, y, z] · S[x, y, z], where G is a 3D Gaussian envelope

G[x, y, z; σ] = (1 / (σ^3 (2π)^{3/2})) exp(−(x^2 + y^2 + z^2) / (2σ^2)),

and S is a complex sinusoid function S[x, y, z] = exp[2πi(Ux + Vy + Wz)], where U, V, and W are the 3D frequencies of the complex sinusoid. They determine the Gabor filter's
orientation and spacing in the spatial domain.
6.3 Visualization of salient region with saliency
Examples of salient regions with the corresponding saliency in different slices of the tomogram that captures a sub-region of a rat's primary neuron are shown in Figure C.1. It can be seen that regions containing bigger cellular structures with higher material density normally present higher saliency than structures of smaller size, simpler shape, and lower density. This behavior of our method matches human behavior when observing imaging data: a human observer normally first detects big objects with higher density and complex shape, and then simpler and smaller objects.
Figure B.1: Visualization of extracted features from tomogram by applying different 3D Gabor filters. First
row: Gabor response in x, y and z direction with 0 degree of rotation along the axes. Second row: Gabor
response in x, y and z direction with 45 degree of rotation along the axes.
Figure C.1: Visualization of the saliency map (second row) along with the corresponding tomogram slices (first row). Saliency levels below ½(Smax + Smin) are set to 0 in this visualization. Different regions present different levels of saliency due to the varying complexity of the content.
References
[1] Cryo-electron tomogram of Chlamydia trachomatis with type III secretion system in contact
with HeLa cell. https://www.ebi.ac.uk/pdbe/emdb/empiar/entry/10048/, 2015. [Online;
accessed 11-06-2017].
[2] Cryo-electron tomogram of purified S. cerevisiae 80S ribosomes. https://www.ebi.ac.uk/
pdbe/emdb/empiar/entry/10045/, 2016. [Online; accessed 11-06-2017].
[3] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine
Süsstrunk. Slic superpixels compared to state-of-the-art superpixel methods. IEEE transactions
on pattern analysis and machine intelligence, 34(11):2274–2282, 2012.
[4] Martin Beck, Vladan Lučić, Friedrich Förster, Wolfgang Baumeister, and Ohad Medalia. Snapshots of nuclear pore complexes in action captured by cryo-electron tomography. Nature,
449(7162):611–615, 2007.
[5] Kfir Ben-Harush, Tal Maimon, Israel Patla, Elizabeth Villa, and Ohad Medalia. Visualizing
cellular processes at the molecular level by cryo-electron tomography. J Cell Sci, 123(1):7–12,
2010.
[6] Thierry Bouwmans and El Hadi Zahzah. Robust pca via principal component pursuit: A review
for a comparative evaluation in video surveillance. Computer Vision and Image Understanding,
122:22–34, 2014.
[7] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component
analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[8] Muyuan Chen, Wei Dai, Ying Sun, Darius Jonasch, Cynthia Y He, Michael F Schmid, Wah
Chiu, and Steven J Ludtke. Convolutional neural networks for automated annotation of cellular
cryo-electron tomograms. arXiv preprint arXiv:1701.05567, 2017.
[9] Lidia Delgado, Gema Martínez, Carmen López-Iglesias, and Elena Mercadé. Cryo-electron
tomography of plunge-frozen whole bacteria and vitreous sections to analyze the recently described bacterial cytoplasmic structure, the stack. Journal of structural biology, 189(3):220–229,
2015.
[10] Kay Grünewald and Marek Cyrklaff. Structure of complex viruses and virus-infected cells by
electron cryo tomography. Current opinion in microbiology, 9(4):437–442, 2006.
[11] Kay Grünewald, Prashant Desai, Dennis C Winkler, J Bernard Heymann, David M Belnap,
Wolfgang Baumeister, and Alasdair C Steven. Three-dimensional structure of herpes simplex
virus from cryo-electron tomography. Science, 302(5649):1396–1398, 2003.
[12] Q. Guo, C. Lehmer, A. Martínez-Sánchez, T. Rudack, F. Beck, H. Hartmann, M. Pérez-Berlanga, F. Frottin, M. Hipp, U. Hartl, D. Edbauer, W. Baumeister, and R. Fernández-Busnadiego. In Situ Structure of Neuronal C9ORF72 Poly-GA Aggregates Reveals Proteasome Recruitment. Cell, doi:10.1016/j.cell.2017.12.030, 2018.
[13] Andrii Iudin, Paul K Korir, José Salavert-Torres, Gerard J Kleywegt, and Ardan Patwardhan.
Empiar: a public archive for raw electron microscopy image data. Nature methods, 13(5):387–
388, 2016.
[14] Marion Jasnin, Mary Ecke, Wolfgang Baumeister, and Günther Gerisch. Actin organization in
cells responding to a perforated surface, revealed by live imaging and cryo-electron tomography.
Structure, 24(7):1031–1043, 2016.
[15] Judson P Jones and Larry A Palmer. An evaluation of the two-dimensional gabor filter model
of simple receptive fields in cat striate cortex. Journal of neurophysiology, 58(6):1233–1258,
1987.
[16] Leandro A Loss, George Bebis, Hang Chang, Manfred Auer, Purbasha Sarkar, and Bahram
Parvin. Automatic segmentation and quantification of filamentous structures in electron tomography. In Proceedings of the ACM Conference on Bioinformatics, Computational Biology
and Biomedicine, pages 170–177. ACM, 2012.
[17] Imanol Luengo, Michele C Darrow, Matthew C Spink, Ying Sun, Wei Dai, Cynthia Y He,
Wah Chiu, Tony Pridmore, Alun W Ashton, Elizabeth MH Duke, et al. Survos: Super-region
volume segmentation workbench. Journal of Structural Biology, 198(1):43–53, 2017.
[18] Antonio Martinez-Sanchez, Inmaculada Garcia, and Jose-Jesus Fernandez. A differential structure approach to membrane segmentation in electron tomography. Journal of structural biology,
175(3):372–383, 2011.
[19] Richard McIntosh, Daniela Nicastro, and David Mastronarde. New views of cells in 3d: an
introduction to electron tomography. Trends in cell biology, 15(1):43–51, 2005.
[20] Long Pei, Min Xu, Zachary Frazier, and Frank Alber. Simulating cryo electron tomograms of
crowded cell cytoplasm for assessment of automated particle picking. BMC Bioinformatics,
17:405, 2016.
[21] Michael J Pencina, Ralph B D’Agostino, and Ramachandran S Vasan. Evaluating the added
predictive ability of a new marker: from area under the roc curve to reclassification and beyond.
Statistics in medicine, 27(2):157–172, 2008.
[22] Houwen Peng, Bing Li, Haibin Ling, Weiming Hu, Weihua Xiong, and Stephen J Maybank.
Salient object detection via structured matrix decomposition. IEEE transactions on pattern
analysis and machine intelligence, 39(4):818–832, 2017.
[23] Alexander Rigort, David Günther, Reiner Hegerl, Daniel Baum, Britta Weber, Steffen Prohaska, Ohad Medalia, Wolfgang Baumeister, and Hans-Christian Hege. Automated segmentation of electron tomograms for a quantitative description of actin filament networks. Journal
of structural biology, 177(1):135–144, 2012.
[24] Elizabeth Villa, Miroslava Schaffer, Jürgen M Plitzko, and Wolfgang Baumeister. Opening
windows into the cell: Focused-ion-beam milling for cryo-electron tomography. Biophysical
Journal, 106(2):600a, 2014.
[25] Niels Volkmann. A novel three-dimensional variant of the watershed transform for segmentation
of electron density maps. Journal of structural biology, 138(1):123–129, 2002.
[26] NR Voss, CK Yoshioka, M. Radermacher, CS Potter, and B. Carragher. Dog picker and
tiltpicker: software tools to facilitate particle selection in single particle electron microscopy.
Journal of structural biology, 166(2):205–213, 2009.
[27] Andrew Witkin. Scale-space filtering: A new approach to multi-scale description. In Acoustics,
Speech, and Signal Processing, IEEE International Conference on ICASSP’84., volume 9, pages
150–153. IEEE, 1984.
[28] Min Xu, Xiaoqi Chai, Hariank Muthakana, Xiaodan Liang, Ge Yang, Tzviya Zeev-Ben-Mordehai, and Eric Xing. Deep learning based subdivision approach for large scale macromolecules structure recovery from electron cryo tomograms. Bioinformatics, 33(14):i13–i22,
2017.
[29] Min Xu, Elitza I Tocheva, Yi-Wei Chang, Grant J Jensen, and Frank Alber. De novo visual
proteomics in single cells through pattern mining. arXiv preprint arXiv:1512.09347, 2015.
[30] Xiangrui Zeng, Miguel Ricardo Leung, Tzviya Zeev-Ben-Mordehai, and Min Xu. A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and
weakly supervised coarse segmentation. arXiv preprint arXiv:1706.04970, Journal of Structural Biology, doi:10.1016/j.jsb.2017.12.015, 2017.
arXiv:1802.01806v1 [q-bio.MN] 6 Feb 2018
Detection of sustained signals and its relation to
coherent feedforward loops
Chun Tung Chou
School of Computer Science and Engineering,
University of New South Wales,
Sydney, Australia.
E-mail: [email protected]
February 7, 2018
Abstract
Many studies have shown that cells use temporal dynamics of signaling molecules to encode information. One particular class of temporal
dynamics is sustained and transient signals, i.e. signals of long and short
durations respectively. This paper formulates a detection problem for
distinguishing sustained signals from transient ones. The solution of the
detection problem is to compute the likelihood ratio of observing a sustained signal to a transient signal. We show that, if the logarithm of this
likelihood ratio is positive (i.e. when the signal is likely to be sustained),
then this log-likelihood ratio can be approximately computed by a coherent feedforward loop. Although the capability of coherent feedforward
loops to discriminate sustained signals is known, its statistical information
processing interpretation has not been pointed out before.
Living cells use many different strategies to encode information. One strategy is to encode information using the temporal dynamics of signaling molecules
[9, 2]. One particular class of temporal dynamics is sustained (also known as
persistent) and transient signals, i.e. signals of long and short durations respectively. The study in [8] found that PC-12 cells proliferated after a transient
ERK activation but differentiated after a sustained ERK activation. This shows
that the duration of ERK signaling affects the cell fate. Ref. [7] shows that coherent feedforward loops (FFL) can act as persistence detectors to differentiate
sustained signals from transient ones. A particular type of FFL is the coherent
type-1 feedforward loop with an AND logic at the output, or C1FFL for short.
Fig. 1a depicts the structure of the C1FFL which consists of a short arm, a long
arm and an AND logic combining the signals of the two arms. By modelling
the reaction dynamics of C1FFL with ordinary differential equations (ODEs),
Ref. [7] shows that a short and a long pulse applied to the input of the C1FFL give, respectively, a zero and a non-zero output. Instead of starting from the structure of the C1FFL and showing that it can detect sustained signals, we ask the
question in the opposite direction. In this Letter, we start from the requirement of detecting a sustained signal and use detection theory (DT) [6] to determine the detector. We then show that the resulting detector can be approximately realized by a C1FFL.

Figure 1: (a) The coherent type-1 feedforward loop with AND logic. (b) The detection theory framework.
DT is a branch of statistical signal processing. Its aim is to use the measured data to decide whether an event of interest has occurred. For example,
DT is used in radar signal processing to determine whether a target is present or
not. In the context of this Letter, the events are whether the signal is transient
or sustained. A detection problem is often formulated as a hypothesis testing
problem, where each hypothesis corresponds to a possible event. Let us consider
a detection problem with two hypotheses, denoted by H0 and H1 , which correspond, respectively, to the events of transient and sustained signals. Our aim is
to decide which hypothesis is more likely to hold. We define the log-likelihood
ratio (LLR) R:
R = log( P[measured data|H1] / P[measured data|H0] )    (1)
where P[measured data|Hi ] is the conditional probability that the measured
data is generated by the signal specified in hypothesis Hi . Intuitively, if the
LLR R is positive, then the measured data is more likely to have been generated
by a sustained signal or hypothesis H1 , and vice versa. Therefore, the key idea
of DT is to use the measured data to compute the LLR and then use it to make
a decision.
We will now present a big picture explanation of how we will connect DT
with C1FFL. The signal x∗ (t) in Fig. 1a is the output signal of Node X in the
C1FFL. We can view the C1FFL as a 2-stage signal processing engine. In the
first stage, the input signal is processed by Node X to obtain x∗ (t) and this
is the part within the dashed box in Fig. 1a. In the second stage, the signal
x∗ (t) is processed by the rest of the C1FFL to produce the output signal z(t).
We will now make a connection to DT. We apply DT to the dashed box in
Fig. 1a. We consider x∗ (t) as the measured data and use them to determine
whether the input signal is transient or sustained. DT tells us that we should
use x∗ (t) to compute the LLR. This means that we can consider the 2-stage
signal processing depicted in Fig. 1b where the input signal generates x∗ (t) and
the measured data x∗ (t) are used to calculate the LLR. If we can identify the
LLR calculation in Fig. 1b with the processing by the part of C1FFL outside of
the dashed box, then we can identify the signal z(t) with the LLR.
We will now define the problem for detecting a sustained signal using DT.
Our first step is to specify the signaling pathway in Node X, which consists of
three chemical species: signaling molecule S, molecular type X in inactive form
and its active form X* . The activation and inactivation reactions are:
S + X → S + X∗  (rate constant k+)    (2a)
X∗ → X  (rate constant k−)    (2b)
where k+ and k− are reaction rate constants. Let x(t) and x∗ (t) denote, respectively, the number of X and X* molecules at time t. Note that both x(t)
and x∗ (t) are piecewise constant because they are molecular counts. We assume
that x(t) + x∗ (t) is a constant for all t and we denote this constant by M .
We assume that the input signal s(t), which is the concentration of the
signaling molecules S at time t, is a deterministic signal. We also assume that
the signal s(t) cannot be observed, so any characteristics of s(t) can only be
inferred.
We model the dynamics of the chemical reactions by using chemical master
equation [4]. This means that x∗ (t) is a realisation of a continuous-time Markov
chain. This also means that the same input signal s(t) can result in different
x∗ (t).
The measured datum at time t is x∗ (t). However, in the formulation of
the detection problem, we will assume that at time t, the data available to
the detection problem are x∗ (τ ) for all τ ∈ [0, t]; in other words, the data are
continuous in time and are the history of the counts of X* up to time t. We will
use X∗ (t) to denote the continuous-time history of x∗ (t) up to time t. Note that
even though we assume that the entire history X∗ (t) is available for detection,
we will show that the calculation of the LLR at time t does not require the
storage of the past history.
The last step in defining the detection problem is to specify the hypotheses
Hi (i = 0, 1). Later on, we will identify H0 and H1 with, respectively, transient
and sustained signals. However, at this stage, we want to solve the detection
problem in a general way. We assume that the hypothesis H0 (resp. H1 ) is
that the input signal s(t) is the signal c0 (t) (c1 (t)) where c0 (t) and c1 (t) are
two different deterministic signals. Intuitively, the aim of the detection problem
is to use the history X∗ (t) to decide which of the two signals c0 (t) and c1 (t) is
more likely to have produced the observed history.
Based on the definition of the detection problem, the LLR L(t) at time t is
given by:
L(t) = log( P[X∗(t)|H1] / P[X∗(t)|H0] )    (3)
where P[X∗ (t)|Hi ] is the conditional probability of observing the history X∗ (t)
given hypothesis Hi . We show in the Appendix that L(t) obeys the following
ODE:
dL(t)/dt = [dx∗(t)/dt]+ log( c1(t)/c0(t) ) − k+ (M − x∗(t)) (c1(t) − c0(t))    (4)
where [a]+ = max(a, 0). We also assume that the two hypotheses are equally
likely, so L(0) = 0. Since x∗ (t) is a piecewise constant function counting the
number of X* molecules, its derivative is a sequence of Dirac deltas at the time
instants that X is activated or X* is deactivated. Note that the Dirac deltas
corresponding to the activation of X carry a positive sign and the [ ]+ operator
keeps only these. Note that a special case of Eq. (4) with constant ci (t) and
M = 1 appeared in [10]. A more general form of Eq. (4) which includes the
diffusion of signaling molecules can be found in [3].
The importance of Eq. (4) is that, given the measured data x∗ (t), we can use
Eq. (4) together with ci (t) to compute the LLR L(t). If s(t) is similar to c1 (t)
(resp. c0 (t)), then the LLR L(t) generally increases (decreases) over time and
becomes more positive (negative). If our aim is to distinguish a sustained signal
from a transient one accurately, then we want the sustained signal to produce
a large positive L(t). Since the positive contribution of L(t) comes from the
first term in Eq. (4), we can get a large positive L(t) by making sure that a
sustained signal will produce many activations. This occurs when a sustained
signal has a duration which is long compared to the time scale of the activation and
deactivation reactions (2) and we will make use of this condition later.
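To illustrate how Eq. (4) is evaluated in practice, the following Python sketch integrates it on a uniform time grid, given a piecewise-constant trajectory x∗(t) and the list of activation instants (e.g. from a stochastic simulation); the grid step and the array-based representation of c0(t) and c1(t) are assumptions made to keep the example short.

```python
# Minimal sketch of integrating the LLR ODE, Eq. (4).
import numpy as np

def llr(t, x_star, act_times, c0, c1, k_plus, M):
    """t: uniform time grid; x_star, c0, c1: signals sampled on the grid;
    act_times: time instants at which an X molecule is activated."""
    dt = t[1] - t[0]
    # Dirac-delta term: each activation at time tau adds log(c1(tau)/c0(tau)).
    jumps = np.zeros_like(t)
    idx = np.clip(np.searchsorted(t, act_times), 0, len(t) - 1)
    np.add.at(jumps, idx, 1.0)
    L = np.cumsum(jumps * np.log(c1 / c0))
    # Drift term: -k+ (M - x*(t)) (c1(t) - c0(t)), integrated over time.
    L -= np.cumsum(k_plus * (M - x_star) * (c1 - c0)) * dt
    return L  # L(0) = 0, i.e. the two hypotheses are equally likely
```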
The LLR L(t) can certainly be computed numerically by a computer but
a more interesting question is whether it can be computed by using a set of
chemical reactions. We will address this next.
In order to build a connection with C1FFL, we need to specify the form of
ci (t). We now assume that the transient signal c0 (t) and sustained signal c1 (t)
are of the form of a rectangular pulse. The durations for c0 (t) and c1 (t) are,
respectively, d0 and d1 with d1 > d0 . We define two concentration levels for the
signaling molecules: basal concentration a0 and reference concentration a1 , with
a1 > a0 . The temporal profile of ci (t) is: for 0 ≤ t < di , ci (t) = a1 , otherwise
ci (t) = a0 . Furthermore, we assume the input s(t) is a pulse of duration d. The
temporal profile of s(t) is: for 0 ≤ t < d, s(t) = a, otherwise s(t) = a0 .
There are two difficulties that prevent the computation in Eq. (4) from being carried out by chemical reactions. These are: (1) the LLR can take any real value but
chemical concentration can only be non-negative; (2) It is difficult to calculate
derivatives and M − x∗ (t) using chemical reactions. Instead of using L(t), we
will derive an approximation L̂(t) which has the properties: L̂(t) ≈ L(t) for
sustained signals and L̂(t) = 0 for transient signals.
We first consider the case when the input s(t) is a sustained signal which
results in a positive L(t). We assume that d is long compared with the time
scale of the activation and deactivation reactions. Since these reactions are fast,
we can assume quasi-steady state:
x∗(t) ≈ M k+ s(t) / (k+ s(t) + k−)    (5)
By using the form of ci (t) and s(t) in Eq. (4), we see that, for t ∈ [d0 , min{d, d1 }],
the contribution of the first term on the right-hand side (RHS) of (4) to L(t)
equals the number of times that X is activated in the time interval [d0 , t].
If t is sufficiently large and if the activation and deactivation reactions are fast,
then a large number of activations take place. In this case, we can get approximately the same L(t) by approximating the positive derivative of x∗ (t) by its
mean value:
[dx∗(t)/dt]+ ≈ k+ (M − x∗(t)) s(t)    (6)
After substituting Eqs. (5) and (6) into Eq. (4), and after some manipulation,
we arrive at:
dL̂(t)/dt = x∗(t) × { k− π(t) [φ(s(t))]+ },    (7)

where

φ(u) = log( a1/a0 ) − (a1 − a0)/u,    (8)
and π(t) = 1 for d0 ≤ t < d1 and zero otherwise. We first verify that L̂(t) ≈ L(t)
for t ∈ [0, min{d, d1 }) for long d and d1 . We use the Stochastic Simulation
Algorithm (SSA) [5] to obtain 100 realisations of x∗ (t). We then use these time
series and Eq. (4) to compute the mean and standard deviation of L(t). We
numerically integrate Eq. (7) to obtain L̂(t) for comparison. We use k+ = 0.02,
k− = 0.5, d0 = 5, d1 = 60, a0 = 0.25, a1 = a = 10.7. The top and bottom
plots in Fig. 2a show the results for d = 70 and d = 40. We can see that the
approximation is good. In general, the approximation is good if d and d1 are
long compared to the time scale of the activation and deactivation reactions.
We note from Fig. 2a that when the approximation holds, the maximum value
of L̂(t) and L(t) match very well.
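For reference, a minimal Gillespie-type sketch of reactions (2) driven by a rectangular pulse is given below; treating s(t) as constant between reaction events is a simplification near the pulse edge, so this is an illustration rather than an exact time-varying SSA.

```python
# Minimal SSA sketch for the activation/deactivation reactions (2).
import numpy as np

def ssa_pulse(k_plus, k_minus, M, a, a0, d, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0
    act_times = []
    while t < t_end:
        s = a if t < d else a0              # rectangular input pulse s(t)
        r_act = k_plus * (M - x) * s        # propensity of S + X -> S + X*
        r_deact = k_minus * x               # propensity of X* -> X
        total = r_act + r_deact
        if total <= 0.0:
            break
        t += rng.exponential(1.0 / total)   # time to the next reaction
        if rng.random() < r_act / total:
            x += 1
            act_times.append(t)             # record an activation instant
        else:
            x -= 1
    return np.array(act_times)
```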
There are a few noteworthy features on the RHS of Eq. (7). The RHS
consists of the multiplication of x∗ (t) with some other functions. We will map
x∗ (t) to the short arm of the C1FFL and the other functions to the long arm.
The multiplication is to implement the AND logic. Next, we note that the
function π(t) is zero in the time interval [0, d0 ) which means the RHS of Eq. (7)
is zero for t < d0 independent of the input. This shows how the detector is making
use of the prior knowledge of c0 (t) by not making use of any observations before
time d0 .
The form of Eq. (7) appears to be robust to how we define Hi . Instead of
assuming that Hi is a pulse of a specific duration di , we consider an alternative where we define hypothesis H0 (resp. H1 ) to be pulses whose duration is
uniformly distributed in [0, d0 ] ([d0 , d1 ]). These are composite hypotheses and
can be handled by using the Bayesian approach in [6, p.198]. We can show that
these hypotheses give an ODE in the same form as Eq. (7) except that π(t) is
slightly different. Thus the mapping to C1FFL also holds for these alternative
hypotheses.
Having shown that L̂(t) ≈ L(t) for long pulses, we now show that L̂(t) = 0
for short pulses. For a small d, the LLR L(t) becomes negative. In Eq. (7), the
[ ]+ operator is included so that L̂(t) ≥ 0 for all t. In particular, for transient
inputs with duration d < d0 , L̂(t) = 0 ∀t ≥ 0. This is consistent with the
behaviour of an ideal C1FFL which gives a zero output for short pulses [1].
Our next step is to show that Eq. (7) can be approximately realized by
the following reaction system, which models Node Y and the AND logic in the
C1FFL in Fig. 1a:
dy(t)/dt = Hy(x∗(t)) − dy y(t),  where Hy(x∗(t)) = hy x∗(t)^{ny} / (Ky^{ny} + x∗(t)^{ny})    (9a)

dz(t)/dt = x∗(t) × Hz(y(t)),  where Hz(y(t)) = hz y(t)^{nz} / (Kz^{nz} + y(t)^{nz})    (9b)
where hy , ny etc. are coefficients of the Hill functions. Our aim is to make
z(t) in Eq. (9b) to be approximately equal to L̂(t), and this can be achieved by
approximating k− π(t)[φ(s(t))]+ (= η(t)) in Eq. (7) by Hz (y(t)) in Eq. (9b). If
the input s(t) is a long pulse, then the time profiles of both η(t) and y(t) contain
a period of time that they plateau. The plateau in η(t) contributes to the ramplike increase in L̂(t) in Fig. 2a. This means we need to match the values of η(t)
and Hz (y(t)) at their plateau. For a pulse s(t) with an amplitude a, the heights
of the plateau of η(t) and Hz(y(t)) are, respectively, k−[φ(a)]+ (= f1(a)) and Hz((1/dy) Hy(M k+ a / (k+ a + k−))) (= f2(a)), and we want f1(a) ≈ f2(a) for as large a range of a as possible. Note that for all a such that f1(a) > 0, both functions f1(a) and
f2 (a) are strictly increasing, strictly concave and both f1 (∞) and f2 (∞) are
constants. Therefore, we can choose the Hill function coefficients to fit f2 (a) to
f1 (a). This argument takes care of the case when s(t) is a long pulse. For short
pulses, we need to realise π(t) whose purpose is to make the initial part of L̂(t)
zero. This is a feature shared by the ideal C1FFL model in [1]. Ref. [1] shows
that this can be realized by choosing a big enough Kz in Eq. (9b) so that the
production rate of z(t) is small initially.
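For concreteness, a minimal Euler integration of the reaction system (9) might look as follows; the Hill coefficients hy, Ky, ny, dy, hz, Kz, nz below are hypothetical placeholders rather than the fitted values used in the examples that follow, and the short-arm x∗(t) is replaced by its deterministic mean response (decay after the pulse is ignored for brevity).

import numpy as np

def x_star(t, a=10.7, d=70.0, k_plus=0.02, k_minus=0.5, M=100.0):
    # mean short-arm response to a pulse of amplitude a (M is assumed)
    xss = M * k_plus * a / (k_plus * a + k_minus)
    tau = 1.0 / (k_plus * a + k_minus)
    return xss * (1 - np.exp(-t / tau)) if t < d else 0.0

hy, Ky, ny, dy = 1.0, 2.0, 2.0, 0.1     # Node Y parameters (hypothetical)
hz, Kz, nz = 5.0, 4.0, 4.0              # Node Z parameters (hypothetical)

dt, T = 0.01, 80.0
y = z = 0.0
for t in np.arange(0.0, T, dt):
    x = x_star(t)
    Hy = hy * x**ny / (Ky**ny + x**ny)   # Hill activation of Y by X*
    Hz = hz * y**nz / (Kz**nz + y**nz)   # Hill activation of Z by Y
    y += dt * (Hy - dy * y)              # Eq. (9a)
    z += dt * (x * Hz)                   # Eq. (9b): AND logic via the product
print(z)   # C1FFL output, to be compared with Lhat(t), cf. Fig. 2b

A large Kz delays the growth of Hz(y(t)), which is exactly the mechanism, noted above, by which the initial part of the output is kept small.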
We now present two numerical examples. We use the same k+ , k− , M ,
a0 and a1 values as before. We choose d0 = 10, d1 = 80. We use parameter
estimation to determine the reaction constants in Eq. (9) so that the C1FFL
output z(t) matches L̂(t) for a range of a. Fig. 2b compares L̂(t) and z(t) for
input s(t) with d = 10, 30 and 70 for a = 10.7. When d = 10, the output of the
C1FFL is small. For d = 30 and 70, the C1FFL output matches well with the
approximate LLR L̂(t). In the second example, we use inputs of a fixed duration
of 70 but vary the amplitude a from 2.7 to 85.7. Fig. 2c compares L̂(t) and
z(t) at t = 70. It can be seen that the C1FFL approximation works for a wide
range of a. These examples show that we can match the C1FFL output z(t)
to approximate LLR L̂(t) for input s(t) of different durations and amplitudes.
Although we have only shown that z(t) matches L̂(t) for pulse inputs, the match
also extends to other slowly-varying inputs.
Conclusions: This Letter considers the problem of detecting sustained signals
from transient ones. It makes use of detection theory and turns the problem into
one of computing the log-likelihood ratio. It further shows that the log-likelihood
ratio can be approximately computed by a coherent type-1 feedforward loop.
Although the capability of coherent feedforward loops to discriminate sustained
signals is known, their statistical information-processing interpretation has not
been pointed out before.
A  Proof of (4)
In order to derive (4), we consider the history X∗ (t + ∆t) as a concatenation of
X∗ (t) and x∗ (t) in the time interval (t, t+∆t]. We assume that ∆t is chosen small
enough so that no more than one activation or deactivation reaction can take
place in (t, t + ∆t]. Given this assumption and the right continuity of the continuous-time Markov chain, we can use x∗(t + ∆t) to denote the history in (t, t + ∆t].
By using the Markov property, we can show that:
L(t + ∆t) = L(t) + log( P[x∗(t + ∆t)|H1, x∗(t)] / P[x∗(t + ∆t)|H0, x∗(t)] ).    (10)
By using the propensity functions of the reactions, we have:
P[x∗(t + ∆t)|Hi, x∗(t)] = δ_{x∗(t+∆t), x∗(t)+1} k+ (M − x∗(t)) ci(t) ∆t
  + δ_{x∗(t+∆t), x∗(t)−1} k− b(t) ∆t
  + δ_{x∗(t+∆t), x∗(t)} (1 − k+ (M − x∗(t)) ci(t) ∆t − k− b(t) ∆t),    (11)
where δa,b is the Kronecker delta which is 1 when a = b. By substituting Eq. (11)
into Eq. (10) and taking the limit ∆t → 0, we have after some manipulations:
dL(t)/dt = lim_{∆t→0} ( δ_{x∗(t+∆t), x∗(t)+1} / ∆t ) log( c1(t)/c0(t) ) − δ_{x∗(t+∆t), x∗(t)} k+ (M − x∗(t)) (c1(t) − c0(t)).    (12)
In order to obtain Eq. (4), we use the following reasoning. First, the term lim_{∆t→0} δ_{x∗(t+∆t), x∗(t)+1} / ∆t is a Dirac delta at the time instant that an X molecule
is activated. Second, the term δ_{x∗(t+∆t), x∗(t)} is only zero when the number of X∗ molecules changes, but the number of such changes is countable. In other words, δ_{x∗(t+∆t), x∗(t)} = 1 with probability one. This allows us to drop δ_{x∗(t+∆t), x∗(t)}.
References
[1] U. Alon. An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman & Hall, 2006.
[2] Marcelo Behar and Alexander Hoffmann. Understanding the temporal
codes of intra-cellular signals. Current opinion in genetics & development,
20(6):684–693, December 2010.
[3] Chun Tung Chou. Maximum a-posteriori decoding for diffusion-based molecular communication using analog filters. IEEE Transactions on Nanotechnology, 14(6):1054-1067, 2015.
[4] C. Gardiner. Stochastic methods. Springer, 2010.
[5] D. Gillespie. Exact stochastic simulation of coupled chemical reactions. The Journal of Physical Chemistry, 81(25):2340-2361, 1977.
[6] Steve M. Kay. Fundamentals of Statistical Signal Processing, Volume II:
Detection Theory. Prentice Hall, 1998.
[7] S Mangan and U Alon. Structure and function of the feed-forward loop
network motif. Proceedings of the National Academy of Sciences of the
United States of America, 100(21):11980–11985, October 2003.
[8] C J Marshall. Specificity of receptor tyrosine kinase signaling: transient versus sustained extracellular signal-regulated kinase activation. Cell, 80:179–
185, January 1995.
[9] Jeremy E Purvis and Galit Lahav. Encoding and Decoding Cellular Information through Signaling Dynamics. Cell, 152(5):945–956, February 2013.
[10] Eric D Siggia and Massimo Vergassola. Decisions on the fly in cellular sensory systems. Proceedings of the National Academy of Sciences,
110(39):E3704–12, September 2013.
[Figure 2. Numerical results (best viewed in color): (a) comparing L(t) to L̂(t), plotted as LLR versus time; (b) comparing L̂(t) to the C1FFL output, plotted as approximate LLR / C1FFL output versus time; (c) comparing L̂(t) and the C1FFL output for different pulse input amplitudes a.]
FINITE ELEMENT MODEL UPDATING USING RESPONSE SURFACE METHOD
Tshilidzi Marwala
School of Electrical and Information Engineering
University of the Witwatersrand
P/Bag 3, Wits, 2050, South Africa
[email protected]
This paper proposes the response surface method for finite element model updating. The response
surface method is implemented by approximating the finite element model surface response equation
by a multi-layer perceptron. The updated parameters of the finite element model were calculated using
genetic algorithm by optimizing the surface response equation. The proposed method was compared
to the existing methods that use simulated annealing or genetic algorithm together with a full finite
element model for finite element model updating. The proposed method was tested on an unsymmetrical H-shaped structure. It was observed that the proposed method gave the updated natural frequencies and mode shapes that were of the same order of accuracy as those given by simulated annealing
and genetic algorithm. Furthermore, it was observed that the response surface method achieved these
results at a computational speed that was more than 2.5 times as fast as the genetic algorithm and a full
finite element model and 24 times faster than the simulated annealing.
Introduction
Finite element (FE) models are widely used to predict
the dynamic characteristics of aerospace structures.
These models often give results that differ from the
measured results and therefore need to be updated to
match the measured data. FE model updating entails
tuning the model so that it can better reflect the measured data from the physical structure being modeled1.
One fundamental characteristic of an FE model is that it
can never be a true reflection of the physical structure
but it will forever be an approximation. FE model updating fundamentally implies that we are identifying a
better approximation model of the physical structure
than the original model. The aim of this paper is to introduce updating of finite element models using Response Surface Method (RSM)2. Thus far, the RSM
method has not been used to solve the FE updating problem1. This new approach to FE model updating is compared to methods that use simulated annealing (SA) or
genetic algorithm (GA) together with full FE models for
FE model updating. FE model updating methods have
been implemented using different types of optimization
methods such as genetic algorithm and conjugate gradient methods3-5. Levin and Lieven5 proposed the use of
SA and GA for FE updating.
RSM is an approximate optimization method that
looks at various design variables and their responses and
identify the combination of design variables that give the
best response. The best response, in this paper, is defined as the one that gives the minimum distance between the measured data and the data predicted by the
FE model. RSM attempts to replace implicit functions
of the original design optimization problem with an approximation model, which traditionally is a polynomial and therefore is less expensive to evaluate. This makes RSM very useful for FE model updating, because optimizing the FE model to match measured data to FE-model-generated data is a computationally expensive exercise. Furthermore, the calculation of the gradients that are essential when traditional optimization methods, such as conjugate gradient methods, are used is computationally expensive and often encounters numerical problems such as ill-conditioning. RSM tends to be immune to such problems when used for FE model updating. This is largely because RSM solves a crude approximation of the FE model rather than the full FE model, which is of high dimensional order. The multi-layer perceptron (MLP)6 is used to approximate the response equation. The RSM is particularly useful for optimizing systems that are evolving as a function of time, a situation that is prevalent in model-based fault diagnostics found in the manufacturing sector. To date, RSM has been used extensively to optimize complex models and processes7,8. In summary, the RSM is used for the following reasons: (1) the ease of implementation, which includes low computational time; (2) the suitability of the approach to the manufacturing sector, where model-based methods are often used to monitor structures that evolve as a function of time.
FE model updating has been used widely to detect damage in structures9. When implementing FE updating methods for damage identification, it is assumed that the FE model is a true dynamic representation of the structure, and this is achieved through FE model updating. This means that changing any physical parameter of an element in the FE model is equivalent to introducing damage in that region. There are two approaches that are used in FE updating: direct methods and iterative
methods1. Direct methods, which use the modal properties, are computationally efficient to implement and reproduce the measured modal data exactly. Furthermore,
they do not take into account the physical parameters
that are updated. Consequently, even though the FE
model is able to predict measured quantities, the updated
model is limited in the following ways: it may lack the
connectivity of nodes - connectivity of nodes is a phenomenon that occurs naturally in finite element modeling because of the physical reality that the structure is
connected; the updated matrices are populated instead
of banded - the fact that structural elements are only
connected to their neighbors ensures that the mass and
stiffness matrices are diagonally dominated with few
couplings between elements that are far apart; and there
is a possible loss of symmetry of the systems matrices.
Iterative procedures use changes in physical parameters
to update FE models and produce models that are physically realistic. Iterative methods that use modal properties and the RSM for FE model updating are implemented in this paper. The FE models are updated so that
the measured modal properties match the FE model predicted modal properties. The proposed RSM updating
method is tested on an unsymmetrical H-shaped structure.
Mathematical Background
In this study, modal properties, i.e. natural frequencies
and mode shapes, are used as a basis for FE model updating. For this reason these parameters are described in
this section. Modal properties are related to the physical
properties of the structure. All elastic structures may be
described in terms of their distributed mass, damping
and stiffness matrices in the time domain through the
following expression10:
[M]{X''} + [C]{X'} + [K]{X} = {F}    (1)
where [M], [C] and [K] are the mass, damping and
stiffness matrices respectively, and {X}, {X′} and {X′′}
are the displacement, velocity and acceleration vectors
respectively while {F} is the applied force vector. If
equation 1 is transformed into the modal domain to form
an eigenvalue equation for the ith mode, then10:
(−ωi²[M] + jωi[C] + [K]){φ}i = {0}    (2)
where j = √−1, ωi is the ith complex eigenvalue, with its imaginary part corresponding to the natural frequency ωi, {0} is the null vector and {φ}i is the ith complex mode shape vector with the real part corresponding to the normalized mode shape {φ}i. From equation 2, it may be deduced that changes in the mass and stiffness matrices cause changes in the modal properties of the structure. Therefore, the modal properties can be identified through the identification of the correct mass and stiffness matrices.
The frequency response functions (FRFs) are defined as the ratio of the Fourier transformed response to the Fourier transformed force. The FRFs may be expressed in receptance and inertance form. On the one hand, the receptance expression of the FRF is defined as the ratio of the Fourier transformed displacement to the Fourier transformed force. On the other hand, the inertance expression of the FRF is defined as the ratio of the Fourier transformed acceleration to the Fourier transformed force. The inertance FRF (H) may be written in terms of the modal properties by using the modal summation equation as follows10:

H_kl(ω) = Σ_{i=1}^{N} (−ω² φ_ki φ_li) / (−ω² + 2jζi ωi ω + ωi²)    (3)

Equation 3 is an FRF due to excitation at position k and response measurement at position l; ω is the frequency point, ωi is the ith natural frequency, N is the number of modes and ζi is the damping ratio of mode i. The excitation and response of the structure and the Fourier transform method10 can be used to calculate the FRFs. Through equation 3 and a technique called modal analysis10, the natural frequencies and mode shapes can be indirectly calculated from the measured FRFs. The modal properties of a dynamic system depend on the mass and stiffness matrices of the system, as indicated by equation 2. Therefore, the measured modal properties can be reproduced by the FE model if the correct mass and stiffness matrices are identified.
FE model updating is achieved by identifying the correct mass and stiffness matrices. The correct mass and stiffness matrices, in the light of the measured data, can be obtained by identifying the correct moduli of elasticity for various sections of the structure under consideration1. In this paper, to correctly identify the moduli of elasticity of the structure, the following cost function, which measures the distance between measured data and FE-model-calculated data, is minimized:

E = Σ_{i=1}^{N} γi ((ωi^m − ωi^calc)/ωi^m)² + β Σ_{i=1}^{N} (1 − diag(MAC({φ}i^calc, {φ}i^m))i)²    (4)
Here m is for measured, calc is for calculated, N is the
number of modes; γ i is the weighting factor that measures the relative distance between the initial estimated
natural frequencies for mode i and the target frequency
of the same mode; the parameter β is the weighting
function on the mode shapes; the MAC is the modal
assurance criterion11; and the diag(MAC)i stands for the
ith diagonal element of the MAC matrix. The MAC is a
measure of the correlation between two sets of mode
shapes of the same dimension. In equation 4 the first
part has a function of ensuring that the natural frequencies predicted by the FE model are as close to the measured ones as possible while the second term ensures that
the mode shapes between measurements and those predicted by the FE model are correlated. When two sets of
mode shapes are perfectly correlated then the MAC
matrix is an identity matrix. The updated model is
evaluated by comparing the natural frequencies and
mode shapes from the FE models before and after updating to the measured ones.
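To make equation 4 concrete, the following Python sketch (our own illustration, not part of the original paper) evaluates the cost function for given measured and calculated modal data, with the MAC computed in the usual way11; the toy data at the bottom are hypothetical.

import numpy as np

def mac(phi_a, phi_b):
    # modal assurance criterion between two mode-shape vectors (reference 11)
    return np.abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

def objective(w_meas, w_calc, phis_meas, phis_calc, gamma, beta):
    # first term of equation 4: weighted relative frequency errors
    freq_term = np.sum(gamma * ((w_meas - w_calc) / w_meas) ** 2)
    # second term of equation 4: penalty on uncorrelated mode shapes
    mode_term = beta * sum((1.0 - mac(pc, pm)) ** 2
                           for pc, pm in zip(phis_calc, phis_meas))
    return freq_term + mode_term

# toy usage with two modes and four measurement coordinates (hypothetical data)
w_m = np.array([53.9, 117.3]); w_c = np.array([56.2, 127.1])
phi_m = [np.array([1.0, 0.8, -0.5, 0.1]), np.array([0.9, -1.0, 0.4, 0.2])]
phi_c = [p + 0.05 for p in phi_m]
gamma = ((w_m - w_c) / w_m) ** 2       # squared initial errors, as described later
print(objective(w_m, w_c, phi_m, phi_c, gamma, beta=0.75))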
[Figure 1. The flowchart of the RSM. Initial conditions (updating parameters, updating objective, updating space) feed the generation of surface response data using the FE model; this is followed by functional approximation, global optimization, and functional evaluation of the optimum solution on the FE model; if the updating criteria are not satisfied (N), the worst surface response data are replaced with the optimum point and its FE response and the loop repeats; otherwise (Y) the procedure stops. Here N stands for no and Y stands for yes.]

Response Surface Method
RSM method is a procedure that operates by generating a response for a given input. The inputs are the parameters to be updated and the response is the error between the measured data and the FE model generated
data. Then an approximation model of the input parameters and the response, called a response surface
equation, is constructed. As a consequence of this, the
optimization method operates on the surface response.
This equation is usually simple and not computationally
intensive as opposed to a full FE model. RSM has other
advantages such as the ease of implementation through
parallel computation and the ease at which parameter
sensitivity can be calculated.
The proposed RSM consists of these essential components: (1) the response surface approximation equation; and (2) the optimization procedure. There are
many techniques that have been used for response surface approximation such as polynomial approximation12
and neural networks13. A multi-layer perceptron is used
as a response surface approximation equation6. Further
understanding of different approaches to response surface approximation may be found in the literature14-19.
In this paper, MLP is used because it has been successfully used to solve complicated regression problems.
The details of the MLP are described in the next section.
The second component of the RSM is the optimization
of the response surface. There are many types of optimization methods that can be used to optimize the response surface equation and these include the gradient
based methods20 and evolutionary computation methods21. The gradient based methods have a shortcoming
of identifying local optimum solutions while evolutionary computing methods are better able to identify global
optimum solution. As a result of the global optimum
advantage of evolutionary methods, in this study the GA
is used to optimize the response surface equation. The
manner in which the RSM is implemented is shown in
Figure 1.
In this figure it is shown that the RSM is implemented by following these steps:
1) Setting initial conditions, which are: updating parameters, updating objective, which is in equation 4,
and updating space.
2) The FE model is then used to generate sample response surface data
3) MLP is used to approximate the response surface
approximation equation from the data generated in
Step 2.
4) GA is used to find a global optimum solution.
5) The new optimum solution is used to evaluate the
response from the full FE model.
6) If the optimum solution does not satisfy the objective, then the new optimum and the corresponding
FE model calculated response replaces the candidate
with the worst response in data set generated in Step
2 and then steps 3 to 5 are repeated. If the objective
is satisfied then stop and the optimum solution becomes the ultimate solution.
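A compact sketch of this loop is given below (our own illustration); it substitutes a quadratic least-squares surrogate for the MLP and random search for the GA purely to keep the example short, and the toy fe_model stands in for the expensive FE error of equation 4. All names and tolerances are hypothetical.

import numpy as np
rng = np.random.default_rng(0)

def fe_model(p):                       # expensive response: error vs. target
    return float(np.sum((p - 0.3) ** 2))

dim, n_init, iters = 3, 30, 10
P = rng.uniform(0.0, 1.0, (n_init, dim))          # step 2: sample design points
r = np.array([fe_model(p) for p in P])

for _ in range(iters):
    # step 3: fit a surrogate (least squares on [1, p, p^2] features)
    X = np.hstack([np.ones((len(P), 1)), P, P ** 2])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    surrogate = lambda p: np.hstack([1.0, p, p ** 2]) @ coef
    # step 4: "global" optimization of the surrogate (random search here)
    cand = rng.uniform(0.0, 1.0, (5000, dim))
    best = cand[np.argmin([surrogate(c) for c in cand])]
    # step 5: evaluate the optimum on the full model
    val = fe_model(best)
    if val < 1e-6:                     # step 6: updating criterion
        break
    worst = np.argmax(r)               # replace the worst stored response
    P[worst], r[worst] = best, val

print(best, val)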
Step 6 ensures that the simulation always operates in
the region of the optimum solution. The next section
describes an MLP, which is used for functional approximation.
Multi-layer Perceptron
The multi-layer perceptron is the type of neural network used in the present study. This section gives an overview of the MLP in the context of functional approximation. The MLP is viewed in this paper as a parameterized graph that makes probabilistic assumptions
about data. Learning algorithms are viewed as methods
for finding parameter values that look probable in the
light of the data. Supervised learning is used to identify
the mapping function between the updating parameters
(x) and the response (y). The response is calculated using equation 4. The reason why the MLP is used is because it provides a distributed representation with respect to the input space due to cross-coupling between
input, hidden and output layers. The MLP architecture
contains a hyperbolic tangent basis function in the hidden units and linear basis functions in the output units6.
A schematic illustration of the MLP is shown in Figure
2.
This network architecture contains hidden units and
output units and has one hidden layer. The bias parameters in the first layer are shown as weights from an extra
input having a fixed value of x0 = 1. The bias parameters in the second layer are shown as weights from an extra hidden unit, with the activation fixed at z0 = 1. The model in Figure 2 is able to take into account the intrinsic dimensionality of the data. Models of this form can approximate any continuous function to arbitrary accuracy if the number of hidden units M is sufficiently large. The relationship between the output y, representing the error between the model and measured data, and the input x, representing the updating parameters, may be written as follows6:

y_k = Σ_{j=1}^{M} w_kj^(2) tanh( Σ_{i=1}^{d} w_ji^(1) x_i + w_j0^(1) ) + w_k0^(2)    (5)
Here, w_ji^(1) indicates the first-layer weight going from input i to hidden unit j, and w_kj^(2) the second-layer weight going from hidden unit j to output k; M is the number of hidden units, d is the number of input units, while w_j0^(1) indicates the bias for the hidden
unit j. Training the neural network identifies the weights
in equations 5 and a cost function must be chosen to
identify these weights. A cost function is a mathematical
representation of the overall objective of the problem.
The main objective, this is used to construct a cost
function, is to identify a set of neural network weights
given updating parameters and the error between the FE
model and the measured data. If the training set
D = { x k , t k }kN=1 is used and assuming that the targets t
are sampled independently given the inputs xk and the
weight parameters, wkj, the cost function, E, may be
written as follows using the sum-of-square error function6:

E = Σ_{n=1}^{N} Σ_{k=1}^{K} (t_nk − y_nk)²    (6)
The sum-of-square error function is chosen because it has been found to be suited to regression problems6.
equation 6, N is the number of training examples and K
is the number of output units. In this paper, N is equal to
150, while K is equal to 1.
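As a concrete illustration of equations 5 and 6, consider the following numpy sketch (our own; the weights are random rather than trained, and the sample responses are synthetic), with d = 12 inputs, M = 8 hidden units and K = 1 output as in the example later in the paper.

import numpy as np
rng = np.random.default_rng(1)

d, M, K = 12, 8, 1
W1 = rng.normal(size=(M, d)); b1 = rng.normal(size=M)   # w_ji^(1), w_j0^(1)
W2 = rng.normal(size=(K, M)); b2 = rng.normal(size=K)   # w_kj^(2), w_k0^(2)

def mlp(x):
    # equation 5: tanh hidden layer, linear output layer
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sse(xs, ts):
    # equation 6: sum-of-square error over the training set
    return sum(np.sum((t - mlp(x)) ** 2) for x, t in zip(xs, ts))

xs = rng.uniform(6.0, 8.0, size=(150, d))   # N = 150 updating-parameter samples
ts = rng.uniform(size=(150, K))             # toy target responses
print(sse(xs, ts))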
Before the MLP is trained, the network architecture
needs to be constructed by choosing the number of hidden units, M. If M is too small, the MLP will be insufficiently flexible and will give poor generalization of the
data because of high bias. However, if M is too large,
the neural network will be unnecessarily flexible and
will give poor generalization due to a phenomenon
known as over-fitting caused by high variance. In this
study, we choose M such that the number of weights is fewer than the number of response data points. This is in
line with the basic mathematical principle which states
that in order to solve a set of equations with n variables
you need at least n independent data points. The next
section describes the GA, which is a method that is used
to solve for the optimum solution of the response surface
approximation equation.
[Figure 2. Feed-forward network having two layers of adaptive weights.]
Genetic Algorithms
GA was inspired by Darwin's theory of natural evolution. Genetic algorithm is a simulation of natural evolution where the law of the survival of the fittest is applied
to a population of individuals. This natural optimization
method is used to optimize either the response surface
approximation equation or the error between the FE
model and the measured data. GA is implemented by
generating a population and creating a new population
by performing the following procedures: (1) crossover;
(2) mutation; (3) and reproduction. The details of these
procedures can be found in Holland21 and Goldberg22.
The crossover operator mixes genetic information in the
population by cutting pairs of chromosomes at random
points along their length and exchanging over the cut
sections. This has a potential of joining successful operators together. Arithmetic crossover technique22 is
used in this paper. Arithmetic crossover takes two parents and performs an interpolation along the line formed
by the two parents. For example if two parents p1 and
p2 undergo crossover, then a random number a which
lies in the interval [0,1] is generated and the new offspring formed are a·p1 + (1 − a)·p2 and (1 − a)·p1 + a·p2.
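The two operators can be sketched as follows (our own illustration; the Gaussian decay law in the mutation is one common variant of the non-uniform schedule, not necessarily the exact form of reference 22).

import numpy as np
rng = np.random.default_rng(2)

def arithmetic_crossover(p1, p2):
    # interpolate along the line between two parents (reference 22)
    a = rng.uniform()
    return a * p1 + (1 - a) * p2, (1 - a) * p1 + a * p2

def nonuniform_mutation(p, gen, max_gen, low, high, sigma0=0.5):
    # perturb one gene; the spread narrows to a point as gen -> max_gen
    i = rng.integers(len(p))
    sigma = sigma0 * (1 - gen / max_gen)
    q = p.copy()
    q[i] = np.clip(q[i] + rng.normal(0.0, sigma + 1e-12), low, high)
    return q

p1, p2 = np.array([6.5, 7.0, 7.5]), np.array([7.2, 6.8, 7.9])
print(arithmetic_crossover(p1, p2))
print(nonuniform_mutation(p1, gen=10, max_gen=200, low=6.0, high=8.0))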
Mutation is a process that introduces new information to a population. Non-uniform mutation22 was used; it changes one of the parameters of the parent based on a non-uniform probability distribution. The Gaussian distribution starts with a high variance and narrows to a point distribution as the current generation approaches the maximum generation.
Reproduction takes successful chromosomes and reproduces them in accordance with their fitness functions. In this study the normalized geometric selection method was used22. This method is a ranking selection function which is based on the normalized geometric distribution. Using this method the least fit members of the population are gradually driven out of the population. The basic genetic algorithm was implemented in this paper as follows:
1) Randomly create an initial population of a certain size.
2) Evaluate all of the individuals in the population using the objective function in equation 4.
3) Use the normalized geometric selection method to select a new population from the old population based on the fitness of the individuals as given by the objective function.
4) Apply some genetic operators, non-uniform mutation and arithmetic crossover, to members of the population to create new solutions.
5) Repeat steps 2-4, which is termed one generation, until a certain fixed number of generations has been achieved.
The next section describes simulated annealing, which is used to update the FE model using the full FE model.
Simulated Annealing
Simulated Annealing is a Monte Carlo method that is
used to investigate the equations of state and frozen
states of n degrees of freedom system23. SA was inspired by the process of annealing where objects, such as
metals, re-crystallize or liquids freeze. In the annealing
process the object is heated until it is molten, then it is
slowly cooled down such that the metal at any given
time is approximately in thermodynamic equilibrium.
As the temperature of the object is lowered, the system
becomes more ordered and approaches a frozen state at
T=0. If the cooling process is conducted insufficiently
or the initial temperature of the object is not sufficiently
high, the system may become quenched forming defects
or freezing out in metastable states. This indicates that
the system is trapped in a local minimum energy state.
The process that is followed to simulate the annealing
process was proposed by Metropolis et al.23 and it involves choosing the initial state with energy Eold (see
equation 4) and temperature T and holding T constant
and perturbing the initial configuration and computing
Enew at the new state. If Enew is lower than Eold, then accept the new state, otherwise if the opposite is the case
then accept this state with a probability of exp -(dE/T)
where dE is the change in energy. This process can be
mathematically represented as follows:
if E_new < E_old, accept state E_new; else accept E_new with probability exp(−(E_new − E_old)/T).    (7)
This process is repeated until the sampling statistics for the current temperature are adequate, and then
the temperature is decreased and the process is repeated
until a frozen state where T=0 is achieved.
SA was first applied to optimization problems by
Kirkpatrick et al.24. The current state is the current
updating solution, the energy equation is the objective
function in equation 4, and the ground state is the global
optimum solution.
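A minimal sketch of this acceptance rule follows (our own illustration); the geometric cooling schedule is an arbitrary choice standing in for the scaled schedule used later in the experiments, and the energy function is a toy stand-in for equation 4.

import numpy as np
rng = np.random.default_rng(3)

def energy(p):                       # stands in for the objective in equation 4
    return float(np.sum((p - 0.3) ** 2))

p = rng.uniform(size=3); E_old = energy(p); T = 1.0
for step in range(5000):
    q = p + rng.normal(0.0, 0.1, size=3)          # perturb the configuration
    E_new = energy(q)
    # equation 7: accept downhill moves always, uphill moves with prob exp(-dE/T)
    if E_new < E_old or rng.uniform() < np.exp(-(E_new - E_old) / T):
        p, E_old = q, E_new
    if step % 100 == 99:
        T *= 0.9                                   # cool towards the frozen state
print(p, E_old)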
Example: Asymmetrical H-structure
An unsymmetrical H-shaped aluminum structure
shown in Figure 3 was used to validate the proposed
method. This structure was also used by Marwala and
Heyns4 as well as Marwala25. This structure had three
thin cuts of 1 mm that went half-way through the cross-section of the beam. These cuts were introduced to elements 3, 4 and 5. The structure with these cuts was used
so that the initial FE model gives data that are far from
the measured data and, thereby test the proposed proce-
dure on a difficult FE model updating problem. The
structure was suspended using elastic rubber bands. The
structure was excited using an electromagnetic shaker
and the response was measured using an accelerometer.
The structure was divided into 12 elements. It was excited at a position indicated by double-arrows, in Figure
3, and acceleration was measured at 15 positions indicated by single-arrows in Figure 3. The structure was
tested freely-suspended, and a set of 15 frequency response functions were calculated. A roving accelerometer was used for the testing. The mass of the accelerometer was found to be negligible compared to the mass
of the structure. The number of measured coordinates was 15.

[Figure 3. Irregular H-shaped structure. Overall dimensions 200 mm, 400 mm and 600 mm; the excitation position is indicated by double arrows and the 15 measurement positions by single arrows; the cross-section indicated by line AB is 9.8 mm by 32.2 mm.]

Thereafter, the finite element model was constructed using the Structural Dynamics Toolbox26. The FE model used Euler-Bernoulli beam elements. The FE model contained 12 elements. The moduli of elasticity of these elements were used as updating parameters. When the FE updating was implemented, the moduli of elasticity were restricted to vary from 6.00x10^10 to 8.00x10^10 N.m-2. The weighting factors in the first term in equation 4 were calculated for each mode as the square of the error between the measured natural frequency and the natural frequency calculated from the initial model, and the weighting function for the second term in equation 4 was set to 0.75. When the RSM, SA and GA were implemented for model updating, the results shown in Table 1 were obtained.
On implementing the proposed RSM, the FE model was run 150 times to generate the data for functional approximation. The MLP implemented had 12 input variables corresponding to the 12 elements in the FE model, 8 hidden units and one output unit corresponding to the error in equation 4. As described before, the MLP had a hyperbolic tangent activation function in the hidden layer and a linear activation function in the output layer. The RSM functional approximation via the MLP was evaluated 10 times (iterations), each time using the GA to calculate the optimum point, evaluating this optimum point on the FE model and then storing the previous optimum point in the data set for the current functional approximation. The scaled conjugate gradient method was used to train the MLP, primarily because of its computational efficiency27. The initial functional approximation was obtained by training the MLP for 150 training cycles; each subsequent functional approximation, where the data set had the previous optimum solution added to it, used 5 training cycles. On using the RSM, the MLP was only initialized once. The GA was implemented with a population size of 50 and 200 generations. The normalized geometric distribution was implemented with a probability of selecting the best candidate set to 8%, a mutation rate of 0.3% and a crossover rate of 60%.
When SA and a full FE model were implemented for FE updating, the scale of the cooling schedule was set to 4 and the number of individual annealing runs was set to 3. When the simulation was run, the first run involved 7008 FE model calculations, the second run 6546 FE model calculations and the third run 5931 FE model calculations.
On implementing the GA and a full FE model, the same options as those that were used in the implementation of the RSM were used. The results showing the moduli of elasticity of the initial FE model, RSM updated FE model, SA updated FE model and GA updated FE model are shown in Figure 4. Table 1 shows the measured natural frequencies, initial natural frequencies and natural frequencies obtained by the RSM, SA and GA updated FE models.

Table 1. Results showing measured frequencies, the initial frequencies and the frequencies obtained when the FE model is updated using the RSM, SA and GA.

Measured Freq (Hz) | Initial Freq (Hz) | RSM Updated Model (Hz) | SA Updated Model (Hz) | GA Updated Model (Hz)
53.9  | 56.2  | 52.2  | 54.0  | 53.9
117.3 | 127.1 | 118.4 | 118.8 | 120.1
208.4 | 228.4 | 209.4 | 209.7 | 211.3
254.0 | 263.4 | 251.1 | 253.8 | 253.4
445.1 | 452.4 | 432.7 | 435.8 | 438.6

The error between the first measured natural frequency and that from the initial FE model, which was obtained when the modulus of elasticity of 7.00x10^10 N.m-2 was assumed, was 4.3%. When the RSM was used for FE updating, this error was reduced to 3.1%, while using SA it was reduced to 0.2% and using the GA approach it was reduced to 0%. The error between the second measured natural frequency and that from the initial model was 8.4%. When the RSM was used, this error was reduced to 0.9%, while using SA it was reduced to 1.3% and using the GA it was reduced to 2.4%. The error of the third natural frequency between the measured data and the initial FE model was 9.6%. When the RSM was used, this error was reduced to 0.5%, while using SA reduced it to 0.6% and using the GA and a full FE model reduced it to 1.4%. The error between the fourth measured natural frequency and that from the initial model was 3.7%. When the RSM was used for FE updating, this error was reduced to 1.1%, while using the SA reduced it to 0.1% and using the GA and a full FE model reduced it to 0.2%. The error between the fifth measured natural frequency and that from the initial model was 1.6%. When the RSM was used, this error was increased to 2.8%, while using SA increased it to 2.1% and using the GA and a full FE model the error was reduced to 1.5%. Overall, the SA gave the best results with an average error, calculated over all five natural frequencies, of 0.9%, followed by the GA with an average error of 1.1% and then the RSM with an average error of 1.7%. All three methods on average improved when compared to the average error between the initial FE model and the measured data, which was 5.5%.
The updated FE models were also validated on the mode shapes they predicted. To make this assessment possible the MAC11 was used. The mean of the diagonal of the MAC vector was used to compare the mode shapes predicted by the updated and initial FE models to the measured mode shapes. The average MAC calculated between the mode shapes from the initial FE model and the measured mode shapes was 0.8394. When the average MAC was calculated between the measured data and data obtained from the updated FE models, it was observed that the RSM, SA and GA updated FE models gave improved averages of the diagonal of the MAC matrix of 0.8413, 0.8430 and 0.8419, respectively. Therefore, the SA gave the best MAC followed by the GA, which was followed by the RSM. However, these differences in the accuracies of the MAC and natural frequencies were not significant.

[Figure 4. Graph showing the initial moduli of elasticity and the moduli of elasticity obtained when the FE model is updated using the RSM, GA and SA, plotted per element (1-12). Here e10 indicates 10 to the power 10 and the units are N.m-2.]

The computational time taken to run the complete RSM method was 46 CPU seconds, while the SA and a full FE model took 19 CPU minutes to run and the GA and a full FE model took 117 CPU seconds. The RSM was found to be faster than the GA, which was in turn much faster than the SA. On implementing the RSM, 160 FE model evaluations were made, while on implementing the SA 19485 FE model evaluations were made and on implementing the GA 10000 FE model calculations were made. In this paper, a simple FE model with 39 degrees of freedom is updated. It can, therefore, be concluded that if the FE model had several thousand degrees of freedom, the RSM would be substantially faster than the other methods. This conclusion should be understood in the light of the fact that FE models usually have many degrees of freedom.
Conclusion

In this study, RSM is proposed for FE model updating. The proposed RSM was implemented within the framework of the MLP for functional approximation and the GA for optimization of the MLP response surface function. This procedure was compared to the GA and SA. When these techniques were tested on the unsymmetrical H-shaped structure, it was observed that the RSM was faster than the SA and GA without much compromise on the accuracy of the predicted modal properties.

Acknowledgment

The author would like to thank Stefan Heyns, the now National Research Foundation as well as the AECI, Ltd for their assistance in this work.

References

1 Friswell, M.I., and Mottershead, J.E., Finite Element Model Updating in Structural Dynamics, Kluwer Academic Publishers Group, Norwell, MA, 1995, pp. 1-286.
2 Montgomery, D.C., Design and Analysis of Experiments, 4th Edition, John Wiley and Sons, NY, 1995, Chapter 14.
3 Marwala, T., "Finite element model updating using wavelet data and genetic algorithm," Journal of Aircraft, 39(4), 2002, pp. 709-711.
4 Marwala, T., and Heyns, P.S., "A multiple criterion method for detecting damage on structures," AIAA Journal, 195(2), 1998, pp. 1494-1501.
5 Levin, R.I., and Lieven, N.A.J., "Dynamic finite element model updating using simulated annealing and genetic algorithms," Mechanical Systems and Signal Processing, 12(1), pp. 91-120.
6 Bishop, C.M., Neural Networks for Pattern Recognition, Oxford: Clarendon, 1996.
7 Lee, S.H., Kim, H.Y., and Oh, S.I., "Cylindrical tube optimization using response surface method based on stochastic process," Journal of Materials Processing Technology, 130-131(20), 2002, pp. 490-496.
8 Edwards, I.M., and Jutan, A., "Optimization and control using response surface methods," Computers & Chemical Engineering, 21(4), 1997, pp. 441-453.
9 Doebling, S.W., Farrar, C.R., Prime, M.B., and Shevitz, D.W., "Damage identification and health monitoring of structural and mechanical systems from changes in their vibration characteristics: a literature review," Los Alamos National Laboratory Report LA-13070-MS, 1996.
10 Ewins, D.J., Modal Testing: Theory and Practice, Research Studies Press, Letchworth, U.K., 1995.
11 Allemang, R.J., and Brown, D.L., "A correlation coefficient for modal vector analysis," Proceedings of the 1st International Modal Analysis Conference, 1982, pp. 1-18.
12 Sacks, J., Welch, W.J., Mitchell, T.J., and Wynn, H.P., "Design and analysis of computer experiments," Statistical Science, 4(4), 1989, pp. 409-435.
13 Varaajan, S., Chen, W., and Pelka, C.J., "Robust concept exploration of propulsion systems with enhanced model approximation," Engineering Optimization, 32(3), 2000, pp. 309-334.
14 Giunta, A.A., and Watson, L.T., "A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models," AIAA-98-4758, American Institute of Aeronautics and Astronautics, Inc., 1998, pp. 392-401.
15 Koch, P.N., Simpson, T.W., Allen, J.K., and Mistree, F., "Statistical Approximations for Multidisciplinary Design Optimization: The Problem of Size," Journal of Aircraft, 36(1), 1999, pp. 275-286.
16 Jin, R., Chen, W., and Simpson, T., "Comparative Studies of Metamodeling Techniques under Multiple Modeling Criteria," 8th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, September 6-8, 2000.
17 Lin, Y., Krishnapur, K., Allen, J.K., and Mistree, F., "Robust Concept Exploration in Engineering Design: Metamodeling Techniques and Goal Formulations," Proceedings of the 2000 ASME Design Engineering Technical Conferences, DETC2000/DAC-14283, Baltimore, Maryland, September 10-14, 2000.
18 Wang, G.G., "Adaptive response surface method using inherited Latin hypercube design points," Transactions of the ASME, Journal of Mechanical Design, 125, 2003, pp. 210-220.
19 Simpson, T.W., Peplinski, J.D., Koch, P.N., and Allen, J.K., "Metamodels for Computer-based Engineering Design: Survey and Recommendations," Engineering with Computers, 17, 2001, pp. 129-150.
20 Fletcher, R., Practical Methods of Optimization, 2nd Edition, New York: Wiley, 1987.
21 Holland, J., Adaptation in Natural and Artificial Systems, Ann Arbor: University of Michigan Press, 1975.
22 Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
23 Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., and Teller, E., "Equations of state calculations by fast computing machines," Journal of Chemical Physics, 21, 1953, pp. 1087-1092.
24 Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P., "Optimization by simulated annealing," Science, 220, 1983, pp. 671-680.
25 Marwala, T., A Multiple Criterion Updating Method for Damage Detection on Structures, Masters Thesis, University of Pretoria, 1997.
26 Balmès, E., Structural Dynamics Toolbox User's Manual Version 2.1, Scientific Software Group, Sèvres, France, 1997.
27 Møller, M., "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, 1993, pp. 525-533.
Non-Asymptotic Bounds and a General Formula for
the Rate-Distortion Region of the Successive Refinement Problem ∗
Tetsunao Matsuta†
arXiv:1802.07458v1 [] 21 Feb 2018
Tomohiko Uyematsu‡

∗ Portions of this paper were presented at the 38th Symposium on Information Theory and Its Applications [1], and at the 2016 IEICE Society Conference [2].
† [email protected]
‡ [email protected]
The authors are with Dept. of Information and Communications Engineering, Tokyo Institute of Technology, Tokyo, 152-8552 Japan.

[Figure 1: Successive refinement problem. The source symbol X is encoded by encoder 1 (f1) and encoder 2 (f2); decoder 1 (ϕ1) produces reproduction symbol 1 (Y) from the first codeword, and decoder 2 (ϕ2) produces reproduction symbol 2 (Z) from both codewords.]
SUMMARY In the successive refinement problem, a fixed-length
sequence emitted from an information source is encoded into two
codewords by two encoders in order to give two reconstructions of
the sequence. One of two reconstructions is obtained by one of two
codewords, and the other reconstruction is obtained by all two codewords. For this coding problem, we give non-asymptotic inner and
outer bounds on pairs of numbers of codewords of two encoders such
that each probability that a distortion exceeds a given distortion level is
less than a given probability level. We also give a general formula for
the rate-distortion region for general sources, where the rate-distortion
region is the set of rate pairs of two encoders such that each maximum
value of possible distortions is less than a given distortion level.
Key words: general source, information spectrum, non-asymptotic
bound, rate-distortion region, successive refinement
1 Introduction
The successive refinement problem is a fixed-length lossy source
coding problem with many terminals (see Fig. 1). In this coding
problem, a fixed-length sequence emitted from an information
source is encoded into two codewords by two encoders in order to give two reconstructions of the sequence. One of two
reconstructions is obtained by one of two codewords by using
a decoder, and the other reconstruction is obtained by all two
codewords by using the other decoder.
An important parameter of the successive refinement problem is a pair of rates of two encoders such that each distortion
between the source sequence and a reconstruction is less than
a given distortion level. The set of these pairs when the length
(blocklength) of the source sequence is unlimited is called the
rate-distortion region. Since a codeword is used in both decoders, we cannot always optimize rates like the case where each codeword is used for each reconstruction separately. However, there are some cases where we can achieve the optimum rates. Necessary and sufficient conditions for such cases were independently given by Koshelev [3], [4] and Equitz and Cover [5]. The complete characterization of the rate-distortion region for discrete stationary memoryless sources was given by Rimoldi [6]. Yamamoto [7] also gave the rate-distortion region as a special case of a more general coding problem. Later, Effros [8] characterized the rate-distortion region for discrete stationary ergodic and non-ergodic sources.
Recently, the asymptotic analysis of second-order rates in the blocklength has become an active target of study. Especially, for the successive refinement problem, No et al. [9] and Zhou et al. [10] gave many results on the set of second-order rates for discrete and Gaussian stationary memoryless sources. No et al. [9] considered separate excess-distortion criteria such that a probability that a distortion exceeds a given distortion level is less than a given probability level separately for each reconstruction. On the other hand, Zhou et al. [10] considered the joint excess-distortion criterion such that a probability that either of the distortions exceeds a given distortion level is less than a given probability level. Although they also gave several non-asymptotic bounds on the set of pairs of rates, they mainly focus on the asymptotic behavior of the set.
On the other hand, in this paper, we consider non-asymptotic bounds on pairs of rates at finite blocklengths. Especially, since a rate is easily calculated from a number of codewords, we focus on pairs of the two numbers of codewords. Although we adopt separate excess-distortion criteria, our result can be easily applied to the joint excess-distortion criterion. We give inner and outer bounds on pairs of numbers of codewords. These bounds are characterized by using the smooth max Rényi divergence introduced by Warsi [11]. For the point-to-point lossy source coding problem, we also used the smooth max Rényi divergence to characterize the rate-distortion function, which is the minimum rate when the blocklength is unlimited [12]. Proof techniques are similar to those of [12], but we employ several extended results for the successive refinement problem. The inner bound is derived by using an extended version of the previous lemma [12, Lemma 2]. We give this lemma as a special case of an extended version of the previous generalized covering lemma [13, Lemma 1]. The outer bound is derived by using an extended version of the previous converse bound [12, Lemma 4].
In this paper, we also consider the rate-distortion region for
general sources. In this case, we adopt the maximum-distortion
criterion such that the maximum value of possible distortion is
less than a given distortion level for each reconstruction. By
using the spectral sup-mutual information rate (cf. [14]) and
the non-asymptotic inner and outer bounds, we give a general
formula for the rate-distortion region. We show that our ratedistortion region coincides with the region obtained by Rimoldi
[6] when a source is discrete stationary memoryless. Furthermore, we consider a mixed source which is a mixture of two
sources and show that the rate-distortion region is the intersection of those of two sources.
The rest of this paper is organized as follows. In Section
2, we provide some notations and the formal definition of the
successive refinement problem. In Section 3, we give several
lemmas for an inner bound on pairs of numbers of codewords and
the rate-distortion region. These lemmas are extended versions
of our previous results [12, Lemma 2] and [13, Lemma 1]. In
Section 4, we give outer and inner bounds using the smooth max
Rényi divergence on pairs of numbers of codewords. In Section
5, we give a general formula for the rate-distortion region. In
this section, we consider the rate-distortion region for discrete
stationary memoryless sources and mixed sources. In Section
6, we conclude the paper.
2 Preliminaries
Let N, R, and R ≥0 be sets of positive integers, real numbers, and
non-negative real numbers, respectively.
Unless otherwise stated, we use the following notations. For
a pair of integers i ≤ j, the set of integers {i, i + 1, · · · , j}
is denoted by [i : j]. For finite or countably infinite sets X
and Y, the set of all probability distributions over X and X ×
Y are denoted by PX and PX Y , respectively. The set of all
conditional probability distributions over X given Y is denoted
by PX |Y . The probability distribution of a random variable (RV)
X is denoted by the subscript notation PX , and the conditional
probability distribution for X given an RV Y is denoted by PX |Y .
The n-fold Cartesian product of a set X is denoted by X n while
an n-length sequence of symbols (a1, a2, · · · , an ) is denoted by
∞ is denoted by the bold-face
an . The sequence of RVs {X n }n=1
∞ and
letter X. Sequences of probability distributions {PX n }n=1
∞
conditional probability distributions {PX n |Y n }n=1 are denoted
by bold-face letters PX and PX |Y , respectively.
For the successive refinement problem (Fig. 1), let X, Y,
and Z be finite or countably infinite sets, where X represents
the source alphabet, and Y and Z represent two reconstruction
alphabets. Let X over X be an RV which represents a single
source symbol. Since the source can be characterized by X, we
also refer to it as the source. When we consider X as an n-fold
Cartesian product of a certain finite or countably infinite set, we
can regard the source symbol X as an n-length source sequence.
Thus, for the sake of brevity, we deal with the single source
symbol unless otherwise stated.
Two encoders encoder 1 and encoder 2 are represented as
functions f1 : X → [1 : M1 ] and f2 : X → [1 : M2 ], respectively, where M1 and M2 are positive integers which de-
note numbers of codewords. Two decoders, decoder 1 and decoder 2, are represented as functions ϕ1 : [1 : M1] → Y and ϕ2 : [1 : M1] × [1 : M2] → Z, respectively. We refer to a tuple of encoders and decoders (f1, f2, ϕ1, ϕ2) as a code. In order to measure distortions between the source symbol and reconstruction symbols, we introduce distortion measures defined by functions d1 : X × Y → [0, +∞) and d2 : X × Z → [0, +∞).
We define two events of exceeding given distortion levels D1 ≥ 0 and D2 ≥ 0 as follows:

E1(D1) ≜ {d1(X, ϕ1(f1(X))) > D1},
E2(D2) ≜ {d2(X, ϕ2(f1(X), f2(X))) > D2}.

Then, we define the achievability under the excess-distortion criterion.

Definition 1. For positive integers M1, M2, real numbers D1, D2 ≥ 0, and ǫ1, ǫ2 ∈ [0, 1], let M = (M1, M2), D = (D1, D2), and ǫ = (ǫ1, ǫ2). Then, for a source X, we say (M, D) is ǫ-achievable if and only if there exists a code (f1, f2, ϕ1, ϕ2) such that the numbers of codewords of encoder 1 and encoder 2 are M1 and M2, respectively, and

Pr{Ei(Di)} ≤ ǫi,  ∀i ∈ {1, 2}.

In what follows, for constants M1, M2, D1, D2, ǫ1, and ǫ2, we often use the above simple notations: M = (M1, M2), D = (D1, D2), and ǫ = (ǫ1, ǫ2). In this setting, we consider the set of all pairs (M1, M2) of numbers of codewords under the excess-distortion criterion. According to the ǫ-achievability, this set is defined as follows:

Definition 2. For a source X, real numbers D1, D2 ≥ 0, and ǫ1, ǫ2 ∈ [0, 1], we define

M(D, ǫ|X) ≜ {(M1, M2) ∈ N² : (M, D) is ǫ-achievable}.
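Since X, Y, and Z are finite here, M(D, ǫ|X) can in principle be computed by exhaustive search over all codes. The following Python sketch (our own toy illustration; the source distribution and the Hamming distortion measures are assumed, not taken from the paper) does exactly that for a binary source.

import itertools

X = [0, 1]                     # source alphabet (toy choice)
PX = {0: 0.7, 1: 0.3}          # source distribution (assumed)
Y = [0, 1]                     # reconstruction alphabet of decoder 1
Z = [0, 1]                     # reconstruction alphabet of decoder 2
d = lambda x, v: 0.0 if x == v else 1.0   # Hamming distortion for d1 and d2

def achievable(M1, M2, D1, D2, eps1, eps2):
    """True iff (M, D) is eps-achievable in the sense of Definition 1."""
    for f1 in itertools.product(range(M1), repeat=len(X)):   # f1 : X -> [1:M1]
        for f2 in itertools.product(range(M2), repeat=len(X)):
            for phi1 in itertools.product(Y, repeat=M1):     # phi1 : [1:M1] -> Y
                for phi2 in itertools.product(Z, repeat=M1 * M2):
                    # phi2 : [1:M1] x [1:M2] -> Z (row-major flattening)
                    e1 = sum(PX[x] for x in X if d(x, phi1[f1[x]]) > D1)
                    e2 = sum(PX[x] for x in X
                             if d(x, phi2[f1[x] * M2 + f2[x]]) > D2)
                    if e1 <= eps1 and e2 <= eps2:
                        return True
    return False

for M1, M2 in itertools.product([1, 2], repeat=2):
    print((M1, M2), achievable(M1, M2, 0.0, 0.0, eps1=0.4, eps2=0.1))

This brute force is exponential in the alphabet sizes, which is precisely why the bounds developed below are of interest.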
Basically, this paper deals with a coding for a single source symbol. However, in Section 5, we deal with the coding for an n-length source sequence. Hence in that section, by abuse of notation, we regard the above sets X, Y, and Z as n-fold Cartesian products X^n, Y^n, and Z^n, respectively. We also regard the source symbol X on X as an n-length source sequence X^n on X^n. Then we call the sequence X = {X^n}_{n=1}^{∞} of source sequences the general source, which is not required to satisfy the consistency condition.
We use the superscript (n) for a code, distortion measures, and numbers of codewords (e.g., (f1^(n), f2^(n), ϕ1^(n), ϕ2^(n))) to make clear that we are dealing with source sequences of length n. For a code, we define rates R1^(n) and R2^(n) as

Ri^(n) ≜ (1/n) log Mi^(n),  ∀i ∈ {1, 2}.

Hereafter, log means the natural logarithm.
We introduce maximum distortions for a sequence of codes. To this end, we define the limit superior in probability [14].
Definition 3 (Limit superior in probability). For an arbitrary sequence S = {S_n}_{n=1}^{∞} of real-valued RVs, we define the limit superior in probability by

p-lim sup_{n→∞} S_n ≜ inf{ α : lim_{n→∞} Pr{S_n > α} = 0 }.
Now we introduce the maximum distortions:
p-lim sup d1(n) (X n, ϕ1(n) ( f1(n) (X n ))),
n→∞
(n)
p-lim sup d2 (X n, ϕ2(n) ( f1(n) (X n ), f2(n) (X n ))).
n→∞
where 1{·} denotes the indicator function.
Proof. We have
Then, we define the achievability under the maximum distortion
criterion.
Pr
Definition 4. For real numbers R1, R2 ≥ 0, let R = (R1, R2 ).
Then, for a general source X, and real numbers D1, D2 ≥ 0, we
say a pair (R, D) is fm-achievable if and only if there exists a sequence {(f_1^(n), f_2^(n), φ_1^(n), φ_2^(n))} of codes satisfying

  p-limsup_{n→∞} d_1^(n)(X^n, φ_1^(n)(f_1^(n)(X^n))) ≤ D_1,
  p-limsup_{n→∞} d_2^(n)(X^n, φ_2^(n)(f_1^(n)(X^n), f_2^(n)(X^n))) ≤ D_2,

and

  limsup_{n→∞} R_i^(n) ≤ R_i,  ∀i ∈ {1, 2}.

In what follows, for constants R_1 and R_2, we often use the simple notation R = (R_1, R_2). In this setting, we consider the set of all rate pairs under the maximum distortion criterion. According to the fm-achievability, this set, usually called the rate-distortion region, is defined as follows:

Definition 5 (Rate-distortion region). For a general source X and real numbers D_1, D_2 ≥ 0, we define

  R(D|X) ≜ {(R_1, R_2) ∈ ℝ²_{≥0} : (R, D) is fm-achievable}.

Remark 1. We can show that the rate-distortion region R(D|X) is a closed set by the definition and using the diagonal line argument (cf. [14]).

We note that when we regard X as an n-length sequence in the definition of M(D, ε|X), it gives a non-asymptotic region of pairs of rates for a given finite blocklength.

3 Covering Lemma

In this section, we introduce some useful lemmas and corollaries for an inner bound on the set M(D, ε|X) and R(D|X). The next lemma is the most basic and important result in the sense that all subsequent results in this section are given by this lemma.

Lemma 1. Let A ∈ 𝒜 be an arbitrary RV, and B̃ ∈ ℬ and C̃ ∈ 𝒞 be RVs such that the pair (B̃, C̃) is independent of A. For an integer M_1 ≥ 1, let B̃_1, B̃_2, ..., B̃_{M_1} be RVs which are independent of each other and of A, and each distributed according to P_B̃. For an integer i ∈ [1 : M_1] and M_2 ≥ 1, let C̃_{i,1}, C̃_{i,2}, ..., C̃_{i,M_2} be RVs which are independent of each other and of A, and each distributed according to P_{C̃|B̃}(·|B̃_i). Then, for any set F ⊆ 𝒜 × ℬ × 𝒞, we have

  Pr{ ⋂_{i=1}^{M_1} ⋂_{j=1}^{M_2} {(A, B̃_i, C̃_{i,j}) ∉ F} }
   = E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ].  (1)

This lemma implies an exact analysis of the error probability of covering a set 𝒜 in terms of a given condition F by codewords {B̃_i} and {C̃_{i,j}} of random coding. Hence, this lemma can be regarded as an extended version of [15, Theorem 9].

Proof. We have

  Pr{ ⋂_{i=1}^{M_1} ⋂_{j=1}^{M_2} {(A, B̃_i, C̃_{i,j}) ∉ F} }
  = Σ_{a∈𝒜} Σ_{(b̃_1,...,b̃_{M_1})∈ℬ^{M_1}} Σ_{(c̃_{1,1},...,c̃_{M_1,M_2})∈𝒞^{M_1 M_2}} P_A(a) ( Π_{i=1}^{M_1} P_B̃(b̃_i) Π_{j=1}^{M_2} P_{C̃|B̃}(c̃_{i,j}|b̃_i) ) × ( Π_{i=1}^{M_1} Π_{j=1}^{M_2} 1{(a, b̃_i, c̃_{i,j}) ∉ F} )
  = Σ_{a∈𝒜} P_A(a) Π_{i=1}^{M_1} Σ_{b̃_i∈ℬ} P_B̃(b̃_i) Π_{j=1}^{M_2} Σ_{c̃_{i,j}∈𝒞} P_{C̃|B̃}(c̃_{i,j}|b̃_i) 1{(a, b̃_i, c̃_{i,j}) ∉ F}
  = Σ_{a∈𝒜} P_A(a) ( Σ_{b̃∈ℬ} P_B̃(b̃) ( Σ_{c̃∈𝒞: (a,b̃,c̃)∉F} P_{C̃|B̃}(c̃|b̃) )^{M_2} )^{M_1}.

By recalling that (B̃, C̃) is independent of A, this coincides with the right side of (1).

Although the above lemma gives an exact analysis, it is difficult to use it for characterizing an inner bound on pairs of numbers of codewords and the rate-distortion region. Instead of it, we will use the next convenient lemma.

Lemma 2. Let A ∈ 𝒜, B ∈ ℬ, and C ∈ 𝒞 be arbitrary RVs, and B̃ ∈ ℬ and C̃ ∈ 𝒞 be RVs such that the pair (B̃, C̃) is independent of A. Let ψ_1 : 𝒜 × ℬ → [0, 1] be a function and α_1 ∈ [0, 1] be a constant such that

  P_A(a) P_B̃(b) ≥ α_1 ψ_1(a, b) P_{AB}(a, b),  ∀(a, b) ∈ 𝒜 × ℬ.  (2)

Furthermore, let ψ_2 : 𝒜 × ℬ × 𝒞 → [0, 1] be a function and α_2 ∈ [0, 1] be a constant such that

  P_{AB}(a, b) P_{C̃|B̃}(c|b) ≥ α_2 ψ_2(a, b, c) P_{ABC}(a, b, c),  ∀(a, b, c) ∈ 𝒜 × ℬ × 𝒞.  (3)

Then, we have

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  ≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( 1 − α_1 | Σ_{(b,c)∈ℬ×𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_1(a,b) ψ_2(a,b,c) P_{BC|A}(b,c|a) − e^{−α_2 M_2 − log α_1} |^+ )^{M_1}
  ≤ 1 − E[ψ_1(A,B) ψ_2(A,B,C)] + Pr{(A,B,C) ∉ F} + e^{−α_2 M_2 − log α_1} + e^{−α_1 M_1},  (4)

where |x|^+ ≜ max{0, x}.
Proof. We have

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  = Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( Σ_{b∈ℬ} P_B̃(b) ( 1 − Σ_{c∈𝒞: (a,b,c)∈F} P_{C̃|B̃}(c|b) )^{M_2} )^{M_1}
  (a)≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( Σ_{b∈ℬ} P_B̃(b) ( 1 − α_2 Σ_{c∈𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_2(a,b,c) P_{C|AB}(c|a,b) )^{M_2} )^{M_1}
  (b)≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( Σ_{b∈ℬ} P_B̃(b) ( 1 − Σ_{c∈𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_2(a,b,c) P_{C|AB}(c|a,b) + e^{−α_2 M_2} ) )^{M_1}
  (c)≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( 1 − α_1 Σ_{b∈ℬ: P_{B|A}(b|a)>0} ψ_1(a,b) P_{B|A}(b|a) Σ_{c∈𝒞: (a,b,c)∈F} ψ_2(a,b,c) P_{C|AB}(c|a,b) + e^{−α_2 M_2} )^{M_1}
  = Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( 1 − α_1 ( Σ_{(b,c)∈ℬ×𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_1(a,b) ψ_2(a,b,c) P_{BC|A}(b,c|a) − e^{−α_2 M_2 − log α_1} ) )^{M_1},

where (a) comes from (3), (b) follows since (1 − xy)^M ≤ 1 − y + e^{−xM} for 0 ≤ x, y ≤ 1 and M > 0 (cf. [16, Lemma 10.5.3]), and (c) comes from (2). Since a probability is not greater than 1, we have

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  ≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( 1 − α_1 | Σ_{(b,c)∈ℬ×𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_1(a,b) ψ_2(a,b,c) P_{BC|A}(b,c|a) − e^{−α_2 M_2 − log α_1} |^+ )^{M_1}
  (a)≤ Σ_{a∈𝒜: P_A(a)>0} P_A(a) ( 1 − | Σ_{(b,c)∈ℬ×𝒞: (a,b,c)∈F, P_{B|A}(b|a)>0} ψ_1(a,b) ψ_2(a,b,c) P_{BC|A}(b,c|a) − e^{−α_2 M_2 − log α_1} |^+ ) + e^{−α_1 M_1}
  ≤ 1 − Σ_{(a,b,c)∈𝒜×ℬ×𝒞: (a,b,c)∈F, P_{AB}(a,b)>0} ψ_1(a,b) ψ_2(a,b,c) P_{ABC}(a,b,c) + e^{−α_2 M_2 − log α_1} + e^{−α_1 M_1}
  (b)≤ 1 − E[ψ_1(A,B) ψ_2(A,B,C)] + Pr{(A,B,C) ∉ F} + e^{−α_2 M_2 − log α_1} + e^{−α_1 M_1},

where |x|^+ = max{0, x}, (a) follows since (1 − xy)^M ≤ 1 − y + e^{−xM} for 0 ≤ x, y ≤ 1 and M > 0, and (b) comes from the fact that ψ_1(a,b) ψ_2(a,b,c) ≤ 1.

The importance of this lemma is to be able to change the RVs from (A, B̃, C̃) to arbitrary correlated RVs (A, B, C). This makes it possible to characterize an inner bound on pairs of numbers of codewords and the rate-distortion region.

Lemma 2 can be regarded as an extended version of our previous lemma [13, Lemma 1] to multiple correlated RVs. Hence, like the previous lemma, by changing functions and constants, it gives many types of bounds, such as the following two corollaries.
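To make the two lemmas concrete, the following small numerical sketch (our own illustration; the distributions, set F, and codeword numbers below are randomly generated placeholders, not taken from the paper) evaluates the exact miss probability of Lemma 1 and compares it with the bound (4) of Lemma 2 for the trivial choice ψ_1 = ψ_2 ≡ 1 and the largest constants α_1, α_2 satisfying (2) and (3):

    import numpy as np

    rng = np.random.default_rng(0)
    nA, nB, nC = 3, 4, 5

    # Code distributions P_{B~} and P_{C~|B~}, independent of A.
    P_Bt = rng.dirichlet(np.ones(nB))
    P_Ct_Bt = rng.dirichlet(np.ones(nC), size=nB)      # row b: P_{C~|B~}(.|b)

    # An arbitrary correlated triple (A, B, C); P_A is its marginal.
    P_ABC = rng.dirichlet(np.ones(nA * nB * nC)).reshape(nA, nB, nC)
    P_A = P_ABC.sum(axis=(1, 2))
    P_AB = P_ABC.sum(axis=2)

    F = rng.random((nA, nB, nC)) < 0.6                 # an arbitrary target set
    M1, M2 = 16, 16

    # Exact miss probability, i.e. the right side of (1) written out.
    miss_c = np.einsum('bc,abc->ab', P_Ct_Bt, (~F).astype(float))
    exact = np.sum(P_A * np.sum(P_Bt[None, :] * miss_c ** M2, axis=1) ** M1)

    # Largest alpha_1, alpha_2 satisfying (2) and (3) when psi_1 = psi_2 = 1.
    alpha1 = min(1.0, np.min(P_A[:, None] * P_Bt[None, :] / P_AB))
    alpha2 = min(1.0, np.min(P_AB[:, :, None] * P_Ct_Bt[None, :, :] / P_ABC))

    # Right side of (4); here 1 - E[psi1*psi2] = 0, leaving three terms.
    bound = P_ABC[~F].sum() + np.exp(-alpha2 * M2 - np.log(alpha1)) + np.exp(-alpha1 * M1)
    print(f"exact = {exact:.4f}  <=  bound (4) = {bound:.4f}")

Since (2) and (3) hold for these α_1, α_2 by construction, the printed inequality always holds; the bound is loose when α_1 or α_2 is small, which is exactly why the corollaries below pick ψ_1, ψ_2 more carefully.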
Corollary 1. For any real numbers γ_1, γ_2 ∈ ℝ, and any integers M_1, M_2 ≥ 1 such that M_1 ≥ exp(γ_1) and M_2 ≥ exp(γ_2), we have

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  ≤ Pr{ log [P_{B|A}(B|A)/P_B̃(B)] > log M_1 − γ_1 or log [P_{C|AB}(C|A,B)/P_{C̃|B̃}(C|B)] > log M_2 − γ_2 }
   + Pr{(A, B, C) ∉ F} + e^{−exp(γ_2) − γ_1 + log M_1} + e^{−exp(γ_1)}.

Proof. Let α_1 = exp(γ_1)/M_1, α_2 = exp(γ_2)/M_2, and

  ψ_1(a, b) = 1{ log [P_{B|A}(b|a)/P_B̃(b)] ≤ log M_1 − γ_1 },
  ψ_2(a, b, c) = 1{ log [P_{C|AB}(c|a,b)/P_{C̃|B̃}(c|b)] ≤ log M_2 − γ_2 }.

Note that α_1, α_2 ∈ [0, 1] by the assumptions M_1 ≥ exp(γ_1) and M_2 ≥ exp(γ_2). Then, for any (a, b, c) ∈ 𝒜 × ℬ × 𝒞, we have

  α_1 ψ_1(a, b) P_{AB}(a, b) = (exp(γ_1)/M_1) ψ_1(a, b) P_A(a) P_{B|A}(b|a) ≤ (exp(γ_1)/M_1) P_A(a) e^{log M_1 − γ_1} P_B̃(b) = P_A(a) P_B̃(b),

and

  α_2 ψ_2(a, b, c) P_{ABC}(a, b, c) = (exp(γ_2)/M_2) ψ_2(a, b, c) P_{AB}(a, b) P_{C|AB}(c|a,b) ≤ P_{AB}(a, b) P_{C̃|B̃}(c|b).

Then, we can easily check that these constants and functions satisfy (2) and (3). Moreover, 1 − E[ψ_1(A,B)ψ_2(A,B,C)] is at most the first probability in the claimed bound, e^{−α_2 M_2 − log α_1} = e^{−exp(γ_2) − γ_1 + log M_1}, and e^{−α_1 M_1} = e^{−exp(γ_1)}. Plugging these functions and constants into (4), we have the desired bound.

This corollary can be regarded as a bound in terms of the information spectrum (cf. [14]). To the best of our knowledge, this type of bound has not been reported so far (although there are some converse bounds [10, Lemma 15] and [17, Theorem 3]).

On the other hand, the next corollary gives a bound in terms of the smooth max Rényi divergence D_∞^δ(P‖Q) defined as

  D_∞^δ(P‖Q) ≜ inf_{ψ: 𝒜→[0,1]: Σ_{a∈𝒜} ψ(a)P(a) ≥ 1−δ} | log sup_{a∈𝒜} [ψ(a)P(a)/Q(a)] |^+,

where |x|^+ ≜ max{0, x}.

Remark 2. The original definition of the smooth max Rényi divergence (cf. [12]) takes the infimum over subsets B ⊆ 𝒜 rather than over functions ψ:

  D_∞^δ(P‖Q)− ≜ inf_{B⊆𝒜: Σ_{b∈B} P(b) ≥ 1−δ} log [ Σ_{b∈B} P(b) / Σ_{b∈B} Q(b) ].

Since for non-negative real-valued functions f(b) and g(b) it holds that (cf. e.g. [16, Lemma 16.7.1])

  Σ_{b∈B} f(b) / Σ_{b∈B} g(b) ≤ sup_{b∈B} f(b)/g(b),

this quantity is related to the definition adopted here. In this definition, D_∞^δ(P‖Q)− may be a negative value depending on δ. Since this case is meaningless in this study, we adopt D_∞^δ(P‖Q). Here we also note that

  D_∞^δ(P‖Q) = | D_∞^δ(P‖Q)− |^+.  (8)

Corollary 2. For any real numbers δ_1, δ_2 ≥ 0, and any integers M_1, M_2 ≥ 1, we have

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  ≤ δ_1 + δ_2 + Pr{(A, B, C) ∉ F}
   + e^{ −exp(−D_∞^{δ_2}(P_{ABC}‖P_{AB}×P_{C̃|B̃})) M_2 + D_∞^{δ_1}(P_{AB}‖P_A×P_B̃) }
   + e^{ −exp(−D_∞^{δ_1}(P_{AB}‖P_A×P_B̃)) M_1 }.  (5)

Proof. For an arbitrarily fixed ε > 0, let ψ_1 and ψ_2 be functions such that E[ψ_1(A,B)] ≥ 1 − δ_1, E[ψ_2(A,B,C)] ≥ 1 − δ_2,

  D̄_∞^{δ_1}(P_{AB}‖P_A×P_B̃ | ψ_1) ≤ D_∞^{δ_1}(P_{AB}‖P_A×P_B̃) + ε,  (6)
  D̄_∞^{δ_2}(P_{ABC}‖P_{AB}×P_{C̃|B̃} | ψ_2) ≤ D_∞^{δ_2}(P_{ABC}‖P_{AB}×P_{C̃|B̃}) + ε,  (7)

where

  D̄_∞^δ(P‖Q|ψ) ≜ | log sup_{a∈𝒜} [ψ(a)P(a)/Q(a)] |^+;

such functions exist by the definition of D_∞^δ. On the other hand, let α_1 and α_2 be constants such that

  α_1 = exp( −D̄_∞^{δ_1}(P_{AB}‖P_A×P_B̃ | ψ_1) ),
  α_2 = exp( −D̄_∞^{δ_2}(P_{ABC}‖P_{AB}×P_{C̃|B̃} | ψ_2) ).

Then, for any (a, b, c) ∈ 𝒜 × ℬ × 𝒞, we have

  α_1 ψ_1(a, b) P_{AB}(a, b) ≤ inf_{(a',b')∈𝒜×ℬ} [ P_A(a')P_B̃(b') / (ψ_1(a',b')P_{AB}(a',b')) ] × ψ_1(a, b) P_{AB}(a, b) ≤ P_A(a) P_B̃(b),

and

  α_2 ψ_2(a, b, c) P_{ABC}(a, b, c) ≤ inf_{(a',b',c')∈𝒜×ℬ×𝒞} [ P_{AB}(a',b')P_{C̃|B̃}(c'|b') / (ψ_2(a',b',c')P_{ABC}(a',b',c')) ] × ψ_2(a, b, c) P_{ABC}(a, b, c) ≤ P_{AB}(a, b) P_{C̃|B̃}(c|b).

Thus, ψ_1, ψ_2, α_1, and α_2 satisfy (2) and (3). Plugging these functions and constants into (4), and using

  E[ψ_1(A,B)ψ_2(A,B,C)] (a)≥ E[ψ_1(A,B)] + E[ψ_2(A,B,C)] − 1 ≥ 1 − δ_1 − δ_2,

where (a) follows since xy ≥ x + y − 1 for x, y ∈ [0, 1], we obtain

  E[ E[ E[ 1{(A, B̃, C̃) ∉ F} | A, B̃ ]^{M_2} | A ]^{M_1} ]
  ≤ δ_1 + δ_2 + Pr{(A, B, C) ∉ F}
   + e^{ −exp(−D_∞^{δ_2}(P_{ABC}‖P_{AB}×P_{C̃|B̃}) − ε) M_2 + D_∞^{δ_1}(P_{AB}‖P_A×P_B̃) + ε }
   + e^{ −exp(−D_∞^{δ_1}(P_{AB}‖P_A×P_B̃) − ε) M_1 },

where we use inequalities (6), (7), and (8). Since ε > 0 is arbitrary, this completes the proof.
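As an aside, for finite alphabets the infimum in D_∞^δ(P‖Q) can be computed: for a target value t of sup_a ψ(a)P(a)/Q(a), the mass-maximizing choice is ψ(a)P(a) = min{P(a), tQ(a)}, so t is feasible if and only if Σ_a min{P(a), tQ(a)} ≥ 1 − δ, and a binary search over t gives the infimum. The following minimal sketch (our own illustration, not part of [12]) implements this observation:

    import numpy as np

    def smooth_max_renyi(P, Q, delta, iters=60):
        """D_inf^delta(P||Q) for finite alphabets (natural log, |.|^+ as in (8)).

        For a target t of sup_a psi(a)P(a)/Q(a), the best psi gives mass
        sum_a min(P(a), t*Q(a)); t is feasible iff that mass >= 1 - delta.
        """
        lo, hi = 0.0, max(float(np.max(P / Q)), 1.0)   # hi is always feasible
        for _ in range(iters):
            t = 0.5 * (lo + hi)
            if np.minimum(P, t * Q).sum() >= 1.0 - delta:
                hi = t
            else:
                lo = t
        return max(0.0, float(np.log(hi)))

    P = np.array([0.5, 0.3, 0.15, 0.05])
    Q = np.array([0.25, 0.25, 0.25, 0.25])
    for d in (0.0, 0.01, 0.1):
        print(d, smooth_max_renyi(P, Q, d))

At δ = 0 this returns log max_a P(a)/Q(a) (here log 2), and it decreases as δ grows, matching the smoothing intuition.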
4 Inner and Outer Bounds on the Set of Pairs of Numbers of Codewords

In this section, we give outer and inner bounds on M(D, ε|X) by using the smooth max Rényi divergence. We use the next notation for the sake of simplicity.

Definition 6. For RVs (A, B, C), we define

  I_∞^δ(A; B) ≜ D_∞^δ(P_{AB} ‖ P_A × P_B),
  I_∞^δ(A; B|C) ≜ D_∞^δ(P_{ABC} ‖ P_{AC} × P_{B|C}).

We also define the following set of probability distributions for a given source X and constants D and ε:

  P(D, ε|X) ≜ {P_{UYZ|X} ∈ 𝒫_{𝒰𝒴𝒵|𝒳} : Pr{d_1(X, Y) > D_1} ≤ ε_1, Pr{d_2(X, Z) > D_2} ≤ ε_2}.

First of all, we show a bound on the probability of the two events E_1(D_1) and E_2(D_2) for the successive refinement problem. In what follows, let 𝒰 be an arbitrary set.

Theorem 1. For a source X, let (Ũ, Ỹ, Z̃) ∈ 𝒰 × 𝒴 × 𝒵 be RVs such that (Ũ, Ỹ, Z̃) is independent of X. Then, for any real numbers D_1, D_2 ≥ 0, there exists a code (f_1, f_2, φ_1, φ_2) such that the numbers of codewords of encoder 1 and encoder 2 are M_1 and M_2, respectively, and

  Pr{E_1(D_1) ∪ E_2(D_2)} ≤ E[ E[ E[ 1{(X, Ũ, Ỹ, Z̃) ∉ 𝒟} | X, Ũ, Ỹ ]^{M_2} | X ]^{M_1} ],

where

  𝒟 = {(x, u, y, z) ∈ 𝒳 × 𝒰 × 𝒴 × 𝒵 : d_1(x, y) ≤ D_1, d_2(x, z) ≤ D_2}.

Proof. We generate (ũ_1, ỹ_1), (ũ_2, ỹ_2), ..., (ũ_{M_1}, ỹ_{M_1}) ∈ 𝒰 × 𝒴 independently subject to the probability distribution P_{ŨỸ}, and define the set C_1 ≜ {(ũ_1, ỹ_1), ..., (ũ_{M_1}, ỹ_{M_1})}. For any i ∈ [1 : M_1], we generate z̃_{i,1}, z̃_{i,2}, ..., z̃_{i,M_2} ∈ 𝒵 independently subject to the probability distribution P_{Z̃|ŨỸ}(·|ũ_i, ỹ_i), and define the set C_{2,i} ≜ {z̃_{i,1}, ..., z̃_{i,M_2}}. We denote {C_{2,1}, ..., C_{2,M_1}} as C_2. For given sets C_1, C_2 and a given symbol x ∈ 𝒳, we choose i ∈ [1 : M_1] and j ∈ [1 : M_2] such that

  d_1(x, ỹ_i) ≤ D_1 and d_2(x, z̃_{i,j}) ≤ D_2.

If no such pair exists, we set (i, j) = (1, 1). For this pair, we define the encoders f_1 and f_2 as f_1(x) = i and f_2(x) = j. On the other hand, we define the decoders φ_1 and φ_2 as φ_1(i) = ỹ_i and φ_2(i, j) = z̃_{i,j}. By taking the average over the random selection of C_1 and C_2, the average of Pr{E_1(D_1) ∪ E_2(D_2)} is as follows:

  E[Pr{E_1(D_1) ∪ E_2(D_2)}]
  = Pr{ ⋂_{i=1}^{M_1} ⋂_{j=1}^{M_2} {d_1(X, Ỹ_i) > D_1 or d_2(X, Z̃_{i,j}) > D_2} }
  = Pr{ ⋂_{i=1}^{M_1} ⋂_{j=1}^{M_2} {(X, Ũ_i, Ỹ_i, Z̃_{i,j}) ∉ 𝒟} },

where {(Ũ_i, Ỹ_i, Z̃_{i,j})} denote randomly selected sequences in C_1 and C_2. Now, by noting that Z̃_{i,j} is generated for a given (Ũ_i, Ỹ_i), the theorem follows from Lemma 1 by setting A = X, B̃ = (Ũ, Ỹ), and C̃ = Z̃.

Remark 3. This proof is valid even without the RV Ũ. This auxiliary RV is introduced merely for consistency with the outer bound.

Now, by using the above theorem, we give an inner bound on M(D, ε|X).

Theorem 2 (Inner bound). For a source X, real numbers D_1, D_2 ≥ 0, and ε_1, ε_2 > 0, we have

  M(D, ε|X) ⊇ ⋃_{(δ,β,γ)∈S(ε)} ⋃_{P_{UYZ|X}∈P(D,γ|X)} M_I(δ, β, P_{UYZ|X}),

where δ = (δ_1, δ_2), β = (β_1, β_2), γ = (γ_1, γ_2),

  S(ε) ≜ {(δ, β, γ) ∈ (0, 1]^6 : δ_1 + δ_2 + γ_1 + γ_2 + β_1 + β_2 ≤ min{ε_1, ε_2}},  (9)

  M_I(δ, β, P_{UYZ|X}) ≜ {(M_1, M_2) ∈ ℕ² :
    log M_1 ≥ I_∞^{δ_1}(X; U, Y) + log log(1/β_1),
    log M_2 ≥ I_∞^{δ_2}(X; Z|U, Y) + log[ I_∞^{δ_1}(X; U, Y) + log(1/β_2) ]},  (10)

and (X, U, Y, Z) is a tuple of RVs with the probability distribution P_X × P_{UYZ|X}.

Proof. We only have to show that (M, D) is ε-achievable for (δ, β, γ) ∈ S(ε), P_{UYZ|X} ∈ P(D, γ|X), and M_1, M_2 ≥ 1 such that

  M_1 = ⌈ exp( I_∞^{δ_1}(X; U, Y) ) log(1/β_1) ⌉,  (11)
  M_2 = ⌈ exp( I_∞^{δ_2}(X; Z|U, Y) ) ( I_∞^{δ_1}(X; U, Y) + log(1/β_2) ) ⌉.  (12)

To this end, let (Ũ, Ỹ, Z̃) ∈ 𝒰 × 𝒴 × 𝒵 be RVs that are independent of X and have the same marginal distribution as (U, Y, Z), i.e., P_{ŨỸZ̃} = P_{UYZ}. Then, according to Theorem 1, there exists a code (f_1, f_2, φ_1, φ_2) such that the numbers of codewords of encoder 1 and encoder 2 are M_1 and M_2, respectively, and

  max{Pr{E_1(D_1)}, Pr{E_2(D_2)}} ≤ E[ E[ E[ 1{(X, Ũ, Ỹ, Z̃) ∉ 𝒟} | X, Ũ, Ỹ ]^{M_2} | X ]^{M_1} ].  (13)

On the other hand, according to Corollary 2, we have

  E[ E[ E[ 1{(X, Ũ, Ỹ, Z̃) ∉ 𝒟} | X, Ũ, Ỹ ]^{M_2} | X ]^{M_1} ]
  ≤ δ_1 + δ_2 + Pr{(X, U, Y, Z) ∉ 𝒟}
   + e^{ −exp(−I_∞^{δ_2}(X;Z|U,Y)) M_2 + I_∞^{δ_1}(X;U,Y) } + e^{ −exp(−I_∞^{δ_1}(X;U,Y)) M_1 }
  (a)≤ δ_1 + δ_2 + γ_1 + γ_2 + β_1 + β_2
  (b)≤ min{ε_1, ε_2},  (14)

where (a) follows from (11), (12) and the fact that P_{UYZ|X} ∈ P(D, γ|X), and (b) comes from (9). This implies that (M, D) is ε-achievable.

Remark 4. The proof is also valid if we do not restrict (Ũ, Ỹ, Z̃) to have the same distribution as (U, Y, Z). However, for the sake of simplicity, we consider the restricted case.

An outer bound on M(D, ε|X) is given in the next theorem.

Theorem 3 (Outer bound). For a source X, real numbers D_1, D_2 ≥ 0 and ε_1, ε_2 > 0, and any set 𝒰 such that |𝒰| ≥ |𝒳|, we have

  M(D, ε|X) ⊆ ⋃_{P_{UYZ|X}∈P(D,ε|X)} ⋂_{δ∈(0,1]²} M_O(δ, P_{UYZ|X}),

where

  M_O(δ, P_{UYZ|X}) = {(M_1, M_2) ∈ ℕ² :
    log M_1 ≥ I_∞^{δ_1}(X; U, Y) + log δ_1,
    log M_2 ≥ I_∞^{δ_2}(X; Z|U, Y) + log δ_2}.

Before proving the theorem, we show some necessary lemmas.

Lemma 3. Suppose that a pair of RVs (A, B) on 𝒜 × ℬ satisfies

  |{a ∈ 𝒜 : P_{A|B}(a|b) > 0}| ≤ M,  ∀b ∈ ℬ s.t. P_B(b) > 0,  (15)

for some M > 0. Then, for any ε ∈ (0, 1], we have

  Pr{ P_{A|B}(A|B) ≥ ε/M } > 1 − ε.

Proof. Since the lemma can be proved in a similar manner as [14, Lemma 2.6.2], we omit the proof.

The next lemma is an extended version of [12, Lemma 4], which gives a bound on the size of the image of a function.

Lemma 4. For a function g : 𝒜 → ℬ × 𝒞 and c ∈ 𝒞, let ‖g‖_c denote the size of the image of g when one output is fixed to c, i.e.,

  ‖g‖_c = |{b ∈ ℬ : g(a) = (b, c), ∃a ∈ 𝒜}|.

Then, for any δ ∈ (0, 1], any RV A ∈ 𝒜, and (B, C) = g(A), we have

  log sup_{c∈𝒞} ‖g‖_c ≥ I_∞^δ(A; B|C) + log δ.

Proof. Let M = sup_{c∈𝒞} ‖g‖_c. Define a subset 𝒟_δ ⊆ ℬ × 𝒞 and the function ψ_o : 𝒜 × ℬ × 𝒞 → [0, 1] as

  𝒟_δ ≜ {(b, c) ∈ ℬ × 𝒞 : P_{B|C}(b|c) ≥ δ/M},  (16)
  ψ_o(a, b, c) ≜ 1{(a, b, c) ∈ 𝒜 × 𝒟_δ}.  (17)

Since P_{BC}(b, c) = Σ_{a∈𝒜} P_A(a) 1{(b, c) = g(a)}, we have P_{B|C}(b|c) > 0 for c ∈ 𝒞 such that P_C(c) > 0 if and only if there exists a ∈ 𝒜 such that (b, c) = g(a) and P_A(a) > 0. Thus, for c ∈ 𝒞 such that P_C(c) > 0, we have

  |{b ∈ ℬ : P_{B|C}(b|c) > 0}| = |{b ∈ ℬ : (b, c) = g(a), ∃a ∈ 𝒜 s.t. P_A(a) > 0}| ≤ ‖g‖_c ≤ M.

Then, by using Lemma 3, it is easy to see that

  Σ_{(a,b,c)∈𝒜×ℬ×𝒞} ψ_o(a, b, c) P_{ABC}(a, b, c) = Pr{ P_{B|C}(B|C) ≥ δ/M } > 1 − δ.

Thus, we have

  I_∞^δ(A; B|C) ≤ | log sup_{(a,b,c)∈𝒜×ℬ×𝒞} [ ψ_o(a,b,c) P_{ABC}(a,b,c) / (P_{AC}(a,c) P_{B|C}(b|c)) ] |^+
  = | log sup_{(a,b,c)∈𝒜×𝒟_δ} [ P_{B|AC}(b|a,c) / P_{B|C}(b|c) ] |^+
  (a)≤ | log sup_{(a,b,c)∈𝒜×𝒟_δ} [ P_{B|AC}(b|a,c) / (δ/M) ] |^+
  ≤ log M − log δ,

where (a) comes from the definition (16). This completes the proof.

Now, we give the proof of Theorem 3.

Proof of Theorem 3. Let ‖f_1‖ be the size of the image of the encoder f_1. Since ‖f_1‖ ≤ |𝒳| and |𝒳| ≤ |𝒰| by the assumption, there exists an injective function id : [1 : ‖f_1‖] → 𝒰. For this function, let 𝒰_id ⊆ 𝒰 be the image of id and id^{−1} : 𝒰_id → [1 : ‖f_1‖] be the inverse function of id on 𝒰_id.

Suppose that (M, D) is ε-achievable. Then, there exists a code (f_1, f_2, φ_1, φ_2) such that

  Pr{d_1(X, φ_1(f_1(X))) > D_1} ≤ ε_1,
  Pr{d_2(X, φ_2(f_1(X), f_2(X))) > D_2} ≤ ε_2.

Thus, by setting U = id(f_1(X)), Y = φ_1(f_1(X)), and Z = φ_2(f_1(X), f_2(X)), we have

  Pr{d_1(X, Y) > D_1} ≤ ε_1,  (18)
  Pr{d_2(X, Z) > D_2} ≤ ε_2.  (19)

For a constant value c, let g_1(x) = (id(f_1(x)), φ_1(f_1(x)), c), A = X, B = (U, Y), and C = c. According to Lemma 4, for any δ_1 ∈ (0, 1], we have

  log ‖g_1‖_c ≥ I_∞^{δ_1}(X; U, Y | C) + log δ_1 = I_∞^{δ_1}(X; U, Y) + log δ_1.  (20)

On the other hand, we have

  ‖g_1‖_c = |{(u, y) ∈ 𝒰_id × 𝒴 : g_1(x) = (u, y, c), ∃x ∈ 𝒳}|
  = Σ_{u∈𝒰_id} Σ_{y∈𝒴} 1{g_1(x) = (u, y, c), ∃x ∈ 𝒳}
  = Σ_{u∈𝒰_id} 1{g_1(x) = (u, φ_1(id^{−1}(u)), c), ∃x ∈ 𝒳}
  ≤ M_1,  (21)

where the last inequality follows since the size of 𝒰_id is at most ‖f_1‖ and ‖f_1‖ ≤ M_1. Combining (20) and (21), we have

  log M_1 ≥ I_∞^{δ_1}(X; U, Y) + log δ_1.  (22)

Let g_2(x) = (φ_2(f_1(x), f_2(x)), id(f_1(x)), φ_1(f_1(x))), A = X, B = Z, and C = (U, Y). Then, according to Lemma 4, for any δ_2 ∈ (0, 1], we have

  log sup_{(u,y)∈𝒰×𝒴} ‖g_2‖_{(u,y)} ≥ I_∞^{δ_2}(X; Z|U, Y) + log δ_2.  (23)

On the other hand, for any (u, y) ∈ 𝒰_id × 𝒴, we have

  ‖g_2‖_{(u,y)} = Σ_{z∈𝒵} 1{z = φ_2(id^{−1}(u), f_2(x)), id^{−1}(u) = f_1(x), y = φ_1(id^{−1}(u)), ∃x ∈ 𝒳}
  ≤ Σ_{z∈𝒵} 1{∃x ∈ 𝒳, z = φ_2(id^{−1}(u), f_2(x))}
  ≤ Σ_{z∈𝒵} 1{∃j ∈ [1 : M_2], z = φ_2(id^{−1}(u), j)}
  ≤ Σ_{j∈[1:M_2]} Σ_{z∈𝒵} 1{z = φ_2(id^{−1}(u), j)}
  = M_2.  (24)

We note that for any (u, y) ∈ {𝒰 \ 𝒰_id} × 𝒴, it holds that ‖g_2‖_{(u,y)} = 0. Combining (23) and (24), we have

  log M_2 ≥ I_∞^{δ_2}(X; Z|U, Y) + log δ_2.  (25)

Since δ_1 ∈ (0, 1] and δ_2 ∈ (0, 1] are arbitrary, (22) and (25) imply that

  (M_1, M_2) ∈ ⋂_{δ∈(0,1]²} M_O(δ, P_{UYZ|X}).

Now, by recalling that (X, U, Y, Z) satisfy (18) and (19), for any ε-achievable pair (M, D), we have

  (M_1, M_2) ∈ ⋃_{P_{UYZ|X}∈P(D,ε|X)} ⋂_{δ∈(0,1]²} M_O(δ, P_{UYZ|X}).

This completes the proof.

Remark 5. If we do not employ the RV U, which has the role of fixing the RV f_1(X) to a certain codeword, we cannot bound ‖g_2‖_{(u,y)} by M_2 in (24). Thus in this proof, introducing U is quite important.

Remark 6. In [1], we gave inner and outer bounds on M(D, ε|X) by using the α-mutual information of order infinity [18], where the α-mutual information is a generalized version of the mutual information. In this paper, however, we use the smooth max Rényi divergence. This is because it is compatible with the information spectrum quantity, which is well studied and useful for analyzing rates of a code.
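To give the inner bound of Theorem 2 a concrete feel, the following sketch evaluates the smallest integers (M_1, M_2) in M_I(δ, β, P_{UYZ|X}) of (10), equivalently the choice (11)-(12), for hypothetical values of the two smooth mutual-information quantities; the numbers I1 and I2 below are made-up placeholders, not computed from a real source:

    import math

    # Hypothetical values (nats) of I_inf^{delta1}(X;U,Y) and I_inf^{delta2}(X;Z|U,Y).
    I1, I2 = 2.0, 1.5
    beta1, beta2 = 0.05, 0.05

    # Smallest codeword numbers allowed by (10), i.e. the choice (11)-(12).
    M1 = math.ceil(math.exp(I1) * math.log(1 / beta1))
    M2 = math.ceil(math.exp(I2) * (I1 + math.log(1 / beta2)))
    print(f"M1 = {M1}, log M1 = {math.log(M1):.3f} >= "
          f"{I1 + math.log(math.log(1 / beta1)):.3f}")
    print(f"M2 = {M2}, log M2 = {math.log(M2):.3f} >= "
          f"{I2 + math.log(I1 + math.log(1 / beta2)):.3f}")

Note that the β-dependent correction enters only doubly-logarithmically for M_1, so the bound is dominated by the smooth mutual-information terms, mirroring the outer bound of Theorem 3 up to the log δ corrections.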
5 General Formula for the Rate-Distortion Region

In this section, we deal with the coding for an n-length source sequence and give a general formula for the rate-distortion region. First of all, we introduce the spectral (conditional) sup-mutual information rate [14].

Definition 7. For a sequence (X, Y, Z) = {(X^n, Y^n, Z^n)}_{n=1}^∞ of RVs, we define

  I̅(X; Y) ≜ p-limsup_{n→∞} (1/n) log [ P_{Y^n|X^n}(Y^n|X^n) / P_{Y^n}(Y^n) ],
  I̅(X; Y|Z) ≜ p-limsup_{n→∞} (1/n) log [ P_{Y^n|X^nZ^n}(Y^n|X^n, Z^n) / P_{Y^n|Z^n}(Y^n|Z^n) ].

The smooth max Rényi divergence is related to the spectral sup-mutual information rate as shown in the corollary of the next lemma.

Lemma 5. Consider two sequences X and Y of RVs over the same set. Then, we have

  lim_{δ↓0} limsup_{n→∞} (1/n) D_∞^δ(P_{X^n}‖P_{Y^n}) = p-limsup_{n→∞} (1/n) log [ P_{X^n}(X^n) / P_{Y^n}(X^n) ].

Proof. For a sequence {a_n}_{n=1}^∞ of real numbers, it holds that limsup_{n→∞} |a_n|^+ = | limsup_{n→∞} a_n |^+. Thus, according to (8), we have

  lim_{δ↓0} limsup_{n→∞} (1/n) D_∞^δ(P_{X^n}‖P_{Y^n}) = | lim_{δ↓0} limsup_{n→∞} (1/n) D_∞^δ(P_{X^n}‖P_{Y^n})− |^+.  (26)

According to [11, Lemma 3], it holds that

  lim_{δ↓0} limsup_{n→∞} (1/n) D_∞^δ(P_{X^n}‖P_{Y^n})− = p-limsup_{n→∞} (1/n) log [ P_{X^n}(X^n) / P_{Y^n}(X^n) ].  (27)

Furthermore, according to [14, Lemma 3.2.1], the right side of (27) is non-negative. Thus, by combining (26) and (27), we have the lemma.

Corollary 3. For a sequence (X, Y, Z) of RVs, we have

  lim_{δ↓0} limsup_{n→∞} (1/n) I_∞^δ(X^n; Y^n) = I̅(X; Y),
  lim_{δ↓0} limsup_{n→∞} (1/n) I_∞^δ(X^n; Y^n|Z^n) = I̅(X; Y|Z).

Proof. Since this corollary immediately follows from Lemma 5 and Definition 7, we omit the proof.
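For intuition about Definition 7, note that when the pair (X, Y) is stationary and memoryless the normalized information density concentrates, by the law of large numbers, around the single-letter mutual information, so I̅(X; Y) = I(X; Y). The following Monte Carlo sketch (our own illustration under this i.i.d. assumption; all parameters are arbitrary) samples the information density for a Bernoulli input observed through a binary symmetric channel and compares a high empirical quantile with I(X; Y):

    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 2000, 2000
    p, eps = 0.5, 0.1                    # Bernoulli(p) input, BSC(eps)

    X = rng.random((trials, n)) < p
    Y = X ^ (rng.random((trials, n)) < eps)

    # Per-block normalized information density (1/n) log P(Y^n|X^n)/P(Y^n);
    # by symmetry P(Y_i) = 1/2 here.
    py_x = np.where(X == Y, 1 - eps, eps)
    dens = (np.log(py_x) - np.log(0.5)).sum(axis=1) / n

    IXY = np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)  # nats
    print("0.999-quantile of density:", np.quantile(dens, 0.999))
    print("I(X;Y) =", IXY)

For general (non-ergodic) sources the density need not concentrate, and the p-limsup, rather than the mean, is the operationally relevant quantity.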
Let PUYZ|X be a sequence of conditional probability distributions P_{U^nY^nZ^n|X^n} ∈ 𝒫_{𝒰^n𝒴^n𝒵^n|𝒳^n}. We define

  P_G(D|X) ≜ {PUYZ|X : D̄_1(X, Y) ≤ D_1, D̄_2(X, Z) ≤ D_2},
  D̄_1(X, Y) ≜ p-limsup_{n→∞} d_1^{(n)}(X^n, Y^n),
  D̄_2(X, Z) ≜ p-limsup_{n→∞} d_2^{(n)}(X^n, Z^n),
  R_G(PUYZ|X | X) ≜ {(R_1, R_2) ∈ ℝ² : R_1 ≥ I̅(X; U, Y), R_2 ≥ I̅(X; Z|U, Y)},

where (X, U, Y, Z) is a sequence of RVs (X^n, U^n, Y^n, Z^n) induced by PUYZ|X and a general source X.

The main result of this section is the next theorem, which gives a general formula for the rate-distortion region.

Theorem 4. For a general source X, real numbers D_1, D_2 ≥ 0, and any set 𝒰 such that |𝒰| ≥ |𝒳|, we have

  R(D|X) = ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X).  (28)

Remark 7. We can show that the right side of (28) is a closed set by using the diagonal line argument (cf. [14, Remark 5.7.5]).

Remark 8. We are not sure whether a sequence U of auxiliary RVs is really necessary or not. It may be possible to characterize the region without it.

The proof of this theorem is presented in the subsequent two sections. In these sections, for a code, we denote

  Ŷ^n = φ_1^{(n)}(f_1^{(n)}(X^n)),  Ẑ^n = φ_2^{(n)}(f_1^{(n)}(X^n), f_2^{(n)}(X^n)).

5.1 Direct Part

In this section, we show

  R(D|X) ⊇ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X).  (29)

Let PUYZ|X ∈ P_G(D|X) and suppose that I̅(X; U, Y) < ∞ and I̅(X; Z|U, Y) < ∞. Then, for any ε, ε_1, ε_2 > 0 such that ε_1 = ε_2 = ε, any (δ, β, γ) ∈ S(ε) in (9), and every sufficiently large n, we have

  Pr{d_1^{(n)}(X^n, Y^n) > D_1 + ε} ≤ γ_1,
  Pr{d_2^{(n)}(X^n, Z^n) > D_2 + ε} ≤ γ_2.

Hence, according to Theorem 2, there exists a sequence of codes {(f_1^{(n)}, f_2^{(n)}, φ_1^{(n)}, φ_2^{(n)})} such that for sufficiently large n,

  Pr{d_1^{(n)}(X^n, Ŷ^n) > D_1 + ε} ≤ ε,
  Pr{d_2^{(n)}(X^n, Ẑ^n) > D_2 + ε} ≤ ε,

and

  (1/n) log M_1^{(n)} = (1/n) log ⌈ exp( I_∞^{δ_1}(X^n; U^n, Y^n) ) log(1/β_1) ⌉,
  (1/n) log M_2^{(n)} = (1/n) log ⌈ exp( I_∞^{δ_2}(X^n; Z^n|U^n, Y^n) ) ( I_∞^{δ_1}(X^n; U^n, Y^n) + log(1/β_2) ) ⌉.

Thus, we have

  limsup_{n→∞} R_1^{(n)} ≤ limsup_{n→∞} (1/n) I_∞^{δ_1}(X^n; U^n, Y^n) (a)≤ I̅(X; U, Y),

and

  limsup_{n→∞} R_2^{(n)}
  ≤ limsup_{n→∞} (1/n) log [ exp( I_∞^{δ_2}(X^n; Z^n|U^n, Y^n) ) ( I_∞^{δ_1}(X^n; U^n, Y^n) + log(1/β_2) + 1 ) ]
  ≤ limsup_{n→∞} (1/n) log [ exp( I_∞^{δ_2}(X^n; Z^n|U^n, Y^n) ) · n ( limsup_{n→∞} (1/n) I_∞^{δ_1}(X^n; U^n, Y^n) + δ_2 ) ]
  (a)≤ limsup_{n→∞} (1/n) log [ exp( I_∞^{δ_2}(X^n; Z^n|U^n, Y^n) ) · n ( I̅(X; U, Y) + δ_2 ) ]
  = limsup_{n→∞} (1/n) I_∞^{δ_2}(X^n; Z^n|U^n, Y^n)
  (b)≤ I̅(X; Z|U, Y),

where (a) comes from Corollary 3 and the fact that I_∞^δ is a non-increasing function of δ, and (b) follows since I̅(X; U, Y) < ∞.

Now, by using the usual diagonal line argument, we can construct a sequence of codes {(f_1^{(n)}, f_2^{(n)}, φ_1^{(n)}, φ_2^{(n)})} such that

  limsup_{n→∞} R_1^{(n)} ≤ I̅(X; U, Y),  limsup_{n→∞} R_2^{(n)} ≤ I̅(X; Z|U, Y),

and for any ε > 0,

  lim_{n→∞} Pr{d_1^{(n)}(X^n, Ŷ^n) > D_1 + ε} = 0,
  lim_{n→∞} Pr{d_2^{(n)}(X^n, Ẑ^n) > D_2 + ε} = 0.

This implies that

  p-limsup_{n→∞} d_1^{(n)}(X^n, Ŷ^n) ≤ D_1,  p-limsup_{n→∞} d_2^{(n)}(X^n, Ẑ^n) ≤ D_2.

Thus, for any PUYZ|X ∈ P_G(D|X) such that I̅(X; U, Y) ∈ ℝ and I̅(X; Z|U, Y) ∈ ℝ, we have

  (I̅(X; U, Y), I̅(X; Z|U, Y)) ∈ R(D|X).

This implies (29).

5.2 Converse Part

In this section, we show that

  R(D|X) ⊆ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X).  (30)

Suppose that (R, D) is fm-achievable. Then, there exists a sequence of codes satisfying

  p-limsup_{n→∞} d_1^{(n)}(X^n, Ŷ^n) ≤ D_1,
  p-limsup_{n→∞} d_2^{(n)}(X^n, Ẑ^n) ≤ D_2,

and

  limsup_{n→∞} (1/n) log M_i^{(n)} ≤ R_i,  ∀i ∈ {1, 2}.  (31)

Thus, we have for any γ > 0,

  lim_{n→∞} Pr{d_1^{(n)}(X^n, Ŷ^n) > D_1 + γ} = 0,
  lim_{n→∞} Pr{d_2^{(n)}(X^n, Ẑ^n) > D_2 + γ} = 0.

This implies that there exists a sequence {γ_n}_{n=1}^∞ such that lim_{n→∞} γ_n = 0 and

  Pr{d_1^{(n)}(X^n, Ŷ^n) > D_1 + γ_n} ≤ γ_n,
  Pr{d_2^{(n)}(X^n, Ẑ^n) > D_2 + γ_n} ≤ γ_n.

This means that (M^{(n)}, D + γ_n) is γ_n-achievable. According to Theorem 3, there exists a sequence PUYZ|X = {P_{U^nY^nZ^n|X^n}}_{n=1}^∞ of conditional probability distributions such that P_{U^nY^nZ^n|X^n} ∈ P(D + γ_n, γ_n|X^n) and, for any δ ∈ (0, 1]²,

  limsup_{n→∞} (1/n) log M_1^{(n)} ≥ limsup_{n→∞} (1/n) I_∞^{δ_1}(X^n; U^n, Y^n),
  limsup_{n→∞} (1/n) log M_2^{(n)} ≥ limsup_{n→∞} (1/n) I_∞^{δ_2}(X^n; Z^n|U^n, Y^n).

Since this holds for any δ ∈ (0, 1]², we have

  limsup_{n→∞} (1/n) log M_1^{(n)} ≥ lim_{δ_1↓0} limsup_{n→∞} (1/n) I_∞^{δ_1}(X^n; U^n, Y^n) (a)= I̅(X; U, Y),  (32)
  limsup_{n→∞} (1/n) log M_2^{(n)} ≥ lim_{δ_2↓0} limsup_{n→∞} (1/n) I_∞^{δ_2}(X^n; Z^n|U^n, Y^n) (a)= I̅(X; Z|U, Y),  (33)

where (a) comes from Corollary 3. On the other hand, since P_{U^nY^nZ^n|X^n} ∈ P(D + γ_n, γ_n|X^n), PUYZ|X must satisfy D̄_1(X, Y) ≤ D_1 and D̄_2(X, Z) ≤ D_2, i.e., PUYZ|X ∈ P_G(D|X). By combining (31), (32), and (33), we can conclude that for any fm-achievable pair (R, D),

  (R_1, R_2) ∈ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X).

This implies (30).
5.3 Discrete Stationary Memoryless Sources

In this section, we show that the rate-distortion region given in Theorem 4 coincides with the region by Rimoldi [6] when the source X is a discrete stationary memoryless source.

Let 𝒳, 𝒴, and 𝒵 be finite sets. Since X = {X^n}_{n=1}^∞ is a discrete stationary memoryless source, we assume that X^n = (X_1, X_2, ..., X_n) is a sequence of independent copies of an RV X on 𝒳. We also assume that the distortion measures d_1^{(n)} and d_2^{(n)} are additive, i.e., for two functions d_1 : 𝒳 × 𝒴 → [0, +∞) and d_2 : 𝒳 × 𝒵 → [0, +∞), the distortion measures are represented as

  d_1^{(n)}(x^n, y^n) = (1/n) Σ_{i=1}^n d_1(x_i, y_i),
  d_2^{(n)}(x^n, z^n) = (1/n) Σ_{i=1}^n d_2(x_i, z_i).

We define

  P_M(D|X) ≜ {P_{YZ|X} ∈ 𝒫_{𝒴𝒵|𝒳} : E[d_1(X, Y)] ≤ D_1, E[d_2(X, Z)] ≤ D_2},

and for P_{YZ|X} ∈ 𝒫_{𝒴𝒵|𝒳},

  R_M(P_{YZ|X} | X) ≜ {(R_1, R_2) ∈ ℝ²_{≥0} : R_1 ≥ I(X; Y), R_1 + R_2 ≥ I(X; Y, Z)},

where (X, Y, Z) is the tuple of RVs induced by a conditional probability distribution P_{YZ|X} ∈ 𝒫_{𝒴𝒵|𝒳} and a given RV X. Then, we have the next theorem.

Theorem 5. For a discrete stationary memoryless source X, additive distortion measures, and any set 𝒰 such that |𝒰| ≥ |𝒵| + 1, we have

  ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X) = ⋃_{P_{YZ|X} ∈ P_M(D|X)} R_M(P_{YZ|X} | X).  (34)

Remark 9. The right side of (34) can be written as

  {(R_1, R_2) ∈ ℝ²_{≥0} : R_1 ≥ R_1(D_1), R_1 + R_2 ≥ R_b(R_1, D_1, D_2)},  (35)

where R_1(D_1) is the rate-distortion function and R_b(R_1, D_1, D_2) gives the boundary for a given R_1, which are defined as (see, e.g., [19, Corollary 1], [10, (22)])

  R_1(D_1) ≜ min_{P_{Y|X} ∈ 𝒫_{𝒴|𝒳}: E[d_1(X,Y)] ≤ D_1} I(X; Y),
  R_b(R_1, D_1, D_2) ≜ min_{P_{YZ|X} ∈ 𝒫_{𝒴𝒵|𝒳}: E[d_1(X,Y)] ≤ D_1, E[d_2(X,Z)] ≤ D_2, I(X;Y) ≤ R_1} I(X; Y, Z).

We note that R_1(D_1) and R_b(R_1, D_1, D_2) are convex and continuous functions of the triple (R_1, D_1, D_2) (see, e.g., [14, Remark 5.2.1] and [20, Lemma 4]).

We will prove the theorem in two parts separately.

Proof: The left side of (34) ⊆ The right side of (34). We have

  ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X)
  ⊆ ⋃_{PUYZ|X ∈ P_G(D|X)} {(R_1, R_2) : R_1 ≥ I̅(X; U, Y), R_1 + R_2 ≥ I̅(X; U, Y) + I̅(X; Z|U, Y)}
  (a)⊆ ⋃_{PUYZ|X ∈ P_G(D|X)} {(R_1, R_2) : R_1 ≥ I̅(X; U, Y), R_1 + R_2 ≥ I̅(X; U, Y, Z)}
  (b)⊆ ⋃_{PYZ|X: D̄_1(X,Y)≤D_1, D̄_2(X,Z)≤D_2} {(R_1, R_2) : R_1 ≥ I̅(X; Y), R_1 + R_2 ≥ I̅(X; Y, Z)},  (36)

where (a) and (b) respectively come from the facts that

  I̅(X; U, Y) + I̅(X; Z|U, Y) ≥ I̅(X; U, Y, Z),
  I̅(X; U, Y) ≥ I̅(X; Y).

For a sequence (X, Y, Z) = {(X^n, Y^n, Z^n)} of RVs induced by PYZ|X and a given source X, let (X, Ȳ, Z̄) = {(X^n, Ȳ^n, Z̄^n)} be a sequence of RVs such that (X_1, Ȳ_1, Z̄_1), (X_2, Ȳ_2, Z̄_2), ..., (X_n, Ȳ_n, Z̄_n) are independent and

  P_{X_iȲ_iZ̄_i}(x, y, z) = P_{X_iY_iZ_i}(x, y, z),

where P_{X_iY_iZ_i}(x, y, z) is the i-th marginal distribution of (X^n, Y^n, Z^n). Then, according to [14, Lemmas 5.8.1 and 5.8.2], we have D̄_1(X, Y) ≥ D̄_1(X, Ȳ), D̄_2(X, Z) ≥ D̄_2(X, Z̄), I̅(X; Y) ≥ I̅(X; Ȳ), and I̅(X; Y, Z) ≥ I̅(X; Ȳ, Z̄). Thus, by introducing the set ℐ of probability distributions for independent RVs as

  ℐ ≜ { PYZ|X = {P_{Y^nZ^n|X^n}} : P_{Y^nZ^n|X^n} = Π_{i=1}^n P_{Y_iZ_i|X_i}, ∃P_{Y_iZ_i|X_i} ∈ 𝒫_{𝒴𝒵|𝒳}, i ∈ [1 : n] },

we have

  ⋃_{PYZ|X: D̄_1(X,Y)≤D_1, D̄_2(X,Z)≤D_2} {(R_1, R_2) : R_1 ≥ I̅(X; Y), R_1 + R_2 ≥ I̅(X; Y, Z)}
  ⊆ ⋃_{PYZ|X ∈ ℐ: D̄_1(X,Y)≤D_1, D̄_2(X,Z)≤D_2} {(R_1, R_2) : R_1 ≥ I̅(X; Y), R_1 + R_2 ≥ I̅(X; Y, Z)}.  (37)

On the other hand, in the same way as [14, p. 372], for any δ > 0, γ > 0, and any PYZ|X ∈ ℐ such that D̄_1(X, Y) ≤ D_1 and D̄_2(X, Z) ≤ D_2, there exists P_{YZ|X} ∈ 𝒫_{𝒴𝒵|𝒳} such that

  I̅(X; Y) ≥ I(X; Y) − δ,
  I̅(X; Y, Z) ≥ I(X; Y, Z) − δ,
  D_1 ≥ E[d_1(X, Y)] − γ,
  D_2 ≥ E[d_2(X, Z)] − γ.

Thus, we have

  ⋃_{PYZ|X ∈ ℐ: D̄_1(X,Y)≤D_1, D̄_2(X,Z)≤D_2} {(R_1, R_2) : R_1 ≥ I̅(X; Y), R_1 + R_2 ≥ I̅(X; Y, Z)}
  ⊆ ⋂_{γ>0} ⋂_{δ>0} ⋃_{P_{YZ|X} ∈ P_M(D+γ|X)} {(R_1, R_2) : R_1 ≥ I(X; Y) − δ, R_1 + R_2 ≥ I(X; Y, Z) − δ}
  = ⋂_{γ>0} ⋂_{δ>0} {(R_1, R_2) : R_1 + δ ≥ R_1(D_1 + γ), R_1 + R_2 + δ ≥ R_b(R_1 + δ, D_1 + γ, D_2 + γ)},  (38)

where the last equality comes from (35).

Since R_1(D_1) and R_b(R_1, D_1, D_2) are convex and continuous functions of the triple (R_1, D_1, D_2) (see Remark 9), it holds that for any ε > 0, there exist sufficiently small γ_ε > 0 and δ_ε > 0 such that

  ⋂_{δ>0} ⋂_{γ>0} {(R_1, R_2) : R_1 + δ ≥ R_1(D_1 + γ), R_1 + R_2 + δ ≥ R_b(R_1 + δ, D_1 + γ, D_2 + γ)}
  ⊆ ⋂_{δ_ε>δ>0} ⋂_{γ_ε>γ>0} {(R_1, R_2) : R_1 ≥ R_1(D_1) − δ − ε, R_1 + R_2 ≥ R_b(R_1, D_1, D_2) − δ − ε}
  = ⋂_{δ_ε>δ>0} {(R_1, R_2) : R_1 ≥ R_1(D_1) − δ − ε, R_1 + R_2 ≥ R_b(R_1, D_1, D_2) − δ − ε}.  (39)

By combining (36), (37), (38), and (39), we have

  ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X)
  ⊆ ⋂_{ε>0} ⋂_{δ_ε>δ>0} {(R_1, R_2) : R_1 ≥ R_1(D_1) − δ − ε, R_1 + R_2 ≥ R_b(R_1, D_1, D_2) − δ − ε}
  = {(R_1, R_2) : R_1 ≥ R_1(D_1), R_1 + R_2 ≥ R_b(R_1, D_1, D_2)}.

According to Remark 9, this completes the proof.

Proof: The left side of (34) ⊇ The right side of (34). Since |𝒰| ≥ |𝒵| + 1, there exists an injective function id : 𝒵 → 𝒰. For this function, let 𝒰_id ⊆ 𝒰 be the image of id and id^{−1} : 𝒰_id → 𝒵 be the inverse function of id on 𝒰_id. Let u* ∈ 𝒰 be a symbol such that u* ∉ 𝒰_id.

For any P_{YZ|X} ∈ P_M(D|X) and any α ∈ [0, 1], we define P_{UYZ|X} ∈ 𝒫_{𝒰𝒴𝒵|𝒳} as

  P_{UYZ|X}(u, y, z|x) ≜ { α P_{YZ|X}(y, z|x) if u = id(z); (1−α) P_{YZ|X}(y, z|x) if u = u*; 0 otherwise }.

Since

  P_{XUY}(x, u, y) = { α P_{XYZ}(x, y, id^{−1}(u)) if u ∈ 𝒰_id; (1−α) P_{XY}(x, y) if u = u*; 0 otherwise },
  P_{X|UY}(x|u, y) = { P_{X|YZ}(x|y, id^{−1}(u)) if u ∈ 𝒰_id; P_{X|Y}(x|y) if u = u*; 0 otherwise },

we have

  I(X; U, Y) = H(X) − αH(X|Y, Z) − (1−α)H(X|Y) = αI(X; Y, Z) + (1−α)I(X; Y).

Furthermore, since

  P_{X|UYZ}(x|u, y, z) = { P_{X|YZ}(x|y, z) if u = id(z); P_{X|YZ}(x|y, z) if u = u*; 0 otherwise },

we have

  I(X; Z|U, Y) = H(X|U, Y) − H(X|U, Y, Z) = αH(X|Y, Z) + (1−α)H(X|Y) − H(X|Y, Z) = (1−α)I(X; Z|Y).

Now, by defining PUYZ|X as

  P_{U^nY^nZ^n|X^n}(u^n, y^n, z^n|x^n) = Π_{i=1}^n P_{UYZ|X}(u_i, y_i, z_i|x_i),

(X, U, Y, Z) becomes a sequence of independent copies of the RVs (X, U, Y, Z). Thus, we have

  I̅(X; U, Y) = I(X; U, Y) = αI(X; Y, Z) + (1−α)I(X; Y),
  I̅(X; Z|U, Y) = I(X; Z|U, Y) = (1−α)I(X; Z|Y),
  D̄_1(X, Y) = E[d_1(X, Y)] ≤ D_1,
  D̄_2(X, Z) = E[d_2(X, Z)] ≤ D_2,

where we use the fact that for i.i.d. RVs {A_i}_{i=1}^∞, p-limsup_{n→∞} (1/n) Σ_{i=1}^n A_i = E[A_1]. Hence, by noting that PUYZ|X ∈ P_G(D|X), we have for any α ∈ [0, 1],

  ( αI(X; Y, Z) + (1−α)I(X; Y), (1−α)I(X; Z|Y) ) ∈ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X).  (40)

This implies that for any (R_1, R_2) ∈ R_M(P_{YZ|X} | X), it holds that (R_1, R_2) ∈ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X). This is because for any (R_1, R_2) ∈ R_M(P_{YZ|X} | X) such that I(X; Y, Z) ≥ R_1, there exists α ∈ [0, 1] such that

  R_1 = αI(X; Z|Y) + I(X; Y) = αI(X; Y, Z) + (1−α)I(X; Y).

Then, we have

  R_2 ≥ I(X; Y, Z) − R_1 = I(X; Y, Z) − αI(X; Y, Z) − (1−α)I(X; Y) = (1−α)I(X; Z|Y).  (41)

According to (40), such a pair (R_1, R_2) is included in the region ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X). On the other hand, for any (R_1, R_2) ∈ R_M(P_{YZ|X} | X) such that R_1 > I(X; Y, Z), we have R_2 ≥ 0. Since it holds that (I(X; Y, Z), 0) ∈ ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X) due to (40), such a pair (R_1, R_2) is also included in the region. Therefore, we have

  ⋃_{PUYZ|X ∈ P_G(D|X)} R_G(PUYZ|X | X) ⊇ R_M(P_{YZ|X} | X).

Since this holds for any P_{YZ|X} ∈ P_M(D|X), this completes the proof.

Remark 10. Unlike the rate-distortion region by Rimoldi, our region includes a sequence U of auxiliary RVs. This comes from the fact that the time-sharing argument as in (41) cannot be employed for general sources, because in general it holds that

  I̅(X; Y, Z) ≠ I̅(X; Y) + I̅(X; Z|Y).
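Remark 9 expresses the boundary of the region through the rate-distortion function R_1(D_1). As a concrete illustration (not part of the proof above), the classical Blahut-Arimoto iteration computes points of R_1(D_1) for a memoryless source; for the binary symmetric source with Hamming distortion the output can be checked against the closed form R(D) = log 2 − h(D) in nats:

    import numpy as np

    def blahut_arimoto(p_x, d, s, iters=500):
        """One point of the rate-distortion curve for slope parameter s < 0.

        p_x: source distribution; d[x, y]: distortion matrix.
        Returns (distortion D, rate R in nats).
        """
        ny = d.shape[1]
        q_y = np.full(ny, 1.0 / ny)
        for _ in range(iters):
            w = q_y[None, :] * np.exp(s * d)        # unnormalized q(y|x)
            q_y_x = w / w.sum(axis=1, keepdims=True)
            q_y = p_x @ q_y_x                       # update output marginal
        D = np.sum(p_x[:, None] * q_y_x * d)
        R = np.sum(p_x[:, None] * q_y_x * np.log(q_y_x / q_y[None, :]))
        return D, R

    # Binary symmetric source, Hamming distortion: R(D) = ln 2 - h(D).
    p_x = np.array([0.5, 0.5])
    d = np.array([[0.0, 1.0], [1.0, 0.0]])
    for s in (-8.0, -4.0, -2.0):
        D, R = blahut_arimoto(p_x, d, s)
        h = -D * np.log(D) - (1 - D) * np.log(1 - D)
        print(f"D = {D:.3f}  R = {R:.3f}  ln2 - h(D) = {np.log(2) - h:.3f}")

The full boundary R_b(R_1, D_1, D_2) requires a joint optimization over P_{YZ|X} and is not handled by this two-variable sketch.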
5.4 Mixed Sources

In this section, we give the rate-distortion region for mixed sources. The mixed source X is defined by X_1 = {X_1^n}_{n=1}^∞ and X_2 = {X_2^n}_{n=1}^∞ as

  P_{X^n}(x^n) = α_1 P_{X_1^n}(x^n) + α_2 P_{X_2^n}(x^n),

where α_1, α_2 ∈ [0, 1] and α_1 + α_2 = 1.

The next lemma shows a fundamental property of the information spectrum of mixed sources.

Lemma 6. For sequences of RVs (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2), let (X, Y, Z) be defined by

  P_{X^nY^nZ^n}(x^n, y^n, z^n) = α_1 P_{X_1^nY_1^nZ_1^n}(x^n, y^n, z^n) + α_2 P_{X_2^nY_2^nZ_2^n}(x^n, y^n, z^n).

Then, we have

  I̅(X; Y) = max{I̅(X_1; Y_1), I̅(X_2; Y_2)},
  I̅(X; Y|Z) = max{I̅(X_1; Y_1|Z_1), I̅(X_2; Y_2|Z_2)}.

Proof. Since this lemma can be proved in the same way as [14, Lemma 7.9.1] by using [14, Lemma 1.4.2], we omit the details.

The next theorem shows that the rate-distortion region for a mixed source is the intersection of those of the two component sources.

Theorem 6. For a mixed source X defined by X_1 and X_2, and any real numbers D_1, D_2 ≥ 0, we have

  R(D|X) = R(D|X_1) ∩ R(D|X_2).

Proof. For P_{U_1Y_1Z_1|X_1} and P_{U_2Y_2Z_2|X_2}, we define P_{ŨỸZ̃|X} by

  P_{Ũ^nỸ^nZ̃^n|X^n}(u^n, y^n, z^n|x^n)
  = [ α_1 P_{X_1^n}(x^n) P_{U_1^nY_1^nZ_1^n|X_1^n}(u^n, y^n, z^n|x^n) + α_2 P_{X_2^n}(x^n) P_{U_2^nY_2^nZ_2^n|X_2^n}(u^n, y^n, z^n|x^n) ] / P_{X^n}(x^n).

When P_{U_1Y_1Z_1|X_1} = P_{U_2Y_2Z_2|X_2} = P_{UYZ|X}, we have P_{ŨỸZ̃|X} = P_{UYZ|X} by the definition. This implies that for any P_{UYZ|X}, there exist P_{U_1Y_1Z_1|X_1} and P_{U_2Y_2Z_2|X_2} such that P_{UYZ|X} = P_{ŨỸZ̃|X}. On the other hand, for any P_{U_1Y_1Z_1|X_1} and P_{U_2Y_2Z_2|X_2}, there trivially exists P_{UYZ|X} such that P_{UYZ|X} = P_{ŨỸZ̃|X}. Thus, we have

  ⋃_{P_{UYZ|X} ∈ P_G(D|X)} R_G(P_{UYZ|X} | X)
  = ⋃_{P_{U_1Y_1Z_1|X_1}, P_{U_2Y_2Z_2|X_2}: D̄_1(X,Ỹ)≤D_1, D̄_2(X,Z̃)≤D_2} R_G(P_{ŨỸZ̃|X} | X).

Thus, according to Theorem 4, we have

  R(D|X)
  = ⋃_{P_{U_1Y_1Z_1|X_1}, P_{U_2Y_2Z_2|X_2}: D̄_1(X,Ỹ)≤D_1, D̄_2(X,Z̃)≤D_2} R_G(P_{ŨỸZ̃|X} | X)
  (a)= ⋃_{P_{U_1Y_1Z_1|X_1}, P_{U_2Y_2Z_2|X_2}: max{D̄_1(X_1,Y_1), D̄_1(X_2,Y_2)}≤D_1, max{D̄_2(X_1,Z_1), D̄_2(X_2,Z_2)}≤D_2}
     {(R_1, R_2) : R_1 ≥ max{I̅(X_1; U_1, Y_1), I̅(X_2; U_2, Y_2)}, R_2 ≥ max{I̅(X_1; Z_1|U_1, Y_1), I̅(X_2; Z_2|U_2, Y_2)}}
  = ⋃_{P_{U_1Y_1Z_1|X_1} ∈ P_G(D|X_1)} ⋃_{P_{U_2Y_2Z_2|X_2} ∈ P_G(D|X_2)} R_G(P_{U_1Y_1Z_1|X_1} | X_1) ∩ R_G(P_{U_2Y_2Z_2|X_2} | X_2)
  = R(D|X_1) ∩ R(D|X_2),

where (a) comes from [14, Lemma 1.4.2], Lemma 6, and the fact that

  P_{X^nŨ^nỸ^nZ̃^n}(x^n, u^n, y^n, z^n) = α_1 P_{X_1^nU_1^nY_1^nZ_1^n}(x^n, u^n, y^n, z^n) + α_2 P_{X_2^nU_2^nY_2^nZ_2^n}(x^n, u^n, y^n, z^n).

This completes the proof.

6 Conclusion

In this paper, we have dealt with the successive refinement problem. We gave inner and outer bounds, using the smooth max Rényi divergence, on the set of pairs of numbers of codewords. These bounds are obtained by extended versions of our previous covering lemma and converse bound. By using these bounds, we also gave a general formula, using the spectral sup-mutual information rate, for the rate-distortion region. Further, we showed some special cases of our rate-distortion region for discrete stationary memoryless sources and mixed sources.

Acknowledgment

This work was supported in part by JSPS KAKENHI Grant Number 15K15935.

References

[1] T. Matsuta and T. Uyematsu, "Non-asymptotic bounds on numbers of codewords for the successive refinement problem," Proc. 38th Symp. on Inf. Theory and its Apps. (SITA2015), pp. 43-48, Nov. 2015.

[2] T. Matsuta and T. Uyematsu, "A general formula of the achievable rate region for the successive refinement problem," Proc. IEICE Society Conference, p. 37, Sep. 2016.

[3] V. N. Koshelev, "Estimation of mean error for a discrete successive-approximation scheme," Problemy Peredachi Informatsii, vol. 17, no. 3, pp. 20-33, 1981.
[4] V. N. Koshelev, "Hierarchical coding of discrete sources," Problemy Peredachi Informatsii, vol. 16, no. 3, pp. 31-49, 1980.

[5] W. Equitz and T. Cover, "Successive refinement of information," IEEE Trans. Inf. Theory, vol. 37, no. 2, pp. 269-275, Mar. 1991.

[6] B. Rimoldi, "Successive refinement of information: characterization of the achievable rates," IEEE Trans. Inf. Theory, vol. 40, no. 1, pp. 253-259, Jan. 1994.

[7] H. Yamamoto, "Source coding theory for a triangular communication system," IEEE Trans. Inf. Theory, vol. 42, no. 3, pp. 848-853, May 1996.

[8] M. Effros, "Distortion-rate bounds for fixed- and variable-rate multiresolution source codes," IEEE Trans. Inf. Theory, vol. 45, no. 6, pp. 1887-1910, Sep. 1999.

[9] A. No, A. Ingber, and T. Weissman, "Strong successive refinability and rate-distortion-complexity tradeoff," IEEE Trans. Inf. Theory, vol. 62, no. 6, pp. 3618-3635, June 2016.

[10] L. Zhou, V. Y. F. Tan, and M. Motani, "Second-order and moderate deviations asymptotics for successive refinement," IEEE Trans. Inf. Theory, vol. 63, no. 5, pp. 2896-2921, May 2017.

[11] N. A. Warsi, "One-shot bounds for various information theoretic problems using smooth min and max Rényi divergences," Proc. 2013 IEEE Inf. Theory Workshop, pp. 1-5, Sep. 2013.

[12] T. Uyematsu and T. Matsuta, "Revisiting the rate-distortion theory using smooth max Rényi divergence," Proc. 2014 IEEE Inf. Theory Workshop, pp. 202-206, Nov. 2014.

[13] T. Matsuta and T. Uyematsu, "New non-asymptotic bounds on numbers of codewords for the fixed-length lossy compression," IEICE Trans. Fundamentals, vol. E99-A, no. 12, pp. 2116-2129, Dec. 2016.

[14] T. S. Han, Information-Spectrum Methods in Information Theory, Springer, 2003.

[15] V. Kostina and S. Verdú, "Fixed-length lossy compression in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 58, no. 6, pp. 3309-3338, June 2012.

[16] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed., Wiley, New York, 2006.

[17] V. Kostina and E. Tuncel, "The rate-distortion function for successive refinement of abstract sources," Proc. IEEE Int. Symp. on Inf. Theory, pp. 1923-1927, June 2017.

[18] S. Verdú, "α-mutual information," 2015 Inf. Theory and Apps. Workshop, pp. 1-6, Feb. 2015.

[19] A. Kanlis and P. Narayan, "Error exponents for successive refinement by partitioning," IEEE Trans. Inf. Theory, vol. 42, no. 1, pp. 275-282, Jan. 1996.

[20] V. Kostina and E. Tuncel, "Successive refinement of abstract sources," arXiv preprint arXiv:1707.09567, July 2017.
Shannon information storage in noisy phasemodulated fringes and fringe-data
compression by phase-shifting algorithms
MANUEL SERVIN,* AND MOISES PADILLA
Centro de Investigaciones en Optica A. C., Loma del Bosque 115, 37150 Leon Guanajuato, Mexico.
*[email protected]
www.cio.mx
Abstract: Optical phase-modulated fringe-patterns are usually digitized with X×Y pixels and 8 bits/pixel (or higher) gray-levels. The digitized 8 bits/pixel are raw-data bits, not Shannon information bits. Here we show that noisy fringe-patterns store much less Shannon information than the capacity of the digitizing camera. This means that high signal-to-noise ratio (S/N) cameras may waste most bits/pixel to noise. For example one would not use smartphone cameras for high quality phase-metrology, because of their lower (S/N) images. However smartphones digitize high-resolution (12 megapixel) images, and as we show here, the information storage of an image depends on both its bandwidth and its (S/N). The standard formalisms for measuring information are the Shannon entropy H and the Shannon capacity theorem (SCT). According to the SCT, low (S/N) images may be compensated with a larger fringe-bandwidth to obtain high-information phase measurements. So broad-bandwidth fringes may give high quality phase, in spite of digitizing low (S/N) fringe images. Most real-life images are redundant; they have smooth zones where the pixel-value does not change much, and data-compression algorithms are paramount for image transmission/storage. Shannon's capacity theorem is used to gauge competing image-compression algorithms. Here we show that phase-modulated phase-shifted fringes are highly correlated, and as a consequence, phase-shifting algorithms (PSAs) may be used as fringe-data compressors. Therefore a PSA may compress a large number of phase-shifted fringes into a single complex-valued image. This is important in spaceborne optical/RADAR phase-telemetry where the downlink is severely limited by huge distances and low-power transmitters. That is, instead of transmitting M phase-shifted fringes, one only transmits the phase-demodulated signal as compressed sensing data.
2017-11 Centro de Investigaciones en Optica A. C.
OCIS Codes: (120.0120) Instrumentation, measurement, and metrology; (100.2650) Fringe analysis; (120.3180)
Interferometry; (120.5050) Phase measurement.
References and Links
1. K. J. Gasvik, Optical Metrology, 3rd ed. (John Wiley & Sons, 2003).
2. D. Malacara, Optical Shop Testing, 3rd ed. (Wiley-Interscience, 2007).
3. M. Servin, J. A. Quiroga, and M. Padilla, Fringe Pattern Analysis for Optical Metrology: Theory, Algorithms, and Applications (Wiley-VCH, 2014).
4. M. Schwartz, Information Transmission, Modulation and Noise, 4th ed. (McGraw-Hill, 1990).
5. A. Papoulis, Probability, Random Variables and Stochastic Processes, 4th ed. (McGraw-Hill, 2000).
6. C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, Urbana, IL, 1949).
7. H. Nyquist, "Certain factors affecting telegraph speed," Bell Syst. Tech. J. 3, 412-422 (1924).
8. R. V. L. Hartley, "Transmission of information," Bell Syst. Tech. J. 3, 535-564 (1928).
9. F. T. S. Yu, Optics and Information Theory (John Wiley & Sons, 1976).
10. T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. (Wiley-Interscience, 2006).
11. A. N. Kolmogorov, "On the Shannon theory of information transmission in the case of continuous signals," IRE Trans. Inf. Theory, 102-108 (1956).
12. T. J. Lim and M. Franceschetti, "Information without rolling dice," IEEE Trans. Inf. Theory 63(3), 1349-1363 (2017).
13. Y. C. Eldar, Compressed Sensing: Theory and Applications (Cambridge Univ. Press, 2012).
1. Introduction
Optical phase-metrology has productively incorporated well-known results from digital signal processing, stochastic processes and telecommunications [1-5]. In particular, we have applied the frequency transfer function (FTF) paradigm to the theory of phase-shifting algorithms (PSAs) [3]. The FTF is now widely used in PSA theory to investigate the signal-to-noise ratio (S/N), and the harmonic and detuning robustness, of these algorithms [3]. Also, stochastic-process theory applied to linear systems has been used in PSAs to study the (S/N) of the phase-demodulation process [1,3,4,5]. By plotting the absolute value of the FTF associated to a PSA one may gauge graphically its measuring noise, detuning and harmonic rejection [3].
Shannon information theory has not been applied to wavefront phase-metrology [4-11]. Many scientific and engineering disciplines, including optics, use Shannon theory to study the information-processing behavior of their systems [9,10]. Shannon built on Nyquist [7] and Hartley [8] to develop a general theory for reliable information transmission/storage over noisy channels [6]. Nowadays this is simply known as Shannon information theory [4,10]. In this work we use Shannon information theory [6] to quantify the information storage of noisy fringe-patterns and the fringe-data compression achieved by phase-shifting algorithms (PSAs) [1-3].
In digital phase-metrology one wrongly equates the amount of raw fringe data-bits with information. One normally thinks that a fringe-image with X×Y pixels, and 8 bits/pixel, has 8(X×Y) bits of information; however these are raw-data bits, not Shannon information bits. We show that noisy fringe patterns have much lower Shannon information than the capacity of their digitizing cameras. Quantifying the Shannon information storage of noisy fringe-images is important for fringe-data compression in phase-measuring telemetry. For example, a spaceborne interferometer may take many phase-shifted fringe-images and send them back to earth for phase-demodulation. Or it may phase-demodulate them within the spacecraft, and send just a single complex-valued demodulated signal as compressive sensing wave-data [13].
Knowing the Shannon information storage of noisy fringes is also important because costly, high-(S/N) CCD cameras may not be necessary to digitize phase-modulated fringes for precision phase-metrology. That is because noisy fringe-images contain only a fraction of the information capacity of the CCD cameras which digitize them. Most bits/pixel are wasted to speckle/electronic fringe-noise; for example an electronic speckle pattern interferometry (ESPI) fringe-image probably wastes 7 out of 12 bits/pixel to noise. So a much cheaper, high-resolution, low-(S/N) camera may be enough for ESPI metrology. In this work we show that poor-(S/N) fringe-images may be compensated by a large fringe bandwidth B. For example, low-(S/N) smartphone cameras compensate this with high-resolution images. The SCT allows us to think seriously about low-(S/N), high-resolution, mass-produced smartphone cameras for high-quality wave phase-metrology.
2. Mathematical theory of communication
Here we outline two important contributions to the mathematical theory of information by Claude E. Shannon: information-source entropy and channel capacity [6]. Figure 1 shows Shannon's abstraction of a telecommunication process [6]. The information source (the message) may be continuous or discrete data [4]. The message may then be amplitude- or phase-modulated, digitally encoded/compressed, and finally transmitted. The radio signal then propagates towards the receiver. The communication channel is assumed bandlimited and noisy. The received signal is then decoded to obtain a reliable estimation of the original message. Signal decoding may consist of phase-demodulation, decompression and error-correction algorithms [4]. We finally produce a recovered information which is a reliable copy of the original message.
Fig. 1. Shannon abstraction of a memoryless noisy digital communication process in electrical
engineering. The information source generate r symbols/second [6].
Claude Shannon proved a formula for the channel capacity (the SCT) needed to transmit a
message over a memoryless, noisy-channel with negligible information degradation [6]. In
this paper [6], Shannon introduced the information-entropy and channel-capacity formulas for
reliable transmission over a noisy additive white Gaussian noise (AWGN) channel [6].
2.1 Shannon information entropy of a discrete information source
The Shannon information entropy H of a discrete message consisting of a long sequence of symbols x_m, emitted at times t = m/r and drawn from {x_0, x_1, ..., x_{K−1}}, each sampled with probability p(x_k) ≥ 0, is [6],

  H = −Σ_{k=0}^{K−1} p(x_k) log_2 p(x_k)  [bits/symbol];  Σ_{k=0}^{K−1} p(x_k) = 1.0.  (1)

In the average, a long data-sequence {x_m} has an information entropy of H bits/symbol. If these symbols {x_k} are transmitted at a rate of r symbols/second then, the information-rate is,

  R = rH = −r Σ_{k=0}^{K−1} p(x_k) log_2 p(x_k)  [bits/second].  (2)

The prototypical example of a discrete source is the English language, where the data are {A, B, C, ..., Z}, with p(A) + ... + p(Z) = 1.0, being {p(A), ..., p(Z)} the probabilities of occurrence of a single letter in a text; then H = −[p(A) log_2 p(A) + ... + p(Z) log_2 p(Z)].
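A minimal numerical sketch of Eqs. (1)-(2) follows; the letter probabilities below are illustrative placeholders, not measured English frequencies:

    import numpy as np

    p = np.array([0.35, 0.25, 0.2, 0.1, 0.06, 0.04])   # p(x_k), sums to 1.0
    H = -np.sum(p * np.log2(p))                        # Eq. (1), bits/symbol
    r = 100.0                                          # symbols/second
    print(f"H = {H:.3f} bits/symbol, R = r*H = {r*H:.1f} bits/second")  # Eq. (2)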
2.2 Shannon capacity theorem (SCT) for a bandlimited AWGN-channel
The message {x_m} is then transmitted over an AWGN-channel (see Fig. 1). The received signal is,

  X(t) = Σ_m (a x_m + n_m) δ(t − m/r);  t ∈ [0, T], T ≫ 1.  (3)

Where a < 1.0 is the channel attenuation. Shannon proved that the rate of information that a bandlimited AWGN-channel can transmit with negligible errors is [4-6,10],

  C = B log_2(1 + S/N)  [bits/second];  S = a² E[x_m²];  N = E[n_m²].  (4)

Where B is the channel's bandwidth in cycles/second, S is the received signal-power, N is the channel noise-power, and E[·] is the ensemble average. This is the famous Shannon capacity theorem (SCT) for a bandlimited AWGN-channel [4-6,10-11]. The relation between the source information-rate R and the channel capacity C for reliable communication is,

  R > C : Unreliable Communication;
  R ≤ C : Negligible Error Communication.  (5)

Therefore in order to transmit reliably through an AWGN-channel the information-rate R must be below C. For example, a typical AWGN telephone line with B = 3 kHz and (S/N) = 10³ has a capacity of C = 3000 log_2(1 + 10³) ≈ 30,000 bits/second.
The SCT also tells us that we can interchange (S/N) with bandwidth B while keeping the channel capacity C constant, as Table 1 shows,
Table 1. Channel capacity C = B log_2[1+(S/N)]: trade between (S/N) and bandwidth B

  Channel capacity C (bits/sec) | Channel bandwidth B (Hz) | Signal-to-noise ratio (S/N)
  30,000                        | 1,500                    | 1,100,000
  30,000                        | 3,000                    | 1,000
  30,000                        | 6,000                    | 31
  30,000                        | 9,000                    | 9
  30,000                        | 12,000                   | 5
To keep C constant, B increases linearly while (S/N) decreases exponentially. That is, for low (S/N), one may keep C constant by increasing B, and still have reliable communication; this is why spread-spectrum communication is so robust against thermal noise and radio-jamming, and why it suits low-power spacecraft communications [4]. A daily example of the use of the SCT is when one walks away from a Wi-Fi transmitter. Given that the hardware has not changed, the channel bandwidth B remains the same. However at a large distance from the Wi-Fi transmitter the signal-power decreases. This reduces the data-transfer rate R towards our laptop because the channel encoding must be more redundant, hence less information-rate efficient.
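The rows of Table 1 follow directly from Eq. (4); a two-line check (a sketch using numpy):

    import numpy as np

    B = np.array([1500, 3000, 6000, 9000, 12000])          # bandwidth, Hz
    SNR = np.array([1_100_000, 1000, 31, 9, 5])
    print(B * np.log2(1 + SNR))   # every entry is close to 30,000 bits/second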
2.3 Shannon's capacity is a mathematical existence theorem
Finally keep in mind that the Shannon capacity theorem (SCT) is a theoretical upper bound for reliable classical communication/storage of information, independently of any engineering apparatus. The SCT is an existence theorem; it says nothing on how to implement it physically or algorithmically. That is, the SCT is a mathematical abstraction disembodied of any implementation, just as Einstein's E=mc² is not a recipe for building a nuclear reactor. In fact the SCT may be seen as a sphere-packing theorem in a K-dimensional Euclidean space [6,10,12]. In other words, no matter how advanced a (non-quantum) digital modem one may build, it cannot reliably transmit more classical information than C. This is like trying to obtain more energy than E=mc² from a rest mass, or trying to build a perpetual-motion machine bypassing the second law of thermodynamics. Modern digital communication modems using efficient block error-correcting codes approach the SCT limit, i.e. R ≤ C.
3. Shannon information storage of noisy fringe-patterns
Many signals that we communicate/store are continuous, such as voice, video, or telemetry, where the discrete-data entropy H cannot be used. The source information-rate R = R(H, r) and the channel capacity C = C(B, S, N) are both given in bits/second, however they use different parameters [6]. The entropy H, the rate r, and the signal-power S are information-source variables [6]. On the other hand, the bandwidth B and the AWGN-power N are communication-channel variables [4,6,10]. To estimate the source information-rate of a continuous signal, one would need a continuous-density entropy. However a continuous-density entropy is not always well-defined [10]. In contrast, the bandwidth, the noise, and the signal-power of a noisy continuous source are always well-defined. So it is better to have an information-rate formula for continuous signals based on bandwidth and signal-to-noise ratio, rather than on a continuous-density entropy [10,11]. Fortunately the mathematician Andrei Kolmogorov proposed an information-rate formula for continuous, noisy information-sources: the ε-entropy [11,12]. That is, Kolmogorov's ε-entropy does not depend on a continuous-density entropy, which is good, because it avoids some theoretical problems [10,11,12].
In wave phase-metrology the message is the continuous phase-field which modulates the fringes; it could be an optical wavefront W(x,y), or a solid z = h(x,y) in fringe-projection profilometry [1,2]. The model for a continuous fringe-pattern is,

  I(x,y) = a(x,y) + b(x,y) cos[(2π/λ) W(x,y)] + n(x,y).  (6)

Here a(x,y), b(x,y) are smooth functions, and λ the illumination wavelength. The probability density p(I) of these fringes is continuous. Shannon defined the differential entropy of a continuous random variable X with density p(x) as [6,10],

  H = −∫ p(x) log_2 p(x) dx;  ∫ p(x) dx = 1.0.  (7)

As in every example involving an integral, or even a density, we should include the statement "if it exists" [10]. It is easy to construct examples of random variables for which a density function does not exist or for which the above integral does not exist [10]. Instead of trying to estimate a continuous-density entropy H of a fringe-image, we use Kolmogorov's ε-entropy H_ε. In 1956 Kolmogorov wrote [11]: "According to the proper meaning of the word, the entropy of a signal with a continuous distribution is always infinite. If the continuous signals can, nevertheless, serve to transmit finite information, then it is only because they are always observed with bounded accuracy. Consequently, it is natural to define the appropriate 'ε-entropy' H_ε of the object by giving the accuracy of observation ε. Shannon did this under the designation 'rate of creating information with respect to a fidelity criterion'." In our notation, the noiseless object is a + b cos[·], which is observed with bounded accuracy ε = n. Kolmogorov proved that H_ε(I) approximates Shannon's capacity C [11,12]. The information-rate H_ε(I) for I(x,y) is [11],

  H_ε(I) = B_I log_2(1 + SNR_K)  [bits/pixel];  B_I in fringes/pixel.  (8)

Being SNR_K the signal-to-noise ratio of I(x,y), given by,

  SNR_K = ∫∫_{(x,y)∈Ω} s²(x,y) dxdy / ∫∫ n²(x,y) dxdy
        = ∫∫_{(u,v)∈B} |s(u,v)|² dudv / ∫∫_{[−π,π]×[−π,π]} |n(u,v)|² dudv;  s(x,y) = b cos[·].  (9)

Where s(u,v), n(u,v) are the spectra of s(x,y), n(x,y) respectively; Ω is the region (x,y) with well-defined fringes; (u,v) ∈ B the fringes' spectral lobe; B_I is the average bandwidth (see Fig. 2). Note that formally H_ε(I) equals C; but the difference is that now B and SNR_K refer to the noisy signal, not to the noisy channel. Kolmogorov's H_ε(I) cannot exceed the storage capacity of the digitizing CCD-camera [11]. For example, for a 256 gray-levels per pixel camera, H_ε(I) < 8 bits/pixel.

Fig. 2. An 8 bits/pixel quantized, noiseless simulated fringe pattern I(x,y). The spectrum |I(u,v)| has a bandwidth B.

Let us consider the limits for H_ε(I) when the fringes' noise n(x,y) tends to zero or infinity,

  lim_{n→0} B_I log_2(1 + SNR_K) = ∞;  lim_{n→∞} B_I log_2(1 + SNR_K) = 0.  (10)

The information-rate is infinite, H_ε(I) → ∞, for noiseless fringes, and zero, H_ε(I) → 0, for infinitely noisy fringes; regardless of B_I. One might think that a constant signal would hold little information; however if that signal is coded into a large number with, say, 1×10^8 significant digits (above the noise n), then one may encode a large amount of information within that single number.
As mentioned, Shannon's capacity C (Eq. (4)) and Kolmogorov's entropy-rate H_ε(I) (Eq. (8)) look identical, just substituting (S/N) for SNR_K. However in Shannon's capacity N is a Gaussian stochastic process [6], while the observation accuracy ε, in H_ε(I), is a deterministic error-signal [11,12]. This seems an irrelevant theoretical nuance, but it has practical consequences [6,10,11,12]. Lim and Franceschetti proved that C and H_ε(I) are indeed "identical" under appropriate mathematical conditions [12].
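The simulations in the next section can be reproduced along the following lines; the sketch below generates noisy defocus-like fringes, evaluates SNR_K from the known signal and noise, and applies Eq. (8). All parameter values, including the assumed average bandwidth B_I, are illustrative placeholders:

    import numpy as np

    N = 200
    x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
    phase = 2 * np.pi * 8 * (x**2 + y**2)             # defocus-like phase
    signal = 0.5 * np.cos(phase)                       # b*cos(phi) term of Eq. (6)
    noise = np.random.default_rng(2).normal(0, 0.25, (N, N))
    I = 0.5 + signal + noise                           # fringe pattern, Eq. (6)

    SNR_K = np.sum(signal**2) / np.sum(noise**2)       # spatial form of Eq. (9)
    B_I = 0.08     # assumed average bandwidth, fringes/pixel (rough value for this sketch)
    print(f"SNR_K = {SNR_K:.2f}, H = {B_I * np.log2(1 + SNR_K):.2f} bits/pixel")  # Eq. (8)

Decreasing the noise standard deviation drives H towards the camera's quantization limit, and increasing it drives H towards zero, exactly as Eq. (10) states.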
4. Simulated Shannon information storage of fringe-images
Here we give two simple but illustrative examples of noisy-fringe information storage. We use low-resolution fringes to show the fringes at their Nyquist sampling rate of B_I = 0.5 fringes/pixel. Figure 3 shows three 200×200-pixel, computer-generated noisy fringe patterns phase-modulated by defocus, with increasing noise n(x,y).

Fig. 3. Information storage-rate for fringes with B_I = 0.5 fringes/pixel. The 8 bits/pixel fringes have SNR_K = 70,000 (about 8-bit quantizing noise). The noise is obvious by 1.0 bit/pixel (SNR_K = 3.0). The noisiest fringes have 0.5 bits/pixel (SNR_K = 1.0).

We then decrease the SNR_K until reaching H_ε(I) = 0.5 bits/pixel. Figures 3 and 4 show the interplay between B_I and SNR_K stated by H_ε(I). Figure 4 shows a similar sequence of fringes but with reduced bandwidth B_I = 0.125 fringes/pixel. In Fig. 4 the maximum fringe-frequency is one fourth of the Nyquist rate: B_I = 0.25(0.5) = 0.125 fringes/pixel, therefore the average information-rate H_ε(I) is reduced to one fourth with respect to Fig. 3 for the same SNR_K.

Fig. 4. Average information-rate of fringe-images with B_I = 0.125 fringes/pixel. The 2 bits/pixel fringes have SNR_K = 70,000 (about 8-bit quantizing noise). The noisiest fringes have 0.2 bits/pixel (SNR_K = 2.03); their reduced spectrum is shown at the far right.

The information-rate H_ε(I) is reduced by decreasing the SNR_K until the noisiest fringes, at the far right with 0.2 bits/pixel, are obtained.
Remember that we are finding the low information-rate H_ε(I) of noisy fringes with respect to the CCD camera used to digitize them. Kolmogorov's H_ε(I) tells us nothing about how to program a compression algorithm to store/transmit efficiently a large number of noisy fringe images.
5. Shannon information storage of experimental fringes
Here we are estimating the Shannon information content of experimental fringe-projection
profilometry fringes. To estimate H ( I ) for profilometry fringes, one possibility is to take two
images: the noisy fringes, and the noisy background. The formalism for both images are [1],
2
I ( x , y ) a ( x, y ) b( x, y ) cos
tan( )h ( x , y ) n ( x, y );
p
I n ( x, y ) a ( x, y ) n ( x, y ).
(11)
Where h( x, y ) is the object being digitized; p the period of the projected fringes and is
the sensitivity angle between the projector and the (8 bits/pixel) CCD-camera [1].
7
Fig. 5. High spatial frequency fringe-images (512x640, 8 bits/pixel) for obtaining an
estimation of the information storage-rate for fringe-projection profilometry carrier-fringes.
White-light fringe-projection patterns have relatively high SNRK. We then find the spectrum
of the fringes I (u, v ) F {I ( x, y )} , and define the indicator function for the signal-plus-noise
region as (u, v ) SN , and for the noise-only region as (u, v ) NR .
Fig. 6. Fourier spectrum corresponding to the fringes and noise (512x640 pixels).
Figure 7 shows the Fourier spectrum of the profilometry fringes and the Fourier spectrum of
the background-image; both images defined in Eq. (11). Figure 7 shows the spatial filters to
keep just the signal-lobes spectra within the region (u, v ) SN .
Fig. 7. Fourier filtered fringe-lobes (BI=0.11), and high-pass filtered noise.
Next we show the demodulated wrapped-phase obtained by taken the right lobe of the signalplus-noise region (u, v ) SN in Fig. 7. This was made to be sure that we have included all
the phase signal bandwidth. This is shown in Fig. 8,
8
Fig. 8. The fringe-pattern and its wrapped demodulated phase (512x640 pixels). At the fringe
self-occluding shadow the fringe-amplitude drops and the wrapped phase becomes noisier.
We then find the noise-power density within the noise-only region (u, v ) NR as,
1
ANR
a ( u, v ) n (u , v ) .
2
(12)
( u , v )NR
Where a (u, v ) n(u, v ) F{a ( x, y ) n( x, y )} and ANR the area of (u, v ) NR . We then
assume that the noise-density within (u, v ) SN is also (Eq. (15)). Finding (S/N) as,
SNR K
( u , v )SN
I ( u, v )
2
512(640)
11.2 .
(13)
We finally estimate the information-rate (with BI=0.11, Fig. 7) of this fringe-pattern as,
H ( I ) BI log 2 1 SNR K 0.41
bits
.
pixel
(14)
The average bandwidth B_I (Fig. 7) of the fringes may be estimated from the spectrum of
the fringes Ĩ(u,v) = F{I(x,y)}. This profilometry fringe-image has on average 0.41
bits/pixel, much less information than the 8 bits/pixel capacity of the CCD-camera. The
information storage of most fringe patterns is usually well below the capacity of the digitizing
CCD-camera. This means that if we want to transmit/store fringe-images, one may effectively
use image compression algorithms.
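To make the estimation pipeline of Eqs. (12)-(14) concrete, the following Python sketch runs the same steps on a synthetic carrier-fringe image. It is only an illustration under assumed parameters (the image, the lobe mask SN and the carrier frequency are stand-ins), not the code used for Figs. 5-8.

```python
import numpy as np

# Synthetic noisy carrier fringes standing in for the experimental image
# (image content, mask geometry and carrier frequency are assumptions).
ny, nx = 512, 640
yy, xx = np.mgrid[0:ny, 0:nx]
I = 128 + 100 * np.cos(2 * np.pi * 0.11 * xx) \
    + np.random.normal(0, 10, (ny, nx))

# Fourier power spectrum of the fringes.
power = np.abs(np.fft.fftshift(np.fft.fft2(I))) ** 2

# Frequency grids in fringes/pixel; SN = signal-plus-noise lobes,
# NR = noise-only region (away from the lobes and from DC).
u = np.fft.fftshift(np.fft.fftfreq(nx))
v = np.fft.fftshift(np.fft.fftfreq(ny))
U, V = np.meshgrid(u, v)
SN = (np.abs(np.abs(U) - 0.11) < 0.05) & (np.abs(V) < 0.05)
NR = ~SN & (np.hypot(U, V) > 0.05)

eta = power[NR].mean()                     # noise power density, Eq. (12)
snr_k = power[SN].sum() / (nx * ny * eta)  # Eq. (13)
H_I = 0.11 * np.log2(1 + snr_k)            # Eq. (14), with B_I = 0.11
print(f"SNR_K = {snr_k:.1f}, H(I) = {H_I:.2f} bits/pixel")
```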
6. Real-life versus fringe-pattern image data compression
Real-life images are highly redundant; they have low information entropy. A real-life photograph
usually has smooth image regions, along with sparse noise-like regions containing detailed image
features. Data compression algorithms may use local space-frequency wavelet expansions to
reduce redundancy [13]. Image compression algorithms are broadly classified as lossy and
lossless. In lossy compression the encoding/decoding algorithm does not reproduce the original
image 100% accurately. However, subjective quality perception may tolerate large
compression ratios without noticeable degradation [10]. In lossless algorithms, decompressed
images remain 100% faithful to the original. The most compressible images are
usually smooth and piecewise continuous. On the other hand, one cannot compress a purely
random image while preserving every pixel value exactly; a white-noise image is not
compressible. Common image storage formats are: 8-bits/pixel Bit Maps (BMP); Joint
Photographic Experts Group (JPEG); 8-bits/pixel Graphic Interchange Format (GIF); Tagged
Image File Format (TIFF); and lossless Portable Network Graphics (PNG). These file
formats were designed to store real-life color images. These formats store noisy fringe-patterns
as images containing more "information" than noiseless fringes. The noise is
interpreted as fine image-details that the user wants to "preserve". On the contrary, noiseless
fringes occupy fewer storage bytes; they are interpreted as smoother images (see Fig. 9).
Fig. 9. Noiseless and noisy fringes with 200x200 pixel resolution and 8 bits/pixel. The
noiseless fringes have a PNG file size of 40 Kbytes, while the noisy fringe image has a PNG
file size of 180 Kbytes.
We have simulated the two fringe patterns shown in Fig. 9 and stored them using the lossless
PNG format; Fig. 9 gives the resulting PNG file sizes in kilobytes. The noisy image file
is about four times bigger because the noise is interpreted as high-frequency image detail.
The PNG format is designed to compress real-life color images for good visual perception,
not phase-modulated noisy fringes. Fringe patterns have more mathematical structure than
real-life images, so noisy fringe images are far more redundant and, as a consequence, more
compressible. The mathematical model for fringe patterns (Eq. (6)) allows us to distinguish
between phase information and noise. Due to these considerations, real-life image
compressors are not suitable for efficient transmission/storage of noisy fringe patterns; they
were simply not designed for that purpose.
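The file-size experiment of Fig. 9 is easy to reproduce. The sketch below is a minimal illustration, assuming the Pillow library and synthetic fringes in place of the originals; the exact byte counts will differ from the 40/180 Kbytes quoted above.

```python
import io
import numpy as np
from PIL import Image  # Pillow is assumed to be installed

def png_size_bytes(img8):
    """Encode an 8-bit grayscale array as lossless PNG; return its size."""
    buf = io.BytesIO()
    Image.fromarray(img8).save(buf, format="PNG")
    return buf.getbuffer().nbytes

# Synthetic 200x200, 8 bits/pixel fringes (stand-ins for those of Fig. 9).
x = np.arange(200)
clean = np.tile(127.5 + 127.5 * np.cos(2 * np.pi * 0.1 * x), (200, 1))
noisy = clean + np.random.normal(0, 40, clean.shape)  # AWGN-degraded copy

clean8 = np.clip(clean, 0, 255).astype(np.uint8)
noisy8 = np.clip(noisy, 0, 255).astype(np.uint8)
print("noiseless PNG:", png_size_bytes(clean8), "bytes")
print("noisy PNG:    ", png_size_bytes(noisy8), "bytes")  # several times larger
```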
7. Fringe-data compression by phase-shifting algorithms (PSAs)
We have seen that noisy fringe patterns store much less Shannon information than the
capacity of the CCD used to digitize them. In this section we show that phase-shifted fringes
are even more information redundant. In remote optical/RADAR phase-metrology one may
digitize M phase-shifted fringes at a spacecraft (see Fig. 10). Then, one may downlink these
M fringes sequentially, and phase-demodulate them at the receiver. Or we may use a PSA to
phase-demodulate them at the spacecraft, and downlink just the demodulated analytic-signal.
A PSA may then be regarded as compressive sensing, where we are "sensing" not real-valued
fringes but less-noisy, complex-valued wave-data [13]. PSA compression of phase-shifted
fringes is therefore paramount in remote phase-shifting metrology, where communications are
limited by large distances and a few watts of radio-transmission power. A PSA then packs the
information contained in M phase-shifted images into a single phase-demodulated
complex-valued image.
Fig. 10. Schematic of a spacecraft transmitting high-information, complex-valued wave-data
from a set of M phase-shifted fringe-patterns.
To quantify the increase of information-rate given by phase-shifting algorithms (PSAs),
consider a sequence of M phase-shifted noisy interferograms,

I(x,y,t) = Σ_{m=0}^{M−1} [a + b cos(φ + ω₀m) + n_m] δ(t − m);  ω₀ = 2π/M.    (15)
Here the (x,y) dependence was omitted for clarity. We may phase-demodulate these fringes
with a PSA whose impulse response is [3],

h(t) = Σ_{m=0}^{M−1} c_m δ(t − m),    (16)

where the coefficients c_m are complex-valued. The Fourier transform H(ω) = F[h(t)] of
h(t) must have at least the following zeroes [3],

H(−ω₀) = H(0) = 0  and  H(ω₀) ≠ 0.    (17)
Obtaining the estimated quadrature analytic signal as [3],

Ae^{iφ(x,y)} = I(x,y,t) ∗ h(t)|_{t=M−1} = Σ_{m=0}^{M−1} c_m I(x,y,m).    (18)
A PSA increases the (S/N) of the analytic signal with respect to the fringes (Eq. (15)) by [3],

G_{S/N} = |H(ω₀)|² / [ (1/2π) ∫_{−π}^{π} |H(ω)|² dω ].    (19)
The highest G_{S/N} is obtained by the least-squares PSA, for which G_{S/N} = M [3]. Then the
SNR_K of the noisy fringes increases to M·SNR_K for the complex-valued phase-demodulated
signal Ae^{iφ(x,y)}. The information-rate contained in the analytic signal Ae^{iφ(x,y)} is then,

H(A; M) = B_I log₂(1 + M·SNR_K) bits/pixel.    (20)
The information of Ae^{iφ(x,y)} increases logarithmically with M. An interesting question is: how
many phase-shifted fringes double the information content H(A; M)? That is,

B_I log₂(1 + M·SNR_K) = 2 B_I log₂(1 + SNR_K).    (21)

This requires 1 + M·SNR_K = (1 + SNR_K)², so solving for M one obtains,

M = 2 + SNR_K.    (22)

This is shown in Fig. 11.
Fig. 11. Information storage as a function of SNR_K, with BI=0.5 fringes/pixel. In the
green zone, the spatial information-rate increases almost linearly from 1 to 2
bits/pixel. In contrast, in the blue zone this rate increases logarithmically from 2 to 4
bits/pixel.
For example, with SNR_K = 1 one would need M = 3 phase-shifted fringes to double the
information-rate of Ae^{iφ(x,y)}. In contrast, for SNR_K = 20 one would need M = 22
phase-shifted fringes to double the information-rate of Ae^{iφ(x,y)} from 2 to 4 bits/pixel.
An example of this is shown in Fig. 12 and Fig. 13 (for M=12).
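Eq. (22) is easy to check numerically; a minimal sketch with the two SNR_K values used above:

```python
import numpy as np

B_I = 0.5                                      # bandwidth assumed in Fig. 11
for snr in (1.0, 20.0):
    M = snr + 2                                # Eq. (22)
    H_1 = B_I * np.log2(1 + snr)               # single fringe, Eq. (14)
    H_M = B_I * np.log2(1 + M * snr)           # after the PSA, Eq. (20)
    print(f"SNR_K = {snr:g}: M = {M:g}, {H_1:.2f} -> {H_M:.2f} bits/pixel")
```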
Fig. 12. Four out of 12 phase-shifted fringes degraded with AWGN. These fringes have an
average information-rate of 0.4 bits/pixel (BI=0.125 fringes/pixel, SNR_K=8.1).
The M=12 phase-shifted fringes in Fig. 12 were phase-demodulated using a least-squares
PSA; the real and imaginary parts of the resulting analytic signal are shown in Fig. 13.
Fig. 13. Real and imaginary parts of the phase-demodulated signal from the 12 noisy
phase-shifted fringes in Fig. 12. The information-rate has roughly doubled, from 0.4 bits/pixel
to 0.82 bits/pixel (BI=0.12 fringes/pixel, SNR_K=97.5, M=12).
In conclusion, a PSA regarded as a fringe-image compressor packs into a single analytic
signal Ae^{iφ(x,y)} the phase information stored in M phase-shifted fringes. In other words,
instead of transmitting/storing M phase-shifted fringes, one may send/store a single,
higher-information signal Ae^{iφ(x,y)}, or its wrapped phase arg[Ae^{iφ(x,y)}]. Therefore a PSA may
be regarded as an efficient compression algorithm for transmission or storage of large blocks
of phase-shifted fringe-data. This is paramount for efficient transmission of optical/RADAR
interference fringes from remote phase-metrology sites where the channel is severely limited
by large distances (high attenuation) and low-power radio links. A PSA may also be regarded
as a compressive-sensing algorithm in which the collected data is complex-valued, with lower
noise and consequently a higher information rate [13].
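As a minimal illustration of Eqs. (15)-(18), the following sketch demodulates M synthetic phase-shifted fringes with a least-squares PSA. The coefficient choice c_m = exp(-i*w0*m) is a standard one consistent with G_{S/N} = M and is assumed here; it is not necessarily the exact filter used for Figs. 12-13.

```python
import numpy as np

def lsq_psa(frames):
    """Least-squares PSA (Eq. (18)): pack M phase-shifted fringes into one
    complex analytic signal, using the assumed c_m = exp(-1j*w0*m)."""
    M = len(frames)
    w0 = 2 * np.pi / M
    c = np.exp(-1j * w0 * np.arange(M)).reshape(M, 1, 1)
    return (c * np.asarray(frames)).sum(axis=0)

# Synthetic M = 12 phase-shifted noisy fringes (stand-ins for Fig. 12).
M, ny, nx = 12, 256, 256
phi = 2 * np.pi * 0.125 * np.arange(nx)            # carrier phase, per row
frames = [100 + 50 * np.cos(phi + 2 * np.pi * m / M)
          + np.random.normal(0, 25, (ny, nx)) for m in range(M)]

A = lsq_psa(frames)       # approx. (M*b/2)*exp(i*phi) plus filtered noise
wrapped = np.angle(A)     # demodulated wrapped phase, as in Fig. 13
```

Here the background term a and the conjugate lobe cancel because the c_m sum the M frames in quadrature, which is exactly the zero conditions of Eq. (17).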
8. Summary
We have shown that noisy fringe-images in wave phase-metrology contain much less
Shannon information than the capacity of the CCD camera used to digitize them. For
example, it does not make sense to store electronic speckle-pattern interferometry (ESPI)
fringes using a costly 14 bits/pixel cooled CCD-camera. That is because most data bits (not
information bits) are wasted on fringe-noise. Noisy ESPI fringes have a very low spatial
information-rate. Shannon information may be used as a cost-efficiency criterion for choosing a
CCD camera according to the class of fringes we are dealing with. For example, mass-produced
smartphone cameras may do just fine in many optical-metrology cases. Smartphone
cameras have a low signal-to-noise ratio, but this is compensated by high pixel
resolution; 12 megapixels or more are nowadays available. So smartphone cameras with
moderate (S/N) and very high image resolution may be used efficiently for remote phase-wave
telemetry applications.
We showed that low-noise fringe-images may be obtained from white-light fringe-projection
profilometry. Here we estimated that an experimental carrier-fringe profilometry image may
contain less than 0.5 bits/pixel of Shannon information (Figs. 5-6). With such small
information-rates, one realizes that there is plenty of room for noisy fringe-data compression
for efficient transmission or storage. We have seen that PSAs may be regarded as fringe-data
compression algorithms. Using PSAs, all the Shannon information contained in M phase-shifted
fringes may be compressed into a single analytic signal. In spaceborne phase-metrology, for
example, one may use PSAs to compress large blocks of M phase-shifted fringes for efficient
downlink. Finally, we saw that PSAs compress noisy fringe-data information by increasing the
SNR_K of the phase-demodulated complex signal.
arXiv:1708.02546v1 [] 8 Aug 2017
A PRINCIPAL IDEAL THEOREM FOR COMPACT SETS OF
RANK ONE VALUATION RINGS
BRUCE OLBERDING
Abstract. Let F be a field, and let Zar(F ) be the space of valuation rings of F
with respect to the Zariski topology. We prove that if X is a quasicompact set of
rank one valuation rings in Zar(F ) whose maximal ideals do not intersect to 0,
then the intersection of the rings in X is an integral domain with quotient field F
such that every finitely generated ideal is a principal ideal. To prove this result, we
develop a duality between (a) quasicompact sets of rank one valuation rings whose
maximal ideals do not intersect to 0, and (b) one-dimensional Prüfer domains with
nonzero Jacobson radical and quotient field F . The necessary restriction in all
these cases to collections of valuation rings whose maximal ideals do not intersect
to 0 is motivated by settings in which the valuation rings considered all dominate
a given local ring.
1. Introduction
Throughout this article, F denotes a field and Zar(F ) denotes the set of valuation
rings of F ; i.e., the subrings V of F such that for all 0 ≠ t ∈ F , t ∈ V or t−1 ∈ V . In
this article we are interested in subrings A of F which are an intersection of rank
one valuation rings in a quasicompact subset of Zar(F ). The rank of a valuation
ring, which coincides with its Krull dimension, is the real rank of its value group.
Thus the rank one valuation rings have valuations that take values in R ∪ {∞}.
The Zariski topology on Zar(F ) is the topology having as a basis of open sets the
subsets of Zar(F ) of the form {V ∈ Zar(F ) ∶ t1 , . . . , tn ∈ V } for t1 , . . . , tn ∈ F . With
this topology, Zar(F ) is the Zariski-Riemann space of F . The main purpose of this
article is to prove the following instance of what Roquette [38] calls a Principal Ideal
Theorem, that is, a theorem which guarantees a given class of integral domains has
the property that every finitely generated ideal is principal.
Main Theorem. If X is a quasicompact set of rank one valuation rings in Zar(F )
whose maximal ideals do not intersect to 0, then the intersection of the valuation
rings in X is an integral domain with quotient field F and Krull dimension one such
that every finitely generated ideal is a principal ideal.
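For orientation, a minimal (and well-known) instance of the theorem over F = ℚ follows; it is included only as an illustration and is not part of the paper's arguments.

```latex
% The rank one valuation rings of \mathbb{Q} are the localizations
% \mathbb{Z}_{(p)}, p prime. The finite (hence quasicompact) set
% X = \{\mathbb{Z}_{(2)}, \mathbb{Z}_{(3)}\} satisfies
\bigcap_{V \in X} M_V = 2\mathbb{Z}_{(2)} \cap 3\mathbb{Z}_{(3)} \ni 6 \ne 0,
\qquad
A(X) = \mathbb{Z}_{(2)} \cap \mathbb{Z}_{(3)}
     = \{\, a/b \in \mathbb{Q} : \gcd(b,6) = 1 \,\},
% and A(X) is a semilocal PID, so every finitely generated ideal is
% principal; its Jacobson radical contains 6, as the theorem predicts.
```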
The theorem, which most of the paper is devoted to proving, asserts that the
intersection of such valuation rings is a Bézout domain, a domain for which every
MSC: 13A18; 13F05; 14A15.
finitely generated ideal is principal. Such rings belong to the extensively studied
class of Prüfer domains, those domains A for which AM is a valuation domain for
each maximal ideal M of A. The problem of when an intersection of valuation rings
is a Prüfer domain is a difficult but well-studied problem that has applications to real
algebraic geometry (e.g., [1, 2, 3, 42]), non-Noetherian commutative ring theory (e.g.,
[15, 27, 39]), formally p-adic holomorphy rings [38] and the study of integer-valued
polynomials [4, 24]. Using the equivalence of this problem to that of determining
when a subspace of Zar(F ) yields an affine scheme, a geometric criterion involving
morphisms of the Zariski-Riemann space into the projective line was given in [33].
In that approach, treating the Zariski-Riemann space as a locally ringed space, not
simply a topological space, is crucial. By contrast, the main theorem shows that
unlike in the general case, the geometry of the locally ringed space structure is not
needed to distinguish a Bézout intersection when the valuation rings satisfy the
hypotheses of the theorem. Instead, the question of whether the intersection of such
a collection of rank one valuation rings is a Prüfer domain is purely topological.
Moving beyond rank one valuation rings, quasicompactness of a subset X of
Zar(F ) is far from sufficient to guarantee that the intersection of valuation rings in
X is a Bézout domain. For example, if D is a Noetherian local domain with quotient
field F , then for any t1 , . . . , tn ∈ F ,
U = {V ∈ Zar(F /D) ∶ t1 , . . . , tn ∈ V }
is quasicompact, but the intersection of the valuation rings in U is a Bézout domain if and only if the integral closure of D[t1 , . . . , tn ] is a principal ideal domain,
something that occurs only for very special choices of D and t1 , . . . , tn . In fact,
the main theorem is optimal in the sense that if any one of the hypotheses “quasicompact”, “rank one”, or “the maximal ideals do not intersect to 0” is omitted,
the conclusion is false; see Example 5.7. Note that the condition that the maximal
ideals of the rank one valuation rings in X do not intersect to 0 occurs naturally
in settings where the valuation rings in X are assumed to dominate a local ring of
Krull dimension > 0. Examples of such settings of recent interest include Berkovich
spaces and tropical geometry (see for example [13]) and valuative trees of regular
local rings [9, 17].
Interest in compactness in the Zariski-Riemann space dates back to Zariski’s introduction of the topology on Zar(F ) in [44]. If D is a subring of F , then Zar(F /D),
the subspace of Zar(F ) consisting of the valuation rings in Zar(F ) that contain D
as a subring, is the Zariski-Riemann space of F /D. That Zar(F /D) is quasicompact was proved by Zariski [44] in 1944 as a step in his program for resolution of
singularities of surfaces and three-folds. In more recent treatments of the topology
of Zar(F /D) such as [10, 36], quasicompactness is viewed as part of a more refined
topological picture that treats Zar(F /D) as a spectral space or as a locally ringed
space that is a projective limit of projective schemes. The latter point of view also
has its origins in Zariski’s work [44].
However, in restricting to the subspace of rank one valuation rings of F , key
topological features of Zar(F ) are lost. For example, this subspace need not be
spectral, nor even quasicompact. Yet, in passing to the space of rank one valuation
rings, the main theorem shows that the topology becomes much more consequential
for the ring-theoretic structure of an intersection of valuation rings. One of the key
steps in proving this is first showing that in the setting of the main theorem, the
intersection of valuation rings in X is a Prüfer domain. We prove this in Theorem 5.6
by establishing the following lemma. For a subset X of Zar(F ), we let A(X) =
⋂V ∈X V be the holomorphy ring1 of X and J(X) = ⋂V ∈X MV be the ideal of A(X)
determined by the intersection of the maximal ideals MV of the valuation rings V .
For a ring A, Max(A) denotes its set of maximal ideals.
Main Lemma. The mappings
X ↦ A(X) and A ↦ {AM ∶ M ∈ Max(A)}
define a bijection between the quasicompact sets X of rank one valuation rings in
Zar(F ) with J(X) ≠ 0 and the one-dimensional Prüfer domains A with nonzero
Jacobson radical and quotient field F .
Corollary 5.9 gives another version of this result in which the spaces X need not
be assumed a priori to satisfy J(X) ≠ 0. In this case, “quasicompact” is replaced
with “compact” (= quasicompact and Hausdorff), and the holomorphy rings are
all assumed to have quotient field F . A consequence of the main lemma is that
if X is quasicompact, J(X) ≠ 0 and X consists of rank one valuation rings, then
X = {A(X)M ∶ M ∈ Max(A(X))}. As this suggests, the main lemma can be recast
in the language of schemes, and we do this in Corollary 5.14.
The applicability of the main theorem depends on whether a space of rank one
valuation rings can be determined to be quasicompact. The key technical observation behind our approach is that if a subset X of Zar(F ) consists of rank one
valuation rings and J(X) ≠ 0, then X is quasicompact if and only if X is closed in
the patch topology of Zar(F ). (See Section 2.) On the level of proofs and examples,
this reduces the issue of quasicompactness to the calculation of patch limit points of X
in Zar(F ), and specifically whether such limit points have rank one. In a future
paper [37] we use the results of the present article along with additional methods
to show how to apply these ideas to divisorial valuation overrings of a Noetherian
local domain of Krull dimension two. This is discussed in Remark 6.4.
The present paper is motivated by recent work such as in [10, 11, 31, 32, 33,
34, 35, 36] on understanding how topological or geometric properties of a space of
1See Roquette [38] for an explanation of this terminology.
valuation rings are reflected in the algebraic structure of the intersection of these
valuation rings.
2. Topological preliminaries
In this section we outline the topological point of view needed for the later sections. Recall that throughout the paper, F denotes an arbitrary field.
Notation 2.1. For each subset S of the field F , let
U (S) = {V ∈ Zar(F ) ∶ S ⊆ V } and V (S) = {V ∈ Zar(F ) ∶ S ⊆/ V }.
The Zariski topology on Zar(F ) has as a basis of nonempty open sets the sets of the
form U (x1 , . . . , xn ), where x1 , . . . , xn ∈ F . The set Zar(F ) with the Zariski topology
is the Zariski-Riemann space of F . Several authors have established independently
that the Zariski-Riemann space Zar(F ) is a spectral space2, meaning that (a) Zar(F )
is quasicompact and T0 , (b) Zar(F ) has a basis of quasicompact open sets, (c) the
intersection of finitely many quasicompact open sets in Zar(F ) is quasicompact, and
(d) every nonempty irreducible closed set in Zar(F ) has a unique generic point. See
[10] and [36] for more on the history of this central result. A spectral space admits
the specialization order ≤ given by x ≤ y if and only if y is in the closure of {x}.
In the Zariski topology on Zar(F ), we have for V, W ∈ Zar(F ) that V ≤ W if and
only if W ⊆ V . We use the specialization order in the results in this section but not
elsewhere in the paper.
As a spectral space, Zar(F ) admits two other useful topologies, the inverse and
patch topologies. The inverse topology on Zar(F ) is the topology that has, as a
basis of closed sets, the subsets of Zar(F ) that are quasicompact and open in the
Zariski topology; i.e., the nonempty closed sets are intersections of finite unions of
sets of the form U (x1 , . . . , xn ), x1 , . . . , xn ∈ F . The inverse topology is useful for
dealing with issues of irredundance and uniqueness of representations of integrally
closed subrings of F ; for example, see [35]. In the present article, we use the inverse
topology in a limited way.
The most important topology on Zar(F ) for the purposes of this article is the
patch topology on Zar(F ), which is given by the topology that has as a basis of open
sets the subsets of Zar(F ) of the form
U (x1 , . . . , xn ), V (y1 ) or U (x1 , . . . , xn ) ∩ V (y1 ) ∩ ⋯ ∩ V (ym ),
where x1 , . . . , xn , y1 , . . . , ym ∈ F . The complement in Zar(F ) of any set in this basis
is again open in the patch topology. Thus the patch topology has as a basis sets
that are both closed and open (i.e., the patch topology is zero-dimensional). The
patch topology is also spectral and hence quasicompact [23, p. 45]. Unlike the
2The terminology is motivated by a theorem of Hochster [23] that shows a space is spectral if
and only if it is homeomorphic to the prime spectrum of a ring.
Zariski and inverse topologies on Zar(F ), the patch topology is always Hausdorff
[23, Theorem 1].
Convention. In the article we work with all three topologies, inverse, patch
and Zariski, sometimes even in the same proof. To avoid confusion, we insert the
adjective “patch” before a topological property when working with it in the patch
topology. For example, a “patch open set” is a set that is open in the patch topology.
Similarly, we insert “inverse” as an adjective when working with the inverse topology.
If no adjective is present (e.g., “the set Z is quasicompact”), this is always to be
understood as indicating we are working in the Zariski topology. Thus the Zariski
topology is the default topology if no other topology is specified. Recall also from
the Introduction that by compact we mean both quasicompact and Hausdorff.
One of our main technical devices in the paper is that of a patch limit point.
Let X ⊆ Zar(F ). Then V ∈ Zar(F ) is a patch limit point of X if each patch open
neighborhood U of V in X contains a point in X distinct from V ; equivalently
(since the patch topology is Hausdorff), every patch open neighborhood U of V
contains infinitely many valuation rings in X. Applying the relevant definitions, it
follows that V is a patch limit point of X if and only if for all finite (possibly empty)
subsets S of V and T of MV there is a valuation ring U in X ∖ {V } such that S ⊆ U
and T ⊆ MU .3
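To make this characterization concrete, here is a simple standard example, not taken from the paper (it uses the notation lim(X) and J(X) fixed in Notation 2.2 below).

```latex
% Let F = \mathbb{Q} and X = \{\mathbb{Z}_{(p)} : p \text{ prime}\}. No set
% V(y) contains the trivial valuation ring F, so a basic patch neighborhood
% of F has the form U(x_1,\dots,x_n); every prime p not dividing the
% denominators of x_1,\dots,x_n gives x_1,\dots,x_n \in \mathbb{Z}_{(p)}.
% Hence each such neighborhood meets X in infinitely many points:
F \in \lim(X), \qquad \text{while} \qquad
J(X) = \bigcap_{p} p\,\mathbb{Z}_{(p)} = 0,
% consistent with Lemma 2.3(1) below, which forces J(X) = 0 whenever
% F \in \operatorname{patch}(X).
```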
Notation 2.2. Let ∅ ≠ X ⊆ Zar(F ). We use the following notation.
● lim(X) = the set of patch limit points of X in Zar(F ).
● patch(X) = X ∪ lim(X) = closure of X in the patch topology of Zar(F ).
● A(X) = ⋂V ∈X V = holomorphy ring of X.
● J(X) = ⋂V ∈X MV .
In the next lemma we collect some properties of patch closure that are needed in
later sections. More systematic treatments of Zariski, patch and inverse closure in
Zar(F ) can be found in [10] and [36] and their references.
Lemma 2.3. Let X be a nonempty subset of Zar(F ). Then
(1) A(X) = ⋂V ∈patch(X) V and J(X) = ⋂V ∈patch(X) MV .
(2) The set patch(X) is spectral in the subspace Zariski topology.
Suppose in addition that X is quasicompact and consists of rank one valuation rings.
(3) The set patch(X) is contained in X ∪ {F }.
(4) If S is a multiplicatively closed subset of A = A(X), then AS = ⋂AS ⊆V ∈X∪{F } V .
Proof. (1) The first assertion can be found in [10, Proposition 4.1] or [36, Proposition
5.6]. Since patch(X) = X ∪ lim(X), to see that the second assertion holds, it suffices
to show that J(X) ⊆ MU for each U ∈ lim(X). Let 0 ≠ a ∈ J(X), and let U ∈ lim(X).
3Throughout the paper, we denote the maximal ideal of a valuation ring V by MV .
If a ∈/ MU , then U ∈ U (a−1 ). Since U ∈ lim(X), there exists V ∈ X ∩ U (a−1 ), so that
a−1 ∈ V . However, a ∈ MV by the choice of a, a contradiction. Thus J(X) ⊆ MU ,
which verifies (1).
(2) A patch closed subspace of a spectral space is spectral in the subspace topology
[18, Proposition 9.5.29, p. 433].
(3) Suppose X is quasicompact. This implies that if U ∈ patch(X), then there
exists V ∈ X such that V ⊆ U [36, Proposition 2.2]. Since X consists of rank one
valuation rings, we have V = U or U = F . Thus patch(X) ⊆ X ∪ {F }.
(4) By [36, Corollary 5.7], AS = ⋂AS ⊆V ∈Y V ,
where Y is the set of all V ∈ Zar(F ) such that V ⊇ U for some U ∈ patch(X). Thus
(4) follows from (3) and the fact that every valuation ring in X has rank one.
The following proposition reinterprets for rank one valuation rings the property of
compactness in the Zariski topology of Zar(F ) in terms of the patch topology. This
enables us to work with the patch topology, and specifically patch limit points, in
the algebraic arguments of the next sections. The proposition also shows that the
Hausdorff condition for a set X of rank one valuation rings is closely connected with
the algebraic property that J(X) ≠ 0.
Proposition 2.4. The following statements hold for every nonempty set X of rank
one valuation rings in Zar(F ).
(1) X is compact if and only if X is patch closed in Zar(F ).
(2) If J(X) ≠ 0, then the Zariski and patch topologies agree on X, and hence X is
Hausdorff and zero-dimensional.
(3) Suppose A(X) has quotient field F . Then J(X) ≠ 0 if and only if X is Hausdorff.
Proof. (1) Suppose that X is compact. By Lemma 2.3(3), patch(X) ⊆ X ∪{F }, so to
show that X is patch closed it suffices to show that F /∈ patch(X). If X consists of a
single valuation ring, then patch(X) = X and the claim is clear since this valuation
ring must have rank one. Suppose X contains at least two distinct valuation rings
V and W . Since X is Hausdorff, there exist x1 , . . . , xn , y1 , . . . , ym ∈ F such that
V ∈ U (x1 , . . . , xn ), W ∈ U (y1 , . . . , ym ) and
U (x1 , . . . , xn ) ∩ U (y1 , . . . , ym ) ∩ X = ∅.
Let
Y = V (x1 , . . . , xn ) ∪ V (y1 , . . . , ym ).
Then X ⊆ Y and Y is patch closed, so patch(X) ⊆ Y . Since F ∈/ Y , we have
F /∈ patch(X). Therefore, X is patch closed.
Conversely, suppose that X is patch closed in Zar(F ). By Lemma 2.3(2), X is
a spectral space with respect to the Zariski topology and hence is quasicompact.
Since the valuation rings in X have rank one and F ∈/ X, the elements of X are
minimal in X with respect to the specialization order. This implies the patch and
Zariski topologies agree on X [41, Corollary 2.6]. Thus X is Hausdorff since the
patch topology is Hausdorff.
(2) Let 0 ≠ x ∈ J(X). Then X ⊆ V (x−1 ). Since the valuation rings in X have
rank one and F ∈/ V (x−1 ), the elements of X are minimal in V (x−1 ) with respect
to the specialization order induced by the Zariski topology. As a patch closed
subset of Zar(F ), the set V (x−1 ) is, with respect to the Zariski topology, a spectral
space (Lemma 2.3(2)). Thus the Zariski and patch topologies agree on the set of
elements of V (x−1 ) that are minimal with respect to the specialization order [41,
Corollary 2.6]. Since the patch topology is Hausdorff and zero-dimensional, the last
statement of (2) now follows.
(3) Suppose A = A(X) has quotient field F and X is Hausdorff. If X consists
of a single valuation ring, then it is clear that J(X) ≠ 0. Assume that X has more
than one valuation ring. Since X is Hausdorff and A has quotient field F , there
exist nonzero a1 , . . . , an , b1 , . . . , bm , c ∈ A such that
U (a1 /c, . . . , an /c) ∩ U (b1 /c, . . . , bm /c) ∩ X = ∅.
Therefore, U (1/c) ∩ X = ∅. Since each V ∈ X is a valuation ring, this implies
c ∈ MV , so that 0 ≠ c ∈ J(X). The converse follows from (2).
3. Residually transcendental limit points
The main result of this section, Theorem 3.2, is of a technical nature and involves
the existence of patch limit points that are residually transcendental over a local
subring of F . For the purpose of proving the results in this section, we recall the
notion of a projective model of a field; see [45, Chapter 6, §17] for more background
on this topic. Let D be a subring of the field F , and let t0 , t1 , . . . , tn be nonzero
elements of F . For each i, let Di = D[t0 /ti , . . . , tn /ti ], and let
M = ⋃0≤i≤n {(Di )P ∶ P ∈ Spec(Di )}.
Then M is the projective model of F /D defined by t0 , t1 , . . . , tn . Alternatively, a
projective model of F /D can be viewed as a projective integral scheme over Spec(D)
whose function field is a subfield of F . Motivated by this interpretation, it is often
convenient to view M as a locally ringed space; see for example [36, Section 3].
We do not need to do this explicitly in the present paper, but the notation remains
helpful here when we wish to view the local rings in M as points. In particular,
viewing the set M as a topological space with respect to the Zariski topology4, the
local rings in M are points in M. For this reason, and in keeping with the locally
ringed space point of view, we denote a local ring x ∈ M by OM,x , despite the
redundancy in doing so. A subset Y of M is an affine submodel of M if there exists
a D-subalgebra R of F such that Y = {RP ∶ P ∈ Spec(R)}. (We differ here from [45]
in that we do not require R to be a finitely generated D-algebra.)
For each x ∈ M, there exists a valuation ring in Zar(F ) that dominates OM,x , and
for each V ∈ Zar(F /D), there exists a unique x ∈ M such that V dominates OM,x ;
see [45, pp. 119–120] or apply the valuative criterion for properness [20, Theorem 4.7,
p. 101]. For a nonempty subset X of M, we denote by M(X) the set of all x ∈ M
such that OM,x is dominated by some V ∈ X. More formally, M(X) is the image
of X under the continuous closed surjection Zar(F ) → M given by the domination
mapping [45, Lemma 5, p. 119].
Lemma 3.1. Let X be a nonempty subset of Zar(F ) such that A = A(X) is a
local ring. Let D be a subring of A, let t0 , . . . , tn be nonzero elements of F , and
let M be the projective model of F /D defined by t0 , . . . , tn . If M(X) is finite, then
t0 /ti , . . . , tn /ti ∈ A for some i = 0, 1, . . . , n.
Proof. Suppose M(X) is finite. Each finite subset of the projective model M is
contained in an affine open submodel of M. (This is a consequence of homogeneous
prime avoidance; see for example [43, Lemma 01ZY] or the proof of [33, Corollary 3.2]). Thus M(X) is contained in an affine submodel of M, and hence there
is a D-subalgebra R of F such that for each x ∈ M(X), OM,x is a localization of
R. For each V ∈ X, there is x ∈ M(X) such that V dominates OM,x . Since A is a
local ring, so is the subring
B = ⋂ OM,x
x∈M(X)
of A. (Indeed, if b ∈ B is a unit in A, then since, for each x ∈ M(X), OM,x is
dominated by a valuation ring in X, b is a unit in B. Thus if b, c ∈ B are nonunits
in B, then b + c is a nonunit in the local ring A, so that b + c is a nonunit in B.)
For each x ∈ M(X), since OM,x is a localization of R and R ⊆ B ⊆ OM,x with B
a local ring, we have that OM,x is a localization of B at a prime ideal of B. Since
M(X) is finite, B is a local ring that is a finite intersection of the local rings OM,x ,
x ∈ M(X), each of which is a localization of B at a prime ideal. It follows that
B is equal to one of these localizations; i.e., B = OM,x for some x ∈ M(X). Since
B ∈ M, there is i such that t0 /ti , . . . , tn /ti ∈ B ⊆ A.
The proof of the next lemma can be streamlined by using the language of schemes
and morphisms into the projective line (see [33] for more on this point of view in
4The basis for this topology is given by sets of the form {R ∈ M ∶ x1 , . . . , xn ∈ R}, where
x1 , . . . , xn ∈ F .
our context), but in order to make the proof self-contained, we develop the needed
ideas in the course of the proof.
Theorem 3.2. Let X be a nonempty subset of Zar(F ) such that A = A(X) is a
local ring, and let 0 ≠ t ∈ F . Suppose D is a local subring of A, t ∈/ A, 1/t ∈/ A, and
all but at most finitely many V ∈ X dominate D. Then there exists U ∈ lim(X) such
that t, 1/t ∈ U , U dominates D and the image of t in U /MU is transcendental over
the residue field of D.
Proof. We first show that we can replace D with a local subring D ′ of A such that D ′
is integrally closed in D ′ [t] and all but finitely many valuation rings in X dominate
D ′ . Let D̄ denote the integral closure of D in D[t]. Since A is integrally closed in F ,
D̄ ⊆ A. Let m = M ∩ D̄, where M is the maximal ideal of A, and let D ′ = D̄m . Then
A dominates D ′ . Since D̄ ⊆ D[t], we have D̄[t] = D[t], and so D̄ is integrally closed
in D̄[t]. Thus D ′ = D̄m is integrally closed in D ′ [t] = D̄m [t] since D ′ and D ′ [t] are
localizations of D̄ and D̄[t], respectively, at the same multiplicatively closed set.
Therefore, D ′ is a local ring that is integrally closed in D ′ [t] and is dominated by
A.
To see next that all but finitely many valuation rings in X dominate D ′ , suppose
that V ∈ X and V dominates D. We claim that V dominates D ′ . Let n denote the
maximal ideal of D, and let p = MV ∩ D̄. Since V dominates D, p lies over n. Thus,
since D̄ is integral over D, p is a maximal ideal of D̄. If p ≠ m, then there exists
d ∈ p ∖ m so that 1/d ∈ D̄m = D ′ ⊆ A ⊆ V . Since d ∈ MV , this is a contradiction that
implies p = m. Thus V dominates D ′ . This shows that every valuation ring in X
that dominates D also dominates D ′ . Thus, if V ∈ X does not dominate D ′ , then
V does not dominate D. By assumption, there are at most finitely many valuation
rings in X that do not dominate D, so there are at most finitely many valuation
rings in X that do not dominate D ′ . With this in mind, we work for the rest of the
proof with D ′ instead of D and draw the desired conclusion for D in the last step
of the proof. The advantage of working with D ′ is that D ′ is a local ring that is
integrally closed in D ′ [t].
Let M be the projective model of F /D ′ defined by 1, t. Let f ∶ M → Spec(D ′ )
be the canonical mapping that sends a local ring R in M to D ′ ∩ N , where N is the
maximal ideal of R. Let C = f −1 (m′ ), where m′ is the unique maximal ideal of D ′ .
Since f is continuous in the Zariski topology [45, Lemma 5, p. 119], C is the closed
subset of M consisting of all the local rings in M that dominate D ′ . Since D ′ is
integrally closed in D ′ [t] and t, t−1 ∈/ D ′ (indeed, by assumption, t, t−1 ∈/ A), Seidenberg’s Lemma [29, Exercise 31, pp. 43–44] implies that the rings D ′ [t]/m′ D ′ [t] and
D ′ [t−1 ]/m′ D ′ [t−1 ] are each isomorphic to the polynomial ring (D ′ /m′ )[T ], where T
is an indeterminate. These isomorphisms are induced by the mappings t ↦ T and
t−1 ↦ T , respectively. Thus (t, m′ )D ′ [t] is a maximal ideal of D ′ [t], while m′ D ′ [t]
is a prime ideal in D ′ [t] and m′ D ′ [t−1 ] is a prime ideal in D ′ [t−1 ].
Now, since m′ extends to a prime ideal in both D ′ [t] and D ′ [t−1 ], C is irreducible in the Zariski topology with generic point the local ring D ′ [t]m′ D′ [t] =
D ′ [t−1 ]m′ D′ [t−1 ] . Using the fact that the rings D ′ [t]/m′ D ′ [t] and D ′ [t−1 ]/m′ D ′ [t−1 ]
are each PIDs isomorphic to the polynomial ring (D ′ /m′ )[T ], it follows that the
closed points in C are the local rings in the set
{D ′ [t]P ∶ P is a maximal ideal in D ′ [t], m′ ⊆ P } ∪ {D ′ [t−1 ](m′ ,t−1 ) }.
The only point in C that is not closed is D ′ [t]m′ D′ [t] , the unique generic point of C.
This accounts for all the local rings in C. The rest of the proof consists in showing
that there is a valuation ring U in lim(X) that dominates D ′ [t]m′ D′ [t] .
To this end, we next describe the local rings in M(X) ∩ C; i.e., we describe the
local rings in C that are dominated by valuation rings in X. In particular, we show
that there are infinitely many such local rings in C and that the Zariski closure of
M(X) ∩ C in M is C.
Let X ∗ be the set of valuation rings in X that dominate D ′ . We have established
that all but finitely many valuation rings in X dominate D ′ ; that is, X ∖X ∗ is finite.
The image M(X ∗ ) of X ∗ in M under the domination mapping is contained in C,
so that C(X ∗ ) = M(X ∗ ). By Lemma 3.1, the fact that A is a local ring and t and
1/t are not elements of A implies M(X) is infinite. Since also X ∖ X ∗ is finite,
it follows that M(X ∗ ) = C(X ∗ ) is infinite. Thus, since C consists only of closed
points and a unique generic point for C, there are infinitely many closed points of
C in C(X ∗ ), which means there is a subset X ′ of X ∗ such that the image C(X ′ ) of
X ′ in C is infinite and consists of rings of the form D ′ [t]P , where P is a maximal
ideal of D ′ [t] that contains m′ . Therefore, the valuation rings in X ′ contain D ′ [t],
and the image of X ′ in Spec(D ′ [t]) under the map that sends a valuation ring to
its center in D ′ [t] consists of infinitely many maximal ideals of D ′ [t], all of which
contain the dimension one prime ideal m′ D ′ [t]. Since D ′ [t]/m′ D ′ [t] is a PID, the
intersection of these infinitely many maximal ideals is m′ D ′ [t].
Next, to see how this fact is reflected in C, let M1 be the affine submodel of M
given by
M1 = {D′ [t]P ∶ P ∈ Spec(D′ [t])}.
Let g ∶ M1 → Spec(D ′ [t]) denote the homeomorphism that sends a local ring D ′ [t]P
in M1 to its center in D ′ [t]. We have shown that the Zariski closure of C(X ′ ) in
Spec(D ′ [t]) is the set of all prime ideals of D ′ [t] containing the prime ideal m′ D ′ [t].
Since g is a homeomorphism, it follows that D ′ [t]m′ D′ [t] is in the Zariski closure of
C(X ′ ). Hence the Zariski closure of C(X ′ ) in M is C.
Now, since the generic point of the closed set C is the local ring D ′ [t]m′ D′ [t] and
C(X ′ ) is infinite and Zariski dense in C, we apply [36, Lemma 2.7(4)] to obtain that
there is a valuation ring U in the patch closure of X ′ that dominates
D ′ [t]m′ D′ [t] = D ′ [1/t]m′ D′ [1/t] .
Since D ′ [t]/m′ D ′ [t] ≅ (D ′ /m′ )[T ],
the image of t in the residue field of U is transcendental over the residue field of D ′ .
Since all the valuation rings in X ′ are centered on maximal ideals of D ′ [t], U is not
a member of X ′ . Therefore, since U is in the patch closure of X ′ but not in X ′ , it
must be that U ∈ lim(X ′ ) ⊆ lim(X). Finally, since D ′ dominates D, we conclude
that the image of t in U /MU is transcendental over the residue field of D, which
completes the proof of the theorem.
4. Limit points of rank greater than one
We show in Theorem 4.3 that if X ⊆ Zar(F ) and A(X) is local but not a valuation
domain, then there is a patch limit valuation ring of X of rank > 1. This is the key
result needed in the next section to prove the main results of the paper. The proof of
Theorem 4.3 relies on the following lemma, which gives a criterion for the existence
of a valuation ring of rank > 1 to lie in the patch closure of X.
Lemma 4.1. Let A be a local integrally closed subring of F with maximal ideal M ,
and let X be a patch closed subset of Zar(F ) such that A = A(X). Suppose that
there exist 0 ≠ t ∈ F and m ∈ M such that mt /∈ A and t−1 /∈ A. If for each i > 0
there exists Vi ∈ X such that m ∈ MVi and mti is a unit in Vi , then X contains a
valuation ring of rank > 1.
Proof. Let i > 0. If mti ∈ A, then (mt)i ∈ A, so that since A is integrally closed
we have mt ∈ A, contrary to assumption. Thus mti ∈/ A for all i > 0. Moreover,
(mti )−1 ∈/ A, since otherwise t−i = m(m−1 t−i ) ∈ A, which since A is integrally closed
forces t−1 ∈ A, contrary to the choice of t. By assumption, there exists Vi ∈ X such
that mti is a unit in Vi and m ∈ MVi . If t ∈ Vi , then since mti is a unit in Vi it is
the case that m is a unit in Vi and an element of MVi , a contradiction. Thus t ∈/ Vi .
Using Notation 2.2, let
Ci = U (mti ) ∩ V (t) ∩ X.
Then Vi ∈ Ci , so that Ci is a nonempty patch closed subset of X.
We use compactness to show that ⋂i>0 Ci is nonempty. To this end, we claim
that the collection {Ci ∶ i > 0} has the finite intersection property. Let i > 0, and let
0 < j < i. Then
Vi ∈ Cj = U (mtj ) ∩ V (t) ∩ X
since mti ∈ Vi and t−1 ∈ Vi implies mtj = mti (t−1 )i−j ∈ Vi . For each i > 0, it
follows that C1 ∩ C2 ∩ ⋯ ∩ Ci contains the valuation ring Vi . Therefore, the collection
{Ci ∶ i > 0} of patch closed subsets of X has the finite intersection property. Since
X is a patch closed subset of the patch quasicompact space Zar(F ), X is patch
quasicompact. Thus the set
⋂i>0 Ci = X ∩ V (t) ∩ ( ⋂i>0 U (mti ))
is nonempty. Let U be a valuation ring in this intersection. Then U ∈ X, and, for
each i > 0, we have 0 ≠ m ∈ (t−1 )i U . Also, since t /∈ U , we have t−1 ∈ MU . Thus
m ∈ (t−1 )i U ⊆ MU . If U has rank 1, then the radical of mU in U is MU . In this
case there exists n > 0 such that
(t−1 )n ∈ mU ⊆ (t−1 )n+1 U,
a contradiction to the fact that t−1 is in U but is not a unit in U . We conclude that
U has rank > 1.
In the proof of Theorem 4.3 we pass to a subfield K of F . In doing so, we
need that features of the topology of X are preserved in the image of X in the
Zariski-Riemann space of K. This is given by the next lemma.
Lemma 4.2. Let K be a subfield of F . Then the mapping f ∶ Zar(F ) → Zar(K) ∶
V ↦ V ∩ K is a surjective map that is closed and continuous in the patch topology.
Proof. That f is surjective follows from the Chevalley Extension Theorem [8, Theorem 3.1.1, p. 57]. To see that f is continuous in the patch topology observe that
for x1 , . . . , xn , y ∈ K, the preimages under f of the subbasic patch open sets
{V ∈ Zar(K) ∶ x1 , . . . , xn ∈ V } and {V ∈ Zar(K) ∶ y ∈/ V }
are U (x1 , . . . , xn ) and V (y), respectively, and hence are patch open in Zar(F ).
Finally, with the patch topologies on Zar(F ) and Zar(K), f is a continuous map
between compact Hausdorff spaces, and so f is closed [7, Theorem 3.1.12, p. 125].
Theorem 4.3. Let X be a nonempty subset of Zar(F ) such that J(X) ≠ 0. If A(X)
is a local ring that is not a valuation domain, then patch(X) contains a valuation
ring of rank > 1.
Proof. Let A = A(X), J = J(X), and let M denote the unique maximal ideal of A.
We prove the lemma by establishing a series of claims. In the proof, for an ideal I
of A, we denote by End(I) the subring of F given by {t ∈ F ∶ tI ⊆ I}. Thus End(I)
is the largest ring in F in which I is an ideal.
Claim 1. If A is not completely integrally closed5, then patch(X) contains a
valuation ring of rank > 1.
5A domain R is completely integrally closed if for each nonzero ideal I of R and each element
t of the quotient field of R, tI ⊆ I if and only if t ∈ R; equivalently, R = End(I) for each nonzero
ideal I of R.
Proof of Claim 1. If every valuation ring in patch(X) has rank ≤ 1, then A,
as an intersection of completely integrally closed domains, is completely integrally
closed.
Claim 2. If M is a principal ideal of A, then patch(X) contains a valuation ring
of rank > 1.
Proof of Claim 2. Suppose M = mA for some m ∈ M . Since A is not a valuation
domain, A is not a field, and hence M ≠ 0. Since M is principal, the ideal P = ⋂i>0 M i
is the unique largest nonmaximal prime ideal of A and P AP = P [29, Exercise 1.5,
p. 7]. If P = 0, then A is valuation ring (in fact, a DVR), contrary to assumption.
Thus P ≠ 0, and, since P = P AP , it follows that AP ⊆ End(P ). Since P is a
nonmaximal prime ideal, this implies A ⊊ End(P ), so that A is not completely
integrally closed. By Claim 1, patch(X) contains a valuation ring of rank > 1,
which proves Claim 2.
Claim 3. If all the valuation rings in X dominate A, then patch(X) contains a
valuation ring of rank > 1.
Proof of Claim 3. If M is a principal ideal of A, Claim 2 implies that patch(X)
contains a valuation ring of rank > 1, and the proof of the claim is complete. Thus
we assume M is not a principal ideal of A. Also, by Claim 1, we may assume
that A is completely integrally closed and hence End(M ) = A. It remains to prove
Claim 3 in the case in which End(M ) = A, M is not a principal ideal of A and all
the valuation rings in X dominate A.
Since A is not a valuation ring, there exists t ∈ F such that t /∈ A and t−1 ∈/ A.
Since M is not an invertible ideal of A, we have
M ⊆ M (A ∶F M ) ⊊ A,
which forces M = M (A ∶F M ). Thus
(A ∶F M ) = End(M ) = A,
so t ∈/ A = (A ∶F M ). Let m ∈ M such that mt ∈/ A. Since t−1 , mt ∈/ A, we have
mti , (mti )−1 ∈/ A for each i > 0, as noted at the beginning of the proof of Lemma 4.1.
By Theorem 3.2 (applied in the case where “D” is A), there exists, for each i > 0,
a valuation ring Vi ∈ patch(X) that dominates A and for which mti is a unit in Vi .
By assumption, every valuation ring in X dominates A, so Lemma 2.3(1) implies
that every valuation ring in patch(X) dominates A. Thus m ∈ MVi for all i. By
Lemma 4.1, patch(X) contains a valuation ring of rank > 1, which completes the
proof of Claim 3.
Claim 4. If there are valuation rings in X that do not dominate A, then there
exists a two-dimensional Noetherian local subring D of A such that D is dominated
by A and D contains nonzero elements a, b with a ∈ M ∖ J and b ∈ J.
Proof of Claim 4. Since there are valuation rings in X that do not dominate A,
we have 0 ≠ J ⊊ M . To prove the existence of the ring D, suppose first A contains
a field k. In this case, choose a ∈ M ∖ J and 0 ≠ b ∈ J. Let C = k[a, b] and let
D = CM ∩C . Then D is a Noetherian local subring of A that is dominated by A.
Since C is generated by two elements over a field, D has Krull dimension at most
two. To see that D has exactly Krull dimension two, observe that since a ∈ M ∖ J
there is V ∈ X such that a ∈/ MV . Thus 0 ≠ b ∈ MV ∩ D ⊊ M ∩ D, so that D has
Krull dimension two. This verifies Claim 4 if A contains a field.
Suppose next that A does not contain a field. Then the contraction of M to the
prime subring of A is a nonzero principal ideal, and hence there is a DVR W that
is dominated by A. Let p be the generator of the maximal ideal of W .
If p ∈ J, then let b = p and choose a ∈ M ∖ J.
If p /∈ J, then let a = p and choose 0 ≠ b ∈ J.
In either case, we have a ∈ M ∖ J and 0 ≠ b ∈ J. Let C = W [a, b], and let
D = CM ∩C . Since W is a DVR and either a ∈ W or b ∈ W , the ring D has Krull
dimension at most two. As in the case in which A contains a field, since a ∈ M ∖ J
and J ≠ 0, it follows that D has Krull dimension two. This verifies Claim 4.
Claim 5. The set patch(X) contains a valuation ring of rank > 1.
Proof of Claim 5. If every valuation ring in X dominates A, then, by Claim 3,
patch(X) contains a valuation ring of rank > 1. It remains to consider the case
where there are valuation rings in X that do not dominate A. By Claim 4, there is
a two-dimensional local Noetherian subring D of A such that D is dominated by A
and D contains nonzero elements a, b such that a ∈ M ∖ J and b ∈ J. The next step
of the proof involves a reduction that allows us to work in the quotient field of D.
Let K denote the quotient field of D. Let
A′ = A ∩ K, M ′ = M ∩ K, and X ′ = {V ∩ K ∶ V ∈ patch(X)}.
Then A′ is a local integrally closed overring6 of D with maximal ideal M ′ and
A′ = ⋂V ∈X ′ V . Also, by Lemma 4.2, X ′ is patch closed in Zar(K).
Claim 5(a). All but finitely many valuation rings in X ′ dominate D.
Proof of Claim 5(a). Suppose that V ∈ X ′ and V does not dominate D. Then
MV ∩ D is a nonmaximal prime ideal of D that by Lemma 2.3(1) contains b. Since
D is a Noetherian ring of Krull dimension 2, there are only finitely many height
one prime ideals P1 , . . . , Pn of D containing b. The valuation ring V contains the
integral closure D ′ of the semilocal one-dimensional Noetherian ring DP1 ∩ ⋯ ∩ DPn .
Since the ring D ′ is a semilocal PID, there are only finitely many valuation rings
between D ′ and its quotient field K. Therefore, the set of valuation rings in X ′ that
do not dominate D is finite.
6By an overring of a domain, we mean a ring between the domain and its quotient field.
Returning to the proof of Claim 5, to show that patch(X) contains a valuation
ring of rank > 1, it is enough to prove that X ′ contains a valuation ring of rank > 1.
This is because if U ∈ X ′ has rank > 1, there is a valuation ring V ∈ patch(X) with
V ∩ K = U , and V is necessarily of rank > 1 since V extends U .
Thus we need only show that X ′ contains a valuation ring of rank > 1. We prove
this by verifying two claims.
Claim 5(b). If A′ [1/a] is a valuation ring, then X ′ contains a valuation ring of
rank > 1.
Proof of Claim 5(b). Since a ∈/ J, there exists V ∈ X ′ such that a ∈/ MV , and hence
1/a ∈ V . Since 0 ≠ b ∈ J ∩ A′ ⊆ MV , the valuation ring V does not have rank 0.
Therefore, A′ [1/a] ⊆ V ⊊ K. As an overring of the two-dimensional Noetherian ring
D, A′ has Krull dimension at most two. (This follows from the Dimension Inequality
[28, Theorem 15.5, p. 118].) Since a ∈ M ′ , the Krull dimension of A′ [1/a] is less
than that of A′ , and hence, since A′ [1/a] ⊊ K, it follows that A′ [1/a] has Krull
dimension one (and A′ has Krull dimension two). Thus the valuation ring A′ [1/a]
has rank one.
Now suppose by way of contradiction that every valuation ring in X ′ has rank
≤ 1. Then A′ , as an intersection of valuation rings in X ′ , is completely integrally
closed. Thus, since A′ has Krull dimension 2, A′ is not a valuation domain.
By [35, (4.2)] every patch closed representation of a domain contains a minimal
patch closed representation. Thus there is a patch closed subset Y of X ′ (recall that
X ′ is patch closed) such that A′ = ⋂V ∈Y V and no proper patch closed subset of Y
is a representation of A′ . By Lemma 2.3(4), the rank one valuation ring A′ [1/a], as
a proper subring of K, is an intersection of valuation rings in Y . Since A′ [1/a] has
the same quotient field as the valuation rings in X ′ , we conclude that A′ [1/a] is in
Y.
Since Y ⊆ X ′ and J ∩ A′ = ⋂V ∈X ′ MV , we have by Lemma 2.3(1) that 0 ≠ b ∈
J ∩ A′ ⊆ MV for all V ∈ Y . In particular, K ∈/ Y . Let W = A′ [1/a]. Since W has
rank one, we conclude that {W } = U (1/a) ∩ Y , so that W is a patch isolated point
in Y . We show this leads to a contradiction to the fact that Y is a minimal patch
closed representation of A′ .
Since W is a patch isolated point in Y , Y ∖ {W } is a patch closed set, and hence
by the minimality of Y we have
A′ ⊊ ⋂U ∈Y ∖{W } U.
Let
I = ⋂U ∈Y ∖{W } MU .
Then
⋂U ∈Y ∖{W } U ⊆ End(I).
Since A′ is completely integrally closed, it follows that I = 0 or I ⊆/ A′ . The former
case is impossible, since 0 ≠ b ∈ J ∩ A′ ⊆ I. Thus we conclude that I ⊆/ A′ . Let
t ∈ I ∖ A′ . Then t−1 ∈/ A′ since for any U ∈ Y ∖ {W }, we have t ∈ MU and A′ ⊆ U .
Thus t, t−1 /∈ A′ .
By Claim 5(a), all but finitely many valuation rings in X ′ , hence also in Y ,
dominate D. Applying Theorem 3.2 to D, A′ , Y and t, there exists a valuation ring
U ∈ lim(Y ) such that t, t−1 ∈ U . Since Y is patch closed, U ∈ Y , and, since t is a
unit in U , t /∈ MU . By the choice of t, t is in the maximal ideal of every valuation
ring in Y except W . This forces W = U ∈ lim(Y ), contradicting the fact that W is
a patch isolated point in Y . This contradiction shows that X ′ contains a valuation
ring of rank > 1. This completes the proof of Claim 5(b).
Claim 5(c). If A′ [1/a] is not a valuation ring, then X ′ contains a valuation ring
of rank > 1.
Proof of Claim 5(c). Since A′ [1/a] is not a valuation ring and A′ has quotient
field K, there exists 0 ≠ t ∈ K such that t ∈/ A′ [1/a] and t−1 ∈/ A′ [1/a]. Thus t, t−1 ∈/ A′
and at ∈/ A′ . With the aim of applying Lemma 4.1 to A′ and X ′ , we fix i > 0 and we
show that there is a valuation ring V ∈ X ′ such that ati is a unit in V and a ∈ MV .
Once this is proved, Lemma 4.1 implies that X ′ contains a valuation ring of rank
> 1, and the proof of Claim 5(c) is complete.
Let s = ati . Since t ∈/ A′ [1/a] and A′ [1/a] is integrally closed, it follows that
ti ∈/ A′ [1/a], and hence s ∈/ A′ . If s−1 ∈ A′ , then t−i = a(a−1 t−i ) = as−1 ∈ A′ , so that,
since A′ is integrally closed in K, t−1 ∈ A′ , contrary to the choice of t. Therefore,
s, s−1 ∈/ A′ . By Claim 5(a), all but finitely many valuation rings in X ′ dominate D.
By Theorem 3.2, with D, A′ , X ′ and s playing the roles of “D”, “A”, “X” and “t” in
the theorem, we obtain U ∈ X ′ such that s, 1/s ∈ U and U dominates D. Therefore,
s = ati is a unit in U and a ∈ MU since U dominates D. By Lemma 4.1, X ′ contains
a valuation ring of rank > 1, which proves Claim 5(c).
Finally, to complete the proof of Claim 5, we note that Claims 5(b) and 5(c) show
that X ′ contains a valuation ring of rank > 1. As discussed after the proof of Claim
5(a), this implies that patch(X) contains a valuation ring of rank > 1. Therefore, with
the proof of Claim 5 complete, the proof of the theorem is complete also.
5. Compact sets and holomorphy rings
The main results of the paper involve one-dimensional Prüfer domains. We collect
in the next lemma some basic properties of such rings that are needed for the
theorems in this section. We denote by J(A) the Jacobson radical of a ring A, and
by Max(A) the space of maximal ideals of A endowed with the Zariski topology.
Lemma 5.1. Let A be a one-dimensional Prüfer domain with quotient field F , and
let X ⊆ Zar(F ) such that F ∈/ X and A = A(X). Then J(A) = J(X). If J(A) ≠ 0,
then the Zariski, patch and inverse topologies all coincide on X and
patch(X) = {AM ∶ M ∈ Max(A)}.
Proof. It is straightforward to check that J(X) ⊆ J(A); for example, see [19, Remark 1.3]. To see that the reverse inclusion holds, let V ∈ X. Since F /∈ X and A is
a one-dimensional Prüfer domain, V = AM , where M = MV ∩ A is a maximal ideal
of A. Since J(A) ⊆ M ⊆ MV , we have J(A) ⊆ J(X). This proves the first assertion
of the lemma.
Suppose now that J(A) ≠ 0. Since A has Krull dimension one, Max(A) is homeomorphic to the spectral space Spec(A/J(A)). In a spectral space for which every
point is both minimal and maximal with respect to the specialization order, the
Zariski, patch and inverse topologies all agree; cf. [41, Corollary 2.6] or use the fact
that A/J(A) is a von Neumann regular ring. Thus these three topologies all agree
on X since X is homeomorphic to a subspace of Max(A).
Finally, since A is a one-dimensional Prüfer domain represented by X, the set of
all valuation overrings of A (each of which must have rank ≤ 1) is patch(X)∪{F } [10,
Corollary 4.10]. Since J(A) ≠ 0, Lemma 2.3(1) implies F /∈ patch(X). Therefore,
patch(X) = {AM ∶ M ∈ Max(A)}.
Remark 5.2. Topological aspects and factorization theory of one-dimensional Prüfer
domains with nonzero Jacobson radical are studied in [22].
The first application of the results of the previous section is the following characterization of subsets of Zar(F ) whose holomorphy ring is a one-dimensional Prüfer
domain with nonzero Jacobson radical and quotient field F .
Theorem 5.3. The following are equivalent for a nonempty subset X of Zar(F )
with J(X) ≠ 0.
(1) A(X) is a one-dimensional Prüfer domain with quotient field F .
(2) X is contained in a quasicompact set of rank one valuation rings in Zar(F ).
(3) Every valuation ring in patch(X) has rank one.
Proof. Let A = A(X) and J = J(X).
(1) ⇒ (2) Since A is a one-dimensional Prüfer domain with quotient field F , the
subset of Zar(F ) given by Y = {AM ∶ M ∈ Max(A)} consists of rank one valuation
rings in Zar(F ). The only other valuation overring of A is F . Since J ≠ 0, we have
F ∈/ X, which forces X ⊆ Y . Moreover, Y is homeomorphic to Max(A) and the
maximal spectrum of a ring is quasicompact, so statement (2) follows.
(2) ⇒ (3) Let Y be a quasicompact set of rank one valuation rings in Zar(F ) such
that X ⊆ Y . Since Y is quasicompact, Lemma 2.3(3) implies patch(X) ⊆ Y ∪ {F }. If
F ∈ patch(X), then from Lemma 2.3(1) it follows that J = 0, contrary to assumption.
Therefore, patch(X) ⊆ Y , so that patch(X) consists of rank one valuation rings.
(3) ⇒ (1) By Lemma 2.3(1), A = ⋂V ∈patch(X) V and 0 ≠ J = ⋂V ∈patch(X) MV .
Thus we can assume without loss of generality that X = patch(X). We claim first
that A has quotient field F . Let 0 ≠ a ∈ J. By Lemma 2.3(2), X is quasicompact,
so by Lemma 2.3(4) we have
A[1/a] = ⋂1/a∈V ∈X∪{F } V = F,
where the last equality follows from the fact that every valuation ring V in X has
rank one and satisfies a ∈ MV . Since A[1/a] = F , we conclude that A has quotient
field F .
To prove that A is a one-dimensional Prüfer domain, it suffices to show that AM
is a rank one valuation domain for each maximal ideal M of A. Let M be a maximal
ideal of A. Let Y = {V ∈ X ∶ AM ⊆ V }. Since
Y = X ∩ ( ⋂t∈AM U (t)),
Y is patch closed in X. By Lemma 2.3(2) and the fact that X is patch closed in
Zar(F ), Y is a quasicompact subset of Zar(F ). Since Y is quasicompact and consists
of rank one valuation rings, Lemma 2.3(4) implies that AM = ⋂V ∈Y V . Thus Y is
a patch closed representation of AM consisting of rank one valuation rings. Since
J ≠ 0, Theorem 4.3 implies that AM is a valuation domain. Since AM has quotient
field F and Y is a representation of AM consisting of rank one valuation domains,
it follows that AM ∈ Y . Hence AM is a rank one valuation domain, which proves
that A is a Prüfer domain with Krull dimension one.
A domain A is an almost Dedekind domain if for each maximal ideal M of A, AM
is a DVR. There exist many interesting examples of almost Dedekind domains; see
for example [22, 25, 30] and their references. For the factorization theory of such
rings, see [12, 22, 26].
Corollary 5.4. Let X be a nonempty set of Zar(F ) such that J(X) ≠ 0. Then X
is contained in a quasicompact set of DVRs in Zar(F ) if and only if A(X) is an
almost Dedekind domain with quotient field F .
Proof. Let A = A(X). Suppose X is contained in a quasicompact set Y of DVRs in
Zar(F ). By Theorem 5.3, A is a one-dimensional Prüfer domain with quotient field
F and nonzero Jacobson radical. By Lemma 2.3(3),
patch(X) ⊆ patch(Y ) = Y ∪ {F }.
Since J(X) ≠ 0, Lemma 2.3(1) implies F ∈/ patch(X), so patch(X) ⊆ Y . Thus
patch(X) consists of DVRs. By Lemma 5.1, we have that for each maximal ideal M
of A, AM is in patch(X) and hence AM is a DVR. Thus A is an almost Dedekind
domain. The converse follows from Lemma 5.1 and Theorem 5.3.
Remark 5.5. Let X be a nonempty quasicompact set of DVRs in Zar(F ) such
that J(X) ≠ 0. By Corollary 5.4, A = A(X) is an almost Dedekind domain, and
J(A) = J(X) by Lemma 5.1. If there is t ∈ J(A) such that J(A) = tA (equivalently,
MV = tV for each V ∈ X), then A has the property that every proper ideal is
a product of radical ideals. For this and related results on such rings, which are
known in the literature as SP-domains or domains with the radical factorization
property, see [12, 22, 30].
We prove next our main theorem of this section (the “main lemma” of the introduction) regarding the correspondence between quasicompact sets and holomorphy
rings in the space of rank one valuation rings.
Theorem 5.6. The mappings
X ↦ A(X) and A ↦ {AM ∶ M ∈ Max(A)}
define a bijection between the quasicompact sets X of rank one valuation rings in
Zar(F ) with J(X) ≠ 0 and the one-dimensional Prüfer domains A with quotient
field F and nonzero Jacobson radical.
Proof. Let X be a quasicompact set of rank one valuation rings in Zar(F ) with
J(X) ≠ 0. By Lemma 5.1 and Theorem 5.3, A = A(X) is a one-dimensional
Prüfer domain such that J(A) ≠ 0 and A has quotient field F . By Lemma 2.3(3),
patch(X) ⊆ X ∪ {F }. Since J(A) ≠ 0, Lemma 2.3(1) implies F /∈ patch(X). Thus
X = patch(X), and, by Lemma 5.1, X = {AM ∶ M ∈ Max(A)}. Conversely, suppose
A is a one-dimensional Prüfer domain with quotient field F and 0 ≠ t ∈ J(A). Since
Max(A) is quasicompact, X = {AM ∶ M ∈ Max(A)} is a quasicompact set of rank
one valuation rings such that A = A(X) and 0 ≠ t ∈ J(X).
The next example shows the necessity of the hypotheses in Theorem 5.6.
Example 5.7. Let X be a nonempty subset of Zar(F ). Theorem 5.6 shows that if
(a) X consists of rank one valuation rings, (b) X is quasicompact and (c) J(X) ≠ 0,
then A(X) is a Prüfer domain. These hypotheses are necessary in the sense that
A(X) need not be a Prüfer domain if any one of (a), (b) or (c) is omitted. The
following classes of examples illustrate this.
(1) An example in which A(X) is not a Prüfer domain but in which (a) and (b)
hold. Let D be an integrally closed Noetherian domain of Krull dimension > 1.
Then the set X of localizations of D at height one prime ideals is a quasicompact
set of rank one valuation rings for which A(X) = D.
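As a concrete instance (our illustration; it is not taken from the source), one may take the polynomial ring in two variables over a field k. Being a Noetherian Krull domain,

\[ D \;=\; k[x,y] \;=\; \bigcap_{\operatorname{ht} P = 1} D_P , \]

and each localization D_P at a height one prime P is a DVR, so X = {D_P ∶ ht P = 1} consists of rank one valuation rings with A(X) = D. Yet D is not a Prüfer domain, since the maximal ideal (x, y)D is not invertible. Note that here hypothesis (c) fails as well: J(X) = 0, because every nonzero element of D lies in only finitely many height one primes.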
(2) An example in which A(X) is not a Prüfer domain but in which (b) and (c)
hold. Let k be a field, and let V be the DVR k(S)[T]_(T), where S and T are indeterminates over k. Let R = k + MV . Then R is a one-dimensional integrally closed local domain. With X the set of valuation overrings of R of rank > 0, we have that R = A(X) and J(X) ≠ 0. Moreover, X is quasicompact since X is a closed subset of the (quasicompact) space of valuation overrings of R. Indeed, X is the set of valuation overrings of R contained in V . Thus (b) and (c) hold, but A(X) is not a Prüfer domain, since A(X) is a local domain contained in more than one valuation ring of Krull dimension 2.
(3) An example in which A(X) is not a Prüfer domain but in which (a) and (c) hold.
Let D be an integrally closed Noetherian local domain of Krull dimension > 1
with quotient field F . To exhibit the desired example, it suffices to show that
D is the intersection of the DVRs in Zar(F /D) that dominate D. Let m denote
the maximal ideal of D, and let x ∈ F ∖ D. If x⁻¹ ∈ D, then x⁻¹ ∈ m. Since every Noetherian local domain is birationally dominated by a DVR [6, p. 26], there exists a DVR V in Zar(F /D) dominating D, so that x⁻¹ ∈ m ⊆ MV and thus x ∉ V . On the other hand, if x⁻¹ ∉ D, then the ring D[x⁻¹] has a maximal ideal generated by m and x⁻¹ [29, Exercise 31, pp. 43–44], so, again by [6, p. 26], there is a DVR V in Zar(F /D) that dominates D and for which x⁻¹ ∈ MV and hence x ∉ V . It
follows that D is the intersection of all the DVRs in Zar(F /D) that dominate
it.
Restricting to DVRs in Theorem 5.6 yields a correspondence with almost Dedekind
domains.
Corollary 5.8. The mappings
X ↦ A(X) and A ↦ {AM ∶ M ∈ Max(A)}
define a bijection between the quasicompact sets X of DVRs in Zar(F ) with J(X) ≠ 0
and the almost Dedekind domains A with quotient field F and nonzero Jacobson
radical.
Proof. If X is a quasicompact set of DVRs in Zar(F ) with J(X) ≠ 0, then, by
Corollary 5.4, A is an almost Dedekind domain. Conversely, if A is an almost Dedekind
domain, then clearly X = {AM ∶ M ∈ Max(A)} is a quasicompact set of DVRs. Thus
the corollary follows from Theorem 5.6.
By Proposition 2.4(2) the quasicompact sets in Theorem 5.6 are compact, so the
theorem alternatively can be stated for compact sets instead. Along these lines, in
the case in which all the valuation rings under consideration occur as overrings of a
domain with quotient field F , the restriction that J(X) ≠ 0 in Theorem 5.6 can be
omitted (or, more correctly, hidden) if the quasicompact hypothesis is strengthened
to that of being compact.
Corollary 5.9. Let R be a proper subring of F with quotient field F . There is a
bijection given by
X ↦ A(X) and A ↦ {AM ∶ M ∈ Max(A)}
between the compact sets X of rank one valuation overrings of R and the one-dimensional Prüfer overrings A of R with nonzero Jacobson radical.
Proof. If X is a compact set of rank one valuation overrings, then J(X) ≠ 0 by
Proposition 2.4(3). By Theorem 5.6, A = A(X) is a one-dimensional Prüfer domain
with J(A) ≠ 0 and X = {AM ∶ M ∈ Max(A)}. Conversely, if A is a one-dimensional Prüfer overring of R with J(A) ≠ 0, then X = {AM ∶ M ∈ Max(A)} is quasicompact
by Theorem 5.6, and X is Hausdorff by Proposition 2.4(2). Clearly, A = A(X).
Remark 5.10. As in Corollary 5.8, the bijection in Corollary 5.9 restricts to a
bijection between compact sets of DVRs and almost Dedekind overrings A of R
with nonzero Jacobson radical.
In general it is difficult to determine when an intersection of two one-dimensional
Prüfer domains with quotient field F is a Prüfer domain. For example, it is easy
to see that any Noetherian local UFD A of Krull dimension 2 can be written as an
intersection A = A1 ∩ A2 where A1 is a DVR overring and A2 is a PID overring.
In this example, A is not a Prüfer domain despite being an intersection of two
PIDs. Significantly, the ring A2 here has J(A2 ) = 0. In our context, the topological
characterization in Corollary 5.9 shows that the difficulty here is removed if J(Ai ) ≠
0, a fact we prove in the next corollary.
Corollary 5.11. Let A1 , . . . , An be one-dimensional Prüfer domains, each with quotient field F . Let J = J(A1 ) ∩ ⋯ ∩ J(An ), and let A = A1 ∩ ⋯ ∩ An . If J(Ai ) ∩ A ≠ 0
for all i, then A is a one-dimensional Prüfer domain with J(A) = J and quotient
field F . If also each Ai is an almost Dedekind domain, then so is A.
Proof. For each i = 1, 2, . . . , n, let
Xi = {(Ai )M ∶ M ∈ Max(Ai )}.
By Corollary 5.9, each Xi is compact. Thus X = X1 ∪ ⋯ ∪ Xn is quasicompact. Since
J(Ai ) ∩ A ≠ 0 for all i, we have
J(X) = J(A1 ) ∩ ⋯ ∩ J(An ) ≠ 0.
Thus Theorem 5.6 implies A = A(X) is a one-dimensional Prüfer domain, and, by
Lemma 5.1, J(A) = J(A1 ) ∩ ⋯ ∩ J(An ). If also Ai is an almost Dedekind domain,
then each Xi is a quasicompact set of DVRs, so that X is also a quasicompact set
of DVRs. In this case, A is an almost Dedekind domain by Corollary 5.8.
Remark 5.12. It is an open question as to whether the intersection A = A1 ∩ A2
of one-dimensional Prüfer domains A1 and A2 with quotient field F and nonzero
Jacobson radical has quotient field F . If the answer is affirmative, then A is a
one-dimensional Prüfer domain by Corollary 5.11. In any case, let D be the prime
subring of A, let Si = {1 + d ∶ d ∈ J(Ai )} and let Bi = DSi + J(Ai ). Then B1 and
B2 are local domains of Krull dimension one with quotient field F . A necessary
condition for A to have quotient field F is that B1 ∩ B2 has quotient field F . In
[16, Question 2.1] Gilmer and Heinzer ask whether the intersection of two one-dimensional local domains, each with the same quotient field F , has quotient field F .
Corollary 5.13. Let X be a nonempty quasicompact set of rank one valuation rings
in Zar(F ). If J(X) ≠ 0, then the patch, inverse and Zariski topologies agree on X.
Proof. By Theorem 5.6, A = A(X) is a one-dimensional Prüfer domain with J(A) ≠ 0
and quotient field F . By Lemma 5.1, the inverse, patch and Zariski topologies agree
on X.
As the last application of this section, we describe schemes in Zar(F ) consisting
of rank ≤ 1 valuation rings. Let X be a subspace of Zar(F ), and let X ∗ = X ∪ {F }.
Let OX ∗ be the sheaf on X ∗ defined for each nonempty open set U of X ∗ by
OX ∗ (U ) = ⋂V ∈U V . (The reason for appending F to X is to guarantee that OX ∗
is a sheaf.) We say that X ∗ is a scheme in Zar(F ) if the locally ringed space
(X ∗ , OX ∗ ) is a scheme, and that X ∗ is an affine scheme in Zar(F ) if (X ∗ , OX ∗ ) is
an affine scheme. Thus X ∗ is an affine scheme in Zar(F ) if and only if the set of
all localizations A(X)P , P a nonzero prime ideal of A(X), is X. The question of
whether a subset of Zar(F ) is an affine scheme is closely connected to the question
of whether the intersection of valuation rings in the set is a Prüfer domain with
quotient field F . For more on this, see [36].
A necessary condition for X ∗ to be an affine scheme in Zar(F ) is that X is quasicompact; similarly, for X ∗ to be a scheme, X must be locally quasicompact (i.e., every point has a quasicompact neighborhood). The next corollary shows that these conditions are also sufficient for sets X of valuation rings of rank ≤ 1 with J(X) ≠ 0.
Corollary 5.14. Let X be a nonempty set of rank one valuation rings in Zar(F ) with J(X) ≠ 0. Then X ∗ is an affine scheme in Zar(F ) if and only if X is quasicompact; X ∗ is a scheme in Zar(F ) if and only if X is locally quasicompact.
Proof. An affine scheme in Zar(F ) is quasicompact since the prime spectrum of
a ring is quasicompact. Conversely, if X is quasicompact, then, with A = A(X),
Theorem 5.6 implies that X ∗ = {AP ∶ P ∈ Spec(A)}, so that X ∗ is an affine scheme
in Zar(F ).
Now suppose X is locally quasicompact. Let V ∈ X, and let Z be a quasicompact
neighborhood of V in X. Then there is an open subset Y of X of the form Y =
U (x1 , . . . , xn ) ∩ X, where x1 , . . . , xn ∈ V and Y ⊆ Z. Since Z is quasicompact with
J(Z) ≠ 0, Corollary 5.13 implies that the Zariski and inverse topologies agree on
Z. Thus Y is a Zariski closed subset of Z. Since Z is quasicompact, Y is also
quasicompact, and hence Y is an affine scheme that is open in X. This shows that
X is a union of affine open schemes, so that X is a scheme in Zar(F ). Conversely, if
X is a scheme in Zar(F ), then X is a union of open sets that are affine schemes in
Zar(F ). Thus X is a union of quasicompact open subsets, proving that X is locally
quasicompact.
6. Proof of Main Theorem
In this section we prove the main theorem of the introduction. In light of Theorem 5.6, what remains to be shown is that the one-dimensional Prüfer domains with
nonzero Jacobson radical are Bézout domains. In fact, we prove more generally
that any one-dimensional domain with nonzero Jacobson radical has trivial Picard
group.
Theorem 6.1. If A is a one-dimensional domain with J(A) ≠ 0, then every invertible ideal of A is a principal ideal.
Proof. Let I be an invertible ideal of A. We show that I is a principal ideal of A.
After multiplying I by a nonzero element of J(A), we can assume I ⊆ J(A). Since
A has Krull dimension one and I is invertible, there exist 0 ≠ a, b ∈ I such that
I = (a, b)A; see [40, Corollary 4.3], or [21, Theorem 3.1] for a more general result.
Let J = J(A), and let
X = {M ∈ Max(A) ∶ (aA ∶A b) ⊆/ M } and Y = {M ∈ Max(A) ∶ (aA ∶A b) ⊆ M }.
First we show that there is e ∈ A such that e2 − e ∈ J and
X = {M ∈ Max(A) ∶ e ∈ M } and Y = {M ∈ Max(A) ∶ 1 − e ∈ M }.
Since A/J is a reduced ring of Krull dimension 0, A/J is a von Neumann regular
ring, and hence every finitely generated ideal of A/J is generated by an idempotent.
In order to apply this observation to the image of (aA ∶A b) in A/J, we claim that
(aA ∶A b) is a finitely generated ideal of A. Since I = (a, b)A, we have (aA ∶A b) =
(aA ∶F I). Since I is invertible, it follows that
(aA ∶A b)I = (aA ∶F I)I = aA.
With I −1 = (A ∶F I), we have (aA ∶A b) = aI −1 . Since I −1 is invertible, I −1 is
a finitely generated A-submodule of F , and it follows that (aA ∶A b) is a finitely
generated ideal of A. Therefore, since A/J is a von Neumann regular ring, there is
f ∈ (aA ∶A b) such that f A + J = (aA ∶A b) + J and f 2 − f ∈ J. Set e = 1 − f . Then
e2 − e ∈ J, and, since J is contained in every maximal ideal of A, it follows that
X = {M ∈ Max(A) ∶ e ∈ M } and Y = {M ∈ Max(A) ∶ f ∈ M }.
Next, since ab ∈ J and A/abA has Krull dimension 0, it follows that J = √(abA). (Recall we have assumed that I ⊆ J.) Since
ef = e − e² ∈ J = √(abA) = √(abJ),
there is k > 0 such that e^k f^k ∈ abJ. Let c = (a − e^k)(b − f^k). We claim that I = cA.
It suffices to check that this equality holds locally.
Let M be a maximal ideal of A. Suppose first that M ∈ X. Then (aA ∶A b) ⊆/ M ,
so there exists d ∈ A ∖ M such that db ∈ aA. It follows that bAM ⊆ aAM , and hence
IAM = aAM . Thus to show that IAM = cAM , it suffices to show that aAM = cAM .
If b ∈/ M , then since d ∈/ M we have db ∈/ M . However, with b ∈/ M , the fact that
ab ∈ M implies db ∈ aA ⊆ M , a contradiction. Thus b ∈ M . Now, since e ∈ M , we
have f = 1 − e ∉ M and hence (because b ∈ M ) we conclude that b − f^k ∉ M . This implies that cAM = (a − e^k)AM . Since
e^k AM = e^k f^k AM ⊆ aJAM ,
there are j ∈ J and h ∈ A ∖ M such that he^k = aj. Thus
h(a − e^k) = ha − he^k = ha − aj = (h − j)a.
Since j ∈ M and h ∉ M , we have h − j ∉ M . From the fact that h(a − e^k) = (h − j)a we conclude that
cAM = (a − e^k)AM = aAM ,
which proves the claim that for each M ∈ X, IAM = cAM .
Now suppose that M ∈ Y , so that f ∈ M . We show that IAM = cAM in this case
also. Since (aA ∶A b) ⊆ M , we have bAM ⊆/ aAM . Since I is invertible, IAM is a
principal ideal of AM . The following standard argument shows that this implies that
IAM = bAM . Let z ∈ I such that IAM = zAM . Then there exist x, y, s, t ∈ AM such
that a = zx, b = zy and z = as + bt. If x is a unit in AM , then bAM ⊆ zAM = aAM ,
a contradiction. Thus x is not a unit in AM . Since a = zx = (as + bt)x, we have
a(1 − sx) = btx, with 1 − sx a unit in AM since x ∈ M AM . Therefore, aAM ⊆ bAM ,
which proves that IAM = bAM .
We have shown that for M ∈ Y , we have IAM = bAM . To complete the proof of the theorem, it suffices to show that bAM = cAM . The proof proceeds as in the case where M ∈ X. Since bAM ⊈ aAM , it follows that a ∈ M , and hence a − e^k ∉ M . Thus cAM = (b − f^k)AM . Also,
f^k AM = e^k f^k AM ⊆ bJAM ,
so that there is h ∈ A ∖ M such that hf^k = bj for some j ∈ J. Thus
h(b − f^k) = hb − bj = b(h − j),
with h and h − j units in AM . Therefore, (b − f^k)AM = bAM , which shows that
cAM = (b − f^k)AM = bAM = IAM .
This proves that IAM = cAM for all maximal ideals M of Y . Since Max(A) = X ∪Y ,
we conclude that I = cA.
Corollary 6.2. If A is a one-dimensional Prüfer domain with J(A) ≠ 0, then A is
a Bézout domain.
Proof. This follows from Theorem 6.1 and the fact that every finitely generated ideal
of a Prüfer domain is invertible [14, Theorem 22.1].
A special case of Theorem 6.1 in which it is assumed in addition that A is an almost
Dedekind domain for which every maximal ideal of A has finite sharp degree was
proved by Loper and Lucas [26, Theorem 2.9] using different methods.
With Corollary 6.2 and the results of the preceding sections, we can now prove
the main theorem from the introduction.
Theorem 6.3. If X is a quasicompact set of rank one valuation rings in Zar(F )
such that J(X) ≠ 0, then A(X) is a Bézout domain of Krull dimension one with
quotient field F .
Proof. By Theorem 5.6, A(X) is a one-dimensional Prüfer domain with quotient
field F and nonzero Jacobson radical. By Corollary 6.2, A(X) is a Bézout domain,
which proves the theorem.
Remark 6.4. In [37] we apply the results of this article to rank one valuation
overrings of a two-dimensional Noetherian local domain D with quotient field F .
We focus on the divisorial valuation overrings of D, i.e., the DVRs that birationally
dominate D and are residually transcendental over D. It is shown, for example,
that if n is a positive integer, then the subspace X of Zar(F /D) consisting of all
divisorial valuation rings that can be reached through an iterated sequence of at
most n normalized quadratic transforms of D is quasicompact. By Corollary 5.8,
A(X) is an almost Dedekind domain with nonzero Jacobson radical.
Acknowledgment. I thank the referee for helpful comments that improved the
presentation of the article and for suggesting the version of Example 5.7(2) that is
included here.
References
[1] E. Becker, The real holomorphy ring and sums of 2nth powers. In Real algebraic geometry and
quadratic forms (Rennes, 1981), Lecture Notes in Math. 959, pages 139–181. Springer, Berlin,
1982.
[2] R. Berr, On real holomorphy rings. In Real analytic and algebraic geometry (Trento, 1992),
pages 47–66. de Gruyter, Berlin, 1995.
[3] M. A. Buchner and W. Kucharz, On relative real holomorphy rings, manuscripta math., 63(3)
(1989), 303–316.
[4] P. J. Cahen and J. L. Chabert, Integer-valued polynomials, Mathematical Surveys and Monographs, 48. American Mathematical Society, Providence, RI, 1997.
[5] S. Chase, Direct products of modules, Trans. Amer. Math. Soc. 97 (1960), 457–473.
[6] C. Chevalley, La notion d'anneau de décomposition, Nagoya Math. J. 7 (1954), 21–33.
[7] R. Engelking, General Topology. Revised and completed edition. Heldermann Verlag, Berlin,
1989.
[8] A. J. Engler and A. Prestel, Valued fields, Springer-Verlag, 2005.
[9] C. Favre and M. Jonsson, The valuative tree, Lecture Notes in Mathematics 1853, Springer,
New York, 2004.
[10] C. Finocchiaro, M. Fontana and K. A. Loper, The constructible topology on spaces of valuation
domains, Trans. Amer. Math. Soc. 365 (2013), 6199–6216.
[11] C. Finocchiaro and D. Spirito, Topology, intersections and flat modules, Proc. Amer. Math.
Soc. 144 (2016), 4125–4133.
[12] M. Fontana, E. Houston, T. Lucas, Factoring ideals in integral domains. Lecture Notes of the
Unione Matematica Italiana, 14. Springer, Heidelberg; UMI, Bologna, 2013.
[13] J. Giansiracusa and N. Giansiracusa, The universal tropicalization and the Berkovich analytification, arXiv:1410.4348.
[14] R. Gilmer, Multiplicative ideal theory, Queen’s Papers in Pure and Applied Mathematics, No.
12, Queen’s University, Kingston, Ont. 1968.
[15] R. Gilmer, Two constructions of Prüfer domains, J. Reine Angew. Math. 239/240 (1969),
153–162.
[16] R. Gilmer and W. Heinzer, The quotient field of an intersection of integral domains, J. Algebra
70 (1991), 238–249.
[17] A. Granja, The valuative tree of a two-dimensional regular local ring, Math. Res. Lett. 14
(2007), no. 1, 19–34.
[18] J. Goubault-Larrecq, Non-Hausdorff topology and domain theory. New Mathematical Monographs, 22. Cambridge University Press, Cambridge, 2013.
[19] W. Heinzer, Noetherian intersections of integral domains. II. Conference on Commutative
Algebra (Univ. Kansas, Lawrence, Kan., 1972), pp. 107–119. Lecture Notes in Math., Vol. 311,
Springer, Berlin, 1973.
[20] R. Hartshorne, Algebraic geometry. Graduate Texts in Mathematics, No. 52. Springer-Verlag,
New York-Heidelberg, 1977.
[21] R. Heitmann, Generating ideals in Prüfer domains. Pacific J. Math. 62 (1976), no. 1, 117–126.
[22] O. Heubo-Kwegna, B. Olberding and A. Reinhart, Group-theoretic and topological invariants
of completely integrally closed Prüfer domains. J. Pure Appl. Algebra 220 (2016), no. 12,
3927–3947.
[23] M. Hochster, Prime ideal structure in commutative rings, Trans. Amer. Math. Soc. 142 (1969),
43–60.
[24] K. A. Loper, A classification of all D such that Int(D) is a Prüfer domain. Proc. Amer. Math.
Soc. 126 (1998), no. 3, 657–660.
[25] K. A. Loper, Almost Dedekind domains which are not Dedekind. in Multiplicative ideal theory
in commutative algebra, 279–292, Springer, New York, 2006.
[26] K. A. Loper and T.G. Lucas, Factoring ideals in almost Dedekind domains. J. Reine Angew.
Math. 565 (2003), 61–78.
[27] K. A. Loper and F. Tartarone, A classification of the integrally closed rings of polynomials
containing Z[x], J. Commutative Algebra 1 (2009), 91–157.
[28] H. Matsumura, Commutative ring theory, Cambridge Studies in Advanced Mathematics 8,
Cambridge University Press, 1986.
[29] I. Kaplansky, Commutative Rings, Allyn and Bacon, Boston, 1970.
[30] B. Olberding, Factorization into radical ideals, in Arithmetical properties of commutative rings
and monoids, 363–377, Lect. Notes Pure Appl. Math., 241, Chapman & Hall/CRC, Boca
Raton, FL, 2005.
[31] B. Olberding, Irredundant intersections of valuation overrings of two-dimensional Noetherian
domains, J. Algebra 318 (2007), 834-855.
[32] B. Olberding, Overrings of two-dimensional Noetherian domains representable by Noetherian
spaces of valuation rings, J. Pure Appl. Algebra 212 (2008), 1797-1821.
[33] B. Olberding, On the geometry of Prüfer intersections of valuation rings, Pacific J. Math. 273
(2015), No. 2, 353–368.
[34] B. Olberding, Intersections of valuation overrings of two-dimensional Noetherian domains,
in Commutative algebra: Noetherian and non-Noetherian perspectives, Springer, 335–361,
Springer, New York, 2011.
[35] B. Olberding, Topological aspects of irredundant intersections of ideals and valuation rings.
Multiplicative ideal theory and factorization theory, 277–307, Springer Proc. Math. Stat., 170,
Springer, [Cham], 2016.
[36] B. Olberding, Affine schemes and topological closures in the Zariski-Riemann space of valuation
rings, J. Pure Appl. Algebra 219 (2015), no. 5, 1720–1741.
[37] B. Olberding, Intersections of divisorial valuation overrings of a two-dimensional Noetherian
local domain, in preparation.
[38] P. Roquette, Principal ideal theorems for holomorphy rings in fields, J. Reine Angew. Math.
262–263 (1973), 361–374.
[39] D. Rush, Bézout domains with stable range 1. J. Pure Appl. Algebra, 158 (2001), 309–324.
[40] J. D. Sally and W. Vasconcelos, Stable rings. J. Pure Appl. Algebra 4 (1974), 319–336.
[41] N. Schwartz and M. Tressl, Elementary properties of minimal and maximal points in Zariski
spectra, J. Algebra, 323 (2010), 698–728.
[42] H. W. Schülting, Real holomorphy rings in real algebraic geometry, Real algebraic geometry
and quadratic forms (Rennes, 1981), pp. 433-442, Lecture Notes in Math., 959, Springer,
Berlin-New York, 1982.
[43] Stacks project, http://stacks.math.columbia.edu, 2016.
[44] O. Zariski, The compactness of the Riemann manifold of an abstract field of algebraic functions,
Bull. Amer. Math. Soc. 50 (1944), no. 10, 683–691.
[45] O. Zariski and P. Samuel, Commutative algebra. Vol. II. Graduate Texts in Mathematics, Vol.
29. Springer-Verlag, New York-Heidelberg, 1975.
Department of Mathematical Sciences, New Mexico State University, Las Cruces,
NM 88003-8001
Optimum Synthesis of Mechanism for single- and hybrid-tasks using
Differential Evolution
F. Peñuñuri,∗ R. Peón-Escalante,† C. Villanueva,‡ and D. Pech-Oy§
arXiv:1102.2017v2 [] 18 Jun 2011
Facultad de Ingenierı́a, Universidad Autónoma de Yucatán,
A.P. 150, Cordemex, Mérida, Yucatán, México.
The optimal dimensional synthesis for planar mechanisms using differential evolution (DE) is
demonstrated. Four examples are included: in the first case, the synthesis of a mechanism for
hybrid-tasks, considering path generation, function generation, and motion generation, is carried
out. The second and third cases pertain to path generation, with and without prescribed timing.
Finally, the synthesis of an Ackerman mechanism is reported. The order defect problem is solved by manipulating individuals instead of penalizing them or discretizing the search space for the parameters. A technique is introduced that applies a transformation in order to generate an initial elitist population satisfying the Grashof and crank conditions. As a result, the efficiency of the evolutionary algorithm increases.
I.
INTRODUCTION
Dimensional synthesis of mechanisms comprises the problems of path, function and motion generation. There are three types of methods for this purpose: graphical, analytical, and those involving
optimization [1].
Graphical methods offer a quick solution by sacrificing accuracy, and are rarely used since computers
can do the same work faster and better.
Analytical methods are based on algebraic expressions [1, 2], displacement matrix [3], complex
numbers [4], or continuation methods [5] resulting in mechanisms whose error will be zero at the
precision points.
The problem of motion generation, in the case of a planar four-bar mechanism, can be designed
based on the Burmester curve. This is one of the first proposed analytical methods for the dimensional
synthesis of mechanisms. In [6] an algorithm for the robust computation of the solution of the fiveposed Burmester problem is introduced. In [7] a Matlab-based graphical user interface to the algorithm
of [6] is done.
Also, the general equation of the coupler curve of a four-bar linkage has attracted the attention of
researchers. For a given set of points on the coupler curve, Blechschmidt and Uicker [8] have used the
equation of the coupler curve to synthesize a four-bar linkage by determining the coefficients of the
curve.
The main disadvantage of the analytical methods lies in the maximum number of points of accuracy
that can be set. The mechanisms are restricted to move exactly in a number of points equal to the
number of independent parameters that define them [9, 10]. Even though the mechanisms obtained
can reach the precision points, they may have other problems, known as design defects, that are not
taken into account during the synthesis process, thereby preventing the mechanisms from fulfilling the
task for which they were designed [9].
Optimization methods are based on numerical methods and allow a large number of design points
tolerating a loss of accuracy. These are formulated in terms of nonlinear programming problems.
The optimal solution is found by optimizing an objective function within an iterative procedure. The
objective function can be defined as a difference between the generated and the specified movement,
known as the structural error [3]. In general, it can be defined as the design error, i.e., the error
∗
†
‡
§
[email protected]
[email protected]
[email protected]
[email protected]
that arises when we are trying to satisfy a design equation [11] (which could be the Freudenstein
equation). An interesting definition for the objective function is presented in [12] where it is defined as
a kind of entropy that is maximized. The use of optimization methods is inevitable when the number
of positions to be covered during the duty cycle exceeds a certain number (in the case of motion
generation synthesis the classical analytical approach is limited to five specified points for a four-bar
mechanism).
The interest in optimum synthesis of mechanisms is not new. There have been a large number
of studies on this topic using a variety of methods. For example, some local search methods have
been described in references [13–20]. The main disadvantage of these methods is that the objective function must be differentiable. Also, they are very sensitive to the initial search point.
Within the global search methods some of the techniques that have been used are Simulated Annealing (SA) [21], Neural Network [23, 24], Genetic Algorithm (GA) [25–30], Particle Swarm Optimization
Technique (PSO) [30], and Differential Evolution (DE) [30–34]. There are works that use a combination of two optimization methods such as SA-Powell’s Method [35], GA-FL [36], Tabu-Gradient [37],
Ant Colony Optimization-Gradient (AG) [38], and GA-DE [39].
The use of evolutionary algorithms has been of significant interest in recent years. For instance,
Ullah and Kota solved the path generation problem by presenting an objective function based on
Fourier descriptors that evaluates only the shape differences between two curves [35]. This function
is first minimized using simulated annealing followed by Powell’s method. The size, orientation
and position of the desired curve are addressed at a later stage by determining analogous points on
the desired and candidate curves. Similarly, Vasiliu and Yannou [23] synthesized the dimensions of a
planar mechanism whose purpose is to generate a trajectory shape by using a neural network.
Laribi et al. presented the combined GA-FL method to solve the problem of path generation in
mechanism synthesis [36]. The FL-controller monitors the variation of the design variables during the
first run of the GA and modifies the initial bounding intervals to restart a second run of the GA.
Smaili and Diab (2007) apply AG to the mechanism synthesis problem for single- and hybrid-tasks
[38]. Shiakolas introduced a technique called the Geometric Centroid of Precision Points for defining
initial bounds for the design variables combining with DE [31].
Acharyya and Mandal carry out the path synthesis of a four-bar linkage using three different methods
[30]. They found that the DE with /rand/1/exp method performs better than the two others; one
being a binary-coded genetic algorithm (BGA) with multipoint crossover, and the other a PSO with
the constriction factor approach.
In [39] a GA-DE hybrid algorithm is used to carry out the path synthesis of a four-bar linkage. A
real-valued genetic algorithm, where the crossover operation of GA is replaced by differential vector
perturbation, is employed.
The DE method is a simple yet powerful algorithm for global optimization [40]. It is not difficult
to modify the main operators and try for improvements of the method. In the present work, we use
DE to find optimum solutions for the dimensional synthesis problem of four mechanisms. The first
three correspond to planar four-bar and the last one to a six-bar mechanism. The paper is organized
as follows: In section II we present the classical DE method, which is used throughout this work;
section III presents notation and conventions. In Section IV we employ the idea of hybrid-task for
the synthesis of mechanisms as was introduced in [38]. The problem of this section presents us with
the difficulty of mixing angles with lengths. This difficulty is addressed by introducing a factor that,
on the one hand, defines consistently the objective function and on the other hand, allows for proper
weighing of the involved errors. This is important in order to fulfill the task of function and motion
generation. Moreover, this problem is used to show an easy and effective way to handle the order
defect problem. The proposed method entirely avoids both individual penalization and search-space discretization.
Section V deals with the prescribed timing path generation for 18 points and 10 design variables.
We introduce a transformation which constructs an elitist population, in the sense of satisfying the
Grashof and crank conditions, avoiding a probabilistic or penalization approach. This problem has
been presented by other authors [26, 27, 39]. To avoid some controversies related to the values of the
objective function that each of them report, we have written a Fortran 90 program that evaluates
the objective function.
[Fig. 1 flowchart: Start → Specify the DE parameters → Initialize the population → Generate a trial population (Mutation, Crossover, Selection) → Termination criteria satisfied? If not, generate a new trial population; if so, Stop.]
FIG. 1. Flowchart for the DE algorithm.
The ideas introduced in Sections IV and V allow us to solve in a simple manner the path generation
problem without prescribed timing for 18 target points and 27 design variables, which is the problem
described in Section VI. In Section VII an Ackerman mechanism is optimized. Finally, we present our
conclusions in Section VIII.
II.
CLASSICAL DE
Below, the original version of the method is outlined [41].
1. The population:
P_{x,g} = (x_{i;g}), i = 1, ..., m; g = 0, ..., g_max,
x_{i;g} = (x^j_{i;g}), j = 1, ..., D,   (1)
where D, m and gmax represent the dimensionality of x, the number of individuals and the
number of generations respectively. In [42] it is mentioned that a good choice for m is 10D.
However, to balance speed and reliability, values from 2D to 40D are suggested in [43].
2. Initialization of population:
x^j_{i;0} = rand_j(0, 1) · (b^j_U − b^j_L) + b^j_L.
Vectors b_U and b_L are the parameter limits and rand_j(0, 1) is a random number in [0, 1) generated for each parameter.
3. Mutation:
v_{i;g} = x_{r0;g} + F · (x_{r1;g} − x_{r2;g}).   (2)
The main difference between DE and other evolutionary algorithms like GA comes from this mutation operator: x_{r0;g} is called the base vector, and it is perturbed by the difference of two other vectors. The indices r0, r1, r2 ∈ {1, 2, ..., m} are pairwise distinct and different from i. F is a scale factor greater than zero. Even
though upper limits for F do not exist, values greater than 1 are rarely chosen in the literature
[40–42, 44].
4. Crossover:
A dual recombination of vectors is used to generate the trial vector:
u_{i;g} = (u^j_{i;g}),   u^j_{i;g} = v^j_{i;g} if (rand_j(0, 1) ≤ Cr or j = j_rand), and u^j_{i;g} = x^j_{i;g} otherwise.   (3)
The crossover probability, Cr ∈ [0, 1], is a user-defined value.
5. Selection:
The selection is made according to
x_{i;g+1} = u_{i;g} if f(u_{i;g}) ≤ f(x_{i;g}), and x_{i;g+1} = x_{i;g} otherwise.   (4)
The method just described is known as DE/rand/1/bin. There are variants of it. For example, when F is chosen to be a random number, the variant is called dither. In this work we use the method described above with the dither variant, where F ∈ [0, 1). Fig. 1 shows the flowchart for the DE algorithm.
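To make the recipe above concrete, what follows is a minimal, self-contained Fortran 90 sketch of DE/rand/1/bin with the dither variant, added here for illustration. It is not the code used in this work: the sphere objective and the values of D, m, gmax, Cr and the bounds are assumptions chosen only for the example.

Program de_sketch
! Illustrative DE/rand/1/bin with dither (F random in [0,1)); all parameter
! values below are assumptions for this sketch, not those used in the paper.
Implicit None
Integer, Parameter :: D = 5, m = 50, gmax = 2000
Double precision, Parameter :: Cr = 0.3d0
Double precision :: x(m,D), fx(m), u(D), F, fu, r, bL(D), bU(D)
Integer :: i, j, g, r0, r1, r2, jrand
bL = -10d0; bU = 10d0
Call Random_Seed()
Do i = 1, m                              ! initialization of the population
   Do j = 1, D
      Call Random_Number(r)
      x(i,j) = r*(bU(j) - bL(j)) + bL(j)
   Enddo
   fx(i) = fob(x(i,:))
Enddo
Do g = 1, gmax
   Do i = 1, m
      Do                                 ! pick r0, r1, r2 pairwise distinct
         Call Random_Number(r); r0 = 1 + Int(r*m)   ! and different from i
         Call Random_Number(r); r1 = 1 + Int(r*m)
         Call Random_Number(r); r2 = 1 + Int(r*m)
         If (r0/=r1 .and. r0/=r2 .and. r1/=r2 .and. &
             r0/=i .and. r1/=i .and. r2/=i) Exit
      Enddo
      Call Random_Number(F)              ! dither variant: F in [0,1)
      Call Random_Number(r); jrand = 1 + Int(r*D)
      Do j = 1, D                        ! mutation, Eq. (2), fused with the
         Call Random_Number(r)           ! binomial crossover, Eq. (3)
         If (r <= Cr .or. j == jrand) Then
            u(j) = x(r0,j) + F*(x(r1,j) - x(r2,j))
         Else
            u(j) = x(i,j)
         Endif
      Enddo
      fu = fob(u)
      If (fu <= fx(i)) Then              ! greedy selection, Eq. (4); for
         x(i,:) = u; fx(i) = fu          ! brevity the population is updated
      Endif                              ! in place
   Enddo
Enddo
Print *, 'best f =', Minval(fx)
Contains
   Double precision Function fob(y)      ! assumed objective: sphere function
   Double precision, Intent(In) :: y(:)
   fob = Sum(y**2)
   End Function fob
End Program de_sketch

The in-place population update and the fused mutation-crossover loop are shortcuts for brevity; they do not change the selection rule of Eq. (4).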
III.
MECHANISM SYNTHESIS PROBLEM: NOTATION AND CONVENTIONS
The simplicity of a 4-bar mechanism (it is easy to manufacture and highly reliable) makes it a very
important mechanism with a large number of industrial applications. Its use ranges from simple
devices such as windshield-wiping mechanisms and door-closing mechanisms to more complicated ones
such as rock crushers, sewing machines, round balers, and suspension systems of automobiles [30].
In this section the notation and conventions used throughout this work are established. The only
exception is in Section VII where we will deal with a 6-bar mechanism.
A four-bar linkage shown in Fig. 2 consists of four rigid links and four revolute joints. The set
of variables that describes the mechanism (the design variable vector) will be put into the vector X
whose components will be enclosed within braces. Usually, in the synthesis of a mechanism there are
two sets of points (or coordinates), desired and generated points, allocated in the vectors rd and rgen ,
respectively. A vector error E = rd − rgen is proposed and the objective function is defined as the
square of its Euclidean norm.
f ob = |E|².   (5)
If quantities are not dimensionally homogeneous, constants with appropriate units must be introduced
so that equations have compatible units. In this work, there are quantities with different units, and
some constants are chosen so that f ob is dimensionless. For example, in the problem of motion
generation, we have to fit angles and coordinates, so the quadratic error will be
E² = ∑_i [ c(x_{i;d} − x_{i;gen})² + c(y_{i;d} − y_{i;gen})² + (θ_{i;d} − θ_{i;gen})² ],   (6)
where xi;d[gen] , yi;d[gen] and θi;d[gen] are the coordinates and angles of the desired [generated] point i.
FIG. 2. Four-bar linkage notation.
The constant c has a numerical value equal to 1 and is introduced for consistency with units. The
objective function is given by
f ob = ∑_i [ f_c²(x_{i;d} − x_{i;gen})² + f_c²(y_{i;d} − y_{i;gen})² + (θ_{i;d} − θ_{i;gen})² ].   (7)
The fc constant is introduced for consistency with units but is not necessarily 1. Such a constant can
be defined by the user as a weight factor. In this work it is chosen as the inverse of the longest distance
between the coordinates. As a matter of illustration, for the points P = {(1, 1), (2, 3), (−5, −1)} we
construct the set Uxy = {1, 2, 3, −5, −1} (i.e., the union of the coordinates) and take fc = 1/dr with
dr = max(Uxy ) − min(Uxy ). In this case min(Uxy ) = −5, max(Uxy ) = 3 thus fc = 1/8. Notice that
this definition is motivated by the curvature concept. For example, in the case of a circle with radius
r, we have s/r = θ or ks = θ where k is the curvature, s the arc length subtended by the angle θ.
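As a sketch of this computation (ours; the function name and interface are assumptions), the weight factor can be evaluated in Fortran 90 as follows; for the illustrative points above it returns 1/8:

! Illustrative computation of fc = 1/dr, where dr is the spread of the set
! Uxy formed by all x and y coordinates of the desired points.
Double precision Function weight_fc(xs, ys)
Implicit None
Double precision, Intent(In) :: xs(:), ys(:)
Double precision :: hi, lo
hi = Max(Maxval(xs), Maxval(ys))      ! max(Uxy)
lo = Min(Minval(xs), Minval(ys))      ! min(Uxy)
weight_fc = 1d0/(hi - lo)
End Function weight_fc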
In general, f ob ≠ 0 and its minimization process is what generates values for the parameters of a
possible mechanism. In the analysis of mechanisms, two conditions are important. They are known as
the crank and Grashof conditions (CG):
min(r1, r2, r3, r4) = crank,   (8)
2 min(r1, r2, r3, r4) + 2 max(r1, r2, r3, r4) < r1 + r2 + r3 + r4.   (9)
In our case r2 is the crank; see Fig. 2. Whenever we refer to a transformation acting on a vector, |x⟩ is used instead of x. Usually such transformations are carried out by subroutines or functions in
Fortran 90 and by functions in C++. For linear transformations, the matrix representation can be
used. In this work, all the algorithms for the synthesis of mechanisms were implemented in Fortran
90. The compiler used was ifort, and the calculations were made on an Intel Core 2 Duo processor with a clock speed of 2.53 GHz, 4 GB of memory and a bus speed of 1.07 GHz.
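As a companion to conditions (8) and (9), the following Fortran 90 helper is our sketch (name and interface are assumptions); it tests whether a candidate set of links, with r2 as the crank as in Fig. 2, satisfies the CG conditions:

! Illustrative check of the crank-Grashof (CG) conditions (8)-(9).
Logical Function cg_ok(r1, r2, r3, r4)
Implicit None
Double precision, Intent(In) :: r1, r2, r3, r4
cg_ok = (r2 == Min(r1, r2, r3, r4)) .and. &        ! Eq. (8): r2 is the crank
        (2d0*Min(r1, r2, r3, r4) + 2d0*Max(r1, r2, r3, r4) &
         < r1 + r2 + r3 + r4)                      ! Eq. (9): Grashof
End Function cg_ok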
IV.
HYBRID TASK SYNTHESIS
In this section we analyze the problem presented by McGarva [45]. We address the problem from
the viewpoint of hybrid tasks as proposed by Smaili and Diab in [38]. The problem has three tasks:
function generation, motion generation and path generation. Table I (as presented in [38]) summarizes
the variables used in this study.
The design variable vector is defined as
X = {x0 , y0 , r1 , r2 , r3 , r4 , rcx , rcy , ψ1 , ψ2 , ψ3 , ψ4 , ψ5 , ψ6 , ψ7 , γ}.
(10)
Desired point, i    x_id    y_id    ψ_i + γ    θ_i + γ    φ_i + γ
Function points
 1    7.03    5.99     21      *     108
 2    6.95    5.45     36      *     110
 3    6.77    5.03     50      *     113
 4    6.4     4.6      65      *     117
 5    5.91    4.03     79      *     121
 6    5.43    3.56     93      *     126
 7    4.93    2.94    108      *     132
 8    4.67    2.6     122      *     138
 9    4.38    2.2     137      *     143
10    4.04    1.67    151      *     147
Motion points
11    3.76    1.22      N    −13      *
12    3.76    1.97      N     −7      *
13    3.76    2.78      N     −2      *
14    3.76    3.56      N      2      *
15    3.76    4.34      N      7      *
16    3.76    4.91      N     11      *
17    3.76    5.47      N     14      *
Path points
18    3.8     5.98    266      *      *
19    4.07    6.4     281      *      *
20    4.53    6.75    295      *      *
21    5.07    6.85    309      *      *
22    5.05    6.84    324      *      *
23    5.89    6.83    338      *      *
24    6.41    6.8     353      *      *
25    6.92    6.58    367      *      *
TABLE I. Hybrid-tasks problem; N: Generated crank angle values, *: Non-prescribed values.
In addition to the CG restrictions, we have the following constraints for motion generation:
ψ_min < ψ^j < ψ_max , j = {1, 2, ..., 7};   ψ^k < ψ^{k+1} , k = {1, 2, ..., 6}.   (11)
In this case the objective function consists of three parts:
f ob = f ob_func + f̃ob_mot + f̃ob_path ,   (12)
where, in the usual approach of the least squares method, the partial objective functions are defined as:
f ob_func = ∑_i (θ_{i;d} − θ_{i;gen})²_func ,   (13)
f̃ob_mot = ∑_i [ f_c²(x_{i;d} − x_{i;gen})²_mot + f_c²(y_{i;d} − y_{i;gen})²_mot + (θ_{i;d} − θ_{i;gen})²_mot ],   (14)
f̃ob_path = ∑_i [ f_c²(x_{i;d} − x_{i;gen})²_path + f_c²(y_{i;d} − y_{i;gen})²_path ].   (15)
The evaluation of the weight factor fc is explained in Sec. III. In this case we have max(Uxy ) = 7.03,
min(Uxy ) = 1.22 thus fc2 = 0.02 with units of length−2 .
The following values were tested for Cr: 0.05, 0.1, ..., 0.9. It turns out that 0.3 gives the best results.
The number of individuals and generations were m = 250, gmax = 15 000, respectively. The evaluation
of f ob resulted in a value of 6.99 × 10⁻³ with the design variables shown in Table II.
x0        y0       r1       r2       r3       r4       rcx      rcy
-8.0339   1.07673  13.2425  1.96639  7.71759  7.57298  13.4593  3.13037
ψ1        ψ2       ψ3       ψ4       ψ5       ψ6       ψ7       γ
3.5639    3.83348  4.05641  4.22857  4.48498  4.71726  4.92507  5.83047
TABLE II. Parameter values of an optimal mechanism. Hybrid-tasks synthesis.
For the searching space we have used the limits:
xmin = {−15, −15, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 3, 3, 3, 0}
xmax = {15, 15, 15, 15, 15, 15, 15, 15, 5.03, 5.03, 5.03, 5.03, 5.03, 5.03, 5.03, 2π}.
A.
Constraints Management
In this case the CG conditions do not play any active role. We can just verify that they are met
after the minimum of f ob is obtained. Concerning the requirement of the constraints of Eq. (11),
previous methods are based on the discretization of the search space for ψ angles [38],
ψ^j ∈ [ ψ^j_min , ψ^j_max ].   (16)
The best fitting angle is selected from this range. In our case this discretization is not applied, and
individuals xi,g are chosen so as to comply with the restriction of Eq. (11). To this end, a random
vector of angles within desired limits is generated, and its coordinates written in ascending order. This
idea has been used in [30, 39]. Thus the method here is not exactly a classic DE because the evolution
of individuals is manipulated. However, it is clear that the results will be the same. The only thing
it does is to accelerate the evolutionary process. Symbolically, if |ψ⟩ represents a vector of random numbers and ŝort represents a transformation that puts them in ascending order, then
ψ^j = [ ŝort |ψ⟩ ]^j .   (17)
There are several ways to implement Eq. (17). In particular, it can be done in the crossover part.
u_{i;g} = (u^j_{i;g}),   u^j_{i;g} = ṽ^j_{i;g} if (rand_j(0, 1) ≤ Cr or j = j_rand), and u^j_{i;g} = x̃^j_{i;g} otherwise,   (18)
where
r̃^j = [ ŝort |r⟩ ]^j .   (19)
The transformation ŝort will act only on those components that we choose to order.
FIG. 3. Optimal mechanism and the corresponding coupler curve. Hybrid-tasks synthesis.
For the ordering of the ψ angles, we have used the heap sort method [46–48] as it is efficient enough
and easy to implement.
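The following Fortran 90 sketch is ours; the subroutine name and the assumption that the ψ angles occupy the contiguous positions ja..jb of the trial vector are illustrative. It shows the essential step realizing Eqs. (18)-(19): after the standard crossover produces the trial vector, the angle block is put in ascending order. A simple exchange sort is used here for brevity, whereas this work uses heap sort.

! Illustrative ordering step of Eqs. (18)-(19): sort in ascending order the
! components u(ja:jb) of a trial vector that hold the psi angles.
Subroutine sort_angles(u, ja, jb)
Implicit None
Double precision, Intent(InOut) :: u(:)
Integer, Intent(In) :: ja, jb
Integer :: p, q
Double precision :: tmp
Do p = ja, jb - 1              ! exchange sort: after each pass, u(p) holds
   Do q = p + 1, jb            ! the minimum of u(p:jb)
      If (u(q) < u(p)) Then
         tmp = u(p); u(p) = u(q); u(q) = tmp
      Endif
   Enddo
Enddo
End Subroutine sort_angles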
Penalizing the angles ψ^j is not very efficient because the probability of having a set of size n randomly ordered is low if n is large. For example, the probability that seven random numbers drawn from [0, 1] (or any other continuous interval) come out in ascending order is 1/7!, which is about 2 × 10⁻⁴. We thus
end up with a method without individuals to evolve unless the number of individuals in the initial
population were extremely large, which would lead to a grossly inefficient method. Proceeding as in [38] is an attractive possibility and the results so obtained are very good. However, discretizing the searching space could prevent us from locating the minimum. Fig. 3 shows the mechanism obtained. In Appendix B a Mathematica program that animates the mechanism is presented step by step.
V.
A CLASSICAL COMPARISON: PATH GENERATION FOR 18 TARGET POINTS
AND 10 DESIGN VARIABLES
Recently, a hybrid method (GA-DE) that can synthesize a four-bar mechanism was proposed in [39], where the problem of prescribed timing path generation for 18 points (previously introduced by [26] and [27]) is addressed. We will optimize this problem by using a DE algorithm. The objective function value we obtain is lower than the values reported in previous references. It is worth mentioning that the link lengths of the generated mechanism are of the same order of magnitude as the dimensions of the generated path.
FIG. 4. Optimal mechanism and the corresponding coupler curve. Prescribed timing path generation.
A.
The problem
The target points are:
xd = {0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0, 0, 0.03, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.6}   (20)
yd = {1.1, 1.1, 1.1, 1, 0.9, 0.75, 0.6, 0.5, 0.4, 0.3, 0.25, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1}   (21)
The design variable vector is
X = {x0 , y0 , r1 , r2 , r3 , r4 , rcx , rcy , γ, ψ 0 }
(22)
and the prescribed timing is defined by
ψ_k = ψ_0 + (π/9)(k − 1),  k = {1, 2, ..., 18}.   (23)
Figure 2 shows the design variables.
This problem has been discussed in Refs. [26, 27, 39]. They do not show the explicit form of f ob,
and there is a controversy concerning the numerical values of the objective function. We show in Table
III the values that according to [39] the other two references should have obtained.
Wen-Yi Lin [39]: f ob = 1.08613 × 10⁻²    Kunjur and Krishnamurty [26]: f ob = 1.09034 × 10⁻²    Cabrera et al. [27]: f ob = 3.48391 × 10⁻²
TABLE III. Values for the objective function reported by [39].
Here we get the following values of f ob for the design variable vectors that they report: [39], f ob = 1.0306 × 10⁻²; [26], f ob = 1.0214 × 10⁻²; [27], f ob = 3.3748 × 10⁻². They are slightly different from the values of [39], perhaps because of rounding errors. With the purpose of avoiding any misunderstanding, in Appendix A we show a Fortran 90 program that evaluates f ob.
Table IV shows the values for the design variable vector for which the objective function is 9.088 × 10⁻³. The values 0.1, 0.2, . . . , 0.9 were tested for Cr. It turns out that 0.3 gives the best results.
Figure 4 shows the optimum mechanism and its path.
x0       y0       r1       r2       r3       r4       rcx      rcy      γ        ψ0
0.27892  0.11673  1.08913  0.42259  0.96444  0.58781  0.39137  0.42950  0.32195  0.86323
TABLE IV. Parameter values of an optimal mechanism with f ob = 9.088 × 10⁻³.
In order to obtain the last result for f ob, we proceed in two steps. First, we choose parameter values
inside the interval [−1.5, 1.5] for x0 and y0 . For the remaining parameters we choose values in [0, 1.5],
and we evaluate f ob over and over until we find a design variable vector for which f ob ≤ 5 × 10⁻².
Second, from the obtained parameters, the searching space is reduced to
vxmin = {0.2, 0.1, 0.8, 0.3, 0.7, 0.4, 0.2, 0.3, 0.1, 0.7}   (24)
vxmax = {0.3, 0.3, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1}.   (25)
The value of f ob = 9.088 × 10⁻³ was obtained for 200 individuals and 11 817 generations. We could reach smaller values of f ob if the number of generations and/or individuals were increased, but the improvement is not considerable. For example, for 200 individuals and 30 000 generations we obtain f ob = 9.06 × 10⁻³. Moreover, by making a third refinement of the searching space, we obtain f ob = 9.03 × 10⁻³ for the design variables shown in Table V.
x0       y0       r1       r2       r3       r4       rcx      rcy      γ        ψ0
0.26439  0.16956  1.04028  0.42446  0.89397  0.60308  0.36129  0.38864  0.26873  0.90493
TABLE V. Parameter values of an optimal mechanism with f ob = 9.03 × 10⁻³.
B.
On steps 1 and 2
In this work we subdivide the optimization task into two steps. In the first step we use an elitist
population in the sense of choosing only those individuals that satisfy the CG condition. To this end
we construct a transformation that takes an individual that does not satisfy the CG condition and
turns it into one that does. Then, in the second stage (with the result for the possible mechanism
obtained in this first stage) we refine the searching space, remove the CG condition and re-run the
optimization program. The process terminates when some criteria have been met and the individual
satisfies the CG condition.
C.
On the construction of the elitist population
In order to construct individuals satisfying the CG conditions we could proceed in a random way,
but this would be inefficient. In this work we proceed as follows:
Assuming the links of the four-bar mechanism belong to the interval (0,1), four random numbers
are generated (the links) and they are sorted in ascending order. At this point, proceeding randomly
would not be a bad choice since the probability of satisfying the CG condition for the sorted list is
0.5. However, a better choice is to construct a transformation T̂ that makes the ascending-order list |x⟩ fulfill the CG condition. There are many possible forms for the transformation T̂. We define T̂ = F̂R̂, where R̂ is the transformation that reverses the components of a vector and F̂ is a reflection plus a translation. Symbolically,
R̂|x₁, x₂, x₃, x₄⟩ = |x₄, x₃, x₂, x₁⟩,   (26)
F̂|x⟩ = −|x⟩ + |1⟩,   (27)
|1⟩ = |1, 1, 1, 1⟩.   (28)
If the upper limit for the links is L, we replace |1⟩ by |L⟩ = |L, L, L, L⟩.
Notice that R̂ is a linear transformation, whereas F̂ is not. Once the vector that satisfies the Grashof
condition has been constructed, the crank is taken as the smallest of its components; thus the CG conditions will be satisfied.
For example, suppose that we have the four numbers xr = {0.38, 0.98, 0.25, 0.19}, which do not satisfy the CG condition. After sorting them we have |x⟩ = |0.19, 0.25, 0.38, 0.98⟩ and
R̂|x⟩ = |0.98, 0.38, 0.25, 0.19⟩,
F̂R̂|x⟩ = |−0.98, −0.38, −0.25, −0.19⟩ + |1, 1, 1, 1⟩,
F̂R̂|x⟩ = |0.02, 0.62, 0.75, 0.81⟩.
By choosing the crank as 0.02, we can see that xg = {0.02, 0.62, 0.75, 0.81} satisfies the CG conditions
since min(xg) = 0.02, max(xg) = 0.81 and 0.02 + 0.81 < 0.62 + 0.75.
In general, suppose we have four positive numbers less than or equal to 1 that are sorted in ascending order but do not satisfy the CG conditions. Let |x⟩ = |x₁, x₂, x₃, x₄⟩ be the vector containing such numbers. We have x₁ + x₄ > x₂ + x₃ since the numbers are sorted and do not comply with Eq. (9). Clearly −x₁ − x₄ < −x₂ − x₃ and (1 − x₁) + (1 − x₄) < (1 − x₂) + (1 − x₃). For these four constructed numbers, the minimum is (1 − x₄) and the maximum is (1 − x₁), so the CG conditions are satisfied if we choose the crank as (1 − x₄).
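A Fortran 90 sketch of this construction follows; it is our illustration (the subroutine name and the uniform draw are assumptions, and the measure-zero equality case x₁ + x₄ = x₂ + x₃ is ignored):

! Illustrative elitist construction: draw four links in (0, L), sort them in
! ascending order and, if they violate the Grashof inequality (9), apply
! T = F R (reverse the components, then reflect r -> L - r). The smallest
! component is then taken as the crank r2.
Subroutine make_cg_links(links, L)
Implicit None
Double precision, Intent(Out) :: links(4)
Double precision, Intent(In) :: L
Double precision :: tmp
Integer :: p, q
Call Random_Number(links)
links = links*L
Do p = 1, 3                          ! sort ascending
   Do q = p + 1, 4
      If (links(q) < links(p)) Then
         tmp = links(p); links(p) = links(q); links(q) = tmp
      Endif
   Enddo
Enddo
If (links(1) + links(4) >= links(2) + links(3)) Then
   links = L - links(4:1:-1)         ! T = F R
Endif
! Now links(1) + links(4) < links(2) + links(3) (up to the measure-zero
! equality case), so taking the crank r2 = links(1) satisfies the CG
! conditions.
End Subroutine make_cg_links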
VI.
PATH GENERATION WITHOUT PRESCRIBED TIMING FOR 18 TARGET POINTS
AND 27 DESIGN VARIABLES
It is interesting to synthesize the above mechanism without the prescribed timing Eq. (23). Finding
a minimum for the objective function is now more difficult. We have 27 design variables and the order
defect problem appears hard to solve.
Let
X ={x0 , y0 , r1 , r2 , r3 , r4 , rcx , rcy , γ, ψ 0 , ψ 1 , ψ 2 , ψ 3 , ψ 4 , ψ 5 , ψ 6 , ψ 7 , ψ 8 , ψ 9 , ψ 10 , ψ 11 , ψ 12 , ψ 13 , ψ 14 ,
ψ 15 , ψ 16 , ψ 17 }
(29)
be the design variable vector. Besides the CG conditions we also have the requirement
ψ^k < ψ^{k+1} , k = {0, 1, ..., 16},   (30)
which prevents the order defect.
The use of penalization for the restriction Eq. (30) is not effective. Practically all the individuals
would be penalized as the probability of finding one that would not is 1/18! – a very small probability.
If we discretize the searching space for angles then there is no guarantee that the minimum will lie in
the generated intervals. However, if we adopt the approach stated in subsection IV A, the problem is
easily solved and in a consistent manner.
The running time of the algorithm was 110 seconds, the longest among all the programs run in this study. The running times for the other cases were between 30 and 80 seconds. The value of the objective function was f ob = 3.69 × 10⁻³ for the design variable vector whose components are shown in Table VI.
x0       y0        r1       r2       r3       r4       rcx      rcy      γ
0.22922  -0.63525  2.27468  0.44667  2.18422  0.72409  1.02937  0.82440  0.58183
ψ0       ψ1        ψ2       ψ3       ψ4       ψ5       ψ6       ψ7       ψ8
0.78140  1.09985   1.34998  1.68045  2.00009  2.35036  2.70304  2.95102  3.22683
ψ9       ψ10       ψ11      ψ12      ψ13      ψ14      ψ15      ψ16      ψ17
3.58801  4.11376   4.35829  4.70801  5.07939  5.35914  5.76271  6.21586  6.49216
TABLE VI. Parameter values of an optimal mechanism. Path generation without prescribed timing.
The searching interval for the angles ψ was
0 < ψ^j < 2π , j = {0, 1, ..., 17}.   (31)
It is well known that DE can yield individuals that do not belong to the searching interval. This is the
case for the last angle. However, since there is no order defect, the values of Table VI are an acceptable solution for the problem.
Once again the result is obtained in two steps. First, we choose an elitist initial population that
satisfies the CG conditions. Then, the CG condition is removed and the searching space is restricted
according to the solution obtained in the first step.
It is worthwhile to mention that we tried to optimize the f ob function using the DE method without
the transformations of Sections IV A and V C, but the method was not capable of finding the minimum.
VII.
ACKERMAN STEERING LINKAGE SYNTHESIS
In this section DE is used for the synthesis of an Ackerman steering. For the deduction of the
equations used and a detailed treatment of the problem see [49].
It is known that when a vehicle is moving very slowly there is a kinematic condition between the
inner and outer wheel that allows it to turn slip-free. The condition is called the Ackerman condition
and is written as follows:
cot δo − cot δi = w/l ,   (32)
where w and l represent the width and length of the vehicle, δo and δi are the rotation angles of the
wheels (Figure 5).
In general it is desirable for a mechanism to satisfy the Ackerman condition. Unfortunately, there
is no four-bar mechanism that can fulfill the Ackerman condition perfectly. However, it is possible
to synthesize a six-bar mechanism that works close to the Ackerman condition and is exact at a few points.
A six-bar Watt’s mechanism can be used to design the vehicle steering. The dimensions in this case are w = 1 m, l = 1.8 m, and the minimum radius is R = 2.5 m. The position of the center of mass with respect to the rear axle is a = 0.45 m.
We have
R = √(a² + l² cot² δM) ,   (33)
FIG. 5. Vehicle diagram.
FIG. 6. Six-bar Watt’s mechanism.
with
cot δM = (cot δo + cot δi)/2 ,   (34)
therefore δM = 37.2731°; R1 = l cot δM and consequently R1 = 2.36514 m.
From trigonometry we have
δi = arctan( l/(R1 − w/2) ) ;   δo = arctan( l/(R1 + w/2) ) ,   (35)
so we obtain that δi and δo must lie in the ranges −32.1387° ≤ δi ≤ 43.9818° and −43.9818° ≤ δo ≤ 32.1387° in order to achieve the desired turning radius.
Unlike previous examples where the number of points is finite, in this case it is possible to use an
arbitrary number of desired angles. Therefore, it is convenient to define the objective function as
f ob = |E|²/n ,   (36)
where E is the vector containing the n differences between δ2 and δack . The angle δack is the steering
angle δo from Eq. (32).
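For reference, the ideal angle δack as a function of the inner-wheel angle follows directly from Eq. (32). The Fortran 90 sketch below is our illustration (the function name and the guard at δi = 0 are assumptions):

! Illustrative evaluation of the Ackerman angle of Eq. (32): given the
! inner-wheel angle delta_i (radians) and the vehicle dimensions w and l,
! return the outer-wheel angle satisfying cot(delta_o) - cot(delta_i) = w/l.
Double precision Function delta_ack(delta_i, w, l)
Implicit None
Double precision, Intent(In) :: delta_i, w, l
If (delta_i == 0d0) Then
   delta_ack = 0d0                   ! straight ahead: both wheels at zero
Else
   delta_ack = Atan(1d0/(w/l + 1d0/Tan(delta_i)))
Endif
End Function delta_ack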
h          x           ξ
0.298192   -0.472091   0.219837
TABLE VII. Parameter values for the multi-link Ackerman steering.
FIG. 7. Optimal mechanism. Ackerman steering linkage synthesis.
The design variable vector is
X = {h, x, ξ}.   (37)
Fig. 6 shows the mechanism.
With a search space of 0.1 ≤ h ≤ 0.45, −0.5 ≤ x ≤ 0.2, 13° ≤ ξ ≤ 30°, and a working range of (−35°, 45°) with steps of 0.1° for δ1, we obtain the values shown in Table VII for the design variables. The objective function is found to be f ob = 7.6 × 10⁻⁵.
The obtained mechanism is illustrated in Fig. 7. Table VIII shows the values of the desired angles
and generated angles.
VIII.
CONCLUSIONS
Dimensional synthesis of mechanisms is a subject of great relevance in the field of mechanical design.
Among the great variety of optimization methods available, those that employ evolutionary algorithms
have seen an increase in use due to the excellent results they yield.
In this work we have presented a methodology that uses differential evolution to solve the dimensional
synthesis problem of four mechanisms. With the use of a heuristic deduction, we have determined a
weight factor that allows us to solve the hybrid-tasks problem in an efficient manner. Two transformations were implemented in the differential evolution algorithm. The first one deals with the order
defect problem and was coded in the crossover part of the differential evolution algorithm. With this
transformation, the penalization approach and the use of large populations are avoided. In addition, the risk of missing the minimum of the objective function disappears, since the need to discretize the search space is also avoided. The second transformation constructs elitist populations in the
sense that their individuals satisfy the Grashof and crank conditions. Therefore, a random generation
and/or a penalization procedure are avoided, which makes this method more efficient.
Something that deserves mention is the remarkable speed of convergence of the differential evolution method: even for as many as 80 000 generations, the total CPU time was less than two minutes on a single processor.
δ1      δack        δ2
−30°    −40.471°    −40.254°
−20°    −24.567°    −23.700°
−10°    −11.069°    −10.810°
0°      0°          0°
10°     9.117°      9.303°
20°     16.822°     17.332°
30°     23.571°     24.174°
40°     29.720°     29.869°
TABLE VIII. Desired (δack ) and generated (δ2 ) angles.
ACKNOWLEDGMENTS
We want to thank F. Larios for proofreading the manuscript. We also thank PROMEP and Conacyt
for support.
Appendix A: Fortran 90 objective function (f ob). Path generation for 18 target points and 10
design variables
What follows is the code for the f ob function in Fortran 90.
Double precision function fob(x0,y0,r1,r2,r3,r4,rcx,rcy,gamma,psi0)
Implicit None
Integer, Parameter:: Np=18
Double precision, Parameter :: Pi=3.14159265358979d0
Double precision :: x0,y0,r1,r2,r3,r4,rcx,rcy,gamma,psi0,L1,L2,L3,xd(Np), &
yd(Np),psi(Np),KA(Np),KB(Np),KC(Np),theta(Np),px(Np),py(Np),Ex(Np),Ey(Np),&
Ex2,Ey2
Integer :: k
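! Descriptive comments (ours, not the original listing): the target path
! points (Np = 18) follow in xd, yd; psi holds the crank angles, stepped
! by Pi/9 from psi0. KA, KB and KC are the half-angle (Freudenstein-type)
! coefficients used to solve for the coupler angle theta, and (px, py)
! is the resulting coupler-point path.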
xd=(/0.5d0, 0.4d0, 0.3d0, 0.2d0, 0.1d0, 0.05d0, 0.02d0, 0d0, 0d0,0.03d0,&
0.1d0, 0.15d0, 0.2d0, 0.3d0, 0.4d0, 0.5d0, 0.6d0, 0.6d0/)
yd = (/1.1d0, 1.1d0, 1.1d0, 1d0, 0.9d0, 0.75d0, 0.6d0,0.5d0,0.4d0,0.3d0,&
0.25d0, 0.2d0, 0.3d0, 0.4d0, 0.5d0, 0.7d0, 0.9d0, 1d0/)
L3=(r4**2 - r1**2 - r2**2 - r3**2)/(2d0*r2*r3)
L2=r1/r3
L1=r1/r2
Do, k=1,Np
psi(k) = psi0 + (k-1)*Pi/9d0
Enddo
KA = Dcos(psi) - L1 + L2*Dcos(psi) + L3
KB = -2d0*Dsin(psi)
KC = L1 + (L2 - 1)*Dcos(psi) + L3
theta = 2d0*Datan2(-KB - Dsqrt(KB**2-4d0*KA*KC),2d0*KA)
px = x0 + Dcos(gamma)*(r2*Dcos(psi) + rcx*Dcos(theta) - rcy*Dsin(theta)) - &
Dsin(gamma)*(r2*Dsin(psi) + rcx*Dsin(theta) + rcy*Dcos(theta))
py = y0 + Dsin(gamma)*(r2*Dcos(psi) + rcx*Dcos(theta) - rcy*Dsin(theta)) + &
Dcos(gamma)*(r2*Dsin(psi) + rcx*Dsin(theta) + rcy*Dcos(theta))
Ex = xd-px
Ey = yd-py
Ex2 = Dot_Product(Ex,Ex)
Ey2 = Dot_Product(Ey,Ey)
fob = Ex2 + Ey2
Return
End
Appendix B: Mathematica® hybrid task animation
nparam = {-8.0339,1.07673,13.2425,1.96639,7.71759,7.57298,13.4593,3.13037,
3.5639,3.83348,4.05641,4.22857,4.48498,4.71726,4.92507,5.83047};
vparam = {x0,y0,r1,r2,r3,r4,rcx,rcy,psi1,psi2,psi3,psi4,psi5,psi6,psi7,gamma};
supersolanima = Thread[Rule[vparam, nparam]]
r0 = {x0, y0} /. supersolanima;
xs = {7.03,6.95,6.77,6.4,5.91,5.43,4.93,4.67,4.38,4.04,3.76,3.76,3.76,3.76,
3.76,3.76,3.76,3.8,4.07,4.53,5.07,5.05,5.89,6.41,6.92};
ys = {5.99,5.45,5.03,4.6,4.03,3.56,2.94,2.6,2.2,1.657,1.22,1.97,2.78,3.56,
4.34,4.91,5.47,5.98,6.4,6.75,6.85,6.84,6.83,6.8,6.58};
DatT = Thread[{xs, ys}];
Dat = Take[DatT, {11, 25}];
L3 = (r4^2 - r1^2 - r2^2 - r3^2)/(2 r2 r3);
L2 = r1/r3;
L1 = r1/r2;
KA = Cos[psi] - L1 + L2 Cos[psi] + L3;
KB = -2 Sin[psi];
KC = L1 + (L2 - 1) Cos[psi] + L3;
theta[psi_] = 2 ArcTan[(-KB - Sqrt[KB^2 - 4 KA KC])/
(2 KA)] /.supersolanima;
Px[psi_] = (x0 + Cos[gamma] (r2 Cos[psi] + rcx Cos[theta[psi]] -
rcy Sin[theta[psi]]) - Sin[gamma] (r2 Sin[psi] + rcx Sin[theta[psi]] +
rcy Cos[theta[psi]])) /.supersolanima;
Py[psi_] = (y0 + Sin[gamma] (r2 Cos[psi] + rcx Cos[theta[psi]] -
rcy Sin[theta[psi]]) + Cos[gamma] (r2 Sin[psi] + rcx Sin[theta[psi]] +
rcy Cos[theta[psi]])) /.supersolanima;
PrPl[psi_] = Thread[{Px[psi], Py[psi]}];
B[psi_] = r0 + {r2 Cos[psi + gamma], r2 Sin[psi + gamma]} /.supersolanima;
Cc[psi_] = B[psi] + {r3 Cos[theta[psi] + gamma], r3 Sin[theta[psi] +
gamma]} /.supersolanima;
CoorD = (r0 + r1 {Cos[gamma], Sin[gamma]}) /. supersolanima;
gDat = ListPlot[Dat];
linkb[psi_] := Graphics[{Thick, Line[{B[psi], r0}]}];
linkc[psi_] := Graphics[{Thick, Line[{Cc[psi], B[psi]}]}];
linkd[psi_] := Graphics[{Thick, Line[{Cc[psi], CoorD}]}];
linke[psi_] := Graphics[{Thick, Line[{PrPl[psi], B[psi]}]}];
linkf[psi_] := Graphics[{Thick, Line[{PrPl[psi], Cc[psi]}]}];
linka = Graphics[{Thickness[.01], EdgeForm[Thick],RGBColor[0.75, 0.75, 0.75],
Polygon[{r0, {r0[[1]], CoorD[[2]]}, CoorD}]},PlotRange -> {{-10, 10},
{-4.8, 8}}];
gr = ListPlot[{Table[PrPl[psi], {psi, 0, 2 Pi, Pi/50}], Dat},Joined ->
{True, False}, PlotStyle ->{{PointSize[Medium],AbsoluteThickness[1.1]}},
PlotRange ->{{-10, 10}, {-5.2, 7.5}}, Frame -> True, Axes -> False,
FrameLabel -> {"X", "Y"}, AspectRatio -> Automatic];
Animate[Show[{linka, gr, linkb[psi], linkc[psi], linkd[psi], linke[psi],
linkf[psi], gDat}, Axes -> True, AspectRatio -> Automatic], {psi, 0, 2 Pi}]
Graphical-model Based Multiple Testing under Dependence, with
Applications to Genome-wide Association Studies
Jie Liu
Computer Sciences, UW-Madison
Chunming Zhang
Statistics, UW-Madison
Catherine McCarty
Essentia Institute of Rural Health
Peggy Peissig
Marshfield Clinic Research Foundation
Elizabeth Burnside
Radiology, UW-Madison
David Page
BMI & CS, UW-Madison
Abstract
Large-scale multiple testing tasks often exhibit dependence, and leveraging the dependence between individual tests remains a
challenging and important problem in statistics. With recent advances in graphical models, it is feasible to use them to perform multiple testing under dependence. We propose
a multiple testing procedure which is based
on a Markov-random-field-coupled mixture
model. The ground truth of hypotheses is
represented by a latent binary Markov random field, and the observed test statistics appear as the coupled mixture variables. The
parameters in our model can be automatically learned by a novel EM algorithm. We
use an MCMC algorithm to infer the posterior probability that each hypothesis is null
(termed local index of significance), and the
false discovery rate can be controlled accordingly. Simulations show that the numerical performance of multiple testing can be
improved substantially by using our procedure. We apply the procedure to a real-world
genome-wide association study on breast cancer, and we identify several SNPs with strong
association evidence.
1 Introduction
Observations from large-scale multiple testing problems often exhibit dependence. For instance, in
genome-wide association studies, researchers collect
hundreds of thousands of highly correlated genetic
markers (single-nucleotide polymorphisms, or SNPs)
with the purpose of identifying the subset of markers associated with a heritable disease or trait. In
functional magnetic resonance imaging studies of the
brain, thousands of spatially correlated voxels are col-
lected while subjects are performing certain tasks,
with the purpose of detecting the relevant voxels. The
most popular family of large-scale multiple testing
procedures is the false discovery rate analysis, such
as the p-value thresholding procedures (Benjamini &
Hochberg, 1995, 2000; Genovese & Wasserman, 2004),
the local false discovery rate procedure (Efron et al.,
2001), and the positive false discovery rate procedure
(Storey, 2002, 2003). However, all these classical multiple testing procedures ignore the correlation structure among the individual factors, and the question
is whether we can reduce the false non-discovery rate
by leveraging the dependence, while still controlling the
false discovery rate in multiple testing.
Graphical models provide an elegant way of representing dependence. With recent advances in graphical
models, especially more efficient algorithms for inference and parameter learning, it is feasible to use these
models to leverage the dependence between individual tests in multiple testing problems. One influential paper (Sun & Cai, 2009) in the statistics community uses a hidden Markov model to represent the
dependence structure, and has shown its optimality
under certain conditions and its strong empirical performance. It is the first graphical model (and the only
one so far) used in multiple testing problems. However, their procedure can only deal with a sequential
dependence structure, and the dependence parameters
are homogeneous. In this paper, we propose a multiple testing procedure based on a Markov-random-field-coupled mixture model which allows arbitrary dependence structures and heterogeneous dependence parameters. This extension requires more sophisticated algorithms for parameter learning and inference. For
parameter learning, we design an EM algorithm with
MCMC in the E-step and persistent contrastive divergence algorithm (Tieleman, 2008) in the M-step.
We use the MCMC algorithm to infer the posterior
probability that each hypothesis is null (termed local
index of significance or LIS). Finally, the false discovery rate can be controlled by thresholding the LIS.
Section 2 introduces related work and our procedure.
Sections 3 and 4 evaluate our procedure on a variety of simulations, and the empirical results show that
the numerical performance can be improved substantially by using our procedure. In Section 5, we apply
the procedure to a real-world genome-wide association
study (GWAS) on breast cancer, and we identify several SNPs with strong association evidence. We finally
conclude in Section 6.
2 Method

2.1 Terminology and Previous Work
Table 1: Classification of tested hypotheses

              Not rejected   Rejected   Total
Null              N00          N10       m0
Non-null          N01          N11       m1
Total             S            R         m
Suppose that we carry out m tests whose results can
be categorized as in Table 1. False discovery rate
(FDR), defined as E(N10 /R|R > 0)P (R > 0), depicts the expected proportion of incorrectly rejected
null hypotheses (Benjamini & Hochberg, 1995). False
non-discovery rate (FNR), defined as E(N01 /S|S >
0)P (S > 0), depicts the expected proportion of false
non-rejections in those tests whose null hypotheses are
not rejected (Genovese & Wasserman, 2002). An FDR
procedure is valid if it controls FDR at a nominal level,
and optimal if it has the smallest FNR among all the
valid FDR procedures (Sun & Cai, 2009). The effects
of correlation on multiple testing have been discussed,
under different assumptions, with a focus on the validity issue (Benjamini & Yekutieli, 2001; Finner &
Roters, 2002; Owen, 2005; Sarkar, 2006; Efron, 2007;
Farcomeni, 2007; Romano et al., 2008; Wu, 2008; Blanchard & Roquain, 2009). The efficiency issue has also
been investigated (Yekutieli & Benjamini, 1999; Genovese et al., 2006; Benjamini & Heller, 2007; Zhang
et al., 2011), indicating FNR could be decreased by
considering dependence in multiple testing. Several
approaches have been proposed, such as dependence
kernels (Leek & Storey, 2008), factor models (Friguet
et al., 2009) and principal factor approximation (Fan
et al., 2012). Sun & Cai (2009) explicitly use a hidden
Markov model (HMM) to represent the dependence
structure and analyze the optimality under the compound decision framework (Sun & Cai, 2007). However, their procedure can only deal with sequential dependence, and it uses only a single dependence parameter throughout. In this paper, we replace HMM with
a Markov-random-field-coupled mixture model, which
allows richer and more flexible dependence structures.
Figure 1: The MRF-coupled mixture model for three dependent hypotheses Hi, Hj and Hk with observed test statistics (xi, xj and xk) and latent ground truth (θi, θj and θk). The dependence is captured by potential functions parameterized by φij, φjk and φik, and coupled mixtures are parameterized by ψ.

The Markov-random-field-coupled mixture models are related to the hidden Markov random field models used in many image segmentation problems (Zhang et al., 2001; Celeux et al., 2003; Chatzis & Varvarigou, 2008).
2.2 Our Multiple Testing Procedure
Let x = (x1 , ..., xm ) be a vector of test statistics from
a set of hypotheses (H1 , ..., Hm ). The ground truth
of these hypotheses is denoted by a latent Bernoulli
vector θ = (θ1 , ..., θm ) ∈ {0, 1}m , with θi = 0 denoting
that the hypothesis Hi is null and θi = 1 denoting that
the hypothesis Hi is non-null. The dependence among
these hypotheses is represented as a binary Markov
random field (MRF) on θ. The structure of the MRF
can be described by an undirected graph G(V, E) with
the node set V and the edge set E. The dependence
between Hi and Hj is denoted by an edge connecting node i and node j in E, and the strength of dependence is parameterized by the potential function on the edge. Suppose that the probability density function of the test statistic xi given θi = 0 is f0, and the density of xi given θi = 1 is f1. Then, x is an MRF-coupled mixture. The mixture model is parameterized
by a parameter set ϑ = (φ, ψ), where φ parameterizes
the binary MRF and ψ parameterizes f0 and f1 . For
example, if f0 is standard normal N (0, 1) and f1 is
noncentered normal N (µ, 1), then ψ only contains parameter µ. Figure 1 shows the MRF-coupled mixture
model for three dependent hypotheses Hi , Hj and Hk .
In our MRF-coupled mixture model, x is observable,
and θ is hidden. With the parameter set ϑ = (φ, ψ),
the joint probability density over x and θ is
P(x, θ | φ, ψ) = P(θ; φ) ∏_{i=1}^{m} P(x_i | θ_i; ψ).  (1)
Define the marginal probability that Hi is null given
all the observed statistics x under the parameters in
ϑ, Pϑ (θi = 0|x), to be the local index of significance
(LIS) for Hi (Sun & Cai, 2009). If we can accurately
calculate the posterior marginal probabilities of θ (or
LIS), then we can use a step-up procedure to control
FDR at the nominal level α as follows (Sun & Cai,
2009). We first sort LIS from the smallest value to the
largest value. Suppose LIS(1) , LIS(2) , ..., and LIS(m)
are the ordered LIS, and the corresponding hypotheses
are H(1) , H(2) ,..., and H(m) . Let
k = max{ i : (1/i) ∑_{j=1}^{i} LIS_{(j)} ≤ α }.  (2)
Then we reject H(i) for i = 1, ..., k.
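For concreteness, a minimal sketch of this step-up rule (our own Python illustration, not the authors' code), assuming the LIS values have already been computed:

import numpy as np

# Step-up procedure of Eq. (2): reject the k hypotheses with the
# smallest LIS, where k is the largest i whose running mean of the
# sorted LIS values is <= alpha.
def lis_stepup(lis, alpha):
    order = np.argsort(lis)                               # ascending LIS
    running_mean = np.cumsum(lis[order]) / np.arange(1, lis.size + 1)
    below = np.nonzero(running_mean <= alpha)[0]
    reject = np.zeros(lis.size, dtype=bool)
    if below.size > 0:
        reject[order[:below[-1] + 1]] = True              # reject H_(1), ..., H_(k)
    return reject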
Therefore, the key inferential problem that we need
to solve is that of computing the posterior marginal
distribution of the hidden variables θi given the test
statistics x, namely Pϑ (θi = 0|x), for i = 1, ..., m.
It is a typical inference problem if the parameters in
ϑ are known. Section 2.3 provides possible inference
algorithms for calculating Pϑ (θi = 0|x) for given ϑ.
However, ϑ is usually unknown in real-world applications, and we need to estimate it. Section 2.4 provides
a novel EM algorithm for parameter learning in our
MRF-coupled mixture model.
2.3 Posterior Inference
Now we are interested in calculating Pϑ (θi = 0|x) for a
given parameter set ϑ. One popular family of inference
algorithms is the sum-product family (Kschischang et
al., 2001), also known as belief propagation (Yedidia
et al., 2000). For loop-free graphs, belief propagation
algorithms provide exact inference results with a computational cost linear in the number of variables. In
our MRF-coupled mixture model, the structure of the
latent MRF is described by a graph G(V, E). When
G is chain structured, the instantiation of belief propagation is the forward-backward algorithm (Baum et
al., 1970). When G is tree structured, the instantiation of belief propagation is the upward-downward
algorithm (Crouse et al., 1998). For graphical models with cycles, loopy belief propagation (Murphy et
al., 1999; Weiss, 2000) and the tree-reweighted algorithm (Wainwright et al., 2003a) can be used for approximate inference. Other inference algorithms for
graphical models include junction trees (Lauritzen &
Spiegelhalter, 1988), sampling methods (Gelfand &
Smith, 1990), and variational methods (Jordan et al.,
1999). Recent papers (Schraudolph & Kamenetsky,
2009; Schraudolph, 2010) discuss exact inference algorithms on binary Markov random fields which allow loops. In our simulations, we use belief propaga-
tion when the graph G has no loops. When G has
loops (e.g. in the simulations on genetic data and
the real-world application), we use a Markov chain
Monte Carlo (MCMC) algorithm to perform inference
for Pϑ (θi = 0|x).
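To illustrate the sampling option, the following is a minimal Gibbs-sampler sketch for estimating the LIS values (our own Python illustration, not the authors' implementation), assuming a single agreement potential exp{φ I(θi = θj)} on every edge (the form used in Eq. (3) below), f0 = N(0, 1) and f1 = N(µ, 1):

import numpy as np
from scipy.stats import norm

# Gibbs sampling for P(theta_i = 0 | x) in the MRF-coupled mixture.
# x: test statistics; edges: list of (i, j) pairs; phi, mu: parameters.
def gibbs_lis(x, edges, phi, mu, n_iter=20000, burn_in=100, seed=0):
    rng = np.random.default_rng(seed)
    m = x.size
    nbrs = [[] for _ in range(m)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    log_f = np.stack([norm.logpdf(x, 0.0, 1.0),   # density under theta_i = 0
                      norm.logpdf(x, mu, 1.0)])   # density under theta_i = 1
    theta = rng.integers(0, 2, size=m)
    null_counts = np.zeros(m)
    for it in range(n_iter):
        for i in range(m):
            same0 = sum(theta[j] == 0 for j in nbrs[i])
            same1 = len(nbrs[i]) - same0
            logp0 = log_f[0, i] + phi * same0     # unnormalized log P(theta_i = 0 | rest)
            logp1 = log_f[1, i] + phi * same1
            theta[i] = rng.random() < 1.0 / (1.0 + np.exp(logp0 - logp1))
        if it >= burn_in:
            null_counts += (theta == 0)
    return null_counts / (n_iter - burn_in)       # estimated LIS values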
2.4 Parameters and Parameter Learning
In our procedure, the dependence among these hypotheses is represented by a graphical model on the
latent vector θ parameterized by φ, and observed test
statistics x are represented by the coupled mixture parameterized by ψ. In Sun and Cai’s work on HMMs,
φ is the transition parameter and ψ is the emission
parameter. One implicit assumption in their work is
that the transition parameter and the emission parameter stay the same for all i (i = 1, ..., m). Our extension
to MRFs also allows us to untie these parameters. In
the second set of basic simulations in Section 3, we
make φ and ψ heterogeneous and investigate how this
affects the numerical performance. In the simulations
on genetic data in Section 4 and the real-world GWAS
application in Section 5, we have different parameters
for SNP pairs with different levels of correlation.
In our model, learning (φ, ψ) is difficult for two reasons. First, learning parameters is difficult by nature
in undirected graphical models due to the global normalization constant (Wainwright et al., 2003b; Welling
& Sutton, 2005). State-of-the-art MRF parameter
learning methods include MCMC-MLE (Geyer, 1991),
contrastive divergence (Hinton, 2002) and variational
methods (Ganapathi et al., 2008). Several new sampling methods with higher efficiency have been recently proposed, such as persistent contrastive divergence (Tieleman, 2008), fast-weight contrastive divergence (Tieleman & Hinton, 2009), tempered transitions (Salakhutdinov, 2009), and particle-filtered
MCMC-MLE (Asuncion et al., 2010). In our procedure, we use the persistent contrastive divergence algorithm to estimate parameters φ. Another difficulty is
that θ is latent and we only have one observed training
sample x. We use an EM algorithm to solve this problem. In the E-step, we run our MCMC algorithm in
Section 2.3 to infer the latent θ based on the currently
estimated parameters ϑ = (φ, ψ). In the M-step, we
run the persistent contrastive divergence (PCD) algorithm (Tieleman, 2008) to estimate φ from the currently inferred θ. Note that PCD is also an iterative
algorithm, and we run it until it converges in each
M-step. In the M-step, we also do a maximum likelihood estimation of ψ from the currently inferred θ and
observed x. We run the EM algorithm until both φ
and ψ converge. Although this EM algorithm involves
intensive computation in both E-step and M-step, it
converges very quickly in our experiments.
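As an illustration, here is a sketch of the φ-update inside one M-step for the single-parameter case P(θ; φ) ∝ exp{φ S(θ)}, where S(θ) counts agreeing edges (our own Python; the learning rate and iteration counts are placeholders rather than the authors' settings, and theta_hat stands for the configuration inferred in the E-step):

import numpy as np

# Persistent contrastive divergence for phi: the log-likelihood gradient
# is S(theta_hat) - E_phi[S(theta)], with the model expectation estimated
# by persistent Gibbs chains ("particles") advanced k sweeps per update.
def pcd_phi(theta_hat, edges, m, phi=0.0, n_particles=100,
            n_updates=500, k=5, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    nbrs = [[] for _ in range(m)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    agree = lambda t: sum(t[i] == t[j] for i, j in edges)
    particles = rng.integers(0, 2, size=(n_particles, m))
    s_data = agree(theta_hat)
    for _ in range(n_updates):
        for p in particles:                       # k Gibbs sweeps per particle
            for _ in range(k):
                for i in range(m):
                    same0 = sum(p[j] == 0 for j in nbrs[i])
                    same1 = len(nbrs[i]) - same0
                    p[i] = rng.random() < 1.0 / (1.0 + np.exp(phi * (same0 - same1)))
        s_model = np.mean([agree(p) for p in particles])
        phi += lr * (s_data - s_model)            # ascend the likelihood in phi
    return phi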
3 Basic Simulations
In the basic simulations, we investigate the numerical
performance of our multiple testing approach on different fabricated dependence structures where we can
control the ground truth parameters. We first simulate
θ from P (θ; φ) and then simulate x from P (x|θ; ψ)
under a variety of settings of ϑ = (φ, ψ). Because
we have the ground truth parameters, we have two
versions of our multiple testing approach, namely the
oracle procedure (OR) and the data-driven procedure
(LIS). The oracle procedure knows the true parameters ϑ in the graphical models, whereas the data-driven
procedure does not and has to estimate ϑ. The baseline procedures include the BH procedure (Benjamini
& Hochberg, 1995) and the adaptive p-value procedure (AP) (Benjamini & Hochberg, 2000; Genovese &
Wasserman, 2004) which are compared by Sun & Cai
(2009). We include another baseline procedure, the
local false discovery rate procedure (localFDR) (Efron
et al., 2001). The adaptive p-value procedure requires
a consistent estimate of the proportion of the true null
hypotheses. The localFDR procedure requires a consistent estimate of the proportion of the true null hypotheses and the knowledge of the distribution of the
test statistics under the null and under the alternative.
In our simulations, we endow AP and localFDR with
the ground truth values of these in order to let these
baseline procedures achieve their best performance.
In the simulations, we assume that the observed xi
under the null hypothesis (namely θi = 0) is standard-normally distributed and that xi under the alternative hypothesis (namely θi = 1) is normally distributed with mean µ and standard deviation 1.0.
We choose the setup and parameters to be consistent
with the work of Sun & Cai (2009) when possible.
In total, we consider three MRF models, namely a
chain-structured MRF, tree-structured MRF and grid-structured MRF. For chain-MRF, we choose the number of hypotheses m = 3,000. For tree-MRF, we choose perfect binary trees of height 12, which yields a total of 8,191 hypotheses. For grid-MRF, we choose the number of rows and the number of columns to be 100, which yields a total of 10,000 hypotheses. In all the experiments, we choose the number of replications N = 500, which is also the same as
the work of Sun & Cai (2009). In total, we have three
sets of simulations with different goals as follows.
Basic simulation 1: We stay consistent with Sun & Cai (2009) in the simulations except that we use the three MRF models. In all three structures, (θ_i)_{i=1}^{m} is generated from the MRFs whose potentials on the edges are

( φ    1−φ )
( 1−φ    φ )

Therefore, φ only contains parameter φ, and ψ only includes parameter µ.
Basic simulation 2: One assumption in basic simulation 1 is that the parameters φ and µ are homogeneous in the sense that they stay the same for all i (i = 1, ..., m). This assumption is carried down
from the work of Sun & Cai (2009). However in
many real-world applications, the transition parameters can be different across the multiple hypotheses.
Similarly, the test statistics for the non-null hypotheses, although normally distributed and standardized,
could have different µ values. Therefore, we investigate the situation where the parameters can vary in
different hypotheses. The simulations are carried out
for all three different dependence structures aforementioned. In the first set of simulations, instead of fixing
φ, we choose φ’s uniformly distributed on the interval
(0.8 − ∆(φ)/2, 0.8 + ∆(φ)/2). In the second set of simulations, instead of fixing µ, we choose µ’s uniformly
distributed on the interval (2.0−∆(µ)/2, 2.0+∆(µ)/2).
The oracle procedure knows the true parameters. The
data-driven procedure does not know the parameters,
and assumes the parameters are homogeneous.
Basic simulation 3: Another implicit assumption in
basic simulation 1 is that each individual test in the
multiple testing problem is exact. Many widely used
hypothesis tests, such as Pearson’s χ2 test and the likelihood ratio test, are asymptotic in the sense that we
only know the limiting distribution of the test statistics for large samples. As an example, we simulate the
two-proportion z-test in this section and show how the
sample size affects the performance of the procedures
when the individual test is asymptotic. Suppose that
we have n samples (half of them are positive samples
and half of them are negative samples). For each sample, we have m Bernoulli distributed attributes. A
fraction of the attributes are relevant. If the attribute A is relevant, then the probability of “heads” in the positive samples (p+_A) is different from that in the negative samples (p−_A); p+_A and p−_A are the same if A is non-relevant. For each individual test, the null hypothesis is that the attribute is not relevant, and the alternative hypothesis is otherwise. The two-proportion z-test can be used to test whether p+_A − p−_A is zero, which yields an asymptotic N(0, 1) under the null and N(µ, 1) under the alternative (µ is nonzero). In the simulations, we fix µ, but vary the sample size n, and apply the aforementioned tree-MRF structure (m = 8,191). The oracle procedure and localFDR only know the limiting
distribution of the test statistics and assume the test
statistics exactly follow the limiting distributions even
when the sample size is small.
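For reference, the individual test statistic can be sketched as follows (our own Python; the pooled-variance form shown here is one standard choice):

import numpy as np

# Two-proportion z-statistic for one attribute, with n/2 positive and
# n/2 negative samples; approximately N(0,1) under the null for large n.
def two_prop_z(heads_pos, heads_neg, n_half):
    p_pos, p_neg = heads_pos / n_half, heads_neg / n_half
    p_pool = (heads_pos + heads_neg) / (2 * n_half)
    se = np.sqrt(p_pool * (1 - p_pool) * (2 / n_half))
    return (p_pos - p_neg) / se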
Figure 2: Comparison of BH, AP, localFDR, OR, and LIS in basic simulation 1: (1) chain-MRF, (2) tree-MRF, (3) grid-MRF; (a) FDR vs φ, (b) FNR vs φ, (c) ATP vs φ, (d) FDR vs µ, (e) FNR vs µ, (f) ATP vs µ.

Figure 2 shows the numerical results in basic simulation 1. Figures (1a)-(1f) are for the chain structure, Figures (2a)-(2f) for the tree structure, and Figures (3a)-(3f) for the grid structure. In Figures (1a)-(1c),
(2a)-(2c) and (3a)-(3c), we set µ = 2 and plot FDR,
FNR and the average number of true positives (ATP)
when we vary φ between 0.2 and 0.8. In Figures (1d)(1f), (2d)-(2f) and (3d)-(3f), we set φ = 0.8 and plot
FDR, FNR and ATP when we vary µ between 1.0
and 4.0. The nominal FDR level is set to be 0.10.
From Figure 2, we can observe comparable numerical
results between the chain structure and tree structure.
The FDR levels of all five procedures are controlled
at 0.10 and BH is conservative. From the plots for
FNR and ATP, we can observe that the data-driven
procedure performs almost the same as the oracle procedure, and they dominate the p-value thresholding
procedures BH and AP. The oracle procedure and the
data-driven procedure also dominate localFDR except
when φ = 0.5, when they perform comparably. This
is to be expected because the dependence structure is
no longer informative when φ is 0.5. In this situation
when the hypotheses are independent, our procedure
reduces to the localFDR procedure. As φ departs from
0.5 and approaches either 0 or 1.0, the difference between OR/LIS and the baselines gets larger. When
the individual hypotheses are easy to test (large µ values), the differences between them are not substantial.
When we turn to the grid structure, the numerical performance is similar to that in the chain structure and
the tree structure except for two observations. First,
the data-driven procedure does not appear to control
the FDR at 0.1 when µ is small (e.g. µ = 1.0), although the oracle procedure does, which indicates the
parameter estimation in the EM algorithm is difficult
when µ is small. In other words, with a limited number
of hypotheses, it is difficult to estimate the pairwise potential parameters if the test statistics of the non-nulls
do not look much different from the test statistics of
the nulls. The second observation is that the slopes of
the FNR curve and ATP curve for the grid structure
are different from those in the chain and tree structures. The reason is that the connectivity in the grid
structure is higher than that in the chain and tree.
Therefore we can observe that even when the individual hypotheses are difficult to test (small µ values),
the FNR is still low because each individual hypothesis has more neighbors in the grid than in the chain or
tree, and the neighbors are informative.
Figure 3: Comparison of BH, AP, localFDR, OR, and LIS in basic simulation 2: (1) chain-MRF, (2) tree-MRF, (3) grid-MRF; (a) FDR vs ∆(φ), (b) FNR vs ∆(φ), (c) ATP vs ∆(φ), (d) FDR vs ∆(µ), (e) FNR vs ∆(µ), (f) ATP vs ∆(µ).

Figure 3 shows the numerical performance in basic simulation 2. Figures (1a)-(1f), (2a)-(2f), and (3a)-(3f) correspond to the chain structure, the tree structure and the grid structure respectively. In Figures (1a)-(1c), (2a)-(2c), and (3a)-(3c), we set µ = 2 and vary ∆(φ) between 0 and 0.4. In Figures (1d)-(1f), (2d)-(2f), and (3d)-(3f), we set φ = 0.8 and vary ∆(µ) between 0 and 4.0. Again, the nominal FDR level is set to be 0.10. From Figure 3, we observe that all five procedures control FDR at the nominal level and BH is conservative when the transition parameter φ is heterogeneous. However, the data-driven procedure becomes more and more conservative as we increase the variance of φ in the grid structure. Nevertheless, the data-driven procedure does not lose much efficiency compared with the oracle procedure based on FNR and ATP. Both the data-driven procedure and the oracle procedure dominate the three baselines. When the µ parameter is heterogeneous, all five procedures are still valid, but the data-driven procedure becomes more and more conservative as we increase the variance of µ. The data-driven procedure can be more conservative than the BH procedure when ∆(µ) is large enough. The conservativeness appears most severe in the grid structure. However, when we look at the FNR and ATP, the data-driven procedure still dominates BH, AP and localFDR substantially in all the situations, although the data-driven procedure loses a certain amount of efficiency compared with the oracle procedure when the variance of µ gets large.
Figure 4: Comparison of BH, AP, localFDR, OR, and LIS in basic simulation 3: (a) FDR vs n, (b) FNR vs n, (c) ATP vs n.
Figure 4 shows the results from basic simulation 3.
The oracle procedure and localFDR are liberal when
the sample size is small. This is because when the sample size is small, there exists a discrepancy between the
true distribution of the test statistic and the limiting
distribution. Quite surprisingly, the data-driven procedure stays valid. The reason is that the data-driven
procedure can estimate the parameters from data. The
data-driven procedure and the oracle procedure still
have comparable performance and enjoy a much lower
level of FNR compared with the baselines. For all the
basic simulations, we set the nominal FDR level to be
0.10. We have also replicated the basic simulations
by setting the nominal level to be 0.05, and similar
conclusions can be made.
4 Simulations on Genetic Data
Unlike the fabricated dependence structures in the basic simulations in Section 3, the dependence structure
in the simulations on genetic data in this section is
real. We simulate the linkage disequilibrium structure
of a segment on human chromosome 22, and treat a
test of whether a SNP is associated as one individual
test. We follow the simulation settings in the work of
Wu et al. (2010). We use HAPGEN2 (Su et al., 2011)
and the CEU sample of HapMap (The International
HapMap Consortium, 2003) (Release 22) to generate
SNP genotype data at each of the 2,420 loci between bp 14431347 and bp 17999745 on Chromosome 22. A total of 685 out of 2,420 SNPs can be genotyped with the Affymetrix 6.0 array. These are the typed SNPs that we use for our simulations. Within the overall 2,420 SNPs, we randomly select 10 SNPs to be the
causal SNPs. All the SNPs on the Affymetrix 6.0 array whose r2 value with any of the causal SNPs (according to HapMap) is above t are set to be the associated SNPs. In the simulations, we report results
for three different t values, namely 0.8, 0.5 and 0.25.
We also simulate three different genetic models (additive model, dominant model, and recessive model)
with different levels of relative risk (1.2 and 1.3). In
total, we simulate 250 cases and 250 controls. The
experiment is replicated 100 times and the average result is provided. With the simulated data, we
apply our multiple testing procedure (LIS) and three
baseline procedures: the BH procedure, the adaptive
p-value procedure (AP), and the local false discovery
rate procedure (localFDR). Because the dependence
structure is real and the ground truth parameters are
unknown to us, we do not have the oracle procedure
in the simulations on genetic data.
With the simulated genetic data, we use two commonly used tests in genetic association studies, namely
two-proportion z-test and Cochran-Armitage’s trend
test (CATT) (Cochran, 1954; Armitage, 1955; Slager
& Schaid, 2001; Freidlin et al., 2002) as the individual
tests for the association of each SNP. CATT also yields
an asymptotic N(0, 1) under the null and N(µ, 1) under the alternative (µ is nonzero). Therefore, we parameterize ψ = (µ1, σ1²), where µ1 and σ1² are the mean and variance of the test statistics under the alternative.
The graph structure is built as follows. Each SNP
becomes a node in the graph. For each SNP, we connect it to the SNP with which it has the highest r2 value.
There are in total 490 edges in the graph. We further
categorize the edges into a high correlation edge set
Eh (r2 above 0.8), medium correlation edge set Em (r2
between 0.5 and 0.8) and low correlation edge set El
(r2 between 0.25 and 0.5). We have three different parameters (φh , φm , and φl ) for the three sets of edges.
Then the density of θ in formula (1) takes the form

P(θ; φ) ∝ exp{ ∑_{(i,j)∈Eh} φh I(θi = θj) + ∑_{(i,j)∈Em} φm I(θi = θj) + ∑_{(i,j)∈El} φl I(θi = θj) },  (3)
where I(θi = θj ) is an indicator variable that indicates
whether θi and θj take the same value. In the MCMC
algorithm, we run the Markov chain for 20,000 iterations with a burn-in of 100 iterations. In the PCD
algorithm, we generate 100 particles. In each iteration
of PCD learning, the particles move forward for 5 iterations (the n parameter in PCD-n). The learning rate
in PCD gradually decreases as suggested by Tieleman
(2008). The EM algorithm converges after about 10 to
20 iterations, which usually take less than 10 minutes
on a 3.00GHz CPU.
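The graph construction described above can be sketched as follows (our own Python, assuming a precomputed m × m matrix of pairwise r² values; SNP pairs below r² = 0.25 are simply left unconnected here):

import numpy as np

# Connect each SNP to the SNP with which it has the highest r^2, then
# bin the resulting edges into the high/medium/low correlation sets.
def build_edges(r2):
    r2 = r2.copy()
    m = r2.shape[0]
    np.fill_diagonal(r2, -1.0)   # exclude self-pairs
    edges = {tuple(sorted((i, int(np.argmax(r2[i]))))) for i in range(m)}
    Eh = {e for e in edges if r2[e] > 0.8}
    Em = {e for e in edges if 0.5 <= r2[e] <= 0.8}
    El = {e for e in edges if 0.25 <= r2[e] < 0.5}
    return Eh, Em, El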
Figure 5 shows the performance of the procedures in
the additive models with the homozygous relative risk
set to 1.2 and 1.3. The test statistics are from a two-proportion z-test. We have also replicated the simulations on Cochran-Armitage's trend test, and the
results are almost the same. In Figure 5, table (1)
summarizes the empirical FDR and the total number
of true positives (#TP) of our LIS procedure, BH, AP
and localFDR (lfdr), in the additive models with different (homozygous) relative risk levels, when we vary
t and when we vary the nominal FDR level α. We
regard a SNP having r2 above t with any causal SNP
as an associated SNP, and we regard a rejection of
the null hypothesis for an associated SNP as a true
positive. Our LIS procedure and localFDR are valid
while being conservative. BH and AP appear liberal
in some of the configurations. In any of the circumstances, our LIS procedure can identify more associated SNPs than the baselines. We can find a clue to
why our procedure LIS is being conservative from the
results in Figure 3. In basic simulation 2, we observe
that when the parameters µ and φ are heterogeneous
and we carry out the data-driven procedure under the
homogeneous parameter assumption, the data-driven
procedure is conservative. The discrepancy between
the nominal FDR level and the empirical FDR level
increases as the parameters move further away from
homogeneity. Although we assign three different parameters φh , φm , and φl to Eh , Em and El respectively,
the edges within the same set (e.g. El ) may still be
heterogeneous. The fact that the LIS procedure recaptures more true positives than the baselines while
remaining more conservative in many configurations
indicates that the local indices of significance provide
a ranking more efficient than the ranking provided by
the p-values from the individual tests. Therefore, we
further plot the ROC curves and precision-recall (PR)
curves when we rank SNPs by LIS and by the p-values
from the two-proportion z-test. The ROC curve and
PR curve are vertically averaged from 100 replications.
Subfigures (2a)-(2f) are for the additive model with homozygous relative risk level set to be 1.2. Subfigures
(3a)-(3f) are for the additive model with homozygous
relative risk level set to be 1.3. It is observed that the
curves from LIS dominate those from the p-values from
individual tests in most places, which further suggests
that LIS provides a more efficient ranking of the SNPs
than the individual tests.
Figure 5: Comparison of BH, AP, localFDR and LIS in the additive models when we vary the relative risk rr, t and the nominal FDR level α. Table (1) summarizes the results. Subfigures (2a)-(2f) show ROC and PR curves of LIS (solid red lines) and individual p-values (dashed green lines) with rr = 1.2. Subfigures (3a)-(3f) show ROC and PR curves of LIS (solid red lines) and individual p-values (dashed green lines) with rr = 1.3.

Figure 6 shows the performance of the procedures in the dominant model and the recessive model with the homozygous relative risk set to be 1.2. The test statistics are from a two-proportion z-test. In Figure 6, table (1) summarizes the empirical FDR and the total number of true positives (#TP) of our LIS procedure, BH, AP and localFDR (lfdr) in the dominant model and the recessive model when we vary t and when we vary the nominal FDR level α. Our LIS procedure and localFDR are valid while being conservative in all configurations, and they appear more conservative in the recessive model than in the dominant model. On the other hand, BH and AP appear liberal in the recessive model. Our LIS procedure still confers an advantage
over the baselines in the dominant model. The LIS procedure also recaptures almost the same number of true positives as BH and AP while maintaining a much lower FDR in the recessive model. Again, we further plot the ROC curves and precision-recall curves when we rank SNPs by LIS and by the p-values from individual tests. Subfigures (2a)-(2f) are for the dominant model. Subfigures (3a)-(3f) are for the recessive model. It is also observed that the curves from LIS dominate those from the p-values from individual tests in most places, which also suggests that LIS provides a more efficient ranking.

Figure 6: Comparison of BH, AP, localFDR and LIS in the dominant model and the recessive model with different t values and different nominal FDR α values. Table (1) summarizes the results. Subfigures (2a)-(2f) show ROC and PR curves of LIS (solid red lines) and individual p-values (dashed green lines) in the dominant model. Subfigures (3a)-(3f) show ROC and PR curves of LIS and individual p-values in the recessive model.

5 Real-world Application
Our primary GWAS dataset on breast cancer is
from NCI’s Cancer Genetics Markers of Susceptibility (CGEMS) (Hunter et al., 2007). 528, 173 SNPs for
1, 145 cases and 1, 142 controls are genotyped on the
Illumina HumanHap500 array. Our secondary GWAS
dataset comes from Marshfield Clinic. The Personalized Medicine Research Project (McCarty et al., 2005),
sponsored by Marshfield Clinic, was used as the sampling frame to identify 162 breast cancer cases and
162 controls. The project was reviewed and approved
by the Marshfield Clinic IRB. Subjects were selected
using clinical data from the Marshfield Clinic Cancer
Registry and Data Warehouse. Cases were defined as
women having a confirmed diagnosis of breast cancer. Both the cases and controls had to have at least
one mammogram within 12 months prior to having a
biopsy. The subjects also had DNA samples that were
genotyped using the Illumina HumanHap660 array, as
part of the eMERGE (electronic MEdical Records and
Genomics) network by McCarty et al. (2011).
We apply our multiple testing procedure on the
CGEMS data. The settings of the procedure are the
same as in the simulations on genetic data in Section
4. The individual test is two-proportion z-test. Our
procedure reports 32 SNPs with LIS value of 0.0 (an
estimated probability 1.0 of being associated). We further calculate the per-allele odds-ratio of these SNPs
on the Marshfield data, and 14 of them have an odds-ratio around 1.2 or above. The details about the 14
SNPs are given in supplementary material. There are
two clusters among them. First, rs3870371, rs7830137
and rs920455 (on chromosome 8) are located near each
other and near the gene hyaluronan synthase 2 (HAS2)
which has been shown to be associated with invasive
breast cancer by many studies (Udabage et al., 2005;
Li et al., 2007; Bernert et al., 2011). The other cluster includes rs11200014, rs2981579, rs1219648, and
rs2420946 on chromosome 10. They are exactly the
4 SNPs reported by Hunter et al. (2007). Their associated gene FGFR2 is also well known to be associated with breast cancer. SNP rs4866929 on chromosome 5 is also very likely to be associated because
it is highly correlated (r2 =0.957) with SNP rs981782
(not included in our data) which was identified from
a much larger dataset (4,398 cases and 4,316 controls and a follow-up confirmation stage on 21,860 cases and 22,578 controls) by Easton et al. (2007).
6 Conclusion
In this paper, we use an MRF-coupled mixture model
to leverage the dependence in multiple testing problems, and show the improved numerical performance
on a variety of simulations and its applicability in a
real-world GWAS problem. A theoretical question of
interest is whether this graphical model based procedure is optimal in the sense that it has the smallest FNR among all the valid procedures. The optimality of the oracle procedure can be proved under
the compound decision framework (Sun & Cai, 2007,
2009), as long as an exact inference algorithm exists
or an approximate inference algorithm can be guaranteed to converge to the correct marginal probabilities.
The asymptotic optimality of the data-driven procedure (the FNR yielded by the data-driven procedure
approaches the FNR yielded by the oracle procedure
as the number of tests m → ∞) requires consistent
estimates of the unknown parameters in the graphical models. Parameter learning in undirected models
is more complicated than in directed models due to
the normalization constant. To the best of our knowledge, asymptotic properties of parameter learning for
hidden MRFs and MRF-coupled mixture models have
not been investigated. Therefore, we cannot prove the
asymptotic optimality of the data-driven procedure so
far, although we can observe its close-to-oracle performance in the basic simulations.
Acknowledgements
The authors acknowledge the support of the Wisconsin
Genomics Initiative, NCI grant R01CA127379-01 and
its ARRA supplement 3R01CA127379-03S1, NIGMS
grant R01GM097618-01, NLM grant R01LM011028-01, NIEHS grant 5R01ES017400-03, eMERGE grant
1U01HG004608-01, NSF grant DMS-1106586 and the
UW Carbone Cancer Center.
References
Armitage, P. (1955). Tests for linear trends in proportions and frequencies. BIOMETRICS, 11, 375–386.
Asuncion, A. U., Liu, Q., Ihler, A. T., & Smyth, P.
(2010). Particle filtered MCMC-MLE with connections to contrastive divergence. In ICML.
Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970).
A maximization technique occurring in the statistical analysis of probabilistic functions of Markov
chains. ANN MATH STAT, 41(1), 164–171.
Benjamini, Y. & Heller, R. (2007). False discovery
rates for spatial signals. J AM STAT ASSOC, 102,
1272–1281.
Benjamini, Y. & Hochberg, Y. (1995). Controlling
the false discovery rate: A practical and powerful
approach to multiple testing. J ROY STAT SOC B,
57(1), 289–300.
Benjamini, Y. & Hochberg, Y. (2000). On the adaptive
control of the false discovery rate in multiple testing with independent statistics. J EDUC BEHAV
STAT, 25(1), 60–83.
Benjamini, Y. & Yekutieli, D. (2001). The control
of the false discovery rate in multiple testing under
dependency. ANN STAT, 29, 1165–1188.
Bernert, B., Porsch, H., & Heldin, P. (2011). Hyaluronan synthase 2 (HAS2) promotes breast cancer cell
invasion by suppression of tissue metalloproteinase
inhibitor 1 (TIMP-1). J BIOL CHEM, 286(49),
42349–42359.
Blanchard, G. & Roquain, E. (2009). Adaptive false
discovery rate control under independence and dependence. J MACH LEARN RES, 10, 2837–2871.
Celeux, G., Forbes, F., & Peyrard, N. (2003). EM
procedures using mean field-like approximations for
Markov model-based image segmentation. Pattern
Recognition, 36, 131–144.
Chatzis, S. P. & Varvarigou, T. A. (2008). A fuzzy
clustering approach toward hidden Markov random
field models for enhanced spatially constrained image segmentation. IEEE Transactions on Fuzzy Systems, 16, 1351 – 1361.
Cochran, W. G. (1954). Some methods for strengthening the common chi-square tests. BIOMETRICS,
10, 417–451.
Crouse, M. S., Nowak, R. D., & Baraniuk, R. G.
(1998). Wavelet-based statistical signal processing
using hidden Markov models. IEEE T SIGNAL
PROCES, 46(4), 886–902.
Easton, D. F., Pooley, K. A., Dunning, A. M.,
Pharoah, P. D. P., Thompson, D., Ballinger, D. G.,
Struewing, J. P., Morrison, J., Field, H., Luben, R.,
Wareham, N., Ahmed, S., Healey, C. S., Bowman,
R., Meyer, K. B., Haiman, C. A., Kolonel, L. K.,
Henderson, B. E., Le Marchand, L., Brennan, P.,
Sangrajrang, S., Gaborieau, V., Odefrey, F., Shen,
C.-Y., Wu, P.-E., Wang, H.-C., Eccles, D., Evans,
G. D., Peto, J., Fletcher, O., Johnson, N., Seal,
S., Stratton, M. R., Rahman, N., Chenevix-Trench,
G., Bojesen, S. E., Nordestgaard, B. G., Axelsson,
C. K., Garcia-Closas, M., Brinton, L., Chanock, S.,
Lissowska, J., Peplonska, B., Nevanlinna, H., Fagerholm, R., Eerola, H., Kang, D., Yoo, K.-Y., Noh,
D.-Y., Ahn, S.-H., Hunter, D. J., Hankinson, S. E.,
Cox, D. G., Hall, P., Wedren, S., Liu, J., Low, Y.-L.,
Bogdanova, N., Schürmann, P., Dörk, T., Tollenaar,
R. A. E. M., Jacobi, C. E., Devilee, P., Klijn, J.
G. M., Sigurdson, A. J., Doody, M. M., Alexander,
B. H., Zhang, J., Cox, A., Brock, I. W., Macpherson, G., Reed, M. W. R., Couch, F. J., Goode, E. L.,
Olson, J. E., Meijers-Heijboer, H., van den Ouweland, A., Uitterlinden, A., Rivadeneira, F., Milne,
R. L., Ribas, G., Gonzalez-Neira, A., Benitez, J.,
Hopper, J. L., Mccredie, M., Southey, M., Giles,
G. G., Schroen, C., Justenhoven, C., Brauch, H.,
Hamann, U., Ko, Y.-D., Spurdle, A. B., Beesley, J.,
Chen, X., Mannermaa, A., Kosma, V.-M., Kataja,
V., Hartikainen, J., Day, N. E., Cox, D. R., & Ponder, B. A. J. (2007). Genome-wide association study
identifies novel breast cancer susceptibility loci. Nature, 447, 1087–1093.
Efron, B. (2007). Correlation and large-scale simultaneous significance testing. J AM STAT ASSOC,
102(477), 93–103.
Efron, B., Tibshirani, R., Storey, J. D., & Tusher, V.
(2001). Empirical Bayes analysis of a microarray
experiment. J AM STAT ASSOC, 96, 1151–1160.
Fan, J., Han, X., & Gu, W. (2012). Control of the
false discovery rate under arbitrary covariance dependence. (to appear) J AM STAT ASSOC.
Farcomeni, A. (2007). Some results on the control of
the false discovery rate under dependence. SCAND
J STAT, 34(2), 275–297.
Finner, H. & Roters, M. (2002). Multiple hypotheses
testing and expected number of type I errors. ANN
STAT, 30, 220–238.
Freidlin, B., Zheng, G., Li, Z., & Gastwirth, J. L.
(2002). Trend tests for case-control studies of genetic markers: power, sample size and robustness.
HUM HERED, 53(3), 146–152.
Friguet, C., Kloareg, M., & Causeur, D. (2009). A factor model approach to multiple testing under dependence. J AM STAT ASSOC, 104(488), 1406–1415.
Ganapathi, V., Vickrey, D., Duchi, J., & Koller, D.
(2008). Constrained approximate maximum entropy
learning of Markov random fields. In UAI.
Gelfand, A. E. & Smith, A. F. M. (1990). Samplingbased approaches to calculating marginal densities.
J AM STAT ASSOC, 85(410), 398–409.
McCarty, C. A., Chisholm, R. L., Chute, C. G., Kullo,
I. J., Jarvik, G. P., Larson, E. B., Li, R., Masys,
D. R., Ritchie, M. D., Roden, D. M., Struewing,
J. P., Wolf, W. A., & eMERGE Team (2011). The
eMERGE Network: a consortium of biorepositories
linked to electronic medical records data for conducting genomic studies. BMC MED GENET, 4(1),
13.
Genovese, C. & Wasserman, L. (2002). Operating
characteristics and extensions of the false discovery
rate procedure. J ROY STAT SOC B, 64, 499–517.
Genovese, C. & Wasserman, L. (2004). A stochastic
process approach to false discovery control. ANN
STAT, 32, 1035–1061.
Genovese, C., Roeder, K., & Wasserman, L. (2006).
False discovery control with p-value weighting.
BIOMETRIKA, 93, 509–524.
Geyer, C. J. (1991). Markov chain Monte Carlo maximum likelihood. COMP SCI STAT, pages 156–163.
Hinton, G. (2002). Training products of experts by
minimizing contrastive divergence. NEURAL COMPUT, 14, 1771–1800.
Hunter, D. J., Kraft, P., Jacobs, K. B., Cox, D. G.,
Yeager, M., Hankinson, S. E., Wacholder, S., Wang,
Z., Welch, R., Hutchinson, A., Wang, J., Yu, K.,
Chatterjee, N., Orr, N., Willett, W. C., Colditz,
G. A., Ziegler, R. G., Berg, C. D., Buys, S. S., Mccarty, C. A., Feigelson, H. S., Calle, E. E., Thun,
M. J., Hayes, R. B., Tucker, M., Gerhard, D. S.,
Fraumeni, J. F., Hoover, R. N., Thomas, G., &
Chanock, S. J. (2007). A genome-wide association
study identifies alleles in FGFR2 associated with
risk of sporadic postmenopausal breast cancer. NAT
GENET, 39(7), 870–874.
Jordan, M. I., Ghahramani, Z., Jaakkola, T., & Saul,
L. K. (1999). An introduction to variational methods for graphical models. MACH LEARN, 37, 183–
233.
Kschischang, F., Frey, B., & Loeliger, H.-A. (2001).
Factor graphs and the sum-product algorithm.
IEEE T INFORM THEORY, 47(2), 498 –519.
Lauritzen, S. L. & Spiegelhalter, D. J. (1988). Local
computations with probabilities on graphical structures and their application to expert systems. J
ROY STAT SOC B, 50(2), 157–224.
Leek, J. T. & Storey, J. D. (2008). A general framework for multiple testing dependence. P NATL
ACAD SCI USA, 105(48), 18718–18723.
Li, Y., Li, L., Brown, T. J., & Heldin, P. (2007). Silencing of hyaluronan synthase 2 suppresses the malignant phenotype of invasive breast cancer cells. INT
J CANCER, 120(12), 2557–2567.
McCarty, C., Wilke, R., Giampietro, P., Wesbrook, S.,
& Caldwell, M. (2005). Marshfield Clinic Personalized Medicine Research Project (PMRP): design,
Murphy, K. P., Weiss, Y., & Jordan, M. I. (1999).
Loopy belief propagation for approximate inference:
An empirical study. In UAI, pages 467–475.
Owen, A. B. (2005). Variance of the number of false
discoveries. J ROY STAT SOC B, 67, 411–426.
Romano, J., Shaikh, A., & Wolf, M. (2008). Control of
the false discovery rate under dependence using the
bootstrap and subsampling. TEST, 17, 417–442.
Salakhutdinov, R. (2009). Learning in Markov random
fields using tempered transitions. In NIPS, pages
1598–1606.
Sarkar, S. K. (2006). False discovery and false nondiscovery rates in single-step multiple testing procedures. ANN STAT, 34(1), 394–415.
Schraudolph, N. N. (2010). Polynomial-time exact inference in NP-hard binary MRFs via reweighted perfect matching. In AISTATS.
Schraudolph, N. N. & Kamenetsky, D. (2009). Efficient
exact inference in planar Ising models. In NIPS.
Slager, S. L. & Schaid, D. J. (2001). Case-control
studies of genetic markers: power and sample size
approximations for Armitage’s test for trend. HUM
HERED, 52(3), 149–153.
Storey, J. D. (2002). A direct approach to false discovery rates. J ROY STAT SOC B, 64, 479–498.
Storey, J. D. (2003). The positive false discovery rate:
A Bayesian interpretation and the q-value. ANN
STAT, 31(6), 2013–2035.
Su, Z., Marchini, J., & Donnelly, P. (2011). HAPGEN2: simulation of multiple disease SNPs. BIOINFORMATICS.
Sun, W. & Cai, T. T. (2007). Oracle and adaptive compound decision rules for false discovery rate control.
J AM STAT ASSOC, 102(479), 901–912.
Sun, W. & Cai, T. T. (2009). Large-scale multiple
testing under dependence. J ROY STAT SOC B,
71, 393–424.
The International HapMap Consortium (2003). The
international HapMap project. NATURE, 426, 789–
796.
Tieleman, T. (2008). Training restricted Boltzmann
machines using approximations to the likelihood
gradient. In ICML, pages 1064–1071.
Tieleman, T. & Hinton, G. (2009). Using fast weights
to improve persistent contrastive divergence. In
ICML, pages 1033–1040.
Udabage, L., Brownlee, G. R., Nilsson, S. K., &
Brown, T. J. (2005). The over-expression of HAS2,
Hyal-2 and CD44 is implicated in the invasiveness
of breast cancer. EXP CELL RES, 310(1), 205 –
217.
Wainwright, M. J., Jaakkola, T. S., & Willsky, A. S.
(2003a). Tree-based reparameterization framework
for analysis of sum-product and related algorithms.
IEEE T INFORM THEORY, 49, 2003.
Wainwright, M. J., Jaakkola, T. S., & Willsky, A. S.
(2003b). Tree-reweighted belief propagation algorithms and approximate ML estimation via pseudomoment matching. In AISTATS.
Weiss, Y. (2000). Correctness of local probability propagation in graphical models with loops. NEURAL
COMPUT, 12(1), 1–41.
Welling, M. & Sutton, C. (2005). Learning in Markov
random fields with contrastive free energies. In AISTATS.
Wu, M. C., Kraft, P., Epstein, M. P., Taylor, D. M.,
Chanock, S. J., Hunter, D. J., & Lin, X. (2010).
Powerful SNP-set analysis for case-control genomewide association studies. AM J HUM GENET,
86(6), 929–942.
Wu, W. B. (2008). On false discovery control under
dependence. ANN STAT, 36(1), 364–380.
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2000).
Generalized belief propagation. In NIPS, pages 689–
695. MIT Press.
Yekutieli, D. & Benjamini, Y. (1999). Resamplingbased false discovery rate controlling multiple test
procedures for correlated test statistics. J STAT
PLAN INFER, 82, 171–196.
Zhang, C., Fan, J., & Yu, T. (2011). Multiple testing
via F DRL for large-scale imaging data. ANN STAT,
39(1), 613–642.
Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden
Markov random field model and the expectationmaximization algorithm. IEEE Transactions on
Medical Imaging.
| 5 |
1
No Blind Spots: Full-Surround Multi-Object
Tracking for Autonomous Vehicles using
Cameras & LiDARs
Akshay Rangesh, Member, IEEE, and Mohan M. Trivedi, Fellow, IEEE
Abstract—Online multi-object tracking (MOT) is extremely
important for high-level spatial reasoning and path planning
for autonomous and highly-automated vehicles. In this paper,
we present a modular framework for tracking multiple objects
(vehicles), capable of accepting object proposals from different
sensor modalities (vision and range) and a variable number
of sensors, to produce continuous object tracks. This work
is inspired by traditional tracking-by-detection approaches in
computer vision, with some key differences. First, we track
objects across multiple cameras and across different sensor
modalities. This is done by fusing object proposals across sensors
accurately and efficiently. Second, the objects of interest (targets)
are tracked directly in the real world. This is a departure from
traditional techniques where objects are simply tracked in the
image plane. Doing so allows the tracks to be readily used by an
autonomous agent for navigation and related tasks.
To verify the effectiveness of our approach, we test it on real
world highway data collected from a heavily sensorized testbed
capable of capturing full-surround information. We demonstrate
that our framework is well-suited to track objects through entire
maneuvers around the ego-vehicle, some of which take more than
a few minutes to complete. We also leverage the modularity of
our approach by comparing the effects of including/excluding
different sensors, changing the total number of sensors, and the
quality of object proposals on the final tracking result.
Index Terms—Multi-object tracking (MOT), panoramic surround behavior analysis, highly autonomous vehicles, computer
vision, sensor fusion, collision avoidance, path planning.
I. INTRODUCTION
TRACKING for autonomous vehicles involves accurately
identifying and localizing dynamic objects in the environment surrounding the vehicle. Tracking of surround vehicles is
essential for many tasks crucial to truly autonomous driving,
such as obstacle avoidance, path planning, and intent recognition. To be useful for such high-level reasoning, the generated
tracks should be accurate, long and robust to sensor noise.
In this study, we propose a full-surround MOT framework to
create such desirable tracks.
Traditional MOT techniques for autonomous vehicles can
roughly be categorized into 3 groups based on the sensory
inputs they use: 1) dense point clouds from range sensors,
2) stereo vision, and 3) a fusion of range and vision sensors.
Studies like [1]–[4] make use of dense point clouds created
by 3D LiDARs like the Velodyne HDL-64E. Such sensors,
although bulky and expensive, are capable of capturing finer
The authors are with the Laboratory for Intelligent and Safe Automobiles,
University of California, San Diego, CA 92092, USA.
email - arangesh, [email protected]
Fig. 1: (a) Illustration of online MOT for autonomous vehicles.
The surround vehicles (in red) are tracked in a coordinate
system centered on the ego-vehicle (center). The ego-vehicle
has full-surround coverage from vision and range sensors, and
must fuse proposals from each of them to generate continuous
tracks (dotted lines) in the real world.
(b) An example of images captured from a full-surround
camera array mounted on our testbed, along with color coded
vehicle annotations.
details of the surroundings owing to their high vertical resolution.
Trackers can therefore create suitable mid-level representations
like 2.5D grids, voxels etc. that retain unique statistics of
the volume they enclose, and group such units together to
form coherent objects that can be tracked. It must be noted
however, that these approaches are reliant on having dense
point representations of the scene, and would not scale well
to LiDAR sensors that have much fewer scan layers. On
the other hand, studies such as [5]–[8] make use of stereo
vision alone to perform tracking. The pipeline usually involves
estimating the disparity image and optionally creating a 3D
point cloud, followed by similar mid-level representations like
stixels, voxels etc. which are then tracked from frame to frame.
These sensors are limited by the quality of disparity estimates
and the field of view (FoV) of the stereo pair. Unlike 3D
LiDAR based systems, they are unable to track objects in
full-surround. There are other single camera approaches to
surround vehicle behavior analysis [9], [10], but they too are
limited in their FoVs and localization capabilities. Finally,
there are fusion based approaches like [11]–[13], that make
use of LiDARs, stereo pairs, monocular cameras, and Radars
in a variety of configurations. These techniques either perform
early or late fusion based on their sensor setup and algorithmic
needs. However, none of them seem to offer full-surround
solutions for vision sensors, and are ultimately limited to
fusion only in the FoV of the vision sensors.
In this study, we take a different approach to full-surround
MOT and try to overcome some of the limitations in previous
approaches. Specifically, we propose a framework to perform full-surround MOT using calibrated camera arrays, with
varying degrees of overlapping FoVs, and with an option to
include low resolution range sensors for accurate localization
of objects in 3D. We term this the M3OT framework, which
stands for multi-perspective, multi-modal, multi-object tracking framework. To train and validate the M3OT framework, we
make use of naturalistic driving data collected from our testbed
(illustrated in Figure 1) that has full-surround coverage from
vision and range modalities.
Since we use vision as our primary perception modality,
we leverage recent approaches from the 2D MOT community
which studies tracking of multiple objects in the 2D image
plane. Recent progress in 2D MOT has focused on the
tracking-by-detection strategy, where object detections from
a category detector are linked to form trajectories of the
targets. To perform tracking-by-detection online (i.e. in a
causal fashion), the major challenge is to correctly associate
noisy object detections in the current video frame with previously tracked objects. The basis for any data association
algorithm is a similarity function between object detections
and targets. To handle ambiguities in association, it is useful
to combine different cues in computing the similarity, and
learn an association based on these cues. Many recent 2D
MOT methods such as [14]–[18] use some form of learning
(online or offline) to accomplish data association. Similar to
these studies, we formulate the online multi-object tracking
problem using Markov Decision Processes (MDPs) proposed
in [19], where the lifetime of an object is modeled with a
MDP (see Figure 2), and multiple MDPs are assembled for
multi-object tracking. In this method, learning a similarity
function for data association is equivalent to learning a policy
for the MDP. The policy learning is approached in a reinforcement learning fashion which benefits from advantages of
both offline-learning and online-learning in data association.
The M3OT framework is also capable of naturally handling the
birth/death and appearance/disappearance of targets by treating
them as state transitions in the MDP, and also benefits from
the strengths of online learning approaches in single object
tracking ( [20]–[23]).
Our main contributions in this work can be summarized
as follows - 1) We extend and improve the MDP formulation
originally proposed for 2D MOT, and modify it to track objects
in 3D (the real world). 2) We make the M3OT framework capable
of tracking objects across multiple vision sensors in calibrated
camera arrays by carrying out efficient and accurate fusion of
object proposals. 3) The M3OT framework is made highly
modular, capable of working with any number of cameras,
with varying degrees of overlapping FoVs, and with the option
to include range sensors for improved localization and fusion
in 3D. 4) Finally, we carry out experiments using naturalistic
driving data collected on highways using full-surround sensory
modalities, and validate the accuracy, robustness and modularity of our framework.
II. RELATED RESEARCH
We highlight some representative works in 2D and 3D MOT
below. We also summarize some key aspects of related 3D
MOT studies in Table I. For a recent survey on detection and
tracking for intelligent vehicles, we refer the reader to [25].
2D MOT: Recent research in MOT has focused on tracking-by-detection, where the main challenge is data association
for linking object detections to targets. The majority of batch
(offline) methods ( [17], [26]–[31]) formulate MOT as a
global optimization problem in a graph-based representation,
while online methods solve the data association problem
either probabilistically [32]–[34] or deterministically (e.g.,
Hungarian algorithm [35] in [14], [15] or greedy association
[36]). A core component in any data association algorithm
is a similarity function between objects. Both batch methods
[16], [17] and online methods [14], [15], [18] have explored
the idea of learning to track, where the goal is to learn a
similarity function for data association from training data. In
this work, we extend and improve the MDP framework for 2D
MOT proposed in [19], which is an online method that uses
reinforcement learning to learn associations.
3D MOT for autonomous vehicles: In [5], the authors use a
stereo rig to first calculate the disparity using SGM, followed
by height based segmentation and free-space calculation to
create a mid-level representation using stixels that encode the
height within a cell. Each stixel is then represented by a 6D
state vector, which is tracked using the Extended Kalman Filter
(EKF). In [6], the authors use a voxel based representation
instead, and cluster neighboring voxels based on color to
create objects that are then tracked using a greedy association
model. The authors in [7] use a grid based representation of
the scene, where cells are grouped to create objects, each
TABLE I: Relevant research in 3D MOT for intelligent vehicles (✓ = yes, ✗ = no, - = not used).

| Study | Monocular camera | Stereo pair | Full-surround camera array | LiDAR | Radar | Single object tracker | Multi-object tracker | Online (causal) | Dataset | Evaluation metrics |
| Choi et al. [1] | - | - | - | ✓ | - | - | ✓ | ✓ | Proposed | Distance and velocity errors |
| Broggi et al. [6] | - | ✓ | - | - | - | - | ✓ | ✓ | Proposed | True positives, false positives, false negatives |
| Song et al. [2] | - | - | - | ✓ | - | ✓ | ✗ | ✓ | KITTI | Position error, intersection ratio |
| Cho et al. [11] | ✓ | - | - | ✓ | ✓ | - | ✓ | ✓ | Proposed | Correctly tracked, falsely tracked, true and false positive rates |
| Asvadi et al. [3] | - | - | - | ✓ | - | - | ✓ | ✓ | KITTI | - |
| Vatavu et al. [7] | - | ✓ | - | - | - | - | ✓ | ✓ | Proposed | - |
| Asvadi et al. [4] | - | - | - | ✓ | - | - | ✓ | ✓ | KITTI | Number of missed and false obstacles |
| Asvadi et al. [12] | ✓ | - | - | ✓ | - | ✓ | ✗ | ✓ | KITTI | Average center location errors in 2D and 3D, orientation errors |
| Ošep et al. [8] | - | ✓ | - | - | - | - | ✓ | ✓ | KITTI | Class accuracy, GOP recall, tracking recall |
| Allodi et al. [13] | - | ✓ | - | ✓ | - | - | ✓ | ✓ | KITTI | MOTP, IDS, Frag, average localization error |
| Dueholm et al. [24] | - | - | ✓ | - | - | - | ✓ | ✗ | VIVA Surround | MOTA, MOTP, IDS, Frag, Precision, Recall |
| This work¹ (M3OT) | - | - | ✓ | ✓ | - | - | ✓ | ✓ | Proposed | MOTA, MOTP, MT, ML, IDS for a variety of sensor configurations |

¹ This framework can work with or without LiDAR sensors, and with any subset of camera sensors.
of which is represented by a set of control points on the
object surface. This creates a high dimensional state-space
representation, which is accounted for by a Rao-Blackwellized
particle filter. More recently, the authors of [8] propose to carry
out semantic segmentation on the disparity image, which is
then used to generate generic object proposals by creating a
scale-space representation of the density, followed by multiscale clustering. The proposed clusters are then tracked using
a QPBO framework. The work in [24] uses a camera setup
similar to ours, but the authors propose an offline framework
for tracking, hence limiting their use to surveillance related
applications.
Alternatively, there are approaches that make use of dense
point clouds generated by LiDARs as opposed to creating
point clouds from a disparity image. In [1], the authors first
carry out ground classification based on the variance in the
radius of each scan layer, followed by a 2.5D occupancy grid
representation of the scene. The grid is then segmented, and
regions of interest (RoIs) identified within, each of which is
tracked by a standard Kalman Filter (KF). Data association
is achieved by simple global nearest neighbor. Similar to
this, the authors in [3] use a 2.5D occupancy grid based
representation, but augment this with an occupancy grid map
which accumulates past grids to create a coherent global map
of occupancy by accounting for ego-motion. Using these two
representations, a 2.5D motion grid is created by comparing
the map with the latest occupancy grid, which isolates and
identifies dynamic objects in the scene. Although the work in
[4] follows the same general idea, the authors propose a piecewise ground plane estimation scheme capable of handling nonplanar surfaces. In a departure from grid based methods, the
authors in [2] project the 3D point cloud onto a virtual image
plane, creating an object appearance model based on 4 image-
based cues for each template of the desired target. A particle
filtering framework is implemented, where the particle with
least reconstruction error with respect to the stored template
is chosen to update the tracker. Background filtering and
occlusion detection are implemented to improve performance.
Finally, we list recent methods that rely on fusion of
different sensor modalities to function. In [11], the authors
propose an EKF based fusion scheme, where measurements
from each modality are fed sequentially. Vision is used to
classify the object category, which is then used to choose
appropriate motion and observation models for the object.
Once again, observations are associated based on a global
nearest neighbor policy. This is somewhat similar to the work
in [12], where given an initial 3D detection box, the authors
propose to project 3D points within the box to the image
plane and calculate the convex hull of projected points. In this
case, a KF is used to perform both fusion and tracking, where
fusion is carried out by projecting the 2D hull to a sparse
3D cloud, and using both 3D cues to perform the update.
In contrast, the authors of [13] propose using the Hungarian
algorithm (for bipartite matching) for both data association and
fusion of object proposals from different sensors. The scores
for association are obtained from Adaboost classifiers trained
on high-level features. The objects are then tracked using an
Unscented Kalman Filter (UKF).
III. FUSION OF OBJECT PROPOSALS
In this study, we make use of full-surround camera arrays comprising sensors with varying FoVs. The M3OT
framework, however, is capable of working with any type and
number of cameras, as long as they are calibrated. In addition
to this, we also propose a variant of the framework for cases
where LiDAR point clouds are available. To effectively utilize
Fig. 2: The Markov Decision Process (MDP) framework
proposed in [19]. In this work, we retain the structure of the
MDP, and modify the actions, rewards and inputs to enable
multi-sensory tracking.
all available sensors, we propose an early fusion of object
proposals obtained from each of them. At the very start of each
time step during tracking, we identify and fuse all proposals
belonging to the same object. These proposals are then utilized
by the M3OT framework to carry out tracking.
a) Projection & Back-projection: It is essential to have
a way of associating measurements from different sensors to
track objects across different camera views, and to carry out
efficient fusion across sensor modalities. This is achieved by
defining a set of projection mappings, one from each sensor’s
unique coordinate system to the global coordinate system, and
a set of back-projection mappings that take measurements in
the global coordinate system to individual coordinate systems.
In our case, the global coordinate system is centered at the
mid-point of the rear axle of the ego-vehicle. The axes are
oriented as shown in Figure 1.
The LiDAR sensors output a 3D point cloud in a common
coordinate system at every instant. This coordinate frame may
either be centered about a single LiDAR sensor, or elsewhere
depending on the configuration. In this case, the projection and
back-projection mappings are simple 3D coordinate transformations:
P_range→G(x_range) = R_range · x_range + t_range,        (1)

and

P_range←G(x_G) = (R_range)^T · x_G − (R_range)^T · t_range,        (2)
where P_range→G(·) and P_range←G(·) are the projection and back-projection mappings from the LiDAR (range) coordinate system to the global (G) coordinate system and vice-versa. The vectors x_range and x_G are the corresponding coordinates in the LiDAR and global coordinate frames. The 3 × 3
Fig. 3: Projection of object proposals using: (a) IPM: The
bottom center of the bounding boxes are projected into the
global coordinate frame (right), (b) LiDAR point clouds:
LiDAR points that fall within a detection window are flattened
and lines are fit to identify the vehicle center (right).
orthonormal rotation matrix R_range and translation vector t_range are obtained through calibration.
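For concreteness, a minimal sketch of the mappings in Eqs. (1) and (2) is given below; the rotation and translation are placeholder values standing in for actual calibration results, not values from our testbed.

```python
import numpy as np

def project_range_to_global(x_range, R_range, t_range):
    """Eq. (1): map a LiDAR-frame point into the global (ego) frame."""
    return R_range @ x_range + t_range

def backproject_global_to_range(x_global, R_range, t_range):
    """Eq. (2): map a global-frame point back into the LiDAR frame."""
    return R_range.T @ x_global - R_range.T @ t_range

# Placeholder calibration: identity rotation, LiDAR mounted 1.0 m ahead of
# and 1.5 m above the global origin (illustrative values only).
R = np.eye(3)
t = np.array([1.0, 0.0, 1.5])

p_lidar = np.array([10.0, -2.0, 0.3])
p_global = project_range_to_global(p_lidar, R, t)
# Round-trip check: back-projection recovers the original point.
assert np.allclose(backproject_global_to_range(p_global, R, t), p_lidar)
```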
Similarly, the back-projection mappings for each camera
k ∈ {1, · · · , K} can be defined as:
P_cam^k←G(x_G) = (u, v)^T,        (3)

s.t. (u, v, 1)^T = C^k · [R^k | t^k] · [(x_G)^T | 1]^T,        (4)

where the set of camera calibration matrices {C^k}_{k=1}^K are obtained after the intrinsic calibration of cameras, and the set of tuples {(R^k, t^k)}_{k=1}^K obtained after extrinsic calibration.
Unlike the back-projection mappings, the projection mappings for camera sensors are not well defined. In fact, the mappings are one-to-many due to the depth ambiguity of single
camera images. To find a good estimate of the projection, we
use two different approaches. In case of a vision-only system,
we use the inverse perspective mapping (IPM) approach:
P_cam^k→G(x^k) = (x, y)^T,        (5)

s.t. (x, y, 1)^T = H^k · [(x^k)^T | 1]^T,        (6)

where {H^k}_{k=1}^K are the set of homographies obtained after
IPM calibration. Since we are only concerned with lateral and
longitudinal displacements of vehicles in the global coordinate
system, we only require the (x, y)^T coordinates, and set the
altitude coordinate to a fixed number.
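The two camera-side mappings can be sketched in the same fashion; the homography and camera matrices below are placeholders for the calibration outputs of Eqs. (3)-(6), not values from our system.

```python
import numpy as np

def backproject_global_to_image(x_global, C, R, t):
    """Eqs. (3)-(4): pinhole back-projection of a global 3D point to (u, v)."""
    p = C @ (R @ x_global + t)      # C^k · [R^k | t^k] · [x_G^T | 1]^T
    return p[:2] / p[2]             # dehomogenize

def ipm_project_to_global(u, v, H):
    """Eqs. (5)-(6): map a pixel to ground-plane (x, y) via an IPM homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]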
b) Sensor Measurements & Object Proposals: As we
adopt a tracking-by-detection approach, each sensor is used
to produce object proposals to track and associate. In case of
vision sensors, a vehicle detector is run on each individual
camera’s image to obtain multiple detections d, each of which
is defined by a bounding box in the corresponding image. Let
(u, v) denote the top left corner and (w, h) denote the width
and height of a detection d respectively.
In case of a vision-only system, the corresponding location
of d in the global coordinate system is obtained using the
mapping P_cam^k→G((u + w/2, v + h)^T), where k denotes the
camera from which the proposal was generated. This procedure is illustrated in Figure 3a.
In cases where LiDAR sensors are available, an alternative
is considered (shown in Figure 3b). First, the back-projected
LiDAR points that fall within a detection box d are identified
using a look-up table with pixel coordinates as the key, and
the corresponding global coordinates as the value. These points
are then flattened by ignoring the altitude component of their
global coordinates. Next, a line l1 is fitted to these points
using RANSAC with a small inlier ratio (0.3). This line aligns
with the dominant side of the detected vehicle. The other
side of the vehicle corresponding to line l2 is then identified
by removing all inliers from the previous step and repeating
a RANSAC line fit with a slightly higher inlier ratio (0.4).
Finally, the intersection of lines l1 and l2 along with the
vehicle dimensions yield the global coordinates of the vehicle
center.
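The two-line fit described above can be prototyped as follows; the tolerance, iteration count and synthetic points are illustrative assumptions rather than values from our implementation.

```python
import numpy as np

def ransac_line(pts, inlier_ratio, tol=0.15, iters=200, rng=None):
    """Fit a 2D line (unit normal n, point p) to flattened LiDAR points with
    RANSAC; stop early once the requested inlier fraction is reached."""
    rng = rng or np.random.default_rng(0)
    best_line, best_in = None, np.zeros(len(pts), bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        d = pts[j] - pts[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal
        inliers = np.abs((pts - pts[i]) @ n) < tol
        if inliers.sum() > best_in.sum():
            best_line, best_in = (n, pts[i]), inliers
        if best_in.mean() >= inlier_ratio:
            break
    return best_line, best_in

def line_intersection(l1, l2):
    """Solve n1·x = n1·p1 and n2·x = n2·p2 for the corner point x."""
    (n1, p1), (n2, p2) = l1, l2
    return np.linalg.solve(np.vstack([n1, n2]), np.array([n1 @ p1, n2 @ p2]))

# Synthetic L-shaped cluster standing in for the flattened points of a vehicle.
side1 = np.c_[np.linspace(0.0, 4.0, 40), np.zeros(40)]    # dominant side
side2 = np.c_[np.zeros(20), np.linspace(0.0, 1.8, 20)]    # other side
pts = np.vstack([side1, side2])
l1, in1 = ransac_line(pts, inlier_ratio=0.3)               # dominant side
l2, _ = ransac_line(pts[~in1], inlier_ratio=0.4)           # remaining side
corner = line_intersection(l1, l2)   # vehicle corner; the center then follows
                                     # from the corner and vehicle dimensions
```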
Depending on the type of LiDAR sensors used, object
proposals along with their dimensions in the real world can
be obtained. However, we decide not to make use of LiDAR
proposals, but rather use vision-only proposals with high recall
by trading off some of the precision. This was seen to provide
sufficient proposals to track all surrounding vehicles, at the
expense of more false positives which the tracker is capable
of handling.
c) Early Fusion of Proposals: Since we operate with
camera arrays with overlapping FoVs, the same vehicle may
be detected in two adjacent views. It is important to identify
and fuse such proposals to track objects across camera views.
Once again, we propose two different approaches to carry out
this fusion. For vision-only systems, the fusion of proposals is
carried out in 4 steps: i) Project proposals from all cameras to
the global coordinate system using proposed mappings, ii) Sort
all proposals in descending order based on their confidence
scores (obtained from the vehicle detector), iii) Starting with
the highest scoring proposal, find the subset of proposals
whose euclidean distance in the global coordinate system falls
within a predefined threshold. These proposals are considered
to belong to the same object and removed from the original
set of proposals. iv) The projection of each proposal within
this subset is set to the mean of projections of all proposals
within the subset. This process is repeated for the remaining
proposals until no proposals remain.
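A compact sketch of this greedy grouping is shown below; the 1 m association threshold is an assumed placeholder rather than the threshold used in our experiments.

```python
import numpy as np

def fuse_proposals(positions, scores, dist_thresh=1.0):
    """Greedy distance-based fusion (steps i-iv above). `positions` holds the
    global (x, y) projections of all proposals, `scores` their detector
    confidences. Returns index groups and the fused (mean) positions."""
    remaining = list(np.argsort(-np.asarray(scores)))   # ii) sort by score
    fused = np.array(positions, dtype=float)
    groups = []
    while remaining:
        seed = remaining[0]                             # iii) best remaining
        dists = np.linalg.norm(fused[remaining] - fused[seed], axis=1)
        group = [r for r, d in zip(remaining, dists) if d < dist_thresh]
        fused[group] = fused[group].mean(axis=0)        # iv) snap to mean
        groups.append(group)
        remaining = [r for r in remaining if r not in group]
    return groups, fused
```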
Fig. 4: Fusion of object proposals using LiDAR point clouds: Points common to both detections are drawn in green, and the rest are drawn in red.

Alternatively, for a system consisting of LiDAR sensors,
we project the 3D point cloud onto each individual camera
image. Next, for each pair of proposals, we make a decision
as to whether or not they belong to the same object. This is
done by considering the back-projected LiDAR points that fall
within the bounding box of each proposal (see Figure 4). Let
P1 and P2 denote the index set of LiDAR points falling within
each bounding box. Then, two proposals are said to belong to
the same object if:
max( |P1 ∩ P2| / |P1| , |P1 ∩ P2| / |P2| ) ≥ 0.8,        (7)
where |P1 | and |P2 | denote the cardinalities of sets P1
and P2 respectively. It should be noted that after fusion is
completed, the union of LiDAR point sets that are back-projected into fused proposals can be used to obtain better
projections.
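The pairwise test of Eq. (7) reduces to a few lines; here P1 and P2 are assumed to be Python sets of LiDAR point indices.

```python
def same_object(P1, P2, thresh=0.8):
    """Eq. (7): fuse two proposals when the LiDAR points they share dominate
    the smaller of the two point sets."""
    if not P1 or not P2:
        return False
    common = len(P1 & P2)
    return max(common / len(P1), common / len(P2)) >= thresh

# e.g. same_object({1, 2, 3, 4, 5}, {2, 3, 4, 5, 6, 7}) -> True (4/5 = 0.8)
```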
IV. M3OT FRAMEWORK
Once we have a set of fused object proposals, we feed
them into the MDP as illustrated in Figure 2. This forms a
crucial component of the M3OT framework, details of which
are presented in this section.
A. Markov Decision Process
As detailed in [19], we model the lifetime of a target with
a Markov Decision Process (MDP). The MDP consists of the
tuple (S, A, T (·, ·), R(·, ·)), where:
• States s ∈ S encode the status of the target.
• Actions a ∈ A define the actions that can be taken.
• The state transition function T : S × A → S dictates how the target transitions from one state to another, given an action.
• The real-valued reward function R : S × A → R assigns the immediate reward received after executing action a in state s.
In this study, we retain the states S, actions A and the state
transition function T (·, ·) from the MDP framework for 2D
MOT [19], while changing only the reward function R(·, ·).
States: The state space is partitioned into four subspaces,
i.e., S = S_Active ∪ S_Tracked ∪ S_Lost ∪ S_Inactive, where each
subspace contains an infinite number of states which encode the
information of the target depending on the feature representation, such as appearance, location, size and history of the
target. Figure 2 illustrates the transitions between the four
subspaces. Active is the initial state for any target. Whenever
6
an object is detected by the object detector, it enters an
Active state. An active target can transition to Tracked or
Inactive. Ideally, a true positive from the object detector
should transition to a Tracked state, while a false alarm should
enter an Inactive state. A tracked target can stay tracked, or
transition to a Lost state if the target is not visible due to some
reason, e.g. occlusion, or disappearance from sensor range.
Likewise, a lost target can stay Lost, or go back to a Tracked
state if it appears again, or transition to an Inactive state if it
has been lost for a sufficiently long time. Finally, Inactive is
the terminal state for any target, i.e., an inactive target stays
inactive forever.
Actions and Transition Function: Seven possible transitions are designed between the state subspaces, which correspond to seven actions in our target MDP. Figure 2 illustrates
these transitions and actions. In the MDP, all the actions are
deterministic, i.e., given the current state and an action, we
specify a new state for the target. For example, executing
action a6 on a Lost target would transfer the target into a
Tracked state, i.e., T(s_Lost, a6) = s_Tracked.
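Because the transitions are deterministic, T(·, ·) can be written down as a lookup table; the sketch below encodes the seven actions of Figure 2.

```python
from enum import Enum, auto

class State(Enum):
    ACTIVE = auto()
    TRACKED = auto()
    LOST = auto()
    INACTIVE = auto()

# Deterministic transition function T(s, a) for the seven actions of Figure 2.
T = {
    (State.ACTIVE,  'a1'): State.TRACKED,   # true detection: start tracking
    (State.ACTIVE,  'a2'): State.INACTIVE,  # false alarm: discard
    (State.TRACKED, 'a3'): State.TRACKED,   # target still visible
    (State.TRACKED, 'a4'): State.LOST,      # occlusion / out of sensor range
    (State.LOST,    'a5'): State.LOST,      # stay lost
    (State.LOST,    'a6'): State.TRACKED,   # re-associated with a proposal
    (State.LOST,    'a7'): State.INACTIVE,  # lost too long: terminate
}

assert T[(State.LOST, 'a6')] is State.TRACKED   # i.e. T(s_Lost, a6) = s_Tracked
```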
Reward Function: As in the original study [19], we
learn the reward function from training data, i.e., an inverse
reinforcement learning problem, where we use ground truth
trajectories of the targets as supervision.
B. Policy
In the context of our MDP, a policy π is a mapping from
the state space S to the action space A, i.e., π : S 7→ A.
Given the current state of the target, a policy determines which
action to take. Ideally, the policy is chosen to maximize the
total reward obtained. In this subsection, we list the policies
chosen for the Active and Tracked subspaces, after which we
describe the reinforcement learning algorithm used to generate
a good policy for data association in the Lost subspace.
a) Policy in Active States: In an Active state s, the MDP
makes the decision between transferring an object proposal
into a Tracked or Inactive state based on whether the detection
is true or noisy. To do this, we train a set of binary Support
Vector Machines (SVM) offline, one for each camera view, to
classify a detection belonging to that view into Tracked or Inactive states using a normalized 5D feature vector φActive (s),
i.e., 2D image plane coordinates, width, height and score of the
detection, where training examples are collected from training
video sequences.
This is equivalent to learning the reward function:
R_Active(s, a) = y(a)((w^k_Active)^T · φ_Active(s) + b^k_Active),        (8)

for an object proposal belonging to camera k ∈ {1, · · · , K}. (w^k_Active, b^k_Active) defines the learned weights and bias of
the SVM for camera k, y(a) = +1 if action a = a1 , and
y(a) = −1 if a = a2 (see Figure 2). Training a separate SVM
for each camera view allows weights to be learned based on
object dimensions and locations in that particular view, and
thus works better than training a single SVM for all views.
Since a single object can result in multiple proposals, we
initialize a tracker for that object if any of the fused proposals
result in a positive reward. Note that a false positive from the
object detector can still be misclassified and transferred to a
Tracked state, which we then leave to be handled by the MDP
in Tracked and Lost states.
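A sketch of the per-view Active-state classifiers is given below, using scikit-learn's LinearSVC in place of a hand-rolled solver; the feature layout follows the 5D vector described above, and the regularization constant is an assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_active_classifiers(samples_per_camera, C=1.0):
    """Train one binary SVM per camera view for the Active-state decision.
    `samples_per_camera` maps view index k to (X, y), where each row of X is
    the normalized (u, v, w, h, score) vector and y is +1/-1."""
    return {k: LinearSVC(C=C).fit(X, y)
            for k, (X, y) in samples_per_camera.items()}

def r_active(clf, phi_active, y_a):
    """Eq. (8): signed SVM margin; y_a is +1 for action a1, -1 for a2."""
    return y_a * clf.decision_function(np.asarray(phi_active).reshape(1, -1))[0]
```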
b) Policy in Tracked States: In a Tracked state, the MDP
needs to decide whether to keep tracking the target or to
transfer it to a Lost state. As long as the target is visible, we
should keep tracking it. Else, it should be marked "lost". We
build an appearance model for the target online and use it to
track the target. If the appearance model is able to successfully
track the target in the next video frame, the MDP leaves the
target in a Tracked state. Otherwise, the target is transferred
to a Lost state.
Template Representation: The appearance of the target is
simply represented by a template that is an image patch of
the target in a video frame. Whenever an object detection is
transferred to a Tracked state, we initialize the target template
with the detection bounding box. If the target is initialized
with multiple fused proposals, then each detection is stored as
a template. We make note of detections obtained from different
camera views, and use these to model the appearance of the
target in that view. This is crucial to track objects across
camera views under varying perspective changes. When the
target is being tracked, the MDP collects its templates in the
tracked frames to represent the history of the target, which
will be used in the Lost state for decision making.
Template Tracking: Tracking of templates is carried out
by performing dense optical flow as described in [19]. The
stability of the tracking is measured using the median of the FB
n
errors of all sampled points: emedF B = median(e(ui )i=1 ),
where n is the number of points. If emedF B is larger than
some threshold, the tracking is considered to be unstable.
Moreover, after filtering out unstable matches whose FB error
is larger than the threshold, a new bounding box of the target
is predicted using the remaining matches by measuring scale
change between points before and after. This process is carried
out for all camera views in which a target template has been
initialized and tracking is in progress.
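A minimal OpenCV sketch of the forward-backward stability test is shown below; image and point formats are assumed, and the threshold comparison happens outside the function.

```python
import numpy as np
import cv2

def median_fb_error(prev_img, next_img, pts):
    """Track points forward with pyramidal LK flow, track the results back,
    and return the median round-trip (forward-backward) error e_medFB."""
    p0 = np.asarray(pts, np.float32).reshape(-1, 1, 2)
    p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, p0, None)
    p0_back, st2, _ = cv2.calcOpticalFlowPyrLK(next_img, prev_img, p1, None)
    ok = (st1.ravel() == 1) & (st2.ravel() == 1)
    err = np.linalg.norm((p0 - p0_back).reshape(-1, 2), axis=1)[ok]
    return np.median(err) if err.size else np.inf  # no stable points: unstable
```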
Similar to the original MDP framework, we use the optical
flow information in addition to the object proposals history
to prevent drifting of the tracker. To do this, we compute the
bounding box overlap between the target box for l past frames,
and the corresponding detections in each of those frames. Then
we compute the mean bounding box overlap for the past L
tracked frames omean as another cue to make the decision.
Once again, this process is repeated for each camera view the
target is being tracked in. In addition to the above features, we
also gate the target track. This involves introducing a check
to see if the current global position of the tracked target falls
within a window (gate) of its last known global position.
This forbids the target track from latching onto objects that
appear close on the image plane, yet are much farther away in
the global coordinate frame. We denote the last known global position and the currently tracked global position of the target as x_G(t − 1) and x̂_G(t), respectively.
Finally, we define the reward function in a Tracked state s using the feature set φ_Tracked(s) = ({e^{k′}_medFB}_{k′=1}^{K′}, {o^{k′}_mean}_{k′=1}^{K′}, x_G(t − 1), x̂_G(t)) as:

R_Tracked(s, a) =
    y(a),   if ∃ k′ ∈ {1, · · · , K′} s.t. (e^{k′}_medFB < e_0) ∧ (o^{k′}_mean > o_0) ∧ (|x_G(t − 1) − x̂_G(t)| ≤ t_gate),
    −y(a),  otherwise,        (9)
where e_0 and o_0 are fixed thresholds, y(a) = +1 if action a = a3, and y(a) = −1 if a = a4 (see Figure 2). k′ above indexes camera views in which the target is currently being tracked, and t_gate denotes the gating threshold. So the MDP keeps the target in a Tracked state if e_medFB is smaller and o_mean is larger than their respective thresholds for any one of the K′ camera views, in addition to satisfying the gating check.
Otherwise, the target is transferred to a Lost state.
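The decision of Eq. (9) can be summarized as below; the threshold values are placeholders, since the tuned e_0, o_0 and t_gate are not reproduced here.

```python
import numpy as np

def r_tracked(e_medfb, o_mean, x_prev, x_curr, y_a,
              e0=10.0, o0=0.5, t_gate=5.0):
    """Eq. (9): +y(a) if ANY camera view k' passes the stability (e < e0) and
    overlap (o > o0) tests while the global gating check holds; else -y(a).
    `e_medfb` and `o_mean` are per-view sequences; y_a is +1 for a3, -1 for a4."""
    gate_ok = np.linalg.norm(np.asarray(x_prev) - np.asarray(x_curr)) <= t_gate
    view_ok = any(e < e0 and o > o0 for e, o in zip(e_medfb, o_mean))
    return y_a if (view_ok and gate_ok) else -y_a
```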
Template Updating: The appearance model of the target
needs to be regularly updated in order to accommodate appearance changes. As in the original work, we adopt a "lazy"
updating rule and resort to the object detector in preventing
tracking drift. This is done so that we don’t accumulate
tracking errors, but rather rely on data association to handle
appearance changes and continue tracking. In addition to this,
templates are initialized in views where the target is yet to
be tracked by using proposals that are fused with detections
corresponding to the tracked location in an adjacent camera
view. This helps track objects that move across adjacent
camera views, by creating target templates in the new view
as soon as they are made available.
c) Policy in Lost States: In a Lost state, the MDP needs
to decide whether to keep the target in a Lost state, or transition
it to a Tracked state, or mark it as Inactive. We simply mark
a lost target as Inactive and terminate the tracking if the
target has been lost for more than L_Lost frames. The more
challenging task is to make the decision between tracking
the target and keeping it as lost. This is treated as a data
association problem where, in order to transfer a lost target
into a Tracked state, the target needs to be associated with an
object proposal, else, the target retains its Lost state.
Data Association: Let t denote a lost target, and d be an
object detection. The goal of data association is to predict the
label y ∈ {+1, −1} of the pair (t, d) indicating that the target
is linked (y = +1) or not linked (y = −1) to the detection.
Assuming that the detection d belongs to camera view k,
this binary classification is performed using the real-valued
linear function f^k(t, d) = (w^k_Lost)^T · φ_Lost(t, d) + b^k_Lost, where (w^k_Lost, b^k_Lost) are the parameters that control the function (for camera view k), and φ_Lost(t, d) is the feature vector which
captures the similarity between the target and the detection.
The decision rule is given by y = +1 if f^k(t, d) ≥ 0, else y = −1. Consequently, the reward function for data association in a Lost state s, given the feature set {φ_Lost(t, d_m)}_{m=1}^M, is defined as

R_Lost(s, a) = y(a) · max_{m=1,··· ,M} ((w^{k_m}_Lost)^T · φ_Lost(t, d_m) + b^{k_m}_Lost),        (10)
where y(a) = +1 if action a = a6 , y(a) = −1 if a = a5
(see Figure 2), and m indexes M potential detections for
input : Set of multi-video sequences V = {(v_i^1, · · · , v_i^K)}_{i=1}^N, ground truth trajectories T_i = {t_ij}_{j=1}^{N_i}, object proposals D_i = {d_im}_{m=1}^{M_i} and their corresponding projections for each multi-video sequence (v_i^1, · · · , v_i^K)
output: Binary classifiers {(w^k_Lost, b^k_Lost)}_{k=1}^K for data association

repeat
    foreach multi-video sequence (v_i^1, · · · , v_i^K) in V do
        foreach target t_ij do
            Initialize the MDP in an Active state;
            l ← index of the first frame in which t_ij is correctly detected;
            Transfer the MDP to a Tracked state and initialize the target template for each camera view in which the target is observed;
            while l ≤ index of last frame of t_ij do
                Fuse object proposals as described in Section III;
                Follow the current policy and choose an action a;
                Compute the action a_gt according to the ground truth;
                if current state is Lost and a ≠ a_gt then
                    foreach camera view k in which the target has been seen do
                        Decide the label y_{m_k} of the pair (t_ij^k, d_{i m_k}^k);
                        S^k ← S^k ∪ {(φ(t_ij^k, d_{i m_k}^k), y_{m_k})};
                        (w^k_Lost, b^k_Lost) ← solution of Eq. (11) on S^k;
                    end
                    break;
                else
                    Execute action a;
                    l ← l + 1;
                end
                if l > index of last frame of t_ij then
                    Mark target t_ij as successfully tracked;
                end
            end
        end
    end
until all targets are successfully tracked;

Algorithm 1: Reinforcement learning of the binary classifiers for data association.
association. Potential detections for association with a target
are simply obtained by applying a gating function around
the last known location of the target in the global coordinate
system. Note that based on which camera view each detection d_m originates from, the appropriate weights (w^{k_m}_Lost, b^{k_m}_Lost) associated with that view are used. As a result, the task of policy learning in the Lost state reduces to learning the set of parameters {(w^k_Lost, b^k_Lost)}_{k=1}^K for the decision functions {f^k(t, d)}_{k=1}^K.
Reinforcement Learning: We train the binary classifiers
described above using the reinforcement learning paradigm.
Let V = {(v_i^1, · · · , v_i^K)}_{i=1}^N denote a set of multi-video sequences for training, where N is the number of sequences and K is the total number of camera views. Suppose there are N_i ground truth targets T_i = {t_ij}_{j=1}^{N_i} in the i-th multi-video sequence (v_i^1, · · · , v_i^K). Our goal is to train the MDP to successfully track all these targets across all camera views they appear in. We start training with initial weights (w_0^k, b_0^k) and an empty training set S_0^k = ∅ for the binary classifier corresponding to each camera view k. Note that when the
TABLE II: Features used for data association [19]. We introduce two new features (φ13 and φ14) based on the global coordinate positions of targets and detections.

| Type | Notation | Feature Description |
| FB error | φ1, · · · , φ5 | Mean of the median forward-backward errors from the entire, left half, right half, upper half and lower half of the templates obtained from optical flow |
| NCC | φ6 | Mean of the median Normalized Correlation Coefficients (NCC) between image patches around the matched points in optical flow |
| NCC | φ7 | Mean of the median NCC between image patches around the matched points obtained from optical flow |
| Height ratio | φ8 | Mean of ratios of the bounding box height of the detection to that of the predicted bounding boxes obtained from optical flow |
| Height ratio | φ9 | Ratio of the bounding box height of the target to that of the detection |
| Overlap | φ10 | Mean of the bounding box overlaps between the detection and the predicted bounding boxes from optical flow |
| Score | φ11 | Normalized detection score |
| Distance | φ12 | Euclidean distance between the centers of the target and the detection after motion prediction of the target with a linear velocity model |
| Distance | φ13 (new) | Lateral offset between last known global coordinate position of the target and that of the detection |
| Distance | φ14 (new) | Longitudinal offset between last known global coordinate position of the target and that of the detection |
input : A multi-video sequence (v^1, · · · , v^K), corresponding object proposals D = {d_m}_{m=1}^M and their projections, learned binary classifier weights {(w^k_Active, b^k_Active)}_{k=1}^K and {(w^k_Lost, b^k_Lost)}_{k=1}^K
output: Trajectories of targets T = {t_j}_{j=1}^N in the sequence

foreach frame l in (v^1, · · · , v^K) do
    Fuse object proposals as described in Section III;
    /* process targets in tracked states */
    foreach tracked target t_j in T do
        Follow the policy, move the MDP of t_j to the next state;
    end
    /* process targets in lost states */
    foreach lost target t_j in T do
        foreach proposal d_m not covered by any tracked target do
            Compute f^{k_m}(t_j, d_m) = (w^{k_m}_Lost)^T · φ(t_j, d_m) + b^{k_m}_Lost;
        end
    end
    Data association with the Hungarian algorithm for the lost targets;
    Initialize target templates for uninitialized camera views using matched (fused) proposals;
    foreach lost target t_j in T do
        Follow the assignment, move the MDP of t_j to the next state;
    end
    /* initialize new targets */
    foreach proposal d_m not covered by any tracked target in T do
        Initialize an MDP for a new target t with proposal d_m;
        if action a1 is taken following the policy then
            Transfer t to the Tracked state and initialize the target template for each camera view in which the target is observed;
            T ← T ∪ {t};
        else
            Transfer t to the Inactive state;
        end
    end
end

Algorithm 2: Multi-object tracking with MDPs.
weights of the binary classifiers are specified, we have a
complete policy for the MDP to follow. So the training
algorithm loops over all the multi-video sequences and all
the targets, follows the current policy of the MDP to track
the targets. The binary classifier or the policy is updated only
when the MDP makes a mistake in data association. In this
case, the MDP takes a different action than what is indicated
by the ground truth trajectory. Suppose the MDP is tracking
the j-th target t_ij in the video v_i^k, and on the l-th frame of the video, the MDP is in a Lost state. Consider the two types of mistakes that can happen: i) The MDP associates the target t_ij^k(l) to an object detection d_m^k which disagrees with the ground truth, i.e., the target is incorrectly associated to a detection. Then φ(t_ij^k(l), d_m^k) is added to the training set S^k of the binary classifier for camera k as a negative example. ii) The MDP decides to not associate the target to any detection, but the target is visible and correctly detected by a detection d_m^k based on the ground truth, i.e., the MDP missed the correct association. Then φ(t_ij^k(l), d_m^k) is added
to the training set as a positive example. After the training
set has been augmented, we update the binary classifier by
re-training it on the new training set. Specifically, given the
k
current training set S k = {(φ(tkm , dkm ), ym
)}M
m=1 , we solve
the following soft-margin optimization problem to obtain a
max-margin classifier for data association in camera view k:
M
X
1
k
||wLost
||2 + C
ξm
k
2
wLost
,bk
Lost ,ξ
m=1
k
k
s.t. ym
(wLost
)T ·φ(tkm , dkm )+bkLost ≥ 1−ξm , ξm ≥ 0, ∀m,
(11)
min
where ξm , m = 1, · · · , M are the slack variables, and C
is a regularization parameter. Once the classifier has been
updated, we obtain a new policy which is used in the next
iteration of the training process. Note that based on which
view the data association is carried out in, the weights of
the classifier in that view are updated in each iteration. We
keep iterating and updating the policy until all the targets
are successfully tracked. Algorithm 1 summarizes the policy
learning algorithm.
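In practice, the per-view update of Eq. (11) is a standard linear SVM fit; a sketch using scikit-learn is shown below (LinearSVC also regularizes the bias term, a minor deviation from Eq. (11)).

```python
import numpy as np
from sklearn.svm import LinearSVC

def update_lost_classifier(S_k, C=1.0):
    """Retrain the data-association classifier for camera view k on the
    augmented training set S^k (a list of (phi, y) pairs, y in {+1, -1}),
    i.e. solve the soft-margin problem of Eq. (11) via the hinge loss."""
    X = np.array([phi for phi, _ in S_k])
    y = np.array([label for _, label in S_k])
    clf = LinearSVC(C=C, loss='hinge').fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]   # (w_Lost^k, b_Lost^k)
```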
Feature Representation: We retain the same feature representation described in [19], but add two features based on
the lateral and longitudinal displacements of the last known
target location and the object proposal location in the global
coordinate system. This leverages 3D information that is
otherwise unavailable in 2D MOT. Table II summarizes our
feature representation.
TABLE III: Quantitative results showing ablative analysis of our proposed tracker (✓ = used, ✗ = not used). MOT metrics follow [38], [39]; arrows indicate whether high or low values are desirable.

| Criteria for Comparison | Tracker Variant | # of Cameras | Range Sensors | MOTA (↑) | MOTP (↓) | MT (↑) | ML (↓) | IDS (↓) |
| Number of Cameras Used (Section V-A) | - | 2 | ✓ | 73.38 | 0.03 | 71.36% | 16.13% | 16 |
| | - | 3 | ✓ | 77.26 | 0.03 | 77.34% | 14.49% | 38 |
| | - | 4 | ✓ | 72.81 | 0.05 | 72.48% | 20.76% | 49 |
| | - | 4† | ✓ | 74.18 | 0.05 | 74.10% | 18.18% | 45 |
| | - | 6 | ✓ | 79.06 | 0.04 | 79.66% | 11.93% | 51 |
| | - | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| Projection Scheme (Section V-B) | Point cloud based projection | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| | IPM projection | 8 | ✓ (for fusion) | 47.45 | 0.39 | 53.70% | 19.26% | 152 |
| Fusion Scheme (Section V-C) | Point cloud based fusion | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| | Distance based fusion | 8 | ✓ (for projection) | 72.20 | 0.04 | 68.23% | 12.23% | 65 |
| Sensor Modality (Sections V-B, V-C) | Cameras+LiDAR | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| | Cameras | 8 | ✗ | 40.98 | 0.40 | 50.00% | 27.40% | 171 |
| Vehicle Detector (Section V-D) | RefineNet [40] | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| | RetinaNet [41] | 8 | ✓ | 73.89 | 0.05 | 68.37% | 17.07% | 72 |
| | SubCat [42] | 8 | ✓ | 69.93 | 0.04 | 66.67% | 22.22% | 81 |
| Global Position based Features (Section V-E) | with {φ13, φ14} | 8 | ✓ | 75.10 | 0.04 | 70.37% | 14.07% | 59 |
| | without {φ13, φ14} | 8 | ✓ | 71.32 | 0.05 | 64.81% | 17.78% | 88 |
C. Multi-Object Tracking with MDPs
After learning the policy/reward of the MDP, we apply it to
the multi-object tracking problem. We dedicate one MDP for
each target, and the MDP follows the learned policy to track
the object. Given a new input video frame, targets in tracked
states are processed first to determine whether they should stay
as tracked or transfer to lost states. Then we compute pairwise
similarity between lost targets and object detections which
are not covered by the tracked targets, where non-maximum
suppression based on bounding box overlap is employed
to suppress covered detections, and the similarity score is
computed by the binary classifier for data association. After
that, the similarity scores are used in the Hungarian algorithm
[35] to obtain the assignment between detections and lost
targets. According to the assignment, lost targets which are
linked to some object detections are transferred to tracked
states. Otherwise, they stay as lost. Finally, we initialize an
MDP for each object detection which is not covered by any
tracked target. Algorithm 2 describes our 3D MOT algorithm
using MDPs in detail. Note that, tracked targets have higher
priority than lost targets in tracking, and detections covered
by tracked targets are suppressed to reduce ambiguities in data
association.
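The assignment step can be sketched with SciPy's Hungarian solver; `score_fn` stands for the learned per-view similarity f^k(t, d), and the score gate is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(lost_targets, detections, score_fn, min_score=0.0):
    """Hungarian data association between lost targets and uncovered
    detections; pairs whose similarity falls below `min_score` stay lost."""
    S = np.array([[score_fn(t, d) for d in detections] for t in lost_targets])
    rows, cols = linear_sum_assignment(-S)  # negate: maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if S[r, c] >= min_score]
```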
V. EXPERIMENTAL ANALYSIS
Testbed: Since we propose full-surround MOT using vision
sensors, we use a testbed comprising 8 outside-looking RGB
cameras (seen in Figure 1). This setup ensures full surround
coverage of the scene around the vehicle, while retaining
a sufficient overlap between adjacent camera views. Frames
captured from these cameras along with annotated surround
vehicles are shown in Figure 1. In addition to full vision
coverage, the testbed has full-surround Radar and LiDAR
FoVs. Despite the final goal of this study being full-surround
MOT, we additionally consider cases where only a subset of
the vision sensors are used to illustrate the modularity of the
approach. More details on the sensors, their synchronization
and calibration, and the testbed can be found in [43].
Dataset: To train and test our 3D MOT system, we collect
a set of four sequences, each 3-4 minutes long, comprising
multi-camera videos and LiDAR point clouds using our testbed
described above. The sequences are chosen much longer than
traditional MOT sequences so that long range maneuvers of
surround vehicles can be tracked. This is very crucial to
autonomous driving. We also annotate all vehicles in the 8
camera videos for each sequence with their bounding box, as
well as track IDs. It should be noted that each unique vehicle
in the scene is assigned the same ID in all camera views. With
these sequences set up, we use one sequence for training our
tracker, and reserve the rest for testing. All our results are
reported on the entire test set.
Evaluation Metrics: We use multiple metrics to evaluate
the multiple object tracking performance as suggested by the
MOT Benchmark [38]. Specifically, we use the 3D MOT
metrics described in [39]. These include Multiple Object
Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), Mostly Tracked targets (MT, percentage of ground truth objects whose trajectories are covered by the tracking output for at least 80%), Mostly Lost targets (ML, percentage of ground truth objects whose trajectories are covered by the tracking output for less than 20%), and the total number of ID
Switches (IDS). In addition to listing the metrics in Table III,
we also draw arrows next to each of them indicating if a high
(↑) or low (↓) value is desirable. Finally, we provide top-down
visualizations of the tracking results in a global coordinate
system centered on the ego-vehicle for qualitative evaluation.
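For reference, the headline accuracy metric reduces to a one-line computation over frame-wise error counts, following the CLEAR-MOT definition used by [38], [39].

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """MOTA = 1 - (FN + FP + IDS) / GT, with counts summed over all frames.
    Higher is better; the score can be negative for very poor trackers."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt
```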
A. Experimenting with Number of Cameras
As our approach to tracking is designed to be extremely
modular, we test our tracker with different camera configurations. We experiment with 2, 3, 4, 6 and 8 cameras
Fig. 5: Tracking results with different numbers of cameras: (a) ground truth, (b) 8 cameras, (c) 6 cameras, (d) 4 cameras, (e) 4† cameras, (f) 3 cameras, (g) 2 cameras. The camera configuration used is depicted above each result.
respectively. Top-down visualizations of the generated tracks
for a test sequence are depicted in Figure 5. The ground truth
tracks are provided for visual comparison. As can be seen,
the tracker provides consistent results in its FoV irrespective
of the camera configuration used, even if the cameras have no
overlap between them.
The quantitative results on the test set for each camera
configuration are listed in Table III. It must be noted that
the tracker for each configuration is scored only based on the
ground truth tracks visible in that camera configuration. The
tracker is seen to score very well on each metric, irrespective
of the number of cameras used. This illustrates the robustness
of the M3OT framework. More importantly, it is seen that
our tracker performs exceptionally well in the MT and ML
metrics, especially in camera configurations with overlapping
FoVs. Even though our test sequences are about 3 minutes long
in duration, the tracker mostly tracks more than 70% of the
targets, while mostly losing only a few. This demonstrates that
our M3OT framework is capable of long-term target tracking.
B. Effect of Projection Scheme
Figure 6 depicts the tracking results for a test sequence
using the two projection schemes proposed. It is obvious that
LiDAR based projection results in much better localization in
3D, which leads to more stable tracks and fewer fragments.
The IPM based projection scheme is very sensitive to small
changes in the input domain, and this leads to considerable
errors during gating and data association. This phenomenon
is verified by the high MOTP value obtained for IPM based
projection as listed in Table III.
C. Effect of Fusion Scheme
Once again, we see that the LiDAR point cloud based fusion
scheme is more reliable in comparison to the distance based
approach, although this difference is much less noticeable when
proposals are projected using LiDAR point clouds. The LiDAR
based fusion scheme results in objects being tracked longer
(across camera views), and more accurately. The distance
based fusion approach on the other hand fails to associate
Fig. 6: Tracking results on a test sequence with different projection schemes: (a) ground truth, (b) LiDAR based projection, (c) IPM based projection.
certain proposals, which results in templates not being stored
for new camera views, thereby cutting short the track as soon
as the target exits the current view. This superiority is reflected
in the quantitative results shown in Table III. The drawbacks of
the distance based fusion scheme are exacerbated when using
IPM to project proposals, reflected by the large drop in MOTA
for a purely vision based system. This drop in performance is
to be expected in the absence of LiDAR sensors. However, it
must be noted that half the targets are still tracked for most
of their lifetime, while only a quarter of the targets are mostly
lost.
D. Effect of using Different Vehicle Detectors
Ideally, a tracking-by-detection approach should be detector
agnostic. To observe how the tracking results change for
different vehicle detectors, we ran the proposed tracker on
vehicle detections obtained from three commonly used object
detectors [40]–[42]. All three detectors were trained on the
KITTI dataset [44] and have not seen examples from the
proposed multi-camera dataset. The ROC curves for the detectors on the proposed dataset are shown in Figure 7. The
corresponding tracking results for each detector are listed in
Table III. Despite the sub-optimal performance of all three
detectors in addition to significant differences in their ROC
curves, the tracking results are seen to be relatively unaffected.
This indicates that the tracker is less sensitive to errors made
by the detector, and consistently manages to correct for it.

Fig. 7: ROC curves for different vehicle detectors on the 4 test sequences.
E. Effect of Global Position based Features
Table III indicates a clear benefit in incorporating features
{φ13 , φ14 } for data association in Lost states. These features
express how near/far a proposal is from the last known location
of a target. This helps the tracker disregard proposals that
are unreasonably far away from the latest target location.
Introduction of these features leads to an improvement in all
metrics and therefore justifies their inclusion.
VI. CONCLUDING REMARKS
In this work, we have described a full-surround camera
and LiDAR based approach to multi-object tracking for autonomous vehicles. To do so, we extend a 2D MOT approach
based on the tracking-by-detection framework, and make it
capable of tracking objects in the real world. The proposed
M3OT framework is also made highly modular so that it
is capable of working with any camera configuration with
varying FoVs, and also with or without LiDAR sensors. An
efficient and fast early fusion scheme is adopted to handle
object proposals from different sensors within a calibrated
camera array. We conduct extensive testing on naturalistic
full-surround vision and LiDAR data collected on highways,
and illustrate the effects of different camera setups, fusion
schemes and 2D-to-3D projection schemes, both qualitatively
and quantitatively. Results obtained on the dataset support the
modular nature of our framework, as well as its ability to track
objects for a long duration. In addition to this, we believe
that the M3OT framework can be used to test the utility of
any camera setup, and make suitable modifications thereof to
ensure optimum coverage from vision and range sensors.
VII. ACKNOWLEDGMENTS
We gratefully acknowledge the continued support of our
industrial and other sponsors. We would also like to thank
our colleagues Kevan Yuen, Nachiket Deo, Ishan Gupta and Borhan Vasli at the Laboratory for Intelligent and Safe Automobiles (LISA), UC San Diego for their useful inputs and help in collecting and annotating the dataset.

REFERENCES
[1] J. Choi, S. Ulbrich, B. Lichte, and M. Maurer, “Multi-target tracking
using a 3d-lidar sensor for autonomous vehicles,” in Intelligent Transportation Systems-(ITSC), 2013 16th International IEEE Conference on.
IEEE, 2013, pp. 881–886.
[2] S. Song, Z. Xiang, and J. Liu, “Object tracking with 3d lidar via multitask sparse learning,” in Mechatronics and Automation (ICMA), 2015
IEEE International Conference on. IEEE, 2015, pp. 2603–2608.
[3] A. Asvadi, P. Peixoto, and U. Nunes, “Detection and tracking of moving
objects using 2.5 d motion grids,” in Intelligent Transportation Systems
(ITSC), 2015 IEEE 18th International Conference on. IEEE, 2015, pp.
788–793.
[4] A. Asvadi, C. Premebida, P. Peixoto, and U. Nunes, “3d lidar-based
static and moving obstacle detection in driving environments: An
approach based on voxels and multi-region ground planes,” Robotics
and Autonomous Systems, vol. 83, pp. 299–311, 2016.
[5] D. Pfeiffer and U. Franke, “Efficient representation of traffic scenes by
means of dynamic stixels,” in Intelligent Vehicles Symposium (IV), 2010
IEEE. IEEE, 2010, pp. 217–224.
[6] A. Broggi, S. Cattani, M. Patander, M. Sabbatelli, and P. Zani, “A full3d voxel-based dynamic obstacle detection for urban scenario using
stereo vision,” in Intelligent Transportation Systems-(ITSC), 2013 16th
International IEEE Conference on. IEEE, 2013, pp. 71–76.
[7] A. Vatavu, R. Danescu, and S. Nedevschi, “Stereovision-based multiple
object tracking in traffic scenarios using free-form obstacle delimiters
and particle filters,” IEEE Transactions on Intelligent Transportation
Systems, vol. 16, no. 1, pp. 498–511, 2015.
[8] A. Ošep, A. Hermans, F. Engelmann, D. Klostermann, M. Mathias,
and B. Leibe, “Multi-scale object candidates for generic object tracking
in street scenes,” in Robotics and Automation (ICRA), 2016 IEEE
International Conference on. IEEE, 2016, pp. 3180–3187.
[9] R. K. Satzoda, S. Lee, F. Lu, and M. M. Trivedi, “Vision based
front & rear surround understanding using embedded processors,” IEEE
Transactions on Intelligent Vehicles, 2017.
[10] S. Sivaraman and M. M. Trivedi, “Dynamic probabilistic drivability
maps for lane change and merge driver assistance,” IEEE Transactions
on Intelligent Transportation Systems, vol. 15, no. 5, pp. 2063–2073,
2014.
[11] H. Cho, Y.-W. Seo, B. V. Kumar, and R. R. Rajkumar, “A multisensor fusion system for moving object detection and tracking in urban
driving environments,” in Robotics and Automation (ICRA), 2014 IEEE
International Conference on. IEEE, 2014, pp. 1836–1843.
[12] A. Asvadi, P. Girão, P. Peixoto, and U. Nunes, “3d object tracking using
rgb and lidar data,” in Intelligent Transportation Systems (ITSC), 2016
IEEE 19th International Conference on. IEEE, 2016, pp. 1255–1260.
[13] M. Allodi, A. Broggi, D. Giaquinto, M. Patander, and A. Prioletti,
“Machine learning in tracking associations with stereo vision and
lidar observations for an autonomous vehicle,” in Intelligent Vehicles
Symposium (IV), 2016 IEEE. IEEE, 2016, pp. 648–653.
[14] S.-H. Bae and K.-J. Yoon, “Robust online multi-object tracking based
on tracklet confidence and online discriminative appearance learning,”
in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2014, pp. 1218–1225.
[15] S. Kim, S. Kwak, J. Feyereisl, and B. Han, “Online multi-target tracking
by large margin structured learning,” in Asian Conference on Computer
Vision. Springer, 2012, pp. 98–111.
[16] C.-H. Kuo, C. Huang, and R. Nevatia, “Multi-target tracking by online learned discriminative appearance models,” in Computer Vision and
Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010,
pp. 685–692.
[17] Y. Li, C. Huang, and R. Nevatia, “Learning to associate: Hybridboosted
multi-target tracker for crowded scene,” in Computer Vision and Pattern
Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp.
2953–2960.
[18] X. Song, J. Cui, H. Zha, and H. Zhao, “Vision-based multiple interacting
targets tracking via on-line supervised learning,” in European Conference on Computer Vision. Springer, 2008, pp. 642–655.
[19] Y. Xiang, A. Alahi, and S. Savarese, “Learning to track: Online multiobject tracking by decision making,” in Proceedings of the IEEE
International Conference on Computer Vision, 2015, pp. 4705–4713.
[20] B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with
online multiple instance learning,” IEEE transactions on pattern analysis
and machine intelligence, vol. 33, no. 8, pp. 1619–1632, 2011.
[21] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real time robust l1 tracker using
accelerated proximal gradient approach,” in Computer Vision and Pattern
Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 1830–
1837.
[22] S. Hare, S. Golodetz, A. Saffari, V. Vineet, M.-M. Cheng, S. L. Hicks,
and P. H. Torr, “Struck: Structured output tracking with kernels,” IEEE
transactions on pattern analysis and machine intelligence, vol. 38,
no. 10, pp. 2096–2109, 2016.
[23] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,”
IEEE transactions on pattern analysis and machine intelligence, vol. 34,
no. 7, pp. 1409–1422, 2012.
[24] J. V. Dueholm, M. S. Kristoffersen, R. K. Satzoda, T. B. Moeslund, and
M. M. Trivedi, “Trajectories and maneuvers of surrounding vehicles with
panoramic camera arrays,” IEEE Transactions on Intelligent Vehicles,
vol. 1, no. 2, pp. 203–214, 2016.
[25] S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road:
A survey of vision-based vehicle detection, tracking, and behavior
analysis,” IEEE Transactions on Intelligent Transportation Systems,
vol. 14, no. 4, pp. 1773–1795, 2013.
[26] J. Berclaz, F. Fleuret, E. Turetken, and P. Fua, “Multiple object tracking
using k-shortest paths optimization,” IEEE transactions on pattern
analysis and machine intelligence, vol. 33, no. 9, pp. 1806–1819, 2011.
[27] A. A. Butt and R. T. Collins, “Multi-target tracking by lagrangian
relaxation to min-cost network flow,” in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2013, pp.
1846–1853.
[28] A. Milan, S. Roth, and K. Schindler, “Continuous energy minimization
for multitarget tracking,” IEEE transactions on pattern analysis and
machine intelligence, vol. 36, no. 1, pp. 58–72, 2014.
[29] J. C. Niebles, B. Han, and L. Fei-Fei, “Efficient extraction of human motion volumes by tracking,” in Computer Vision and Pattern Recognition
(CVPR), 2010 IEEE Conference on. IEEE, 2010, pp. 655–662.
[30] H. Pirsiavash, D. Ramanan, and C. C. Fowlkes, “Globally-optimal
greedy algorithms for tracking a variable number of objects,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference
on. IEEE, 2011, pp. 1201–1208.
[31] L. Zhang, Y. Li, and R. Nevatia, “Global data association for multiobject tracking using network flows,” in Computer Vision and Pattern
Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008, pp.
1–8.
[32] Z. Khan, T. Balch, and F. Dellaert, “Mcmc-based particle filtering for
tracking a variable number of interacting targets,” IEEE transactions on
pattern analysis and machine intelligence, vol. 27, no. 11, pp. 1805–
1819, 2005.
[33] S. Oh, S. Russell, and S. Sastry, “Markov chain monte carlo data
association for multi-target tracking,” IEEE Transactions on Automatic
Control, vol. 54, no. 3, pp. 481–497, 2009.
[34] K. Okuma, A. Taleghani, N. d. Freitas, J. J. Little, and D. G. Lowe,
“A boosted particle filter: Multitarget detection and tracking,” Computer
Vision-ECCV 2004, pp. 28–39, 2004.
[35] J. Munkres, “Algorithms for the assignment and transportation problems,” Journal of the society for industrial and applied mathematics,
vol. 5, no. 1, pp. 32–38, 1957.
[36] M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and
L. Van Gool, “Online multiperson tracking-by-detection from a single,
uncalibrated camera,” IEEE transactions on pattern analysis and machine intelligence, vol. 33, no. 9, pp. 1820–1833, 2011.
[37] J.-Y. Bouguet, “Pyramidal implementation of the affine lucas kanade
feature tracker description of the algorithm,” Intel Corporation, vol. 5,
no. 1-10, p. 4, 2001.
[38] K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking
performance: the clear mot metrics,” EURASIP Journal on Image and
Video Processing, vol. 2008, no. 1, pp. 1–10, 2008.
[39] A. Milan, K. Schindler, and S. Roth, “Challenges of ground truth evaluation of multi-target tracking,” in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition Workshops, 2013, pp. 735–
742.
[40] R. N. Rajaram, E. Ohn-Bar, and M. M. Trivedi, “Refinenet: Refining object detectors for autonomous driving,” IEEE Transactions on Intelligent
Vehicles, vol. 1, no. 4, pp. 358–368, 2016.
[41] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for
dense object detection,” arXiv preprint arXiv:1708.02002, 2017.
[42] E. Ohn-Bar and M. M. Trivedi, “Fast and robust object detection
using visual subcategories,” in Computer Vision and Pattern Recognition
Workshops (CVPRW), 2014 IEEE Conference on. IEEE, 2014, pp. 179–
184.
[43] A. Rangesh, K. Yuen, R. K. Satzoda, R. N. Rajaram, P. Gunaratne, and
M. M. Trivedi, “A multimodal, full-surround vehicular testbed for naturalistic studies and benchmarking: Design, calibration and deployment,”
arXiv preprint arXiv:1709.07502, 2017.
[44] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics:
The kitti dataset,” The International Journal of Robotics Research,
vol. 32, no. 11, pp. 1231–1237, 2013.
Akshay Rangesh is currently working towards his
PhD in electrical engineering from the University
of California at San Diego (UCSD), with a focus
on intelligent systems, robotics, and control. His
research interests span computer vision and machine
learning, with a focus on object detection and tracking, human activity recognition, and driver safety
systems in general. He is also particularly interested
in sensor fusion and multi-modal approaches for real
time algorithms.
Mohan Manubhai Trivedi is a Distinguished
Professor at University of California, San Diego
(UCSD) and the founding director of the UCSD
LISA: Laboratory for Intelligent and Safe Automobiles, winner of the IEEE ITSS Lead Institution
Award (2015). Currently, Trivedi and his team are
pursuing research in intelligent vehicles, machine
perception, machine learning, human-robot interactivity, driver assistance, active safety systems. Three
of his students have received ”best dissertation”
recognitions. Trivedi is a Fellow of IEEE, ICPR and
SPIE. He received the IEEE ITS Society’s highest accolade ”Outstanding
Research Award” in 2013. Trivedi serves frequently as a consultant to industry
and government agencies in the USA and abroad.
arXiv:1410.0640v3 [] 6 Oct 2014
Term-Weighting Learning via Genetic Programming for
Text Classification
Hugo Jair Escalante (a,∗), Mauricio A. García-Limón (a), Alicia Morales-Reyes (a), Mario Graff (b), Manuel Montes-y-Gómez (a), Eduardo F. Morales (a)

(a) Computer Science Department, Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Puebla, 72840, Mexico
(b) División de Estudios de Postgrado, Facultad de Ingeniería Eléctrica, Universidad Michoacana de San Nicolás de Hidalgo, México
Abstract
This paper describes a novel approach to learning term-weighting schemes
(TWSs) in the context of text classification. In text mining a TWS determines the way in which documents will be represented in a vector space
model, before applying a classifier. Whereas acceptable performance has been
obtained with standard TWSs (e.g., Boolean and term-frequency schemes),
the definition of TWSs has traditionally been an art. Further, it is still a difficult task to determine what the best TWS is for a particular problem, and it
is not yet clear whether better schemes than those currently available can
be generated by combining known TWSs. We propose in this article a genetic
program that aims at learning effective TWSs that can improve the performance of current schemes in text classification. The genetic program learns
how to combine a set of basic units to give rise to discriminative TWSs. We
report an extensive experimental study comprising data sets from thematic
and non-thematic text classification as well as from image classification. Our
study shows the validity of the proposed method; in fact, we show that TWSs
learned with the genetic program outperform traditional schemes and other
TWSs proposed in recent works. Further, we show that TWSs learned from
a specific domain can be effectively used for other tasks.

Keywords: Term-weighting Learning, Genetic Programming, Text Mining, Representation Learning, Bag of words.
2000 MSC: 68T10, 68T20

∗ Corresponding author.
Email addresses: [email protected] (Hugo Jair Escalante), [email protected] (Mauricio A. García-Limón), [email protected] (Alicia Morales-Reyes), [email protected] (Mario Graff), [email protected] (Manuel Montes-y-Gómez), [email protected] (Eduardo F. Morales)

Preprint submitted to Elsevier. October 8, 2014
1. Introduction
Text classification (TC) is the task of associating documents with predefined categories that are related to their content. TC is an important and
active research field because of the large number of digital documents available and the consequent need to organize them. The TC problem has been
approached with pattern classification methods, where documents are represented as numerical vectors and standard classifiers (e.g., naïve Bayes and
support vector machines) are applied (Sebastiani, 2008). This type of representation is known as the vector space model (VSM) (Salton and Buckley,
1988). Under the VSM one assumes a document is a point in an N-dimensional
space and documents that are closer in that space are similar to each other (Turney and Pantel,
2010). Among the different instances of the VSM, perhaps the most used
model is the bag-of-words (BOW) representation. In the BOW it is assumed
that the content of a document can be determined by the (orderless) set of
terms1 it contains. Documents are represented as points in the vocabulary
space, that is, a document is represented by a numerical vector of length equal
to the number of different terms in the vocabulary (the set of all different
terms in the document collection). The elements of the vector specify how
important the corresponding terms are for describing the semantics or the
content of the document. BOW is the most used document representation
in both TC and information retrieval. In fact, the BOW representation has
been successfully adopted for processing other media besides text, including
images (Csurka et al., 2004), videos (Sivic and Zisserman, 2003), speech signals (S.Manchala et al., 2014), and time series (Wanga et al., 2013), among
others.

1. A term is any basic unit by which documents are formed; for instance, terms could be words, phrases, and sequences (n-grams) of words or characters.
A crucial component of TC systems using the BOW representation is the
term-weighting scheme (TWS), which is in charge of determining how relevant a term is for describing the content of a document (Feldman and Sanger,
2006; Altyncay and Erenel, 2010; Lan et al., 2009; Debole and Sebastiani,
2003). Traditional TWS are term-frequency (TF ), where the importance of a
term in a document is given by its frequency of occurrence in the document;
Boolean (B), where the importance of a term in a document is either 1, when
the term appears in the document, or 0, when the term does not appear in
the document; and term-frequency inverse-document-frequency (TF-IDF ),
where the importance of a term for a document is determined by its occurrence frequency times the inverse frequency of the term across the corpus
(i.e., frequent terms in the corpus, such as prepositions and articles, receive a low
weight). Although TC is a widely studied topic with very important developments in the last two decades (Sebastiani, 2008; Feldman and Sanger,
2006), it is somewhat surprising that little attention has been paid to the
development of new TWSs to better represent the content of documents for
TC. In fact, it is quite common in TC systems that researchers use one or
two common TWSs (e.g., B, TF or TF-IDF ) and put more effort in other
processes, like feature selection (Forman, 2003; Yang and Pedersen, 1997),
or the learning process itself (Agarwal and Mittal, 2014; Aggarwal, 2012;
Escalante et al., 2009). Although all of the TC phases are equally important, we think that by putting more emphasis on defining or learning effective
TWSs we can achieve substantial improvements in TC performance.
This paper introduces a novel approach to learning TWS for TC tasks.
A genetic program is proposed in which a set of primitives and basic TWSs
are combined through arithmetic operators in order to generate alternative
schemes that can improve the performance of a classifier. Genetic programming is a type of evolutionary algorithm in which a population of programs is
evolved (Langdon and Poli, 2001), where programs encode solutions to complex problems (mostly modeling problems), in this work programs encode
TWSs. The underlying hypothesis of our proposed method is that an evolutionary algorithm can learn TWSs of comparable or even better performance
than those proposed so far in the literature.
Traditional TWSs combine term-importance and term-document-importance
factors to generate TWSs. For instance, in TF-IDF, TF and IDF are term-document-importance and term-importance factors, respectively. Term-document
weights are referred to as local factors, because they account for the occurrence
of a term in a document (locally). On the other hand, term-relevance weights
are considered global factors, as they account for the importance of a term
across the corpus (globally). It is noteworthy that the actual factors that define a TWS and the combination strategy itself have been determined manually. Herein we explore the suitability of learning these TWSs automatically,
by providing a genetic program with a pool of TWSs’ building blocks with
the goal of evolving a TWS that maximizes the classification performance
for a TC classifier. We report experimental results in many TC collections
that comprise both thematic and non-thematic TC problems. Through
extensive experimentation we show that the proposed approach is very competitive, learning very effective TWSs that outperform most of the schemes
proposed so far. We evaluate the performance of the proposed approach under different settings and analyze the characteristics of the learned TWSs.
Additionally, we evaluate the generalization capabilities of the learned TWSs
and even show that a TWS learned from text can be used to effectively represent images under the BOW formulation.
The rest of this document is organized as follows. Next section formally
introduces the TC task and describes common TWSs. Section 3 reviews
related work on TWSs. Section 4 introduces the proposed method. Section 5 describes the experimental settings adopted in this work and reports
results of experiments that aim at evaluating different aspects of the proposed approach. Section 6 presents the conclusions derived from this paper
and outlines future research directions.
2. Text classification with the Bag of Words
The most studied TC problem is the so called thematic TC (or simply
text categorization) (Sebastiani, 2008), which means that classes are associated to different themes or topics (e.g., classifying news into “Sports” vs.
“Politics” categories). In this problem, the sole occurrence of certain terms
may be enough to determine the topic of a document; for example, the occurrence of words/terms “Basketball”, “Goal”, “Ball”, and “Football” in a
document is strong evidence that the document is about “Sports”. Of course,
there are more complex scenarios for thematic TC, for example, distinguishing documents about sports news into the categories: “Soccer” vs. “NFL”.
Non-thematic TC, on the other hand, deals with the problem of associating
documents with labels that are not (completely) related to their topics. Non-thematic TC includes the problems of authorship attribution (Stamatatos,
2009), opinion mining and sentiment analysis (Pang et al., 2002), authorship
verification (Koppel and Schler, 2004), author profiling (Koppel et al., 2002),
among several others (Reyes and Rosso, 2014; Kiddon and Brun, 2011). In
all of these problems, the thematic content is of no interest, nevertheless,
it is common to adopt standard TWSs for representing documents in nonthematic TC as well (e.g., BOW using character n-grams or part-of-speech
tags (Stamatatos, 2009)).
It is noteworthy that the BOW representation has even transcended the
boundaries of the text media. Nowadays, images (Csurka et al., 2004), videos (Sivic and Zisserman,
2003), audio (S.Manchala et al., 2014), and other types of data (Wanga et al.,
2013) are represented through analogies to the BOW. In non-textual data,
a codebook is first defined/learned and then the straight BOW formulation
is adopted. In image classification, for example, visual descriptors extracted
from images are clustered and the centers of the clusters are considered as
visual words (Csurka et al., 2004; Zhang et al., 2007). Images are then represented by numerical vectors (i.e., a VSM) that indicate the relevance of
visual words for representing the images. Interestingly, in media other than
text (e.g., video, images) it is standard to use only the TF TWS, hence motivating the study on the effectiveness of alternative TWSs in non-textual
tasks. Accordingly, in this work we also perform experiments on learning
TWSs for a standard computer vision problem (Fei-Fei et al., 2004).
TC is a problem that has been approached mostly as a supervised learning
task, where the goal is to learn a model capable of associating documents to
categories (Sebastiani, 2008; Feldman and Sanger, 2006; Agarwal and Mittal,
2014). Consider a data set of labeled documents $D = \{(x_i, y_i)\}_{i=1,\ldots,N}$ with N
pairs of documents ($x_i$) and their classes ($y_i$) associated to a TC problem,
where we assume $x_i \in \mathbb{R}^p$ (i.e., a VSM) and $y_i \in C = \{1, \ldots, K\}$ for a problem with K classes. The goal of TC is to learn a function $f : \mathbb{R}^p \rightarrow C$ from
D that can be used to make predictions for documents with unknown labels,
the so-called test set $T = \{x^T_1, \ldots, x^T_M\}$. Under the BOW formulation, the
dimensionality of the documents' representation, p, is defined as $p = |V|$, where
V is the vocabulary (i.e., the set of all the different terms/words that appear
in a corpus). Hence, each document $d_i$ is represented by a numerical vector
$x_i = \langle x_{i,1}, \ldots, x_{i,|V|} \rangle$, where each element $x_{i,j}$, $j = 1, \ldots, |V|$, of $x_i$ indicates
how relevant term $t_j$ is for describing the content of $d_i$, and where the value
of $x_{i,j}$ is determined by the TWS.
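To make the representation concrete, the following is a minimal sketch (in Python with NumPy; all helper names are illustrative, not part of the original formulation) of how the raw count matrix underlying any TWS can be built from tokenized documents.

```python
import numpy as np
from collections import Counter

def build_vocabulary(docs):
    """docs: list of token lists. Returns a term -> column index map."""
    return {t: j for j, t in enumerate(sorted({t for d in docs for t in d}))}

def count_matrix(docs, vocab):
    """Raw term counts #(d_i, t_j); every TWS discussed here can be seen
    as a transformation of this N x |V| matrix."""
    X = np.zeros((len(docs), len(vocab)))
    for i, doc in enumerate(docs):
        for t, c in Counter(doc).items():
            if t in vocab:
                X[i, vocab[t]] = c
    return X
```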
Many TWSs have been proposed so far, including unsupervised (Sebastiani,
2008; Salton and Buckley, 1988; Feldman and Sanger, 2006) and supervised
schemes (Debole and Sebastiani, 2003; Lan et al., 2009), see Section 3. Unsupervised TWSs are the most used ones; they were first proposed for
information retrieval tasks and later adopted for TC (Sebastiani, 2008;
Salton and Buckley, 1988). Unsupervised schemes rely on term frequency
statistics and measurements that do not take into account any label information. For instance, under the Boolean (B) scheme $x_{i,j} = 1$ iff term
$t_j$ appears in document $d_i$ and 0 otherwise, while in the term-frequency
(TF) scheme $x_{i,j} = \#(d_i, t_j)$, where $\#(d_i, t_j)$ accounts for the times term
tj appears in document di . On the other hand, supervised TWSs aim at
incorporating discriminative information into the representation of documents (Debole and Sebastiani, 2003). For example in the TF-IG scheme,
$x_{i,j} = \#(d_i, t_j) \times IG(t_j)$ is the product of the TF TWS for term $t_j$ and
document $d_i$ (a local factor) with the information gain of term $t_j$ ($IG(t_j)$, a
global factor). In this way, the discrimination power of each term is taken
into account for the document representation; in this case through the information gain value (Yang and Pedersen, 1997). It is important to emphasize
that most TWSs combine information from both term-importance (global)
and term-document-importance (local) factors (see Section 3), for instance,
in the TF-IG scheme, IG is a term-importance factor, whereas TF is a term-document-importance factor.
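As an illustration, the schemes just described are simple transformations of the raw count matrix. The sketch below assumes NumPy arrays; the information-gain vector ig is assumed to be precomputed (a per-term computation is sketched in the next section).

```python
import numpy as np

def boolean_tws(counts):
    # B: presence/absence of each term in each document.
    return (counts > 0).astype(float)

def tf_idf_tws(counts):
    # TF-IDF: raw counts damped by the (log) inverse document frequency.
    N = counts.shape[0]
    df = np.maximum((counts > 0).sum(axis=0), 1)  # document frequency
    return counts * np.log(N / df)

def tf_ig_tws(counts, ig):
    # TF-IG: local TF factor times a global, supervised term-relevance
    # factor (information gain), broadcast across the rows (documents).
    return counts * ig
```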
Although acceptable performance has been reported with existing TWSs,
it is still an art determining the adequate TWS for a particular data set; as
a result, mostly unsupervised TWSs (e.g., B, TF and TF-IDF ) have been
adopted for TC systems (Feldman and Sanger, 2006; Aggarwal, 2012). A first
hypothesis of this work is that different TWSs can achieve better performance
on different TC tasks (e.g., thematic TC vs. non-thematic TC); in fact, we
claim that within a same domain (e.g., news classification) different TWSs
are required to obtain better classification performance on different data sets.
On the other hand, we notice that TWSs have been defined as combinations
of term-document weighting factors (which can be seen as other TWSs, e.g.,
TF ) and term-relevance measurements (e.g., IDF or IG), where the definition of TWSs has been done by relying on the expertise of users/researchers.
Our second hypothesis is that the definition of new TWSs can be automated.
With the aim of verifying both hypotheses, this paper introduces a genetic
program that learns how to combine term-document-importance and term-relevance factors to generate effective TWSs for diverse TC tasks.
3. Related work
As previously mentioned, in TC it is rather common to use unsupervised
TWSs to represent documents, specifically B, TF and TF-IDF schemes are
very popular (see Table 1). Their popularity derives from the fact that these
schemes have proved to be very effective in information retrieval (Salton and Buckley,
1988; Baeza-Yates and Ribeiro-Neto, 1999; Turney and Pantel, 2010) and
in many TC problems as well (Sebastiani, 2008; Feldman and Sanger,
2006; Agarwal and Mittal, 2014; Aggarwal, 2012; Aggarwal and Zhai, 2012).
Unsupervised TWSs mainly capture term-document occurrence (e.g., term
occurrence frequency, TF ) and term-relevance (e.g., inverse document frequency, IDF ) information. While acceptable performance has been obtained with such TWSs in many applications, in TC one has available labeled documents, and hence, document-label information can also be exploited to obtain more discriminative TWSs. This observation was noticed
by Debole & Sebastiani and other authors that have introduced supervised
TWSs (Debole and Sebastiani, 2003; Lan et al., 2009).
Supervised TWSs take advantage of labeled data by incorporating a discriminative term-weighting factor into the TWSs. In (Debole and Sebastiani,
2003) TWSs were defined by combining the unsupervised TF scheme with the
following term-relevance criteria: information gain (TF-IG), which measures
the reduction of entropy when using a term as classifier (Yang and Pedersen,
1997); χ2 (TF-CHI), which makes an independence test regarding a term and the
classes (Sebastiani, 2008); and gain-ratio (TF-GR), which measures the gain-ratio
when using the term as classifier (Debole and Sebastiani, 2003). The conclusions from (Debole and Sebastiani, 2003) were that small improvements
can be obtained with supervised TWSs over unsupervised ones. Although
somewhat disappointing, it is interesting that for some scenarios supervised
TWSs were beneficial. More recently, Lan et al. proposed an alternative
supervised TWS (Lan et al., 2009), the so called TF-RF scheme. TF-RF
combines TF with a criterion that takes into account the true positive
and true negative rates when using the occurrence of the term as a classifier.
In (Lan et al., 2009) the proposed TF-RF scheme obtained better performance than unsupervised TWSs and even outperformed the schemes proposed in (Debole and Sebastiani, 2003). In (Altyncay and Erenel, 2010) the
RF term-relevance factor was compared with alternative weights, including
mutual information, odds ratio and χ2 ; in that work RF outperformed the
other term-importance criteria.
Table 1 shows most of the TWSs proposed so far for TC. It can be observed that TWSs are formed by combining term-document (TDR) and term
(TR) relevance weights. The selection of what TDR and TR weights to use
relies on researchers' choices (and hence on their biases). It is quite common
to use TF as TDR, because undoubtedly the term-occurrence frequency carries on very important information: we need a way to know what terms a
document is associated with. However, it is not that clear what TR weight
to use, as there is a wide variety of TR factors that have been proposed. The
goal of TRs is to determine the importance of a given term, with respect
to the documents in a corpus (in the unsupervised case) or to the classes
of the problem (in the supervised case). Unsupervised TRs include: global
term-frequency, and inverse document frequency (IDF) TRs. These weights
can capture word importance depending on its global usage across a corpus,
however, for TC it seems more appealing to use discriminative TRs, as one can
take advantage of training labeled data. In this aspect, there is a wide variety
of supervised TRs that have been proposed, including: mutual information,
information gain, odds ratio, etcetera (Aggarwal and Zhai, 2012).
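To make one such supervised TR concrete, the following sketch computes the information gain of a single term treated as a binary classifier of the class labels. It assumes integer labels in 0..K-1 and boolean presence indicators; a real implementation would vectorize this over the whole vocabulary.

```python
import numpy as np

def information_gain(presence, y):
    """IG of one term used as a binary classifier of the labels:
    IG(t) = H(C) - P(t) H(C | t present) - P(~t) H(C | t absent)."""
    presence = np.asarray(presence, dtype=bool)
    y = np.asarray(y)

    def entropy(labels):
        if labels.size == 0:
            return 0.0
        p = np.bincount(labels) / labels.size
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    pt = presence.mean()
    return entropy(y) - pt * entropy(y[presence]) - (1 - pt) * entropy(y[~presence])
```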
Table 1: Common term-weighting schemes for TC. In every TWS, x_{i,j} indicates how relevant term t_j is for describing the content of document d_i under the corresponding TWS. N is the number of documents in the training data set, #(d_i, t_j) indicates the frequency of term t_j in document d_i, df(t_j) is the number of documents in which term t_j occurs, IG(t_j) is the information gain of term t_j, CHI(t_j) is the χ2 statistic for term t_j, and TP, TN are the true positive and true negative rates for term t_j (i.e., the number of positive, resp. negative, documents that contain term t_j).

B (Boolean): x_{i,j} = 1 if #(d_i, t_j) > 0 and 0 otherwise. Indicates the presence/absence of terms. (Salton and Buckley, 1988)
TF (Term-Frequency): x_{i,j} = #(d_i, t_j). Accounts for the frequency of occurrence of terms. (Salton and Buckley, 1988)
TF-IDF (TF - Inverse Document Frequency): x_{i,j} = #(d_i, t_j) × log(N / df(t_j)). A TF scheme that penalizes the frequency of terms across the collection. (Salton and Buckley, 1988)
TF-IG (TF - Information Gain): x_{i,j} = #(d_i, t_j) × IG(t_j). A TF scheme that weights term occurrence by its information gain across the corpus. (Debole and Sebastiani, 2003)
TF-CHI (TF - Chi-square): x_{i,j} = #(d_i, t_j) × CHI(t_j). A TF scheme that weights term occurrence by its χ2 statistic. (Debole and Sebastiani, 2003)
TF-RF (TF - Relevance Frequency): x_{i,j} = #(d_i, t_j) × log(2 + TP / max(1, TN)). A TF scheme that weights term occurrence by its relevance frequency. (Lan et al., 2009)
The goal of a supervised TR weight is to determine the importance of a
given term with respect to the classes. The simplest TR would be to estimate the correlation of term frequencies and the classes, although any other
criterion that accounts for the association of terms and classes can be helpful
as well. It is interesting that although many TRs are available out there, they
have been mostly used for feature selection rather than for building TWSs for
TC. Comprehensive and extensive comparative studies using supervised TRs
for feature selection have been reported (Altyncay and Erenel, 2010; Forman,
2003; Yang and Pedersen, 1997; Mladenic and Grobelnik, 1999). Although
not conclusive, these studies serve to identify the most effective TR
weights; such weights are considered in this study.
To the best of our knowledge, the way we approach the problem of learning TWSs for TC is novel. Similar approaches based on genetic programming to learn TWSs have been proposed in (Cummins and O’Riordan, 2006,
2007, 2005; Trotman, 2005; Oren, 2002; Fan et al., 2004a), however, these researchers have focused on the information retrieval problem, which differs significantly from TC. Early approaches using genetic programming to improve
the TF-IDF scheme for information retrieval include those from (Trotman,
2005; Oren, 2002; Fan et al., 2004a,b). More recently, Cummins et al. proposed improved genetic programs to learn TWSs also for information retrieval (Cummins and O’Riordan, 2006, 2007, 2005).
Although the work by Cummins et al. is very related to ours, there are
major differences (besides the problem being approached): Cummins et al.
approached the information retrieval task and defined a TWS as a combination of three factors: local, global weighting schemes and a normalization
factor (recall that a local factor incorporates term information (locally) available in a document, whereas a global factor takes into account term statistics estimated across the whole corpus; in information retrieval it is also common to normalize the vectors representing a document to reduce the impact of its length). The authors designed a genetic program that aimed at learning a
TWS by evolving the local and global schemes separately. Only 11 terminals, including constants, were considered. Since information retrieval is an
unsupervised task, the authors have to use a whole corpus with relevance
judgements (i.e., a collection of documents with queries and the set of relevant documents to each query) to learn the TWS, which, once learned, could
be used for other information retrieval tasks. Hence they require a whole
collection of documents to learn a TWS. On the other hand, the authors
learned a TWS separately, first a global TWS was evolved fixing a binary
local scheme, then a local scheme was learned by fixing the learned global
weight. Hence, they restrict the search space for the genetic program, which
may limit the TWSs that can be obtained. Also, it is worth noticing that the
focus of the authors of (Cummins and O’Riordan, 2006, 2007, 2005) was on
learning a single, and generic TWS to be used for other information retrieval
problems, hence the authors performed many experiments and reported the
single best solution they found after extensive experimentation. Herein, we
provide an extensive evaluation of the proposed approach, reporting average performance over many runs and many data sets. Finally, one should
note that the approach from (Cummins and O’Riordan, 2006, 2007, 2005) required large populations and numbers of generations (1000 individuals and
500 generations were used), whereas in this work competitive performance is
obtained with only 50 individuals and 50 generations.
4. Learning term-weighting schemes via GP
As previously mentioned, the traditional approach for defining TWSs has
been somewhat successful so far. Nevertheless, it is still unknown whether
we can automatize the TWS definition process and obtain TWSs of better
classification performance in TC tasks. In this context, we propose a genetic
programming solution that aims at learning effective TWSs automatically.
We provide the genetic program with a pool of TDR and TR weights as
well as other TWSs and let a program search for the TWS that maximizes
an estimate of classification performance. Thus, instead of defining TWSs
based on our own experience in text mining, we let the computer itself build
an effective TWS. The advantages of this approach are that it may allow one to
learn a specific TWS for each TC problem, or to learn TWSs from one data
set (e.g., a small one) and implement it in a different collection (e.g., a
huge one). Furthermore, the method reduces the dependency on users/data-analysts and their degree of expertise and biases for defining TWSs. The
rest of this section describes the proposed approach. We start by providing a
brief overview of genetic programming, then we explain in detail the proposal,
finally, we close this section with a discussion on the benefits and limitations
of our approach.
4.1. Genetic programming
Genetic programming (GP) (Langdon and Poli, 2001) is an evolutionary
technique which follows the reproductive cycle of other evolutionary algorithms such as genetic algorithms (see Figure 1): an initial population is
created (randomly or by a pre-defined criterion), after that, individuals are
selected, recombined, mutated and then placed back into the solutions pool.
The distinctive feature of GP, when compared to other evolutionary algorithms, is that complex data structures are used to represent solutions
(individuals), for example, trees or graphs. As a result, GP can be used for
solving complex learning/modeling problems. In the following we describe
the GP approach to learn TWSs for TC.
Figure 1: A generic diagram of an evolutionary algorithm.
4.2. TWS learning with genetic programming
We face the problem of learning TWSs as an optimization one, in which
we want to find a TWS that maximizes the classification performance of a
classifier trained with the TWS. We define a valid TWS as the combination
of: (1) other TWSs, (2) TR and (3) TDR factors, and restrict the way in
which such components can be combined by a set of arithmetic operators.
We use GP as optimization strategy, where each individual corresponds to a
tree-encoded TWS. The proposed genetic program explores the search space
of TWSs that can be generated by combining TWSs, TRs and TDRs with
a predefined set of operators. The rest of this section details the components of the proposed genetic program, namely, representation, terminals
and function set, genetic operators and fitness function.
4.2.1. Representation
Solutions to our problem are encoded as trees, where we define terminal
nodes to be the building blocks of TWSs. On the other hand, we let internal
nodes of trees be instantiated by arithmetic operators that combine the
building blocks to generate new TWSs. The representation is graphically
described in Figure 2.
Figure 2: Representation adopted for TWS learning.
4.2.2. Terminals and function set
As previously mentioned, traditional TWSs are usually formed by two factors: a term-document relevance (TDR) weight and a term-relevance (TR)
factor. The most used TDR is term frequency (TF), as it allows one to relate
documents with the vocabulary. We consider TF as a TDR indicator, but also
we consider standard TWSs (e.g., Boolean, TF-IDF, RF) as TDR weights. The
decision to include other TWSs as building blocks is to determine
whether standard TWSs can be enhanced with GP. Regarding TR, there
are many alternatives available. In this work we analyzed the most common and effective TR weights as reported in the literature (Sebastiani, 2008;
Altyncay and Erenel, 2010; Lan et al., 2009; Debole and Sebastiani, 2003;
Forman, 2003) and considered them as building blocks for generating TWSs.
Finally, we also considered some constants as building blocks. The full set
of building blocks (terminals in the tree representation) considered is shown
in Table 2, whereas the set of operators considered in the proposed method
(i.e., the function set) is the following: $F = \{+, -, \times, /, \log_2 x, \sqrt{x}, x^2\}$, where
F includes operators of arities one and two.
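In code, the function set might look as follows. The paper does not specify how undefined operations (division by zero, log or square root of non-positive values) are handled, so the protected variants below are an assumption borrowed from common GP practice.

```python
import numpy as np

# Function set F = {+, -, *, /, log2, sqrt, square}, keyed by name, with
# arities. Division, log and sqrt are "protected" so any tree evaluates
# everywhere; the protection strategy is illustrative, not the paper's.
FUNCTIONS = {
    "+":    (2, np.add),
    "-":    (2, np.subtract),
    "*":    (2, np.multiply),
    "/":    (2, lambda a, b: np.divide(a, np.where(b == 0, 1.0, b))),
    "log2": (1, lambda a: np.log2(np.abs(a) + 1e-12)),
    "sqrt": (1, lambda a: np.sqrt(np.abs(a))),
    "sq":   (1, np.square),
}
```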
Table 2: Terminal set.

W1: N, constant matrix; the total number of training documents.
W2: |V|, constant matrix; the number of terms.
W3: CHI, matrix containing in each row the vector of χ2 weights for the terms.
W4: IG, matrix containing in each row the vector of information-gain weights for the terms.
W5: TF-IDF, matrix with the TF-IDF term-weighting scheme.
W6: TF, matrix containing the TF term-weighting scheme.
W7: FGT, matrix containing in each row the global term-frequency for all terms.
W8: TP, matrix containing in each row the vector of true positives for all terms.
W9: FP, matrix containing in each row the vector of false positives.
W10: TN, matrix containing in each row the vector of true negatives.
W11: FN, matrix containing in each row the vector of false negatives.
W12: Accuracy, matrix in which each row contains the accuracy obtained when using the term as classifier.
W13: Accuracy Balance, matrix containing the AC Balance for each (term, class).
W14: BNS, matrix containing the BNS value for each (term, class).
W15: DFreq, document-frequency matrix containing the value for each (term, class).
W16: FMeasure, F-measure matrix containing the value for each (term, class).
W17: OddsRatio, matrix containing the OddsRatio term-weighting.
W18: Power, matrix containing the Power value for each (term, class).
W19: ProbabilityRatio, matrix containing the ProbabilityRatio for each (term, class).
W20: Max Term, matrix containing the vector with the highest repetition for each term.
W21: RF, matrix containing the RF vector.
W22: TF × RF, matrix containing TF × RF.
In the proposed approach, a TWS is seen as a combination of building blocks by means of arithmetic operators. One should note, however,
that three types of building blocks are considered: TDR, TR and constants.
Hence we must define a way to combine matrices (TDR weights), vectors
(TR scores) and scalars (the constants), in such a way that the combination
leads to a TWS (i.e., a form of TDR). Accordingly, and for ease of implementation, each building block shown in Table 2 is processed as a matrix
of the same size as the TWS (i.e., N × |V|) and operations are performed
element-wise. In this way a tree can be directly evaluated, and the operators
are applied between each element of the matrices, leading to a TWS.
TDRs are already matrices of the same size as the TWSs: N × |V |. In
the case of TRs, we have a vector of length |V |, thus for each TR we generate
a matrix of size N × |V | where each of its rows is the TR; that is, we repeat
N times the TR weight. In this way, for example, a TWS like TF-IDF can
be obtained as TF × IDF, where the × operator means that each element
$tf_{i,j}$ of TF is multiplied by each element $idf_{i,j}$ of the IDF matrix, where
$idf_{i,j} = \log(N / df(t_j))$ for $i = 1, \ldots, N$; all TRs were treated similarly. In the case
of constants we use a scalar-matrix operator, which means that the constant
is operated with each element of the matrix under analysis.
Estimating the matrices each time a tree is evaluated can be a time-consuming process; therefore, at the beginning of the search process we compute
the necessary matrices for every terminal from Table 2. Hence, when evaluating an individual we only have to use the values of the precomputed matrices
and apply the operators specified by a tree.
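The evaluation of a tree then reduces to a recursive, element-wise computation over the precomputed matrices, along the lines of this sketch. The nested-tuple encoding and the FUNCTIONS table from the previous sketch are illustrative choices, not GPLAB's internals.

```python
def eval_tws_tree(node, terminals):
    """Recursively evaluate a tree-encoded TWS.
    node: ("op", child, ...) for internal nodes, or a terminal name such as
    "W6" (TF). terminals: dict mapping terminal names to precomputed
    N x |V| matrices (TR vectors already tiled row-wise, constants already
    broadcast to matrices)."""
    if isinstance(node, str):
        return terminals[node]
    op, *children = node
    _, fn = FUNCTIONS[op]
    return fn(*(eval_tws_tree(c, terminals) for c in children))

# Example: TF-IDF expressed as a tree over the terminals, assuming
# W6 = TF and a tiled IDF matrix stored under the (hypothetical) key "IDF":
# X = eval_tws_tree(("*", "W6", "IDF"), terminals)
```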
4.2.3. Genetic operators
As explained above, in GP a population of individuals is initialized and
evolved according to some operators that aim at improving the quality of the
population. For initialization we used the standard ramped-half-and-half
strategy (Eiben and Smith, 2010), which generates half of the population
with (balanced) trees of maximum depth, and the other half with trees of
variable depth. As genetic operators we also used standard mechanisms: we
considered the subtree crossover and point mutation. The role of crossover
is to take two promising solutions and combine their information to give rise
to two offspring, with the goal that the offspring have better performance
than the parents. The subtree crossover works by selecting two parent solutions/trees (in our case, via tournament) and randomly selecting an internal
node in each of the parent trees. Two offspring are created by interchanging
the subtrees below the identified nodes in the parent solutions.
The function of the mutation operator is to produce random variations in
the population, facilitating the exploration capabilities of GP. The considered
mutation operator first selects an individual to be mutated. Next an internal
node of the individual is identified, and if the internal node is an operator
(i.e., a member of F ) it is replaced by another operator of the same arity. If
the chosen node is a terminal, it is replaced by another terminal. In
both cases the replacing node is selected with uniform probability.
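A compact sketch of both operators for the nested-tuple tree encoding used in the previous sketch; parent selection (tournament) is assumed to happen outside these helpers, and the code illustrates the idea rather than reproducing GPLAB's implementation.

```python
import random

def all_paths(tree, path=()):
    """Paths to every node of a nested-tuple tree (operator at index 0)."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_paths(child, path + (i,))

def get_node(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def set_node(tree, path, new):
    if not path:
        return new
    t = list(tree)
    t[path[0]] = set_node(t[path[0]], path[1:], new)
    return tuple(t)

def subtree_crossover(p1, p2):
    """Swap randomly chosen subtrees of two parent trees."""
    a = random.choice(list(all_paths(p1)))
    b = random.choice(list(all_paths(p2)))
    return (set_node(p1, a, get_node(p2, b)),
            set_node(p2, b, get_node(p1, a)))

def point_mutation(tree, functions, terminals):
    """Replace a random node by another node of the same kind and arity."""
    path = random.choice(list(all_paths(tree)))
    node = get_node(tree, path)
    if isinstance(node, tuple):  # operator node: swap for same-arity operator
        same_arity = [f for f, (k, _) in functions.items()
                      if k == len(node) - 1]
        return set_node(tree, path, (random.choice(same_arity),) + node[1:])
    return set_node(tree, path, random.choice(list(terminals)))
```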
4.2.4. Fitness function
As previously mentioned, the aim of the proposed GP approach is to
generate a TWS that obtains competitive classification performance. In this
direction, the goodness of an individual is assessed via the classification performance of a predictive model that uses the representation generated by the
TWS. Specifically, given a solution to the problem, we first evaluate the tree
to generate a TWS using the training set. Once training documents are represented by the corresponding TWS, we perform a k−fold cross-validation
procedure to assess the effectiveness of the solution. In k−fold cross validation, the training set is split into k disjoint subsets, and k rounds of training
and testing are performed; in each round k − 1 subsets are used as training
set and 1 subset is used for testing, the process is repeated k times using a
different subset for testing each time. The average classification performance
is directly used as fitness function. Specifically, we evaluate the performance
of classification models with the f1 measure. Let TP, FP and FN denote
the true positive, false positive and false negative rates for a particular class;
precision is defined as $Prec = \frac{TP}{TP+FP}$ and recall as $Rec = \frac{TP}{TP+FN}$. The f1 measure
is simply the harmonic average between precision and recall: $f_1 = \frac{2 \times Prec \times Rec}{Prec + Rec}$.
The average across classes is reported (also called macro-average f1); this
way of estimating the f1 measure is known to be particularly useful when
tackling unbalanced data sets (Sebastiani, 2008).
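Putting the above together, the fitness of a candidate TWS could be computed roughly as follows. The sketch substitutes scikit-learn's LinearSVC for the budgeted LLSVM the authors actually use (described next), which is its main simplification.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def fitness(tree, terminals, y, k=5):
    """Fitness of a candidate TWS: mean macro-f1 of an SVM under k-fold
    cross-validation on the training set, using the representation produced
    by the tree (see eval_tws_tree above)."""
    X = np.nan_to_num(eval_tws_tree(tree, terminals))  # guard overflow
    scores = cross_val_score(LinearSVC(), X, y, cv=k, scoring="f1_macro")
    return scores.mean()
```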
Since under the fitness function k models have to be trained and tested for
the evaluation of a single TWS, we need to look for an efficient classification
model that, additionally, can deal naturally with the high-dimensionality of
data. Support vector machines (SVM) comprise a type of models that have
proved to be very effective for TC (Sebastiani, 2008; Joachims, 2008). SVMs
can deal naturally with the sparseness and high dimensionality of data, however, training and testing an SVM can be a time consuming process. Therefore, we opted for efficient implementations of SVMs that have been proposed
recently (Zhang et al., 2012; Djuric et al., 2013). These methods are trained
online and under the scheme of learning with a budget. We use the predictions of an SVM as the fitness function for learning TWSs. Among the methods available in (Djuric et al., 2013) we used the low-rank linearized SVM
(LLSVM) (Zhang et al., 2012). LLSVM is a linearized version of non-linear
SVMs, which can be trained efficiently with the so-called block minimization
framework (Chang and Roth, 2011). We selected LLSVM instead of alternative methods, because it has outperformed several other efficient
implementations of SVMs, see e.g., (Djuric et al., 2013; Zhang et al., 2012).
4.3. Summary
We have described the proposed approach to learn TWSs via GP. When
facing a TC problem we start by estimating all of the terminals described in
Table 2 for the training set. The terminals are fed into the genetic program,
together with the function set. We used the GPLAB toolbox for implementing the genetic program with default parameters (Silva and Almeida, 2003).
The genetic program searches for the tree that maximizes the k−fold cross
validation performance of an efficient SVM using training data only. After a
fixed number of generations, the genetic program returns the best solution
found so far, i.e., the best TWS. The training and test data sets (the latter not used during the
search process) are represented according to this TWS. One should
note that all of the supervised term-weights in Table 2 are estimated from
the training set only (e.g., the information gain for terms is estimated using
only the labeled training data); for representing test data we use the precomputed term-weights. Next, the LLSVM is trained on the training data and
the trained model makes predictions for test samples. We evaluate the performance of the proposed method by comparing the predictions of the model
and the actual labels for test samples. The next section reports results of
experiments that aim at evaluating the validity of the proposed approach.
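For completeness, the following sketch ties the previous pieces (fitness, subtree_crossover, point_mutation) into a single search loop. It is a simplification: initialization is plain random growth rather than ramped half-and-half, there is no elitism, and the population and generation counts simply match the defaults reported in the experiments.

```python
import random

def evolve_tws(terminals, y, functions, pop_size=50, generations=50,
               tournament=3, max_depth=4):
    """Simplified GP loop for TWS learning, reusing the earlier sketches."""
    def random_tree(depth):
        if depth == 0 or random.random() < 0.3:
            return random.choice(list(terminals))
        op, (arity, _) = random.choice(list(functions.items()))
        return (op,) + tuple(random_tree(depth - 1) for _ in range(arity))

    population = [random_tree(max_depth) for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(t, terminals, y), t) for t in population]
        select = lambda: max(random.sample(scored, tournament),
                             key=lambda s: s[0])[1]
        offspring = []
        while len(offspring) < pop_size:
            c1, c2 = subtree_crossover(select(), select())
            offspring.append(point_mutation(c1, functions, terminals))
            offspring.append(point_mutation(c2, functions, terminals))
        population = offspring[:pop_size]
    return max(population, key=lambda t: fitness(t, terminals, y))
```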
5. Experiments and results
This section presents an empirical evaluation of the proposed TWL approach. The goal of the experimental study is to assess the effectiveness of
the learned TWSs and compare their performance to existing schemes. Additionally, we evaluate the generalization performance of learned schemes,
and their effectiveness under different settings.
5.1. Experimental settings
For experimentation we considered a suite of benchmark data sets associated to three types of tasks: thematic TC, authorship attribution (AA,
a non-thematic TC task) and image classification (IC). Table 3 shows the
characteristics of the data sets. We considered three types of tasks because
we wanted to assess the generality of the proposed approach.
Seven thematic TC data sets were considered; in these data sets the goal
is to learn a model for thematic categories (e.g., sports news vs. religion
news). The considered data sets are the most used ones for the evaluation
of TC systems (Sebastiani, 2008). For TC data sets, indexing terms are the
words (unigrams). Likewise, seven data sets for AA were used; the goal in
these data sets is to learn a model capable of associating documents with
authors. As opposed to thematic collections, the goal in AA is to model the
Table 3: Data sets considered for experimentation.

Text categorization
Data set        Classes   Terms    Train   Test
Reuters-8             8   23583     5339   2333
Reuters-10           10   25283     6287   2811
20-Newsgroup         20   61188    11269   7505
TDT-2                30   36771     6576   2818
WebKB                 4    7770     2458   1709
Classic-4             4    5896     4257   2838
CADE-12              12  193731    26360  14618

Authorship attribution
Data set        Classes   Terms    Train   Test
CCA-10               10   15587      500    500
Poetas                5    8970       71     28
Football              3    8620       52     45
Business              6   10550       85     90
Poetry                6    8016      145     55
Travel                4   11581      112     60
Cricket               4   10044       98     60

Image classification
Data set        Classes   Terms    Train   Test
Caltech-101         101   12000     1530   1530
Caltech-tiny          5   12000       75     75
writing style of authors, hence, it has been shown that different representations and attributes are necessary for facing this task (Stamatatos, 2009).
Accordingly, indexing terms in AA data sets were 3-grams of characters, that
is, sequences of 3 characters found in documents; these terms have proved to
be the most effective ones in AA (Stamatatos, 2009; Escalante et al., 2011;
Luyckx and Daelemans, 2010). Finally, two data sets for image classification, taken from the CALTECH-101 collection, were used. We considered
the collection under the standard experimental settings (15 images per class
for training and 15 images for testing); two subsets of the CALTECH-101
data set were used: a small one with only 5 categories and the whole data
set with 102 classes (101 object categories plus background) (Fei-Fei et al.,
2004). Images were represented under the Bag-of-Visual-Words formulation
using dense SIFT descriptors (PHOW features): descriptors extracted from
images were clustered using k-means, the centers of the clusters are the visual words (indexing terms), and images are then represented by counting the
occurrences of visual words; the VLFEAT toolbox was used for processing
images (Vedaldi and Fulkerson, 2008).
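A minimal sketch of this BOVW pipeline, substituting scikit-learn's KMeans for VLFEAT and using an arbitrary codebook size; the resulting count matrix plays exactly the role of the term-document matrix in text, so any learned TWS can be applied to it.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(descriptor_sets, n_words=1000, seed=0):
    """Bag-of-visual-words: cluster all local descriptors (e.g., dense SIFT)
    into a codebook, then represent each image by the occurrence counts of
    its nearest visual words."""
    all_desc = np.vstack(descriptor_sets)
    km = KMeans(n_clusters=n_words, random_state=seed).fit(all_desc)
    hists = np.zeros((len(descriptor_sets), n_words))
    for i, d in enumerate(descriptor_sets):
        words, counts = np.unique(km.predict(d), return_counts=True)
        hists[i, words] = counts
    return hists
```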
The considered data sets have been partitioned into training and test
subsets (the number of documents for each partition and each data set are
shown in Table 3). For some data sets there were predefined partitions, while
for others we randomly generated them using 70% of documents for training
and the rest for testing. All of the preprocessed data sets in Matlab format
are publicly available (http://ccc.inaoep.mx/~hugojair/TWL/).
For each experiment, the training partition was used to learn the TWS,
as explained in Section 4. The learned TWS is then evaluated in the corresponding test subset. We report two performance measures: accuracy, which
is the percentage of correctly classified instances, and the f1 measure, which assesses the tradeoff between precision and recall across classes (macro-average
f1); recall that f1 was used as the fitness function (see Section 4).
The genetic program was run for 50 generations using populations of 50
individuals; we would like to point out that in each run of the proposed
method we have used default parameters. It is expected that by optimizing
parameters and running the genetic program for more generations and larger
populations we could obtain even better results. The goal of our study, however, was to show the potential of our method even with default parameters.
5.2. Evaluation of TWS Learning via Genetic Programming
This section reports experimental results on learning TWSs with the genetic program described in Section 4. The goal of this experiment is to assess
how TWSs learned via GP compare with traditional TWSs. The GP method
was run on each of the 16 data sets from Table 3; since the vocabulary size
for some data sets is huge we decided to reduce the number of terms by using
term-frequency as criterion. Thus, for each data set we considered the top
2000 most frequent terms during the search process. In this way, the search
process is accelerated at no significant loss of accuracy. In Section 5.3 we
analyze the robustness of our method when using the whole vocabulary size
for some data sets.
For each data set we performed 5 runs with the GP-based approach; we
evaluated the performance of each learned TWS and report the average and
standard deviation of performance across the five runs. Tables 4, 5, and 6
show the performance obtained by TWSs learned for thematic TC, AA and
IC data sets, respectively. In the mentioned tables we also show the result
obtained by the best baseline in each collection. Best baseline is the best TWS
we found (from the set of TWSs reviewed in related work and the TWSs in
Table 1) for each data set (using the test-set performance). Please note that
under these circumstances the best baseline is, in fact, a quite strong baseline for
our GP method. Also, we would like to emphasize that no parameter of
the GP has been optimized; we used the same default parameters for every
execution of the genetic program.
Table 4: Classification performance on thematic TC obtained with learned TWSs and the best baseline.

Data set        GP-Avg. f1      GP-Avg. Acc.    Best f1   Best Acc.   Baseline
Reuters-8       90.56 ± 1.43    91.35 ± 1.99    86.94     88.63       TF
Reuters-10      88.21 ± 2.69    91.84 ± 1.01    85.24     93.25       TF-IDF
20-Newsgroup    66.23 ± 3.84    67.97 ± 4.16    59.21     61.99       TF
TDT-2           96.95 ± 0.41    96.95 ± 0.57    95.20     95.21       TF-IDF
WebKB           88.79 ± 1.26    89.12 ± 1.30    87.49     88.62       B
Classic-4       94.75 ± 1.08    95.42 ± 0.67    94.68     94.86       TF
CADE-12         41.03 ± 4.45    53.80 ± 4.0     39.30     41.89       TF
From Table 4 it can be seen that, regarding the best baseline, different
TWSs obtained better performance for different data sets, hence evidencing
the fact that different TWSs are required for different problems. On the
other hand, it can be seen that the average performance of TWSs learned
with our GP outperformed significantly the best baseline in all but one result
(accuracy for Reuters-10 data set). The differences in performance are large,
mainly for the f1 measure, which is somewhat expected as this was the
measure used as fitness function (recall f1 measure is appropriate to account
for the class imbalance across classes); hence showing the competitiveness of
our proposed approach for learning effective TWSs for thematic TC tasks.
Table 5: Classification performance on AA obtained with learned TWSs and the best baseline.

Data set        GP-Avg. f1       GP-Avg. Acc.    Best f1   Best Acc.   Baseline
CCA-10          70.32 ± 2.73     73.72 ± 2.14    65.90     73.15       TF-IG
Poetas          72.23 ± 1.49     72.63 ± 1.34    70.61     71.84       TF-IG
Football        76.37 ± 9.99     83.76 ± 4.27    76.45     83.78       TF-CHI
Business        78.08 ± 4.87     83.58 ± 1.57    73.77     81.49       TF-CHI
Poetry          70.03 ± 7.66     74.05 ± 7.38    59.93     76.71       B
Travel          73.92 ± 10.26    78.45 ± 6.72    71.75     75.32       TF-CHI
Cricket         88.10 ± 7.12     92.06 ± 3.29    89.81     91.89       TF-CHI
From Table 5 it can be seen that for AA data sets the best baseline performs similarly to the proposed approach. In terms of f1 measure, our method outperforms the best baseline in 5 out of 7 data sets, while in accuracy our method beats the best baseline in 4 out of 7 data sets. Therefore, our method still obtains comparable (slightly better) performance to the best baselines, which for AA tasks were much more competitive than in thematic TC problems. One should note that for PG we are reporting the average performance across 5 runs; among the 5 runs we found TWSs that consistently outperformed the best baseline.
Table 6: Classification performance on IC obtained with learned TWSs (PG-Avg., mean ± standard deviation over the 5 runs) and the best baseline.

Data set     | PG-Avg. f1   | PG-Avg. Acc. | Best baseline f1 | Best baseline Acc. | Baseline
Caltech-101  | 61.91 ± 1.41 | 64.02 ± 1.42 | 58.43            | 60.28              | B
Caltech-tiny | 89.70 ± 2.44 | 91.11 ± 2.36 | 85.65            | 86.67              | TF
It is quite interesting that, comparing the best baselines from Tables 4
and 5, for AA tasks supervised TWSs obtained the best results (in particular
TF-CHI in 4 out of 7 data sets), whereas for thematic TC unsupervised
TWSs performed better. Again, these results show that different TWSs are
required for different data sets and different types of problems. In fact, our
results confirm the fact that AA and thematic TC tasks are quite different,
and, more importantly, our study provides evidence on the suitability of
supervised TWSs for AA; to the best of our knowledge, supervised TWSs
have not been used in AA problems.
Table 6 shows the results obtained for the image categorization data sets. Again, the proposed method obtained TWSs that outperformed the best baselines. This result is quite interesting because it shows that the TWS plays a key role in the classification of images under the BOVW approach. In computer vision, most of the efforts so far have been devoted to the development of novel/better low-level image descriptors, using a BOW with a predefined TWS. Therefore, our results pave the way for research on learning TWSs for image categorization and other tasks that rely on the BOW representation (e.g., speech recognition and video classification).
Figure 3 and Table 7 complement the results presented so far. Figure 3 indicates the difference in performance between the (average of the) learned TWSs and the best baseline for each of the considered data sets. We can clearly appreciate from this figure the magnitude of improvement offered by the learned TWSs, which in some cases is quite large.
Figure 3: Difference in performance (f1 measure and accuracy) between learned TWSs and the best baseline for each data set; values above zero indicate better performance obtained by the learned TWSs.
Table 7, on the other hand, shows a fairer comparison between our method and the reference TWSs: it shows the average performance obtained by the reference schemes and the average performance of our method for the thematic TC, AA and IC data sets. It is clear from this table that, on average, our method performs consistently better than any of the reference methods in terms of both accuracy and f1 measure for the three types of tasks. Thus, from the results of this table and those from Tables 4, 5, and 6, it is evident that standard TWSs are competitive, but one can take advantage of them only when the right TWS is selected for each data set. Also, TWSs learned with our approach are a better option than standard TWSs, as on average we were able to obtain much better representations.
Summarizing the results from this section, we can conclude that:
• The proposed GP obtained TWSs that outperformed the best baselines in the three types of tasks (thematic TC, AA and IC), evidencing the generality of our proposal across different data types and modalities. Larger improvements were observed for thematic TC and IC data sets. On average, learned TWSs outperformed standard ones in the three types of tasks.
Table 7: Average performance obtained with learned TWSs and the baselines on the thematic TC, AA and IC data sets.

TWS      | Thematic TC f1 | Thematic TC Acc. | AA f1 | AA Acc. | IC f1 | IC Acc.
TF       | 76.60          | 79.53            | 62.17 | 72.43   | 68.86 | 71.54
B        | 77.42          | 79.73            | 66.07 | 76.76   | 71.22 | 72.78
TFIDF    | 61.69          | 76.17            | 40.88 | 55.26   | 62.27 | 67.56
TF-CHI   | 71.56          | 75.63            | 68.75 | 73.69   | 65.38 | 67.45
TF-IG    | 64.22          | 69.00            | 68.96 | 74.91   | 66.02 | 67.93
PG-worst | 77.81          | 81.19            | 66.47 | 74.84   | 74.30 | 75.67
PG-Avg.  | 81.01          | 83.63            | 75.58 | 79.75   | 75.81 | 77.07
PG-best  | 82.88          | 85.81            | 81.37 | 83.98   | 76.97 | 78.18
• Our results confirm our hypothesis that different TWSs are required for facing different tasks, and that within the same task (e.g., AA) a different TWS may be required for each data set, motivating further research on how to select a TWS for a particular TC problem.
• We show evidence that the proposed TWS learning approach is a promising solution for enhancing the classification performance in tasks other than TC, e.g., IC.
• Our results show that for AA supervised TWSs seem to be more appropriate, whereas unsupervised TWSs performed better on thematic TC and IC. This is a quite interesting result that may have an impact on non-thematic TC and supervised term-weighting learning.
5.3. Varying vocabulary size

For the experiments from Section 5.2, each TWS was learned by using only the top 2000 most frequent terms during the search process. This reduction in the vocabulary allowed us to speed up the search process significantly; however, it is worth asking what the performance of the TWSs would be when using an increasing number of terms. We aim to answer this question in this section.
For this experiment we considered three data sets, one from each type of task: thematic TC, AA, and IC. The considered data sets were Reuters-8 (R8) for thematic TC, the CCA benchmark for AA, and Caltech-101 for IC. These data sets are representative of each task: Reuters-8 is among the most used TC data sets, CCA has been widely used for AA as well, and Caltech-101 is the benchmark in image categorization. For each of the considered data sets we used the specific TWS learned with the top 2000 most frequent terms (see Section 5.2), and evaluated the performance of such TWSs when increasing the vocabulary size; terms were sorted in ascending order of their frequency. Figures 4, 5, and 6 show the results of this experiment in terms of f1 measure and accuracy (the selected TWS is shown in the caption of each figure).
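A minimal sketch of such a vocabulary sweep is given below; the matrix layout, the classifier factory and all names are our assumptions, standing in for the paper's actual experimental code:

import numpy as np

def vocabulary_sweep(tf, labels, weight_fn, train_idx, test_idx,
                     clf_factory, percentages=range(10, 101, 10)):
    """Evaluate a fixed term-weighting scheme at growing vocabulary sizes.

    tf         : (documents x terms) raw term-frequency matrix whose term
                 columns are already sorted by collection frequency.
    weight_fn  : maps a TF sub-matrix to its weighted representation
                 (stands in for a learned TWS tree).
    clf_factory: zero-argument callable returning a fresh classifier with
                 fit/predict methods (e.g. a linear SVM).
    Returns a list of (percentage, test accuracy) pairs.
    """
    n_terms = tf.shape[1]
    scores = []
    for pct in percentages:
        cols = slice(0, max(1, n_terms * pct // 100))
        x = weight_fn(tf[:, cols])
        clf = clf_factory()
        clf.fit(x[train_idx], labels[train_idx])
        acc = float((clf.predict(x[test_idx]) == labels[test_idx]).mean())
        scores.append((pct, acc))
    return scores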
Figure 4: Classification performance on the Reuters-8 data set for the TWS √W5 − log(√W19)/W21 when increasing the number of considered terms. The left plot shows results in terms of f1 measure, while the right plot shows accuracy performance. Each plot compares the learned TWS (GP) against TF, B, TFIDF, TF-RF, TF-CHI and TF-IG over 10%-100% of the features.
Figure 5: Classification performance (f1 measure and accuracy) on the CCA data set for the TWS W4 − (W22 + W5) when increasing the number of considered terms.
Different performance behavior can be observed in the different data sets. Regarding Figure 4, which shows the performance for a thematic TC data set, it can be seen that the TWS learned by our method outperformed all other TWSs for any vocabulary size, confirming the suitability of the proposed method for thematic TC.

Figure 5, on the other hand, behaves differently: the proposed method outperforms all the other TWSs only for a single vocabulary size (when 20% of the terms were used). In general, our method consistently outperformed the TF-CHI and TF-IG TWSs and performed similarly to TF-IDF, but it was outperformed by the TF-RF TWS. This result may be due to the fact that, for this AA data set, the genetic program learned a TWS that was suitable only for the vocabulary size used during the optimization. Although interesting, this result is not that surprising: it is well known in AA that the number of terms considered in the vocabulary plays a key role in the performance of AA systems. AA studies suggest using a small number of the most frequent terms when approaching an AA problem (Stamatatos, 2009; Escalante et al., 2011; Luyckx and Daelemans, 2010). The results from Figure 5 corroborate this and seem to indicate that, when approaching an AA problem, one should first determine an appropriate vocabulary size and then apply our method. One should note, however, that our method outperforms the other TWSs for the vocabulary size that was used during the optimization, and this is, in fact, the highest performance that can be obtained with any other TWS and vocabulary size combination.

Figure 6: Classification performance on the Caltech-101 data set for the TWS √(√(W17 − √(√W22))) when increasing the number of considered terms.
Finally, Figure 6 reports the performance of the TWSs on the Caltech-101 data set under different vocabulary sizes. In this case, the learned TWS outperforms all other TWSs when using more than 20% and 30% of the terms in terms of f1 measure and accuracy, respectively. The improvement is consistent and increases monotonically as more terms are considered, showing the robustness of the learned TWS when increasing the vocabulary size for IC tasks. Among the other TWSs, TFIDF obtains competitive performance when using a small vocabulary; this could be due to the fact that, when considering a small number of frequent terms, the IDF component is important for weighting the contribution of each of the terms.
Summarizing the results from this section, we can conclude the following:
• TWSs learned with our method are robust to variations in the vocabulary size for thematic TC and IC tasks. This result suggests we can learn TWSs using a small number of terms (making the search process more efficient) and evaluate the learned TWSs with larger vocabularies.
• Learned TWSs outperform standard TWSs in thematic TC and IC tasks when varying the vocabulary size.
• For AA, TWSs learned with our proposed approach seem to be more dependent on the number of terms used during training. Hence, when facing this type of problem, it is a better option to fix the number of terms beforehand and then run our method.
5.4. Generalization of the learned term-weights

In this section we evaluate the inter-data-set generalization capabilities of the learned TWSs. Although the results presented so far show the generality of our method across three types of tasks, we have reported results obtained with TWSs that were learned for each specific data set. It remains unclear whether the TWSs learned for one collection can perform similarly on other collections; we aim to answer this question in this section.

To assess the inter-data-set generalization of TWSs learned with our method, we performed an experiment in which we considered a single TWS for each data set and evaluated its performance across all of the 16 considered data sets. The considered TWSs are shown in Table 8; we named the variables with meaningful acronyms for clarity, but also show the mathematical expression using the variables as defined in Table 2.
Before presenting the results of the experiments, it is worth analyzing the type of solutions (TWSs) learned with the proposed approach. First of all, it can be seen that the learned TWSs are not too complex: the depth of the trees is small and solutions have few terminals as components. This is a positive result because it allows us to better analyze the solutions and, more importantly, it is an indirect indicator of the absence of the over-fitting phenomenon. Secondly, as in other applications of genetic programming, it is unavoidable to have unnecessary terms in the solutions; for instance, the subtree div(pow2(TF-RF),pow2(TF-RF)) (from TWS 2) is unnecessary
Table 8: Considered TWSs for the inter-data-set generalization experiment for each data set. Column 2 shows each TWS as a prefix expression (the names of the variables are self-explanatory). Column 3 shows the mathematical expression of each TWS using the terminal set from Table 2.

Text categorization
ID | Data set    | Learned TWS                                                | Formula
1  | Reuters-8   | -(sqrt(TFIDF),div(log2(sqrt(ProbR)),RF))                   | √W5 − log(√W19)/W21
2  | Reuters-10  | sqrt(div(pow2(sqrt(TFIDF)),div(pow2(TF-RF),pow2(TF-RF)))) | √((√W5)² / (W22²/W22²))
3  | 20-Newsg.   | sqrt(sqrt(div(TF,GLOBTF)))                                 | √(√(W6/W7))
4  | TDT-2       | sqrt(×(sqrt(sqrt(TFIDF)),sqrt(×(sqrt(TFIDF),IG))))        | √(√(√W5) × √(√W5 × W4))
5  | WebKB       | div(TF-RF,+(+(+(RF,TF-RF),FMEAS),FMEAS))                   | W22 / (((W21 + W22) + W16) + W16)
6  | Classic-4   | ×(ProbR,TFIDF)                                             | W5 × W19
7  | CADE-12     | div(TF,sqrt(log2(ACCU)))                                   | W6 / √(log W12)

Authorship attribution
ID | Data set | Learned TWS                                       | Formula
8  | CCA-10   | -(IG,plus(TF-RF,TFIDF))                           | W4 − (W22 + W5)
9  | Poetas   | -(-(RF,TF-RF),TF-IDF)                             | (W21 − W22) − W5
10 | Football | div(TF-RF,pow2(ODDSR))                            | W22 / W17²
11 | Business | minus(TF-RF,PROBR)                                | W22 − W19
12 | Poetry   | div(TF,log2(div(TF,log(TF-RF))))                  | W6 / log(W6 / log W22)
13 | Travel   | +(-(TF-RF,-(-(TF-RF,-(TF-RF,POWER)),POWER)),TF)   | (W22 − ((W22 − (W22 − W18)) − W18)) + W6
14 | Cricket  | ×(IG,TF-RF)                                       | W4 × W22

Image classification
ID | Data set     | Learned TWS                               | Formula
15 | Caltech-101  | sqrt(sqrt(-(ODDSR,sqrt(sqrt(TF-RF)))))    | √(√(W17 − √(√W22)))
16 | Caltech-tiny | sqrt(-(TF-RF,ACBAL))                      | √(W22 − W13)
because it reduces to a constant matrix; the same happens with the term pow2(sqrt(TFIDF)). Nevertheless, it is important to emphasize that these types of terms do not harm the performance of learned TWSs, and there are not too many such subtrees. On the other hand, it is interesting that all of the learned TWSs incorporate supervised information. The most used TR weight is RF; likewise, the most used TDR weight is TFIDF. It is also interesting that simple operations over standard TWSs, TR and TDR weights result in significant performance improvements. For instance, compare the performance of TF-RF and the learned weight for Caltech-101 in Figure 6: by simply subtracting an odds ratio from the TF-RF TWS and applying scaling operations, the resulting TWS significantly outperforms TF-RF.
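To make the evaluation of such a learned tree concrete, the Caltech-101 scheme from Table 8 can be applied to precomputed weight matrices roughly as follows; the protected square root (absolute value before the root) is an assumption on our part, since the paper does not spell out how its GP keeps sqrt closed over negative intermediate values:

import numpy as np

def caltech101_tws(odds_ratio, tf_rf):
    """TWS 15 from Table 8: sqrt(sqrt(ODDSR - sqrt(sqrt(TF-RF)))).

    Both inputs are (documents x terms) matrices of precomputed weights.
    """
    psqrt = lambda a: np.sqrt(np.abs(a))   # protected sqrt (our assumption)
    return psqrt(psqrt(odds_ratio - psqrt(psqrt(tf_rf))))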
The 16 TWSs shown in Table 8 were evaluated on the 16 data sets in order to determine the inter-data-set generality of the learned TWSs. Figure 7 shows the results of this experiment. We show the results with boxplots, where each boxplot indicates the normalized performance of each TWS across the 16 data sets; for completeness, we also show the performance of the reference TWSs on the 16 data sets. (Before generating the boxplots we normalized the performance on a per-data-set basis: for each data set, the performance of the 16 TWSs was normalized to the range [0, 1]; in this way, the variation in f1 values across data sets is eliminated, i.e., all f1 values are on the same scale and are comparable to each other.)
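This per-data-set normalization amounts to a column-wise min-max scaling; a minimal sketch (the function name and NumPy layout are our assumptions):

import numpy as np

def normalize_per_dataset(scores):
    """Min-max normalize a (TWSs x data sets) matrix of f1 values,
    one column (data set) at a time, to the range [0, 1]."""
    lo = scores.min(axis=0, keepdims=True)
    hi = scores.max(axis=0, keepdims=True)
    return (scores - lo) / np.maximum(hi - lo, 1e-12)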
Figure 7: Performance (normalized f1 measure) of TWSs 7-22 from Table 8 on the 16 data sets considered in the study; for completeness, we also show the performance of the standard TWSs (1-6).
It can be seen from Figure 7 that the generalization performance of
learned TWSs is mixed. On the one hand, it is clear that TWSs learned for
thematic TC (boxplots 7-13) achieve the highest generalization performance.
Clearly, the generalization performance of these TWSs is higher than that
of traditional TWSs (boxplots 1-6). It is interesting that TWSs learned for
a particular data set/problem/modality perform well across different data
sets/problems/modalities. In particular, TWSs learned for Reuters-10 and
TDT-2 obtained the highest performance and the lowest variance among all
of the TWSs. On the other hand, TWSs learned for AA and IC tasks obtained lower generalization performance: the worst in terms of variance is the
TWS learned for the Poetry data set, while the worst average performance
was obtained by the TWS learned for the Football data set. TWSs learned for
IC are competitive (in generalization performance) with traditional TWSs.
Because of the nature of the tasks, the generalization performance of TWSs
learned from TC is better than that of TWSs learned for AA and IC. One
should note that these results confirm our findings from previous sections:
(i) the proposed approach is very effective mainly for thematic TC and IC
tasks; and, (ii) AA data sets are difficult to model with TWSs.
Finally, we evaluate the generality of the learned TWSs across different classifiers. The goal of this experiment is to assess the extent to which the learned TWSs are tailored to the classifier they were learned for. For this experiment, we selected the two TWSs corresponding to Caltech-tiny and Caltech-101 (15 and 16 in Table 8) and evaluated their performance with different classifiers across the 16 data sets. Figure 8 shows the results of this experiment.
It can be seen from Figure 8 that the considered TWSs behaved quite differently depending on the classifier. On the one hand, the classification performance when using the naïve Bayes (Naive), kernel-logistic regression (KLogistic), and 1-nearest-neighbor (KNN) classifiers degraded significantly. On the other hand, the performance of the SVM and the neural network (NN) was very similar. These results show that TWSs are somewhat robust across classifiers of a similar nature, as SVM and NN are very similar classifiers: both are linear models in the parameters. The other classifiers are quite different from the reference SVM and, therefore, their performance is poor5. It is interesting that in some cases the NN classifier outperformed the SVM, although on average the SVM performed better. This is a somewhat expected result, as the performance of the SVM was used as the fitness function.

According to the experimental results from this section we can draw the following conclusions:

• TWSs learned with the proposed approach are not too complex despite their effectiveness. Most of the learned TWSs included a supervised component, evidencing the importance of taking advantage of labeled
documents.
• TWSs offer acceptable inter-data-set generalization performance; in particular, TWSs learned for TC generalize quite well across data sets.
• We showed evidence that TWSs learned for one modality (e.g., text/images) can be very competitive when evaluated on another modality.
• TWSs are somewhat robust to the classifier choice. It is preferable to use the classifier used to estimate the fitness function, although classifiers of a similar nature perform similarly.

5 One should note that among the three worse classifiers (KNN, Naive and KLogistic), the latter obtained better performance than the former two; this is due to the fact that KLogistic is closer in nature to an SVM.

Figure 8: Classification performance of selected TWSs across different classifiers; f1 measure is reported. The plot at the top corresponds to a TWS learned for Caltech-tiny, while the bottom plot shows the performance for a TWS learned for Caltech-101.
6. Conclusions
We have described a novel approach to term-weighting scheme (TWS) learning in text classification (TC). TWSs specify the way in which documents are represented under a vector space model. We proposed a genetic programming solution in which standard TWSs, term-document weights, and term-relevance weights are combined to give rise to effective TWSs. We reported experimental results on 16 well-known data sets comprising thematic TC, authorship attribution and image classification tasks. The performance of the proposed method was evaluated under different scenarios. Experimental results show that the proposed approach learns very effective TWSs that outperform standard TWSs. The main findings of this work can be summarized as follows:
• TWSs learned with the proposed approach significantly outperformed standard TWSs and those proposed in related work.
• Defining an appropriate TWS is crucial for image classification tasks, an issue so far ignored in the field of computer vision.
• In authorship attribution, supervised TWSs are beneficial in comparison with standard TWSs.
• The performance of learned TWSs does not degrade when varying the vocabulary size for thematic TC and IC. For authorship attribution, a near-optimal vocabulary size should be selected before applying our method.
• TWSs learned for a particular data set or modality can be applied to other data sets or modalities without degrading the classification performance. This generalization capability is mainly present in TWSs learned for thematic TC and IC.
• Learned TWSs are easy to analyze/interpret and do not seem to overfit the training data.
Future work directions include studying the suitability of the proposed approach for learning weighting schemes for cross-domain TC. We would also like to perform an in-depth study of the usefulness of the proposed GP for computer vision tasks relying on the Bag-of-Visual-Words formulation.
References
Agarwal, B., Mittal, N., 2014. Text classification using machine learning
methods-a survey. In: Proceedings of the Second International Conference
on Soft Computing for Problem Solving (SocProS 2012). Vol. 236 of Advances in Intelligent Systems and Computing. pp. 701–709.
Aggarwal, C. C., 2012. Mining Text Data. Springer, Ch. A Survey of Text
Classification Algorithms, pp. 163–222.
Aggarwal, C. C., Zhai, C. (Eds.), 2012. Mining Text Data. Springer.
Altyncay, H., Erenel, Z., 2010. Analytical evaluation of term weighting
schemes for text categorization. Pattern Recognition Letters 31, 1310–
1323.
Baeza-Yates, R., Ribeiro-Neto, B., 1999. Modern Information Retrieval.
Addison-Wesley.
Chang, K. W., Roth, D., 2011. Selective block minimization for faster convergence of limited memory large-scale linear models. In: ACM SIGKDD
Conference on Knowledge Discovery and Data Mining.
Csurka, G., Dance, C. R., Fan, L., Willamowski, J., Bray, C., 2004. Visual categorization with bags of keypoints. In: International Workshop on Statistical Learning in Computer Vision.
Cummins, R., O’Riordan, C., 2005. Evolving general term-weighting schemes
for information retrieval: Tests on larger collections. Artificial Intelligence
Review 24, 277–299.
Cummins, R., O’Riordan, C., 2006. Evolving local and global weighting
schemes in information retrieval. Information Retrieval 9, 311–330.
Cummins, R., O’Riordan, C., 2007. Evolved term-weighting schemes in information retrieval: An analysis of the solution space. Artificial Intelligence.
Debole, F., Sebastiani, F., 2003. Supervised term weighting for automated
text categorization. In: Proceedings of the 2003 ACM Symposium on Applied Computing. SAC ’03. ACM, New York, NY, USA, pp. 784–788.
URL http://doi.acm.org/10.1145/952532.952688
Djuric, N., Lan, L., Vucetic, S., Wang, Z., 2013. Budgetedsvm: A toolbox
for scalable svm approximations. Journal of Machine Learning Research
14, 3813–3817.
Eiben, A. E., Smith, J. E., 2010. Introduction to Evolutionary Computing.
Natural Computing. Springer.
Escalante, H. J., Montes, M., Villasenor, L., 2009. Particle swarm model selection for authorship verification. In: Proceedings of the 14th Iberoamerican Congress on Pattern Recognition. Vol. 5856 of LNCS. Springer, pp.
563–570.
Escalante, H. J., Solorio, T., y Gomez, M. M., 2011. Local histograms of
character n-grams for authorship attribution. In: Proceedings of the 49th
Annual Meeting of the Association for Computational Linguistics. pp. 288–
298.
Fan, W., Fox, E. A., Pathak, P., Wu, H., 2004a. The effects of fitness functions on genetic programming based ranking discovery for web search. Journal of the American Society for Information Science and Technology 55 (7), 628–636.
Fan, W., Gordon, M. D., Pathak, P., 2004b. A generic ranking function
discovery framework by genetic programming for information retrieval. Information Processing and Management.
Fei-Fei, L., Fergus, R., Perona, P., 2004. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In: IEEE CVPR 2004, Workshop on Generative-Model Based Vision.
Feldman, R., Sanger, J., 2006. The Text Mining Handbook Advanced Approaches in Analyzing Unstructured Data. ABS Ventures.
Forman, G., 2003. An extensive empirical study of feature selection metrics
for text classification. J. of Mach. Learn. Res. 3, 1289–1305.
Joachims, T., 2008. Text categorization with support vector machines:
Learning with many relevant features. In: Proceedings of ECML-98. Vol.
1398 of LNCS. Springer, pp. 137–142.
Kiddon, C., Brun, Y., 2011. That’s what she said: Double entendre identification. In: Proceedings of the 49th Annual Meeting of the Association for
Computational Linguistics. pp. 89–94.
Koppel, M., Argamon, S., Shimoni, A. R., 2002. Automatically categorizing
written texts by author gender. Literary and Linguistic Computing 17 (4),
401–412.
Koppel, M., Schler, J., 2004. Authorship verification as a one-class classification problem. In: Proceedings of the 21st International Conference on
Machine Learning. p. 62.
Lan, M., Tan, C. L., Su, J., Lu, Y., 2009. Supervised and traditional term
weighting methods for automatic text categorization. Trans. PAMI 31 (4),
721–735.
Langdon, W. B., Poli, R., 2001. Foundations of Genetic Programming.
Springer.
Luyckx, K., Daelemans, W., 2010. The effect of author set size and data size
in authorship attribution. Literary and Linguistic Computing, 1–21.
Mladenic, D., Grobelnik, M., 1999. Feature selection for unbalanced class distribution and naive Bayes. In: Proceedings of the 16th Conference on Machine Learning. pp. 258–267.
Oren, N., 2002. Re-examining tf.idf based information retrieval with genetic
programming. In: SAICSIT.
Pang, B., Lee, L., Vaithyanathan, S., 2002. Thumbs up? Sentiment classification using machine learning techniques. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). pp. 79–86.
Reyes, A., Rosso, P., 2014. On the difficulty of automatically detecting irony:
Beyond a simple case of negation. Knowledge and Information Systems
40 (3), 595–614.
Salton, G., Buckley, C., 1988. Term-weighting approaches in automatic text
retrieval. Information Processing and Management, 513–523.
Sebastiani, F., 2008. Machine learning in automated text categorization. ACM Computing Surveys 34 (1), 1–47.
Silva, S., Almeida, J., 2003. GPLAB - a genetic programming toolbox for MATLAB. In: Proceedings of the Nordic MATLAB Conference. pp. 273–278.
Sivic, J., Zisserman, A., 2003. Video google: A text retrieval approach to object matching in videos. In: International Conference on Computer Vision.
Vol. 2. pp. 1470–1477.
Manchala, S., Prasad, V. K., Janaki, V., 2014. GMM based language identification system using robust features. International Journal of Speech Technology 17, 99–105.
Stamatatos, E., 2009. A survey of modern authorship attribution methods.
Journal of the American Society for Information Science and Technology
60 (3), 538–556.
Trotman, A., 2005. Learning to rank. Information Retrieval 8, 359–381.
Turney, P., Pantel, P., 2010. From frequency to meaning: Vector space models
of semantics. Journal of Artificial Intelligence Research 37, 141–188.
Vedaldi, A., Fulkerson, B., 2008. VLFeat: An open and portable library of
computer vision algorithms.
Wang, J., Liu, P., She, M. F., Nahavandi, S., Kouzani, A., 2013. Bag-of-words representation for biomedical time series classification. Biomedical Signal Processing and Control 8 (6), 634–644.
Yang, Y., Pedersen, J. O., 1997. A comparative study on feature selection in text categorization. In: Proceedings of the 14th International Conference on Machine Learning. pp. 412–420.
Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C., 2007. Local features and kernels for classification of texture and object categories: A comprehensive study. International Journal of Computer Vision 73 (2), 213–238.
Zhang, K., Lan, L., Wang, Z., Moerchen, F., 2012. Scaling up kernel svm on
limited resources: A low-rank linearization approach. In: Proceedings of
the 15th International Conference on Artificial Intelligence and Statistics
(AISTATS).
AN ANALYSIS OF GENE EXPRESSION DATA USING PENALIZED
FUZZY C-MEANS APPROACH
P.K. Nizar Banu 1, H. Hannah Inbarani 2
1 Department of Computer Applications, B. S. Abdur Rahman University, Chennai, Tamilnadu, India. Email: [email protected]
2 Department of Computer Science, Periyar University, Salem, Tamilnadu, India. Email: [email protected]
ABSTRACT
With the rapid advances of microarray technologies, large amounts of high-dimensional gene expression data are being generated, which poses significant computational challenges. A first step towards addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. A robust gene expression clustering approach that minimizes undesirable clustering is proposed. In this paper, the Penalized Fuzzy C-Means (PFCM) clustering algorithm is described and compared with the most representative off-line clustering techniques: K-Means clustering, Rough K-Means clustering and Fuzzy C-Means clustering. These techniques are implemented and tested on a brain tumor gene expression dataset. An analysis of the performance of the proposed approach is presented through qualitative validation experiments. From the experimental results, it can be observed that the Penalized Fuzzy C-Means algorithm shows much higher usability than the other projected clustering algorithms used in our comparison study. Significant and promising clustering results are presented using the brain tumor gene expression dataset. Thus, patterns seen in genome-wide expression experiments can be interpreted as indications of the status of cellular processes. From these clustering results, we find that the Penalized Fuzzy C-Means algorithm provides useful information as an aid to diagnosis in oncology.
Keywords: Clustering, Microarray, Gene Expression, Brain Tumor, Fuzzy Clustering, FCM,
PFCM
1. INTRODUCTION

Gene expression is the fundamental link between genotype and phenotype in a species, with microarray technologies facilitating the measurement of thousands of gene expression values under tightly controlled conditions, e.g. (i) from a particular point in the cell cycle, (ii) after an interval response to some environmental change, or (iii) from RNA isolated from a tissue exhibiting certain phenotypic characteristics, and so on (Kerr, et. al., 2008). A problem inherent in the use of microarray technologies is the huge amount of data produced. Searching for meaningful information patterns and dependencies among genes, in order to provide a basis for hypothesis testing, typically includes the initial step of organizing gene expression data by grouping together genes with similar patterns of expression. The field of gene expression data analysis has grown in the past few years from being purely data-centric to integrative, aiming at complementing microarray analysis with data and knowledge from diverse available sources. Advances in microarray technologies have made it possible to measure the expression profiles of thousands of genes in parallel under varying experimental conditions. Due to the large number of genes and complex gene regulation networks, clustering is a useful exploratory technique for analyzing these data. It divides the data of interest into a small number of relatively homogeneous groups or clusters. Clustering is a popular data mining technique for various applications; one of the reasons for its popularity is the ability to work on datasets with minimal or no a priori knowledge, which makes clustering practical for real-world applications. We can view the expression levels of different genes as attributes of the samples, or the samples as the attributes of different genes. Clustering can be performed on genes or samples (Michael et. al. 1998). This paper introduces the application of the Penalized Fuzzy C-Means algorithm to cluster brain tumor genes.

This paper is organized as follows. In Section 2, the research background for clustering gene expression patterns is discussed. In Section 3, the methodology for preparing gene expression patterns is presented. Section 4 presents a method for extracting highly suppressed and highly expressed genes based on the Penalized Fuzzy C-Means algorithm. The experimental results are discussed in Section 5. Finally, Section 6 concludes this paper by enumerating the merits of the proposed approaches.

2. RESEARCH BACKGROUND

2.1. Clustering

Clustering genes groups similar genes into the same cluster based on a proximity measure; genes in the same cluster have similar expression patterns. One of the characteristics of gene expression data is that it is meaningful to cluster both genes and samples. The most commonly applied full-space clustering algorithms on gene expression profiles are hierarchical clustering algorithms (Michael et. al. 1998), self-organizing maps (Paul, 1999) and K-Means clustering algorithms (Tavazoie, et. al., 1999). Hierarchical algorithms merge genes with the most similar expression profiles iteratively in a bottom-up manner. Self-organizing maps and K-Means algorithms partition genes into a user-specified number k of optimal clusters. Other full-space clustering algorithms applied on gene expression data include Bayesian networks (Friedman et. al. 2000) and neural networks. A robust image segmentation method that combines the watershed segmentation and penalized fuzzy Hopfield neural network algorithms to minimize over-segmentation is described in (Kuo et. al. 2006). (Brehelin et. al., 2008) evaluates the stability of clusters derived from hierarchical clustering by taking repeated measurements. A wide variety of clustering algorithms is available for clustering gene expression data (Bezdek, 1981). They are mainly classified as partitioning methods, hierarchical methods, density-based methods, model-based methods, graph-theoretic methods, soft computing methods, etc.

Multiple expression measurements are commonly recorded as a real-valued matrix, with row objects corresponding to gene expression measurements over a number of experiments and columns corresponding to the pattern of expression of all genes for a given microarray experiment. Each entry x_ij is the measured expression of gene i in experiment j. The dimensionality of a gene refers to the number of expression values recorded for it. A gene/gene-profile x is a single data item (row) consisting of d measurements, x = (x_1, x_2, ..., x_d). An experiment/sample y is a single microarray experiment corresponding to a single column in the gene expression matrix, y = (x_1, x_2, ..., x_n)^T, where n is the number of
genes in the dataset. Clustering is considered
an interesting approach for finding
similarities in data and putting similar data
into groups. An initial step in the analysis of gene expression data is the detection of groups of genes that exhibit similar expression patterns. In gene expression, the elements are usually genes and the vector of each gene is its expression pattern. Patterns that are similar are allocated to the same cluster, while patterns that differ significantly are put in different clusters. Gene expression data usually have high dimensionality and relatively few samples, which is the main difficulty for the application of clustering algorithms. Clustering the microarray matrix can be achieved in two ways:
(i) Genes can form a group which shows similar expression across conditions;
(ii) Samples can form a group which shows similar gene expression across all genes.
This gives rise to global clustering, where a gene or sample is grouped across all dimensions. Additionally, the clustering can be complete or partial. A complete clustering assigns each gene to a cluster, whereas a partial clustering does not. Partial clustering tends to be more suited to gene expression, as the dataset often contains irrelevant genes or samples. Clearly, this allows:
(i) Noisy genes to be left out, with correspondingly less impact on the outcome, and
(ii) Genes to belong to no cluster, omitting a large number of irrelevant contributions.
Microarrays measure expression for the
entire genome in one experiment, but genes
may change expression, independent of the
experimental condition. Forced inclusion in
well-defined but inappropriate groups may
impact the final structures found for the
data. Partial clustering avoids the situation
where an interesting sub-group in a cluster is
hidden through forcing membership of
unrelated genes (Kerr, et. al., 2008).
2.2. Categories of Gene Expression Data Clustering

Methods of clustering can be categorized as hard clustering or soft clustering. Hard clustering requires each gene to belong to a single cluster, whereas soft clustering permits genes to be members of numerous clusters simultaneously. Hard clustering tells whether a gene belongs to a cluster or not, whereas in soft clustering, with membership values, every gene belongs to each cluster with a membership weight between 0 (does not belong) and 1 (belongs). Clustering algorithms which permit genes to belong to more than one cluster are more applicable to gene expression. Gene expression data has certain special characteristics and poses a challenging research problem.
A modern working definition of a
gene is "a locatable region of genomic
sequence, corresponding to a unit of
inheritance, which is associated with
regulatory regions, transcribed regions, and
or other functional sequence regions".
Currently, a typical microarray experiment contains 10^3 to 10^4 genes, and this number is expected to reach the order of 10^6.
However, the number of samples involved
in a microarray experiment is generally less
than 100. One of the characteristics of gene
expression data is that it is meaningful to
cluster both genes and samples. On one
hand, co-expressed genes can be grouped
into clusters based on their expression
patterns (Ben-Dor, et. al., 1999 & Michael et. al. 1998). In such gene-based clustering, the
genes are treated as the objects, while the
samples are the features. On the other hand,
the samples can be partitioned into
homogeneous groups. Each group may
correspond to some particular macroscopic
phenotype, such as clinical syndromes or
cancer types (Golub et. al., 1999). Such
sample-based clustering considers the
samples as the objects and the genes as the
features. The distinction of gene-based
clustering and sample-based clustering is
based on different characteristics of
clustering tasks for gene expression data.
Some clustering algorithms, such as K-Means and hierarchical approaches, can be used both to group genes and to partition samples.
Hard clustering algorithms like K-Means and K-medoid place a restriction that a data object can belong precisely to only one cluster during the clustering process. This can be too restrictive when clustering high-dimensional data like gene expression data, because genes have a property of getting expressed in multiple conditions. Fuzzy set clustering methods like Fuzzy C-Means and Penalized Fuzzy C-Means allow data objects to belong to multiple clusters based on the degree of membership.
2.3. Analysis of Gene Expression Data

Gene expression data is usually represented by a matrix, with rows corresponding to genes and columns corresponding to conditions, experiments or time points. The content of the matrix is the expression level of each gene under each condition. Those levels may be absolute, relative or otherwise normalized. Each column contains the results obtained from a single array in a particular condition and is called the profile of that condition. Each row vector is the expression pattern of a particular gene across all the conditions.

Analyzing gene expression data is a process by which a gene's information is converted into the structures and functions of a cell. Thousands of different mRNAs are present in a given cell; together they make up the transcriptional profile. It is important to remember that when a gene expression profile is analyzed in a given sample, it is just a snapshot in time and space.

2.4. Clustering Techniques

K-Means Algorithm

The K-Means method aims to minimize the sum of squared distances between all points and their cluster centre. The procedure follows the steps described by Tou and Gonzalez (Tou et. al. 1974).

Rough Set Theory

Rough set theory, introduced by Pawlak (Pawlak, 1982), deals with uncertainty and vagueness. Rough set theory became popular among scientists around the world due to its fundamental importance in the fields of artificial intelligence and cognitive sciences. Similar to fuzzy set theory, it is not an alternative to classical set theory but is embedded in it. Rough set theory can be viewed as a specific implementation of Frege's idea of vagueness, i.e., imprecision in this approach is expressed by a boundary region of a set, and not by partial membership as in fuzzy set theory.
Rough Clustering
A rough cluster is defined in a similar manner to a rough set, that is, with a lower and an upper approximation. The lower approximation of a rough cluster contains genes that belong only to that cluster. The upper approximation of a rough cluster contains genes in the cluster which are also members of other clusters (Sushmita Mitra, 2004; Sushmita Mitra, 2006). To use the theory of rough sets in clustering, the value set (Va) needs to be ordered. This allows a measure of the distance between each object to be defined. Distance is a form of similarity, which is a relaxing of the strict
requirement of indiscernibility outlined in
canonical rough sets theory, and allows the
inclusion of genes that are similar rather
than identical. Clusters of genes are then
formed on the basis of their distance from
each other. An important distinction
between rough clustering and traditional
clustering approaches is that, with rough
clustering, an object can belong to more than
one cluster.
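A sketch of one rough assignment step in this spirit is given below; the threshold `zeta` and all names are hypothetical choices of ours for illustration, not a fixed prescription from the rough clustering literature cited above:

import numpy as np

def rough_assign(x, centers, zeta=1.5):
    """One assignment step of a rough K-Means-style clustering:
    an object close to a single centroid goes to that cluster's lower
    approximation; an object nearly equidistant to several centroids
    (distance ratio within `zeta`) goes only to their upper
    approximations, so it can belong to more than one cluster.
    """
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    lower = [[] for _ in centers]
    upper = [[] for _ in centers]
    for i, j in enumerate(nearest):
        ties = np.where(d[i] <= zeta * d[i, j])[0]   # includes j itself
        for t in ties:
            upper[t].append(i)
        if len(ties) == 1:
            lower[j].append(i)
    return lower, upper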
3. METHODOLOGY

Cluster analysis is an important tool in gene expression data analysis. For experimentation, we used a set of gene expression data that contains a series of gene expression measurements of the transcript (mRNA) levels of brain tumor genes. In clustering gene expression data, the genes are treated as objects and the samples are treated as attributes. Gene pattern extraction consists of four steps:
i) Data preparation
ii) Data normalization
iii) Clustering
iv) Pattern analysis
Fuzzy Clustering
Cluster analysis is a method of
grouping data with similar characteristics
into larger units of analysis. Fuzzy set theory, first articulated in (Zadeh, 1965), gave rise to the concept of partial membership based on a membership function, and fuzziness has received increasing
attention. Fuzzy clustering which produces
overlapping cluster partitions has been
widely studied and applied in various areas.
In fuzzy clustering, the Fuzzy C-Means
(FCM) clustering algorithm is the best
known and most powerful methods used in
cluster analysis (Bezdek, 1981). In (Yu et.
al., 2007), a general theoretical method to
evaluate the performance of fuzzy clustering
algorithms is proposed. The fuzzy integrated model is more accurate than the rough integrated model and the conventional integrated model
(Banu et. al., 2011). Fuzzy clustering
approach captures the uncertainty that
prevails in gene expression and becomes
more suitable for tumor prediction. One of
the important parameters in the FCM is the
weighting exponent m. When m is close to
one, the FCM approaches the hard C-Means
algorithm. When m approaches infinity, the
only solution of the FCM will be the mass
center of the data set. Therefore, the
weighting exponent m plays an important
role in the FCM algorithm.
3.1. Data Preparation
We represent the gene expression data as an $n_g \times n_s$ matrix $\hat{M} = \{m_{i,j} \mid i = 1, 2, \ldots, n_g,\ j = 1, 2, \ldots, n_s\}$. There are $n_s$ columns, one for each sample, and $n_g$ rows, one for each gene. A row is also called a gene vector, denoted $g_i = (m_{i,1}, m_{i,2}, \ldots, m_{i,n_s})$. Thus a gene vector contains the expression values of a particular gene over all samples.
3.2. Data Normalization
Data sometimes need to be
transformed before being used. For example;
attributes may be measured using different
scales, such as centimeters and kilograms. In
instances where the range of values differ
widely from attribute to attribute, these
differing attribute scales can dominate the
results of the cluster analysis. It is therefore
common to normalize the data so that all
attributes are on the same scale. The
following are two common approaches for
data normalization of each gene vector:

$\hat{m}_{i,j} = \dfrac{m_{i,j} - \bar{m}_i}{\sigma_i}$  or  $\hat{m}_{i,j} = \dfrac{m_{i,j}}{\bar{m}_i}$,

where

$\bar{m}_i = \dfrac{\sum_{j=1}^{n_s} m_{i,j}}{n_s}$,  $\sigma_i = \sqrt{\dfrac{\sum_{j=1}^{n_s} (m_{i,j} - \bar{m}_i)^2}{n_s - 1}}$,

and $\hat{m}_{i,j}$ denotes the normalized value for gene vector $i$ of sample $j$, $m_{i,j}$ represents the original value for gene $i$ of sample $j$, $n_s$ is the number of samples, $\bar{m}_i$ is the mean of the values for gene vector $i$ over all samples, and $\sigma_i$ is the standard deviation of the $i$th gene vector.
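Both normalization variants are straightforward to implement; a minimal sketch over an n_g x n_s NumPy matrix (our own illustration):

import numpy as np

def normalize_genes(m, method="zscore"):
    """Normalize each gene vector (row) of an n_g x n_s expression
    matrix using one of the two formulas above."""
    mean = m.mean(axis=1, keepdims=True)
    if method == "zscore":
        std = m.std(axis=1, ddof=1, keepdims=True)  # (n_s - 1) denominator
        return (m - mean) / std
    return m / mean                                  # ratio-to-mean variant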
Fuzzy clustering, which permits genes to belong to more than one cluster, is more applicable to gene expression. Noisy genes are unlikely to be members of several clusters, and genes with a similar change in expression for a set of samples may be involved in several biological functions; groups should not be co-active under all conditions. This gives rise to high inconsistency in the gene groups and some overlap between them. The boundary of a cluster is usually fuzzy for three reasons:
i. The gene expression dataset might be noisy and incomplete.
ii. The similarity measurement between genes is continuous and there is no clear cutoff value for group membership.
iii. A gene might behave similarly to gene1 under a set of samples and similarly to another gene2 under another set of samples.
Therefore, there is a great need for a fuzzy clustering method which produces clusters in which genes can belong to a cluster partially, and to multiple clusters at the same time with different membership degrees.

The Brain Tumor gene expression data is used for our experiments. This data set is publicly available on the Broad Institute web site. The various cluster validation techniques, namely Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and the Xie-Beni (XB) validity index, are used to validate the clusters obtained after applying the clustering algorithms.

4. PROPOSED APPROACH: PENALIZED FUZZY C-MEANS

The Penalized Fuzzy C-Means (PFCM) clustering algorithm for gene expression data introduced in this paper modifies the Fuzzy C-Means (FCM) algorithm to produce more meaningful fuzzy clusters. Genes are assigned a membership degree to a cluster indicating their percentage association with that cluster. The two algorithms differ in the weighting scheme used for the contribution of a gene to the mean of a cluster. FCM membership values for a gene are divided among clusters in proportion to the similarity with each cluster's mean. The contribution of each gene to the mean of a cluster is weighted based on its membership grade. Membership values are adjusted iteratively until the variance of the system falls below a threshold. These calculations require the specification of a degree-of-fuzziness parameter, which is problem specific (Dembele et. al., 2003). The membership weighting system reduces noise effects, as a low membership grade is less important in centroid calculation. The PFCM algorithm helps in identifying hidden patterns and provides an enhanced understanding of functional genomics. The main objective of using this method is to minimize the objective function value so that the highly suppressed genes and the highly expressed genes are clustered separately; this also helps to diagnose tumors at an early stage of their formation.
6
4.1. Penalized Fuzzy C-Means Algorithm

Another strategy for fuzzy clustering, called the Penalized Fuzzy C-Means (PFCM) algorithm, with the addition of a penalty term, was proposed by Yang (Yang, 1993; Yang, 1994). Yang made the fuzzy extension of the Classification Maximum Likelihood (CML) procedure in conjunction with fuzzy C-partitions and called it a class of fuzzy CML procedures. The idea of penalization is also important in statistical learning. Combining the CML procedure and the penalty idea, Yang (Yang, 1993) added a penalty term to the FCM objective function $J_{FCM}$ and extended the FCM to the so-called Penalized FCM (PFCM), which produces more meaningful and effective results than the FCM algorithm. Thus, the PFCM objective function is defined as follows:

$$J_{PFCM} = \frac{1}{2}\sum_{j=1}^{c}\sum_{i=1}^{n} u_{i,j}^{m}\,\|x_i - w_j\|^2 - \frac{1}{2}\,v\sum_{j=1}^{c}\sum_{i=1}^{n} u_{i,j}^{m}\,\ln\alpha_j = J_{FCM} - \frac{1}{2}\,v\sum_{j=1}^{c}\sum_{i=1}^{n} u_{i,j}^{m}\,\ln\alpha_j,$$

where $\alpha_j$ is a proportional constant of class $j$ and $v$ ($\geq 0$) is a constant. The penalty term $-\frac{1}{2}\,v\sum_{j=1}^{c}\sum_{i=1}^{n} u_{i,j}^{m}\,\ln\alpha_j$ is added to the objective function; when $v = 0$, $J_{PFCM}$ is equal to $J_{FCM}$. The quantities $\alpha_j$, $w_j$ and $u_{i,j}$ are defined as

$$\alpha_j = \frac{\sum_{i=1}^{n} u_{i,j}^{m}}{\sum_{j=1}^{c}\sum_{i=1}^{n} u_{i,j}^{m}}, \quad j = 1, 2, \ldots, c, \qquad (1)$$

$$w_j = \frac{\sum_{i=1}^{n} u_{i,j}^{m}\,x_i}{\sum_{i=1}^{n} u_{i,j}^{m}}, \qquad (2)$$

$$u_{i,j} = \left[\sum_{l=1}^{c}\left(\frac{\|x_i - w_j\|^2 - v\ln\alpha_j}{\|x_i - w_l\|^2 - v\ln\alpha_l}\right)^{1/(m-1)}\right]^{-1}. \qquad (3)$$

Based on the numerical results, PFCM is more accurate than FCM. The steps of the PFCM algorithm are given as follows:

Step 1: Initialize the cluster centroids $w_j$ ($2 \leq j \leq c$), $v$ ($v \geq 0$), the fuzzification parameter $m$ ($1 \leq m < \infty$), and the value $\varepsilon > 0$. Give a fuzzy C-partition $U^{(0)}$ and set $t = 1$.
Step 2: Calculate $\alpha_j^{(t)}$ and $w_j^{(t)}$ with $U^{(t-1)}$ using Eqs. (1) and (2).
Step 3: Calculate the membership matrix $U^{(t)} = [u_{i,j}]$ with $\alpha_j^{(t)}$ and $w_j^{(t)}$ using Eq. (3).
Step 4: Compute $\Delta = \max\,|U^{(t)} - U^{(t-1)}|$. If $\Delta > \varepsilon$, set $t = t + 1$ and go to Step 2; otherwise go to Step 5.
Step 5: Find the results for the final class centroids.
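A compact sketch of the update loop defined by Eqs. (1)-(3) is given below; the random initialization and the numerical guard are our own choices:

import numpy as np

def pfcm(x, c, m=2.0, v=1.0, eps=1e-5, max_iter=100, seed=0):
    """Penalized Fuzzy C-Means following Eqs. (1)-(3) above.

    x: (n x d) data matrix, c: number of clusters, m: fuzzifier,
    v: penalty weight (v = 0 reduces the update to plain FCM).
    Returns the membership matrix U and the centroids W.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)              # initial fuzzy C-partition
    for _ in range(max_iter):
        um = u ** m
        alpha = um.sum(axis=0) / um.sum()          # Eq. (1)
        w = (um.T @ x) / um.sum(axis=0)[:, None]   # Eq. (2)
        # Penalized squared distances ||x_i - w_j||^2 - v ln(alpha_j).
        d2 = ((x[:, None, :] - w[None, :, :]) ** 2).sum(axis=-1) \
             - v * np.log(alpha)
        d2 = np.maximum(d2, 1e-12)                 # numerical guard
        p = 1.0 / (m - 1.0)
        u_new = (d2 ** -p) / (d2 ** -p).sum(axis=1, keepdims=True)  # Eq. (3)
        if np.abs(u_new - u).max() < eps:          # Step 4 stopping rule
            return u_new, w
        u = u_new
    return u, w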
5. EXPERIMENTAL RESULTS

The effectiveness of the algorithms based on the cluster validity measures is demonstrated in this section.

5.1. Data Source

A set of gene expression data that contains a series of gene expression measurements of the transcript (mRNA) levels of brain tumor genes is used in this paper to analyze the efficiency of the proposed approach. The brain tumor dataset is taken from the Broad Institute website (http://www.broadinstitute.org/cgibin/cancer/datasets.cgi). The dataset is titled “Gene Expression-Based Classification and Outcome Prediction of Central Nervous
System Embryonal Tumors”. The dataset consists of three types of brain tumor data, namely Medulloblastoma classic and desmoplastic, multiple brain tumors, and Medulloblastoma treatment outcome. Each dataset contains nearly 7000 genes with 42 samples.
In order to evaluate the proposed algorithm, we applied it to the Brain Tumor gene expression data taken from the Broad Institute, taking 7129, 5000, 3000 and 1000 genes for all samples with various numbers of clusters.

5.2. Comparative Analysis

A comparative study of the performance of K-Means (Tou et. al. 1974), Rough K-Means (Peters, 2006) and Fuzzy C-Means (Peters, 2006) is made with the Penalized Fuzzy C-Means algorithm (Shen et. al., 2006).

Cluster Validation

In this paper, Root Mean Square Error, Mean Absolute Error and the Xie-Beni validity index are used to validate the clusters obtained after applying the clustering algorithms. To assess the quality of our method, we need an objective external criterion. In order to validate our clustering results, we employed Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) (Pablo de Castro et. al., 2007). The Xie-Beni validity index has also been chosen as a cluster validity measure because it has been shown to be able to detect the correct number of clusters in several experiments (Pal et. al. 1995).
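For concreteness, the standard Xie-Beni index can be computed from a fuzzy partition as sketched below (our own illustration; the exact variant used in the cited work may differ slightly):

import numpy as np

def xie_beni(x, u, w, m=2.0):
    """Xie-Beni validity index: fuzzy within-cluster compactness divided
    by n times the minimum squared separation between centroids.
    Lower values indicate better-defined clusters."""
    d2 = ((x[:, None, :] - w[None, :, :]) ** 2).sum(axis=-1)
    compactness = ((u ** m) * d2).sum()
    sep = ((w[:, None, :] - w[None, :, :]) ** 2).sum(axis=-1)
    sep[np.diag_indices_from(sep)] = np.inf   # ignore self-distances
    return compactness / (x.shape[0] * sep.min())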
K-Means, Rough K-Means, Fuzzy C-Means and the Penalized Fuzzy C-Means clustering algorithms are applied and analysed for Brain Tumour genes. Table 1 shows the experimental results obtained by applying the above-mentioned validity measures, from which the effectiveness of the proposed algorithm is well understood. We tested our method on the Brain Tumour gene expression dataset to cluster the highly suppressed and highly expressed genes, and the results are depicted for various dataset sizes. It is observed that, for each set of genes taken, the values of the validity measures for the proposed algorithm are lower than those for the other algorithms, as graphically illustrated in Figures 1 to 12. Among these clustering algorithms, Penalized Fuzzy C-Means produces the best results in the identification of differences between data sets. This helps to correlate the samples according to the level of gene expression.

In terms of MAE, Penalized Fuzzy C-Means shows superior performance, and K-Means and Rough K-Means exhibit better performance than Fuzzy C-Means. Also, with respect to RMSE and the Xie-Beni index, Penalized Fuzzy C-Means produces better performance than the other algorithms.

The comparative results based on Root Mean Square Error, Mean Absolute Error and the Xie-Beni validity measure for all the gene expression data clustering algorithms, for the various Brain Tumor data sets taken with different numbers of clusters, are enumerated in Table 1.

Figures 1 to 3 show the comparative analysis of the various approaches for 7129 genes, taking K as 7, 5 and 3 respectively. It can be observed from the figures that Penalized Fuzzy C-Means outperforms the other approaches: Fuzzy C-Means, Rough K-Means and K-Means.
Figures 1-6 (plots omitted) compare the validity measures (MAE, RMSE and the Xie-Beni index) obtained by K-Means, Rough K-Means, Fuzzy C-Means and Penalized Fuzzy C-Means:
Figure 1: Validity measures for data set size = 7129, k = 7.
Figure 2: Validity measures for data set size = 7129, k = 5.
Figure 3: Validity measures for data set size = 7129, k = 3.
Figure 4: Validity measures for data set size = 5000, k = 7.
Figure 5: Validity measures for data set size = 5000, k = 5.
Figure 6: Validity measures for data set size = 5000, k = 3.
Figures 4 to 6 show the comparative analysis of the various approaches for 5000 genes, taking K as 7, 5 and 3 respectively. The experimental results show that Penalized Fuzzy C-Means outperforms the other approaches: Fuzzy C-Means, Rough K-Means and K-Means.

Figures 7 to 9 show the comparative analysis of the various approaches for 3000 genes, taking K as 7, 5 and 3 respectively. The experimental results show that Penalized Fuzzy C-Means gives better results than the other algorithms.
Figure 7: Validity measures for data set size = 3000, k = 7.
Figure 8: Validity measures for data set size = 3000, k = 5.
Figure 9: Validity measures for data set size = 3000, k = 3.
Figure 10: Validity measures for data set size = 1000, k = 7.
Figure 11: Validity measures for data set size = 1000, k = 5.
Figure 12: Validity measures for data set size = 1000, k = 3.
Figures 10 to 12 show the comparative analysis of the various approaches for 1000 genes, taking K as 7, 5 and 3 respectively. The experimental results show that Penalized Fuzzy C-Means performs better than the other algorithms.
Table 1: Performance Analysis of K-Means, Rough K-Means, Fuzzy C-Means and Penalized Fuzzy C-Means

No. of Clusters  No. of Genes  Clustering Algorithm       RMSE    MAE     Xie-Beni Index
7                7129          K-Means                    0.0019  0.0044  0.2804
                               Rough K-Means              0.0511  0.0874  0.0020
                               Fuzzy C-Means              1.0438  0.8627  0.9548
                               Penalized Fuzzy C-Means    0.0043  0.0009  0.0001
5                5000          K-Means                    0.0034  0.0074  0.4798
                               Rough K-Means              0.0624  0.4654  0.0581
                               Fuzzy C-Means              1.0336  0.7504  0.0184
                               Penalized Fuzzy C-Means    0.0255  0.0015  0.0002
3                3000          K-Means                    0.0065  0.0150  0.1240
                               Rough K-Means              0.0336  0.0082  0.0203
                               Fuzzy C-Means              0.9925  0.6331  0.0624
                               Penalized Fuzzy C-Means    0.0079  0.0002  0.0001
7                1000          K-Means                    0.0225  0.0470  0.4994
                               Rough K-Means              0.1204  1.0548  0.1079
                               Fuzzy C-Means              1.8542  2.7108  0.1107
                               Penalized Fuzzy C-Means    0.0056  0.0015  0.0001
Table 2: Parameter settings and other issues

K-Means
  Cluster membership: Binary
  Input: Starting prototype, stopping threshold, K
  Proximity measure: Pairwise distance
  Other issues: Very sensitive to input parameters and order of input

Rough K-Means
  Cluster membership: Rough membership
  Input: Starting prototype, stopping threshold, K
  Proximity measure: Pairwise distance
  Other issues: Imprecision in gene expression data can be captured

Fuzzy C-Means
  Cluster membership: Fuzzy membership
  Input: Degree of fuzziness, starting prototypes, stopping threshold, K
  Proximity measure: Pairwise distance
  Other issues: Careful interpretation of membership values; sensitive to input parameters and order of input

Penalized Fuzzy C-Means
  Cluster membership: Improved fuzzy membership
  Input: Fuzzification parameter, K
  Proximity measure: Pairwise distance
  Other issues: Quality of membership is increased by introducing a penalty term
Pattern Analysis
The parameter settings and other issues of the clustering approaches discussed are given in Table 2.
For each gene, red indicates a high level of expression (highly expressed genes) relative to the mean; green indicates a low level of expression (highly suppressed genes) relative to the mean. Transcriptional initiation is the most important mode of control of the gene expression level. A suppressed gene expression level may be stimulated by gene therapy (i.e. promoter insertion or up-regulation of the suppressed gene) and by radiation therapy (i.e. a larger amount of radiation may cause a sudden mutation in a gene, and owing to this sudden change the gene expression may reach a high level). This helps to correlate the samples according to the level of gene expression. When precise functions for over- or under-expressed genes are determined, new avenues for intervention strategies may emerge. These studies are in their infancy; however, the improved technology employed here shows reasonable promise as our understanding of these deadly tumours increases. Therefore, basing future treatment decisions on the expression profile of a primary tumour is a rational approach towards preventing the outgrowth of metastases.
6. CONCLUSION
Cluster analysis applied to microarray measurements aims to highlight meaningful patterns of gene expression. The goal of gene clustering is to identify the important gene markers. Gene expression data are the representation of nonlinear interactions among genes and environmental factors. Brain tumour is so deadly because it is usually not diagnosed until it has reached an advanced stage. Early detection can help prolong or save lives, but clinicians currently have no specific and sensitive method. Computational analysis of these data is expected to yield knowledge of gene functions and disease mechanisms. We used a set of gene expression data that contains a series of gene expression measurements of the transcript (mRNA) levels of brain tumour genes. Highly expressed genes and suppressed genes are identified and clustered using various clustering techniques. Methods such as post-translational modification, small RNAs and control of transcript levels, translational initiation, transcript stability, RNA transport, transcript processing and modification, epigenetic control and transcriptional initiation can be used to reduce the tumour level.
The empirical results also reveal the importance of using Penalized Fuzzy C-Means (PFCM) clustering for extracting more meaningful highly suppressed and highly expressed genes from the gene expression data set. The evaluation metrics Root Mean Square Error, Mean Absolute Error and Xie-Beni index are adopted to assess the quality of the clusters, and the experimental results have shown that the proposed approach is capable of effectively discovering gene expression patterns and revealing the underlying relationships among genes as well. This clustering approach can be applied to any gene expression dataset, and tumour growth can be predicted easily.

Figure 13: Clusters of Brain Tumor Genes
Figure 13 represents the clusters of tumour genes obtained by Penalized Fuzzy C-Means using all genes exhibiting variation across the data set.
REFERENCES
[1] A. Ben-Dor, R. Shamir and Z. Yakhini, "Clustering gene expression patterns", Journal of Computational Biology, Vol. 6, No. 3/4, pp. 281-297, 1999.
[2] J.C. Bezdek, "Fuzzy mathematics in pattern classification", Ph.D. Dissertation, Applied Mathematics, Cornell University, Ithaca, New York, 1973.
[3] J.C. Bezdek, "Pattern Recognition with Fuzzy Objective Function Algorithms", Plenum Press, New York, 1981.
[4] P.A.D. de Castro, O.F. de França, H.M. Ferreira and F.J. Von Zuben, "Applying Biclustering to Perform Collaborative Filtering", In Proceedings of the Seventh International Conference on Intelligent Systems Design and Applications, pp. 421-426, 2007.
[5] D. Dembele and P. Kastner, "Fuzzy c-means method for clustering microarray data", Bioinformatics (Oxford, England), Vol. 19, No. 8, pp. 973-980, May 2003.
[6] J.C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact and well-separated clusters", Journal of Cybernetics, Vol. 3, pp. 32-57, 1974.
[7] Michael B. Eisen, Paul T. Spellman, Patrick O. Brown and David Botstein, "Cluster analysis and display of genome-wide expression patterns", Proc. Natl. Acad. Sci. USA, Vol. 95, No. 25, pp. 14863-14868, Dec. 1998.
[8] N. Friedman, M. Linial, I. Nachman and D. Peter, "Using Bayesian networks to analyze expression data", Journal of Computational Biology, Vol. 7, pp. 601-620, Aug. 2000.
[9] T.R. Golub, D.K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J.P. Mesirov, H. Coller, M.L. Loh, J.R. Downing, M.A. Caligiuri, C.D. Bloomfield and E.S. Lander, "Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring", Science, Vol. 286, No. 15, pp. 531-537, October 1999.
[10] Jian Yu and Miin-Shen Yang, "A Generalized Fuzzy Clustering Regularization Model with Optimality Tests and Model Complexity Analysis", IEEE Transactions on Fuzzy Systems, Vol. 15, No. 5, pp. 904-915, 2007.
[11] G. Kerr, H.J. Ruskin, M. Crane and P. Doolan, "Techniques for Clustering Gene Expression Data", Computers in Biology and Medicine, Vol. 36, No. 3, pp. 283-293, March 2008.
[12] Laurent Brehelin, Olivier Gascuel and Olivier Martin, "Using repeated measurements to validate hierarchical gene clusters", Bioinformatics, Vol. 24, No. 5, pp. 682-688, 2008.
[13] Miin-Shen Yang, Wen-Liang Hung and Chia-Hsuan Chang, "A penalized fuzzy clustering algorithm", In Proceedings of the 6th WSEAS International Conference on Applied Computer Science, Spain, Dec. 2006.
[14] N.R. Pal and J.C. Bezdek, "On cluster validity for the fuzzy c-means model", IEEE Trans. Fuzzy Systems, Vol. 3, pp. 370-379, 1995.
[15] T. Paul, "Interpreting patterns of gene expression with self-organizing maps: Methods and application to hematopoietic differentiation", In Proc. Natl. Acad. Sci. USA, Vol. 96, pp. 2907-2912, 1999.
[16] P.K. Nizar Banu and H. Hannah Inbarani, "Analysis of Click Stream Patterns using Soft Biclustering Approaches", International Journal of Information Technologies and Systems Approach (IJITSA), Vol. 4, No. 1, 2011.
[17] Z. Pawlak, "Rough sets", International Journal of Computer and Information Sciences, Vol. 2, pp. 341-356, 1982.
[18] G. Peters, "Some refinements of rough k-means clustering", Pattern Recognition, Vol. 39, pp. 1481-1491, 2006.
[19] Sushmita Mitra, "An evolutionary rough partitive clustering", Pattern Recognition Letters, Vol. 25, pp. 1439-1449, 2004.
[20] Sushmita Mitra, "Rough Fuzzy Collaborative Clustering", IEEE Transactions on Systems, Man and Cybernetics, Vol. 36, No. 4, August 2006.
[21] S. Tavazoie, J.D. Hughes, M.J. Campbell, R.J. Cho and G.M. Church, "Systematic determination of genetic network architecture", Nature Genetics, Vol. 22, pp. 281-285, 1999.
[22] J.T. Tou and R.C. Gonzalez, Pattern Recognition Principles, Massachusetts: Addison-Wesley, 1974.
[23] Wen-Feng Kuo, Chi-Yuan Lin and Yung-Nien Sun, "Region Similarity Relationship between Watershed and Penalized Fuzzy Hopfield Neural Network Algorithms for Brain Image Segmentation", International Journal of Pattern Recognition and Artificial Intelligence, Vol. 22, No. 7, pp. 1403-1425, 2008.
[24] M.S. Yang, "On a class of fuzzy classification maximum likelihood procedures", Fuzzy Sets and Systems, Vol. 57, No. 3, pp. 365-375, 1993.
[25] M.S. Yang and C.F. Su, "On parameter estimation for normal mixtures based on fuzzy clustering algorithms", Fuzzy Sets and Systems, Vol. 68, No. 1, pp. 13-28, 1994.
[26] L.A. Zadeh, "Fuzzy sets", Information and Control, Vol. 8, pp. 338-353, 1965.
arXiv:1708.01751v2 [q-bio.QM] 8 Aug 2017
DNA Sequence Complexity Reveals Structure
Beyond GC Content in Nucleosome Occupancy
Hector Zenil 1,2,∗ and Peter Minary 1
August 9, 2017
1 Department of Computer Science, University of Oxford, Oxford, U.K.
2 Algorithmic Dynamics Lab, Unit of Computational Medicine, SciLife Lab, Centre for Molecular Medicine, Department of Medicine Solna, Karolinska Institute, Stockholm, Sweden.
Abstract
We introduce methods that rapidly evaluate a battery of information-theoretic and algorithmic complexity measures on DNA sequences, in application to potential binding sites for nucleosomes. The first application of this new tool demonstrates structure beyond GC content on DNA sequences in the context of nucleosome binding. We tested the measures on well-studied genomic sequences of size 20K and 100K bps. The measures reveal the known in vivo versus in vitro predictive discrepancies, but they also uncover the internal structure of G and C within the nucleosome length, thus disclosing more than simply GC content when one examines alphabet transformations that separate and scramble the GC content signal and the DNA sequence. Most current prediction methods are based upon training (e.g. k-mer discovery); the one advanced here, however, is a training-free approach to investigating informative measures of DNA information content in connection with structural nucleosomic packing.
Keywords: Nucleosome positioning/occupancy; DNA sequence complexity; DNA structure; genomic information content
1 The Challenge of Predicting Nucleosome Organization
DNA in the cell is organized into a compact form, called chromatin. Nucleosome
organization in the cell is referred to as the primary chromatin structure and
can depend on the ‘suitability’ of a sequence for accommodating a nucleosome,
which may in turn be influenced by the packing of neighbouring nucleosomes.
Depending on the context, nucleosomes can inhibit or facilitate transcription
∗ To
whom correspondence should be addressed: [email protected]
factor binding and are thus a very active area of research. The location of low
nucleosomic occupancy is key to understanding active regulatory elements and
genetic regulation that is not directly encoded in the genome but rather in a
structural layer of information.
Structural organization of DNA in the chromosomes is widely known to be
heavily driven by GC content, involving a simple count of G and C occurrences
in the DNA sequence. Despite its simplicity, uncovering exactly how (and to
what extent) GC content drives/affects nucleosome organization is among the
central questions in modern molecular biology.
GC content, local and short-range signals carried by DNA sequence ‘motifs’
or fingerprints (k-mer statistical regularities), have been found (Refs. [1] and [2])
to be able to determine a good fraction of the structural (and thus functional)
properties of DNA, such as nucleosome occupancy, but the explanatory (and
predictive) power of GC content (the G or C count in a sequence) alone and
sequence motifs display very significant differences in vivo versus in vitro [3].
Despite intensive analysis of the statistical correspondence between in vitro
and in vivo positioning, there is a lack of consensus as to the degree to which
the nucleosome landscape is intrinsically specified by the DNA sequence [4], as
well as in regards to the apparently profound difference in dependence in vitro
versus in vivo. Because the nucleosome landscape is known to be significantly
dependent on the DNA sequence, it encodes the structural information of the
DNA (particularly demonstrated in vitro). We consider this an opportunity
to compare the performance of complexity measures, in order to discover how
much of the information encoded in a sequence in the context of the nucleosome landscape can be recovered from information-content versus algorithmic
complexity measures. Nucleosome location is thus an ideal test case to probe
how informative sequence-based indices of complexity can be in determining a
structural (and ultimately functional) property of genomic DNA.
1.1 Algorithmic Information Theory in Genomic Sequence Profiling
Previous applications of measures based upon algorithmic complexity include
experiments on the evaluation of lossless compression lengths of sets of genomes [5,
7], and more recently [13], with interesting results. For example, in a landmark
paper in the area, a measure of algorithmic mutual information was introduced
to distinguish sequence similarities by way of minimal encodings and lossless
compression algorithms in which a mitochondrial phylogenetic tree that conformed to the evolutionary history of known species was reconstructed [6, 7].
These approaches have, however, either been purely theoretical or have effectively been reduced to applications of Shannon entropy rather than of algorithmic complexity, because implementations of lossless compression are actually
entropy estimators [8, 38]. In some other cases, control tests have been missing. For example, in the comparison of the compressibility indices of different
genomes [6, 7], GC content (counting every G and C in the sequence) can reconstruct the same if not a more accurate phylogenetic tree. This is because
two species that are close to each other evolutionarily will have similar GC content (see e.g. [9]). Species close to each other will have similar DNA sequence
entropy values, allowing lossless compression algorithms to compress statistical
regularities of genomes of related species with similar compression rates. Here
we intend to go beyond previous attempts, in breadth as well as depth, using
better-grounded algorithmic measures and more biologically relevant test cases.
1.2 Current Sequence-based Prediction Methods
While the calculation of GC content is extremely simple, the reasons behind its
ability to predict the structural properties of DNA are largely unknown [10, 11].
For example, it has been shown that low GC content can explain low occupancy,
but high GC content can mean either high or low occupancy [12].
Current algorithms that build upon while probing beyond GC content have
been largely influenced by sequence motif ([14, 2]) and dinucleotide models
[15]—and to a lesser degree by k-mers [1], DNA sequence motifs that are experimentally isolated and used for their informative value in determining the
structural properties of DNA.
Table 1 (SI) shows the in vitro nucleosome occupancy dependence on GC
content, with a correlation of 0.684 (similar to that reported by Kaplan [3]) for
the well-studied 20K bp genomic region (187K – 207K) of Yeast Chromosome
14 [16]. Knowledge-based methods dependent on observed sequence motifs [17,
18] are computationally cost-efficient alternatives for predicting genome-wide
nucleosome occupancy. However, they are trained on experimental statistical
data and are not able to predict anything that has not been observed before.
They also require context, as it may not be sufficient to consider only short
sequence motifs such as dinucleotides [19, 3].
More recently, deep machine learning techniques have been applied to DNA
accessibility related to chromatin and nucleosome occupancy [17]. However,
these techniques require a huge volume of data for training if they are to predict
just a small fraction of data with marginally improved accuracy as compared to
more traditional approaches based on k-mers, and they have not shed new light
on the sequence dependence of occupancy.
Here we test the ability of a general set of measures, statistical and algorithmic, to be informative about nucleosome occupancy and/or about the relationship between the affinity of nucleosomes with certain sequences and their
complexities.
2 Methods
2.1 The Dinucleotide Wedge Model
The formulation of models of DNA bending was initially prompted by a recognition that DNA must be bent for packaging into nucleosomes, and that bending
would be an informative index of nucleosome occupancy. Various dinucleotide
models can account reasonably well for the intrinsic bending observed in different sets of sequences, especially those containing A-tracts [19].
The Wedge model [20] suggests that bending is the result of driving a wedge
between adjacent base pairs at various positions in the DNA. The model assumes
that bending can be explained by wedge properties attributed solely to an AA
dinucleotide (8.7 degrees for each AA). No current model provides a completely
accurate explanation of the physical properties of DNA such as bending [21],
but the Wedge model (like the more basic Junction model which is less suitable
for short sequences and less general [22]) reasonably predicts the bending of
many DNA sequences [23]. Although it has been suggested that trinucleotide
models may make for greater accuracy in explaining DNA curvature in some of
the sequences, dinucleotide models remain the most effective [19].
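To make the AA-wedge computation concrete, here is a deliberately simplified Python sketch that only accumulates the 8.7-degree wedge magnitude per AA dinucleotide step; the full Wedge model additionally tracks each wedge's orientation (roll and tilt components) and the helical phasing, which this toy version omits.

```python
def aa_wedge_curvature(seq, wedge_deg=8.7):
    """Toy Wedge-model estimate: total wedge angle (in degrees)
    contributed by AA dinucleotide steps along seq. The real model
    also tracks each wedge's orientation along the helix."""
    return sum(wedge_deg for i in range(len(seq) - 1) if seq[i:i + 2] == "AA")

print(aa_wedge_curvature("AAAAAATTTTTT"))  # 5 AA steps -> 43.5 degrees
```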
2.2 The Segal Model
Segal et al. established a probabilistic model to characterize the possibility
that one DNA sequence is occupied by a nucleosome [16]. They constructed a
nucleosome-DNA interaction model and used a hidden Markov model (HMM)
to obtain a probability score. The model is based mainly on a 10-bp sequence
periodicity that indicates the probability of any base pair being covered by a
nucleosome.
All k-nucleotide models, including that of Segal et al., are based upon
knowledge-based sequence motifs and are thereby dependent on certain previously learned patterns. They can only account for local curvature and local
predictions, not longer range correlations. Perhaps the fact that k-nucleotide
models for k > 2 have not been proven to provide a significant advantage over
k = 2 has led researchers to disregard longer range signals across DNA sequences
involved in both DNA curvature and nucleosome occupancy [24]. To date, these
models, including that of Kaplan [3] (which considers up to k = 5) and of Segal
et al., are considered the gold standard for comparison purposes.
To study the extent of different signals in the determination of nucleosome
occupancy, we applied some basic transformations to the original genomic DNA
sequence. The SW transformation substitutes G and C for S (Strong interaction), and A and T for W (for Weak interaction). The RY transformation
substitutes A and G for R (for puRines), and C and T for Y (pYrimidines).
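Both transformations are plain character substitutions; a minimal sketch (the variable names are ours):

```python
# SW keeps only the GC-content signal; RY (purine/pyrimidine) discards it.
SW = str.maketrans("GCAT", "SSWW")  # G,C -> S (Strong); A,T -> W (Weak)
RY = str.maketrans("AGCT", "RRYY")  # A,G -> R (puRine); C,T -> Y (pYrimidine)

seq = "ACAGGATGTA"
print(seq.translate(SW))  # WSWSSWWSWW
print(seq.translate(RY))  # RYRRRRYRYR
```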
2.3 Complexity-based Genomic Profiling
In what follows, we generate a function score fc for every complexity measure c
(detailed descriptions in the S.I.) by applying each measure to a sliding window
of length 147 nucleotides (nts) across a 20K and 100K base pair (bps) DNA
sequence from Yeast chromosome 14. At every position of the sliding window,
we get a sequence score for every c that is used to compare against in vivo and
in vitro experimental occupancy.
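Schematically, the scoring procedure looks as follows, with GC content standing in for an arbitrary measure c; the function names are illustrative only:

```python
def sliding_scores(seq, f_c, window=147):
    """Evaluate a score function f_c on every length-147 window
    (the nucleosomal DNA length) along seq."""
    return [f_c(seq[i:i + window]) for i in range(len(seq) - window + 1)]

gc_content = lambda s: (s.count("G") + s.count("C")) / len(s)
scores = sliding_scores("ACGT" * 100, gc_content)  # 254 window scores
```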
The following measures (followed by the name we refer to in parenthesis
throughout the text) are here introduced. Among the measures considered are
Figure 1: Top: Correlation values of nucleosome occupancy (measured experimentally from chromosomic Yeast) on a sliding window of length 4K nt for both
in vitro and in vivo against different measures/signals: the occupancy predictive Segal model (clearly better for in vitro). Middle: Calculated correlation
values between the SW DNA transformation, carrying the GC content signal,
found highly correlated to the Segal model but poorly explaining in vivo occupancy data. Bottom: The RY DNA transformation, an orthogonal signal to SW
(and thus to GC content) whose values report a non-negligible max-min correlation, suggesting that the mixing of AT and GC carries some information about
nucleosome occupancy (even if weaker than GC content), with in vivo values
showing greatest correlation values unlike SW/GC and thus possibly neglected
in predictive models (such as Segal’s).
entropy-based ones (see Supplementary Material for exact definitions):
• Shannon entropy (Entropy) with uniform probability distribution.
• Entropy rate with uniform probability distribution.
• Lossless compression (Compress)
A set of measures of algorithmic complexity (see Supplementary Material
for exact definitions):
• Coding Theorem Method (CTM) as an estimator of algorithmic randomness by way of algorithmic probability via the algorithmic Coding theorem
(see Supplementary Material) relating causal content and classical probability [33, 34].
• Logical Depth (LD) as a BDM-based (see below) estimation of logical
depth [25], a measure of sophistication that assigns both algorithmically
simple and algorithmically random sequences shallow depth, and everything else higher complexity, believed to be related to biological evolution [26, 27].
And a hybrid measure of complexity combining local approximations of algorithmic complexity by CTM and global estimations of (block) Shannon entropy
(see Supplementary Material for exact definitions):
• The Block Decomposition Method (BDM) that approximates Shannon
entropy (up to a logarithmic term) for long sequences, but Kolmogorov-Chaitin complexity otherwise, as in the case of short nucleotides [28].
We list lossless compression under information-theoretic measures and not
under algorithmic complexity measures, because implementations of lossless
compression algorithms such as Compress and all those based on Lempel-Ziv-Welch
(LZ or LZW) as well as derived algorithms (ZIP, GZIP, PNG, etc.) are actually
entropy estimators [38, 8, 28].
BDM allows us to expand the range of application of both CTM and LD to
longer sequences by using Shannon entropy. However, if sequences are divided
into short enough subsequences (of 12 nucleotides) we can apply CTM and avoid
any trivial connection to Shannon entropy and thus to GC content.
Briefly, to estimate the algorithmic probability [29, 30]—on which the measure BDM is based—of a DNA sequence (e.g. the sliding window of length
147 nucleotides or nt), we produce an empirical distribution [33, 34] to compare with by running a sample of up to 325 433 427 739 Turing machines with
2 states and 4 symbols (the number of nucleotide types in a DNA sequence)
with empty input. If a DNA sequence is algorithmically random, then very
few computer programs (Turing machines) will produce it, but if it has a regularity, either statistical or algorithmic, then there is a high probability of its
being produced. Producing approximations to algorithmic probability provides
approximations to algorithmic complexity by way of the so-called algorithmic
Coding Theorem [30, 33, 34]. Because the procedure is computationally expensive (and ultimately uncomputable) only the full set of strings of up to 12 bits
was produced, and thus direct values can be given only to DNA sequences of
up to 12 digits (binary for RY and SW and quaternary for full-alphabet DNA
sequences).
The tool is available at http://complexitycalculator.com/ where the
user can calculate the information content and algorithmic complexity using
the methods here introduced on any DNA segment, for the purposes of similar or any other investigations of the structure of DNA and beyond.
3 Results
3.1 Complexity-based Indices
Fig. 1 shows the correlations between in vivo, in vitro, and the Segal model. In
contrast, the SW transformation captures GC content, which clearly drives most
of the nucleosome occupancy, but the correlation with the RY transformation
that loses all GC content is very interesting. While significantly lower, it is
existent and indicates a signal not contained in the GC content alone, as verified
in Fig. 4.
In Table 1 (SI), we report the correlation values found between experimental nucleosome occupancy data and ab initio training-free complexity measures.
BDM alone explains more than any other index, including GC content in vivo,
and unlike all other measures LD is negatively correlated, as theoretically expected [35] and numerically achieved [28], it being a measure that assigns low
logical depth to high algorithmic randomness, with high algorithmic randomness
implying high entropy (but not the converse).
Surprisingly, entropy alone does not capture all the GC signals, which means
that there is more structure in the distributions of Gs and Cs beyond the GC
content alone. However, entropy does capture GC content in vivo, suggesting
that local nucleotide arrangements (for example, sequence motifs) have a greater
impact on in vivo prediction. Compared to entropy, BDM displays a higher
correlation with in vivo nucleosome occupancy, thereby suggesting more internal
structure than is captured by GC content and Shannon entropy alone.
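The correlations reported here and in the SI tables are Spearman rank correlations between a score profile and the occupancy track; as a sketch, with stand-in arrays rather than the actual data:

```python
import numpy as np
from scipy.stats import spearmanr

scores = np.random.rand(1000)     # stand-in for a complexity profile f_c
occupancy = np.random.rand(1000)  # stand-in for measured occupancy
rho, p = spearmanr(scores, occupancy)
print(rho, p)  # rank correlation and its p-value
```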
3.2 Model Curvature versus Complexity Indices
The dinucleotide model incorporates knowledge regarding sequence motifs that
are known to have specific natural curvature properties and adds to the knowledge and predictive power that GC content alone offers.
Using the Wedge dinucleotide model we first estimated the predicted curvature on a set of 20 artificially generated sequences (Table 4 (SI)) with different statistical properties, in order to identify possibly informative informationtheoretic and algorithmic indices. As shown in Table 2 (SI), we found all measures negatively correlated to the curvature modelled, except for LD, which
displays a positive correlation (and the highest in absolute value) compared to
all the others. Since BDM negatively correlates with curvature, it is expected
that the minima may identify nucleosome positions (see next subsection).
An interesting observation in Table 2 (SI) concerns the correlation values
between artificially generated DNA sequences and DNA structural curvature
according to the Wedge nucleotide model: all values are negatively correlated,
but curvature as predicted by the model positively correlates with LD, in exact inverse fashion vis-a-vis the correlation values reported in Table 1 (SI).
This is consonant with the theoretically predicted relation between algorithmic
complexity and logical depth [28]. All other measures (except for LD) behave
similarly to BDM.
The results in Tables 1 and 2 (SI) imply that for all measures both extrema
(min and max for BDM and max and min for LD) may be indicative of high
nucleosome occupancy. In the next section we explore whether extrema of these
measures are also informative about nucleosome locations.
3.3 Nucleosome Dyad and Centre Location Test
Positioning and occupancy of nucleosomes are closely related. Nucleosome positioning is the distribution of individual nucleosomes along the DNA sequence
and can be described by the location of a single reference point on the nucleosome, such as its dyad of symmetry [36]. Nucleosome occupancy, on the other
hand, is a measure of the probability that a certain DNA region is wrapped
onto a histone octamer.
Fig. 2 shows the predictive capabilities of algorithmic indices for nucleosome
dyad and centre location when nucleosomic regions are placed against a background of (pseudo-) randomly generated DNA sequences with the same average
GC content as themselves (∼ 0.5). BDM outperforms all methods in accuracy
and strength. When taking the local min/max as potential indicators of nucleosome centres, we find that GC content fails (by design, as the surrounding
sequences have the same GC content as the nucleosomic region of interest);
lossless compression (Compress) performs well on the second half of the nucleosomes (left panel) but fails for the first half (right panel). Entropy performs
as poorly as Compress—not surprisingly, as lossless compression algorithms are
Entropy estimators.
The results for BDM and LD suggest that the first 4 nucleosomal DNA sequences, of which 3 are clones, display greater algorithmic randomness (BDM)
than the statistically pseudo-randomly generated background (surrounding sequences), while all other nucleosomes are of significantly lower algorithmic randomness (BDM) and mixed (both high and low) structural complexity (LD).
The same robust results were obtained after several replications with different
pseudo-random backgrounds. Moreover, the signal produced by similar nucleosomes with strong properties [37], such as clones 601, 603 and 605, had similar
shapes and convexity.
Fig. 3 shows the strength of the BDM signal at indicating the nucleosome
centres based on the min/max value of the corresponding functions.
Figure 2: Nucleosome centre prediction (red dots) of 14 nucleosomes on an
intercalated background of pseudo-random DNA segments of 147 nts with the
same average GC content as the surrounding nucleosomic regions. Values are
normalized between 0 and 1 and they were smoothed by taking one data point
per 40 and using an interpolation of order 2. Experimentally known nucleosome
centres (called dyads) are marked with dashed lines. Panels on the right have
their nucleotide centre estimated by the centre of the nucleosomic sequence (also
dashed lines). Predictions are based on the local min/max values (up to 75 nts
to each side) from the actual/estimated dyad/centre. Some red dots may appear
to be placed slightly off but this is because there was a local min or max that
vanished after the main curve was made smoother for visualization purposes.
Figure 3: Signal-to-noise ratio histogram. Distributions of predicted centre values demonstrating how BDM and LD depart from a normal distribution, thus picking up a signal, unlike GC content, which distributes values normally and
performs no better than chance on a pseudo-random background with similar
GC content. BDM carries the strongest signal followed by LD skewed in the
opposite direction both peaking closer to the nucleosome centres than GC content. On the x-axis are complexity values arranged in bins of 1000 as reported
in Fig. 2.
The signal-to-noise ratio is much stronger for BDM, and is also shifted by LD in the opposite direction (to BDM), as would be consistent with the relation between
algorithmic complexity and logical depth (see S.I.).
Both BDM and LD spike at nucleosome positions more strongly than GC content
on a random DNA background with the same GC content, and perform better
than entropy and compression. BDM is informative about every dyad or centre
of nucleosomes, with 10 out of the 14 predicted within 1 to 3 bps distance and
the rest within a 20 bps range. Unlike all other measures, LD performed better
for the first half (left panel) of nucleosome centre locations than for the second
half (right panel), suggesting that the nucleosomes of the first half may have
greater structural organization.
Table 3 (SI) compares distances to the nucleosome centres and error percentages, as predicted without any training by BDM, against GC content prediction.
The average distance between the predicted and the actual nucleosome centre is
calculated to the closest local extreme (minima or maxima) within a window of
41 base pairs or bps (20 bps to each side plus the centre) from the actual centre
(the experimentally known dyad or the centre nucleotide when the dyad was
not known). In accordance with the results in Table 1 (SI) the maxima of BDM
(minima of LD) could be informative about nucleosome positions and for those
sequences whose natural curvature is a fit to the superhelix, the minima of BDM
(maxima of LD) could also indicate nucleosome locations. This latter finding is
supported by results in Table 2 (SI).
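A minimal sketch of this distance-to-nearest-extremum test, assuming a discrete score profile and strict local extrema (both implementation choices of ours):

```python
def distance_to_extremum(scores, centre, half_window=20):
    """Signed distance (nt) from a known dyad/centre to the nearest
    strict local min or max of a score profile, searched within the
    41-bp window (20 bp to each side plus the centre)."""
    lo = max(1, centre - half_window)
    hi = min(len(scores) - 1, centre + half_window + 1)
    best = None
    for i in range(lo, hi):
        left, mid, right = scores[i - 1], scores[i], scores[i + 1]
        if (mid < left and mid < right) or (mid > left and mid > right):
            if best is None or abs(i - centre) < abs(best):
                best = i - centre
    return best  # None if no extremum falls inside the window
```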
Our results suggest that if some measures of complexity peak where GC
content is (purposely) tricked, there must be some structure different to GC
content along the DNA sequence, either a distribution of GC content within the
nucleosome length that is not related simply to the G and C count, or some
other signal.
3.4 Informative Measures of High and Low Occupancy
To find the most informative measures of complexity c we maximized the potential separation by taking only the sequences with highest (X% high) and lowest
(Y% low) nucleosome occupancy. To this end we took as cutoff values 2 and
0.2 respectively, generating 300 disjoint sequences each from a 100K DNA segment for highest and lowest nucleosome occupancy values. The 100K segment
starting and ending points are 187K − 40K and 207K + 40K nts in the 14th
Yeast chromosome, so 40K nts surrounding the original shorter 20K sequence
first explored.
In Fig. 4 it was puzzling to find that the Segal model correlates less strongly
than GC content alone for in vivo, suggesting that the model assigns greater
weight to k-mer information than to GC content for these extreme cases given
that we already knew that the Segal model is mostly driven by GC content
(Fig. 1 middle). The box plot for the Segal model indicates that the model does
not work as well for extreme sequences of high occupancy, with an average of
0.6 where the maximum over the segments on which these nucleosome regions
are contained reaches an average correlation of ∼ 0.85 (in terms of occupancy),
as shown in Fig. 1 for in vitro data. This means that these high occupancy
sequences are on the outer border of the standard deviation in terms of accuracy
in the Segal model.
The best model is the one that best separates the highest from the lowest occupancy, and this is clearly Segal's model. Except for information-theoretic indices (entropy and Compress), all algorithmic complexity indices
were found to be informative of high and low occupancy. Moreover, all algorithmic complexity measures display a slight reduction in accuracy in vivo versus
in vitro, as is consistent with the limitations of current models such as Segal’s.
All but the Segal model are, however, training-free measures, in the sense that
they do not contain any k-mer information related to high or low occupancy and
thus are naive indices, yet all algorithmic complexity measures were informative
to different extents, with CTM and BDM performing best and LD performing
worst, LD displaying inverted values for high and low occupancy as theoretically
expected (because LD assigns low LD to high algorithmic complexity) [35]. Also
of note is the fact that CTM and BDM applied to the RY transformation were
informative of high versus low occupancy, thereby revealing a signal different to
GC content that models such as Segal’s partially capture in their encoded k-mer
information. Interestingly, GC content alone outperforms the Segal model for
high occupancy both in vitro and in vivo, but the Segal model outperforms GC
content for low occupancy.
Lossless compression was the worst behaved, showing how CTM and BDM
outperform what is usually used as an estimator of algorithmic complexity [38, 8, 28].
Figure 4: Box plots reporting the informative value of complexity indices in
vivo and in vitro for segments of lowest and highest occupancy, providing an
overview of the informative value of sequence-dependent complexity measures.
The occupancy score is given by the re-scaling of the complexity value fc (y-axis) so that the highest value is 1 and the lowest 0. In the case of the Segal
model, fc is the direct score sequence based on the probability assigned by the
model [16], with no re-scaling (because it is already scaled from origin with
probability values between 0 and 1). Other cases not shown (e.g. entropy
rate or Compress on RY or SW) had no significant results. Magenta and pink
(bright colours) signify measures of algorithmic complexity; in dark gray colour
are information-theoretic based measures.
Unlike entropy alone, however, lossless compression does take into consideration sequence repetitions, averaging over all k-mers up to the compression
algorithm sliding window length. The results thus indicate that averaging over
all sequence motifs—both informative and not—deletes all advantages, thereby
justifying specific knowledge-driven k-mer approaches introduced in models such
as Segal’s.
4 Conclusions
The Kaplan and Segal models are considered to be the most accurate and the
gold standard for predicting in vitro nucleosome occupancy. However, previous
approaches, including Segal's and Kaplan's, require extensive (pre-)training. In contrast, all measures considered by our approach are training-free.
These training-free measures revealed that there is more structure to nucleosome occupancy than GC content, and potentially to k-mer structure as well
(e.g. non-AT-based mers), based on the correlations found in RY transformations indicative of low versus high occupancy.
The nucleotide location test suggests a complexity hierarchy in which natural non-nucleosomic regions are less algorithmically random than nucleosomic
regions, which in turn are less algorithmically random than pseudo-randomly
generated DNA sequences with GC content equal to the nucleosomic regions.
When pseudo-random regions are placed between nucleosomes, we showed that
BDM tends to identify nucleosomic regions with a preference for lower algorithmic randomness with relative accuracy, and more consistently than GC content,
which showed no pattern and mostly failed, as was expected, when fooled with
a background of similar GC content.
We have thus gone beyond previous attempts to connect and apply measures of complexity to structural and functional properties of genomic DNA,
specifically in the highly active and open challenge of nucleosome occupancy in
molecular biology.
A direction for future research suggested by our work is the exploration
of the use of these complexity indices to complement current machine learning approaches for reducing the feature space, by, e.g., determining which k-mers are more and less informative, and thereby ensuring better prediction
results. Another direction is a more extensive investigation of the possible use
of genomic profiling for other types of structural and functional properties of
DNA, with a view to contributing to, e.g., HiC techniques or protein encoding/promoter/enhancer region detection, and to furthering our understanding
of the effect of extending the alphabet transformation of a sequence to epigenetics.
References
[1] Gu C et al. (2015) DNA structural correlation in short and long ranges.
The Journal of Physical Chemistry B 119(44):13980–13990.
[2] Schep AN et al. (2015) Structured nucleosome fingerprints enable highresolution mapping of chromatin architecture within regulatory regions.
Genome research 25(11):1757–1770.
[3] Kaplan N et al. (2009) The DNA-encoded nucleosome organization of a
eukaryotic genome. Nature 458(7236):362–366.
[4] Gracey LE et al. (2010) An in vitro-identified high-affinity nucleosomepositioning signal is capable of transiently positioning a nucleosome in
vivo. Epigenetics & chromatin 3(1):1.
[5] Rivals E, Delahaye JP, Dauchet M, Delgrange O (1996) Compression and
genetic sequence analysis. Biochimie 78:315–322.
[6] Li M, Chen X, Li X, Ma B, Vitányi PM (2004) The similarity metric.
IEEE transactions on Information Theory 50(12):3250–3264.
[7] Cilibrasi R, Vitányi PM (2005) Clustering by compression. IEEE Transactions on Information theory 51(4):1523–1545.
[8] Zenil H, Kiani NA, Tegnér J (2017) Low algorithmic complexity entropy-deceiving graphs. Phys Rev E (accepted).
[9] Pozzoli U et al. (2008) Both selective and neutral processes drive GC
content evolution in the human genome. BMC evolutionary biology 8(1):1.
[10] Tillo D, Hughes TR (2009) G+C content dominates intrinsic nucleosome
occupancy. BMC bioinformatics 10(1):1.
[11] Galtier N, Piganeau G, Mouchiroud D, Duret L (2001) GC-content evolution in mammalian genomes: the biased gene conversion hypothesis.
Genetics 159(2):907–911.
[12] Minary P, Levitt M (2014) Training-free atomistic prediction of nucleosome occupancy. Proceedings of the National Academy of Sciences
111(17):6293–6298.
[13] Diogo Pratas, Armando J. Pinho (2017) On the approximation of the Kolmogorov complexity for DNA. Lecture Notes in Computer Science (LNCS), vol. 10255, pp. 259–266.
[14] Cui F, Zhurkin VB (2010) Structure-based analysis of DNA sequence patterns guiding nucleosome positioning in vitro. Journal of Biomolecular
Structure and Dynamics 27(6):821–841.
[15] Trifonov, E. N., and Sussman, J. L. (1980) The pitch of chromatin DNA
is reflected in its nucleotide sequence. Proc. Natl. Acad. Sci. USA 77, pp.
3816–3820.
[16] Segal E et al. (2006) A genomic code for nucleosome positioning. Nature
442(7104):772–778.
[17] Kelley DR, Snoek J, Rinn JL (2016) Basset: Learning the regulatory
code of the accessible genome with deep convolutional neural networks.
Genome research.
[18] Lee W et al. (2007) A high-resolution atlas of nucleosome occupancy in
yeast. Nature genetics 39(10):1235–1244.
[19] Kanhere A, Bansal M (2003) An assessment of three dinucleotide parameters to predict DNA curvature by quantitative comparison with experimental data. Nucleic acids research 31(10):2647–2658.
[20] Ulanovsky LE, Trifonov EN (1986) Estimation of wedge components in
curved DNA. Nature 326(6114):720–722.
[21] Burkhoff AM, Tullius TD (1988) Structural details of an adenine tract
that does not cause DNA to bend. Nature 331:455–457.
[22] Crothers DM, Haran TE, Nadeau JG (1990) Intrinsically bent DNA. J.
Biol. Chem 265(13):7093–7096.
[23] Sinden RR (2012) DNA structure and function. (Elsevier).
[24] van der Heijden T, van Vugt JJ, Logie C, van Noort J (2012) Sequencebased prediction of single nucleosome positioning and genome-wide nucleosome occupancy. Proceedings of the National Academy of Sciences
109(38):E2514–E2522.
[25] Bennett CH (1995) Logical depth and physical complexity. The Universal
Turing Machine, A Half-Century Survey pp. 207–235.
[26] Bennett CH (1993) Dissipation, information, computational complexity
and the definition of organization in Santa Fe Institute Studies in the Sciences of Complexity -Proceedings Volume-. (Addison-Wesley Publishing
Company), Vol. 1, pp. 215–215.
[27] Hernández-Orozco S, Hernández-Quiroz F, Zenil H (2016) The limits of
decidable states on open-ended evolution and emergence in ALIFE Conference. (MIT Press).
[28] Zenil H, Soler-Toscano F, Kiani NA, Hernández-Orozco S, Rueda-Toicen
A (2016) A decomposition method for global evaluation of Shannon entropy and local estimations of algorithmic complexity. arXiv preprint
arXiv:1609.00110.
[29] Solomonoff RJ (1964) A formal theory of inductive inference. parts i and
ii. Information and control 7(1):1–22 and 224–254.
[30] Levin LA (1974) Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Problemy Peredachi Informatsii 10(3):30–35.
[31] Kolmogorov AN (1968) Three approaches to the quantitative definition of
information. International Journal of Computer Mathematics 2(1-4):157–
168.
[32] Chaitin GJ (1969) On the length of programs for computing finite binary sequences: statistical considerations. Journal of the ACM (JACM)
16(1):145–159.
[33] Delahaye JP, Zenil H (2012) Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness.
Applied Mathematics and Computation 219(1):63–77.
[34] Soler-Toscano F, Zenil H, Delahaye JP, Gauvrit N (2014) Calculating
kolmogorov complexity from the output frequency distributions of small
Turing machines. PloS one 9(5):e96223.
[35] Soler-Toscano F, Zenil H, Delahaye JP, Gauvrit N (2013) Correspondence
and Independence of Numerical Evaluations of Algorithmic Information
Measures Computability, vol. 2, no. 2, pp 125–140.
[36] Klug A, Rhodes D, Smith J, Finch J, Thomas J (1980) A low resolution
structure for the histone core of the nucleosome. Nature 287(5782):509–
516.
[37] Gaykalova DA et al. (2011) A polar barrier to transcription can be circumvented by remodeler-induced nucleosome translocation. Nucleic acids
research p. gkq1273.
[38] Zenil H (2013) Algorithmic data analytics, small data matters and correlation versus causation in Computability of the World? Philosophy and
Science in the Age of Big Data, eds. Pietsch W, Wernecke J, Ott M.
(Springer Verlag). In press.
[39] Collier JD (1998) Information increase in biological systems: how does
adaptation fit? in Evolutionary systems. (Springer), pp. 129–139.
[40] Rado T (1962) On non-computable functions. Bell System Technical Journal 41(3):877–884.
Supplementary Material
5 Indices of Information and of Algorithmic Complexity
Here we describe alternative measures to explore correlations from an information-theoretic and algorithmic (hence causal) complexity perspective.
5.1 Shannon Entropy
Central to information theory is the concept of Shannon’s information entropy,
which quantifies the average number of bits needed to store or communicate a
message. Shannon’s entropy determines that one cannot store (and therefore
communicate) a symbol with n different symbols in less than log(n) bits. In
this sense, Shannon’s entropy determines a lower limit below which no message
can be further compressed, not even in principle. Another application (or interpretation) of Shannon’s information theory is as a measure for quantifying the
uncertainty involved in predicting the value of a random variable.
For an ensemble X(R, p(xi )), the Shannon information content or entropy
of X is then given by
H(X) = − Σ_{i=1}^{n} p(x_i) log2 p(x_i)
where R is the set of possible outcomes (the random variable), n = |R| and
p(xi ) is the probability of an outcome in R.
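For a concrete sequence, the empirical version of H can be computed directly, taking p(x_i) as symbol frequencies (a sketch):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """H(X) = -sum_i p(x_i) log2 p(x_i), with p taken as the
    empirical frequency of each symbol in seq."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

print(shannon_entropy("ACGTACGT"))  # 2.0 bits: four equiprobable symbols
```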
5.1.1 Entropy Rate
The function R gives what is variously denominated as rate or block entropy
and is Shannon entropy over blocks or subsequences of X of length b. That is,
R(X) = min_{b = 1, ..., |X|} H(X_b)
If the sequence is not statistically random, then R(X) will reach a low value
for some b, and if random, then it will be maximally entropic for all blocks b.
R(X) is computationally intractable as a function of sequence size, and typically
upper bounds are realistically calculated for a fixed value of b (e.g. a window
length). Notice that, as discussed in the main text, having maximal entropy
does not by any means imply algorithmic randomness (cf. Section 5.3).
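A sketch of R(X) under the practical restrictions just mentioned, using non-overlapping blocks and a fixed maximum block length (both implementation choices rather than part of the definition):

```python
from collections import Counter
from math import log2

def block_entropy(seq, b):
    """Shannon entropy of the distribution of length-b blocks
    (non-overlapping partition of seq)."""
    blocks = [seq[i:i + b] for i in range(0, len(seq) - b + 1, b)]
    n = len(blocks)
    return -sum((c / n) * log2(c / n) for c in Counter(blocks).values())

def entropy_rate(seq, max_b=4):
    """R(X) approximated as the minimum block entropy over b <= max_b,
    since minimizing over all b <= |X| is intractable in practice."""
    return min(block_entropy(seq, b) for b in range(1, max_b + 1))

print(entropy_rate("ACGTACGTACGT"))  # 0.0: a periodic sequence is caught at b=4
```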
5.2 Compression Algorithms
Two widely used lossless compression algorithms were employed.
5.2.1 Bzip2
Bzip2 is a lossless compression method that uses several layers of compression techniques stacked one on top of the other, including Run-length encoding
(RLE), Burrows-Wheeler transform (BWT), Move-to-Front (MTF) transform,
and Huffman coding, among other sequential transformations. Bzip2 compresses
more effectively than LZW, LZ77 and Deflate, but is considerably slower.
5.2.2 Compress
Compress is a lossless compression algorithm based on the LZW compression
algorithm. Lempel-Ziv-Welch (LZW) is a lossless data compression algorithm
created by Abraham Lempel, Jacob Ziv, and Terry Welch, and is considered universal for an infinite sliding window (in practice the sliding window is bounded
by memory or choice). It is considered universal in the sense of Shannon entropy, meaning that it approximates the entropy rate of the source (an input
in the form of a file/sequence). It is the algorithm of the widely used Unix
file compression utility ‘Compress’, and is currently in the international public
domain.
5.3 Measures of Algorithmic Complexity
A binary sequence s is said to be random if its Kolmogorov complexity C(s) is
at least its length (up to an additive constant). It is a measure of the computational resources needed
to specify the object. Formally,
C(s) = min{|p| : T (p) = s}
where p is a program that outputs s running on a universal Turing machine T . A
technical inconvenience of C as a function taking s to the length of the shortest
program that produces s is its uncomputability. This is usually considered a
major problem. The measure was first conceived to define randomness and is
today the accepted objective mathematical measure of complexity, among other
reasons because it has been proven to be mathematically robust (by virtue of
the fact that several independent definitions converge to it).
The invariance theorem guarantees that complexity values will only diverge
by a constant (e.g. the length of a compiler, a translation program between T1
and T2 ) and will converge at the limit. Formally,
|C_{T1}(s) − C_{T2}(s)| < c
5.3.1 Lossless Compression as Approximation to C
Lossless compression is traditionally the method of choice when a measure of algorithmic content related to Kolmogorov-Chaitin complexity C is needed. The
Kolmogorov-Chaitin complexity of a sequence s is defined as the length of the
shortest computer program p that outputs s running on a reference universal Turing machine T . While lossless compression is equivalent to algorithmic
complexity, actual implementations of lossless compression (e.g. Compress) are
heavily based upon entropy rate estimations [38, 8, 28] that mostly deal with
statistical repetitions or k-mers of up to a window length size L, such that
k ≤ L.
5.3.2 Algorithmic Probability as Approximation to C
Another approach consists in making estimations by way of a related measure,
Algorithmic Probability [33, 34]. The Algorithmic Probability of a sequence s
is the probability that s is produced by a random computer program p when
running on a reference Turing machine T . Both algorithmic complexity and
Algorithmic Probability rely on T , but invariance theorems for both guarantee
that the choice of T is asymptotically negligible.
One way to minimize the impact of the choice of T is to average across a
large set of different Turing machines all of the same size. The chief advantage
of algorithmic indices is that causal signals in a sequence may escape entropic
measures if they do not produce statistical regularities. And it has been the case
that increasing the length of k in k-nucleotide models of structural properties
of DNA have not returned more than a marginal advantage.
The Algorithmic Probability [29] (also known as Levin’s semi-measure [30])
of a sequence s is a measure that describes the expected probability of a random
program p running on a universal prefix-free Turing machine T producing s.
Formally,
m(s) = Σ_{p : T(p) = s} 1/2^{|p|}
The Coding theorem beautifully connects C(s) and m(s):
C(s) ∼ − log m(s)
5.3.3 Bennett's Logical Depth
Another measure of great interest is logical depth [25]. The logical depth (LD)
of a sequence s is the shortest time logged by the shortest programs pi that produce s when running on a universal reference Turing machine. In other words,
just as algorithmic complexity is associated with lossless compression, LD can
be associated with the shortest time that a Turing machine takes to decompress
the sequence s from its shortest computer description. A multiplicative invariance theorem for LD has also been proven [25]. Estimations of Algorithmic
Probability and logical depth of DNA sequences were performed as determined
in [33, 34].
Unlike algorithmic (Kolmogorov-Chaitin) complexity C, logical depth is a
measure related to ‘structure’ rather than randomness. LD can be identified
with biological complexity [26, 39] and is therefore of great interest when comparing different genomic regions.
5.4 Measures Based on Algorithmic Probability and on Logical Depth
The Coding theorem method (or simply CTM) is a method [33, 34] rooted in the
relation between C(s) and m(s) specified by Algorithmic Probability, that is,
between frequency of production of a sequence from a random program and its
Kolmogorov complexity as described by Algorithmic Probability. Essentially,
it uses the fact that the more frequent a sequence the lower its Kolmogorov
complexity, and sequences of lower frequency have higher Kolmogorov complexity. Unlike algorithms for lossless compression, the Algorithmic Probability
approach not only produces estimations of C for sequences with statistical regularities, but it is deeply rooted in a computational model of Algorithmic Probability, and therefore, unlike lossless compression, has the potential to identify
regularities that are not statistical (e.g. a sequence such as 1234...), that is,
sequences with high entropy or no statistical regularities but low algorithmic
complexity [38, 8].
Let (n, m) be the space of all n-state m-symbol Turing machines, n, m > 1
and s a sequence, then:
D(n, m)(s) = |{T ∈ (n, m) : T produces s}| / |{T ∈ (n, m)}|    (1)
T is a standard Turing machine as defined in the Busy Beaver problem by
Radó [40] with 4 symbols (in preparation for the calculation of the DNA alphabet size).
Then using the relation established by the Coding theorem, we have:
CTM(s) = − log2(D(n, m)(s))    (2)
That is, the more frequently a sequence is produced the lower its Kolmogorov
complexity, and vice versa. CTM is an upper-bound estimation of Kolmogorov-Chaitin complexity.
From CTM, a measure of Logical Depth can also be estimated, as the computing time that the shortest Turing machine (i.e. the first in the quasi-lexicographic order) takes to produce its output s upon halting. CTM thus
produces both an empirical distribution of sequences up to a certain size, and
an LD estimation based on the same computational model.
Because CTM is computationally very expensive (equivalent to the Busy
Beaver problem [40]), only short sequences (currently only up to length k = 12)
have associated estimations of their algorithmic complexity. To approximate
the complexity of genomic DNA sequences up to length k = 12, we calculated
D(5, 4)(s), from which CTM(s) was approximated.
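Once the empirical distribution has been computed, applying CTM reduces to a table lookup. The sketch below uses placeholder frequencies, not the actual D(5, 4) values (which are accessible through the online calculator mentioned in the main text):

```python
from math import log2

# Placeholder output frequencies; the real D(5, 4) distribution comes
# from exhaustively running the sampled Turing machines.
D = {"ACGT": 1.2e-4, "AAAA": 9.8e-3}

def ctm(s, distribution=D):
    """CTM(s) = -log2 D(n, m)(s): the less frequently s is produced
    by random programs, the higher its estimated complexity."""
    return -log2(distribution[s])

print(ctm("AAAA") < ctm("ACGT"))  # True: the more frequent string is simpler
```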
To calculate the Algorithmic Probability of a DNA sequence (e.g. the sliding window of length 147 nt) we produced an empirical Algorithmic Probability distribution from (5, 4) to compare with, by running a sample of 325 433 427 739 Turing machines with up to 5 states and 4 symbols (the number of nucleotides in a DNA sequence) with empty input (as required by Algorithmic Probability). The resulting distribution came from 325 378 582 327 non-unique sequences (after removal of those sequences only produced by 5 or fewer machines/programs).

Table 1: Spearman correlations between complexity indices and in vivo and in vitro experimental nucleosome occupancy data from position 187 001 bp to 207 000 bp on the 14th Yeast chromosome

            in vitro   in vivo
in vitro    1          0.5
in vivo     0.5        1
GC content  0.684      0.26
LD          -0.29      -0.23
Entropy     0.588      0.291
BDM         0.483      0.322
Compress    0.215      0.178
5.5 Relation of BDM to Shannon Entropy and GC Content
The Block Decomposition Method (BDM) is a divide-and-conquer method that
can be applied to longer sequences on which local approximations of C(s) using
CTM can be averaged, thereby extending the range of application of CTM.
Formally,
BDM(s, k) = Σ_{(r,n) ∈ s_k} [log(n) + CTM(r)]    (3)
The set of subsequences $s_k$ is composed of the pairs (r, n), where r is an element of the decomposition of sequence s into blocks of size k, and n is the multiplicity of each subsequence of length k. BDM(s) is a computable approximation from below to the algorithmic information complexity of s, C(s). BDM approximations to K improve with smaller departures from the Coding Theorem method (i.e. longer k-mers). When k decreases in size, however, we have shown [28] that BDM approximates the Shannon entropy of s for the chosen k-mer distribution. In this sense, BDM is a hybrid complexity measure that in the worst case behaves like Shannon entropy and in the best case approximates K. We have also shown that BDM is robust when overlapping subsequences are used instead of a partition of the sequence, although this latter method tends to over-fit the value of the resultant complexity of the original sequence that was broken into k-mers.
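As a concrete illustration of equation (3), here is a minimal Python sketch of BDM, assuming a hypothetical precomputed table `ctm_table` mapping k-mers to CTM values (in practice, the D(5, 4)-based estimates described above):

```python
from collections import Counter
from math import log2

def bdm(sequence, ctm_table, k=12):
    """Block Decomposition Method (equation 3): partition the sequence into
    non-overlapping blocks of length k and sum CTM(r) + log(n) over the
    distinct blocks r, where n is the multiplicity of r.
    Base-2 logarithm is assumed here, following the CTM convention."""
    blocks = [sequence[i:i + k] for i in range(0, len(sequence), k)]
    counts = Counter(blocks)
    return sum(ctm_table[r] + log2(n) for r, n in counts.items())
```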
Table 2: Spearman correlation values of complexity score functions versus the Wedge dinucleotide model prediction of DNA curvature on 20 synthetically generated DNA sequences

        GC content   Entropy   Entropy rate (4)   Compress   BZip2    BDM      LD
p       0.047        0.051     0.0094             0.0079     0.048    0.0083   0.0019
rho     -0.45        -0.44     -0.57              -0.58      -0.45    -0.57    0.65
Table 3: Distance in nucleotides to the local min/max within a window of 2 tests, around 40 and 140 nt around the centre, on a pseudo-randomly generated DNA background with the same GC content as the mean GC content of the next contiguous nucleosomal region. The experiment is designed for GC content to fail, yet BDM predicts the nucleosome position (by its centre) in a high number of cases and with great accuracy, with 10 out of the 14 centres predicted to within a 1 to 3 nt distance, thereby suggesting that there is more structure than GC content. Contrast this with GC content, which performs no better than chance, with an average fractional distance from the predicted centre of 0.538 versus 0.105 for BDM. Likewise for windows of 40 nt and 140 nt around the centre. All other methods (not included) reported values intermediate between GC content and BDM.

Sequence              BDM
601                   -48
603                   -2
605                   -1
5Sr DNA               -1
pGub                  19
chicken β-globulin    25
msat                  2
CAG                   1
TATA                  1
CA                    1
NoSecs                14
TGGA                  1
TGA                   1
BadSecs               1
Table 4: The 20 short artificial DNA sequences generated, covering a wide range of patterns and regularities, used to find informative measures of DNA curvature.
AAAAAAAAAAAA
AAAAAAAAATAA
ATAGAACGCTCC
TCGTTCGCGAAT
CTCTCAGGTCGT
GGCGGGGGGTGG
GCGCGCGCGCGC
ATATATATATAT
AAAAAAAACAAT
ACCTATGAAAGC
TGCACGTGTGGA
CTCGTGGATATC
GGGGGGGCGGGC
GGGGGGGGGGGG
AAAAAATTTTTT
AAGATCTACACT
TAGGCGGCGGGC
CTAAACACAATA
CCACGATCCCGT
GGGGGGCCCCCC
Table 5: The 14 experimental nucleosome sequences [24]. Only the first 6 have known dyads.

601 (dyad position 74):
ACAGGATGTATATATCTGACACGTGCCTGGAGACTAGGGAGTAATCCCCTTGGCGGTTAAAACGCGGGGGACAGCGCGTACGTGCGTTTAAGCGGTGCTAGAGCTGTCTACGACCAATTGAGCGGCCTCGGCACCGGGATTCTCCAG

603 (dyad position 154):
CGAGACATACACGAATATGGCGTTTTCCTAGTACAAATCACCCCAGCGTGACGCGTAAAATAATCGACACTCTCGGGTGCCCAGTTCGCGCGCCCACCTACCGTGTGAAGTCGTCACTCGGGCTTCTAAGTACGCTTAGGCCACGGTAGAGGGCAATCCAAGGCTAACCACCGTGCATCGATGTTGAAAGAGGCCCTCCGTCCTTATTACTTCAAGTCCCTGGGGTACCGTTTC

605 (dyad position 132):
TACTGGTTGGTGTGACAGATGCTCTAGATGGCGATACTGACAGGTCAAGGTTCGGACGACGCGGGATATGGGGTGCCTATCGCACATTGAGTGCGAGACCGGTCTAGATACGCTTAAACGACGTTACAACCCTAGCCCCGTCGTTTTAGCCGCCCAAGGGTATTCAAGCTCGACGCTAATCACCTATTGAGCCGGTATCCACCGTCACGACCATATTAATAGGACACGCCG

5Sr DNA (dyad positions 74, 92):
AACGAATAACTTCCAGGGATTTATAAGCCGATGACGTCATAACATCCCTGACCCTTTAAATAGCTTAACTTTCATCAAGCAAGAGCCTACGACCATACCATGCTGAATATACCGGTTCTCGTCCGATCACCGAAGTCAAGCAGCATAGGGCTCGGTTAGTACTTGGATGGGAGACCGCCTGGGAATACCG

pGub (dyad positions 84, 104):
GATCCTCTAGACGGAGGACAGTCCTCCGGTTACCTTCGAACCACGTGGCCGTCTAGATGCTGACTCATTGTCGACACGCGTAGATCTGCTAGCATCGATCCATGGACTAGTCTCGAGTTTAAAGATATCCAGCTGCCCGGGAGGCCTTCGCGAAATATTGGTACCCCATGGAATCGAGGGATC

chicken β-globulin (dyad position 125):
CTGGTGTGCTGGGAGGAAGGACCCAACAGACCCAAGCTGTGGTCTCCTGCCTCACAGCAATGCAGAGTGCTGTGGTTTGGAATGTGTGAGGGGCACCCAGCCTGGCGCGCGCTGTGCTCACAGCACTGGGGTGAGCACAGGGTGCCATGCCCACACCGTGCATGGGGATGTATGGCGCACTCCGGTATAGAGCTGCAGAGCTGGGAATCGGGGGG

mouse minor satellite:
ATTTGTAGAACAGTGTATATCAATGAGCTACAATGAAAATCATGGAAAATGATAAAAACCACACTGTAGAACATATTAGATGAGTGAGTTACACTGAAAAACACATCCGTTGGAAACCGGCAT

CAG:
AGCAGCAGCAGCAACAGTAGTAGAAGCAGCAGCACTAACGACAGCACAGCAGTAGCAGTAATAGAAGCAGCAGCAGCAGCAGTAGCAGTAGCAGCAGCAGCAGCAGCAATTTCAACAACAGCAGCAGCAGCT

TATA:
AGGTCTATAAGCGTCTATAAGCGTCTATGAACGTCTATAAACGTCTATAAACGCCTATAAACGCCTATAAACGCCTATACAAGCCTATAAACGCCTATACACGTCTATGCACGACTATACACGTCT

CA:
GAGAGTAACACAGGCACAGGTGTGGAGAGTAACACAGGCACAGGTGTGGGAGAGTGACACACAGGCACAGGTGAGGAGAGTACACACAGGCACAGGTGTGGAGAGCACACACAGGTGCGGAGAG

NoSecs:
GGGCTGTAGAATCTGATGGAGGTGTAGGATGGATGGACAGTATGACAAAAGGGTACTAGCCTGGGACAGCAGGATTGGTGGAAAGGTTACAGGCAGGCCCAGCAGGCTCGGACGCTGTATAGAG

TGGA:
AGATGGATGGATGATGGATGGATGATGGATAGATGGATGATGGATGGATGGATGATGATGGATGAATAGATGGATGGATGGATGATGGATGGATGGACGATGGATGGATAGATGGATGGATGG

TGA:
ATAGATGGATGAGTGGATGGATGGGTGGATGGATAGATGGGTGGATGGGTGGATGGGTGGATGGATGATGGATGGATGAGTGGATGGATGGATGGATGGGTGGATGGGTGGACGG

BadSecs:
TCTAGAGTGTACAACTATCTACCCTGTAGGCATCAAGTCTATTTCGGTAATCACTGCAGTTGCATCATTTCGATACGTTGCTCTTGCTTCGCTAGCAACGGACGATCGTACAAGCAC
arXiv:1711.10662v1 [] 29 Nov 2017
Undefined 1 (2010) 1–5
IOS Press
An Adaptive Fuzzy-Based System to
Simulate, Quantify and Compensate Color
Blindness
Jinmi Lee^a and Wellington Pinheiro dos Santos^b,*
^a Escola Politécnica de Pernambuco, Universidade de Pernambuco, Rua Benfica, 455, Madalena, Recife, Pernambuco, 50720-001, Brazil. E-mail: [email protected]
^b Núcleo de Engenharia Biomédica, Universidade Federal de Pernambuco, Av. Prof. Morais Rego, 1235, Cidade Universitária, Recife, Pernambuco, 50670-901, Brazil. E-mail: [email protected]
Abstract. About 8% of the male population of the world is affected by some type of color vision disturbance, which varies from a partial to a complete reduction of the ability to distinguish certain colors. A considerable number of color blind people are able to live their whole lives without knowing they have color vision disabilities and abnormalities. Nowadays the evolution of information technology and computer science, specifically image processing techniques and computer graphics, can be fundamental in aiding the development of adaptive color blindness correction tools. This paper presents a software tool based on Fuzzy Logic to evaluate the type and the degree of color blindness a person suffers from. In order to model several degrees of color blindness, in this work we modified the classical linear transform-based simulation method by the use of fuzzy parameters. We also propose four new methods to correct color blindness based on a fuzzy approach: Methods A and B, each with and without histogram equalization. All the methods are based on combinations of linear transforms and histogram operations. In order to evaluate the results, we implemented a web-based survey asking volunteers to pick the result that best distinguishes different elements in an image. Results obtained from 40 volunteers showed that Method B with histogram equalization got the best results, for about 47% of volunteers.
Keywords: Color blindness correction, color blindness simulation, linear color systems, color transformations, digital image
processing
1. Introduction
Millions of years of evolution have made the human
visual system one of the most important sensory faculties. Nevertheless, about 8% of the male population
has some sort of color vision disturbance. This condition is characterized by the partial or complete reduction of the ability to distinguish some colors [1]. It is relatively common for people with color blindness to live for several years without realizing that they have a color vision deficiency. The reason for this fact is that the disorder can appear in different intensities. The increasing user interaction with graphical interfaces has been evidencing several problems related to color discrimination, which often restricts the use of web-based applications [8].

* Corresponding author. E-mail: [email protected]
The continuing evolution of information technology and computer science, as well as of digital image processing for the improvement of visual information for human interpretation, has improved the visual quality of digital images for people with certain degrees of disturbance in color perception. However, most applications intended to lessen the effects of color blindness do
not take into account that such disorders can occur in many degrees, varying from person to person [10].
The classical methods to simulate color blindness
are based on mathematical color models to represent
extreme cases of color blindness, i.e. the absence of
one of the three photoreceptors of the human eye (red,
green, and blue, i.e. low, medium, and high frequencies) [32]. Real cases of color blindness are characterized by some degree of anomaly at the absorbance
spectrum of red, green, and blue spectral bands. Such
a degree of chromatic deviation can be measured by
software tools designed to collect parameters that can
simulate real color blindness using fuzzy membership
functions.
One of the targets of this paper is to perform a study on the chromatic abnormalities of the human visual system and to develop computational tools for the adaptation of human-machine interfaces, promoting the inclusion of individuals with color blindness and creating more accessible solutions.
In order to reach this target, we developed a color blindness simulation tool based on the application of
In order to reach this target, we developed a simulation tool of color blindness from the application of
linear transformation matrices that model the nonexistence or disability of the cone cells responsible for
sensitivity to low, medium and high frequency spectral
bands. We also developed a software tool to test color
blindness and assess the degree of color blindness, using a fuzzy-based approach. The last tool was developed to improve the visual quality of digital images
for people with color blindness. This integrated solution ranges from tests for the diagnosis of the color disturbance type to the preview of images with the adjustments that aim to reduce the color blindness effects.
In order to contribute to the evaluation of future color blindness correction tools, another purpose of this paper is to present a fuzzy-based method to simulate real color blindness. The development of color
blindness simulators is very important both for the development and design of GUIs (Graphical User Interfaces) and for supporting the development of new
mathematical methods of correction of color blindness. We developed a tool which simulates color blindness from linear transformation matrices which model
either the nonexistence or the disability of the cone
cells responsible for the sensitivities to low, medium
and high frequencies. Those matrices are adapted according to the fuzzy parameters obtained by a previous tool, briefly cited above, which evaluates color
blindness and assesses its severity degree, using an approach based on Fuzzy Logic.
2. Materials and Methods
In the eyes, specifically in the retina, there is a zone
where we can find the sensory cells specialized to
capture the light stimuli: the photoreceptors known as
rods and cones [8]. The rods have a scotopic characteristic, i.e. they have a high sensitivity to achromatic light. The cones, in turn, have a photopic characteristic: they are less sensitive to light, but are able to discriminate between different wavelengths [8]. There are
three different types of cones in human eyes, each containing one type of photosensitive pigment. One type
detects the spectrum of low frequencies of light (red
color), while another one detects the medium frequencies spectrum (green color). The third type of cones
detects the high frequencies spectral band (blue color).
All these cones working together allow color vision
[8,32].
The eye cones can be classified according to their
sensitivity to different wavelengths. Those which are
sensitive to the red spectral band are stimulated by long
wavelengths; those ones sensitive to green are stimulated by wavelengths considered average, while cones
that are sensitive to blue color are stimulated by short
wavelengths. The vision of different types of colors
becomes possible when those three types of cones are
stimulated at the same time [8].
Several statistical studies show that about 8% of
males and 0.4% of females have some form of disability related to the perception of colors [8]. Deficiencies
for the colors are also called dyschromatopsias, or simply color blindness [32,8].
Most people have trichromatic vision. However, color vision disorders such as color blindness can be classified as follows [8]:
Anomalous trichromacy: anomaly in the proportions of red, green and blue. There are three classified types of anomalous trichromacy:
1. Protanomaly: less sensitive to red.
2. Deuteranomaly: less sensitive to green.
3. Tritanomaly: less sensitive to colors in the
range of blue-yellow.
Dichromatism: it can be considered as the special absolute case of anomalous trichromacy, i.e. the total lack of sensitivity to red, green or blue. Due to
the absence of one kind of cone, it is presented in
the form of:
1. Protanopia: absence of red retinal photoreceptors.
2. Deuteranopia: absence of green retinal photoreceptors.
3. Tritanopia: absence of blue retinal photoreceptors.
Monochromatism: Total inability to perceive color.
This type of color blindness makes one see the
world in gray levels. It is a very rare deficiency
called achromatic vision.
Considering monochromatism as monospectral vision, it is not possible to enhance image features that depend on color. Furthermore, dichromatism can be perceived as an extreme case of anomalous trichromacy. Therefore, in this paper we focus only on
anomalous trichromacy and dichromatism.
In order to simplify notation and avoid transcribing the full name of the deficiencies of the chromatic
visual system, throughout this article we decided to
adopt the terms protan, deuteran and tritan to the deficiency of sensitivity to the frequencies of red (low frequencies), green (medium frequencies) and blue (high
frequencies), respectively.
About twenty methods of diagnosis and classification of color disorders are in common use. Among
them are: pseudoisochromatic plates or color discrimination tests, color arrangement tests or color hue test,
equalization, appointment or designation, etc. Pseudoisochromatic plates are used in the discrimination
tests. These various types of available plates are combined in various tests, among which the Ishihara test,
that is quite popular, being the most known and used
worldwide [8].
Although this test does not provide a quantitative
assessment of the problem and does not identify deficiencies of the tritan type, studies show that the Ishihara test remains the most effective test for rapid identification of congenital deficiencies in color vision [8].
Since visual information depends on image color distribution, it also depends on the frequency response of the visual system. Consequently, the lack of information can constitute a considerable problem for people with color blindness. Thus, it is possible to say that color blindness constitutes an obstacle to the effective use of computers, which nowadays use more and more graphics in their interfaces and visual communication.
There is a growing commitment to creating computational tools focused on the accessibility of people with color vision disturbances. Color blindness simulators are already quite common and avoid serious accessibility problems, helping to comprehend the percep-
tual limitations of a color blind individual [32]. Nevertheless, these simulators are designed just for dichromatism [32].
There also exist applications designed to improve the visual quality of images. However, most applications assume that the user already knows his type of disorder, and they do not consider that the disorder may occur in varying degrees. The advantage of using an adaptive filter lies in the diagnosis and in the use of an approach that considers the uncertainty associated with the problem, for which Fuzzy Logic appears as a natural candidate to handle the adaptation to the user according to his degree of color blindness.
It is quite common for people with color blindness not to realize that they have visual disturbances, and many, when they discover the problem, do not know how to classify it. However, knowing the type of color blindness and its degree is very important for efficiently improving the quality of life of these people.
In order to fill this information gap, a test tool called
DaltonTest was developed. The goal of this tool is to
classify the color blindness, showing the degree of the
disability and its possible forms of presentation.
In the DaltonTest tool, the user takes the Ishihara test (see figure 1), which has been customized through the use of weights. Different weights were assigned to the Ishihara test questions, so that inaccurate responses receive fewer points than precise answers. This simple change makes it possible to approximately evaluate the user's degree of blindness.
Figure 2 shows an example of the data structure
adopted to describe and store a question of the Ishihara test used in the DaltonTest tool. The test, when
completed, presents an estimated diagnosis of the user's
color blindness, containing three factors: the degree of
color blindness, the degree of protanomaly, and the degree of deuteranomaly. These degrees are in fact degrees of membership to fuzzy sets associated to types
of color blindness, protanomaly, and deuteranomaly,
calculated from the fuzzification of the computed scores.
Such result is then used by the correction tool, providing a fuzzy characteristic for the application.
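The paper does not specify the exact weights or membership functions used by DaltonTest, so the following sketch only illustrates the kind of fuzzification step described; the function name, scoring scale and formulas are hypothetical:

```python
def fuzzify_scores(protan_score, deuteran_score, max_score=100.0):
    """Illustrative only (not the authors' implementation): map hypothetical
    weighted Ishihara scores to fuzzy membership degrees in [0, 1] for
    protanomaly, deuteranomaly and the overall degree of color blindness."""
    alpha_p = min(protan_score / max_score, 1.0)    # degree of protanomaly
    alpha_d = min(deuteran_score / max_score, 1.0)  # degree of deuteranomaly
    beta = max(alpha_p, alpha_d)                    # overall degree of color blindness
    return beta, alpha_p, alpha_d
```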
The DaltonSim tool allows the simulation of the most common cases of anomalous trichromacy: protanomaly and deuteranomaly.
The color blindness simulation algorithm is based on the LMS color model (Longwave, Middlewave, Shortwave), since this color model is the most adequate to model the behaviour of light reception: notice that eye cones are organized in receptor groups of short, mid-
dle, and high frequencies [32]. Conversion of the RGB components into components of the LMS model is the first step of the algorithm. The conversion is achieved by the application of a matrix and is, therefore, a linear conversion [8]:

$$\begin{pmatrix} L \\ M \\ S \end{pmatrix} = T_{RGB-LMS} \begin{pmatrix} R \\ G \\ B \end{pmatrix}, \qquad (1)$$

$$\begin{pmatrix} L \\ M \\ S \end{pmatrix} = \begin{pmatrix} 17.8824 & 43.5161 & 4.1194 \\ 3.4557 & 27.1554 & 3.8671 \\ 0.0300 & 0.1843 & 1.4671 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}. \qquad (2)$$

The second step is reducing the normal domain of colors to the domain of a color blind individual. The linear transformation for protanopia is expressed as follows [32]:

$$\begin{pmatrix} L_p \\ M_p \\ S_p \end{pmatrix} = \begin{pmatrix} 0 & 2.0234 & -2.5258 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}, \qquad (3)$$

and for deuteranopia [32]:

$$\begin{pmatrix} L_d \\ M_d \\ S_d \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0.4942 & 0 & 1.2483 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}. \qquad (4)$$

Finally, there must be a transformation from the LMS color model back to RGB. This transformation is obtained using the inverse of the first-step matrix [32]:

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = T_{LMS-RGB} \begin{pmatrix} L \\ M \\ S \end{pmatrix}, \qquad (5)$$

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = T_{RGB-LMS}^{-1} \begin{pmatrix} L \\ M \\ S \end{pmatrix}, \qquad (6)$$

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} 0.0809 & -0.1305 & 0.1167 \\ -0.0102 & 0.0540 & -0.1136 \\ -0.0004 & -0.0041 & 0.6935 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}. \qquad (7)$$

Fig. 1. Examples of Ishihara plates: (a) demonstration plate; (b) hidden plate; (c) masked plate; and (d) diagnosis plate.

Fig. 2. Example of XML code describing the data structure of the DaltonTest tool.
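For concreteness, here is a minimal Python sketch (an illustration, not the authors' code) of the simulation pipeline of equations (1)-(7), using the matrices quoted above; pixel values are assumed to be float RGB vectors, and clipping/quantization details are omitted:

```python
import numpy as np

T_RGB_LMS = np.array([[17.8824, 43.5161, 4.1194],
                      [ 3.4557, 27.1554, 3.8671],
                      [ 0.0300,  0.1843, 1.4671]])   # equation (2)
T_LMS_RGB = np.linalg.inv(T_RGB_LMS)                  # equations (5)-(7)

PROTANOPIA = np.array([[0, 2.0234, -2.5258],
                       [0, 1,       0     ],
                       [0, 0,       1     ]])         # equation (3)
DEUTERANOPIA = np.array([[1,      0, 0     ],
                         [0.4942, 0, 1.2483],
                         [0,      0, 1     ]])        # equation (4)

def simulate_dichromacy(rgb, dichromat_matrix):
    """RGB -> LMS, project onto the dichromat's color domain, LMS -> RGB."""
    return T_LMS_RGB @ (dichromat_matrix @ (T_RGB_LMS @ rgb))

print(simulate_dichromacy(np.array([1.0, 0.0, 0.0]), PROTANOPIA))
```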
In order to modify these dichromatism models, we generate other matrices by including several fuzzy parameters, so as to get a mathematical model useful for dealing with anomalous trichromatism. Therefore, our fuzzy-based model considers not only the extreme cases of protanopia and deuteranopia, but also hybrid cases where each case of color reception, including color “normality”, is represented by a given degree of membership to fuzzy sets designed to model these cases [15,35,6,33,11,5,34,20,31,23,30,18,24,21,1,28,27,4,26,16,29,12,2,3,13,22,17,25,7,14].
The fuzzy parameters were designed to generate a linear dichromatism model, yielding LMS transforms able to vary from the nondichromatic case, represented by the identity matrix, to the absolute dichromatic simulation matrices given by expressions 3 and 4. Consequently, for the fuzzy degree of protanomaly αp, if αp = 0 we have the identity matrix, whilst if αp = 1 we get the transform matrix described by expression 3. The analysis is similar for the deuteranopia simulation matrix. Our proposal deals with the
generation of intermediate simulation matrices able to model nonabsolute cases of dichromatism. Therefore, considering the components already converted into the LMS model, the linear transformation for protanomaly is proposed as follows:

$$\begin{pmatrix} L_p \\ M_p \\ S_p \end{pmatrix} = \begin{pmatrix} 1-\alpha_p & 2.0234\,\alpha_p & -2.5258\,\alpha_p \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}, \qquad (8)$$
where αp is equal to the degree of protan; and for
deuteranomaly:
$$\begin{pmatrix} L_d \\ M_d \\ S_d \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0.4942\,\alpha_d & 1-\alpha_d & 1.2483\,\alpha_d \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}, \qquad (9)$$

where αd is equal to the degree of deuteran.

Based on both linear transformations, we propose a hybrid model of protanomaly and deuteranomaly:

$$\begin{pmatrix} L_h \\ M_h \\ S_h \end{pmatrix} = \begin{pmatrix} 1-\alpha_p & 2.0234\,\alpha_p & -2.5258\,\alpha_p \\ 0.4942\,\alpha_d & 1-\alpha_d & 1.2483\,\alpha_d \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix}. \qquad (10)$$

The degrees of protanomaly αp and deuteranomaly αd are parameters obtained from the testing tool DaltonTest. Therefore, it is possible to simulate real cases of color blindness.

DaltonSim has a very simple interface, and the result of the simulation is the visualization of the original images alongside the simulated images.

The simulation has shown to be essential for understanding the accessibility problems of color blind individuals. Figure 3 shows some results from the simulation tool. Notice the presence of green color: the absence of the cones that detect green, in this case deuteran type color blindness, does not prevent the perception of this spectral range, as there is compensation due to the presence of the other cones, as evidenced in the mathematical model of the deuteran anomaly. However, there is no perception of the real green [32].

Fig. 3. Result of dichromat simulation: (a) original image, normal vision; (b) image simulating protan type color blindness; (c) image simulating deuteran type color blindness.

In this work we also implemented a software tool called DaltonCor, intended to improve the visual quality of images for color blind individuals, whose deficiency may present itself in different degrees. Two methods (A and B) were developed for the DaltonCor tool. Method A is based on the generation of a compensated image produced by the linear combination of the original image and the corrected versions for the protan and deuteran cases; the weights are based on fuzzy rules. Method B is based on a linear transform matrix built from the fuzzy degrees of protan and deuteran color blindness, without using additional fuzzy rules. Both methods try to compensate for the lack of sensitivity to a particular color with new values for the normal color perception.

2.1. Method A

Method A was divided into three modules: Filter, Fuzzy and Control. The last one, in addition to linking the other two modules, communicates with the graphical interface and feeds it with information. The solution developed in the Filter module is based on color perception in cases of absolute color blindness. First, let us focus on protan color blindness.

Consider f = (f_r, f_g, f_b) the original image with its three color bands r, g and b. Its correction is performed in two steps. The first one (equation 11) assigns new values to the bands not affected by color blindness. For protans, these bands are f_g and f_b:

$$f' = (f_r, f_g', f_b'), \qquad f_g' = \tfrac{1}{2}(f_r + f_g), \qquad f_b' = \tfrac{1}{2}(f_r + f_b). \qquad (11)$$

In order to increase the visual quality of the image, the second step (equation 12) is intended to improve the contrast. The chosen technique was the standard histogram equalization presented by Gonzalez and Woods (2002) [10]. Considering γ the contrast optimizer (histogram equalizer), we have the following
expression:

$$f_p = (f_r, f_g'', f_b''), \qquad f_g'' = \gamma(f_g'), \qquad f_b'' = \gamma(f_b'). \qquad (12)$$

The correction equations for deuterans (13 and 14) are similar to those presented above:

$$f' = (f_r', f_g, f_b'), \qquad f_r' = \tfrac{1}{2}(f_r + f_g), \qquad f_b' = \tfrac{1}{2}(f_g + f_b), \qquad (13)$$

$$f_d = (f_r'', f_g, f_b''), \qquad f_r'' = \gamma(f_r'), \qquad f_b'' = \gamma(f_b'). \qquad (14)$$

The Filter module returns two corrected images, f_p and f_d. The Fuzzy module is responsible for the filter customization and, based on the outcome of the test tool, assigns a fuzzy character to the correction. From an experimental approach, the following fuzzy rules were proposed, based on the output data collected from the DaltonTest color blindness software test [20,34,9,19]:

$$x_p' = \beta \wedge \alpha_p, \qquad (15)$$

where x_p' is the conjunction of the degree of color blindness β with the degree of protan α_p;

$$x_d' = \beta \wedge \alpha_d, \qquad (16)$$

where x_d' is the conjunction of the degree of color blindness β with the degree of deuteran α_d;

$$x_n' = \alpha_n \wedge (\neg\beta), \qquad (17)$$

where x_n' is the conjunction of the degree of normality α_n with the negation of the degree of color blindness.

From the measurements obtained from the rules above, it is possible to apply a fuzzification process to these magnitudes to obtain the weights expressed by the following equations:

$$x_p = \frac{x_p'}{x_p' + x_d' + x_n'}, \qquad (18)$$

$$x_d = \frac{x_d'}{x_p' + x_d' + x_n'}, \qquad (19)$$

$$x_n = \frac{x_n'}{x_p' + x_d' + x_n'}. \qquad (20)$$

The corrected image f* is a weighted average of the corrected images for protan and deuteran type color blindness and the original image, as represented in the following expression:

$$f^* = x_p f_p + x_d f_d + x_n f. \qquad (21)$$
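The following is a minimal sketch of Method A's weighting and blending (equations 15-21), not the authors' implementation; it assumes the common fuzzy reading of conjunction as min and negation as 1 − β, which the text does not spell out:

```python
def method_a_weights(beta, alpha_p, alpha_d, alpha_n):
    """Fuzzy rules (15)-(17) followed by the normalization (18)-(20).
    Assumes at least one of the three rule outputs is nonzero."""
    xp = min(beta, alpha_p)        # (15): beta AND alpha_p
    xd = min(beta, alpha_d)        # (16): beta AND alpha_d
    xn = min(alpha_n, 1.0 - beta)  # (17): alpha_n AND (NOT beta)
    total = xp + xd + xn
    return xp / total, xd / total, xn / total

def method_a_correct(f, f_p, f_d, beta, alpha_p, alpha_d, alpha_n):
    """Equation (21): weighted average of the protan-corrected image f_p,
    the deuteran-corrected image f_d and the original image f (numpy arrays)."""
    w_p, w_d, w_n = method_a_weights(beta, alpha_p, alpha_d, alpha_n)
    return w_p * f_p + w_d * f_d + w_n * f
```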
2.2. Method B
Method B is based on linear transformations designed to adaptively correct the effects of chromatic disturbance. Unlike Method A, this method is divided into two modules: Correction and Control. It is important to notice that, in this method, there is no previous correction for absolute color blindness, as occurs in the Filter module of Method A. The Control module supplies the graphical interface with the information described in the following paragraphs.
The solution implemented in the Correction module is inspired by the perception of colors in absolute color blindness. However, instead of using the absolute degrees of color blindness, we take into account the fuzzy degrees of protanomaly and deuteranomaly. Therefore, this method is an adaptive correction method.
Similarly to Method A, the correction proposed in Method B also tries to compensate for the lack of sensitivity to certain colors (in fact, the deficiency in receiving certain spectral bands), increasing their occurrence in the digital image and modifying the colors acquired with normal perception.
Let f = (fr , fg , fb ) be the original image with its
three color bands: r, g and b. This image is compensated by the use of a transform matrix to correct the
pixel values by changing them using linear combinations. The expressions 22 and 23 represent the corrections for the absolute cases of protan and deuteran
using fuzzy features. These empirical expressions are
based on the following idea: compensating for the lack of a color band in color blind vision by equally distributing the information of this band between the other two bands. For protans, the corrected image is proposed as follows:
$$f_p = (f_r, f_g', f_b'), \qquad f_g' = \frac{\alpha_p}{2} f_r + \frac{2-\alpha_p}{2} f_g, \qquad f_b' = \frac{\alpha_p}{2} f_r + \frac{2-\alpha_p}{2} f_b; \qquad (22)$$

for deuterans, the customization is performed in a similar manner, as can be seen in expression 23:

$$f_d = (f_r', f_g, f_b'), \qquad f_r' = \frac{\alpha_d}{2} f_g + \frac{2-\alpha_d}{2} f_r, \qquad f_b' = \frac{\alpha_d}{2} f_g + \frac{2-\alpha_d}{2} f_b. \qquad (23)$$
In order to deal with hybrid cases of color blindness
where the individual has anomalous trichromatism for
the red and green spectral bands at the same time, the
new values of the bands f_r, f_g and f_b are calculated as follows:
$$f^* = (f_r', f_g', f_b'), \qquad (24)$$

$$f_r' = \frac{\alpha_d}{2} f_g + \frac{2-\alpha_d}{2} f_r, \qquad (25)$$

$$f_g' = \frac{\alpha_p}{2} f_r + \frac{2-\alpha_p}{2} f_g, \qquad (26)$$

$$f_b' = \frac{\alpha_p}{4} f_r + \frac{\alpha_d}{4} f_g + \frac{4-\alpha_p-\alpha_d}{4} f_b. \qquad (27)$$
Based on the new values of the three spectral bands r, g and b, it was possible to generate a single transform matrix to deal with the adaptive correction of anomalous trichromatism cases, as seen in expression 28:

$$\begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} = \begin{pmatrix} 1-\frac{\alpha_d}{2} & \frac{\alpha_d}{2} & 0 \\ \frac{\alpha_p}{2} & 1-\frac{\alpha_p}{2} & 0 \\ \frac{\alpha_p}{4} & \frac{\alpha_d}{4} & 1-\frac{\alpha_p+\alpha_d}{4} \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}. \qquad (28)$$
The result of the application of the transform matrix
is an image modified based on the given degrees of
protan and deuteran.
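A minimal sketch (not the authors' code) of Method B, building the matrix of equation (28) from the fuzzy degrees and applying it to an H × W × 3 float RGB image; the original tool's clipping and quantization details are unspecified and omitted here:

```python
import numpy as np

def method_b_matrix(alpha_p, alpha_d):
    """Adaptive correction matrix of equation (28), built from the fuzzy
    degrees of protanomaly and deuteranomaly."""
    return np.array([
        [1 - alpha_d / 2, alpha_d / 2,     0.0],
        [alpha_p / 2,     1 - alpha_p / 2, 0.0],
        [alpha_p / 4,     alpha_d / 4,     1 - (alpha_p + alpha_d) / 4],
    ])

def method_b_correct(image, alpha_p, alpha_d):
    """Apply the matrix to every pixel; `image @ M.T` multiplies each RGB
    row vector by the matrix on the left."""
    return image @ method_b_matrix(alpha_p, alpha_d).T
```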
3. Results
3.1. Simulation Results
The fuzzy-based simulation tool presented in this
paper receives as inputs the results obtained by the
customized Ishihara testing tool, DaltonTest. These
parameters allow our simulation tool to generate results in accordance to what is expected from real color
7
blindness cases, which is essential to understand how
people with different degrees of color blindness perceive color variation.
Notice in figure 4 the simulation of different degrees of the color vision anomalies and the gradual loss of the ability to distinguish different colors.
It is important to notice the similarity of the images: figure 4(c) is a protanomaly in 50% of the cones, and figure 4(j) is a hybrid case of 25% protanomaly and 25% deuteranomaly, which together reduce the compensation due to the presence of other cones, as can be perceived in the new mathematical model presented in this work.
The simulation results for different levels of abnormality proved to be very satisfactory, with the exception of hybrid cases with more than 50% of protan and deuteran.
In order to analyze the results of the proposed correction tool, 10 images in 32-bit bitmap format were used. The use of the simulation tool was essential to understand how the corrections would be perceived by people with color blindness. We also analyzed all the correction variations in RGB and LMS, with and without histogram equalization. Figure 5 shows the correction result for an individual who is 100% color blind, 0% protan, 100% deuteran and 0% normal.
Notice that in figure 5(b) the red tomatoes and the
green peppers appear nearly equal. However, in figure
5(d) the shades of tomatoes and peppers are very different. Thus it is easy to see the gain of visual information after correcting the image.
3.2. Results for Color Blind People
Although the analysis based on the simulator display is satisfactory, a group of four people with color blindness volunteered to test the tool. The volunteers were evaluated with the customized Ishihara test using the test tool. The test set was composed of 32-bit bitmap images of size 300×300, distributed as follows: 10 original images without correction and, for each of these images, 4 corrected images using Method A with and without histogram equalization, taking into account RGB and LMS images, composing a total of 44 images. From the obtained results, the four combinations of correction were analyzed. The results are presented in Table 1. Although these results are not conclusive due to the small number of volunteers, they were important as a first round of feedback from color blind people.
Fig. 4. Results of dichromat simulation: (a) original image, normal vision; (b), (c), (d) and (e): protan type color blindness, for degrees of 25%, 50%, 75%, and 100%, respectively; (f), (g), (h) and (i): deuteran type color blindness, for degrees of 25%, 50%, 75%, and 100%, respectively; (j), (k), (l) and (m): hybrid (protan and deuteran) type color blindness, for equal degrees of protan and deuteran of 25%, 50%, 75%, and 100%, respectively.
Each corrected image was subjectively evaluated
and judged as much better, better, indifferent, worse
or much worse. Distortion and greater ability to distinguish elements in relation to the original image were
considered as evaluation criteria.
Fig. 5. Results from image correction using methods A and B in RGB with histogram equalization: (a) original image; (b) simulation for the absolute protan type; (c) corrected image with Method A; (d) corrected image with Method A simulated for protan color blindness; (e) corrected image with Method B; (f) corrected image with Method B simulated for protan color blindness.

It was observed that, with the correction in the RGB domain with histogram equalization, the images became more understandable, because elements that were perceived as the same color (due to color blindness) received different colors. Some images after the correction showed less saturation in their colors, and some of these cases were considered worse. With the correction in RGB without histogram equalization there was
not a big gain in the visual improvement of the images, although this type of correction was found to cause less distortion of the original colors.
The correction in LMS, in general, changed the original colors of the images too much, although it performed efficiently on images that purposely contain hidden elements, like the images used in pseudoisochromatic Ishihara plates.
Table 1
Preliminary evaluation by a set of 4 volunteers of 40 images corrected using Method A based on RGB and LMS, with and without histogram equalization

                       With equalization    Without equalization
RGB   Much better             20%                    0%
      Better                  46%                    43%
      Indifferent             20%                    43%
      Worse                   14%                    14%
      Much worse              0%                     0%
LMS   Much better             20%                    7%
      Better                  17%                    33%
      Indifferent             20%                    3%
      Worse                   7%                     7%
      Much worse              36%                    50%
3.3. Results for People without Color Blindness
Considering that 1) people with color blindness represent a considerably low percentage of the world's population [32,8], and 2) the occurrence of color blindness is not related to factors that might allow a clear separation between color blind individuals and the rest of the population, it is quite difficult to assemble a good set of test tools to validate the color blindness correction proposed in this paper. Thus, we built a web-based test to collect information of a subjective nature that could be useful to distinguish the methods proposed in this paper according to a qualitative evaluation.
In order to build the test site we used 10 basic images. For each of these 10 images we generated images for protan and deuteran color blindness using the simulation method proposed in this work, with degrees of color blindness of 0%, 25%, 50%, 75% and 100%. We also used the images with 0% of blindness as a way of maintaining quality control of the results. Thus, we generated a total of 450 images.
The web survey consists of 90 questions with five randomly arranged choices. Each question presents an image converted to the vision of a color blind person, with a certain degree of color blindness, using the proposed simulation method. For each presented image there are five options, four of which are obtained from the application of the correction methods, while the remaining one is simply a copy of the presented image. The user must choose the option that best highlights different elements in the presented image, i.e. the option that optimizes the contrast between different elements, even if these elements are not clearly distinguished in the original image.

Fig. 6. Percentages of responses versus color blindness correction methods, considering a set of 40 volunteers without color blindness
The web survey was administered to a universe of
40 users without color blindness. The absolute results
are presented in figure 6. From these results, we
can notice that subjective aesthetic criteria influenced
the results: 20.8% of volunteers chose the option “No
improvements”, when the expected number was 11.1%
(10 of the 90 presented images with 0% of color blindness). Figure 7 shows the results without the option
“No improvements”.
The results for protan and deuteran color blindness, distributed according to the degree of color blindness, are shown in figures 8 and 10, respectively.
Figure 8 shows that Method B with histogram equalization received the greatest number of positive responses from volunteers. Method B without histogram
equalization also had a reasonable amount of positive
responses for all degrees of color blindness, yet its
performance was inferior to that obtained by Method
B with histogram equalization. Method A, with and
without histogram equalization, performs below Method B for the degrees of color blindness of 25%,
50% and 75%. However, for protans with 100% of
color blindness, Method A with histogram equalization can achieve a performance similar to that of Method B without histogram equalization.
For protans with 100% color blindness, Method A
without histogram equalization reaches almost twice
Fig. 7. Percentages of positive responses (responses different from
“no improvements”) versus color blind correction methods, considering a set of 40 volunteers without color blindness
Fig. 8. Percentages of positive responses (responses different from
“no improvements”) versus the degree of protan color blindness,
considering a set of 40 volunteers without color blindness
the performance obtained for 25% and 50% protans.
However, despite the improvement for protan cases with 100% of color blindness, Method A without histogram equalization still performs much worse than the other methods.
Fig. 9. Percentages of positive responses (responses different from “no improvements”) versus color blindness correction methods, considering a set of 40 volunteers without color blindness and protan simulated images

Figure 9 shows the overall results for protan color blindness. The results show that, for protan color blindness, Method B with histogram equalization had the best performance, with 47.3% of positive responses, compared with 31.4% for Method B without histogram equalization. Method A was very inferior to Method B, with 14.1% and 7.2% of positive responses, with and without histogram equalization, respectively. The results also show that histogram equalization is an important factor in the improved distinction of different elements in the images.
Figure 10 shows the results for deuteran color blindness, considering all degrees of color blindness. Here
again the results obtained with Method B with histogram equalization were superior to those obtained
by other methods. However, the distance to Method
B without histogram equalization increased. The performance of Method A, with and without histogram equalization, improved slightly for 25%, 50% and 75% of blindness in relation to the protan cases, although it remains well below Method B with histogram equalization; the behavior for 100% of color blindness was similar. Unlike the protan cases, Method B without histogram equalization had the worst performance
among the four methods for 75% of blindness.
Fig. 10. Percentages of positive responses (responses different from “no improvements”) versus the degree of deuteran color blindness, considering a set of 40 volunteers without color blindness

Fig. 11. Percentages of positive responses (responses different from “no improvements”) versus color blindness correction methods, considering a set of 40 volunteers without color blindness and deuteran simulated images

Figure 11 illustrates the general results for the cases of deuteran color blindness. Comparing the results for deuterans with the results for protans, it is clear that the performance of Method B with histogram equalization remained virtually the same, at approximately 47%. However, Method A, with and without histogram equalization, had much better results for deuterans than for protans. In contrast, Method B without histogram equalization proved to be far worse for
deuterans than for protans. Therefore it is important to
notice that there is a clear improvement of results due
to the addition of histogram equalization.
4. Discussion and Conclusion
Despite the rapid technological developments and advancements in the area of digital image processing, it is not easy to find tools that simulate milder, intermediate forms of color vision impairment and reduce the effects of chromatic visual disturbances. As cited before, the extreme forms of color blindness, which are characterized by the complete functional absence of one type of cone, are not as common as the milder forms with partial or shifted sensitivity. However, many studies assume that if a color scheme is legible for someone with extreme color vision impairment, it will also be easily legible for those with a minor affliction, and they ignore the possibility of adaptive and customized correction methods.
This work presents an adaptive tool to simulate color
blindness. The fuzzy parameters designed to make the
simulation flexible are obtained from the testing tool
previously presented.
Therefore, this paper proposed the development of a set of computational tools to improve accessibility and the visual quality of life for color blind people. However, one of the greatest difficulties encountered in developing this work was the relative scarcity in the literature of other mathematical methods for adaptive compensation of color blindness like the ones proposed here.
As mentioned, color blindness affects only a low percentage of the population. The absence of a statistically significant number of color blindness cases can be an obstacle to the development of adaptive tools for correcting the anomaly, especially when the focus is on real cases rather than extreme cases (dichromatism). Thus good color blindness simulators are of great importance for generating synthetic cases of color vision disturbance in statistically significant amounts, so that they can be used in the development of correction tools. The fuzzy-based simulation proved to be essential to understand the accessibility problems of people with red and green color disorders of different degrees.
Tests were conducted with a group of people with
color blindness. This experience was important to evaluate the results obtained from the correction filters, and
to more accurately gauge the difficulties encountered
by the volunteers. The results were very positive, confirming that the proposed correction is able to extract a
higher amount of information from an image.
Moreover, all the tools proved to be intuitive and easy to use, providing a better user experience and consequently encouraging their habitual use.
Extensive tests were also conducted with people
without color blindness using color blindness simulators and a web tool based on a survey. These tests were
performed by a group of 40 volunteers. Results indi-
cated that Method B with histogram equalization obtained the best results for both protan and deuteran color blindness.
References
[1] H. Adeli and S. L. Hung. An Adaptive Conjugate Gradient Learning Algorithm for Effective Training of Multilayer
Neural Networks. Applied Mathematics and Computation,
62(1):81–102, 1994.
[2] H. Adeli and X. Jiang. Neuro-Fuzzy Logic Model for Freeway
Work Zone Capacity Estimation. Journal of Transportation
Engineering, 129(5):484–493, 2003.
[3] H. Adeli and X. Jiang. Dynamic Fuzzy Wavelet Neural Network Model for Structural System Identification. Journal of
Structural Engineering, 132(1):102–111, 2006.
[4] H. Adeli and A. Karim. Fuzzy-Wavelet RBFNN Model for
Freeway Incident Detection. Journal of Transportation Engineering, 126(6):464–471, 2000.
[5] M. D. Alexiuk and N. J. Pizzi. Robust centroids using fuzzy
clustering with feature partitions. Pattern Recognition Letters,
(26):1039–1046, 2005.
[6] H. Axer, J. Jantzen, D. G. Keiserlingk, and G. Berks. The application of fuzzy-based methods to central nerve fiber imaging. Artificial Intelligence in Medicine, (29):225–239, 2003.
[7] A. Bianchini and P. Bandini. Prediction of Pavement Performance through Neuro-Fuzzy Reasoning. Computer-Aided
Civil and Infrastructure Engineering, 25(1):39–54, 2010.
[8] L. F. Bruni and A. A. V. Cruz. Chromatic sense: types of defects and clinical evaluation tests. Arquivos Brasileiros de Oftalmologia, 69(5), 2006.
[9] J. L. Fan and Y. L. Ma. Some new fuzzy entropy formulas.
Fuzzy Sets and Systems, 128:277–284, 2002.
[10] R. C. Gonzales and R. E. Woods. Digital Image Processing.
Prentice Hall, New York, 2002.
[11] W. L. Hung, M. S. Yang, and D. H. Chen. Parameter selection
for suppressed fuzzy c-means with an application to MRI segmentation. Pattern Recognition Letters, (27):424–438, 2006.
[12] X. Jiang and H. Adeli. Fuzzy Clustering Approach for Accurate Embedding Dimension Identification in Chaotic Time Series. Integrated Computer-Aided Engineering, 10(3):287–302,
2003.
[13] X. Jiang and H. Adeli. Dynamic Fuzzy Wavelet Neuroemulator for Nonlinear Control of Irregular Highrise Building Structures. International Journal for Numerical Methods in Engineering, 74(7):1045–1066, 2008.
[14] X. L. Jin and H. Doloi. Modelling Risk Allocation DecisionMaking in PPP Projects Using Fuzzy Logic. Computer-Aided
Civil and Infrastructure Engineering, 24(7):509–524, 2009.
[15] C. F. Juang. Temporal problems solved by dynamic fuzzy network based on genetic algorithm with variable-length cromossomes. Fuzzy Sets and Systems, (142):199–219, 2004.
[16] A. Karim and H. Adeli. Comparison of the Fuzzy-Wavelet
RBFNN Freeway Incident Detection Model with the California
Algorithm. Journal of Transportation Engineering, 128(1):21–
30, 2002.
[17] Y. Kim, R. Langari, and S. Hurlebus. Model-based Multi-input, Multi-output Supervisory Semiactive Nonlinear Fuzzy Controller. Computer-Aided Civil and Infrastructure Engineering, 25(5):387–393, 2010.
[18] H. Lee, E. Kim, and M. Park. A genetic feature weighting scheme for pattern recognition. Integrated Computer-Aided Engineering, 14(2):161–171, 2007.
[19] J. Lee and W. P. Santos. An adaptative fuzzy-based system to evaluate color blindness. In IWSSIP 2010 – 17th International Conference on Systems, Signals and Image Processing, Rio de Janeiro, Brazil, 2010.
[20] C. Lucas and B. N. Araabi. Generalization of the Dempster-Shafer theory: a fuzzy-valued measure. IEEE Transactions on Fuzzy Systems, 7(3):255–270, 1999.
[21] K. Perusich. Using Fuzzy Cognitive Maps to Identify Multiple Causes in Troubleshooting Systems. Integrated Computer-Aided Engineering, 15(2):197–206, 2008.
[22] U. Reuter and B. Moeller. Artificial Neural Networks for Forecasting of Fuzzy Time Series. Computer-Aided Civil and Infrastructure Engineering, 25(5):363–374, 2010.
[23] S. Rokni and A. R. Fayek. A Multi-Criteria Optimization Framework for Industrial Shop Scheduling Using Fuzzy Set Theory. Integrated Computer-Aided Engineering, 17(3):175–196, 2010.
[24] C. Sabourin, K. Madani, and O. Bruneau. Autonomous Biped Gait Pattern based on Fuzzy-CMAC Neural Networks. Integrated Computer-Aided Engineering, 14(2):173–186, 2007.
[25] N. Sadeghi, A. R. Fayek, and W. Pedrycz. Fuzzy Monte Carlo Simulation and Risk Assessment in Construction. Computer-Aided Civil and Infrastructure Engineering, 25(4):238–252, 2010.
[26] A. Samant and H. Adeli. Enhancing Neural Network Incident Detection Algorithms using Wavelets. Computer-Aided Civil and Infrastructure Engineering, 16(4):239–245, 2001.
[27] K. Sarma and H. Adeli. Fuzzy Discrete Multicriteria Cost Optimization of Steel Structures. Journal of Structural Engineering, 126(11):1339–1347, 2000.
[28] K. Sarma and H. Adeli. Fuzzy Genetic Algorithm for Optimization of Steel Structures. Journal of Structural Engineering, 126(5):596–604, 2000.
[29] K. C. Sarma and H. Adeli. Life-Cycle Cost Optimization of Steel Structures. International Journal for Numerical Methods in Engineering, 55(12):1451–1462, 2002.
[30] J. F. Smith and T. H. Nguyen. Autonomous and cooperative robotic behavior based on fuzzy logic and genetic programming. Integrated Computer-Aided Engineering, 14(2):141–159, 2007.
[31] J. R. Villar, E. de la Cal, and J. Sedano. A Fuzzy Logic Based Efficient Energy Saving Approach for Domestic Heating Systems. Integrated Computer-Aided Engineering, 16(2):151–163, 2009.
[32] F. Viénot, H. Brettel, and J. Mollon. Digital video colourmaps for checking the legibility of displays by dichromats. COLOR Research and Applications, 24(4):243–252, 1999.
[33] B. Wang, P. K. Saha, J. K. Udupa, M. A. Ferrante, J. Baumgardner, D. A. Roberts, and R. R. Rizi. 3D airway segmentation method via hyperpolarized 3He gas MRI by using scale-based fuzzy connectedness. Computerized Medical Imaging and Graphics, (28):77–86, 2004.
[34] R. R. Yager. On the entropies of fuzzy measures. IEEE Transactions on Fuzzy Systems, 8(4):453–461, 2000.
[35] C. Zhu and T. Jiang. Multicontext fuzzy clustering for separation of brain tissues in magnetic resonance images. NeuroImage, (18):685–696, 2003.
arXiv:1705.01075v1 [math.RA] 2 May 2017

MODULES AND LIE SEMIALGEBRAS OVER SEMIRINGS WITH A NEGATION MAP

GUY BLACHAR
Abstract. In this article, we present the basic definitions of modules and Lie semialgebras over
semirings with a negation map. Our main example of a semiring with a negation map is ELT
algebras, and some of the results in this article are formulated and proved only in the ELT theory.
When dealing with modules, we focus on linearly independent sets and spanning sets. We
define a notion of lifting a module with a negation map, similarly to the tropicalization process,
and use it to prove several theorems about semirings with a negation map which possess a lift.
In the context of Lie semialgebras over semirings with a negation map, we first give basic
definitions, and provide parallel constructions to the classical Lie algebras. We prove an ELT
version of Cartan’s criterion for semisimplicity, and provide a counterexample for the naive version
of the PBW Theorem.
Contents
0. Introduction
0.1. Semirings with a Negation Map
0.2. Modules Over Semirings with a Negation Map
0.3. Supertropical Algebras
0.4. Exploded Layered Tropical Algebras
1. Modules over Semirings with a Negation Map
1.1. The Surpassing Relation for Modules
1.2. Basic Definitions for Modules
1.3. -morphisms
1.4. Lifting a Module Over a Semiring with a Negation Map
1.5. Linearly Independent Sets
1.6. d-bases and s-bases
1.7. Free Modules over Semirings with a Negation Map
2. Symmetrized Versions of Important Algebraic Structures
2.1. Semialgebras over Symmetrized Semirings
2.2. Lie Semialgebras with a Negation Map
2.3. Free Lie Algebras with a Negation Map
2.4. Symmetrized Versions of the Classical Lie Algebras
3. Solvable and Nilpotent Symmetrized Lie Algebras
3.1. Basic Definitions
3.2. Lifts of Lie Semialgebras with a Negation Map
3.3. Cartan’s Criterion for Lie Algebras Over ELT Algebras
4. PBW Theorem
4.1. Tensor Power and Tensor Algebra of Modules over a Semirings
4.2. The Universal Enveloping Algebra of a Lie Algebra with a Negation Map
4.3. Counterexample to the Naive PBW Theorem
References
Date: 14th March 2018.
This article contains work from the author's M.Sc. thesis, which was submitted to the Department of Mathematics at Bar-Ilan University. The work was carried out under the supervision of Prof. Louis Rowen from Bar-Ilan University, whom the author thanks deeply for his help and guidance.
0. Introduction
This paper's objective is to form an algebraic basis in the context of semirings with negation maps. Semirings need not have additive inverses for all of their elements. While some of the theory of rings can be copied “as-is” to semirings, there are many facts about rings which use the additive inverses of the elements. The idea of negation maps on semirings is to imitate the additive inverse map. In order to do that, we assume that we are given a map a ↦ (−)a which satisfies all of the properties of the usual additive inverse, except for the identity a + (−)a = 0. This allows us to use the concept of additive inverses, even if they do not exist in the usual sense. Semirings with a negation map are discussed in [1, 8, 9, 2, 3, 24].
In this paper, we will first deal with modules which have a negation map over a semiring, and then study Lie semialgebras with a negation map over semirings.
In the context of modules, our main interest is to study linearly independent sets and spanning sets. We will study maximal linearly independent sets, minimal spanning sets and the connection between them.
As this introduction demonstrates, the tropical world provides us with several nontrivial examples of semirings with a negation map: supertropical algebras and Exploded Layered Tropical (ELT) algebras. In the study of tropical algebra and tropical geometry, one tool is the use of Puiseux series, which allow us to “lift” a tropical problem to classical algebra and to use classical tools to solve it. We will give a general definition of what a lift is, and study some of its properties.
We will then move on to Lie semialgebras with a negation map. We will first study the basic definitions related to Lie algebras when working over semirings. We will consider Lie semialgebras which are free as modules and whose basis contains 1, 2 or 3 elements, and also consider nilpotent, solvable and semisimple Lie semialgebras.
Cartan’s criterion regarding semisimple Lie algebras states that a Lie algebra is semisimple if and
only if its Killing form is nondegenerate. In this paper, we prove an ELT version of this theorem for
Lie semialgebras over ELT algebras.
Another point of interest is Poincaré-Birkhoff-Witt (PBW) Theorem. In the classical theory of
Lie algebras, the PBW Theorem states that every Lie algebra can be embedded into an associative
algebra (its universal enveloping algebra), where the Lie bracket on this algebra is the commutator.
We will construct the universal enveloping algebra of a Lie semialgebra, and give a counterexample
to the naive version of the PBW theorem.
Throughout this paper, addition and multiplication will always be denoted as + and ·, in order
to emphasize the algebraic structure. Negation maps will always be denoted by (−). N stands for
the set of natural numbers, whereas N0 = N ∪ {0}. Z stands for the ring of integers, and Z /nZ
denotes the quotient ring of Z by nZ.
0.1. Semirings with a Negation Map.
Definition 0.1. Let R be a semiring. A map (−) : R → R is a negation map (or a symmetry)
on R if the following properties hold:
(1) ∀a, b ∈ R : (−) (a + b) = (−) a + (−) b.
(2) (−) 0R = 0R .
(3) ∀a, b ∈ R : (−) (a · b) = a · ((−) b) = ((−) a) · b.
(4) ∀a ∈ R : (−) ((−) a) = a.
We say that (R, (−)) is a semiring with a negation map. If (−) is clear from the context, we
will not mention it.
For example, the map (−) a = a defines a trivial negation map on every semiring R. If R is a
ring, it has a negation map given by (−) a = −a.
Throughout this article, we use the following notations:
• a (−) a is denoted a◦.
• R◦ = {a◦ | a ∈ R}.
• R∨ = (R \ R◦) ∪ {0R}.
• We define two partial orders on R:
– The relation ⪰ (which we call the surpassing relation), defined by
a ⪰ b ⇔ ∃c ∈ R◦ : a = b + c;
– The relation ∇, defined by
a ∇ b ⇔ a (−) b ∈ R◦.
0.2. Modules Over Semirings with a Negation Map. We now consider modules over semirings
with a negation map.
Definition 0.2. Let R be a semiring with a negation map. A (left) R-module is a commutative
monoid (M, +, 0M ), equipped with an operation of “scalar multiplication”, R × M → M , where we
denote (α, x) ↦ αx, such that the following properties hold:
(1) ∀α ∈ R ∀x, y ∈ M : α (x + y) = αx + αy.
(2) ∀α, β ∈ R ∀x ∈ M : (α + β) x = αx + βx.
(3) ∀α, β ∈ R ∀x ∈ M : α (βx) = (α · β) x.
(4) ∀α ∈ R : α0M = 0M .
(5) ∀x ∈ M : 0R x = 0M .
(6) ∀x ∈ M : 1R x = x.
A right R-module is defined similarly.
Example 0.3. The following are examples of R-modules, where R is a semiring:
(1) R is an R-module, where the scalar multiplication is the multiplication of R. The submodules
of R are its left ideals, meaning sets I ⊆ R that are closed under addition and satisfy
RI ⊆ I.
(2) Taking the direct sum of Mi = R for i ∈ I, we obtain the free R-module:
R^(I) = ⊕_{i∈I} R
Two important special cases are:
(3) Mm×n (R) is an R-module with the standard scalar multiplication, (αA)ij = α · (A)ij .
(4) Let {Mi}i∈I be R-modules. Then ∏_{i∈I} Mi and ⊕_{i∈I} Mi are R-modules with the usual componentwise sum and scalar multiplication.
As in [24], we consider negation maps on modules.
Definition 0.4. Let R be a semiring, and let M be an R-module. A map (−) : M → M is a
negation map (or a symmetry) on M if the following properties hold:
(1) ∀x, y ∈ M : (−) (x + y) = (−) x + (−) y.
(2) (−) 0M = 0M .
(3) ∀α ∈ R ∀x ∈ M : (−) (αx) = α ((−) x).
(4) ∀x ∈ M : (−) ((−) x) = x.
If the underlying semiring has a negation map (−), every R-module M has an induced negation
map given by (−) x = ((−) 1R ) x. Unless otherwise written, when working with a module over a
semiring with a negation map, the negation map will be the induced one. We note that if it is not
mentioned that the semiring has a negation map, the negation map on the module can be arbitrary
(although the main interest is with the induced negation map).
Definition 0.5. Let R be a semiring, and let M be an R-module with a negation map. A submodule with a negation map of M is a submodule N of M which is closed under the negation
map. Since we are in the context of negation maps, and every module in this paper has a negation
map, we will write “submodule” for a “submodule with a negation map”, where we understand that
every submodule should be closed under the negation map.
A main example of semirings and modules with a negation map uses the process of symmetrization defined in [24]. Briefly, if R is an arbitrary semiring, we may define R̂ = R × R with componentwise addition, and with multiplication given by
(r1 , r2 ) · (r1′ , r2′ ) = (r1 r1′ + r2 r2′ , r1 r2′ + r2 r1′ ) .
This is indeed a semiring with a negation map given by (r1 , r2 ) 7→ (r2 , r1 ). Of course, there is a
natural injection R ֒→ R̂ by r 7→ (r, 0).
The idea of this construction is to imitate the way Z is constructed from N. One should think of
the element (r1 , r2 ) as r1 + (−) r2 .
Now, if M is any R-module, we may define M̂ = M × M . We want to define it as an R̂-module;
so we use componentwise addition, and define multiplication by
(r1 , r2 ) (x1 , x2 ) = (r1 x1 + r2 x2 , r1 x2 + r2 x1 ) .
This multiplication is called the twist action on M̂ over R̂. It endows M̂ with an R̂-module structure, and the induced negation map is
(−) (x1, x2) = (0, 1) (x1, x2) = (x2, x1)
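The following Python sketch (ours; the function names are ad hoc) carries out this symmetrization for R = N0 and checks the negation-map axioms on a pair of elements.

    # Symmetrization R̂ = N0 × N0: think of (r1, r2) as r1 + (-)r2.
    def add(u, v):
        return (u[0] + v[0], u[1] + v[1])

    def mul(u, v):   # (r1,r2)·(s1,s2) = (r1·s1 + r2·s2, r1·s2 + r2·s1)
        (r1, r2), (s1, s2) = u, v
        return (r1 * s1 + r2 * s2, r1 * s2 + r2 * s1)

    def neg(u):      # the negation map (r1, r2) ↦ (r2, r1)
        return (u[1], u[0])

    x, y = (5, 2), (1, 4)
    assert mul(neg(x), y) == neg(mul(x, y))   # (-)(xy) = ((-)x)y
    assert neg(neg(x)) == x                   # (-) is an involution
    assert add(x, neg(x)) == (7, 7)           # x° lies on the diagonal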
For the remainder of the introduction, we will present our two main examples of semirings with
a negation map – supertropical algebras and ELT algebras.
0.3. Supertropical Algebras. Supertropical algebras are a refinement of the usual max-plus algebra, in which one adds “ghost elements” to take over the role of the classical zero. Supertropical algebras are discussed in several articles, including [13, 16, 17, 14, 18, 15].
Definition 0.6. A supertropical semiring is a quadruple R := (R, T, G, ν), where R is a semiring, T ⊆ R is a multiplicative submonoid, and G0 = G ∪ {0} ⊆ R is an ordered semiring ideal, together with a map ν : R → G0 satisfying ν² = ν, as well as the condition:
a + b = a if ν(a) > ν(b), a + b = b if ν(a) < ν(b), and a + b = ν(a) if ν(a) = ν(b)
The monoid T is called the monoid of tangible elements, while the elements of G are called
ghost elements, and ν : R → G0 is called the ghost map. Intuitively, the tangible elements
correspond to the original max-plus algebra, although now a + a = ν (a) instead of a + a = a.
Any supertropical semiring R is a semiring with a negation map, when the negation map is (−) a =
a. Endowed with this negation map, R◦ = G0 .
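As an illustration, here is a minimal Python sketch of supertropical arithmetic (our encoding of the standard max-plus model: tangible elements are tagged 't' and ghosts 'g').

    def nu(x):                             # the ghost map ν
        return ('g', x[1])

    def add(x, y):                         # max-plus addition with ghosts
        if x[1] > y[1]:
            return x
        if x[1] < y[1]:
            return y
        return nu(x)                       # equal values collide into a ghost

    def mul(x, y):                         # "multiplication" is + of values
        kind = 't' if x[0] == y[0] == 't' else 'g'
        return (kind, x[1] + y[1])

    assert add(('t', 3), ('t', 3)) == ('g', 3)   # a + a = ν(a)
    assert add(('t', 3), ('t', 5)) == ('t', 5)   # the larger value wins
    assert mul(('t', 3), ('g', 2)) == ('g', 5)   # ghosts absorb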
0.4. Exploded Layered Tropical Algebras. ELT algebras are a finer degeneration of classical algebra than the usual max-plus algebra. The main idea behind this structure is to remember not only the element itself, but also another piece of information – its layer – which tells us “how many times we added this element to itself”. ELT algebras originate in [22], were formally defined in [25] and are discussed in [5, 6].
Definition 0.7. Let L be a semiring, and F a totally ordered semigroup. An Exploded Layered Tropical algebra (or, in short, an ELT algebra) is the pair R = R(L, F), whose elements are denoted [ℓ]a for a ∈ F and ℓ ∈ L, together with the semiring (without zero) structure:
(1) [ℓ1]a1 + [ℓ2]a2 := [ℓ1]a1 if a1 > a2, [ℓ2]a2 if a1 < a2, and [ℓ1 +L ℓ2]a1 if a1 = a2.
(2) [ℓ1]a1 · [ℓ2]a2 := [ℓ1 ·L ℓ2] (a1 +F a2).
For [ℓ]a, ℓ is called the layer, whereas a is called the tangible value.
Let R be an ELT algebra. We write s : R → L for the projection on the first component (the sorting map):
s([ℓ]a) = ℓ
We also write τ : R → F for the projection on the second component:
τ([ℓ]a) = a
We denote the zero-layer subset
R◦ = {α ∈ R | s(α) = 0}
and
R∗ = {α ∈ R | s(α) ≠ 0} = R \ R◦
Definition 0.8. An ELT algebra R = R(L, F) in which F is a totally ordered group and L is a ring (with 1) is called an ELT ring.
Definition 0.9. For any ELT ring R, we define (−) [ℓ]a = [−ℓ]a.
Lemma 0.10. a 7→ (−) a is a negation map on any ELT ring R.
Proof. Straightforward from the definition.
Remark 0.11. When dealing with ELT algebras, the surpassing relation ⪰ is denoted ⊨.
We point out some important elements in any ELT ring R:
(1) [1]0, which is the multiplicative identity of R.
(2) [0]0, which is idempotent to both operations of R.
(3) [−1]0, which has the role of “−1” in our theory.
Since ELT algebras lack an additive identity element, we adjoin one, denoted 0R (denoted −∞
in [5, 6]), and we denote R = R ∪ {0R }. This endows R with an antiring structure.
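For illustration, here is a small Python sketch of ELT arithmetic (our encoding: an element [ℓ]a is a pair (ℓ, a) with layer ℓ ∈ Z and tangible value a, and the adjoined zero 0R is represented by None).

    def add(x, y):
        if x is None: return y
        if y is None: return x
        (l1, a1), (l2, a2) = x, y
        if a1 > a2: return (l1, a1)
        if a1 < a2: return (l2, a2)
        return (l1 + l2, a1)           # equal tangible values: layers add

    def mul(x, y):
        if x is None or y is None: return None
        (l1, a1), (l2, a2) = x, y
        return (l1 * l2, a1 + a2)      # layers multiply, values add

    def neg(x):                        # (-)[ℓ]a = [-ℓ]a
        return None if x is None else (-x[0], x[1])

    assert mul((-1, 0), (3, 5)) == (-3, 5)      # [-1]0 plays the role of -1
    assert add((1, 5), neg((1, 5))) == (0, 5)   # quasi-zeros are zero-layered
    assert add((1, 7), (4, 5)) == (1, 7)        # the larger tangible value wins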
1. Modules over Semirings with a Negation Map
Modules over rings have an important role in the classical theory. There, a module M over a ring R is an abelian group equipped with a scalar multiplication by elements of R. If one takes R to be a semiring rather than a ring, then a semimodule M is a commutative monoid equipped with a scalar multiplication.
Since semirings with a negation map are not rings, modules over them lack some basic properties of modules over rings. The study of modules over semirings can be found in [10, Chapters 14–18].
However, knowing that we are working over a semiring with a negation map instead of a general semiring, the module has some special features. For example, over a semiring with a negation map we have the notion of quasi-zeros, i.e. elements of the form x + (−) x, which play the role of the classical zero.
In this part, we look into modules over semirings with a negation map. The main issue we will deal with is the different possible definitions of a base. Following [14], we will give four definitions:
(1) d-base – a maximal linearly independent set, Definition 1.45.
(2) s-base – a minimal spanning set, Definition 1.54.
(3) d,s-base – an s-base which is also a d-base, Definition 1.62.
(4) base – defined as the classical base, Definition 1.66.
We will point out the properties satisfied by modules possessing these bases, and we will examine
the differences between these definitions.
1.1. The Surpassing Relation for Modules. We will now give analogous definitions of some
concepts defined previously for semirings with a negation map.
Definition 1.1. Let R be a semiring, and let M be a module with a negation map. An element of
the form x + (−) x for some x ∈ M is called a quasi-zero (or a balanced element, as in [2]). We
denote x◦ = x + (−) x. The submodule of quasi-zeros is
M ◦ = {x◦ | x ∈ M }
If R has a negation map, and the negation map of M coincides with the induced negation map,
then M ◦ = 1◦R M .
Definition 1.2. Let R be a semiring, and let M be an R-module with a negation map. We define a relation ⪰ on M in the following way:
x ⪰ y ⇐⇒ ∃z ∈ M◦ : x = y + z
If x ⪰ y, we say that x surpasses y.
Example 1.3. We refer to some of the examples of modules given here, and demonstrate the
meaning of the surpassing relation in these modules.
(1) In R, the surpassing relation of the module coincides with the surpassing relation of the
semiring with a negation map.
(2) In RI , the surpassing relation means surpassing componentwise.
(3) As a special case of the free module, in Mm×n (R) the surpassing relation is equivalent to
surpassing componentwise.
(4) As another special case of the free module, in R [λ], the surpassing relation means surpassing
of each coefficient of the polynomials.
Lemma 1.4. ⪰ is a reflexive and transitive relation on any R-module with a negation map. However, it may not be antisymmetric, as demonstrated in [2, Example 4.12].
Proof. Let M be an R-module with a negation map.
(1) If x ∈ M, then x = x + 0M, and 0M ∈ M◦; so x ⪰ x.
(2) If x, y, z ∈ M satisfy x ⪰ y ⪰ z, then x = y + z1, y = z + z2, where z1, z2 ∈ M◦. So
x = y + z1 = z + z1 + z2
and z1 + z2 ∈ M◦, implying x ⪰ z.
Although in general ⪰ may not be a partial order relation, we give a necessary and sufficient condition for it to be one, and we point out a specific case in which it is a partial order relation.
Lemma 1.5. The following are equivalent for an R-module M with a negation map:
(1) ∀x ∈ M ∀z1, z2 ∈ M◦ : x = x + z1 + z2 ⇒ x = x + z1 ∧ x = x + z2.
(2) ∀x ∈ M ∀z1, z2 ∈ M◦ : x = x + z1 + z2 ⇒ x = x + z1 ∨ x = x + z2.
(3) ⪰ is a partial order relation on M.
Proof. 1 ⇒ 2 is trivial.
2 ⇒ 3: Assume that condition 2 holds. By Lemma 1.4, it is enough to show that ⪰ is antisymmetric. Let x, y ∈ M such that x ⪰ y and y ⪰ x. Then there are z1, z2 ∈ M◦ such that x = y + z1 and y = x + z2. Therefore, x = x + z1 + z2. By condition 2, either x = x + z2 = y (in which case we are done) or x = x + z1. In the latter case,
x = x + z1 + z2 = x + z2 = y
which proves that ⪰ is a partial order relation on M.
3 ⇒ 1: Assume that ⪰ is a partial order relation on M, and let x ∈ M and z1, z2 ∈ M◦ such that x = x + z1 + z2. Write y = x + z1. By definition, y ⪰ x. On the other hand,
x = x + z1 + z2 = y + z2 ⪰ y
Since ⪰ is antisymmetric, x = y = x + z1, and thus also x = x + z1 + z2 = x + z2.
Corollary 1.6. Let R be a semiring with a negation map, and let M be an R-module (with the induced negation map). The following are equivalent:
(1) 1◦R is additively idempotent.
(2) Every element of R◦ is additively idempotent.
(3) For every R-module M, every element of M◦ is additively idempotent.
In this case, ⪰ is a partial order relation on every R-module M.
Proof. 1 ⇒ 2: Let α◦ ∈ R◦ . Then
α◦ + α◦ = 1◦R · α + 1◦R · α = (1◦R + 1◦R ) α = 1◦R α = α◦
2 ⇒ 3: Assume that R◦ is idempotent, let M be an R-module, and let x◦ ∈ M ◦ . Then
x◦ + x◦ = 1◦R x + 1◦R x = (1◦R + 1◦R ) x = 1◦R x = x◦
3 ⇒ 1: Trivial.
It is left to prove that in this case, if M is an R-module then ⪰ is a partial order relation on M.
We prove condition 2 in Lemma 1.5. Indeed, if x ∈ M and z1 , z2 ∈ M ◦ satisfy x = x + z1 + z2 , then
x = x + z1 + z2 = x + z1 + z2 + z2 = x + z2
Example 1.7. If R is a ring with (−)a = −a, a supertropical algebra with (−)a = a, or an ELT ring with (−)a = [−1]0 · a, then ⪰ is a partial order relation on every R-module.
Example 1.8. Even if ⪰ is a partial order relation on R, the induced surpassing relation on an R-module M might not be a partial order relation on M. For example, consider R = N0 with the usual addition and multiplication and the negation map (−)a = a. Then N0◦ = 2N0, and thus for m, n ∈ N0,
m ⪰ n ⇔ ∃k ∈ N0 : m = n + 2k
This is clearly a partial order relation.
However, consider the R-module M = Z. The induced negation map is (−)a = a, and thus Z◦ = 2Z. Thus, for m, n ∈ Z,
m ⪰ n ⇔ ∃k ∈ Z : m = n + 2k
which is not only not a partial order relation, but an equivalence relation.
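This contrast can be checked mechanically; the following Python sketch (ours) verifies on a finite window that ⪰ is antisymmetric on N0 but symmetric on the N0-module Z.

    def surpasses_N0(m, n):   # m ⪰ n in N0:  m = n + 2k with k ∈ N0
        return m >= n and (m - n) % 2 == 0

    def surpasses_Z(m, n):    # m ⪰ n in Z:   m = n + 2k with k ∈ Z
        return (m - n) % 2 == 0

    assert not any(surpasses_N0(m, n) and surpasses_N0(n, m) and m != n
                   for m in range(12) for n in range(12))
    assert all(surpasses_Z(m, n) == surpasses_Z(n, m)
               for m in range(-10, 11) for n in range(-10, 11))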
We will later see that if ⪰ is not a partial order, we can “enforce” it to be one, by taking the module modulo the congruence it generates (see Definition 1.21).
We note another property of M ◦ :
Lemma 1.9. If x ∈ M◦ and y ⪰ x, then y ∈ M◦.
Proof. Since y x, there exists z ∈ M ◦ such that y = x + z. The result follows, since M ◦ is a
submodule of M .
1.2. Basic Definitions for Modules.
1.2.1. Spanning Sets.
Definition 1.10. Let R be a semiring with a negation map, let M be an R-module, and let S ⊆ M
be a subset of M. The R-module spanned by S, denoted Span(S), is
Span(S) = { Σ_{i=1}^{k} αi xi ∈ M | k ∈ N, α1, . . . , αk ∈ R, x1, . . . , xk ∈ S }
It is easy to see that S ⊆ Span (S), and that Span (S) ⊆ M is also an R-module with respect to
the induced operations.
Definition 1.11. Let R be a semiring with a negation map, and let M be an R-module. We say
that a subset S ⊆ M is a spanning set of M , if Span (S) = M .
Definition 1.12. A module with a finite spanning set is called finitely generated.
1.2.2. Congruences and the First Isomorphism Theorem. Since the usual construction of quotient
modules using ideals fails over semirings, we use the language of congruences. In this subsubsection,
we study congruences of modules over semirings with a negation map. Since this is a special case of
congruences in the context of universal algebra, as written in [10], we mainly cite some known facts.
Definition 1.13. Let R be a semiring, and let M1 , M2 be two R-modules with a negation map. An
R-module homomorphism is a function ϕ : M1 → M2 , which satisfies:
(1) ∀x, y ∈ M1 : ϕ (x + y) = ϕ (x) + ϕ (y).
(2) ∀α ∈ R, ∀x ∈ M1 : ϕ (αx) = αϕ (x).
(3) ∀x ∈ M1 : ϕ ((−) x) = (−) ϕ (x).
We note that if R has a negation map, and the negation maps on M1 and M2 are the induced
negation maps, then condition 3 follows from condition 2, since
ϕ ((−) x) = ϕ (((−) 1R ) x) = ((−) 1R ) ϕ (x) = (−) ϕ (x) .
Remark 1.14. ϕ(M1◦) ⊆ M2◦, since
ϕ(x◦) = ϕ(x + (−)x) = ϕ(x) + (−)ϕ(x) = ϕ(x)◦ ∈ M2◦
Definition 1.15. Let R be a semiring, and let M be an R-module with a negation map. An
equivalence relation ∼ on M is called a module congruence, if:
(1) ∀x1 , x2 , y1 , y2 ∈ M : x1 ∼ x2 ∧ y1 ∼ y2 ⇒ x1 + y1 ∼ x2 + y2 .
(2) ∀α ∈ R ∀x, y ∈ M : x ∼ y ⇒ αx ∼ αy.
(3) ∀x, y ∈ M : x ∼ y ⇒ (−) x ∼ (−) y.
The kernel of ∼ is
M∼ = {x ∈ M | ∃y ∈ M ◦ : x ∼ y}
Again, if the negation map on M is induced from a negation map on R, condition 3 follows from condition 2. The negation map (−) of M induces a negation map (−) on M/∼ by (−)[x] = [(−)x]. It is easy to see that this negation map and the one induced from R coincide, since
(−)[x] = [(−)x] = [((−)1R)x] = ((−)1R)[x]
Lemma 1.16. M /∼ is an R-module.
Lemma 1.17. M∼ is a submodule of M .
Proof. If x1 , x2 ∈ M∼ , take y1 , y2 ∈ M ◦ such that x1 ∼ y1 and x2 ∼ y2 ; then x1 + x2 ∼ y1 + y2 ,
and y1 + y2 ∈ M ◦ . Thus, x1 + x2 ∈ M∼ .
Now, if α ∈ R, x ∈ M∼ , take y ∈ M ◦ such that x ∼ y; then αx ∼ αy, and αy ∈ M ◦ . Thus,
αx ∈ M∼ .
Lemma 1.18. Let ρ : M → M/∼ be the canonical map. Then
(M/∼)◦ = ρ(M∼)
Proof.
ρ(x) ∈ (M/∼)◦ ⇐⇒ ρ(x) = 1◦R ρ(x) = ρ(1◦R x) ⇐⇒ x ∼ 1◦R x ⇐⇒ x ∈ M∼
We recall the first isomorphism theorem:
Lemma 1.19. Let ϕ : M1 → M2 be an R-module homomorphism. Then
x ∼ y ⇔ ϕ (x) = ϕ (y)
is a module congruence on M1 .
Theorem 1.20 (The First Isomorphism Theorem). Let R be a semiring with a negation map, and
let M1 , M2 be R-modules. If ϕ : M1 → M2 is an R-module homomorphism, then there exists a
module congruence ∼ on M1 such that
M1/∼ ≅ Im ϕ
We return to ⪰, to demonstrate how we can “enforce” it to be a partial order on M.
Definition 1.21. Let R be a semiring, and let M be an R-module with a negation map. Define a
relation ≡◦ on M as
a ≡◦ b ⇔ a ⪰ b ∧ b ⪰ a
Remark 1.22. ≡◦ is a congruence on M, the R-module M/≡◦ is partially ordered by ⪰, and the induced negation map is (−)[x] = [(−)x].
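As a toy illustration of this enforcement, the following Python sketch (ours) computes the ≡◦-classes of the N0-module Z from Example 1.8; the quotient collapses Z to two classes (even and odd), on which ⪰ is trivially a partial order.

    def surpasses(x, y):                 # x ⪰ y in Z: x - y is even
        return (x - y) % 2 == 0

    def equiv(x, y):                     # x ≡° y  ⇔  x ⪰ y and y ⪰ x
        return surpasses(x, y) and surpasses(y, x)

    window = range(-5, 6)
    classes = {frozenset(m for m in window if equiv(m, n)) for n in window}
    assert len(classes) == 2             # Z/≡° has exactly two classes here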
1.3. ⪰-morphisms. As we will see when dealing with Lie algebras with a negation map, we cannot always construct functions that preserve every operation of the Lie algebra. We will now define the notion of ⪰-morphisms, as also defined in [24, Section 8.2].
Definition 1.23. Let R be a semiring, and let M1, M2 be two R-modules with a negation map. A ⪰-morphism is a function ϕ : M1 → M2, which satisfies:
(1) ∀x, y ∈ M1 : ϕ(x + y) ⪯ ϕ(x) + ϕ(y).
(2) ∀α ∈ R, ∀x ∈ M1 : ϕ(αx) ⪯ αϕ(x).
(3) ∀x ∈ M1 : ϕ((−)x) = (−)ϕ(x).
(4) ϕ(M1◦) ⊆ M2◦.
Our purpose now will be to formulate a version of the First Isomorphism Theorem for ⪰-morphisms. Assume that ϕ : M1 → M2 is a surjective ⪰-morphism. We define the equivalence relation ∼ (which is not necessarily a congruence) on M1 as
x ∼ y ⇐⇒ ϕ(x) = ϕ(y)
We now wish to define addition and scalar multiplication on M1 /∼ . The usual definition (adding
or scalar multiplying the representatives) will no longer work, because ∼ may not be a congruence;
so we define
[x] + [y] = {z ∈ M1 | ϕ(z) = ϕ(x) + ϕ(y)} = ϕ−1(ϕ(x) + ϕ(y))
α · [x] = {z ∈ M1 | ϕ(z) = α · ϕ(x)} = ϕ−1(α · ϕ(x))
and the natural negation map (−) [x] = [(−) x].
The usual verifications show that:
Lemma 1.24. M1 /∼ is an R-module with the above operations. The quasi-zeros are equivalence
classes of the form [x], where x ∈ ker ϕ.
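The following Python sketch (ours, with a toy stand-in map in place of a genuine ⪰-morphism) illustrates the mechanics of this construction: classes are fibers of ϕ, and class addition is computed through the images rather than through representatives.

    M1 = [(a, b) for a in range(4) for b in range(4)]

    def phi(x):                  # a surjective toy map M1 -> {0,...,3}
        return x[0]

    def cls(x):                  # the class [x] = ϕ⁻¹(ϕ(x))
        return frozenset(z for z in M1 if phi(z) == phi(x))

    def cls_add(x, y):           # [x] + [y] = ϕ⁻¹(ϕ(x) + ϕ(y))
        return frozenset(z for z in M1 if phi(z) == phi(x) + phi(y))

    assert cls_add((1, 0), (2, 3)) == cls((3, 0))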
We define the usual projection map ρ : M1 → M1 /∼ as ρ (x) = [x].
Lemma 1.25. ρ is a ⪰-morphism.
Proof.
(1) Let x, y ∈ M1. Since ϕ(x + y) ⪯ ϕ(x) + ϕ(y), and since ϕ is surjective, there is some z ∈ M1 such that
ϕ(x + y) + ϕ(z) = ϕ(x) + ϕ(y)
and ϕ(z) ∈ M2◦, that is, z ∈ ker ϕ. Hence
ρ(x + y) = [x + y] ⪯ [x + y] + [z] = [x] + [y] = ρ(x) + ρ(y)
(2) Similar to (1).
(3) Given x ∈ M1, it is easily seen that
ρ((−)x) = [(−)x] = (−)[x] = (−)ρ(x)
(4) This is obvious.
We now get a version of the First Isomorphism Theorem:
Theorem 1.26 (The First Isomorphism Theorem for ⪰-morphisms). Let R be a semiring, let M1, M2 be two R-modules with a negation map, and let ϕ : M1 → M2 be a surjective ⪰-morphism. Let ∼ be the equivalence relation defined above, and ρ : M1 → M1/∼ the projection map. Then there exists a unique R-isomorphism ϕ̂ : M1/∼ → M2 such that ϕ = ϕ̂ ∘ ρ.
Proof. We define ϕ̂ : M1/∼ → M2 by ϕ̂([x]) = ϕ(x). The usual verifications, as in the proof of the First Isomorphism Theorem, show that ϕ̂ is an R-isomorphism and that ϕ̂ is uniquely defined.
We define a similar concept for semirings with a negation map:
Definition 1.27. Let R1 and R2 be semirings with a negation map. A ⪰-morphism is a function ϕ : R1 → R2 such that the following properties hold:
(1) ∀α, β ∈ R1 : ϕ(α + β) ⪯ ϕ(α) + ϕ(β).
(2) ∀α, β ∈ R1 : ϕ(α · β) ⪯ ϕ(α) · ϕ(β).
(3) ∀α ∈ R1 : (−)ϕ(α) = ϕ((−)α).
(4) ϕ (0R1 ) = 0R2 .
(5) ϕ (1R1 ) = 1R2 .
1.4. Lifting a Module Over a Semiring with a Negation Map.
1.4.1. Lifting a semiring with a negation map. When dealing with tropical algebra, we had a powerful tool – Puiseux series. This tool enables one to use known results in the classical theory to prove
tropical results. In this section, we attempt to give a similar construction for an arbitrary semiring
with a negation map.
Definition 1.28. Let R be a semiring with a negation map. For any subset A ⊆ R, define
A⪰ = {β ∈ R | ∃α ∈ A : β ⪰ α}
If A⪰ = R, we say that A is ⪰-dense in R.
For example, (R∨)⪰ = R. We note that this closure operation defines a topology on R; however, this topology is usually not even T1, since, for example, {0R}⪰ = R◦.
Definition 1.29. Let R be a semiring with a negation map. A lift of R is a ring R̂ together with a map ϕ̂ : R̂ → R, such that the following properties hold:
(1) ϕ̂ is a ⪰-morphism, where the negation map on R̂ is (−)α̂ = −α̂.
(2) Im ϕ̂ ⊆ R∨ is ⪰-dense in R.
(3) ϕ̂(α̂) = 0R ⇐⇒ α̂ = 0R̂
Example 1.30. We give several examples of lifts.
(1) If R is a ring with the negation map (−)a = −a, then the identity map R → R is a lift of R.
(2) Given an ELT algebra R = R(L, F), R̂ = L{{t}} with the EL tropicalization map is a lift of R, as defined and proved in [5, Lemma 0.4].
(3) Although the intuition is that lifts should be “very big”, this example shows that this is not always the case. Consider the semiring (N0, +, ·) with the negation map (−)a = a. Then Z/2Z is a lift of N0, with the map defined by ϕ̂(0̄) = 0 and ϕ̂(1̄) = 1.
We now prove the following theorem:
Theorem 1.31. Let R be an antiring with a negation map. Then R possesses a lift.
Proof. We shall first construct the lift. We denote by A the following set:
A = {aα | α ∈ R∨}
(we use the notation aα rather than α to distinguish the elements of A from those of R∨). We define a multiplication on the elements of A as follows:
aα · aβ = aαβ if αβ ∈ R∨, and aα · aβ = a0R otherwise
This multiplication endows A with a monoid structure.
We also denote R̂ = Z[A], meaning
R̂ = { Σ_{i=1}^{k} ni aαi | ni ∈ Z, αi ∈ R∨ }
Since A is a monoid, R̂ is a ring.
We are left with defining the lifting map ϕ̂ : R̂ → R. We define an action of {−1, 0, 1} on R by
1 · α = α, 0 · α = 0R, (−1) · α = (−)α
The map ϕ̂ : R̂ → R is defined by ϕ̂(0R̂) = 0R and
ϕ̂(Σ_{i=1}^{k} ni aαi) = Σ_{i=1}^{k} |ni| (sign(ni) · αi)
where Σ_{i=1}^{k} ni aαi is in its reduced representation, i.e. αi ≠ αj for i ≠ j, and sign(ni) · αi is calculated by the above action.
We shall now prove that (R̂, ϕ̂) is a lift of R. We first prove that ϕ̂ is a ⪰-morphism.
(1) It is easy to see that if αi ≠ αj whenever i ≠ j,
ϕ̂(Σ_{i=1}^{k} ni aαi) = Σ_{i=1}^{k} ϕ̂(ni aαi)
Therefore we only have to prove that ϕ̂(maα) + ϕ̂(naα) ⪰ ϕ̂(maα + naα), where m, n ≠ 0.
If sign(m) = sign(n), then
ϕ̂(maα) + ϕ̂(naα) = |m| (sign(m) · α) + |n| (sign(n) · α) = |m + n| (sign(m + n) · α) = ϕ̂((m + n)aα) = ϕ̂(maα + naα)
Now, suppose sign(m) ≠ sign(n). Without loss of generality, we assume n < 0 < m and −n < m (the other cases are proved similarly). Thus,
ϕ̂(maα) + ϕ̂(naα) = mα + (−n)((−)α) = (m + n)α + (−n)α + (−n)((−)α) = (m + n)α + (−n)α◦ ⪰ (m + n)α = ϕ̂((m + n)aα) = ϕ̂(maα + naα)
as we needed to prove.
(2) Let us begin by observing that
ϕ̂(aα) ϕ̂(aβ) ⪰ ϕ̂(aαβ)
(since if αβ ∈ R∨, there is equality; otherwise, the LHS is αβ ∈ R◦, whereas the RHS is 0R). Now,
ϕ̂(Σ_{i=1}^{k} mi aαi) · ϕ̂(Σ_{j=1}^{ℓ} nj aβj) = (Σ_{i=1}^{k} mi αi) · (Σ_{j=1}^{ℓ} nj βj) = Σ_{αiβj ∈ R∨} (mi nj) αi βj + Σ_{αiβj ∉ R∨} (mi nj) αi βj ⪰ Σ_{αiβj ∈ R∨} (mi nj) αi βj = ϕ̂(Σ_{i=1}^{k} mi aαi · Σ_{j=1}^{ℓ} nj aβj)
(3)
ϕ̂((−) Σ_{i=1}^{k} ni aαi) = Σ_{i=1}^{k} |−ni| (sign(−ni) · αi) = (−) Σ_{i=1}^{k} |ni| (sign(ni) · αi) = (−) ϕ̂(Σ_{i=1}^{k} ni aαi)
(4) ϕ̂(0R̂) = 0R by definition.
(5) Noting that 1R̂ = 1 · a1R, ϕ̂(1R̂) = 1R.
This proves that ϕ̂ is a ⪰-morphism. Since Im ϕ̂ = R∨, we are left to prove that ϕ̂(α̂) = 0R if and only if α̂ = 0R̂. But this follows from the fact that R is an antiring.
Hence (R̂, ϕ̂) is a lift of R.
1.4.2. Lifting a module. We move towards lifting a module. We use again the concept of ⪰-density:
Definition 1.32. Let R be a semiring, and let M be an R-module with a negation map. For any subset S ⊆ M, define
S⪰ = {y ∈ M | ∃x ∈ S : y ⪰ x}
If S⪰ = M, we say that S is ⪰-dense in M.
Definition 1.33. Let R be a semiring with a negation map with a lift (R̂, ϕ̂), and let M be an R-module. A lift of M is an R̂-module M̂ and a map ψ̂ : M̂ → M such that the following properties hold:
(1) ∀x̂1, x̂2 ∈ M̂ : ψ̂(x̂1) + ψ̂(x̂2) ⪰ ψ̂(x̂1 + x̂2).
(2) ∀α̂ ∈ R̂, ∀x̂ ∈ M̂ : ϕ̂(α̂) ψ̂(x̂) ⪰ ψ̂(α̂ x̂).
(3) ψ̂(0M̂) = 0M.
(4) Im ψ̂ is ⪰-dense in M.
Theorem 1.34. If R has a lift, then every R-module has a (free) lift. Furthermore, if the module
is generated by µ elements (for some cardinal number µ), it possesses a lift which is a free module
generated by µ elements.
Proof. Let (R̂, ϕ̂) be a lift of R. Let S ⊆ M be a generating set. Define M̂ = R̂^S, and define a function ψ̂ : M̂ → M by
ψ̂((r̂s)s∈S) = Σ_{s∈S} ϕ̂(r̂s) s
We now prove that (M̂, ψ̂) is a lift of M.
(1) Let (x̂s)s∈S, (ŷs)s∈S ∈ M̂. Then
ψ̂((x̂s)s∈S) + ψ̂((ŷs)s∈S) = Σ_{s∈S} ϕ̂(x̂s) s + Σ_{s∈S} ϕ̂(ŷs) s = Σ_{s∈S} (ϕ̂(x̂s) + ϕ̂(ŷs)) s ⪰ Σ_{s∈S} ϕ̂(x̂s + ŷs) s = ψ̂((x̂s)s∈S + (ŷs)s∈S)
(2) Let α̂ ∈ R̂ and (x̂s)s∈S ∈ M̂. Then
ϕ̂(α̂) ψ̂((x̂s)s∈S) = ϕ̂(α̂) Σ_{s∈S} ϕ̂(x̂s) s = Σ_{s∈S} (ϕ̂(α̂) ϕ̂(x̂s)) s ⪰ Σ_{s∈S} ϕ̂(α̂ x̂s) s = ψ̂(α̂ (x̂s)s∈S)
(3) Follows immediately.
(4) Let x ∈ M. Write x = Σ_{s∈S} αs s. Since Im ϕ̂ is ⪰-dense in R, for each αs there is α̂s ∈ R̂ such that αs ⪰ ϕ̂(α̂s). If αs = 0R, we may choose α̂s = 0R̂. Therefore,
x = Σ_{s∈S} αs s ⪰ Σ_{s∈S} ϕ̂(α̂s) s = ψ̂((α̂s)s∈S)
Remark 1.35. If (R̂, ϕ̂) is a lift of R, then (R̂^n, ψ̂) is a lift of R^n, where
(ψ̂(x̂))i = ϕ̂((x̂)i)
I.e., ψ̂ applies ϕ̂ to each entry of the given vector. In this case, Im ψ̂ = (R∨)^n.
We will later see theorems which hold for modules which have a lift, such as Corollary 1.49.
Lemma 1.36. Let N̂ ⊆ M̂ be a submodule. Then ψ̂(N̂)⪰ is a submodule of M.
Proof. Let x, y ∈ ψ̂(N̂)⪰ and let α, β ∈ R. Take x̂, ŷ ∈ N̂ such that x ⪰ ψ̂(x̂) and y ⪰ ψ̂(ŷ), and take α̂, β̂ ∈ R̂ such that α ⪰ ϕ̂(α̂) and β ⪰ ϕ̂(β̂). Then
αx + βy ⪰ ϕ̂(α̂) ψ̂(x̂) + ϕ̂(β̂) ψ̂(ŷ) ⪰ ψ̂(α̂ x̂) + ψ̂(β̂ ŷ) ⪰ ψ̂(α̂ x̂ + β̂ ŷ)
Since α̂ x̂ + β̂ ŷ ∈ N̂, we are finished.
Since α
bx
b + βb
1.5. Linearly Independent Sets. Up until now, most of our results were formulated and proved for a module with a negation map, without assuming that the underlying semiring has a negation map. However, now that we are going to deal with linear dependence, we need the notion of quasi-zero scalars; hence, we assume for the rest of this section that our semiring has a negation map.
We first specialize our underlying semiring, to avoid the problem of “quasi-zero divisors”:
Definition 1.37. A semiring with a negation map R is called entire, if R is commutative in both
of the operations and if
∀α, β ∈ R : αβ ∈ R◦ ⇒ α ∈ R◦ ∨ β ∈ R◦
Definition 1.38. Let R be an entire semiring with a negation map, and let M be an R-module. A set S ⊆ M is called ◦-linearly dependent, if
∃k ∈ N, ∃x1, . . . , xk ∈ S, ∃α1, . . . , αk ∈ R∨ \ {0R} : α1x1 + · · · + αkxk ∈ M◦
S is called ◦-linearly independent, if it is not ◦-linearly dependent.
Since we are always working with negation maps, we will omit the ◦ and simply write linearly dependent and linearly independent.
Lemma 1.39. Let R be an entire semiring with a negation map, let M be an R-module, and let
S ⊆ M be a linearly independent set. Then
S ∩ M◦ = ∅
Proof. Assume there exists some x ∈ S such that x ∈ M ◦ . Then the linear combination 1R x is
quasi-zero, contradicting the fact that S is linearly independent.
Lemma 1.40. Let R be an entire semiring with a negation map, let M be an R-module, and let S = {x1, . . . , xm} be a linearly dependent subset of M. Assume that S′ = {y1, . . . , ym} ⊆ M satisfies ∀i : yi ⪰ xi. Then S′ is also linearly dependent.
Proof. Assume that Σi αi xi ∈ M◦, where ∀i : αi ∈ R∨; then
Σi αi yi ⪰ Σi αi xi
implying Σi αi yi ∈ M◦, by Lemma 1.9.
Lemma 1.41. Let M be an R-module. If x ⪰ y, then x + (−)y ∈ M◦.
Proof. By definition, there is some z ∈ M◦ such that x = y + z. Therefore,
x + (−)y = y + z + (−)y = 1◦R y + z ∈ M◦
Lemma 1.42. Let R be an entire semiring with a negation map, and let M be an R-module. Assume that y ⪰ Σ_{i=1}^{k} αi xi. Then the set {x1, . . . , xk, y} is linearly dependent.
Proof. Write
I = {i = 1, . . . , k | αi ∉ R◦}
We note that if i ∉ I, then αi ∈ R◦, and thus αi xi ∈ M◦. If I = ∅, then y ∈ M◦ by Lemma 1.9, and thus {x1, . . . , xk, y} is linearly dependent. Therefore, we may assume that I ≠ ∅, and thus
y ⪰ Σ_{i∈I} αi xi
By Lemma 1.41,
y + Σ_{i∈I} ((−)αi) xi = y + (−) Σ_{i∈I} αi xi ∈ M◦
We found a linear combination of some of the vectors in the set {x1, . . . , xk, y} which is quasi-zero, and whose coefficients are not quasi-zero, implying {x1, . . . , xk, y} is linearly dependent.
Lemma 1.43. Let S ⊆ M be a linearly independent set. Then for all x ∈ S, S \ {x} is not a spanning set of M.
Proof. Assume that S \ {x} is a spanning set of M. Then
∃k ∈ N, ∃α1, . . . , αk ∈ R, ∃x1, . . . , xk ∈ S \ {x} : α1x1 + · · · + αkxk = x = 1Rx
By Lemma 1.42, (S \ {x}) ∪ {x} = S is linearly dependent, a contradiction. Thus, S \ {x} is not a spanning set of M.
Corollary 1.44. Suppose A ⊆ M is a linearly independent set, and that B ⊆ M is a spanning set
of M . If B ⊆ A, then A = B.
Proof. If A ≠ B, there exists x ∈ A \ B. Therefore, there exists a linear combination
x = Σ_{b∈B} αb b
By Lemma 1.42, B ∪ {x} is linearly dependent. But B ∪ {x} ⊆ A, contradicting the assumption that A is linearly independent. Thus A = B.
1.6. d-bases and s-bases.
1.6.1. d-bases.
Definition 1.45. Let R be an entire semiring with a negation map. A d-base (for dependence
base) of an R-module M is a maximal linearly independent subset of M .
Definition 1.46. Let R be an entire semiring with a negation map, and let M be an R-module.
The rank of M is
rk (M ) = max {|B||B is a d-base of M }
Lemma 1.47. Every linearly independent set is contained in some d-base.
Proof. Let M be an R-module, and let S ⊆ M be a linearly independent set. Consider
{S′ ⊆ M | S ⊆ S′ and S′ is linearly independent}
This set satisfies the condition of Zorn's lemma, and thus has a maximal element S′′, which is a d-base of M.
1.6.2. d-bases of modules which possess a lift. For this part, we fix an entire semiring with a negation map R with a lift (R̂, ϕ̂).
Lemma 1.48. Let M be an R-module with a lift (M̂, ψ̂), and let x̂1, . . . , x̂m ∈ M̂ be vectors. If x̂1, . . . , x̂m are linearly dependent, then ψ̂(x̂1), . . . , ψ̂(x̂m) are also linearly dependent.
Proof. x̂1, . . . , x̂m are linearly dependent, so there are α̂1, . . . , α̂m ∈ R̂, not all 0R̂, such that
Σ_{i=1}^{m} α̂i x̂i = 0M̂
Applying ψ̂ to the equation, we get
Σ_{i=1}^{m} ϕ̂(α̂i) ψ̂(x̂i) ⪰ ψ̂(Σ_{i=1}^{m} α̂i x̂i) = ψ̂(0M̂) = 0M
Using Lemma 1.9, we get
Σ_{i=1}^{m} ϕ̂(α̂i) ψ̂(x̂i) ∈ M◦
Since Im ϕ̂ ⊆ R∨, we get that for each i, ϕ̂(α̂i) ∈ R∨. But we know that not all of the α̂i's are 0R̂, and thus not all of the ϕ̂(α̂i) are 0R, as required.
Corollary 1.49. If R has a lift, then any n + 1 vectors in R^n are linearly dependent.
Proof. Let x1, . . . , xn+1 ∈ R^n be any vectors. By Remark 1.35, R^n has a lift (R̂^n, ψ̂). Let x′1, . . . , x′n+1 ∈ Im ψ̂ be such that xi ⪰ x′i for each i. Now, let y1, . . . , yn+1 ∈ R̂^n be some vectors such that for each i, ψ̂(yi) = x′i.
R̂ is a ring, hence any n + 1 vectors in R̂^n are linearly dependent; in particular, y1, . . . , yn+1 are linearly dependent. By Lemma 1.48, x′1, . . . , x′n+1 are linearly dependent. By Lemma 1.40, we are finished.
Theorem 1.50. If R has a lift, then rk(R^n) = n.
Proof. By Corollary 1.49, any n + 1 vectors in R^n are linearly dependent, so rk(R^n) ≤ n. However, since {e1, . . . , en} is linearly independent, rk(R^n) ≥ n, which together yield the desired equality.
Recall the relation ≡◦ from Definition 1.21, defined as
a ≡◦ b ⇔ a ⪰ b ∧ b ⪰ a
Theorem 1.51. Assume that R/≡◦ has a lift (rather than R). Then any n + 1 vectors in R^n are linearly dependent.
Proof. Let x1, . . . , xn+1 ∈ R^n, and let [x1], . . . , [xn+1] ∈ (R/≡◦)^n be their equivalence classes under ≡◦. Noting that
R^n/≡◦ ≅ (R/≡◦)^n
by Corollary 1.49, there are α1, . . . , αn+1 ∈ R∨, not all 0R, such that
[α1x1 + · · · + αn+1xn+1] = α1[x1] + · · · + αn+1[xn+1] ∈ ((R/≡◦)^n)◦
Hence,
[α1x1 + · · · + αn+1xn+1] = 1◦R [α1x1 + · · · + αn+1xn+1] = [1◦R (α1x1 + · · · + αn+1xn+1)]
By the definition of ≡◦,
α1x1 + · · · + αn+1xn+1 ⪰ 1◦R (α1x1 + · · · + αn+1xn+1) ∈ (R^n)◦
By Lemma 1.9,
α1x1 + · · · + αn+1xn+1 ∈ (R^n)◦
as required.
Corollary 1.52. Let R be a ring with some negation map. Then any n + 1 vectors in R^n are linearly dependent.
Proof. In this case, R/≡◦ is a ring, and the induced negation map is (−)[a] = −[a]; therefore, R/≡◦ has a lift, and Theorem 1.51 can be applied.
In the above cases, we have another corollary:
Corollary 1.53. If M ⊆ R^n is a submodule, then rk(M) ≤ n.
Proof. Any d-base of M is a linearly independent subset of R^n, and hence, by Lemma 1.47, it is contained in a d-base of R^n, whose cardinality is at most n.
1.6.3. s-bases.
Definition 1.54. Let R be an entire semiring with a negation map. An s-base (for spanning base)
of an R-module M is a minimal spanning subset of M .
An easy corollary from the First Isomorphism Theorem is the following:
Corollary 1.55. Let M be an R-module with a finite spanning set S, |S| = n. Then there exists a congruence ∼ on R^n such that
M ≅ R^n/∼
Corollary 1.56. Any spanning set of an R-module M must have at least rk(M) elements.
Proof. By Corollary 1.55, if M = Span{x1, . . . , xn}, then M ≅ R^n/∼ for some congruence ∼. Thus, any d-base {y1, . . . , ym} of M maps to a linearly independent set {[y′1], . . . , [y′m]} of R^n/∼. Then also y′1, . . . , y′m are linearly independent in R^n, and thus m ≤ n.
1.6.4. Critical elements versus s-bases. We define a concept similar to the notion of critical elements presented in [14]:
Definition 1.57. Let R be an entire semiring with a negation map, and let M be an R-module.
(1) We define an equivalence relation ∼ on M, called projective equivalence, as the transitive closure of the following relation: we say that x ∼ y if there is an invertible α ∈ R∨ such that x = αy.
(2) We define the equivalence class of x to be
[x]∼ = {y ∈ M | y ∼ x}
(3) We say that x ∈ M \ M◦ is critical, if there is no decomposition x = x1 + x2 where x1, x2 ∈ M \ [x]∼.
(4) A critical set is a set of representatives of the equivalence classes of all critical elements.
Remark 1.58. Any critical set is projectively unique.
Lemma 1.59. Suppose x ∈ M is critical. Then it is not spanned by M \ [x]∼.
Proof. Assume that
x = Σ_{i=1}^{n} αi yi
for yi ∈ M \ [x]∼. Since x is critical, n > 1. Write x1 = α1y1 and x2 = Σ_{i=2}^{n} αi yi. By the definition of criticality, x2 ∈ [x]∼. We get a contradiction by induction on n.
Lemma 1.60. Suppose S is an s-base of M. Then S contains a critical set of M.
Proof. Suppose x ∈ M is critical. S is an s-base, thus x is spanned by S. By Lemma 1.59, x is not spanned by M \ [x]∼, so S must contain an element of [x]∼ – that is, x itself up to projective equivalence.
In the classical theory (meaning when working with modules over rings), this definition is vacuous; there are no critical vectors. However, in [14, Theorem 5.24] it is proven that in the supertropical theory, any s-base (if it exists) is a critical set. One could hope that over any entire antiring with a negation map these definitions will coincide – but even in the ELT theory there is a counterexample.
Example 1.61. Consider R = R(C, R), and
M = Span S, where S = { ([1]0, [1]0, [0]0)ᵀ, (−∞, −∞, [1](−1))ᵀ, ([1]0, [1]0, −∞)ᵀ }
The set S is a spanning set of M. It is straightforward to prove that no vector in S can be presented as a linear combination of the others. However,
([1]0, [1]0, [0]0)ᵀ = [0]0 ([1]0, [1]0, [0]0)ᵀ + (−∞, −∞, [1](−1))ᵀ + ([1]0, [1]0, −∞)ᵀ = ([0]0, [0]0, [0]0)ᵀ + ([1]0, [1]0, −∞)ᵀ
implying ([1]0, [1]0, [0]0)ᵀ is not critical. Since this vector is not critical, we get an example of an ELT module with two s-bases which are not projectively equivalent:
M = Span{ ([1]0, [1]0, [0]0)ᵀ, (−∞, −∞, [1](−1))ᵀ, ([1]0, [1]0, −∞)ᵀ } = Span{ ([0]0, [0]0, [0]0)ᵀ, (−∞, −∞, [1](−1))ᵀ, ([1]0, [1]0, −∞)ᵀ }
1.6.5. d,s-bases.
Definition 1.62. Let R be an entire semiring with a negation map, and let M be an R-module. A
d,s-base of M is an s-base of M , which is also a d-base.
Lemma 1.63. If S is a d,s-base of M , with M finitely generated, then |S| = rk (M ).
Proof. By Corollary 1.56, |S| ≥ rk (M ); but, since S is also a d-base, we get equality.
Lemma 1.64. Any linearly independent spanning set is a d,s-base.
Proof. Assume that S is a linearly independent spanning set of M . If S ′ ⊆ S is an s-base, we get
that S ′ = S, by Corollary 1.44.
Now, assume that S ⊆ S ′′ is a d-base, using the same corollary we get S = S ′′ . So S is a d-base.
To sum up, S is an s-base and a d-base, and thus S is a d,s-base.
Note that not every s-base is a d,s-base.
Example 1.65. Any submodule spanned by zero-layered elements has no d,s-base, since the only
linearly independent subset of this submodule is the empty set, which is not a spanning set.
1.7. Free Modules over Semirings with a Negation Map.
1.7.1. Definitions and examples.
Definition 1.66. Let R be an entire semiring with a negation map. An R-module M is called free,
if there exists a set B ⊆ M such that for all x ∈ M \ {0M } there exists a unique choice of n ∈ N,
α1 , . . . , αn ∈ R and x1 , . . . , xn ∈ B such that
x = α1 x1 + · · · + αn xn
Such a set B is called a base of M .
Example 1.67. Let R be an entire semiring with a negation map. Then Rn is a free R-module,
with the base {e1 , . . . , en }. Indeed, the linear combination is determined uniquely, each component
separately.
Definition 1.68. Let R be an entire semiring with a negation map, and let M be a free R-module with a base B (|B| = n). For every x ∈ M there exists a unique representation x = αi1 xi1 + · · · + αik xik. We define the coordinate vector of x with respect to the base B, [x]B ∈ R^n, to be:
([x]B)j = αij if j ∈ {i1, . . . , ik}, and ([x]B)j = 0R otherwise, for j = 1, . . . , n
It is easy to verify that [x + y]B = [x]B + [y]B , and [αx]B = α [x]B .
Lemma 1.69. Let R be an entire semiring with a negation map, and let M be an R-module. M is free with a base B of size n if and only if M ≅ R^n.
Proof. If M ≅ R^n, then M is free with a base of size n.
Now, assume that M is free with a base B of size n. Denote B = {x1, . . . , xn}, and define a function ϕ : M → R^n by ϕ(x) = [x]B.
(1) ϕ is an R-module homomorphism, since [x + y]B = [x]B + [y]B and [αx]B = α[x]B.
(2) ϕ is injective: if ϕ(x) = ϕ(y), then the representations of x and y with respect to B are the same. But we also know that these representations are unique, and thus x = y.
(3) ϕ is surjective: if (αi) ∈ R^n, then ϕ(Σ_{i=1}^{n} αi xi) = (αi).
Thus, M ≅ R^n.
We now prove that any base is a d,s-base.
Lemma 1.70. If B is a base of M, and if
Σ_{i=1}^{n} αi bi ∈ M◦
for α1, . . . , αn ∈ R, b1, . . . , bn ∈ B, then α1, . . . , αn ∈ R◦.
Proof. Since Σ_{i=1}^{n} αi bi ∈ M◦, there is some x ∈ M such that Σ_{i=1}^{n} αi bi = x◦. Writing x = Σ_{b∈B} βb b, we have
Σ_{i=1}^{n} αi bi = x◦ = (Σ_{b∈B} βb b)◦ = Σ_{b∈B} βb◦ b
But B is a base, and thus α1, . . . , αn ∈ R◦ (since all of the coefficients on the right-hand side are quasi-zeros).
Lemma 1.71. If B is a base of M, then it is a linearly independent set.
Proof. Assume that
Σ_{b∈B} αb b ∈ M◦
By Lemma 1.70, ∀b ∈ B : αb ∈ R◦, and thus B is linearly independent.
Corollary 1.72. Any base is a d,s-base.
Proof. Let M be a free R-module with base B. By definition, B is a spanning set of M. In addition, by Lemma 1.71, B is linearly independent. We are finished, by Lemma 1.64.
Corollary 1.73. The cardinality of a base of a free module is uniquely determined.
Example 1.74. We give an example of a submodule of a free module, which has a d,s-base which is not a base. Take R = R(C, R), and consider
M = Span{ ([1]1, [1]0, [1]0)ᵀ, ([1]0, [1]1, [1]1)ᵀ, ([1]1, [−1]0, [1]0)ᵀ }
The set { ([1]1, [1]0, [1]0)ᵀ, ([1]0, [1]1, [1]1)ᵀ, ([1]1, [−1]0, [1]0)ᵀ } is a d,s-base of M (to prove this set is linearly independent, we use [25, Theorem 3.3.5]; its determinant is [2]2, which is not zero-layered, and thus the set is linearly independent). However, it is not a base, because there is a vector which can be written as a linear combination of its elements in two different ways:
([1]1, [1]0, [1]0)ᵀ + ([1]0, [1]1, [1]1)ᵀ = ([1]1, [1]1, [1]1)ᵀ = ([1]0, [1]1, [1]1)ᵀ + ([1]1, [−1]0, [1]0)ᵀ
However, if we know that a module is free, we also know all of its d,s-bases:
Lemma 1.75. Let M be a free R-module, and let S be a d,s-base of M. Then S is a base of M.
Proof. By Lemma 1.69, we may assume M = R^n. Since S is an s-base of M, by Lemma 1.60, S contains some critical set A. Since the critical elements in R^n are precisely the vectors αi ei for αi ∈ R∨, we may write
A = {α1e1, . . . , αnen}
We now prove that each αi is invertible. Otherwise, assume without loss of generality that α1 is not invertible. Since S is an s-base of R^n, and since e1 is critical, there are β1e1, . . . , βke1 ∈ S such that
∃γ1, . . . , γk ∈ R : γ1β1e1 + · · · + γkβke1 = e1
Since S is linearly independent, we must have k = 1, and thus there is β1e1 ∈ S with γ1β1 = 1R, so β1 is invertible. Again, since S is linearly independent, α1e1 and β1e1 cannot be distinct elements of S; hence α1 = β1 is invertible, as required.
1.7.2. Free modules over entire antirings with a negation map. For the remainder of this part, we consider free modules over entire antirings with a negation map.
Definition 1.76. Let R be an entire semiring with a negation map, and let M be a free R-module with two bases B = {v1, . . . , vn} and C = {w1, . . . , wn}. We define the transformation matrix from B to C, [I]^B_C, by:
[I]^B_C = ([v1]C, . . . , [vn]C)
Notice that [I]^B_C ∈ Mn(R).
Lemma 1.77. Under the conditions of Definition 1.76, ∀x ∈ M : [I]^B_C [x]B = [x]C.
Proof. Denote [I]^B_C = (αi,j) and [x]B = (βk). That means x = Σ_{k=1}^{n} βk vk, and vk = Σ_{ℓ=1}^{n} αℓ,k wℓ. We get
x = Σ_{k=1}^{n} βk vk = Σ_{k=1}^{n} βk (Σ_{ℓ=1}^{n} αℓ,k wℓ) = Σ_{ℓ=1}^{n} (Σ_{k=1}^{n} αℓ,k βk) wℓ = Σ_{ℓ=1}^{n} ([I]^B_C [x]B)ℓ wℓ
And by the definition of [x]C, ∀ℓ = 1, . . . , n : ([I]^B_C [x]B)ℓ = ([x]C)ℓ, as needed.
Lemma 1.78. Under the conditions of Definition 1.76,
[I]^C_B · [I]^B_C = In
Proof. We will show equality of columns. For all i = 1, . . . , n,
([I]^C_B · [I]^B_C) ei = [I]^C_B · ([I]^B_C [vi]B) = [I]^C_B [vi]C = [vi]B = ei
where we used [vi]B = ei and applied Lemma 1.77 twice, as needed.
Corollary 1.79. A base of a free module over an entire antiring with a negation map is unique, up to invertible scalar multiplication of each element and permutation. In other words, any two bases of such a free module are projectively equivalent.
Proof. If B and C are two bases of a free R-module M, then [I]^B_C is invertible. We also know (by Corollary 1.73) that [I]^B_C is a square matrix. By [7, Corollary 3], it is a generalized permutation matrix; namely, C is obtained from B by an invertible scalar multiplication of each element and a change of order.
2. Symmetrized Versions of Important Algebraic Structures
We now turn to study Lie semialgebras over semirings with a negation map. In the classical
theory, a Lie algebra is a vector space endowed with a bilinear alternating multiplication, which
satisfies Jacobi’s identity. In the context of negation maps, our definition will be quite similar to
the classical one. We will mostly follow the approach of [12].
2.1. Semialgebras over Symmetrized Semirings.
Definition 2.1. A nonassociative semialgebra with a negation map over a semiring R is an
R-module with a negation map A, together with an R-bilinear multiplication, A × A → A, that is
distributive over addition and satisfies the axioms:
(1) ∀α ∈ R ∀x, y ∈ A : α (xy) = (αx) y = x (αy).
(2) ∀x, y ∈ A : (−) (xy) = ((−) x) y = x ((−) y).
Note that “nonassociative” means “not necessarily associative”.
As in the case of homomorphisms and congruences, if the negation map on A is induced from a negation map on R, the last condition follows from the others.
Example 2.2. Let A, B be nonassociative semialgebras with a negation map over a semiring R.
Then A × B is also a nonassociative semialgebra with a negation map (a module as in Example 0.3),
where the multiplication and the negation map are defined componentwise.
Definition 2.3. An associative semialgebra is a nonassociative semialgebra, whose multiplication is associative.
Definition 2.4. Let A, B be nonassociative semialgebras with a negation map over a semiring. A
function ϕ : A → B is called an R-homomorphism, if it satisfies the following properties:
(1) ϕ is an R-module homomorphism (as defined in Definition 1.13).
(2) ∀x, y ∈ A : ϕ (xy) = ϕ (x) ϕ (y).
The set of all R-homomorphisms ϕ : A → B is denoted Hom (A, B). We also say that:
• ϕ is an R-monomorphism, if ϕ is injective.
• ϕ is an R-epimorphism, if ϕ is surjective.
• ϕ is an R-isomorphism, if ϕ is bijective. In this case we denote A ≅ B.
In addition, if A is a semialgebra with a negation map over a semiring R, then End(A) = Hom(A, A) is also a semialgebra with a negation map over R, with the usual addition and scalar multiplication, composition as multiplication, and the negation map
((−)ϕ)(x) = (−)ϕ(x)
One may also define ⪰-morphisms of algebras with a negation map similarly.
Definition 2.5. Let A, B be nonassociative semialgebras with a negation map over a semiring R,
and let ϕ : A → B be an R-homomorphism. The kernel of ϕ is
ker ϕ = {x ∈ A | ϕ (x) ∈ B ◦ } = ϕ−1 (B ◦ )
By Remark 1.14, A◦ ⊆ ker ϕ.
We recall the definition of an ideal:
Definition 2.6. Let A be a nonassociative semialgebra over a semiring R. A subalgebra I ⊆ A
which satisfies IA, AI ⊆ I is called an ideal of A, denoted I ⊳ A.
Lemma 2.7. ker ϕ is an ideal of A.
Proof. We prove it directly:
(1) If x, y ∈ ker ϕ, then ϕ (x + y) = ϕ (x) + ϕ (y) ∈ B ◦ . Hence, x + y ∈ ker ϕ.
(2) If α ∈ R and x ∈ ker ϕ, then ϕ (αx) = αϕ (x) ∈ B ◦ , and thus αx ∈ ker ϕ.
(3) If x ∈ ker ϕ, then ϕ ((−) x) = (−) ϕ (x) ∈ B ◦ , so (−) x ∈ ker ϕ.
(4) If x ∈ ker ϕ and y ∈ A, then ϕ (x) ∈ B ◦ . Hence,
ϕ (xy) = ϕ (x) ϕ (y) = (1◦R ϕ (x)) ϕ (y) = 1◦R (ϕ (x) ϕ (y))
Therefore, ϕ (xy) ∈ B ◦ , implying xy ∈ ker ϕ. Similarly, yx ∈ ker ϕ.
2.2. Lie Semialgebras with a Negation Map.
2.2.1. Basic definitions. We first restrict our base semiring to be a semifield with a negation
map, i.e. a commutative semiring with a negation map in which every element which is not quasizero is invertible.
Definition 2.8. Let R be a semifield. A Lie semialgebra with a negation map over R is a
nonassociative semialgebra with a negation map over R, whose multiplication [·, ·] : L × L → L,
called a negated Lie bracket, satisfies the following axioms:
(1) Commutes with the negation map: ∀x, y ∈ L : [(−) x, y] = (−) [x, y].
(2) Alternating on L: ∀x ∈ L : [x, x] ∈ L◦ .
(3) Anticommutativity: ∀x, y ∈ L : [y, x] = (−) [x, y].
(4) Jacobi’s identity: ∀x, y, z ∈ L : [x, [y, z]] + [z, [x, y]] + [y, [z, x]] ∈ L◦ .
Remark 2.9.
(1) If char R ≠ 2, then 2 ⇒ 1.
(2) If R has a negation map and the negation map of L is the induced negation map from R,
then condition 1 follows from the bilinearity of the negated Lie bracket, since
[(−) x, y] = [((−) 1R ) x, y] = ((−) 1R ) [x, y] = (−) [x, y] .
Our definition is a bit different from Rowen's definition [24]. The difference is reflected in Jacobi's identity; Rowen's definition requires that
∀x, y, z ∈ L : [x, [y, z]] (−) [y, [x, z]] ⪰ [[x, y], z]
We call this axiom the strong Jacobi's identity. Note that while the strong Jacobi's identity implies our version of Jacobi's identity, the converse does not hold. An example is given in Example 4.6.
Lemma 2.10. ∀x, y ∈ L : [x, y] + [y, x] ∈ L◦ .
Proof. For all x, y ∈ L,
[x, y] + [y, x] = [x, y] + (−) [x, y] ∈ L◦
Definition 2.11. A Lie semialgebra with a negation map L is called abelian, if
∀x, y ∈ L : [x, y] ∈ L◦
Example 2.12. Given a semifield R, let A be an associative semialgebra with a negation map over R. We define an operation as follows: [x, y] = xy + (−)yx, which is called the negated commutator. We will now show that it gives A the structure of a Lie semialgebra with a negation map over R, which satisfies the strong Jacobi's identity:
(1) Bilinear: Let x, y, z ∈ A and α, β ∈ R. Then,
[αx + βy, z] = (αx + βy) z + (−) (z (αx + βy)) =
= α (xz + (−) zx) + β (yz + (−) zy) = α [x, z] + β [y, z]
Linearity in the second component is proved similarly.
(2) [·, ·] commutes with (−): For all x, y ∈ L,
[(−) x, y] = ((−) x) y + (−) (y ((−) x)) = (−) (xy + (−) yx) = (−) [x, y]
(3) Alternating on A: For all x ∈ A,
[x, x] = x2 + (−) x2 ∈ L◦
(4) Anticommutativity: For all x, y ∈ A,
[y, x] = yx + (−) xy = (−) (xy + (−) yx) = (−) [x, y]
(5) Strong Jacobi’s identity: a proof can be found in [24], which uses the strong transfer principle.
Lemma 2.13.
∀x, y ∈ L : x ∈ L◦ ∨ y ∈ L◦ =⇒ [x, y] ∈ L◦
Proof. Assume x ∈ L◦ ; write x = z + (−) z for some z ∈ L. Then
[x, y] = [z + (−) z, y] = [z, y] + (−) [z, y] ∈ L◦
Definition 2.14. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. A subset L1 ⊆ L is called a subalgebra of L, if it is a Lie semialgebra with
a negation map over R with the restrictions of the operations of L.
Definition 2.15. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. The center of L is
Z (L) = {x ∈ L | ∀y ∈ L : [x, y] ∈ L◦ }
Lemma 2.16. L◦ ⊆ Z (L).
Proof. By Lemma 2.13.
Definition 2.17. Let R be a semifield with a negation map, let L be a Lie semialgebra with a
negation map over R, and let L1 , L2 ⊆ L be subalgebras of L. Define
[L1 , L2 ] = Span {[x1 , x2 ]|x1 ∈ L1 , x2 ∈ L2 }
Lemma 2.18.
[Z (L) , L] ⊆ L◦ ⊆ Z (L)
Proof.
∀x ∈ Z (L) ∀y ∈ L : [x, y] ∈ L◦ ⊆ Z (L)
2.2.2. Homomorphisms and Ideals. We use the general definitions for homomorphisms and ideals of
nonassociative algebras given in section 2.1.
Lemma 2.19. L◦ ⊳ L and Z (L) ⊳ L.
Proof. L◦ is immediate by Lemma 2.13, and Z (L) by Lemma 2.18.
Lemma 2.20. If I, J ⊳ L, then I ∩ J ⊳ L.
Proof. Let x ∈ I ∩ J, y ∈ L. x ∈ I, so [x, y] ∈ I. Also x ∈ J, so [x, y] ∈ J. In conclusion,
[x, y] ∈ I ∩ J.
Definition 2.21. Let R be a semifield, let L be a Lie semialgebra with a negation map over R, and
let I, J ⊳ L be two ideals of L. Then their sum is defined as
I + J = {x + y|x ∈ I, y ∈ J}
Lemma 2.22. If I, J ⊳ L, then I + J ⊳ L.
Proof. Obviously, I + J is a submodule of L. Now, let x ∈ I + J, y ∈ L. x ∈ I + J, so there exists
x′ ∈ I and x′′ ∈ J such that x = x′ + x′′ . We get
[x, y] = [x′ + x′′ , y] = [x′ , y] + [x′′ , y] ∈ I + J
2.2.3. The Adjoint Algebra.
Definition 2.23. Let R be a semifield, let L be a Lie semialgebra with a negation map over R, and
let x ∈ L. We define a homomorphism adx : L → L by
adx (y) = [x, y]
Definition 2.24. Let R be a semifield, and let L be a Lie semialgebra with a negation map over R.
The adjoint algebra of L is the set
AdL = {adx |x ∈ L}
endowed with the following negated Lie bracket:
∀adx , ady ∈ AdL : [adx , ady ] = ad[x,y]
and the induced negation map from End (L): (−) adx = ad(−)x .
Jacobi’s identity can be rewritten now as
∀x, y, z ∈ L : adx (ady (z)) + adz (adx (y)) + ady (adz (x)) ∈ L◦
and the strong Jacobi’s identity can be rewritten now as
adx ady + (−) ady adx ad[x,y]
Lemma 2.25. AdL is a Lie semialgebra with a negation map over R.
Proof. We first check that AdL is a submodule of End (L). But this is obvious, since adx + ady =
adx+y ∈ AdL, αadx = adαx ∈ AdL and (−) adx = ad(−)x ∈ AdL.
Now, we need to check that it is a Lie semialgebra with a negation map.
(1) ∀α, β ∈ R ∀adx, ady, adz ∈ AdL : [αadx + βady, adz] = [adαx+βy, adz] = ad[αx+βy,z] = adα[x,z]+β[y,z] = αad[x,z] + βad[y,z] = α[adx, adz] + β[ady, adz].
(2) ∀adx, ady ∈ AdL : [(−)adx, ady] = [ad(−)x, ady] = ad[(−)x,y] = (−)ad[x,y] = (−)[adx, ady].
(3) ∀adx ∈ AdL : [adx, adx] = ad[x,x], which is a quasi-zero.
(4) ∀adx, ady ∈ AdL : [ady, adx] = ad[y,x] = ad(−)[x,y] = (−)ad[x,y] = (−)[adx, ady].
(5) ∀adx, ady, adz ∈ AdL : [adx, [ady, adz]] + [adz, [adx, ady]] + [ady, [adz, adx]] = ad[x,[y,z]]+[z,[x,y]]+[y,[z,x]], which is a quasi-zero.
Lemma 2.26. There is a homomorphism of Lie semialgebras with a negation map L → AdL given
by
x 7→ adx
Therefore, given the congruence ≡ on L defined by x ≡ y ⇔ adx = ady, one has
L/≡ ≅ AdL
Proof. Using the definition of the operation on AdL, this is a homomorphism. By Theorem 1.20,
we get the conclusion.
A word of caution: with the definition given above of the negated Lie bracket, AdL is not a subalgebra of End(L). Generally, there is no obvious way of fixing this problem; however, one can use the strong Jacobi's identity.
If L satisfies the strong Jacobi’s identity, then one can define
adL = {adx | x ∈ L} ⊆ End (L)
as in [24], and this is also a Lie semialgebra with a negation map over R (under the negated
commutator). Nevertheless, we now have only a ⪰-morphism L → adL given by x ↦ adx.
2.3. Free Lie Algebras with a Negation Map. In this subsection we turn to study Lie semialgebras with a negation map which are free as modules and have a base consisting of one, two or three elements.
2.3.1. 1-dimensional Lie Algebras with a negation map.
Lemma 2.27. Any Lie semialgebra L over a semifield with a negation map R with a base B = {x}
is abelian.
Proof. Given y, z ∈ L, there exist α, β ∈ R such that y = αx, z = βx. Therefore,
[y, z] = [αx, βx] = αβ [x, x] ∈ L◦
Thus, L is abelian.
2.3.2. 2-dimensional Lie Algebras with a negation map.
Lemma 2.28. Let R be a semifield with a negation map, and let L be a free R-module with base
B = {x1 , x2 }. Define
[x1 , x1 ] = α1 x1 + α2 x2 , [x2 , x2 ] = β1 x1 + β2 x2 , [x1 , x2 ] = (−) [x2 , x1 ] = γ1 x1 + γ2 x2
where α1 , α2 , β1 , β2 , γ1 , γ2 ∈ R and α1 , α2 , β1 , β2 ∈ R◦ . Now, extend [·, ·] to [·, ·] : L × L → L by
bilinearity.
Then L equipped with [·, ·] is a Lie semialgebra over R.
Proof. By bilinearity, it is enough to check Jacobi’s identity in two cases:
(1) [x1 , [x1 , x2 ]] + [x1 , [x2 , x1 ]] + [x2 , [x1 , x1 ]] ∈ L◦ .
(2) [x2 , [x2 , x1 ]] + [x2 , [x1 , x2 ]] + [x1 , [x2 , x2 ]] ∈ L◦ .
We shall prove the first part; the second is proved similarly. Indeed,
[x1, [x1, x2]] + [x1, [x2, x1]] + [x2, [x1, x1]] = [x1, ([x1, x2])◦] + [x2, [x1, x1]] = [x1, [x1, x2]]◦ + [x2, [x1, x1]]
We note that [x1, x1] ∈ L◦, and thus [x1, [x1, x2]] + [x1, [x2, x1]] + [x2, [x1, x1]] ∈ L◦, as required.
Lemma 2.29. Let L be a Lie semialgebra over a semifield with a negation map R with base B =
{x1 , x2 }. Then there are α1 , α2 , β1 , β2 , γ1 , γ2 ∈ R such that
[x1 , x1 ] = α1 x1 + α2 x2 , [x2 , x2 ] = β1 x1 + β2 x2 , [x1 , x2 ] = (−) [x2 , x1 ] = γ1 x1 + γ2 x2
and α1 , α2 , β1 , β2 ∈ R◦ . Therefore, each such Lie semialgebra is obtained from Lemma 2.28.
Proof. The existence of α1 , α2 , β1 , β2 , γ1 , γ2 ∈ R follows from the fact that L is free, and by the
antisymmetry of [·, ·]. Since [·, ·] is alternating, combined with Lemma 1.70, α1 , α2 , β1 , β2 ∈ R◦ .
2.3.3. 3-dimensional Lie Algebras with a negation map. We turn to studying free Lie semialgebras
with a base consisting of three elements. The purpose of this case is to define a Lie semialgebra that
will be parallel to sl (2, F ) in the classical theory.
We are now going to give a necessary and sufficient condition for a bilinear multiplication defined
on a module over a semifield with a negation map to be a negated Lie bracket. This is formulated
in Corollary 2.32 and in Lemma 2.33.
Throughout this part, R is a semifield with a negation map.
Lemma 2.30. Let L be a free R-module with base B = {x1, x2, x3}. Define
∀i ≤ j : [xi, xj] = (−)[xj, xi] = Σ_{ℓ=1}^{3} αi,j,ℓ xℓ
where ∀i, j, ℓ : αi,j,ℓ ∈ R, and ∀i, ℓ : αi,i,ℓ ∈ R◦. If i > j, we write αi,j,ℓ = (−)αj,i,ℓ. Assume also that
∀i, j, k, m : Σ_{(i,j,k) cyc} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ ∈ R◦
where Σ_{(i,j,k) cyc} is the cyclic sum over i, j, k. Extend [·, ·] to [·, ·] : L × L → L by bilinearity.
Then L equipped with [·, ·] is a Lie semialgebra over R.
Proof. Again, it is enough to check Jacobi's identity. We need to ensure that
∀i, j, k : [xi, [xj, xk]] + [xk, [xi, xj]] + [xj, [xk, xi]] ∈ L◦
Take some i, j, k. By calculating [xi, [xj, xk]], we get:
[xi, [xj, xk]] = [xi, Σ_{ℓ=1}^{3} αj,k,ℓ xℓ] = Σ_{ℓ=1}^{3} αj,k,ℓ [xi, xℓ] = Σ_{ℓ=1}^{3} αj,k,ℓ (Σ_{m=1}^{3} αi,ℓ,m xm) = Σ_{m=1}^{3} (Σ_{ℓ=1}^{3} αj,k,ℓ αi,ℓ,m) xm
By permuting the indices,
[xk, [xi, xj]] = Σ_{m=1}^{3} (Σ_{ℓ=1}^{3} αi,j,ℓ αk,ℓ,m) xm
[xj, [xk, xi]] = Σ_{m=1}^{3} (Σ_{ℓ=1}^{3} αk,i,ℓ αj,ℓ,m) xm
By summing all of the above, we get
[xi, [xj, xk]] + [xk, [xi, xj]] + [xj, [xk, xi]] = Σ_{m=1}^{3} (Σ_{(i,j,k) cyc} Σ_{ℓ=1}^{3} αj,k,ℓ αi,ℓ,m) xm
Thus, the condition
∀m : Σ_{(i,j,k) cyc} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ ∈ R◦
is equivalent to Jacobi's identity (by Lemma 1.70), and the assertion follows.
The above condition involving the cyclic sum may seem rather strong, since it has to hold for each choice of 1 ≤ i, j, k, m ≤ 3. However, it turns out that it is enough to check that it holds for a specific choice of i, j, k in which they are all different – for example, (i, j, k) = (1, 2, 3). This is formulated in the following lemma:
Lemma 2.31. Assume that ∀i, ℓ : αi,i,ℓ ∈ R◦ and ∀i, j, ℓ : αi,j,ℓ = (−)αj,i,ℓ. If
∀m : Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ) ∈ R◦
holds, then
∀i, j, k, m : Σ_{(i,j,k) cyc} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ ∈ R◦
Proof. We prove the lemma in two cases.
Case 1. We first assume that two of i, j, k are equal. Without loss of generality, we assume that
j = k, and thus ∀ℓ : αj,k,ℓ ∈ R◦ , and ∀ℓ : αi,j,ℓ = (−) αk,i,ℓ . We expand the cyclic sum:
Σ_{i,j,k} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ = Σ_{ℓ=1}^{3} (αi,ℓ,m αj,k,ℓ + αk,ℓ,m αi,j,ℓ + αj,ℓ,m αk,i,ℓ) =
= Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ + Σ_{ℓ=1}^{3} (αk,ℓ,m αi,j,ℓ + (−) αj,ℓ,m αi,j,ℓ) =
= Σ_{ℓ=1}^{3} 1◦R αi,ℓ,m αj,k,ℓ + Σ_{ℓ=1}^{3} 1◦R αk,ℓ,m αi,j,ℓ =
= 1◦R (Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ + Σ_{ℓ=1}^{3} αk,ℓ,m αi,j,ℓ)

and thus the sum is quasi-zero.
Case 2.
Now we may assume that i, j, k are different indices. Since 1 ≤ i, j, k ≤ 3, and by the
cyclic symmetry of i, j, k, there are two options: either (i, j, k) = (1, 2, 3), and the sum is
S(1,2,3) = Σ_{i,j,k} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ = Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ)

or (i, j, k) = (1, 3, 2), and the sum is

S(1,3,2) = Σ_{i,j,k} Σ_{ℓ=1}^{3} αi,ℓ,m αj,k,ℓ = Σ_{ℓ=1}^{3} (α1,ℓ,m α3,2,ℓ + α2,ℓ,m α1,3,ℓ + α3,ℓ,m α2,1,ℓ)
We notice that S(1,3,2) = (−) S(1,2,3) , since
S(1,3,2) = Σ_{ℓ=1}^{3} (α1,ℓ,m α3,2,ℓ + α2,ℓ,m α1,3,ℓ + α3,ℓ,m α2,1,ℓ) =
= Σ_{ℓ=1}^{3} ((−) α1,ℓ,m α2,3,ℓ + (−) α2,ℓ,m α3,1,ℓ + (−) α3,ℓ,m α1,2,ℓ) =
= (−) Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α2,ℓ,m α3,1,ℓ + α3,ℓ,m α1,2,ℓ) = (−) S(1,2,3)
But we assumed that S(1,2,3) ∈ R◦ , and thus S(1,3,2) ∈ R◦ , as required.
Corollary 2.32. Let L be a free R-module with base B = {x1 , x2 , x3 }. Define
∀i ≤ j : [xi, xj] = (−) [xj, xi] = Σ_{ℓ=1}^{3} αi,j,ℓ xℓ

where ∀i, j, ℓ : αi,j,ℓ ∈ R, and ∀i, ℓ : αi,i,ℓ ∈ R◦. If i > j, we write αi,j,ℓ = (−) αj,i,ℓ. Assume also that

∀m : Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ) ∈ R◦
Extend [·, ·] to [·, ·] : L × L → L by bilinearity.
Then L equipped with [·, ·] is a Lie semialgebra over R.
Proof. By Lemma 2.31, the conditions of Lemma 2.30 hold, and thus the assertion follows.
Lemma 2.33. Let L be a Lie semialgebra over R with base B = {x1 , x2 , x3 }. Then there are αi,j,ℓ ∈
R such that
∀i ≤ j : [xi, xj] = (−) [xj, xi] = Σ_{ℓ=1}^{3} αi,j,ℓ xℓ

and ∀i, j, ℓ : αi,j,ℓ ∈ R, and ∀i, ℓ : αi,i,ℓ ∈ R◦. If i > j, we denote αi,j,ℓ = (−) αj,i,ℓ. Moreover,

∀m : Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ) ∈ R◦
Therefore, each such Lie semialgebra is obtained from Corollary 2.32.
Proof. The existence of αi,j,ℓ follows from the fact that L is free, and by the antisymmetry of [·, ·].
Since [·, ·] is also alternating, and by Lemma 1.70, ∀i, ℓ : αi,i,ℓ ∈ R◦ .
In addition, in the proof of Lemma 2.30, we showed that Jacobi’s identity is equivalent to
∀i, j, k, m : Σ_{i,j,k} Σ_{ℓ=1}^{3} αj,k,ℓ αi,ℓ,m ∈ R◦
In particular, substituting (i, j, k) = (1, 2, 3),
∀m : Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ) ∈ R◦
and thus we are finished.
We use the above construction to obtain a Lie semialgebra which has the same relations as sl(2, F).
In Section 2.4 we will see the naive way of constructing sl(n, R), which will be different from the
following Lie semialgebra:
Example 2.34. We take a free R-module L with base {e, f, h}, and define
[e, e] = [f, f] = [h, h] = 0L
[e, f] = (−) [f, e] = h
[h, e] = (−) [e, h] = 2e
[h, f] = (−) [f, h] = 2f
where 2e and 2f mean e + e and f + f . In the notations of Lemma 2.30, if x1 = e, x2 = f and
x3 = h, then
α1,2,3 = (−) α2,1,3 = 1R , α3,1,1 = (−) α1,3,1 = 2, α3,2,2 = (−) α2,3,2 = (−) 2
and all other coefficients are 0R . We want to prove that these coefficients satisfy the conditions of
Corollary 2.32, and thus L equipped with the bilinear extension of [·, ·] is a Lie semialgebra over R.
We need to ensure that
∀m : Σ_{ℓ=1}^{3} (α1,ℓ,m α2,3,ℓ + α3,ℓ,m α1,2,ℓ + α2,ℓ,m α3,1,ℓ) ∈ R◦
(1) If m = 1, αi,ℓ,1 = 0R unless (i, ℓ) ∈ {(1, 3), (3, 1)}. We are left with two summands ≠ 0R:
α1,3,1 α2,3,3 + α3,1,1 α1,2,1 = 0R
(2) If m = 2, αi,ℓ,2 = 0R unless (i, ℓ) ∈ {(2, 3), (3, 2)}. We are left with two summands ≠ 0R:
α2,3,2 α3,1,3 + α3,2,2 α1,2,2 = 0R
(3) If m = 3, αi,ℓ,3 = 0R unless (i, ℓ) ∈ {(1, 2), (2, 1)}. We are left with two summands ≠ 0R:
α1,2,3 α2,3,2 + α2,1,3 α3,1,1 = 1 · 2 (−) 1 · 2 = 2◦
Since the sum is quasi-zero for each choice of m, the conditions of Corollary 2.32 hold, and such a Lie semialgebra L indeed exists.
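To make the verification above concrete, here is a minimal computational sketch. It assumes the symmetrized max-plus semiring as one concrete instance of a semifield with a negation map; this choice and all names in the code are illustrative assumptions, not part of the text. Elements are pairs over max-plus, (−) swaps the two components, and the quasi-zeros R◦ are the balanced pairs.

# A minimal sketch, assuming the symmetrized max-plus semiring as a
# concrete model of a semifield with a negation map.
NEG_INF = float('-inf')

zero = (NEG_INF, NEG_INF)   # 0_R
one = (0.0, NEG_INF)        # 1_R

def add(x, y):
    return (max(x[0], y[0]), max(x[1], y[1]))

def mul(x, y):
    return (max(x[0] + y[0], x[1] + y[1]),
            max(x[0] + y[1], x[1] + y[0]))

def neg(x):                 # the negation map (-)
    return (x[1], x[0])

def balanced(x):            # membership in the quasi-zeros R°
    return x[0] == x[1]

two = add(one, one)         # the "2" of the example, i.e. 1_R + 1_R

# Structure constants of Example 2.34 (x1 = e, x2 = f, x3 = h);
# unlisted alpha[(i, j, l)] are 0_R.
alpha = {(1, 2, 3): one, (2, 1, 3): neg(one),
         (3, 1, 1): two, (1, 3, 1): neg(two),
         (3, 2, 2): neg(two), (2, 3, 2): two}

def a(i, j, l):
    return alpha.get((i, j, l), zero)

# Check the condition of Corollary 2.32 for every m.
for m in (1, 2, 3):
    s = zero
    for l in (1, 2, 3):
        s = add(s, mul(a(1, l, m), a(2, 3, l)))
        s = add(s, mul(a(3, l, m), a(1, 2, l)))
        s = add(s, mul(a(2, l, m), a(3, 1, l)))
    assert balanced(s), m
print("cyclic-sum condition holds for all m")

Note that in max-plus 1_R + 1_R = 1_R, so the element 2 degenerates in this particular model; the balance check itself is unaffected.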
2.4. Symmetrized Versions of the Classical Lie Algebras. In this subsection we construct
negated versions of the classical Lie algebras: An , Bn , Cn and Dn . We assume again that our
underlying semiring R is a semifield with a negation map.
Definition 2.35. The general linear algebra, gl (n, R), is the Lie semialgebra of all matrices of
size n × n, where the Lie bracket is the negated commutator.
Definition 2.36. We define matrices ei,j ∈ gl (n, R) by
(ei,j)k,ℓ = 1R if (i, j) = (k, ℓ), and (ei,j)k,ℓ = 0R if (i, j) ≠ (k, ℓ);
these matrices form a base of gl (n, R).
Remark 2.37. In gl (n, R), we have the formula
[ei,j , ek,ℓ ] = δj,k ei,ℓ + (−) δi,ℓ ek,j
where δi,j = 1R if i = j, and δi,j = 0R if i ≠ j.
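The formula of Remark 2.37 can also be checked mechanically. The following standalone sketch (again over the symmetrized max-plus semiring, an illustrative assumption) verifies it for all matrix units of gl(3, R):

# Standalone check of Remark 2.37 over the symmetrized max-plus
# semiring: pairs (p, q), with (-) swapping the components.
NEG_INF = float('-inf')
zero, one = (NEG_INF, NEG_INF), (0.0, NEG_INF)
add = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))
mul = lambda x, y: (max(x[0] + y[0], x[1] + y[1]),
                    max(x[0] + y[1], x[1] + y[0]))
neg = lambda x: (x[1], x[0])

n = 3

def reduce_add(xs):
    s = zero
    for x in xs:
        s = add(s, x)
    return s

def e(i, j):                        # the matrix unit e_{i,j}
    return [[one if (r, c) == (i, j) else zero for c in range(n)]
            for r in range(n)]

def mat_mul(A, B):
    return [[reduce_add(mul(A[r][k], B[k][c]) for k in range(n))
             for c in range(n)] for r in range(n)]

def bracket(A, B):                  # negated commutator AB + (-)BA
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[add(AB[r][c], neg(BA[r][c])) for c in range(n)]
            for r in range(n)]

def delta(i, j):
    return one if i == j else zero

for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                lhs = bracket(e(i, j), e(k, l))
                rhs = [[add(mul(delta(j, k), e(i, l)[r][c]),
                            neg(mul(delta(i, l), e(k, j)[r][c])))
                        for c in range(n)] for r in range(n)]
                assert lhs == rhs
print("Remark 2.37 verified for gl(3)")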
Definition 2.38. An , or the negated special linear algebra, is
An = {A ∈ gl (n + 1, R) | s (tr (A)) = 0}
Lemma 2.39. An is a subalgebra of gl (n + 1, R).
Proof. Obviously, An is a submodule of gl (n + 1, R). Since tr (AB) = tr (BA), we have
s (tr ([A, B])) = s (tr (AB + (−) BA)) = 0
so An is a subalgebra of gl (n + 1, R).
Lemma 2.40. An has the following s-base:
B = {ei,j | i ≠ j} ∪ {ei,i + (−) ej,j | 1 ≤ i < j ≤ n} ∪ {1◦R ei,i | 1 ≤ i ≤ n}
Proof. Assume that A = (ai,j ) ∈ An . Therefore, s (tr (A)) = 0. There are two options:
(1) If tr (A) is achieved from a dominant zero-layered element in the diagonal of A, without loss
of generality a1,1 , then
A=
n
X
ai,j ei,j +
n
X
ai,i (ei,i + (−) e1,1 )
i=1
i,j=1
i6=j
(2) Otherwise, tr (A) is achieved from (at least) two elements, not of layer zero, such that they
“cancel” each other. Assume without loss of generality that these elements are a1,1 , . . . , ak,k ;
that means that τ(a1,1) = · · · = τ(ak,k) = τ(tr(A)), s(a1,1), . . . , s(ak,k) ≠ 0 and for
each ℓ > k, either τ (aℓ,ℓ ) < τ (tr (A)) or s (aℓ,ℓ ) = 0. Then
A=
n
X
ai,j ei,j +
i,j=1
i6=j
n
X
ai,i (ei,i + (−) e1,1 )
i=2
Therefore, B spans An . It is easy to see that no element of B is spanned by the others, and thus it
is an s-base.
Lemma 2.41. Considering A1, we see that its s-base consists of the elements

e = e1,2,  f = e2,1,  h = e1,1 + (−) e2,2

and they satisfy the relations

[e, f] = h;  [e, h] = [−2]0 e;  [f, h] = [2]0 f
Proof. Considering the formula mentioned in Remark 2.37 we get:
[e, f] = [e1,2, e2,1] = e1,1 + (−) e2,2 = h
[e, h] = [e1,2, e1,1 + (−) e2,2] = (−) e1,2 + (−) e1,2 = [−2]0 e
[f, h] = [e2,1, e1,1 + (−) e2,2] = e2,1 + (−)(−) e2,1 = [2]0 f
Let L be the Lie semialgebra constructed in Example 2.34. Note that A1 is similar to L; however,
A1 has no d,s-base, whereas L is free.
Unlike the definition of An , the Lie semialgebras Bn , Cn and Dn are all defined using involutions.
In this part, we follow [23].
Definition 2.42. Let R be a semifield with a negation map, and let A1 , A2 be nonassociative
semialgebras. An antihomomorphism is a function ϕ : A1 → A2 that satisfies:
(1) ∀x, y ∈ A1 : ϕ (x + y) = ϕ (x) + ϕ (y).
(2) ∀α ∈ R, ∀x ∈ A1 : ϕ (αx) = αϕ (x).
(3) ∀x, y ∈ A1 : ϕ (xy) = ϕ (y) ϕ (x).
Definition 2.43. Let R be a semifield with a negation map, and let A be a nonassociative semialgebra. An involution is an antihomomorphism ϕ : A → A for which
∀x ∈ A : ϕ (ϕ (x)) = x
In other words, ϕ is its own inverse. We also denote involutions as ∗ : A → A, x ↦ x∗.
Example 2.44. Let R be a semifield with a negation map. Then the transpose on Mn×n (R) is an
involution.
Definition 2.45. Let R be a semifield with a negation map, and let A be a nonassociative semialgebra with an involution ∗. An element x ∈ A is called symmetric, if
x∗ = x
x is called skew-symmetric, if
x∗ = (−) x
Lemma 2.46. Let A be a nonassociative semialgebra over R. Denote by [·, ·] the negated commutator
on A, and let ∗ be an involution on A. Then the set of all skew-symmetric elements in A,
à = {x ∈ A | x∗ = (−) x}
is a Lie subalgebra of A.
Proof. ∗ is an involution, and in particular an antihomomorphism, so à is a submodule of A.
Assume x, y ∈ A. Then
[x, y]∗ = (xy + (−) yx)∗ = y∗x∗ + (−) x∗y∗ = [y∗, x∗]
If x, y ∈ Ã, then
[x, y]∗ = [y ∗ , x∗ ] = [(−) y, (−) x] = [y, x] = (−) [x, y]
so [x, y] ∈ Ã, as needed.
Now we can define Bn , Cn and Dn by defining their involutions.
Definition 2.47. Let R be a semifield with a negation map. We define an involution ∗ on gl (k, R)
by the transpose (A∗ = At). As stated in Lemma 2.46, we get a Lie subalgebra consisting of all the
skew-symmetric elements.
(1) When k = 2n + 1 is odd, this Lie semialgebra is called Bn , the negated odd-dimensional
orthogonal algebra.
(2) When k = 2n is even, it is called Dn , the negated even-dimensional orthogonal algebra.
Remark 2.48. Bn has the following s-base:
{ei,j + (−) ej,i | 1 ≤ i < j ≤ 2n + 1} ∪ {1◦R ei,i | 1 ≤ i ≤ 2n + 1}
whereas Dn has the following s-base:
{ei,j + (−) ej,i | 1 ≤ i < j ≤ 2n} ∪ {1◦R ei,i | 1 ≤ i ≤ 2n}
Definition 2.49. Let R be a semifield with a negation map. We define an involution ∗ on gl (2n, R)
by the rule (A B; C D)∗ = (Dt (−)Bt; (−)Ct At), written in n × n blocks.
By taking the skew-symmetric elements, we get the Lie subalgebra Cn , the negated symplectic
algebra.
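As a sanity check of this definition, the following sketch (over the symmetrized max-plus semiring, an illustrative choice) implements the block involution and verifies that the basis elements of Lemma 2.50 below are skew-symmetric:

# Sketch of the involution of Definition 2.49 on gl(2n, R) over the
# symmetrized max-plus semiring; M* = [[D^t, (-)B^t], [(-)C^t, A^t]].
NEG_INF = float('-inf')
zero, one = (NEG_INF, NEG_INF), (0.0, NEG_INF)
add = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))
neg = lambda x: (x[1], x[0])

n = 2
N = 2 * n

def e(i, j):
    return [[one if (r, c) == (i, j) else zero for c in range(N)]
            for r in range(N)]

def madd(A, B):
    return [[add(A[r][c], B[r][c]) for c in range(N)] for r in range(N)]

def mneg(A):
    return [[neg(A[r][c]) for c in range(N)] for r in range(N)]

def star(M):
    S = [[None] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            if r < n and c < n:      # upper-left block of M* is D^t
                S[r][c] = M[n + c][n + r]
            elif r < n:              # upper-right block is (-)B^t
                S[r][c] = neg(M[c - n][n + r])
            elif c < n:              # lower-left block is (-)C^t
                S[r][c] = neg(M[n + c][r - n])
            else:                    # lower-right block is A^t
                S[r][c] = M[c - n][r - n]
    return S

basis = []
for i in range(n):
    for j in range(n):
        basis.append(madd(e(i, j), mneg(e(n + j, n + i))))
        basis.append(madd(e(i, n + j), e(j, n + i)))
        basis.append(madd(e(n + i, j), e(n + j, i)))

for M in basis:
    assert star(M) == mneg(M)        # skew-symmetric: M* = (-)M
print("all basis elements of Lemma 2.50 are skew-symmetric")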
Lemma 2.50. Cn has the following base:
{ei,j + (−) en+j,n+i | 1 ≤ i, j ≤ n} ∪ {ei,n+j + ej,n+i | 1 ≤ i, j ≤ n} ∪ {en+i,j + en+j,i | 1 ≤ i, j ≤ n}
Proof. Denote the above set by B, and assume (A1 A2; A3 A4) ∈ Cn. Then

(A1 A2; A3 A4)∗ = (A4t (−)A2t; (−)A3t A1t) = (−) (A1 A2; A3 A4)

and we get the following conditions:

A4t = (−) A1;  A2t = A2;  A3t = A3

If (Ak)i,j = ak,i,j, then

(A1 A2; A3 A4) = Σ_{i,j=1}^{n} a1,i,j (ei,j + (−) en+j,n+i) + Σ_{i,j=1}^{n} a2,i,j (ei,n+j + ej,n+i) + Σ_{i,j=1}^{n} a3,i,j (en+i,j + en+j,i)

and this combination is unique, implying B is a base of Cn.
3. Solvable and Nilpotent Symmetrized Lie Algebras
3.1. Basic Definitions.
3.1.1. Solvability.
Definition 3.1. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. The derived series of L is the series defined by L(0) = L, L(1) = L′ = [L, L], and in general: L(n) = [L(n−1), L(n−1)].
Definition 3.2. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. We say that L is solvable, if
∃n ∈ N : L(n) ⊆ L◦
Example 3.3. All abelian Lie semialgebras with a negation map are solvable.
Lemma 3.4. Any Lie semialgebra with a negation map L, for which L′ ⊆ L◦ , is abelian.
Proof. The condition is equivalent to ∀x, y ∈ L : [x, y] ∈ L◦ , which shows that L is abelian.
Lemma 3.5. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. If L is solvable, then so are all of its subalgebras and homomorphic images.
Proof. If K is a subalgebra of L, then it is easy to see by induction that ∀n ∈ N : K (n) ⊆ L(n) .
Also, if ϕ : L → L1 is an R-epimorphism, then
∀n ∈ N : (L1)(n) = ϕ(L(n))

Therefore, if L is solvable, there is some n ∈ N such that L(n) ⊆ L◦; by the above formula, (L1)(n) ⊆ L1◦.
Lemma 3.6. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. L is solvable if and only if there exists a chain of subalgebras Lk ⊳ Lk−1 ⊳
· · · ⊳ L0 = L, such that Lk ⊆ L◦ and L′i ⊆ Li+1 .
Proof. It is easy to check that in such a chain, L(i) ⊆ Li ; so, if such a chain exists, L(k) ⊆ L◦ ,
implying L is solvable.
If L is solvable with L(k) ⊆ L◦, then the chain L(k) ⊳ · · · ⊳ L(0) = L works.
Lemma 3.7. If I, J ⊳ L are two ideals of L, and if I is solvable, then I ∩ J is also solvable.
Proof. I ∩ J ⊳ I, so this is a direct corollary of Lemma 3.5.
Lemma 3.8. If I, J ⊳ L are two solvable ideals of L, then I + J is also solvable.
Proof. We shall prove by induction that
∀k ∈ N ∪ {0} : (I + J)(k) ⊆ I(k) + J(k) + I ∩ J

If k = 0 the assertion is clear. Assume it is true for k, i.e.

(I + J)(k) ⊆ I(k) + J(k) + I ∩ J

We get

(I + J)(k+1) = [(I + J)(k), (I + J)(k)] ⊆ [I(k) + J(k) + I ∩ J, I(k) + J(k) + I ∩ J] =
= [I(k), I(k)] + [I(k), J(k) + I ∩ J] + [J(k), J(k)] + [J(k), I(k) + I ∩ J] + [I ∩ J, I(k) + J(k) + I ∩ J] ⊆ I(k+1) + J(k+1) + I ∩ J

Note that [I(k), J(k) + I ∩ J], [J(k), I(k) + I ∩ J], [I ∩ J, I(k) + J(k) + I ∩ J] ⊆ I ∩ J.
I and J are solvable, so ∃m, n ∈ N : I(m), J(n) ⊆ L◦. Taking k = max{m, n}, we get

(I + J)(k) ⊆ L◦ + I ∩ J

We also know that (I ∩ J)(k) ⊆ L◦, and thus,

(I + J)(2k) ⊆ (L◦)(k) + (I ∩ J)(k) + L◦ ∩ I ∩ J ⊆ L◦

implying I + J is solvable.
3.1.2. Nilpotency.
Definition 3.9. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. The descending central series, or the lower central series, is the series defined by L0 = L, L′ = L1 = [L, L], and in general: Ln = [L, Ln−1].
Definition 3.10. Let R be a semifield with a negation map, and let L be a Lie semialgebra with a
negation map over R. L is called nilpotent, if
∃n ∈ N : Ln ⊆ L◦
Example 3.11. All abelian Lie semialgebras with a negation map are nilpotent.
Lemma 3.12. ∀n ∈ N ∪ {0} : L(n) ⊆ Ln .
Proof. By induction: for n = 0, it is clear, because L(0) = L = L0 .
Assume the statement is true for n ∈ N ∪ {0}, and let x ∈ L(n+1) . Then there exist y1 , y2 ∈ L(n)
such that
x = [y1 , y2 ]
So, y1 ∈ L, and by the induction hypothesis, y2 ∈ Ln . Thus,
x = [y1 , y2 ] ∈ [L, Ln ] = Ln+1
and we get L(n+1) ⊆ Ln+1, as required.
Corollary 3.13. Any nilpotent Lie semialgebra with a negation map is solvable.
Lemma 3.14. Let R be a semifield with a negation map, and let L be a nilpotent Lie semialgebra
with a negation map over R.
(1) All of the subalgebras and homomorphic images of L are nilpotent.
(2) If L ≠ L◦, then L◦ ≠ Z(L) (in fact, L◦ ⊊ Z(L)).
Proof.
(1) If K is a subalgebra of L, then it is easy to see by induction that ∀n ∈ N : Kn ⊆ Ln. Also, if ϕ : L → L′ is an R-epimorphism, then ∀n ∈ N : (L′)n = ϕ(Ln).
(2) Assume n is the maximal index for which Ln ⊈ L◦. Then [L, Ln] ⊆ L◦, so Ln ⊆ Z(L). There exists x0 ∈ Ln \ L◦, so x0 ∈ Z(L) \ L◦, as required.
Lemma 3.15. If L′ is nilpotent, then L is solvable.
Proof. We shall prove that
∀n ∈ N ∪ {0} : L(n+1) ⊆ (L′)n

We use induction on n, the case n = 0 being trivial. We get

L(n+1) = [L(n), L(n)] ⊆ [(L′)n−1, (L′)n−1] ⊆ [L′, (L′)n−1] = (L′)n
Thus, if L′ is nilpotent, for some n we have (L′ )n ⊆ (L′ )◦ ⊆ L◦ , implying L(n+1) ⊆ L◦ , so L is
solvable.
Similarly to Lemma 3.7 and Lemma 3.8, we have the following lemmas:
Lemma 3.16. If I, J ⊳ L are two ideals of L, and if I is nilpotent, then I ∩ J is also nilpotent.
Lemma 3.17. If I, J ⊳ L are two nilpotent ideals of L, then I + J is also nilpotent.
3.1.3. Solvability and Nilpotency Modulo Ideals. In the semiring setting, the quotient algebra structure fails, and its replacement is congruences. Nevertheless, we can formulate definitions equivalent to solvability and nilpotency of quotient algebras.
Definition 3.18. Let R be a semifield with a negation map, let L be a Lie semialgebra with a
negation map over R, and let I ⊳ L. We say that L is solvable modulo I, if
∃n ∈ N : L(n) ⊆ I
Remark 3.19. Solvability modulo L◦ is equivalent to solvability.
Lemma 3.20. If L is solvable modulo I, and if I is solvable (as a Lie semialgebra with a negation
map), then L is also solvable.
Proof. L is solvable modulo I, so there exists n ∈ N for which L(n) ⊆ I. But I is also solvable, so
there exists m ∈ N for which I (m) ⊆ I ◦ ⊆ L◦ . To conclude,
L(m+n) = (L(n))(m) ⊆ I(m) ⊆ L◦
implying L is solvable.
Definition 3.21. Let R be a semifield with a negation map, let L be a Lie semialgebra with a
negation map over R, and let I ⊳ L. We say that L is nilpotent modulo I, if
∃n ∈ N : Ln ⊆ I
Remark 3.22. Nilpotency modulo L◦ is equivalent to nilpotency.
Definition 3.23. Let R be a semifield with a negation map, let L be a Lie semialgebra with a
negation map over R, and let I ⊳ L. We say that L is abelian modulo I, if [L, L] ⊆ I.
Remark 3.24. Lemma 3.6 can be rephrased as follows: L is solvable if and only if there exists a chain of subalgebras Lk ⊳ Lk−1 ⊳ · · · ⊳ L0 = L such that each Li is abelian modulo Li+1 and Lk is abelian.
3.1.4. The Symmetrized Radical and Semisimplicity.
Definition 3.25. Let R be a semifield with a negation map, let L be a Lie semialgebra with a
negation map over R. The (negated) radical of L is
RadL = Σ_{I⊳L solvable} I
That is, the negated radical is the sum of all solvable ideals of L.
Remark 3.26. In the classical theory, the radical of a finitely generated Lie algebra is its unique
maximal solvable ideal. However, in our theory, there is no guarantee that there would be one
maximal solvable ideal, since being finitely generated as a module with a negation map usually does
not mean being Noetherian (and thus, there may be infinite ascending chains of solvable ideals).
Definition 3.27. Let R be a semifield with a negation map, let L be a Lie semialgebra over R. We
say that L is locally solvable, if any finitely generated subalgebra of L is solvable.
Lemma 3.28. RadL is locally solvable.
Proof. Let L1 ⊆ RadL be a finitely generated subalgebra of L. Write
L1 = Span {x1 , . . . , xn }
By the definition of RadL, for each i there are solvable ideals Ii,1 , . . . , Ii,mi such that
xi ∈ Σ_{j=1}^{mi} Ii,j

Therefore,

L1 ⊆ Σ_{i=1}^{n} Σ_{j=1}^{mi} Ii,j
Since the sum is a finite sum of solvable ideals, by Lemma 3.8, it is also solvable. Therefore, by
Lemma 3.5, also L1 is solvable, as required.
Definition 3.29. Let R be a semifield with a negation map, let L be a Lie semialgebra over R.
If RadL = L◦ , then L is called semisimple.
Lemma 3.30. L is semisimple if and only if any abelian ideal of L is contained in L◦ .
Proof. Let L be semisimple, and let I be an abelian ideal of L. Then I is solvable, implying
I ⊆ RadL = L◦ .
Now suppose that any abelian ideal of L is contained in L◦ . If L is not semisimple, it has a
solvable ideal I ⊈ L◦. Take a maximal n such that I(n) ⊈ L◦; then I(n) is an abelian ideal, a
contradiction.
3.2. Lifts of Lie Semialgebras with a Negation Map. As a continuation of subsection 1.4, we
give a definition of a lift of a Lie semialgebra with a negation map.
Definition 3.31. Let R be a semiring with a negation map with a lift (R̂, φ̂), and let L be a Lie semialgebra over R. A lift of L is a Lie algebra L̂ over R̂, together with a map ψ̂ : L̂ → L, such that the following properties hold:
(1) (L̂, ψ̂) is a lift of L as modules.
(2) ∀x1, x2 ∈ L̂ : ψ̂([x1, x2]) ⊨ [ψ̂(x1), ψ̂(x2)].
We also have two parallel lemmas to Lemma 1.36:
Lemma 3.32. Let K̂ ⊆ L̂ be a subalgebra. Then ψ̂(K̂) is a subalgebra of L.
Proof. By Lemma 1.36, ψ̂(K̂) is a submodule of L. Therefore, we need to check that it is closed with respect to the negated Lie bracket.
Let x, y ∈ ψ̂(K̂). Take x̂, ŷ ∈ K̂ such that x ⊨ ψ̂(x̂) and y ⊨ ψ̂(ŷ). Then

[x, y] ⊨ [ψ̂(x̂), ψ̂(ŷ)] ⊨ ψ̂([x̂, ŷ])

Since [x̂, ŷ] ∈ K̂, we are done.
Lemma 3.33. Let Î ⊳ L̂ be an ideal. Then ψ̂(Î) is an ideal of L.
Proof. By Lemma 3.32, ψ̂(Î) is a subalgebra of L.
Let x ∈ ψ̂(Î) and y ∈ L. Take x̂ ∈ Î such that x ⊨ ψ̂(x̂), and take ŷ ∈ L̂ such that y ⊨ ψ̂(ŷ). Then

[x, y] ⊨ [ψ̂(x̂), ψ̂(ŷ)] ⊨ ψ̂([x̂, ŷ])

Since [x̂, ŷ] ∈ Î, we are done.
Although in general there is no obvious way to find a lift of a given Lie semialgebra (even if there
is a lift as modules), there is some consolation:
Theorem 3.34. Let R be a semiring with a negation map with a lift (R̂, φ̂). Consider the function ψ̂ : gl(n, R̂) → gl(n, R) which applies φ̂ on each entry of a given matrix. Then (gl(n, R̂), ψ̂) is a lift of gl(n, R) as Lie semialgebras.
Proof. Since this is already a lift of modules, we will prove that given Â, B̂ ∈ gl(n, R̂),

ψ̂(Â) ψ̂(B̂) ⊨ ψ̂(ÂB̂)

Indeed,

(ψ̂(Â) ψ̂(B̂))i,j = Σ_{k=1}^{n} φ̂(âi,k) φ̂(b̂k,j) ⊨ φ̂(Σ_{k=1}^{n} âi,k b̂k,j) = (ψ̂(ÂB̂))i,j

Now the assertion follows, since

[ψ̂(Â), ψ̂(B̂)] = ψ̂(Â) ψ̂(B̂) + (−) ψ̂(B̂) ψ̂(Â) ⊨ ψ̂(ÂB̂) + (−) ψ̂(B̂Â) ⊨ ψ̂(ÂB̂ + (−) B̂Â) = ψ̂([Â, B̂])
3.2.1. Solvability, Nilpotency and Semisimplicity of the Lift. Throughout this subsection, (R̂, φ̂) will be a lift of a semifield with a negation map R, L will be a Lie semialgebra over R, and (L̂, ψ̂) will be a lift of L.
Lemma 3.35. Let L̂1, L̂2 be subalgebras of L̂. Then

[ψ̂(L̂1), ψ̂(L̂2)] ⊆ ψ̂([L̂1, L̂2])
Proof. Let x ∈ ψ̂(L̂1) and y ∈ ψ̂(L̂2). We will prove that [x, y] ∈ ψ̂([L̂1, L̂2]). Take x̂ ∈ L̂1 and ŷ ∈ L̂2 such that x ⊨ ψ̂(x̂) and y ⊨ ψ̂(ŷ). Then

[x, y] ⊨ [ψ̂(x̂), ψ̂(ŷ)] ⊨ ψ̂([x̂, ŷ])

Since [x̂, ŷ] ∈ [L̂1, L̂2], we are finished.
Corollary 3.36.

∀n ∈ N ∪ {0} : Ln ⊆ ψ̂(L̂n)

and

∀n ∈ N ∪ {0} : L(n) ⊆ ψ̂(L̂(n))
Proof. We will prove the first assertion; the second is proven similarly.
We use induction on n, the case n = 0 being trivial. If the assertion holds for n ∈ N ∪ {0}, then,
by Lemma 3.35 and the induction hypothesis,
Ln+1 = [L, Ln] ⊆ [ψ̂(L̂), ψ̂(L̂n)] ⊆ ψ̂([L̂, L̂n]) = ψ̂(L̂n+1)
as required.
Corollary 3.37. If L̂ is nilpotent (resp. solvable), then L is nilpotent (resp. solvable).
Proof. We will prove the assertion for nilpotency; the assertion for solvability is proved similarly.
Since L̂ is nilpotent, there is n such that L̂n = 0. By Corollary 3.36,

Ln ⊆ ψ̂(L̂n) = ψ̂(0) = L◦

and thus L is nilpotent.
Corollary 3.38.

ψ̂(Rad(L̂)) ⊆ Rad(L)

Proof. Rad(L̂) ⊳ L̂, and thus, by Lemma 3.33, ψ̂(Rad(L̂)) ⊳ L. In addition, by Corollary 3.37, ψ̂(Rad(L̂)) is solvable. Therefore, the assertion follows.
Corollary 3.39. If L is semisimple, then L̂ is semisimple.
Proof. L is semisimple, hence Rad(L) ⊆ L◦. By Corollary 3.38,

ψ̂(Rad(L̂)) ⊆ Rad(L) ⊆ L◦

Hence Rad(L̂) = 0, as required.
3.3. Cartan’s Criterion for Lie Algebras Over ELT Algebras. For this subsection, we work
only with ELT algebras. Specifically, we are interested in the following type of ELT algebra:
Definition 3.40. Let R = R (F , L ) be an ELT algebra. We say that R is a divisible ELT field,
if L is a field, and F is a divisible group.
In our version of Cartan’s criterion, we will use the essential trace defined in [6]. Let us recall its
definition:
Definition 3.41. Let R be a divisible ELT field, and let A ∈ Mn(R). Let

pA(λ) = det(λI + [−1]0 A) = λ^n + Σ_{i=1}^{n} αi λ^{n−i}

be its ELT characteristic polynomial. We define

L(A) = { ℓ ≥ 1 | ∀k ≥ n : τ(αℓ)/ℓ ≥ τ(αk)/k },  µ(A) = min L(A)

In words, αµ λ^{n−µ} is the first monomial after λ^n which is not inessential in pA(λ).
The essential trace of A is

etr(A) = tr(A), if [−1]0 tr(A) is essential in pA(λ); etr(A) = [0](τ(αµ)/µ), otherwise.

Recall:
Corollary ([6, Corollary 2.22]). If A is ELT nilpotent, then s (etr (A)) = 0.
Returning to Lie semialgebras, we specialize our symmetrized Lie semialgebra to be free; therefore,
given a homomorphism ϕ : L → L, its trace and essential trace are well-defined.
Definition 3.42. Let R be a divisible ELT field, and let L be a free symmetrized Lie semialgebra
over R. The Killing form of L is
κ (x, y) = tr (adx ◦ ady )
The essential Killing form of L is
κes (x, y) = etr (adx ◦ ady )
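For orientation, the following sketch computes a Killing form from structure constants in the classical setting, over the reals with the ordinary trace (not over an ELT field, and not the essential trace); the constants are those of sl(2). Everything here is an illustrative classical analogue, not the paper's construction:

# Classical sanity check: kappa(x, y) = tr(ad_x o ad_y) computed from
# structure constants, for sl(2) with basis (e, f, h).
import itertools

basis = ['e', 'f', 'h']
c = {('e', 'f'): {'h': 1.0}, ('f', 'e'): {'h': -1.0},
     ('h', 'e'): {'e': 2.0}, ('e', 'h'): {'e': -2.0},
     ('h', 'f'): {'f': -2.0}, ('f', 'h'): {'f': 2.0}}

def ad(a):
    # Matrix of ad_{x_a}: column b holds the coordinates of [x_a, x_b].
    return [[c.get((a, b), {}).get(r, 0.0) for b in basis] for r in basis]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def kappa(a, b):
    M = mat_mul(ad(a), ad(b))
    return sum(M[i][i] for i in range(3))

for a, b in itertools.product(basis, repeat=2):
    # expect kappa(e,f) = kappa(f,e) = 4, kappa(h,h) = 8, all others 0
    print(a, b, kappa(a, b))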
Definition 3.43. Let R be a divisible ELT field, let L be a free symmetrized Lie semialgebra
over R, and let κ be the Killing form of L. We say that κ is non-degenerate, if
Radκ = {x ∈ L|∀y ∈ L : s (κ (x, y)) = 0} ⊆ L◦
We say that κes is non-degenerate, if
SRadκ = {x ∈ L|∀y ∈ L : s (κes (x, y)) = 0} ⊆ L◦
Remark 3.44. Since Radκ ⊆ SRadκ (by [6, Lemma 2.11]), if κes is non-degenerate, then κ is non-degenerate.
Theorem 3.45 (Cartan’s criterion). If L has a non-degenerate essential Killing form, then L is
semisimple.
Proof. By Lemma 3.30, it is enough to prove that any abelian ideal I of L is of layer zero.
Let I be an abelian ideal, and let x ∈ I. Take an arbitrary y ∈ L. Since adx ◦ ady maps L to
I, (adx ◦ ady )2 maps L to [I, I], which is of layer zero. So adx ◦ ady is ELT nilpotent. Thus, by [6,
Corollary 2.22], s (etr (adx ◦ ady )) = 0.
This argument implies x ∈ SRadκ; so I ⊆ SRadκ. But κes is non-degenerate, and thus
I ⊆ L◦ , as required. So L is semisimple.
4. PBW Theorem
In this section we will construct the universal enveloping algebra of an arbitrary Lie semialgebra with a negation map. Then, we will give a counterexample to a naive version of the PBW theorem.
The tensor product is a well-known construction in general categories (see, for example, [4], [11], [19] and [26]). There are two (nonisomorphic) notions of tensor product of modules over semirings in the literature, both denoted by ⊗R. For our purposes, we use the tensor product discussed in [20], [21] and [4]. The other notion of tensor product is discussed in [26] and [10]; their construction satisfies a different universal property, and the outcome is a cancellative monoid.
4.1. Tensor Power and Tensor Algebra of Modules over a Semiring.
Definition 4.1. Let R be a commutative semiring, and let M be an R-module. The k-th tensor
product of M is defined as
M⊗k = M ⊗ · · · ⊗ M (k times)
If k = 0, we define M ⊗0 = R.
We note that this definition is well-defined, because the tensor product is associative.
Definition 4.2. Let R be a commutative semiring, and let M be an R-module. The tensor algebra
of M , denoted T (M ), is
T(M) = ⊕_{k=0}^{∞} M⊗k
Remark 4.3. The natural isomorphism
M⊗k ⊗ M⊗ℓ ≅ M⊗(k+ℓ)
yields a multiplication
M ⊗k × M ⊗ℓ → M ⊗(k+ℓ)
given by (a, b) ↦ a ⊗ b. Endowed with this multiplication, T(M) is an R-semialgebra.
Lemma 4.4. Let A be an R-semialgebra, and assume that ϕ : M → A is a homomorphism of
R-modules. Then there exists a unique homomorphism of R-semialgebras ϕ̂ : T (M ) → A, such that
ϕ̂|M = ϕ.
Proof. Uniqueness forces that

ϕ̂(x1 ⊗ · · · ⊗ xk) = ϕ̂(x1) · · · ϕ̂(xk) = ϕ(x1) · · · ϕ(xk)

and this formula indeed defines a homomorphism of R-semialgebras.
4.2. The Universal Enveloping Algebra of a Lie Algebra with a Negation Map. We now
return to our motivation, where we work with a Lie semialgebra L over a semifield with a negation
map R.
Definition 4.5. Let R be a semifield with a negation map, and let L be a Lie semialgebra over R.
We define a congruence ∼ on T (L) as the congruence generated by
∀x, y ∈ L : [x, y] ∼ x ⊗ y + (−) y ⊗ x
The universal enveloping algebra of L is
U (L) = T (L) /∼
If ρ : T (L) → U (L) is the canonical map, we have the PBW homomorphism of L, which we
denote ϕL : L → U (L), defined by ϕL = ρ|L .
However, the PBW homomorphism of L need not be injective, as the next subsection demonstrates.
4.3. Counterexample to the Naive PBW Theorem. We now present a Lie semialgebra with
a negation map, which cannot be embedded into a Lie semialgebra with a negation map obtained
from an associative algebra together with the negated commutator operator. In particular, ϕL will
not be injective.
Example 4.6. The example will use the ELT structure. For convenience, we assume that our underlying ELT field is R = R(C, R). Recall that in the ELT theory, the surpassing relation is denoted ⊨.
We take a free R-module L with base B = {x1 , x2 }, and define:
[x1, x1] = [0]1 x1 + [0]1 x2
[x2, x2] = [0]1 x1 + [0]1 x2
[x1, x2] = x1 + x2
[x2, x1] = (−) [x1, x2]
By Lemma 2.28, L equipped with [·, ·] is a Lie semialgebra with a negation map over R.
Consider
y1 = [x1 , [x1 , x2 ]] + [x1 , [x2 , x1 ]]
and
y2 = [x2 , [x1 , x1 ]] .
From Jacobi’s identity, we know that y1 + y2 ∈ L◦ . Let us expand them, using the definition of
the Lie bracket:
y1 = [x1, [x1, x2]] + [x1, [x2, x1]] = [x1, [0]0 [x1, x2]] = [x1, [0]0 x1 + [0]0 x2] =
= [0]0 [x1, x1] + [0]0 [x1, x2] = [0]1 x1 + [0]1 x2 + [0]0 x1 + [0]0 x2 = [0]1 x1 + [0]1 x2
whereas
y2 = [x2, [x1, x1]] = [x2, [0]1 x1 + [0]1 x2] = [0]1 [x2, x1] + [0]1 [x2, x2] =
= [0]1 ([−1]0 x1 + [−1]0 x2) + [0]1 ([0]1 x1 + [0]1 x2) = [0]2 x1 + [0]2 x2
Therefore, y2 ⊨ y1, but y1 ≠ y2.
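The computation of y1 and y2 can be replayed mechanically. The sketch below models ELT numbers as (layer, value) pairs: addition keeps the larger tropical value and adds layers on ties, multiplication multiplies layers and adds values, and (−) negates the layer. The encoding and helper names are assumptions made for illustration:

NEG_INF = float('-inf')
ezero = (0.0, NEG_INF)                   # the ELT zero [0](-inf)

def eadd(x, y):                          # larger value wins; layers add on ties
    if x[1] > y[1]: return x
    if y[1] > x[1]: return y
    return (x[0] + y[0], x[1])

def emul(x, y):                          # layers multiply, values add
    return (x[0] * y[0], x[1] + y[1])

def eneg(x):                             # (-)[l]v = [-l]v
    return (-x[0], x[1])

def vadd(u, v):
    return {b: eadd(u.get(b, ezero), v.get(b, ezero)) for b in {*u, *v}}

def vscale(c, u):
    return {b: emul(c, x) for b, x in u.items()}

def vneg(u):
    return {b: eneg(x) for b, x in u.items()}

# The bracket of Example 4.6; an ELT number [l]v is the pair (l, v).
table = {('x1', 'x1'): {'x1': (0.0, 1.0), 'x2': (0.0, 1.0)},  # [0]1 x1 + [0]1 x2
         ('x2', 'x2'): {'x1': (0.0, 1.0), 'x2': (0.0, 1.0)},
         ('x1', 'x2'): {'x1': (1.0, 0.0), 'x2': (1.0, 0.0)}}  # x1 + x2
table[('x2', 'x1')] = vneg(table[('x1', 'x2')])

def bracket(u, v):                       # bilinear extension
    out = {}
    for p, cp in u.items():
        for q, cq in v.items():
            out = vadd(out, vscale(emul(cp, cq), table[(p, q)]))
    return out

x1, x2 = {'x1': (1.0, 0.0)}, {'x2': (1.0, 0.0)}
y1 = vadd(bracket(x1, bracket(x1, x2)), bracket(x1, bracket(x2, x1)))
y2 = bracket(x2, bracket(x1, x1))
print('y1 =', y1)                        # expect [0]1 x1 + [0]1 x2, i.e. (0.0, 1.0)
print('y2 =', y2)                        # expect [0]2 x1 + [0]2 x2, i.e. (0.0, 2.0)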
Now, assume that there is an ELT associative algebra A, which is a Lie semialgebra with a negation
map together with the ELT commutator, such that there exists an R-monomorphism ϕ : L → A
of Lie semialgebras with a negation map. Denote a1 = ϕ(x1), a2 = ϕ(x2). We will show that
ϕ (y1 ) = ϕ (y2 ), which will show that ϕ is not injective.
We note that
b1 = ϕ (y1 ) = [a1 , [a1 , a2 ]] + [a1 , [a2 , a1 ]]
and
b2 = ϕ (y2 ) = [a2 , [a1 , a1 ]]
Since any associative algebra with the symmetrized commutator satisfies the strong Jacobi’s identity
(Example 2.12), b1 ⊨ b2. However, since y2 ⊨ y1, also b2 = ϕ(y2) ⊨ ϕ(y1) = b1.
By Corollary 1.6, since R◦ is idempotent, ⊨ is a partial order on A. In particular, ϕ(y1) = b1 =
b2 = ϕ (y2 ). But ϕ is a monomorphism, implying y1 = y2 , a contradiction.
References
[1] Marianne Akian, Guy Cohen, Stephane Gaubert, R Nikoukhah, and Jean Pierre Quadrat. Linear systems in
(max,+) algebra. In Decision and Control, 1990., Proceedings of the 29th IEEE Conference on, pages 151–156.
IEEE, 1990.
[2] Marianne Akian, Stéphane Gaubert, and Alexander Guterman. Linear independence over tropical semirings and
beyond. Contemporary Mathematics, 495:1, 2009.
[3] Marianne Akian, Stéphane Gaubert, and Alexander Guterman. Tropical Cramer determinants revisited. Tropical and Idempotent Mathematics and Applications, 616:45, 2014.
[4] Markus Banagl. The tensor product of function semimodules. Algebra universalis, 70(3):213–226, 2013.
[5] Guy Blachar and Erez Sheiner. ELT linear algebra. to be published, 2016.
[6] Guy Blachar and Erez Sheiner. ELT linear algebra II. to be published, 2016.
[7] David Dolžan and Polona Oblak. Invertible and nilpotent matrices over antirings. arXiv preprint arXiv:0806.2996,
2008.
[8] Stéphane Gaubert. Théorie des systèmes linéaires dans les dioı̈des. PhD thesis, 1992.
[9] Stephane Gaubert. Methods and applications of (max,+) linear algebra. In STACS 97, pages 261–282. Springer,
1997.
[10] Jonathan S Golan. Semirings and Their Applications. Springer Science & Business Media, 1999.
[11] Alexandre Grothendieck. Produits tensoriels topologiques et espaces nucléaires. Séminaire Bourbaki, 2:193–200,
1955.
[12] James E Humphreys. Introduction to Lie Algebras and Representation Theory, volume 9. Springer Science &
Business Media, 1972.
[13] Zur Izhakian. Tropical arithmetic and tropical matrix algebra. arXiv preprint math/0505458, 2005.
[14] Zur Izhakian, Manfred Knebusch, and Louis Rowen. Supertropical linear algebra. Pacific Journal of Mathematics,
266(1):43–75, 2013.
[15] Zur Izhakian, Adi Niv, and Louis Rowen. Supertropical SLn . arXiv preprint arXiv:1508.04483, 2015.
[16] Zur Izhakian and Louis Rowen. Supertropical matrix algebra. Israel Journal of Mathematics, 182(1):383–424,
2011.
[17] Zur Izhakian and Louis Rowen. Supertropical matrix algebra II: Solving tropical equations. Israel Journal of
Mathematics, 186(1):69–96, 2011.
[18] Zur Izhakian and Louis Rowen. Supertropical matrix algebra III: Powers of matrices and their supertropical
eigenvalues. Journal of Algebra, 341(1):125–149, 2011.
[19] EB Katsov. Tensor product of functors. Siberian Mathematical Journal, 19(2):222–229, 1978.
[20] Yefim Katsov. Tensor products and injective envelopes of semimodules over additively regular semirings. In Algebra Colloquium, volume 4, pages 121–131. Springer, New York, 1997.
[21] Yefim Katsov. Toward homological characterization of semirings: Serre's conjecture and Bass's perfectness in a semiring context. Algebra Universalis, 52(2-3):197–214, 2005.
[22] Brett Parker. Exploded manifolds. Advances in Mathematics, 229(6):3256–3319, 2012.
[23] Louis Halle Rowen. Graduate Algebra: Noncommutative View, volume 91. American Mathematical Society, 2008.
[24] Louis Halle Rowen. Symmetries in tropical algebra. arXiv preprint arXiv:1602.00353, 2016.
[25] Erez Sheiner. Exploded Layered Tropical Algebra. PhD thesis, Bar-Ilan University, 2015.
[26] Michihiro Takahashi. On the bordism categories III. In Mathematics Seminar Notes, volume 10, pages 211–236, 1982.
Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel.
E-mail address: [email protected]
| 0 |
Ultimate Intelligence Part III: Measures of
Intelligence, Perception and Intelligent Agents
Eray Özkural
arXiv:1709.03879v1 [] 8 Sep 2017
Gök Us Sibernetik Ar&Ge Ltd. Şti.
Abstract. We propose that operator induction serves as an adequate
model of perception. We explain how to reduce universal agent models
to operator induction. We propose a universal measure of operator induction fitness, and show how it can be used in a reinforcement learning
model and a homeostasis (self-preserving) agent based on the free energy principle. We show that the action of the homeostasis agent can be
explained by the operator induction model.
“Wir müssen wissen – wir werden wissen!”
— David Hilbert
1 Introduction
The ultimate intelligence research program is inspired by Seth Lloyd’s work on
the ultimate physical limits to computation [15]. We investigate the ultimate
physical limits and conditions of intelligence. This is the third installation of
the paper series, the first two parts proposed new physical complexity measures,
priors and limits of inductive inference [18,17].
We frame the question of ultimate limits of intelligence in a general physical
setting, for this we provide a general definition of an intelligent system and a
physical performance criterion, which as anticipated turns out to be a relation
of physical quantities and information, the latter of which we had conceptually
reduced to physics with minimum machine volume complexity in [18].
2 Notation and Background
2.1 Universal Induction
Let us recall Solomonoff’s universal distribution [21]. Let U be a universal computer which runs programs with a prefix-free encoding like LISP; y = U(x) denotes that the output of program x on U is y where x and y are bit strings.¹ Any unspecified variable or function is assumed to be represented as a bit string.

¹ A prefix-free code is a set of codes in which no code is a prefix of another. A computer file uses a prefix-free code, ending with an EOF symbol; thus, most reasonable programming languages are prefix-free.
|x| denotes the length of a bit-string x. f (·) refers to function f rather than its
application.
The algorithmic probability that a bit string x ∈ {0, 1}+ is generated by a
random program π ∈ {0, 1}+ of U is:
PU(x) = Σ_{U(π)∈x(0|1)∗ ∧ π∈{0,1}+} 2^{−|π|}    (1)
which conforms to Kolmogorov’s axioms [13]. PU (x) considers any continuation
of x, taking into account non-terminating programs.2 PU is also called the universal prior for it may be used as the prior in Bayesian inference, for any data
can be encoded as a bit string. We also give the basic definitions of Algorithmic
Information Theory (AIT) [14], where the algorithmic entropy, or complexity of
a bit string x ∈ {0, 1}+ is
HU∗(x) = − log2 PU(x),  HU(x) = min({|π| | U(π) = x})    (2)
We use some variables in overloaded fashion in the paper, e.g., π might be a
program, a policy, or a physical mechanism depending on the context.
2.2 Operator induction
Operator induction is a general form of supervised machine learning where we
learn a stochastic map from n question and answer pairs D = {(qi , ai )} sampled
from a (computable) stochastic source µ. Operator induction can be solved by
finding in available time a set of operators Oj (·|·), each a conditional probability
density function (cpdf), such that the following goodness of fit is maximized
Ψ = Σ_j ψnj    (3)

for a stochastic source µ where each term in the summation is the contribution of a model:

ψnj = 2^{−|Oj(·|·)|} Π_{i=1}^{n} Oj(ai|qi).    (4)
qi and ai are question/answer pairs in the input dataset drawn from µ, and Oj is
a computable cpdf in Equation 4. We can use the found m operators to predict
unseen data with a mixture model [24]
PU(an+1|qn+1) = Σ_{j=1}^{m} ψnj Oj(an+1|qn+1)    (5)
The goodness of fit in this case strikes a balance between high a priori probability and reproduction of data like in minimum message length (MML) method
[27,26], yet uses a universal mixture like in sequence induction. The convergence
theorem for operator induction was proven in [23] using Hutter’s extension to
arbitrary alphabet, and it bounds total error by HU (µ) ln 2 similarly to sequence
induction.
² We used the regular expression notation in language theory.
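To illustrate Equations 3–5 concretely, here is a toy sketch that replaces the (uncomputable) class of all programs with a small hand-picked hypothesis class carrying assigned description lengths; every name and number in it is an illustrative assumption:

D = [(q, q % 2) for q in range(1, 31)]   # 30 (question, answer) pairs

def parity_op(a, q):                     # predicts a = q mod 2, small error rate
    return 0.95 if a == q % 2 else 0.05

def uniform_op(a, q):                    # knows nothing
    return 0.5

ops = [("parity", parity_op, 20),        # (name, cpdf, description length in bits)
       ("uniform", uniform_op, 5)]

def psi(op, length):                     # the weight of Equation 4
    p = 2.0 ** (-length)
    for q, a in D:
        p *= op(a, q)
    return p

weights = {name: psi(op, l) for name, op, l in ops}
Psi = sum(weights.values())              # goodness of fit, Equation 3

q_next = 31                              # mixture prediction, Equation 5
p1 = sum(weights[name] * op(1, q_next) for name, op, _ in ops) / Psi
print(f"Psi = {Psi:.3e}, P(a=1 | q={q_next}) = {p1:.3f}")

The division by Ψ is our normalization choice, so that the two candidate answers form a proper cpdf; Equation 5 itself is stated unnormalized.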
2.3 Set induction
Set induction generalizes unsupervised machine learning where we learn a probability density function (pdf) from a set of n bitstrings D = {d1 , d2 , ..., dn }
sampled from a stochastic source µ. We can then inductively infer new members
to be added to the set with:
P(dn+1) = PU(D ∪ dn+1) / PU(D)    (6)
Set induction is clearly a restricted case of operator induction where we set Qi ’s
to null string. Set induction is a universal form of clustering, and it perfectly
models perception. If we apply set induction over a large set of 2D pictures of a room, it will necessarily give us a 3D representation of it. If we apply it to physical sensor data, it will infer the physical theory – perfectly general, with infinite domains – that explains the data; perception is then merely a specific case of scientific theory inference, though set induction works with both deterministic and non-deterministic problems.
2.4 Universal measures of intelligence
There is much literature on the subject of defining a measure of intelligence.
Hutter has defined an intelligence order relation in the context of his universal reinforcement learning (RL) model AIXI [8], which suggests that intelligence
corresponds to the set of problems an agent can solve. Also notable is the universal intelligence measure [10,11], which is again based on the AIXI model. Their
universal intelligence measure is based on the following philosophical definition
compiled from their review of definitions of intelligence in the AI literature.
Definition 1 (Legg & Hutter). Intelligence measures an agent’s ability to
achieve goals in a wide range of environments.
It implies that intelligence requires an autonomous goal-following agent. The
intelligence measure of [10] is defined as
Υ(π) = Σ_{µ∈E} 2^{−HU(µ)} Vµπ    (7)

where µ is a computable reward-bounded environment, and Vµπ is the expected sum of future rewards in the total interaction sequence of agent π: Vµπ = Eµ,π[Σ_{t=1}^{∞} γ^t rt], where rt is the instantaneous reward at time t generated from the interaction between the agent π and the environment µ, and γ^t is the time discount factor.
2.5 The free energy principle
In Asimov’s story titled “The Last Question”, the task of life is identified as overcoming the second law of thermodynamics, however futile. Variational free energy essentially measures predictive error, and it was introduced by Feynman to
address difficult path integral problems in quantum physics. In thermodynamic
free energy, energies are negative log probabilities like entropy. The free energy
principle states that any system must minimize its free energy to maintain its
order. An adaptive system that tends to minimize average surprise (entropy) will
tend to survive longer. A biological organism can be modelled as an adaptive
system that has an implicit probabilistic model of the environment, and the variational free energy puts an upper bound on the surprise, thus minimizing free
energy will improve the chances of survival. The divergence between the pdf of
environment and an arbitrary pdf encoded by its own mechanism is minimized
in Friston’s model [9]. It has been shown in detail that the free energy principle
adequately models a self-preserving agent in a stochastic dynamical system [6,9],
which we can interpret as an environment with computable pdf. An active agent
may be defined in the formalism of stochastic dynamical systems, by partitioning
the physical states X of the environment into X = E × S × A × Λ where e ∈ E
is an external state, s ∈ S is a sensory state, a ∈ A an active state, and λ ∈ Λ
is an internal state. Self-preservation is defined by the Markov blanket S × A,
the removal of which partitions X into external states E and internal states Λ
that influence each other only through sensory and action states. E influences
sensations S, which in turn influence internal states Λ, resulting in the choice
of action signals A, which impact E, forming the feedback loop of the adaptive
system. The system states x ∈ X evolve according to the stochastic equation:
ẋ(t) = f(x) + ω    (8)
x(0) = x0    (9)
f(x) = (fe(e, s, a), fs(e, s, a), fa(s, a, λ), fλ(s, a, λ))    (10)
where f (x) is the flow of system states and it is decomposed into flows over
the sets in the system partition, explicitly showing the dependencies among state
sets; ω models fluctuations. Friston formalizes the self-preservation (homeostasis)
problem as finding an internal dynamics that minimizes the uncertainty (Shannon entropy) of the external states, and shows a solution based on the principle
of least action [9] wherein minimizing free energy is synonymous with minimizing
the entropy of the external states (principle of least action), which subsequently
corresponds to active inference. We have space for only some key results from the
rather involved mathematical theory. p(s, f |m) is the generative pdf that generates sensorium s and fictive (hidden) states f ∈ F from probabilistic model
m, and q(f |λ) is the recognition pdf that predicts hidden states F in the world
given internal state. Generative pdf factorizes as p(s, f |m) = p(s|f, m)p(f |m).
Free energy is defined as energy minus entropy
F(s, λ) = Eq[− ln p(s, f|m)] − H(q(f|λ))    (11)
which can be subjectively computed by the system. Free energy is also equal to
surprise plus divergence between recognition and generative pdf’s.
F(s, λ) = − ln p(s|m) + DKL(q(f|λ)||p(f|s, m))    (12)
Minimizing divergence minimizes free energy, internal states λ may be optimized
to minimize predictive error using Equation 12, and surprise is invariant with
respect to λ. Free energy may be formulated as complexity plus accuracy of
recognition, as well.
F(s, λ) = Eq[− ln p(s, a|f, m)] + DKL(q(f|λ)||p(f|m))    (13)
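As a quick numerical check that Equations 11 and 12 agree, the following sketch evaluates both forms over small finite state spaces; the particular probability tables are arbitrary illustrative assumptions:

import math

# generative model p(s, f | m): s in {0, 1}, hidden f in {0, 1, 2}
p = {(0, 0): 0.20, (0, 1): 0.15, (0, 2): 0.05,
     (1, 0): 0.10, (1, 1): 0.20, (1, 2): 0.30}

q = {0: 0.5, 1: 0.3, 2: 0.2}             # recognition pdf q(f | lambda)
s = 1                                    # observed sensation

energy = sum(q[f] * -math.log(p[(s, f)]) for f in q)
entropy = -sum(q[f] * math.log(q[f]) for f in q)
F_eq11 = energy - entropy                # Equation 11: energy minus entropy

p_s = sum(p[(s, f)] for f in q)          # evidence p(s | m)
post = {f: p[(s, f)] / p_s for f in q}   # posterior p(f | s, m)
kl = sum(q[f] * math.log(q[f] / post[f]) for f in q)
F_eq12 = -math.log(p_s) + kl             # Equation 12: surprise plus divergence

assert abs(F_eq11 - F_eq12) < 1e-12
print(F_eq11, F_eq12)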
In this case, we may choose an action that changes sensations to reduce predictive
error. Only the first term is a function of action signals. Minimization of free
energy turns out to be equivalent to the information bottleneck principle of
Tishby [9,25]. The information bottleneck method is equivalent to the pioneering
work of Ashby, which is simple enough to state here [3,2]:
SB = I(λ; F) − I(S; λ)    (14)
where the first term is the mutual information between internal and hidden
states, and the second term is the mutual information between sensory states
and internal states. Both terms are expanded using conditional entropy, and
then two terms in the middle are eliminated because they are not relevant to
the optimization problem – we do not know the hidden variables in H(λ|F ) and
H(S) is constant.
SB = H(λ) − H(λ|F) − H(S) + H(S|λ)    (15)
SB∗ = H(λ) + H(S|λ)    (16)
Minimizing SB∗ in Equation 16 thus minimizes the sum of the entropy of internal
states and the entropy required to encode sensory states given internal states. In
other words, it strikes an optimal balance between model complexity H(λ), and
model accuracy H(S|λ). Friston further shows that Equation 16 directly derives
from the free energy principle, closing potential loopholes in the theory. Please
see [5] for a comprehensive application of the free energy principle to agents and
learning. Note also that the bulk of the theory assumes the ergodic hypothesis.
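A short sketch of Equation 16, computing SB∗ = H(λ) + H(S|λ) from a joint distribution over internal and sensory states (the table is an arbitrary illustrative assumption):

import math

joint = {(0, 0): 0.4, (0, 1): 0.1,       # joint p(lambda, s), lambda, s in {0, 1}
         (1, 0): 0.2, (1, 1): 0.3}

p_lam = {l: sum(v for (l2, s), v in joint.items() if l2 == l) for l in (0, 1)}
H_lam = -sum(p * math.log2(p) for p in p_lam.values())

H_s_given_lam = -sum(v * math.log2(v / p_lam[l])
                     for (l, s), v in joint.items())

print("SB* =", H_lam + H_s_given_lam)    # model complexity plus accuracy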
3 Perception as General Intelligence
Since we are chiefly interested in stochastic problems in the physical world, we
propose a straightforward informal definition of intelligence:
Definition 2. Intelligence measures the capability of a mechanism to solve prediction problems.
A mechanism is any physical machine, as usual; see [4], which suggests likewise.
Therefore, a general formulation of Solomonoff induction, operator induction,
might serve as a model of general intelligence, as well [24]. Recall that operator
induction can infer any physically plausible cpdf, thus its approximation can
solve any classical supervised machine learning problem. The only slight issue
with Equation 7 might be that it seems to exclude classical AI systems that are
not agents, e.g., expert systems, machine learning tools, knowledge representation systems, search and planning algorithms, and so forth, which are somewhat
more naturally encompassed by our informal definition.
3.1 Is operator induction adequate?
A question naturally arises as to whether operator induction can adequately solve
every prediction problem we require in AI. There are two strong objections to
operator induction that we know of. It is argued that in a dynamic environment,
as in a physical environment, we must use an active agent model so that we can
account for changes in the environment, as in the space-time embedded agent
[16] which also provides an agent-based intelligence measure. This objection may
be answered by the simple solution that each decision of an active intelligent
system may be considered a separate induction problem. The second objection
is that the basic Solomonoff induction can only predict the next bit, but not
the expected cumulative reward, which its extensions can solve. We counter
this objection by stating that we can reduce an agent model to a perception
and action-planning problem as in OOPS-RL [20]. In OOPS-RL, the perception
module searches for the best world-model given the history of sensory input
and actions in allotted time using OOPS, and the planning module searches
for the best control program using the world-model of the perception module
to determine the action sequence that maximizes cumulative reward likewise.
OOPS has a generalized Levin Search [12] which may be tweaked to solve either
prediction or optimization problems. Hutter has also observed that standard
sequence induction does not readily address optimization problems [8]. However,
Solomonoff induction is still complete in the sense of Turing, and can infer any
computable cpdf; and when the extension to Solomonoff induction is applied to
sequence prediction, it does not yield a better error bound, which seems like a
conundrum. On the other hand, Levin Search with a proper universal probability
density function (pdf) of programs can be modified to solve induction problems
(sequence, set, operator, and sequence prediction with arbitrary loss), inversion
problems (computer science problems in P and NP), and optimization problems
[23]. The planning module of OOPS-RL likewise requires us to write such an
optimization program. In that sense, AIXI implies yet another variation of Levin
Search for solving a particular universal optimization problem, however, it also
has the unique advantage that formal transformations between AIXI problem
and many important problems including function minimization and strategic
games have been shown [8]. Nevertheless, the discussion in [23] is rather brief.
Also see [1] for a discussion of universal optimization.
Proposition 1. A discrete-time universal RL model may be reduced to operator
induction.
More formally, the perceptual task of an RL agent would be inferring from a
history the cumulative rewards in the future, without loss of generality. Let the
chronology C be a sequence of sensory, reward, and action data C = [(s1 , r1 , a1 ),
(s2 , r2 , a2 ), . . . , (sn , rn , an )] where Ci accesses ith element, and Ci:j accesses the
subsequence [Ci , Ci+1 , . . . , Cj ]. Let rc be the cumulative reward function where
rc(C, i, j) = Σ_{k=i}^{j} rk. After observing (sn, rn, an), we construct dataset DC as
follows. For every unique (i, j) pair such that 1 < i ≤ j ≤ n, we concatenate
history tuples C1:(i−1) , and we form a question string that also includes the next
action, i and j, q = [(s1 , r1 , a1 ), (s2 , r2 , a2 ), . . . , (s(i−1) , r(i−1) , a(i−1) )], ai , i, j, and
an answer string which is the cumulative reward a = rc (C, i, j). Solving the operator induction problem for this dataset DC will yield a cpdf which predicts
cumulative rewards in the future. After that, choosing the next action is a simple
matter of maximizing r(C1:n , ai , n + 1, λ) where λ is the planning horizon. The
reduction causes quadratic blow-up in the number of data items. Our somewhat
cumbersome reduction suggests that all of the intelligence here comes from operator induction; surely an argmax function or a summation of rewards does not provide it, but rather builds constraints into the task. In other words,
we interpret that the intelligence in an agent model is provided by inductive
inference, rather than an additional application of decision theory.
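The reduction is straightforward to mechanize. The sketch below builds the dataset DC from a toy chronology (all concrete values are illustrative assumptions) and exhibits the quadratic blow-up:

C = [("s1", 1.0, "a1"), ("s2", 0.0, "a2"),
     ("s3", 2.0, "a3"), ("s4", 1.0, "a4")]

def r_c(C, i, j):                        # cumulative reward, 1-based inclusive
    return sum(r for (_, r, _) in C[i - 1:j])

n = len(C)
D_C = []
for i in range(2, n + 1):                # every (i, j) with 1 < i <= j <= n
    for j in range(i, n + 1):
        history = tuple(C[:i - 1])       # tuples up to step i - 1
        a_i = C[i - 1][2]                # the next action, taken at step i
        question = (history, a_i, i, j)
        answer = r_c(C, i, j)
        D_C.append((question, answer))

print(len(D_C), "pairs")                 # n(n-1)/2 = 6 here: quadratic blow-up
print("e.g.", D_C[0])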
4 Physical Quantification of Intelligence
Definition 1 corresponds to any kind of reinforcement-learning or goal-following
agent in AI literature quite well, and can be adapted to solve other kinds of problems. The unsupervised, active inference agent approach is proposed instead of
reinforcement learning approach in [7], and the authors argue that they did not
need to invoke the notion of reward, value or utility. The authors in particular claim that they could solve the mountain-car problem by the free-energy
formulation of perception. We thus propose a perceptual intelligence measure.
4.1 Universal measure of perception fitness
Note that operator induction is considered to be insufficient to describe universal
agents such as AIXI, because basic sequence induction is inappropriate for modelling optimization problems [8]. However, a modified Levin search procedure can
solve such optimization problems as in finding an optimal control program [20].
In OOPS-RL, the perception module searches for the best world-model given
the history of sensory input and actions in allotted time using OOPS, and the
planning module searches for the best control program using the world-model of
the perception module to determine the control program that maximizes cumulative reward likewise. In this paper, we consider the perception module of such
a generic agent which must produce a world-model, given sensory input.
We can use the intelligence measure Equation 7 in a physical theory of intelligence, however it contains terms like utility that do not have physical units
(i.e., we would be preferring a more reductive definition). We therefore attempt
to obtain such a measure using the more benign goodness-of-fit (Equation 3).
Let the universal measure of the fitness of operator induction be defined as
ΥO(π) = Σ_{µ∈S} 2^{−HU(µ)} Ψ(µ, π)    (17)
where S is the set of possible stochastic sources in the observable universe U
and π is a physical mechanism, and Ψ is relative to a stochastic source µ and a
physical mechanism (computer) π. This would be maximal if we assumed that
operator induction were solved exactly by an oracle machine.
Note that HU (µ) is finite; Ψ (µ, π) is likewise bounded by the amount of
computation π will spend on approximating operator induction.
4.2 Application to homeostasis agent
In a presentation to Friston’s group in January 2015, we noted that the minimization of SB∗ is identical to the Minimum Message Length principle, which can be further refined as

SB′ = H∗(Λ) + H∗(S|Λ)    (18)
using Solomonoff’s entropy formulation that takes the negative logarithm of algorithmic probability [22]. In the unsupervised agent context, solving this minimization problem corresponds to inferring an optimal behavioral policy as Λ
constitutes internal dynamics which may be modeled as a non-terminating program. We could directly apply induction to minimize KL divergence, as well.
Note the correspondence to operator induction.
Theorem 1. Minimizing the free energy is equivalent to solving the operator
induction problem for (λ, s) pairs where qi ∈ Λ and ai ∈ S.
Proof. Observe that minimizing Equation 16 corresponds to picking maximum
ψnj since in entropy form,
− log2(ψnj) = − log2(2^{−|Oj(·|·)|}) − log2(Π_{i=1}^{n} Oj(si|λi))
= |Oj(·|·)| − Σ_{i=1}^{n} log2(Oj(ai|qi)) = |Oj(·|·)| + H(Oj(ai|qi)).
We define a non-redundant selection of ψnj ’s, |Oj (·|·)| = HU (Oj (·|·)), e.g., we
pick only the shortest programs that produce the same cpdf, otherwise the entropy form would diverge. Minimizing Equation 18 is exactly operator induction,
even though the questions are programs; the ensemble here is of all programs and all (sensory state, program) pairs in space-time: Σ_j |Oj(·|·)| = H∗(Λ) and Σ_j H(Oj(ai|qi)) = H∗(S|Λ). Note that this merely establishes model equivalence; we have not yet explained how it is to be computed in detail.
Proposition 2. By the above theorem, Equation 17 measures the goodness of
fit for a given homeostasis agent mechanism, for all possible environments.
The mechanism π that maximizes Ψ (µ, π) achieves less error with respect to
a source (which may be taken to correspond to the whole random dynamical
system in the framework of free energy principle), while ΥO (π) normalizes Ψ (µ, π)
with respect to a random dynamical system. It holds for the same reasons Legg’s
measure holds, which are not discussed due to space limits in the present paper.
We prefer the unsupervised homeostasis agent among the two agent models we
discussed because it provides an exceptionally elegant and reductionist model of
autonomous behavior, that has been rigorously formulated physically. Note that
this agent is conceptually related to the survival property of RL agents discussed
in [19].
4.3 Discussion
The unsupervised model still achieves exploration and curiosity, because it would
stochastically sample and navigate the environment to reduce predictive errors.
While we either optimize perceptual models or choose an action that would befit
expectations, it might be possible to express the optimal adaptive agent policy in
a general optimization framework. A more in-depth analysis of the unsupervised
agent will be presented in a subsequent publication. A more general reductive
definition of intelligence should also be researched. These developments could
eventually help unify AGI theory.
References
1. Alpcan, T., Everitt, T., Hutter, M.: Can we measure the difficulty of an optimization problem? In: 2014 IEEE Information Theory Workshop, ITW 2014,
Hobart, Tasmania, Australia, November 2-5, 2014. pp. 356–360. IEEE (2014),
http://dx.doi.org/10.1109/ITW.2014.6970853
2. Ashby, W.R.: Principles of the self-organizing system. In: v. Foerster, H., Zopf,
G.W. (eds.) Principles of Self-Organization: Transactions of the University of Illinois Symposium, pp. 255–278. Pergamon, London (1962)
3. Ashby, W.: Principles of the self-organizing dynamic system. The Journal of General Psychology 37(2), 125–128 (1947)
4. Dowe, D.L., Hernández-Orallo, J., Das, P.K.: Artificial General Intelligence: 4th
International Conference, AGI 2011, Mountain View, CA, USA, August 3-6, 2011.
Proceedings, chap. Compression and Intelligence: Social Environments and Communication, pp. 204–211. Springer Berlin Heidelberg, Berlin, Heidelberg (2011),
http://dx.doi.org/10.1007/978-3-642-22887-2_21
5. Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., O'Doherty, J., Pezzulo, G.: Active inference and learning. Neuroscience and Biobehavioral Reviews 68, 862–879 (2016), http://www.sciencedirect.com/science/article/pii/S0149763416301336
6. Friston, K., Kilner, J., Harrison, L.: A free energy principle for the brain. Journal of Physiology-Paris 100(1-3), 70–87 (2006), http://www.sciencedirect.com/science/article/pii/S092842570600060X (Theoretical and Computational Neuroscience: Understanding Brain Functions)
7. Friston, K.J., Daunizeau, J., Kiebel, S.J.: Reinforcement learning or active inference? PLOS ONE 4(7), 1–13 (07 2009), https://doi.org/10.1371/journal.pone.0006421
8. Hutter, M.: Universal algorithmic intelligence: A mathematical top→down approach. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence, pp.
227–290. Cognitive Technologies, Springer, Berlin (2007)
10
Eray Özkural
9. Karl, F.: A free energy principle for biological systems. Entropy 14(11), 2100–2121
(2012), http://www.mdpi.com/1099-4300/14/11/2100
10. Legg, S., Hutter, M.: Universal intelligence: A definition of machine intelligence.
Minds Mach. 17(4), 391–444 (Dec 2007)
11. Legg, S., Veness, J.: An approximation of the universal intelligence measure. In:
Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, Lecture Notes in Computer Science, vol. 7070, pp. 236–249. Springer Berlin
Heidelberg (2013)
12. Levin, L.: Universal problems of full search. Problems of Information Transmission
9(3), 256–266 (1973)
13. Levin, L.A.: Some theorems on the algorithmic approach to probability theory and
information theory. CoRR abs/1009.5894 (2010)
14. Li, M., Vitanyi, P.M.: An Introduction to Kolmogorov Complexity and Its Applications. Springer Publishing Company, Incorporated, 3 edn. (2008)
15. Lloyd, S.: Ultimate physical limits to computation. Nature 406 (Aug 2000)
16. Orseau, L., Ring, M.: Space-time embedded intelligence. In: Bach, J., Goertzel, B., Iklé, M. (eds.) Artificial General Intelligence, Lecture Notes in
Computer Science, vol. 7716, pp. 209–218. Springer Berlin Heidelberg (2012),
http://dx.doi.org/10.1007/978-3-642-35506-6_22
17. Özkural, E.: Ultimate Intelligence Part II: Physical Measure and Complexity of
Intelligence. ArXiv e-prints (Apr 2015)
18. Özkural, E.: Ultimate intelligence part I: Physical completeness and objectivity of induction. In: Artificial General Intelligence - 8th International Conference, AGI 2015, Berlin, Germany, July 22-25, 2015, Proceedings. pp. 131–141 (2015), http://dx.doi.org/10.1007/978-3-319-21365-1_14
19. Ring, M., Orseau, L.: Delusion, survival, and intelligent agents. In: Artificial General Intelligence, pp. 11–20. Springer Berlin Heidelberg (2011)
20. Schmidhuber, J.: Optimal ordered problem solver. Machine Learning 54, 211–256
(2004)
21. Solomonoff, R.J.: A formal theory of inductive inference, part i. Information and
Control 7(1), 1–22 (March 1964)
22. Solomonoff, R.J.: Complexity-based induction systems: Comparisons and convergence theorems. IEEE Trans. on Information Theory IT-24(4), 422–432 (July 1978)
23. Solomonoff, R.J.: Progress in incremental machine learning. Tech. Rep. IDSIA-1603, IDSIA, Lugano, Switzerland (2003)
24. Solomonoff, R.J.: Three kinds of probabilistic induction: Universal distributions
and convergence theorems. The Computer Journal 51(5), 566–570 (2008)
25. Tishby, N., Pereira, F.C., Bialek, W.: The information bottleneck method. ArXiv
Physics e-prints (Apr 2000)
26. Wallace, C.S., Dowe, D.L.: Minimum message length and Kolmogorov complexity. The Computer Journal 42(4), 270–283 (1999), http://comjnl.oxfordjournals.org/content/42/4/270.abstract
27. Wallace, C.S., Boulton, D.M.: An information measure for classification. Computer Journal 11(2), 185–194 (1968)
arXiv:1404.1957v3 [math.PR] 29 Oct 2015
The Annals of Applied Probability
2015, Vol. 25, No. 6, 3511–3570
DOI: 10.1214/14-AAP1081
© Institute of Mathematical Statistics, 2015
ERGODIC CONTROL OF MULTI-CLASS M/M/N + M QUEUES
IN THE HALFIN–WHITT REGIME
By Ari Arapostathis∗,1, Anup Biswas∗,2 and Guodong Pang†,3
The University of Texas at Austin∗ and Pennsylvania State University†
We study a dynamic scheduling problem for a multi-class queueing network with a large pool of statistically identical servers. The
arrival processes are Poisson, and service times and patience times
are assumed to be exponentially distributed and class dependent. The
optimization criterion is the expected long time average (ergodic) of
a general (nonlinear) running cost function of the queue lengths. We
consider this control problem in the Halfin–Whitt (QED) regime,
that is, the number of servers n and the total offered load r scale like n ≈ r + ρ̂√r for some constant ρ̂. This problem was proposed in
[Ann. Appl. Probab. 14 (2004) 1084–1134, Section 5.2].
The optimal solution of this control problem can be approximated
by that of the corresponding ergodic diffusion control problem in the
limit. We introduce a broad class of ergodic control problems for controlled diffusions, which includes a large class of queueing models in
the diffusion approximation, and establish a complete characterization of optimality via the study of the associated HJB equation. We
also prove the asymptotic convergence of the values for the multi-class
queueing control problem to the value of the associated ergodic diffusion control problem. The proof relies on an approximation method
by spatial truncation for the ergodic control of diffusion processes,
where the Markov policies follow a fixed priority policy outside a
fixed compact set.
Received April 2014; revised November 2014.
1 Supported in part by the Office of Naval Research Grant N00014-14-1-0196.
2 Supported in part by an award from the Simons Foundation (#197982) to The University of Texas at Austin and in part by the Office of Naval Research through the Electric Ship Research and Development Consortium.
3 Supported in part by the Marcus Endowment Grant at the Harold and Inge Marcus Department of Industrial and Manufacturing Engineering at Penn State.
AMS 2000 subject classifications. Primary 60K25; secondary 68M20, 90B22, 90B36.
Key words and phrases. Multi-class Markovian queues, reneging/abandonment,
Halfin–Whitt (QED) regime, diffusion scaling, long time-average control, ergodic control,
stable Markov optimal control, spatial truncation, asymptotic optimality.
This is an electronic reprint of the original article published by the
Institute of Mathematical Statistics in The Annals of Applied Probability,
2015, Vol. 25, No. 6, 3511–3570. This reprint differs from the original in
pagination and typographic detail.
CONTENTS
1. Introduction
   1.1. Contributions and comparisons
   1.2. Organization
   1.3. Notation
2. The controlled system in the Halfin–Whitt regime
   2.1. The multi-class Markovian many-server model
   2.2. The ergodic control problem in the Halfin–Whitt regime
3. A broad class of ergodic control problems for diffusions
   3.1. The controlled diffusion model
   3.2. Structural assumptions
   3.3. Piecewise linear controlled diffusions
   3.4. Existence of optimal controls
   3.5. The HJB equation
   3.6. Technical proofs
4. Approximation via spatial truncations
5. Asymptotic convergence
   5.1. The lower bound
   5.2. The upper bound
6. Conclusion
Acknowledgements
References
1. Introduction. One of the classical problems in queueing theory is to schedule the customers/jobs in a network in an optimal way. Such scheduling problems arise in a wide variety of applications, in particular whenever different customer classes are present in the network and compete for the same resources. The optimal scheduling problem has a long history in the literature. One of the appealing scheduling rules is the well-known cµ rule. This is a static priority policy in which each class-i customer is assumed to have a marginal delay cost ci and an average service time 1/µi, and the classes are prioritized in decreasing order of ci µi. This static priority rule has been proven asymptotically optimal in many settings [4, 28, 32]. In [11], a single-server Markov-modulated queueing network is considered and an averaged cµ rule is shown to be asymptotically optimal for the discounted control problem.
An important aspect of queueing networks is abandonment/reneging, that
is, customers/jobs may choose to leave the system while being in the queue
before their service. Therefore, it is important to include customer abandonment in modeling queueing systems. In [5, 6], Atar et al. considered a
multi-class M/M/N + M queueing network with customer abandonment and proved that a modified priority policy, referred to as the cµ/θ rule, is asymptotically optimal for the long-run average cost in the fluid scale. Dai and Tezcan [13] showed the asymptotic optimality of a static priority policy on a finite time interval for a parallel server model under assumed conditions on the ordering of the abandonment rates and running costs. Although static priority policies are easy to implement, they may not be optimal for control problems of many multi-server queueing systems. For the same multi-class
M/M/N + M queueing network, discounted cost control problems are studied in [3, 7, 22], and asymptotically optimal controls for these problems are
constructed from the minimizer of a Hamilton–Jacobi–Bellman (HJB) equation associated with the controlled diffusions in the Halfin–Whitt regime.
In this article, we are interested in an ergodic control problem for a
multi-class M/M/N + M queueing network in the Halfin–Whitt regime.
The network consists of a single pool of n statistically identical servers and
a buffer of infinite capacity. There are d customer classes and arrivals of
jobs/customers are d independent Poisson processes with parameters λni ,
i = 1, . . . , d. The service rate for class-i customers is µni , i = 1, . . . , d. Customers may renege from the queue if they have not started to receive service
before their patience times expire. Class-i customers renege from the queue at rates
γin > 0, i = 1, . . . , d. The scheduling policies are work-conserving, that is, no
server stays idle if any of the queues is nonempty. We assume the system
operates in the Halfin–Whitt regime, where the arrival rates and the number
of servers are scaled appropriately in a manner that the traffic intensity of
the system satisfies
\[
\sqrt{n}\,\Big(1 - \sum_{i=1}^{d} \frac{\lambda_i^n}{n\,\mu_i^n}\Big) \;\xrightarrow[n\to\infty]{}\; \hat\rho \in \mathbb{R}.
\]
In this regime, the system operations achieve both high quality (high service levels) and high efficiency (high server utilization), and hence it is also
referred to as the Quality-and-Efficiency-Driven (QED) regime; see, for example, [7, 16, 17, 19, 21] on the many-server regimes. We consider an ergodic
cost function given by
\[
\limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big[\int_0^T r\big(\hat Q^n(s)\big)\, ds\Big],
\]
where the running cost r is a nonnegative, convex function with polynomial
growth and Q̂n = (Q̂n1 , . . . , Q̂nd )T is the diffusion-scaled queue length process.
It is worth mentioning that, in addition to the running cost above, which is based on the queue length, we can add an idle-server cost provided that it has at most polynomial growth. For such a running cost structure, the same analysis goes through. The control is the allocation of servers to different
classes of customers at the service completion times. The value function is
defined to be the infimum of the above cost over all admissible controls
(among all work-conserving scheduling policies). In this article, we are interested in the existence and uniqueness of asymptotically optimal stable
stationary Markov controls for the ergodic control problem, and the asymptotic behavior of the value functions as n tends to infinity. In [7], Section 5.2,
it is stated that the analysis of this type of problem is important for modeling
call centers.
1.1. Contributions and comparisons. The usual methodology for studying these problems is to consider the associated continuum model, which
is the controlled diffusion limit in a heavy-traffic regime, and to study the
ergodic control problem for the controlled diffusion. Ergodic control problems governed by controlled diffusions have been well studied in the literature
[1, 9] for models that fall in these two categories: (a) the running cost is
near-monotone, which is defined by the requirement that its value outside a
compact set exceeds the optimal average cost, thus penalizing unstable behavior (see Assumption 3.4.2 in [1] for details), or (b) the controlled diffusion
is uniformly stable, that is, every stationary Markov control is stable and the
collection of invariant probability measures corresponding to the stationary
Markov controls is tight. However, the ergodic control problem at hand does
not fall under any of these frameworks. First, the running cost we consider
here is not near-monotone because the total queue length can be 0 when
the total number of customers in the system are O(n). On the other hand,
it is not at all clear that the controlled diffusion is uniformly stable (unless
one imposes nontrivial hypotheses on the parameters), and this remains an
open problem. One of our main contributions in this article is that we solve
the ergodic control problem for a broad class of nondegenerate controlled
diffusions, that in a certain way can be viewed as a mixture of the two categories mentioned above. As we show in Section 3, stability of the diffusion
under any optimal stationary Markov control occurs due to certain interplay
between the drift and the running cost. The model studied in Section 3 is
far more general than the queueing problem described, and thus it is of separate interest for ergodic control. We present a comprehensive study of this
broad class of ergodic control problems that includes existence of a solution
to the ergodic HJB equation, its stochastic representation and verification
of optimality (Theorem 3.4), uniqueness of the solution in a certain class
(Theorem 3.5), and convergence of the vanishing discount method (Theorem 3.6). These results extend the well-known results for near-monotone
running costs. The assumptions in these theorems are verified for the multiclass queueing model and the corresponding characterization of optimality
is obtained (Corollary 3.1), which includes growth estimates for the solution
of the HJB.
We also introduce a new approximation technique, spatial truncation, for
the controlled diffusion processes; see Section 4. It is shown that if we freeze
the Markov controls to a fixed stable Markov control outside a compact
set, then we can still obtain nearly optimal controls in this class of Markov
controls for large compact sets. We should keep in mind that this property is
not true in general. This method can also be thought of as an approximation
by a class of controlled diffusions that are uniformly stable.
We remark that for a fixed control, the controlled diffusions for the queueing model can be regarded as a special case of the piecewise linear diffusions
considered in [14]. It is shown in [14] that these diffusions are stable under
constant Markov controls. The proof is via a suitable Lyapunov function.
We conjecture that uniform stability holds for the controlled diffusions associated with the queueing model. For the same multi-class Markovian model,
Gamarnik and Stolyar show that the stationary distributions of the queue
lengths are tight under any work-conserving policy [15], Theorem 2. We also
wish to remark here that we allow ρ̂ to be negative, assuming abandonment
rates are strictly positive, while in [15], ρ̂ > 0 and abandonment rates can
be zero.
Another important contribution of this work is the convergence of the
value functions associated with the sequence of multi-class queueing models to the value of the ergodic control problem, say ̺∗ , corresponding to the
controlled diffusion model. It is not obvious that one can have asymptotic optimality from the existence of optimal stable controls for the HJB equations
of controlled diffusions. This fact is relatively straightforward when the cost
under consideration is discounted. In that situation, the tightness of paths
on a finite time horizon is sufficient to prove asymptotic optimality [7]. But
we are in a situation where any finite time behavior of the stochastic process
plays no role in the cost. In particular, we need to establish the convergence
of the controlled steady states. Although uniform stability of stationary distributions for this multi-class queueing model in the case where ρ̂ > 0 and
abandonment rates can be zero is established in [15], it is not obvious that
the stochastic model considered here has the property of uniform stability.
Therefore, we use a different method to establish the asymptotic optimality.
First, we show that the value functions are asymptotically bounded below by
̺∗ . To study the upper bound, we construct a sequence of Markov scheduling policies that are uniformly stable (see Lemma 5.1). The key idea used
in establishing such stability results is a spatial truncation technique, under which the Markov policies follow a fixed priority policy outside a given
compact set. We believe these techniques can also be used to study ergodic
control problems for other many-server queueing models.
The scheduling policies we consider in this paper allow preemption, that
is, a customer in service can be interrupted for the server to serve a customer
of a different class and her service will be resumed later. In fact, the asymptotic optimality is shown within the class of the work-conserving preemptive
policies. In [7], both preemptive and nonpreemptive policies are studied,
where a nonpreemptive scheduling control policy is constructed from the
HJB equation associated with preemptive policies and thus is shown to be
asymptotically optimal. However, as far as we know, the optimal nonpreemptive scheduling problem under the ergodic cost remains open.
For a similar line of work in uncontrolled settings, we refer the reader
to [16, 19]. Admission control of the single class M/M/N + M model with
an ergodic cost criterion in the Halfin–Whitt regime is studied in [26]. For
controlled problems and for finite server models, asymptotic optimality is
obtained in [12] in the conventional heavy-traffic regime. The main advantage in [12] is the uniform exponential stability of the stochastic processes,
which is obtained by using properties of the Skorohod reflection map. A
recent work studying ergodic control of a multi-class single-server queueing
network is [25].
To summarize our main contributions in this paper:
– We introduce a new class of ergodic control problems and a framework to
solve them.
– We establish an approximation technique by spatial truncation.
– We provide, to the best of our knowledge, the first treatment of ergodic
control problems at the diffusion scale for many server models.
– We establish asymptotic optimality results.
1.2. Organization. In Section 1.3, we summarize the notation used in
the paper. In Section 2, we introduce the multi-class many server queueing
model and describe the Halfin–Whitt regime. The ergodic control problem
under the heavy-traffic setting is introduced in Section 2.2, and the main
results on asymptotic convergence are stated as Theorems 2.1 and 2.2. Section 3 introduces a class of controlled diffusions and associated ergodic control problems, which contains the queueing models in the diffusion scale.
The key structural assumptions are in Section 3.2 and these are verified for
a generic class of queueing models in Section 3.3, which are characterized
by piecewise linear controlled diffusions. Section 3.4 concerns the existence
of optimal controls under the general hypotheses, while Section 3.5 contains
a comprehensive study of the HJB equation. Section 3.6 is devoted to the
proofs of the results in Section 3.5. The spatial truncation technique is introduced and studied in Section 4. Finally, in Section 5 we prove the results
of asymptotic optimality.
1.3. Notation. The standard Euclidean norm in Rd is denoted by |·|. The
set of nonnegative real numbers is denoted by R+ , N stands for the set of
natural numbers, and I denotes the indicator function. By Zd+ we denote the
set of d-vectors of nonnegative integers. The closure, the boundary and the
complement of a set A ⊂ Rd are denoted by A, ∂A and Ac , respectively. The
open ball of radius R around 0 is denoted by BR . Given two real numbers
a and b, the minimum (maximum) is denoted by a ∧ b (a ∨ b), respectively.
Define a+ := a ∨ 0 and a− := −(a ∧ 0). The integer part of a real number
a is denoted by ⌊a⌋. We use the notation ei , i = 1, . . . , d, to denote the
vector with ith entry equal to 1 and all other entries equal to 0. We also let
e := (1, . . . , 1)T . Given any two vectors x, y ∈ Rd the inner product is denoted
by x · y. By δx we denote the Dirac mass at x. For any function f : Rd → R
and domain D ⊂ R^d we define the oscillation of f on D as follows:
\[
\operatorname{osc}_D(f) := \sup\{f(x) - f(y) : x, y \in D\}.
\]
For a nonnegative function g ∈ C(R^d), we let O(g) denote the space of functions f ∈ C(R^d) satisfying \sup_{x\in\mathbb{R}^d} |f(x)|/(1 + g(x)) < ∞. This is a Banach space under the norm
\[
\|f\|_g := \sup_{x\in\mathbb{R}^d} \frac{|f(x)|}{1 + g(x)}.
\]
We also let o(g) denote the subspace of O(g) consisting of those functions f satisfying
\[
\limsup_{|x|\to\infty} \frac{|f(x)|}{1 + g(x)} = 0.
\]
By a slight abuse of notation, we also denote by O(g) and o(g) a generic
member of these spaces. For two nonnegative functions f and g, we use the
notation f ∼ g to indicate that f ∈ O(g) and g ∈ O(f ).
We denote by L^p_{loc}(R^d), p ≥ 1, the set of real-valued functions that are locally p-integrable, and by W^{k,p}_{loc}(R^d) the set of functions in L^p_{loc}(R^d) whose ith weak derivatives, i = 1, . . . , k, are in L^p_{loc}(R^d). The set of all bounded continuous functions is denoted by C_b(R^d). By C^{k,α}_{loc}(R^d) we denote the set of functions that are k times continuously differentiable and whose kth derivatives are locally Hölder continuous with exponent α. We define C^k_b(R^d), k ≥ 0, as the set of functions whose ith derivatives, i = 1, . . . , k, are continuous and bounded in R^d, and denote by C^k_c(R^d) the subset of C^k_b(R^d) with compact
support. For any path X(·), we use the notation ∆X(t) to denote the jump
at time t. Given any Polish space X , we denote by P(X ) the set of probability measures on X and we endow P(X ) with the Prokhorov metric. For
ν ∈ P(X ) and a Borel measurable map f : X → R, we often use the abbreviated notation
\[
\nu(f) := \int_{\mathcal{X}} f\, d\nu.
\]
The quadratic variation of a square integrable martingale is denoted by
h·, ·i and the optional quadratic variation by [·, ·]. For presentation purposes
we use the time variable as the subscript for the diffusion processes. Also
κ1 , κ2 , . . . and C1 , C2 , . . . are used as generic constants whose values might
vary from place to place.
Fig. 1. A schematic model of the system.
2. The controlled system in the Halfin–Whitt regime.
2.1. The multi-class Markovian many-server model. Let (Ω, F, P) be a
given complete probability space and all the stochastic variables introduced
below are defined on it. The expectation w.r.t. P is denoted by E. We consider a multi-class Markovian many-server queueing system which consists
of d customer classes and n parallel servers capable of serving all customers
(see Figure 1).
The system buffer is assumed to have infinite capacity. Customers of class
i ∈ {1, . . . , d} arrive according to a Poisson process with rate λni > 0. Customers enter the queue of their respective classes upon arrival if not being
processed. Customers of each class are served in the first-come-first-serve
(FCFS) service discipline. While waiting in queue, customers can abandon
the system. The service times and patience times of customers are class-dependent and both are assumed to be exponentially distributed, that is,
class i customers are served at rate µni and renege at rate γin . We assume
that customer arrivals, service and abandonment of all classes are mutually
independent.
The Halfin–Whitt regime. We study this queueing model in the Halfin–
Whitt regime [or the Quality-and-Efficiency-Driven (QED) regime]. Consider a sequence of such systems indexed by n, in which the arrival rates λni
and the number of servers n both increase appropriately. Let rni := λni /µni
be the mean offered load of class i customers. The traffic intensity of the nth system is given by \rho^n = n^{-1}\sum_{i=1}^{d} r_i^n. In the Halfin–Whitt regime, the
parameters are assumed to satisfy the following: as n → ∞,
\[
(2.1)\quad \frac{\lambda_i^n}{n} \to \lambda_i > 0, \qquad \mu_i^n \to \mu_i > 0, \qquad \gamma_i^n \to \gamma_i > 0,
\]
\[
\frac{\lambda_i^n - n\lambda_i}{\sqrt{n}} \to \hat\lambda_i, \qquad \sqrt{n}\,(\mu_i^n - \mu_i) \to \hat\mu_i,
\]
\[
\frac{r_i^n}{n} \to \rho_i := \frac{\lambda_i}{\mu_i} < 1, \qquad \sum_{i=1}^{d} \rho_i = 1.
\]
This implies that
\[
\sqrt{n}\,(1 - \rho^n) \to \hat\rho := \sum_{i=1}^{d} \frac{\rho_i \hat\mu_i - \hat\lambda_i}{\mu_i} \in \mathbb{R}.
\]
The above scaling is common in multi-class multi-server models [7, 22]. Note
that we do not make any assumption on the sign of ρ̂.
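As a quick numerical illustration of this scaling (a sketch with made-up limit parameters, not values from the paper), one can construct λ_i^n = nλ_i + √n λ̂_i and µ_i^n = µ_i + µ̂_i/√n and observe √n(1 − ρ^n) approaching ρ̂:

import math

# Illustrative limit parameters; rho_i = lambda_i / mu_i must sum to 1.
lam, mu = [0.5, 0.5], [1.0, 1.0]
lam_hat, mu_hat = [0.3, -0.1], [0.2, 0.4]

rho = [l / m for l, m in zip(lam, mu)]
rho_hat = sum((r * mh - lh) / m for r, mh, lh, m in zip(rho, mu_hat, lam_hat, mu))

for n in (10**2, 10**4, 10**6):
    lam_n = [n * l + math.sqrt(n) * lh for l, lh in zip(lam, lam_hat)]
    mu_n = [m + mh / math.sqrt(n) for m, mh in zip(mu, mu_hat)]
    rho_n = sum(ln / (n * mn) for ln, mn in zip(lam_n, mu_n))
    print(n, math.sqrt(n) * (1 - rho_n))   # tends to rho_hat = 0.1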
State descriptors. Let Xin = {Xin (t) : t ≥ 0} be the total number of class i
customers in the system, Qni = {Qni (t) : t ≥ 0} the number of class i customers
in the queue and Zin = {Zin (t) : t ≥ 0} the number of class i customers in
service. The following basic relationships hold for these processes: for each
t ≥ 0 and i = 1, . . . , d,
\[
(2.2)\quad X_i^n(t) = Q_i^n(t) + Z_i^n(t), \qquad Q_i^n(t) \ge 0, \qquad Z_i^n(t) \ge 0 \quad\text{and}\quad e \cdot Z^n(t) \le n.
\]
We can describe these processes using a collection {Ani , Sin , Rin , i = 1, . . . , d}
of independent rate-1 Poisson processes. Define
\[
\tilde A_i^n(t) := A_i^n(\lambda_i^n t), \qquad
\tilde S_i^n(t) := S_i^n\Big(\mu_i^n \int_0^t Z_i^n(s)\, ds\Big), \qquad
\tilde R_i^n(t) := R_i^n\Big(\gamma_i^n \int_0^t Q_i^n(s)\, ds\Big).
\]
Then the dynamics take the form
\[
(2.3)\quad X_i^n(t) = X_i^n(0) + \tilde A_i^n(t) - \tilde S_i^n(t) - \tilde R_i^n(t), \qquad t \ge 0,\ i = 1, \dots, d.
\]
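The representation (2.3) also suggests a direct way to simulate the nth system. The sketch below is an exact event-driven (Gillespie-type) simulation in Python with instantaneous rates λ_i^n, µ_i^n Z_i^n(t) and γ_i^n Q_i^n(t); the static priority policy and all parameter values are illustrative assumptions, not prescriptions from the paper.

import random

def simulate(lam, mu, gamma, n, T, order, seed=0):
    # Exact simulation of the multi-class M/M/n+M system under a static
    # priority, work-conserving, preemptive policy given by `order`.
    rng = random.Random(seed)
    d = len(lam)
    x = [0] * d                  # X_i: class-i customers in the system
    t = 0.0
    while t < T:
        z, free = [0] * d, n     # work-conserving server allocation
        for i in order:
            z[i] = min(x[i], free)
            free -= z[i]
        q = [xi - zi for xi, zi in zip(x, z)]
        rates = ([lam[i] for i in range(d)]             # arrivals
                 + [mu[i] * z[i] for i in range(d)]     # service completions
                 + [gamma[i] * q[i] for i in range(d)]) # abandonments
        total = sum(rates)
        t += rng.expovariate(total)
        r, acc = rng.random() * total, 0.0
        for k, rate in enumerate(rates):                # pick event by rate
            acc += rate
            if r <= acc:
                break
        x[k % d] += 1 if k < d else -1
    return x

print(simulate(lam=[45, 45], mu=[1, 1], gamma=[0.5, 0.5], n=100, T=50.0, order=[0, 1]))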
Scheduling control. Following [7, 22], we only consider work-conserving
policies that are nonanticipative and allow preemption. When a server becomes free and there are no customers waiting in any queue, the server stays
idle, but if there are customers of multiple classes waiting in the queue, the
server has to make a decision on the customer class to serve. Service preemption is allowed, that is, service of a customer class can be interrupted at
any time to serve some other class of customers and the original service is resumed at a later time. A scheduling control policy determines the processes
Z n , which must satisfy the constraints in (2.2) and the work-conserving
constraint, that is,
\[
e \cdot Z^n(t) = (e \cdot X^n(t)) \wedge n, \qquad t \ge 0.
\]
Define the action set An (x) as
\[
A^n(x) := \{a \in \mathbb{Z}_+^d : a \le x \ \text{and}\ e \cdot a = (e \cdot x) \wedge n\}.
\]
Thus, we can write Z n (t) ∈ An (X n (t)) for each t ≥ 0. We also assume that
all controls are nonanticipative. Define the σ-fields
\[
\mathcal{F}_t^n := \sigma\{X^n(0), \tilde A_i^n(s), \tilde S_i^n(s), \tilde R_i^n(s) : i = 1, \dots, d,\ 0 \le s \le t\} \vee \mathcal{N}
\]
and
\[
\mathcal{G}_t^n := \sigma\{\delta\tilde A_i^n(t, r), \delta\tilde S_i^n(t, r), \delta\tilde R_i^n(t, r) : i = 1, \dots, d,\ r \ge 0\},
\]
where
\[
\delta\tilde A_i^n(t, r) := \tilde A_i^n(t + r) - \tilde A_i^n(t),
\]
\[
\delta\tilde S_i^n(t, r) := S_i^n\Big(\mu_i^n \int_0^t Z_i^n(s)\, ds + \mu_i^n r\Big) - \tilde S_i^n(t),
\]
\[
\delta\tilde R_i^n(t, r) := R_i^n\Big(\gamma_i^n \int_0^t Q_i^n(s)\, ds + \gamma_i^n r\Big) - \tilde R_i^n(t),
\]
and N is the collection of all P-null sets. The filtration {Ftn , t ≥ 0} represents
the information available up to time t while Gtn contains the information
about future increments of the processes.
We say that a work-conserving control policy is admissible if:
(i) Z n (t) is adapted to Ftn ,
(ii) Ftn is independent of Gtn at each time t ≥ 0,
(iii) for each i = 1, . . . , d, and t ≥ 0, the process δS̃in (t, ·) agrees in law
with Sin (µni ·), and the process δR̃in (t, ·) agrees in law with Rin (γin ·).
We denote the set of all admissible control policies (Z n , F n , G n ) by Un .
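For small instances, the action set A^n(x) can be enumerated directly, which gives a convenient sanity check on the work-conserving constraint; the following Python sketch is purely illustrative.

from itertools import product

def action_set(x, n):
    # All a in Z_+^d with a <= x componentwise and e.a = min(e.x, n).
    target = min(sum(x), n)
    return [a for a in product(*(range(xi + 1) for xi in x))
            if sum(a) == target]

print(action_set(x=(2, 1, 0), n=2))  # [(1, 1, 0), (2, 0, 0)]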
2.2. The ergodic control problem in the Halfin–Whitt regime. Define the
diffusion-scaled processes
X̂ n = (X̂1n , . . . , X̂dn )T ,
Q̂n = (Q̂n1 , . . . , Q̂nd )T
and
Ẑ n = (Ẑ1n , . . . , Ẑdn )T ,
by
\[
(2.4)\quad \hat X_i^n(t) := \frac{1}{\sqrt{n}}\big(X_i^n(t) - \rho_i n\big), \qquad
\hat Q_i^n(t) := \frac{1}{\sqrt{n}}\, Q_i^n(t), \qquad
\hat Z_i^n(t) := \frac{1}{\sqrt{n}}\big(Z_i^n(t) - \rho_i n\big)
\]
for t ≥ 0.
for t ≥ 0. By (2.3), we can express X̂in as
\[
(2.5)\quad \hat X_i^n(t) = \hat X_i^n(0) + \ell_i^n t - \mu_i^n \int_0^t \hat Z_i^n(s)\, ds - \gamma_i^n \int_0^t \hat Q_i^n(s)\, ds + \hat M_{A,i}^n(t) - \hat M_{S,i}^n(t) - \hat M_{R,i}^n(t),
\]
where ℓn = (ℓn1 , . . . , ℓnd )T is defined as
\[
\ell_i^n := \frac{1}{\sqrt{n}}\big(\lambda_i^n - \mu_i^n \rho_i n\big),
\]
and
\[
(2.6)\quad \hat M_{A,i}^n(t) := \frac{1}{\sqrt{n}}\big(A_i^n(\lambda_i^n t) - \lambda_i^n t\big),
\]
\[
\hat M_{S,i}^n(t) := \frac{1}{\sqrt{n}}\bigg(S_i^n\Big(\mu_i^n \int_0^t Z_i^n(s)\, ds\Big) - \mu_i^n \int_0^t Z_i^n(s)\, ds\bigg),
\]
\[
\hat M_{R,i}^n(t) := \frac{1}{\sqrt{n}}\bigg(R_i^n\Big(\gamma_i^n \int_0^t Q_i^n(s)\, ds\Big) - \gamma_i^n \int_0^t Q_i^n(s)\, ds\bigg)
\]
are square integrable martingales w.r.t. the filtration \{\mathcal{F}_t^n\}.
Note that
\[
\ell_i^n = \frac{1}{\sqrt{n}}\big(\lambda_i^n - \lambda_i n\big) - \rho_i \sqrt{n}\,(\mu_i^n - \mu_i) \;\xrightarrow[n\to\infty]{}\; \ell_i := \hat\lambda_i - \rho_i \hat\mu_i.
\]
Define
\[
S := \{u \in \mathbb{R}_+^d : e \cdot u = 1\}.
\]
For Z n ∈ Un we define, for t ≥ 0 and for adapted Û n (t) ∈ S,
\[
(2.7)\quad \hat Q^n(t) := (e \cdot \hat X^n(t))^+ \hat U^n(t), \qquad
\hat Z^n(t) := \hat X^n(t) - (e \cdot \hat X^n(t))^+ \hat U^n(t).
\]
If Q̂n (t) = 0, we define Û n (t) := ed = (0, . . . , 0, 1)T . Thus, Ûin represents the
fraction of class-i customers in the queue when the total queue size is positive. As we show later, it is convenient to view Û n (t) as the control. Note
that the controls are nonanticipative and preemption is allowed.
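In other words, given the scaled state and the control, the queue and service vectors are recovered deterministically from (2.7); a small illustrative Python sketch:

import numpy as np

def queue_and_service(x_hat, u_hat):
    # Split the scaled state per (2.7): q = (e.x)^+ u and z = x - q.
    x_hat = np.asarray(x_hat, dtype=float)
    total_queue = max(x_hat.sum(), 0.0)       # (e . x)^+
    q_hat = total_queue * np.asarray(u_hat, dtype=float)
    return q_hat, x_hat - q_hat

q, z = queue_and_service(x_hat=[1.5, -0.5], u_hat=[0.25, 0.75])
print(q, z)  # q = [0.25, 0.75], z = [1.25, -1.25]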
2.2.1. The cost minimization problem. We next introduce the running
cost function for the control problem. Let r : Rd+ → R+ be a given function
satisfying
\[
(2.8)\quad c_1 |x|^m \le r(x) \le c_2 \big(1 + |x|^m\big) \qquad \text{for some } m \ge 1,
\]
and some positive constants ci , i = 1, 2. We also assume that r is locally Lipschitz. This assumption includes linear and convex running cost functions.
For example, if we let hi be the holding cost rate for class i customers, then
some of the typical running cost functions are the following:
\[
r(x) = \sum_{i=1}^{d} h_i x_i^m, \qquad m \ge 1.
\]
These running cost functions evidently satisfy the condition in (2.8).
Given the initial state X n (0) and a work-conserving scheduling policy
Z^n ∈ \mathcal{U}^n, we define the diffusion-scaled cost function as
\[
(2.9)\quad J\big(\hat X^n(0), \hat Z^n\big) := \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big[\int_0^T r\big(\hat Q^n(s)\big)\, ds\Big],
\]
where the running cost function r satisfies (2.8). Note that the running
cost is defined using the scaled version of Z n . Then the associated cost
minimization problem becomes
\[
(2.10)\quad \hat V^n\big(\hat X^n(0)\big) := \inf_{Z^n \in \mathcal{U}^n} J\big(\hat X^n(0), \hat Z^n\big).
\]
We refer to V̂ n (X̂ n (0)) as the diffusion-scaled value function given the
initial state X̂ n (0) in the nth system.
From (2.7), it is easy to see that by redefining r as r(x, u) = r((e · x)+ u)
we can rewrite the control problem as
\[
\hat V^n\big(\hat X^n(0)\big) = \inf \tilde J\big(\hat X^n(0), \hat U^n\big),
\]
where
\[
(2.11)\quad \tilde J\big(\hat X^n(0), \hat U^n\big) := \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big[\int_0^T r\big(\hat X^n(s), \hat U^n(s)\big)\, ds\Big],
\]
and the infimum is taken over all admissible pairs (X̂ n , Û n ) satisfying (2.7).
For simplicity, we assume that the initial condition X̂ n (0) is deterministic
and X̂ n (0) → x as n → ∞ for some x ∈ Rd .
2.2.2. The limiting controlled diffusion process. As in [7, 22], one formally deduces that, provided X̂ n (0) → x, there exists a limit X for X̂ n on
every finite time interval, and the limit process X is a d-dimensional diffusion
process with independent components, that is,
\[
(2.12)\quad dX_t = b(X_t, U_t)\, dt + \Sigma\, dW_t,
\]
with initial condition X_0 = x. In (2.12), the drift b(x, u) : \mathbb{R}^d \times S \to \mathbb{R}^d takes the form
\[
(2.13)\quad b(x, u) = \ell - R\big(x - (e \cdot x)^+ u\big) - (e \cdot x)^+ \Gamma u,
\]
with
\[
\ell := (\ell_1, \dots, \ell_d)^T, \qquad R := \operatorname{diag}(\mu_1, \dots, \mu_d), \qquad \Gamma := \operatorname{diag}(\gamma_1, \dots, \gamma_d).
\]
The control Ut lives in S and is nonanticipative, W (t) is a d-dimensional
standard Wiener process independent of the initial condition X0 = x, and
the covariance matrix is given by
\[
\Sigma\Sigma^T = \operatorname{diag}(2\lambda_1, \dots, 2\lambda_d).
\]
A formal derivation of the drift in (2.13) can be obtained from (2.5) and
(2.7). A detailed description of equation (2.12) and related results are given
in Section 3. Let U be the set of all admissible controls for the diffusion
model (for a definition see Section 3).
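For intuition, the limiting dynamics (2.12)-(2.13) are easy to sample with an Euler–Maruyama discretization; the sketch below uses a fixed constant control and made-up parameter values, and is not part of the paper's analysis.

import numpy as np

rng = np.random.default_rng(1)
ell = np.array([-0.2, 0.1])
R = np.diag([1.0, 2.0])                       # service rates mu_i
Gamma = np.diag([0.5, 0.4])                   # abandonment rates gamma_i
Sigma = np.diag(np.sqrt([2 * 0.5, 2 * 0.5]))  # Sigma Sigma^T = diag(2 lambda_i)
u = np.array([0.0, 1.0])                      # constant control e_d in S

def drift(x):
    qplus = max(x.sum(), 0.0)                 # (e . x)^+
    return ell - R @ (x - qplus * u) - qplus * (Gamma @ u)

dt, x = 1e-3, np.zeros(2)
for _ in range(200_000):
    x = x + drift(x) * dt + Sigma @ rng.normal(size=2) * np.sqrt(dt)
print(x)                                      # one sample of X_t at t = 200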
2.2.3. The ergodic control problem in the diffusion scale. Define r̃ : Rd+ ×
Rd+ → R+ by
\[
\tilde r(x, u) := r\big((e \cdot x)^+ u\big),
\]
where r is the same function as in (2.9). In analogy with (2.11) we define
the ergodic cost associated with the controlled diffusion process X and the
running cost function r̃(x, u) as
\[
J(x, U) := \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^T \tilde r(X_t, U_t)\, dt\Big], \qquad U \in \mathfrak{U}.
\]
We consider the ergodic control problem
\[
(2.14)\quad \varrho^*(x) = \inf_{U\in\mathfrak{U}} J(x, U).
\]
We call ̺∗ (x) the optimal value at the initial state x for the controlled
diffusion process X. It is shown later that ̺∗ (x) is independent of x. A
detailed treatment and related results corresponding to the ergodic control
problem are given in Section 3.
We next state the main results of this section, the proof of which can be
found in Section 5.
Theorem 2.1. Let X̂ n (0) → x ∈ Rd as n → ∞. Also assume that (2.1)
and (2.8) hold. Then
\[
\liminf_{n\to\infty} \hat V^n\big(\hat X^n(0)\big) \ge \varrho^*(x),
\]
where ̺∗ (x) is given by (2.14).
Theorem 2.2. Suppose the assumptions of Theorem 2.1 hold. In addition, assume that r in (2.9) is convex. Then
\[
\limsup_{n\to\infty} \hat V^n\big(\hat X^n(0)\big) \le \varrho^*(x).
\]
Thus, we conclude that for any convex running cost function r, Theorems 2.1 and 2.2 establish the asymptotic convergence of the ergodic control
problem for the queueing model.
3. A broad class of ergodic control problems for diffusions.
3.1. The controlled diffusion model. The dynamics are modeled by a controlled diffusion process X = {Xt , t ≥ 0} taking values in the d-dimensional
Euclidean space Rd , and governed by the Itô stochastic differential equation
\[
(3.1)\quad dX_t = b(X_t, U_t)\, dt + \sigma(X_t)\, dW_t.
\]
All random processes in (3.1) live in a complete probability space (Ω, F, P).
The process W is a d-dimensional standard Wiener process independent of
the initial condition X0 . The control process U takes values in a compact,
metrizable set U, and Ut (ω) is jointly measurable in (t, ω) ∈ [0, ∞) × Ω.
Moreover, it is nonanticipative: for s < t, Wt − Ws is independent of
Fs := the completion of σ{X0 , Ur , Wr , r ≤ s} relative to (F, P).
Such a process U is called an admissible control, and we let U denote the set
of all admissible controls.
We impose the following standard assumptions on the drift b and the
diffusion matrix σ to guarantee existence and uniqueness of solutions to
equation (3.1).
(A1) Local Lipschitz continuity: The functions
\[
b = [b^1, \dots, b^d]^T : \mathbb{R}^d \times \mathbb{U} \to \mathbb{R}^d \quad\text{and}\quad \sigma = [\sigma^{ij}] : \mathbb{R}^d \to \mathbb{R}^{d\times d}
\]
are locally Lipschitz in x with a Lipschitz constant C_R > 0 depending on R > 0. In other words, for all x, y ∈ B_R and u ∈ U,
\[
|b(x, u) - b(y, u)| + \|\sigma(x) - \sigma(y)\| \le C_R |x - y|.
\]
We also assume that b is continuous in (x, u).
(A2) Affine growth condition: b and σ satisfy a global growth condition
of the form
\[
|b(x, u)|^2 + \|\sigma(x)\|^2 \le C_1 \big(1 + |x|^2\big) \qquad \forall (x, u) \in \mathbb{R}^d \times \mathbb{U},
\]
where \|\sigma\|^2 := \operatorname{trace}(\sigma\sigma^T).
(A3) Local nondegeneracy: For each R > 0, it holds that
\[
\sum_{i,j=1}^{d} a^{ij}(x)\, \xi_i \xi_j \ge C_R^{-1} |\xi|^2 \qquad \forall x \in B_R,
\]
for all \xi = (\xi_1, \dots, \xi_d)^T \in \mathbb{R}^d, where a := \sigma\sigma^T.
In integral form, (3.1) is written as
\[
(3.2)\quad X_t = X_0 + \int_0^t b(X_s, U_s)\, ds + \int_0^t \sigma(X_s)\, dW_s.
\]
The third term on the right-hand side of (3.2) is an Itô stochastic integral.
We say that a process X = {Xt (ω)} is a solution of (3.1), if it is Ft -adapted,
continuous in t, defined for all ω ∈ Ω and t ∈ [0, ∞), and satisfies (3.2) for
all t ∈ [0, ∞) a.s. It is well known that under (A1)–(A3), for any admissible
control there exists a unique solution of (3.1) [1], Theorem 2.2.4.
We define the family of operators Lu : C 2 (Rd ) → C(Rd ), where u ∈ U plays
the role of a parameter, by
\[
(3.3)\quad \mathcal{L}^u f(x) := \tfrac{1}{2} a^{ij}(x)\, \partial_{ij} f(x) + b^i(x, u)\, \partial_i f(x), \qquad u \in \mathbb{U}.
\]
We refer to Lu as the controlled extended generator of the diffusion. In
(3.3) and elsewhere in this paper, we have adopted the notation \partial_i := \frac{\partial}{\partial x_i} and \partial_{ij} := \frac{\partial^2}{\partial x_i \partial x_j}. We also use the standard summation rule that repeated subscripts and superscripts are summed from 1 through d. In other words, the right-hand side of (3.3) stands for
\[
\frac{1}{2} \sum_{i,j=1}^{d} a^{ij}(x)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \sum_{i=1}^{d} b^i(x, u)\, \frac{\partial f}{\partial x_i}(x).
\]
Of fundamental importance in the study of functionals of X is Itô’s formula. For f ∈ C 2 (Rd ) and with Lu as defined in (3.3), it holds that
\[
(3.4)\quad f(X_t) = f(X_0) + \int_0^t \mathcal{L}^{U_s} f(X_s)\, ds + M_t, \qquad \text{a.s.},
\]
where
\[
M_t := \int_0^t \langle \nabla f(X_s), \sigma(X_s)\, dW_s \rangle
\]
is a local martingale. Krylov's extension of Itô's formula [27], page 122, extends (3.4) to functions f in the local Sobolev space W^{2,p}_{loc}(R^d), p ≥ d.
Recall that a control is called Markov if Ut = v(t, Xt ) for a measurable map
v : R+ × Rd → U, and it is called stationary Markov if v does not depend on
t, that is, v : Rd → U. Correspondingly, (3.1) is said to have a strong solution
if given a Wiener process (Wt , Ft ) on a complete probability space (Ω, F, P),
there exists a process X on (Ω, F, P), with X0 = x0 ∈ Rd , which is continuous,
Ft -adapted, and satisfies (3.2) for all t a.s. A strong solution is called unique,
if any two such solutions X and X ′ agree P-a.s., when viewed as elements of
C([0, ∞), Rd ). It is well known that under assumptions (A1)–(A3), for any
Markov control v, (3.1) has a unique strong solution [20].
Let USM denote the set of stationary Markov controls. Under v ∈ USM , the
process X is strong Markov, and we denote its transition function by Pvt (x, ·).
It also follows from the work of [8, 31] that under v ∈ USM , the transition
probabilities of X have densities which are locally Hölder continuous. Thus,
Lv defined by
\[
\mathcal{L}^v f(x) := \tfrac{1}{2} a^{ij}(x)\, \partial_{ij} f(x) + b^i(x, v(x))\, \partial_i f(x), \qquad v \in \mathfrak{U}_{SM},
\]
for f ∈ C 2 (Rd ), is the generator of a strongly-continuous semi-group on
Cb (Rd ), which is strong Feller. We let Pvx denote the probability measure
and Evx the expectation operator on the canonical space of the process under the control v ∈ USM , conditioned on the process X starting from x ∈ Rd
at t = 0.
We need the following definition.
Definition 3.1. A function h : Rd × U → R is called inf-compact on a
set A ⊂ Rd if the set Ā ∩ {x : minu∈U h(x, u) ≤ β} is compact (or empty) in
Rd for all β ∈ R. When this property holds for A ≡ Rd , then we simply say
that h is inf-compact.
Recall that control v ∈ USM is called stable if the associated diffusion
is positive recurrent. We denote the set of such controls by USSM , and let
µv denote the unique invariant probability measure on Rd for the diffusion
under the control v ∈ USSM . We also let M := {µv : v ∈ USSM }. Recall that
v ∈ USSM if and only if there exists an inf-compact function V ∈ C 2 (Rd ), a
bounded domain D ⊂ Rd , and a constant ε > 0 satisfying
\[
\mathcal{L}^v \mathcal{V}(x) \le -\varepsilon \qquad \forall x \in D^c.
\]
We denote by τ (A) the first exit time of a process {Xt , t ∈ R+ } from a set
A ⊂ Rd , defined by
\[
\tau(A) := \inf\{t > 0 : X_t \notin A\}.
\]
The open ball of radius R in R^d, centered at the origin, is denoted by B_R, and we let \tau_R := \tau(B_R) and \breve\tau_R := \tau(B_R^c).
We assume that the running cost function r(x, u) is nonnegative, continuous and locally Lipschitz in its first argument uniformly in u ∈ U. Without
loss of generality, we let κR be a Lipschitz constant of r(·, u) over BR . In
summary, we assume that
(A4) r : R^d × U → R_+ is continuous and satisfies, for some constant C_R > 0,
\[
|r(x, u) - r(y, u)| \le C_R |x - y| \qquad \forall x, y \in B_R,\ \forall u \in \mathbb{U},
\]
and all R > 0.
In general, U may not be a convex set. It is therefore often useful to
enlarge the control set to P(U). For any v(du) ∈ P(U) we can redefine the
drift and the running cost as
\[
(3.5)\quad \bar b(x, v) := \int_{\mathbb{U}} b(x, u)\, v(du) \quad\text{and}\quad \bar r(x, v) := \int_{\mathbb{U}} r(x, u)\, v(du).
\]
It is easy to see that the drift and running cost defined in (3.5) satisfy all
the aforementioned conditions (A1)–(A4). In what follows, we assume that
all the controls take values in P(U). These controls are generally referred to
as relaxed controls. We endow the set of relaxed stationary Markov controls
with the following topology: vn → v in USM if and only if
\[
\int_{\mathbb{R}^d} f(x) \int_{\mathbb{U}} g(x, u)\, v_n(du|x)\, dx \;\xrightarrow[n\to\infty]{}\; \int_{\mathbb{R}^d} f(x) \int_{\mathbb{U}} g(x, u)\, v(du|x)\, dx
\]
for all f ∈ L^1(R^d) ∩ L^2(R^d) and g ∈ C_b(R^d × U). Then U_SM is a compact
metric space under this topology [1], Section 2.4. We refer to this topology
as the topology of Markov controls. A control is said to be precise if it takes
value in U. It is easy to see that any precise control Ut can also be understood
as a relaxed control by Ut (du) = δUt . Abusing the notation, we denote the
drift and running cost by b and r, respectively, and the action of a relaxed
control on them is understood as in (3.5).
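When a relaxed control charges finitely many points of U, the averaging in (3.5) reduces to a weighted sum, as in this illustrative sketch (with assumed weights and drift data):

import numpy as np

def relaxed_drift(b, x, atoms, weights):
    # bar-b(x, v) = sum_k w_k b(x, u_k) for v = sum_k w_k * delta_{u_k}.
    return sum(w * b(x, u) for u, w in zip(atoms, weights))

# Piecewise linear drift of Section 3.3 with d = 2 (illustrative data).
ell, R, Gamma = np.zeros(2), np.eye(2), 0.5 * np.eye(2)
b = lambda x, u: ell - R @ (x - max(x.sum(), 0.0) * u) - max(x.sum(), 0.0) * (Gamma @ u)
atoms = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # vertices of S
weights = [0.3, 0.7]
print(relaxed_drift(b, np.array([1.0, 0.5]), atoms, weights))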
3.2. Structural assumptions. Assumptions 3.1 and 3.2, described below,
are in effect throughout the analysis, unless otherwise stated.
Assumption 3.1.
For some open set K ⊂ Rd , the following hold:
(i) The running cost r is inf-compact on K.
(ii) There exist inf-compact functions V ∈ C 2 (Rd ) and h ∈ C(Rd × U), such
that
\[
(3.6)\quad \mathcal{L}^u \mathcal{V}(x) \le 1 - h(x, u) \quad \forall (x, u) \in \mathcal{K}^c \times \mathbb{U}, \qquad
\mathcal{L}^u \mathcal{V}(x) \le 1 + r(x, u) \quad \forall (x, u) \in \mathcal{K} \times \mathbb{U}.
\]
Without loss of generality, we assume that V and h are nonnegative.
Remark 3.1. In the statement of Assumption 3.1, we refrain from using
any constants in the interest of notational economy. There is no loss of
generality in doing so, since the functions V and h can always be scaled to
eliminate unnecessary constants.
The next assumption is not a structural one, but rather the necessary requirement that the value of the ergodic control problem is finite. Otherwise,
the problem is vacuous. For U ∈ U, define
\[
(3.7)\quad \varrho_U(x) := \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^T r(X_s, U_s)\, ds\Big].
\]
Assumption 3.2. There exists U ∈ \mathfrak{U} such that \varrho_U(x) < ∞ for some x ∈ R^d.
Assumption 3.2 alone does not imply that ̺v < ∞ for some v ∈ USSM .
However, when combined with Assumption 3.1, this is the case as the following lemma asserts.
Lemma 3.1. Let Assumptions 3.1 and 3.2 hold. Then there exists u0 ∈
USSM such that ̺u0 < ∞. Moreover, there exists a nonnegative inf-compact
function V0 ∈ C 2 (Rd ), and a positive constant η such that
\[
(3.8)\quad \mathcal{L}^{u_0} \mathcal{V}_0(x) \le \eta - r(x, u_0(x)) \qquad \forall x \in \mathbb{R}^d.
\]
Conversely, if (3.8) holds, then Assumption 3.2 holds.
Proof. The first part of the result follows from Theorem 3.1(e) and
(3.23) whereas the converse part follows from Lemma 3.2. These proofs are
stated later in the paper.
Remark 3.2. There is no loss of generality in using only the constant
η in Assumption 3.2, since V0 can always be scaled to achieve this.
We also observe that for K = Rd the problem reduces to an ergodic control problem with near-monotone cost, and for K = ∅ we obtain an ergodic
control problem under a uniformly stable controlled diffusion.
3.3. Piecewise linear controlled diffusions. The controlled diffusion process in (2.12) belongs to a large class of controlled diffusion processes, called
piecewise linear controlled diffusions [14]. We describe this class of controlled
diffusions and show that it satisfies the assumptions in Section 3.2.
Definition 3.2. A square matrix R is said to be an M -matrix if it can
be written as R = sI − N for some s > 0 and a nonnegative matrix N with the property that ρ(N) ≤ s, where ρ(N) denotes the spectral radius of N.
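Both the M-matrix property and the positive definite matrix Q with QR + R^T Q positive definite, which is used in the proof of Proposition 3.1 below, can be computed numerically; the following sketch is illustrative and assumes SciPy is available.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_M_matrix(R):
    # R = sI - N with N >= 0 entrywise and spectral radius rho(N) <= s.
    s = max(np.diag(R)) + 1.0          # any s dominating the diagonal works
    N = s * np.eye(len(R)) - R
    if (N < -1e-12).any():             # off-diagonal entries of R must be <= 0
        return False
    return max(abs(np.linalg.eigvals(N))) <= s + 1e-12

R = np.array([[2.0, -1.0],
              [0.0, 1.0]])             # a nonsingular M-matrix
assert is_M_matrix(R)

# Solve R^T Q + Q R = I; for nonsingular M-matrices the solution Q is
# symmetric positive definite.
Q = solve_continuous_lyapunov(-R.T, -np.eye(2))
print(np.linalg.eigvals(Q))            # all positive
print(Q @ R + R.T @ Q)                 # identity, up to rounding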
Let Γ = [γ^{ij}] be a given matrix whose diagonal elements are positive, γ^{id} = 0 for i = 1, . . . , d − 1, and the remaining elements are in R. (Note that for the queueing model, Γ is a positive diagonal matrix. Our results below hold for the more general Γ.) Let ℓ ∈ R^d and let R be a nonsingular M-matrix. Define
\[
(3.9)\quad b(x, u) := \ell - R\big(x - (e \cdot x)^+ u\big) - (e \cdot x)^+ \Gamma u,
\]
with u ∈ S := \{u \in \mathbb{R}_+^d : e \cdot u = 1\}. Assume that
\[
e^T R \ge 0^T.
\]
We consider the following controlled diffusion in R^d:
\[
(3.10)\quad dX_t = b(X_t, U_t)\, dt + \Sigma\, dW_t,
\]
where Σ is a constant matrix such that ΣΣT is invertible. It is easy to see
that (3.10) satisfies conditions (A1)–(A3).
Analysis of these types of diffusion approximations is an established tradition in queueing systems. It is often easy to deal with the limiting object and
it also helps to obtain information on the behavior of the actual queueing
model.
We next introduce the running cost function. Let r : Rd × S → [0, ∞) be
locally Lipschitz with polynomial growth and
\[
(3.11)\quad c_1 [(e \cdot x)^+]^m \le r(x, u) \le c_2 \big(1 + [(e \cdot x)^+]^m\big),
\]
for some m ≥ 1 and positive constants c1 and c2 that do not depend on u.
Some typical examples of such running costs are
\[
r(x, u) = [(e \cdot x)^+]^m \sum_{i=1}^{d} h_i u_i^m \qquad \text{with } m \ge 1,
\]
for some positive vector (h1 , . . . , hd )T .
Remark 3.3. The controlled dynamics in (3.9) and running cost in
(3.11) are clearly more general than the model described in Section 2.2. In
(3.10), X denotes the diffusion approximation for the number of customers in
the system in the Halfin–Whitt regime and its ith component X i denotes
the diffusion approximation of the number of class i customers. Therefore,
(e · X)+ denotes the total number of customers in the queue. For R and
Γ diagonal as in (2.13), the diagonal entries of R and Γ denote the service
and abandonment rates, respectively, of the customer classes. The ith coordinate of U denotes the fraction of class-i customers waiting in the queue.
Therefore, the vector-valued process Xt − (e · Xt )+ Ut denotes the diffusion
approximation of the numbers of customers in service from different customer classes.
Proposition 3.1. Let b and r be given by (3.9) and (3.11), respectively.
Then (3.10) satisfies Assumptions 3.1 and 3.2, with h(x) = c0 |x|m and
\[
(3.12)\quad \mathcal{K} := \{x : \delta |x| < (e \cdot x)^+\}
\]
for appropriate positive constants c0 and δ.
Proof. We recall that if R is a nonsingular M -matrix, then there exists
a positive definite matrix Q such that QR + RT Q is strictly positive definite
[14]. Therefore, for some positive constant κ0 it holds that
\[
\kappa_0 |y|^2 \le y^T [QR + R^T Q]\, y \le \kappa_0^{-1} |y|^2 \qquad \forall y \in \mathbb{R}^d.
\]
The set K in (3.12), where δ > 0 is chosen later, is an open convex cone,
and the running cost function r is inf-compact on K. Let V be a nonnegative function in C 2 (Rd ) such that V(x) = [xT Qx]m/2 for |x| ≥ 1, where the
constant m is as in (3.11).
Let |x| ≥ 1 and u ∈ S. Then
\[
\begin{aligned}
\nabla\mathcal{V}(x) \cdot b(x, u) &= \ell \cdot \nabla\mathcal{V}(x) - \frac{m [x^T Q x]^{m/2-1}}{2}\, x^T [QR + R^T Q]\, x \\
&\quad + m [x^T Q x]^{m/2-1}\, Qx \cdot (R - \Gamma)(e \cdot x)^+ u \\
&\le \ell \cdot \nabla\mathcal{V}(x) - m [x^T Q x]^{m/2-1} \Big(\frac{\kappa_0}{2}\, |x|^2 - C |x| (e \cdot x)^+\Big)
\end{aligned}
\]
for some positive constant C. If we choose \delta = \frac{\kappa_0}{4C}, then on \mathcal{K}^c \cap \{|x| \ge 1\} we have the estimate
\[
(3.13)\quad \nabla\mathcal{V}(x) \cdot b(x, u) \le \ell \cdot \nabla\mathcal{V}(x) - \frac{m\kappa_0}{4}\, [x^T Q x]^{m/2-1} |x|^2.
\]
Note that ℓ · ∇V is globally bounded for m = 1. For any m ∈ (1, ∞), it follows by (3.13) that
\[
(3.14)\quad \nabla\mathcal{V}(x) \cdot b(x, u) \le m(\ell^T Q x)[x^T Q x]^{m/2-1} - \frac{m\kappa_0}{4}\, [x^T Q x]^{m/2-1} |x|^2
\le \frac{m |\ell^T Q| (\lambda_{\max}(Q))^{m/2}}{\lambda_{\min}(Q)}\, |x|^{m-1} - \frac{m\kappa_0 (\lambda_{\min}(Q))^{m/2}}{4 \lambda_{\max}(Q)}\, |x|^m
\]
for x ∈ \mathcal{K}^c \cap \{|x| \ge 1\}, where \lambda_{\min}(Q) and \lambda_{\max}(Q) denote the smallest and largest eigenvalues of Q, respectively. We use Young's inequality
\[
|ab| \le \frac{|a|^m}{m} + \frac{m-1}{m}\, |b|^{m/(m-1)}, \qquad a, b \ge 0,
\]
in (3.14) to obtain the bound
\[
(3.15)\quad \nabla\mathcal{V}(x) \cdot b(x, u) \le \kappa_1 - \frac{m\kappa_0 (\lambda_{\min}(Q))^{m/2}}{8 \lambda_{\max}(Q)}\, |x|^m
\]
for some constant κ_1 > 0. A similar calculation shows that for some constant κ_2 > 0 it holds that
\[
(3.16)\quad \nabla\mathcal{V}(x) \cdot b(x, u) \le \kappa_2 \big(1 + [(e \cdot x)^+]^m\big) \qquad \forall x \in \mathcal{K} \cap \{|x| \ge 1\}.
\]
Also note that we can select κ3 > 0 large enough such that
\[
(3.17)\quad \frac{1}{2}\, \big|\operatorname{trace}\big(\Sigma\Sigma^T \nabla^2 \mathcal{V}(x)\big)\big| \le \kappa_3 + \frac{m\kappa_0 (\lambda_{\min}(Q))^{m/2}}{16 \lambda_{\max}(Q)}\, |x|^m.
\]
Hence, by (3.13)–(3.17) there exists κ4 > 0 such that
\[
(3.18)\quad \mathcal{L}^u \mathcal{V}(x) \le \kappa_4 - \frac{m\kappa_0 (\lambda_{\min}(Q))^{m/2}}{16 \lambda_{\max}(Q)}\, |x|^m \mathbb{I}_{\mathcal{K}^c}(x) + \kappa_2 [(e \cdot x)^+]^m \mathbb{I}_{\mathcal{K}}(x)
\]
for all x ∈ R^d. It is evident that we can scale V, multiplying it by a constant, so that (3.18) takes the form
\[
(3.19)\quad \mathcal{L}^u \mathcal{V}(x) \le 1 - c_0 |x|^m \mathbb{I}_{\mathcal{K}^c}(x) + c_1 [(e \cdot x)^+]^m \mathbb{I}_{\mathcal{K}}(x) \qquad \forall x \in \mathbb{R}^d.
\]
By (3.11), the running cost r is inf-compact on K. It then follows from (3.11)
and (3.19) that (3.6) is satisfied with h(x) := c0 |x|m .
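The drift inequality (3.19) can also be probed numerically: for V(x) = [x^T Q x]^{m/2} the gradient and Hessian are explicit, so L^u V(x) can be evaluated in closed form. The sketch below (with an assumed Q, test points placed in K^c, and without the scaling constants of the proof) simply displays the negative drift at large |x|.

import numpy as np

m = 2.0                                 # exponent in V = (x' Q x)^{m/2}
ell = np.array([0.1, -0.2])
R = np.array([[2.0, -1.0], [0.0, 1.0]])
Gamma = np.diag([0.5, 0.4])
Sigma = np.eye(2)
Q = np.eye(2)                           # assumed; any Q with QR + R'Q > 0 works
u = np.array([0.0, 1.0])

def generator_V(x):
    qp = max(x.sum(), 0.0)
    b = ell - R @ (x - qp * u) - qp * (Gamma @ u)
    xQx = x @ Q @ x
    grad = m * xQx ** (m / 2 - 1) * (Q @ x)
    hess = (m * xQx ** (m / 2 - 1) * Q
            + m * (m - 2) * xQx ** (m / 2 - 2) * np.outer(Q @ x, Q @ x))
    return 0.5 * np.trace(Sigma @ Sigma.T @ hess) + b @ grad

rng = np.random.default_rng(0)
for _ in range(5):
    x = 10 * rng.standard_normal(2)
    if x.sum() > 0:                     # flip into K^c, where (e.x)^+ = 0
        x = -x
    print(generator_V(x), -(x @ x))     # L^u V(x) versus -|x|^m with m = 2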
We next show that (3.10) satisfies Assumption 3.2. Let
\[
u_0(\cdot) \equiv e_d = (0, \dots, 0, 1)^T.
\]
Then we can write (3.10) as
\[
(3.20)\quad dX_t = \big(\ell - R(X_t - (e \cdot X_t)^+ u_0) - (e \cdot X_t)^+ \Gamma u_0\big)\, dt + \Sigma\, dW_t.
\]
It is shown in [14] that the solution Xt in (3.20) is positive recurrent and,
therefore, u0 is a stable Markov control. This is done by finding a suitable
Lyapunov function. In particular, in [14], Theorem 3, it is shown that there
exists a positive definite matrix Q̃ such that if we define
\[
(3.21)\quad \psi(x) := (e \cdot x)^2 + \tilde\kappa\, [x - e_d \phi(e \cdot x)]^T \tilde Q\, [x - e_d \phi(e \cdot x)],
\]
for some suitably chosen constant \tilde\kappa and a function \phi \in C^2(\mathbb{R}), given by
\[
\phi(y) =
\begin{cases}
y, & \text{if } y \ge 0, \\
-\tfrac{1}{2}\tilde\delta, & \text{if } y \le -\tilde\delta, \\
\text{smooth}, & \text{if } -\tilde\delta < y < 0,
\end{cases}
\]
where \tilde\delta > 0 is a suitable constant and 0 \le \phi'(y) \le 1, then it holds that
\[
(3.22)\quad \mathcal{L}^{u_0} \psi(x) \le -\tilde\kappa_1 |x|^2,
\]
for |x| large enough and positive constant κ̃1 . We define V0 := eaψ where a
is to be determined later. Note that |∇ψ(x)| ≤ κ̃2 (1 + |x|) for some constant
κ̃2 > 0. Hence, a straightforward calculation shows that if we choose a small
enough, then for some constant κ̃3 > 0 it holds that
\[
\mathcal{L}^{u_0} \mathcal{V}_0(x) \le \big(-\tilde\kappa_1 a |x|^2 + a^2 \|\Sigma\|^2 \tilde\kappa_2 (1 + |x|)^2\big)\, \mathcal{V}_0(x) \le -\tilde\kappa_3 |x|^2\, \mathcal{V}_0(x),
\]
for all |x| large enough. Since V0 (x) > [(e · x)+ ]m , m ≥ 1, for all large enough
|x| we see that V0 satisfies (3.8) with control u0 . Hence, Assumption 3.2
holds by Lemma 3.1.
3.4. Existence of optimal controls.
Definition 3.3. Recall the definition of \varrho_U in (3.7). For β > 0, we define
\[
\mathfrak{U}^\beta := \{U \in \mathfrak{U} : \varrho_U(x) \le \beta \text{ for some } x \in \mathbb{R}^d\}.
\]
We also let \mathfrak{U}^\beta_{SM} := \mathfrak{U}^\beta \cap \mathfrak{U}_{SM}, and
\[
\hat\varrho^* := \inf\{\beta > 0 : \mathfrak{U}^\beta \ne \emptyset\}, \qquad
\varrho^* := \inf\{\beta > 0 : \mathfrak{U}^\beta_{SM} \ne \emptyset\}, \qquad
\tilde\varrho^* := \inf\{\pi(r) : \pi \in \mathcal{G}\},
\]
where
\[
\mathcal{G} := \Big\{\pi \in \mathcal{P}(\mathbb{R}^d \times \mathbb{U}) : \int_{\mathbb{R}^d \times \mathbb{U}} \mathcal{L}^u f(x)\, \pi(dx, du) = 0 \ \ \forall f \in C_c^\infty(\mathbb{R}^d)\Big\},
\]
and Lu f (x) is given by (3.3). It is well known that G is the set of ergodic
occupation measures of the controlled process in (3.1), and that G is a closed
and convex subset of P(Rd × U) [1], Lemmas 3.2.2 and 3.2.3. We use the
notation πv when we want to indicate the ergodic occupation measure associated with the control v ∈ USSM . In other words,
πv (dx, du) := µv (dx)v(du|x).
Lemma 3.2. If (3.8) holds for some V0 and u0 , then we have πu0 (r) ≤ η.
Therefore, ̺ˆ∗ < ∞.
Proof. Let (Xt , u0 (Xt )) be the solution of (3.1). Recall that τR is the
first exit time from BR for R > 0. Then by Itô’s formula
\[
\mathbb{E}_x^{u_0}[\mathcal{V}_0(X_{T\wedge\tau_R})] - \mathcal{V}_0(x) \le \eta T - \mathbb{E}_x^{u_0}\Big[\int_0^{T\wedge\tau_R} r(X_s, u_0(X_s))\, ds\Big].
\]
Therefore, letting R → ∞ and using Fatou’s lemma, we obtain the bound
\[
\mathbb{E}_x^{u_0}\Big[\int_0^T r(X_s, u_0(X_s))\, ds\Big] \le \eta T + \mathcal{V}_0(x) - \min_{\mathbb{R}^d} \mathcal{V}_0,
\]
and thus
\[
\limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^{u_0}\Big[\int_0^T r(X_s, u_0(X_s))\, ds\Big] \le \eta.
\]
In the analysis, we use a function h̃ ∈ C(Rd × U) which, roughly speaking,
is of the same order as r in K × U and lies between r and a multiple of r + h
on Kc × U, with K as in Assumption 3.1. The existence of such a function
is guaranteed by Assumption 3.1 as the following lemma shows.
Lemma 3.3.
Define
H := (K × U) ∪ {(x, u) ∈ Rd × U : r(x, u) > h(x, u)},
where K is the open set in Assumption 3.1. Then there exists an inf-compact
function h̃ ∈ C(Rd × U) which is locally Lipschitz in its first argument uniformly w.r.t. its second argument, and satisfies
\[
(3.23)\quad r(x, u) \le \tilde h(x, u) \le \frac{k_0}{2}\,\big(1 + h(x, u)\mathbb{I}_{\mathcal{H}^c}(x, u) + r(x, u)\mathbb{I}_{\mathcal{H}}(x, u)\big)
\]
for all (x, u) ∈ Rd × U, and for some positive constant k0 ≥ 2. Moreover,
\[
(3.24)\quad \mathcal{L}^u \mathcal{V}(x) \le 1 - h(x, u)\mathbb{I}_{\mathcal{H}^c}(x, u) + r(x, u)\mathbb{I}_{\mathcal{H}}(x, u)
\]
for all (x, u) ∈ Rd × U, where V is the function in Assumption 3.1.
Proof. If f (x, u) denotes the right-hand side of (3.23), with k0 = 4,
then
f (x, u) − r(x, u) > h(x, u)IHc (x, u) + r(x, u)IH (x, u)
≥ h(x, u)IKc (x) + r(x, u)IK (x),
since r(x, u) > h(x, u) on H \ (K × U). Therefore, by Assumption 3.1, the
set {(x, u) : f (x, u) − r(x, u) ≤ n} is bounded in Rd × U for every n ∈ N.
Hence, there exists an increasing sequence of open balls Dn , n = 1, 2, . . . ,
centered at 0 in Rd such that f (x, u) − r(x, u) ≥ n for all (x, u) ∈ Dnc × U. Let
g : Rd → R be any nonnegative smooth function such that n − 1 ≤ g(x) ≤ n
for x ∈ Dn+1 \ Dn , n = 1, 2, . . . , and g(x) = 0 on D1 . Clearly, h̃ := r + g is
continuous, inf-compact, locally Lipschitz in its first argument, and satisfies
(3.23). That (3.24) holds is clear from (3.6) and the fact that H ⊇ K × U.
Remark 3.4. It is clear from the proof of Lemma 3.3 that we could
fix the value of the constant k0 in (3.23), say k0 = 4. However, we keep the
variable k0 because this provides some flexibility in the choice of h̃, and also
in order to be able to trace it along the different calculations.
Remark 3.5. Note that if h ≥ r and r is inf-compact, then H = K × U
and h̃ := r satisfies (3.23). Note also, that in view of (3.11) and Proposition 3.1, for the multi-class queueing model we have
\[
\begin{aligned}
r(x, u) &\le c_2 \big(1 + [(e \cdot x)^+]^m\big) \\
&\le \frac{c_2 d^{m-1}}{1 \wedge c_0}\,\big(1 + (1 \wedge c_0)|x|^m\big) \\
&\le \frac{c_2 d^{m-1}}{1 \wedge c_0}\,\Big(1 + c_0 |x|^m \mathbb{I}_{\mathcal{K}^c}(x) + \frac{1}{\delta^m}\,[(e \cdot x)^+]^m \mathbb{I}_{\mathcal{K}}(x)\Big) \\
&\le \frac{c_2 d^{m-1}}{1 \wedge c_0}\,\Big(1 + h(x)\mathbb{I}_{\mathcal{K}^c}(x) + \frac{1}{c_1 \delta^m}\, r(x, u)\mathbb{I}_{\mathcal{K}}(x)\Big) \\
&\le \frac{c_2 d^{m-1}}{1 \wedge c_0 \wedge c_1 \delta^m}\,\big(1 + h(x)\mathbb{I}_{\mathcal{K}^c}(x) + r(x, u)\mathbb{I}_{\mathcal{K}}(x)\big) \\
&\le \frac{c_2 d^{m-1}}{1 \wedge c_0 \wedge c_1 \delta^m}\,\big(1 + h(x)\mathbb{I}_{\mathcal{H}^c}(x, u) + r(x, u)\mathbb{I}_{\mathcal{H}}(x, u)\big).
\end{aligned}
\]
Therefore, \tilde h(x, u) := c_2 + c_2 d^{m-1} |x|^m satisfies (3.23).
Remark 3.6. We often use the fact that if g ∈ C(Rd × U) is bounded
below, then the map P(Rd × U) ∋ ν 7→ ν(g) is lower semi-continuous. This
easily follows from two facts: (a) g can be expressed as an increasing limit
of bounded continuous functions, and (b) if g is bounded and continuous,
then π 7→ π(g) is continuous.
Theorem 3.1. Let β ∈ (\hat\varrho^*, ∞). Then:
(a) For all U ∈ \mathfrak{U}^\beta and x ∈ R^d such that \varrho_U(x) ≤ β,
\[
(3.25)\quad \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^T \tilde h(X_s, U_s)\, ds\Big] \le k_0 (1 + \beta).
\]
(b) \hat\varrho^* = \varrho^* = \tilde\varrho^*.
(c) For any β ∈ (\varrho^*, ∞), we have \mathfrak{U}^\beta_{SM} ⊂ \mathfrak{U}_{SSM}.
(d) The set of invariant probability measures \mathcal{M}^\beta corresponding to controls in \mathfrak{U}^\beta_{SM} satisfies
\[
\int_{\mathbb{R}^d} \tilde h(x, v(x))\, \mu_v(dx) \le k_0 (1 + \beta) \qquad \forall \mu_v \in \mathcal{M}^\beta.
\]
In particular, \mathcal{M}^\beta is tight in \mathcal{P}(\mathbb{R}^d).
(e) There exists (\tilde V, \tilde\varrho) ∈ C^2(R^d) × R_+, with \tilde V inf-compact, such that
\[
(3.26)\quad \min_{u\in\mathbb{U}}\big[\mathcal{L}^u \tilde V(x) + \tilde h(x, u)\big] = \tilde\varrho.
\]
Proof. Using Itô’s formula, it follows by (3.24) that
\[
(3.27)\quad \frac{1}{T}\big(\mathbb{E}_x^U[\mathcal{V}(X_{T\wedge\tau_R})] - \mathcal{V}(x)\big)
\le 1 - \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^{T\wedge\tau_R} h(X_s, U_s)\mathbb{I}_{\mathcal{H}^c}(X_s, U_s)\, ds\Big]
+ \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^{T\wedge\tau_R} r(X_s, U_s)\mathbb{I}_{\mathcal{H}}(X_s, U_s)\, ds\Big].
\]
Since V is inf-compact, (3.27) together with (3.23) implies that
\[
(3.28)\quad \frac{2}{k_0}\, \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^T \tilde h(X_s, U_s)\, ds\Big]
\le 2 + 2 \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^U\Big[\int_0^T r(X_s, U_s)\, ds\Big].
\]
Part (a) then follows from (3.28).
Now fix U ∈ Uβ and x ∈ Rd such that ̺U (x) ≤ β. The inequality in (3.25)
implies that the set of mean empirical measures \{\zeta^U_{x,t} : t \ge 1\}, defined by
\[
\zeta^U_{x,t}(A \times B) := \frac{1}{t}\, \mathbb{E}_x^U\Big[\int_0^t \mathbb{I}_{A\times B}(X_s, U_s)\, ds\Big]
\]
for any Borel sets A ⊂ Rd and B ⊂ U, is tight. It is the case that any limit
point of the mean empirical measures in P(Rd × U) is an ergodic occupation
measure [1], Lemma 3.4.6. Then in view of Remark 3.6 we obtain
\[
(3.29)\quad \pi(r) \le \limsup_{t\to\infty} \zeta^U_{x,t}(r) \le \beta
\]
for some ergodic occupation measure π. Therefore, ̺˜∗ ≤ ̺ˆ∗ . Disintegrating
the measure π as π(dx, du) = v(du|x)µv (dx), we obtain the associated control v ∈ USSM . From ergodic theory [33], we also know that
\[
\limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_x^v\Big[\int_0^T r(X_s, v(X_s))\, ds\Big] = \pi_v(r) \qquad \text{for almost every } x.
\]
It follows that ̺∗ ≤ ̺˜∗ , and since it is clear that ̺ˆ∗ ≤ ̺∗ , equality must hold
among the three quantities.
If v ∈ UβSM , then (3.28) implies that (3.29) holds with U ≡ v and π ≡ πv .
Therefore, parts (c) and (d) follow.
Existence of (Ṽ , ̺˜), satisfying (3.26), follows from Assumption 3.2 and
[1], Theorem 3.6.6. The inf-compactness of Ṽ follows from the stochastic
representation of Ṽ in [1], Lemma 3.6.9. This proves (e).
Existence of a stationary Markov control that is optimal is asserted by
the following theorem.
Theorem 3.2. Let G denote the set of ergodic occupation measures corresponding to controls in USSM , and Gβ those corresponding to controls in
UβSM , for β > ̺∗ . Then:
(a) The set Gβ is compact in P(Rd ) for any β > ̺∗ .
(b) There exists v ∈ USM such that ̺v = ̺∗ .
Proof. By Theorem 3.1(d), the set Gβ is tight for any β > ̺∗ . Let
{πn } ⊂ Gβ , for some β > ̺∗ , be any convergent sequence in P(Rd ) such that
πn (r) → ̺∗ as n → ∞ and denote its limit by π∗ . Since G is closed, π∗ ∈ G, and
since the map π → π(r) is lower semi-continuous, it follows that π∗ (r) ≤ ̺∗ .
Therefore, Gβ is closed, and hence compact. Since π(r) ≥ ̺∗ for all π ∈ G,
the equality π∗ (r) = ̺∗ follows. Also v is obtained by disintegrating π∗ .
Remark 3.7. The reader might have noticed at this point that Assumption 3.1 may be weakened significantly. What is really required is the
existence of an open set Ĥ ⊂ Rd × U and inf-compact functions V ∈ C 2 (Rd )
and h ∈ C(Rd × U), satisfying
\[
\text{(H1)}\quad \inf_{\{u : (x,u)\in\hat{\mathcal{H}}\}} r(x, u) \;\xrightarrow[|x|\to\infty]{}\; \infty.
\]
\[
\text{(H2)}\quad \mathcal{L}^u \mathcal{V}(x) \le 1 - h(x, u)\mathbb{I}_{\hat{\mathcal{H}}^c}(x, u) + r(x, u)\mathbb{I}_{\hat{\mathcal{H}}}(x, u) \qquad \forall (x, u) \in \mathbb{R}^d \times \mathbb{U}.
\]
In (H1), we use the convention that the ‘inf’ of the empty set is +∞. Also
note that (H1) is equivalent to the statement that {(x, u) : r(x, u) ≤ c} ∩ Ĥ is
bounded in Rd × U for all c ∈ R+ . If (H1)–(H2) are met, we define H := Ĥ ∪
{(x, u) ∈ Rd × U : r(x, u) > h(x, u)}, and following the proof of Lemma 3.3, we
assert the existence of an inf-compact h̃ ∈ C(Rd × U) satisfying (3.23). In fact,
throughout the rest of the paper, Assumption 3.1 is not really invoked. We
only use (3.24), the inf-compact function h̃ satisfying (3.23), and, naturally,
Assumption 3.2.
3.5. The HJB equation. For ε > 0, let
rε (x, u) := r(x, u) + εh̃(x, u).
By Theorem 3.1(d), for any π ∈ Gβ , β > ̺∗ , we have the bound
(3.30)
π(rε ) ≤ β + εk0 (1 + β).
Therefore, since rε is near-monotone, that is,
lim inf min rε (x, u) > inf π(rε ),
|x|→∞ u∈U
π∈G
there exists πε ∈ arg minπ∈G π(rε ). Let π∗ ∈ G be as in the proof of Theorem 3.2. The sub-optimality of π∗ relative to the running cost rε and (3.30)
imply that
πε (r) ≤ πε (rε )
(3.31)
≤ π∗ (rε )
≤ ̺∗ + εk0 (1 + ̺∗ )
∀ε > 0.
It follows from (3.31) and Theorem 3.1(d) that {πε : ε ∈ (0, 1)} is tight. Since
πε 7→ πε (r) is lower semi-continuous, if π̄ is any limit point of πε as ε ց 0,
then taking limits in (3.31), we obtain
(3.32)
π̄(r) ≤ lim sup πε (r) ≤ ̺∗ .
εց0
Since G is closed, π̄ ∈ G, which implies that π̄(r) ≥ ̺∗ . Therefore, equality
must hold in (3.32), or in other words, π̄ is an optimal ergodic occupation
measure.
Theorem 3.3. There exists a unique function V ε ∈ C 2 (Rd ) with V ε (0) =
0, which is bounded below in Rd , and solves the HJB
(3.33)
min[Lu V ε (x) + rε (x, u)] = ̺ε ,
u∈U
where ̺ε := inf π∈G π(rε ), or in other words, ̺ε is the optimal value of the
ergodic control problem with running cost rε . Also a stationary Markov control vε is optimal for the ergodic control problem relative to rε if and only if
it satisfies
(3.34) Hε (x, ∇V ε (x)) = b(x, vε (x)) · ∇V ε (x) + rε (x, vε (x))
where
(3.35)
Hε (x, p) := min[b(x, u) · p + rε (x, u)].
u∈U
Moreover:
(a) for every R > 0, there exists kR such that
(3.36)
osc V ε ≤ kR ;
BR
a.e. in Rd ,
28
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
(b) if vε is a measurable a.e. selector from the minimizer of the Hamiltonian in (3.35), that is, if it satisfies (3.33), then for any δ > 0,
Z τ̆δ
V ε (x) ≥ Evxε
(rε (Xs , vε (Xs )) − ̺ε ) ds + inf V ε ;
Bδ
0
(c) for any stationary control v ∈ USSM and for any δ > 0,
Z τ̆δ
ε
v
ε
V (x) ≤ Ex
(rε (Xs , v(Xs )) − ̺ε ) ds + V (Xτ̆δ ) ,
0
where τ̆δ is hitting time to the ball Bδ .
Theorem 3.4.
following hold:
Let V ε , ̺ε , and vε , for ε > 0, be as in Theorem 3.3. The
(a) The function V ε converges to some V∗ ∈ C 2 (Rd ), uniformly on compact sets, and ̺ε → ̺∗ , as ε ց 0, and V∗ satisfies
min[Lu V∗ (x) + r(x, u)] = ̺∗ .
(3.37)
u∈U
Also, any limit point v∗ (in the topology of Markov controls) as ε ց 0 of the
set {vε } satisfies
Lv∗ V∗ (x) + r(x, v∗ (x)) = ̺∗
a.e. in Rd .
(b) A stationary Markov control v is optimal for the ergodic control problem relative to r if and only if it satisfies
a.e. in Rd ,
(3.38) H(x, ∇V∗ (x)) = b(x, v(x)) · ∇V∗ (x) + r(x, v(x))
where
H(x, p) := min[b(x, u) · p + r(x, u)].
u∈U
Moreover, for an optimal v ∈ USM , we have
Z T
1 v
r(Xs , v(Xs )) ds = ̺∗
lim Ex
T →∞ T
0
∀x ∈ Rd .
(c) The function V∗ has the stochastic representation
Z τ̆δ
v
Ex
V∗ (x) = lim S inf
(r(Xs , v(Xs )) − ̺∗ ) ds
δց0 v∈
(3.39)
= lim
δց0
Ev̄x
β
β>0 USM
Z
0
τ̆δ
0
(r(Xs , v∗ (Xs )) − ̺∗ ) ds
for any v̄ ∈ USM that satisfies (3.38).
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
29
(d) If U is a convex set, u 7→ {b(x, u) · p + r(x, u)} is strictly convex whenever it is not constant, and u 7→ h̃(x, u) is strictly convex for all x, then any
measurable minimizer of (3.33) converges pointwise, and thus in USM , to the
minimizer of (3.37).
Theorem 3.4 guarantees the existence of an optimal stable control, which
is made precise by (3.38), for the ergodic diffusion control problem with the
running cost function r. Moreover, under the convexity property in part (d),
the optimal stable control can be obtained as a pointwise limit from the
minimizing selector of (3.33). For instance, if we let
+
r(x, u) = (e · x)
d
X
hi um
i ,
m > 1,
i=1
then by choosing h and h̃ + |u|2 as in Proposition 3.1, we see that the
approximate value function V ε and approximate control vε converge to the
desired optimal value function V∗ and optimal control v∗ , respectively.
Concerning the uniqueness of the solution to the HJB equation in (3.37),
recall that in the near-monotone case the existing uniqueness results are as
follows: there exists a unique solution pair (V, ̺) of (3.37) with V in the class
of functions C 2 (Rd ) which are bounded below in Rd . Moreover, it satisfies
V (0) = 0 and ̺ ≤ ̺∗ . If the restriction ̺ ≤ ̺∗ is removed, then in general,
there are multiple solutions. Since in our model r is not near-monotone in
Rd , the function V∗ is not, in general, bounded below. However, as we show
later in Lemma 3.10 the negative part of V∗ grows slower than V, that is,
it holds that V∗− ∈ o(V), with o(·) as defined in Section 1.3. Therefore, the
second part of the theorem that follows may be viewed as an extension of the
well-known uniqueness results that apply to ergodic control problems with
near-monotone running cost. The third part of the theorem resembles the
hypotheses of uniqueness that apply to problems under a blanket stability
hypothesis.
Theorem 3.5.
(3.40)
Let (V̂ , ̺ˆ) be a solution of
min[Lu V̂ (x) + r(x, u)] = ̺ˆ,
u∈U
such that V̂ − ∈ o(V) and V̂ (0) = 0. Then the following hold:
(a) Any measurable selector v̂ from the minimizer of the associated Hamiltonian in (3.38) is in USSM and ̺v̂ < ∞.
(b) If ̺ˆ ≤ ̺∗ then necessarily ̺ˆ = ̺∗ and V̂ = V∗ .
(c) If V̂ ∈ O(minu∈U h̃(·, u)), then ̺ˆ = ̺∗ and V̂ = V∗ .
30
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Applying these results to the multi-class queueing diffusion model, we
have the following corollary.
Corollary 3.1. For the queueing diffusion model with controlled dynamics given by (3.10), drift given by (3.9), and running cost as in (3.11),
there exists a unique solution V , satisfying V (0) = 0, to the associated HJB
in the class of functions C 2 (Rd ) ∩ O(|x|m ), whose negative part is in o(|x|m ).
This solution agrees with V∗ in Theorem 3.4.
Proof. Existence of a solution V follows by Theorem 3.4. Select V ∼
|x|m as in the proof of Proposition 3.1. That the solution V is in the stated
class then follows by Lemma 3.10 and Corollary 4.1 that appear later in
Sections 3.6 and 4, respectively. With h ∼ |x|m as in the proof of Proposition 3.1, it follows that minu∈U h̃(x, u) ∈ O(|x|m ). Therefore, uniqueness
follows by Theorem 3.5.
We can also obtain the HJB equation in (3.37) via the traditional vanishing discount approach as the following theorem asserts. Similar results
are shown for a one-dimensional degenerate ergodic diffusion control problem in [29] and certain multi-dimensional ergodic diffusion control problems
(allowing degeneracy and spatial periodicity) in [2].
Theorem 3.6.
Let V∗ and ̺∗ be as in Theorem 3.4. For α > 0, we define
Z ∞
−αt
U
e r(Xt , Ut ) dt .
Vα (x) := inf Ex
U ∈U
0
The function Vα − Vα (0) converges, as α ց 0, to V∗ , uniformly on compact
subsets of Rd . Moreover, αVα (0) → ̺∗ , as α ց 0.
The proofs of the Theorems 3.3–3.6 are given in Section 3.6. The following
result, which follows directly from (3.31), provides a way to find ε-optimal
controls.
Proposition 3.2. Let {vε } be the minimizing selector from Theorem 3.3
and {µvε } be the corresponding invariant probability measures. Then almost
surely for all x ∈ Rd ,
Z
Z T
1 vε
lim Ex
r(x, vε (x))µvε (dx)
r(Xs , vε (Xs )) ds =
T →∞ T
Rd
0
≤ ̺∗ + εk0 (1 + ̺∗ ).
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
31
3.6. Technical proofs. Recall that rε (x, u) = r(x, u) + εh̃(x, u), with h̃ as
in Lemma 3.3. We need the following lemma.
For α > 0 and ε ≥ 0, we define
Z ∞
ε
U
−αt
Vα (x) := inf Ex
(3.41)
e rε (Xt , Ut ) dt ,
U ∈U
0
where we set r0 ≡ r. Clearly, when ε = 0, we have Vα0 ≡ Vα .
We quote the following result from [1], Theorem 3.5.6, Remark 3.5.8.
Lemma 3.4. Provided ε > 0, then Vαε defined above is in C 2 (Rd ) and is
the minimal nonnegative solution of
min[Lu Vαε (x) + rε (x, u)] = αVαε (x).
u∈U
The HJB in Lemma 3.4 is similar to the equation in [7], Theorem 3, which
concerns the characterization of the discounted control problem.
Lemma 3.5. Let u be any precise Markov control and Lu be the corresponding generator. Let ϕ ∈ C 2 (Rd ) be a nonnegative solution of
Lu ϕ − αϕ = g,
d
where g ∈ L∞
loc (R ). Let κ : R+ → R+ be any nondecreasing function such that
kgkL∞ (BR ) ≤ κ(R) for all R > 0. Then for any R > 0 there exists a constant
D(R) which depends on κ(4R), but not on u, or ϕ, such that
osc ϕ ≤ D(R) 1 + α inf ϕ .
BR
B4R
Proof. Define g̃ := α(g − 2κ(4R)) and ϕ̃ := 2κ(4R) + αϕ. Then g̃ ≤ 0
in B4R and ϕ̃ solves
Lu ϕ̃ − αϕ̃ = g̃
in B4R .
Also
kg̃kL∞ (B4R ) ≤ α(2κ(4R) + kgkL∞ (B4R ) )
≤ 3α(2κ(4R) − kgkL∞ (B4R ) )
= 3 inf |g̃|
B4R
≤ 3|B4R |−1 kg̃kL1 (B4R ) .
Hence by [1], Theorem A.2.13, there exists a positive constant C̃H such that
sup ϕ̃(x) ≤ C̃H inf ϕ̃(x),
x∈B3R
x∈B3R
32
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
implying that
α sup ϕ(x) ≤ C̃H 2κ(4R) + inf αϕ(x) .
(3.42)
x∈B3R
x∈B3R
We next consider the solution of
Lu ψ = 0
in B3R ,
ψ=ϕ
on ∂B3R .
Then
Lu (ϕ − ψ) = αϕ + g
in B3R .
If ϕ(x̂) = inf x∈B3R ϕ(x), then applying the maximum principle ([1], Theorem A.2.1, [18]) it follows from (3.42) that
sup |ϕ − ψ| ≤ Ĉ(1 + αϕ(x̂)).
(3.43)
x∈B3R
Again ψ attains its minimum at the boundary ([1], Theorem A.2.3, [18]).
Therefore, ψ − ϕ(x̂) is a nonnegative function, and hence by the Harnack
inequality, there exists a constant CH > 0 such that
ψ(x) − ϕ(x̂) ≤ CH (ψ(x̂) − ϕ(x̂)) ≤ CH Ĉ(1 + αϕ(x̂))
∀x ∈ B2R .
Thus, combining the above display with (3.43) we obtain
osc ϕ ≤ sup(ϕ − ψ) + sup ψ − ϕ(x̂) ≤ Ĉ(1 + CH )(1 + αϕ(x̂)).
B2R
B2R
B2R
This completes the proof.
Lemma 3.6. Let Vαε be as in Lemma 3.4. Then for any R > 0, there
exists a constant kR > 0 such that
osc Vαε ≤ kR
for all α ∈ (0, 1] and ε ∈ [0, 1].
BR
Proof. Recall that µu0 is the stationary probability distribution for the
process under the control u0 ∈ USSM in Lemma 3.1. Since u0 is sub-optimal
for the α-discounted criterion in (3.41), and Vαε is nonnegative, then for any
ball BR , using Fubini’s theorem, we obtain
Z
ε
Vαε (x)µu0 (dx)
µu0 (BR ) inf Vα ≤
BR
Rd
≤
Z
Rd
Eux0
Z
∞
0
−αt
e
rε (Xt , u0 (Xt )) dt µu0 (dx)
1
µu (rε )
α 0
1
≤ (η + εk0 (1 + η)),
α
=
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
33
where for the last inequality we used Lemma 3.2 and Theorem 3.1(a).
Therefore, we have the estimate
α inf Vαε ≤
BR
η + εk0 (1 + η)
.
µu0 (BR )
The result then follows by Lemma 3.5.
We continue with the proof of Theorem 3.3.
Proof of Theorem 3.3. Consider the function V̄αε := Vαε − Vαε (0). In
view of Lemma 3.5 and Lemma 3.6, we see that V̄αε is locally bounded uniformly in α ∈ (0, 1] and ε ∈ (0, 1]. Therefore, by standard elliptic theory, V̄αε
and its first- and second-order partial derivatives are uniformly bounded in
Lp (B), for any p > 1, in any bounded ball B ⊂ Rd , that is, for some constant
CB depending on B and p, kV̄αε kW2,p (B) ≤ CB ([18], Theorem 9.11, page 117).
Therefore, we can extract a subsequence along which V̄αε converges. Then
the result follows from Theorems 3.6.6, Lemma 3.6.9 and Theorem 3.6.10 in
[1]. The proof of (3.36) follows from Lemma 3.5 and Lemma 3.6.
Remark 3.8. In the proof of the following lemma, and elsewhere in the
paper, we use the fact that if U ⊂ USSM is a any set of controls such that
the corresponding set {µv : v ∈ U } ⊂ M of invariant probability measures is
tight then the map v 7→ πv from the closure of U to P(Rd × U) is continuous,
and so is the map v 7→ µv . In fact, the latter is continuous under the total
variation norm topology [1], Lemma 3.2.6. We also recall that G and M are
closed and convex subsets of P(Rd × U) and P(Rd ). Therefore,{πv : v ∈ Ū } is
compact in G. Note also that since U is compact, tightness of a set of invariant
probability measures is equivalent to tightness of the corresponding set of
ergodic occupation measures.
Lemma 3.7. If {vε : ε ∈ (0, 1]} is a collection of measurable selectors from
the minimizer of (3.33), then the corresponding invariant probability measures {µε : ε ∈ (0, 1]} are tight. Moreover, if vεn → v∗ along some subsequence
εn ց 0, then the following hold:
(a) µεn → µv∗ as εn ց 0,
(b) Rv∗ is a stable Markov control,
(c) Rd r(x, v∗ (x))µv∗ (dx) = limεց0 ̺ε = ̺∗ .
Proof. By (3.25) and (3.31), the set of ergodic occupation measures
corresponding to {vε : ε ∈ (0, 1]} is tight. By Remark 3.8, the same applies
to the set {µε : ε ∈ (0, 1]}, and also part (a) holds. Part (b) follows from
the equivalence of the existence of an invariant probability measure for a
34
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
controlled diffusion and the stability of the associated stationary Markov
control (see [1], Theorem 2.6.10). Part (c) then follows since equality holds
in (3.32).
We continue with the following lemma that asserts the continuity of the
mean hitting time of a ball with respect to the stable Markov controls.
Lemma 3.8. Let {vn : n ∈ N} ⊂ UβSM , for some β > 0, be a collection
of Markov controls such that vn → v̂ in the topology of Markov controls as
n → ∞. Let µn , µ̂ be the invariant probability measures corresponding to the
controls vn , v̂, respectively. Then for any δ > 0, it holds that
Evxn [τ̆δ ] −→ Ev̂x [τ̆δ ]
∀x ∈ Bδc .
n→∞
Proof. Define H(x) := minu∈U h̃(x, u). It is easy to see that H is infcompact and locally Lipschitz. Therefore, by Theorem 3.1(d) we have
sup µn (H) ≤ k0 (1 + β),
n∈N
and since µn → µ̂, we also have µ̂(H) ≤ k0 (1 + β). Then by [1], Lemma 3.3.4,
we obtain
Z τ̆δ
Z τ̆δ
sup Evxn
H(Xs ) ds + Ev̂x
(3.44)
H(Xs ) ds < ∞.
n∈N
0
0
Let R be a positive number greater than |x|. Then by (3.44), there exists a
positive k such that
Z τ̆δ
Z τ̆δ
1
k
Evx
I{H>R} (Xs ) ds ≤ Evx
H(Xs )I{H>R} (Xs ) ds ≤
R
R
0
0
for v ∈ {{vn }, v̂}. From this assertion and (3.44), we see that
Z τ̆δ
v
sup Ex
I{H>R} (Xs ) ds −→ 0.
v∈{{vn },v̂}
R→∞
0
Therefore, in order to prove the lemma it is enough to show that, for any
R > 0, we have
Z τ̆δ
Z τ̆δ
vn
v̂
Ex
I{H≤R} (Xs ) ds −→ Ex
I{H≤R} (Xs ) ds .
0
n→∞
0
But this follows from [1], Lemma 2.6.13(iii).
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
35
Lemma 3.9. Let (V ε , ̺ε ) be as in Theorem 3.3, and vε satisfy (3.35).
There exists a subsequence εn ց 0, such that V εn converges to some V∗ ∈
C 2 (Rd ), uniformly on compact sets, and V∗ satisfies
min[Lu V∗ (x) + r(x, u)] = ̺∗ .
(3.45)
u∈U
Also, any limit point v∗ (in the topology of Markov controls) of the set {vε },
as ε ց 0, satisfies
(3.46)
Lv∗ V∗ (x) + r(x, v∗ (x)) = ̺∗
a.e. in Rd .
Moreover, V∗ admits the stochastic representation
Z τ̆δ
v
V∗ (x) = S inf
Ex
(r(Xs , v(Xs )) − ̺∗ ) ds + V∗ (Xτ̆δ )
v∈
(3.47)
= Evx∗
β
β>0 USM
Z
0
τ̆δ
0
(r(Xs , v∗ (Xs )) − ̺∗ ) ds + V∗ (Xτ̆δ ) .
It follows that V∗ is the unique limit point of V ε as ε ց 0.
Proof. From (3.36), we see that the family {V ε : ε ∈ (0, 1]} is uniformly
locally bounded. Hence, applying the theory of elliptic PDE, it follows that
d
{V ε : ε ∈ (0, 1]} is uniformly bounded in W2,p
loc (R ) for p > d. Consequently,
1,γ
ε
{V : ε ∈ (0, 1]} is uniformly bounded in Cloc for some γ > 0. Therefore, along
some subsequence εn ց 0, V εn → V∗ ∈ W2,p ∩ C 1,γ , as n → ∞, uniformly on
compact sets. Also, limεց0 ̺ε = ̺∗ by Lemma 3.6(c). Therefore, passing to
the limit we obtain the HJB equation in (3.45). It is straightforward to verify
that (3.46) holds [1], Lemma 2.4.3.
By Theorem 3.3(c), taking limits as ε ց 0, we obtain
Z τ̆δ
Evx
(3.48) V∗ (x) ≤ S inf
(r(Xs , v(Xs )) − ̺∗ ) ds + V∗ (Xτ̆δ ) .
v∈
β
β>0 USM
0
Also by Theorem 3.3(b) we have the bound
V ε (x) ≥ −̺ε Evxε [τ̆δ ] + inf V ε .
Bδ
Using Lemma 3.8 and taking limits as εn ց 0, we obtain the lower bound
(3.49)
V∗ (x) ≥ −̺∗ Evx∗ [τ̆δ ] + inf V∗ .
Bδ
By Lemma 3.7(c) and Theorem 3.1(d), v∗ ∈ USSM , and πv∗ (h̃) ≤ k0 (1 + ̺∗ ).
Define
Z τ̆δ
v∗
ϕ(x) := Ex
(3.50)
h̃(Xs , v∗ (Xs )) ds .
0
36
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
For |x| > δ, we have
Evx∗ [I{τR <τ̆δ } ϕ(XτR )] = Evx∗
I{τR <τ̆δ }
Z
τ̆δ
τR ∧τ̆δ
h̃(Xs , v∗ (Xs )) ds .
Therefore, by the dominated convergence theorem and the fact that ϕ(x) <
∞ we obtain
Evx∗ [ϕ(XτR )I{τR <τ̆δ } ] −→ 0.
Rր∞
By (3.48) and (3.49), we have |V∗ | ∈ O(ϕ). Thus (3.49) and (3.50) imply that
lim inf Evx∗ [V∗ (XτR )I{τR <τ̆δ } ] = 0,
Rր∞
and thus
lim inf Evx∗ [V∗ (XτR ∧τ̆δ )] = Evx∗ [V∗ (Xτ̆δ )].
(3.51)
Rր∞
Applying Itô’s formula to (3.46), we obtain
Z τ̆δ ∧τR
v∗
(3.52) V∗ (x) = Ex
(r(Xs , v∗ (Xs )) − ̺∗ ) ds + V∗ (Xτ̆δ ∧τR ) .
0
Taking limits as R → ∞, and using the dominated convergence theorem, we
obtain (3.47) from (3.48).
Recall the definition of o(·) from Section 1.3. We need the following
lemma.
Lemma 3.10.
Let V∗ be as in Lemma 3.9. It holds that V∗− ∈ o(V).
Proof. Let v∗ be as in Lemma 3.9. Applying Itô’s formula to (3.24)
with u ≡ v∗ we obtain
Z τ̆δ
v∗
Ex
h(Xs , v∗ (Xs ))IHc (Xs , v∗ (Xs )) ds
(3.53)
0
≤ Evx∗
Z
τ̆δ
0
r(Xs , v∗ (Xs ))IH (Xs , v∗ (Xs )) ds + Evx∗ [τ̆δ ] + V(x).
Therefore, adding the term
Z τ̆δ
v∗
Ex
r(Xs , v∗ (Xs ))IH (Xs , v∗ (Xs )) ds − (1 + 2̺∗ )Evx∗ [τ̆δ ]
0
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
37
to both sides of (3.53) and using the stochastic representation of V∗ we
obtain
Z τ̆δ
−1 v∗
F (x) := 2k0 Ex
h̃(Xs , v∗ (Xs )) ds − 2(1 + ̺∗ )Evx∗ [τ̆δ ]
(3.54)
0
≤ 2V∗ (x) + V(x) − 2 inf V∗ .
Bδ
From the stochastic representation of V∗ we have V∗− (x) ≤ ̺∗ Evx∗ [τ̆δ ]−inf Bδ V∗ .
For any R > δ, we have
Z τ̆δ
v∗
c
(3.55) Ex
h̃
Ex [τ̆R ]
∀x ∈ BR
.
h̃(Xs , v∗ (Xs )) ds ≥ inf
c
BR ×U
0
It is also straightforward to show that lim|x|→∞ EExx[τ̆[τ̆Rδ ]] = 1. Therefore, since
h̃ is inf-compact, it follows by (3.54) and (3.55) that the map x 7→ Evx∗ [τ̆δ ]
is in o(F ), which implies that V∗− ∈ o(F ). On the other hand, by (3.54) we
obtain F (x) ≤ V(x) − 2 supBδ V∗ for all x such that V∗ (x) ≤ 0, which implies
that the restriction of F to the support of V∗− is in O(V). It follows that
V∗− ∈ o(V).
We next prove Theorem 3.4.
Proof of Theorem 3.4. Part (a) is contained in Lemma 3.9.
To prove part (b), let v̄ be any control satisfying (3.38). By Lemma 3.10
the map V + 2V∗ is inf-compact and by Theorem 3.4 and (3.24) it satisfies
Lv̄ (V + 2V∗ )(x) ≤ 1 + 2̺∗ − r(x, v̄(x)) − h(x, v̄(x))IHc (x, v̄(x))
≤ 2 + 2̺∗ − 2k0−1 h̃(x, v̄(x))
∀x ∈ Rd .
This implies that v̄ ∈ USSM . Applying Itô’s formula, we obtain
Z T
1 v̄
lim sup Ex
(3.56)
h̃(Xs , v̄(Xs )) ds ≤ k0 (1 + ̺∗ ).
T →∞ T
0
Therefore, πv̄ (h̃) < ∞. By (3.24), we have
Z t
v̄
v̄
Ex [V(Xt )] ≤ V(x) + t + Ex
r(Xs , v̄(Xs )) ds ,
0
and since r ≤ h̃, this implies by (3.56) that
1
(3.57)
lim sup Ev̄x [V(XT )] ≤ 1 + k0 (1 + ̺∗ ).
T →∞ T
Since V∗− ∈ o(V), it follows by (3.57) that
1
lim sup Ev̄x [V∗− (XT )] = 0.
T →∞ T
38
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Therefore, by Itô’s formula, we deduce from (3.37) that
Z T
1 v̄
(3.58)
r(Xs , v̄(Xs )) ds ≤ ̺∗ .
lim sup Ex
T →∞ T
0
On the other hand, since the only limit point of the mean empirical measures
v̄ , as t → ∞, is π , and π (r) = ̺ , then in view of Remark 3.6, we obtain
ζx,t
v̄
v̄
∗
v̄ (r) ≥ ̺ . This proves that equality holds in (3.58) and that
lim inf t→∞ ζx,t
∗
the “lim sup” may be replaced with “lim.”
Conversely, suppose v ∈ USM is optimal but does not satisfy (3.38). Then
there exists R > 0 and a nontrivial nonnegative f ∈ L∞ (BR ) such that
fε (x) := IBR (x)(Lv V ε (x) + rε (x, v(x)) − ̺ε )
converges to f , weakly in L1 (BR ), along some subsequence ε ց 0. By applying Itô’s formula to (3.33), we obtain
Z T ∧τR
1 v
1 v ε
ε
(E [V (XT ∧τR )] − V (x)) + Ex
rε (Xs , v(Xs )) ds
T x
T
0
(3.59)
Z T ∧τR
1 v
fε (Xs , v(Xs )) ds .
≥ ̺ε + Ex
T
0
Define, for some δ > 0,
G(x) := Evx
Z
0
τ̆δ
rε (Xs , v(Xs )) ds .
Since V is bounded from below, by Theorem 3.3(c) we have V ε ∈ O(G).
Invoking [1], Corollary 3.7.3, we obtain
ε
1 v ε
Ex [V (XT )] = 0,
T →∞ T
lim
and
lim Evx [V ε (XT ∧τR )] = Evx [V ε (XT )].
R→∞
Therefore, taking limits in (3.59), first as R ր ∞, and then as T → ∞, we
obtain
(3.60)
πv (rε ) ≥ ̺ε + πv (fε ).
Taking limits as ε ց 0 in (3.60), since µv has a strictly positive density in
BR , we obtain
πv (r) ≥ ̺∗ + πv (f ) > ̺∗ ,
which is a contradiction. This completes the proof of part (b).
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
39
The first equality (3.39) follows by Lemma 3.9, taking limits as δ ց 0.
To show that the second equality holds for any optimal control, suppose v̄
satisfies (3.38). By (3.24) we have, for δ > 0 and |x| > δ,
Z τR ∧τ̆δ
(1 + r(Xs , v̄(Xs ))) ds .
Ev̄x [V(XτR )I{τR <τ̆δ } ] ≤ V(x) + sup V − + Ev̄x
Bδ
0
It follows that (see [1], Lemma 3.3.4)
lim sup Ev̄x [V(XτR )I{τR <τ̆δ } ] < ∞,
R→∞
and since V∗− ∈ o(V) we must have
lim sup Ev̄x [V∗− (XτR )I{τR <τ̆δ } ] = 0.
R→∞
By the first equality in (3.47), we obtain V∗+ ∈ O(ϕ), with ϕ as defined in
(3.50) with v∗ replaced by v̄. Thus, in analogy to (3.51), we obtain
lim inf Ev̄x [V∗ (XτR ∧τ̆δ )] = Ev̄x [V∗ (Xτ̆δ )].
Rր∞
The rest follows as in the proof of Lemma 3.9 via (3.52).
We next prove part (d). We assume that U is a convex set and that
c(x, u, p) := {b(x, u) · p + r(x, u)}
is strictly convex in u if it is not identically a constant for fixed x and p. We
fix some point ū ∈ U. Define
B := {x ∈ Rd : c(x, ·, p) = c(x, ū, p) for all p}.
It is easy to see that on B both b and r do not depend on u. It is also easy
to check that B is a closed set. Let (V∗ , v∗ ) be the limit of (V ε , vε ), where
V∗ is the solution to (3.37) and v∗ is the corresponding limit of vε . We have
already shown that v∗ is a stable Markov control. We next show that it
is, in fact, a precise Markov control. By our assumption, vε is the unique
minimizing selector in (3.34) and, moreover, vε is continuous in x. By the
definition of rε it is clear that the restriction of vε to B does not depend on
ε. Let vε (x) = v ′ (x) on B. Using the strict convexity property of c(x, ·, ∇V∗ )
it is easy to verify that vε converges to the unique minimizer of (3.37) on
B c . In fact, since B c is open, then for any sequence xε → x ∈ B c it holds that
vε (xε ) → v∗ (x). This follows from the definition of the minimizer and the
uniform convergence of ∇V ε to ∇V∗ . Therefore, we see that v∗ is a precise
Markov control, v∗ = v ′ on B, and vε → v∗ pointwise as ε → 0. It is also easy
to check that pointwise convergence implies convergence in the topology of
Markov controls.
We now embark on the proof of Theorem 3.5.
40
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Proof of Theorem 3.5. The hypothesis that V̂ − ∈ o(V) implies that
the map V + 2V̂ is inf-compact. Also by (3.24) and (3.40), it satisfies
Lv̂ (V + 2V̂ )(x) ≤ 1 + 2ˆ
̺ − r(x, v̂(x)) − h(x, v̂(x))IHc (x, v̂(x))
≤ 2 + 2ˆ
̺ − 2k0−1 h̃(x, v̂(x))
∀x ∈ Rd .
R
Therefore, h̃(x, v̂(x)) dπv̂ < ∞ from which it follows that ̺v̂ < ∞. This
proves part (a).
By (3.24), we have
Z t
v̂
v̂
r(Xs , v̂(Xs )) ds ,
Ex [V(Xt )] ≤ V(x) + t + Ex
0
and since ̺v̂ < ∞, this implies that
1
lim sup Ev̂x [V(XT )] ≤ 1 + ̺v̂ .
(3.61)
T →∞ T
Since V̂ − ∈ o(V), it follows by (3.61) that
lim sup
T →∞
1 v̂ −
E [V̂ (XT )] = 0.
T x
Therefore, by Itô’s formula, we deduce from (3.40) that
Z T
1 v̂
1 v̂ +
E [V̂ (XT )] + Ex
r(Xs , v̂(Xs )) ds = ̺ˆ.
lim sup
T x
T
T →∞
0
This implies that ̺v̂ ≤ ̺ˆ and since by hypothesis ̺ˆ ≤ ̺∗ we must have ̺ˆ = ̺∗ .
Again by (3.24), we have
Z τR ∧τ̆δ
v̂
−
v̂
Ex [V(XτR )I{τR <τ̆δ } ] ≤ V(x) + sup V + Ex
(1 + r(Xs , v̂(Xs ))) ds .
Bδ
0
It follows by [1], Lemma 3.3.4, that
lim sup Ev̂x [V(XτR )I{τR <τ̆δ } ] < ∞,
R→∞
and since V̂ − ∈ o(V) we must have
(3.62)
lim sup Ev̂x [V̂ − (XτR )I{τR <τ̆δ } ] = 0.
R→∞
Using (3.62) and following the steps in the proof of the second equality in
(3.47), we obtain
Z τ̆δ
v̂
V̂ (x) ≥ Ex
(r(Xs , v̂(Xs )) − ̺∗ ) ds + inf V̂
Bδ
0
≥ V∗ (x) − sup V∗ + inf V̂ .
Bδ
Bδ
41
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
Taking limits as δ ց 0, we have V∗ ≤ V̂ . Since Lv̂ (V∗ − V̂ ) ≥ 0 and V∗ (0) =
V̂ (0), we must have V̂ = V∗ on Rd , and the proof of part (b) is complete.
R To prove part (c) note that by part (a) we haveR ̺v̂ < ∞. Therefore,
h̃ dπv̂ ≤ ∞ by Theorem 3.1(a), which implies that |V̂ | dµv̂ ≤ ∞ by the
hypothesis. Therefore, Ev̂x (|V̂ (Xt )|) converges as t → ∞ by [23], Proposition 2.6, which of course implies that 1t Ev̂x (|V̂ (Xt )|) tends to 0 as t → ∞.
Similarly, we deduce that 1t Evx∗ (|V̂ (Xt )|) as t → ∞. Applying Itô’s formula
to (3.40), with u ≡ v∗ , we obtain ̺ˆ ≤ ̺∗ . Another application with u ≡ v̂
results in ̺ˆ = ̺v̂ . Therefore, ̺ˆ = ̺∗ . The result then follows by part (b).
We finish this section with the proof of Theorem 3.6.
Proof of Theorem 3.6. We first show that limαց0 αVα (0) = ̺∗ . Let
Ṽ(t, x) := e−αt V(x), and τn (t) := τn ∧ t. Applying Itô’s formula to (3.24), we
obtain
Z τn (t)
U
U
Ex [Ṽ(τn (t), Xτn (t) )] ≤ V(x) − Ex
αṼ(s, Xs ) ds
0
It follows that
Z
U
Ex
(3.63)
τn (t)
−αs
e
0
≤
+ EU
x
Z
+ EU
x
Z
τn (t)
−αs
e
0
τn (t)
−αs
e
0
(1 − h(Xs , Us ))IHc (Xs , Us ) ds
(1 + r(Xs , Us ))IH (Xs , Us ) ds .
h(Xs , Us )IHc (Xs , Us ) ds
1
+ V(x) + EU
x
α
Z
τn (t)
0
e−αs r(Xs , Us )IH (Xs , Us ) ds .
Taking limits first as n ր ∞ and then as t ր ∞ in (3.63), and evaluating U
at an optimal α-discounted control vα∗ , relative to r we obtain the estimate,
using also (3.23),
Z ∞
2
∗
e−αs h̃(Xs , vα∗ (Xs )) ds ≤ + V(x) + 2Vα (x).
(3.64) 2k0−1 Exvα
α
0
By (3.23) and (3.64), it follows that
Z
∗
ε
vα
Vα (x) ≤ Vα (x) ≤ Ex
0
∞
−αs
e
rε (Xs , vα∗ (Xs )) ds
≤ Vα (x) + εk0 (α−1 + V(x) + Vα (x)).
42
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Multiplying by α and taking limits as α ց 0 we obtain
lim sup αVα (0) ≤ ̺ε ≤ (1 + εk0 ) lim sup αVα (0) + εk0 .
αց0
αց0
The same inequalities hold for the “lim inf.” Therefore, limαց0 αVα (0) = ̺∗ .
Let
Ṽ := lim (Vα − Vα (0)).
αց0
(Note that a similar result as Lemma 3.5 holds.) Then Ṽ satisfies
Z τ̆δ
[ β
v
Ṽ (x) ≤ lim Ex
(r(Xs , v(Xs )) − ̺∗ ) ds
∀v ∈
USM .
δց0
0
β>0
This can be obtained without the near-monotone assumption on the running
cost; see, for example, [1], Lemma 3.6.9 or Lemma 3.7.8. It follows from
(3.39) that Ṽ ≤ V∗ . On the other hand, since Lv∗ (Ṽ − V∗ ) ≥ 0, and Ṽ (0) =
V∗ (0), we must have Ṽ = V∗ by the strong maximum principle.
4. Approximation via spatial truncations. We introduce an approximation technique which is in turn used to prove the asymptotic convergence
results in Section 5.
Let v0 ∈ USSM be any control such that πv0 (r) < ∞. We fix the control v0
on the complement of the ball B̄l and leave the parameter u free inside. In
other words, for each l ∈ N we define
b(x, u),
if (x, u) ∈ B̄l × U,
bl (x, u) :=
b(x, v0 (x)),
otherwise,
r(x, u),
if (x, u) ∈ B̄l × U,
rl (x, u) :=
r(x, v0 (x)),
otherwise.
We consider the family of controlled diffusions, parameterized by l ∈ N, given
by
(4.1)
dXt = bl (Xt , Ut ) dt + σ(Xt ) dWt ,
with associated running costs rl (x, u). We denote by USM (l, v0 ) the subset of USM consisting of those controls v which agree with v0 on B̄lc . Let
η0 := πv0 (r). It is well known that there exists a nonnegative solution ϕ0 ∈
d
W2,p
loc (R ), for any p > d, to the Poisson equation (see [1], Lemma 3.7.8(ii))
Lv0 ϕ0 (x) = η0 − h̃(x, v0 (x))
x ∈ Rd ,
which is inf-compact, and satisfies, for all δ > 0,
Z τ̆δ
v0
ϕ0 (x) = Ex
(h̃(Xs , v0 (Xs )) − η0 ) ds + ϕ0 (Xτ̆δ )
0
∀x ∈ Rd .
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
43
We recall the Lyapunov function V from Assumption 3.1. We have the
following theorem.
Theorem 4.1. Let Assumptions 3.1 and 3.2 hold. Then for each l ∈ N
d
l
there exists a solution V l in W2,p
loc (R ), for any p > d, with V (0) = 0, of the
HJB equation
min[Lul V l (x) + rl (x, u)] = ̺l ,
(4.2)
u∈U
Lul
where
is the elliptic differential operator corresponding to the diffusion
in (4.1). Moreover, the following hold:
(i)
(ii)
2ϕ0 (x)
(iii)
(iv)
̺l is nonincreasing in l;
there exists a constant C0 , independent of l, such that V l (x) ≤ C0 +
for all l ∈ N;
(V l )− ∈ o(V + ϕ0 ) uniformly over l ∈ N;
the restriction of V l on Bl is in C 2 .
Proof. As earlier, we can show that
Z ∞
l
U
−αs
Vα (x) := inf Ex
e
rl (Xs , Us ) ds
U ∈U
0
is the minimal nonnegative solution to
(4.3)
min[Lul Vαl (x) + rl (x, u)] = αVαl (x),
u∈U
d
and Vαl ∈ W2,p
loc (R ), p > d. Moreover, any measurable selector from the minimizer in (4.3) is an optimal control. A similar estimate as in Lemma 3.5 holds
and, therefore, there exists a subsequence {αn }, along which Vαl n (x) − Vαl n (0)
d
l
converges to V l in W2,p
loc (R ), p > d, and αn Vαn (0) → ̺l as αn ց 0, and
(V l , ̺l ) satisfies (4.2) (see also [1], Lemma 3.7.8).
To show that πvl (r) = ̺l , v l is a minimizing selector in (4.2), we use the
following argument. Since πv0 (r) < ∞, we claim that there exists a nonnegative, inf-compact function g ∈ C(Rd ) such that πv0 (g · (1 + r)) < ∞. Indeed,
this is true since integrability and uniform integrability of a function under
any given measure are equivalent (see also the proof of [1], Lemma 3.7.2).
Since every control in USM (l, v0 ) agrees with v0 on Blc , then for any x0 ∈ B̄lc
the map
Z τ̆l
v
v 7→ Ex0
g(Xs )(1 + r(Xs , v(Xs ))) ds
0
is constant on USM (l, v0 ). By the equivalence of (i) and (iii) in Lemma 3.3.4
of [1], this implies that
sup
v∈USM (l,v0 )
πv (g · (1 + r)) < ∞
∀l ∈ N,
44
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
and thus r is uniformly integrable with respect to the family {πv :
v ∈ USM (l, v0 )} for any l ∈ N. It then follows by [1], Theorem 3.7.11, that
(4.4)
̺l =
inf
v∈USM (l,v0 )
πv (r),
l ∈ N.
This yields part (i). Moreover, in view of Lemmas 3.5 and 3.6, we deduce
that for any δ > 0 it holds that supBδ |V l | ≤ κδ , where κδ is a constant
independent of l ∈ N. It is also evident by (4.4) that ̺l is decreasing in l and
̺l ≤ η0 for all l ∈ N. Fix δ such that minu∈U h̃(x, u) ≥ 2η0 on Bδc . Since ϕ0 is
nonnegative, we obtain
Z τ̆δ
v0
(4.5)
∀x ∈ Rd .
(h̃(Xs , v0 (Xs )) − η0 ) ds ≤ ϕ0 (x)
Ex
0
Using an analogous argument as the one used in the proof of [1], Lemma 3.7.8,
we have
Z τ̆δ
l
v
(4.6) V (x) ≤ Ex
(rl (Xs , v(Xs )) − ̺l ) ds + κδ
∀v ∈ USM (l, v0 ).
0
Thus, by (4.5) and (4.6), and since by the choice of δ > 0, it holds that
r ≤ h̃ ≤ 2(h̃ − η0 ) on Bδc , we obtain
Z τ̆δ
V l (x) ≤ Evx0
2(h̃(Xs , v0 (Xs )) − η0 ) ds + κδ
0
(4.7)
≤ κδ + 2ϕ0 (x)
∀x ∈ Rd .
This proves part (ii).
Now fix l ∈ N. Let vαl be a minimizing selector of (4.3). Note then that
l
vα ∈ USM (l, v0 ). Therefore, vαl is a stable Markov control. Let vαl n → v l in
the topology of Markov controls along the same subsequence as above. Then
it is evident that v l ∈ USM (l, v0 ). Also from Lemma 3.8, we have
vl
l
Exαn [τ̆δ ] −→ Evx [τ̆δ ]
∀x ∈ Bδc , ∀δ > 0.
αn ց0
Using [1], Lemma 3.7.8, we obtain the lower bound
l
V l (x) ≥ −̺l Evx [τ̆δ ] − κδ .
(4.8)
By [1], Theorem 3.7.12(i) (see also (3.7.50) in [1]), it holds that
Z τ̆δ
l
vl
l
l
V (x) = Ex
(rl (Xs , v (Xs )) − ̺l ) ds + V (Xτ̆δ )
0
(4.9)
≥
l
Evx
Z
0
τ̆δ
l
l
rl (Xs , v (Xs )) ds − ̺l Evx [τ̆δ ] − κδ
∀x ∈ Blc .
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
45
By (3.23), we have
2k0−1 h̃(x, u)IH (x, u) ≤ 1 + r(x, u)IH (x, u).
Therefore, using the preceding inequality and (4.9), we obtain
l
(4.10)
V l (x) + (1 + ̺l )Evx [τ̆δ ] + κδ
Z τ̆δ
2 vl
l
l
≥ Ex
h̃(Xs , v (Xs ))IH (Xs , v (Xs )) ds .
k0
0
By (3.24), (4.9) and the fact that V is nonnegative, we have
Z τ̆δ
2 vl
l
Ex
h̃(Xs , v l (Xs ))IHc (Xs , v l (Xs )) ds − V(x) − Evx [τ̆δ ]
k0
0
Z τ̆δ
vl
l
l
≤ Ex
(4.11)
r(Xs , v (Xs ))IH (Xs , v (Xs )) ds
0
l
l
≤ V (x) + ̺l Evx [τ̆δ ] + κδ .
Combining (4.7), (4.10) and (4.11), we obtain
Z τ̆δ
l
l
h̃(Xs , v l (Xs )) ds ≤ k0 (1 + ̺l )Evx [τ̆δ ]
Evx
0
+
k0
V(x) + 2k0 (ϕ0 (x) + κδ )
2
for all l ∈ N. As earlier, using the inf-compact property of h̃ and the fact
that ̺l ≤ η0 is bounded, we can choose δ large enough such that
Z τ̆δ
vl
vl
l
(4.12) η0 Ex [τ̆δ ] ≤ Ex
h̃(Xs , v (Xs )) ds ≤ k0 V(x) + 4k0 (ϕ0 (x) + κδ )
0
for all l ∈ N. Since h̃ is inf-compact, part (iii) follows by (4.8) and (4.12).
Part (iv) is clear from regularity theory of elliptic PDE [18], Theorem 9.19,
page 243.
Similar to Theorem 3.3, we can show that oscillations of {V l } are uniformly bounded on compacts. Therefore, if we let l → ∞ we obtain a HJB
equation
(4.13)
min[Lu V̂ (x) + r(x, u)] = ̺ˆ,
u∈U
with V̂ ∈ C 2 (Rd ) and liml→∞ ̺l = ̺ˆ. By Theorem 4.1, we have the bound
(4.14)
V̂ (x) ≤ C0 + 2ϕ0 (x),
46
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
for some positive constant C0 . This of course, implies that V̂ + (x) ≤ C0 +
2ϕ0 (x). Moreover, it is straightforward to show that for any v ∈ USSM with
̺v < ∞, we have
1
lim sup Evx [V(Xt )] < ∞.
t→∞ t
Therefore, if in addition, we have
1
lim sup Evx [ϕ0 (Xt )] < ∞,
t→∞ t
then it follows by Theorem 4.1(iii) that
(4.15)
1
lim sup V̂ − (Xt ) −→ 0.
t→∞
t→∞ t
Theorem 4.2. Suppose that ϕ0 ∈ O(minu∈U h̃(·, u)). Then, under the
assumptions of Theorem 4.1, we have liml→∞ ̺l = ̺ˆ = ̺∗ , and V̂ = V∗ . Moreover, V∗ ∈ O(ϕ0 ).
Proof. Let {v̂l } be any sequence of measurable selectors from the minimizer of (4.2) and {πl } the corresponding sequence of ergodic occupation
measures. Since by Theorem 3.1 {πl } is tight, then by Remark 3.8 if v̂ is
a limit point of a subsequence {v̂l }, which we also denote by {v̂l }, then
π̂ = πv̂ is the corresponding limit point of {πl }. Therefore, by the lower
semi-continuity of π → π(r) we have
̺ˆ = lim πl (r) ≥ π̂(r) = ̺v̂ .
l→∞
It also holds that
(4.16)
Lv̂ V̂ (x) + r(x, v̂(x)) = ̺ˆ,
a.s.
By (4.15), we have
lim inf
T →∞
1 v̂
E [V̂ (XT )] = 0,
T x
and hence applying Itô’s rule on (4.16) we obtain ̺v̂ ≤ ̺ˆ. On the other hand,
if v∗ is an optimal stationary Markov control, then by the hypothesis ϕ0 ∈
O(h̃), the fact that πv∗ (h̃) < ∞, (4.14) and [23], Proposition 2.6, we deduce
that Evx∗ [V̂ + (Xt )] converges as t → ∞, which of course together with (4.15)
implies that 1t Ev̂x [V̂ (Xt )] tends to 0 as t → ∞. Therefore, evaluating (4.13) at
v∗ and applying Itô’s rule we obtain ̺v∗ ≥ ̺ˆ. Combining the two estimates,
we have ̺v̂ ≤ ̺ˆ ≤ ̺∗ , and thus equality must hold. Here, we have used the
fact that there exists an optimal Markov control for r by Theorem 3.4.
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
Next, we use the stochastic representation in (4.9)
Z τ̆δ
l
v̂l
l
(4.17) V (x) = Ex
(r(Xs , v̂l (Xs )) − ̺l ) ds + V (Xτ̆δ ) ,
0
47
x ∈ Bδc .
̺v
Fix any x ∈ Bδc . Since USM0 is compact, it follows that for each δ and R with
̺v
0 < δ < R, the map Fδ,R (v) : USM0 → R+ defined by
Z τ̆δ ∧τR
v
Fδ,R (v) := Ex
r(Xs , v(Xs )) ds
0
is continuous. Therefore, the map F̄δ := limRր∞ Fδ,R is lower semi-continuous.
It follows that
Z τ̆δ
Z τ̆δ
v̂
v̂l
r(Xs , v̂(Xs )) ds ≤ lim Ex
(4.18) Ex
r(Xs , v̂l (Xs )) ds .
l→∞
0
0
On the other hand, since h̃ is inf-compact, it follows by (4.12) that τ̆δ is
uniformly integrable with respect to the measures {Pv̂xl }. Therefore, as also
shown in Lemma 3.8, we have
lim Ev̂xl [τ̆δ ] = Ev̂x [τ̆δ ].
(4.19)
l→∞
Since V l → V̂ , uniformly on compact sets, and ̺l → ̺∗ , as l → ∞, it follows
by (4.17)–(4.19) that
Z τ̆δ
V̂ (x) ≥ Ev̂x
(r(Xs , v̂(Xs )) − ̺∗ ) ds + V̂ (Xτ̆δ ) ,
x ∈ Bδc .
0
Therefore, by Theorem 3.4(b), for any δ > 0 and x ∈ Bδc we obtain
Z τ̆δ
v̂
V∗ (x) ≤ Ex
(r(Xs , v̂(Xs )) − ̺∗ ) ds + V∗ (Xτ̆δ )
0
≤ V̂ (x) + Ev̂x [V ∗ (Xτ̆δ )] − Ev̂x [V̂ (Xτ̆δ )],
and taking limits as δ ց 0, using the fact that V̂ (0) = V∗ (0) = 0, we obtain V∗ ≤ V̂ on Rd . Since Lv̂ (V∗ − V̂ ) ≥ 0, we must have V∗ = V̂ . By Theorem 4.1(ii), we have V∗ ∈ O(ϕ0 ).
Remark 4.1. It can be seen from the proof of Theorem 4.2 that the assumption ϕ0 ∈ O(h̃) can be replaced by the weaker hypothesis that
1 v∗
T Ex [ϕ0 (XT )] → 0 as T → ∞.
It is easy to see that if one replaces rl by
1
r(x, u) + f (u),
for x ∈ B̄l ,
l
rl (x, u) =
r(x, v0 (x)) + 1 f (v0 (x)),
otherwise,
l
Remark 4.2.
48
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
for some positive valued continuous function f , the same conclusion of Theorem 4.2 holds.
If we consider the controlled dynamics given by (3.20), with running cost
as in (3.11), then there exists a function V ∼ |x|m satisfying (3.6). This fact
is proved in Proposition 3.1. There also exists a Lyapunov function V0 ∈
O(|x|m ), satisfying the assumption in Theorem 4.2, relative to any control
v0 with πv0 (h̃) < ∞, where h̃ is selected as in Remark 3.5. Indeed, in order
to construct V0 we recall the function ψ in (3.21). Let V0 ∈ C 2 (Rd ) be any
function such that V0 = ψ m/2 on the complement of the unit ball centered
at the origin. Observe that for some positive constants κ1 and κ2 it holds
that
κ1 |x|2 ≤ ψ(x) ≤ κ2 |x|2 .
Then a straightforward calculation from (3.22) shows that (3.8) holds with
the above choice of V0 . By the stochastic representation of ϕ0 , it follows that
ϕ0 ∈ O(V0 ). We have proved the following corollary.
Corollary 4.1. For the queueing diffusion model with controlled dynamics given by (3.20), and running cost given by (3.11), there exists a
solution (up to an additive constant) to the associated HJB in the class of
functions in C 2 (Rd ) whose positive part grows no faster than |x|m and whose
negative part is in o(|x|m ).
We conclude this section with the following remark.
Remark 4.3. Comparing the approximation technique introduced in
this section with that in Section 3, we see that the spatial truncation technique relies on more restrictive assumption on the Lyapunov function V0
and the running cost function (Theorem 4.2). In fact, the growth of h̃ also
restricts the growth of r by (3.23). Therefore, the class of ergodic diffusion
control problems considered in this section is more restrictive. For example,
if the running cost r satisfies (3.11) and h̃ ∼ |x|m , then it is not obvious that
one can obtain a Lyapunov function V0 with growth at most of order |x|m .
For instance, if the drift has strictly sub-linear growth, then it is expected
that the Lyapunov function should have growth larger than |x|m . Therefore,
the class of problems considered in Section 3 is larger than those considered
in this section.
5. Asymptotic convergence. In this section, we prove that the value of
the ergodic control problem corresponding to the multi-class M/M/N + M
queueing network asymptotically converges to ̺∗ , the value of the ergodic
control for the controlled diffusion.
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
49
Recall the diffusion-scaled processes X̂ n , Q̂n and Ẑ n defined in (2.4), and
from (2.5) and (2.6) that
Z t
Z t
n
n
n
n
n
n
X̂i (t) = X̂i (0) + ℓi t − µi
Ẑi (s) ds − γi
Q̂ni (s) ds
(5.1)
0
0
n
n
n
+ M̂A,i
(t) − M̂S,i
(t) − M̂R,i
(t),
n (t), M̂ n (t) and M̂ n (t), i = 1, . . . , d, as defined in (2.6), are
where M̂A,i
S,i
R,i
square integrable martingales w.r.t. the filtration {Ftn } with quadratic variations
λni
t,
n
Z
µni t n
n
hM̂S,i i(t) =
Z (s) ds,
n 0 i
Z
γin t n
n
Q (s) ds.
hM̂R,i i(t) =
n 0 i
n
hM̂A,i
i(t) =
5.1. The lower bound. In this section, we prove Theorem 2.1.
Proof of Theorem 2.1. Recall the definition of V̂ n in (2.10), and
consider a sequence such that supn V̂ n (X̂ n (0)) < ∞. Let ϕ ∈ C 2 (Rd ) be any
function satisfying ϕ(x) := |x|m for |x| ≥ 1. As defined in Section 1.3, ∆X(t)
denotes the jump of the process X at time t. Applying Itô’s formula on ϕ
(see, e.g., [24], Theorem 26.7), we obtain from (5.1) that
Z t
Θn1 (X̂1n (s), Ẑ1n (s))ϕ′ (X̂1n (s)) ds
E[ϕ(X̂1n (t))] = E[ϕ(X̂1n (0))] + E
0
+E
Z
0
+E
t
Θn2 (X̂1n (s), Ẑ1n (s))ϕ′′ (X̂1n (s)) ds
X
s≤t
∆ϕ(X̂1n (s)) − ϕ′ (X̂1n (s−)) · ∆X̂1n (s)
1 ′′ n
n
n
− ϕ (X̂ (s−))∆X̂1 (s)∆X̂1 (s) ,
2
where
Θn1 (x, z) := ℓn1 − µn1 z − γ1n (x − z),
µn1 z + γ1n (x − z)
λn1
1 n
n
√
+
µ ρ1 +
.
Θ2 (x, z) :=
2 1
n
n
50
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Since {ℓn1 } is a bounded sequence, it is easy to show that for all n there exist
positive constants κi , i = 1, 2, independent of n, such that
Θn1 (x, z)ϕ′ (x) ≤ κ1 (1 + |(e · x)+ |m ) − κ2 |x|m ,
κ2
Θn2 (x, z)ϕ′′ (x) ≤ κ1 (1 + |(e · x)+ |m ) + |x|m ,
4
provided that x − z ≤ (e · x)+ and √zn ≤ 1. We next compute the terms
corresponding to the jumps. For that, first we see that the jump size is of
order √1n . We can also find a positive constant κ3 such that
sup |ϕ′′ (y)| ≤ κ3 (1 + |x|m−2 )
∀x ∈ Rd .
|y−x|≤1
Using Taylor’s approximation, we obtain the inequality
∆ϕ(X̂1n (s)) − ϕ′ (X̂1n (s−)) · ∆X̂1n (s) ≤
1
sup
|ϕ′′ (y)|[∆(X̂1n (s))]2 .
2 |y−X̂ n (s−)|≤1
1
Hence, combining the above facts we obtain
X
E
∆ϕ(X̂1n (s)) − ϕ′ (X̂1n (s−)) · ∆X̂1n (s)
s≤t
(5.2)
1 ′′ n
n
n
− ϕ (X̂1 (s−))∆X̂1 (s)∆X̂1 (s)
2
X
≤E
κ3 (1 + |X̂1n (s−)|m−2 )(∆(X̂1n (s)))2
s≤t
t
λn1
µn1 Z1n (s) γ1n Qn1 (s)
= κ3 E
+
+
ds
n
n
n
0
Z t
κ2
≤E
κ4 + |X̂1n (s)|m + κ5 ((e · X̂ n (s))+ )m ds ,
4
0
Z
(1 + |X̂1n (s)|m−2 )
for some suitable positive constants κ4 and κ5 , independent of n, where in
the second inequality we use the fact that the optional martingale [X̂1n ] is
the sum of the squares of the jumps, and that [X̂1n ] − hX̂1n i is a martingale.
Therefore, for some positive constants C1 and C2 it holds that
0 ≤ E[ϕ(X̂1n (t))]
(5.3)
≤ E[ϕ(X̂1n (0))] + C1 t −
+ C2 E
Z
t
0
κ2
E
2
Z
0
t
|X̂1n (s)|m ds
((e · X̂ (s)) ) ds .
n
+ m
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
51
By (2.8), we have
r(Q̂n (s)) ≥
c1
((e · X̂ n (s))+ )m ,
dm
which, combined with the assumption that supn V̂ n (X̂ n (0)) < ∞, implies
that
Z T
1
+ m
n
((e · X̂ (s)) ) ds < ∞.
sup lim sup E
n
T →∞ T
0
In turn, from (5.3) we obtain
sup lim sup
n
T →∞
1
E
T
Z
T
0
|X̂1n (s)|m ds < ∞.
Repeating the same argument for coordinates i = 2, . . . , d, we obtain
Z T
1
(5.4)
|X̂ n (s)|m ds < ∞.
sup lim sup E
n
T →∞ T
0
We introduce the process
n
X̂i (t) − Ẑin (t)
,
n
n
+
Ui (t) :=
(e · X̂ (t))
ed ,
i = 1, . . . , d,
if (e · X̂ n (t))+ > 0,
otherwise.
Since Z n is work-conserving, it follows that U n takes values in S, and Uin (t)
represents the fraction of class i customers in queue. Define the mean empirical measures
Z T
1
n
n
n
ΦT (A × B) := E
IA×B (X̂ (s), U (s)) ds
T
0
for Borel sets A ⊂ Rd and B ⊂ S.
From (5.4), we see that the family {ΦnT : T > 0, n ≥ 1} is tight. Hence, for
any sequence Tk → ∞, there exists a subsequence, also denoted by Tk , such
that ΦnTk → π n , as k → ∞. It is evident that {π n : n ≥ 1} is tight. Let π n → π
along some subsequence, with π ∈ P(Rd × S). Therefore, it is not hard to
show that
Z
n
n
r̃(x, u)π(dx, du),
lim V̂ (X̂ (0)) ≥
n→∞
Rd ×U
where, as defined earlier, r̃(x, u) = r((e · x)+ u). To complete the proof of the
theorem, we only need to show that π is an ergodic occupation measure
for the diffusion. For that, consider f ∈ Cc∞ (Rd ). Recall that [X̂in , X̂jn ] = 0
52
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
for i 6= j [30], Lemmas 9.2 and 9.3. Therefore, using Itô’s formula and the
definition of ΦnT , we obtain
1
E[f (X̂ n (T ))]
T
1
= E[f (X̂ n (0))]
T
!
Z
d
X
(5.5)
Ani (x, u) · fxi (x) + Bin (x, u)fxi xi (x) ΦnT (dx, du)
+
Rd ×U
+
i=1
d
"
X
1 X
fxi (X̂ n (s−)) · ∆X̂in (s)
E
∆f (X̂ n (s)) −
T
s≤T
i=1
#
d
1 X
n
n
n
fxi xj (X̂ (s−))∆X̂i (s)∆X̂j (s) ,
−
2
i,j=1
where
Ani (x, u) := ℓni − µni (xi − (e · x)+ ui ) − γin (e · x)+ ui ,
λn µn xi + (γin − µni )(e · x)+ ui
1 n
√
µi ρi + i + i
.
Bin (x, u) :=
2
n
n
We first bound the last term in (5.5). Using Taylor’s formula, we see that
n
∆f (X̂ (s)) −
−
d
X
i=1
∇f (X̂ n (s−)) · ∆X̂ n (s)
d
1 X
fxi xj (X̂ n (s−))∆X̂in (s)∆X̂jn (s)
2
i,j=1
=
d
kkf kC 3 X
√
|∆X̂in (s)∆X̂jn (s)|
n
i,j=1
for some positive constant k, where we use the fact that the jump size is
√1 . Hence, using the fact that independent Poisson processes do not have
n
simultaneous jumps w.p.1, using the identity Q̂ni = X̂in − Ẑin , we obtain
"
d
X
1 X
∇f (X̂ n (s−)) · ∆X̂ n (s)
E
∆f (X̂ n (s)) −
T
i=1
s≤T
(5.6)
#
d
1 X
−
fxi xj (X̂ n (s−))∆X̂in (s)∆X̂jn (s)
2
i,j=1
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
kkf k 3
≤ √C E
T n
"Z
T
0
53
d n
X
λ
i=1
#
n Z n (s)
n Qn (s)
µ
γ
i
+ i i
+ i i
ds .
n
n
n
Therefore, first letting T → ∞ and using (5.2) and (5.4) we see that the
expectation on the right-hand side of (5.6) is bounded above. Therefore, as
n → ∞, the left-hand side of (5.6) tends to 0. Thus, by (5.5) and the fact
that f is compactly supported, we obtain
Z
Lu f (x)π(dx, du) = 0,
Rd ×U
where
Lu f (x) = λi ∂ii f (x) + (ℓi − µi (xi − (e · x)+ ui ) − γi (e · x)+ ui )∂i f (x).
Therefore, π ∈ G.
5.2. The upper bound. The proof of the upper bound in Theorem 2.2 is
a little more involved than that of the lower bound. Generally, it is very
helpful if one has uniform stability across n ∈ N (see, e.g., [12]). In [12],
uniform stability is obtained from the reflected dynamics with the Skorohod
mapping. However, here we establish the asymptotic upper bound by using
the technique of spatial truncation that we have introduced in Section 4.
Let vδ be any precise continuous control in USSM satisfying vδ (x) = u0 =
(0, . . . , 0, 1) for |x| > K > 1.
First, we construct a work-conserving admissible policy for each n ∈ N
(see [7]). Define a measurable map ̟ : {z ∈ Rd+ : e · z ∈ Z} → Zd+ as follows:
for z = (z1 , . . . , zd ) ∈ Rd , let
!
d
X
(zi − ⌊zi ⌋) .
̟(z) := ⌊z1 ⌋, . . . , ⌊zd−1 ⌋, ⌊zd ⌋ +
i=1
Note that |̟(z) − z| ≤ 2d. Define
uh (x) := ̟((e · x − n)+ vδ (x̂n )),
x ∈ Rd ,
x1 − ρ1 n
xd − ρd n
n
√
x̂ :=
,..., √
,
n
n
n
√ o
An := x ∈ Rd+ : sup |xi − ρi n| ≤ K n .
i
We define a state-dependent, work-conserving policy as follows:
n
X − uh (X n ),
if X n ∈ An ,
i
!
+
i−1
X
Zin [X n ] :=
(5.7)
n
n
,
otherwise.
Xj
X ∧ n−
i
j=1
54
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Therefore, whenever the state of the system is in Acn , the system works under
the fixed priority policy with the least priority given to class-d jobs. First,
we show that this is a well-defined policy for all large n. It is enough to
show that Xin − uh (X n ) ≥ 0 for all i when X n ∈ An . If not, then for some
i, 1 ≤ i ≤ d, we must have Xin − uh (X n ) < 0 and so Xin < (e · X n − n)+ + d.
Since X n ∈ An , we obtain
√
−K n + ρi n ≤ Xin
< (e · X n − n)+ + d
!+
d
X
n
(Xi − ρi n)
=
+d
i=1
√
≤ dK n + d.
But this cannot hold for large n. Hence, this policy is well defined for all
large n. Under the policy defined in (5.7), X n is a Markov process and its
generator given by
Ln f (x) =
d
X
i=1
+
λni (f (x + ei ) − f (x)) +
d
X
i=1
d
X
i=1
µni Zin [x](f (x − ei ) − f (x))
γin Qni [x](f (x − ei ) − f (x)),
x ∈ Zd+ ,
where Z n [x] is as above and Qn [x] := x − Z n [x]. It is easy to see that, for
x∈
/ An ,
! + #+
"
i−1
X
n
xj
.
Qi [x] = xi − n −
j=1
Lemma 5.1. Let X n be the Markov process corresponding to the above
control. Let q be an even positive integer. Then there exists n0 ∈ N such that
Z T
1
q
n
sup lim sup E
|X̂ (s)| ds < ∞,
n≥n0 T →∞ T
0
where X̂ n = (X̂1n , . . . , X̂dn )T is the diffusion-scaled process corresponding to
the process X n , as defined in (2.4).
Proof. The proof technique is inspired by [6], Lemma 3.1. Define
fn (x) :=
d
X
i=1
βi (xi − ρi n)q ,
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
55
where βi , i = 1, . . . , d, are positive constants to be determined later. We first
show that for a suitable choice of βi , i = 1, . . . , d, there exist constants Ci ,
i = 1, 2, independent of n ≥ n0 , such that
Ln fn (x) ≤ C1 nq/2 − C2 fn (x),
(5.8)
x ∈ Zd+ .
Choose n large enough so that the policy is well defined. We define Yin :=
xi − ρi n. Note that
Also,
(a ± 1)q − aq = ±qa · aq−2 + O(aq−2 ),
µni Zin [x] = µni xi
Ln fn (x) =
d
X
i=1
−
−
≤
(5.9)
i=1
d
X
i=1
i=1
d
X
d
X
i=1
+
βi µni xi [qYin |Yin |q−2 + O(|Yin |q−2 )]
βi (γin − µni )Qni [x][qYin |Yin |q−2 + O(|Yin |q−2 )]
βi (λni + µni xi + |γin − µni |Qin [x])O(|Yin |q−2 )
i=1
≤
Then
a ∈ R.
βi λni [qYin |Yin |q−2 + O(|Yin |q−2 )]
d
X
d
X
+
− µni Qni [x].
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x])
βi (λni + (µni + |γin − µni |)(Yin + ρi n))O(|Yin |q−2 )
d
X
i=1
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x]),
where in the last inequality we use the fact that Qni [x] ≤ xi for x ∈ Zd+ . Let
√
δin := λni − µni ρi n = O( n).
The last estimate is due to the assumptions in (2.1) concerning the parameters in the Halfin–Whitt regime. Then
(5.10)
d
X
i=1
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x])
= −q
d
X
i=1
βi µni |Yin |q +
d
X
i=1
βi qYin |Yin |q−2 (δin − (γin − µni )Qni [x]).
56
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
If x ∈ An and n is large, then
Qni [x] = uh (x) = ̟((e · x − n)+ vδ (x̂n ))
√
≤ (e · x − n)+ + d ≤ 2dK n.
Let x ∈ Acn . We use the fact that for any a, b ∈ R it holds that a+ − b+ =
ξ[a − b] for some ξ ∈ [0, 1]. Also,
!+ #+
"
i−1
X
nρj
= 0,
i = 1, . . . , d.
nρi − n −
j=1
Thus, we obtain maps ξ, ξ̃ : Rd → [0, 1]d such that
!+ #+
"
i−1
X
nρj
− Qni [x]
−Qni [x] = nρi − n −
j=1
= ξi (x)(nρi − xi ) − ξ˜i (x)
i−1
X
(xj − nρj ),
x ∈ Acn .
j=1
Hence, from (5.10) we obtain
d
X
i=1
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x])
d
d
X
X
√
βi ((1 − ξi (x))µni + ξi (x)γin )|Yin |q
βi |Yin |q−1 − q
≤ O( n)q
i=1
i=1
+q
d
X
i=1
βi Yin |Yin |q−2
δin
− (γin
− µni )ξ̃i (x)
where we used the fact that on An we have
!+ #+
"
i−1
X
√
= O( n)
xj
xi − n −
j=1
i−1
X
Yjn
j=1
!
,
∀i.
Observe that there exists ϑ > 0, independent of n due to (2.1), such that
(1 − ξi (x))µni + ξi (x)γin ≥ min(µni , γin ) ≥ ϑ
for all n ∈ N, all x ∈ Rd , and all i = 1, . . . , d. As a result, we obtain
d
X
i=1
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x])
57
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
d
d
X
X
√
n q−1
βi |Yin |q
βi |Yi |
− qϑ
≤ O( n)q
(5.11)
i=1
i=1
+q
d
X
i=1
βi Yin |Yin |q−2 δin − (γin − µni )ξ̃i (x)
i−1
X
j=1
!
Ynj .
We next estimate the last term on the right-hand side of (5.11). Let κ :=
ϑ
. Using Young’s inequality, we obtain the estisupn,i |γin − µni |, and ε1 := 8κ
mate
q
i−1
i−1
X
1 X n
n q−1
j
n q
Yj .
|Yi |
Yn ≤ ε1 |Yi | + q−1
ε
1
j=1
j=1
Therefore,
q
d
X
i=1
βi Yin |Yin |q−2 −(γin − µni )ξ̃i (x)
≤ qκ
d
X
ε1 βi |Yin |q
≤ qκ
d
X
ε1 βi |Yin |q
i=1
i=1
+
+
i−1
βi X
ε1q−1
j=1
βi
q−1
ε1q−1
i−1
X
Yjn
j=1
Yjn
d
!
q!
i−1
X
|Yjn |q
j=1
!
!
d
i−1
qϑ X
βi q−1 X n q
n q
=
βi |Yi | + q d
|Yj | .
8
ε1
i=1
j=1
We choose β1 = 1 and for i ≥ 2, we define βi by
εq1
βi := q min βj .
d j≤i−1
With this choice of βi it follows from above that
q
d
X
i=1
βi Yin |Yin |q−2
−(γin
− µni )ξ̃i (x)
i−1
X
j=1
Ynj
!
Using the preceding inequality in (5.11), we obtain
(5.12)
d
X
i=1
d
qϑ X
≤
βi |Yin |q .
4
i=1
βi qYin |Yin |q−2 (λni − µni xi − (γin − µni )Qni [x])
d
d
X
√
3 X
βi |Yin |q .
≤ O( n)q
βi |Yin |q−1 − qϑ
4
i=1
i=1
58
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Combining (5.9) and (5.12), we obtain
Ln fn (x) ≤
(5.13)
d
X
i=1
d
X
√
O(n)O(|Yin |q−2 )
O( n)O(|Yin |q−1 ) +
i=1
d
3 X
βi |Yin |q .
− qϑ
4
i=1
By Young’s inequality, for any ε > 0, we have the bounds
√
√
O( n)O(|Yin |q−1 ) ≤ ε[O(|Yin |q−1 )]q/(q−1) + ε(1−q) [O( n)]q ,
O(n)O(|Yin |q−2 ) ≤ ε[O(|Yin |q−2 )]q/(q−2) + ε(1−q/2) [O(n)]q/2 .
Thus, choosing ε properly in (5.13) we obtain (5.8).
We proceed to complete the proof of the lemma by applying (5.8). First,
we observe that E[sups∈[0,T ] |X n (s)|p ] is finite for any p ≥ 1 as this quantity
is dominated by the Poisson arrival process. Therefore, from (5.8) we see
that
Z T
n
n
n
Ln fn (X (s)) ds
E[fn (X (T ))] − fn (X (0)) = E
0
≤ C1 n
q/2
T − C2 E
Z
T
0
fn (X (s)) ds ,
n
which implies that
#
"Z
d
d
T X
X
βi (X̂in (0))q .
C2 E
βi (X̂in (s))q ds ≤ C1 T +
0
i=1
i=1
Hence, the proof follows by dividing both sides by T and letting T → ∞.
Proof of Theorem 2.2. Let r be the given running cost with polynomial growth with exponent m in (2.8). Let q = 2(m + 1). Recall that
r̃(x, u) = r((e · x)+ u) for (x, u) ∈ Rd × S. Then r̃ is convex in u and satisfies
(3.11) with the same exponent m. For any δ > 0, we choose vδ ∈ USSM such
that vδ is a continuous precise control with invariant probability measure µδ
and
Z
(5.14)
r̃(x, vδ (x))µδ (dx) ≤ ̺∗ + δ.
Rd
We also want the control vδ to have the property that vδ (x) = (0, . . . , 0, 1)
outside a large ball. To obtain such vδ , we see that by Theorems 4.1, 4.2 and
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
59
Remark 4.2 we can find vδ′ and a ball Bl for l large, such that vδ′ ∈ USSM ,
vδ′ (x) = ed for |x| > l, vδ′ is continuous in Bl , and
Z
δ
r̃(x, vδ′ (x))µ′δ (dx) − ̺∗ < ,
2
d
R
where µ′δ is the invariant probability measure corresponding to vδ′ . We note
that vδ′ might not be continuous on ∂Bl . Let {χn : n ∈ N} be a sequence
c
of cut-off functions such that χn ∈ [0, 1], it vanishes on Bl−(1/n)
, and it
n
n
takes the value 1 on Bl−(2/n) . Define the sequence vδ (x) := χ (x)vδ′ (x) +
(1 − χn (x))ed . Then vδn → vδ′ , as n → ∞, and the convergence is uniform
on the complement of any neighborhood of ∂Bl . Also by Proposition 3.1
the corresponding invariant probability measures µnδ are exponentially tight.
Thus,
Z
Z
′
′
r̃(x, vδn (x))µnδ (dx) −→ 0.
r̃(x, vδ (x))µδ (dx) −
Rd
Rd
n→∞
Combining the above two expressions, we can easily find vδ which satisfies
(5.14). We construct a scheduling policy as in Lemma 5.1. By Lemma 5.1,
we see that for some constant K1 it holds that
Z T
1
q
n
(5.15)
|X̂ (s)| ds < K1 ,
q = 2(m + 1).
sup lim sup E
n≥n0 T →∞ T
0
Define
vh (x) := ̟((e · x − n)+ vδ (x̂n )),
√
v̂h (x̂n ) := ̟( n(e · x̂n )+ vδ (x̂n )).
Since vδ (x̂n ) = (0, . . . , 0, 1) when |x̂n | ≥ K, it follows that
Qn [X n ] = X n − Z n [X n ] = vh (X n )
P
n
for large n, provided that d−1
i=1 Xi ≤ n. Define
( d−1
)
X
√
x̂ni > ρd n .
Dn := x :
i=1
Then
1
n
r(Q̂ (t)) = r √ v̂h (X̂ (t)) + r(X̂ n (t) − Ẑ n (t))I{X̂ n (t)∈Dn }
n
1
n
− r √ v̂h (X̂ (t)) I{X̂ n (t)∈Dn } .
n
n
60
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
Define, for each n, the mean empirical measure ΨnT by
Z T
1
IA (X̂ n (t)) dt .
ΨnT (A) := E
T
0
By (5.15), the family {ΨnT : T > 0, n ≥ 1} is tight. We next show that
Z
Z T
1
n
(5.16) lim lim sup E
r((e · x)+ vδ (x))µδ (dx).
r(Q̂ (t)) dt =
n→∞ T →∞ T
Rd
0
For each n, select a sequence {Tkn : k ∈ N} along which the “lim sup” in (5.16)
is attained. By tightness, there exists a limit point Ψn of ΨnT n . Since Ψn has
k
support on a discrete lattice, we have
Z
Z
1
1
n
r √ v̂h (x) Ψn (dx).
r √ v̂h (x) ΨT n (dx) −→
k
k→∞
d
n
n
d
R
R
Therefore,
1
lim sup E
T →∞ T
Z
T
0
1
r √ v̂h (x) Ψn (dx) ± E n ,
r(Q̂ (t)) dt ≶
n
d
R
n
Z
where
E n = lim sup
T →∞
1
E
T
Z
0
T
1
r(Q̂n (t)) + r √ v̂h (X̂ n (t)) I{X̂ n (t)∈Dn } dt .
n
By (5.15), the family {Ψn : n ≥ 1} is tight. Hence, it has a limit Ψ. By
definition, we have
1
2d
√ v̂h (x) − (e · x)+ vδ (x) ≤ √ .
n
n
Thus, using the continuity property of r and (2.8) it follows that
Z
Z
1
n
r √ v̂h (x) Ψ (dx) −→
r((e · x)+ vδ (x))Ψ(dx),
n→∞
n
d
d
R
R
along some subsequence. Therefore, in order to complete the proof of (5.16)
we need to show that
lim sup E n = 0.
n→∞
Since the policies are work-conserving, we observe that 0 ≤ X̂ n − Ẑ n ≤ (e ·
X̂ n )+ , and therefore for some positive constants κ1 and κ2 , we have
1
n
r √ v̂h (X̂ (t)) ∨ r(X̂ n (t) − Ẑ n (t)) ≤ κ1 + κ2 [(e · X̂ n )+ ]m .
n
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
61
Given ε > 0 we can choose n1 so that for all n ≥ n1 ,
Z T
1
+ m
n
√
√
[(e · X̂ (s)) ] I{|X̂ n (s)|>(ρd / d) n} ds ≤ ε,
lim sup E
T →∞ T
0
p
where we use (5.15). We observe that Dn ⊂ {|x̂n | > ρd d/n}. Thus, (5.16)
holds. In order to complete the proof, we only need to show that Ψ is the
invariant probability measure corresponding to vδ . This can be shown using
the convergence of generators as in the proof of Theorem 2.1.
6. Conclusion. We have answered some of the most interesting questions
for the ergodic control problem of the Markovian multi-class many-server
queueing model. This current study has raised some more questions for future research. One of the interesting questions is to consider nonpreemptive
policies and try to establish asymptotic optimality in the class of nonpreemptive admissible polices [7]. It will also be interesting to study a similar
control problem when the system has multiple heterogeneous agent pools
with skill-based routing.
It has been observed that customers’ service requirements and patience
times are nonexponential [10] in some situations. It is therefore important
and interesting to address similar control problems under general assumptions on the service and patience time distributions.
Acknowledgements. We thank the anonymous referee for many helpful
comments that have led to significant improvements in our paper. Ari Arapostathis acknowledges the hospitality the Department of Industrial and
Manufacturing Engineering in Penn State while he was visiting at the early
stages of this work. Guodong Pang acknowledges the hospitality of the Department of Electrical and Computer Engineering at University of Texas at
Austin while he was visiting for this work. Part of this work was done while
Anup Biswas was visiting the Department of Industrial and Manufacturing
Engineering in Penn State. Hospitality of the department is acknowledged.
REFERENCES
[1] Arapostathis, A., Borkar, V. S. and Ghosh, M. K. (2012). Ergodic Control
of Diffusion Processes. Encyclopedia of Mathematics and Its Applications 143.
Cambridge Univ. Press, Cambridge. MR2884272
[2] Arisawa, M. and Lions, P.-L. (1998). On ergodic stochastic control. Comm. Partial
Differential Equations 23 2187–2217. MR1662180
[3] Atar, R. (2005). Scheduling control for queueing systems with many servers: Asymptotic optimality in heavy traffic. Ann. Appl. Probab. 15 2606–2650. MR2187306
[4] Atar, R. and Biswas, A. (2014). Control of the multiclass G/G/1 queue in the
moderate deviation regime. Ann. Appl. Probab. 24 2033–2069. MR3226171
[5] Atar, R., Giat, C. and Shimkin, N. (2010). The cµ/θ rule for many-server queues
with abandonment. Oper. Res. 58 1427–1439. MR2560545
62
A. ARAPOSTATHIS, A. BISWAS AND G. PANG
[6] Atar, R., Giat, C. and Shimkin, N. (2011). On the asymptotic optimality of the
cµ/θ rule under ergodic cost. Queueing Syst. 67 127–144. MR2771197
[7] Atar, R., Mandelbaum, A. and Reiman, M. I. (2004). Scheduling a multi class
queue with many exponential servers: Asymptotic optimality in heavy traffic.
Ann. Appl. Probab. 14 1084–1134. MR2071417
[8] Bogachev, V. I., Krylov, N. V. and Röckner, M. (2001). On regularity of transition probabilities and invariant measures of singular diffusions under minimal
conditions. Comm. Partial Differential Equations 26 2037–2080. MR1876411
[9] Borkar, V. S. (1989). Optimal Control of Diffusion Processes. Pitman Research
Notes in Mathematics Series 203. Longman, Harlow. MR1005532
[10] Brown, L., Gans, N., Mandelbaum, A., Sakov, A., Shen, H., Zeltyn, S. and
Zhao, L. (2005). Statistical analysis of a telephone call center: A queueingscience perspective. J. Amer. Statist. Assoc. 100 36–50. MR2166068
[11] Budhiraja, A., Ghosh, A. and Liu, X. (2014). Scheduling control for Markovmodulated single-server multiclass queueing systems in heavy traffic. Queueing
Syst. 78 57–97. MR3238008
[12] Budhiraja, A., Ghosh, A. P. and Lee, C. (2011). Ergodic rate control problem for single class queueing networks. SIAM J. Control Optim. 49 1570–1606.
MR2817491
[13] Dai, J. G. and Tezcan, T. (2008). Optimal control of parallel server systems with
many servers in heavy traffic. Queueing Syst. 59 95–134. MR2430811
[14] Dieker, A. B. and Gao, X. (2013). Positive recurrence of piecewise Ornstein–
Uhlenbeck processes and common quadratic Lyapunov functions. Ann. Appl.
Probab. 23 1291–1317. MR3098433
[15] Gamarnik, D. and Stolyar, A. L. (2012). Multiclass multiserver queueing system
in the Halfin-Whitt heavy traffic regime: Asymptotics of the stationary distribution. Queueing Syst. 71 25–51. MR2925789
[16] Gamarnik, D. and Zeevi, A. (2006). Validity of heavy traffic steady-state approximation in generalized Jackson networks. Ann. Appl. Probab. 16 56–90.
MR2209336
[17] Garnett, O., Mandelbaum, A. and Reiman, M. I. (2002). Designing a call center
with impatient customers. Manuf. Serv. Oper. Manag. 4 208–227.
[18] Gilbarg, D. and Trudinger, N. S. (1983). Elliptic Partial Differential Equations of
Second Order, 2nd ed. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 224. Springer, Berlin. MR0737190
[19] Gurvich, I. (2014). Diffusion models and steady-state approximations for exponentially ergodic Markovian queues. Ann. Appl. Probab. 24 2527–2559. MR3262510
[20] Gyöngy, I. and Krylov, N. (1996). Existence of strong solutions for Itô’s stochastic equations via approximations. Probab. Theory Related Fields 105 143–158.
MR1392450
[21] Halfin, S. and Whitt, W. (1981). Heavy-traffic limits for queues with many exponential servers. Oper. Res. 29 567–588. MR0629195
[22] Harrison, J. M. and Zeevi, A. (2004). Dynamic scheduling of a multiclass queue
in the Halfin–Whitt heavy traffic regime. Oper. Res. 52 243–257. MR2066399
[23] Ichihara, N. and Sheu, S.-J. (2013). Large time behavior of solutions of Hamilton–
Jacobi–Bellman equations with quadratic nonlinearity in gradients. SIAM J.
Math. Anal. 45 279–306. MR3032978
[24] Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Springer, New
York. MR1876169
ERGODIC CONTROL IN THE HALFIN–WHITT REGIME
63
[25] Kim, J. and Ward, A. R. (2013). Dynamic scheduling of a GI/GI/1 + GI queue
with multiple customer classes. Queueing Syst. 75 339–384. MR3110643
[26] Koçağa, Y. L. and Ward, A. R. (2010). Admission control for a multi-server queue
with abandonment. Queueing Syst. 65 275–323. MR2652045
[27] Krylov, N. V. (1980). Controlled Diffusion Processes. Applications of Mathematics
14. Springer, New York. Translated from the Russian by A. B. Aries. MR0601776
[28] Mandelbaum, A. and Stolyar, A. L. (2004). Scheduling flexible servers with convex delay costs: Heavy-traffic optimality of the generalized cµ-rule. Oper. Res.
52 836–855. MR2104141
[29] Ocone, D. and Weerasinghe, A. (2003). Degenerate variance control in the onedimensional stationary case. Electron. J. Probab. 8 no. 24, 27 pp. (electronic).
MR2041825
[30] Pang, G., Talreja, R. and Whitt, W. (2007). Martingale proofs of many-server
heavy-traffic limits for Markovian queues. Probab. Surv. 4 193–267. MR2368951
[31] Stannat, W. (1999). (Nonsymmetric) Dirichlet operators on L1 : Existence, uniqueness and associated Markov processes. Ann. Scuola Norm. Sup. Pisa Cl. Sci.
(4) 28 99–140. MR1679079
[32] van Mieghem, J. A. (1995). Dynamic scheduling with convex delay costs: The generalized cµ rule. Ann. Appl. Probab. 5 809–833. MR1359830
[33] Yosida, K. (1980). Functional Analysis, 6th ed. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 123.
Springer, Berlin. MR0617913
A. Arapostathis
A. Biswas
Department of Electrical
and Computer Engineering
The University of Texas at Austin
1616 Guadalupe St., UTA 7.508
Austin, Texas 78701
USA
E-mail: [email protected]
[email protected]
G. Pang
The Harold and Inge Marcus Department
of Industrial and Manufacturing Engineering
College of Engineering
Pennsylvania State University
University Park, Pennsylvania 16802
USA
E-mail: [email protected]
| 3 |
arXiv:1607.07837v4 [math.OC] 17 Apr 2017
First Efficient Convergence for Streaming k-PCA:
a Global, Gap-Free, and Near-Optimal Rate∗
Zeyuan Allen-Zhu
[email protected]
Institute for Advanced Study
Yuanzhi Li
[email protected]
Princeton University
July 26, 2016†
Abstract
We study streaming principal component analysis (PCA), that is to find, in O(dk) space,
the top k eigenvectors of a d × d hidden matrix Σ with online vectors drawn from covariance
matrix Σ.
We provide global convergence for Oja’s algorithm which is popularly used in practice but
lacks theoretical understanding for k > 1. We also provide a modified variant Oja++ that runs
even faster than Oja’s. Our results match the information theoretic lower bound in terms of
dependency on error, on eigengap, on rank k, and on dimension d, up to poly-log factors. In
addition, our convergence rate can be made gap-free, that is proportional to the approximation
error and independent of the eigengap.
In contrast, for general rank k, before our work (1) it was open to design any algorithm with
efficient global convergence rate; and (2) it was open to design any algorithm with (even local)
gap-free convergence rate in O(dk) space.
1
Introduction
Principle component analysis (PCA) is the problem of finding the subspace of largest variance in
a dataset consisting of vectors, and is a fundamental tool used to analyze and visualize data in
machine learning, computer vision, statistics, and operations research. In the big-data scenario,
since it can be unrealistic to store the entire dataset, it is interesting and more challenging to study
the streaming model (a.k.a. the stochastic online model) of PCA.
Suppose the data vectors x ∈ Rd are drawn i.i.d. from an unknown distribution with covariance
matrix Σ = E[xx> ] ∈ Rd×d , and the vectors are presented to the algorithm in an online fashion.
Following [10, 12], we assume the Euclidean norm kxk2 ≤ 1 with probability 1 (therefore Tr(Σ) ≤ 1)
and we are interested in approximately computing the top k eigenvectors of Σ. We are interested
in algorithms with memory storage O(dk), the same as the memory needed to store any k vectors
in d dimensions. We call this the streaming k-PCA problem.
For streaming k-PCA, the popular and natural extension of Oja’s algorithm originally designed
for the k = 1 case works as follows. Beginning with a random Gaussian matrix Q0 ∈ Rd×k (each
∗
We thank Jieming Mao for discussing our lower bound Theorem 6, and thank Dan Garber and Elad Hazan for
useful conversations. Z. Allen-Zhu is partially supported by a Microsoft research award, no. 0518584, and an NSF
grant, no. CCF-1412958.
†
An earlier version of this paper appeared at https://arxiv.org/abs/1607.07837. This newer version contains
a stronger Theorem 2, a new lower bound Theorem 6, as well as the new Oja++ results Theorem 4 and Theorem 5.
1
entry i.i.d ∼ N (0, 1)), it repeatedly applies
rank-k Oja’s algorithm:
Qt ← (I + ηt xt x>
t )Qt−1 ,
Qt = QR(Qt )
(1.1)
where ηt > 0 is some learning rate that may depend on t, vector xt is the random vector in iteration
t, and QR(Qt ) is the Gram-Schmidt decomposition that orthonormalizes the columns of Qt .
Although Oja’s algorithm works reasonably well in practice, very limited theoretical results are
known for its convergence in the k > 1 case. Even worse, little is known for any algorithm that
solves streaming PCA in the k > 1. Specifically, there are three major challenges for this problem:
1. Provide an efficient convergence rate that only logarithmically depends on the dimension d.
2. Provide a gap-free convergence rate that is independent of the eigengap.
3. Provide a global convergence rate so the algorithm can start from a random initial point.
In the case of k > 1, to the best of our knowledge, only Shamir [18] successfully analyzed the
original Oja’s algorithm. His convergence result is only local and not gap-free.1
Other groups of researchers [3, 9, 12] studied a block variant of Oja’s, that is to sample multiple
vectors x in each round t, and then use their empirical covariance to replace the use of xt x>
t . This
algorithm is more stable and easier to analyze, but only leads to suboptimal convergence.
We discuss them more formally below (and see Table 1):
• Shamir [18] implicitly provided a local but efficient convergence result for Oja’s algorithm,2
which requires a very accurate starting matrix Q0 : his theorem relies on Q0 being correlated
with the top k eigenvectors by a correlation value at least k−1/2. If using random initialization,
this event happens with probability at most 2−Ω(d) .
• Hardt and Price [9] analyzed the block variant of Oja’s,3 and obtained a global convergence
that linearly scales with the dimension d. Their result also has a cubic dependency on the gap
between the k-th and (k + 1)-th eigenvalue which is not optimal. They raised an open question
regarding how to provide any convergence result that is gap-free.
• Balcan et al. [3] analyzed the block variant of Oja’s. Their results are also not efficient and
cubically scale with the eigengap. In the gap-free setting, their algorithm runs in space more
than O(kd), and also outputs more than k vectors.4 For such reason, we do not include their
gap-free result in Table 1, and shall discuss it more in Section 4.
• Li et al. [12] also analyzed the block variant of Oja’s. Their result also cubically scales with
the eigengap, and their global convergence is not efficient.
• In practice, researchers observed that it is advantageous to choose the learning rate ηt to be
high at the beginning, and then gradually decreasing (c.f. [22]). To the best of our knowledge,
there is no theoretical support behind this learning rate scheme for general k.
In sum, it remains open before our work to obtain (1) any gap-free convergence rate in space O(kd),
(2) any global convergence rate that is efficient, or (3) any global convergence rate that has the
optimal quadratic dependence on eigengap.
1
A local convergence rate means that the algorithm needs a warm start that is sufficiently close to the solution.
However, the complexity to reach such a warm start is not clear.
2
The original method of Shamir [18] is an offline one. One can translate his result into a streaming setting and
this requires a lot of extra work including the martingale techniques we introduce in this paper.
3
They are in fact only able to output 2k vectors, guaranteed to approximately include the top k eigenvectors.
4
They require space O((k + m)d) where k + m is the number of eigenvalues in the interval [λk − ρ, 1] for some
“virtual gap” parameter ρ. See our Theorem 2 for a definition. This may be as large as O(d2 ). Also, they output
k + m vectors which are only guaranteed to approximately “contain” the top k eigenvectors.
2
Shamir [17]
Sa et al. [16]
k=1
gapdependent
Li et al. [11]
a
Jain et al. [10]
Theorem 1 (Oja)
k=1
gap-free
Shamir [17]
(Remark 1.3)
Theorem 2 (Oja)
Hardt-Price [9]
Li et al. [12]
k≥1
gapdependent
b
d
gap2
·
1
ε
e
O
d
gap2
·
1
ε
e
O
dλ1
gap2
·
1
ε
e
O
λ1
gap2
·
1
ε
e
O
λ1
gap2
·
1
ε
e
O
d
ρ2
1
ε2
e
O
λ1∼(1+m)
ρ2
e
O
dλk
gap3
·
kλk
gap3
· kd +
·
1
ε
[
·
1
ε
unknown
Shamir [18]
Balcan et al. [3]
e
O
e
O
b
b
Theorem 4 (Oja++ )
Theorem 6 (LB)
2
e
O
d(λ1∼k ) λk
gap3
λ1∼k
gap2
·
e
O
λ1∼k
gap2
·
e
O
Theorem 1 (Oja)
1
ε
1
ε
e
O
Theorem 5 (Oja++ )
Theorem 6 (LB)
λ1∼k+m
ρ2
·
[
1
[
ε
[
1
·ε
[
(when λ1∼k ≥ k/d) c
+k
Ω
1
ε
no
yes
kλk
gap2
·
1
ε
min{1, (λ1∼k +k·λ(k+1)∼(k+m) )}
·k
ρ2
e λ1∼k+m
+O
· 1ε
ρ2
e
O
Theorem 2 (Oja)
k≥1
gap-free
Is It
Local Convergence
“Efficient”?
e 12 · 1
O
[
no
[
gap
ε
e d2 · 1
O
[
no
[
gap
ε
e dλ12 · 1
[
no
O
[
gap
ε
e λ12 · 1
yes
O
gap
ε
e λ12 · 1
yes
O
gap
ε
Global Convergence
Paper
Ω
def
kλk
ρ2
·
1
ε
no
no
no
no
yes
yes
e
O
e
O
yes
·
1
ε2
dλk
gap3
·
1
ε
kλk
gap3
·
ε
O
1
gap2
·
e
O
e
O
e
O
e
O
e
O
e
O
(lower bound)
[
λ1∼(1+m)
ρ2
e
O
(lower bound)
yes
1
ρ2
·
1
ε
1
[
[
1
[
ε
1
2
d(λ1∼k ) λk
gap3
·ε
[
(when λ1∼k ≥ k/d)
λ1∼k 1
gap2 · ε
λ1∼k 1
gap2 · ε
λ1∼k+m
ρ2
·
1
ε
λ1∼k+m
ρ2
·
1
ε
Table 1: Comparison of known results. For gap = λk − λk+1 , every ε ∈ (0, 1) and ρ ∈ (0, 1):
2
“gap-dependent convergence” means kQ>
T ZkF ≤ ε where Z consists of the last d − k eigenvectors.
2
“gap-free convergence” means kQ>
T WkF ≤ ε where W consists of all eigenvectors with eigenvalues ≤ λk − ρ.
a global convergence is “efficient” if it only (poly-)logarithmically depend on the dimension d.
k is the target rank; in gap-free settings m be the largest index so that λk+m > λk − ρ.
def P
• we denote by λa∼b = bi=a λi in this table. Since kxk2 ≤ 1 for each sample vector, we have
•
•
•
•
gap ∈ [0, 1/k],
λi ∈ [0, 1/i],
kgap ≤ kλk ≤ λ1∼k ≤ λ1∼k+m ≤ 1 .
• we use [ to indicate the result is outperformed.
• some results in this table (both ours and prior work) depend on λ1∼k . In principle, this requires the algorithm
to know a constant approximation of λ1∼k upfront. In practice, however, since one always tunes the learning
rate η (for any algorithm in the table), we do not need additional knowledge on λ1∼k .
2
e dλ12 · 1 ) by under a stronger 4-th moment assumption. It slows down at least by a
The result of [11] is in fact O(
ε
gap
factor 1/λ1 if the 4-th moment assumption is removed.
b
2
These results give guarantees on spectral norm kQ>
T Wk2 , so we increased them by a factor k for a fair comparison.
c
If kxt k2 is always 1 then λ1∼k ≥ k/d always holds. Otherwise, even in the rare case of λ1∼k < k/d, their
e k2 λk3 and is still worse than ours.
complexity becomes O
d·gap
a
3
Over Sampling. Let us emphasize that it is often desirable to directly output a d × k matrix QT .
Some of the previous results, such as Hardt and Price [9], or the gap-free case of Balcan et al. [3],
are only capable of finding an over-sampled matrix d × k 0 for some k 0 > k, with the guarantee that
these k 0 columns approximately contain the top k eigenvectors of Σ. However, it is not clear how
to find “the best k vectors” out of this k 0 -dimensional subspace.
Special Case of k = 1. Jain [10] obtained the first convergence result that is both efficient and
global for streaming 1-PCA. Shamir [17] obtained the first gap-free result for streaming 1-PCA,
but his result is not efficient. Both these results are based on Oja’s algorithm, and it remains open
before our work to obtain a gap-free result that is also efficient even when k = 1.
Other Related Results. Mitliagkas et al. [13] obtained a streaming PCA result but in the
restricted spiked covariance model. Balsubramani et al. [4] analyzed a modified variant of Oja’s
algorithm and needed an extra O(d5 ) factor in the complexity.
The offline problem of PCA (and SVD) can be solved via iterative algorithms that are based on
variance-reduction techniques on top of stochastic gradient methods [2, 18] (see also [6, 7] for the
k = 1 case); these methods do multiple passes on the input data so are not relevant to the streaming
model. Offline PCA can also be solved via power method or block Krylov method [14], but since
each iteration of these methods relies on one full pass on the dataset, they are not suitable for
streaming setting either. Other offline problems and efficient algorithms relevant to PCA include
canonical correlation analysis and generalized eigenvector decomposition [1, 8, 21].
Offline PCA is significantly easier to solve because one can (although non-trivially) reduce a
general k-PCA problem to k times of 1-PCA using the techniques of [2]. However, this is not the
case in streaming PCA because one can lose a large polynomial factor in the sampling complexity.
1.1
Results on Oja’s Algorithm
We denote by λ1 ≥ · · · ≥ λd ≥ 0 the eigenvalues of Σ, and it satisfies λ1 + · · · + λd = Tr(Σ) ≤ 1.
We present convergence results on Oja’s algorithm that are global, efficient and gap-free.
Our first theorem works when there is a eigengap between λk and λk+1 :
def
Theorem 1 (Oja, gap-dependent). Letting gap = λk − λk+1 ∈ 0, k1
for every ε, p ∈ (0, 1) define learning rates
1
e
Θ
gap·T0
kΛ
Λ
1
e
e
e
Θ
T0 = Θ
, T1 = Θ
, ηt =
gap·T1
gap2 p2
gap2
Θ
1
e
gap·(t−T0 )
def
and Λ =
Pk
i=1 λi
1 ≤ t ≤ T0 ;
T0 < t ≤ T0 + T1 ;
t > T0 + T1 .
∈ 0, 1 ,
5
Let Z be the column orthonormal matrix consisting of all eigenvectors of Σ with values no more
than λk+1 . Then, the output QT ∈ Rd×k of Oja’s algorithm satisfies with probability at least 1 − p:
e T1
it satisfies kZ> QT k2 ≤ ε .
for every6 T = T0 + T1 + Θ
ε
e hides poly-log factors in
Above, Θ
1 1
p , gap
F
and d.
k
In other words, after a warm up phase of length T0 , we obtain a λ1 +···+λ
·
gap2
>
2
the quantity kZ QT kF . We make several observations (see also Table 1):
1
T
convergence rate for
• In the k = 1 case, Theorem 1 matches the best known result of Jain et al. [10].
5
6
The intermediate stage [T0 , T0 + T1 ] is in fact unnecessary, but we add this phase to simplify proofs.
e T1 /ε by making ηt poly-logarithmically dependent on T .
Theorem also applies to every T ≥ T0 + T1 + Ω
4
• In the k > 1 case, Theorem 1 gives the first efficient global convergence rate.
• In the k > 1 case, even in terms of local convergence rate, Theorem 1 is faster than the best
known result of Shamir [18] by a factor λ1 + · · · + λk ∈ (0, 1).
Remark 1.1. The quantity kZ> QT k2F captures the correlation between the resulting matrix QT ∈
Rd×k and the smallest d − k eigenvectors of Σ. It is a natural generalization of the sin-square
quantity widely used in the k = 1 case, because if k = 1 then kZ> QT k2F = sin2 (q, ν1 ) where q is
the only column of Q and ν1 is the leading eigenvector of Σ.
Some literatures instead adopt the spectral-norm guarantee (i.e., bounds on kZ> QT k22 ) as opposed to the Frobenius-norm one. The two guarantees are only up to a factor k away. We choose
to prove Frobenius-norm results because: (1) it makes the analysis significantly simpler, and (2) k
is usually small comparing to d, so if one can design an efficient (i.e., dimension free) convergence
rate for the Frobenius norm that also implies an efficient convergence rate for the spectral norm.
Remark 1.2. Our lower bound later (i.e. Theorem 6) implies, at least when λ1 and λk are within a
constant factor of each other, the local convergence rate in Theorem 1 is optimal up to log factors.
Gap-Free Streaming k-PCA. When the eigengap is small which is usually true in practice, it is
desirable to obtain gap-free convergence [14, 17]. We have the following theorem which answers the
open question of Hardt and Price [9] regarding gap-free convergence rate for streaming k-PCA.
Theorem 2 (Oja, gap-free). For every ρ, ε, p ∈ (0, 1), let λ1 , . . . , λm be all eigenvalues of Σ
def Pk
def Pk+m
that are > λk − ρ, let Λ1 =
i=1 λi ∈ 0, 1 , Λ2 =
j=k+1 λj ∈ 0, 1 , define learning rates
e 1
!
Θ
t ≤ T0 ;
kΛ2
ρ·T
0
k · min{1, Λ1 + p2 }
Λ1 + Λ2
1
e
e
e
Θ ρ·T1
t ∈ (T0 , T0 + T1 ];
T0 = Θ
, T1 = Θ
, ηt =
ρ2 · p2
ρ2
Θ
1
e
t > T0 + T1 .
ρ·(t−T0 )
Let W be the column orthonormal matrix consisting of all eigenvectors of Σ with values no more
than λk − ρ. Then, the output QT ∈ Rd×k of Oja’s algorithm satisfies with prob. at least 1 − p:
e T1
for every7 T = T0 + T1 + Θ
it satisfies kW> QT k2 ≤ ε .
F
ε
e hides poly-log factors in 1 , 1 and d.
Above, Θ
p ρ
Note that the above theorem is a double approximation. The number of iterations depend both
on ρ and ε, where ε is an upper bound on the correlation between QT and all eigenvectors in W
(which depends on ρ). This is the first known gap-free result for the k > 1 case using O(kd) space.
One may also be interested in single-approximation guarantees, such as the rayleigh-quotient
guarantee. Note that a single-approximation guarantee by definition loses information about the
ε-ρ tradeoff; furthermore, (good) single-approximation guarantees are not easy to obtain.8
We show in this paper the following theorem regarding the rayleigh-quotient guarantee:
Theorem 3(Oja, rayleigh quotient, informal). There exist learning rate choices so that, for every
e 2k 2 , letting qi be the i-th column of the output matrix QT , then
T =Θ
ρ ·p
e
Pr ∀i ∈ [k], qi> Σqi ≥ λi − Θ(ρ)
≥1−p .
e hides poly-log factors in 1 , 1 and d.
Again, Θ
p ρ
e T0 /ε by making ηt poly-logarithmically dependent on T .
Theorem also applies to every T ≥ T0 + Ω
Pointed out by [10], a direct translation from double approximation to a rayleigh-quotient type of convergence
loses a factor on the approximation error. They raised it as an open question regarding how to design a direct proof
without sacrificing this loss. Our next theorem answers this open question (at least in the gap-free case).
7
8
5
Remark 1.3. Before our work, the only gap-free result with space O(kd) is Shamir [17] — but it is
not efficient and only for k = 1. His result is in Rayleigh quotient but not double-approximation.
If the initialization phase is ignored, Shamir’s local convergence rate matches our global one in
Theorem 3. However, if one translates his result into double approximation, the running time loses
a factor ε. This is why in Table 1 Shamir’s result is in terms of 1/ε2 as opposed to 1/ε.
1.2
Results on Our New Oja++ Algorithm
Oja’s algorithm has a slow initialization phase (which is also observed
in practice [22]). For example,
λ1 +···+λk
1
e
in the gap-dependent case, Oja’s running time O
· k+ ε is dominated by its initialization
ρ2
when ε > 1/k. We propose in this paper a modified variant of Oja’s that initializes gradually.
Our Oja++ Algorithm. At iteration 0, instead putting all the dk random Gaussians into Q0
like Oja’s, our Oja++ only fills the first k/2 columns of Q0 with random Gaussians, and sets the
remaining columns be zeros. It applies the same iterative rule as Oja’s to go from Qt to Qt+1 , but
after every T0 iterations for some T0 ∈ N∗ , it replaces the zeros in the next k/4, k/8, . . . columns
with random Gaussians and continues.9 This gradual initialization ends when all the k columns
become nonzero, and the remaining algorithm of Oja++ works exactly the same as Oja’s.
We provide pseudocode of Oja++ in Algorithm 2 on page 58, and state below its main theorems:
def
1
++
Theorem 4 (Oja++ , gap-dependent, informal). Letting gap = λk − λk+1
∈ 0, k , our Oja
k
e λ1 +···+λ
outputs a column-orthonormal QT ∈ Rd×k with kZ> QT k2 ≤ ε in T = Θ
iterations.
2
F
gap ε
++ outputs a column-orthonormal
Theorem 5 (Oja++ , gap-free, informal). Given
our Oja
ρ ∈ (0, 1),
λ
+···+λ
e 1 2 k+m iterations.
QT ∈ Rd×k with kW> QT k2F ≤ ε in T = Θ
ρ ε
1.3
Result on Lower Bound
We have the following information-theoretical lower bound for any (possibly offline) algorithm:
Theorem 6 (lower bound, informal). For every integer k ≥ 1, integer m ≥ 0, every 0 < ρ < λ <
1/k, every (possibly randomized) algorithm A, we can construct a distribution µ over unit vectors
with λk+m+1 (Eµ [xx> ]) ≤ λ − ρ and λk (Eµ [xx> ]) ≥ λ. The output QT of A with samples x1 , ..., xT
i.i.d. drawn from µ satisfies
kλ
>
2
E
kW QT kF = Ω
.
x1 ,...,xT ,A
ρ2 · T
(W consists of the last d − (k + m) eigenvectors of Eµ [xx> ].)
Our Theorem 6 (with m = 0 and ρ = gap) implies that, in the gap-dependent setting, the global
convergence rate of Oja++ is optimal up to log factors, at least when λ1 = O(λk ). Our gap-free
result does not match this lower bound. We explain in Section 4 that if one increases the space
from O(kd) to O((k + m)d) in the gap-free case, our Oja++ can also match this lower bound.
9
Zeros columns will remain zero according to the usage of Gram-Schmidt in Oja’s algorithm.
6
2
Preliminaries
We denote by 1 ≥ λ1 ≥ · · · ≥ λd ≥ 0 the eigenvalues of the positive semidefinite (PSD) matrix Σ,
and ν1 , ν2 , . . . , νd the corresponding normalized eigenvectors. Since we assumed kxk2 ≤ 1 for each
def
data vector it satisfies λ1 + · · · + λd = Tr(Σ) ≤ 1. We define gap = λk − λk+1 ∈ 0, k1 . Slightly
abusing notations, we also use λk (M) to denote the k-th largest eigenvalue of an arbitrary M.
def
def
Unless otherwise noted, we denote by V = [ν1 , . . . , νk ] ∈ Rd×k and Z = [νk+1 , . . . , νd ] ∈
Rd×(d−k) . For a given parameter ρ > 0 in our gap-free results, we also define W = [νk+m+1 , . . . , νd ] ∈
Rd×(d−k−m) where m is the largest index so that λk+m > λk −ρ. We write Σ≤k = VDiag{λ1 , . . . , λk }V>
def
and Σ>k = ZDiag{λk+1 , . . . , λd }Z> so Σ = Σ≤k + Σ>k .
For a vector y, we sometimes denote by y[i] or y (i) the i-th coordinate of y. We may use different
notations in different lemmas in order to obtain the cleanest representations; when we do so, we
shall clearly point out inQthe statement of the lemmas.
def
We denote by Pt = ts=1 (I + ηs xs x>
s ) where xs is the s-th data sample and ηs is the learning
def
rate of iteration s. We denote by Q ∈ Rd×k (or Q0 ) the random initial matrix, and by Qt =
10 We use the
QR((I + ηt xt x>
t )Qt−1 ) = QR(Pt Q0 ) the output of Oja’s algorithm for every t ≥ 1.
notation Ft to denote the sigma-algebra generated by xt . We denote F≤t to be the sigma-algebra
generated by x1 , ..., xt , i.e. F≤t = ∨ts=1 Fs . In other words, whenever we condition on F≤t it means
we have fixed x1 , . . . , xt .
For a vector x we denote by kxk or kxk2 the Euclidean norm of x. We write A B if A, B
are symmetric matrices and A − B is PSD. We denote by kAkS1 the Schatten-1 norm which is the
summation of the (nonnegative) singular values of A. It satisfies the following simple properties:
Proposition 2.1. For not necessarily symmetric matrices A, B ∈ Rd×d we have
(1): Tr(A) ≤ kAkS1
(2): Tr(AB) ≤ kABkS1 ≤ kAkS1 kBk2 .
1/2
(3): Tr(AB) ≤ kAkF kBkF = Tr(A> A)Tr(B> B)
.
Proof. (1) is because Tr(A) = 21 Tr(A + A> ) ≤ 12 kA + A> kS1 ≤ 12 kAkS1 + kA> kS1 = kAkS1 .
(2) is because of (1) and the matrix Holder’s inequality. P
(3) is owing to von Neumann’s trace
inequality (together with Cauchy’s) which says Tr(AB) ≤ i σA,i · σB,i ≤ kAkF kBkF . (Here, we
have noted by σA,i the i-th largest eigenvalue of A and similarly for B.
2.1
A Matrix View of Oja’s Algorithm
The following lemma tells us that, for analysis purpose only, we can push the QR orthogonalization
step in Oja’s algorithm to the end:
Lemma 2.2 (Oja’s algorithm). For every s ∈ [d], every X ∈ Rd×s , every t ≥ 1, every initialization
matrix Q ∈ Rd×k , it satisfies kX> Qt kF ≤ kX> Pt Q(V> Pt Q)−1 kF .
e t = Pt Q, we first observe that for every t ≥ 0, Qt = Q
e t Rt for
Proof of Lemma 2.2. Denoting by Q
k×k
some (upper triangular) invertible matrix Rt ∈ R
(if Rt is not invertible, then the right hand
side of the statement is +∞ so we already done). The claim is true for t = 0. Suppose it holds for
t by induction, then
>
Qt+1 = QR[(I + ηt+1 xt+1 x>
t+1 )Qt ] = (I + ηt+1 xt+1 xt+1 )Qt St
for some St ∈ Rk×k by the definition of Gram-Schmidt. This implies that
e t Rt St = Pt+1 QRt St = Q
e t+1 Rt St = Q
e t+1 Rt+1
Qt+1 = (I + ηt+1 xt+1 x> )Q
t+1
10
The second equality is a simple fact but anyways proved in Lemma 2.2 later.
7
e t Rt . As a result, since
if we define Rt+1 = Rt St . This completes the proof that Qt = Q
>
each Qt is column orthogonal for t ≥ 1, we have kQt Vk2 ≤ 1 and therefore kX> Qt kF ≤
kX> Qt (V> Qt )−1 kF kV> Qt k2 ≤ kX> Qt (V> Qt )−1 kF . Finally,
e t Rt (V> Q
e t Rt )−1 kF ≤ kX> Q
e t (V> Q
e t )−1 kF .
kX> Qt kF ≤ kX> Qt (V> Qt )−1 kF = kX> Q
Observation. Due to Lemma 2.2, in order to prove our upper bound theorems, it suffices to upper
bound the quantity kX> Pt Q(V> Pt Q)−1 kF for X = W (gap-free) or X = Z (gap-dependent).
3
Overview of Our Proofs and Techniques
Oja’s Algorithm. To illustrate the idea, let us simply focus on the gap-dependent case. Denoting
def
in this section by st = kZ> Pt Q(V> Pt Q)−1 kF , owing to Lemma 2.2, we want to bound st in terms
of xt and st−1 . A simple calculation using the Sherman-Morrison formula gives
h η a 2 i
t t
>
−1
E[s2t ] ≤ (1 − ηt gap) E[s2t−1 ] + E
where at = kx>
(3.1)
t Pt−1 Q(V Pt−1 Q) k2
1 − η t at
At a first look, E[s2t ] is decaying by a multiplicative factor (1 − ηt gap) at every iteration; however,
this bound could be problematic when ηt at is close to 1 and thus we need to ensure ηt ≤ a1t with
high probability for every step.
>
−1
√One can naively bound at ≤ kPt−1 Q(V Pt−1 Q)
√ k2 ≤ st−1 + 1. However, since st−1 can be
Ω( d) even at t = 1, we must choose ηt ≤ O(1/ d) and the resulting convergence rate will be
not efficient (i.e., proportional to d). This is why most known global convergence results are not
efficient (see Table 1). On the other hand, if one ignores initialization and starts from a point t0
when st0 ≤ 1 is already satisfied, then we can prove a local convergence rate that is efficient (c.f.
[18]). Note that this local rate is still slower than ours by a factor λ1 + · · · + λk .
Our first contribution is the following crucial observation: for a random initial matrix Q, a1 =
>
−1
kx>
1 Q(V Q) k2 is actually quite small.
√ A simple fact on the singular value distribution of inverseWishart distribution √
implies a1 = O( k) with high probability. Thus, at least in the first iteration,
we can set η1 = Ω(1/ k) independent of the dimension d. Unfortunately, in subsequent iterations,
it is not clear whether at remains small or increases.
Our second contribution is to control at using the fact that at itself “forms another random
>
−1
process.” More precisely, denoting by at,s = kx>
t Ps Q(V Ps Q) k2 for 0 ≤ s ≤ t − 1, we wish
to bound at,s in terms of at,s−1 and show that it does
√ not increase by much. (If we could achieve
so, combining it with the initialization at,0 ≤ O( k), we would know that all at,s are small for
s ≤ t − 1.) Unfortunately, since xt is not an eigenvector of Σ, the recursion looks like
h
2 i
η a
E[a2t,s ] ≤ (1 − ηs λk ) E[a2t,s−1 ] + ηs λk E[b2t,s−1 ] + E 1−ηs ss,s−1
(3.2)
as,s−1
>
−1
where bt,s = kx>
t ΣPs Q(V Ps Q) k2 . Now three difficulties arise from formula (3.2):
• bt,s can be very different from at,s — in worse case, the ratio between them can be unbounded.
• the problematic term now becomes as,s−1 (rather than the original at = at,t−1 in (3.1)) which
t−1
is not present in the chain {at,s }s=1
.
• since bt,s differs from at,s by an additional factor Σ in the middle, to analyze bt,s , we need to
2
>
−1
further study kx>
t Σ Ps Q(V Ps Q) k2 and so on.
(i)
def
We solve these issues by carefully considering a multi-dimensional random process ct,s with ct,s =
i
>
−1
kx>
t Σ Ps Q(V Ps Q) k2 . Ignoring the last term, we can derive that
(i) 2
(i) 2
(i+1) 2
∀t, ∀s ≤ t − 1, E ct,s
/ (1 − ηs λk ) E ct,s−1
+ ηs λk E ct,s−1
.
(3.3)
8
Our third contribution is a new random process concentration bound to control the change in
this multi-dimensional chain (3.3). To achieve this, we adapt the prove of standard Chernoff bound
to multi dimensions (which is not the same as matrix concentration bound). After establishing
(0)
this non-trivial concentration result, all terms of at = ct,t−1 can be simultaneously bounded by a
constant. This ensures that the problematic term in (3.1) is well-controlled.
The overall plan looks promising; however, there are holes in the above thought experiment.
• In order to apply a random-process concentration bound (e.g., Azuma concentration), we need
the process to not depend on the future. However, the random vector ct,s is not F≤s measurable
but F≤s ∨ Ft measurable (i.e., it depends on xt for a future t > s).
• Furthermore, the expectation bounds such as (3.1), (3.2), (3.3) only hold if E[xt xt ] = Σ;
however, if we take away a failure event C —C may correspond to the event when at is large—
the conditional expectation E[xt xt | C] becomes Σ + ∆ where ∆ is some error matrix. This
can amplify the failure probability in next iteration.
Our fourth contribution is a “decoupling” framework to deal with the above issues (Appendix i.D).
At a high level, to deal with the first issue we fix xt and study {ct,s }s=0,1,...,t−1 conditioning on xt ;
in this way the process decouples and each ct,s becomes F≤s measurable. We can do so because
we can carefully ensure that the failure events only depend on xs for s ≤ t − 1 but not on xt . To
deal with the second issue, we convert the random process into an unconditional random process
(see (i.D.2)); this is a generalization of using stopping time on martingales. Using these tools, we
manage to show that the failure probability only grows linearly with respect to T and henceforth
(i)
bound the value of ct,s for all t, s and i.
we are able to show that Oja’s algorithm achieves convergence rate
Putting them together,
λ1 +...+λk 1
e
O
( ε + k) . The rate matches our lower bound asymptotically when λ1 and λk are
gap2
within a constant factor of each other, however, if we only care about crude approximation of the
eigenvectors (e.g. for constant ε), then the Oja’s algorithm is off by a factor k.
Remark 3.1. The ideas above are insufficient for our gap-free results. To prove Theorem 2 and 3, we
def
also need to bound quantities s0t = kW> Pt Q(V> Pt Q)−1 kF where W consists of all eigenvectors
of Σ with values no more than λk − ρ. This is so because the interesting quantity in a gap-free case
changes from st to s0t according to Lemma 2.2. Now, to bound s0t one has to bound ct,s ; however,
the ct,s process still depends on the original st as opposed to s0t . In sum, we unavoidably have to
bound st , s0t , and ct,s all together, making the proofs even more sophisticated.
Our Oja++ Algorithm. The factor k in Oja’s
√ algorithm comes from the fact that
√ the earlier
> Q)−1 k is at least Ω( k) at t = 1, so we must set η ≤ O(1/ k) and this
quantity a1 = kx>
Q(V
2
1
1
incurs a factor k in the running time. After warm start, at drops to O(1) and we can choose ηt ≤ 1.
A similar issue was also observed by Hardt and Price [9] and they solved it using “oversampling”. Namely, to put it into our setting, we can use a d × 2k random starting matrix Q0 and
run Oja’s to produce QT ∈ Rd×2k . In this way, the quantity a1 becomes O(1) even at the beginning
due to some property of the inverse-Wishart distribution. However, the output QT is now a 2k
dimensional space that is only guaranteed to “approximately contain” the top k eigenvectors. It is
not clear how to find this k-subspace (recall the algorithm does not see Σ).
Our key observation behind Oja++ is that a similar effect also occurs via “under-sampling”.
If we initialize Q0 randomly with dimension d × k/2, we can also obtain a speed-up factor of k.
Unlike Hardt and Price, this time the output QT0 is a k/2-dimensional subspace that approximately
lies entirely in the column span of V ∈ Rd×k .
9
After getting QT0 , one could hope to get the rest by running the same algorithm again, but
restricted to the orthogonal complement of QT0 . This approach would work if QT0 were exactly
the eigenvectors of Σ; however, due to approximation error, this approach would eventually lose a
factor 1/gap in the sample complexity which is even bigger than the factor k that we could gain.
Instead, our Oja++ algorithm is divided into log k epochs. At each epoch i = 1, 2, . . . , log k, we
attach k/2i new random columns to the working matrix Qt in Oja’s algorithm, and then run Oja’s
for a fixed number of iterations. Note that every time (except the last time) we attach new random
columns, we are in an “under-sampling” mode because if we add k/2i columns there must be k/2i
remaining dimensions. This ensures that the quantity at only increases by a constant so we have
at = O(1) throughout
execution
of Oja++ . Finally, there are only log k epochs so the total running
k 1
e λ1 +...+λ
e
time is still O
ε and this O notion hides a log k factor.
ρ2
Roadmap. Our proofs are highly technical so we carefully choose what to present in this main
body. In Section 5 we state properties of the initial matrix Q which corresponds to our first
contribution. In Section 6 we provide expected guarantees on st , s0t and at,s and they correspond
to our second contribution. The third (martingale lemmas) and fourth contributions (decoupling
lemma) are deferred to the appendix.
Most importantly, in Section 7 we present (although in weaker forms) two Main Lemmas to
deal with the convergence one for t ≤ T0 (before warm start) and one for t > T0 (after warm start).
These sections, when put together, directly imply two weaker variants of Theorem 1 and 2. We
state these weaker variants in Appendix i and include all the mathematical details there.
Appendix ii includes our Rayleigh quotient Theorem 3 and lower bound Theorem 6. Appendix
iii strengthens the main lemmas into their stronger forms, and prove Theorem 1 and 2 formally.
Our Oja++ results, namely Theorem 4 and 5, are also proved in Appendix iii.
In Figure 1 on page 14, we present a dependency graph of all of our main theorems and lemmas.
We hope that the readers could appreciate our organization of this paper.
4
Discussions, Extensions and Future Directions
In this paper we give global convergence analysis of the Oja’s algorithm, and a twisted version Oja++
which has better complexity. We also give an information-theoretic
bound
that
showing
lower
any
kλk
>
2
algorithm, offline or online, must have final accuracy Ex1 ,...,xT ,A kW QT kF = Ω gap2 ·T . This
matches our gap-dependent result on Oja++ when λ1 + · · · + λk = O(kλk ); that is, when there is
an eigengap and when “the spectrum is flat.”
When the spectrum is not flat, our algorithm can be improved to have better accuracy. However,
this requires good prior knowledge of λ1 , · · · λk , and may not be realistic.
λ +···λ
In the gap-free case, Oja++ only achieves accuracy O 1 ρ2 ·Tk+m , which appears worse than
λ1 +···λk
k
the lower bound O ρkλ
if we allow more space, namely,
2 ·T . In fact, we can also achieve O
ρ2 ·T
space up to O((k + m)d) as opposed to O(kd). More generally, we have a space-accuracy tradeoff.
If we run Oja++ on k 0 initial random vectors, and thus using space O(dk 0 ) for k 0 ∈ [k, k + m],
we can randomly pick k columns from the output and have the following accuracy:
Theorem 4.1 (tradeoff). For every k 0 ∈ [k, k + m] with λk0 − λk+m ≥
0
ρ
log d ,
0
let Q ∈ Rd×k be a
random gaussian matrix and QT ∈ Rd×k be the output of Oja++ with random input Q. Then,
letting Q0T ∈ Rd×k be k random columns of QT (chosen uniformly at random), we have
k λ1 + · · · λk+m0
> 0 2
e
E kW QT kF = O
k0
ρ2 T
10
where m0 ≤ m is any index satisfying λk0 − λk+m0 ≥
ρ
log d .
Proof of Theorem 4.1. Observethat Oja++ guarantees (using ρ/ log d instead of ρ as the gap-free
k+m0
e λ1 +···λ
parameter) E[kW> QT k2F ] = O
. Then, k random columns of Q0T decreases the squared
ρ2 T
Frobenius norm by a factor of k 0 /k.
We also have the following crucial observation:
Corollary 4.2. There exists k 0 ∈ [k, k + m] such that λk0 − λk+m ≥
ρ
log d
and
k
(λ1 + · · · λk+m0 ) = O (λ1 + · · · λk ) .
k0
Proof of Corollary 4.2. The
is by counting.
Divide [λk − ρ, λk] into log d intervals of equal
proof
ρ
ρ
span in descending order λk − log d , λk , λk − 2 log d , λk − logρ d , · · · , λk − ρ, λk − 1 − log1 d ρ , and
let Si ⊆ {k + 1, .P
. . , k + m} be the indices of λj is in the i-th interval above,P
for i = 1, 2, . . . , log d.
Define Λi = j∈Si λj and Λ0 = Λ = λ1 + · · · λk . Also define ki = k + 1≤j<i |Sj |. We then
P
have, for every k 0 = ki , we can choose m0 such that λ1 + · · · λk+m0 = Λ + ij=1 Λj . By Λ0 ≤ dΛlog d ,
we know that there must exist some i such that Λi ≤ 100Λi−1 . We compute
Pi
Pi−1
k
k
k
Λ
≤
100
Λ
+
Λ
+
j
j=1
j=1 Λi = 100 ki (λ1 + · · · + λki ) ≤ 100Λ
ki
ki
so we have found such k 0 satisfying the statement.
++ to at most O((k + m)d), we can
In sum. In the gap-free case,
if we increase the space of Oja
λ1 +···λk
achieve accuracy O ρ2 ·T , and thus match the lower bound when the spectrum is flat.11
It was also studied by Balcan et al. [3] that increasing the space could enhance the performance.
However, their algorithm always uses space Ω((k + m)d), and furthermore, in the Frobenius-norm
accuracy, their performance is always worse than Oja++ , not to say Oja++ only uses space O(kd).12
k
It is an important future direction to directly get O λ1ρ+···λ
without increasing the space.
2 ·T
Matrix Bernstein. Using matrix Bernstein and a gap-free variant of the Wedin theorem [2], one
can
QT be the top-k eigenvectors of the empirical covariance matrix
PT show>that, if we simply let >
e λ21 . If one directly translates this to a Frobenius
x
x
,
then
we
have
E[kW
QT k22 ] = O
t=1 t t
ρ T
λ1 k
>
2
e
norm bound, it gives E[kW QT kF ] = O ρ2 T and is worse than ours. However, our result, if
naively translated to spectral norm, also loses a factor k. It is a future direction to directly get a
spectral-norm guarantee for streaming PCA.
5
Random Initialization
We state our main lemma for initialization. Let Q ∈ Rd×k be a matrix with each entry i.i.d drawn
from N (0, 1), the standard gaussian.
Lemma 5.1 (initialization). For every p, q ∈ (0, 1), every T ∈ N∗ , every distribution on vector set
{xt }Tt=1 with kxt k2 ≤ 1, with probability at least 1 − p − 2q over the random choice of Q:
2
ln dp
and
(Z> Q)(V> Q)−1 F ≤ 576dk
p2
1/2
i−1
18
T
> (Σ/λ
> Q)−1
Prx1 ,...,xT ∃i ∈ [T ], ∃t ∈ [T ], x>
2k
ln
≤q
ZZ
)
Q(V
≥
k+1
t
p
q
2
0
Of course, this requires the algorithm to know k which can be done by trying k0 = k + 2, k + 4, k + 8, etc.
2
12
e λ1∼k+m
e dk(λ1∼k+m3 ) λk , but ours is only O
.
For instance, when λ1∼k+m ≥ k+m , their global convergence is O
2
11
d
(k+m)ρ T
11
ρ T
(i)
The two statements of the above lemma correspond to s0 and ct,0 that we defined in Section 3. The
second statement is of the form “Pr[event] ≤ q” instead of “for every fixed xt , event holds with
probability ≤ q” because we cannot afford taking union bound on xt .
6
Expected Results
We now formalize inequalities (3.1), (3.2), (3.3) which characterize to the behavior of our target
random processes. Let X ∈ Rd×r be a generic matrix that shall later be chosen as either X = W
(corresponding to s0t ), X = Z (corresponding to st ), or X = [w] for some vector w (corresponding
(i)
to ct,s ). We introduce the following notions that shall be used extensively:
Lt = Pt Q(V> Pt Q)−1 ∈ Rd×k
r×k
R0t = X> xt x>
t Lt−1 ∈ R
St = X> Lt ∈ Rr×k
k×k
H0t = V> xt x>
t Lt−1 ∈ R
Lemma 6.1 (Appendix i.B). For every t ∈ [T ], suppose C≤t is an event on random x1 , . . . , xt and
1
>
>
−1
C≤t implies kx>
,
t Lt−1 k2 = kxt Pt−1 Q(V Pt−1 Q) k2 ≤ φt where ηt φt ≤
2
>
and suppose Ext xt xt | F≤t−1 , C≤t = Σ + ∆. Then, we have:
(a) When X = Z,
h
i
2 2
E Tr(S>
S
)
|
F
,
C
≤ (1 − 2ηt gap + 14ηt2 φ2t )Tr(St−1 S>
t
≤t−1
≤t
t
t−1 ) + 10ηt φt
3/2
1/2
>
+ 2ηt k∆k2 Tr(S>
+ 2Tr(S>
t−1 St−1 )
t−1 St−1 ) + Tr(St−1 St−1 )
(b) When X = W,
h
i
2 2
>
2 2
E Tr(S>
t St ) | F≤t−1 , C≤t ≤ (1 − 2ηt ρ + 14ηt φt )Tr(St−1 St−1 ) + 10ηt φt
1/2
1/2
+ 2ηt k∆k2 Tr(S>
+ Tr(S>
1 + Tr(Z> Lt−1 L>
t−1 St−1 )
t−1 St−1 )
t−1 Z)
(c) When X = [w] ∈ Rd×1 where w is a vector with Euclidean norm at most 1,
h
i
ηt
2 2
>
2 2
kw> ΣLt−1 k22
E Tr(S>
t St ) | F≤t−1 , C≤t ≤ 1 − ηt λk + 14ηt φt Tr(St−1 St−1 ) + 10ηt φt +
λk
1/2
1/2
>
>
>
+ 2ηt k∆k2 Tr(S>
S
)
+
Tr(S
S
)
1
+
Tr(Z
L
L
Z)
t−1 t−1
t−1 t−1
t−1 t−1
The above three expectation results will be used to provide upper bounds on the quantities we
(i)
care about (i.e., st , s0t , ct,s ). In the appendix, to enable proper use of martingale concentration,
>
>
we also bound their absolute changes |Tr(S>
t St ) − Tr(St−1 St−1 )| and variance E |Tr(St St ) −
>
2
13
Tr(St−1 St−1 )| in changes.
7
Main Lemmas
Our main lemmas in this section can be proved by combining (1) the expectation results in Section 6,
(2) the martingale concentrations in Appendix i.C, and (3) our decoupling lemma in Appendix i.D.
13
Recall that even in the simplest martingale concentration, one needs upper bounds on the absolute difference
between consecutive variables; furthermore, the concentration can be tightened if one also has an (expected) variance
upper bound between variables.
12
Our first lemma describes the behavior of quantities st = kZ> Pt Q(V> Pt Q)−1 kF and s0t =
kW> Pt Q(V> Pt Q)−1 kF before warm start. At a high level, it shows if the st sequence starts
from s20 ≤ ΞZ , under mild conditions, s2t never increases to more than 2ΞZ . Note that ΞZ = O(d)
according to Lemma 5.1. The other sequence (s0t )2 also never increases to more than 2ΞZ because
s0t ≤ st , but most importantly, (s0t )2 drops below 2 after t ≥ T0 . Therefore, at point t = T0 we need
to adjust the learning rate so the algorithm achieves best convergence rate, and this is the goal of
our Lemma Main 2. (We emphasize that although we are only interested in st and s0t , our proof of
the lemma also needs to bound the multi-dimensional ct,s sequence discussed in Section 3.)
Lemma Main 1 (before warm start). For every ρ ∈ (0, 1), q ∈ 0, 12 , ΞZ ≥ 2, Ξx ≥ 2, and fixed
matrix Q ∈ Rd×k , suppose it satisfies
• kZ> Q(V> Q)−1 k2F ≤ ΞZ , and
h
j−1
>
Q(V> Q)−1
• Prxt ∀j ∈ [T ], x>
t ZZ (Σ/λk+1 )
Suppose also the learning rates {ηs }s∈[T ] satisfy
(1):
3/2
∀s ∈ [T ], qΞZ ≤ ηs ≤
ρ
4000Ξ2x ln
(3):
24T
q2
2
i
≤ Ξx ≥ 1 − q 2 /2 for every t ∈ [T ].
(2):
PT
∃T0 ∈ [T ] such that
2 2
t=1 ηt Ξx
PT0
t=1 ηt
≤
1
100 ln
≥
ln(3ΞZ )
ρ
32T
q2
.
Then, for every t ∈ [T − 1], with probability at least 1 − 2qT (over the randomness of x1 , . . . , xt ):
• kZ> Pt Q(V> Pt Q)−1 k2F ≤ 2ΞZ , and
• if t ≥ T0 then kW> Pt Q(V> Pt Q)−1
2
F
≤ 2.
Our second lemma asks for a stronger assumption on the learning rates and shows that after
warm start (i.e., for t ≥ T0 ), the quantity (s0t )2 scales as 1/t.
√
Lemma Main 2 (after warm start). In the same setting as Lemma Main 1, if there exists δ ≤ 1/ 8
s.t.
T0
1
1
9 ln(8/q 2 )
, ∀s ∈ {T0 +1, . . . , T } : 2ηs ρ−56ηs2 Ξ2x ≥
and ηs ≤
≥
,
2
2
δ
s−1
20(s − 1)δΞx
ln T0
then, with probability at least 1 − 2qT (over the randomness of x1 , . . . , xT ):
• kZ> Pt Q(V> Pt Q)−1 k2F ≤ 2ΞZ for every t ∈ {T0 , . . . , T }, and
• kW> Pt Q(V> Pt Q)−1 k2F ≤
5T0 / ln2 (T0 )
t/ ln2 t
for every t ∈ {T0 , . . . , T }.
Parameter 7.1. There exist constants C1 , C2 , C3 > 0 such that for every q > 0 that is sufficiently
small (meaning q < 1/poly(T, ΞZ , Ξx , 1/ρ)), the following parameters satisfy both Lemma Main 1
and Lemma Main 2:
(
ln ΞZ
Ξ2x ln Tq ln2 ΞZ
t ≤ T0 ;
ρ
T0
T0 ·ρ
, and δ = C3 ·
=
C
·
,
η
=
C
·
.
1
t
2
1
2
2
t > T0 .
ρ
Ξx
ln (T0 )
t·ρ
Using such learning rates for our main lemmas, one can prove in one page (see Appendix i.F)
• a weaker version of Theorem 2 where (Λ1 , Λ2 ) are replaced by (1, 0), and
• a weaker version of Theorem 1 where Λ = λ1 + · · · + λk is replaced by 1.
13
Appendix Overview
Main Lemma Appendix ii.G
Lower Bound
Theorem 6
Rayleigh Quotient
Theorem 3
(before warm start)
Lemma Main 3
Appendix ii
upgrade
Appendix i
Expectation Lemma
Lemma 6.1 and Appendix i.B
upgra
Main Lemma Appendix i.E
de
Oja (GF, weak)
Theorem 2’
Decoupling Lemma
Appendix i.D
(after warm start)
Lemma Main 2
Oja (GD, weak)
Theorem 1’
Initialization Lemma
Lemma 5.1
Appendix i.A
rade
(before warm start)
Lemma Main 1
upg
Martingale Lemma
Appendix i.C
upgrade
u p grade
Appendix iii
Main Lemma Appendix iii.L
(before warm start)
Lemma Main 4
(after warm start)
Lemma Main 5
Expectation Lemma
Appendix iii.K
(under sampling)
Lemma Main 6
Oja (GF)
Theorem 2
Oja (GD)
Theorem 1
Initialization Lemma
Lemma iii.J.2
Appendix iii.J
Oja++ (GF)
Theorem 5
Oja++ (GD)
Theorem 4
Figure 1: Overall structure of this paper. GF and GD stand for gap-free and gap-dependent respectively.
We divide our appendix sections into three parts, Appendix i, ii, and iii.
• Appendix i (page 16) provides complete proof but for two weaker versions of our Theorem 1
and 2.
– Appendix i.A and i.B give missing proofs for Section 5 and 6;
– Appendix i.C and i.D provide details for our martingale and decoupling lemmas;
– Appendix i.E proves main lemmas in Section 7 and Appendix i.F puts everything together.
• Appendix ii (page 35) includes proofs for Theorem 6 and Theorem 3.
– Appendix ii.G extends our main lemmas to better serve for the rayleigh quotient setting;
14
– Appendix ii.H provides the final proof for our Rayleigh Quotient Theorem 3;
– Appendix ii.I includes a three-paged proof of our lower bound Theorem 6.
• Appendix iii (page 42) provide full proofs not only to the stronger Theorem 1 and Theorem 2
for Oja’s algorithm, but also to Theorem 4 and Theorem 5 for Oja++ .
–
–
–
–
–
Appendix
Appendix
Appendix
Appendix
Appendix
iii.J extends our initialization lemma in Appendix i.A to stronger settings;
iii.K extends our expectation lemmas in Appendix i.B to stronger settings;
iii.L extends our main lemmas in Appendix i.E to stronger settings;
iii.M provides the final proofs for our Theorem 1 and Theorem 2;
iii.N provides the final proofs for our Theorem 4 and Theorem 5.
We include the dependency graphs of all of our main sections, lemmas and theorems in Figure 1
for a quick reference.
15
Appendix (Part I)
In this Part I of the appendix, we provide complete proof but two weaker versions of our
Theorem 1 and 2. We state these weaker versions Theorem 1’ and 2’ here, and meanwhile:
• Appendix i.A and i.B give missing proofs for Section 5 and 6;
• Appendix i.C and i.D provide details for our martingale and decoupling lemmas;
• Appendix i.E proves main lemmas in Section 7; and
• Appendix i.F puts everything together and proves Theorem 1’ and 2’.
def
Theorem 1’ (gap-dependent streaming k-PCA). Letting gap = λk − λk+1 ∈ 0, k1 , for every
ε, p ∈ (0, 1) define learning rates
1
e
Θ gap·T0
1 ≤ t ≤ T0 ;
k
e
T0 = Θ
, ηt =
2
2
1
e
Θ
gap · p
t > T0 .
gap·t
Let Z be the column orthonormal matrix consisting of all eigenvectors of Σ with values no more
than λk+1 . Then, the output QT ∈ Rd×k of Oja’s algorithm satisfies with prob. at least 1 − p:
T0
e
for every T = T0 + Θ
it satisfies kZ> QT k2F ≤ ε .
ε
e hides poly-log factors in 1 , 1 and d.
Above, Θ
p gap
Theorem 2’ (gap-free streaming k-PCA). For every ρ, ε, p ∈ (0, 1), define learning rates
1
e
Θ
t ≤ T0 ;
k
0
e
ρ·T
T0 = Θ
, ηt =
2
2
e 1
Θ
ρ ·p
t > T0 .
ρ·t
Let W be the column orthonormal matrix consisting of all eigenvectors of Σ with values no more
than λk − ρ. Then, the output QT ∈ Rd×k of Oja’s algorithm satisfies with prob. at least 1 − p:
T0
e
for every T = T0 + Θ
it satisfies kW> QT k2F ≤ ε .
ε
e hides poly-log factors in 1 , 1 and d.
Above, Θ
p ρ
16
i.A
Random Initialization (for Section 5)
Recall that Q ∈ Rd×k is a matrix with each entry i.i.d drawn from N (0, 1), the standard gaussian.
i.A.1
Preparation Lemmas
Lemma i.A.1. For every x ∈ Rd that has Euclidean norm kxk2 ≤ 1, every PSD matrix A ∈ Rk×k ,
and every λ ≥ 1, we have
− λ
Pr x> ZZ> QAQ> ZZ> x ≥ Tr(A) + λ ≤ e 8Tr(A) .
Q
Proof of Lemma i.A.1. Let A = UΣA U> be the eigendecomposition of A, and we denote by
Qz = Z> QU ∈ R(d−k)×d . Since a random Gaussian matrix is rotation invariant, and since U is
unitary and Z is column orthonormal, we know that each entry of Qz draw i.i.d. from N (0, 1).
Next, since we have kZ> xk2 ≤ 1, it satisfies that y = x> ZZ> QU is a vector with each coordinate
i independently drawn from distribution N (0, σi ) for σi ≤ 1. This implies
x> ZZ> QAQ> ZZ> x = y > ΣA y =
P
)2
k
X
[ΣA ]i,i (yi )2 .
i=1
distribution14
is a subexponential
with parameter (σ 2 , b) where σ 2 , b ≤
Now,
i∈[k] [ΣA ]i,i (yi
Pk
4 i=1 [ΣA ]i,i . Using the subexponential concentration bound [20], we have for every λ ≥ 1,
" k
#
(
)
k
X
X
λ
2
Pr
[ΣA ]i,i (yi ) ≥
[ΣA ]i,i + λ ≤ exp − Pk
.
8 i=1 [ΣA ]i,i
i=1
i=1
After rearranging, we have
λ
− 8Tr(A)
Pr[x> ZZ> QAQ> ZZ> x ≥ Tr(A) + λ] ≤ e
.
The following lemma is on the singular value distribution of a random Gaussian matrix:
Lemma i.A.2 (Theorem 1.2 of [19]). Let Q ∈ Rk×k be a random matrix with each entry i.i.d.
drawn from N (0, 1), and σ1 ≤ σ2 ≤ · · · ≤ σk be its singular values. We have for every j ∈ [k] and
α ≥ 0:
j 2
αj
Pr σj ≤ √ ≤ (2e)1/2 α
.
k
Lemma i.A.3. Let Q be our initial matrix, then for every p ∈ (0, 1):
h
√
−1 i π 2 ek
p
>
>
>
≥
≤
.
Pr Tr (V Q) (V Q)
Q
3p
1−p
Proof of Lemma i.A.3. Using Lemma i.A.2, we know that (using the famous equation
π2
6 )
Pr Tr
14
P∞
1
j=1 j 2
=
−1 π 2 ek
2ek
−2
>
≥
(V Q) (V Q)
≤ Pr ∃j ∈ [k], σj (V Q) ≥ 2
3p
j p
√
√ X
k
j p
p
2
= Pr ∃j ∈ [k], σj (V> Q) ≤ √
≤
pj /2 ≤
.
1−p
2ek
j=1
>
>
>
Recall that a random variable X is (σ 2 , b)-subexponential if log E exp(λ(X − E X)) ≤ λ2 σ 2 /2 for all λ ∈ [0, 1/b].
The squared standard Gaussian variable is (4, 4)-subexponential.
17
i.A.2
Proof of Lemma 5.1
2
Proof of Lemma 5.1. Applying Lemma i.A.3 with the choice of probability = p4 , we know that
−1
36k
def
.
Pr Tr(A) ≥ 2 ≤ p where A = (V> Q)> (V> Q)
Q
p
n
o
Conditioning on event C = Tr(A) ≤ 36k
, and setting r = 36k
, we have for every fixed x1 , ..., xT
p2
p2
and fixed i ∈ [T ], it satisfies
T 1/2
i−1
>
>
−1
≥
18r
ln
Pr x>
ZZ
(Σ/λ
)
Q(V
Q)
C,
x
t
k+1
t
Q
q
2
¬
T 1/2
≤ Pr yt ZZ> Q(V> Q)−1 ≥ 18r ln
C, x1 , .., xt
q
2
® q2
T2
>
>
>
>
≤ Pr yt ZZ QAQ ZZ yt ≥ 9r ln 2 | C, x1 , .., xt ≤ 2 .
q
T
i−1
>
Above, ¬ uses the definition yt = x>
; is from the definition of A; and ® is
t ZZ (Σ/λk+1 )
> Σ i−1
owing to Lemma i.A.1 together with the fact that kyt k2 ≤ kxt k2 · ZZ
≤ 1 and the fact
λk+1
2
def
that Z> Q is independent of V> Q.15 Next, define event
i−1
>
C2 = ∃i ∈ [T ], ∃t ∈ [T ], x>
Q(V> Q)−1
t ZZ (Σ/λk+1 )
T 1/2
≥ 18r ln
.
q
2
The above derivation, after taking union bound, implies that for every fixed x1 , ..., xT , it satisfies
PrQ [C2 | C, x1 , ..., xT ] ≤ q 2 . Therefore, denoting by 1C2 the indicator function of event C2 ,
1
Pr [C2 | Q] C
Pr Pr [C2 | Q] ≥ q C ≤ E
Q x1 ,...,xT
q Q x1 ,...,xT
1
= E
E [1C | Q] C
q Q x1 ,...,xT 2
1
=
E
E[1C | C, x1 , . . . , xT ]
q x1 ,...,xT Q 2
1
=
E
Pr[C2 | C, x1 , . . . , xT ]
≤ q .
q x1 ,...,xT Q
Above, the first inequality uses Markov’s bound. In an analogous manner, we define event
n
d 1/2 o
C3 = ∃j ∈ [d], j ≥ k + 1, kνj> Q(V> Q)−1 k2 ≥ 18r ln
p
where νj is the j-th eigenvector of Σ corresponding to eigenvalue λj . A completely analogous proof
as the lines above also shows PrQ [C3 | C] ≤ q. Finally, using union bound
h ^
i
h
i
Pr C3
Pr [C2 | Q] ≥ q ≤ Pr[C3 | C] + Pr
Pr [C2 | Q] ≥ q C + Pr[C] ≤ q + q + p ,
Q
x1 ,...,xT
Q
Q
x1 ,...,xT
Q
we conclude that with probability at least 1 − p − 2q over the random choice of Q, it satisfies
• Prx1 ,...,xT [C2 | Q] < q, and
• C3 holds (which implies kZ> Q(V> Q)−1 k2F < 18rd ln dp as desired).
15
In principle, we only proved Lemma i.A.1 when Q is a random matrix, independent of A. Here, A also depends
on Q but only on V> Q. Therefore, A is independent from Z> Q, so we can still safely apply Lemma i.A.1.
18
i.B
Expectation Lemmas (for Section 6)
Let X ∈ Rd×r be a generic matrix that shall later be chosen as either X = W, X = Z, or X = [w]
for some vector w. We recall the following notions from Section 6
Lt = Pt Q(V> Pt Q)−1 ∈ Rd×k
r×k
R0t = X> xt x>
t Lt−1 ∈ R
St = X> Lt ∈ Rr×k
k×k
H0t = V> xt x>
t Lt−1 ∈ R
Lemma i.B.1. For every Q ∈ Rd×k and every t ∈ [T ], suppose for φt ≥ 0, xt satisfies:
1
>
>
−1
kx>
.
t Lt−1 k2 = kxt Pt−1 Q(V Pt−1 Q) k2 ≤ φt and ηt φt ≤
2
Then the following holds:
>
>
0
>
0
(a) Tr(S>
t St ) ≤ Tr(St−1 St−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
2
0 2
2
0
0
+ (12ηt2 kH0t k22 + 2ηt2 kR0t k2 kH0t k2 )Tr(S>
t−1 St−1 ) + 8ηt kRt k2 + 2ηt kRt k2 kHt k2
>
2
2
0 2
>
2
2 kR0 k2 Tr(S> S
4 2
0 2
(b) |Tr(S>
t St )−Tr(St−1 St−1 )| ≤ 243ηt kHt k2 Tr(St−1 St−1 ) +12η
t 2
t−1 t−1 )+300ηt φt kRt k2
q t
>
>
2 2
>
(c) |Tr(S>
t St ) − Tr(St−1 St−1 )| ≤ 9ηt φt Tr(St−1 St−1 ) + 2ηt φt Tr(St−1 St−1 ) + 10ηt φt
Proof of Lemma i.B.1. We first notice that
X> Pt Q = X> Pt−1 Q + ηt X> xt x>
t Pt−1 Q
and
V> Pt Q = V> Pt−1 Q + ηt V> xt x>
t Pt−1 Q ,
where the second equality further implies (using the Sherman-Morrison formula) that
(V> Pt Q)−1 = (V> Pt−1 Q)−1 −
>
−1
ηt (V> Pt−1 Q)−1 V> xt x>
t Pt−1 Q(V Pt−1 Q)
>
−1 >
1 + η t x>
t Pt−1 Q(V Pt−1 Q) V xt
= (V> Pt−1 Q)−1 − (ηt − αt ηt2 )(V> Pt−1 Q)−1 H0t ,
def
and above we denote by αt =
ψt
1+ηt ψt
def
>
where ψt = x>
t Lt−1 V xt . Therefore, we can write
¬
St = X> Pt Q(V> Pt Q)−1
= St−1 − (ηt − αt ηt2 )St−1 H0t + ηt R0t − (ηt2 − αt ηt3 )R0t H0t
®
= St−1 − (ηt − αt ηt2 )St−1 H0t + (ηt − ψt ηt2 + αt ψt ηt3 )R0t
¯
=
St−1 − ηt St−1 Ht + ηt Rt .
Above, equality ¬ uses the definition of St and Lt ; equality uses our derived equations for
X> Pt Q and (V> Pt Q)−1 ; equality ® uses R0t H0t = ψt R0t ; and in quality ¯ we have denoted by
Ht = (1 − αt ηt )H0t and Rt = (1 − ψt ηt + αt ψt ηt2 )R0t
to simplify the notations. Note that H0t , R0t are rank one matrices so kH0t kF = kH0t k2 and kR0t kF =
19
kR0t k2 . We now proceed and compute
>
>
>
Tr(S>
t St ) = Tr(St−1 St−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
¬
>
2
>
2
>
+ηt2 Tr(H>
t St−1 St−1 Ht ) + ηt Tr(Rt Rt ) − 2ηt Tr(Rt St−1 Ht )
>
>
≤ Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
>
2
>
+2ηt2 Tr(H>
t St−1 St−1 Ht ) + 2ηt Tr(Rt Rt )
>
>
≤ Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
®
2
2 2
0 2
+2ηt2 (1 − αt ηt )2 kH0t k22 Tr(St−1 S>
t−1 ) + 2ηt (1 − ψt ηt + αt ψt ηt ) kRt k2
>
0
>
0
≤ Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
0
2
>
0
+2ηt2 |αt | Tr(S>
t−1 St−1 Ht ) + 2ηt (ηt |ψt | + ηt |αt ||ψt |) Tr(St−1 Rt )
¯
2
2 2 2
0 2
+2ηt2 (1 + 2φt ηt )2 kH0t k22 Tr(St−1 S>
t−1 ) + 2ηt (1 + φt ηt + 2φt ηt ) kRt k2
>
0
>
0
≤ Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
0
2
0
>
0
+4ηt2 kH0t k2 Tr(S>
t−1 St−1 Ht ) + 4ηt kHt k2 Tr(St−1 Rt )
°
2
0 2
+8ηt2 kH0t k22 Tr(St−1 S>
t−1 ) + 8ηt kRt k2
>
0
>
0
≤ Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
0
2
0 2
>
2
0 2
+4ηt2 kH0t k2 Tr(S>
t−1 Rt ) + 12ηt kHt k2 Tr(St−1 St−1 ) + 8ηt kRt k2 .
(i.B.1)
Above, ¬ is because 2Tr(A> B) ≤ Tr(A> A) + Tr(B> B) which is Young’s inequality in the matrix
case; and ® are both because Ht = (1 − αt ηt )H0t and Rt = (1 − ψt ηt + αt ψt ηt2 )R0t ; ¯ follow from
the parameter properties |ψt | ≤ kH0t k2 ≤ φt , |αt | ≤ 2kH0t k2 ≤ 2φt , and 0 ≤ ηt φt ≤ 21 ; ° follows
0
>
0
from |Tr(S>
t−1 St−1 Ht )| ≤ Tr(St−1 St−1 )kHt k2 which uses Proposition 2.1.
Next, Proposition 2.1 tells us
q
kR0t k2
0
0
0
>
> S
)|
≤
kR
k
kS
k
≤
kR
k
|Tr(S>
R
Tr(S
)
≤
Tr(S
S
)
+
1
, (i.B.2)
t−1 2
t−1 t
t S1
t 2
t−1 t−1
t−1 t−1
2
(the second inequality is because R0t is rank 1, and the spectral norm of a matrix is no greater than
its Frobenius norm.) we can further simplify the upper bound in (i.B.1) as
>
>
0
>
0
Tr(S>
t St ) ≤ Tr(St−1 St−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
2
0 2
>
2
0 2
+2ηt2 kR0t k2 kH0t k2 Tr(S>
t−1 St−1 ) + 1 + 12ηt kHt k2 Tr(St−1 St−1 ) + 8ηt kRt k2
>
0
>
0
= Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
2
0 2
2
0
0
+(12ηt2 kH0t k22 + 2ηt2 kR0t k2 kH0t k2 )Tr(S>
t−1 St−1 ) + 8ηt kRt k2 + 2ηt kRt k2 kHt k2 .
This finishes the proof of Lemma i.B.1-(a).
A completely symmetric analysis of the above derivation also gives
>
>
0
>
0
Tr(S>
t St ) ≥ Tr(St−1 St−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
2
0 2
2
0
0
−(12ηt2 kH0t k22 + 2ηt2 kR0t k2 kH0t k2 )Tr(S>
t−1 St−1 ) − 8ηt kRt k2 − 2ηt kRt k2 kHt k2 ,
20
and thus combining the upper and lower bounds we have
>
>
0
>
0
|Tr(S>
t St ) − Tr(St−1 St−1 )| ≤ 2ηt |Tr(St−1 St−1 Ht )| + 2ηt |Tr(St−1 Rt )|
+(12ηt2 kH0t k22
+
2ηt2 kR0t k2 kH0t k2 )Tr(S>
t−1 St−1 )
+
8ηt2 kR0t k22
+
(i.B.3)
2ηt2 kR0t k2 kH0t k2
0
≤ (2ηt kH0t k2 + 12ηt2 kH0t k22 + 2ηt2 kR0t k2 kH0t k2 )Tr(S>
t−1 St−1 ) + 2ηt kRt k2
q
Tr(S>
(i.B.4)
)
t−1 St−1
q
2
0
Tr(S>
t−1 St−1 ) + 10ηt φt kRt k2 .
(i.B.5)
¬
+8ηt2 kR0t k22 + 2ηt2 kR0t k2 kH0t k2
0
≤ 9ηt kH0t k2 Tr(S>
t−1 St−1 ) + 2ηt kRt k2
Above, ¬ again uses Proposition 2.1 and (i.B.2); uses ηt φt ≤ 1/2 and kH0t k2 , kR0t k2 ≤ φt .
Finally, if we take square on both sides of (i.B.5), we have (using again ηt kR0t k2 ≤ 12 ):
>
2
2
0 2
>
2
2
0 2
>
4 2
0 2
|Tr(S>
t St ) − Tr(St−1 St−1 )| ≤ 243ηt kHt k2 Tr(St−1 St−1 ) + 12ηt kRt k2 Tr(St−1 St−1 ) + 300ηt φt kRt k2
and this finishes the proof of Lemma i.B.1-(b). If we continue to use kH0t k2 , kR0t k2 ≤ φt to upper
bound the right hand side of (i.B.5), we finish the proof of Lemma i.B.1-(c).
Proof of Lemma 6.1 from Lemma i.B.1. According to the expectation we have E[H0t | F≤t−1 , C≤t ] =
V> (Σ + ∆)Lt−1 and E[R0t | F≤t−1 , C≤t ] = X> (Σ + ∆)Lt−1 . Now we consider the subcases separately:
(a) By Lemma i.B.1-(a),
h
i¬
2 2
>
2 2
E Tr(S>
t St ) | F≤t−1 , C≤t ≤ (1 + 14ηt φt )Tr(St−1 St−1 ) + 10ηt φt
h
i
0
>
0
+ E −2ηt Tr(S>
.
t−1 St−1 Ht ) + 2ηt Tr(St−1 Rt ) | F≤t−1 , C≤t
(i.B.6)
kR0t k2 , kH0t k2
≤ φt . Next, we compute the expectation
Above, ¬ uses
h
i
0
>
0
E −2ηt Tr(S>
S
H
)
+
2η
Tr(S
R
)
|
F
,
C
t
≤t−1 ≤t
t−1 t−1 t
t−1 t
>
>
>
= −2ηt Tr(S>
t−1 St−1 V (Σ + ∆)Lt−1 ) + 2ηt Tr(St−1 Z (Σ + ∆)Lt−1 )
>
>
>
>
≤ −2ηt gap · Tr(St−1 S>
t−1 ) − 2ηt Tr(St−1 St−1 V ∆Lt−1 ) + 2ηt Tr(St−1 Z ∆Lt−1 ) .(i.B.7)
>
>
>
>
>
Above, is because Tr(S>
t−1 Z ΣLt−1 ) = Tr(St−1 Σ>k Z Lt−1 ) = Tr(St−1 Σ>k St−1 ) ≤ λk+1 Tr(St−1 St−1 ),
>
>
>
>
>
>
as well as Tr(St−1 St−1 V ΣLt−1 ) = Tr(St−1 St−1 Σ≤k V Lt−1 ) = Tr(St−1 St−1 Σ≤k ) ≥ λk Tr(St−1 St−1 ).
Using the decomposition I = VV> + ZZ> , kVk2 ≤ 1, kZk2 ≤ 1, and Proposition 2.1 multiple
times, we have
>
>
>
>
>
Tr(S>
t−1 St−1 V ∆Lt−1 ) = Tr(St−1 St−1 V ∆(VV + ZZ )Lt−1 )
>
>
>
≤ Tr(S>
t−1 St−1 V ∆V) + Tr(St−1 St−1 V ∆ZSt−1 )
¬
3/2
>
≤ k∆k2 Tr(S>
t−1 St−1 ) + Tr(St−1 St−1 )
>
>
>
>
>
Tr(S>
t−1 Z ∆Lt−1 ) = Tr(St−1 Z ∆(VV + ZZ )Lt−1 )
>
>
>
≤ Tr(S>
t−1 Z ∆V) + Tr(St−1 Z ∆ZSt−1 )
>
1/2
≤ k∆k2 Tr(S>
S
)
+
Tr(S
S
)
.
t−1 t−1
t−1 t−1
3/2
>
>
Above, ¬ uses the fact that kSt−1 S>
t−1 St−1 kS1 ≤ kSt−1 St−1 kS1 kSt−1 k2 ≤ Tr(St−1 St−1 )
21
Plugging them into (i.B.7) gives
h
i
0
>
0
>
E −2ηt Tr(S>
S
H
)
+
2η
Tr(S
R
)
|
F
,
C
t
≤t−1 ≤t ≤ −2ηt gap · Tr(St−1 St−1 )
t−1 t−1 t
t−1 t
3/2
1/2
>
>
+2ηt k∆k2 Tr(S>
S
)
+
2Tr(S
S
)
+
Tr(S
S
)
.
t−1 t−1
t−1 t−1
t−1 t−1
(i.B.8)
Putting this back to (i.B.6) finishes the proof of Corollary 6.1-(a).
(b) In this case (i.B.7) also holds but one needs to replace gap with ρ because of the definitional
difference between W and Z. We compute the following upper bounds similar to case (a):
>
>
>
>
>
Tr(S>
t−1 St−1 V ∆Lt−1 ) = Tr(St−1 St−1 V ∆(VV + ZZ )Lt−1 )
>
>
>
>
≤ Tr(S>
t−1 St−1 V ∆V) + Tr(St−1 St−1 V ∆ZZ Lt−1 )
¬
1/2
>
>
≤ k∆k2 Tr(S>
t−1 St−1 ) 1 + Tr(Z Lt−1 Lt−1 Z)
>
>
>
>
>
Tr(S>
t−1 Z ∆Lt−1 ) = Tr(St−1 Z ∆(VV + ZZ )Lt−1 )
>
>
>
>
≤ Tr(S>
t−1 Z ∆V) + Tr(St−1 Z ∆ZZ Lt−1 )
>
>
1/2
1/2
(i.B.9)
1
+
Tr(Z
L
L
Z)
≤ k∆k2 Tr(S>
S
)
t−1 t−1
t−1 t−1
Above, ¬ is because (using Proposition 2.1)
1/2
1/2
> >
· Tr(V> ∆ZZ> Lt−1 L>
t−1 ZZ ∆ V)
1/2
>
>
≤ k∆k2 Tr(S>
t−1 St−1 ) · Tr(Z Lt−1 Lt−1 Z)
>
>
>
2
Tr(S>
t−1 St−1 V ∆ZZ Lt−1 ) ≤ Tr (St−1 St−1 )
and holds for a similar reason.
Putting these upper bounds into (i.B.7) finishes the proof of Corollary 6.1-(b).
(c) When X = [w], a slightly different derivation of (i.B.7) gives
h
i
2 2
>
2 2
E Tr(S>
t St ) | Ft−1 , C≤t ≤ (1 − 2ηt λk + 14ηt φt )Tr(St−1 St−1 ) + 10ηt φt
>
>
>
>
>
− 2ηt Tr(S>
t−1 St−1 V ∆Lt−1 ) + 2ηt Tr(St−1 w ∆Lt−1 ) + 2ηt Tr(St−1 w ΣLt−1 ) .
(i.B.10)
Note that the third and fourth terms can be upper bounded similarly using (i.B.9). As for the
fifth term, we have
1
λk
>
Tr(S>
Tr(w> ΣLt−1 L>
Tr(S>
t−1 w ΣLt−1 ) ≤
t−1 St−1 ) +
t−1 Σw)
2
2λk
Putting these together, we have:
h
i
ηt
2 2
>
2 2
E Tr(S>
S
)
|
F
,
C
kw> ΣLt−1 k22
≤t−1 ≤t ≤ 1 − ηt λk + 14ηt φt Tr(St−1 St−1 ) + 10ηt φt +
t t
λk
1/2
1/2
>
>
>
+ 2ηt k∆k2 Tr(S>
S
)
+
Tr(S
S
)
1
+
Tr(Z
L
L
Z)
t−1
t−1
t−1
t−1
t−1
t−1
i.C
Martingale Concentrations
We prove in the appendix the following two martingale concentration lemmas. Both of them
are stated in their most general form for the purpose of this paper. The first lemma is for 1-d
martingales and the second is for multi-d martingales.
At a high level, Lemma i.C.1 will only be used to analyze the sequences st or s0t (see Section 3)
after warm start — that is, after t ≥ T0 . Our Lemma i.C.2 can be used to analyze ct,s as well as st
and s0t before warm start.
22
Lemma i.C.1 (1-d martingale). Let {zt }∞
t=t0 be a non-negative random process with starting time
1
∗
t0 ∈ N . Suppose there exists δ > 0, κ ≥ 2, and τt = δt
such that
E[zt+1 | F≤t ] ≤ (1 − δτt )zt + τt2
E[(zt+1 − zt )2 | F≤t ] ≤ τt2 zt + κ2 τt4
∀t ≥ t0 :
(i.C.1)
√
|zt+1 − zt | ≤ κτt zt + κ2 τt2
2
If there exists φ ≥ 36 satisfying lnt20t ≥ 7.5κ2 (φ + 1) with zt0 ≤ φ2δln2 t0t0 , we have:
0
h
i exp{− φ −1 ln t }
0
(φ+1) ln2 t
36
Pr ∃t ≥ t0 , zt >
≤
.
φ
δ2 t
36
−1
Lemma i.C.2 (multi-dimensional martingale). Let {zt }Tt=0 be a random process where each zt ∈
T −1
RD
≥0 is F≤t -measurable. Suppose there exist nonnegative parameters {βt , δt , τt }t=0 satisfying κ ≥ 0
and κτt ≤ 1/6 such that, ∀i ∈ [D], ∀t ∈ {0, 1, . . . , T − 1},
(denoting by [zt ]i is the i-th coordinate of zt and [zt ]D+1 = 0)
2 )[z ] + δ [z ]
2 ,
E
[z
]
|
F
≤
(1
−
β
−
δ
+
τ
+
τ
t+1
i
≤t
t
t
t
i
t
t
i+1
t
t
E |[zt+1 ]i − [zt ]i |2 | F≤t ≤ τt2 [zt ]2i + [zt ]i +κ2 τt4 , and
(i.C.2)
p
[zt+1 ]i − [zt ]i ≤ κτt [zt ]i + [zt ]i + κ2 τt2 .
Then, we have: for every λ > 0, every p ∈ 1, mins∈[t] { 6κτ1s−1 } :
nP
o
t−1
2 τ 2 − pβ
Pr [zt ]1 ≥ λ ≤ λ−p maxj∈[t+1] {[z0 ]pj } exp
5p
s
s
s=0
nP
o
Pt−1
t−1
2 2
.
+ 1.4 s=0 exp
u=s+1 5p τu − pβu
The above two lemmas are stated in the most general way in order to be used towards all of
our three theorems each requiring different parameter choices of βt , δt , τt , κ. For instance, to prove
Theorem 2 it suffices to use κ = O(1).
i.C.1
Martingale Corollaries
We provide below four instantiations of these lemmas, each of them can be verified by plugging in
the specific parameters.
Corollary i.C.3 (1-d martingale). Consider the same setting as Lemma i.C.1. Suppose p ∈ (0, e12 ),
1
δ ≤ √18 , τt = δt
, κ ∈ 2, √12δ , lnt20t ≥ 9 ln(1/p)
, and zt0 ≤ 2 we have:
δ2
0
h
i
2
0 / ln t0 )
≤p .
Pr ∃t ≥ t0 , zt > 5(tt/
2
ln t
Corollary i.C.4 (multi-d martingale).
Consider the same setting as Lemma i.C.2. Suppose q ∈
Pt−1
−1 4t
1
2
(0, 1), mins∈[t] { 6κτ1s−1 } ≥ 4 ln 4t
and
s=0 τs ≤ 100 ln
q
q , then
h
i
Pr [zt ]1 ≥ 2 max 1, max {[z0 ]j } ≤ q .
j∈[t+1]
Corollary i.C.5 (multi-d martingale). Consider the same setting as Lemma i.C.2. Given q ∈
def
(0, 1), suppose there exists parameter γ ≥ 1 such that, denoting by l = 10γ ln 3t
q,
t−1
X
^
1
2
βs − lτs ≥ ln max {[z0 ]j }
and ∀s ∈ {0, 1, . . . , t − 1} : βs ≥ lτs2
κτs ≤
.
j∈[t+1]
12 ln 3t
q
s=0
Then, we have
Pr [zt ]1 ≥ 2/γ ≤ q .
23
i.C.2
Proofs for One-Dimensional Martingale
Proof of Lemma i.C.1. Define yt =
δ 2 tzt
ln t
− ln t, then we have:
δ 2 (t + 1) E[zt+1 | F≤t ]
− ln(t + 1)
ln(t + 1)
δ 2 (t + 1)(1 − δτt )zt δ 2 (t + 1)τt2
+
− ln(t + 1)
≤
ln(t + 1)
ln(t + 1)
¬ δ 2 tzt
δ 2 (t + 1) 1 − 1t
t+1
≤
zt + 2
− ln(t + 1) ≤
− ln t = yt ,
ln(t + 1)
t ln(t + 1)
ln t
t2
1
t+1
where ¬ is because for every t ≥ 4 it satisfies (t+1)(t−1)
ln(t+1) ≤ ln t and t2 ln(t+1) ≤ ln 1 + t .
At the same time, we have
δ2t
δ2
1
|yt+1 − yt | ≤
|zt+1 − zt | +
zt+1 + ,
(i.C.3)
ln t
ln t
t
t+1
where is because for every t ≥ 3 it satisfies 0 ≤ ln(t+1)
− lnt t ≤ ln1 t and ln(t + 1) − ln(t) ≤ 1/t.
Taking square on both sides, we have
2 2
2 2
δ t
δ
3
2
2
2
|yt+1 − yt | ≤ 3
|zt+1 − zt | + 3
zt+1
+ 2 .
ln t
ln t
t
Taking expectation on both sides, we have
2 2
(yt + ln t)2
3
δ t
2
(τt2 zt + κ2 τt4 ) + 3
+ 2
E[|yt+1 − yt | | F≤t ] ≤ 3
2
ln t
t
t
2
2
3(yt + ln t) 3(yt + ln t)
3(1 + κ )
<
+
+
t ln t
t2
t2
2
2
2
®
¯ 4(φt + 1)
3(φ + 1) 3(φ + 1) ln t 15κ
≤
+
+
≤
.
t
t2
4t2
t
Above, ® uses yt ≤ φ ln t and κ ≥ 2; ¯ uses lnt2 t ≥ lnt20t ≥ max 7.5κ2 , 6(φ + 1) and ln t ≥ 1.
0
Therefore, if yt ≤ φ ln t holds true for t = t0 , ..., T and t0 ≥ 8 (which implies lnt2 t ≥ lnt20t ), then
0
Z T
T
T
X
X
dt
4(φ + 1)
≤ 4(φ + 1)
≤ 4(φ + 1) ln(T ) .
E[|yt+1 − yt |2 | F≤t ] ≤
t
t=t0 −1 t
t=t
t=t
E[yt+1 | F≤t ] =
0
0
Now we can check about the absolute difference. We continue from (i.C.3) and derive that, if
yt ≤ φ ln t, then
√
δ2t
δ2
1
δ2t
δ2
1
|yt+1 − yt | ≤
|zt+1 − zt | +
zt+1 +
≤
κτt zt + κ2 τt2 +
zt+1 +
ln t
ln t
t
ln t!
ln t
t
!
r
r
°
yt + ln t
κ
(yt + ln t) 1
yt + ln t yt + ln t + κ
≤ κ
+
+
+
≤ κ
+
t ln t
t ln t
t
t
t ln t
t
!
r
r
±
²
(φ + 1) (φ + 1) ln t + κ
(φ + 1)
≤ κ
+
≤ 2κ
t
t
t
where ° uses ln t ≥ 2 and κ ≥ 2, ± uses yt ≤ φ ln t, and ² uses lnt2 t ≥ lnt20t ≥ 4 max{φ + 1, κ}.
0
From the above inequality, we have that if t0 ≥ 4κ2 (φ + 1) and yt ≤ φ ln t holds true for
t = t0 , ..., T − 1 then |yt+1 − yt | ≤ 1 for all t = t0 , . . . , T − 1.
2
Finally, since we have assumed φ > 36 and zt0 ≤ φ2δln2 t0t0 which implies yt0 ≤ φ ln2 t0 , we can apply
24
martingale concentration inequality (c.f. [5, Theorem 18]):
∞
X
Pr [∃t ≥ t0 , yt > φ ln t] ≤
Pr yT > φ ln T ; ∀t ∈ {t0 , ..., T − 1}, yt ≤ φ ln t
≤
T =t0 +1
∞
X
T =t0 +1
∞
X
Pr yT − yt0 > φ ln T /2; ∀t ∈ {t0 , ..., T − 1}, yt ≤ φ ln t
)
−(φ ln T /2)2
≤
exp
2 · 4(φ + 1) ln(T − 1) + 23 (φ ln T /2)
T =t0 +1
∞
X
φ2 /4
≤
exp − ln T
8(φ + 1) + φ/3
T =t0 +1
Z ∞
φ
exp{− 36
− 1 ln t0 }
φ
exp − ln T dT ≤
≤
.
φ
36
T =t0
36 − 1
def 4δ 2 t0
ln2 t0
φ ln2 t0
2δ 2 t0
Proof of Corollary i.C.3. Define φ =
√
1) (because κ ≤ 1/( 2δ)) and zt0 ≤
(
≥ 36 ln p1 ≥ 72. It is easy to verify that
t0
ln2 t0
≥ 7.5κ2 (φ+
= 2, so we can apply Lemma i.C.1:
n
o
φ
2
exp
−
−
1
ln
t
0
36
φ
(φ + 1) ln t
≤
exp
−
Pr ∃t ≥ t0 , zt >
≤
−
1
ln
t
0 ≤p ,
φ
δ2t
36
−
1
36
φ
φ
− 1 ln t0 ≥ 36
. Therefore, we conclude that
where the last inequality uses ln t0 ≥ 2 and 36
2
5(t0 / ln t0 )
(φ + 1) ln2 t
Pr ∃t ≥ t0 , zt >
≤ Pr ∃t ≥ t0 , zt >
≤p .
δ2t
t/ ln2 t
i.C.3
Proofs for Multi-Dimensional Martingale
Proof of Corollary i.C.4. We apply Lemma i.C.2 with λ = 2 max 1, maxj∈[t+1] {[z0 ]j } ≥ 2. Using
the fact that βt ≥ 0, we have
(
)
t−1
X
Pr [zt ]1 ≥ λ = Pr [zt ]1 ≥ 2( max {[z0 ]j } + 1) ≤ (1 + 1.4t) exp − ln(2p ) + 5p2
τs2
.
j∈[t+1]
We can take p = 4 ln
4t
q
fore, denoting by α =
≤ mins∈[t] { 6κτ1s−1 }
Pt−1 2
s=0 τs , we have
s=0
which satisfies the assumption of Lemma i.C.2. There-
n
o
Pr [zt ]1 ≥ λ ≤ 4t exp − p ln 2 + 5p2 α ≤ q .
2
4t
Above, the last inequality is due to −p ln 2 + 5p2 α2 ≤ −2 ln 4t
+
5
4
ln
α ≤ − ln 4t
q
q
q which holds
for every α ≤
1
100
ln−1
4t
q.
Proof of Corollary i.C.5. We consider fixed p =
with (using the fact that γ ≥ 1)
l
5γ
βt0 = βt ,
= 2 ln 3t
q . Let yt = γ · zt , then yt satisfies (i.C.2)
δt0 = δt , (τt0 )2 = γτt2 , κ0 = κ .
def Pt−1
def P
2
We denote by b = s=0
βs = b and a = t−1
s=0 τs , and apply Lemma i.C.2 on yt with λ = 2. Using
2
2
0
the fact that βs ≥ lτs = 5zpτs we know pβt ≥ 5p2 (τt0 )2 . Therefore, for all s ∈ {0, 1, . . . , T − 1} we
25
have
Pr [[yt ]1 ≥ 2] ≤ exp −pb + 5p2 γa + p ln Ξ − p ln 2 + 1.4t exp {−p ln 2} ,
(i.C.4)
def
where we have denoted by Ξ = maxj∈[t+1] {[z0 ]j } for notational simplicity. Now, the choice p =
1
2 ln 3t
q satisfies the presumption of Lemma i.C.2 because we have assumed κτs ≤ 12 ln 3t Therefore,
q
we have
^
3t
q
⇐= b − la ≥ ln Ξ p ≥ 2 ln
2
q
q
3t
−p ln 2 ≤ ln
⇐= p ≥ 2 ln
.
3t
q
i
h
Plugging them into (i.C.4) gives Pr [zt ]1 ≥ γ2 = Pr [yt ]1 ≥ 2 ≤ 2q + 2q = q .
−pb + 5p2 γa + p ln Ξ − p ln 2 = p(−b + la + ln Ξ − ln 2) ≤ ln
Proof of Lemma i.C.2. Define vector st for every t ∈ {0, 1, . . . , T − 1} and i ∈ [D], it satisfies
def
]i
[st ]i = [z[zt+1
− 1. We have
t ]i
τ2
[zt ]i+1
+ t .
E [st ]i | F≤t ≤ −(δt + βt − τt2 ) + δt
[zt ]i
[zt ]i
In particular,
κ2 τt4
τ2
≤ (2 + (τt κ)2 )τt2 ≤ 3τt2 ,
E [st ]2i | F≤t ≤ τt2 + t +
[zt ]i
[zt ]2i
κτt
κ2 τt2
[st ]i ≤ κτt + p
+
≤ κτt (2 + κτt ) ≤ 3κτt .
[zt ]i
[zt ]i
if [zt ]i ≥ 1, then
(i.C.5)
(i.C.6)
(i.C.7)
We consider [zt+1 ]pi for some fixed value p ≥ 1 and derive that (using (i.C.7))
X
p
p
1
[st ]qi
if (κτt )p ≤ and [zt ]i ≥ 1, then [zt+1 ]pi = [zt ]pi (1 + [st ]i )p = [zt ]pi
6
q
q=0
≤ [zt ]pi 1 + p[st ]i + p2 [st ]2i .
After taking expectation, we have if (κτt )p ≤
¬
E [[zt+1 ]pi | F≤t ] ≤
≤
=
®
≤
=
¯
≤
1
6
and [zt ]i ≥ 1, then
[zt ]pi 1 + p E [[st ]i | F≤t ] + 3p2 τt2
τt2 p
[zt ]i+1
p
2 2
2
[zt ]i 1 − p(δt + βt − τt ) + δt p
+
+ 3p τt
[zt ]i
[zt ]i
[zt ]pi 1 − p(δt + βt − τt2 ) + 3p2 τt2 + δt p[zt ]p−1
[zt ]i+1 + pτt2 [zt ]ip−1
i
p−1 p 1 p
p
2
2 2
2
[zt ]i + [zt ]i+1
[zt ]i 1 − p(δt + βt − τt ) + 3p τt + pτt + δt p
p
p
p
p
2
2 2
2
[zt ]i 1 − δt − pβt + pτt + 3p τt + pτt + δt [zt ]i+1
[zt ]pi 1 − δt − pβt + 5p2 τt2 + δt [zt ]pi+1 .
Above, ¬ uses (i.C.6); uses (i.C.5); ® uses [zt ]i ≥ 1 and Young’s inequality ab ≤ ap /p + bq /q for
1/p + 1/q = 1; and ¯ uses p ≥ 1.
On the other hand, if (κτt )p ≤ 61 but [zt ]i < 1, we have the following simple bound (using
κτt ≤ 1/6):
p
[zt+1 ]i ≤ (1 + κτt )[zt ]i + κτt [zt ]i + κ2 τt2 ≤ (1 + κτt ) + (κτt ) + κ2 τt2 < 1.4 .
Therefore, as long as (κτt )p ≤ 16 we always have
E [zt+1 ]pi | F≤t ≤ [zt ]pi 1 − δt − pβt + 5p2 τt2 + δt [zt ]pi+1 + 1.4 =: (1 − αt )[zt ]pi + δt [zt ]pi+1 + 1.4 ,
26
def
and in the last inequality we have denoted by αt = δt + pβt − 5p2 τt2 . Telescoping this expectation,
and choosing i = 1, we have whenever p ∈ [1, mins∈[t] { 6κτ1s−1 }], it satisfies
!
t
t
t
Y
X
Y
p
p
E [[zt+1 ]1 ] ≤
(1 − αs + δs ) max {[z0 ]j } + 1.4
(1 − αu + δu )
≤
≤
j∈[t+2]
s=1
t
Y
(1 − pβs + 5p2 τs2 )
s=0
max
j∈[t+2]
{[z0 ]pj } exp
(
−p
s=0
u=s+1
t
X
p
max {[z0 ]j } + 1.4
j∈[t+2]
t
X
s=0
βs
!
s=0
+ 5p
2
t
X
s=0
τs2
)
t
Y
(1 − pβu + 5p2 τu2 )
u=s+1
t
X
+ 1.4
s=0
(
exp −p
u=s+1
Finally, using Markov’s inequality, we have for every λ > 0:
nP
o
t
2 τ 2 − pβ
Pr [zt+1 ]1 ≥ λ ≤ λ−p maxj∈[t+2] {[z0 ]pj } exp
5p
s
s
s=0
nP
o
Pt
t
2 2
+ 1.4 s=0 exp
.
u=s+1 5p τu − pβu
i.D
t
X
!
βu
!
+ 5p
2
u=s+1
Decoupling Lemmas
We prove the following general lemma. Let x1 , ..., xT ∈ Ω be random variables each i.i.d. drawn from
some distribution D. Let Ft be the sigma-algebra generated by xt , and denote by F≤t = ∨ts=1 Ft .16
Lemma i.D.1 (decoupling lemma). Consider a fixed value q ∈ [0, 1). For every t ∈ [T ] and
s ∈ {0, 1, ..., t − 1}, let yt,s ∈ RD be an Ft ∨ F≤s measurable random vector and let φt,s ∈ RD be a
fixed vector. Let D0 ∈ [D]. Define events (we denote by (i) the i-th coordinate)
h
i
(i)
(i)
0
0 def
Ct = (x1 , ..., xt−1 ) satisfies Pr ∃i ∈ [D ] : yt,t−1 > φt,t−1 Ft−1 ≤ q
xt
n
o
def
(i)
(i)
Ct00 = (x1 , ..., xt ) satisfies ∀i ∈ [D0 ] : yt,t−1 ≤ φt,t−1
def
def V
and denote by Ct = Ct0 ∧ Ct00 and C≤t = ts=1 Cs . Suppose the following three assumptions hold:
(A1) The random process {yt,s }t,s satisfy that for every i ∈ [D], t ∈ [T − 1], s ∈ {0, 1, . . . , t − 2}
(i)
(i)
(a) E yt,s+1 | Ft , F≤s , C≤s ≤ fs yt,s , q ,
(i)
(i)
(i)
(b) E |yt,s+1 − yt,s |2 | Ft , F≤s , C≤s ≤ hs yt,s , q , and
(i)
(i)
(i)
(c) yt,s+1 − yt,s ≤ gs yt,s whenever C≤s holds.
d
Above, for each i ∈ [D] and s ∈ {0, 1, . . . , T − 2}, we have fs , hs : Rd × [0, 1] → RD
≥0 , gs : R →
D
d
R≥0 are functions satisfying for every x ∈ R ,
(i)
(i)
(d) fs (x, p), hs (x, p) are monotone increasing in p, and
2
(i)
(i)
(i)
(i)
(i)
(e) x(i) − fs (x, 0) ≤ hs (x, 0) and x(i) − fs (x, 0) ≤ gs (x) whenever fs (x, 0) ≤ x(i) .
(A2) Each t ∈ [T ] satisfies Prxt [Et ] ≤ q 2 /2 where event
def
(i)
(i)
Et = xt satisfies ∀i ∈ [D] : yt,0 ≤ φt,0
16
d
.
For the purpose of this paper, one can feel free view Ω as R , each xt as the t-th sample vector, and D as the
distribution with covariance matrix Σ.
27
t
X
τu2
)
.
(A3) For every t ∈ [T ], letting xt be any vector satisfying Et , consider any random process {zs }t−1
s=0
where each zs ∈ RD
≥0 is F≤s measurable with z0 = yt,0 as the starting vector. Suppose that
t−1
whenever {zs }s=0
satisfies
(i)
(i)
E zs+1 | F≤s ≤ fs (zs , q)
(i)
(i)
(i)
∀i ∈ [D], ∀s ∈ {0, 1, . . . , t − 2} :
(i.D.1)
E |zs+1 − zs |2 | F≤s ≤ hs (zs , q)
(i)
(i)
(i)
zs+1 − zs
≤ gs (zs )
(i)
(i)
then it holds Prx1 ,...,xt−1 [∃i ∈ [D0 ] : zt−1 > φt,t−1 ] ≤ q 2 /2.
Under the above two assumptions, we have for every t ∈ [T ], it satisfies Pr[Ct ] ≤ 2tq .
Proof of Lemma i.D.1. We prove the lemma by induction. For the base case, by applying assump
(i)
(i)
tion (A2) we know that Prx1 ∃i ∈ [D0 ] : y1,0 > φ1,0 ≤ Pr[E1 ] ≤ q 2 /2 ≤ q so event C1 holds with
probability at least 1 − q. In other words, Pr[C≤1 ] = Pr[C1 ] ≤ q < 2q.
Suppose Pr[C≤t−1 ] ≤ 2(t − 1)q is true for some t ≥ 2, we will prove Pr[C≤t ] ≤ 2tq. Since it
satisfies Pr[C≤t ] ≤ Pr[C≤t−1 ] + Pr[Ct ], it suffices to prove that Pr[ Ct ] ≤ 2q.
Note also Pr[ Ct ] ≤ Pr[ Ct0 ] + Pr[ Ct00 | Ct0 ] but the second quantity Pr[ Ct00 | Ct0 ] is no more than
q according to our definition of Ct0 and Ct00 . Therefore, in the rest of the proof, it suffices to show
Pr[ Ct0 ] ≤ q.
We use yt,s (xt , x≤s ) to emphasize that yt,s is an Ft × F≤s measurable random vector. Let us
D
now fix xt to be a vector satisfying Et . Define {zs }t−1
s=0 to be a random process where each zs ∈ R
is F≤s measurable:
( (i)
yt,s n
xt , x≤s
def
(i)
(i)
o if x≤s satisfies C≤s ;
(i)
zs = zs x≤s =
(i.D.2)
(i)
min fs−1 zs−1 (x≤s−1 ), 0 , zs−1 (x≤s−1 )
if x≤s satisfies C≤s .
(i)
Then zs satisfies for every i ∈ [D], s ≤ {0, 1, . . . , t − 2},
(i)
(i)
(i)
E zs+1 | F≤s = Pr[C≤s+1 | F≤s ] · E zs+1 | C≤s+1 , F≤s + Pr C≤s+1 | F≤s · E zs+1 | C≤s+1 , F≤s
¬
(i)
≤ Pr[C≤s+1 | F≤s ] · E yt,s+1 | C≤s+1 , F≤s + Pr C≤s+1 | F≤s · fs(i) (zs , 0)
≤ Pr[C≤s+1 | F≤s ] · fs(i) yt,s , q + Pr C≤s+1 | F≤s · fs(i) zs , q
®
≤ Pr[C≤s+1 | F≤s ] · fs(i) yt,s , q + Pr C≤s+1 | F≤s · fs(i) yt,s , q
(i.D.3)
= fs(i) (zs , q)
(i.D.4)
Above, ¬ is because whenever C≤s+1 holds it satisfies
(i)
zs+1
(i)
fs (zs , 0);
(i)
zs+1
=
(i)
yt,s+1 ,
as well as whenever C≤s+1
holds it satisfies
≤
uses assumptions
(A1a) and (A1d) as well as the fact that
we have fixed xt ; ® uses the fact that whenever Pr C≤s+1 | F≤s > 0 it must hold that C≤s is
satisfied, and therefore it satisfies yt,s = zs .
Similarly, we can also show for every i ∈ [D], s ≤ {0, 1, . . . , t − 2},
(i)
E |zs+1 − zs(i) |2 | F≤s
(i)
(i)
= Pr[C≤s+1 | F≤s ] · E |zs+1 − zs(i) |2 | C≤s+1 , F≤s + Pr C≤s+1 | F≤s · E |zs+1 − zs(i) |2 | C≤s+1 , F≤s
¬
(i)
(i)
≤ Pr[C≤s+1 | F≤s ] · E |yt,s+1 − yt,s |2 | C≤s+1 , F≤s + Pr C≤s+1 | F≤s · h(i)
s (zs , 0)
(i)
≤ Pr[C≤s+1 | F≤s ] · h(i)
s yt,s , q) + Pr C≤s+1 | F≤s · hs (zs , q)
®
≤ h(i)
s (zs , q) .
(i.D.5)
28
(i)
(i)
(i)
Above, ¬ is because whenever C≤s+1 holds it satisfies zs+1 = yt,s+1 and zs
(i)
(i)
(i)
= yt,s , together
(i)
(i) 2
with whenever C≤s+1 holds it satisfies |zs+1 − ys |2 either equal zero or equal fs (zs , 0) − zs
(i)
fs (zs , 0)
,
(i)
zs
but in the latter case we must have
<
(owing to (i.D.2)) and therefore it holds
(i)
(i) 2
(i)
fs (zs , 0) − zs
≤ hs (zs , 0) using assumption (A1e). uses assumptions
(A1b) and (A1d) as
well as the fact that we have fixed xt . ® uses the fact that whenever Pr C≤s+1 | F≤s > 0 then
C≤s must hold, and therefore it satisfies yt,s = zs .
Finally, we also have
(i)
|zs+1 − zs(i) | ≤ gs(i) (zs(i) ) .
(i.D.6)
(i)
|zs+1
(i)
(i)
(i)
This is so because whenever C≤s+1 holds it satisfies
− zs | = |yt,s+1 − yt,s | so we can apply
(i)
(i)
assumption (A1c). Otherwise, C≤s+1 holds we either have |zs+1 − zs | = 0 (so (i.D.6) trivially
(i)
(i)
(i)
(i)
(i)
(i)
holds) or |zs+1 − zs | = fs zs , 0 − zs , but in the latter case we must have fs (zs , 0) < zs
(i)
(i)
(i)
(owing to (i.D.2)) so it must satisfy fs zs , 0 − zs ≤ gs (zs ) using assumption (A1e).
We are now ready to apply assumption (A3), which together with (i.D.4), (i.D.5), (i.D.6),
implies that (recalling we have fixed xt to be any vector satisfying Et )
(i)
(i)
Pr
∃i ∈ [D0 ] : zt−1 > φt,t−1 | Et ≤ q 2 /2 .
x1 ,...,xt−1
This implies, after translating back to the random process {yt,s }, we have
(i)
(i)
(i)
(i)
Pr ∃i ∈ [D0 ] : yt,t−1 > φt,t−1 ≤ Pr ∃i ∈ [D0 ] : yt,t−1 > φt,t−1 | Et + Pr[Et ]
x1 ,...,xt
x1 ,...,xt
(i)
(i)
≤ Pr
∃i ∈ [D0 ] : zt−1 > φt,t−1 | Et + q 2 /2
x1 ,...,xt−1
≤ q 2 /2 + q 2 /2 = q 2 .
where the last inequality uses (A2). Finally, using Markov’s inequality,
0
(i)
(i)
0
Ct
=
Pr
Pr ∃i ∈ [D ] : yt,t−1 > φt,t−1 | F≤t−1 > q
Pr
x1 ,...,xt−1 xt
x1 ,...,xt−1
1
(i)
(i)
0
≤
·
E
Pr[∃i ∈ [D ] : yt,t−1 > φt,t−1 | F≤t−1 ]
q x1 ,...,xt−1 xt
h
i
1
(i)
(i)
=
· Pr ∃i ∈ [D0 ] : yt,t−1 > φt,t−1 ≤ q .
q x1 ,...,xt
Therefore, we finish proving Pr[Ct0 ] ≤ q which implies Pr[C≤t ] ≤ 2tq as desired. This finishes the
proof of Lemma i.D.1.
i.E
i.E.1
Main Lemmas (for Section 7)
Before Warm Start
Proof of Lemma Main 1. For every t ∈ [T ] and s ∈ {0, 1, ..., t − 1}, consider random vectors yt,s ∈
RT +2 defined as:
(1)
def
(2)
def
yt,s = kZ> Ps Q(V> Ps Q)−1 k2F ,
yt,s = kW> Ps Q(V> Ps Q)−1 k2F ,
x> ZZ> (Σ/λ )j P Q(V> P Q)−1
s
s
(3+j) def
k+1
t
yt,s
=
(3+j)
(1 − η λ ) · y
,
s k
t,s−1
29
2
2
, for j ∈ {0, 1, . . . , t − s − 1};
for j ∈ {t − s, . . . , T − 1}.
(3+j)
(3+j)
(In fact, we are only interested in yt,s
for j ≤ t − s − 1, and can “almost” define yt,s
= +∞
whenever j ≥ t − s. However, we still decide to give such out-of-boundary variables meaningful
values in order to make all of our vectors yt,s (and functions f, g, h defined later) to be of the same
dimension T + 2. This allows us to greatly simplify our notations.)
We consider upper bounds
2ΞZ s < T0 ;
(3+j) def
(2) def
(1) def
, and φt,s = 2Ξ2x .
φt,s = 2ΞZ , φt,s =
2
otherwise.
For each t ∈ [T ], define event Ct0 and Ct00 in the same way as decoupling Lemma i.D.1 (with D0 = 3):
h
i
(i)
(i)
0 def
Ct = (x1 , ..., xt−1 ) satisfies Pr ∃i ∈ [3] : yt,t−1 > φt,t−1 Ft−1 ≤ q
xt
n
o
def
(i)
(i)
Ct00 = (x1 , ..., xt ) satisfies ∀i ∈ [3] : yt,t−1 ≤ φt,t−1
def V
def
and denote by Ct = Ct0 ∧ Ct00 and C≤t = ts=1 Cs .
As a result, if C≤s+1 holds, then we always have
2
>
−1 2
>
>
>
−1
>
>
>
−1
kx>
s+1 Ps Q(V Ps Q) k2 ≤ kxs+1 VV Ps Q(V Ps Q) k2 + kxs+1 ZZ Ps Q(V Ps Q) k2
√
(3)
≤ (1 + φs+1,s )2 = ( 2Ξx + 1)2 ≤ 4Ξ2x ,
where last inequality uses Ξx ≥ 2. This allows us to later apply Lemma i.B.1 and Lemma 6.1 with
φt = 2Ξx .
Verification of Assumption (A1) in Lemma i.D.1.
def
00
Suppose E[xs x>
s | C≤s , F≤s−1 ] = Σ + ∆, and we want to bound k∆k2 . Defining q1 = Pr[Cs |
00
0
0
Cs , C≤s−1 , F≤s−1 ], then we must have q1 ≤ q according to the definition of Cs and Cs . Using law of
total expectation:
0
>
>
00 0
E[xs x>
s | Cs , C≤s−1 , F≤s−1 ] = E[xs xs | C≤s , F≤s−1 ] · (1 − q1 ) + E[xs xs | Cs , Cs , C≤s−1 , F≤s−1 ] · q1 ,
>
0
>
and combining it with the fact that 0 xs x>
s I and E[xs xs | Cs , C≤s−1 , F≤s−1 ] = E[xs xs ] = Σ,
17
we have
Σ (Σ + ∆)(1 − q1 ) + q1 · I and Σ (Σ + ∆)(1 − q1 ) .
q1
q
After rearranging, these two properties imply k∆k2 ≤ 1−q
.
≤ 1−q
1
Now, we can apply Lemma 6.1 and obtain for every t ∈ [T ], s ∈ {0, 1, . . . , t − 2}, and every
j ∈ {0, 1, . . . , T − 1}, it satisfies18
(1)
(1)
2
2
E[yt,s+1 | Ft , F≤s , C≤s+1 ] ≤ (1 + 56ηs+1
Ξ2x )yt,s + 40ηs+1
Ξ2x + 20ηs+1
(2)
(2)
(3+j)
(3+j)
3/2
qΞZ
1−q
,
2
2
E[yt,s+1 | Ft , F≤s , C≤s+1 ] ≤ (1 − 2ηs+1 ρ + 56ηs+1
Ξ2x )yt,s + 40ηs+1
Ξ2x + 20ηs+1
2
E[yt,s+1 | Ft , F≤s , C≤s+1 ] ≤ (1 − ηs+1 λk + 56ηs+1
Ξ2x )yt,s
(3+j+1)
+ ηs+1 λk yt,s
3/2
qΞZ
1−q
, and
2
+ 40ηs+1
Ξ2x + 20ηs+1
Moreover, for every i ∈ [T + 2], using Lemma i.B.1-(c) with φt = 2Ξx we have whenever C≤s+1
17
18
Here, we use notation A B to indicate spectral dominance: that is, B − A is positive semidefinite.
We make a few comments regarding how to derive these upper bounds.
• Event C≤s+1 implies we can safely apply Lemma 6.1 with φt = 2Ξx .
• To obtain the third inequality, one needs to use the fact that when w = xt ZZ> the quantity ληkt kw> ΣLt−1 k22
ηt λ2
k+1
kw> Lt−1 k22 ≤ ηt λk kw> Lt−1 k22 .
λk
(3+j) def
(3+j)
which means yt,s
= (1 − ηs λk ) · yt,s−1 according
that appeared in Corollary 6.1-(c) can be upper bounded by
(3+j)
• Whenever j ≥ t − s we have yt,s
is “out of boundary”
to our definition. In this case it easily satisfies the third inequality.
(3+j+1)
• When j = T − 1, in the third inequality we should replace yt,s
with zero because it is out of bound.
30
3/2
qΞZ
1−q
.
holds it satisfies
(i)
(i)
(i)
yt,s+1 − yt,s ≤ 18ηs+1 Ξx · yt,s + 4ηs+1 Ξx ·
q
(i)
(i)
2
2
yt,s + 40ηs+1
Ξ2x ≤ 20ηs+1 Ξx · yt,s + 42ηs+1
Ξ2x .
Putting the above bounds together, one can verify that the random process {yt,s }t∈[T ],s≤t−1 satisfy
assumption (A1) of Lemma i.D.1 with19
2
2
Ξ2x )y (1) + 40ηs+1
Ξ2x + 20ηs+1
fs(1) (y, q) = (1 + 56ηs+1
3/2
qΞZ
1−q
,
2
2
Ξ2x + 20ηs+1
fs(2) (y, q) = (1 − 2ηs+1 ρ + 56ηs+1
Ξ2x )y (2) + 40ηs+1
3/2
qΞZ
1−q
,
2
2
Ξ2x )y (3+j) + ηs+1 λk y (3+j+1) + 40ηs+1
Ξ2x + 20ηs+1
fs(3+j) (y, q) = (1 − ηs+1 λk + 56ηs+1
3/2
qΞZ
1−q
,
2
gs(i) (y) = 20ηs+1 Ξx · y (i) + 42ηs+1
Ξ2x , and
2
(i)
h(i)
s (y, q) = gs (y)
Verification of Assumption (A2) of Lemma i.D.1.
(i)
For coordinates i = 1 and i = 2, our assumption kZ> Q(V> Q)−1 k2F ≤ ΞZ implies yt,0 ≤ ΞZ <
h
(i)
j−1
>
Q(V> Q)−1 2 ≤
φt,0 . For coordinates i ≥ 3, we have assumption Prxt ∀j ∈ [T ], x>
t ZZ (Σ/λk+1 )
i
def
(i)
(i)
Ξx ≥ 1 − q 2 /2. Together, event Et (recall Et = xt satisfies ∀i ∈ [D] : yt,0 ≤ φt,0 ) holds for all
t ∈ [T ] with probability at least 1 − q 2 /2. In sum, assumption (A2) is satisfied in Lemma i.D.1.
Verification of Assumption (A3) of Lemma i.D.1.
For every t ∈ [T ], at a high level assumption (A3) is satisfied once we plug in the following three
sets of parameter choices to Corollary i.C.4 and Corollary i.C.5: for every s ∈ [T − 1], define
βs,1 = 0,
δs,1 = 0,
τs,1 = 20ηs+1 Ξx
βs,2 = 2ηs+1 ρ,
δs,2 = 0,
τs,2 = 20ηs+1 Ξx
βs,3 = 0,
δs,3 = ηs+1 λk
τs,3 = 20ηs+1 Ξx
More specifically, for every t ∈ [T ], let
Lemma i.D.1. Define q2 = q 2 /8.
{zs }t−1
s=0
be the arbitrary random vector satisfying (i.D.1) of
• For coordinate i = 1 of {zs }t−1
s=0 ,
– apply Corollary i.C.4 with {βs,1 , δs,1 , τs,1 }t−2
s=0 , q = q2 , D = 1, and κ = 1;
• For coordinate i = 2 of {zs }t−1
s=0 ,
– if t < T0 , apply Corollary i.C.4 with {βs,2 , δs,2 , τs,2 }t−2
s=0 , q = q2 , D = 1, and κ = 1;
t−2
– if t ≥ T0 , apply Corollary i.C.5 with {βs,2 , δs,2 , τs,2 }s=0 , q = q2 , D = 1, γ = 1, and κ = 1;
• For coordinates i = 3, 4, . . . , T + 2 of {zs }t−1
s=0 ,
– apply Corollary i.C.4 with {βs,3 , δs,3 , τs,3 }t−2
s=0 , q = q2 , D = T , and κ = 1.
19
(i)
(i)
The only part of (A1) that is non-trivial to verify is (A1e) for gs . Whenever fs x, 0 ≤ x(i) , it satisfies
if i = 1;
0,
2ηs+1 ρ · x(2) , if i = 2; ≤ 2ηs+1 · x(i) ≤ gs(i) (x) ,
fs(i) x, 0 − x(i) ≤
ηs+1 λk · x(i) , if i ≥ 3.
where the second inequality uses ρ, λk ≤ 1 and the last inequality uses Ξx ≥ 2.
31
One needs to verify that the assumptions of Corollary i.C.4 and i.C.5 are satisfied as follows.
First of all, one can carefully check that our parameters β, δ, τ satisfy (i.C.2) with κ = 1 and this
needs our assumption q ≤ ηs+1
3/2 . Next, we can apply Corollary i.C.4 because our assumptions on ηs
ΞZ
PT −1 2
1
1
imply s=0 τs,i ≤ 100
ln−1 4T
q2 and τs,i ≤ 24 log(4T /q2 ) for i = 1, 2, 3. To verify the presumption of
Corollary i.C.5 with γ = 1, we notice that
• our assumption ηs ≤
ρ
4000·Ξ2x ln
3T
q2
2
implies βs,2 ≥ 10 ln 3T
q2 · τs,2 and κτs ≤
1
12 ln
3T
q2
for every s,
P 0 −1
P
3t 2
• our assumption Ts=0
βs,2 ≥ 1 + ln ΞZ implies t−1
s=0 βs − 10 ln q2 τs ≥ ln ΞZ + 1 − 1 = ln ΞZ
whenever t > T0 ,
Therefore, the conclusion of Corollary i.C.4 and Corollary i.C.5 imply that
(i)
(i)
Pr[∃i ∈ [3] : zt−1 > φt,t−1 ] ≤ 3q2 < q 2 /2
so assumption (A3) of Lemma i.D.1 holds.
Application of Lemma i.D.1. Applying Lemma i.D.1, we have Pr[CT ] ≤ 2qT which implies
our desired bounds and this finishes the proof of Lemma Main 1.
i.E.2
After Warm Start
Proof of Lemma Main 2. For every t ∈ [T ] and s ∈ {0, 1, ..., t − 1}, consider the same random
vectors yt,s ∈ RT +2 defined in the proof of Lemma Main 1:
(1)
def
(2)
def
yt,s = kZ> Ps Q(V> Ps Q)−1 k2F ,
yt,s = kW> Ps Q(V> Ps Q)−1 k2F ,
x> ZZ> (Σ/λ )j P Q(V> P Q)−1
s
s
(3+j) def
k+1
t
yt,s
=
(1 − η λ ) · y (3+j) ,
s k
2
2
, for j ∈ {0, 1, . . . , t − s − 1};
t,s−1
This time, we consider slightly different upper bounds
2ΞZ
if s < T0 ;
(1) def
(2) def
2
if s = T0 ; ,
φt,s = 2ΞZ , φt,s =
5T0 / ln22(T0 ) if s > T0 .
for j ∈ {t − s, . . . , T − 1}.
(3+j)
and φt,s
def
= 2Ξ2x .
s/ ln s
We stress that the only difference between the above upper bounds and the ones we used in the
(2)
proof of Lemma Main 1 is the choice of φt,s for s > T0 . Instead of setting it to be constant 2 for
all such s, we make it decrease almost linearly with respect to index s.
Again, define event
h
i
(i)
(i)
0 def
Ct = (x1 , ..., xt−1 ) satisfies Pr ∃i ∈ [3] : yt,t−1 > φt,t−1 Ft−1 ≤ q
xt
n
o
def
(i)
(i)
Ct00 = (x1 , ..., xt ) satisfies ∀i ∈ [3] : yt,t−1 ≤ φt,t−1
def
def V
and denote by Ct = Ct0 ∧ Ct00 and C≤t = ts=1 Cs .
We next want to apply the decoupling Lemma i.D.1.
Verification of Assumption (A1) in Lemma i.D.1.
(i)
(i)
(i)
The same functions fs , gs , and hs used in the proof of Lemma Main 1 still apply here.
(i)
However, we want to make a minor change on gs whenever s ≥ T0 .
32
Applying Lemma i.B.1-(c) with φt = 2Ξx , we have whenever C≤s+1 holds for some s ≥ T0
(2)
(which implies yt,s ≤ 5),
q
q
(2)
(2)
(2)
(2)
(2)
2
2
|yt,s+1 − yt,s | ≤ 18ηs+1 Ξx yt,s + 4ηs+1 Ξx yt,s + 40ηs+1
Ξ2x ≤ 45ηs+1 Ξx yt,s + 40ηs+1
Ξ2x .
Therefore, we can choose
gs(2) (y)
q
2
= 45ηs+1 Ξx y (2) + 40ηs+1
Ξ2x
for all s ≥ T0 and this still satisfies assumption (A1) of Lemma i.D.1.20
Verification of Assumption (A2) of Lemma i.D.1.
This is the same as the proof of Lemma Main 1.
Verification of Assumption (A3) in Lemma i.D.1.
Again, for every t ∈ [T ], let {zs }t−1
s=0 be the arbitrary random vector satisfying (i.D.1) of
2
Lemma i.D.1. Choosing q2 = q /8 again, the same proof of Lemma Main 1 shows that
(i)
(i)
Pr[∃i ∈ {1, 3} : zt−1 > φt,t−1 ] ≤ 2q2 .
(2)
(2)
Therefore, it suffices to prove that Pr[zt−1 > φt,t−1 ] ≤ 2q2 .
(2)
We only need to focus on the case t ≥ T0 + 2, because otherwise if t ≤ T0 + 1 then gs is
(2)
not changed for all s ∈ {0, . . . , t − 2} so the same proof of Lemma Main 1 also shows Pr[zt−1 >
(2)
φt,t−1 ] ≤ q2 .
When t ≥ T0 + 2, we can first apply the same proof of Lemma Main 1 (for t = T0 + 1) to show
(2)
(2)
(2)
that Pr[zT0 > φT0 +1,T0 = 2] ≤ q2 . Next, conditioning on zT0 ≤ 2 which happens with probability
1
.
at least 1 − q2 , we want to apply Corollary i.C.3 with κ = 2 and τs = δs
More specifically, for every t ∈ {T0 + 2, . . . , T }, we have shown that the random sequence
(2)
{zs }t−1
s=T0 satisfies (i.D.1) with
(2)
2
2
fs(2) (y, q) = (1 − 2ηs+1 ρ + 56ηs+1
Ξ2x )y (2) + 40ηs+1
Ξ2x + 20ηs+1
q
(2)
2
gs (y) = 45ηs+1 Ξx y (2) + 40ηs+1
Ξ2x
2
(2)
h(2)
s (y, q) = gs (y)
Therefore, {zs }t−1
s=T0 also satisfies (i.C.1) with κ = 2 and τs =
our assumptions:
1
δτs
3/2
qΞZ
1−q
because the following holds from
1
2
≤ 2ηs+1 ρ − 56ηs+1
Ξ2x
s
3/2
1
2
qΞZ
2
2
τs2 = 2 2 ≥ 60ηs+1
Ξ2x ≥ 40ηs+1
Ξ2x + 20ηs+1 1−q
κτs =
≥ 40ηs+1 Ξx
δ s
δs
Now, we are ready to apply Corollary i.C.3 with q = q2 , t0 = T0 , and κ = 2. Because q2 ≤ e−2 ,
√
(2)
2)
zT0 ≤ 2, δ ≤ 1/ 8 and lnT2 0T ≥ 9 ln(1/q
, the conclusion of Corollary i.C.3 tells us
δ2
3/2
qΞZ ≤ ηs+1
δτs =
0
(2)
(2)
(2)
Pr[zt−1 > φt,t−1 | zT0 ≤ 2] ≤ q2 .
(2)
(2)
By union bound, we have Pr[zt−1 > φt,t−1 ] ≤ q2 + q2 = 2q2 as desired.
Finally, we conclude (for every t ≥ T0 + 2) that
(i)
(i)
Pr[∃i ∈ [3] : zt−1 > φt,t−1 ] ≤ 4q2 < q 2 /2
(i)
(2)
(2)
Similar to Footnote 19, we also need to verify (A1e) for gs . Whenever fs x, 0 ≤ x(2) , it satisfies fs x, 0 −
(2)
x(2) ≤ 2ηs+1 · x(2) ≤ gs (x) , where the first inequality uses ρ ≤ 1 and the second uses Ξx ≥ 2.
20
33
so assumption (A3) of Lemma i.D.1 holds.
Application of Lemma i.D.1. Applying Lemma i.D.1, we have Pr[CT ] ≤ 2qT which implies
our desired bounds and this finishes the proof of Lemma Main 2.
i.F
Proof of Theorem 1’ and 2’
Proof of Theorem
apply Lemma 5.1 with p0 = p6
1 2’. pFirst for a sufficiently large constant C, we can
and q = min CT 2 d2 , 4T and obtain: with probability at least 1−p0 −q 2 ≥ 1−p/2 over the random
choice of Q, the following holds:
>
>
−1 2 ≤ 20736dk ln 6d , and
(Z Q)(V" Q)
p
p2
F
#
q
i−1
>
>
>
−1
Prx1 ,...,xT ∃i ∈ [T ], ∃t ∈ [T ], xt ZZ (Σ/λk+1 ) Q(V Q)
2
≥
216
k ln
p
2T
q
≤
q2
2
.
Denote by C1 the union of the above two events, and we have PrQ [C1 ] ≥ 1 − p/2.
Now, for every fixed Q, whenever C1 holds, we can let
q
216
2k ln 2T
p
20736dk 6d
ln , Ξx =
,
ΞZ =
2
p
p
p
so the initial conditions in Lemma Main 1 (and thus Lemma Main 2) is satisfied. Also, according
to Parameter 7.1, our parameter choices satisfy the assumptions in Lemma Main 2. Finally, the
conclusion of Lemma Main 2 immediately implies for every T ≥ T0
T0
p
>
>
−1 2
e
Pr ∀t = {T0 , . . . , T } : kW Pt Q(V Pt Q) kF ≤ O
C1 ≥ 1 − 2qT ≥ 1 − .
x1 ,...,xT
t
2
Union bounding this with event C1 , we have
T0
>
>
−1 2
e
Pr
∀t = {T0 , . . . , T } : kW Pt Q(V Pt Q) kF ≤ O
≥1−p .
Q,x1 ,...,xT
t
Combining this with Lemma 2.2 completes the proof.
Finally, Theorem 1’ is a direct corollary of Theorem 2’ by setting ρ ← gap.
34
Appendix (Part II)
Theorem a.
In this part II of the appendix, we provide our proofs for the lower bound Theorem 6, as well
as that for the Rayleigh quotient Theorem 3.
• Appendix ii.G extends our main lemmas to better serve for the rayleigh quotient setting;
• Appendix ii.H provides the final proof for our Rayleigh Quotient Theorem 3;
• Appendix ii.I includes a three-paged proof of our lower bound Theorem 6.
Below we address, at a high level, the main ideas needed behind our main lemma extension as
well as the proof of Theorem 3.
Additional Ideas Needed for Theorem 3. In order to prove Theorem 3 which is the rayleighquotient guarantee in gap-free streaming PCA, we want to strengthen Lemma Main 1 so that it
provides guarantee essentially of the form:
for every γ ≥ 1 :
Wγ> Pt Q(V> Pt Q)−1
2
F
≤ 2/γ ,
(ii.F.1)
where Wγ is the column orthonormal matrix consisting of all eigenvectors of Σ with eigenvalues
≤ λk − γ · ρ. For obvious reason Lemma Main 1 is a special case of (ii.F.1) when restricting only to
γ = 1. It is a simple exercise to show that (ii.F.1) implies our desired rayleigh-quotient guarantee
(via an Abel transformation and an integral computation, see Appendix ii.H).
Therefore, it suffices to prove (ii.F.1). If one were allowed to magically change learning rates and
apply Lemma Main 1 multiple times, then (ii.F.1) would be trivial to prove: just replace W with
Wγ and replacing ρ with γ · ρ and repeatedly apply Lemma Main 1. Unfortunately, the difficulty
arises because want to prove (ii.F.1) for all γ ≥ 1 but with a fixed set of learning rates ηt .
In Appendix ii.G, we showed that the same learning rates in Parameter 7.1, together with a
more general martingale concentration lemma (i.e., Corollary i.C.5 with γ ≥ 1), one can obtain
(ii.F.1) and we call it Lemma Main 3. This proof follows from the same structure as that of
Lemma Main 1 except for the change in how we apply Corollary i.C.5.
Finally, Theorem 3 follows from Lemma Main 3 for a one-paged reason, see Appendix ii.H.
35
ii.G
Improved Main Lemma
In this section we also sketch the proof to obtain Rayleigh quotient result. We will prove the
following lemma which is a strengthened version of Lemma Main 1.
Lemma Main 3 (before warm start). In the same setting as Lemma Main 1, suppose we redefine
W = Wγ to be the column orthonormal matrix consisting of eigenvectors of Σ with values ≤
λk − γ · ρ.
Then, for every γ ∈ [1, 1/ρ], with probability at least 1 − 2qT :
2
2
∀t ∈ {T0 , . . . , T }, Wγ> Pt Q(V> Pt Q)−1 F ≤
.
γ
Proof of Lemma Main 3. The proof is a non-trivial adaption of the proof of Lemma Main 1.
We redefine W = Wγ and consider random vectors yt,s ∈ RT +2 defined in the same way as the
proof of Lemma Main 1. This time, we consider upper bounds
2ΞZ s < T0 ;
(1) def
(2) def
(3+j) def
φt,s = 2ΞZ , φt,s =
, and φt,s = 2Ξ2x
2/γ s ≥ T0 .
so the only difference we make here is on coordinate i = 2 for s ≥ T0 . For each t ∈ [T ], we also
def
def V
consider events Ct0 , Ct00 , Ct = Ct0 ∧ Ct00 , and C≤t = ts=1 Cs defined in the same way as before.
Verification of Assumption (A1) in Lemma i.D.1.
We consider the same functions fs , gs , hs as defined in the proof of Lemma Main 1, except
that we replace ρ with γ · ρ because this time we have redefined W = Wγ so that it consists of
eigenvectors with values ≤ λk − γ · ρ. In other words, we redefine
2
2
fs(2) (y, q) = (1 − 2ηs+1 γρ + 56Ληs+1
Ξ2x )y (2) + 40ηs+1
ΛΞ2x + 20ηs+1
3/2
qΞZ
1−q
.
In the same way we can verify that these functions satisfy assumption (A1) of Lemma i.D.1.
Verification of Assumption (A2) of Lemma i.D.1.
This step is exactly the same as the proof of Lemma Main 1 so ignored here.
Verification of Assumption (A3) of Lemma i.D.1.
We consider the same parameters {βs , δs , τs }s as Lemma Main 1 except that at coordinate i = 2
we replace ρ with γ · ρ:
βs,2 = 2ηs+1 γρ,
δs,2 = 0,
τs,2 = 20Ληs+1 Ξx .
Now, for every t ∈ [T ], let {zs }t−1
s=0 be the arbitrary random vector satisfying (i.D.1) of Lemma i.D.1.
2
Letting q2 = q /8, we can handle coordinates i = 1 and i ≥ 3 in the same way as before. As for
t−1
coordinate i = 2 of {zs }s=0
,
• if t < T0 , apply Corollary i.C.4 with {βs,2 , δs,2 , τs,2 }t−2
s=0 , q = q2 , D = 1, and κ =
√1 ;
Λ
• if t ≥ T0 , apply Corollary i.C.5 with {βs,2 , δs,2 , τs,2 }t−2
s=0 , q = q2 , D = 1, γ = γ, and κ =
√1 ;
Λ
Note that the t < T0 case is exactly the same as before. When t ≥ T0 , we again apply Corollary i.C.5
but this time with value γ ≥ 1 rather than γ = 1. Since this is the only difference here, we only
need to verify the the presumptions of Corollary i.C.5:
• our assumption ηs ≤
ρ
4000Λ·Ξ2x ln
3T
q2
2
implies βs,2 ≥ 20γ ln 3T
q2 · τs,2 and κτs ≤
P 0 −1
P
3t 2
• our assumption Ts=0
βs,2 ≥ 1 + ln ΞZ implies t−1
s=0 βs,2 − 10γ ln q2 τs,2 ≥
whenever t > T0 .
36
1
12 ln
1
2
3T
q2
Pt−1
for every s,
s=0 βs,2
≥ ln ΞZ
Therefore, in the same way as the old proof in Lemma Main 1, we can conclude using Corollary i.C.4
and Corollary i.C.5 that
(i)
(i)
Pr[∃i ∈ [3] : zt−1 > φt,t−1 ] ≤ 3q2 < q 2 /2 .
This verifies assumption (A3) of Lemma i.D.1.
Application of Lemma i.D.1. Applying Lemma i.D.1, we have Pr[CT ] ≤ 2qT which implies
our desired bounds and this finishes the proof of Lemma Main 3.
ii.H
Proof of Theorem 3: Rayleigh Quotient
e
Theorem 3 (restated). In the same setting as Theorem 2’, we have for every T = Θ
letting qi be the i-th column of the output matrix QT , then
h
i
e
Pr ∀i ∈ [k], qi> Σqi ≥ λi − Θ(ρ)
≥1−p .
k
ρ2 ·p2
,
e hides poly-log factors in 1 , 1 and d.
Again, Θ
p ρ
Proof of Theorem 3. Since the statement of Theorem 3 uses the same learning rates21 as in Theorem 2’,
the same proof of Theorem 2’ ensures that the initialization assumptions in Lemma Main 3 are satisfied and thus we can apply Lemma Main 3.
We want to prove next the output matrix QT = [q1 , . . . , qk ] ∈ Rd×k satisfies
1
with probability at least 1 − (2kdT )q, ∀i ∈ [k] : qi> Σqi ≥ λi − 3ρ ln .
(ii.H.1)
ρ
For every i ∈ [k], let QiT ∈ Rd×i denote the first i-columns of QT . By the property of Oja’s
algorithm, the same QiT would have been the output if we started from an Rd×i random matrix
Q0 for streaming i-PCA. In other words, we can write QiT = [q1 , . . . , qi ].
Letting Wγi be the column orthonormal matrix consisting of all eigenvectors of Σ with eigenvalue
≤ λi − γ · ρ, we applying Lemma Main 3 (with k = i) and obtain:
w.p. at least 1 − 2qT :
k(Wγi )> QiT k2F ≤ k(Wγi )> PT Qi (V> PT Qi )−1 k2F ≤ 2/γ .
(Above, the first inequality uses Lemma 2.2.) This in particular implies k(Wγi )> qi k22 ≤
Let us define for each i ∈ [k],
nλ − λ
o
1
def
def λi − λj
i
j
λi − λj ≥ ρ ⊆ R≥1 and γi,j =
.
Γi =
∈ 1,
ρ
ρ
ρ
By union bound,
w.p. at least 1 − 2qkdT , ∀i ∈ [k], ∀γ ∈ Γi :
k(Wγi )> qi k22 ≤ 2/γ .
2
γ
.
(ii.H.2)
We are now ready to bound Rayleigh quotient. For each i ∈ [k], let i0 be the index of the first
def P
(i.e., the largest) eigenvector with eigenvalue ≤ λi − ρ and define bi,j = ds=j hqi , νj i2 where νj is
the j-th largest eigenvector of Σ. It satisfies bi,1 = 1. By Abel’s formula,
qi> Σqi
=
d
X
j=1
21
2
λj hqi , νj i ≥ (λi − ρ) −
d
X
j=i0 +1
bi,j (λj−1 − λj ) .
More precisely, we can use the same Parameter 7.1 together with the same values of Ξx and ΞZ used in Section i.F.
37
Note that for every j ≥ i0 + 1, we have bi,j ≤ kWγi i,j qi k22 ≤
d
X
j=i0 +1
which implies
ii.I
bi,j (λj−1 − λj ) ≤
qi> Σqi
≥ λi − 3ρ ln
1
ρ
d
X
j=i0 +1
2
ρ(γi,j
γi,j
2
γi,j
according to (ii.H.2). Therefore,
Z 1
ρ 1
1
− γi,j−1 ) ≤ 2ρ
dz ≤ 2ρ ln ,
ρ
1 γ
so (ii.H.1) holds.
Proof of Theorem 6: Lower Bound
In this section we prove the following lower bound. Without loss of generality, we only focus on
the case when d = 2(k + m) and the case for d > 2(k + m) can be done by padding zeros. Denoting
by S(2(k+m))×k the set of all (column) orthonormal matrix in R(2(k+m))×k ,
Theorem 6 (lower bound, restated). For every k ∈ N∗ , every m ≥ 0 that is an integral
multiple of
1
λ
λ
∗
k, for every λ ∈ (0, 4(k+m) ], every δ ∈ (0, 2 ], every T ∈ N satisfying T ≥ Ω δ2 , every algorithm
A : (R2(k+m) )⊗T → S(2(k+m))×k , there exists a distribution µ over vectors in R2(k+m) such that
>
>
all x ∼ µ satisfies kxk2 ≤ 1, λk E [xx ] ≥ λ, λk+m+1 E [xx ] ≤ λ − δ .
x∼µ
x∼µ
def
Furthermore, let QT = A(x1 , . . . , xT ) be the output with respect to T i.i.d. random inputs
x1 , ..., xT from µ, we have
>
kλ
2
,
E
kQT WkF = Ω 2
x1 ,...,xT ,A
δ T
where W consists of all the last d − (k + m) eigenvectors of Ex∼µ [xx> ].
We shall just prove this theorem for gap case, i.e. when m = 0. For the gap free case, the
theorem follows by first obtaining µ for the m = 0 case and for λ0 = λ · m+k
k , and then modifying it
(m/k) independent copies from µ, and then concatenate
into some µ0 as follows: first draw x(0)
,
.
.
.
,
x
√
k
them into x0 = (x(0) , . . . , x(m/k) ) · √m+k
and output this x0 instead.
Throughout this section, we use the phrase i-th eigenvalue to denote the i-th largest eigenvalue
of a matrix (tie breaking arbitrarily) and the i-th eigenvector to denote that corresponding to the
i-th eigenvector (with unit norm). We first state a lemma regarding 2 × 2 matrices:
√ √
p
>
Lemma ii.I.1 (2×2 matrix). For every β ∈ 22 , 23 and ε ∈ (0, 2β 2 −1], choose a = β, 1 − β 2 , b =
p
>
β, − 1 − β 2 ∈ R2 and define
1
1
1
1
A = aa> + bb> , B =
+ ε aa> +
− ε bb> .
2
2
2
2
a
b
Then, letting ν1 and ν1 be the top eigenvectors of A and B respectively, letting λ1 , λ2 be the
eigenvalues of A, and λb1 , λb2 be the eigenvalues of B, we have
|hν1a , ν1b i|2 ≤ 1 −
ε2
16(λ1 − λ2 )2
and
λb1 > λ1 = β 2 > 1 − β 2 = λ2 > λb2 .
β2
0
Proof of Lemma ii.I.1. We can calculate that A =
and therefore λ1 = β 2 and
0 1 − β2
!
p
2
2
β
2εβ
1
−
β
2
p
λ2 = 1 − β . We can also compute B =
and the eigenvectors of B
2εβ 1 − β 2
1 − β2
p
are νib = √ 12 (1, si )> with eigenvalues λbi = β 2 + 2εβ 1 − β 2 si for i = 1, 2. Here, s1 ≥ s2 are the
si +1
38
2
2β
√−1
2εβ 1−β 2
have that λb1
two roots of equation s2 +
s − 1 = 0. Since exactly one of the roots is positive (denoted
p
= β 2 + 2εβ 1 − β 2 s1 > λ1 . This also implies λb2 < λ2 since
by s1 ), we immediately
λb1 + λb2 = Tr(B) = 1 = λ1 + λ2 .
2
We now turn to the eigenvectors. Denote by α = 2β√−1 2 and it satisfies α ≤ 2(λ1ε−λ2 ) because
2εβ 1−β
√ √
2
3
β ∈ 2 , 2 . Therefore,
√
−α + α2 + 4
2
1
1
ε
√
s1 =
=
≥√
≥q
≥√
.
2
4(λ1 −λ2 )2
8(λ
−
λ
)
α + α2 + 4
α2 + 4
1
2
+4
ε2
Above, the last inequality uses our assumption that ε ≤ 2β 2 − 1 = λ1 − λ2 . On the other hand,
s1 = α+√2α2 +4 ≤ 1. Therefore, we have:
1
s21
ε2
≤
1
−
≤
1
−
.
2
16(λ1 − λ2 )2
1 + s21
√ √
Corollary ii.I.2 (Corollary of Lemma ii.I.1). For every β ∈ 22 , 23 and every ε ∈ (0, gap] where
def
gap = 2β 2 − 1, choose a and b in R2 as defined as in Lemma ii.I.1. Define distributions over {a, b}:
1
1
1
µ1 : Pr[x = a] = Pr[x = b] =
and µ2 : Pr[x = a] = + ε, Pr[x = b] = − ε .
2
2
2
2
⊗T
2
Suppose we are given an arbitrary (possibly randomized) algorithm A : (R ) → R for T < O( ε12 ).
Then, if we pick i ∈ {1, 2} each with probability 1/2 and sample T i.i.d. vectors x1 , ..., xT from
distribution µi , it satisfies
kqk22 ε2
E[hq, ν2+ i2 ] ≥
,
512gap2
where q = A(x1 , ..., xT ) is the output of A, ν2+ is the second eigenvector of Eµi [xx> ], and the
expectation is taken over the randomness of i, x1 , ..., xT , and A.
|hν1a , ν1b i|2 =
Proof of Corollary ii.I.2. Without lose of generality, let us assume that kqk2 = 1. Let ν2a , ν2b be the
second eigenvectors of Eµ1 [xx> ] and Eµ2 [xx> ] respectively. Let ν1+ , ν2+ be the two eigenvectors of
Eµi [xx> ], and ν1− , ν2− be the two eigenvectors of Eµ3−i [xx> ].
ε2
Suppose by way of contradiction that E[hq, ν2+ i2 ] < 512gap
2 . Then, we shall construct a protocol
that can, given samples x1 , . . . , xT , with success probability at least 3/4 to tell if the distribution
µi is µ1 or µ2 . However, we cannot distinguish between a fair coin and a 21 ± ε biased coin with
probability ≥ 3/4 with fewer than O(1/ε2 ) samples. This will give a contradiction and finish the
ε2
proof that E[hq, ν2+ i2 ] ≥ 512gap
2.
We define the following protocol: on input samples x1 , ..., xT , run algorithm A and get output
ε2
q = A(x1 , . . . , xT ); then, declare distribution µ1 if hq, ν2a i2 < 128gap
2 , or distribution µ2 otherwise.
Using Markov’s inequality, with probability at least
2
3
4,
ε2
. In this
128gap2
+ − 2
ε2
hν1 , ν2 i ≥ 16gap2 . Using
it satisfies hq, ν2+ i2 <
ε
1
case, we have hq, ν1+ i2 ≥ 1 − 128gap
2 ≥ 2 . By Lemma ii.I.1, we also have
these two together we can derive that
hq, ν2− i2 = (hq, ν1+ ihν1+ , ν2− i + hq, ν2+ ihν2+ , ν2− i)2
1
ε2
ε2
ε2
≥
hq, ν1+ i2 hν1+ , ν2− i2 − hq, ν2+ i2 hν2+ , ν2− i2 ≥
−
≥
.
2
64gap2 128gap2
128gap2
This means, the protocol we just defined can declare the correct µi : indeed, among the two
ε2
vectors {ν2a , ν2b } = {ν2+ , ν2− }, the incorrect one (namely, ν2− ) will not satisfy hq, ν2− i2 < 128gap
2.
39
q
1+δ/λ
so 2β 2 − 1 = λδ . Let ε ∈ (0, λδ ] be defined such that
Proof of Theorem 6 . Choose β =
2
2λT /β 2 = Θ( ε12 ). (We can do so because T ≥ Ω δλ2 .) We let a and b be the two vectors defined
in Lemma ii.I.1 with parameters β, ε.
We also denote by A0 and A1 respectively the two 2 × 2 matrices from Lemma ii.I.1:
!
p
2
2
2
β
2εβ
β
0
1
−
β
p
A0 =
and A1 =
.
0 1 − β2
1 − β2
2εβ 1 − β 2
Now, consider the following procedure to generate T random vectors x1 , . . . , xT ∈ R2k . At the
beginning, pick a vector z ∈ {0, 1}k uniformly at random from the 2k choices. Then, in each of the
T rounds,
1. Pick y ∈ [0, 1] uniformly at random.
2. If y > kλ/β 2 , output xt = 0; otherwise continue to the next step. (Note kλ/β 2 ∈ [0, 1].)
3. Pick i ∈ [k] uniformly at random.
4. If zi = 0, then pick
xt = 0⊕2(i−1) ⊕ a ⊕ 0⊕2(k−i) w.p. 1/2 and xt = 0⊕2(i−1) ⊕ b ⊕ 0⊕2(k−i) w.p. 1/2
If zi = 1, then pick
xt = 0⊕(i−1) ⊕ a ⊕ 0⊕(k−i) w.p. 1/2 + ε and xt = 0⊕(i−1) ⊕ b ⊕ 0⊕(k−i) w.p. 1/2 − ε
It is clear from the above definition that x1 , . . . , xT are generated i.i.d. from a distribution Dz which
is characterized by vector z. Since kak2 = kbk2 = 1, we also have kxt k2 ≤ 1. By the construction,
k
M
λ
>
,
A
E [xx ] =
z
i
x∼Dz
β2
i=1
which implies, according to Lemma ii.I.1, for every possible z ∈ {0, 1}k
λ
λ
E [xx> ] ≥ 2 β 2 = λ and λk+1 E [xx> ] ≤ 2 (1 − β 2 ) ≤ λ − δ .
λk
x∼Dz
x∼Dz
β
β
Now, in the total number of T rounds, with probability ≥ 12 , there are less than 2T · kλ
rounds
β2
that xt 6= 0. Denote this event as E. If this happens, then at least 7/8 of i ∈ [k] are picked less than
16T · kλ
· 1 = 16 λT
times. Denote this set of indices as S1 ⊂ [k], and let QT = A(x1 , . . . , xT ) ∈ R2k×k
β2 k
β2
be the output of the algorithm on this random input sequence x1 , . . . , xT . (Recall that the algorithm
does not know the vector z ∈ {0, 1}k ). Let [QT ]i be the i-th row of QT , it holds:
1. ∀i ∈ [2k], k[QT ]i k22 ≤ 1.
P2k
2
2
2.
i=1 k[QT ]i k2 = kQT kF = k.
Therefore, at least 1/4 of j ∈ [k] satisfies k[QT ]2j−1 k22 + k[QT ]2j k22 ≥ 12 . We denote this set of j
def
as S2 ⊂ [k]. Let S = S1 ∩ S2 , then it holds that |S2 | ≥ k8 .
Now we apply Corollary ii.I.2 to each index i ∈ S under event E. We can apply Corollary ii.I.2
because 16λT /β 2 ≤ O( ε12 ) and the algorithm A has received only 16λT /β 2 nonzero samples under
event E. The conclusion of Corollary ii.I.2 implies: for every i ∈ S:
ε2
ε2
+ >
+
2
2
E[(νi,2
) QT Q>
≥
.
T νi,2 | E] ≥ k[QT ]2i−1 k2 + k[QT ]2i k2 ·
512(2β 2 − 1)2
1024(2β 2 − 1)2
+
Above, we have denoted by νi,2
= 0⊕2(i−1) ⊕ ν2+ ⊕ 0⊕2(k−i) where ν2+ is the second eigenvector
+
of Azi . It is clear that νi,2
QT = (ν2+ )> (q1 , ..., qk ) where each qj = ([QT ]2i−1,j , [QT ]2i,j )> ∈ R2 so
we can apply Corollary ii.I.2 on each of them. Thus, using Pr[E] ≥ 21 we conclude:
40
E
"
X
i∈S
+ >
k(νi,2
) QT k22
#
Θ(k)
ε2
1 k
=
=Θ
≥ · ·
2
2
2
2 8 1024(2β − 1)
λ(2β − 1)2 T /β 2
kλ
δ2T
.
Finally, let W be the column orthonormal matrix consisting of the (k + 1)-th till the 2k-th
+
eigenvectors of matrix Ex∼Dz [xx> ]. Since each νi,2
for i ∈ [k] is in one of the columns of W, the
above lower bound implies
h
i
kλ
>
2
E
,
kQT WkF ≥ Ω 2
z,x1 ,...,xT ,QT
δ T
where the expectation is over all possible choices z ∈ {0, 1}k , the random vectors x1 , . . . , xT generated from Dz , and the randomness of the algorithm. By an averaging argument, there exists some
z ∈ {0, 1}k such that
h
i
kλ
>
2
kQT WkF ≥ Ω 2
E
.
x1 ,...,xT ∼Dz ,QT
δ T
41
Appendix (Part III)
In this part III of the appendix, we make non-trivial modifications to Appendix I and derive
our final theorems. More specifically,
• Appendix iii.J extends our initialization lemma in Appendix i.A to stronger settings;
• Appendix iii.K extends our expectation lemmas in Appendix i.B to stronger settings;
• Appendix iii.L extends our main lemmas in Appendix i.E to stronger settings;
• Appendix iii.M provides the final proofs for our Theorem 1 and Theorem 2;
• Appendix iii.N provides the final proofs for our Theorem 4 and Theorem 5.
Theorem a.
Main Ideas Behind Main Lemmas.
Our New Main Lemmas in Appendix iii are extensions of Main Lemmas in Appendix i essentially
with two additional ingredients.
• Under Sampling
This we have already discussed in Section 3 of the main body.
• Variance Bound
Recall that our weaker theorems (such as Theorem 1’) do not have the factor Λ = λ1 +· · ·+λk ∈
[0, 1] show up in the complexity. This speed-up factor 1/Λ could be large and has been argued
as a very important factor in the total running time by [10]. To achieve this, we need tighter
martingale concentrations on our random variables and below we discuss the main intuition.
Recall that all martingale concentrations for a random process {zt }t require some upper bound
between consecutive variables |zt − zt+1 |. If this upper bound holds with probability 1, that
is, |zt − zt+1 |2 ≤ M , then an Azuma-type of concentration can be naturally
used. However,
Azuma concentration is not tight: if one knows a better bound on E |zt+1 − zt |2 | zt , the M
term can be replaced with this expected bound a tighter concentration bound can be implied.
See for instance the survey [5].
This same issue also shows up in streaming PCA. Our Lemma Main 1 and Main 2 have adopted
a probability-one absolute bound on |zt − zt+1 | because that gives the simplest proof. If one
replaces it with a tighter (but very sophisticated) expected bound, the concentration result can
be further improved and this improvement translates to a factor 1/Λ speed-up in the running
time on Oja’s algorithm in the gap-dependent case (and similarly in the gap-free case). We
present such expected bounds in Appendix iii.K.
Additional Ideas for Theorem 2.
While the extended main lemmas are sufficient to prove Theorem 1, they lead again to a
weaker version of Theorem 2 where Λ1 , Λ2 are replaced by (kΛ, kΛ2 ).22 The reason that the factor k shows up is because in gap-free case, there is no “local convergence” (i.e. convergence when
>
−1 2
kPt Q(V> Pt Q)−1 k2 = O(1)) as opposite to the gap-case. Therefore, the quantity kx>
t Pt Q(V Pt Q) k2
will stay at Ω(k) during the entire execution of Oja’s algorithm, and never decreased to O(1).
22
This new weaker version is still weaker than Theorem 2 but already stronger than Theorem 2’. We refrain from
stating it formally in this paper.
42
To overcome this issue, we consider an auxiliary “under-sampled” objective that has a “local
convergence” behavior. As illustrated by Figure 2 on page 57, we define W1 to consist of all
the eigenvectors of eigenvalue ≤ λ − ρ/2, and V1 = W1⊥ contains all eigenvectors of eigenvalue
> λ − ρ/2. Now, we define the auxiliary objective to be W1> Pt Q(V> Pt Q)−1 .
Unlike Z, this new matrix W1 has an eigengap ρ/2 as compared with V, and thus after certain
number of iterations, the quantity kW1> Pt Q(V> Pt Q)−1 k2F can drop to a constant, say ≤ 2.
Now, let us suddenly “shift” our objective to “ kW> Pt Q(V1> Pt Q)−1 k2F ”.23 The crucial
observation is that
>
−1
>
>
−1
>
>
−1
kx>
t Pt Q(V1 Pt Q) k2 ≤ 1 + kW1 Pt Q(V1 Pt Q) k2 ≤ 1 + kW1 Pt Q(V Pt Q) k2 ≤ 3 ,
√
and this is k times smaller than the previous bound
√
>
−1
kx>
t Pt Q(V Pt Q) k2 ≤ O( k) .
Therefore, we can now focus on this random process “ kW> Pt Q(V1> Pt Q)−1 k2F ”, and it converges
a factor Ω(k) faster than the old quantity kW> Pt Q(V> Pt Q)−1 k2F . Finally, Lemma 2.2 tells us
that bounding the former (with respect to V1 ) as opposed to the old quantity (with respect to V)
also implies the convergence of Oja’s algorithm, so we are done.
Additional Ideas for Oja++ Algorithm.
The analysis behind our Oja++ requires a more fine-grind argument regarding the alternation
between “under-sampling” and “objective shift”. As illustrated by Figure 4 on page 60, we consider
a logarithmic long sequence (V1 , W1 ), (V2 , W2 ), . . . and shift carefully and only when needed.
Since our main goal of Oja++ is to remove the factor O(k) in the running time, the spirit behind
these shiftings is the same as the “under-sampled” objective as we discussed above. However, in
this case we need to shift our objective once per epoch, so we need in total logarithmic many of
them.
Also, we need to establish a stronger initialization lemma. Recall that at the beginning of
each epoch of Oja++ , we insert random columns to the working matrix from the previous epoch.
Therefore, the initial matrix Q0 for every epoch (except the first epoch) is not completely fresh
e Q] where Q
e is a (fixed) column orthonormal matrix and
random, but consists of two parts Q0 = [Q,
Q is a fresh random gaussian matrix. We therefore need to re-design our initialization lemma for this
>
−1
> e
> e
−1
particular case, especially to bound the quantity kx>
t Q0 (V Q0 ) k2 = kxt [Q, Q](V [Q, Q]) k2 .
This will be our main focus in the next sub-section.
The matrix V1> Pt Q is not a square matrix anymore because of the shifting, so by inverse we actually mean the
>
−1/2 2
Moore-Penrose pseudo-inverse. kW> Pt Q(V1> Pt Q)−1 k2F is also equal to kW> Pt Q(Q> P>
kF .
t V1 V1 Pt Q)
23
43
iii.J
Final Initialization Lemmas
We state and prove the following lemma using random matrix theory:
Lemma iii.J.1. There exists constants C such that the following holds: Let Q in RN ×n , N ≥ n
be a random matrix with each entry i.i.d. N (0, 1), for every ε > 0,
√
√
1
Pr[λmin (Q> Q) ≤ ε2 ( N − n − 1)2 ] ≤ Cε log
ε
Proof of Lemma iii.J.1. Theorem 1.1 of [15] states that there exist constants C 0 , c > 0 such that
√
√
Pr[λmin (Q> Q) ≤ ε2 ( N − n − 1)2 ] ≤ (C 0 ε)N −n+1 + e−cN .
log
1
log
1
Therefore, if N > 2c ε then we are done. In the case when N ≤ 2c ε , we view Q as a submatrix
e where the entries of Q
e are also i.i.d. generated from N (0, 1).
of an N × N random matrix [Q, Q]
e and conclude that for every α ≤ 0,
In such a case, we can apply Lemma i.A.2 on [Q, Q]
h
2i
e > [Q, Q])
e ≤ α ≤ C 00 · α .
Pr λmin ([Q, Q]
N
>
>
e
e
e > [Q, Q]).
e
Since Q Q is a sub matrix of [Q, Q] [Q, Q], we know that λmin (Q> Q) ≥ λmin ([Q, Q]
Choosing α = εN , we conclude that
√
√
Pr[λmin (Q> Q) ≤ ε2 ( N − n − 1)2 ] ≤ Pr[λmin (Q> Q) ≤ ε2 N ] ≤ O(ε log(1/ε)) .
We prove the following the following initialization lemma that extends Lemma 5.1. Note that
e Q] where Q is a random matrix but Q
e is a fixed one. This
the matrix we are interested now is [Q,
will allow us to perform “under-sampling” and “objective shift” properly.
e ∈ Rd×α , Q ∈ Rd×β , V ∈ Rd×k be three matrices such that
Lemma iii.J.2 (initialization). Let Q
• α, β, k ∈ N where α ≥ 0, β ≥ 1, and α + β ≤ k ≤ d;
• each entry of Q is i.i.d. random from N (0, 1);
e and V are (column) orthonormal;
• Q
e = [] has zero column or Q
e > VV> Q
e 1 I holds.
• either Q
2
Denote by Z ∈ Rd×(d−k) the orthogonal complement of V.
Then, for every p, q ∈ (0, 1), every T ∈ N∗ , every set of random vectors {xt }Tt=1 with kxt k2 ≤ 1,
with probability at least 1 − p − q over the random choice of Q, the following holds:
−1/2 2
e Q] [Q,
e Q]> VV> [Q,
e Q]
• Z> [Q,
≤d·♣ ;
F
−1/2 2
e Q] [Q,
e Q]> VV> [Q,
e Q]
≥♣ ≤q ;
• Pr ∃i, t ∈ [T ], x> ZZ> (Σ/λk+1 )i−1 [Q,
t
x1 ,...,xT
•
where
νj> ZZ> Q(Q> VV> Q)−1/2
2
2
≤ ♣ for every j ∈ [d] .
β · log2 (1/p) · log(T d/q)
e
♣=C· 2 √
=Θ
√
p ( k − α − β − 1)2
def
β
√
√
2
p ( k − α − β − 1)2
.
def
e > VV> Q
e 1 I ∈ Rα×α and by B> def
e > VV> ∈ Rα×d .
Proof of Lemma iii.J.2. Let A2 = Q
= A−1 Q
2
We have
e > VV> Q]
e −1 Q> V V .
B> B = I and BB> = V> V> Q[Q
44
e > VV> Q]
e −1/2 is a column orthonormal matrix, we can always find C such that
Since V> Q[Q
VV> − BB> = CC> 0,
C> B = 0,
Therefore,
e Q]> VV> [Q,
e Q]
[Q,
=
C ∈ Rd×(k−α) .
!
e > VV> Q
e Q
e > VV> Q
Q
e Q> VV> Q
Q> VV> Q
A
I
B> Q
A
=
I
I
Q> B Q> (BB> + CC> )Q
Let us denote Qb = B> Q ∈ Rα×β , Qc = C> Q ∈ R(k−α)×β , σ = σmin (Q>
c Qc ), we then have:
I
Qb
A
A
>
> e
e
[Q, Q] VV [Q, Q]
I
I
Q>
Q>
b
b Qb + σI
Which implies that
−1
−1
−1
A
I + σ −1 Qb Q>
−σ −1 Qb
A
b
e Q]> VV> [Q,
e Q]
[Q,
I
I
−σ −1 Q>
σ −1 I
b
−σ −1 Qb
I + σ −1 Qb Q>
b
2
−1
>
σ −1 I
−σ Qb
I 0
>
>
+ 2σ −1 [Q>
(iii.J.1)
= 2
b , −I] [Qb , −I] .
0 0
−1
I
Qb
−σ −1 Qb
I + σ −1 Qb Q>
b
;
=
Above, the first spectral dominance uses
Q>
Q>
σ −1 I
−σ −1 Q>
b Qb + σI
b
b
the second spectral dominance uses A2 12 I
Now consider any fixed y ∈ Rd with kyk2 ≤ 1. We have
−1
e Q] [Q,
e Q]> VV> [Q,
e Q]
e Q]> y
y > [Q,
[Q,
−1/2
−1/2
e Q] [Q,
e Q]> VV> [Q,
e Q]
e Q] [Q,
e Q]> VV> [Q,
e Q]
= y > VV> [Q,
+ y > ZZ> [Q,
≤ 2 y>V
2
2
−1/2
e Q] [Q,
e Q]> VV> [Q,
e Q]
V> [Q,
2
>
>
>
2
2
−1/2
e Q] [Q,
e Q]> VV> [Q,
e Q]
+ 2 y > ZZ> [Q,
−1
e Q] [Q,
e Q] VV [Q,
e Q]
e Q]> ZZ> y .
≤ 2 + 2y ZZ [Q,
[Q,
>
2
(iii.J.2)
Above, the last inequality uses ky > Vk2 ≤ 1 and the fact that kA(A> A)−1/2 k2 ≤ 1 for every matrix
A that has full column rank. Next, using inequality (iii.J.1), we can bound
−1
1 > > e
e Q]> VV> [Q,
e Q]
e Q]> ZZ> y
y ZZ [Q, Q] [Q,
[Q,
2
e 2 + σ −1 ky > ZZ> QQ
e > − Q ]k2
≤ ky > ZZ> Qk
2
2
b
>
>
>
2
>
>
2
e
2kQb Q ZZ yk2 + 2kQ ZZ yk2
≤ 1+
.
(iii.J.3)
σ
Note that Qb = B> Q ∈ Rα×β , Qc = C> Q ∈ R(k−α)×β , and Z> Q ∈ R(d−k)×β are independent
of each other, because the entries of Q remain independent after rotation and the columns of
def
def
>
>
β
e> >
B, C, Z are pairwise orthogonal. Therefore, letting u1 = Q>
b Q ZZ y and u2 = Q ZZ y ∈ R ,
we have that
e > ZZ> yk2 );
• u1 ∈ Rβ is a random vector with entries i.i.d. in N (0, kBQ
2
• u2 ∈ Rβ is a random vector with entries i.i.d. in N (0, kZZ> yk22 );
45
2
2
• u1 , u2 and σ = σmin (Q>
c Qc ) are independent variables.
e > ZZ> yk2 ≤ 1 and kZZ> yk2 ≤ 1, it implies (using tail bound for chi-squared distriSince kBQ
2
bution) that
∀s ≥ 4 : Pr[ku1 k22 ≥ sβ], Pr[ku2 k22 ≥ sβ] ≤ (se1−s )β/2 ≤ e−s/6 .
def
Putting them into (iii.J.3) and then altogether to (iii.J.2), we claim that under event Cσ =
{σmin (Q> CC> Q) ≥ σ}, we have
−1
16βs
> e
>
> e
>
e
e
Pr y [Q, Q] [Q, Q] VV [Q, Q]
[Q, Q] y ≥ 6 +
(iii.J.4)
Cσ ≤ 2e−s/6 .
σ
Final Probability Arguments. Let us now take σ =
√
√
p2 ( k−α− β−1)2
c2 ·log2 (1/p)
for a large enough constant
c2 , we have Pr[Cσ ] ≥ 1 − p/2 according to Lemma iii.J.1. We also take s = Θ log(T d/q) , and it
satisfies
β · log2 (1/p) · log(T d/q)
16βs
√
=♣ .
=Θ
6+
√
σ
p2 ( k − α − β − 1)2
We now apply (iii.J.4) multiple times in two different manners
• We can apply (iii.J.4) with y = ν1 , ν2 , . . . , νd which are the eigenvector of Σ. By union bound,
we have Pr[C1 |Cσ ] ≥ 1 − q where C1 is the following event
−1/2 2
>
>
>
e Q] [Q,
e Q] VV [Q,
e Q]
Z [Q,
≤ d · ♣ , and
def
C1 =
F
−1/2 2
e Q] [Q,
e Q]> VV> [Q,
e Q]
∀j ∈ [d] : νj ZZ> [Q,
≤♣
2
• Define
event
−1/2
i−1 e
>
>
> e
e
ZZ
(Σ/λ
)
[
Q,
Q]
[
Q,
Q]
VV
[
Q,
Q]
C2 = ∃i ∈ [T ], ∃t ∈ [T ], x>
k+1
t
i−1
>
x>
t ZZ (Σ/λk+1 )
2
≥♣ .
(and we can do so
Then, we apply (iii.J.4) multiple times each with y =
because kyk2 ≤ 1). By union bound, we have for every fixed x1 , ..., xT , it satisfies PrQ [C2 |
Cσ , x1 , ..., xT ] ≤ q 2 . Denoting by 1C2 the indicator function of event C2 , then
1
Pr [C2 | Q] Cσ
Pr Pr [C2 | Q] ≥ q Cσ ≤ E
Q x1 ,...,xT
q Q x1 ,...,xT
1
E [1C | Q] Cσ
= E
q Q x1 ,...,xT 2
1
E
E[1C | Cσ , x1 , . . . , xT ]
=
q x1 ,...,xT Q 2
1
=
E
Pr[C2 | Cσ , x1 , . . . , xT ]
≤ q .
q x1 ,...,xT Q
Above, the first inequality uses Markov’s bound.
In sum, we just derived that PrQ [C1 |Cσ ] ≥ 1 − q and PrQ Prx1 ,...,xT [C2 | Q] ≥ q ≤ q. Applying
union bound again we have
h
i
Pr C1 ∧ Pr [C2 | Q] ≤ q ≥ 1 − p − 2q .
Q
x1 ,...,xT
This finishes the proof of Lemma iii.J.2.
46
iii.K
Final Expectation Lemmas
This section extends Appendix i.B to provide better bounds regarding the expected behaviors of
the random process we are interested. Recall that if X ∈ Rd×r is a generic matrix (either X = W,
X = Z, or X = [w] for some vector w), we have the following notions from Section 6.
Lt = Pt Q(V> Pt Q)−1 ∈ Rd×k
r×k
R0t = X> xt x>
t Lt−1 ∈ R
St = X> Lt ∈ Rr×k
k×k
H0t = V> xt x>
t Lt−1 ∈ R
Our next lemma is a stronger version of Lemma 6.1. It provide tighter expected bounds
byh introducing an additional factor
Γ, and introduce new bounds regarding the variance terms
i
2
>
>
E Tr(St St ) − Tr(St−1 St−1 ) .
Lemma iii.K.1. For every t ∈ [T ], For every t ∈ [T ], let C≤t be any event that depends on random
x1 , . . . , xt and implies
1
>
>
−1
∀j ∈ [d], kνj> Lt−1 k2 ≤ φt , kx>
,
t Lt−1 k2 = kxt Pt−1 Q(V Pt−1 Q) k2 ≤ φt where ηt φt ≤
2
and E[xt x>
t | F≤t−1 , C≤t ] = Σ + ∆. Let m be the largest integer such that λk+m > λk − ρ, and
k
k+m
X
X
def
def
Γ = min
λi + φ2t
λj + k∆k2 , 1
and χ = k + mφ2t
i=1
j=k+1
. We have:
(a) If X = [w] ∈ Rd×1 where w is a vector with Euclidean norm at most 1,
h
i
ηt
2 2
>
2 2
kw> ΣLt−1 k22
E Tr(S>
t St ) | F≤t−1 , C≤t ≤ 1−ηt λk +14Γηt φt Tr(St−1 St−1 )+10Γηt φt +
λk
1/2
1/2
>
>
>
+ 2ηt k∆k2 Tr(S>
S
)
+
Tr(S
S
)
1
+
Tr(Z
L
L
Z)
t−1 t−1
t−1 t−1
t−1 t−1
(b) If X = [w] ∈ Rd×1 where w is a vector with Euclidean norm at most 1,
i
h
2
>
|
F
,
C
E Tr(S>
S
)
−
Tr(S
S
)
t−1 ≤t
t−1 t−1
t t
2
2 2
>
4 4
≤ 243Γηt2 φ2t Tr(S>
t−1 St−1 ) + 12Γηt φt Tr(St−1 St−1 ) + 300Γηt φt
(c) If X = W,
h
i
E Tr(S>
S
)
|
F
,
C
t−1 ≤t
t t
2
≤ 1 − 2ηt ρ + 12Γηt2 φ2t + ηt2 (6φt + 8)λk+1 Tr(St−1 S>
t−1 ) + 10Γηt (2φt + 8)
3/2
>
>
+ 2ηt k∆k2 ηt (4 + φt )χ + Tr(L>
+ (5 + 4ηt )Tr(L>
t−1 ZZ Lt−1 )
t−1 ZZ Lt−1 )
1/2
>
>
+ Tr(Lt−1 ZZ Lt−1 )
.
(d) If X = W,
>
2
E[|Tr(S>
t St ) − Tr(St−1 St−1 )|2 | Ft−1 , C≤t ]
2
2
2
>
≤ 192Γηt2 φ2t Tr(St−1 S>
t−1 ) + 4ηt (4φt + 8) λk+1 Tr(St−1 St−1 )
+ 192Γηt4 φ2t + k∆k2 · 4ηt2 (4φt + 8)2 (χ + Tr(S>
t−1 St−1 )) .
47
Proof. The proof of the first two cases rely on the following tighter upper bounds when X = [w]:
n P
o 2
k
2
E kH0t k22 | F≤t−1 , C≤t ≤ φ2t E kxt Vk22 | F≤t−1 , C≤t ≤ min
λ
+
k∆k
2 , 1 φt ≤ Γφt
i=1 i
o
n
Pk
2
2
E kR0t k22 | F≤t−1 , C≤t ≤ φ2t E kxt Xk22 | F≤t−1 , C≤t ≤ min
i=1 λi + k∆k2 , 1 φt ≤ Γφt
(iii.K.1)
φ2t
as opposed to
that we have used in the past.
The proof of the last two cases rely on different upper bounds for X = W. We introduce
def
some notations that shall be only used in this proof. Let Y = [νk+1 , . . . , νk+m ] ∈ Rd×m be the
matrix consisting of all the eigenvectors of Σ with eigenvalues λk+1 , . .P
. , λk+m . In this
Pnotation,
k
k+m
>
>
>
e
ZZ = YY + WW . We also denote by V = [V, Y]. Let Λ1 = j=1 λj , Λ2 = j=k+1
λj ,
Λ = Λ1 + Λ2 φ2t ≤ Γ.
We make three quick observations:
P
P
>
2
2
> e
e>
= kj=1 λj + k+m
j=k+1 λj kνj Lt k2 ≤ Λ1 + Λ2 φt = Λ ,
Tr(Lt−1 VΣ≤k+m V Lt )P
k+m
2
2
e e>
(iii.K.2)
Tr(L>
t−1 VV Lt ) = k +
j=k+1 kνj Lt k2 ≤ k + mφt = χ , and
eV
e > + WW> )Lt ) ≤ χ + Tr(S> St−1 ) .
Tr(L> Lt ) = Tr(L> (V
t−1
t−1
t−1
Therefore, we have:
2
2
E[kH0t k22 | F≤t−1 , C≤t ] ≤ φ2t · E[kx>
t VkF | F≤t−1 , C≤t ] ≤ Γφt
2
>
>
E[kR0t k22 | F≤t−1 , C≤t ] ≤ E[kx>
t Lt−1 k2 | F≤t−1 , C≤t ] = Tr(Lt−1 ΣLt−1 ) + Tr(Lt−1 ∆Lt−1 )
e ≤k+m V
e > + WΣ>k+m W> )Lt−1 ) + k∆k2 Tr(L> Lt−1 )
≤ Tr(L> (VΣ
t−1
≤Λ+
=Λ+
t−1
>
λk+1 kW Lt−1 k2F + k∆k2 · (χ + Tr(S>
t−1 St−1 ))
>
λk+1 Tr(St−1 St−1 ) + k∆k2 · (χ + Tr(S>
t−1 St−1 ))
.
(iii.K.3)
We are now ready to prove our four cases individually.
(a) This follows from almost the same proof of Corollary 6.1-(c), except that one can replace the
use of (i.B.10) with the following (owing to (iii.K.1))
h
i
2 2
E Tr(S>
S
)
|
F
,
C
≤ (1 − 2ηt λk + 14Γηt2 φ2t )Tr(St−1 S>
≤t−1 ≤t
t t
t−1 ) + 10Γηt φt
>
>
>
>
>
−2ηt Tr(S>
t−1 St−1 V ∆Lt−1 ) + 2ηt Tr(St−1 w ∆Lt−1 ) + 2ηt Tr(St−1 w ΣLt−1 ) .
(b) This follows directly from Lemma i.B.1-(b) and (iii.K.1).
(c) The exact same first half of the proof of Lemma i.B.1 gives (i.B.1) which using kH0t k2 ≤ φt
gives
>
>
0
>
0
Tr(S>
t St ) ≤ Tr(St−1 St−1 ) − 2ηt Tr(St−1 St−1 Ht ) + 2ηt Tr(St−1 Rt )
0
2
0 2
>
2
0 2
+4ηt2 φt Tr(S>
t−1 Rt ) + 12ηt kHt k2 Tr(St−1 St−1 ) + 8ηt kRt k2 . (iii.K.4)
This time, we upper bound
0
>
>
>
>
>
> e e>
>
|Tr(S>
t−1 Rt )| = |Tr(St−1 W xt xt Lt−1 )| = |Tr(St−1 W xt xt (VV + WW )Lt−1 )|
eV
e > Lt−1 )|
(iii.K.5)
= |Tr(S> W> xt x> WSt−1 ) + Tr(S> W> xt x> V
t−1
t
t−1
¬
t
3
1 >e e>
>
>
2
≤
Tr(S>
t−1 W xt xt WSt−1 ) + kxt VV Lt−1 k2 .
2
2
Above, inequality ¬ is because 2Tr(A> B) ≤ Tr(A> A)+Tr(B> B) which is Young’s inequality
48
in the matrix case. We take expectation and get:
0
E |Tr(S>
t−1 Rt )| | Ft−1 , C≤t
3
1
>
> e e>
e e>
≤ Tr(S>
t−1 W (Σ + ∆)WSt−1 ) + Tr(Lt−1 VV (Σ + ∆)VV Lt−1 )
2
2
1
3
1
3
>
> e
>
>
> e e>
e
Tr(St−1 St−1 ) + Tr(Lt−1 VV Lt−1 )
≤ λk+1 Tr(St−1 St−1 ) + Tr(Lt−1 VΣ≤k+m V Lt ) + k∆k2 ·
2
2
2
2
¬ 3
1
3
χ
(iii.K.6)
Tr(S>
≤ λk+1 Tr(S>
t−1 St−1 ) + Λ + k∆k2 ·
t−1 St−1 ) +
2
2
2
2
Above, inequality ¬ has relied on our earlier observations (iii.K.2). At this point, plugging
(iii.K.6), (iii.K.3) into (iii.K.4) and using the assumption ηt φt ≤ 1/2, we have
2
E[Tr(S>
1 + 12Γηt2 φ2t + ηt2 (6φt + 8)λk+1 Tr(St−1 S>
t St ) | F≤t−1 , C≤t ] ≤
t−1 ) + ηt (2φt + 8)Λ
0
>
0
+ E − 2ηt Tr(S>
t−1 St−1 Ht ) + 2ηt Tr(St−1 Rt ) | F≤t−1 , C≤t
+2ηt k∆k2 · ηt (4 + φt )χ + ηt (4 + 3φt )Tr(S>
.
t−1 St−1 )
Finally, inequality (i.B.8) (from the proof of Corollary 6.1-(a)) gives an upper bound on the
0
>
0
expected value of −Tr(S>
t−1 St−1 Ht ) + Tr(St−1 Rt ). Putting it in gives us the desired bound.
(d) This time we compute a slightly different upper bound from (iii.K.5)
0
>
>
>
>
>
>
>
>
|Tr(S>
t−1 Rt )| = |Tr(St−1 W xt xt Lt−1 )| = |Tr(St−1 W xt xt (VV + ZZ )Lt−1 )|
>
>
>
>
>
>
= |Tr(S>
t−1 W xt xt V) + Tr(St−1 W xt xt ZZ Lt−1 )|
>
>
>
>
>
≤ |Tr(S>
t−1 W xt xt ZZ Lt−1 )| + kSt−1 W xt k2 .
Plugging this into (i.B.1), we obtain
>
>
0
>
0
2
0
>
0
|Tr(S>
t St ) − Tr(St−1 St−1 )| ≤ 2ηt |Tr(St−1 St−1 Ht )| + 2ηt |Tr(St−1 Rt )| + 4ηt kHt k2 Tr(St−1 Rt )
¬
2
0 2
+12ηt2 kH0t k22 Tr(St−1 S>
t−1 ) + 8ηt kRt k2 .
>
0
2
0 2
≤ 8ηt kH0t k2 Tr(St−1 S>
t−1 ) + 4ηt |Tr(St−1 Rt )| + 8ηt kRt k2
>
>
>
>
≤ 8ηt kH0t k2 Tr(St−1 S>
t−1 ) + 4ηt |Tr(St−1 W xt xt ZZ Lt−1 )|
>
2
0 2
+4ηt kS>
t−1 W xt k2 + 8ηt kRt k2
>
>
>
1/2
≤ 8ηt kH0t k2 Tr(St−1 S>
t−1 ) + ηt (4φt + 8)Tr(St−1 W xt xt WSt−1 )
+8ηt2 φt kR0t k2
Above, ¬ uses the fact that ηt kH0t k2 ≤ ηt φt ≤ 1/2, and uses kR0t k2 ≤ φt as well as
>
>
>
>
kx>
t ZZ Lt−1 k2 ≤ kxt Lt−1 k2 + kxt VV Lt−1 k2 ≤ (φt + 1)
Taking square on both sides, we have
>
2
2
0 2
>
2
2
2
>
>
>
|Tr(S>
t St ) − Tr(St−1 St−1 )|2 ≤ 192ηt kHt k2 Tr(St−1 St−1 ) + 3ηt (4φt + 8) Tr(St−1 W xt xt WSt−1 )
+192ηt4 φ2t kR0t k22
49
Finally, taking expectation and using (iii.K.3), we have (noticing that ηt φt ≤ 1/2)
>
2
E[|Tr(S>
t St ) − Tr(St−1 St−1 )|2 | F≤t−1 , C≤t ]
2
2
2
>
>
≤ 192Γηt2 φ2t Tr(St−1 S>
t−1 ) + 3ηt (4φt + 8) Tr(St−1 W (Σ + ∆)WSt−1 )
>
+192ηt4 φ2t Λ + λk+1 Tr(S>
S
)
+
k∆k
·
(χ
+
Tr(S
S
))
.
t−1
2
t−1
t−1
t−1
2
2
2
>
≤ 192Γηt2 φ2t Tr(St−1 S>
)
+
3η
(4φ
+
8)
λ
+
k∆k
Tr(S
S
)
t
2
t−1
k+1
t−1
t
t−1
>
+192ηt4 φ2t Λ + λk+1 Tr(S>
t−1 St−1 ) + k∆k2 · (χ + Tr(St−1 St−1 )) .
2
2
2
>
≤ 192Γηt2 φ2t Tr(St−1 S>
t−1 ) + 4ηt (4φt + 8) λk+1 Tr(St−1 St−1 )
+192Γηt4 φ2t + k∆k2 · 4ηt2 (4φt + 8)2 (χ + Tr(S>
t−1 St−1 )) .
iii.L
Final Main Lemmas
In this section, we extend our main lemmas in Section 7 to their strong forms. Specifically,
• Lemma Main 4 is an extension of Lemma Main 1;
• Lemma Main 5 is an extension of Lemma Main 2;
• Lemma Main 6 is a new main lemma that takes into account “under-sampling”.
Recall that given parameter ρ ∈ (0, λk ),
• V is a matrix consisting of all the eigenvectors of Σ with eigenvalue ≥ λk ,
• Z is a matrix consisting of all the eigenvectors of Σ with eigenvalue < λk ,
• W is a matrix consisting of all the eigenvectors of Σ with eigenvalue ≤ λk − ρ.
When we apply these main lemmas in later sections, we may redefine the meaning of (V, Z, W) to
be with respect to some other λ that is not necessarily λk .
iii.L.1
Before Warm Start
Lemma Main 4 (before warm start). For every q ∈ 0, 12 , ΞZ ≥ 2, Ξx ≥ 2, and fixed matrix
Q ∈ Rd×k , suppose the initial matrix Q satisfies
• kZ> Q(V> Q)−1 k2F ≤ ΞZ ,
h
j−1
>
• Prxt ∀j ∈ [T ], x>
Q(V> Q)−1
t ZZ (Σ/λk+1 )
•
24
νj> ZZ> Q(V> Q)−1
2
≤ Ξx for every j ∈ [d].24
2
i
≤ Ξx ≥ 1 − q 2 /2 for every t ∈ [T ],
This assumption is redundant for j ∈ [k] because νj> Z = 0 for j ∈ [k].
50
Pk
Let m be the number of eigenvalues of Σ in (λk − ρ, λk+1 ]. Let Λ =
Suppose also the learning rates {ηs }s∈[T ] satisfy
(1):
∀s ∈ [T ],
3/2
2q(ΞZ + dΞ2x )
ρ
≤ ηs ≤ O
Λ
Λ · Ξ2x ln dT
q
(3):
(2):
T
X
t=1
i=1 λi
+ Ξ2x
Ληt2 Ξ2x ≤ O
∃T0 ∈ [T ] such that
T0
X
t=1
Pk+m
j=k+1 λj .
1
.
ln dT
q
ln(Ξ )
Z
ηt ≥ Ω
ρ
(iii.L.1)
Then, for every t ∈ [T − 1], we have with probability at least 1 − 2qT (over the randomness of
x1 , . . . , xt ):
• if t ≥ T0 then kW> Pt Q(V> Pt Q)−1
2
F
≤ 2.
Proof of Lemma Main 4. The proof is a non-trivial adaption of the proof of Lemma Main 1.
This time we consider random vectors yt,s ∈ R2+d+T defined as:
(1)
def
(2)
def
(2+T +j)
def
yt,s = kZ> Ps Q(V> Ps Q)−1 k2F ,
yt,s = kW> Ps Q(V> Ps Q)−1 k2F ,
yt,s
(3+d+j)
yt,s
2
= νj> ZZ> Ps Q(V> Ps Q)−1 , for j ∈ [d] ,
2
x> ZZ> (Σ/λ )j P Q(V> P Q)−1 2 , for j ∈ {0, 1, . . . , t − s − 1};
def
s
s
k+1
t
=
2
(3+d+j)
(1 − η λ ) · y
,
for j ∈ {t − s, . . . , T − 1}.
s k
t,s−1
We again consider upper bounds
2ΞZ s < T0 ;
(1) def
(2) def
φt,s = 2ΞZ , φt,s =
,
2
otherwise.
(3)
(2+d+T )
and φt,s = · · · = φt,s
def
= 2Ξ2x .
For each t ∈ [T ], define event Ct0 and Ct00 in the same way as before:
h
i
(i)
(i)
0 def
Ct = (x1 , ..., xt−1 ) satisfies Pr ∃i ∈ [3 + d] : yt,t−1 > φt,t−1 Ft−1 ≤ q
xt
n
o
def
(i)
(i)
Ct00 = (x1 , ..., xt ) satisfies ∀i ∈ [3 + d] : yt,t−1 ≤ φt,t−1
def
def V
and denote by Ct = Ct0 ∧ Ct00 and C≤t = ts=1 Cs .
Verification of Assumption (A1) in Lemma i.D.1.
q1
q
Suppose E[xs x>
s | C≤s , F≤s−1 ] = Σ + ∆, then we have k∆k2 ≤ 1−q1 ≤ 1−q using the same proof
as before.
This time, we use Lemma iii.K.1 (instead of Lemma i.B.1) with φt = 2Ξx to obtain the following
tighter bounds for i ∈ [T + d + 2]:
(i)
(i)
(i)
E yt,s+1 | Ft , F≤s , C≤s ≤ fs(i) yt,s , q and E |yt,s+1 − yt,s |2 | Ft , F≤s , C≤s ≤ h(i)
s yt,s , q
51
where we define25
def
2
2
fs(i) (y, q) = 1 + O(Ληs+1
Ξ2x ) y (i) + O Ληs+1
Ξx + Err
for i = 1, 3, 4, . . . 2 + d
def
2
2
fs(i) (y, q) = 1 − 2ηs+1 ρ + O(Ληs+1
Ξ2x ) y (i) + O Ληs+1
Ξx + Err
for i = 2
def
2
2
fs(i) (y, q) = 1 − ηs+1 λk + O(Ληs+1
Ξ2x ) y (i) + ηs+1 λk y (i+1) + O Ληs+1
Ξ2x + Err
def
(i) 2
for i = 3 + d, . . . , 2 + d + T
2
2
2
4
h(i)
+ Ληs+1
Ξ2x y (i) + Ληs+1
Ξ2x + Err
for i = 1, 2, 3, . . . 2 + d
s (y, q) = O Ληs+1 Ξx y
def
2
2 (i) 2
2
4
h(i)
+ Ληs+1
Ξ2x y (i) + Ληs+1
Ξ4x
for i = 3 + d, . . . , 2 + d + T .
s (y, q) = O Ληs+1 Ξx y
def
3/2
q
Above, we denote by Err = ηs+1 Ξx ΞZ + dΞ2x · 1−q
the error term similar to the proof of
2q(Ξ
3/2
+dΞ2 )
x
Z
Lemma Main 1. Obviously if
≤ ηs is satisfied then the Err term can be absorbed into
Λ
the big-O notation.
For every i ∈ [2 + d + T ], we consider the same gs as defined in the proof of Lemma Main 1:
2
gs(i) (y) = 20ηs+1 Ξx · y (i) + 42ηs+1
Ξ2x
(i)
(i)
(i)
and it satisfies whenever C≤s+1 holds then |yt,s+1 − yt,s | ≤ gs (yt,s ) .
Putting the above bounds together, we finish verifying assumption (A1) of Lemma i.D.1.
Verification of Assumption (A2) of Lemma i.D.1.
This step is exactly the same as the proof of Lemma Main 1 so ignored here.
Verification of Assumption (A3) of Lemma i.D.1.
For every t ∈ [T ], at a high level assumption (A3) is satisfied once we plug in the following
√
def
three sets of parameter choices to Corollary i.C.4 and Corollary i.C.5: define κ = 1/ Λ > 1 and
for every s ∈ [T − 1],
√
βs,1 = 0,
δs,1 = 0,
τs,1 = O(ηs+1 Ξx · Λ)
√
βs,2 = 2ηs+1 ρ,
δs,2 = 0,
τs,2 = O(ηs+1 Ξx · Λ)
√
βs,3 = ηs+1 ρ,
δs,3 = ηs+1 λk
τs,3 = O(ηs+1 Ξx · Λ)
More specifically, for every t ∈ [T ], let {zs }t−1
s=0 be the arbitrary random vector satisfying (i.D.1) of
Lemma i.D.1. Define q2 = q 2 /(8 + 2d).
(i)
t−2
• For coordinate i = 1, 3, 4, . . . , 2 + d of {zs }t−1
s=0 , apply Corollary i.C.4 with {βs,1 , δs,1 , τs,1 }s=0 ,
q = q2 , D = 1, and κ;
• For coordinate i = 2 of {zs }t−1
s=0 ,
– if t < T0 , apply Corollary i.C.4 with {βs,2 , δs,2 , τs,2 }t−2
s=0 , q = q2 , D = 1, and κ;
t−2
– if t ≥ T0 , apply Corollary i.C.5 with {βs,2 , δs,2 , τs,2 }s=0 , q = q2 , D = 1, γ = 1, and κ;
• For coordinates i = 2 + d + 1, . . . , 2 + d + T of {zs }t−1
s=0 ,
– apply Corollary i.C.4 with {βs,3 , δs,3 , τs,3 }t−2
s=0 , q = q2 , D = T , and κ.
25
We refer readers to Footnote 18 in the proof of Lemma Main 1 on page 30 for a detailed discussion as well as a
careful treatment for out-of-bound indices. We remark here that to derive such bounds, one needs to use the fact that
when w = xZZ> for some unit norm vector x (such as x = xt for some t ∈ [T ] or x = νj for some j ∈ [d]), the quantity
ηt
kw> ΣLt−1 k22
λk
that appeared in Lemma iii.K.1-(a) can be upper bounded by
52
ηt λ2
k+1
kw> Lt−1 k22
λk
≤ ηt λk kw> Lt−1 k22 .
Note that we can apply Corollary i.C.4 because
for every i = 1, 2, 3, our assumptions on ηs
√
PT −1 2
−1 4T
1
Λ
imply s=0 τs,i ≤ 100 ln q2 and τs,i ≤ 24 log(4T /q2 ) .
We can apply Corollary i.C.5 with γ = 1 because our assumption ηs ≤ O Λ·Ξ2ρln dT implies
x
q
PT0 −1
Pt−1
2 for every s, and our assumption
βs,2 ≥ 10 ln 3(Tq+d)
·
τ
β
≥
1
+
ln
Ξ
implies
s,2
Z
s,2
s=0
s=0 βs −
2
3t 2
10 ln q2 τs ≥ ln ΞZ + 1 − 1 = ln ΞZ whenever t > T0 .
Therefore, the conclusion of Corollary i.C.4 and Corollary i.C.5 imply that
(i)
(i)
Pr[∃i ∈ [3 + d] : zt−1 > φt,t−1 ] ≤ (3 + d)q2 < q 2 /2
so assumption (A3) of Lemma i.D.1 holds.
Application of Lemma i.D.1. Applying Lemma i.D.1, we have Pr[CT ] ≤ 2qT which implies
our desired bounds and this finishes the proof of Lemma Main 4.
iii.L.2
After Warm Start
Lemma Main 5 √
(after warm start). In the same setting as Lemma Main 4, suppose in addition
there exists δ ≤ 1/ 8 such that
9 ln((8 + 2d)/q 2 )
T0
≥
,
δ2
ln2 T0
∀s ∈ {T0 +1, . . . , T } :
2ηs ρ−ηs2 Ξ2x ≥
Ω(1)
s−1
and
Then, with probability at least 1 − 2qT (over the randomness of x1 , . . . , xT ):
• kW> Pt Q(V> Pt Q)−1 k2F ≤
5T0 / ln2 (T0 )
t/ ln2 t
ηs ≤ √
O(1)
.
Λ(s − 1)δΞx
for every t ∈ {T0 , . . . , T }.
Proof of Lemma Main 5. For every t ∈ [T ] and s ∈ {0, 1, ..., t − 1}, consider the same random
vectors yt,s ∈ R2+d+T defined in the proof of Lemma Main 4. This time, we define upper bounds:
2ΞZ
if s < T0 ;
(1) def
(2) def
2
if s = T0 ; , and φ(3) = · · · = φ(2+d+T ) def
φt,s = 2ΞZ , φt,s =
= 2Ξ2x .
t,s
t,s
2
5T0 / ln 2(T0 ) if s > T0 .
s/ ln s
def
def V
Also consider the same events Ct0 , Ct00 , Ct = Ct0 ∧ Ct00 and C≤t = ts=1 Cs defined in the proof of
Lemma Main 4. We again want to apply the decoupling Lemma i.D.1.
Verification of Assumption (A1) in Lemma i.D.1.
(i)
(i)
(i)
The same functions fs , gs , and hs used in the proof of Lemma Main 4 still apply here.
We make minor changes on the second coordinate (and this similar modification was also done in
Lemma Main 2): whenever s ≥ T0 , define
q
def
def
(2)
2
2
2 (2)
4
gs (y) = 45ηs+1 Ξx y (2) + 40ηs+1
Ξ2x and h(2)
+ Ληs+1
Ξ2x + Err .
s (y, q) = O Ληs+1 Ξx y
(2)
Note that we can make this change for gs owing to exactly the same reason as the proof of
(2)
Lemma Main 2. We can do so for hs because whenever C≤s+1 holds for some s ≥ T0 (which
(2)
(2)
implies yt,s ≤ 5), we have (y (2) )2 = O(y (2) ) so the formulation of hs can be simplified as above.
(i)
(i)
(i)
These choices of fs , gs , and hs satisfy assumption (A1) of Lemma i.D.1.
Verification of Assumption (A2) of Lemma i.D.1.
Same as before.
Verification of Assumption (A3) in Lemma i.D.1.
53
Same as the proof of Lemma Main 2, for every t ∈ [T ], let {zs }t−1
s=0 be the arbitrary random
2
vector satisfying (i.D.1) of Lemma i.D.1. Choosing q2 = q /(8 + 2d) again, the same argument
before indicates that it suffices to focus on t ≥ T0 + 2 and prove
(2)
(2)
(2)
Pr[zt−1 > φt,t−1 | zT0 ≤ 2] ≤ q2 .
(iii.L.2)
We want to apply Corollary i.C.3. Recall that for every t ∈ {T0 +2, . . . , T }, the random sequence
satisfies (i.D.1) with
def
2
2
fs(2) (y, q) = (1 − 2ηs+1 ρ + O(Ληs+1
Ξ2x ))y (2) + O Ληs+1
Ξx + Err ,
def
2
2 (2)
4
2
h(2)
(y,
q)
=
O
Λη
Ξ
y
+
Λη
Ξ
+
Err
,
s
s+1 x
s+1 x
q
def
2
gs(2) (y) = 45ηs+1 Ξx y (2) + 40ηs+1
Ξ2x
√
(2) t−1
1
Therefore, {zs }s=T
satisfies (i.C.1) with κ = 2/ Λ and τs = δs
because the following holds from
0
our assumptions:
1
3/2
2
Ξ2x )
qΞZ ≤ ηs+1
δτs = ≤ 2ηs+1 ρ − Ω(ηs+1
s
1
1
2
2
4
τs2 = 2 2 ≥ Ω(Ληs+1
Ξ2x ) κ2 τs4 =
≥ Ω(Ληs+1
Ξ2x )
κτs = √
≥ Ω(ηs+1 Ξx )
4
4
δ s
Λδ s
Λδs
√
Finally, we are ready to apply Corollary i.C.3 with q = q2 , t0 = T0 , and κ = 2/ Λ. Because
√
(2)
2)
q2 ≤ e−2 , zT0 ≤ 2, δ ≤ 1/ 8 and lnT2 0T ≥ 9 ln(1/q
, the conclusion of Corollary i.C.3 tells us
δ2
(2)
{zs }t−1
s=T0
(2)
(2)
(2)
0
Pr[zt−1 > φt,t−1 | zT0 ≤ 2] ≤ q2 , which is exactly (iii.L.2) so this finishes the verification of
assumption (A3).
Application of Lemma i.D.1. Applying Lemma i.D.1, we have Pr[CT ] ≤ 2qT which implies
our desired bounds and this finishes the proof of Lemma Main 5.
Parameter iii.L.1. There exists constants C1 , C2 , C3 > 0 such that for every q > 0 that is
sufficiently small (meaning q < 1/poly(T, ΞZ , Ξx , 1/gap)), the following parameters both satisfy
Lemma Main 4 and Lemma Main 5:
(
2
ln ΞZ
ΛΞ2x ln dT
t ≤ T0 ;
T0
ρ
q ln ΞZ
T0 ·ρ
= C1 ·
, ηt = C2 ·
, and δ = C3 · √
.
1
2
2
t
>
T
.
ρ
ln (T0 )
0
ΛΞx
t·ρ
iii.L.3
Under-Sampling Lemma
The previous two sections together (namely, Lemma Main 4 and Lemma Main 5 together), analyze
the behavior of a rank-k Oja’s algorithm with an eigen-partition (λk , λk − ρ), meaning V consists
of all eigenvectors with eigenvalues ≥ λk , Z consists of all the eigenvectors with eigenvalues < λk ,
and W consists of all the eigenvectors with eigenvalues ≤ λk − ρ.
In this section, we consider a more general scenario
Definition iii.L.2. Given λ ∈ [0, λk ] and ρ ∈ (0, λ), we say that V, Z, W form an (λ, λ − ρ) eigenpartition if V consists of all eigenvectors with eigenvalues ≥ λ, Z consist of all the eigenvectors
with eigenvalues < λ, and W consist of all the eigenvectors with eigenvalues ≤ λ − ρ.
The following lemma studies the behavior of a rank-k 0 Oja’s algorithm for an arbitrary k 0 ≤ r.
Lemma Main 6 (under sampling). Let V, Z, W be an (λ, λ − ρ) eigen-partition where the V ∈
Rd×r , Z ∈ Rd×(d−r) , W∈ Rd×(d−m−r) . We study rank k 0 Oja’s algorithm for some k 0 ≤ r.
0
For every q ∈ 0, 21 , ΞZ ≥ 2, Ξx ≥ 2, and fixed matrix Q ∈ Rd×k , suppose it satisfies
54
• kZ> Q(Q> VV> Q)−1/2 k2F ≤ ΞZ ,
h
j−1
>
Q(Q> VV> Q)−1/2
• Prxt ∀j ∈ [T ], x>
t ZZ (Σ/λr+1 )
•
νj> ZZ> Q(Q> VV> Q)−1/2
2
2
i
≤ Ξx ≥ 1−q 2 /2 for every t ∈ [T ],
≤ Ξx for every j ∈ [d].
Suppose also the learning rates {ηs }s∈[T ] satisfy the all conditions in Lemma Main 4 and Lemma Main 5
P
def P
with Λ replaced by Λ0 = ri=1 λi + Ξ2x r+m
j=r+1 λj .
0
d×k
Then, letting Qt ∈ R
be the output of the rank-k 0 Oja’s algorithm with respect to input Q,
with probability at least 1 − 2qT (over the randomness of x1 , . . . , xT ), it satisfies
• kW> Qt k2F ≤
5T0 / ln2 (T0 )
t/ ln2 t
for every t ∈ {T0 , . . . , T }.
0
Proof of Lemma Main 6. Since V> Q ∈ Rr×k for r ≥ k 0 , we can always find a (column) orthonor0
e = VS ∈ Rd×(r−k0 ) , we have
mal matrix S ∈ Rr×(r−k ) such that S> V> Q = 0. Letting Q
e > VV> Q
e = I, Q
e > VV> Q = 0, Z> Q
e = 0, Z> Σj−1 Q
e =0 .
Q
>
j−1 , or X> = ν ZZ> , we always have
Therefore, for every X> = Z> , X> = x>
j
t ZZ (Σ/λr+1 )
>
e
e −1 k2 = Tr X> [Q, Q]([Q,
e
e > VV> [Q, Q])
e −1 [Q, Q]
e >X
kX> [Q, Q](V
[Q, Q])
Q]
F
= Tr X> Q(Q> VV> Q)−1 Q> X
= kX> Q(Q> VV> Q)−1/2 k2F ≤ Ξ2X .
e ∈ Rd×r now satisfies
This implies this matrix [Q, Q]
> [Q, Q])
e
e −1 k2 ≤ ΞZ ,
• kZ> [Q, Q](V
F
h
j−1
>
> [Q, Q])
e
e −1
[Q, Q](V
• Prxt ∀j ∈ [T ], x>
t ZZ (Σ/λr+1 )
•
> [Q, Q])
e
e −1
νj> ZZ> [Q, Q](V
2
≤ Ξx for every j ∈ [d].
2
i
≤ Ξx ≥ 1 − q 2 /2 for all t ∈ [T ],
e and k = r and conclude
Therefore, we can apply Lemma Main 5 with initial matrix [Q, Q]
with probability 1 − 2qT :
>
e −1 k2 ≤
e
kW> Pt [Q, Q](V
Pt [Q, Q])
F
5T0 / ln2 (T0 )
.
t/ ln2 t
e ∈ Rd×r be the output of the rank-r Oja’s algorithm on input [Q, Q]
e
Now, let Qt = QR(Pt [Q, Q])
after t steps. Owing to Lemma 2.2, we have
5T0 / ln2 (T0 )
.
t/ ln2 t
However, recall that QR decomposition orthonormalizes a column vectors only with respect to its
previous columns. This implies, Qt , which is the output of the rank-k 0 Oja’s algorithm on input
Q, is exactly identical to the first k 0 columns of Qt . Therefore, we have
>
e
e −1 k2 ≤
kW> Qt k2F ≤ kW> Pt [Q, Q](V
Pt [Q, Q])
F
kW> Qt k2F ≤ kW> Qt k2F ≤
55
5T0 / ln2 (T0 )
.
t/ ln2 t
iii.M
Proof of Theorems 1 and 2 (for Oja)
We now prove the final Theorem of Oja’s algorithm in gap-free case:
Theorem 2 (restated). For every ρ, ε, p ∈ (0, 1), let k + m be the number of eigenvalues of Σ in
(λk − ρ, 1]. Let
o
nP
P
k
k Pk+m
Λ2 = k+m
Λ1 = min
i=1 λi .
i=1 λi + p2
j=k+1 λj , 1 ,
Define learning rates
e
T0 = Θ
kΛ1
ρ2 p2
e
T1 = Θ
,
Λ2
ρ2
,
1
e
t ≤ T0 ;
Θ
ρ·T0
1
e
ηt =
Θ
t ∈ (T0 , T0 + T1 ];
ρ·T1
1
e
Θ
t > T0 + T1 .
ρ·(t−T0 )
Let W be the column orthonormal matrix consisting of all eigenvectors of Σ with values no more
than λk − ρ. Then, the output QT ∈ Rd×k of Oja’s algorithm satisfies with prob. at least 1 − p:
e T1
it satisfies kW> QT k2F ≤ ε .
for every T = T0 + T1 + Θ
ε
e hides poly-log factors in 1 , 1 and d.
Above, Θ
p ρ
Note that Theorem 1 is a direct corollary of Theorem 2 by setting ρ ← gap and noticing that
m = 0 so Λ1 = λ1 + · · · + λk and Λ2 = 0.
Proof of Theorem 2. First for a sufficiently large constant C, we can apply Lemma 5.1 with p0 = p8
and some sufficiently small q = poly(1/T, 1/d, p), with probability at least 1 − p0 − q 2 ≥ 1 − p/4
over the random choice of Q, the following holds:
2
d
, and
(Z> Q)(V> Q)−1 F ≤ O dk
2 ln p
p
#
"
qk ln T
2
q
i−1
>
≤ q2
Prx1 ,...,xT ∃i ∈ [T ], ∃t ∈ [T ], x>
Q(V> Q)−1 ≥ Ω
t ZZ (Σ/λk+1 )
p
2
q
T
k ln q
ν > ZZ> Q(V> Q)−1 ≤ O
for every j ∈ [d] .
j
2
p
Denote by C1 the union of the above three events, and we have PrQ [C1 ] ≥ 1 − p/4.
Let us introduce now some notations for this proof. As illustrated in Figure 2, besides the
definitions of V, W, Z, let W1 be the (column) orthogonal matrix consisting of all eigenvectors of
Σ with eigenvalue ≤ λk − ρ2 , and V1 be the (column) orthogonal matrix consisting of all eigenvectors
of Σ with eigenvalue > λk − ρ2 .
We now wish to apply our main lemma twice. Once with (V, Z, W) = (V, Z, W1 ), and once
with (V, Z, W) = (V1 , W1 , W).
Application One. For every fixed Q, whenever C1 holds, we can let
q
dk d
k ln Tp
ΞZ = Θ 2 ln
, Ξx = Θ
,
p
p
p
so the initial conditions in Lemma Main 4 is satisfied. Also, according to Parameter iii.L.1, our
parameter choices ηt for t ≤ T0 satisfy the assumptions in Lemma Main 4 with Λ replaced by Λ1 .
We can therefore apply Lemma Main 4 with
Q = Q,
(V, Z, W) = (V, Z, W1 ),
56
λk = λk ,
ρ = ρ/2,
ΞZ , Ξx
Application One
𝑉
𝑍
Application Two
𝑊
𝑉
𝑽
𝑍
𝑊
𝒁
𝑾𝟏
𝑾
𝑽𝟏
eigenvalues
𝜆𝑘
𝜆𝑘 − 𝜌/2
𝜆𝑘 − 𝜌
eigenvalues
Figure 2: Notations V1 , W1 for the proof of Theorem 2
and derive that
Pr
x1 ,...,xT0
kW1> PT0 Q(V> PT0 Q)−1 k2F ≥ 2 C1 ≤ 2qT0 ≤ p/4 .
Now, let C2 be the event where kW1> PT0 Q(V> PT0 Q)−1 k2F ≤ 2 holds, and by union bound
PrQ,x1 ,...,xT0 [C2 ] ≥ 1 − p/4 − p/4 = 1 − p/2.
Application Two. If C2 is true then
¬
>
−1/2 2
>
−1/2 2
kW1> PT0 Q(Q> P>
kF ≤ kW1> PT0 Q(Q> P>
kF
T0 V1 V1 PT0 Q)
T0 VV PT0 Q)
V1 V1>
= kW1> PT0 Q(V> PT0 Q)−1 k2F ≤ 2 .
VV>
(iii.M.1)
>
−1
(Q> P>
T0 V1 V1 PT0 Q)
Above, inequality ¬ is because
and this gives us
>
>
>
−1
>
(Q PT0 VV PT0 Q) ; equality is because V PT0 Q is a square matrix and we therefore have
>
−1 = (V> P Q)−1 (V> P Q)−1 > .
(Q> P>
T0
T0
T0 VV PT0 Q)
Inequality (iii.M.1) also implies that, for every x ∈ Rd with kxk2 ≤ 1, it satisfies
√
>
>
−1/2
kx> W1 W1> PT0 Q0 (Q>
k2 ≤ 2 .
0 PT0 V1 V1 PT0 Q0 )
In sum, whenever C2 holds, we have that the initial conditions in Lemma Main 6 is satisfied.
Also, according to Parameter iii.L.1, our parameter choices ηt for t = T0 + 1, . . . , T —once shifted
left by T0 — satisfy the assumptions in Lemma Main 6 with Λ replaced by Λ2 . We can therefore
apply Lemma Main 6 with
Q = PT0 Q,
(V, Z, W) = (V1 , W1 , W),
and conclude that
h
Pr
∀T0 + T1 ≤ t ≤ T : kW> Qt k2F ≤
λ = λk − ρ/2,
ρ = ρ/2,
Ξx = ΞZ = 2
i
5T1 / ln2 (T1 )
C
≥ 1 − 2qT ≥ 1 − p/4 .
2
xT0 +1 ,...,xT
(t − T0 )/ ln2 (t − T0 )
In sum, we have with probability at least Pr[C2 ](1 − p/4) ≥ 1 − p over the random choices of Q,
e 1 /ε), it satisfies kW> Qt k2 ≤ ε.
and x1 , . . . , xT , for every t = T0 + T1 + Θ(T
F
57
iii.N
Proof of Theorem 4 and 5 (for Oja++ )
Algorithm 1 Oja {xt }Tt=1 , {ηt }Tt=1 , Q
Input: vectors {xt ∈ Rd }Tt=1 , learning rates {ηt ∈ R>0 }Tt=1 , an initial matrix Q ∈ Rd×k .
1: Q0 ← Q.
2: for t ← 1 to T do
3:
Qt ← QR (I + ηt xt x>
t )Qt−1
4: end for
5: return QT
Algorithm 2 Oja++ {xt }Tt=1 , {ηt }Tt=1 , {(T (i) , Q(i) )}si=1
Input: vectors {xt ∈ Rd }Tt=1 ; learning rates {ηt ∈ R>0 }Tt=1 ; initial matrices {Q(i) ∈ Rd×ri }si=1 ;
lengths T (1) , . . . , T (s) satisfying T (1) + · · · + T (s) = T .
Output: QT ∈ Rd×(r1 +···+rs ) .
1: Q0 ← []; t ← 0.
2: for i ← 1 to s do
3:
Qt ← [Qt , Q(i) ]
4:
for t0 ← 1 to T (i) do
5:
t ← t + 1;
6:
Qt ← QR (I + ηt xt x>
t )Qt−1
7:
end for
8: end for
9: return QT
We formally write the pseudocode of Oja++ in Algorithm 2. We also write down the pseudocode
of Oja because we shall use it in the analysis of this section. We emphasize here that Oja++ spends
the same per-iteration running time and space complexity as Oja.
58
We prove the following main theorem:
Theorem 5 (Oja++ , restated). For every ρ, ε, p ∈ (0, 1), let k + m be the number of eigenvalues
of Σ in (λk − ρ, 1]. Define
k
m
k+m
X
X
Λ2
1 X
Λ1
e
e
Λ1 =
, T1 = Θ
λi + 2
λj , Λ2 =
λi , s = dlog(k + 1)e, T0 = Θ
p
ρ2 p2
ρ2
i=1
i=1
j=k+1
and learning rates (where C
C · ρT1 0
1
C·
ρ(t−11iT
0)
ηt =
1
C · ρT1
1
C·
ρ(t−11sT0 )
e
is some fixed value that is only Θ(1)):
if t ∈ [11iT0 , 11iT0 + T0 ) for some i ∈ {0, 1, ..., s − 1};
if t ∈ [11iT0 + T0 , 11iT0 + 11T0 ) for some i ∈ {0, 1, ..., s − 1}26 ;
if t ∈ (11sT0 , 11sT0 + T1 );
if t > 11sT0 + T1 .
i−1 c−bk/2i c)
For each i ∈ [s], let Q(i) ∈ Rd×(bk/2
Then, the output
T
QT ← Oja++ xt t=1 , ηt
satisfies with probability at least 1 − p
for every
be a random matrix with entries i.i.d. N (0, 1).
T
s−1
, (T0 , Q(i) ) i=1
t=1
e
T = 11sT0 + T1 + Θ
e hides poly-log factors in 1 , 1 and d.
Above, Θ
p ρ
T1
ε
∪ (T − 11(s − 1)T0 , Q(s)
kW> QT k2F ≤ ε .
it satisfies
Note that Theorem 4 is a simple corollary of Theorem 5 by setting ρ ← gap and noticing that
m = 0 so Λ1 = λ1 + · · · + λk and Λ2 = 0.
𝑘/2 - 𝑘/22
𝑘 − 𝑘/2
𝑑
𝑄
1
𝑄
𝑘/22 - 𝑘/23
2
𝑄
3
𝑘
𝑑
𝑄
𝑇
1
1
𝑄
apply Oja’s for
= 11𝑇0 iterations
1
𝑄
𝑇
2
2
𝑄
apply Oja’s for
= 11𝑇0 iterations
2
𝑄
𝑇
3
3
…
𝑄
𝑘
𝑠
apply Oja’s for
= 11𝑇0 iterations
𝑄𝑇
𝑇
apply Oja’s for
= 𝑇 − 11𝑠𝑇0
iterations
s+1
Figure 3: Illustration of our Oja++ algorithm.
e (0) = [] being empty matrix and for every j ∈ [s] by
Proof of Theorem 5. Denote by Q
j
e (j) def
Q
= Oja++ x1 , . . . , x11jT0 , η1 , . . . , η11jT0 , (11T0 , Q(j) ) i=1
def
26
In fact, the intermediate learning rates for t ∈ {1, 2, . . . , 11sT0 } can all be set to the same value
them slightly decrease between epochs just for the sake of having a cleaner proof.
59
C
.
ρT0
We make
the output of Oja++ if we run the outer loop only for j iterations. By definition, it satisfies
(j−1) (j)
e
e (j) = Oja x11(j−1)T +b T0 , η11(j−1)T +b T0 , Q
,Q
,
∀j ∈ [s] : Q
0
0
b=1
b=1
e (j) as the output of the old Oja’s algorithm when initialized on the previous
so we can view each Q
(j−1)
e
output Q
appended with a new random matrix Q(j) . We illustrate this pictorially in Figure 3.
We also introduce some notations (see Figure 4 for an illustration). For each i ∈ [s], we define
Vi to be the (column) orthonormal matrix consisting of all eigenvectors of Σ with eigenvalue
i
> λk − s+1
ρ, and Wi to be the (column) orthonormal matrix consisting of all eigenvectors of Σ
i
with eigenvalue ≤ λk − s+1
ρ. Define V0 = V and W0 = Z. Let ki ≥ k be the column dimension
of Vi .
Application 1
𝑉
𝑍
Application 2
𝑊
𝑉
dim = 𝑘
𝑍
Application 𝑠 + 1
𝑉
𝑊
𝑽 = 𝑽0
𝑽1
𝒁 = 𝑾0
𝑾1
dim = 𝑘1
𝑽2
𝑽3
dim = 𝑑 − 𝑘1
𝑽𝑠
dim = 𝑑 − 𝑘2
𝑾3
dim = 𝑘3
dim = 𝑑 − 𝑘3
𝑾𝑠
dim = 𝑘s
𝑊
dim = 𝑑 − 𝑘
𝑾2
dim = 𝑘2
𝑍
dim = 𝑑 − 𝑘𝑠
𝑾
eigenvalues
𝜆𝑘
…
𝜆𝑘 − 𝜌
eigenvalues
Figure 4: Notations V0 , . . . , Vs , W0 , . . . , Ws for the proof of Theorem 5.
Below, we wish to apply our Lemma Main 6 a total of s + 1 times each corresponding to one
outer loop of Oja++ . Each time we shall use our new initialization lemma (i.e., Lemma iii.J.2) in
order to satisfy the preassumption of Lemma Main 6. The first s applications serve as a graduated
warm-start phase and the last application provides the final convergence rate.
def
e (j) k2 ≤ 1 }. In this first
Application 1 Through s. Define event Ci = {∀j ∈ N, j ≤ i : kWj> Q
F
2
step we prove Pr[Ci ] ≥ 1 − 2ip for all i = 0, 1, . . . , s by induction. The base case Pr[C0 ] = 1 is
e (0) is an empty matrix.
obvious because Q
Suppose Pr[Ci−1 ] ≥ 1 − 2(i − 1)p holds true for some i ∈ [s] and we wish to bound Pr[Ci ]. We
e (i−1) is an empty matrix)
first note that event Ci−1 implies (or if i = 1 then Q
e (i−1) > Vi−1 V> Q
e (i−1) = I − Q
e (i−1) > Wi−1 W> Q
e (i−1) 1 I .
Q
i−1
i−1
2
(i−1)
d×α
(i)
d×β
i−1
e
Next, we have Q
∈R
and Q ∈ R
where α = (k −bk/2 c) and β = bk/2i−1 c−bk/2i c.
Since ki−1 − α ≥ k − α ≥ 2β − 1, we have
β
β
β
p
√
≤ √
≤ √
<6 .
√
√
2
( 2β − 1 − β − 1)2
( k − α − β − 1)
( ki−1 − α − β − 1)2
Therefore, we can apply Lemma iii.J.2 on
e =Q
e (i−1) ,
Q
Q = Q(i) ,
(V, Z) = (Vi−1 , Wi−1 ).
and derive that, denoting by Q = [Q(i−1) , Q(i) ], then with probability 1 − p over the random choice
60
of Q(i) , the following event Bi holds (for some polynomially small q):
−1/2 2
d
> Q
> Q) Q> V
e
≤
O
and
V
(W
i−1 i−1
i−1
p2
F
"
#
−1/2 2
>
def
∃`∈[T ]
`−1
> Q
>
>
e 12
≥Ω
Pr
Q Q Vi−1 Vi−1
≤q
Bi =
∃t∈[T ] xt Wi−1 Wi−1 (Σ/λk+1 )
p
x
,...,x
1
T
2
2
ν > W W> Q Q> V V> Q −1/2 ≤ O
e 1 for every j ∈ [d].
j
i−1
i−1
i−1
i−1
p2
2
e (j) can be viewed as the output of the original Oja’s algorithm on
Whenever Bi holds, because Q
T0
, we can apply Lemma Main 6 with
input Q and applied with learning rates η11(i−1)T0 +b b=1
i−1
1
Q = Q, (V, Z, W) = (Vi−1 , Wi−1 , Wi ), (λ, ρ) = λk − s+1
ρ, s+1
ρ ,
2 ), O(1/p
2 ))
e
e
(ΞZ , Ξx ) = (O(d/p
T0
—once shifted left by 11(i − 1)T0 — are
We emphasize that these learning rates, η11(i−1)T0 +b b=1
exactly
e 1
Θ
t ≤ T0
0
ρT
ηt =
, T = 11T0 , Λ = Λ1
e 1
Θ
t
>
T
0
ρt
so they satisfy the assumption of Lemma Main 6 according to Parameter iii.L.1. Finally, the conclusion of Lemma Main 6 tells us
1
> e (i) 2
Pr kWi Q kF ≥ Ci−1 ∩ Bi ≤ p ,
2
and by union bound, we have (recalling Pr[Bi ] ≥ 1 − p):
1
> e (i) 2
≥ Pr[Ci−1 ∩ Bi ] × (1 − p)
Pr[Ci ] = Pr Ci−1 ∧ kWi Q kF ≤
2
= (1 − p) Pr[Ci−1 ] Pr[Bi | Ci−1 ] ≥ (1 − p)(1 − 2(i − 1)p)(1 − p) ≥ 1 − 2ip .
This finishes the proof that Pr[Cs ] ≥ 1 − 2sp.
Application s + 1. We now focus on the last outer loop of Oja++ , which satisfies
T
T
e (s) .
QT = Oja xt t=11sT0 +1 , ηt t=11sT0 +1 , Q
e (s) k2 ≤ 1 and thus (Q
e (s) )> Vs V> Q
e (s)
Similar to the first s outer loops, under event Cs we have kWs> Q
s
F
2
√
1
> e (s) −1/2 k ≤
e (s) >
2 and therefore
2
2 I. This implies k((Q ) Vs Vs Q )
> e (s) e (s) >
> e (s) −1/2 2
e (s) k2 k((Q
e (s) )> Vs V> Q
e (s) )−1/2 k2 ≤ 2.
kW Q ((Q ) Vs V Q )
k ≤ kW> Q
s
s
F
s
F
s
2
Rd
Thus, for every x ∈
with kxk2 ≤ 1:
>
> e (s) e (s) >
e (s) )−1/2 k2 ≤ kW> Q
e (s) k2 k((Q
e (s) )> Vs V> Q
e (s) )−1/2 k2 ≤ 2 .
kx Ws Ws Q ((Q ) Vs Vs> Q
s
s
Now we can apply Lemma Main 6 again with parameters
(s)
e
Q = Q , (V, Z, W) = (Vs , Ws , W), (λ, ρ) = λk −
s
1
ρ,
ρ ,
s+1 s+1
ΞZ = Ξx = 2
T
We emphasize that these learning rates, ηt t=11sT0 +1 —once shifted left by 11sT0 — are exactly
e 1
Θ
t ≤ T1
1
ρT
ηt =
, T = T − 11sT0 , Λ = Λ2
e 1
Θ
t > T1
ρt
61
so they satisfy the assumption of Lemma Main 6 according to Parameter iii.L.1. We can conclude
from Lemma Main 6 that
T1
1
>
2
> e (s) 2
e
≤p .
Pr kW QT kF = Ω
kWs Q kF ≤
T − 11sT0
2
h
i
e (s) k2 ≥ 1 ≤ 2sp we complete the proof of Theorem 5 if we replace
Taking into account Pr kWs> Q
F
2
p with p/(2s + 1).
62
References
[1] Zeyuan Allen-Zhu and Yuanzhi Li. Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition. ArXiv e-prints, abs/1607.06017, July 2016.
[2] Zeyuan Allen-Zhu and Yuanzhi Li. LazySVD: Even Faster SVD Decomposition Yet Without
Agonizing Pain. In NIPS, 2016.
[3] Maria-Florina Balcan, Simon S Du, Yining Wang, and Adams Wei Yu. An improved gapdependency analysis of the noisy power method. In COLT, pages 284–309, 2016.
[4] Akshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental pca. In NIPS, pages 3174–3182, 2013.
[5] Fan Chung and Linyuan Lu. Concentration inequalities and martingale inequalities: a survey.
Internet Mathematics, 3(1):79–127, 2006.
[6] Dan Garber and Elad Hazan. Fast and simple PCA via convex optimization. ArXiv e-prints,
September 2015.
[7] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli,
and Aaron Sidford. Robust shift-and-invert preconditioning: Faster and more sample efficient
algorithms for eigenvector computation. In ICML, 2016.
[8] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, and Aaron Sidford. Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. In ICML, 2016.
[9] Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications.
In NIPS, pages 2861–2869, 2014.
[10] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, and Aaron Sidford. Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja’s
Algorithm. In COLT, 2016.
[11] Chris J. Li, Mengdi Wang, Han Liu, and Tong Zhang. Near-Optimal Stochastic Approximation
for Online Principal Component Estimation. ArXiv e-prints, abs/1603.05305, March 2016.
[12] Chun-Liang Li, Hsuan-Tien Lin, and Chi-Jen Lu. Rivalry of two families of algorithms for
memory-restricted streaming pca. In Proceedings of the 19th International Conference on
Artificial Intelligence and Statistics, pages 473–481, 2016.
[13] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming
pca. In NIPS, pages 2886–2894, 2013.
[14] Cameron Musco and Christopher Musco. Randomized block krylov methods for stronger and
faster approximate singular value decomposition. In NIPS, pages 1396–1404, 2015.
[15] Mark Rudelson and Roman Vershynin. Smallest singular value of a random rectangular matrix.
Communications on Pure and Applied Mathematics, 62(12):1707–1739, 2009.
[16] Christopher De Sa, Christopher Re, and Kunle Olukotun. Global convergence of stochastic
gradient descent for some non-convex matrix problems. In ICML, pages 2332–2341, 2015.
63
[17] Ohad Shamir. Convergence of stochastic gradient descent for pca. In ICML, 2016.
[18] Ohad Shamir. Fast stochastic algorithms for svd and pca: Convergence properties and convexity. In ICML, 2016.
[19] Stanislaw J Szarek. Condition numbers of random matrices. Journal of Complexity, 7(2):131–
149, 1991.
[20] Martin Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, chapter Chapter
2: Basic tail and concentration bounds. 2015.
[21] Weiran Wang, Jialei Wang, Dan Garber, and Nathan Srebro. Efficient Globally Convergent
Stochastic Optimization for Canonical Correlation Analysis. In NIPS, 2016.
[22] Bo Xie, Yingyu Liang, and Le Song. Scale up nonlinear component analysis with doubly
stochastic gradients. In NIPS, pages 2341–2349, 2015.
64
| 8 |
On Integrating Fuzzy Knowledge Using a Novel Evolutionary Algorithm
Nafisa Afrin Chowdhury, Murshida Khatun and M. M. A. Hashem
Department of computer science and Engineering,
Khulna University of Engineering & Technology, Khulna
Khulna-920300, Bangladesh
Ph: +88041-774318, Fax+88-0141774403
E-mail: [email protected];
[email protected].
Abstract
Fuzzy systems may be considered as knowledge-based systems that incorporates human knowledge into
their knowledge base through fuzzy rules and fuzzy membership functions. The intent of this study is to
present a fuzzy knowledge integration framework using a Novel Evolutionary Strategy (NES), which can
simultaneously integrate multiple fuzzy rule sets and their membership function sets. The proposed
approach consists of two phases: fuzzy knowledge encoding and fuzzy knowledge integration Four
application domains, the hepatitis diagnosis, the sugarcane breeding prediction, Iris plants classification,
and Tic-tac-toe endgame were used to show the performance of the proposed knowledge approach.
Results show that the fuzzy knowledge base derived using our approach performs better than Genetic
Algorithm based approach.
Key words- Fuzzy knowledge, Rule set, Membership function, Evolutionary Algorithms (EA), Novel
Evolutionary Algorithm (NES), Crossover, Mutation.
1. Introduction
The construction of a reliable knowledge-based system, especially for complex application problems,
usually requires the integration of multiple knowledge inputs as related domain knowledge is usually
distributed among multiple sites. The use of knowledge integrated from multiple knowledge sources is
1
thus important to ensure comprehensive coverage. Most knowledge sources or actual instances in realworld applications, especially in medical and control domains, contain fuzzy or ambiguous information.
Expressions of domain knowledge in fuzzy terms are thus seen more and more frequently. Several
researchers have recently investigated automatic generation of fuzzy classification rules and fuzzy
membership functions using evolutionary algorithms [1].
Many knowledge acquisition and integration systems [4] use genetic algorithms to derive knowledge
from training instances. In this study we have proposed an approach that uses a novel evolutionary
algorithm for fuzzy knowledge integration process. In this technique, the emphasis is not on the
acquisition of a structure with high fitness, as in GA, but on modeling evolution at the granularity of an
individual, ESs considers an individual to be composed of a set of behaviors, each of which is a feature.
In this approach we used insertion deletion mutation as a variation (genetic) operator in order to introduce
variable length string for each individual. Therefore the integration process shows fast convergence with
considerable accuracy as comparing with GA approach.
The remainder of this paper is organized as follows. A NES based fuzzy knowledge integration
framework is proposed in section 2. The fuzzy knowledge encoding approach is explained in section 3.
The fuzzy knowledge integration approach is proposed in section 4. Experiments on two application
domains- Hepatitis Diagnosis and Sugarcane Breeding Prediction are stated in section 5. Conclusion and
future research directions are given in section 6.
2. Fuzzy Knowledge Integration Framework
Here, we propose a NES-based fuzzy-knowledge integration framework. The proposed framework
can integrate multiple fuzzy rule sets and membership function sets at the same time [4]. Fuzzy rule sets,
membership functions, and test objects including instances and historical records may be distributed
among various sources. Knowledge from each site might be directly obtained by a group of human
experts using knowledge-acquisition tools or derived using machine-learning methods. Here, we assume
that all knowledge sources are represented by fuzzy rules since almost all knowledge derived by
2
knowledge-acquisition tools or induced by machine-learning methods may easily be translated into or
represented by rules.
The proposed framework maintains a population of fuzzy rule sets with their membership functions
and uses a Novel Evolutionary Algorithm to automatically derive the resulting fuzzy knowledge base.
This integration framework operates in two phases: fuzzy knowledge encoding and fuzzy knowledge
integration. The encoding phase first transforms each fuzzy rule set and its associated membership
functions into an intermediary representation, which is further encoded as a variable-length string [4]. The
integration phase then chooses appropriate strings for “mating,” gradually creating good offspring fuzzy
rule sets and membership function sets. The offspring fuzzy rule sets with their associated membership
functions then undergo recursive “evolution” until an optimal or nearly optimal set of fuzzy rules and
membership functions has been obtained. Fig. 1 shows the two-phase process, where are the fuzzy rule
sets with their associated membership function sets, as obtained from different sources for integration.
Initial population
Generation 0
~~
R S 1 + MFS 1
Chromosome 1
~~
R S 2 + MFS 2
Chromosome 2
Generation k
Chr 1
Chr 1
Chr 2
Chr 2
The best one
Genetic
~~
R S + MFS
Operator
~~
R S m + MFSm
Chromosome m
Knowledge encoding
Chr m
Chr m
Knowledge integrating
Fig. 1: The genetic-fuzzy knowledge integration process.
3. Knowledge Encoding
The encoding phase first transforms each fuzzy rule set and its associated membership functions
into an intermediary representation in a few steps [4], further encoded as a variable-length string. Fig 2
shows µ individuals constructing a generation. Each individual has rule set and membership functions that
represent the feature values and corresponding membership criteria for each feature respectively.
3
Ind 1
Rule 1
Rule 2
MF 1
MF 2
Ind 2
Rule 1
Rule 2
MF 1
MF 2
Ind µ
Rule 1
Rule 2
MF 1
MF 2
Fig.2: The rule set and membership function encoding of a generation having µ individuals.
To effectively encode the associated membership functions, we use two parameters to represent
each membership function, as Parodi and Bonelli [4] did. Membership functions applied to a fuzzy rule
set are then assumed to be isosceles-triangle functions as shown in Fig. 3, where aij denotes the jth
linguistic value of Feature Ai , uaij denotes the membership function of aij , cij indicates the center abscissa
of membership function uaij and wij represents half the spread of membership function uaij .
u
aij
1
uaij
w ij
cij
Ai
Fig.3: Membership functions of feature Ai .
4. Knowledge Integration
After each fuzzy rule set with its associated membership functions has been encoded as a string,
the fuzzy-knowledge integration process starts. It chooses good individuals in the population for
“mating,” gradually creating better offspring fuzzy rule sets and membership function sets. In order to
make the integration process more effective and to increase accuracy of the resultant knowledge base we
used a Novel Evolutionary Strategy (NES) Algorithm in place of Genetic Algorithm (GA). There are µ
individuals are created initially by the program using the function urnN_N (a, b) for input variables and
4
output variables. Their corresponding evaluated fitness f value for each individual is also calculated in the
evaluated function. Two important factors are used in evaluating process. They are: accuracy (RS) and
~~
complexity (RS) of the resulting knowledge structure. Accuracy of a fuzzy rule set RS is evaluated using
test
objects
as
follows
[4]:
~~
Accuracy (R~S~ )= total number of objects correctly matched by R S ....................1
total number of objects
The more data used the more objective and accurate the evaluation is. The complexity of the resulting rule
( ~ ~ ) is the ratio of rule increase, defined as follows:
set R S
~~
Complexity (R~S~ )= Number of rules in the integrated rule set R S .......................2
p
(
)
~~
[ ∑ Number of rules in the initial R S i ] p
i =1
~~
Where R S i is the ith initial fuzzy rule set, and P is the number of initial rule sets. Accuracy and
complexity are combined to represent the fitness value of the rule set. Since the goal is to increase the
accuracy and decrease the complexity of the resulting rule sets, the evaluation function is then defined as
follows:
Fitness (RS ) =.
[Accuracy(RS )] .............................................................................3
[Complexity(RS )]α
Where
α
is a control parameter, representing a tradeoff between accuracy and complexity.
Two genetic operators, crossover and mutation, are applied on individuals of each generation to
create new offspring.
A. SBMAC
In the SBMAC (Sub-population Based Max-mean Arithmetical Crossover) each subpopulation’s elite and
the mean –individual (the virtual parent) created from that subpopulation excluding the elite are used.
This SBMAC is supposed to explore promising areas in the search space with different directions towards
the optimum point [3]. Thus, the algorithm is exposed to less possibility of being trapped in local optima.
5
B. Modified SBMAC
For an instance of the evolution, a modified version of the SBMAC operation is shown in fig 4.
Firstly, the minimum length individual within each subpopulation is determined. Secondly, the offspring
variables are produced up to this length [5].Then the remaining part of the parent subpopulation variables
is appended to the offspring population variables to form the full length chromosome.
a
a1
a2
a3
a4
a5
b
b1
b2
b3
b4
b5
Crossover operation
a′ 2
a′ 3
a4
a5
b ′ b′ 1 b′ 2
b′ 3
b4
b5
a′
a′ 1
Non crossover part
Fig 4: Modified SBMAC operation.
C. TVM
In the real world, a rapid change is observed at the early stages of life and a slow change is
observed at latter stages of life in all kind of animals/plants. These changes are more often occurred
dynamically depending on the situation exposed to them. A special dynamic Time-Variant Mutation
(TVM), which follows this natural evidence [3], is applied here.
D. Insertion Deletion Mutation
In order to generate variable length chromosome for each individual insertion deletion mutation
[5] is performed on each individual. For insertion mutation, a new randomly generated steering angle
value (gene) is inserted at a selected position based on a probability of insertion Pim .This will increase
the length of a chromosome. On the other hand the deletion mutation is the reverse action than that of the
insertion mutation. Fig 5 shows the insertion deletion mutation procedure.
a ′ a′ 1 a′ 2
a′
a′ 3
a′ 1 a′ 2 a′ 3
Deletion mutation at 5th
parameter.
a4
b5
a5
b′
b′ 1
b′ 2
b′ 3
b′
b′ 1
b′ 2
b′ 3
b4
b5
b′ 4
b5
Insertion mutation at 4th
parameter.
Fig. 5: Insertion deletion mutation operation.
6
5. Simulation result
To demonstrate the effectiveness of the proposed fuzzy knowledge integration approach, we
applied it to four application domains. The first one was “The Hepatitis Diagnosis Domain”, which
contained 155 cases obtained from Carnegie-Mellon University [4]. The second was “The Sugarcane
Breeding Prediction Domain”, which contained 699 cases obtained from Taiwan Sugar Research Institute
(TSRI) [4]. The flexibility of this approach is proved by applying successfully in two new application
domains, where the GA approach is not applied yet. They are: “The Iris Plants Classification Domain” [7]
and “The Tic- Tac-Toe Endgame Domain” [7].
5.1 The Hepatitis Diagnosis Domain:
The hepatitis diagnosis problem was first used here to test the performance of the proposed fuzzyknowledge integration approach. The goal of the experiments was to identify two possible classes, Die or
Live, from a set of instances. Table I shows an actual case expressed in term of 19 features and one class.
Among these features, Age, Bilirubin, Alk Phosphate, SGOT, Albumin, and Protime are numerical and
have membership functions with them.
Table I
A case for hepatitis diagnosis
Features
Features value
Billirubin
0.90
Alk Phosphate
95
SGOT
28
Albumin
4.0
Protime
75
Class: Live
Each rule, consisting of 5 feature tests and a class pattern, was encoded into a record having 5
rule values and 10 membership function value (center, width). The parameter α was set at 0.01.
Experimental results show that executing the proposed approach over more generations initially yielded
more accurate results and finally gradually converged to a constant. The structure of the best individual
after convergence is shown in fig.6. Fig. 6(a), 6(b), 6(c), 6(d), 6(e) are for five numerical parametersBillirubin, Alk Phosphate, SGOT, Albumin, Protime.
7
Fig. 6 (a): Membership function of Billirubin.
Fig. 6 (c): Membership function of SGOT.
Fig. 6 (b): Membership function of Alk
phosphate.
Fig. 6 (d): Membership function of Albumin.
Fig. 6 (e): Membership function of Protime.
Fig 7 shows the relationship between the accuracy of the resulting rule sets with the number of
generations by the proposed approach. As the numbers of generations increased, the accuracy as well as
resulting fitness value also increased and finally converged to a specific value.
8
Accuracy vs Generation
100
95
Accuracy
90
85
80
75
70
0
50
100
150
200
250
300
350
400
450
500
Generation
Fig.7: Relationship between accuracy and number of generations for the hepatitis domain
.
Table II compares the accuracy of our proposed approach with that of the above learning methods
[4]. It can easily be seen that our approach has a higher accuracy than some other learning methods.
Moreover, in our approach convergence occurs after 200 generation whereas in GA approach, it takes
4000 generation [4].
Table II
A comparison with some other learning methods for hepatitis domain
Methods
Our approach
GA approach
Accuracy
96.33%
91.61%
5.2The Sugarcane Breeding Prediction Domain
In this experiment, we present a real application to the sugarcane breeding prediction [4] based on
our proposed approach. The goal of this application was to breed good sugarcane plants with high sugar
contents and strong disease-resistance abilities.
Table III
A Case for Sugarcane Breeding Prediction
Features
Values
Stalk diameter
3.5cm
Stalk length
5.2m
Stalk number
96
Cane yield
198
Offspring with high sugar-ingredients and disease-resistant abilities
9
Table III shows an actual case expressed in terms of 36 features and a class report. Here, the parameter α was set to 0.001. Our approach obtained an accuracy rate of 89.52% after 500 generations. Fig. 8 shows the resultant structure of the best individual; Figs. 8(a)–8(d) show the resultant membership functions for Stalk diameter, Stalk length, Stalk number, and Cane yield, respectively.
Fig. 8: Membership functions of the best individual for the sugarcane domain. (a) Stalk diameter; (b) Stalk length; (c) Stalk number; (d) Cane yield.
Fig. 9 shows the accuracy over generations in the sugarcane domain. Table IV compares the accuracy of our proposed approach with that of the GA approach and the sugarcane breeding assistant system (SCBAS) learning method [4] in the sugarcane domain. It can easily be seen that our approach has higher accuracy than the GA approach or the SCBAS learning method. Moreover, in our approach convergence occurs after 400 generations, whereas the GA approach takes 4500 generations [4].
Fig. 9: Relationship between accuracy and number of generations for the sugarcane domain.
Table IV
A comparison with some other learning methods for the sugarcane breeding prediction domain

Method          Accuracy
Our approach    89.01%
GA approach     76.02%
For Iris plants classification, our approach achieves an accuracy of 76% after 100 generations; for the Tic-Tac-Toe endgame, it achieves 67% after 300 generations.
6. Concluding Remarks
In this paper, we have proposed an appropriate representation to encode the fuzzy knowledge and
have shown how fuzzy-knowledge integration can be effectively processed using this representation.
Experimental results have also shown that our genetic fuzzy-knowledge integration framework is
valuable for simultaneously combining multiple fuzzy rule sets and membership function sets. Our
approach needs no human experts’ intervention during the integration process. The time required by
our approach is thus dependent on computer execution speed, but not on human experts. Much time
can thus be saved since experts may be geographically dispersed. Also, our approach is a scalable integration method that remains applicable as the number of rule sets to be integrated increases.
Integrating a large number of rule sets may increase the validity of the resulting knowledge base. It is
also objective since human experts are not involved in the integration process.
References
[1] H. J. Zimmermann, “Fuzzy Set Theory and Its Applications,” 2nd edition, Allied Publishers Ltd, New Delhi, 1996.
[2] Ching-Hung Wang, Tzung-Pei Hong, and Shian-Shyong Tseng, “Integrating Fuzzy Knowledge by Genetic Algorithms,” IEEE Transactions on Evolutionary Computation, Vol. 2, No. 4, November 1998.
[3] K. Watanabe and M. M. A. Hashem, “Evolutionary Computations: New Algorithms and their Applications to Evolutionary Robots,” Studies in Fuzziness and Soft Computing, Vol. 147, Springer-Verlag, Berlin/New York, ISBN 3-540-20901-8, 2004.
[4] M. M. A. Hashem, “Global Optimization Through a New Class of Evolutionary Algorithms,” PhD Dissertation, Saga University, Japan, 1999.
[5] Marco Russo, “Genetic Fuzzy Learning,” IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, September 2000.
[6] O. Cordon, F. Herrera, F. Hoffmann, F. Gomide, and L. Magdalena, “Ten Years of Genetic Fuzzy Systems: Current Framework and New Trends.”
[7] UCI Machine Learning Repository, http://ics.uci.edu/~mlearn/MLRepository/
arXiv:1608.00033v1 [] 29 Jul 2016

Locally Robust Semiparametric Estimation

Victor Chernozhukov∗ (MIT), Juan Carlos Escanciano† (Indiana University),
Hidehiko Ichimura‡ (University of Tokyo), Whitney K. Newey§ (MIT)

July 27, 2016
Abstract
This paper shows how to construct locally robust semiparametric GMM estimators, meaning equivalently that the moment conditions have zero derivative with respect to the first step and that the first step does not affect the asymptotic variance. They are constructed by adding to the moment functions the adjustment term for first step estimation. Locally robust estimators have several advantages. They are vital for valid inference with machine learning in the first step, see Belloni et al. (2012, 2014), and are less sensitive to the
specification of the first step. They are doubly robust for affine moment functions, so
moment conditions continue to hold when one first step component is incorrect. Locally
robust moment conditions also have smaller bias that is flatter as a function of first step
smoothing leading to improved small sample properties. Series first step estimators confer
local robustness on any moment conditions and are doubly robust for affine moments, in
the direction of the series approximation. Many new locally and doubly robust estimators
are given here, including for economic structural models. We give simple asymptotic theory
for estimators that use cross-fitting in the first step, including machine learning.
Keywords: Local robustness, double robustness, semiparametric estimation, bias, GMM.
JEL classification: C13; C14; C21; D24.
∗ Department of Economics, MIT, Cambridge, MA 02139, U.S.A. E-mail: [email protected].
† Department of Economics, Indiana University, Bloomington, IN 47405–7104, U.S.A. E-mail: [email protected].
‡ Faculty of Economics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033. E-mail: [email protected].
§ Department of Economics, MIT, Cambridge, MA 02139, U.S.A. E-mail: [email protected].
1 Introduction
There are many economic parameters that depend on nonparametric or large dimensional first
steps. Examples include games, dynamic discrete choice, average consumer surplus, and treatment effects. This paper shows how to construct GMM estimators that are locally robust to the
first step, meaning equivalently that moment conditions have a zero derivative with respect to
the first step and that estimation of the first step does not affect their influence function.
Locally robust moment functions have several advantages. Belloni, Chen, Chernozhukov,
and Hansen (2012) and Belloni, Chernozhukov, and Hansen (2014) showed that local robustness, also referred to as orthogonality, is important for correct inference about parameters of
interest when machine learning is used in the first step. Locally robust moment conditions
are also nearly correct when the nonparametric part is approximately correct. This robustness
property is appealing in many settings where it may be difficult to get the first step completely
correct. Furthermore, local robustness implies the small bias property analyzed in Newey, Hsieh,
and Robins (1998, 2004; NHR henceforth). As a result asymptotic confidence intervals based
on locally robust moments have actual coverage probability closer to nominal than for other
moments. Also, bias is flatter as a function of first step bias for locally robust estimators than
for other estimators. This tends to make their mean-square error (MSE) flatter as a function
of smoothing also, so their performance is less sensitive to smoothing. In addition, by virtue
of their smaller bias, locally robust estimators have asymptotic MSE that is smaller than other
estimators in important cases and undersmoothing is not required for root-n consistency. Finally, asymptotic variance estimation is straightforward with locally robust moment functions,
because the first step is already accounted for.
Locally robust moment functions are constructed by adding to the moment functions the
terms that adjust for (or account for) first step estimation. This construction gives moment
functions that are locally robust. It leads to new estimators for games, dynamic discrete choice,
average surplus, and other important economic parameters. Also locally robust moments that
are affine in a first step component are globally robust in that component, meaning the moments
continue to hold when that component varies away from the truth. This result allows construction of doubly robust moments in the sense of Scharfstein, Rotnitzky, and Robins (1999) and
Robins, Rotnitzky, and van der Laan (2000) by adding to affine moment conditions an affine
adjustment term. Here we construct many new doubly robust estimators, e.g. where the first
step solves a conditional moment restriction or is a density.
Certain first step estimators confer the small bias property on moment functions that are
not locally robust, including series estimators of mean-square projections (Newey, 1994), sieve
2
maximum likelihood estimators (Shen, 1996, Chen and Shen, 1997), bootstrap bias corrected
first steps (NHR), and higher-order kernels (Bickel and Ritov, 2003). Consequently the inference
advantages of locally robust estimators may be achieved by using one of these first steps. These
first steps only make moment conditions locally robust in certain directions. Locally robust
moments have the small bias property in a wider sense, that the moments are nearly zero as the
first step varies in a general way. This property is important when the first step is chosen by
machine learning or in other very flexible, data-based ways; see Belloni et al. (2014).
First step series estimators have some special robustness properties. Moments without the
adjustment term can be interpreted as locally robust because there is an estimated adjustment
term with average that is identically zero. This property corresponds to first step series estimators conferring local robustness in the direction of the series approximation. Also, first step
series estimators are doubly robust in those directions when the moment functions and first step
estimating equations are affine.
The theoretical and Monte Carlo results of NHR show the bias and MSE advantages of locally robust estimators for linear functionals of a density. The favorable properties of a twicing
kernel first step versus a standard first step found there correspond to favorable properties of
locally robust moments versus original moments, because a twicing kernel estimator is numerically equivalent to adding an estimated adjustment term. The theoretical results show that
using locally robust moment conditions increases the rate at which bias goes to zero but only
raises the variance constant, and so leads to improved asymptotic MSE. The Monte Carlo results
show that the MSE of the locally robust estimator is much flatter as a function of bandwidth
and has a smaller minimum than the original moment functions, even with quite small samples.
Advantages have also been found in the literature on doubly robust estimation of treatment
effects, as in Bang and Robins (2005) and Firpo and Rothe (2016). These results from earlier work suggest that locally robust moments provide a promising approach to improving the
properties of semiparametric estimators.
This paper builds on other earlier work. Locally robust moment conditions are semiparametric versions of Neyman (1959) C(α) test moments for parametric models, with parametric
extensions to nonlikelihood settings given by Wooldridge (1991), Lee (2005), Bera et al. (2010),
and Chernozhukov, Hansen, and Spindler (2015). Hasminskii and Ibragimov (1978) suggested
an estimator of a functional of a nonparametric density estimator that can be interpreted as
adding the first step adjustment term. Newey (1990) derived the form of the adjustment term
in some cases. Newey (1994) showed local robustness of moment functions that are derivatives
of an objective function where the first step has been “concentrated out,” derived the form
of the adjustment term for many important cases, and showed that moment functions based
on series nonparametric regression have small bias. General semiparametric model results on
doubly robust estimators were given in Robins and Rotnitzky (2001).
3
NHR showed that adding the adjustment term gives locally robust moments for functionals
of a density integral and showed the important bias and MSE advantages for locally robust
estimators mentioned above. Robins et al. (2008) showed that adding the adjustment term
gives local robustness of explicit functionals of nonparametric objects, characterized some doubly
robust moment conditions, and considered higher order adjustments that could further reduce
bias. The form of the adjustment term for first step estimation has been derived for a variety
of first step estimators by Pakes and Olley (1995), Ai and Chen (2003), Bajari, Chernozhukov,
Hong, and Nekipelov (2009), Bajari, Hong, Krainer, and Nekipelov (2010), Ackerberg, Chen,
and Hahn (2012), Ackerberg, Chen, Hahn, and Liao (2014), and Ichimura and Newey (2016),
among others. Locally and doubly robust moments have been constructed for a variety of
estimation problems by Robins, Rotnitzky, and Zhao (1994, 1995), Robins and Rotnitzky (1995),
Scharfstein, Rotznitzky, and Robins (1999), Robins, Rotnitzky, and van der Laan (2000), Robins
and Rotnitzky (2001), Belloni, Chernozhukov, and Wei (2013), Belloni, Chernozhukov, and
Hansen (2014), Ackerberg, Chen, Hahn, and Liao (2014), Firpo and Rothe (2016), and Belloni,
Chernozhukov, Fernandez-Val, and Hansen (2016).
Contributions of this paper are a general construction of locally robust estimators in a
GMM setting, a general nonparametric construction of doubly robust moments, and deriving
bias and other large sample properties. The special robustness properties of first step series
estimators are also shown here. We use these results to obtain many new locally and doubly
robust estimators, such as those where the first step allows for endogeneity or is a conditional
choice probability in an economic structural model. We expect these estimators to have the
advantages mentioned above, that machine learning can be used in the first step, the estimators
have appealing robustness properties, smaller bias and MSE, are less sensitive to bandwidth,
have closer to nominal coverage for confidence intervals, and standard errors that can be easily
computed.
Section 2 describes the general construction of locally robust moment functions for semiparametric GMM. Section 3 shows how the first step adjustment term can be derived and shows the
local robustness of the adjusted moments. Section 4 introduces local double robustness, shows
that affine, locally robust moment functions are doubly robust, and gives new classes of doubly
robust estimators. Section 5 describes how locally robust moment functions have the small bias
property and a smaller remainder term. Section 6 considers first step series estimation. Section
7 characterizes locally robust moments based on conditional moment restrictions. Section 8
gives locally robust moment conditions for conditional choice probability estimation of discrete
game and dynamic discrete choice models. Section 9 gives asymptotic theory based on cross
fitting with easily verifiable regularity conditions for the first step, including machine learning.
2 Constructing Locally Robust Moment Functions
The subject of this paper is GMM estimators of parameters where the sample moment functions
depend on a first step nonparametric or large dimensional estimator. We refer to these estimators
as semiparametric. We could also refer to them as GMM where first step estimators are “plugged
in” the moments. This terminology seems awkward though, so we simply refer to them as
semiparametric GMM estimators. We denote such an estimator by β̂, which is a function of the
data z1 , ..., zn where n is the number of observations. Throughout the paper we will assume that
the data observations zi are i.i.d. We denote the object that β̂ estimates as β0 , the subscript
referring to the parameter value under the distribution that generated the data.
To describe the type of estimator we consider let m(z, β, γ) denote an r×1 vector of functions
of the data observation z, parameters of interest β, and a function γ that may be vector valued.
The function γ can depend on β and z through those arguments of m. Here the function γ
represents some possible first step, such as an estimator, its limit, or a true function. A GMM
estimator can be based on a moment condition where β0 is the unique parameter vector satisfying
E[m(zi, β0, γ0)] = 0,   (2.1)
and γ0 is the true γ. Here it is assumed that this moment condition identifies β. Let γ̂ denote
some first step estimator of γ0 . Plugging in γ̂ to obtain m(zi , β, γ̂) and averaging over zi gives the
estimated sample moments m̂(β) = Σ_{i=1}^n m(zi, β, γ̂)/n. For Ŵ a positive semi-definite weighting matrix a semiparametric GMM estimator is
β̂ = arg min_{β∈B} m̂(β)^T Ŵ m̂(β),

where A^T denotes the transpose of a matrix A and B is the parameter space for β.
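To fix ideas, here is a minimal Python sketch of such a two step (plug-in) GMM estimator; the particular moment function, data generating process, and polynomial first step are illustrative assumptions of ours, not examples from the paper.

import numpy as np
from scipy.optimize import minimize

def gmm_objective(beta, data, gamma_hat, moment_fn, W):
    """Sample GMM objective m_hat(beta)' W m_hat(beta) with a plugged-in first step."""
    m_bar = moment_fn(data, beta, gamma_hat).mean(axis=0)
    return m_bar @ W @ m_bar

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
d = x + rng.normal(size=n)
y = 0.5 * d + x**2 + rng.normal(size=n)   # true beta0 = 0.5

# First step gamma_hat(x) = E[d|x], fit here by a cubic polynomial regression.
coef = np.polyfit(x, d, deg=3)
gamma_hat = lambda u: np.polyval(coef, u)

def moment_fn(data, beta, gamma):
    y, d, x = data
    # Illustrative moment: E[(d - gamma0(x)) (y - d * beta0)] = 0.
    return ((d - gamma(x)) * (y - d * beta[0]))[:, None]

W = np.eye(1)
res = minimize(gmm_objective, x0=[0.0], args=((y, d, x), gamma_hat, moment_fn, W))
print("beta_hat:", res.x)   # approximately 0.5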
As usual a choice of Ŵ that minimizes the asymptotic variance of √n(β̂ − β0) will be a consistent estimator of the inverse of the asymptotic variance of √n m̂(β0). Of course that
efficient Ŵ may include adjustment terms for the first step estimator γ̂. This optimal Ŵ also
gives an efficient estimator in the wider sense shown in Ackerberg, Chen, Hahn, and Liao (2014).
The optimal Ŵ makes β̂ efficient in a semiparametric model where the only restrictions imposed
are equation (2.1).
To explain and analyze local robustness we consider limits when the true distribution of a
single observation zi is F , and how those limits vary with F over a general class of distributions.
This kind of analysis can be used to derive the asymptotic variance of semiparametric estimators,
as in Newey (1994), and is also useful here. Let γ(F ) denote the limit of γ̂ when F is the true
distribution of zi . Here γ(F ) is understood to be the limit of γ̂ under general misspecification
where F need not satisfy the conditions used to construct γ̂. We also consider parametric models
Fτ where τ denotes a vector of parameters, with Fτ equal to the true distribution F0 at τ = 0.
We will restrict each parametric model to be regular in the sense used in the semiparametric
efficiency bounds literature, so that Fτ has a score S(z) (derivative of the log-likelihood in many
cases, e.g. see Van der Vaart, 1998, p. 362) at τ = 0 and possibly other conditions are satisfied.
We also require that the set of scores over all regular parametric families has mean square closure
that includes all functions with mean zero and finite variance. Here we are assuming that the
set of scores for regular parametric models is unrestricted, the precise meaning of the domain of
γ(F ) being a general class of distributions. We define local robustness in terms of such families
of regular parametric models.
Definition 1: The moment functions m(z, β, γ) are locally robust if and only if for all
regular parametric models
∂E[m(zi, β0, γ(Fτ))]/∂τ |_{τ=0} = 0.
This zero pathwise derivative condition means that moment conditions are nearly zero as the
first step limit γ(F ) departs from the truth γ0 along any path γ(Fτ ). Below we use a functional
derivative condition, but for now this pathwise derivative definition is convenient. Throughout
the remainder of the paper we evaluate derivatives with respect to τ at τ = 0 unless otherwise
specified.
In general, locally robust moment functions can be constructed by adding to moment functions the term that adjusts for (or accounts for) first step estimation. Under conditions discussed
below there is a unique vector of functions φ(z, β, γ) such that E[φ(zi , β0 , γ0 )] = 0 and
√n m̂(β0) = (1/√n) Σ_{i=1}^n m(zi, β0, γ̂) = (1/√n) Σ_{i=1}^n {m(zi, β0, γ0) + φ(zi, β0, γ0)} + op(1).   (2.2)
Here φ(z, β0 , γ0 ) adjusts for the presence of γ̂ in m(z, β0 , γ̂). Locally robust moment functions
can be constructed by adding φ(z, β, γ) to m(z, β, γ) to obtain new moment functions
g(z, β, γ) = m(z, β, γ) + φ(z, β, γ).   (2.3)

For ĝ(β) = Σ_{i=1}^n g(zi, β, γ̂)/n, a locally robust semiparametric GMM estimator is obtained as

β̂ = arg min_β ĝ(β)′Ŵ ĝ(β).
In a parametric setting it is easy to see how adding the adjustment term for first step
estimation gives locally robust moment conditions. Suppose that the first step estimator γ̂ is
a function of a finite dimensional vector of parameters λ where there is a vector of functions
h(z, λ) satisfying E[h(zi , λ0 )] = 0 and the first step parameter estimator λ̂ satisfies
(1/√n) Σ_{i=1}^n h(zi, λ̂) = op(1).   (2.4)
For H = ∂E[h(zi, λ)]/∂λ|_{λ=λ0} the usual expansion gives

√n(λ̂ − λ0) = −H^{-1} (1/√n) Σ_{i=1}^n h(zi, λ0) + op(1).
For notational simplicity let the moment functions depend directly on λ (rather than γ (λ)) and
so take the form m(z, β, λ). Let Mλ = ∂E[m(zi , β0 , λ0 )]/∂λ. Another expansion gives
√n m̂(β0) = (1/√n) Σ_{i=1}^n m(zi, β0, λ0) + Mλ √n(λ̂ − λ0) + op(1)
          = (1/√n) Σ_{i=1}^n {m(zi, β0, λ0) − Mλ H^{-1} h(zi, λ0)} + op(1).
Here we see that the adjustment term is
φ(z, β, λ) = −Mλ H^{-1} h(z, λ).   (2.5)
We can add this term to the original moment functions to produce new moment functions of
the form
g(z, β, λ) = m(z, β, λ) + φ(z, β, λ) = m(z, β, λ) − Mλ H^{-1} h(z, λ).
Local robustness of these moment functions follows by the chain rule and
∂E[g(zi, β0, λ)]/∂λ |_{λ=λ0} = ∂E[m(zi, β0, λ) − Mλ H^{-1} h(zi, λ)]/∂λ |_{λ=λ0} = Mλ − Mλ H^{-1} H = 0.
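A small numerical sketch of this parametric construction, for a toy first step (λ0 = E[z], h(z, λ) = z − λ) and toy moment m(z, β, λ) = zλ − β of our own choosing; it verifies that adding φ = −Mλ H^{-1} h makes the derivative of the mean moment with respect to λ numerically zero.

import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(loc=2.0, scale=1.0, size=5000)

h = lambda z, lam: z - lam                 # first step estimating equation
m = lambda z, beta, lam: z * lam - beta    # illustrative second step moment

lam_hat = z.mean()                         # solves sum_i h(z_i, lam) = 0
beta_hat = (z * lam_hat).mean()

M_lam = z.mean()                           # sample dE[m]/dlam (= E_n[z])
H = -1.0                                   # dE[h]/dlam

def g_mean(lam):
    """Mean locally robust moment g = m + phi at a trial lambda."""
    phi = -M_lam / H * h(z, lam)
    return (m(z, beta_hat, lam) + phi).mean()

eps = 1e-4
print((g_mean(lam_hat + eps) - g_mean(lam_hat)) / eps)   # ~ 0: local robustness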
Neyman (1959) used scores and the information matrix to form such g(z, β, λ) in a parametric
likelihood setting, where g(z, β0 , λ0 ) has an orthogonal projection interpretation. There the
purpose was to construct tests, based on g(z, β, λ), where estimation of the nuisance parameters
λ did not affect the distribution of the tests. The form here was given in Wooldridge (1991) for
nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015)
for GMM. What appears to be new here is construction of locally robust moment functions by
adding the adjustment term to original moment functions.
In general the adjustment term φ(z, β, γ) may depend on unknown components that are
not present in the original moment functions m(z, β, γ). In the above parametric example the
matrix Mλ H −1 is unknown so its elements should really be included in γ, along with λ. If we
do this local robustness will continue to hold with these additional components of γ because
E[h(zi , λ0 )] = 0. For notational simplicity we will take γ to be the first step for the locally
robust moment functions g(z, β, γ) = m(z, β, γ) + φ(z, β, γ), with the understanding φ(z, β, γ)
will generally depend on first step functions that are not included in m(z, β, γ).
In general semiparametric settings the form of the adjustment term φ(z, β, γ) and local
robustness of g(z, β, γ) can be explained in terms of influence functions. We will do so in the
next Section. In many interesting cases the form of the adjustment term φ(z, β, γ) is already
known, allowing construction of locally robust estimators. We conclude this section with an
important class of examples.
The class of examples we consider is one where the first step γ̂1 is based on a conditional
moment restriction E[ρ(zi , γ10 )|xi ] = 0 for a residual ρ(z, γ1 ) and instrumental variables x. The
conditional mean or median of yi given xi are included as special cases where ρ(z, γ1 ) = y −γ1 (x)
and ρ(z, γ1 ) = 2·1(y < γ1 (x))−1 respectively, as are versions that allow for endogeneity where γ1
depends on variables other than x. We take γ̂1 to have the same limit as the nonparametric two-stage least squares (NP2SLS) estimator of Newey and Powell (1989, 2003) and Newey (1991).
Thus, γ̂1 has limit γ1 (F ) satisfying
γ1(F) = arg min_{γ1∈Γ} E_F[{E_F[ρ(zi, γ1)|xi]}^2],
and EF denotes the expectation under the distribution F . Suppose that there is γ20 (x) in the
mean square closure of the set of derivatives ∂E[ρ(zi , γ1 (Fτ ))|xi ]/∂τ as Fτ varies over regular
parametric models such that
∂E[m(zi, β0, γ1(Fτ))]/∂τ = −E[γ20(xi) ∂E[ρ(zi, γ1(Fτ))|xi]/∂τ].   (2.6)
Then from Ichimura and Newey (2016) the adjustment term is
φ(z, β, γ) = γ2(x, β)ρ(z, γ1).   (2.7)
A function γ20 (x) satisfying equation (2.6) exists when the set of derivatives ∂E[ρ(zi , γ1 (Fτ ))|xi ]/∂τ
is linear as Fτ varies over parametric models, ∂E[m(zi , β0 , γ1 (Fτ ))]/∂τ is a linear functional
of ∂E[ρ(zi , γ1(Fτ ))|xi ]/∂τ, and that functional is continuous in mean square. Existence of
γ20 (x) then follows from the Riesz representation theorem. Special cases of this characterization of γ20 (x) are in Newey (1994), Ai and Chen (2007), and Ackerberg, Chen, Hahn, and
Liao (2014). When ∂E[m(zi , β0 , γ1 (Fτ ))]/∂τ is not a mean square continuous functional of
∂E[ρ(zi , γ1 (Fτ ))|xi ]/∂τ then first step estimation should make the moments converge slower
than 1/√n, as shown by Newey and McFadden (1994) and Severini and Tripathi (2012) for
special cases. The adjustment term given here includes Santos (2011) as a special case with
m(z, β, γ1) = ∫ v(x)γ1(x)dx − β, though Santos (2011) is more general in allowing for nonidentification of γ10.
There are a variety of ways to construct an estimator φ(zi , β, γ̂) of the adjustment term to
be used in forming locally robust moment functions, see NHR and Ichimura and Newey (2016).
A relatively simple and general one when the first step is a series or sieve estimator is to treat
the first step as if it were parametric and use the parametric formula in equation (2.5). This
approach to estimating the adjustment term is known to be asymptotically valid in a variety
of settings, see Newey (1994), Ackerberg, Chen, and Hahn (2012), and Ichimura and Newey
(2016). For completeness we give a brief description here.
We parameterize an approximation to γ1 as γ1 = γ(λ) where λ is a finite dimensional vector
of parameters as before. Let m(zi , β, λ) = m(zi , β, γ1(λ)) and λ̂i denote an estimator of λ0
solving E[h(zi , λ0 )] = 0, that is allowed to depend on observation i. Being a series or sieve
estimator the dimension of λ and hence of h(z, λ) will increase with sample size. Also, let I(i)
be a set of observation indices that can also depend on i. An estimator of the adjustment term
is given by

φ(zi, β, γ̂) = −M̂iλ(β) Ĥi^{-1} h(zi, λ̂i),   M̂iλ(β) = Σ_{j∈I(i)} ∂m(zj, β, λ̂i)/∂λ,   Ĥi = Σ_{j∈I(i)} ∂h(zj, λ̂i)/∂λ.
This estimator allows for cross fitting, where λ̂i, M̂iλ, and Ĥi depend only on observations other than the ith. Cross fitting is known to improve performance in some settings, such as in “leave one out” kernel estimators of averages, e.g. see NHR. This adjustment term will lead to locally robust moments in a variety of settings, as further discussed below.
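The sketch below implements this cross-fitted adjustment term with two folds, reusing the toy first step and moment from the previous sketch; taking I(i) to be the fold not containing i is an illustrative choice.

import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(loc=2.0, scale=1.0, size=n)
fold = np.arange(n) % 2                    # two folds; I(i) = the other fold

h = lambda z, lam: z - lam
m = lambda z, beta, lam: z * lam - beta

beta = 4.0                                 # trial beta (true value is E[z]^2 = 4)
g = np.empty(n)
for k in (0, 1):
    own, other = fold == k, fold != k
    lam_i = z[other].mean()                # lambda_hat_i from the other fold
    M_lam_i = z[other].mean()              # dm/dlam averaged over I(i)
    H_i = -1.0                             # dh/dlam
    phi_i = -M_lam_i / H_i * h(z[own], lam_i)
    g[own] = m(z[own], beta, lam_i) + phi_i

print("mean cross-fitted locally robust moment:", g.mean())   # ~ 0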
3 Influence Functions, Adjustment Terms, and Local Robustness
Influence function calculations can be used to derive the form of the adjustment term φ(z, β, γ)
and show local robustness of the adjusted moment functions g(z, β, γ) = m(z, β, γ) + φ(z, β, γ).
To explain influence functions note that many estimators are asymptotically equivalent to a
sample average. The object being averaged is the unique influence function. For example, in
equation (2.2) we are assuming that the influence function of m̂(β0 ) is g(z, β0 , γ0 ) = m(z, β0 , γ0 )+
φ(z, β0 , γ0). This terminology is widely used in the semiparametric estimation literature.
In general an estimator µ̂ of a true value µ0 and its influence function ψ(z) satisfy
√n(µ̂ − µ0) = (1/√n) Σ_{i=1}^n ψ(zi) + op(1),   E[ψ(zi)] = 0,   E[ψ(zi)ψ(zi)′] exists.
The function ψ(z) can be characterized in terms of the functional µ(F ) that is the limit of µ̂
under general misspecification where F need not satisfy the conditions used to construct µ̂. As
before, we allow F to vary over a family of regular parametric models where the set of scores for
the family has mean square closure that includes all mean zero functions with finite variance. As
shown by Newey (1994) the influence function ψ(z) is then the unique solution to a derivative
equation of Van der Vaart (1991),
∂µ(Fτ)/∂τ = E[ψ(zi)S(zi)],   E[ψ(zi)] = 0,   (3.1)
as Fτ (and hence S(z)) varies over the general family of regular parametric models. Ichimura and Newey (2016) also showed that when ψ(z) has certain continuity properties it can be computed as

ψ(z) = lim_{h→0} ∂µ(Fτ^h)/∂τ,   Fτ^h = (1 − τ)F0 + τ Gz^h,   (3.2)

where Gz^h is constructed so that Fτ^h is in the domain of µ(F) and Gz^h approaches the point mass at z as h → 0.
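As a numerical illustration of equation (3.2), the sketch below recovers ψ(z) for the simple functional µ(F) = (E_F[z])^2, for which the influence function is 2µ0(z − µ0); here Gz^h can be taken to be a point mass at z since no smoothing is needed, and the functional is our own illustrative choice.

import numpy as np

rng = np.random.default_rng(3)
sample = rng.normal(loc=1.5, size=100_000)
mean0 = sample.mean()                       # E_{F0}[z]

def mu_of_mean(mean_F):
    """Functional mu(F) = (E_F[z])^2, written in terms of E_F[z]."""
    return mean_F ** 2

def psi_numeric(z, tau=1e-6):
    """Derivative of mu along F_tau = (1 - tau) F0 + tau * (point mass at z)."""
    mean_tau = (1.0 - tau) * mean0 + tau * z
    return (mu_of_mean(mean_tau) - mu_of_mean(mean0)) / tau

z = 2.0
print(psi_numeric(z), 2.0 * mean0 * (z - mean0))   # numerical vs analytic psi(z)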
These results can be used to derive the adjustment term φ(z, β, γ) and to explain local
robustness. Let γ(F ) denote the limit of the first step estimator γ̂ under general misspecification
when a single observation has CDF F, as discussed above. From Newey (1994, pp. 1356-1357) we
know that the adjustment term φ(z, β0 , γ0 ) is the influence function of µ(F ) = E[m(zi , β0 , γ(F ))]
where E[·] denotes the expectation at the truth. Thus φ(z, β, γ) can be calculated as in equation
(3.2) for that µ(F ). Also φ(z, β, γ) satisfies equation (3.1) for µ(F ) = E[m(zi , β0 , γ(F ))], i.e. for
the score S(z) at τ = 0 for any regular parametric model {Fτ },
∂E[m(zi, β0, γ(Fτ))]/∂τ = E[φ(zi, β0, γ0)S(zi)],   E[φ(zi, β0, γ0)] = 0.   (3.3)
Also, φ(z, β0, γ0) can be computed as

φ(z, β0, γ0) = lim_{h→0} ∂E[m(zi, β0, γ(Fτ^h))]/∂τ,   Fτ^h = (1 − τ)F0 + τ Gz^h.

The characterization of φ(z, β0, γ0) in equation (3.3) can be used to specify another local robustness property that is equivalent to Definition 1. We have defined local robustness as the derivative on the left of the first equality in equation (3.3) being zero for all regular parametric models. If that derivative is zero for all parametric models then φ(z, β0, γ0) = 0 is the unique solution to this equation, by the set of scores being mean square dense in the set of mean zero random variables with finite variance. Also, if φ(z, β0, γ0) = 0 then the derivative on the left is always zero. Therefore we have
Proposition 1: φ(z, β0 , γ0 ) = 0 if and only if m(z, β, γ) is locally robust.
Note that φ(z, β, γ) is the term in the influence function of m̂(β0 ) that accounts for the first
step estimator γ̂. Thus Proposition 1 gives an alternative characterization of local robustness,
that first step estimation does not affect the influence function of m̂(β0 ). This result is a
semiparametric version of Theorem 6.2 of Newey and McFadden (1994). It also formalizes the
discussion in Newey (1994, pp. 1356-1357).
Local robustness of the adjusted moment function g(z, β, γ) = m(z, β, γ) + φ(z, β, γ) follows from Proposition 1 and φ(z, β, γ) being a nonparametric influence function. Because
φ(z, β, γ) is an influence function it has mean zero at all true distributions, i.e. µ(F) := ∫ φ(z, β0, γ(F))F(dz) ≡ 0 identically in F. Consequently the derivative in equation (3.1) is zero, so that (like Proposition 1) the influence function of µ(F) is zero. Consequently, under appropriate regularity conditions φ̄ = Σ_{i=1}^n φ(zi, β0, γ̂)/n has a zero influence function and so √n φ̄ = op(1). It then follows that

(1/√n) Σ_{i=1}^n g(zi, β0, γ̂) = √n m̂(β0) + √n φ̄ = √n m̂(β0) + op(1) = (1/√n) Σ_{i=1}^n g(zi, β0, γ0) + op(1),   (3.4)
where the last equality follows by equation (2.2). Here we see that the adjustment term is zero
for the moment functions g(z, β, γ). From Proposition 1 with g(z, β, γ) replacing m(z, β, γ) it
then follows that g(z, β, γ) is locally robust.
Proposition 2: For the influence function φ(z, β0 , γ0 ) of µ(F ) = E[m(zi , β0 , γ(F ))] the
adjusted moment function g(z, β, γ) = m(z, β, γ) + φ(z, β, γ) is locally robust.
Local robustness of g(z, β, γ) also follows directly from the identity ∫ φ(z, β0, γ(F))F(dz) ≡ 0
as discussed in the Appendix. Also, the adjusted moments ĝ(β0 ) have the same asymptotic
variance as the original moments, as in the second equality of equation (3.4). That is, adding
φ(z, β, γ) to m(z, β, γ) does not affect the asymptotic variance. Thus the asymptotic benefits
of the locally robust moments are in their higher order properties. Other modifications of
the moments may also improve higher-order properties of estimators, such as the cross fitting
described above (like “leave one out” in NHR) and the higher order bias corrections in Robins et al. (2008) and Cattaneo and Jansson (2014).
4 Local and Double Robustness
The zero derivative condition in Definition 1 is an appealing robustness property in and of itself.
Mathematically a zero derivative is equivalent to the moments remaining closer to zero than τ as
τ varies away from zero. This property can be interpreted as local robustness of the moments to
the value of γ being plugged in, with the moments remaining close to zero as γ varies away from
its true value. Because it is difficult to get nonparametric functions exactly right, especially in
high dimensional settings, this property is an appealing one.
Such robustness considerations, well explained in Robins and Rotnitzky (2001), have motivated the development of doubly robust estimators. For our purposes doubly robust moments
have expectation zero even if one first stage component is incorrect. When there are only two first
stage components this means that the moment conditions hold when only one of the first stage
components is correct. Doubly robust moment conditions allow two chances for the moment
conditions to hold.
It turns out that locally robust moment functions are automatically doubly robust in a local
sense that the derivative with respect to each individual, distinct first stage component is zero. In
that way the moment conditions nearly hold as each distinct component varies in a neighborhood
of the truth. Furthermore, when locally robust moment functions are affine functions of a distinct
first step component they are automatically globally robust in that component. Thus, locally
robust moment functions that are affine in each distinct first step are doubly robust.
These observations suggest a way to construct doubly robust moment functions. Starting
with any two step semiparametric moment function we can add the adjustment term to get a
locally robust moment function. When we can choose a first step of that moment function so
that it enters in an affine way the new moment function will be doubly robust in that component.
To give these results we need to define distinct components of γ. A distinct component is
one where there are parametric models Fτ with that component varying in an unrestricted way
but the other components of γ not varying. For a precise definition we will focus on the first
component γ1 of γ = (γ1 , ..., γJ ).
Definition 2: A component γ1 of γ is distinct if and only if there is Fτ such that
γ(Fτ ) = (γ1 (Fτ ), γ20 , ..., γJ0 ),
and γ1 (Fτ ) is unrestricted as Fτ varies across parametric models.
An example is the moment function g(z, β, γ1, γ2 ) = m(z, β, γ1 ) + γ2 (x, β)ρ(z, γ1 ), where
E[ρ(zi , γ10 )|xi ] = 0. In that example the two components γ1 and γ2 are often distinct because γ1
depends only on the conditional distribution of zi given xi and γ20 (x, β) depends on the marginal
distribution of xi in an unrestricted way.
Local robustness means that the derivative must be zero for any model, so in particular it
must be zero for any model where only the distinct component is varying. Thus we have
Proposition 3: If γ1 is distinct then for g(z, β, γ) = m(z, β, γ) + φ(z, β, γ) and regular
parametric models Fτ as in Definition 2,
∂E[g(zi, β0, γ1(Fτ), γ20, ..., γJ0)]/∂τ = 0.
This result is an application of the simple fact that when a multivariable derivative is zero
the partial derivative must be zero when the variables are allowed to vary in an unrestricted
way. Although this fact is simple, it is helpful in understanding when local robustness holds
for individual components. This means that locally robust moment functions automatically
have a local double robustness property, that the expectation of the moment function remains
nearly zero as each distinct first stage component varies away from the truth. For example, for
a first step conditional moment restriction where g(z, β, γ) = m(z, β, γ1 ) + γ2 (x, β)ρ(z, γ1 ), the
conclusion of Proposition 3 is
∂E[m(zi, β0, γ1(Fτ)) + γ20(xi)ρ(zi, γ1(Fτ))]/∂τ = 0.
In fact, this result is implied by equation (2.6), so by construction g(z, β, γ) is already locally robust in γ1 alone. Local robustness in γ2 follows by the conditional moment restriction
E[ρ(zi , γ10 )|xi ] = 0.
Moments that are locally robust in a distinct component γ1 will be globally robust in γ1 if
γ1 enters the moments in an affine way, meaning that for any γ1 and γ = (γ1 , γ20 , ..., γJ0 )′ and
any z,
g(z, β, (1 − τ)γ0 + τγ) = (1 − τ) · g(z, β, γ0) + τ · g(z, β, γ).   (4.1)
Global robustness holds because an affine function with zero derivative is constant. For simplicity
we state a result when Fτ can be chosen so that γ(Fτ ) = (1 − τ )γ0 + τ γ though it will hold more
generally. Note that here
E[g(zi , β0 , γ(Fτ ))] = (1 − τ )E[g(zi , β0 , γ0 )] + τ · E[g(zi , β0 , γ)] = τ · E[g(zi , β0 , γ)].
Here the derivative of the moment condition with respect to τ is just E[g(zi , β0 , γ)] so Proposition
3 gives the following result:
Proposition 4: If equation (4.1) is satisfied and there is Fτ with γ(Fτ ) = ((1 − τ )γ10 +
τ γ1 , γ20 , ..., γJ0)′ then E[g(zi , β0 , γ1 , γ20 , ..., γJ0 )] = 0.
Thus we see that locally robust moment functions that are affine in a distinct first step
component are globally robust in that component. This result includes many existing examples
of doubly robust moment functions and can be used to construct new ones.
A general class of doubly robust moment functions that appears to be new and includes
many new and previous examples has first step satisfying a conditional moment restriction
E[ρ(zi , γ10 )|xi ] = 0 where ρ(z, γ1 ) and m(z, β0 , γ1 ) are affine in γ1 . Suppose that E[m(zi , β0 , γ1 )]
is a mean-square continuous linear functional of E[ρ(zi , γ1 )|xi ] for γ1 in a linear set Γ. Then by
the Riesz representation theorem there is γ ∗ (x) in the mean square closure Π of the image of
E[ρ(zi , γ1 )|xi ] such that
E[m(zi, β0, γ1)] = −E[γ∗(xi)E[ρ(zi, γ1)|xi]] = −E[γ∗(xi)ρ(zi, γ1)],   γ1 ∈ Γ.   (4.2)
Let γ20 (x) be any function such that γ20 (xi ) − γ ∗ (xi ) is orthogonal to Π and g(z, β, γ) =
m(z, β, γ1 ) + γ2 (x, β)ρ(z, γ1 ). Then E[g(zi , β0 , γ1 , γ20 )] = 0 by the previous equation. It also
follows that E[g(zi , β0 , γ10 , γ2 )] = 0 by E[ρ(zi , γ10 )|xi ] = 0. Therefore g(z, β, γ1, γ2 ) is doubly
robust, showing the following result:
Proposition 5: If m(zi , β0 , γ1 ) and ρ(zi , γ1) are affine in γ1 ∈ Γ with Γ linear and
E[m(zi , β0 , γ1 )] is a linear, mean square continuous functional of E[ρ(zi , γ1 )|xi ] then there is
γ20 (x) such that g(z, β, γ1, γ2 ) = m(z, β, γ1 ) + γ2 (x, β)ρ(z, γ1 ) is doubly robust.
Section 3 of Robins et al. (2008) gives necessary and sufficient conditions for a moment
function to be doubly robust when γ1 and γ2 enter the moment functions as functions evaluated
at observed x. Proposition 5 is complementary to that work in deriving the form of doubly robust
moment functions when the first step satisfies a conditional moment restriction and m(z, β, γ1 )
can depend on the entire function γ1 .
It is interesting to note that γ20 such that E[g(z, β0 , γ1, γ20 )] = 0 for all γ1 ∈ Γ is not unique
when Π does not include all functions of x, the overidentified case of Chen and Santos (2015).
This nonuniqueness can occur when there are multiple ways to estimate the first step γ10 using
the conditional moment restrictions E[ρ(zi , γ10 )|xi ] = 0. As discussed in Ichimura and Newey
(2016), the different γ20 (xi ) correspond to different first step estimators, with γ20 (xi ) = γ ∗ (xi )
corresponding to the NP2SLS estimator.
An important case is a linear conditional moment restriction setup like Newey and Powell
(1989, 2003) and Newey (1991) where
ρ(z, γ1) = y − γ1(w),   E[yi − γ10(wi)|xi] = E[ρ(zi, γ10)|xi] = 0.   (4.3)
Consider a moment function equal to m(z, β, γ1 ) = v(w)γ1(w) − β for some known function
v(w), where the parameter of interest is β0 = E[v(wi )γ10 (wi )]. If there is γ̄(x) such that v(wi ) =
E[γ̄(xi )|wi ] then we have
E[m(zi , β0 , γ1 )] = E[v(wi ){γ1(wi ) − γ10 (wi )}] = E[E[γ̄(xi )|wi ]{γ1 (wi ) − γ10 (wi )}]
= E[γ̄(xi ){γ1(wi ) − γ10 (wi )}] = −E[γ̄(xi )ρ(zi , γ1 )].
It follows that g(z, β, γ) = m(z, β, γ1) + γ2(x)ρ(z, γ1) is doubly robust for γ20(x) = γ̄(x). Interestingly, the existence of γ̄ with v(wi) = E[γ̄(xi)|wi] is a necessary condition for root-n consistent
estimability of β0, as in Severini and Tripathi (2012, Lemma 4.1). We see here that a doubly
robust moment condition can always be constructed when this necessary condition is satisfied.
Also, similarly to the above, the γ20 (x) may not be unique.
Corollary 6: If m(z, β, γ1) = v(w)γ1(w) − β, equation (4.3) is satisfied, and there is γ̄(x)
such that v(w) = E[γ̄(x)|w] then g(z, β, γ1, γ2 ) = v(w)γ1(w) − β + γ2 (x)[y − γ1 (w)] is doubly
robust for γ20 (x) − γ̄(x) orthogonal to Π.
A new example of a doubly robust moment condition corresponds to the weighted average
derivative of γ10 (w) of Ai and Chen (2007). Here m(z, β, γ1 ) = v̄(w)∂γ1 (w)/∂w − β for some
function v̄(w). Let f0 (w) be the pdf of wi . Assuming that v̄(w)γ1(w)f0 (w) is zero on the
boundary of the support of wi , integration by parts gives
E[m(zi, β0, γ1)] = E[v(wi){γ1(wi) − γ10(wi)}],   v(w) = f0(w)^{-1} ∂[v̄(w)f0(w)]/∂w.
Assume that there exists γ̄(x) such that v(wi ) = E[γ̄(xi )|wi ]. Then as in Proposition 5 a doubly
robust moment function is
g(z, β, γ) = v̄(w) ∂γ1(w)/∂w − β + γ2(x)[y − γ1(w)].
A special case of this example is the doubly robust moment condition for the weighted average
derivative in the exogenous case where wi = xi given in Firpo and Rothe (2016).
Doubly robust moment conditions can be used to identify parameters of interest. In general,
if g(z, β, γ1, γ2 ) is doubly robust and γ20 is identified then β0 may be identified from
E[g(zi , β0 , γ̄1 , γ20 )] = 0,
for any fixed γ̄1 when the solution β0 to this equation is unique.
Proposition 7: If g(z, β, γ1, γ2 ) is doubly robust, γ20 is identified, and for some γ̄1 the
equation E[g(zi , β, γ̄1, γ20 )] = 0 has a unique solution then β0 is identified as that solution.
Applying this result to the NPIV setting gives an explicit formula for certain functionals of
γ10 (w) without requiring that the completeness identification condition of Newey and Powell
(2003) be satisfied, similarly to Santos (2011). Suppose that v(w) is identified, e.g. as for the
weighted average derivative. Since both w and x are observed it follows that a solution γ20 (x)
to v(w) = E[γ20 (x)|w] will be identified if such a solution exists. Plugging in γ̄1 = 0 in the
equation E[g(zi , β0 , γ̄1 , γ20 )] = 0 gives
Corollary 8: If v(wi ) is identified and there exists γ20 (xi ) such that v(wi ) = E[γ20 (xi )|wi ]
then β0 = E[v(wi )γ10 (wi )] is identified as β0 = E[γ20 (xi )yi ].
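A quick simulation check of Corollary 8, with an illustrative design of our own in which γ10(w) = w, the instrument is x, w = x + u is endogenous, and γ20(x) = x so that v(w) = E[γ20(x)|w] = w/2.

import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
x = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)
w = x + u                                  # endogenous regressor
y = w + u + rng.normal(size=n)             # gamma10(w) = w; E[y - w | x] = 0

gamma20 = x                                # chosen so v(w) = E[gamma20(x)|w] = w / 2
v = w / 2.0

beta0 = (v * w).mean()                     # E[v(w) gamma10(w)] = E[w^2]/2 = 1
beta_hat = (gamma20 * y).mean()            # Corollary 8: beta0 = E[gamma20(x) y]
print(beta0, beta_hat)                     # both approximately 1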
Note that this result holds without the completeness condition. Identification of β0 =
E[v(wi )γ10 (wi )] for known v(wi ) with v(wi ) = E[γ20 (xi )|wi ] follows from Severini and Tripathi
(2006). Santos (2011) gives a related formula for a parameter β0 = ∫ ṽ(w)γ20(w)dw. The formula here differs from Santos (2011) in being an expectation rather than a Lebesgue integral. An
estimator of β0 could be constructed similarly to Santos (2011), though that is beyond the scope
of this paper.
Another new example of a doubly robust estimator is a weighted average over income values
of an average (across heterogenous individuals) of exact consumer surplus bounds, as in Hausman
and Newey (2016). Here y is quantity consumed, w = x = (x1 , x2 )′ , x1 is price, x2 is income,
γ10 (xi ) = E[yi |xi ], price is changing between x̆1 and x̄1 , and B is a bound on the income
effect. Let v2 (x2 ) be some weight function and v1 (x1 ) = 1(x̆1 ≤ x1 ≤ x̄1 )e−B(x1 −x̆1 ) . For the
moment function m(z, β, γ1) = v2(x2) ∫ v1(u)γ1(u, x2)du − β the true parameter β0 is a bound
on the average of equivalent variation over unobserved individual heterogeneity and income. Let
f10 (x1 |x2 ) denote the conditional pdf of x1 given x2 . Note that
E[m(zi, β0, γ1)] = E[v2(x2i) ∫ v1(u)[γ1(u, x2i) − γ10(u, x2i)]du]
= E[f10(x1i|x2i)^{-1} v1(x1i)v2(x2i){γ1(xi) − γ10(xi)}]
= −E[γ20(xi)E[yi − γ1(xi)|xi]],   γ20(x) = f10(x1|x2)^{-1} v1(x1)v2(x2).
Then it follows by Proposition 5 that a doubly robust moment function is
g(z, β, γ) = v2(x2) ∫ v1(u)γ1(u, x2)du − β + γ2(x)[y − γ1(x)].
When the moment conditions are formulated so that they are affine in the first step Proposition 4 applies to many previously developed doubly robust moment conditions. Data missing
at random is a leading example. Let β0 be the mean of a variable of interest w where w is not
always observed, y ∈ {0, 1} denote an indicator for w being observed, and x a vector of covariates. Assume w is mean independent of y conditional on covariates x. We consider estimating
β0 using the propensity score P0 (xi ) = Pr(yi = 1|xi ). We specify an affine conditional moment
restriction by letting γ1 (x) = 1/P (x) and ρ(z, γ1 ) = γ1 (x)y − 1. We have β0 = E[γ10 (xi )yi wi ],
as is well known. An affine moment function is then m(z, β, γ1 ) = γ1 (x)yw − β. Note that
E[m(zi , β0 , γ1 )] = E[E[yi wi |xi ]{γ1 (xi ) − γ10 (xi )}] = −E[γ20 (xi )ρ(zi , γ1 )],
γ20 (xi ) = −γ10 (xi )E[yi wi |xi ].
Then Proposition 5 implies that a doubly robust moment function is given by
g(z, β, γ) = γ1 (x)yw − β − γ2 (x)[γ1 (x)y − 1].
This is the well known doubly robust moment function of Robins, Rotnitzky, and Zhao (1994).
This example illustrates how applying Propositions 4 and 5 require specifying the first step
so that the moment functions are affine. These moment conditions were originally shown to be
doubly robust when the first step is taken to be the propensity score P (x). Propositions 4 and
5 only apply when the first step is taken to be 1/P (x). More generally we expect that particular
formulations of the first step may be needed to make the moment functions affine in the first
step and so use Propositions 4 and 5 to derive doubly robust moment functions.
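The sketch below checks this double robustness numerically, writing the displayed moment g = γ1(x)yw − β − γ2(x)[γ1(x)y − 1] with γ1(x) = 1/P(x) and, for that parameterization, γ2(x) = γ10(x)E[yw|x] = E[w|x] (the sign is absorbed into γ2 relative to the Proposition 5 form above); the data generating process is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.uniform(size=n)
p = 0.3 + 0.5 * x                              # true propensity score P(x)
y = (rng.uniform(size=n) < p).astype(float)    # observation indicator
w = 1.0 + x + rng.normal(size=n)               # beta0 = E[w] = 1.5

def dr_estimate(gamma1, gamma2):
    """beta solving E[gamma1*y*w - beta - gamma2*(gamma1*y - 1)] = 0."""
    return (gamma1 * y * w - gamma2 * (gamma1 * y - 1.0)).mean()

gamma1_true = 1.0 / p                          # 1 / P(x)
gamma2_true = 1.0 + x                          # E[w|x]

print(dr_estimate(gamma1_true, gamma2_true))          # ~ 1.5: both correct
print(dr_estimate(np.full(n, 2.0), gamma2_true))      # ~ 1.5: gamma1 incorrect
print(dr_estimate(gamma1_true, np.full(n, 0.8)))      # ~ 1.5: gamma2 incorrect
print(dr_estimate(np.full(n, 2.0), np.full(n, 0.8)))  # biased: both incorrect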
Another general class of doubly robust moment functions depends on the pdf γ1 of a subset of variables xi and is affine in γ1. An important example of such a moment function is the average density where β0 = ∫ f0(x)^2 dx and m(z, β, γ1) = γ1(x) − β. Another is the density weighted average derivative (WAD) of Powell, Stock, and Stoker (1989) where m(z, β, γ1) = −2y · ∂γ1(x)/∂x − β. Assume that E[m(zi, β0, γ1)] is a function of γ1 − γ10 that is continuous in the norm [∫{γ1(u) − γ10(u)}^2 du]^{1/2}. Then by the Riesz representation theorem there is γ20(x) with

E[m(zi, β0, γ1)] = ∫ γ20(u)[γ1(u) − γ10(u)]du.   (4.4)
The adjustment term for m(z, β, γ), as in Proposition 3 of Newey (1994), is φ(z, β, γ) = γ2(x) − ∫ γ2(u)γ1(u)du. The corresponding locally robust moment function is

g(z, β, γ1, γ2) = m(z, β, γ1) + γ2(x) − ∫ γ2(u)γ1(u)du.   (4.5)
This function is affine in γ1 and γ2 separately so when they are distinct Proposition 4 implies
double robustness. Double robustness also follows directly from
E[g(zi, β0, γ)] = ∫ γ20(u)[γ1(u) − γ10(u)]du + ∫ γ2(u)γ10(u)du − ∫ γ2(u)γ1(u)du
= − ∫ [γ2(u) − γ20(u)][γ1(u) − γ10(u)]du.
Thus we have the following result:
Proposition 9: If m(zi, β, γ1) is affine in γ1 and E[m(zi, β0, γ1)] is a linear function of γ1 − γ10 that is continuous in the norm [∫{γ1(x) − γ10(x)}^2 dx]^{1/2}, then for γ20(x) from equation (4.4), g(z, β, γ) = m(z, β, γ1) + γ2(x) − ∫ γ2(u)γ1(u)du is doubly robust.
We can use this result to derive doubly robust moment functions for the WAD. Let δ(xi ) =
E[yi |xi ]γ10 (xi ). Assuming that δ(u)γ1(u) is zero on the boundary, integration by parts gives
E[m(zi, β0, γ1)] = −2E[yi ∂γ1(xi)/∂x] − β0 = 2 ∫ [∂δ(u)/∂u]{γ1(u) − γ10(u)}du,

so that γ20(x) = 2∂δ(x)/∂x. A doubly robust moment condition is then

g(z, β, γ) = −2y ∂γ1(x)/∂x − β + 2 ∂δ(x)/∂x − 2 ∫ [∂δ(u)/∂u]γ1(u)du.
The double robustness of this moment condition appears to be a new result. As shown in Newey,
Hsieh, and Robins (1998), a “delete-one” symmetric kernel estimator based on this moment
function gives the twicing kernel estimator of NHR. Consequently the MSE comparisons of
NHR for twicing kernel estimators with the original kernel estimator correspond to comparison
of a doubly (and locally) robust estimator with one based on unadjusted moment conditions, as
discussed in the introduction.
It is interesting to note that Proposition 9 does not require that γ1 and γ2 are distinct first
step components. For the average density γ1 (x) and γ2 (x) both represent the marginal density
of x and so are not distinct. Nevertheless the moment function g(z, β0, γ) = γ1(x) − β0 + γ2(x) − ∫ γ1(u)γ2(u)du is doubly robust, having zero expectation if either γ1 or γ2 is correct.
This example shows a moment function may be doubly robust even though γ1 and γ2 are not
distinct. Thus, there are doubly robust moment functions that cannot be constructed using
Proposition 4.
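A numerical check of this claim for the average density example, taking f0 standard normal; the misspecified density below is an arbitrary illustrative choice.

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(5)
x = rng.normal(size=100_000)                          # draws from f0 = N(0, 1)
beta0 = quad(lambda u: norm.pdf(u) ** 2, -8, 8)[0]    # integral of f0^2

def moment_mean(gamma1, gamma2):
    """Sample analog of E[gamma1(x) - beta0 + gamma2(x) - int gamma1*gamma2]."""
    cross = quad(lambda u: gamma1(u) * gamma2(u), -8, 8)[0]
    return (gamma1(x) - beta0 + gamma2(x) - cross).mean()

wrong = lambda u: norm.pdf(u, loc=0.3, scale=1.2)     # misspecified density

print(moment_mean(norm.pdf, norm.pdf))   # ~ 0: both correct
print(moment_mean(wrong, norm.pdf))      # ~ 0: gamma1 incorrect
print(moment_mean(norm.pdf, wrong))      # ~ 0: gamma2 incorrect
print(moment_mean(wrong, wrong))         # noticeably nonzero: both incorrect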
All of the results of this Section continue to hold with cross fitting. That is true because the results of this Section concern the moments and their expectations at various values of the first step, and not the particular way in which the first step is formed.
5 Small Bias of Locally Robust Moment Conditions
Adding the adjustment term improves the higher order properties of the estimated moments
though it does not change their asymptotic variance. An advantage of locally robust moment
functions is that the effect of first step smoothing bias is relatively small. To describe this
advantage it is helpful to modify the definition of local robustness. In doing so we allow F to
represent a more general object, an unsigned measure (charge). Let ‖·‖ denote a seminorm on
F (a seminorm has all the properties of a norm but may be zero when F is not zero). Also, let
F be a set of charges where m(z, β0 , γ(F )) is well defined.
Definition 3: m(z, β, γ) is locally robust if and only if E[m(zi, β0, γ(F))] = o(‖F − F0‖) for F ∈ F.
Definition 1 requires that µ(F ) have a zero pathwise derivative. Definition 3 requires a zero
Frechet derivative for the semi-norm k·k, generally a stronger condition than a zero pathwise
derivative. The zero Frechet derivative condition is helpful in explaining the bias properties of
locally robust moment functions.
Generally a first step estimator will depend on some vector of smoothing parameters b. This b could be the bandwidth in a kernel estimator or the inverse of the number of terms in a series
estimator. Suppose that the limit of γ̂ for fixed b is γ(Fb) where Fb is a “smoothed” version of
the true distribution that approaches the truth F0 as b −→ 0. Then under regularity conditions
m̂(β0) will have limit E[m(zi, β0, γ(Fb))]. We can think of ‖Fb − F0‖ as a measure of the smoothing bias of Fb. Similarly E[m(zi, β0, γ(Fb))] is a measure of the bias in the moment conditions
caused by smoothing. The small bias property (SBP) analyzed in NHR is that the expectation
of the moment functions vanishes faster than the nonparametric bias as b −→ 0.
Definition 4: m(z, β, γ) and Fb have the small bias property if and only if E[m(zi, β0, γ(Fb))] = o(‖Fb − F0‖) as b → 0.
As long as Fb ∈ F , the set F in Definition 3, locally robust moments will have bias that
vanishes faster than the nonparametric bias ‖Fb − F0‖ as b → 0. Thus locally robust moment
functions have the small bias property.
Proposition 10: If m(z, β, γ) is locally robust then m(z, β, γ) has the small bias property
for any Fb ∈ F .
Note that the bias of locally robust moment conditions will be flat as a function of the first
step smoothing bias ‖Fb − F0‖ as that goes to zero. This flatter moment bias can also make the MSE flatter, meaning the MSE of the estimator does not depend as strongly on ‖Fb − F0‖ for
locally robust moments as for other estimators.
By comparing Definitions 3 and 4 we see that the small bias property is a form of directional
local robustness, with the moment being locally robust in the direction Fb . If the moments are
not locally robust then there will be directions where the bias of the moments is not smaller
than the smoothing bias Fb . Being locally robust in all directions can be important when the
first step is allowed to be very flexible, such as when machine learning is used to construct the
first step. There the first step can vary randomly across a large class of functions making the
use of locally robust moments important for correct inference, e.g. see Belloni, Chernozhukov,
and Hansen (2014).
This discussion of smoothing bias is based on sequential asymptotics where we consider
limits for fixed b. This discussion provides useful intuition but it is also important to consider
asymptotics where b could be changing with the sample size. We can analyze the precise effect
of using locally robust moments by considering an expansion of the average moments. Let m̄ = Σ_{i=1}^n m(zi, β0, γ0)/n, ḡ = Σ_{i=1}^n g(zi, β0, γ0)/n, φ(z) = φ(z, β0, γ0), and F̃ denote the empirical distribution. We suppose that γ̂ = γ(F̂) for some estimator F̂ of the true distribution F0. Let µ(F) = E[m(zi, β0, γ(F))]. By adding and subtracting terms we have

m̂(β0) = ḡ + R̃1 + R̂2 + R̂3,   R̃1 = ∫ φ(z)[F̂ − F̃](dz),   (5.1)
R̂2 = µ(F̂) − ∫ φ(z)F̂(dz),   R̂3 = m̂(β0) − m̄ − µ(F̂).

The object ∫ φ(z)F̂(dz) = ∫ φ(z)[F̂ − F0](dz) is a linearization of µ(F̂) = µ(F̂) − µ(F0), so R̂2 is a nonlinearity remainder that is second order. Also R̂3 is a stochastic equicontinuity remainder of a type familiar from Andrews (1994) that is also second order.
The locally robust counterpart ĝ(β0 ) to m̂(β0 ) has a corresponding remainder term that
is asymptotically smaller than R̃1 . To see this let φ̂(z) = φ(z, β0 , γ(F̂ )) and note that the
mean zero property of an influence function will generally give ∫ φ̂(z)F̂(dz) = 0. Then by ĝ(β0) = m̂(β0) + ∫ φ̂(z)F̃(dz) we have
Proposition 11: If ∫ φ̂(z)F̂(dz) = 0 then ĝ(β0) = ḡ + R̂1 + R̂2 + R̂3,

R̂1 = − ∫ [φ̂(z) − φ(z)][F̂ − F̃](dz).
Comparing this conclusion with equation (5.1) we see that locally robust moments have the
same expansion as the original moments except that the remainder R̃1 has been replaced by
the remainder R̂1 . The remainder R̂1 will be asymptotically smaller than R̃1 under sufficient
regularity conditions. Consequently, depending on cross correlations with other terms, the
locally robust moments ĝ(β0 ) can be more accurate than m̂(β0 ). For instance, as shown by
NHR the locally robust moments for linear kernel averages have a higher order bias term that
converges to zero at a faster rate than the original moments, while only the constant term in
the higher order variance is larger. Consequently, the locally robust estimator will have smaller
MSE asymptotically for appropriate choice of bandwidth. In nonlinear cases the use of locally
robust moments may not lead to an improvement in MSE because nonlinear remainder terms
may be important, see Robins et al. (2008) and Cattaneo and Jansson (2014). Nevertheless,
using locally robust moments does make smoothing bias small, which can be an important
improvement.
In some settings it is possible to obtain a corresponding improvement by changing the first
step estimator. For example, as mentioned earlier, for linear kernel averages the locally robust
estimator is identical to the original estimator based on a twicing version of the original kernel
(see NHR). The improvement from changing the first step can be explained in relation to the
remainder R̃1 , that is the difference of the integral of φ(z) over the estimated distribution F̂ and
its sample average. Note that F̂ − F̃ will be shrinking to zero so that R̃1 − E[R̃1 ] should be a
second order (stochastic equicontinuity) term. E[R̃1 ] is the most interesting term. If E[F̂ ] = Fb
and integrals can be interchanged then
E[R̃1] = ∫ φ(z)Fb(dz) = ∫ φ(z)[Fb − F0](dz).
When a twicing kernel or any other higher order kernel is used this remainder becomes second
order, depending on the smoothness of both the true distribution F0 and the influence function
φ(z), see NHR and Bickel and Ritov (2003). Thus, by using a twicing or higher order kernel
we obtain a second order bias, so all of the remainder terms are second order. Furthermore,
series estimator automatically have a second order bias term, as pointed out in Newey (1994).
Consequently, for all of these first steps the remainders are all second order even though the
moment function is not locally robust.
The advantage of locally robust moments is that the improvement applies to any first step
estimator. One does not have to depend on the particular structure of the estimator, such
as having a kernel of sufficiently high order. This feature is important when the first step is
20
complicated so that it is hard to analyze the properties of terms that correspond to E[R̃1 ].
Important examples are first steps that use machine learning. In that setting locally robust
moments are very important for obtaining root-n consistency; see Belloni et al. (2014). Locally
robust moments have the advantages we have discussed even for very complicated first steps.
6 First Step Series Estimators
First step series estimators have certain automatic robustness properties. Moment conditions
based on series estimators are automatically locally robust in the direction of the series approximation. We also find that affine moment functions are automatically doubly robust in these
directions. In this Section we present these results.
It turns out that for certain first step series estimators there is a version of the adjustment
term that has sample mean zero, so that ĝ(β) = m̂(β). That is, locally robust moments are
numerically identical to the original moments. This version of the adjustment term is constructed by treating the first step as if it were parametric with parameters given by those of the
series approximation, and calculating a sample version of the adjustment described in Section
2. Suppose that the coefficients λ̂ of the first step estimator satisfy ∑ⁿᵢ₌₁ h(zi, λ̂)/n = 0. Let M̂λ(β) = n⁻¹∑ⁿᵢ₌₁ ∂m(zi, β, λ̂)/∂λ, Ĥ = n⁻¹∑ⁿᵢ₌₁ ∂h(zi, λ̂)/∂λ, and

φ(z, β, γ̂) = −M̂λ(β)Ĥ⁻¹h(z, λ̂)    (6.1)
be the parametric adjustment term described in Section 2, where γ̂ includes the elements of M̂λ(β) and Ĥ and there is no cross fitting. Note that

(1/n)∑ⁿᵢ₌₁ φ(zi, β, γ̂) = −M̂λ(β)Ĥ⁻¹(1/n)∑ⁿᵢ₌₁ h(zi, λ̂) = 0.
It follows that ĝ(β) = m̂(β), i.e. the locally robust moments obtained by adding the adjustment term are identical to the original moments. Thus, if ∑ⁿᵢ₌₁ h(zi, λ̂)/n = 0 and we treat the first step series estimator as parametric and use the parametric adjustment term, then the locally robust moments are numerically identical to the original moments. This numerical equivalence result is an exact version of local robustness of the moments in the direction of the series approximation.
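This numerical equivalence is easy to see in a simulation. The following is a minimal sketch (not from the paper; the data generating process, the average-of-fitted-values moment, and all variable names are illustrative assumptions): with an OLS series regression first step, the sample mean of the parametric adjustment term in equation (6.1) is exactly zero, so ĝ(β) = m̂(β). The key is the first order condition of the series regression, which makes the sample mean of h(zi, λ̂) vanish.

```python
# Minimal numerical sketch of Section 6 (all names and the specific moment
# are illustrative assumptions): with a series-regression first step, the
# parametric adjustment term of equation (6.1) has exact sample mean zero.
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 4
w = rng.uniform(-1, 1, n)
y = np.sin(2 * w) + 0.3 * rng.standard_normal(n)
p = np.vander(w, K, increasing=True)        # series functions p^K(w)

# First step: OLS series regression, so h(z, lam) = p(w) * (y - p(w)'lam)
lam_hat = np.linalg.lstsq(p, y, rcond=None)[0]
h = p * (y - p @ lam_hat)[:, None]          # n x K, sample mean is zero (FOC)

# Moment m(z, beta, lam) = p(w)'lam - beta
beta0 = (p @ lam_hat).mean()
m = p @ lam_hat - beta0

# Parametric adjustment term of (6.1): phi_i = -M_lam H^{-1} h_i
M_lam = p.mean(axis=0)                      # average of dm/dlam
H = -(p.T @ p) / n                          # average of dh/dlam
phi = -(h @ np.linalg.solve(H, M_lam))      # H symmetric, so this is (6.1)

print(phi.mean())                           # 0 up to machine precision
print(m.mean(), (m + phi).mean())           # g_hat(beta) equals m_hat(beta)
```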
In some settings it is known that φ(z, β, γ̂) in equation (6.1) is an estimated approximation to
φ(zi , β, γ0), justifying its use. Newey (1994, p. 1369) showed that this approximation property
holds when the first step is a series regression. Ackerberg, Chen, and Hahn (2012) showed
that this property holds when the first step satisfies certain conditional moment restrictions
or is part of a sieve maximum likelihood estimator. It is also straightforward to show that
this approximation holds when the first step is a series approximation to the solution of the
conditional moment restriction E[ρ(zi , γ10 )|xi ] = 0. We expect that in general φ(z, β, γ̂) is an
estimator of φ(z, β, γ0 ).
We note that the result that ĝ(β) = m̂(β) is dependent on λ̂ not varying with the observations
and on being constructed from the whole sample. If we use cross fitting in any form then the
numerical equivalence of the original moments with their locally robust counterpart will generally
not hold. Also ĝ(β) ≠ m̂(β) will generally occur when different models are used for different
elements of γ. Such different models will often be present when machine learning is used for
constructing the estimators of the different elements of γ. See for example Chernozhukov et al.
(2016).
There are interesting cases where the original moment functions m(z, β, γ) with a series
estimator for γ are doubly robust in certain directions, with E[m(zi , β0 , γ1)] = 0 when γ1 is
a series approximation to γ10 . Here we show this directional double robustness property for
series estimators of solutions to conditional moment restrictions and orthogonal series density
estimators. Consider first a conditional moment restriction where the residual ρ(z, γ1 ) is affine
in γ1 , m(z, β, γ1 ) is also affine in γ1 , and the first step is a linear series estimator. Suppose
that the series estimator approximates γ10 by a linear combination pK′λ of a vector of functions
pK (w) = (p1K (w), ..., pKK (w))′. Let q K (xi ) be a K × 1 vector of instrumental variables and λ̂
be the instrumental variables estimator solving the moment condition ∑ⁿᵢ₌₁ q K (xi )ρ(zi , pK′λ̂) = 0.
Under standard regularity conditions the limit λ∗ of λ̂ will solve the corresponding population
moment condition E[q K (xi )ρ(zi , pK′λ∗ )] = 0. Let γ20 (x) satisfy equation (4.2). Then if γ20 (xi ) =
θ′ q K (xi ) for some θ it follows that
E[m(zi , β0 , pK′λ∗ )] = −E[γ20 (xi )ρ(zi , pK′λ∗ )] = −θ′ E[q K (xi )ρ(zi , pK′λ∗ )] = 0.
Thus we have the result
Proposition 12: If m(zi , β, γ1) and ρ(zi , γ1) are affine in γ1 ∈ Γ with Γ linear, E[m(zi , β0 , γ1 )]
is a mean square continuous functional of E[ρ(zi , γ1)|xi ], and γ20 (xi ) satisfying E[m(zi , β0 , γ1 )] =
−E[γ20 (xi )ρ(zi , γ1 )] also satisfies γ20 (xi ) = θ′ q K (xi ) for some θ then E[m(zi , β0 , pK′λ∗ )] = 0.
The property shown in the conclusion of Proposition 12 is a directional double robustness
condition that depends on γ1 being equal to a series approximation to γ10 and on γ20 (x) being
restricted. These restrictions are not required for double robustness of g(z, β, γ) = m(z, β, γ1 ) +
γ2 (x)ρ(z, γ1 ). We will have E[g(zi , β0 , γ20 , γ1 )] = 0 for all γ1 , and not just for the γ1 that are a
series approximation to γ10 , and for any γ20 (x) and not just one that is a linear combination of
q K (x). For series first steps the original moment functions will be doubly robust in certain
directions just as they are locally robust in certain directions.
Previous examples of Proposition 12 are given in Newey (1990, p. 116), Newey (1999), and
Robins et al. (2007). Proposition 12 allows for endogeneity and m(z, β, γ1 ) to depend on the
entire function γ1 . The condition that the instruments q K (xi ) have the same dimension as the
approximating functions pK (w) allows for more than K instrumental variables. As is well known,
any IV estimator of λ can be viewed as having only K instruments q K (x), each one of which
is equal to a linear combination of all the instrumental variables. Here the existence of θ such
that γ20 (xi ) = θ′ q K (xi ) is restrictive. It is not sufficient that γ20 (xi ) be any linear combination
of all the instrumental variables. We must have γ20 (xi ) equal to a linear combination of the
instruments q K (xi ) used in estimating λ∗ . This result also extends to the case where an infinite
number of instrumental variables are used in the limit. In that case q K (xi ) can also be interpreted
as an infinite dimensional linear combination of instrumental variables.
To illustrate, consider again the weighted average derivative example discussed above where
m(z, β, γ1 ) = v(w)∂γ1 (w)/∂w − β, ρ(z, γ1 ) = y − γ1 (w), and there is γ20 (x) such that
E[γ20(x)|w] = −f0(w)⁻¹∂[f0(w)v(w)]/∂w.    (6.2)
Suppose that the first step is a linear instrumental variables (IV) estimator with right-hand side
variables pK (wi ) and instruments q K (xi ) and let λ∗ be the limit of the IV coefficients. From
Proposition 12 it follows that if there is a θ such that θ′ q K (x) = γ20 (x) then
E[v(wi ) ∂pK (wi )′ λ∗ /∂w] − β0 = E[m(zi , β0 , pK′ λ∗ )] = 0.
Thus, the weighted average derivative of the linear IV estimator will be consistent when γ20 (x)
is a linear combination of q K (x).
The case where v(w) is 1, wi is Gaussian, and E[xi|wi] is linear in wi is interesting. Partial out constants and means so that (wi′, xi′)′ has mean zero. Let pK(wi) = wi and let qi = q K(xi) be any linear combination of xi such that ζ = E[qiwi′] is nonsingular. Normalize wi and qi so that each has an identity variance matrix. Then f0(w)⁻¹∂f0(w)/∂w = −w. Note that E[qi|wi] = ζwi, so that equation (6.2) is satisfied with γ20(x) = −ζ⁻¹q K(x). Thus the conditions of Proposition 12 are satisfied, giving the following result:
Corollary 13: If yi = γ10(wi) + εi, E[xiεi] = 0, E[yi²] < ∞, wi is Gaussian, and E[xi|wi] is linear in wi then for instruments equal to any linear combination qi of xi with cov(qi, wi) nonsingular,

E[∂γ10(wi)/∂w] = cov(qi, wi)⁻¹cov(qi, yi).
We can give a simple, direct proof that only uses qi and εi uncorrelated, as is assumed in
Corollary 13, rather than the conditional moment restriction we have been focusing on. With
means partialed out we have E[wi ] = E[qi ] = 0 so that
cov(qi , wi )−1 cov(qi , yi ) = (E[qi wi′ ])−1 E[qi yi ] = (E[qi wi′ ])−1 E[qi γ10 (wi )]
= (E[E[qi |wi ]wi′ ])−1 E[E[qi |wi ]γ10 (wi )] = (E[ζwi wi′ ])−1 E[ζwi γ10 (wi )]
= (E[wi wi′ ])−1 E[wi γ10 (wi )] = E[∂γ10 (wi )/∂w],
where the fourth equality follows by E[qi |wi ] = ζwi and the last equality holds by Stoker (1986)
and wi Gaussian. This result generalizes that of Stoker (1986) to NPIV models where the right
hand variables are Gaussian. Further generalizations to non-Gaussian cases can be obtained by
letting q K (x) and pK (w) be nonlinear in x and w.
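A Monte Carlo illustration of Corollary 13 follows (the design below is an assumption, not from the paper):

```python
# Monte Carlo check (illustrative design) of the Corollary 13 identity: with
# Gaussian w, instruments q that are a linear combination of x, and E[x|w]
# linear in w, the linear IV slope cov(q, w)^{-1} cov(q, y) recovers the
# average derivative E[d gamma_10(w) / d w].
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
w = rng.standard_normal(n)
x = 0.8 * w + 0.6 * rng.standard_normal(n)   # E[x|w] is linear in w
q = 2.0 * x                                  # any linear combination of x
y = w**3 + rng.standard_normal(n)            # gamma_10(w) = w^3, E[x eps] = 0

iv_slope = np.cov(q, y)[0, 1] / np.cov(q, w)[0, 1]
avg_deriv = (3 * w**2).mean()                # E[3 w^2] = 3 for w ~ N(0, 1)
print(iv_slope, avg_deriv)                   # both approximately 3
```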
Orthogonal series density estimators have a property analogous to Proposition 12. Suppose now that pK(x) is orthonormal with respect to Lebesgue measure on (−∞, ∞) so that ∫pK(u)pK(u)′du = I. An orthogonal series pdf estimator is γ̂1(x) = pK(x)′λ̂, where λ̂ = ∑ⁿᵢ₌₁ pK(xi)/n has limit λ∗ = ∫pK(u)γ10(u)du. Suppose that E[m(zi, β0, γ1)] is a continuous linear functional of γ1 − γ10 so that by the Riesz representation theorem there is γ20(x) with E[m(zi, β0, γ1)] = ∫γ20(u)[γ1(u) − γ10(u)]du. If there is θ with θ′pK(x) = γ20(x) then by pK(u) orthonormal and equation (4.4) we have

E[m(zi, β0, pK′λ∗)] = ∫γ20(u)[pK(u)′λ∗ − γ10(u)]du = θ′∫pK(u)[pK(u)′λ∗ − γ10(u)]du
= θ′[∫pK(u)pK(u)′du]λ∗ − θ′∫pK(u)γ10(u)du = θ′λ∗ − θ′λ∗ = 0.
Thus we have the following result:
Proposition 14: If i) m(z, β0, γ1) is affine in γ1, ii) E[m(zi, β0, γ1)] − β0 is a functional of γ1(x) − γ10(x) that is continuous in the norm {∫[γ1(u) − γ10(u)]²du}^{1/2}, and iii) γ20(xi) = θ′pK(xi) for some θ, then E[m(zi, β0, pK′λ∗)] = 0.
The orthogonal series estimators of linear functionals of a pdf discussed in Bickel and Ritov (2003) are examples. Those estimators are special cases of the estimator above where m(z, β, γ1) = ∫γ20(u)γ1(u)du − β for prespecified γ20(u). Proposition 14 implies that the orthogonal series estimator of β0 will be consistent if γ20(u) is a linear combination of the approximating functions. For example, if pK(x) is a vector of polynomials of order K − 1 then the orthogonal series estimator of moments of up to order K − 1 is consistent for fixed K.
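A small sketch of this property follows, adapted (as an assumption) to an orthonormal polynomial basis on a bounded support rather than (−∞, ∞):

```python
# Sketch (an assumption: basis orthonormal on [-1, 1] rather than the whole
# line) of the Proposition 14 property: an orthogonal-series pdf estimator
# gives consistent estimates of functionals whose Riesz representer lies in
# the span of the basis, even for fixed K.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
n, K = 100_000, 4

def basis(u):
    # orthonormal Legendre basis: p_j(u) = sqrt((2j+1)/2) P_j(u)
    cols = [np.sqrt((2 * j + 1) / 2) * legendre.Legendre.basis(j)(u)
            for j in range(K)]
    return np.column_stack(cols)

x = rng.beta(2, 5, n) * 2 - 1                # data supported on [-1, 1]
lam_hat = basis(x).mean(axis=0)              # lambda_hat = sum_i p^K(x_i)/n

# Estimate beta = integral of u^2 gamma_1(u) du, i.e. E[x^2]; here
# gamma_20(u) = u^2 lies in the span of the basis, so theta exists.
u = np.linspace(-1, 1, 20_001)
du = u[1] - u[0]
beta_series = np.sum(u**2 * (basis(u) @ lam_hat)) * du
print(beta_series, (x**2).mean())            # close to each other
```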
7 Conditional Moment Restrictions
Conditional moment restrictions are widely used in econometrics to identify parameters of interest. In this Section we expand upon the cases already considered to construct a wide variety of
locally robust moment conditions. In particular we extend the above results to residuals that may
depend on parameters of interest with instrumental variables that can differ across residuals.
Here we depart from deriving locally robust moments from the adjustment term for first step
estimation. Instead we extend the form of previously derived locally robust moments to the
more general setting of this Section.
To describe these results let j = 2, ..., J index conditional moment restrictions, ρj (z, β, γ1 )
denote a corresponding residual, and xj be corresponding conditioning variables. We will consider construction of locally robust moment conditions when the true parameters of interest β0
and a first step γ10 satisfy conditional moment restrictions
E[ρj (zi , β0 , γ10 )|xji ] = 0, (j = 2, ..., J).
(7.1)
Here γ1 is specified to include all functions that affect any of the residuals ρj (zi , β, γ1). We
continue to assume that the unconditional moment restriction in equation (2.1) holds, though
m(z, β, γ1 ) could be zero, with identification of β0 coming from the conditional moment restrictions of equation (7.1). We will discuss this case below.
In this setting we consider locally robust moment conditions having the form
g(z, β, γ) = m(z, β, γ1) + ∑ᴶⱼ₌₂ γj(xj, β)ρj(z, β, γ1),    (7.2)
where γj (x, β), (j = 2, ..., J) are unknown functions satisfying properties discussed below. These
moment functions depend on J first step components γ = (γ1 , ..., γJ ). By virtue of the conditional moment restrictions these moment functions will be doubly robust in (γ2 , ..., γJ ), meaning
that E[g(zi , β0 , γ10 , γ2 , ..., γJ )] = 0. They will be locally robust in γ1 if for the limit γ1 (F ) of γ̂1
and all regular parametric models Fτ as discussed in Section 2,
∂E[m(zi, β0, γ1(Fτ))]/∂τ + E[∑ᴶⱼ₌₂ γj0(xji) ∂E[ρj(zi, β0, γ1(Fτ))|xji]/∂τ] = 0.    (7.3)
If ∂E[m(zi , β0 , γ1(Fτ ))]/∂τ |τ =0 is a linear mean-square continuous function of
(∂E[ρ2 (zi , β0 , γ1 (Fτ ))|xji ]/∂τ, ..., ∂E[ρJ (zi , β0 , γ1 (Fτ ))|xji ]/∂τ )|τ =0
and the mean-square closure of the set of such vectors over all parametric submodels is linear
then existence of γj0 (xj ), j ≥ 2 satisfying equation (7.3) will follow by the Riesz representation
theorem. In addition, if m(z, β0 , γ1 ) and ρj (z, β0 , γ1), (j ≥ 2), are affine in γ1 then we will have
double robustness in γ1 similarly to Proposition 12. Summarizing we have
Proposition 15: If equation (7.3) is satisfied then g(z, β, γ) from equation (7.2) is locally
robust. Furthermore, if i) m(z, β0 , γ1 ) and ρj (z, β0 , γ1 ), (j ≥ 2) are affine in γ1 ∈ Γ with Γ
linear; ii) E[m(zi , β0 , γ1)] is a mean square continuous functional of E[ρj (zi , β0 , γ1 )|xij ], (j ≥ 2)
then there is γj0 (x), (j ≥ 2), such that g(z, β, γ) is doubly robust.
For local identification of β we also require that
rank(∂E[g(zi, β, γ0)]/∂β|β=β0) = dim(β).    (7.4)
A model where β0 is identified from semiparametric conditional moment restrictions with
common instrumental variables x is a special case where m(z, β, γ) is zero and xj = x, (j ≥ 2).
In this case let ρ(z, β, γ1 ) = (ρ2 (z, β, γ1 ), ..., ρJ (z, β, γ1 ))′ . The conditional moment restrictions
of equation (7.1) can be summarized as
E[ρ(zi , β0 , γ10 )|xi ] = 0.
This model is considered by Chamberlain (1992) and Ai and Chen (2003, 2007, 2012). We allow
the residual vector ρ(z, β, γ1 ) to depend on the entire function γ1 and not just its value at some
function of the observed data zi . Also let ϕ(x) = [γ2 (x), ..., γJ (x)] denote an r ×(J −1) matrix of
functions of x. A locally robust moment function g(z, β, γ) = ϕ(x)ρ(z, β, γ1 ) will be one which
satisfies Definition 1 with g(z, β, γ) replacing m(z, β, γ), i.e. where
∂E[g(zi, β0, γ(Fτ))]/∂τ = E[ϕ(xi) ∂E[ρ(zi, β0, γ1(Fτ))|xi]/∂τ] = 0,
for all regular parametric models. We also require that equation (7.4) is satisfied.
To characterize local robustness here it is helpful to assume that the set of pathwise derivatives of E[ρ(zi , β0 , γ)|xi ] varies over a linear set as the regular parametric model Fτ varies. To
be precise we will assume that γ1 ∈ Γ for Γ linear and for ∆ ∈ Γ we let
mγ(x, ∆) = ∂E[ρ(zi, β0, γ10 + τ∆)|xi]/∂τ|τ=0
denote the (J − 1) × 1 random vector that is the Gateaux derivative of the conditional expectation
E[ρ(zi , β0 , γ1 )|xi ] with respect to the first step γ1 in the direction ∆. We assume that mγ (x, ∆)
is linear in ∆ and that the mean square closure Mγ of the set {mγ (x, ∆) : ∆ ∈ Γ} equals
the mean-square closure of the set {∂E[ρ(zi , β0 , γ1(Fτ ))|xi ]/∂τ } as Fτ varies over all regular
parametric models. The local robustness condition can then be interpreted as orthogonality of
each row ϕ(xi )′ ej of ϕ(x) with Mγ in the Hilbert space of functions of x with inner product
⟨a, b⟩ = E[a(xi)′b(xi)], where ej is the jth unit vector. Thus the condition for locally robust
g(z, β, γ) = ϕ(x)ρ(z, β, γ1 ) is that
E[ϕ(xi )mγ (xi , ∆)] = 0 for all ∆ ∈ Γ.
We refer to such ϕ(xi ) as being orthogonal. They can be interpreted as instrumental variables
where the effect of estimation of γ1 has been partialed out.
There are many ways to construct orthogonal instruments. For instance, given an r × (J − 1)
matrix of instrumental variables A(x) one could construct corresponding orthogonal ones ϕ(xi )
as the matrix where each row is the residual of the least squares projection of the corresponding
row of A(x) on Mγ . We focus on another way of constructing orthogonal instruments that leads
to an efficient estimator of β0. Let Σ(x) denote some positive definite matrix with smallest eigenvalue bounded away from zero, so that Σ(xi)⁻¹ is bounded. Let ⟨a, b⟩Σ = E[a(xi)′Σ(xi)⁻¹b(xi)] denote an inner product and note that Mγ is closed in this inner product by Σ(xi)⁻¹ bounded.
Let Ãk(xi, A, Σ) denote the residual from the least squares projection of the kth row A(x)′ek of A(x) on Mγ with the inner product ⟨a, b⟩Σ. Also let ϕ(xi, A, Σ) be the matrix with kth row
Ãk (xi , A, Σ)′ Σ(xi )−1 , (k = 1, ..., r). Then for all ∆ ∈ Γ,
A (xi )′ ek − Ãk (xi , A, Σ) ∈ Mγ , E[ϕ(xi , A, Σ)mγ (xi , ∆)] = E[Ã(xi , A, Σ)Σ(xi )−1 mγ (xi , ∆)] = 0,
so that ϕ(xi , A, Σ) are orthogonal instruments. Also, Ã(x, A, Σ) can be interpreted as the
solution to
min
{M (x):M (x)′ ek ∈Mγ ,k=1,...,r}
E[{A(xi ) − M(xi )}Σ(xi )−1 {A(xi ) − M(xi )}′ ]
where the minimization is in the positive semidefinite sense.
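The following sketch illustrates the projection construction with a finite-dimensional stand-in for Mγ; all functional forms and names are illustrative assumptions:

```python
# Schematic sketch of orthogonal instruments: project each column of A(x)
# on a finite-dimensional stand-in for M_gamma spanned by the columns of
# B(x), in the Sigma-weighted inner product, and keep the residual times
# Sigma(x)^{-1}.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.standard_normal(n)
B = np.column_stack([x, x**2])               # stand-in basis for M_gamma
A = np.column_stack([np.ones(n), np.sin(x)]) # candidate instruments, r = 2
sigma = 1.0 + 0.5 * x**2                     # Sigma(x), scalar case J - 1 = 1
W = 1.0 / sigma                              # weights for <a, b>_Sigma

# weighted least squares projection of each column of A on B
coef = np.linalg.solve(B.T @ (B * W[:, None]), B.T @ (A * W[:, None]))
A_tilde = A - B @ coef                       # projection residuals
phi = A_tilde * W[:, None]                   # orthogonal instruments

# orthogonality check: sample analogue of E[phi(x) m_gamma(x, Delta)] = 0
# for m_gamma in the span of B (exact up to machine precision, by the
# weighted normal equations)
print((phi * B[:, [0]]).mean(axis=0), (phi * B[:, [1]]).mean(axis=0))
```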
The orthogonal instruments that minimize the asymptotic variance of GMM in the class of
GMM estimators with orthogonal instruments are given by
ϕ∗(xi) = ϕ(xi, A∗, Σ∗), A∗(xi) = (∂E[ρ(zi, β, γ10)|xi]/∂β|β=β0)′, Σ∗(xi) = Var(ρ(zi, β0, γ10)|xi).
To see that ϕ∗ (xi ) minimizes the asymptotic variance note that for any orthogonal instrumental
variable matrix ϕ(x)
G = E[ϕ(xi )A∗ (xi )′ ] = E[ϕ(xi )Ã(xi , A∗ , Σ∗ )′ ] = E[ϕ(xi )ρ(zi , β0 , γ10 )ρ(zi , β0 , γ10 )′ ϕ∗ (xi )′ ],
where the first equality defines G and the second equality holds by ϕ(xi) orthogonal. Since the instruments are orthogonal the asymptotic variance matrix of the GMM estimator with Ŵ →ᵖ W is the same as if γ̂1 = γ10. Define mi = G′Wϕ(xi)ρ(zi, β0, γ10) and m∗i = ϕ∗(xi)ρ(zi, β0, γ10). The
asymptotic variance of the GMM estimator for orthogonal instruments ϕ(x) is
(G′WG)⁻¹G′W E[ϕ(xi)ρ(zi, β0, γ10)ρ(zi, β0, γ10)′ϕ(xi)′]WG(G′WG)⁻¹ = (E[mi m∗i′])⁻¹E[mi mi′](E[m∗i mi′])⁻¹.
The fact that this matrix is minimized in the positive semidefinite sense for ϕ(x) = ϕ∗ (x) follows
from Theorem 5.3 of Newey and McFadden (1994) and can also be shown using the argument
in Chamberlain (1987).
Proposition 16: The instruments ϕ∗ (xi ) give an efficient estimator in the class of IV
estimators with orthogonal instruments.
The asymptotic variance of the GMM estimator with optimal orthogonal instruments is
(E[m∗i m∗i′])⁻¹ = (E[Ã(xi, A∗, Σ∗)Σ∗(xi)⁻¹Ã(xi, A∗, Σ∗)′])⁻¹.
This matrix coincides with the semiparametric variance bound of Ai and Chen (2003). Estimation of the optimal orthogonal instruments is beyond the scope of this paper. The series
estimator of Ai and Chen (2003) could be used for this.
8 Structural Economic Examples
Estimating structural models can be difficult when doing so requires computing equilibrium solutions. Motivated by this difficulty there is increasing interest in two step semiparametric
methods based on first step estimation of conditional choice probabilities (CCP). This two step
approach was pioneered by Hotz and Miller (1993). In this Section we show how locally robust
moment conditions can be formed for two kinds of structural models, the dynamic discrete choice
model of Rust (1987) and the static model of strategic interactions of Bajari, Hong, Krainer,
and Nekipelov (2010, BHKN). It should be straightforward to extend the construction of locally robust moments to other more complicated structural economic models. The use of such
moment conditions will allow for conditional choice probabilities that are estimated by modern,
machine learning methods.
8.1 Static Models of Strategic Interactions
We begin with a static model of interactions where results are relatively simple. To save space we
describe the estimator of BHKN while presenting only a small part of the motivational economic
structure. Let x denote a vector of state variables for a fixed set of individuals and let y denote
a vector of binary variables, each one representing a choice of an alternative by an individual.
Let the observations zi = (yi , xi ) represent repeated plays of a static game of interaction and
γ10 (x) = E[yi |xi = x] the vector of conditional choice probabilities given a value x of the state.
In the semiparametric estimation problem described in Section 4.2 of BHKN there is a known
function r(x, β, γ1(x)) of the state variable x, a vector of parameters β, and a possible value
γ1(x) of the conditional choice probabilities such that the true parameter β0 satisfies

E[yi|xi = x] = r(x, β0, γ10(x)).
This model can be used to form moment functions
m(z, β, γ1 ) = A(x)[y − r (x, β, γ1 (x))],
where A(x) is a matrix of instrumental variables; see equation (17) of BHKN.
To describe locally robust moment functions in this example, let rγ (x, β, γ1) = ∂r(x, β, γ1 )/∂γ1
where γ1 here denotes a real vector representing a possible value of γ10 (x). Then, it follows from
Proposition 4 of Newey (1994), as discussed in BHKN, that the adjustment term for first step
estimation of γ10 (x) = E[yi |xi = x] is
φ(z, β, γ1 ) = −A(x)rγ (x, β, γ1)[y − γ1 (x)].
This expression differs from BHKN in the appearance of γ1 (x) at the end of the expression
rather than rγ (x, β, γ1 (x)), which is essential for local robustness. The locally robust moment
conditions are then
g(z, β, γ1) = A(x){y − r(x, β, γ1 (x)) − rγ (x, β, γ1(x))[y − γ1 (x)]}.
For a first step estimator γ̂1(x) of the conditional choice probabilities, the locally robust sample moments will be

ĝ(β) = (1/n)∑ⁿᵢ₌₁ A(xi){yi − r(xi, β, γ̂1(xi)) − rγ(xi, β, γ̂1(xi))[yi − γ̂1(xi)]}.
Here the locally robust moments are constructed by subtracting from the structural residuals r
a linear combination of the first step residuals. Using these moment functions should result in
an estimator of the structural parameters with less bias and the other improved properties of
locally robust estimators mentioned above.
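To illustrate, the following simulation sketch (the game's functional form and all names are illustrative assumptions) checks that perturbing the first step CCP moves the locally robust moments only at second order, while the original moments move at first order:

```python
# Simulation sketch: perturbing the first step CCP by eps * Delta moves the
# locally robust moment g only at second order in eps, while the original
# moment m moves at first order. All functional forms are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.standard_normal(n)
sigmoid = lambda t: 1 / (1 + np.exp(-t))
b = np.array([-0.2, 0.8, 0.5])               # assumed true parameters

# equilibrium CCP gamma_10(x) solves g = sigmoid(b0 + b1 x + b2 g)
g0 = np.full(n, 0.5)
for _ in range(100):                         # contraction, converges fast
    g0 = sigmoid(b[0] + b[1] * x + b[2] * g0)
y = (rng.uniform(size=n) < g0).astype(float)

r = lambda g: sigmoid(b[0] + b[1] * x + b[2] * g)
r_gamma = lambda g: b[2] * r(g) * (1 - r(g))
A = x                                        # one instrument, for brevity

def m_bar(eps, delta):                       # original moment
    return (A * (y - r(g0 + eps * delta))).mean()

def g_bar(eps, delta):                       # locally robust moment
    g1 = g0 + eps * delta
    return (A * (y - r(g1) - r_gamma(g1) * (y - g1))).mean()

delta = np.cos(x)                            # an arbitrary direction
for eps in (0.05, 0.1):
    print(eps, m_bar(eps, delta) - m_bar(0, delta),
          g_bar(eps, delta) - g_bar(0, delta))
# g_bar differences shrink quadratically in eps; m_bar's only linearly
```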
The optimal instruments here are the same as discussed in BHKN. Let I denote the identity
matrix, set H(xi) = I − ∂r(xi, β0, γ10(xi))/∂γ1, and let Ω(xi) = H(xi)Var(yi|xi)H(xi)ᵀ denote the conditional variance of H(xi)(yi − γ10(xi)). The optimal instruments are given by

A∗(xi) = [∂r(xi, β0, γ10(xi))/∂β]′Ω(xi)⁻,

where A⁻ denotes a generalized inverse of a positive semi-definite matrix A.
This model can also be viewed as a special case of the conditional moment restrictions
framework with residual vector ρ(z, β, γ1 ) = (y − γ1 (x), y − r(x, β, γ1 (x)))T . An orthogonal
instrument that gives the above locally robust moment function is A(x)[−rγ (xi , β, γ1(xi )), I].
Here the locally robust moment function only depends on one first step function γ1 (x). This
feature is shared by all setups where the second step residual r(x, β, γ1) depends only on regressors that are included in the first step γ1 (x). The static model of strategic interactions leads
to this structure. The situation is not so simple in other structural economic models, as we see
next.
8.2 Dynamic Discrete Choice
Dynamic discrete choice estimation is important for modeling economic decisions; see Rust (1987). In this setting we find it helpful to describe the underlying economic model in order to explain the form of the moment conditions. Here we give locally robust moment conditions that depend on first step estimation of the conditional choice probabilities. We do this for the infinite horizon, stationary, dynamic discrete choice model of Rust (1987). It is straightforward to derive locally robust moment conditions for other structural econometric models. We also focus here on the case of data on many homogeneous individuals, but discuss how the approach extends to time series on one individual.
Suppose that the per-period utility function for an agent making choice j in period t is given
by
Ujt = vj (xt , β0 ) + ǫjt , (j = 1, ..., J; t = 1, 2, ...)
where we suppress the individual subscript i for notational convenience. The vector xt is the
observed state variables of the problem (e.g. work experience, number of children, wealth) and
the vector β is unknown parameters. The disturbances ǫt = {ǫ1t , ..., ǫJt } are not observed by the
econometrician. As in the majority of the literature we assume that ǫt is i.i.d. over time with
known CDF that has support RJ and is independent of the state process xt and that xt is Markov
of order 1. Let δ denote a time discount parameter, v̄(x) the expected value function, yjt ∈ {0, 1}
the indicator that choice j is made and v̄j (xt ) = vj (xt , β0 ) + δE[v̄(xt+1 )|xt , ytj = 1] the expected
value function for choice j. Also let ṽj denote a possible realization of ṽj (xt ) = v̄j (xt ) − v̄1 (xt ),
so that ṽ1 ≡ 0. Let ṽ = (ṽ2 , ..., ṽJ ) and Pj (ṽ) = Pr(ṽj + ǫjt ≥ ṽk + ǫkt ; ṽ1 = 0; k = 1, ..., J),
(j = 1, ..., J) denote the choice probabilities associated with the distribution of ǫt . Here we
normalize to focus on the difference with v̄1 (x) throughout. Let ṽ(x) = (ṽ2 (x), ..., ṽJ (x))′ . Let
γa (x) = (γa1 (x), ..., γaJ (x))T be a vector of first step functions with true values γaj0(xt ) =
Pr(ytj = 1|xt). From Rust (1987) we know that

γaj0(xt) = Pj(ṽ(xt)), (j = 1, ..., J),

v̄(xt) = E[maxⱼ{v̄j(xt) + ǫjt}|xt] = v̄1(xt) + E[maxⱼ{ṽj(xt) + ǫjt}|xt].
From Hotz and Miller (1993) we know that P (ṽ) = (P1 (ṽ), P2 (ṽ), ..., PJ (ṽ))′ is a one-to-one
function of ṽ, so that inverting this relationship it follows that E[maxj {ṽj (xt ) + ǫjt }|xt ] is a
function of γa0 (xt ), say E[maxj {ṽj (xt ) + ǫjt }|xt ] = H(γa0 (xt )), for some function H(·) (e.g. for
binary logit H(γa0 (xt )) = .5772 − ln(γa10 (xt ))). Then the expected value function is given by
v̄(xt) = v̄1(xt) + H(γa0(xt)).
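For the binary logit case, the inversion above can be checked numerically; the following sketch (with an assumed utility difference) verifies that E[maxⱼ{ṽj + ǫjt}] = .5772 − ln(P1):

```python
# Numerical check of the binary logit inversion cited above: with type 1
# extreme value errors, E[max_j {v_j + eps_j}] equals 0.5772... - ln(P_1),
# where P_1 is the choice 1 probability and v_1 = 0. v2 is an assumed value.
import numpy as np

rng = np.random.default_rng(5)
n = 2_000_000
v2 = 0.7                                     # v_1 normalized to 0
eps = rng.gumbel(size=(n, 2))
emax = np.maximum(eps[:, 0], v2 + eps[:, 1]).mean()

P1 = 1 / (1 + np.exp(v2))                    # logit probability of choice 1
print(emax, np.euler_gamma - np.log(P1))     # approximately equal
```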
To use these relationships to construct semiparametric moment conditions we normalize
v1 (x, β) = 0 and make an additional assumption for v̄1 (x). The additional assumption is that
E[v̄(xt+1 )|xt , 1] does not depend on xt . With this normalization and assumption we have a
constant choice specific value function for j = 1, that is v̄1 (x) = v̄1 , with
v̄1 (xt ) = 0 + δE[v̄(xt+1 )|xt , yt1 = 1] = δE[v̄(xt+1 )|yt1 = 1] = v̄1 .
A sufficient condition for constant v̄1 (xt ) is that j = 1 is a ”renewal” choice where the distribution
of the future state does not depend on the current state. In the Rust (1987) example this state
is the one where the bus engine is replaced.
With this normalization and assumption we now have
v̄j (x) = vj (x, β0 ) + δE[v̄1 + H(γa0 (xt+1 ))|xt = x, ytj = 1]
= vj (x, β0 ) + δv̄1 + δE[H(γa0 (xt+1 ))|xt = x, ytj = 1],
ṽj (x) = vj (x, β0 ) + δE[H(γa0 (xt+1 ))|xt = x, ytj = 1] − δE[H(γa0 (xt+1 ))|yt1 = 1], (j = 2, ..., J).
The choice specific expected value differences ṽj (x) have a parametric part vj (x, β0 ) and a
nonparametric part that depends on J − 1 additional nonparametric regressions γb (x, γa ) =
(γb1 (x, γa ), ..., γb,J−1 (x, γa ))T and an unknown parameter γc where
γbj0 (x, γa ) = E[H(γa (xt+1 ))|xt = x, yt,j+1 = 1], j = 1, ..., J − 1; γc0(γa ) = E[H(γa (xt+1 ))|yt1 = 1].
Let γ1 (x) = (γa (x)T , γb(x, γa )T , γc (γa ))T be a vector of first step objects and
ṽj (x, β, γ1) = vj (x, β) + δ[γb,j (x, γa ) − γc (γa )], ṽ(x, β, γ1) = (ṽ2 (x, β, γ1), ..., ṽJ (x, β, γ1))T
denote the semiparametric choice specific expected value differences. Semiparametric moment
conditions can then be formed by plugging a nonparametric estimator γ̂1 of the first step into
the expected differences and plugging those into the choice probability. Let yt = (yt1 , ..., ytJ )T
denote the vector of choice indicators for period t and z = (y1T , xT1 , ..., yTT , xTT )T be the vector
consisting of the observations on choice and state variables xt for each time period t. Also
let At (xt ) be an r × 1 vector of functions of xt where r is the dimension of β. Then for each
t = 1, ..., T − 1 we can form semiparametric moment conditions as
mt (z, β, γ1 ) = At (xt )[yt − P (ṽ(xt , β, γ1))].
To derive locally robust moment functions we can derive the adjustment term for estimation
of γ1 . The first step function γ1 is more complicated than previously considered. It depends on
two unknown conditional expectations, E[yt |xt ] and E[·|xt , j], (j = 2, ..., J). From Newey (1994,
p. 1357) we know that the adjustment term will be the sum of two terms, each adjusting for
one of the two conditional expectations while treating the other as if it were equal to the truth.
In the Appendix we give a general form for each of these adjustment terms. Here we apply that
general form to derive the corresponding locally robust moment functions for dynamic discrete
choice.
We begin by deriving the adjustment terms for γb and γc because they are simpler than
for γa . The adjustment terms for γb and γc are obtained by applying Proposition A1 in the
Appendix. Let γ̃b (F ), γ̃c (F ) have the same form as γb and γc except that E[·|xt , j] is replaced
by EF [·|xt , j]. For Ht+1 = H (γa0 (xt+1 )) and π1 = E[yt1 ] let
λbj (z, β, γ1) = [yt,j+1/γa,j+1(xt )]{H(γa (xt+1 )) − γb,j+1(xt , γa )},
λb (z, β, γ1) = (λb1 (z, β, γ1 ), ..., λb,J−1(z, β, γ1 ))T ,
λc (z, β, γ1 , π1 ) = π1−1 yt1 H(γa (xt+1 )) − γc (γa ).
Then for e a (J − 1) × 1 vector of 1's we have

∂E[mt(zi, β0, γa0, γ̃b(Fτ), γ̃c(Fτ))]/∂τ = E[φtb(zi, β0, γ1, γ2)S(zi)],

φtb(z, β, γ1, π) = −δA(xt)[∂P(ṽ(xt, β, γ1))/∂ṽ]λb(z, β, γ1) + δE[A(xt)∂P(ṽ(xt, β, γ1))/∂ṽ]e · λc(z, β, γ1, π1).
The adjustment term for estimation of γa (x) is obtained by applying Proposition A2. This
term is somewhat complicated because γa (x) is evaluated at x = xt+1 rather than the conditioning argument xt of its true value. We assume that xt is stationary over time so that xt+1 and xt
have the same pdf, eliminating the ratio of pdf’s in the conclusion of Proposition A2. Let γ̄a (F )
have the same form as γ10 except that γ10 (x) is replaced by EFt [yt |xt = x]. Also let
γ2j0(x, β, γ1) = E[(ytj/γaj(xt))A(xt)∂P(ṽ(xt, β, γ1))/∂ṽ | xt+1 = x], (j = 1, ..., J − 1),

γ30(x, π1) = π1⁻¹E[yt1|xt+1 = x]|x=xt.
Then we have
∂E[mt(zi, β0, γa(Fτ), γb0, γc0)]/∂τ = E[φta(zi, β0, γ1, γ2)S(zi)],

φta(z, β, γ1, γ2, γ3, π1) = −δ{γ2(x, β, γ1) − E[A(xt)Pṽ(ṽ(xt, β, γ1))]γ3(xt, π1)}e × [∂H(γa(xt))/∂γa][yt − γa(xt)].
We can now form locally robust moment conditions as
gt (z, β, γ1 , γ2 , γ3, π1 ) = mt (z, β, γ1 ) + φta (z, β, γ1 , γ2 , γ3, π1 ) + φtb (z, β, γ1 , π1 ).
With data zi that is i.i.d. over individuals these moment functions can be used for any t to
estimate the structural parameters β. Also, for data on a single individual we could use a time average ∑ᵀ⁻¹ₜ₌₁ gt(z, β, γ1, γ2, γ3, π1)/(T − 1) to estimate β, although the asymptotic theory we give does not apply to this estimator.
Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived the adjustment term for the more
complicated dynamic discrete game of imperfect information. Locally robust moment conditions
for such games could be formed using their results. We leave that formulation to future work.
9 Asymptotic Theory for Locally Robust Moments
In this Section we give asymptotic theory for locally robust estimators. In keeping with the general applicability of locally robust moments to a variety of first steps, we impose conditions on the first step that are as general and simple as we could find. In particular, the construction here only requires that the first step converge at a rate slightly faster than n−1/4 in norms specified below, a simpler condition than in most of the literature.
This formulation allows the results to be applied in settings where it is challenging to say much
about the first step other than its convergence rate, such as when machine learning is used in
the first step. The locally robust form of the moment conditions is essential for this formulation,
as previously discussed.
We use cross fitting in the first step to obtain an estimator that is root-n consistent and
asymptotically normal under such simple conditions. Chernozhukov et al. (2016) gives results
with cross fitting that allow for moment functions that are not smooth in parameters. Here we
focus on the smooth in parameters case. Cross fitting has been previously used in the literature
on semiparametric estimation. See Bickel, Klaasen, Ritov, and Wellner (1993) for discussion.
This approach is different from that of some previous work in semiparametric estimation, as in
Andrews (1994), Newey (1994), Chen, Linton, and van Keilegom (2003), Ichimura and Lee
(2010), where cross fitting was not used and the moment conditions need not be locally robust.
The approach adopted here leads to general and simple conditions.
The estimator is formed by grouping observations into L distinct groups. Let Iℓ , (ℓ = 1, ..., L)
partition the set of observation indices {1, ..., n}. Let γ̂−ℓ be the first step constructed from all
observations not in Iℓ . Consider sample moment conditions of the form
g̃(β) = (1/n) ∑ᴸₗ₌₁ ∑i∈Iℓ g(zi, β, γ̂−ℓ).
We consider GMM estimators based on these moment functions. This is a special case of the
cross fitting described earlier. Also, leave one out moment conditions are a further special case
where each Iℓ consists of a single observation. We focus here on the case where the number of
groups L is fixed to keep the conditions as simple as possible.
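Schematically, the cross-fit moment construction looks as follows (a template only; the moment function g and the first step fitter are user-supplied placeholders, and z is assumed to be an array supporting boolean indexing):

```python
# Cross-fitting template matching g_tilde(beta) above; fit_first_step and g
# are placeholders the user supplies.
import numpy as np

def cross_fit_moments(z, beta, fit_first_step, g, L=5, seed=0):
    """Return (1/n) sum_l sum_{i in I_l} g(z_i, beta, gamma_hat_{-l})."""
    n = len(z)
    folds = np.random.default_rng(seed).permutation(n) % L
    total = 0.0
    for l in range(L):
        gamma_hat = fit_first_step(z[folds != l])  # all data not in I_l
        total = total + g(z[folds == l], beta, gamma_hat).sum(axis=0)
    return total / n
```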
An important intermediate result is that the adjustment term for the first step is zero by
virtue of g(z, β, γ) being locally robust, that is
√n g̃(β0) = (1/√n)∑ⁿᵢ₌₁ g(zi, β0, γ0) + op(1).    (9.1)
With cross fitting this result holds under relatively weak and simple conditions:
Assumption 1: For each ℓ = 1, ..., L, i) ∫‖g(z, β0, γ̂−ℓ) − g(z, β0, γ0)‖²F0(dz) →ᵖ 0; ii) for ζ > 1, C > 0 we have ‖∫g(z, β0, γ̂−ℓ)F0(dz)‖ ≤ C‖γ̂−ℓ − γ0‖^ζ; and iii) √n‖γ̂−ℓ − γ0‖^ζ →ᵖ 0.
Lemma 17: If Assumption 1 is satisfied then equation (9.1) is satisfied.
This Lemma is proved in the Appendix. There are two important components to this result.
One component is a stochastic equicontinuity result
√n g̃(β0) − (1/√n)∑ⁿᵢ₌₁ g(zi, β0, γ0) − (1/√n)∑ᴸₗ₌₁ nℓ ∫g(z, β0, γ̂−ℓ)F0(dz) →ᵖ 0,
where nℓ is the number of observations with i ∈ Iℓ . Assumption 1 i) is sufficient for this result.
Assumption 1 i) is a much weaker stochastic equicontinuity condition than appears in much of
the literature, e.g. Andrews (1994). Those other conditions generally involve boundedness of
some derivatives of γ̂. In contrast Assumption 1 i) only requires that γ̂−ℓ have a mean square
convergence property. The cross fitting is what makes this condition sufficient. Cattaneo and
Jansson (2014) have also previously weakened the stochastic equicontinuity condition and established the validity of the bootstrap for kernel estimators under substantially weaker bandwidth
conditions than usually imposed.
The second component of the result is that
√n ḡ(γ̂−ℓ) →ᵖ 0, ḡ(γ) = ∫g(z, β0, γ)F0(dz).
This component follows from Assumptions 1 ii) and iii). By comparing Assumption 1 ii) with
Definition 2 we see that this condition implies local robustness in the sense that the Frechet
derivative of ḡ(γ) is zero at γ0 . Assumption 1 ii) will generally hold with ζ = 2 if ḡ(γ) is twice
continuously Frechet differentiable. In that case Assumption 1 iii) becomes the n1/4 rate condition
familiar from Newey and McFadden (1994) and other works. The more general 1 < ζ < 2 case
allows for the first Frechet derivative of ḡ(γ) to satisfy a Lipschitz condition. In this case
Assumption 1 iii) will require a convergence rate of γ̂ that is faster than n1/4 .
We note that previous results suggest that n1/4 convergence of γ̂ may be stronger than is
needed. As shown in Robins et al. (2008) and Cattaneo and Jansson (2014) the variance terms in √n ḡ(γ̂) are of the same order as the variance term of a nonparametric estimator, rather than being of the order of √n times those variance terms. The arguments for these weaker results
are quite complicated so we do not attempt to give an account here. Instead we focus on the
relatively simple conditions of Assumption 1.
Another component of an asymptotic normality result is convergence of the Jacobian term
∂g̃(β)/∂β. The conditions we impose to account for the Jacobian term are standard. Let G̃(β) =
∂g̃(β)/∂β denote the derivative of the moment function.
Assumption 2: There is a neighborhood N of β0 such that i) g(zi, β, γ) is differentiable in β on N with probability approaching 1; ii) there are ζ′ > 0 and d(zi) with E[d(zi)] < ∞ such that for β ∈ N and ‖γ − γ0‖ small enough

‖∂g(zi, β, γ)/∂β − ∂g(zi, β0, γ0)/∂β‖ ≤ d(zi)(‖β − β0‖^ζ′ + ‖γ − γ0‖^ζ′);

iii) E[‖∂g(zi, β0, γ0)/∂β‖] < ∞; iv) ‖γ̂−ℓ − γ0‖ →ᵖ 0, (ℓ = 1, ..., L).
Define

G = E[∂g(zi, β, γ0)/∂β|β=β0].

Lemma 18: If Assumption 2 is satisfied then for any β̄ →ᵖ β0, g̃(β) is differentiable at β̄ with probability approaching one and ∂g̃(β̄)/∂β →ᵖ G.
With Lemmas 17 and 18 in place the asymptotic normality of semiparametric GMM follows
in a standard way.
Theorem 19: If Assumptions 1 and 2 are satisfied, β̂ →ᵖ β0, Ŵ →ᵖ W, G′WG is nonsingular, and E[‖g(zi, β0, γ0)‖²] < ∞, then for Ω = E[g(zi, β0, γ0)g(zi, β0, γ0)′],

√n(β̂ − β0) →ᵈ N(0, V), V = (G′WG)⁻¹G′WΩWG(G′WG)⁻¹.
It is also useful to have a consistent estimator of the asymptotic variance of β̂. As usual such
an estimator can be constructed as
V̂ = (Ĝ′ŴĜ)⁻¹Ĝ′ŴΩ̂ŴĜ(Ĝ′ŴĜ)⁻¹,

Ĝ = ∂g̃(β̂)/∂β, Ω̂ = (1/n)∑ᴸₗ₌₁ ∑i∈Iℓ g(zi, β̂, γ̂−ℓ)g(zi, β̂, γ̂−ℓ)′.
Note that this variance estimator ignores the estimation of γ, which works here because the
moment conditions are locally robust. Its consistency will follow under the conditions of Theorem
19 and one additional condition that accounts for the presence of β̂ in Ω̂.
Theorem 20: If the conditions of Theorem 19 are satisfied and there is d̃(z) with E[d̃(zi)²] < ∞ such that for ‖β − β0‖ and ‖γ − γ0‖ small enough

‖g(zi, β, γ) − g(zi, β0, γ0)‖ ≤ d̃(zi)(‖β − β0‖^ζ′ + ‖γ − γ0‖^ζ′),

then V̂ →ᵖ V.
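A sketch of computing V̂ from the formulas above follows (names are placeholders; Ĝ, Ŵ, and the stacked cross-fit moments are assumed to be available from the estimation step):

```python
# Sketch of the sandwich variance estimator above; G_hat, W_hat, and the
# n x m matrix g_i of stacked cross-fit moments g(z_i, beta_hat,
# gamma_hat_{-l}) are assumed available from the estimation step.
import numpy as np

def sandwich_variance(G_hat, W_hat, g_i):
    """V_hat = (G'WG)^{-1} G'W Omega_hat WG (G'WG)^{-1}, Omega_hat = g'g/n."""
    n = g_i.shape[0]
    omega_hat = g_i.T @ g_i / n
    gwg_inv = np.linalg.inv(G_hat.T @ W_hat @ G_hat)
    middle = G_hat.T @ W_hat @ omega_hat @ W_hat @ G_hat
    return gwg_inv @ middle @ gwg_inv
```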
In this Section we have used cross fitting to obtain relatively simple conditions for asymptotic
normality of locally robust semiparametric estimators. It is also known that in some settings some kinds of cross fitting improve the properties of semiparametric estimators. For linear kernel averages it is known that the leave one out method eliminates a bias term and leads to a reduction in asymptotic mean square error; e.g. see NHR and the references therein. Also Robins et al. (2008) use cross fitting in higher order bias corrections. These results indicate that some kinds of cross fitting can lead to estimators with improved properties. For reducing
higher order bias and variance it may be desirable to let the number of groups grow with the
sample size. That case is beyond the scope of this paper.
10 APPENDIX
We first give an alternative argument for Proposition 2 that is a special case of the proof of Theorem 2.2 of Robins et al. (2008). As discussed above, φ(z, β0, γ0) is the influence function of the functional µ(F) = E[m(zi, β0, γ(F))]. Because it is an influence function it has mean zero at all true distributions, i.e. ∫φ(z, β0, γ(F0))F0(dz) ≡ 0 identically in F0. Since a regular parametric model {Fτ} is just a subset of all true models, we have

∫φ(z, β0, γ(Fτ))Fτ(dz) ≡ 0,

identically in τ. Differentiating this identity at τ = 0 and applying the chain rule gives

∂E[φ(zi, β0, γ(Fτ))]/∂τ|τ=0 = −E[φ(zi, β0, γ0)S(zi)].    (10.1)
Summing equations (3.3) and (10.1) we obtain
∂E[g(zi, β0, γ(Fτ))]/∂τ|τ=0 = ∂E[m(zi, β0, γ(Fτ))]/∂τ|τ=0 + ∂E[φ(zi, β0, γ(Fτ))]/∂τ|τ=0
= E[φ(zi, β0, γ0)S(zi)] − E[φ(zi, β0, γ0)S(zi)] = 0.
Thus we see that the adjusted moment functions g(z, β, γ) are locally robust.
Next we derive the form of the adjustment term when a first step is E[·|x, y = 1] for some binary variable y where yi ∈ {0, 1}. Consider a first step function of the form γ1(x) = E[wi|xi = x, yi = 1]. Let π(xi) = E[yi|xi]. Note that E[wi|xi, yi = 1] = E[yiwi|xi]/E[yi|xi] so that

∂EFτ[wi|xi, yi = 1]/∂τ = E[π(xi)⁻¹{yiwi − E[yiwi|xi] − E[wi|xi, yi = 1](yi − π(xi))}S(zi)|xi]
= E[π(xi)⁻¹yi{wi − E[wi|xi, yi = 1]}S(zi)|xi].
Suppose that there is δ(xi) such that

∂E[m(zi, β0, γ1(Fτ))]/∂τ = E[δ(xi) ∂EFτ[wi|xi, yi = 1]/∂τ]
= E[δ(xi)π(xi)⁻¹yi{wi − E[wi|xi, yi = 1]}S(zi)].
Then taking the limit gives the following result:
Proposition A1: If there is δ(x) such that ∂E[m(zi , β0 , γ1(Fτ ))]/∂τ = E[δ(xi )∂EFτ [wi |xi , yi =
1]/∂τ ] then the adjustment term is
φ(z, β, γ) = δ(x)π(x)−1 y{w − E[wi |xi = x, yi = 1]}.
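A quick simulation check of this adjustment term follows (the design is an assumption): since E[y{w − E[w|x, y = 1]}|x] = 0, the term has mean zero, as an influence-function adjustment must.

```python
# Simulation check (illustrative design) that the Proposition A1 adjustment
# term has mean zero: here E[w|x, y=1] = 1 + x by construction, since w is
# independent of y given x.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
x = rng.standard_normal(n)
pi = 1 / (1 + np.exp(-x))                    # pi(x) = E[y|x]
y = (rng.uniform(size=n) < pi).astype(float)
w = 1.0 + x + rng.standard_normal(n)         # E[w|x, y=1] = 1 + x
delta = np.cos(x)                            # an arbitrary delta(x)

phi = delta * (y / pi) * (w - (1.0 + x))     # adjustment term above
print(phi.mean())                            # approximately 0
```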
Next we derive the adjustment term when a nonparametric regression is evaluated at a variable different from the one being conditioned on in the regression. Note that for δ(v),

∂E[δ(vi)EFτ[yi|xi = x]|x=wi]/∂τ = ∂E[E[δ(vi)|wi]EFτ[yi|xi = x]|x=wi]/∂τ
= ∂∫E[δ(vi)|wi = w]EFτ[yi|xi = w]fw(w)dw/∂τ
= ∂∫[fw(x)/fx(x)]E[δ(vi)|wi = x]EFτ[yi|xi = x]fx(x)dx/∂τ
= E[{fx(xi)⁻¹fw(xi)E[δ(vi)|wi = w]|w=xi(yi − E[yi|xi])}S(zi)].

Taking limits gives
Proposition A2: If there is δ(v) such that ∂E[m(zi, β0, γ1(Fτ))]/∂τ = ∂E[δ(vi)EFτ[yi|xi = x]|x=wi]/∂τ then the adjustment term is

φ(z, β, γ) = fx(x)⁻¹fw(x)E[δ(vi)|wi = x](y − E[yi|xi = x]).
Next we give the proofs for the asymptotic normality results.
Proof of Lemma 17: Let

ḡ(γ) = ∫g(z, β0, γ)F0(dz), ∆̂iℓ = g(zi, β0, γ̂−ℓ) − ḡ(γ̂−ℓ) − g(zi, β0, γ0), (i ∈ Iℓ), ∆̄ℓ = (1/n)∑i∈Iℓ ∆̂iℓ.

Also, let Z−ℓ denote a vector of all observations zi for i ∉ Iℓ. Note that by construction E[∆̂iℓ|Z−ℓ] = 0, so for any i, j ∈ Iℓ, i ≠ j, it follows by zi and zj independent conditional on Z−ℓ that E[∆̂′iℓ∆̂jℓ|Z−ℓ] = E[∆̂iℓ|Z−ℓ]′E[∆̂jℓ|Z−ℓ] = 0. Furthermore

E[‖∆̂iℓ‖²|Z−ℓ] ≤ ∫‖g(z, β0, γ̂−ℓ) − g(z, β0, γ0)‖²F0(dz).

Therefore, for nℓ equal to the number of observations in group ℓ, Assumption 1 i) implies

E[∆̄′ℓ∆̄ℓ|Z−ℓ] = (1/n²)∑i∈Iℓ E[‖∆̂iℓ‖²|Z−ℓ] ≤ (nℓ/n²)∫‖g(z, β0, γ̂−ℓ) − g(z, β0, γ0)‖²F0(dz) = op(nℓ/n²).
Standard arguments then imply that for each ℓ we have ∆̄ℓ = op(√nℓ/n). It then follows that

√n[g̃(β0) − (1/n)∑ⁿᵢ₌₁ g(zi, β0, γ0) − ḡ(γ̂)] = √n ∑ᴸₗ₌₁ ∆̄ℓ = op(√nℓ/√n) →ᵖ 0.

It also follows by Assumption 1 ii) and iii) that

√n‖ḡ(γ̂)‖ ≤ √n C‖γ̂ − γ0‖^ζ →ᵖ 0.
The conclusion then follows by the triangle inequality. Q.E.D.
Proof of Lemma 18: Let G̃(β) = ∂g̃(β)/∂β when the derivative exists and Ĝ = (1/n)∑ⁿᵢ₌₁ ∂g(zi, β0, γ0)/∂β. By the law of large numbers and Assumption 2 iii), Ĝ →ᵖ G. Also, by Assumption 2 i), ii), iii), G̃(β̄) is well defined with probability approaching one, ∑ⁿᵢ₌₁ d(zi)/n = Op(1) by the Markov inequality, and by the triangle inequality,

‖G̃(β̄) − Ĝ‖ ≤ (1/n)∑ᴸₗ₌₁ ∑i∈Iℓ {d(zi)(‖β̄ − β0‖^ζ′ + ‖γ̂−ℓ − γ0‖^ζ′)} ≤ ((1/n)∑ⁿᵢ₌₁ d(zi))(‖β̄ − β0‖^ζ′ + ∑ᴸₗ₌₁ ‖γ̂−ℓ − γ0‖^ζ′) = Op(1)op(1) →ᵖ 0.
The conclusion then follows by the triangle inequality. Q.E.D.
The proofs of Theorems 19 and 20 are standard and so we omit them.
Acknowledgements
Whitney Newey gratefully acknowledges support by the NSF. Helpful comments were provided
by M. Cattaneo, J. Hahn, M. Jansson, Z. Liao, J. Robins, R. Moon, A. de Paula, J.M. Robin,
participants in seminars at Cornell, Harvard-MIT, UCL, and USC.
REFERENCES
Ackerberg, D., X. Chen, and J. Hahn (2012): ”A Practical Asymptotic Variance Estimator
for Two-step Semiparametric Estimators,” The Review of Economics and Statistics 94: 481–498.
Ackerberg, D., X. Chen, J. Hahn, and Z. Liao (2014): ”Asymptotic Efficiency of Semiparametric Two-Step GMM,” The Review of Economic Studies 81: 919–943.
Ai, C. and X. Chen (2003): “Efficient Estimation of Models with Conditional Moment Restrictions Containing Unknown Functions,” Econometrica 71, 1795-1843.
Ai, C. and X. Chen (2007): ”Estimation of Possibly Misspecified Semiparametric Conditional
Moment Restriction Models with Different Conditioning Variables,” Journal of Econometrics
141, 5–43.
Ai, C. and X. Chen (2012): ”The Semiparametric Efficiency Bound for Models of Sequential
Moment Restrictions Containing Unknown Functions,” Journal of Econometrics 170, 442–457.
Andrews, D.W.K. (1994): “Asymptotics for Semiparametric Models via Stochastic Equicontinuity,” Econometrica 62, 43-72.
Bajari, P., V. Chernozhukov, H. Hong, and D. Nekipelov (2009): ”Nonparametric and
Semiparametric Analysis of a Dynamic Discrete Game,” working paper, Stanford.
Bajari, P., H. Hong, J. Krainer, and D. Nekipelov (2010): ”Estimating Static Models of
Strategic Interactions,” Journal of Business and Economic Statistics 28, 469-482.
Bang, H. and J.M. Robins (2005): ”Doubly Robust Estimation in Missing Data and Causal Inference Models,” Biometrics 61, 962–972.
Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen (2012): “Sparse Models and
Methods for Optimal Instruments with an Application to Eminent Domain,” Econometrica 80,
2369–2429.
Belloni, A., V. Chernozhukov, and Y. Wei (2013): “Honest Confidence Regions for Logistic
Regression with a Large Number of Controls,” arXiv preprint arXiv:1304.3969.
Belloni, A., V. Chernozhukov, I. Fernandez-Val, and C. Hansen (2016): ”Program
Evaluation and Causal Inference with High-Dimensional Data,” Econometrica, forthcoming.
Bera, A.K., G. Montes-Rojas, and W. Sosa-Escudero (2010): ”General Specification
Testing with Locally Misspecified Models,” Econometric Theory 26, 1838–1845.
Bickel, P.J., C.A.J. Klaassen, Y. Ritov, and J.A. Wellner (1993): Efficient and Adaptive
Estimation for Semiparametric Models, Springer-Verlag, New York.
Bickel, P.J. and Y. Ritov (2003): ”Nonparametric Estimators Which Can Be ”Plugged-in,”
Annals of Statistics 31, 1033-1053.
Cattaneo, M.D., and M. Jansson (2014): ”Bootstrapping Kernel-Based Semiparametric Estimators,” working paper, Berkeley.
Chamberlain, G. (1987): “Asymptotic Efficiency in Estimation with Conditional Moment Restrictions,” Journal of Econometrics 34, 305–334.
Chamberlain, G. (1992): “Efficiency Bounds for Semiparametric Regression,” Econometrica 60,
567–596.
Chen, X. and X. Shen (1997): “Sieve Extremum Estimates for Weakly Dependent Data,”
Econometrica 66, 289-314.
Chen, X., O.B. Linton, and I. van Keilegom (2003): “Estimation of Semiparametric Models
when the Criterion Function Is Not Smooth,” Econometrica 71, 1591-1608.
Chen, X., and A. Santos (2015): “Overidentification in Regular Models,” working paper.
Chernozhukov, V., C. Hansen, and M. Spindler (2015): ”Valid Post-Selection and PostRegularization Inference: An Elementary, General Approach,” Annual Review of Economics 7:
649–688.
Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey
(2016): ”Double Machine Learning: Improved Point and Interval Estimation of Treatment and
Causal Parameters,” MIT working paper.
Firpo, S. and C. Rothe (2016): ”Semiparametric Two-Step Estimation Using Doubly Robust
Moment Conditions,” working paper.
Hasminskii, R.Z. and I.A. Ibragimov (1978): ”On the Nonparametric Estimation of Functionals,” Proceedings of the 2nd Prague Symposium on Asymptotic Statistics, 41-51.
Hausman, J.A., and W.K. Newey (2016): ”Individual Heterogeneity and Average Welfare,”
Econometrica 84, 1225-1248.
Hotz, V.J. and R.A. Miller (1993): ”Conditional Choice Probabilities and the Estimation of
Dynamic Models,” Review of Economic Studies 60, 497-529.
Ichimura, H., and S. Lee (2010): “Characterization of the Asymptotic Distribution of Semiparametric M-Estimators,” Journal of Econometrics 159, 252–266.
Ichimura, H. and W.K. Newey (2016): ”The Influence Function of Semiparametric Estimators,” CEMMAP working paper.
Lee, Lung-fei (2005): “A C(α)-type Gradient Test in the GMM Approach,” working paper.
Newey, W.K. (1990): ”Semiparametric Efficiency Bounds,” Journal of Applied Econometrics 5,
99-135.
Newey, W.K. (1991): ”Uniform Convergence in Probability and Stochastic Equicontinuity,”
Econometrica 59, 1161-1167.
Newey, W.K. (1994): ”The Asymptotic Variance of Semiparametric Estimators,” Econometrica
62, 1349-1382.
Newey, W.K. (1999): ”Consistency of Two-Step Sample Selection Estimators Despite Misspecification of Distribution,” Economics Letters 63, 129-132.
Newey, W.K., and D. McFadden (1994): “Large Sample Estimation and Hypothesis Testing,”
in Handbook of Econometrics, Vol. 4, ed. by R. Engle, and D. McFadden, pp. 2113-2241. North
Holland.
Newey, W.K., and J.L. Powell (1989) ”Instrumental Variable Estimation of Nonparametric
Models,” presented at Econometric Society winter meetings, 1989.
Newey, W.K., and J.L. Powell (2003) ”Instrumental Variable Estimation of Nonparametric
Models,” Econometrica 71, 1565-1578.
Newey, W.K., F. Hsieh, and J.M. Robins (1998): “Undersmoothing and Bias Corrected
Functional Estimation,” MIT Dept. of Economics working paper.
Newey, W.K., F. Hsieh, and J.M. Robins (2004): “Twicing Kernels and a Small Bias Property
of Semiparametric Estimators,” Econometrica 72, 947-962.
Neyman, J. (1959): “Optimal Asymptotic Tests of Composite Statistical Hypotheses,” Probability
and Statistics, the Harald Cramer Volume, ed., U. Grenander, New York, Wiley.
Pakes, A. and G.S. Olley (1995): ”A Limit Theorem for a Smooth Class of Semiparametric
Estimators,” Journal of Econometrics 65, 295-332.
Powell, J.L., J.H. Stock, and T.M. Stoker (1989): ”Semiparametric Estimation of Index
Coefficients,” Econometrica 57, 1403-1430.
Robins, J.M., A. Rotnitzky, and L.P. Zhao (1994): ”Estimation of Regression Coefficients
When Some Regressors Are Not Always Observed,” Journal of the American Statistical Association 89: 846–866.
Robins, J.M. and A. Rotnitzky (1995): ”Semiparametric Efficiency in Multivariate Regression
Models with Missing Data,” Journal of the American Statistical Association 90:122–129.
Robins, J.M., A. Rotnitzky, and L.P. Zhao (1995): ”Analysis of Semiparametric Regression
Models for Repeated Outcomes in the Presence of Missing Data,” Journal of the American
Statistical Association 90,106–121.
Robins, J.M., and A. Rotnitzky (2001): Comment on “Inference for Semiparametric Models: Some Questions and an Answer” by P.J. Bickel and J. Kwon, Statistica Sinica 11, 863-960.
Robins, J.M., A. Rotnitzky, and M. van der Laan (2000): ”Comment on ’On Profile Likelihood’ by S. A. Murphy and A. W. van der Vaart,” Journal of the American Statistical Association 95, 431-435.
Robins, J., M. Sued, Q. Lei-Gomez, and A. Rotnitzky (2007): ”Comment: Performance of
Double-Robust Estimators When ’Inverse Probability’ Weights Are Highly Variable,” Statistical
Science 22, 544–559.
Robins, J.M., L. Li, E. Tchetgen, and A. van der Vaart (2008) ”Higher Order Influence
Functions and Minimax Estimation of Nonlinear Functionals,” IMS Collections Probability and
Statistics: Essays in Honor of David A. Freedman, Vol 2, 335-421.
Rust, J. (1987): ”Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold
Zurcher,” Econometrica 55, 999-1033.
Santos, A. (2011): ”Instrumental Variable Methods for Recovering Continuous Linear Functionals,” Journal of Econometrics, 161, 129-146.
Scharfstein, D.O., A. Rotnitzky, and J.M. Robins (1999): Rejoinder to “Adjusting For
Nonignorable Drop-out Using Semiparametric Non-response Models,” Journal of the American
Statistical Association 94, 1135-1146.
Severini, T. and G. Tripathi (2006): ”Some Identification Issues in Nonparametric Linear
Models with Endogenous Regressors,” Econometric Theory 22, 258-278.
Stoker, T. (1986): ”Consistent Estimation of Scaled Coefficients,” Econometrica 54, 1461-1482.
Tamer, E. (2003): ”Incomplete Simultaneous Discrete Response Model with Multiple Equilibria,”
Review of Economic Studies 70, 147-165.
van der Vaart, A.W. (1991): “On Differentiable Functionals,” The Annals of Statistics, 19,
178-204.
van der Vaart, A.W. (1998): Asymptotic Statistics, Cambridge University Press, Cambridge,
England.
Wooldridge, J.M. (1991): “On the Application of Robust, Regression-Based Diagnostics to
Models of Conditional Means and Conditional Variances,” Journal of Econometrics 47, 5-46.
| 10 |
Towards a Consumer-Centric Grid: A Behavioral Perspective
Walid Saad1, Arnold Glass2 , Narayan Mandayam3, and H. Vincent Poor4
1
Wireless@VT, Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, USA, Email: [email protected]
2
Department of Psychology, Rutgers University, New Brunswick, NJ, USA, Email: [email protected]
3
Electrical and Computer Engineering Department, Rutgers University, New Brunswick, NJ, USA, Email: [email protected]
4
Electrical Engineering Department, Princeton University, Princeton, NJ, USA, E-mail: [email protected]
arXiv:1511.01065v1 [] 1 Nov 2015
Abstract—Active consumer participation is seen as an integral
part of the emerging smart grid. Examples include demand-side management programs, incorporation of consumer-owned
energy storage or renewable energy units, and active energy
trading. However, despite the foreseen technological benefits of
such consumer-centric grid features, to date, their widespread
adoption in practice remains modest. To shed light on this
challenge, this paper explores the potential of prospect theory,
a Nobel-prize winning theory, as a decision-making framework
that can help understand how risk and uncertainty can impact
the decisions of smart grid consumers. After introducing the
basic notions of prospect theory, several examples drawn from a
number of smart grid applications are developed. These results
show that a better understanding of the role of human decision-making within the smart grid is paramount for optimizing
its operation and expediting the deployment of its various
technologies.
I. INTRODUCTION
The electric power grid has undergone unprecedented
changes over the past few years. The traditional, hierarchical and centralized electric grid has transformed into a
large-scale, decentralized, and “smart” grid [1]–[4]. Such a
smart grid is expected to encompass a mix of devices, as
shown in Fig. 1, that include distributed renewable energy
sources, electric vehicles (EVs), and storage units that can
be actively controlled and operated via a reliable, two-way
communication infrastructure [1]. The effective operation of
such a heterogeneous and decentralized system is expected to
change the way in which energy is produced and delivered to
consumers.
One key byproduct of the smart grid evolution is an
ability to deliver innovative energy management services to
consumers [1]–[4]. Here, energy management refers to the
processes through which energy is generated, managed, and
delivered to consumers in the grid. For instance, demand-side
management (DSM) and demand response mechanisms will
be an integral part of the smart grid. The primary goal of such
programs is to dynamically shape and manage the supply and
demand on the grid in order to maintain a desirable load over
various timescales. Indeed, the design of optimized DSM and
demand response protocols and associated pricing schemes
has led to significant research in this area in recent years [5]–
[37].
This research is supported by the U.S. National Science Foundation under
Grants CNS-1446621, ECCS-1549894, ECCS-1549900, and ECCS-1549881.
Dr. Saad is the corresponding author.
Fig. 1. A future smart grid with a heterogeneous mix of storage units, EVs,
renewable sources, and other consumer-owned equipment.
Moreover, in the smart grid, consumers will be able to
individually own energy production units, such as solar panels,
as well as storage devices in the form of EVs or small batteries. This can potentially transform every smart grid consumer
into an independent energy production and storage source.
Consequently, the possibility of energy trading between such
well-equipped consumers will undoubtedly become a reality
in the next few years. Indeed, many recent works, such as in
[8], [38]–[66], have investigated the various challenges of such
large-scale energy exchange, which include the development
of optimized market mechanisms, the management of the
grid operation, and the optimized exploitation of available
consumer-owned storage and energy production units.
Realizing this vision of a distributed, sustainable, and
consumer-centric smart grid will naturally face many challenges. On the one hand, although DSM programs (and
related ideas) have been theoretically shown to yield important
technological benefits to the grid, their widespread deployment still remains limited [67]–[73]. On the other hand, the
impact of energy trading on the smart grid operation and
the realistic assumption that every consumer can become a
producer of energy are still not well understood. In addition,
how to maximize the amount of energy that stems from
renewable sources is yet another important challenge. Last but
not least, the design of efficient dynamic pricing mechanisms
that go hand-in-hand with DSM and energy trading schemes
is seen as a critical enabler for most of the foreseen smart
grid features.
To widely deploy such grid features, one important challenge, among many others, is to properly incentivize consumers to actively participate in emerging grid features.
For example, without effective adoption of DSM schemes
by consumers, power companies will not be able to reap
the technological benefits of such load-shaping mechanisms.
Similarly, the willingness of consumers to own and actively
utilize EVs, storage units, or even renewable sources, is an
essential milestone for the deployment of a truly sustainable,
consumer-centric smart grid. For example, the statistics in [71]
show that installed solar generating capacity has increased
from about 1000 MW in 2010 to more than 6000 MW in 2014.
However, residential capacity has only increased from about
200 MW in 2010 to about 1000 MW in 2014. The reasons for
this small growth are touched upon in [72]: the upfront cost of $21,000-$25,000 is more than most homeowners are willing to risk on what is still an uncertain venture. Clearly, coupled
with a properly designed and cost-effective ICT infrastructure,
active consumer participation plays an instrumental role in
facilitating the adoption of some of the smart grid technologies
and features.
However, somewhat remarkably, most of the existing research in this area is still based on formal mathematical
constructs, such as game theory or classical optimization,
which presume that consumers are objective, rational entities
that are uninfluenced by real-world behavioral considerations.
While such an assumption may hold in a highly centralized, traditional power system, its failure in practice remains an important barrier that has prevented the widespread adoption of the smart grid.
The primary goal of this article is to shed light on the role
of consumer participation in the grid, while exposing the role
of the Nobel-prize-winning framework of prospect theory in providing a mathematical basis within which to better
understand how consumer behavior and realistic smart grid
considerations impact the operation and efficiency of smart
energy management mechanisms such as DSM, energy trading, and storage management. To this end, we first provide an
overview of important energy management services in which
consumer participation plays an important role. For each such
service, we expose the state of the art and discuss the key
assumptions and limitations. Then, we present the basics of
prospect theory and discuss the motivation for applying this
framework in a smart grid environment. We illustrate the
benefits of prospect theory via two simple examples pertaining
to energy storage management and DSM. We conclude by
providing a future roadmap on how behavioral studies can
play an instrumental role in future smart grid designs.
The rest of this paper is organized as follows: Section II
presents an overview of the existing energy management literature. In Section III, we provide a tutorial on the framework of
prospect theory and its motivation. Then, in Section IV, we
discuss two smart grid examples. Finally, Section V draws conclusions and outlines future work.
II. ENERGY MANAGEMENT IN THE SMART GRID: A REVIEW
Owing to the deployment of a smart communication infrastructure and to the presence of new devices, such as storage
units, smart meters, and renewable sources, the smart grid
presents numerous new opportunities for energy production
and distribution that were not possible in a classical grid.
For instance, the possibility of deploying smart meters at consumer premises opens the door for enabling consumers to actively manage their energy. In addition, the ability of a smart meter to communicate in real time with a power company's control center provides the latter with various opportunities
to actively control and monitor energy usage. Such new
capabilities undoubtedly change the way in which energy is
generated and distributed to consumers. Clearly, smart energy
management protocols and mechanisms are needed to exploit
the opportunities brought forward by this new smart grid
infrastructure.
Deploying efficient energy management mechanisms in future smart grid systems faces many challenges. The first such
challenge is to actively utilize smart metering and consumer-based energy management systems to shape the overall grid
load over time. Such load shaping is quintessential for an
efficient operation of a large-scale smart grid. Enabling such
demand-side management requires both increased automation
and active participation by the grid consumers. Another important challenge is to properly integrate and exploit storage
devices in the grid. For instance, on the one hand, a power
company can make use of EVs to store energy reserves so
as to regulate the grid operation. In this example, the power
company will be submitting an offer for ancillary services to
an independent system operator (ISO). On the other hand, a
consumer-based storage unit can be used as a means to store
or even sell energy back to the power company (rather than
an ISO). Last but not least, an important energy management
challenge is to properly decide on how to integrate and utilize
consumer-based power sources, such as solar panels, within
an operating smart grid system.
In summary, energy management in the smart grid involves
the planning and operation of energy-related production and
consumption units, particularly when such units are consumer-owned. Next, we summarize some of the main research topics
related to energy management in the smart grid.
A. Demand-side Management
Demand-side management and demand response programs
are arguably the most important form of energy management
in the smart grid. DSM can entail a broad range of programs. These programs range from classical direct consumer
load control to peak shaving programs and ancillary service
provisions. Naturally, each such DSM program has its own
challenges. Here, we will summarize some of these programs,
while emphasizing the role of consumers. For example, in
peak shaving or direct load control programs, DSM schemes
typically aim at encouraging consumers to change their energy
consumption habits during peak hours. In particular, such
load control DSM schemes aim at providing consumers with
incentives to shift their deferrable grid load to various times during the day, so as to shape the peak hour load on the grid.
For example, a simple DSM scheme can provide monetary
benefits to consumers if they use delay-tolerant equipment,
such as dish washers or washing machines, during the night,
instead of peak hours [5]–[37]. Even if the consumption of an individual appliance is small, the participation of consumers at scale, such as within a neighborhood or city, will significantly impact the smart grid. Moreover, DSM will also extend
to other types of consumer-owned devices, such as storage
units or renewable energy sources whose consumption and
usage might be more significant than standard appliances [5]–
[37]. Last but not least, DSM consumers need not only be home users; they can also include industrial players and even small,
local energy providers that work hand-in-hand with the power
company. In such cases, significant gains for both consumers
and power companies can be achieved if DSM is properly
implemented [73].
1) Challenges: The key challenge in DSM is to design
realistic incentive mechanisms that can be used by power
companies and consumers alike to manage their power consumption over time. The essence of demand-side management
revolves around modeling the interactions and decision making processes of various grid players whose goals and actions
are largely interdependent. For example, the change in the
load of a certain consumer can lead to change in the pricing
scheme used by the power company which, in turn, can lead
to a change in the behavior of other consumers. This large
coupling in the behavior and goals of the grid consumers has
led to an abundant literature that applies the mathematical
framework of game theory [74] to analyze and design efficient
DSM schemes.
Game theory is a mathematical framework that enables
one to model the decision making processes of a number
of players whose objectives are largely interdependent. The
merits of a game-theoretic approach for DSM include: 1)
ability to capture the heterogeneity of the devices in the
grid, 2) effective integration of consumer-based decisions,
3) synergy between game-theoretic designs and the design
of incentive mechanisms, and 4) low-complexity learning
mechanisms that can characterize the outcome of a game and
that can be practically implemented in a real-world smart grid.
2) State-of-the-art: There has been a surge in research
activities related to DSM in recent years [5]–[37]. As already
mentioned, the majority of these works adopt a game-theoretic approach to demand-side management.
One of the earliest works in this area is [5] which presents
a DSM model in which the users are able to decide on how
to schedule their appliances over a given time horizon. The
basic idea is simple: each consumer selects a certain schedule
for its appliances so as to minimize its overall cost, given a fixed, yet well-designed, pricing scheme from
the company. Using a game-theoretic model, the authors in
[5] characterize the eventual operating point of the grid and
show that, under the assumption that users are rational and
will act strategically, significant reductions in the overall grid
load can be foreseen using such a DSM scheme.
The recent work in [6] extends the model of [5] by including
the power company as a player in the system. In this regard,
following a grid model similar to [5], the authors enable
the power company to strategically decide on its pricing
depending on the total power demand. The objective is to reduce
the peak-to-average ratio (PAR) of the load demand. Using
numerical simulations, the authors establish the merits of such
a DSM scheme and show that noticeable reduction in the PAR
can be harnessed via dynamic pricing that adapts to the users’
behavior.
Another key contribution on DSM is presented in [7]. In
this work, the goal is not to reduce peak hour consumption,
but rather to match the supply and demand. Depending on
whether there is a deficit or excess of energy, the proposed
game-theoretic market model incentivizes the consumers to
either increase or shed their load so as to match the supply
and maintain normal grid operation. Thus, this work highlights
an interesting use of DSM for regulating the overall grid
operation, rather than just for reducing or shifting load over
time.
The work in [8] studies, using a game-theoretic and optimization framework, the ability of consumers to coordinate
the way in which they defer their grid load, based on the
power company’s pricing scheme. In particular, the authors
observe that, when DSM protocols leave the smart meters to
react independently, in an uncoordinated manner, to pricing
fluctuations, new peak hours may be created thus defeating
the main purpose of a DSM scheme. To this end, the authors
propose a coordination mechanism between a large population
of smart grid consumers. In this mechanism, instead of
directly reacting to the pricing change, smart meters, acting
on behalf of users, aim to adapt the deferment of loads to
the changes in the price. One of the key contributions of this
work is to consider such a coordination over a large number
of consumers. The results, based on realistic, empirical market
models from the UK, show that such a coordinated DSM
approach can reduce peak hour demand while also reducing
carbon emissions.
In [9], an interesting game-theoretic framework was developed to answer an important question: what is the value of
DSM and demand response mechanisms in the smart grid?
Essentially, the system is viewed as a noncooperative, hierarchical game between a number of generation companies and
the consumers. On the one hand, the generation companies
are controlled by the utility operator who can determine
their production level. On the other hand, the consumers are
aggregated into one collective decision maker, who responds
to a pricing signal sent out by the operator, to determine
the overall consumption level. Using this model, it has been
shown that the use of a demand response mechanism can,
in some cases, be more beneficial to generation companies
than to consumers. This benefit is largely dependent on how
consumers respond to the pricing scheme. Therefore, this work
has yet again shown that the way in which consumers behave
must be properly modeled if one is to reap the benefits of a
technology such as DSM.
Building on those key contributions, a number of equally
interesting DSM schemes have emerged more recently [10]–
[37] expanding on the aforementioned works by developing
more advanced models such as those that integrate additional
players or other energy efficiency metrics.
3) Summary and Remarks: Clearly, existing research has
established the technological benefits of DSM. Indeed, most
of the existing works such as in [5], [6], [8]–[37], have shown
that under fairly realistic scenarios, DSM can yield significant
reduction in peak hour load and can provide an interesting
means for regulating the overall operation of the grid. The
common thread of these existing works remains a game-theoretic model
in which various interactive scenarios between the users and
one or more utility providers are modeled. The outcomes
include a broad range of pricing mechanisms and load-shifting
scheduling algorithms that can be implemented to optimize
various energy efficiency metrics, such as peak hour load,
PAR, and load regulation metrics. Undoubtedly, DSM and
related ideas are likely to become an important component
of the smart grid.
Alas, despite these established gains of DSM, the real-world
implementation of such programs (and related ideas) has
remained below expectations [67]–[73]. One of the underlying
reasons is that, in real life, consumers do not behave in the way that many existing mathematical models assume. In this regard, most of these existing
models rely on the assumption that players are rational and
will act objectively when faced with a DSM decision. In other
words, these models presume that, in the real-world, consumer
behavior will follow strictly objective measures of benefits
and losses, when deciding on whether or not to subscribe to a
DSM scheme or when choosing how to schedule appliance
usage. However, realistically, consumers may deviate from this rational behavior due to various factors. For example, on a cold
winter day, consumers may be reluctant to shift their heating
consumption to a later time of the day, even if such a shift can
be beneficial to the grid or can bring some economic benefit
to the consumers.
Clearly, within the context of DSM, there is an urgent
need to capture such “behavioral” factors when designing the
demand response mechanisms of the future. Without a careful
accounting for the behavioral side, the real-world adoption of
DSM mechanisms will not live up to the expectations.
B. Integration of Storage Units and Consumer-Owned Renewable Sources
Beyond DSM and demand response mechanisms, energy
management in the smart grid must account for the presence
of a variety of new devices that are expected to be deployed in
the near future. Such devices include EVs, storage units, and
renewable energy sources. While renewable energy sources
may be owned by energy providers or consumers, the majority
of EVs and storage units are expected to be consumer-owned. The presence of such new components in the power
grid presents an interesting opportunity for deploying evolved
energy trading mechanisms [41], [57]–[63].
In particular, the ability of consumers to store energy or
possibly feed energy back into the grid, via either their storage
units or their owned renewable sources, will pave the way
towards a large-scale exchange of energy within the grid. For
example, on the one hand, consumers with a surplus of energy
may decide to send this energy back into the grid, to improve
grid regulation and reap some possible monetary benefits. On
the other hand, the power company may utilize EVs or other
storage units as a means to store energy reserves or to regulate
the grid frequency [43]–[45], [52]–[56]. Indeed, if properly
managed, the charging and discharging behavior of EVs and
storage units can yield significant technological and economic
benefits for power companies and consumers alike [8], [38]–
[66].
In a nutshell, the effective integration and exploitation of
storage units and consumer-owned renewable sources will
be an essential property of the future smart grid. How to
efficiently exploit such devices to improve the delivery, production, and management of smart grid energy is thus an
important problem that must be addressed.
1) Challenges: The challenges of integrating energy storage and renewable sources are numerous. From a power system point of view, the intermittent nature of renewable sources
will require fundamentally new ways to operate energy production and generation in the grid. How to develop stochastic
optimization algorithms that can adapt to this intermittency
is thus a key challenge. In addition, the foreseen large-scale
deployment of EVs will present an unprecedented increase
in the load on the grid. Here, effective DSM mechanisms, as
those discussed in the previous section, which can manage the
EVs load will be needed. Nonetheless, storage units and EVs
also provide an opportunity for the power grid to store any
excess or mismatch in the generation and demand, so as to
regulate the overall grid operation.
More relevant to this article are the challenges pertaining
to the use of storage units and consumer-owned renewables
within energy trading markets. In particular, it is foreseen
that local markets in which consumers may directly exchange
energy with one another or with the grid can be set up in the
future smart grid. Such markets are enabled by the presence of
storage units, EVs, and consumer-owned generation sources.
Important challenges here include: 1) devising economic
mechanisms that incentivize consumers, power companies,
and energy providers to set up such markets, 2) analyzing
the impact of such localized markets on grid operation,
and 3) integrating such energy trading within existing DSM
mechanisms.
2) State-of-the-art: Integrating storage units and renewable
energy sources has been a topic of significant interest to the
smart grid community in recent years [8], [38]–[66], [75]–
[84]. Beyond the works that focus primarily on the power
system operation side [75]–[84], there has been a number of
interesting works that investigate the usage of storage, EVs,
and renewables, to shape the overall grid load and to establish
energy trading markets [8], [38]–[66].
The earliest work in this area is [38], in which a game-theoretic framework is developed to analyze how consumers equipped with storage units can smartly decide on when
to buy or store energy, in a local smart grid area. The
presented scheme is essentially a modified DSM protocol
which explicitly factors in the presence of storage units. The
market price is assumed to be pre-determined using an auction
mechanism and, thus, the work does not account for dynamic
pricing. Simulation results presented in [8] show that, based
on empirical data from the UK market, the use of storage at
consumer premises along with game-theoretic DSM protocol
can help in reducing the peak demand which also leads to
reduced costs and carbon emissions. The results also analyze
the benefit of storage and how it impacts the social welfare
of the system thus highlighting the possible practical impact
of storage unit integration.
One of the most interesting works that follows in this
direction is presented in [39]. In this contribution, the authors
study a DSM-like scheme in which users can be endowed with
storage units and renewable sources. In contrast to traditional
DSM schemes in which users only decide whether or not to
purchase energy from the grid, the model in [39] enables the
users to decide on whether to purchase, produce, or store
energy in their batteries. By expanding the decision space
of the users, it is shown, using a game-theoretic approach,
that a smart exploration of the storage and energy production
options can reduce the overall aggregate load on the grid
while also providing monetary savings to end-users, under
the assumption of rational decision making. Such a study
thus motivates the penetration of consumer-owned storage and
energy production units.
The effective integration of EVs into a smart grid system is
studied in [40] using a game-theoretic framework that models
the interactions between the grid operator and the EVs. The
primary goal is to analyze how EVs can provide ancillary
services to the grid, once a proper market model between EVs
and the grid is set up. The basic idea is to use a smart pricing
policy to exploit the EVs for regulating the grid frequency.
The idea is to study how EVs (and their owners) can decide
on whether to charge, discharge, or remain idle, in a way to
optimize the grid frequency regulation while benefiting both
consumers and the grid operator. On the one hand, using such
a scheme consumers can obtain additional income while, on
the other hand, the grid can achieve the required frequency
regulation command signal.
The impact of energy trading between owners of EVs is
further analyzed in our earlier work in [41]. In this work,
a local market in which EV owners can decide on whether
or not to sell a portion of their stored energy to the smart
grid is studied. Using an auction and a game-theoretic model,
we have shown that, when EV owners act strategically, they
are able to reap significant benefits from selling their surplus
of stored energy to potential buyers in the smart grid. These
benefits are reflected in terms of revenues that can be viewed
as either direct monetary gains or as coupons or other offers
provided from the grid owner to active participants.
The impact of distributed renewable energy sources on
local energy trading markets is analyzed in [42]. The authors
essentially propose the use of an aggregator of distributed
energy resources allowing the smart grid to engage in an open
energy exchange market. The developed mechanism tightly
integrates classical DSM ideas with the use of an active
management scheme at the end-users side to allow a better
utilization of the renewable energy sources. Overall, the results
show that a smart exploration of possibly consumer-owned
energy sources can lead to a sustainable source of energy and
a reduction in the consumers’ energy consumption cost.
Beyond the aforementioned contributions, the exploitation
of storage and renewable sources at consumer premises has
been studied in a broad range of literature [43]–[66]. These
works mainly develop variants of the discussed energy trading
mechanisms and establish clearly that the use of mathematical
optimization frameworks such as game theory to manage the
way in which storage devices, EVs, and consumer-owned
energy sources are integrated in the smart grid can bring
in substantial technological, economic, and environmental
benefits to the smart grid players.
3) Summary and Remarks: The use of energy trading
between consumer-owned devices will indisputably be an
important feature of the smart grid. As demonstrated in the
abundant literature, the associated gains, both from a technical
(energy reduction, sustainable generation) and an economic
(reduced costs on consumers and providers) point of view,
are substantial. Yet, despite these established results, beyond
some small deployments of EVs and renewable energy sources
in Europe and some areas of the United States [85]–[92],
the large-scale introduction of such consumer-owned devices
remains modest.
Similar to the DSM case, one of the primary limitations
of existing models is that they often do not explicitly factor
in realistic risk considerations of both consumers and power
companies. For instance, even though an open, energy trading
market can yield economic and technological benefits, power
companies may remain risk averse and continue to rely on
traditional, largely controlled markets. Similarly, despite the
prospective economic savings and environmental benefits of
owning renewable sources or EVs, consumers may still be
reluctant to change from their current, effective technologies.
Therefore, when analyzing energy trading, one must explicitly factor in such risk considerations and their impact on the
overall operation of smart energy management mechanisms.
III. PROSPECT THEORY FOR THE SMART GRID
A. Introduction and Motivation
As demonstrated in Section II, the existing literature on
smart energy management in the smart grid has established
significant technological, economic, and environmental benefits for features such as DSM and energy trading. Yet, the
real-world deployment of such mechanisms remains largely
below expectations. One of the hurdles facing the real-world
implementation of the developed DSM and energy management mechanisms is the lack of a mathematical and empirical
framework that can capture the realistic behavioral patterns of
consumers and power companies.
Indeed, most existing works [5]–[66], [75]–[84] still assume that consumers and power companies are rational and
will abide by the objective decision making rules that are
derived via frameworks such as game theory or optimization.
However, in practice, empirical studies have shown that, in
uncertain and risky situations, human players may not act in
accordance with the rational behavior established by decision
making frameworks such as game theory [93]–[97]. Given
that most foreseen smart grid features are consumer-centric,
the “human” factor will undoubtedly play an instrumental role
in the success or failure of advanced smart grid features. Thus,
uncertainty and risk factors must be properly modeled in any
DSM or energy management scheme. Examples of risk in
the smart grid include the continuous reliance of operators on
traditional markets and the interdependence of the decisions
between consumers. In terms of uncertainty, when dealing
with an energy management scheme, consumers are faced with
uncertain outcomes due to a lack of transparency in explaining
the rules of dynamic pricing or due to the presence of
stochastic elements such as stochastic generation or the uncertain availability of EVs and energy storage units.
Thus, expediting the smart grid adoption requires new
approaches for analyzing the often irrational and non-conforming nature of the energy management decisions of human players under such risk and uncertainty. Such decision-making factors that deviate from the objective, rational behavior assumed in existing works [5], [6], [8]–[66], [75]–[84] can be analyzed by the Nobel-prize-winning framework of prospect theory (PT) [93]–[95], [98]. Originally conceived
for modeling decisions during monetary transactions such
as lottery outcomes, PT has made its way into many applications [93]–[100], due to the universal applicability of
its concepts. In essence, PT provides one with mathematical
tools to understand how decision making, in real life, can
deviate from the tenets of expected utility theory (EUT), a
conventional game-theoretic notion which is guided strictly by objective notions of gains and losses, player rationality, and conformity to pre-determined decision-making rules that are unaffected by real-life perceptions of benefits and risk.
Illustrative Example: Essentially, PT notions have been
developed to understand how consumers, when faced with
uncertainty of outcome and risky decisions, will behave in real life. Suppose that an efficient energy management system
is constructed for individual home owners to both buy and sell
power on the grid and a dynamic pricing DSM mechanism is
available to shift consumption to non-peak periods. Furthermore, suppose that it has been proven that under PT as well as
conventional game theory, stable prices can be found, so that
the smart grid could ultimately result in more efficient power
consumption. Under rational analysis, one might believe that, once these conditions were satisfied, offering the public the opportunity to buy and sell power would result in widespread participation and that an optimal pricing equilibrium would soon
be reached. However, an important implication of PT is that
these conditions are insufficient to guarantee such a beneficial
result.
One important implication of PT is that the preferred choice
between a pair of uncertain alternatives is not only determined
by the values of the two alternatives but also by how the choice
is stated. Consider the following example, which is unnatural
only in that the alternatives are designed to have equal value,
so that a preference is clearly determined by the statement of
the choice. A power company wishes to entice its consumers
to abandon buying power at a fixed rate and instead join a
system where they buy and sell power at variable rates. Here
are two ways the alternatives may be presented in a letter to
a consumer:
• The Gain Scenario: Your average monthly utility bill is
now $450 a month. Under our new smart system, your
bill will show a debit of $500 a month. In addition you
may choose between:
a) A 50% chance of a credit of $100 if you join the smart
DSM scheme, or
b) A 100% chance of a credit of $50 that will keep your
bill the same.
• The Loss Scenario: Your average monthly utility bill is
now $450 a month. Under our new smart system, your
bill will show a credit of $400 a month. In addition, you
may choose between:
c) A 50% chance of a bill for $100 if you join the smart
DSM scheme, or
d) A 100% chance of a bill for $50 that will keep your
bill the same.
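A quick arithmetic check, using the figures above, confirms the equivalence: in the Gain scenario, alternative a) yields an expected bill of 500 − 0.5 × 100 = $450 (a gamble between $400 and $500), while alternative b) yields a certain bill of 500 − 50 = $450; in the Loss scenario, alternative c) yields an expected bill of 400 + 0.5 × 100 = $450 (the same gamble) and alternative d) a certain bill of 400 + 50 = $450.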
In fact, the Gain and Loss scenarios describe the identical alternatives in different words. Alternatives a) and c) are identical and alternatives b) and d) are identical. Nevertheless, based on theoretical and empirical foundations, PT predicts that more people will prefer alternative b) to alternative a), because a certain gain is preferred to a 50% chance at a double gain, but will also prefer alternative c) to alternative d), because a 50% chance of a loss is preferred to a certain, albeit smaller, loss. This prediction has been confirmed in [93] and [101].

The point of this example is not just that the level of participation in smart grid services depends on how it is presented to the public. The point is that important behavioral factors outside of the technical specifications of the smart grid will determine the choices of participants, and giving them the opportunity to perform optimally does not guarantee that they will. In other words, people cannot be counted on to always choose optimally among alternatives if merely stating the alternatives differently influences their choices. This holds true even if such alternatives, as discussed in Section II, have immense technological and environmental benefits. Indeed, in [102], Kahneman suggests that people behave non-optimally when buying and selling stocks, selling rising stocks too soon to lock in gains and hanging on to losing stocks too long to resist locking in a loss. If people behave non-optimally in the purchase and sale of securities, the default assumption is that they will perform in the same non-optimal manner in the purchase and sale of power and commodities, especially when people are already familiar with the incumbent pricing and energy management mechanism.

The obvious solution to the problem of human behavior is to use prospect-theoretic notions to refine existing game-theoretic mechanisms and guide the way in which optimal strategic decisions are derived, as well as to improve the presentation of information to buyers and sellers in the grid to encourage optimal behavior in DSM and energy trading. To provide further insights on the mathematical machinery underlying PT, in the next subsection, we provide an introduction to the basics of the framework.

B. Basics of Prospect Theory

Prospect theory encompasses a broad range of techniques and tools to account for realistic consumer behavior during decision making processes [94]–[98], [101], [102]. The basic underlying idea is that decision makers, in real life, will have subjective perceptions of losses, gains, and their competitive environment. For example, instead of viewing each other's actions (e.g., load shifting schedules) objectively as in classical game theory, players could have different subjective assessments about each other's behavior, which, in turn, can lead to unexpected, irrational decisions. For example, in DSM, even though rational behavior dictates that consumers follow the load shifting mechanisms of the power company, some consumers may turn on certain appliances at unexpected times if they are unsure whether the participation level is high enough to yield economic benefits, which will hinder the performance of the DSM scheme. A wide spread of such unconventional actions can thus be disruptive to any energy management scheme. In such situations, PT provides solid analytical tools that directly address how these choices are framed and evaluated, given the subjective observations of players in the decision-making process.

Fig. 2. Illustration of the prospect-theoretic weighting effect: how objective probabilities are viewed subjectively by human participants. The parameter α determines how far the behavior is from the fully rational case.
1) Subjective Actions – The Weighting Effect: The first
important PT notion is the so-called weighting effect. In
particular, in PT [93]–[98], it is observed that in real-life
decision-making, people tend to subjectively weight uncertain
outcomes. In particular, in energy management mechanisms,
the frequency with which a consumer chooses a certain
strategy, say a certain schedule of appliances or a certain
storage pattern, depends on how other consumers make their
own choices. The dependence stems from many factors.
For example, in dynamic pricing schemes, the actual price
announced by a power company depends on the entire load
of the consumers. Therefore, the decision of a consumer will subsequently depend on the decisions of others. Indeed, when faced with a given smart grid scenario, consumers may act differently over time due to the interdependence of their actions and their unpredictability, and, thus, a probabilistic model for
decision making is suitable to capture this uncertainty. Such
uncertainty can stem not only from the individual decisions
of consumers but also from other smart grid factors (e.g.,
uncertainty of renewable energy).
In classical game-theoretic smart grid schemes [5], [6], [8]–
[66], [75]–[84], consumer interdependence is captured via
the notions of expected utility theory in which a consumer
computes an expected value of its achieved gains or losses,
under the observation of an objective probability of choice by
other consumers. In contrast, using the weighting effect, PT
allows one to capture each consumer’s subjective evaluation
on the probabilistic strategies of its opponents. Thus, under
PT, instead of objectively observing the information given by
the other players and computing a classical expected value
for the utility, each consumer perceives a weighted version of
its observation on the other actions. The weighting is used to
express a “distorted” view that a given consumer or player can
have on the actions of others. PT studies have shown that most
people overweight low probability outcomes and underweight
high probability outcomes.
To illustrate the weighting effect, in Fig. 2, we show an example of a weighting function, known as the Prelec function [103], which maps an objective probability into a subjective, weighted probability. The mapping is controlled by a parameter α which
utility. The mapping is controlled by a parameter α which
quantifies the level of subjectivity in the observation. For
α = 1, we have the fully rational case, while for α close to
0 we get the fully irrational case. Within a smart grid setting,
such a weighting can have a cascading effect on the way in
which an energy management scheme works. For example,
in a DSM context, a highly irrational consumer will have a
largely distorted view on how other consumers behave. In
turn, this consumer will become more risk averse or more
risk seeking, depending on how the opponents actions impact
the dynamic pricing mechanism. As a result, this consumer
will not follow the actions recommended by classical, rational
mechanisms, but, instead will take an unexpected action
which, in turn, will yield unexpected DSM results.
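For readers who wish to experiment with the weighting effect, the following is a minimal sketch of the one-parameter Prelec function plotted in Fig. 2; the functional form w(p) = exp(−(−ln p)^α) is standard in the PT literature [103], while the probabilities evaluated below are purely illustrative.

import math

def prelec_weight(p, alpha):
    """Map an objective probability p in (0, 1) to its subjective weight."""
    if p <= 0.0 or p >= 1.0:
        return float(p)  # the endpoints 0 and 1 are left undistorted
    return math.exp(-((-math.log(p)) ** alpha))

# alpha = 1 recovers the rational (EUT) case, where w(p) = p.
for p in (0.05, 0.5, 0.95):
    print(p, round(prelec_weight(p, 1.0), 3), round(prelec_weight(p, 0.25), 3))
# The output illustrates that, for alpha < 1, low probabilities are
# overweighted (0.05 -> ~0.27) and high ones underweighted (0.95 -> ~0.62).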
Indeed, how to model such a weighting effect and how to
integrate it into realistic energy management mechanisms is
an important topic for research. In addition, how to design
weighting functions that are tailored towards the smart grid
and that can work in realistic power system settings is a
key challenge. In Section IV, we will discuss with specific
examples how weighting modifies the results of energy trading
and management protocols.
2) Subjective Perceptions of Utility Functions – The Framing Effect: Another important idea brought forward by PT is
the notion of utility framing. In engineering designs, one often
defines mathematically rigorous objective (utility) functions
that are used to optimize a certain metric of the system. For
example, when dealing with an optimal energy generation
problem, a smart grid system must find the maximum energy
output that can meet or match the demand. In such a case, it
is sound to assume that the function that must be optimized
is based on an objective metric, the energy in this case.
However, when dealing with smart energy management
mechanisms involving human players, the idea of an objective
metric for evaluating utility functions might not be a reasonable assumption. For instance, each individual has a different
perception of the economic gains from a certain DSM scheme.
For example, a saving of $10 per month may not seem like a
significant gain for a relatively wealthy consumer. In contrast, a
poor consumer might view this amount as a highly significant
reduction. Clearly, the objective measure of $10, can be
viewed differently by different consumers.
In PT, such subjective perceptions of utility functions
are captured via the idea of framing or reference points.
In essence, each individual frames its gains or losses with
respect to a possibly different reference point. Returning to the aforementioned example, the wealthy consumer will frame the $10 with respect to its initial wealth, which could be in the millions, and, thus, this consumer views the $10 as
insignificant. In contrast, the poor consumer might have a
wealth close to 0 and, thus, when framing the $10 with respect
to this reference point, the gains are viewed as significant. One popular way to capture such framing effects is by observing that losses loom larger than gains, and, thus, PT provides a transformation that maps objective utility functions into so-called subjective value functions - concave in gains, convex in losses - over the possible outcomes. These gains and losses are measured with respect to a reference point that need not be 0 and that may differ between players. An illustrative example is shown in Fig. 3 for one typical PT value function from [93], assuming a zero reference point for gains/losses.

Fig. 3. Illustration of the prospect-theoretic framing effect: how objective utilities are viewed subjectively by human participants. The utility function value changes depending on a certain reference point that highlights the individual perceptions of gains and losses.
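For concreteness, one widely used parametric value function from the PT literature (one possible instantiation of the curve in Fig. 3, not necessarily the exact form used to generate it) is

v(u) = (u − u0)^β for u ≥ u0, and v(u) = −λ(u0 − u)^β for u < u0,

where u0 is the reference point, 0 < β < 1 produces the concave-in-gains, convex-in-losses shape, and λ > 1 captures loss aversion, i.e., the extent to which losses loom larger than gains; typical empirical estimates reported in the PT literature are β ≈ 0.88 and λ ≈ 2.25.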
Naturally, as consumers change the way in which they
compute their utilities, their overall decision making processes
will deviate from conventional, rational thought. Indeed, when applying PT ideas to game-theoretic settings such as in [98], it is shown that the objective, EUT-based results no longer hold. For example, in some cases, it is shown that the choice of a reference point can impact whether or not a certain game admits an equilibrium solution. Clearly, when one
decision maker changes the way in which it evaluates its
objective function, the overall operation of any optimization mechanism will be significantly affected.
In the smart grid, we can envision many situations in
which to incorporate framing effects. These situations need not be purely economic. For example, during winter,
consumers may perceive less prospective gain from turning
off high-capacity loads (such as heaters) at night than during
day time. How this “frame of reference” transforms the
utility will fundamentally change the outcome of an energy
management mechanism that is based on classical objective
notions. Moreover, in the smart grid, such reference points
and framing effects may change over time, space, and even
demographics. Clearly, properly designing and developing
framing notions in smart grid DSM is an important direction that must be investigated to better understand their impact on energy management and trading mechanisms.
Having defined the two key effects of PT, in the next section, we discuss, in detail, two energy management scenarios to highlight, as an example, the impact of weighting on smart grid protocols.

Fig. 4. Impact of the weighting effect on consumer behavior in a two-player charging/discharging setting.
IV. PROSPECT-THEORETIC SMART GRID APPLICATIONS
A. Example 1: Charging and Discharging of Consumer-Owned Energy Storage
To show the impact of prospect-theoretic considerations in
smart grid design, we first study a model in which consumers
are equipped with storage units and must decide on how to
manage the charging and discharging of their storage units,
depending on the network state and the pricing incentives.
This model is based on our recent work in [104].
In particular, we consider a grid consisting of multiple
consumers who own storage units. For illustration purposes,
we assume the case in which only two consumers are “active
participants” while all other consumers constitute a passive
load on the grid. Each consumer has a storage unit which
holds a certain initial amount of energy stored. The power
company offers these active consumers the option to either
charge their storage unit and, thus, act as a load on the grid,
or, instead to actively feed back and sell energy to the grid.
Note that any action taken by either of the two consumers affects both the power system, as it impacts the overall needed generation, and the prices set by the utility company. The
choices of both consumers are also coupled, since the choice
of acting as load or source, will impact the overall generation
and distribution of energy in the grid.
In this setting, we assume that the consumers need to make
a choice between charging or discharging while optimizing a
utility function that captures two properties: a) the economic
and technical benefits of storing or selling energy and b) the
power system regulatory penalties. Indeed, although the power
company allows the two active consumers to individually manage their storage units, it still requires the generation to remain within desired operating conditions, which are measured relative to an initial operating point.
Fig. 5. Impact of the weighting effect on the revenues of the power company.
We formulate and investigate this setting using a PT-based,
classical noncooperative game and we study the equilibrium
solution of the game. The equilibrium is essentially a point of
the system in which neither of the two consumers can improve
its utility by changing the frequency with which it chooses to
charge or discharge its storage unit. We analyze the results
under both classical EUT and PT considerations. For PT, we
first consider the weighting effect: each consumer views a
subjective observation on the charging/discharging behavior of
its opponent in accordance with a Prelec weighting function
such as in Fig. 2.
We use a numerical example to show the impact of PT
considerations on the operation of the system. We consider
a standard 4-bus power system with 2 active consumers.
The loads and surpluses of active consumer 1 are, respectively, 20 kWh and 10 kWh, while those of consumer 2
are, respectively, 15 kWh and 5 kWh. Fig. 4 shows the
impact of the unit selling price b that is used by the two
consumers when discharging energy to the grid. This price
is assumed to be equal for both consumers. Clearly, as b
increases, both consumers have more incentive to sell than
to buy, as the gains start outweighing the regulation penalty.
More interestingly, Fig. 4 shows that, for both consumers,
PT behavior significantly differs from the rational EUT case.
For consumer 2, below 0.07$ per kWh, the probability of
buying energy at the PT equilibrium is much higher than
under EUT. This implies that for low gains and high risks, the
consumer follows a conservative, risk-averse strategy under
PT and is less interested in reaping the gains of selling energy
compared to EUT. However, as the unit selling price crosses
the threshold, the probability that consumer 2 acts as load
under PT becomes much smaller than EUT. Thus, once the
selling benefits are significant (risks decrease), PT predicts
that consumer 2 will start selling more aggressively than
in EUT. Analogous behavior is seen for consumer 1 with
threshold 0.045$ per kWh.
Fig. 5 shows the total power company revenues, collected
from the two consumers when they charge their storage unit,
as the unit selling price increases.

Fig. 6. Expected grid load when the consumers actively participate with their storage devices under rational EUT and irrational PT behavior.

Fig. 5 shows that, as b increases, the total revenues decrease, as the consumers begin to
sell more and buy less. Note that, here, the power company’s
revenues pertain to only those revenues that are collected
from the two consumers. This does not include any additional
sources of revenues that the power company might collect
(e.g., taking a percentage on the profits of the consumers).
Clearly, deviations from EUT can have major impacts on
energy management in a smart grid setting. Consider the case
in which the Prelec rationality factor is set to α = 0.25. When
b is below 0.06$ per kWh, under PT, the total revenue is
much higher than predicted by EUT. In contrast, if consumers
set prices greater than 0.06$ per kWh, PT predicts that the
revenues will be much smaller than in EUT. It is thus more
beneficial for the company to regulate the consumers’ unit
selling price to be below 0.06$ per kWh. Fig. 5 also shows
that when the company adopts EUT to regulate the consumers’
selling price, it can lose revenues due to real-life consumer
behavior. Fig. 5 also shows that, as α increases, the consumers
behave more in line with EUT. However, even for a relatively
high value, α = 0.65, the company revenues under PT still deviate non-negligibly from those under EUT.
We further analyze how consumer behavior impacts grid
operation by showing the average expected load on the grid
in Fig. 6, as the company varies its minimum price1. Fig. 6
shows that the expected load on the system will significantly
change between PT and EUT. For PT, when the unit price is
small, consumers are less interested in selling their stored energy. However, as the price crosses a threshold, the consumers
will start selling more aggressively, rendering the average load
smaller than expected. Fig. 6 provides guidelines for realistic
DSM with storage. For example, assume the company wants to
increase its price to drive consumers to sell more and reduce
their load to about 10 kWh. Based on classical EUT-based
schemes, the company has to increase the price to 0.078$
per kWh. In real life, because consumers behave subjectively
under risk, the power company can increase its unit price to
only 0.06$ per kWh and obtain the desired load reduction. Also, if the power company wants to reduce its price to sustain a load of 23 kWh from the two consumers, based on EUT, it must offer a price of 0.035$ per kWh. In contrast, PT shows that 0.047$ per kWh will achieve the same impact yet yield more profits.

1 This pertains to the rates that the power company will use to directly charge its consumers.

Fig. 7. The total utility under both EUT and PT as the reference point u0 varies.
Next, we consider the same example in the presence of
framing effects, based on our work in [105]. Here, we assume
that both consumers frame their utility with respect to a given
reference point that reflects how these consumers evaluate
the economic gains or losses from charging or discharging
their storage unit. To incorporate framing, we adopt the
classical model of [98] in which, under framing, the utility
function becomes concave in gains and convex in losses,
as losses loom larger than gains. To assess the impact of
framing, in simulations, the reference point is chosen to
coincide with the case in which consumers discharge/sell
energy at the same price that is announced by the power
company. In Fig. 7, we show the total expected utility (sum
for both consumers) under both EUT and PT, as the reference
point varies. In this figure, we choose the same reference
point for both consumers and we use typical values for γ, a parameter that represents loss aversion, i.e., how a consumer weighs its losses against its gains. First, we
can see that the expected PT utility will decrease when the
reference point value increases. In essence, the reference point
is subtracted from the EUT utility to determine the exact
values of gains and losses. For a high reference point (i.e.,
electricity price), PT consumers will value their stored energy
more than in cases in which the reference point is smaller.
Thus, using the same selling/discharging price under EUT and
PT, the payoff obtained by consumers under PT is smaller
than under EUT, due to the fact that, in PT, the reference
point reduces the gains from selling energy. Second, this figure
also shows that the loss aversion parameter γ can have different impacts on the PT utility. In particular, when
γ increases from 1 to 2, the losses viewed by PT consumers
will increase. Thus, with an increasing γ, a PT consumer will start valuing its gains less than in the EUT case, which leads it to adopt an increasingly conservative charging strategy. Additional
results on framing can be found in [105].
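To make the role of u0 and γ concrete, here is a minimal sketch of a framing transform of the kind described above, in the spirit of [98]; the parameter values are illustrative and are not those used to produce Fig. 7.

def framed_value(u_eut, u0=0.5, gamma=2.0, beta=0.88):
    """Reframe an EUT payoff relative to reference point u0:
    concave in gains, convex in losses, with losses scaled by gamma."""
    if u_eut >= u0:
        return (u_eut - u0) ** beta
    return -gamma * (u0 - u_eut) ** beta

for u in (-1.0, 0.0, 0.5, 1.0, 2.0):
    print(u, round(framed_value(u), 3))
# Raising u0 pushes more outcomes into the loss region, and raising gamma
# deepens how those losses are felt, consistent with the downward trend of
# the PT curves in Fig. 7 as the reference point and gamma increase.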
In summary, ignoring consumer behavior in storage-based
energy management can lead to unexpected results as shown
here for a basic setting. These results can be undesirable
from both an economic and technological perspective. Therefore, building on the presented model, one can design more
elaborate and realistic storage management mechanisms that
account for PT-based notions of subjective perceptions. In
addition, the power company can utilize these results to
properly shape its pricing schemes.
B. Example 2: Demand-side Management under Prospect-Theoretic Considerations
2 The reader interested in the mathematical formulation is referred to our
work in [106].
Fig. 8. The expected nonparticipating load for the 6 consumer game under
both EUT and PT over 24 hours, when consumers have different values of
α.
Another important application for PT is classical demand-side management models. Here, we consider a grid in which
consumers are given the opportunity to decide on whether or
not to participate in DSM. The DSM scheme considered is
one in which the participating consumer would shift its load
over time, in order to reduce the overall peak hour load. The
actual DSM process is in line with classical game-theoretic
settings such as those in [5].
However, in our model, it is assumed that consumers also have the choice of not participating in the DSM at all. In addition, consumers can choose the time at which they will begin their participation. The decisions of the consumers
are driven by the goal of minimizing the overall electricity bill
while maintaining their desired load to operate their required
appliances2 . One important feature of the considered model is
that every load shift by a given consumer will automatically
impact the way in which prices are set by the power company.
Thus, this interdependence in decision making will naturally
warrant a game-theoretic approach to modeling the decision
making.
In essence, we have a model in which every consumer can
decide on the time at which it starts to participate in DSM.
Alternatively, the consumer may decide not to participate
at all. We can then analyze the frequency with which a
consumer will participate or not and we analyze the impact
of this participation on the grid by deriving equilibrium
conditions [106]. This analysis is done for both the rational
and irrational case. For PT, we consider mainly a weighting
effect.
To gain more insights on the impact of weighting on DSM
participation, we consider a numerical validation using a realistic load profile from [107], which represents consumers' initial demands during Spring 2013 at the Miami International Airport. In these numerical examples, each consumer can
choose a starting time to participate in DSM from the time
period between 18:00 and 20:00. Alternatively, the consumer
can decide not to participate.
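Before discussing the results, a stylized sketch (deliberately much simpler than the formulation of [106]) shows how the weighting effect alone can flip a participation decision; the benefit B, cost C, and probability q below are hypothetical numbers.

import math

def prelec(p, alpha):
    return math.exp(-((-math.log(p)) ** alpha)) if 0.0 < p < 1.0 else float(p)

B = 10.0  # benefit if enough other consumers also participate in DSM
C = 6.0   # inconvenience cost of shifting load, incurred regardless

def participates(q_success, alpha):
    """Join DSM iff the subjectively weighted expected benefit exceeds the cost."""
    return prelec(q_success, alpha) * B > C

q = 0.75  # objective probability that participation pays off
for alpha in (1.0, 0.5, 0.25, 0.1):
    print(alpha, participates(q, alpha))
# The rational consumer (alpha = 1) joins, since 0.75 * 10 > 6, while
# consumers with alpha <= 0.5 underweight this fairly likely success and
# opt out -- mirroring the nonparticipation gap observed in Figs. 8 and 9.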
Fig. 9. The impact of the rationality parameter α on the expected nonparticipating load of all consumers at 19:00.

In Fig. 8, we show the expected nonparticipating load profile using different values for the Prelec rationality parameters α. In this example, each consumer has a different
subjective perception of other consumers and, thus, has a
different rationality parameter. In particular, we choose α =
[0.5 0.5 0.2 0.1 0.1 0.1] for the 6 considered consumers. This
implies that consumers 1 and 2 are more rational than consumers 3-6, while consumers 4-6 are the least rational. In this figure,
we can see that, when some consumers have a very irrational
observation on their opponents, the nonparticipating load between 21:00 and 23:00 will be higher under PT than under EUT. This implies that, in reality, if some consumers deviate significantly
from their rational strategies (for example, a consumer decides
not to assist the power company in load shifting despite the
economic benefit), the power company will not be able to
shift the total load predicted by the rational, objective model.
Thus, this simple, yet insightful example shows that one must
better understand how consumers behave (here reflected by
the rationality parameter) to better design the dynamic pricing
and DSM scheme.
We further analyze the impact of the consumer rationality
on DSM by showing, in Fig. 9, the expected nonparticipating
load at a chosen time of the day which is here selected to
be 19:00 for illustrative purposes. Here, it is assumed that
all consumers have a similar level of rationality. In Fig. 9,
we observe that, under EUT, the expected nonparticipating
load is 65.7% of the total load. In contrast, under PT, the
nonparticipating load is less than under EUT when α > 0.56, i.e., when consumers are fairly rational. Thus, when all consumers share a rationality level α > 0.56, the power company can in practice shift more load than an EUT scheme predicts. Clearly, there exists a rationality threshold, such that, if
α is greater (smaller) than the threshold, PT consumers will
have lower (higher) nonparticipating loads than in the EUT case.
A large value of α, which maps to a small deviation from
EUT, yields increased competition, thus raising the costs
to the consumers. Consequently, the consumers will become
risk seeking and more apt to shift their loads and decrease
their payments. Thus, the increasing PT costs will force the
majority to shift more loads, compared to EUT. In contrast,
a relatively small rationality parameter or a large deviation
from EUT, will lead to highly irrational behavior from the
consumers which will lead to increasingly high competition
and decreasing participation, as consumers become extremely
risk averse and unwilling to participate in the DSM process.
From Fig. 9, we can infer that one of the reasons for
which DSM schemes might have not been adopted widely
in practice is due to a severely irrational behavior observed
from the consumers. Indeed, as per Fig. 9, one can see
that a small deviation from EUT (slight irrationality) may
in fact be beneficial for the power company as it increases
consumers’ participation. In contrast, a significant deviation
from EUT will inevitably lead to highly risk averse behavior
which will prevent most consumers from participating; thus
yielding detrimental results for the grid and preventing the
power company from reaping the benefits of DSM.
Through this simple, yet realistic DSM example, we are
able to see that by only considering the weighting effect of
PT, the results of DSM can significantly change. This is due
to the fact that the PT model allows to better capture the
way in which consumers behave in practice. Consequently,
this motivates a deeper investigation of the role of human
decision making in practical DSM mechanisms.
C. Choice of PT Parameters
In this section, we have brought forward several key results
that show how realistic consumer behavior can impact smart
grid energy management. However, in these models for assessing the rationality (e.g., the parameter α) or risk aversion of
the consumers, we have adopted PT models of risk that were
conceived in the economics literature such as in [93]–[98],
[103]. Naturally, to understand whether those models map
directly into the smart grid, there is a need to run analogous
behavioral experiments, with real-world smart grid consumers,
to generate new empirical models for PT that can be used to
further enhance the results of this section. Such experiments
can be based on both qualitative surveys and on real-world
simulations in which grid consumers (e.g., homeowners or
factories) are solicited to participate in simulated experiments
on grid scenarios that pertain to DSM, storage, or other
consumer-centric features. Such experiments can mimic the
gain and loss scenarios presented in Section III-A. Using such
behavioral experiments, we can refine the choice of the various
PT parameters and we can generate more advanced models
and results.
V. C ONCLUSIONS
AND
F UTURE O UTLOOK
Realizing the vision of a smart, consumer-centric grid is
without any doubt strongly dependent on gathering a better
understanding on the impact and role of consumer behavior
in energy management processes such as demand-side management, demand response, or energy trading. In this article,
we have shown that the use of prospect theory, a powerful
framework from operations research and psychology, can
provide the first step towards better understanding the impact
of consumer behavior on smart grid operation. Indeed, our
preliminary investigations have shown that consumer-related
deviations from conventional, rational game-theoretic energy
management mechanisms can be one of the primary reasons
behind the modest adoption of such mechanisms in practical
smart grid systems.
Nonetheless, in this article, we have only scratched the
surface of this emerging area in smart grid research. Indeed,
the study of consumer behavior in the smart grid requires
significant advances to frameworks such as PT. Many future
directions can be envisioned. For example, our results so far
have solely relied on the analysis of the weighting effect.
However, we anticipate that the use of both framing and
weighting can provide deeper insights into how DSM and
energy management can operate in the smart grid. Indeed, the
fact that smart grid consumers will have time-dependent reference points while measuring their utility functions provides
a very interesting and promising research direction.
In addition, our study thus far has focused primarily on
economic-oriented models, in which the impact on the power
system is restricted to load management. Instead, one can
envision the use of PT-based behavioral model to better
understand how the overall regulation of the power system
operation can be modified due to the uncertainty and risk
introduced by consumer-based decision making. Such studies
can also be extended to explicitly account for communication
and security considerations, both of which can involve enduser decisions.
Another important direction for future work is to explicitly
account for renewable energy sources. In fact, renewable
sources will introduce two types of uncertainty: a) uncertainty
due to consumer decisions, as captured in the models of this
paper and b) uncertainty due to nature and other environmental
factors that affect renewable generation. Here, it is of interest
to apply PT models to capture both types of uncertainty. Some
early works on PT such as in [98] have shown that when
both weighting effects and utility uncertainty are considered,
one can expect significant deviation from conventional rational
results. How such deviations can be applied in a smart grid
context remains an open problem.
Moreover, the recent surge in the application of big data
analytics in various smart grid scenarios will provide an
important avenue to explore the differences between EUT
and PT. These data that are being constantly collected can,
in the future, provide an important source for corroborating
the intuition provided by PT while also providing important
information to derive more realistic PT models.
Last but not least, as mentioned previously, one important
challenge is the lack of any large-scale data on how buyers
and sellers will in reality behave in the still speculative smart
grid market. Though PT provides broad hints about the factors
that may affect choices, the tests of it have been so far
restricted to basic models such as those presented here, which
are still somewhat distant from the context of a large-scale
practical smart grid to be determinative. In other areas, the
experiments confirming PT have overwhelmingly been single
session experiments in which naive participants make choices
in speculative scenarios of no consequence to themselves.
This is different from regular participation in a smart grid
in which their choices have direct financial consequences on
themselves. People have the ability to learn from experience
and this is known to affect their choices in some contexts. For
example, the endowment effect is that amateur collectors will
not sell an item they already own for the price they would be
willing to pay for it [108]. However, professional merchants
do not show an endowment effect [109]. Fortunately, current
internet technology makes it possible to simulate the smart
grid and systematically evaluate consumer behavior in it
under different conditions. Conducting such simulations will
be an important area for future work as the results of such
studies should make it possible to design transaction rules
and human interfaces that constrain behavior into optimal
pathways. Such results will also help corroborate and improve
upon PT models.
In a nutshell, the deployment of smart energy management
mechanisms is an integral and essential part of the smart
grid. However, in order to expedite the introduction of such
features, it has become crucial to properly develop behavioral
models that can factor in explicitly the impact of human
behavior on the overall operation of the future, consumercentric smart grid.
R EFERENCES
[1] E. Hossain, Z. Han, and H. V. Poor, Smart Grid Communications and
Networking. Cambridge University Press, UK, Oct. 2012.
[2] R. Schneiderman, “Smart grid represents a potentially huge market for
the electricity industry,” IEEE Signal Processing Magazine, vol. 27,
no. 5, pp. 8–15, Sep. 2010.
[3] ISO New England Inc., “Overview of the smart grid: Policies, initiatives and needs,” Feb. 2009.
[4] A. Ipakchi and F. Albuyeh, “Grid of the future,” IEEE Power and
Energy Magazine, vol. 7, no. 2, pp. 52 – 62, Mar. 2009.
[5] H. Mohsenian-Rad, V. W. S. Wong, J. Jatskevich, R. Schober, and
A. Leon-Garcia, “Autonomous demand side management based on
game-theoretic energy consumption scheduling for the future smart
grid,” IEEE Trans. on Smart Grid, vol. 1, no. 3, pp. 320–331, Dec.
2010.
[6] Z. Fadlallah, D. M. Quan, N. Kato, and I. Stojmenovic, “GTES: An
optimized game-theoretic demand-side management scheme for smart
grid,” IEEE Systems Journal, vol. 8, pp. 588–597, Jan. 2009.
[7] L. Chen, N. Li, S. H. Low, and J. C. Doyle, “Two market models
for demand response in power networks,” in Proc. International
Conference on Smart Grid Communications, Gaithersburg, MD, USA,
Oct. 2010.
[8] S. D. Ramchurn, P. Vytelingum, A. Rogers, and N. R. Jennings,
“Agent-based control for decentralised demand side management in the
smart grid,” in Proc. International Conference on Autonomous Agents
and Multiagent Systems (AAMAS), Taipei, Taiwan, May 2011.
[9] Q. Zhu, P. Sauer, and T. Başar, “Value of demand response in the
smart grid,” in Proc. IEEE Power and Energy Conference at Illinois,
Champaign, IL, USA, Feb. 2013.
[10] C. Ibars, M. Navarro, and L. Giupponi, “Distributed demand management in smart grid with a congestion game,” in Proc. International
Conference on Smart Grid Communications, Gaithersburg, MD, USA,
Oct. 2010.
[11] S. Caron and G. Kesidis, “Incentive-based energy consumption
scheduling algorithms for the smart grid,” in Proc. International
Conference on Smart Grid Communications, Gaithersburg, MD, USA,
Oct. 2010.
[12] Z. Zhu, J. Tang, S. Lambotharan, W. H. Chin, and Z. Fan, “An integer
linear programming and game theory based optimization for demandside management in smart grid,” in Proc. IEEE Global Commun. Conf.,
Houston, TX, USA, Dec. 2011.
[13] S. Bu, F. R. Yu, and P. X. Liu, “A game-theoretical decision-making
scheme for electricity retailers in the smart grid with demand-side
management,” in Proc. International Conference on Smart Grid Communications, Brussels, Belgium, Oct. 2011.
[14] S. Bu and F. R. Yu, “A game-theoretical scheme in the smart grid
with demand-side management: Towards a smart cyber-physical power
infrastructure,” IEEE Trans. on Emerging Topics in Computing, vol. 1,
pp. 22–32, Jun. 2013.
[15] H. K. Nguyen, J. B. Song, and Z. Han, “Demand side management
to reduce peak-to-average ratio using game theory in smart grid,” in
Proc. of IEEE INFOCOM, Workshop on Smart Grid, Orlando, FL,
USA, Apr. 2012.
[16] S. Mahrajan, Q. Zhu, Y. Zhang, S. Gjessing, and T. Başar, “Dependable
demand response management in the smart grid: A stackelberg game
approach,” IEEE Trans. on Smart Grid, vol. 4, pp. 120–132, Mar. 2013.
[17] P. Samadi, H. Mohsenian-Rad, R. Schober, and V. W. S. Wong,
“Advanced demand side management for the future smart grid using
mechanism design,” IEEE Trans. on Smart Grid, vol. 3, pp. 1170–
1180, Sep. 2012.
[18] H. Mohsenian-Rad and A. Davoudi, “Optimal demand response in
DC distribution networks,” in Proc. IEEE Int. Conf. on Smart Grid
Communications (SmartGridComm), Vancouver, Canada, Oct. 2013.
[19] H. Zhong, L. Xie, and Q. Xia, “Coupon incentive-based demand
response: Theory and case study,” IEEE Trans. on Power Systems,
to appear 2013.
[20] M. D. Ilic, L. Xie, and J. Joo, “Efficient coordination of wind power
and price-responsive demand Part I: theoretical foundations,” IEEE
Trans. on Power Systems, vol. 26, pp. 1875–1884, Nov. 2011.
[21] ——, “Efficient coordination of wind power and price-responsive
demand Part II: case studies,” IEEE Trans. on Power Systems, vol. 26,
pp. 1885–1893, Nov. 2011.
[22] C. Su and D. Kirschen, “Quantifying the effect of demand response
on electricity markets,” IEEE Trans. on Power Systems, vol. 24, pp.
1199–1207, Aug. 2009.
[23] P. Palensky and D. Dietrich, “Demand side management: Demand
response, intelligent energy systems, and smart loads,” IEEE Trans.
on Industrial Informatics, vol. 7, no. 3, pp. 381 – 388, Aug. 2011.
[24] B. Asare-Bediako, W. L. Kling, and P. F. Ribeiro, “Integrated agentbased home energy management system for smart grids applications,”
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
[40]
[41]
[42]
[43]
[44]
[45]
in Proc. IEEE/PES Innovative Smart Grid Technologies Europe, Lyngby, Denmark, Oct. 2013.
H. Mohsenian-rad, V. W. S. Wong, J. Jatskevich, and R. Schober, “Optimal and autonomous incentive-based energy consumption scheduling
algorithm for smart grid,” in Proc. Innovative Smart Grid Technologies
(ISGT), Gathersburg, MD, USA, Jan. 2010.
J. Ma, J. Deng, L. Song, and Z. Han, “Incentive mechanism for demand
side management in smart grid using auction,” IEEE Trans. on Smart
Grid, vol. 5, no. 3, pp. 1379 – 1388, May 2014.
Y. Liu, C. Yuen, S. Huang, N. Hassan, X. Wang, and S. Xie, “Peak-toaverage ratio constrained demand-side management with consumer’s
preference in residential smart grid,” IEEE Journal on Selected Topics
in Signal Processing, vol. 8, no. 6, pp. 1084 – 1097, Dec. 2014.
Z. M. Fadlullah, M. Q. Dong, N. Kato, and I. Stojmenovic, “A novel
game-based demand side management scheme for smart grid,” in Proc.
IEEE Wireless Commun. and Networking Conf., Shanghai, China, Apr.
2013.
C. O. Adika and L. Wang, “Smart charging and appliance scheduling
approaches to demand side management,” International Journal of
Electrical Power & Energy Systems, vol. 57, pp. 232–240, May 2014.
J. Yang, G. Zhang, and K. Ma, “Real-time pricing-based scheduling
strategy in smart grids: A hierarchical game approach,” Journal of
Applied Mathematics, vol. 2014, Apr. 2014.
X. Xue, S. Wang, C. Yan, and B. Cui, “A fast chiller power demand
response control strategy for buildings connected to smart grid,”
Applied Energy, vol. 137, p. 7787, Jan. 2015.
F. Rahimi and A. Ipakchi, “Demand response as a market resource
under the smart grid paradigm,” IEEE Trans. Smart Grid, vol. 1, no. 1,
pp. 82–88, Apr. 2010.
Q. Zhu, Z. Han, and T. Başar, “A differential game approach to
distributed demand side management in smart grid,” in Proc. Int. Conf.
on Communications, Ottawa, Canada, Jun. 2012.
E. Nekouei, T. Alpcan, and D. Chattopadhyay, “A game-theoretic
analysis of demand response in electricity markets,” in Proc. IEEE
PES General Meeting, National Harbor, MD, USA, Jul. 2014.
C. Joe-Wong, S. Sen, S. Ha, and M. Chiang, “Optimized day-ahead
pricing for smart grids with device-specific scheduling flexibility,”
IEEE J. Select. Areas Commun., vol. 30, no. 6, pp. 1075 – 1085,
Jul. 2012.
A. Safdarian, M. Fotuhi-Firuzabad, and M. Lehtonen, “A distributed
algorithm for managing residential demand response in smart grids,”
IEEE Trans. on Industrial Informatics, vol. 10, no. 4, pp. 2385 – 2393,
Nov. 2014.
T. Logenthiran, D. Srinivasan, and K. W. M. Vanessa, “Demand side
management of smart grid: Load shifting and incentives,” AIP Journal
of Renewable and Sustainable Energy, vol. 6, Jun. 2014.
P. Vytelingum, T. D. Voice, S. D. Ramchurn, A. Rogers, and N. R. Jennings, “Agent-based micro-storage management for the smart grid,” in
Proc. International Conference on Autonomous Agents and Multiagent
Systems (AAMAS), Toronto, Canada, May 2010.
I. Atzeni, L. G. Ordonez, G. Scutari, D. P. Palomar, and J. R. Fonollosa,
“Noncooperative and cooperative optimization of distributed energy
generation and storage in the demand-side of the smart grid,” IEEE
Trans. Signal Processing, vol. 61, no. 10, pp. 2454–2472, May 2013.
C. Wu, H. Mohsenian-Rad, and J. Huang, “Vehicle-to-aggregator
interaction game,” IEEE Trans. on Smart Grid, vol. 3, no. 1, pp. 434
– 442, Oct. 2011.
Y. Wang, W. Saad, Z. Han, H. V. Poor, and T. Başar, “A game-theoretic
approach to energy trading in the smart grid,” IEEE Trans. on Smart
Grid, vol. 5, no. 3, pp. 1439–1450, Apr. 2014.
C. Cecati, C. Citro, and P. Siano, “Combined operations of renewable
energy systems and responsive demand in a smart grid,” IEEE Trans.
on Sustainable Energy, vol. 3, no. 1, pp. 468 – 476, Jul. 2011.
H. M. Soliman and A. Leon-Garcia, “Game-theoretic demand-side
management with storage devices for the future smart grid,” IEEE
Trans. on Smart Grid, vol. 5, no. 3, pp. 1475 – 1485, May 2014.
Z. Fan, “A distributed demand response algorithm and its application
to phev charging in smart grids,” IEEE Trans. Smart Grid, vol. 3, no. 3,
pp. 1280 – 1290, Sep. 2012.
M. A. Lopez, S. de la Torre, S. Martin, and J. A. Aguado, “Demandside management in smart grid operation considering electric vehicles
[46]
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
[55]
[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]
load shifting and vehicle-to-grid support,” International Journal of
Electrical Power and Energy Systems, vol. 64, p. 689698, Jan. 2015.
W. Tushar, W. Saad, H. V. Poor, and D. B. Smith, “Economics of
electric vehicle charging: A game theoretic approach,” IEEE Trans.
on Smart Grid, vol. 3, no. 4, pp. 1767–1779, Dec. 2012.
I. Atzeni, L. G. Ordonez, G. Scutari, D. P. Palomar, and J. R. Fonollosa,
“Demand-side management via distributed energy generation and
storage optimization,” IEEE Trans. on Smart Grid, vol. 4, no. 2, pp.
866 – 876, Jun. 2013.
S. Lakshminarayana, T. Q. S. Quek, and H. V. Poor, “Combining
cooperation and storage for the integration of renewable energy in
smart grids,” in Proc. of IEEE INFOCOM, Workshop on Smart Grid,
Toronto, Canada, Apr. 2014.
Q. Sun, M. E. Cotterell, A. Beach, and S. Grijalva, “The fundamental
value of information and strategy in stochastic management of distributed energy storage,” in Proc. North American Power Symposium,
Champaign, IL, USA, Sep. 2012.
M. Ampatzis, P. H. Nguyen, and W. L. Kling, “Introduction of storage
integrated pv sytems as an enabling technology for smart energy grids,”
in Proc. Innovative Smart Grid Technologies (ISGT) Europe, Lyngby,
Denmark, Oct. 2013.
R. Couillet, S. M. Perlaza, H. Tembine, and M. Debbah, “Electrical
vehicles in the smart grid: A mean field game analysis,” IEEE J. Select.
Areas Commun., vol. 30, no. 6, pp. 1086 – 1096, Jul. 2012.
N. Rotering and M. Ilic, “Optimal plug-in electric vehicle charge
control in deregulated electricity markets,” IEEE Transactions on
Power Systems, vol. 26, no. 3, pp. 1021 – 1029, Aug. 2011.
C. Silva, M. Ross, and T. Farias, “Evaluation of energy consumption,
emissions and cost of plug-in hybrid vehicles,” Elsevier Energy Conversion and Management, vol. 50, no. 7, pp. 1635–1643, Jul. 2009.
S. Sojoudi and S. H. Low, “Optimal charging of plug-in hybrid
electric vehicles in smart grids,” IEEE Power and Energy Society (PES)
General Meeting, Jul. 2011.
E. Tara, S. Shahidinejad, S. Filizadeh, and E. Bibeau, “Battery storage
sizing in a retrofitted plug-in hybrid electric vehicle,” IEEE Transactions on Vehicular Technology, vol. 59, no. 6, pp. 2786–2794, Jul.
2010.
J. Gonder, T. Markel, M. Thornton, and A. Simpson, “Using global
positioning system travel data to assess real-world energy use of plugin hybrid electric vehicles,” Transportation Research Record: Journal
of the Transportation Research Board, vol. 2007, no. 2017, pp. 26–32,
Jan. 2008.
T. A. Al-Awami and E. Sortomme, “Coordinating vehicle-to-grid
services with energy trading,” IEEE Trans. Smart Grid, vol. 3, no. 1,
pp. 453 – 462, Mar. 2012.
D. Ilic, P. G. Da Silva, S. Karnouskos, and M. Griesemer, “An
energy market for trading electricity in smart grid neighbourhoods,” in
Proc. 6th IEEE Int. Conference on Digital Ecosystems Technologies,
Campione D’Italia, Italy, Jun. 2012.
B.-G. Kim, S. Ren, M. van der Schaar, and J.-W. Lee, “Bidirectional
energy trading and residential load scheduling with electric vehicles
in the smart grid,” IEEE J. Select. Areas Commun., vol. 31, no. 7, pp.
1219 – 1234, Jul. 2013.
W. Tushar, J. A. Zhang, D. B. Smith, H. V. Poor, and S. Thiebaux,
“Prioritizing consumers in smart grid: A game theoretic approach,”
IEEE Trans. on Smart Grid, vol. 5, no. 3, pp. 1429 – 1438, May
2014.
S. Chen, N. B. Shroff, and P. Sinha, “Energy trading in the smart
grid: From end-user’s perspective,” in Proc. Asilomar Conference on
Signals, Systems, and Computers, Pacific Grove, CA, USA, Nov. 2013.
S. Chen, N. Shroff, and P. Sinha, “Heterogeneous delay tolerant task
scheduling and energy management in the smart grid with renewable
energy,” IEEE J. Select. Areas Commun., vol. 31, no. 7, pp. 1258 –
1267, Jul. 2013.
I. S. Bayram, M. Z. Shakir, M. Abdallah, and K. Qaraqe, “A survey on
energy trading in smart grid,” in Proc. IEEE IEEE Global Conference
on Signal and Information Processing (GlobalSIP), Atlanta, GA, USA,
Dec. 2014.
D. T. Nguyen and L. Le, “Optimal energy trading for building microgrid with electric vehicles and renewable energy resources,” in Proc.
[65]
[66]
[67]
[68]
[69]
[70]
[71]
[72]
[73]
[74]
[75]
[76]
[77]
[78]
[79]
[80]
[81]
[82]
[83]
[84]
[85]
[86]
[87] EnerNex Corp., “Eastern wind integration and transmission study,”
IEEE PES Innovative Smart Grid Technologies (ISGT), Washington,
National Renewable Energy Laborator, Report NREL/SR-550-47078,
DC, USA, Feb. 2014.
2010.
E. Mocanu, K. O. Aduda, P. H. Nguyen, G. Boxem, W. Zeiler,
[88] OpenEI, “Open energy data sets,” 2014. [Online]. Available:
M. Gibescu, and W. L. Kling, “Optimizing the energy exchange
http://en.openei.org/datasets/
between the smart grid and building systems,” in Proc. 49th Int.
[89] U.S.
Energy
Information
Administration,
“ElectricUniversities Power Engineering Conference, Cluj-Napoca, Romania,
ity
statistics
and
data,”
2014.
[Online].
Available:
Sep. 2014.
http://www.eia.gov/electricity/data.cfm
A. Mondal and S. Misra, “Dynamic coalition formation in a smart grid:
[90] ecoEnergy, “Energy consumption of major household appliances
A game theoretic approach,” in Proc. Int. Conf. on Communications,
shipped in canada,” Technical Report, Dec. 2007. [Online]. Available:
Budapest, Hungary, Jun. 2013.
http://oee.nrcan.gc.ca/Publications/statistics/cama07/pdf/cama07.pdf
J. Fahey, “Companies dangle free nights and weekends,” Pocono
[91] Lawrence Berkeley National Laboratory, “Standy power data,” 2014.
Record, 2013.
[Online]. Available: http://standby.lbl.gov/data.html
P. Durand, “Moving beyond smart grid toward customer engagement,”
[92] Massachusetts Institute of Technology, “The reference energy
Electric Light & Power, Jun. 2015.
disaggregation data set (redd),” 2014. [Online]. Available:
G. Wang, “3 ways to engage utility customers in home energy
http://redd.csail.mit.edu/
management,” Electric Light & Power, Jun. 2015.
[93] D. Kahneman and A. Tversky, “Prospect theory: An analysis of
Associated Press, “Small-scale solar power market draws big utilities,”
decision under risk,” Econometrica, vol. 47, pp. 263–291, 1979.
Oct. 2015.
[94] G. A. Quattrone and A. Tversky, “Contrasting rational and psychoSolar Energy Industries Association, “Solar energy facts:
logical analyses of political choice,” The American Political Science
2014
year
in
review,”
2015.
[Online].
Available:
Review, vol. 82, no. 3, pp. 719–736, 1988.
http://www.seia.org/research-resources/solar-industry-data
[95] C. Camerer, L. Babcock, G. Loewenstein, and R. Thaler, “Labor supply
D. Cardwell, “Compromise in arizona defers a solar power fight,” New
of New York City cab drivers: One day at a time,” Quarterly Journal
York Times, 2015.
of Economics, no. 111, pp. 408–441, May 1997.
Federal Energy Regulatory Commission, “Demand response
[96] D. Kahneman and A. Tversky, Choices, Values, and Frames. Cam& advance metering staff report,” 2012. [Online]. Available:
bridge University Press, 2000.
https://www.ferc.gov/legal/staff-reports/12-20-12-demand-response.pdf
[97] Y. Wang, A. Nakao, and J. Ma, “Psychological research and application
T. Başar and G. J. Olsder, Dynamic Noncooperative Game Theory.
in autonomous networks and systems: A new interesting field,” in Proc.
Philadelphia, PA, USA: SIAM Series in Classics in Applied MatheInternational Conference on Intelligent Computing and Integrated
matics, 1999.
Systems, Guilin, China, Oct. 2010.
M. E. Khodayar, M. Barati, and M. Shahidehpour, “Integration of high
[98] L. P. Metzger and M. O. Riegery, “Equilibria in games with prospect
reliability distribution system in microgrid operation,” IEEE Trans. on
theory preferences,” Working Paper, Nov. 2009. [Online]. Available:
Smart Grid, vol. 3, no. 4, pp. 1997 – 2006, Dec. 2012.
http://www.bf.uzh.ch/publikationen/pdf/publ 2150.pdf
P. Piagi and R. H. Lasseter, “Autonomous control of microgrids,” IEEE
[99] T. Li and N. Mandayam, “Prospects in a wireless random access
Power Engineering Society General Meeting, vol. 27, no. 5, pp. 78–94,
game,” in Proc. 46th Annual Conference on Information Sciences and
Oct. 2006.
Systems, Princeton, NJ, USA, Mar. 2012.
F. Katiraei, R. Iravani, and N. Hatziargyriou, “Microgrids manage[100] ——, “When users interfere with protocols: prospect theory in wireles
ment,” IEEE Power and Energy Magazine, vol. 6, pp. 54–65, May
networks using random access as an example,” IEEE Trans. Wireless
2008.
Commun., vol. 13, no. 4, pp. 1888–1907, Feb. 2014.
N. Hatziargyriou, H. Sano, R. Iravani, and C. Marnay, “Microgrids:
[101] A. Tversky and D. Kahneman, “Advances in prospect theory: CumuAn overview of ongoing research development and demonstration
lative representation of uncertainty,” Journal of Risk and Uncertainty,
projects,” IEEE Power and Energy Magazine, vol. 27, pp. 78–94, Aug.
vol. 5, pp. 297–323, Oct. 1992.
2007.
[102] D. Kahneman, Thinking, fast and slow. New York City, NY, USA:
S. Dasgupta, S. N. Mohan, S. K. Sahoo, and S. K. Panda, “Lyapunov
Farrar, Straus, & Giroux, 2011.
function-based current controller to control active and reactive power
[103] D. Prelec, “The probability weighting function,” Econometrica, pp.
flow from a renewable energy source to a generalized three-phase
497–528, 1998.
microgrid system,” IEEE Trans. on Industrial Electronics, vol. 60,
[104] Y. Wang, W. Saad, N. Mandayam, and H. V. Poor, “Integrating energy
no. 2, pp. 799 – 813, Feb. 2013.
storage in the smart grid: A prospect-theoretic approach,” in Proc.
A. Karabibera, C. Kelesb, A. Kaygusuzb, and B. B. Alagoz, “An
IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP),
approach for the integration of renewable distributed generation in
Florence, Italy, May 2014.
hybrid DC/AC microgrids,” Renewable Energy, vol. 51, p. 251259,
[105] Y. Wang and W. Saad, “On the role of utility framing in smart grid
Apr. 2013.
energy storage management,” in Proc. Int. Conf. on Communications,
O. Hafez and K. Bhattacharya, “Optimal planning and design of a
Workshop on Smart Grids, London, UK, Jun. 2015.
renewable energy based supply system for microgrids,” Renewable
[106] Y. Wang, W. Saad, N. Mandayam, and H. V. Poor, “Load shifting in
Energy, vol. 45, p. 715, Sep. 2012.
the smart grid: To participate or not?” IEEE Trans. on Smart Grid, to
M. Sechilariu, B. Wang, and F. Locment, “Building integrated phoappear 2015.
tovoltaic system with energy storage and smart grid communication,”
[107] “OpenEI, http://en.openei.org/datasets/files/961/pub/.”
IEEE Trans. on Industrial Electronics, vol. 60, no. 4, pp. 1607 – 1618,
[108] J. L. Knetsch, “The endowment effect and evidence of nonreversible
Apr. 2013.
indifference curves,” American Economic Review, vol. 79, pp. 1277–
C. A. Hill, M. Such, C. D, J. Gonzalez, and W. M. Grady, “Battery
1284, 1989.
energy storage for enabling integration of distributed solar power
[109] J. A. List, “Does market experience eliminate market anomalies?”
generation,” IEEE Trans. on Smart Grid, vol. 3, no. 2, pp. 850 – 857,
Quarterly Journal of Economics, vol. 118, pp. 47–71, 2003.
May 2012.
L. Xie, D.-H. Choi, S. Kar, and H. V. Poor, “Fully distributed state
estimation for wide-area monitoring systems,” IEEE Trans. on Smart
Grid, vol. 3, no. 3, pp. 1154–1169, Sep. 2012.
Modellstadt
Mannheim,
“E-energy-projectsmart
city
mannheim,” Technical Report, 2012. [Online]. Available:
http://ec.europa.eu/dgs/jrc/downloads/events/20120711-esof/esof-2012-thomas-wolski.pdf
Power Plus Communications, “Smart citizens,” Modellstadt
Mannheim
Technical
Report,
2012.
[Online].
Available:
http://www.ppc-ag.de/files/moma en.pdf
| 3 |
arXiv:1708.06090v1 [] 21 Aug 2017
THE STRONG REES PROPERTY OF POWERS OF THE
MAXIMAL IDEAL AND TAKAHASHI-DAO’S QUESTION
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Dedicated to Craig Huneke on the occasion of his 65th Birthday.
Abstract. In this paper, we introduce the notion of the strong Rees property
(SRP) for m-primary ideals of a Noetherian local ring and prove that any power
of the maximal ideal m has its property if the associated graded ring G of m
satisfies depth G ≥ 2. As its application, we characterize two-dimensional
excellent normal local domains so that m is a pg -ideal.
Finally we ask what m-primary ideals have SRP and state a conjecture
which characterizes the case when mn are the only ideals which have SRP.
1. Introduction
Let (A, m) be a Noetherian local ring with d = dim A ≥ 1 and I an m-primary
ideal of A. The notion of m-full ideals was introduced by D. Rees and J. Watanabe
([22]) and they proved the “Rees property” for m-full ideals, namely, if I is m-full
ideal and J is an ideal containing I, then µ(J) ≤ µ(I), where µ(I) = ℓA (I/mI) is
the minimal number of generators of I. Also, they proved that integrally closed
ideals are m-full if A is normal.
fn , the Ratliff-Rush closure of mn , is m-full ([1,
Suppose depth A > 0. Then m
Proposition 2.2]). Thus mn is m-full for sufficiently large n.
Sometimes we need stronger property for µ(I) and we will call it “Strong Rees
property” (SRP for short).
Definition (Strong Rees Property). Let I be an m-primary ideal of A. Then
we say that I satisfies the strong Rees property if for every ideal J ) I, we have
µ(J) < µ(I).
So it will be natural to ask the following questions.
Question 1.1. Let (A, m) be a normal local ring. Then
(1) Does mn has the strong Rees property for every n ≥ 1?
(2) If I has the strong Rees property, then is I = mn for some n ≥ 1?
Date: August 22, 2017.
2000 Mathematics Subject Classification. Primary 13A30; Secondary 13H15, 13B22, 14B05.
1
2
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Actually, both questions are not true in general. But we can show that (1) in
the Question 1.1 holds under suitable mild condition. Also, we will give an example
of two-dimensional normal local rings where m2 does not satisfy the strong Rees
property.
As for (2) of Question 1.1, we will discuss it in Section 6.
Assume that I is an m-primary ideal. Then the multiplicity (resp. the minimal
number of system of generators) of I is denoted by e(I) (resp. µ(I)). The Loewy
length ℓℓ(I) is defined by
ℓℓ(I) = min{r ∈ Z≥0 | mr ⊂ I}
Notice that the notion of Loewy length of an Artinian ring measures the nilpotency
of the maximal ideal. It is natural to ask if e(I) is bounded by the product of µ(I)
and ℓℓ(I) after adjusting some error terms.
The origin of this work was a discussion of second and third authors with Hailong
Dao. He presented an inequality between µ(I), the multiplicity e(I) of I and Loewy
length ℓℓ(I).
Question 5.1 ([2]). Let (A, m) be a d-dimensional Cohen-Macaulay local ring.
Let I ⊂ A be an m-primary (integrally closed) ideal. When does the inequality
(d − 1)!(µ(I) − d + 1) · ℓℓ(I) ≥ e(I)
hold true?
Dao and Smirnov [2] proved that the Question 5.1 holds true if A is a twodimensional analytically unramified and the maximal ideal m is a pg -ideal (or,
equivalently, the Rees algebra R(m) is normal and Cohen-Macaulay).
We are interested in the converse of the Question 5.1 in the case of d = 2. Here,
since e(I) does not change after taking integral closure, it is natural to assume that
I is integrally closed. Namely, our Question is
Question 1.2. Assume that (A, m) is a two-dimensional excellent normal local
domain. If an inequality
(µ(I) − 1) · ℓℓ(I) ≥ e(I)
holds for any m-primary integrally closed ideal I, then is m a pg -ideal?
It turns out that this question is related to the “strong Rees property” of powers
of the maximal ideal. The main result in this paper is the following theorem.
Theorem 3.2. Let (A, m) be a Noetherian local ring. Assume that depth A ≥ 2
1
and HM
(G) has finite length, where G = G(m) = ⊕n≥0 mn /mn+1 . If mℓ is RatliffRush closed, then mℓ has the strong Rees property.
By Remark 2.5, any power mℓ is Ratliff-Rush closed whenever depth G ≥ 1.
Hence we have the following corollary.
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
3
Corollary 3.5. If depth G ≥ 2, then mℓ has the strong Rees property for every
ℓ ≥ 1.
In general, we cannot relax the assumption that depth G ≥ 2 even if A is normal.
See Section 4 for more details.
In Section 5, as an application of the theorem above, we prove the above question
has an affirmative answer.
Theorem 5.2.
Let (A, m) be a two-dimensional excellent normal local domain containing an
algebraically closed field. Then the following conditions are equivalent:
(1) m is a pg -ideal.
(2) For every m-primary integrally closed ideal I,
(µ(I) − 1) · ℓℓA (I) ≥ e(I)
(∗)
holds true.
(3) For any power I = mℓ of m, the inequality (∗) holds true.
After proving this theorem, Dao and Smirnov informed us that they proved the
same theorem independently.
2. Preliminaries
Throughout this paper, let (A, m) be a Noetherian local ring with d = dim A ≥ 1,
and let I, J ⊂ A be ideals of positive height. We recall the notion which we will
need later.
2.1. m-full ideals.
Definition 2.1 (m-full, Rees property[22]). An ideal I is called m-full if there
exists an element x ∈ m so that mI : x = I.
An ideal I is said to have the Rees property if µ(J) ≤ µ(I) for any ideal J
containing I.
Proposition 2.2 (See [22, Theorem 3]). Any m-primary m-full ideal has the Rees
property.
Proof. See the proof of [3, Lemma 2.2].
The following result is due to Rees in the case of normal integral domains.
Proposition 2.3 (See [22, Theorem 5]). Any integrally closed ideal of positive
height is m-full.
Proof. See the proof of Theorem [3, Theorem 2.4].
4
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Definition 2.4 (Ratliff-Rush closure [17]). The Ratliff-Rush closure of I is
defined by
[
Ie =
I n+1 : I n .
n≥0
The ideal I is called Ratliff-Rush closed if Ie = I.
Remark 2.5. Note that I ⊂ Ie ⊂ I, where I is the integrally closure of I. If we
L
put G = G(m) = n≥0 mn /mn+1 , then
M
n+1 ∩ mn )/mn+1 .
]
H 0 (G) =
(m
M
n≥0
Lemma 2.6. Assume that for some z ∈ mn−1 \ mn we have zm ⊂ mn−1 . Then,
putting I = (x) + mn , µ(I) > µ(mn ), hence mn does not have Rees property.
Proof. Let {y1 , . . . , yµ } be a minimal set of generators of mn . Then we can see that
{z, y1, . . . , yµ } is a minimal set of generators of I. Hence µ(I) = µ(mn ) + 1.
Any m-full ideal has the Rees property. However, in order to prove our theorem,
we need stronger inequalities.
Recall the definition of the strong Rees property.
Definition 2.7. An m-primary ideal I is said to have the strong Rees property
(SRP for short) if µ(J) < µ(I) holds true for every ideal J with I ( J.
We can show the existence of m-primary ideals with SRP by the following
Lemma.
Lemma 2.8. Let (A, m) be a Noetherian local ring. Fix an m-primary ideal I with
µ(I) = n.
(1) Any maximal element of the set of m-primary ideals
I = {J ⊂ A | J ⊃ I and µ(J) ≥ n}
has the strong Rees property.
(2) If, moreover, I is m-full, then there is an ideal I ′ ⊃ I with the strong Rees
property and µ(I ′ ) = µ(I).
Proof. (1) is obvious by the definition of I.
(2) If I ′ ⊃ I has SRP and µ(I ′ ) ≥ µ(I), we should have µ(I ′ ) = µ(I) since I is
m-full.
The next example gives a motivation for us to study the strong Rees property
of powers of the maximal ideal. See also [22, Theorem 5] and Section 6.
Example 2.9. Assume that A is a two-dimensional regular local ring. Then for
any m-primary ideal I, the following conditions are equivalent.
(1) I has the strong Rees property.
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
5
(2) I = mn for some integer n ≥ 1.
Indeed, assume that I has the strong Rees property. If we put n = ord(I) =
max{n ∈ Z | I ⊂ mn }, then we can take an element f ∈ I so that ord(f ) =
n. Then since I/(f ) is an m/(f )-primary ideal of A/(f ), we have µ(I/(f )) ≤
n = e(m/(f )) because A/(f ) is a one-dimensional Cohen-Macaulay local ring. In
particular, µ(I) ≤ n + 1 = µ(mn ). If I 6= mn , then n + 1 = µ(mn ) < µ(I) by the
assumption (1). But this is a contradiction.
Conversely, if I ) mn , then ord(I) < n and µ(I) ≤ ord(I) + 1 < n + 1 = µ(mn ).
2.2. pg -ideals. In what follows, let (A, m) be a two-dimensional excellent normal
local domain containing an algebraically closed field k = A/m. Let f : X → Spec A
be a resolution of singularities. Then pg (A) = dimk H 1 (OX ) is called the geometric
genus of A. Note that it does not depend on the choice of resolution of singularities.
Let I ⊂ A be an m-primary integrally closed ideal. Let f : X → Spec A be a
resolution of singularities on which I is represented, that is, IOX is invertible and
IOX = OX (−Z) for some anti-nef cycle Z on X.
Okuma and the last two authors [13] proved that dimk H 0 (OX (−Z)) ≤ pg (A)
holds true if H 0 (OX (−Z)) has no fixed component.
Definition 2.10 (See [13]). An anti-nef cycle Z is a pg -cycle if OX (−Z) is generated
and dimk H 0 (OX (−Z)) = pg (A). An m-primary integrally closed ideal I is called
a pg -ideal if I is represented as I = H 0 (OX (−Z)) by a pg -cycle Z.
The following theorem gives a characterization of pg -ideals in terms of Rees
algebras.
Proposition 2.11 (See [14]). Let (A, m) be a two-dimensional excellent normal
local domain containing an algebraically closed field. Let I ⊂ A be an m-primary
ideal. Then the following conditions are equivalent:
(1) I is a pg -ideal.
(2) I n = I n for every n ≥ 1 and I 2 = QI for some minimal reduction Q ⊂ I.
(3) The Rees algebra R(I) = A[It](⊂ A[t]) is a Cohen-Macaulay normal domain, where t is an indeterminate over A.
For instance, any integrally closed m-primary ideal in a two-dimensional rational
singularity (i.e. pg (A) = 0) is a pg -ideal. On the other hand, any two-dimensional
excellent normal local domain containing algebraically closed field has a pg -ideal;
see [13, 14].
6
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
3. Strong Rees Property of powers of the maximal ideal
In what follows, let (A, m) be a Noetherian local ring and set R = R(m) and
G = G(m) and M = mR + R+ . The main purpose of this section is to consider the
following question.
Question 3.1. Assume that I is Ratliff-Rush closed. When does I have the strong
Rees property?
As an answer, we show that some powers of the maximal ideal m have the strong
Rees property if depth G ≥ 2; see Corollary 3.5. More generally, we can show the
following theorem.
1
Theorem 3.2 (Strong Rees Property). Assume that depth A ≥ 2 and HM
(G)
has finite length. If mℓ is Ratliff-Rush closed, then mℓ has the strong Rees property.
We first need to prove the following lemma.
Lemma 3.3. Suppose that for some x ∈ m, mℓ+1 : x = mℓ . Then for every ideal
J with J ⊃ mℓ , µ(J) ≤ µ(mℓ ) is always satisfied, and the following conditions are
equivalent:
(1) µ(J) = µ(mℓ ).
(2) mJ = xJ + mℓ+1 .
When this is the case, if we put C = JR/mℓ R, C/(xt)C = (C/(xt)C)0 = J/mℓ .
In particular, dim C ≤ 1.
Proof. Consider the following two short exact sequences:
0 →
mℓ ∩ mJ
mℓ
mℓ
→
→
→ 0.
mℓ+1
mℓ+1
mℓ ∩ mJ
J
J
mℓ + mJ
→
→ ℓ
→ 0.
mJ
mJ
m + mJ
Since mℓ /(mℓ ∩ mJ) ∼
= (mℓ + mJ)/mJ, combining two exact sequence implies
0 →
(3.1)
0 →
mℓ ∩ mJ
mℓ
J
J
→ ℓ+1 →
→ ℓ
→ 0.
ℓ+1
m
m
mJ
m + mJ
It follows that
ℓA (mℓ ∩ mJ/mℓ+1 ) = µ(mℓ ) − µ(J) + ℓA (J/mℓ + mJ).
Furthermore, since mJ/(mℓ ∩ mJ) ∼
= (mℓ + mJ)/mℓ , we get
ℓA (mJ/mℓ+1 ) = ℓA (J/mℓ ) + µ(mℓ ) − µ(J)
We now consider an R-module C = JR/mℓ R. The assumption that mℓ+1 : x =
m implies that the multiplication map
ℓ
·x : C0 = J/mℓ → C1 = mJ/mℓ+1
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
7
is injective. Hence µ(mℓ ) − µ(J) = ℓA (C1 ) − ℓA (C0 ) ≥ 0. Moreover, equality holds
true if and only if the multiplication map by x is isomorphism, which means that
mJ = xJ + mℓ+1 .
When this is the case, we have
mn+1 J = xmn J + mn+ℓ+1
for every n ≥ 1. The last assertion immediately follows from this.
Lemma 3.4. Let J ⊂ A be an ideal with J ) mℓ , where ℓ ≥ 1. Put C = JR/mℓ R.
1
Assume that depth A ≥ 2, and HM
(G) has finite length. If mℓ is Ratliff-Rush
closed, then
i
(1) HM
(C) has finite length for i = 0, 1.
0
(2) [HM
(C)]0 = 0.
Proof. First we define an R-module L(−1) as follows:
0 → R → A[t] → L(−1) → 0 (ex),
that is, L(−1) =
L
n≥0 (A/m
n
)tn .
i
(L(−1)) has finite length for i = 0, 1.
Claim 1. HM
By [15, Proposition 4.7] we have
0
HM
(L(−1)) =
Mm
fn
n≥0
mn
0
= HM
(G),
1
which has finite length and then it is proved in [16, Theorem 6.2] that HM
(L(−1))
1
has finite length, if and only if HM (G) has finite length. For instance, depth G ≥ 2,
i
then HM
(L(−1)) = 0 for each i = 0, 1.
L
Secondly, we define D = L(−1)≥ℓ (ℓ) = n≥0 (A/mℓ+n )tn .
0
1
Claim 2. [HM
(D)]0 = 0 and HM
(D) has finite length.
By definition, we have
0 → D(−ℓ) → L(−1) → W → 0,
where W is an R-module of finite length. Then since in the exact sequence
0
1
1
W = HM
(W ) → HM
(D(−ℓ)) → HM
(L(−1))
1
the modules of the both sides have finite length, so does HM
(D(−ℓ)). Moreover,
0
0
0
as HM (D(−ℓ)) ⊂ HM (L(−1)), HM (D(−ℓ)) is also of finite length.
0
fℓ /mℓ and our assumption.
The first assertion follows from the fact [HM
(D)]0 ⊂ m
L
Thirdly, we define an R-module V = n≥0 (A/mn J)tn as follows:
0 → JR → A[t] → V → 0 (ex).
8
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
By definition of V , we have
0
0
1
1
0 = HM
(A[t]) → HM
(V ) → HM
(JR) → HM
(A[t]) = 0,
where two vanishing follows from the fact that depth A ≥ 2. It follows that
1
0
0
(JR)]n = 0 for large enough n. On the other hand, as HM
(V ) ⊂
[HM
(V )]n ∼
= [HM
0
0
V , [HM (V )]n = 0 for each n ≤ −1. Thus HM (V ) has finite length.
0
1
Claim 3. [HM
(C)]0 = 0 and HM
(C) has finite length.
One can easily obtain the following exact sequence:
0 → C → D → V → 0 (ex).
Hence
0
0
0
→ HM
(C) → HM
(D)
0
→ HM
(V )
1
1
→ HM
(C) → HM
(D)
→
···
0
0
As [HM
(D)]0 = 0 by Claim 2, we get [HM
(C)]0 = 0. Moreover, in the sequence
0
0
1
1
HM
(D) → HM
(V ) → HM
(C) → HM
(D),
1
1
the both sides of HM
(C) have finite length. Hence so does HM
(C).
Proof of Theorem 3.2. Choose an m-superficial element x ∈ m \ m2 . By assumption, we have
fℓ = mℓ .
mℓ ⊂ mℓ+1 : x ⊂ m
Then mℓ+1 : x = mℓ In particular, this means mℓ is m-full.
Let J be an ideal with J ) mℓ . By Lemma 3.3, µ(J) ≤ µ(mℓ ). We want to show
that this inequality is strict. Now suppose that equality holds true: µ(J) = µ(mℓ ).
Put C = JR/mℓ R. In Lemma 3.3, we showed that C/(xt)C has finite length.
Hence we have dim C ≤ 1.
1
0
(C) has
On the other hand, by Lemma 3.4, we have [HM
(C)]0 = 0 and HM
ℓ
0
finite length. If dim C = 0, then 0 6= J/m = [C]0 = [HM (C)]0 = 0. This is a
1
contradiction. Hence dim C = 1. Then HM
(C) is not finitely generated. This
1
contradicts the fact that HM (C) has finite length. Therefore we conclude that
µ(J) < µ(mℓ ), as required.
In the case where depth G ≥ 2, then all powers of the maximal ideal have the
strong Rees property.
Corollary 3.5. If depth G ≥ 2, then mℓ has the strong Rees property for every
ℓ ≥ 1.
1
(G) = 0 and
Proof. Assume Theorem 3.2 holds true. If depth G ≥ 2, then HM
depth A ≥ 2. Hence the ring A satisfies the assumption on Theorem 3.2.
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
9
Corollary 3.6. Let (A, m) be a Cohen-Macaulay local ring with d = dim A ≥ 2. If
G = G(m) is Cohen-Macaulay, then mℓ has the strong Rees property for each ℓ ≥ 1.
Corollary 3.7. If depth A ≥ 2 and m is a normal ideal (i.e., mn is integrally closed
for all n ≥ 1) then mℓ has the strong Rees property for every ℓ ≥ 1.
Proof. By [8, Theorem 3.1] we get depth G(mn ) ≥ 2 for all n ≫ 0. Here G(mn )
1
denotes the associated graded ring of mn . It follows that HM
(G(m)) has finite
r
length. Also as m is integrally closed for all r ≥ 1 we get that it is Ratliff-Rush
closed. It follows that mr has SRP for all r ≥ 1 by Theorem 3.2.
Example 3.8. Let (A, m) be a two-dimensional excellent normal local domain.
If m is a pg -ideal (e.g. A is a rational singularity), then mℓ has the strong Rees
property for every ℓ ≥ 1.
On the other hand, “depth G ≥ 1” can be characterized by the Rees property.
Proposition 3.9. Put G = G(m), where d = dim G ≥ 1. Then the following
conditions are equivalent:
(1) depth G ≥ 1.
(2) mℓ has the Rees property for every ℓ ≥ 1.
fℓ = mℓ and mℓ is m-full for every ℓ ≥ 1.
Proof. (1) =⇒ (2) : If depth G ≥ 1, then m
ℓ
In particular, m has the Rees property.
0
(2) =⇒ (1) : Now suppose depth G = 0. Then HM
(G) 6= 0. There exist an
ℓ
ℓ+1
integer ℓ ≥ 1 and an element z ∈ m \ m
such that 0 6= z ∗ = z + mℓ+1 ∈
0
0
ℓ+2
[HM (G)]ℓ ∩ Soc(HM (G)). Then mz ⊂ m . If we put J = (z) + mℓ+1 , then
mJ = m(z) + mℓ+2 = mℓ+2 . Hence
J/mJ = ((z) + mℓ+1 )/mℓ+2 ) mℓ+1 /mℓ+2
and so µ(J) = ℓA (J/mJ) > ℓA (mℓ+1 /mℓ+2 ) = µ(mℓ+1 ). This contradicts the
assumption that mℓ+1 has the Rees property.
We now consider the case of depth G = 1. We need the following lemma.
Lemma 3.10. Suppose that depth G = 1. Assume that J ) mℓ so that µ(J) =
µ(mℓ ). Then one can find elements x ∈ m \ m2 and y ∈ J \ (mℓ + (x)) such that x∗
0
is G-regular and 0 6= z ∗ ∈ Soc(HM
(G)), where z = y ∈ A/xA and z ∗ denotes the
∗
initial form of z in G = G/x G.
Proof. Take an element x ∈ m \ m2 so that x∗ is G-regular and J 6⊂ mℓ + (x);
see the remark below for the exitence of x. Choose y ∈ J \ (mℓ + (x)). Then
y ∈ (mk + (x)) \ (mk+1 + (x)) for some k with 0 ≤ k ≤ ℓ − 1. By assumption and
Lemma 3.3, we have that my ⊂ mJ = xJ + mℓ+1 . Let denote the image of the
surjection A → A/xA and put z = y. Then mz ⊂ mℓ+1 ⊂ mk+2 , which means that
0
0 6= z ∗ ∈ [Soc(HM
(G/x∗ G))]k .
10
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Remark 3.11. Suppose depth G = 1. For any ideal J ) mℓ , one can find an
element x ∈ m \ m2 so that x∗ is G-regular and J is not contained in mℓ + (x). In
order to prove this, it suffices to consider the case ℓ(J/mℓ ) = 1, that is, J = (f )+mℓ,
where f 6∈ mℓ and mf ⊂ mℓ .
If J ⊂ mℓ + (x) for such an element x as above, then f − ax ∈ mℓ for some
a ∈ A. As x∗ is G-regular (and thus a∗ x∗ 6= 0), we obtain the relation a∗ x∗ = f ∗
in G. Let G = k[X]/a and let F be the inverse image of f ∗ in the polynomial
ring k[X]. As k is an infinite field and dim k[X]/(a + (F )) ≥ 1, one can find a
homogeneous element X of degree one which does not vanish on V (a + (F )). The
required assertion follows from here.
Hence we have the following.
Proposition 3.12. Suppose that depth G = 1. Let x∗ ∈ G1 be a nonzero divisor
0
(G)]ℓ = 0 for all ℓ < n, then mℓ has the
of G and put G = G/x∗ G. If [Soc(HM
strong Rees property for all ℓ ≤ n.
0
Proof. First we note that Soc(HM
(G/x∗ G)) is independent of the choice of x. In
fact, the short exact sequence
x∗
0 −→ G(−1) −→ G −→ G := G/x∗ G −→ 0
yields a short exact seuence
x∗
0
0
1
1
HM
(G) = 0 −→ HM
(G) −→ HM
(G)(−1) −→ HM
(G).
Taking a socle, we get
x∗
1
1
0
(G))(−1) −→ Soc(HM
(G)).
0 −→ Soc(HM
(G)) −→ Soc(HM
0
1
Since the last map is a zero map, Soc(HM
(G)) ∼
(G))(−1) is independent
= Soc(HM
of the choice of x.
Now suppose that mℓ does not have SRP. Then we can find an ideal J ) mℓ so
that µ(J) = µ(mℓ ). By Lemma 3.10, there exist an element x ∈ m \ m2 such that
0
x∗ is G-regular and 0 6= [Soc(HM
(G/x∗ G)]k for some 0 ≤ k ≤ ℓ − 1 ≤ n − 1. This
contradicts the assumption.
Proposition 3.13. When depth G = 1, there exists an n ∈ N ∪ {∞} such that mℓ
has the strong Rees property if and only if 1 ≤ ℓ ≤ n.
In order to prove the proposition, it suffices to show the following lemma.
Lemma 3.14. Suppose that depth G = 1. If mℓ does not have the strong Rees
property, then neither does mℓ+1 .
Proof. Since depth G = 1, there exists an element x ∈ m \ m2 so that x∗ = x + m2 is
G-regular. In particular, mℓ and mℓ+1 are m-full. By assumption and Lemma 3.3,
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
11
we can take an ideal I ) mℓ so that mI = xI + mℓ+1 . Put J = xI + mℓ+1 . Then
mJ = m(xI + mℓ+1 ) = m(xI) + mℓ+2 .
Moreover, we suppose that J = mℓ+1 . Then xI ⊂ mℓ+1 and thus I ⊂ mℓ+1 : x = mℓ .
This contradicts the choice of I. Hence J ) mℓ+1 and µ(J) = µ(mℓ+1 ). This implies
that mℓ+1 does not have SRP.
Example 3.15. For any n ≥ 1, there exists a triple (a, b, c) ∈ N3 such that A =
k[[s, ta , tb , tc ]] is a two-dimensional Cohen-Macaulay local domain such that mℓ has
the strong Rees property if and only if 1 ≤ ℓ ≤ n.
Proof. For a given n ≥ 1, we can choose an integer a = 10N > 2n. Set b = a + 1
and c = (a − 1)(a + 1) − an. Then s, ta , tb , tc is a minimal system of generators
because ab − a − b = (a − 1)(a + 1) − a ≥ c(> b > a).
Since
A = k[[s, x, y, z]]/(yz − xa+1−n , xn z − y a−1 , z 2 − xa+1−2n y a−2 ),
we get
G = G(m) ∼
= k[S, X, Y, Z]/(Y Z, X n Z, Z 2 , Y a ).
0
(G/SG)) is generated by X n−1 Z ∈ [G/SG]n .
Then S is an G-regular and Soc(HM
ℓ
Hence m has SRP if 1 ≤ ℓ ≤ n.
If we put I = (xn−1 z) + mn+1 ) mn+1 , then x · xn−1 z = y a−1 ∈ ma−1 ⊆ mn+2 .
Similarly, we have that y · xn−1 z ∈ ma ⊆ mn+2 and z · xn−1 z ∈ m2a−n−2 ⊂ mn+2 .
Hence mI = (s · xn−1 z) + mn+2 , and this implies that mn+1 does not have SRP.
Example 3.16. Let A = k[[s, t4 , t5 , t11 ]] and m = (s, t4 , t5 , t11 ). Then m2 does
not have strong Rees property. In fact, m2 = (s2 , st4 , st5 , st11 , t8 , t9 , t10 ) and thus
µ(m2 ) = 7. If we put I = (s2 , st4 , st5 , t8 , t9 , t10 , t11 ) = m2 + (t11 ), then I ) m2 and
µ(I) = 7 = µ(m2 ).
On the other hand, since G ∼
= k[S, X, Y, Z]/(XZ, Y Z, Z 2 , Y 4 ) (see e.g. [20,
f2 is m-full. Moreover, since
Section 2]), we have depth G = 1 and thus m2 = m
2
2
11
2
t ∈ m \ m , m is not integrally closed.
We can find an example of two-dimensional excellent normal local domains (A, m)
for which m2 does not satisfy the strong Rees property and A/sA ∼
= k[[t4 , t5 , t11 ]]
for some nonzero divisor s of A. See the next section.
4. Point divisor on a smooth curve – An example mn does not have
strong Rees property
In this section we treat a class of normal graded rings of dimension 2 and discuss
whether mn has the strong Rees property in such rings.
12
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Definition 4.1. Let k be an algebraically closed field and C be a smooth connected
projective curve of genus g over k. We take a point P ∈ C and define
H = HC,P = {n ∈ Z | h0 (C, OC (nP )) > h0 (C, OC ((n − 1)P ))},
where hi (C, F ) = dimk H i (C, F ). It is easy to see that HC,P is an additive semigroup and N \ HC,P has just g elements.
We define
R = RH = RC,P = ⊕n≥0 H 0 (C, OC (nP )T n ,
as a subring of k(C)[T ], where H 0 (C, OC (nP ) = {f ∈ k(C) | divC (f ) + nP ≥ 0}.
Namely, f ∈ H 0 (C, OC (nP ), if and only iff has pole of order at most n at P and
no other poles.
Then R = RC,P is a normal graded ring of dimension 2 as treated in [5], Chapter
5, §2. In the following, we fix H = HC,P and write A = k[H] and R = RH so that
R/T R ∼
= A, where T = 1.T ∈ R1 .
Pe
We write H = hn1 , . . . , ne i if H = { i=1 ai ni | ai ∈ Z≥0 (i = 1, . . . , e)}. In
this case, we say that H is generated by e elements. We denote by H+ the set of
positive elements of H and denote n ∈ rH+ if n = h1 + . . . + hr with hi ∈ H+
(i = 1, . . . , r).
Remark 4.2. Given a semigroup H, sometimes there does not exist the pair (C, P )
such that HC,P = H. But at least we know the existence of (C, P ) such that
H = HC,P in the following cases (cf. [10], [11]) ;
(1)
(2)
(3)
(4)
k[H] is a complete intersection.
H is generated by 3 elements.
H is generated by 4 elements and H is symmetric or pseudo-symmetric.
g(H) ≤ 9, where g(H) is the number of positive integers not in H.
We summarize some property of R = RC,P . We put m = R+ .
Proposition 4.3. Let R = RC,P . An element of Rn is denoted by f T n , where
f ∈ k(C). We denote by v(f ) the order of the pole of f at P . For non zero
elements f, g ∈ k(C), v(f g) = v(f ) + v(g).
(1) f T n ∈ Rn if and only if v(f ) ≤ n and f has no other poles on C.
(2) Hence if v(f ) < n, then f T n ∈ T n−v(f ) R, because f T v(f ) ∈ Rv(f ) .
(3) If HC,P = hn1 , . . . , ne i, which are minimal generating system, then there
are elements f1 , . . . , fe ∈ k(C) with v(fi ) = ni (i = 1, . . . , e) such that
R = k[T, f1 T n1 , . . . , fe T ne ].
(4) If f T n ∈ Rn and v(f ) = n, then f T n ∈ mr if and only if n ∈ rH+ .
(5) T ∈ R1 is a super regular element of R. Namely, if T x ∈ mr for some
x ∈ R, then x ∈ mr−1 .
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
13
Theorem 4.4. Let R = RC,P and H = HC,P = hn1 , . . . , ne i. If for some n ∈
rH+ , n 6∈ (r − 1)H+ , n + ni ∈ (r + 2)H+ for i = 1, . . . , e, then mn+1 does not have
the strong Rees property.
Proof. By the assumption, there is some n ∈ (r − 1)H+ , n 6∈ rH+ such that n +
H+ ⊂ (r + 1)H+ . Then we can take f ∈ k(C) so that v(f ) = n and f T n ∈
Rn . Since f T n 6∈ T R, f T n ∈ mr−1 and f T n 6∈ mr+1 . We put I = (mr , f T n )
and show that µ(I) = µ(mr ). Now, let homogeneous minimal generators of m
be {T, g1T n1 , . . . , ge T ne }. Then among the homogeneous minimal generators of
m(f T n), (f T n )(gi T ni ) ∈ mr+1 by our assumption. Hence we can obtain minimal
generating system of I from that of mr , interchanging f T n and f T n+1, obtaining
µ(I) = µ(mr ).
Corollary 4.5. Let R = RC,P , H = HC,P and m = R+ . Then the following
conditions are equivalent:
(1) For all n ≥ 2, mn has the strong Rees property.
(2) The associated graded ring of k[H] with respect to k[H]+ is Cohen-Macaulay.
(3) The associated graded ring of R with respect to R+ is Cohen-Macaulay.
Example 4.6. Let H = h4, 5, 11i, C a smooth curve of genus 5 such that there is a
point P with HC,P = h4, 5, 11i. We put R = RC,P and m = R+ . Since 11+4 ∈ 3H+
and 11 + 5 ∈ 4H+ , we see that m2 does not have the strong Rees property. In this
example, we can easily see that mn is integrally closed for all n ≥ 2.
Remark 4.7. For 3 generated semigroup H = ha, b, ci, we know when the associated graded ring is Cohen-Macaulay (cf, [7],[12]).
5. Takahashi-Dao’s question
Dao and Takahashi [21] gave two upper bounds of the dimension of the singularity
category dim Dsg (A):
dim Dsg (A) ≤
(µ(I) − dim A + 1)ℓℓ(I) − 1
dim Dsg (A) ≤
e(I) − 1
for any m-primary ideal I contained in the sum N A of the Noether differents of A.
They posed the following question.
Question 5.1 (Takahashi-Dao). Let (A, m) be a d-dimensional Cohen-Macaulay
local ring. Let I ⊂ A be an m-primary (integrally closed) ideal. When does an
inequality
(d − 1)!(µ(I) − d + 1) · ℓℓA (I) ≥ e(I)
hold true (cf. [2])?
14
TONY J.PUTHENPURAKAL, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
The following theorem is motivated by the question as above. In fact, (1) ⇒ (2)
is due to Dao and Smirnov, and (2) ⇒ (1) is also proved by them independently.
Theorem 5.2. Let (A, m) be a two-dimensional excellent normal local domain
containing k = k ∼
= A/m. Then the following conditions are equivalent:
(1) m is a pg -ideal.
(2) For every m-primary integrally closed ideal I,
(µ(I) − 1) · ℓℓA (I) ≥ e(I)
(∗)
holds true.
(3) For any power I = mℓ of m, the inequality (∗) holds true.
Proof. (1) =⇒ (2) : We give a sketch of proof here for the sake of the completeness.
Assume that m is a pg -ideal. Let I ⊂ A be an m-primary integrally closed ideal.
Then there exists a resolution of singularities f : X → Spec A so that IOX =
OX (−Z) for some anti-nef cycle Z on X. By [13, Theorem 6.1], we have
µ(I) = −M Z + 1,
where M is an anti-nef cycle on X so that mOX = OX (−M ).
Put r = ℓℓA (I), that is, mr ⊂ I and mr−1 6⊂ I. Then mr ⊂ I = I. Thus
rM ≥ Z. Since e(I) = −Z 2 , we have
(µ(I) − 1)ℓℓA (I) − e(I) = (−M Z)r + Z 2 = −(rM − Z)Z ≥ 0,
as required.
(2) =⇒ (3) : Trivial.
(3) =⇒ (1) : Now assume that inequalities
(µ(mℓ ) − 1) · ℓℓA (mℓ ) ≥ e(mℓ )
hold true for all integers ℓ ≥ 1. This shows that
(µ(mℓ ) − 1) · ℓ ≥ e(mℓ ) = e(mℓ ) = ℓ2 · e,
where e = e(m). Hence µ(mℓ ) ≥ ℓe + 1.
In the case of ℓ = 1, we have that µ(m) − 1 ≥ e(m). On the other hand,
Abhyankar’s inequality implies that µ(m) − 1 ≤ e(m), and thus equality holds true.
That is, A has maximal embedding dimension in the sense of Sally ([19]). Then
µ(mℓ ) = ℓe + 1. Moreover, since G = G(m) is Cohen-Macaulay ([18]), we obtain
fℓ is m-full for every ℓ ≥ 1. Then µ(mℓ ) ≤ µ(mℓ ) = ℓe + 1 by Rees
that mℓ = m
property of m-full ideals. Hence µ(mℓ ) = µ(mℓ ). Then Corollary 3.6 yields that
mℓ = mℓ because G(m) is Cohen-Macaulay in our case (Sally [18]). Therefore m is
a pg -ideal by [14].
THE STRONG REES PROPERTY OF POWERS OF THE MAXIMAL IDEAL
15
Remark 5.3. We can show that I = m^ℓ satisfies the above inequality if (A, m) has
maximal embedding dimension. Note that m^ℓ does not necessarily have the strong
Rees property; see Sections 3 and 4.
It is known that the maximal ideal m of any two-dimensional rational singularity
is a pg-ideal. So it is natural to ask for which rings the maximal ideal is a pg-ideal.
By a similar argument as in [6, Corollary 11.4], we can
show the following. Notice that this gives a slight generalization of the fact that
any two-dimensional rational singularity is an almost Gorenstein local ring.
Proposition 5.4. Let (A, m) be a two-dimensional excellent normal local domain
containing an algebraically closed field. Let KA denote the canonical module of A.
If m is a pg -ideal, then A is an almost Gorenstein local ring in the sense of [6].
That is, there exists a short exact sequence of A-modules:
0 → A → KA → C → 0
such that mC = xC for some element x ∈ m that is regular on C.
Example 5.5. In the notation of Section 4, let P ∈ C be such that HC,P =
{0, g + 1, g + 2, . . .}. Then the maximal ideal m of RC,P is a pg -ideal.
Proof. Take f ∈ k(C) with fT^{g+1} ∈ R and v(f) = g + 1. Then, putting Q =
(T, fT^{g+1}), we see that m² = Qm.
Introduce a valuation w on R such that
w(fT^m) = (m − v(f)) + v(f)/(g + 1).
To show that m^n is integrally closed it suffices to show that w(gT^m) ≥ n if and
only if gT^m ∈ m^n. If w(gT^m) ≥ n and m − v(g) = r, then since v(g) ≥ (g + 1)(n − r),
we get gT^{v(g)} ∈ m^{n−r} and hence gT^m ∈ m^n.
Hence m^n is integrally closed for all n ≥ 1 and m is a pg-ideal by Proposition
2.11.
In dimension 2, although the Takahashi-Dao inequality does not hold for a general
normal ring A, it does hold after adding a constant depending only on A. We also
have a converse inequality for e(I), replacing ℓℓ(I) by ord(I).
Proposition 5.6. Let (A, m) be an excellent normal local ring as in Theorem 5.2
and let I be an integrally closed ideal in A. Then we have the following inequalities:
(1) If we put c = pg(A) − ℓ_A(H¹(X, O_X(−M))), then we have the inequality
(µ(I) − 1 + c) · ℓℓ(I) ≥ e(I).
Note that c is an invariant depending only on A.
(2) If we replace ℓℓ(I) by ord(I), we have the converse inequality:
e(I) ≥ (µ(I) − 1) · ord(I).
Proof. We modify our argument in the proof of Theorem 5.2.
(1) In the situation of the proof of Theorem 5.2, by [13, Theorem 6.1], we have
µ(I) ≥ −MZ + 1 − c, and then the argument is the same as in Theorem 5.2.
(2) Since I ⊆ m^{ord(I)}, we have Z ≥ ord(I)·M and hence −Z² ≥ −ord(I)·MZ and
−MZ ≥ µ(I) − 1.
Example 5.7. Put A = k[[X, Y, Z]]/(f), where f is homogeneous of degree n ≥ 3
and we assume A is normal. Then c = pg(A) − ℓ_A(H¹(X, O_X(−M))) = \binom{n−1}{2} in
this case. In fact, if we take I = m^s with s ≥ n, then we have ℓℓ(I) = s, µ(I) =
sn − (n − 3)n/2, and we see that c = \binom{n−1}{2} is best possible.
In dimension ≥ 3, we can take A so that there is no constant c for which the
inequality
(d − 1)!(µ(I) − d + 1 + c) · ℓℓ(I) ≥ e(I)
holds for all integrally closed ideals I. See the next example.
Example 5.8. Let A = k[[x, y, z, w]]/(f), where f is a homogeneous polynomial
of degree n and we assume A is normal. Then one can check that if n ≥ 5, putting I = m^s,
the inequality (d − 1)!(µ(I) − d + 1 + c) · ℓℓ(I) ≥ e(I) forces c to become
arbitrarily large as s tends to infinity.
6. What Ideals have SRP?
We have shown that, under certain conditions, m^n has SRP, and also that in regular local
rings of dimension 2 the powers m^n (n ≥ 1) are the only ideals with SRP (Example 2.9). In
this section, we ask whether this property characterizes regular local rings and certain
Veronese subrings in dimension 2. We obtain a partial result for rational singularities.
Proposition 6.1. Let (A, m) be a two-dimensional rational singularity and assume
that A/m is algebraically closed. Then we have the following results concerning the
strong Rees property for integrally closed ideals.
(1) If an integrally closed ideal I has the strong Rees property, then I is a good
ideal in the sense of [4].
(2) If the minimal resolution of Spec(A) has more than two exceptional curves,
then there are integrally closed ideals I ≠ m^n with the property that
µ(I′) < µ(I) for every integrally closed ideal I′ strictly containing I.
Proof. If A is a rational singularity and I = I_Z as in the proof of Theorem 5.2,
we have shown µ(I) = −MZ + 1.
(1) If I is not good and I′ is the minimal good ideal containing I, we have
µ(I) = µ(I′). Hence I does not have SRP. (2) If I = I_Z has SRP, then Z is defined
on the minimal resolution of Spec(A), and the property "µ(I′) < µ(I) for every
integrally closed ideal I′ strictly containing I" is equivalent to saying that "for every
cycle Z′ < Z, −MZ′ < −MZ, or M(Z′ − Z) > 0". If the minimal resolution of
Spec(A) contains more than two curves, we can construct such Z ≠ nM with that
property.
Conjecture 6.2. We believe that if (A, m) is a local ring of dimension d ≥ 2 and
if the powers m^n are the only ideals of A which have the strong Rees property, then dim A = 2 and
either A is a regular local ring or Â ≅ k[[X^r, X^{r−1}Y, . . . , Y^r]] for some r.
Acknowledgment. The authors would like to thank Hailong Dao for providing
the motivation for this work. Moreover, they would like to thank Shiro Goto,
Jürgen Herzog, Ryo Takahashi, Junzo Watanabe and Santiago Zarzuela for
several valuable comments.
The second author is supported by Grant-in-Aid for Scientific Research (C)
26400053. The third author is supported by Grant-in-Aid for Scientific Research
(C) 16K05110.
References
[1] J. Asadollahi and T. J. Puthenpurakal, An analogue of a theorem due to Levin and Vasconcelos, Commutative algebra and algebraic geometry, 9–15, Contemp. Math., 390, Amer.
Math. Soc., Providence, RI, 2005.
[2] H. Dao and I. Smirnov, The multiplicity and the number of generators of an integrally closed
ideal, arXiv:1703.09427.
[3] S. Goto, Integral closedness of complete intersection ideals, J. Algebra 108 (1987), 151–160.
[4] S. Goto, S. Iai, and K.-i. Watanabe, Good ideals in Gorenstein local rings, Trans. Amer.
Math. Soc. 353 (2001), no. 6, 2309–2346 (electronic).
[5] S. Goto and K. -i.Watanabe, On graded rings, I, J. Math. Soc. Japan, 30 (1978), 179–213.
[6] S. Goto, R. Takahashi and N. Taniguchi, Almost Gorenstein rings–towards a theory of higher
dimension, J. Pure Appl. Algebra 219 (2015), 2666–2712.
[7] J. Herzog, When is a regular sequence super regular?, Nagoya Math. J. 83 (1981), 183–195.
[8] S. Huckaba and C. Huneke, Normal ideals in regular rings, J. Reine Angew. Math. 510 (1999),
63-82.
[9] S. Itoh, Coefficients of normal Hilbert polynomials, J. Algebra 150 (1992), no.1, 101–117.
[10] J. Komeda, On the existence of weierstrass points with a certain semigroup generated by 4
elements, Tsukuba J. Math. 6 (1982), no.2, 237–270.
[11] J. Komeda, and A. Ohbuchi, Existence of the non-primitive Weierstrass gap sequences on
curves of genus 8, Bull Braz Math. Soc. News Series 39 (2008), 109–121.
[12] L. Robbiano and G. Valla, On the equations defining tangent cones, Math. Proc. Camb. Phil. Soc. 88 (1980), 281–297.
[13] T. Okuma, K.-i. Watanabe and K. Yoshida, Good ideals and pg -ideals in two-dimensional
normal singularities, manuscripta math. 150 (2016), 499–520.
[14] T. Okuma, K.-i. Watanabe and K. Yoshida, Rees algebras and pg -ideals in a two-dimensional
normal local domain, Proc. Amer. Math. Soc. 145 (2017), no.1, 39–47.
[15] T. J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded
modules. I, J. Pure Appl. Algebra 208 (2007), no.1, 159–176.
[16] T. J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded
modules. Part II, J. Pure Appl. Algebra 221 (2017), no. 3, 611–631.
[17] L. J. Ratliff, Jr. and D. E.Rush, Two Notes on Reduction of ideals, Indiana Univ. Math. J.
27 (1978), no.6, 929–934.
[18] J.D. Sally, On the associated graded ring of a local Cohen-Macaulay ring, J. Math. Kyoto
Univ. 17 (1977), 19–21.
[19] J.D. Sally, Cohen-Macaulay local rings of maximal embedding dimension, J. Algebra 56
(1979), 168–183.
[20] J.D. Sally, Cohen-Macaulay Local Rings of Embedding Dimension e + d − 2, J. Algebra 83
(1983), 393–408.
[21] H. Dao and R. Takahashi, Upper bounds for dimensions of singularity categories, C. R. Acad.
Sci. Paris, Ser. I, 353 (2015), 297–301.
[22] J. Watanabe, m-full ideals, Nagoya Math. J. 106 (1987), 101–111.
[23] K.-i. Watanabe, Chains of integrally closed ideals, Commutative algebra (Grenoble/Lyon,
2001), 353–358, Contemp. Math., 331, Amer. Math. Soc., Providence, RI, 2003.
Tony J. Puthenpurakal, Department of Mathematics, IIT Bombay, Powai, Mumbai
400076, India, email: [email protected]
Kei-ichi Watanabe, Department of Mathematics, College of Humanity and Sciences,
Nihon University, Setagaya-ku, Tokyo, 156-8550, Japan, email: [email protected]
Ken-ichi Yoshida, Department of Mathematics, College of Humanities and Sciences,
Nihon University, Setagaya-ku, Tokyo, 156-8550, Japan, email: [email protected]
The Dynamic Geometry of Interaction Machine:
A Call-by-need Graph Rewriter
Koko Muroya and Dan Ghica
University of Birmingham, UK
arXiv:1703.10027v1 [] 29 Mar 2017
Abstract
Girard’s Geometry of Interaction (GoI), a semantics designed for linear logic proofs, has also been successfully applied to programming language semantics. One way is to use abstract machines that pass a token
on a fixed graph along a path indicated by the GoI. These token-passing abstract machines are space efficient,
because they handle duplicated computation by repeating the same moves of a token on the fixed graph.
Although they can be adapted to obtain sound models with regard to the equational theories of various
evaluation strategies for the lambda calculus, it can be at the expense of significant time costs. In this paper
we show a token-passing abstract machine that can implement evaluation strategies for the lambda calculus,
with certified time efficiency. Our abstract machine, called the Dynamic GoI Machine (DGoIM), rewrites
the graph to avoid replicating computation, using the token to find the redexes. The flexibility of interleaving
token transitions and graph rewriting allows the DGoIM to balance the trade-off of space and time costs.
This paper shows that the DGoIM can implement call-by-need evaluation for the lambda calculus by using
a strategy of interleaving token passing with as much graph rewriting as possible. Our quantitative analysis
confirms that the DGoIM with this strategy of interleaving the two kinds of possible operations on graphs
can be classified as “efficient” following Accattoli’s taxonomy of abstract machines.
1 Introduction
1.1 Token-passing Abstract Machines for λ-calculus
Girard’s Geometry of Interaction (GoI) [16] is a semantic framework for linear logic proofs [15]. One way of
applying it to programming language semantics is via “token-passing” abstract machines. A term in the λ-calculus is evaluated by representing it as a graph, then passing a token along a path indicated by the GoI.
Token-passing GoI decomposes higher-order computation into local token actions, or low-level interactions of
simple components. It can give strikingly innovative implementation techniques for functional programs, such as
Mackie’s Geometry of Implementation compiler [20], Ghica’s Geometry of Synthesis (GoS) high-level synthesis
tool [12], and Schöpp’s resource-aware program transformation to a low-level language [25]. The interaction-based approach is also convenient for the complexity analysis of programs, e.g. Dal Lago and Schöpp’s IntML
type system of logarithmic-space evaluation [7], and Dal Lago et al.’s linear dependent type system of polynomial-time evaluation [5, 6].
Fixed-space execution is essential for GoS, since in the case of digital circuits the memory footprint of the
program must be known at compile-time, and fixed. Using a restricted version of the call-by-name language
Idealised Algol [13], not only the graph, but also the token itself can be given a fixed size. Surprisingly, this
technique also allows the compilation of recursive programs [14]. The GoS compiler shows both the usefulness of
the GoI as a guideline for unconventional compilation and the natural affinity between its space-efficient abstract
machine and call-by-name evaluation. The practical considerations match the prior theoretical understanding
of this connection [9].
In contrast, re-evaluating a term by repeating its token actions poses a challenge for call-by-value evaluation
(e.g. [11, 24, 18, 3]) because duplicated computation must not lead to repeated evaluation. Moreover, in call-by-value, repeating token actions raises the additional technical challenge of avoiding repeating any associated
computational effects (e.g. [23, 22, 4]). A partial solution to this conundrum is to focus on the soundness of the
equational theory, while deliberately ignoring the time costs [22]. However, Fernández and Mackie suggest that
in a call-by-value scenario, the time efficiency of a token-passing abstract machine could also be improved, by
allowing a token to jump along a path, even though a time cost analysis is not given [11].
For us, solving the problem of creating a GoI-style abstract machine which computes efficiently with evaluation strategies other than call-by-name is a first step in a longer-range research programme. The compilation
techniques derived from the GoI can be extremely useful in the case of unconventional computational platforms.
1
But if GoI-style techniques are to be used in a practical setting they need to extend beyond call-by-name, not
just correctly but also efficiently.
1.2 Interleaving Token Passing with Graph Rewriting
A token jumping, rather than following a path, can be seen as a simple form of short-circuiting that path,
which is a simple form of graph-rewriting. This idea first occurs in Mackie’s work as a compiler optimisation
technique [20] and is analysed in more depth theoretically by Danos and Regnier in the so-called Interaction
Abstract Machine [9]. More general graph-rewriting-based semantics have been used in a system called virtual reduction [8], where rewriting occurs along paths indicated by GoI, but without any token-actions. The
most operational presentation of the combination of token-passing and jumping was given by Fernández and
Mackie [11]. The interleaving of token actions and rewriting is also found in Sinot’s interaction nets [26, 27].
We can reasonably think of the DGoIM as their abstract-machine realisation.
We build on these prior insights by adding more general, yet still efficient, graph-rewriting facilities to the
setting of a GoI token-passing abstract machine. We call an abstract machine that interleaves token passing
with graph rewriting the Dynamic GoI Machine (DGoIM), and we define it as a state transition system with
transitions for token passing as well as transitions for graph rewriting. What connects these two kinds of
transitions is the token trajectory through the graph, its path. By examining it, the DGoIM can detect redexes
and trigger rewriting actions.
Through graph rewriting, the DGoIM reduces sub-graphs visited by the token, avoiding repeated token
actions and improving time efficiency. On the other hand, graph rewriting can expand a graph by e.g. copying
sub-graphs, so space costs can grow. To control this trade-off of space and time cost, the DGoIM has the
flexibility of interleaving token passing with graph rewriting. Once the DGoIM detects that it has traversed a
redex, it may rewrite it, but it may also just propagate the token without rewriting the redex.
As a first step in our exploration of the flexibility of this machine, we consider the two extremal cases
of interleaving. The first extremal case is “passes-only,” in which the DGoIM never triggers graph rewriting,
yielding an ordinary token-passing abstract machine. As a typical example, the λ-term (λx.t) u is evaluated like
this:
1. A token enters the graph on the left at the bottom open edge.
2. The token visits and goes through the left sub-graph λx.t.
3. Whenever the token detects an occurrence of the variable x in t, it traverses the right sub-graph u, then returns carrying the resulting value.
4. The token finally exits the graph at the bottom open edge.
[Inline figure: the graph of (λx.t) u, with the sub-graphs λx.t and u side by side.]
Step 3 is repeated whenever term u needs to be re-evaluated. This strategy of interleaving corresponds to
call-by-name reduction.
The other extreme is “rewrites-first,” in which the DGoIM interleaves token passing with as much, and as
early, graph rewriting as possible, guided by the token. This corresponds to both call-by-value and call-by-need
reductions, the difference between the two being the trajectory of the token. In the case of call-by-value, the
token will enter the graph from the bottom, traverse the left-hand-side sub-graph, which happens to be already
a value, then visit sub-graph u even before x is used in a call. While traversing u, it will cause rewrites such that
when the token exits, it leaves behind the graph of a machine corresponding to a value v such that u reduces
to v. The difference with call-by-need is that the token will visit u only when x is encountered in λx.t. In both
cases, if repeated evaluation is required then the sub-graph corresponding now to v is copied, so that one copy
can be further rewritten, if needed, while the original is kept for later reference.
1.3 Contributions
This work presents a DGoIM model for call-by-need, which can be seen as a case study of the flexibility achieved
through controlled interleaving of rewriting and token-passing. This is achieved through a rewriting strategy
which turns out to be as natural as the passes-only strategy is for implementing call-by-name. The DGoIM
avoids re-evaluation of a sub-term by rewriting any sub-graph visited by a token so that the updated sub-graph
represents the evaluation result, but, unlike call-by-value, it starts by evaluating the sub-graph corresponding
to the function λx.t first. We chose call-by-need mainly because of the technical challenges it poses. Adapting
the technique to call-by-value is a straightforward exercise, and we discuss other alternatives in the Conclusion.
We analyse the time cost of the DGoIM with the rewrites-first interleaving, using Accattoli et al.’s general
methodology for quantitative analysis [2, 1]. Their method cannot be used “off the shelf,” because the DGoIM
does not satisfy one of the assumptions used in [1, Sec. 3]. Our machine uses a more refined transition system,
2
in which several steps correspond to a single one in loc. cit. We overcome this technical difficulty by building
a weak simulation of Danvy and Zerny’s storeless abstract machine [10] to which the recipe does apply. The
result of the quantitative analysis confirms that the DGoIM with the rewrites-first interleaving can be classified
as “efficient,” following Accattoli’s taxonomy of abstract machines introduced in [1].
As we intend to use the DGoIM as a starting point for semantics-directed compilation, this result is an
important confirmation that no hidden inefficiencies lurk within the fabric of the rather complex machinery of
the DGoIM.
2 The Dynamic GoI Machine
2.1 Well-boxed Graphs
The graphs used to construct the DGoIM are essentially MELL proof structures [15] of the multiplicative and
exponential fragment of linear logic. They are directed, and built over the fixed set of nodes called “generators”
shown in Fig. 1.
[Figure 1: Generators of Graphs; the nodes Ax, Cut, ⊗, `, !, ?, D and Cn.]
[Figure 2: A !-box H, with its principal door (!-node) and auxiliary doors (?-nodes).]
A Cn -node is annotated by a natural number n that indicates its in-degree, i.e. the number of incoming
edges. It generalises a contraction node, whose in-degree is 2, and a weakening node, whose in-degree is 0, of
MELL proof structures. In Fig. 1, a bunch of n edges is depicted by a single arrow with a strike-out.
Graphs must satisfy the well-formedness condition below. Note that, unlike the usual approach [15], we need
not assign MELL formulas to edges, nor require a graph to be a valid proof net.
Definition 2.1 (well-boxed). A directed graph G built over the generators in Fig. 1 is well-boxed if:
• it has no incoming edges
• each !-node v in G comes with a sub-graph H of G and an arbitrary number of ?-nodes ~u such that:
– the sub-graph H (called “!-box”) is well-boxed inductively and has at least one outgoing edge
– the !-node v (called “principal door of H”) is the target of one outgoing edge of H
– the ?-nodes ~u (called “auxiliary doors of H”) are the targets of all the other outgoing edges of H
• each ?-node is an auxiliary door of exactly one !-box
• any two distinct !-boxes with distinct principal doors are either disjoint or nested
Note that a !-box might have no auxiliary doors. We use a dashed box to indicate a !-box together with its
principal door and its auxiliary doors, as in Fig. 2. The auxiliary doors are depicted by a single ?-node with a
thick frame and with single incoming and outgoing arrows with strike-outs. Directions of edges are omitted in
the rest of the paper, if not ambiguous, to reduce visual clutter.
2.2 Pass Transitions and Rewrite Transitions
The DGoIM is formalised as a labelled transition system with two kinds of transitions, namely pass transitions
99K and rewrite transitions ⇝. The labels of transitions are b, s and o, which stand for “beta,” “substitution,” and
“overheads” respectively.
Let L be a fixed countable (infinite) set of names. The state of the transition system s = (G, p, h, m) consists
of the following elements:
• a named well-boxed graph G = (G, ℓG), that is, a well-boxed graph G with a naming ℓG that assigns a
unique name α ∈ L to each node of G
• a pair p = (e, d), called position, of an edge e of G and a direction d ∈ {↑, ↓}
• a history stack h defined by the grammar h ::= ε | Ax_α : h | Cut_α : h | ⊗_α : h | `_α : h | !_α : h | D_α : h |
Cn_α : h, where α ∈ L and n is some positive natural number.
• a multiplicative stack m defined by the BNF grammar m ::= ε | l : m | r : m.
3
We refer to a node by its name, i.e. we say “a node α” instead of “a node whose name is α.”
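For concreteness, the data making up a state can be transcribed into types. The following OCaml sketch is our own rendering (all type and constructor names are ours, not part of the formal development); it keeps graphs as plain node/edge lists and does not enforce the well-boxedness invariant of Def. 2.1:

```ocaml
(* A sketch of DGoIM states; names and representation are ours. *)
type name = string                      (* the countable set L of names *)

type node_kind =                        (* the generators of Fig. 1 *)
  | Ax | Cut | Tensor | Par | Bang | Quest | Der
  | Con of int                          (* a Cn-node with in-degree n *)

type node = { name : name; kind : node_kind }
type edge = { src : name; tgt : name }
type graph = { nodes : node list; edges : edge list }

type direction = Up | Down              (* the directions ↑ and ↓ *)
type position = edge * direction        (* the token: an edge plus a direction *)

(* history stack h ::= ε | Ax_α : h | Cut_α : h | ... : traversed node kinds with names *)
type history = (node_kind * name) list

(* multiplicative stack m ::= ε | l : m | r : m *)
type mult = L | R

type state = { g : graph; pos : position; hist : history; mstack : mult list }
```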
A pass transition (G, p, h, m) 99Ko (G, p′ , h′ , m′ ) changes a position using a multiplicative stack, pushes to a
history stack, and keeps a named graph unchanged. All pass transitions have the label o.
Fig. 3 shows pass transitions graphically, omitting irrelevant parts of graphs. A position p = (e, d) is
represented by a bullet • (called “token”) on the edge e together with the direction d. Recall that an edge with
a strike-out represents a bunch of edges. The transition in the last line of Fig. 3 (where we assume n > 0) moves
a token from one of the incoming edges of a Cn -node to the outgoing edge of the node. Node names α ∈ L are
indicated wherever needed.
[Figure 3: Pass Transitions ($ ∈ {⊗, `}, n > 0); each 99K_o step moves the token past an Ax-, Cut-, ⊗-, `-, !-, D- or Cn-node, pushing the node's name onto the history stack and pushing or popping l, r on the multiplicative stack at ⊗- and `-nodes.]
A rewrite transition (G, (e, d), h, m) ⇝x (G′, (e′, d), h′, m) consumes some elements of a history stack, rewrites
a sub-graph of a named graph, and updates a position (or, more precisely, its edge). The label x of a rewrite
transition ⇝x is either b, s or o. Fig. 4 shows rewrite transitions in the same manner as Fig. 3. Multiplicative
stacks are not present in the figure since they are irrelevant. The ♯-node represents some arbitrary node (incoming
edges omitted). We can see that no rewrite transition breaks the well-boxed-ness of a graph.
The rewrite transitions (1),(2),(3), and (4) are exactly taken from MELL cut elimination [15]. The rewrite
transition (5) is a variant of (1). It acts on a connected pair of a Cut-node and an Ax-node that arises as a
result of the transition (6) or (7) but cannot be rewritten by the transition (1). These transitions (6) and (7)
are inspired by the MELL cut elimination process for (binary) contraction nodes; note that we assume n > 0 in
Fig. 4.
The rewrite transition (6) in Fig. 4 deserves further explanation. The sub-graph H^≈ is a copy of the !-box H
where all the names are replaced with fresh ones. The thick C_{g+f^{-1}}-node and C_{g+2f^{-1}}-node represent families
{C_{g(j)+f^{-1}(j)}}_{j=0}^{m} and {C_{g(j)+2f^{-1}(j)}}_{j=0}^{m} of C-nodes, respectively. They are connected to ?-nodes ε⃗ = ε_0, . . . , ε_l and
μ⃗ = μ_0, . . . , μ_l in such a way that:
• the natural numbers l, m satisfy l ≥ m, and come with a surjection f : {0, . . . , l} ↠ {0, . . . , m} and a
function g : {0, . . . , m} → N to the set N of natural numbers
• each ?-node ε_i and each ?-node μ_i are both connected to the C-node φ_{f(i)}
• each C-node φ_j has g(j) incoming edges whose source is none of the ?-nodes ε⃗, μ⃗.
Some rewrite transitions introduce new nodes to a graph. We require that the uniqueness of names throughout a whole graph is not violated by these transitions. Under this requirement, the introduced names ν, μ⃗ and
the renaming H^≈ in Fig. 4 can be arbitrary.
Definition 2.2. We call a state ((G, ℓG), p, h, m) rooted at e0, for an open (outgoing) edge e0 of G, if there
exists a finite sequence ((G, ℓG), (e0, ↑), ε, ε) 99K* ((G, ℓG), p, h, m) of pass transitions such that the position p
appears only last in the sequence.
[Figure 4: Rewrite Transitions (n > 0); the rewrites (1)–(7) described in the text, each triggered by a history-stack prefix such as Cut_α : Ax_β : h.]
Lem. 2.3(1) below implies that the DGoIM can determine whether a rewrite transition is possible at a rooted
state by only examining a history stack. The rooted property is preserved by transitions.
Lemma 2.3 (rooted states). Let ((G, ℓG), (e, d), h, m) be a state rooted at e0 with a (finite) sequence
((G, ℓG), (e0, ↑), ε, ε) 99K* ((G, ℓG), (e, d), h, m).
1. The history stack represents an (undirected and possibly cyclic) path of the graph G connecting the edges e0 and e.
2. If a transition ((G, ℓG), (e, d), h, m) (99K ∪ ⇝) ((G′, ℓG′), p′, h′, m′) is possible, the open edges of G′ are
bijective to those of G, and the state ((G′, ℓG′), p′, h′, m′) is rooted at the open edge corresponding to e0.
Proof. (Sketch.) The proof of the first part is by induction on the length of the sequence of pass transitions.
For the second part, rewrite transitions ⇝ modify open edges of a graph in a bijective way. The edge that a
state is rooted at can be modified only by the rewrite transitions (1) and (5) involving Ax-nodes.
2.3 Cost Analysis of the DGoIM
The time cost of updating stacks is constant, as each transition changes only a fixed number of top elements of
stacks. Updating a position is local and needs constant time, as it does not require searching beyond the next
edge in the graph from the current edge. We can conclude all pass transitions take constant time.
We estimate the time cost of rewrite transitions by counting updated nodes. The rewrite transitions (1)–(3)
involve a fixed number of nodes, and the transition (7) eliminates one C1 -node. Only the transitions (4) and (6)
have non-constant time cost. The number of doors deleted in the transition (4) can be arbitrary, and so is the
number of nodes introduced in the transition (6).
Pass transitions and rewrite transitions are separately deterministic (up to the choice of new names). However, at some states both a pass transition and a rewrite transition are possible. We here opt for the following
“rewrites-first” way of interleaving pass transitions with as many rewrite transitions as possible:
s _x s′ :⇔ s ⇝x s′ if a rewrite transition ⇝x is possible, and s 99Kx s′ if only a pass transition 99Kx is possible.
The DGoIM with this strategy yields a deterministic labelled transition system _ up to the choice of new names
in rewrite transitions. We denote it by DGoIM_ , making the strategy explicit. Note that there can be other
strategies of interleaving although we do not explore them here.
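The rewrites-first strategy itself can be expressed as a generic driver over any implementation of the two transition relations. The following OCaml sketch is ours, reusing the state type from the sketch in Sec. 2.2; try_rewrite and try_pass are assumed partial step functions implementing the transitions of Figs. 3 and 4:

```ocaml
(* Rewrites-first interleaving: prefer a rewrite transition whenever one is
   possible, otherwise fall back to a pass transition. *)
let rewrites_first
    ~(try_rewrite : state -> state option)
    ~(try_pass : state -> state option)
    (s : state) : state option =
  match try_rewrite s with
  | Some s' -> Some s'      (* s ⇝ s' *)
  | None -> try_pass s      (* s 99K s', only when no rewrite applies *)

(* Iterate the one-step driver until no transition is possible. *)
let rec run (step : state -> state option) (s : state) : state =
  match step s with
  | None -> s
  | Some s' -> run step s'
```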
Before we conclude, several considerations about space cost analysis are in order. Space costs are generally bounded by
time costs, so our analysis gives an implicit guarantee that space usage will not explode. But if a more
refined space cost analysis is desired, the following might prove to be useful.
Terms: t ::= x | λx.t | t t | t[x ← t]
Values: v ::= λx.t
Evaluation contexts: E ::= ⟨·⟩ | E t | E[x ← t] | E⟨x⟩[x ← E]
Substitution contexts: A ::= ⟨·⟩ | A[x ← t]
Pure terms: \bar{t} ::= x | λx.\bar{t} | \bar{t} \bar{t}
Pure values: \bar{v} ::= λx.\bar{t}

(t u, E)_term →o (t, E⟨⟨·⟩ u⟩)_term (8)
(x, E1⟨E2[x ← t]⟩)_term →o (t, E1⟨E2⟨x⟩[x ← ⟨·⟩]⟩)_term (9)
(v, E)_term →o (v, E)_ctxt (10)
(λx.t, E⟨A u⟩)_ctxt →b (t, E⟨A⟨⟨·⟩[x ← u]⟩⟩)_term (11)
(v, E1⟨E2⟨x⟩[x ← A]⟩)_ctxt →s (v^≈, E1⟨A⟨E2[x ← v]⟩⟩)_ctxt (if x ∈ FV_∅(E2)) (12)
(v, E1⟨E2⟨x⟩[x ← A]⟩)_ctxt →s (v, E1⟨A⟨E2⟩⟩)_ctxt (if x ∉ FV_∅(E2)) (13)

Figure 5: Call-by-need Storeless Abstract Machine (SAM)
The space required in implementing a named well-boxed graph is bounded by the number of its nodes. The
number of edges is linear in the number of nodes, because each generator has a fixed out-degree and every edge
of a well-boxed graph has its source.
Additionally a !-box can be represented by associating its auxiliary doors to its principal door. This adds
connections between doors to a graph that are as many as ?-nodes. It enables the DGoIM to identify nodes of
a !-box by following edges from its principal and auxiliary doors. Nodes in a !-box that are not connected to
doors can be ignored, since these nodes are never visited by a token (i.e. pointed by a position) as long as the
DGoIM acts on rooted states.
Only the rewrite transition (6) can increase the number of nodes of a graph by copying a !-box with its
doors. Rewrite transitions can copy !-boxes and eliminate the !-box structure, but they never create new !-boxes
or change existing ones. This means that, in a sequence of transitions that starts with a graph G, any !-boxes
copied by the rewrite transition (6) are sub-graphs of the graph G. Therefore the number of nodes of a graph
increases linearly in the number of transitions.
Elements of history stacks and multiplicative stacks, as well as a position, are essentially pointers to nodes.
Because each pass/rewrite transition adds at most one element to each stack, the lengths of stacks also grow
linearly in the number of transitions.
3 Weak Simulation of the Call-by-Need SAM
3.1 Storeless Abstract Machine (SAM)
We show that the DGoIM_ implements call-by-need evaluation by building a weak simulation of the call-by-need
Storeless Abstract Machine (SAM) defined in Fig. 5. It simplifies Danvy and Zerny’s storeless machine [10,
Fig. 8] and accommodates a partial mechanism of garbage collection (namely, transition (13)). We will return
to a discussion of garbage collection at the end of this section.
The SAM is a labelled transition system between configurations (t, E). They are classified into two groups,
namely term configurations and context configurations, that are indicated by annotations term, ctxt respectively.
Pure terms (resp. pure values) are terms (resp. values) that contain no explicit substitutions t[x ← u]; we
sometimes omit the word “pure” and the overline in notation as long as this raises no confusion.
Each evaluation context E contains exactly one open hole h·i, and replacing it with a term t (or an evaluation
context E ′ ) yields a term Ehti (or an evaluation context EhE ′ i) called plugging. In particular an evaluation
context E ′ hxi[x ← E] replaces the open hole of E ′ with x and keeps the open hole of E.
Labels of transitions are the same as those used for the DGoIM (i.e. b, s and o). The transition (11), with
the label b, corresponds to the β-reduction where evaluation and substitution of function arguments are delayed.
Substitution happens in the transitions (12) and (13), with the label s, which replace exactly one occurrence of
a variable. The other transitions, with the label o, namely (t, E) →o (t′, E′), search for a redex by rearranging a
configuration. The two pluggings E⟨t⟩ and E′⟨t′⟩ indeed yield exactly the same term.
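To make the syntax concrete, the grammar of Fig. 5 and the plugging operation transcribe directly into a datatype; this OCaml sketch is our own and is not part of the original development:

```ocaml
(* Terms with explicit substitutions and evaluation contexts, after Fig. 5. *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | ESub of term * string * term       (* t[x <- u] *)

type ectx =
  | Hole                               (* <.> *)
  | CApp of ectx * term                (* E t *)
  | CSub of ectx * string * term       (* E[x <- t] *)
  | CLookup of ectx * string * ectx    (* E'<x>[x <- E]: hole in the substitution *)

(* Plugging E<s>: replace the open hole of E by s.  In the CLookup case the
   hole of E' is filled with the variable x and the hole of the inner context
   is kept, exactly as described above. *)
let rec plug (e : ectx) (s : term) : term =
  match e with
  | Hole -> s
  | CApp (e', u) -> App (plug e' s, u)
  | CSub (e', x, u) -> ESub (plug e' s, x, u)
  | CLookup (e', x, ei) -> ESub (plug e' (Var x), x, plug ei s)
```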
We characterise “free” variables using multisets of variables. Multisets make explicit how many times a
variable is duplicated in a term (or an evaluation context). This information of duplication is later used in
translating terms to graphs.
Notation (multiset). A multiset [x, . . . , x] consists of a finite number of copies of x. The multiplicity of x in a
multiset M is denoted by M(x). We write x ∈k M if M(x) = k, x ∈ M if M(x) > 0 and x ∉ M if M(x) = 0.
A multiset M comes with its support set supp(M). For two multisets M and M′, their sum and difference are
denoted by M + M′ and M − M′ respectively. Removing all x from a multiset M yields the multiset M\x, e.g.
[x, x, y]\x = [y].
Each term t and each evaluation context E are respectively assigned multisets of variables FV(t) and FV_M(E), with
M a multiset of variables. The multisets FV are defined inductively as follows.
FV(x) := [x],  FV(λx.t) := FV(t)\x,  FV(t u) := FV(t) + FV(u),  FV(t[x ← u]) := (FV(t)\x) + FV(u).
FV_M(⟨·⟩) := M,  FV_M(E t) := FV_M(E) + FV(t),  FV_M(E[x ← t]) := (FV_M(E))\x + FV(t),  FV_M(E′⟨x⟩[x ← E]) := (FV_[x](E′))\x + FV_M(E).
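These defining equations transcribe directly into code; the following OCaml sketch (ours, reusing the term type from the sketch in Sec. 3.1) represents a multiset as a map from variables to multiplicities and implements FV(t); FV_M(E) follows the same pattern:

```ocaml
(* Free-variable multisets FV(t): a map from names to multiplicities. *)
module M = Map.Make (String)
type multiset = int M.t

let single (x : string) : multiset = M.singleton x 1
let sum : multiset -> multiset -> multiset =
  M.union (fun _ m n -> Some (m + n))                          (* M + M' *)
let remove_all (x : string) (m : multiset) : multiset = M.remove x m  (* M \ x *)

let rec fv : term -> multiset = function
  | Var x -> single x                          (* FV(x) = [x] *)
  | Lam (x, t) -> remove_all x (fv t)          (* FV(λx.t) = FV(t)\x *)
  | App (t, u) -> sum (fv t) (fv u)            (* FV(t u) = FV(t) + FV(u) *)
  | ESub (t, x, u) ->                          (* FV(t[x<-u]) = (FV(t)\x) + FV(u) *)
      sum (remove_all x (fv t)) (fv u)
```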
The following equations can be proved by a straightforward induction on E.
Lemma 3.1 (decomposition). FV(E⟨t⟩) = FV_{FV(t)}(E) and FV_M(E⟨E′⟩) = FV_{FV_M(E′)}(E).
A variable x is bound in a term t if it appears in the form λx.u or u[x ← u′]. A variable x is captured
in an evaluation context E if it appears in the form E′[x ← t] (but not in the form E′⟨x⟩[x ← E′′]). The
transitions (12) and (13) depend on whether or not the bound variable x appears in the evaluation context E2.
If the variable x appears, the value v is kept for later use and its copy v^≈ is substituted for x. If not, the value
v itself is substituted for x.
The SAM does not assume α-equivalence, but explicitly deals with it in copying a value. The copy v^≈
has all its bound variables replaced by distinct fresh variables (i.e. distinct variables that do not appear in a
whole configuration). This implies that the SAM is deterministic up to the choice of new variables introduced
in copying.
A term t is closed if FV(t) = ∅, and well-named if each variable gets bound at most once in t and each
bound variable x in t satisfies x ∉ FV(t). An initial configuration is a term configuration (t0, ⟨·⟩)_term where t0
is closed and well-named. A finite sequence of transitions from an initial configuration is called an execution. A
reachable configuration (t, E), that is, a configuration coming with an execution from some initial configuration
to itself, satisfies the following invariant properties.
Lemma 3.2 (reachable configurations). Let (t, E) be a reachable configuration from an initial configuration
(t0, ⟨·⟩)_term. The term t is a sub-term of the initial term t0 up to α-equivalence, and the plugging E⟨t⟩ is closed
and well-named.
Proof. (Sketch.) The proof is by induction on the length of the execution (t0, ⟨·⟩)_term →* (t, E). Not only the
term t but also the term u′ in any sub-term t′ u′ or t′[x ← u′] of the plugging E⟨t⟩ is a sub-term of the initial
term t0. The transition (12) renames a value in a way that preserves the closedness and well-named-ness
of pluggings. In the transition (13), where an explicit substitution for a bound variable x is eliminated, the
induction hypothesis ensures that the variable x does not occur in the plugging E1⟨A⟨E2⟨v⟩⟩⟩.
We now conclude with a brief consideration on garbage collection. Transition (13) eliminates an explicit
substitution and therefore implements a partial mechanism of garbage collection. The mechanism is partial
because only an explicit substitution that is looked up in an execution can be eliminated, as illustrated below.
The explicit substitution [x ← λz.z] is eliminated in the first example, but not in the second example, because
the bound variable x does not occur.
((λx.x) (λz.z), ⟨·⟩)_term →* (λz.z, ⟨·⟩)_ctxt
((λx.λy.y) (λz.z), ⟨·⟩)_term →* (λy.y, ⟨·⟩[x ← λz.z])_ctxt
We incorporate this partial garbage collection to make clear the behaviour of the DGoIM_ , in particular the
use of the rewrite transitions (6) and (7).
[Figure 6: Inductive Translation (·)† of Terms to Well-boxed Graphs; the cases x†, (λx.t)†, (t u)† and (t[x ← u])†, where every value λx.t is translated to a !-box.]
[Figure 7: Inductive Translation (·)†_M of Evaluation Contexts to Graphs; analogous cases for x, λx.t, t u and t[x ← u].]
3.2 Translation and Weak Simulation
A weak simulation is built on top of translations of terms and evaluation contexts. The translations (·)† are
inductively defined in Fig. 6 and Fig. 7. What underlies them is the so-called “call-by-value” translation of
intuitionistic logic to linear logic. This translates all and only values to !-boxes that can be copied by rewrite
transitions.
The translation t† of a term t is a well-boxed graph, whose open edges are annotated with the variables FV(t)
to help understanding. We continue representing a bunch of edges by a single edge and a strike-out, with
annotations denoted by a multiset, and a bunch of nodes by a single thick node. The translation E†_M
of an evaluation context E, given a multiset M of variables, is not a well-boxed graph because it has incoming
edges. Lem. 3.3 is analogous to Lem. 3.1; the proofs are by straightforward induction on E.
Lemma 3.3 (decomposition). [Diagrammatic statement: (E⟨t⟩)† decomposes as the graph t†, with open edges FV(t), plugged into E†_{FV(t)}, whose open edges are FV_{FV(t)}(E); likewise (E⟨E′⟩)†_M decomposes as (E′)†_M plugged into E†_{FV_M(E′)}, with open edges FV_{FV_M(E′)}(E).]
The translations (·)† are lifted to a binary relation ◁ between reachable configurations of the SAM and rooted
states of the DGoIM_.
Definition 3.4 (binary relation ◁). A reachable configuration c and a state ((G, ℓG), p, h, m) satisfy c ◁ ((G, ℓG), p, h, m) if and only if:
• the pair (G, p) is given by the translation of c [diagrams in the original]: if c = (t, E)_term, the token points upwards at the root of the sub-graph t† inside (E⟨t⟩)†; if c = (v, E)_ctxt, the token points upwards at (a copy of) the !-box v† inside the translation of the configuration;
• ℓG is an arbitrary naming
• ((G, ℓG ), p, h, m) is rooted at the unique open edge of G.
Note that the graph G in the above definition has exactly one open edge, because it is equal to the translation
(E⟨t⟩)† (Lem. 3.3) and the plugging E⟨t⟩ is closed (Lem. 3.2).
The binary relation ◁ gives a weak simulation, as stated below. It is weak in Milner’s sense [21], where
transitions with the label o are regarded as internal. We can conclude from Thm. 3.5 below that the DGoIM_
soundly implements call-by-need evaluation.
Theorem 3.5 (weak simulation). Let a configuration c and a state s satisfy c ◁ s.
1. If a transition c →b c′ of the SAM is possible, there exists a sequence s _o^2 _b _o s′ such that c′ ◁ s′.
2. If a transition c →s c′ of the SAM is possible, there exists a sequence s _s _o s′ such that c′ ◁ s′.
3. If a transition c →o c′ of the SAM is possible, there exists a sequence s _o^N s′ such that 0 < N ≤ 4 and
c′ ◁ s′.
4. No transition _ is possible at the state s′ if c′ = (v, A)_ctxt.
Proof. We start with two basic observations. First, only a pass transition is possible at a state s that satisfies
(t, E)_term ◁ s, and second, no transition is possible at a state s that satisfies (v, A)_ctxt ◁ s.
Fig. 8 shows how the DGoIM_ simulates each transition of the SAM. Stacks, annotations of edges, and some
annotations of C-nodes are omitted. The equations in Fig. 8 apply the decomposition properties in Lem. 3.3
as well as the further decomposition properties in Lem. 3.6 below. In the application, we exploit the
closedness and well-named-ness of reachable configurations (in the sense of Lem. 3.2).
Lemma 3.6 (decomposition). Let M0, M be multisets of variables.
1. The translation A†_M of a substitution context A has a unique decomposition into a graph A‡_M together with the bundle of edges M, with open edges FV_M(A). [Diagram in the original.]
2. If no variables in M0 are captured in an evaluation context E, the translation E†_{M0+M} is equal to the graph E†_M with the bundle of edges M0 passing it by. [Diagram in the original.]
3. If each variable in M0 is captured in an evaluation context E exactly once, the translation E†_{M0+M} has a unique decomposition into a graph E††_M together with C_{M0+M1}-nodes and Cut-nodes over supp(M0). [Diagram in the original.] The multiset M1 satisfies supp(M1) ⊆ supp(M0), and the thick C_{M0+M1}-node represents a family {C_{M0(x)+M1(x)}}_{x∈supp(M0)} of C-nodes.
[Figure 8: Illustration of Simulation (k > 0); the DGoIM_ transition sequences simulating the SAM transitions (8)–(13), cf. the proof of Thm. 3.5.]
[Figure 9: Decomposing E†_{M0+M}; the chain of equalities, using part 2 and the induction hypothesis, applied in the proof of Lemma 3.6(3).]
Proof. The proofs of 1. and 2. are by straightforward inductions on A and E respectively. The proof of 3. is
by induction on the dimension of M0, i.e. the size of the support set supp(M0). The base case where M0 = ∅
is obvious. In the inductive case, let x satisfy x ∈ M0. Since x is captured exactly once in E by assumption,
the evaluation context E can be decomposed as E = E1⟨E2[x ← t]⟩ such that x is not captured in E1 or E2.
The evaluation context E2 satisfies x ∈k FV_{M0+M}(E2) for some positive multiplicity k. Moreover, the multiset
M0 can be decomposed as M0 = M1 + x + M2 such that all the variables in M1 (resp. M2) are only captured
in E1 (resp. E2). The translation E†_{M0+M} can be decomposed as in Fig. 9. Finally, we let E††_M consist of
(E2)††, t† and (E1)††.
4 Time Cost Analysis of Rewrites-First Interleaving
4.1 Recipe for Time Cost Analysis
Our time cost analysis of the DGoIM_ follows Accattoli’s recipe, described in [2, 1], of analysing complexity of
abstract machines. This section recalls the recipe and explains how it applies to the DGoIM_ .
The time cost analysis focuses on how efficiently an abstract machine implements an evaluation strategy.
In other words, we are not interested in minimising the number of β-reduction steps simulated by an abstract
machine. Our interest is in making the number of transitions of an abstract machine “reasonable,” compared
to the number of necessary β-reduction steps determined by a given evaluation strategy.
Accattoli’s recipe assumes that an abstract machine has three groups of transitions: 1) “β-transitions” that
correspond to β-reduction in which substitution is delayed, 2) transitions that perform substitution, and 3) other
“overhead” transitions. We incorporate this classification using the labels b, s, o of transitions.
Another assumption of the recipe is that each step of β-reduction is simulated by a single transition of
an abstract machine, and so is the substitution of each occurrence of a variable. This is satisfied by many known
abstract machines, including the SAM, but not by the DGoIM_. The DGoIM_ has “finer” transitions and
can take several transitions to simulate a single step of reduction (hence a single transition of the SAM, as we
can observe in Thm. 3.5). In spite of this mismatch we can still follow the recipe, thanks to the weak simulation
. It discloses what transitions of the DGoIM exactly correspond to β-reduction and substitution, and gives
a concrete number of overhead transitions that the DGoIM_ needs to simulate β-reduction and substitution.
The recipe for the time cost analysis is:
1. Examine the number of transitions, by means of the size of input and the number of β-transitions.
2. Estimate time cost of single transitions.
3. Derive a bound of the overall execution time cost.
4. Classify an abstract machine according to its execution time cost.
The last step is accompanied by the following taxonomy of abstract machines introduced in [1].
Definition 4.1 (classes of abstract machines [1, Def. 7.1]).
1. An abstract machine is efficient if its execution time cost is linear in both the input size and the number
of β-transitions.
2. An abstract machine is reasonable if its execution time cost is polynomial in the input size and the number
of β-transitions.
3. An abstract machine is unreasonable if it is not reasonable.
The input size in our case is given by the size |t| of a term t, inductively defined by:
|x| := 1,  |λx.t| := |t| + 1,  |t u| := |t| + |u| + 1,  |t[x ← u]| := |t| + |u| + 1.
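Transcribed directly (our sketch, reusing the term type from Sec. 3.1):

```ocaml
(* The input size |t| of a term, following the inductive definition above. *)
let rec size : term -> int = function
  | Var _ -> 1
  | Lam (_, t) -> 1 + size t
  | App (t, u) -> 1 + size t + size u
  | ESub (t, _, u) -> 1 + size t + size u
```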
Given a sequence r of transitions (of either the SAM or the DGoIM_ ), we denote the number of transitions
with a label x in r by |r|x . Since we use the fixed set {b, s, o} of labels, the length |r| of the sequence r is equal
to the sum |r|b + |r|s + |r|o .
4.2 Number of Transitions
We first estimate the number of transitions of the SAM, and then derive estimation for the DGoIM_ .
Lemma 4.2 (quantitative bounds for SAM). Each execution e from an initial configuration (t0, ⟨·⟩)_term comes
with the following inequalities:
|e|_s ≤ |e|_b,  |e|_o ≤ |t0| · (5 · |e|_b + 2) + (3 · |e|_b + 1).
Proof. The proof is analogous to the discussion in [2, Sec. 11]. Let |e|(8) , |e|(9) and |e|(10) be the numbers of
transitions (8), (9) and (10) in e, respectively. The number |e|o is equal to the sum of these three numbers.
The transition (11) introduces one explicit substitution, and the other transitions never increase the number
of explicit substitutions. In particular, the transition (12) copies a pure value, which means no explicit substitutions
are copied. Therefore we can bound the number of transitions that concern explicit substitutions, and obtain
|e|_(9) ≤ |e|_b and |e|_s ≤ |e|_b.
Each occurrence of the transition (10) in an execution is either the last transition of the execution or
followed by the transitions (11), (12) and (13). This yields the inequality |e|(10) ≤ |e|b + |e|s + 1 and hence
|e|(10) ≤ 2 · |e|b + 1.
The transition (8) reduces the size of the pure term that is the first component of a configuration. The
pure term is always a sub-term of the initial term t0 (Lem. 3.2). This means each maximal subsequence of an
execution that solely consists of the transition (8) has length at most |t0|. Such a maximal subsequence
either occurs at the end of an execution or is followed by transitions other than the transition (8). Therefore
the number of these maximal sub-sequences is no more than |e|_b + |e|_s + |e|_(9) + |e|_(10) + 1, which can be bounded
by 5 · |e|_b + 2. A bound on the number |e|_(8) is given by multiplying these two bounds, namely we obtain
|e|_(8) ≤ |t0| · (5 · |e|_b + 2).
Combining these bounds for the SAM with the weak simulation , we can estimate the number of transitions
of the DGoIM_ as below.
Proposition 4.3 (quantitative bounds for DGoIM_). Let r : s0 _* s be a sequence of transitions of the
DGoIM_. If there exists an execution (t0, ⟨·⟩)_term →* (t, E) of the SAM such that s0 ◁ (t0, ⟨·⟩)_term and
s ◁ (t, E), the sequence r comes with the following inequalities:
|r|_s ≤ |r|_b,  |r|_o ≤ 4 · |t0| · (5 · |r|_b + 2) + (16 · |r|_b + 4).
Proof. This is a direct consequence of Lem. 4.2 and Thm. 3.5.
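For the reader's convenience, the arithmetic behind this one-line proof can be spelled out (our gloss, using only Thm. 3.5 and Lem. 4.2): the simulation gives |r|_b = |e|_b and |r|_s = |e|_s, and each SAM transition with label b, s, o contributes at most 3, 1, 4 overhead transitions respectively, so

```latex
\[
  |r|_o \;\le\; 3|e|_b + |e|_s + 4|e|_o
        \;\le\; 4|e|_b + 4\bigl(|t_0|(5|e|_b + 2) + 3|e|_b + 1\bigr)
        \;=\; 4\,|t_0|\,(5|r|_b + 2) + 16|r|_b + 4 .
\]
```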
12
4.3 Execution Time Cost
We already discussed the time cost of single transitions of the DGoIM in Sec. 2.3. It is worth noting that the
discussion in Sec. 2.3 is independent of any particular choice of a rewriting and token-passing interleaving
strategy.
Thm. 4.4 below gives a bound on the execution time cost of the DGoIM_. We can conclude that, according
to Accattoli’s taxonomy (see Def. 4.1), the DGoIM_ is “efficient” as an abstract machine for call-by-need
evaluation.
Theorem 4.4 (time cost). Let C, D be fixed natural numbers, and let r : s0 _* s be a sequence of transitions of
the DGoIM_. If there exists an execution (t0, ⟨·⟩)_term →* (t, E) of the SAM such that s0 ◁ (t0, ⟨·⟩)_term and
s ◁ (t, E), the total time cost T(r) of the sequence r satisfies:
T(r) = O((|t0| + C) · (|r|_b + D)).
Proof. We estimated in Sec. 2.3 that the time cost of single transitions, except for the rewrite transitions (4) and
(6), is constant. The time cost of the rewrite transitions (4) and (6) depends on the number of doors and/or nodes
of a !-box.
Since the sequence r of transitions simulates an execution of the SAM, every !-box concerned in r arises as
the translation of a value v. By definition of the translation (·)† (Fig. 6), the graph v† is a !-box, with as many
auxiliary doors as occurrences of free variables in v. The number of auxiliary doors is no more than the number
of nodes in the !-box, due to the well-boxed-ness condition, and this number is linear in the size |v| of the value.
Moreover, the value v appears as the first component of a context configuration, and therefore it is a sub-term
of the initial term t0 (Lem. 3.2). As a result, the time cost of each occurrence of the rewrite transitions (4) and (6)
in the sequence r is linear in the size |t0|.
The bound on the total time cost T(r) of the sequence r is given by combining these estimations for single
transitions with the results of Prop. 4.3.
Corollary 4.5. The DGoIM_ is an efficient abstract machine, in the sense of Def. 4.1.
5 Conclusions
We introduced the DGoIM, which can interleave token passing with graph rewriting informed by the trajectory
of the token. We focused on the rewrites-first interleaving and proved that it enables the DGoIM to implement
the call-by-need evaluation strategy. The quantitative analysis of time cost certified that the DGoIM_ gives
an “efficient” implementation in the sense of Accattoli’s classification. The proof of Thm. 4.4 pointed out
that eliminating and copying !-boxes are two main sources of time cost. Our results are built on top of a
weak simulation of the SAM, that relates several transitions of the DGoIM to each computational task such as
β-reduction and substitution.
The main feature of the DGoIM is the flexible combination of interaction and rewriting. We here briefly
discuss how the flexibility can enable the DGoIM to implement evaluation strategies other than the call-by-need.
As mentioned in Sec. 1.2, the passes-only interleaving yields an ordinary token-passing abstract machine
that is known to implement the call-by-name evaluation. Because no rewrites are triggered, as opposed to the
rewrites-first interleaving, a token not only can pass a principal door but also can go inside a !-box and pass an
auxiliary door. These behaviours are in fact not possible with the DGoIM presented in this paper; the transitions
and data carried by a token are tailored to the rewrites-first interleaving. To recover an ordinary token-passing
machine, we therefore need to add pass transitions that involve auxiliary doors and data structures (so-called
“exponential signatures”) that deal with !-boxes, for example.
The only difference between the call-by-need and the call-by-value evaluations lies in when function arguments are evaluated. In the DGoIM, this corresponds to changing the trajectory of a token so that it visits
function arguments immediately after it detects a function application. Therefore, to implement the call-by-value
evaluation, the DGoIM can still use the rewrites-first interleaving, but it should use a modified set of pass
transitions. Further refinements, not only of the evaluation strategies but also of the graph representation, could
yield even more efficient implementations, such as fully lazy evaluation, as hinted in [26].
Our final remarks concern programming features that have been modelled using token-passing abstract
machines. Ground-type constants are handled by attaching memories to either nodes of a graph or a token, in
e.g. [20, 18, 3] — this can be seen as a simple form of graph rewriting. Algebraic effects are also accommodated
using memories attached to nodes of a graph in token machines [18], but their treatment would be much
simplified in the DGoIM as effects are evaluated out of the term via rewriting.
Acknowledgements. We are grateful to Ugo Dal Lago and anonymous reviewers for encouraging and
insightful comments on earlier versions of this work.
13
References
Revisiting Connected Vertex Cover: FPT Algorithms and
Lossy Kernels
R. Krithika, Diptapriyo Majumdar, and Venkatesh Raman
arXiv:1711.07872v1 [] 21 Nov 2017
The Institute of Mathematical Sciences, HBNI, Chennai, India.
{rkrithika|diptapriyom|vraman}@imsc.res.in
Abstract
The Connected Vertex Cover problem asks for a vertex cover in a graph that
induces a connected subgraph. The problem is known to be fixed-parameter tractable
(FPT), and is unlikely to have a polynomial sized kernel (under complexity theoretic
assumptions) when parameterized by the solution size. In a recent paper, Lokshtanov et
al. [STOC 2017] have shown an α-approximate kernel for the problem for every α > 1,
in the framework of approximate or lossy kernelization. We exhibit lossy kernels and
FPT algorithms for Connected Vertex Cover for parameters that are more natural
and functions of the input, and in some cases, smaller than the solution size.
Our first result is a lossy kernel for Connected Vertex Cover parameterized
by the size k of a split deletion set. A split graph is a graph whose vertex set can be
partitioned into a clique and an independent set and a split deletion set is a set of
vertices whose deletion results in a split graph. Let n denote the number of vertices in
the input graph. We show that
• Connected Vertex Cover parameterized by the size k of a split deletion set
admits an α-approximate kernel with O(k + (2k + ⌈(2α−1)/(α−1)⌉)^⌈(2α−1)/(α−1)⌉) vertices and a 3^k · n^O(1) time algorithm.
• For the special case when the split deletion set is a clique deletion set, the algorithm
runs in 2^k · n^O(1) time and the lossy kernel has O(k + ⌈(2α−1)/(α−1)⌉) vertices.
To the best of our knowledge, this (approximate) kernel is one of the few lossy kernels
for problems parameterized by a structural parameter (that is not solution size). We
extend this lossy kernelization to Connected Vertex Cover parameterized by an
incomparable parameter, and that is the size k of a clique cover. A clique cover of a
graph is a partition of its vertex set such that each part induces a clique. We show that
• Connected Vertex Cover parameterized by the size k of a clique cover is
W[1]-hard but admits an α-approximate kernel with O(k · ⌈(2α−1)/(α−1)⌉) vertices for every
α > 1. This is one of the few problems that are not FPT but admit a lossy kernel.
Then, we consider the size of a cluster deletion set as parameter. A cluster graph is a
graph in which every component is a complete graph and a cluster deletion set is a set
of vertices whose deletion results in a cluster graph. We show that
• Connected Vertex Cover parameterized by the size k of a cluster deletion set
is FPT via a 4^k · n^O(1) time algorithm and admits an α-approximate kernel with O(k² + ⌈(2α−1)/(α−1)⌉ · k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices for every α > 1.
• For the special case when the cluster deletion set is a degree-1 modulator (a subset
of vertices whose deletion results in a graph with maximum degree 1), the FPT
algorithm runs in 3^k · n^O(1) time and the lossy kernel has O(k² + k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices.
Finally, we consider Connected Vertex Cover parameterized by the size k of a
chordal deletion set. A chordal graph is a graph in which every cycle of length at least
4 has a chord – an edge joining non-adjacent vertices of the cycle. A chordal deletion
set of a graph is a subset of vertices whose deletion results in a chordal graph. We
show that Connected Vertex Cover parameterized by k is FPT using the known
algorithm for Connected Vertex Cover on bounded treewidth graphs.
1 Introduction, Motivation and Our Results
A vertex cover in a graph is a set of vertices that contains at least one endpoint of every edge
of the graph. In the Vertex Cover problem, given a graph G and an integer `, the task
is to determine if G has a vertex cover of size at most `. Undoubtedly, Vertex Cover
is one of the most well studied problems in parameterized complexity. In this framework,
each problem instance is associated with a non-negative integer called the parameter. The pair
consisting of a decision problem and a parameterization is called a parameterized problem. A
common parameter is a bound on the size of an optimum solution for the problem instance.
A problem is said to be fixed-parameter tractable (FPT) with respect to parameter k if it can
be solved in f(k) · n^O(1) time for some computable function f, where n is the input size. Such an algorithm is called a parameterized algorithm or FPT algorithm. For convenience, the running time f(k) · n^O(1), where f grows super-polynomially with k, is denoted as O∗(f(k)). A
kernelization algorithm is a polynomial-time algorithm that transforms an arbitrary instance
of the problem to an equivalent instance of the same problem whose size is bounded by some
computable function g of the parameter of the original instance. The resulting instance is
called a kernel and if g is a polynomial function, then it is called a polynomial kernel and
we say that the problem admits a polynomial kernel. In order to classify parameterized
problems as being FPT or not, the W-hierarchy: FPT ⊆ W[1] ⊆ W[2] ⊆ · · · ⊆ XP is defined.
It is believed that the subset relations in this sequence are all strict and a parameterized
problem that is hard for some complexity class above FPT in this hierarchy is unlikely to
be FPT. A parameterized problem is said to be in XP if it has an algorithm running in f(k) · n^g(k) time for some computable functions f and g. A problem is said to be para-NP-hard
if it is not in XP unless P=NP. The complexity classes FPT and para-NP can be viewed as
the parameterized analogues of P and NP.
Vertex Cover is FPT via an easy O∗(2^`) algorithm and, after a long race, the current fastest algorithm runs in O∗(1.2738^`) time [6]. Vertex Cover also has a kernel with 2` − O(log `) vertices and O(`²) edges [21]. In the Connected Vertex Cover problem,
we seek a connected vertex cover, i.e., a vertex cover that induces a connected subgraph. It
is easy to observe that Vertex Cover reduces to Connected Vertex Cover implying
that the latter is at least as hard as the former in general graphs. In fact, Connected
Vertex Cover is NP-hard even on bipartite graphs [13] where Vertex Cover is solvable in
polynomial time (Theorem 2.1.1 [11]). However, Connected Vertex Cover is polynomial-time solvable on chordal graphs and sub-cubic graphs [13, 26] and has an O∗(2^O(t)) algorithm
when the input graph has treewidth upper bounded by t [2]. The best known parameterized
algorithm for Connected Vertex Cover takes O∗(2^`) time where ` is the size of the
connected vertex cover that we are looking for [7]. Further, the problem does not admit
a polynomial kernel unless the polynomial hierarchy collapses (specifically, unless NP ⊆
coNP/poly) [12].
As the goal in parameterized algorithms is to eventually solve the given instance of a
problem, the application of a (classical) kernelization algorithm is typically followed by an
exact or approximation algorithm that finds a solution for the reduced instance. However,
the current definition of kernels provides no insight into how this solution relates to a solution for the original instance, and the basic framework is not amenable to serving as a precursor to
approximation algorithms or heuristics. Recently, Lokshtanov et al. [22] proposed a new
framework, referred to as lossy kernelization, that is a bit less stringent than the notion of
polynomial kernels and combines well with approximation algorithms and heuristics. The
key new definition in this framework is that of an α-approximate kernelization. The precise
definitions are deferred to Section 2. Informally, an α-approximate polynomial kernelization
(lossy kernelization) is a polynomial time preprocessing algorithm that takes as input an
instance of a parameterized problem and outputs another instance of the same problem
whose size is bounded by a polynomial function of the parameter of the original instance.
Additionally, for every c ≥ 1, a c-approximate solution for the reduced instance can be turned into an (αc)-approximate solution for the original instance in polynomial time. The
authors of [22] exhibit lossy polynomial kernels for several problems that do not admit
classical polynomial kernels including Connected Vertex Cover parameterized by the
solution size. In this paper, we extend this lossy kernel for Connected Vertex Cover to
parameters that are more natural and functions of the input, and in some cases, smaller
than the solution size.
In the initial work on parameterized complexity, the parameter was almost always the
solution size (with treewidth being a notable exception). A recent trend is to study the
complexity of the given problem with respect to structural parameters that are more likely
to be small in practice. Also, once a problem is shown to be FPT or to have a polynomial
sized kernel by a parameterization, it is natural to ask whether the problem is FPT (and
admits a polynomial kernel) when parameterized by a smaller parameter. Similarly, once
a problem is shown to be W-hard by a parameterization, it is natural to ask whether the
problem is FPT when parameterized by a larger parameter. Structural parameterizations
of Vertex Cover [3, 18], Feedback Vertex Set [20] and Graph Coloring [19] have
been explored extensively. We refer to [18] and [17] for a detailed introduction to the
whole program. A parameter that has gained significant attention recently is the size of
a modulator to a family of graphs. Let F denote a hereditary graph class (which is closed
under induced subgraphs) on which Vertex Cover is polynomial-time solvable. Suppose
S is a set of vertices (called an F-modulator) such that G − S ∈ F. Then, Vertex Cover is FPT when parameterized by |S|. The following is an easy O∗(2^|S|) algorithm (sketched in code below): guess
the intersection of the required solution with the modulator and solve the problem on the
remaining graph in polynomial time. In contrast, a similar generic result does not seem
easy for Connected Vertex Cover and a study of such structural parameterizations for
Connected Vertex Cover is another goal of this paper.
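To make this generic scheme concrete, here is a minimal Python sketch of the guess-the-intersection algorithm just described. The adjacency-dictionary representation and the routine solve_vc_on_F, which stands in for the assumed polynomial-time Vertex Cover solver on the class F, are illustrative assumptions of this sketch, not part of any cited implementation.

from itertools import combinations

def vc_via_modulator(adj, S, solve_vc_on_F):
    # Generic O*(2^|S|) scheme: guess the solution's intersection Y with
    # the modulator S; the rest must cover G - S plus the forced
    # neighbours of S \ Y. adj: dict vertex -> set of neighbours.
    # solve_vc_on_F(adj, S, forced) is a *hypothetical* polynomial-time
    # routine returning a minimum vertex cover of G - S containing forced.
    S = list(S)
    best = None
    for r in range(len(S) + 1):
        for Y in map(set, combinations(S, r)):
            out = set(S) - Y
            # edges inside S \ Y would stay uncovered: discard this guess
            if any(u in adj[v] for u in out for v in out if u != v):
                continue
            # neighbours (outside S) of discarded modulator vertices are forced
            forced = {w for v in out for w in adj[v] if w not in S}
            cand = Y | solve_vc_on_F(adj, set(S), forced)
            if best is None or len(cand) < len(best):
                best = cand
    return best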
Our Results. The starting point of our study is to extend the lossy kernelization known
for Connected Vertex Cover parameterized by solution size to Connected Vertex
Cover parameterized by a smaller parameter. A split graph is a graph whose vertex set
can be partitioned into a clique and an independent set. Split graphs can be recognized in
polynomial time and such a partition can be obtained in the process [16]. A split deletion
set is a set of vertices whose deletion results in a split graph. Connected Vertex Cover
has an easy polynomial-time algorithm on split graphs, and it is easy to verify that the size of a minimum connected vertex cover is at least the size of a minimum vertex cover, which is at least the size of a minimum split deletion set. We show that
• Connected Vertex Cover parameterized by the size k of a split deletion set is FPT
via an O∗(3^k) algorithm and that the α-approximate kernel for Connected Vertex Cover parameterized by solution size can be extended in a non-trivial way to an α-approximate kernel with O(k + (2k + ⌈(2α−1)/(α−1)⌉)^⌈(2α−1)/(α−1)⌉) vertices for every α > 1. This adds to the small list of problems for which lossy kernels with respect to structural parameterizations are known.
• For the special case when the split deletion set is a clique deletion set (a set of vertices
whose deletion results in a complete graph), the algorithm runs in O∗(2^k) time and the lossy kernel has O(k + ⌈(2α−1)/(α−1)⌉) vertices (linear size).
Then, we consider Connected Vertex Cover parameterized by the size of a clique cover.
A clique cover of a graph is a partition of its vertex set such that each part induces a clique.
We show that
• Connected Vertex Cover parameterized by the size k of a clique cover is W[1]-hard
but admits an α-approximate kernel with O(k · ⌈(2α−1)/(α−1)⌉) vertices for every α > 1. This
is one of the few problems that are not FPT but admit a lossy kernel.
Then, we consider a parameter related (structurally) to clique cover size, namely, the size of
a cluster deletion set and show that Connected Vertex Cover is FPT with respect to
this parameter. A cluster graph is a graph in which every component is a complete graph
and a cluster deletion set is a set of vertices whose deletion results in a cluster graph. We
show that
• Connected Vertex Cover parameterized by the size k of a cluster deletion set
is FPT via an O∗(4^k) algorithm and admits an α-approximate kernel with O(k² + ⌈(2α−1)/(α−1)⌉ · k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices for every α > 1.
• For the special case when the cluster deletion set is a degree-1 modulator (a subset
of vertices whose deletion results in a graph with maximum degree 1), the algorithm
runs in O∗(3^k) time and the lossy kernel has O(k² + k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices.
Finally, we consider Connected Vertex Cover parameterized by the size k of a chordal
deletion set. A chordal graph is a graph in which every cycle of length at least 4 has a chord
– an edge joining non-adjacent vertices of the cycle. A chordal deletion set of a graph is a
subset of vertices whose deletion results in a chordal graph. We show that Connected
Vertex Cover parameterized by k is FPT using the known algorithm for Connected
Vertex Cover on bounded treewidth graphs. A summary of these results is presented in
Figure 1.
Techniques. Our FPT algorithms involve a reduction to the Steiner Tree problem on
bipartite graphs. Given a graph G, an integer p and a set T (called terminals) of vertices of
G, the Steiner Tree problem is the task of determining whether G contains a tree (called
Steiner tree) on at most p vertices that contains T . A bipartite graph is a graph whose vertex
set can be partitioned into 2 independent sets. Such a partition is called a bipartition. We
use the following result known for Steiner Tree on bipartite graphs.
Lemma 1. [7, 25] Given a connected bipartite graph G with bipartition (P, Q), there is an
algorithm running in O∗(2^|Q|) time that computes a minimum set X of vertices of G such
that Q ⊆ X and G[X] is connected.
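For illustration, here is a Python sketch of a dynamic program in the spirit of Dreyfus and Wagner that solves the task of Lemma 1: computing a minimum-cardinality set X with Q ⊆ X and G[X] connected. Note that this sketch runs in roughly O∗(3^|Q|) time; the sharper O∗(2^|Q|) bound of [7, 25] needs a more refined algorithm.

import heapq

def min_connected_superset(adj, terminals):
    # dp[mask][v] = minimum |V(T)| over connected subgraphs T containing
    # the terminals indexed by mask and also containing the vertex v.
    if not terminals:
        return 0
    t = len(terminals)
    full = (1 << t) - 1
    INF = float('inf')
    dp = {mask: {v: INF for v in adj} for mask in range(1, full + 1)}
    for i, term in enumerate(terminals):
        dp[1 << i][term] = 1
    for mask in range(1, full + 1):
        cur = dp[mask]
        # merge two subtrees that share the vertex v (v is counted twice)
        sub = (mask - 1) & mask
        while sub:
            rest = mask ^ sub
            if rest:
                for v in adj:
                    cand = dp[sub][v] + dp[rest][v] - 1
                    if cand < cur[v]:
                        cur[v] = cand
            sub = (sub - 1) & mask
        # grow: extend a subtree along edges; each new vertex costs 1
        heap = [(c, v) for v, c in cur.items() if c < INF]
        heapq.heapify(heap)
        while heap:
            c, v = heapq.heappop(heap)
            if c > cur[v]:
                continue
            for u in adj[v]:
                if c + 1 < cur[u]:
                    cur[u] = c + 1
                    heapq.heappush(heap, (c + 1, u))
    return min(dp[full].values())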
Our lower bound results are based on the hardness results known for the Set Cover
problem. Given a family F of subsets of a universe U and a positive integer `, the Set
Cover problem is the task of determining whether there is a subfamily F 0 ⊆ F of size at most ` such that ⋃_{X∈F 0} X = U. Set Cover can be solved in O∗(2^|U|) time by a dynamic programming routine [14] and the Set Cover Conjecture states that no O∗((2 − ε)^|U|) time algorithm exists for any ε > 0 [8]. Our lower bound results are based on the following result
that is a consequence of this conjecture.
[Figure 1 appears here: a diagram relating Connected Vertex Cover to the parameters vertex cover, clique deletion set, split deletion set, cluster deletion set, chordal deletion set, feedback vertex set and degree-i modulators, with each parameterization marked as FPT, FPT and admits PSAKS, or para-NP-hard.]
Figure 1: Ecology of Parameters for Connected Vertex Cover. The parameter values are the minimum possible for a given graph. An arrow from parameter x to parameter y means y ≤ x.
Lemma 2. [8] Connected Vertex Cover parameterized by the solution size ` has no O∗((2 − ε)^`) time algorithm for any ε > 0 under the Set Cover Conjecture.
For each of the parameterized problems considered in this paper, we assume that the
modulator (whose size is the parameter) under consideration is given as part of the input.
In most cases, this assumption is reasonable. For instance, there is an algorithm by Cygan
and Pilipczuk [10] that runs in O∗(1.2732^{k+o(k)}) time and returns a split deletion set of size at most k (or decides that no such set exists). In the case of cluster deletion set size as parameter, we use the algorithm by Boral et al. [5] that runs in O∗(1.9102^k) time and returns a cluster deletion set of size at most k, if one exists. Also, a degree-1 modulator S of size at most k, if one exists, can be obtained in O∗(1.882^k) time using the algorithm by Wu [27]. Finally, there is an algorithm by Marx [23] that runs in O∗(2^{O(k log k)}) time and either outputs a chordal deletion set of size at most k or decides that no such set exists.
2 Preliminaries
We refer to [11] for graph theoretic terms and notation that are not explicitly
defined here.
We use [r] to denote the set {1, 2, . . . , r}. Given a finite set A, we use \binom{A}{k} to denote the collection of subsets of A containing exactly k elements and \binom{A}{≤k} to denote the collection of subsets of A containing at most k elements.
For a graph G, V(G) and E(G) denote the set of vertices and edges respectively. Two
vertices u, v are said to be adjacent if there is an edge {u, v} in the graph. The neighbourhood
of a vertex v, denoted by NG (v), is the set of vertices adjacent to v and its degree dG (v)
is |NG (v)|. The subscript in the notation for neighbourhood and degree is omitted if the
graph under consideration is clear. For a set S ⊆ V(G), G − S denotes the graph obtained
by deleting S from G and G[S] denotes the subgraph of G induced by set S. The contraction
operation of an edge e = uv in G results in the deletion of u and v and the addition of a new
vertex w adjacent to vertices that were adjacent to either u or v. Any parallel edges added
in the process are deleted so that the graph remains simple. This operation is extended to a
subset of edges in the natural way; the resultant graph is independent of the contraction sequence. A path is a
sequence of distinct vertices where every consecutive pair of vertices are adjacent. A set of
pairwise non-adjacent vertices is called an independent set and a set of pairwise adjacent vertices is called a clique. A complete graph is a graph whose vertex set is a clique. The
number of components of a graph G is denoted by #comp(G) and by convention if G is
connected then #comp(G) is 1. Two non-adjacent vertices u and v are called false twins if
N(u) = N(v).
A tree-decomposition of a graph G is a pair (T, X = {Xt }t∈V(T) ) such that T is a tree
whose every node t is assigned a vertex subset Xt , called a bag, satisfying the following
properties.
• ⋃_{t∈V(T)} Xt = V(G).
• For every edge {x, y} ∈ E(G) there is a t ∈ V(T) such that {x, y} ⊆ Xt .
• For every vertex v ∈ V(G) the subgraph of T induced by the set {t | v ∈ Xt } is
connected.
The width of a tree decomposition is max_{t∈V(T)} |Xt| − 1 and the treewidth of G, denoted by
tw(G), is the minimum width over all tree decompositions of G.
Lossy Kernelization: Parameterized complexity terminology and definitions not stated
here can be found in [9]. We now state terminology and definitions related to lossy kernelization given in [22]. The key definition in this framework is the notion of a parameterized
optimization (maximization / minimization) problem which is the parameterized analogue
of an optimization problem in the theory of approximation algorithms. A parameterized
minimization problem is a computable function Π : Σ∗ × N × Σ∗ → R ∪ {±∞}. The instances of Π are pairs (I, k) ∈ Σ∗ × N and a solution for (I, k) is a string S ∈ Σ∗ such
that |S| ≤ |I| + k. The value of a solution S is Π(I, k, S). The optimum value of (I, k) is
OPT_Π(I, k) = min_{S∈Σ∗, |S|≤|I|+k} Π(I, k, S), and an optimum solution for (I, k) is a solution S such
that Π(I, k, S) = OPTΠ (I, k). A parameterized maximization problem is defined in a similar
way. We will omit the subscript Π in the notation for optimum value if the problem under
consideration is clear from context.
Next, we define the notion of a strict α-approximate polynomial-time preprocessing
algorithm for Π. It is defined as a pair of polynomial-time algorithms, called the reduction
algorithm and the solution lifting algorithm, that satisfy the following properties.
• Given an instance (I, k) of Π, the reduction algorithm computes an instance (I 0 , k 0 ) of
Π.
• Given the instances (I, k) and (I 0 , k 0 ) of Π, and a solution S 0 to (I 0 , k 0 ), the solution
lifting algorithm computes a solution S to (I, k) such that Π(I, k, S)/OPT(I, k) ≤ max{Π(I 0 , k 0 , S 0 )/OPT(I 0 , k 0 ), α}.
A reduction rule is the execution of the reduction algorithm on an instance, and we say that
it is applicable on an instance if the output instance is different from the input instance. An
α-approximate kernelization (or α-approximate kernel) for Π is an α-approximate polynomial-time preprocessing algorithm such that the size of the output instance is upper bounded by
a computable function g : N → N of k. A reduction rule is said to be α-safe for Π if there
is a solution lifting algorithm, such that the rule together with this algorithm constitute a
strict α-approximate polynomial-time preprocessing algorithm for Π. A reduction rule is
safe if it is 1-safe. Observe that this definition is more strict than the definition of safeness
in classical kernelization. A polynomial-size approximate kernelization scheme (PSAKS) for
Π is a family of α-approximate polynomial kernelization algorithms for each α > 1. Note
that the size of an output instance of a PSAKS, when run on (I, k) with approximation parameter α, must be upper bounded by f(α) · k^g(α) for some functions f and g independent of |I| and k. A PSAKS is said to be time efficient if the reduction algorithm and the solution lifting algorithm run in f(α) · |I|^c time, for some constant c and computable function f. We encourage the reader
to see [22] for a more comprehensive discussion of these ideas and definitions.
3 Connected Vertex Cover parameterized by Split Deletion Set
In this section, we describe an FPT algorithm and a lossy polynomial kernel for Connected
Vertex Cover parameterized by the size of a split deletion set. Recall that a split deletion
set is a set of vertices whose deletion results in a split graph.
3.1 An FPT algorithm
In this subsection, we describe an O∗(3^k) time algorithm for Connected Vertex Cover
parameterized by the size k of a split deletion set.
Theorem 3. Given a graph G, a split deletion set S and a positive integer `, there is an
algorithm that determines whether G has a connected vertex cover of size at most ` in O∗(3^|S|) time. Moreover, no O∗((2 − ε)^|S|) time algorithm exists for any ε > 0 for this problem under the Set Cover Conjecture. Further, if S is a clique deletion set, then there is an algorithm that runs in O∗(2^|S|) time.
Proof. Let H denote the split graph G − S whose vertex set can be partitioned into a clique
C and an independent set I. Let |S| = k and let X∗ be a connected vertex cover of G of size
at most ` (if one exists). We show that once we guess the intersection of X∗ with S and C,
the problem reduces to a version of the Steiner tree problem with at most |S| terminals. As
X∗ can afford to exclude at most one vertex from C, there are at most |C| + 1 choices for
X∗ ∩ C. Also, there are at most 2^k choices for X∗ ∩ S. Consider one such choice (Y, Z) where
Y = X∗ ∩ C and Z = X∗ ∩ S. Let T denote the set Y ∪ Z. We will extend T into a connected
vertex cover of G. If (C \ Y) ∪ (S \ Z) is not an independent set, no such connected vertex
cover exists and we skip to the next choice of (Y, Z). Let us first consider the case when
I = ∅. That is, S is a clique deletion set of G. In this case, we have already made our choice in S and C, and so if G[T] is not connected, we can skip to the next choice of (Y, Z).
If none of the choices leads to the required solution, we declare that G has no connected
vertex cover of size at most `. The overall running time of the algorithm is O∗(2^k).
When I ≠ ∅, observe that G[T] has at most |Z| + 1 components. Let R denote the set
(N(C \ Y) ∪ N(S \ Z)) \ (Y ∪ Z). Clearly, R ⊆ I and has to be included in the solution. See
Figure 2. If there is a vertex r ∈ R that has no neighbours in T , then r cannot be connected
to T by vertices from I as R ⊆ I. Therefore, if there is such a vertex r, then we skip to
the next choice of (Y, Z). Otherwise, by adding R to T , #comp(G[T ]) cannot increase in
this process. The problem now reduces to finding a set J ⊆ I \ R such that G[T ∪ R ∪ J] is
connected.
[Figure 2 appears here: a diagram of G with split deletion set S; H = G − S is partitioned into a clique C and an independent set I; Y ∪ Z are the vertices included in the solution, R the vertices forced into the solution by the choice of Y ∪ Z, and J the vertices required to connect the components of G[Y ∪ Z ∪ R].]
Figure 2: Connected vertex cover parameterized by split deletion set.
Now consider the bipartite graph Ĝ with bipartition (T∗, I \ R) where each vertex of T∗ corresponds to a component of G[T ∪ R]. We draw an edge between a vertex p ∈ I \ R
and a vertex c in T∗ if there is an edge in G between p and a vertex in the component corresponding to c. The task now is to find a minimum Steiner tree of Ĝ with terminal set T∗, which can be done in O∗(2^{|T∗|}) time using Lemma 1. Therefore, the overall running time of the algorithm is, up to polynomial factors, ∑_{i=0}^{k} \binom{k}{i} · 2^i, where the first term in the product denotes the number of choices for X∗ ∩ S with |X∗ ∩ S| = i and the second term is the time taken to find a minimum Steiner tree for an instance with at most i terminals. Thus, the algorithm runs in O∗(3^k) time. As an edgeless graph is a split graph, we have that the size of a minimum connected vertex cover is at least the size of a minimum split deletion set. Therefore, the claimed lower bound follows from Lemma 2.
In order to obtain a tighter and refined analysis of the running time of the above
algorithm, we generalize the following lemma. Here VC(H) denotes the set of vertex covers
of H.
Lemma 4. [7] Let H be a connected graph on h vertices. Then, ∑_{C∈VC(H)} 2^{#comp(H[C])} ≤ 3 · 2^{h−1}.
We next generalize Lemma 4 to graphs that are not necessarily connected.
Lemma 5. Let H be a graph on h vertices with #comp(H) = d. Then, ∑_{C∈VC(H)} 2^{#comp(H[C])} ≤ 3^d · 2^{h−d}.
Proof. Let H1 , · · · , Hd denote the components of H and let hi denote the number of vertices
in Hi . Then, any vertex cover of H is the union of the vertex covers of Hi for 1 ≤ i ≤ d.
Thus, we have the following bound from Lemma 4.
∑_{C∈VC(H)} 2^{#comp(H[C])} = ∏_{i=1}^{d} ∑_{C∈VC(Hi)} 2^{#comp(Hi[C])} ≤ ∏_{i=1}^{d} 3 · 2^{hi−1} = 3^d · 2^{h−d}.
Now, using Lemma 5 and Theorem 3, we have the following result.
Corollary 6. Given a graph G, a split deletion set S with #comp(G[S]) = d and a positive
integer `, there is an algorithm that determines whether G has a connected vertex cover of
size at most ` in O∗(3^d · 2^{|S|−d}) time.
Proof. The running time of the algorithm presented in Theorem 3 is upper bounded (ignoring polynomial factors) by ∑_{Z∈VC(G[S])} 2^{#comp(G[Z])}, which in turn is upper bounded by 3^d · 2^{|S|−d} from Lemma 5. Therefore, the claimed bound follows.
Using Lemma 2, Corollary 6 and the fact that a vertex cover of size at most k (if one exists) can be obtained in O∗(1.2738^k) time [6], we have the following result.
Corollary 7. Given a graph G, a vertex cover S with #comp(G[S]) = d and a positive
integer `, there is an algorithm that determines whether G has a connected vertex cover of
size at most ` in O∗(3^d · 2^{|S|−d}) time. Further, no O∗((2 − ε)^|S|) time algorithm exists for this problem for any ε > 0 under the Set Cover Conjecture.
3.2 A Lossy Kernel
It is known that Vertex Cover parameterized by the size of a clique deletion set has no
polynomial kernel unless NP ⊆ coNP/poly [4]. As Vertex Cover reduces to Connected
Vertex Cover by just adding a universal vertex to the graph, we have the following
observation.
Observation 1. Connected Vertex Cover parameterized by the size of a clique deletion
set has no polynomial kernel unless NP ⊆ coNP/poly.
In this section, we show that Connected Vertex Cover parameterized by the size
of a split deletion set admits a lossy polynomial kernel. As a consequence, we show that
Connected Vertex Cover parameterized by the size of a clique deletion set admits a
lossy linear kernel. The parameterized minimization problem definition of interest to us is
the following. The cases specified in the definition are ordered, i.e., given (G, S), an integer
k and a set T ⊆ V(G), CVC((G, S), k, T) is assigned the first value applicable.

CVC((G, S), k, T) =
  −∞   if |S| > k or G − S is not a split graph,
  ∞    if T is not a connected vertex cover of G,
  |T|   otherwise.
Without loss of generality, we assume that G has no isolated vertices as no minimum
connected vertex cover contains them. Given α > 1, let d be the minimum integer greater
than 1 such that α ≥ (d−1)/(d−2). That is, d = ⌈(2α−1)/(α−1)⌉. Let H denote the split graph G − S whose
vertex set can be partitioned into a clique C and an independent set I. First, we bound the
number of vertices in C by applying Reduction Rule 3.1.
Reduction Rule 3.1. If |C| ≥ d, then add a new vertex uC adjacent to N(C) and delete
C to get the graph G 0 . The resulting instance is (G 0 , S).
Observe that S continues to be a split deletion set of G 0 as a result of applying this rule
since V(G 0 − S) can be partitioned into the clique {uC } and the independent set I.
Lemma 8. Reduction Rule 3.1 is α-safe.
Proof. Consider a solution D 0 of the reduced instance. If uC ∈ D 0 , then the solution
lifting algorithm returns D = (D 0 \ {uC }) ∪ C which is a connected vertex cover of G
such that CVC((G, S), k, D) = CVC((G 0 , S), k, D 0 ) − 1 + |C|. Otherwise, we know that
N(C) ⊆ D 0 . Let x be a vertex in C such that C \ {x} has a neighbour in V(G 0 ) \ C.
Then, D = D 0 ∪ (C \ {x}) is a connected vertex cover of G such that CVC((G, S), k, D) =
CVC((G 0 , S), k, D 0 ) + |C| − 1. In any case, we have shown that CVC((G, S), k, D) =
CVC((G 0 , S), k, D 0 ) + |C| − 1. Now, consider an optimum solution D∗ for the original
9
instance. Clearly, |D∗ ∩ C| ≥ |C| − 1. Then, (D∗ \ C) ∪ {uC } is a connected vertex cover of G 0 .
Hence, OPT((G 0 , S), k) ≤ OPT((G, S), k) − |C| + 1 + 1. Combining these bounds, we have
CVC((G, S), k, D)/OPT((G, S), k) ≤ max{CVC((G 0 , S), k, D 0 )/OPT((G 0 , S), k), (|C| − 1)/(|C| − 2)} ≤ max{CVC((G 0 , S), k, D 0 )/OPT((G 0 , S), k), α} as |C| ≥ d.
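A minimal Python sketch of Reduction Rule 3.1 and of the solution lifting step from the proof of Lemma 8; the adjacency-dictionary representation and the label fresh are assumptions of this sketch.

def reduce_clique_part(adj, C, d, fresh):
    # Rule 3.1 (sketch): if |C| >= d, replace the clique part C by one new
    # vertex fresh adjacent to N(C). fresh is a hypothetical unused label.
    if len(C) < d:
        return adj, None  # rule not applicable
    NC = set().union(*(adj[v] for v in C)) - set(C)
    new_adj = {v: set(nb) - set(C) for v, nb in adj.items() if v not in C}
    new_adj[fresh] = set(NC)
    for u in NC:
        new_adj[u].add(fresh)
    return new_adj, fresh

def lift_solution(D_prime, fresh, C, adj):
    # Solution lifting following the proof of Lemma 8 (adj is the original graph).
    if fresh in D_prime:
        return (set(D_prime) - {fresh}) | set(C)
    # otherwise N(C) is in D'; keep out one x in C such that C \ {x} still
    # has a neighbour outside C (such an x exists as G is connected)
    for x in C:
        others = set(C) - {x}
        if any(adj[v] - set(C) for v in others):
            return set(D_prime) | others
    return set(D_prime) | set(C)  # degenerate fallback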
Observe that the application of Reduction Rule 3.1 does not make any vertex isolated in
the reduced graph. Further, when S is a clique deletion set and this rule is not applicable, it
follows that G − S has at most d − 1 vertices. This leads to the following result.
Theorem 9. Connected Vertex Cover parameterized by the size k of a clique deletion
set has a time efficient PSAKS with at most k + ⌈(2α−1)/(α−1)⌉ vertices.
Proof. Let (G, S, `) be an instance on which Reduction Rule 3.1 is not applicable. Then,
|S| ≤ k and |V(G − S)| ≤ d where d = ⌈(2α−1)/(α−1)⌉. Therefore, |V(G)| ≤ k + d.
Let us now consider the case when S is not a clique deletion set.
Lemma 10. Let (G, S) be an instance on which Reduction Rule 3.1 is not applicable. Then,
G has a connected vertex cover of size at most 2k + d − 1.
Proof. If Reduction Rule 3.1 is not applicable, then |C| ≤ d − 1. Now, T = S ∪ C is a vertex
cover of G such that G[T ] has at most |S| + 1 components. If G[T ] is not connected, then we
can add at most |S| vertices from V(G) \ T to T so that G[T ] becomes connected. Such a
choice of vertices exists as G is connected. Hence, it follows that G has a connected vertex
cover of size at most |T | + |S| ≤ 2k + d − 1.
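The argument of Lemma 10 is constructive and easy to implement. The following Python sketch greedily connects a vertex cover T, using the fact from the proof that, as long as G is connected and V(G) \ T is independent, some vertex outside T touches at least two components of G[T].

def connect_cover(adj, T):
    # Greedily add vertices until G[T] is connected (cf. Lemma 10).
    T = set(T)
    def components():
        comp, seen = {}, set()
        for s in T:
            if s in seen:
                continue
            stack, cid = [s], s
            seen.add(s)
            while stack:
                v = stack.pop()
                comp[v] = cid
                for u in adj[v]:
                    if u in T and u not in seen:
                        seen.add(u)
                        stack.append(u)
        return comp
    comp = components()
    while len(set(comp.values())) > 1:
        for v in set(adj) - T:
            touched = {comp[u] for u in adj[v] if u in T}
            if len(touched) >= 2:
                T.add(v)  # merges at least two components of G[T]
                break
        else:
            raise ValueError("G must be connected and T a vertex cover")
        comp = components()
    return T

Each added vertex decreases the number of components of G[T] by at least one, so at most |S| vertices are added, matching the bound in the lemma.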
From Lemma 10, we have OPT((G, S), k) ≤ 2k + d − 1. Therefore, any vertex with at
least 2k + d neighbours is present in any optimal solution. We define a partition of vertices
of G into the following three parts.
• B = {u ∈ V(G) | d(u) ≥ 2k + d}
• IB = {v ∈ V(G) \ B | N(v) ⊆ B}
• R = V(G) \ (B ∪ IB )
We apply the following two reduction rules which are known from [22] to bound IB
(which is an independent set).
Reduction Rule 3.2. If there exists u ∈ IB ∩ V(G − S) such that dG (u) ≥ d, then delete
NG [u] and add a new vertex w adjacent to every vertex in NG (NG (u)) \ {u}. Further,
add 2k + d new vertices W adjacent to w to get the graph G 0 . The resulting instance is
(G 0 , (S ∪ {w}) \ N(u)).
Observe that S 0 = (S ∪ {w}) \ N(u) is a split deletion set of G 0 as V(G 0 − S 0 ) can be
partitioned into the clique C ∩ V(G 0 − S 0 ) and independent set (I ∩ V(G 0 − S 0 )) ∪ W. Further,
as |C| ≤ d − 1 and dG (u) ≥ d, it follows that N(u) ∩ S 6= ∅. As we add at most one vertex
to S and delete at least one vertex from S to get S 0 , we have |S 0 | ≤ |S|.
Lemma 11. Reduction Rule 3.2 is α-safe.
Proof. Consider a solution D 0 of the reduced instance. If w ∉ D 0 , the solution lifting
algorithm returns a connected vertex cover D of G of size at most 2k + d − 1 obtained
from Lemma 10. Then, we have CVC((G 0 , S 0 ), k, D 0 ) ≥ 2k + d since w has at least
2k + d neighbours and CVC((G, S), k, D) ≤ 2k + d − 1 implying CVC((G, S), k, D) ≤
CVC((G 0 , S 0 ), k, D 0 ). Otherwise, we have w ∈ D 0 and the solution lifting algorithm returns D = (D 0 \ (W ∪ {w})) ∪ N[u] which is a connected vertex cover of G such that
CVC((G, S), k, D) ≤ CVC((G 0 , S 0 ), k, D 0 ) − 1 + |N[u]|. Next, consider an optimum solution
D∗ for the original instance. Clearly, |D∗ | ≤ 2k + d − 1 from Lemma 10. Thus, B ⊆ D∗ .
In particular, NG (u) ⊆ D∗ . Then, (D∗ \ NG [u]) ∪ {w} is a connected vertex cover of
G 0 . Hence, OPT((G 0 , S 0 ), k) ≤ OPT((G, S), k) − |NG (u)| + 1. Combining these bounds,
we have CVC((G, S), k, D)/OPT((G, S), k) ≤ max{CVC((G 0 , S 0 ), k, D 0 )/OPT((G 0 , S 0 ), k), |N(u)|/(|N(u)| − 1)} ≤ max{CVC((G 0 , S 0 ), k, D 0 )/OPT((G 0 , S 0 ), k), α} as |N(u)| ≥ d.
Recall that two non-adjacent vertices u and v are called false twins if N(u) = N(v).
Reduction Rule 3.3. If there exists x ∈ IB ∩ V(G − S) such that x has at least 2k + d false
twins in IB ∩ V(G − S), then delete x. The resulting instance is (G − {x}, S \ {x}).
Observe that S 0 = S \ {x} continues to be a split deletion set of G 0 = G − {x} as a result
of applying this rule.
Lemma 12. Reduction Rule 3.3 is 1-safe.
Proof. Consider a solution D 0 of the reduced instance. If |D 0 | ≥ 2k + d, the solution lifting
algorithm returns a connected vertex cover D of G of size at most 2k + d − 1 obtained from
Lemma 10. Then, we have CVC((G 0 , S 0 ), k, D 0 ) ≥ 2k+d and CVC((G, S), k, D) ≤ 2k+d−1
implying CVC((G, S), k, D) ≤ CVC((G 0 , S 0 ), k, D 0 ). Otherwise, it follows that one of the
false twins of x, say y, is excluded from D 0 . Thus, N(y) ⊆ D 0 implying that N(x) ⊆ D 0 .
Then, the solution lifting algorithm returns D = D 0 which is a connected vertex cover of G
such that CVC((G, S), k, D) = CVC((G 0 , S 0 ), k, D 0 ). Next, consider an optimum solution
D∗ for the original instance. Clearly, |D∗ | ≤ 2k + d − 1 from Lemma 10. Thus, either x and
one of its false twins y is excluded from D∗ or two of the false twins of x, say y and z are
excluded from D∗ . In any case, we have another optimal connected vertex cover D∗∗ of G 0
that excludes x. Hence, OPT((G 0 , S 0 ), k) ≤ OPT((G, S), k). Combining these bounds, we
have CVC((G, S), k, D)/OPT((G, S), k) ≤ CVC((G 0 , S 0 ), k, D 0 )/OPT((G 0 , S 0 ), k).
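Checking whether Reduction Rule 3.3 applies reduces to grouping vertices by their neighbourhoods. A short Python sketch follows; grouping by hashed neighbourhood sets is an implementation choice of this sketch, not prescribed by the rule.

from collections import defaultdict

def false_twin_classes(adj, candidates):
    # Group candidate vertices by neighbourhood; a class with more than
    # 2k + d vertices witnesses that Rule 3.3 is applicable (the caller
    # compares class sizes against its threshold).
    classes = defaultdict(list)
    for v in candidates:
        classes[frozenset(adj[v])].append(v)
    return list(classes.values())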
Now, we have the following bound.
Lemma 13. Suppose S is a split deletion set of size k of G and none of the Reduction Rules
3.1, 3.2 and 3.3 is applicable on the instance (G, S). Then, |V(G)| is O(k + (2k + d)^d).
Proof. We will bound B, IB and R separately in order to bound V(G). We know that G
has a connected vertex cover T of size at most 2k + d − 1. As B is the set of vertices of
degree at least 2k + d, B ⊆ T and so |B| ≤ 2k + d − 1. Every vertex in R has degree at
most 2k + d − 1. Therefore, as T ∩ R is a vertex cover of G[R], |E(G[R])| is O((2k + d − 1)²). Also, by the definition of IB, every vertex in R has a neighbour in R and hence there are no isolated vertices in G[R]. Thus, |R| is O((2k + d − 1)²). Finally, we bound the size of IB.
As Reduction Rule 3.2 is not applicable, every vertex in IB ∩ V(G − S) has degree at
most d − 1. For every set B 0 ⊆ B of size at most d − 1, there are at most 2k + d vertices in
IB ∩ V(G − S) which have B 0 as their neighbourhood. Otherwise, Reduction Rule 3.3 would
have been applied. Hence, there are at most (2k + d) · \binom{2k+(d−1)}{d−1} vertices in IB ∩ V(G − S).
Finally, as none of the reduction rules increases the size of the split deletion set S, we have
|IB ∩ S| ≤ k. Therefore, |IB| is O(k + (2k + d)^d).
This leads to a PSAKS for the problem as claimed.
Theorem 14. Connected Vertex Cover parameterized by the size k of a split deletion
set admits a time efficient PSAKS with O(k + (2k + ⌈(2α−1)/(α−1)⌉)^⌈(2α−1)/(α−1)⌉) vertices.
Proof. Given α > 1, we choose d = ⌈(2α−1)/(α−1)⌉ and apply the reduction rules as long as they are applicable. Then, from Lemma 13, |V(G)| is O(k + (2k + d)^d).
We note that this kernel is one of the few lossy kernels for a problem with respect to a
structural parameter.
4 Connected Vertex Cover parameterized by Clique Cover
In this section, we first show that some of the ideas from the previous section can be used
to give a lossy kernel for Connected Vertex Cover when parameterized by the size of
the clique cover. A clique cover of a graph is a partition of its vertex set such that each part
induces a clique. We assume that we are given a partition of the vertex set of the input
graph into cliques. For this parameterization, the corresponding parameterized minimization
problem is defined in a similar way as defined for split deletion set.
Theorem 15. Connected Vertex Cover parameterized by the size k of a clique cover
admits a time efficient PSAKS with O(k · ⌈(2α−1)/(α−1)⌉) vertices.
Proof. Without loss of generality, we assume that the graph has no isolated vertices as no
minimum connected vertex cover contains them. Given α > 1, let d = ⌈(2α−1)/(α−1)⌉. Let C denote
the set of k cliques in the clique cover of G. We bound the number of vertices in G by
applying Reduction Rule 3.1 on each C ∈ C. From Lemma 8, each application of this rule
is α-safe. When this rule is no longer applicable, it follows that G has at most k(d − 1)
vertices. This leads to the claimed result.
Now, we modify a reduction known from [20] used in the hardness of Feedback Vertex
Set with respect to the size of a clique cover to show that Connected Vertex Cover is
W[1]-hard under the same parameterization. This makes it one of the few problems that are unlikely to be FPT but admit a lossy kernel.
Theorem 16. Connected Vertex Cover parameterized by the size of a clique cover is
W[1]-hard.
Proof. We reduce the well known W[1]-hard problem, Independent Set parameterized
by solution size, to our problem. A non-separating independent set of a graph G is an
independent set I such that V(G) \ I is a connected vertex cover of G. Let (G, k) be an
instance of Independent Set. We construct a graph G 0 on the vertex set {(v, i) | v ∈
V(G), i ∈ [k]} ∪ {x, y}. We add an edge between (v, i) and (u, j) if and only if i = j or u = v
or v ∈ NG (u). Further, for every w ∈ V(G 0 ) \ {x}, we add the edge {w, x} to G 0 . Note that
G 0 has a clique cover of size k + 1 as for all j ∈ [k], G 0 [{(v, j) | v ∈ V(G)}] is a complete graph
and {x, y} is a clique. We claim that G has an independent set of size k if and only if G 0 has
a non-separating independent set of size k + 1.
Suppose S = {v1 , . . . , vk } is an independent set of size k in G. Define the set S 0 ⊆ V(G 0 )
as {(v1 , 1), (v2 , 2), . . . , (vk , k), y}. Consider any two vertices (vi , i) and (vj , j) in S 0 such that
i ≠ j. As for each distinct i, j ∈ [k], we have {vi, vj} ∉ E(G), it follows that there is no edge
between (vi , i) and (vj , j). Also, by the construction of G 0 , y is not adjacent to any vertex in
G 0 other than x. Therefore, S 0 is an independent set of size k + 1 in G 0 . As x is adjacent to
every vertex in G 0 − S 0 , it follows that the deletion of S 0 does not disconnect G 0 . In other
words, S 0 is a non-separating independent set of G 0 .
Conversely, suppose S 0 is a non-separating independent set of size k + 1 in G 0 . Clearly,
different vertices of S 0 have to come from different cliques of the clique cover of G 0 . We may
assume that y ∈ S 0 as if x ∈ S 0 , then (S 0 \ {x}) ∪ {y} is another non-separating independent
set of size k + 1. This is due to the facts that y is not adjacent to any vertex other than x
and x is a universal vertex in G 0 . Define the set S ⊆ V(G) as {v | (v, j) ∈ S 0 \ {y}}. Consider
any two vertices (v, i) and (u, j) in S 0 . Then, u ≠ v, i ≠ j and v ∉ NG(u). Therefore, S is
an independent set in G of size k.
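The construction in this proof is easily mechanized. The following Python sketch builds G 0 from an Independent Set instance (G, k); the labels 'x' and 'y' are assumed not to occur among the vertices of G.

def build_reduction(G_edges, vertices, k):
    # V(G') = {(v, i) : v in V(G), 1 <= i <= k} plus {x, y};
    # (v, i) ~ (u, j) iff i == j or u == v or {u, v} in E(G);
    # x is universal; y is adjacent only to x.
    Vp = [(v, i) for v in vertices for i in range(1, k + 1)] + ['x', 'y']
    E = set(frozenset(e) for e in G_edges)
    Ep = set()
    for a in range(len(Vp)):
        for b in range(a + 1, len(Vp)):
            p, q = Vp[a], Vp[b]
            if 'y' in (p, q):
                continue  # y gets only the edge {x, y}, added below
            if 'x' in (p, q):
                Ep.add(frozenset((p, q)))  # x is universal
            elif p[1] == q[1] or p[0] == q[0] or frozenset((p[0], q[0])) in E:
                Ep.add(frozenset((p, q)))
    Ep.add(frozenset(('x', 'y')))
    return Vp, Ep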
From Theorem 16, it follows that Connected Vertex Cover parameterized by k + q
is W[1]-hard where k is the size of a set S whose deletion results in a graph for which there
exists a clique cover of size q. However, the problem is in XP as the algorithm in Theorem
3 can be easily generalized to solve Connected Vertex Cover in O∗(2^k · n^q) time where
n is the number of vertices in the input graph. Further, by an easy adaptation of Theorem
15, the problem admits a PSAKS leading to the following result.
Theorem 17. Given a graph G on n vertices, a set S ⊆ V(G) such that |S| ≤ k and G − S
has a clique cover Q of size q and a positive integer `, there is an algorithm that determines
whether G has a connected vertex cover of size at most ` in O∗(2^k · n^q) time. Further, the problem admits a time efficient PSAKS with O(k + q · ⌈(2α−1)/(α−1)⌉) vertices.
Proof. Without loss of generality, we assume that the graph has no isolated vertices as no
minimum connected vertex cover contains them. We also assume that the set S and the
clique cover Q of G − S are part of the input. First, let us describe the XP algorithm. Let
X∗ be a connected vertex cover of G of size at most `. As at least |Q| − 1 vertices from each clique Q ∈ Q are contained in X∗, we guess Y = X∗ ∩ (⋃_{Q∈Q} Q). Then, we guess Z = X∗ ∩ S such that ((⋃_{Q∈Q} Q) \ Y) ∪ (S \ Z) is an independent set. The number of choices for (Y, Z) is O∗(n^q · 2^k). If G[Y ∪ Z] is not connected or has more than ` vertices, we skip to the next
choice of (Y, Z). Otherwise, Y ∪ Z is the required solution. If none of the choices leads to a
solution, we declare that G has no connected vertex cover of size at most `. The overall
running time of the algorithm is thus O∗(2^k · n^q).
Now, we describe an α-approximate kernel for each α > 1. Given such an α, let d = ⌈(2α−1)/(α−1)⌉.
We bound the number of vertices in G by applying Reduction Rule 3.1 on each Q ∈ Q. From
Lemma 8, each application of this rule is α-safe. When this rule is no longer applicable, it
follows that G has at most k + q(d − 1) vertices. This leads to the claimed result.
5 Connected Vertex Cover parameterized by Cluster Deletion Set
In Section 3.1, we gave an O∗(2^k) time algorithm for Connected Vertex Cover parameterized by the size k of a clique deletion set. Here, we generalize this algorithm to solve Connected Vertex Cover in O∗(4^k) time where k is the size of a cluster deletion set.
Observe that this is a parameter smaller than the clique deletion set size. Further, we also
describe a lossy kernel with respect to this parameterization. A classical polynomial kernel
is unlikely as the size of a minimum cluster deletion set is at most the size of a minimum
connected vertex cover. This is due to the fact that deleting a connected vertex cover from
a graph results in a cluster graph in which every component is an isolated vertex.
5.1 An FPT Algorithm
Consider an instance (G, S, `) of Connected Vertex Cover where S is a cluster deletion
set. Let H denote the cluster graph G − S. Let X∗ be a connected vertex cover of size at
most ` (if one exists) that we are looking for. First, we guess the subset S 0 of S such that
S 0 = S ∩ X∗ . If S \ S 0 is not an independent set, we skip to the next choice of S 0 . Define the
set F as N(S \ S 0 ) ∩ V(H). Initialize the set X to be S 0 . In later steps of our algorithm, we will extend X to a connected vertex cover of G. We also update ` to ` − |S 0 | and delete
S \ S 0 from G. We will now describe a sequence of reduction and branching rules to be
applied. The rules are applied in the order stated and a rule is applied as long as it is
applicable on the instance. That is, a rule is applied only when none of the preceding rules
can be applied.
Let I denote the set of isolated vertices of H and Q denote the set of vertex sets of
components of H − I. That is, I is an independent set and each element of Q is a clique in
H. These sets are updated accordingly as and when the rules are applied. First, we apply
the following preprocessing rule.
Preprocessing Rule 5.1. If there is a clique Q ∈ Q with N(Q) ∩ X = ∅ or a vertex
v ∈ F ∩ I such that N(v) ∩ X = ∅, then skip to the next choice of S 0 .
The correctness of the first part of the rule follows from the fact that at least |Q| − 1 vertices from Q have to be in any vertex cover and any set of such vertices along with X
cannot be extended to a connected subgraph by only adding vertices from V(H). The
correctness of the second part follows from the facts that F is forced into the solution and
G[X ∪ {v}] cannot be extended to a connected subgraph by only adding vertices from V(H).
Let F be partitioned into sets Y and Z defined as follows.
• Y = {v ∈ F | N(v) ∩ X 6= ∅}, the set of vertices of F that have a neighbour in X.
• Z = F \ Y, the set of vertices of F that have no neighbour in X.
These sets are updated over the execution of the algorithm according to the current
partial solution X. In particular, at any point of time, Z is the set of vertices in F that are
not adjacent to any vertex in the current partial solution X. Our first reduction rule is as
follows.
Reduction Rule 5.1. Add Y ∪ Z to X, update ` to ` − |Z ∪ Y| and delete Y from H.
This rule is justified by the fact that X∗ must contain F. Due to Preprocessing Rule
5.1, every vertex v ∈ Z is in some clique Q in Q. When Reduction Rule 5.1 is no longer
applicable, we have V(H) ∩ F = Z. For each v ∈ Z, let Qv denote the clique in Q containing
v. The next rule is the following.
Reduction Rule 5.2. If there is a vertex v ∈ Z with |Qv ∩ Z| = |Qv | − 1, then add Qv \ Z
to X, update ` to ` − 1 and delete Qv from H and Z.
The correctness of this rule follows from the fact that there is exactly one vertex u in
Qv ∩ N(X \ Z). This vertex is forced into the solution since Qv \ {u} ⊆ X and Qv \ {u} has
no neighbours in X \ (Qv \ {u}). Now, for any clique Q ∈ Q, there are at least two vertices
that are not in Z. For a vertex v ∈ V(H), let C(v) = {A | A is a component of G[X] and NG(v) ∩ V(A) ≠ ∅}. The next rule is the following.
Reduction Rule 5.3. If there is a clique Q in Q that has two vertices u, v ∉ Z with
C(u) ⊆ C(v), then update X to X ∪ (Q \ {u}) and reduce ` by the number of vertices added to
X. Delete Q from H and Z.
If there exists an optimal connected vertex cover X that does not contain v, then u ∈ X
and X 0 = (X \ {u}) ∪ {v} is also a connected vertex cover of G. This justifies the correctness
of the rule. The next rule is a branching rule that is applied when H has a triangle or a
larger clique Q with vertices not in Z. Further, for each v ∈ Q \ Z, |C(v)| ≥ 1 and for every
u, v ∈ Q \ Z, C(u) ⊄ C(v).
Branching Rule 5.1. If there is a triangle {u, v, w} in H with u, v, w ∉ Z, then branch
into adding u, v or u, w or w, v into X. In each of the branches update ` to ` − 2.
The branches in this rule are clearly exhaustive as any vertex cover contains at least
two vertices from a triangle. The reason for handling such special triangles will be apparent
during the application of subsequent rules. When none of the rules described so far is
applicable, every clique Q ∈ Q has exactly 2 vertices not in Z. In particular, if Q ∩ Z = ∅,
then |Q| = 2.
Reduction Rule 5.4. If there is an edge {u, v} with u, v ∉ Z in H and |C(u)| = 1 and |C(v)| ≥ 1, then update X to X ∪ (Q \ {u}) and reduce ` by the number of vertices added to X, where Q is the clique in Q that contains u and v. Delete Q \ {u} from H and Z. Add u to I.
If there exists an optimal connected vertex cover X that does not contain v, then u ∈ X
and X 0 = (X \ {u}) ∪ {v} is also a vertex cover of G. Further, G[X 0 ] is connected as the only
vertex in Q ∩ X that is adjacent to a vertex in X \ {u} is u and |C(u)| = 1. Note that we
crucially use the property that Q has exactly two vertices adjacent to a vertex in X \ {u, v}
which is achieved by the application of Branching Rule 5.1. This justifies the correctness
of the rule. The next rule is a branching rule that is applied until X is a vertex cover (not
necessarily connected) of G.
Branching Rule 5.2. If there is an edge {u, v} in H, then in one branch we add u into
X and in other branch we add v into X. In both branches, update ` to ` − 1 and delete the
vertex added to X from H.
When this rule is applied on an edge {u, v}, we have |C(u)|, |C(v)| ≥ 2 and {u, v} ∩ Z = ∅. The branches are exhaustive as any vertex cover contains u or v. At this point, when none
of the described rules (reduction and branching) is applicable, we have a vertex cover X of
G. That is, V(H) is an independent set. However, X may not necessarily induce a connected
subgraph. Let G 0 be the graph obtained from G[V(H) ∪ X] by contracting each component of G[X] into a single vertex. G 0 is bipartite and has a bipartition (V(H), X̂) where X̂ is the set of vertices of G 0 corresponding to the set of components of G[X]. The problem is now to find a Steiner tree of G 0 that contains X̂, which can be solved in O∗(2^|X̂|) time using Lemma 1. This completes the description of the algorithm, leading to the following result.
Theorem 18. Given a graph G, a cluster deletion set S and a positive integer `, there is
an algorithm that determines whether G has a connected vertex cover of size at most ` in
O∗(4^|S|) time.
Proof. Given an instance (G, S, `), we first guess a subset S 0 of S that is contained in the
solution. Then, we apply the reduction rules and branching rules in the order stated. We
reiterate that each rule is applied as long as it is applicable. Also, a rule is applied only
when none of the earlier rules is applicable. Let X be the partial solution that we are trying
to extend to a connected vertex cover. Initially X is S 0 . The measure that we use to bound
the running time is the number #comp(G[X]) of components of G[X] which is at most |S 0 |.
On applying either of the branching rules, it is clear that #comp(G[X]) drops in all the
branches by at least one. We will now show that the reduction rules do not increase the
measure. When Reduction Rule 5.2 is applied, Qv has a neighbour in X due to Preprocessing
Rule 5.1. Similarly, when Reduction Rule 5.3 or 5.4 is applied, Q \ {u} has a neighbour in X.
Hence, these three rules do not increase #comp(G[X]). Now consider Reduction Rule 5.1.
Adding Y to X does not increase #comp(G[X]) while adding Z to X will definitely increase it.
However, if Z is non-empty, then in subsequent rules, some neighbour of Z which is also
adjacent to some vertex in (current) X is added to the solution. Consider a vertex z ∈ Z on
which Reduction Rule 5.2 is applicable. Then, we add a neighbour z 0 of z that is adjacent
to X. Therefore, the measure does not increase. In fact, the measure first increases by 1
(due to addition of z to X) and then decreases by at least 1 (due to addition of z 0 to X).
Suppose Reduction Rule 5.2 is not applicable on a vertex z ∈ Z. Then, Qz has at least 2
vertices not in Z. If Reduction Rule 5.3 is applicable on Qz , then adding z into X does not
increase the measure. Let us consider the case when Reduction Rule 5.3 is not applicable
on Qz . Then, either Branching Rule 5.1 or Reduction Rule 5.4 or Branching Rule 5.2 is
applicable to vertices of Qz . In any case, as we add a neighbour of z that is adjacent to X,
the measure does not increase.
When none of the reduction and branching rules is applicable, we have a set X that is
a vertex cover of G. That is, at each leaf of the recursion tree, we have a vertex cover X
of G that needs to be connected by adding a minimum number of vertices from V(H). We
construct the bipartite graph G 0 with bipartition (V(H), X̂) as described earlier. Here, X̂ is the set of vertices of G 0 corresponding to the set of components of G[X]. Observe that #comp(G[X]) is at most |S 0 |. Then, the time taken at each leaf of the recursion tree is upper bounded by the time taken to find a Steiner tree of G 0 that contains X̂, which is O∗(2^|X̂|) from Lemma 1. Let |S 0 | = i. If a leaf is at depth j where j ≤ i, it follows that #comp(G[X]) is at most i − j. Then, the Steiner tree algorithm at this leaf runs in O∗(2^{i−j}) time.
Let T (j) denote the number of leaves of the search tree at depth j. It is easy to verify that
the search tree is a ternary tree as there are at most three branches in any rule. Thus, the
number of leaves at depth j is at most 3^j. Therefore, the running time (ignoring polynomial
factors) of the algorithm is upper bounded by the following value. Let k denote |S|.
∑_{i=0}^{k} \binom{k}{i} · ∑_{j=0}^{i} 3^j · 2^{i−j} ≤ ∑_{i=0}^{k} \binom{k}{i} · i · 3^i = 3k · ∑_{i=1}^{k} \binom{k−1}{i−1} · 3^{i−1} = 3k · 4^{k−1}
Thus, the running time of the algorithm is O∗(4^k).
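The contraction performed at each leaf of the above algorithm can be sketched as follows: components of G[X] are found by depth-first search and become one side of the bipartite Steiner Tree instance. The ('c', i)/('v', v) tagging of vertices is merely a convention of this sketch.

def contract_cover_components(adj, X):
    # Contract each component of G[X] to one vertex; the other side of the
    # bipartition is V(H) = V \ X (an independent set at this stage).
    X = set(X)
    comp, seen, cid = {}, set(), 0
    for s in X:
        if s in seen:
            continue
        stack = [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp[v] = cid
            for u in adj[v]:
                if u in X and u not in seen:
                    seen.add(u)
                    stack.append(u)
        cid += 1
    bip = {('c', i): set() for i in range(cid)}
    for v in set(adj) - X:
        bip[('v', v)] = set()
        for u in adj[v]:
            if u in X:
                a, b = ('v', v), ('c', comp[u])
                bip[a].add(b)
                bip[b].add(a)
    return bip, comp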
A degree-i modulator is a set of vertices whose deletion results in a graph with maximum
degree at most i. As a degree-1 modulator is also a cluster deletion set, it follows that
Connected Vertex Cover can be solved in O∗(4^|S|) time when given a degree-1 modulator S as part of the input. However, observe that in this case the algorithm runs in O∗(3^|S|) time as Branching Rule 5.1 is never applicable (a graph of maximum degree 1 has no triangles, so the search tree is a binary tree). Thus, we have the
following result.
Theorem 19. Given a graph G, a degree-1 modulator S and a positive integer `, there is
an algorithm that determines whether G has a connected vertex cover of size at most ` in
O∗(3^|S|) time.
It is well known that the treewidth (tw) of a graph is not larger than the size of its
minimum vertex cover, and hence of its minimum connected vertex cover. Therefore, a naive
dynamic programming routine over a tree decomposition of width tw solves Connected
Vertex Cover in O∗(2^{tw · log tw}) time [24]. The running time bound has been improved to O∗(2^O(tw)) using algebraic techniques [2]. If the maximum degree of a graph G is upper
bounded by 2, then every component of G is an induced path or an induced cycle. Hence,
the treewidth of G is at most 2. Therefore, if G has a degree-2 modulator of size at most k,
then the treewidth of G is at most k + 2. This shows that Connected Vertex Cover is
FPT when parameterized by the size of a degree-2 modulator. As Connected Vertex
Cover is NP-hard on graphs with maximum degree at most 4, it is clear that Connected
Vertex Cover when parameterized by the size of a degree-4 modulator is para-NP-hard
(NP-hard for fixed values of the parameter). This observation does not immediately carry over
when parameterized by the size of a degree-3 modulator as Connected Vertex Cover
is polynomial-time solvable on sub-cubic graphs [26]. Nevertheless, Connected Vertex
Cover is para-NP-hard even when parameterized by the size of a degree-3 modulator. This
is due to the fact that Vertex Cover in sub-cubic graphs is NP-complete [1, 15] and
Vertex Cover reduces to Connected Vertex Cover by just adding a universal vertex
(which is a degree-3 modulator) to the graph.
Theorem 20. Connected Vertex Cover parameterized by the size of a degree-i modulator is FPT when i ≤ 2 and para-NP-hard for i ≥ 3.
It is intriguing that though Connected Vertex Cover is polynomial-time solvable
on sub-cubic graphs, it is NP-hard on graphs that are just one vertex away from being a
sub-cubic graph. Note that the related Feedback Vertex Set problem is also solvable
in polynomial-time on sub-cubic graphs, but it is a major open problem whether it is
polynomial-time solvable on graphs that have a degree-3 modulator of size 1.
5.2 A Lossy Kernel
In this section, we give a PSAKS for Connected Vertex Cover parameterized by the
size of a cluster deletion set. We formally define the parameterized minimization problem
as follows.
CVC((G, S), k, T) =
  −∞   if |S| > k or some component of G − S is not a clique,
  ∞    if T is not a connected vertex cover of G,
  |T|   otherwise.
We now prove the following main result of this section.
Theorem 21. Connected Vertex Cover parameterized by the size k of a cluster deletion
set admits a time efficient PSAKS with O(k² + ⌈(2α−1)/(α−1)⌉ · k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices.
Proof. Let G be a connected graph and S denote its cluster deletion set of size at most k.
Given α > 1, define ε = α − 1. Let H denote the cluster graph G − S and F0 be the set of
isolated vertices in H. Let F1 = V(H) \ F0 and t denote the number of components in G[F1 ].
Let C1, · · · , Ct denote the components of G[F1]. Let d1 = ⌈(2α−1)/(α−1)⌉ and d2 = ⌈α/(α−1)⌉. Define the
set T as follows. We first initialize T to S. Then, for each i ∈ [t], we add to T a set Xi of |V(Ci)| − 1 vertices of Ci such that NG(Xi) ∩ S ≠ ∅. Such a set always exists as G is connected. Now, T is a vertex cover of G and |T| = |S| + ∑_{i=1}^{t} (|V(Ci)| − 1). Further, #comp(G[T]) = #comp(G[S]).
If G[T ] is not connected, then we add at most |S| − 1 vertices from V(G) \ T to T so that
G[T ] becomes connected. Once again, such a set exists as G is connected. Thus, we have
t
t
P
P
|T | < 2k + (|V(Ci )| − 1). We know that OPT ((G, S), k) ≥ (|V(Ci )| − 1) as each V(Ci )
i=1
i=1
is a clique and any vertex cover excludes at most one vertex from any clique. Therefore,
|T | < 2k + OPT ((G, S), k). If t ≥ 2k
, then the reduction algorithm outputs a constant size
0
0
instance (G , S ). Since OPT ((G, S), k) ≥ t, we have k ≤ 2 OPT ((G, S), k) and it follows that
|T | is at most (1 + )OPT ((G, S), k). Equivalently, CVC((G, S), k, T ) ≤ (1 + )OPT ((G, S), k).
The solution lifting algorithm simply returns T which is obtained in polynomial time.
On the other hand, suppose t < 2k/ε. For each i ∈ [t], we apply Reduction Rule 3.1 with d = d1 to bound the number of vertices in Ci by d1. From Lemma 8, we know that Reduction Rule 3.1 is α-safe. When this rule is no longer applicable, we have at most d1 · (2k/ε) vertices in F1. Now, it remains to bound the number of vertices in F0. Observe that any optimal connected vertex cover must contain any vertex x ∈ S with |NG(x) ∩ F0| ≥ 2k. This is because, if some
optimal connected vertex cover T∗ excludes x, then |T∗| ≥ 2k + Σ_{i=1}^{t} (|V(Ci)| − 1). However, T is a connected vertex cover with |T| < 2k + Σ_{i=1}^{t} (|V(Ci)| − 1), leading to a contradiction.
Let S1 = {x ∈ S | |NG (x) ∩ F0 | ≥ 2k}. We first partition F0 into A0 = {v ∈ F0 | NG (v) ⊆ S1 }
and B0 = F0 \ A0 . Then, we apply Reduction Rule 3.2 to vertices in A0 with d = d2 and
Reduction Rule 3.3 to vertices in A0 with d = 0. Reduction Rule 3.2 is α-safe and Reduction
Rule 3.3 is 1-safe from Lemmas 11 and 12.
Suppose none of the described rules is applicable on the instance (G, S). We will show that |V(G)| = O(k^2 + d2 · k^{d2} + d1 · k). The set V(G) is partitioned into sets S, F1, A0 and B0. As G is connected, G has no isolated vertex. As Reduction Rule 3.1 is not applicable, each component in G[F1] has at most d1 vertices. Thus, |F1| ≤ (2k/ε) · d1. For any vertex v ∈ F0, there is a vertex x ∈ S such that {x, v} ∈ E(G). Since the reduction rules described are not applicable, every vertex x ∈ A0 has at most d2 − 1 neighbors in S1. Further, for every X ⊆ S1 with |X| ≤ d2 − 1, there are at most 2k vertices from A0 such that each of those vertices has X as its neighborhood in G. Therefore, |A0| ≤ 2k · Σ_{i=1}^{d2−1} (k choose i) ≤ 2(d2 − 1)k^{d2}, as |S1| ≤ k. As G is connected, each vertex in B0 has a neighbour in S \ S1. Since any vertex in S \ S1 has at most 2k neighbours in F0, it follows that |B0| ≤ 2k^2. Thus, |V(G)| = |S| + |F1| + |A0| + |B0| is at most k + (2k/ε) · d1 + 2(d2 − 1)k^{d2} + 2k^2, which is O(k^2 + ⌈(2α−1)/(α−1)⌉ · k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉).
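For concreteness, the following small computation (our own illustration, not part of the proof) instantiates the parameters of Theorem 21 for a 2-approximate kernel.

from math import ceil

alpha = 2                                  # target approximation factor
eps = alpha - 1                            # eps = 1
d1 = ceil((2 * alpha - 1) / (alpha - 1))   # component size bound: 3
d2 = ceil(alpha / (alpha - 1))             # neighbourhood size bound: 2
# The vertex bound O(k^2 + d1*k/eps + d2*k**d2) then collapses to O(k^2).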
As the deletion of a degree-1 modulator results in a graph in which every component
has at most 2 vertices, we have the following result as a consequence of Theorem 21.
Corollary 22. Connected Vertex Cover parameterized by the size k of a degree-1 modulator admits a time efficient PSAKS with O(k^2 + k/(α−1) + ⌈α/(α−1)⌉ · k^⌈α/(α−1)⌉) vertices.
6 Connected Vertex Cover parameterized by Chordal Deletion Set
In this section, we show that Connected Vertex Cover parameterized by the size of a
chordal deletion set is FPT. It is well known that the treewidth (tw) of a graph is at most
one more than the size of a minimum feedback vertex set (a set of vertices whose removal
results in a forest). This immediately implies that Connected Vertex Cover is FPT
when parameterized by the size of a feedback vertex set. Recall that a chordal graph is a
graph in which every induced cycle is a triangle. As forests, split graphs and cluster graphs are chordal, the minimum chordal deletion set size is at most the minimum feedback vertex set size, the minimum split deletion set size and the minimum cluster deletion set size.
Theorem 23. Given a graph G, a chordal deletion set S and a positive integer ℓ, there is an algorithm that determines whether G has a connected vertex cover of size at most ℓ in O∗(2^{|S| log |S|}) time.
Proof. Let G be a connected graph on n vertices and S denote its chordal deletion set. As
G − S is a chordal graph, a tree decomposition of G − S of optimum width in which every
bag is a clique can be obtained in linear time [16]. From this tree decomposition of G − S,
a tree decomposition T = (T, {Xt }t∈V(T ) ) of G can be obtained by adding S to every bag.
We will show that the algorithm for Connected Vertex Cover on bounded treewidth graphs [24] runs in O∗(2^{O(|S| log |S|)}) time. The algorithm employs a dynamic programming
routine over the tree decomposition T . As this approach is largely standard, we only present
an outline of the algorithm here.
For a node t of T, let Tt denote the subtree of T rooted at t ∈ V(T) and Vt denote the union of all the bags of Tt. Let Gt denote the subgraph of G induced by Vt. For t ∈ V(T), X ⊆ Xt and a partition P = {P1, · · · , Pq} of X into at most |X| parts, let Γ[t, X, P] denote a minimum vertex cover Z of Gt with Xt ∩ Z = X such that Gt[Z] has exactly q connected components C1, · · · , Cq where Pi = V(Ci) ∩ Xt for each i ∈ {1, · · · , q}. Further, if X is empty, Z is required to be connected in Gt. Then, Γ[r, ∅, {∅}] at the root r of T is a minimum connected vertex cover of G. Observe that the total number of (valid) states per node is |S|^{O(|S|)} · n (see the sketch after this proof). This is due to the fact that for each node t ∈ V(T), G[Xt] has a clique deletion set of size |S|. Further, each entry can be computed in |S|^{O(|S|)} · n^{O(1)} time. This leads to the claimed result.
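To illustrate the shape of the table Γ, here is a small sketch (our own illustration, not the algorithm of [24]) enumerating the DP states (X, P) for a single bag; the helper names partitions and dp_states are hypothetical.

from itertools import combinations

def partitions(elems):
    """Yield all partitions of a list of elements (Bell-number many)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]               # or open a new block

def dp_states(bag):
    """Enumerate (X, partition-of-X) pairs for one bag of the decomposition."""
    for r in range(len(bag) + 1):
        for X in combinations(bag, r):
            yield from ((X, P) for P in partitions(list(X)))

# e.g. a bag consisting of a clique {a, b} plus modulator S = {s1, s2};
# since clique vertices in Z are pairwise adjacent, only |S|^O(|S|) * n of
# these states are actually valid, as claimed above.
print(sum(1 for _ in dp_states(["a", "b", "s1", "s2"])))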
We remark that the lossy kernelization status of this problem remains open even when
the chordal deletion set is a feedback vertex set.
7 Concluding Remarks
We have given lossy kernels and studied the parameterized complexity statuses of Connected Vertex Cover parameterized by split deletion set size, clique cover number
and cluster deletion set size. Our FPT running times (for Connected Vertex Cover
parameterized by split deletion set and cluster deletion set) have slight gaps between upper
bounds and lower bounds (based on well-known conjectures), and tightening the bounds
is an interesting open problem. A more general direction is to explore the parameterized
and (lossy) kernelization complexity of Connected Vertex Cover in the parameter
ecology program. Figure 1 gives a partial landscape of the (parameterized) complexity
of Connected Vertex Cover under various structural parameters. Designing (lossy)
kernels (or proving impossibility results) for those whose status is unknown is an interesting
future direction.
References
[1] P. Alimonti and V. Kann. Some APX-Completeness Results for Cubic Graphs. Theoretical Computer Science, 237(1-2):123–134, 2000.
[2] H. L. Bodlaender, M. Cygan, S. Kratsch, and J. Nederlof. Deterministic Single
Exponential Time Algorithms for Connectivity Problems parameterized by Treewidth.
Information and Computation, 243:86–111, 2015.
[3] H. L. Bodlaender and B. M. P. Jansen. Vertex Cover Kernelization Revisited: Upper and
Lower Bounds for a Refined Parameter. Theory of Computing Systems, 63(2):263–299,
2013.
[4] H. L. Bodlaender, B. M. P. Jansen, and S. Kratsch. Kernelization Lower Bounds by
Cross-Composition. SIAM Journal of Discrete Mathematics, 28(1):277–305, 2014.
[5] A. Boral, M. Cygan, T. Kociumaka, and M. Pilipczuk. A Fast Branching Algorithm
for Cluster Vertex Deletion. Theory of Computing Systems, 58(2):357–376, 2016.
[6] J. Chen, I. A. Kanj, and W. Jia. Vertex Cover: Further Observations and Further
Improvements. Journal of Algorithms, 41(2):280–301, 2001.
[7] M. Cygan. Deterministic Parameterized Connected Vertex Cover. In Proceedings of the
13th Scandinavian Workshop on Algorithm Theory (SWAT), pages 95–106, 2012.
[8] M. Cygan, H. Dell, D. Lokshtanov, D. Marx, J. Nederlof, Y. Okamoto, R. Paturi,
S. Saurabh, and M. Wahlström. On Problems as Hard as CNF-SAT. ACM Transactions
on Algorithms, 12(3):41:1–41:24, 2016.
[9] M. Cygan, F. V. Fomin, L. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk,
M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015.
[10] M. Cygan and M. Pilipczuk. Split Vertex Deletion meets Vertex Cover: New Fixed-Parameter and Exact Exponential-Time Algorithms. Information Processing Letters,
113(5-6):179–182, 2013.
[11] R. Diestel. Graph Theory. Springer, Graduate Text in Mathematics, 2012.
[12] M. Dom, D. Lokshtanov, and S. Saurabh. Kernelization Lower Bounds through Colors
and IDs. ACM Transaction on Algorithms, 11(2):13:1–13:20, 2014.
[13] B. Escoffier, L. Gourvès, and J. Monnot. Complexity and Approximation Results for
the Connected Vertex Cover problem in Graphs and Hypergraphs. Journal of Discrete
Algorithms, 8(1):36–49, 2010.
[14] F. Fomin and D. Kratsch. Exact Exponential Algorithms. Springer-Verlag, 2010.
[15] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory
of NP-Completeness. W. H. Freeman & Co., 1979.
[16] M. C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Springer, 2004.
[17] B. M. P. Jansen. The Power of Data Reduction: Kernels for Fundamental Graph
Problems. PhD thesis, Utrecht University, The Netherlands, 2013.
[18] B. M. P. Jansen, M. R. Fellows, and F. A. Rosamond. Towards fully Multivariate
Algorithmics: Parameter Ecology and the Deconstruction of Computational Complexity.
European Journal of Combinatorics, 34(3):541–566, 2013.
[19] B. M. P. Jansen and S. Kratsch. Data Reduction for Coloring Problems. Information
and Computation, 231:70–88, 2013.
[20] B. M. P. Jansen, V. Raman, and M. Vatshelle. Parameter Ecology for Feedback Vertex
Set. Tsinghua Science and Technology, 19(4):387–409, 2014.
[21] D. Lokshtanov, N. S. Narayanaswamy, V. Raman, M. S. Ramanujan, and S. Saurabh.
Faster Parameterized Algorithms Using Linear Programming. ACM Transactions on
Algorithms, 11(2):15:1–15:31, 2014.
[22] D. Lokshtanov, F. Panolan, M. S. Ramanujan, and S. Saurabh. Lossy Kernelization.
In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
(STOC), pages 224–237, 2017.
[23] D. Marx. Chordal Deletion is Fixed-Parameter Tractable. Algorithmica, 57(4):747–768,
2010.
[24] H. Moser. Exact Algorithms for Generalizations of Vertex Cover. Master’s thesis,
Institut für Informatik, Friedrich-Schiller-Universität, 2005.
[25] J. Nederlof. Fast Polynomial-Space Algorithms Using Inclusion-Exclusion. Algorithmica,
65(4):868–884, 2013.
[26] S. Ueno, Y. Kajitani, and S. Gotoh. On the Nonseparating Independent Set problem
and Feedback Set problem for Graphs with no Vertex Degree exceeding Three. Discrete
Mathematics, 72:355–360, 1988.
[27] B. Y. Wu. A Measure and Conquer based Approach for the Parameterized Bounded
Degree-one Vertex Deletion. In Proceedings of the 21st International Computing and
Combinatorics Conference (COCOON), pages 469–480, 2015.
| 8 |
Session Types Go Dynamic or
How to Verify Your Python Conversations
Rumyana Neykova
Imperial College London, United Kingdom
[email protected]
This paper presents the first implementation of session types in a dynamically-typed language, Python. Communication safety of the whole system is guaranteed at runtime by monitors that check that the execution traces comply with an associated protocol. Protocols are written in Scribble, a choreography description language based on multiparty session types, with the addition of logic formulas for more precise behaviour properties. The presented framework overcomes a limitation of previous works on session types, where all endpoints must be statically typed and thus do not permit interoperability with untyped participants. The advantages, expressiveness and performance of dynamic protocol checking are demonstrated through a use case and benchmarks.
1 Introduction
The study of multiparty session types (MPST) has explored a type theory for distributed programs which can ensure, for any typable program, a full guarantee of deadlock-freedom and communication safety (all processes conform to a globally agreed communication protocol) through static type checking. However, static verification is not always feasible, and dynamic approaches have several advantages. First, when access to the source code is restricted, dynamic verification makes it possible to detect errors in, and ensure the correctness of, external untyped components. Second, constraints on the message payload are easier to check dynamically. Third, as shown in this paper, dynamic checking is less intrusive to the source code, because it does not require extensions of the host language as in the existing works on session types.
In this paper we present a toolchain for session-based programming (hereafter conversation programming) in Python that uses MPST-protocols to dynamically verify the communication safety of the running system. Conversation programming in Python resembles the standard development methodology for MPST-based frameworks (Fig. 1). It starts by specifying the intended interactions (choreography) as a global protocol in the protocol description language Scribble [13]. Then Scribble local protocols are generated mechanically for each participant (role) defined in the protocol. After that, processes for each role are implemented using MPST operations exposed by the Python conversation library. An external monitor is assigned to each endpoint. During communication initiation the monitor retrieves the local protocol for its process and converts it to a finite state machine (FSM). The FSM continuously checks at runtime that each interaction (execution trace) is correct. If all participants comply to their protocols, the whole communication is guaranteed to be safe [6]. If participants do not comply, violations (such as deadlocks and communication mismatch) are detected and optionally ignored.
Figure 1: Development methodology. A global protocol specification (Scribble) is projected to local specifications; implementation source code (Python) for each role runs over the conversation runtime and is dynamically verified by monitors, yielding a safe network.
N. Yoshida, W. Vanderbauwhede (Eds.): Programming Language
Approaches to Concurrency- and Communication-cEntric
Software 2013 (PLACES’13)
EPTCS 137, 2013, pp. 95–102, doi:10.4204/EPTCS.137.8
The presented framework brings several non-trivial contributions to MPST works. First, Scribble is
extended with logic assertions (constraints on the message payload). Second, implementing MPST in a
dynamic language requires different code augmentation techniques. For that purpose, we have defined
a minimal, but sufficient and extendable format for conversation message headers. Third, we show that
using FSMs for MPST checking has reasonable overhead. The algorithm used to convert local session
types to FSMs is based on [7]; however, we have optimised it to avoid the state explosion for parallel
sub-protocols and have extended it for the new Scribble constructs. Finally, the Python API is more
flexible compared to other session types language extensions, because it supports different programming
styles (event-driven and thread-based, see Fig. 4). From the existing implementations only SJ [9] features
event-driven programming, but it has stricter typing rules. To the best of our knowledge, this is the
first implementation of session types for decentralised monitoring. Our practical framework is inspired
by the formal model of MPST runtime safety enforcement presented in [6, 5]. In the aforementioned
works conformance to stipulated global protocols is guaranteed at runtime through local monitoring.
The rest of the paper illustrates the key features of our conversation framework, the Python runtime and its API (§ 2); it also gives an overview of the monitoring tool, along with its benchmarks (§ 3). § 5 discusses future work and concludes. The code for the runtime and the monitor tool and example
applications are available from [14].
2 Conversation Programming in Python
This section illustrates the stages of our framework and its implementation through a use case. Steps 1 and 2 illustrate the use case specification in Scribble, while Step 3 presents one of the main contributions of the paper – a Python API for conversation programming. We present a use case obtained from
our industrial partners Ocean Observatory Institute (OOI) [11] (use case UC.R2.13 ”Acquire Data From
Instrument”). OOI aims to establish cyberinfrastructure for the delivery, management and analysis of
scientific data from a large network of ocean sensors. Their architecture relies on distributed run-time
monitoring to regulate the behaviour of third-party applications within the system. Part of the monitor
tool presented in this paper is already integrated in their system as an internal monitor.
Step 1: Global Protocol. The Scribble global protocol for the use case is listed in Fig. 2. Scribble
describes interactions between session participants through message passing sequences, branches and
recursion. Each message has a label (an operator) and a payload. The first line declares the Data Acquisition protocol and three participant roles – a User (U), an Agent service (A) and an Instrument (I). The
overall scenario is as follows: U requests via A to start streaming a list of resources from I (line 2–3).
At Line 4 I makes a choice whether to continue the interaction or not. If I supports the requested resource
the communication continues and A starts to poll resources from I and streams them to U (line 6–15).
Line 10 shows the new assertion construct and restricts I to send data packages that are less than 512MB.
The presented assertion extension is inspired by [4]. However, we do not stick to a predefined logic, but
allow various policy languages to be incorporated inside an assertion construct.
Step 2: Global-to-local Protocol Projection. Local protocols specify the communication behaviour
for each conversation participant. An example of a local protocol (the local protocol for role A) is given in Fig. 2. A local protocol is essentially a view of the global protocol from the perspective of one
participant role and as such it is mechanically projected from the global protocol. Projection basically
works by identifying the message exchanges where the participant is involved, and disregarding the rest,
while preserving the overall interaction structure of the global protocol. The assertions are similarly
1  global protocol DataAquisition(role U, role A, role I) {
2    Request(string:info) from U to A;
3    Request(string:info) from A to I;
4    choice at I {
5      Support from I to A;
6      rec Poll {
7        Poll from A to I;
8        choice at I {
9          @{size(data) ≤ 512}
10         Raw(data) from I to A;
11         Formatted(data) from A to U;
12         Poll;
13       } or {
14         Stop from I to A;
15         Stop from A to U; }}
16     } or {
17       NotSupported from I to A;
18       Stop from A to I;
19       Stop from A to U; }}

1  local protocol DataAquisition at A(role U, role A, role I) {
2    Request(string:info) from U;
3    Request(string:info) to I;
4    choice at I {
5      Support from I;
6      rec Poll {
7        Poll to I;
8        choice at I {
9          @{size(data) ≤ 512}
10         Raw(data) from I;
11         Formatted(data) to U;
12         Poll;
13       } or {
14         Stop from I;
15         Stop to U; }}
16     } or {
17       NotSupported from I;
18       Stop to I;
19       Stop to U; }}

Figure 2: Global Protocol (top) and Local Protocol for role A (bottom)
preserved by projection where relevant.
Step 3: Process Implementation. Fig. 4 illustrates the conversation API by presenting two alternative implementations in Python for the User process. Our Python conversation API offers a high level interface for safe conversation programming and maps basic session calculus primitives to lower-level communication actions on a concrete transport (AMQP [1] in this case). The implementation is built on top of Pika [12], a widely used AMQP client library for Python. Fig. 3 lists the basic API methods. In short, the API provides functionality for (1) session initiation and joining and (2) basic send/receive. Each message embeds in its payload a conversation header. The header contains session information either for monitor initialisation (in case of invitation messages), or session checking (in case of in-session messages).

# session initiation
create(protocol, inv_config.yml)
# accept an invitation
join(self, role, principal_name)
# send a msg
send(self, to_role, op, payload)
# receive a msg
recv(self, from_role)
# receive asynchronously
recv_async(self, from_role, callback)
# close the connection
stop()

Figure 3: Conversation API
Conversation initiation The Conversation.create method initiates a new conversation. It creates a fresh conversation id and the required AMQP objects (principal exchange and queue), and sends an invitation message for each role specified in the protocol. The invitation mechanism is needed to map the role names to concrete addressable entities on the network (principals) and to propagate this mapping to all participants. The invitation header carries a conversation id, a role, a principal name (resolvable to a network address) and a name for a Scribble local specification file; a sketch of these headers is given below. In our example, the User starts a session and sends invitations to all other participants. Once the invitations are sent and accepted, a session is established and the intended message exchange can start. An invitation for a role is accepted using the Conversation.join method. It establishes an AMQP connection and, if one does not exist, creates an invitation queue on which the invitee waits to receive an invitation.
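The following sketch is our own reconstruction of the two header shapes described above; the dictionary keys and example values are illustrative assumptions, not the library's actual field names.

# Invitation header (illustration ours): everything a monitor needs to
# initialise itself for a new session.
invitation_header = {
    'conversation_id': 'conv-42',               # fresh id from Conversation.create
    'role': 'A',                                # role the invitee is asked to play
    'principal': 'agent-service-1',             # resolvable to a network address
    'local_protocol': 'DataAquisition_A.spr',   # Scribble local specification file
}

# In-session header (illustration ours): the information the monitor matches
# against its FSM transition function.
message_header = {
    'conversation_id': 'conv-42',
    'from_role': 'A',
    'to_role': 'I',
    'op': 'Poll',                               # message label (operator)
}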
Conversation message passing The API provides standard send/receive primitives. Send is asynchronous, meaning that a basic send does not block on the corresponding receive; however, the basic
class ClientApp(BaseApp):
    def start(self):
        c = Conversation.create('DataAquisition', 'config.yml')
        c.join('U', 'alice')
        resource_request = c.receive('U')
        c.send('I', resource_request)
        req_result = c.receive('I')
        if req_result == SUPPORTED:
            c.send('I', 'Poll')
            op, data = c.receive('I')
            while op != 'Stop':
                formatted_data = format(data)
                c.send('U', formatted_data)
                c.send('I', 'Poll')          # poll again before the next receive
                op, data = c.receive('I')
            c.send('U', 'Stop')
        else:
            c.send('U, I', 'Stop')
        c.stop()
class ClientApp(BaseApp):
    def start(self):
        c = Conversation.create('DataAquisition', 'config.yml')
        c.join('U', 'alice')
        c.receive_async('U', self.on_request_received)

    def on_request_received(self, conv, op, msg):
        if op == SUPPORTED:
            conv.send('I', 'Poll')
            conv.receive_async('I', self.on_data_received)
        else:
            conv.send('I, U', 'Stop')

    def on_data_received(self, conv, op, payload):
        if op != 'Stop':
            formatted_data = format(payload)
            conv.send('U', formatted_data)
            conv.send('I', 'Poll')                          # keep polling
            conv.receive_async('I', self.on_data_received)  # await next message
        else:
            conv.send('U', 'Stop')
            conv.stop()
Figure 4: Python standard (left) and event-driven (right) implementation of the User process
receive does block until the complete message has been received. An asynchronous receive (receive_async) is also provided to support event-driven usage of the conversation API. We have demonstrated two different implementations of the User process (threaded and event-driven). Both versions require the
same monitor for checking. The primitives for sending and receiving specify the name of the sender
and receiver role respectively. The runtime resolves the role name to the actual network destination by
coordinating with the in-memory conversation routing table created as a result of the conversation invitation. All messages are sent/received as a tuple of an operation and a payload. The API does not mandate
how the operation field should be treated, allowing the runtime freedom to interpret the operation name
in various ways, e.g. as a plain message label, an RMI method name, etc. Syntactic sugar such as automatic
dispatch on method calls based on the message operation is possible. More examples of programs using
the API can be found in [14].
3 Dynamic Verification
3.1 Monitoring Implementation
To guarantee global safety our monitoring framework imposes complete mediation of communications:
no communication action should have an effect unless the message is mediated by the monitor. We use AMQP's routing functions to reroute each outgoing/incoming message to its associated monitor. Routing is
configured during session initialisation.
Figure 5 depicts the main components and internal workflow of our prototype monitor. The lower
part relates to session initiation. The invitation message carries (a reference to) the local type for the
invitee and the session id (global types can be exchanged if the monitor has the facility for projection).
The monitor generates the FSM from the local type following [7]. Our implementation differs from [7]
in the treatment of parallel sub-protocols (i.e. unordered message sequences). For efficiency, the monitor
generates nested FSMs for each session thread, avoiding the potential state explosion that comes from
constructing their product. FSM generation has therefore polynomial time and space cost in the length
of the local type.
Figure 5: Monitor components and workflow. The messages are processed depending on their type: (1) Invitation Messages and (2) Conversation Messages.
The (nested) FSM is stored in a hash table with session id as the key. Due to MPST
well-formedness conditions (message label distinction), any nested FSM is uniquely identifiable from
any unordered message (i.e. session FSMs are deterministic). Transition functions are similarly hashed,
each entry having the shape: (current state, transition) ↦ (next state, assertion, var), where transition
is a triple (label, sender, receiver), and var is the variable binder for the message payload.
The upper part of the Figure relates to in-session messages, which carry the session id (matching an
entry in the FSM hash table), sender and receiver fields, and the message label and payload. This information allows the monitor to retrieve the corresponding FSM (the message signature is matched to the
FSM’s transition function). Any associated assertions are evaluated by invoking an external logic engine;
a monitor can be configured to use various logic engines, for example, logic engines that support the validation of assertions, automata-based specifications (such as security automata), or state updates. The
current implementation uses a Python predicate evaluator, which is sufficient for the example protocol
specifications that we have tested so far.
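As an illustration of the workflow just described, here is a minimal sketch (our own, not the actual monitor code) of a transition table in the shape given above and the per-message check it supports; all names and the toy states are assumptions.

# Transition table (illustrative): (current state, (label, sender, receiver))
# maps to (next state, assertion, payload variable binder), as described above.
transitions = {
    (0, ('Request', 'U', 'A')): (1, None, None),
    (1, ('Request', 'A', 'I')): (2, None, None),
    (2, ('Raw', 'I', 'A')): (3, lambda env: env['data'] <= 512, 'data'),
}

def check(state, label, sender, receiver, payload):
    """Advance the monitor by one message; raise on a protocol violation."""
    entry = transitions.get((state, (label, sender, receiver)))
    if entry is None:
        raise RuntimeError('violation: unexpected message %s' % label)
    next_state, assertion, var = entry
    if assertion is not None and not assertion({var: payload}):
        raise RuntimeError('violation: assertion failed on %s' % label)
    return next_state

state = check(0, 'Request', 'U', 'A', 'tempSensor')  # advances to state 1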
3.2 Benchmarks
These benchmarks measure the communication overhead introduced by our prototype monitor implementation. The results show that the core FSM-related functionality of the monitor adds little overhead
in comparison to a dummy monitor that performs plain message forwarding.
Benchmark framework. We measure the time to complete a session between client and server endpoints connected to a single-broker AMQP network. Three benchmark cases are compared. The main
case (Monitor) is fully monitored, i.e. FSM generation and message checking are enabled for both the
client and server. The base case for comparison (Forwarder) has the client and server in the same configuration, but with dummy monitors that perform only message forwarding. For reference, the final
case (No Monitor) tests direct AMQP communication between the server and client, i.e. messages are
routed directly from an exchange to their destination queues (no intermediate forwarding). Naturally,
forwarding-based mediation incurs additional latencies; the actual internal overhead of the monitor is
given by the first two benchmark cases. This benchmark framework is applied to three scenarios:
1. Increasing session length (number of messages), for protocol:
µ X.S → C{OK().C → S{ACK().X}, KO().end}
Session length is the number of times the recursion is repeated.
2. Increasing protocol size (increasing number of parallel states). We repeatedly compose the base
pattern to construct bigger protocols for nested FSM generation.
S → C{OK().end} | C → S{ACK().end}
3. Increasing payload size (message size), using the protocol from (1).
Figure 6: Microbenchmarks comparing end-to-end monitor performance: execution time (seconds) of the No Monitor, Forwarder and Monitor configurations against (a) recursive iterations, (b) parallel states and (c) message size.
Benchmark environment and results. The server and client endpoint processes, both monitors and
the RabbitMQ broker (2.7.0/R13B03) are all run on separate machines with the same specification:
Intel Core2 Duo 2.80 GHz and 4 GB main memory, running 64-bit Ubuntu 11.04 (kernel 2.6.38) and
connected via gigabit Ethernet. Latency between each node is measured to be 0.24 ms on average (ping
64 bytes). The benchmark applications are executed using Python 2.7.1.
Figure 6 presents the results for the three benchmark scenarios. Each chart gives the mean time (y-axis) for the client and server to complete one session after repeating the benchmark 100 times for each parameter configuration (session length/parallel states/message size). Scenario (3), message size, is measured for session length 1. For all three scenarios, the results show that the overhead of the monitor due to FSM generation and FSM-based message checking, the baseline cost in the current framework, is acceptable (around 20%). Non-communication related computation in more realistic applications and
higher latency environments will both contribute to decreasing the relative overhead. For scenario (1)
in chart (a), note that the relative overhead decreases (from 12% to 9%) as the session length increases,
because the one-time FSM generation cost becomes less prominent. Although our implementation work
is ongoing, we believe these results confirm the feasibility of our approach. As expected, the forwarding
configuration incurs extra latencies (due to the reciprocal shape of the benchmark protocol) in comparison to the (No Monitor) case. The full source code and raw results of these benchmarks, and additional
tests using protocols with assertions, can be obtained from the project homepage [14].
4 Related Work
The work closest to ours is that by Ancona et al. [2]. It explores session type protocols as a test framework for multiagent systems (MAS). A global session type is specified as cyclic Prolog terms in Jason
(a MAS development platform) and verified through test monitors. Their global types are less expressive in comparison with the language presented in this paper (due to restricted arity on forks and the
lack of assertions). Their monitor is centralised and global safety properties are not discussed. Krüger
et al. [10] propose a run-time monitoring framework, projecting MSCs to FSM-based distributed monitors. They use aspect-oriented programming techniques to inject monitors into the implementation
of the components. Our outline monitoring verifies conversation protocols and does not require such
monitoring-specific augmentation of programs. Gan [8] follows a similar but centralised approach to
Kruger et al.
Works on monitoring BPEL languages can also be compared. Baresi et al. [3] develop a run-time
monitoring tool with assertions. However, a major difference is that BPEL approaches do not treat or
prove global safety. BPEL is expressive, but does not support distribution and is designed to work in a
centralised manner.
5 Conclusion and Future Work
We have shown that session types are amenable to dynamic verification. Our implementation automates distributed monitoring by generating FSMs from local protocol projections. Further benchmarks
are needed to compare the conversation API with existing network libraries and to investigate its performance. Future work includes also the incorporation of more elaborate handling of error cases into
monitor functionality, extending Scribble and automatic generation of services stubs. Although our implementation work is ongoing, the results confirm the feasibility of our approach. We believe this work
is an important step towards a better, safer world of easier to speak and easier to understand distributed
conversations.
Acknowledgments. I would like to dedicate this paper to the memory of Kohei Honda, who is a
constant source of inspiration to me and whose guidance was invaluable. I thank my supervisor Nobuko
Yoshida for her constant support and ideas, my colleagues Raymond Hu and Pierre-Malo Deniélou for
the discussions about the framework; the anonymous reviewers for useful comments and corrections;
and Tzu-Chun Chen for her valuable feedback and her inspirational work on the formal system behind the presented work. This work is partially supported by VMWare PhD studentship and EPSRC
EP/G015635/1.
References
[1] Advanced Message Queuing Protocol (AMQP) homepage. http://jira.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol.
[2] Davide Ancona, Sophia Drossopoulou & Viviana Mascardi (2012): Automatic Generation of Self-Monitoring
MASs from Multiparty Global Session Types in Jason. In: DALT’12, Springer. Available at http://dx.
doi.org/10.1007/978-3-642-37890-4_5.
[3] Luciano Baresi, Carlo Ghezzi & Sam Guinea (2004): Smart monitors for composed services. In: ICSOC ’04,
pp. 193–202. Available at http://doi.acm.org/10.1145/1035167.1035195.
[4] Laura Bocchi, Kohei Honda, Emilio Tuosto & Nobuko Yoshida (2010): A theory of design-by-contract for
distributed multiparty interactions. In: CONCUR, LNCS 6269, pp. 162–176. Available at http://dx.doi.
org/10.1007/978-3-642-15375-4_12.
[5] Tzu-Chun Chen (2013): Theories for Session-based Governance for Large-scale Distributed Systems. Ph.D.
thesis, Queen Mary, University of London.
[6] Tzu-Chun Chen et al. (2012): Asynchronous Distributed Monitoring for Multiparty Session Enforcement. In:
TGC’11, LNCS, Springer. Available at http://dx.doi.org/10.1007/978-3-642-30065-3_2.
[7] Pierre-Malo Deniélou & Nobuko Yoshida (2012): Multiparty Session Types Meet Communicating Automata.
In: ESOP, LNCS, Springer. Available at http://dx.doi.org/10.1007/978-3-642-28869-2_10.
[8] Yuan Gan et al. (2007): Runtime monitoring of web service conversations. In: CASCON ’07, ACM, pp.
42–57. Available at http://doi.ieeecomputersociety.org/10.1109/TSC.2009.16.
[9] Raymond Hu, Dimitrios Kouzapas, Olivier Pernet, Nobuko Yoshida & Kohei Honda (2010): Type-Safe
Eventful Sessions in Java. In: ECOOP’10, LNCS 6183, Springer-Verlag, pp. 329–353. Available at
http://dx.doi.org/10.1007/978-3-642-14107-2_16.
[10] Ingolf H. Krüger, Michael Meisinger & Massimiliano Menarini (2010): Interaction-based Runtime Verification for Systems of Systems Integration. J. Log. Comput. 20(3), pp. 725–742. Available at http://dx.doi.org/10.1093/logcom/exn079.
[11] OOI. http://www.oceanobservatories.org/.
[12] AMQP for Python (PIKA). https://github.com/pika/pika.
[13] Scribble Project homepage. http://www.scribble.org.
[14] Full version of this paper. http://www.doc.ic.ac.uk/~rn710/spy.
| 6 |
On The Multiparty Communication Complexity of Testing
Triangle-Freeness
arXiv:1705.08438v1 [] 23 May 2017
Orr Fischer∗
Shay Gershtein†
Rotem Oshman‡
May 24, 2017
In this paper we initiate the study of property testing in simultaneous and non-simultaneous multi-party
communication complexity, focusing on testing triangle-freeness in graphs. We consider the coordinator
model, where we have k players receiving private inputs, and a coordinator who receives no input; the
coordinator can communicate with all the players, but the players cannot communicate with each other. In
this model, we ask: if an input graph is divided between the players, with each player receiving some of
the edges, how many bits do the players and the coordinator need to exchange to determine if the graph is
triangle-free, or far from triangle-free?
For general communication protocols, we show that Õ(k(nd)^{1/4} + k^2) bits are sufficient to test triangle-freeness in graphs of size n with average degree d (the degree need not be known in advance). For simultaneous protocols, where there is only one communication round, we give a protocol that uses Õ(k√n) bits when d = O(√n) and Õ(k(nd)^{1/3}) when d = Ω(√n); here, again, the average degree d does not
need to be known in advance. We show that for average degree d = O(1), our simultaneous protocol is
asymptotically optimal up to logarithmic factors. For higher degrees, we are not able to give lower bounds
on testing triangle-freeness, but we give evidence that the problem is hard by showing that finding an edge
that participates in a triangle is hard, even when promised that at least a constant fraction of the edges must
be removed in order to make the graph triangle-free.
∗ Computer Science Department, Tel-Aviv University. Email: [email protected]
† Computer Science Department, Tel-Aviv University. Email: [email protected]
‡ Computer Science Department, Tel-Aviv University. Email: [email protected]
1 Introduction
The field of property testing asks the following question: for a given property P , how hard is it to test whether
an input satisfies P , or is ǫ-far from P , in the sense that an ǫ-fraction of its representation would need to
be changed to obtain an object satisfying P ? Property-testing has received extensive attention, including
graph properties such as connectivity and bipartiteness [22], properties of Boolean functions (monotonicity,
linearity, etc.), properties of distributions, and many others [35, 17, 21]. The usual model in which property testing is studied is the query model, in which the tester cannot “see” the entire input, and accesses it by asking local queries, that is, by only viewing a single entry in the object representation at a time. The tester typically does not have at its disposal the possibility of making a “non-local” query whose answer depends
on a substantial subset of the object’s representation, which is a primary source of difficulty in property
testing. For example, for graphs represented by their adjacency matrix, the tester might ask whether a given
edge is in the graph, or what is the degree of some vertex. The efficiency of a property tester is measured by
the number of queries it needs to make. One can also distinguish between oblivious testers, which decide
in advance on the set of queries, and adaptive testers, which decide on the next query after observing the
answers to the previous queries. It is known that for many graph properties, one-sided oblivious testers are
no more than quadratically more expensive than adaptive testers [24].
In this paper we study property testing from a different perspective, that of communication complexity.
We focus on property testing for graphs, and we assume that the input graph is divided between several
players, who must communicate in order to determine whether it satisfies the property or is far from satisfying it. Each player can operate on its own part of the input “for free”, without needing to make queries; we
charge only for the number of bits that the players exchange between them. This is on one hand easier than
the query model, because players are not restricted to making local queries, and on the other hand harder,
because the query model is centralized while here we are in a distributed setting. This leads us to questions
such as: does the fact that players are not restricted to local queries make the problem easier, or even trivial? Which useful “building blocks” from the world of property testing can be implemented efficiently by
multi-party protocols? Does interaction between the players help, or can we adopt the “oblivious approach”
represented by simultaneous communication protocols?
Beyond the intrinsic interest of these questions, our work is motivated by two recent lines of research.
First, [10, 19] study property testing in the CONGEST model, and show that many graph property-testing
problems can be solved efficiently in the distributed setting. As pointed out in [19], existing techniques
for proving lower bounds in the CONGEST model seem ill-suited to proving lower bounds for property
testing. It seems that such lower bounds will require some advances on the communication complexity side,
and in this paper we make initial steps in this direction. Second, recent work has shown that many exact
problems are hard in the setting of multi-party communication complexity: Woodruff et al. [38] proved that
for several natural graph properties, such as triangle-freeness, bipartiteness and connectivity, determining
whether a graph satisfies the property essentially requires each player to send its entire input. We therefore
ask whether weakening our requirements by turning to property testing can help.
In this work we focus mostly on the specific graph property of triangle-freeness, an important property
which has received a wealth of attention in the property testing literature. It is known that in dense graphs
(average degree Θ(n)) there is an oblivious tester for triangle-freeness which is asymptotically optimal
in terms of the size of the graph (i.e., adaptivity does not help) [2, 18], and [3] also gives an oblivious tester for graphs with average degree Ω(√n). The closest parallel to oblivious testers in the world of communication complexity is simultaneous communication protocols, where the players each send a single message to a referee, and the referee then outputs the answer. We devote special attention to the question of the simultaneous communication complexity of testing triangle-freeness.
1.1 Related Work
Property testing is an important notion in many areas of theoretical computer science; see the surveys [35,
17, 21] for more background.
Triangle-freeness, the problem we consider in this paper, is one of the most extensively studied properties in the world of property testing; many different graph densities and restrictions have been investigated
(e.g., [2, 1, 5, 23]). Of particular relevance to us is triangle-freeness in the general model of property testing, where the average degree of the graph is known in advance, but no other restrictions are imposed. For this model, [3] showed an upper bound of Õ(min{√(nd), n^{4/3}/d^{2/3}}) on testing triangle-freeness, and a lower bound of Ω(max{√(n/d), min{d, n/d}, min{√d, n^{2/3}} · n^{−o(1)}}), both for graphs with average degree d ranging from Ω(1) up to n^{1−o(1)}. For specific ranges of d, [34] and [25] improved these upper and lower bounds, respectively, by showing an upper bound of O(max{(nd)^{4/9}, n^{2/3}/d^{1/3}}) and a lower bound of Ω(min{(nd)^{1/3}, n/d}).
Our simultaneous protocols use ideas, and in one case an entire tester, from [3], but implementing them
in our model presents different challenges and opportunities. (Our unrestricted-round protocol does not bear
much similarity to existing testers.) As for lower bounds, we cannot use the techniques from [3] or other
property-testing lower bounds, because they rely on the fact that the tester only has query access to the
graph. For example, [3] uses the fact that a triangle-freeness tester with one-sided error must find a triangle
before it can announce that the graph is far from triangle-free ([3] also gives a reduction lifting their results
to two-sided error). In the communication complexity setting this is no longer true; there is no obvious
reason why the players need to find a triangle in order to learn that the graph is not triangle-free.
Property testing in other contexts. Recently, the study of property testing has been explored in distributed computing [7, 10, 19]. Among their other results, Censor-Hillel et al. [10] showed that triangle-freeness can be tested in O(1/ǫ^2) rounds in the CONGEST model; expanding this, [19] showed that testing H-freeness for any 4-node graph H can be done in O(1/ǫ^2) rounds, and showed that their BFS and DFS approaches fail for K5- and C5-freeness, respectively; [19] does not give a general lower bound. There
has also been work on property testing in the streaming model [26]. The related problem of computing
the exact or approximate number of triangles has also been studied in many contexts, including distributed
computing [14, 11, 30, 15], sublinear-time algorithms (see [16] and the references therein), and streaming
(e.g., [27]). Specifically, [27] gives a reduction which shows a lower bound on the space complexity of
approximating the number of triangles in the streaming model; we apply their reduction here to show the
hardness of testing triangle-freeness, by reducing from a different variant of the problem used in [27].
Communication complexity. The multi-party number-in-hand model of communication complexity has
received significant attention recently. In [38] it is shown that several graph problems, including exact
triangle-detection, are hard in this model. Many other exact and approximation problems have also been
studied, including [6, 31, 39, 9, 37, 8] and others.
Unfortunately, it seems that canonical lower bounds and techniques in communication complexity cannot be leveraged to obtain property-testing lower bounds; for discussion, see section 4.6.
1.2 Our Contributions
The contributions of this work are as follows:
Basic building-blocks. We show that many useful building-blocks from the property testing world can be
implemented efficiently in the multi-player setting, allowing us to use existing property testers in our setting
as well. For some primitives — e.g., sampling a random set of vertices — this is immediate. However, in
some cases it is less obvious, especially when edge duplication is allowed (so that several players can receive
the same edge from the input graph). We show that even with edge duplication the players can efficiently
simulate a random walk, estimate the degree of a node, and implement other building blocks.
Upper bounds on testing triangle-freeness. For unrestricted communication protocols, we show that Õ(k(nd)^{1/4} + k^2) bits are sufficient to test triangle-freeness, where n is the size of the graph, d is its average degree (which is not known in advance), and k is the number of players. When interaction is not allowed (simultaneous protocols), we give a protocol that uses Õ(k√n) bits when d = O(√n), and another protocol using Õ(k(nd)^{1/3}) bits for the case d = Ω(√n). We also combine these protocols into a single degree-oblivious protocol, which does not need to know the average degree in advance. (This is not as simple as it might sound, since we are working with simultaneous protocols, where we cannot first estimate the degree and then use the appropriate protocol for it.)
Lower bounds. Our lower bounds are mostly restricted to simultaneous protocols, although we first prove lower bounds for one-way protocols for two or three players, and then “lift” the results to simultaneous protocols for k ≥ 3 players using the symmetrization technique [33]. We show that for average degree d = O(1), Ω(k√n) bits are required to simultaneously test triangle-freeness, matching our upper bound. For higher degrees, we are not able to give a lower bound on testing
triangle-freeness, but we give evidence that the problem is hard: we show that it is hard to find an edge that
participates in a triangle, even in graphs that are ǫ-far from triangle free (for constant ǫ), and where every
edge participates in a triangle with (small) constant marginal probability.
2 Preliminaries
Unrestricted Communication in the Number-in-Hand Model The default model we consider in this work is the number-in-hand model. In this model k players receive private inputs X1, . . . , Xk and communicate with each other in order to determine a joint function of their inputs f(X1 . . . Xk). This is the most
general model, as the number of rounds of communication is unrestricted. There are three common variants
to this model, according to the mode of communication: the blackboard model, where a message by any
player is seen by everyone; the message-passing model, where every two players have a private communication channel and each message has a specific recipient; the coordinator model, which is the variant we
consider in this paper, and which we define next.
In the coordinator model the players communicate over private channels with the coordinator, but they
cannot communicate directly with each other. The protocol is divided into communication rounds. In each
such round, the coordinator sends a message of arbitrary size to one of the players, who then responds back
with a message. Eventually the coordinator outputs the answer f (X1 . . . Xk ). For convenience, we assume
that the players and the coordinator have access to shared randomness instead of private randomness. Note
that the players will make explicit use of the fact that the randomness is shared for common procedures like
sampling, as the players can agree on which elements to sample simply by agreeing in advance (as part of
the protocol) on how to interpret the public bits, and no interaction is required. (For protocols that use more
than one round, it is possible to get rid of this assumption and use private randomness instead via Newman’s
Theorem [32], which costs at most additional O(k log n) bits. For further details see [29, 32]).
The communication complexity of a protocol Π, denoted CC(Π), is the maximum over inputs of the
expected number of bits exchanged between the players and the coordinator in the protocol’s run. For a
problem P , we let CCk,δ (P ) denote the best communication complexity of any protocol that solves P with
worst-case error probability δ on any input.
The coordinator model is roughly equivalent to the widely used message-passing model. More concretely, every protocol in the message-passing model can be simulated with a coordinator, incurring an
overhead factor of at most log k by appending to each message the id of the recipient, to inform the coordinator whom to forward this message to. The other direction can also be simulated efficiently, as in the
message-passing model we can assign an arbitrary player to be the coordinator and run the protocol as it is.
Although in this paper for convenience we consider the coordinator model, our results consequently apply
for the message-passing model as well, up to a log k factor.
Simultaneous Communication Of particular interest to us in this work are simultaneous protocols, which
are, in a sense, the analog of oblivious property testers. This is the second primary model we investigate
in addition to unrestricted communication. In a simultaneous protocol, there is only one communication
round, where each player, after seeing its input, sends a single message to the coordinator (usually called
the referee in this context). The coordinator then outputs the answer. Any oblivious graph property tester
which uses only edge queries (which test whether a given edge is in the graph or not) can be implemented
by a simultaneous protocol, but the converse is not necessarily true.
Communication complexity of property testing in graphs. We are given a graph G = (V, E) on n vertices, which is divided between the k players, with each player j receiving some subset Ej ⊆ E of edges. More concretely, each player j receives the characteristic vector of Ej, where each entry corresponds to a single edge, such that if the bit is 1 then that edge exists in E, and if the bit is 0 it is unknown to the player whether it exists or not, as this entry might be 1 in the input of a different player. The logical OR of all inputs results in the characteristic vector of the graph edges, E. Note that no single player is guaranteed to have all the edges adjacent to a given vertex in its input, as is the case in models like CONGEST. To make our results as broad as possible we follow the general model of property testing in graphs (see, e.g., [3]): we
do not assume that the graph is regular or that there is an upper bound on the degree of individual nodes. As
in [38], edges may be duplicated, that is, the sets E1 , . . . , Ek are not necessarily disjoint.
The goal of a property tester for property P is to distinguish the case where G satisfies P from the case
where G is ǫ-far from satisfying P , that is, at least ǫ|E| edges would need to be added or removed from G to
obtain a graph satisfying P . An important parameter in our algorithms is the average degree, d, of the graph
(also referred to as density); for our upper bounds, we do not assume that d is known, but our lower bounds
can assume that it is known to the protocol up to a tight multiplicative factor of (1 ± o(1)). Moreover, as in [3], we focus on d = Ω(1) and d ≤ n^{1−ν(n)}, where ν(n) = o(1), since for graphs of average degree d = Θ(n) there is a known solution whose complexity is independent of n in the property-testing query model and consequently in our model as well. The case of d = o(1), although not principally different, is ignored for simplicity, as its extreme sparsity makes it of less interest than any degree which is Ω(1).
Information theory. Our lower bounds use information theory to argue that using a small number of communication bits, the players cannot convey much information about their inputs. For lack of space, we give here only the essential definitions and properties we need.
Let (X, Y) ∼ µ be random variables. (In our lower bounds, for clarity, we adopt the convention that bold-face letters indicate random variables.) To measure the information we learned about X after observing Y, we examine the difference between the prior distribution of X, denoted µ(X), and the posterior distribution of X after seeing Y = y, which we denote µ(X|Y = y). We use KL divergence to quantify this difference:
Definition 1 (KL Divergence). For distributions µ, η : X → [0, 1], the KL divergence between µ and η is D(µ ‖ η) := Σ_{x∈X} µ(x) log(µ(x)/η(x)).
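The following few lines are our own small illustration of Definition 1; the helper name kl_divergence and the coin example are not from the paper.

from math import log2

def kl_divergence(mu, eta):
    """D(mu || eta) = sum_x mu(x) * log(mu(x)/eta(x)), with 0 log 0 := 0."""
    return sum(p * log2(p / eta[x]) for x, p in mu.items() if p > 0)

# A biased coin versus a fair prior: the divergence quantifies how far the
# posterior has moved from the prior.
mu = {'heads': 0.9, 'tails': 0.1}
eta = {'heads': 0.5, 'tails': 0.5}
print(kl_divergence(mu, eta))  # ~0.531 bits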
We require the following property, which follows from the superadditivity of information [13]: if (X1, . . . , Xn, Y) ∼ µ are such that X1, . . . , Xn are independent, and Y can be represented using m bits (that is, its entropy is at most m), then E_{y∼µ(Y)}[Σ_{i=1}^{n} D(µ(Xi|Y = y) ‖ µ(Xi))] ≤ m. Here and in the sequel, µ(Xi) denotes the marginal distribution of Xi according to µ, µ(Xi|Y = y) is the marginal distribution of Xi given Y = y, and E_{y∼µ(Y)} denotes the expectation according to the distribution µ.
Graph definitions and notation. We let deg(v) denote the degree of a vertex v in the input graph, and for
a player j ∈ [k], we denote by dj (v) the degree of v in player j’s input (the subgraph (V, Ej )).
Definition 2. We say that a pair of edges {{u, v} , {v, w}} ⊆ E is a triangle-vee if {u, w} ∈ E, and in this
case we call v the source of the triangle-vee.
Definition 3. We say that an edge e ∈ E is a triangle edge if G contains a triangle T , such that e is an edge
in T .
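As a tiny illustration of Definitions 2 and 3 (our own sketch, using an adjacency-set representation that the paper does not prescribe):

def is_triangle_vee(adj, u, v, w):
    """True iff {u,v},{v,w} is a triangle-vee with source v, i.e. the
    closing edge {u,w} is also present (adj maps vertices to neighbour sets)."""
    return v in adj[u] and w in adj[v] and w in adj[u]

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}  # a single triangle
assert is_triangle_vee(adj, 1, 2, 3)      # so edge {1,2} is a triangle edge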
3 Upper Bounds
All the solutions we present have one-sided error, that is, if a triangle is returned then it exists in G with probability 1. This holds even when the input is not ǫ-far from being triangle-free. Therefore, by solving the problem of triangle detection, we also solve triangle-freeness, as we never output a triangle in a triangle-free graph, and do output one with high probability whenever the gap guarantee holds. All algorithms have at most a small constant bound δ on the error. We also prove, for some cases, improved complexity bounds under several relaxations, such as having the players communicate in the blackboard model, where each message is seen by all players, or the variant where the players are guaranteed that there is no edge duplication, so that each edge of the graph appears in exactly one input. Additionally, we assume that k = O(poly(n)) to simplify the complexity expressions.
3.1 Building Blocks
We start by showing that the essential primitives used in the property testing setting of graph problems (dense, sparse and general models combined) are efficiently translatable into our communication complexity model, where the edges of the graph are scattered across k inputs with possible multiplicity and, by default, the communication is unrestricted. This illustrates the added power packed into our communication complexity model, which can solve many problems with at most a logarithmic overhead factor by simulating the property-testing (PT) solution, while for some problems, such as the one we will explore here, there is a significantly more efficient solution.
• Querying a specific edge (check for its existence) - This is one of the main primitives in the dense model. This can be done in our model using O(k) bits, by having each player send a bit to the coordinator indicating whether the edge is in its input, and having the coordinator send the answer bit to all players; a sketch follows below.
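A minimal sketch (ours) of this edge-query primitive; the helper name edge_query and the tuple encoding of edges are assumptions.

def edge_query(inputs, e):
    """inputs: list of edge sets E_1..E_k; returns whether edge e is in the graph."""
    bits = [e in E_i for E_i in inputs]   # one bit from each player
    return any(bits)                      # coordinator ORs and broadcasts

inputs = [{(1, 2)}, {(2, 3)}]
print(edge_query(inputs, (2, 3)))  # True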
• Choosing uniformly a random edge adjacent to a given vertex v - This is the main primitive in the sparse model. It can be simulated by using the random bits to fix a random order P over all the n−1 potential edges adjacent to v; each player sends the first edge in its input according to P to the coordinator, who then sends everyone the first edge according to P among all the edges he received. This costs O(k log n). A random walk, which is a pivotal procedure in sparse property testing, can be simulated by taking a random neighbor at each step using this primitive. Note that the permutation is necessary so that edges with higher multiplicity are not favored, as would happen in a naive implementation; see the sketch below.
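The following Python sketch (ours; the players and the coordinator are simulated in one process) illustrates the shared-permutation trick: ranking the potential edges by one public random order makes the chosen edge uniform over the distinct edges present, independent of multiplicity.

```python
import random

def sample_uniform_adjacent_edge(v, player_inputs, n, seed):
    """Simulate the primitive: pick a uniform random edge adjacent to v.

    player_inputs: one edge set per player, edges as frozensets.
    The shared seed models public randomness: all players rank the
    n-1 potential edges {v, u} by the same random order P.
    """
    rng = random.Random(seed)
    others = [u for u in range(n) if u != v]
    rng.shuffle(others)
    rank = {u: r for r, u in enumerate(others)}  # rank of edge {v, u}

    def edge_rank(e):
        (u,) = e - {v}          # the endpoint other than v
        return rank[u]

    # Each player "sends" its first adjacent edge according to P.
    proposals = []
    for edges in player_inputs:
        adjacent = [e for e in edges if v in e]
        if adjacent:
            proposals.append(min(adjacent, key=edge_rank))
    # The coordinator keeps the first proposal according to P.
    return min(proposals, key=edge_rank) if proposals else None
```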
• Querying vertex degree - this is an auxiliary query that is sometimes included in the general PT model. Without duplication this can be done trivially in O(k log d(v)), by having all players send the number of edges adjacent to v in their input, and having the coordinator sum them up to get the result. With duplication, an exact answer costs Ω(kd(v)), as it is at least as hard as solving disjointness in order to ensure no over-counting. However, an α-approximation, for any α > 1, can be performed efficiently, as we promptly prove. We can also reduce the complexity in the no-duplication case by using an approximation, as we also show, and in many cases, such as triangle detection, a constant approximation is good enough.

Theorem 3.1. For any given vertex v, the players can compute an α-approximation of d(v), for a constant α > 1, with probability at least (1 − τ) and communication complexity O(k log log d(v) + k log k log log k log(1/τ)).
Proof. First, each player P_i computes locally d_i(v), the number of edges adjacent to v in its input E_i, and sends to the coordinator I_i, the index of the MSB (the leftmost '1' bit in the binary representation of d_i(v)). The coordinator then proceeds to compute the sum d′ = Σ_{i∈[k]} 2^{I_i+1}. The true sum Σ_{i∈[k]} d_i(v) amounts to at most Σ_{i∈[k]} 2^{I_i+1} and at least (1/2)·Σ_{i∈[k]} 2^{I_i+1}, and with duplication it is itself an over-count of d(v) by a factor of at most k, so d′ is a 2k-approximation of d(v) where we can only over-count. Hence d′/2k ≤ d(v) ≤ d′. The coordinator announces d′ to each player and they proceed to the next step. The cost so far has been O(k log log d(v)).
In the second phase, the players start an O(log k)-round procedure, where in each round they decrease their guess d″ of d(v) by a multiplicative factor of √α, the starting guess being d″ = d′. In each round they independently repeat an experiment of sampling possible edges (adjacent to v) and checking whether the sample contains an edge in E. They set a threshold for each round, and if the number of samples that contained an edge in E exceeds the threshold, they stop and declare the value of d″ for that round as the approximated value. If the last guess is reached, the players output it without running the experiment. Note that the case of d″ = 1, therefore, is never checked, and we can assume that d″ ≥ 2.

In each round r, we denote by d″(r) the value of the guess d″ for that round, and by m(r) the number of experiments the players run. A single experiment consists of choosing into a set S(r) every neighbor of v with probability 1/d″(r), using public randomness; then each player sends the coordinator a bit indicating whether S(r) ∩ E_i = ∅. In each round the players assume their guess is correct, and therefore compute F(r) = 1 − (1 − 1/d″(r))^{d″(r)} as the probability of success in a single experiment, and thus the expected fraction of successes. The actual probability (as well as the actual expected fraction of successes) is E(r) = 1 − (1 − 1/d″(r))^{d(v)}.
We wish to prove that, with high probability, if d″(r) > αd(v), then the number of successes does not exceed the threshold, which is F(r)/c, where c is a small constant whose value we will determine later. In that case we have E(r) < F(r) and

  F(r)/E(r) ≥ [1 − (1 − 1/d″(r))^{d″(r)}] / [1 − (1 − 1/d″(r))^{d″(r)/α}],

which is lower bounded by 1 + β₁(α), where β₁ is a constant dependent only on α. The players may assume the lowest possible value of d″, in order for the expression to depend only on α: for d″ = 2 we get a small constant bigger than 1, and as d″ tends to infinity the bound increases to (1 − 1/e)/(1 − (1/e)^{1/α}). Therefore, by choosing c to be small enough (less than the square root of the difference) and running a number of experiments dependent on β₁ and τ, a Chernoff bound yields a 1 − τ bound on the probability of exceeding the threshold. We wish to reduce the error by a Θ(log k) factor, to τ/(2 log k), so that we may use the union bound to ensure this deviation does not happen in any round where d″(r) > αd(v); by a Chernoff argument, increasing m(r) by a factor of O(log log k) suffices.
On the other hand, we show that at the first guess where d″(r) < d(v)/√α, we exceed the threshold with probability at least 1 − τ/2. Combined with what we proved in the previous case, this proves that with probability at least 1 − τ we stop at a guess which is within the bounds of an α-approximation of d(v), as required.

Now d″(r) < d(v)/√α implies E(r) > F(r) and

  F(r)/E(r) ≤ [1 − (1 − 1/d″(r))^{d″(r)}] / [1 − (1 − 1/d″(r))^{√α·d″(r)}],

which is upper bounded by a constant β₂(α), dependent only on α. Therefore, m(r) = Θ(log log k) is more than enough for a constant bound on a constant deviation small enough not to reduce the number of successes below the threshold.
The total complexity of each round, therefore, is O(k log log k), and summing across all rounds we get
O(k log k log log k). The overall complexity of the algorithm is O(k log log d(v) + k log k log log k).
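A minimal Python sketch of the first phase (our illustration; the function names are ours): each player contributes only the MSB index of its local count, which costs O(log log d(v)) bits per player.

```python
def msb(x):
    """Index of the most significant set bit (x >= 1)."""
    return x.bit_length() - 1

def coarse_degree_estimate(local_degrees):
    """Coordinator-side estimate d' = sum_i 2^(I_i + 1) from the MSB indices.

    With edge duplication across k players, d'/(2k) <= d(v) <= d',
    since sum_i d_i(v) < d' <= 2 * sum_i d_i(v) and each edge is
    counted between 1 and k times.
    """
    return sum(2 ** (msb(d) + 1) for d in local_degrees if d > 0)

# Example: three players see 5, 12, and 3 edges at v.
print(coarse_degree_estimate([5, 12, 3]))  # 8 + 16 + 4 = 28
```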
Lemma 3.2. In the no-duplication variant, for any given vertex v, the players can compute an α-approximation of d(v), for α = O(1), with complexity O(k log log(d(v)/k)).
Proof. Each player P_i computes locally d_i(v), the number of edges adjacent to v in its input E_i, and sends to the coordinator the log(1/α) most significant bits along with the index of the cutoff, which takes O(log log d_i(v)) bits to represent. The coordinator assumes all the missing bits are zeros, which makes this an α-approximation of d_i(v) for each player, where we can only under-count; thus the sum of these approximations is also an α-approximation. Note that since there is no duplication, the worst case, by convexity, is when each player has a 1/k-fraction of the edges, which implies the bound stated in the lemma.
Note that this approximation procedure can be applied to any subset of vertex pairs, including estimating the total number of edges in the graph, and not only to the specific set of all possible edges
adjacent to a given vertex. More generally, this solves the problem of estimating the number of distinct
elements in a set.
• Choosing uniformly a random edge - this usually cannot be performed efficiently in the PT model (unless the standard model is augmented), and is commonly replaced by choosing a random vertex and then a random neighbor from its adjacency list. In our model, however, we can once again use randomness to fix an order over the edges; each player sends to the coordinator its highest-ranked edge, which is O(log n), and across all players this sums up to O(k log n). The coordinator chooses the highest-ranked edge, and posts it to all players.
• Selecting all edges of a subgraph induced by V′ ⊂ V - this also cannot be performed efficiently in the PT model, as it requires querying all possible vertex pairs, or going over all relevant adjacency lists. In our model, on the other hand, it is possible by having each player post all edges of the relevant subgraph in its input (in the blackboard model the players post in turns so as not to repeat the same edge). Let m denote the number of edges in the relevant subgraph; then the complexity is O(km log n), and it is the same with simultaneous communication (except that only the referee will know the answer). The complexity is reduced to O(m log n) if there is no edge duplication, or if the players communicate using a blackboard. If m is significantly smaller than |V′|², then our procedure is more efficient. This is particularly relevant when implementing a BFS, which can be done in O(n log n) by having all players post all the neighbors of the currently examined vertex.
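A simple Python rendering of this primitive (ours; edges are frozensets and the players are simulated locally):

```python
def collect_induced_edges(v_prime, player_inputs):
    """Edges of the subgraph induced by v_prime, gathered at the coordinator.

    player_inputs: one edge set per player, edges as frozensets. With
    duplication each of the m induced edges may arrive up to k times,
    matching the O(k * m * log n) cost discussed above.
    """
    v_prime = set(v_prime)
    collected = set()
    for edges in player_inputs:
        for e in edges:
            if e <= v_prime:      # both endpoints lie in V'
                collected.add(e)  # the coordinator deduplicates
    return collected

# Example: two players holding overlapping edge sets.
p1 = {frozenset({1, 2}), frozenset({2, 5})}
p2 = {frozenset({1, 2}), frozenset({1, 3})}
print(collect_induced_edges({1, 2, 3}, [p1, p2]))  # edges {1,2} and {1,3}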
3.2 Input Analysis

Prior to discussing our proposed algorithms, we analyze the properties of the input - a graph ǫ-far from being triangle-free. Our pivotal tool for this analysis is bucketing. We partition V into buckets, such that for 1 ≤ i ≤ ⌊log₃ n + 1⌋ we have B_i = {v ∈ V | 3^{i−1} ≤ deg(v) < 3^i}, whereas B₀ is the bucket of singletons. Note that there are fewer than log n buckets. Let d⁻(B_i) = 3^{i−1} and d⁺(B_i) = 3^i denote respectively the minimal and maximal bounds on the degrees of vertices in B_i. We use d(B_i) to mean any degree in that range, when the factor of 3 is negligible, and refer to it as the degree of the bucket. We say an edge is adjacent to a bucket if it is adjacent to at least one of its vertices. Additionally, we call a set of triangle-vees disjoint if any two of them are either edge-disjoint or originate from different vertices. Note that for simplicity we ignore some rounding issues, and avoid using floor or ceiling values.
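For concreteness, a small Python helper (our addition) computing the bucket index under this partition:

```python
def bucket_index(degree):
    """B_0 holds isolated vertices; for i >= 1,
    B_i = {v : 3^(i-1) <= deg(v) < 3^i}."""
    if degree == 0:
        return 0
    i, bound = 1, 3
    while degree >= bound:   # integer arithmetic avoids float rounding
        i += 1
        bound *= 3
    return i

# deg 1,2 -> B_1; deg 3..8 -> B_2; deg 9..26 -> B_3.
print([bucket_index(d) for d in [0, 1, 2, 3, 8, 9, 26, 27]])
```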
We are interested in buckets that contain many vertices that participate in a large number of triangles.
Towards that end, we introduce the following definition, and analyze its properties.
Definition 4 (full bucket). We call B_i a full bucket if the edges adjacent to it contain a set of ǫnd/(2 log n) disjoint triangle-vees. Let B_min denote the bucket with the lowest degree among all the full buckets.

Observation 3.3. By the pigeonhole principle there is at least one full bucket, as there are at least ǫnd disjoint triangle-vees.
Lemma 3.4 (size of a full bucket). If B_i is a full bucket then:

  ǫnd/(log n·d⁺(B_i)) ≤ |B_i| ≤ min{n, 2nd/d⁻(B_i)},

where the upper bound holds regardless of B_i being full.

Proof. The number of disjoint triangle-vees in a full bucket is at least ǫnd/(2 log n). Therefore, the lower bound pertains to the extreme case in which all vertices have the maximal degree d⁺(B_i) and each consists entirely of d⁺(B_i)/2 disjoint triangle-vees, thus reaching the sum ǫnd/(2 log n) with as few vertices as possible. The upper bound follows from the opposite extreme, when each vertex contributes as little as possible, which is d⁻(B_i); there are at most 2nd/d⁻(B_i) such vertices, as more would amount to nd edges, the total number of edges in the graph (the factor of 2 follows from counting each edge twice).
Definition 5 (full vertex). We call a vertex v a full vertex if at least an ǫ/(12 log n)-fraction of the edges adjacent to it are a set of disjoint triangle-vees. Additionally, let

  F(B_i) = {v ∈ B_i | v is full}

be the set of full vertices in B_i, and let

  F(V) = {v ∈ V | v is full}

be the set of all full vertices in V.
Full vertices play a vital role in finding triangles, as they, by definition, participate in many disjoint triangles. We therefore prove several useful lemmas about their incidence, as we are interested in identifying such vertices, preferably using sampling.
Lemma 3.5. At least an ǫ/(12 log n)-fraction of the vertices in a full bucket B_i are full.
Proof. We prove that otherwise there are less than ǫnd/(2 log n) triangle-vees adjacent to it, which contradicts it being full. This holds even if we disregard any double-counting, and assume the bucket has the maximal size of 2nd/d⁻(B_i) and all its vertex degrees are d⁺(B_i) (both assumptions cannot hold simultaneously, but this only strengthens our proof).

The total contribution to the count of triangle-vees coming from non-full vertices is less than:

  (2nd/d⁻(B_i)) · (ǫ/(12 log n)) · d⁺(B_i) · (1/2) = ǫnd/(4 log n).

Each full vertex can contribute at most d⁺(B_i)/2 vees to the count, and if we assume the fraction of full vertices is less than ǫ/(12 log n), it amounts to less than:

  (ǫ/(12 log n)) · (2nd/d⁻(B_i)) · (d⁺(B_i)/2) = ǫnd/(4 log n).

Overall, all vertices combined contribute less than the required ǫnd/(2 log n) to the disjoint triangle-vee count.
Lemmas 3.4 and 3.5 imply the following corollary:

Corollary 3.6. The number of full vertices in a full bucket B_i is at least:

  |F(B_i)| ≥ ǫ²·nd / (12·log²n·d⁺(B_i)).
Next, we single out a set of buckets in proximity to a given bucket, which will play a special role in our algorithm.

Definition 6 (r-neighborhood of a bucket). Let r ∈ N such that r ≤ log₃ n. We call

  N_r(B_i) = {B_j | j ≥ i − log₃ r}

the r-neighborhood of bucket B_i, that is, the set of all buckets of higher degrees, the bucket itself, and the log₃ r buckets right below it in the degree ranking. Additionally, we call

  N(B_i) = B_{i−1} ∪ B_i ∪ B_{i+1}

the neighborhood of bucket B_i.
Lemma 3.7. Let B_i be a full bucket. We prove the following lower bound on the ratio of the number of full vertices in it to the combined size of the buckets in its neighborhood:

  |F(B_i)| / |N(B_i)| ≥ ǫ² / (312·log²n)    (1)

Proof. For any j we have the upper bound 2nd/d⁻(B_j) on the size of B_j, as we have proven in Lemma 3.4. Therefore, we have the following upper bound on the sum of bucket sizes:

  |N(B_i)| = |B_{i−1}| + |B_i| + |B_{i+1}| ≤ 3·(2nd/d⁻(B_i)) + 2nd/d⁻(B_i) + (1/3)·(2nd/d⁻(B_i)) = 26·nd/(3·d⁻(B_i)) = 26·nd/3^i.

Additionally, we have ǫ²·nd/(12·log²n·3^i) as a lower bound on |F(B_i)| (Corollary 3.6). Hence, we get the following lower bound on the ratio:

  |F(B_i)| / |N(B_i)| ≥ [ǫ²·nd/(12·log²n·3^i)] / [26·nd/3^i] = ǫ² / (312·log²n).    (2)
Lemma 3.8. Let B_i be a full bucket. We prove the following lower bound on the ratio of the number of full vertices in it to the combined size of the buckets in its r-neighborhood:

  |F(B_i)| / Σ_{B_j∈N_r(B_i)} |B_j| ≥ ǫ² / (108·log²n·r)    (3)

Proof. For any j we have the upper bound 2nd/d⁻(B_j) on the size of B_j, as we have proven in Lemma 3.4. Therefore, we have the following upper bound on the sum of bucket sizes:

  Σ_{B_j∈N_r(B_i)} |B_j| = Σ_{j≥i−log₃r} |B_j| ≤ Σ_{j=i−log₃r}^{⌊log₃n+1⌋} 2nd/3^{j−1} ≤ (6nd·r/3^i)·Σ_{m≥0} 3^{−m} ≤ (6nd·r/3^i)·(3/2) = 9·nd·r/3^i.

Additionally, we have ǫ²·nd/(12·log²n·3^i) as a lower bound on |F(B_i)| (Corollary 3.6). Hence, we get the following lower bound on the ratio:

  |F(B_i)| / Σ_{B_j∈N_r(B_i)} |B_j| ≥ [ǫ²·nd/(12·log²n·3^i)] / [9·nd·r/3^i] = ǫ² / (108·log²n·r).    (4)
Now we show that we can efficiently sample edges adjacent to a full vertex in order to detect a triangle-vee.
Lemma 3.9 (Extended Birthday Paradox). Let v be a vertex of degree d(v) ≥ 2, such that at least αd(v) of the edges adjacent to it, for α ≥ 2/d(v), are a set of disjoint triangle-vees. It is enough to sample each edge independently with probability p = c·(1/√(α·d(v))), where c = 4·√(ln(1/δ′)), in order for the sampled set to contain a triangle-vee with probability at least 1 − δ′.

Proof. The probability of any specific triangle-vee being sampled is p². By the linearity of expectation, the expected number of triangle-vees sampled is p²·(αd(v)/2) = c²/2. By a Chernoff bound, the probability that less than one triangle-vee has been sampled is less than e^{−(c²/4)·(1−2/c²)²} ≤ e^{−c²/16} = δ′.
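The lemma can be checked empirically; the following Python simulation (ours, purely illustrative) samples the edges of a vertex whose adjacent edges form disjoint triangle-vees and estimates how often a complete vee survives.

```python
import math
import random

def vee_survives(num_vees, p, rng):
    """One trial: the vertex has num_vees disjoint vees (2 edges each).
    Each edge is kept independently with probability p; a vee is found
    if both of its edges are kept."""
    return any(rng.random() < p and rng.random() < p for _ in range(num_vees))

def estimate_success(alpha, degree, delta_prime, trials=10_000, seed=0):
    rng = random.Random(seed)
    c = 4 * math.sqrt(math.log(1 / delta_prime))
    p = min(1.0, c / math.sqrt(alpha * degree))
    num_vees = int(alpha * degree / 2)
    hits = sum(vee_survives(num_vees, p, rng) for _ in range(trials))
    return hits / trials

# With alpha = 0.1, d(v) = 10^4, delta' = 0.1, the success rate should
# comfortably exceed 1 - delta' = 0.9.
print(estimate_success(alpha=0.1, degree=10_000, delta_prime=0.1))
```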
Corollary 3.10. Let v be a full vertex of degree d(v). By sampling independently every edge adjacent to it with probability p = 4·√(ln(6/δ))·√(12 log n/(ǫ·d(v))), we find a triangle-vee with probability at least 1 − δ/6.

Proof. We get this trivially by plugging α = ǫ/(12 log n) and δ′ = δ/6 into Lemma 3.9.
Finally, we prove that there are many triangle-vees adjacent to vertices of degree O(√(nd)), so that we can focus only on such vertices and adjust our analysis accordingly.

Definition 7. Let V_h denote the subset of V that contains all the vertices with degree at least d_h = √(nd/ǫ). Let E_h ⊂ E denote all edges with both endpoints in V_h. Finally, let V_l = V\V_h, and let G_l denote the resulting graph when E_h is removed from G.
Lemma 3.11. G_l is ǫ/2-far from being triangle-free, and there are at least ǫnd/2 disjoint triangle-vees adjacent to vertices in V_l.

Proof. Because |V_h| ≤ nd/d_h = √(ǫnd), it follows that |E_h| < ǫnd/2; hence, if all edges in E_h are removed, at least ǫnd/2 additional edges need to be removed from G for it to be triangle-free, as at least ǫnd are required in total by definition. This also implies that ǫnd/2 of the triangle-vees are adjacent to vertices in V_l.
Definition 8. Let d_l = ǫd/(2 log n).
Lemma 3.12. We have the following bounds on the degree of vertices in B_min:

  d_l ≤ d⁻(B_min) ≤ d_h.

Proof. The lower bound follows from a simple counting argument: even if all n vertices are in B_min, if d⁻(B_min) < d_l, then the total number of triangle-vees adjacent to B_min is less than (ǫd/(2 log n))·n, which contradicts it being full. The upper bound is implied by Lemma 3.11 and the pigeonhole principle.
Corollary 3.13. We have:

  1. |B_min| ≥ (ǫ^{3/2}/(3·log n))·√(nd),

  2. |F(B_min)| ≥ (ǫ^{5/2}/(36·log²n))·√(nd).

Proof. Plugging the upper bound from Lemma 3.12 into Lemma 3.4 and Corollary 3.6 yields the first and second clauses of this corollary, respectively.
3.3 Unrestricted Communication
The first protocol we present requires interaction between the players, and exploits the following advantage
we have over the query model: suppose that the players have managed to find a set S ⊆ E of edges that
contains a “triangle-vee” — a pair of edges {u, v} , {v, w} ∈ S such that {u, w} ∈ E (but {u, w} is not
necessarily in S). Then even if S is very large, the players can easily conclude that the graph contains a
triangle: each player examines its own input and checks if it has an edge that closes a triangle together with
some vee in S, and in the next round informs the other players. Thus, in our model, finding a triangle boils
down to finding a triangle-vee. (In contrast, in the query model we would need to query {u, w} for every
2-path {u, v} , {v, w} ∈ S, and this could be expensive if S is large.)
Our goal is to find a full vertex. Once that is obtained, we proved in Lemma 3.9 that we can efficiently sample a relatively small subset of its edges to find a triangle-vee, thus successfully ending the algorithm. More concretely, if v is a full vertex, then sampling each of its edges with probability p_{d(v)} = Θ(√(log n/d(v))) will reveal a triangle-vee with constant probability.

Note that deg(v) may be significantly higher than the average degree d in the graph, so we cannot necessarily afford to sample each of v's edges with probability p_{d(v)}; we need to find a low-degree vertex which is full. Towards that end, we proved in Lemma 3.11 that we can focus only on vertices of degree at most d_h = O(√(nd)).
With that in mind, we proceed to present our strategy for finding a full vertex. We first describe the core of the algorithm in a relatively detail-free manner, to emphasize the intuitive narrative leading us through the procedure. This will be followed by a rigorous analysis of the full algorithm.

How can the players find a full vertex? A uniformly random vertex is not always likely to be full — there might be a small dense subgraph of relatively high-degree nodes which contains all the triangles. In order to target such dense subgraphs, we use bucketing: we partition the vertices into buckets, with each bucket B_i containing the vertices with degrees in the range [3^i, 3^{i+1}). We want to find a full bucket and sample its vertices, as we proved that many of them are full. Of course, we cannot know in advance which bucket is full; we must try all the buckets. Hence, we iterate over the buckets in increasing order of their associated vertex degree, up until d_h, and for each bucket we assume it is full and then sample enough of its vertices, relying on the lower bound we proved for the number of full vertices in a full bucket, for the sample to include a full vertex. Then, we sample the edges of each vertex, which for the full vertex will result in discovering a triangle-vee with high probability. Although our assumption may be wrong in many cases, we have proven that there is at least one full bucket B_i in that range of degrees; hence the assumption will be correct at least once, which is enough for our algorithm to succeed with high probability.
It remains to describe, given that B_i is a full bucket, how we can sample a random vertex from it, that is, a random vertex with degree in the range [3^i, 3^{i+1}). We cannot do that precisely, but we can come close. Because the edges are divided between the players, no single player initially knows the degree of any given vertex. However, by the pigeonhole principle, for each vertex v there is some player that has at least deg(v)/k of v's edges, and of course no player has more than deg(v) edges for v.

Let B̃_i^j := {v ∈ V | 3^i/k ≤ d_j(v) ≤ 3^{i+1}} be the set of vertices that player j can "reasonably suspect" belong to bucket i, where d_j(v) denotes the degree of vertex v in the input of player j, and let B̃_i := ∪_j B̃_i^j. By the argument above, B_i ⊆ B̃_i. Also, B̃_i ⊆ N_k(B_i), since the total degree of any vertex selected cannot be smaller than 3^i/k. Therefore, sampling uniformly from B̃_i is a good proxy for sampling from B_i, although we may also hit adjacent buckets. Nevertheless, we have proven that a full bucket must be large, and hence constitutes at least roughly a 1/k-fraction of B̃_i and N_k(B_i). Hence a uniformly random sample will yield a vertex from B_i with probability at least roughly 1/k, and Θ̃(k) samples yield a vertex from B_i with high probability.
Lemma 3.14. Let B_i be a full bucket. It suffices to sample uniformly with replacement m = ln(6/δ)·(108·log²n·r/ǫ²) vertices from N_r(B_i) in order for the sampled set to contain a vertex from F(B_i) with probability at least 1 − δ/6.

Proof. The probability of sampling a vertex from F(B_i) is p = ǫ²/(108·log²n·r) according to Lemma 3.8. Therefore, the probability of not having a vertex from F(B_i) after m samples is bounded by (1 − p)^m ≤ e^{−mp} = δ/6.
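As a sanity check on the parameters, a one-line Python computation of m (ours; we read log as base 2 here, which is an assumption, as the text leaves the base implicit):

```python
import math

def samples_needed(delta, eps, n, r):
    """m = ln(6/delta) * 108 * log^2(n) * r / eps^2, per Lemma 3.14."""
    return math.ceil(math.log(6 / delta) * 108 * math.log2(n) ** 2 * r / eps ** 2)

print(samples_needed(delta=0.1, eps=0.5, n=2 ** 20, r=16))  # about 1.1e7
```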
To implement the sampling procedure we need two components: first, we need to be able to sample uniformly from B̃_i. The difficulty here is that each vertex v ∈ B̃_i can be known to a different number of players — possibly only one player j has v ∈ B̃_i^j, possibly all players do. If we try a naive approach, such as having each player j post a random sample from B̃_i^j, then our sample will be biased in favor of vertices that belong to B̃_i^j for many players j. Our solution is to impose a random order on the nodes in B̃_i by publicly sampling a permutation π on V (this is done by interpreting the random bits as a permutation on V), and we then choose the smallest node in B̃_i with respect to π. This yields a uniformly random sample, unbiased by the number of players that know of a given node. We call this procedure SampleUniformFromB̃_i.
Algorithm 1 SampleUniformFromB̃_i
1: π ← random permutation on V
2: Each player j sends the coordinator the first vertex in B̃_i^j with respect to π
3: The coordinator outputs the first vertex with respect to π among all the vertices it received
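A Python rendering of Algorithm 1 (our simulation; the sets B̃_i^j are passed in explicitly, and the shared seed stands in for the public randomness):

```python
import random

def sample_uniform_from_candidates(candidate_sets, n, seed):
    """candidate_sets[j] is player j's set of suspect vertices (B~_i^j).

    A public random permutation pi on V gives every vertex in the union
    a single global rank, so the minimum is uniform over the union,
    no matter how many players know of each vertex.
    """
    rng = random.Random(seed)               # public randomness
    pi = list(range(n))
    rng.shuffle(pi)
    rank = {v: r for r, v in enumerate(pi)}

    # Each player sends its first vertex with respect to pi (if any).
    proposals = [min(s, key=rank.get) for s in candidate_sets if s]
    if not proposals:
        return None
    return min(proposals, key=rank.get)     # coordinator's output
```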
Our sample is too large to treat every sampled vertex as if it were a full vertex from B_i. Sampling edges for each vertex is too costly and wasteful, since it is possible that only a 1/k-fraction of the sampled vertices are even in B_i. The second component, therefore, verifies that a sampled node indeed belongs to B_i. We cannot do that exactly, but we can come sufficiently close. We compute a √3-approximation of the degree of the sampled node, as explained in Theorem 3.1, and discard vertices whose approximate degree does not match N(B_i). This substantially reduces the size of the sampled set without discarding any vertex from B_i. We call this procedure ApproxDegree(v).
The protocol for player j is sketched in Algorithm 2. Here N = Θ̃(k) is the number of samples from
B̃i required to produce a sample from Bi with good probability. Following the procedure described in
Algorithm 2, the coordinator sends all the edges he received to all the players, and the players then check
their own inputs for an edge that closes a triangle with some triangle-vee sent by the coordinator. With high
probability, a triangle-vee is discovered, and the protocol completes in the next round.
We move on to a more rigorous analysis of the sampling parameters and the complexity. First we compute the number of vertices we need to sample uniformly from N(B_i) to find a full vertex, so that we can bound the number of vertices we examine while retaining a high probability of encountering a full vertex.

Lemma 3.15. Let B_i be a full bucket. It suffices to sample uniformly with replacement m = ln(6/δ)·(312·log²n/ǫ²) vertices from N(B_i) in order for the sampled set to contain a vertex in F(B_i) with probability at least 1 − δ/6.

Proof. The probability of sampling a vertex from F(B_i) is p = ǫ²/(312·log²n) according to Lemma 3.7. Therefore, the probability of not having a vertex from F(B_i) after m samples is bounded by (1 − p)^m ≤ e^{−mp} = δ/6.
Algorithm 2 Code for player j
For each i = 0, . . . , log n:
  ℓ ← 0
  Repeat until ℓ ≥ N:
    v ← SampleUniformFromB̃_i
    d̄(v) ← ApproxDegree(v)
    If d⁻(B_i)/√3 ≤ d̄(v) ≤ √3·d⁺(B_i):
      ℓ ← ℓ + 1
      Jointly generate a public random set S ⊆ V, where each u ∈ S with i.i.d. probability p_{d̄(v)}
      Send E_j ∩ ({v} × S) to the coordinator
We now present the procedure GetFullCandidates(B_i). Let q = ln(6/δ)·(108·log²n·k/ǫ²). We use q as a bound on the total number of samples, and we also bound the number of samples that pass the degree approximation criterion. Both bounds are needed to ensure worst-case complexity.
Algorithm 3 GetFullCandidates(B_i)
1: count ← 0
2: C ← ∅
3: Do until count = q or |C| = ln(6/δ)·(312·log²n/ǫ²):
4:   count ← count + 1
5:   v ← SampleUniformFromB̃_i
6:   compute d′(v), a √3-approximation of d(v), such that the error probability is at most δ/(3q)
7:   if d⁻(B_i)/√3 ≤ d′(v) ≤ √3·d⁺(B_i) then add v to C
8: output C
Lemma 3.16. The complexity of GetFullCandidates(B_i) is O(k²·log⁴n·log log n), and in the no-duplication variant it is O(k²·log³n).

Proof. Each vertex the players send to the coordinator costs O(log n) bits; therefore the overall complexity of choosing a uniform vertex from B̃_i is O(k log n). According to Theorem 3.1, the complexity of a constant approximation of d(v) for a vertex v, when the error bound is Θ(1/q), is O(k·log k·log log k·(log log n + log k)) = O(k·log²n·log log n), and in the no-duplication variant it is O(k·log log(n/k)). The number of iterations is at most q = O(k log²n); hence the total complexity is O(k²·log⁴n·log log n), and in the no-duplication variant it is O(k²·log³n).
Lemma 3.17. If B_i is full, then C contains a vertex from F(B_i), along with a correct √3-approximation of its degree, with probability at least 1 − (2/3)δ.

Proof. First, note that a union bound implies that all O(q) vertex degree approximations are correct with probability at least 1 − δ/3. Due to the symmetric sampling process of Algorithm 1, the vertices are chosen uniformly from B̃_i. Lemma 3.14, when substituting r = k, implies that q uniform samples are enough to sample a vertex from F(B_i) with probability at least 1 − δ/6. And since B̃_i ⊆ N_k(B_i), sampling from B̃_i can only improve our probability of finding a full vertex. Additionally, Lemma 3.15 assures us that ln(6/δ)·(312·log²n/ǫ²) uniform samples from N(B_i) are enough to encounter a full vertex with probability at least 1 − δ/6. Note that if all degree approximations are in the guaranteed range, then all vertices sampled from B_i are added to C, and all other vertices that are added are in N(B_i). Since our sampling process is symmetric and therefore uniform, the vertices sampled into C have at least the same probability of containing a full vertex as in the case of sampling uniformly from all of N(B_i), and thus ln(6/δ)·(312·log²n/ǫ²) samples suffice. By a union bound argument, the probability of all approximations being correct and both sampling processes sufficing to find a full vertex is at least (1 − δ/3) − 2·(δ/6) = 1 − (2/3)δ. Finally, if both sampling processes encounter a full vertex, we find it no matter which stopping condition made us halt; therefore this is also the probability that the output contains a full vertex.
By this point we know how to efficiently obtain, given that B_i is a full bucket, a small sample of vertices that is likely to contain a full vertex from B_i. All that is left is to iterate over the sampled set, sampling edges adjacent to each vertex, such that for the full vertex the sample will contain a triangle-vee.
Algorithm 4 SampleEdges(v)
1: S ← sample every possible edge adjacent to v with probability p = 4·√(ln(6/δ))·√(12 log n/(ǫ·d′(v)))
2: each player j sends the coordinator S ∩ E_j if this set is of size at most (1 + √((18/(d′(v)·p))·3 ln(6/δ)))·√3·d′(v)·p
3: the coordinator outputs S ∩ E
Algorithm 5 FindTriangleVee(B_i)
1: C ← GetFullCandidates(B_i)
2: for each v ∈ C let S ← SampleEdges(v), and then the coordinator posts all the edges to all the players.
Lemma 3.18. The communication complexity of FindTriangleVee(B_i) is O(k·log^{3/2}n·√(d(B_i)) + k²·log⁴n·log log n).

Proof. The cost of GetFullCandidates is O(k²·log⁴n·log log n). Each edge requires O(log n) bits to identify, and since there is a limit on the size of the set the players can send, the overall complexity is O(k·log^{3/2}n·√(d(B_i)) + k²·log⁴n·log log n).
Lemma 3.19. If B_i is a full bucket, then the players find a triangle with probability at least 1 − δ using procedure FindTriangleVee(B_i).

Proof. According to Lemma 3.17, with probability at least 1 − (2/3)δ, C contains a full vertex and the approximation procedure for its degree gave a correct output. According to Corollary 3.10, if v is a full vertex, then sampling each of its edges with probability p suffices to sample a triangle-vee with probability at least 1 − δ/6. Moreover, the expected size of the sampled set is d(v)·p, and by a Chernoff bound the probability of sampling more than the cutoff size specified in step 2 of the sampling algorithm is at most δ/6. Overall, with probability at least 1 − δ, one of the vertices sampled will be full, with the players knowing its degree approximately, and hence sampling enough of its edges so that they find a triangle-vee and do not need to send a set above the cutoff size. When that happens, one of the players will respond to the coordinator with the third edge completing the triangle-vee into a triangle, and the algorithm will end successfully.
All that is left is to find a full bucket. This is achieved by iterating over all the relevant buckets. Note the common theme: the players do not know how successful each sampling process was until the algorithm terminates. They do not know which bucket is full, which of the sampled vertices are full, or which of their adjacent edges belong to triangles. Since they examine all buckets and all sampled nodes (after preliminary filtering), and send all sampled edges, that knowledge is redundant: when they encounter a full bucket and a full vertex, these are treated as such as a working assumption. Our analysis culminates in algorithm FindTriangle(G), which nests inside it all the procedures we presented so far.
Algorithm 6 FindTriangle(G)
1: Run FindTriangleVee(B_i) for every bucket, starting from the first bucket such that d⁻(B_i) ≥ d_l and until the last bucket where d⁺(B_i) ≤ d_h
Theorem 3.20. If the input graph is ǫ-far from being triangle-free, then the players can find a triangle with probability at least 1 − δ and complexity O(k·(nd)^{1/4}·log^{5/2}n + k²·log⁵n·log log n) = Õ(k·(nd)^{1/4} + k²). The complexity is in fact Õ(k·√(d(B_min)) + k²) w.p. at least 1 − δ.

Proof. According to Lemma 3.12, one of the iterations of FindTriangleVee is performed on B_min, which is a full bucket. According to Lemma 3.19, the players find a triangle in that iteration with probability 1 − δ. The maximal complexity of an iteration grows as d(B_i) grows, and the number of iterations is O(log n). Therefore, with probability at least 1 − δ, the overall complexity is O(k·√(d(B_min))·log^{5/2}n + k²·log⁵n·log log n). However, if an error occurs, then the players will go over all buckets in the given range; thus in the worst case the complexity is O(k·√(d_h)·log^{5/2}n + k²·log⁵n·log log n) = Õ(k·(nd)^{1/4} + k²).
Corollary 3.21. There is a one-sided error protocol for testing triangle-freeness with communication complexity O(k·(nd)^{1/4}·log^{5/2}n + k²·log⁵n·log log n) = Õ(k·(nd)^{1/4} + k²). The complexity is in fact Õ(k·√(d(B_min)) + k²) w.p. at least 1 − δ.
Corollary 3.22. The variant where the players are not given d has the same complexity.
Proof. The players can compute 2-approximations of dh and dl , denoted by d′h and d′l , respectively, and then
use d′l /2 and 2d′h as the boundaries for the iteration condition. The added complexity is negligible, and the
error can be reduced to an arbitrarily small constant. The rest of the algorithm remains the same, and does
not rely on any knowledge of d.
Theorem 3.23. In the blackboard model, if the input graph is ǫ-far from being triangle-free, then the players can find a triangle with probability at least 1 − δ and complexity O((nd)^{1/4}·log^{5/2}n + k²·log⁵n·log log n) = Õ((nd)^{1/4} + k²). The complexity is in fact Õ(√(d(B_min)) + k²) w.p. at least 1 − δ. The same protocol also solves testing of triangle-freeness.

Proof. In the blackboard model, posting the edges in the sub-procedure SampleEdges(v) can be implemented more efficiently, having each player post its edges on the blackboard in turns, ensuring no edge is posted twice. This saves a factor of k in the communication cost with respect to the coordinator model.
3.4 Simultaneous Communication

In the simultaneous model, the players cannot interact with each other — they send only one message to the referee, and the referee then outputs the answer. This rules out our previous approach, as exposing a triangle-vee does not help us if the players cannot then check their inputs for an edge that completes the triangle. Indeed, the simultaneous model is closer to the query model in spirit. Accordingly, we build on the triangle-freeness testers of [3], but show that we can implement them more efficiently in our model. Moreover, we achieve roughly the same complexity without knowing the average degree in advance.
We present separate algorithms for the case of d = Ω(√n) and d = O(√n), referred to as high and low degrees, respectively. When d = Θ(√n), both algorithms are essentially identical. We conclude with an algorithm that works for the more general case where d is unknown to the players.

3.4.1 High Degrees
For graphs with average degree Ω(√n), the tester from [3] samples a uniformly random set S ⊆ V of Θ(∛(n²/d)) vertices, queries all edges in S², and checks if the exposed subgraph contains a triangle. It is shown in [3] that if the graph is ǫ-far from triangle-free, then the subgraph induced by S will contain Θ(1) triangles in expectation, and the variance is small enough to ensure small error.

We can implement this tester easily, and in our model it is less expensive: instead of querying all pairs in S², the players simply send all the edges from S² in their input, paying only for edges that exist and not for edges that do not exist in the graph. The set S is large enough that the number of edges in the subgraph does not deviate significantly from its expected value, Θ((nd)^{1/3}).
We present an algorithm of complexity O(k(nd)^{1/3}). We later show, in the lower bounds section, that for average degree Θ(√n) this is tight for 3 players.
Algorithm 7 FindTriangleSimHigh(G)
1: S ← a uniformly random set of vertices of size c·∛(n²/(ǫd)), for a sufficiently large c
2: players send all edges in the subgraph induced by the vertices in S. If the number of edges to be sent by a player exceeds l = (|S|²/n²)·(4/δ)·nd, send any l edges.
3: The Referee checks whether the union of edges it received contains a triangle, and outputs accordingly.
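A compact simulation of Algorithm 7 (ours; `contains_triangle` is a naive referee-side check we add for completeness, and the cap l follows the expression in step 2):

```python
import random

def contains_triangle(edges):
    """Naive referee-side check: does the edge set contain a triangle?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return any(adj[u] & adj[v] for u, v in edges)

def find_triangle_sim_high(player_inputs, n, d, eps, delta, c=4, seed=0):
    """Each player sends its edges inside the public random set S,
    truncated at the cap l; the referee inspects the union."""
    rng = random.Random(seed)
    size = min(n, int(c * (n * n / (eps * d)) ** (1 / 3)))
    S = set(rng.sample(range(n), size))
    cap = int((size ** 2 / n ** 2) * (4 / delta) * n * d)  # the cap l
    union = set()
    for edges in player_inputs:
        inside = [e for e in edges if e[0] in S and e[1] in S]
        union.update(inside[:cap])  # a player sends at most l edges
    return contains_triangle(union)
```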
Theorem 3.24. The problem of triangle detection, when d = Ω(√n) is known to the players, can be solved with communication cost O(k(nd)^{1/3} log n) and a constant error.

Proof. Let δ denote the required bound on the error, and let V_S denote the subgraph of G induced by S. In [3] it was shown that for a sufficiently large c, the probability of V_S not containing a triangle is arbitrarily small, and for our purposes it is taken as δ/2. Additionally, the probability of each edge appearing in V_S is |S|·(|S|−1)/(n·(n−1)); thus, by the linearity of expectation, the expected number of edges in V_S is (|S|·(|S|−1)/(n·(n−1)))·nd < l·δ/2. Finally, by a Markov argument we get that the probability of the number of edges in V_S exceeding l is at most δ/2. When it does not exceed l, even if every player has all the edges, none of them exceeds the cap; overall, by a union bound argument, we get that the referee receives all edges in V_S and that they contain a triangle with probability at least 1 − δ. The complexity is at most O(kl log n) = O(k(nd)^{1/3} log n).
Corollary 3.25. The problem of triangle detection in the no-duplication variant can be solved in the simultaneous model with a constant error δ, such that the complexity is O((nd)^{1/3} log n) with probability at least 1 − δ, and the worst-case complexity is O(k(nd)^{1/3} log n).

Proof. The players use Algorithm 7 as in the general case; thus the worst-case complexity is the same. But as we proved, with probability at least 1 − δ the total number of edges in the subgraph is O((nd)^{1/3}), and since there is no duplication, that is also the total number of edges sent.
3.4.2 Low Degrees

For density d = o(√n), the approach above no longer works, as the variance is too large. To illustrate this, consider a graph with d vertices of degree Θ(n), which are the sources of Θ(nd) triangle-vees, such that all triangles have at least one such node. If we were to sample vertices uniformly at random, we would need to sample Θ(n/d) vertices in order for the subgraph to contain a triangle. However, whereas in the query model we would need to make Θ(n²/d²) queries to learn the entire subgraph induced by the set we sampled, in our model we proceed as follows (using ideas from [3], which require adaptivity there, deployed here in a different way): let S be a set of Θ(n/d) uniformly random vertices. We sample another, smaller set R of Θ(√n) vertices, and we send all edges in R × (S ∪ R). If indeed there is a small set of high-degree vertices participating in most of the triangles, then with good probability we will have one of them in S, and by the birthday paradox, one of its triangles will have its other two vertices in R. On the other hand, if the triangles are spread out "evenly", then the subgraph R × R will probably contain one. The expected number of edges in R × (S ∪ R) is O(√n), and we show that w.h.p. the total communication is Õ(k√n).

Note that both our solutions work for d = Θ(√n), and for this density they are essentially the same: both sets, S and R, are of size Θ(n/d) = Θ(d) = Θ((nd)^{1/3}), so for d = Θ(√n) the second protocol is not very different from the first. We can also show that if edge duplication is not allowed, a factor of k is saved in the communication complexity with high probability.
Algorithm 8 FindTriangleSimLow(G)
1: S ← sample each vertex with probability p₁ = min{c/d, 1}
2: R ← sample each vertex with probability p₂ = c/√n
3: Players send to the referee all edges with one endpoint in R and the second endpoint in R ∪ S. If the number of such edges in the input of a player exceeds q = 2c²(√n + d)·(2/δ), that player sends any q edges.
4: The Referee checks whether the union of the edges it received contains a triangle, and outputs accordingly.

Here c is a constant to be determined later.
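A player-side sketch of Algorithm 8 in Python (ours; the shared seed models the public randomness defining S and R, and the cap q follows step 3):

```python
import random

def alg_low_message(edges, n, d, c, delta, seed):
    """What one player sends in Algorithm 8.

    All players derive the same S and R from the shared seed.
    Edges are tuples (u, v).
    """
    p1 = min(c / d, 1.0)
    p2 = c / n ** 0.5
    rng_s = random.Random(seed)        # shared coins for S
    S = {v for v in range(n) if rng_s.random() < p1}
    rng_r = random.Random(seed + 1)    # shared coins for R
    R = {v for v in range(n) if rng_r.random() < p2}
    RS = R | S
    keep = [e for e in edges
            if (e[0] in R and e[1] in RS) or (e[1] in R and e[0] in RS)]
    q = int(2 * c * c * (n ** 0.5 + d) * (2 / delta))  # cap from step 3
    return keep[:q]
```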
Theorem 3.26. The problem of triangle detection in the simultaneous model, when d = O(√n) is known to the players, can be solved with communication cost O(k√n log n) and with constant error.
Proof. We show that Algorithm 8 is such a solution. Let δ denote the required constant bound on the error, let V_R denote the graph induced by R, and let V_RS denote the graph on R ∪ S that includes all edges with at least one endpoint in R. Since each player sends at most q edges, the complexity of the algorithm is O(qk log n) = O(k√n log n).

The probability of a given edge appearing in V_R is p₂² = c²/n; therefore, by linearity of expectation, the expected number of edges in V_R is at most nd·(c²/n) = c²d. The probability of any edge having one endpoint in S and the other in R is at most 2p₁p₂; therefore, by linearity of expectation, the expected number of such edges is at most 2nd·p₁p₂ ≤ 2c²√n. Overall, we get that the expected number of edges in V_RS, which are the only edges the players may send, is at most qδ/2, and by a Markov argument we get that with probability at least 1 − δ/2 the number of edges in V_RS does not exceed q; thus all players can send all the edges they have in V_RS.
We show that with probability at least 1 − δ/2 the edges in V_RS contain a triangle, which via a union bound proves that the referee will receive a triangle with probability at least 1 − δ. Recall G_l, the graph defined in Definition 7 to be the subgraph on all edges adjacent to at least one vertex of degree at most d_h. For the purpose of analysis, ignore all triangles that are not in G_l. According to Lemma 3.11, G_l is ǫ/2-far from being triangle-free, and thus it contains a family of (ǫ/6)·nd edge-disjoint triangles. Fix such a family, T. Note that in any triangle in G_l, at most one vertex can be of degree higher than d_h. Further restricting the number of triangles counted, we only count triangles where at least two vertices of degree at most d_h were sampled into R, which implies that if the third vertex is of degree higher than d_h it must be sampled into S. Let X be a random variable equal to the number of triangles in V_RS with the aforementioned restrictions. The probability of such a triangle being sampled is at least p₁p₂²; therefore

  E[X] ≥ (ǫ/6)·nd·p₁p₂² ≥ (ǫ/6)·c².    (5)
We now bound the variance of X. For each t ∈ T, let X_t be an indicator of t being sampled (with our restrictions). Let d_T(v) ≤ d(v) denote the degree of vertex v when only edges of T are left in the graph. Observe that if two triangles have no vertices in common, then they are selected independently. Additionally, note that two triangles in T can have at most one vertex in common, as all triangles are edge-disjoint. The probability of two triangles with a joint vertex both being sampled splits into two cases: one where the joint vertex is in S, and the other where it is sampled into R.

The probability when the joint vertex is sampled into S is p₁p₂⁴. The number of pairs of triangles that can have vertex v in common is at most C(d_T(v)/2, 2). Since Σ_v d_T(v) ≤ 2nd, by convexity we get

  Σ_{v∈V} C(d_T(v)/2, 2) ≤ Σ_{i=1}^{2d} C(n/2, 2) ≤ 2d·(n²/8).

As for the case when the common vertex v is sampled into R (note that this means that d_T(v) < d_h), the probability of both triangles being sampled is p₁²p₂³. The number of vertices of degree d_h is at most 2nd/d_h, and once again by convexity we get that the number of such pairs is smaller than

  Σ_{v∈V_h} C(d_T(v)/2, 2) ≤ Σ_{v∈V_h} C(d_h/2, 2) ≤ (2nd/d_h)·(d_h²/8).
Therefore, the variance of X is, for d > c, bounded by:

  Var[X] ≤ Σ_{v∈V} C(d_T(v)/2, 2)·p₁p₂⁴ + Σ_{v∈V_h} C(d_T(v)/2, 2)·p₁²p₂³
         ≤ 2d·(n²/8)·p₁p₂⁴ + (2nd/d_h)·(d_h²/8)·p₁²p₂³
         ≤ 2d·(n²/8)·(c/d)·(c/√n)⁴ + 2√(ǫnd)·(nd/(8ǫ))·(c/d)²·(c/√n)³
         = c⁵/4 + c⁵/(4√d) ≤ c⁵/2,

and similarly for d ≤ c (which implies that p₁ = 1, S = V and d = Θ(1)) we get that the variance is also bounded by c⁵/2.
We conclude by employing a Chebyshev bound:

  Pr(X < 1) ≤ Pr(|X − E[X]| ≥ (1/2)·E[X]) ≤ Var(X)/((1/2)·E[X])² ≤ (8ǫc⁴)/(18c⁵) ≤ 4/(9c) ≤ δ/2,

where the last inequality follows by taking c = 8/(9δ). Therefore, V_RS contains at least one triangle with probability at least 1 − δ/2.
Corollary 3.27. The complexity of the no-duplication variant is O(√n log n) with probability at least 1 − δ, and the worst-case complexity is O(k√n log n).

Proof. The players use Algorithm 8 as in the general case; thus the worst-case complexity is the same. But as we proved, with probability at least 1 − δ the total number of edges between an endpoint of R and S is O(√n), and the total number of edges in the subgraph of R is at most O(d); since there is no duplication, that is also the total number of edges sent, which implies complexity O(√n).
3.4.3 Degree Oblivious Algorithm
We start with a high-level overview of how we combine the protocols above and modify them so that they
can be used without advance knowledge of the degree. The challenge here is that no single player can get a
good estimate of the degree from their input, and since the protocol is simultaneous, the players must decide
what to do without consulting each other. The natural approach is to use log n exponentially-increasing
“guesses” for the density, covering the range [1, n], and try them all; however, if we do this we will incur
a high cost for guesses that imply examining a larger sample than needed. We, therefore, take a more
fine-grained approach.
Our first observation is that some players can make a reasonable estimate of the global density, although they do not know that they can. Let d̄_j denote the average degree in player j's input E_j, and let us say that player j is relevant if d̄_j ≥ (ǫ/(4k))·d, and irrelevant otherwise. If we eliminate all the irrelevant players and their inputs, the graph still remains (ǫ/2)-far from triangle-free, so we can afford to ignore the irrelevant players in our analysis — except for making sure that their messages are not too large.
Since players cannot know if they are relevant, all players assume that they are. Based on the degree d̄_j that player j observes, it knows that if it is relevant, then the average degree in the graph is in the range D_j = [d̄_j, Θ(k·d̄_j)]. We fix in advance an exponential scale {2^i}_{i=0}^{log n} of guesses for the density, and execute in parallel log n instances of triangle-freeness protocols, one for each degree 2^i. However, each player j only participates in the O(log k) instances corresponding to density guesses that fall in D_j, and sends nothing for the other instances. The protocols are the two algorithms we have presented for a known average degree, with some modifications. For relevant players, we know that the true density falls in their range D_j, so they will participate in the "correct" instance. For irrelevant players, we do not care, and their message size is also not an issue: their density estimate is too low, and the communication complexity of each instance increases with the density it corresponds to.
If we are not careful, we may still incur a blow-up in communication, as relevant players may use guesses lower than the true density by a factor of k, which increases the size of the sample beyond what is necessary. However, by carefully assigning each player j a communication budget depending on d̄_j, we can eliminate the blow-up and match the degree-aware protocol up to polylogarithmic factors.
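A small helper (ours, with hypothetical parameter names) computing which instances a player participates in under this scheme:

```python
import math

def participating_guesses(avg_local_degree, k, eps, n):
    """Player j participates only in the O(log k) instances whose
    density guess 2^i falls in D_j = [d_j_bar, (4k/eps) * d_j_bar]."""
    lo = avg_local_degree
    hi = (4 * k / eps) * avg_local_degree
    return [2 ** i for i in range(int(math.log2(n)) + 1)
            if lo <= 2 ** i <= hi]

# A player seeing average degree 8 with k = 16, eps = 0.5:
print(participating_guesses(8, k=16, eps=0.5, n=2 ** 20))
# -> guesses 8, 16, ..., 1024 (O(log k) of them)
```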
We now move on to a detailed analysis of the algorithm, which relies on an integration of modified versions of the algorithms we presented in the non-oblivious sections. First, we alter the algorithm for high degrees: instead of sampling |S| = ∛(n²/(ǫd)) = Θ(n^{2/3}/d^{1/3}) vertices without replacement, we sample each vertex independently with probability Θ(|S|/n) = Θ((nd)^{−1/3}). Additionally, we remove the cap on the number of edges allowed to be sent from both algorithms (high and low degrees). Let AlgHigh denote the modified algorithm for high degrees, and AlgLow the one for low degrees. We provide figures for both.
Algorithm 9 AlgHigh(G)
1: S ← a uniformly random set of vertices of size c·∛(n²/(ǫd)), for a sufficiently large c
2: players send all edges in the subgraph induced by the vertices in S.
3: The Referee checks whether the union of edges it received contains a triangle, and outputs accordingly.

Algorithm 10 AlgLow(G)
1: S ← sample each vertex with probability p₁ = min{c/d, 1}
2: R ← sample each vertex with probability p₂ = c/√n
3: Players send to the referee all edges with one endpoint in R and the second endpoint in R ∪ S.
4: The Referee checks whether the union of the edges it received contains a triangle, and outputs accordingly.
Lemma 3.28. AlgHigh detects a triangle with a small constant error.

Proof. By choosing sufficiently large constants, we ensure that the expected number of triangles is not lower than the expected value before the alteration. The bound on the deviation follows the same analysis as in the proof of the original algorithm, and is analogous to the correctness proof of AlgLow: when bounding the variance in the number of edge-disjoint triangles, only triangles with one common vertex are dependent. In terms of complexity, we once again use a Markov argument to claim that the total number of edges in the sampled subgraph does not exceed the expectation by a large constant factor, with high probability.

AlgLow obviously also remains correct, as removing the cap can only increase the chances of detecting a triangle. Moreover, both algorithms have the same complexity as before with probability at least 1 − δ, as implied by their respective complexity analyses in the previous section. We will reintroduce modified caps further into our analysis.
The main sub-procedure the players utilize in both algorithms is choosing jointly, via public randomness, a subset S ⊆ V, such that each vertex is chosen independently with probability p, and then posting all edges in their inputs with both endpoints in S. We show that the number of edges each player has in S does not significantly exceed the expectation, given its average degree.

Lemma 3.29. Let S ⊆ V denote a set where each vertex was sampled with probability p. The number of edges player j has in the subgraph V_S induced by S is O(n·d̄_j·p²·log n·log(k log n)), and this holds for all players simultaneously with a small constant error probability.
Proof. Recall the bucketing partition we used in the input analysis section. For a given player j, we partition V into O(log n) buckets as we did before, only this time according to d_j(v) and not d(v). Since each vertex is chosen independently, we may utilize a Chernoff bound to claim that when sampling with probability p, the degree of each sampled vertex is reduced from d_j(v) to O(p·d_j(v)·log(nk)) = O(p·d_j(v)·log n), with probability of error at most O(1/(nk)). Therefore, by the union bound, this holds true for all n vertices in the inputs of all k players, with a small constant error.

Next, we also claim that the number of vertices chosen from each bucket B is at most O(p·|B|·log(k log n)), with probability of error at most O(1/(k log n)), once again due to a Chernoff argument. A union bound implies this holds true for all O(log n) buckets for all k players with a small constant error. Overall, if the number of vertices chosen from each bucket deviates by at most an O(log(k log n)) factor from the expectation, and each vertex degree is reduced to a size that deviates by at most an O(log n) factor from its expectation, we get that for all k players the number of edges they have in the sampled subgraph deviates from its expected value (given d̄_j) by at most an O(log n·log(k log n)) factor.
For a given guess d′ of the average degree, let p(d′) denote the probability with which the players need to sample each vertex. Recall that all relevant players will include p(d) in their range of guesses (the factor-2 difference is asymptotically insignificant); hence the union of all the sampled edges includes a triangle with high probability. We note that for our purposes, an increase of the degree guess d′ by a factor of 2 decreases the sampling probability p(d′) also by at most a factor of 2; therefore there is no dangerous super-constant blow-up in the sampling probability that would otherwise incur an asymptotic overhead on the complexity. Observe that the guess for the average degree varies inversely with the corresponding sampling probability, and thus with the expected sample size.
We first discuss the case where, for player j, d̄_j = Ω(√n), which implies d = Ω(√n). The player performs simultaneously O(log k) algorithms, each with a different guess d′ of the average degree. For high degrees this means p(d′) = Θ((nd′)^{−1/3}).

We now prove that the complexity bound on the message for each player can remain roughly the same, with the error remaining constant. More concretely, each player separately limits each of the O(log k) simultaneous algorithms by sending at most O(∛(n·d̄_j)·log n·log(k log n)) edges. This implies that the complexity bound of this player over all its simultaneous instances is O(∛(n·d̄_j)·log²n·log k·log(k log n)).
Lemma 3.30. A complexity bound of sending at most O(∛(n·d̄_j)·log n·log(k log n)) edges for each instance of AlgHigh suffices for the instance pertaining to the correct guess to send all the edges in its corresponding subgraph.
Proof. Let r(j) = d/d̄_j denote the ratio between the correct average degree and the player's observed average degree. The expected number of edges player j has in the sampled subgraph for a correct guess is

  Θ(n·d̄_j/(nd)^{2/3}) = Θ((n·d̄_j)^{1/3}/r(j)^{2/3}) ≤ O(∛(n·d̄_j)) ≤ O(∛(nd)),

where we used the fact that r(j) = Ω(1). Therefore, player j can limit the edge budget of each algorithm with a bound of O(∛(n·d̄_j)·log n·log(k log n)) following Lemma 3.29, with all k players not exceeding the bound with high probability.
This lemma, along with the fact that we have shown that the sample pertaining to the correct guess contains a triangle with high probability, implies correctness with constant error.
Now we deal with the case d̄_j ≤ √n. As in the previous case, the player performs O(log k) algorithms covering the relevant degree range. And as before, all relevant players include the correct guess in their range of guesses, implying that the union of the messages of all players contains a triangle with high probability, following the same analysis as in the non-oblivious case.

The player splits the relevant range of degrees into two cases. For every degree guess d′ where √n ≤ d′ ≤ (4k/ǫ)·d̄_j (note that when d̄_j ≤ ǫ√n/(4k) this range is empty), the player simulates the algorithm for high degrees as we just described (with an edge limit of O(∛(n·d̄_j)·log n·log(k log n)) for each algorithm). If indeed d ≥ √n, then the correctness and complexity analysis for that case is the same as when d̄_j ≥ √n.

Whereas for every guess d′ where d̄_j ≤ d′ ≤ √n, the player simulates AlgLow using d′ instead of d. More concretely, the player samples into S each vertex with probability p = min{c/d′, 1}, while the sampling into R of each vertex with probability Θ(1/√n), as in the original algorithm, remains the same (the players can use the same R across all simultaneous instances).
We use a cap of O(√n·log n·log(k log n)) edges for each instance of AlgLow and, as we promptly prove, it suffices for the correct instance.

Lemma 3.31. A complexity bound of sending at most O(√n·log n·log(k log n)) edges for each instance of AlgLow suffices, for the case where d ≤ √n, for the protocol pertaining to the correct guess to send all the edges in its corresponding subgraph.
Proof. The expected number of edges player j has in R is Θ(n·d̄_j/n) = Θ(d̄_j) = O(√n). It is not surprising that the expected number did not increase, as the sample size does not depend on the average degree, and we have already assumed, in our previous analysis, the worst case of each edge in R appearing in all inputs.

For the correct guess, d′ = d ≤ √n, the expected number of edges player j has connecting S and R is Θ(n·d̄_j/(d·√n)) = O(√n). Therefore, the expected number of edges player j needs to send overall for the correct guess is O(√n); and indeed, for all guesses where d′ ≤ √n, player j can limit the edge budget of each algorithm with a bound of O(√n·log n·log(k log n)) following Lemma 3.29, with all k players not exceeding the bound with high probability.
Since the complexity (and the edge cap) is higher for the simulations of AlgLow than for the simulations of AlgHigh, the simulations of AlgHigh do not affect the overall complexity asymptotically.

To conclude, when d ≥ √n, the message cost of each player j is

  O(max{√n, (n·d̄_j)^{1/3}}·log²n·log k·log(k log n)) = O((nd)^{1/3}·log²n·log k·log(k log n));

thus the overall complexity of the protocol for all players is O(k·(nd)^{1/3}·log²n·log k·log(k log n)). Whereas when d ≤ √n, then d̄_j ≤ √n, and the complexity of player j is O(√n·log²n·log k); thus for k players we get O(k·√n·log²n·log k).
We summarize our complete procedure for all cases in FindTriangleSimOblivious(G). The correctness follows from the fact that all relevant players participate in the instance pertaining to the correct guess, and from what we have proved about the edge cap not limiting that instance.
Theorem 3.32. The problem of d-oblivious triangle detection in the simultaneous model can be solved with communication cost O(k√n log² n log k log(k log n)) for d = O(√n), and O(k(nd)^{1/3} log² n log k log(k log n)) for d = Ω(√n), by a single algorithm, with constant error in both cases.
Algorithm 11 FindTriangleSimOblivious(G)
1: Each player j ∈ [k] runs simultaneously O(log k) protocols, one for each degree guess d′, covering its relevant degree range D_j = [d̄_j, (4k/ǫ)·d̄_j]:
2:   for each guess d′ ≥ √n:
3:     run AlgHigh and send up to O((nd)^{1/3} log n log(k log n)) edges
4:   for each guess d′ < √n:
5:     run AlgLow and send up to O(√n log n log(k log n)) edges
6: The referee checks whether the union of edges it received contains a triangle, and outputs accordingly.
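The following Python sketch shows the overall control flow of Algorithm 11. The hooks simulate_high and simulate_low stand in for the AlgHigh and AlgLow simulations with their respective edge caps; they are hypothetical names, not part of the original algorithm.

```python
import math

def contains_triangle(edges):
    """Referee's check on the union of all received edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return any(adj[u] & adj[v] for u, v in edges)

def find_triangle_sim_oblivious(player_edges, n, k, eps, simulate_high, simulate_low):
    union = set()
    for edges in player_edges:
        d_bar = max(2 * len(edges) / n, 1.0)     # player's own average degree
        d_guess = d_bar
        while d_guess <= (4 * k / eps) * d_bar:  # O(log k) degree guesses
            if d_guess >= math.sqrt(n):
                union |= simulate_high(edges, d_guess)  # capped per the text
            else:
                union |= simulate_low(edges, d_guess)   # capped per the text
            d_guess *= 2
    return contains_triangle(union)
```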
4 Lower Bounds
Our main result in this section is the following:
Theorem 4.1. For any d = O(√n), let T^ǫ_{n,d} be the task of finding a triangle edge in graphs of size n and average degree d which are ǫ-far from triangle-free. Then for sufficiently small constant error probability δ < 1/100 we have:

(1) For k > 3 players: CC^{sim}_{k,δ}(T^ǫ_{n,d}) = Ω(k·(nd)^{1/6}).

(2) For 3 players: CC^{sim}_{3,δ}(T^ǫ_{n,d}) = Ω((nd)^{1/3}).
To show both results, we first prove them for average degree d = Θ(√n), and then easily obtain the result for lower degrees by embedding a dense subgraph of degree Θ(√n) in a larger graph with lower overall average degree.
To prove (1), we begin by proving that for graphs of average degree Θ(√n), three players require Ω(n^{1/4}) bits of communication to solve T^ǫ_{n,√n} in the one-way communication model, where Alice and Bob send messages to Charlie, and then Charlie outputs the answer. In fact, our lower bound is more general, and allows Alice and Bob to communicate back-and-forth for as many rounds as they like, with Charlie observing the transcript. We then "lift" the result to k > 3 players communicating simultaneously, using symmetrization [33].
To prove (2), we show directly that in the simultaneous communication model, three players require Ω(√n) bits to solve T^ǫ_{n,d} in graphs of average degree Θ(√n).
Our lower bounds actually bound the distributional hardness of the problems: we show an input distribution µ on which any protocol that has a small probability of error on inputs drawn from µ requires high
communication. This is stronger than worst-case hardness, which would only assert that any protocol that
has small error probability on all inputs requires high communication.
4.1 Information Theory: Definitions and Basic Properties
We start with an overview of our information theory toolkit, which is our primary technical apparatus for directly proving lower bounds (as opposed to reductions, which we also use to derive subsequent results).
Definition 9. The mutual information between two random variables is I(X; Y) = H(X) − H(X|Y) = E_{y∼Y}[D(µ(X|Y = y) ‖ µ(X))].
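As a small sanity check on this definition, the following Python snippet computes I(X;Y) from a joint pmf; the identity Σ p(x,y) log(p(x,y)/(p(x)p(y))) = E_y[D(p(X|y) ‖ p(X))] is used. This is an illustrative helper, not part of the paper.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A fully correlated fair coin pair carries exactly 1 bit of information.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
```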
Lemma 4.2 (Super-additivity of information). If X_1, ..., X_n are independent, then

I(X_1, ..., X_n; Y) ≥ Σ_{i=1}^{n} I(X_i; Y).
Lemma 4.3. Let p, q ∈ (0, 1), and let D(q ‖ p) denote the KL divergence between Bernoulli(q) and Bernoulli(p). Then for any p < 1/2 we have D(q ‖ p) ≥ q − 2p.
Proof. Since the divergence is non-negative, it suffices to show that for p < 1/2 and any q ≥ 2p we have D(q ‖ p) ≥ q − 2p.
For convenience, let us write q = p + x, where −p ≤ x ≤ 1 − p. Our goal is to show that when x ≥ p, we have D(p + x ‖ p) ≥ x − p.
Consider the difference

g(x, p) = D(p + x ‖ p) − (x − p) = (p + x) log((p + x)/p) + (1 − p − x) log((1 − p − x)/(1 − p)) − (x − p).

At x = p we have g(x, p) ≥ 0 (since divergence is always non-negative); we will show that the derivative w.r.t. x is non-negative for x ≥ p, and hence g(x, p) ≥ 0 for any x ≥ p.
Taking the derivative with respect to x, we obtain

∂g(x, p)/∂x = log((p + x)/p) + (p + x)·(p/(p + x))·(1/(p ln 2)) − log((1 − p − x)/(1 − p)) − (1 − p − x)·((1 − p)/(1 − p − x))·(1/((1 − p) ln 2)) − 1
            = log(1 + x/p) − log(1 − x/(1 − p)) − 1.

The derivative is increasing in x, and since we consider only x ≥ p, it is sufficient to show that it is non-negative at x = p:

∂g(x, p)/∂x |_{x=p} = log(1 + 1) − log(1 − p/(1 − p)) − 1 ≥ 1 + p/(1 − p) − 1 ≥ 0.

In the last step we used the fact that log(1 − z) ≤ −z for any z ∈ (0, 1); in our case, since p < 1/2, we have p/(1 − p) < 1.
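The inequality of Lemma 4.3 is easy to spot-check numerically. The Python snippet below, a sanity check rather than a proof, evaluates both sides over a grid of (p, q) pairs.

```python
import math

def kl_bernoulli(q, p):
    """D(q || p) between Bernoulli(q) and Bernoulli(p), in bits."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(q, p) + term(1 - q, 1 - p)

# Spot-check Lemma 4.3: D(q || p) >= q - 2p whenever p < 1/2.
for p in [0.01, 0.1, 0.3, 0.45]:
    for q in [i / 100 for i in range(1, 100)]:
        assert kl_bernoulli(q, p) >= q - 2 * p - 1e-12
```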
4.2 Random Graph of Degree Θ(√n)
In this section, we derive our main results, lower bounds for one-way and simultaneous communication, all using a single distribution, µ, for graphs of average degree Θ(√n), whose edges are shared among 3 players. In the subsequent sections we move on to showcase methods to generalize these results to k players and other average degrees.
4.2.1 The input distribution and its properties
Our lower bounds for degree Θ(√n) use the following input distribution, µ: we construct a tripartite graph G = (U ∪ V1 ∪ V2, E), where each edge appears i.i.d. with probability γ/√n for some constant γ.
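A small Python sampler for µ may help fix the construction. The part sizes are left implicit in the text; the sketch below assumes each of U, V1, V2 has n vertices.

```python
import math
import random

def sample_mu(n, gamma, rng=random):
    """Sample a tripartite graph G = (U, V1, V2, E) from mu, where each
    cross-part edge appears independently with probability gamma/sqrt(n)."""
    p = gamma / math.sqrt(n)
    parts = {'U': range(n), 'V1': range(n), 'V2': range(n)}
    edges = set()
    for a_name, b_name in [('U', 'V1'), ('U', 'V2'), ('V1', 'V2')]:
        for a in parts[a_name]:
            for b in parts[b_name]:
                if rng.random() < p:
                    edges.add(((a_name, a), (b_name, b)))
    return edges
```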
Under this distribution the input is ǫ-far from triangle-free with very high probability, but not with probability 1. Still, if we can show that some task (finding a triangle, or finding a triangle edge) is hard on µ, then it is also hard on the distribution µ′ obtained from µ by conditioning on the input being ǫ-far from triangle-free.
Observation 4.4. Let Π be a protocol for some task T , with error probability at most δ on some distribution
µ supported on a class X of inputs. Then for any Y ⊆ X , the error probability of Π on µ|Y is at most
δ/ Prµ [Y].
26
Proof. We can write:

δ ≥ Pr[Π errs on X]
  = Pr[Π errs on X | X ∈ Y]·Pr[X ∈ Y] + Pr[Π errs on X | X ∉ Y]·Pr[X ∉ Y]
  ≥ Pr[Π errs on X | X ∈ Y]·Pr[X ∈ Y].

The claim follows.
In our case we have:
Lemma 4.5. When γ is sufficiently small, a graph sampled from µ is O(1)-far from triangle-free with
probability at least 1/2.
Proof. Let T be the random variable of the set of triangles in the graph, and let I be the set of pairs of triangles that share an edge. We have

E[|T|] = (n choose 3)·(γ/√n)³ ≥ (γ³/12)·n^{3/2},
E[|I|] = 3·(n choose 3)·(n − 3)·(γ/√n)⁵ ≤ (1/2)·E[|T|],

where the last inequality follows from choosing a sufficiently small γ. It follows that E[|T| − |I|] ≥ (1/2)·E[|T|].
Let D denote the maximal size of a set of disjoint triangles in the graph. Note that D ≥ |T| − |I|, since given the set T of triangles, we can for each pair in I choose one of the intersecting triangles and remove the other from T. This process halts after at most |I| steps, and we are left with a set of disjoint triangles of size at least |T| − |I|. Therefore E[D] ≥ (γ³/24)·n^{3/2}.
Denote X = n^{3/2} − D, and let |E| be the size of the set of edges in the graph. Trivially |E| ≥ D, therefore

Pr(X ≤ 0) ≤ Pr(n^{3/2} − |E| ≤ 0) = Pr(n^{3/2} ≤ |E|) ≤ e^{−m²/((1−γ)²)},

where the last inequality follows from a Chernoff bound on the number of edges in the graph. Since X takes negative values with exponentially small probability, and is only polynomial in value, it holds that E[X | X > 0] ≤ (1 + o(1))·E[X]. For convenience denote c1 = E[D]/(2n^{3/2}) = γ³/48. It follows that

Pr(D ≤ c1·n^{3/2}) = Pr(X ≥ (1 − c1)·n^{3/2})
 = Pr(X ≥ (1 − c1)·n^{3/2} | X > 0)·Pr(X > 0) + Pr(X ≥ (1 − c1)·n^{3/2} | X ≤ 0)·Pr(X ≤ 0)
 ≤ Pr(X ≥ (1 − c1)·n^{3/2} | X > 0) + e^{−m²/((1−γ)²)}
 ≤ (1 + o(1))·E[X | X > 0]/((1 − c1)·n^{3/2}) + o(1)
 ≤ (1 + o(1))·E[X]/((1 − c1)·n^{3/2}) + o(1)
 = (1 + o(1))·(n^{3/2} − E[D])/((1 − c1)·n^{3/2}) + o(1)
 ≤ (1 + o(1))·(1 − 2c1)/(1 − c1) + o(1).

Since (1 − 2c1)/(1 − c1) is a constant smaller than 1, the quantity (1 + o(1))·(1 − 2c1)/(1 − c1) + o(1) is smaller than some constant c2 < 1. Therefore, with constant probability there are at least c1·n^{3/2} disjoint triangles.
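The greedy step in this proof (keep one triangle from each intersecting pair, discard the other) is easy to run on concrete samples. Below is a hedged Python sketch of that extraction; combined with the earlier µ sampler it can be used to estimate D empirically on small instances.

```python
from itertools import combinations

def all_triangles(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {frozenset((u, v, w)) for u, v in edges for w in adj[u] & adj[v]}

def greedy_edge_disjoint(triangles):
    """Keep triangles greedily so that no two kept triangles share an edge.
    As argued in the proof, at least |T| - |I| triangles survive."""
    used_edges, kept = set(), []
    for t in triangles:
        tri_edges = {frozenset(p) for p in combinations(t, 2)}
        if not (tri_edges & used_edges):
            kept.append(t)
            used_edges |= tri_edges
    return kept
```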
Therefore, any lower bound we prove for µ translates to an asymptotically identical bound on a distribution supported on inputs that are ǫ-far from triangle-free, namely µ conditioned on being ǫ-far from triangle-freeness.
Let X_e be an indicator variable for the presence of edge e in the input graph. For a transcript t of a communication protocol Π, let

∆_t(e) := Pr[X_e = 1 | Π = t] − 2γ/√n.
Lemma 4.6. We have

E_{t∼Π}[Σ_e ∆_t(e)] ≤ |Π|.

Proof. For each edge e, the prior probability that e ∈ E is γ/√n, so by Lemma 4.3, for any transcript t,

∆_t(e) ≤ D(π(X_e | Π = t) ‖ π(X_e)).

By super-additivity of information,

|Π| ≥ I(Π; E) ≥ Σ_e I(Π; X_e) = E_{t∼Π}[Σ_e D(π(X_e | Π = t) ‖ π(X_e))] ≥ E_{t∼Π}[Σ_e ∆_t(e)].
Covered and reported edges. Our lower bounds show that it is hard for the players to find an edge that
belongs to a triangle. Intuitively, in order to output such an edge, the players need to identify some edge
{v1 , v2 } that (a) is in the input, and (b) closes a triangle together with some third vertex u; that is, for some
u, the edges {u, v1 } and {u, v2 } are also in the input.
We formalize the notion of “finding” an edge satisfying some property using the posterior probability of
the edge satisfying this property given the transcript.
Definition 10 (Reported edges). Given a transcript t, let
Rep(t) = {e ∈ E | Pr [e ∈ E | Π = t] ≥ 9/10}
be the set of edges whose posterior probabilities of being in the input increase to at least 9/10 when transcript t is sent. We call the edges in Rep(t) reported.
Definition 11 (Covered edges). Given a transcript t, let

C(t) = {(v1, v2) ∈ V1 × V2 | Pr[∃u ∈ U : (u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | Π = t] ≥ 9/10}

be the set of edges in V1 × V2 whose posterior probability of being covered by a vee rises to at least 9/10 upon observing transcript t. We say that edges in C(t) are covered by Alice and Bob. Let Cov(e) be an indicator for the event that e ∈ C(Π).
4.2.2 One-Way Communication
Consider a protocol Π between three players — Alice, Bob and Charlie — where Alice and Bob communicate back-and-forth for as many rounds as they want, with Charlie observing their transcript, and finally
Charlie outputs an edge from his side of the graph. We claim that the total amount of communication
exchanged by Alice and Bob must be Ω(n1/4 ).
The underlying intuition for our proof is that by the end of the protocol, Charlie needs to be informed by Alice and Bob of at least Ω(√n) vertex pairs in V1 × V2 being covered with high certainty by a vee in their input. This is due to the fact that only a Θ(1/√n)-fraction of these pairs is expected to have an edge connecting them. We prove that the number of pairs Alice and Bob can on average inform Charlie of being covered is at most quadratically larger than their bit budget, which implies that Ω(n^{1/4}) bits are required for such communication to succeed with high probability.
This result is somewhat surprising, as the a priori probability of any edge in Charlie's input belonging to a triangle is already constant, and elevating only one of these probabilities to 1 − δ suffices for solving the problem. This observation is equally valid for simultaneous communication.
Theorem 4.7. For any constant γ ∈ (0, 1), if Π solves the triangle-edge-finding problem under µ with error δ ≤ 1/100, then |Π| = Ω(n^{1/4}).
Proof. Suppose for the sake of contradiction that there is a protocol Π with communication αn^{1/4}, where α satisfies

(100α² + 10α) < (9/20)/γ,

and error δ ≤ 1/100.
Say that transcript t of Π is good if |C(t)| ≥ √n/(2γ).
Lemma 4.8. Pr[Π is good] ≥ 1 − 20δ.
Proof. If t is not a good transcript, then because C(t) is independent of E3,

E[|E3 ∩ C(t)|] ≤ (γ/√n)·(√n/(2γ)) = 1/2.

By Markov, Pr[E3 ∩ C(t) ≠ ∅] ≤ 1/2. Whenever E3 ∩ C(t) = ∅, Charlie must output an edge that is either not in his input (E3), or not covered by t; in the first case this is an error, and in the second case, the probability of an error is at least 1/10, independent of E3 (it depends only on E1, E2, which are independent of E3, even given Π = t). Therefore, conditioned on E3 ∩ C(t) = ∅, the error probability is at least 1/10; and overall, for any t that is not good,

Pr[error | Π = t] ≥ Pr[E3 ∩ C(t) = ∅]·(1/10) ≥ 1/20.

Since the total probability of error is bounded by δ, we obtain

δ ≥ Pr[error] = Σ_t Pr[error | Π = t]·Pr[Π = t] ≥ Σ_{bad t} Pr[error | Π = t]·Pr[Π = t] ≥ Σ_{bad t} (1/20)·Pr[Π = t] = Pr[Π is bad]/20.

The claim follows.
Next, say that t is informative if

Σ_{e ∈ U×V1 ∪ U×V2} ∆_t(e) ≥ 10αn^{1/4}.

Lemma 4.9. Pr[Π is informative] ≤ 1/10.
Proof. By super-additivity,

αn^{1/4} = |Π| ≥ I(Π; E1 ∪ E2) ≥ Σ_{e ∈ U×V1 ∪ U×V2} I(Π; X_e)
 = E_{t∼Π}[Σ_{e ∈ U×V1 ∪ U×V2} D(π(X_e | Π = t) ‖ π(X_e))]
 ≥ E_{t∼Π}[Σ_{e ∈ U×V1 ∪ U×V2} ∆_t(e)]    (by Lemma 4.3)

The claim follows by Markov.
Corollary 4.10. There exists a transcript which is both good and uninformative.
Proof. By union bound, the probability that a transcript is either not good or informative is at most 20δ +
1/10 < 1.
We will now show that such a transcript cannot exist, as an uninformative transcript cannot cover enough edges to be good.
For any particular transcript t of Π, the inputs of the three players remain independent given Π = t. Therefore, for any edge (v1, v2) ∈ V1 × V2,

Pr[∃u : (u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | Π = t]
 ≤ Σ_{u∈U} Pr[(u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | Π = t]
 = Σ_{u∈U} Pr[(u, v1) ∈ E1 | Π = t]·Pr[(u, v2) ∈ E2 | Π = t]
 = Σ_{u∈U} (∆_t(u, v1) + 2γ/√n)·(∆_t(u, v2) + 2γ/√n)
 = Σ_{u∈U} ∆_t(u, v1)∆_t(u, v2) + 2(γ/√n)·Σ_{u∈U} (∆_t(u, v1) + ∆_t(u, v2)).

Now let t be a transcript that is good, that is, |C(t)| ≥ √n/(2γ), and also uninformative. Let S(t) ⊆ C(t) be a set of √n/(2γ) covered edges (chosen arbitrarily from C(t)), and let W1(t) ⊆ V1 and W2(t) ⊆ V2 be the endpoints of the edges in S. Since each edge (v1, v2) ∈ S(t) is covered in t,

Σ_{u∈U} ∆_t(u, v1)∆_t(u, v2) + 2(γ/√n)·Σ_{u∈U} (∆_t(u, v1) + ∆_t(u, v2)) ≥ 9/10,

and together we have

Σ_{(v1,v2)∈S(t)} [ Σ_{u∈U} ∆_t(u, v1)∆_t(u, v2) + 2(γ/√n)·Σ_{u∈U} (∆_t(u, v1) + ∆_t(u, v2)) ] ≥ (9/10)|S| = (9/20)·√n/γ.
On the other hand,

Σ_{(v1,v2)∈S(t)} [ Σ_{u∈U} ∆_t(u, v1)∆_t(u, v2) + 2(γ/√n)·Σ_{u∈U} (∆_t(u, v1) + ∆_t(u, v2)) ]
 ≤ (Σ_{u∈U} Σ_{v1∈V1} ∆_t(u, v1))·(Σ_{u∈U} Σ_{v2∈V2} ∆_t(u, v2)) + 2(γ/√n)·|S(t)|·(Σ_{u∈U} Σ_{v1∈V1} ∆_t(u, v1) + Σ_{u∈U} Σ_{v2∈V2} ∆_t(u, v2))
 ≤ (10αn^{1/4})² + 2(γ/√n)·(√n/(2γ))·10αn^{1/4}
 ≤ (100α² + 10α)·√n.

We therefore have

(100α² + 10α)·√n ≥ (9/20)·√n/γ,

contradicting our assumption about α.
Streaming Lower Bounds. There is a known connection between communication complexity, specifically one-way communication, and space complexity in the data-stream model. In this model the input arrives as an ordered sequence that must be accessed in order and can be read only once, while the space complexity is defined as the maximal size of the memory used at any point of the computation. As demonstrated in [4], there is a generic reduction which shows that lower bounds on the one-way communication complexity of a problem are also lower bounds on the space complexity of the same problem in the data-stream model. Consequently, we get a corresponding lower bound of Ω(n^{1/4}) on the space complexity of detecting a triangle edge (with the input graph distribution identical to the one in our model) in the data-stream model.
We present here a sketch of the proof, as the data-stream model is not the focus of this work; for more details on the relationship between lower bounds in the two models refer to [4, 20].
Assume to the contrary that there exists an algorithm A that solves triangle-edge detection with space complexity o(n^{1/4}) in the data-stream model. This implies a one-way 3-player protocol Π of complexity o(n^{1/4}), which yields a contradiction (our "extended" one-way model is even more powerful than the more standard one-way model used in this reduction, where Alice sends one message to Bob, who then sends one message to Charlie, who has to output the answer), proving our initial assumption false.
More concretely, Π entails Alice running A on her input, which is viewed as the beginning of the stream, then sending the content of the memory (which is limited to o(n^{1/4}) bits) to Bob, who continues the computation of A on his input, which is viewed as the continuation of the stream, and once again sends the content of the memory to Charlie, who concludes the computation of A on his input, the final segment of the stream.
We can apply the same reduction to the extended one-way lower bounds we derive later in this chapter for a more general average degree d = O(√n).
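The reduction is mechanical enough to express in code. The sketch below assumes a hypothetical streaming-algorithm interface with feed, state, and from_state; the real reduction only needs that the memory contents can be serialized into the message.

```python
def one_way_protocol_from_streaming(StreamAlg, alice, bob, charlie):
    """Simulate a streaming algorithm as a 3-player one-way protocol:
    each message is the algorithm's memory state (at most its space bound)."""
    A = StreamAlg()
    for e in alice:
        A.feed(e)                         # Alice processes her stream segment
    B = StreamAlg.from_state(A.state)     # message 1: Alice -> Bob
    for e in bob:
        B.feed(e)
    C = StreamAlg.from_state(B.state)     # message 2: Bob -> Charlie
    for e in charlie:
        C.feed(e)
    return C.output()                     # Charlie announces the answer
```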
4.2.3 Simultaneous Communication
For simultaneous protocols, it is not enough to have some covered edge that also appears in Charlie’s input:
the referee needs to know (or believe) that it is in Charlie’s input — that is, with good probability, the edge
the referee outputs has a large posterior probability of being in Charlie’s input, given Charlie’s message.
Say that edge e is reported by a transcript t if PrE∼µ|t [e ∈ E] ≥ 9/10. The goal of the players is to
provide the referee with some edge that is covered by Alice and Bob and also reported by Charlie.
We show that the "best" strategy for the players is to choose a set T ⊆ V1 × V2 of Θ(n) edges, and have Alice and Bob try to cover edges from T and Charlie report edges from T. The crux of the lower bound is showing that in order to target a fixed set of edges T, Alice and Bob must give up their quadratic advantage: whereas in our analysis of the one-way lower bound the sum of the cover probabilities was bounded by the square of the total increase of the individual edge probabilities (Σ_e ∆_t(e)), here we show that it can be bounded linearly, yielding a lower bound of Ω(√n) instead of Ω(n^{1/4}).
Fix a deterministic simultaneous protocol Π, where the messages sent by the three players are M 1 , M 2
and M 3 , respectively. Let Π(m1 , m2 , m3 ) denote the edge output by the referee upon receiving messages
m1 , m2 and m3 from the three players. We freely interchange the messages with the inputs to the respective
players, since the protocol is deterministic; e.g., we write Π(E 1 , E 2 , E 3 ) to indicate the referee’s output
upon receiving the messages sent by the players on input (E 1 , E 2 , E 3 ).
Let C = α√n be the number of bits sent by each player, where α will be fixed later. Let δ denote the error of Π on µ. Our goal is to show that when γ and δ are sufficiently small, we must have α = Ω(1), so the communication complexity of the protocol is Ω(√n).
In a simultaneous protocol, the messages sent by the players are independent of each other given the input. In our case, because the inputs are also independent of each other, the messages are independent even without conditioning on a particular input. We therefore abuse notation slightly by omitting parts of the transcript that are not relevant to the event at hand. Specifically, we let Rep(mi) denote the set of edges reported by a message mi of player i (this is independent of the other players' messages), and we let C(m1, m2) denote the edges covered by messages m1, m2 of Alice and Bob, respectively (again, this is independent of Charlie's message). We also sometimes write the player's input instead of its message; because the protocol is deterministic, the message is a function of the input.
In any simultaneous protocol, the goal of the players is to provide the referee with an edge in Charlie’s
input that is both reported by Charlie and covered by Alice and Bob:
Lemma 4.11. The probability that there exists an edge that is both reported by Charlie and covered by Alice and Bob is at least 1 − 10δ. That is,

Pr[Rep(M3) ∩ C(M1, M2) ≠ ∅] ≥ 1 − 10δ.
Proof. If the referee outputs an edge that is both covered and reported, then of course there must exist such an edge. Let us therefore bound the probability that the referee outputs an edge that is either not reported or not covered. Call a triplet (m1, m2, m3) of messages "bad" if Π(m1, m2, m3) = e, where e is either not reported (e ∉ Rep(m3)) or not covered (e ∉ C(m1, m2)).
The protocol errs whenever it outputs an edge e that is not in Charlie's input E3, or an edge that does not form a triangle together with some node u ∈ U. If e = (v1, v2) is not reported (in m3), then Pr[e ∈ E3 | M3 = m3] < 9/10, and if e is not covered (in m1, m2), then Pr[∃u ∈ U : (u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | M1 = m1, M2 = m2] < 9/10. Therefore, each bad triplet of messages contributes at least 1/10 to the error probability of the protocol. Together we have

δ ≥ Pr[Π errs] ≥ Σ_{bad (m1,m2,m3)} Pr[(M1, M2, M3) = (m1, m2, m3)]·(1/10) = Pr[(M1, M2, M3) is bad]/10.

The claim follows.
By Lemma 4.11, we see that the players’ “best strategy” is to try to “coordinate” the edges reported by
Charlie with the edges covered by Alice and Bob, so that the referee can find an edge in the intersection.
Indeed, as a corollary we obtain:

Corollary 4.12. E[Σ_{e∈Rep(E3)} Pr[Cov(e)]] ≥ 1 − 10δ.

Proof. Fix Rep(E3) = R. By union bound and the independence of the players' inputs,

Pr[R ∩ C(M1, M2) ≠ ∅ | Rep(E3) = R] ≤ Σ_{e∈R} Pr[e ∈ C(M1, M2)] = Σ_{e∈R} Pr[Cov(e)].

Therefore,

E[Σ_{e∈Rep(E3)} Pr[Cov(e)]]
 = Σ_R E[Σ_{e∈Rep(E3)} Pr[Cov(e)] | Rep(E3) = R]·Pr[Rep(E3) = R]
 = Σ_R (Σ_{e∈R} Pr[Cov(e)])·Pr[Rep(E3) = R]
 ≥ Σ_R Pr[R ∩ C(M1, M2) ≠ ∅ | Rep(E3) = R]·Pr[Rep(E3) = R] ≥ 1 − 10δ,

where the final inequality is Lemma 4.11.
Analyzing Charlie’s messages. First, observe that Charlie (and the other players) cannot report too many
edges, except with small probability. Each reported edge is “a little expensive”:
Lemma 4.13. Let mi be a message sent by player i. Assume that γ < 1/2. If e ∈ Rep(mi), then for sufficiently large n we have D(π(X_e | M_i = mi) ‖ π(X_e)) ≥ (9/40)·log n.
Proof. Since e ∈ Rep(mi), the posterior probability that X_e = 1 is at least 9/10 > γ/√n. Because D(p ‖ q) increases as |p − q| increases, for sufficiently large n,

D(π(X_e | M_i = mi) ‖ π(X_e)) ≥ D(9/10 ‖ γ/√n)
 = (9/10)·log((9/10)/(γ/√n)) + (1/10)·log((1/10)/(1 − γ/√n))
 = −H(1/10) + (9/10)·log(√n/γ) + (1/10)·log(1/(1 − γ/√n))
 ≥ −1 + (9/10)·(log n)/2 ≥ (9/40)·log n.

We used the fact that γ < 1/2, so (9/10)·log(1/γ) > 0, and also that 1 − γ/√n < 1, and hence log(1/(1 − γ/√n)) > 0.
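Numerically, the cost of a single reported edge indeed grows like (9/40) log n. The Python check below (reusing the Bernoulli KL helper from the earlier snippet) compares D(9/10 ‖ γ/√n) against that threshold; the value of γ is an arbitrary choice for illustration.

```python
import math

def kl_bernoulli(q, p):
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(q, p) + term(1 - q, 1 - p)

gamma = 0.1
for n in [10**4, 10**6, 10**8]:
    prior = gamma / math.sqrt(n)
    # posterior of a reported edge is at least 9/10
    assert kl_bernoulli(0.9, prior) >= 9 * math.log2(n) / 40
```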
It follows that with a budget of C bits, Charlie can only report roughly C edges (in fact, somewhat fewer) in expectation:

Corollary 4.14. E[|Rep(E3)|] ≤ (40α/(9 log n))·√n.

Proof. By the super-additivity of information,

α√n = |M3| ≥ I(M3; E3) ≥ Σ_{e∈E3} I(M3; X_e)
 = E_{m3∼M3}[Σ_{e∈E3} D(π(X_e | M3 = m3) ‖ π(X_e))]
 ≥ E_{m3∼M3}[Σ_{e∈Rep(m3)} D(π(X_e | M3 = m3) ‖ π(X_e))]
 ≥ E_{m3∼M3}[|Rep(m3)|·(9/40)·log n].

The claim follows.
As we said above, since the referee "wants" to output an edge that is both reported and covered, the goal of the players should be to provide it with such an edge. Let us rank the edges in V1 × V2 according to the probability that they are covered by Alice and Bob: we write V1 × V2 = {e1, ..., e_{n²}}, where i ≤ j iff Pr[Cov(ei)] ≥ Pr[Cov(ej)], breaking ties arbitrarily.
Let Top(E3) denote the set of |Rep(E3)| highest-ranking edges in E3. Clearly,

Σ_{e∈Rep(E3)} Pr[Cov(e)] ≤ Σ_{e∈Top(E3)} Pr[Cov(e)].    (6)
That is, “it is in Charlie’s interest” to report edges from Top(E 3 ), as this maximizes the probability that
some reported edge is also covered.
Let T = {e1, ..., em} be the m highest-ranking edges in V1 × V2, where

m = (9/(80α))·n.

For any integer k ≥ 1 we have

Σ_{e ∈ {e1, ..., e_{k·m}}} Pr[Cov(e)] ≤ k·Σ_{e∈T} Pr[Cov(e)].

Therefore,

E[Σ_{e∈Rep(E3)} Pr[Cov(e)]] ≤ E[Σ_{e∈Top(E3)} Pr[Cov(e)]]
 = Σ_{i=1}^{⌈log(n²/m)⌉} E[Σ_{e∈Top(E3)} Pr[Cov(e)] | 2^i·m ≤ |Top(E3)| ≤ 2^{i+1}·m]·Pr[2^i·m ≤ |Top(E3)| ≤ 2^{i+1}·m]
 ≤ Σ_{i=1}^{⌈log(n²/m)⌉} 2^{i+1}·(Σ_{e∈T} Pr[Cov(e)])·(E[|Top(E3)|]/(2^i·m))
 ≤ log n · 2 · (40α/(9 log n))·(√n/m)·Σ_{e∈T} Pr[Cov(e)]
 = (Σ_{e∈T} Pr[Cov(e)])/√n.    (7)

Analyzing the cover probabilities. We show that it is not possible for the two other players to have

Σ_{e∈T} Pr[Cov(e)] ≥ β·√n,

where β is a constant whose value will be fixed later.
Notation. Let V_H be the set of nodes in V1 ∪ V2 whose degree in T is at least √n, and let V_L be the remaining nodes in V1 ∪ V2. Also, let V_i^a = V^a ∩ V_i, for a ∈ {L, H} and i ∈ {1, 2}.
Since |T| ≈ n, we have |V_H| ≤ c·√n, where c = 9/(80α).
Let T1 = (V_1^L × V2) ∪ (V1 × V_2^H) and let T2 = (V1 × V_2^L) ∪ (V_1^H × V2). For edges in T1, their endpoints in V1 all have low degree in T1 (edges in V_1^L × V2 have degree at most √n in T, and edges in V1 × V_2^H also have low degree in T1, since |V_H| ≤ c·√n). We have T ⊆ T1 ∪ T2, so it suffices to bound the sum of the cover probabilities in T1 and the sum in T2. (The union is not disjoint; e.g., edges in V_1^L × V_2^L appear in both sets, so we may be over-counting.) Let N_S(v) denote the nodes adjacent to node v in S ⊆ V1 × V2.
We let M1, M2 be random variables denoting the messages sent by the two players, respectively. Let M1, M2 be the sets of all possible messages for each player (resp.).
Bounding the cover probabilities in T. For any pair of messages m1, m2, if e = (v1, v2) ∈ C(m1, m2), then by union bound,

Σ_{u∈U} Pr[(u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | M1 = m1, M2 = m2] ≥ Pr[∃u : (u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | M1 = m1, M2 = m2] ≥ 9/10.    (8)

Because the edges in E1 and E2 remain independent given M1 = m1, M2 = m2, and the messages are also independent of each other and of the other player's input, for each u ∈ U,

Pr[(u, v1) ∈ E1 ∧ (u, v2) ∈ E2 | M1 = m1, M2 = m2]
 = Pr[(u, v1) ∈ E1 | M1 = m1, M2 = m2]·Pr[(u, v2) ∈ E2 | M1 = m1, M2 = m2]
 = Pr[(u, v1) ∈ E1 | M1 = m1]·Pr[(u, v2) ∈ E2 | M2 = m2].

Consider first the edges in T1. Plugging the above into (8), and also writing Pr[(u, v1) ∈ E1 | M1 = m1] = ∆_{m1}(u, v1) + 2γ/√n (where ∆_{m1} is the L1 difference between the posterior and the prior), we obtain:

Σ_{u∈U} (∆_{m1}(u, v1) + 2γ/√n)·Pr[(u, v2) ∈ E2 | M2 = m2] ≥ 9/10.    (9)

Multiplying both sides by Pr[M2 = m2], and summing across all m2 such that (v1, v2) ∈ C(m1, m2), we get that for any m1,

Σ_{m2 : (v1,v2)∈C(m1,m2)} Σ_{u∈U} (∆_{m1}(u, v1) + 2γ/√n)·Pr[(u, v2) ∈ E2 | M2 = m2]·Pr[M2 = m2]
 = Σ_{u∈U} (∆_{m1}(u, v1) + 2γ/√n)·(Σ_{m2 : (v1,v2)∈C(m1,m2)} Pr[(u, v2) ∈ E2 | M2 = m2]·Pr[M2 = m2])
 ≥ (9/10)·Σ_{m2 : (v1,v2)∈C(m1,m2)} Pr[M2 = m2] = (9/10)·Pr[Cov(v1, v2) | M1 = m1].

Notice that for any u ∈ U,

Σ_{m2 : (v1,v2)∈C(m1,m2)} Pr[(u, v2) ∈ E2 | M2 = m2]·Pr[M2 = m2] ≤ Σ_{m2∈M2} Pr[(u, v2) ∈ E2 | M2 = m2]·Pr[M2 = m2] = Pr[(u, v2) ∈ E2] = γ/√n.

Therefore,

(Σ_{u∈U} (∆_{m1}(u, v1) + 2γ/√n))·(γ/√n) ≥ (9/10)·Pr[Cov(v1, v2) | M1 = m1].
Now, taking the expectation over all m1,

(γ/√n)·E_{M1}[Σ_{u∈U} (∆_{M1}(u, v1) + 2γ/√n)] ≥ (9/10)·E_{M1}[Pr[Cov(v1, v2) | M1]] = (9/10)·Pr[Cov(v1, v2)].

Summing across all v2 such that (v1, v2) ∈ T1, and using the fact that the degree of v1 in T1 is at most c·√n,

c√n·(γ/√n)·E_{M1}[Σ_{u∈U} (∆_{M1}(u, v1) + 2γ/√n)] ≥ Σ_{v2∈N_{T1}(v1)} ((γ/√n)·E_{M1}[Σ_{u∈U} (∆_{M1}(u, v1) + 2γ/√n)]) ≥ (9/10)·Σ_{v2∈N_{T1}(v1)} Pr[Cov(v1, v2)].

And now, summing over all v1 ∈ V1,

cγ·E_{M1}[Σ_{v1∈V1} Σ_{u∈U} (∆_{M1}(u, v1) + 2γ/√n)] ≥ (9/10)·Σ_{(v1,v2)∈T1} Pr[Cov(v1, v2)].

Using Lemma 4.6 we obtain:

Σ_{e∈T1} Pr[Cov(e)] ≤ (cγ(α + 2)/(9/10))·√n.

For edges in T2 the argument is symmetric. Together we have:

Σ_{e∈T} Pr[Cov(e)] ≤ (2cγ(α + 2)/(9/10))·√n ≤ (1/25)·(α + 2)·√n,

assuming that γ is a sufficiently small constant.
Combining this with (7), we see that we must have α ≥ 23/25.
4.3 Lifting 3-player Lower Bounds to k Players
Using symmetrization [33], we "lift" our lower bounds for a constant number of players to general k-player lower bounds. (Symmetrization was developed in [33] to lift unrestricted 2-player lower bounds to unrestricted k-player lower bounds.) Interestingly, our symmetrization reduction transforms a simultaneous k-player protocol into a one-way 3-player (or 2-player) protocol, so in order to obtain lower bounds on simultaneous protocols for k players we need to first prove lower bounds on one-way protocols for a small number of players. This curious behavior turns out to be inherent, at least for large k: a simultaneous protocol can emulate a one-way protocol, by having each player send their entire input to the referee with probability 1/k, and otherwise send their message under the one-way protocol. The referee can, with constant probability, take the role of one of the players whose input it received, and compute the answer using the messages from the other players. When k is sufficiently large, this may be cheaper than the simultaneous protocol.
We say that a k-player distribution µ is symmetric if the marginal distribution of each player’s input is
the same.
Theorem 4.15. Let P be a graph property, and suppose that µ is a symmetric 3-player input distribution such that CC^{3,→}_{µ,δ}(P^ǫ) = C. Then there is a k-player input distribution η such that CC^{k,sim}_{η,δ}(P^ǫ) ≥ (k/2)·C.
Proof. Let η be the following distribution: we sample (X1, X2, X3) ∼ µ; we give X1 and X2 to two random players that are not player k, and the remaining players all receive X3.
We show by reduction from the 3-player case that η is hard for k players. Let Π be a simultaneous protocol for k players that solves P^ǫ on η with error probability δ.
We construct a 3-player protocol Π′ as follows: Alice and Bob publicly choose two random IDs i, j ∈ [k] (i ≠ j), and take on the roles of players i and j, using their actual inputs under µ. Charlie will play the role of all the remaining players, using his input for each one of them, and also the role of the referee (who has no input). Let embed(i, j, X) denote the input thus constructed, where X = (X1, X2, X3). The resulting k-player input distribution is exactly η.
To simulate the execution of Π, Alice and Bob simply send Charlie the messages players i and j would send to the referee under Π. Charlie computes the messages that each player ℓ ∈ [k] \ {i, j} would send, and then, using these messages and the messages received from Alice and Bob, computes the output of the referee.
The simulation adds no error: on each input, it exactly computes the referee's output (or rather, it generates the correct distribution for the referee's output). Therefore,

Pr[Π′ errs] = Σ_X µ(X)·Pr[Π′ errs on X]
 = Σ_X µ(X)·(1/(k(k − 1)))·Σ_{i,j} Pr[Π errs on embed(i, j, X)]
 = Σ_Y η(Y)·Pr[Π errs on Y] ≤ δ.
What is the expected communication of Π′? Observe that since Π is simultaneous, each player's transcript is a (random) function of only its own input: in particular, the distribution of player i's transcripts is the same under any joint input distribution where player i's input has the same marginal. In η, all players' inputs have the same marginal distribution, namely the marginal distribution of each player's input in µ. Therefore,

E_{X∼µ}[|Π′(X)|] = E_{i,j∼U[k],X∼µ}[|Π_i(embed(i, j, X))| + |Π_j(embed(i, j, X))|]
 = E_{i,j∼U[k],Y∼η}[|Π_i(Y)| + |Π_j(Y)|]
 = 2·E_{i∼U[k],Y∼η}[|Π_i(Y)|]
 = (2/k)·Σ_{i=1}^{k} E_{Y∼η}[|Π_i(Y)|]
 = (2/k)·E_{Y∼η}[Σ_{i=1}^{k} |Π_i(Y)|] = (2/k)·CC(Π).
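A short Python sketch of the embedding used in this proof; sample_mu3 is a hypothetical hook returning one sample (X1, X2, X3) ∼ µ, and player k is represented by index k − 1.

```python
import random

def embed(i, j, X, k):
    """k-player input: players i and j get X1 and X2; all others get X3."""
    X1, X2, X3 = X
    inputs = [X3] * k
    inputs[i], inputs[j] = X1, X2
    return inputs

def sample_eta(sample_mu3, k, rng=random):
    """Draw from eta: embed a mu-sample at two random positions
    that are not player k."""
    i, j = rng.sample(range(k - 1), 2)
    return embed(i, j, sample_mu3(), k)
```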
This result implies an Ω(k·n^{1/4}) lower bound for the problem of k players trying to find a triangle edge in a graph of average degree d = Θ(√n) via simultaneous communication.
For deterministic and symmetric protocols we can do a little better, by modifying the reduction: instead
of constructing a one-way protocol, we construct a simultaneous protocol — using the fact that the original
k-player protocol is deterministic, Charlie can pick one of the players he simulates and send the message of
only that one player to the referee, because we know that all the players simulated by Charlie will send the
same message (as they receive the same input).
4.4 Lower Bound for Degree O(1)
For graphs with average degree O(1), a lower bound was shown in the streaming model in [27], by reducing the Boolean Hidden Matching problem, introduced in [28], to triangle counting approximation in streaming. The same reduction yields a lower bound on triangle testing in two-player one-way communication complexity. We present the reduction for the sake of completeness, and to show that it indeed holds in our model as well.
We use the bound shown in [36], but need only the bound for matchings (rather than hypermatchings), so we give here a simplified version of the problem:
Definition 12 (Boolean Matching). In the Boolean Matching problem, denoted BM_n, Alice receives a vector x ∈ {0, 1}^{2n}, and Bob receives a perfect matching M on the 2n vertices {1, ..., 2n} and a vector w ∈ {0, 1}^n. We represent M as an n × 2n matrix, where each row represents one edge of the matching: if the i-th edge of the matching is {j1, j2} ⊆ [2n], then the i-th row of the matrix contains 1 in columns j1 and j2, and 0 elsewhere.
The goal of the players is to distinguish the case where

Mx ⊕ w = 0⃗

from the case where

Mx ⊕ w = 1⃗.
Theorem 4.16. The randomized one-way communication complexity of testing triangle-freeness in graphs with average degree O(1) is Ω(√n).
Proof. Given input x for Alice and M, w for Bob, the players construct the following graph G = (V, E), where V = {u} ∪ ([2n] × {0, 1}):
• For each bit i ∈ [2n] where xi = 0, Alice adds the edge {u, (i, 0)}; for each bit i ∈ [2n] where xi = 1, she adds the edge {u, (i, 1)}.
• For each edge ej = {j1, j2} in his matching,
  – If wj = 0, Bob adds edges {(j1, 0), (j2, 0)} and {(j1, 1), (j2, 1)};
  – If wj = 1, Bob adds edges {(j1, 0), (j2, 1)} and {(j1, 1), (j2, 0)}.
For each j ∈ [n], let Mj = {j1, j2}.
A triangle appears in the subgraph induced by the vertices {u, (j1, 0), (j1, 1), (j2, 0), (j2, 1)} iff either wj = 0 and x_{j1} = x_{j2}, or wj = 1 and x_{j1} ≠ x_{j2}. In other words, a triangle appears iff (Mx ⊕ w)_j = 0. No other triangles appear in the graph. Therefore, if Mx ⊕ w = 0⃗ then G contains n edge-disjoint triangles, and if Mx ⊕ w = 1⃗ then G is triangle-free. In the first case, G is 1-far from triangle-freeness.
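The construction is simple enough to write out. The following Python sketch builds the reduction graph from a Boolean Matching instance; the vertex naming is an illustrative choice, not prescribed by the text.

```python
def reduction_graph(x, matching, w):
    """x: Alice's bits over [2n]; matching: n disjoint pairs (j1, j2)
    covering [2n]; w: Bob's bits over [n]. Vertices: 'u' and (i, b)."""
    edges = set()
    for i, xi in enumerate(x):                # Alice's star edges
        edges.add(frozenset({'u', (i, xi)}))
    for j, (j1, j2) in enumerate(matching):   # Bob's matching gadgets
        b = w[j]
        edges.add(frozenset({(j1, 0), (j2, b)}))
        edges.add(frozenset({(j1, 1), (j2, 1 - b)}))
    return edges
```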
4.5 Other Degrees
We now show how we can extend a lower bound for a given average degree, d, to any lower degree, d′ , by
embedding dense inputs of degree d into sparse graphs such that the average degree evens out to be d′ .
Lemma 4.17. Let d = Θ(n^c) denote the average degree of the graph, and let CC(T^{ǫ,d,n}) = Θ(f(n)) denote the communication complexity as a function of n, the number of vertices. Then for any d′ ≤ d, we have CC(T^{ǫ,Θ(d′),n}) = Θ(f((d′n)^{1/(1+c)})).
Proof. For graphs with n′ = (d′n)^{1/(1+c)} vertices and average degree Θ((n′)^c), the communication complexity is Θ(f(n′)). We examine the following subset of graphs with n vertices and average degree d′: any such graph G is a union of n − n′ isolated nodes and a graph G′ which is either triangle-free or ǫ-far from being triangle-free, and has n′ vertices and average degree (n′)^c. The average degree of G is Θ((n′)^c)·(n′/n) = Θ(d′), and its distance to being triangle-free is identical to that of G′, as it has no edges outside of G′.
Since any triangle in G must be contained in G′, solving the problem on G is equivalent to solving it on G′. And since we asserted that the complexity of the problem for graphs with the stated properties of G′ is Θ(f(n′)) = Θ(f((d′n)^{1/(1+c)})), it is also the complexity of the problem for graphs of average degree Θ(d′).
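The padding step is equally mechanical. Here is a minimal Python sketch under the assumption that vertices are integers and the dense instance lives on the first n′ of them.

```python
def pad_with_isolated_vertices(edges, n_prime, n):
    """Embed a dense n'-vertex instance into an n-vertex graph by adding
    n - n' isolated vertices. Triangles and the distance to triangle-
    freeness are unchanged; the average degree scales by n'/n."""
    assert all(u < n_prime and v < n_prime for u, v in edges)
    vertices = list(range(n))        # vertices n'..n-1 remain isolated
    avg_degree = 2 * len(edges) / n  # evens out to Theta(d') by choice of n'
    return vertices, set(edges), avg_degree
```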
Note that Lemma 4.17 holds regardless of the model of communication. Therefore, as a corollary, we can generalize the lower bounds we derived directly for graphs of average degree √n to all d = O(√n) (for 3 players in both cases). Specifically, the Ω(n^{1/4}) bound for one-way communication and the Ω(√n) bound for simultaneous communication extend to Ω((nd)^{1/6}) and Ω((nd)^{1/3}), respectively. Furthermore, Lemma 4.17, combined with Theorem 4.15 and the lower bounds we proved in Section 4.2, implies Theorem 4.1, the main result of this section.
4.6 Discussion: Lower Bounds on the Communication Complexity of Property-Testing
Lower bounds on the "canonical" problems in communication complexity, such as Set Disjointness and Gap Hamming Distance [12], cannot be leveraged to obtain property-testing lower bounds, at least for triangle-freeness. Some classical problems do feature a gap, where we are only interested in distinguishing two cases that are "far" from each other; however, for property-testing lower bounds, the gap needs to be around zero (either we have no triangles, or we have many edge-disjoint triangles), while existing gap problems typically become easy unless the gap is centered far from zero (for example, in Gap Hamming Distance, the players get vectors x, y ∈ {0, 1}^n, and they need to determine whether their Hamming distance is greater than n/2 + √n or smaller than n/2 − √n).
other (if they share an edge), the direct sum approach to proving lower bounds, which works well when we
can break the problem up into many independent pieces, does not apply here.
5 Summary
In this work we showed that in the setting of communication complexity, property testing can be significantly
easier than exactly testing if the input satisfies the property: exactly determining whether the input graph
contains a triangle was shown to require Ω(knd) bits in [38], but we showed that weakening the requirement
to property-testing improves the complexity, and even simultaneous protocols can do better than the best
exact algorithm with unrestricted communication. However, the problem does not appear to become trivial,
as shown by our lower bounds for simultaneous and restricted one-way protocols. Table 1 summarizes our
main results.
Problem / model / bound type, by average-degree regime (d = Θ(1), d = O(√n), d = Ω(√n)):

• △-freeness, unrestricted communication, upper bound: Õ(k·(nd)^{1/4} + k²).
• △-freeness, simultaneous communication, upper bound: Õ(k√n) for d = O(√n); Õ(k·(nd)^{1/3}) for d = Ω(√n).
• △-edge detection, "extended" one-way communication, 3 players, lower bound: Ω((nd)^{1/6}) for d = O(√n).
• △-edge detection, simultaneous communication, 3 players, lower bound: Ω((nd)^{1/3}) for d = O(√n).
• △-edge detection, simultaneous communication, k players, lower bound: Ω(k·(nd)^{1/6}) for d = O(√n).
• △-freeness, simultaneous communication, lower bound: Ω(√n) for d = Θ(1).

Table 1: Results summary.
We have provided non-trivial upper bounds for the entire relevant degree range for both simultaneous
and unrestricted communication. Our solutions have several desirable qualities. First, they can overcome
the obstacle of not knowing the average degree in advance. Additionally, the algorithms solve not only the
problem of triangle-freeness, but more specifically, the problem of triangle detection, which can only be
harder. Finally, all solutions have one-sided error: a graph is reported to contain a triangle only if a triangle was actually detected. We also address other variants and relaxations, such as the case where all inputs
are disjoint, or a case where the players communicate via a blackboard visible to everyone, and describe
how these guarantees can improve the complexity. In terms of more general contributions, we describe how
to efficiently implement typical building blocks used in standard property-testing solutions, of which the
most notable is the proposed procedure for approximating a vertex degree up to a constant, which can be
used to solve the more general problem of approximating the number of distinct elements in a set.
The task of proving non-trivial lower bounds for triangle-freeness is considerably harder. We have discussed the shortcomings of mainstream techniques in communication complexity for tackling this problem.
Nevertheless, we have been able to produce a tight lower bound for the closely related problem of triangle-edge detection for d = Θ(√n) in the simultaneous model. We have also been able to prove a lower bound
for an extended variation of one-way communication, which enabled us to derive a bound for k players.
Moreover, we showed how to extend these bounds, by a rather generic procedure, to lower average degrees.
Finally, we demonstrated how to translate our one-way bounds into streaming-lower bounds, once again via
a generic (and well known) reduction.
We believe that extending the lower bounds to protocols with unrestricted rounds, and strengthening
them to apply to testing triangle-freeness rather than finding a triangle edge, will require techniques from
Fourier analysis, like the ones used in [28] to show the lower bound on Boolean Hidden Matching (from
which we reduce in Section 4.4). In addition, we believe that devising a hard distribution for dense graphs of degree d = ω(√n), with desirable properties for proving lower bounds, will require some sophisticated utilization of Behrend graphs [3]. Finally, a worthwhile topic for related future research could be generalizing
our techniques for detecting a wider class of subgraphs or testing other properties, relying on the propertytesting relaxation. As demonstrated by this work, this relaxation can significantly reduce the complexity
of an otherwise maximally hard problem, but not to a degree that it becomes trivial and uninteresting, as
suggested by our lower bounds. More generally, there is much room for a more elaborate investigation of
the interrelation between the models of communication complexity and property-testing, as alongside innate
distinctions there seem to exist non-trivial similarities, the extent of which is yet to be determined.
Acknowledgements
We thank Noga Alon, Eldar Fischer and Dana Ron for fruitful discussions.
References
[1] Noga Alon. Testing subgraphs in large graphs. Random Struct. Algorithms, 21(3-4):359–370, October
2002.
[2] Noga Alon, Eldar Fischer, Michael Krivelevich, and Mario Szegedy. Efficient testing of large graphs.
Combinatorica, 20(4):451–476, 2000.
[3] Noga Alon, Tali Kaufman, Michael Krivelevich, and Dana Ron. Testing triangle-freeness in general
graphs. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm,
SODA ’06, pages 279–288, 2006.
[4] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency
moments. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing,
STOC ’96, pages 20–29, 1996.
[5] Noga Alon and Asaf Shapira. Testing subgraphs in directed graphs. In Proceedings of the Thirty-fifth
Annual ACM Symposium on Theory of Computing, STOC ’03, pages 700–709, 2003.
[6] Christos Boutsidis, David P. Woodruff, and Peilin Zhong. Optimal principal component analysis in
distributed and streaming models. In Proceedings of the 48th Annual ACM SIGACT Symposium on
Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 236–249, 2016.
[7] Zvika Brakerski and Boaz Patt-Shamir. Distributed discovery of large near-cliques. Distributed Computing, 24(2):79–89, 2011.
[8] Mark Braverman, Faith Ellen, Rotem Oshman, Toniann Pitassi, and Vinod Vaikuntanathan. A tight
bound for set disjointness in the message-passing model. In Proceedings of the 2013 IEEE 54th Annual
Symposium on Foundations of Computer Science, pages 668–677, 2013.
[9] Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, and David P. Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In
Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, pages 1011–1020,
2016.
[10] Keren Censor-Hillel, Eldar Fischer, Gregory Schwartzman, and Yadu Vasudev. Fast distributed algorithms for testing graph properties. In Distributed Computing: 30th International Symposium, DISC
2016, Paris, France, September 27-29, 2016. Proceedings, pages 43–56, 2016.
[11] Keren Censor-Hillel, Petteri Kaski, Janne H. Korhonen, Christoph Lenzen, Ami Paz, and Jukka
Suomela. Algebraic methods in the congested clique. In Proceedings of the 2015 ACM Symposium on
Principles of Distributed Computing, PODC ’15, pages 143–152, 2015.
[12] Amit Chakrabarti and Oded Regev. An optimal lower bound on the communication complexity of
gap-hamming-distance. In Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC ’11, pages 51–60, 2011.
[13] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). 2006.
[14] Danny Dolev, Christoph Lenzen, and Shir Peled. ”tri, tri again”: Finding triangles and small subgraphs
in a distributed setting. In Proceedings of the 26th International Conference on Distributed Computing,
DISC’12, pages 195–209, 2012.
[15] Andrew Drucker, Fabian Kuhn, and Rotem Oshman. On the power of the congested clique model. In
Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, PODC ’14, pages
367–376, 2014.
[16] Talya Eden, Amit Levi, Dana Ron, and C. Seshadhri. Approximately counting triangles in sublinear
time. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley,
CA, USA, 17-20 October, 2015, pages 614–633, 2015.
[17] Eldar Fischer. The art of uninformed decisions. Bulletin of the EATCS, 75:97, 2001.
[18] Jacob Fox. A new proof of the graph removal lemma. Annals of Mathematics, pages 561–579, 2011.
[19] Pierre Fraigniaud, Ivan Rapaport, Ville Salo, and Ioan Todinca. Distributed testing of excluded subgraphs. In Distributed Computing - 30th International Symposium, DISC 2016, Paris, France, September 27-29, 2016. Proceedings, pages 342–356, 2016.
[20] Anna Gál and Parikshit Gopalan. Lower bounds on streaming algorithms for approximating the length
of the longest increasing subsequence. SIAM J. Comput., 39(8):3463–3479, 2010.
[21] Oded Goldreich. Combinatorial property testing – a survey. Randomization Methods in Algorithm
Design, page 4560, 1998.
[22] Oded Goldreich, Shari Goldwasser, and Dana Ron. Property testing and its connection to learning and
approximation. J. ACM, 45(4):653–750, 1998.
[23] Oded Goldreich and Dana Ron. Property testing in bounded degree graphs. In Proceedings of the
Twenty-ninth Annual ACM Symposium on Theory of Computing, STOC ’97, pages 406–415, 1997.
[24] Oded Goldreich and Luca Trevisan. Three theorems regarding testing graph properties. Random Struct.
Algorithms, 23(1):23–57, 2003.
[25] L. Gugelmann. Testing triangle-freeness in general graphs: Lower bounds. Bachelor thesis, Dept. of
Mathematics, ETH, Zurich, 2006.
[26] Zengfeng Huang and Pan Peng. Dynamic graph stream algorithms in o(n) space. In 43rd International
Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy,
pages 18:1–18:16, 2016.
[27] John Kallaugher and Eric Price. A hybrid sampling scheme for triangle counting. In Proceedings of
the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona,
Spain, Hotel Porta Fira, January 16-19, pages 1778–1797, 2017.
[28] Iordanis Kerenidis and Ran Raz. The one-way communication complexity of the boolean hidden
matching problem. Electronic Colloquium on Computational Complexity (ECCC), 13, 2006.
[29] Eyal Kushilevitz and Noam Nisan. Communication Complexity. 1997.
[30] François Le Gall. Further Algebraic Algorithms in the Congested Clique Model and Applications to
Graph-Theoretic Problems, pages 57–70. 2016.
[31] Yi Li, Xiaoming Sun, Chengu Wang, and David P. Woodruff. On the Communication Complexity of
Linear Algebraic Problems in the Message Passing Model, pages 499–513. 2014.
[32] Ilan Newman. Private vs. common random bits in communication complexity. Inf. Process. Lett.,
39(2):67–71, 1991.
[33] Jeff M. Phillips, Elad Verbin, and Qin Zhang. Lower bounds for number-in-hand multiparty communication complexity, made easy. In Proceedings of the Twenty-third Annual ACM-SIAM Symposium
on Discrete Algorithms, SODA ’12, pages 486–501, 2012.
[34] T. Rast. Testing triangle-freeness in general graphs: Upper bounds. Bachelor thesis, Dept. of Mathematics, ETH, Zurich, 2006.
[35] Dana Ron. Algorithmic and analysis techniques in property testing. Foundations and Trends in Theoretical Computer Science, 5(2):73–205, 2009.
[36] Elad Verbin and Wei Yu. The streaming complexity of cycle counting, sorting by reversals, and other
problems. In Proceedings of the Twenty-second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’11, pages 11–25, 2011.
[37] David P. Woodruff and Qin Zhang. Tight bounds for distributed functional monitoring. In Proceedings
of the Forty-fourth Annual ACM Symposium on Theory of Computing, STOC ’12, pages 941–960,
2012.
[38] David P. Woodruff and Qin Zhang. When distributed computation is communication expensive. In
Distributed Computing: 27th International Symposium, DISC 2013, Jerusalem, Israel, October 14-18,
2013. Proceedings, pages 16–30, 2013.
[39] David P. Woodruff and Qin Zhang. An optimal lower bound for distinct elements in the message passing model. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 718–733, 2014.
Improved Inference for Checking Type Annotations
arXiv:cs/0507036v1 [] 14 Jul 2005
Peter J. Stuckey1 , Martin Sulzmann2 and Jeremy Wazny1
1
NICTA Victoria Laboratories
Department of Computer Science and Software Engineering
The University of Melbourne, Vic. 3010, Australia
{pjs,jeremyrw}@cs.mu.oz.au
2
School of Computing, National University of Singapore
S16 Level 5, 3 Science Drive 2, Singapore 117543
[email protected]
Abstract. We consider type inference in the Hindley/Milner system
extended with type annotations and constraints with a particular focus
on Haskell-style type classes. We observe that standard inference algorithms are incomplete in the presence of nested type annotations. To
improve the situation we introduce a novel inference scheme for checking type annotations. Our inference scheme is also incomplete in general
but improves over existing implementations as found e.g. in the Glasgow
Haskell Compiler (GHC). For certain cases (e.g. Haskell 98) our inference
scheme is complete. Our approach has been fully implemented as part of
the Chameleon system (experimental version of Haskell).
1 Introduction
Type inference for the Hindley/Milner system [Mil78] and extensions [Rém93,Pot98,OSW99]
of it is a heavily studied area. Surprisingly, little attention has been given to the
impact of type annotations (a.k.a. user-provided type declarations) and user-provided constraints on the type
that the constraint domain is described in terms of Haskell type classes [Jon92,HHPW94].
Type classes represent a user-programmable constraint domain which can be
used to code up almost arbitrary properties. Hence, we believe that the content
of this paper is of importance for any Hindley/Milner extension which supports
type annotations and constraints. The surprising observation is that even for
“simple” type classes type inference in the presence of type annotations becomes
a hard problem.
Example 1. The following program is a variation of an example from [JN00].
Note that we make use of nested type annotations. 1
class Foo a b where foo :: a->b->Int
instance Foo Int b
instance Foo Bool b
1
For concreteness, we annotate 1 with Int because in Haskell 1 is in general only a
number.
p y = (let f :: c -> Int
f x = foo y x
in f, y + (1::Int))
q y = (y + (1::Int), let f :: c -> Int
f x = foo y x
in f)
We introduce a two-parameter type class Foo which comes with a method foo which has the constrained type ∀a, b. Foo a b ⇒ a → b → Int. The two instance declarations state that Foo Int b and Foo Bool b hold for any b. Consider
functions p and q. In each case the subexpression y+(1::Int) forces y to be of
type Int. Note that we could easily provide a more complicated subexpression
without type annotations which forces y to be of type Int. The body of function
f x = foo y x generates the constraint Foo Int tx where tx is the type of x.
Note that this constraint is equivalent to True due to the instance declaration.
We find that f has the inferred type ∀tx .tx → Int. We need to verify that this
type subsumes the annotated type f::c->Int which is interpreted as ∀c.c →
Int. More formally, we write Cg ⊢ σi ≤ σa to denote that the inferred type
σi subsumes the annotated type σa under some constraint Cg . Suppose σi =
(∀ā.Ci ⇒ ti ) and σa = (∀b̄.Ca ⇒ ta ) where there are no name clashes between
ā and b̄. Then, the subsumption condition is (logically) equivalent to Cg |=
∀b̄.(Ca ⊃ (∃ā.Ci ∧ ti = ta )). In this statement, we assume that |= refers to
the model-theoretic entailment relation and ⊃ refers to Boolean implication.
Outermost universal quantifiers are left implicit. Note that in our system, we
only consider type equality rather than the more general form of subtyping. For
our example, we find that the subsumption condition holds. Hence, expressions
p and q are well-typed.
Let’s see what some common Haskell implementations such as Hugs [HUG]
and GHC [GHC] say. Expression p is accepted by Hugs but rejected by GHC
whereas GHC accepts q which is rejected by Hugs! Why?
In a traditional type inference scheme [DM82], constraints are generated
while traversing the abstract syntax tree. At certain nodes (e.g. let) the constraint solver is invoked. Additionally, we need to check for correctness of type
annotations (a.k.a. subsumption check). The above examples show that different traversals of the abstract syntax tree yield different results. E.g. Hugs
seems to perform a right-first traversal. We visit f::c -> Int; f x = foo y x
first without considering the constraints arising out of the left tuple component.
Hence, we find that f has the inferred type ∀tx .F oo ty tx ⇒ tx → Int where
y has type ty . This type does not subsume the annotated type. Therefore, type
inference fails. Note that GHC seems to favor a left-first traversal of the abstract
syntax tree.
The question is whether there is an inherent problem with nested type annotations, or whether it’s simply a specific problem of the inference algorithms
implemented in Hugs and GHC.
Example 2. Here is a variation of an example mentioned in [Fax03]. We make
use of the class and instance declarations from the previous example.
test y = let f :: c->Int
f x = foo y x
in f y
Note that test may be given types Int → Int and Bool → Int. The constraint Foo ty tx arising out of the program text can be satisfied either by ty = Int or by ty = Bool. However, the "principal" type of test is of the form ∀ty.(∀tx.Foo ty tx) ⇒ ty → Int. Note that the system we are considering does not allow for constraints of the form ∀tx.Foo ty tx. Hence, the above example
has no (expressible) principal type. We note that the situation is different for
(standard) Hindley/Milner with type annotations. As shown by Odersky and
Läufer [OL96], the problem of finding a solution such that σi (the inferred type)
is an instance of σa (the annotated type) can be reduced to unification under
a mixed prefix [Mil92]. Hence, we either find a principal solution φ such that
⊢ φ(σi ) ≤ φ(σa ) or no solutions. Hence, inference for Hindley/Milner with type
annotations is complete.
We conclude that type inference for Hindley/Milner with constraints and
(nested) type annotations is incomplete. The incompleteness arises because the
subsumption check not only involves a test for correctness of annotations, but
may also need to find a solution. The above example shows that in our general
case there might not necessarily be a principal solution.
In order to resurrect completeness we could impose a syntactic restriction on
the set of programs. E.g., we could simply rule out type annotations for “nested”
let-definitions, or require that the types of all lambda-bound variables occurring
in the scope of a nested annotation must be explicitly provided (although it’s
unclear whether this is a sufficient condition). In any case, we consider these as
too severe restrictions.
In fact, the simplest solution seems to be to enrich the language of constraints. Note that the subsumption condition itself is a solution to the subsumption problem. Effectively, we add constraints of the form ∀b̄.(Ca ⊃ (∃ā.Ci ∧ ti = ta)) to our language of constraints. Then, ∀ty.(∀tx.Foo ty tx) ⇒ ty → Int will
become a valid type of test in the above example. This may be a potential solution for some cases, but is definitely undesirable for Haskell where constraints
are attached to dictionaries. It is by no means obvious how to construct dictionaries [HHPW94] for “higher-order” constraints. Furthermore, we believe that
type inference easily becomes undecidable depending on the underlying primitive
constraint domain.
In this paper, we settle for a compromise between full type inference and full
type checking. We only check for the correctness of type annotations. But before
checking we infer as much as possible. Our contributions are:
– We introduce a novel formulation of improved inference for checking annotations in terms of Constraint Handling Rules (CHRs) [Frü95]. While inferring
the type of some inner expression we can reach the result of inference for
some outer expression.
– We can identify a class of programs for which inference is complete.
– Our approach is fully implemented as part of the Chameleon system [SW].
We can type a much larger class of programs compared to Hugs and GHC.
E.g., Example 1 is typable in our system. We refer to [SW] for more examples.
(Var-∀E)
    (x : ∀ā.D ⇒ t) ∈ Γ    Pp |= C ⊃ [t̄/ā]D
    -----------------------------------------
    C, Γ ⊢ x : [t̄/ā]t

(Abs)
    C, Γ.x : t1 ⊢ e : t2
    -----------------------
    C, Γ ⊢ λx.e : t1 → t2

(App)
    C, Γ ⊢ e1 : t1 → t2    C, Γ ⊢ e2 : t1
    ---------------------------------------
    C, Γ ⊢ e1 e2 : t2

(Let)
    C1 , Γ ⊢ e1 : t1    ā = fv(C1 , t1 ) − fv(Γ )    C2 , Γ.(g : ∀ā.C1 ⇒ t1 ) ⊢ e2 : t2
    ----------------------------------------------------------------------------------
    C2 , Γ ⊢ let g = e1 in e2 : t2

(LetA)
    ā = fv(C1 , t1 )    C2 ∧ C1 , Γ.(g : ∀ā.C1 ⇒ t1 ) ⊢ e1 : t1    C2 , Γ.(g : ∀ā.C1 ⇒ t1 ) ⊢ e2 : t2
    --------------------------------------------------------------------------------------------------
    C2 , Γ ⊢ let g :: C1 ⇒ t1 ; g = e1 in e2 : t2

Fig. 1. Hindley/Milner with Type Annotations
We continue in Section 2 where we formally introduce an extension of the
Hindley/Milner system with constraints and type annotations. Section 3 is the
heart of the paper. We first motivate our approach by example before mapping
the entire type inference problem to CHRs. Type inference then becomes CHR
solving. In Section 4 we discuss related work and conclude.
2 Types and Constraints
We present an extension of the Hindley/Milner system with constraints and type
annotations.
Expressions    e ::= x | λx.e | e e | let f = e in e | let f :: C ⇒ t ; f = e in e
Types          t ::= a | t → t | T t̄
Type Schemes   σ ::= t | ∀ā.C ⇒ t
Constraints    C ::= t = t | U t̄ | C ∧ C
CHRs           R ::= U t̄ ⇐⇒ C | U1 t1 , ..., Un tn =⇒ C
We write ō to denote a sequence of objects o1 , ..., on and ō : t̄ to denote o1 : t1 , ..., on : tn . W.l.o.g., we assume that lambda-bound and let-bound variables have been α-renamed to avoid name clashes. We record these variables in some
environment Γ . Note that we consider Γ as an (ordered) list of elements, though
we commonly use set notation. We denote by {x1 : σ1 , . . . , xn : σn }.x : σ the
environment {x1 : σ1 , . . . , xn : σn , x : σ}.
Our type language consists of variables a, type constructors T and type
application, e.g. T a. We use common notation for writing function and list
types. We also make use of pairs, integers, booleans etc. in examples.
We find two kinds of constraints. Equations t1 = t2 among types t1 and t2
and user-defined constraints U t̄. We assume that U refers to type classes such
as F oo. For our purposes, we restrict ourselves to single-headed simplification
CHRs U t̄ ⇐⇒ C and multi-headed propagation CHRs U1 t1 , ..., Un tn =⇒ C.
We note that CHRs describe logical formulae. E.g. U t̄ ⇐⇒ C can be interpreted
as ∀ā.U t̄ ↔ (∃b̄.C) where ā = fv(t̄) and b̄ = fv(C) − ā, and U1 t1 , ..., Un tn =⇒ C
can be interpreted as ∀ā.(U1 t1 ∧ ... ∧ Un tn ) ⊃ (∃b̄.C) where ā = fv(t1 , ..., tn )
and b̄ = fv(C) − ā. Via CHRs we can model most known type class extensions.
We refer the interested reader to [SS04] for a detailed account of translating
classes and instances to CHRs. We claim that using CHRs we can cover a sufficiently large range of Hindley/Milner type systems with constraints such as
functional dependencies, records etc. Note that CHRs additionally offer multi-headed simplification CHRs which in our experience so far do not seem to be
necessary in the type classes context. Due to space limitations, we only give one
simple example showing how to express Haskell 98 type class relations in terms
of CHRs.
We assume that the meaning of user-defined constraints (introduced by class
and instance declarations) has already been encoded in terms of some set Pp of
CHRs. User-defined functions are recorded in some initial environment Γinit .
Example 3. Consider
class Eq a where (==) :: a->a->Bool
instance Eq a => Eq [a]
class Eq a => Ord a where (<) :: a->a->Bool
class Foo a b where foo :: a->b->Int
instance Foo Int b
instance Foo Bool b
Note that the declaration class Eq a => Ord a introduces a new type class
Ord and imposes the additional condition that Ord a implies Eq a (which is
sensible assuming that an ordering relation assumes the existence of an equality
relation). We can model such a condition via a propagation rule. Hence, Pp
consists of
(Super)  Ord a =⇒ Eq a
(Eq1)    Eq [a] ⇐⇒ Eq a
(F1)     Foo Int b ⇐⇒ True
(F2)     Foo Bool b ⇐⇒ True
and Γinit = {(==) : ∀a.Eq a ⇒ a → a → Bool, (<) : ∀a.Ord a ⇒ a → a → Bool, foo : ∀a, b.Foo a b ⇒ a → b → Int}.
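As a small aside (our usage example, not from the paper), the effect of the propagation rule (Super) can be observed in plain Haskell: the body below only uses (<), so inference collects the constraint Ord a, and (Super) adds Eq a to the constraint store, reflecting that an equality test would be permitted as well.

    -- Inference collects Ord a from the use of (<); the propagation
    -- rule (Super) then adds Eq a to the constraint store.
    sorted :: Ord a => [a] -> Bool
    sorted (x:y:rest) = x < y && sorted (y:rest)
    sorted _          = True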
We introduce judgments of the form C, Γ ⊢ e : t where C is a constraint,
Γ refers to the set of lambda-bound variables, predefined and user-defined functions, e is an expression and t is a type. We leave the type class theory Pp implicit. We say a judgment is valid iff there is a derivation w.r.t. the rules found
in Figure 1. Commonly, we require that constraints appearing in judgments are
satisfiable. We say that a valid judgment is satisfiable iff all constraints appearing in the derivation are satisfiable. A constraint is satisfiable w.r.t. a type class
theory iff we find some model satisfying the theory and constraint. We say a
theory Pp is satisfiable iff we find some model for Pp .
In rule (Var-∀E), we assume that x either refers to a lambda- or let-bound
variable. Note that only let-bound variables and primitives can be polymorphic.
For convenience, we combine variable introduction with quantifier elimination.
We can build an instance of a type scheme if the instantiated constraint is
entailed by the given constraint w.r.t. type class theory Pp .
In rule (Let) we couple the quantifier introduction rule with the introduction
of user-defined functions. In our formulation, C2 does not necessarily guarantee
that C1 is satisfiable. However, our rule (Let) is sound for a lazy semantics which
applies to Haskell.
Rule (LetA) introduces a type annotation. Note that we assume that type
annotations are closed, i.e. all variables appearing in C1 ⇒ t1 are assumed to
be universally quantified. This is the assumption made for Haskell 98 [Has].
Note that the environment for typing the function body includes the binding
g : ∀ā.C1 ⇒ t1 . Hence, we allow for polymorphic recursive functions.
The other rules are standard. Note that we left out the rule for monomorphic
recursive functions for simplicity.
3 Type Inference via CHRs
We introduce our improved inference scheme first by example. Then, we show
how to map the typing problem to a set of CHRs. We give a description of the
semantics of CHRs adapted to our setting. Finally, we show how to perform type
inference in terms of CHR solving.
3.1 Motivating Examples
The following examples give an overview of the process by which we abstract
the typing problem in terms of constraints and CHRs.
Example 4. Consider the following program.
g y = let f x = x in (f True, f y)
We introduce new predicates, (special-purpose) user constraints, g(t) and
f (t) to constrain t to the types of functions g and f respectively. It is necessary
for us to provide a meaning for these constraints, which we will do in terms of
CHR rules. The body of each rule will contain all constraints arising from the
definition of the corresponding function, which represent that function’s type.
For the program above we may generate rules similar to the following.
g(t) ⇐⇒ t = ty → (t1 , t2 ), f (tf1 ), tf1 = Bool → t1 , f (tf2 ), tf2 = ty → t2
f (t) ⇐⇒ t = tx → tx
The arrow separating the rule head from the rule body can be read as logical
equivalence. Variables mentioned only in a rule’s body are implicitly existentially
quantified.
In the g rule we see that g’s type is of the form ty → (t1 , t2 ), where t1 and t2
are the results of applying function f to a Bool and a ty . We represent f’s type,
at both call sites in the program, by the f user constraint.
The f rule is much more straightforward. It simply states that t is f ’s type
if t is the function type tx → tx , for some tx , which is clear from the definition
of f.
We can infer g’s type by performing a CHR derivation, solving the constraint
g(t) by applying CHRs (replacing the constraint matching the lhs with the rhs).
Note that we avoid renaming variables where unnecessary.
g(t)  g  t = ty → (t1 , t2 ), f (tf1 ), tf1 = Bool → t1 , f (tf2 ), tf2 = ty → t2
      f  t = ty → (t1 , t2 ), tf1 = tx → tx , tf1 = Bool → t1 , f (tf2 ), tf2 = ty → t2
      f  t = ty → (t1 , t2 ), tf1 = tx → tx , tf1 = Bool → t1 , tf2 = t′x → t′x , tf2 = ty → t2
If we solve the resulting constraints for t, we see that g’s type is ∀ty .ty →
(Bool, ty ).
Example 5. The program below is a slightly modified version of the program
presented in Example 4.
g y = let f x = (y,x) in (f True, f y)
The key difference is that f now contains a free variable y. Since y is monomorphic within the scope of g we must ensure that all uses of y, in all definitions,
are consistent, i.e. each CHR rule which makes mention of ty , y’s type, must
be referring to the same variable. This is important since the scope of variables
used in a CHR rule is limited to that rule alone.
In order to enforce this, we perform a transformation akin to lambda-lifting,
but at the type level. Instead of user constraints of form f (t) we now use binary
constraints f (t, l) where the l parameter represents f’s environment.
We would generate rules like the following from this program.
g(t, l) ⇐⇒ t = ty → (t1 , t2 ), f (tf1 , ⟨ty ⟩), tf1 = Bool → t1 , f (tf2 , ⟨ty ⟩), tf2 = ty → t2 , l = ls
f (t, l) ⇐⇒ t = tx → (ty , tx ), l = ⟨ty |ts⟩
We write ⟨t1 , ..., tn ⟩ to indicate a type-level list containing n types. A list with an n-element prefix but an unbounded tail is denoted by ⟨t1 , ..., tn |t⟩. When unifying such a type against another list, t will be bound to some sublist containing all elements after the nth.
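One concrete way to picture these open-tailed lists (our encoding; the actual Chameleon representation may differ) is as nested pairs, so that ordinary unification performs the sublist binding:

    -- ⟨⟩ corresponds to Nil and ⟨t|r⟩ to Cons t r. Unifying
    -- Cons Bool (Cons Int r) against Cons Bool s binds s to Cons Int r,
    -- i.e. s is exactly the sublist after the first element.
    type Nil      = ()
    type Cons t r = (t, r)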
As mentioned above, we now use binary predicates to represent the type of a
function. The first argument, which we commonly refer to as the t component,
will still be bound to the function’s type. The second component, which we call l,
represents a list of unbound variables in scope of that function. We have ensured
that whenever the f constraint is invoked from the g rule that ty , the type of y,
is made available to it. So, in essence, the ty that we use in the f rule will have
the same type as the ty in g, rather than simply being a fresh variable known
only in g.
Example 6. We now return to the program first introduced in Example 1, and
generate the CHR rules corresponding to the function p, which is repeated below.
For simplicity we will assume that (+) is defined only on Ints, i.e. (+) : Int →
Int → Int.
p y = (let f :: c -> Int
           f x = foo y x
       in f, y + (1::Int))
We generate the following CHRs from this fragment of the program. We also
include the rule which represents foo’s type, and the rule which corresponds to
the instance F oo Int b.
p(t, l) ⇐⇒ t = ty → (t1 , t2 ), f (t1 , (⟨ty ⟩, ⟨ty , tx ⟩)), t2 = tr ,
           tplus = Int → Int → Int, tplus = ty → Int → tr , l = (ls, ⟨ty , tx ⟩)
fa (t, l) ⇐⇒ p(t′ , l), t = c → Int, l = (⟨ty |ls⟩, ⟨ty , tx ⟩)
f (t, l) ⇐⇒ t = tx → tr , foo(tfoo , ls′ ), tfoo = ty → tx → tr , fa (t, l),
           l = (⟨ty |ls⟩, ⟨ty , tx ⟩)
foo(t, l) ⇐⇒ t = a → b → Int, Foo a b
Foo Int b ⇐⇒ True
Here we have extended the scheme which we used to generate the constraints
in the previous example. We stick to binary predicates, but have expanded the
l component to include two lists. The first list, which we refer to as the local l component, contains, as before, a type-level list of all unbound lambda variables in scope of the function. The second list, which we will often denote LT , simply contains all of the lambda-bound variables from the top-level definition down,
in a fixed order.
We introduce a symbol fa and generate a new rule to represent f’s annotated
type. Note also that we add a call to the f rule to unify f’s inferred type with
the declared type.
As demonstrated earlier, in order to check f it is necessary to consider all
of the type information available in f’s context. In particular, for this program,
it is critical that we know y has type Int, in order to reduce the F oo Int b
constraint which arises from the use of foo, but is absent from the annotation.
The way we introduce f’s context into f is by adding a call from the fa rule
to the immediate parent definition, which in this case is represented by p. In this
instance we are not interested in p’s type, only the effect it has on lambda-bound
variables, and any type class constraints which may arise. Note that if p were
itself embedded within a function definition, then it too would have such a call
(to its own parent), and so f would indirectly inherit p’s context.
We perform the following simplified derivation to demonstrate that the CHR
formulation above captures the necessary context information within f.
f (t, l)   f    foo(tfoo , ls′ ), tfoo = ty → tx → tr , fa (t, l),
                 l = (⟨ty ⟩, ⟨ty , tx ⟩), ...
          foo   tfoo = a → b → Int, Foo a b, tfoo = ty → tx → tr , fa (t, l),
                 l = (⟨ty ⟩, ⟨ty , tx ⟩), ...

we can simplify this to:

                 Foo ty tx , fa (t, l), l = (⟨ty ⟩, ⟨ty , tx ⟩), ...
          fa    Foo ty tx , p(t′ , l), l = (⟨ty ⟩, ⟨ty , tx ⟩), ...
          p     Foo ty tx , tplus = Int → Int → Int, tplus = ty → Int → tr ,
                 l = (ls, ⟨ty , tx ⟩), l = (⟨ty ⟩, ⟨ty , tx ⟩), ...
          Foo   tplus = Int → Int → Int, tplus = ty → Int → tr ,
                 l = (ls, ⟨ty , tx ⟩), l = (⟨ty ⟩, ⟨ty , tx ⟩), ...
Through the call to p from fa we introduce sufficient context information to
determine that ty is an Int, and to consequently reduce away the F oo constraint
using the instance rule. Note that the LT component is necessary here because
p is not aware of its own lambda-bound variables. Without the LT component,
p would not be able to “export” the required information about ty to f .
3.2 Constraint and CHR Generation
In detail, we show how to map expressions to constraints and CHRs. Lambda-abstractions such as λx.e are preprocessed and turned into λx :: tx .e where tx
is a fresh type variable. We assume that LT contains all such type variables tx
attached to lambda-abstractions.
For each function definition f=e we generate a CHR of the form f (t, l) ⇐⇒ C
where l refers to a pair (ll , lg ). The constraint C is generated out of the program
text of e. We maintain that ll denotes the set of types of lambda-bound variables
in the environment and lg refers to LT , the types of all lambda-bound variables.
Constraint Generation:

(Var-x)
    (x : t1 ) ∈ Γ    t2 fresh
    --------------------------
    Γ, E, x ⊢C (t2 = t1  t2 )

(Var-f)
    f ∈ E, or fa ∈ E and f non-recursive    t, l, lg fresh    C = {f (t, l), l = (⟨t̄x ⟩, lg )}
    ----------------------------------------------------------------------------------------
    {x̄ : t̄x }, E, f ⊢C (C  t)

(VarA-f)
    fa ∈ E    f recursive    t, l, lg fresh    C = {fa (t, l), l = (⟨t̄x ⟩, lg )}
    ----------------------------------------------------------------------------
    {x̄ : t̄x }, E, f ⊢C (C  t)

(Abs)
    Γ.x : t1 , E, e ⊢C (C  t2 )    C ′ = {C, t3 = t1 → t2 }    t3 fresh
    --------------------------------------------------------------------
    Γ, E, λx :: t1 .e ⊢C (C ′  t3 )

(App)
    Γ, E, e1 ⊢C (C1  t1 )    Γ, E, e2 ⊢C (C2  t2 )    t3 fresh
    ------------------------------------------------------------
    Γ, E, e1 e2 ⊢C (C1 , C2 , t1 = t2 → t3  t3 )

(Let)
    Γ, E ∪ {g}, e2 ⊢C (C  t)
    ------------------------------------
    Γ, E, let g = e1 in e2 ⊢C (C  t)

(LetA)
    Γ, E ∪ {ga }, e2 ⊢C (C  t)
    -------------------------------------------------
    Γ, E, let g :: C1 ⇒ t1 ; g = e1 in e2 ⊢C (C  t)

CHR Generation:

(Var)  h, Γ, E, v ⊢R ∅

(Abs)
    h, Γ.x : t, E, e ⊢R P
    -------------------------
    h, Γ, E, λx :: t.e ⊢R P

(App)
    h, Γ, E, e1 ⊢R P1    h, Γ, E, e2 ⊢R P2
    ----------------------------------------
    h, Γ, E, e1 e2 ⊢R P1 ∪ P2

(Let)
    Γ = {x1 : t1 , . . . , xn : tn }    t, t′2 , l, lr, lg fresh
    g, Γ, E, e1 ⊢R P1    h, Γ, E ∪ {g}, e2 ⊢R P2    Γ, E, e1 ⊢C (C1′  t′1 )
    P = P1 ∪ P2 ∪ {g(t, l) ⇐⇒ C1′ , t′1 = t, l = (⟨t1 , . . . , tn |lr⟩, LT ), h(t′2 , l)⊖ }
    ------------------------------------------------------------------------------------
    h, Γ, E, let g = e1 in e2 ⊢R P

(LetA)
    Γ = {x1 : t1 , . . . , xn : tn }    t, t′2 , l, lr fresh
    g, Γ, E ∪ {ga }, e1 ⊢R P1    h, Γ, E ∪ {ga }, e2 ⊢R P2    Γ, E ∪ {ga }, e1 ⊢C (C1′  t′1 )
    P = P1 ∪ P2 ∪ { ga (t, l) ⇐⇒ t = t′′1 , C1′′ , l = (⟨t1 , ..., tn |lr⟩, LT ), h(t′2 , l)⊖ ,
                    g(t, l) ⇐⇒ l = (⟨t1 , . . . , tn |lr⟩, LT ), ga (t, l), C1′ , t = t′1 }
    --------------------------------------------------------------------------------------
    h, Γ, E, let g :: C1′′ ⇒ t′′1 ; g = e1 in e2 ⊢R P

Fig. 2. Constraint and CHR Generation
We make use of list notation (on the level of types) to refer to the types of λ-bound variables. In order to avoid confusion with lists of values, we write ⟨l1 , . . . , ln ⟩ to denote the list of types l1 , . . . , ln . We write ⟨l|r⟩ to denote the list of types with head l and tail r.
For constraint generation, we employ judgments Γ, E, e ⊢C (C t) where
environment Γ , set of predicate symbols E and expression e are input values and
constraint C and type t are output parameters. Note that Γ consists of lambda-bound variables only whereas E holds the set of predicate symbols referring to
primitive and let-defined functions. Initially, we assume that Einit holds all the
symbols defined in Pinit which is the CHR representation of all functions in Γinit .
The rules can be found in Figure 2.
Consider rule (Var-f). If the function does not carry an annotation or the function is not recursive (whether a function is recursive is easily checked by a simple dependency analysis) we make use of the definition CHR. However, strictly making use of the definition CHR might introduce cycles among CHRs, e.g. consider polymorphic recursive functions. In such cases we make use of the annotation CHR, see rule (VarA-f). In both rules we set l to the sequence of types of all lambda-variables in scope. Note that we might pass in more types of lambda-bound variables than expected by that function. This is safe because we leave
the first component of l “open” at definition sites. That is, we expect at least
the types of lambda-bound variables in scope at the definition site and possibly
some more. The second component which refers to the sequence of all types of
lambda-bound variables appearing in the entire program is left unconstrained.
This component will be only constrained at definition sites (see CHR generation
rules (Let) and (LetA)). Note that in rule (Abs) the order of lambda-bound
variables added to the type environment matters. Hence, we silently treat Γ as a list
rather than a set. In rules (Let) and (LetA) the constraints arising out of e1 might
not appear in C unless we use function g in e2 . Note that we do not generate a
constraint for the subsumption condition which will be checked separately.
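The recursiveness test used to choose between rules (Var-f) and (VarA-f) amounts to a reachability check on the call graph. A minimal sketch in Haskell (our illustration, not Chameleon's actual code):

    import qualified Data.Map as M
    import qualified Data.Set as S

    -- Maps each function name to the set of functions it calls directly.
    type CallGraph = M.Map String (S.Set String)

    -- All functions reachable from the direct callees of f.
    reachable :: CallGraph -> String -> S.Set String
    reachable g f = go S.empty (S.toList (callees f))
      where
        callees h = M.findWithDefault S.empty h g
        go seen []     = seen
        go seen (h:hs)
          | h `S.member` seen = go seen hs
          | otherwise         = go (S.insert h seen) (S.toList (callees h) ++ hs)

    -- f is recursive iff f can call itself, possibly indirectly.
    isRecursive :: CallGraph -> String -> Bool
    isRecursive g f = f `S.member` reachable g f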
For rule generation, we employ judgments of the form h, Γ, E, e ⊢R P where
CHR h, environment Γ , set of predicate symbols E and expression e are input
values and the set P of CHRs is the output value. As an invariant we maintain
that h refers to the surrounding definition of expression e. Initially, we assume
that h refers to some trivial CHR h(t, l) ⇐⇒ T rue and E refers to the set of
primitive functions. We refer to Figure 2 for details. There are two interesting
rules.
Rule (Let) deals with unannotated functions. Note that we do not add g to E
when generating constraints and rules from e1 . Hence, we assume for simplicity
that unannotated functions are not allowed to be recursive. Of course, our system [SW] handles unannotated, recursive functions. Their treatment is described
in a forthcoming report. The novel idea of our inference scheme is that we reach surrounding constraints within the definition of g via the constraint h(t′2 , l)⊖ .
The ⊖ marker (left out in Example 6 for simplicity) serves two purposes: (1) We
potentially create cycles among CHRs because we might reintroduce renamed
copies of surrounding constraints. Markers will allow us to detect such cycles to
avoid non-termination of CHRs. (2) Variables occurring in marked constraints
are potentially part of the environment. Hence, we should not quantify over those
variables. Examples will follow shortly to highlight these points.
Rule (LetA) is similar to rule (Let). Here, the annotation CHR includes the
surrounding definition h. The actual inference result is reported in the definition
CHR.
3.3 CHR Solving
We introduce the marked CHR semantics. We assume that each constraint is
attached with either a ⊖ marker or a ǫ (pronounced empty) marker. The empty
marker is commonly left implicit. We refer to constraints carrying a ⊖ marker
as marked constraints. A constraint carrying the empty marker is unmarked. In case of CHR rule application on a marked constraint, we mark all constraints in
the body before adding them to the constraint store.
Definition 1 (Marked CHR Application). Let d = (U t̄)a (or d = f (t̄)a )
be a primitive constraint where a ∈ {⊖ ,ǫ }. We define mark(d) = a. We write
d⊖ to denote (U t̄)⊖ (or f (t̄)⊖ ).
Let (R) c1 , ..., cn =⇒ d1 , ..., dm ∈ P and C be a constraint. Let φ be the
m.g.u. of all equations in C. Let c′1 , ..., c′n ∈ C such that there exists a substitution
θ on variables in rule (R) such that θ(ci ) = φ(c′i ) for i = 1...n. Then, C R C, d′1 , ..., d′m where

    d′i = di       if mark(c′j ) ≠ ⊖ for j = 1...n
    d′i = (di )⊖   otherwise
Let (R) c ⇐⇒ d1 , ..., dm ∈ P and C be a constraint. Let φ be the m.g.u. of
all equations in C. Let c′ ∈ C such that there exists a substitution θ on variables
in rule (R) such that θ(c) = φ(c′ ), that is user-defined constraint c′ matches the
left-hand side of rule (R). Then, C R C − c′ , d′1 , ..., d′m where d′i are as above.
A derivation step from a global set of constraints C to C ′ using an instance of rule r is denoted C r C ′ . A derivation, denoted C ∗P C ′ , is a sequence of derivation steps using rules in P such that no further derivation step is
applicable to C ′ . The operational semantics of CHRs exhaustively apply rules to
the global set of constraints, being careful not to apply propagation rules twice
on the same constraints (to avoid infinite propagation). We say a set of CHRs
is terminating if for each C there exists C ′ such that C ∗P C ′ .
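The marking discipline of Definition 1 is mechanical; the following Haskell fragment is a minimal sketch (our illustration — it deliberately ignores matching and unification, which the actual solver handles): when a simplification rule fires on heads that include a marked constraint, every constraint in the body is added marked.

    data Mark = Empty | Marked deriving (Eq, Show)

    -- A user-defined constraint such as "Foo ty tx", plus its marker.
    data Constraint = UserDef String Mark deriving (Eq, Show)

    -- Fire a simplification rule: drop the matched heads from the store
    -- and add the body, propagating ⊖ if any matched head carried it.
    applySimp :: [Constraint] -> [Constraint] -> [Constraint] -> [Constraint]
    applySimp heads body store =
        filter (`notElem` heads) store ++ map setMark body
      where
        anyMarked = any (\(UserDef _ m) -> m == Marked) heads
        setMark (UserDef s m)
          | anyMarked = UserDef s Marked
          | otherwise = UserDef s m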
Example 7. Consider
class Erk a where erk :: a
class Foo a where foo :: a
f = (erk, let g :: Foo a => a; g = foo in g)
Here is a sketch of the translation to CHRs.
ga (t) ⇐⇒ f (t′ )⊖ , F oo t
g(t) ⇐⇒ ga (t), F oo t
f (t) ⇐⇒ t = (a, b), Erk a, g(b)
Consider the derivation
g(t1 )  g   ga (t1 ), Foo t1
        ga  f (t′ )⊖ , Foo t1
        f   t′ = (a, b), (Erk a)⊖ , g(b)⊖ , Foo t1
        g   t′ = (a, b), (Erk a)⊖ , ga (b)⊖ , (Foo b)⊖ , Foo t1
In the f step we propagate ⊖ to all new constraints. Note that we encounter a cycle among CHRs (see the underlined constraints). Indeed, CHRs may be “non-terminating” because we introduce repeated duplicates of surrounding constraints.
To avoid non-termination we introduce an additional CHR Cycle Removal step
    . . .
    g    t′ = (a, b), (Erk a)⊖ , ga (b)⊖ , (Foo b)⊖ , Foo t1
    CCR  t′ = (a, b), (Erk a)⊖ , (Foo b)⊖ , Foo t1
which is defined as follows.
Definition 2 (CHR Cycle Removal). Let f (t, l) ∈ C and f (t′ , l′ )⊖ ∈ C ′ and a derivation . . .  C  . . .  C ′ . Then, . . .  C  . . .  C ′ CCR C ′ − f (t′ , l′ )⊖ .
We assume that CCR is applied aggressively.
We argue that this derivation step is sound because any further rule application on ga (b)⊖ will only add renamed copies of constraints already present in
the store.
Lemma 1 (CCR Soundness). Let P be a set of CHRs and C and C ′ two constraints such that C ∗P C ′ . Then P |= C ↔ ∃̄fv(C) .C ′ .
We can also argue that we break any potential cycle among predicate symbols referring to function symbols. Note that we do not consider breaking cycles
among two unmarked constraints. Such cases will only occur in case of unannotated, recursive functions which are left out for simplicity. A detailed description
of such cases will appear in a forthcoming report. Also note that we never remove cycles in case of user-defined constraints. In such a case, the type class
theory might be non-terminating. Hence, we state that the CCR derivation step preserves termination assuming the type class theory is terminating.
Lemma 2 (CCR Termination). Let Pp be a terminating type class theory,
h(t, l) ⇐⇒ T rue a CHR, Einit a set of primitive predicate symbols, Pinit a set of
CHRs and (Γ, e) a typing problem such that h, Γ, Einit , e ⊢R Pe and (Γinit , Pinit )
models Einit for some Γinit . Then, Pp ∪ Pe ∪ Pinit is terminating.
3.4 Type Inference via CHR Solving
Consider type inference for an expression e w.r.t. an environment Γ of lambda-bound variables and an environment Γinit of primitive functions and type class theory Pp . We assume (Pinit , Einit ) models Γinit such that for each f : ∀ā.C ⇒ t′ we find f (t, l) ⇐⇒ C, t = t′ ∈ Pinit and f ∈ Einit . Then, we generate Γ, Einit , e ⊢C (C  t) and h, Γ, Einit , e ⊢R Pe . We generally assume that P denotes Pp ∪ Pe ∪ Pinit .
For typability we need to check that (1) constraint C is satisfiable, and (2)
all type annotations in e are correct. We are now in the position to describe
CHR-based satisfiability and subsumption check procedures.
Definition 3 (Satisfiability Check).
Let P be a set of CHRs and C a constraint such that C ∗P C ′ for some
constraint C ′ . We say that C is satisfiable iff the unifier of all equations in C ′
exists.
Soundness of the above definition follows from results stated in [SS02] in combination with Lemma 1. Of course, decidability of the satisfiability check depends
on whether CHRs are terminating.
To check for correctness of type annotations we first need to calculate the
set of all subsumption problems. Let Esub(e) be the set of all predicate symbols
ga where each ga refers to some subexpression (let g :: C1 ⇒ t1 ; g = e1 in e2 )
in e. Let Fsub(e) be the set of formulas such that ∀t, l.(ga (t, l) ↔ g(t, l)) ∈ Fsub(e) for all
ga ∈ Esub(e) . It remains to verify that the type annotation is correct under the
abstraction of type inference in terms of P . Formally, we need to verify that
P |= Fsub(e) where P refers to the first-order logic interpretation of the set of CHRs
P . In [SS02], we introduced a CNF (Canonical Normal Form) procedure to test
for equivalence among constraints ∀t, l.(ga (t, l) ↔ g(t, l)) w.r.t. some set of CHRs by executing ga (t, l) and g(t, l) and verifying that the resulting final stores are equivalent modulo variables in the initial store (here {t, l}). Thus, we can phrase the subsumption check as follows. We write ∃̄t,l .C to denote ∃fv(C) − {t, l}.C.
Definition 4 (Subsumption Check). Let ga ∈ Esub(e) and P be a set of
CHRs. We say that g’s annotation is correct iff (1) we execute ga (t, l) ∗P C1 and g(t, l) ∗P C2 , and (2) we have that |= (∃̄t,l .C1 ) ↔ (∃̄t,l .C2 ).
Soundness of the above definition follows from results stated in [SS02] in combination with Lemma 1.
Example 8. Recall Example 7 and the derivation g(t1 ) ∗ t′ = (a, b), (Erk a)⊖ , F oo t1 .
A similar calculation shows that ga (t1 ) ∗ s′ = (c, d), (Erk c)⊖ , F oo t1 . Note
that the resulting constraints are logically equivalent modulo variable renamings.
Hence, g’s annotation is correct.
Note that in our formulation of type inference, the type of an expression is
described by a set of constraints w.r.t. a set of CHRs. The following procedure describes how to build the associated type scheme. Markers attached to constraints
provide important information which variables arise from the surrounding scope.
Of course, we need to be careful not to quantify over those variables.
Definition 5 (Building of Type Schemes). Let P be a set of CHRs, g a
function symbol. We say function g has type ∀ā.C ′ ⇒ t′ w.r.t. P iff (1) We
have that g(t, l) ∗ C, l = (ll , lg ) for some constraint C, (2) φ, the m.g.u. of
C, l = (ll , lg ) exists, (3) let D ⊆ φ(C) such that D is maximal and D consists
of unmarked user-defined constraints only, (4) let ā = fv(D, φt) − fv(φll ), (5) let
C ′ = φ(C) and t′ = φ(t).
Example 9. According to Example 8 we find that g has type ∀t1 .(Erk a, F oo t1 ) ⇒
t1 . Note that Erk a arises from f’s program text.
We are able to state soundness of our approach.
Theorem 1 (Soundness). Let Pe , Pp and Pinit be three sets of CHRs, h a
CHR in Pinit , Γ an environment of simply-typed bindings, Γinit an environment
of primitive functions, e an expression, Einit a set of predicate symbols, C a
constraint and t a type such that (Pinit , Einit ) models Γinit and Γ, Einit , e ⊢C
(C t) and h, Γ, Einit , e ⊢R Pe and type checking of all annotations in e is
successful. Let C ∗Pp ∪Pinit ∪Pe C ′ for some constraint C ′ . Let φ be the m.g.u. of
C ′ where we treat all variables in Γ as Skolem constants. Then, φ(C ′ ), φ(Γ ) ∪
Γinit ⊢ e : φ(t).
The challenge is to identify some sufficient criteria under which our type
inference method is complete. Because we only check for subsumption we need
to guarantee that each subsumption condition will be either true or false. E.g. in
Example 2 the subsumption condition boils down to the constraint ∀tx .F oo ty tx .
Note that we can satisfy this constraint by setting ty to either Int or Bool. Hence,
our task is to prevent such situations from happening. In fact, such situations can
never happen for single-parameter type classes. But what about multi-parameter
type classes? The important point is to ensure that fixing one parameter will
immediately fix all the others. That is, in case of ∀tx .F oo ty tx we know that
tx is uniquely determined by ty . We can enforce such conditions in terms of
functional dependencies [Jon00].
Definition 6. We say a type class TC is fully functional iff we find a class declaration class TC a1 ... an | fd1 , ..., fdn where fdi = ai -> a1 ... ai−1 ai+1 ... an .
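For instance, using GHC-style multi-parameter classes with functional dependencies, a hypothetical two-parameter class that is fully functional in the above sense reads:

    -- Each parameter determines the other: fd1 = a -> b and fd2 = b -> a,
    -- matching Definition 6 for n = 2 (requires the MultiParamTypeClasses
    -- and FunctionalDependencies extensions).
    class Convert a b | a -> b, b -> a where
      to   :: a -> b
      from :: b -> a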
We argue that for fully functional dependencies solutions (if they exist) must be unique. In fact, this is not sufficient because there are still cases where we need to guess.
Example 10. Consider the following simplified representation of the Show class.
class Show a where
  show :: a->String
  read :: String->a

f :: Show a => String->String
f x = show (read x)
The subsumption check boils down to the formula ∀a.Show a ⊃ ∃a′ .Show a′
which is obviously a true statement (take a′ = a). However, in our translation
to CHRs we effectively check for Show a ↔ Show a′ which obviously does not
hold.
There are further sources where we need to take a guess.
Example 11. Consider
class Foo
instance Show Int
instance Foo a => Show a
where Pp = {(S1) Show Int ⇐⇒ True, (S2) Show a ⇐⇒ Foo a}. We have that Pp |= Show Int. However, Show Int S2 Foo Int where |= Foo Int ↮ True, which suggests that Pp |= Show Int might not hold. Clearly, by guessing the right path in the derivation we find that Show Int S1 True.
To ensure that our subsumption check (Definition 4) is complete we need to
rule out ambiguous types and require that the type class theory is complete. A
type is ambiguous iff we cannot determine the variables appearing in constraints
by looking at the types alone. The annotation f::Show a=>String->String in
Example 10 is ambiguous. A type class theory Pp is complete iff Pp is confluent,
terminating and range-restricted (i.e. grounding the lhs of CHRs grounds the
rhs) and all simplification rules are single-headed. The type class theory Pp in
Example 11 is non-confluent. In [SS02] we have identified these conditions as
sufficient to ensure completeness of the Canonical Normal Form procedure to
test for equivalence among constraints.
Theorem 2. Let Pp be a complete and fully functional type class theory. Then
our CHR-based inference scheme infers principal types if the types arising are
unambiguous.
4 Related Work and Conclusion
Simonet and Pottier [SP04] introduce HMG(X), a refined version of HM(X) [OSW99,Sul00] which includes, among others, type annotations. Their type inference approach is based on the “allow for more solutions” philosophy. Hence, they achieve complete type inference immediately. However, they only consider tractable type
inference for the specific case of equations as the only primitive constraints.
An approach in the same spirit is considered by Hinze and Peyton-Jones [HJ00].
They sketch an extension of Haskell to allow for “higher-order” instances which
logically correspond to nested equivalence relations. As pointed out by Faxén [Fax03],
in such an extended Haskell version it would be possible to type the program in
Example 2. We believe this is an interesting avenue to pursue. We are not aware
of any formal results nor a concrete implementation of their proposal.
Pierce and Turner [PT00] develop a local type inference scheme where user-provided type information is propagated inwards to nodes which are below the annotation in the abstract syntax tree. Their motivation is to remove redundant annotations. Note that Peyton-Jones and Shields [PJS04] describe a particular instance of local type inference based on the work by Odersky and Läufer [OL96]. In our approach we are able to freely distribute type information
across the entire abstract syntax tree. Currently, we only distribute information
about the types of lambda-bound variables and type class constraints. We believe that our approach can be extended to a system with rank-k types. We plan
to pursue this topic in future work.
In this paper, we have presented a novel inference scheme where the entire
type inference problem is mapped to a set of CHRs. Due to the constraint-based
nature of our approach, we are able to make available the results of inference for
outer expressions while inferring the type of inner expressions. We have fully implemented the improved CHR-based inference system as part of the Chameleon
system [SW]. Our system improves over previous implementations such as Hugs
and GHC. For some cases, e.g. unambiguous Haskell 98 programs, we can even
state completeness. We note that our improved inference scheme can host the
type debugging techniques described in [SSW03,SSW04].
In future work, we plan to follow the path of Odersky and Läufer [OL96]
and compute (non-principal in general) solutions to subsumption problems. We
strongly believe that our improved CHR-based inference will be of high value
for such an attempt. Another alternative inference approach not mentioned so
far is to only generate all necessary subsumption problems σi ≤ σa and wait
for the “proper” moment to solve or check them. Of course, we still need to
process them in a certain order and might fail for the same reason we failed
in Example 1. Clearly, our constraint-based approach allows us to “exchange”
intermediate results among two subsumption problems which may be crucial for
successful inference.
References
DM82.    L. Damas and R. Milner. Principal type-schemes for functional programs. In Proc. of POPL'82, pages 207–212. ACM Press, January 1982.
Fax03.   K. F. Faxén. Haskell and principal types. In Proc. of Haskell Workshop'03, pages 88–97. ACM Press, 2003.
Frü95.   T. Frühwirth. Constraint handling rules. In Constraint Programming: Basics and Trends, LNCS. Springer-Verlag, 1995.
GHC.     Glasgow Haskell Compiler home page. http://www.haskell.org/ghc/.
Has.     Haskell 98 language report. http://research.microsoft.com/Users/simonpj/haskell98-revised/haskell98-report-html/.
HHPW94.  C. V. Hall, K. Hammond, S. Peyton Jones, and P. Wadler. Type classes in Haskell. In ESOP'94, volume 788 of LNCS, pages 241–256. Springer-Verlag, April 1994.
HJ00.    R. Hinze and S. Peyton Jones. Derivable type classes. In Proc. of the Fourth Haskell Workshop, September 2000.
HUG.     Hugs home page. haskell.cs.yale.edu/hugs/.
JN00.    M. P. Jones and J. Nordlander, 2000. http://haskell.org/pipermail/haskell-cafe/2000-December/001379.html.
Jon92.   M. P. Jones. Qualified Types: Theory and Practice. D.Phil. thesis, Oxford University, September 1992.
Jon99.   M. P. Jones. Typing Haskell in Haskell. In Haskell Workshop, September 1999.
Jon00.   M. P. Jones. Type classes with functional dependencies. In Proc. of ESOP'00, volume 1782 of LNCS. Springer-Verlag, 2000.
Mil78.   R. Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17:348–375, Dec 1978.
Mil92.   D. Miller. Unification under a mixed prefix. J. Symb. Comput., 14(4):321–358, 1992.
OL96.    M. Odersky and K. Läufer. Putting type annotations to work. In Proc. of POPL'96, pages 54–67. ACM Press, 1996.
OSW99.   M. Odersky, M. Sulzmann, and M. Wehr. Type inference with constrained types. Theory and Practice of Object Systems, 5(1):35–55, 1999.
PJS04.   S. Peyton-Jones and M. Shields. Practical type inference for arbitrary-rank types. Submitted for publication, 2004.
Pot98.   F. Pottier. A framework for type inference with subtyping. In Proc. of ICFP'98, pages 228–238. ACM Press, 1998.
PT00.    B. C. Pierce and D. N. Turner. Local type inference. ACM Transactions on Programming Languages and Systems, 22(1):1–44, 2000.
Rém93.   D. Rémy. Type inference for records in a natural extension of ML. In C. A. Gunter and J. C. Mitchell, editors, Theoretical Aspects of Object-Oriented Programming. Types, Semantics and Language Design. MIT Press, 1993.
SP04.    V. Simonet and F. Pottier. Constraint-based type inference with guarded algebraic data types. Submitted to ACM Transactions on Programming Languages and Systems, June 2004.
SS02.    P. J. Stuckey and M. Sulzmann. A theory of overloading. In Proc. of ICFP'02, pages 167–178. ACM Press, 2002.
SS04.    P. J. Stuckey and M. Sulzmann. A theory of overloading. ACM Transactions on Programming Languages and Systems, 2004. To appear.
SSW03.   P. J. Stuckey, M. Sulzmann, and J. Wazny. Interactive type debugging in Haskell. In Proc. of Haskell Workshop'03, pages 72–83. ACM Press, 2003.
SSW04.   P. J. Stuckey, M. Sulzmann, and J. Wazny. Improving type error diagnosis. In Proc. of Haskell'04, pages 80–91. ACM Press, 2004.
Sul00.   M. Sulzmann. A General Framework for Hindley/Milner Type Systems with Constraints. PhD thesis, Yale University, Department of Computer Science, May 2000.
SW.      M. Sulzmann and J. Wazny. Chameleon. http://www.comp.nus.edu.sg/~sulzmann/chameleon.
A Variations
Our existing translation to CHRs is slightly lazier than most inference algorithms in the sense that we do not infer the types of let-bound variables which
are never called.
Example 12. Consider:
f x = let g = x x
      in x
We generate CHR rules that look like the following:
f (t) ⇐⇒ t = tx → tx
g(t) ⇐⇒ tx = tx → t
Since the f rule never calls g, the unsatisfiable constraint is not introduced,
and this program is considered well-typed.
This “laziness” can be problematic whenever we need to compare the inferred
type of some function with its declared type. Consider an annotated function g,
nested within the definition of function f , from which we generate an inference
rule, g(t, l) ⇐⇒ ga (t, l), Ci , and an annotation rule, ga (t, l) ⇐⇒ f (t′ , l′ )⊖ , Ca .
It’s possible that the types of some global variables are affected in g, but not in
ga . In order for g(t, l) and ga (t, l) to be equivalent, we depend on g’s context, as
called in the ga rule, to in turn call g and introduce those missing constraints.
Example 13. Consider the following program.
f y = let g :: Bool
          g = y
      in 'a'
We generate the following (simplified) CHR rules:
f (t, l) ⇐⇒ t = Char, l = (⟨⟩, ⟨ty ⟩)
g(t, l) ⇐⇒ t = ty , l = (⟨ty ⟩, ⟨ty ⟩), ga (t, l)
ga (t, l) ⇐⇒ t = Bool, l = (⟨ty ⟩, ⟨ty ⟩), f (t′ , l)
Even though g's type annotation is acceptable, our subsumption check would fail, because g(t, l) and ga (t, l) are not equivalent w.r.t. the l component. Clearly, the g rule implies ty = Bool, but the ga rule does not.
We can remedy this situation by ensuring that all nested functions are called
by their parent function. In this way, when we consider a function annotation,
the definition of the function becomes part of its context.
Example 14. We modify the program of Example 13, forcing f to call g, but
disregard its value.
f y = let g :: Bool
          g = y
      in const 'a' g
The CHR rule associated with f would now look something like:
f (t, l) ⇐⇒ tconst = a → b → a, a = Char, t = ty → a,
           l = (⟨⟩, ⟨ty ⟩), g(b, l′ )

This solves our immediate problem, in that the constraints arising from g(t, l) and ga (t, l) are now equivalent w.r.t. t and l.
Unfortunately this does not work in the case where the function we call is
recursive. Consider:
f y = let g :: Bool
          g = const y g
      in 'a'
Here, if f were to call g, the constraint generated to represent g’s type would
be ga (t′ , l′ ). We then face the same problem, that from within ga we have no
association between ty and Bool.
Clearly, a syntactic transformation of the source program to introduce calls
to otherwise uncalled functions is not sufficient. We must modify the CHR generation process to directly insert calls to the inference constraints of functions
which are not already called.
Example 15. We return to the rules generated in Example 13. The following
CHR rule is a modified version of the f rule above which now contains a call to
g, further constraining the lambda-bound type variables.
f (t, l) ⇐⇒ t = Char, l = (⟨⟩, ⟨ty ⟩), g(t′ , l)⊖
Using this modified rule, we see now that g(t, l) and ga (t, l) are equivalent, since ty = Bool in both. The ⊖ mark on the g constraint is not significant here, though it does accommodate a simpler form of cycle breaking (simply removing the repeated constraint) than the equivalent unmarked constraint would.
B Monomorphic Recursive Functions
In case of monomorphic recursive functions (i.e. recursive functions with no type
annotation) we need to update our strategy for breaking cycles among CHRs
(Definition 2). We denote by NRF the set of all non-recursive functions, by
MRF the set of all recursive functions which carry no type annotations and by
ARF the set of all annotated recursive functions.
Definition 7 (CHR Cycle Removal). Let f (t, l)⊖ ∈ C and f (t′ , l′ )⊖ ∈ C ′ where f ∈ NRF ∪ ARF and a derivation . . .  C  . . .  C ′ . Then, . . .  C  . . .  C ′ CCR C ′ − f (t′ , l′ )⊖ .
Let f (t, l) ∈ C and f (t′ , l′ )⊖ ∈ C ′ where f ∈ MRF and a derivation . . .  C  . . .  C ′ . Then, . . .  C  . . .  C ′ CCR C ′ − f (t′ , l′ )⊖ , t′ = t.
Let f (t, l) ∈ C and f (t′ , l′ ) ∈ C ′ where f ∈ MRF and f (t′ , l′ ) is known to arise from the exact same program source location as f (t, l), and given a derivation . . .  C  . . .  C ′ . Then, . . .  C  . . .  C ′ Mono C ′ − f (t′ , l′ ), t′ = t.
We assume that CCR and Mono are applied aggressively.
Note that according to the definition of a Mono step, we should only ever
break cycles amongst constraints representing unannotated, recursive function
types when they arise from the same program location. Unifying the types of
different calls to the same function is overly restrictive, as the following example
illustrates.
Example 16. Consider the following program.
e = let f = f
    in (f::Int, f::Bool)
The two fs in the body of e are calls to the same recursive, unannotated function, i.e. f ∈ MRF . It would be unnecessarily restrictive, however, to aggressively apply Mono here and unify the types of the two fs, resulting in a type error. Indeed, these two fs are not even part of the same cycle.
We note that breaking cycles among constraints arising from the exact same
program source location is sufficient.
Example 17. Consider
f = ... f1 ... f2
where we added numbers 1 and 2 to different use sites of f. Here is a sketch of type inference where we annotate  with the number of CHR steps applied so far: f n f1 , f2 f f1,1 , f1,2 , f2 . In the last step we reduce a call to f at
location 1. Note that we make use of a refined marking scheme. We keep track
of the original source location and add the location of the constraint introduced
to the store to the existing locations. We refer to [SSW03] for the details of such
a “location-history” aware refinement of the CHR semantics. Hence, applying
rule (Mono) twice will remove f1,1 and f1,2 (and equate the types of f1,1 and
f1,2 with f ). Similarly, we remove the cycles created by f2 .
We note that for CCR there is no need to equate the l component in case
of f ∈ MRF . The l component is always set at the use site of functions (see
constraint generation rules (Var-f)). Equating the l component would yield a
strictly weaker system.
Example 18. Consider f x = ( f1 ..., let g y = ... f2 ... in g) where we added numbers 1 and 2 to different (monomorphic) use sites of f. Here is a sketch of type inference f n ... f1 , ..., g n+m ... f1 , ..., f2 where natural numbers n and m refer to the number of CHR steps applied so far. Assume we fully equate f1 and f2 . Note that their local ll components differ (because this component is always exact!). Hence, type inference fails.
Note that (as before in case of ARF and NRF ) we only break cycles among
constraints which carry the same marker. This yields a more precise method.
Example 19. Assume we have a situation where

    ... f (Int), ...    ... f (t)⊖ , ...

Removing f (t)⊖ and adding t = Int might be too restrictive.
Note that by construction Mono never applies to ARF and NRF . Any potential cycle will be eventually broken. We can re-establish Lemma 2 and state a slightly weaker variant of Lemma 1 which guarantees soundness of our CHR-based inference approach.
Lemma 3 (CCR-Mono Soundness). Let P be a set of CHRs and C and C ′ two constraints such that C ∗P C ′ . Then P |= ∃̄fv(C) .C ′ ⊃ C.
Note that P |= C ⊃ ∃̄fv(C) .C ′ does not hold anymore. We may reject typable programs because we strictly enforce the (Mono) rule.
Example 20. Consider
h x = (h1 'a') && (h2 True)

Here is a sketch of type inference.

    h(t)     t = tx → Bool, h1 (Char → Bool), h2 (Bool → Bool)
             t = tx → Bool, Char → Bool = tx′ → Bool, h1,1 (Char → Bool),
              h1,2 (Bool → Bool), h2 (Bool → Bool)
     Mono    t = tx → Bool, Char → Bool = tx′ → Bool, Char → Bool = Char → Bool,
              h1,2 (Bool → Bool), h2 (Bool → Bool)
     Mono    t = tx → Bool, Char → Bool = tx′ → Bool, Char → Bool = Char → Bool,
              Char → Bool = Bool → Bool, h2 (Bool → Bool)
     ↔       False
It is interesting to note that our type inference scheme for recursive functions is more relaxed than the one found in some other established type checkers.
Example 21. Consider the following program.
e :: Bool
e = g

f :: Bool -> a
f = g

g = f e
In the case of GHC, the following error is reported:
mono-rec.hs:5:
Couldn’t match ‘Bool -> a’ against ‘Bool’
Expected type: Bool -> a
Inferred type: Bool
In the definition of ‘f’: f = g
The problem reported here stems from the fact that within the mutually
recursive binding group consisting of e, f and g, f is assigned two non-unifiable
types, Bool and Bool → a: the first because it must have the same type as g,
which according to e must be Bool; and the second because of its type declaration.
Our translation scheme is more liberal than this, in that g’s type within e
and f may be different. Essentially, we only require that the type of a variable
be identical at all locations within the mutually recursive subgroup if a type
declaration has been provided for that variable.
(Simplified) translation of the above program to CHRs yields:
ea (t) ⇐⇒ t = Bool
e(t) ⇐⇒ g(t)
fa (t) ⇐⇒ t = Bool → a
f (t) ⇐⇒ g(t)
g(t) ⇐⇒ ea (te ), fa (tf ), tf = te → Bool
It is clear from the above that there are no cycles present amongst these rules.
We can use them to successfully infer a type for any of the variables in the
program.
In fact, our handling of binding groups is similar to [Jon99] where type inference of binding groups proceeds as follows:
1. Extend the type environment with the type signatures. In this case f :: forall a. Bool -> a and e :: Bool.
2. Do type inference on the bindings without type signatures, in this case g = f e. Do generalisation too, and extend the environment, giving g :: forall a. a.
3. Now, and only now, do type inference on the bindings with signatures.
THE STRUCTURE OF DGA RESOLUTIONS OF MONOMIAL
IDEALS
arXiv:1610.06526v1 [] 20 Oct 2016
LUKAS KATTHÄN
Abstract. Let I ⊂ k[x1 , . . . , xn ] be a squarefree monomial ideal in a polynomial ring. In this paper we study multiplications on the minimal free resolution F of S/I. In particular, we characterize the possible vectors of total Betti numbers
for such ideals which admit a differential graded algebra (DGA) structure on F.
We also show that under these assumptions the maximal shifts of the graded
Betti numbers are subadditive.
On the other hand, we present an example of a strongly generic monomial
ideal which does not admit a DGA structure on its minimal free resolution.
In particular, this demonstrates that the Hull resolution and the Lyubeznik
resolution do not admit DGA structures in general.
Finally, we show that it is enough to modify the last map of F to ensure
that it admits the structure of a DG algebra.
Introduction
Let S be a polynomial ring over a field, I ⊂ S be a monomial ideal, and let
F denote the minimal free resolution of S/I over S. The multiplication map
S/I ⊗S S/I → S/I can be lifted to a “multiplication” ∗ : F ⊗S F → F, which in
general is associative and graded-commutative only up to homotopy. Moreover,
∗ is unique only up to homotopy. It is known that ∗ can always be chosen to be
graded-commutative “on the nose” (cf. [BE77, Prop 1.1]), but in general it is not
possible to choose ∗ such that it is associative (cf. [Avr74, Appendix], [Avr81]).
If ∗ is graded-commutative and associative, then it gives F the structure of a
differential graded algebra (DGA). In this situation, we say that S/I admits a
minimal DGA resolution. Starting with the work of Buchsbaum and Eisenbud
[BE77], a lot of research has been devoted to the study of DGA resolutions, see
for example [Avr81] or [Kus94] and the references therein. In the present paper,
we study the case where I is generated by squarefree monomials and consider
only multiplications which respect the multigrading. These restrictions allow us
to prove much stronger statements than in the general situation. Our results can
be organized along the following questions:
(1) Which ideals admit a minimal DGA resolution? Which constructions of
(not necessarily minimal) resolutions yield DGA resolutions?
(2) What are the consequences if a given ideal admits a minimal DGA resolution?
(3) How unique or non-unique are the DGA structures?
(4) What can be said about the structure of DGA resolutions as algebras?
2010 Mathematics Subject Classification. Primary: 05E40; Secondary: 13D02,16W50,13F55.
Key words and phrases. Monomial ideal; Free resolution; Differential graded Algebra.
There are a few classes of ideals in local rings which are known to admit minimal
DGA resolutions, see [Kus94] for an overview. In the context of monomial ideals,
this is the case for stable ideals [Pee96], matroidal ideals [Skö11] and edge ideals of
cointerval graphs [Skö16]. Another class of ideals whose minimal free resolutions
are well-understood are generic monomial ideals. However, in Theorem 5.1, we
present an example of a (strongly) generic monomial ideal which does not admit a
minimal DGA resolution. The example also demonstrates that the hull resolution
[BS98] and the Lyubeznik resolution [Lyu88] do not admit a DGA structure in
general.
On the positive side, we show in Theorem 2.1 that for every monomial ideal I
there always exists a monomial m ∈ S such that S/(mI) admits a minimal DGA
resolution. Note that the minimal free resolutions of S/I and S/(mI) differ only
in the last map. Therefore, one can interpret this result as saying that the other
maps in the resolution to not contain obstructions to the existence of a DGA
structure. This result actually holds in greater generality for homogeneous ideals
in S and even for ideals in regular local domains.
Concerning the second question, it is a beautiful result by Buchsbaum and
Eisenbud [BE77, Proposition 1.4], that if I admits a minimal DGA resolution,
then one can conclude certain lower bounds on the Betti numbers of S/I. In the
context of squarefree monomial ideals, we sharpen this result by completely characterizing the possible total Betti numbers of ideals admitting a minimal DGA
resolution (Theorem 4.1). Moreover, we show that if I is such an ideal, then
its graded Betti numbers satisfy the subadditivity condition of [ACI15] (Proposition 4.4).
It is well-known that DGA structures on minimal free resolutions do not need
to be unique. Our Example 3.2 demonstrates that this even fails for squarefree
monomial ideals, thus answering a question of Buchsbaum and Eisenbud [Pee10,
Open Problem 31.4]. On the other hand, we show that at least a certain part of
the multiplication is indeed unique, see Proposition 3.1 for the precise statement.
Concerning the fourth question, we show that F is essentially generated in
degree 1 with respect to any multiplication. See Proposition 3.4 for the exact
statement. As a consequence, whenever I admits a minimal DGA resolution
F, then it is a quotient of the Taylor resolution (which has a canonical DGA
structure) by a DG-ideal, see Theorem 3.6. Thus one can study the possible
DGA structures on F by considering DG-ideals in the Taylor resolution.
The key idea behind most of our results is the following simple observation: F is
a free S-module which is generated in squarefree degrees, so for any homogeneous
element f ∈ F, we can find an element f ′ ∈ F of squarefree degree, such that
f = mf ′ for a monomial m ∈ S. We call f ′ the squarefree part of f .
This paper is structured as follows. In Section 1 we recall some preliminaries
about multiplicative structures on resolutions, the Taylor resolution and the Scarf
complex, and we introduce the squarefree part of an element f ∈ F. In the next
section we prove Theorem 2.1 about the existence of minimal DGA resolutions
after multiplication with an element. The following Section 3 is devoted to the
study of the structure of multiplication on F. After that, in Section 4 we derive
two consequences of the existence of a minimal DGA resolution, namely our
characterization of the total Betti numbers and the subadditivity of syzygies.
In the penultimate Section 5 we present an example of a generic monomial
ideal which does not admit a minimal DGA resolution, and in the last section we
study multiplication on resolutions of ideals which are not necessarily squarefree.
1. Preliminaries and notation
Throughout the paper, let k be a field and S = k[x1 , . . . , xn ] be a polynomial
ring over k, endowed with the fine Zn -grading. We denote by m := ⟨x1 , . . . , xn ⟩
the maximal homogeneous ideal of S. Further, let I ⊂ S be a monomial ideal
and let
    F : · · · −∂→ F2 −∂→ F1 −∂→ F0 → S/I → 0
denote the minimal free resolution of S/I. Note that F is Zn -graded and the
differential ∂ respects the multigrading. The multigraded Betti numbers of S/I are denoted by β^S_{i,a}(S/I) = Tor^S_i(S/I, k)_a . We often identify F with the free multigraded S-module ⊕_i F_i . For an element f ∈ F we denote its multidegree by deg(f ) and its homological degree by |f |.
We consider the componentwise order on Zn and write a ∧ b and a ∨ b for the
componentwise minimum and maximum of a, b ∈ Zn , respectively.
1.1. Multiplications on resolutions. We recall the definition of a DGA structure.
Definition 1.1. A differential graded algebra (DGA) structure on F is an S-linear
map ∗ : F ⊗S F → F satisfying the following axioms for a, b, c ∈ F:
(1) ∗ extends the usual multiplication on F0 = S,
(2) ∂(a ∗ b) = ∂(a) ∗ b + (−1)|a| a ∗ ∂b (Leibniz rule),
(3) |a ∗ b| = |a| + |b| (homogeneity with respect to the homological grading),
(4) a ∗ b = (−1)|a|·|b| b ∗ a (graded commutativity), and
(5) (a ∗ b) ∗ c = a ∗ (b ∗ c) (associativity).
We will also consider non-associative multiplications. To make this precise,
we call a map ∗ : F ⊗S F → F a multiplication if it satisfies all the axioms of a
DGA except possibly the associativity. Moreover, we make the convention that
in this paper, every multiplication respects the multigrading on F, unless specified
otherwise. The only occasion when we consider more general multiplication is in
Section 5.
While F does not always admit a DGA structure, it always admits a multiplication, cf. [BE77, Prop 1.1], and the multiplication is unique up to homotopy. Explicitly, this means that when ∗1 , ∗2 are two multiplications, then there
exists a map σ : F ⊗S F → F raising the homological degree by 1 such that a ∗1 b = a ∗2 b + ∂σ(a, b) + σ(∂a, b) + (−1)^{|a|} σ(a, ∂b) for a, b ∈ F.
1.2. The Taylor resolution and the Scarf complex. We recall the definitions
of the Taylor resolution and the Scarf complex of I. We refer the reader to
p.67 and Section 6.2 of [MS05], respectively, for further information about these
constructions.
Let G(I) denote the set of minimal monomial generators of I and choose a
total order ≺ on G(I). The choice of the order affects only the signs in the
computations.
Definition 1.2. The Taylor resolution T of S/I is the complex of free S-modules
with basis {gW : W ⊆ G(I)}. The basis elements are graded by |gW | := #W
and deg gW := deg mW , where mW := lcm(m : m ∈ W ) for W ⊆ G(I). Further,
the differential is given by
    ∂g_W = Σ_{m∈W} (−1)^{σ(m,W)} (m_W / m_{W∖{m}}) g_{W∖{m}} ,
where σ(m, W ) := #{m′ ∈ W, m′ ≺ m}.
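As a small illustration (our toy example, not taken from the text): let I = (xy, yz) ⊂ k[x, y, z] with m_1 = xy ≺ m_2 = yz, so that m_{{m_1,m_2}} = xyz. Then

    ∂g_{{m_1,m_2}} = (xyz/yz) g_{{m_2}} − (xyz/xy) g_{{m_1}} = x g_{{m_2}} − z g_{{m_1}} ,    ∂g_{{m_i}} = m_i g_∅ ,

and one checks directly that ∂∂g_{{m_1,m_2}} = x(yz) − z(xy) = 0.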
The Taylor resolution is a free resolution of S/I, but typically not a minimal
one. It was shown by Gemeda [Gem76] (see also [Pee10, Proposition 31.3]) that
it carries a DGA structure with the multiplication given by
    g_W ∗ g_V =  (−1)^{σ(W,V)} (m_W m_V / m_{W∪V}) g_{W∪V}    if W ∩ V = ∅,
                 0                                             otherwise,
where σ(W, V ) := #{(m, m′ ) ∈ W × V : m′ ≺ m}. To simplify the notation we
occasionally write gm1 m2 ... instead of g{m1 ,m2 ,... } for m1 , m2 , . . . ∈ G(I).
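Continuing the toy example I = (xy, yz) from above (ours, for illustration), we have σ({m_1}, {m_2}) = 0 since m_1 ≺ m_2, so

    g_{m_1} ∗ g_{m_2} = ((xy)(yz)/xyz) g_{m_1 m_2} = y g_{m_1 m_2} ,

and the Leibniz rule is visible directly: ∂(y g_{m_1 m_2}) = xy g_{m_2} − yz g_{m_1} = (∂g_{m_1}) ∗ g_{m_2} − g_{m_1} ∗ (∂g_{m_2}).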
Definition 1.3. The Scarf complex ∆I of S/I is the simplicial complex with
vertex set G(I), where a set W ⊂ G(I) is contained in ∆I if and only if there
does not exist another set V ⊂ G(I) with V 6= W and lcm(m : m ∈ W ) =
lcm(m : m ∈ V ).
Moreover, the algebraic Scarf complex F∆I is the subcomplex of the Taylor
resolution generated by the generators gW , W ∈ ∆I .
The algebraic Scarf complex is always a subcomplex of the minimal free resolution of S/I (cf. [MS05, Proposition 6.12]), but in general it is not acyclic. As it is a subcomplex, we will use the notation gW also for those generators of F which lie in the Scarf complex.
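For small examples the Scarf complex can be computed by brute force directly from Definition 1.3; the following sketch (our names, exponential running time) enumerates all subsets of G(I) and keeps those whose lcm is attained by no other subset.

```python
from itertools import combinations

def scarf_complex(gens, n):
    """The sets W of generators whose lcm is distinct from that of every
    other subset; gens is a list of exponent tuples of length n."""
    def lcm_of(W):
        return tuple(max((m[i] for m in W), default=0) for i in range(n))
    subsets = [frozenset(W) for r in range(1, len(gens) + 1)
               for W in combinations(gens, r)]
    lcms = {W: lcm_of(W) for W in subsets}
    return [set(W) for W in subsets
            if all(lcms[V] != lcms[W] for V in subsets if V != W)]
```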
1.3. The squarefree part. The following very basic observation turns out to
be the key to many of our results:
Lemma 1.4. Let F be a free S-module and a ∈ Nn. Assume that the degree of every generator of F is componentwise less than or equal to a. Then every homogeneous element f ∈ F can be written as f = mf′ for a monomial m ∈ S and f′ ∈ F with deg f′ = (deg f) ∧ a. Both m and f′ are uniquely determined.
This lemma is almost obvious, but we include a proof for completeness.
Proof. Choose a basis of F and let G be the set of those basis elements whose degrees are less than or equal to deg f. By assumption, f can be written as a linear combination of elements of G, and we write cg for the coefficient of g in this expansion. Then it holds that

f = ∑_{g∈G} cg x^{deg(f)−deg(g)} g = x^{deg(f)−(a∧deg(f))} ∑_{g∈G} cg x^{(a∧deg(f))−deg(g)} g.
All exponents are non-negative by our hypothesis, so we have shown the existence
of the claimed factorization.
The uniqueness of m is trivial, because there is only one monomial of the
correct multidegree. Using this, the uniqueness of f ′ follows from the fact that S
is a domain and F is torsion-free.
Definition 1.5. Let F be a free S-module such that the degree of every generator is squarefree, i.e., componentwise less than or equal to (1, . . . , 1) ∈ Nn. In the situation of Lemma 1.4, we call f′ the squarefree part of f and denote it by |f|_sqf.
Note that the map f ↦ |f|_sqf is k-linear, but not S-linear. We will apply this definition almost exclusively to the minimal free resolution F of S/I for a squarefree monomial ideal I ⊂ S.
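On the level of multidegrees, the factorization of Lemma 1.4 is a purely componentwise computation. A minimal sketch (our names), which for a = (1, . . . , 1) computes the degree of the squarefree part:

```python
def factor_degree(deg_f, a):
    """Split deg(f) as deg(m) + deg(f') with deg(f') = deg(f) wedge a,
    as in Lemma 1.4; returns the exponent tuples of m and f'."""
    wedge = tuple(min(d, b) for d, b in zip(deg_f, a))
    m = tuple(d - w for d, w in zip(deg_f, wedge))
    return m, wedge

# With a = (1, ..., 1) this extracts the degree of |f|_sqf:
print(factor_degree((3, 1, 0, 2), (1, 1, 1, 1)))  # ((2, 0, 0, 1), (1, 1, 0, 1))
```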
2. Existence of DGA structures up to multiplication with an
element
Our first result is the following theorem. It implies that it is enough to modify the
last map of a minimal free resolution to ensure the existence of a DGA structure.
Theorem 2.1. Let S be a regular local ring or a polynomial ring, and let I ⊆ S be an ideal. There exists a homogeneous element s ∈ S, s ≠ 0, such that the minimal
free resolution of S/(sI) admits a DGA structure.
If I is a monomial ideal (and S a polynomial ring), then s can be chosen to be
the least common multiple of the generators of I.
We need the following lemma for the proof of the theorem.

Lemma 2.2. Let Q = k[t1^{±1}, . . . , tn^{±1}] be a Zn-graded Laurent polynomial ring for n ≥ 0. Let

F : 0 → Fp → · · · → F1 → F0 → 0

be an exact complex of graded Q-modules and assume that F0 = Q. Then the multiplication on F0 can be extended to a graded-commutative DGA structure on F.
Proof. Let ∂ denote the differential of F. We claim that there exists a map
σ : F → F of homological degree 1 such that ∂ ◦ σ + σ ◦ ∂ = idF and σ ◦ σ = 0.
Indeed, every graded module over Q is free (cf. [GW78, Theorem 1.1.4]), so we
can choose splittings Fi ≅ Vi ⊕ ∂(Fi+1). We define σi|Vi = 0 and σi(∂(f)) = f for
f ∈ Fi+1 . It is not difficult to see that this gives indeed a map σ with the claimed
properties.
We define the DGA structure on F inductively by the formula

a ∗ b := ab  if |a| = 0 or |b| = 0,
a ∗ b := σ(∂(a) ∗ b + (−1)^{|a|} a ∗ ∂(b))  otherwise,

where the multiplication in the first case is the one from F0 = Q.
This multiplication clearly satisfies the Leibniz rule and it extends the multiplication on F0 . It remains to show that ∗ is graded-commutative and associative.
Note that σ = σ ◦ ∂ ◦ σ. We proceed by induction on the homological degree, the
base case being clear. It holds for a, b, c ∈ F:
a ∗ (b ∗ c) = σ(∂(a ∗ (b ∗ c)))
= σ(∂a ∗ (b ∗ c) + (−1)^{|a|} a ∗ (∂b ∗ c) + (−1)^{|a|+|b|} a ∗ (b ∗ ∂c))
(#)= σ((∂a ∗ b) ∗ c + (−1)^{|a|} (a ∗ ∂b) ∗ c + (−1)^{|a|+|b|} (a ∗ b) ∗ ∂c)
= σ(∂((a ∗ b) ∗ c))
= (a ∗ b) ∗ c,
where in (#) we use the induction hypothesis. The commutativity is verified
analogously.
Proof of Theorem 2.1. Let F be the minimal free resolution of S/I. Let Q be the
subring of the field of fractions of S obtained by adjoining inverses of all nonzero homogeneous elements of S, and set FQ := F ⊗S Q. Note that Q is a multivariate Laurent
polynomial ring over some field k. Moreover, Q is flat over S and hence FQ is
exact. So by Lemma 2.2, it can be endowed with a DGA structure ∗.
Choose a basis for each Fi . Then the multiplication on FQ can be represented
as a matrix with entries in Q. Hence we can choose an element s ∈ S such
that sq ∈ S for each entry q of this matrix. Let F′ be the subcomplex of F
defined by F′0 := F0 and F′i := sFi for i ≥ 1. We claim that F′ is closed under multiplication. Indeed, for sa, sb ∈ F′ the choice of s implies that s(a ∗ b) ∈ F and thus sa ∗ sb = s · s(a ∗ b) ∈ F′.
Note that F′ is isomorphic to F except in degree 0, so in particular it is exact
in every other degree. In degree 0, it holds that H0 (F′ ) = F0 /∂(sF1 ) = S/(sI).
Thus F′ is the minimal free resolution of S/(sI).
Finally, consider the case that I is a monomial ideal in a polynomial ring
S = k[x1 , . . . , xn ]. Let a, b ∈ F be two homogeneous elements. The product
a ∗ b ∈ FQ can be written as a sum
∑_{g∈B} λg x^{deg(a)+deg(b)−deg(g)} g,
where B is an S-basis of F and λg ∈ k. Let now s be the lcm of all generators
of I. Then the multidegrees of all elements of B are less than or equal to deg(s), and
hence s(a ∗ b) ∈ F. Now one can argue as above.
3. Properties of multiplications
3.1. Uniqueness. The multiplication on F is in general not unique. However, the
part of the multiplication which lives on the algebraic Scarf complex is uniquely
determined, as the next proposition shows.
Proposition 3.1. Let I ⊂ S be a squarefree monomial ideal with minimal free
resolution F, and let further ∗ be a multiplication on F. Then, for V, W ∈ ∆I
with V ∪ W ∈ ∆I , it holds that
gW ∗ gV = (−1)^{σ(W,V)} (mW mV / m_{W∪V}) g_{W∪V}  if W ∩ V = ∅,  and  gW ∗ gV = 0  otherwise,
where σ(W, V ) is defined as above. In particular, the product gW ∗ gV does not
depend on the choice of ∗.
Proof. Let B be an S-basis for F. For every g ∈ B there exists a (not necessarily
unique) subset Ug ⊂ G(I) such that deg g = deg mUg and |g| = #Ug . This
follows from the fact that F is a direct summand of the Taylor resolution.
Expand gW ∗ gV in the basis B:
gW ∗ gV = ∑_{g∈B} cg g.
Assume that cg ≠ 0 for some g ∈ B. As I is squarefree, it holds that deg m_{Ug} = deg g ≤ deg |gW ∗ gV|_sqf = deg(gW) ∨ deg(gV) = deg m_{W∪V}. Hence deg m_{Ug∪W∪V} = deg m_{W∪V}. But W ∪ V is contained in the Scarf complex, and hence it follows that Ug ∪ W ∪ V = W ∪ V and hence Ug ⊆ V ∪ W. On the other hand, it holds that #Ug = |g| = #W + #V ≥ #(W ∪ V). So we can conclude that Ug = V ∪ W and that #W + #V = #(W ∪ V). Hence if W and V intersect non-trivially then gW ∗ gV is zero, and if they are disjoint, then gW ∗ gV = λ (mW mV / m_{W∪V}) g_{W∪V} for a scalar λ ∈ k. It is clear by induction that the scalars λ are uniquely determined by the Leibniz rule. Moreover, the values given in the claim satisfy the Leibniz rule, hence we have determined the product gW ∗ gV.
The following example shows that the multiplication is not unique in general.
This answers a question of Buchsbaum and Eisenbud [Pee10, Open Problems
31.4].
Example 3.2. Let I ⊂ S = Q[x1 , x2 , x3 , x4 , y, z] be the ideal with generators
a := x1 x2 y, b := x2 x3 , c := x3 x4 z and d := x4 x1 . One can verify with Macaulay2
that the minimal free resolution F of S/I equals its algebraic Scarf complex. The
Scarf complex ∆I is the simplicial complex on the vertex set {a, b, c, d} whose facets are the two triangles {a, b, d} and {b, c, d}, glued along the edge {b, d}.
Consider the product ga ∗ gc. Both a and c are vertices of ∆I, but {a, c} ∉ ∆I, so the preceding proposition does not apply. And indeed, for any λ ∈ k the choice

ga ∗ gc := λ(x4z gab + x1y gbc) + (1 − λ)(x3z gad − x2y gcd)
satisfies the Leibniz rule. This can be extended to a multiplication on F, which
is even a DGA structure, because the length of the resolution is three (cf. [BE77, Prop. 1.3]).
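The shape of the Scarf complex claimed in this example can be confirmed with the scarf_complex sketch from Section 1.2; the exponent tuples below are our encoding of the four generators.

```python
# Variables ordered as (x1, x2, x3, x4, y, z):
a = (1, 1, 0, 0, 1, 0)   # x1*x2*y
b = (0, 1, 1, 0, 0, 0)   # x2*x3
c = (0, 0, 1, 1, 0, 1)   # x3*x4*z
d = (1, 0, 0, 1, 0, 0)   # x4*x1

faces = scarf_complex([a, b, c, d], 6)
print(sorted(len(F) for F in faces))
# [1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3]: four vertices, five edges, two triangles.
# {a, c} is not a face, since lcm(a, c) = lcm(a, b, c) = x1*x2*x3*x4*y*z.
```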
The next example illustrates that one cannot omit the assumption of I being
squarefree from Proposition 3.1.
Example 3.3. Consider the ideal I ⊂ S = k[x, y, z] generated by a := x², b := xy and c := xz. The algebraic Scarf complex of this ideal coincides with its Taylor resolution T. So if I were squarefree, then the DGA structure on T described above would be the only possible multiplication. However, we can modify the multiplication by setting gb ∗ gc := zgab − ygac. The Leibniz rule requires that in addition we set gb ∗ gac := gc ∗ gab := 0. See Table 1 for the full multiplication table of this new multiplication. It is even a DGA structure, again because the projective dimension of S/I is less than four.
 ∗    | ga     | gb            | gc            | gab | gac | gbc   | gabc
 ga   | 0      | xgab          | xgac          | 0   | 0   | xgabc | 0
 gb   | −xgab  | 0             | zgab − ygac   | 0   | 0   | 0     | 0
 gc   | −xgac  | −zgab + ygac  | 0             | 0   | 0   | 0     | 0
 gab  | 0      | 0             | 0             | 0   | 0   | 0     | 0
 gac  | 0      | 0             | 0             | 0   | 0   | 0     | 0
 gbc  | xgabc  | 0             | 0             | 0   | 0   | 0     | 0
 gabc | 0      | 0             | 0             | 0   | 0   | 0     | 0

Table 1. The multiplication table of Example 3.3.
3.2. Structure. The next result shows that if I is squarefree, then F is essentially
generated in degree 1 for any multiplication on it:
Proposition 3.4. Let I ⊂ S be a squarefree monomial ideal with minimal free
resolution F. Let ∗ be a multiplication on F. Then every homogeneous element
f ∈ F of squarefree degree can be written as
f = ∑_j |g_{1,j} ∗ (g_{2,j} ∗ (· · · (g_{|f|−1,j} ∗ g_{|f|,j}) · · · ))|_sqf,

with g_{ℓ,j} ∈ F1 for 1 ≤ ℓ ≤ |f| and all j.
Proof. Let g1,1 ∈ F1 be a basis element with deg g1,1 ≤ deg f. Such an element exists,
because f is a linear combination of basis elements, and the degree of each basis
element is an lcm of degrees of basis elements of F1 . Note that deg g1,1 ≤ deg f
implies that f = |∂(g1,1 ) ∗ f |sqf . Hence it holds that
r := f − |g1,1 ∗ ∂f |sqf = |∂(g1,1 ∗ f )|sqf = ∂(|g1,1 ∗ f |sqf ) ∈ mF.
In the last step we used that F is minimal, so ∂F ⊂ mF. Now, ∂f has a smaller
homological degree than f , so it can be decomposed into a product by induction.
Moreover, note that r ∈ mF implies that r is an S-linear combination of elements
of Fi of strictly smaller degree than f . So these elements can be decomposed by
induction as well. To conclude that f can be decomposed as claimed, we note the following fact: whenever for f′ ∈ F and a monomial m ∈ S the degree of m|f′|_sqf is squarefree, then it follows that m|f′|_sqf = |m(|f′|_sqf)|_sqf = |mf′|_sqf.
The preceding result does not hold if I is not squarefree.
Example 3.5. Consider the multiplication constructed in Example 3.3. Under
that multiplication, gbc cannot be written as a product of elements of degree one.
We will see in Section 6 below that every monomial ideal admits at least one
multiplication which satisfies the conclusion of Proposition 3.4. This is no longer
true for more general ideals. Indeed, if I ⊂ S is a homogeneous ideal with three
generators satisfying the conclusion of Proposition 3.4, then the third Betti number is bounded by the number of non-associative, graded-commutative products
of three elements. On the other hand, it is a celebrated result of Bruns [Bru76]
that any free resolution can be modified to yield a resolution of an ideal with three
generators, so there does not exist a global bound on the third Betti number.
The preceding proposition leads to the following structure theorem for DGA
structures on F:
STRUCTURE OF DGA STRUCTURES
9
Theorem 3.6. Let I ⊂ S be a squarefree monomial ideal. Assume that the
minimal resolution F of S/I is a multigraded DGA.
Then there exists a DG-ideal J ⊂ T in the Taylor resolution of S/I, such that F ≅ T/J as multigraded DGAs over S.
Proof. Let Q := S[x1^{−1}, . . . , xn^{−1}] denote the Laurent polynomial ring. As T ⊗S Q
is an exterior algebra over Q, we can define a map ϕ : T ⊗S Q → F ⊗S Q of
DGAs by setting ϕ(gm ) := gm , where the latter is interpreted as element of the
algebraic Scarf complex of S/I.
We need to show that ϕ restricts to a map ϕ : T → F, where we consider T
and F as subalgebras of T ⊗ Q and F ⊗ Q in the natural way. For this, consider
a subset A = {m1 < m2 < . . . < ms } ⊆ G(I). It holds that
m1 ∨ · · · ∨ ms
gA =
gm1 ∗ gm2 ∗ · · · ∗ gms ,
m1 · · · ms
and, consequently, that
m1 ∨ · · · ∨ ms
ϕ1 (gm1 ) ∗ ϕ1 (gm2 ) · · · ϕ(gms )
ϕ(gA ) =
m1 · · · ms
= |ϕ1 (gm1 ) ∗ ϕ1 (gm2 ) · · · ϕ(gms )|sqf .
The last expression is clearly contained in F, and as T is generated by the elements
(gA )A⊆G(I) it follows that ϕ maps T to F.
It remains to show that the restriction of ϕ to T is surjective onto F. But this is clear from Proposition 3.4, because ϕ is surjective in homological degree 1 and it commutes with taking the squarefree part (the latter holds for any homogeneous morphism). Setting J := ker(ϕ|T), which is a DG-ideal because ϕ is a morphism of DGAs, we conclude that F ≅ T/J as multigraded DGAs.
The following example shows that one cannot omit the assumption of squarefreeness in the preceding Theorem.
Example 3.7. We continue with Example 3.3 and consider the DGA structure on F constructed in that example. Let ϕ : T ⊗ Q → F ⊗ Q be the map from the proof of Theorem 3.6. Then

ϕ(gbc) = ϕ((1/x) gb ∗ gc) = (1/x) ϕ(gb) ∗ ϕ(gc) = −(z/x) gab + (y/x) gac,
which is not contained in F. So ϕ does not restrict to a map T → F in this case.
In fact, F is not the image of T under any map of DGAs, because F ⊗ Q is not
generated in homological degree 1.
The next example illustrates how one can use Theorem 3.6 to prove that a
given ideal does not admit a (multigraded) minimal DGA resolution.
Example 3.8. Let I ⊆ S = Q[x1 , x2 , x3 , x4 , x5 , x6 ] be the ideal generated by
a := x1 x2 , b := x2 x3 , c := x3 x4 , d := x4 x5 and e := x5 x6 . This is a well-known
example by Avramov which does not admit a minimal DGA resolution [Avr81,
Example I].
Assume to the contrary that its minimal free resolution F admits a DGA structure. Then by Theorem 3.6 there exists a DG-ideal J ⊂ T in its Taylor resolution such that F ≅ T/J. We have that β^S_{3,(1,1,1,1,0,0)}(S/I) = 0 and hence gabc ∈ J. As J is a DG-ideal, it also holds that (∂gabc) ∗ ge ∈ J. Moreover, gabc ∗ gd = x3 gabcd ∈ J. As T/J is free, J is saturated with respect to the
variables and hence also gabcd ∈ J. Similarly, β^S_{3,(0,0,1,1,1,1)}(S/I) = 0 implies that ga ∗ (∂gcde) ∈ J and gbcde ∈ J. It follows that

f := (∂gabc) ∗ ge − ga ∗ (∂gcde) − x1 ∂gbcde − x4 ∂gabcd = x2 x3² gabe − x2² x3 gade + x1 x2 x3 gbde − x2 x3 x4 gabd
is contained in J. On the other hand, the sets {a, b, e}, {a, d, e}, {b, d, e} and
{a, b, d} lie in the Scarf complex of I, so the images of the corresponding generators of T are part of a basis of F. But this is a contradiction, because f is a
relation between these elements.
4. Consequences of the existence
4.1. Betti numbers. In this section, we prove the following characterization of
the possible Betti vectors of squarefree monomial ideals admitting minimal DGA
resolutions.
Theorem 4.1. Let f = (1, f1 , f2 , . . . ) ∈ Nν be a finite sequence of natural numbers. Then the following conditions are equivalent:
(1) There exists a squarefree monomial ideal I in some polynomial ring S
whose minimal free resolution is a DGA, such that β^S_i(S/I) = f_i for all i.
(2) f is the f -vector of a simplicial complex ∆ which is a cone.
Recall that the f -vectors of simplicial complexes are characterized by the
Kruskal-Katona theorem [Sta96, Theorem 2.1]. From this one can derive an explicit characterization of the f -vectors of cones, see also [Kal85]. For the implication “1) ⇒ 2)” we use the following general result.
Proposition 4.2. Let A = ⊕_{k≥0} Ak be a DGA over a field k. Assume that A0 = k, that A is generated in degree 1, that dim_k A1 < ∞ and finally that the
differential of A is not identically zero. Then the Hilbert function of A equals the
f -vector of a simplicial complex which is a cone.
Aramova, Herzog and Hibi [AHH97, Theorem 4.1] characterize the Hilbert functions of graded-commutative algebras which are generated in degree one. These
are indeed the same as the f-vectors of simplicial complexes. Our result extends this by showing that the existence of a nonzero differential yields a further restriction on the Hilbert function.
Proof. Let C ⊂ A be the subalgebra of cycles. We claim that C is also generated
in degree 1 and that A ≅ C ⊕ C[−1] as k-vector spaces, where C[−1] denotes the vector space C with degrees shifted by one. These two claims are sufficient to prove our result. Indeed, by the above mentioned [AHH97, Theorem 4.1], the Hilbert function of C is the f-vector of some simplicial complex ∆. An elementary computation shows that in this case the Hilbert function of A ≅ C ⊕ C[−1] is
the f -vector of the cone over ∆.
Now we turn to the proof of our claims. Let C1 ⊂ A1 be the space of cycles in A1
and write C ′ for the algebra generated by them. By assumption, the differential
∂ of A does not vanish identically. As A is generated in degree one, the Leibniz
rule implies that the first component ∂1 : A1 → A0 = k is nonzero, hence there
exists an element f ∈ A1 with ∂f = 1 and (thus) A1 = C1 ⊕ kf . The latter
implies that A = C′ + f C′.
Hence any element a ∈ A can be written as a = c1 + f c2 with c1 , c2 ∈ C ′ .
In particular, if a ∈ C, then 0 = ∂a = c2 , so a = c1 ∈ C ′ and thus C = C ′ is
generated in degree one.
For the second claim, we first note that A = C ⊕ f C. Indeed, for any f c ∈ C ∩ f C it holds that 0 = ∂(f c) = c. Finally, it remains to show that f C ≅ C[−1]. For this it is sufficient to show that the multiplication by f is injective on C. But this is clear, because ∂(f c) = c for any c ∈ C, so the differential provides a left-inverse.
Proof of Theorem 4.1. 1) ⇒ 2): Let Q be the field of fractions of S and let F be the minimal free resolution of S/I. It follows from Proposition 3.4 that F ⊗S Q is generated in degree 1 as a Q-algebra. Hence the claim is immediate from Proposition 4.2, applied to F ⊗S Q.
2) ⇒ 1): Set k := f1. Let ∆ ⊆ 2^{[k]} be a simplicial complex which is a cone with apex 1 and whose f-vector equals f. We can find a monomial ideal I in some
polynomial ring S, such that the lcm-lattice of I equals the face lattice of ∆,
augmented by a maximal element (cf. [IKMF14, Theorem 3.4], [Map13]).
Then ∆ is the Scarf complex of I. As ∆ is acyclic, its f -vector equals the Betti
vector of I (this can be seen by computing the Betti numbers via the lcm lattice,
see [GPW99, Theorem 2.1]).
It remains to show that the minimal free resolution of I admits a DGA structure. Let T be the Taylor resolution of I. We identify the set of generators of I with
the set [k] of vertices of ∆. To construct the minimal free resolution of S/I we
use discrete Morse theory [JW09; Skö06]. Let us recall the relevant aspects of
this technique. Let G be the directed graph with vertex set 2^{[k]} and edges from
V to W whenever the coefficient of gV in the expansion of ∂gW is nonzero. A
Morse matching M is a collection of edges (V, W ) in this graph with the following
properties:
(a) M is a matching, i.e. the edges are disjoint,
(b) reversing all edges of M in G results in an acyclic graph, and
(c) deg gV = deg gW for all (V, W ) ∈ M.
Given such a Morse matching, define
JM := span_k{gW, ∂gW : ∃ V ∈ 2^{[k]}, (V, W) ∈ M}.
Then the complex TM := T/JM is a free resolution of S/I, and a basis of it is given by the images of gW for those W ∈ 2^{[k]} which do not appear in the matching.
In our situation we define the Morse matching

M := {(g_{W\{1}}, gW) : W ⊆ [k], W ∉ ∆, 1 ∈ W}.
Recall that we assumed 1 to be the apex of ∆, so W ∉ ∆ and 1 ∈ W imply that W \ {1} ∉ ∆. This is clearly a matching, and it is easy to see that it satisfies property (b). For property (c), recall that the lcm lattice LI of I is the face lattice of ∆, together
with an additional maximal element. We are only matching generators which
are not in ∆, so the degrees of all those generators correspond to the maximal
element of LI , and thus they are equal.
Finally, the unmatched generators correspond exactly to the faces of ∆. As
we know that the Betti vector of I equals the f -vector of ∆, we can conclude
that TM := T/JM is a minimal free resolution of S/I. On the other hand, by
our choice of the matching M, JM is in fact a DG-ideal and hence TM is a
DG-algebra.
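The matching used in this proof is simple enough to set up explicitly. The following sketch (our names; ∆ is given as a set of frozensets of vertices, including the empty face) constructs M and confirms that the unmatched subsets are exactly the faces of ∆.

```python
from itertools import combinations

def cone_morse_matching(delta, k):
    """M = {(W \\ {1}, W) : W not in delta, 1 in W} for a cone delta with apex 1."""
    subsets = [frozenset(W) for r in range(k + 1)
               for W in combinations(range(1, k + 1), r)]
    matching = [(W - {1}, W) for W in subsets if W not in delta and 1 in W]
    matched = {W for pair in matching for W in pair}
    unmatched = [W for W in subsets if W not in matched]
    # For a cone with apex 1 the unmatched subsets are exactly the faces:
    assert sorted(map(sorted, unmatched)) == sorted(map(sorted, delta))
    return matching, unmatched

# Cone with apex 1 over two isolated vertices 2 and 3:
delta = {frozenset(F) for F in [(), (1,), (2,), (3,), (1, 2), (1, 3)]}
print(cone_morse_matching(delta, 3)[0])
# [(frozenset({2, 3}), frozenset({1, 2, 3}))]
```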
We give an example to show that the assumptions of I being squarefree and
admitting a minimal DGA resolution are both necessary in Theorem 4.1.
Example 4.3. Consider the ideal I := ⟨x1x2, x2x3, x3x4, x4x5, x5x6, x6x1⟩ ⊂
Q[x1 , . . . , x6 ]. Using Macaulay2 [GS], one can compute that its total Betti numbers are (1, 6, 9, 6, 2). We claim that this is not the f -vector of any simplicial
complex. Indeed, such a simplicial complex would have two 3-simplices. As each 3-simplex has four 2-faces, and two distinct 3-simplices can share at most one 2-face, we would need a total of at least seven 2-faces, but we have only six.
On the other hand, by Theorem 2.1 there exists a monomial m ∈ S such that
mI admits a minimal DGA resolution. However, mI is not squarefree, and indeed
its Betti vector still does not satisfy the conclusion of Theorem 4.1, as it coincides with the Betti vector of I.
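The counting argument above is an instance of the Kruskal-Katona theorem. As a sketch (our names), the violation can also be checked mechanically via the minimum-shadow bound:

```python
from math import comb

def cascade(m, r):
    """Cascade representation m = sum_i comb(a_i, i), with i = r, r-1, ..."""
    rep = []
    while m > 0 and r > 0:
        a = r
        while comb(a + 1, r) <= m:
            a += 1
        rep.append((a, r))
        m -= comb(a, r)
        r -= 1
    return rep

def min_shadow(m, r):
    """Kruskal-Katona lower bound on the number of (r-1)-sets below m r-sets."""
    return sum(comb(a, i - 1) for a, i in cascade(m, r))

def is_f_vector(f):
    """f[i] = number of i-dimensional faces, i.e. of (i+1)-element sets."""
    return all(f[i - 1] >= min_shadow(f[i], i + 1) for i in range(1, len(f)))

print(min_shadow(2, 4))            # 7: two tetrahedra need at least 7 triangles
print(is_f_vector((6, 9, 6, 2)))   # False, so (1, 6, 9, 6, 2) is ruled out
```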
4.2. Subadditivity of syzygies. For a monomial ideal I ⊂ S and 0 ≤ i ≤
pdim S/I we define
t_i := max{j : β^S_{i,j}(S/I) ≠ 0}.
We say that the syzygies of I are subadditive when tb ≤ ta + tb−a for all a, b
such that 1 ≤ a < b ≤ pdim S/I. Not every ideal has this property [ACI15],
but among monomial ideals no counterexample is known. Here, we show that
for squarefree monomial ideals, the existence of a DGA structure on the minimal
free resolution implies the subadditivity of syzygies.
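Once the maximal shifts t_i are known, for instance from a Betti table computed with Macaulay2, checking subadditivity is immediate. A minimal sketch with made-up illustrative values:

```python
def is_subadditive(t):
    """Check t_b <= t_a + t_{b-a} for 1 <= a < b, where t[i] = t_i, t[0] = 0."""
    p = len(t) - 1  # projective dimension
    return all(t[b] <= t[a] + t[b - a]
               for b in range(2, p + 1) for a in range(1, b))

print(is_subadditive([0, 2, 4, 5, 6]))  # True for these example values
```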
Proposition 4.4. If I ⊂ S is a squarefree monomial ideal which admits a minimal DGA resolution F, then its syzygies are subadditive.
The following lemma is needed for the proof of this result.
Lemma 4.5. Let f, g ∈ F be homogeneous. If |f ∗ g|_sqf ∉ mF, then there exist
f ′ , g ′ ∈ F \ mF of the same homological degrees as f and g, respectively, such that
deg(f ) ∨ deg(g) = deg(f ′ ) ∨ deg(g ′ ).
Proof. By choosing a basis of F, we may write f = ∑_i λ_i c_i f_i and g = ∑_j μ_j d_j g_j, with λ_i, μ_j ∈ k, c_i, d_j ∈ S monomials and f_i, g_j ∈ F \ mF. By our assumption, |f ∗ g|_sqf = ∑_{i,j} λ_i μ_j |c_i d_j f_i ∗ g_j|_sqf ∉ mF. So at least one summand is not contained in mF, say |c_1 d_1 f_1 ∗ g_1|_sqf. It follows easily from the definition of the squarefree part that there exists a monomial m ∈ S such that |c_1 d_1 f_1 ∗ g_1|_sqf = m|f_1 ∗ g_1|_sqf. But this does not lie in mF, so we can conclude that m = 1 and the claim follows with f′ := f_1 and g′ := g_1.
Proof of Proposition 4.4. Let f ∈ Fb be an element of total degree tb; we may assume that f ∉ mF. By Proposition 3.4, we can write f as

f = ∑_j |g_{1,j} ∗ (g_{2,j} ∗ (· · · (g_{b−1,j} ∗ g_{b,j}) · · · ))|_sqf,

with g_{ℓ,j} ∈ F1 for 1 ≤ ℓ ≤ b and all j. As f ∉ mF, the same holds for at least one summand, say for j = 1.
Using that the multiplication on F is associative, we may rewrite that summand
as |(g1,1 · · · ga,1 ) ∗ (ga+1,1 · · · gb,1)|sqf . Now we apply Lemma 4.5 to this product to
obtain f ′ ∈ Fa and g ′ ∈ Fb−a which are both not contained in mF and satisfy
deg f = deg(f ′ ) ∨ deg(g ′). Thus we conclude that
tb = deg(f) = deg(f′) ∨ deg(g′) ≤ deg(f′) + deg(g′) ≤ ta + tb−a.
If we do not assume that F admits a DGA structure, then our methods still
suffice to show the case a = 1. This was first shown by Herzog and Srinivasan in
[HS15, Corollary 4] using a different method.
Corollary 4.6. For a monomial ideal I ⊂ S, it holds that ti ≤ t1 + ti−1 for all
2 ≤ i ≤ pdim S/I.
Proof. We may replace I by its polarization and so assume it is squarefree. Further, choose any multiplication on F. Now we just note that the proof of Proposition 4.4 does not require the multiplication to be associative if a = 1.
5. Strongly generic ideals
A monomial ideal I is called strongly generic if no variable appears with the same nonzero exponent in two distinct minimal generators of I. This class of ideals was introduced and studied by Bayer, Peeva and Sturmfels [BPS98]¹, see also Chapter 6
of [MS05]. Strongly generic ideals have a number of desirable properties, and in
particular their minimal free resolution is given by the algebraic Scarf complex
[BPS98, Theorem 3.2]. However, this condition is not sufficient to ensure that the
ideal admits a minimal DGA resolution.
Theorem 5.1. The ideal ⟨x², xy, y²z², zw, w²⟩ ⊂ k[x, y, z, w] is strongly generic, but its minimal free resolution does not admit the structure of a DGA whose multiplication respects the standard Z-grading.
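Strong genericity itself is straightforward to test. The following sketch (our names and encoding) confirms it for the ideal of the theorem.

```python
def is_strongly_generic(gens):
    """No variable occurs with the same nonzero exponent in two generators."""
    n = len(gens[0])
    for i in range(n):
        nonzero = [g[i] for g in gens if g[i] > 0]
        if len(nonzero) != len(set(nonzero)):
            return False
    return True

# <x^2, xy, y^2 z^2, zw, w^2> with variables ordered as (x, y, z, w):
gens = [(2, 0, 0, 0), (1, 1, 0, 0), (0, 2, 2, 0), (0, 0, 1, 1), (0, 0, 0, 2)]
print(is_strongly_generic(gens))  # True
```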
This example is a variation of Example 3.8. The proof of Theorem 5.1 is very
technical, so we postpone it and discuss the result first.
Remark 5.2. Corollary 3.6 in [BPS98] states that strongly generic monomial
ideals admit a minimal DGA resolution. I was informed by I. Peeva that the
authors of that article are aware that the claim does not hold as stated and that
the statement and the proof were supposed to include an additional combinatorial
condition.
Theorem 5.1 implies that various kinds of resolutions do not admit a DGA
structure in general.
Corollary 5.3.
(1) There exists a monomial ideal whose hull resolution [BS98]
does not admit a DGA structure.
(2) There exists a monomial ideal whose Lyubeznik resolution [Lyu88] does
not admit a DGA structure.
¹ In [BPS98] these ideals are called generic. In the later article [MSY00] a weaker notion of genericity was introduced and the original one is called strongly generic since then.
(3) There exists a monomial ideal whose minimal free resolution is supported
on a simplicial complex (and in particular cellular) [BS98], but it does not
admit a DGA structure.
Proof. The ideal of Theorem 5.1 provides the counterexample in all three cases. This ideal is strongly generic, so its minimal free resolution is supported on its Scarf complex and it coincides with its hull resolution [MS05, Theorem 6.13]. The Lyubeznik resolution depends on the choice of a total order on the generators. One can check that the Lyubeznik resolution with respect to the order xy ≺ zw ≺ x² ≺ y²z² ≺ w² is minimal.
In fact, the minimal free resolution of Avramov’s Example 3.8 can also be
given the structure of a cellular resolution, though not with a simplicial complex
as support.
Proof of Theorem 5.1. In this proof we use both the multigrading and the standard Z-grading, so we refer to the latter as the total degree.
Let S = k[x, y, z, w] and let I be the ideal of the statement. We give names to
its generators as follows:
a := x², b := xy, c := y²z², d := zw, e := w².
The Scarf complex ∆I is the simplicial complex with the facets {b, c, d} and
{a, b, d, e}. As mentioned above, the minimal free resolution F of S/I is given by
its algebraic Scarf complex. The generators of F are listed in Table 2, together
with their total degrees.
Assume that F admits an associative multiplication which respects the Z-grading, which we denote by ∗. The map ∗ : F ⊗ F → F can be decomposed into its homogeneous components with respect to the Z⁴-grading. We write × for the component of multidegree (0, 0, 0, 0). Because the differential on F respects the multigrading, × also satisfies the Leibniz rule and lifts the multiplication on S/I. Hence, by the uniqueness of the multiplication up to homotopy, there exists a map σ : F ⊗ F → F[1] such that
a ∗ b = a × b + δσ(a, b)
for a, b ∈ F, where δσ = ∂ ◦ σ + σ ◦ ∂. Moreover, the component of σ in
multidegree (0, 0, 0, 0) vanishes, because ∗ and × coincide in this degree. We
proceed by proving a series of claims.
Claim 1. For i, j, k ∈ {a, b, d, e}, all products of the form gi × gj and gi × gjk
coincide with the corresponding products from the Taylor resolution.
If I were squarefree, then this would follow from Proposition 3.1. However, as
I is not squarefree, we need to verify this. We start in homological degree 2. For
any i, j ∈ {a, b, d, e}, i ≠ j, one can verify that the product gi × gj is a multiple of
gcd(i, j)gij , because there is no other element of the correct multidegree. Then
the Leibniz rule implies that gi × gj = ± gcd(i, j)gij . Moreover gi × gi = 0,
because ∗ and thus × was assumed to be graded-commutative.
Next, gi ×gij = 0 for all i, j because it is clearly a cycle, and the only boundary
of homological degree 3 is ∂gabde , which has a different multidegree. Moreover, for
pairwise distinct i, j, k ∈ {a, b, d, e}, it follows again by inspection that gi × gjk
is a multiple of gcd(i, lcm(j, k))gijk because this is the only element with the
required multidegree, and again the Leibniz rule determines the coefficient to be ±1.
In fact, the other products in the subalgebra generated by ga, gb, gd and ge also coincide with the product from the Taylor resolution, but we do not need this.

 generator(s)                    total degree
 ga, gb, gd, ge                  2
 gab, gde                        3
 gc, gad, gae, gbd, gbe          4
 gbc, gcd                        5
 gijk with {i, j, k} ≠ {b, c, d} 5
 gbcd                            6
 gabde                           6

Table 2. The total degrees of the generators of F.
Claim 2. It holds that (ga × gc) × ge − ga × (gc × ge) = yz∂gabde.
By considering the multigrading and using the Leibniz rule as above, one can show that ga × gc = yz²gab + xgbc and gc × ge = wgcd + y²zgde. Using this and Claim 1, one can compute that
∂(gbc × ge) = ∂(wgbcd + yzgbde).
This implies that gbc × ge = wgbcd + yzgbde + ∂(f) for some f ∈ F4. But the x-component of the multidegree of every element in F4 is at least 2, hence f = 0. Now we can compute that
(ga × gc) × ge = yz²(gab × ge) + x(gbc × ge) = yz²gabe + xwgbcd + xyzgbde.
Using essentially the same reasoning one can also show that
ga × (gc × ge) = y²zgade + xwgbcd + yzwgabd.
Hence it follows that (ga × gc) × ge − ga × (gc × ge) = yz∂gabde.
Claim 3. For any f ∈ F it holds that σ(1k , f ) = σ(f, 1k ) = 0.
For f ∈ F it holds that

f = 1k ∗ f = 1k × f + ∂σ(1k, f) + σ(∂1k, f) + σ(1k, ∂f),

where the term σ(∂1k, f) vanishes because ∂1k = 0. As 1k × f = f, it follows that ∂σ(1k, f) + σ(1k, ∂f) = 0.
We proceed now by induction on |f|. Consider the case |f| = 0, i.e. f = 1k.
Then σ(1k , 1k ) has multidegree (0, 0, 0, 0) and homological degree 1, which is only
possible if σ(1k , 1k ) = 0.
Assume now that |f| = i and σ(1k, g) = 0 for all g ∈ F with |g| < i. Then ∂σ(1k, f) = −σ(1k, ∂f) = 0 by the induction hypothesis. As F is acyclic, there exists an h ∈ F with |h| = |f| + 2 such that ∂h = σ(1k, f). We may assume that f is a generator of F. Now, inspection of Table 2 shows that for no generator f of F there exists an element h with deg h = deg f and |h| = |f| + 2. Hence σ(1k, f) = 0.
Claim 4. It holds that σ(gi , gj ) = 0 for i, j 6= c.
If σ(gi, gj) ≠ 0 for i, j ≠ c, then it is an element of total degree 4 and homological degree 3. But by Table 2, no such element exists.
Claim 5. It holds that gi ∗ gj = gi × gj for i, j ≠ c.
In general we have that gi ∗ gj = gi × gj + ∂σ(gi , gj ) + σ(∂gi , gj ) + σ(gi , ∂gj ).
All terms involving σ vanish because of Claim 3 and Claim 4.
Claim 6. For i, j, k ∈ {a, b, d, e}, it holds that σ(gi , gjk ) = 0.
First, assume that i = j or i = k. We only consider the case i = j, the other
case is similar. For this, we compute that
0 = (gi ∗ gi) ∗ gj = gi ∗ (gi ∗ gj) = gi ∗ (gi × gj) = ±μ gi ∗ gij = ±μ(gi × gij + δσ(gi, gij)) = ±μ ∂σ(gi, gij),

where μ = gcd(i, j). Here the third equality uses Claim 5, the fourth uses Claim 1, and the last uses that gi × gij = 0 together with Claims 3 and 4 to discard the remaining σ-terms. The differential is injective in homological degree 4, hence
∂σ(gi , gij ) = 0 implies that σ(gi , gij ) = 0.
Now we consider the case that i, j, k are all different. Then {i, j, k} contains
either a and b, or d and e. We only consider the first case, the other one is similar.
If {j, k} = {a, b} then σ(gi , gab ) = 0 holds for degree reasons, see Table 2. So
assume that i ∈ {a, b}. Again, we assume without loss of generality that i = a.
Then either j or k equals b, say j. We compute that
(ga ∗ gb) ∗ gk = (ga × gb) ∗ gk = xgab ∗ gk = xgab × gk + xδσ(gab, gk)
(∗)= xgabk + xδσ(gab, gk) = xgabk + x∂σ(gab, gk) = xgabk.

Here the first equality uses Claim 5, the second uses Claim 1, in (∗) we again used Claim 1 together with gcd(x, k) = 1 for k ∈ {d, e}, the next step uses Claims 3 and 4 to reduce δσ(gab, gk) to ∂σ(gab, gk), and finally σ(gab, gk) = 0 for degree reasons (cf. Table 2). On the other hand, a similar computation shows that ga ∗ (gb ∗ gk) = xgabk + ∂σ(ga, gbk), where we use that gcd(b, k) = 1. The assumption that ∗ is associative implies that ∂σ(ga, gbk) = 0 and thus σ(ga, gbk) = 0, because ∂ is injective in homological degree 4.
Claim 7. It holds that ga ∗ (gc ∗ ge) − (ga ∗ gc) ∗ ge ≠ 0.
Let us compute the associator of ga, gc and ge:

ga ∗ (gc ∗ ge) − (ga ∗ gc) ∗ ge = ga × (gc × ge) − (ga × gc) × ge
  + δσ(ga, gc × ge) − δσ(ga × gc, ge)
  + ga × δσ(gc, ge) − δσ(ga, gc) × ge
  + δσ(ga, δσ(gc, ge)) − δσ(δσ(ga, gc), ge).
We know from Claim 2 that (ga × gc) × ge − ga × (gc × ge) = yz∂gabde, and we want to show that this term does not cancel against any other term. For this we need to consider the component of ga ∗ (gc ∗ ge) − (ga ∗ gc) ∗ ge in multidegree (0, 0, 0, 0). As σ has no component in this multidegree, the summands having only one σ cannot contribute. Thus it remains to consider the last two terms, namely δσ(ga, δσ(gc, ge)) and δσ(δσ(ga, gc), ge).
By Claim 3 it holds that δσ(ga, δσ(gc, ge)) = ∂σ(ga, ∂σ(gc, ge)). Further, σ(gc, ge) has homological degree 3, hence it is of the form σ(gc, ge) = λgbcd + r, where λ ∈ S and r is a sum of terms not involving c. Now it holds that

∂σ(gc, ge) = λ∂gbcd + ∂r = λxgcd + λwgbc + terms not involving c.

Using Claim 6 it follows that

∂σ(ga, ∂σ(gc, ge)) = λx∂σ(ga, gcd) + λw∂σ(ga, gbc) = λ(λ1 x + λ2 w)∂gabde

for some λ1, λ2 ∈ S. Here we used that gabde is the only generator in homological degree 4. These terms cannot cancel against yz∂gabde, because they are multiples of x or w, respectively. The same arguments apply to δσ(δσ(ga, gc), ge). Hence the associator of ga, gc and ge does not vanish, which contradicts the associativity of ∗ and completes the proof.
Remark 5.4. Unfortunately, it does not seem possible to use Avramov's obstructions from [Avr81] to prove the preceding theorem. At least, I could not find a suitable regular sequence f1, . . . , fr in the ideal such that the obstructions apply to R = S/(f1, . . . , fr).
6. The lcm lattice and non-squarefree ideals
The lcm lattice LI of a monomial ideal I is the lattice of all least common multiples of subsets of the minimal generators of I. The lcm lattice was introduced in [GPW99], where it is also proven that its isomorphism type determines the minimal free resolution of I. So one might be led to conjecture that it also determines the possible multiplications on F. However, it is immediately clear from Theorem 2.1 that this is not true. Nevertheless, we can identify a class of multiplications which is compatible with the lcm lattice:
Definition 6.1. A multiplication ∗ on F is called supportive if for all a, b ∈ F
there exists a c ∈ F with deg c ≤ (deg a) ∨ (deg b), such that a ∗ b = mc for a
monomial m ∈ S.
Example 6.2. The DGA structure on the Taylor resolution is supportive, but
the multiplication constructed in Example 3.3 is not supportive.
For squarefree monomial ideals, being supportive is not a restriction.
Lemma 6.3. If I ⊆ S is a squarefree monomial ideal, then every multiplication
on its minimal free resolution is supportive.
Proof. Let F be the minimal free resolution of S/I and let B be an S-basis for it.
Every element of B has a squarefree degree, so we may apply Lemma 1.4. It follows
that for any g1 , g2 ∈ B it holds that g1 ∗ g2 = mh for a monomial m and an
element h ∈ F with deg h ≤ (deg g1 + deg g2 ) ∧ (1, . . . , 1) = deg g1 ∨ deg g2 .
The next result shows that supportive multiplications are essentially determined by the isomorphism type of the lcm-lattice of I. This is our main motivation for introducing this notion.
Proposition 6.4. Let I and I ′ be two monomial ideals in two polynomial rings
S and S ′ , respectively, whose lcm-lattices are isomorphic. Then every supportive
multiplication on the minimal free resolution of S/I can be relabeled to a supportive multiplication on the minimal free resolution of S ′ /I ′ .
Proof. Let ν : LI → LI ′ be the isomorphism. We recall the “relabeling” construction, which was introduced in [GPW99]. Fix a basis B of the minimal free
resolution F of S/I. We express the differential ∂ of F in this basis:
∂g = ∑_{h∈B} c^h_g x^{deg(g)−deg(h)} h

with g ∈ B, c^h_g ∈ k. The relabeled resolution ν(F) is the free S′-module with
the basis {ḡ : g ∈ B} and deg ḡ = ν(deg g). Here, by ḡ we mean a new basis
element. The differential on ν(F) is given by
∂̄ḡ := ∑_{h∈B} c^h_g x^{deg(ḡ)−deg(h̄)} h̄.
This is well-defined because deg h ≤ deg g implies that deg h̄ = ν(deg h) ≤
ν(deg g) = deg ḡ. By [GPW99, Theorem 3.3], ν(F) is indeed a minimal free resolution of S′/I′. Our definition of the relabeling of the multiplication is analogous: if
g1 ∗ g2 = ∑_{h∈B} c^h_{g1,g2} x^{deg(g1)+deg(g2)−deg(h)} h,
then we set
ḡ1 ∗ ḡ2 := ∑_{h∈B} c^h_{g1,g2} x^{deg(ḡ1)+deg(ḡ2)−deg(h̄)} h̄.
To see that this is well-defined, we compute that
deg ḡ1 + deg ḡ2 ≥ deg ḡ1 ∨ deg ḡ2 = ν(deg g1) ∨ ν(deg g2) =(∗) ν(deg g1 ∨ deg g2) ≥(∗∗) ν(deg h) = deg h̄.    (1)
Here, the equality marked with (∗) holds because ν is an isomorphism of lattices. Moreover, the inequality marked with (∗∗) holds because the multiplication is supportive.
It is clear that the relabeled multiplication is graded-commutative and satisfies
the Leibniz rule, because the structure constants are the same as for the multiplication on F. Finally, the computation (1) also shows that it is supportive.
The last two results have a number of immediate corollaries.
Corollary 6.5. Every monomial ideal admits a supportive multiplication on its
minimal free resolution.
Proof. Choose any multiplication on the minimal free resolution of the polarization of I and relabel it.
Corollary 6.6. Any supportive multiplication satisfies the conclusion of Proposition 3.4.
Proof. The conclusion of Proposition 3.4 is clearly invariant under relabeling.
Hence relabeling the supportive multiplication to the minimal free resolution of
the polarization of I yields the claim.
Corollary 6.7. Let I and I ′ be two monomial ideals with isomorphic lcm lattices
and assume that I is squarefree. If I admits a minimal DGA resolution, then so
does I ′ .
Proof. Associativity is invariant under relabeling.
We close the section by giving an example which illustrates that the last
corollary cannot be strengthened by considering the Betti poset [CM14a; TV15;
CM14b]. Recall that the Betti poset of a monomial ideal is the subposet of the
lcm lattice consisting of those multidegrees in which there are nonzero Betti numbers.
Example 6.8. The ideal ⟨x², xy, y²z², zw, w²⟩ ⊂ k[x, y, z, w] from Theorem 5.1
does not admit a minimal DGA resolution. As argued in the proof of that theorem, its minimal free resolution is given by its algebraic Scarf complex, so in
particular its Betti poset is the face poset of its Scarf complex. However, that
Scarf complex is a cone (with apex b in the notation of that proof). Hence the
construction in the proof of the second implication of Theorem 4.1 yields an example of a squarefree monomial ideal admitting a minimal DGA resolution but
having the same Betti poset.
Acknowledgments
The author is indebted to Volkmar Welker for the suggestion to look for
a Macaulay-type theorem for DGA resolutions, which eventually led to Theorem 4.1.
References

[AHH97] A. Aramova, J. Herzog, and T. Hibi. "Gotzmann theorems for exterior algebras and combinatorics". In: Journal of Algebra 191.1 (1997), pp. 174–211. doi: 10.1006/jabr.1996.6903.
[Avr74] L. L. Avramov. "The Hopf algebra of a local ring". In: Izv. Akad. Nauk SSSR Ser. Mat. 38 (1974), pp. 253–277.
[Avr81] L. L. Avramov. "Obstructions to the existence of multiplicative structures on minimal free resolutions". In: Amer. J. Math. 103.1 (1981), pp. 1–31. doi: 10.2307/2374187.
[ACI15] L. L. Avramov, A. Conca, and S. B. Iyengar. "Subadditivity of syzygies of Koszul algebras". In: Mathematische Annalen 361.1 (2015), pp. 511–534. doi: 10.1007/s00208-014-1060-4.
[BPS98] D. Bayer, I. Peeva, and B. Sturmfels. "Monomial resolutions". In: Math. Res. Lett. 5.1-2 (1998), pp. 31–46. doi: 10.4310/MRL.1998.v5.n1.a3.
[BS98] D. Bayer and B. Sturmfels. "Cellular resolutions of monomial modules". In: J. Reine Angew. Math. 502 (1998), pp. 123–140. doi: 10.1515/crll.1998.083.
[Bru76] W. Bruns. ""Jede" endliche freie Auflösung ist freie Auflösung eines von drei Elementen erzeugten Ideals". In: Journal of Algebra 39.2 (1976), pp. 429–439. doi: 10.1016/0021-8693(76)90047-8.
[BE77] D. A. Buchsbaum and D. Eisenbud. "Algebra structures for finite free resolutions, and some structure theorems for ideals of codimension 3". In: Amer. J. Math. 99.3 (1977), pp. 447–485.
[CM14a] T. B. Clark and S. Mapes. "Rigid monomial ideals". In: Journal of Commutative Algebra 6.1 (2014), pp. 33–51. doi: 10.1216/JCA-2014-6-1-33.
[CM14b] T. B. Clark and S. Mapes. "The Betti poset in monomial resolutions". In: Preprint (2014). arXiv:1407.5702.
[GPW99] V. Gasharov, I. Peeva, and V. Welker. "The LCM-lattice in monomial resolutions". In: Mathematical Research Letters 6 (1999), pp. 521–532. doi: 10.4310/MRL.1999.v6.n5.a5.
[Gem76] D. Gemeda. "Multiplicative structure of finite free resolutions of ideals generated by monomials in an R-sequence". Ph.D. Thesis. Brandeis University, 1976.
[GW78] S. Goto and K. Watanabe. "On graded rings, II (Zn-graded rings)". In: Tokyo J. Math 1.2 (1978), pp. 237–261. doi: 10.3836/tjm/1270216496.
[GS] D. R. Grayson and M. E. Stillman. Macaulay2, a software system for research in algebraic geometry. url: http://www.math.uiuc.edu/Macaulay2/.
[HS15] J. Herzog and H. Srinivasan. "On the subadditivity problem for maximal shifts in free resolutions". In: Commutative Algebra and Noncommutative Algebraic Geometry II. Vol. 68. MSRI Publications, 2015. arXiv:1303.6214.
[IKMF14] B. Ichim, L. Katthän, and J. J. Moyano Fernández. "Stanley depth and the lcm-lattice". In: Preprint (2014). arXiv:1405.3602.
[JW09] M. Jöllenbeck and V. Welker. Minimal resolutions via algebraic discrete Morse theory. Vol. 197. Memoirs of the AMS 923. American Mathematical Society, 2009. doi: 10.1090/memo/0923.
[Kal85] G. Kalai. "f-Vectors of acyclic complexes". In: Discrete Mathematics 55.1 (1985), pp. 97–99. doi: 10.1016/S0012-365X(85)80024-8.
[Kus94] A. Kustin. "The minimal resolution of a codimension four almost complete intersection is a DG-algebra". In: Journal of Algebra 168.2 (1994), pp. 371–399. doi: 10.1006/jabr.1994.1235.
[Lyu88] G. Lyubeznik. "A new explicit finite free resolution of ideals generated by monomials in an R-sequence". In: Journal of Pure and Applied Algebra 51.1 (1988), pp. 193–195. doi: 10.1016/0022-4049(88)90088-6.
[Map13] S. Mapes. "Finite atomic lattices and resolutions of monomial ideals". In: Journal of Algebra 379 (2013), pp. 259–276. doi: 10.1016/j.jalgebra.2013.01.005.
[MS05] E. Miller and B. Sturmfels. Combinatorial commutative algebra. Springer, 2005. doi: 10.1007/b138602.
[MSY00] E. Miller, B. Sturmfels, and K. Yanagawa. "Generic and cogeneric monomial ideals". In: Journal of Symbolic Computation 29.4 (2000), pp. 691–708. doi: 10.1006/jsco.1999.0290.
[Pee96] I. Peeva. "0-Borel fixed ideals". In: Journal of Algebra 184.3 (1996), pp. 945–984. doi: 10.1006/jabr.1996.0293.
[Pee10] I. Peeva. Graded syzygies. Springer, 2010. doi: 10.1007/978-0-85729-177-6.
[Skö06] E. Sköldberg. "Morse theory from an algebraic viewpoint". In: Transactions of the American Mathematical Society 358.1 (2006), pp. 115–129. doi: 10.1090/S0002-9947-05-04079-1.
[Skö11] E. Sköldberg. "Resolutions of modules with initially linear syzygies". In: Preprint (2011). arXiv:1106.1913.
[Skö16] E. Sköldberg. "The minimal resolution of a cointerval edge ideal is multiplicative". In: Preprint (2016). arXiv:1609.07356.
[Sta96] R. P. Stanley. Combinatorics and commutative algebra. 2nd ed. Vol. 41. Progress in Mathematics. Birkhäuser Boston, Inc., Boston, 1996.
[TV15] A. Tchernev and M. Varisco. "Modules over categories and Betti posets of monomial ideals". In: Proceedings of the American Mathematical Society 143.12 (2015), pp. 5113–5128. doi: 10.1090/proc/12643.

FB 12 – Institut für Mathematik, Goethe-Universität Frankfurt, Germany
E-mail address: [email protected]
arXiv:1801.02329v2, 29 Jan 2018

Grassmannian Codes with New Distance Measures for Network Coding

Tuvi Etzion, Dept. of Computer Science, Technion-Israel Institute of Technology, Haifa 3200003, Israel. Email: [email protected]
Hui Zhang, Dept. of Computer Science, Technion-Israel Institute of Technology, Haifa 3200003, Israel. Email: [email protected]
Abstract—Subspace codes are known to be useful in error-correction for random network coding. Recently, they were
used to prove that vector network codes outperform scalar
linear network codes, on multicast networks, with respect
to the alphabet size. In both cases, the subspace distance is
used as the distance measure. In this work we show that
we can replace the subspace distance with two other possible
distance measures which generalize the subspace distance. We
prove that each code with the largest number of codewords
and the generalized distance, given the other parameters, has
the minimum requirements needed to solve a given multicast
network with a scalar linear code. We discuss lower and upper
bounds on the sizes of the related codes.
I. INTRODUCTION
Network coding has been attracting increasing attention
in the last fifteen years. The seminal work of Ahlswede,
Cai, Li, and Yeung [1] and Li, Yeung, and Cai [15] introduced the basic concepts of network coding and showed how network coding outperforms the well-known routing. The class of networks which was mainly studied is that of multicast networks, and these are also the target of this work.
A multicast network is a directed acyclic graph with one
source. The source has h messages, which are scalars over
a finite field Fq. The network has N receivers, each of which demands all the h messages of the source to be transmitted
in one round of a network use. An up-to-date survey on
network coding for multicast networks can be found for
example in [10]. Kötter and Médard [16] provided an
algebraic formulation for the network coding problem: for
a given network, find coding coefficients (over a small
field) for each edge, which are multiplied with the symbols
received at the starting node of the edge, such that each
receiver can recover all the demanded information from its
received symbols on its in-coming edges. This sequence of
coding coefficients at each edge is called the local coding
vector. Such an assignment of coding coefficients for all the
edges in the network is called a solution for the network.
The coding coefficients are scalars and the solution is a
scalar linear solution. Ebrahimi and Fragouli [4] have
extended this algebraic approach to vector network coding.
In this setting the messages are vectors of length t over Fq
and the coding coefficients are t×t matrices over Fq . A set
of coding matrices such that all the receivers can recover
their requested information, is called a vector solution.
The alphabet size of the solution is an important parameter that directly influences the complexity of the
calculations at the network nodes and as a consequence
the performance of the network. Jaggi et al. [14] have
shown a deterministic algorithm to find a network code
(for multicast networks) over a field whose size is the smallest prime power greater than or equal to the number of receivers. In general, finding the minimum required alphabet size of a network code for a given multicast network is NP-complete [19]. A vector network coding solution with vectors of length t over Fq outperforms the scalar linear network coding solution for the same network if the scalar solution requires an alphabet of size qs, where qs > q^t.
An important step in the evolution of network coding
was the introduction of random network coding [12], [13].
Instead of the deterministic algorithm to design the network
code, there is a random selection of coefficients for the
local coding vectors at each coding point (taken from Fq ).
By choosing a field size large enough, the probability that
this random selection will not be a solution for the network
tends to zero.
Kötter and Kschischang [17] introduced a framework
for error-correction in random network coding. They have
shown that for this purpose the codewords are taken as subspaces over a finite field Fq , where the distance measure,
called the subspace distance, is defined as follows. The set
of all subspaces of Fnq is called the projective space and
denoted by Pq (n). For two subspaces X, Y ∈ Pq (n) the
subspace distance dS (X, Y ) is defined by
dS (X, Y ) = dim X + dim Y − 2 dim(X ∩ Y ) .
It was shown later in [22] that another metric called
the injection metric is better suitable for random network
coding. For two subspaces X, Y ∈ Pq (n) the injection
distance dI (X, Y ) is defined by
dI (X, Y ) = max{dim X, dim Y } − dim(X ∩ Y ) .
These two papers [17], [22] have motivated an extensive
work on subspace codes and in particular on codes in which
all the subspaces have the same dimension. These codes are
called constant dimension codes or Grassmannian codes
since they belong to the Grassmann space. For a given
positive integer n and a given integer k, 0 ≤ k ≤ n, the
Grassmannian Gq (n, k) is the set of all subspaces in Pq (n)
whose dimension is k. The two distance measures coincide for this family of subspaces or more precisely, if
X, Y ∈ Gq (n, k) then dS (X, Y ) = 2dI (X, Y ). Since we
concentrate on Grassmannian codes in this paper we will
define the Grassmannian distance dG (X, Y ) to be
dG (X, Y ) = k − dim(X ∩ Y ) .
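For concreteness, all three distances can be computed over F2 by elementary rank computations. The following Python sketch (our names; a subspace is given by spanning row vectors encoded as bitmasks) is only an illustration and not taken from the cited works.

```python
def rank_gf2(rows):
    """Rank over GF(2); each row is an integer whose bits are coordinates."""
    basis = []
    for v in rows:
        for b in basis:
            v = min(v, v ^ b)   # reduce v against the current basis
        if v:
            basis.append(v)
    return len(basis)

def dims(X, Y):
    """dim X, dim Y, dim(X intersect Y), via dim(X+Y) = rank of all rows."""
    dx, dy = rank_gf2(X), rank_gf2(Y)
    return dx, dy, dx + dy - rank_gf2(X + Y)

def d_S(X, Y):
    dx, dy, di = dims(X, Y)
    return dx + dy - 2 * di

def d_I(X, Y):
    dx, dy, di = dims(X, Y)
    return max(dx, dy) - di

# Two 2-subspaces of F_2^4 meeting in a line (0b1000 encodes e1, etc.):
X = [0b1000, 0b0100]
Y = [0b0100, 0b0010]
print(d_S(X, Y), d_I(X, Y))  # 2 1, so d_G(X, Y) = 1
```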
Most of the research on subspace codes motivated by [17] went in two directions: finding the largest codes with a prescribed minimum subspace distance, and looking for designs based on subspaces. To this end, two quantities
were defined. The first one Aq (n, d) is the maximum size
of a code in Pq (n) with minimum subspace distance d.
The second one Aq (n, d, k) is the maximum size of a code
in Gq (n, k) with minimum subspace distance d (Grassmannian distance d/2). A related concept is a subspace design
or a block design t-(n, k, λ)q which is a collection S of
k-subspaces from Gq (n, k) (called blocks) such that each
subspace of Gq (n, t) is contained in exactly λ blocks of S.
In particular if λ = 1 this subspace design is called a qSteiner system and is denoted by Sq (t, k, n). Note, that such
a q-Steiner system is a Grassmannian code in Gq (n, k) with
minimum Grassmannian distance k − t + 1.
It was shown recently [8], [9] that subspace codes are
useful in vector network coding and by using subspace
codes it can be shown that vector network codes outperform
scalar linear network codes, in multicast networks, with
respect to the alphabet size. The comparison between the
alphabet size for a vector network coding solution is done
as follows. For a given network N , if there exists a vector
network coding solution with vectors of length t and a finite
field of size q and a scalar linear network coding solution
requires a finite field of size at least qs , then the gap in
the between the two solutions is qs − q t . The proof that
vector network coding outperforms scalar linear network
coding used a family of networks called the generalized
combination networks, where the combination networks
were defined and used in [21] (but were defined informally
before too).
The goal of this work is to show that there is a
tight connection between optimal Grassmannian codes and
network coding solutions for the generalized combination
networks. We will define two new dual distance measures
on Grassmannian codes which generalize the Grassmannian
distance. We discuss the maximum sizes of the new
Grassmannian codes with the new distance measures. We
explore the connection between these codes and related
generalized combination networks. Our exposition will
derive some interesting properties of these codes with
respect to the traditional subspace codes and some subspace
designs. We will show, using two different approaches, that
codes in the Hamming space with minimum Hamming distance are a special case of subspace codes. Some interesting
connections to subspace designs will be also explored.
The rest of this paper is organized as follows. In
Section II we present the combination network and its
generalization which was defined in [8], [9]. We discuss
the family of codes which provide network coding solutions
for these networks. In Section III we further consider this
family of codes, define two dual distance measures on these
codes, and show how these codes and the distance measures
defined on them generalize the conventional Grassmannian
codes and Grassmannian distance. We show the connection
of these codes with subspace designs. We prove that for
each such code of the largest size, over Fq , there exists a
generalized combination network which is solved only by
this code (or a code with more relaxed parameters). We also
discuss how codes in the Hamming space form a subfamily
of these subspace codes. Finally, we show which subfamily
of these codes is useful for vector network coding. In
Section IV we present basic results for the bounds on sizes
of these codes and concentrate on the subfamily of these
codes which are useful to show how vector network coding
outperform scalar linear network coding. We analyse this
family of codes in general and concentrate on one specific
code with specific parameters in particular. In Section V
we provide a conclusion and directions for future research.
II. GENERALIZED COMBINATION NETWORKS
In this section we will define the generalized combination network which is a generalization of the combination
network [21]. This network defined in [8], [9] was used to
prove that vector network coding outperforms scalar linear
network coding with respect to the alphabet size.
The generalized combination network is called the ε-direct-links ℓ-parallel-links N_{h,r,s} network, in short the (ε, ℓ)-N_{h,r,s} network. The network has three layers. In the first layer there is a source with h messages. In the second layer there are r nodes. The source has ℓ parallel links to each node in the middle layer. From any α = (s − ε)/ℓ nodes in the middle layer there are ℓ parallel links to one receiver in the third layer, i.e., there are $\binom{r}{\alpha}$ receivers in the third layer. Additionally, from the source there are ε direct parallel links to each one of the $\binom{r}{\alpha}$ receivers in the third layer. Therefore, each receiver has s = αℓ + ε incoming links. The (0, 1)-N_{h,r,s} network is the combination network defined in [21]. We will assume some relations between the parameters h, α, ε, and ℓ which ensure that the resulting network is interesting, i.e., that it neither is unsolvable nor has only a trivial solution [8].
Theorem 1. The (ε, ℓ)-N_{h,r,αℓ+ε} network has a trivial solution if ℓ + ε ≥ h, and it has no solution if αℓ + ε < h. Otherwise, the network has a nontrivial solution.
The immediate natural question is which codes will solve the generalized combination network over Fq. The answer is quite simple. Since each receiver has ε direct links from the source, it follows that the source can send any required ε-subspace of Fq^h to the receiver. Hence, the receiver must be able to obtain an (h − ε)-subspace of Fq^h from the middle layer nodes connected to it. The receiver is connected to α nodes in the middle layer and each one can send to the receiver an ℓ-subspace of Fq^h. Hence, a solution for the network exists if and only if there exists a code with r ℓ-subspaces of Fq^h, such that each α codewords (ℓ-subspaces) span a subspace whose dimension is at least h − ε.
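For small parameters this condition can be checked mechanically. The sketch below (our names) reuses rank_gf2 from the previous sketch and tests, over F2, whether every α codewords span a subspace of dimension at least h − ε.

```python
from itertools import combinations

def solves_network(codewords, alpha, h, eps):
    """Each codeword is an l-subspace of F_2^h, given by basis rows as bitmasks;
    check that every alpha of them span a subspace of dimension >= h - eps."""
    return all(rank_gf2([row for W in sub for row in W]) >= h - eps
               for sub in combinations(codewords, alpha))

# The seven 1-subspaces of F_2^3, with alpha = 2 and eps = 1:
code = [[v] for v in range(1, 8)]
print(solves_network(code, alpha=2, h=3, eps=1))  # True
```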
III. COVERING/MULTIPLE GRASSMANNIAN CODES
In this section we provide the formal definition for
the codes required to solve the generalized combination
networks. We define two distance measures on these codes,
prove that Grassmannian codes, codes in the Hamming space, and subspace designs are all subfamilies of this family
of codes. We present some basic properties of these codes
and their connection to the solution of the generalized
combination networks.
An α-(n, k, δ)^c_q covering (generalized) Grassmannian code (code in short) C is a subset of Gq(n, k) such that each α codewords of C span a subspace of dimension at least δ + k in Fq^n. The following theorem is easily verified.
Theorem 2. A code C in Gq(n, k) with minimum Grassmannian distance δ is a 2-(n, k, δ)^c_q Grassmannian code.
Theorem 2 implies that the Grassmannian codes form a subfamily of the α-(n, k, δ)^c_q codes. It is also natural to define the quantity δ as the minimum α-covering Grassmannian distance of the code, where the 2-covering Grassmannian distance is just the Grassmannian distance. In other words, the α-covering Grassmannian distance of α subspaces in Gq(n, k) is the dimension of the subspace which they span in Fq^n, minus k. A natural problem is to find the maximum size of such codes. To this end we define the quantity Bq(n, k, δ; α) to be the maximum size of an α-(n, k, δ)^c_q code. From our discussion it can be readily verified that if C is an α-(n, k, δ)^c_q code which attains Bq(n, k, δ; α), then C solves the (ε, k)-N_{n,r,αk+ε} network, where ε = n − δ − k and r = Bq(n, k, δ; α). But clearly we also have from our previous discussion that such a code has parameters with the minimum conditions which are necessary to solve such a network. Thus, each such code of the maximum size is exactly what is needed to solve a certain instance of the generalized combination networks.
Finally, for a given Grassmannian code C in Gq(n, k) we can define a covering hierarchy, where the 1-covering is just k and the α-covering is the α-covering Grassmannian distance of C plus k. This hierarchy is a q-analog of the generalized Hamming weights [24] for constant weight codes.
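For small instances the α-covering Grassmannian distance can be evaluated by brute force. The sketch below (ours, for q = 2; subspaces are represented by bases of integer bitmasks) computes the minimum α-covering distance of a code; for α = 2 it coincides with the Grassmannian distance, in line with Theorem 2:

from itertools import combinations

def gf2_rank(vectors):
    # Rank over GF(2); each vector is an int bitmask.
    pivots = {}
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]
            else:
                pivots[lead] = v
                break
    return len(pivots)

def alpha_covering_distance(code, alpha, k):
    # Smallest dimension spanned by alpha codewords, minus k (brute force).
    return min(gf2_rank([v for W in subset for v in W])
               for subset in combinations(code, alpha)) - k

# Three 2-subspaces of F_2^4; for alpha = 2 this is the Grassmannian distance.
C = [[0b0001, 0b0010], [0b0100, 0b1000], [0b0011, 0b0101]]
print(alpha_covering_distance(C, 2, k=2))  # 1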
The way we have described the code which solves
the generalized combination network is very natural when
we consider the definition of the generalized combination
network. We have also defined a natural generalization for
the Grassmannian distance, although some might argue that
it is less natural from a point of view of a code definition.
Hence, we will now give a more natural definition from the point of view of coding theory, or more precisely from the point of view of packing, which is the combinatorial property of an error-correcting code. For this we will need to use the dual subspace V⊥ of a given subspace V in F_q^n, and the orthogonal complement of a given code C. For a code C in Pq(n) the orthogonal complement C⊥ is defined by C⊥ = {V⊥ : V ∈ C}. It is well-known [7] that the minimum subspace distances of C and C⊥ are equal. The following lemma is also well known (see for example Lemma 12 in [7]).

Lemma 1. For any two subspaces U, V in Pq(n) we have that U⊥ ∩ V⊥ = (U + V)⊥.

Clearly, by induction we have the following consequence from Lemma 1.
Corollary 1. For any given set of α subspaces V1, V2, . . . , Vα of F_q^n we have
$$\bigcap_{i=1}^{\alpha} V_i^{\perp} = \left( \sum_{i=1}^{\alpha} V_i \right)^{\perp}.$$
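Corollary 1 is easy to confirm by brute force on a small example. In the sketch below (our illustration; vectors of F_2^n are integer bitmasks and orthogonality is with respect to the standard bilinear form), a vector lies in the dual of a span exactly when it is orthogonal to every basis vector, so concatenating the bases of V1 and V2 yields (V1 + V2)⊥:

def dot2(x, y):
    # Standard bilinear form on F_2^n.
    return bin(x & y).count("1") & 1

def perp(basis, n):
    # All vectors of F_2^n orthogonal to every vector in `basis` (brute force).
    return {x for x in range(2 ** n) if all(dot2(x, v) == 0 for v in basis)}

n = 6
V1 = [0b000011, 0b001100]        # basis of a 2-subspace of F_2^6
V2 = [0b001100, 0b110000]        # another 2-subspace, sharing a basis vector
lhs = perp(V1, n) & perp(V2, n)  # V1^perp intersected with V2^perp
rhs = perp(V1 + V2, n)           # (V1 + V2)^perp via the concatenated bases
assert lhs == rhs                # Corollary 1 for alpha = 2
print(len(lhs))                  # 8 = 2^(6-3), since dim(V1 + V2) = 3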
Corollary 1 induces a new definition of a distance measure for the orthogonal complements of the Grassmannian codes which solve the generalized combination networks. For a Grassmannian code C ⊆ Gq(n, k), the minimum λ-multiple Grassmannian distance is k − τ + 1, where τ is the largest integer such that each τ-subspace of F_q^n is contained in at most λ k-subspaces of C.
Theorem 3. If C ⊆ Gq(n, k) is an α-(n, k, δ)^c_q code then C⊥ has minimum (α − 1)-multiple Grassmannian distance δ.
Proof. By the definition of an α-(n, k, δ)^c_q code it follows that for each α subspaces V1, V2, . . . , Vα of C we have that $\dim \sum_{i=1}^{\alpha} V_i \ge \delta + k$, and hence $\dim \left( \sum_{i=1}^{\alpha} V_i \right)^{\perp} \le n - \delta - k$. Therefore, by Corollary 1 we have that $\dim \bigcap_{i=1}^{\alpha} V_i^{\perp} \le n - \delta - k$. This implies that each subspace of dimension n − δ − k + 1 of F_q^n can be contained in at most α − 1 codewords of C⊥. Thus, since C⊥ ⊆ Gq(n, n − k), it follows by definition that the minimum (α − 1)-multiple Grassmannian distance of C⊥ is (n − k) − (n − δ − k + 1) + 1 = δ.
Theorem 3 leads to a new definition for Grassmannian codes (orthogonal complements of α-(n, k, δ)^c_q codes). An s-(n, k, λ)^m_q multiple (generalized) Grassmannian code (code in short) C is a subset of Gq(n, k) such that each s-subspace of F_q^n is contained in at most λ codewords of C. Similarly, let Aq(n, k, s; λ) denote the maximum size of an s-(n, k, λ)^m_q code. One can verify the following theorem.

Theorem 4. An s-(n, k, 1)^m_q code is a Grassmannian code in Gq(n, k) with minimum Grassmannian distance k − s + 1.
Clearly, by Theorem 4 we have that the λ-multiple Grassmannian distance is a generalization of the Grassmannian distance (the 1-multiple Grassmannian distance). The generalization implied by Theorem 4 is for the packing interpretation of an s-(n, k, 1)^m_q code. If each s-subspace is contained exactly once in an s-(n, k, 1)^m_q code C, then C is a q-Steiner system Sq(s, k, n). If each s-subspace is contained exactly λ times in an s-(n, k, λ)^m_q code C, then C is an s-(n, k, λ)q subspace design. Similarly to Theorem 3 we have
Theorem 5. If C ⊆ Gq(n, k) is an s-(n, k, λ)^m_q code then C⊥ has minimum (λ + 1)-covering Grassmannian distance k − s + 1.

Corollary 2. C ⊆ Gq(n, k) is an α-(n, k, δ)^c_q code if and only if C⊥ is an (n − k − δ + 1)-(n, n − k, α − 1)^m_q code.
Corollary 3. For any feasible s, k, n, and λ, we have that Aq(n, k, s; λ) = Bq(n, n − k, k − s + 1; λ + 1).
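The parameter translation behind Corollary 3 is mechanical; the small helper below (our illustration) maps the parameters of the multiple-code problem to those of the equivalent covering problem:

def multiple_to_covering(n, k, s, lam):
    # Corollary 3: A_q(n,k,s;lam) = B_q(n, n-k, k-s+1; lam+1).
    return n, n - k, k - s + 1, lam + 1  # arguments (n, k', delta, alpha) of B_q

# A_2(6,4,3;2) = B_2(6,2,2;3), the pair of dual problems used in Section IV:
print(multiple_to_covering(6, 4, 3, 2))  # (6, 2, 2, 3)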
Not all Grassmannian codes can be used as solutions in vector network coding. If there are h messages and each message is a vector of length t over F_q, then each link will carry a t-subspace of F_q^{ht}. Therefore, the only Grassmannian codes which are used in vector network coding are α-(ht, ℓt, δt)^c_q codes, where 1 ≤ ℓ ≤ h − 1, 1 ≤ δ ≤ h − 1, and 2 ≤ α.
It should be noted that codes in the Hamming space with the Hamming distance form a subfamily of the subspace codes. This can be shown using two different approaches. The first one was pointed out in 1957 by Tits [23], who suggested that the combinatorics of sets could be regarded as the limiting case q → 1 of the combinatorics of vector spaces over the finite field F_q.
The second approach is based on the solution for the generalized combination networks. It was proved in [21] that the (0, 1)-Nh,r,s network has a scalar linear solution over F_q if and only if there exists a linear code, over F_q, of length r, dimension h, and minimum Hamming distance r − s + 1. For such a code we have an h × r generator matrix G for which any set of s columns has a subset of h linearly independent columns. These r columns of G form an s-(h, 1, h − 1)^c_q code. Such codes have already been very well studied in coding theory (and also in projective geometry, see [11]) and we omit their discussion here.
Linear codes in the Hamming scheme can also be used in other ways as solutions for the generalized combination network. Let C be a linear code, over F_q, of length n, dimension k, and minimum Hamming distance d. For such a code, in the (n − k) × n parity-check matrix H each d − 1 columns are linearly independent. Hence, the n columns of H form a (d − 1)-(n − k, 1, d − 2)^c_q code. This code solves the (n − k − d + 1, 1)-N_{n−k,n,n−k} network.
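As a concrete instance (our example), take the [7,4,3] binary Hamming code: its parity-check matrix has as columns all nonzero vectors of F_2^3, and over GF(2) any two distinct nonzero vectors are linearly independent, so the columns form a 2-(3, 1, 1)^c_2 code:

from itertools import combinations

# Columns of the parity-check matrix of the [7,4,3] Hamming code,
# one bitmask of F_2^3 per column.
columns = [1, 2, 3, 4, 5, 6, 7]

# Every alpha = d - 1 = 2 columns must span dimension delta + k = 2; over
# GF(2) this holds iff the two columns are nonzero and distinct.
assert all(x and y and x != y for x, y in combinations(columns, 2))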
Each s-(h, 1, δ)^c_q code C which solves the (ǫ, 1)-Nh,r,s network also yields a related vector solution. This solution forms an s-(ht, t, δt)^c_q code C′ with the same size as C, but usually much larger codes than |C| can be constructed. Nonlinear codes can also be used as solutions for the generalized combination networks. Some of them (for the combination networks) were discussed in [21] and some will be discussed in the full version of this paper.
IV. BOUNDS ON THE SIZES OF CODES
In this section we present some bounds on the sizes of codes. Clearly, there is a huge ground for research, since the parameters of the codes are in a very large range and our knowledge is very limited. Hence, we will give a brief discussion; more will be given in the full version of the paper. We will give some ideas and some insight into the difficulty of obtaining new bounds and especially the exact sizes of optimal codes. The bounds are on Aq(n, k, s; λ) and on Bq(n, k, δ; α), and clearly, by Corollary 3, bounds on only one of them are needed since they are equivalent. There is a duality between the two types of codes which were considered, with two dual distance measures. We start by considering this duality and four related simple approaches for bounds on the maximum sizes of codes.
In an α-(n, k, δ)^c_q code C, each subset of α codewords (k-subspaces) spans a subspace of dimension at least δ + k in F_q^n. In other words, each (δ + k − 1)-subspace of F_q^n contains at most α − 1 subspaces of C. The orthogonal complement C⊥ is an (n − δ − k + 1)-(n, n − k, α − 1)^m_q code. In such a code each (n − δ − k + 1)-subspace of F_q^n is contained in at most α − 1 codewords. In other words, any set of α codewords intersects in a subspace of dimension at most n − δ − k. Bounds can be obtained based on any one of these four observations. Each one can give another direction to obtain related bounds.
The classic bounds for the cases λ = 1 or α = 2, for an s-(n, k, λ)^m_q code and an α-(n, k, δ)^c_q code, respectively, can be easily generalized for higher λ (respectively, α), where the simplest ones are the packing bound and the Johnson bounds [7]. It might be easier to generalize the bounds when we consider s-(n, k, λ)^m_q codes.
Theorem 6. If n, k, s, and λ are positive integers such that 1 ≤ s < k < n and $1 \le \lambda \le \binom{n}{s}_q$, then
$$A_q(n, k, s; \lambda) \le \left\lfloor \lambda \binom{n}{s}_q \Big/ \binom{k}{s}_q \right\rfloor,$$
where $\binom{n}{s}_q$ denotes the Gaussian (q-binomial) coefficient.
Corollary 4. If n, k, δ, and α are positive integers such that 1 < k < n, 1 ≤ δ ≤ n − k and $2 \le \alpha \le \binom{n}{k}_q$, then
$$B_q(n, k, \delta; \alpha) \le \left\lfloor (\alpha - 1) \binom{n}{n-\delta-k+1}_q \Big/ \binom{n-k}{n-\delta-k+1}_q \right\rfloor.$$
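The packing bound is straightforward to evaluate; the sketch below (ours) computes Gaussian coefficients exactly with integer arithmetic and reproduces, e.g., the bound A2(6, 4, 3; 1) ≤ 93 discussed at the end of this section:

def gaussian(n, k, q):
    # Gaussian (q-binomial) coefficient: the number of k-subspaces of F_q^n.
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # the quotient is always an integer

def packing_bound(n, k, s, lam, q):
    # Right-hand side of Theorem 6: floor(lam * [n,s]_q / [k,s]_q).
    return lam * gaussian(n, s, q) // gaussian(k, s, q)

print(packing_bound(6, 4, 3, 1, 2))  # 93
print(packing_bound(6, 4, 3, 2, 2))  # 186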
Theorem 7. If n, k, s, and λ are positive integers such that 1 ≤ s < k < n and $1 \le \lambda \le \binom{n}{s}_q$, then
$$A_q(n, k, s; \lambda) \le \frac{q^n - 1}{q^k - 1} \, A_q(n - 1, k - 1, s - 1; \lambda).$$
Corollary 5. If n, k, δ, and α are positive integers such that 1 < k < n, 1 ≤ δ ≤ n − k and $2 \le \alpha \le \binom{n}{k}_q$, then
$$B_q(n, k, \delta; \alpha) \le \frac{q^n - 1}{q^k - 1} \, B_q(n - 1, k - 1, \delta; \alpha).$$
Theorem 8. If n, k, s, and λ are positive integers such that 1 ≤ s < k < n and $1 \le \lambda \le \binom{n}{s}_q$, then
$$A_q(n, k, s; \lambda) \le \frac{q^n - 1}{q^{n-k} - 1} \, A_q(n - 1, k, s; \lambda).$$
Corollary 6. If n, k, δ, and α are positive integers such that 1 < k < n, 1 ≤ δ ≤ n − k and $2 \le \alpha \le \binom{n}{k}_q$, then
$$B_q(n, k, \delta; \alpha) \le \frac{q^n - 1}{q^{n-k} - 1} \, B_q(n - 1, k, \delta; \alpha).$$
Finally, also the following bound was obtained.

Theorem 9. If k + δ ≤ n ≤ 2k, then
$$B_q(n, k, \delta; \alpha) \ge (\alpha - 1) B_q(n, k, \delta; 2),$$
and
$$(\alpha - 1) q^{k(n-k-\delta+1)} \le B_q(n, k, \delta; \alpha) \le (\alpha - 1) Q(q) \, q^{k(n-k-\delta+1)},$$
where $Q(q) = \prod_{j=1}^{\infty} (1 - q^{-j})^{-1}$.
We would also like to mention that some bounds can be obtained from known results on arcs and caps in projective geometry [11]. A discussion of these will be given in the full version of this paper.
How good are these bounds? Can the bound of Theorem 6 be attained? If λ = 1 then good constructions and asymptotic bounds are known [2], [6]. But Theorem 6 might be misleading. Consider the case where n = 6, k = 4, s = 3, λ = 1, and q = 2. By Theorem 6 we have that A2(6, 4, 3; 1) ≤ 93, while the actual value is A2(6, 4, 3; 1) = 21. This is not unique to these parameters, and it occurs since Aq(n, k, s; 1) = Aq(n, n − k, n − 2k + s; 1), i.e., we should consider only k ≤ n/2 when λ = 1.
For λ > 1 (respectively, α > 2) there is no similar connection between codes of Gq(n, k) and codes of Gq(n, n − k). The big difference is when k > n/2. A good example can be given by considering the (1, 1)-N3,r,4 network, which was used in [8] to show that there is a network with three messages for which vector network coding outperforms scalar linear network coding. For a vector solution of this network we need a 3-(3t, t, t)^c_q code. For simplicity, and for explaining the problems in obtaining lower and upper bounds on Aq(n, k, s; λ), we consider the case of t = 2 and q = 2, i.e., a 3-(6, 2, 2)^c_2 code or its orthogonal complement, a 3-(6, 4, 2)^m_2 code. In [9] a code with 51 codewords was presented. By using a modification on rank-metric codes we can find a 2-(6, 4, 3)^m_2 code of size 64. But when a 3-(6, 2, 2)^c_2 code was considered, a code of size 82 was obtained [5] using a general method. This general method implies that
B2(n + 2, 2, 2; 3) ≥ 8 B2(n, 2, 2; 3) + 2,
for n ≥ 3. How do these lower bounds compare with the upper bound? By Theorem 6 we have that A2(6, 4, 3; 2) ≤ 186. By Theorem 7 we have that $A_2(6, 4, 3; 2) \le \frac{63}{15} A_2(5, 3, 2; 2)$. Hence, we have to consider the value of A2(5, 3, 2; 2). An upper bound on A2(5, 3, 2; 2) involves some more complicated analysis of the way that the 3-subspaces of the related codes are obtained from extensions of subspaces in F_2^4, using also some sets of linear equations. A bound of A2(5, 3, 2; 2) ≤ 34 [5] was obtained and hence A2(6, 4, 3; 2) ≤ 142. A code of size 30 [5] was found and hence 30 ≤ A2(5, 3, 2; 2) ≤ 34. Recently, these last results were improved and some other interesting bounds were obtained by Sascha Kurz [18].
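The numbers in this comparison can be reproduced directly from Theorems 6 and 7 (a sketch, assuming the gaussian() helper from the packing-bound example above):

thm6 = 2 * gaussian(6, 3, 2) // gaussian(4, 3, 2)  # Theorem 6: 186
thm7 = (2 ** 6 - 1) * 34 // (2 ** 4 - 1)           # Theorem 7 with A_2(5,3,2;2) <= 34: 142
print(thm6, thm7)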
There is a sequence of subspace designs in which each s-subspace is contained exactly λ times, which form optimal generalized Grassmannian codes. These subspace designs have small block length and very large λ. Hence, they are limited to solutions of generalized combination networks for which the subspaces have dimension close to that of the ambient space. For example, Thomas [20] has constructed a 2-(n, 3, 7)2 design, where n ≥ 7 and n ≡ 1 or 5 (mod 6). This is the same as a 2-(n, 3, 7)^m_2 code and its orthogonal complement is an 8-(n, n − 3, 2)^c_2 code. Such a code provides a solution for the (1, n − 3)-Nn,7·381,8(n−3)+1 network. Note that even for the smallest number of messages n, i.e., n = 7, the in-degree of each receiver is 33. It looks more interesting to find solutions for generalized combination networks for which the number of parallel links is small compared to the number of messages.
Some interesting codes and related networks exist for small parameters. For example, consider the 2-(6, 3, 3)2 design presented in [3], which is a 2-(6, 3, 3)^m_2 code and whose orthogonal complement is a 4-(6, 3, 2)^c_2 code. Such a code provides a solution for the (1, 3)-N6,279,13 network. Other designs which lead to optimal codes for related networks can be found in many recent papers on this emerging topic.
V. CONCLUSIONS AND OPEN PROBLEMS
We have introduced a new family of Grassmannian codes with two new distance measures which generalize the traditional Grassmannian codes and the Grassmannian distance. There is a correspondence between the set of these codes of maximum size and the set of generalized combination networks. There is a natural generalization of our exposition to subspace codes, where codewords can be of different dimensions. The investigation we have started into bounds on the sizes of such codes is very preliminary; there are many obvious coding questions related to these codes which are currently under research and will provide a lot of ground for future research. All these issues will be addressed in the full version of this paper. Another issue is the related distance measures in the Hamming scheme, for which our suggested distance measures are the q-analog. This direction will be discussed in another work.
REFERENCES
[1] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network
information flow”, IEEE Trans. Inform. Theory, vol. 46, pp. 1204–
1216, 2000.
[2] S. Blackburn and T. Etzion, “The Asymptotic Behavior of Grassmannian Codes”, IEEE Trans. Inform. Theory, vol. 58, pp. 6605–
6609, 2012.
[3] M. Braun, A. Kerber, and R. Laue, “Systematic construction of qanalogs of t−(v, k, λ)-designs”, Designs, Codes, and Cryptography,
vol. 34, pp. 55–70, 2005.
[4] J. B. Ebrahimi and C. Fragouli, “Algebraic algorithms for vector
network coding”, IEEE Trans. Inform. Theory, vol. 57, pp. 996–
1007, 2011.
[5] T. Etzion, K. Otal, and F. Özbudak, manuscript in preparation.
[6] T. Etzion and N. Silberstein, “Codes and Designs Related to Lifted
MRD Codes”, IEEE Trans. Inform. Theory, vol. 59, pp. 1004–1017,
2013.
[7] T. Etzion and A. Vardy, “Error-correcting codes in projective space”,
IEEE Trans. on Inform. Theory, vol. 57, pp. 1165–1173, 2011.
[8] T. Etzion and A. Wachter-Zeh, “Vector network coding based on
subspace codes outperforms scalar linear network coding”, IEEE
Trans. on Inform. Theory, to appear.
[9] T. Etzion and A. Wachter-Zeh, “Vector network coding based on
subspace codes outperforms scalar linear network coding”, Proc.
of IEEE Int. Symp. on Inform. Theory (ISIT), pp. 1949–1953,
Barcelona, Spain, July 2016.
[10] C. Fragouli and E. Soljanin, “(Secure) Linear network coding
multicast”, Designs, Codes, and Crypt., vol. 78, pp. 269–310, 2016.
[11] J. W. P. Hirschfeld and L. Storme, “The packing problem in statistics, coding theory and finite projective spaces: Update 2001”, Finite Geometries (Developments in Mathematics), vol. 3. Dordrecht, The Netherlands: Kluwer, pp. 201–246, 2001.
[12] T. Ho, R. Koetter, M. Médard, D. R. Karger, M. Effros, “The benefits of coding over routing in a randomized setting”, Proc. of IEEE Int. Symp. on Inform. Theory (ISIT), Yokohama, Japan, June/July 2003.
[13] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, B. Leong, “A random linear network coding approach to multicast”, IEEE Trans. Inform. Theory, vol. 52, pp. 4413–4430, 2006.
[14] S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain,
L. Tolhuizen, “Polynomial time algorithms for multicast network
code construction”, IEEE Trans. Inform. Theory, vol. 51, pp. 1973–
1982, 2005.
[15] S.-Y. R. Li, R. W. Yeung, and N. Cai, “Linear network coding”,
IEEE Trans. Inform. Theory, vol. 49, pp. 371–381, 2003.
[16] R. Kötter and M. Médard, “An algebraic approach to network coding”, IEEE/ACM Trans. Networking, vol. 11, pp. 782–795, 2003.
[17] R. Kötter and F. R. Kschischang, “Coding for errors and erasures in
random network coding”, IEEE Trans. on Inform. Theory, vol. 54,
pp. 3579–3591, 2008.
[18] S. Kurz, personal communication.
[19] A. Rasala Lehman and E. Lehman, “Complexity classification of
network information flow problems”, Proc. 15th Annu. ACM/SIAM
Symp. Discrete Algorithms (SODA), pp. 142–150, New Orleans, LA,
January 2004.
[20] S. Thomas, “Designs over finite fields”, Geometriae Dedicata,
vol. 21, pp. 237–242, 1987.
[21] S. Riis and R. Ahlswede, “Problems in network coding and error
correcting codes” appended by a draft version of S. Riis “Utilising
public information in network coding”, General Theory of Information Transfer and Combinatorics, in Lecture Notes in Computer
Science, vol. 4123 pp. 861–897, 2006.
[22] D. Silva and F. Kschischang, “On metrics for error correction in
network coding”, IEEE Trans. Inform. Theory, vol. 55, pp. 5479–
5490, 2009.
[23] J. Tits, Sur les analogues algébriques des groupes semi-simples complexes, in Colloque d’Algèbre Supérieure, tenu à Bruxelles du 19 au 22 décembre 1956, Centre Belge de Recherches Mathématiques, Établissements Ceuterick, Louvain; Paris: Librairie Gauthier-Villars, pp. 261–289, 1957.
[24] V. K. Wei, “Generalized Hamming weights for linear codes,” IEEE
Trans. Inform. Theory, vol. 37, pp. 1412–1418, 1991.
Survey on Additive Manufacturing, Cloud 3D Printing
and Services
arXiv:1708.04875v1 [cs.CY] 7 Aug 2017
Felix W. Baumann, Dieter Roller
University of Stuttgart, Stuttgart, Germany
Abstract
Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad-hoc as replacements for traditional manufacturing resources in case of spontaneous problems in the established manufacturing processes. In order to be of use in these scenarios the AM resources must adhere to a strict principle of transparency and service composition in adherence to the Cloud Computing (CC) paradigm. With this review we provide an overview of CM, AM and relevant domains as well as present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review on the domain to further detail its development and structure.
Keywords: Additive Manufacturing; Cloud Manufacturing; 3D Printing Service
1. Introduction
Cloud Manufacturing (here CM, in other works also CMfg) as a concept is not new and has been practised in enterprises for many years [275], under different terms, e.g., Grid Manufacturing [50] or Agile Manufacturing [215]. The decision to have a globally distributed production process, interconnected with many contractors or partners, and the related supply chains is a luxurious one. Large global corporations and competition make “expensive” local production nearly impossible.
Email addresses: [email protected] (Felix W. Baumann),
[email protected] (Dieter Roller)
Preprint submitted to arXiv
August 17, 2017
CM is based on a strict service orientation of its constituent production resources and capabilities. Manufacturing resources become compartmentalised and connected, and are worked with as service entities that can be rented, swapped, expanded, dismantled or scaled up or down just by the use of software. This almost instantaneous and flexible model of resource usage is what made the Cloud Computing (CC) paradigm very successful for a number of companies. Here computing resources and data storage are all virtual, living in large datacentres around the globe, with the user interfacing these resources only through well-defined APIs (Application Programming Interfaces) and paying only for the resources utilised – apart from the costs charged by the cloud service providers due to their business model and their surcharged or otherwise calculated requirement for profit.
With this work we contribute to the dissemination of knowledge in the domain of Additive Manufacturing (AM) and the concept of CM. Cloud Manufacturing can be seen as having two aspects and applications. The first application is within an industrial environment, for which CM provides a concept to embed, connect and utilise existing manufacturing resources, e.g., 3D printers, drilling-, milling- and other machines; i.e., cloud manufacturing is not limited to AM, but AM can be utilised within a CM concept. The second application is for end-users that use AM/3D resources over the Internet in lieu of acquiring their own 3D printer. The usage in this second application is highly service oriented and has mainly end-users or consumers as target clients. The consumers can profit from online-based services without having to own either hardware or software resources for 3D printing.
We motivate this work by an overview of the historical development of
scientific research in these domains starting from 2002. With this we show
that the scientific output within these fields has increased by an average of
41.3 percent annually to about 20000 publications per year (see Sect. 1.2).
To develop a better understanding of the topic at hand we discuss various
terminological definitions found in literature and standards. We give critique
on the common definitions of AM and propose a simpler, yet more accurate
definition.
For the reader to further grasp these domains we study existing journals
catering for these communities and discuss reach and inter-connections.
Cloud Manufacturing relies on a service oriented concept of production
services or capabilities. We extend an existing study on cloud printing ser2
vices as we see such services as integral components for future CM systems.
Cloud manufacturing has two aspects which are detailed in this work. First, CM is a methodology that is used within industrial settings for the connection of existing resources to form either a virtual assembly line or to acquire access to manufacturing resources in a service oriented manner. Due to the globalisation of the industry, manufacturers face increased challenges with shorter times-to-market, requirements for mass customisation (MC) and increased involvement of customers within the product development process. In order to stay or become competitive, companies must utilise their resources more efficiently, and where resources are not available they must acquire them in an efficient and transparent way. These resources must then be integrated into the existing process environment and released when no longer required. The concepts of cloud computing, where resources are available as services from providers and can be leased, rented, acquired or utilised in other ways, are proposed to be applied to the domain of manufacturing.
Resources like machines and software, as well as capabilities/abilities, become transparently available as services that customers or end-users can use through the respective providers, paying only for the services they require momentarily. Most often, no contractual obligations between the provider and the consumer exist (but they can exist, especially for high-value or high-volume usage), which gives the consumer great flexibility at the expense of possible unavailability of resources by the provider.
In the end-user segment, or the consumer aspect of CM, the user is interested in using AM resources like 3D printers through a web-based interface in order to have objects produced that are designed to be 3D printed, without the necessity to purchase and own a 3D printer themselves. The user commonly uses such services in a similar fashion to the way they would use an (online) photography lab / printing service. The users’ experience and knowledge of AM and 3D printing can vary significantly.
Albeit these two aspects seem to be far apart, the commonality between them is that the service operator must in both cases provide AM resources in a transparent and usable manner. Resources must be provided with clear definitions of the interface to the service, i.e., the data consumed by the service and the data rendered by the service. The description and provisioning of the service must be hardware agnostic, as the consumer must be able to select the resources required, e.g., have an object manufactured either on an FDM (Fused Deposition Modeling, also Fused Filament Fabrication, FFF) machine or an SLA (Stereolithography) machine without the necessity to alter the underlying data and models, but by selection.
This work is structured as follows: Section 1.1 provides information on the objective we accomplish with this review. Section 1.1.1 presents the research methodology applied for this work. In Section 1.1.2 we describe the sources that were used to gather information for this work. Section 1.2 provides a dissemination of the scientific research in these fields with a discussion of its historical development. Chapter 2 contains sections on key terminology and its definition throughout literature and standards. We present these terms as well as synonyms and provide an alternative definition as a proposal. Chapter 3 is an exhaustive collection of scientific journals relevant to the domains discussed in this work. We provide an insight into their interconnectedness and their structure. Chapter 4 provides a meta-review on the subject for the reader to get a further reaching understanding of the subject and its relevant components.
In Chapter 4.1 we discuss the audience or target group for CM and 3D printing related cloud services. Chapter 5 extends the study by Rayna and Striukova [204] due to the importance of 3D printing related cloud services for the topic at hand. Section 6 provides the information on the concepts, terminology and methods relevant to the subject as they are disseminated in literature. We conclude this work with a summary in Chapter 7.
1.1. Research Objective
This review is performed to establish an overview of the concept and implementation of CM and the utilisation of Additive Manufacturing (AM) therein. For this understanding it is required to become familiar with the various definitions available and the problems arising from inconsistent usage of terminology. For this we compile differing definitions of key terminology. With this work we aim to present an overview of the topic of CM and its current research findings. We furthermore present a summary overview of existing online and cloud based 3D printing services that can either be regarded as implementations of CM or be utilised in CM scenarios. This part is to extend the knowledge on relevant online services and their orientation towards numerous services. With the presentation of the identified journals that cater for AM, DM, RP, RM and 3D printing research we want to provide other researchers with insight into possible publication venues and a starting point for the identification of relevant sources for their own work. The review work of this article has the objective to identify relevant literature and summarise the key and essential findings thereof.
[Figure 1: Queries for 3D Printing, AM, CM and 3D Print on google.com for 2004–2016; y-axis: number of queries; source of data: trends.google.com. Totals over the period: 3D printing sum=13816, avg=43.72; AM sum=983, avg=3.11; CM sum=217, avg=0.69; 3D print sum=6060, avg=19.18.]
The review is also intended to provide a high level overview on identified research needs that are considered essential for the evolution of AM and CM.
1.1.1. Methodology
The first part of this review is the analysis of other reviews in order to
establish a foundation of the existing works and to have a baseline for the
analysis of the journals catering to this domain.
The journals are identified and presented in order to help researchers in
finding a suitable publication venue and to present the recent development
in this area. The journals are identified by literature research, web searching
(see Sect. 1.1.2), and as a result of the review analysis.
This review identified its sources by web search for each of the identified
topics depicted in the concept map (See Sect. 6.1), where the first 30 results
from the search engines (see Sect. 1.1.2) each are scanned first by title, then by
their abstract. For the creation of the topological map an iterative process is
applied. The process starts with the analysis of the following works [273, 265,
275, 279, 102] which we had prior knowledge of due to previous engagements
in this research area. After the analysis a backward- and forward search is
performed.
The searches for the content of the review are sorted by relevance, according to the search engine operator. The articles are then analysed and their core concepts are presented in this work.
The reviews for the meta-review are identified by a web search and data
gathered during our review.
For the compilation of the definitions an extraction process is employed, where the identified literature for the review is the basis for information extraction and dissemination. The compilation is expanded by literature and Internet research for the appropriate keywords and concepts.
The extension to the study by Rayna and Striukova [204] is performed
following the research methodology applied in the original work.
1.1.2. Sources
This review is based on scientific literature acquired through the respective publishers and searched for using the following search engines:
• Google Scholar (https://scholar.google.com)
• SemanticScholar (https://semanticscholar.org)
• dblp (https://dblp.uni-trier.de)
• Web of Science (https://webofknowledge.com)
• ProQuest (https://proquest.com)
Microsoft Academic Search (http://academic.research.microsoft.com) is not used for the search as the quality and number of results is unsatisfactory. Scopus (https://www.scopus.com) is not used for the research, as we have no subscription for it. The search engines differ in the handling of grouping and selection operators (e.g., OR, +). For each search engine the appropriate operators were selected when not explicitly stated otherwise. As a search engine for scientific literature, Google Scholar yields the most
results, but with a high degree of unrelated or otherwise unusable sources, like the Google search engine (https://google.com) itself. Furthermore, the search engine enforces strict usage rules, thus hindering automated querying and usage. Results from patents and citations are excluded from the result set for Google Scholar. SemanticScholar offers a responsive interface that allows for automated querying through JSON (JavaScript Object Notation) and claims to index “millions” of articles from computer science (https://www.semanticscholar.org/faq#index-size) - a statement that we can not verify as we have seen articles from other domains too. The dblp project indexes over 3,333,333 publications from computer science in a very high quality; its interface allows for automated and scripted usage. Web of Science provides an index of a large number (over 56 million) of scientific works. The entries in the index are of high quality but the interface is rather restrictive. ProQuest also has a very restrictive and non-scriptable interface and contains over 54 million entries in its corpus, among which are historical news articles and dissertations. The quality of the results is high. ProQuest and Web of Science are subscription based services.
1.2. Development in Scientific Publications
The significance and maturity of a research area is reflected in the number
of publications available. We perform a keyword based analysis utilising the
sources described in Section 1.1.2. The searches are performed with a number
of relevant keywords (including various technologies and methods for AM)
and a restriction of the time period starting from 2002 to 2016. The queries
are also restricted on the type of results with an exclusion to citations and
patents, where applicable. For a study on the patents and the development
of patent registrations for this domain we refer to Park et al. [190].
Caveat: Searching on search engines for specific keywords like clip and
lens in their abbreviated form will lead to a number of skewed results from
works that are not significant for this body of work. For example the search
for “Additive Manufacturing” and LENS yields articles in the results that
[Figure 2: Classification of articles for Cloud Manufacturing by Web of Science subject category (leading: ENGINEERING, COMPUTER SCIENCE, AUTOMATION CONTROL SYSTEMS); source of data: webofknowledge.com; n=460]
[Figure 3: Classification of articles for Additive Manufacturing by Web of Science subject category (leading: MATERIALS SCIENCE, ENGINEERING, CHEMISTRY); source of data: webofknowledge.com; n=4582]
are either fabricating (optical) lenses using AM or are about lenses in lasers that are used in AM. In case the result sets are as large as in our case, it is not feasible to remove those erroneous results and adjust the result set accordingly. We advise the reader to take the given numbers only as an indication.
In Fig. 2 to Fig. 7 the classification of scientific articles according to Web
of Science is shown. The classifications do not add up to 100 percent as the
respective articles can be classified in more than one field. In the figures the
number of results per search term is also listed. Domains with less than five
percent aggregated classification are grouped together as “OTHER”.
In Fig. 8 the accumulated prevalence of the terms 3D printing versus Additive Manufacturing (AM) is displayed. For these numbers queries are made
for a combination of search terms and restrictions on the time period. The
scale of the Y-Axis is logarithmic due to the large differences in the number
of results per search engine. The dblp database returned the lowest number
[Figure 4: Classification of articles for 3D Printing by Web of Science subject category (leading: MATERIALS SCIENCE, ENGINEERING, SCIENCE TECHNOLOGY OTHER TOPICS); source of data: webofknowledge.com; n=3369]
[Figure 5: Classification of articles for Rapid Manufacturing by Web of Science subject category (leading: ENGINEERING, MATERIALS SCIENCE, PHYSICS); source of data: webofknowledge.com; n=496]
[Figure 6: Classification of articles for Rapid Prototyping by Web of Science subject category (leading: ENGINEERING, MATERIALS SCIENCE, COMPUTER SCIENCE); source of data: webofknowledge.com; n=5393]
[Figure 7: Classification of articles for Rapid Tooling by Web of Science subject category (leading: ENGINEERING, MATERIALS SCIENCE, AUTOMATION CONTROL SYSTEMS); source of data: webofknowledge.com; n=394]
of results, with consistently less than 10 results per term. Google Scholar yielded the largest number of results, with the accumulated number of results for the term AM gaining on the term 3D printing since 2009. In Fig. 9 the prevalence of certain AM or 3D printing technologies is studied by the number of articles from four different search engines for the respective combination of search terms. The largest numbers of results are from Google Scholar for search term combinations with “3D Printing”. Furthermore, a generalised search is performed for the terminology “Laser, Lithography and Powder”, e.g., summarising technologies like SLM (Selective Laser Melting), SLS (Selective Laser Sintering), SLA, LOM (Laminated Object Manufacturing) and LENS (Laser Engineered Net Shaping) under the term “Laser”. The search for technologies like CLIP and LENS is problematic due to the non-specificity of the terminology, as described before (see note 1.2).
2. Definition and Terminology
In general the usage of the terminology within this field is very inconsistent. Commonly and colloquially the terms 3D printing and AM are used as synonyms. Analysing the prevalence of either of these terms, we find that 3D printing is slightly more prevalent in the results for scientific literature, with 68164 results from the sources described in Sect. 1.1.2 during the period of 2002–2016. In the same period there are over 59506 results for the term Additive Manufacturing. SemanticScholar provided significantly more results (7072 over 1211) for 3D printing, and Web of Science yielded almost four times the number of results for Additive Manufacturing over 3D Printing (1956 results to 578). There is also no clear trend in the usage of either term. With this section we exemplify this situation and present
[Figure 8: Comparison of AM and 3D Printing on selected search engines (2002–2016); summative number of publications per year on a logarithmic scale; source of data: scholar.google.com, semanticscholar.org, dblp.uni-trier.de, webofknowledge.com, proquest.com]
[Figure 9: Comparison of 3D printing technologies on selected search engines (2002–2016); summative number of publications per manufacturing technology (e.g., SLS, SLM, SLA, LOM, FDM, EBM, CLIP, LENS, 3DP, MJM, laser, powder, lithography); source of data: scholar.google.com, semanticscholar.org, webofknowledge.com, proquest.com]
common definitions throughout literature and standards. We furthermore
add our point of view in the form of a critique at the end of the section.
2.1. Additive Manufacturing and 3D Printing
In this section we present established definitions for AM and related terminology as presented in literature and standards.
2.1.1. Definitions of Additive Manufacturing
AM is most often regarded as an umbrella term for technology and methods for the creation of objects from digital models from scratch. It is usually
in contrast to subtractive and formative methods of manufacturing as defined
in the standard [1]. It is also commonly a synonym for 3D printing.
Gibson et al. [89] define AM as: “Additive manufacturing is the formalised
term for what used to be called rapid prototyping and what is popularly
called 3D Printing. [...] Referred to in short as AM, the basic principle of
this technology is that a model, initially generated using a three-dimensional
Computer-Aided Design (3D CAD) system, can be fabricated directly without the need for process planning. [...]”
Gebhardt [88] defines AM as: “Als Generative Fertigungsverfahren werden alle Fertigungsverfahren bezeichnet, die Bauteile durch Auf- oder Aneinanderfügen von Volumenelementen (Voxel’n), vorzugsweise schichtweise, automatisiert herstellen.”, which we translate as “Generative (additive) manufacturing processes are all manufacturing processes that produce components automatically by joining volume elements (voxels) onto or next to each other, preferably layer-wise”.
The VDI directives VDI 3404 (Version 2009 [4] and 2014 [6]) define additive fabrication as: “Additive fabrication refers to manufacturing processes
which employ an additive technique whereby successive layers or units are
built up to form a model.”.
The 2009 directive “VDI-Richtlinie: VDI 3404 Generative Fertigungsverfahren - Rapid-Technologien (Rapid Prototyping) - Grundlagen, Begriffe, Qualitätskenngrößen, Liefervereinbarungen” and the 2014 directive “VDI-Richtlinie: VDI 3404 Additive Fertigung - Grundlagen, Begriffe, Verfahrensbeschreibungen” have both been retracted.
The also retracted ASTM standard F2792-12a “Standard terminology
for additive manufacturing technologies” defines AM as “A process of joining materials to make objects from 3D model data, usually layer upon layer,
as opposed to subtractive manufacturing methodologies.” with the following synonyms listed “additive fabrication, additive processes, additive techniques, additive layer manufacturing, layer manufacturing, and freeform fabrication.”.
Bechthold et al. [26] define AM as: “The terms additive manufacturing
(AM) and 3D printing describe production processes in which a solid 3D
structure is produced layer by layer by the deposition of suitable materials
via an additive manufacturing machine.”
Thomas and Gilbert [242] define AM as: “Additive manufacturing is the
process of joining materials to make objects from three-dimensional (3D)
models layer by layer as opposed to subtractive methods that remove material. The terms additive manufacturing and 3D printing tend to be used
interchangeably to describe the same approach to fabricating parts. This
technology is used to produce models, prototypes, patterns, components, and
parts using a variety of materials including plastic, metal, ceramics, glass,
and composites”
Klocke [136] defines AM as: “Generative Verfahren: Diese Verfahrensgruppe umfasst alle Technologien, mit denen eine aufbauende, schichtweise
Fertigung von Bauteilen realisiert wird. Sie werden auch als Additive Manufacturing Technologies oder als Layer based Manufacturing Technologies
bezeichnet. Zum Herstellen der Schichten wird häufig Laserstrahlung verwendet. [...].”
which we translate as: “Generative processes: This process group comprises all technologies with which an additive, layer-wise fabrication of parts is realised. They are also referred to as additive manufacturing technologies or layer-based manufacturing technologies. For the creation of the layers, laser radiation is often used. [...]”
Gao et al. [83] use the term AM and 3D printing synonymously: “Additive
manufacturing (AM), also referred to as 3D printing, [...]”.
Sames et al. [214] also use the term AM and 3D printing synonymously:
“Additive manufacturing (AM), also known as three-dimensional (3D) printing, [...]”
Lachmayer and Lippert [140] define AM as: “Das Additive Manufacturing
(AM), als Überbegriff für das Rapid Prototyping (RP), das Rapid Tooling
(RT), das Direct Manufacturing (DM) und das Rapid Repair (RR) basiert
auf dem Prinzip des additiven Schichtaufbaus in x-, y- und z-Richtung zur
maschinellen Herstellung einer (Near-) Net-Shape Geometrie” which translates to: “Additive manufacturing as an umbrella term for Rapid Prototyping
(RP), Rapid Tooling (RT), Direct Manufacturing (DM) and Rapid Repair
(RR) is based on the principle of the additive layer fabrication in x-, y- and
z-direction for the fabrication of a (near-) net-shape geometry by machines”
The ISO/ASTM Standard 52900:2015(E) [9] defines AM as: “process of
joining materials to make parts (2.6.1) from 3D model data, usually layer
(2.3.10) upon layer, as opposed to subtractive manufacturing and formative
manufacturing methodologies”.
2.1.2. Definitions of 3D Printing
According to Gebhardt [88] 3D Printing is a generic term that is synonymous to AM and will replace the term AM in the future due to its simplicity.
Bechthold et al. [26] use the terms 3D Printing and AM synonymously as umbrella terms for technologies and applications. In the VDI directive [7] the term 3D printing is used for a certain additive process, but it is acknowledged that it is generally used as a synonym for AM.
The ASTM standard F2792-12a (retracted) defines 3D printing as “The
fabrication of objects through the deposition of a material using a print head,
nozzle, or another printer technology.” but also acknowledges the common synonymous use of this term for AM, mostly for machines of low-end quality and price.
Gibson [89] uses the term 3D Printing for the technology invented by researchers at MIT [212] but also acknowledges that it is used synonymously
for AM and will eventually replace the term AM due to media coverage.
The ISO/ASTM Standard 52900:2015(E) [9] defines 3D Printing as: “fabrication of objects through the deposition of a material using a print head,
nozzle, or another printer technology”.
It is also noted in this standard that the term 3D printing is often used
as a synonym for AM, mostly in non-technical context. Furthermore, it is
noted that 3D printing is associated with low price and capability machines.
2.1.3. Definitions of Rapid Prototyping
In Hopkinson and Dickens [106] Rapid Prototyping (RP) is defined as:
“RP refers to a group of commercially available processes which are used to
15
create solid 3D parts from CAD, from this point onwards these processes will
be referred to as layer manufacturing techniques (LMTs)”
The VDI directive 3405 defines RP as: “Additive fabrication of parts
with limited functionality, but with sufficiently well-defined specific characteristics.”
Weber et al. [267] define RP as: “Early AM parts were created for the
rapid prototyping market and were first employed as visual aids and presentation models. Many lower cost AM systems are still used in this way.”
2.1.4. Definitions of Rapid Manufacturing
Hopkinson et al. [108] define Rapid Manufacturing (RM) as: “the use of a computer aided design (CAD)-based automated additive manufacturing process to construct parts that are used directly as finished products or components.”
Previously Hopkinson and Dickens [106] defined RM as: “Rapid manufacturing uses LMTs for the direct manufacture of solid 3D products to be used
by the end user either as parts of assemblies or as stand-alone products.”
The VDI directive 3404 Version 2009 [4] defines RM as: “Additive fabrication of end products (often also described as production parts). Characteristics: Has all the characteristics of the end product or is accepted by the
customer for “series production readiness”. Material is identical to that of
the end product. Construction corresponds to that of the end product.”
The VDI directive 3405 [7] defines RM as a synonym for direct manufacturing, which is defined as: “Additive fabrication of end products.”
2.1.5. Definitions of Rapid Tooling
King and Tansey [135] define Rapid Tooling (RT) as an extension of RP as
such: “Rapid tooling is a progression from rapid prototyping. It is the ability
to build prototype tools directly as opposed to prototype products directly
from the CAD model resulting in compressed time to market solutions.”
The VDI directive 3405 [7] defines RT as: “The use of additive technologies and processes to fabricate end products which are used as tools, moulds
and mould inserts.”
Weber et al. [267] define RT as: “Another class of applications for AM
parts is patterns for tooling or tooling directly made by AM. AM processes
can be used to significantly shorten tooling time and are especially useful for
low-run production of products.”
16
2.1.6. Definitions of Cloud Manufacturing
The work by Li et al. [112] appears to be the first to introduce the concept
and definition of Cloud Manufacturing (CM), but unfortunately this article is
only available in Chinese and could therefore not be considered. The article
is cited by more than 450 publications according to Google Scholar.
Wu and Yang [280] define CM as such: “Cloud manufacturing is an integrated supporting environment both for the share and integration of resources
in enterprise. It provides virtual manufacturing resources pools, which shields
the heterogeneousness and the regional distribution of resources by the way of
virtualisation. cloud manufacturing provides a cooperative work environment
for manufacturing enterprises and individuals and enables the cooperation of
enterprise.”
Tao et al. [237] define CM indirectly by the following description: “Cloud
manufacturing is a computing and service-oriented manufacturing model
developed from existing advanced manufacturing models (e.g. ASP, AM,
NM, MGrid) and enterprise information technologies under the support of
cloud computing, IoT, virtualisation and service-oriented technologies, and
advanced computing technologies”
Xu [283] defines CM similar to the NIST definition of CC as: “a model
for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable manufacturing resources (e.g., manufacturing software
tools, manufacturing equipment, and manufacturing capabilities) that can be
rapidly provisioned and released with minimal management effort or service
provider interaction”. This definition is also used in the work by Wang and
Xu [266].
Zhang et al. [297] describe CM as: “Cloud manufacturing (CMfg) is a
new manufacturing paradigm based on networks. It uses the network, cloud
computing, service computing and manufacturing enabling technologies to
transform manufacturing resources and manufacturing capabilities into manufacturing services, which can be managed and operated in an intelligent
and unified way to enable the full sharing and circulating of manufacturing
resources and manufacturing capabilities. CMfg can provide safe, reliable,
high-quality, cheap and on-demand manufacturing services for the whole life
cycle of manufacturing.”
2.1.7. Synonyms for AM
As with the previous definitions for AM, RP, RT, RM and 3D printing
there is no consensus in the terminology for synonyms of AM in general. The
17
following synonyms can be found in literature and are used in existing works.
• direct layer manufacturing or layer manufacturing or additive layer
manufacturing
• direct digital manufacturing is a synonym for rapid manufacturing [89]
• solid freeform fabrication (SFF), three dimensional printing [267]
• 3D printing, Additive Techniques, Layer Manufacturing, and Freeform
fabrication [183]
• additive fabrication, additive processes, additive techniques, additive layer manufacturing, layer manufacturing, and freeform fabrication [5] (see also https://wohlersassociates.com/additive-manufacturing.html)
• “The technical name for 3D printing is additive manufacturing [...]” [154]
2.1.8. Critique
The existing definitions fall short with their focus on the layer-wise creation of objects, as technologies like LENS and multi-axis (n > 3) machines are not bound to and defined by a layer structure, but can be regarded as a form of AM, as they create objects based on 3D (CAD) models from scratch without any of the characteristics of traditional subtractive or formative fabrication methods. Through a systematic decomposition of the existing definitions of AM we conclude that the basic commonality of AM is described as the creation of a physical object from a digital model by a machine.
Furthermore, we propose the term AM as an umbrella term that signifies industrial, commercial or professional application and usage, whereas 3D printing can be used colloquially for technologies and methods for the creation of physical objects from 3D (CAD) models in other situations.
For the actual machines building additively manufactured parts we recommend the synonymous use of the terms AM fabricator and 3D printer: the first as it describes the functionality in a precise way, and the second as it is commonly used and understood by a broad audience.
3. Journals related to the Subject
We have identified a number of journals specialising in the domain of AM.
In this section we explain their foci and their scientific scope.
The following journals cater partially or solely for the academic dissemination of works based in or related to the domains of AM, RM, RP and 3D Printing. These journals are identified using the service of the Directory of Open Access Journals (https://doaj.org), Thomson Reuters Web of Science (http://webofknowledge.com) and the articles used for this review. Only journals with an indication for AM, RM, RP or 3D Printing in either the title or the scope are listed below.
In the following overview the abbreviations EiC for Editor in Chief, ImpactF for Impact Factor and SJR for SCImago Journal Rank Indicator (“It expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years”, see http://www.scimagojr.com/SCImagoJournalRank.pdf for more details) are used. The Impact Factor is either acquired from the journal’s home page directly when available or looked up from Thomson Reuters InCites Journal Citation Reports (https://jcr.incites.thomsonreuters.com). For a number of journals neither an SJR nor an IF could
be found. The numbers for the available volumes, issues and articles are directly extracted from the respective journal’s website. The listing contains a
full list of all members of the board and editors per journal for an assessment
of the interconnection between the various journals. Editors and members of
the board that are involved in more than one journal are indicated by italicised text and the indication in which other journal they are involved. The
journals are ordered by their number of articles published and if two or more
journals have an equal number of publications the ordering is chronological.
The journals without publications and age available are sorted by their ISSN.
The 20 journals have an accumulated 22616 articles published (respectively 17877 articles, when only considering articles from Journal 2 after
it was renamed). The median of the first publication date is 2014. Under the assumption that the articles are published equally since the first
Journal (Journal 1) started in 1985, 31 years ago, this results in an average number of 576 articles per year, which accounts for approximately
18 % of the average accumulated results of 3197 scientific works indexed by
Google Scholar (http://scholar.google.com) for the time frame of 2002 to 2016 (see also
Section 1.1). The information on the journals is accurate as of 2016-08-10
according to the respective websites.
1. The International Journal of Advanced Manufacturing Technology
Publisher Springer
ISSN 1433-3015
URL http://www.springer.com/engineering/production+engineering/
journal/170/PSE
ImpactF 1.568
H-Index 71
SJR 0.91
Since 1985
Volumes 85
Issues 432
Articles 11727
EiC Andrew Y. C. Nee (See also Journal 3)
Board and Editors 1) Kai Cheng 2) David W. Russell 3) M. S. Shunmugam 4) Erhan Budak 5) D. Ben-Arieh 6) C. Brecher 7) H. van
Brussel 8) B. Çatay 9) F. T. S. Chan (See also Journal 3) 10) F.
F. Chen 11) G. Chryssolouris 12) Chee Kai Chua (See also Journals 4, 6, 11) 13) M. Combacau 14) A. Crosnier 15) S. S. Dimov
16) L. Fratini 17) M. W. Fu 18) H. Huang 19) V. K. Jain 20) M.
K. Jeong 21) P. Ji 22) W.-Y. Jywe 23) R. T. Kumara 24) A. Kusiak 25) B. Lauwers 26) W. B. Lee 27) C. R. Nagarajah 28) E.
Niemi 29) D. T. Pham 30) S. G. Ponnambalam 31) M. M. Ratnam
32) V. R. Rao 33) C. Saygin 34) W. Steen 35) D. J. Stephenson
36) M. K. Tiwari (See also Journal 18) 37) E. Vezzetti 38) G.
Vosniakos 39) X. Xu (See also Journal 3) 40) Y. X. Yao 41) A.
R. Yildiz 42) M. Zäh (See also Journals 7, 12) 43) H.-C. Zhang
44) L. Zhang 45) A.G. Mamalis
2. Journal of Manufacturing Science and Engineering
Publisher The American Society of Mechanical Engineers
ISSN 1087-1357
URL http://manufacturingscience.asmedigitalcollection.asme.
org/journal.aspx
ImpactF 1.087
H-Index 68
SJR 0.8
Since 1996
Volumes 138
Issues 101 (Since “Journal of Engineering for Industry” was renamed
to its current title)
Articles 7066 (2327 since the renaming in May 1996)
EiC Y. Lawrence Yao
Board and Editors 1) Sam Anand 2) Wayne Cai (See also Journal 5)
3) Jaime Camelio 4) Hongqiang Chen 5) Dragan Djurdjanovic
6) Guillaume Fromentin 7) Yuebin Guo 8) Yong Huang (See also
Journal 9) 9) Yannis Korkolis 10) Laine Mears 11) Gracious Ngaile
(See also Journal 5) 12) Radu Pavel 13) Zhijian Pei 14) Xiaoping
Qian 15) Tony Schmitz 16) Jianjun (Jan) Shi 17) Daniel Walczyk
18) Donggang Yao 19) Allen Y. Yi
3. Robotics and Computer-Integrated Manufacturing
Publisher Elsevier B.V.
ISSN 0736-5845
URL http://www.journals.elsevier.com/robotics-and-computer-integrated-manufacturing
ImpactF 2.077
H-Index 61
SJR 1.61
Since 1984–1994, 1996 ongoing
Volumes 44
Issues 145
Articles 2191
EiC Andre Sharon
Board and Editors 1) M. Haegele 2) L. Wang 3) M. M. Ahmad 4) K.
Akella 5) H. Asada 6) J. Baillieul 7) T. Binford 8) D. Bossi 9) T.
Broughton 10) M. Caramanis 11) F. T. S. Chan (See also Journal
1) 12) G. Chryssolouris 13) J. Deasley 14) S. Dubowsky 15) E.
Eloranta 16) K. C. Fan 17) J. Y. H. Fuh (See also Journals 4,
15) 18) J. X. Gao 19) M. Gevelber 20) Y. Ito 21) K. Iwata 22) T.
Kanade 23) F. Liu 24) L. Luong 25) K. L. Mak 26) K. McKay
27) A. Meng 28) N. Nagel 29) A. Y. C. Nee (See also Journal
1) 30) G. Reinhardt 31) R. D. Schraft 32) W. P. Seering 33) D.
Spath 34) H. C. G. Spur 35) N. Suh 36) M. K. Tiwari 37) H. Van
Brussel 38) F. B. Vernadat 39) A. Villa 40) M. Weck 41) H. Worn
42) K. Wright 43) C. Wu 44) X. Xu (See also Journal 1)
4. Rapid Prototyping Journal
Publisher Emerald Group Publishing, Ltd
ISSN 1355-2546
URL http://www.emeraldinsight.com/loi/rpj
ImpactF 1.352
H-Index 49
SJR 0.81
Since 1995
Volumes 22
Issues 113
Articles 882
EiC Ian Campbell
Board and Editors 1) David Bourell (See also Journals 10, 6) 2) Ian
Gibson (See also Journals 8, 6) 3) James Martin 4) Sung-Hoon
Ahn 5) Paulo Jorge da Silva Bártolo (See also Journals 17, 6,
11) 6) Deon de Beer 7) Alain Bernard (See also Journals 13, 8,
6) 8) Richard Bibb (See also Journal 11) 9) U. Chandrasekhar
10) Khershed Cooper (See also Journal 10) 11) Denis Cormier
(See also Journals 9, 8) 12) Henrique de Amorim Almeida (See
also Journal 12) 13) Phill Dickens 14) Olaf Diegel (See also Journal
10) 15) Jerry Fuh (See also Journals 15, 3) 16) Jorge Ramos Grez
17) Chua Chee Kai (See also Journals 1, 6, 11) 18) Jean-Pierre
Kruth 19) Gideon N. Levy 20) Toshiki Niino 21) Eujin Pei (See
also Journal 12) 22) B. Ravi 23) David Rosen (See also Journals
10, 9) 24) Monica Savalani 25) Tim Sercombe (See also Journals
8, 15) 26) Brent Stucker (See also Journals 10, 9) 27) Wei Sun
(See also Journal 10) 28) Jukka Tuomi 29) Terry Wohlers (See
also Journal 10)
5. Journal of Manufacturing Processes
Publisher Elsevier B.V.
ISSN 1526-6125
URL http://www.journals.elsevier.com/journal-of-manufacturing-processes
ImpactF 1.771
H-Index 24
SJR 1.09
Since 1999
Volumes 24
Issues 47
Articles 620
EiC Shiv G. Kapoor
Board and Editors 1) M. Annoni 2) W. Cai (See also Journal 2) 3) G.
Cheng 4) J. Dong 5) Z. Feng 6) G. Y. Kim 7) A. S. Kumar 8) X. Li
9) G. Ngaile (See also Journal 2) 10) S. S. Park 11) M. Sundaram
12) B. Wu 13) H. Yamaguchi Greenslet 14) Y. Zhang
6. Virtual and Physical Prototyping
Publisher Taylor & Francis
ISSN 1745-2767
URL http://www.tandfonline.com/loi/nvpp20
ImpactF N/A
H-Index 15
SJR 0.42
Since 2006
Volumes 11
Issues 42
Articles 294
EiC Paulo Jorge da Silva Bártolo (See also Journals 4, 17, 11), Chee
Kai Chua (See also Journals 1, 4, 11)
Board and Editors 1) Wai Yee Yeong (See also Journal 11) 2) Alain
Bernard (See also Journals 4, 13, 8) 3) Anath Fischer (See also
Journal 12) 4) Bopaya Bidanda 5) Cijun Shuai (See also Journal
11) 6) David Bourell (See also Journals 10, 4) 7) David Dean (See
also Journal 12) 8) Dongjin Yoo (See also Journal 11) 9) Jack Zhou
10) Ian Gibson (See also Journals 4, 8) 11) Jiankang He (See also
Journal 11) 12) John Lewandowski 13) Martin Dunn 14) Ming
Leu 15) Peifeng Li 16) Shoufeng Yang (See also Journals 17, 11)
17) Shlomo Magdassi 18) Yong Chen (See also Journal 8)
7. RTejournal
Publisher University Library of the FH Aachen University of Applied Sciences
ISSN 1614-0923
URL http://www.rtejournal.de
ImpactF N/A
H-Index N/A
SJR N/A
Since 2004
Volumes 13
Issues 13
Articles 155
EiC Andreas Gebhardt
Board and Editors 1) Ralf Eckhard Beyer 2) Dietmar Drummer 3) Karl-Heinrich Grote 4) Sabine Sändig 5) Gerd Witt (See also Journal 12) 6) Michael Zäh (See also Journals 1, 12)
8. International Journal of Rapid Manufacturing
Publisher Inderscience Enterprises Ltd.
ISSN 1757-8825
URL http://www.inderscience.com/ijrapidm
ImpactF N/A
H-Index N/A
SJR N/A
Since 2009
Volumes 5
Issues 20
Articles 93
EiC Bahram Asiabanpour (See also Journal 13)
Board and Editors 1) Ali K. Kamrani 2) Denis Cormier (See also Journals 9, 4) 3) Ismail Fidan 4) Ian Gibson (See also Journals 4, 6)
5) Wei Jun 6) Allan Rennie 7) Joseph J. Beaman Jr. (See also
Journal 9) 8) Alain Bernard (See also Journals 4, 13, 6) 9) Georges
Fadel 10) Mo Jamshidi 11) Behrokh Khoshnevis (See also Journal
10) 12) John M. Usher 13) Richard A. Wysk 14) Abe Zeid 15) Abdulrahman M. Al-Ahmari 16) Manfredi Bruccoleri 17) Satish T.
S. Bukkapatnam 18) Yong Chen (See also Journal 6) 19) Fred
Choobineh 20) L. Jyothish Kumar (See also Journals 10, 13)
21) Mehdi Mojdeh 22) Benoit Montreuil 23) Kamran Mumtaz
24) Hossein Tehrani Niknejad 25) Pulak Mohan Pandey (See also
Journal 13) 26) Prahalad K. Rao 27) Sa’Ed M. Salhieh 28) Tim
Sercombe (See also Journals 4, 15) 29) Kathryn E. Stecke 30) Albert Chi To 31) Shigeki Umeda 32) Omid Fatahi Valilai 33) Nina
Vojdani 34) Micky R. Wilhelm 35) Stewart Williams
9. Additive Manufacturing
Publisher Elsevier B.V.
ISSN 2214-8604
URL http://www.journals.elsevier.com/additive-manufacturing
ImpactF N/A
H-Index 5
SJR 1.04
Since 2014
Volumes N/A
Issues 12
Articles 93
EiC Ryan Wicker
Board and Editors 1) E. MacDonald 2) M. Perez 3) A. Bandyopadhyay
4) J. Beaman (See also Journal 8) 5) J. Beuth 6) S. Bose 7) S.
Chen 8) J. W Choi 9) K. Chou 10) D. Cormier (See also Journals
4, 8) 11) K. Creehan 12) C. Elkins 13) S. Fish 14) D. D. Gu 15) O.
Harrysson 16) D. Hofmann 17) N. Hopkinson 18) Y. Huang (See
also Journal 2) 19) K. Jurrens 20) K. F. Leong 21) J. Lewis (See
also Journal 10) 22) L. Love 23) R. Martukanitz 24) D. Mei 25) R.
Resnick (See also Journal 10) 26) D. Rosen (See also Journals 10,
4) 27) C. Spadaccini 28) B. Stucker (See also Journals 10, 4) 29) C.
Tuck 30) C. Williams
10. 3D Printing and Additive Manufacturing
Publisher Mary Ann Liebert, Inc
ISSN 2329-7662
URL http://www.liebertpub.com/overview/3d-printing-and-additive-manufacturing/621
ImpactF N/A
H-Index N/A
SJR N/A
Since 2014
Volumes 3
Issues 10
Articles 86
EiC Skylar Tibbits
Board and Editors 1) Hod Lipson 2) Craig Ryan 3) Anthony Atala
4) David Benjamin 5) Lawrence J. Bonassar 6) David Bourell (See
also Journals 4, 6) 7) Adrian Bowyer 8) Glen Bull 9) Adam Cohen 10) Khershed P. Cooper (See also Journal 4) 11) Scott Crump
12) Olaf Diegel (See also Journal 4) 13) Richard Hague 14) John
F. Hornick 15) Weidong Huang 16) Takeo Igarashi 17) Bryan Kelly
18) Behrokh Khoshnevis (See also Journal 8) 19) Matthias Kohler
20) L. Jyothish Kumar (See also Journals 13, 8) 21) Melba Kurman 22) Jennifer A. Lewis (See also Journal 9) 23) Jos Malda
24) Gonzalo Martinez 25) Neri Oxman 26) Bre Pettis 27) Sharon
Collins Presnell 28) Phil Reeves 29) Avi N. Reichental 30) Ralph
Resnick (See also Journal 9) 31) David W. Rosen (See also Journals 9, 4) 32) Jenny Sabin 33) Carolyn Conner Seepersad 34) Brent
Stucker (See also Journals 9, 4) 35) Wei Sun (See also Journal 4)
36) Hiroya Tanaka 37) Thomas Toeppel 38) Peter Weijmarshausen
39) Terry Wohlers (See also Journal 4)
11. International Journal of Bioprinting
Publisher Whioce Publishing Pte Ltd
ISSN 2424-8002
URL http://ijb.whioce.com/index.php/int-j-bioprinting
ImpactF N/A
H-Index N/A
SJR N/A
Since 2015
Volumes 2
Issues 3
Articles 31
EiC Chee Kai Chua (See also Journals 1, 4, 6)
Board and Editors 1) Wai Yee Yeong (See also Journal 6) 2) Aleksandr
Ovsianikov (See also Journal 17) 3) Ali Khademhosseini (See also
Journal 15) 4) Boris N. Chichkov (See also Journal 17) 5) Charlotte Hauser 6) Cijun Shuai (See also Journal 6) 7) Dong Jin
Yoo (See also Journal 6) 8) Frederik Claeyssens 9) Geun Hyung
Kim 10) Giovanni Vozzi (See also Journals 17, 15) 11) Ibrahim
Tarik Ozbolat 12) Jiankang He (See also Journal 6) 13) Lay Poh
Tan 14) Makoto Nakamura 15) Martin Birchall 16) Paulo Jorge
Da Silva Bartolo (See also Journals 17, 4, 6) 17) Peter Dubruel
18) Richard Bibb (See also Journal 4) 19) Roger Narayan (See
also Journals 20, 15) 20) Savas Tasoglu (See also Journals 17, 15)
21) Shoufeng Yang (See also Journals 6, 17) 22) Vladimir Mironov
23) Xiaohong Wang 24) Jia An
12. Progress in Additive Manufacturing
Publisher Springer
ISSN 2363-9520
URL http://www.springer.com/engineering/production+engineering/
journal/40964
ImpactF N/A
H-Index N/A
SJR N/A
Since 2016
Volumes 1
Issues 1
Articles 14
EiC Martin Schäfer, Cynthia Wirth
Board and Editors 1) Henrique A. Almeida (See also Journal 4) 2) David
Dean (See also Journal 6) 3) Fernando A. Lasagni 4) Eujin Pei
(See also Journal 4) 5) Jan Sehrt 6) Christian Seidel 7) Adriaan Spierings 8) Xiaoyong Tian 9) Jorge Vilanova 10) Anath Fischer (See also Journal 6) 11) Russell Harris 12) Dachamir Hotza
13) Bernhard Müller 14) Nahum Travitzky 15) Gerd Witt (See
also Journal 7) 16) Michael Friedrich Zäh (See also Journals 7, 1)
13. International Journal on Additive Manufacturing Technologies
Publisher Additive Manufacturing Society of India
ISSN 2395-4221
URL http://amsi.org.in/homejournal.html
ImpactF N/A
H-Index N/A
SJR N/A
Since 2015
Volumes 1
Issues 1
Articles 7
EiC Pulak M. Pandey (See also Journal 8), David Ian Wimpenny, Ravi
Kumar Dwivedi
Board and Editors 1) L. Jyothish Kumar (See also Journals 10, 8)
2) Keshavamurthy D. B. 3) Khalid Abdelghany 4) Suman Das
5) Alain Bernard (See also Journals 4, 8, 6) 6) C. S. Kumar
7) Bahram Asiabanpour (See also Journal 8) 8) K. P. Raju Rajurkar 9) Ehsan Toyserkani 10) Wan Abdul Rahman 11) Sarat
Singamneni 12) Vijayavel Bagavath Singh
14. 3D Printing in Medicine
Publisher Springer
ISSN 2365-6271
URL http://www.springer.com/medicine/radiology/journal/41205
ImpactF N/A
H-Index N/A
SJR N/A
Since 2015
Volumes 2
Issues 4
Articles 3
EiC Frank J. Rybicki
Board and Editors 1) Leonid L. Chepelev 2) Andy Christensen 3) Koen
Engelborghs 4) Andreas Giannopoulos 5) Gerald T. Grant 6) Ciprian
N. Ionita 7) Peter Liacouras 8) Jane M. Matsumoto 9) Dimitrios
Mitsouras 10) Jonathan M. Morris 11) R. Scott Rader 12) Adnan Sheikh 13) Carlos Torres 14) Shi-Joon Yoo 15) Nicole Wake
16) William Weadock
15. Bioprinting
Publisher Elsevier B.V.
ISSN 2405-8866
URL http://www.journals.elsevier.com/bioprinting
ImpactF N/A
H-Index N/A
SJR N/A
Since 2016
Volumes 1
Issues N/A
Articles 1
EiC A. Atala
Board and Editors 1) S. V. Murphy 2) T. Boland 3) P. Campbell 4) U.
Demirci (See also Journal 17) 5) B. Doyle 6) J. Fisher 7) J. Y.
H. Fuh (See also Journals 4, 3) 8) A. K. Gaharwar 9) P. Gatenholm 10) K. Jakab 11) J. Jessop 12) A. Khademhosseini (See also
Journal 11) 13) S. J. Lee 14) I. Lelkes 15) J. Lim 16) A. G. Mikos
17) R. Narayan (See also Journals 20, 11) 18) T. Sercombe (See
also Journals 4, 8) 19) A. Skardal 20) S. Tasoglu (See also Journals 17, 11) 21) D. J. Thomas 22) G. Vozzi (See also Journals 17,
11) 23) I. Whitaker (See also Journal 17) 24) S. K. Williams II
16. 3D Printing – Science and Technology
Publisher DE GRUYTER OPEN
ISSN 1896-155X
URL http://www.degruyter.com/view/j/3dpst
ImpactF N/A
H-Index N/A
SJR N/A
Since 2016
Volumes 0
Issues 0
Articles 0
EiC Haim Abramovich
Board and Editors 1) Christopher A. Brown 2) Paolo Fino 3) Amnon
Shirizly 4) Frank Walther 5) Kaufui Wong
17. Journal of 3D Printing in Medicine
Publisher Future Medicine Ltd
ISSN 2059-4755
URL http://www.futuremedicine.com/page/journal/3dp/editors.jsp
ImpactF N/A
H-Index N/A
SJR N/A
Since 2016
Volumes 0
Issues 0
Articles 0
EiC Dietmar W Hutmacher
Board and Editors 1) Peter Choong 2) Michael Schuetz 3) Iain S. Whitaker
(See also Journal 15) 4) Shoufeng Yang (See also Journals 6,
11) 5) Paulo Jorge Bártolo (See also Journals 4, 6, 11) 6) Luiz
E. Bertassoni 7) Faiz Y. Bhora 8) Boris N. Chichkov (See also
Journal 11) 9) Utkan Demirci (See also Journal 15) 10) Michael
Gelinsky 11) Ruth Goodridge 12) Robert E. Guldberg 13) Scott
J. Hollister 14) Zita M. Jessop 15) Jordan S. Miller 16) Adrian
Neagu 17) Aleksandr Ovsianikov (See also Journal 11) 18) Katja
Schenke-Layland 19) Ralf Schumacher 20) Jorge Vicente Lopes da
Silva 21) Chris Sutcliffe 22) Savas Tasoglu (See also Journals 15,
11) 23) Daniel Thomas 24) Martijn van Griensven 25) Giovanni
Vozzi (See also Journals 15, 11) 26) David J. Williams 27) Chris
J. Wright 28) Jing Yang 29) Nizar Zein
18. Smart and Sustainable Manufacturing Systems
Publisher ASTM
ISSN N/A
URL http://www.astm.org/SSMS
ImpactF N/A
H-Index N/A
SJR N/A
Since 2017
Volumes 0
Issues 0
Articles 0
EiC Sudarsan Rachuri
Board and Editors 1) Darek Ceglarek 2) Karl R. Haapala 3) Yinlun
Huang 4) Jacqueline Isaacs 5) Sami Kara 6) Soundar Kumara
7) Sankaran Mahadevan 8) Lihong Qiao 9) Roberto Teti 10) Manoj
Kumar Tiwari (See also Journal 1) 11) Shozo Takata 12) Tetsuo
Tomiyama 13) Li Zheng 14) Fazleena Badurdeen 15) Abdelaziz
Bouras 16) Alexander Brodsky 17) LiYing Cui 18) Bryony DuPont
19) Sebti Foufou 20) Pasquale Franciosa 21) Robert Gao 22) Moneer Helu 23) Sanjay Jain 24) I. S. Jawahir 25) Sagar V. Kamarthi
26) Jay Kim 27) Minna Lanz 28) Kincho H. Law 29) Mahesh
Mani 30) Raju Mattikalli 31) Michael W. McKittrick 32) Shreyes
N. Melkote 33) P. V. M. Rao 34) Utpal Roy 35) Christopher J.
Saldana 36) K. Senthilkumaran 37) Gopalasamudram R. Sivaramakumar 38) Eswaran Subrahmanian 39) Dawn Tilbury 40) Conrad S. Tucker 41) Anahita Williamson 42) Paul William Witherell
43) Lang Yuan 44) Rakesh Agrawal 45) Dean Bartles 46) Gahl
Berkooz 47) Jian Cao 48) S. K. Gupta 49) Timothy G. Gutowski
50) Gregory A. Harris 51) Rob Ivester 52) Mark Johnson 53) Thomas
Kurfess 54) Bahram Ravani 55) William C. Regli 56) S. Sadagopan
57) Vijay Srinivasan 58) Ram D. Sriram 59) Fred van Houten
60) Albert J. Wavering
19. Powder Metallurgy Progress
Publisher DE GRUYTER OPEN
ISSN 1339-4533
URL http://www.degruyter.com/view/j/pmp
ImpactF N/A
H-Index N/A
SJR N/A
Since N/A
Volumes 0
Issues 0
Articles 0
EiC Beáta Ballóková
Board and Editors 1) Katarína Ondrejová 2) Herbert Danninger 3) Eva
Dudrová 4) Marco Actis Grande 5) Abolghasem Arvand 6) Csaba
Balázsi 7) Sergei M. Barinov 8) Frank Baumgärtner 9) Paul Beiss
10) Sigurd Berg 11) Michal Besterci 12) Jaroslav Briančin 13) Francisco Castro 14) Andrzej Cias 15) Ján Dusza 16) Juraj Ďurišin
17) Štefan Emmer 18) Sergei A. Firstov 19) Christian Gierl-Mayer
20) Eduard Hryha 21) Pavol Hvizdoš 22) Jan Kazior 23) Jacob
Kübarsepp 24) Alberto Molinari 25) John R. Moon 26) Ľudovít
Parilák 27) Doan Dinh Phuong 28) Raimund Ratzi 29) Wolf D.
Schubert 30) František Simančík 31) Marin Stoytchev 32) Andrej
Šalak 33) José M. Torralba 34) Andrew S. Wronski 35) Timothy
Martin 36) Radovan Bureš
20. 3D-Printed Materials and Systems
Publisher Springer
ISSN 2363-8389
URL http://www.springer.com/materials/journal/40861
ImpactF N/A
H-Index N/A
SJR N/A
Since N/A
Volumes 0
Issues 0
Articles 0
EiC Roger J. Narayan (See also Journals 15, 11)
Board and Editors 1) Vipul Dave 2) Mohan Edirisinghe 3) Sungho
Jin 4) Soshu Kirihara 5) Sanjay Mathur 6) Mrityunjay Singh
7) Pankaj Vadgama
Furthermore, the following journals are identified from the literature relevant to this review. Journals catering specifically or explicitly to AM, RM, RP and 3D printing are listed above.
The list contains only journals with at least two publications. The goal of composing this list is to enable other researchers to identify possible publication venues for their work. The list is sorted by the number of publications in our bibliography for each identified journal. The number preceding each entry indicates the number of publications for the journal.
11 The International Journal of Advanced Manufacturing Technology (See
Journal 1)
11 Rapid Prototyping Journal (See Journal 4)
7 Computer-Aided Design (http://www.journals.elsevier.com/computer-aided-design), ISSN: 0010-4485
6 Robotics and Computer-Integrated Manufacturing (See Journal 3)
6 Journal of Manufacturing Science and Engineering (See Journal 2)
5 Journal of Materials Processing Technology (http://www.journals.elsevier.com/journal-of-materials-processing-technology), ISSN: 0924-0136
4 International Journal of Computer Integrated Manufacturing (http://www.tandfonline.com/toc/tcim20/current), ISSN: 1362-3052
4 CIRP Annals - Manufacturing Technology (http://www.journals.elsevier.com/cirp-annals-manufacturing-technology), ISSN: 0007-8506
3 Journal of Manufacturing Systems (http://www.journals.elsevier.com/journal-of-manufacturing-systems), ISSN: 0278-6125
3 Computers in Industry (http://www.journals.elsevier.com/computers-in-industry), ISSN: 0166-3615
2 Virtual and Physical Prototyping (See Journal 6)
2 Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture (http://pib.sagepub.com), ISSN: 2041-2975
2 Journal of Intelligent Manufacturing (http://link.springer.com/journal/10845), ISSN: 1572-8145
2 International Journal of Production Research (http://www.tandfonline.com/toc/tprs20/current), ISSN: 1366-588X
2 International Journal of Machine Tools and Manufacture (http://www.journals.elsevier.com/international-journal-of-machine-tools-and-manufacture), ISSN: 0890-6955
2 IEEE Transactions on Industrial Informatics (http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=9424), ISSN: 1551-3203
2 Enterprise Information Systems (http://www.tandfonline.com/toc/teis20/current), ISSN: 1751-7583
2 Applied Mechanics and Materials (http://www.scientific.net/AMM), ISSN: 1662-7482
2 Advanced Materials Research (http://www.scientific.net/AMR), ISSN: 1662-8985
4. Reviews on the Subject
The topic of AM in general, and its special applications, technologies and directions, is extensively researched, and the results are published in the literature. The growth of the number of publications as found by Google Scholar and ProQuest is illustrated in the following figures (See Fig. 10 and Fig. 11).
An analysis of literature within this domain from the sources for scientific literature (See Sect. 1.1.2) shows an average annual increase in the number of published works from 2002 to 2016 of 41.3 % for Google Scholar (See Fig. 10) and of 26.1 % for the search engine ProQuest. These numbers are obtained by averaging, over all queries, the average annual growth of the result counts found on http://scholar.google.com for keywords related to specific AM topics and to AM related literature in general.
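The computation behind these growth figures can be sketched as follows; the keyword series below are illustrative placeholders, not the actual search engine counts:

    # Sketch of the growth computation described above: for each keyword, take
    # the yearly result counts, compute year-over-year growth in percent,
    # average per keyword, then average over all keywords. The counts below
    # are illustrative placeholders, not actual Google Scholar data.
    def average_annual_growth(counts):
        """Mean year-over-year growth in percent for a series of yearly counts."""
        rates = [(b - a) / a * 100 for a, b in zip(counts, counts[1:])]
        return sum(rates) / len(rates)

    keyword_counts = {                      # placeholder series
        "additive manufacturing": [120, 180, 260, 390],
        "3d printing": [300, 420, 640, 900],
    }
    per_keyword = [average_annual_growth(c) for c in keyword_counts.values()]
    print(f"average of per-keyword averages: "
          f"{sum(per_keyword) / len(per_keyword):.1f} %")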
In this section we will present the findings of the analysis of available data
on the scientific publications.
Specific aspects of AM, 3D printing and associated areas are the topic of a number of reviews listed below. The list of reviews is compiled by searching on the previously mentioned search engines (See Sect. 1.1.2) using a keyword
Figure 10: Average number of publications per year and average annual growth for the combined results from scholar.google.com for 2002–2016 (average number of publications: 3196.81; average annual growth: 41.32 %).
Figure 11: Average number of publications per year and average annual growth for the combined results from proquest.com for 2002–2016 (average number of publications: 441.93; average annual growth: 26.14 %).
search. The keywords are "3D Printing" + Review/Survey/"State of the Art", "Additive Manufacturing" + Review/Survey/"State of the Art" and "Rapid Manufacturing" + Review/Survey/"State of the Art".
The time range for the search for reviews is restricted from 2005 to 2016. Following this literature search, a backward search on the results is performed. From the 70 reviews identified we calculate the average number of authors per review to be 3.3, with an average length of 15.2 pages. The list is sorted chronologically, with the general theme or domain of each review provided.
1. Dimitar Dimitrov, Kristiaan Schreve and N. de Beer [61]; General Introduction, Applications, Research Issues
2. Vladimir Mironov, Nuno Reis and Brian Derby [172]; Bioprinting,
Technology
3. Ben Utela et al. [252]; New Material Development (Mainly Powders)
4. Abbas Azari and Sakineh Nikzad [21]; Dentistry, Applications in Dentistry
5. Hongbo Lan [143]; Rapid Prototyping, Manufacturing Systems
6. Daniel Eyers and Krassimir Dotchev [72]; Rapid Manufacturing, Mass
Customisation
7. Ferry P. W. Melchels, Jan Feijen and Dirk W. Grijpma [170]; Stereolithography, Biomedical Engineering
8. Fabian Rengier et al. [207]; Medicine, Data Acquisition (Reverse-Engineering)
using Image Data
9. R. Sreenivasan, A. Goel and D. L. Bourell [227]; Energy Consumption,
Sustainability
10. Rupinder Singh [224]; Rapid Prototyping, Casting
11. R. Ian Campbell, Deon J. de Beer and Eujin Pei [44]; Application and
Development of AM in South Africa
12. Benjamin Vayre, Frédéric Vignat and François Villeneuve [256]; Metal
Components, Technology
13. Dongdong Gu et al. [93]; Metal Components, Technology, Terminology
14. Ferry P. W. Melchels et al. [169]; Medicine, Tissue and Organ Engineering
15. Kaufui V. Wong and Aldo Hernandez [271]; General, Technology
16. Lawrence E. Murr et al. [182]; Metal Components, EBM, Laser Melting
17. Shawn Moylan et al. [179]; Quality, Test Artifacts
18. Timothy J. Horn and Ola L. A. Harrysson [109]; General, Applications,
Technology
19. Xibing Gong, Ted Anderson and Kevin Chou [90]; EBM, Powder Based
AM
20. Flavio S. Fogliatto, Giovani J.C. da Silveira and Denis Borenstein [77];
Mass-Customization
21. K. P. Karunakaran et al. [125]; Rapid Manufacturing, Metal Object
Manufacturing
22. Carl Schubert, Mark C. van Langeveld and Larry A. Donoso [217];
General
23. Irene J. Petrick and Timothy W. Simpson [195]; Economics, Business
24. Iulia D. Ursan, Ligia Chiu and Andrea Pierce [251]; Pharmaceutical
Drug Printing
25. Jasper Cerneels et al. [47]; Thermoplastics
26. Mohammad Vaezi, Hermann Seitz and Shoufeng Yang [253]; Micro-Structure AM
27. Nannan Guo and Ming C. Leu [97]; General, Technology, Materials,
Applications
28. Olga Ivanova, Christopher Williams and Thomas Campbell [118]; Nano-Structure AM
29. Robert Bogue [30]; General
30. Samuel H. Huang et al. [113]; Socio-Ecological and Economy
31. Zicheng Zhu et al. [305]; Hybrid Manufacturing
32. Dazhong Wu et al. [272]; Cloud Manufacturing
33. Bethany C. Gross et al. [92]; Biotech, Chemistry
34. Brett P. Conner et al. [57]; Classification, Object Complexity
35. Brian N. Turner, Robert Strong and Scott A. Gold [248]; Thermoplastics, Physical Properties
36. David W. Rosen [210]; Design for Additive Manufacturing
37. Dimitris Mourtzis, Michael Doukas and Dimitra Bernidaki [178]; Simulation
38. Douglas S. Thomas and Stanley W. Gilbert [242]; Economy, Cost
39. Gustavo Tapia and Alaa Elwany [239]; Process Monitoring, Quality
40. Hae-Sung Yoon et al. [291]; Energy Consumption
41. Jan Deckers, Jef Vleugels and Jean-Pierre Kruth [59]; Ceramics AM
42. Rouhollah Dermanaki Farahani, Kambiz Chizari and Daniel Therriault [74]; Micro-Structure AM
43. Siavash H. Khajavi, Jouni Partanen and Jan Holmström [131]; Supply
Chain, Application
44. William E. Frazier [79]; Metal Components
45. Wu He and Lida Xu [102]; Cloud Manufacturing
46. Syed Hasan Masood [163]; Fused Deposition Modeling (FDM)
47. Brian N. Turner and Scott A Gold [247]; Thermoplastic AM, Material
Properties
48. Carlos Mota et al. [177]; Medicine, Tissue Engineering
49. C. Y. Yap et al. [290]; SLM
50. Donghong Ding et al. [62]; Metal Components, Wire Fed Processes
51. Adamson et al. [11]; Cloud Manufacturing, Terminology
52. Jie Sun et al. [231]; Food Printing, Technology
53. Jin Choi et al. [54]; 4D Printing
54. K. A. Lorenz et al. [158]; Hybrid Manufacturing
55. Merissa Piazza and Serena Alexander [197]; General, Terminology, Academic
56. Omar A. Mohamed, Syed H. Masood and Jahar L. Bhowmik [176];
Process Parameter Optimization (FDM)
57. Seyed Farid Seyed Shirazi et al. [220]; Tissue Engineering, Powder
Based AM
58. Sheng Yang and Yaoyao Fiona Zhao [288]; Design for AM, Complexity
59. Sofiane Guessasma et al. [95]; Design for AM, Process Parameter Optimization
60. Wei Gao et al. [83]; General, Technology, Engineering
61. Yong Huang et al. [115]; General, Technology, Research Needs
62. Zhong Xun Khoo et al. [133]; Smart Materials, 4D Printing
63. Hammad H. Malik et al. [162]; Medicine, Surgery
64. Jie Sun et al. [232]; Food Printing
65. Behzad Esmaeilian, Sara Behdad and Ben Wang [71]; Manufacturing
66. H. Bikas, P. Stavropoulos and G. Chryssolouris [28]; General, Technology
67. Julien Gardan [84]; Technology, Engineering, Manufacturing
68. Swee Leong Sing et al. [223]; Metal Components, Medicine, Implants,
Materials
69. William J. Sames et al. [214]; Metal Components, Materials
70. Andrew J. Pinkerton [198]; Laser-technology
4.1. Stakeholder Distinction
Different 3D printing technologies, machines and manufacturers as well as services target different clients, for which we propose the following classification. Generally, the discerning factors are 1. cost per machine, 2. quality of print (e.g., surface quality, physical properties of the object), 3. reliability of the machine and 4. materials available. From the literature, three classes of audience are apparent:
• consumer/end user
• professional user
• industrial application
For the consumer a very important factor is the cost of the printer itself: 45 % of consumers are not willing to pay more than $US 299 for a 3D printer [164].
In recent years the price of entry-level consumer 3D printers, especially for build-kits, decreased to about $US 300 (e.g., the XYZprinting da Vinci Jr. 1.0 at $US 297.97, https://www.amazon.com/XYZprinting-Vinci-Jr-1-0-Printer). Open-source projects like RepRap have contributed to the decline of costs for these machines [225].
In Fig. 12 we differentiate between the user groups of end-users/consumers, professional users and industrial users. Industrial users rely on high quality combined with a large selection of processable materials. Machines for these users are expensive and out of reach of most end-users and professional users. The quality these machines produce is very high, and the objects can be used for integration in a product or be a product themselves. Due to these restrictions such machines are not widespread but limited to highly specialised enterprises.
On the other end of the spectrum the end-user/consumer has a large choice of 3D printers to select from; they are relatively inexpensive, produce objects of acceptable quality, work with a much smaller number of materials (typically thermoplastics) and have a reliability that is lower than that of professional equipment. In the middle of the spectrum we see professional users, e.g., from design bureaus or architects, who use such machines in a professional manner and draw benefits from the technology, although it is mostly not their core business. For example, an architect may use a 3D printer to create a high-quality model of a building he designed, which is faster and easier than making such a model by hand.
Figure 12: Audience classification and expectations. The consumer, professional and industrial user groups are positioned along the dimensions cost per machine, quality, reliability, material availability and availability.
5. 3D Printing Services
There are numerous dedicated 3D printing services available to end-users, professionals and industrial users. They differ in the clients they address, the services they offer, the quality they can provide and the prices they charge. In this section we give an overview of a selection of available 3D printing services. The list is not exhaustive, as a number of enterprises offer 3D printing services in their portfolio but are not necessarily to be considered 3D printing services, due to either their local mode of operation or the small number of 3D printers the user can choose from. This overview is closely based on the work of [204] and extends its findings.
We use the following list of properties to distinguish the services:
• The target group (End-users, industrial users or professional users)
• The local reach (Local or global)
• Availability of an API
• Services rendered (Design, 3D printing, marketplace, other)
Rayna and Striukova [204] base their exploratory study on the following
list of services they have identified. For the original list of services we add
the following information.
• 3D Burrito (http://3dburrito.com) - Pre-Launch Phase
• 3D Creation Lab (http://www.3dcreationlab.co.uk)
• 3DLT (http://3dlt.com) - Shut down on 2015-12-31
• 3DPrintUK (https://www.3dprint-uk.co.uk)
• Additer.com (http://additer.com) - Unreachable
• Cubify Cloud (http://cubify.com) - Acquired by 3D Systems, Service no longer available
• i.Materialise (https://i.materialise.com/)
• iMakr (http://imakr.co.uk)
• Kraftwürx.com (http://www.kraftwurx.com)
• MakerBot/Thingiverse (http://thingiverse.com)
• MakeXYZ (https://www.makexyz.com)
• Ponoko (https://www.ponoko.com/)
• Sculpteo (https://www.sculpteo.com)
• Shapeways (http://www.shapeways.com/)
For this study we extend the selection with the additional services listed in Tabs. 1 and 2. Services omitted in these two tables are described in the original study.
In contrast to the authors of the original work, we think that an exhaustive list of such services is impossible to compile, as a large number of local businesses offer 3D printing services over the Internet and would therefore qualify to be included in such a list. These (local) businesses are hard to identify due to their limited size and reach. Also, an exhaustive list would need to contain the many similar and derivative 3D printing services and repositories that exist.
Further, we extend the classification and study by whether the respective service provides an API. An API should provide methods to use the service programmatically. With an API, such printing services can be used as a flexible production means in CM settings. The range of functionality of such APIs varies significantly, from the possibility of having a widget with a 3D model viewer displayed on a website, to uploading and storing digital models in a repository, to requesting quotes for manufacturing or digital fabrication. A commonality of these APIs is the requirement for the third party to have an account with the service, which is indicated in Tabs. 3 and 4 by Implementer in the column Required for registration. The indication User in this column means that the end user must be registered with the service too.
The implementer registration is intended for scenarios where the API is embedded in a service or website that a third-party user then uses. The findings of this study are presented in Tabs. 3 and 4, where we state whether the service provides an API, whether it is publicly available or only accessible for business partners, who needs to be registered for the usage of the API and what capabilities the API provides (See Tab. 5).
This explorative extension study is performed as described by the original
authors.
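To illustrate how such an API could serve as a flexible production means in a CM setting, the following sketch shows a typical upload-and-quote workflow. The base URL, endpoint paths, field names and authentication header are hypothetical and do not correspond to any specific service surveyed here:

    # Hypothetical sketch of using a printing service API programmatically.
    # All endpoints, field names and the API key below are invented for
    # illustration; they belong to no specific service listed in this study.
    import requests

    BASE = "https://api.example-printservice.com/v1"   # hypothetical service
    HEADERS = {"Authorization": "Bearer <implementer-api-key>"}

    def upload_model(path):
        """Upload a digital model file; returns the server-side model id."""
        with open(path, "rb") as f:
            r = requests.post(f"{BASE}/models", headers=HEADERS,
                              files={"model": f})
        r.raise_for_status()
        return r.json()["model_id"]

    def request_quote(model_id, material="PA12", quantity=1):
        """Request a manufacturing quote for an uploaded model."""
        payload = {"model_id": model_id, "material": material,
                   "quantity": quantity}
        r = requests.post(f"{BASE}/quotes", headers=HEADERS, json=payload)
        r.raise_for_status()
        return r.json()

    quote = request_quote(upload_model("bracket.stl"))
    print("quoted price:", quote["price"])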
Table 1: 3D printing platforms and services included in this study – Part 1

Company/Service Name | URL | Classification | Established | Location
3Faktur | http://3faktur.com | Modeling Service | 2014 | Germany
3DaGoGo | https://www.3dagogo.com | Marketplace | 2013 | USA
3DExport | https://3dexport.com | Marketplace, Repository | 2004 | USA
3DHubs | http://3dhubs.com | Crowd Printing Provider | 2013 | USA
3DPrinterOS | https://www.3dprinteros.com | Crowd Printing Provider | 2014 | USA
3DShook | http://www.3dshook.com | Marketplace, Subscription Service | 2014 | Israel
3D Warehouse | https://3dwarehouse.sketchup.com | Marketplace, Community, Repository | 2006 | USA
Autodesk 123D | http://www.123dapp.com | Software, Marketplace, Repository | 2009 | USA
Clara.io | https://clara.io | Repository, Modeling | 2013 | Canada
CreateThis | http://www.createthis.com | Marketplace | 2013 | USA
Cults | https://cults3d.com | Marketplace, Repository, Design Service | 2013 | France
Grabcad | https://grabcad.com | Software, Marketplace, Repository | 2009 | USA
La Poste | http://impression3d.laposte.fr | Print Provider, Marketplace | 2013 | France
Libre3D | http://libre3d.com | Marketplace, Repository | 2014 | USA
Makershop | https://www.makershop.co | Marketplace, Repository | 2013 | USA
Table 2: 3D printing platforms and services included in this study – Part 2

Company/Service Name | URL | Classification
NIH 3D Print Exchange | http://3dprint.nih.gov | Co-Creation, Repository
p3d.in | https://p3d.in | Modeling
Pinshape | https://pinshape.com | Marketplace
REPABLES | http://repables.com | Repository
Rinkak | https://www.rinkak.com | Marketplace, Repository, Crowd Printing Provider
shapeking | http://www.shapeking.com | Marketplace, Repository
Shapetizer | https://www.shapetizer.com | Marketplace, Repository, Print Provider
Sketchfab | https://sketchfab.com | Marketplace, Repository
stlfinder | http://www.stlfinder.com | Search Engine
STLHive | http://www.stlhive.com | Marketplace, Repository, Design Service
Stratasys Direct Express | https://express.stratasysdirect.com | Print Provider
Threeding | https://www.threeding.com | Marketplace, Print Provider
Tinkercad | https://www.tinkercad.com | Design, Repository
Treatstock | https://www.treatstock.com | Marketplace, Community, Crowd Printing Provider
trinckle | https://www.trinckle.com | Print Provider
Trinpy | https://www.trinpy.com | Marketplace
Table 3: 3D printing platforms and services and their APIs - Part 1

Company / Service Name | Provides an API | Required for registration | Capabilities | Reach | Target Group
3Faktur | No | N/A | N/A | Regional | Consumer
3DaGoGo | No | N/A | N/A | Global | Consumer
3DExport | No | N/A | N/A | Global | Consumer + Professional
3DHubs | Yes | Implementer + User | Upload | Global | Consumer
3DPrinterOS | No | N/A | N/A | Global | Consumer
3DPrintUK | No | N/A | N/A | Global | Consumer
3DShook | No | N/A | N/A | Global | Consumer
3D Creation Lab | No | N/A | N/A | Global | Consumer
3D Warehouse | No | N/A | N/A | Global | Consumer
Autodesk 123D | No | N/A | N/A | Global | Consumer
Clara.io | Yes | Implementer | Upload, Modify, Retrieve | Global | Consumer + Professional
CreateThis | No | N/A | N/A | Global | Consumer
Cults | Yes (not public) | Implementer | View, Retrieve | Global | Consumer
Grabcad | No | N/A | N/A | Global | Consumer + Professional
iMakr | No | N/A | N/A | Global | Consumer
i.Materialise | Yes | Implementer | Upload, Quoting, Order | Global | Consumer + Professional
Kraftwürx.com | Yes (not public) | Implementer | Upload, Order | Global | Consumer
La Poste | No | N/A | N/A | Regional | Consumer
Libre3D | No | N/A | N/A | Global | Consumer
MakerBot / Thingiverse | Yes | Implementer | Upload, Retrieve | Global | Consumer
Makershop | Yes | Implementer | Search, Retrieve | Global | Consumer + Professional
MakeXYZ | Yes | Implementer + User | Order | Global | Consumer + Professional
Materflow | No | N/A | N/A | Global | Consumer
MeltWerk | Yes | Implementer | Upload | Global | Consumer
Table 4: 3D printing platforms and services and their APIs - Part 2

Company / Service Name | Provides an API | Required for registration | Capabilities | Reach | Target Group
NIH 3D Print Exchange | Yes | Implementer | Upload, Retrieve | Global | Consumer
p3d.in | No | N/A | N/A | Global | Consumer
Pinshape | No | N/A | N/A | Global | Consumer
Ponoko | No | N/A | N/A | Global | Consumer
REPABLES | No | N/A | N/A | Global | Consumer
Rinkak | Yes | Implementer | View, Order, Modeling | Global | Consumer
Sculpteo | Yes | Implementer + User | Upload, Retrieve, Quoting, Order | Global | Consumer + Professional
shapeking | No | N/A | N/A | Global | Consumer
Shapetizer | No | N/A | N/A | Global | Consumer
Shapeways | Yes | Implementer + User | Upload, Quoting, Order | Global | Consumer + Professional
Sketchfab | Yes | Implementer | Upload, View | Global | Consumer
stlfinder | No | N/A | N/A | Global | Consumer
STLHive | No | N/A | N/A | Global | Consumer + Professional
Stratasys Direct Express | No | N/A | N/A | Regional | Professional
Threeding | No | N/A | N/A | Global | Consumer
Tinkercad | No | N/A | N/A | Global | Consumer
Treatstock | Yes | Implementer | Upload, Retrieve | Global | Consumer
trinckle | No | N/A | N/A | Global | Consumer + Professional
Trinpy | No | N/A | N/A | Global | Consumer
TurboSquid | No | N/A | N/A | Global | Consumer + Professional
UPS | No | N/A | N/A | Regional | Consumer
Yeggi | Yes | Implementer | Search, Retrieve | Global | Consumer
Table 5: Categorising 3D printing online platforms (columns: design repository, design marketplace, design service, printing marketplace, printing service, printer sale, crowdsourcing platform, editor; the per-service markings could not be recovered from the source layout).
As analysed in Tab. 5, the services surveyed each offer a different range of services. No provider could be identified that offers a complete set of services for 3D printing and related tasks. In the table, the indication p marks companies that do not themselves offer printers through this service but whose parent companies do. The o character in the printing service column for Tinkercad and YouMagine indicates that the service itself does not render printing services but cooperates with a third party for the provisioning of this service. With the exception of La Poste, UPS and iMakr, all the services render their business completely on the Internet without the requirement for physical interaction. La Poste and UPS offer an Internet interface with the physical delivery of the objects in certain shops of theirs. Services that offer a design marketplace can offer designs and other files free of charge or for a fee; no distinction is made in this study. Yeggi and stlfinder are search engines for 3D model data that work on the data from other sources. Albeit a search engine, Yeggi provides the integration of printing services and cloud printing services for models available from third-party services; thus Yeggi can be classified as a service of services. The service rendered by Trinpy is subscription based with various membership options. Grabcad provides 3D printing planning and control services, and integration with an online editor.
6. Review
Cloud Manufacturing is mainly an overlapping manufacturing or engineering concept, with the development of parts or objects grounded in "traditional" manufacturing. With traditional manufacturing we denote all technologies and methods to create or fabricate objects or parts other than AM. For a distinction between manufacturing methods see Klocke [136], Nee [185] and the DIN Standard 8580 [1]. In this sense all subtractive or formative manufacturing methods are summarised as "traditional manufacturing" methods. As AM offers a large degree of flexibility due to short lead times as well as other beneficial properties, we see AM as the ideal technology to be considered within CM scenarios. Taking the properties of AM into account, we do not predict that AM will replace other manufacturing methods, not even within CM scenarios. Rather, AM will fill niches for special applications like mass-customisation, rapid replacement production capabilities or RT, especially within CM scenarios. With this work we aim to contribute to the development of AM methodology and technology in the CM paradigm.
6.1. Topological Map
In Fig. 13 the relationships and connections of various concepts relevant to CM are described. This map forms the basis of the following review, where the nodes of the map represent sections of the review in which we present the current state of research and elaborate on open research questions. The topics are extracted from the literature.
This topological map displays the relationship of CM with a variety of connected and enabling technologies and concepts. Additive Manufacturing (See Sect. 6.10) enables CM to be more modular and flexible and offers new capabilities and business opportunities. Rapid Technology (See Sect. 6.8) and its constituents Rapid Prototyping (RP, see Sect. 6.8.3), Rapid Manufacturing (RM, see Sect. 6.8.2) and Rapid Tooling (RT, see Sect. 6.8.1) are areas in which CM can be applied. The topic of Service Orientation (Sect. 6.7) and its "as-a-Service" constituents, of which Design-as-a-Service (DaaS, see Sect. 6.7.2), Testing-as-a-Service (TaaS, see Sect. 6.7.3) and Manufacturing-as-a-Service (MaaS, see Sect. 6.7.1) are explored as examples, are concepts that enable the efficient application of CM. For a broader understanding it is required to research the stakeholders involved in this technology, which is the subject of Sect. 6.6. The topics of Scheduling (See Sect. 6.12) and Resource Description (See Sect. 6.13) are to be discussed for the universal and efficient application of CM. The domain of Simulation (See Sect. 6.5) with its constituents Optimisation (See Sect. 6.5.2) and Topological Optimisation (See Sect. 6.5.1) enables a more rapid, more flexible and more robust usage of the technology. For AM technology, the application of Topology Optimisation unlocks the benefits of this technology. Similar to AM is 3D printing (See Sect. 6.4) with its subtopic of Accuracy and Precision (See Sect. 6.4.1), as this technology is an appropriate basis for CM systems. The topic of Hybrid Manufacturing (See Sect. 6.14) gains importance in flexible and agile manufacturing systems, which warrants and requires its research. In the topic of Technology (See Sect. 6.2) the general principles and technologies of CM and AM are discussed, as these are basic principles for the efficient implementation of these systems. The topic of Cloud Computing (CC, see Sect. 6.11) with its sub-components Internet of Things (IoT, see Sect. 6.11.1) and Cyber-physical Systems (CPS, see Sect. 6.11.2) is the conceptual progenitor of CM and therefore requires careful study. IoT and CPS are key enabling technologies for CM. The topic of Security (See Sect. 6.3) is of increasing importance with the spreading application of AM and CM, as attack surfaces grow and potential damage increases.
Figure 13: Topological Map for Cloud Manufacturing
6.2. Technology
A large number of technologies and technological advances have made it possible for AM to evolve from its origin as an RP method to its current state, where it is used for end-part manufacturing (RM) and is available to consumers [88, 269]. All 3D printed objects are based on a digital model. This model can either be created using CAD or 3D sculpting software, or acquired using reverse-engineering methods (e.g., object scanning or photo reconstruction) [89]. Although direct slicing from a CAD model was proposed by Jamieson [119] in 1995, it is still rarely performed. Direct slicing requires an implementation in the CAD software for each printer type and printer manufacturer, which is not feasible. Further shortcomings of the de-facto standard file format for AM, i.e., STL, namely the possibility to contain mis-aligned facets or holes, to be non-watertight, and to be too large in file size, are reported by [281].
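Such defects can be detected programmatically before a model is submitted to a printer or service; a minimal sketch using the third-party trimesh library (an assumption for illustration, as the authors do not prescribe a specific tool):

    # Minimal sketch of checking an STL file for the defects named above,
    # using the trimesh library (one of several mesh libraries with such checks).
    import trimesh

    mesh = trimesh.load("part.stl")
    print("watertight:", mesh.is_watertight)                  # holes, open edges
    print("consistent winding:", mesh.is_winding_consistent)  # mis-aligned facets
    print("faces:", len(mesh.faces))                          # proxy for file size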
Besides a Steiner-patch-based file format [193] intended to replace the STL file format, the ASTM Committee F42 has published an ISO standard [10] for the AMF (Additive Manufacturing File Format) with the same intention. Both file formats are designed to increase the accuracy of the models described and therefore increase the quality of the resulting printed objects. STL seems to remain the prevalent file format for AM, with 25700 results on Google Scholar compared to 8230 results for AMF. Further investigation of the file format support of different hard- and software vendors is warranted but out of the scope of this work.
The review by Dimitrov et al. [61] presents further information on the
technology that AM is based on with an overview of applications for it.
In the review by Esmaeilian et al. [71] the authors present the relationship of AM and manufacturing in general, as well as its benefits. With the emergence of Internet or cloud based CAD modelling software, the creation of models for AM becomes easier, as direct integration of 3D printing providers is possible.
Furthermore, the collaborative aspect of 3D modelling is enhanced, as studied by Jou and Wang [122]. This study used a group of college students as a test group and investigated the adoption of an online CAD modelling software (Autodesk AutoCAD, http://autodesk.com/products/autocad) in the curriculum.
The authors Andreadis et al. [18] present a case study on the adoption of
an unnamed cloud based CAD system in comparison to traditional software,
as well as an exhaustive list of benefits of cloud based software.
Wu et al. [277] present an economic analysis of cloud based services for
design and manufacturing. This work also explores a number of cloud based
services along with their pricing.
Communities are of great importance to enterprises as shown in West and
Kuk [269]. One form of community is a repository for 3D printable digital
models that collects and curates models supplied by users for collaboration,
exchange, co-creation and sale. In this work the authors conduct a study to
research the profit of catering for such a community/repository (Thingiverse)
by a former open-source company (Makerbot).
Wittbrodt et al. [270] performed experiments to determine the ROI (Return on Investment) of 3D printers for common households and their feasibility in end-user scenarios. From their experiment they concluded that an average household can achieve between 40 and 200 percent ROI on average usage of such machines.
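The ROI figure follows the usual definition; a back-of-the-envelope sketch with illustrative numbers (not the study's data):

    # Back-of-the-envelope ROI calculation; all figures are illustrative and
    # not taken from Wittbrodt et al.
    printer_cost = 575.0             # one-off investment in $US (illustrative)
    retail_value_of_prints = 1500.0  # purchase price of the items if bought
    material_cost = 80.0             # filament consumed for those items

    roi = (retail_value_of_prints - material_cost - printer_cost) / printer_cost
    print(f"ROI: {roi:.0%}")         # 147 % here, inside the reported 40-200 %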
6.3. Security
Security for 3D printing, AM or CM can be discussed from at least three perspectives. The first perspective is the legal security of data and models processed within such a scenario. Discussions can range from whether it is legal to manufacture an existing object (replication), which might be protected by intellectual property laws, to questions regarding product liability in the case of company-supplied model data. The second perspective is closely related to intellectual property (IP), as it is the technological discussion about the safeguarding of digital model files and data. The third perspective concerns the data and process security itself in scenarios with malicious third parties (e.g., hackers, cyber-criminals). This third perspective is not limited to AM but shares many problems with CC and computing in general.
Dolinsky [65] analyses copyright and its application to 3D printing for the jurisdiction of the USA. Because legal systems differ from one another, such an analysis cannot be exhaustive.
Grimmelmann [91] further exemplifies the legal status of 3D printing and model creation in the USA with fictitious characters from literature and theatre. He states that the creation of an object, regardless of the source of the model for such a creation, infringes copyright if the object that is replicated is protected by copyright.
In [260] the author discusses the current situation of 3D printing in regard to gun laws. This discussion was started by the media in 2013, when models for a functional plastic gun were distributed and the gun manufactured. The author states that current gun control laws are adequate to control 3D printed weapons and that this is currently not a big issue.
On a broader scope the authors McNulty et al. [167] research the implications of AM for the national security of the USA where the authors
present the benefits of bio or tissue 3D printing for the treatment of battlefield wounds as well as the implications of AM technologies for criminal
misconduct.
For the analysis of data security the authors Wu et al. [276] present the
importance of such technologies within a CM environment. They propose
the development of trust models for cyber-physical systems respectively the
actors within such systems.
The authors of Yampolskiy et al. [284] provide a full risk analysis of a
scenario for outsourcing AM under consideration of IP. The risk assessment
does not include malicious behaviour other than IP infringement.
To secure printed objects against counterfeiting, the authors of [75] study and recommend the use of chemical components for authentication. Possible attacks on the 3D printing process by third parties are researched in [294], where one scenario concerns the wilful introduction of material differences into an object in order to weaken the object under load. Even if the printing process itself is secured, the question remains whether a printed object is the original, a genuine replicate or a faked replicate. For the identification of genuine objects, the authors of [14] research the applicability of physical signatures to 3D printed objects.
In [110] the authors present a watermarking technique for 3D printed
objects that is resilient against repeated digital scanning of the manufactured
object.
For a generalised discussion of the security of cloud services and cloud computing we refer to [230], where the authors present issues ranging from data integrity to confidentiality. The concepts and terminology of CC security are also discussed in [306], of which the concepts of confidentiality, trust and privacy are most relevant to scenarios of cloud based AM, where users have physical objects created from digital models by third parties.
Sturm et al. [229] present attack scenarios and mitigation strategies for attacks on AM systems. The authors see rising CPS implementations in AM as potential intrusion vectors for attacks. They discuss various attacks for each of the manufacturing process phases. Furthermore, the authors identify the STL file format as a potential risk for tampering and attacking. Among the recommendations for mitigation are file hashing and improved process monitoring.
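The file-hashing mitigation can be sketched in a few lines: a digest recorded when the model file is released is verified again just before printing, so tampering in transit or storage becomes detectable. A minimal example:

    # Minimal sketch of the file-hashing mitigation: a SHA-256 digest recorded
    # at design release is re-verified before printing.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream the file in chunks so large STL files fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    released_digest = sha256_of("bracket.stl")  # stored with the design release
    # ... the file travels through the (cloud) process chain ...
    assert sha256_of("bracket.stl") == released_digest, "STL file was modified"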
Bridges et al. [39] briefly explore possible attacks on the cyber-physical
systems that are used for AM. Among the attack scenarios the authors identify theft and tampering.
6.4. 3D Printing
Following the distinction between AM and 3D printing that some authors draw in the definition of 3D printing (See Sect. 2.1.2), namely high-quality professional or industrial usage versus lower-quality end-user or semi-professional usage, 3D printing could not be part of CM. As we relax the definitions of AM and 3D printing and use the terms as synonyms, we survey technological developments within this chapter. Technological progress and development are essential to the widespread use and application of 3D printing or AM in the CM paradigm.
In a short article, Hansen et al. [100] propose a measurement method for the correction or calibration of FDM printers. For this purpose the authors develop a measurement plate that is printed with specified parameters. In their experiment the authors recorded roundness errors of up to 100 µm. The calibration could not be applied due to the printer control software being closed-source.
Anitha et al. [19] analyse the process variables layer thickness, bead width and deposition speed for their influence on the quality of objects manufactured using FDM. The authors find that the layer thickness contributes approximately 50 % to the surface roughness of the manufactured objects.
Balogun et al. [22] describe an experiment on the energy consumption and carbon footprint of models printed using FDM technology. They define three specimens of 9000 mm3 and 18000 mm3 volume which are printed on a Stratasys Dimension SST FDM machine. Their experiment also captures the energy consumption of the post-processing with an ultrawave precision cleaning machine. The energy consumed for the print is approximately 1 kWh. Over 60 % of the energy is consumed in non-productive states, e.g., pre-heating. This energy consumption profile warrants high utilisation of 3D printers when aiming for a low ecological impact and penalises frequent and long idle times of the 3D printer.
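The penalty of idle time follows directly from this profile. Under the simplifying assumption that the non-productive share is paid once per print session while the productive share scales with the number of parts, the energy per part drops as parts are batched; the figures below follow the quoted values:

    # Illustration of the utilisation argument, under the simplifying
    # assumption that the non-productive share (pre-heating etc.) is paid once
    # per session while the productive share scales with the number of parts.
    # Figures follow the quoted values: ~1 kWh per print, >60 % non-productive.
    ENERGY_PER_PRINT_KWH = 1.0
    NONPRODUCTIVE_SHARE = 0.6

    def energy_per_part(parts_per_session):
        overhead = ENERGY_PER_PRINT_KWH * NONPRODUCTIVE_SHARE          # per session
        productive = ENERGY_PER_PRINT_KWH * (1 - NONPRODUCTIVE_SHARE)  # per part
        return (overhead + productive * parts_per_session) / parts_per_session

    for n in (1, 4, 8):
        print(f"{n} part(s) per session: {energy_per_part(n):.2f} kWh/part")
    # 1 -> 1.00, 4 -> 0.55, 8 -> ~0.48 kWh/part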
Brajlih et al. [37] propose a method for comparing the speed and accuracy of 3D printers. As a basis the authors introduce properties and capabilities of 3D printers. A test object designed by the authors is used in an experiment to evaluate the average manufacturing speed of an Objet EDEN330 PolyJet and a 3D Systems SLA3500 SLA machine. Furthermore, the experiment includes an EOS EOSINT P385 SLS and a Stratasys Prodigy Plus FDM machine. The experiment concludes that the SLS machine is capable of the highest manufacturing speed (approx. 140 cm3/h). In the experiment the angular and dimensional deviations are significant (up to 2.5° for a 90° nominal, and 0.8 mm for a 10 mm nominal).
Roberson et al. [208] develop a ranking model for the selection of 3D
printers based on the accuracy, surface roughness and printing time. This
decision making model is intended to enable consumers and buyers of such
hardware to select the most appropriate device.
Utela et al. [252] provide a review of the literature related to the development of new materials for powder bed based 3D printing systems. They decompose the development into five steps, for which they provide information on the relevant literature.
Brooks et al. [40] perform a review of the history and business implications of 3D printing. They argue that the most promising approach for companies to benefit from 3D printing technology is to invest in and adapt current business models to support supplementary printing for the users. They also present the importance of the DMCA (Digital Millennium Copyright Act) under the aspect of 3D printing for current and upcoming businesses and services in the USA.
Bogue [30] aims to provide an introduction to 3D printing with his review. The historical development of the various printing technologies is presented, and applications with examples are explored.
Petrick and Simpson [195] compare traditional manufacturing, which they classify as “economy of scale”, with AM, which they classify as “economy of one”. They base their hypotheses about the future on the traditional design-build-deliver model and current patterns in supply chains, from which they draw logical conclusions for future developments. These hypotheses are sparsely supported by literature. They predict that in the future the boundaries within the design-build-deliver paradigm will be less clear and that design and production will be closely coupled with experiments. One obvious prediction is that supply chains will get shorter and production will become more localised, both geographically and in regard to time planning.
Matias and Rao [164] conduct an exploratory study on the business and consumer markets of 3D printing. The study includes a survey of 66 consumers within the area of 3D printing, conducted in 2014. One of their findings is that 45 % of the participants are willing to spend only US$ 299 on this technology. They also found that a large number of consumers are not proficient with the technology and the required software. This finding was backed by five interviews conducted with business persons from five different companies. Their interviewees also expressed concerns that there will not be a mass market for 3D printing within the next five to ten years.
6.4.1. Accuracy/Precision
The accuracy, precision and geometrical fidelity of 3D printed objects have been researched in many works for over 20 years [116, 73] due to the necessity to produce objects that match their digital models closely. This topic is of general relevance for AM as only precise objects are usable for the various applications. Increased precision and accuracy enable AM and CM to be a valid manufacturing technology.
Dimitrov et al. [61] conducted a study on the accuracy of the 3DP (3D-Printing) process with a benchmark model. Among the influencing factors for accuracy are the selected axis and the material involved.
Turner and Gold [247] provide a review on FDM with a discussion on the
available process parameters and the resulting accuracy and resolution.
Boschetto and Bottini [32] develop a geometrical model for the prediction of the accuracy in the FDM process based on process parameters. In a case study their model predicts the dimensions of 92 % of their specimens within 0.1 mm.
Armillotta [20] discusses the surface quality of FDM printed objects. The author utilises a non-contacting scanner with a resolution of 0.03 mm for the assessment of the surface quality. Furthermore, the work delivers a set of guidelines for the FDM process with respect to the achievable surface quality.
Equbal et al. [69] present Fuzzy-classifier and neural-net implementations for the prediction of the accuracy within the FDM process under varying process parameters. They achieve a mean absolute relative error of 5.5 % for the predictor based on Fuzzy logic.
Sahu et al. [213] also predict the precision of FDM manufactured parts using a Fuzzy prediction, but with different input parameters (signal-to-noise ratio of the width, length and height).
Katatny et al. [126] present a study on the dimensional accuracy of FDM manufactured objects for the use as medical models. The authors captured the geometrical data with a 3D laser scanner at a resolution of 0.2 mm in the vertical direction. In this work a standard deviation of 0.177 mm is calculated for a model of a mandible acquired from Computed Tomography (CT) data.
To counter expected deviations of the object from the model, Tong et al. [243] propose the adaptation of slice files. For this purpose the authors present a mathematical error model for the FDM process and compare the adaptation of slice files to the adaptation of STL (STereoLithography) files. Due to machine restrictions the corrections in either the slice file or the STL file are comparable, i.e., the control accuracy of the AM fabricator is not sufficient to distinguish between the two correction methods.
Boschetto and Bottini [33] discuss the implications of AM methods on the
process of design. For this discussion they utilise digitally acquired images
to compare to model files.
Garg et al. [87] present a study on the comparison of surface roughness of
chemically treated and untreated specimens manufactured using FDM. They
conclude that for minimal dimensional deviation from the model the objects
should be manufactured either parallel or perpendicular to the main axis of
the part and the AM fabricator axis.
6.5. Simulation
Simulation in the area of AM is of great importance even though the process of object manufacturing itself is relatively cheap and fast when compared to other means of production. Even so, 3D printed objects can take many hours to manufacture, during which the AM resource is occupied. Furthermore, with specialised and high-value printing materials the cost of misprinted parts can be prohibitive.
In [104] the authors describe a voxel-based simulation of 3D printing for the estimation of the precision of AM objects.
Pal et al. [189] propose a finite-element based simulation for the heat transfer of powder based AM methods. With this simulation the authors claim that the general quality of the printed objects can be enhanced and post-processing/quality-control can be reduced.
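As a greatly simplified illustration of the physics that such simulations resolve, the sketch below steps a one-dimensional heat diffusion equation with an explicit finite-difference scheme; Pal et al.'s finite-element model is far more elaborate, and every parameter here is an assumption chosen for illustration.

```python
# Much-simplified illustration of transient heat conduction, the physics that
# FEM simulations of powder-bed AM resolve in far greater fidelity.
# 1D explicit finite-difference scheme; all parameters are illustrative.
import numpy as np

alpha = 1e-5         # thermal diffusivity (m^2/s), assumed
dx, dt = 1e-3, 0.01  # grid spacing (m) and time step (s)
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit"

T = np.full(100, 300.0)  # powder bed at ambient temperature (K)
T[50] = 1700.0           # laser-heated spot, illustrative melt temperature

for _ in range(500):     # march forward in time (5 s in total)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"peak temperature after 5 s: {T.max():.1f} K")
```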
The work of Zhou et al. [301] proposes a numerical simulation method for the packing of powder in AM. This research is conducted to provide a better understanding of the powder behaviour in methods like SLS or SLM.
Alimardani [15] proposes another numerical simulation method for the prediction of heat distribution and stress in printed objects for Laser Solid Freeform Fabrication (LSFF), a powder based process similar to LENS. The numerically computed predictions are compared with experimental specimens; in one case the maximum time-dependent stress could be reduced by eight percent through improvements derived from the simulation.
Ding et al. [64] discuss a FEM based simulation model for wire and arc based AM. Their simulation of the thermo-mechanical properties during this process is performed in the ABAQUS software (http://www.3ds.com/products-services/simulia/products/abaqus).
Chan [49] presents graphical simulation models for the use in manufacturing scenarios not limited to AM. Such models must be adapted to contain virtual production entities like the ones provided by CM.
Mourtzis et al. [178] present a review on the aspects of simulation within the domain of product development (PD). For this work they give an introduction to concepts and technologies supporting and enabling PD. The concepts are explained in sections of two paragraphs each and supported by existing literature. They link the concepts to simulation research within these areas, where applicable. The concepts and technologies introduced include Computer Aided Design (CAD), Computer Aided Manufacturing (CAM), Computer Aided Process Planning (CAPP), Augmented and Virtual Reality (AR and VR), Life Cycle Assessment (LCA), Product Data Management (PDM) and Knowledge Management (KM), Enterprise Resource Planning (ERP), Layout Planning, Process, Supply Chain and Material Flow Simulation, Supervisory Control and Data Acquisition (SCADA) and Manufacturing Systems and Networks Planning and Control. Their review covers research in the area of simulation spanning over 100 years and 15954 scientific articles from 1960 to 2014. The articles are aligned with the product and production lifecycle. Furthermore, this review includes a comparison of commercially available simulation software. The authors conclude their work with a detailed analysis of research opportunities aligned to the concepts introduced within this work.
6.5.1. Topological Optimisation
One of the key benefits of AM is the ability to create almost arbitrarily complex objects, which makes topological optimisation ideal for AM. In CM scenarios such optimisations can be embedded in the digital process chain and be offered and applied as services. In this section we present a number of research works on topological optimisation for AM.
In [48] the authors discuss the application of topology optimisation for
AM in a general manner giving an overview of the current state.
Galantucci et al. [80] present experimental results from compression tests of topology-optimised and FDM printed objects. In their experiment the reduction of filling material reduced the material consumption but also the maximum stress the object can sustain.
Almeida and Bártolo [58] propose a topology optimisation algorithm for the use in scaffold construction for bio-printing. This optimisation strategy aims to create scaffolds that are more bio-compatible due to their porosity but yield high structural strength. The authors conducted an experiment for the comparison of the topologically optimised structures and un-optimised structures with reduced infill. Their approach yields structurally stable scaffolds up to 60 % porosity.
The work by Leary et al. [146] focuses on topological optimisation to create objects without the requirement for additional support structures. For this approach the authors perform a general topological optimisation first, then orient the part optimally to reduce the required support material and apply a strategy to add part structures that remove the need for support material. In an experiment the authors create an object that requires significantly less material (54.9 cm³ compared to 89.7 cm³) and is manufactured in 2.6 hours compared to 5.7 hours for the optimised part with support structures.
Tang et al. [233] propose a design method for AM with the integration of
topological optimisation. For this work the authors analyse existing design
methods for AM.
Bracket et al. [36] provide an overview and introduction of topology optimisation for AM. The authors identify constraints and restrictions for the usage of topology optimisation, e.g., insufficient mesh resolution or insufficient manufacturing capabilities. Among the identified opportunities of topology optimisation in AM is the ability to create lattice structures and design for multi-material objects.
Gardan [85] proposes a topology optimisation method for the use in RP and AM. The work is focused on the inner part of the object, which in non-optimised objects is filled with a pre-defined infill pattern of a user-selectable density. The authors implement the method in a plugin for Rhinoceros 3D (https://rhino3d.com) and provide an experiment with SLA and SLS. The article does not provide detailed information on the implementation of the software and the algorithm.
Gardan and Schneider [86] slightly expand on the prior work by Gardan. In this work the authors apply the optimisation method to prosthetic hip implants and additionally experiment on an FDM 3D printer.
Hiller and Lipson [105] propose a genetic algorithm (GA) for multi-material topological optimisation. With this approach the authors demonstrate the optimisation of varying degrees of stiffness within a part. They utilise a high-level description of the part's properties to design the desired object automatically in its optimised composition.
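The following minimal sketch illustrates the genetic-algorithm idea under a toy fitness function that matches a target stiffness profile; Hiller and Lipson evaluate real physical behaviour rather than a simple profile match, and the material values below are invented.

```python
# Minimal GA sketch: each gene assigns one of several materials (stiffness
# values) to a voxel; fitness rewards matching a target stiffness profile.
import random

MATERIALS = [0.1, 1.0, 10.0]            # assumed stiffness per material
TARGET = [0.1] * 10 + [10.0] * 10       # desired stiffness along the part

def fitness(genome):
    return -sum((MATERIALS[g] - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [random.randrange(len(MATERIALS)) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(len(MATERIALS)) for _ in TARGET] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(30)]

print("best stiffness profile:", [MATERIALS[g] for g in max(pop, key=fitness)])
```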
6.5.2. Optimisation
For AM and CM as processes a number of optimisations are possible and necessary. These can relate to process parameters for quicker manufacturing, higher quality manufacturing or increased utilisation of hard- and software resources. Optimisation can furthermore concern the embeddability and integration of AM and CM within existing production processes.
Optimisation for AM is a topic that has been researched for a long time, as illustrated by the following two articles. Cheng et al. [52] propose an optimisation method for the criteria of manufacturing time, accuracy and stability. This optimisation is based on the calculation of the optimal part orientation. As a basis for the optimisation, the authors analyse the sources of errors in AM processes, e.g., tessellation errors, distortion and shrinkage, overcuring or stair-stepping effects. For their model they weight input parameters according to the errors they inflict and perform a multi-objective optimisation.
Lin et al. [153] propose a mathematical model to reduce the process error in AM. In the first part the authors analyse the different process errors for various types of AM, e.g., under- and overfill or stair-stepping effects. Their model optimises the part orientation for minimal errors. Although this work is more than 15 years old, optimal orientation and placement of objects is still not widely available in 3D printing control software.
More recently, Rayegani and Onwubolu [203] present two methods for process parameter prediction and optimisation for the FDM process. The authors provide an experiment to evaluate the tensile strength of specimens for the optimised process parameters. For the optimised parameters the authors provide the solution of 0° part orientation, 50° raster angle, 0.2034 mm raster width and -0.0025 mm air gap. These parameters yield a mean tensile strength of approximately 36.86 MPa.
Paul and Anand [192] develop a system for the calculation of the energy consumption in SLS processes and for process parameter optimisation to minimise the laser energy. For the model the authors neglect the energy consumption of all elements except the laser system (e.g., the heating bed).
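A back-of-the-envelope sketch of such laser-only energy accounting is shown below: exposure time follows from the sintered area, hatch spacing and scan speed, and energy from laser power. All figures are assumed, and the published model is considerably more detailed.

```python
# First-order laser energy estimate for an SLS build: E = P * A / (h * v),
# i.e. laser power times sintered area divided by hatch spacing and scan
# speed, summed over all layers. All numbers are illustrative assumptions.
laser_power = 50.0               # W
scan_speed = 1.2                 # m/s
scan_spacing = 0.15e-3           # m (hatch distance)
sintered_area_per_layer = 4e-4   # m^2
layers = 500

scan_length = sintered_area_per_layer / scan_spacing  # m of scan path per layer
time_per_layer = scan_length / scan_speed             # s
energy = laser_power * time_per_layer * layers        # J
print(f"laser energy: {energy / 3600:.2f} Wh")        # ~15.4 Wh here
```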
Paul and Anand [194] propose an optimisation for AM for the reduction of support material. Their approach is to optimise the part orientation for a minimum of support material. Furthermore, they provide optimisation for minimum cylindricity and flatness errors.
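The basic geometric intuition can be sketched with a simple proxy, assuming that any facet whose normal points downwards beyond a threshold angle needs support proportional to its area and height; the published formulation is more involved, and the mesh below is a random stand-in rather than a real STL model.

```python
# Orientation selection for minimal support material (toy proxy).
import numpy as np

def support_volume(tris, threshold_deg=45.0):
    """Proxy support volume for a triangle soup of shape (n, 3, 3), z up."""
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    normals = np.cross(b - a, c - a)                  # area-weighted normals
    areas = np.linalg.norm(normals, axis=1) / 2.0
    nz = normals[:, 2] / (2.0 * areas + 1e-12)        # z of the unit normal
    overhang = nz < -np.cos(np.radians(threshold_deg))
    heights = tris[:, :, 2].mean(axis=1) - tris[:, :, 2].min()
    return float((areas * heights)[overhang].sum())

def rotate_x(tris, deg):
    r = np.radians(deg)
    m = np.array([[1, 0, 0],
                  [0, np.cos(r), -np.sin(r)],
                  [0, np.sin(r), np.cos(r)]])
    return tris @ m.T

mesh = np.random.rand(200, 3, 3)      # stand-in for a tessellated model
volume, angle = min((support_volume(rotate_x(mesh, d)), d)
                    for d in range(0, 180, 15))
print(f"least support at {angle} deg rotation: {volume:.3f}")
```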
Jin et al. [17] propose a method to optimise the path generation of extrusion based AM, e.g., FDM. The optimisation goals for this approach are machine utilisation and object precision. The authors perform a study with their approach, but comparison data on the quality and time consumption of other algorithms is missing.
Khajavi et al. [131] present an optimisation for the spare-parts industry of fighter jets through the utilisation of AM. This work concerns a systemic optimisation with AM being one strategy to achieve the optimal solution. The authors analyse the current situation in this specific application and propose an optimised solution based on distributed manufacturing or AM.
Ponche et al. [199] present an optimisation of design for AM based on a decomposition into functional shapes and volumes. The authors argue that objects designed for traditional manufacturing are not necessarily suitable for AM but require partial or complete re-design to adjust for the specifics of a certain AM process, e.g., the inability to produce sharp corners and edges. In their work one optimisation goal is the reduction of material and therefore cost.
Hsu and Lai [111] present the results of an experiment and the resulting process parameter optimisation for the 3DP manufacturing method. The authors improved the dimensional accuracy of each axis to under 0.1 mm. Furthermore, the authors improved the building time by approximately 10 % and the flexural stress by approximately 30 %. The authors experimented with four process parameters: layer height, object location, binder saturation and shrinkage.
6.6. Stakeholder
In CM systems or cloud based printing systems a number of stakeholders are naturally involved. In this section we present the current state of research on the identification of stakeholders in this domain as well as research regarding their agendas.
In Rayna and Striukova [204] the authors identify the requirements of end-users for online 3D printing services. They base their study on concepts relevant to these stakeholders, such as user participation and co-creation.
Park et al. [190] provide a statistical analysis of patents and patent filings
in the domain of 3D printing and bioprinting that can serve as a basis for
decision making in the investment and R&D in these fields. The stakeholders
in this case are the investors and managers.
Hämäläinen and Ojala [99] applied the Stakeholder Theory by Freeman to the domain of AM and performed a study on eight companies with semi-structured interviews. They identified five companies that use AM for prototyping (RP). They further analysed the benefits of AM for the interviewed companies.
Buehler et al. [42] created a software tool called GripFab for the use in special needs education. For this software and the use of 3D printing in special needs education they performed a stakeholder analysis. The analysis is based on observations and found beneficial uses for this technology, e.g., in the form of assistive devices.
Munguía et al. [181] analyse the influence missing standards have on the stakeholders of RM and develop a set of best practices for RM scenarios. They identified the four main contributors to RM cost as operation times, machine costs, labour costs and material costs.
Lehmhus et al. [149] analyse the usage of data acquisition technologies and sensors within AM from the perspectives of the identified stakeholders: designers, producers, users and regulatory or public bodies. They argue that the producers in such scenarios might become obsolete if AM is utilised in a complete CM sense.
Fox [78] introduces the concept of virtual-social-physical (VSP) convergence for the application to product development. Within this concept he argues that AM can play an integral part in enhancing product development. He identifies requirements from stakeholders in the product development process and addresses them in this work.
Flatscher and Riel [76] propose a method to integrate stakeholders into an integrated design process for the scenarios of next-generation manufacturing (Industry 4.0). In their study, a key challenge was the integration of all stakeholders in a team structure, which they solved by integrating influential persons from different departments in the joint operation.
Maynard [166] discusses the risks and challenges that come with the paradigm of Industry 4.0, a concept that incorporates other concepts like AM and CM. The author briefly identifies the possible stakeholders of this technology as consumers, CEOs and educators.
In the report by Thomas [241] the author performs a very detailed and
thorough analysis of stakeholders for AM technology in the USA. The list
of identified stakeholders is 40 entries long and contains very specific entries
(e.g., Air Transport Providers or Natural Gas Suppliers) as well as generalised
stakeholder groups (e.g., Professional Societies or Consumers).
6.7. Service Orientation
Service orientation denotes a paradigm from the domain of programming (Service Oriented Architecture, SOA). Within this paradigm the functionality or capability of software is regarded and handled as a consumable service. The services offer encapsulated capabilities that can be consumed by users or other services in an easy-to-use, well-defined and transparent manner. The services are interchangeable within business processes as their inner workings are abstracted and the services act like black boxes with well-defined interfaces. With CM or AM in general, this service orientation can be expanded to the physical resources of manufacturing. Similar to service orientation in the programming domain, it must be bound by the same stringency of well-defined interfaces and transparent or abstract execution of functionality or capability.
In the review on service-oriented system engineering (SOSE) by Gu and Lago [94], the authors propose the hypothesis that the challenges in this domain can be classified by topic and by type. SOSE is the engineering discipline to develop service oriented systems. The authors identified 413 SOSE challenges from the reviewed set of 51 sources. They furthermore identified quality, service and data to be the three top challenges in this domain.
Wang et al. [263] provide a review on the CC paradigm. For this work the authors classify CC into three layers (Hardware-as-a-Service (HaaS), Software-as-a-Service (SaaS) and Data-as-a-Service (DaaS)). In this early work on CC the authors establish the importance of SOA for the CC paradigm.
Tsai et al. [246] present an initial survey on CC architectures and propose a service architecture for CC (Service-Oriented Cloud Computing Architecture, SOCCA) for the interoperability between various cloud systems. Among the identified problems with CC architectures are tight coupling, lack of SLA (Service Level Agreement) support, lack of multi-tenancy support and lack of flexibility in user interfaces. The authors utilise SOA for the implementation of their prototype, which is deployed on Google App Engine (https://appengine.google.com).
Alam et al. [13] present a review on impact analysis in the domains of
Business Process Management (BPM) and SOA. In their work the authors
discuss the relationship and convergence of the two methods. From a set of
60 reviewed studies the authors conclude that BPM and SOA are becoming
dominant technologies.
Zhang et al. [296] propose a management architecture for resource service composition (RSC) in the domain of CM. For this work the authors analyse and define the flexibility of resources and their composition. Their implementation supports resource selection based on QoS and flexibility.
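As an illustration of QoS-driven selection, the sketch below greedily picks one service per manufacturing subtask by a weighted utility. Actual RSC solves a composition-level optimisation with constraints across subtasks, and every figure here is invented.

```python
# Toy QoS-based selection of one service per subtask (greedy, per-task).
candidates = {
    "milling": [{"id": "m1", "cost": 40, "time": 5, "flexibility": 0.9},
                {"id": "m2", "cost": 25, "time": 8, "flexibility": 0.4}],
    "coating": [{"id": "c1", "cost": 10, "time": 2, "flexibility": 0.7},
                {"id": "c2", "cost": 15, "time": 1, "flexibility": 0.8}],
}
w = {"cost": 0.4, "time": 0.4, "flexibility": 0.2}   # assumed preferences

def utility(s):
    # lower cost/time is better, higher flexibility is better
    return (-w["cost"] * s["cost"] - w["time"] * s["time"]
            + w["flexibility"] * 100 * s["flexibility"])

composition = {task: max(options, key=utility)["id"]
               for task, options in candidates.items()}
print(composition)   # {'milling': 'm1', 'coating': 'c2'} with these figures
```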
Shang et al. [219] propose a social manufacturing system for the use
in the apparel industry. Their implementation connects existing logistics
and manufacturing systems with a strong focus on the consumer. For this
architecture the authors rely heavily on SOA technology and describe the
implementation of various layers required.
Tao et al. [235] analyse the development of Advanced Manufacturing Systems (AMS) and their trend towards socialisation. In this work the authors establish the relationship between service orientation and manufacturing. The authors identify three phases for the implementation of service-oriented manufacturing (SOM), namely “Perception and Internet connection of Manufacturing resource and capability and gathering, Aggregation, management, and optimal allocation of Manufacturing resource and capability in the form of Manufacturing service and Use of Manufacturing service”.
Thiesse et al. [240] analyse the economic implications of AM on MIS (Management Information Systems) and the service orientation of these systems. In this work the authors examine the economic, ecological and technological potential of AM and its services. The authors conclude that the services for product development will be relocated upstream.
For the service composition of cloud applications standards and definitions are essential. In the work by Binz et al. [29] the authors introduce the TOSCA (Topology and Orchestration Specification for Cloud Applications) standard. Although this standard is focused on the deployment and management of computing and other non-physical resources, its architectural decisions and structures are of relevance to CM systems. Support for encapsulation and modelling as described in this work is sparse in other CM systems.
As an extension to the previous work, Soldani et al. [226] propose and implement a marketplace (TOSCAMART) for TOSCA for the distribution of cloud applications. Such a marketplace would be highly beneficial to CM systems as it can foster innovation, collaboration, re-use and competition.
6.7.1. Manufacturing-as-a-Service
As described in Sect. 6.7, service orientation regards capabilities as services that can be consumed. One such class of services is the manufacturing of products. As a consumer of such a service, one is not necessarily interested in the process of manufacturing (e.g., what type of machine is used) or the location of manufacturing as long as a pre-agreed list of qualities of the end-product is complied with. As an example, a user may want two parts made from a certain metal, within a certain tolerance, with certain properties regarding stress-resistance and within a defined time frame. The input of this service would then be the CAD model and the properties that must be fulfilled. The parts could then either be milled or 3D printed in any part of the world and then shipped to the user. The user must pay for the service rendered, i.e., the manufacturing of objects, but is not involved with the manufacturing itself as this is performed by a service provider. In the seventh EU Framework Programme, the project ManuCloud (http://www.manucloud-project.eu) was funded to consolidate research on this topic. In this section we present current research articles on the subject in order to illustrate the concept of MaaS, its role for CM and applications.
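A minimal sketch of what such a MaaS request could look like as a data contract is given below; all field names are hypothetical and not taken from any cited system.

```python
# Hypothetical MaaS data contract: the consumer specifies the model and the
# required qualities, not the manufacturing process itself.
from dataclasses import dataclass, field

@dataclass
class ManufacturingRequest:
    cad_model_uri: str                  # e.g. a STEP or STL file location
    material: str                       # e.g. "AlSi10Mg"
    quantity: int
    tolerance_mm: float                 # dimensional tolerance to comply with
    min_tensile_strength_mpa: float     # stress-resistance requirement
    deadline_days: int
    notes: dict = field(default_factory=dict)

request = ManufacturingRequest(
    cad_model_uri="https://example.org/models/bracket.step",
    material="AlSi10Mg", quantity=2, tolerance_mm=0.1,
    min_tensile_strength_mpa=200.0, deadline_days=14,
)
# A provider decides internally whether to mill or print, where, and when;
# the consumer only sees acceptance or rejection and the delivered parts.
print(request)
```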
Tao et al. [236] propose an algorithm for a more efficient service composition optimal-selection (SCOS) in cloud manufacturing systems. Their proposed method is named FC-PACO-RM (full connection based parallel adaptive chaos optimisation with reflex migration) and it optimises the selection of manufacturing resources for the quality properties time, cost, energy, reliability, maintainability, trust and function similarity. In an experiment they show that their implementation performs faster than a genetic algorithm (GA) and an adaptive chaos optimisation (ACO) algorithm for the objectives of time, energy and cost, but not for the objective of reliability.
Veiga et al. [258] propose a design and implementation for the flexible reconfiguration of industrial robot cells with SMEs in mind. These robot cells are mostly reconfigurable by design, but with high barriers for SMEs due to the required experts. The system proposed enables an intuitive interface for reconfiguration of the cells in order to enhance the flexibility of manufacturing. The implementation draws heavily on SOA concepts and supports the flexible orchestration of robotic cells as services.
Zhang et al. [297] provide an introduction to the paradigm of CM.
Within this work the authors discuss issues arising from the implementation
and the architecture itself. The authors present the decomposition of this
paradigm into its service components, that are “design as a service (DaaS),
manufacturing as a service (MFGaaS), experimentation as a service (EaaS),
simulation as a service (SIMaaS), management as a service (MANaaS), maintain as a service (MAaaS), integration as a service (INTaaS)”. The authors
implement such a CM system as a prototype for evaluation and discussion.
Moghaddam et al. [175] present the development of MaaS and its relationship to the concepts of CM, Cloud Based Design and Manufacture (CBDM)
and others. The authors propose SoftDiss [221] as an implementation platform for CM systems.
Van Moergestel et al. [255] analyse the requirements for and propose an architecture of a manufacturing system that enables low-volume and low-cost manufacturing. The authors identify customer requirements for low-volume and flexible production of products as a driver for the development of the CM concept and other MaaS implementations. The architecture relies on cheap reconfigurable production machines (equiplets). For the implementation of the system the authors utilise open source software like Tomcat and have a strong focus on the end-user integration via Web technology.
Sanderson et al. [216] present a case study on distributed manufacturing systems, which the authors call collective adaptive systems (CAS). The example in their case study is a manufacturing plant by Siemens in the UK which is part of the “Digital Factory” division. The authors present the division, structure and features of the company, which are compared to CAS features. Among the identified challenges the authors list physical layout, resource flow through supply chains and hierarchical distributed decision making.
For the integration of MaaS (which is called Fabrication-as-a-Service,
FaaS, in this work) into CM, Ren et al. [206] analyse the service provider cooperative relationship (CSPR). Such a cooperation of MaaS/FaaS providers
within a CM system is essential for the task completion rate and the service
utilisation as demonstrated by the authors in an experiment.
Guo [96] proposes a system design method for the implementation of
CM systems. Within this work the MaaS layer of the CM system is further
divided into “product design, process design, purchasing, material preparing,
part processing and assembly and marketing process”. In the generalised
five-layer architecture for the implementation of CM systems, the MaaS is
located in the fifth layer.
Yu and Xu [293] propose a cloud-based product configuration system (PCS) for the implementation within CM systems. Such systems interface with the customer, enabling the customer to configure or create products for ordering. Within CM such a system can be employed to prepare objects that can be manufactured directly utilising MaaS capabilities. In the implementation within an enterprise the authors utilise the STEP file format for information exchange.
6.7.2. Design-as-a-Service
Besides physical and computational resources that are exposed and utilised as services, the concept of CM allows for and requires traditional services to be integrated. Such a service is for example the design of an object, which is traditionally either acquired as a service from a third-party company or rendered in-house. As with the physical Manufacturing-as-a-Service, the service rendered here must be well-defined and abstract. The service in this section is that of design for AM or traditional manufacturing.
This paradigm can lead to increased involvement of the user as described by Wu et al. [279]. The authors provide an introduction to social product development, a new product development paradigm rooted in Web 2.0 and social media technology. They conducted a study with the students of a graduate level course on product development. They structure the process in four phases, beginning with the acquisition of user requirements through social media. With the social product development process (PDP) the product development involves the users or customers more directly and more frequently than with traditional PDP. This increased degree of integration requires support through technology, which is provided by social media and Web 2.0 technology for communication and management.
Unfortunately the scientific literature on DaaS is sparse, and DaaS is mostly mentioned only as part of architectural or systematic descriptions or implementations of CM systems. In Tao et al. [237] the authors place DaaS among other capability services that are part of the CM layer; the other capability services are Manufacturing, Experimentation, Simulation, Management, Maintenance and Integration-as-a-Service. In Adamson et al. [12] the same classification is used (but without Integration and Maintenance, and with a combination of the Simulation and Experimentation services). The authors also briefly review literature in the domain of collaborative design for CM systems. In Yu et al. [292] DaaS is also identified as a capability of CM systems and part of its layered structure.
Johanson et al. [121] discuss the requirements and implications of distributed collaborative engineering or design services. According to the authors the service orientation of design and its collaborative aspects will render enterprises more competitive due to reduced costs for software, decreased design times and innovative design. Furthermore, such services promote tighter integration and cooperation with customers.
Laili et al. [141] propose an algorithm for a more efficient scheduling of collaborative design tasks within CM systems. As collaborative design task scheduling is NP-hard, the authors propose a heuristic energy adaptive immune genetic algorithm (EAIGA). In an experiment the authors show that their implementation is more stable and yields higher quality results than a genetic algorithm (GA) and an immune GA (IGA).
Duan et al. [66] explore the servitisation of capabilities and technologies in CC scenarios. The authors explore and discuss a variety of service offerings as described in literature and provide a large collection of “aaS” literature. Among the identified as-a-Service offerings is Design-as-a-Service, which is referenced to Tsai et al. [246]. Contrary to this indication, the work of Tsai et al. does not cover DaaS; it covers the architectural design of CC systems and service provisioning as well as an analysis of potential drawbacks and limitations of CC systems.
6.7.3. Testing-as-a-Service
Similar to Design-as-a-Service (see Sect. 6.7.2), this exposition of a capability as a service can play an important role within CM systems. In general, QA for AM is not sufficiently researched and conducted, as the traceability of information from the original CAD model to the manufactured part is insufficient due to the number of conversion steps, file formats and systems involved.
Albeit mentioned in a number of publications on the design and implementation of CM architectures, designs or systems, e.g., Ren et al. [205] or Gao et al. [82], research on Testing-as-a-Service (TaaS) in CM systems is sparse and the authors are not aware of any dedicated works on this topic. In contrast, TaaS as a concept for software testing in the cloud is researched by a number of authors; see e.g., Gao et al. [81] or Mishra and Tripathi [173] for an introductory overview, Yan et al. [285] for the special application of load testing, Tsai et al. [245] for service design or Llamas et al. [157] for a software implementation.
Extrapolating from the benefits that TaaS brings to software quality, e.g., transparency, scalability, concurrency, cost-reduction and certification by third parties, research on this area in CM scenarios is warranted. In contrast to software QA, physical testing has an extended set of requirements and limitations, e.g., the object under test must be physically available, standardised test protocols are more likely to exist, and scaling is impossible without hardware investment or beyond the minimum time required for testing. With this section we want to motivate further discussion and research in this area.
6.8. Rapid Technology
In accordance with the definition “General term to describe all process chains that manufacture parts using additive fabrication processes.” by [4], we use rapid technology as an umbrella term and examine the relevance of this technology for CM in this chapter. This technology is integral to product development, especially through its sub-technology RP (see Sect. 6.8.3). This and the following sections extend the definitions provided in Sect. 2 with examples and research findings.
For a brief introduction we refer to the following articles. Li et al. [151] propose a method for rapid new-product ramp-up within large multi-national companies relying on dispersed supply chain networks and out-sourcing partners. In this work the authors consider large-volume product development. For the conceptual framework the authors identified critical members and defined a ramp-up process as a flowchart.
Mavri [165] describes 3D printing itself as a rapid technology and analyses the impact of this technology on the production chain. The author performs an analysis of the influences on the phases of product design, production planning and product manufacturing, as well as the topics of material utilisation, inventory and retail market. The findings of the author include that AM enables companies to act with more agility, cater for smaller markets, limit potential inventory issues and sustain smaller and slimmer supply chain networks.
Muita et al. [180] discuss the evolution of rapid production technologies and its implications for businesses. The authors investigate business models and processes, transitions as well as materials and logistics. A decomposition of rapid technology into the phases or layers Rapid Prototyping, 3D Printing, Rapid Tooling, Rapid Product Development and Rapid Manufacturing is provided and discussed. The authors recommend the adoption of AM by all companies.
In the book by Bertsche and Bullinger [27] the authors present the work of a research project on RP and the various problems addressed within the topic of Rapid Product Development (RPD). One aspect of this research is the development and integration of systems to efficiently store and retrieve information required throughout the process. The information required in the process includes knowledge on construction, quality, manufacturing, cost and time.
In Lachmayer et al. [140] the authors present current topics of AM and its application in the industry. In the chapter by Zghair [295] the concept of rapid repair is discussed. This concept is intended to prolong the life-time of high-investment parts as well as to allow the modification of parts in academic settings. The authors perform an experiment for this approach with three objects and conclude that there is no visible difference between the original and the added object geometry in the case of previously SLM manufactured objects, whereas differences are visually detectable for repaired cast objects.
6.8.1. Rapid Tooling
The use case of RT is that AM supports the (mass-)production of other parts or objects by provisioning the required tools or moulds (see the definitions of RT in Sect. 2.1.5). RT as a concept has been researched and applied for at least 26 years [212]. Conceptually little has changed since the early publications, but the number of available AM technologies, materials and supporting concepts like CC has increased. Since its start the idea of RT has been to create tools or tooling directly from CAD models, thus saving time and material. In this section we present articles from this research to give an overview to the reader and present its relevance and relationship to the concept of CM.
In the review by Boparai et al. [31] the authors thoroughly analyse the
development of RT using FDM technology. FDM manufactured objects commonly require post-processing for higher-quality surfaces which is discussed
by the authors in a separate section of their work. The authors present a
variety of applications of RT with FDM which include casting and injection
moulds and scaffolds for tissue engineering. Furthermore, the authors discuss
material selection and manufacturing, as well as testing and inspection.
The review by Levy et al. [150] on RT and RM for layer oriented AM from 2003 already states that AM is not just for RP anymore. According to the definition of RT by the authors, tools are supposed to last from a few thousand to millions of applications. The authors focus mainly on plastic injection moulds for tooling and survey a large number of different technologies and materials.
Similarly, the definition of RT by King and Tansey [135] is focused on injection moulds, a definition that has since been expanded to other tooling areas. The authors present research on material selection for SLS manufactured moulds and analyse RapidSteel and copper polyamide for the use in time-critical RT scenarios.
Lušić et al. [161] present a study on the applicability of FDM manufactured moulds for carbon fibre reinforced plastic (CFRP) objects. The
authors achieved up to 84 % material saving or 47 % time saving for optimised structures compared to a solid mould at a comparable stiffness. The
authors experimented with varying shell thicknesses and infill patterns.
Nagel et al. [184] present the industrial application of RT in a company.
The authors present at a high level the benefits and thoughts leading to
the creation of flexible grippers for industrial robots utilising 3D printing.
The authors also present a browser based design tool for the creation of
the individual grippers with which the company is able to reduce the time
required for product design by 97 %.
Chua et al. [56] present a thorough introduction to RT as well as a classification into soft and hard tooling, with a further divide into direct soft tooling, indirect soft tooling, direct hard tooling and indirect hard tooling. Among the benefits of RT the authors see time and cost savings as well as profit enhancements. The authors discuss each of the classifications with examples and the relevant literature. Examples given from industry support the benefits proposed by the authors.
Rajaguru et al. [201] propose a method for the creation of RT moulds for the production of low-volume plastic parts. With this indirect tooling method, the authors are able to produce low-cost moulds in less than 48 hours. The authors present an experiment where the mould is used for 600 repetitions. The method uses electroless plating of a nickel-phosphorus alloy for the micro-pattern moulds.
In the introduction to RT, Equbal et al. [70] start with the basics of
various AM technologies. The authors provide a classification schema for
RT and discuss each class with the appropriate examples. According to the
authors, RT is a key technology for globally active companies in respect to
flexibility and competitiveness.
In the review by Malik et al. [162] the authors investigate the use of 3D printing in the field of surgery. The authors discuss the fabrication of medical models for education and operation planning as well as drill-guides and templates as RT technology. In contrast, the direct fabrication of implants or prosthetics as described by the authors is regarded as RM.
6.8.2. Rapid Manufacturing
In contrast to RP, the goal of RM is the creation of parts and objects directly usable as end-products or parts of end-products (see Sect. 2.1.4). To achieve this usability the requirements on the quality of the parts are higher; therefore the quality control and quality assurance are stricter.
Hopkinson and Dickens [107] provide findings on cost analysis for the manufacturing of parts with traditional manufacturing and AM. The authors identify the current and potential future benefits of RM as the ability to manufacture with less lead time, increased geometric freedom, manufacture in distributed environments and potentially the use of graded materials for production. The authors compared the costs incurred for the creation of two objects with injection moulding (IM), SLA, FDM and SLS. For IM the tool costs are high (27360 and 32100 Euro) whereas the unit costs are low (0.23 and 0.21 Euro). In their calculation the cost equilibrium of IM and SLS is at about 14000 units for one of the objects and at around 600 units for the other. This finding validates RM for certain low-volume production scenarios.
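The break-even arithmetic behind such comparisons can be made explicit. Using the reported IM tool and unit costs together with an assumed AM unit cost close to the one later reported by Ruffo et al., the equilibrium lands near the quoted 14000 units; this sketch is an illustration, not the authors' own calculation.

```python
# Break-even between IM (fixed tool cost + low unit cost) and AM (unit cost
# only). The AM unit cost is an assumption, cf. Ruffo et al.'s 2.20 Euro.
tool_cost = 27360.0   # Euro, IM tooling (reported)
im_unit = 0.23        # Euro per part with IM (reported)
am_unit = 2.20        # Euro per part with AM (assumed)

break_even = tool_cost / (am_unit - im_unit)
print(f"IM becomes cheaper beyond ~{break_even:.0f} units")   # ~13888
```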
Ruffo et al. [211] also present a cost estimation analysis as an extension and update to the previous work. The authors calculated with a much lower utilisation of the machines (57 % compared to 90 %), higher labour costs as well as production and administrative overhead costs. Furthermore, the authors took other indirect costs like floor/building costs and software costs into consideration. The authors calculated a higher unit cost for the object (3.25 Euro compared to 2.20 Euro) and a non-linear costing function due to partial low utilisation of the printing resources, which stems from incomplete build rows when unit counts are not equal to or a multiple of the maximum unit packing. The comparison of these two works illustrates the necessity to use the most up-to-date and complete models for cost estimation.
Ituarte et al. [117] propose a methodology to characterise and assess AM technologies, here SLS, SLA and Polyjet. The methodology proposed is an experimental design for process parameter selection for object fabrication. The authors find that surface quality is the hardest quality to achieve with AM and might not suffice for RM usage with strict requirements. Such an analysis is of value in order to assess the feasibility of certain manufacturing methods in RM scenarios.
In the review by Karunakaran et al. [125], the authors survey and classify
technologies capable of manufacturing metallic objects for RM. The technologies surveyed are CNC-machining, laminated manufacturing, powder bed
processes, deposition processes, hybrid processes and rapid casting. The authors develop different classification schemes for RM processes based on various criteria, e.g., material or application. Furthermore, the authors compile
a list of RM process capabilities to be used for the selection of appropriate
RM processes.
Simhambhatla and Karunakaran [222] survey build strategies for metallic
objects for RM. The authors focus on the issues of overhangs and undercut
structures in metallic AM. The work concludes with a comparative study on
the fabrication of a part using CNC-machining and a hybrid layered manufacturing (HLM) method. With the hybrid approach the authors build the part
in 177 minutes compared to 331 minutes at a cost of 13.83 Euro compared
to 24.32 Euro.
Hasan et al. [101] present an analysis of the implications of RM on the supply chain from a business perspective. For this study the authors interviewed 10 business representatives and 6 RP or RM service providers. The authors propose both reverse-auctioning and e-cataloguing as modes for business transactions.
With rapidly changing production, the need arises for rapid fixture design and fabrication for the RM provider itself. This issue is discussed by Nelaturi et al. [186], who propose a mechanism to synthesise fixture designs. The method analyses the models to be manufactured and supported by fixtures, given as STL files, for possible fixture application areas. The algorithm furthermore calculates possible fixture positions and the inflicted forces. The authors select existing fixtures from in-house or online catalogues of fixtures for application.
Gupta et al. [98] propose an adaptive method to slice model files of heterogeneous objects for the use with RM. For this the authors decompose the slicing process into three phases (slicing set-up, slice generation and data retrieval). The work also surveys other existing slicing techniques for various optimisation goals, e.g., quality, computing resources or part manufacturing time. For the extraction of geometric and material information the authors utilise a relational database for efficient storage. The authors find that with the appropriate slicing technique the fabrication time can be reduced by up to 37 %.
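A toy sketch of the adaptive idea follows: thinner layers where the surface changes quickly, thicker ones where it is flat, bounded by the printer's layer height limits. Gupta et al. additionally handle material information for heterogeneous objects; the profile function and all constants below are invented.

```python
# Toy adaptive slicing driven by local surface slope.
def profile(z):             # stand-in for the part's local radius at height z
    return 10.0 + 5.0 * (z / 50.0) ** 3

def adaptive_slices(height, t_min=0.05, t_max=0.3, k=0.5):
    z, layers = 0.0, []
    while z < height:
        slope = abs(profile(z + 1e-3) - profile(z)) / 1e-3    # local slope
        t = max(t_min, min(t_max, k / (1.0 + 10.0 * slope)))  # thin if steep
        layers.append((z, t))
        z += t
    return layers

layers = adaptive_slices(50.0)
print(f"{len(layers)} layers, thickness range "
      f"{min(t for _, t in layers):.3f}-{max(t for _, t in layers):.3f} mm")
```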
Hernández et al. [103] present the KTRM (Knowledge Transfer of Rapid Manufacturing) initiative, which was created to improve training and knowledge transfer regarding RM in the European Union. For the requirement analysis of this project, the authors conducted a study with 136 participants, of which the majority (70 %) are SMEs. Such training initiatives are beneficial to the growth in application and increased process maturity, as the authors find that the knowledge of RM is low while the perceived benefits of this technology include higher quality parts, lower time to market and increased competitiveness.
In their chapter, Paul et al. [191] provide a thorough overview of laser-based RM. The authors discuss classifications of such systems as well as the composition of these systems in general. Process parameters are presented and located in literature. Furthermore, the authors discuss materials available for this class of RM and applications. This work is a comprehensive overview, covering all relevant aspects of the technology, including monitoring and process control.
6.8.3. Rapid Prototyping
Following the definitions in Sect. 2.1.3, Rapid Prototyping (RP) is the concept of speeding up the creation of prototypes in product development. These prototypes can be functional, visual, geared towards user-experience or of any other sort. RP was one of the first uses for AM and oftentimes the terms AM and RP are used synonymously. The quick or rapid creation of prototypes does not necessarily mean fast in absolute terms, but rather a more rapid way to create prototypes than traditional methods using skilled or expert labour (e.g., wooden models created by carpenters) or subtractive or formative manufacturing methods, which oftentimes require specialised tooling or moulds.
Pham and Gault [196] provide an early overview of commonly used methods to rapidly create prototypes with information on the achievable accuracy, speed and incurred costs of each technology. A number of technologies, e.g., Beam Interference Solidification (BIS), have since fallen into disuse. The accuracy for Fused Deposition Modeling (FDM), stated as 127 µm, has not been improved significantly since then.
Masood [163] reviews the technology of FDM and examines its usability for RP. Among the advantages of this technology are “Simplicity, Safety, and Ease of Use” as well as a “Variety of Engineering Polymers”, which makes it suitable for the creation of functional prototypes. A number of limitations, like “Surface Finish and Accuracy”, can diminish the suitability of this technology for certain aspects of prototyping.
In their keynote paper Kruth et al. [137] survey the technologies used for RP and provide examples for the technologies. Furthermore, they briefly explain the developmental bridge from RP to RT and RM.
The authors Yan et al. [286] present the historical development of RP from its roots in the analogue and manual creation of prototypes to digital fabrication methods. They also present a list of current limitations for digital RP. Among the five limitations they place the high manufacturing cost of the manufacturing resources and the insufficient forming precision. The cost argument is often put forward in its reversed form, as RP is proposed as a low-cost production method when compared to traditional prototyping.
Azari and Nikzad [21] present a review on RP in dentistry, distinguishing the meaning of models in dentistry from its general meaning. They discuss the problems in data capture for RP due to the nature of living patients. They further discuss the use of AM for drill-guides, which is an application of RT.
Liu et al. [155] present a study on profit mechanisms associated with cloud 3D printing platforms, predominantly in China. They argue that such services can enable small and medium-sized enterprises (SMEs) to produce prototypes more rapidly and cheaply, thus increasing their competitiveness.
Roller et al. [209] introduce the concept of Active Semantic Networks
(ASN) as shared database systems for the storage of information for the
product development process.
6.9. Design
In traditional (subtractive or formative) manufacturing the design is driven by the capabilities provided by the manufacturing equipment. This is described as Design for Manufacturing or Design for Manufacturability (DFM), which means that the parts designed must be easy and cheap to manufacture. Especially in large volume production the parts must be machinable in a simple way, as tooling, tool changes and complex operations are expensive. Furthermore, with traditional manufacturing certain features like hollowed or meshed structures are not possible to produce or incur large costs. With AM the design of objects or parts is not strictly limited by these considerations, as flexibility comes for free and a number of operations (e.g., intersecting parts, hollowed structures) become possible. The designer can choose more freely from available designs and is less restricted. The design itself can concentrate on the functionality of the part rather than its manufacturability.
In the review by Rosen [210] the author proposes principles that are relevant for design for AM (DFAM) as they exist in literature. The suitability of AM is declared for parts of high complexity and low production volume, high production volume and a high degree of customisation, parts with complex or custom geometries, or parts with specialised properties or characteristics. Within this review the author proposes a prototypical design process for AM that is derived from a European Union standardisation project by the name of SASAM (http://www.sasam.eu).
Kerbrat et al. [129] propose a multi-material hybrid design method for the combination of traditional manufacturing and AM. For this method the object is decomposed based on the machining difficulty. The authors implemented their method in a CAD system (Dassault Systems SolidWorks, http://www.solidworks.com) for evaluation. This hybrid design method is not limited to a specific AM or manufacturing technology. The authors omit information on how the decomposed parts are fused together and how to compensate for inaccuracies within the manufacturing process.
Throughout the design process and later for manufacturing it is necessary
to convey and transport information on design decisions and other specifications. Brecher et al. [38] provide an analysis of the STEP [8] and STEP-NC [2]
file formats. This analysis is used to propose extensions necessary for the use
in an interconnected CAD-CAM-NC (CAx) platform.
Buckner and Love [41] provide a brief presentation of their work on automatic object creation using Multi-Objective Constrained Evolutionary Optimisation (MOCEO) on a high-performance computing (HPC) system. With their software, utilising Matlab (http://mathworks.com/products/matlab) and driving SolidWorks, objects are created automatically following a given set of restrictions and rules.
Cai et al. [43] propose a design method for the personalisation of products
in AM. Their work defines basic concepts ranging from Design Intent to Consistency Models. The design method is intended to convey design intentions
from users or designers in a collaborative design or CAD environment.
Vayre et al. [257] propose a design method for AM with a focus on the constraints and capabilities. This design method consists of four steps (Initial Shape Design, Parameter Definition, Parametric Optimisation, Shape Validation). For the initial shape design, the authors propose the use of topological optimisation. The authors illustrate this process with an example of the re-design of an aluminium alloy square bracket.
Diegel et al. [60] discuss the value of AM for sustainable product design. The authors explore the benefits (e.g., mass customisation, freedom of design) and design considerations or restrictions for AM (e.g., surface finish, strength and flexibility). The authors argue that AM offers great potential for the creation of long-lasting, high-quality objects and parts that can save resources throughout their lifetime by optimised design.
Ding et al. [63] analyse existing slicing strategies for the creation of objects with AM. Besides the analysis, the authors propose a strategy to create multi-directional slicing paths to be used with AM machines that support multi-axis deposition or fabrication. By the authors' analysis, the existing software for slice creation is insufficient and leaves uncovered areas (holes or gaps). This work is not on the design for AM but rather on the design of the resulting machine paths for the manufacture with AM fabricators.
In Wu et al. [274] the authors discuss the concept of Cloud-Based Design
and Manufacturing (CBDM, see also [278]) in which the whole design and
manufacturing process chain is executed in a cloud environment. CBDM is
an extension to the CM concept as it expands the process chain horizontally into the collaborative and cooperative domain of the cloud. CBDM
utilises Web 2.0 technology, service-oriented architecture (SOA) concepts,
semantic web technologies and has an inherent connection to social networking applications. In this article concepts like collaboration, cooperation and
crowdsourcing for design for AM are discussed and exemplified.
6.10. Additive Manufacturing
We see AM as an integral component in CM and Industry 4.0 settings due to the benefits it provides. Among those benefits are flexibility, resource efficiency and the freedom in and of design. In this section we survey scientific literature regarding AM, especially works that provide an overview (e.g., reviews, surveys), present important aspects or exhibit common characteristics of this domain.
Le Bourhis et al. [35] develop the concept of design for sustainable AM
(DFSAM) to minimise the yet unknown environmental impact of AM. According to the authors about 41 % of the total energy consumption globally is
attributed to industry. The authors further provide a division for the French
industry in 2010 where about 12 % percent are attributed to manufacturing.
The authors claim that AM can reduce the energy required as it limits waste
material. The authors experiment on the energy and resource consumption
of the Additive Laser Manufacturing (ALM) process and present a method
to calculate electricity, powder and gas consumption for an object based on
the respective GCode.
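A hedged sketch of such a G-code-driven estimate follows: move durations are derived from distances and feed rates, then multiplied by assumed average power and gas-flow figures. All constants and the simple G0/G1 parsing are placeholders for illustration, not values taken from [35].

    import math, re

    AVG_POWER_W = 800.0       # assumed average machine power draw
    GAS_FLOW_L_PER_S = 0.05   # assumed shield gas flow
    POWDER_G_PER_MM = 0.01    # assumed powder usage per mm of deposition

    def estimate_consumption(gcode_lines):
        x = y = 0.0
        feed = 1200.0  # mm/min, assumed default feed rate
        time_s = powder_g = 0.0
        for line in gcode_lines:
            if not line.startswith(("G0", "G1")):
                continue
            words = dict(re.findall(r"([XYEF])([-0-9.]+)", line))
            nx, ny = float(words.get("X", x)), float(words.get("Y", y))
            feed = float(words.get("F", feed))
            dist = math.hypot(nx - x, ny - y)
            time_s += dist / (feed / 60.0) if feed else 0.0
            if "E" in words:  # a depositing move consumes material
                powder_g += dist * POWDER_G_PER_MM
            x, y = nx, ny
        return {"electricity_Wh": AVG_POWER_W * time_s / 3600.0,
                "powder_g": powder_g,
                "gas_L": GAS_FLOW_L_PER_S * time_s}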
In their work Kim et al. [134] present a federated information systems
architecture for AM. This architecture is intended to facilitate an end-to-end
digital implementation of the AM design-to-product process, i.e., a “digital thread”. The authors analyse, for each phase (Part geometry/design, Raw/tessellated
data, Tessellated 3D model, Build file, Machine data, Fabricated Part, Finished Part, Validated Part) the available and used data formats and supporting software. The focus of their conceptual architecture is interoperability
through an open architecture.
Balogun et al. [23] perform an experiment on the electricity consumption
of the FDM process. The authors divide the manufacturing process into
its components (Start-up, warm-up, ready-state, build-state). In an experiment they analyse three different FDM machines (Stratasys Dimension SST
FDM, Dentford Inspire D290 and PP3DP) for their power consumption profile during manufacturing. The machines differ significantly in the energy
demand with the Dentford machine requiring 1418 Wh and the PP3DP only
requiring 66 Wh. Furthermore, the authors compare the energy consumption and manufacturing duration of a FDM machine to a milling machine.
In the experiment the AM process consumed 685 Wh and the Mikron HSM
400 milling machine only 114 Wh. The AM cycle time was 3012 s (excluding
3600 s for support structure removal in an ultrasonic cleaning tank) and the
milling machine cycle time was 137 s.
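A quick back-of-the-envelope check puts these reported figures in relation; the ratios below are derived from the numbers above and are not stated in [23]:

    fdm_wh, mill_wh = 685, 114
    fdm_s, cleanup_s, mill_s = 3012, 3600, 137
    print(fdm_wh / mill_wh)              # ~6.0x energy demand for FDM
    print(fdm_s / mill_s)                # ~22.0x cycle time for FDM
    print((fdm_s + cleanup_s) / mill_s)  # ~48.3x including support removal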
Weller et al. [268] discuss the implications of AM on the company and
industry level. This analysis discusses economic characteristics, i.e., opportunities such as acceleration, possible price premiums and lower market entry
barriers, as well as limitations such as missing economies of scale and missing
quality standards. The authors model various scenarios and propositions for
the market under the influence of AM. They predict first adoption within
markets with an overall lower economy of scale.
Efthymiou et al. [67] present a systematic survey on the complexity in
manufacturing systems. Albeit not directly referencing AM, this study is
relevant for understanding the implications of AM on manufacturing systems.
Turner et al. [248] survey melt extrusion AM processes. This work is part
of a two-part series (see also [247]), with this part focusing on the design
and process modelling. The authors provide a short market analysis in their
introduction and discuss literature relating to various processing steps and
problems of melt extrusion processes, e.g., die swelling, giving a thorough
overview of the literature on this topic.
Mitev [174] approaches the topic of AM in a very uncommon manner,
namely with a philosophical approach. This is the sole publication with this
approach found by the authors. The author discusses AM with regard to the
question of what matter is and how 3D printing affects our concept of matter
and material.
In contrast to the previous author, Bayley et al. [25] present a model
for the understanding of error generation in FDM. This work consists of
two parts with experiments. The first part analyses actual errors in FDM
manufactured parts (e.g., Roundness error, geometrical deviation). In the
second part the authors construct a framework for error characterisation and
quantification.
In the review by Kai et al. [123] the authors briefly evaluate the relationship
between manufacturing systems and AM. The authors also provide an overview
of one possible decomposition of AM and its academic relevance through
numbers of published works from 1975 to 2015.
6.11. Cloud Computing
Cloud Computing (CC) is the concept of virtualized computing resources
available to consumers and professionals as consumable services without
physical restraints. Computing, storage and other related tasks are performed in a ubiquitous cloud which delivers all these capabilities through
easy-to-use front-ends or APIs. These concepts enable enterprises
to acquire computing capacities as required while often paying only for the
resources consumed (pay-as-you-go), in contrast to paying for equipment
and resources in stock (e.g., leasing, renting or acquisition). Concepts developed for this computing paradigm are of importance for the CM domain, as
many problems stated or solved are interchangeable within domains. What
CC is to computing resources (e.g., storage, computing, analysis, databases)
CM is to physical manufacturing resources (e.g., tools, 3D printers, drills).
In the definition of Cloud Computing, Mell and Grance [171] from NIST
develop and present the characteristics and service models for CC.
Truong and Dustdar [244] present a service for estimating and monitoring
costs for cloud resources from the domain of scientific computing. This model
is also suitable, with adaptations, for the monitoring of costs in other
cloud-based computing scenarios such as CM. The authors present an experiment
where they analyse the cost of scientific workflows with on-premise execution
and with deployment to the Amazon Web Services (AWS) cloud system.
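The pay-as-you-go principle transfers directly to metered manufacturing resources. The following minimal sketch of such a cost model uses invented resource names and unit prices; it is an illustration of the idea, not the estimation service of [244].

    PRICES = {"cpu_core_h": 0.05, "storage_gb_month": 0.02, "print_h": 12.0}

    def job_cost(usage):
        # usage: mapping of metered resource -> consumed quantity
        return sum(PRICES[res] * qty for res, qty in usage.items())

    # A manufacturing task metered like a cloud workload:
    print(job_cost({"cpu_core_h": 8, "storage_gb_month": 1.5, "print_h": 3}))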
Stanik et al. [228] propose Hardware-as-a-Service as a cloud layer for
the remote integration of distinct hardware resources into the cloud. The
authors argue from the viewpoint of embedded systems development and testing,
but the concepts described are universally applicable for any hardware that
is intended to be exposed as a service.
Mehrsai et al. [168] propose a cloud based framework for the integration
of supply networks (SN) for manufacturing. The authors discuss the basics
of supply networks and CC in order to develop a concept to integrate CC
for the improvement of SNs. This modular approach is demonstrated in an
experimental simulation.
Oliveira et al. [187] research the factors influencing the adoption of CC in
general and for the manufacturing sector. The authors test their hypotheses
on a survey of 369 Portuguese companies, 37.94 % of which are from the
manufacturing domain. The authors find that security concerns
do not inhibit the adoption of CC in the manufacturing domain sub-sample
of their survey group.
Ramisetty et al. [202] propose an ontology based architecture for the integration of CC or cloud platforms in advanced manufacturing. The authors
claim that adoption of CC in manufacturing is less than in comparable industries due to the lack of social or collaborative engagement. The authors
implement three services (Ontology, Resource Brokering and Accounting)
for an evaluation in the WheelSim App. The authors propose an “App Marketplace” for manufacturing services to further the adoption of CC in the
manufacturing industry.
Um et al. [250] analyse the benefit of CC on the supply chain interactions
in the aerospace industry. The authors propose a manufacturing network for
contracting and subcontracting based on CC. In this architecture the basis
for information exchange is the STEP-NC file format.
Valilai and Houshmand [254] propose a service oriented distributed manufacturing system on the basis of CC. For their work the authors analyse
the requirements and basics of globally distributed manufacturing systems.
The proposed system (XMLAYMOD) utilises the STEP file format for the
information exchange and enables a collaborative and distributed product
development process as well as process planning and execution.
6.11.1. Internet of Things
Internet of Things (IoT) is a term used to describe a network consisting
of physical objects connected to the Internet. These physical objects can be
tools, parts, machines, actuators or sensors. The concept of IoT is integral to
the CM paradigm as it is necessary to control the AM resources transparently
and monitor the resources for efficient utilisation planning and scheduling.
In IoT scenarios the use of open standards helps to avoid vendor lock-in.
Tao et al. [234] present a very high-level description of the possible integration of IoT in CM scenarios. In four proposed layers (IoT, Service,
Application, Bottom Support) of CM systems, they declare IoT and the
corresponding layer as core enabling technology.
Tao et al. [238] propose a five layer (Resource layer, perception layer,
network layer, service layer and application layer) architecture for a CM
system. The authors propose the utilisation of IoT technology as a method
to interface the manufacturing resources into the architecture. This work is
very similar to [234].
Qu et al. [200] present a case study on the integration of CM and IoT
technology into an enterprise to synchronise the production logistics (PL)
processes. For the implementation they propose a five tier (Physical resource
layer, Smart object layer, Cloud manufacturing resource management layer,
Cloud manufacturing core service layer, Cloud manufacturing application
layer) decomposition. The system uses AUTOM [299] as a backbone for the
IoT integration.
Baumann et al. [24] propose the development of flexible sensor boards
for use in the monitoring of AM processes. The authors analyse available
existing sensors and provide an architectural overview of a system for
the incorporation of these sensor boards into a manufacturing control and
monitoring system. With these sensors AM resources can be bridged to
control systems or services thus enabling IoT functionality for the resources.
Caputo et al. [46] perform a review on IoT from a managerial perspective
with the application of AM. The authors develop a four staged (Radical,
Modular, Architectural and Incremental) conceptual framework to classify
innovation and research on the topic. Within this framework’s description
AM resources will become digitally represented by sensors and IoT technology.
In the review by Kang et al. [124], the authors focus on global research on
smart manufacturing and its enabling technologies and underlying concepts.
In the section on IoT the authors link this concept to other technologies like
SoA, CM and smart sensors.
Vukovic [259] discusses the importance of APIs for IoT deployment
and usage. The author covers common architectural patterns in IoT
scenarios and the arising requirement for APIs to further this technology.
In the work by Kubler et al. [138], the authors discuss the evolution of
manufacturing paradigms and the origins of CM along with its relationship to
IoT technology. The authors conclude that CM is not widely adopted because
of security concerns but that research in AM and IoT will drive CM forward.
6.11.2. Cyber-physical Systems
Cyber-physical Systems (CPS) are one of the key enabling technologies
for the Internet of Things. CPS is a term coined by the NSF (National Science Foundation) to describe systems, e.g., machines, parts or tools that have
capabilities to sense and interact with their physical environment while being
connected to the Internet in order to relay state and environment information
to an Internet-based control system. The first occurrence in scientific literature can be found in Lee [147]. In the domain of 3D Printing, AM and CM
such systems are required to enable seamless integration of systems. With
CPS, it is possible for an AM hardware resource to signal its current status
or utilisation to a centralised or cloud-based control infrastructure in order
to participate in scheduling endeavours and become part of a controllable
system.
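As a minimal illustration of this signalling, the sketch below has a machine push its state to a cloud control endpoint. The URL, payload fields and transport (a plain HTTP POST) are assumptions for illustration only, not a standardised CPS interface.

    import json, time, urllib.request

    CONTROL_URL = "http://cloud-control.example/api/status"  # hypothetical

    def report_status(machine_id, state, utilisation):
        # Serialise the machine state and POST it to the control system,
        # which can feed it into scheduling and monitoring.
        payload = json.dumps({"id": machine_id, "state": state,
                              "utilisation": utilisation,
                              "ts": time.time()}).encode()
        req = urllib.request.Request(
            CONTROL_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # report_status("fdm-07", "build-state", 0.82)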
In the work by Chen and Tsai [51] the authors propose the concept of
ubiquitous manufacturing. This concept is similar to CM but with a stronger
focus on mobility of users and manufacturing resources. For this concept
ubiquitous sensor and IoT technology are key enabling technologies.
Lee et al. [148] propose a five layer architecture for CM based on IoT
technology. The layers in this architecture are from bottom to top: Smart
Connection Layer, Data-to-Information Conversion Layer, Cyber Layer, Cognition Layer and Configuration Layer. The goal of this work is to provide a
guideline for the implementation of such CPS-backed manufacturing systems
and to improve product quality as well as system reliability.
Sha et al. [218] provide a general introduction into CPS and the related
research challenges. The authors identify QoS composition, knowledge engineering, robustness and real-time system abstraction as the four main research questions for this technology.
In the survey by Khaitan and McCalley [130] the authors study the design, development and application of CPS. In the list of identified application
scenarios (Surveillance, Networking Systems, Electric Power Grid and Energy
Systems, Data Centres, Social Networks and Gaming, Power and Thermal
Management, Smart Homes, Medical and Health Care, Vehicular and Transportation Systems) manufacturing systems are missing. Despite this lack of
mention, applications in, e.g., transportation and power management are relevant for CM systems.
Kuehnle [139] proposes a theory of decomposition for manufacturing resources in distributed manufacturing (DM) systems. DM is similar in concept to CM in relation to the decomposition of manufacturing resources into
virtualised services. According to the author IoT technology and CPS are
among the enabling technologies for this smart manufacturing concept.
Yao and Lin [289] expand the concept of CPS into socio-cyber-physical systems (SCPS) with this study on smart manufacturing. This extension
to social aspects of manufacturing (e.g., collaboration and cooperation) is
expected to be an integral part of the next industrial revolution (Industry
4.0).
Turner et al. [249] discuss the risk of attacks and their implications on
CPS in the domain of manufacturing. The authors present a number of
attack vectors, e.g., attacks on the QA process, and counter- or mitigation
strategies. According to the authors CPS provide an additional attack surface
for malicious third parties.
6.12. Scheduling
In CM as in CC a number of resources must be provisioned on demand.
In contrast to CC the requirements for the execution resource can be more
complex than just a computing resource. With CM manufacturing resources
must first be described in an abstract way (See Sect. 6.13) to be schedulable.
In this section we present current research on the challenges that arise from
scheduling.
Cheng et al. [53] introduce the concept of CM in their work and perform
a brief review over possible criteria for scheduling in such scenarios. The
authors provide four scheduling modes based on the three identified stakeholders (Operator, Consumer and Provider) and the system as a whole. The
proposed modes consider energy consumption, cost and risk. The proposed
system-centred cooperative scheduling mode yields the highest utilisation in
their experiment.
Liu et al. [156] propose a scheduling method for CM systems for multiple
enterprise and services scenarios. The authors use the criteria time, cost and
pass-rate for the task selection. Based on these criteria constraints are constructed for the decomposition of tasks into subtasks and their distribution
onto resources. The authors take geographical distances and the corresponding
delivery times between CM locations into consideration. Their simulation concludes
that for a 50 task scenario, with 10 enterprises offering 10 services in total, the utilisation is 49.88 % compared to 10 tasks (17.07 %). The authors
provide no specific scheduling solution with their work.
Laili et al. [142] define the problem of optimal allocation of resources based
on a 3-tier model (Manufacturing Task Level, Virtual Resource Layer and
Computing Resource Layer). The authors prove that the optimum resource
allocation is NP-complete. Because of this NP-completeness, these and other
authors propose heuristics-based algorithms to provide near-optimal scheduling.
Heuristics-based scheduling algorithms provide near-optimum solutions for most
scheduling instances, without a guarantee of optimality but at greater speed
than exact computation. In this work the authors propose a heuristic algorithm
inspired by the immune system (Immune Algorithm, IA). In an experiment
they compare their algorithm against three other heuristic algorithms, and it
performs comparably.
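To illustrate what a fast heuristic buys, the sketch below applies the classic longest-processing-time rule, assigning each task to the currently least-loaded resource. This is a generic baseline under assumed inputs, not the Immune Algorithm of [142].

    def greedy_schedule(tasks, resources):
        # tasks: list of (name, duration); resources: list of resource names.
        # Greedy assignment avoids exploring the exponential search space.
        load = {r: 0.0 for r in resources}
        plan = {}
        for name, duration in sorted(tasks, key=lambda t: -t[1]):
            r = min(load, key=load.get)  # least-loaded resource so far
            plan[name] = r
            load[r] += duration
        return plan

    print(greedy_schedule([("mill", 5), ("print", 9), ("inspect", 2)],
                          ["cell-A", "cell-B"]))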
Wang [261] proposes a web-based distributed process planning (WebDPP) system that performs process planning, machining job dispatching
and job execution monitoring. The system is implemented as a prototype
and connects to legacy machine controllers. The proposed system acts directly on the manufacturing resource and interfaces with the Wise-ShopFloor
framework [262]. The author does not provide information on scheduling algorithms or methods used.
Huang et al. [114] propose a scheduling algorithm based on Ant Colony
Optimisation (ACO). In an experiment they compare the algorithm with
and without a serial schedule generation scheme (SSGS) against another
heuristic Genetic Algorithm (GA). Their algorithm for conflict resolution
performs faster and with better quality results than the GA when used with
the SSGS.
Lartigau et al. [144] present an 11-step framework for scheduling and order
decomposition within a CM system. This scheduling is deadline oriented
and implemented in a company environment for evaluation. The paper lacks
validation and conclusive results for the proposed algorithm.
In the work by Zhang et al. [300] the authors propose a queue optimisation
algorithm for the criteria lowest cost, fastest finish time, cleanest environment and highest quality. The proposed CM system relies on active and real-time environment and machine-status sensing through heterogeneous sensors.
Furthermore, they utilise semantic web (Ontology) technology for the system.
Cao et al. [45] refine an ACO algorithm for efficient scheduling within
a CM. This algorithm optimises for time, quality, service or cost (TQSC).
With the addition of a selection mechanism to ACO their ACOS algorithm
performs with better quality results and faster convergence in comparison to
Particle Swarm Optimisation (PSO), GA and Chaos Optimisation (CO).
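The scoring step that such TQSC-driven selection relies on can be sketched as a weighted utility over the four criteria. Weights and candidate data below are invented placeholders; this shows only the scoring idea, not the ACOS algorithm of [45].

    def tqsc_score(c, w=(0.4, 0.3, 0.2, 0.1)):
        # Lower time/cost is better, higher quality/service is better.
        return (w[0] / c["time"] + w[1] * c["quality"]
                + w[2] * c["service"] + w[3] / c["cost"])

    candidates = [
        {"name": "svc-A", "time": 4.0, "quality": 0.90, "service": 0.8, "cost": 20.0},
        {"name": "svc-B", "time": 6.0, "quality": 0.95, "service": 0.9, "cost": 15.0},
    ]
    print(max(candidates, key=tqsc_score)["name"])  # best-scoring service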
Jian and Wang [120] propose an adapted PSO algorithm (Improved Cooperative Particle Swarm Optimisation, ICPSO) for use in batch processing
within a CM system. Batch tasks are indivisible units of work to be executed
with manufacturing resources. The authors present an experiment for the
comparison of the proposed algorithm with a PSO and a cooperative PSO
scheduling algorithm with respect to the cost and time criteria. The algorithm
performs better than the other two algorithms.
6.13. Resource Description
For the usage of manufacturing resources within CM there must be an
abstract definition or description of the resources. Open standards are preferable where available in order to avoid vendor lock-in.
Luo et al. [160] propose a six step framework for the description of Manufacturing Capabilities (MC). The representation of this information utilises
ontology and Fuzzy technology. Within the framework the authors represent
information on the manufacturing equipment, computing resources, intellectual resources, software and other resources.
Wang et al. [264] also propose an ontology based representation for manufacturing resources. The information and ontology is derived from manufacturing task descriptions. The authors implement their algorithm in an
enterprise setting in a medium-sized Chinese company for evaluation.
As a more general approach to CC scheduling Li et al. [152] propose an
ontology based scheduling algorithm with PSO. The authors motivate their
work by an example in a logistics centre which is relevant to the domain of
CM. For this algorithm the selection is restricted based on the Quality of
Service (QoS) with time, cost, availability and reliability as criteria.
Zhu et al. [302] develop an XML-based description for manufacturing
resources oriented towards the Web Service Description Language (WSDL) for
web services. The authors separate the resource description into two parts (Cloud
End, CE and Cloud Manufacturing Platform, CMP). In their approach, they
reflect static data, e.g., physical structure or input data types, in the CE layer
whereas the CMP layer reflects the dynamic data, e.g., function parameters.
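A hedged sketch of such a two-part, WSDL-inspired description follows, with the static CE data separated from the dynamic CMP data. All element names are invented for illustration and are not the schema of [302].

    import xml.etree.ElementTree as ET

    res = ET.Element("ManufacturingResource", id="fdm-07")
    ce = ET.SubElement(res, "CloudEnd")  # static data
    ET.SubElement(ce, "Structure").text = "FDM, 200x200x180 mm build volume"
    ET.SubElement(ce, "InputType").text = "GCode"
    cmp_part = ET.SubElement(res, "CloudManufacturingPlatform")  # dynamic data
    ET.SubElement(cmp_part, "State").text = "idle"
    ET.SubElement(cmp_part, "QueueLength").text = "2"

    print(ET.tostring(res, encoding="unicode"))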
Wu et al. [282] propose an ontology based capability description for industrial robot (IR) systems. IR are regarded as manufacturing resources and
described as such. Besides manufacturing machines such IR systems enable
CM to perform as a flexible and agile manufacturing system.
6.14. Hybrid Manufacturing
Hybrid Manufacturing is a term used for the combination of AM and traditional manufacturing methods. The combination of these methods promises
to provide benefits from both, e.g., the speed and accuracy of a milling machine
combined with the low material input of AM.
Lu et al. [159] propose an architecture for a hybrid manufacturing cloud.
Their definition of hybrid refers to cloud operation modes (private, community and public cloud). Besides the architecture they present a cloud
management engine (CME) which is implemented for evaluation purposes
on Amazon Web Services (AWS).
In the work by Kenne et al. [128] a model for a hybrid manufacturing-remanufacturing system is proposed. Here the term hybrid refers to the
combination of manufacturing and remanufacturing. Remanufacturing
denotes an alternative use of products at the end of their product lifecycle
for value generation. In an experiment the authors calculate the cost for a
mixture of parameters, e.g., return rates, and conclude that the
system is applicable, with customisation, to various industries.
In the review by Chu et al. [55] the authors discuss 57 hybrid manufacturing processes. These micro- and nanoscale processes are categorised into
three different schemes (concurrent, main/assistive separate and main/main
separate). The authors survey a combination of 118 processes in this work.
The review by Zhu et al. [305] provides a classification of hybrid manufacturing processes. The authors present an extensive list of mainly two-process
combination manufacturing processes. For this work the authors explore the
existing definitions of manufacturing and hybrid manufacturing processes in
literature.
In another work by Zhu et al. [304] the authors propose a build time
estimation for the iAtractive [303] process that combines additive, subtractive and inspection processes. This process is based on FDM and the build
time prediction is based on the same parameters as normal FDM build time
prediction. The authors provide a discussion on an experiment for which
their estimate deviated from the real build time by approximately -12 % to
+12 %. The authors only provide a build time estimation method for the
additively manufactured part of the process.
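The kind of parameter-based estimate this builds on can be sketched in one line: build time is deposited path length over feed rate plus a per-layer overhead. The coefficients below are placeholders to be fitted per machine, not the model of [304].

    def build_time_s(path_len_mm, layers, feed_mm_s=50.0, layer_overhead_s=4.0):
        # Path length / feed rate gives motion time; the overhead term
        # absorbs layer changes, retractions and acceleration losses.
        return path_len_mm / feed_mm_s + layers * layer_overhead_s

    # e.g., 120 m of extrusion over 300 layers:
    print(build_time_s(120_000, 300) / 3600, "h")  # 1.0 h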
Lauwers et al. [145] propose a definition and classification of hybrid manufacturing processes with their work. They define these processes as acting
simultaneously on the same work area or processing zone. This definition
excludes processes that combine processing steps sequentially.
Elmoselhy [68] proposes a hybrid lean-agile manufacturing system (HLAMS).
The author develops the system for the requirements of the automotive industry. The definition of hybridity in this work refers to the combination of the
lean and agile schools of manufacturing thinking.
The work by Kendrick et al. [127] proposes a solution to the problems
associated with distributed manufacturing through the utilisation of hybrid
manufacturing processes. The authors propose four options for the usage
of distributed hybrid manufacturing systems (Local factories, manufacturing
shops, community areas, personal fabrication). The described usage of hybrid
MS can be further utilised in CM systems.
Yang et al. [287] propose a hybrid system for the integration of multiple
manufacturing clouds. The definition of hybridity used in this work refers to
the mixture of diverse manufacturing clouds and not to the manufacturing
process itself. The proposed architecture links the various clouds together
into a single point of interaction. The authors define adaptors and
a broker system and implement these for evaluation purposes.
In the overview by Zhang et al. [298] the authors use the term hybrid to
describe the cloud management. Their definition for the three cloud types
used in CM is private/enterprise cloud, public/industry cloud and a mixture
of both as hybrid cloud.
6.15. Research Implications
From the provided literature we have identified a number of
open research questions. The compiled listing is non-exhaustive due to the
nature of scientific research.
Bourell et al. [3] provide a report on the “Roadmap for Additive Manufacturing” workshop that took place in 2009 and resulted in proposals for
research for the coming 10 to 12 years. The recommendations are grouped
into 1. Design 2. Process Modelling and Control 3. Materials, Processes and
Machines 4. Biomedical Applications 5. Energy and Sustainability Applications 6. Education 7. Development and Community and 8. National Testbed
Center. The recommendations include the proposal to create design methods for aiding designers with AM, creation of closed-loop printing systems
and the design and implementation of open-architecture controllers for AM
fabricators.
The authors reflect on their proposed roadmap in an article [34] five
years later. In this analysis the authors state that the direct influence of
the Roadmap is hard to quantify. The authors remark that the report is
referenced about 50 times in scientific literature but only one project can be
clearly attributed to the Roadmap.
Lan [143] identifies the following four tasks for future research in his
review. 1. Combination of Web services and software agents 2. Collaborative network environment with the focus on integration and interoperability
3. Focus on Web technology integration in RM systems and 4. Collaborative
product commerce and collaborative planning and control.
In the review by Fogliatto et al. [77] on Mass Customisation (MC), the
authors identify the following research needs: 1. Research on Rapid Manufacturing (RM) to support MC 2. Research on the value of MC for consumers
as well as its environmental, economic and ethical value 3. Research on Quality
Control 4. Research on Warranty for MC objects and 5. Case Studies and
empirical validation.
Khan and Turowski [132] perform a survey on challenges in manufacturing
for the evolution to Industry 4.0. The authors identify six current and future
challenges which are the following topics 1. Data integration (IoT, Big-Data,
real-time data, data management) 2. Process flexibility (Adaption, Change
management) 3. Security (Connectivity, monitoring, compliance) 4. Process
integration within and across enterprise boundaries (Integrated processes,
logistics, optimisation) 5. Real-time information access on hand-held devices
(Web technology, ERP integration) and 6. Predictive Maintenance (Machine
data, sensors).
Among the research needs identified by Adamson et al. [11] in their review are the following 1. Capabilities, information and knowledge integration
and sharing as well as cloud architectures 2. Definitions and standards for
CM 3. Intelligent, flexible and agile, distributed monitoring and control systems 4. Business models 5. Intellectual properties and 6. Cost, security and
adoption of CM systems. Furthermore, the authors identify and predict
1. The emergence of cloud service providers 2. Real world connectivity (IoT)
3. New collaboration and cooperation scenarios (Customer-manufacturer and
manufacturing collaboration) 4. Increased competitiveness 5. Cloud closed-loop manufacturing 6. Manufacturing of feature function blocks 7. Increased
awareness and research on sustainable operations.
In the work by Oropallo and Piegl [188] the authors specifically researched
and compiled ten challenges in current AM systems that require research.
The challenges are 1. Shape optimisation (Cellular structures and topology
optimisation) 2. Design for 3D printing (Software support, design methodology) 3. Pre- and Postprocessing (File formats, model preprocessing, part
postprocessing) 4. Printing methodologies (Layered manufacturing, voxel and
digital material, non-layer oriented methods) 5. Error control (Before and
during printing) 6. Multi-material printing (Modelling and manufacturing
support) 7. Hardware and maintenance issues (Process and material based
issues) 8. Part orientation 9. Slicing (Adaptive and direct slicing) and 10. Speed.
Wu et al. [274] explicitly identify the following research needs for the
evolution of CM 1. Cloud-Based Manufacturing (Modeling and simulation
of material flow, concurrency and synchronisation for scalability) 2. CloudBased Design (Social media integration and leveraging, CAx convergence
and cloud enablement) 3. Information, Communication, and Cyber Security
(IoT, Security-as-a-Service) and 4. Business Model.
The work by Huang et al. [115] examines the state of the art of AM
and names the following research areas for future investigation: 1. Materials
2. Design (Methods and tools, complex geometries, lifecycle cost analysis)
3. Modeling, Sensing, Control, and Process Innovation (Multi-scale modelling simulation, error and failure detection, optical geometry reconstruction, faster hardware, bioprinting) 4. Characterization and Certification and
5. System Integration and Cyber Implementation (Knowledge management
integration, cloud based systems).
7. Summary
This article provides an overview of the topic of CM and 3D Printing services.
With the overview of existing definitions (see Sect. 2) and the proposed
extension of the definition we create the foundation for the following work.
The review is based on the topological map presented in Sect. 6.1. Concepts, techniques, methods and terminology are presented by exploring different authors’ work. We perform an explorative extension study to [204] due to
its relevance for this domain (see Sect. 5). In this study we cover and analyse
48 publicly available services. The extension considers APIs of such services
as a further distinction to be made. This work also gives an overview of
available journals in the domain of AM or 3D printing in general so as to support
other researchers in finding suitable audiences for their work. One journal
was established 31 years ago and provides a catalogue of over 21000 articles
with no exclusivity to AM or 3D Printing. In recent years a number of new
journals were established or are currently in the process of being established.
Their focus is solely on AM or related domains like bioprinting.
The domains of AM, CM, 3D Printing, RM and related fields are thoroughly presented in this work by means of literature analysis in a scientometric sense (see Sect. 1.1.2 and Sect. 1.2).
The results presented in this work illustrate the scientific development of
various techniques and methods from these domains in a time period ranging
from 2002 to 2016 (See Sect. 6).
Disclosure
This work is not funded by any third party. The authors do not have any
conflicting interests in the topic and provide an unbiased and neutral article
on the topic.
References
[1] DIN 8580 Fertigungsverfahren - Begriffe, Einteilung. DIN, 2003.
URL:
http://www.din.de/de/mitwirken/normenausschuesse/
natg/normen/wdc-beuth:din21:65031153.
[2] ISO 10303-238:2007 - Industrial automation systems and integration –
product data representation and exchange – Part 238: Application
protocol: Application interpreted model for computerized numerical
controllers. ISO, 2007. URL: http://www.iso.org/iso/catalogue_
detail.htm?csnumber=38036.
[3] Roadmap for Additive Manufacturing Identifying the Future of
Freeform Processing. Technical report, The University of Texas at
Austin, 2009. URL: http://wohlersassociates.com/roadmap2009.
html.
[4] VDI-Richtlinie:
VDI 3404 Generative Fertigungsverfahren - Rapid-Technologien (Rapid Prototyping) - Grundlagen, Begriffe, Qualitätskenngrößen, Liefervereinbarungen. VDI, 12 2009.
URL:
https://www.vdi.de/richtlinie/vdi_3404-generative_
fertigungsverfahren_rapid_technologien_rapid_prototyping_
grundlagen_begriffe.
[5] ASTM F2792-12a Standard Terminology for Additive Manufacturing Technologies (withdrawn 2015). ASTM, 2012. URL: http://www.astm.org/
cgi-bin/resolver.cgi?F2792, doi:10.1520/F2792-12A.
[6] VDI-Richtlinie: VDI 3404 Additive Fertigung - Grundlagen, Begriffe, Verfahrensbeschreibungen.
VDI, 5 2014.
URL: https:
//www.vdi.de/richtlinie/entwurf_alt_vdi_3404-additive_
fertigung_grundlagen_begriffe_verfahrensbeschreibungen/.
[7] VDI-Richtlinie: VDI 3405 Additive Fertigungsverfahren - Grundlagen,
Begriffe, Verfahrensbeschreibungen. VDI, 12 2014. URL: https://www.
vdi.de/richtlinie/vdi_3405-additive_fertigungsverfahren_
grundlagen_begriffe_verfahrensbeschreibungen_/.
[8] ISO 10303-21:2016 - Industrial automation systems and integration –
product data representation and exchange – Part 21: Implementation methods: Clear text encoding of the exchange structure. ISO,
2016. URL: http://www.iso.org/iso/home/store/catalogue_ics/
catalogue_detail_ics.htm?csnumber=63141.
[9] ISO/ASTM 52900:2015 Additive manufacturing — General principles
— Terminology. ISO, 2016. URL: http://www.iso.org/iso/home/
store/catalogue_tc/catalogue_detail.htm?csnumber=69669.
[10] ISO/ASTM 52915:2016 Specification for Additive Manufacturing File Format (AMF) Version 1.2.
ISO, 2016.
URL:
http://www.iso.org/iso/home/store/catalogue_tc/catalogue_
detail.htm?csnumber=67472.
[11] Göran Adamson, Lihui Wang, Magnus Holm, and Philip Moore.
Cloud manufacturing – a critical review of recent development and future trends. International Journal of Computer Integrated Manufacturing, pages 1–34, 2015. URL: http://dx.doi.org/10.1080/0951192X.
2015.1031704, doi:10.1080/0951192X.2015.1031704.
[12] Göran Adamson, Lihui Wang, and Magnus Holm. The State of the
Art of Cloud Manufacturing and Future Trends. In ASME 2013 International Manufacturing Science and Engineering Conference collocated with the 41st North American Manufacturing Research Conference, volume 2, pages 1–9, 2013. URL: http://dx.doi.org/10.1115/
MSEC2013-1123, doi:10.1115/MSEC2013-1123.
[13] Khubaib Amjad Alam, Rodina Ahmad, Adnan Akhunzada, Mohd
Hairul Nizam Md Nasir, and Samee U. Khan. Impact analysis and
change propagation in service-oriented enterprises: A systematic review. Information Systems, 54:43–73, 2015. URL: http://dx.doi.
org/10.1016/j.is.2015.06.003, doi:10.1016/j.is.2015.06.003.
[14] Daniel G. Aliaga and Mikhail J. Atallah. Genuinity Signatures: Designing Signatures for Verifying 3D Object Genuinity. Computer Graphics
Forum, 28(2):437–446, 2009. URL: http://dx.doi.org/10.1111/j.
1467-8659.2009.01383.x, doi:10.1111/j.1467-8659.2009.01383.
x.
[15] Masoud Alimardani, Ehsan Toyserkani, and Jan P. Huissoon. A 3D
dynamic numerical approach for temperature and thermal stress distributions in multilayer laser solid freeform fabrication process. Optics and Lasers in Engineering, 45(12):1115–1130, 2007. URL: http:
//dx.doi.org/10.1016/j.optlaseng.2007.06.010, doi:10.1016/
j.optlaseng.2007.06.010.
[16] American Society for Precision Engineering.
Dimensional
Accuracy and Surface Finish in Additive Manufacturing,
2014.
URL: http://aspe.net/publications/spring_2014/
2014aspespringproceedings-printfinal.pdf.
[17] Yu an Jin, Yong He, Jian zhong Fu, Wen feng Gan, and Zhi wei
Lin. Optimization of tool-path generation for material extrusion-based
additive manufacturing technology. Additive Manufacturing, 1-4:32–
47, 2014. URL: http://dx.doi.org/10.1016/j.addma.2014.08.004,
doi:10.1016/j.addma.2014.08.004.
[18] Georgios Andreadis, Georgios Fourtounis, and Konstantinos-Dionysios
Bouzakis. Collaborative design in the era of cloud computing. Advances
in Engineering Software, 81:66–72, 2015. URL: http://dx.doi.org/
10.1016/j.advengsoft.2014.11.002, doi:10.1016/j.advengsoft.
2014.11.002.
[19] R. Anitha, S. Arunachalam, and P. Radhakrishnan. Critical parameters influencing the quality of prototypes in fused deposition modelling. Journal of Materials Processing Technology, 118(1–3):385–388,
2001. URL: http://dx.doi.org/10.1016/S0924-0136(01)00980-3,
doi:10.1016/S0924-0136(01)00980-3.
[20] Antonio Armillotta. Assessment of surface quality on textured
FDM prototypes. Rapid Prototyping Journal, 12(1):35–41, 2006.
URL: http://dx.doi.org/10.1108/13552540610637255, doi:10.
1108/13552540610637255.
[21] Abbas Azari and Sakineh Nikzad. The evolution of rapid prototyping in dentistry: a review. Rapid Prototyping Journal, 15(3):216–
225, 2009. URL: http://dx.doi.org/10.1108/13552540910961946,
doi:10.1108/13552540910961946.
[22] Vincent A. Balogun, Neil Kirkwood, and Paul T. Mativenga.
Energy consumption and carbon footprint analysis of Fused Deposition Modelling: A case study of RP Stratasys Dimension
SST FDM.
International Journal of Scientific & Engineering
Research, 6(8):442–447, 8 2015.
URL: http://www.ijser.org/
research-paper-publishing-august-2015_page3.aspx.
[23] Vincent A. Balogun, Neil D. Kirkwood, and Paul T. Mativenga. Direct electrical energy demand in fused deposition modelling – In: 21st
CIRP Conference on Life Cycle Engineering. Procedia CIRP, 15:38–43,
2014. URL: http://dx.doi.org/10.1016/j.procir.2014.06.029,
doi:10.1016/j.procir.2014.06.029.
[24] Felix W. Baumann, Manuel Schön, Julian Eichhoff, and Dieter Roller.
Concept Development of a Sensor Array for 3D Printer. In Procedia
CIRP, 3rd ICRM 2016 International Conference on Ramp-Up Management, volume 51, pages 24–31, 2016. URL: http://dx.doi.org/
10.1016/j.procir.2016.05.041, doi:10.1016/j.procir.2016.05.
041.
[25] Cindy Bayley, Lennart Bochmann, Colin Hurlbut, Moneer Helu,
Robert Transchel, and David Dornfeld. Understanding Error Generation in Fused Deposition Modeling. In Proceedings of Dimensional
Accuracy and Surface Finish in Additive Manufacturing : ASPE Spring
Topical Meeting [16], pages 98–103. URL: http://e-citations.
ethbib.ethz.ch/view/pub:146650.
[26] Laura Bechthold, Veronika Fischer, Andre Hainzlmaier, Daniel Hugenroth, Ljudmila Ivanova, Kristina Kroth, Benedikt Römer, Edyta Sikorska, and Vincent Sitzmann. 3D Printing - A Qualitative Assessment
of Applications, Recent Trends and the Technology’s Future Potential. Number 17-2015 in Studien zum deutschen Innovationssystem.
Expertenkommission Forschung und Innovation (EFI), 2015. URL:
http://www.e-fi.de/151.html.
[27] Bernd Bertsche and Hans-Jörg Bullinger, editors.
Entwicklung und Erprobung innovativer Produkte – Rapid Prototyping.
VDI-Buch. Springer-Verlag Berlin Heidelberg, 1 edition, 2007.
URL: http://dx.doi.org/10.1007/978-3-540-69880-7, doi:10.
1007/978-3-540-69880-7.
[28] H. Bikas, Panagiotis Stavropoulos, and George Chryssolouris. Additive manufacturing methods and modelling approaches: a critical
review. The International Journal of Advanced Manufacturing Technology, 83(1):389–405, 2016. URL: http://dx.doi.org/10.1007/
s00170-015-7576-2, doi:10.1007/s00170-015-7576-2.
[29] Tobias Binz, Uwe Breitenbücher, Oliver Kopp, and Frank Leymann.
TOSCA: Portable Automated Deployment and Management of Cloud
Applications, pages 527–549. Springer New York, New York, NY,
2014. URL: http://dx.doi.org/10.1007/978-1-4614-7535-4_22,
doi:10.1007/978-1-4614-7535-4_22.
[30] Robert Bogue. 3d printing: the dawn of a new era in manufacturing?
Assembly Automation, 33(4):307–311, 2013. URL: http://dx.doi.
org/10.1108/AA-06-2013-055, doi:10.1108/AA-06-2013-055.
[31] Kamaljit Singh Boparai, Rupinder Singh, and Harwinder Singh.
Development of rapid tooling using fused deposition modeling:
a review.
Rapid Prototyping Journal, 22(2):281–299, 2016.
URL: http://dx.doi.org/10.1108/RPJ-04-2014-0048, doi:10.1108/RPJ-04-2014-0048.
[32] Alberto Boschetto and Luana Bottini. Accuracy prediction in fused
deposition modeling. The International Journal of Advanced Manufacturing Technology, 73(5):913–928, 2014. URL: http://dx.doi.org/
10.1007/s00170-014-5886-4, doi:10.1007/s00170-014-5886-4.
[33] Alberto Boschetto and Luana Bottini.
Design for manufacturing of surfaces to improve accuracy in Fused Deposition Modeling.
Robotics and Computer-Integrated Manufacturing, 37:103–114, 2016.
URL: http://dx.doi.org/10.1016/j.rcim.2015.07.005, doi:10.
1016/j.rcim.2015.07.005.
[34] David L. Bourell, David W. Rosen, and Ming C. Leu. The Roadmap
for Additive Manufacturing and Its Impact. 3D Printing and Additive
Manufacturing, 1(1):6–9, 2014. URL: http://dx.doi.org/10.1089/
3dp.2013.0002, doi:10.1089/3dp.2013.0002.
[35] Florent Le Bourhis, Olivier Kerbrat, Lucas Dembinski, Jean-Yves
Hascoet, and Pascal Mognol. Predictive Model for Environmental Assessment in Additive Manufacturing Process - In: 21st CIRP
Conference on Life Cycle Engineering. Procedia CIRP, 15:26–31,
2014. URL: http://dx.doi.org/10.1016/j.procir.2014.06.031,
doi:10.1016/j.procir.2014.06.031.
[36] David Brackett, Ian Ashcroft, and Richard J. M. Hague. Topology Optimization for Additive Manufacturing. In Proceedings of the TwentySecond Solid Freeform Fabrication (SFF) Symposium, pages 348–362,
2011. URL: http://sffsymposium.engr.utexas.edu/2011TOC.
[37] Tomaz Brajlih, Bogdan Valentan, Joze Balic, and Igor
Drstvensek.
Speed and accuracy evaluation of additive manufacturing machines.
Rapid Prototyping Journal, 17(1):64–75,
2011.
URL: http://dx.doi.org/10.1108/13552541111098644,
doi:10.1108/13552541111098644.
[38] Christian Brecher, Wolfram Lohse, and Mirco Vitr. Advanced Design and Manufacturing Based on STEP, chapter Module-based Platform for Seamless Interoperable CAD-CAM-CNC Planning, pages 439–
462. Springer Series in Advanced Manufacturing. Springer London,
2009. URL: http://dx.doi.org/10.1007/978-1-84882-739-4_20,
doi:10.1007/978-1-84882-739-4_20.
[39] Susan M. Bridges, Ken Keiser, Nathan Sissom, and Sara J. Graves. Cyber security for additive manufacturing. In Proceedings of the 10th Annual Cyber and Information Security Research Conference, CISR ’15,
pages 1–3, New York, NY, USA, 2015. ACM. URL: http://doi.acm.
org/10.1145/2746266.2746280, doi:10.1145/2746266.2746280.
[40] Gail Brooks, Kim Kinsley, and Tim Owens. 3d printing as a consumer
technology business model. International Journal of Management & Information Systems, 18(4):271–280, 9 2014. URL: http://dx.doi.org/
10.19030/ijmis.v18i4.8819, doi:10.19030/ijmis.v18i4.8819.
[41] Mark A. Buckner and Lonnie J. Love. Automating and accelerating
the additive manufacturing design process with multi-objective constrained evolutionary optimization and hpc/cloud computing. In Future of Instrumentation International Workshop (FIIW), 2012, pages
1–4, 2012. URL: http://dx.doi.org/10.1109/FIIW.2012.6378352,
doi:10.1109/FIIW.2012.6378352.
[42] Erin Buehler, Niara Comrie, Megan Hofmann, Samantha McDonald,
and Amy Hurst. Investigating the Implications of 3D Printing in Special Education. ACM Trans. Access. Comput., 8(3):1–28, 3 2016. URL:
http://doi.acm.org/10.1145/2870640, doi:10.1145/2870640.
[43] X. T. Cai, W. D. Li, F. Z. He, and Y. Q. Wu. Operation-effects
merging for collaborative design of personalized product. In Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE
19th International Conference on, pages 489–493, 2015. URL: http:
//dx.doi.org/10.1109/CSCWD.2015.7231008, doi:10.1109/CSCWD.
2015.7231008.
[44] R. Ian Campbell, Deon J. de Beer, and Eujin Pei. Additive manufacturing in South Africa: building on the foundations. Rapid Prototyping
Journal, 17(2):156–162, 2011. URL: http://dx.doi.org/10.1108/
13552541111113907, doi:10.1108/13552541111113907.
[45] Yang Cao, Shilong Wang, Ling Kang, and Yuan Gao. A TQCS-based service selection and scheduling strategy in cloud manufacturing. The International Journal of Advanced Manufacturing Technology, 82(1):235–
251, 2016. URL: http://dx.doi.org/10.1007/s00170-015-7350-5,
doi:10.1007/s00170-015-7350-5.
[46] Andrea Caputo, Giacomo Marzi, and Massimiliano Matteo Pellegrini.
The Internet of Things in manufacturing innovation
processes: Development and application of a conceptual framework. Business Process Management Journal, 22(2):383–402, 2016.
URL: http://dx.doi.org/10.1108/BPMJ-05-2015-0072, doi:10.
1108/BPMJ-05-2015-0072.
[47] Jasper Cerneels, André Voet, Jan Ivens, and Jean-Pierre Kruth. Additive manufacturing of thermoplastic composites. In Proceedings of
the COMPOSITES WEEK @ LEUVEN AND TEXCOMP-11 CONFERENCE, pages 1–7, 2013. URL: https://lirias.kuleuven.be/
handle/123456789/419132.
[48] Gilbert Chahine, Pauline Smith, and Radovan Kovacevic. Application of Topology Optimization in Modern Additive Manufacturing.
In Proceedings of the Twenty-First Solid Freeform Fabrication (SFF)
Symposium, pages 606–618, 2010. URL: http://sffsymposium.engr.
utexas.edu/2010TOC.
[49] Danny S. K. Chan. Simulation modelling in virtual manufacturing
analysis for integrated product and process design. Assembly Automation, 23(1):69–74, 2003. URL: http://dx.doi.org/10.1108/
01445150310460114, doi:10.1108/01445150310460114.
[50] Li Chen, Hong Deng, Qianni Deng, and Zhenyu Wu. Framework for
grid manufacturing. Tsinghua Science and Technology, 9(3):327–330,
6 2004.
[51] Toly Chen and Horng-Ren Tsai. Ubiquitous manufacturing: Current practices, challenges, and opportunities. Robotics and ComputerIntegrated Manufacturing, pages 1–7, 2016. URL: http://dx.doi.
org/10.1016/j.rcim.2016.01.001, doi:10.1016/j.rcim.2016.01.
001.
[52] W. Cheng, J. Y. H. Fuh, A. Y. C. Nee, Y. S. Wong, H. T. Loh,
and T. Miyazawa. Multi-objective optimization of part-building orientation in stereolithography. Rapid Prototyping Journal, 1(4):12–
23, 1995. URL: http://dx.doi.org/10.1108/13552549510104429,
doi:10.1108/13552549510104429.
[53] Ying Cheng, Fei Tao, Yilong Liu, Dongming Zhao, Lin Zhang, and Lida
Xu. Energy-aware resource service scheduling based on utility evaluation in cloud manufacturing system. Proceedings of the Institution
of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 2013. URL: http://dx.doi.org/10.1177/0954405413492966,
doi:10.1177/0954405413492966.
[54] Jin Choi, O-Chang Kwon, Wonjin Jo, Heon Ju Lee, and Myoung-Woon
Moon. 4D Printing Technology: A Review. 3D Printing and Additive
Manufacturing, 2(4):159–167, 2015. URL: http://dx.doi.org/10.
1089/3dp.2015.0039, doi:10.1089/3dp.2015.0039.
[55] Won-Shik Chu, Chung-Soo Kim, Hyun-Taek Lee, Jung-Oh Choi,
Jae-Il Park, Ji-Hyeon Song, Ki-Hwan Jang, and Sung-Hoon Ahn.
Hybrid manufacturing in micro/nano scale: A Review. International Journal of Precision Engineering and Manufacturing-Green
Technology, 1(1):75–92, 2014. URL: http://dx.doi.org/10.1007/
s40684-014-0012-5, doi:10.1007/s40684-014-0012-5.
[56] Chee Kai Chua, Kah Fai Leong, and Zhong Hong Liu. Rapid Tooling in Manufacturing, pages 2525–2549. Springer London, London,
2015. URL: http://dx.doi.org/10.1007/978-1-4471-4670-4_39,
doi:10.1007/978-1-4471-4670-4_39.
[57] Brett P. Conner, Guha P. Manogharan, Ashley N. Martof, Lauren M.
Rodomsky, Caitlyn M. Rodomsky, Dakesha C. Jordan, and James W.
Limperos. Making sense of 3-D printing: Creating a map of additive
manufacturing products and services. Additive Manufacturing, 1-4:64–
76, 2014. Inaugural Issue. URL: http://dx.doi.org/10.1016/j.
addma.2014.08.005, doi:10.1016/j.addma.2014.08.005.
[58] Henrique de Amorim Almeida and Paulo Jorge da Silva Bártolo.
Virtual topological optimisation of scaffolds for rapid prototyping.
Medical Engineering & Physics, 32(7):775–782, 2010. URL: http:
//dx.doi.org/10.1016/j.medengphy.2010.05.001, doi:10.1016/
j.medengphy.2010.05.001.
[59] Jan Deckers, Jef Vleugels, and Jean-Pierre Kruth. Additive manufacturing of ceramics: A review. Journal of Ceramic Science and
Technology, 5(4):245–260, 2014. URL: http://dx.doi.org/10.4416/
JCST2014-00032, doi:10.4416/JCST2014-00032.
[60] Olaf Diegel, Sarat Singamneni, Stephen Reay, and Andrew Withell.
Tools for sustainable product design: Additive manufacturing.
Journal of Sustainable Development, 3(3):68–75, 2010.
URL: http://www.ccsenet.org/journal/index.php/jsd/article/
view/6456, doi:10.5539/jsd.v3n3p68.
[61] Dimitar Dimitrov, Kristiaan Schreve, and N. de Beer.
Advances in three dimensional printing – state of the art and future perspectives. Rapid Prototyping Journal, 12(3):136–147, 2006.
URL: http://dx.doi.org/10.1108/13552540610670717, doi:10.
1108/13552540610670717.
[62] Donghong Ding, Zengxi Pan, Dominic Cuiuri, and Huijun Li.
Wire-feed additive manufacturing of metal components: technologies, developments and future interests. The International Journal of Advanced Manufacturing Technology, 81(1):465–481, 2015.
URL: http://dx.doi.org/10.1007/s00170-015-7077-3, doi:10.
1007/s00170-015-7077-3.
[63] Donghong Ding, Zengxi Pan, Dominic Cuiuri, Huijun Li, and Stephen
van Duin. Advanced Design for Additive Manufacturing: 3D Slicing
and 2D Path Planning, chapter 1, pages 3–23. InTech, 2016. URL:
http://dx.doi.org/10.5772/63042, doi:10.5772/63042.
[64] J. Ding, P. Colegrove, J. Mehnen, S. Ganguly, P. M. Sequeira
Almeida, F. Wang, and S. Williams. Thermo-mechanical analysis of Wire and Arc Additive Layer Manufacturing process on large
multi-layer parts. Computational Materials Science, 50(12):3315–3322,
2011. URL: http://dx.doi.org/10.1016/j.commatsci.2011.06.
023, doi:10.1016/j.commatsci.2011.06.023.
[65] Kyle Dolinsky. CAD’s Cradle: Untangling Copyrightability, Derivative Works, and Fair Use in 3D Printing. Washington & Lee Law
Review, 71(1):593–681, 2014. URL: http://scholarlycommons.law.
wlu.edu/wlulr/vol71/iss1/14.
[66] Yucong Duan, Yuan Cao, and Xiaobing Sun. Various “aaS” of
everything as a service. In Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD),
2015 16th IEEE/ACIS International Conference on, pages 1–6, 6
2015. URL: http://dx.doi.org/10.1109/SNPD.2015.7176215, doi:
10.1109/SNPD.2015.7176215.
[67] K. Efthymiou, A. Pagoropoulos, N. Papakostas, D. Mourtzis, and
G. Chryssolouris. Manufacturing systems complexity review: Challenges and outlook - In: 45th CIRP Conference on Manufacturing Systems
2012. Procedia CIRP, 3:644–649, 2012. URL: http://dx.doi.org/
10.1016/j.procir.2012.07.110, doi:10.1016/j.procir.2012.07.
110.
[68] Salah A.M. Elmoselhy. Hybrid lean–agile manufacturing system technical facet, in automotive sector. Journal of Manufacturing Systems,
32(4):598–619, 2013. URL: http://dx.doi.org/10.1016/j.jmsy.
2013.05.011, doi:10.1016/j.jmsy.2013.05.011.
[69] Asif Equbal, Anoop Kumar Sood, and S. S. Mahapatra. Prediction of dimensional accuracy in fused deposition modelling:
a fuzzy logic approach.
International Journal of Productivity
and Quality Management, 7(1):22–43, 2011. URL: http://www.
inderscienceonline.com/doi/abs/10.1504/IJPQM.2011.03773,
doi:10.1504/IJPQM.2011.03773.
[70] Azhar Equbal, Anoop Kumar Sood, and Mohammad Shamim. Rapid
tooling: A major shift in tooling practice. Journal of Manufacturing
and Industrial Engineering, 14(3–4):1–9, 2015. URL: http://dx.doi.
org/10.12776/mie.v14i3-4.325, doi:10.12776/mie.v14i3-4.325.
[71] Behzad Esmaeilian, Sara Behdad, and Ben Wang. The evolution and
future of manufacturing: A review. Journal of Manufacturing Systems, 39:79–100, 4 2016. URL: http://dx.doi.org/10.1016/j.jmsy.
2016.03.001, doi:10.1016/j.jmsy.2016.03.001.
[72] Daniel Eyers and Krassimir Dotchev.
Technology review for
mass customisation using rapid manufacturing.
Assembly Automation, 30(1):39–46, 2010. URL: http://dx.doi.org/10.1108/
01445151011016055, doi:10.1108/01445151011016055.
[73] Georges M. Fadel and Chuck Kirschman. Accuracy issues in cad
to rp translations. Rapid Prototyping Journal, 2(2):4–17, 1996.
URL: http://dx.doi.org/10.1108/13552549610128189, doi:10.
1108/13552549610128189.
[74] Rouhollah Dermanaki Farahani, Kambiz Chizari, and Daniel Therriault. Three-dimensional printing of freeform helical microstructures: a
review. Nanoscale, 6:10470–10485, 2014. URL: http://dx.doi.org/
10.1039/C4NR02041C, doi:10.1039/C4NR02041C.
[75] Sharon Flank, Gary E. Ritchie, and Rebecca Maksimovic. Anticounterfeiting Options for Three-Dimensional Printing. 3D Printing and Additive Manufacturing, 2(4):180–189, 12 2015. URL: http://dx.doi.
org/10.1089/3dp.2015.0007, doi:10.1089/3dp.2015.0007.
[76] Martina Flatscher and Andreas Riel. Stakeholder integration for
the successful product–process co-design for next-generation manufacturing technologies. {CIRP} Annals - Manufacturing Technology,
65(1):181–184, 2016. URL: http://dx.doi.org/10.1016/j.cirp.
2016.04.055, doi:10.1016/j.cirp.2016.04.055.
[77] Flavio S. Fogliatto, Giovani J.C. da Silveira, and Denis Borenstein.
The mass customization decade: An updated review of the literature. International Journal of Production Economics, 138(1):14–
25, 2012. URL: http://dx.doi.org/10.1016/j.ijpe.2012.03.002,
doi:10.1016/j.ijpe.2012.03.002.
[78] Stephen Fox. Potential of virtual-social-physical convergence for
project manufacturing. Journal of Manufacturing Technology Management, 25(8):1209–1223, 2014. URL: http://dx.doi.org/10.1108/
JMTM-01-2013-0008, doi:10.1108/JMTM-01-2013-0008.
[79] William E. Frazier.
Metal additive manufacturing: A review.
Journal of Materials Engineering and Performance, 23(6):1917–1928,
2014. URL: http://dx.doi.org/10.1007/s11665-014-0958-z, doi:
10.1007/s11665-014-0958-z.
[80] Luigi Maria Galantucci, Fulvio Lavecchia, and Gianluca Percoco.
Study of compression properties of topologically optimized {FDM}
made structured parts. {CIRP} Annals - Manufacturing Technology,
57(1):243–246, 2008. URL: http://dx.doi.org/10.1016/j.cirp.
2008.03.009, doi:10.1016/j.cirp.2008.03.009.
[81] Jerry Gao, Xiaoying Bai, and Wei-Tek Tsai. Testing as a service (taas)
on clouds. In Service Oriented System Engineering (SOSE), 2013 IEEE
7th International Symposium on, pages 212–223, 3 2013. doi:10.1109/
SOSE.2013.66.
[82] R. Gao, L. Wang, R. Teti, D. Dornfeld, S. Kumara, M. Mori, and
M. Helu. Cloud-enabled prognosis for manufacturing. {CIRP} Annals - Manufacturing Technology, 64(2):749–772, 2015. URL: http://
dx.doi.org/10.1016/j.cirp.2015.05.011, doi:10.1016/j.cirp.
2015.05.011.
[83] Wei Gao, Yunbo Zhang, Devarajan Ramanujan, Karthik Ramani, Yong
Chen, Christopher B. Williams, Charlie C. L. Wang, Yung C. Shin,
Song Zhang, and Pablo D. Zavattieri. The status, challenges, and future of additive manufacturing in engineering. Computer-Aided Design,
69:65–89, 2015. URL: http://dx.doi.org/10.1016/j.cad.2015.04.
001, doi:10.1016/j.cad.2015.04.001.
[84] Julien Gardan. Additive manufacturing technologies: state of the art
and trends. International Journal of Production Research, 54(10):3118–
3132, 2016. URL: http://dx.doi.org/10.1080/00207543.2015.
1115909, doi:10.1080/00207543.2015.1115909.
[85] Nicolas Gardan. Knowledge Management for Topological Optimization
Integration in Additive Manufacturing. International Journal of Manufacturing Engineering, 2014:1–9, 2014. URL: http://dx.doi.org/
10.1155/2014/356256, doi:10.1155/2014/356256.
[86] Nicolas Gardan and Alexandre Schneider. Topological optimization of
internal patterns and support in additive manufacturing. Journal of
Manufacturing Systems, 37(1):417–425, 2015. URL: http://dx.doi.
org/10.1016/j.jmsy.2014.07.003, doi:10.1016/j.jmsy.2014.07.
003.
[87] Ashu Garg, Anirban Bhattacharya, and Ajay Batish. On Surface
Finish and Dimensional Accuracy of FDM Parts after Cold Vapor
Treatment. Materials and Manufacturing Processes, 31(4):522–529,
2016. URL: http://dx.doi.org/10.1080/10426914.2015.1070425,
doi:10.1080/10426914.2015.1070425.
[88] Andreas Gebhardt. Generative Fertigungsverfahren. Carl Hanser Verlag GmbH & Co. KG, 4., neu bearbeitete und erweiterte auflage edition,
2013. URL: http://www.hanser-elibrary.com/doi/book/10.3139/
9783446436527, doi:10.3139/9783446436527.
[89] Ian Gibson, David Rosen, and Brent Stucker. Additive Manufacturing Technologies - 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing. Springer New York, 2 edition, 2015.
URL: http://dx.doi.org/10.1007/978-1-4939-2113-3, doi:10.
1007/978-1-4939-2113-3.
[90] Xibing Gong, Ted Anderson, and Kevin Chou.
Review on
powder-based electron beam additive manufacturing technology. In
ASME/ISCIE 2012 International Symposium on Flexible Automation, pages 507–515, 2012. URL: http://dx.doi.org/10.1115/
ISFA2012-7256, doi:10.1115/ISFA2012-7256.
[91] James Grimmelmann. Indistinguishable from Magic: A Wizard’s
Guide to Copyright and 3D Printing. Washington & Lee Law Review, 71(1):683–698, 2014. URL: http://scholarlycommons.law.
wlu.edu/wlulr/vol71/iss1/14.
[92] Bethany C. Gross, Jayda L. Erkal, Sarah Y. Lockwood, Chengpeng
Chen, and Dana M. Spence. Evaluation of 3D Printing and Its Potential Impact on Biotechnology and the Chemical Sciences. Analytical Chemistry, 86(7):3240–3253, 2014. URL: http://dx.doi.org/10.
1021/ac403397r, doi:10.1021/ac403397r.
[93] Dongdong Gu, Wilhelm Meiners, Konrad Wissenbach, and Reinhart Poprawe. Laser additive manufacturing of metallic components: materials, processes and mechanisms. International Materials
Reviews, 57(3):133–164, 2012. URL: http://dx.doi.org/10.1179/
1743280411Y.0000000014, doi:10.1179/1743280411Y.0000000014.
[94] Qing Gu and Patricia Lago.
Exploring service-oriented system engineering challenges: a systematic literature review. Service Oriented Computing and Applications, 3(3):171–188, 2009.
URL: http://dx.doi.org/10.1007/s11761-009-0046-7, doi:10.
1007/s11761-009-0046-7.
[95] Sofiane Guessasma, Weihong Zhang, Jihong Zhu, Sofiane Belhabib,
and Hedi Nouri. Challenges of additive manufacturing technologies
from an optimisation perspective. International Journal for Simulation
and Multidisciplinary Design Optimization, 6, 2015. URL: http://dx.
doi.org/10.1051/smdo/2016001, doi:10.1051/smdo/2016001.
[96] Liang Guo. A system design method for cloud manufacturing application system. The International Journal of Advanced Manufacturing Technology, 84(1):275–289, 2016. URL: http://dx.doi.org/10.
1007/s00170-015-8092-0, doi:10.1007/s00170-015-8092-0.
[97] Nannan Guo and Ming C. Leu. Additive manufacturing: technology, applications and research needs. Frontiers of Mechanical Engineering, 8(3):215–243, 2013. URL: http://dx.doi.org/10.1007/
s11465-013-0248-8, doi:10.1007/s11465-013-0248-8.
[98] Vikas Gupta, V. K. Bajpai, and Puneet Tandon. Slice Generation and
Data Retrieval Algorithm for Rapid Manufacturing of Heterogeneous
Objects. Computer-Aided Design and Applications, 11(3):255–262,
2014. URL: http://dx.doi.org/10.1080/16864360.2014.863483,
doi:10.1080/16864360.2014.863483.
[99] Mervi Hämäläinen and Arto Ojala. Additive manufacturing technology: Identifying value potential in additive manufacturing stakeholder
groups and business networks. In AMCIS 2015 : Proceedings of the
Twenty-first Americas Conference on Information Systems, pages 1–
10. AIS Electronic Library (AISeL), 2015. URL: http://urn.fi/URN:
NBN:fi:jyu-201508192705.
[100] Hans Nørgaard Hansen, Jakob Skov Nielsen, Jakob Rasmussen, and
David Bue Pedersen. Performance verification of 3D printers. In Proceedings of ASPE 2014 Spring Topical Meeting: Dimensional Accuracy and Surface Finish in Additive Manufacturing [16], pages 104–108.
URL: http://findit.dtu.dk/en/catalog/2289698722.
[101] Saad Hasan, Allan Rennie, and Jamal Hasan. The Business Model for the Functional Rapid Manufacturing Supply Chain. Studia commercialia Bratislavensia, 6(24):536–552, 12 2013. URL: http://dx.doi.org/10.2478/stcb-2013-0008, doi:10.2478/stcb-2013-0008.
[102] Wu He and Lida Xu. A state-of-the-art survey of cloud manufacturing. International Journal of Computer Integrated Manufacturing,
28(3):239–250, 2015. URL: http://dx.doi.org/10.1080/0951192X.
2013.874595, doi:10.1080/0951192X.2013.874595.
[103] P. Hernández, M. D. Monzón, A. N. Benítez, M. Marrero, Z. Ortega, N. Díaz, and F. Ortega. Rapid manufacturing experience in
training. In New Frontiers in Manufacturing Engineering and Materials Processing Training and Learning, volume 759 of Materials Science Forum, pages 47–54. Trans Tech Publications, 7 2013. doi:
10.4028/www.scientific.net/MSF.759.47.
[104] Jonathan Hiller and Hod Lipson. Design and analysis of digital materials for physical 3d voxel printing. Rapid Prototyping
Journal, 15(2):137–149, 2009. URL: http://dx.doi.org/10.1108/
13552540910943441, doi:10.1108/13552540910943441.
[105] Jonathan D. Hiller and Hod Lipson. Multi Material Topological Optimization of Structures and Mechanisms. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, GECCO
’09, pages 1521–1528, New York, NY, USA, 2009. ACM. URL: http:
//doi.acm.org/10.1145/1569901.1570105, doi:10.1145/1569901.
1570105.
[106] Neil Hopkinson and Phill Dickens.
Rapid prototyping for direct manufacture.
Rapid Prototyping Journal, 7(4):197–202,
2001. URL: http://dx.doi.org/10.1108/EUM0000000005753, doi:
10.1108/EUM0000000005753.
[107] Neil Hopkinson and Phill Dickens.
Analysis of rapid manufacturing — using layer manufacturing processes for production.
Proceedings of the Institution of Mechanical Engineers, Part C:
Journal of Mechanical Engineering Science, 217(1):31–39, 2003.
URL: http://pic.sagepub.com/content/217/1/31.abstract, doi:
10.1243/095440603762554596.
[108] Neil Hopkinson, Richard J. M. Hague, and Phill M. Dickens, editors.
Rapid Manufacturing: An Industrial Revolution for the Digital Age.
John Wiley & Sons, Ltd, 1 edition, 2006. URL: http://dx.doi.org/
10.1002/0470033991, doi:10.1002/0470033991.
[109] Timothy J. Horn and Ola L. A. Harrysson.
Overview of
current additive manufacturing technologies and selected applications.
Science Progress, 95(3):255–282, 2012.
URL: http:
//dx.doi.org/10.3184/003685012X13420984463047, doi:10.3184/
003685012X13420984463047.
[110] Jong-Uk Hou, Do-Gon Kim, Sunghee Choi, and Heung-Kyu Lee. 3D
Print-Scan Resilient Watermarking Using a Histogram-Based Circular
Shift Coding Structure. In Proceedings of the 3rd ACM Workshop on
Information Hiding and Multimedia Security, IH&MMSec ’15, pages
115–121, New York, NY, USA, 2015. ACM. URL: http://doi.acm.
org/10.1145/2756601.2756607, doi:10.1145/2756601.2756607.
[111] Tsung-Jung Hsu and Wei-Hsiang Lai. Manufacturing parts optimization in the three-dimensional printing process by the Taguchi
method. Journal of the Chinese Institute of Engineers, 33(1):121–130,
2010. URL: http://dx.doi.org/10.1080/02533839.2010.9671604,
doi:10.1080/02533839.2010.9671604.
[112] Bo Hu Li, Lin Zhang, Shi Long Wang, Fei Tao, Jun Wei Cao, Xiao Dan Jiang, Xiao Song, and Xu Dong Chai. Cloud manufacturing: a
new service-oriented networked manufacturing model. Computer Integrated Manufacturing Systems, 16(1), 2010. URL: http://www.
cims-journal.cn/EN/Y2010/V16/I01/0.
[113] Samuel H. Huang, Peng Liu, Abhiram Mokasdar, and Liang Hou.
Additive manufacturing and its societal impact: a literature review. The International Journal of Advanced Manufacturing Technology, 67(5):1191–1203, 2013. URL: http://dx.doi.org/10.1007/
s00170-012-4558-5, doi:10.1007/s00170-012-4558-5.
[114] Xiaorong Huang, Baigang Du, Libo Sun, Feng Chen, and Wei Dai.
Service requirement conflict resolution based on ant colony optimization in group-enterprises-oriented cloud manufacturing. The International Journal of Advanced Manufacturing Technology, 84(1):183–
196, 2016. URL: http://dx.doi.org/10.1007/s00170-015-7961-x,
doi:10.1007/s00170-015-7961-x.
[115] Yong Huang, Ming C. Leu, Jyoti Mazumder, and Alkan Donmez. Additive manufacturing: Current state, future potential, gaps and needs,
and recommendations. Journal of Manufacturing Science and Engineering, 137(1):014001–014011, 2 2015. URL: http://dx.doi.org/
10.1115/1.4028725, doi:10.1115/1.4028725.
[116] R. Ippolito, L. Iuliano, and A. Gatto. Benchmarking of rapid prototyping techniques in terms of dimensional accuracy and surface finish. CIRP Annals - Manufacturing Technology, 44(1):157–160, 1995.
URL: http://dx.doi.org/10.1016/S0007-8506(07)62296-3, doi:
10.1016/S0007-8506(07)62296-3.
[117] Iñigo Flores Ituarte, Eric Coatanea, Mika Salmi, Jukka Tuomi, and
Jouni Partanen. Additive manufacturing in production: A study case
applying technical requirements - in: 15th nordic laser materials processing conference, nolamp 15. Physics Procedia, 78:357–366, 2015.
URL: http://dx.doi.org/10.1016/j.phpro.2015.11.050, doi:10.
1016/j.phpro.2015.11.050.
[118] Olga Ivanova, Christopher Williams, and Thomas Campbell. Additive manufacturing (am) and nanotechnology: promises and
challenges.
Rapid Prototyping Journal, 19(5):353–364, 2013.
URL: http://dx.doi.org/10.1108/RPJ-12-2011-0127, doi:10.
1108/RPJ-12-2011-0127.
[119] Ron Jamieson and Herbert Hacker. Direct slicing of cad models for rapid prototyping. Rapid Prototyping Journal, 1(2):4–12,
1995. URL: http://dx.doi.org/10.1108/13552549510086826, doi:
10.1108/13552549510086826.
[120] C. F. Jian and Y. Wang. Batch Task Scheduling-Oriented Optimization
Modelling and Simulation in Cloud Manufacturing. International Journal of Simulation Modelling, 13(1):93–101, 2014. URL: http://dx.
doi.org/10.2507/IJSIMM13(1)CO2, doi:10.2507/IJSIMM13(1)CO2.
[121] Mathias Johanson, Magnus Löfstrand, and Lennart Karlsson.
Collaborative innovation through distributed engineering services.
In Proceedings of International Multi-Conference
on Engineering and Technological Innovation, pages 10–13,
2009.
URL: http://pure.ltu.se/portal/en/publications/
collaborative-innovation-through-distributed-engineering-services(e7027d30-af
.html.
[122] Min Jou and Jingying Wang. Observations of achievement and motivation in using cloud computing driven CAD: Comparison of college students with high school and vocational high school backgrounds. Computers in Human Behavior, 29(2):364–369, 2013. Advanced HumanComputer Interaction. URL: http://dx.doi.org/10.1016/j.chb.
2012.08.001, doi:10.1016/j.chb.2012.08.001.
[123] Dalton Alexandre Kai, Edson Pinheiro de Lima, Marlon Wesley Machado Cunico, and Sergio Eduardo Gouvêa da Costa. Additive
Manufacturing: A New Paradigm For Manufacturing. In H. Yang,
Z. Kong, and M. D. Sarder, editors, Proceedings of the 2016 Industrial
and Systems Engineering Research Conference, pages 1–6, 2016.
URL: https://www.researchgate.net/publication/304148343_
Additive_Manufacturing_A_New_Paradigm_For_Manufacturing.
[124] Hyoung Seok Kang, Ju Yeon Lee, SangSu Choi, Hyun Kim, Jun Hee
Park, Ji Yeon Son, Bo Hyun Kim, and Sang Do Noh. Smart manufacturing: Past research, present findings, and future directions. International Journal of Precision Engineering and Manufacturing-Green
Technology, 3(1):111–128, 2016. URL: http://dx.doi.org/10.1007/
s40684-016-0015-5, doi:10.1007/s40684-016-0015-5.
[125] K. P. Karunakaran, Alain Bernard, S. Suryakumar, Lucas Dembinski, and Georges Taillandier. Rapid manufacturing of metallic objects.
Rapid Prototyping Journal, 18(4):264–280, 2012.
URL: http://dx.doi.org/10.1108/13552541211231644, doi:10.
1108/13552541211231644.
[126] Ihab E. Katatny, S. H. Masood, and Y. S. Morsi. Evaluation and
Validation of the Shape Accuracy of FDM Fabricated Medical Models. Advanced Materials Research, 83–86:275–280, 2010. URL: http:
//dx.doi.org/10.4028/www.scientific.net/AMR.83-86.275, doi:
10.4028/www.scientific.net/AMR.83-86.275.
[127] Blake A. Kendrick, Vimal Dhokia, and Stephen T. Newman. Strategies to realize decentralized manufacture through hybrid manufacturing platforms. Robotics and Computer-Integrated Manufacturing,
2015. In Press.
URL: http://dx.doi.org/10.1016/j.rcim.2015.11.007, doi:10.
1016/j.rcim.2015.11.007.
[128] Jean-Pierre Kenné, Pierre Dejax, and Ali Gharbi. Production planning
of a hybrid manufacturing–remanufacturing system under uncertainty
within a closed-loop supply chain. International Journal of Production
Economics, 135(1):81–93, 2012. Advances in Optimization and Design
of Supply Chains. URL: http://dx.doi.org/10.1016/j.ijpe.2010.
10.026, doi:10.1016/j.ijpe.2010.10.026.
[129] Olivier Kerbrat, Pascal Mognol, and Jean-Yves Hascoet. A new DFM
approach to combine machining and additive manufacturing. Computers in Industry, 62(7):684–692, 9 2011. URL: http://dx.doi.org/
10.1016/j.compind.2011.04.003, doi:10.1016/j.compind.2011.
04.003.
[130] Siddhartha Kumar Khaitan and James D. McCalley. Design Techniques and Applications of Cyberphysical Systems: A Survey. IEEE
Systems Journal, 9(2):350–365, 6 2014. URL: http://dx.doi.org/
10.1109/JSYST.2014.2322503, doi:10.1109/JSYST.2014.2322503.
[131] Siavash H. Khajavi, Jouni Partanen, and Jan Holmström. Additive
manufacturing in the spare parts supply chain. Computers in Industry,
65(1):50–63, 2014. URL: http://dx.doi.org/10.1016/j.compind.
2013.07.008, doi:10.1016/j.compind.2013.07.008.
[132] Ateeq Khan and Klaus Turowski. Proceedings of the First International Scientific Conference “Intelligent Information Technologies for Industry” (IITI’16) - Volume 1, volume 450 of Advances
in Intelligent Systems and Computing, chapter A Survey of Current Challenges in Manufacturing Industry and Preparation for Industry 4.0, pages 15–26. Springer International Publishing, 2016.
URL: http://dx.doi.org/10.1007/978-3-319-33609-1_2, doi:10.
1007/978-3-319-33609-1_2.
[133] Zhong Xun Khoo, Joanne Ee Mei Teoh, Yong Liu, Chee Kai Chua,
Shoufeng Yang, Jia An, Kah Fai Leong, and Wai Yee Yeong. 3d
printing of smart materials: A review on recent progresses in 4d
printing. Virtual and Physical Prototyping, 10(3):103–122, 2015.
URL: http://dx.doi.org/10.1080/17452759.2015.1097054, doi:
10.1080/17452759.2015.1097054.
[134] Duck Bong Kim, Paul Witherell, Robert Lipman, and Shaw C.
Feng. Streamlining the additive manufacturing digital spectrum: A
systems approach. Additive Manufacturing, 5:20–30, 2014. URL:
http://dx.doi.org/10.1016/j.addma.2014.10.004, doi:10.1016/
j.addma.2014.10.004.
[135] D. King and T. Tansey. Alternative materials for rapid tooling.
Journal of Materials Processing Technology, 121(2-3):313–317, 2002.
URL: http://dx.doi.org/10.1016/S0924-0136(01)01145-1, doi:
10.1016/S0924-0136(01)01145-1.
[136] Fritz Klocke. Fertigungsverfahren 5 - Gießen, Pulvermetallurgie,
Additive Manufacturing. VDI-Buch. Springer Berlin Heidelberg, 4
edition, 2015. URL: http://link.springer.com/book/10.1007/
978-3-540-69512-7, doi:10.1007/978-3-540-69512-7.
[137] J.-P. Kruth, M. C. Leu, and T. Nakagawa. Progress in additive
manufacturing and rapid prototyping.
CIRP Annals - Manufacturing Technology, 47(2):525–540, 1998.
URL: http://www.
sciencedirect.com/science/article/pii/S0007850607632405,
doi:10.1016/S0007-8506(07)63240-5.
[138] Sylvain Kubler, Jan Holmström, Kary Främling, and Petra Turkama.
Technological Theory of Cloud Manufacturing, pages 267–276. Springer
International Publishing, Cham, 2016. URL: http://dx.doi.org/10.
1007/978-3-319-30337-6_24, doi:10.1007/978-3-319-30337-6_
24.
[139] Hermann Kuehnle. Distributed manufacturing (dm) - smart units
and collaborative processes. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering,
9(4):1230–1241, 2015. URL: http://waset.org/Publications?p=
100.
[140] Roland Lachmayer, Rene Bastian Lippert, and Thomas Fahlbusch,
editors.
3D-Druck beleuchtet.
Springer Berlin Heidelberg, 1
edition, 2016. URL: http://link.springer.com/book/10.1007/
978-3-662-49056-3, doi:10.1007/978-3-662-49056-3.
[141] Y. Laili, L. Zhang, and Fei Tao. Energy adaptive immune genetic algorithm for collaborative design task scheduling in cloud manufacturing system. In Industrial Engineering and Engineering Management
(IEEM), 2011 IEEE International Conference on, pages 1912–1916,
12 2011. URL: http://dx.doi.org/10.1109/IEEM.2011.6118248,
doi:10.1109/IEEM.2011.6118248.
[142] Yuanjun Laili, Fei Tao, Lin Zhang, and Bhaba R. Sarker. A study
of optimal allocation of computing resources in cloud manufacturing
systems. The International Journal of Advanced Manufacturing Technology, 63(5):671–690, 2012. URL: http://dx.doi.org/10.1007/
s00170-012-3939-0, doi:10.1007/s00170-012-3939-0.
[143] Hongbo Lan. Web-based rapid prototyping and manufacturing systems: A review. Computers in Industry, 60(9):643–656, 2009.
URL: http://dx.doi.org/10.1016/j.compind.2009.05.003, doi:
10.1016/j.compind.2009.05.003.
[144] Jorick Lartigau, Lanshun Nie, Xiaofei Xu, Dechen Zhan, and Tehani
Mou. Scheduling Methodology for Production Services in Cloud Manufacturing. In Service Sciences (IJCSS), 2012 International Joint Conference on, pages 34–39, 5 2012. URL: http://dx.doi.org/10.1109/
IJCSS.2012.19, doi:10.1109/IJCSS.2012.19.
[145] Bert Lauwers, Fritz Klocke, Andreas Klink, A. Erman Tekkaya,
Reimund Neugebauer, and Don McIntosh. Hybrid processes in manufacturing. CIRP Annals - Manufacturing Technology, 63(2):561–
583, 2014. URL: http://dx.doi.org/10.1016/j.cirp.2014.05.003,
doi:10.1016/j.cirp.2014.05.003.
[146] Martin Leary, Luigi Merli, Federico Torti, Maciej Mazur, and Milan
Brandt. Optimal topology for additive manufacture: A method for enabling additive manufacture of support-free optimal structures. Materials & Design, 63:678–690, 2014. URL: http://dx.doi.org/10.1016/
j.matdes.2014.06.015, doi:10.1016/j.matdes.2014.06.015.
[147] Edward A. Lee. Cyber-Physical Systems - Are Computing Foundations
Adequate. In Position Paper for NSF Workshop On Cyber-Physical
Systems: Research Motivation, Techniques and Roadmap, volume 2,
2006. URL: http://citeseerx.ist.psu.edu/viewdoc/download?
doi=10.1.1.84.8011&rep=rep1&type=pdf.
[148] Jay Lee, Behrad Bagheri, and Hung-An Kao. A Cyber-Physical
Systems architecture for Industry 4.0-based manufacturing systems.
Manufacturing Letters, 3:18–23, 2015. URL: http://dx.doi.org/
10.1016/j.mfglet.2014.12.001, doi:10.1016/j.mfglet.2014.12.
001.
[149] Dirk Lehmhus, Thorsten Wuest, Stefan Wellsandt, Stefan Bosse,
Toshiya Kaihara, Klaus-Dieter Thoben, and Matthias Busse. Cloud-Based Automated Design and Additive Manufacturing: A Usage Data-Enabled Paradigm Shift. Sensors, 15(12):32079–32122,
2015.
URL: http://dx.doi.org/10.3390/s151229905, doi:10.
3390/s151229905.
[150] Gideon N. Levy, Ralf Schindel, and J. P. Kruth. Rapid manufacturing
and rapid tooling with layer manufacturing (lm) technologies, state of
the art and future perspectives. CIRP Annals - Manufacturing Technology, 52(2):589–609, 2003. URL: http://dx.doi.org/10.1016/
S0007-8506(07)60206-6, doi:10.1016/S0007-8506(07)60206-6.
[151] Hui-Hong JK Li, Yong Jiang Shi, Mike Gregory, and Kim Hua
Tan. Rapid production ramp-up capability: a collaborative supply network perspective. International Journal of Production Research, 52(10):2999–3013, 2014. URL: http://dx.doi.org/10.1080/
00207543.2013.858837, doi:10.1080/00207543.2013.858837.
[152] Wenfeng Li, Ye Zhong, Xun Wang, and Yulian Cao. Resource virtualization and service selection in cloud logistics. Journal of Network and
Computer Applications, 36(6):1696–1704, 2013. URL: http://dx.doi.
org/10.1016/j.jnca.2013.02.019, doi:10.1016/j.jnca.2013.02.
019.
[153] Feng Lin, Wei Sun, and Yongnian Yan. Optimization with minimum
process error for layered manufacturing fabrication. Rapid Prototyping Journal, 7(2):73–82, 2001. URL: http://dx.doi.org/10.1108/
13552540110386691, doi:10.1108/13552540110386691.
[154] Hod Lipson and Melba Kurman. Fabricated - The New World of 3D
Printing. John Wiley & Sons, Inc., 1 edition, 2013. URL: http://eu.
wiley.com/WileyCDA/WileyTitle/productCd-1118416945.html.
[155] Xiaowei Liu, Yun Yang, Xiaodong Xu, Changchun Li, and Lingpeng Ran. Research on profit mechanism of 3d printing cloud platform based on customized products. Applied Mechanics and Materials, 703:318–322, 2015. URL: http://dx.doi.org/10.4028/www.
scientific.net/AMM.703.318, doi:10.4028/www.scientific.net/
AMM.703.318.
[156] Yongkui Liu, Xun Xu, Lin Zhang, and Fei Tao. An Extensible Model
for Multi-Task Service Composition and Scheduling in a Cloud Manufacturing System. Journal of Computing and Information Science in
Engineering, 2016. URL: http://dx.doi.org/10.1115/1.4034186,
doi:10.1115/1.4034186.
[157] Ramón Medrano Llamas, Quentin Barrand, Johannes Elmsheuser, Federica Legger, Gianfranco Sciacca, Andrea Sciabà, and Daniel van der
Ster. Testing as a Service with HammerCloud. Journal of Physics:
Conference Series, 513(6):062031, 2014. URL: http://dx.doi.org/
10.1088/1742-6596/513/6/062031, doi:10.1088/1742-6596/513/
6/062031.
[158] K. A. Lorenz, J. B. Jones, D. I. Wimpenny, and M. R. Jackson. A Review of Hybrid Manufacturing. In Proceedings of the Twenty-Sixth Solid
Freeform Fabrication (SFF) Symposium, pages 96–108, 2015. URL:
http://sffsymposium.engr.utexas.edu/2015TOC.
[159] Yuqian Lu, Xun Xu, and Jenny Xu. Development of a hybrid manufacturing cloud. Journal of Manufacturing Systems, 33(4):551–566,
10 2014. URL: http://dx.doi.org/10.1016/j.jmsy.2014.05.003,
doi:10.1016/j.jmsy.2014.05.003.
[160] Yongliang Luo, Lin Zhang, Fei Tao, Lei Ren, Yongkui Liu, and Zhiqiang Zhang. A modeling and description method of multidimensional information for manufacturing capability in cloud manufacturing system. The International Journal of Advanced Manufacturing Technology, 69(5):961–975, 2013. URL: http://dx.doi.org/10.1007/s00170-013-5076-9, doi:10.1007/s00170-013-5076-9.
[161] Mario Lušić, Kilian Schneider, and Rüdiger Hornfeck. A Case Study on
the Capability of Rapid Tooling Thermoplastic Laminating Moulds for
Manufacturing of CFRP Components in Autoclaves. Procedia CIRP,
50:390–395, 2016. 26th CIRP Design Conference. URL: http://dx.
doi.org/10.1016/j.procir.2016.04.151, doi:10.1016/j.procir.
2016.04.151.
[162] Hammad H. Malik, Alastair R. J. Darwood, Shalin Shaunak, Priyantha Kulatilake, Abdulrahman A. El-Hilly, Omar Mulki, and Aroon
Baskaradas. Three-dimensional printing in surgery: a review of current surgical applications. Journal of Surgical Research, 199(2):512–
522, 2015. URL: http://dx.doi.org/10.1016/j.jss.2015.06.051,
doi:10.1016/j.jss.2015.06.051.
[163] Syed Hasan Masood. 10.04 - Advances in Fused Deposition Modeling. In Saleem Hashmi, Gilmar Ferreira Batalha, Chester J. Van Tyne,
and Bekir Yilbas, editors, Comprehensive Materials Processing, pages
69–91. Elsevier, Oxford, 2014. URL: http://dx.doi.org/10.1016/
B978-0-08-096532-1.01002-5, doi:10.1016/B978-0-08-096532-1.
01002-5.
[164] Elizabeth Matias and Bharat Rao. 3d printing: On its historical
evolution and the implications for business. In 2015 Portland International Conference on Management of Engineering and Technology (PICMET), pages 551–558, 2015. URL: http://dx.doi.org/10.
1109/PICMET.2015.7273052, doi:10.1109/PICMET.2015.7273052.
[165] Maria Mavri. Redesigning a Production Chain Based on 3D Printing Technology. Knowledge and Process Management, 22(3):141–147,
2015. URL: http://dx.doi.org/10.1002/kpm.1466, doi:10.1002/
kpm.1466.
[166] Andrew D. Maynard. Navigating the fourth industrial revolution. Nature Nanotechnology, 10:1005–1006, 12 2015. URL: http://dx.doi.
org/10.1038/nnano.2015.286, doi:10.1038/nnano.2015.286.
[167] Connor M. McNulty, Neyla Arnas, and Thomas A. Campbell. Toward the Printed World: Additive Manufacturing and
Implications for National Security.
Defense Horizons, 73:1–
16, 9 2012.
URL: http://ctnsp.dodlive.mil/2012/09/01/
dh-073-toward-the-printed-world-additive-manufacturing-and-implications-for
[168] Afshin Mehrsai, Hamid Reza Karimi, and Klaus-Dieter Thoben. Integration of supply networks for customization with modularity in
cloud and make-to-upgrade strategy. Systems Science & Control Engineering: An Open Access Journal, 1:28–42, 2013. URL: http://dx.
doi.org/10.1080/21642583.2013.817959, doi:10.1080/21642583.
2013.817959.
[169] Ferry P. W. Melchels, Marco A. N. Domingos, Travis J. Klein,
Jos Malda, Paulo J. Bartolo, and Dietmar W. Hutmacher. Additive manufacturing of tissues and organs. Progress in Polymer
Science, 37(8):1079–1104, 2012. Topical Issue on Biorelated polymers.
URL: http://www.sciencedirect.com/science/article/
pii/S0079670011001328, doi:10.1016/j.progpolymsci.2011.11.
007.
[170] Ferry P. W. Melchels, Jan Feijen, and Dirk W. Grijpma. A review on stereolithography and its applications in biomedical engineering. Biomaterials, 31(24):6121–6130, 2010. URL: http://
dx.doi.org/10.1016/j.biomaterials.2010.04.050, doi:10.1016/
j.biomaterials.2010.04.050.
[171] Peter Mell and Timothy Grance. The nist definition of cloud computing. NIST Special Publication 800-145, National Institute of Standards
and Technology (NIST), 9 2011. URL: http://dx.doi.org/10.6028/
NIST.SP.800-145, doi:10.6028/NIST.SP.800-145.
[172] Vladimir Mironov, Nuno Reis, and Brian Derby. Bioprinting: A beginning. Tissue Engineering, 12(4):631–634, 5 2006. URL: http:
//dx.doi.org/10.1089/ten.2006.12.631, doi:10.1089/ten.2006.
12.631.
[173] Pankhuri Mishra and Neeraj Tripathi.
Testing as a Service, pages 149–176.
Springer Singapore, Singapore, 2017.
URL: http://dx.doi.org/10.1007/978-981-10-1415-4_7, doi:10.
1007/978-981-10-1415-4_7.
[174] Tihomir Mitev. Where is the Missing Matter?: A Comment on
”The Essence” of Additive Manufacturing. International Journal of
Actor-Network Theory and Technological Innovation, 7:10–17, 2015.
URL: http://dx.doi.org/10.4018/IJANTTI.2015010102, doi:10.
4018/IJANTTI.2015010102.
[175] Mohsen Moghaddam, José Reinaldo Silva, and Shimon Y. Nof.
Manufacturing-as-a-Service—From e-Work and Service-Oriented Architecture to the Cloud Manufacturing Paradigm - In: 15th IFAC
Symposium on Information Control Problems in Manufacturing. IFAC-PapersOnLine, 48(3):828–833, 2015. URL: http://dx.doi.org/
10.1016/j.ifacol.2015.06.186, doi:10.1016/j.ifacol.2015.06.
186.
[176] Omar A. Mohamed, Syed H. Masood, and Jahar L. Bhowmik. Optimization of fused deposition modeling process parameters: a review of current research and future prospects. Advances in Manufacturing, 3(1):42–53, 2015. URL: http://dx.doi.org/10.1007/
s40436-014-0097-7, doi:10.1007/s40436-014-0097-7.
[177] Carlos Mota, Dario Puppi, Federica Chiellini, and Emo Chiellini. Additive manufacturing techniques for the production of tissue engineering
constructs. Journal of Tissue Engineering and Regenerative Medicine,
9(3):174–190, 2015. URL: http://dx.doi.org/10.1002/term.1635,
doi:10.1002/term.1635.
[178] Dimitris Mourtzis, Michael Doukas, and Dimitra Bernidaki. Simulation in Manufacturing: Review and Challenges. In Procedia
CIRP - 8th International Conference on Digital Enterprise Technology - DET 2014 Disruptive Innovation in Manufacturing Engineering towards the 4th Industrial Revolution, volume 25, pages 213–229,
2014. URL: http://dx.doi.org/10.1016/j.procir.2014.10.032,
doi:10.1016/j.procir.2014.10.032.
[179] Shawn Moylan, April Cooke, Kevin Jurrens, John Slotwinski, and
M. Alkan Donmez. A review of test artifacts for additive manufacturing. NISTIR 7858, National Institute of Standards and
Technology (NIST), 2012. URL: http://dx.doi.org/10.6028/NIST.
IR.7858, doi:10.6028/NIST.IR.7858.
[180] K. Muita, M. Westerlund, and R. Rajala. The evolution of rapid production: How to adopt novel manufacturing technology - in: 15th
IFAC symposium on information control problems in manufacturing.
IFAC-PapersOnLine, 48(3):32–37, 2015. URL: http://dx.doi.org/
10.1016/j.ifacol.2015.06.054, doi:10.1016/j.ifacol.2015.06.
054.
[181] Javier Munguía, Joaquim de Ciurana, and Carles Riba. Pursuing successful rapid manufacturing: a users’ best-practices approach. Rapid
Prototyping Journal, 14(3):173–179, 2008. URL: http://dx.doi.org/
10.1108/13552540810878049, doi:10.1108/13552540810878049.
[182] Lawrence E. Murr, Sara M. Gaytan, Diana A. Ramirez, Edwin Martinez, Jennifer Hernandez, Krista N. Amato, Patrick W. Shindo,
Francisco R. Medina, and Ryan B. Wicker.
Metal fabrication
by additive manufacturing using laser and electron beam melting
technologies. Journal of Materials Science & Technology, 28(1):1–
14, 2012. URL: http://www.sciencedirect.com/science/article/
pii/S1005030212600164, doi:10.1016/S1005-0302(12)60016-4.
[183] Subramanian Senthilkannan Muthu and Monica Mahesh Savalani,
editors.
Handbook of Sustainability in Additive Manufacturing,
volume 1 of Environmental Footprints and Eco-design of Products and Processes. Springer Singapore, 1 edition, 2016. URL:
http://link.springer.com/book/10.1007/978-981-10-0549-7,
doi:10.1007/978-981-10-0549-7.
[184] Marcel Nagel, Felix Giese, and Ralf Becker. Flexible Gripper Design
Through Additive Manufacturing, pages 455–459. Springer International Publishing, Cham, 2016. URL: http://dx.doi.org/10.1007/
978-3-319-26378-6_37, doi:10.1007/978-3-319-26378-6_37.
[185] Andrew Y. C. Nee. Handbook of Manufacturing Engineering and Technology. Springer London, 2015. URL: http://dx.doi.org/10.1007/
978-1-4471-4670-4, doi:10.1007/978-1-4471-4670-4.
[186] Saigopal Nelaturi, Arvind Rangarajan, Christian Fritz, and Tolga Kurtoglu. Automated fixture configuration for rapid manufacturing planning. Computer-Aided Design, 46:160–169, 2014. 2013 SIAM Conference on Geometric and Physical Modeling. URL: http://dx.doi.org/
10.1016/j.cad.2013.08.028, doi:10.1016/j.cad.2013.08.028.
[187] Tiago Oliveira, Manoj Thomas, and Mariana Espadanal. Assessing the
determinants of cloud computing adoption: An analysis of the manufacturing and services sectors. Information & Management, 51(5):497–
510, 7 2014. URL: http://dx.doi.org/10.1016/j.im.2014.03.006,
doi:10.1016/j.im.2014.03.006.
[188] William Oropallo and Les A. Piegl.
Ten challenges in 3d
printing.
Engineering with Computers, 32(1):135–148, 2016.
URL: http://dx.doi.org/10.1007/s00366-015-0407-0, doi:10.
1007/s00366-015-0407-0.
[189] Deepankar Pal, Nachiket Patil, Kai Zeng, and Brent Stucker. An
Integrated Approach to Additive Manufacturing Simulations Using
Physics Based, Coupled Multiscale Process Modeling. Journal of
Manufacturing Science and Engineering, 136(6):1–16, 12 2014. URL:
http://dx.doi.org/10.1115/1.4028580, doi:10.1115/1.4028580.
[190] Sangsung Park, Juhwan Kim, Hongchul Lee, Dongsik Jang, and Sunghae Jun. Methodology of technological evolution for three-dimensional
printing. Industrial Management & Data Systems, 116(1):122–146,
2016. URL: http://dx.doi.org/10.1108/IMDS-05-2015-0206, doi:
10.1108/IMDS-05-2015-0206.
[191] Christ P. Paul, Pankaj Bhargava, Atul Kumar, Ayukt K. Pathak, and
Lalit M. Kukreja. Laser Rapid Manufacturing: Technology, Applications, Modeling and Future Prospects, pages 1–67. John Wiley & Sons,
Inc., 2013. URL: http://dx.doi.org/10.1002/9781118562857.ch1,
doi:10.1002/9781118562857.ch1.
[192] Ratnadeep Paul and Sam Anand. Process energy analysis and optimization in selective laser sintering. Journal of Manufacturing Systems, 31(4):429–437, 2012. Selected Papers of 40th North American
Manufacturing Research Conference. URL: http://dx.doi.org/10.
1016/j.jmsy.2012.07.004, doi:10.1016/j.jmsy.2012.07.004.
[193] Ratnadeep Paul and Sam Anand. A new Steiner patch based file format
for Additive Manufacturing processes. Computer-Aided Design, 63:86–
100, 2015. URL: http://dx.doi.org/10.1016/j.cad.2015.01.002,
doi:10.1016/j.cad.2015.01.002.
[194] Ratnadeep Paul and Sam Anand. Optimization of layered manufacturing process for reducing form errors with minimal support
structures. Journal of Manufacturing Systems, 36:231–243, 2015.
URL: http://dx.doi.org/10.1016/j.jmsy.2014.06.014, doi:10.
1016/j.jmsy.2014.06.014.
[195] Irene J. Petrick and Timothy W. Simpson.
3D Printing Disrupts Manufacturing: How Economies of One Create New Rules
of Competition.
Research-Technology Management, 56(6):12–16,
2013. URL: http://dx.doi.org/10.5437/08956308X5606193, doi:
10.5437/08956308X5606193.
[196] Duc Truong Pham and Rosemary S. Gault.
A comparison of rapid prototyping technologies.
International Journal
of Machine Tools and Manufacture, 38(10-11):1257–1287, 1998.
URL: http://dx.doi.org/10.1016/S0890-6955(97)00137-5, doi:
10.1016/S0890-6955(97)00137-5.
[197] Merissa Piazza and Serena Alexander. Additive manufacturing: A summary of the literature. Urban Publications 1319, Cleveland State University - Maxine Goodman Levin College of Urban Affairs, 2015. URL:
http://engagedscholarship.csuohio.edu/urban_facpub/1319.
[198] Andrew J. Pinkerton. [invited] lasers in additive manufacturing.
Optics & Laser Technology, 78, Part A:25–32, 2016. URL: http:
//dx.doi.org/10.1016/j.optlastec.2015.09.025, doi:10.1016/j.
optlastec.2015.09.025.
[199] R. Ponche, J. Y. Hascoet, O. Kerbrat, and P. Mognol. A new global
approach to design for additive manufacturing. Virtual and Physical
Prototyping, 7(2):93–105, 2012. URL: http://dx.doi.org/10.1080/
17452759.2012.679499, doi:10.1080/17452759.2012.679499.
[200] T. Qu, S. P. Lei, Z. Z. Wang, D. X. Nie, X. Chen, and George Q. Huang. IoT-based real-time production logistics synchronization system under smart cloud manufacturing. The International Journal of Advanced Manufacturing Technology, 84(1):147–164, 2016.
URL: http://dx.doi.org/10.1007/s00170-015-7220-1, doi:10.
1007/s00170-015-7220-1.
[201] Janaka Rajaguru, Mike Duke, and ChiKit Au. Development of
rapid tooling by rapid prototyping technology and electroless nickel
plating for low-volume production of plastic parts. The International Journal of Advanced Manufacturing Technology, 78(1):31–40,
2015. URL: http://dx.doi.org/10.1007/s00170-014-6619-4, doi:
10.1007/s00170-014-6619-4.
[202] Shravya Ramisetty, Prasad Calyam, J. Cecil, Amit Rama Akula,
Ronny Bazan Antequera, and Ray Leto. Ontology integration for advanced manufacturing collaboration in cloud platforms. In Proceedings
of IFIP/IEEE International Symposium on Integrated Network Management (IM), pages 504–510, 2015. URL: http://dx.doi.org/10.
1109/INM.2015.7140329, doi:10.1109/INM.2015.7140329.
[203] Farzad Rayegani and Godfrey C. Onwubolu. Fused deposition modelling (fdm) process parameter prediction and optimization using
group method for data handling (gmdh) and differential evolution
(de). The International Journal of Advanced Manufacturing Technology, 73(1):509–519, 2014. URL: http://dx.doi.org/10.1007/
s00170-014-5835-2, doi:10.1007/s00170-014-5835-2.
[204] Thierry Rayna and Ludmila Striukova.
A Taxonomy of Online 3D Printing Platforms, pages 153–166.
Information Technology and Law Series. T.M.C. Asser Press, The Hague, 2016.
URL: http://dx.doi.org/10.1007/978-94-6265-096-1_9, doi:10.
1007/978-94-6265-096-1_9.
[205] Lei Ren, Jin Cui, Ni Li, Qiong Wu, Cuixia Ma, Dongxing Teng, and
Lin Zhang. Cloud-Based Intelligent User Interface for Cloud Manufacturing: Model, Technology, and Application. Journal of Manufacturing Science and Engineering, 137(4):1–7, 2015. URL: http:
//dx.doi.org/10.1115/1.4030332, doi:10.1115/1.4030332.
[206] Lei Ren, Jin Cui, Yongchang Wei, Yuanjun LaiLi, and Lin
Zhang. Research on the impact of service provider cooperative
relationship on cloud manufacturing platform. The International
Journal of Advanced Manufacturing Technology, pages 1–12, 2016.
URL: http://dx.doi.org/10.1007/s00170-016-8345-6, doi:10.
1007/s00170-016-8345-6.
[207] Fabian Rengier, Amit Mehndiratta, Hendrik Von Tengg-Kobligk,
Christian Martin Zechmann, Roland Unterhinninghofen, Hans-Ulrich
Kauczor, and Frederik L. Giesel.
3d printing based on imaging data: review of medical applications. International Journal
of Computer Assisted Radiology and Surgery, 5(4):335–341, 2010.
URL: http://dx.doi.org/10.1007/s11548-010-0476-x, doi:10.
1007/s11548-010-0476-x.
[208] D. A. Roberson, D. Espalin, and R. B. Wicker. 3d printer selection: A
decision-making evaluation and ranking model. Virtual and Physical
Prototyping, 8(3):201–212, 2013. URL: http://dx.doi.org/10.1080/
17452759.2013.830939, doi:10.1080/17452759.2013.830939.
[209] Dieter Roller, M. Bihler, and Oliver Eck.
Asn: Active, distributed knowledge base for rapid prototyping. In Dieter Roller, editor, Proceedings of 30th ISATA, Volume ”Rapid Prototyping in the
Automotive Industries & Laser Applications in the Automotive Industries”, pages 253–262. Automotive Automation Ltd., Croydon,
England, 1997. URL: http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.87.9318.
[210] David W. Rosen. Research supporting principles for design for additive manufacturing. Virtual and Physical Prototyping, 9(4):225–232,
2014. URL: http://dx.doi.org/10.1080/17452759.2014.951530,
doi:10.1080/17452759.2014.951530.
[211] Massimiliano Ruffo, Christopher John Tuck, and Richard J. M. Hague.
Cost estimation for rapid manufacturing - laser sintering production for
low to medium volumes. Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering Manufacture, 220(9):1417–
1427, 2006. URL: http://dx.doi.org/10.1243/09544054JEM517,
doi:10.1243/09544054JEM517.
[212] E. Sachs, M. Cima, and J. Cornie.
Three-Dimensional Printing: Rapid Tooling and Prototypes Directly from a CAD Model.
CIRP Annals - Manufacturing Technology, 39(1):201–204, 1990.
URL: http://dx.doi.org/10.1016/S0007-8506(07)61035-X, doi:
10.1016/S0007-8506(07)61035-X.
[213] Ranjeet Kumar Sahu, S. S. Mahapatra, and Anoop Kumar Sood. A
study on dimensional accuracy of fused deposition modeling (fdm) processed parts using fuzzy logic. Journal for Manufacturing Science
& Production, 13(3):183–197, 2013. URL: http://dx.doi.org/10.
1515/jmsp-2013-0010, doi:10.1515/jmsp-2013-0010.
[214] William J. Sames, F. A. List, S. Pannala, R. R. Dehoff, and S. S. Babu. The metallurgy and processing science of metal additive manufacturing. International Materials Reviews, 61(5):315–360, 2016. URL: http://dx.doi.org/10.1080/09506608.2015.1116649, doi:10.1080/09506608.2015.1116649.
[215] Luis M. Sanchez and Rakesh Nagi. A review of agile manufacturing
systems. International Journal of Production Research, 39(16):3561–
3600, 2001. URL: http://dx.doi.org/10.1080/00207540110068790,
doi:10.1080/00207540110068790.
[216] David Sanderson, Nikolas Antzoulatos, Jack C. Chaplin, Dídac Busquets, Jeremy Pitt, Carl German, Alan Norbury, Emma Kelly, and
Svetan Ratchev. Advanced manufacturing: An industrial application
for collective adaptive systems. In Self-Adaptive and Self-Organizing
Systems Workshops (SASOW), 2015 IEEE International Conference
on, pages 61–67, 9 2015. URL: http://dx.doi.org/10.1109/SASOW.
2015.15, doi:10.1109/SASOW.2015.15.
[217] Carl Schubert, Mark C. van Langeveld, and Larry A. Donoso.
Innovations in 3d printing: a 3d overview from optics to organs.
British Journal of Ophthalmology, 2013.
URL: http:
//dx.doi.org/10.1136/bjophthalmol-2013-304446, doi:10.1136/
bjophthalmol-2013-304446.
[218] Lui Sha, Sathish Gopalakrishnan, Xue Liu, and Qixin Wang. CyberPhysical Systems: A New Frontier, pages 3–13. Springer US, 2009.
URL: http://dx.doi.org/10.1007/978-0-387-88735-7_1, doi:10.
1007/978-0-387-88735-7_1.
[219] Xiuqin Shang, Xiwei Liu, Gang Xiong, Changjian Cheng, Yonghong
Ma, and Timo R. Nyberg. Social manufacturing cloud service platform
for the mass customization in apparel industry. In Service Operations
and Logistics, and Informatics (SOLI), 2013 IEEE International Conference on, pages 220–224, 7 2013. URL: http://dx.doi.org/10.
1109/SOLI.2013.6611413, doi:10.1109/SOLI.2013.6611413.
[220] Seyed Farid Seyed Shirazi, Samira Gharehkhani, Mehdi Mehrali,
Hooman Yarmand, Hendrik Simon Cornelis Metselaar, Nahrizul Adib
Kadri, and Noor Azuan Abu Osman. A review on powder-based additive manufacturing for tissue engineering: selective laser sintering
and inkjet 3D printing. Science and Technology of Advanced Materials, 16(3), 2015. URL: http://dx.doi.org/10.1088/1468-6996/16/
3/033502, doi:10.1088/1468-6996/16/3/033502.
[221] José Reinaldo Silva.
New trends in manufacturing: Converging to service and intelligent systems. IFAC Proceedings Volumes,
47(3):2628–2633, 2014. 19th IFAC World Congress. URL: http:
//dx.doi.org/10.3182/20140824-6-ZA-1003.02823, doi:10.3182/
20140824-6-ZA-1003.02823.
[222] Suryakumar Simhambhatla and K. P. Karunakaran. Build strategies
for rapid manufacturing of components of varying complexity. Rapid
Prototyping Journal, 21(3):340–350, 2015. URL: http://dx.doi.org/
10.1108/RPJ-07-2012-0062, doi:10.1108/RPJ-07-2012-0062.
[223] Swee Leong Sing, Jia An, Wai Yee Yeong, and Florencia Edith
Wiria. Laser and electron-beam powder-bed additive manufacturing of metallic implants: A review on processes, materials and designs. Journal of Orthopaedic Research, 34(3):369–385, 2016. URL:
http://dx.doi.org/10.1002/jor.23075, doi:10.1002/jor.23075.
[224] Rupinder Singh. Three dimensional printing for casting applications: A state of art review and future perspectives. Advanced Materials Research, 83-86:342–349, 2010. URL: http://dx.doi.org/
10.4028/www.scientific.net/AMR.83-86.342, doi:10.4028/www.
scientific.net/AMR.83-86.342.
[225] Johan Söderberg.
Automating amateurs in the 3d printing
community: connecting the dots between ‘deskilling’ and ‘user-friendliness’. Work Organisation, Labour & Globalisation, 7(1):124–
139, 2013.
URL: http://www.jstor.org/stable/10.13169/
workorgalaboglob.7.1.0124, doi:10.13169/workorgalaboglob.7.
1.0124.
[226] Jacopo Soldani, Tobias Binz, Uwe Breitenbücher, Frank Leymann,
and Antonio Brogi. Toscamart: A method for adapting and reusing
cloud applications. Journal of Systems and Software, 113:395–406,
2016. URL: http://dx.doi.org/10.1016/j.jss.2015.12.025, doi:
10.1016/j.jss.2015.12.025.
[227] R. Sreenivasan, A. Goel, and David L. Bourell. Sustainability issues in laser-based additive manufacturing - in: Laser Assisted Net Shape Engineering 6, Proceedings of the LANE 2010, Part 1. Physics Procedia, 5:81–
90, 2010. URL: http://dx.doi.org/10.1016/j.phpro.2010.08.124,
doi:10.1016/j.phpro.2010.08.124.
[228] Alexander Stanik, Matthias Hovestadt, and Odej Kao.
Hardware as a service (haas): Physical and virtual hardware on demand. In Cloud Computing Technology and Science (CloudCom),
2012 IEEE 4th International Conference on, pages 149–154. IEEE,
2012. URL: http://dx.doi.org/10.1109/CloudCom.2012.6427579,
doi:10.1109/CloudCom.2012.6427579.
[229] L. D. Sturm, C. B. Williams, J. A. Camelio, J. White, and R. Parker.
Cyber-Physical Vulnerabilities in Additive Manufacturing Systems. In
Proceedings of the Twenty-Fifth Solid Freeform Fabrication (SFF)
Symposium, pages 951–963, 2014. URL: http://sffsymposium.engr.
utexas.edu/2014TOC.
[230] S. Subashini and V. Kavitha. A survey on security issues in service
delivery models of cloud computing. Journal of Network and Computer
Applications, 34(1):1–11, 2011. URL: http://dx.doi.org/10.1016/
j.jnca.2010.07.006, doi:10.1016/j.jnca.2010.07.006.
[231] Jie Sun, Zhuo Peng, Weibiao Zhou, Jerry Y.H. Fuh, Geok Soon Hong,
and Annette Chiu. A Review on 3D Printing for Customized Food Fabrication. Procedia Manufacturing, 1:308–319, 2015. URL: http://dx.
doi.org/10.1016/j.promfg.2015.09.057, doi:10.1016/j.promfg.
2015.09.057.
[232] Jie Sun, Weibiao Zhou, Dejian Huang, Jerry Y. H. Fuh, and
Geok Soon Hong. An overview of 3d printing technologies for
food fabrication. Food and Bioprocess Technology, 8(8):1605–1615,
2015. URL: http://dx.doi.org/10.1007/s11947-015-1528-6, doi:
10.1007/s11947-015-1528-6.
[233] Yunlong Tang, Jean-Yves Hascoet, and Yaoyao Fiona Zhao. Integration of Topological and Functional Optimization in Design for Additive
Manufacturing. In ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis, volume 1, 2014. URL: http://dx.
doi.org/10.1115/ESDA2014-20381, doi:10.1115/ESDA2014-20381.
[234] Fei Tao, Ying Cheng, Li Da Xu, Lin Zhang, and Bo Hu Li. CCIoT-CMfg: Cloud Computing and Internet of Things-Based Cloud Manufacturing Service System. IEEE Transactions on Industrial Informatics, 10(2):1435–1442, 5 2014. URL: http://dx.doi.org/10.1109/
TII.2014.2306383, doi:10.1109/TII.2014.2306383.
[235] Fei Tao, Ying Cheng, L. Zhang, and A. Y. C. Nee. Advanced manufacturing systems: socialization characteristics and trends. Journal of Intelligent Manufacturing, pages 1–16, 2015. URL: http://dx.doi.org/
10.1007/s10845-015-1042-8, doi:10.1007/s10845-015-1042-8.
[236] Fei Tao, Yuanjun LaiLi, Lida Xu, and Lin Zhang. FC-PACO-RM: A
Parallel Method for Service Composition Optimal-Selection in Cloud
Manufacturing System. IEEE Transactions on Industrial Informatics,
9(4):2023–2033, 2013. URL: http://dx.doi.org/10.1109/TII.2012.
2232936, doi:10.1109/TII.2012.2232936.
[237] Fei Tao, Lin Zhang, V. C. Venkatesh, Y. Luo, and Ying Cheng.
Cloud manufacturing: a computing and service-oriented manufacturing model. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 000:1–8,
2011. URL: http://dx.doi.org/10.1177/0954405411405575, doi:
10.1177/0954405411405575.
[238] Fei Tao, Y. Zuo, Li Da Xu, and Lin Zhang. IoT-Based Intelligent Perception and Access of Manufacturing Resource Toward Cloud Manufacturing. IEEE Transactions on Industrial Informatics, 10(2):1547–1557,
5 2014. URL: http://dx.doi.org/10.1109/TII.2014.2306397, doi:
10.1109/TII.2014.2306397.
[239] Gustavo Tapia and Alaa Elwany. A review on process monitoring and control in metal-based additive manufacturing. Journal of
Manufacturing Science and Engineering, 136(6), 2014. URL: http:
//dx.doi.org/10.1115/1.4028540, doi:10.1115/1.4028540.
[240] Frédéric Thiesse, Marco Wirth, Hans-Georg Kemper, Michelle
Moisa, Dominik Morar, Heiner Lasi, Frank Piller, Peter Buxmann, Letizia Mortara, Simon Ford, and Tim Minshall. Economic Implications of Additive Manufacturing and the Contribution
of MIS. Business & Information Systems Engineering, 57(2):139–
148, 2015. URL: http://aisel.aisnet.org/bise/vol57/iss2/7,
doi:10.1007/s12599-015-0374-4.
[241] Douglas S. Thomas. Economics of the u.s. additive manufacturing industry. NIST Special Publication 1163, National Institute of Standards
and Technology (NIST), 8 2013. URL: http://dx.doi.org/10.6028/
NIST.SP.1163, doi:10.6028/NIST.SP.1163.
[242] Douglas S. Thomas and Stanley W. Gilbert. Costs and cost effectiveness of additive manufacturing - a literature review and discussion.
NIST Special Publication 1176, National Institute of Standards and
Technology (NIST), 2014. URL: http://dx.doi.org/10.6028/NIST.
SP.1176, doi:10.6028/NIST.SP.1176.
[243] Kun Tong, Sanjay Joshi, and E. Amine Lehtihet. Error compensation for fused deposition modeling (fdm) machine by correcting slice files. Rapid Prototyping Journal, 14(1):4–14, 2008.
URL: http://dx.doi.org/10.1108/13552540810841517, doi:10.
1108/13552540810841517.
[244] Hong-Linh Truong and Schahram Dustdar. Composable cost estimation and monitoring for computational applications in cloud computing environments - in: ICCS 2010. Procedia Computer Science, 1(1):2175–2184,
2010. URL: http://dx.doi.org/10.1016/j.procs.2010.04.243,
doi:10.1016/j.procs.2010.04.243.
[245] Wei-Tek Tsai, Guanqiu Qi, Lian Yu, and Jerry Gao. TaaS (Testing-as-a-Service) Design for Combinatorial Testing. In Software Security and
Reliability (SERE), 2014 Eighth International Conference on, pages
127–136, 6 2014. doi:10.1109/SERE.2014.26.
[246] Wei-Tek Tsai, Xin Sun, and Janaka Balasooriya. Service-oriented cloud
computing architecture. In Information Technology: New Generations
(ITNG), 2010 Seventh International Conference on, pages 684–689, 4
2010. URL: http://dx.doi.org/10.1109/ITNG.2010.214, doi:10.
1109/ITNG.2010.214.
[247] Brian N. Turner and Scott A Gold. A review of melt extrusion
additive manufacturing processes: Ii. materials, dimensional accuracy, and surface roughness. Rapid Prototyping Journal, 21(3):250–
261, 2015. URL: http://dx.doi.org/10.1108/RPJ-02-2013-0017,
doi:10.1108/RPJ-02-2013-0017.
[248] Brian N. Turner, Robert Strong, and Scott A. Gold.
A review of melt extrusion additive manufacturing processes: I. process design and modeling. Rapid Prototyping Journal, 20(3):192–
204, 2014. URL: http://dx.doi.org/10.1108/RPJ-01-2013-0012,
doi:10.1108/RPJ-01-2013-0012.
[249] Hamilton Turner, Jules White, James A. Camelio, Christopher
Williams, Brandon Amos, and Robert Parker. Bad parts: Are our
manufacturing systems at risk of silent cyberattacks? IEEE Security
Privacy, 13(3):40–47, 5 2015. URL: http://dx.doi.org/10.1109/
MSP.2015.60, doi:10.1109/MSP.2015.60.
[250] Jumyung Um, Yong-Chan Choi, and Ian Stroud. Factory Planning
System Considering Energy-efficient Process under Cloud Manufacturing. In Variety Management in Manufacturing – Proceedings of the
47th CIRP Conference on Manufacturing Systems, volume 17 of Procedia CIRP, pages 553–558, 2014. URL: http://dx.doi.org/10.1016/
j.procir.2014.01.084, doi:10.1016/j.procir.2014.01.084.
[251] Iulia D. Ursan, Ligia Chiu, and Andrea Pierce. Three-dimensional
drug printing: A structured review. Journal of the American Pharmacists Association, 53(2):136–144, 2013. URL: http://dx.doi.org/
10.1331/JAPhA.2013.12217, doi:10.1331/JAPhA.2013.12217.
[252] Ben Utela, Duane Storti, Rhonda Anderson, and Mark Ganter. A
review of process development steps for new material systems in
three dimensional printing (3dp). Journal of Manufacturing Processes,
10(2):96–104, 2008. URL: http://dx.doi.org/10.1016/j.jmapro.
2009.03.002, doi:10.1016/j.jmapro.2009.03.002.
[253] Mohammad Vaezi, Hermann Seitz, and Shoufeng Yang. A review
on 3d micro-additive manufacturing technologies. The International
Journal of Advanced Manufacturing Technology, 67(5):1721–1754,
2013. URL: http://dx.doi.org/10.1007/s00170-012-4605-2, doi:
10.1007/s00170-012-4605-2.
[254] Omid Fatahi Valilai and Mahmoud Houshmand.
A collaborative and integrated platform to support distributed manufacturing
system using a service-oriented approach based on cloud computing paradigm. Robotics and Computer-Integrated Manufacturing,
29(1):110–127, 2 2013. URL: http://dx.doi.org/10.1016/j.rcim.
2012.07.009, doi:10.1016/j.rcim.2012.07.009.
[255] Leo van Moergestel, Erik Puik, Daniël Telgen, and John-Jules Ch.
Meyer. Implementing Manufacturing as a Service: A Pull-Driven
Agent-Based Manufacturing Grid. In Proceedings of the 11th International Conference on ICT in Education, Research and Industrial Applications: Integration, Harmonization and Knowledge Transfer, volume 1356, pages 172–187. CEUR Workshop Proceedings, 2015. URL:
http://dspace.library.uu.nl/handle/1874/315706.
[256] Benjamin Vayre, Frédéric Vignat, and François Villeneuve. Metallic
additive manufacturing: state-of-the-art review and prospects. Mechanics & Industry, 13(2):89–96, 2012. URL: http://dx.doi.org/
10.1051/meca/2012003, doi:10.1051/meca/2012003.
[257] Benjamin Vayre, Frédéric Vignat, and François Villeneuve. Designing for additive manufacturing. In 45th CIRP Conference on Manufacturing Systems, pages 632–637, 2012. URL: http://dx.doi.org/
10.1016/j.procir.2012.07.108, doi:10.1016/j.procir.2012.07.
108.
[258] Germano Veiga, Pedro Malaca, J. Norberto Pires, and Klas Nilsson. Separation of concerns on the orchestration of operations in
flexible manufacturing. Assembly Automation, 32(1):38–50, 2012.
URL: http://dx.doi.org/10.1108/01445151211198700, doi:10.
1108/01445151211198700.
[259] Maja Vukovic. Internet Programmable IoT: On the Role of APIs
in IoT: The Internet of Things (Ubiquity Symposium). Ubiquity,
2015(November):1–10, 11 2015. URL: http://doi.acm.org/10.1145/
2822873, doi:10.1145/2822873.
[260] Gerald Walther. Printing Insecurity? The Security Implications of 3DPrinting of Weapons. Science and Engineering Ethics, 21(6):1435–1445,
2015. URL: http://dx.doi.org/10.1007/s11948-014-9617-x, doi:
10.1007/s11948-014-9617-x.
[261] Lihui Wang. Machine availability monitoring and machining process planning towards Cloud manufacturing.
CIRP Journal of
Manufacturing Science and Technology, 6(4):263–273, 2013. URL:
http://dx.doi.org/10.1016/j.cirpj.2013.07.001, doi:10.1016/
j.cirpj.2013.07.001.
[262] Lihui Wang, Weiming Shen, and Sherman Lang. Wise-ShopFloor: a
web-based and sensor-driven shop floor environment. In Computer Supported Cooperative Work in Design, 2002. The 7th International Conference on, pages 413–418, 2002. doi:10.1109/CSCWD.2002.1047724.
[263] Lizhe Wang, Gregor von Laszewski, Andrew Younge, Xi He,
Marcel Kunze, Jie Tao, and Cheng Fu. Cloud Computing: a Perspective Study. New Generation Computing, 28(2):137–146, 2010.
URL: http://dx.doi.org/10.1007/s00354-008-0081-5, doi:10.
1007/s00354-008-0081-5.
[264] Tianri Wang, Shunsheng Guo, and Chi-Guhn Lee. Manufacturing
task semantic modeling and description in cloud manufacturing system. The International Journal of Advanced Manufacturing Technology, 71(9):2017–2031, 2014. URL: http://dx.doi.org/10.1007/
s00170-014-5607-z, doi:10.1007/s00170-014-5607-z.
[265] Xi Vincent Wang and Xun W. Xu. An interoperable solution for
Cloud manufacturing. Robotics and Computer-Integrated Manufacturing, 29(4):232–247, 8 2013. URL: http://dx.doi.org/10.1016/j.
rcim.2013.01.005, doi:10.1016/j.rcim.2013.01.005.
[266] Xi Vincent Wang and Xun W. Xu. An interoperable solution for
cloud manufacturing. Robotics and Computer-Integrated Manufacturing, 29(4):232–247, 8 2013. URL: http://dx.doi.org/10.1016/j.
rcim.2013.01.005, doi:10.1016/j.rcim.2013.01.005.
[267] Christopher L. Weber, Vanessa Peña, Maxwell K. Micali, Elmer
Yglesias, Sally A. Rood, Justin A. Scott, and Bhavya Lal.
The Role of the National Science Foundation in the Origin
and Evolution of Additive Manufacturing in the United States.
IDA Paper P-5091, Science and Technology Policy Institute, 11
2013.
URL: https://www.ida.org/idamedia/Corporate/Files/
Publications/STPIPubs/ida-p-5091.ashx.
[268] Christian Weller, Robin Kleer, and Frank T. Piller. Economic implications of 3d printing: Market structure models in light of additive manufacturing revisited. International Journal of Production Economics,
164:43–56, 2015. URL: http://dx.doi.org/10.1016/j.ijpe.2015.
02.020, doi:10.1016/j.ijpe.2015.02.020.
[269] Joel West and George Kuk. Proprietary Benefits from Open Communities: How MakerBot Leveraged Thingiverse in 3D Printing.
In Proceedings of the 2014 Academy of Management Conference, 1
2014. URL: http://dx.doi.org/10.2139/ssrn.2544970, doi:10.
2139/ssrn.2544970.
[270] B. T. Wittbrodt, A. G. Glover, J. Laureto, G. C. Anzalone,
D. Oppliger, J. L. Irwin, and J. M. Pearce.
Life-cycle economic analysis of distributed manufacturing with open-source 3D printers. Mechatronics, 23(6):713–726, 2013. URL: http://
dx.doi.org/10.1016/j.mechatronics.2013.06.002, doi:10.1016/
j.mechatronics.2013.06.002.
[271] Kaufui V. Wong and Aldo Hernandez. A review of additive manufacturing. ISRN Mechanical Engineering, 2012:1–10, 2012. URL: http:
//dx.doi.org/10.5402/2012/208760, doi:10.5402/2012/208760.
[272] Dazhong Wu, M. J. Greer, David W. Rosen, and Dirk Schaefer. Cloud
manufacturing: Strategic vision and state-of-the-art. Journal of Manufacturing Systems, 32(4):564–579, 2013. URL: http://dx.doi.org/
10.1016/j.jmsy.2013.04.008, doi:10.1016/j.jmsy.2013.04.008.
[273] Dazhong Wu, Matthew J. Greer, David W. Rosen, and Dirk Schaefer.
Cloud manufacturing: Drivers, current status, and future trends. In
Proceedings of the ASME 2013 International Manufacturing Science
and Engineering Conference, pages 1–10, 6 2013. URL: http://dx.
doi.org/10.1115/MSEC2013-1106, doi:10.1115/MSEC2013-1106.
[274] Dazhong Wu, David W. Rosen, and Dirk Schaefer. Cloud-Based Design and Manufacturing (CBDM) - A Service-Oriented Product Development Paradigm for the 21st Century, chapter Cloud-Based Design and Manufacturing: Status and Promise, pages 1–24. Springer
International Publishing, 2014. URL: http://dx.doi.org/10.1007/
978-3-319-07398-9_1, doi:10.1007/978-3-319-07398-9_1.
[275] Dazhong Wu, David W. Rosen, Lihui Wang, and Dirk Schaefer. Cloud-Based Manufacturing: Old Wine in New Bottles? In Hoda ElMaraghy,
editor, Variety Management in Manufacturing – Proceedings of the 47th
CIRP Conference on Manufacturing Systems, volume 17, pages 94–99.
Elsevier B.V., 2014. URL: http://dx.doi.org/10.1016/j.procir.
2014.01.035, doi:10.1016/j.procir.2014.01.035.
[276] Dazhong Wu, David W. Rosen, Lihui Wang, and Dirk Schaefer.
Cloud-based design and manufacturing: A new paradigm in digital
manufacturing and design innovation. Computer-Aided Design, 59:1–
14, 2 2015. URL: http://dx.doi.org/10.1016/j.cad.2014.07.006,
doi:10.1016/j.cad.2014.07.006.
[277] Dazhong Wu, Janis Terpenny, and Wolfgang Gentzsch. Economic
Benefit Analysis of Cloud-Based Design, Engineering Analysis, and
Manufacturing. Journal of Manufacturing Science and Engineering,
137(4):1–9, 8 2015. URL: http://dx.doi.org/10.1115/1.4030306,
doi:10.1115/1.4030306.
[278] Dazhong Wu, J. Lane Thames, David W. Rosen, and Dirk Schaefer.
Towards a Cloud-Based Design and Manufacturing Paradigm: Looking Backward, Looking Forward. In ASME 2012 International Design
Engineering Technical Conferences and Computers and Information in
Engineering Conference, pages 315–328, 2012. URL: http://dx.doi.
org/10.1115/DETC2012-70780, doi:10.1115/DETC2012-70780.
[279] Dazhong Wu, J. Lane Thames, David W. Rosen, and Dirk Schaefer. Enhancing the product realization process with cloud-based design and manufacturing systems. Journal of Computing and Information Science in Engineering, 13(4):1–14, 9 2013. URL: http:
//dx.doi.org/10.1115/1.4025257, doi:10.1115/1.4025257.
[280] Lei Wu and Chengwei Yang. A Solution of Manufacturing Resources Sharing in Cloud Computing Environment, pages 247–
252.
Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
URL: http://dx.doi.org/10.1007/978-3-642-16066-0_36, doi:
10.1007/978-3-642-16066-0_36.
[281] Tong Wu and Edmund H. M. Cheung.
Enhanced STL.
The International Journal of Advanced Manufacturing Technology, 29(11):1143–1150, 2006. URL: http://dx.doi.org/10.1007/
s00170-005-0001-5, doi:10.1007/s00170-005-0001-5.
[282] Xingxing Wu, Xuemei Jiang, Wenjun Xu, Qingsong Ai, and
Quan Liu. A Unified Sustainable Manufacturing Capability Model
for Representing Industrial Robot Systems in Cloud Manufacturing,
pages 388–395. Springer International Publishing, Cham, 2015.
URL: http://dx.doi.org/10.1007/978-3-319-22759-7_45, doi:
10.1007/978-3-319-22759-7_45.
[283] Xun Xu. From cloud computing to cloud manufacturing. Robotics
and Computer-Integrated Manufacturing, 28(1):75–86, 2 2012. URL:
http://dx.doi.org/10.1016/j.rcim.2011.07.002, doi:10.1016/
j.rcim.2011.07.002.
[284] Mark Yampolskiy, Todd R. Andel, J. Todd McDonald, William B.
Glisson, and Alec Yasinsac. Intellectual Property Protection in Additive Layer Manufacturing: Requirements for Secure Outsourcing.
In Proceedings of the 4th Program Protection and Reverse Engineering Workshop, PPREW-4, pages 1–9, New York, NY, USA, 2014.
ACM. URL: http://doi.acm.org/10.1145/2689702.2689709, doi:
10.1145/2689702.2689709.
[285] Minzhi Yan, Hailong Sun, Xu Wang, and Xudong Liu. Building a
taas platform for web service load testing. In 2012 IEEE International
133
Conference on Cluster Computing, pages 576–579, 9 2012. doi:10.
1109/CLUSTER.2012.20.
[286] Yongnian Yan, Shengjie Li, Renji Zhang, Feng Lin, Rendong Wu,
Qingping Lu, Zhuo Xiong, and Xiaohong Wang. Rapid prototyping
and manufacturing technology: Principle, representative technics, applications, and development trends. Tsinghua Science & Technology,
14, Supplement 1:1–12, 2009. URL: http://dx.doi.org/10.1016/
S1007-0214(09)70059-8, doi:10.1016/S1007-0214(09)70059-8.
[287] Chen Yang, Weiming Shen, Tingyu Lin, and Xianbin Wang. A hybrid
framework for integrating multiple manufacturing clouds. The International Journal of Advanced Manufacturing Technology, 86(1):895–
911, 2016. URL: http://dx.doi.org/10.1007/s00170-015-8177-9,
doi:10.1007/s00170-015-8177-9.
[288] Sheng Yang and Yaoyao Fiona Zhao. Additive manufacturing-enabled
design theory and methodology: a critical review. The International Journal of Advanced Manufacturing Technology, 80(1):327–342,
2015. URL: http://dx.doi.org/10.1007/s00170-015-6994-5, doi:
10.1007/s00170-015-6994-5.
[289] Xifan Yao and Yingzi Lin.
Emerging manufacturing paradigm
shifts for the incoming industrial revolution. The International
Journal of Advanced Manufacturing Technology, 85(5–8):1665–1676,
2016. URL: http://dx.doi.org/10.1007/s00170-015-8076-0, doi:
10.1007/s00170-015-8076-0.
[290] C. Y. Yap, C. K. Chua, Z. L. Dong, Z. H. Liu, D. Q. Zhang, L. E.
Loh, and S. L. Sing. Review of selective laser melting: Materials and
applications. Applied Physics Reviews, 2(4):1–21, 2015. URL: http:
//dx.doi.org/10.1063/1.4935926, doi:10.1063/1.4935926.
[291] Hae-Sung Yoon, Jang-Yeob Lee, Hyung-Soo Kim, Min-Soo Kim, EunSeob Kim, Yong-Jun Shin, Won-Shik Chu, and Sung-Hoon Ahn. A
comparison of energy consumption in bulk forming, subtractive, and
additive processes: Review and case study. International Journal of
Precision Engineering and Manufacturing-Green Technology, 1(3):261–
279, 2014. URL: http://dx.doi.org/10.1007/s40684-014-0033-0,
doi:10.1007/s40684-014-0033-0.
134
[292] Chunyang Yu, Xun Xu, and Yuqian Lu.
Computer-Integrated
Manufacturing, Cyber-Physical Systems and Cloud Manufacturing
– Concepts and relationships. Manufacturing Letters, 6:5–9, 2015.
URL: http://dx.doi.org/10.1016/j.mfglet.2015.11.005, doi:
10.1016/j.mfglet.2015.11.005.
[293] Shiqiang Yu and Xun Xu. Development of a Product Configuration
System for Cloud Manufacturing, pages 436–443. Springer International Publishing, Cham, 2015. URL: http://dx.doi.org/10.1007/
978-3-319-22759-7_51, doi:10.1007/978-3-319-22759-7_51.
[294] Steven Eric Zeltmann, Nikhil Gupta, Nektarios Georgios Tsoutsos,
Michail Maniatakos Jeyavijayan Rajendran, and Ramesh Karri. Manufacturing and security challenges in 3d printing. JOM, 68(7):1872–1881,
2016. URL: http://dx.doi.org/10.1007/s11837-016-1937-7, doi:
10.1007/s11837-016-1937-7.
[295] Yousif Zghair. Rapid Repair hochwertiger Investitionsgüter, pages
57–69.
Springer Berlin Heidelberg, Berlin, Heidelberg, 2016.
URL: http://dx.doi.org/10.1007/978-3-662-49056-3_6, doi:10.
1007/978-3-662-49056-3_6.
[296] Lin Zhang, H Guo, Fei Tao, Y. L. Luo, and N. Si.
Flexible management of resource service composition in cloud manufacturing. In Industrial Engineering and Engineering Management
(IEEM), 2010 IEEE International Conference on, pages 2278–2282,
12 2010. URL: http://dx.doi.org/10.1109/IEEM.2010.5674175,
doi:10.1109/IEEM.2010.5674175.
[297] Lin Zhang, Yongliang Luo, Fei Tao, Bo Hu Li, Lei Ren, Xuesong Zhang,
Hua Guo, Ying Cheng, Anrui Hu, and Yongkui Liu. Cloud manufacturing: a new manufacturing paradigm. Enterprise Information Systems,
8(2):167–187, 2014. URL: http://dx.doi.org/10.1080/17517575.
2012.683812, doi:10.1080/17517575.2012.683812.
[298] Lin Zhang, Jingeng Mai, Bo Hu Li, Fei Tao, Chun Zhao, Lei Ren,
and Ralph C. Huntsinger. Cloud-Based Design and Manufacturing
(CBDM) - A Service-Oriented Product Development Paradigm for
the 21st Century, chapter Future Manufacturing Industry with Cloud
Manufacturing, pages 127–152. Springer International Publishing,
135
2014. URL: http://dx.doi.org/10.1007/978-3-319-07398-9_5,
doi:10.1007/978-3-319-07398-9_5.
[299] Yingfeng Zhang, George Q. Huang, T. Qu, Oscar Ho, and Shudong
Sun. Agent-based smart objects management system for real-time
ubiquitous manufacturing. Robotics and Computer-Integrated Manufacturing, 27(3):538–549, 2011. URL: http://dx.doi.org/10.1016/
j.rcim.2010.09.009, doi:10.1016/j.rcim.2010.09.009.
[300] Yingfeng Zhang, Geng Zhang, Yang Liu, and Di Hu. Research on
services encapsulation and virtualization access model of machine for
cloud manufacturing. Journal of Intelligent Manufacturing, pages 1–
15, 2015. URL: http://dx.doi.org/10.1007/s10845-015-1064-2,
doi:10.1007/s10845-015-1064-2.
[301] Jianhua Zhou, Yuwen Zhang, and J. K. Chen. Numerical Simulation
of Random Packing of Spherical Particles for Powder-Based Additive
Manufacturing. Journal of Manufacturing Science and Engineering,
131(3):1–8, 6 2009. URL: http://dx.doi.org/10.1115/1.3123324,
doi:10.1115/1.3123324.
[302] Linan Zhu, Yanwei Zhao, and Wanliang Wang. A Bilayer Resource
Model for Cloud Manufacturing Services. Mathematical Problems in
Engineering, 2013, 2013. URL: http://dx.doi.org/10.1155/2013/
607582, doi:10.1155/2013/607582.
[303] Z. Zhu, V. Dhokia, and S. T. Newman. A novel process planning approach for hybrid manufacturing consisting of additive, subtractive and
inspection processes. In 2012 IEEE International Conference on Industrial Engineering and Engineering Management, pages 1617–1621,
12 2012. URL: http://dx.doi.org/10.1109/IEEM.2012.6838020,
doi:10.1109/IEEM.2012.6838020.
[304] Zicheng Zhu, Vimal G. Dhokia, Aydin Nassehi, and Stephen T.
Newman. A Methodology for the Estimation of Build Time for
Operation Sequencing in Process Planning for a Hybrid Process,
pages 159–171. Springer International Publishing, Heidelberg, 2013.
URL: http://dx.doi.org/10.1007/978-3-319-00557-7_13, doi:
10.1007/978-3-319-00557-7_13.
136
[305] Zicheng Zhu, Vimal G. Dhokia, Aydin Nassehi, and Stephen T.
Newman. A review of hybrid manufacturing processes - state of
the art and future perspectives. International Journal of Computer
Integrated Manufacturing, 26(7):596–615, 2013. URL: http://dx.
doi.org/10.1080/0951192X.2012.749530, doi:10.1080/0951192X.
2012.749530.
[306] Dimitrios Zissis and Dimitrios Lekkas. Addressing cloud computing
security issues. Future Generation Computer Systems, 28(3):583–592,
2012. URL: http://dx.doi.org/10.1016/j.future.2010.12.006,
doi:10.1016/j.future.2010.12.006.
137
| 3 |
arXiv:1708.08118v1 [] 27 Aug 2017
MERGE DECOMPOSITIONS, TWO-SIDED KROHN-RHODES, AND
APERIODIC POINTLIKES
SAMUEL J. V. GOOL AND BENJAMIN STEINBERG
Abstract. This paper provides short proofs of two fundamental theorems of finite semigroup theory whose previous proofs were significantly longer, namely the two-sided Krohn-Rhodes decomposition theorem and Henckell's aperiodic pointlike theorem, using a new algebraic technique that we call the merge decomposition. A prototypical application of this
technique decomposes a semigroup T into a two-sided semidirect product whose components
are built from two subsemigroups T1 , T2 , which together generate T , and the subsemigroup
generated by their setwise product T1 T2 . In this sense we decompose T by merging the
subsemigroups T1 and T2 . More generally, our technique merges semigroup homomorphisms
from free semigroups.
Introduction
Eilenberg’s variety theorem [3] provides a dictionary between formal language theory and finite
semigroup theory. In particular, membership problems in certain Boolean algebras of regular
languages (languages accepted by finite automata) are equivalent to membership problems
in varieties of finite semigroups. Other natural problems in language theory transform into
questions about pointlikes with respect to a variety of finite semigroups, a notion introduced by
Henckell and Rhodes [4]. An important problem in language theory is the separation problem:
given disjoint regular languages, determine whether they can be separated by a language from
a given variety of regular languages. The separation problem is equivalent to decidability of
pointlike pairs [1], which is strictly stronger than the membership problem [10, 2]. Decidability
of pointlikes can be used to obtain decidability of membership problems of related varieties.
For instance, the second author showed, using the decidability of aperiodic pointlikes and
Zelmanov’s solution to the restricted Burnside problem, that the join of the variety of aperiodic
semigroups with any variety of finite groups of bounded exponent has decidable membership
problem, answering a question of Rhodes and Volkov [13].
The first decidability result on pointlikes was Henckell’s theorem on the decidability of aperiodic pointlikes [4], which for a long time was considered one of the most difficult results in the
subject. Henckell not only provided a decidability algorithm: he also gave an elegant structural description of the aperiodic pointlike sets that we call Henckell’s formula. Henckell’s
original proof idea is a variation on the holonomy proof [5] of the Krohn-Rhodes theorem [7]
for directly decomposing semigroups into wreath products. The difficult part of Henckell’s
proof is to prove that a certain semigroup is aperiodic, which he does by wreath product
embeddings. In [6], Henckell, Rhodes and the second author provided a direct proof that
Henckell’s semigroup is aperiodic, leading to a simpler and shorter proof of his main theorem.
They also extended the theorem beyond aperiodic pointlikes to the variety of semigroups
Date: August 29, 2017.
The first-named author was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant #655941; the second-named author was supported by United
States - Israel Binational Science Foundation #2012080 and NSA MSP #H98230-16-1-0047.
whose subgroups have prime divisors belonging to a fixed set π of primes (the restriction of
this proof to the aperiodic case can be found in [11, Ch. 4]). Although simpler than the
original proof of Henckell [4], the proof in [6] is still non-trivial.
Recently, Place and Zeitoun [9] gave a new proof of the decidability of aperiodic pointlikes,
which, unlike the previous proofs, is inductive. They use a language theoretic reformulation of
the problem of computing pointlike sets and the McNaughton-Schützenberger theorem that
the aperiodic languages are precisely the first-order definable languages [14]. The Place-Zeitoun approach follows the inductive proof scheme of the Krohn-Rhodes theorem (the so-called 'V ∪ T' argument [8, 11]) later used by Wilke in the logic context [15], but done in the
power set of the semigroup.
This paper introduces a new algebraic tool, that we call the merge decomposition, in Section 2.
In Section 3, we use this tool to give a short proof of the inductive step of the two-sided
Krohn-Rhodes decomposition theorem (cf. [11, Ch. 5]). Then, in Section 4, we use the merge
decomposition in the inductive step of the Place-Zeitoun inductive scheme to give a short
algebraic proof of Henckell’s formula for the aperiodic pointlikes. We feel that our approach
has several advantages over previous approaches [4, 6, 9]. First of all, it leads to a significantly
shorter proof than the previous ones. Secondly, we obtain the best known bound on the length
of a two-sided Krohn-Rhodes decomposition of the aperiodic semigroup witnessing pointlikes
(or, equivalently, quantifier-depth of the first order formula giving separation).
An advantage of our approach is that it is potentially extendable beyond the realm of first
order logic on words. For instance, decidability of pointlikes for some larger varieties than
aperiodics is obtained in [6]. We leave this to future work.
1. Preliminaries
We assume familiarity with notions from the theory of semigroups, in particular, relational
morphisms and divisions, the wreath product (denoted ≀), and the two-sided semidirect product of semigroups (denoted ⊲⊳) and of varieties of finite semigroups (denoted ∗∗); see, e.g., [11,
Ch. 1]. Throughout the paper, we call ‘variety’ what is called ‘pseudovariety’ in [11].
Augmented semigroups. Let T be a finite semigroup. Let T I be the monoid obtained by
adjoining a new identity, I, to T and T 0 the semigroup obtained by adjoining a new zero to
T . We denote by SL the variety of finite semilattices and by U1 the two-element semilattice.
Fact 1.1. If a variety V contains T and SL, then T 0 ∈ V. If a variety V contains T , SL,
and is generated by monoids, then T I ∈ V.
Proof. The first statement is true because T 0 is a homomorphic image of T × U1 [3, Ex. I.9.2].
For the second statement, we distinguish two cases. If T is a monoid, then T I embeds in
T × U1 , where U1 denotes the two-element semilattice [3, Ex. I.9.1]. If T is not a monoid,
then T divides some monoid M ∈ V, and since T is not a monoid, it follows that T I divides
the same monoid M .
The semigroup T acts faithfully on T I by multiplication on the right, and thus T embeds into
the semigroup of total functions on T I ; we identify every element t ∈ T with the corresponding
right multiplication map. Further, for every t ∈ T I , we denote by t♯ the function with constant
value t. We define T ♯ := T ∪{t♯ : t ∈ T I }, the semigroup consisting of the right multiplication
maps and the constant maps. Thus, T ♯ naturally acts on T on the right.1 Dually, T ♭ denotes
1Note that our definition of T ♯ for a semigroup T deviates slightly from the definition of M ♯ for a monoid M
in [11, Subsec. 4.1.2].
the semigroup consisting of left multiplication maps for every t ∈ T and constant maps t♭ for
every t ∈ T I . Note that T ♭ = ((T op )♯ )op , and T ♭ acts on T on the left.
Fact 1.2. For any finite semigroup T, let T̃ denote the monoid obtained from T by adjoining an identity and a zero and let M be any monoid with |M| > |T|. Then T♭ embeds in M ≀ T̃.
Proof. Fix a bijection t ↦ mt between T and a subset of M \ {1M}. We define a function i : T♭ → M ≀ T̃. For every t ∈ T, define i(t) := (c1, t), where c1 ∈ M^T̃ denotes the function with constant value 1M, the identity of M, and i(t♭) := (ft, 0), where ft ∈ M^T̃ denotes the function defined by ft(0) := 1M and ft(t′) := mt′t for all t′ ∈ T^I. It is straightforward to verify that i is an injective homomorphism.
Triple product. Let (S, +) be a (not necessarily commutative) semigroup equipped with two actions on it, a left action of a semigroup (SL, ·) and a right action of a semigroup (SR, ·), which commute. The triple product² T = (SR, S, SL) is the semigroup of triples (sR, s, sL), with multiplication defined by (sR, s, sL) · (s′R, s′, s′L) := (sR s′R, s s′R + sL s′, sL s′L).
Fact 1.3. If S ∈ V and SL, SR ∈ W, then (SR, S, SL) ∈ V ∗∗ W.
Proof. Define an action of SL × SR on S by ⟨sL, sR⟩s := sL s and s⟨sL, sR⟩ := s sR. Then T is isomorphic to the two-sided semidirect product S ⊲⊳ (SL × SR). (Cf., e.g., [3, Sec. V.9].)
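To make the definition concrete, here is a minimal Python sketch (ours, not from the paper; all function names are our own). The component operations and the two commuting actions are passed in as functions; for associativity the actions must also distribute over +, as they do in every instance used in this paper.

```python
# A sketch of the triple product multiplication on T = (S_R, S, S_L).
def triple_mul(x, y, *, mulR, addS, mulL, actL, actR):
    """(s_R, s, s_L) . (s_R', s', s_L') = (s_R s_R', s s_R' + s_L s', s_L s_L')."""
    (sR, s, sL), (sR2, s2, sL2) = x, y
    return (mulR(sR, sR2), addS(actR(s, sR2), actL(sL, s2)), mulL(sL, sL2))

# Toy instance: S = (Z, +) acted on by (Z, *) on both sides via multiplication.
ops = dict(mulR=lambda a, b: a * b, addS=lambda a, b: a + b,
           mulL=lambda a, b: a * b,
           actL=lambda c, s: c * s, actR=lambda s, c: s * c)

x, y, z = (2, 1, 3), (5, 7, 1), (1, 4, 2)
# Associativity spot-check, guaranteed by the commuting, distributing actions:
assert triple_mul(triple_mul(x, y, **ops), z, **ops) == \
       triple_mul(x, triple_mul(y, z, **ops), **ops)
```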
2. The merge decomposition
Throughout this section, we fix:
• a finite alphabet A and two disjoint subalphabets A1, A2 such that A = A1 ∪ A2;
• two homomorphisms ψ1 : A1+ → T1 and ψ2 : A2+ → T2, with T1 and T2 finite;
• a homomorphism χ : (T1 × T2)+ → T0.
For any w1 ∈ A1+, w2 ∈ A2+, define µ(w1 w2) := (ψ1(w1), ψ2(w2)) ∈ T1 × T2. Since the subsemigroup (A1+ A2+)+ of A+ is freely generated by the infinite set of generators A1+ A2+, the function µ extends uniquely to a homomorphism µ : (A1+ A2+)+ → (T1 × T2)+. We define ψ0 : (A1+ A2+)+ → T0 to be the composition χ ◦ µ. For i = 0, 1, 2, we denote the external identity of TiI by Ii, and we also denote by ψi the homomorphism from the corresponding free monoid to the finite monoid TiI; i.e., ψi(ε) := Ii.
For any word w in A+, uniquely write w = v2 u v1, with v2 ∈ A2*, u ∈ (A1+ A2+)*, and v1 ∈ A1*, and define τ(w) := (ψ2(v2), ψ0(u), ψ1(v1)). The function τ : A+ → T2I × T0I × T1I is not a homomorphism in general. The aim in this section is to show that the kernel of τ can be refined to a semigroup congruence of finite index in a well-controlled variety.
To this end, we will define a semigroup TM and a homomorphism ψM : A+ → TM. Let S := (T0I)^(T1I × T2I), with the pointwise product of T0I, written additively. We define a left action of T1♯ and a right action of T2♭ on S. For s ∈ S, sL ∈ T1♯ and sR ∈ T2♭, let sL s sR ∈ S be defined by [sL s sR](t1, t2) := s(t1 sL, sR t2) for every (t1, t2) ∈ T1I × T2I. Let TM := (T2♭, S, T1♯) be the triple product; we call TM the merge semigroup associated to ψ1, ψ2 and χ.
Fact 2.1. Let V be a variety, and W a variety generated by monoids and containing SL. If T0 ∈ V, T1, T2 ∈ W, and TM is any triple product of T2♭, (T0I)^(T1I × T2I), and T1♯, then TM ∈ V ∗∗ (SL ∗∗ W).
²We follow the notation of [3, Sec V.9]; note the positions of the semigroups acting on the left and on the right. Also note that the multiplication can be viewed as matrix multiplication, if we represent an element (sR, s, sL) by the lower triangular matrix
[ sR  0  ]
[ s   sL ].
Proof. Applying Fact 1.2 with M a semilattice (e.g., a chain) with |T2| + 1 elements and using T̃2 ∈ W by Fact 1.1 yields T2♭ ∈ SL ∗∗ W. Similarly, T1♯ ∈ SL ∗∗ W. Fact 1.3 gives the result.
For any w1 ∈ A1+, we define an element sw1 ∈ S by sw1(t1, I2) := I0 and sw1(t1, t2) := χ(t1 ψ1(w1), t2), for all t1 ∈ T1I and t2 ∈ T2. Now let ψM : A+ → TM be the unique homomorphism defined by
ψM (a1 ) := (I2♭ , sa1 , ψ1 (a1 )) for a1 ∈ A1 ,
ψM (a2 ) := (ψ2 (a2 ), i0 , I1♯ ) for a2 ∈ A2 ,
where i0 denotes the identity of S, i.e., the function with constant value I0 . We call the
homomorphism ψM : A+ → TM the merge decomposition of A+ along χ, ψ1 and ψ2 .
The crucial property of the merge decomposition is the following.
Proposition 2.2. There exists a function f : TM → T2I × T0I × T1I such that f ◦ ψM = τ .
Proof. For any (t2, s, t1) ∈ TM, define f(t2, s, t1) := (t2 I2, s(I1, I2), I1 t1). We show f ◦ ψM = τ.
We first prove, for all w1 ∈ A1+, that ψM(w1) = (I2♭, sw1, ψ1(w1)). By induction, assume that this holds for all shorter words in A1+. Then, writing w1 = a1 w1′, the left and right coordinates are clearly as stated, and the middle coordinate of ψM(w1) = ψM(a1)ψM(w1′) is sa1 I2♭ + ψ1(a1) sw1′. From the definition of the right action and of sa1 we get that sa1 I2♭ = i0. From the definition of the left action and of sw1′ and sw1, we get that ψ1(a1) sw1′ = sw1. For w2 ∈ A2+, we easily obtain ψM(w2) = (ψ2(w2), i0, I1♯), since sL i0 sR = i0 for all sL, sR, because i0 is a constant map. Multiplying these two results, for any w1 ∈ A1+ and w2 ∈ A2+, ψM(w1 w2) = (I2♭, sw1w2, I1♯), where sw1w2(I1, I2) = sw1(I1, ψ2(w2)) = χ(ψ1(w1), ψ2(w2)) = ψ0(w1 w2).
We next prove, by induction on the length of u ∈ (A1+ A2+)+ as a word in the free semigroup generated by A1+ A2+, that ψM(u) = (I2♭, su, I1♯), where su(I1, I2) = ψ0(u). We have already established the base case. If u = (w1 w2)u′ for some w1 ∈ A1+ and w2 ∈ A2+ with u′ ∈ (A1+ A2+)+, then, for the middle coordinate su of ψM(u) = ψM(w1 w2)ψM(u′), we have
su(I1, I2) = [sw1w2 I2♭ + I1♯ su′](I1, I2) = ψ0(w1 w2) · ψ0(u′) = ψ0(u).
Finally, to prove that f ◦ ψM = τ, let w ∈ A+. Suppose that w = v2 u v1 with u ∈ (A1+ A2+)+, v1 ∈ A1+ and v2 ∈ A2+. Then, using our previous calculations, we get
ψM(v2 u v1) = (ψ2(v2), i0, I1♯) · (I2♭, su, I1♯) · (I2♭, sv1, ψ1(v1)) = (ψ2(v2)♭, s, ψ1(v1)♯),
where
s(I1, I2) = [i0 I2♭ + I1♯ su I2♭ + I1♯ sv1](I1, I2) = I0 · su(I1, I2) · I0 = ψ0(u).
Thus, in this case, f(ψM(w)) = τ(w). If one or more of the factors in the factorization w = v2 u v1 are empty, then the proof is similar but simpler.
We end with a prototypical application of the technique, to be used in the next section.
Corollary 2.3. Let S be a finite semigroup and let T1, T2 be subsemigroups of S such that T1 ∪ T2 generates S. Denote by T0 := ⟨T1 T2⟩ the subsemigroup generated by T1 T2. Then the semigroup S divides a triple product of T2♭, (T0I)^(T1I × T2I), and T1♯.
Proof. Let Ai := Ti × {i} for i = 1, 2 and A := A1 ∪ A2. Denote by ψ : A+ ↠ S the surjective homomorphism defined on generators (ti, i) ∈ A by ψ(ti, i) := ti. For i = 1, 2, let ψi be the restriction of ψ to Ai+, and let χ : (T1 × T2)+ → T0 be the homomorphism defined by χ(t1, t2) := t1 t2 for (t1, t2) ∈ T1 × T2. Note that ψ0, as defined above, in this case turns out to be the restriction of ψ to (A1+ A2+)+. Hence, writing m : T2I × T0I × T1I → S^I for the multiplication map m(t2, t0, t1) := t2 t0 t1, we have ψ = m ◦ τ. Let ψM : A+ → TM be the merge decomposition along χ, ψ1, and ψ2. By Proposition 2.2, pick f : TM → T2I × T0I × T1I such that τ = f ◦ ψM. Then ψ = m ◦ f ◦ ψM, so S divides TM since ψ is surjective.
3. Two-sided Krohn-Rhodes theorem
In this section, we apply the merge decomposition technique of Section 2 to give a short proof
of the crucial step in the two-sided Krohn-Rhodes theorem.
For any finite semigroup S, define VS to be the smallest variety which is closed under two-sided semidirect products, and which contains SL and all simple groups that divide S.
Theorem 3.1 (Two-sided Krohn-Rhodes). Let S be a finite semigroup. Then S ∈ VS .
Proof. By induction on |S|.
Case 1. S is a group. Any finite group embeds in an iterated wreath product of its simple
group divisors, cf., e.g., [11, Cor. 4.1.6].
Case 2. S is cyclic. Any finite cyclic semigroup divides an iterated wreath product of a subgroup and copies of U1 , cf., e.g., [11, Cor. 4.1.28].
Case 3. S is not a group and S is not cyclic. Let A be a minimal generating set for S and note that |A| ≥ 2. Since S is not a group, without loss of generality, S is not right simple (cf., e.g., [11, Lem. A.3.3]). Therefore, there exists a ∈ A such that aS ⊊ S. Let A1 := {a}, A2 := A \ A1, Ti := ⟨Ai⟩ for i = 1, 2, and T0 := ⟨T1 T2⟩. By minimality of A, T1 and T2 are strictly contained in S. By the induction hypothesis, Ti ∈ VTi, which is contained in VS, since any simple group dividing Ti also divides S. Moreover, T0 ⊆ aS, so T0 is also strictly contained in S. By the induction hypothesis again, T0 ∈ VT0 ⊆ VS. Since T1 ∪ T2 generates S, by Corollary 2.3, S divides a triple product of T2♭, (T0I)^(T1I × T2I), and T1♯. Hence, by Fact 2.1, S ∈ VS ∗∗ (VS ∗∗ VS) = VS.
4. Henckell’s theorem on aperiodic pointlikes
Recall that any element s in a finite semigroup S has a unique idempotent power, sω . A
semigroup S is called aperiodic if every subgroup of S is trivial, or, equivalently, sω s = sω for
every s ∈ S. For k ≥ 1, define SLk+1 := SL ∗∗ SLk . A semigroup S is aperiodic if, and only
if, S ∈ SLk for some k; indeed, the necessity follows from Theorem 3.1.³
Fact 4.1. For any m, n ≥ 1, SLm ∗∗ SLn ⊆ SLm+n .
Proof. By induction on m. The case m = 1 is true by definition. By the lax associativity of
double semidirect product [11, Cor. 2.6.26], (SL ∗∗ SLm−1 ) ∗∗ SLn ⊆ SL ∗∗ (SLm−1 ∗∗ SLn ).
By the induction hypothesis, SL ∗∗ (SLm−1 ∗∗ SLn ) ⊆ SL ∗∗ SLm+n−1 = SLm+n .
Let V be a variety. A subset X of a finite semigroup S is called V-pointlike if, for any
relational morphism ρ : S ↦ T with T ∈ V, X ⊆ ρ^{-1}(t) for some t ∈ T. Any singleton set
is V-pointlike, and the collection of V-pointlike subsets of a semigroup S forms a downward
closed subsemigroup, PLV (S), of the power semigroup 2S , partially ordered by inclusion, and
with multiplication of subsets of S.
The following observation is specific to the variety A of aperiodic semigroups: if X is an A-pointlike set in S, then so is the set X^{ω+∗} := ⋃_{n≥0} X^ω X^n. Indeed, for any ρ : S ↦ T with T aperiodic, X ⊆ ρ^{-1}(t) for some t ∈ T, which gives X^m ⊆ ρ^{-1}(t^m) for all m ≥ 1. Aperiodicity of T then yields X^ω X^n ⊆ ρ^{-1}(t^ω) for all n ≥ 0.
³A finite semigroup S lies in SLk if, and only if, every language recognized by S can be defined by a first-order sentence of quantifier depth ≤ k; this result is contained in [14, Ch. VI], and relates our work in Section 4 to the logical approach of [9].
We will call a subset U of 2^S saturated if it is a subsemigroup that is closed downward in the inclusion order and closed under the operation X ↦ X^{ω+∗}. Clearly, any subset U of 2^S is contained in a smallest saturated set, which we call its saturation, and denote by Sat(U).
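As an illustration, the following Python sketch (ours, not from the paper; element encoding and helper names are our own) computes the saturation of a family of subsets of a small finite semigroup given by its multiplication table. By Theorem 4.3 below, applied to the singletons it computes the A-pointlike subsets.

```python
from itertools import combinations

def setprod(X, Y, mult):
    # Setwise product XY in the power semigroup 2^S; mult[a][b] is ab in S.
    return frozenset(mult[a][b] for a in X for b in Y)

def omega_plus_star(X, mult):
    # X^{omega+*} = union over n >= 0 of X^omega X^n.
    powers, seen, P = [], {}, X
    while P not in seen:                 # iterate X, X^2, ... until a repeat
        seen[P] = len(powers); powers.append(P); P = setprod(P, X, mult)
    cycle = powers[seen[P]:]             # the periodic part of the powers
    e = next(Q for Q in cycle if setprod(Q, Q, mult) == Q)   # X^omega
    total, term, visited = frozenset(), e, set()
    while term not in visited:           # accumulate e, eX, eX^2, ...
        visited.add(term); total |= term; term = setprod(term, X, mult)
    return total

def saturation(U, mult):
    # Close U under downward subsets, setwise products, and X -> X^{omega+*}.
    sat, changed = set(U), True
    while changed:
        changed = False
        for X in list(sat):
            new = {omega_plus_star(X, mult)}
            new |= {setprod(X, Y, mult) for Y in list(sat)}
            new |= {setprod(Y, X, mult) for Y in list(sat)}
            new |= {frozenset(c) for r in range(1, len(X) + 1)
                    for c in combinations(sorted(X), r)}
            for C in new - sat:
                sat.add(C); changed = True
    return sat

# The cyclic group Z/2 = {0, 1}: the whole group comes out pointlike.
z2 = [[0, 1], [1, 0]]
assert frozenset({0, 1}) in saturation({frozenset({0}), frozenset({1})}, z2)
```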
We will need the following lemma, which was essentially already in [4]; see also [6].
Lemma 4.2. Let G be a subgroup of 2^S. Then ⋃G ∈ Sat(G).
Proof. Let C1, . . . , Ck be an exhaustive list of the cyclic subgroups of G. Note that, for any generator X of Ci, X^{ω+∗} = ⋃Ci, so ⋃Ci ∈ Sat(G) for every i. Also note that G = C1 · · · Ck. Therefore, since multiplication distributes over union, ⋃G = (⋃C1) · · · (⋃Ck) ∈ Sat(G).
We will use the merge decomposition (Section 2) to give a short proof of the following theorem.
Theorem 4.3 (cf. [4, 6, 9]). Let S be a semigroup. The set PLA(S) is the saturation of the set of singletons in 2^S. Moreover, if A is a generating set for S, then PLA(S) = PLSLk(S), where k = (|A| − 1)·2^C(|S|,2) + 2^{|S|} − 1 (here C(·, ·) denotes the binomial coefficient).
Proof. Throughout the proof, for any finite alphabet A, semigroup S, and homomorphism ϕ : A+ → 2^S, define Uϕ := im(ϕ), Sϕ := ⋃Uϕ, and k(ϕ) := (|ϕ(A)| − 1)·2^C(|Sϕ|,2) + 2^{|Sϕ|} − 1.
Claim. For any homomorphism ϕ : A+ → 2^S \ {∅}, there exists a homomorphism ψ : A+ → T with T ∈ SLk(ϕ) and ⋃ϕ(ψ^{-1}t) ∈ Sat(Uϕ) for every t ∈ T.
Proof of Claim. The construction of ψ : A+ → T with T ∈ SLk(ϕ) is by induction on the parameter (|Sϕ|, |ϕ(A)|) in N², ordered lexicographically.
Case 1. For every a ∈ A, ϕ(a)Sϕ = Sϕ = Sϕ ϕ(a).
Let e = ϕ(w) be an idempotent in the minimal ideal of Uϕ. Then G := eUϕe is a subgroup of Uϕ, see, e.g., [11, App. A]. By Lemma 4.2, ⋃G lies in Sat(eUϕe), and hence also in Sat(Uϕ), since eUϕe ⊆ Uϕ. Using the assumption in this case and the fact that multiplication distributes over union, we have
Sϕ = ϕ(w)Sϕϕ(w) = e(⋃Uϕ)e = ⋃G.
Thus, Sϕ lies in Sat(Uϕ), and we choose ψ to be the trivial homomorphism A+ → {1} ∈ SL.
Case 2. |ϕ(A)| = 1.
Denote the unique element of ϕ(A) by X. Since Uϕ is a finite cyclic semigroup, pick m ≤ |Uϕ| such that X^m is idempotent, i.e., X^m = X^ω. Let T := ⟨x | x^m = x^{m+1}⟩, the finite aperiodic cyclic semigroup of order m, and let ψ : A+ → T be the homomorphism defined by a ↦ x for every letter a ∈ A. Note that T ∈ SLm [11, Lem. 4.1.27], and, since Uϕ ⊆ 2^{Sϕ} \ {∅}, we have m ≤ |Uϕ| ≤ 2^{|Sϕ|} − 1 = k(ϕ). From the definitions, note that, for 1 ≤ i < m, ⋃ϕ(ψ^{-1}x^i) = X^i, which lies in Uϕ, and for i ≥ m, ⋃ϕ(ψ^{-1}x^i) = X^{ω+∗}, which lies in Sat(Uϕ).
Case 3. |ϕ(A)| ≥ 2, and there is a0 ∈ A such that ϕ(a0)Sϕ ⊊ Sϕ or Sϕϕ(a0) ⊊ Sϕ.
Without loss of generality, we may assume ϕ(a0)Sϕ ⊊ Sϕ. Let A1 := {a ∈ A | ϕ(a) = ϕ(a0)}, and A2 := A \ A1. Note that, since |ϕ(A)| ≥ 2, ϕ(A1) and ϕ(A2) are non-empty proper subsets of ϕ(A). For i = 1, 2, denote by ϕi the restriction of ϕ to Ai+, and pick ψi : Ai+ → Ti with Ti ∈ SLk(ϕi) and ⋃ϕ(ψi^{-1}t) = ⋃ϕi(ψi^{-1}t) ∈ Sat(Uϕi) ⊆ Sat(Uϕ), for all t ∈ Ti. Without loss of generality, we may assume the ψi are surjective.
Let ϕ0 : (T1 × T2)+ → 2^S \ {∅} be the unique homomorphism defined, for (t1, t2) ∈ T1 × T2, by ϕ0(t1, t2) := ⋃ϕ(ψ1^{-1}t1 · ψ2^{-1}t2). Note that Sϕ0 ⊆ ϕ(a0)Sϕ, since any w ∈ ψ1^{-1}t1 · ψ2^{-1}t2 starts with a letter from the subalphabet A1. Since ϕ(a0)Sϕ ⊊ Sϕ by assumption, |Sϕ0| < |Sϕ|, so the induction hypothesis applies to ϕ0: pick a homomorphism χ : (T1 × T2)+ → T0 with T0 ∈ SLk(ϕ0) such that ⋃ϕ0(χ^{-1}t) ∈ Sat(Uϕ0) ⊆ Sat(Uϕ), for every t ∈ T0.
Define µ : (A1+ A2+)+ → (T1 × T2)+ and ψ0 := χ ◦ µ, as in Section 2. Note that, for any w1 ∈ A1+, w2 ∈ A2+, we have ϕ(w1 w2) ⊆ ϕ0(µ(w1 w2)), and, hence, ϕ(w) ⊆ ϕ0(µ(w)) for all w ∈ (A1+ A2+)+. Therefore, by the definition of ψ0, ⋃ϕ(ψ0^{-1}t) ⊆ ⋃ϕ0(χ^{-1}t) for all t ∈ T0, so also ⋃ϕ(ψ0^{-1}t) ∈ Sat(Uϕ). Applying the construction of Section 2, let ψM : A+ → TM be the merge homomorphism, and pick f : TM → T2I × T0I × T1I such that f ◦ ψM = τ. Let t ∈ TM, and write f(t) = (t2, t0, t1) ∈ T2I × T0I × T1I. If (t2, t0, t1) ∈ T2 × T0 × T1, then
⋃ϕ(ψM^{-1}t) ⊆ ⋃ϕ(τ^{-1}(t2, t0, t1)) = ⋃ϕ(ψ2^{-1}t2) · ⋃ϕ(ψ0^{-1}t0) · ⋃ϕ(ψ1^{-1}t1) ∈ Sat(Uϕ),
and, if ti = Ii for one or more i ∈ {0, 1, 2}, a similar inclusion holds, omitting the corresponding factors ⋃ϕ(ψi^{-1}ti) from the final product.
Let us write m := |Sϕ|. Note that, since Sϕ0 is strictly contained in Sϕ and ϕ0(T1 × T2) is contained in 2^{Sϕ0} \ {∅}, we have
k(ϕ0) ≤ (2^{m−1} − 2)·2^C(m−1,2) + 2^{m−1} − 1 = 2^C(m,2) − 2^{C(m−1,2)+1} + 2^{m−1} − 1 ≤ 2^C(m,2) − 1,
using that C(m,2) = m − 1 + C(m−1,2) and 2^{m−1} ≤ 2^{C(m−1,2)+1}.
By Facts 2.1 and 4.1, TM ∈ SLk, where k = k(ϕ0) + max{k(ϕ1), k(ϕ2)} + 1. Using that |ϕ(Ai)| < |ϕ(A)|, we have
k(ϕ0) + max{k(ϕ1), k(ϕ2)} + 1 ≤ 2^C(m,2) − 1 + (|ϕ(A)| − 2)·2^C(m,2) + 2^m − 1 + 1 = k(ϕ).
Now, to prove the theorem, let A be a generating set for S, define ϕ : A+ → 2^S by ϕ(a) := {a} for a ∈ A, and pick ψ : A+ → T as in the claim. Then Uϕ is the set of singletons, |ϕ(A)| = |A|, and Sϕ = S, so that k(ϕ) = (|A| − 1)·2^C(|S|,2) + 2^{|S|} − 1 =: k. Define the relational morphism ρ : S ↦ T by ρ^{-1}(t) := ⋃ϕ(ψ^{-1}t). Then, for any SLk-pointlike X ⊆ S, we have X ⊆ ρ^{-1}(t) for some t ∈ T, and therefore, since ρ^{-1}(t) lies in Sat(Uϕ) by the claim, so does X. We have proved that PLSLk(S) ⊆ Sat(Uϕ), while the remarks at the beginning of this section imply Sat(Uϕ) ⊆ PLA(S), which is clearly contained in PLSLk(S), since SLk ⊆ A. Thus, PLSLk(S) = Sat(Uϕ) = PLA(S).
Acknowledgements
In an earlier version of the proof of Theorem 4.3, we proved the claim for k(ϕ) := |ϕ(A)|·2^C(|Sϕ|,2). We acknowledge the help of MathOverflow [12] for guiding us to the slightly better bound given in the paper.
References
1. J. Almeida, Some algorithmic problems for pseudovarieties, Publ. Math. Debrecen 54 (1999), no. suppl.,
531–552, Automata and formal languages, VIII (Salgótarján, 1996).
2. K. Auinger and B. Steinberg, On the extension problem for partial permutations, Proc. Amer. Math. Soc.
131 (2003), no. 9, 2693–2703.
3. S. Eilenberg, Automata, languages, and machines. Vol. B, Academic Press, New York, 1976, With two
chapters by Bret Tilson, Pure and Applied Mathematics, Vol. 59.
4. K. Henckell, Pointlike sets: the finest aperiodic cover of a finite semigroup, J. Pure Appl. Algebra 55
(1988), 85–126.
5. K. Henckell, S. Lazarus, and J. Rhodes, Prime decomposition theorem for arbitrary semigroups: general
holonomy decomposition and synthesis theorem, J. Pure Appl. Algebra 55 (1988), no. 1-2, 127–172.
6. K. Henckell, J. Rhodes, and B. Steinberg, Aperiodic pointlikes and beyond, Internat. J. Algebra Comput.
20 (2010), no. 2, 287–305.
7. K. Krohn and J. Rhodes, Algebraic theory of machines. I. Prime decomposition theorem for finite semigroups and machines, Trans. Amer. Math. Soc. 116 (1965), 450–464.
8. K. Krohn, J. Rhodes, and B. Tilson, Algebraic theory of machines, languages, and semigroups, Edited by
Michael A. Arbib. With a major contribution by Kenneth Krohn and John L. Rhodes, Academic Press,
New York, 1968, Chapters 1, 5–9.
9. T. Place and M. Zeitoun, Separating regular languages with first-order logic, Logical Methods in Computer
Science 12 (2016), no. 1:5, 1–30.
10. J. Rhodes and B. Steinberg, Pointlike sets, hyperdecidability and the identity problem for finite semigroups,
Internat. J. Algebra Comput. 9 (1999), no. 3-4, 475–481.
11. J. Rhodes and B. Steinberg, The q-theory of Finite Semigroups, Springer, 2009.
12. B. Steinberg, A strange two-variable recursion, MathOverflow question, answered by M. Fischler, https://mathoverflow.net/q/278517.
13. B. Steinberg, On pointlike sets and joins of pseudovarieties, Internat. J. Algebra Comput. 8 (1998), no. 2, 203–234.
14. H. Straubing, Finite automata, formal logic, and circuit complexity, Progress in Theoretical Computer
Science, Birkhäuser Boston Inc., Boston, 1994.
15. T. Wilke, Classifying discrete temporal properties, STACS’99, Lec. Notes. Comp. Sci., vol. 1563, Springer,
1999, pp. 32–46.
Department of Mathematics, City College of New York, Convent Avenue at 138th Street, New
York, New York 10031, USA
E-mail address: [email protected] and [email protected]
Inverse of a Special Matrix and Application
Thuan Nguyen
arXiv:1708.07795v1 [cs.DM] 25 Aug 2017
School of Electrical and Computer Engineering, Oregon State University, Corvallis, OR, 97331
Email: [email protected]
Abstract—Matrix inversion is an interesting topic in algebra. However, determining the inverse of a given matrix requires many computational tools and much time when the matrix is huge. In this paper, we derive a closed form for the inverse of an interesting matrix which has many applications in communication systems. Based on this closed-form inverse, the closed form of the channel capacity of a communication system can be determined via the error rate parameter α.
Keywords: Inverse matrix, convex optimization, channel capacity.
I. MATRIX CONSTRUCTION
In wireless communication systems or free-space optical communication systems, due to shadowing effects or the turbulence of the environment, the channel condition can flip from "good" to "bad" or from "bad" to "good", following a Markov model, after the transmission time σ [1] [2]. For simple intuition, over a "bad" channel a signal is transmitted incorrectly, while over a "good" channel the signal is received perfectly. Suppose a system has n channels in total, a "good" channel is denoted by "1" and a "bad" channel by "0", the transmission time between transmitter and receiver is σ, and the probability that a channel flips after the transmission time σ is α. We note that if the system uses a binary code, such as On-Off Keying in free-space optical communication, then the flip probability α is equivalent to the error rate.
Consider the simple case n = 2 and suppose that, at the beginning, both channels are "good"; the probability that the system still has both channels "good" after the transmission time σ, for example, is (1−α)². Let Aij be the probability that the system moves from the state with i−1 "good" channels and n−i+1 "bad" channels to the state with j−1 "good" and n−j+1 "bad" channels, where 1 ≤ i ≤ n+1 and 1 ≤ j ≤ n+1. For example, the transition matrices A2 and A3 for n = 2 and n = 3 are constructed respectively as follows:
A2 =
[ (1−α)^2    2α(1−α)          α^2     ]
[ α(1−α)     (1−α)^2 + α^2    α(1−α)  ]
[ α^2        2α(1−α)          (1−α)^2 ]

A3 =
[ (1−α)^3      3(1−α)^2 α              3α^2 (1−α)              α^3        ]
[ (1−α)^2 α    2(1−α)α^2 + (1−α)^3     2(1−α)^2 α + α^3        (1−α)α^2   ]
[ (1−α)α^2     2(1−α)^2 α + α^3        2(1−α)α^2 + (1−α)^3     (1−α)^2 α  ]
[ α^3          3α^2 (1−α)              3(1−α)^2 α              (1−α)^3    ]

These transition matrices are obviously of size (n+1) × (n+1), since the number of "good" channels can take n+1 discrete values 0, 1, . . . , n. Moreover, this class of matrices has several interesting properties: (1) all entries of the matrix An can be determined by Proposition 1; (2) the inverse of the matrix An is given by Proposition 2. Moreover, these matrices are obviously centrosymmetric.
Proposition 1. For an n-channel system, the transition matrix An has size (n+1) × (n+1), and the entry An,ij in row i and column j is given by
An,ij = Σ_{s=max(i−j,0)}^{min(n+1−j, i−1)} C(i−1, s) C(n+1−i, j−i+s) α^{j−i+2s} (1−α)^{n−(j−i+2s)},
where C(·, ·) denotes the binomial coefficient.
Proof. From the definition, An,ij is the probability that the system moves from the state with i−1 "good" channels (i−1 bits "1") to the state with j−1 "good" channels (j−1 bits "1"). Suppose s of the i−1 "good" channels are flipped to "bad" after the transmission time σ, so that 0 ≤ s ≤ i−1. Then, to end up with j−1 "good" channels after time σ, the number of the n+1−i "bad" channels that must be flipped to "good" is
(j − 1) − ((i − 1) − s) = j − i + s.
Therefore, the total number of channels that flip their state during the transmission time σ is
s + (j − i + s) = j − i + 2s,
and the total number of channels that preserve their state is n − (j−i+2s). However, 0 ≤ s ≤ i−1, and similarly the number of "bad" channels flipped to "good" must satisfy 0 ≤ j−i+s ≤ n+1−i. Hence:
max s = min(n+1−j, i−1),
min s = max(0, i−j).
Therefore, An,ij is determined by the formula displayed above.
Proposition 2. All entries of the inverse matrix An^{-1} of the matrix An given in Proposition 1 can be determined from the original transition matrix An, for every α ≠ 1/2:
An^{-1}_{ij} = (−1)^{i+j} An,ij / (1 − 2α)^n.
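The entries above are easy to generate programmatically. The following sketch (ours, not part of the paper; the name transition_matrix is our own) builds An from the formula of Proposition 1 and checks that every row sums to one:

```python
import numpy as np
from math import comb

def transition_matrix(n, alpha):
    # Build A_n entry by entry from the closed form of Proposition 1.
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 2):
        for j in range(1, n + 2):
            lo, hi = max(i - j, 0), min(n + 1 - j, i - 1)
            A[i - 1, j - 1] = sum(
                comb(i - 1, s) * comb(n + 1 - i, j - i + s)
                * alpha ** (j - i + 2 * s)
                * (1 - alpha) ** (n - (j - i + 2 * s))
                for s in range(lo, hi + 1))
    return A

A2 = transition_matrix(2, 0.1)
assert np.allclose(A2.sum(axis=1), 1.0)   # each row is a distribution
```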
Due to the page limitation, we give the detailed proof at the end of this paper. To illustrate our result, an example of the inverse matrix A2 is shown as follows:

A2^{-1} = 1/(1−2α)^2 ·
[ (1−α)^2     −2α(1−α)         α^2      ]
[ −α(1−α)     (1−α)^2 + α^2    −α(1−α)  ]
[ α^2         −2α(1−α)         (1−α)^2  ]
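Proposition 2 can also be checked numerically against a generic matrix inverse; the sketch below (ours) does so for n = 3, reusing transition_matrix from the previous sketch:

```python
import numpy as np
# reuses transition_matrix(...) from the previous sketch
n, alpha = 3, 0.1
A = transition_matrix(n, alpha)
signs = np.fromfunction(lambda i, j: (-1.0) ** (i + j), (n + 1, n + 1))
closed_form_inverse = signs * A / (1.0 - 2.0 * alpha) ** n
assert np.allclose(np.linalg.inv(A), closed_form_inverse)
```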
Next, based on the existence of a closed form for the inverse matrix, we show that a closed form for the capacity of a discrete memoryless channel can be established. We note that in [3] the authors state that no closed form is known for the channel capacity problem. With our approach, however, a closed form can be established for a wide range of channels whose error rate α is small.
II. OPTIMIZE SYSTEM CAPACITY
A discrete memoryless channel is characterized by a channel
matrix A ∈ Rm×n with m and n representing the numbers of
distinct input (transmitted) symbols xi , i = 1, 2, . . . , m, and
output (received) symbols yj , j = 1, 2, . . . , n, respectively.
The matrix entry Aij represents the conditional probability
that, given a symbol xi is transmitted, the symbol yj is received. Let p = (p1, p2, . . . , pm)ᵀ be the input probability
mass vector, where pi denotes the probability of transmitting
symbol xi , then the probability mass vector of output symbols
q = (q1 , q2 , . . . , qn )T = AT p, where qi denotes the probability
of receiving symbol yi . For simplicity, we only consider the
case n = m such that the number of transmitted input patterns
is equal to the number of received input patterns. The mutual information between input and output symbols is:
I(X; Y) = H(Y) − H(Y|X),
where
H(Y) = −Σ_{j=1}^{n} qj log qj,
H(Y|X) = −Σ_{i=1}^{m} Σ_{j=1}^{n} pi Aij log Aij.
Thus, the mutual information function can be written as:
I(X; Y) = −Σ_{j=1}^{n} (Aᵀp)_j log (Aᵀp)_j + Σ_{i=1}^{m} Σ_{j=1}^{n} pi Aij log Aij,
where (AT p)j denotes the jth component of the vector
q = (AT p). The capacity C of a discrete memoryless channel
associated with a channel matrix A sets a theoretical maximum rate at which information can be transmitted over the channel [3]. It is defined as:
C = max_p I(X; Y).   (1)
Therefore, finding the channel capacity is to find an optimal
input probability mass vector p such that the mutual information between the input and output symbols is maximized.
For a given channel matrix A, I(X; Y ) is a concave function
in p [3]. Therefore, maximizing I(X; Y ) is equivalent to
minimizing −I(X; Y ), and the capacity problem can be cast
as the following convex problem:
Minimize:
Σ_{j=1}^{n} (Aᵀp)_j log (Aᵀp)_j − Σ_{i=1}^{m} Σ_{j=1}^{n} pi Aij log Aij
Subject to:
pi ≥ 0, i = 1, . . . , m,
1ᵀp = 1.
Optimal numerical values of p∗ can be found efficiently
using various algorithms such as gradient methods [4] [5].
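As an illustration, the convex problem above can also be handed to an off-the-shelf solver. The sketch below (ours, not the paper's implementation) uses CVXPY, a Python modeling package in the spirit of the CVX software cited in [4]; it assumes all entries of A are strictly positive, and returns the capacity in bits:

```python
import numpy as np
import cvxpy as cp

def channel_capacity(A):
    m, n = A.shape
    c = np.sum(A * np.log(A), axis=1)          # c_i = sum_j A_ij ln A_ij
    p = cp.Variable(m, nonneg=True)
    # I(X;Y) = H(Y) - H(Y|X); cp.entr(x) = -x ln x, so H(Y) = sum(entr(A^T p)).
    mutual_info = cp.sum(cp.entr(A.T @ p)) + c @ p
    problem = cp.Problem(cp.Maximize(mutual_info), [cp.sum(p) == 1])
    problem.solve()
    return problem.value / np.log(2), p.value  # capacity in bits

# e.g. the n = 2 transition matrix built in the earlier sketch:
# C, p_opt = channel_capacity(transition_matrix(2, 0.1))
```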
However, in this paper, we try to figure out the closed form for
optimal distribution p via KKT condition. The KKT conditions
state that for the following canonical optimization problem:
Problem. Minimize: f(x)
Subject to:
gi(x) ≤ 0, i = 1, 2, . . . , n,
hj(x) = 0, j = 1, 2, . . . , m.
Construct the Lagrangian function:
L(x, λ, ν) = f(x) + Σ_{i=1}^{n} λi gi(x) + Σ_{j=1}^{m} νj hj(x),   (2)
then, for i = 1, 2, . . . , n and j = 1, 2, . . . , m, the optimal point x∗ must satisfy:
gi(x∗) ≤ 0,
hj(x∗) = 0,
dL(x, λ, ν)/dx |_{x=x∗, λ=λ∗, ν=ν∗} = 0,   (3)
λi∗ gi(x∗) = 0,
λi∗ ≥ 0.
Our transition matrix, already established in the previous section, can serve as a channel matrix. In optical transmission, for example, the transmitted bits are denoted by different energy levels; in an On-Off Keying code, bits "1" and "0" are represented by high and low power levels. This energy is received by a photodiode and converted directly to, for example, a voltage. However, a photodiode works on an aggregate principle, collecting all incident energy: if two channels each transmit a bit "1", the photodiode receives the same energy "2" regardless of which pair of channels it comes from. Therefore, the received signal depends only on the number of bits "1" on the transmission side. Hence, on the receiver side, the photodiode distinguishes n + 1 states 0, 1, 2, . . . , n. By this property, the transition matrix A of the previous section is exactly the channel matrix of the system. The channel capacity of the system is therefore determined by the optimization problem in (1).
Next, we show that the above optimization problem can be solved efficiently via the KKT conditions. We note that our method establishes the closed form for a general channel matrix; the results are then applied to the special matrix An. At first, one might try to optimize directly over the input distribution p; however, the KKT conditions for the input distribution are too complicated for constructing the first derivative. On the other hand, given the existence of the inverse channel matrix, the output variable is more suitable to work with in the KKT conditions.
Due to 0 ≤ qj ≤ 1, the Lagrange function from (2) with output variable q is:
L(q, λ, ν) = I(X, Y) + Σ_{j=1}^{n} λj qj + ν(Σ_{j=1}^{n} qj − 1).
Using the KKT conditions, at the optimal point qj∗, λj∗, ν∗:
qj∗ ≥ 0,
Σ_{j=1}^{n} qj∗ = 1,
ν∗ − λj∗ − dI(X, Y)/dqj∗ = 0,
λj∗ ≥ 0,
λj∗ qj∗ = 0.
Because 0 ≤ pi ≤ 1 for i = 1, . . . , n and Σ_{i=1}^{n} pi = 1, there always exists some pi > 0. From qj = Σ_{i=1}^{n} pi Aij with all Aij > 0, we see clearly that qj > 0 for all j, and in particular qj∗ > 0 for all j. Therefore, by the fifth condition, λj∗ = 0 for all j. We then have the simplified KKT conditions:
Σ_{j=1}^{n} qj∗ = 1,
ν∗ − dI(X, Y)/dqj∗ = 0.
The derivatives are given by:
dI(X, Y)/dqj = Σ_{i=1}^{n} A^{-1}_{ji} Σ_{k=1}^{n} Aik log Aik − (1 + log qj).
Let us call
Kj := Σ_{i=1}^{n} A^{-1}_{ji} Σ_{k=1}^{n} Aik log Aik.
Using the derivative of I(X, Y) at qj = qj∗ and the second simplified KKT condition gives ν∗ = Kj − (1 + log qj∗), hence:
qj∗ = 2^{Kj − ν∗ − 1}.
Using the first simplified KKT condition, the sum of all output probabilities is 1:
Σ_{j=1}^{n} 2^{Kj − ν∗ − 1} = 1, so 2^{ν∗} = Σ_{j=1}^{n} 2^{Kj − 1},
and ν∗ can be figured out as:
ν∗ = log Σ_{j=1}^{n} 2^{Kj − 1}.
With ν∗ known, every qj∗ is determined, and finally:
p∗ᵀ = q∗ᵀ A^{-1}.
Since the channel matrix is given in closed form in α, the optimal input vector p and output vector q are also functions of α. However, we note that because the KKT conditions work directly on the output variable q, the recovered optimal input p can be invalid, i.e., pi > 1 or pi < 0. Our simulations show that for n ≤ 10 and α ≤ 0.2, both the output and input vectors are valid. That said, our approach works for a good system where the error probability α is small. In the case of an invalid optimal input vector, an upper bound on the channel capacity is, of course, still established.
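The closed form above is easy to evaluate numerically. The following sketch (ours) implements it with base-2 logarithms, as in the text, and checks the validity of the recovered input distribution:

```python
import numpy as np

def closed_form_solution(A):
    """q*, nu*, p* from the closed form; assumes A is square, invertible,
    and has strictly positive entries."""
    Ainv = np.linalg.inv(A)
    c = np.sum(A * np.log2(A), axis=1)       # c_i = sum_k A_ik log2 A_ik
    K = Ainv @ c                             # K_j = sum_i A^{-1}_ji c_i
    nu = np.log2(np.sum(2.0 ** (K - 1)))     # nu* = log2 sum_j 2^(K_j - 1)
    q = 2.0 ** (K - nu - 1)                  # optimal output distribution
    p = Ainv.T @ q                           # p*^T = q*^T A^{-1}
    return q, nu, p

# q, nu, p = closed_form_solution(transition_matrix(3, 0.1))  # alpha small
# valid = np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```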
III. CONCLUSION
In this paper, our contributions are twofold: (1) we establish a closed form for the inverse of a class of channel matrices, parametrized by the error probability α; (2) we derive the closed form of the channel capacity for channels with small error rate α, and an upper bound on the system capacity for high-error-rate channels.
REFERENCES
[1] Jeff McDougall and Scott Miller. Sensitivity of wireless network
simulations to a two-state markov model channel approximation. In
Global Telecommunications Conference, 2003. GLOBECOM’03. IEEE,
volume 2, pages 697–701. IEEE, 2003.
[2] Hong Shen Wang and Nader Moayeri. Finite-state markov channel-a
useful model for radio communication channels. IEEE transactions on
vehicular technology, 44(1):163–171, 1995.
[3] Thomas M Cover and Joy A Thomas. Elements of information theory.
John Wiley & Sons, 2012.
[4] Michael Grant, Stephen Boyd, and Yinyu Ye. Cvx: Matlab software for
disciplined convex programming, 2008.
[5] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
APPENDIX
Proof for Proposition 2.
Proof. To simplify our notation, the "good" and "bad" channels are represented by bits "1" and "0", respectively. We use the definition to show that
An An^{-1} = I.
If the matrix An∗ is constructed by An∗_{ij} = (−1)^{i+j} An,ij, then we need to show that
An An∗ = B = (1 − 2α)^n I.
Firstly, we note that An,ij and An∗_{ij} differ only by the sign factor (−1)^{i+j}. Therefore Bij, the product of row i of An with column j of An∗, can be computed as
Bij = Σ_{k=1}^{n+1} An,ik An∗_{kj}.
Note that An,ik is the probability of moving from state i (with i − 1 bits "1" and n − i + 1 bits "0") to the intermediate state k (with k − 1 bits "1" and n − k + 1 bits "0"). Moreover, if the sign is ignored, then An∗_{kj} is the probability of going from state k to state j. However, state k comprises C(n, k−1) sub-states, all with the same number of "good" and "bad" channels. For example, with n = 2, state k = 2 includes the two sub-states "10" and "01", each containing one "good" and one "bad" channel. Therefore, the total number of sub-states as k runs from 1 to n+1 is Σ_{k=1}^{n+1} C(n, k−1) = 2^n. Let us compute Bij by dividing into two cases:
Compute Bij for i = j: Bii is the signed sum, over intermediate sub-states, of the probability of going from the state with i − 1 bits "1" to a sub-state with k − 1 bits "1" and back. The 2^n sub-states split into n + 1 categories according to the number t of positions in which they differ from state i; a sub-state differing in t positions enters the sum with sign (−1)^t.
• If all bits of i and k are the same (t = 0), the term has magnitude C(n,0)(1−α)^n (1−α)^n = C(n,0)(1−α)^{2n}.
• If i and k differ in exactly one position, the term has magnitude C(n,1)(1−α)^{2(n−1)} α^2.
• If i and k differ in exactly two positions, the term has magnitude C(n,2)(1−α)^{2(n−2)} α^{2·2}.
• If i and k differ in all positions, the term has magnitude C(n,n) α^{2n}.
Therefore, Bii is determined by the signed sum over all n + 1 categories:
Bii = Σ_{t=0}^{n} C(n,t) (−1)^t α^{2t} (1−α)^{2n−2t} = ((1−α)^2 − α^2)^n = (1−2α)^n.
Compute Bij for i ≠ j: Let us divide the terms An∗_{kj} into two subsets: those with k + j odd, for which An∗_{kj} < 0, and those with k + j even, for which An∗_{kj} > 0. Accordingly, Bij = Σ_{k=1}^{n+1} An,ik An∗_{kj} also splits into a positive and a negative part. Next, we show that the positive part equals the negative part, so that Bij = 0 for i ≠ j. Indeed, suppose the state i (with i − 1 bits "1") goes to a sub-state k1 and then to state j (with j − 1 bits "1"), and that the corresponding term Bik1 is positive. We will show that there exists a sub-state k2 such that Bik2 is negative and Bik1 = −Bik2.
Let s be the number of positions where states i and j have the same bit. Obviously s ≤ n − 1, since i ≠ j. For example, if n = 4, i = 1111 and j = 0001, we have s = 1, because i and j share the same bit "1" in the fourth position.
Suppose an arbitrary sub-state k1 is picked; we show how to choose the sub-state k2 with Bik1 = −Bik2. Consider the two following cases:
• If n − s is odd: k2 is constructed by keeping the s positions of k1 where i and j have the same bit, and flipping the bits of k1 in the remaining n − s positions.
• If n − s is even: k2 is constructed by keeping s + 1 positions of k1, namely the s positions where i and j have the same bit plus one position where i and j differ, and flipping the remaining n − s − 1 positions. Note that since s ≤ n − 1, there are n − s − 1 ≥ 0 remaining positions to flip.
We can see that k1 and k2 satisfy the magnitude condition |Bik1| = |Bik2|: the number of bits flipped between i and k1 equals the number flipped between k2 and j, and the number of bits flipped between j and k1 equals the number flipped between k2 and i.
Next, we prove that k1 and k2 put Bik1 and Bik2 in different subsets. Indeed, let b1 be the number of bits "1" in k1, b2 the number of bits "1" in k2, and bs the number of bits "1" in the s positions where i and j agree. Then the number of bits "1" of k1 in the n − s remaining positions is b1 − bs, and the number of bits "1" of k2 in those positions is b2 − bs.
• If n − s is odd: since every bit of k1 in the n − s remaining positions is flipped to create k2, the total number of bits "1" that k1 and k2 carry in those positions is (b1 − bs) + (b2 − bs) = n − s, which is odd. So b1 + b2 is odd, hence b1 − b2 is odd, and therefore (k1 + j) − (k2 + j) is odd. Thus Bik1 and Bik2 carry opposite signs.
• If n − s is even: because one more position is fixed when creating k2, the number of flipped bits, n − s − 1, is odd. If the additionally fixed bit of k1 is "0", the same computation as in the odd case applies. If the fixed bit is "1", then similarly (b1 − bs − 1) + (b2 − bs − 1) = n − s − 1 is odd, so b1 + b2 is again odd. In either case b1 − b2 is odd, so (k1 + j) − (k2 + j) is odd, and Bik1 and Bik2 carry opposite signs.
Therefore, a suitable state k2 can always be constructed from an arbitrary state k1, and Bik1 and Bik2 carry opposite signs. Hence Bij = 0 for i ≠ j. Therefore:
B = (1 − 2α)^n I,
and Proposition 2 is proven.
2016 ICSEE International Conference on the Science of Electrical Engineering
Randomized Independent Component Analysis
Matan Sela
Ron Kimmel
arXiv:1609.06942v1 [stat.ML] 22 Sep 2016
Department of Computer Science
Technion - Israel Institute of Technology
Abstract—Independent component analysis (ICA) is a method
for recovering statistically independent signals from observations
of unknown linear combinations of the sources. Some of the most
accurate ICA decomposition methods require searching for the
inverse transformation which minimizes different approximations
of the Mutual Information, a measure of statistical independence
of random vectors. Two such approximations are the Kernel
Generalized Variance or the Kernel Canonical Correlation which
has been shown to reach the highest performance of ICA
methods. However, the computational effort necessary just for
computing these measures is cubic in the sample size. Hence,
optimizing them becomes even more computationally demanding,
in terms of both space and time. Here, we propose a couple of
alternative novel measures based on randomized features of the
samples: the Randomized Generalized Variance and the Randomized Canonical Correlation. The computational complexity of calculating the proposed alternatives is linear in the sample size, and they provide a controllable approximation of their kernel-based non-random versions. We also show that optimizing the proposed statistical properties yields a comparable separation error an order of magnitude faster than the kernel-based measures.
I. INTRODUCTION
Independent component analysis (ICA) is a well-established
problem in unsupervised learning and signal processing, with
numerous applications including blind source separation, face
recognition, and stock price prediction. Consider the following scenario. A couple of speakers are located in a room. Each of them plays a different sound. Two microphones which are
arbitrarily placed in the same room record unknown linear
combinations of the sounds. The goal of ICA is to process the
signals recorded by the mics for recovering the soundtracks
played by the speakers.
More precisely, the basic idea of ICA is to recover ns statistically independent components of a non-Gaussian random
vector s = (s1, . . . , sns)ᵀ from ns observed linear mixtures of
its elements. That is, we assume that some unknown matrix
A ∈ Rns ×ns mixes the entries of s such that x = As. From
samples of x, the goal is to estimate an un-mixing matrix W
such that y = W x and the components of y are statistically
independent.
The matrix W is found by a minimization process over
a contrast function which measures the dependency between
the unmixed elements. Ideally, finding the matrix W which
minimizes the mutual information (MI) provides the theoretically most accurate reconstruction. The MI is defined as the
Kullback-Leibler divergence between the joint distribution of y, p(y1, . . . , yns), and the product of its marginal distributions, Π_{i=1}^{ns} p(yi). It is a non-negative function which vanishes if and only if the components of y are mutually independent. Unfortunately, in practical applications, the joint and the marginal
distributions are unknown. Estimating and optimizing the MI
directly from the samples is difficult. Fitting a parametric or a
nonparametric probabilistic model to the data based on which
MI is calculated is problem dependent and is often inaccurate.
Many researchers have proposed alternative contrast functions. One of the most robust alternatives is the F-correlation proposed in [1]. This function is evaluated by mapping the data samples into a reproducing kernel Hilbert space F, where
a canonical correlation analysis is performed. The largest
kernel canonical correlation (KCC) and the product of the
kernel canonical correlations, known as the kernel generalized
variance (KGV), are two possible contrast functions. Despite
their superior performance, algorithms based on minimizing
the KCC or the KGV (denoted as kernelized ICA algorithms)
are less attractive for practical uses since the complexity of
exact evaluation of these functions is cubic in the sample size.
A recent strand of research suggested randomized nonlinear
feature maps for approximating the reproducing kernel Hilbert
space corresponding to kernel functions. This technique enables revealing nonlinear relations in data by performing linear data analysis algorithms, such as Support Vector Machines
[2], Principal Component Analysis and Canonical Correlation
Analysis [3]. These methods approximate the solution of
kernel methods while reducing their complexity from cubic
to linear in the sample size.
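As a concrete illustration (ours, not taken from the cited papers), random Fourier features give such a randomized map for the Gaussian kernel k(x, y) = exp(−‖x − y‖²/(2s²)); inner products of the features approximate the kernel as the number D of features grows:

```python
import numpy as np

def random_fourier_features(X, D, s, seed=0):
    """X: (d, N) data matrix; returns a (D, N) random feature matrix Z
    such that Z.T @ Z approximates the N x N Gaussian kernel matrix."""
    rng = np.random.default_rng(seed)
    d, _ = X.shape
    W = rng.normal(scale=1.0 / s, size=(D, d))      # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=(D, 1))  # random phases
    return np.sqrt(2.0 / D) * np.cos(W @ X + b)
```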
Here, we propose two alternative contrast functions, Randomized Canonical Correlation (RCC) and Randomized Generalized Variance (RGV) which approximate KCC and KGV,
respectively, yet require just a fraction of the computational
effort to evaluate. Furthermore, the proposed random approximations are smooth, easy to optimize and converge to their
kernelized version as the number of random features grows.
Finally, we propose optimization algorithms similar to those
proposed in [1], for solving the ICA problem. We demonstrate
that our method has accuracy comparable to KICA but runs
12 times faster while separating components of real data.
II. BACKGROUND AND RELATED WORKS
A. Canonical Correlation Analysis
Canonical correlation analysis is a classical linear analysis
method introduced in [4], which generalizes the Principal
Component Analysis (PCA) for two or more random vectors.
In PCA, given samples of a random vector x ∈ Rd , the
idea is to search for a vector w ∈ Rd , which maximizes
the variance of the projection of x onto w. In practice, the
principal components are the eigenvectors corresponding to
the largest eigenvalues of the empirical covariance matrix
C = (1/N) X Xᵀ, where X is a matrix containing samples of
x as its columns, and N is the sample size.
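In code, the principal components are one symmetric eigendecomposition away; below is a short sketch (ours), assuming zero-mean samples:

```python
import numpy as np

def pca(X, r):
    """X: (d, N) centered samples; returns the top-r principal directions
    and the corresponding variances."""
    C = (X @ X.T) / X.shape[1]         # empirical covariance
    vals, vecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    return vecs[:, ::-1][:, :r], vals[::-1][:r]
```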
In canonical correlation analysis (CCA), given a pair of random vectors x ∈ R^{d_x} and y ∈ R^{d_y}, one looks for a pair of vectors w_x ∈ R^{d_x} and w_y ∈ R^{d_y} that maximize the correlation between the projection of x onto w_x and the projection of y onto w_y. More formally, CCA can be formulated as

max_{w_x ∈ R^{d_x}, w_y ∈ R^{d_y}} corr(w_x^T x, w_y^T y) = max_{w_x, w_y} cov(w_x^T x, w_y^T y) / ( var(w_x^T x)^{1/2} var(w_y^T y)^{1/2} ).
Let X ∈ R^{d_x×N} and Y ∈ R^{d_y×N} be matrices of samples of x and y, respectively. The empirical canonical correlation analysis problem is given by

minimize_{W_x ∈ R^{d_x×r}, W_y ∈ R^{d_y×r}}   ‖ĉorr(W_x^T X, W_y^T Y) − I‖
subject to   ĉorr(W_x, W_x) = I,   ĉorr(W_y, W_y) = I,

where ĉorr(·, ·) is the empirical correlation. The canonical correlations λ_1, ..., λ_r are found by solving the following generalized eigenvalue problem:

[ 0      C_XY ] [w_x]     [ C_XX + γI      0      ] [w_x]
[ C_YX   0    ] [w_y] = λ [     0      C_YY + γI  ] [w_y],

where C_XY = (1/N) X Y^T is the empirical cross-covariance matrix, and γI is added to the diagonal to stabilize the solution. As discussed next, the kernelized ICA method uses a kernel formulation of CCA to evaluate its contrast functions, which measure the independence of random variables.
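As a concrete illustration, the sketch below (ours, not from the paper) assembles the regularized generalized eigenvalue problem above and extracts the top canonical correlation with SciPy; the regularizer gamma, the function name, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def cca_top_correlation(X, Y, gamma=1e-3):
    """X: (dx, N) and Y: (dy, N) sample matrices (one sample per column).
    Returns the largest canonical correlation."""
    X = X - X.mean(axis=1, keepdims=True)   # center so products are covariances
    Y = Y - Y.mean(axis=1, keepdims=True)
    dx, N = X.shape
    dy = Y.shape[0]
    Cxy = X @ Y.T / N                        # empirical cross-covariance
    Cxx = X @ X.T / N + gamma * np.eye(dx)   # regularized auto-covariances
    Cyy = Y @ Y.T / N + gamma * np.eye(dy)
    A = np.block([[np.zeros((dx, dx)), Cxy],
                  [Cxy.T, np.zeros((dy, dy))]])
    B = np.block([[Cxx, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Cyy]])
    return eigh(A, B, eigvals_only=True)[-1]  # largest generalized eigenvalue

# Two noisy views sharing one latent signal: the top correlation is close to 1.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
X = np.vstack([s, rng.standard_normal(1000)])
Y = np.vstack([s + 0.1 * rng.standard_normal(1000), rng.standard_normal(1000)])
print(cca_top_correlation(X, Y))
```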
B. Kernelized Independent Component Analysis
In ICA, we minimize a contrast function which is defined
as any non-negative function of two or more random variables
that is zero if and only if they are statistically independent.
By definition, a pair of random variables x_1 and x_2 are said to be statistically independent if p(x_1, x_2) = p(x_1)p(x_2). As a result, for any two functions f_1, f_2 ∈ F,

E{f_1(x_1) f_2(x_2)} = E{f_1(x_1)} E{f_2(x_2)},

or, equivalently, cov(f_1(x_1), f_2(x_2)) = 0. The F-correlation ρ_F is defined as the maximal correlation among all the functions f_1(x_1), f_2(x_2) in F. That is,

ρ_F = max_{f_1, f_2 ∈ F} |corr(f_1(x_1), f_2(x_2))| = max_{f_1, f_2 ∈ F} |cov(f_1(x_1), f_2(x_2))| / ( var(f_1(x_1))^{1/2} var(f_2(x_2))^{1/2} ).
Obviously, if x_1 and x_2 are independent, then ρ_F = 0. As proven in [1], if F is a functional space that contains the Fourier basis (f(x) = e^{iωx}, ω ∈ R), then the opposite is also true. This property implies that ρ_F can replace the mutual information when searching for a matrix W that transforms the vector (x_1, x_2)^T into a vector with independent components.
Computing the F-correlation directly for an arbitrary space
F requires estimating the correlation between every possible
pair of functions in the space, making the calculation impractical. However, if F is a reproducing kernel Hilbert space
(RKHS), the F-correlation can be evaluated. Let k(x, y) be
a kernel function associated with the inner product between
functions in a reproducing kernel Hilbert space F. Denote
Φ(x) as the feature map corresponding to the kernel k(x, y),
such that k(x, y) = hΦ(x), Φ(y)i. The feature map is given
by Φ(x) = k(·, x). Then, from the reproducing property of
the kernel, for any function f (x) ∈ F,
f (x) = hΦ(x), f i,
∀f ∈ F, ∀x ∈ R.
It follows that for every functional space F associated with a kernel function k(x, y) and a feature map Φ(x), the F-correlation between a pair of random variables x_1 and x_2 can be formulated as

ρ_F = max_{f_1, f_2 ∈ F} |corr(⟨Φ(x_1), f_1⟩, ⟨Φ(x_2), f_2⟩)|.
Let {x_1^i}_{i=1}^N and {x_2^i}_{i=1}^N be the samples of the random variables x_1 and x_2, respectively. Any function f ∈ F can be represented by f = Σ_{i=1}^N α_i Φ(x^i) + f^⊥, where f^⊥ is a function in the subspace orthogonal to span{Φ(x^1), ..., Φ(x^N)}. Thus,

ĉov(⟨Φ(x_1), f_1⟩, ⟨Φ(x_2), f_2⟩)
= (1/N) Σ_{k=1}^N ⟨Φ(x_1^k), f_1⟩ ⟨Φ(x_2^k), f_2⟩
= (1/N) Σ_{k=1}^N ⟨Φ(x_1^k), Σ_{i=1}^N α_1^i Φ(x_1^i) + f_1^⊥⟩ ⟨Φ(x_2^k), Σ_{i=1}^N α_2^i Φ(x_2^i) + f_2^⊥⟩
= (1/N) Σ_{k=1}^N ⟨Φ(x_1^k), Σ_{i=1}^N α_1^i Φ(x_1^i)⟩ ⟨Φ(x_2^k), Σ_{i=1}^N α_2^i Φ(x_2^i)⟩
= (1/N) Σ_{k=1}^N Σ_{i=1}^N Σ_{j=1}^N α_1^i K_1(x_1^i, x_1^k) K_2(x_2^j, x_2^k) α_2^j
= (1/N) α_1^T K_1 K_2 α_2,
where K1 and K2 are the empirical kernel Gram matrices of
x_1 and x_2, respectively. With a similar development for the empirical variance, one concludes that

v̂ar(⟨Φ(x_1), f_1⟩) = (1/N) α_1^T K_1^2 α_1,    v̂ar(⟨Φ(x_2), f_2⟩) = (1/N) α_2^T K_2^2 α_2.
Therefore, the empirical F-correlation between x_1 and x_2 is given by

ρ̂_F(x_1, x_2) = max_{α_1, α_2 ∈ R^N} (α_1^T K_1 K_2 α_2) / ( (α_1^T K_1^2 α_1)^{1/2} (α_2^T K_2^2 α_2)^{1/2} ),

which can be evaluated by solving the following generalized eigenvalue problem:

[ 0        K_1 K_2 ] [α_1]     [ K_1^2    0    ] [α_1]
[ K_2 K_1  0       ] [α_2] = λ [ 0        K_2^2] [α_2].

The calculation of the F-correlation is thus equivalent to solving a kernel CCA problem. To ensure computational stability, it is common to use a regularized version of KCCA, given by

[ 0        K_1 K_2 ] [α_1]     [ (K_1 + (Nκ/2) I)^2        0           ] [α_1]
[ K_2 K_1  0       ] [α_2] = λ [        0           (K_2 + (Nκ/2) I)^2 ] [α_2].
For m random variables, the regularized KCCA amounts to finding the largest generalized eigenvalue λ of the following system:

[ 0         K_1 K_2   · · ·  K_1 K_m ] [α_1]     [ (K_1 + (Nκ/2)I)^2        0          · · ·        0           ] [α_1]
[ K_2 K_1   0         · · ·  K_2 K_m ] [α_2]     [       0           (K_2 + (Nκ/2)I)^2 · · ·        0           ] [α_2]
[  ...       ...       ...     ...   ] [... ] = λ[      ...                ...          ...        ...          ] [... ]
[ K_m K_1   K_m K_2   · · ·  0       ] [α_m]     [       0                  0          · · · (K_m + (Nκ/2)I)^2  ] [α_m]

or, in short, K_κ α = λ D_κ α. This is the first contrast function proposed in [1]. The second one, denoted the kernel generalized variance, depends not only on the largest generalized eigenvalue of the problem above but on the product of the entire spectrum. This is derived from the Gaussian case, where the mutual information is equal to minus half the logarithm of the product of the generalized eigenvalues of the regular CCA problem. In summary, the kernel generalized variance is given by

Î(x_1, ..., x_m) = −(1/2) Σ_{i=1}^{P} log λ_i,

where P is the rank of K_κ. Figure 1 demonstrates the kernel generalized variance versus the mutual information for a Gaussian kernel and for different σ's. The complexity of a naive implementation of both the KCC and the KGV is O(N^3). However, an approximate solution can be computed using a Cholesky decomposition, which reduces the complexity to roughly quadratic in the sample size.

Fig. 1. The mutual information and the Kernel Generalized Variance for different σ values as a function of the angle of rotation of the orthogonal matrix W.

C. Randomized Features

In kernel methods, each data sample x is mapped to a function Φ(x) in the reproducing kernel Hilbert space, where the analysis is performed. The inner product between two representational functions Φ(x) and Φ(y) can be evaluated directly on the samples x and y using the kernel function k(x, y). For real-valued, normalized (k(x, y) ≤ 1), shift-invariant kernels on R^d × R^d,

k(x, y) = ∫_{R^d} p(w) e^{−j w^T (x−y)} dw ≈ (1/m) Σ_{i=1}^m e^{−j w_i^T x} e^{j w_i^T y} ≈ (1/m) Σ_{i=1}^m cos(w_i^T x + b_i) cos(w_i^T y + b_i),

where p(w) is the inverse Fourier transform of k, b_i ∼ U(0, 2π), and the w_i's are drawn independently from p(w). Thus, the kernel function can be approximated by transforming the data samples into an m-dimensional random space, z(x) = (1/√m)(cos(w_1^T x + b_1), ..., cos(w_m^T x + b_m))^T, and taking the inner product between the maps:

k(x, y) ≈ ⟨z(x), z(y)⟩.
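The following sketch (ours; sigma, m, and the test points are illustrative choices) implements this random Fourier feature map for the Gaussian kernel, whose inverse Fourier transform p(w) is itself Gaussian, and checks the inner-product approximation:

```python
import numpy as np

def random_fourier_features(X, m=500, sigma=1.0, seed=0):
    """X: (d, N) samples, one per column. Returns z(X): (m, N) such that
    z(x)^T z(y) approximates exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(m, d))   # w_i ~ p(w), Gaussian here
    b = rng.uniform(0.0, 2.0 * np.pi, size=(m, 1))   # b_i ~ U(0, 2*pi)
    return np.sqrt(2.0 / m) * np.cos(W @ X + b)

# Compare the approximation against the exact kernel on two random points.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 2))
Z = random_fourier_features(X, m=5000)
exact = np.exp(-np.sum((X[:, 0] - X[:, 1]) ** 2) / 2.0)
print(exact, Z[:, 0] @ Z[:, 1])  # the two numbers should be close
```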
The following theorem shows that the reproducing kernel Hilbert space corresponding to k can be approximated by a random map, in the inner product sense. Let k(x, y) = k(x − y) be a shift-invariant kernel function such that k(x, y) ≤ 1. Denote by p(w) the inverse Fourier transform of k, that is, k(x) = ∫_{R^d} p(w) e^{−i2πw^T x} dw. Independently sample m variables {w_i}_{i=1}^m from the distribution p(w) and m random numbers b_i uniformly from the interval [−π, π], and construct the random Fourier feature map

z(x) = √(2/m) (cos(w_1^T x + b_1), ..., cos(w_m^T x + b_m))^T.

Theorem II.1. Let X ∈ R^{d×N} be a matrix containing samples of x in its columns. Denote by z(X) ∈ R^{m×N} the matrix containing the random Fourier features of each column of X as its columns. Denote K̂ = z(X)^T z(X), and let K be the empirical kernel matrix corresponding to the same kernel as K̂. Then

E‖K̂ − K‖ ≤ √(3n² log n / m) + 2n log n / m,

where the norm is the operator norm.

Proof. See [3].

Figure 2 shows the analytic bound versus the empirical error of an approximate kernel K̂ evaluated on 10^4 data samples.

Fig. 2. The analytic and empirical error of the kernel approximation using random Fourier features as a function of the number of features m.

Introduced in [2] for approximating the solution of kernel Support Vector Machines, these random feature maps are also useful for solving kernel Principal Component Analysis (kPCA) and kernel Canonical Correlation Analysis (kCCA) [3]. The main appeal of these features is that they reduce the complexity of kernel methods to linear in the sample size, at the expense of a mild gap in accuracy. Here, we extend this idea to solving the ICA problem.

III. RANDOMIZED INDEPENDENT COMPONENT ANALYSIS

As demonstrated in [1], the kernel canonical correlation and the kernel generalized variance measure the statistical dependence between a set of sampled random variables. In addition, optimization over these functions for decomposing mixtures of these variables into independent ones is more accurate and robust. However, evaluating these functions requires O(N³) operations. Fortunately, for certain types of kernels, these contrast functions can be approximated using random features in linear time.

Using the random Fourier feature map z(·) defined above, we define the Z-correlation of a pair of random variables x_1 and x_2 as

ρ_z(x_1, x_2) = max_{w_1 ∈ R^m, w_2 ∈ R^m} corr(w_1^T z(x_1), w_2^T z(x_2)).

The empirical covariance between w_1^T z(x_1) and w_2^T z(x_2) is given by

ĉov(w_1^T z(x_1), w_2^T z(x_2)) = (1/N) Σ_{k=1}^N ⟨w_1, z(x_1^k)⟩⟨w_2, z(x_2^k)⟩
= (1/N) Σ_{k=1}^N w_1^T z(x_1^k) z(x_2^k)^T w_2
= w_1^T ( (1/N) Σ_{k=1}^N z(x_1^k) z(x_2^k)^T ) w_2
= w_1^T Ĉ_{12}^z w_2.

Similarly, the empirical variances can be computed as

v̂ar(w_1^T z(x_1)) = w_1^T Ĉ_{11}^z w_1,    v̂ar(w_2^T z(x_2)) = w_2^T Ĉ_{22}^z w_2,

where

Ĉ_{11}^z = (1/N) Σ_{k=1}^N z(x_1^k) z(x_1^k)^T    and    Ĉ_{22}^z = (1/N) Σ_{k=1}^N z(x_2^k) z(x_2^k)^T.

Thus the empirical Z-correlation is the largest generalized eigenvalue λ of the system

[ 0         Ĉ_{12}^z ] [w_1]     [ Ĉ_{11}^z + γI       0        ] [w_1]
[ Ĉ_{21}^z  0        ] [w_2] = λ [      0         Ĉ_{22}^z + γI ] [w_2].    (1)

As demonstrated in Figure 3, as the number of random features grows, the Z-correlation converges to the F-correlation. Notice that the size of the generalized eigenvalue system in the random feature space is n_s m × n_s m, which is much smaller than in the kernel case, where it is of size n_s N × n_s N.

Fig. 3. Approximation of the Kernel Canonical Correlation using Randomized Canonical Correlation with different numbers of features.

The kernel canonical correlation provides less accurate results than the kernel generalized variance, since it takes into account only the largest generalized eigenvalue. This can also be justified by the Gaussian case, where the kernel generalized variance is shown to be a second-order approximation of the mutual information around the point of independence as σ goes to zero. Thus, we propose to approximate the kernel generalized variance by

δ_z(x_1, ..., x_{n_s}) = −(1/2) Σ_{k=1}^P log λ_k,

where the λ_k are the generalized eigenvalues of (1). We define this function as the randomized generalized variance (RGV). As demonstrated in Figure 4, the more features we take for calculating the empirical covariances in the random space, the better the RGV approximates the KGV.

Fig. 4. Approximation of the Kernel Generalized Variance using Randomized Generalized Variance with different numbers of features.

Fig. 5. Probability density functions used in our tests. These pdfs are identical to those used in [1].
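Before turning to the experiments, here is a compact sketch (ours, not the authors' released code) of evaluating the RCC contrast of Eq. (1) for a pair of signals; the kernel bandwidth sigma, the number of features m, and the regularizer gamma are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rff(x, m, sigma, seed):
    """Random Fourier features of a scalar signal x: (N,) -> (m, N)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(m, 1))
    b = rng.uniform(0.0, 2.0 * np.pi, size=(m, 1))
    return np.sqrt(2.0 / m) * np.cos(W @ x[None, :] + b)

def rcc(x1, x2, m=100, sigma=1.0, gamma=1e-2):
    """Randomized canonical correlation of two scalar signals, per Eq. (1)."""
    z1 = rff(x1, m, sigma, seed=0)
    z2 = rff(x2, m, sigma, seed=1)
    z1 = z1 - z1.mean(axis=1, keepdims=True)   # center the random features
    z2 = z2 - z2.mean(axis=1, keepdims=True)
    N = x1.size
    C12 = z1 @ z2.T / N
    C11 = z1 @ z1.T / N + gamma * np.eye(m)
    C22 = z2 @ z2.T / N + gamma * np.eye(m)
    A = np.block([[np.zeros((m, m)), C12], [C12.T, np.zeros((m, m))]])
    B = np.block([[C11, np.zeros((m, m))], [np.zeros((m, m)), C22]])
    return eigh(A, B, eigvals_only=True)[-1]   # the RCC contrast value

# Independent inputs give a value near 0; dependent inputs a larger one.
rng = np.random.default_rng(2)
s = rng.uniform(-1.0, 1.0, 2000)
print(rcc(s, rng.uniform(-1.0, 1.0, 2000)), rcc(s, s ** 2))
```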
IV. EXPERIMENTAL RESULTS

To evaluate the performance of optimization over our novel contrast functions for solving ICA problems, we independently drew samples from two or more of the probability density functions given in Figure 5. Then, we applied a random transformation with condition number between one and two. For measuring and comparing the accuracy of our algorithm in estimating the unmixing matrix from the mixed data, we used the Amari distance, given by

d(V, W) = (1/(2n_s)) Σ_{i=1}^{n_s} ( −1 + Σ_{j=1}^{n_s} |a_{ij}| / max_j |a_{ij}| ) + (1/(2n_s)) Σ_{j=1}^{n_s} ( −1 + Σ_{i=1}^{n_s} |a_{ij}| / max_i |a_{ij}| ),

where a_{ij} = (V W^{−1})_{ij}. This metric, introduced in [5], is invariant to scaling and permutation of the rows or columns of the matrices.

First, we evaluated the performance of our randomized ICA approach (RICA) on two mixtures of independent sources drawn in an independent, identically distributed fashion from the pdfs in Figure 5. We repeated the experiments for 250 and 1000 samples; the Amari distances for the various ICA algorithms are summarized in Tables I and II, respectively. As expected, our results are comparable to those of KICA [1], while our algorithm is strictly linear in the sample size.
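For reference, the Amari distance above translates directly into a few lines of code; this sketch is ours, with V and W as in the text.

```python
import numpy as np

def amari_distance(V, W):
    """Amari distance between a mixing matrix V and an unmixing estimate W."""
    A = np.abs(V @ np.linalg.inv(W))
    n = A.shape[0]
    row = (A.sum(axis=1) / A.max(axis=1) - 1.0).sum()
    col = (A.sum(axis=0) / A.max(axis=0) - 1.0).sum()
    return (row + col) / (2.0 * n)

# Sanity check: the distance vanishes when V W^{-1} is a scaled permutation.
P = np.array([[0.0, 2.0], [3.0, 0.0]])
print(amari_distance(P, np.eye(2)))  # 0.0
```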
We compared the robustness of our contrast functions to outlier samples. The evaluation was done by choosing a certain number of points in the dataset at random and adding +5 or −5, each with probability 0.5. The Amari distance between the estimated matrix and the true one for several algorithms is shown in Figure 6. We repeated the experiments 1000 times and averaged the results. The most robust algorithms are the KICA ones [1], while the RICA algorithms are still more robust than the others.
Since our framework imitates the kernel ICA methods in the random feature space, we evaluated the accuracy and the runtime of the methods in unmixing real audio signals. The results are given in Table III. With comparable accuracy, our methods run more than ten times faster.
Fig. 6. Robustness to outliers.

TABLE I
The Amari errors (multiplied by 100) of state-of-the-art ICA algorithms evaluated on a couple of mixtures of independent sources drawn from the denoted pdfs. In this experiment we used 250 samples and repeated the experiment 1000 times. The KGV, KCC, RCC, and RGV algorithms were initialized with similar matrices. Each row represents the pdfs from which the samples were drawn. The "mean" row represents the average performance of each method, and the "rand" row denotes the performance of the algorithms for a mixture of a pair of sources randomly selected from the pdfs; this experiment was also repeated 1000 times.

pdfs   F-ica   Jade    KCC     RCC     KGV     RGV
a       8.1     7.2     9.6     8.8     7.3     6.9
b      12.5    10.1    12.2    10.7     9.6     8.7
c       4.9     3.6     4.9     4.5     3.3     3.6
d      12.9    11.2    15.4    13.6    12.8    12.0
e      10.6     8.6     3.8     3.7     2.9     3.1
f       7.4     5.2     3.9     4.2     3.2     3.5
g       3.6     2.8     3.0     2.9     2.7     2.7
h      12.4     8.6    17.6    14.4    14.4    12.2
i      20.9    16.5    33.7    28.9    31.1    29.0
j      16.3    14.2     3.4     3.2     2.9     2.9
k      13.2     9.7     9.3     8.1     7.0     6.5
l      21.9    18.0    18.1    14.4    15.5    13.5
m       8.4     5.9     4.3    11.1     3.2     7.2
n      12.9     9.7     8.4    11.4     4.7     7.5
o       9.8     6.8    15.5    14.5    10.9    10.8
p       9.1     6.4     4.9     6.0     3.4     4.7
q      33.4    31.4    13.0    16.5     8.5    11.9
r      13.0     9.3    13.5    11.7     9.7     9.2
mean   12.8    10.3    10.8    10.5     8.5     8.7
rand   10.5     9.0     8.0     8.7     5.9     6.8

TABLE II
The Amari errors (multiplied by 100) and the runtime.
pdfs   F-ica   Jade    KCCA    RCC     KGV     RGV
a       4.2     3.6     5.3     4.9     3.0     3.5
b       5.9     4.7     5.4     5.3     3.0     3.9
c       2.2     1.6     2.0     1.7     1.4     1.4
d       6.8     5.3     8.4     7.4     5.3     5.9
e       5.2     4.0     1.6     1.6     1.2     1.3
f       3.7     2.6     1.7     1.9     1.3     1.4
g       1.7     1.3     1.4     1.2     1.2     1.1
h       5.4     3.9     6.5     6.0     4.4     4.4
i       9.0     6.6    12.9    12.1    10.6    10.0
j       5.8     4.2     1.5     1.3     1.3     1.2
k       6.3     4.4     4.3     3.6     2.6     2.7
l       9.4     6.7     7.0     6.6     4.9     4.9
m       3.7     2.6     1.7     3.1     1.3     2.0
n       5.3     3.7     2.8     3.2     1.8     2.3
o       4.2     3.1     5.2     4.8     3.3     3.4
p       4.0     2.7     2.0     2.4     1.4     1.7
q      16.4    12.4     3.8     4.5     2.1     3.0
r       5.7     4.0     5.3     4.5     3.2     3.3
mean    5.8     4.3     4.4     4.2     3.0     3.2
rand    5.8     4.6     3.6     3.7     2.5     2.8
TABLE III
The Amari errors (multiplied by 100) and the runtime analysis in separating a pair of independent audio signals from two recorded mixtures.

Method   # repl   Amari Distance   Runtime (seconds)
KCC      100      3.2              337.7
RCC      100      3.3              28.4
KGV      100      1.2              258.0
RGV      100      1.3              23.6

V. CONCLUSIONS

We proposed two novel pseudo-contrast functions for solving the ICA problem. The functions are evaluated using randomized Fourier features and thus can be computed in linear time. As the number of features grows, the proposed functions converge to the renowned kernel generalized variance and kernel canonical correlation, which require a computational effort that is cubic in the sample size. The accuracy of the proposed ICA methods is comparable to the state of the art, but they run over ten times faster. The proposed functions could also be evaluated using the Nyström extension for kernel matrices.
REFERENCES
[1] F. R. Bach and M. I. Jordan, "Kernel independent component analysis," J. Mach. Learn. Res., vol. 3, pp. 1–48, Mar. 2003. [Online]. Available: http://dx.doi.org/10.1162/153244303768966085
[2] A. Rahimi and B. Recht, "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning," in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Eds. Curran Associates, Inc., 2009, pp. 1313–1320.
[3] D. Lopez-Paz, S. Sra, A. Smola, Z. Ghahramani, and B. Schölkopf, "Randomized nonlinear component analysis," arXiv preprint arXiv:1402.0119, 2014.
[4] H. Hotelling, "Relations between two sets of variates," Biometrika, vol. 28, no. 3/4, pp. 321–377, 1936.
[5] S.-i. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," Advances in Neural Information Processing Systems, pp. 757–763, 1996.
arXiv:1508.05813v3 [] 7 Apr 2016
DUAL OF BASS NUMBERS AND DUALIZING MODULES
MOHAMMAD RAHMANI AND ABDOLJAVAD TAHERIZADEH
Abstract. Let R be a Noetherian ring and let C be a semidualizing R-module. In this
paper, we impose various conditions on C to be dualizing. For example, as a generalization of Xu [22, Theorem 3.2], we show that C is dualizing if and only if for an R-module
M , the necessary and sufficient condition for M to be C-injective is that πi (p, M ) = 0
for all p ∈ Spec (R) and all i 6= ht (p), where πi is the invariant dual to the Bass numbers
defined by E. Enochs and J. Xu [8].
1. introduction
Throughout this paper, R is a commutative Noetherian ring with non-zero identity.
A finitely generated R-module C is semidualizing if the natural homothety map R −→
Hom R (C, C) is an isomorphism and Ext iR (C, C) = 0 for all i > 0. Semidualizing modules
have been studied by Foxby [9], Vasconcelos [20] and Golod [10], who used the name suitable for these modules. Dualizing complexes, introduced by A. Grothendieck, are a powerful tool for investigating cohomology theories in algebraic geometry. A bounded complex of R-modules D with finitely generated homologies is said to be a dualizing complex for R if the natural homothety morphism R → RHom_R(D, D) is a quasi-isomorphism and id_R(D) < ∞. This notion has been extended to semidualizing complexes by L. W. Christensen [5]: a bounded complex of R-modules C with finitely generated homologies is semidualizing for R if the natural homothety morphism R → RHom_R(C, C) is a quasi-isomorphism. He used this notion to define a new homological dimension for complexes, namely G_C-dimension, which is a generalization of Yassemi's G-dimension [23]. The following is the translation of a part of [5, Proposition 8.4] to the language of modules:
Theorem 1. Let (R, m, k) be a Noetherian local ring and let C be a semidualizing R-module. The following are equivalent:
(i) C is dualizing.
(ii) GC -dim R (M ) < ∞ for all finite R-modules M .
(iii) GC -dim R (k) < ∞.
In particular, the above theorem recovers [4, 1.4.9]. Note that k is a Cohen-Macaulay R-module of type 1. R. Takahashi, in [17, Theorem 2.3], replaced the condition G-dim R (k) < ∞
in [4, 1.4.9] by weaker conditions and obtained a nice characterization for Gorenstein rings.
2000 Mathematics Subject Classification. 13C05, 13D05, 13D07, 13H10.
Key words and phrases. Semidualizing modules, dualizing modules, GC -dimension, Bass numbers, dual
of Bass numbers, minimal flat resolution, local cohomology.
Indeed, he showed that R is Gorenstein, provided that either R admits an ideal I of finite G-dimension such that R/I is Gorenstein, or there exists a Cohen-Macaulay R-module of type
1 and of finite G-dimension. The following is the main result of section 3, which generalizes
Theorem 1 as well as [17, Theorem 2.3]. See Theorem 3.4 below.
Theorem 2. Let (R, m) be a Noetherian local ring and let C be a semidualizing R-module.
The following are equivalent:
(i) C is dualizing.
(ii) There exists an ideal a with GC -dim R (aC) < ∞ such that C/aC is dualizing for
R/a.
(iii) There exists a Cohen-Macaulay R-module M with rR (M ) = 1 and GC -dim R (M ) <
∞.
(iv) rR (C) = 1 and there exists a Cohen-Macaulay R-module M with GC -dim R (M ) <
∞.
E. Enochs et al. [1] solved a long-standing conjecture about the existence of flat covers. Indeed, they showed that if R is any ring, then all R-modules have flat covers. E. Enochs [6] determined the structure of flat cotorsion modules. Also, E. Enochs and J. Xu [8, Definition 1.2] defined a new invariant πi, dual to the Bass numbers, for modules related to flat resolutions. J. Xu [22] studied the minimal injective resolution of flat R-modules and the minimal flat resolution of injective R-modules. He characterized Gorenstein rings in terms of the vanishing of Bass numbers of flat modules, and the vanishing of the dual of Bass numbers of injective modules. More precisely, the following theorem is [22, Theorems 2.1 and 3.2].
Theorem 3. Let R be a Noetherian ring. The following are equivalent:
(i) R is Gorenstein.
(ii) An R-module F is flat if and only if µi (p, F ) = 0 for all p ∈ Spec (R) whenever
i 6= ht (p).
(iii) An R-module E is injective if and only if πi (p, E) = 0 for all p ∈ Spec (R) whenever
i 6= ht (p).
In section 4, we give a generalization of Theorem 3. Indeed, in Theorem 4.3, we prove the
following result.
Theorem 4. Let R be a Noetherian ring and let C be a semidualizing R-module. The
following are equivalent:
(i) C is pointwise dualizing.
(ii) An R-module M is C-injective if and only if πi (p, M ) = 0 for all p ∈ Spec (R)
whenever i 6= ht (p).
(iii) An R-module M is injective if and only if πi (p, Hom R (C, M )) = 0 for all p ∈
Spec (R) whenever i 6= ht (p).
Theorem 4 has several applications. Let (R, m) be a d-dimensional Cohen-Macaulay local
ring possessing a canonical module. In this section, we give the structure of the minimal flat
resolution of Hdm (R), the top local cohomology of R. More precisely, the following theorem
is Corollary 4.7.
Theorem 5. Let (R, m) be a d-dimensional Cohen-Macaulay local ring possessing a canonical module. The minimal flat resolution of Hdm (R) is of the form
0 → \widehat{R_m} → · · · → ∏_{ht(p)=1} T_p → ∏_{ht(p)=0} T_p → H^d_m(R) → 0,
in which T_p is the completion of a free R_p-module with respect to the pR_p-adic topology.
In this section, by using the above resolution, we obtain the following isomorphism for a
d-dimensional Cohen-Macaulay local ring (See Corollary 4.8).
Tor_i^R(H^d_m(R), H^d_m(R)) ≅ H^d_m(R) if i = d, and Tor_i^R(H^d_m(R), H^d_m(R)) = 0 if i ≠ d.
2. preliminaries
In this section, we recall some definitions and facts which are needed throughout this
paper. By an injective cogenerator, we always mean an injective R-module E for which
Hom R (M, E) 6= 0 whenever M is a nonzero R-module. For an R-module M , the injective
hull of M , is always denoted by E(M ).
Definition 2.1. Let X be a class of R-modules and M an R-module. An X -resolution of
M is a complex of R-modules in X of the form
X = · · · → X_n →^{∂_n^X} X_{n−1} → · · · → X_1 →^{∂_1^X} X_0 → 0
such that H_0(X) ≅ M and H_n(X) = 0 for all n ≥ 1. Also the X-projective dimension of M is the quantity
X-pd_R(M) := inf{ sup{ n ≥ 0 | X_n ≠ 0 } | X is an X-resolution of M }.
So that in particular X -pd R (0) = −∞. The modules of X -projective dimension zero are
precisely the non-zero modules in X . The terms of X -coresolution and X -id are defined
dually.
Definition 2.2. A finitely generated R-module C is semidualizing if it satisfies the following
conditions:
(i) The natural homothety map R −→ Hom R (C, C) is an isomorphism.
(ii) Ext iR (C, C) = 0 for all i > 0.
For example, a finitely generated projective R-module of rank 1 is semidualizing. If R is Cohen-Macaulay, then an R-module D is dualizing if it is semidualizing and id_R(D) < ∞. For example, the canonical module of a Cohen-Macaulay local ring, if it exists, is dualizing.
Definition 2.3. Following [12], let C be a semidualizing R-module. We set
FC (R) = the subcategory of R–modules C ⊗R F where F is a flat R–module.
IC (R) = the subcategory of R–modules Hom R (C, I) where I is an injective R–
module.
The R-modules in FC (R) and IC (R) are called C-flat and C-injective, respectively. If
C = R, then it recovers the classes of flat and injective modules, respectively. We use the
notations C-fd and C-id instead of FC -pd and IC -id , respectively.
Proposition 2.4. Let C be a semidualizing R-module. Then we have the following:
(i) Supp (C) = Spec (R), dim (C) = dim (R) and Ass (C) = Ass (R).
(ii) If R → S is a flat ring homomorphism, then C ⊗R S is a semidualizing S-module.
(iii) If x ∈ R is R–regular, then C/xC is a semidualizing R/xR-module.
(iv) If, in addition, R is local, then depth R (C) = depth (R).
Proof. The parts (i), (ii) and (iii) follow from the definition of semidualizing modules. For
(iv), note that an element of R is R-regular if and only if it is C-regular since Ass (C) =
Ass (R). Now an easy induction yields the equality.
Definition 2.5. Let C be a semidualizing R-module. A finitely generated R-module M is
said to be totally C-reflexive if the following conditions are satisfied:
(i) The natural evaluation map M −→ Hom R (Hom R (M, C), C) is an isomorphism.
(ii) Ext iR (M, C) = 0 = Ext iR (Hom R (M, C), C) for all i > 0.
For an R-module M , if there exists an exact sequence 0 → Gn → · · · → G1 → G0 → M → 0,
of R-modules such that each Gi is totally C-reflexive, then we say that M has GC -dimension
at most n, and write GC-dim_R(M) ≤ n. If there is no shorter such sequence, we set GC-dim_R(M) = n. Also, if such an integer n does not exist, then we say that M has infinite
GC -dimension, and write GC -dim R (M ) = ∞.
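As a quick illustration of Definitions 2.2 and 2.5 (an example we add here; it is standard and not part of the original text), the ring itself is totally C-reflexive:

```latex
% Added illustration (ours, not from the paper): R is totally C-reflexive.
\begin{example}
Since $\operatorname{Hom}_R(R,C)\cong C$, the evaluation map
$R\to\operatorname{Hom}_R(\operatorname{Hom}_R(R,C),C)
   \cong\operatorname{Hom}_R(C,C)$
is the natural homothety, which is an isomorphism, and
$\operatorname{Ext}^i_R(R,C)=0=\operatorname{Ext}^i_R(C,C)$ for all $i>0$.
Hence $R$ is totally $C$-reflexive, and every finitely generated free
$R$-module has $G_C$-dimension $0$.
\end{example}
```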
The next proposition collects basic properties of GC -dimension. For the proof, see [10].
Proposition 2.6. Let (R, m) be local, M a finitely generated R-module and let C be a
semidualizing R-module. The following statements hold:
(i) If GC -dim R (M ) < ∞, and x ∈ m is M -regular, then
GC -dim R (M ) = GC -dim R (M/xM ) − 1.
If, also, x is R-regular, then
GC -dim R (M ) = GC/xC -dim R/xR (M/xM ).
(ii) If GC -dim R (M ) < ∞ and x is an R-regular element in Ann R (M ), then
GC -dim R (M ) = GC/xC -dim R/xR (M ) + 1.
(iii) Let 0 → K → L → N → 0 be a short exact sequence of R-modules. If two of L, K, N
are of finite GC -dimension, then so is the third.
(iv) If GC -dim R (M ) < ∞, then
GC -dim R (M ) = sup{i ≥ 0 | Ext iR (M, C) 6= 0}
= depth (R) − depth R (M ).
Definition 2.7. Let C be a semidualizing R-module. The Auslander class with respect to
C is the class AC (R) of R-modules M such that:
(i) Tor_i^R(C, M) = 0 = Ext^i_R(C, C ⊗_R M) for all i ≥ 1, and
(ii) The natural map M → Hom R (C, C ⊗R M ) is an isomorphism.
The Bass class with respect to C is the class BC (R) of R-modules M such that:
(i) Ext^i_R(C, M) = 0 = Tor_i^R(C, Hom_R(C, M)) for all i ≥ 1, and
(ii) The natural map C ⊗_R Hom_R(C, M) → M is an isomorphism.
The class A_C(R) contains all R-modules of finite projective dimension and those of finite C-injective dimension. Also the class B_C(R) contains all R-modules of finite injective dimension and those of finite C-projective dimension (see [18, Corollary 2.9]). Also, if any two R-modules in a short exact sequence are in A_C(R) (resp. B_C(R)), then so is the third (see
[13]).
Proposition 2.8. Let (R, m) be a local ring and let C be a semidualizing R-module.
(i) C is a dualizing R-module if and only if C ⊗_R R̂ is a dualizing R̂-module.
(ii) Let x ∈ m be R-regular. Then C is a dualizing R-module if and only if C/xC is a
dualizing R/xR-module.
Proof. Just use the definition of dualizing modules.
Theorem 2.9. Let C be a semidualizing R-module and let M be an R-module.
(i) C-id R (M ) = id R (C ⊗R M ) and id R (M ) = C-id R (Hom R (C, M )).
(ii) C-fd R (M ) = fd R (Hom R (C, M )) and fd R (M ) = C-fd R (C ⊗R M ).
Proof. For (i), see [18, Theorem 2.11] and for (ii), see [19, Proposition 5.2].
Lemma 2.10. Let C be a semidualizing R-module, E be an injective cogenerator and M be
an R-module.
(i) One has C-id R (M ) = C-fd R (Hom R (M, E)).
(ii) One has C-fd R (M ) = C-id R (Hom R (M, E)).
Proof. (i). We have the following equalities
C-id R (M ) = id R (C ⊗R M )
= fd R (Hom R (C ⊗R M, E))
= fd_R(Hom_R(C, Hom_R(M, E)))
= C-fd R (Hom R (M, E)),
in which the first equality is from Theorem 2.9(i), and the last one is from Theorem 2.9(ii).
(ii). Is similar to (i).
Remark 2.11. Let (R, m) be a local ring and let M be a finitely generated R-module. We
use νR (M ) to denote the minimal number of generators of M . More precisely, νR (M ) =
vdim R/m (M ⊗R R/m). It is easy to see that if x ∈ m, then νR (M ) = νR/xR (M/xM ). In
particular, if x ∈ Ann R (M ), then νR (M ) = νR/xR (M ). Assume that depth R (M ) = n. The
type of M , denoted by rR (M ), is defined to be vdim R/m (Ext nR (R/m, M )). If x ∈ m, then
rR (M/xM ) = rR/xR (M/xM ) by [2, Exercise 1.2.26]. Also, if x ∈ m is M - and R-regular,
then r_R(M) = r_{R/xR}(M/xM) by [2, Lemma 3.1.16]. Assume that C is a semidualizing
R-module. Then rR (C) | rR (R). Indeed, by reduction modulo a maximal R-sequence, we
can assume that depth R (C) = 0 = depth (R). Then we have
rR (R) = vdim R/m Hom R (R/m, R)
= vdim R/m Hom R (R/m, Hom R (C, C))
= vdim R/m Hom R (R/m ⊗R C, C)
= vdim R/m Hom R (R/m ⊗R C ⊗R/m R/m, C)
= vdim R/m Hom R/m (R/m ⊗R C, Hom R (R/m, C))
= νR (C)rR (C).
In particular, if r_R(R) = 1 (e.g. R is Gorenstein local), then ν_R(C) = 1 and hence C ≅ R.
Definition 2.12. Let M be an R-module and let X be a class of R-modules. Following [7], an X-precover of M is a homomorphism ϕ : X → M, with X ∈ X, such that every homomorphism Y → M with Y ∈ X factors through ϕ; i.e., the homomorphism
Hom_R(Y, ϕ) : Hom_R(Y, X) → Hom_R(Y, M)
is surjective for each module Y in X. An X-precover ϕ : X → M is an X-cover if every ψ ∈ Hom_R(X, X) with ϕψ = ϕ is an automorphism.
Definition 2.13. Following [6], an R-module M is called cotorsion if Ext 1R (F, M ) = 0 for
any flat R-module F.
Remark 2.14. In [1], E. Enochs et al. showed that if R is any ring, then every R-module
has a flat cover. It is easy to see that flat cover must be surjective. By [6, Lemma 2.2], the
kernel of a flat cover is always cotorsion. So that if F → M is flat cover and M is cotorsion,
then so is F . Therefore for an R-module M , one can iteratively take flat covers to construct
a flat resolution of M . Since flat cover is unique up to isomorphism, this resolution is unique
up to isomorphism of complexes. Such a resolution is called the minimal flat resolution of M .
Note that the minimal flat resolution of M is a direct summand of any other flat resolution
of M . Assume that
· · · → Fi → · · · → F1 → F0 → M → 0,
is the minimal flat resolution of M . Then Fi is cotorsion for all i ≥ 1. If, in addition, M
is cotorsion, then all the flat modules in the minimal flat resolution of M are cotorsion. E.
Enochs [6] determined the structure of flat cotorsion modules. He showed that if F is flat and cotorsion, then F ≅ ∏_p T_p, where T_p is the completion of a free R_p-module with respect to the pR_p-adic topology. So we can determine the structure of the minimal flat resolution of cotorsion modules.
Definition 2.15. Let M be a cotorsion R-module and let
· · · → Fi → · · · → F1 → F0 → M → 0,
be the minimal flat resolution of M . Following [8], for a prime ideal p of R and an integer
i ≥ 0, the invariant πi (p, M ) is defined to be the cardinality of the basis of a free Rp -module
whose completion is T_p in the product F_i ≅ ∏_p T_p. By [8, Theorem 2.2], for each i ≥ 0,
π_i(p, M) = vdim_{R_p/pR_p} Tor_i^{R_p}(R_p/pR_p, Hom_R(R_p, M)).
Remark 2.16. Let M be a finitely generated R-module. There are isomorphisms
Hom_R(M, E(R/p)) ≅ Hom_R(M, E(R/p) ⊗_R R_p) ≅ Hom_R(M, E(R/p)) ⊗_R R_p ≅ Hom_{R_p}(M_p, E_{R_p}(R_p/pR_p)),
where the first isomorphism holds because E(R/p) ≅ E_{R_p}(R_p/pR_p), and the second isomorphism is tensor-evaluation [7, Theorem 3.2.14].
3. Finiteness of GC -dimension
Throughout this section, C is a semidualizing R-module. We begin with three lemmas
that are needed for the main result of this section. It is well-known that a local ring over
which there exists a non-zero finitely generated injective module must be Artinian. Our first lemma generalizes this fact by replacing the injectivity condition with a weaker assumption.
Lemma 3.1. Let (R, m) be local and let M be a finitely generated R-module with depth (M ) =
0. If Ext 1R (R/m, M ) = 0, then R is Artinian. In particular, M is injective.
Proof. We show that dim (R) = 0. Assume, on the contrary, that dim (R) > 0. Note that if
N is an R-module of finite length, then by using a composition series for N in conjunction
with the assumption, we have Ext 1R (N, M ) = 0. Now an easy induction on ℓR (N ) yields the
equality ℓ_R(Hom_R(N, M)) = ℓ_R(N) ℓ_R(Hom_R(R/m, M)). Next, note that ℓ_R(R/m^i) < ∞ for any i ≥ 1, and that the sequence {ℓ_R(R/m^i)}_{i=1}^∞ is not bounded, since m^i ≠ m^{i+1} for any i ≥ 1. Hence {ℓ_R(Hom_R(R/m^i, M))}_{i=1}^∞ is not bounded. But 0 :_M m ⊆ 0 :_M m^2 ⊆ · · ·
is a chain of submodules of M , and hence is eventually stationary. This is a contradiction.
Therefore R is Artinian. Finally, the assumption Ext 1R (R/m, M ) = 0 implies that M is
injective.
Lemma 3.2. Let (R, m) be a local ring and let M be a Cohen-Macaulay R-module with GC-dim_R(M) < ∞. Then r_R(C) | r_R(M).
Proof. We use induction on n = depth (R). If n = 0, then by Proposition 2.6(iv), we have
GC-dim_R(M) = 0, and hence there is an isomorphism M ≅ Hom_R(Hom_R(M, C), C), and the equalities depth_R(C) = 0 = depth_R(M). Hence we have
rR (M ) = vdim R/m Hom R (R/m, M )
= vdim R/m Hom R (R/m, Hom R (Hom R (M, C), C))
= vdim R/m Hom R (R/m ⊗R Hom R (M, C), C)
= vdim R/m Hom R (R/m ⊗R Hom R (M, C) ⊗R/m R/m, C)
= vdim R/m Hom R/m (R/m ⊗R Hom R (M, C), Hom R (R/m, C))
= νR (Hom R (M, C))rR (C).
Therefore rR (C) | rR (M ). Now, assume inductively that n > 0. We consider two cases:
Case 1. If depth R (M ) = 0, then M is of finite length since it is Cohen-Macaulay. Hence
we can take an R-regular element x such that xM = 0. Set (−) = (−) ⊗R R/xR. Then by
Proposition 2.6(ii), we have GC -dim R (M ) < ∞. Also, note that M is a Cohen-Macaulay
R-module. Hence by induction hypothesis we have rR (C) | rR (M ). Thus rR (C) | rR (M ).
Case 2. If depth R (M ) > 0, then we can take an element y ∈ m to be M - and R-regular.
Set (−) = (−) ⊗R R/yR. Now M is a Cohen-Macaulay R-module, and that
GC -dim R (M ) = GC -dim R (M ) < ∞,
by Proposition 2.6(i). Therefore, by induction hypothesis, we have rR (C) | rR (M ), whence
r_R(C) | r_R(M). This completes the inductive step.
Lemma 3.3. Let (R, m) be local and assume that r_R(C) = 1. If there exists a totally C-reflexive
R-module of finite length, then C is dualizing.
Proof. Assume that M is a totally C-reflexive R-module of finite length. Then depth_R(M) = 0, and hence depth(R) = GC-dim_R(M) = 0 by Proposition 2.6(iv). Therefore, we have depth_R(C) = 0 by Proposition 2.4(iv). Now assume, on the contrary, that C is not dualizing. Hence, by Lemma 3.1, we have Ext^1_R(R/m, C) ≠ 0. Let
0 = M0 ⊂ M1 ⊂ · · · ⊂ Mr = M ,
be a composition series for M . Thus the factors are all isomorphic to R/m, and we have
exact sequences
0 → Mi−1 → Mi → R/m → 0,
for all 1 ≤ i ≤ r. Applying the functor Hom R (−, C), we get the exact sequence
0 → Hom R (R/m, C) → Hom R (Mi , C) → Hom R (Mi−1 , C),
for each 1 ≤ i ≤ r − 1. Now since depth_R(C) = 0 and r_R(C) = 1, we have Hom_R(R/m, C) ≅ R/m. Hence we have the inequality ℓ_R(Hom_R(M_i, C)) ≤ ℓ_R(Hom_R(M_{i−1}, C)) + 1 for each
1 ≤ i ≤ r − 1. On the other hand, application of the functor Hom R (−, C) on the exact
sequence 0 → Mr−1 → M → R/m → 0, yields an exact sequence
0 → Hom R (R/m, C) → Hom R (M, C) → Hom R (Mr−1 , C)
→ Ext 1R (R/m, C) → Ext 1R (M, C) = 0.
Therefore ℓR (Hom R (M, C)) = ℓR (Hom R (Mr−1 , C)) + 1 − ℓR (Ext 1R (R/m, C)). But since
ℓR (Ext 1R (R/m, C)) > 0, we have
ℓR (Hom R (M, C))
< ℓR (Hom R (Mr−1 , C)) + 1
≤ ℓR (Hom R (Mr−2 , C)) + 2
≤ ···
≤ ℓR (Hom R (M0 , C)) + r
=r
= ℓR (M ).
Now since Hom R (M, C) is again a totally C-reflexive R-module of finite length, the same
argument shows that ℓ_R(Hom_R(Hom_R(M, C), C)) ≤ ℓ_R(Hom_R(M, C)). But since M is totally C-reflexive, we have M ≅ Hom_R(Hom_R(M, C), C), which implies that ℓ_R(M) < ℓ_R(M), a contradiction. Hence C is dualizing.
The following theorem is a generalization of [17, Theorem 2.3].
Theorem 3.4. Let (R, m) be local. The following are equivalent:
(i) C is dualizing.
(ii) There exists an ideal a with GC -dimR (aC) < ∞ such that C/aC is dualizing for
R/a.
(iii) There exists a Cohen-Macaulay R-module M with rR (M ) = 1 and GC -dimR (M ) <
∞.
(iv) rR (C) = 1 and there exists a Cohen-Macaulay R-module M of finite GC -dimension.
Proof. (i)=⇒(ii). Choose a = 0.
(ii)=⇒(iii). We show that C/aC has the desired properties. First, the exact sequence
0 → aC → C → C/aC → 0,
in conjunction with Proposition 2.6(iii), show that GC -dim R (C/aC) < ∞. On the other
hand, C/aC is a Cohen-Macaulay R/aR-module and hence is a Cohen-Macaulay R-module.
Finally, by [2, Exercise 1.2.26], we have rR (C/aC) = rR/a (C/aC) = 1.
(iii)=⇒(iv). By Lemma 3.2, we have rR (C) = 1.
(iv)=⇒(i). Assume that M is a Cohen-Macaulay R-module with GC -dim R (M ) < ∞. We
use induction on m = depth_R(M). If m = 0, then M is of finite length since it is Cohen-Macaulay. Since √(Ann_R(M)) = m, we can choose a maximal R-sequence from elements
of Ann R (M ), say x. In view of Proposition 2.8(ii) and Proposition 2.6(ii), we can replace
C by C/xC and R by R/xR, and assume that M is totally C-reflexive. In this case, C
is dualizing by Lemma 3.3. Now assume inductively that m > 0. Hence depth (R) > 0
by Proposition 2.6(iv), and we can take an element x ∈ m to be M - and R-regular. Set
(−) = (−)⊗R R/xR. Now M is a Cohen-Macaulay R-module and rR (C) = rR (C) = 1. Also,
by Proposition 2.6(i), we have GC -dim R (M ) =GC -dim R (M ) < ∞. Hence, by induction
hypothesis, C is dualizing for R, whence C is dualizing for R by Proposition 2.8(ii).
It is well-known that the existence of a finitely generated (resp. Cohen-Macaulay) module
of finite injective (resp. projective) dimension implies the Cohen-Macaulayness of the ring. But,
in the special case that C is dualizing, the proof is easy, as the following relations show
dim (R) = dim R (C) ≤ id R (C) = depth (R),
where the first equality is from Proposition 2.4(i), and the remaining parts are from [2,
Theorem 3.1.17]. Therefore, in view of Theorem 3.4, we can state the following corollary.
Corollary 3.5. Let (R, m) be local. If there exists a Cohen-Macaulay R-module of type 1
and of finite GC -dimension, then R is Cohen-Macaulay.
4. C-injective modules
In this section, our aim is to extend two nice results of J. Xu [22]. It is well-known that
a Noetherian ring R is Gorenstein if and only if µi (p, R) = δi,ht(p) (the Kronecker δ). As a
generalization, J. Xu [22, Theorem 2.1] showed that R is Gorenstein if and only if for any
R-module F , the necessary and sufficient condition for F to be flat is that µi (p, F ) = 0 for all
p ∈ Spec (R) and all i 6= ht (p). Next, in [22, Theorem 3.2], he proved a dual for this theorem.
Indeed, he proved that R is Gorenstein if and only if for any R-module E, the necessary
and sufficient condition for E to be injective is that πi (p, E) = 0 for all p ∈ Spec (R) and
all i ≠ ht(p). In the present section, we first generalize the mentioned results. Next, we use our new results to determine the minimal flat resolution of the top local cohomology of a Cohen-Macaulay local ring and its torsion products.
Lemma 4.1. The following are equivalent:
(i) C is pointwise dualizing.
(ii) C-fd R (E(R/m)) = ht (m) for any m ∈ Max (R).
(iii) C-fd R (E(R/m)) < ∞ for any m ∈ Max (R).
(iv) C-fd R (E(R/p)) = ht (p) for any p ∈ Spec (R).
(v) C-fd R (E(R/p)) < ∞ for any p ∈ Spec (R).
(vi) C-id R (Tm ) = ht (m) for any m ∈ Max (R).
(vii) C-id R (Tm ) < ∞ for any m ∈ Max (R).
(viii) C-id R (Tp ) = ht (p) for any p ∈ Spec (R).
(ix) C-id R (Tp ) < ∞ for any p ∈ Spec (R).
Proof. (i)=⇒(ii). Assume that m ∈ Max (R). There are equalities
C-fd R (E(R/m)) = fd R (Hom R (C, E(R/m)))
= fd_{R_m}(Hom_{R_m}(C_m, E_{R_m}(R_m/mR_m)))
= id Rm (Cm )
= dim (Rm )
= ht (m),
in which the first equality is from Theorem 2.9(ii), and the second one is from Remark 2.16.
(ii)=⇒(iii). Is clear.
(iii)=⇒(i). We can assume that (R, m) is local. Now one can use Theorem 2.9(ii), to see
that
id R (C) = fd R (Hom R (C, E(R/m)))
= C-fd R (E(R/m)) < ∞,
whence C is dualizing.
(i)=⇒(iv). Let p be a prime ideal of R. Note that E(R/p)q 6= 0 if and only if q ⊆ p. Now
as in (i)=⇒(ii), we have C-fd R (E(R/p)) = dim (Rp ) = ht (p).
(iv)=⇒(v). Is clear.
(v)=⇒(i). Again, we can assume that R is local. Now the proof is similar to that of
(iii)=⇒(i).
(ii)⇐⇒(vi) and (iii)⇐⇒(vii). Note that T_m = Hom_R(E(R/m), E(R/m)^{(X)}) for some set X. Now we have the equalities
C-id_R(T_m) = id_R(C ⊗_R T_m)
= id_R(C ⊗_R Hom_R(E(R/m), E(R/m)^{(X)}))
= id_R(Hom_R(Hom_R(C, E(R/m)), E(R/m)^{(X)}))
= fd_R(Hom_R(C, E(R/m)))
= C-fd_R(E(R/m)),
in which the first equality is from Theorem 2.9(i), the fourth equality is from Remark 2.16
and the fact that E(R/m)(X) is an injective cogenerator in the category of Rm -modules, and
the last one is from Theorem 2.9(ii).
(iv)⇐⇒(viii) and (v)⇐⇒(ix). Are similar to (ii)⇐⇒(vi).
The following theorem is a generalization of [21, Theorem 2.1].
Theorem 4.2. The following are equivalent:
(i) C is pointwise dualizing.
(ii) An R-module M is C-flat if and only if µi (p, M ) = 0 for all p ∈ Spec (R) whenever
i 6= ht (p).
(iii) An R-module M is flat if and only if µi (p, C ⊗R M ) = 0 for all p ∈ Spec (R)
whenever i 6= ht (p).
Proof. (i)=⇒(ii). First assume that M is C-flat. Set M = C ⊗R F , where F is a flat
R-module. Since C is pointwise dualizing, we have µi (p, C) = 0 for all p ∈ Spec (R) with
i 6= ht (p). Assume that
0 → C → E 0 (C) → E 1 (C) → ... → E i (C) → ...
is the minimal injective resolution of C. By applying the exact functor − ⊗R F to this
resolution, we find an exact complex
0 → M = C ⊗R F → E 0 (C) ⊗R F → E 1 (C) ⊗R F → ... → E i (C) ⊗R F → ..., (∗)
which is an injective resolution for M . By [7, Theorem 3.3.12] the injective R-module
E(R/p) ⊗R F is a direct sum of copies of E(R/p) for each p ∈ Spec (R). Now, since the
minimal injective resolution of M is a direct summand of the complex (∗), we get the result.
Conversely, suppose that M is an R-module such that µi (p, M ) = 0 for all p ∈ Spec (R)
whenever i ≠ ht(p). In order to show that M is C-flat, it is enough to prove that M_m is a C_m-flat R_m-module for all m ∈ Max(R). For if M_m is a C_m-flat R_m-module for all m ∈ Max(R), then Hom_R(C, M)_m ≅ Hom_{R_m}(C_m, M_m) is flat as an R_m-module for all m ∈ Max(R) by
Theorem 2.9(ii). Hence Hom R (C, M ) is a flat R-module and thus M is C-flat by Theorem
2.9(ii). Hence, replacing R by Rm , we can assume that (R, m) is local. Clearly we may
assume that M 6= 0. In this case we have id R (M ) < ∞ since by assumption µi (p, M ) = 0
for all p ∈ Spec (R) and all i > dim (R) . Hence the assumption in conjunction with Lemma
4.1, imply that M has a bounded injective resolution all of whose terms have finite C-flat
dimensions. More precisely, by Lemma 4.1, if E i is the i-th term in the minimal injective
resolution of M , then C-fdR (E i ) = i for all 0 ≤ i ≤ id (M ). Breaking up this resolution to
short exact sequences and using [19, Corollary 5.7], we can conclude that C-fdR (M ) = 0.
Hence M is C-flat, as wanted.
(ii)=⇒(iii). Assume that M is a flat R-module. Then C ⊗R M ∈ FC and µi (p, C ⊗R
M ) = 0 for all p ∈ Spec (R) whenever i 6= ht (p) by assumption. Conversely, suppose that
µi (p, C ⊗R M ) = 0 for all p ∈ Spec (R) whenever i 6= ht (p). Then, by assumption, C ⊗R M
is C-flat. Set C ⊗R M = C ⊗R F , where F is flat. Therefore C ⊗R M ∈ BC (R), whence
M ∈ AC (R) by [18, Theorem 2.8(b)]. Thus we have the isomorphisms
M ≅ Hom_R(C, C ⊗_R M) ≅ Hom_R(C, C ⊗_R F) ≅ F,
where the first and the last isomorphism hold since both M and F are in AC (R).
(iii)=⇒(i). Note that R is a flat R-module. Hence by assumption, if m ∈ Max (R), then
µ^i(m, C ⊗_R R) = 0 for all i > ht(m). Thus id_{R_m}(C_m) < ∞, as wanted.
Theorem 4.3. The following are equivalent:
(i) C is pointwise dualizing.
(ii) An R-module M is C-injective if and only if πi (p, M ) = 0 for all p ∈ Spec (R)
whenever i 6= ht (p).
(iii) An R-module M is injective if and only if πi (p, Hom R (C, M )) = 0 for all p ∈
Spec (R) whenever i 6= ht (p).
Proof. (i)=⇒(ii) . Assume that M is a nonzero C-injective R-module. Set M = Hom R (C, E)
with E injective. First, we show that M is cotorsion. Assume that F is a flat R-module. Then, by [7, Theorem 3.2.1], we have Ext^1_R(F, Hom_R(C, E)) ≅ Hom_R(Tor_1^R(F, C), E) = 0,
and hence M is cotorsion. Fix a prime ideal p of R and set k(p) = Rp /pRp . Note
that Hom_R(R_p, E) is an injective R-module and that Hom_R(R_p, E) ≅ ⊕_{q∈X} E(R/q), where
X ⊆ Ass R (E) and each element of X is a subset of p. There are isomorphisms
Tor_i^{R_p}(k(p), Hom_R(R_p, Hom_R(C, E))) ≅ Tor_i^{R_p}(k(p), Hom_R(C_p, E))
≅ Tor_i^{R_p}(k(p), Hom_{R_p}(C_p, Hom_R(R_p, E)))
≅ Tor_i^{R_p}(k(p), Hom_{R_p}(C_p, ⊕_{q∈X} E(R/q)))
≅ Hom_{R_p}(Ext^i_{R_p}(k(p), C_p), ⊕_{q∈X} E(R/q)),
where the last isomorphism is from [7, Theorem 3.2.13]. Now since Cp is dualizing for Rp ,
we have Ext iRp (k(p), Cp ) = 0 for all i 6= ht (p). Therefore πi (p, M ) = 0 for all i 6= ht (p).
Conversely, assume that M is a non-zero R-module with πi (p, M ) = 0 for all i 6= ht (p). By
assumption, the minimal flat resolution of M is of the form
· · · → F_i → · · · → F_1 → F_0 → M → 0,
in which F_i = ∏_{ht(p)=i} T_p for all i ≥ 1. Also, in view of [22, Lemma 3.1], we have F_0 = ∏_{ht(p)=0} T_p. Hence the minimal flat resolution of M is of the form
· · · → ∏_{ht(p)=i} T_p → · · · → ∏_{ht(p)=1} T_p → ∏_{ht(p)=0} T_p → M → 0. (∗)
Let E be an injective cogenerator. According to Lemma 2.10(i), it is enough to show
that Hom R (M, E) is C-flat.
In fact, by Theorem 4.2, we need only show that µ^i(p, Hom_R(M, E)) = 0 for all i ≥ 0 with i ≠ ht(p). Applying the exact functor Hom_R(−, E) to (∗), we get an injective resolution
0 → Hom_R(M, E) → Hom_R(∏_{ht(p)=0} T_p, E) → Hom_R(∏_{ht(p)=1} T_p, E) → · · · → Hom_R(∏_{ht(p)=i} T_p, E) → · · ·
for Hom_R(M, E). Note that Hom_R(∏_{ht(p)=i} T_p, E) is an injective R-module for all i ≥ 0. Set Hom_R(∏_{ht(p)=i} T_p, E) ≅ ⊕ E(R/q). We show that ht(q) = i. Since C is pointwise dualizing, by Lemma 4.1, we have C-fd_R(E(R/q)) = ht(q). On the other hand, we have the equalities
C-fd_R(E(R/q)) = C-fd_R(⊕ E(R/q))
= C-fd_R(Hom_R(∏_{ht(p)=i} T_p, E))
= C-id_R(∏_{ht(p)=i} T_p)
= i,
in which the third equality is from Lemma 2.10(i), and the last one is from Lemma 4.1. Hence µ^i(p, Hom_R(M, E)) = 0 for all i ≥ 0 with i ≠ ht(p), as wanted.
(ii)=⇒(iii). Assume that M is an injective R-module. Then Hom_R(C, M) ∈ I_C, and π_i(p, Hom_R(C, M)) = 0 for all p ∈ Spec(R) whenever i ≠ ht(p) by assumption. Conversely, suppose that π_i(p, Hom_R(C, M)) = 0 for all p ∈ Spec(R) whenever i ≠ ht(p). Then, by assumption, Hom_R(C, M) is C-injective. Set Hom_R(C, M) = Hom_R(C, I), where I is injective. Therefore Hom_R(C, M) ∈ A_C(R), whence M ∈ B_C(R) by [18, Theorem 2.8(a)].
Thus we have the isomorphisms
M ≅ C ⊗_R Hom_R(C, M) ≅ C ⊗_R Hom_R(C, I) ≅ I,
where the first and the last isomorphism hold since both M and I are in BC (R).
(iii)=⇒(i). Assume that m is a maximal ideal of R. Set k(m) = Rm /mRm . Since E(R/m)
is injective, by assumption we have π_i(m, Hom_R(C, E(R/m))) = 0 for all i ≠ ht(m). On the other hand, there are isomorphisms
Hom_{R_m}(Ext^i_{R_m}(k(m), C_m), E(k(m))) ≅ Tor_i^{R_m}(k(m), Hom_{R_m}(C_m, E(k(m))))
≅ Tor_i^{R_m}(k(m), Hom_{R_m}(C ⊗_R R_m, E(k(m))))
≅ Tor_i^{R_m}(k(m), Hom_R(R_m, Hom_{R_m}(C_m, E(k(m)))))
≅ Tor_i^{R_m}(k(m), Hom_R(R_m, Hom_R(C, E(R/m)))),
where the first isomorphism is from [7, Theorem 3.2.13], and the last one is from Remark
2.16. From these isomorphisms, it follows that Hom_{R_m}(Ext^i_{R_m}(k(m), C_m), E(k(m))) = 0 for
all i 6= ht (m), from which we conclude that Ext iRm (k(m), Cm ) = 0 for all i 6= ht (m), since
E(k(m)) is an injective cogenerator in the category of Rm -modules. Thus Cm is dualizing
for Rm , as required.
Corollary 4.4. Let C be pointwise dualizing. Then the flat cover of any C-injective R-module
is C-injective.
Proof. By Lemma 4.1, C-id_R(T_p) = 0 for any prime ideal p with ht(p) = 0. Hence T_p is C-injective for any prime ideal p with ht(p) = 0. Assume that M is a C-injective R-module. By Theorem 4.3, we have F(M) = ∏_{ht(p)=0} T_p. Now the result follows, since the class I_C is closed under arbitrary direct products.
Corollary 4.5. The R-module C is pointwise dualizing if and only if for any prime ideal p
of R,
π_i(p, Hom_R(C, E(R/p))) = 1 if i = ht(p), and 0 if i ≠ ht(p).
Proof. Assume that p ∈ Spec (R). Set k(p) = Rp /pRp . We have the following equalities
π_i(p, Hom_R(C, E(R/p))) = vdim_{k(p)} Tor_i^{R_p}(k(p), Hom_R(R_p, Hom_R(C, E(R/p))))
= vdim_{k(p)} Tor_i^{R_p}(k(p), Hom_{R_p}(C_p, Hom_{R_p}(R_p, E(R/p))))
= vdim_{k(p)} Tor_i^{R_p}(k(p), Hom_{R_p}(C_p, E(R/p)))
= vdim_{k(p)} Hom_{R_p}(Ext^i_{R_p}(k(p), C_p), E(R/p)),
where the second equality is from Remark 2.16, and the last equality is from [7, Theorem
3.2.13]. Now, C is pointwise dualizing if and only if Cp is the dualizing module of Rp for all
p ∈ Spec (R), and this is the case if and only if
Ext^i_{R_p}(k(p), C_p) ≅ k(p) if i = ht(p), and 0 if i ≠ ht(p),
for all p ∈ Spec(R). Thus we are done by the above equalities and the fact that Hom_{R_p}(k(p), E(R/p)) ≅ k(p).
In the following corollaries, we are concerned with the local cohomology. For an R-
module M , the i-th local cohomology module of M with respect to an ideal a of R, denoted
by H^i_a(M), is defined to be
H^i_a(M) = \varinjlim_{n≥1} Ext^i_R(R/a^n, M).
For the basic properties of local cohomology modules, please see the textbook [3].
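For orientation, we add a standard one-dimensional example (ours, computed from the Čech complex on a single parameter; it is not part of the original text):

```latex
% Added illustration (ours, not from the paper): the top local cohomology
% of a one-dimensional regular local ring, via the Cech complex.
\begin{example}
Let $R=k[[x]]$ with $\mathfrak m=(x)$. The \v{C}ech complex
$0\to R\to R_x\to 0$ computes $H^\bullet_{\mathfrak m}(R)$, so
$H^0_{\mathfrak m}(R)=0$ and $H^1_{\mathfrak m}(R)\cong R_x/R$,
an Artinian $R$-module, as [3, Theorem 7.1.6] predicts.
\end{example}
```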
Corollary 4.6. Let (R, m) be a Cohen-Macaulay local ring with dim(R) = d possessing a canonical module ω_R. Then π_i(m, H^d_m(R)) = δ_{i,d}, and π_i(q, H^d_m(R)) = 0 for any non-maximal prime ideal q whenever i ≠ ht(q).
Proof. By [3, Theorem 11.2.8], we have H^d_m(R) ≅ Hom_R(ω_R, E(R/m)), and hence H^d_m(R) is ω_R-injective. Assume that q is a non-maximal prime ideal of R. Then, by Theorem 4.3, we have π_i(q, H^d_m(R)) = 0 for all i ≠ ht(q). Finally, by Corollary 4.5, we have π_i(m, H^d_m(R)) = 0 for all i ≠ d and π_d(m, H^d_m(R)) = 1, as wanted.
If (R, m) is a Cohen-Macaulay local ring with dim (R) = d, then by [3, Corollary 6.2.9] the
only non-vanishing local cohomology of R with respect to m is Hdm (R). Also, if R admits a
canonical module, then by [7, Proposition 9.5.22], we have fd R (Hdm (R)) = d. The following
corollary describes the structure of the minimal flat resolution of Hdm (R).
Corollary 4.7. Let (R, m) be a d-dimensional Cohen-Macaulay local ring possessing a
canonical module. The minimal flat resolution of Hdm (R) is of the form
0 → \widehat{R_m} → · · · → ∏_{ht(p)=1} T_p → ∏_{ht(p)=0} T_p → H^d_m(R) → 0.
In the following corollary, we give another proof of [16, Corollary 3.7]. Our approach is
direct, and uses the well-known fact that the homology functor Tor can be computed by a
flat resolution.
Corollary 4.8. Let (R, m) be a d-dimensional Cohen-Macaulay local ring. Then
Tor_i^R(H^d_m(R), H^d_m(R)) ≅ H^d_m(R) if i = d, and Tor_i^R(H^d_m(R), H^d_m(R)) = 0 if i ≠ d.
Proof. Note that R̂ is a d-dimensional complete Cohen-Macaulay local ring, and hence admits a canonical module ω_{R̂}. The R-module H^d_m(R) is Artinian by [3, Theorem 7.1.6], and thus naturally has an R̂-module structure by [3, Remark 10.2.9]. Hence Tor_i^R(H^d_m(R), H^d_m(R)) is Artinian for all i ≥ 0 by [14, Corollary 3.2]. Thus there are isomorphisms

Tor_i^R(H^d_m(R), H^d_m(R)) ≅ Tor_i^R(H^d_m(R), H^d_m(R)) ⊗_R R̂
≅ Tor_i^{R̂}(H^d_m(R) ⊗_R R̂, H^d_m(R) ⊗_R R̂)
≅ Tor_i^{R̂}(H^d_{mR̂}(R̂), H^d_{mR̂}(R̂)),

in which the second isomorphism is from [7, Theorem 2.1.11], and the last one is flat base change [3, Theorem 4.3.2]. Also, we have the isomorphisms

H^d_m(R) ≅ H^d_m(R) ⊗_R R̂ ≅ H^d_{mR̂}(R̂) ≅ Hom_{R̂}(ω_{R̂}, E_{R̂}(R̂/mR̂)),

in which the first isomorphism holds because H^d_m(R) is Artinian, the second isomorphism is the flat base change, and the last one is local duality [3, Theorem 11.2.8]. Thus H^d_m(R) is a ω_{R̂}-injective R̂-module. Hence, by Corollary 4.7, the minimal flat resolution of H^d_m(R), as an R̂-module, is of the form

0 → \widehat{R̂_{mR̂}} → · · · → ∏_{ht(Q)=1} T_Q → ∏_{ht(Q)=0} T_Q → H^d_m(R) → 0,

in which T_Q is the completion of a free R̂_Q-module with respect to the QR̂_Q-adic topology, for Q ∈ Spec(R̂). Observe that the above resolution is a flat resolution of H^d_m(R) as an R-module, since the modules in the above resolution are all flat R-modules. Therefore, we can replace R by R̂ and assume that R is complete. So the minimal flat resolution of H^d_m(R) is of the form

0 → \widehat{R_m} → · · · → ∏_{ht(p)=1} T_p → ∏_{ht(p)=0} T_p → H^d_m(R) → 0,

in which T_p is the completion of a free R_p-module with respect to the pR_p-adic topology, for p ∈ Spec(R). Next, note that for each prime ideal p with p ≠ m, we have H^d_m(R) ⊗_R ∏ T_p = 0. Indeed, we can write H^d_m(R) = \varinjlim_{α∈I} M_α, where each M_α is a finitely generated submodule of H^d_m(R). Also T_p = Hom_R(E(R/p), E(R/p)^{(X)}) for some set X. Now since M_α is of finite length by [3, Theorem 7.1.3], we can take an element x ∈ m \ p such that xM_α = 0. But multiplication by x induces an automorphism on E(R/p) and hence on T_p. Consequently, multiplication by x on M_α ⊗_R ∏ T_p is both an isomorphism and zero. Hence M_α ⊗_R ∏ T_p = 0, from which we conclude that H^d_m(R) ⊗_R ∏ T_p = 0, since tensor commutes with direct limits. Thus Tor_i^R(H^d_m(R), H^d_m(R)) = 0 for i ≠ d. Finally, we have

Tor_d^R(H^d_m(R), H^d_m(R)) ≅ \widehat{R_m} ⊗_R H^d_m(R)
≅ H^d_{m\widehat{R_m}}(\widehat{R_m})
≅ Hom_{\widehat{R_m}}(ω_{\widehat{R_m}}, E_{\widehat{R_m}}(\widehat{R_m}/m\widehat{R_m}))
≅ Hom_{R_m}(ω_{R_m}, E_{R_m}(R_m/mR_m)) ⊗_{R_m} \widehat{R_m}
≅ Hom_{R_m}(ω_{R_m}, E_{R_m}(R_m/mR_m))
≅ Hom_R(ω_R, E(R/m)) ⊗_R R_m
≅ Hom_R(ω_R, E(R/m) ⊗_R R_m)
≅ Hom_R(ω_R, E(R/m))
≅ H^d_m(R),

in which the second isomorphism is the flat base change [3, Theorem 4.3.2], the third isomorphism is local duality [3, Theorem 11.2.8], and the fifth one is from [3, Remark 10.2.9], since Hom_{R_m}(ω_{R_m}, E_{R_m}(R_m/mR_m)) is an Artinian R_m-module and hence has a natural structure as an \widehat{R_m}-module.
The following theorem is a slight generalization of [22, Theorem 3.3].
Theorem 4.9. The following are equivalent:
(i) C is pointwise dualizing.
(ii) If M is a cotorsion R-module such that C-id R (M ) = n < ∞, then M admits
a minimal flat resolution such that πi (p, M ) = 0 for all p ∈ Spec (R) whenever
ht(p) ∉ {i, ..., i + n}.
Proof. (i) =⇒ (ii). We use induction on n. If n = 0, then we are done by Theorem 4.3.
Now assume inductively that n > 0 and the case n is settled. Fix a prime ideal p of R.
Assume that M is a cotorsion R-module with C-id R (M ) = n + 1. Hence M ∈ AC (R), and
so the IC -preenvelope of M is injective by [18, Corollary 2.4(b)]. Thus there exists an exact
sequence
0 → M → Hom R (C, I) → L → 0, (∗)
in which I is injective, and L = Coker (M → Hom R (C, I)). Note that L is cotorsion
since both M and Hom R (C, I) are cotorsion. Also, since both M and Hom R (C, I) are
in A_C(R), we have L ∈ A_C(R), and therefore Tor_1^R(C, L) = 0. On the other hand, C ⊗_R Hom_R(C, I) ≅ I by [7, Theorem 3.2.11]. Hence applying C ⊗_R − to (∗) yields
an exact sequence
0 → C ⊗R M → I → C ⊗R L → 0.
By Theorem 2.9(i), we have id R (C ⊗R M ) = n + 1. Therefore id R (C ⊗R L) = n, whence
C-id_R(L) = n. Now the induction hypothesis, applied to Hom_R(C, I) and L, yields that π_i(p, Hom_R(C, I)) = 0 for all i ≠ ht(p), and that π_i(p, L) = 0 for all ht(p) ∉ {i, ..., i + n}. Note that Ext^1_R(R_p, M) = 0 since M is cotorsion. Hence the exact sequence (∗) yields an exact sequence
0 → Hom_R(R_p, M) → Hom_R(R_p, Hom_R(C, I)) → Hom_R(R_p, L) → 0,
and the latter exact sequence, by applying k(p) ⊗_{R_p} −, yields the long exact sequence
· · · → Tor_{i+1}^{R_p}(k(p), Hom_R(R_p, Hom_R(C, I))) → Tor_{i+1}^{R_p}(k(p), Hom_R(R_p, L)) → Tor_i^{R_p}(k(p), Hom_R(R_p, M)) → Tor_i^{R_p}(k(p), Hom_R(R_p, Hom_R(C, I))) → · · · .
From the above long exact sequence, it follows that Tor_i^{R_p}(k(p), Hom_R(R_p, M)) = 0 for all ht(p) ∉ {i, ..., i + n + 1}, as wanted. This completes the inductive step.
(ii) =⇒ (i). Let m be a maximal ideal of R. Now Hom_R(C, E(R/m)) is C-injective, and hence by assumption π_i(m, Hom_R(C, E(R/m))) = 0 for all i ≠ ht(m). Now, by the same argument as in the proof of Theorem 4.3, we have Ext^i_{R_m}(k(m), C_m) = 0 for all i ≠ ht(m), whence C_m is dualizing for R_m.
Corollary 4.10. The following statements hold true:
(i) If C is pointwise dualizing, then C-id_R(F(M)) ≤ C-id_R(M) for any cotorsion R-module M.
(ii) If C-id_R(F(M)) ≤ C-id_R(M) for any R-module M, then C is pointwise dualizing.
Proof. (i). Assume that M is a cotorsion R-module. If C-id_R(M) = ∞, then we are done. So assume that C-id_R(M) = n < ∞. Then by Theorem 4.9, we have F(M) = ∏ T_p, where 0 ≤ ht(p) ≤ n. Now the result follows from Lemma 4.1.
(ii). Assume that m is a maximal ideal of R. We have to show that C_m is dualizing for R_m. Assume that x is a maximal R-sequence in m. Then fd_R(R/xR) < ∞, and Ass_R(C/xC) = {m} since x is also a maximal C-sequence. Hence we have the equalities
$$C\text{-fd}_R(C/xC) = \operatorname{fd}_R\bigl(\operatorname{Hom}_R(C, C/xC)\bigr) = \operatorname{fd}_R\bigl(\operatorname{Hom}_R(C, C \otimes_R R/xR)\bigr) = \operatorname{fd}_R(R/xR) < \infty,$$
in which the first equality is from Theorem 2.9(ii), and the third one holds because R/xR ∈ A_C(R). Assume that E is an injective cogenerator, and set (−)^∨ = Hom_R(−, E). Then C-id_R((C/xC)^∨) < ∞ by Lemma 2.10(ii). Now if F is the flat cover of (C/xC)^∨, then by assumption we have C-id_R(F) < ∞. Therefore, we have C-fd_R(F^∨) < ∞ by Lemma 2.10(i). Next, note that we have
C/xC ↪ (C/xC)^∨∨ ↪ F^∨.
Hence, the injective envelope of C/xC is a direct summand of F^∨. Thus, in fact, E(R/m) is a direct summand of F^∨, since R/m ↪ C/xC. It follows that C-fd_R(E(R/m)) < ∞, and hence we are done by Lemma 4.1, since m was arbitrary.
Acknowledgments. We thank the referee for very careful reading of the manuscript
and also for his/her useful suggestions.
References
1. L. Bican, R. El Bashir and E. Enochs, All modules have flat covers, Bull. London Math. Soc. 33 (2001), 385–390.
2. W. Bruns and J. Herzog, Cohen-Macaulay rings, Cambridge University Press, Cambridge, 1993.
3. M. P. Brodmann and R. Y. Sharp, Local cohomology: an algebraic introduction with geometric applications, Cambridge University Press, Cambridge, 1998.
4. L. W. Christensen, Gorenstein Dimensions, Lecture Notes in Math., vol. 1747, Springer, Berlin, 2000.
5. L. W. Christensen, Semi-dualizing complexes and their Auslander categories, Trans. Amer. Math. Soc. 353 (2001), 1839–1883.
6. E. Enochs, Flat covers and flat cotorsion modules, Proc. Amer. Math. Soc. 92 (1984), 179–184.
7. E. Enochs and O. Jenda, Relative Homological Algebra, de Gruyter Expositions in Mathematics 30, 2000.
8. E. Enochs and J. Xu, On invariants dual to the Bass numbers, Proc. Amer. Math. Soc. 125 (1997), 951–960.
9. H.-B. Foxby, Gorenstein modules and related modules, Math. Scand. 31 (1973), 267–284.
10. E. S. Golod, G-dimension and generalized perfect ideals, Trudy Mat. Inst. Steklov., Algebraic geometry and its applications 165 (1984), 62–66.
11. R. Hartshorne, Local cohomology, A seminar given by A. Grothendieck, Harvard University, Fall 1961, Springer-Verlag, Berlin, 1967.
12. H. Holm and P. Jørgensen, Semi-dualizing modules and related Gorenstein homological dimensions, J. Pure Appl. Algebra 205 (2006), 423–445.
13. H. Holm and D. White, Foxby equivalence over associative rings, J. Math. Kyoto Univ. 47 (2007), no. 4, 781–808.
14. B. Kubik, M. J. Leamer, and S. Sather-Wagstaff, Homology of artinian and Matlis reflexive modules I, J. Pure Appl. Algebra 215 (2011), no. 10, 2486–2503.
15. B. Kubik, M. J. Leamer, and S. Sather-Wagstaff, Homology of artinian and mini-max modules, J. Algebra 403 (2014), no. 1, 229–272.
16. M. Rahmani and A.-J. Taherizadeh, Tensor product of C-injective modules, preprint, 2015, available from http://arxiv.org/abs/1503.05492.
17. R. Takahashi, Some characterizations of Gorenstein local rings in terms of G-dimension, Acta Math. Hungar. 104 (2004), no. 4, 315–322.
18. R. Takahashi and D. White, Homological aspects of semidualizing modules, Math. Scand. 106 (2010), 5–10.
19. M. Salimi, E. Tavasoli, S. Sather-Wagstaff and S. Yassemi, Relative Tor functor with respect to a semidualizing module, Algebras and Representation Theory 17 (2014), no. 1, 103–120.
20. W. V. Vasconcelos, Divisor theory in module categories, North-Holland Math. Stud. 14, North-Holland Publishing Co., Amsterdam, 1974.
21. J. Xu, Flat covers of modules, Lecture Notes in Mathematics, vol. 1634, Springer, New York, 1996.
22. J. Xu, Minimal injective and flat resolutions of modules over Gorenstein rings, J. Algebra 175 (1995), 451–477.
23. S. Yassemi, G-dimension, Math. Scand. 77 (1995), no. 2, 161–174.
Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran.
E-mail address: [email protected]
E-mail address: [email protected]
Practical estimation of rotation distance
and induced partial order for binary trees
Jarek Duda
arXiv:1610.06023v1 [] 19 Oct 2016
Jagiellonian University, Golebia 24, 31-007 Krakow, Poland. Email: [email protected]
Abstract—Tree rotations (left and right) are basic local deformations that transform between two unlabeled binary trees of the same size. Hence, there is a natural problem of practically finding such a transformation path with a low number of rotations; the minimal number is called the rotation distance. Such a distance could be used, for instance, to quantify similarity between two trees for various machine learning problems, for example to compare hierarchical clusterings or arbitrarily chosen spanning trees of two graphs, as in the SMILES notation popular for describing chemical molecules.
We present an inexpensive practical greedy algorithm for finding a short rotation path, whose optimality remains to be determined. It uses an introduced partial order on binary trees of the same size: t1 ≤ t2 iff t2 can be obtained from t1 by a sequence of only right rotations. Intuitively, the shortest rotation path should go through the least upper bound or the greatest lower bound for this partial order. The algorithm finds a path through candidates for both points in the representation of a binary tree as a stack graph, which describes the evolution of the stack contents while processing a formula described by a given binary tree. The article is accompanied by a Mathematica implementation of all the procedures used (Appendix).
Keywords: tree rotation, algorithmics, machine learning
I. INTRODUCTION
The tree is a basic tool of computer science, used for example to represent hierarchical structure in data, e.g. in hierarchical clustering [1]. It is natural to ask how to evaluate similarity between such trees, and perhaps to propose a path of local deformations transforming one structure into the other. Natural candidates for the required elementary local deformation of a binary tree, maintaining the order of leaves, are tree rotations (left and right), which switch the levels of two nodes and reconnect their subtrees. They are used, for example, to balance binary search trees in the popular AVL method [2].
The minimal number of rotations needed to transform between two binary trees is a metric called the rotation distance. It was shown that for any two N-node trees with N ≥ 11, this distance is at most 2N − 6, and that there exist pairs of trees attaining this 2N − 6 bound ([3], [4]). Binary trees have multiple equivalent representations, for example as bracketings or as triangulations of a polygon, as used in these two cited articles.
We discuss a greedy algorithm for finding a rotation path that is based on a different representation: the evolution of the stack contents while processing such a bracketed formula, which can be found for example in Chapter 7 of the book "Concrete Mathematics" [5]. We will refer to this function as the stack graph; it is visualized in Fig. 1, and left/right rotations become
Figure 1. An example of a binary tree (top), the corresponding bracketing (center) and the stack graph (bottom), which describes the evolution of the stack contents while processing the formula described by the given binary tree (bracketing). It can be obtained by a post-order tree traversal: visiting a leaf corresponds to +1 in the graph ('x' in the bracketing; push a value to the stack), visiting an internal node to −1 (closing bracket: pop two values, then push their function).
simple lower/lift procedures, visualized in Fig. 2, which turn out to be convenient when searching for a short path.
This representation allows us to conclude that a natural relation is a partial order: t1 ≤ t2 iff there exists a sequence of right rotations transforming tree t1 into t2. This partial order for 4- and 5-leaf trees is presented in Fig. 3 and 4. These figures suggest that the shortest path between two trees has to go through their least upper bound (∧) or their greatest lower bound (∨) for this order; in other words, the path can be chosen (sorted) as first a sequence of right rotations and then of left rotations (∧), or the other way around (∨). However, this intuition still has to be verified.
We present a natural, inexpensive (O(n · path length)) greedy algorithm to find a common lift (CL) for the stack graphs of two trees, which corresponds to a candidate for the least upper bound (∧) of these trees. By taking mirror images of the trees, which switches left and right rotations and so reverses the order, this algorithm can also provide a candidate for the greatest lower bound (∨). Its optimality still has to be verified, but it can already be used to quickly find an upper bound for the rotation distance, or to approximate this distance and find a short path, especially in situations where optimality is not crucial: for example in various machine learning settings, like evaluating the distance between different hierarchical clusterings.
Another example of a situation where we need to quickly evaluate the distance between trees is comparing graphs for which we can define a spanning tree in a unique way. We cannot do this for general graphs, as it would solve the graph isomorphism problem, but such spanning trees are a popular tool to describe e.g. chemical molecules in the so-called SMILES notation [6]. Such trees are more complex than unlabeled binary ones: the degree of a node can be larger than two, and both vertices and edges may have types worth including in the definition of the distance; this will require a generalization of the discussed method.
II. PARTIAL ORDER AND GREEDY SEARCH FOR COMMON LIFT
Denote the set of all n-leaf unlabeled binary trees by T_n; its size $|T_n| = \binom{2n-2}{n-1}/n$ is a Catalan number. Each such tree has n − 1 internal nodes of degree 2; denote the total number of its nodes by N = 2n − 1.
Let us define its stack graph as in Fig. 1:
Figure 2. Tree rotations (left) and their analogues for the corresponding bracketing (gray) and stack graph (right). Here α, β and γ are nonnegative graphs of even width (possibly zero), changing by ±1 per position. The presented lift will be denoted by l_{ij}, where i, j are the marked positions.
Figure 3. Partial order for all 4-leaf binary trees (top) and their stack graphs (bottom). Each arrow denotes the possibility of a single right rotation (or lift).
Definition 1. The stack graph s_t : {0, . . . , N} → ℕ for a tree t ∈ T_n is the function defined by the following conditions:
• s_t(0) = 0, s_t(N) = 1,
• for i = 1, . . . , N: s_t(i) − s_t(i − 1) = 1 if the i-th node in the post-order traversal of t is a leaf, and −1 otherwise.
The stack graph is fixed at its endpoints (s_t(0) = 0, s_t(N) = 1), and is at least 1 for i ∈ {1, . . . , N}. For better visualization, it will be depicted with points joined by lines, sometimes without the fixed [0, 1] range (Fig. 3, 4). Denote by S_n = s(T_n) the set of all stack graphs for n-leaf binary trees.
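As a quick illustration of Definition 1, the following Python sketch computes the stack graph of a tree given as nested pairs with 'x' leaves (this encoding and the function name are our own illustrative choices; the Appendix contains the article's Mathematica routines):

def stack_graph(t):
    # Stack graph of a binary tree (Definition 1), returned as the
    # list [s(0), ..., s(N)] built by post-order traversal:
    # +1 for every leaf, -1 for every internal node.
    s = [0]
    def visit(node):
        if node == 'x':            # leaf: push a value onto the stack
            s.append(s[-1] + 1)
        else:                      # internal node: pop two values, push one
            left, right = node
            visit(left)
            visit(right)
            s.append(s[-1] - 1)
    visit(t)
    return s

print(stack_graph((('x', 'x'), ('x', 'x'))))  # [0, 1, 2, 1, 2, 3, 2, 1]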
The tree rotation operations are presented in Fig. 2. In bracketing notation, a right rotation changes ((αβ)γ) into (α(βγ)), and a left rotation does the opposite, where α, β, γ represent some formulas (subtrees), which may degenerate into single variables (leaves).
As we can see in this figure, a right rotation corresponds to a lift of some {i, . . . , j} range by the vector (−1, 1). It can be degenerate: i = j for leaves.
Definition 2. For two stack graphs s1, s2 ∈ S_n we will say that s2 is the (i, j) lift of s1, denoted s2 = l_{ij}(s1), if the following conditions are fulfilled:
1) i ≤ j, s1(i) = s1(j),
2) s1(i − 1) = s1(i + 1) = s1(j − 1) = s1(i) + 1,
3) for i < k < j, s1(k) > s1(i),
4) for k < i and j < k, s1(k) = s2(k),
5) for i ≤ k ≤ j, s2(k − 1) = s1(k) + 1.
The value of the stack graph will be referred to as the level. Condition 1) says that i ≤ j are on the same level of s1; 2) says that there is first a '∨' shape at i, then a '\' shape at j. Condition 3) enforces s1 staying above the i, j level in between. Finally, 4) and 5) say that s1 and s2 differ only on the [i, j] range, on which s2 is the lift of s1 by the vector (−1, 1).
Obviously, lifting cannot decrease the value at any position:
Observation 3. If t2 is a right rotation of t1, then for all i ∈ {0, . . . , N}, s_{t1}(i) ≤ s_{t2}(i).
Figure 4. Partial order for all 5-leaf binary trees (top) and their stack graphs (bottom). Each arrow denotes the possibility of a single right rotation (or lift). Observe the symmetry of the top diagram: taking the mirror image of all trees switches left and right rotations, reversing the order.
This allows us to conclude antisymmetry of the following relation; its reflexivity and transitivity are obvious, making it a partial order on T_n:
Definition 4. For t1, t2 ∈ T_n we define the partial order t1 ≤ t2 iff t1 = t2 or there exists a sequence of right rotations from t1 to t2.
Examples of this partial order for 4- and 5-leaf trees are presented in Fig. 3 and 4, respectively. These examples suggest that the shortest rotation path has to go through the least upper bound (∧) or the greatest lower bound (∨); however, this does not have to be true in general. Having an algorithm that searches for a short path going through the least upper bound (∧), we can use it to find a path going through the greatest lower bound (∨) by taking mirror images of both trees, which switches left and right rotations, reversing this order. There is a nontrivial relation between the stack graph of a tree and that of its mirror image, examples of which are presented in various figures of this paper; it can be found in O(n) time and memory (see mirror[] in the Appendix).
To find a candidate for the least upper bound, we can take both stack graphs and apply a sequence of lifts to the locally lower one, finally equalizing them and obtaining a candidate for the lowest common lift together with paths to it. An example of this process is presented in Fig. 5. It is given (without tracking the changes) as Algorithms 1 and 2.
Its naive implementation (see findrotationpath[] in the Appendix) is linear in n and the path length. Figure 7 shows
Figure 5. Example of finding a common lift using the greedy algorithm. Each step finds the lowest level at which the two stack graphs disagree, finds the rightmost such position at this level (i), finds the closest position at this level to the right of it (j), and then lifts the (i, j) segment. Such steps continue until both stack graphs are equal, yielding a common lift.
examples of common lifts found by this algorithm, and Figure 8 shows an example of a rotation path found for trees with 30 leaves.
III. CONCLUSIONS AND FURTHER PERSPECTIVES
We presented a practical way to search for a short rotation path between two unlabeled binary trees of the same size. Its optimality is not guaranteed at this moment, nor have counterexamples been found. It can be used to find a rotation path and to bound from above, or approximate, the rotation distance.
Algorithm 1 Greedy common lift for s1, s2 stack graphs
while s1 ≠ s2 do
  lvl = min{s1[k] : s1[k] ≠ s2[k]}
  i = max{k : s1[k] ≠ s2[k], lvl = min(s1[k], s2[k])}
  if s1[i] < s2[i] then
    j = min{k > i : s1[k] = lvl}; lift(s1, i, j)
  else
    j = min{k > i : s2[k] = lvl}; lift(s2, i, j)
  end if
end while
Algorithm 2 Greedy shortest path for s1, s2
find common lift for s1 and s2
find common lift for mirror(s1) and mirror(s2)
take the shorter one
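For concreteness, the following Python sketch is a direct port of Algorithm 1, using the lift() function defined after Definition 2 (Algorithm 2 would additionally run it on the mirrored trees and keep the shorter path; the mirror step is omitted here):

def common_lift(s1, s2):
    # Greedy common lift of two stack graphs (Algorithm 1).
    # Returns the common lift and the number of lifts performed.
    s1, s2 = list(s1), list(s2)
    lifts = 0
    while s1 != s2:
        # lowest level at which the two graphs disagree
        lvl = min(min(a, b) for a, b in zip(s1, s2) if a != b)
        # rightmost disagreeing position attaining this level
        i = max(k for k in range(len(s1))
                if s1[k] != s2[k] and min(s1[k], s2[k]) == lvl)
        if s1[i] < s2[i]:
            j = next(k for k in range(i + 1, len(s1)) if s1[k] == lvl)
            s1 = lift(s1, i, j)
        else:
            j = next(k for k in range(i + 1, len(s2)) if s2[k] == lvl)
            s2 = lift(s2, i, j)
        lifts += 1
    return s1, lifts

# Left comb to right comb on 4 leaves reaches the common lift in 2 lifts:
print(common_lift([0, 1, 2, 1, 2, 1, 2, 1], [0, 1, 2, 3, 4, 3, 2, 1]))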
The basic question to investigate is the optimality of this algorithm. If it is not optimal, one may look for improvements, derive bounds on its inaccuracy, and try to characterize counterexamples. If it is optimal, the proof might go through the following steps, each of which might turn out to be false:
1) The shortest rotation path has to go through the least upper bound or the greatest lower bound of the two trees. In other words, the shortest rotation path can be chosen (sorted) as only right rotations first and then only left rotations, or the opposite.
2) The common lift found by the greedy algorithm corresponds to the least upper bound.
3) The greedy algorithm finds the shortest path to the obtained common lift, i.e., the number of lifts cannot be reduced.
Another large topic is applying this or extended methods to various problems, especially in machine learning, where lack of optimality does not seem crucial. Some extensions will be needed, such as a generalization to non-binary trees, which can be realized for example by splitting a node of higher degree into a few binary nodes. Also, in some applications we should distinguish types of nodes and edges (e.g. atoms and molecular bonds), which might require modifying the metric and thus the optimization problem.
A related line of work is applying this method to evaluate the similarity of graphs by comparing some of their arbitrarily chosen spanning trees. Choosing an isomorphism-independent spanning tree is generally a difficult problem, as it would allow one to solve the graph isomorphism problem. However, it can be useful for some families of graphs, used for example in the SMILES representation of chemical molecules.
APPENDIX
This appendix contains the Wolfram Mathematica sources used to generate the figures in this article. It generates a random binary tree assuming a uniform probability distribution among trees with a given number of leaves. For this purpose, it uses enumerative decoding [7] to get a random sequence of +1 and −1 summing to 1, then finds its unique cyclic rotation giving a stack graph.
(* enumerative decoding *)
dec[n_, w_, X_] := (l = w; x = X;
Table[b = Binomial[n - i, l];
If[x < b, 0, x -= b; l--; 1], {i, n}]);
(* generate stack graph of n leaf random tree *)
randtree[n_] := (nn = 2 n - 1;
s = 2 dec[2 n - 1, n,
RandomInteger[Binomial[2 n - 1, n - 1] - 1]] - 1;
Do[s[[i]] += s[[i - 1]], {i, 2, nn}];
min = Min[s];
div = Max[Table[If[s[[i]] == min, i, 0], {i, nn}]];
Join[s[[div ;; nn]], 1 + s[[1 ;; div]]] - min);
(* draw tree from stack graph *)
drawtree[s_] := (st = Table[0, {i, Length[s]}];
stp = 1; vn = 1; edg = {};
Do[If[s[[i + 1]] - s[[i]] == 1, st[[stp++]] = vn++,
AppendTo[edg,
{st[[stp - 2]] -> vn, st[[stp - 1]] -> vn}];
st[[stp - 2]] = vn++; stp--], {i, 1, Length[s] - 1}];
TreePlot[Sort[Flatten[edg]], Automatic, Length[s] - 1,
VertexLabeling -> False]);
(* mirror image of tree from stack graph *)
mirror[s_] := (ls = Length[s]; p = 0;
pbr = Table[i, {i, ls/2}]; nbr = Table[0, {i, ls/2}];
Do[If[s[[i + 1]] - s[[i]] == 1, p++,
brac = pbr[[pbr[[p]] - 1]];
nbr[[pbr[[p]] = brac]]++], {i, ls - 1}];
S = Table[0, {i, ls}]; p = 1;
Do[p++; S[[p]] = S[[p - 1]] + 1;
Do[p++; S[[p]] = S[[p - 1]] - 1, {j, 1, nbr[[i]]}]
,{i, ls/2, 1, -1}]; S);
(* the greedy algorithm to find path *)
lift[s_, f_, t_] :=
Join[s[[1 ;; f - 1]], s[[f + 1 ;; t]] + 1, {s[[t]]},
s[[t + 1 ;; Length[s]]]];
findrotationpath[S1_, S2_] := (
s1 = S1; s2 = S2; ll = {s1}; rl = {s2};
While[s1 != s2, min = Infinity;
Do[If[s1[[i]] != s2[[i]], m = Min[s1[[i]], s2[[i]]];
If[m <= min, min = m; pm = i]], {i, Length[s1]}];
px = pm + 2;
If[s1[[pm]] < s2[[pm]],
While[s1[[px]] > min, px++];
AppendTo[ll, s1 = lift[s1, pm, px]],
While[s2[[px]] > min, px++];
PrependTo[rl, s2 = lift[s2, pm, px]]]];
Join[ll, rl[[2 ;; Length[rl]]]])
(* example of application *)
n = 10; S1 = randtree[n]; S2 = randtree[n];
path1 = findrotationpath[S1, S2];
path2 = findrotationpath[mirror[S1], mirror[S2]];
Print[Row[Table[drawtree[path1[[i]]], {i, Length[path1]}]]]
Row[Table[drawtree[path2[[i]]], {i, Length[path2]}]]
REFERENCES
[1] J. H. Ward Jr, “Hierarchical grouping to optimize an objective function,”
Journal of the American statistical association, vol. 58, no. 301, pp. 236–
244, 1963.
[2] G. Adelson-Velsky and E. Landis, “An algorithm for the organization of
information,” Proceedings of the USSR Academy of Sciences, vol. 146,
pp. 263–266, 1962.
[3] D. D. Sleator, R. E. Tarjan, and W. P. Thurston, “Rotation distance,
triangulations, and hyperbolic geometry,” Journal of the American Mathematical Society, vol. 1, no. 3, pp. 647–681, 1988.
[4] L. Pournin, “The diameter of associahedra,” Advances in Mathematics,
vol. 259, pp. 13–42, 2014.
[5] D. E. Knuth, R. L. Graham, O. Patashnik et al., “Concrete mathematics,” Addison-Wesley, 1989.
[6] D. Weininger, “Smiles, a chemical language and information system. 1.
introduction to methodology and encoding rules,” Journal of chemical
information and computer sciences, vol. 28, no. 1, pp. 31–36, 1988.
[7] L. Oktem, Hierarchical enumerative coding and its applications in image
compression. Tampere University of Technology, 1999.
Figure 6. Example of a rotation path found by the greedy algorithm for the common lift, for the original trees (top) and their mirror images (bottom). The marked trees correspond to the common lift. The upper path is shorter, so it will be the final answer.
Figure 7. Five examples (columns) of the common lift found by the greedy algorithm (highest orange line) for the stack graphs of two random trees (lower: blue and green lines). The upper row corresponds to stack graphs of the original trees: the common lift is a candidate for the least upper bound. The lower row corresponds to stack graphs of the mirror images of the trees.
Figure 8. Example of a rotation path found for two random 30-vertex trees. The marked tree corresponds to the common lift.
A Statistical Characterization of Localization Performance in Wireless Networks
Christopher E. O’Lone, Graduate Student Member, IEEE, Harpreet S. Dhillon, Member, IEEE, and R. M. Buehrer, Fellow, IEEE
arXiv:1710.07716v1 [cs.NI] 20 Oct 2017
Abstract
Localization performance in wireless networks has been traditionally benchmarked using the Cramér-Rao lower bound (CRLB), given a fixed geometry of anchor nodes and a target. However, by endowing
the target and anchor locations with distributions, this paper recasts this traditional, scalar benchmark
as a random variable. The goal of this work is to derive an analytical expression for the distribution of
this now random CRLB, in the context of Time-of-Arrival-based positioning.
To derive this distribution, this work first analyzes how the CRLB is affected by the order statistics
of the angles between consecutive participating anchors (i.e., internodal angles). This analysis reveals
an intimate connection between the second largest internodal angle and the CRLB, which leads to
an accurate approximation of the CRLB. Using this approximation, a closed-form expression for the
distribution of the CRLB, conditioned on the number of participating anchors, is obtained.
Next, this conditioning is eliminated to derive an analytical expression for the marginal CRLB
distribution. Since this marginal distribution accounts for all target and anchor positions, across all
numbers of participating anchors, it therefore statistically characterizes localization error throughout an
entire wireless network. This paper concludes with a comprehensive analysis of this new network-wide CRLB paradigm.
Index Terms
Cramér-Rao lower bound, localization, order statistics, Poisson point process, stochastic geometry,
Time of Arrival (TOA), mutual information, wireless networks.
The authors are with Wireless@VT, Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg,
VA, USA. Email: {olone, hdhillon, buehrer}@vt.edu.
This paper was presented in part at the IEEE ICC 2017 Workshop on Advances in Network Localization and Navigation
(ANLN), Paris, France [1].
I. INTRODUCTION
The Global Positioning System (GPS) has for decades been the standard mechanism for position location anywhere in the world. However, the deployment locations of recent and emerging
wireless networks have begun to put a strain on the effectiveness of GPS as a localization solution.
For example, as populations increase, precipitating the expansion of urban environments, cell
phone use in urban canyons, as well as indoors, is continually increasing. The rise of these GPS-constrained environments highlights the need to fall back on the existing network infrastructure
for localization purposes.
Additionally, with the emergence of Wireless Sensor Networks (WSNs) and their increased
emphasis on energy efficiency and cost-effectiveness, [2], [3], the possibility of equipping each
potential target node with a GPS chip quickly becomes impractical. Furthermore, the deployment of these networks in GPS-constrained environments further necessitates a reliance on the
terrestrial network for a localization solution. Thus, localization within a network, performed by
the network itself in the absence of GPS, has begun to garner attention, [4], [5].
Benchmarking localization performance in wireless networks has traditionally been done using
the Cramér-Rao lower bound, which provides a lower bound on the position error of any unbiased
estimator [6]. Common practice has been to analyze the CRLB in fixed scenarios of anchor nodes
and a target, e.g., [7], [8], [9], [10]. This strategy produces a scalar/fixed value for the CRLB and
is specific to the scenario being analyzed. While this idea does provide insight into fundamental
limits of localization performance, it is rather limited in that it does not take into account all
possible setups of anchor nodes and target positions within a network.
In order to account for all possible setups, it is useful to appeal to the field of stochastic
geometry. Whereas in the past stochastic geometry has been applied towards the study of
“connectivity, capacity, outage probability, and other fundamental limits of wireless networks,”
[11], [12], we now apply it towards the study of localization performance in wireless
networks. Modeling anchor node and target placements with point processes opens the possibility
of characterizing the CRLB over all setups of anchor nodes and target positions. Thus, the CRLB
is no longer a fixed value, but rather a random variable (RV) conditioned on the number of
participating anchors, where the randomness is induced by the inherent randomness of the anchor
positions. Upon marginalizing out the number of participating anchors, the resulting marginal
distribution of the CRLB will characterize localization performance throughout an entire wireless
network.
A. Related Work
The quest for a network-wide distribution of localization performance comprises two main
steps. The first step involves finding the distribution of the CRLB conditioned on the number
of participating anchor nodes, and the second step involves finding the probability that a given
number of anchors can participate in a localization procedure.
With regards to the first step, there have been several attempts in the literature to obtain this
conditional distribution. An excellent first attempt can be found in the series of papers, [13],
[14], [15], in which approximations of this conditional distribution were presented for Received-Signal-Strength (RSS), Time-of-Arrival (TOA), and Angle-of-Arrival (AOA) based localization,
respectively. These approximations were obtained through asymptotic arguments by driving the
number of participating anchor nodes to infinity. While these approximate distributions are very
accurate for larger numbers of participating anchors, they are less than ideal for lower numbers.
However, it is desirable that this conditional distribution be accurate for lower numbers of
participating anchors, since this is the dominant case in terrestrial networks, e.g., cellular.
This conditional distribution of the CRLB was also explored in [16]. Here, the authors were
able to derive the true expression for this conditional distribution through a clever re-writing of
the CRLB using complex exponentials. This distribution was then used to derive and analyze the
so-called “localization outage probability” in scenarios with a fixed number of randomly placed
anchor nodes. While this expression represents the true conditional distribution, its complexity
puts it at a disadvantage over simpler approximations (discussed further in Section III-D).
The second step, which involves finding the participation probability of a given number of
anchor nodes, was explored in [17]. In this work, the authors modeled a cellular network with a
homogeneous Poisson point process (PPP), which consequently allowed them to derive bounds
on L-localizability, i.e., the probability that a mobile device can hear at least L base stations for
participation in a localization procedure.1 Further, by employing a “dominant interferer analysis,”
they were able to derive an accurate expression for the probability of L-localizability, which
can easily be extended to give the probability of hearing a given number of anchor nodes for
participation in a localization procedure.
1 The
term hearability (or hearable) is used to describe anchor nodes that are able to participate in a localization procedure,
i.e., their received SINR at the target is above some threshold.
B. Contributions
This paper proposes a novel statistical characterization of a wireless network’s ability to
perform TOA-based localization. Using stochastic geometry to model target and anchor node
placements throughout a network, and using the CRLB as the localization performance benchmark, this paper presents an analytical derivation of the CRLB’s distribution.2 This distribution
offers many insights into localization performance within wireless networks that previously were
only attainable through lengthy, parameter-specific, network simulations. Thus, this distribution
1) offers a means for comparing networks in terms of their localization performance, by
enabling the calculation of network-wide localization statistics, e.g., avg. localization error,
2) unlocks insight into how changing network parameters, such as SINR thresholds, processing
gain, frequency reuse, etc., affect localization performance throughout a network, and
3) provides network designers with an analytical tool for determining whether a network meets,
for example, the FCC E911 standards [18].
In pursuit of this distribution, this paper makes four key contributions. First, this work presents
an analysis of how the CRLB is affected by the order statistics of the internodal angles. This
analysis reveals an intimate connection between the second largest internodal angle and the
CRLB, which leads to an accurate approximation of the CRLB. Second, this approximation is
then used to obtain the distribution of the CRLB conditioned on the number of hearable anchors.
Although this is a distribution of the CRLB approximation, its simplicity and accuracy offer clear
advantages over the true distribution presented in [16] and the approximate distribution presented
in [14]. These advantages are discussed further in Section III-D.
Third, this work then takes a major step by combining this conditional distribution of the CRLB
with the distribution of the number of participating anchors. This eliminates the conditioning
on a given number of anchors, allowing for the analytical expression of the marginal CRLB
distribution to be obtained. Since this marginal distribution now simultaneously accounts for
all possible target and anchor node positions, across all numbers of participating anchor nodes,
it therefore statistically characterizes localization error throughout an entire wireless network,
and thus signals a departure from the existing literature. Additionally, since the two component
distributions are parameterized by various network parameters, our resulting marginal distribution
2 We will be using the square root of the CRLB as our performance benchmark; however, we just state the CRLB here so as not to unnecessarily clutter the discussion. This will be described further in Section III-A.
of the CRLB will also be parameterized by these network parameters. Consequently, our final
contribution involves a comprehensive analysis of this new network-wide CRLB paradigm, where
we examine how varying network parameters affects the distribution of the CRLB — thereby
revealing how these network parameters affect localization performance throughout the network.
II. P ROBLEM S ETUP
This section details the assumptions we propose in determining the network layout as well
as the localization procedure. Additionally, we describe important notation and definitions used
throughout the paper and conclude with how the assumptions impact the network setup.
A. Network Setup and Localization Assumptions
Assumption 1. We assume a ubiquitous, two-dimensional wireless network with anchor nodes
distributed according to a homogeneous PPP over R2 . Further, we assume potential targets to be
distributed likewise, where the anchor and target point processes are assumed to be independent.
Remark. This assumption for modeling wireless networks is common in the literature, e.g., [19],
[20], [21], [22], [23].
Assumption 2. We assume a 2-Way TOA-based positioning technique is used within the two-dimensional network.
Remark. Although Time-Difference-of-Arrival (TDOA) is also commonly implemented, 2-Way
TOA represents a viable approach, and additionally offers a lower bound on TDOA performance.
Furthermore, the 2-Way assumption eliminates the need for clock synchronization between the
target and anchor nodes.
Assumption 3. Range measurements are independent and exhibit zero-mean, normally distributed range error.
Remark. A reader familiar with wireless positioning will recognize this as a classic Line-of-Sight
(LOS) assumption. However, those familiar with localization in terrestrial networks will realize
that Non-Line-of-Sight (NLOS) measurements are more common. Thus, while we move forward
under this LOS assumption in order to make progress under this new paradigm, we will see in
Section IV that we can adapt our model to accommodate NLOS measurements by selecting
ranging errors consistent with NLOS propagation.
TABLE I
SUMMARY OF NOTATION

f_X(·) : Probability distribution function (pdf) of X
F_X(·) : Cumulative distribution function (cdf) of X
h(X) : Differential entropy of X
I(X; Y) : Mutual information between X and Y
u(x) : Unit step function, 0 when x < 0, 1 when x ≥ 0
N(µ, σ²) : Normal distribution, mean µ, variance σ²
‖·‖ : ℓ2-norm
P[A] : Probability of event A
1[·] : Indicator function; 1[A] = 1 if A true, 0 otherwise
⌊·⌋ : Floor function
tr(X) : Trace of the matrix X
X^T : Transpose of the matrix X
L : Number of participating anchors, L ∈ {0, 1, 2, . . . }
N : Max anchors tasked to perform localization
S : The localization performance benchmark
M : Fixed localization error, in meters, M ∈ ℝ+
Θ_k : Anchor node angle k, a random variable
θ_k : A realization of Θ_k
∠_k : Internodal angle k, a random variable
ϕ_k : A realization of ∠_k
X_(k) : k-th order statistic of a sequence {X_i} of RVs
λ : Density of PPP of anchor locations
α : Path loss exponent, α > 2
γ : Processing gain
β : Post-processing SINR threshold
q : Average network load (i.e., network traffic)
K : Frequency reuse factor
σ_r² : Common range error variance
Assumption 4. The range error variance, σr2 , is common among measurements from participating
anchor nodes and is considered a known quantity.
Remark. This assumption is often made in the literature for range-based localization, e.g., [10],
[25], and although not realistic in every scenario, it allows us to gain initial insight into the
problem and will be relaxed in future work.
B. Notation
The notation used throughout this paper can be found in Table I.
C. Localizability
We now define the terms localizable and unlocalizable, which were introduced in [17].
Definition 1. We say that a target is localizable if it detects localization signals from a sufficient
number of anchor nodes such that its position can be determined without ambiguity.
Remark. Under Assumption 2, this implies that L ≥ 3. We also define unlocalizable to be the
negation of Definition 1.
For the purposes of this setup and subsequent derivations, we will initially only consider
scenarios in which the target is localizable, to avoid unnecessary complication. Later in Section
III-E and those which follow, we will account for scenarios in which the target is unlocalizable,
and will modify our results accordingly.
D. Impact of Assumptions
With these assumptions in place, we now describe how they impact the network setup.
From Assumption 1, since the anchors and potential targets are distributed by independent,
homogeneous PPPs over R2 (i.e., stationary), then without loss of generality, we may perform
our analysis for a typical target placed at the origin of the xy-plane [21]. This is due to the
fact that the independence and stationarity assumptions imply that no matter where the target is
placed in the network, the distribution of anchors relative to the target appears the same.
Next, we assume that the number of hearable anchors is some fixed value, L, and begin by
numbering these anchors in terms of their increasing distance from the origin (target position).
This is depicted in Fig. 1 for a particular realization of a homogeneous PPP in which there
are four hearable anchor nodes. Fig. 1 also depicts how their corresponding angles, measured
counterclockwise from the +x-axis, are labeled accordingly. Assumption 1 further implies that
these angles of the hearable anchors are i.i.d. random variables that come from a uniform
distribution on [0, 2π).
Definition 2. If the target is placed at the origin of an xy-plane, then the term anchor node angle, Θ_k, is defined to be the angle corresponding to hearable anchor node k, measured counterclockwise from the +x-axis. Note that Θ_k ~ i.i.d. unif[0, 2π), ∀k ∈ {1, . . . , L}.
In later sections we will see that the distances between the anchor nodes and the target are
important for determining how many anchors are able to participate in a given localization procedure. However, under Assumption 4, once particular anchor nodes are identified as participating
in a localization procedure, then they are endowed with the common range error variance. As
we will see in the following section, this assumption, along with Assumptions 2 and 3 will
lead to the CRLB (with L fixed) as being dependent on only the angles between participating
anchor nodes (i.e., internodal angles). Thus, since the CRLB expression is only dependent on
the internodal angles, the distances between participating anchor nodes and the target need not
be considered. Hence, we may view the participating anchors as being placed on a circle about
the origin. This is depicted in Fig. 2, under the same PPP realization as in Fig. 1.
[Figure 1: a realization of anchor positions around a target at the origin of the xy-plane, with anchor node angles θ1, . . . , θ4 marked. Figure 2: the equivalent setup on a circle, with ordered angles θ(k) and internodal angles ϕ(k) marked.]
Fig. 1. INITIAL LABELING SCHEME. The dots represent a particular realization of anchors placed according to a homogeneous PPP. The origin represents the location of the target. The anchors participating in the localization procedure are labeled in increasing order w.r.t. their distance from the origin. Their corresponding anchor node angles are labeled accordingly. Note the realization of the RV Θ_k is Θ_k = θ_k.
Fig. 2. EQUIVALENT SETUP. This is the realization from Fig. 1, where only participating anchor nodes and the internodal angles they trace out are considered, whereas their distances from the target are not. The realizations of the RVs are given by Θ_(k) = θ_(k) and ∠_(k) = ϕ_(k). Note, the anchor node angle order stats “renumber” the participating anchors in terms of increasing angle c.c.w. starting from the +x-axis.
Next, we formally define the term internodal angle. Since the anchor node angles are such that Θ_k ~ i.i.d. unif[0, 2π) for k ∈ {1, . . . , L}, we may examine their corresponding order statistics, Θ_(1), Θ_(2), . . . , Θ_(L), where 0 ≤ Θ_(1) ≤ Θ_(2) ≤ · · · ≤ Θ_(L) ≤ 2π by definition. Thus, the order statistics of the participating anchor node angles effectively “renumber” the nodes in terms of increasing angle, starting counterclockwise from the +x-axis. This is also depicted in Fig. 2.
Definition 3. If participating anchor nodes are considered according to their anchor node angle
order statistics, then an internodal angle, ∠ k , is defined to be the angle between two consecutive
participating anchor nodes. That is, ∠ k = Θ(k+1) − Θ(k) for 1 ≤ k < L and ∠ L = 2π − (Θ(L) − Θ(1) ).
Remark. Since the internodal angles are functions of RVs, they themselves are RVs. Thus, we
may also consider their order statistics, ∠(1), ∠(2), . . . , ∠(L) . These order statistics of the internodal
angles are depicted for the particular PPP realization in Fig. 2.
In summary, Fig. 2 depicts an example of a typical setup realization, given L = 4, once all of the assumptions are taken into consideration.
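As a small illustration of Definitions 2 and 3, the following Python sketch (our own, using NumPy) samples L i.i.d. anchor node angles and computes the resulting internodal angles:

import numpy as np

def internodal_angles(L, rng):
    # Sample L anchor node angles i.i.d. uniform on [0, 2*pi) (Definition 2),
    # sort them (order statistics), and return the L internodal angles
    # between consecutive anchors (Definition 3), including the wrap-around.
    theta = np.sort(rng.uniform(0.0, 2 * np.pi, L))
    return np.append(np.diff(theta), 2 * np.pi - (theta[-1] - theta[0]))

rng = np.random.default_rng(1)
ang = internodal_angles(4, rng)
print(ang, ang.sum())   # the internodal angles always sum to 2*pi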
III. DERIVATION OF THE NETWORK-WIDE CRLB DISTRIBUTION
In this section, we first formally define our localization performance benchmark: the square
root of the CRLB. Using this definition and assuming a random placement of anchors, as well
as a random L, we then describe how this work generalizes localization performance results
currently in the literature. In what follows, we present the steps necessary to derive the marginal
distribution of our localization performance benchmark.
A. The Localization Performance Benchmark
Consider the traditional localization scenario, where the number of participating anchor nodes
(L) and their positions, as well as the target position, are all fixed. We represent the set of
coordinates of these anchors by
$$\Psi_L = \left\{\psi_i \in \mathbb{R}^2 \mid \psi_i = [x_i, y_i]^T,\ i \in \{1, 2, 3, \ldots, L\}\right\}.$$
The coordinates of the target are denoted by ψ_t = [x_t, y_t]^T.
Next, under Assumptions 2, 3, and 4, the 2-Way range measurements between the target and the L participating anchors are given by
$$r_i = d_i + n_i,$$
where r_i is the measured distance divided by 2 (i.e., the measured 1-Way distance),
$$d_i = \|\psi_i - \psi_t\| = \sqrt{(x_i - x_t)^2 + (y_i - y_t)^2}$$
is the true 1-Way distance, and n_i ~ i.i.d. N(0, σ_r²) for all i ∈ {1, 2, . . . , L}.
Remark. Note that under Assumption 4, σ_r² is common among the range measurements. For 2-Way TOA we set this to be σ_r² = σ²_{2-Way}/4. If 1-Way TOA is considered, then σ_r² = σ²_{1-Way}. Thus, moving forward we may consider the range measurements to always be 1-Way, regardless of whether 1-Way or 2-Way TOA is used.
Continuing, Assumption 3 enables the likelihood function to be easily written as a product. Denoting the vector of range measurements as r = [r_1, . . . , r_L]^T, the likelihood function is
$$\mathcal{L}\left(\psi_t \,\middle|\, \mathbf{r}, \Psi_L, \sigma_r^2\right) = \prod_{i=1}^{L} \frac{1}{\sqrt{2\pi\sigma_r^2}} \exp\!\left(-\frac{(r_i - d_i)^2}{2\sigma_r^2}\right).$$
From this likelihood function, we obtain the following Fisher Information Matrix (FIM)
$$\mathbf{J}_L(\psi_t) = \frac{1}{\sigma_r^2}\begin{bmatrix}\sum_{i=1}^{L}\cos^2\theta_i & \sum_{i=1}^{L}\cos\theta_i\sin\theta_i\\[2pt] \sum_{i=1}^{L}\cos\theta_i\sin\theta_i & \sum_{i=1}^{L}\sin^2\theta_i\end{bmatrix},$$
where cos θ_i = (x_i − x_t)/d_i and sin θ_i = (y_i − y_t)/d_i.
Remark. Note that if the target is placed at the origin, then the angles here, i.e., the θi ’s, are a
particular realization of the anchor node angles from Definition 2.
Taking the inverse of the FIM above, the CRLB for any unbiased estimator, ψ̂_t = [x̂_t, ŷ_t]^T, of the target position, ψ_t = [x_t, y_t]^T, is given by
$$\mathrm{CRLB} = \operatorname{tr}\left(\mathbf{J}_L^{-1}(\psi_t)\right), \tag{1}$$
as defined in Ch. 2.4.1 of [24].
Definition 4. We define our localization performance benchmark, S, to be the square root of the CRLB in (1):
$$S \triangleq \sqrt{\operatorname{tr}\left(\mathbf{J}_L^{-1}(\psi_t)\right)}. \tag{2}$$
Remark. This benchmark is often referred to as the position error bound (PEB) in the literature,
[25], [26].
To conclude, notice that from (2), a closed-form expression for S can be obtained:
$$S = \frac{\sigma_r\sqrt{L}}{\sqrt{\displaystyle\sum_{i=1}^{L}\cos^2\theta_i \sum_{i=1}^{L}\sin^2\theta_i - \left(\sum_{i=1}^{L}\cos\theta_i\sin\theta_i\right)^{\!2}}}, \tag{3}$$
which is a function of the anchor node angles.
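As a sanity check on (3), the following Python sketch (our own) computes S both from the closed form and directly as sqrt(tr(J^{-1})) from (1)-(2); the two agree for any angles:

import numpy as np

def S_closed_form(theta, sigma_r):
    # Benchmark S via the closed-form expression (3).
    c, s = np.cos(theta), np.sin(theta)
    denom = (c**2).sum() * (s**2).sum() - (c * s).sum()**2
    return sigma_r * np.sqrt(len(theta) / denom)

def S_from_fim(theta, sigma_r):
    # Benchmark S via sqrt(tr(J^{-1})), with the FIM built as above.
    c, s = np.cos(theta), np.sin(theta)
    J = np.array([[(c**2).sum(), (c * s).sum()],
                  [(c * s).sum(), (s**2).sum()]]) / sigma_r**2
    return np.sqrt(np.trace(np.linalg.inv(J)))

theta = np.array([0.3, 1.9, 4.0])    # an arbitrary 3-anchor realization
print(S_closed_form(theta, 20.0), S_from_fim(theta, 20.0))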
B. Departure From the Traditional Localization Setup
In the previous section, we assumed a traditional setup, in which the number of participating
anchor nodes (L) and their positions (ΨL ), as well as the target position (ψt ), were all fixed. In
this section however, we now assume that both ΨL and L are random and briefly describe how
the random placement of anchors impacts our localization performance benchmark, S, and how
the randomness of L signals a departure from the existing literature.
We begin by describing how the random placement of anchors affects our localization performance benchmark. This is accomplished by now invoking Assumption 1 and examining
the impact that this has on the expression for S given by (3). Recall from Section II-D that
Assumption 1 implies that ψt = [0, 0]T and that the anchor node angles are i.i.d. on [0, 2π).
Thus, the θi realizations in (3) are now replaced with the random variables, Θi , from Definition
2. Since S is now a function of random variables, it itself becomes a random variable, and we
may now seek its distribution.
While work in the past has sought this distribution for S, e.g., [14], [16], there always
remained one implicit assumption: a fixed L. Therefore, the results presented thus far in the
literature have only applied to localization setups with a fixed number of anchor nodes, and hence
were not applicable network-wide. To address this issue, we now consider L to be a random
variable, whose distribution statistically quantifies the number of anchor nodes participating in
a localization procedure. This new interpretation of L will consequently allow us to decouple S
from L, thereby enabling the marginal distribution of S to account for all possible positioning
scenarios within a network. In addition to the contributions outlined in Section I-B, taking
advantage of this new interpretation of L, to subsequently obtain the marginal distribution of S,
is the main contribution setting this work apart from the existing literature.
C. Approximation of the CRLB
Before deriving the conditional distribution of S given L, we use this section to acquire an
approximation of our expression for S in (3). This will consequently allow for an accurate,
tractable, closed-form expression for the conditional distribution of S to be obtained.
1) Approximation Preliminaries and Goals: To facilitate the search for an accurate approximation, we recall that (3) is now in terms of the random variables Θi , and thus can be rewritten
using the internodal angles from Definition 3. This is given by
$$S = \frac{\sigma_r\sqrt{L}}{\sqrt{\displaystyle\sum_{i=1}^{L-1}\sum_{j=i+1}^{L}\sin^2\!\left(\sum_{k=i}^{j-1}\angle_k\right)}}. \tag{4}$$
Definition 5. We define the term underneath the square root in the denominator of (4) to be the random variable D. That is,
$$D = \sum_{i=1}^{L-1}\sum_{j=i+1}^{L}\sin^2\!\left(\sum_{k=i}^{j-1}\angle_k\right). \tag{5}$$
Thus, we would like to find an approximation for D which comprises two key traits: (1) it
allows for a straightforward transformation of random variables, i.e., the number of sin2 (·) terms
does not change with L (and would ideally only involve a single term), and (2) it simultaneously
does not sacrifice accuracy, i.e., the approximation should preserve as much “information”
as possible about the setup of anchors, implying that the approximation should dominate (or
contribute the most to) the total value of D.
2) Initial Approach and Intuition: In trying to find an approximation that satisfies both
traits, we consider the following possibilities. First, consider approximating D with the sine
squared of an arbitrary internodal angle, sin2 (∠ k ), or with a sum of consecutive internodal angles,
sin2 (∠ k +∠ k+1 +...), where the starting angle, ∠ k , is arbitrary. While these possible approximations
may seem like reasonable candidates for satisfying the first trait, they unfortunately fall short
of satisfying the second trait. To see why, it is illustrative to examine Fig. 2 under different
realizations/placements of anchor nodes. By only looking at the same unordered internodal angle
on every realization, little knowledge is gained about the total setup of anchors. For example,
on one realization, the arbitrary internodal angle being examined might be large and would
therefore give a strong indication of how the rest of the anchors are placed, however, on another
realization, this same internodal angle might be small, thus giving little information about the
placement of the remaining anchors. Hence, in general, arbitrary internodal angles do not provide
accurate approximations due to their inconsistency in describing the anchor node setup, which
consequently leads to their (sine squared terms’) inability to capture the total value of D across
all realizations of anchors for a given L.
3) A Quantitative Approach Using Mutual Information: Taking advantage of the intuition
gained above, it should now be clear that we would like an approximation that utilizes angles
which tend to consistently dominate any given setup. Therefore, it makes sense to examine
larger internodal angles, and thus, the use of internodal angle order statistics follows naturally.
Since we ideally desire a single-term approximation, we examine the possibility of using the
sine-squared of the largest, second largest, and third largest internodal angles, i.e.,3
$$W_{(L)} = \sin^2(\angle_{(L)}), \qquad W_{(L-1)} = \sin^2(\angle_{(L-1)}), \qquad\text{and}\qquad W_{(L-2)} = \sin^2(\angle_{(L-2)}). \tag{6}$$
Note that these larger internodal angles should intuitively contain more “information” about the
setup of anchors, and consequently D, since they greatly restrict the placement of the remaining
anchors. Since each of the order statistic approximations above might seem viable under this
3 We
will not examine sums of the larger internodal angle order statistics, e.g., sin2 (∠(L) + ∠(L−1) ) or sin2 (∠(L) ) + sin2 (∠(L−1) ),
for the following two reasons: (1) they would lead to a more complex expression for the conditional distribution of S than
desired, and (2) they offer no gain in accuracy over using the sine squared of just a single internodal angle order statistic, as
evidenced through simulation.
[Figure 3 plot: Mutual Information (bits) of I(D; W(L)|L), I(D; W(L−1)|L), and I(D; W(L−2)|L) versus the Number of Participating Anchor Nodes (L), for L = 3, . . . , 7.]
Fig. 3. JUSTIFYING APPROXIMATIONS THROUGH MUTUAL INFORMATION. The mutual informations were calculated
numerically by computing (8) and (9), where the necessary distributions were generated using a Monte Carlo simulation of
10 million anchor node realizations. The bin width of these distributions was chosen to be 0.01, Matlab’s ‘spline’ option was
used to interpolate the integrands in (8) and (9), and the supports of D and W(i) are given by 𝒟 = [0, L²/4] and 𝒲(i) = [0, 1], where i ∈ {L, L − 1, L − 2}. Furthermore, we adopt the convention 0 log₂ 0 = 0, “based on continuity arguments” [29].
qualitative notion of information, we thus turn towards a quantitative notion, in order to justify
the use of one of these approximations.
Towards this end, we utilize the concept of mutual information. The reason behind this choice,
over that of correlation for example, is because mutual information captures both linear and
nonlinear dependencies between two random variables, since it is zero if and only if the two
random variables are independent [28]. Hence, we examine the mutual information between D
and the random variables W(L) , W(L−1) , and W(L−2) , so that we may quantify which approximation
carries the most information about D. Thus, we condition on L equaling some integer ℓ (≥ 3) and calculate
$$I(D; W_{(i)} \mid L = \ell) = h(D \mid L = \ell) - h(D \mid W_{(i)}, L = \ell), \tag{7}$$
where i ∈ {L, L − 1, L − 2}, and the differential entropies are given by
$$h(D \mid L = \ell) = \int_{\mathcal{D}} f_D(d \mid \ell)\,\log_2\frac{1}{f_D(d \mid \ell)}\,\mathrm{d}d, \tag{8}$$
and
$$h(D \mid W_{(i)}, L = \ell) = \int_{\mathcal{W}_{(i)}}\int_{\mathcal{D}} f_{D,W_{(i)}}(d, w \mid \ell)\,\log_2\frac{1}{f_D(d \mid w, \ell)}\,\mathrm{d}d\,\mathrm{d}w, \tag{9}$$
where 𝒲(i) and 𝒟 are the supports of W(i) and D, respectively [29]. The mutual information between the approximations in (6) and D is given in Fig. 3, versus L. From Fig. 3, it is evident
14
that the mutual information between D and the approximation W(L−1) is the highest across all
numbers of participating anchors shown.
4) Investigating the High Mutual Information of D and W(L−1) : To explore the reasoning
behind this result, we examine the effect that W(L−1) has on the total value of D. We begin by
rewriting D as follows:
Proposition 1. The random variable D from Definition 5 can be equivalently expressed as
$$D = \sum_{k=1}^{L}\sin^2\!\left(\angle_{(k)}\right) + \sum_{i=2}^{L-2}\sum_{j=1}^{L-i}\sin^2\!\left(\sum_{k=j}^{j+i-1}\angle_k\right).$$
Proof. See Appendix A.
By separating the sine-squared terms of the internodal angle order statistics from the total
sum, Proposition 1 makes it clearer as to how our approximations from (6) may affect the total
value of D. To reveal the effects of W(L−1) in particular, we present the following lemma along
with its corollaries:
Lemma 2. The cdf of the second largest order statistic of the internodal angles, ∠_(L−1), conditioned on L, is given by
$$F_{\angle_{(L-1)}}(\varphi \mid L) = \sum_{n=0}^{X}(-1)^{n-1}\binom{L}{n}(n-1)\left(1 - \frac{n\varphi}{2\pi}\right)^{\!L-1}, \tag{10}$$
where X = min(L, ⌊2π/ϕ⌋) and the support is 0 ≤ ∠_(L−1) ≤ π. (Note L ≥ 2, since if L < 2, ∠_(L−1) would not exist.)
Proof. We refer the reader to the conference version of this paper, i.e., Appendix B of [1].
Corollary 2.1. Given a finite L, the expected value of the second largest order statistic of the internodal angles, ∠_(L−1), conditioned on L, is given by
$$\mathrm{E}[\angle_{(L-1)} \mid L] = \sum_{n=2}^{L}(-1)^{n}\binom{L}{n}\frac{2\pi(n-1)}{nL}. \tag{11}$$
Proof. See Appendix B.
Corollary 2.2. Given a finite L, the variance of ∠_(L−1), conditioned on L, is given by
$$\mathrm{VAR}[\angle_{(L-1)} \mid L] = \frac{4\pi^2}{L}\sum_{n=2}^{L}(-1)^{n}\binom{L}{n}\frac{n-1}{n}\left[\frac{2}{n(L+1)} + \frac{c}{L}\right], \tag{12}$$
where $c = \sum_{m=2}^{L}(-1)^{m+1}\binom{L}{m}\frac{m-1}{m}$.
Proof. Note: VAR[∠_(L−1) | L] = E[∠²_(L−1) | L] − E[∠_(L−1) | L]², where the derivation of E[∠²_(L−1) | L] is analogous to that of E[∠_(L−1) | L] in the proof of Corollary 2.1.
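A quick Monte Carlo sanity check of the expressions in (11) and (12), as reconstructed above, is straightforward; a sketch of our own (the helper name second_largest_gap, i.e., ∠(L−1), is ours):

import numpy as np
from math import comb, pi

rng = np.random.default_rng(0)
L = 5

def second_largest_gap(L, rng):
    # Second largest internodal angle of L uniform anchor angles on the circle.
    th = np.sort(rng.uniform(0, 2 * pi, L))
    gaps = np.append(np.diff(th), 2 * pi - (th[-1] - th[0]))
    return np.sort(gaps)[-2]

samples = np.array([second_largest_gap(L, rng) for _ in range(200_000)])

# Closed forms (11) and (12):
mean = sum((-1)**n * comb(L, n) * 2 * pi * (n - 1) / (n * L) for n in range(2, L + 1))
c = sum((-1)**(m + 1) * comb(L, m) * (m - 1) / m for m in range(2, L + 1))
var = (4 * pi**2 / L) * sum((-1)**n * comb(L, n) * ((n - 1) / n)
                            * (2 / (n * (L + 1)) + c / L) for n in range(2, L + 1))
print(samples.mean(), mean)   # both approximate E[angle_(L-1) | L]
print(samples.var(), var)     # both approximate VAR[angle_(L-1) | L]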
Next, we plot Corollary 2.1, plus/minus one and two standard deviations of ∠(L−1) . This is
given in Fig. 4, versus L. Here, we can see that the second largest internodal angle is centered
and concentrated around π/2, suggesting that W(L−1) = sin2 (∠(L−1) ) will be concentrated about
its maximum of one. This implies that, for the majority of anchor node placements, sin2 (∠(L−1) )
will be a dominant term in the expression for D in Prop. 1. Thus, the sin2 (∠(L−1) ) term will tend
to contribute the most, that a given sin2 (·) term could contribute, to the total value of D. This
is especially true for small values of L, which is our focus.
Also for low L, ∠(L−1) is intuitively the dominant angle in that it places the greatest constraints
on the remaining angles. That is, once ∠(L−1) is determined, it restricts both ∠(L) and ∠(L−2) ,
and thus gives the greatest sense of the total setup of anchors. Note, when considering order
statistics other than ∠(L−1) , the constraints placed on the remaining angles are not as pronounced.
Furthermore, by examining different realizations of anchors (Fig. 2 as an example), along with
Prop. 1, one can see that when W(L−1) is small (∠(L−1) ≈ 0 or π), than so is D, and when W(L−1)
is large (∠(L−1) ≈ π/2), a large value of D follows. Thus, W(L−1) ’s consistency as a dominant
term in Prop. 1, along with its intuitive correlation with D, offer supporting evidence as to why
I(D; W(L−1) |L = `) is higher than both I(D; W(L) |L = `) and I(D; W(L−2) |L = `) for low L.
In summary, mutual information has proved its utility by revealing that W(L−1) is perhaps the
best approximation of D, for the desirable lower values of L. Since W(L−1) possesses the two
desirable traits for an approximation, discussed at the beginning of this section, we henceforth
use W(L−1) in our approximation of D.
5) Completing the Approximation: To complete the approximation of D, and consequently
S, all that now remains is to ensure that D and W(L−1) have the same range of possible values,
i.e., the same support. This will ensure that our ultimate approximation for S will produce the
same range of values as the true S. In order to accomplish this, we approximate D with a scaled
version of W(L−1) , i.e., D ≈ k W(L−1) , and thus search for the value of the constant k so that
kW(L−1) yields the desired support. Since D = [0, dmax ] and W(L−1) = [0, 1] (which follows from
the support of ∠(L−1) , Lemma 2), then in order to have the support of kW(L−1) equal that of D,
we simply need to set k = dmax . The value of dmax is presented in the following lemma:
[Figure 4 plot: E[∠(L−1)|L] (radians), with ±σ and ±2σ bands, versus the Number of Participating Anchor Nodes (L), L = 3, . . . , 9. Figure 5 plot: P[S ≤ abscissa | L] versus Meters (0–70) for increasing L = 3, 5, 7, 9; solid: true, dashed: Theorem 4.]
Fig. 4. THE SECOND LARGEST INTERNODAL ANGLE ORDER STATISTIC. This figure gives a sense of the concentration of the distribution of ∠(L−1) around π/2. E[∠(L−1)|L] is given by Cor. 2.1 and σ = sqrt(VAR[∠(L−1)|L]) is from Cor. 2.2.
Fig. 5. ACCURACY OF THEOREM 4. The true conditional cdf of S given L was generated using a Monte Carlo simulation of (4) over 1 million random setup realizations of the internodal angles. Note, σr = 20 m.
Lemma 3. Let L be finite. If D is given as in (5), then its maximum value is dmax = L²/4.
Proof. See Appendix C.
Thus, we now have D ≈ (L²/4) · W(L−1), which completes our approximation of D. Lastly,
substituting this approximation for D into the expression for S in (4) finally yields:
Approximation 1. The localization performance benchmark, S, can be approximated by
S ≈ σr · √(4/L) · 1/sin(∠_{(L−1)}),   (13)
where 0 ≤ ∠(L−1) ≤ π, as stated in Lemma 2.
D. The Conditional CRLB Distribution
Theorem 4. If the localization performance benchmark, S, is given by Approximation 1, then
the cdf of S conditioned on L is
F_S(s | L, σr) = Σ_{n=0}^{X_2} (−1)^{n−1} \binom{L}{n} (n−1) (1 − nϕ_2/(2π))^{L−1} − Σ_{n=0}^{X_1} (−1)^{n−1} \binom{L}{n} (n−1) (1 − nϕ_1/(2π))^{L−1},   (14)

where ϕ1 = sin⁻¹(a/s), ϕ2 = π − sin⁻¹(a/s), a = σr · √(4/L), X1 = min(L, ⌊2π/ϕ1⌋), X2 = min(L, ⌊2π/ϕ2⌋), and the support is S ∈ [a, ∞).
Proof. See Appendix D.
Remark. Although Theorem 4 is the conditional distribution of our approximation of S, it
provides two clear advantages over the true conditional distribution presented in [16] and over
the approximate conditional distribution presented in [14]. First, Theorem 4 offers a simple,
closed-form, algebraic expression involving only finite sums, as opposed to the rather complex
expression in [16] involving an improper integral of products of scaled Bessel functions. Second,
Theorem 4 is remarkably accurate for lower numbers of participating anchor nodes, see Fig. 5.
This comes in contrast to the approximate distribution presented in [14], which was derived asymptotically and is therefore accurate only for higher numbers of participating anchors. This
selective accuracy of Theorem 4 is desirable since a device is more likely to hear lower numbers
of participating anchors, especially in infrastructure-based, terrestrial wireless networks.4
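For readers who wish to reproduce Fig. 5, the conditional cdf in (14) is cheap to evaluate directly. The Python sketch below implements (14) as written; note that the n = 0 term contributes the constant 1 and the n = 1 term vanishes because of the (n − 1) factor. The function name is ours.

from math import comb, asin, floor, pi, sqrt

def cdf_S_given_L(s, L, sigma_r):
    # Conditional cdf of S given L, per Theorem 4, eq. (14).
    a = sigma_r * sqrt(4.0 / L)   # support edge, S in [a, inf)
    if s < a:
        return 0.0
    phi1 = asin(a / s)
    phi2 = pi - phi1

    def F_angle(phi):
        # cdf of the second largest internodal angle (Lemma 2 form).
        X = min(L, floor(2 * pi / phi))
        return sum((-1)**(n - 1) * comb(L, n) * (n - 1)
                   * (1 - n * phi / (2 * pi))**(L - 1)
                   for n in range(0, X + 1))

    return F_angle(phi2) - F_angle(phi1)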
E. The Distribution of the Number of Participating Anchors
The next step needed to achieve our goal is to find the distribution of the number of participating anchors, fL(ℓ). In this section, we build upon the localizability results from [17] in order
to obtain this distribution. Towards this end, we present the relevant theorems from this work
and modify them for our use here. Finally, we conclude with a discussion on the applicability
of these results.
1) Overview of Localizability Work: Recall from Section I-A that the goal of [17] was to derive an expression for the probability that a mobile can hear at least ℓ base stations for participation in a localization procedure in a cellular network, i.e., P[L ≥ ℓ]. To derive this expression, the authors assumed that base stations were placed according to a homogeneous PPP, and then examined the SIRs of the base station signals received at a “typical user” placed at the origin.5 Specifically, they examined the SIR of the ℓ-th base station (denoted SIRℓ), since this was used directly to determine P[L ≥ ℓ].
Since SIRℓ depends on the locations of interfering base stations, the base stations' placement according to a PPP implies that SIRℓ becomes a random variable. Consequently, its distribution also becomes a function of the PPP density, λ. Additionally, the authors incorporate a network loading parameter, q, into SIRℓ, where 0 ≤ q ≤ 1. This means that any given base station can be considered active (i.e., interfering with base station ℓ's signal) with probability q. Furthermore, SIRℓ is also a function of the pathloss exponent, α, where α > 2, and the distances of base stations to the target.
4 By infrastructure-based wireless networks, we mean any wireless network setup with mobile devices, fixed access points, and separate uplink/downlink channels.
5 SIR = Signal to Interference Ratio. Noise is ignored since [17] assumes an interference-limited network.
With SIRℓ statistically characterized, the authors were able to determine P[L ≥ ℓ] by noting that P[L ≥ ℓ] = P[SIRℓ ≥ β/γ], where β/γ is the pre-processing SIR threshold for detection of a signal.6 Here, γ is the processing gain at the mobile (assumed to also average out the effect of small-scale fading), and β is the post-processing threshold. Thus, since P[L ≥ ℓ] = P[SIRℓ ≥ β/γ], P[L ≥ ℓ] must also depend on all of the network parameters described above. We denote this dependency by P[L ≥ ℓ | α, λ, q, γ, β].
Before continuing to the localizability results, we mention one last caveat regarding the PPP
density. That is, when shadowing is present, it can easily be incorporated into the PPP network
model through small displacements of the base station locations. This results in a new PPP
density, which accounts for this effect of shadowing. This new shadowing-transformed density
is given by λ̃ = λ E[Sz2/α ], where Sz is assumed to be a log-normal random variable representing
the effect of shadowing on the signal from base station z to the origin [31].7 Thus, by using λ̃,
we incorporate shadowing under the log-normal model presented in Section II of [31].
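For concreteness, the shadowing-transformed density is straightforward to compute. The sketch below assumes zero-mean log-normal shadowing specified by a dB standard deviation, so that Sz = 10^{X/10} with X ~ N(0, σ_dB²); this zero-mean parameterization is our assumption, consistent with [31].

from math import exp, log

def shadowing_transformed_density(lam, alpha, sigma_dB):
    # lam_tilde = lam * E[S**(2/alpha)] for log-normal S = 10**(X/10),
    # X ~ N(0, sigma_dB**2); E[exp(cX)] = exp(c**2 * var / 2) for zero-mean X.
    c = (2.0 / alpha) * log(10.0) / 10.0
    return lam * exp(0.5 * (c * sigma_dB)**2)

# Example: the hexagonal-equivalent density used later, 8 dB shadowing, alpha = 4.
lam = 2.0 / (3**0.5 * 500.0**2)
print(shadowing_transformed_density(lam, 4.0, 8.0))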
2) The Localizability Results: In this section, we present the main theorem which will enable
us to obtain fL (`).
Lemma 5. (Theorem 2, [17]) The probability that a mobile device can hear at least ℓ base stations for participation in a localization procedure is given by
P[L ≥ ℓ | α, λ̃, q, γ, β] = [ 1 − e^{−(α−2)/(2qβ/γ)} Σ_{n=0}^{ℓ−1} ((α−2)/(2qβ/γ))^n / n! ] fΩ(0)
 + (4(λ̃π)^ℓ / (ℓ−1)!) Σ_{ω=1}^{ℓ−1} fΩ(ω) ∫_0^∞ ∫_0^{r_ℓ} 1{ r_ℓ^{−α} / [ r_1^{−α} + 2(ω−1)(r_1^{2−α} − r_ℓ^{2−α}) / ((α−2)(r_ℓ² − r_1²)) + 2πqλ̃ r_ℓ^{2−α}/(α−2) ] ≥ β/γ }
 × ω r_1 (r_ℓ² − r_1²)^{ω−1} r_ℓ^{2(ℓ−ω)−1} e^{−λ̃π r_ℓ²} dr_1 dr_ℓ,
where ℓ ∈ {1, 2, . . . }, and Ω is a random variable denoting the number of active participating base stations interfering with the ℓ-th. Note, Ω ∼ Binomial(ℓ − 1, q) = fΩ(ω). Additionally, for the trivial case of ℓ = 0, we define P[L ≥ 0] = 1. (Note: γ and β are in linear terms, not dB.)
6 Note that for q < 1, this equality holds for all but a few rare corner cases. However, the probability of these cases occurring is vanishingly small and thus has little to no impact on the accuracy of the subsequent localizability results [17].
7 The Sz are assumed to be i.i.d. ∀z. Sz's log-normal behavior implies it is distributed normally when expressed in dB [31].
Remark. This theorem was derived under the following assumptions: (1) a dominant interferer
and (2) interference-limited networks. We refer the reader to Section III-D of [17] for further
details regarding these assumptions and the consequent derivation of this theorem.
3) The Distribution of L: With these localizability results, we now finally present the distribution of the number of participating anchors.
Theorem 6. The pdf of L is given by
fL (` | α, λ̃, q, γ, β) = P[L ≥ ` | α, λ̃, q, γ, β] − P[L ≥ ` + 1 | α, λ̃, q, γ, β],
where the support is L ∈ {0, 1, 2, . . . } and the probabilities are given by Lemma 5.
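Numerically, Theorem 6 is a simple differencing of the complementary cdf. A minimal sketch, assuming a callable P_geq(l) that implements Lemma 5 (not reproduced here):

def pmf_from_ccdf(P_geq, l_max):
    # Theorem 6: f_L(l) = P[L >= l] - P[L >= l + 1], with P[L >= 0] = 1.
    return [P_geq(l) - P_geq(l + 1) for l in range(l_max + 1)]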
4) The Distribution of L with Frequency Reuse: Using Theorem 6, we may obtain another
expression for fL (`) which incorporates a frequency reuse parameter, K. This parameter models
the ability of base stations to transmit on K separate frequency bands, thereby limiting interference to a per-band basis. This can easily be incorporated into the model by considering K
independent PPPs whose densities are that of the original PPP divided by K. Thus, if nk is
the number of participating base stations in band k, then the total number of participating base
stations is given by L = Σ_{k=1}^{K} n_k. Thus, to find P[L = ℓ] under frequency reuse, we simply need to account for all of the per-band combinations of participating base stations such that their sum equals ℓ. This is given in the following corollary, which is a modification of Theorem 3 of [17].
Corollary 6.1. The pdf of L, given a frequency reuse factor of K, is
f_L(ℓ | α, λ̃, q, γ, β, K) = Σ_{{n_1,...,n_K} : Σ_i n_i = ℓ} Π_{k=1}^{K} f_L(n_k | α, λ̃/K, q, γ, β),

where the multiplicands are given by Thm. 6, K ∈ {1, 2, . . . }, and the support is L ∈ {0, 1, 2, . . . }.
Remark. When K = 1, this corollary reduces to Theorem 6. Further, this corollary may be
evaluated numerically through the use of a recursive function.
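One convenient way to carry out that evaluation, instead of an explicit recursion over {n_1, . . . , n_K}, is to note that Corollary 6.1 is a K-fold convolution of i.i.d. per-band pmfs. A sketch, assuming pmf_band[l] holds f_L(l | α, λ̃/K, q, γ, β) from Theorem 6 truncated at some l_max:

import numpy as np

def pmf_L_with_reuse(pmf_band, K, l_max):
    # Corollary 6.1 as a K-fold convolution of the per-band pmf.
    out = np.zeros(l_max + 1)
    out[0] = 1.0
    for _ in range(K):
        out = np.convolve(out, pmf_band)[:l_max + 1]
    return out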
5) Applicability of the Results: Now that we have obtained the pdf of L, we conclude
with a brief discussion regarding its applicability. We begin by taking note of the support of
fL(ℓ). Whereas up until this section we have proceeded under the assumption that the target is localizable, the support of fL(ℓ) now allows us to consider cases where the target is unlocalizable,
i.e., L = 0, 1, 2. Thus, as will be addressed in the following section, we may use these cases to
determine the percentage of the network where a target is unlocalizable.
Lastly, we note that while the localizability results in [17] (Lemma 5) were presented in
the context of cellular networks, these results are actually applicable to any infrastructure-based
wireless network using downlink measurements, so long as the distribution parameters are altered
accordingly. This implies that the distribution for L (Corollary 6.1) also has this applicability,
since it was derived using Lemma 5. Thus, since we use Corollary 6.1, along with a modified
Theorem 4, to derive the marginal distribution of S, then this final distribution will also be
applicable to any infrastructure-based wireless network employing a TOA localization strategy.
F. The Marginal CRLB Distribution
In this section, we modify Theorem 4 and combine this with Corollary 6.1 to obtain the
marginal distribution of S. First, we state one last network assumption often used in practice:
Assumption 5. For a given localization procedure, only a finite number of anchor nodes, N
(≥ 3), are ever tasked to transmit localization signals.
Remark. Just because N anchors are tasked does not mean that all N signals are necessarily heard.
Under this assumption, and further considering scenarios where the target is unlocalizable, we
modify Theorem 4 as follows:
F′_S(s | L, σr) =
   F_S(s | L = N, σr),   L ≥ N
   F_S(s | L, σr),       L = 3, . . . , N − 1        (15)
   u(s − M),             L = 0, 1, 2
where F′_S is the new modified conditional distribution of S, FS is the previous conditional
distribution given in Theorem 4, and M ∈ R+ is a predetermined localization error value used
to account for unlocalizable scenarios (described in more detail below).
Remark. While Theorem 4 only accounted for scenarios in which the target was localizable,
this modified form, however, now accounts for unlocalizable scenarios. For these scenarios, this
modified conditional distribution yields a step function, which is a valid cdf and corresponds to
a deterministic value for the localization error, i.e., P[S = M | L] = 1. Thus, we account for
cases where the target is unlocalizable by assigning an arbitrary localization error value for M,
which is chosen to represent cases where there is ambiguity in a target’s position estimate.8
Remark. It is possible that a mobile may hear more anchors than are tasked to perform the
localization procedure, i.e., L > N. In this case, the N participating anchors are likely those with
the highest received SIR at the target, if connectivity information is known a priori. Therefore,
in this scenario, localization performance will only be based on the N anchors tasked. This is
clearly reflected in the modified conditional distribution of S in (15).
Using this modified Theorem 4 in (15), along with Corollary 6.1, we may now obtain the
distribution of localization error for an entire wireless network:
Theorem 7. The marginal cdf of the localization performance benchmark, S, is
F_S(s | σr, α, λ̃, q, γ, β, K, M, N) = F_S(s | ℓ = N, σr) [ 1 − P[L ≤ N − 1 | α, λ̃, q, γ, β, K] ]
 + Σ_{ℓ=3}^{N−1} F_S(s | ℓ, σr) f_L(ℓ | α, λ̃, q, γ, β, K)
 + u(s − M) P[L ≤ 2 | α, λ̃, q, γ, β, K],
where F_S(s | L = ℓ, σr) is given by Theorem 4, f_L(ℓ | α, λ̃, q, γ, β, K) is given by Corollary 6.1, and P[L ≤ x | α, λ̃, q, γ, β, K] = Σ_{ℓ=0}^{x} f_L(ℓ | α, λ̃, q, γ, β, K).
Proof. Multiplying the modified conditional distribution of S, given in (15), by the marginal
distribution of L from Corollary 6.1 gives the joint distribution of S and L. Then, setting L
equal to a particular realization, ℓ, and summing over all realizations, gives us the marginal cdf
of S, as desired.
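The proof translates directly into code. The sketch below assembles the marginal cdf from the modified conditional distribution (15); here cdf_S_given_L is the earlier sketch of (14), and pmf_L is a truncated array from Corollary 6.1 (both hypothetical inputs of our own naming).

def marginal_cdf_S(s, pmf_L, N, sigma_r, M):
    # Theorem 7: mixture of the modified conditional cdf (15) over f_L.
    tail = 1.0 - sum(pmf_L[:N])                       # P[L >= N]
    val = cdf_S_given_L(s, N, sigma_r) * tail         # all L >= N use L = N
    for l in range(3, N):                             # localizable, L < N
        val += cdf_S_given_L(s, l, sigma_r) * pmf_L[l]
    val += (1.0 if s >= M else 0.0) * sum(pmf_L[:3])  # unlocalizable, u(s - M)
    return val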
Remark. First, recall that the conditional distribution of our approximation of S, F_S(s | L = ℓ, σr) given by Theorem 4, is accurate for lower ℓ values. Next, note that: (1) f_L(ℓ) declines rapidly as ℓ increases (an intuitive result, since the probability of hearing many anchor nodes should
be small in infrastructure-based wireless networks), and (2) only a maximum of N nodes are
tasked to perform a localization procedure (Assumption 5). Thus, these two facts validate the
use of our approximation, since the cases where our approximation is less than ideal (i.e., for large L) will now either be multiplied by f_L(ℓ) ≈ 0, or considered invalid in a realistic network under Assumption 5. As a consequence, Theorem 7 will also retain this accuracy.
8 In this paper, we choose a value for M that applies for all L < 3. Thus, a large enough M allows for a clear distinction between the localizable and unlocalizable portions of the network through a quick examination of the cdf of S. Note however, that one could account for the L = 0, 1, 2 scenarios separately. For example, if L = 1, then in a cellular network one may want to choose M to be the cell radius, since the user equipment typically knows in which cell it is located [32].
Remark. We conclude by noting that this distribution accounts for localization error over all
setups of anchor nodes, numbers of participating anchors, and placements of a target anywhere in
the network. Hence, this distribution completely characterizes localization performance throughout an entire wireless network and represents the main contribution of this work.
IV. NUMERICAL ANALYSIS
In this section, we examine the accuracy of Theorem 7 and investigate how changing network
parameters affects localization performance throughout the network.
A. Description of Simulation Setup
Here, we discuss the parameters that are fixed across all simulations and include a description
of how the simulations were conducted.
1) Fixed Parameter Choices and their Effect on Model Assumptions: For these simulations,
we consider the case of a cellular network, and thus, we place anchor nodes such that the
PPP density matches that of a ubiquitous hexagonal grid with 500 m intersite distances (i.e., λ = 2/(√3 · 500² m²)) [17]. Furthermore, we choose a shadowing standard deviation of 8 dB,
which defines our shadowing-transformed density parameter, λ̃.
Next, we set our pathloss exponent, α ≈ 4, which was chosen to represent a pathloss similar to
that seen in a typical cellular network. Note that this pathloss value is indicative of NLOS range
measurements, which are inherently a part of localization in cellular networks. Recall however,
that Assumption 3 implied the use of Line-of-Sight (LOS) measurements. Thus, we attempt to
mimic NLOS in our simulations by selecting a range error to account for a reasonable delay
spread under NLOS conditions. This is described further in the following section. We note that
a subject of future work will be to seek a refinement of this model by incorporating NLOS
directly into the range measurements.
The last parameter that remains fixed across simulations is M. This parameter was chosen to
be large enough such that an examination of the cdf of S will reveal which percentage of the
network the target is unlocalizable. Towards this end, it was sufficient to choose M ≈ 200 m for
our simulations here. Again, the choice of this value is arbitrary and left up to network designers
and how they would like to treat the unlocalizable cases.
Fig. 6. THE EFFECT OF FREQUENCY REUSE. This result exposes the large impact that frequency reuse has on localization performance throughout the network (K = 1, 2, 3, 6; solid: true, dashed: Theorem 7). The parameters are: N = 10, β = 10 dB, γ = 20 dB, q = 1, and σr = 20 m. The plots' “piece-wise” appearance is due to L being discrete.
Fig. 7. THE IMPACT OF DECREASING NETWORK LOAD. This plot demonstrates the improvement in localization performance due to a decrease in network loading (q = 1, 0.75, 0.50, 0.25; solid: true, dashed: Theorem 7). The parameters were chosen as follows: N = 10, β = 10 dB, γ = 20 dB, K = 2, and σr = 20 m.
2) Conducting the Simulations: The true marginal cdf of S was generated through a simulation
over 100,000 positioning scenarios. Each scenario consisted of an average placement of 1,000
anchor nodes, placed according to a homogeneous PPP, with the target located at the origin. Next,
the anchor nodes whose SIRs surpassed the detection threshold were deemed to participate in
the localization procedure, and their corresponding coordinates were used to calculate the true
value of S given by Definition 4. If more than N anchor nodes had signals above the threshold,
then the N anchors with highest SIRs were used to calculate S.
B. The Effect of Frequency Reuse on the Network-Wide Distribution of Localization Error
In this section, we explore how frequency reuse impacts localization performance throughout
the network. This simulation, as well as all subsequent simulations, compares the true (simulated)
distribution of S with our analytical model from Theorem 7. All parameters were fixed at the
levels stated in Fig. 6, while the only parameter varied was the frequency reuse factor, K. The
range error standard deviation, σr , was chosen according to the detection threshold, β, and
the CRLB of a range estimate (see [6], equ. 3.40), assuming a 10MHz channel bandwidth.
Approximately 10m was added to account for a reasonable delay spread in NLOS conditions.
From Fig. 6, the most notable impact that frequency reuse has on localization performance is
24
that of localizability. That is, with just a small increase in frequency reuse from K = 1 to K = 2,
the portion of the network with which a target is localizable increases from only ≈ 25% to an
astonishing ≈ 85%. Furthermore, localization error is also reduced, although the improvement
is not as drastic as the increase in localizability. Additionally, as frequency reuse increases, the
gains in localizability stop after K = 3, with the gains in localization error also declining after
K = 3 as well. Thus, we can conclude that an increase in frequency reuse is strongly advisable if
one desires an increase in localization performance within a network, a result which coincides
with what has been seen in practice, viz. 3GPP. Lastly, we note the excellent match between
the true simulated distribution and our analytically derived distribution given by Theorem 7. We
will see that this accuracy of Theorem 7 is retained across all of our results in this section.
C. Examining the Effects of Network Loading
Here we examine the effect that network loading has on localization performance throughout
the network. This is accomplished by varying the percentage of the network, q, actively transmitting (interfering) during a localization procedure. All parameter values other than q were
fixed and σr was chosen in the same manner as in the frequency reuse case.
From the distributions plotted in Fig. 7, we can see that a decrease in network load leads to
an improvement in localizability, as well as an improvement in localization error. However, the
improvement is not as pronounced as in the frequency reuse case. Further, examining the 80th
percentile for example, it is evident that the rate of improvement in localization error declines
as the network load declines as well. Thus, since low network traffic is rarely desirable, a network designer looking to optimize localization performance may find solace in the fact that gains in performance begin to decline as network loading decreases.
D. The Impact of Processing Gain
In this section, we examine the effects of changing the processing gain, since it is perhaps the
easiest parameter for a network designer to change in practice. (Note that we choose σr here as in
the previous simulations.) From Fig. 8, it is evident that as the processing gain increases, there is
a corresponding improvement in localizability across the network, as well as an improvement in
localization error. As a consequence, there exists a clear trade-off between sacrificing processing
time for gains in localization performance. However, it appears that these improvements begin
to level off at a processing gain of ≈25 dB. This is promising, as processing gains higher than
Fig. 8. THE EFFECT OF INCREASING PROCESSING GAIN. This figure highlights the trade-off that exists between processing time and localization performance (γ = 15 dB, 20 dB, 25 dB; solid: true, dashed: Theorem 7). The distribution parameter values are: N = 10, β = 5 dB, q = 1, K = 2, and σr = 30 m.
Fig. 9. THE IMPACT OF INCREASING RANGE ERROR. Injecting range error results in a predictable effect on localization performance throughout the network, yet has no effect on localizability (σr = 20 m, 40 m, 60 m, 80 m; solid: true, dashed: Theorem 7). The parameter values are: N = 10, β = 10 dB, γ = 20 dB, K = 3, and q = 1.
this can quickly become impractical. Examining the 80th percentile, we can see that a 10 dB
increase in processing gain can lead to about a 30m improvement in localization error throughout
the network. Thus, increasing the processing gain can be an easily implementable solution for
achieving moderate gains in localization performance.
E. Range Error and its Impact on Localization Performance within the Network
With this last result, we attempt to mimic the effect of an increasing NLOS bias by injecting
additional range error into the measurements. In examining Fig. 9, we note that the first σr value,
i.e., σr = 20 m, was chosen as in the frequency reuse simulation. Therefore, the subsequent
choices represent that of mimicking the impact of increasing NLOS bias. From Fig. 9, we can
see that increasing range error has no effect on localizability within the network. This should
be clear from the analytical model, since σr does not appear as a parameter in Corollary 6.1.
Additionally, injecting more range error into the measurements results in a predictable effect
on the distribution of localization performance. This is also evidenced by examining Theorem
4, where σr appears as a scale parameter. Thus, mimicking the effects of NLOS measurements
results in a scaling of the distribution of S, implying a predictable reduction in localization
performance.
V. CONCLUSION
This paper presents a novel parameterized distribution of localization error, applicable throughout an entire wireless network. Invoking a PPP network model, as well as common assumptions
for TOA localization, has enabled this distribution to simultaneously account for all possible positioning scenarios within a network. Deriving this result involved two main steps: (1) a derivation
of an approximate distribution of our localization performance benchmark, S, conditioned on
the number of hearable anchor nodes, L, which yielded a conditional distribution with desirable
accuracy and tractability properties, and (2) a modification of results from [17] to attain the
distribution of L. Thus, using these two distributions, we arrived at the final, marginal distribution
of S, which also retained the same parameterizations as its two component distributions.
What followed was a numerical analysis of this network-wide distribution of localization
performance. This analysis revealed that our distribution can offer an accurate, baseline tool for
network designers, in that it can be used to get a sense of localization performance within a
wireless network, while also providing insight into which network parameters to change in order
to meet localization requirements.
Since this marginal distribution of S is a distribution of the TOA-CRLB throughout the
network, it consequently provides a benchmark for describing localization performance in any
network employing an unbiased, efficient, TOA-based localization algorithm. The insights that this network-wide distribution of localization error reveals are numerous, and the
results presented in this paper have only begun to explore this new paradigm. It is our hope
that this work spawns additional research into this new concept, and that future work, such
as incorporating NLOS measurements, accounting for collaboration, adding other localization
strategies (TDOA/AOA), etc., can further refine the model presented here.
In closing, this work presents an initial attempt to provide network designers with a tool for
analyzing localization performance throughout a network, freeing them from lengthy simulations
by offering an accurate, analytical solution.
APPENDIX A
PROOF OF PROPOSITION 1
It first helps to visualize the sin²(·) terms of the sum in (5) on a 2-D grid, where the i's represent the rows and the j's represent the columns. As an example, the case of L = 4 gives

       j=1   j=2    j=3          j=4
i=1          ∠1     ∠1 + ∠2      ∠1 + ∠2 + ∠3
i=2                 ∠2           ∠2 + ∠3
i=3                              ∠3
where just the arguments of the sin²(·) terms are represented in the grid for clarity. From this
arrangement, it is evident that the sum in (5) represents the process of summing each row
sequentially, starting at i = 1.
Now, however, we choose to sum the terms diagonally, starting with the lowest diagonal and
working our way upward. This yields
D = Σ_{i=1}^{L−1} Σ_{j=1}^{L−i} sin²( Σ_{k=j}^{j+i−1} ∠_k ).   (16)
Considering the cases i = 1 and i = L − 1 separately, we may rewrite (16) as
D = Σ_{k=1}^{L−1} sin²(∠_k) + Σ_{i=2}^{L−2} Σ_{j=1}^{L−i} sin²( Σ_{k=j}^{j+i−1} ∠_k ) + sin²( Σ_{k=1}^{L−1} ∠_k ).
Next, note that
sin²( Σ_{k=1}^{L−1} ∠_k ) = sin²( 2π − Σ_{k=1}^{L−1} ∠_k ) = sin²( 2π − (Θ_{(L)} − Θ_{(1)}) ) = sin²(∠_L),   (17)
where the last two equalities follow from Definition 3. Hence,
D = Σ_{k=1}^{L} sin²(∠_k) + Σ_{i=2}^{L−2} Σ_{j=1}^{L−i} sin²( Σ_{k=j}^{j+i−1} ∠_k ).
Lastly, we complete the proof by noting that the first sum may be equivalently expressed by
replacing the internodal angles with their order statistics.
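The regrouping above is easy to verify numerically. A short Python sketch follows; constructing the internodal angles as circular gaps between random points is our own assumption for the check.

import numpy as np

def D_eq16(ang):
    # Eq. (16): sums of consecutive angles, i = 1..L-1, j = 1..L-i (0-based here).
    L = len(ang)
    return sum(np.sin(ang[j:j + i].sum())**2
               for i in range(1, L) for j in range(L - i))

def D_final(ang):
    # Final form: every single angle, plus the i = 2..L-2 middle block.
    L = len(ang)
    return (np.sum(np.sin(ang)**2)
            + sum(np.sin(ang[j:j + i].sum())**2
                  for i in range(2, L - 1) for j in range(L - i)))

rng = np.random.default_rng(1)
pts = np.sort(rng.uniform(0.0, 2.0 * np.pi, 6))
ang = np.diff(np.append(pts, pts[0] + 2.0 * np.pi))  # L gaps summing to 2*pi
print(np.isclose(D_eq16(ang), D_final(ang)))         # expected: True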
APPENDIX B
PROOF OF COROLLARY 2.1
Using Lemma 2 and our assumption of a finite L, the pdf of ∠_{(L−1)} conditioned on L is

f_{∠(L−1)}(ϕ | L) = (d/dϕ) F_{∠(L−1)}(ϕ | L) = Σ_{n=0}^{X} (−1)^n \binom{L}{n} (n(n−1)(L−1)/(2π)) (1 − nϕ/(2π))^{L−2},

where X = min(L, ⌊2π/ϕ⌋) and the support is 0 ≤ ∠_{(L−1)} ≤ π, as in Lemma 2.
Next, note that the lower summation limit may be rewritten as n = 2 and that the upper
summation limit, X, can be simplified to just L by appending an indicator function to the
summand. This gives the logically equivalent expression:
f_{∠(L−1)}(ϕ | L) = Σ_{n=2}^{L} (−1)^n \binom{L}{n} (n(n−1)(L−1)/(2π)) (1 − nϕ/(2π))^{L−2} · 1{ϕ ≤ 2π/n}.
Using the above expression for the pdf, the expectation is derived as follows:

E[∠_{(L−1)} | L] = ∫_0^π ϕ f_{∠(L−1)}(ϕ | L) dϕ
 = ∫_0^π ϕ Σ_{n=2}^{L} (−1)^n \binom{L}{n} (n(n−1)(L−1)/(2π)) (1 − nϕ/(2π))^{L−2} 1{ϕ ≤ 2π/n} dϕ   (18)
 = Σ_{n=2}^{L} (−1)^n \binom{L}{n} (n(n−1)(L−1)/(2π)) ∫_0^{2π/n} ϕ (1 − nϕ/(2π))^{L−2} dϕ   (19)
 = Σ_{n=2}^{L} (−1)^n \binom{L}{n} 2π(n−1)/(nL),   (20)

where (18) follows from substituting the pdf, (19) follows from interchanging the (finite, by our assumption on L) sum and the integral and absorbing the indicator function into the integration limits (since n ≥ 2 implies 2π/n ≤ π), and (20) is derived through integration by parts.
APPENDIX C
PROOF OF LEMMA 3
This is a straightforward application of the lowest Geometric Dilution Of Precision (GDOP)
presented in [30]. First, since L is finite, then D is a continuous real function defined on a
compact subset of R L . Thus, its maximum must exist. Call it dmax .
Next, under our Assumptions 2, 3, and 4, the GDOP presented in [27] can be written as GDOP = √(L/D), where D follows from the lemma assumption. Further, under our assumptions, [30] asserts that the lowest GDOP is given by GDOP_min = 2/√L. Since GDOP_min must occur when D is at its maximum, we have 2/√L = √(L/d_max), and the lemma follows.
APPENDIX D
PROOF OF THEOREM 4
Under Approximation 1, we have S = a/sin(∠(L−1) ), where a is defined in Theorem 4. To
determine the support of S conditioned on L, we know from Lemma 2 that if 0 ≤ ∠(L−1) ≤ π,
then the support for the RV sin(∠(L−1) ) must be 0 ≤ sin(∠(L−1) ) ≤ 1. From here, we see that
0 ≤ sin(∠_{(L−1)}) ≤ 1   ⇒   1 ≤ 1/sin(∠_{(L−1)})   ⇒   a ≤ a/sin(∠_{(L−1)})   ⇒   a ≤ S,
and hence, S ∈ [a, ∞).
Next, to find the cdf of S conditioned on L, consider the following
F_S(s | L, σr) = P[S ≤ s | L, σr]
 = P[ a/sin(∠_{(L−1)}) ≤ s | L ]   (21)
 = P[ ∠_{(L−1)} ≤ π − sin⁻¹(a/s) | L ] − P[ ∠_{(L−1)} ≤ sin⁻¹(a/s) | L ]
 = P[ ∠_{(L−1)} ≤ ϕ₂ | L ] − P[ ∠_{(L−1)} ≤ ϕ₁ | L ]   (22)
 = F_{∠(L−1)}(ϕ₂ | L) − F_{∠(L−1)}(ϕ₁ | L),   (23)
where (21) follows from Approximation 1 and the fact that we may drop the parameter, σr , as
a condition since the dependency is now explicit. Further, ϕ1 and ϕ2 from (22) are defined in
Theorem 4, and (23), through use of Lemma 2, gives (14) from Theorem 4, as desired.
REFERENCES
[1] C. E. O’Lone and R. M. Buehrer, “Towards a Characterization of Localization Performance in Networks with Random
Geometries,” in IEEE International Conference on Communications (ICC), Paris, France, May 2017.
[2] P. Baronti, P. Pillai, V. W. C. Chook, S. Chessa, A. Gotta, and Y. F. Hu, “Wireless sensor networks: A survey on the state
of the art and the 802.15.4 and ZigBee standards,” in Comput. Commun. 30, pp. 1655-1695, May 2007.
[3] M. A. M. Vieira, C. N. Coelho, D. C. da Silva, and J. M. da Mata, “Survey on wireless sensor network devices,” in Proc.
of IEEE Conference on Emerging Technologies and Factory Automation., vol. 1, pp. 537-544, 2003.
[4] N. Patwari, J. N. Ash, S. Kyperountas, A. O. Hero, R. L. Moses, and N. S. Correal, “Locating the nodes: cooperative
localization in wireless sensor networks,” in IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54-69, July 2005.
[5] N. A. Alrajeh, M. Bashir, and B. Shams, “Localization Techniques in Wireless Sensor Networks,” in International Journal
of Distributed Sensor Networks, vol. 9, no. 6, 2013.
[6] S. M. Kay, Fundamentals of Statistical Signal Processing. Upper Saddle River, NJ: Prentice-Hall, Inc., vol. 1, 1993.
[7] C. Chang and A. Sahai, “Estimation bounds for localization,” in First Annual IEEE Communications Society Conference
on Sensor and Ad Hoc Communications and Networks. IEEE SECON, pp. 415-424., 2004.
[8] H. Wang, L. Yip, K. Yao, and D. Estrin, “Lower bounds of localization uncertainty in sensor networks,” in IEEE International
Conference on Acoustics, Speech, and Signal Processing, vol. 3, pp. iii-917-20, 2004.
[9] I. Guvenc and C. C. Chong, “A Survey on TOA Based Wireless Localization and NLOS Mitigation Techniques,” in IEEE
Communications Surveys & Tutorials, vol. 11, no. 3, pp. 107-124, 3rd Quarter 2009.
[10] A. Savvides et al., “On The Error Characteristics of Multihop Node Localization In Ad-Hoc Sensor Networks,” in ISPN’03
Proceedings of the 2nd international conference on Information Processing in Sensor Networks, pp. 317-332, 2003.
[11] M. Haenggi, J. G. Andrews, F. Baccelli, O. Dousse, and M. Franceschetti, “Stochastic Geometry and Random Graphs for
the Analysis and Design of Wireless Networks,” in IEEE J. Sel. Areas Commun., vol. 27, no. 7, pp. 1029-1046, Sept., 2009.
[12] M. Haenggi, Stochastic Geometry for Wireless Networks. New York: Cambridge University Press, 2013
[13] B. Huang, T. Li, B. D. O. Anderson, and C. Yu, “On the performance limit of sensor localization,” in 50th IEEE Conference
on Decision and Control and European Control Conference, Orlando, FL, pp. 7870-7875, 2011.
[14] B. Huang et al., “On the performance limit of single-hop TOA localization,” in 12th International Conference on Control
Automation Robotics & Vision (ICARCV), Guangzhou, pp. 42-46, 2012.
[15] B. Huang et al., “Performance Limits in Sensor Localization,” in Automatica, vol. 49, no. 2, pp. 503-509, Feb. 2013.
[16] F. Zhou and Y. Shen, “On the Outage Probability of Localization in Randomly Deployed Wireless Networks,” in IEEE
Communications Letters, vol. 21, no. 4, pp. 901-904, April 2017.
[17] J. Schloemann, H. S. Dhillon, and R. M. Buehrer, “Towards a Tractable Analysis of Localization Fundamentals in Cellular
Networks,” in IEEE Trans. Wireless Commun., vol. 15, no. 3, pp. 1768-1782, March 2016.
[18] Code of Federal Regulations, “911 Service,” 47 C.F.R. 20.18 (h), 2015
[19] J. G. Andrews, F. Baccelli, and R. K. Ganti, “A Tractable Approach to Coverage and Rate in Cellular Networks,” in IEEE
Trans. Commun., vol. 59, no. 11, pp. 3122-3134, Nov. 2011.
[20] H. S. Dhillon, R. K. Ganti, F. Baccelli, and J. G. Andrews, “Modeling and Analysis of K-Tier Downlink Heterogeneous
Cellular Networks,” in IEEE J. Sel. Areas Commun., vol. 30, no. 3, pp. 550-560, Apr. 2012.
[21] B. Blaszczyszyn, M. K. Karray, and H. P. Keeler, “Using Poisson processes to model lattice cellular networks,” in Proc.
IEEE INFOCOM, pp. 773-781, Apr. 2013.
[22] H. P. Keeler, N. Ross, and A. Xia, “When do wireless network signals appear Poisson?,” in arXiv:1411.3757, 2014.
[23] B. Blaszczyszyn, M. K. Karray, and H. P. Keeler, “Wireless networks appear Poissonian due to strong shadowing,” in
IEEE Trans. Wireless Commun., vol. 14, no. 8, Aug. 2015.
[24] R. Zekavat and R. M. Buehrer, Handbook of Position Location: Theory, Practice, and Advances. Wiley, 2012.
[25] D. B. Jourdan, D. Dardari, and M. Z. Win, “Position Error Bound for UWB Localization in Dense Cluttered Environments,”
IEEE International Conference on Communications, Istanbul, 2006, pp. 3705-3710.
[26] D. B. Jourdan and N. Roy, “Optimal Sensor Placement for Agent Localization,” IEEE/ION Position, Location, And
Navigation Symposium, 2006, pp. 128-139.
[27] J. Schloemann, H. S. Dhillon, and R. M. Buehrer, “A Tractable Metric for Evaluating Base Station Geometries in Cellular
Network Localization”, IEEE Wireless Commun. Lett., vol. 5, no. 2, pp. 140-143, Apr. 2016.
[28] J. Nounagnon, “Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning,” Ph.D.
Dissertation, Bradley Dept. of Electrical and Computer Eng., Virginia Tech, Blacksburg, VA, 2016.
[29] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, NY: Wiley-Interscience, 1991, chs. 2 and 9.
[30] X. Lv, K. Liu and P. Hu, “Geometry Influence on GDOP in TOA and AOA Positioning Systems,” Second International
Conference on Networks Security, Wireless Communications and Trusted Computing, Wuhan, Hubei, 2010, pp. 58-61.
[31] H. S. Dhillon and J. G. Andrews, “Downlink Rate Distribution in Heterogeneous Cellular Networks under Generalized
Cell Selection,” in IEEE Wireless Communications Letters, vol. 3, no. 1, pp. 42-45, February 2014.
[32] T. Bhandari, H. S. Dhillon, and R. M. Buehrer, “The Impact of Proximate Base Station Measurements on Localizability in Cellular Systems,” in Proc. IEEE SPAWC, Edinburgh, UK, July 2016. (Invited paper)
arXiv:1803.04144v1 [math.OC] 12 Mar 2018
Solving Markov decision processes for network-level post-hazard recovery via simulation optimization and rollout
(Invited Paper)
Yugandhar Sarkale1, Saeed Nozhati2, Edwin K. P. Chong1, Bruce R. Ellingwood2, Hussam Mahmoud2
Abstract— Computation of optimal recovery decisions for
community resilience assurance post-hazard is a combinatorial
decision-making problem under uncertainty. It involves solving
a large-scale optimization problem, which is significantly aggravated by the introduction of uncertainty. In this paper, we draw
upon established tools from multiple research communities to
provide an effective solution to this challenging problem. We
provide a stochastic model of damage to the water network
(WN) within a testbed community following a severe earthquake
and compute near-optimal recovery actions for restoration
of the water network. We formulate this stochastic decision-making problem as a Markov Decision Process (MDP), and
solve it using a popular class of heuristic algorithms known as
rollout. A simulation-based representation of MDPs is utilized
in conjunction with rollout and the Optimal Computing Budget
Allocation (OCBA) algorithm to address the resulting stochastic
simulation optimization problem. Our method employs non-myopic planning with efficient use of the simulation budget. We
show, through simulation results, that rollout fused with OCBA
performs competitively with respect to rollout with total equal
allocation (TEA) at a meagre simulation budget of 5-10% of
rollout with TEA, which is a crucial step towards addressing
large-scale community recovery problems following natural
disasters.
I. INTRODUCTION
Natural disasters have a significant impact on the economic, social, and cultural fabric of affected communities.
Moreover, because of the interconnected nature of communities in the modern world, the adverse impact is no
longer restricted to the locally affected region, but it has
ramifications on a national or international scale. Among other
factors, the occurrence of such natural disasters is on the
rise owing to population growth and economic development
in hazard-prone areas [1]. Keeping in view the increased
frequency of natural disasters, there is an urgent need to
address the problem of community recovery post-hazard.
Typically, the resources available to post-disaster planners
are limited and relatively small compared to the impact of
1 Yugandhar Sarkale and Edwin K. P. Chong are with Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO 80523-1373, USA Yugandhar.Sarkale,
[email protected]
2 Saeed
Nozhati, Bruce R. Ellingwood and Hussam Mahmoud
are with the Department of Civil and Environmental Engineering, Colorado State University, Fort Collins, CO 805231372,
USA
Saeed.Nozhati, Bruce.Ellingwood,
[email protected]
This work was supported by the National Science Foundation under Grant
CMMI-1638284. This support is gratefully acknowledged. Any opinions,
findings, conclusions, or recommendations presented in this material are
solely those of the authors and do not necessarily reflect the views of the
National Science Foundation.
the damage. Under these scenarios, it becomes imperative
to assign limited resources to various damaged components
in the network optimally to support community recovery.
Such an assignment must also consider multiple objectives and cascading effects due to the interconnectedness
of various networks within the community and must also
successfully adopt previous proven methods and practices
developed by expert disaster management planners. Holistic
approaches addressing various uncertainties for network-level management of limited resources must be developed
for maximum effect. Civil infrastructure systems, including
power, transportation, and water networks, play a critical part
in post-disaster recovery management. In this study, we focus
on one such critical infrastructure system, namely the water
networks (WN), and compute near-optimal recovery actions,
in the aftermath of an earthquake, for the WN of a test-bed
community.
Markov decision processes (MDPs) offer a convenient
framework for representation and solution of stochastic
decision-making problems. Exact solutions are intractable for
problems of even modest size; therefore, approximate solution methods have to be employed. We can leverage the rich
theory of MDPs to model recovery action optimization for
large state-space decision-making problems such as ours. In
this study, we employ a simulation-based representation and
solution of MDP. The near-optimal solutions are computed
using an approximate solution technique known as rollout.
Even though state-of-the-art hardware and software practices
are used to implement the solution to our problem, we are
faced with the additional dilemma of computing recovery
actions on a fixed simulation budget without affecting the solution performance. Therefore, any prospective methodology
must incorporate such a limitation in its solution process.
We incorporate the Optimal Computing Budget Allocation
(OCBA) algorithm into our MDP solution process [2], [3] to
address the limited simulation budget problem.
II. TESTBED CASE STUDY
A. Network Characterization
This study considers the potable water network (WN) of
Gilroy, CA, USA as an example to illustrate the proposed
methodology. Gilroy, located 50 kilometers (km) south of
the city of San Jose, CA, is approximately 41.91 km² in area,
with a population of 48,821 [4]. We divide our study area
into 36 grid regions to define the properties of infrastructure
systems, household units, and the population. Our model
of the community maintains adequate detail to study the
performance of the WN at a community level under severe earthquakes. The potable water of Gilroy is provided only by the Llagas sub-basin [5]. The potable water wells, located in wood-frame buildings, pump water into the distribution system. The Gilroy municipal water pipelines range from 102 mm to 610 mm in diameter [5]. In this study, a simplified aggregated model of the WN of Gilroy, adopted from [5], is used. This model, shown in Fig. 1, includes six water wells, two booster pump stations (BPS), three water tanks (WT), and the main pipelines.
Fig. 1. The modeled Water Network of Gilroy
B. Seismic Hazard Simulation
The San Andreas Fault (SAF), which is near Gilroy, is a source of severe earthquakes. In this study, we assume that a seismic event of moment magnitude Mw = 6.9 occurs at one of the closest points on the SAF projection to downtown Gilroy, with an epicentral distance of approximately 12 km. Ground motion prediction equations (GMPE) determine the conditional probability of exceeding ground motion intensity at specific geographic locations within Gilroy for this earthquake.
The Abrahamson et al. [6] GMPE is used to estimate the Intensity Measures (IM) and associated uncertainties. Peak Ground Acceleration (PGA) is considered for the above-ground WN facilities and wells, whereas Peak Ground Velocity (PGV) is considered as the IM of pipelines.
C. Fragility and Restoration Assessment of Water Network
The physical damage to WN components can be assessed by seismic fragility curves. We use the fragility curves presented in HAZUS-MH [7] for wells, water tanks, and pump stations based on the IM of PGA. This study adopts the assumptions in [8] for water pipelines. The failure probability of a pipe is bounded as follows:

1 − G_{εPGV}(−C L µPGV) ≤ E[Pf] ≤ 1 − E[exp(−C L µPGV)],   (1)

where Pf is the failure probability of a pipe, L is the length of the pipe, µPGV is the average PGV for the entire length of the water main, and G(·) is the moment-generating function of εPGV (the residual of the PGV). The term C for water pipe segment i is C = K × 0.00187 × PGVi, where K is a coefficient determined by the pipe material, diameter, joint type, and soil condition, based on the guidelines prepared by the American Lifeline Alliance [9]. Adachi and Ellingwood [8] demonstrated that the Upper Bound (UB) and exact solutions of (1) are close enough that in practical applications the UB assessment (conservative evaluation) can be used.
Repair crews, replacement components, and tools are considered as available units of resources to restore the damaged components of the WN following the hazard. One unit of resources is required to repair each damaged component [10], [11]. However, the available units of resources are limited and depend on the capacities and policy of the entities within the community. To restore the WN, restoration curves based on exponential distributions synthesized from HAZUS-MH [7] are used, as summarized in Table I. The pipe-restoration time in the WN is based on the repair rate, or number of repairs per kilometer.

TABLE I
THE EXPECTED REPAIR TIMES (UNIT: DAYS)
Component      | Minor | Moderate | Extensive | Complete
Water tanks    | 1.2   | 3.1      | 93        | 155
Wells          | 0.8   | 1.5      | 10.5      | 26
Pumping plants | 0.9   | 3.1      | 13.5      | 35
III. PROBLEM DESCRIPTION AND SOLUTION
A. MDP Framework
We provide a brief description of MDP [12] for the sake
of completeness. An MDP is a controlled dynamical process
useful in modelling a wide range of decision-making problems. It can be represented by the 4-tuple ⟨S, A, T, R⟩. Here, S represents the set of states, and A represents the set of actions. Let s, s′ ∈ S and a ∈ A; then T is the state transition function, where T(s, a, s′) = P(s′ | s, a) is the probability of going into state s′ after taking action a in state s. R is the reward function, where R(s, a, s′) is the reward received after transitioning from s to s′ as a result of action a. In this study, we assume that |S| and |A| are finite. R is bounded, real-valued, and a deterministic function of s, a, and s′. Implicit in our presentation are also the following assumptions: first-order Markovian dynamics (history independence), stationary
dynamics (reward function is not a function of absolute time),
and full observability of the state space (outcome of an action
in a state might be random, but we know the state reached
after action is completed). In our study, we assume that we
are allowed to take recovery actions (decisions) indefinitely
until all the damaged components of our modelled problem
are repaired (infinite-horizon planning). In this setting, we
have a stationary policy π, which is defined as π : S → A.
Let’s say that decisions are made at discrete-time t; then
π(s) is the action to be taken in state s (regardless of time
t). Our objective is to find an optimal policy π ∗ . For the
infinite-horizon case, π ∗ is defined as
π∗ = arg max_π V^π(s₀),   (2)

where

V^π(s₀) = E[ Σ_{t=0}^{∞} γ^t R(s_t, π(s_t), s_{t+1}) ]   (3)
is called the value function for a fixed policy π, and 0 < γ < 1
is the discount factor. Note that the optimal policy is independent of the initial state s0 . Also, note that we maximize
over policies π, where at each time t the action taken is
at = π(st ). Stationary optimal policies are guaranteed to exist
for discounted infinite-horizon optimization criteria [13]. To
summarize, our presentation is for infinite-horizon discrete-time MDPs with the discounted value as our optimization
criterion.
B. MDP Solution
A solution to an MDP is the optimal policy π ∗ . We can
obtain π ∗ with linear programming or dynamic programming. In the dynamic programming regime, there are several
solution strategies, namely value iteration, policy iteration,
modified policy iteration, etc. Unfortunately, such exact
solution algorithms are intractable for large state and action
spaces. We briefly mention here the method of value iteration
because it illustrates Bellman's equation [14]. Studying Bellman's equation is useful for defining the Q value function, which will play a critical role in describing the rollout algorithm. Let V^{π∗} denote the optimal value function for some π∗; Bellman showed that V^{π∗} satisfies:

V^{π∗}(s) = max_{a∈A(s)} { γ · Σ_{s′} P(s′ | s, a) · [ V^{π∗}(s′) + R(s, a, s′) ] }.   (4)
Equation (4) is known as Bellman's optimality equation,
where A(s) is the set of possible actions in any state s. The
value iteration algorithm solves (4) by using Bellman backup
repeatedly, where Bellman backup is given by:
V_{i+1}(s) = max_{a∈A(s)} { γ Σ_{s′} P(s′ | s, a) · [ V_i(s′) + R(s, a, s′) ] }.   (5)

Bellman showed that lim_{i→∞} V_i = V^{π∗}, where V_0 is initialised arbitrarily.1 Next, we define the Q value function of a policy π:

Q^π(s, a) = γ · Σ_{s′} P(s′ | s, a) · [ V^π(s′) + R(s, a, s′) ].   (6)

The Q value function of any policy π gives the expected discounted reward in the future after starting in some state s, taking action a, and following policy π thereafter. Note that this is the inner term in (4).
1 On a historical note, Lloyd Shapley's paper [15] included the value iteration algorithm for MDPs as a special case, but this was recognised only later on [16].
C. Simulation-Based Representation of MDP
We now briefly explain the simulation-based representation of an MDP [17]. Such a representation serves well for large state and action spaces, which is a characteristic feature of many real-world problems. When |S| or |A| is large, it is not feasible to represent T and R in matrix form. A simulation-based representation of an MDP is a 5-tuple ⟨S, A, R, T, I⟩, where S and A are as before, except |S| and |A| are large. Here, R is a stochastic, real-valued, bounded function that stochastically returns a reward r when inputs s and a are provided, where a is the action applied in state s. T is a simulator that stochastically returns a state s′ when state s and action a are provided as inputs. I is the stochastic initial-state function that stochastically returns a state according to some initial-state distribution. R, T, and I can be thought of as callable library functions that can be implemented in any programming language.
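To make this interface concrete, the following Python sketch shows the 5-tuple's R, T, and I as plain callables. The damage and repair details below are illustrative placeholders, not the water-network model of Section II.

import random

def I(n_components=5):
    # Stochastic initial state: 1 = damaged, 0 = repaired.
    return tuple(random.choice([0, 1]) for _ in range(n_components))

def T(s, a):
    # Stochastic simulator: a crewed location (a[l] == 1) is repaired
    # this period with some fixed probability (a placeholder dynamic).
    return tuple(0 if (a[l] == 1 and random.random() < 0.5) else s[l]
                 for l in range(len(s)))

def R(s, a, s_next):
    # Stochastic reward through the simulator: components newly repaired.
    return sum(s) - sum(s_next)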
D. Problem Formulation
After an earthquake event occurs, the components of the
water network remain undamaged or exhibit one of the
damage states as shown in Table I. Let L0 be the total number of damaged components at time t. Let tc represent the decision time when all components are repaired. There is a fixed number of resource units (M) available to the decision maker. At each discrete time t, the decision maker has to decide the assignment of units of resources to the damaged locations; each component cannot be assigned more than one resource unit. When the number of units of resources is greater than the number of damaged locations (because of sequential application of repair actions, or otherwise), we retire the extra units of resources so that M is less than or equal to the total number of damaged locations.
• States S: Let s_t be the state of the damaged components of the system at time t; then s_t is a vector of length L0, s_t = (s_t^1, . . . , s_t^{L0}), and s_t^l is one of the damage states in Table I, where l ∈ {1, . . . , L0}.
• Actions A: Let a_t denote the repair action to be carried out at time t. Then, a_t is a vector of length L0, a_t = (a_t^1, . . . , a_t^{L0}), and a_t^l ∈ {0, 1} ∀ l, t. When a_t^l = 0, no repair work is to be carried out at location l. Similarly, when a_t^l = 1, repair work is carried out at l.
• Simulator T: The repair time associated with each
damaged location depends on the state of the damage
to the component at that location (see Table I). This
repair time is random and is exponentially distributed
with expected repair times shown in Table I. Given st
and at , T gives us the new state st+1 . We say that a
repair action is complete as soon as at least one of
the locations where repair work is carried out is fully
repaired. Let’s denote this completion time at every t
by tˆ. Note that it is possible for the repair work at two
or more damaged locations to be completed simultaneously. Once the repair action is complete, the units
of resources at remaining locations, where repair work
was not complete, are also available for reassignment
along with the units of resources where repair was complete.
The new repair time at such unrepaired locations is
calculated by subtracting tˆ from the time required to
repair these locations. It is also possible to reassign the
unit of resource at the same unrepaired location if it is
deemed important for the repair work to be continued
at that location by the planner. For this reason, preemption of repair work during reassignment is not a restrictive assumption; on the contrary, it allows greater flexibility to the decision maker for planning. Because the repair times are random, the outcomes of repair actions are random: the same damaged component will not necessarily be repaired first every time, even if the same repair action a_t is applied in s_t. Hence, our simulator T is stochastic. An alternative formulation, where the outcome of a repair action is deterministic, is also an active area of research [18], [19].
• Rewards R: We wish to plan decisions optimally so that the maximum number of people get water in the minimum amount of time. We combine these two competing objectives to define our reward as

R(s_t, a_t, s_{t+1}) = r / t_rep,   (7)
where r is the number of people who have water after
action at is completed, and trep is the total repair time
(days) required to reach st+1 from any initial state s0 .
Note that our reward function is stochastic because the
outcome of our action at is random.
• Initial State I: We have already described the stochastic
damage model of the components for the modeled
network in Section II-B and Section II-C. The initial
damage states associated with the components will be
provided by these models.
• Discount factor γ: In our simulation studies, γ is fixed
at 0.99.
E. Rollout
The rollout algorithm was first proposed for stochastic
scheduling problems by Bertsekas and Castanon [20]. Instead
of the dynamic programming formalism [20], we motivate
the rollout algorithm in relation to the simulation-based
representation of our MDP. Suppose that we have access
to a non-optimal policy π, and our aim is to compute an
improved policy π′. Then, we have:

π′(s_t) = arg max_{a_t} Q^π(s_t, a_t),   (8)
where the Q function is as defined in (6). If the policy π′ defined in (8) is non-optimal, it is a strict improvement over π [13]. This result is termed the policy improvement theorem. Note that the improved policy π′ is generated as a greedy policy w.r.t. Q^π. Unlike the exact solution methods described in Section III-B, we are interested here in computing π′ only for the current state. Methods that use (8) as the basis for
updating the policy suffer from the curse of dimensionality.
Before performing the policy improvement step in (8), we
have to first calculate the value of Qπ . Calculating the
value of Qπ in (8) is known as policy evaluation. Policy
evaluation is intractable for large or continuous state and
action spaces. Approximation techniques alleviate this problem by calculating an approximate Q value function. Rollout
is one such approximation technique that utilises Monte Carlo simulations. In particular, rollout can be formulated
as an approximate policy iteration algorithm [17], [21].
An implementable (in the programming sense) stochastic function
(simulator) SimQ(st , at , π, h) is defined in such a way that
its expected value is Qπ (st , at , h), where h is a finite number
representing horizon length. In the rollout algorithm, SimQ is
implemented by simulating action at in state st and following
π thereafter for h − 1 steps. This is done for all the actions
at ∈ A(st ). A finite horizon approximation of Qπ (st , at )
(termed as Qπ (st , at , h)) is required; our simulation would
never finish in the infinite horizon case because we would
have to follow policy π indefinitely. However, V π (st ), and
consequently Qπ (st , at ), is defined over the infinite horizon.
It is easy to show the following:

|Q^π(s_t, a_t) − Q^π(s_t, a_t, h)| ≤ γ^h R_max / (1 − γ).   (9)
The approximation error in (9) reduces exponentially fast
as h grows. Therefore, the h-horizon results apply to the
infinite horizon setting, for we can always choose h such
that the error in (9) is negligible. To summarize, the rollout
algorithm can be presented in the following fashion for our
problem:
Algorithm 1 Uniform Rollout(π, h, α, s_t)
  for i = 1 to n do
      for j = 1 to α do
          ã_{i,j} ← SimQ(s_t, a_t^i, π, h)    ▷ See Algorithm 2
      end for
      ã_t^i ← Mean_j(ã_{i,j})
  end for
  k ← arg max_i ã_t^i
  return a_t^k

Algorithm 2 Simulator SimQ(s_t, a_t^i, π, h)
  s_{t+1} ← T(s_t, a_t^i)
  r ← R(s_t, a_t^i, s_{t+1})
  for p = 1 to h − 1 do
      s_{t+1+p} ← T(s_{t+p}, π(s_{t+p}))
      r ← r + γ^p R(s_{t+p}, π(s_{t+p}), s_{t+1+p})
  end for
  return r
In Algorithm 1, n denotes |A(st )|. Note that Algorithm 2
returns the discounted sum of rewards. When h = tc , we term
the rollout as complete rollout, and when h < tc , the rollout
is called truncated rollout [20]. It is possible to analyse
the performance of uniform rollout in terms of uniform
allocation α and horizon depth h [17], [22].
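A compact Python rendering of Algorithms 1 and 2, written against callables T and R as in the simulation-based representation above, is given below; the function names are our own.

def sim_q(s, a, policy, h, T, R, gamma=0.99):
    # Algorithm 2: one sampled h-horizon discounted return for (s, a).
    s_next = T(s, a)
    ret = R(s, a, s_next)
    for p in range(1, h):
        a_p = policy(s_next)
        s_after = T(s_next, a_p)
        ret += gamma**p * R(s_next, a_p, s_after)
        s_next = s_after
    return ret

def uniform_rollout(s, actions, policy, h, alpha, T, R):
    # Algorithm 1: average alpha SimQ samples per action, return the best.
    def q_hat(a):
        return sum(sim_q(s, a, policy, h, T, R) for _ in range(alpha)) / alpha
    return max(actions, key=q_hat)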
F. Optimal Computing Budget Allocation
In the previous section, we presented the rollout method
for solving our MDP problem. In the case of uniform rollout,
we allocate a fixed rollout sampling budget α to each action,
i.e., we obtain α rollout samples per candidate
action to estimate the Q value associated with the action. In
the simulation optimization community, this is analogous to
total equal allocation (TEA) [23] with a fixed budget α for
each simulation experiment (a single simulation experiment
is equivalent to one rollout sample). In practice, we are only
interested in the best possible action, and we would like to
direct our search towards the most promising candidates.
Moreover, for large real-world problems, the available simulation budget
is insufficient to allocate α rollout samples per action. We would rather
obtain a rough estimate of the performance of each action and spend the
remaining simulation budget on refining the accuracy of the best estimates.
This is the classic exploration vs. exploitation problem faced
in optimal learning and simulation optimization problems.
Instead of a uniform allocation α for each action, non-uniform allocation
methods pertaining to Algorithm 1, called adaptive rollout, have been
explored in the literature [24]. An analysis of performance guarantees for
adaptive rollout remains an active area of research [24]–[26]. These
non-uniform allocation methods guarantee performance without a constraint
on the budget of rollouts. Hence, we explore an alternative non-uniform
allocation method that not only fuses well into our solution (adaptively
guiding the stochastic search) but also incorporates the constraint
of the simulation budget into its allocation procedure. Numerous
of simulation budget in its allocation procedure. Numerous
techniques have been proposed in the simulation optimization community to solve this problem. We draw upon one of
the best performers [27] which naturally fits into our solution
framework—OCBA. Moreover, the probability of correct
selection of an alternative in OCBA mimics finding the best
candidate action at each stage in Algorithm 1. Formally, the
OCBA problem [28] for Section III-D can be stated as :
n
max P{CS} such that ∑ Ni = B,
N1 ,...,Nn
(10)
i=1
where B represents the simulation budget for determining
optimal at for st at any t. At each OCBA allocation step,
barring the best alternative, the OCBA solution assigns to each
alternative an allocation that is directly proportional to the variance of
that alternative and inversely proportional to the squared difference
between its mean and the mean of the best alternative.
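As a concrete illustration, this allocation rule can be sketched as
follows. This is our paraphrase of the asymptotic rule in [28]; the
numerical guards against zero variance and zero mean-gap, and the final
rounding, are implementation choices of ours, not part of [28].

    import numpy as np

    def ocba_allocation(means, stds, budget):
        # Non-best alternatives: N_i proportional to (sigma_i / delta_i)^2,
        # where delta_i is the gap between the best mean and mean i.
        means = np.asarray(means, dtype=float)
        stds = np.maximum(np.asarray(stds, dtype=float), 1e-12)
        b = int(np.argmax(means))
        delta = np.maximum(means[b] - means, 1e-12)
        rel = (stds / delta) ** 2
        rel[b] = 0.0
        # Best alternative: N_b = sigma_b * sqrt(sum_{i != b} (N_i / sigma_i)^2).
        rel[b] = stds[b] * np.sqrt(np.sum((rel / stds) ** 2))
        # Scale the relative allocations to the budget (rounding may make the
        # total deviate from the budget by a sample or two).
        return np.rint(budget * rel / rel.sum()).astype(int)

    # Example: three candidate actions, a per-stage budget of 100 samples.
    print(ocba_allocation([0.2, 0.5, 0.4], [0.1, 0.2, 0.3], 100))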
Fig. 2. Performance comparison of rollout vs base policy for 3 units of
resources.

Here, we only provide the information required to initialize
the OCBA algorithm. For a detailed description of OCBA,
including the solution to the problem in (10), see [28]. The
key initialization variables for the OCBA algorithm [28] are
k, T (not to be confused with T in this paper), ∆, and n0. The
variable k is equal to the variable n in our problem. The value of
n changes at each t and depends on the number of damaged
components and units of resources. The variable T is equal to
the per-stage budget B in our problem. More information about
the exact value assigned to B is given in Section IV. We
follow the guidelines specified in [29] to select n0 and ∆; n0
in the OCBA algorithm is set equal to 5, and ∆ is kept
at 15% of n (within rounding).
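As a worked illustration of this mapping (our sketch, not code from [28]
or [29]), the per-stage initialization could look as follows:

    # Hypothetical initialization of OCBA for one decision stage t.
    # n: number of candidate actions at stage t; B: per-stage budget.
    def ocba_init(n, B):
        k = n                              # number of alternatives
        total_budget = B                   # the OCBA "T" (renamed to avoid clash)
        n0 = 5                             # initial samples per alternative [29]
        delta = max(1, round(0.15 * n))    # increment per allocation step [29]
        return k, total_budget, n0, delta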
IV. SIMULATION RESULTS
We simulate 100 different initial damage scenarios for
each of the plots presented in this section. There will be
a distinct recovery path for each of the initial damage
scenarios. All the plots presented here represent the average
of 100 such recovery paths. Two different simulation plots of
rollout fused with OCBA are provided in Fig. 2 and Fig. 3.
They are termed rollout with OCBA1 and rollout with
OCBA2. The method applied is the same for both cases;
only the per-stage simulation budget is different. A per-stage
budget (budget at each decision time t) of B = 5 · n + 5000
is assigned for rollout with OCBA1 and B = 5 · n + 10000
for rollout with OCBA2. Fig. 2 compares the performance
of rollout fused with OCBA and base policy. The rollout
algorithm is known to have the “lookahead property” [20].
This behavior of the rollout algorithm is evident in the results
in Fig. 2, where the base policy initially outperforms the
rollout policy, but after about six days the rollout policy steadily
outperforms the base policy. Recall that our objective is to perform
repair actions so that the maximum number of people have water in
the minimum amount of time. Evaluating the performance of our
method in meeting this objective is equivalent to checking
the area under the curve of our plots. This area represents
the product of the number of people who have water and
the number of days for which they have water. A larger area
indicates that a greater number of people benefited as
a result of the recovery actions. The area under the curve for
recovery with rollout (blue and red plots) is greater than that of its
base counterpart (black). A per-stage budget increase of 5000
simulations in rollout with OCBA2 with respect to rollout
with OCBA1 shows improvements in the recovery process.
In the plots shown in Fig. 3, we use M = 5. In the
initial phase of planning, it might appear that the base
policy outperforms the rollout for a substantial amount of
time. However, this is not the case. Note that the number
of days for which the base policy outperforms rollout, in
both Fig. 2 and Fig. 3, is about six days; but because the
number of resource units has increased from three to five,
the recovery is faster, giving the illusion that the base policy
outperforms rollout for a longer duration. It was verified that
the area under the curve for recovery with rollout (blue and
red curves) is greater than that of its base counterpart (black curve).
Because OCBA is fused with rollout here, we would like to
ascertain the exact contribution of the OCBA approach to
enhancing the rollout performance.

Fig. 3. Performance comparison of rollout vs base policy for 5 units of
resources.
For the rollout with OCBA in Fig. 4, B = 5 · n + 20000,
whereas α = 200 for uniform rollout. The recovery as
a result of these algorithms outperforms the base-policy
recovery in all cases. Also, rollout with OCBA performs
competitively with respect to uniform rollout despite a
meagre simulation budget of 10% of that of uniform rollout. The
area under the recovery process in Fig. 4, as a result of
uniform rollout, is only marginally greater than that due
to rollout with OCBA. Note that after six days, OCBA
slightly outperforms uniform rollout because it prioritizes the
simulation budget on the most promising actions per stage.
Rollout exploits this behavior in each stage and gives a set
of sequential recovery decisions that further enhances the
outcome of the recovery. We would like to stress once
again that such an improvement is achieved at a
significantly lower simulation budget than that of uniform
rollout. Therefore, these two algorithms form a powerful
combination, where each algorithm consistently and
sequentially reinforces the performance of the other. Such
synergistic behavior of the combined approach is appealing.
Lastly, our simulation studies show that increments in the
simulation budget of rollout result in marginal performance
improvements. Beyond a certain point, the gain in performance might not
scale with the simulation budget expended. A possible explanation is that
a small increase in the simulation budget might not
dramatically change the approximation of the Q value function
associated with a state-action pair. Thus, π′ in (8) might not
show a drastic improvement compared to the one computed
with a lower simulation budget (policy improvement based on
a Q approximation that uses a lower simulation budget).

Fig. 4. Performance comparison of uniform rollout (TEA), rollout with
OCBA and base policy for 3 units of resources.

V. FUTURE WORK
For future work, we would like to leverage the availability
of multiple base policies in the aftermath of hazards and
incorporate parallel rollout into the solution method [30]. We
anticipate further improvements over the performance demonstrated
here when OCBA is fused with parallel rollout. In the future, we will
also present the inter-relationships among other critical infrastructure
systems, such as electrical power, roads, bridges, and water networks,
and the impact such dynamically interacting systems have on the
post-hazard recovery process. We are also interested in exploring the
social impact of the optimized recovery process. We will
examine how to incorporate meta-heuristics to guide the
stochastic search that determines the most promising actions
[31].
REFERENCES
[1] S. Nozhati, B. Ellingwood, H. Mahmoud, and J. van de Lindt,
“Identifying and analyzing interdependent critical infrastructure in
post-earthquake urban reconstruction,” in Proc. of the 11th Nat. Conf.
in Earthquake Eng. Los Angeles, CA: Earthquake Eng. Res. Inst.,
Jun 2018.
[2] T. Sun, Q. Zhao, and P. B. Luh, “A rollout algorithm for multichain
markov decision processes with average cost,” in Positive Syst., R. Bru
and S. Romero-Vivó, Eds. Springer Berlin Heidelberg, 2009.
[3] L. Péret and F. Garcia, “Online resolution techniques,” Markov Decision Processes in Artificial Intell., pp. 153–184, 2010.
[4] “The association of bay area governments (city of Gilroy annex).”
2011. [Online]. Available: http://resilience.abag.ca.gov/wp-content/
documents/2010LHMP/Gilroy-Annex-2011.pdf
[5] R. G. Semseler and T. Akel, “2010 urban water management
plan,” 2010. [Online]. Available: http://www.ci.gilroy.ca.us/265/
Water-Management-Plan
[6] N. A. Abrahamson, W. J. Silva, and R. Kamai, Update of the AS08
ground-motion prediction equations based on the NGA-West2 data set.
Pacific Earthquake Eng. Res. Center, 2013.
[7] H. MRl, “Multi-hazard loss estimation methodology: Earthquake
model.” [Online]. Available: https://www.hsdl.org/?view&did=11343
[8] T. Adachi and B. R. Ellingwood, “Serviceability assessment of a
municipal water system under spatially correlated seismic intensities,”
Comput.-Aided Civil and Infrastructure Eng., vol. 24, no. 4, pp. 237–
248, 2009.
[9] J. Eidinger et al., “Seismic fragility formulations for water systems,”
American Lifelines Alliance, G&E Eng. Syst. Inc., 2001. [Online].
Available: http://homepage.mac.com/eidinger
[10] M. Ouyang, L. Dueñas-Osorio, and X. Min, “A three-stage resilience
analysis framework for urban infrastructure systems,” Structural
Safety, vol. 36, pp. 23–31, 2012.
[11] H. Masoomi and J. W. van de Lindt, “Restoration and functionality
assessment of a community subjected to tornado hazard,” Structure
and Infrastructure Eng., vol. 14, no. 3, pp. 275–291, 2018.
[12] M. L. Puterman, Markov Decision Processes: Discrete Stochastic
Dynamic Programming, 1st ed. New York, NY, USA: John Wiley &
Sons, Inc., 1994.
[13] R. A. Howard, Dynamic Programming and Markov Processes. Cambridge, MA: MIT Press, 1960.
[14] R. Bellman, Dynamic Programming, 1st ed. Princeton, NJ, USA:
Princeton University Press, 1957.
[15] L. S. Shapley, “Stochastic games,” Proc. of the Nat. Academy of
Sci., vol. 39, no. 10, pp. 1095–1100, 1953. [Online]. Available:
http://www.pnas.org/content/39/10/1095
[16] L. Kallenberg, “Finite state and action MDPs,” in Handbook of Markov
Decision Processes. Springer, 2003, pp. 21–87.
[17] A. Fern, S. Yoon, and R. Givan, “Approximate policy iteration with a
policy language bias: Solving relational Markov decision processes,”
J. of Artificial Intell. Res., vol. 25, pp. 75–118, 2006.
[18] S. Nozhati, Y. Sarkale, B. Ellingwood, E. K. P. Chong,
and H. Mahmoud, “Near-optimal planning using approximate
dynamic programming to enhance post-hazard community resilience
management,” submitted for publication, vol. abs/1803.01451, 2018.
[Online]. Available: https://arxiv.org/abs/1803.01451
[19] S. Nozhati, B. Ellingwood, H. Mahmoud, Y. Sarkale, E. K. P. Chong,
and N. Rosenheim, “An approximate dynamic programming approach
to community recovery management,” in Eng. Mechanics Inst., 2018.
[20] D. P. Bertsekas and D. A. Castanon, “Rollout algorithms for stochastic
scheduling problems,” J. of Heuristics, vol. 5, no. 1, pp. 89–108, Apr
1999. [Online]. Available: https://doi.org/10.1023/A:1009634810396
[21] M. G. Lagoudakis and R. Parr, “Reinforcement learning as classification: Leveraging modern classifiers,” in Proc. of the 20th Int. Conf.
on Mach. Learn. (ICML-03), 2003, pp. 424–431.
[22] C. Dimitrakakis and M. G. Lagoudakis, “Algorithms and bounds for
rollout sampling approximate policy iteration,” in Recent Advances in
Reinforcement Learn., S. Girgin, M. Loth, R. Munos, P. Preux, and
D. Ryabko, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg,
2008, pp. 27–40.
[23] M. C. Fu, C. H. Chen, and L. Shi, “Some topics for simulation
optimization,” in 2008 Winter Simulation Conf., Dec 2008, pp. 27–
38.
[24] C. Dimitrakakis and M. G. Lagoudakis, “Rollout sampling
approximate policy iteration,” Mach. Learn., vol. 72, no. 3,
pp. 157–171, Sep 2008. [Online]. Available: https://doi.org/10.1007/
s10994-008-5069-3
[25] ——, “Algorithms and bounds for rollout sampling approximate
policy iteration,” CoRR, vol. abs/0805.2015, 2008. [Online]. Available:
http://arxiv.org/abs/0805.2015
[26] A. Lazaric, M. Ghavamzadeh, and R. Munos, “Analysis of
classification-based policy iteration algorithms,” J. of Mach. Learn.
Res., vol. 17, no. 19, pp. 1–30, 2016. [Online]. Available:
http://jmlr.org/papers/v17/10-364.html
[27] J. Branke, S. E. Chick, and C. Schmidt, “Selecting a selection
procedure,” Manage. Sci., vol. 53, no. 12, pp. 1916–1932, 2007.
[Online]. Available: https://doi.org/10.1287/mnsc.1070.0721
[28] C.-H. Chen, J. Lin, E. Yücesan, and S. E. Chick, “Simulation budget
allocation for further enhancing the efficiency of ordinal optimization,”
Discrete Event Dynamic Syst., vol. 10, no. 3, pp. 251–270, Jul 2000.
[Online]. Available: https://doi.org/10.1023/A:1008349927281
[29] C.-H. Chen, S. D. Wu, and L. Dai, “Ordinal comparison of heuristic
algorithms using stochastic optimization,” IEEE Trans. Robot. Autom.,
vol. 15, no. 1, pp. 44–56, Feb 1999.
[30] H. S. Chang, R. Givan, and E. K. P. Chong, “Parallel rollout for online
solution of partially observable markov decision processes,” Discrete
Event Dynamic Syst., vol. 14, no. 3, pp. 309–341, Jul 2004. [Online].
Available: https://doi.org/10.1023/B:DISC.0000028199.78776.c4
[31] A. Kaveh and N. Soleimani, “CBO and DPSO for optimum design
of reinforced concrete cantilever retaining walls,” Asian J. Civil Eng.,
vol. 16, no. 6, pp. 751–774, 2015.
Ultrafast photonic reinforcement learning based on laser
chaos
Makoto Naruse1, Yuta Terashima2, Atsushi Uchida2 & Song-Ju Kim3
1
Strategic Planning Department, National Institute of Information and Communications Technology,
4-2-1 Nukui-kita, Koganei, Tokyo 184-8795, Japan
2
Department of Information and Computer Sciences, Saitama University, 255 Shimo-Okubo,
Sakura-ku, Saitama city, Saitama 338-8570, Japan
3
WPI Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki,
Tsukuba, Ibaraki 305-0044, Japan
* Corresponding author. Email: [email protected]
ABSTRACT
Reinforcement learning involves decision making in dynamic and uncertain environments, and
constitutes one important element of artificial intelligence (AI). In this paper, we
experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently
solve the multi-armed bandit problem (MAB), which requires decision making concerning a
class of difficult trade-offs called the exploration–exploitation dilemma. To solve the MAB, a
certain degree of randomness is required for exploration purposes. However, pseudo-random
numbers generated using conventional electronic circuitry encounter severe limitations in
terms of their data rate and the quality of randomness due to their algorithmic foundations.
We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100
GSample/s, and combine it with a simple decision-making principle called tug-of-war with a
variable threshold, to ensure ultrafast, adaptive and accurate decision making at a maximum
adaptation speed of 1 GHz. We found that decision-making performance was maximized with
an optimal sampling interval, and we highlight the exact coincidence between the negative
autocorrelation inherent in laser chaos and decision-making performance. This study paves the
way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of
photons can provide new value.
INTRODUCTION
Unique physical attributes of photons have been utilized for information processing in the literature on
optical computing1. New photonic processing principles have recently emerged to solve complex
time-series prediction problems2-4, and issues in spatiotemporal dynamics5 and combinatorial
optimization6, which coincide with the rapid shift to the age of artificial intelligence (AI). These
novel approaches exploit the ultrahigh bandwidth attributes of photons and their enabling device
technologies2,3,6. This paper experimentally demonstrates the usefulness of ultrafast chaotic
oscillatory dynamics in semiconductor lasers for reinforcement learning, which is among the most
important elements in machine learning.
Reinforcement learning involves adequate decision making in dynamic and uncertain
environments7. It forms the foundation of a variety of applications, such as information
infrastructures8, online advertisements9, robotics10, transportation11, and Monte Carlo tree search12,
which is used in computer gaming13. A fundamental problem in reinforcement learning is known as the multi-armed bandit problem (MAB), where the goal is to maximize the total reward from multiple slot
machines, the reward probabilities of which are unknown7,14,15. To solve the MAB, one needs to
explore better slot machines. However, too much exploration may result in excessive loss, whereas
too quick a decision, or insufficient exploration, may lead to neglect of the best machine. There is a
trade-off, referred to as the exploration–exploitation dilemma7. A variety of algorithms for solving
the MAB have been proposed in the literature, such as ε-greedy14, softmax16, and upper confidence
bound17.
bound17.
These approaches typically involve probabilistic attributes, especially for exploration purposes.
While the implementation and improvements of such algorithms on conventional digital computing
are important for various practical applications, understanding their limitations and investigating
novel approaches are also important from the perspective of post-silicon computing. For example,
the pseudo-random number generation (RNG) used in conventional algorithmic approaches has
severe limitations, such as its data rate, due to the operating frequencies of digital processors (~ GHz
range). Moreover, the quality of randomness in RNG has serious limitations18. The usefulness of
photonic random processes for machine learning has also been discussed, utilizing multiple optical
scattering19.
We consider that directly utilizing irregular physical processes in nature is an exciting approach
toward realizing artificially constructed, physical decision-making machines20. Indeed, the
intelligence of slime moulds or amoebae, single-cell natural organisms, has been used in solution
searches, whereby complex inter-cellular spatiotemporal dynamics play a key role21. This stimulated
the subsequent discovery of a new decision-making principle called tug of war (TOW),
invented by Kim et al.22,23. The principle of the TOW method originated from the observation of
slime moulds: the dynamic expanding and shrinking of their bodies while maintaining a constant
intracellular resource volume allows them to collect environmental information, and the conservation
of the volume of their bodies entails a nonlocal correlation within the body. The fluctuation, or
probabilistic behaviour, in the body of amoebae is important for the exploration of better solutions.
The name “TOW” is a metaphor to represent such a nonlocal correlation while accommodating
fluctuation, which enhances decision-making performance23.
This principle can be adapted to photonic processes. In past research, we experimentally
exhibited physical decision making based on near-field-mediated optical excitation transfer at the
nanoscale20,24 and with single photons25. These former studies pursued the ultimate physical
attributes of photons in terms of diffraction limit-free spatial resolutions and energy efficiency by
near-field photons26,27, and the quantum attributes of single-light quanta28. The nonlocal aspect of
TOW is directly physically represented by the wave nature of a single photon or an exciton polariton,
whereas fluctuation is also directly represented by their intrinsic probabilistic attributes. However,
the fluctuations are limited by practical limitations on the measurements and control systems (second
range in the worst case) as well as the single-photon generation rate (kHz range).
The ultrafast, high-bandwidth aspect of photons is another promising physical platform for
TOW-based decision making to complement diffraction-limit-free and low-energy near-field photon
approaches as well as quantum-level single-photon strategies. As demonstrated below, the chaotic
oscillatory dynamics of lasers, which contain negative autocorrelation, experimentally enable 1
GHz decision making. In addition to the resultant speed merit, it should be emphasized that the
technological maturity of ultrafast photonic devices allows for relatively easy and scalable system
implementation through commercially available photonic devices. Furthermore, the applications of
the proposed ultrafast photonics-based, and the former near-field/single-photon-based, decision
making are complementary: the former targets high-end, data centre scenarios by highlighting
ultrafast performance, whereas the latter appeals to low-energy, Internet-of-Things-related (IoT)29,
and security30 applications.
In this study, we demonstrate ultrafast reinforcement learning based on chaotic oscillatory
dynamics in semiconductor lasers31-34 that yields adaptation from zero-prior knowledge at a gigahertz
(GHz) range. The randomness is based on complex dynamics in lasers32-34, and its resulting speed is
unachievable in other mechanisms, at least through technologically reasonable means. We
experimentally show that ultrafast photonics has significant potential for reinforcement learning. The
proposed principles using ultrafast temporal dynamics can be matched to applications including an
arbitration of resources at data centres35 and high-frequency trading36, where decision making is
required at least within milliseconds, and other such high-end utilities. Scientifically, this study paves
the way toward understanding the physical origin of the enhancement of intelligent abilities
(here, reinforcement learning) when natural processes (here, laser chaos) are coupled
with external systems; this is what we call natural intelligence.
Chaotic dynamics in lasers have been examined in the literature32-34, and their applications have
exploited the ultrafast attributes of photonics for secure communication37-39, random number
generation31,40,41, remote sensing42, and reservoir computing2-4. Reservoir computing is a type of
neural network, similar to deep learning13, that has been intensively studied to provide recognition- and prediction-related functionalities. The reinforcement learning described in this study differs
completely from reservoir computing from the perspective that neither a virtual network nor machine
learning for output weights is required. However, it should be noted that reinforcement learning is
important in complementing the capabilities of neural networks, indicating the potential for the
fusion of photonic reservoir computing with photonic reinforcement learning in future work.
Principle of reinforcement learning
For the simplest case that preserves the essence of solving the MAB, we consider a player who
selects one of two slot machines, called slot machines 1 and 2 hereafter, with the goal of maximizing
reward (known as the two-armed bandit problem). Denoting the reward probabilities of the slot
machines by Pi (i = 1, 2), the problem is to select the machine with the higher reward probability.
The amount of reward dispensed by each slot machine for a play is assumed to be the same in this
study.
The measured chaotic signal s(t) is subjected to the threshold adjuster (TA), according to the
TOW principle. The output of the TA is immediately the decision concerning the slot machine to
choose. If s(t) is equal to or greater than the threshold value T(t), the decision is made to select slot
machine 1. Otherwise, the decision is made to select slot machine 2. The reward—the win/lose
information of a slot machine play—is fed back to the TA.
The chaotic signal level s(t) is compared with the threshold value T(t), given by

    T(t) = k · ⌊TA(t)⌉,   (1)

where TA(t) is the threshold adjuster value at cycle t, ⌊TA(t)⌉ denotes the nearest integer to TA(t)
rounded toward zero, and k is a constant determining the range of the resultant T(t). In this study, we
assumed that ⌊TA(t)⌉ takes the values −N, …, −1, 0, 1, …, N, where N is a natural number. Hence
the number of thresholds is 2N + 1, referred to as the TA's resolution. The range of T(t) is limited
between −kN and kN by setting T(t) = kN when ⌊TA(t)⌉ is greater than N, and
T(t) = −kN when ⌊TA(t)⌉ is smaller than −N.
If the selected slot machine yields a reward at cycle t (in other words, wins the slot machine
play), the TA value is updated at cycle t + 1 according to

    TA(t + 1) = β · TA(t) − Δ   if slot machine 1 wins,
    TA(t + 1) = β · TA(t) + Δ   if slot machine 2 wins,   (2)

where β is referred to as the forgetting (memory) parameter20 and Δ is the constant increment (in
this experiment, Δ = 1 and β = 0.999). In this study, the initial TA value was zero. If the selected
machine does not yield a reward (or loses the slot machine play), the TA value is updated by

    TA(t + 1) = β · TA(t) + Ω   if slot machine 1 fails,
    TA(t + 1) = β · TA(t) − Ω   if slot machine 2 fails,   (3)

where Ω is the increment parameter defined below. Intuitively speaking, the TA takes a smaller
value if slot machine 1 is considered more likely to win, and a greater value if slot machine 2 is
considered more likely to earn the reward. This is as if the TA value is being pulled by the two slot
machines at both ends, which coincides with the notion of a tug of war.
The fluctuation, necessary for exploration, is realized by associating the TA value with the
threshold of digitization of the chaotic signal train. If the chaotic signal level s (t ) is equal to or
greater than the assumed threshold T (t ) , the decision is immediately made to choose slot machine 1;
otherwise, the decision is made to select slot machine 2. Initially, the threshold is zero; hence, the
probability of choosing either slot machine 1 or 2 is 0.5. As time elapses, the TA value shifts
(becomes positive or negative) towards the slot machine with the higher reward probability based on
the dynamics shown in Eqs. (2) and (3). We should note that due to the irregular nature of the
incoming chaotic signal, the possibility of choosing the opposite machine is not zero, which is a
critical feature of exploration in reinforcement learning. For example, even when the TA value is
sufficiently small (meaning that slot machine 1 seems highly likely to be the better machine), the
probability of the decision to choose slot machine 2 is not zero.
In TOW-based decision making, the increment parameter Ω in Eq. (3) is determined based
on the history of betting results. Let the number of times slot machine i has been selected up to cycle t be
Si, and the number of wins obtained in selecting slot machine i be Li. The estimated reward probabilities of
slot machines 1 and 2 are given by

    P̂1 = L1 / S1,   P̂2 = L2 / S2.   (4)

Ω is then given by

    Ω = (P̂1 + P̂2) / (2 − (P̂1 + P̂2)).   (5)

The initial Ω is assumed to be unity, and a constant value is assumed when the denominator of Eq.
(5) is zero. The detailed derivation of Eq. (5) is shown in Ref. 23.
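A minimal software emulation of Eqs. (1)–(5) may be sketched as follows. This is our
illustration only: pseudo-random samples stand in for the measured chaotic signal, the two slot
machines are emulated in software, and the 0.5 priors and the fallback value Ω = 1 (for the cases
left open above) are our assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    P = [0.8, 0.2]                 # hidden reward probabilities (emulated machines)
    k, N = 0.02, 10                # threshold scale and TA resolution bound
    beta, delta = 0.999, 1.0       # forgetting parameter and constant increment
    S, L = [0, 0], [0, 0]          # plays and wins per machine
    TA = 0.0

    def omega():
        # Eq. (5); the 0.5 priors and the fallback value 1.0 are our assumptions.
        p1 = L[0] / S[0] if S[0] else 0.5
        p2 = L[1] / S[1] if S[1] else 0.5
        s = p1 + p2
        return 1.0 if s >= 2.0 else s / (2.0 - s)

    for t in range(4000):
        T = k * max(-N, min(N, int(TA)))        # Eq. (1); int() rounds toward zero
        s_t = rng.normal(0.0, 0.1)              # stand-in for the chaos sample s(t)
        m = 0 if s_t >= T else 1                # decision: machine 1 or 2
        S[m] += 1
        win = rng.random() < P[m]
        L[m] += win
        if win:
            TA = beta * TA + (-delta if m == 0 else delta)       # Eq. (2)
        else:
            TA = beta * TA + (omega() if m == 0 else -omega())   # Eq. (3)

    print("final TA:", TA, "-> prefers machine", 1 if TA < 0 else 2)

As TA drifts negative, the threshold moves down and machine 1 is chosen more often,
exactly the tug-of-war behaviour described above.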
RESULTS
The architecture of laser chaos-based reinforcement learning is schematically shown in Fig. 1a. A
semiconductor laser is coupled with a polarization-maintaining (PM) coupler. The emitted light is
incident on a variable fibre reflector, by which delayed optical feedback is supplied to the laser,
leading to laser chaos40. The output light at the other end of the PM coupler is detected by a high-speed, AC-coupled photodetector through an optical isolator (ISO) and an attenuator, and is sampled
by a high-speed digital oscilloscope at a rate of 100 GSample/s (a 10-ps sampling interval). The
detailed specifications of the experimental apparatus are described in the Methods section.
Figure 1b shows an example of the chaotic signal train. Figures 1c and 1d show the optical and
radio frequency (RF) spectra of laser chaos measured by optical and RF spectrum analysers,
respectively. The semiconductor laser was operated at a centre wavelength of 1547.782 nm. The
standard bandwidth31 of the RF spectrum was estimated as 10.8 GHz. Figure 1e summarizes the
histogram of the signal levels of the chaotic trains, which spanned from −0.2 to 0.2, with the level
zero at its maximum incidence, and slightly skewed between the positive and negative sides. We
remark that the small incidence peak at the lowest measured amplitude is due to our experimental
apparatus (not to the laser), and it does not critically affect the present study. The rounded TA
values, ⌊TA(t)⌉, are also schematically illustrated at the right-hand side of Fig. 1b and the upper side
of Fig. 1e, assuming N = 10. This means that ⌊TA(t)⌉ ranges from −10 to 10. The signal value s(t)
spans between approximately −0.2 and 0.2. The particular example of TA values shown in Figs. 1b
and 1e shows the case when the constant k in Eq. (1) is given by 0.02, so that the actual threshold
T(t) spans from −0.2 to 0.2. In the experimental demonstration of this study, the TA and the slot
machines were emulated in offline processing, whereas online processing is technologically
feasible due to the simple procedure of the TA (remarks are given in the Discussion section).
[EXPERIMENT-1] Adaptation to sudden environmental changes
We first solved the two-armed bandit problems given by the following two cases, where the reward
probabilities {P1, P2} were given by {0.8, 0.2} and {0.6, 0.4}. We assumed that the sum of the reward
probabilities P1 + P2 was known prior to the slot machine plays. With this knowledge, Ω is unity by Eq.
(5).
The slot machine was consecutively played 4,000 times, and this play was repeated 100 times.
The red and blue curves in Fig. 2a show the evolution of the correct decision rate (CDR), defined by
the ratio of the number of selections of the machines that yielded the higher reward probability at
cycle t in 100 trials, with respect to the probability combination of {0.8, 0.2} and {0.6, 0.4} . The
chaotic signal was sampled every 10 ps; hence, the total duration of the 4,000 plays of the slot
machine was 40 ns. In order to represent sudden environmental changes (or uncertainty), the reward
probability was forcibly interchanged every 10 ns, or every 1,000 plays (for example,
{P1, P2} = {0.8, 0.2} was reconfigured to {P1, P2} = {0.2, 0.8}). The resolution of the TA was set to 5 (i.e.,
N = 2).
We observed that the CDR curves quickly approached unity, even after sudden changes in the
reward probabilities, showing successful decision making. The adaptation was steeper in the case of
a reward probability combination of {0.8, 0.2} than of {0.6, 0.4}, since the difference in the
reward probabilities was greater in the former case (0.8 − 0.2 = 0.6) than in the latter (0.6 − 0.4 = 0.2).
This meant that decision making was easier. The red and blue curves in Fig. 2b represent the
evolution of TA values (⌊TA(t)⌉) in the cases of {0.8, 0.2} and {0.6, 0.4}, respectively, where the TA
value grew beyond 2 or −2; hence, it took long for the TA values to be inverted to the other
polarity following an environmental change. This is the origin of some delay in the CDR in responding
to the environmental changes observed in Fig. 2a. One simple method of improving adaptation is to
limit the maximum and minimum values of ⌊TA(t)⌉. The forgetting parameter β is also crucial to
improving the adaptation speed.
Figures 2c, 2d and 2e characterize decision-making performance dependencies with respect to
the configuration of the TA. Figure 2c concerns TA resolutions, demonstrated by the seven curves therein,
with corresponding TA resolutions of 5 to 255; the adaptation was quicker at lower
resolutions. Figure 2d considers TA range dependencies: while keeping the centre of the TA
value at zero and the TA resolution at 5, the three curves in Fig. 2d compare CDRs for TA ranges of
0.1, 0.2 and 0.4. We observed that full coverage (0.4) of the chaotic signal yields the best
performance. Figure 2e shows the TA centre value dependencies while maintaining a TA range of
0.4 and a resolution of 5, where we can clearly observe the deterioration of CDRs as the
centre of the TA shifts away from zero.
[EXPERIMENT-2] Adaptation from zero prior knowledge
Here we considered decision-making problems without any prior knowledge of the slot machines.
Hence, the parameter Ω needed to be updated. The reward probabilities of the slot machines were set to
{P1, P2} = {0.5, 0.1}, and 50 consecutive plays were executed. The colour curves in Fig. 3a depict
CDRs at six sampling intervals of the chaotic signal trains, from 10 to 400 ps. Hence, the time needed to
complete the 50 consecutive slot machine plays differed among these; it ranged from
10 ps × 50 = 500 ps to 400 ps × 50 = 20 ns. Moreover, the black curve shows the CDR obtained by
uniformly distributed pseudo-random numbers generated by the Mersenne Twister for the random
signal source, instead of experimentally observed chaotic signals. We observed that CDRs based on
chaotic signals exhibited more rapid adaptation than the uniformly distributed pseudo-random
numbers.
The most prompt adaptation, or the optimal performance, was obtained at a particular sampling
interval. Figure 3b compares CDRs at cycle t = 3; the sampling interval of 50 ps yielded the best
performance, indicating that the original chaotic dynamics of the laser could physically be optimized
such that the most prompt decision-making is realized. Indeed, the autocorrelation of the laser chaos
signal trains was evaluated as shown in Fig. 3c, its negative maximum value is taken when the time
lag is given by 5 or 5, corresponding exactly to the sampling interval of 50 ps ( 5 10 ps ). In other
words, the negative correlation of chaotic dynamics enhanced the exploration ability for decision
making. Furthermore, this finding suggests that optimal performance is obtained at the maximum
sampling rate (or data rate) by physically tuning the dynamics of the original laser chaos, which will
be an important and exciting topic for future investigation. The adaptation speed of decision making
was estimated as 1 GHz in this optimal case, where CDR was larger than 0.95 at 20 cycles with a 50ps sampling interval (20 GSamples/s) in Fig. 3a (1 GHz = (50 ps 20 cycles)1).
Furthermore, we characterized CDRs with normally distributed random numbers (referred to as
RANDN) in order to ensure that the statistical incidence patterns of the laser chaos, which were
similar to a normal distribution shown in Fig. 1e, were not the origins of the fast adaptation of the
decision making. By keeping the mean value of RANDN at zero, the standard deviation ( ) was
configured as 0.2, 0.1 and 0.01. As shown in Fig. 4, the CDRs at cycle t 3 by RANDN were
inferior to chaotic signals and the uniformly distributed random numbers (denoted by RAND).
Moreover, the CDR was evaluated by surrogating the chaotic signal time series sampled at 50 ps
intervals, which resulted in poorer performance than the original, as shown in Fig. 4. These
evaluations support the claim that laser chaos is beneficial to the performance of reinforcement
learning, in addition to its ultrafast data rate for random signals.
DISCUSSION
The performance enhancement of decision making by chaotic laser dynamics has been demonstrated,
and the impact of negative autocorrelation is clearly suggested. A deeper understanding of the
relation between chaotic oscillatory dynamics and decision making is an important part of future
research. The first aspect concerns physical insights. Toomey et al. recently showed that the
complexity of laser chaos varies within the coherence collapse region of the given system43. The
level of the optical feedback and the injection current of the laser become important parameters in
determining the complexity of chaos, which is the entrance to more thorough insights. We also
consider the use of the bandwidth enhancement technique31 with optically injected lasers to improve
the adaptation speed of decision making beyond tens of GHz. Meanwhile, besides the negative
autocorrelation inherent in laser chaos, other perspectives such as diffusivity44 and Hurst
exponents45 could address the underlying mechanism.
In the experimental demonstration, the nonlocal aspect of the TOW principle was not directly
physically embodied in the chaotic oscillatory dynamics of the lasers. By combining the chaotic
dynamics with the threshold adjuster, the nonlocal and fluctuation properties of the TOW principle
emerged, which had not been completely realized in the literature nor in our past experimental
studies24,25. Such a hybrid realization of nonlocality in TOW leads to a higher likelihood of
technological implementability, as well as better scalability and extension to higher-grade problems.
Online post-processing can become feasible through electronic circuitry, as already demonstrated for a
random-bit generator41,46, since the post-processing is very simple, as described in Eqs. (1) to (5). For
scalability, a variety of approaches can be considered, such as time-domain multiplexing, which
exploits the ultrafast attributes of chaotic lasers and is a frequently used strategy in ultrafast photonic
systems. Introducing multi-threshold values31,41 is another simple extension of our proposed scheme.
Whole-photonic realization is an interesting issue to explore, and has already been implied by
the analysis where the time-domain correlation of laser chaos strongly influences decision-making
performances. The mode dynamics of multi-mode lasers47 are very promising for the implementation
of nonlocal properties of fully photonic systems required for decision making. Synchronization and
its clustering properties in coupled laser networks48,49 are also interesting approaches to physically
realizing the nonlocality of the TOW. These systems can automatically tune the optimal settings for
decision making (e.g., negative autocorrelation properties), which can lead to autonomous photonic
intelligence.
We also make note of the extension of the principle demonstrated in this paper to higher-grade machine learning problems. The competitive MAB50 with multiple players is an exciting topic
for photonic intelligence research as it involves the so-called Nash equilibrium, and is the foundation
of such important applications as resource allocation and social optimization. Investigating the
possibilities of extending the present method and utilizing ultrafast laser dynamics for competitive
MAB is highly interesting.
Conclusion
We experimentally established that laser chaos provides ultrafast reinforcement learning and
decision making. The adaptation speed of decision making reached 1 GHz in the optimal case with
the sampling rate of 20 GSample/s (50-ps decision-making intervals) using the ultrafast dynamics
inherent in laser chaos. The maximum adaptation performance coincided with the negative
maximum of the autocorrelation of the original time-domain laser chaos sequences, demonstrating
the strong impact of chaotic lasers on decision making. The origin of the performance was also
validated by comparing with uniformly and normally distributed pseudo-random numbers as well as
surrogated arrangements of original chaotic signal trains. This study is the first demonstration of
ultrafast photonic reinforcement learning or decision making, to the best of our knowledge, and
paves the way for research on photonic intelligence and new applications of chaotic lasers in the
realm of artificial intelligence.
METHODS
Optical system
The laser used in the experiment was a distributed-feedback (DFB) semiconductor laser mounted on
a butterfly package with optical fibre pigtails (NTT Electronics, KELD1C5GAAA). The injection
current of the semiconductor laser was set to 58.5 mA (5.85 Ith), where the lasing threshold Ith was
10.0 mA. The relaxation oscillation frequency of the laser was 6.5 GHz. The temperature of the
semiconductor laser was set to 294.86 K. The laser output power was 13.2 mW. The laser was
connected to a variable fibre reflector, which reflected a fraction of light back into the laser, inducing
high-frequency chaotic oscillations of optical intensity32-34. 1.9 % of the laser output power was fed
back to the laser cavity from the reflector. The fibre length between the laser and the reflector was
4.55 m, corresponding to the feedback delay time (round trip) of 43.8 ns. Polarization-maintaining
fibres were used for all optical fibre components. The optical output was converted to an electronic
signal by a photodetector (New Focus, 1474-A, 35 GHz bandwidth) and sampled by a digital
oscilloscope (Tektronics, DPO73304D, 33 GHz bandwidth, 100 GSample/s, eight-bit vertical
resolution). The RF spectrum of the laser was measured by an RF spectrum analyser (Agilent,
N9010A-544, 44 GHz bandwidth). The optical wavelength of the laser was measured by an optical
spectrum analyser (Yokogawa, AQ6370C-20).
Data analysis
[EXPERIMENT-1] A chaotically oscillating signal train was sampled at a rate of 100 GSample/s over
9,999,994 points, which lasted approximately 100 μs. As described in the main text, 4,000
consecutive plays were repeated 100 times; hence, the total number of slot machine plays was
400,000. With a 10-ps interval sampling, the initial 400,000 points of the chaotic signal were used
for the decision-making experiments. The post-processing required for 400,000 iterations of slot
machine plays was approximately 1.2 s (or 3.0 μs/decision) on a normal-grade personal computer
(Panasonic, CF-SX3, 16 GB RAM, Windows 7, MATLAB R2011b).
[EXPERIMENT-2]
(1) Sampling methods: A chaotic signal train was sampled at 10-ps intervals with 9,999,994
sampling points. Such a train was measured 120 times. Each chaotic signal train is referred to
as chaos_i, and there were 120 such trains: i = 1, …, 120. In demonstrating 10 × M ps
sampling intervals, where M is a natural number ranging from 1 to 40 (namely, sampling
intervals of 10 ps, 20 ps, …, 400 ps), we chose one of every M samples from the original
sequence.
(2) Evaluation of the CDR for a specific chaos sequence: For every chaotic signal train
chaos_i, 50 consecutive plays were repeated 200 times. Consequently, 10,000 points were used
from chaos_i. Such evaluations were repeated 100 times. Hence, 1,000,000 slot machine plays
were conducted in total. These CDRs were calculated for all signal trains chaos_i (i = 1, …, 120).
(3) Evaluation of the CDR over all chaotic sequences: We evaluated the average CDR over all
chaotic signal trains (i = 1, …, 120) derived in (2) above; these are the results discussed in the
main text.
(4) Autocorrelation of chaotic signals: The autocorrelation was computed based on all
9,999,994 sampling points of chaos_i, and was evaluated for all chaos_i (i = 1, …, 120). The
autocorrelation shown in Fig. 3c was evaluated as the average over these 120
autocorrelations.
(5) Surrogate methods: The surrogate time series of the original chaotic sequences were generated
by the randperm function in MATLAB, which is based on sorting pseudo-random
numbers generated by the Mersenne Twister.
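In Python, the same surrogate construction can be sketched as follows (an illustrative
stand-in for MATLAB's randperm; the Gaussian "train" is a placeholder for a measured
chaos_i sequence, not the actual data):

    import numpy as np

    # The permutation preserves the amplitude histogram of Fig. 1e but
    # destroys the temporal correlation structure exploited by the decision maker.
    rng = np.random.default_rng(2)
    train = rng.normal(size=9_999_994)     # placeholder for one recorded train
    surrogate = rng.permutation(train)     # same samples, shuffled order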
Data availability. The data sets generated during the current study are available from the
corresponding author on reasonable request.
References
1. Jahns, J. & Lee, S. H. Optical Computing Hardware. (Academic Press, San Diego, 1994).
2. Larger, L., et al. Photonic information processing beyond Turing: an optoelectronic
implementation of reservoir computing. Opt. Express 20, 3241–3249 (2012).
3. Brunner, D., Soriano, M. C., Mirasso, C. R. & Fischer, I. Parallel photonic information
processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1364 (2013).
4. Vandoorne, K., et al. Experimental demonstration of reservoir computing on a silicon photonics
chip. Nat. Commun. 5, 3541 (2014).
5. Tsang, M. & Psaltis, D. Metaphoric optical computing of fluid dynamics.
arXiv:physics/0604149v1 (2006).
6. Inagaki, T. et al. A coherent Ising machine for 2000-node optimization problems. Science
10.1126/science.aah4243 (2016).
7. Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (The MIT Press,
Massachusetts, 1998).
8. Awerbuch, B. & Kleinberg, R. Online linear optimization and adaptive routing. J. Comput. Syst.
Sci. 74, 97–114 (2008).
9. Agarwal, D., Chen, B. -C. & Elango, P. Explore/exploit schemes for web content optimization.
Proc. of ICDM 1–10 (2009). DOI: 10.1109/ICDM.2009.52
10. Kroemer, O. B., Detry, R., Piater, J. & Peters, J. Combining active learning and reactive control
for robot grasping. Robot. Auton. Syst. 58, 1105–1116 (2010).
11. Cheung, M. Y., Leighton, J. & Hover, F. S. Multi-armed bandit formulation for autonomous
mobile acoustic relay adaptive positioning. In 2013 IEEE Intl. Conf. Robot. Auto. 4165–4170
(2013).
12. Kocsis, L. & Szepesvári, C. Bandit based Monte Carlo planning. Machine Learning: ECML
(2006), LNCS 4212, 282–293 (2006). DOI: 10.1007/11871842_29
13. Silver, D., et al. Mastering the game of Go with deep neural networks and tree search. Nature
529, 484–489 (2016).
14. Robbins, H. Some aspects of the sequential design of experiments. B. Am. Math. Soc. 58, 527–
535 (1952).
15. Lai, T. L. & Robbins, H. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math. 6,
4–22 (1985).
16. Daw, N., O’Doherty, J., Dayan, P., Seymour, B. & Dolan, R. Cortical substrates for exploratory
decisions in humans. Nature 441, 876–879 (2006).
17. Auer, P., Cesa-Bianchi, N. & Fischer, P. Finite-time analysis of the multi-armed bandit problem.
Machine Learning 47, 235–256 (2002).
18. Murphy, T. E. & Roy, R., The world’s fastest dice, Nat. Photon. 2, 714–715 (2008).
19. Saade, A., et al., Random projections through multiple optical scattering: Approximating Kernels
at the speed of light. In IEEE International Conference on Acoustics, Speech and Signal
Processing, March 20–25, 2016, Shanghai, China 6215–6219 (IEEE, 2016).
20. Kim, S. -J., Naruse, M., Aono, M., Ohtsu, M. & Hara, M. Decision Maker Based on Nanoscale
Photo-Excitation Transfer. Sci Rep. 3, 2370 (2013).
21. Nakagaki, T., Yamada, H., & Tóth, Á. Intelligence: Maze-solving by an amoeboid organism.
Nature 407, 470–470 (2000).
22. Kim, S. -J., Aono, M. & Hara, M. Tug-of-war model for the two-bandit problem:
Nonlocally-correlated parallel exploration via resource conservation. BioSystems 101, 29–36
(2010).
23. Kim, S. -J., Aono, M. & Nameda, E. Efficient decision-making by volume-conserving physical
object. New J. Phys. 17, 083023 (2015).
24. Naruse, M., et al. Decision making based on optical excitation transfer via near-field interactions
between quantum dots. J. Appl. Phys. 116, 154303 (2014).
25. Naruse, M., et al. Single-photon decision maker. Sci. Rep. 5, 13253 (2015).
26. Pohl, D. W. & Courjon, D. Near Field Optics (Kluwer, The Netherlands, 1993).
27. Naruse, M., Tate, N., Aono, M. & Ohtsu, M. Information physics fundamentals of
nanophotonics. Rep. Prog. Phys. 76, 056401 (2013).
28. Eisaman, M. D., Fan, J., Migdall, A. & Polyakov, S. V. Single-photon sources and detectors.
Rev. Sci. Instrum. 82, 071101 (2011).
29. Kato, H., Kim, S.-J., Kuroda, K., Naruse, M. & Hasegawa, M. The Design and Implementation
of a Throughput Improvement Scheme based on TOW Algorithm for Wireless LAN. In Proc. 4th
Korea-Japan Joint Workshop on Complex Communication Sciences, January 12-13, Nagano,
Japan J13 (IEICE, 2016).
30. Naruse, M., Tate, N. & Ohtsu, M. Optical security based on near-field processes at the nanoscale.
J. Optics 14, 094002 (2012).
31. Sakuraba, R., Iwakawa, K., Kanno, K. & Uchida, A. Tb/s physical random bit generation with
bandwidth-enhanced chaos in three-cascaded semiconductor lasers. Opt. Express 23, 1470–1490
(2015).
32. Soriano, M. C., García-Ojalvo, J., Mirasso, C. R. & Fischer, I. Complex photonics: Dynamics
and applications of delay-coupled semiconductors lasers. Rev. Mod. Phys. 85, 421–470 (2013).
33. Ohtsubo, J. Semiconductor lasers: stability, instability and chaos (Springer, Berlin, 2012).
34. Uchida, A. Optical communication with chaotic lasers: applications of nonlinear dynamics and
synchronization (Wiley-VCH, Weinheim, 2012).
35. Bilal, K., Malik, S. U. R., Khan, S. U. & Zomaya, A. Y. Trends and challenges in cloud
datacenters. IEEE Cloud Computing 1, 10–20 (2014).
36. Brogaard, J., Hendershott, T. & Riordan, R. High-frequency trading and price discovery. Rev.
Financ. Stud. 27, 2267–2306 (2014).
37. Colet, P. & Roy, R. Digital communication with synchronized chaotic lasers. Opt. Lett. 19,
2056–2058 (1994).
38. Argyris, A., et al. Chaos-based communications at high bit rates using commercial fibre-optic
links. Nature 438, 343–346 (2005).
39. Annovazzi-Lodi, V., Donati, S. & Scire, A. Synchronization of chaotic injected-laser systems
and its application to optical cryptography. IEEE J Quantum Electron. 32, 953–959 (1996).
40. Uchida, A., et al. Fast physical random bit generation with chaotic semiconductor lasers. Nat.
Photon. 2, 728–732 (2008).
41. Kanter, I., Aviad, Y., Reidler, I., Cohen, E. & Rosenbluh, M. An optical ultrafast random bit
generator. Nat. Photon. 4, 58–61 (2010).
42. Lin, F.-Y. & Liu, J.-M. Chaotic lidar. IEEE J. Sel. Top. Quantum Electron. 10, 991-997 (2004).
43. Toomey, J. P. & Kane, D. M. Mapping the dynamic complexity of a semiconductor laser with
optical feedback using permutation entropy. Opt. Express 22, 1713–1725 (2014).
44. Kim, S. -J., Naruse, M., Aono, M., Hori, H. & Akimoto, T. Random walk with chaotically driven
bias. Sci. Rep. 6, 38634 (2016).
45. Lam, W. S., Ray, W., Guzdar, P. N. & Roy, R. Measurement of Hurst exponents for
semiconductor laser phase dynamics. Phys. Rev. Lett. 94, 010602 (2005).
46. Honjo, T., et al. Differential-phase-shift quantum key distribution experiment using fast physical
random bit generator with chaotic semiconductor lasers. Opt. Express 17, 9053–9061 (2009).
47. Aida, T. & Davis, P. Oscillation mode selection using bifurcation of chaotic mode transitions in a
nonlinear ring resonator. IEEE J. Quantum Electron. 30, 2986–2997 (1994).
48. Nixon, M., Fridman, M., Ronen, E., Friesem, A. A., Davidson, N. & Kanter, I. Controlling
synchronization in large laser networks. Phys. Rev. Lett. 108, 214101 (2012).
49. Williams, C. R. S., Murphy, T. E., Roy, R., Sorrentino, F., Dahms, T. & Schöll, E. Experimental
observations of group synchrony in a system of chaotic optoelectronic oscillators. Phys. Rev.
Lett. 110, 064104 (2013).
50. Kim, S. -J., Naruse, M. & Aono, M. Harnessing the Computational Power of Fluids for
Optimization of Collective Decision Making. Philosophies Special Issue “Natural Computation:
Attempts in Reconciliation of Dialectic Oppositions” 1, 245–260 (2016).
Acknowledgements
This work was supported in part by the Core-to-Core Program, A. Advanced Research Networks
from the Japan Society for the Promotion of Science and Grants-in-Aid for Scientific Research from
Japan Society for the Promotion of Science.
Figure 1 | Architecture of photonic reinforcement learning based on laser chaos. (a)
Architecture and experimental configuration of laser chaos-based reinforcement learning. Ultrafast
chaotic optical signal is subjected to the tug-of-war (TOW) principle that determines the selection of
slot machines. ISO: optical isolator. (b) An example of chaotic signal trains sampled at 100
GSample/s. The signal level is subjected to the threshold adjuster (TA) for decision making. (c)
Optical spectrum and (d) RF spectrum of the laser chaos used in the experiment. (e) Incidence
statistics (histogram) of the signal level of the laser chaos signal.
Figure 2 | Reinforcement learning in dynamically changing environments. (a) Evolution
of the correct decision rate (CDR) when the reward probabilities of the two slot machines are {0.8,
0.2} and {0.6, 0.4}. Knowledge of the sum of the reward probabilities, which is unity in these
cases, is assumed to be given. The reward probability is intentionally swapped every 10 ns in order
to represent sudden environmental changes or uncertainty. Rapid and adequate adaptation is
observed in both cases. (b) Evolution of the threshold adjuster (TA) value underlying correct
decision making. (c-e) CDR performance dependency on the setting of TA. (c) TA resolution
dependency. (d) TA range dependency. (e) TA centre value dependency.
Figure 3 | Reinforcement learning from zero prior knowledge. (a) Evolution of CDR with
different sampling intervals of the laser chaos signal (10, 20, 30, 40, 50, and 400 ps) and uniformly
distributed pseudo-random numbers. CDR exhibits prompt adaptation when the sampling interval is
50 ps. (b) CDR is evaluated as a function of the sampling interval from 10 ps to 400 ps, where the
maximum performance is obtained at 50 ps. (c) Autocorrelation of the laser chaos signals exhibits its
negative maximum when the time lag is 5 or −5, which exactly coincides with the fact that the
optimal adaptation is realized at 50-ps (10 ps × 5) sampling intervals.
Figure 4 | Comparison of learning performance in laser chaos, uniformly and
normally distributed pseudo-random numbers, and surrogate laser chaos signals.
The laser chaos sampled at 50-ps intervals exhibits the best performance compared with the other
cases, indicating that the dynamics of laser chaos affect reinforcement learning ability.
Parametric Strategy Iteration
Thomas M. Gawlitza1 , Martin D. Schwarz2 , and Helmut Seidl2
1
Carl von Ossietzky Universität Oldenburg, Ammerländer Heerstraße 114-118, D-26129 Oldenburg, Germany
[email protected]
2
Technische Universität München, Boltzmannstraße 3, D-85748 Garching, Germany
{schwmart,seidl}@in.tum.de
arXiv:1406.5457v1 [] 19 Jun 2014
Abstract
Program behavior may depend on parameters, which are either configured before compilation
time, or provided at runtime, e.g., by sensors or other input devices. Parametric program analysis
explores how different parameter settings may affect the program behavior.
In order to infer invariants depending on parameters, we introduce parametric strategy iteration.
This algorithm determines the precise least solution of systems of integer equations depending on
surplus parameters. Conceptually, our algorithm performs ordinary strategy iteration on the given
integer system for all possible parameter settings in parallel. This is made possible by means of
region trees to represent the occurring piecewise affine functions. We indicate that each required
operation on these trees is polynomial-time if only constantly many parameters are involved.
Parametric strategy iteration for systems of integer equations allows us to construct parametric integer interval
analysis as well as parametric analysis of differences of integer variables. It thus provides a general technique for
realizing precise parametric program analysis whenever numerical properties of integer variables are of concern.
1 Introduction
Since the very beginnings of linear programming, parametric versions of linear programming (LP for
short) have already been of interest (see, e.g., [6, 8] for an overview and [16] for recent algorithms).
Parametric LP can be applied to answer questions such as: how much does the result of the analysis
(optimal value/solution) depend on specific parameters? What is the precise dependency between a
parameter and the result? In which regions of parameter values do these dependencies significantly
change? Such types of sensitivity and mode analyses are important in order to obtain a better understanding of the problem under consideration and of its analysis. Sensitivity and mode questions
equally apply to programs whose behavior depends on parameters. Such parameters either could be
provided at configuration time by engineers, or at runtime, e.g., through sensor data or other kinds of
input. The goal then is to determine how the output values produced by the program may be influenced
by the parameters. The same software may, e.g., control the break of a truck or a car, but must behave
quite differently in the two use cases.
Here, we consider the static analysis of parameterized systems and propose methods for inferring
how numerical program invariants may depend on the parameters of the program. These questions cannot be answered by linear or integer linear programming related techniques alone, since the constraints
to be solved are necessarily non-convex. Still, such questions can be answered for interval analysis or,
more generally, for template-based numerical analyses such as difference bound matrices or octagons,
since their analysis results can be expressed in first-order linear real arithmetic. This observation has
been exploited by Monniaux [22] who applied quantifier elimination algorithms to obtain parametric
analysis results. The resulting system can provide amazing results. However, if the programs under
consideration have more complicated control-flow, i.e., do not consist of a single program point only,
fixpoint computation realized by means of quantifier elimination no longer seems appropriate.
In this paper, we are concerned with invariants over integers (opposed to rationals in Monniaux’
system).
Example 1. Consider the following parametric program:
x = p1 ; while (x < p2 ) x = x + 1;
where p1 , p2 are the parameters. The parametric invariant for program exit states that x = p1 holds, if
p2 ≤ p1 , and x = p2 otherwise. Thus, the analysis should distinguish two modes where the invariant
inferred for program exit is significantly different, namely the set of parameter settings where p2 ≤ p1
holds and its complement. In the first mode, the value of x at program exit is only sensitive to changes
of the parameter p1 , while otherwise it is sensitive to changes of the parameter p2 .
Note that for integer linear arithmetic, quantifier elimination is even more intricate than over the rationals. As shown in [9, 10, 11], non-parametric interval analysis as well as the analysis of differences
of integer variables can be compiled into suitable integer equations. In the absence of parameters, integer
equation systems where no integer multiplication or division is involved can be solved without resorting to heavy machinery such as integer linear programming. Instead, an iteration over max-strategies
suffices [9, 10, 11]. Here, a max-strategy maps each application of a maximum operator to one of its
arguments. Once a choice is made at each occurrence of a maximum operator, a conceptually simpler
system is obtained. For systems without maximum operators, the greatest solution can be determined by
means of a generalization of the Bellman-Ford algorithm. The greatest solutions of the maximum-free
systems encountered during the iteration on max-strategies provide us with an increasing sequence of
lower approximations to the overall least solution of the integer system. Given such a lower approximation, we can check whether a solution and thus the least solution has already been reached. Otherwise,
the given max-strategy is improved, and the iteration proceeds.
In contrast to ordinary program analysis, parametric program analysis infers a distinct program
invariant for each possible parameter setting. To solve these parametric systems, we propose to apply
strategy iteration simultaneously for all parameter settings (Section 3). We show that this algorithm
terminates—given that we can effectively deal with the parametric intermediate results considered by
the algorithm. For that, we show that the intermediate parametric values can be represented by region
trees (Section 4). Here, a region tree over a finite set C of linear inequalities is a data-structure for
representing finite partitions of the parameter space into non-empty regions of parameter settings which
are indistinguishable by means of the constraints in C. A value (of a set V of values) is then assigned
to each non-empty region of the tree. We also indicate that each basic operation which is required
for implementing parametric strategy iteration can be realized with region trees in polynomial time —
assuming that the number of parameters is fixed (Section 5). Finally, we apply parametric strategy
iteration for parametric integer equations to solve parametric interval equations and thus to perform
parametric program analysis for integer variables of programs, and report preliminary experimental
results (Section 6).
2 Basic Concepts
In this section we provide basic notions and introduce systems of parametric integer equations. By Z
we denote the complete linearly ordered set of integers extended with −∞ and ∞. Let P and X denote finite sets of parameters
and variables (unknowns), which are disjoint. A system E of parametric integer equations is given by
x = e_x ,    x ∈ X

where each right-hand side e_x is of the form e1 ∨ . . . ∨ er for parametric integer expressions e1, . . . , er.
Here, “∨” denotes the maximum operator. In this paper, an integer expression is built up from constants
in Z, variables, parameters, and negated parameters by means of application of operators. As operators,
we consider "∧" (minimum), "+" (addition), ";" (test of non-negativity) and multiplication with non-negative scalars. Addition as well as scalar multiplication is extended to −∞ and ∞ by:

x + (−∞) = (−∞) + x = −∞    for all x ∈ Z
x + ∞ = ∞ + x = ∞            for all x ∈ Z \ {−∞}
c · (−∞) = −∞                 for all c ≥ 0
0 · ∞ = 0
c · ∞ = ∞                     for all c > 0
A parametric integer expression is defined by means of the following grammar:

e ::= a | p | −p | x | e1 ∧ e2 | e1 + e2 | e1 ; e2 | c · e1

where a ∈ Z, c ∈ N, p ∈ P, x ∈ X. Given a parameter setting π : P → Z and a variable assignment
ξ : X → Z, the value of an expression e is determined by:

⟦a⟧π ξ = a
⟦p⟧π ξ = π(p)
⟦−p⟧π ξ = −π(p)
⟦x⟧π ξ = ξ(x)
⟦e1 □ e2⟧π ξ = ⟦e1⟧π ξ □ ⟦e2⟧π ξ
⟦c · e⟧π ξ = c · ⟦e⟧π ξ
Here, □ ∈ {∧, +}. For a given parameter setting π, Eπ denotes the (non-parametric) integer equation
system obtained from E by replacing every parameter p of E with its value π(p). For a given parameter
setting π, a solution to Eπ is a variable assignment ξ* that satisfies all equations of Eπ. That is, for each equation x = e1 ∨ . . . ∨ er in E,

ξ*(x) = ⟦e1⟧π ξ* ∨ . . . ∨ ⟦er⟧π ξ*
Since all operators occurring in right hand sides are monotonic, for every parameter setting π, Eπ has a
uniquely determined least solution. Finally, a parametric solution of E is a mapping Ξ which assigns to
each possible parameter setting π, a solution of Eπ . Ξ is the parametric least solution of E iff Ξ(π) is
the least solution of Eπ for every parameter setting π.
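To make these definitions concrete, the following OCaml sketch implements expressions over Z and their evaluation ⟦e⟧π ξ for a fixed parameter setting π and variable assignment ξ. The type and constructor names are ours, chosen for illustration only; the paper does not prescribe any implementation.

(* Values in the complete lattice Z of integers extended with −∞ and ∞. *)
type value = NegInf | Fin of int | PosInf

(* Parametric integer expressions, following the grammar above. *)
type expr =
  | Const of value
  | Param of string
  | NegParam of string
  | Var of string
  | Min of expr * expr   (* e1 ∧ e2 *)
  | Add of expr * expr   (* e1 + e2 *)
  | Test of expr * expr  (* e1 ; e2: yields e2 if e1 ≥ 0, and −∞ otherwise *)
  | Scale of int * expr  (* c · e1 with c ≥ 0 *)

let leq v1 v2 = match v1, v2 with
  | NegInf, _ | _, PosInf -> true
  | Fin a, Fin b -> a <= b
  | _ -> false

let min_v v1 v2 = if leq v1 v2 then v1 else v2
let max_v v1 v2 = if leq v1 v2 then v2 else v1

(* Addition extended to −∞ and ∞; −∞ absorbs, matching the table above. *)
let add v1 v2 = match v1, v2 with
  | NegInf, _ | _, NegInf -> NegInf
  | PosInf, _ | _, PosInf -> PosInf
  | Fin a, Fin b -> Fin (a + b)

(* Scalar multiplication with c ≥ 0: c · (−∞) = −∞ and 0 · ∞ = 0. *)
let scale c = function
  | NegInf -> NegInf
  | PosInf -> if c = 0 then Fin 0 else PosInf
  | Fin a -> Fin (c * a)

(* ⟦e⟧π ξ for a parameter setting pi and a variable assignment xi. *)
let rec eval (pi : string -> int) (xi : string -> value) = function
  | Const v -> v
  | Param p -> Fin (pi p)
  | NegParam p -> Fin (- (pi p))
  | Var x -> xi x
  | Min (e1, e2) -> min_v (eval pi xi e1) (eval pi xi e2)
  | Add (e1, e2) -> add (eval pi xi e1) (eval pi xi e2)
  | Test (e1, e2) ->
      if leq (Fin 0) (eval pi xi e1) then eval pi xi e2 else NegInf
  | Scale (c, e) -> scale c (eval pi xi e)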
Example 2. Consider the parametric system E which consists of the single equation x = p1 ∨ (x + 1 ∧
p2 ). Then the parametric least solution Ξ of E is given by
Ξ π x = π(p1) if π(p1) ≥ π(p2), and Ξ π x = π(p2) if π(p1) < π(p2),
for all parameter settings π.
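For a single parameter setting, the least solution of such a one-variable system can be computed by Kleene iteration from −∞, e.g., with the evaluator sketched above (again an illustration of ours; the iteration terminates here because the value can only increase up to π(p2)):

(* Example 2 with π(p1) = 3, π(p2) = 7: the least solution of
   x = p1 ∨ (x + 1 ∧ p2) is π(p2) = 7, since π(p1) < π(p2). *)
let () =
  let pi = function "p1" -> 3 | _ -> 7 in
  let args = [ Param "p1"; Min (Add (Var "x", Const (Fin 1)), Param "p2") ] in
  let rhs v =
    List.fold_left (fun acc e -> max_v acc (eval pi (fun _ -> v) e)) NegInf args
  in
  let rec lfp v = let v' = rhs v in if v' = v then v else lfp v' in
  assert (lfp NegInf = Fin 7)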
3 Parametric Strategy Iteration
In the following we assume w.l.o.g. that the set of parameters P is given by P = {p1, . . . , pk}. Accordingly, parameter settings from P → Z can be represented as vectors from Z^k. Our goal is to enhance the strategy iteration algorithm from [9, 10, 11] to an algorithm that computes parametric least solutions of systems of parametric integer equations. Conceptually, we do this by performing each operation for all parameter settings in parallel. For that, we lift the complete lattice Z of integer values to the set Z^k → Z of parametric values, which is again a complete lattice w.r.t. the point-wise extension of the ordering on Z, where the least and greatest elements are given by the functions const_{−∞} and const_∞ mapping each vector of parameters to the constant values −∞ and ∞, respectively. Accordingly, sub-expressions of right-hand sides are no longer evaluated one by one for each parameter setting. Instead, each binary operator □ on Z is lifted to a binary operator □* on parametric values from Z^k → Z by defining:

(φ1 □* φ2)(π) = φ1(π) □ φ2(π)

for all parameter settings π ∈ Z^k. In particular, the lifted maximum operator ∨* equals the least upper bound of the complete lattice Z^k → Z. Likewise, scalar multiplication with a non-negative constant c is lifted point-wise from a unary operator on Z to a unary operator on Z^k → Z. For convenience, we henceforth denote the lifted operators with the same symbols by which we denote the original operators. The original system E of parametric equations over Z thus can be interpreted as a system of equations over the domain Z^k → Z of parametric values. For all parametric variable assignments ρ : X → Z^k → Z, expressions e are interpreted as follows:
⟦a⟧ ρ = const_a
⟦p_i⟧ ρ = proj_i
⟦−p_i⟧ ρ = −proj_i
⟦x⟧ ρ = ρ(x)
⟦e1 □ e2⟧ ρ = ⟦e1⟧ ρ □ ⟦e2⟧ ρ
⟦c · e⟧ ρ = c · ⟦e⟧ ρ

Here, □ is a binary operator, const_a is a parametric value which maps all arguments to the constant a, and proj_i denotes the projection onto the i-th component of its argument vector.
With respect to this interpretation, the least solution ρ* of E is a mapping of type X → Z^k → Z. Let us call ρ* the least parametric solution. Let Ξ denote the parametric least solution as defined in the last section. Then Ξ and ρ* are not identical, but in one-to-one correspondence. By fixpoint induction, it can be verified that:

Ξ π x = ρ* x π

for all variables x ∈ X and all parameter settings π ∈ Z^k. In the same way as the abstract domain Z and the operators on Z, we also lift the notion of a strategy from [10] to the notion of a parametric strategy. For technical reasons, let us assume that the right-hand side for each variable in E is of the form a ∨ e1 ∨ . . . ∨ er where a ∈ Z. This can always be achieved, e.g., by replacing right-hand sides e which are not of the right format with −∞ ∨ e. A parametric strategy σ then assigns to each variable x a parametric choice. If the right-hand side for x is given by e0 ∨ . . . ∨ er, σ x maps each parameter setting π ∈ Z^k to a natural number in the range [0, r] (signifying one of the argument expressions ei).

Moreover, we need an operator "next" which takes a given parametric strategy σ together with a parametric variable assignment ρ : X → Z^k → Z and then, for every parameter setting π, switches the choice provided by σ whenever required by the evaluation of subexpressions according to ρ. That is, σ′ = next(σ, ρ) implies that the following properties hold for all equations x = e0 ∨ . . . ∨ er and all parameter settings π:

1. ⟦e_{σ′ x π}⟧ ρ π ≥ ⟦e_i⟧ ρ π for all i ∈ {0, . . . , r}.
2. If ⟦e_{σ x π}⟧ ρ π ≥ ⟦e_i⟧ ρ π for all i, then σ′ x π = σ x π.
Note that this operator changes the choice given by the argument strategy σ only if a real improvement
is guaranteed. An operator “next” with properties 1) and 2) is a locally optimal parametric strategy
improvement operator because it chooses for each x a best alternative everywhere (relative to ρ). For
the correctness of the algorithm it would be sufficient to choose some strategy that is an improvement
compared to the current strategy at ρ. Finally, we need an operator “select” which, based on a parametric
choice φ : Zk → N, selects one of the arguments, i.e., select φ (v0 , . . . , vr ) is the parametric value given
by:
select φ (v0, . . . , vr) π = v_{φ(π)} π
for all parameter settings π.
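Phrased as code, parametric values are simply functions from parameter settings to values, and the lifted operators as well as "select" act point-wise. The following OCaml lines (ours; the actual implementation of Section 5 uses region trees rather than raw functions) make this explicit:

(* A parameter setting is a point of Z^k; a parametric value maps
   settings to values from the sketch in Section 2. *)
type setting = int array
type pvalue = setting -> value

(* Point-wise lifting: (φ1 □* φ2)(π) = φ1(π) □ φ2(π). *)
let lift2 (op : value -> value -> value) (f : pvalue) (g : pvalue) : pvalue =
  fun pi -> op (f pi) (g pi)

(* select φ (v0, ..., vr) π = v_{φ(π)} π *)
let select (phi : setting -> int) (args : pvalue array) : pvalue =
  fun pi -> args.(phi pi) pi

let const_neg_inf : pvalue = fun _ -> NegInf  (* least parametric value *)
let const_pos_inf : pvalue = fun _ -> PosInf  (* greatest parametric value *)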
With these parametric versions of the corresponding operators used by strategy iteration, we propose
the algorithm in Fig. 1 for systems of parametric integer equations with n unknowns. The resulting
algorithm is called parametric strategy iteration or PSI for short.
σ = {x ↦ const_0 | x ∈ X};                                 // initial strategy
do {
    forall (x ∈ X) ρ(x) = const_∞;                         // begin BF
    for (int i = 0; i < n; i++)
        forall ((x = e0 ∨ . . . ∨ er) ∈ E)
            ρ(x) = select (σ x) (⟦e0⟧ ρ, . . . , ⟦er⟧ ρ);  // end BF
    old = σ;
    σ = next(σ, ρ);                                         // strategy improvement
} while (old ≠ σ);                                          // termination detection
output(ρ);

Figure 1: Parametric strategy iteration for a parametric integer equation system with n unknowns.
σ = {x ↦ 0 | x ∈ X};                                       // initial strategy
do {
    forall (x ∈ X) ρ(x) = ∞;                               // begin BF
    for (int i = 0; i < n; i++)
        forall ((x = e0 ∨ . . . ∨ er) ∈ E)
            ρ(x) = ⟦e_{σ x}⟧ ρ;                            // end BF
    old = σ;
    σ = next(σ, ρ);                                         // strategy improvement
} while (old ≠ σ);                                          // termination detection
output(ρ);

Figure 2: Ordinary strategy iteration for a non-parametric integer equation system with n unknowns.
PSI starts with the initial parametric strategy σ mapping each variable and parameter setting to the
constant 0, i.e., it selects for each variable and parameter setting the constant term on the right-hand side.
For a given parametric strategy, the Bellman-Ford algorithm is used to determine the greatest solution
(the for-loop labeled as BF). This Bellman-Ford iteration amounts to n rounds of round robin iteration
(n the number of variables) starting from the top element of the lattice. During round robin iteration,
the appropriate integer expression ei as right-hand side for each variable x and each parameter setting
π is selected by means of the auxiliary function select according to the current parametric strategy σ.
As a result of Bellman-Ford iteration for all parameter settings in parallel, the next approximation ρ to
the least parametric fixpoint is obtained. This parametric variable assignment is then used to improve
the current parametric strategy σ by means of the operator “next”. This is repeated until the parametric
strategy does not change any more.
For a comparison, Fig. 2 shows a version of the non-parametric strategy iteration as presented in
[10]. The mappings ρ, σ there have functionalities:
ρ:X→Z
σ:X→N
where the evaluation ⟦ei⟧ of expressions ei results in integer values only. Since the strategy σ specifies a single integer expression ei for any given variable x, the call to "select" in PSI can be simplified to ⟦e_{σ x}⟧ ρ. This optimization is not possible in the parametric case, since different parameter values may result in different ei being selected. For systems of integer equations without parameters, strategy iteration computes the least solution, as has been shown in [10].
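As a reference point, ordinary strategy iteration can be spelled out on top of the evaluator from Section 2 for one fixed parameter setting. The following OCaml sketch is a toy transcription of ours, not the authors' implementation; its improvement step realizes a locally optimal "next" by picking a best argument for each variable.

(* A system is a list of (variable, array of argument expressions); the
   right-hand side of x is the maximum of its arguments. Every variable
   occurring in a right-hand side must be bound in the system. *)
let strategy_iteration (pi : string -> int) (sys : (string * expr array) list) =
  let n = List.length sys in
  let sigma = Hashtbl.create 16 and rho = Hashtbl.create 16 in
  List.iter (fun (x, _) -> Hashtbl.replace sigma x 0) sys;  (* initial strategy *)
  let xi x = Hashtbl.find rho x in
  let improved = ref true in
  while !improved do
    improved := false;
    (* Bellman-Ford phase: n rounds of round-robin iteration from the top. *)
    List.iter (fun (x, _) -> Hashtbl.replace rho x PosInf) sys;
    for _ = 1 to n do
      List.iter (fun (x, es) ->
        Hashtbl.replace rho x (eval pi xi es.(Hashtbl.find sigma x))) sys
    done;
    (* Strategy improvement: switch x to a best argument if strictly better. *)
    List.iter (fun (x, es) ->
      let v i = eval pi xi es.(i) in
      let best = ref (Hashtbl.find sigma x) in
      Array.iteri (fun i _ -> if not (leq (v i) (v !best)) then best := i) es;
      if !best <> Hashtbl.find sigma x then begin
        Hashtbl.replace sigma x !best;
        improved := true
      end) sys
  done;
  List.map (fun (x, _) -> (x, xi x)) sys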
Assume for a moment that we can compute with parametric values and parametric strategies effectively, i.e., can represent them in some data structure, test them for equality, compute the results
of parametric operator applications, as well as realize the operations “select” and “next”. Then the
algorithmic scheme from Fig. 1 can be implemented, and we obtain:
Theorem 1. Let E be a parametric integer equation system with n variables where each right-hand side
is a maximum of at most r non-constant integer expressions. The following holds:
1. Parametric strategy iteration as given by Fig. 1 terminates after at most (r + 1)^n strategy
improvement steps, where each round of improvement requires at most O(n · |E|) evaluations of
parametric operators.
2. On termination, the algorithm returns the least parametric solution of E.
Proof. First we observe that, when probing the intermediate values of σ and ρ for any given parameter
setting π = (p1, . . . , pk) ∈ Z^k, the algorithm from Fig. 1 for the parametric system E returns the
same values and strategic choices as the algorithm from Fig. 2 when run on the integer system Eπ .
Moreover, upon termination, the strategic choices σ x π as well as the values ρ x π no longer change, and therefore, for each parameter setting π, a least solution of Eπ has been attained. Since the maximal number of strategies considered by strategy iteration for ordinary integer systems is bounded by (r + 1)^n (independently of the values of the constants in the system), we conclude that the parametric algorithm also performs at most (r + 1)^n strategy improvement steps. Therefore, the algorithm terminates. Since
then for each parameter setting π, a least fixpoint of Eπ has been obtained, the resulting assignment is
equal to the least parametric solution of E.
Being able to effectively compute with parametric variable assignments and parametric strategies is
crucial for the implementation of parametric strategy iteration as a practical algorithm. In the following
we will explore the structure of parametric variable assignments and parametric strategies occurring
during the algorithm.
A set S ⊆ Z^k of integer points is called convex iff S equals the set of integer points inside its convex hull over the rationals. A mapping f : Z^k → V (V some set) is called piecewise constant iff there is a finite partition Ψ of Z^k into nonempty convex sets together with a mapping Ψ_f : Ψ → V such that f(π) = Ψ_f(P) for all P ∈ Ψ and all π ∈ P. The cardinality of the partition Ψ is called the fragmentation of f. In fact, the fragmentation of f depends on the representation of f rather than the function f itself. Still, we intentionally do not differentiate between the function and its representation here. Let Aff_k ⊆ Z^k → Z denote the set of functions which are either const_{−∞}, const_∞ or an affine function from Z^k → Z. We call f ∈ Z^k → Z piecewise affine iff there is a piecewise constant mapping f̃ : Z^k → Aff_k such that f(π) = f̃(π)(π). If f is piecewise affine, then there is a finite partition Ψ of Z^k into non-empty convex sets together with a mapping Ψ_f : Ψ → Aff_k such that f(π) = Ψ_f(P)(π) for all P ∈ Ψ and π ∈ P.
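As a small illustration of ours: over a single parameter, the minimum of the two affine functions f1(p1) = p1 and f2(p1) = 2 − p1 is piecewise affine for the partition of Z into the convex sets {p1 ≤ 1} and {p1 ≥ 2}:

\[
  (f_1 \wedge f_2)(p_1) =
  \begin{cases}
    p_1     & \text{if } p_1 \le 1,\\
    2 - p_1 & \text{if } p_1 \ge 2,
  \end{cases}
\]

so a single application of ∧ can double the fragmentation, which is exactly what the counting below accounts for.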
Assume f1, f2 are piecewise affine where both mappings share the same partition Ψ of the parameter space Z^k into convex sets. Then the functions c · f1 as well as f1 + f2 are piecewise affine using the same partition Ψ. The functions f1 ∨ f2, f1 ∧ f2, and f1 ; f2 are piecewise affine as well with, however, a possibly different finite partition into convex sets. The fragmentation is increased at most by a factor of 2. From that, we conclude that the inner for-loop which implements the BF iteration may increase the fragmentation of a common partition of values and the parametric strategy σ only by a factor of 2^{n·m∧}, where m∧ is the number of occurrences of minimum operators ∧ in E. The applications of the operator ";" do not contribute, since, in this phase, they never evaluate to a parametric value which returns
Figure 3: The partition [−4, 0, 1, 3] where the elements of the second region are displayed.
−∞. Assume that ρ is the parametric variable assignment computed for the parametric strategy σ after executing the inner for-loop. The parametric strategy σ′ returned by the call to the next-function for σ and ρ will again be a piecewise constant function whose fragmentation, compared with the fragmentation of ρ, is increased at most by a factor of 2^{m∨+m∧+m;}, where m∨ and m; denote the number of occurrences of ∨-operators and ;-operators, respectively. This holds because we basically have to evaluate right-hand sides in order to apply the function next to σ and ρ. Therefore, compared with the fragmentation of σ, the fragmentation of σ′ is increased by a factor of at most 2^{m∨+m∧+m;} · 2^{n·m∧} = 2^{m∨+(n+1)m∧+m;}. Since the initial strategy has fragmentation 1 and the total number of strategy improvement steps is bounded,
we obtain our second theorem:
Theorem 2. Consider the parametric strategy improvement algorithm from Fig. 1. All encountered
parametric strategies are piecewise constant. Likewise, all encountered variable assignments are piecewise affine. Additionally:
1. The fragmentation is bounded by 2^{d·(m∨+(n+1)m∧+m;)}, where n is the number of unknowns in the integer system of equations, m□ is the number of □-operators for □ ∈ {∨, ∧, ;}, and d is the
maximal number of strategies for any parameter setting.
2. The absolute value of any occurring number is bounded by (c ∨ 2)^{s·n} · a, where a is the maximum of
the absolute values of all constants, c is the maximal occurring constant in a scalar multiplication
and s is the maximal size of a right-hand side.
In the second part of Theorem 2, we provided bounds for the coefficients occurring in affine functions
of parametric values. These bounds follow since each parametric value is determined by means of n rounds of round-robin iteration. Consequently, the sizes of the numbers occurring in parametric values are
always polynomial in the input size of PSI. Since each inequality used for refining the current partition
of the parameter space is obtained from the comparison of two affine functions, we conclude that the
sizes of coefficients of all occurring inequalities also remain polynomial.
4 Region Trees
The key issue for a practical implementation of PSI is to provide an efficient data-structure for partitions
of the parameter space Zk into convex components. In case k = 1, i.e., when the system depends
on a single parameter only, the partition Ψ consists of a set of non-empty intervals [−∞, z0 ], [z0 +
1, z1 ] . . . , [zr−1 + 1, zr ], [zr + 1, ∞] whose union equals Z. Thus, it can be represented by a finite
ordered list [z0 ; . . . ; zr ] (see Fig. 3). By means of the list representation, all required operations on
parametric values as well as on parametric strategies can be realized in polynomial time.
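The following OCaml sketch (ours) shows this one-parameter representation: a piecewise value stores the breakpoints [z0; ...; zr] together with one value per region, and "align" merges two breakpoint lists and pairs up the region values.

(* Regions are (−∞, z0], (z0, z1], ..., (zr, ∞); vals holds one entry per
   region, so List.length vals = List.length breaks + 1. *)
type 'a pw = { breaks : int list; vals : 'a list }

(* The value of the region containing the integer point z. *)
let value_at (pw : 'a pw) (z : int) : 'a =
  let rec go bs vs = match bs, vs with
    | [], [v] -> v
    | b :: bs', _ :: vs' when z > b -> go bs' vs'
    | _, v :: _ -> v
    | _ -> invalid_arg "value_at: malformed representation"
  in
  go pw.breaks pw.vals

(* Common refinement of two piecewise values, pairing the region values. *)
let align (f : 'a pw) (g : 'b pw) : ('a * 'b) pw =
  let breaks = List.sort_uniq compare (f.breaks @ g.breaks) in
  (* one sample point per region: each breakpoint, plus one point beyond *)
  let beyond = match List.rev breaks with b :: _ -> b + 1 | [] -> 0 in
  let samples = breaks @ [ beyond ] in
  { breaks; vals = List.map (fun z -> (value_at f z, value_at g z)) samples }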
The case k > 1 is less obvious. We use a representation based on satisfiable conjunctions of linear
inequalities on parameters a1 p1 + . . . + ak pk ≤ b with a1 , . . . , ak , b ∈ Z. Note that the negation of
this inequality is given by the inequality −a1 p1 − . . . − ak pk ≤ −b − 1. Disjunctions of satisfiable
conjunctions of inequalities are organized into a binary tree t as shown, e.g., in Fig. 4 on top. Each
node in the tree is labeled with an inequality c. The left child of a node labeled with c corresponds to
the case where c holds while the right child corresponds to the case where ¬c holds. A leaf v of the
tree t thus represents the conjunction of the inequalities as provided by the path reaching v. The path
Figure 4: The region tree for the inequalities of Example 3 (with root −2p1 + p2 ≤ −2, followed by nodes −p1 − p2 ≤ −6 and −p1 − p2 ≤ −2) together with the region represented by the second leaf.
(c1, j1) . . . (cr, jr) in t which successively visits the nodes labeled by c1, . . . , cr and continues with the j1, . . . , jr-th successors, respectively (where ji ∈ {1, 2}), represents the conjunction c1^{j1} ∧ . . . ∧ cr^{jr}, where c^1 = c and c^2 = ¬c. As an invariant of t, we maintain that all conjunctions corresponding to
paths in t are satisfiable. The leaves of t are annotated with the values attained in the corresponding
parameter region. In order to obtain a more canonical representation, we additionally impose a strict
linear ordering ≺ on the inequalities (analogous to the linear ordering of variables for OBDDs) where
the inequality c at a node in t should be less than all successor inequalities. Moreover, we demand that
c ≺ ¬c should hold. We call the corresponding data-structure region tree.
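Continuing the OCaml sketches above (names are ours), a region tree and the lookup of a parameter setting can be written as follows; the invariant that every path is satisfiable, and the ordering ≺ on the inequalities, have to be maintained separately.

(* A linear inequality a1*p1 + ... + ak*pk ≤ b over the parameters. *)
type ineq = { coeffs : int array; bound : int }

(* The negation of a·p ≤ b is (−a)·p ≤ −b − 1, as noted above. *)
let negate c =
  { coeffs = Array.map (fun a -> -a) c.coeffs; bound = -c.bound - 1 }

(* Left subtree: c holds; right subtree: ¬c holds; values at the leaves. *)
type 'a rtree =
  | Leaf of 'a
  | Node of ineq * 'a rtree * 'a rtree

let holds c (pi : int array) =
  let s = ref 0 in
  Array.iteri (fun i a -> s := !s + a * pi.(i)) c.coeffs;
  !s <= c.bound

(* The value assigned to the region containing the parameter setting pi. *)
let rec lookup t (pi : int array) = match t with
  | Leaf v -> v
  | Node (c, yes, no) -> if holds c pi then lookup yes pi else lookup no pi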
Example 3. Consider the following set of linear inequalities:
−2p1 + p2 ≤ −2
−p1 − p2 ≤ −6
−p1 − p2 ≤ −2
Assume further that the inequalities are ordered from left to right and that each of them is smaller than
its negation. Then we obtain a region tree as depicted in Fig. 4 at the top, where the integer points
corresponding to the second leaf are shown at the bottom. The tree is not a full binary tree, since the
second inequality −p1 − p2 ≤ −6 implies the third inequality −p1 − p2 ≤ −2.
A trivial upper bound for the number of leaves of a region tree over a set of n linear inequalities is given by 2^n. However, in our application we assume that the parameter space is of fixed dimensionality k.
Therefore many occurring inequalities are at least partially redundant. For the important case where
k ≤ n, we can establish the following more precise upper bound on the number of leaves:
Lemma 1. The number of leaves of a region tree over n linear inequalities over k parameters with k ≤ n is bounded by \sum_{i=0}^{k} \binom{n}{i}.
A similar bound has been inferred for cells of maximal dimension k in arrangements of n hyperplanes
(see, e.g., [15]). The intersection of halfspaces as required for our estimation, results in an identical
recurrence relation and therefore in an identical solution. As a consequence of Lemma 1, the number of
leaves and thus also the number of nodes of a region tree for a fixed set of parameters is polynomial in
the number of involved linear inequalities.
When maintaining region trees, we repeatedly must verify whether a (growing) conjunction of linear
inequalities is satisfiable (over Z). Several algorithms have been proposed to solve this problem (see,
e.g., [24, 5]). If the number of parameters is fixed and small, polynomial run-time can be guaranteed,
e.g., by relying on the LLL algorithm for lattices in combination with the ellipsoid method [18, 17] or
by means of generating functions [19]. Note that for small numbers of variables, even Fourier-Motzkin
elimination (though only complete for rational satisfiability) is polynomial.
5 Implementing Parametric Strategy Iteration
The efficiency of the resulting algorithm crucially depends on the fragmentation of parametric variable
assignments and strategies occurring during iteration. Instead of globally maintaining a single common
partition, we allow individual partitions for each intermediately computed value as well as for each
variable from X. Before applying a binary operator to parametric argument values t1, t2, first a common refinement of the partitions of t1, t2 is computed by means of a function "align". Given that 'a tree is the type of region trees whose leaves are labeled with values of type 'a, the function "align" has the following type:

align : 'a tree → 'b tree → ('a * 'b) tree
In case of addition, the operator “+” is applied for each component separately. In case of minimum, the
components of the common refinement may be further split into halves in order to represent the result
as a piecewise affine function. Here, an extra function "normalize" is required which re-establishes the
ordering on the inequalities in the tree.
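A naive version of "align" on region trees can be sketched as follows (ours): it simply grafts the second tree below each leaf of the first. The actual implementation additionally interleaves the nodes according to the ordering ≺ (the job of "normalize") and avoids building nodes for unsatisfiable path conjunctions.

(* Pair up the leaf values of two region trees on a common refinement;
   no pruning of empty regions and no re-ordering is performed here. *)
let rec align_trees (t1 : 'a rtree) (t2 : 'b rtree) : ('a * 'b) rtree =
  match t1, t2 with
  | Leaf a, Leaf b -> Leaf (a, b)
  | Node (c, yes, no), _ -> Node (c, align_trees yes t2, align_trees no t2)
  | Leaf _, Node (c, yes, no) -> Node (c, align_trees t1 yes, align_trees t1 no)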
Since the number of nodes of a region tree is polynomial in the number n of inequalities, and each
required subsumption test is polynomial in n and the maximal size of an occurring number, we obtain:
Lemma 2. Assume that C is a set of n linear inequalities over a fixed finite set of parameters where
the sizes of all occurring numbers are bounded by m. Then the operations “align” as well as addition,
scalar multiplication and minimum lifted to region trees with inequalities from C are polynomial-time
in the numbers n and m.
We build up the nodes of the resulting trees in pre-order. In order to achieve the given complexity bound,
we take care not to introduce nodes which correspond to unsatisfiable conjunctions of inequalities, i.e.,
empty regions. As with parametric variable assignments, parametric strategies are also not determined
as a whole. Instead, we maintain for each right-hand side a ∨ e1 ∨ . . . ∨ er a separate piecewise constant
mapping from Z^k into the range [0, r] of natural numbers which identifies, for each parameter setting,
the subexpression which is currently selected. The idea is that one variable x should not suffer from the
fragmentation required for another unrelated variable of the system. Also for the operations “next” and
“select”, we obtain:
Lemma 3. Assume that C is a finite set of linear inequalities over a fixed finite set of parameters. Then
the operations next and select are polynomial-time in the number n of variables and the size of C.
Putting Lemmas 2 and 3 together, we conclude that PSI is fast whenever the number of occurring inequalities is small and only few strategies are encountered. Summarizing, we obtain:
Theorem 3. Consider a system E of integer equations with k parameters. The least parametric solution of E can be computed in time polynomial in the bit size of E, the maximal number of strategies
encountered for any parameter setting, and the maximal number of encountered inequalities.
In our experiments with interval analysis for the benchmark programs used in Section 6, the sets of involved inequalities stayed reasonably small; this, however, need not always be the case.
Example 4. For each m ≥ 0, consider the following system with a single parameter p:

x_i = x_{i+1} ∨ −2^i + x′_{i+1}        x′_i = x′_{i+1} ∧ 2^i + x_{i+1}
x_m = x ∨ −2^m + x′                    x′_m = x′ ∧ 2^m + x
x = p ∧ −p                             x′ = −p ∨ p

where 1 ≤ i < m. Let ρ* be the least parametric solution. Then we have:

ρ* x_1 p = −p − 2^m   if p ≤ −2^m − 1
ρ* x_1 p = 0          if −2^m ≤ p ≤ 2^m and p is even
ρ* x_1 p = −1         if −2^m ≤ p ≤ 2^m and p is odd
ρ* x_1 p = p − 2^m    if 2^m + 1 ≤ p
Thus, the fragmentation of the mapping ρ∗ necessarily grows exponentially with m.
We conclude that for large values of m, any parametric analyzer will exhibit exponential behavior on
the system of equations from Example 4.
6 Parametric Program Analysis and Experimental Evaluation
As indicated in the introduction, interval analysis for integer variables can be compiled into a finite system of integer equations. The set of unknowns of this system are of the forms x^−_u and x^+_u, where u is a program point and x is a program variable of the program to be analyzed, and the superscripts −, + indicate the negated lower bounds and the upper bounds of the respective intervals (see [9, 10] for the details of the transformation). The least solution ρ* of the integer system then translates into the program invariant which, for program point u and variable x, asserts that all runtime values of x are in the interval [−ρ*(x^−_u), ρ*(x^+_u)]. Here, [∞, −∞] signifies the empty set of values (unreachability of u).
The transformation of programs into integer equations is readily extended to programs with parameters. For the sake of the transformation, parameters are treated as constants occurring in the program,
thus resulting in a parametric system of equations as introduced in Section 2. For the program of Example 1, e.g., we obtain:
x^−_1 = −p1 ∨ x^−_3
x^−_2 = (x^+_1 + ∞); (x^−_1 + p2 − 1); (x^−_1 ∧ ∞)
x^−_3 = x^−_2 + (−1)
x^−_4 = (x^+_1 + (−p2)); (x^−_1 + ∞); (x^−_1 ∧ −p2)
x^+_1 = p1 ∨ x^+_3
x^+_2 = (x^+_1 + ∞); (x^−_1 + p2 − 1); (x^+_1 ∧ p2 − 1)
x^+_3 = x^+_2 + 1
x^+_4 = (x^+_1 + (−p2)); (x^−_1 + ∞); (x^+_1 ∧ ∞)

The least parametric solution for the unknowns x^−_4 and x^+_4 (signifying the bounds of the values of x at program exit) is given by:

ξ(x^−_4) = if −p1 + p2 ≤ 0 then −p1 else −p2
ξ(x^+_4) = if −p1 + p2 ≤ 0 then p1 else p2
The resulting parametric invariant for the program exit states that x equals p1 , if −p1 + p2 ≤ 0, and x
equals p2 otherwise.
We have provided prototypical implementations of parametric strategy iteration for parametric integer equations, and based on these implementations, also for parametric interval equations. For convenience, the user may additionally specify a boolean combination of linear constraints as a global
assumption on the parameter values of interest. Thus, we may, e.g., specify that generally,
0 ≤ p1 ∧ p1 ≤ p2
should hold. Then the analyzer only considers parametric values less than or equal to the tree in Fig. 5.
Figure 5: The topmost value under the assumption 0 ≤ p1 ≤ p2: a region tree testing −p1 ≤ 0 and p1 − p2 ≤ 0, with const_∞ at the leaf where both hold and const_{−∞} at the other leaves.
One implementation, based on lists, deals with the one-parameter case only, while the other implementation, which is based on region trees, can deal with multiple parameters. The total ordering ≺ used by our analyzer orders constraints according to the number of variables, where the lexicographical ordering on the vector of coefficients is used for constraints with the same number of variables. For deciding satisfiability of conjunctions of inequalities, we generally rely on Fourier-Motzkin elimination (with integer
tightening). Only at the very end, when producing the final result, we purge regions containing
no integer points by means of an integer solver. We have tried our implementations on the rate limiter
example from [22] as well as on several small (about 20 interval unknowns) but intricate systems of
equations in order to evaluate the impact of the number of parameters as well as the impact of the chosen method for checking emptiness of integer polyhedra on the practical performance. The tests have
been executed on an Intel(R) Core(TM) i5-3427U CPU running Ubuntu. On that machine, parametric
interval analysis of the rate limiter example terminated after less than 5s. The remaining benchmarks
are based on programs where interval analysis according to the standard widening/narrowing approach
fails to compute the least solution. For each example equation system, we successively introduce parameters for the constants used, e.g., in conditions and initializers. The system of equations nested is
derived from a program with two independent nested loops. The systems amato0, amato1, amato2 correspond to three
example programs presented in [1]. The system rupak corresponds to an example program by Rupak
Majumdar presented at MOD’11. Both amato2 and rupak do not realize a plain interval analysis but
additionally track differences of variables.
Interestingly, the number of required strategy improvements does not depend on the number of
parameters — with the notable exception of amato0, where, for three and four parameters, the number
increases from 8 to 9. Generally, the number of iterations is always significantly lower than the number
of unknowns in the system of equations.
Figure 6: Fragmentation: the number of regions in the results for the benchmarks nested, amato0, amato1, amato2, and rupak, with 0 to 4 parameters.
Figure 6 shows the number of regions with different behavior in the results. For the 0-parameter case
this is always 1. As expected, the fragmentation increases with the number of parameters — but not
as excessively as we expected. In case of rupak, the number of regions even decreased for three and
four parameters. The reason is that by introducing fresh parameters, the ordering on inequalities also
changes. The ordering, on the other hand, may have a significant impact on fragmentation.
Figure 7: Execution time in seconds, on a logarithmic scale, for the benchmarks nested, amato0, amato1, amato2, and rupak, with 0 to 4 parameters.
Figure 7 shows the running times of the benchmarks on a logarithmic scale. We visualize the runtimes for 0 through 4 parameters each. The filled and outlined bars correspond to Fourier-Motzkin
elimination and integer satisfiability for testing emptiness of regions, respectively. The inscribed red bars
in the single parameter case represent the run-time obtained by using linear lists instead of region trees.
The bottom-line case without parameters is fast and the dedicated implementation for single parameters
increases the run-time only by a factor of approximately 1.4. Using region trees clearly incurs an
extra penalty, which increases significantly with the number of parameters. Replacing Fourier-Motzkin elimination for testing emptiness of regions with an enhanced algorithm for integer satisfiability increases the run-time only by an extra factor of about 1.5. When considering absolute run-times,
however, it turns out that the solver even for four parameters together with full integer satisfiability is
not prohibitively slow (a few seconds only for all benchmark equation systems). Details on experimental
results can be found at www2.in.tum.de/~seidl/psi.
7 Related Work
Parametric analysis of linear numerical program properties has been advocated by Monniaux [22, 23]
by compiling the abstract program semantics to real linear arithmetic and then using quantifier elimination to determine the parametric invariants. We have conducted experiments with Monniaux's tool MJOLLNIR, by which we tried to solve real relaxations of parametric integer equations. Since MJOLLNIR has no native support for positive or negative infinities, these values had to be encoded through formulas
with extra propositional variables. This approach, however, did not scale to the sizes we needed. Our
conjecture is that the high Boolean complexity of the formulas causes severe problems. Beyond that,
fewer calls to quantifier elimination for formulas with many variables (as required by MJOLLNIR) may
be more expensive than many calls to an integer solver for formulas with few variables (namely, the
parameters, as in our approach). All in all, since our integer tool and MJOLLNIR tackle slightly different problems, a precise comparison is difficult. Still, our experiments indicate that our approach behaves
better than a quantifier elimination-based approach for applications where the control-flow is complex
with multiple control-flow points, but where few parameters are of interest.
Relational program analyses, e.g., by means of polyhedra, have been around for a long time [4, 2]; these also allow one to infer linear relationships between parameters and program variables. The resulting invariants, however, are convex and thus do not allow one to differentiate between different linear dependencies in different regions. In order to obtain invariants as precise as ours, one would have to combine polyhedral domains with some form of trace partitioning [20]. These kinds of analyses, though, must rely on
widening and narrowing to enforce termination, whereas our algorithms avoid widening and narrowing
completely and directly compute least solutions, i.e., the best possible parametric invariants.
Parametric analysis of a different kind has also been proposed by Reineke and Doerfert [25] in the context of worst-case execution time (WCET) analysis. They rely on parametric linear programming as implemented by the PIP tool [7] and infer the dependence of the WCET on architecture parameters such as the cache size by means of black-box sampling of the WCETs obtained for different parameter settings.
Our data structure of region trees is a refinement of the tree-like data-structure QUAST provided
by the PIP tool [7]. Similar data-structures are also used by Monniaux [22] to represent the resulting
invariants, and by Mihaila et al. [21] to differentiate between different phases of a loop iteration. In our
implementation, we additionally enforce a total ordering on the constraints in the tree nodes and allow
arbitrary values at the leaves. Total orderings on constraints have also been proposed for linear decision
diagrams [3]. Variants of LDDs have later been used to implement non-convex linear program invariants [13, 12] and in [14] for representing linear arithmetic formulas when solving predicate abstraction
queries. In our application, sharing of subtrees is not helpful, since each node v represents a satisfiable
conjunction of the inequalities which is constituted by the path reaching v from the root of the data-structure. Moreover, our application requires that the leaves of the data-structure are not just annotated
with a Boolean value (as for LDDs), but with values from various sets, namely strategic choices, affine
functions or even pairs thereof.
8 Conclusion
Solving systems of parametric integer equations allows one to also solve systems of parametric interval equations, and thus to realize parametric program analysis for programs using integer variables. To solve parametric integer equations, we have presented parametric strategy iteration. Instead of solving integer optimization and satisfiability problems involving all unknowns of the problem formulation (as an approach based on quantifier elimination would), our algorithm is a smooth generalization of ordinary strategy
approach based on quantifier elimination), our algorithm is a smooth generalization of ordinary strategy
iteration, which applies integer satisfiability to inequalities involving parameters only. Our prototypical
implementation indicates that this approach indeed has the potential to deal with nontrivial problems —
at least when only few parameters are involved. Introducing further parameters significantly increases
the analysis costs. Surprisingly, the number of strategies required as well as the fragmentation observed
in our examples increased only moderately. Accordingly, the required running times were quite decent.
More experiments are necessary, though, to obtain a deeper understanding of parametric strategy
iteration. Also, we are interested in exploring the practical potential of sensitivity and mode analysis
enabled by this new algorithm, e.g., for automotive and avionic applications.
Acknowledgement. We thank Stefan Barth (LMU) for Example 4, and Jan Reineke (Universität des
Saarlandes) for useful discussions.
References
[1] Gianluca Amato and Francesca Scozzari. Localizing widening and narrowing. In Static Analysis Symposium,
volume 7935 of LNCS, pages 25–42. Springer, 2013.
[2] Roberto Bagnara, Patricia M. Hill, and Enea Zaffanella. The Parma polyhedra library: Toward a complete set
of numerical abstractions for the analysis and verification of hardware and software systems. Sci. Comput.
Program., 72(1–2):3–21, 2008.
[3] Sagar Chaki, Arie Gurfinkel, and Ofer Strichman. Decision diagrams for linear arithmetic. In Formal Methods
in Computer-Aided Design, pages 53–60. IEEE Press, 2009.
[4] Patrick Cousot and Nicolas Halbwachs. Automatic discovery of linear restraints among variables of a program.
In Principles of Programming Languages, pages 84–96. ACM Press, 1978.
[5] Isil Dillig, Thomas Dillig, and Alex Aiken. Cuts from proofs: A complete and practical technique for solving
linear inequalities over integers. In Computer Aided Verification, volume 5643 of LNCS, pages 233–247.
Springer, 2009.
[6] Paul Feautrier. Parametric integer programming. RAIRO Recherche Opérationnelle, 22:243–268, 1988.
[7] Paul Feautrier, Jean-François Collard, and Cédric Bastoul. A Solver for Parametric Integer Programming
Problems, 2007.
[8] Thomas Gal and Harvey J. Greenberg, editors. Advances in Sensitivity Analysis and Parametric Programming.
Kluwer Academic Publishers, 1997.
[9] Thomas Gawlitza and Helmut Seidl. Precise fixpoint computation through strategy iteration. In Programming
Languages and Systems, volume 4421 of LNCS, pages 300–315. Springer, 2007.
[10] Thomas Gawlitza and Helmut Seidl. Precise interval analysis vs. parity games. In Formal Methods, volume
5014 of LNCS, pages 342–357. Springer, 2008.
[11] Thomas Gawlitza and Helmut Seidl. Abstract interpretation over zones without widening. In Workshop on
Invariant Generation, volume 1 of EPiC, pages 12–43. EasyChair, 2012.
[12] Khalil Ghorbal, Franjo Ivancic, Gogul Balakrishnan, Naoto Maeda, and Aarti Gupta. Donut domains: Efficient non-convex domains for abstract interpretation. In Verification, Model Checking, and Abstract Interpretation, volume 7148 of LNCS, pages 235–250. Springer, 2012.
[13] Arie Gurfinkel and Sagar Chaki. Boxes: A symbolic abstract domain of boxes. In Static Analysis Symposium,
volume 6337 of LNCS, pages 287–303. Springer, 2010.
[14] Arie Gurfinkel, Sagar Chaki, and Samir Sapra. Efficient predicate abstraction of program summaries. In
NASA Formal Methods, volume 6617 of LNCS, pages 131–145. Springer, 2011.
[15] D. Halperin. Arrangements. In Jacob E. Goodman and Joseph O'Rourke, editors, Handbook of Discrete and
Computational Geometry (2nd Edition), chapter 24, pages 529–562. CRC Press, Inc., 2004.
[16] A. Holder. Parametric LP analysis. Encyclopedia of Operations Research and Management Science, 2011.
[17] Leonid G. Khachiyan. Polynomial algorithms in linear programming. USSR Computational Mathematics and
Mathematical Physics, 20(1):53–72, 1980.
[18] H.W. Lenstra, A.K. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Mathematische
Annalen, 261:515–534, 1982.
[19] Jesús A. De Loera, David Haws, Raymond Hemmecke, Peter Huggins, and Ruriko Yoshida. Three kinds
of integer programming algorithms based on Barvinok’s rational functions. In Integer Programming and
Combinatorial Optimization, volume 3064 of LNCS, pages 244–255. Springer, 2004.
[20] L. Mauborgne and X. Rival. Trace partitioning in abstract interpretation based static analyzers. In European
Symposium on Programming, volume 3444 of LNCS, pages 5–20. Springer, 2005.
[21] B. Mihaila, A. Sepp, and A. Simon. Widening as abstract domain. In NASA Formal Methods, volume 7871
of LNCS, pages 170–186. Springer, 2013.
[22] David Monniaux. Automatic modular abstractions for linear constraints. In Principles of Programming
Languages, pages 140–151. ACM Press, 2009.
[23] David Monniaux. Quantifier elimination by lazy model enumeration. In Computer Aided Verification, volume
6174 of LNCS, pages 585–599. Springer, 2010.
[24] William Pugh. The omega test: a fast and practical integer programming algorithm for dependence analysis.
In Supercomputing, pages 4–13. IEEE Press, 1991.
[25] Jan Reineke and Johannes Doerfert. Architecture-parametric timing analysis. In Real-Time and Embedded
Technology and Applications Symposium, April 2014. To appear.
arXiv:1604.03115v2 [math.CO] 4 Apr 2017
A BROAD CLASS OF SHELLABLE LATTICES
JAY SCHWEIG AND RUSS WOODROOFE
Abstract. We introduce a new class of lattices, the modernistic lattices, and their duals,
the comodernistic lattices. We show that every modernistic or comodernistic lattice has
shellable order complex. We go on to exhibit a large number of examples of (co)modernistic
lattices. We show comodernism for two main families of lattices that were not previously
known to be shellable: the order congruence lattices of finite posets, and a weighted generalization of the k-equal partition lattices.
We also exhibit many examples of (co)modernistic lattices that were already known to
be shellable. To begin with, the definition of modernistic is a common weakening of the
definitions of semimodular and supersolvable. We thus obtain a unified proof that lattices
in these classes are shellable.
Subgroup lattices of solvable groups form another family of comodernistic lattices that
were already proven to be shellable. We show not only that subgroup lattices of solvable
groups are comodernistic, but that solvability of a group is equivalent to the comodernistic
property on its subgroup lattice. Indeed, the definition of comodernistic exactly requires
on every interval a lattice-theoretic analogue of the composition series in a solvable group.
Thus, the relation between comodernistic lattices and solvable groups resembles, in several
respects, that between supersolvable lattices and supersolvable groups.
1. Introduction
Shellings are a main tool in topological combinatorics. An explicit shelling of a simplicial complex ∆ simultaneously shows the sequentially Cohen-Macaulay property, computes
homotopy type, and gives a cohomology basis. Frequently, a shelling also gives significant
insight into the homeomorphy of ∆. The downside is that shellings are often difficult to find,
and generally require a deep understanding of the complex.
In this paper, we describe a large class of lattices whose order complexes admit shellings.
The shellings are often straightforward to explicitly write down, and so give a large amount
of information about the topology of the order complex. Included in our class of lattices are
many examples which were not previously understood to be closely related.
The question that first motivated this research project involved shelling a particular family
of lattices. The order congruence lattice O(P ) of a finite poset P is the subposet of the
partition lattice consisting of all equivalence classes arising as the level sets of an order-preserving function. Order congruence lattices interpolate between Boolean lattices and
partition lattices, as we will make precise later. Such lattices were already considered by
Sturm in [39]. More recently, Körtesi, Radeleczki and Szilágyi showed the order congruence
lattice of any finite poset to be graded and relatively complemented [22], while Jenča and
Sarkoci showed such lattices to be Cohen-Macaulay [21].
The Cohen-Macaulay result naturally suggested to us the question of whether every order
congruence lattice is shellable. After proving the answer to this question to be “yes”, we
noticed that our techniques apply to a much broader class of lattices. Indeed, a large number
of the lattices previously shown to be shellable lie in our class. Thus, our main result
(Theorem 1.2 below) unifies numerous results on shellability of order complexes of lattices,
in addition to proving shellability for new examples. It is our belief that our results will
be useful to other researchers. Finding a shelling of a lattice can be a difficult problem. In
many cases, showing that a lattice is in our class may be simpler than constructing a shelling
directly.
All lattices, posets, simplicial complexes, and groups considered in this paper will be finite.
1.1. Modernistic and comodernistic lattices. We now define the broad class of lattices
described in the title and introduction. Our work relies heavily on the theory of modular
elements in a lattice. Recall that an element m of a lattice L is left-modular if whenever
x < y are elements of L, then the expression x ∨ m ∧ y can be written without parentheses,
that is, that (x ∨ m) ∧ y = x ∨ (m ∧ y). An equivalent definition is that m is left-modular
if m is not the nontrivial element in the short chain of any pentagonal sublattice of L; see
Lemma 2.10 for a more precise statement.
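For instance (an illustration of ours, not from the paper): in the pentagon lattice N_5, with elements 0̂ < a < b < 1̂ and a fifth element m incomparable to both a and b, the element m fails to be left-modular, since for x = a < y = b we get

\[
  (a \vee m) \wedge b \;=\; \hat{1} \wedge b \;=\; b,
  \qquad\text{while}\qquad
  a \vee (m \wedge b) \;=\; a \vee \hat{0} \;=\; a.
\]

Thus the two parenthesizations of a ∨ m ∧ b disagree, and m is the nontrivial element in the short chain of the pentagon.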
Our key object of study is the following class of lattices.
Definition 1.1. We say that a lattice L is modernistic if every interval in L has an atom
that is left-modular (in the given interval). We say that L is comodernistic if the dual of L
is modernistic, that is, if every interval has a left-modular coatom.
Our main theorem is as follows. (We will recall the definition of a CL-labeling in Section 2.3
below.)
Theorem 1.2. If L is a comodernistic lattice, then L has a CL-labeling.
Corollary 1.3. If L is either comodernistic or modernistic, then the order complex of L is
shellable.
The CL-labeling is explicit from the left-modular coatoms, so Theorem 1.2 also gives a
method for computing the Möbius function of L. See Lemma 3.7 for details.
We find it somewhat surprising that Theorem 1.2 was not proved before now. We speculate
that the reason may be the focus of previous authors on atoms and CL-labelings, whereas
Theorem 1.2 requires dualizing exactly one of the two.
Remark 1.4. The name "modernistic" comes from contracting "atomically modular" to "atomically mod". Since atomic was a common superlative from the late 1940's, and since the
mod (or modernistic) subculture was also active at about the same time, we find the name
to be somewhat appropriate, as well as short and perhaps memorable.
1.2. Examples and applications. Theorem 1.2 has a large number of applications, which
we briefly survey now. First, we can now solve the problem that motivated the project.
Theorem 1.5. If P is any poset, then the order congruence lattice O(P ) is comodernistic,
hence CL-shellable.
We also recover as examples many lattices already known to be shellable. The following
theorem lists some of these, together with references to papers where they are shown to be
shellable.
Proposition 1.6. The following lattices are comodernistic, hence CL-shellable:
(1) Supersolvable and left-modular lattices, and their order duals [2, 23, 26].
(2) Order duals of semimodular lattices [2]. (I.e., semimodular lattices are modernistic.)
(3) k-equal partition lattices [7], and their type B analogues [5].
(4) Subgroup lattices of solvable groups [33, 43].
We comment that many of these lattices are shown in the provided references to have
EL-labelings. Theorem 1.2 provides only a CL-labeling. Since the CL-labeling constructed
is explicit from the left-modular elements, Theorem 1.2 provides many of the benefits given
by an EL-labeling. We do not know if every comodernistic lattice has an EL-labeling, and
leave this interesting question open.
Experts in lattice theory will immediately recognize items (1) and (2) from Proposition 1.6
as being comodernistic. Theorem 1.2 thus unifies the theory of these well-understood lattices
with the more difficult lattices on the list. The CL-labeling that we construct in the proof
of Theorem 1.2 can moreover be seen as a generalization of the standard EL-labeling for a
supersolvable lattice, further connecting these classes of lattices.
We will prove that k-equal partition lattices and their type B analogues are comodernistic
in Section 6. In Section 6.3 we will show the same for a new generalization of k-equal
partition lattices. The proofs show, broadly speaking, that coatoms of (intervals in) these
subposets of the partition lattice inherit left-modularity from that in the partition lattice.
Although modernism and comodernism give a simple and unified framework for showing
shellability of many lattices, not every shellable lattice is (co)modernistic. For an easy
example, the face lattice of an n-gon has no left-modular elements when n > 3, so is neither
modernistic nor comodernistic.
1.3. Further remarks on subgroup lattices. We can expand on the connection with
group theory suggested by item (4) of Proposition 1.6. For a group G, the subgroup lattice
referred to in this item consists of all the subgroups of G, ordered by inclusion; and is denoted
by L(G).
Theorem 1.7. If G is a group, then G is solvable if and only if L(G) is comodernistic.
Stanley defined supersolvable lattices in [34] to abstract the interesting combinatorics of
the subgroup lattices of supersolvable groups to general lattices. Theorem 1.7 says that
comodernism is one possibility for a similar abstraction for solvable groups. A result of a
similar flavor was earlier proved by Schmidt [30]; our innovation with comodernism is to
require a lattice-theoretic analogue of a composition series in every interval of the lattice.
We further discuss possible notions of solvability for lattices in Section 5.
Shareshian in [33] showed that a group G is solvable if and only if L(G) is shellable.
Theorems 1.2 and 1.7 give a new proof of the “only if” direction of this result. For the “if”
direction, Shareshian needed a hard classification theorem from finite group theory. Our proof
of Theorem 1.7 does not rely on hard classification theorems. On the other hand, it follows
directly from Shareshian’s proof that G is solvable if L(G) is sequentially Cohen-Macaulay.
Thus, Shareshian’s Theorem gives a topological characterization of solvable groups. The
characterization given in Theorem 1.7 is lattice-theoretic, rather than topological. It is
an interesting open problem to give a classification-free proof that if L(G) is sequentially
Cohen-Macaulay, then G is solvable.
1.4. Organization. This paper is organized as follows. In Section 2 we recall some of the
necessary background material. We prove Theorem 1.2 (our main theorem) in Section 3.
In the remainder of the paper we show how to apply comodernism and Theorem 1.2
to various classes of lattices. The techniques may be illustrative for those who wish to
prove additional classes of lattices to be comodernistic. In Section 4, we examine order
congruence lattices, and prove Theorem 1.5. In Section 5 we prove Theorem 1.7, and argue
for comodernism as a notion of solvable for lattices. We close in Section 6 by showing that
k-equal partition lattices (and variations thereof) are comodernistic.
Acknowledgements
We would like to thank Vincent Pilaud and Michelle Wachs for their helpful remarks.
We also thank the anonymous referee for his or her thoughtful comments. The example in
Figure 5.1 arose from a question of Hugh Thomas. The second author is grateful to the
University of Miami for their hospitality in the spring of 2016, during which time a part of
the paper was written.
2. Preliminaries
We begin by recalling some necessary background and terminology. Many readers will be
able to skip or skim this section, and refer back to it as necessary.
2.1. Posets, lattices, and order complexes. A poset P is bounded if P has a unique
least element 0̂ and greatest element 1̂.
Associated to a bounded poset P is the order complex, denoted ∆P , a simplicial complex
whose faces consist of all chains (totally ordered subsets) in P \ {0̂, 1̂}. In particular, the
vertices of ∆P are the chains of length 0 in P \ {0̂, 1̂}, that is, the elements of P \ {0̂, 1̂}.
The importance of the order complex in poset theory arises since the Möbius function µ(P )
(important for inclusion-exclusion) is given by the reduced Euler characteristic χ̃(∆P ).
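As a tiny example (ours): for the Boolean lattice B_2 = {0̂, a, b, 1̂}, the order complex ∆B_2 consists of the two isolated vertices a and b, and indeed

\[
  \mu(B_2) \;=\; \tilde{\chi}(\Delta B_2) \;=\; 2 - 1 \;=\; 1.
\]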
We often say that a bounded poset P possesses a property from simplicial topology (such
as “shellability”), by which we mean that ∆P has the same property.
We say that a poset P is Hasse-connected if the Hasse diagram of P is connected as a
graph. That is, P is Hasse-connected if and only if for any x, y ∈ P , there is a sequence
x = x0 , x1 , . . . , xk = y such that xi is comparable to xi+1 for each i.
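Since any relation a < b in a finite poset is witnessed by a path of cover relations, Hasse-connectivity may equivalently be tested in the comparability graph. A small illustrative sketch (ours, with the poset given as an element list and a comparison function):

```python
def is_hasse_connected(elements, leq):
    # Breadth-first search in the comparability graph; its connectivity
    # agrees with that of the Hasse diagram, since any relation a < b
    # is realized by a path of covers.
    seen, frontier = {elements[0]}, [elements[0]]
    while frontier:
        x = frontier.pop()
        for y in elements:
            if y not in seen and (leq(x, y) or leq(y, x)):
                seen.add(y)
                frontier.append(y)
    return len(seen) == len(elements)

# The poset with relations 1 < 3 and 2 < 3 is Hasse-connected.
lt = {(1, 3), (2, 3)}
print(is_hasse_connected([1, 2, 3], lambda a, b: a == b or (a, b) in lt))  # True
```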
A poset L is a lattice if every two elements x, y ∈ L have a unique greatest lower bound
(the meet x ∧ y) and unique least upper bound (the join x ∨ y). It is obvious that every
lattice is bounded, hence has an order complex ∆L.
A poset is graded if all its maximal chains have the same length, where the length of a
chain is one less than its cardinality. The height of a bounded poset P is the length of the
longest chain in P , and the height of an element x is the height of the interval [0̂, x]. An
atom of a bounded poset P is an element of height 1.
The order dual of a poset P is the poset P ∗ with reversed order relation, so that x <∗ y
in P ∗ exactly when x > y in P . Poset definitions may be applied to the dual by prepending
a “co”: for example, an element x is a coatom if x is an atom in P ∗ .
For more background on poset and lattice theory from a general perspective, we refer to
[37]. For more on order complexes and poset topology, we refer to [42].
2.2. Simplicial complexes and shellings. We assume basic familiarity with homology
and cohomology, as exposited in e.g. [18, 27].
A shelling of a simplicial complex ∆ is an ordering of the facets of ∆ that obeys certain
conditions, the precise details of which will not be important to us. Not every simplicial
complex has a shelling; those that do are called shellable.
We remark that in the early history of the subject, shellings were defined only for balls
and spheres [29]. Later, shellings were considered only for pure complexes, that is, complexes
all of whose facets have the same dimension. Nowadays, shellings are studied on arbitrary
simplicial complexes [7].
Shellable complexes are useful for showing a complex to satisfy the Cohen-Macaulay (in
the pure case) or sequentially Cohen-Macaulay property (more generally). These properties
are important in commutative algebra as well as combinatorics.
We refer to [36] for more on shellable and Cohen-Macaulay complexes.
2.3. CL-labelings and EL-labelings. The definition of a shelling is often somewhat unwieldy to work with directly, and it is desirable to find tools through which to work. One
such tool is given by a CL-labeling, which we will now define.
If x and y are elements in a poset P , we say that y covers x when x < y but there is no
z ∈ P so that x < z < y. In this situation, we write x ⋖ y, and may also say that x ⋖ y is a
cover relation. Thus, a cover relation is an edge in the Hasse diagram of P . A rooted cover
relation is a cover relation x ⋖ y together with a maximal chain from 0̂ to x (called the root).
A rooted interval is an interval [x, y] together with a maximal chain r from 0̂ to x. In this
situation, we use the notation [x, y]r . Notice that every atomic cover relation of [x, y]r can
be rooted by r.
A chain-edge labeling of a bounded poset P is a function λ that assigns an element of an
ordered set (which will always for us be Z) to each rooted cover relation of P . Then λ assigns
a word over Z to each maximal chain on any rooted interval by reading the cover relation
labels in order, so e.g. the word associated with 0̂ ⋖ x1 ⋖ x2 ⋖ x3 ⋖ . . . is λ(0̂ ⋖ x1 , 0̂)λ(x1 ⋖
x2 , 0̂ ⋖ x1 )λ(x2 ⋖ x3 , 0̂ ⋖ x1 ⋖ x2 ) · · · .
Remark 2.1. Since many researchers may be less familiar with CL-labelings and the machinery behind them, it may be helpful to think of a chain-edge labeling via the following
dynamical process. Begin at 0̂, and walk up the maximal chain 0̂ = x0 ⋖ x1 ⋖ x2 ⋖ · · · ⋖ 1̂.
At each step i, assign a label to the cover relation xi−1 ⋖ xi . In assigning the label, you are allowed
to look backwards at where you have been, but are not allowed to look forwards at where
you may go. At each step, you add the assigned label to the end of a word associated with
the maximal chain.
We say that a maximal chain c is increasing if the word associated with c is strictly
increasing, and decreasing if the word is weakly decreasing. We order maximal chains by the
lexicographic order on the associated words.
Definition 2.2. A CL-labeling is a chain-edge labeling that satisfies the following two conditions on each rooted interval [x, y]r :
(1) There is a unique increasing maximal chain m on [x, y]r , and
(2) the increasing chain m is strictly earlier in the lexicographic order than any other
maximal chain on [x, y]r .
If a CL-labeling λ assigns the same value to every x ⋖ y irrespective of the choice of root,
then we say λ is an EL-labeling.
Björner [2] and Björner and Wachs [6, 7] introduced CL-labelings, and proved the following
theorem.
Theorem 2.3. [8, Theorem 5.8] If λ is a CL-labeling of the bounded poset P , then the
lexicographic order on the maximal chains of P is a shelling order of ∆P . In this case,
a cohomology basis for ∆P is given by the decreasing maximal chains of P , and ∆P is
homotopy equivalent to a bouquet of spheres in bijective correspondence with the decreasing
maximal chains.
For this reason, bounded posets with a CL- or EL-labeling are often called CL- or EL-shellable. Since the order complex of P and that of the order dual of P coincide, either a
From the cohomology basis, it is straightforward to compute Euler characteristic, hence
also Möbius number.
Corollary 2.4. [8, Proposition 5.7] If λ is a CL-labeling of the bounded poset P , then the
Möbius number of P is given by
µ(P ) = χ̃(∆P ) = #even length decreasing maximal chains in P
− #odd length decreasing maximal chains in P.
2.4. Order congruence lattices. If P and Q are posets, then a map ϕ : P → Q is order-preserving if whenever x ≤ y, it also holds that ϕ(x) ≤ ϕ(y). The level set partition of a
map ϕ : P → Q is the partition with blocks of the form ϕ−1 (q). If π is the level set partition
of an order preserving map ϕ : P → Q, then π is an order partition of P . Since every poset
has a linear extension, it is easy to see that it would be equivalent to restrict the definition
of order partition to the case where Q = Z.
As previously defined, the order congruence lattice O(P ) is the subposet of the partition
lattice ΠP consisting of all order partitions of P . The cover relations in O(P ) correspond to
merging blocks in an order partition, subject to a certain compatibility condition.
Example 2.5. Consider P = [3] with the usual order. Then the function mapping 1, 2 to 1
and 3 to 2 is order-preserving, so 12 | 3 ∈ O([3]). Similarly, the partition 1 | 23 ∈ O([3]). It
is not difficult to see, however, that there is no order-preserving map with level set partition
13 | 2. Thus, the lattice O([3]) is isomorphic to the Boolean lattice on 2 elements.
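For readers who wish to experiment, the order partitions of a small poset can be enumerated by brute force over order-preserving maps into a chain; |P| levels always suffice, since every poset has a linear extension. A minimal Python sketch (ours):

```python
from itertools import product

def order_partitions(elements, leq):
    # Collect the level-set partitions of all order-preserving maps
    # from the poset into the chain {0, ..., |P|-1}.
    found = set()
    for f in product(range(len(elements)), repeat=len(elements)):
        val = dict(zip(elements, f))
        if all(val[a] <= val[b] for a in elements for b in elements if leq(a, b)):
            found.add(frozenset(
                frozenset(e for e in elements if val[e] == v) for v in set(f)))
    return found

# O([3]) for the chain 1 < 2 < 3 has 4 elements, a Boolean lattice of rank 2.
print(len(order_partitions([1, 2, 3], lambda a, b: a <= b)))  # 4
```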
More generally, an elementary argument shows that the order congruence lattice of a chain
on n elements is isomorphic to a Boolean lattice on n − 1 elements. It is obvious that the
order congruence lattice of an antichain is the usual partition lattice. Thus, order congruence
lattices interpolate between Boolean lattices and partition lattices.
It is easy to confuse O(P ) with another closely related lattice defined on a poset P . We
say that a subset S ⊆ P is order convex if whenever a ≤ b ≤ c with a, c ∈ S, then also
b ∈ S. The order convexity partition lattice of P , denoted Oconv (P ), consists of all partitions
where every block is order convex. There is some related literature on the related lattice of
all order convex subsets of a poset, going back to [1].
We do not know if order convexity lattices must always be comodernistic or shellable, as
intervals of the form [π, 1̂] in Oconv seem difficult to describe.
Example 2.6. Consider the bowtie poset B, with elements a1 , a2 , b1 , b2 and relations ai < bj
(for i, j ∈ {1, 2}). As B has height 1, all subsets are order convex, so that Oconv (B) ≅ Π4 .
However, the partitions a1 b1 | a2 b2 and a1 b2 | a2 b1 are not order congruence partitions, so are
not in O(B).
We additionally caution the reader that the notion of order congruence considered here
is less restrictive than that considered in [28], where congruences are required to respect
lower/upper bounds.
In another point of view, it is straightforward to show that order-preserving partitions are
in bijective correspondence with certain quotient objects of P . Thus, the order congruence
lattice assigns a lattice structure to quotients of P . See [21, Section 3] for more on the
quotient view of O(P ).
For our purposes, it will be enough to understand intervals above atoms in O(P ). Say
that elements x, y of poset P are compatible if either x ⋖ y, y ⋖ x, or x, y are incomparable.
If x, y are compatible in P , then Px∼y is the poset obtained by identifying x and y. That is,
Px∼y is obtained from P by replacing x, y with w, subject to the relations z < w whenever
z < x or z < y, and z > w whenever z > x or z > y. We remark in passing that this
identification is an easy special case of the quotienting viewpoint discussed above.
Lemma 2.7. Let P be a poset. A partition π of P is an atom of O(P ) if and only if π has
exactly one non-singleton block consisting of compatible elements {x, y}. In this situation,
we have the lattice isomorphism [π, 1̂]O(P ) ≅ O(Px∼y ).
Repeated application of Lemma 2.7 allows us to understand any interval of the form [π, 1̂]
in O(P ).
Although we will not need this, intervals of the form [0̂, π] are also not difficult to
understand. Let π have blocks B1 , B2 , . . . , Bk . It is well-known (see e.g. [37, Example
3.10.4]) that an interval of this form in the full partition lattice is isomorphic to the product
of smaller partition lattices ΠB1 × · · · × ΠBk . It is straightforward to see via the order-preserving mapping P → Z definition that a similar result holds in the order congruence
lattice. That is, [0̂, π] in O(P ) is lattice-isomorphic to O(B1 ) × · · · × O(Bk ), where Bi refers
to the induced subposet on Bi ⊆ P . Combining this observation with Lemma 2.7 allows
us to write any interval in O(P ) as a product of order congruence lattices of quotients of
subposets. We find it simpler to give more direct arguments, but readers familiar with poset
products may appreciate this connection.
2.5. Supersolvable and semimodular lattices. We previously defined an element m of
a lattice L to be left-modular if (x ∨ m) ∧ y = x ∨ (m ∧ y) for all pairs x < y.
A lattice is modular if every element is left-modular. A lattice L is usually defined to be
semimodular if whenever a ∧ b ⋖ a in L, then b ⋖ a ∨ b. We prefer the following equivalent
definition, which highlights the close connection between semimodularity and comodernism:
Lemma 2.8 (see e.g. [38, essentially Theorem 1.7.2]). A lattice L is semimodular if and only if for every interval [x, y] of L, every atom of [x, y] is left-modular (as an element of [x, y]).
Thus, the definition of a modernistic lattice is obtained from that of a semimodular lattice
by weakening a single universal quantifier to an existential quantifier.
An M-chain in a lattice is a maximal chain consisting of left-modular elements. A lattice
is left-modular if it has an M-chain, and supersolvable if it is graded and left-modular.
Supersolvable lattices were originally defined by Stanley [34], in a somewhat different form.
The theory of left-modular lattices was developed in a series of papers [10, 23, 24, 26], and
it was only in [26] that it was noticed that Stanley’s original definition of supersolvable is
equivalent to graded and left-modular.
There is an explicit cohomology basis for a supersolvable lattice, which does not seem to be as well known as it deserves. A chain of complements to an M-chain m = {0̂ =
m0 ⋖ m1 ⋖ · · · ⋖ mn = 1̂} is a chain of elements c = {0̂ = cn ⋖ cn−1 ⋖ · · · ⋖ c0 = 1̂} so that
each ci is a complement to mi , that is, so that ci ∨ mi = 1̂ and ci ∧ mi = 0̂. A less explicit
form of the following appears in [2, 34], and a special case in [41].
Theorem 2.9. If L is a supersolvable lattice with a fixed M-chain m, then a cohomology
basis for ∆L is given by the chains of complements to m. In particular, the Möbius number
of L is (up to sign) the number of such chains.
A (strong form of a) homology basis for supersolvable lattices appears in [32].
2.6. Left-modularity. We now recall some additional basic properties of left-modular elements. First, we state more carefully the equivalent “no pentagon” condition mentioned in
the Introduction.
Lemma 2.10. [24, Proposition 1.5] An element m of the lattice L is left-modular if and only if for every a < c in L, we have a ∧ m ≠ c ∧ m or a ∨ m ≠ c ∨ m.
The pentagon lattice (usually notated as N5 ) consists of elements 0̂, 1̂, a, b, c with the only
nontrivial relation being a < c. Lemma 2.10 says exactly that m is left-modular if and only
if m never plays the role of b in a sublattice of L isomorphic to N5 . Thus, Lemma 2.10 is a
pleasant generalization of the characterization of modular lattices as those with no pentagon
sublattices.
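The "no pentagon" criterion is also easy to test mechanically. The following sketch (our illustration, with meet and join supplied as functions) checks left-modularity of an element via Lemma 2.10:

```python
from itertools import combinations

def is_left_modular(elements, meet, join, m):
    # Lemma 2.10: m is left-modular iff no pair a < c has both
    # a /\ m = c /\ m and a \/ m = c \/ m.
    for a in elements:
        for c in elements:
            if a != c and meet(a, c) == a:  # a < c, since a <= c iff a /\ c = a
                if meet(a, m) == meet(c, m) and join(a, m) == join(c, m):
                    return False
    return True

# In the (modular) Boolean lattice of subsets of {1, 2}, every element passes.
els = [frozenset(s) for r in range(3) for s in combinations((1, 2), r)]
print(all(is_left_modular(els, frozenset.intersection, frozenset.union, m)
          for m in els))  # True
```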
Another useful fact is:
Lemma 2.11. [23, Proposition 2.1.5] If m is a left-modular element of the lattice L, and
x < y in L, then x ∨ m ∧ y is a left-modular element of the interval [x, y].
Finally, we give an alternate characterization of left-modularity of coatoms, in the flavor
of Lemma 2.8. This characterization will be useful for us in the proof of Theorems 1.2 and
3.5, and is also often easy to check.
Lemma 2.12 (Left-modular Coatom Criterion). Let m be a coatom of the lattice L. Then m is left-modular in L if and only if for every y such that y ≰ m we have m ∧ y ⋖ y.
Proof. If m ∧ y < z < y, then the pair z < y violates the condition of Lemma 2.10 for m, hence m is not left-modular. Conversely, if a pair z < y violates the condition of Lemma 2.10, then y ≰ m, and m ∧ y = m ∧ z < z < y.
Corollary 2.13. If L is a lattice and m a left-modular coatom of L, then for any x < y in
L either x ∨ m ∧ y = y or else x ∨ m ∧ y ⋖ y.
Proof. If x 6≤ m or y ≤ m, then x ∨ m ∧ y = y. Otherwise, apply Lemmas 2.11 and 2.12.
2.7. Group theory. We recall that a group G is said to be solvable if either of the following
equivalent conditions is met:
(1) There is a chain 1 = N0 ⊂ N1 ⊂ N2 ⊂ · · · ⊂ Nk = G of subgroups in G, so that each
Ni is normal in G, and so that each factor Ni /Ni−1 is abelian.
(2) There is a chain 1 = H0 ⊂· H1 ⊂· H2 ⊂· · · · ⊂· Hn = G of subgroups in G, so that
each Hi is normal in Hi+1 (but is not necessarily normal in G). Note that it follows
in this case that each factor Hi /Hi−1 is cyclic of prime order.
Since every subgroup of a solvable group is solvable, an alternative form of the latter is:
(2’) For every subgroup H ⊆ G, there is a subgroup K ⊂· H such that K ⊳ H.
A subgroup H of G is said to be subnormal if there is a chain H ⊳ L1 ⊳ L2 ⊳ · · · ⊳ G. Thus,
Condition (2) says a group is solvable if and only if G has a maximal chain consisting of
subnormal subgroups.
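For concreteness (an illustration assuming the SymPy library; not part of the original text), solvability of small permutation groups can be checked directly:

```python
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

# S4 has the subnormal chain 1 < Z2 < V4 < A4 < S4, so it is solvable;
# A5 is simple and nonabelian, so it is not.
print(SymmetricGroup(4).is_solvable)    # True
print(AlternatingGroup(5).is_solvable)  # False
```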
A group is supersolvable if there is a maximal chain in L(G) consisting of subgroups
normal in G. Thus, a group is supersolvable if there is a chain which simultaneously meets
the conditions in (1) and (2). One important fact about the subgroup lattice of supersolvable
groups is:
Theorem 2.14. [20] For a group G, the subgroup lattice L(G) is graded if and only if G is
supersolvable.
Subgroup lattices were one of the motivations for early lattice theorists in making the
definition of (left-)modularity. It follows easily from the Dedekind Identity (see Lemma 5.3)
that if N ⊳ G, then N is left-modular in L(G). In particular, if G is a supersolvable group,
then L(G) is a supersolvable lattice.
Moreover, a normal subgroup N satisfies a stronger condition. An element m of a lattice
is said to be modular (or two-sided-modular ) if it neither plays the role of b nor of a in any
pentagon sublattice, where a, b are as in the discussion following Lemma 2.10. A second
application of the Dedekind Identity shows any normal subgroup to be modular in L(G).
The following lemma, whose proof is immediate from the definitions, says that left-modularity and two-sided-modularity are essentially the same for the purpose of comodernism arguments.
Lemma 2.15. If L is a lattice, and m is a maximal left-modular element, then m is modular.
We refer to e.g. [12, Chapter A] for further general background on group theory, and to
[31] for the reader interested in further background on lattices of subgroups.
3. Proof of Theorem 1.2
3.1. sub-M-chains. As discussed in Section 2.5, a lattice is left-modular if it has an M-chain, that is, a maximal chain consisting of left-modular elements. The reader may be reminded of the maximal chain of normal subgroups in the definition of a supersolvable group.
We extend the notion of M-chain to comodernistic lattices. A maximal chain 0̂ = m0 ⋖
m1 ⋖ · · · ⋖ mn = 1̂ in L is a sub-M-chain if for every i, the element mi is left-modular in
the interval [0̂, mi+1 ]. The reader may be reminded of the maximal subnormal chain in a
solvable group. It is straightforward to show that a lattice is comodernistic if and only if
every interval has a sub-M-chain.
Stanley [35] and Björner [2] showed that any supersolvable lattice has an EL-labeling, and Liu [23] extended this to any left-modular lattice. If m(ss) = {0̂ = m0(ss) ⋖ m1(ss) ⋖ · · · ⋖ mn(ss) = 1̂}
is an M-chain, then the EL-labeling is defined as follows:
(3.1)    λss(x ⋖ y) = max{i : x ∨ mi−1(ss) ∧ y = x} = min{i : x ∨ mi(ss) ∧ y = y}.
The essential observation involved in proving Theorem 1.2 is that, if we replace the M-chain used for λss with a sub-M-chain, then we can still label the atomic cover relations of L in the same manner as in λss . More precisely, if m is a sub-M-chain in a lattice L, then let
(3.2)    λ(0̂ ⋖ a) = 1 + max{i : mi ∧ a = 0̂}.
Adding 1 is not essential, and we do so only so that the labels will be in the range 1 through
n, rather than 0 through n − 1.
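In code, the atom labeling (3.2) is a one-liner; the following sketch (ours, with the meet supplied as a function) labels a cover 0̂ ⋖ a against a fixed chain m0, . . . , mn:

```python
def atom_label(mchain, meet, a):
    # (3.2): one plus the largest index i with m_i /\ a = 0;
    # mchain[0] is the bottom element of the lattice.
    bottom = mchain[0]
    return 1 + max(i for i, m in enumerate(mchain) if meet(m, a) == bottom)

# In the Boolean lattice of subsets of {1, 2} with the M-chain
# {} < {1} < {1, 2}, the atom {2} receives label 2.
mchain = [frozenset(), frozenset({1}), frozenset({1, 2})]
print(atom_label(mchain, frozenset.intersection, frozenset({2})))  # 2
```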
3.2. The CL-labeling. We construct the full CL-labeling recursively from (3.2).
We say that a chain c is indexed by a subset S = {i1 < · · · < ik } of the integers if
c = {ci1 < · · · < cik }. That is, we associate an index or label with each element of the chain.
Notice that we require the indices to (strictly) respect order.
We will need the following somewhat-technical lemma to handle non-graded lattices.
Lemma 3.1. Let L be a lattice with a sub-M-chain m of length n. Then no chain of L has
length greater than n.
Proof. We proceed by induction on n. The base case is trivial. Suppose that 0̂ = c0 ⋖ c1 ⋖
· · · ⋖ cℓ = 1̂ is some chain in L. Let mn−1 ⋖ 1̂ be the unique coatom in m, and i be the
greatest index such that ci ≤ mn−1 . Then cj ∨ mn−1 = 1̂ for any j > i, so by the left-modular
property
ci+1 ∧ mn−1 < ci+2 ∧ mn−1 < · · · < cℓ ∧ mn−1 = mn−1 .
Thus,
0̂ = c0 ⋖ c1 ⋖ · · · ⋖ ci = ci+1 ∧ mn−1 < ci+2 ∧ mn−1 < · · · < cℓ ∧ mn−1 = mn−1
is a chain of length ℓ − 1 on [0̂, mn−1 ], and by induction ℓ − 1 ≤ n − 1.
Definition 3.2. Let L be a comodernistic lattice of height n. Take a fixed sub-M-chain m
given as 0̂ = m0 ⋖ m1 ⋖ · · · ⋖ mn = 1̂ as the starting point for a recursive construction.
Let x ⋖ a, r be a rooted cover relation. Assume by recursion that we are given a sub-M-chain m(r) on [x, 1̂]. Further assume that the elements of m(r) are indexed by a subset S ⊆ [n] ∪ {0}, and that 1̂ = mn(r). Label x ⋖ a as in (3.2), that is, as
(3.3)    λ(x ⋖ a) = 1 + max{i : mi(r) ∧ a = x}.
To continue the recursion, it remains to construct an indexed sub-M-chain m(r∪a) on [a, 1̂]. Suppose that λ(x ⋖ a) = 1 + i. It is clear that mi(r) is the greatest element of m(r) such that a ≰ mi(r). By abuse of notation, let m>i(r) be the portion of m(r) that is greater than mi(r), and let S>i be the indices greater than i on m(r). Thus, the labels of m>i(r) are exactly S>i. Let S<i = S \ (S>i ∪ {i}) similarly be the indices less than i on m(r).
Now by construction, all elements of m>i(r) are greater than a. By the comodernistic property, the submodular chain m>i(r) may be completed to a sub-M-chain m(r∪a) for [a, 1̂]. Preserve the indices on m>i(r), and index the elements of m(r∪a) \ m(r) by elements of S<i. It follows by applying Lemma 3.1 on [0̂, mi+1] that there are enough indices available in S<i to perform such indexing.
The recursion can now continue, which completes the definition of the CL-labeling.
Notation 3.3. Throughout the remainder of Section 3, we fix L to be a comodernistic lattice
of height n, with a sub-M-chain m = {0̂ = m0 ⋖ m1 ⋖ · · · ⋖ mn = 1̂}. Indeed, we select
a sub-M-chain on every interval, which uniquely determines a chain-edge labeling λ as in
Definition 3.2.
Remark 3.4. By repeated application of Lemma 2.11, it follows that if L has an M-chain
m = {0̂ = m0 ⋖ m1 ⋖ · · · ⋖ mn = 1̂}, then the set {u ∨ mi ∧ v} is an M-chain for the interval
[u, v]. With this choice of (sub)-M-chain on each interval, the labeling λ of Notation 3.3
coincides with λss .
We are now ready to prove the following refinement of Theorem 1.2.
Theorem 3.5. The labeling λ of Notation 3.3 is a CL-labeling.
Proof. It is clear from construction that λ is a chain-edge labeling. By the recursive construction, it suffices to show that an interval of the form [0̂, y] has a unique increasing maximal
chain, and that every lexicographically first maximal chain on [0̂, y] is increasing.
Let m = {0̂ = m0 ⋖ m1 ⋖ · · · ⋖ mn = 1̂} be the sub-M-chain used to define the labeling.
Let ℓ = 1 + max{i : mi ∧ y < y}. It is clear from the construction that every atomic cover
relation on [0̂, y] receives a label that is at most ℓ. Since the elements greater than mℓ−1 are
preserved until the corresponding labels are used, no chain on [0̂, y] receives any label greater
than ℓ.
Similarly, the mℓ−1 element in the sub-M-chain is by construction preserved until the ℓ
label is used, and a chain receives an ℓ label when it leaves the interval [0̂, mℓ−1 ]. Since y ∉ [0̂, mℓ−1 ], we see that every maximal chain on [0̂, y] receives an ℓ label. Thus, an
increasing chain must have the ℓ label on its last cover relation.
But Corollary 2.13 gives mℓ−1 ∧ y to be a coatom of [0̂, y]. It follows by the definition
of the labeling that every increasing chain on [0̂, y] must end with mℓ−1 ∧ y ⋖ y. An easy
induction now yields the only increasing chain to be 0̂ = m0 ∧ y ≤ m1 ∧ y ≤ · · · ≤ mℓ−1 ∧ y,
the “projection” of the sub-M-chain to [0̂, y]. As mℓ−1 ∧ y ⋖ y, the projection chain is in
particular maximal.
We now show that this chain is the unique lexicographically first chain. In the construction,
the least label of an atomic cover relation on [0̂, y] corresponds with the least mi+1 such that
mi+1 ∧ y > 0̂. But this is the (unique) first non-0̂ element of the increasing chain. The desired result follows.

[Figure 3.1: A comodernistic labeling of a lattice.]
3.3. More details about the CL-labeling. Any chain-edge labeling assigns a word to
each maximal chain of L. Since when we label a cover relation with i according to λ, we
remove i from the index set (used for available labels), we obtain a result extending one
direction of [25, Theorem 1].
Lemma 3.6. The chain-edge labeling λ assigns a word with no repeated labels to each maximal chain in L.
Thus, if L is graded of height n, then λ assigns a permutation in Sn to each maximal
chain.
The decreasing chains are also easy to (recursively) understand. Recall that if x and y are
lattice elements, then x is a complement to y if x ∨ y = 1̂ and x ∧ y = 0̂. The following is an
extension of Theorem 2.9 for comodernistic lattices.
Lemma 3.7. If 0̂ ⋖ c1 ⋖ · · · ⋖ 1̂ is a decreasing chain of L with respect to λ, then c1 is a
complement to mn−1 .
Proof. As in the proof of Theorem 3.5, every maximal chain on [0̂, 1̂] contains an n label.
Thus λ(0̂ ⋖ c1 ) = n, so mn−1 ∧ c1 = 0̂. The result follows.
It is natural to ask whether theorems about supersolvable geometric lattices (see e.g. [34])
extend to comodernistic geometric lattices. The answer to this question is positive, but for
rather uninteresting reasons:
Proposition 3.8. If L is a geometric lattice, then any sub-M-chain of L is also an M-chain. Thus, a lattice L is geometric and comodernistic if and only if L is geometric and
supersolvable.
Proof. A result of Brylawski [11, Proposition 3.5] says that if m is modular in a geometric
lattice L, and x is modular in [0̂, m], then x is also modular in L. The result now follows
from Lemma 2.15 and an inductive argument.
In contrast to Proposition 3.8, lattices that are not geometric may have sub-M-chains
which are not M-chains. We close this section by working out a small example in detail.
Example 3.9. We consider the lattice C in Figure 3.1, which we obtained by removing
a single cover relation from the Boolean lattice on 3 elements. It is easy to check that
m2 is modular in C, but that m1 is not modular in C. (Indeed, m1 together with b < c
generate a pentagon sublattice.) Since any lattice of height at most 2 is modular, the chain
0̂ ⋖ m1 ⋖ m2 ⋖ 1̂ is a sub-M-chain, though not an M-chain.
With the exception of c ⋖ 1̂, the label of every cover relation in C is independent of the
choice of root. We have indicated these labels in the diagram. But we notice that the interval
[a, 1̂] inherits the sub-M-chain a ⋖ m2 ⋖ 1̂, while the interval [b, 1̂] has unique maximal (sub-M-)chain b ⋖ c ⋖ 1̂. Thus, the edge c ⋖ 1̂ receives a label of 1 with respect to root 0̂ ⋖ a ⋖ c,
but a label of 2 with respect to root 0̂ ⋖ b ⋖ c.
The reader may have noticed that the atom a is indeed left-modular. Thus, although we
have shown the comodernistic labeling determined by the given sub-M-chain, there is also a
supersolvable EL-labeling of C. We will see in Example 4.7 and Figure 4.1 a lattice that is
neither geometric nor supersolvable, but that is comodernistic.
4. Order congruence lattices
In this section, we examine the order congruence lattices of posets, as considered in the
introduction and in Section 2.4. We prove Theorem 1.5, and apply Lemma 3.7 to calculate
the Möbius number of O(P ).
4.1. Order congruence lattices are comodernistic. A useful tool for showing certain
lattices to be comodernistic is given by the following lemma.
Lemma 4.1. Let L be a meet subsemilattice of a lattice L+ . If m ∈ L+ is a left-modular
coatom in L+ , and m ∈ L, then m is also left-modular in L.
Proof. Since m is a coatom in L+ and therefore in L, the join of x and m is either m (if
x ≤ m) or 1̂ (otherwise) in both lattices. In particular, the join operations in L and L+ agree
on m, and we already know the meet operations agree by the subsemilattice condition. The
result now follows by Lemma 2.10.
The following theorem follows immediately.
Theorem 4.2. If L is a meet subsemilattice of the partition lattice ΠS , and m ∈ L is a
partition x | (S \ x) for some element x ∈ S, then m is a left-modular coatom of L.
We now show:
Lemma 4.3. If P is any poset, then the order congruence lattice O(P ) is a meet subsemilattice of ΠP .
Proof. It is clear from definition that O(P ) is a subposet of ΠP . It suffices to show that if
π1 , π2 are in O(P ), then their meet π1 ∧ π2 also is in O(P ). Let f1 , f2 : P → Z be such that
π1 and π2 are the level sets of f1 and f2 . But then the product map f1 × f2 : P → Z × Z
(where Z × Z is taken with the product order) has the desired level set partition.
That order congruence lattices are comodernistic now follows easily.
Proof (of Theorem 1.5). Let P be a poset, and let x be a maximal element of P . Assume by induction that the result holds for all smaller posets. It is straightforward to
see m = x | (P \ x) is the level set partition of an order preserving map. By Theorem 4.2
and Lemma 4.3, the element m is a left-modular coatom on the interval [0̂, 1̂]. Since [0̂, m] is
lattice-isomorphic to O(P \ x), we get by induction that [0̂, π] is comodernistic when π < m.
If π is incomparable to m, then π ∧ m is a left-modular coatom of [0̂, π] by Corollary 2.13.
Finally, repeated application of Lemma 2.7 and induction gives that intervals of the form
[π ′ , 1̂] are comodernistic. The result follows for general intervals [π ′ , π].
4.2. The Möbius number of an order congruence lattice. We now use the comodernism of the order congruence lattice O(P ) to recover the Möbius number calculation due
to Jenča and Sarkoci. Denote by Compat(x) the set of all y ∈ P that are compatible with
x. That is, Compat(x) consists of all y such that either y ⋖ x, x ⋖ y, or y is incomparable to
x.
Our proof is short and simple.
Theorem 4.4. [21, Theorem 3.8] For any poset P with maximal element x, the Möbius
function of the order congruence lattice satisfies the recurrence
µ(O(P )) = − ∑_{y∈Compat(x)} µ(O(Px∼y )).
Proof. By Lemma 3.7 together with the proof of Theorem 1.5, every decreasing chain of O(P )
begins with a complement to the (left-modular) order partition x | P \ x. Such complements
are easily seen to be atoms a whose non-singleton block is {x, y}, where y ∈ P is compatible
with x. The result now follows by Lemma 2.7.
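To make the recurrence concrete, here is a short recursive implementation (our sketch, exponential in general but fine for small posets; the poset is given by its set of strict order pairs, assumed transitively closed):

```python
def mobius_O(elements, lt):
    # mu(O(P)) via the recurrence of Theorem 4.4.
    if len(elements) <= 1:
        return 1  # O(P) is the one-point lattice
    # Choose a maximal element x of P.
    x = next(e for e in elements if not any((e, f) in lt for f in elements))
    def compatible(y):
        if (y, x) not in lt:
            return True  # incomparable to x, since x is maximal
        # y < x: compatible exactly when y is covered by x.
        return not any((y, z) in lt and (z, x) in lt for z in elements)
    total = 0
    for y in (e for e in elements if e != x and compatible(e)):
        # Form P_{x~y}: rename x to y, then re-close transitively.
        q = {(y if a == x else a, y if b == x else b)
             for (a, b) in lt if {a, b} != {x, y}}
        while True:
            extra = {(a, d) for (a, b) in q for (c, d) in q if b == c} - q
            if not extra:
                break
            q |= extra
        total -= mobius_O([e for e in elements if e != x], q)
    return total

# Chain 1 < 2 < 3: O(P) is Boolean of rank 2, so mu(O(P)) = +1.
print(mobius_O([1, 2, 3], {(1, 2), (2, 3), (1, 3)}))  # 1
```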
Jenča and Sarkoci also show in [21] that if P is a Hasse-connected poset, then the number
of linear extensions of P satisfies the same recurrence as µ(O(P )). We give a short bijective
proof of the same, which has the same flavor as the proof of the main result in [13]. Let
LE(P ) denote the set of linear extensions of P .
Lemma 4.5. Let P be a poset and x a maximal element of P . If x is not also minimal,
then there is a bijection
LE(P ) → ⋃_{y∈Compat(x)} LE(Px∼y ).
Proof. Since x is not minimal, it cannot be the first element in any linear extension L of P .
If y is the element immediately preceding x in L, then it is clear that x and y are compatible.
Then Lx∼y is a linear extension of Px∼y .
To show this map is a bijection, we notice that the process is reversible. If L is a linear extension of Px∼y , replace the element corresponding to the identification of x and y with y followed by x to get a linear extension of P .
Corollary 4.6. If P is a Hasse-connected poset, then the decreasing chains of O(P ) are in
bijective correspondence with the linear extensions of P .
In particular, |µ(O(P ))| is the number of linear extensions of P , and ∆O(P ) is homotopy
equivalent to a bouquet of this number of (|P | − 3)-dimensional spheres.
Proof. Since P is Hasse-connected, a maximal element x cannot also be minimal. Moreover, if
P is Hasse-connected then Px∼y is also Hasse-connected. The result now follows immediately
by Theorem 4.4 and Lemma 4.5.
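As a sanity check on Corollary 4.6, linear extensions of a small poset can be counted by brute force (our sketch):

```python
from itertools import permutations

def count_linear_extensions(elements, lt):
    # A permutation is a linear extension iff it respects every order pair.
    return sum(all(p.index(a) < p.index(b) for (a, b) in lt)
               for p in permutations(elements))

# The Hasse-connected poset with relations 1 < 3 and 2 < 3 has two linear
# extensions (123 and 213), matching |mu(O(P))| = 2.
print(count_linear_extensions([1, 2, 3], {(1, 3), (2, 3)}))  # 2
```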
In the case where P is not Hasse-connected, a similar approach can be followed. Indeed, the
same argument as in Lemma 4.5 applies, except that we must discard the linear extensions
that begin with x in each recursive step where they arise. This argument identifies the
decreasing chains of O(P ) with a recursively-defined subset of the linear extensions of P .
We do not have a non-recursive description of this subset. Jenča and Sarkoci give a somewhat
different description of |µ(O(P ))| for a non-Hasse-connected poset P in [21, Theorem 4.5].
4.3. An order congruence lattice that is neither geometric nor supersolvable.
Example 4.7. Consider the pentagon lattice N5 , obtained by attaching a bottom and top
element 0̂ and 1̂ to the poset with elements a, b, c and relation a < c. In this case, O(N5 )
and Oconv (N5 ) coincide, and are pictured in Figure 4.1.
The reader can verify by inspection that no atom of O(N5 ) is left-modular; thus the lattice
O(N5 ) is comodernistic but neither geometric nor supersolvable. As some of the coatoms of
O(N5 ) are not left-modular, the dual of O(N5 ) also fails to be geometric. We remark that
order congruence lattices that fail to be geometric were examined earlier in [22].
5. Solvable subgroup lattices
In this section, we discuss applications to and connections with the subgroup lattice of a
group.
5.1. Known lattice-theoretic analogues of classes of groups. Since the early days of
the subject, a main motivating object for lattice theory has been the subgroup lattice of a
finite group. Indeed, a (left-)modular element may be viewed as a purely lattice-theoretic
analogue or extension of a normal subgroup. Focusing on the normal subgroups characterizing a class of groups then typically gives in a straightforward way an analogous class of
lattices with interesting properties. For example, every subgroup of an abelian (or more
generally Hamiltonian) group is normal, so a corresponding class of lattices is that of the modular lattices.

[Figure 4.1: The order congruence lattice O(N5) of the pentagon lattice N5. Nontrivial left-modular elements are shown with rectangles.]

We summarize some of these analogies in Table 1.

Group class          | Lattice class for L(G) | Characterizes group class? | Self-dual?
cyclic               | distributive           | Yes                        | Yes
abelian, Hamiltonian | modular                | No                         | Yes
nilpotent            | lower semimodular      | No                         | No
supersolvable        | supersolvable          | Yes                        | Yes
solvable             | ???                    |                            |

Table 1. Classes of groups and related classes of lattices.
We remark that, although every normal subgroup is modular in the subgroup lattice,
not every modular subgroup is normal. Similarly, although every nilpotent group has lower
semimodular subgroup lattice, groups that are not nilpotent may also have lower semimodular subgroup lattice. For example, the subgroup lattice of the symmetric group on 3 elements, L(S3 ), has height 2, hence is modular (even though S3 is neither abelian nor nilpotent). As L(S3 ) is lattice isomorphic to L(Z3 × Z3 ), a subgroup lattice characterization of these classes of groups
is not possible.
It may then be surprising that a group G is supersolvable if and only if L(G) is a supersolvable lattice. The reason for this is more superficial than one might hope: Iwasawa proved
in [20] that a group is supersolvable if and only if its subgroup lattice is graded. However,
the definition of supersolvable lattice seems to capture the pleasant combinatorial properties
of supersolvable groups much better than the definition of graded lattice does.
Semimodular and supersolvable lattices have been of great importance in algebraic and
topological combinatorics. In particular, both classes of lattices are EL-shellable, and the
EL-labeling gives an efficient method of computing homotopy type, Möbius invariants, etc.
5.2. Towards a definition of solvable lattice. After the previous subsection, it may
come as some surprise that there is no widely-accepted definition of solvable lattice. It is
the purpose of this subsection to make the case for the definition of comodernistic lattices
as one good candidate.
It was independently proved by Suzuki in [40] and Zappa in [45] that solvable groups are
characterized by their subgroup lattices. Later Schmidt gave an explicit characterization:
Proposition 5.1 (Schmidt [30]; see also [31, Chapter 5.3]). For a group G, the following
are equivalent:
(1) G is solvable.
(2) L(G) has a chain of subgroups 1 = G0 ⊊ G1 ⊊ · · · ⊊ Gk = G such that each Gi is
modular in L(G), and such that each interval [Gi , Gi+1 ] is a modular lattice.
(3) L(G) has a chain of subgroups 1 = G0 ⊂· G1 ⊂· · · · ⊂· Gn = G such that each Gi is
modular in the interval [1, Gi+1 ].
The reader will recognize the conditions in Proposition 5.1 as direct analogues of the
conditions from Section 2.7. Despite this close correspondence, proving that Conditions (2)
and (3) of the proposition imply solvability is not at all trivial.
Although Proposition 5.1 combinatorially characterizes solvable subgroup lattices, we find
it somewhat unsatisfactory. We don’t know how to use the implicit lattice conditions to
calculate Möbius numbers. And, as the following example will show, lattices satisfying the
implicit conditions need not be shellable.
Example 5.2. Consider the lattice whose Hasse diagram is pictured in Figure 5.1. The
element M is easily verified to be modular in this lattice, and the interval [0̂, M] is a Boolean
lattice, hence modular. But since the interval [C, 1̂] is disconnected, the lattice is not shellable
or Cohen-Macaulay.
It is our opinion that a good definition of “solvable lattice” should be equivalent to solvability on subgroup lattices, and obey as many of the useful properties possessed by supersolvable
lattices as possible.

[Figure 5.1: A non-shellable lattice satisfying Conditions (2) and (3) of Proposition 5.1.]

Among these are: EL-shellability, efficient computation of homotopy
type and/or homology bases and/or Möbius numbers, and self-duality of the property. Perhaps even more importantly, such a definition should have many combinatorial examples.
We believe comodernistic lattices to be an important step towards understanding a definition or definitions of “solvable lattice”. As Theorem 1.7 states, comodernism is equivalent to
solvability on subgroup lattices. While we do not know if a comodernistic lattice is always
EL-shellable, we have shown such a lattice to be CL-shellable. The CL-labeling allows efficient calculation of homotopy type, and consequences thereof. Perhaps most importantly,
there are many natural examples of comodernistic lattices.
Unfortunately, comodernism is not a self-dual property, as is made clear by Example 4.7.
5.3. Proof of Theorem 1.7. We begin the proof by reviewing a few elementary facts about
groups and subgroup lattices, all of which can be found in [31], or easily verified by the reader.
We say that H permutes with K if HK = KH.
Lemma 5.3. Let H, K, and L be subgroups of a group G.
(1) If H permutes with K, then HK = KH is the join in L(G) of H and K.
(2) If N ⊳ G, then N permutes with every subgroup of G.
(3) (Dedekind Identity) If H ⊆ K, then H(K ∩ L) = K ∩ HL and (K ∩ L)H = K ∩ LH.
Corollary 5.4. If K ⊃ H permutes with every subgroup on the interval [H, G], then K is a
modular element of this interval.
By Lemma 2.15, the elements of a sub-M-chain of a comodernistic lattice satisfy the
modularity condition as in part (3) of Proposition 5.1. It follows immediately by the same
proposition that G is solvable if L(G) is comodernistic.
For the other direction, every subgroup of a solvable group is solvable. Thus, it suffices
to find a modular coatom in the interval [H, G] over any subgroup H. Let 1 = N0 ⊊ N1 ⊊ · · · ⊊ Nk = G be a chief series. It follows that each HNi is a subgroup for each i. Let ℓ be
the maximal index such that HNℓ < G, and let K be any coatom of the interval [HNℓ , G].
We will show that K permutes with every subgroup L on [H, G], hence is modular on the
same interval. For any such L, we have
H(L ∩ Nℓ+1 ) = L ∩ HNℓ+1 = L ∩ G = L, and similarly
(L ∩ Nℓ+1 )H = L ∩ Nℓ+1 H = L ∩ G = L,
so that H permutes with L ∩ Nℓ+1 . Moreover, Nℓ ⊆ K ∩ Nℓ+1 ⊆ Nℓ+1 , and since Nℓ+1 /Nℓ is
abelian, the Correspondence Theorem gives that K ∩ Nℓ+1 ⊳ Nℓ+1 . Thus, K ∩ Nℓ+1 permutes
with L ∩ Nℓ+1 . Now
KL = H(K ∩ Nℓ+1 )H(L ∩ Nℓ+1 ) = H(L ∩ Nℓ+1 )H(K ∩ Nℓ+1 ) = LK,
as desired.
5.4. Homotopy type of the subgroup lattice of a solvable group. It is immediate
from Lemma 5.3 that if N ⊳ G then HN permutes with all subgroups on the interval [H, G].
Thus, a chief series {Ni } lifts to a chain of left-modular elements {HNi } on the interval
[H, G]. In a solvable group we can (by the proof of Theorem 1.7) complete the chain {HNi }
to a sub-M-chain. Let λ be constructed according to this choice of sub-M-chain in all
applicable intervals, and consider the decreasing chains of λ.
It is immediate by basic facts about left-modular elements that a decreasing maximal
chain on L(G) contains as a subset a chain of complements to the chief series {Ni }. Since
a chain of complements to {Ni } in a solvable group is a maximal chain [12, Lemma 9.10],
such chains are exactly the decreasing maximal chains.
The order complex of a CL-shellable poset is a bouquet of spheres, where the spheres are
in bijective correspondence with the decreasing chains of the poset. Thus, we recover the
homotopy-type calculation of [41] (see also [43, 44]).
6. k-equal partition and related lattices
In this section, we will show that the k-equal partition lattices are comodernistic. We’ll
also show two related families of lattices to be comodernistic.
6.1. k-equal partition lattices. Recall that the k-equal partition lattice Πn,k is the subposet of the partition lattice Πn consisting of all partitions whose non-singleton blocks have
size at least k.
Theorem 6.1. For any 1 ≤ k ≤ n, the k-equal partition lattice Πn,k is comodernistic.
Proof. Consider an interval [π ′ , π] in Πn,k . Let π be C1 | C2 | . . . | Cm , and assume without loss
of generality that C1 is not a block in π ′ . Then C1 is formed by merging blocks B1 , . . . , Bℓ of
π ′ . Suppose that the Bi ’s are ordered by increasing size, so that |B1 | ≤ |B2 | ≤ · · · . Consider
the element
m = B1 | B2 ∪ · · · ∪ Bℓ | C2 | . . . | Cm .
Suppose that σ is some partition on [π ′ , π], and D is the block of σ containing B1 . If D = B1 ,
then σ ∧ m = σ. Otherwise, there are two cases:
Case 1: |B1 | > 1. Then σ ∧ m is formed by splitting D into blocks B1 , D \ B1 . Notice
that |D \ B1 | ≥ |B1 | ≥ k by the ordering of the Bi ’s.
Case 2: |B1 | = 1. If |D| > k, then σ ∧ m is formed by splitting D into smaller
blocks B1 , D \ B1 . Otherwise, we have |D| = k, and σ ∧ m is formed by splitting D into k
singletons.
In either situation, we have σ ∧ m ⋖ σ, so Lemma 2.12 gives m to be left-modular on the
desired interval.
We recover from Theorem 6.1 a weaker form of the result [7, Theorem 6.1] that k-equal
partition lattices are EL-shellable. Repeated application of Lemma 3.7 recovers the same
set of decreasing chains for the comodernistic labeling as in [7, Corollary 6.2]. See also [9].
6.2. k, h-equal partition lattices in type B. By a sign pattern of a set S, we refer to an
assignment of + or − to each element of S, considered up to reversing the signs of every
element of S. Thus, if S has an order, an equivalent notion is to assign a + to the first
element of S and either + or − to each remaining element.
A signed partition of {0, 1, . . . , n} then consists of a partition of the set, together with a
sign pattern assignment for each block not containing 0. The block containing 0 is called
the zero block, and other blocks are called signed blocks. The signed partition lattice ΠBn consists of all signed partitions of {0, 1, . . . , n}. The cover relations in ΠBn are of two types:
merging two signed blocks, and selecting one of the two possible patterns of the merged set;
or merging a signed block with the zero block (thereby ‘forgetting’ the sign pattern on the
signed block).
The signed partition lattice is well-known to be supersolvable. Indeed, if π is a signed
partition where every signed block is a singleton, then π is left-modular in ΠBn .
Björner and Sagan [5] considered the signed k, h-equal partition lattice, where 1 ≤ h <
k ≤ n. This is the subposet ΠBn,k,h consisting of all signed partitions whose non-singleton
signed blocks have size at least k, and whose zero block is either a singleton or has size at
least h + 1.
Theorem 6.2. For any 1 ≤ h < k ≤ n, the signed k, h-equal partition lattice ΠBn,k,h is comodernistic.
Proof. We proceed similarly to the proof of Theorem 6.1. Let [π ′ , π] be an interval in ΠBn,k,h , and π be C1 | C2 | . . . | Cm . Assume without loss of generality that C1 is not a block in π ′ .
Then C1 is formed by merging blocks B1 , . . . , Bℓ of π ′ . Let B1 be the smallest signed block
in this list, and consider the element
m = B1 | B2 ∪ · · · ∪ Bℓ | C2 | . . . | Cm .
Now let σ be some partition in the interval [π ′ , π], and D be the block of σ containing B1 .
If D = B1 , then σ ∧ m = σ. Otherwise, there are two cases:
Case 1: |B1 | > 1. Since σ is on [π ′ , π], we see that D ⊆ C1 . If D is a signed block, then
|D \ B1 | ≥ |B1 | ≥ k by the ordering of the blocks. Otherwise, by choice of B1 , no part of
σ ∧ m is a signed singleton block contained in C1 . It follows that |D \ B1 | ≥ h + 1. In either
situation, it follows that σ ∧ m is formed by splitting D into blocks B1 , D \ B1 .
Case 2: |B1 | = 1. If |D| > k, then σ ∧ m is formed by splitting D into smaller blocks
B1 , D \ B1 . Similarly if 0 ∈ D and |D| > h + 1. Otherwise, we have |D| = k or |D| = h
(depending on whether 0 ∈ D), and σ ∧ m is formed by splitting D into singletons.
In either case, we have σ ∧ m ⋖ σ, hence that m is left-modular on the desired interval by
Lemma 2.12.
Björner and Sagan [5, Theorem 4.4] showed ΠBn,k,h to be EL-shellable. We recover from
Theorem 6.2 the weaker result of CL-shellability. However, we remark that our proof is
significantly simpler, and still allows easy computation of a cohomology basis, etc.
There is also a “type D analogue” of Πn,k and ΠBn,k,h . Björner and Sagan considered this
lattice in [5], but left the question of shellability open. Feichtner and Kozlov gave a partial
answer to the type D shellability question in [16].
Our basic technique in the proofs of Theorems 6.1 and 6.2 is to show that left-modularity of coatoms in Πn and ΠBn is sometimes inherited in the join subsemilattices Πn,k and ΠBn,k,h . It is easy to verify from the (here omitted) definition that the type D analogue of Πn and ΠBn has no left-modular coatoms; see also [19]. For this reason, the straightforward translation of our
techniques will not work in type D. We leave open the question of under what circumstances
the type D analogue of Πn,k has left-modular coatoms, or is comodernistic.
6.3. Partition lattices with restricted element-block size incidences. The k-equal
partition lattices admit generalizations in several directions. One such generalization, examined in [7], is that of the subposet of partitions where the size of every block is in some set
T . Further generalizations in similar directions are studied in [14, 15].
We consider here a different direction. Motivated by the signed k, h-equal partition lattices,
Gottlieb [17] examined a related sublattice of the (unsigned) partition lattice. In Gottlieb’s
lattice, the size of a block is restricted to be at least k or at least h, depending on whether
or not the block contains a distinguished element.
We further generalize to allow each element to have a different block-size restriction associated to it. More formally, we consider a map aff : [n] → [n], which we think of as providing
an affinity to each element x of [n]. We consider two subposets of the partition lattice Πn :
Π∀aff := {π ∈ Πn : every nonsingleton block B has |B| ≥ aff(x) for every x ∈ B}, and
Π∃aff := {π ∈ Πn : every nonsingleton block B contains some x such that |B| ≥ aff(x)}.
It is clear that both subposets are join subsemilattices of Πn , hence lattices.
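In code the two membership tests read as follows (our sketch, with a partition given as a list of blocks and aff as a function); note that the constant map aff ≡ k recovers the k-equal condition defining Πn,k:

```python
def in_pi_forall_aff(partition, aff):
    # Every nonsingleton block B must satisfy |B| >= aff(x) for every x in B.
    return all(len(B) == 1 or all(len(B) >= aff(x) for x in B)
               for B in partition)

def in_pi_exists_aff(partition, aff):
    # Every nonsingleton block B must contain some x with |B| >= aff(x).
    return all(len(B) == 1 or any(len(B) >= aff(x) for x in B)
               for B in partition)

# With the constant affinity aff = 3, both tests agree with the k-equal
# condition for k = 3.
pi = [{1, 2, 3}, {4}]
print(in_pi_forall_aff(pi, lambda x: 3), in_pi_exists_aff(pi, lambda x: 3))
# True True
```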
Theorem 6.3. For any selection of affinity map aff, the lattices Π∃aff and Π∀aff are comodernistic.
Proof. As in the proof of Theorem 6.1, consider an interval [π ′ , π]. Let some block of π split
nontrivially into blocks B1 , B2 , . . . , Bℓ of π ′ . As in Theorem 6.1, assume that the blocks are
sorted by increasing size, and in particular that |B1 | ≤ |Bi | for all i.
Now, if there are multiple singleton blocks
B1 = {x1 }, B2 = {x2 }, . . . Bj = {xj },
then sort these by affinity. In the case of Π∃aff , let aff(x1 ) ≥ aff(x2 ) ≥ . . . ; while for Π∀aff
reverse to require aff(x1 ) ≤ aff(x2 ) ≤ . . . .
The remainder of the proof now goes through entirely similarly to that of Theorems 6.1
and 6.2.
Theorem 6.3 has applications to lower bounds on the complexity of a certain computational
problem, directly analogous to the work in [3, 4] with Πn,k .
References
[1] Garrett Birkhoff and M. K. Bennett, The convexity lattice of a poset, Order 2 (1985),
no. 3, 223–242.
[2] Anders Björner, Shellable and Cohen-Macaulay partially ordered sets, Trans. Amer.
Math. Soc. 260 (1980), no. 1, 159–183.
[3] Anders Björner and László Lovász, Linear decision trees, subspace arrangements and
Möbius functions, J. Amer. Math. Soc. 7 (1994), no. 3, 677–706.
[4] Anders Björner, László Lovász, and Andrew C. C. Yao, Linear decision trees: Volume estimates and topological bounds, Proceedings of the Twenty-fourth Annual ACM
Symposium on Theory of Computing (New York, NY, USA), STOC ’92, ACM, 1992,
pp. 170–177.
[5] Anders Björner and Bruce E. Sagan, Subspace arrangements of type Bn and Dn , J.
Algebraic Combin. 5 (1996), no. 4, 291–314.
[6] Anders Björner and Michelle L. Wachs, On lexicographically shellable posets, Trans.
Amer. Math. Soc. 277 (1983), no. 1, 323–341.
[7] ———, Shellable nonpure complexes and posets. I, Trans. Amer. Math. Soc. 348 (1996), no. 4, 1299–1327.
[8] ———, Shellable nonpure complexes and posets. II, Trans. Amer. Math. Soc. 349 (1997), no. 10, 3945–3975.
[9] Anders Björner and Volkmar Welker, The homology of “k-equal” manifolds and related
partition lattices, Adv. Math. 110 (1995), no. 2, 277–313.
[10] Andreas Blass and Bruce E. Sagan, Möbius functions of lattices, Adv. Math. 127 (1997),
no. 1, 94–123, arXiv:math/9801009.
[11] Tom Brylawski, Modular constructions for combinatorial geometries, Trans. Amer.
Math. Soc. 203 (1975), 1–44.
[12] Klaus Doerk and Trevor Hawkes, Finite soluble groups, de Gruyter Expositions in Mathematics, vol. 4, Walter de Gruyter & Co., Berlin, 1992.
[13] Paul Edelman, Takayuki Hibi, and Richard P. Stanley, A recurrence for linear extensions, Order 6 (1989), no. 1, 15–18.
[14] Richard Ehrenborg and JiYoon Jung, The topology of restricted partition posets, J.
Algebraic Combin. 37 (2013), no. 4, 643–666.
[15] Richard Ehrenborg and Margaret A. Readdy, The Möbius function of partitions with
restricted block sizes, Adv. in Appl. Math. 39 (2007), no. 3, 283–292.
[16] Eva Maria Feichtner and Dmitry N. Kozlov, On subspace arrangements of type D, Discrete Math. 210 (2000), no. 1-3, 27–54, Formal power series and algebraic combinatorics
(Minneapolis, MN, 1996).
[17] Eric Gottlieb, The h, k-equal partition lattice is EL-shellable when h ≥ k, Order 31
(2014), no. 2, 259–269.
[18] Allen Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002,
http://www.math.cornell.edu/∼hatcher/AT/ATpage.html.
[19] Torsten Hoge and Gerhard Röhrle, Supersolvable reflection arrangements, Proc. Amer.
Math. Soc. 142 (2014), no. 11, 3787–3799.
[20] Kenkichi Iwasawa, Über die endlichen Gruppen und die Verbände ihrer Untergruppen,
J. Fac. Sci. Imp. Univ. Tokyo. Sect. I. 4 (1941), 171–199.
[21] Gejza Jenča and Peter Sarkoci, Linear extensions and order-preserving poset partitions,
J. Combin. Theory Ser. A 122 (2014), 28–38, arXiv:1112.5782.
[22] Péter Körtesi, Sándor Radeleczki, and Szilvia Szilágyi, Congruences and isotone maps
on partially ordered sets, Math. Pannon. 16 (2005), no. 1, 39–55.
[23] Larry Shu-Chung Liu, Left-modular elements and edge labellings, Ph.D. thesis, Michigan
State University, 1999.
[24] Shu-Chung Liu and Bruce E. Sagan, Left-modular elements of lattices, J. Combin. Theory Ser. A 91 (2000), no. 1-2, 369–385, arXiv:math.CO/0001055, In memory of Gian-Carlo Rota.
[25] Peter McNamara, EL-labelings, supersolvability and 0-Hecke algebra actions on posets,
J. Combin. Theory Ser. A 101 (2003), no. 1, 69–89, arXiv:math/0111156.
[26] Peter McNamara and Hugh Thomas, Poset edge-labellings and left modularity, European
Journal of Combinatorics 27 (2006), no. 1, 101–113, arXiv:math.CO/0211126.
[27] James R. Munkres, Elements of algebraic topology, Addison-Wesley Publishing Company, Menlo Park, CA, 1984.
[28] Nathan Reading, Lattice congruences of the weak order, Order 21 (2004), no. 4, 315–344
(2005).
[29] D. E. Sanderson, Isotopy in 3-manifolds. I. Isotopic deformations of 2-cells and 3-cells,
Proc. Amer. Math. Soc. 8 (1957), 912–922.
[30] Roland Schmidt, Eine verbandstheoretische Charakterisierung der auflösbaren und der
überauflösbaren endlichen Gruppen, Arch. Math. (Basel) 19 (1968), 449–452.
[31] ———, Subgroup lattices of groups, de Gruyter Expositions in Mathematics, vol. 14,
Walter de Gruyter & Co., Berlin, 1994.
[32] Jay Schweig, A convex-ear decomposition for rank-selected subposets of supersolvable
lattices, SIAM J. Discrete Math. 23 (2009), no. 2, 1009–1022.
[33] John Shareshian, On the shellability of the order complex of the subgroup lattice of a
finite group, Trans. Amer. Math. Soc. 353 (2001), no. 7, 2689–2703.
[34] Richard P. Stanley, Supersolvable lattices, Algebra Universalis 2 (1972), 197–217.
[35] ———, Finite lattices and Jordan-Hölder sets, Algebra Universalis 4 (1974), 361–371.
[36] ———, Combinatorics and commutative algebra, second ed., Progress in Mathematics, vol. 41, Birkhäuser Boston Inc., Boston, MA, 1996.
[37] ———, Enumerative combinatorics. Volume 1, second ed., Cambridge Studies in Advanced Mathematics, vol. 49, Cambridge University Press, Cambridge, 2012.
[38] Manfred Stern, Semimodular lattices, Encyclopedia of Mathematics and its Applications,
vol. 73, Cambridge University Press, Cambridge, 1999, Theory and applications.
[39] Teo Sturm, Verbände von Kernen isotoner Abbildungen, Czechoslovak Math. J. 22(97)
(1972), 126–144.
[40] Michio Suzuki, On the lattice of subgroups of finite groups, Trans. Amer. Math. Soc. 70
(1951), 345–371.
[41] Jacques Thévenaz, The top homology of the lattice of subgroups of a soluble group,
Discrete Math. 55 (1985), no. 3, 291–303.
[42] Michelle L. Wachs, Poset topology: Tools and applications, Geometric combinatorics, IAS/Park City Math. Ser., vol. 13, Amer. Math. Soc., Providence, RI, 2007,
arXiv:math/0602226, pp. 497–615.
[43] Russ Woodroofe, An EL-labeling of the subgroup lattice, Proc. Amer. Math. Soc. 136
(2008), no. 11, 3795–3801, arXiv:0708.3539.
[44] ———, Chains of modular elements and shellability, J. Combin. Theory Ser. A 119
(2012), no. 6, 1315–1327, arXiv:1104.0936.
[45] Guido Zappa, Sulla risolubilità dei gruppi finiti in isomorfismo reticolare con un gruppo
risolubile, Giorn. Mat. Battaglini (4) 4(80) (1951), 213–225.
Department of Mathematics, Oklahoma State University, Stillwater, OK, 74078
E-mail address: [email protected]
URL: https://math.okstate.edu/people/jayjs/
Department of Mathematics & Statistics, Mississippi State University, Starkville, MS
39762
E-mail address: [email protected]
URL: http://rwoodroofe.math.msstate.edu/
Medical Concept Representation Learning from Electronic Health
Records and its Application on Heart Failure Prediction
Edward Choi*, MS; Andy Schuetz**, PhD; Walter F. Stewart**, PhD; Jimeng Sun*, PhD
*Georgia Institute of Technology, Atlanta, USA
**Research Development & Dissemination, Sutter Health, Walnut Creek, USA
Corresponding Author: Jimeng Sun
Georgia Institute of Technology
266 Ferst Drive, Atlanta, GA 30313
Tel: 404.894.0482
E-mail: [email protected]
Keywords:
Neural Networks, Representation learning, Predictive modeling, Heart Failure prediction.
Word Count: 3600
ABSTRACT
Objective: To transform heterogeneous clinical data from electronic health records into
clinically meaningful constructed features using data driven method that rely, in part, on
temporal relations among data.
Materials and Methods: The clinically meaningful representations of medical concepts
and patients are the key for health analytic applications. Most existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverages EHR data directly for learning such a concept representation. We propose a new
way to represent heterogeneous medical concepts (e.g., diagnoses, medications and
procedures) based on co-occurrence patterns in longitudinal electronic health records. The
intuition behind the method is to map medical concepts that are co-occurring closely in time
to similar concept vectors so that their distance will be small. We also derive a simple
method to construct patient vectors from the related medical concept vectors.
Results: For qualitative evaluation, we study similar medical concepts across diagnoses,
medications and procedures. In the quantitative evaluation, our proposed representation
significantly improves the predictive modeling performance for onset of heart failure (HF),
where classification methods (e.g., logistic regression, neural network, support vector
machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC
curve (AUC) using the proposed representation.
Conclusion: We proposed an effective method for patient and medical concept
representation learning. The resulting representation can map relevant concepts together
and also improves predictive modeling performance.
Introduction
Growth in use of electronic health records (EHR) in health care delivery is opening
unprecedented opportunities to predict patient risk, understand what works best for a given
patient, and to personalize clinical decision-making. But, raw EHR data, represented by a
heterogeneous mix of elements (e.g., clinical measures, diagnoses, medications,
procedures) and voluminous unstructured content, may not be optimal for analytic uses or
even for clinical care. While higher order clinical features (e.g., disease phenotypes) are
intuitively more meaningful and can reduce data volume, they may fail to capture
meaningful information inherent to patient data. We explored whether novel data driven
methods that rely on the temporal occurrence of EHR data elements could yield higher
order intuitively interpretable features that both capture pathophysiologic relations inherent
to data and improve performance of predictive models.
Growth in use of EHRs is raising fundamental questions on optimal ways to
represent structured and unstructured data. Medical ontologies such as SNOMED, RxNorm
and LOINC offer structured hierarchical means of compressing data and of understanding
relations among data from different domains (e.g., disease diagnosis, labs, prescriptions).
But, these ontologies do not offer the means of extracting meaningful relations inherent to
longitudinal patient data. Scalable methods that can detect pathophysiologic relations
inherent to longitudinal EHR data and construct intuitive features may accelerate more
effective use of EHR data in clinical care and advances in performance of predictive
analytics.
The abstract concepts inherent to existing ontologies do not provide a means to
connect elements in different domains to the common underlying pathophysiologic
constructs that are represented by how data elements co-occur in time. The data-driven
approach we developed logically organizes data into higher order constructs.
Heterogeneous medical data were mapped to a low-dimensional space that accounted for
temporal clustering of similar concepts (e.g., A1c lab test, ICD-9 code for diabetes,
prescription for metformin). Co-occurring clusters (e.g., diabetes and peripheral
neuropathy) were then identified and formed into higher order pathophysiologic feature
sets organized by prevalence.
We propose to learn such a medical concept representation on longitudinal EHR
data based on a state-of-the-art neural network model. We also propose an efficient way to
derive patient representation based on the medical concept representation (or medical
concept vectors). We calculated for a set of diseases their closest diseases, medications and
procedures to demonstrate the clinical knowledge captured by the medical concept
representations. We use the learned representations for the heart failure prediction task,
where a significant performance improvement of up to 23% in AUC is obtained across several
classification models (logistic regression: AUC 0.766 to 0.791, SVM: AUC 0.736 to 0.791,
neural network: AUC 0.779 to 0.814, KNN: AUC 0.637 to 0.785).
BACKGROUND
Representation Learning in Natural Language Processing
Recently, neural network based representation learning has shown success in many
fields such as computer vision [1] [2] [3], audio processing [4] [5] and natural language
processing (NLP) [6] [7] [8]. We discuss representation learning in NLP, in particular, as
our proposed method is based on Skip-gram [6] [9], a popular method for learning word
representations.
Mikolov et al. [6] [9] proposed Skip-gram, a simple neural network model
that can learn real-valued multi-dimensional vectors capturing relations between words
by training on massive amounts of text. The trained real-valued vectors will have similar
values for syntactically and semantically close words such as dog and cat or would and
could, but distinct values for words that are not. Pennington et al. [10] proposed GloVe, a
word representation algorithm based on the global co-occurrence matrix. While GloVe and
Skip-gram essentially achieve the same goal by taking different approaches, GloVe is
computationally faster than Skip-gram as it precomputes co-occurrence information before
the actual learning. Skip-gram, however, requires fewer hyper-parameters to tune
than GloVe, and generally shows better performance [11].
Just as natural language text can be seen as a sequence of words, medical records such as
diagnoses, medications and procedures can be seen as a sequence of codes over time.
In this work, we propose a framework for mapping raw medical concepts (e.g., ICD9, CPT)
into related concept vectors using Skip-gram and validating the utility of the resulting
medical concept vectors.
Representation Learning in the Clinical Field
A few researchers have applied representation learning in the clinical field recently.
Minarro-Gimenez et al. [12] learned the representations of medical terms by applying
Skip-gram to various medical texts. They collected the medical text from PubMed, Merck
Manuals, Medscape and Wikipedia. De Vine et al. [13] learned the representations of
UMLS concepts from free-text patient records and medical journal abstracts. They first
preprocessed the text to map the words to UMLS concepts, then applied Skip-gram to learn
the representations of the concepts.
More recently, Choi et al. [14] applied Skip-gram to a structured dataset from a health
insurance company, where the dataset consisted of patient visit records along with
diagnosis codes (ICD9), lab test results (LOINC), and drug usage (NDC). Their goal, to learn
efficient representations of medical concepts, partially overlaps with ours. Our study,
however, focuses on learning the representations of medical concepts, using them to
generate patient representations, and applying those to a real-world prediction problem to
demonstrate the improved performance provided by efficient representation learning.
MATERIALS AND METHODS
Figure 1. Flowchart of the proposed method (training phase: medical concept representation learning from the EHR, patient representation construction by aggregation, and heart failure prediction model training; prediction phase: patient records are mapped to medical concept vectors, aggregated into patient vectors, and fed to the trained model to output a heart failure risk score)
In Figure 1, we give a high-level overview of the steps we take to perform HF
prediction. In the training phase, we first train medical concept vectors from the EHR
dataset using Skip-gram. Then, we construct patient representation using the medical
concept vectors. The patient representation is then used to train heart failure prediction
models using various classifiers, namely logistic regression, support vector machine
(SVM), multi-layer perceptron with one hidden layer (MLP) and K-nearest neighbors
classifier (KNN). In the prediction phase, we map the medical record of a patient to medical
concept vectors and generate patient vectors by aggregating the concept vectors. Then we
plug the patient vectors into the trained model, which in turn will generate the risk score
for heart failure.
In the following sections, we will describe medical concept representation learning
and patient representation construction in more detail.
Medical Concept Representation Learning
Figure 2. Two different representations of diagnoses: (a) one-hot encoding, where each of N diagnoses (e.g., Bronchitis, Pneumonia, Obesity, ..., Cataract) is an N-dimensional indicator vector; (b) a better representation, where each diagnosis is a D-dimensional real-valued vector. Typically, the raw data dimensionality N (~10,000) is much larger than the concept dimensionality D (50-1,000).
Figure 2 depicts a straightforward motivation for using a better representation for
medical concepts. Figure 2(a) shows one-hot encoding of N unique diagnoses using
N-dimensional vectors. It is easy to see that this is not an effective representation, in that the
difference between Bronchitis and Pneumonia is the same as the difference between
Pneumonia and Obesity. Figure 2(b) shows a better representation, in that Bronchitis and
Pneumonia share similar values compared to other diagnoses. By using Skip-gram, we will
be able to better represent not only diagnoses but also medications and procedures as
multi-dimensional real-valued vectors that will capture the latent relations between them.
Figure 3. Training examples and the model architecture of Skip-gram: (a) patient medical records on a timeline (e.g., Cough, Fever, Benzonatate, Chest X-ray, Pneumonia, Amoxicillin); (b) predicting neighboring medical concepts given Fever; (c) predicting neighboring medical concepts given Pneumonia; (d) model architecture of Skip-gram, mapping the concept c_t to the D-dimensional vector v(c_t) and predicting v(c_{t-2}), v(c_{t-1}), v(c_{t+1}), v(c_{t+2}).
Figure 3(a) is an example of a patient medical record in temporal order. Skip-gram
assumes the meaning of a concept is determined by its context (or neighbors). Therefore,
given a sequence of concepts, Skip-gram picks a target concept and tries to predict its
neighbors, as shown by Figure 3(b). Then we slide the context window, pick the next target
and do the same context prediction, as shown by Figure 3(c). Since the goal of Skip-gram
is to learn the vector representation of concepts, we need to convert medical concepts
to D-dimensional vectors, where D is a user-chosen value typically between 50 and 1000.
Therefore the actual prediction is conducted with vectors, as shown by Figure 3(d), where
c_t is the concept at the t-th timestep and v(c_t) the vector that represents c_t. The goal of
Skip-gram is to maximize the following average log probability,
$$
\frac{1}{T}\sum_{t=1}^{T}\;\sum_{\substack{-w\le j\le w\\ j\neq 0}} \log p\left(c_{t+j}\mid c_t\right),
\qquad\text{where}\qquad
p\left(c_{t+j}\mid c_t\right)=\frac{\exp\left(\mathbf{v}(c_{t+j})^{\top}\mathbf{v}(c_t)\right)}{\sum_{i=1}^{N}\exp\left(\mathbf{v}(c_i)^{\top}\mathbf{v}(c_t)\right)}
$$
where T is the length of the sequence of medical concepts, w the size of the context window,
c_t the target medical concept at timestep t, c_{t+j} the neighboring medical concept at
timestep t+j, v(c) the vector that represents the medical concept c, and N the total number of
medical concepts. The size of the context window is typically set to 5, giving us 10 concepts
surrounding the target concept. Note that the conditional probability is expressed as a
softmax function. Simply put, by maximizing the softmax score of the inner product of the
vectors of neighboring concepts, Skip-gram learns real-valued vectors that efficiently capture the
fine-grained relations between concepts. It needs to be mentioned that our formulation of
Skip-gram is different from the original Skip-gram. In Mikolov et al. [9], they distinguish
the vectors for the target concept from the vectors for the neighboring concepts. In our
formulation, we force the two sets of vectors to hold the same values, as suggested by [15].
This simpler formulation allowed faster training and produced impressive results.
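To make the tied-vector objective concrete, here is a minimal NumPy sketch. The toy patient sequences, vocabulary size, learning rate, and the simplification of updating only the target vector per step are our own illustrative assumptions; the paper's actual implementation used Theano on GPUs.

```python
import numpy as np

# Toy illustration of the tied-vector Skip-gram objective above.
# Patients are time-ordered lists of concept IDs (hypothetical data).
patients = [[0, 1, 2, 1, 3], [2, 3, 0, 4]]
N, D, w, lr = 5, 8, 2, 0.05                # vocab size, dim, window, step size
rng = np.random.default_rng(0)
V = 0.01 * rng.standard_normal((N, D))     # one shared vector per concept [15]

def sgd_step(target, context):
    scores = V @ V[target]                 # v(c_i)^T v(c_t) for all i
    p = np.exp(scores - scores.max())
    p /= p.sum()                           # softmax p(c_i | c_t)
    # Gradient of -log p(context | target) w.r.t. v(target); for simplicity
    # we update only the target vector, ignoring its tied appearance on the
    # output side.
    V[target] -= lr * (p @ V - V[context])

for seq in patients:
    for t, target in enumerate(seq):
        for j in range(max(0, t - w), min(len(seq), t + w + 1)):
            if j != t:
                sgd_step(target, seq[j])
```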
Patient Representation Construction
In this section, we describe a simple derivation of patient representations using the
learned medical concept vectors. One of the impressive features of Skip-gram in Mikolov
et al. [9] was that the word vectors supported syntactically and semantically meaningful
linear operations that enabled word analogy calculations, such that the resulting vector of
King – Man + Woman is closest to the Queen vector.
We expect that the medical concept representations learned by Skip-gram will show similar
properties, so that the concept vectors will support clinically meaningful vector additions.
Then, an efficient representation of a patient will be as simple as converting all medical
concepts in the patient's medical history to medical concept vectors, then summing all those
vectors to obtain a single representation vector, as shown in Figure 4. In the experiments,
we show examples of clinically meaningful concept vector additions.
Figure 4. Patient representation construction. (a) A medical record of a patient on a timeline. (b) The medical concepts are represented as vectors using the trained medical concept vectors. (c) The patient is represented as a vector by summing all of its medical concept vectors.
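A minimal sketch of this aggregation; the concept-vector matrix and the record contents are hypothetical stand-ins for the trained vectors and a real patient history:

```python
import numpy as np

def patient_vector(record, V):
    """Sum the concept vectors of every code in a patient's record.
    record: list of concept IDs; V: (N, D) matrix of concept vectors."""
    return V[record].sum(axis=0)

V = np.random.default_rng(0).standard_normal((5, 8))  # stand-in vectors
x = patient_vector([0, 1, 2, 1, 3], V)                # one D-dim patient vector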
EXPERIMENTS AND RESULTS
Population and Source of Data
Data were from Sutter Palo Alto Medical Foundation (Sutter-PAMF) primary care
patients. Sutter-PAMF is a large primary care and multispecialty group practice that has
used an EHR for more than a decade. The study dataset was extracted with cases and
controls identified within the interval from 05/16/2000 to 05/23/2013. The EHR data
included demographics, smoking and alcohol consumption, clinical and laboratory values,
International Classification of Disease version 9 (ICD-9) codes associated with encounters,
orders, and referrals, procedure information in Current Procedural Terminology (CPT)
codes, and medication prescription information in medication names. The dataset contained
265,336 patients with 555,609 unique clinical events in total.
Configuration for Medical Concept Representation Learning
To apply Skip-gram, we scanned through encounter, medication order, procedure
order and problem list records of all 265,336 patients, and extracted diagnosis, medication
and procedure codes assigned to each patient in temporal order. If a patient received
multiple diagnoses, medications or procedures at a single visit, then those medical codes
were given the same timestamp. The respective number of unique diagnoses, medications
and procedures was 11,460, 17,769 and 9,370, totaling 38,599 unique medical concepts.
We used 100-dimensional vectors to represent medical concepts (i.e., D=100 in Figure 2(b)),
considering that 300 dimensions were sufficient to effectively represent a vocabulary of 692,000 words in NLP [9].
We used Theano [16], a Python library for evaluating mathematical expressions, to
implement Skip-gram. Theano can also take advantage of GPUs to greatly improve the
speed of calculations involving large matrices. For optimization, we used Adadelta [17],
which employs an adaptive learning rate. Unlike stochastic gradient descent (SGD), which is
widely used for training neural networks, Adadelta does not depend very strongly on the
setting of the learning rate, and shows good performance. Using Theano 0.7 and CUDA 7
on an Ubuntu machine with a Xeon E5-2697 and an Nvidia Tesla K80, it took approximately
43 hours to run 10 epochs of Adadelta with a batch size of 100.
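For reference, a minimal sketch of the Adadelta update rule from [17]; the decay rate and epsilon values here are our own illustrative choices:

```python
import numpy as np

def adadelta_update(param, grad, state, rho=0.95, eps=1e-6):
    """One Adadelta step [17]: decaying averages of squared gradients and
    squared updates replace a hand-tuned global learning rate."""
    Eg2, Edx2 = state
    Eg2 = rho * Eg2 + (1 - rho) * grad**2
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx**2
    return param + dx, (Eg2, Edx2)
```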
Evaluation of Medical Concept Representation Learning
[Figure 5 plot: "Vector Representation of Diagnosis Codes using Skip-gram", dimension reduced by t-SNE; axes t-SNE Dimension1 and t-SNE Dimension2. Legend (10 uppermost ICD-9 categories): Infectious And Parasitic Diseases; Neoplasms; Endocrine, Nutritional And Metabolic Diseases, And Immunity Disorders; Diseases Of The Blood And Blood-Forming Organs; Mental Disorders; Diseases Of The Nervous System And Sense Organs; Diseases Of The Circulatory System; Diseases Of The Respiratory System; Diseases Of The Digestive System; Diseases Of The Genitourinary System. The red box marks Malignant Skin Neoplasms, the blue box Benign Skin Neoplasms, and the black box the following diagnoses:]
053.29: Herpes zoster with other ophthalmic complications
053.21: Herpes zoster keratoconjunctivitis
053.19: Herpes zoster with other nervous system complications
364.04: Secondary iridocyclitis, noninfectious
364.00: Acute and subacute iridocyclitis, unspecified
053.22: Herpes zoster iridocyclitis
364.3: Unspecified iridocyclitis
Figure 5. Diagnosis vectors projected to a 2D space by t-SNE
Figure 5 shows the trained diagnosis vectors plotted in a 2D space, where we used
t-SNE [18] to reduce the dimensions from 100 to 2. t-SNE is a dimensionality reduction
algorithm that was specifically developed for plotting high-dimensional data into a two or
three dimensional space. We randomly chose 1,000 diagnoses from 10 uppermost
categories of ICD-9, which are displayed at the top of the figure. It is readily visible that
diagnoses are generally well grouped by their corresponding categories. However, if
diagnoses from the same category are in fact quite different, they should be apart. This is
shown by the red box and the blue box in Figure 5. Even though they are from the same
neoplasms category, the red box indicates a group of malignant skin neoplasms (172.X, 173.X)
while the blue box indicates a group of benign skin neoplasms (216.X). Detailed figures of the
red and blue boxes are in the supplementary section. What is more, as the black box shows,
diagnoses from different groups are located close to one another if they are actually related.
In the black box, iridocyclitis and eye infections related to herpes zoster are closely located,
which corresponds to the fact that approximately 43% of herpes zoster ophthalmicus (HZO)
patients develop iridocyclitis [19].
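A minimal sketch of this projection step with scikit-learn's TSNE; the random vectors stand in for the trained 100-dimensional concept vectors:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for 1,000 trained 100-dimensional diagnosis vectors.
concept_vectors = np.random.default_rng(0).standard_normal((1000, 100))

# Reduce to 2D for plotting, as in Figure 5.
xy = TSNE(n_components=2).fit_transform(concept_vectors)
print(xy.shape)  # (1000, 2)
```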
In order to see how well the representation learning captured the relations between
medications and procedures as well as diagnoses, we conducted the following study. We
chose the 100 diagnoses that occurred most frequently in the data, obtained for each diagnosis
the 50 closest vectors in terms of cosine similarity, and picked 5 diagnosis, medication and
procedure vectors among the 50 vectors. Table 1 depicts a portion of the entire list. Note
that some cells contain fewer than 5 items, because there were fewer than 5 such items among
the 50 closest vectors. The entire list is provided in the supplementary section.
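A minimal sketch of this nearest-neighbor query; the concept-vector matrix is a hypothetical stand-in for the trained vectors:

```python
import numpy as np

def closest_concepts(query_id, V, k=50):
    """IDs of the k concepts most cosine-similar to a query concept.
    V: (N, D) matrix of concept vectors."""
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)  # L2-normalize rows
    sims = Vn @ Vn[query_id]                           # cosine similarities
    order = np.argsort(-sims)
    return [i for i in order if i != query_id][:k]

V = np.random.default_rng(0).standard_normal((38599, 100))  # stand-in
print(closest_concepts(0, V, k=5))
```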
Table 1. Examples of diagnoses and their closest medical concepts.

Acute upper respiratory infections (465.9)
Closest diagnoses: Bronchitis, not specified as acute or chronic (490); Cough (786.2); Acute sinusitis, unspecified (461.9); Acute bronchitis (466.0); Acute pharyngitis (462)
Closest medications: Azithromycin 250 mg po tabs; Promethazine-Codeine 6.25-10 mg/5ml po syrp; Amoxicillin 500 mg po caps; Fluticasone Propionate 50 mcg/act na susp; Flonase 50 mcg/act na susp
Closest procedures: Pulse oximetry single; Serv prov during reg sched eve/wkend/hol hrs; Chest PA & lateral; Gyn cytology (pap) pa; Influenza vac (flu clinic only) 3+yo pa

Diabetes mellitus (250.02)
Closest diagnoses: Diabetes mellitus (250.00); Mixed hyperlipidemia (272.2); Other abnormal glucose (790.29); Obesity, unspecified (278.00); Pure hypercholesterolemia (272.0)
Closest medications: Metformin hcl 500 mg po tabs; Metformin hcl 1000 mg po tabs; Glucose blood vi strp; Lisinopril 10 mg po tabs; Lisinopril 20 mg po tabs
Closest procedures: Diabetic eye exam (no bill); Diabetes education, int; Ophthalmology, int; Diabetic foot exam (no bill); Influenza vac 3+yr (v04.81) im

Edema (782.3)
Closest diagnoses: Unspecified essential hypertension (401.9); Atrial fibrillation (427.31); Chronic kidney disease, Stage III (moderate) (585.3); Anemia, unspecified (285.9); Congestive heart failure, unspecified (428.0)
Closest medications: Furosemide 20 mg po tabs; Hydrochlorothiazide 25 mg po tabs; Hydrocodone-Acetaminophen 5-500 mg po tabs; Cephalexin 500 mg po caps; Furosemide 40 mg po tabs
Closest procedures: Debridement of nails, 6 or more; OV est pt min serv; ECG and interpretation; Chest PA & lateral; EKG

Tear film insufficiency, unspecified (375.15)
Closest diagnoses: Blepharitis, unspecified (373.00); Senile cataract, unspecified (366.10); Presbyopia (367.4); Preglaucoma, unspecified (365.00); Other chronic allergic conjunctivitis (372.14)
Closest medications: Glasses; Erythromycin 5 mg/gm op oint; Patanol 0.1 % op soln
Closest procedures: Refraction; Visual field exam extended; Visual field exam limited; Referral to ophthalmology, int; Ophthalmology, int

Benign essential hypertension (401.1)
Closest diagnoses: Hyperlipidemia (272.4); Essential hypertension (401.9); Pure hypercholesterolemia (272.0); Mixed hyperlipidemia (272.2); Diabetes mellitus (250.00)
Closest medications: Hydrochlorothiazide 25 mg po tabs; Atenolol 50 mg po tabs; Lisinopril 10 mg po tabs; Lisinopril 40 mg po tabs; Lisinopril 20 mg po tabs
Closest procedures: ECG and interpretation; Influenza vac 3+yr im; Immun admin im/sq/id/perc 1st vac only; GI, int; OV est pt lev 3
Evaluation of Medical Concept Vector Additions
Table 2. Vector operations of medical concept vectors trained by Skip-gram.

Hypertension (401.9) + Obesity (278.0)
Closest diagnoses: Hyperlipidemia (272.4); Diabetes (250.00); Coronary atherosclerosis (414.00); Hypertension (401.1); Chronic kidney disease (585.3)
Closest medications: Hydrochlorothiazide; Valsartan; Nifedipine; Lisinopril; Losartan potassium
Closest procedures: N/A

Fever (780.60) + Cough (786.2)
Closest diagnoses: Pneumonia (486); Acute bronchitis (466.0); Acute upper respiratory infections (465.9); Bronchitis (490); Acute sinusitis (461.9)
Closest medications: Promethazine-codeine; Guaifenesin-codeine; Proair HFA; Levofloxacin; Azithromycin
Closest procedures: X-ray chest; Chest PA & Lateral; Pulse oximetry; Serv prov during reg sched eve/wkend/hol hrs; Inhalation Rx for obstruction MDI/NEB

Visual Disturbance (368.8) + Pain in/around Eye (379.91)
Closest diagnoses: Visual discomfort (368.13); Regular astigmatism (367.21); Presbyopia (367.4); Blepharitis (373.00); Tear film insufficiency (375.15)
Closest medications: Glasses; Erythromycin ointment; Patanol
Closest procedures: Ophthalmology; Referral to ophthalmology; Peripheral refraction; Visual field exam; Diabetic eye exam

Loss of Weight (783.21) + Anxiety State (300.00)
Closest diagnoses: Depressive disorder (311); Malaise & fatigue (780.79); Insomnia (780.52); Generalized anxiety disorder (300.02); Esophageal reflux (530.81)
Closest medications: Lorazepam; Zolpidem tartrate; Omeprazole; Alprazolam; Trazodone HCL
Closest procedures: Referral to GI; ECG & Interpretation; GI; EKG; Chest PA & Lateral

Hallucination (780.1) + Speech Disturbance (784.59)
Closest diagnoses: Dysarthria (784.51); Secondary parkinsonism (332.1); Senile dementia with delirium (290.3); Mental disorder (294.9); Paranoid state (297.9)
Closest medications: Midorine HCL; Risperdal; Rivastigmine Tartrate; Rivastigmine
Closest procedures: Referral to geriatrics; Mental status exam; Referral to neuropsychology; Referral to speech therapy; Home visit est pt lev 2
Due to the difficulty of generating medically interesting examples, we chose 5
intuitive examples, as shown by the first column of Table 2, to give a simple demonstration
of medical concept vector additions. We again generated the 50 closest vectors to the sum
of two medical concept vectors and picked 5 from each of the diagnosis, medication and
procedure categories.
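A minimal sketch of such an addition query; the concept IDs and the vector matrix are hypothetical stand-ins:

```python
import numpy as np

V = np.random.default_rng(0).standard_normal((38599, 100))  # stand-in vectors
hypertension_id, obesity_id = 10, 20                        # hypothetical IDs

# Query the neighborhood of v(Hypertension) + v(Obesity).
q = V[hypertension_id] + V[obesity_id]
Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
sims = Vn @ (q / np.linalg.norm(q))
print(np.argsort(-sims)[:50])   # the 50 concepts closest to the sum
```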
Setup for Heart Failure Prediction Evaluation
In this section, we first describe why we chose heart failure (HF) prediction task as an
application. Then we briefly mention the models to use, followed by the description of the
data processing steps to create the training data for all models. Lastly, the evaluation
strategy will be followed by implementation details.
Heart failure prediction task: Onset of HF is associated with a high level of disability,
health care costs and mortality (roughly a 50% risk of mortality within 5 years of diagnosis)
[20] [21]. There has been relatively little progress in slowing the progression of HF severity,
largely because it is difficult to detect before actual diagnosis. As a consequence,
intervention has primarily been confined to the time period after diagnosis, with little or no
impact on disease progression. Earlier detection of HF could lead to improved outcomes
through patient engagement and more assertive treatment with angiotensin converting
enzyme (ACE) inhibitors or Angiotensin II receptor blockers (ARBs), mild exercise,
reduced salt intake, and possibly other options [22] [23] [24] [25].
Models for performance comparison: We aim to emphasize the effectiveness of the
medical concept representation and the patient representation derived from it. Therefore
we trained four popular classifiers, namely logistic regression, MLP, SVM, and KNN using
both one-hot vectors and medical concept vectors.
Definition of Cases and Controls: Criteria for incident onset of HF are described in [26]
and were adopted from [27]. The criteria are defined as: 1) Qualifying ICD-9 codes for HF
appeared as a diagnosis code in either the encounter, the problem list, or the medication
order fields. Qualifying ICD-9 codes are listed in the supplementary section. Qualifying
ICD-9 codes with image and other related orders were excluded because these orders often
represent a suspicion of HF, where the results are often negative; 2) a minimum of three
clinical encounters with qualifying ICD-9 codes had to occur within 12 months of each
other, where the date of diagnosis was assigned to the earliest of the three dates. If the time
span between the first and second appearances of the HF diagnostic code was greater than
12 months, the date of the second encounter was used as the first qualifying encounter; 3)
age 50 or greater and less than 85 at the time of HF diagnosis.
Up to ten (nine on average) eligible primary care clinic-, sex-, and age-matched (in
5-year age intervals) controls were selected for each incident HF case. Primary care
patients were eligible as controls if they had no HF diagnosis in the 12-month period before
diagnosis of the incident HF case. Control subjects were required to have their first office
encounter within one year of the matching HF case patient’s first office visit, and have at
least one office encounter 30 days before or any time after the case’s HF diagnosis date to
ensure similar duration of observations among cases and controls.
From 265,336 Sutter-PAMF patients, 3,884 incident HF cases and 28,903 control
patients were identified.
Data processing: To train the four models, we generated the dataset again from the
encounter, medication order, procedure order and problem list records of 3,884 cases and
28,903 controls. Based on the HF diagnosis date (HFDx) of each patient, we extracted all
records from the 18-month period before the HFDx. To train the models with medical
concept vectors, we converted the medical records to patient vectors as shown in Figure 4.
To train the models with one-hot encoding, we converted the medical records to aggregated
one-hot vectors in the same fashion as Figure 4, using one-hot vectors instead of medical
concept vectors.
In order to study the relation between the medical concept vectors trained with
different sizes of data and their influence on the models’ prediction performance, we used
three kinds of medical concept vectors: 1) The one trained with only HF cases (3,884
patients), 2) The one trained with HF cases and controls (32,787 patients), 3) The one
trained with the full sample (265,336 patients). Note that medical concept vectors trained
with a smaller number of patients cover fewer medical concepts. Therefore, when
converting patient records to patient vectors as in Figure 4, we excluded all medical codes
that did not have matching medical concept vectors. All input vectors were normalized to
zero mean and unit variance.
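A minimal sketch of the two input constructions, under our own simplifying assumption that each record is a list of concept IDs:

```python
import numpy as np

def one_hot_input(record, N):
    """Aggregated one-hot encoding: counts of each code in the record."""
    x = np.zeros(N)
    for code in record:
        x[code] += 1
    return x

def concept_vector_input(record, V, known):
    """Sum of concept vectors, skipping codes without a trained vector.
    known: set of concept IDs that have a trained row in V."""
    kept = [c for c in record if c in known]
    return V[kept].sum(axis=0) if kept else np.zeros(V.shape[1])

# Normalization to zero mean and unit variance is then applied column-wise
# over the training matrix, e.g. X = (X - X.mean(0)) / X.std(0).
```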
Evaluation strategy: We used six-fold cross validation to train and evaluate all models,
and to estimate how well the models generalize to independent datasets. Prediction
performance was measured using the area under the ROC curve (AUC) on data not used in
training. For SVM, we used the confidence score to calculate the AUC. A detailed
explanation of the cross validation is given in the supplementary section.
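A minimal sketch of AUC measurement under cross validation with scikit-learn; the random data stands in for the patient vectors and HF labels, and this simplified scheme omits the separate validation chunk the paper reserves (see Figure 9):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))   # stand-in patient vectors
y = rng.integers(0, 2, size=200)      # stand-in HF labels

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=6, scoring="roc_auc")
print(scores.mean(), scores.std())
```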
Implementation details: Logistic regression and MLP were implemented with Theano
and trained with Adadelta. SVM and KNN were implemented with Python Scikit-Learn.
All models were trained on the same machine used for medical concept representation
learning. Hyper-parameters used for training each model are described in the
supplementary section.
Evaluation of Heart Failure Prediction
Figure 6. Heart failure prediction performance of various models and input vectors (average AUC, on a scale from 0.5 to 1, for Logistic Regression, SVM, MLP and KNN; bars compare one-hot encoding against medical concept vectors trained on 4K, 33K and 265K patients).
Figure 6 shows the average AUC of the 6-fold cross validation for various models and
input vectors. The colors represent different training input vectors. The error bars indicate
the standard deviation derived from the 6-fold cross validation. The power of medical
concept representation learning is evident as all models show significant improvement in
the HF prediction performance. Logistic regression and SVM, both being linear models,
show similar performance when trained with medical concept vectors, although SVM
benefits slightly more from using the better representation of medical concepts. MLP also
benefits from using medical concept vectors, and, being a non-linear model, shows better
performance compared to logistic regression and SVM. It is interesting that KNN benefits
the most from using the medical concept vectors, even the ones trained on the smallest
dataset. Considering the fact that KNN classification is based on the distances between data
points, this is a clear indication that proper medical concept representation can alleviate the
sparsity problem induced by the simple one-hot encoding.
Figure 6 also tells us that medical concept representation is best learned with a large
dataset, as shown by Mikolov et al. [6]. However, for most models, and especially KNN, even
the medical concept vectors trained with the smallest number of patients improve the
prediction performance. This is quite surprising: even though we used less information
by excluding unmatched medical codes when using medical concept vectors trained with
a small number of patients, the models still show better prediction performance.
This again is clear proof that medical concept representation learning provides a more
effective way to represent medical concepts than one-hot encoding.
Table 3. Training speed improvement. (*Since KNN does not require training, we display
classification time instead.)

                          Logistic regression   SVM     MLP     KNN*
One-hot encoding          81.3                  20.5    85.7    2900.82
Medical concept vectors   5.3                   1.9     5.6     36.66
Speed-up                  x15.3                 x10.8   x15.3   x79.1
Table 3 depicts the training time for each model when using one-hot encoding and
medical concept vectors. Considering the high dimensionality of one-hot encoding,
training the models with medical concept vectors should provide a significant speed-up, as
shown by the last row of Table 3. This shows that medical concept vectors not only improve
prediction performance, but also significantly reduce the training time.
Before discussing future work, we would like to emphasize that all of our
experiments were conducted entirely without expert knowledge such as medical
ontologies or features designed by medical experts. Using only the medical order records,
we were able to produce clinically meaningful representations of medical concepts. This is
an inspiring discovery that can be extended to numerous other medical problems.
Future Work
Although medical concept vectors have shown impressive results, they would be even
more effective if deeper medical information, such as lab results or patient demographic
information, could be embedded. This would enable us to represent the medical state of
patients more accurately.
Using expert knowledge is another direction we should try. Even though we have
shown impressive performance using only medical records, this does not mean we
cannot benefit from well-established expert medical knowledge, such as specific features
or medical ontologies.
Another natural extension of our work is to address other medical problems.
Although this work focused on the early detection of heart failure, our approach is
general enough to be applied to any kind of disease prediction problem, and the medical
concept vectors can be used in numerous other medical applications as well.
CONCLUSION
We proposed a new way of representing heterogeneous medical concepts as real-valued
vectors and constructing efficient patient representations using a state-of-the-art
representation learning method. We qualitatively showed that the trained medical concept
vectors indeed capture medical insights compatible with medical knowledge and
experience. For the heart failure prediction task, medical concept vectors improved the
performance of many classifiers, quantitatively proving their effectiveness. We
discussed the limitations of our method and possible future work, which includes deeper
utilization of medical information, combining expert knowledge into our framework, and
expanding our approach to various medical applications.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, "Imagenet classification with
deep convolutional neural networks," in Advances in Neural Information Processing
Systems, 2012, pp. 1097-1105.
[2] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol,
"Extracting and composing robust features with denoising autoencoders," in
International Conference on Machine learning, 2008, pp. 1096-1103.
[3] Quoc Le et al., "Building high-level features using large scale unsupervised learning,"
arXiv:1112.6209, 2012.
[4] Honglak Lee, Peter Pham, Yan Largman, and Andrew Ng, "Unsupervised feature
learning for audio classification using convolutional deep belief networks," in
Advances in Neural Information Processing Systems, 2009, pp. 1096-1104.
[5] Geoffrey Hinton et al., "Deep Neural Networks for Acoustic Modeling in Speech
Recognition: The Shared Views of Four Research Groups," Signal Processing
Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean, "Efficient estimation of
word representations in vector space," in arXiv preprint arXiv:1301.3781, 2013.
[7] Kyunghyun Cho et al., "Learning phrase representations using rnn encoder-decoder
for statistical machine translation," in arXiv preprint arXiv:1406.1078, 2014.
[8] Richard Socher, Jeffrey Pennington, Eric Huang, Andrew Ng, and Christopher
Manning, "Semi-supervised recursive autoencoders for predicting sentiment
distributions," in Empirical Methods in Natural Language Processing, 2011, pp. 151-161.
[9] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean,
"Distributed representations of words and phrases and their compositionality," in
Advances in Neural Information Processing Systems, 2013, pp. 3111-3119.
[10] Jeffrey Pennington, Richard Socher, and Christopher Manning, "Glove: Global
vectors for word representation," in Empirical Methods on Natural Language
Processing, 2014, pp. 1532-1543.
[11] Radim Rehurek. (2014, Dec.) Rare Technologies. [Online]. http://rare-technologies.com/making-sense-of-word2vec/
[12] Jose Minarro-Gimenez, Oscar Marin-Alonso, and Matthias Samwald, "Exploring the
application of deep learning techniques on medical text corpora," Studies in health
technology and informatics, vol. 205, pp. 584-588, 2013.
[13] Lance De Vine, Guido Zuccon, Bevan Koopman, Laurianne Sitbon, and Peter Bruza,
"Medical Semantic Similarity with a Neural Language Model," in International
Conference on Information and Knowledge Management, 2014, pp. 1819-1822.
[14] Youngduk Choi, Chill Chiu, and David Sontag, Learning low-dimensional
representations of medical concepts, 2016, to appear in AMIA-CRI.
[15] Xin Rong, "word2vec Parameter learning explained," in arXiv preprint
arXiv:1411.2738, 2014.
[16] James Bergstra et al., "Theano: a CPU and GPU Math Expression Compiler," in
Python for Scientific Computing Conference, 2010.
[17] Matthew Zeiler, "ADADELTA: An adaptive learning rate method," in arXiv preprint
arXiv:1212.5701, 2012.
[18] Laurens Van der Maaten and Geoffrey Hinton, "Visualizing data using t-SNE,"
Journal of Machine Learning Research, vol. 9, no. 11, pp. 2579-2605, 2008.
[19] Janice Thean, Anthony Hall, and Richard Stawell, "Uveitis in herpes zoster
ophthalmicus," Clinical & Experimental Ophthalmology, vol. 29, no. 6, pp. 406-410,
2001.
[20] Veronique L. Roger et al., "Trends in heart failure incidence and survival in a
community-based population," JAMA, vol. 292, no. 3, pp. 344-350, July 2004.
[21] Sherry L. Murphy, Jiaquan Xu, and Kenneth D. Kochanek, "Deaths: final data for
2010," National Vital Stat Rep, vol. 61, no. 4, pp. 1-117, May 2010.
[22] SOLVD Investigators, "Effect of enalapril on mortality and the development of heart
failure in asymptomatic patients with reduced left ventricular ejection fractions," N
Engl j Med, vol. 327, pp. 685-691, 1992.
[23] J Arnold et al., "Prevention of heart failure in patients in the Heart Outcomes
Prevention Evaluation (HOPE) study," Circulation, vol. 107, no. 9, pp. 1284-1290,
2003.
[24] Sebastiano Sciarretta, Francesca Palano, Giuliano Tocci, Rossella Baldini, and
Massimo Volpe, "Antihypertensive treatment and development of heart failure in
hypertension: a Bayesian network meta-analysis of studies in patients with
hypertension and high cardiovascular risk," Archives of internal medicine, vol. 171,
no. 5, pp. 384-394, 2011.
[25] Chao-Hung Wang, Richard Weisel, Peter Liu, Paul Fedak, and Subodh Verma,
"Glitazones and heart failure critical appraisal for the clinician," Circulation, vol.
107, no. 10, pp. 1350-1354, 2003.
[26] Rajakrishnan Vijayakrishnan et al., "Prevalence of heart failure signs and symptoms
in a large primary care population identified through the use of text and data mining
of the electronic health record," Journal of Cardiac Failure, vol. 20, no. 7, pp. 459-464, 2014.
[27] Jerry Gurwitz et al., "Contemporary prevalence and correlates of incident heart
failure with preserved ejection fraction," The American journal of medicine, vol. 126,
no. 5, pp. 393-400, 2013.
SUPPLEMENTARY
Table 4. Qualifying ICD-9 codes for heart failure

398.91: Rheumatic heart failure (congestive)
402.01: Malignant hypertensive heart disease with heart failure
402.11: Benign hypertensive heart disease with heart failure
402.91: Unspecified hypertensive heart disease with heart failure
404.01: Hypertensive heart and chronic kidney disease, malignant, with heart failure and with chronic kidney disease stage I through stage IV, or unspecified
404.03: Hypertensive heart and chronic kidney disease, malignant, with heart failure and with chronic kidney disease stage V or end stage renal disease
404.11: Hypertensive heart and chronic kidney disease, benign, with heart failure and with chronic kidney disease stage I through stage IV, or unspecified
404.13: Hypertensive heart and chronic kidney disease, benign, with heart failure and chronic kidney disease stage V or end stage renal disease
404.91: Hypertensive heart and chronic kidney disease, unspecified, with heart failure and with chronic kidney disease stage I through stage IV, or unspecified
404.93: Hypertensive heart and chronic kidney disease, unspecified, with heart failure and chronic kidney disease stage V or end stage renal disease
428.0: Congestive heart failure, unspecified
428.1: Left heart failure
428.20: Systolic heart failure, unspecified
428.21: Acute systolic heart failure
428.22: Chronic systolic heart failure
428.23: Acute on chronic systolic heart failure
428.30: Diastolic heart failure, unspecified
428.31: Acute diastolic heart failure
428.32: Chronic diastolic heart failure
428.33: Acute on chronic diastolic heart failure
428.40: Combined systolic and diastolic heart failure, unspecified
428.41: Acute combined systolic and diastolic heart failure
428.42: Chronic combined systolic and diastolic heart failure
428.43: Acute on chronic combined systolic and diastolic heart failure
428.9: Heart failure, unspecified
Red and Blue Box of Figure 5
Figure 7. Detailed version of the red box of Figure 5 (malignant skin neoplasms; the point labels are the ICD-9 codes listed in Table 5)
Table 5. List of ICD-9 codes that appear in Figure 7, and their descriptions

172.3: Malignant melanoma of skin of other and unspecified parts of face
172.4: Malignant melanoma of skin of scalp and neck
172.5: Malignant melanoma of skin of trunk, except scrotum
173.0: Other and unspecified malignant neoplasm of skin of lip
173.1: Other and unspecified malignant neoplasm of skin of eyelid, including canthus
173.2: Other and unspecified malignant neoplasm of skin of ear and external auditory canal
173.3: Other and unspecified malignant neoplasm of skin of other and unspecified parts of face
173.31: Basal cell carcinoma of skin of other and unspecified parts of face
173.4: Other and unspecified malignant neoplasm of scalp and skin of neck
173.41: Basal cell carcinoma of scalp and skin of neck
173.5: Other and unspecified malignant neoplasm of skin of trunk, except scrotum
173.50: Unspecified malignant neoplasm of skin of trunk, except scrotum
173.51: Basal cell carcinoma of skin of trunk, except scrotum
173.6: Other and unspecified malignant neoplasm of skin of upper limb, including shoulder
173.7: Other and unspecified malignant neoplasm of skin of lower limb, including hip
173.71: Basal cell carcinoma of skin of lower limb, including hip
173.9: Other and unspecified malignant neoplasm of skin, site unspecified
173.91: Basal cell carcinoma of skin, site unspecified
238.9: Neoplasm of uncertain behavior, site unspecified
Figure 8. Detailed version of the blue box of Figure 5 (benign skin neoplasms; the point labels are the ICD-9 codes listed in Table 6)
Table 6. List of ICD-9 codes that appear in Figure 8, and their descriptions

078.10: Viral warts, unspecified
216.3: Benign neoplasm of skin of other and unspecified parts of face
216.5: Benign neoplasm of skin of trunk, except scrotum
216.6: Benign neoplasm of skin of upper limb, including shoulder
216.7: Benign neoplasm of skin of lower limb, including hip
216.8: Benign neoplasm of other specified sites of skin
216.9: Benign neoplasm of skin, site unspecified
228.00: Hemangioma of unspecified site
238.2: Neoplasm of uncertain behavior of skin
448.1: Nevus, non-neoplastic
448.9: Other and unspecified capillary diseases
6-fold Cross Validation Scheme
Figure 9. Diagram of 6-fold cross validation (the 32,787 patients are split into training, validation and test sets, rotated across Folds 1 through 6)
Figure 9 depicts the 6-fold cross validation we performed for HF prediction. As
explained earlier, the entire cohort is divided into 7 chunks, and two chunks take turns
serving as the validation set and the test set.
Hyper-parameters used for training models
After experimenting with various values, the following hyper-parameter settings produced
the best performance. We used Theano 0.7 and CUDA 7 for training the logistic regression,
MLP, and GRU models. SVM was implemented with Scikit-Learn LinearSVC. KNN was
implemented with Scikit-Learn KNeighborsClassifier.
Table 7. Hyper-parameter settings for training the models

Logistic regression, one-hot vectors: L2 regularization: 0.1, Max epoch: 100
Logistic regression, medical concept vectors: L2 regularization: 0.01, Max epoch: 100
SVM, one-hot vectors: L2 regularization: 0.000001, Dual: False
SVM, medical concept vectors: L2 regularization: 0.001, Dual: False
MLP, one-hot vectors: L2 regularization: 0.01, Hidden layer size: 15, Max epoch: 100
MLP, medical concept vectors: L2 regularization: 0.001, Hidden layer size: 100, Max epoch: 100
KNN, one-hot vectors: Number of neighbors: 15
KNN, medical concept vectors: Number of neighbors: 100
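A minimal sketch of instantiating the scikit-learn models with these settings; mapping the table's "L2 regularization" values onto scikit-learn parameters (C for LinearSVC) is our own assumption:

```python
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

# Assuming the table's "L2 regularization" corresponds to LinearSVC's C
# (note: in scikit-learn, smaller C means stronger regularization).
svm_onehot = LinearSVC(C=1e-6, dual=False)
svm_concept = LinearSVC(C=1e-3, dual=False)

knn_onehot = KNeighborsClassifier(n_neighbors=15)
knn_concept = KNeighborsClassifier(n_neighbors=100)
```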
Complete table of 100 frequent diagnoses and their closest diagnoses, medications and procedures

Myalgia and myositis, unspecified (729.1)
Closest diagnoses: Lumbago (724.2); Thoracic or lumbosacral neuritis or radiculitis, unspecified (724.4); Degeneration of lumbar or lumbosacral intervertebral disc (722.52); Other malaise and fatigue (780.79)
Closest medications: TRIAMCINOLONE ACETONIDE *KENALOG INJ; PR METHLYPRED ACET (BU 21 - 40MG) *DEPO-MEDROL INJ
Closest procedures: PR INJ TRIGGER POINT(S) 1 OR 2 MUSCLES; PHYSICAL MEDICINE, INT; ASP/INJ MAJOR JOINT/BURS/CYST

Pain in limb (729.5)
Closest diagnoses: Keratoderma, acquired (701.1); Plantar fascial fibromatosis (728.71); Other specified diseases of nail (703.8); Disturbance of skin sensation (782.0)
Closest medications: HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; ZOLPIDEM TARTRATE 10 MG PO TABS; CYCLOBENZAPRINE HCL 10 MG PO TABS; HYDROCODONE-ACETAMINOPHEN 10-325 MG PO TABS; TRAMADOL HCL 50 MG PO TABS
Closest procedures: DEBRIDEMENT OF NAILS, 6 OR MORE; PODIATRY, INT; REFERRAL TO PODIATRIC MEDICINE/SURGERY, INT; PR OV EST PT LEV 3; FOOT COMPLETE

Tear film insufficiency, unspecified (375.15)
Closest diagnoses: Senile cataract, unspecified (366.10); Presbyopia (367.4); Preglaucoma, unspecified (365.00); Other chronic allergic conjunctivitis (372.14)
Closest medications: GLASSES; ERYTHROMYCIN 5 MG/GM OP OINT; PATANOL 0.1 % OP SOLN
Closest procedures: PR REFRACTION; PR VISUAL FIELD EXAM EXTENDED; PR VISUAL FIELD EXAM LIMITED; REFERRAL TO OPHTHALMOLOGY, INT; OPHTHALMOLOGY, INT

Headache (784.0)
Closest diagnoses: Dizziness and giddiness (780.4); Cervicalgia (723.1); Other malaise and fatigue (780.79); Acute sinusitis, unspecified (461.9)
Closest medications: HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP; CYCLOBENZAPRINE HCL 10 MG PO TABS; FLONASE 50 MCG/ACT NA SUSP; AZITHROMYCIN 250 MG PO TABS
Closest procedures: REFERRAL TO NEUROLOGY, INT; NEUROLOGY, INT; SERV PROV DURING REG SCHED EVE/WKEND/HOL HRS; PR OV EST PT LEV 3; PR ECG AND INTERPRETATION

Disturbance of skin sensation (782.0)
Closest diagnoses: Brachial neuritis or radiculitis NOS (723.4); Cervicalgia (723.1); Pain in limb (729.5); Headache (784.0)
Closest medications: HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; PREDNISONE 20 MG PO TABS
Closest procedures: NEUROLOGY, INT; SENSE/MIXED N CONDUCTION TST; REFERRAL TO NEUROLOGY, INT; PR NERVE COND STUDY MOTOR W F-WAVE EA; PR EMG NEEDLE ONE EXTREMITY

Edema (782.3)
Closest diagnoses: Congestive heart failure, unspecified (428.0); Unspecified essential hypertension (401.9); Atrial fibrillation (427.31); Chronic kidney disease, Stage III (moderate) (585.3)
Closest medications: FUROSEMIDE 20 MG PO TABS; HYDROCHLOROTHIAZIDE 25 MG PO TABS; HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; CEPHALEXIN 500 MG PO CAPS; FUROSEMIDE 40 MG PO TABS
Closest procedures: DEBRIDEMENT OF NAILS, 6 OR MORE; PR OV EST PT MIN SERV; EKG; PR ECG AND INTERPRETATION; CHEST PA & LATERAL

Asthma, unspecified type, unspecified (493.90)
Closest diagnoses: Allergic rhinitis, cause unspecified (477.9); Acute bronchitis (466.0); Bronchitis, not specified as acute or chronic (490); Acute upper respiratory infections of unspecified site (465.9)
Closest medications: AZITHROMYCIN 250 MG PO TABS; ALBUTEROL SULFATE HFA 108 (90 BASE) MCG/ACT IN AERS; ALBUTEROL SULFATE HFA 108 MCG/ACT IN AERS; PROMETHAZINE-CODEINE 6.25-10 MG/5ML PO SYRP; ALBUTEROL 90 MCG/ACT IN AERS
Closest procedures: CHEST PA & LATERAL; PULSE OXIMETRY SINGLE; PR INHALATION RX FOR OBSTRUCTION MDI/NEB; XR CHEST 2 VIEWS PA LATERAL; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY

Routine gynecological examination (V72.31)
Closest diagnoses: Other screening mammogram (V76.12); Screening for lipoid disorders (V77.91); Routine general medical examination at a health care facility (V70.0); Screening for unspecified condition (V82.9)
Closest procedures: OBTAINING SCREEN PAP SMEAR; GYN CYTOLOGY (PAP) PA; MAMMOGRAM SCREENING; MAMMO SCREENING BILATERAL DIGITAL; SCRN MAMMO DIR DIG IMAGE BIL (V76.12)

Diabetes mellitus without mention of complication, type II or unspecified type, uncontrolled (250.02)
Closest diagnoses: Mixed hyperlipidemia (272.2); Other abnormal glucose (790.29); Obesity, unspecified (278.00); Pure hypercholesterolemia (272.0)
Closest medications: METFORMIN HCL 500 MG PO TABS; METFORMIN HCL 1000 MG PO TABS; GLUCOSE BLOOD VI STRP; LISINOPRIL 10 MG PO TABS; LISINOPRIL 20 MG PO TABS
Closest procedures: DIABETIC EYE EXAM (NO BILL); DIABETES EDUCATION, INT; OPHTHALMOLOGY, INT; DIABETIC FOOT EXAM (NO BILL); INFLUENZA VAC 3+YR (V04.81) IM

Need for prophylactic vaccination and inoculation against diphtheria-tetanus-pertussis, combined [DTP] [DTaP] (V06.1)
Closest diagnoses: Laboratory examination ordered as part of a routine general medical examination (V72.62); Need for prophylactic vaccination and inoculation against other combinations of diseases (V06.8); Screening for lipoid disorders (V77.91); Need for prophylactic vaccination and inoculation against influenza (V04.81)
Closest medications: FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP; AZITHROMYCIN 250 MG PO TABS; ZOLPIDEM TARTRATE 10 MG PO TABS; SIMVASTATIN 20 MG PO TABS
Closest procedures: TDAP VAC 11-64 YO *ADACEL (V06.8) IM; MAMMO SCREENING BILATERAL DIGITAL; PR INFLUENZA VACC PRES FREE 3+ YO 0.5ML IM; REFERRAL TO GI, INT; PNEUMOCOCCAL VAC (V03.82) IM/SQ

Diabetes mellitus without mention of complication, type II or unspecified type, not stated as uncontrolled (250.00)
Closest diagnoses: Unspecified essential hypertension (401.9); Diabetes mellitus without mention of complication, type II or unspecified type, uncontrolled (250.02); Obesity, unspecified (278.00); Benign essential hypertension (401.1)
Closest medications: METFORMIN HCL 500 MG PO TABS; METFORMIN HCL 1000 MG PO TABS; GLUCOSE BLOOD VI STRP; HYDROCHLOROTHIAZIDE 25 MG PO TABS; LISINOPRIL 10 MG PO TABS
Closest procedures: DIABETIC EYE EXAM (NO BILL); PR DIABETIC EYE EXAM (NO BILL); INFLUENZA VAC 3+YR (V04.81) IM; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY; OPHTHALMOLOGY, INT

Personal history of colonic polyps (V12.72)
Closest diagnoses: Special screening for malignant neoplasms of colon (V76.51); Family history of malignant neoplasm of gastrointestinal tract (V16.0); Diverticulosis of colon (without mention of hemorrhage) (562.10); Routine general medical examination at a health care facility (V70.0)
Closest medications: PEG-KCL-NACL-NASULF-NA ASC-C 100 G PO SOLR; MOVIPREP 100 G PO SOLR; SIMVASTATIN 20 MG PO TABS; OMEPRAZOLE 20 MG PO CPDR
Closest procedures: PR COLONOSCOPY W/BIOPSY(S); PR COLONOSCOPY W/RMVL LESN/TUM/POLYP SNARE; REFERRAL TO GI, INT; GI, INT; PR COLONOSCOPY DIAG

Acute upper respiratory infections of unspecified site (465.9)
Closest diagnoses: Cough (786.2); Acute sinusitis, unspecified (461.9); Acute bronchitis (466.0); Acute pharyngitis (462)
Closest medications: AZITHROMYCIN 250 MG PO TABS; PROMETHAZINE-CODEINE 6.25-10 MG/5ML PO SYRP; AMOXICILLIN 500 MG PO CAPS; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP; FLONASE 50 MCG/ACT NA SUSP
Closest procedures: PULSE OXIMETRY SINGLE; SERV PROV DURING REG SCHED EVE/WKEND/HOL HRS; CHEST PA & LATERAL; GYN CYTOLOGY (PAP) PA; INFLUENZA VAC (FLU CLINIC ONLY) 3+YO PA

Need for prophylactic vaccination and inoculation against tetanus-diphtheria [Td] (DT) (V06.5)
Closest diagnoses: Routine general medical examination at a health care facility (V70.0); Other general medical examination for administrative purposes (V70.3); Need for prophylactic vaccination and inoculation against other combinations of diseases (V06.8); Multiphasic screening (V82.6)
Closest procedures: TD VAC 7+ YO (V06.5) IM; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY; CHART ABSTRACTION SIMP (V70.3); MAMMOGRAM SCREENING; IMMUN ADMIN, EACH ADD

Esophageal reflux (530.81)
Closest diagnoses: Unspecified essential hypertension (401.9); Allergic rhinitis, cause unspecified (477.9); Cough (786.2); Routine general medical examination at a health care facility (V70.0)
Closest medications: OMEPRAZOLE 20 MG PO CPDR; ACIPHEX 20 MG PO TBEC; PROTONIX 40 MG PO TBEC; PRILOSEC 20 MG PO CPDR; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP
Closest procedures: GI, INT; PR ECG AND INTERPRETATION; PR LARYNGOSCOPY FLEX DIAG; PR OV EST PT LEV 3; CHEST PA & LATERAL

Other specified aftercare following surgery (V58.49)
Closest diagnoses: Lens replaced by other means (V43.1); Senile cataract, unspecified (366.10); Unspecified aftercare (V58.9); Follow-up examination, following surgery, unspecified (V67.00)
Closest medications: GLASSES; HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; VICODIN 5-500 MG PO TABS; CEPHALEXIN 500 MG PO CAPS; IBUPROFEN 600 MG PO TABS
Closest procedures: PR ECG AND INTERPRETATION; PR OPHTHALMIC BIOMETRY; FOOT COMPLETE; EKG; PR REFRACTION

Diarrhea (787.91)
Closest diagnoses: Irritable bowel syndrome (564.1); Nausea with vomiting (787.01); Hemorrhage of rectum and anus (569.3); Nausea alone (787.02)
Closest medications: METRONIDAZOLE 500 MG PO TABS; CIPRO 500 MG PO TABS; OMEPRAZOLE 20 MG PO CPDR; CIPROFLOXACIN HCL 500 MG PO TABS; PROMETHAZINE HCL 25 MG PO TABS
Closest procedures: GI, INT; REFERRAL TO GI, INT; SERV PROV DURING REG SCHED EVE/WKEND/HOL HRS; PR COLONOSCOPY W/BIOPSY(S); US ABDOMEN COMPLETE

Coronary atherosclerosis of unspecified type of vessel, native or graft (414.00)
Closest diagnoses: Unspecified essential hypertension (401.9); Atrial fibrillation (427.31); Benign essential hypertension (401.1); Congestive heart failure, unspecified (428.0)
Closest medications: SIMVASTATIN 40 MG PO TABS; NITROGLYCERIN 0.4 MG SL SUBL; SIMVASTATIN 20 MG PO TABS; LISINOPRIL 10 MG PO TABS; LISINOPRIL 20 MG PO TABS
Closest procedures: PR ECG AND INTERPRETATION; EKG; STRESS ECHO; TTE W/O DOPPLER CMPLT; PR CARDIAC STRESS COMPLTE

Dizziness and giddiness (780.4)
Closest diagnoses: Benign paroxysmal positional vertigo (386.11); Other malaise and fatigue (780.79); Unspecified hearing loss (389.9); Chest pain, unspecified (786.50)
Closest medications: LORAZEPAM 0.5 MG PO TABS; MECLIZINE HCL 25 MG PO TABS; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP
Closest procedures: PR ECG AND INTERPRETATION; PR TYMPANOMETRY (IMPEDANCE TESTING); EKG; PR COMPREHENSIVE HEARING TEST; ENT, INT

Impacted cerumen (380.4)
Closest diagnoses: Unspecified hearing loss (389.9); Infective otitis externa, unspecified (380.10); Dysfunction of Eustachian tube (381.81); Dizziness and giddiness (780.4)
Closest medications: NEOMYCIN-POLYMYXIN-HC 1 % OT SOLN; AMOXICILLIN 500 MG PO CAPS; FLONASE 50 MCG/ACT NA SUSP; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP
Closest procedures: PR REMVL EAR WAX UNI/BILAT (380.4); ENT, INT; PR COMPREHENSIVE HEARING TEST; PR TYMPANOMETRY (IMPEDANCE TESTING); CERUMEN RMVL BY NURSE/MA (NO BILL)

Other chronic dermatitis due to solar radiation (692.74)
Closest diagnoses: Personal history of other malignant neoplasm of skin (V10.83); Other seborrheic keratosis (702.19); Benign neoplasm of other specified sites of skin (216.8); Inflamed seborrheic keratosis (702.11)
Closest medications: TRIAMCINOLONE ACETONIDE 0.1 % EX CREA; FLUOCINONIDE 0.05 % EX CREA
Closest procedures: PR DESTR PRE-MALIG LESN 1ST; PR DESTR PRE-MALIG LESN 2-14; PR BIOPSY OF SKIN/SQ/MUC MEMBR LESN SINGLE; PR DESTRUCT BENIGN LESION UP TO 14 LESIONS (LIQUID NITROGEN); PR OV EST PT LEV 3

Chronic kidney disease, Stage III (moderate) (585.3)
Closest diagnoses: Anemia, unspecified (285.9); Diabetes with renal manifestations, type II or unspecified type, not stated as uncontrolled (250.40); Proteinuria (791.0); Unspecified essential hypertension (401.9)
Closest medications: AMLODIPINE BESYLATE 10 MG PO TABS; FUROSEMIDE 20 MG PO TABS; AMLODIPINE BESYLATE 5 MG PO TABS; LISINOPRIL 40 MG PO TABS; HYDROCHLOROTHIAZIDE 25 MG PO TABS
Closest procedures: PR PROTHROMBIN TIME; PR TX/ PROPHY/ DX INJ SQ/ IM; PR OV EST PT MIN SERV; EKG; DEBRIDEMENT OF NAILS, 6 OR MORE

Gout, unspecified (274.9)
Closest diagnoses: Unspecified essential hypertension (401.9); Other and unspecified hyperlipidemia (272.4); Long-term (current) use of other medications (V58.69); Screening for malignant neoplasms of prostate (V76.44)
Closest medications: COLCHICINE 0.6 MG PO TABS; ALLOPURINOL 100 MG PO TABS; ALLOPURINOL 300 MG PO TABS; INDOMETHACIN 50 MG PO CAPS; ATENOLOL 50 MG PO TABS
Closest procedures: DEBRIDEMENT OF NAILS, 6 OR MORE; TRIAMCINOLONE ACETONIDE *KENALOG INJ

Other specified diseases of nail (703.8)
Closest diagnoses: Keratoderma, acquired (701.1); Peripheral vascular disease, unspecified (443.9); Ingrowing nail (703.0); Diabetes with neurological manifestations, type II or unspecified type, not stated as uncontrolled (250.60)
Closest medications: ECONAZOLE NITRATE 1 % EX CREA
Closest procedures: DEBRIDEMENT OF NAILS, 6 OR MORE; TRIM SKIN LESIONS, 2 TO 4; PARE CORN/CALLUS, SINGLE LESION; PODIATRY, INT; PR DESTR PRE-MALIG LESN 1ST

Preglaucoma, unspecified (365.00)
Closest diagnoses: Tear film insufficiency, unspecified (375.15); Glaucomatous atrophy [cupping] of optic disc (377.14); Vitreous degeneration (379.21); Lens replaced by other means (V43.1)
Closest procedures: PR VISUAL FIELD EXAM EXTENDED; PR FUNDAL PHOTOGRAPHY; PR OPHTHALMIC DX IMAGING; PR VISUAL FIELD EXAM LIMITED; PR CMPTR OPHTH IMG OPTIC NERVE

Pre-operative examination, unspecified (V72.84)
Closest diagnoses: Pre-operative cardiovascular examination (V72.81); Other specified preoperative examination (V72.83); Unspecified aftercare (V58.9); Senile cataract, unspecified (366.10)
Closest medications: HYDROCODONE-ACETAMINOPHEN 5-500 MG PO TABS; VICODIN 5-500 MG PO TABS; IBUPROFEN 600 MG PO TABS
Closest procedures: PR ECG AND INTERPRETATION; EKG; CHEST PA & LATERAL; PR OV EST PT LEV 3; ECG AND INTERPRETATION (ORDERS ONLY) SZ

Cough (786.2)
Closest diagnoses: Acute upper respiratory infections of unspecified site (465.9); Bronchitis, not specified as acute or chronic (490); Asthma, unspecified type, unspecified (493.90); Acute sinusitis, unspecified (461.9)
Closest medications: AZITHROMYCIN 250 MG PO TABS; PROMETHAZINE-CODEINE 6.25-10 MG/5ML PO SYRP; ALBUTEROL SULFATE HFA 108 (90 BASE) MCG/ACT IN AERS; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP; GUAIFENESIN-CODEINE 100-10 MG/5ML PO SYRP
Closest procedures: CHEST PA & LATERAL; XR CHEST 2 VIEWS PA LATERAL; PULSE OXIMETRY SINGLE; SERV PROV DURING REG SCHED EVE/WKEND/HOL HRS; PR ECG AND INTERPRETATION

Palpitations (785.1)
Closest diagnoses: Cardiac dysrhythmia, unspecified (427.9); Syncope and collapse (780.2); Dizziness and giddiness (780.4); Shortness of breath (786.05)
Closest medications: ATENOLOL 25 MG PO TABS; METOPROLOL SUCCINATE ER 25 MG PO TB24; LORAZEPAM 0.5 MG PO TABS
Closest procedures: HOLTER 24 HR ECG; PR ECG AND INTERPRETATION; PR CARDIAC STRESS COMPLTE; EKG; TTE W/O DOPPLER CMPLT

Need for prophylactic vaccination and inoculation against influenza (V04.81)
Closest diagnoses: Other and unspecified hyperlipidemia (272.4); Need for prophylactic vaccination and inoculation against diphtheria-tetanus-pertussis, combined [DTP] [DTaP] (V06.1); Unspecified essential hypertension (401.9); Screening for unspecified condition (V82.9)
Closest medications: AZITHROMYCIN 250 MG PO TABS; FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP; SIMVASTATIN 20 MG PO TABS; HYDROCHLOROTHIAZIDE 25 MG PO TABS
Closest procedures: PR INFLUENZA VACC PRES FREE 3+ YO 0.5ML IM; INFLUENZA VAC 3+YR (V04.81) IM; INFLUENZA VAC (FLU CLINIC ONLY) 3+YO PA; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY; TDAP VAC 11-64 YO *ADACEL (V06.8) IM

Benign essential hypertension (401.1)
Closest diagnoses: Unspecified essential hypertension (401.9); Pure hypercholesterolemia (272.0); Mixed hyperlipidemia (272.2); Diabetes mellitus without mention of complication, type II or unspecified type, not stated as uncontrolled (250.00)
Closest medications: HYDROCHLOROTHIAZIDE 25 MG PO TABS; ATENOLOL 50 MG PO TABS; LISINOPRIL 10 MG PO TABS; LISINOPRIL 40 MG PO TABS; LISINOPRIL 20 MG PO TABS
Closest procedures: PR ECG AND INTERPRETATION; INFLUENZA VAC 3+YR (V04.81) IM; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY; GI, INT; PR OV EST PT LEV 3

Unspecified essential hypertension (401.9)
Closest diagnoses: Diabetes mellitus without mention of complication, type II or unspecified type, not stated as uncontrolled (250.00); Benign essential hypertension (401.1); Coronary atherosclerosis of unspecified type of vessel, native or graft (414.00); Obesity, unspecified (278.00)
Closest medications: HYDROCHLOROTHIAZIDE 25 MG PO TABS; LISINOPRIL 10 MG PO TABS; ATENOLOL 50 MG PO TABS; LISINOPRIL 20 MG PO TABS; SIMVASTATIN 20 MG PO TABS
Closest procedures: PR ECG AND INTERPRETATION; EKG; INFLUENZA VAC 3+YR (V04.81) IM; IMMUN ADMIN IM/SQ/ID/PERC 1ST VAC ONLY

Other screening mammogram (V76.12)
Closest diagnoses: Screening for malignant neoplasms of cervix (V76.2); Routine general medical examination at a health care facility (V70.0); Screening for unspecified condition (V82.9); Special screening for malignant neoplasms of colon (V76.51)
Closest medications: FLUTICASONE PROPIONATE 50 MCG/ACT NA SUSP
Closest procedures: MAMMO SCREENING BILATERAL DIGITAL; MAMMOGRAM SCREENING

Elevated blood pressure reading without diagnosis of hypertension (796.2)
Closest diagnoses: Screening for unspecified condition (V82.9); Impaired fasting glucose (790.21); Family history of ischemic heart disease (V17.3); Obesity, unspecified (278.00)
-GYN CYTOLOGY (PAP) PA
-OBTAINING SCREEN PAP
SMEAR
-PR OV EST PT LEV 3
-AZITHROMYCIN 250 MG PO
TABS
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-CHART ABSTRACTION
SIMP (V70.3)
-PR ECG AND
INTERPRETATION
-TD VAC 7+ YO (V06.5) IM
Symptomatic menopausal
or female climacteric states
(627.2)
Congestive heart failure,
unspecified (428.0)
Other and unspecified
hyperlipidemia (272.4)
Pure hypercholesterolemia
(272.0)
Mixed hyperlipidemia
(272.2)
Unspecified acquired
hypothyroidism (244.9)
Depressive disorder, not
elsewhere classified (311)
-Other screening
mammogram (V76.12)
-Routine gynecological
examination (V72.31)
-Screening for malignant
neoplasms of cervix
(V76.2)
-Disorder of bone and
cartilage, unspecified
(733.90)
-Edema (782.3)
-Coronary atherosclerosis
of unspecified type of
vessel, native or graft
(414.00)
-Aortic valve disorders
(424.1)
-Mitral valve disorders
(424.0)
-Diabetes mellitus without
mention of complication,
type II or unspecified type,
not stated as uncontrolled
(250.00)
-Benign essential
hypertension (401.1)
-Coronary atherosclerosis
of unspecified type of
vessel, native or graft
(414.00)
-Need for prophylactic
vaccination and inoculation
against influenza (V04.81)
-Benign essential
hypertension (401.1)
-Other and unspecified
hyperlipidemia (272.4)
-Impaired fasting glucose
(790.21)
-Diabetes mellitus without
mention of complication,
type II or unspecified type,
uncontrolled (250.02)
-Benign essential
hypertension (401.1)
-Diabetes mellitus without
mention of complication,
type II or unspecified type,
uncontrolled (250.02)
-Impaired fasting glucose
(790.21)
-Obesity, unspecified
(278.00)
-Unspecified essential
hypertension (401.9)
-Benign essential
hypertension (401.1)
-Other screening
mammogram (V76.12)
-Other malaise and fatigue
(780.79)
-Insomnia, unspecified
(780.52)
-Other malaise and fatigue
(780.79)
-Unspecified essential
hypertension (401.9)
-ZOLPIDEM TARTRATE 10
MG PO TABS
-MAMMOGRAM
SCREENING
-MAMMO SCREENING
BILATERAL DIGITAL
-GYN CYTOLOGY (PAP) PA
-PR OV EST PT LEV 3
-OBTAINING SCREEN PAP
SMEAR
-FUROSEMIDE 20 MG PO
TABS
-FUROSEMIDE 40 MG PO
TABS
-LASIX 20 MG PO TABS
-LASIX 40 MG PO TABS
-LISINOPRIL 5 MG PO TABS
-TTE W/O DOPPLER CMPLT
-PR OV EST PT MIN SERV
-TRANSTHORACIC
COMPLETE, ECHO
-CHEST PA & LATERAL
-PR PROTHROMBIN TIME
-SIMVASTATIN 20 MG PO
TABS
-SIMVASTATIN 40 MG PO
TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-LISINOPRIL 10 MG PO TABS
-LIPITOR 10 MG PO TABS
-PR ECG AND
INTERPRETATION
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR OV EST PT LEV 3
-INFLUENZA VAC 3+YR
(V04.81) IM
-PR INFLUENZA VACC
PRES FREE 3+ YO 0.5ML IM
-SIMVASTATIN 20 MG PO
TABS
-SIMVASTATIN 40 MG PO
TABS
-LISINOPRIL 10 MG PO TABS
-ATENOLOL 50 MG PO TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-EKG
-PR ECG AND
INTERPRETATION
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-MAMMO SCREENING
BILATERAL DIGITAL
-SIMVASTATIN 40 MG PO
TABS
-SIMVASTATIN 20 MG PO
TABS
-LISINOPRIL 10 MG PO TABS
-SIMVASTATIN 10 MG PO
TABS
-METFORMIN HCL 500 MG
PO TABS
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-INFLUENZA VAC 3+YR
(V04.81) IM
-EKG
-PR CARDIAC STRESS
COMPLTE
-LEVOTHYROXINE SODIUM
75 MCG PO TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-LEVOTHYROXINE SODIUM
100 MCG PO TABS
-SIMVASTATIN 20 MG PO
TABS
-SIMVASTATIN 40 MG PO
TABS
-ZOLPIDEM TARTRATE 10
MG PO TABS
-LORAZEPAM 0.5 MG PO
TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-TRAZODONE HCL 50 MG PO
-MAMMO SCREENING
BILATERAL DIGITAL
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR OV EST PT LEV 3
-MAMMOGRAM
SCREENING
-PR INFLUENZA VACC
PRES FREE 3+ YO 0.5ML IM
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR OV EST PT LEV 3
-INFLUENZA VAC 3+YR
(V04.81) IM
-Other and unspecified
hyperlipidemia (272.4)
Personal history of other
malignant neoplasm of skin
(V10.83)
Bronchitis, not specified as
acute or chronic (490)
-Other chronic dermatitis
due to solar radiation
(692.74)
-Benign neoplasm of other
specified sites of skin
(216.8)
-Other seborrheic keratosis
(702.19)
-Neoplasm of uncertain
behavior of skin (238.2)
-Acute upper respiratory
infections of unspecified
site (465.9)
-Cough (786.2)
-Acute sinusitis,
unspecified (461.9)
-Acute nasopharyngitis
[common cold] (460)
Obstructive sleep apnea
(adult)(pediatric) (327.23)
-Sleep disturbance,
unspecified (780.50)
-Obesity, unspecified
(278.00)
-Other respiratory
abnormalities (786.09)
-Other malaise and fatigue
(780.79)
Anemia, unspecified
(285.9)
-Chronic kidney disease,
Stage III (moderate)
(585.3)
-Edema (782.3)
-Unspecified essential
hypertension (401.9)
-Iron deficiency anemia,
unspecified (280.9)
-Tear film insufficiency,
unspecified (375.15)
-Vitreous degeneration
(379.21)
-Regular astigmatism
(367.21)
-Other specified aftercare
following surgery (V58.49)
Lens replaced by other
means (V43.1)
Chest pain, unspecified
(786.50)
-Palpitations (785.1)
-Other chest pain (786.59)
-Other respiratory
abnormalities (786.09)
-Other malaise and fatigue
(780.79)
Acute sinusitis, unspecified
(461.9)
-Cough (786.2)
-Bronchitis, not specified as
acute or chronic (490)
-Allergic rhinitis, cause
unspecified (477.9)
-Acute pharyngitis (462)
TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-FLUOCINONIDE 0.05 % EX
CREA
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-PROMETHAZINE-CODEINE
6.25-10 MG/5ML PO SYRP
-AZITHROMYCIN 250 MG PO
TABS
-ZITHROMAX Z-PAK 250 MG
PO TABS
-ALBUTEROL 90 MCG/ACT
IN AERS
-ALBUTEROL SULFATE HFA
108 MCG/ACT IN AERS
-ZOLPIDEM TARTRATE 10
MG PO TABS
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-SIMVASTATIN 20 MG PO
TABS
-ZOLPIDEM TARTRATE 5
MG PO TABS
-AZITHROMYCIN 250 MG PO
TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-GLASSES
-AZITHROMYCIN 250 MG PO
TABS
-NITROGLYCERIN 0.4 MG SL
SUBL
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-OMEPRAZOLE 20 MG PO
CPDR
-AMOXICILLIN 500 MG PO
CAPS
-AZITHROMYCIN 250 MG PO
TABS
-FLONASE 50 MCG/ACT NA
SUSP
-PROMETHAZINE-CODEINE
6.25-10 MG/5ML PO SYRP
-MAMMO SCREENING
BILATERAL DIGITAL
-PR DESTR PRE-MALIG
LESN 1ST
-PR DESTR PRE-MALIG
LESN 2-14
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-SURGICAL PATHOLOGY
PA
-PR OV EST PT LEV 3
-PULSE OXIMETRY SINGLE
-CHEST PA & LATERAL
-PR INHALATION RX FOR
OBSTRUCTION MDI/NEB
-SERV PROV DURING REG
SCHED EVE/WKEND/HOL
HRS
-XR CHEST 2 VIEWS PA
LATERAL
-REFERRAL TO SLEEP
DISORDERS, INT
-PR POLYSOMNOGRAPHY
4 OR MORE
-PR POLYSOMNOGRAPHY
W/C P
-SLEEP STUDY ATTENDED
-CPAP EQUIPMENT DME
-INFLUENZA VAC 3+YR
(V04.81) IM
-PR ECG AND
INTERPRETATION
-GI, INT
-EKG
-CHEST PA & LATERAL
-PR VISUAL FIELD EXAM
EXTENDED
-PR REFRACTION
-PR OPHTHALMIC DX
IMAGING
-DIABETIC EYE EXAM (NO
BILL)
-PR DIABETIC EYE EXAM
(NO BILL)
-PR ECG AND
INTERPRETATION
-PR CARDIAC STRESS
COMPLTE
-EKG
-STRESS ECHO
-CHEST PA & LATERAL
-PULSE OXIMETRY SINGLE
-SERV PROV DURING REG
SCHED EVE/WKEND/HOL
HRS
-CHEST PA & LATERAL
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
Atrial fibrillation (427.31)
Urinary tract infection, site
not specified (599.0)
-Encounter for therapeutic
drug monitoring (V58.83)
-Coronary atherosclerosis
of unspecified type of
vessel, native or graft
(414.00)
-Congestive heart failure,
unspecified (428.0)
-Unspecified essential
hypertension (401.9)
-Dysuria (788.1)
-Urinary frequency
(788.41)
-Other malaise and fatigue
(780.79)
-Vaginitis and
vulvovaginitis, unspecified
(616.10)
Pain in joint, shoulder
region (719.41)
-Other affections of
shoulder region, not
elsewhere classified (726.2)
-Rotator cuff (capsule)
sprain (840.4)
-Cervicalgia (723.1)
-Pain in joint, lower leg
(719.46)
Encounter for therapeutic
drug monitoring (V58.83)
-Atrial fibrillation (427.31)
-Congestive heart failure,
unspecified (428.0)
-Coronary atherosclerosis
of unspecified type of
vessel, native or graft
(414.00)
Malignant neoplasm of
breast (female), unspecified
(174.9)
-Lump or mass in breast
(611.72)
-Unspecified aftercare
(V58.9)
-Disorder of bone and
cartilage, unspecified
(733.90)
-Pre-operative examination,
unspecified (V72.84)
Contact dermatitis and
other eczema, unspecified
cause (692.9)
-Unspecified disorder of
skin and subcutaneous
tissue (709.9)
-Other atopic dermatitis
and related conditions
(691.8)
-Other seborrheic keratosis
(702.19)
-Other chronic dermatitis
due to solar radiation
(692.74)
-Special screening for
malignant neoplasms of
colon (V76.51)
-Diverticulosis of colon
(without mention of
Benign neoplasm of colon
(211.3)
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-WARFARIN SODIUM 5 MG
PO TABS
-ATENOLOL 50 MG PO TABS
-ATENOLOL 25 MG PO TABS
-METOPROLOL SUCCINATE
ER 50 MG PO TB24
-SIMVASTATIN 40 MG PO
TABS
ONLY
-ENT, INT
-NITROFURANTOIN
MONOHYD MACRO 100 MG
PO CAPS
-CIPROFLOXACIN HCL 250
MG PO TABS
-SULFAMETHOXAZOLETMP DS 800-160 MG PO
TABS
-MACROBID 100 MG PO
CAPS
-CIPRO 500 MG PO TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-VICODIN 5-500 MG PO TABS
-NAPROXEN 500 MG PO
TABS
-SERV PROV DURING REG
SCHED EVE/WKEND/HOL
HRS
-PR URINALYSIS AUTO
W/O SCOPE
-PR ECG AND
INTERPRETATION
-CYSTOURETHROSCOPY
-PR URINALYSIS DIP W/O
SCOPE
-WARFARIN SODIUM 5 MG
PO TABS
-COUMADIN 5 MG PO TABS
-WARFARIN SODIUM 2.5 MG
PO TABS
-ATENOLOL 50 MG PO TABS
-WARFARIN SODIUM 2 MG
PO TABS
-LORAZEPAM 1 MG PO
TABS
-LORAZEPAM 0.5 MG PO
TABS
-TAMOXIFEN CITRATE 20
MG PO TABS
-VICODIN 5-500 MG PO TABS
-ANASTROZOLE 1 MG PO
TABS
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-FLUOCINONIDE 0.05 % EX
CREA
-TRIAMCINOLONE
ACETONIDE 0.1 % EX OINT
-DESONIDE 0.05 % EX CREA
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-PEG-KCL-NACL-NASULFNA ASC-C 100 G PO SOLR
-MOVIPREP 100 G PO SOLR
-OMEPRAZOLE 20 MG PO
CPDR
-PR OV EST PT MIN SERV
-PR PROTHROMBIN TIME
-EKG
-PR ECG AND
INTERPRETATION
-TTE W/O DOPPLER CMPLT
-SHOULDER 2+ VIEWS
-ASP/INJ MAJOR
JOINT/BURS/CYST
-TRIAMCINOLONE
ACETONIDE *KENALOG
INJ
-UPPER EXTREM JOINT
MRI
-PR THERAPEUTIC
EXERCISES
-PR OV EST PT MIN SERV
-PR PROTHROMBIN TIME
-DEBRIDEMENT OF NAILS,
6 OR MORE
-PR DESTR PRE-MALIG
LESN 2-14
-PR ECG AND
INTERPRETATION
-MAMMOGRAM BILAT
DIAGNOSTIC
-DEXA BONE DENSITY
AXIAL SKELETON
HIPS/PELV/SPINE
-MAMMO DIAGNOSTIC
BILATERAL DIGITAL
-VENIPUNC
FNGR,HEEL,EAR
-REFERRAL TO
ONCOLOGY, INT
-DERMATOLOGY, INT
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-PR DESTR PRE-MALIG
LESN 1ST
-REFERRAL TO
DERMATOLOGY, INT
-PR OV EST PT LEV 3
-PR COLONOSCOPY
W/BIOPSY(S)
-PR COLONOSCOPY
W/RMVL
LESN/TUM/POLYP SNARE
hemorrhage) (562.10)
-Family history of
malignant neoplasm of
gastrointestinal tract
(V16.0)
-Hemorrhage of rectum and
anus (569.3)
-Other screening
mammogram (V76.12)
-Symptomatic menopausal
or female climacteric states
(627.2)
-Routine general medical
examination at a health care
facility (V70.0)
-Special screening for
osteoporosis (V82.81)
-Myopia (367.1)
-Hypermetropia (367.0)
-Astigmatism, unspecified
(367.20)
-Other specified visual
disturbances (368.8)
-SIMVASTATIN 20 MG PO
TABS
-REFERRAL TO GI, INT
-GI, INT
-PR COLONOSCOPY DIAG
-FOSAMAX 70 MG PO TABS
-DEXA BONE DENSITY
AXIAL SKELETON
HIPS/PELV/SPINE
-BONE DENSITY (DEXA)
-MAMMO SCREENING
BILATERAL DIGITAL
-DEXA BONE DENSITY
STUDY 1+ SITES AXIAL
SKEL (HIP/PELVIS/SPINE)
-PR OV EST PT LEV 3
-PR REFRACTION
-DIABETIC EYE EXAM (NO
BILL)
-PR VISUAL FIELD EXAM
EXTENDED
-INFLUENZA VAC (FLU
CLINIC ONLY) 3+YO PA
-PR VISUAL FIELD EXAM
LIMITED
-Screening for malignant
neoplasms of cervix
(V76.2)
-Routine general medical
examination at a health care
facility (V70.0)
-Routine gynecological
examination (V72.31)
-Screening for unspecified
condition (V82.9)
-Regular astigmatism
(367.21)
-Astigmatism, unspecified
(367.20)
-Other specified visual
disturbances (368.8)
-Hypermetropia (367.0)
-CHART ABSTRACTION
SIMP (V70.3)
-PR OV EST PT LEV 2
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-GYN CYTOLOGY (PAP) PA
-PR OV EST PT LEV 3
Acute pharyngitis (462)
-Acute sinusitis,
unspecified (461.9)
-Acute nasopharyngitis
[common cold] (460)
-Bronchitis, not specified as
acute or chronic (490)
-Conjunctivitis, unspecified
(372.30)
Screening for unspecified
condition (V82.9)
-Other screening
mammogram (V76.12)
-Other general medical
examination for
administrative purposes
(V70.3)
-Need for prophylactic
vaccination and inoculation
against tetanus-diphtheria
[Td] (DT) (V06.5)
-Screening for malignant
neoplasms of cervix
(V76.2)
-AZITHROMYCIN 250 MG PO
TABS
-AMOXICILLIN 500 MG PO
CAPS
-FLONASE 50 MCG/ACT NA
SUSP
-PROMETHAZINE-CODEINE
6.25-10 MG/5ML PO SYRP
-AUGMENTIN 875-125 MG PO
TABS
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-MAMMOGRAM SCREENING
-GYN CYTOLOGY (PAP) PA
Disorder of bone and
cartilage, unspecified
(733.90)
Presbyopia (367.4)
Screening for lipoid
disorders (V77.91)
Myopia (367.1)
-GLASSES
-GLASSES
-PR REFRACTION
-CONTACT LENS MISC PA
-INFLUENZA VAC (FLU
CLINIC ONLY) 3+YO PA
-PR VISUAL FIELD EXAM
EXTENDED
-DIABETIC EYE EXAM (NO
BILL)
-PR STREP A RAPID ASSAY
W/OPTIC
-SERV PROV DURING REG
SCHED EVE/WKEND/HOL
HRS
-PULSE OXIMETRY SINGLE
-ENT, INT
-CHART ABSTRACTION
SIMP (V70.3)
Screening for malignant
neoplasms of cervix
(V76.2)
Impotence of organic origin
(607.84)
Unspecified sinusitis
(chronic) (473.9)
Need for prophylactic
vaccination and inoculation
against viral hepatitis
(V05.3)
Senile cataract, unspecified
(366.10)
-Other screening
mammogram (V76.12)
-Screening for lipoid
disorders (V77.91)
-Routine general medical
examination at a health care
facility (V70.0)
-Screening for unspecified
condition (V82.9)
-Hypertrophy (benign) of
prostate without urinary
obstruction and other lower
urinary tract symptom
(LUTS) (600.00)
-Other and unspecified
hyperlipidemia (272.4)
-Hypertrophy (benign) of
prostate with urinary
obstruction and other lower
urinary tract symptoms
(LUTS) (600.01)
-Unspecified essential
hypertension (401.9)
-Acute sinusitis,
unspecified (461.9)
-Allergic rhinitis, cause
unspecified (477.9)
-Dysfunction of Eustachian
tube (381.81)
-Headache (784.0)
-Other specified counseling
(V65.49)
-Need for prophylactic
vaccination and inoculation
against other combinations
of diseases (V06.8)
-Need for prophylactic
vaccination and inoculation
against poliomyelitis
(V04.0)
-Need for prophylactic
vaccination and inoculation
against tetanus-diphtheria
[Td] (DT) (V06.5)
-Presbyopia (367.4)
-Regular astigmatism
(367.21)
-Preglaucoma, unspecified
(365.00)
-Tear film insufficiency,
unspecified (375.15)
-GYN CYTOLOGY (PAP) PA
-OBTAINING SCREEN PAP
SMEAR
-MAMMOGRAM SCREENING
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-VIAGRA 100 MG PO TABS
-SILDENAFIL CITRATE 100
MG PO TABS
-VIAGRA 50 MG PO TABS
-TADALAFIL 20 MG PO
TABS
-SIMVASTATIN 20 MG PO
TABS
-PR OV EST PT LEV 3
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR DESTR PRE-MALIG
LESN 1ST
-PR OV EST PT LEV 2
-INFLUENZA VAC 3+YR
(V04.81) IM
-AMOXICILLIN-POT
CLAVULANATE 875-125 MG
PO TABS
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-MOMETASONE FUROATE
50 MCG/ACT NA SUSP
-FLONASE 50 MCG/ACT NA
SUSP
-AUGMENTIN 875-125 MG PO
TABS
-CIPROFLOXACIN HCL 500
MG PO TABS
-MEFLOQUINE HCL 250 MG
PO TABS
-PR NASAL ENDO DIAG
-ENT, INT
-CT MAXILLOFACIAL W/O
CONTRAST
-PR LARYNGOSCOPY FLEX
DIAG
-REFERRAL TO ENT, INT
-GLASSES
-PR REFRACTION
-PR VISUAL FIELD EXAM
EXTENDED
-DIABETIC EYE EXAM (NO
BILL)
-PR VISUAL FIELD EXAM
LIMITED
-PR OPHTHALMIC DX
IMAGING
-PR INFLUENZA VACC
PRES FREE 3+ YO 0.5ML IM
-MAMMO SCREENING
BILATERAL DIGITAL
-PR OV EST PT LEV 3
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-ASP/INJ MAJOR
JOINT/BURS/CYST
Insomnia, unspecified
(780.52)
-Depressive disorder, not
elsewhere classified (311)
-Other malaise and fatigue
(780.79)
-Sleep disturbance,
unspecified (780.50)
-Need for prophylactic
vaccination and inoculation
against influenza (V04.81)
-ZOLPIDEM TARTRATE 10
MG PO TABS
-AMBIEN 10 MG PO TABS
-LORAZEPAM 0.5 MG PO
TABS
-ZOLPIDEM TARTRATE 5
MG PO TABS
-TRAZODONE HCL 50 MG PO
TABS
Osteoarthrosis, localized,
primary, lower leg (715.16)
-Osteoarthrosis, localized,
not specified whether
-HYDROCODONEACETAMINOPHEN 10-325
-HEP A VAC ADULT (V05.3)
IM
-IMMUN ADMIN, EACH
ADD
-HEP B VAC ADULT (V05.3)
IM
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
Obesity, unspecified
(278.00)
Screening for malignant
neoplasms of prostate
(V76.44)
Unspecified vitamin D
deficiency (268.9)
Lumbago (724.2)
primary or secondary,
lower leg (715.36)
-Chondromalacia of patella
(717.7)
-Pain in joint, pelvic region
and thigh (719.45)
-Enthesopathy of hip region
(726.5)
MG PO TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-Routine general medical
examination at a health care
facility (V70.0)
-Other and unspecified
hyperlipidemia (272.4)
-Unspecified sleep apnea
(780.57)
-Unspecified essential
hypertension (401.9)
-Special screening for
malignant neoplasms of
colon (V76.51)
-Screening for unspecified
condition (V82.9)
-Impaired fasting glucose
(790.21)
-Impotence of organic
origin (607.84)
-Laboratory examination
ordered as part of a routine
general medical
examination (V72.62)
-Other malaise and fatigue
(780.79)
-Need for prophylactic
vaccination and inoculation
against diphtheria-tetanuspertussis, combined [DTP]
[DTaP] (V06.1)
-Anemia, unspecified
(285.9)
-Sciatica (724.3)
-Cervicalgia (723.1)
-Thoracic or lumbosacral
neuritis or radiculitis,
unspecified (724.4)
-Degeneration of lumbar or
lumbosacral intervertebral
disc (722.52)
-METFORMIN HCL 500 MG
PO TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
Backache, unspecified
(724.5)
-Sciatica (724.3)
-Cervicalgia (723.1)
-Thoracic or lumbosacral
neuritis or radiculitis,
unspecified (724.4)
-Pain in joint, pelvic region
and thigh (719.45)
Thoracic or lumbosacral
neuritis or radiculitis,
unspecified (724.4)
-Spinal stenosis, lumbar
region, without neurogenic
claudication (724.02)
-Displacement of lumbar
intervertebral disc without
myelopathy (722.10)
-Sciatica (724.3)
-Lumbago (724.2)
-TRIAMCINOLONE
ACETONIDE *KENALOG
INJ
-REFERRAL TO
ORTHOPEDICS, INT
-KNEE 3 VIEW
-BETAMETH ACET 3MG &
NA PHOS 3MG
*CELESTONE INJ
-PR OV EST PT LEV 3
-NUTRITION, INT
-PR ECG AND
INTERPRETATION
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR OV EST PT LEV 2
-SIMVASTATIN 20 MG PO
TABS
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-GI, INT
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-ERGOCALCIFEROL 50000
UNITS PO CAPS
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-SIMVASTATIN 20 MG PO
TABS
-OMEPRAZOLE 20 MG PO
CPDR
-ZOLPIDEM TARTRATE 10
MG PO TABS
-MAMMO SCREENING
BILATERAL DIGITAL
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-DEXA BONE DENSITY
AXIAL SKELETON
HIPS/PELV/SPINE
-EKG
-REFERRAL TO GI, INT
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-CYCLOBENZAPRINE HCL
10 MG PO TABS
-FLEXERIL 10 MG PO TABS
-VICODIN 5-500 MG PO TABS
-CARISOPRODOL 350 MG PO
TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-CYCLOBENZAPRINE HCL
10 MG PO TABS
-NAPROXEN 500 MG PO
TABS
-FLEXERIL 10 MG PO TABS
-VICODIN 5-500 MG PO TABS
-HYDROCODONEACETAMINOPHEN 10-325
MG PO TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-GABAPENTIN 300 MG PO
CAPS
-CYCLOBENZAPRINE HCL
10 MG PO TABS
-PREDNISONE 20 MG PO
TABS
-LS SPINE 2 VIEWS
-LUMBAR SPINE MRI W/O
CONTR
-PHYSICAL MEDICINE, INT
-PHYSICAL THERAPY, INT
-PHYSICAL MEDICINE, INT
-LS SPINE 2 VIEWS
-PHYSICAL MEDICINE, INT
-PR ECG AND
INTERPRETATION
-LUMBAR SPINE MRI W/O
CONTR
-PR OV EST PT LEV 3
-LUMBAR SPINE MRI W/O
CONTR
-MRI LUMBAR SPINE WO
CONTRAST
-REFERRAL TO PHYSICAL
MEDICINE, INT
-PHYSICAL MEDICINE, INT
-PHYSICAL MEDICINE, INT
Inflamed seborrheic
keratosis (702.11)
-Other chronic dermatitis
due to solar radiation
(692.74)
-Other dyschromia (709.09)
-Viral warts, unspecified
(078.10)
-Nevus, non-neoplastic
(448.1)
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-ECONAZOLE NITRATE 1 %
EX CREA
Other seborrheic keratosis
(702.19)
-Actinic keratosis (702.0)
-Inflamed seborrheic
keratosis (702.11)
-Other dyschromia (709.09)
-Personal history of other
malignant neoplasm of skin
(V10.83)
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-FLUOCINONIDE 0.05 % EX
CREA
Allergic rhinitis due to
other allergen (477.8)
-Allergic rhinitis due to
pollen (477.0)
-Asthma, unspecified type,
unspecified (493.90)
-Extrinsic asthma,
unspecified (493.00)
-Acute sinusitis,
unspecified (461.9)
Allergic rhinitis, cause
unspecified (477.9)
-Acute sinusitis,
unspecified (461.9)
-Routine general medical
examination at a health care
facility (V70.0)
-Acute upper respiratory
infections of unspecified
site (465.9)
-Cough (786.2)
Abdominal pain,
unspecified site (789.00)
-Abdominal pain, epigastric
(789.06)
-Abdominal pain, other
specified site (789.09)
-Esophageal reflux
(530.81)
-Constipation, unspecified
(564.00)
(V72.3)
-Other general medical
examination for
administrative purposes
(V70.3)
-Need for prophylactic
vaccination and inoculation
against tetanus-diphtheria
[Td] (DT) (V06.5)
-Screening for unspecified
condition (V82.9)
-Symptomatic menopausal
or female climacteric states
(627.2)
-Screening for malignant
neoplasms of prostate
(V76.44)
-Routine general medical
examination at a health care
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-FLONASE 50 MCG/ACT NA
SUSP
-MOMETASONE FUROATE
50 MCG/ACT NA SUSP
-ALLEGRA 180 MG PO TABS
-NASONEX 50 MCG/ACT NA
SUSP
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
-FLONASE 50 MCG/ACT NA
SUSP
-MOMETASONE FUROATE
50 MCG/ACT NA SUSP
-AZITHROMYCIN 250 MG PO
TABS
-NASONEX 50 MCG/ACT NA
SUSP
-CIPRO 500 MG PO TABS
-METRONIDAZOLE 500 MG
PO TABS
-OMEPRAZOLE 20 MG PO
CPDR
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-ACIPHEX 20 MG PO TBEC
-ALBUTEROL 90 MCG/ACT
IN AERS
Other specified
vaccinations against
streptococcus pneumoniae
[pneumococcus] (V03.82)
-ALBUTEROL SULFATE HFA
108 (90 BASE) MCG/ACT IN
AERS
-PR DESTRUCT BENIGN
LESION UP TO 14 LESIONS
(LIQUID NITROGEN)
-PR DESTR PRE-MALIG
LESN 1ST
-PR DESTR PRE-MALIG
LESN 2-14
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-DERMATOLOGY, INT
-PR DESTR PRE-MALIG
LESN 1ST
-PR DESTR PRE-MALIG
LESN 2-14
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-PR DESTRUCT BENIGN
LESION UP TO 14 LESIONS
(LIQUID NITROGEN)
-PR OV EST PT LEV 3
-IMMUNOTHERAPY
MULTIPLE
-ANTIGEN ALLERGY MAT
FOR INJ (06041) PA
-PR IMMUNOTX W
EXTRACT 2+ INJ
-INFLUENZA VAC (FLU
CLINIC ONLY) 3+YO PA
-ENT, INT
-PR OV EST PT LEV 3
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-IMMUNOTHERAPY
MULTIPLE
-MAMMOGRAM
SCREENING
-INFLUENZA VAC (FLU
CLINIC ONLY) 3+YO PA
-US ABDOMEN COMPLETE
-GI, INT
-ABDOMEN/PELVIS CT PA
-PR ECG AND
INTERPRETATION
-SERV PROV DURING REG
SCHED EVE/WKEND/HOL
HRS
-GYN CYTOLOGY (PAP) PA
-MAMMOGRAM
SCREENING
-CHART ABSTRACTION
SIMP (V70.3)
-PR OV EST PT LEV 2
-TD VAC 7+ YO (V06.5) IM
-PNEUMOCOCCAL VAC
(V03.82) IM/SQ
-PNEUMOCOCCAL
POLYSACC VAC 2+ YO
(V03.82) IM/SQ
Other psoriasis (696.1)
facility (V70.0)
-Need for prophylactic
vaccination and inoculation
against tetanus-diphtheria
[Td] (DT) (V06.5)
-Need for prophylactic
vaccination and inoculation
against other specified
disease (V05.8)
-Other dyschromia (709.09)
-Other atopic dermatitis
and related conditions
(691.8)
-Seborrheic dermatitis,
unspecified (690.10)
-Unspecified pruritic
disorder (698.9)
Other general medical
examination for
administrative purposes
(V70.3)
-Routine general medical
examination at a health care
facility (V70.0)
-Need for prophylactic
vaccination and inoculation
against tetanus-diphtheria
[Td] (DT) (V06.5)
-Screening for lipoid
disorders (V77.91)
Migraine, unspecified,
without mention of
intractable migraine
without mention of status
migrainosus (346.90)
-Anxiety state, unspecified
(300.00)
-Cervicalgia (723.1)
-Other screening
mammogram (V76.12)
-Myalgia and myositis,
unspecified (729.1)
Routine general medical
examination at a health care
facility (V70.0)
-Other screening
mammogram (V76.12)
-Special screening for
malignant neoplasms of
colon (V76.51)
-Screening for malignant
neoplasms of prostate
(V76.44)
-Need for prophylactic
vaccination and inoculation
against diphtheria-tetanuspertussis, combined [DTP]
[DTaP] (V06.1)
-Routine general medical
examination at a health care
facility (V70.0)
-Benign neoplasm of colon
(211.3)
-Screening for unspecified
condition (V82.9)
-Other screening
mammogram (V76.12)
-Pain in joint, shoulder
region (719.41)
-Pain in joint, lower leg
(719.46)
-Lumbago (724.2)
-Pain in joint, pelvic region
and thigh (719.45)
Special screening for
malignant neoplasms of
colon (V76.51)
Care involving other
physical therapy (V57.1)
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-INFLUENZA VAC 3+YR
(V04.81) IM
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-CLOBETASOL PROPIONATE
0.05 % EX OINT
-CLOBETASOL PROPIONATE
0.05 % EX CREA
-FLUOCINONIDE 0.05 % EX
CREA
-DESONIDE 0.05 % EX CREA
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-ALBUTEROL 90 MCG/ACT
IN AERS
-FLONASE 50 MCG/ACT NA
SUSP
-SUMATRIPTAN SUCCINATE
50 MG PO TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-PROMETHAZINE HCL 25
MG PO TABS
-IMITREX 100 MG PO TABS
-ZOLPIDEM TARTRATE 10
MG PO TABS
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-GYN CYTOLOGY (PAP) PA
-PR ULTRAVIOLET
THERAPY
-DERMATOLOGY, INT
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-PR DESTR PRE-MALIG
LESN 1ST
-DERMATOLOGY, INT
-CHART ABSTRACTION
SIMP (V70.3)
-CHART ABSTRACTION
INT (V70.3)
-PAMFONLINE
ENROLLMENT
-MAMMOGRAM
SCREENING
-PR CHART ABSTRACTION
COMP (V70.3)
-MAMMO SCREENING
BILATERAL DIGITAL
-GYN CYTOLOGY (PAP) PA
-PR OV EST PT LEV 3
-MAMMOGRAM
SCREENING
-NEUROLOGY, INT
-PEG-KCL-NACL-NASULFNA ASC-C 100 G PO SOLR
-GI, INT
-REFERRAL TO GI, INT
-PR OV EST PT LEV 3
-PR OV EST PT LEV 2
-IMMUN ADMIN
IM/SQ/ID/PERC 1ST VAC
ONLY
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-HYDROCODONEACETAMINOPHEN 10-325
MG PO TABS
-CYCLOBENZAPRINE HCL
10 MG PO TABS
-HYDROCODONE-
-PR PHYS THERAPY
EVALUATION
-PR THERAPEUTIC
EXERCISES
-PR MANUAL THER TECH
1+REGIONS EA 15 MIN
-PR ELECTRIC
STIMULATION THERAPY
-REFERRAL TO PHYSICAL
THERAPY, INT
ACETAMINOPHEN 5-325 MG
PO TABS
-FOLIC ACID 1 MG PO TABS
-METHOTREXATE 2.5 MG PO
TABS
-PREDNISONE 5 MG PO
TABS
-PREDNISONE 1 MG PO
TABS
-HYDROXYCHLOROQUINE
SULFATE 200 MG PO TABS
Rheumatoid arthritis
(714.0)
-Unspecified inflammatory
polyarthropathy (714.9)
-Arthropathy, unspecified,
site unspecified (716.90)
-Sicca syndrome (710.2)
Osteoporosis, unspecified
(733.00)
-Unspecified acquired
hypothyroidism (244.9)
-Osteoarthrosis, unspecified
whether generalized or
localized, site unspecified
(715.90)
-Benign essential
hypertension (401.1)
-Unspecified essential
hypertension (401.9)
-FOSAMAX 70 MG PO TABS
-ALENDRONATE SODIUM 70
MG PO TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-ACTONEL 35 MG PO TABS
Other malaise and fatigue
(780.79)
-Routine general medical
examination at a health care
facility (V70.0)
-Dizziness and giddiness
(780.4)
-Other respiratory
abnormalities (786.09)
-Screening for unspecified
condition (V82.9)
-Atrial fibrillation (427.31)
-Congestive heart failure,
unspecified (428.0)
-Coronary atherosclerosis
of unspecified type of
vessel, native or graft
(414.00)
-AZITHROMYCIN 250 MG PO
TABS
-FLUTICASONE
PROPIONATE 50 MCG/ACT
NA SUSP
Long-term (current) use of
anticoagulants (V58.61)
Spinal stenosis, lumbar
region, without neurogenic
claudication (724.02)
-Degeneration of lumbar or
lumbosacral intervertebral
disc (722.52)
-Lumbosacral spondylosis
without myelopathy (721.3)
-Displacement of lumbar
intervertebral disc without
myelopathy (722.10)
-Lumbago (724.2)
Unspecified hemorrhoids
without mention of
complication (455.6)
-Internal hemorrhoids
without mention of
complication (455.0)
-Benign neoplasm of colon
(211.3)
-Anal or rectal pain
(569.42)
-Personal history of colonic
polyps (V12.72)
-Dermatophytosis of foot
(110.4)
-Ingrowing nail (703.0)
-Keratoderma, acquired
(701.1)
-Peripheral vascular
disease, unspecified (443.9)
Dermatophytosis of nail
(110.1)
-RHEUMATOLOGY, INT
-METHYLPRED ACET
80MG *DEPO-MEDROL INJ
-TRIAMCINOLONE
ACETONIDE *KENALOG
INJ
-ASP/INJ INT
JOINT/BURS/CYST
-PR METHLYPRED ACET
(BU 21 - 40MG) *DEPOMEDROL INJ
-DEXA BONE DENSITY
AXIAL SKELETON
HIPS/PELV/SPINE
-BONE DENSITY (DEXA)
-DEXA BONE DENSITY
STUDY 1+ SITES AXIAL
SKEL (HIP/PELVIS/SPINE)
-INFLUENZA VAC 3+YR
(V04.81) IM
-MAMMOGRAM
SCREENING
-PR ECG AND
INTERPRETATION
-CHEST PA & LATERAL
-PR OV EST PT LEV 3
-EKG
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-WARFARIN SODIUM 5 MG
PO TABS
-COUMADIN 5 MG PO TABS
-WARFARIN SODIUM 2.5 MG
PO TABS
-ATENOLOL 50 MG PO TABS
-WARFARIN SODIUM 2 MG
PO TABS
-HYDROCODONEACETAMINOPHEN 10-325
MG PO TABS
-GABAPENTIN 300 MG PO
CAPS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-TRAMADOL HCL 50 MG PO
TABS
-PREDNISONE 20 MG PO
TABS
-HYDROCORTISONE
ACETATE 25 MG PR SUPP
-ANUSOL-HC 25 MG PR SUPP
-HYDROCORTISONE 2.5 %
PR CREA
-ANUSOL-HC 2.5 % PR CREA
-PEG-KCL-NACL-NASULFNA ASC-C 100 G PO SOLR
-PR OV EST PT MIN SERV
-PR PROTHROMBIN TIME
-DEBRIDEMENT OF NAILS,
6 OR MORE
-PR ECG AND
INTERPRETATION
-EKG
-ECONAZOLE NITRATE 1 %
EX CREA
-TERBINAFINE HCL 250 MG
PO TABS
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-DEBRIDEMENT OF NAILS,
6 OR MORE
-TRIM SKIN LESIONS, 2 TO
4
-PARE CORN/CALLUS,
SINGLE LESION
-PODIATRY, INT
-LUMBAR SPINE MRI W/O
CONTR
-LS SPINE 2 VIEWS
-PHYSICAL MEDICINE, INT
-MRI LUMBAR SPINE WO
CONTRAST
-PHYSICAL MEDICINE, INT
-ANOSCOPY
-SURGERY, INT
-PR COLONOSCOPY
W/BIOPSY(S)
-LIGATION OF
HEMORRHOID(S)
-GI, INT
-CEPHALEXIN 500 MG PO
CAPS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-VICODIN 5-500 MG PO TABS
-IBUPROFEN 600 MG PO
TABS
Pain in joint, lower leg
(719.46)
-Chondromalacia of patella
(717.7)
-Tear of medial cartilage or
meniscus of knee, current
(836.0)
-Pain in limb (729.5)
-Pain in joint, pelvic region
and thigh (719.45)
Pain in joint, pelvic region
and thigh (719.45)
-Lumbago (724.2)
-Sciatica (724.3)
-Backache, unspecified
(724.5)
-Pain in joint, lower leg
(719.46)
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-VICODIN 5-500 MG PO TABS
Anxiety state, unspecified
(300.00)
-Insomnia, unspecified
(780.52)
-Generalized anxiety
disorder (300.02)
-Other malaise and fatigue
(780.79)
-Headache (784.0)
Generalized anxiety
disorder (300.02)
-Depressive disorder, not
elsewhere classified (311)
-Insomnia, unspecified
(780.52)
-Irritable bowel syndrome
(564.1)
-Sleep disturbance,
unspecified (780.50)
Cervicalgia (723.1)
-Lumbago (724.2)
-Backache, unspecified
(724.5)
-Headache (784.0)
-Brachial neuritis or
radiculitis NOS (723.4)
Impaired fasting glucose
(790.21)
-Routine general medical
examination at a health care
facility (V70.0)
-Screening for malignant
neoplasms of prostate
(V76.44)
-Other and unspecified
hyperlipidemia (272.4)
-Overweight (278.02)
-Other chronic dermatitis
due to solar radiation
(692.74)
-Other seborrheic keratosis
(702.19)
-Benign neoplasm of other
specified sites of skin
(216.8)
-Neoplasm of uncertain
behavior of skin (238.2)
-LORAZEPAM 0.5 MG PO
TABS
-ZOLPIDEM TARTRATE 10
MG PO TABS
-LORAZEPAM 1 MG PO
TABS
-ALPRAZOLAM 0.25 MG PO
TABS
-CLONAZEPAM 0.5 MG PO
TABS
-LORAZEPAM 0.5 MG PO
TABS
-CLONAZEPAM 0.5 MG PO
TABS
-TRAZODONE HCL 50 MG PO
TABS
-LORAZEPAM 1 MG PO
TABS
-AMBIEN 10 MG PO TABS
-CYCLOBENZAPRINE HCL
10 MG PO TABS
-HYDROCODONEACETAMINOPHEN 5-500 MG
PO TABS
-FLEXERIL 10 MG PO TABS
-NAPROXEN 500 MG PO
TABS
-CARISOPRODOL 350 MG PO
TABS
-SIMVASTATIN 20 MG PO
TABS
-HYDROCHLOROTHIAZIDE
25 MG PO TABS
-SIMVASTATIN 40 MG PO
TABS
Actinic keratosis (702.0)
-TRIAMCINOLONE
ACETONIDE 0.1 % EX CREA
-FLUOCINONIDE 0.05 % EX
CREA
-GLASSES
-PR DESTR PRE-MALIG
LESN 1ST
-ASP/INJ MAJOR
JOINT/BURS/CYST
-REFERRAL TO
ORTHOPEDICS, INT
-TRIAMCINOLONE
ACETONIDE *KENALOG
INJ
-LOWER EXTREMITY
JOINT MRI W/O CONTR
-ORTHOPEDICS, INT
-PELVIS LIMITED
-LS SPINE 2 VIEWS
-XR PELVIS 1 OR 2 VIEWS
-REFERRAL TO
ORTHOPEDICS, INT
-ASP/INJ MAJOR
JOINT/BURS/CYST
-MAMMO SCREENING
BILATERAL DIGITAL
-EKG
-PR ECG AND
INTERPRETATION
-PR INFLUENZA VACC
PRES FREE 3+ YO 0.5ML IM
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-MAMMOGRAM
SCREENING
-GI, INT
-CHART ABSTRACTION
INT (V70.3)
-PHYSICAL MEDICINE, INT
-SPINE CERVICAL
COMPLETE
-PHYSICAL THERAPY, INT
-PHYSICAL THERAPY, EXT
-PR THERAPEUTIC
EXERCISES
-PR OV EST PT LEV 3
-TDAP VAC 11-64 YO
*ADACEL (V06.8) IM
-PR OV EST PT LEV 2
-NUTRITION, INT
-PR INFLUENZA VACC
PRES FREE 3+ YO 0.5ML IM
-PR DESTR PRE-MALIG
LESN 1ST
-PR DESTR PRE-MALIG
LESN 2-14
-PR BIOPSY OF
SKIN/SQ/MUC MEMBR
LESN SINGLE
-SURGICAL PATHOLOGY
PA
-PR OV EST PT LEV 3
| 9 |
1
arXiv:1705.09772v1 [] 27 May 2017

Maximizing Indoor Wireless Coverage Using UAVs Equipped with Directional Antennas

Hazim Shakhatreh and Abdallah Khreishah
Abstract
Unmanned aerial vehicles (UAVs) can be used to provide wireless coverage during emergency cases, where each UAV serves as an aerial wireless base station when the cellular network goes down. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users. In this paper, we aim to maximize the indoor wireless coverage using UAVs equipped with directional antennas. We study the case in which the UAVs use a single channel; thus, in order to maximize the total indoor wireless coverage, we avoid any overlapping in their coverage volumes. We present two methods to place the UAVs: providing wireless coverage from one building side and from two building sides. In the first method, we utilize circle packing theory to determine the 3-D locations of the UAVs in a way that maximizes the total coverage area. In the second method, we place the UAVs in front of two building sides and efficiently arrange them in alternating upside-down arrangements. We show that the upside-down arrangements problem can be transformed from 3D to 2D, and based on that we present an efficient algorithm to solve the problem. Our results show that the upside-down arrangements of UAVs can improve the maximum total coverage by 100% compared to providing wireless coverage from one building side.
Index Terms
Unmanned aerial vehicles, coverage, circle packing theory.
I. INTRODUCTION
(Hazim Shakhatreh and Abdallah Khreishah are with the Department of Electrical and Computer Engineering, New Jersey Institute of Technology; email: {hms35,abdallah}@njit.edu.)

Cells on wheels (COW) are used to provide expanded wireless coverage for short-term demands, when cellular coverage is either minimal, never present, or compromised by a disaster [1]. UAVs can also be used to provide wireless coverage during emergency cases and
special events (such as concerts, indoor sporting events, etc.), when the cellular network service is not available or is unable to serve users [2]–[5]. Compared to the COW, the advantage of using UAV-based aerial base stations is their ability to quickly and easily move [6]. The main disadvantage of using UAVs as aerial base stations is their energy capacity: the UAVs need to return periodically to a charging station, due to their limited battery capacity. In [7], the authors integrate the recharging requirements into the coverage problem and examine the minimum number of UAVs required for enabling continuous coverage under that setting.

Directional antennas are used to improve the received signal at their associated users and also to reduce interference, since other aerial base stations are targeting/serving other users in other directions [8]. The authors in [9] study the optimal deployment of UAVs equipped with directional antennas, using circle packing theory. The 3D locations of the UAVs are determined in a way that maximizes the total coverage area. In [10], the authors investigate the problem by characterizing the coverage area for a target outage probability; they show that for the case of Rician fading there exists a unique optimum height that maximizes the coverage area. In [11], the authors propose a heuristic algorithm to find the positions of aerial base stations in an area with different user densities; the goal is to find the minimum number of UAVs and their 3D placement so that all the users are served. However, it is assumed that all users are outdoor and the location of each user is represented by an outdoor 2D point. In [12], the authors use multiple UAVs to design efficient UAV relay networks to support military operations. They describe the tradeoff between connectivity among the UAVs and maximizing the covered area. However, they use the UAVs as wireless relays and do not take into account their mutual interference in downlink channels. In [13], the authors propose a computational method for positioning aerial base stations with the goal of minimizing their number, while fully providing the required bandwidth over the disaster area. It is assumed that overlapping aerial base station coverage areas are allowed, and the Inter-Cell Interference Coordination (ICIC) methods are used to schedule radio resources to avoid inter-cell interference. The authors in [4], [5] use a single UAV equipped with an omnidirectional antenna to provide wireless coverage for indoor users inside a high-rise building, where the objective is to find the 3D location of a UAV that minimizes the total transmit power required to cover the entire high-rise building. In [14], the authors use UAVs equipped with omnidirectional antennas to minimize the number of UAVs required to cover the indoor users.
We summarize our main contributions as follows:
• In order to maximize the indoor wireless coverage, we present two methods to place the UAVs: providing wireless coverage from one building side and from two building sides. In this paper, we study the case in which the UAVs use a single channel; thus, we avoid any overlapping in their coverage volumes (to avoid interference). In the first method, we utilize circle packing theory to determine the 3-D locations of the UAVs in a way that maximizes the total coverage area. In the second method, we place the UAVs in front of two building sides and efficiently arrange them in alternating upside-down arrangements.
• We show that the upside-down arrangements problem can be transformed from 3D to 2D, and based on that we present an efficient algorithm to solve the problem.
• We demonstrate through simulation results that the upside-down arrangements of UAVs can improve the maximum total coverage by 100% compared to providing wireless coverage from one building side.
The rest of this paper is organized as follows. In Section II, we describe the system model. In
Section III, we show the appropriate placement of UAVs that maximizes the total indoor wireless
coverage. Finally, we present our numerical results in Section IV and make concluding remarks
in Section V.
II. SYSTEM MODEL
A. System Settings
Consider a 3D building, as shown in Figure 1, where N UAVs must be deployed to maximize wireless coverage for indoor users located within the building. Let the dimensions of the high-rise building, in the shape of a rectangular prism, be [0, x_b] × [0, y_b] × [0, z_b]. Let (x_k, y_k, z_k) denote the 3D location of UAV k ∈ N, and let (X_i, Y_i, Z_i) denote the location of user i. Also, let d_out,i be the distance between the UAV and indoor user i, and let d_in,i be the distance between the building wall and indoor user i. Each UAV uses a directional antenna to provide wireless coverage, where the antenna half power beamwidth is θ_B. The authors in [15] use an outdoor directional antenna to provide wireless coverage for indoor users. They show that the highest RSRP (Reference Signal Received Power) and throughput values are measured along the main beam direction; thus the radiation pattern of a directional antenna is a cone, and the indoor volume covered by a UAV is a truncated cone, as shown in Figure 2.

Fig. 1: System model

Here, r_i is the radius of the circle located on the yz-rectangular side ((0,0,0), (0,0,z_b), (0,y_b,z_b), (0,y_b,0)), r_j is the radius of the circle located on the yz-rectangular side ((x_b,0,0), (x_b,0,z_b), (x_b,y_b,z_b), (x_b,y_b,0)), and x_b is the horizontal width of the building. The volume of a truncated cone is given by:

V = (1/3) π x_b (r_i^2 + r_j^2 + r_i r_j)
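As a quick numeric illustration, the formula is easy to evaluate directly; the following is a minimal Python sketch (the building width x_b = 30 comes from the simulation setup in Section IV, while the radii are arbitrary example values of our own):

```python
import math

def truncated_cone_volume(x_b, r_i, r_j):
    """Indoor volume covered by one UAV: a truncated cone of height x_b
    whose circular faces have radii r_i and r_j."""
    return (math.pi * x_b / 3.0) * (r_i**2 + r_j**2 + r_i * r_j)

# Example: building width 30 (Section IV), illustrative radii 5 and 10.
print(truncated_cone_volume(30.0, 5.0, 10.0))  # ~5497.8
```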
B. UAV Power Consumption
In [16], the authors show that significant power gains are attainable for indoor users, even in rich indoor scattering conditions, if the indoor users use directional antennas. Now, consider a transmission between the k-th UAV, located at (x_k, y_k, z_k), and the i-th indoor user, located at (X_i, Y_i, Z_i). The received signal power at the i-th indoor user location is given by:

P_r,ik (dB) = P_t + G_t + G_r − L_i

where P_r,ik is the received signal power, P_t is the transmit power of the UAV, and G_t is the antenna gain of the UAV, which can be approximated by G_t ≈ 29000 / θ_B^2 with θ_B in degrees [17], [18]. G_r is the antenna gain of indoor user i, which is given by [16]:

G_r (dB) = G_r,dir + G_r,omni − G_RF

where G_r,dir and G_r,omni are the free-space antenna gains of a directive and an omnidirectional antenna, respectively, and G_RF is the decrease in the gain advantage of a directive over an omnidirectional antenna due to the presence of clutter.

Fig. 2: 3D Dimensions of the truncated cone

Fig. 3: Building sides
Also, L_i is the path loss, which for Outdoor-Indoor communication is:

L_i = L_F + L_B + L_I = (w log10 d_3D,i + w log10 f_Ghz + g_1) + (g_2 + g_3 (1 − cos θ_i)^2) + (g_4 d_2D,i)

where L_F is the free space path loss, L_B is the building penetration loss, and L_I is the indoor loss. In the path loss model, we also have w = 20, g_1 = 32.4, g_2 = 14, g_3 = 15, g_4 = 0.5 [19], and f_Ghz is the carrier frequency in GHz.
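To make the link budget concrete, here is a minimal Python sketch that evaluates P_r,ik from the model above. The path loss constants are those given in [19]; the transmit power, distances, and incidence angle in the example are illustrative assumptions of our own (the 14.4 dB user antenna gain and the 2 GHz carrier are the values used later in Section IV):

```python
import math

# Path loss model constants from [19]
W, G1, G2, G3, G4 = 20.0, 32.4, 14.0, 15.0, 0.5

def path_loss_db(d3d, d2d_in, f_ghz, theta_i):
    """Outdoor-Indoor path loss: free space + building penetration + indoor."""
    l_f = W * math.log10(d3d) + W * math.log10(f_ghz) + G1   # free space
    l_b = G2 + G3 * (1.0 - math.cos(theta_i)) ** 2           # penetration
    l_i = G4 * d2d_in                                        # indoor loss
    return l_f + l_b + l_i

def received_power_dbm(p_t_dbm, theta_b_deg, g_r_db, d3d, d2d_in, f_ghz, theta_i):
    g_t_db = 10.0 * math.log10(29000.0 / theta_b_deg ** 2)   # UAV antenna gain
    return p_t_dbm + g_t_db + g_r_db - path_loss_db(d3d, d2d_in, f_ghz, theta_i)

# Illustrative numbers: 30 dBm transmit power, 20 deg beamwidth, 14.4 dB user
# gain (Section IV), 60 m 3D link, 10 m indoor distance, 2 GHz, 30 deg angle.
print(received_power_dbm(30, 20, 14.4, 60, 10, 2.0, math.radians(30)))
```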
C. Placement of UAVs
Choosing the appropriate placement of UAVs is a critical issue when we aim to maximize the indoor wireless coverage. In this paper, we assume that we can place the UAVs in front of building sides A and B and above the building (side C), as shown in Figure 3. We also assume that the UAVs are using one channel. In this section, we demonstrate why avoiding the overlapping between UAVs' coverage volumes will strengthen the total indoor wireless coverage.

Fig. 4: Placing two UAVs in front of building side A

Fig. 5: Placing two UAVs in front of two building sides A and B

1) Overlapping between UAVs' coverage volumes is allowed: Now, when we place two UAVs in front of building side A, as shown in Figure 4 (the UAVs have different z-coordinates and the same x- and y-coordinates), the indoor users located in G1's and G2's locations will have high SINR. On the other hand, the indoor users located in G3's location will have low SINR. This is because of the dependency of the SINR on the location of the indoor user. Similarly, when we place two UAVs in front of the two building sides A and B, as shown in Figure 5 (the UAVs have different x-coordinates and the same y- and z-coordinates), the indoor users located in G1's and G2's locations will have high SINR. On the other hand, the indoor users located in G3's location will have low SINR. In Figure 6 (the UAVs have the same y-coordinates but different x- and z-coordinates), when we place one UAV in front of building side A and one UAV above the building (side C), the indoor users located in G1's and G2's locations will have high SINR. On the other hand, the indoor users located in G3's location will have low SINR. From the previous examples, we can conclude that allowing the UAVs' coverage volumes to overlap will result in some users not being satisfied. In the next section, we place the UAVs in a way that maximizes the total coverage and avoids any overlapping in their coverage volumes.

Fig. 6: Placing one UAV in front of building side A and one UAV above the building C

Fig. 7: UAVs with small antenna half power beamwidth θ_B
2) Overlapping between UAVs' coverage volumes is not allowed: In Figure 7, we avoid the overlapping between UAVs' coverage volumes by using UAVs with small antenna half power beamwidths θ_B. However, this is an impractical way to cover the building, due to the high number of UAVs required. In Figure 8, we place the UAVs in front of two building sides and efficiently arrange the UAVs in alternating upside-down arrangements. We can notice that this method will maximize the indoor wireless coverage, where the uncovered holes are minimized and the overlapping between UAVs' coverage volumes is avoided.
III. MAXIMIZING INDOOR WIRELESS COVERAGE
In this section, the UAVs are assumed to be symmetric, having the same transmit power, the same horizontal location x_k, the same channel, and the same antenna half power beamwidth θ_B. We show two methods to place the UAVs in a way that tries to maximize the total coverage and avoids any overlapping in their coverage volumes.
A. Providing Wireless Coverage from one building side
In this method, we place all UAVs in front of one building side (side A, side B, or side C). The objective is to determine the three-dimensional location of each UAV k ∈ N in a way that maximizes the total covered volume. Now, consider that we place the UAVs in front of building side A; then the projection of each UAV's coverage on building side B is a circle, as shown in Figure 9. Our problem can be formulated as:
max |N| · (1/3) π x_b (r_i^2 + r_j^2 + r_i r_j)

subject to

√((y_k − y_q)^2 + (z_k − z_q)^2) ≥ 2 r_j,   k ≠ q ∈ N        (1)
z_b − (z_k + r_j) ≥ 0,   k ∈ N        (2)
z_k − r_j ≥ 0,   k ∈ N        (3)
y_b − (y_k + r_j) ≥ 0,   k ∈ N        (4)
y_k − r_j ≥ 0,   k ∈ N        (5)
The objective is to maximize the indoor wireless coverage (covered volume). Constraint set (1) guarantees that the truncated cones cannot overlap each other. Constraint sets (2)-(5) ensure that UAV k does not cover outside the 3D building; see Figure 9. We model this problem using the well-known circle packing problem: N circles should be packed inside a given surface such that the packing density is maximized and no overlapping occurs [20]; note that the surface in our problem is a rectangle. The authors of [20] tackle this problem by solving a number of decision problems. The decision problem is:
Fig. 8: UAVs in alternating upside-down arrangements
Given N circles of radius r_j and a rectangle of dimension d_1 × d_2, is it possible to locate all the circles inside the rectangle?
In [20], the authors introduce a nonlinear model for this problem. Finding the answer to the decision problem depends on finding the global minimizer of a nonconvex and nonlinear optimization problem. In each decision problem, they investigate the feasibility of packing N identical circles. If this is feasible, N is incremented by one and the decision problem is solved again. The algorithm stops when the decision problem yields an infeasible packing [21]. The pseudocode of the algorithm is shown in Algorithm 1.

Algorithm 1 Circle packing in a rectangle
1: N ← 1
2: Solve the decision problem for N circles
3: If Answer = YES
4:   Then N ← N + 1
5:   Return to step 2
6: If Answer = NO
7:   n ← N − 1
8: End
9: Output n

In the next section, we utilize the two building sides to maximize the indoor wireless coverage. This allows us to extend the indoor wireless coverage compared with providing wireless coverage from one building side, because the holes induced by the cones of the UAVs on one side can be filled by the cones induced by the UAVs on the other side without causing overlap between the two sets of cones.
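As a complement, here is a minimal Python sketch of the incremental loop of Algorithm 1. Since the nonlinear feasibility model of [20] is beyond a short sketch, we substitute a simple square-grid packing test, so the result is only a lower bound on the true packing capacity; the function names are our own:

```python
def grid_packing_feasible(n, r, d1, d2):
    """Conservative decision problem: can n circles of radius r be placed
    on a square grid inside a d1 x d2 rectangle? (A lower bound on the
    capacity found by the nonlinear model of [20].)"""
    return n <= int(d1 // (2 * r)) * int(d2 // (2 * r))

def max_circles(r, d1, d2, feasible=grid_packing_feasible):
    """Algorithm 1: increment N until the decision problem is infeasible."""
    n = 1
    while feasible(n, r, d1, d2):
        n += 1
    return n - 1

# Example: circles of radius 5 in the 40 x 60 building side of Section IV.
print(max_circles(5.0, 40.0, 60.0))  # 24 circles fit on the grid
```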
B. Providing Wireless Coverage from two building sides
Fig. 9: Circle packing in a rectangle

In this method, we place the UAVs in front of two building sides (side A and side B) and efficiently arrange the UAVs in alternating upside-down arrangements (see Figures 10 and 11). In Theorem 1, we find the horizontal location of the UAV, x_UAV, that guarantees the upside-down arrangements of the truncated cones. In Theorem 2, we prove that the truncated cones do not intersect in 3D if and only if the circles do not intersect in building sides (A and B). In Theorem 3, we prove that we maximize the percentage of covered area of building sides (A and B) if and only if we maximize the percentage of covered volume of the building. These theorems help us transform the geometric problem from 3D to 2D and present an efficient algorithm that maximizes the indoor wireless coverage.
Theorem 1. The horizontal location of the UAV, x_UAV, that guarantees the upside-down arrangements of the truncated cones is equal to 0.7071 x_b, regardless of the antenna half power beamwidth angle θ_B.

Proof. The radius of the smaller circular face r_i is given by:

r_i = r_j · x_UAV / (x_b + x_UAV)        (1)
Now, we divide building sides A and B into square cells (as shown in Figures 10 and 11);
the large circle in Figure 10 and the small circle in Figure 11 represent the projections of a UAV's coverage on building sides A and B when the UAV is placed in front of building side B. Similarly, the four small circle quarters in Figure 10 and the four large circle quarters in Figure 11 represent the projections of the UAVs' coverage on building sides A and B when the UAVs are placed in front of building side A. From Figures 10 and 11, the diagonal of the square cell is:

D = 2 r_j + 2 r_i

where r_j is the radius of the larger circular face and r_i is the radius of the smaller circular face. The side of the square cell is 2 r_j, so by applying the Pythagorean theorem we get:

√(4 r_j^2 + 4 r_j^2) = 2 r_j + 2 r_i  ⇒  √8 r_j = 2 r_j + 2 r_i  ⇒  r_i = ((√8 − 2)/2) r_j = γ r_j        (2)
From equations (1) and (2), we get:

x_UAV / (x_b + x_UAV) = (√8 − 2)/2  ⇒  2 x_UAV = x_b (√8 − 2) + x_UAV (√8 − 2)  ⇒  x_UAV = x_b (√8 − 2)/(4 − √8) = 0.7071 x_b
Thus, to guarantee the upside-down arrangements of the truncated cones, we must place the UAVs at a horizontal distance equal to 0.7071 x_b. Theorems 2 and 3 help us transform the geometric problem from 3D to 2D and present an efficient algorithm that maximizes the indoor wireless coverage.
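As a quick numerical sanity check of Theorem 1 (a sketch with our own variable names), the following verifies that x_UAV = 0.7071 x_b makes the ratio r_i/r_j in equation (1) equal to γ from equation (2):

```python
import math

x_b = 30.0                                   # building width (Section IV)
gamma = (math.sqrt(8) - 2) / 2               # r_i / r_j from Eq. (2)
x_uav = x_b * (math.sqrt(8) - 2) / (4 - math.sqrt(8))

print(round(x_uav / x_b, 4))                 # 0.7071
# Eq. (1): r_i / r_j = x_uav / (x_b + x_uav); this must equal gamma.
print(abs(x_uav / (x_b + x_uav) - gamma) < 1e-9)  # True
```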
Fig. 10: The square cell in side A

Fig. 11: The square cell in side B

Fig. 12: Four circles (with radius r_j) in building side A

Fig. 13: Four circles (with radius r_i) in building side B
Theorem 2. The truncated cones do not intersect in 3D iff the circles do not intersect in building sides (A and B).

Proof. First, we prove that if the truncated cones do not intersect in 3D, then the circles do not intersect in building sides (A and B). Assume that we have a set of truncated cones G = {1, 2, ..., N} that do not intersect in 3D space. Each truncated cone n ∈ G can be represented by a number of 2D circles {c_1n, c_2n, ..., c_|h|n}, where |h| is the height of the truncated cone, c_1n is the smaller circular face, and c_|h|n is the larger circular face. It is obvious that if the |G| truncated cones do not intersect in 3D space, then the smaller and larger circular faces do not intersect in building sides (A and B).

Second, we prove that if the circles do not intersect in building sides (A and B), then the truncated cones do not intersect in 3D. Assume that four circles (with large radius r_j) do not intersect in building side A (see Figure 12); then the circles (with small radius r_i) in building side B appear as shown in Figure 13. Now, we need to do two steps: 1) connect the lines between these points (A_|h| with A_1, B_|h| with B_1, C_|h| with C_1, and D_|h| with D_1); 2) draw circles that pass through the four points A_k, B_k, C_k, and D_k, where k ∈ h. After these two steps, the circles drawn in step two represent a truncated cone whose circular faces do not intersect with the four circles in building sides (A and B). Hence, the truncated cones do not intersect in 3D space.
Theorem 3. We maximize the percentage of covered area of building sides (A and B) iff we maximize the percentage of covered volume of the building.

Proof. First, we divide building sides A and B into square cells (as shown in Figures 10 and 11). The percentage of covered volume is given by:

V = ⌊(y_b z_b)/(4 r_j^2)⌋ · 2 · ((π/3) x_b (r_i^2 + r_i r_j + r_j^2)) / (x_b y_b z_b)        (3)

where:
⌊(y_b z_b)/(4 r_j^2)⌋ is the number of square cells in the building side;
2 is the number of truncated cones per square cell (see Figures 7 and 8);
(π/3) x_b (r_i^2 + r_i r_j + r_j^2) is the volume of a truncated cone;
x_b y_b z_b is the volume of the building.

Now, from equations (2) and (3), we get:

V = ⌊(y_b z_b)/(4 r_j^2)⌋ · (2π/3)(γ^2 + γ + 1) r_j^2 / (y_b z_b) = K_1 ⌊(y_b z_b)/(4 r_j^2)⌋ r_j^2        (4)

where K_1 = (2π/3)(γ^2 + γ + 1) / (y_b z_b).
The percentage of covered area of building sides (A and B) is given by:

W = ⌊(y_b z_b)/(4 r_j^2)⌋ (π r_i^2 + π r_j^2) / (y_b z_b) + ⌊(y_b z_b)/(4 r_j^2)⌋ (π r_i^2 + π r_j^2) / (y_b z_b) = ⌊(y_b z_b)/(4 r_j^2)⌋ · 2π (r_i^2 + r_j^2) / (y_b z_b)        (5)

Now, from equations (2) and (5), we get:

W = ⌊(y_b z_b)/(4 r_j^2)⌋ · 2π (γ^2 + 1) r_j^2 / (y_b z_b) = K_2 ⌊(y_b z_b)/(4 r_j^2)⌋ r_j^2        (6)

where K_2 = 2π (γ^2 + 1) / (y_b z_b).
From equations (4) and (6), V = K_1 ⌊(y_b z_b)/(4 r_j^2)⌋ r_j^2 and W = K_2 ⌊(y_b z_b)/(4 r_j^2)⌋ r_j^2, where K_1 and K_2 are positive constants. Hence V = (K_1/K_2) W, so maximizing the percentage of covered volume of the building is equivalent to maximizing the percentage of covered area of building sides (A and B), and vice versa.
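A small numerical check of this equivalence (a sketch; the side dimensions y_b = 40 and z_b = 60 come from Section IV, and the r_j values are arbitrary) shows that the ratio V/W = K_1/K_2 is the same for every r_j:

```python
import math

y_b, z_b = 40.0, 60.0
gamma = (math.sqrt(8) - 2) / 2

def V(r_j):  # percentage of covered volume, Eq. (4)
    k1 = (2 * math.pi / 3) * (gamma**2 + gamma + 1) / (y_b * z_b)
    return k1 * math.floor(y_b * z_b / (4 * r_j**2)) * r_j**2

def W(r_j):  # percentage of covered area of sides A and B, Eq. (6)
    k2 = 2 * math.pi * (gamma**2 + 1) / (y_b * z_b)
    return k2 * math.floor(y_b * z_b / (4 * r_j**2)) * r_j**2

for r in (4.0, 5.0, 10.0):
    print(r, round(V(r) / W(r), 4))  # constant ratio K1/K2 ~ 0.4512
```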
In Algorithm 2, we maximize the covered volume by placing the UAVs in alternating upside-down arrangements. First, we find the horizontal distance between the building and the UAVs, x_UAV = 0.7071 x_b (see Theorem 1), that guarantees the alternating upside-down arrangements. Then, we divide building sides A and B into square cells and place one UAV in front of each square cell. In steps (8-16), we find the 3D locations of the UAVs that cover the building from side B. On the other hand, steps (17-25) find the 3D locations of the UAVs that cover the building from side A. Finally, the algorithm outputs the total number of UAVs and the total covered volume.

Algorithm 2 Maximizing Indoor Wireless Coverage Using UAVs
1: Input:
2:   The dimensions of the building x_b, y_b and z_b
3:   The radius of the larger circular face r_j
4: Initialization:
5:   r_i = ((√8 − 2)/2) r_j
6:   x_UAV = 0.7071 x_b
7:   u = q = 0
8: The 3D locations of UAVs that cover the building from side B are given by:
9: For k1 = 1 : ⌊y_b / (2 r_j)⌋
10:   For s1 = 1 : ⌊z_b / (2 r_j)⌋
11:     u = u + 1
12:     x_u = x_UAV + x_b
13:     y_u = (2 k1 − 1) r_j
14:     z_u = (2 s1 − 1) r_j
15:   End
16: End
17: The 3D locations of UAVs that cover the building from side A are given by:
18: For k2 = 1 : ⌊y_b / (3 r_j)⌋
19:   For s2 = 1 : ⌊z_b / (3 r_j)⌋
20:     q = q + 1
21:     x_q = −x_UAV
22:     y_q = (2 k2) r_j
23:     z_q = (2 s2) r_j
24:   End
25: End
26: Output:
27:   The number of UAVs = u + q + 2 k1 + 2 s1
28:   The covered volume = (u)(2)((π/3) x_b (r_i^2 + r_i r_j + r_j^2))
IV. SIMULATION RESULTS
Let the dimensions of the building, in the shape of a rectangular prism, be [0, xb = 30]×[0, yb =
40] × [0, zb = 60]. We use three methods to cover the building using UAVs. In the first method,
we place all UAVs in front of one building side (A or B) (FOBS). In the second method, we
place all UAVs above the building (C) (ABS). In the third method, we arrange the UAVs in
alternating upside-down arrangements (AUDA). For the first and second methods, we utilize
the circle packing in a rectangle approach [22] to maximize the covered volume. For the third
method, we apply Algorithm 2 to maximize the covered volume. In Figure 14, we find the
maximum total coverage for different antenna half power beamwidth angles θB . As can be seen
from the simulation results, the maximum total coverage is less than half for the FOBS and
15
Algorithm 2 Maximizing Indoor Wireless Coverage Using UAVs
1: Input:
2: The dimensions of building xb , yb and zb
3: The radius of the larger circular face rj
4: Initialization:
√
8−2
5: ri =
rj
2
6: xU AV = 0.7071xb
7: u = q = 0
8: The 3D locations of UAVs that cover the building from
side B are given by:
yb
9: For k1 = 1 : ⌊ ⌋
2rj
zb
10:
For s1 = 1 : ⌊
⌋
2rj
11:
u=u+1
12:
xq = xU AV + xb
13:
yu = (2k1 − 1)rj
14:
zu = (2s1 − 1)rj
15:
End
16: End
17: The 3D locations of UAVs that cover the building from
side A are given by:
yb
18: For k2 = 1 : ⌊ ⌋
3rj
zb
⌋
19:
For s2 = 1 : ⌊
3rj
20:
q =q+1
21:
xq = −xU AV
22:
yq = (2k2 )rj
23:
zq = (2s2 )rj
24:
End
25: End
26: Output:
27: The number of UAVs= u + q + 2k1 + 2s1
28: The covered volume=(u)(2)( π3 ∗ xb ∗ (ri2 + ri rj + rj2 ))
16
Total coverage (Normalized)
1
FOBS
ABS
AUDA
0.8
0.6
0.4
0.2
0
5
10
15
The antenna half power beamwidth (
20
θ )
B
Fig. 14: Total coverage vs. θB
100
FOBS
ABS
AUDA
Number of UAVs
80
60
40
20
0
8
10
12
14
16
18
The antenna half power beamwidth (
20
θ )
22
B
Fig. 15: Number of UAVs vs. θB
ABS methods, this is because providing wireless coverage from one building side will only
maximize the covered area of the building side. On the other hand, we improve the maximum
total coverage by applying the AUDA, this is because AUDA will allow us to use a higher
number of UAVs to provide wireless coverage compared with providing wireless coverage from
one building side, as shown in Figure 15.
In order to provide full wireless coverage for the building, we use UAVs with different channels
to cover the holes in the building. In Figure 16, we find the total number of UAVs required to
17
300
FOBS
ABS
AUDA
Number of UAVs
250
200
150
100
50
0
8
10
12
14
16
18
The antenna half power beamwidth (
20
θ )
22
B
Fig. 16: Number of UAVs vs. θB
provide full coverage. As can be seen from the figure, FOBS and ABS need high number of
UAVs to guarantee full wireless coverage for the building, due to the irregular shapes of the holes
in the building. Here, we can easily specify the number of UAVs required to cover each hole in
the building, due to the small projections of the holes in the building side. On the other hand,
AUDA needs fewer number of UAVs to provide full wireless coverage, due to the small-regular
shapes of the uncovered spaces inside the building. Here, we need only one UAV to cover each
hole. In Figure 17, we find the total transmit power consumed by UAVs when the building is
fully covered. Here, we assume that the threshold SNR equals 25dB, the noise power equals
-120dBm, the frequency of the channel is 2GHz and the antenna gain of each indoor user is
14.4 dB [16]. As can be seen from the figure, the total transmit power in all methods is very
small, due to the high gain of the directional antennas. Also, we can notice that the total power
consumed in FOBS and ABS is higher than that of AUDA. This is because the number of UAVs
required to fully cover the building in AUDA is fewer than that for FOBS and ABS.
V. C ONCLUSION
Choosing the appropriate placement of UAVs will be a critical issue when we aim to maximize
the indoor wireless coverage. In this paper, we study the case that the UAVs are using one channel,
thus in order to maximize the total indoor wireless coverage, we avoid any overlapping in their
coverage volumes. We present two methods to place the UAVs; providing wireless coverage from
18
Total transmit power consumed
by UAVs (Watts)
×10
-4
1.2
1
0.8
0.6
0.4
FOBS
ABS
AUDA
0.2
0
8
10
12
14
16
18
The antenna half power beamwidth (
20
θ )
22
B
Fig. 17: Total transmit power vs. θB
one building side and from two building sides. In the first method, we utilize circle packing theory
to determine the 3-D locations of the UAVs in a way that the total coverage area is maximized.
In the second method, we place the UAVs in front of two building sides and efficiently arrange
the UAVs in alternating upside-down arrangements. We show that the upside-down arrangements
problem can be transformed from 3D to 2D and based on that we present an efficient algorithm
to solve the problem. Our results show that the upside-down arrangements, can improve the
maximum total coverage by 100% compared to providing wireless coverage from one building
side.
ACKNOWLEDGMENT
This work was supported in part by the NSF under Grant CNS-1647170.
R EFERENCES
[1] wikipedia,
“Mobile
cell
sites,”
(Accessed
on
April
2017).
[Online].
Available:
https://en.wikipedia.org/wiki/Mobile{ }cell{ }sites
[2] P. Bupe, R. Haddad, and F. Rios-Gutierrez, “Relief and emergency communication network based on an autonomous
decentralized uav clustering network,” in SoutheastCon 2015.
IEEE, 2015, pp. 1–8.
[3] R. I. Bor-Yaliniz, A. El-Keyi, and H. Yanikomeroglu, “Efficient 3-d placement of an aerial base station in next generation
cellular networks,” in Communications (ICC), 2016 IEEE International Conference on.
IEEE, 2016, pp. 1–5.
[4] H. Shakhatreh, A. Khreishah, and B. Ji, “Providing wireless coverage to high-rise buildings using uavs,” in (IEEE
International Conference on Communications, IEEE ICC 2017 (accepted).
IEEE, 2017.
19
[5] H. Shakhatreh, A. Khreishah, A. Alsarhan, I. Khalil, A. Sawalmeh, and O. Noor Shamsiah, “Efficient 3d placement of
a uav using particle swarm optimization,” in The International Conference on Information and Communication Systems
(ICICS 2017) (accepted).
[6] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Mobile internet of things: Can uavs provide an energy-efficient
mobile architecture?” arXiv preprint arXiv:1607.02766, 2016.
[7] H. Shakhatreh, A. Khreishah, J. Chakareski, H. B. Salameh, and I. Khalil, “On the continuous coverage problem for a
swarm of uavs,” in Sarnoff Symposium, 2016 IEEE 37th.
IEEE, 2016, pp. 130–135.
[8] O. Georgiou, “Simultaneous wireless information and power transfer in cellular networks with directional antennas,” IEEE
Communications Letters, 2016.
[9] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Efficient deployment of multiple unmanned aerial vehicles for optimal
wireless coverage,” IEEE Communications Letters, vol. 20, no. 8, pp. 1647–1650, 2016.
[10] M. M. Azari, F. Rosas, K.-C. Chen, and S. Pollin, “Optimal uav positioning for terrestrial-aerial communication in presence
of fading,” in Global Communications Conference (GLOBECOM), 2016 IEEE. IEEE, 2016, pp. 1–7.
[11] E. Kalantari, H. Yanikomeroglu, and A. Yongacoglu, “On the number and 3d placement of drone base stations in wireless
cellular networks,” in Proc. of IEEE Vehicular Technology Conference, 2016.
[12] D. Orfanus, E. P. de Freitas, and F. Eliassen, “Self-organization as a supporting paradigm for military uav relay networks,”
IEEE Communications Letters, vol. 20, no. 4, pp. 804–807, 2016.
[13] J. Kosmerl and A. Vilhar, “Base stations placement optimization in wireless networks for emergency communications,” in
Communications Workshops (ICC), 2014 IEEE International Conference on.
IEEE, 2014, pp. 200–205.
[14] H. Shakhatreh, A. Khreishah, and I. Khalil, “The indoor mobile coverage problem using uavs,” in submitted to IEEE
Transactions on Wireless Communications.
IEEE, 2017.
[15] A. Ö. Kaya and D. Calin, “On the wireless channel characteristics of outdoor-to-indoor lte small cells,” IEEE Transactions
on Wireless Communications, vol. 15, no. 8, pp. 5453–5466, 2016.
[16] R. Feick, M. Rodrı́guez, L. Ahumada, R. A. Valenzuela, M. Derpich, and O. Bahamonde, “Achievable gains of directional
antennas in outdoor-indoor propagation environments,” IEEE Transactions on Wireless Communications, vol. 14, no. 3,
pp. 1447–1456, 2015.
[17] K. Venugopal, M. C. Valenti, and R. W. Heath, “Device-to-device millimeter wave communications: Interference, coverage,
rate, and finite topologies,” IEEE Transactions on Wireless Communications, vol. 15, no. 9, pp. 6175–6188, 2016.
[18] C. A. Balanis, Antenna theory: analysis and design.
John Wiley & Sons, 2016.
[19] M. Series, “Guidelines for evaluation of radio interface technologies for imt-advanced,” Report ITU, no. 2135-1, 2009.
[20] E. G. Birgin, J. Martınez, and D. P. Ronconi, “Optimizing the packing of cylinders into a rectangular container: a nonlinear
approach,” European Journal of Operational Research, vol. 160, no. 1, pp. 19–33, 2005.
[21] M. Hifi and R. M’hallah, “A literature review on circle and sphere packing problems: Models and methodologies,” Advances
in Operations Research, vol. 2009, 2009.
[22] E.
G.
Birgin,
“Packing
problems,”
https://www.ime.usp.br/∼egbirgin/packing/
(Accessed
on
April
2017).
[Online].
Available:
| 7 |
arXiv:1705.06456v1 [] 18 May 2017
Groups whose Chermak-Delgado lattice is a quasi-antichain∗
Lijian An
Department of Mathematics, Shanxi Normal University
Linfen, Shanxi 041004, P. R. China
April 1, 2018
Abstract
A quasiantichain is a lattice consisting of a maximum, a minimum, and the
atoms of the lattice. The width of a quasiantichian is the number of atoms. For
a positive integer w (≥ 3), a quasiantichain of width w is denoted by Mw . In [3],
it is proved that Mw can be as a Chermak-Delgado lattice of a finite group if and
only if w = 1 + pa for some positive integer a. Let t be the number of abelian
atoms in CD(G). If t > 2, then, according to [3], there exists a positive integer
b such that t = pb + 1. The converse is still an open question. In this paper, we
proved that a = b or a = 2b.
Keywords finite p-groups
generated torsion module
Chermak-Delgado lattice
quasi-antichain finitely
2000 Mathematics subject classification: 20D15.
Chermak and Delgado [5] defined a family of functions from the set of subgroups of a
finite group into the set of positive integers. They then used these functions to obtain
a variety of results, including a proof that every finite group G has a characteristic
abelian subgroup N such that |G : N | ≤ |G : A|2 for any abelian A ≤ G. ChermakDelgado measures are values of one of these functions. For any subgroup H of G,
the Chermak-Delgado measure of H (in G) is denoted by mG (H), and defined as
mG (H) = |H||CG (H)|. The maximal Chermak-Delgado measure of G is denoted by
m∗ (G). That is , m∗ (G) = max{mG (H) | H ≤ G}. In [6], Isaacs showed the following
theorem.
Theorem 1. ([6]) Given a finite group G, let CD(G) = {H | mG (H) = m∗ (G)}. Then
(1) CD(G) is a lattice of subgroups of G.
(2) If H, K ∈ CD(G), then hH, Ki = HK.
(3) If H ∈ CD(G), then CG (H) ∈ CD(G) and CG (CG (H)) = H.
By Theorem 1, Chermak-Delgado lattice of a finite groups is always a self-dual
lattice. It is natural to ask a question: which types of self-dual lattices can be used
as Chermak-Delgado lattices of finite groups. Some special cases of the above question
∗
This work was supported by NSFC (No. 11471198)
1
are proposed and solved. In [2], it is proved that, for any integer n, a chain of length
n can be a Chermak-Delgado lattice of a finite p-group. In [1], general conclusions are
given.
Theorem 2. ([1]) If L is a Chermak-Delgado lattice of a finite p-group G such that both
G/Z(G) and G′ are elementary abelian, then are L+ and L++ , where L+ is a mixed
3-string with center component isomorphic to L and the remaining components being
m-diamonds (a lattice with subgroups in the configuration of an m-dimensional cube),
L++ is a mixed 3-string with center component isomorphic to L and the remaining
components being lattice isomorphic to Mp+1 (a quasiantichain of width p + 1, see the
following definition).
A quasiantichain is a lattice consisting of a maximum, a minimum, and the atoms
of the lattice. The width of a quasiantichian is the number of atoms. For a positive
integer w (≥ 3), a quasiantichain of width w is denoted by Mw . In [3], it is proved that
Mw can be as a Chermak-Delgado lattice of a finite group if and only if w = 1 + pa
for some positive integer a. According to [3], if Mw is a Chermak-Delgado lattice of
a finite group G, then G is nilpotent of class 2. Moreover, we may choose G to be
a p-group of class 2 without loss of generality. Let M1 , M2 , . . . , Mw be all atoms of
CD(G). Then G has the following properties:
(P1) Both G/Z(G) and G′ are elementary abelian;
(P2) Mi /Z(G) ∼
= Mj /Z(G) and G/Z(G) = Mi /Z(G) × Mj /Z(G) for i 6= j;
(P3) Let M1 = hx1 , x2 , . . . , xn iZ(G) and M2 = hy1 , y2 , . . . , yn iZ(G) such that
M1 /Z(G) and M2 /Z(G) are elementary abelian groups of order pn . Then, for k ≥ 3,
Mk = hx1 y1′ , x2 y2′ , . . . , xn yn′ iZ(G) where M2 = hy1′ , y2′ , . . . , yn′ iZ(G). Moreover, there is
(k)
an invertible matrix Ck = (cij )n×n over Fp such that
yj′ ≡
n
Y
c
(k)
yi ij (mod Z(G)), j = 1, 2, . . . , n.
i=1
In this paper, Ck is called the characteristic matrix of Mk relative to (x1 , x2 , . . . , xn )
and (y1 , y2 , . . . , yn ).
Let G be a p-group with CD(G) a quasi-antichain of width w ≥ 3. Let t be the
number of abelian atoms in CD(G). If t > 2, then, according to [3], there exists a
positive integer b such that t = pb + 1. The converse is still an open question. At the
end of [3], such questions are proposed definitely:
(Q1) Which values of t are possible in quasi-antichain Chermak-Delgado lattices of
width w = pa + 1 when a > 1?
(Q2) Are there examples of groups G with G ∈ CD(G) and CD(G) a quasi-antichain
where t = 0 and p ≡ 1 modulo 4?
This paper answer above questions completely. It is amazing that there are only
two possible relations: a = b or a = 2b. We also prove that a | n, which is another appli2
cation about the decomposition of a finitely generated torsion module over a principal
ideal domain. Examples related to (Q1) and (Q2) are given.
Throughout the article, let p be a prime and G be a finite p-group with properties
(P1), (P2) and (P3).
Theorem 3. Suppose that there are totally t > 2 abelian atoms in CD(G), which is a
quasi-antichain of width w. Then there are positive integers a and b such that w = pa +1
and t = pb + 1, where a = b or a = 2b. If |G/Z(G)| = p2n , then a | n.
Proof Without loss of generality, we may assume that M1 , M2 and M3 are abelian
atoms. For convenience, the operation of the group G is replaced to be addition.
According to (P3), M3 = hx1 + y1′ , x2 + y2′ , . . . , xn + yn′ iZ(G) such that
yj′ ≡
n
Y
(3)
cij yi (mod Z(G)), j = 1, 2, . . . , n.
i=1
(3)
where C3 = (cij )n×n is the characteristic matrix of Mk relative to (x1 , x2 , . . . , xn ) and
(y1 , y2 , . . . , yn ). Replacing y1 , y2 , . . . , yn with y1′ , y2′ , . . . , yn′ respectively, we may assume
that C3 = In .
Let zij = [xi , yj ] and Z = (zij )n×n . Then G′ = hzij | i, j = 1, 2, . . . , ni. We have the
following formalized calculation:
[((x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )Ck )T , (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )Ck′ ]
= [(x1 , x2 , . . . , xn )T + CkT (y1 , y2 , . . . , yn )T , (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )Ck′ ]
=
[(x1 , x2 , . . . , xn )T , (y1 , y2 , . . . , yn )Ck′ ] + [CkT (y1 , y2 , . . . , yn )T , (x1 , x2 , . . . , xn )]
=
ZCk′ + CkT (−Z T ) = ZCk′ − (ZCk )T .
Hence CG (Mk ) = Mk′ if and only if (ZCk )T = ZCk′ . In particular, Mk is abelian if
and only if ZCk is symmetric. Since M3 is abelian, Z T = Z.
Let V = {C ∈ Mn (Fp ) | ∃ C ′ ∈ Mn (Fp ) s.t. C T Z = ZC ′ }. It is straightforward to
prove that V is a linear space over Fp . Hence |V| = pa for some positive integer a. In
the following, we prove that V is also a field.
Assume that On 6= C ∈ V and C T Z = ZC ′ . Let
(s1 , s2 , . . . , sn ) = (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )C
and
(t1 , t2 , . . . , tn ) = (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )C ′ .
Then, by the above formalized calculation, [(s1 , s2 , . . . , sn )T , (t1 , t2 , . . . , tn )] = On . Let
N1 = hs1 , s2 , . . . , sn iZ(G) and N2 = ht1 , t2 , . . . , tn iZ(G). Then N2 ≤ CG (N1 ). Hence
|N1 ||CG (N1 )| ≥ |N1 ||N2 | = |M1 |2 = m∗ (G). It implies that N1 ∈ CD(G), and, there is
some k ≥ 3 such that N1 = Mk . Hence V \ {On } is the set of Ck where 3 ≤ k ≤ w.
3
Since Ck is invertible, V is a division algebra. By Wedderburn’s little theorem, V is
also a finite field. Now we know that w − 2 = pa − 1. hence w = pa + 1. Let A be a
a
primitive element of V. Then V = {On , A, A2 , . . . , Ap −1 = In }.
Next we study A applying decomposition of a finitely generated torsion module over
a principal ideal domain. M2 /Z(G) can be regarded as a n-dimensional vector space
V2 over Fp , in which (y1 Z(G), y2 Z(G), . . . , yn Z(G)) is a base for V2 . At this time,
A = (aij ) is a matrix of a single transformation A relative to the given base, where
n
Y
A(yj Z(G)) = ( aij yi )Z(G), j = 1, 2, . . . , n.
i=1
We can make V2 an Fp [λ]-module by defining the action of any polynomial g(λ) =
b0 + b1 λ + · · · + bm λm on any vector y ∈ V2 as
g(λ)y = b0 + b1 (Ay) + · · · + bm (Am y).
Let mA (λ) be the minimal polynomial of A. Then V2 is a torsion Fp [λ]-module of
exponent mA (λ). Obviously, mA (x) is irreducible with degree a. By fundamental
structure theorem for finitely generated torsion modules over a principle ideal domain,
V2 is a direct sum of cyclic module:
V2 = Fp [λ]v1 ⊕ Fp [λ]v2 ⊕ · · · ⊕ Fp [λ]vr
It is easy to see that annvi = mA (λ) where i = 1, 2, . . . , r. Let mA (λ) = k0 + k1 λ +
· · · + ka−1 λa−1 − ka λa . Then n = ra and
v1 , Av1 , . . . , Aa−1 v1 , v2 , Av2 , . . . , Aa−1 v2 , . . . , vr , Avr , . . . , Aa−1 vr
is a set of generators for M2 . Assume that (v1 , v2 , . . . , vn ) = (y1 , y2 , . . . , yn )S. Let
(u1 , u2 , . . . un ) = (x1 , x2 , . . . , xn )S where S = (sij ) is an invertible matrix. Then the
characteristic matrix of Mk relative to (u1 , u2 , . . . , un ) and (v1 , v2 , . . . , vn ) is S −1 Ck S.
Matrices related in this way are said to be similar.
Without loss of generality, we may assume that A is the characteristic matrix relative to (v1 , Av1 , . . . , Aa−1 vr ). It is clear that A has the form
A1 Oa . . . Oa
Oa A2 . . . Oa
A= .
..
.. ,
..
..
.
.
.
Oa
0
...
Ar
where Ai is the companion matrix of mA (λ). That is,
0 0 ...
k0
1 0 ...
k1
Ai = B = . .
..
.
.
.
.
.
.
.
.
0 ...
1 ka−1
4
,
and A = Diag(B, B, . . . , B).
If AT Z = ZA′ , then A′ T Z = ZA since Z T = Z.
there are 1 ≤ k ≤ pa − 1 such that AT Z = ZAk . Let
Z11 Z12 . . . Z1r
Z21 Z22 . . . Z2r
Z= .
..
..
..
..
.
.
.
Zr1 Zr2 . . .
Zrr
It follows that A′ ∈ V. Hence
,
T for 1 ≤ i ≤ j ≤ r. It follows
where Zij are a × a matrices. Since Z T = Z, Zji = Zij
from AT Z = ZAk that
(1)
Zij
(2)
Zij
Let Zij =
..
.
(a)
Zij
B T Zij = Zij B k
(1)
B T Zji = Zji B k
(2)
. Then
(2)
B Zij =
Zij
(3)
Zij
..
.
T
(1)
(2)
(a)
k0 Zij + k1 Zij + · · · + ka−1 Zij
(3)
By (1) and (3),
(2)
(1)
(3)
(2)
(a)
(a−1)
Zij = Zij B k , Zij = Zij B k , . . . , Zij = Zij
(1)
(2)
(a)
Bk
(a)
k0 Zij + k1 Zij + · · · + ka−1 Zij = Zij B k
(4)
(5)
Hence
(1)
Zij (k0 + k1 B k + · · · + ka−1 B (a−1)k − B ak ) = 0
(6)
(1)
That is, Zij mA (B k ) = 0. We claim that mA (B k ) = 0. Otherwise, mA (Ak ) 6= 0. Since
mA (Ak ) ∈ V and V is a field, mA (Ak ) is invertible. Hence mA (B k ) is also invertible.
(1)
Notice that if Zij = 0 then Zij = Oa by (4) and (5). Hence we may choose Zij such
(1)
(1)
that Zij 6= 0. In this case, Zij mA (B k ) 6= 0, a contradiction. Thus mA (B k ) = 0 and
a
2
mA (Ak ) = 0. Since Ap , Ap , . . . , Ap = A are all zero points of mA (λ), there exists a
1 ≤ e ≤ a such that k = pe .
e
It is easy to see that (Am )T Z = ZAmp if and only if (pa − 1) | m(pe − 1). Let
W = {C ∈ V | C T Z = ZC}. Then W \ {On } = {Am ∈ V | (pa − 1) | m(pe − 1)}. Hence
5
pa −1
W \ {On } is a cyclic group generated by A (pa −1,pe −1) of order (pa − 1, pe − 1) = p(a,e) − 1.
Let b = (a, e). Notice that |W| = t − 1. It is easy to see that t = pb + 1. By (1),
e
(B p )T Zij = Zij B p
2e
(7)
By (2),
e
(B p )T Zij = Zij B
(8)
Hence (pa − 1) | (p2e − 1). Thus a | 2e. It follows that a | 2b. Hence a = e = b or
a = 2e = 2b.
Remark 4. Let F be a field containing pa elements and F ∗ be the multiply group of
F . Then F ∗ is cyclic with order pa − 1. Let F ∗ = hbi and B : f 7→ bf be linear
transformation over F , where F is regarded as a linear space over field Fp . Of course,
the order of B is pa −1. Let p(x) be the minimal polynomial of B. It follows from CayleyHamilton theorem that deg(p(x)) = r ≤ a. Let W = {f (B) f (x) ∈ Fp [x]} = {f1 (B) |
f1 (x) ∈ Fp [x], deg(f1 ) < r}. Then dimW = deg(p(x)) = r and hence |W | = pr . On the
a
other hand, {1, B, B 2 , . . . , B p −1 } ⊆ W and hence |W | ≥ pa . So we get r = a. Let the
minimal polynomial of B is
p(x) = xa − ka−1 xa−1 − · · · − k1 x − k0 .
Then a matrix of B is Frobenius form
0 0
1 0
B= . .
..
..
0 ...
...
...
..
.
k0
k1
..
.
1
ka−1
.
Lemma 5. Let B be the matrix introduced in Remark 4. Z = (zij ) is a a × a matrix.
If B T Z = ZB, Then Z is symmetric.
Proof By calculation, the (u, v)th entry of B T Z is zu+1,v , while the (u, v)th entry of
ZB is zu,v+1 , where 1 ≤ u, v ≤ a − 1. Hence zu+1,v = zu,v+1 where 1 ≤ u, v ≤ a − 1.
Then, for u ≤ v, we have zu,v+1 = zu+1,v = zu+2,v−1 = · · · = zv,u+1 = zv+1,u . Hence Z
is symmetric.
Theorem 6. Suppose that ???
Proof Let G be generated by {x1 , x2 , . . . , xn , y1 , y2 , . . . , yn } with defining relationships xpi = yip = 1 and [xi , xj ] = [yi , yj ] = 1 for all i, j such that 1 ≤ i, j ≤
n, [x(u−1)a+i , y(v−1)a+j ] = z(u−1)a+i,(v−1)a+j for 1 ≤ u, v ≤ r and 1 ≤ i, j ≤ a,
z(u−1)a+1,(v−1)a+j ∈ Z(G) for 1 ≤ u ≤ v ≤ r and 1 ≤ j ≤ a. For convenience,
6
we use addition operation to replace mutiplication operation of G. We also use the
following notations (where 1 ≤ u, v ≤ r and 1 ≤ i, j, k ≤ a):
(k)
Zuv
:= (z(u−1)a+k,(v−1)a+1 , z(u−1)a+k,(v−1)a+2 , . . . , z(u−1)a+k,(v−1)a+a ),
Zuv
= (z(u−1)a+i,(v−1)a+j ) =
(1)
Zuv
(2)
Zuv
..
.
(3)
Zuv
and Z =
Z11 Z12 . . .
Z21 Z22 . . .
..
..
..
.
.
.
Zr1 Zr2 . . .
Z1r
Z2r
..
.
Zrr
(k)
.
(1)
Using above notations, we continue to give defining relationships Zuv = Zuv B k−1 for
T for 1 ≤ v < u ≤ r. It is easy to see that
1 ≤ u ≤ v ≤ r and 2 ≤ k ≤ a, Zuv = Zvu
G′ = Z(G) = hz(u−1)a+1,(v−1)a+j | 1 ≤ u ≤ v ≤ r, 1 ≤ j ≤ ai is elementary abelian of
r+1
r+5
order p 2 n . Hence |G| = p 2 n .
(k)
(1)
Since Zuv = Zuv B k−1 for 1 ≤ u ≤ v ≤ r and 2 ≤ k ≤ a, B T Zuv = Zuv B
for 1 ≤ u ≤ v ≤ r. By Lemma 5, Zuv is symmetric for 1 ≤ u ≤ v ≤ r. Hence
T
T
T = Z
Zuv = Zvu
vu for 1 ≤ v < u ≤ r. Moreover Zuv = Zvu = Zuv = Zvu for all
1 ≤ u, v ≤ r and Z T = Z. Let X = hx1 , x2 , . . . , xn iZ(G) and Y = hy1 , y2 , . . . , yn iZ(G).
Assertion 1: CG (x) = X for all x ∈ X \ Z(G).
Q
Let x = ni=1 ci xi +z where z ∈ Z(G). Write Ck = (c(k−1)a+1 , c(k−1)a+2 , . . . , cka ) for
1 ≤ k ≤ a. Since x 6∈ Z(G), there exists a k0 such that Ck0 6= (0, 0, . . . , 0). Exchanging
x(k0 −1)a+1 , y(k0 −1)a+1 , . . . , xk0 a , yk0 a and x(r−1)a+1 , y(r−1)a+1 , . . . , xra , yra , we have Cr 6=
(0, 0, . . . , 0). Let Hi = h[x, y(i−1)a+j ] | 1 ≤ j ≤ ai for 1 ≤ i ≤ r. We will prove that
H1 +H2 +· · ·+Hv is of order pva . Use induction, we may assume that H1 +H2 +· · ·+Hv−1
is of order p(v−1)a . By calculation,
([x, y(v−1)a+1 ], . . . , [x, yva ])
n
X
=
ck ([xk , y(v−1)a+1 ], . . . , [xk , yva ])
k=1
=
=
=
=
=
r X
a
X
u=1 i=1
r X
a
X
u=1 i=1
r X
a
X
c(u−1)a+i ([x(u−1)a+i , y(v−1)a+1 ], . . . , [x(u−1)a+i , yva ])
(i)
c(u−1)a+i Zuv
(1) i−1
c(u−1)a+i Zuv
B
u=1 i=1
a
r
X
X
(1)
c(u−1)a+i B i−1 )
Zuv
(
u=1
i=1
a
r−1
X
X
(1)
c(u−1)a+i B i−1 ) +
Zuv (
u=1
i=1
7
a
X
(1)
Zrv
(
i=1
c(r−1)a+i B i−1 )
P
P
Since Cr 6= (0, 0, . . . , 0), ( ai=1 c(r−1)a+i B i−1 ) 6= Oa . By Remark 4, ( ai=1 c(r−1)a+i B i−1 )
(1) P
is invertible. Hence Zrv ( ai=1 c(r−1)a+i B i−1 ) is of rank a. It follows that (H1 + H2 +
· · · + Hv−1 ) ∩ Hv = 0. Thus H1 + H2 + · · · + Hv is of order pva .
By above discussion, H1 +H2 +· · ·+Hv is of order pn . Hence |[x, G]| = pn . It follows
r+3
r+3
that |CG (x)| = |G|/pn = p 2 n . Since |X| = p 2 n and X ≤ CG (x), CG (x) = X.
Similarly, we have CG (y) = Y for all y ∈ Y \ Z(G). Then CG (X) = X and
CG (Y ) = Y , yielding mG (G) = mG (X) = mG (Y ) = p(r+3)n .
Assertion 2: m∗ (G) = p(r+3)n .
Otherwise, by the dual-property of CD-lattice, there exists H ∈ CD(G) such that
(r+3)
(r+1)
n
2
H < G and |H| > p 2 n . Since |H ∩ X| = |H||X|
= |Z(G)|, there exists
|HX| > p
x ∈ H ∩ X \ Z(G). Hence CG (H) ≤ CG (x) = X. Similarly, we have CG (H) ≤ Y .
Hence CG (H) = Z(G) and mG (H) < mG (G), a contradiction.
Above discussion also gives that CD(G) is a quasi-antichain, in which every atom
(r+3)
is of order p 2 n .
Assertion 3: w = t = 1 + pa .
Now we have G, Z(G), X, Y ∈ CD(G). Let M be an atom different from X and Y .
Then, by the same reason given in Theorem 3, we may let M = hw1 , w2 , . . . , wn i and
N = CG (M ) = hv1 , v2 , . . . , vn i where
(w1 , w2 , . . . , wn ) = (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )C,
(v1 , v2 , . . . , vn ) = (x1 , x2 , . . . , x3n ) + (y1 , y2 , . . . , yn )D.
Since [M, N ] = 0, C T Z = ZD. Let
C11 C12 . . . C1r
C21 C22 . . . C2r
C= .
..
..
..
..
.
.
.
Cr1 Cr2 . . . Crr
Then
r
X
k=1
T
Cki
Zkj
=
and D =
r
X
D11 D12 . . .
D21 D22 . . .
..
..
..
.
.
.
Dr1 Dr2 . . .
Zik Dkj for 1 ≤ i, j ≤ r.
D1r
D2r
..
.
Drr
.
(9)
k=1
Taking i 6= j in Equation (9) and compiling two sides, we get Cik = Oa for i 6= k,
Dkj = Oa for k 6= j, and
CiiT Zij = Zij Djj for 1 ≤ i, j ≤ r and i 6= j.
(10)
Taking i = j in Equation (9) and compiling two sides, we get
CiiT Zii = Zii Dii for 1 ≤ i ≤ r.
(11)
Notice that Zij and Z11 have no essential difference. Thus Cii = Cjj and Dii = Djj for
i 6= j.
8
Let V = {U ∈ Ma (Fp ) | ∃ V ∈ Ma (Fp ) s.t. U T Z11 = Z11 V }. Similar to the proof
in Theorem 3, V is a division algebra. Hence V has at most pa elements (otherwise,
there exists a non-zero matrix in which the elements of the first row are all zero, a
contradiction). Notice that B T Z11 = Z11 B. Hence B k ∈ V where 1 ≤ k ≤ pa − 1. it
a
follows that V = {On , B, B 2 , . . . , B p −1 = In }.
Therefore C = D = Diag(B k , B k , . . . , B k ) for some 1 ≤ k ≤ pa − 1. Hence w = t =
pa + 1.
Theorem 7. Suppose that a, b and r are positive integers such that a = 2b and r ≥ 3.
Let n = ar. Then there exists a group G such that |G/Z(G)| = p2n , and CD(G) is a
quasi-antichain of width w = pa + 1, in which the number of abelian atoms is t = pb + 1.
Proof Let G be generated by {x1 , x2 , . . . , xn , y1 , y2 , . . . , yn } with defining relationships
similar to that in Theorem 6. The followings are different defining relationships from
(k)
that in Theorem 6: [x(u−1)a+i , y(u−1)a+j ] = 1 for 1 ≤ u ≤ r and 1 ≤ i, j ≤ a, Zuv =
b
(1)
Zuv B (k−1)p for 1 ≤ u < v ≤ r and 2 ≤ k ≤ a.
In this case, G′ = Z(G) = hz(u−1)a+1,(v−1)a+j | 1 ≤ u < v ≤ r, 1 ≤ j ≤ ai is
r+3
r−1
elementary abelian of order p 2 n . Hence |G| = p 2 n .
b
b
(k)
(1)
Since Zuv = Zuv B (k−1)p for 1 ≤ u < v ≤ r and 2 ≤ k ≤ a, B T Zuv = Zuv B p for
1 ≤ u < v ≤ r. In this case, Zuv is not symmetric and Zuv 6= Zvu for all 1 ≤ v < u ≤ r.
Z T = Z is still hold. Let X = hx1 , x2 , . . . , xn iZ(G) and Y = hy1 , y2 , . . . , yn iZ(G).
Similar to Assertion 1 in the proof of Theorem 6, we have |[x, Y ]| ≥ p(r−1)a for all
x ∈ X \ Z(G). It follows that |CY (x)| ≤ |Y |/p(r−1)a = pa |Z(G)|.
Similarly, |CX (y)| ≤ pa |Z(G)| for all y ∈ Y \ Z(G). It is easy to check that
CG (X) = X and CG (Y ) = Y , yielding mG (G) = mG (X) = mG (Y ) = p(r+1)n .
Assertion (1): m∗ (G) = p(r+1)n .
Otherwise, by the dual-property of CD-lattice, there exists H ∈ CD(G) such that
(r−1)
(r+1)
n
2
= |Z(G)|, there exists
H < G and |H| > p 2 n . Since |H ∩ X| = |H||X|
|HX| > p
x ∈ H ∩ X \ Z(G). Hence CG (H) ≤ CG (x) = XCY (x). Similarly, there exists y ∈ H ∩
Y \ Z(G). Hence CG (H) ≤ CG (y) = Y CX (y). It follows that CG (H) ≤ CX (y)CY (x).
Hence |CG (H)| ≤ p2a |Z(G)|. Obviously CG (H) > Z(G). Hence there exists 1 6=
x′ y ′ ∈ CG (H) \ Z(G), where x′ ∈ CX (y), y ′ ∈ CY (x). Whatever, we have |H| ≤
r+1
|CG (x′ y ′ )| ≤ |G : [G, x′ y ′ ]| ≤ p 2 n+a . Hence mG (H) = |H||CG (H)| ≤ prn+3a ≤
p(r+1)n , a contradiction.
Assertion (2): CD(G) is a quasi-antichain.
Otherwise, by the dual-property of CD-lattice, there exists H ∈ CD(G) such that
(r+1)
H < G and |H| > p 2 n . By Assertion (1), n = 3a and there exists x ∈ H ∩ X \ Z(G)
and y ∈ H ∩ Y \ Z(G) such that CG (H) = CX (y)CY (x), where |CX (y)| = |CY (x)| =
pa |Z(G)|. Take x′ ∈ CX (y) \ Z(G) and y ′ ∈ CY (x) \ Z(G). Then H ≤ CX (y ′ )CX (x′ ) ≤
(r+1)
p2a |Z(G)| < p 2 n , a contradiction.
9
Assertion (3): w = pa + 1 and t = pb + 1.
Let M be an atom different from X and Y . Then, by the same reason given in
Theorem 3, we may let M = hw1 , w2 , . . . , wn i and N = CG (M ) = hv1 , v2 , . . . , v3n i
where
(w1 , w2 , . . . , wn ) = (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )C,
(v1 , v2 , . . . , v3 ) = (x1 , x2 , . . . , x3n ) + (y1 , y2 , . . . , y3 )D.
We also have C T Z = ZD.
C11 C12
C21 C22
C= .
..
..
.
Then
Let
...
...
..
.
C1r
C2r
..
.
Cr1 Cr2 . . .
Crr
r
X
k=1
T
Zkj =
Cki
and D =
r
X
D11 D12 . . .
D21 D22 . . .
..
..
..
.
.
.
Dr1 Dr2 . . .
Zik Dkj for 1 ≤ i, j ≤ r.
D1r
D2r
..
.
Drr
.
(12)
k=1
Taking i 6= j in Equation (12) and compiling two sides, we get Cik = Oa for i 6= k,
Dkj = Oa for k 6= j, and
CiiT Zij = Zij Djj for 1 ≤ i, j ≤ r and i 6= j.
(13)
Notice that Zij , Zik and Zki have no essential difference for k 6= i, j. Thus Cii = Ckk and
Djj = Dkk for k 6= i, j. Since r ≥ 3, C11 = C22 = · · · = Crr and D11 = D22 = · · · = Drr .
Let V = {U ∈ Ma (Fp ) | ∃ V ∈ Ma (Fp ) s.t. U T Z12 = Z12 V }. Similar to the proof
in Theorem 3, V is a division algebra. Hence V has at most pa elements (otherwise,
there exists a non-zero matrix in which the elements of the first row are all zero, a
b
contradiction). Notice that B T Z12 = Z12 B p . Hence B k ∈ V where 1 ≤ k ≤ pa − 1. It
a
follows that V = {On , B, B 2 , . . . , B p −1 = In }.
b
b
b
Therefore C = Diag(B k , B k , . . . , B k ) and D = Diag(B kp , B kp , . . . , B kp ) for some
1 ≤ k ≤ pa − 1. Hence w = pa + 1. It is easy to see that M = N if and only if C = D
if and only if (pb + 1) | k. Hence the number of abelian atoms t = pb + 1.
Theorem 8. Suppose that a and r are positive integers such that r ≥ 3. Let n = ar.
Then there exists a group G such that |G/Z(G)| = p2n , and CD(G) is a quasi-antichain
of width w = pa + 1, in which the number of abelian atoms is 1 or 2 for p = 2 or p 6= 2
respectively.
Proof Let G be generated by {x1 , x2 , . . . , xn , y1 , y2 , . . . , yn } with defining relationships
xpi = yip = 1 and [xi , yj ] = 1 for all i, j such that 1 ≤ i, j ≤ n, [x(u−1)a+i , x(u−1)a+j ] = 1
for 1 ≤ u ≤ r and 1 ≤ i < j ≤ a, [x(u−1)a+i , x(v−1)a+j ] = z(u−1)a+i,(v−1)a+j for
1 ≤ u < v ≤ r and 1 ≤ i, j ≤ a, [yi , yj ] = [xj , xi ] for every i, j with 1 ≤ i < j ≤ n,
z(u−1)a+1,(v−1)a+j ∈ Z(G) for 1 ≤ u < v ≤ r and 1 ≤ j ≤ a. For convenience, we use
10
addition operation to replace mutiplication operation of G. We also use the following
notations (where 1 ≤ u, v ≤ r and 1 ≤ i, j, k ≤ a):
(k)
Zuv
:= (z(u−1)a+k,(v−1)a+1 , z(u−1)a+k,(v−1)a+2 , . . . , z(u−1)a+k,(v−1)a+a ),
(1)
Zuv
Z11 Z12 . . . Z1r
(2)
Z21 Z22 . . . Z2r
Zuv
Zuv = (z(u−1)a+i,(v−1)a+j ) =
and
Z
=
..
..
.. .
..
..
.
.
.
.
.
(3)
Zr1 Zr2 . . .
Zuv
(k)
Zrr
(1)
Using above notations, we continue to give defining relationships Zuv = Zuv B k−1 for
T for 1 ≤ v < u ≤ r,
1 ≤ u < v ≤ r and 2 ≤ k ≤ a. It is easy to see that Zuv = −Zvu
G′ = Z(G) = hz(u−1)a+1,(v−1)a+j | 1 ≤ u < v ≤ r, 1 ≤ j ≤ ai is elementary abelian of
r+3
r−1
order p 2 n . Hence |G| = p 2 n .
Since [x(u−1)a+i , x(u−1)a+j ] = 1 for 1 ≤ u ≤ r and 1 ≤ i < j ≤ a, Zuu = Oa for
(k)
(1)
1 ≤ u ≤ r. Since Zuv = Zuv B k−1 for 1 ≤ u < v ≤ r and 2 ≤ k ≤ a, B T Zuv = Zuv B
for 1 ≤ u < v ≤ r. By Lemma 5, Zuv is symmetric for 1 ≤ u < v ≤ r. Hence
T
T = −Z
Zuv = −Zvu
vu for 1 ≤ v < u ≤ r. Moreover B Zuv = Zuv B for all 1 ≤ u, v ≤ r
and Z T = −Z. Let X = hx1 , x2 , . . . , xn iZ(G) and Y = hy1 , y2 , . . . , yn iZ(G).
Similar to Assertion 1 in the proof of Theorem 6, we have |[x, X]| ≥ p(r−1)a for all
x ∈ X \ Z(G). It follows that |CX (x)| ≤ |X|/p(r−1)a = pa |Z(G)|.
Similarly, |CY (y)| ≤ pa |Z(G)| for all y ∈ Y \Z(G). It is easy to check that CG (X) =
Y and CG (Y ) = X, yielding mG (G) = mG (X) = mG (Y ) = p(r+1)n .
Similar to Assertion (1) and Assertion (2) in the proof of Theorem 7, we have
= p(r+1)n and CD(G) is a quasi-antichain.
Let M be an atom different from X and Y . Then, by the same reason given in
Theorem 3, we may let M = hw1 , w2 , . . . , wn i and N = CG (M ) = hv1 , v2 , . . . , v3n i
where
(w1 , w2 , . . . , wn ) = (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn )C,
m∗ (G)
(v1 , v2 , . . . , vn ) = (x1 , x2 , . . . , xn )D + (y1 , y2 , . . . , yn ).
Since [M, N ] = 0, C T Z = ZD. Let
C11 C12 . . . C1r
C21 C22 . . . C2r
C= .
..
..
..
..
.
.
.
Cr1 Cr2 . . . Crr
and D =
D11 D12 . . .
D21 D22 . . .
..
..
..
.
.
.
Dr1 Dr2 . . .
D1r
D2r
..
.
Drr
.
Similar to Assertion 3 in the proof of Theorem 6, we have C = D = Diag(B k , B k , . . . , B k )
for some 1 ≤ k ≤ pa − 1. Hence w = pa + 1. It is easy to see that M = N if and only if
CD = In if and only if (pa − 1) | 2k. Hence the number of abelian atoms is 1 or 2 for
p = 2 and p 6= 2 respectively.
11
Theorem 9. Suppose that p is and odd prime, a is odd, and r is positive integers such
that r ≥ 3. Let n = ar. Then there exists a group G such that |G/Z(G)| = p2n , and
CD(G) is a quasi-antichain of width w = pa + 1, in which the number of abelian atoms
is t = 0.
Proof Let G be generated by {x1 , x2 , . . . , xn , y1 , y2 , . . . , yn } with defining relationships
similar to that in Theorem 8. The unique different defining relationship is [yi , yj ] =
[xj , xi ]ν for every i, j with 1 ≤ i < j ≤ n, where ν is a fixed quadratic non-residue
module p.
Similar to the proof of Theorem 8, we have m∗ (G) = p(r+1)n , CD(G) is a quasiantichain, w = pa + 1, νC = D = Diag(B k , B k , . . . , B k ) for some 1 ≤ k ≤ pa − 1, and
M = N if and only if νB 2k = Ia .
By the definition of B, hB
pa −1
p−1
i = hlIa | 1 ≤ l ≤ p − 1i. Since ν is not a square,
a
−1
m pp−1
there exists an odd m such that B
= νIa .
a −1
pa −1
2k
a
If νB = Ia , then (p −1) | 2k +m p−1 . Notice that m and pp−1
= 1+p+· · ·+pa−1
are all odd. There is no integer k such that νB 2k = Ia . Hence the number of abelian
atoms is t = 0.
References
[1] AN, L., Brenna, J., Qu, H., Wilcox, E., Chermak-Delgado lattice extension theorems, Algebra comm., 2015, 43(5): 2201–2213.
[2] Brewster, B., Hauck, P., Wilcox, E., Groups whose Chermak-Delgado lattice is a
chain, Journal of Group Theory, 2014, 17(2): 253–265.
[3] Brewster, B., Hauck, P., Wilcox, E., Quasi-antichain Chermak-Delgado lattice of
finite groups, Arch. Math., 2014, 103: 301–311.
[4] Brewster, B., Wilcox, E., Some groups with computable Chermak-Delgado lattices,
Bulletin of the Australian Mathematical Society, 2012, 86(1): 29–40.
[5] Chermak, A., Delgado, A., A measuring argument for finite groups, Proceedings
of the American Mathematical Society, 1989, 107(4): 907–914.
[6] Isaacs, I. M., Finite Group Theory, American Mathematical Society, 2008.
12
| 4 |
1
Investigations of a Robotic Testbed with
Viscoelastic Liquid Cooled Actuators
arXiv:1711.01649v2 [] 8 Mar 2018
Donghyun Kim, Junhyeok Ahn, Orion Campbell, Nicholas Paine, and Luis Sentis,
Abstract—We design, build, and thoroughly test a new type of
actuator dubbed viscoelastic liquid cooled actuator (VLCA) for
robotic applications. VLCAs excel in the following five critical
axes of performance: energy efficiency, torque density, impact
resistence, joint position and force controllability. We first study
the design objectives and choices of the VLCA to enhance the
performance on the needed criteria. We follow by an investigation
on viscoelastic materials in terms of their damping, viscous and
hysteresis properties as well as parameters related to the longterm performance. As part of the actuator design, we configure
a disturbance observer to provide high-fidelity force control
to enable a wide range of impedance control capabilities. We
proceed to design a robotic system capable to lift payloads of
32.5 kg, which is three times larger than its own weight. In
addition, we experiment with Cartesian trajectory control up to
2 Hz with a vertical range of motion of 32 cm while carrying a
payload of 10 kg. Finally, we perform experiments on impedance
control and mechanical robustness by studying the response of
the robotics testbed to hammering impacts and external force
interactions.
Index Terms—Viscoelastic liquid cooled actuator, Torque feedback control, Impedance control.
I. I NTRODUCTION
ERIES elastic actuators (SEAs) [1] have been extensively
used in robotics [2], [3] due to their impact resistance and
high-fidelity torque controllability. One drawback of SEAs is
the difficulty that arises when using a joint position controller
due to the presence of the elastic element in the drivetrain. To
remedy this problem the addition of dampers has been previously considered [4]–[6]. However, incorporating mechanical
dampers makes actuators bulky and increases their mechanical
complexity.
One way to avoid this complexity is to employ elastomers
instead of metal springs. Using a viscoelastic material instead
of combined spring-damper systems enables compactness [7]
and simplified drivetrains [8]. However, it is difficult to
achieve high bandwidth torque control due to the nonlinear
behavior of elastomers. To address this difficulty, [9] models
the force-displacement curve of elastomer using a “standard
linear model.” The estimated elastomer force is employed in
a closed-loop force controller. Unfortunately, the hysteresis in
the urethane elastomer destabilized the system at frequencies
above 2 Hz. In contrast our controllers achieve a bandwidth of
70 Hz. The study on [10] accomplishes reasonably good torque
control performance, but the range of torques is small to ensure
that the elastomer operates in the linear region; our design and
control methods described here achieve more than an order of
magnitude higher range of torques with high fidelity tracking.
To sufficiently address the nonlinear behavior of elastomers,
which severely reduce force control performance, we empiri-
S
cally analyze various viscoelastic materials with a custom-built
elastomer testbed. We measure each material’s linearity, creep,
compression set, and damping under preloaded conditions,
which is a study under-documented in the academic literature.
To achieve stable and accurate force control, we study various
feedback control schemes. In a previous work, we showed that
the active passivity obtained from motor velocity feedback
[11] and model-based control such as disturbance observer
(DOB) [12] play an essential role in achieving high-fidelity
force feedback control. Here, we analyze the phase margins
of various feedback controllers and empirically show their
operation in the new actuators. We verify the stability and
accuracy of our controllers by studying impedance control and
impact tests.
To test our new actuator, we have designed a two degree-offreedom (DOF), robotic testbed, shown in Fig. 5. It integrates
two of our new actuators, one in the ankle, and another in the
knee, while restricting motions to the sagittal plane. With the
foot bolted to the floor for initial tests, weight plates can be
loaded on the hip joint to serve as an end-effector payload.
We test operational space control to show stable and accurate
operational space impedance behaviors. We perform dynamic
motions with high payloads to showcase another important
aspect of our system, which is its cooling system aimed at
significantly increasing the power of the robot.
The torque density of electric motors is often limited by
sustainable core temperature. For this reason, the maximum
continuous torque achieved by these motors can be significantly enhanced using an effective cooling system. Our
previous study [13] analyzed the improvements on achievable
power based on thermal data of electric motors and proposed
metrics for design of cooling systems. Based on the metrics
from that study, we chose a 120 W Maxon EC-max 40, which
is expected to exert 3.59 times larger continuous torque when
using the proposed liquid cooling system. We demonstrate the
effectiveness of liquid cooling by exerting 860N continous
force during 5 min and 4500N peak force during 0.5s while
keeping the core temperatures below 115◦C, which is much
smaller than the maximum, 155◦C. We accurately track fast
motions of 2 Hz while carrying a 10 kg payload for endurance
tests. In addition we perform heavy lift tests with a payload
of 32.5 kg keeping the motor temperatures under 80◦C.
The main contribution of this paper is the introduction of a
new viscoelastic liquid cooled actuator and a thorough study
of its performance and its use on a multidof testbed. We
demonstrate that the use of liquid cooling and the elastomer
significantly improve joint position controllability and power
density over traditional SEAs. More concretely, we 1) design
2
a new actuator, dubbed the VLCA, 2) extensively study
viscoelastic materials, 3) extensively analyze torque feedback
controllers for VLCAs, and 4) examine the performance in a
multidof prototype.
II. BACKGROUND
Existing actuators can be characterized using four criteria:
power source (electric or hydraulic), cooling type (air or liquid), elasticity of the drivetrain (rigid or elastic), and drivetrain
type (direct, harmonic drive, ball screw, etc.) [14], [15]. One of
the most powerful and common solutions is the combination of
hydraulic, liquid-cooling, rigid and direct drive actuation. This
achieves high power-to-weight and torque-to-weight ratios,
joint position controllability, and shock tolerance. Existing
robots that use this type of actuators include Atlas, Spot, Big
Dog, and Wildcat of Boston Dynamics, BLEEX of Berkeley
[16], and HyQ of IIT [17]. However, hydraulics are less
energy efficient primarily because they require more energy
transformations [18]. Typically, a gasoline engine or electric
motor spins a pump, which compresses hydraulic fluid, which
is modulated by a hydraulic servo valve, which finally causes
a hydraulic piston to apply a force. Each stage in this process
incurs some efficiency loss, and the total losses can be very
significant.
The combination of electric, air-cooled, rigid, and harmonic
drive actuators are other widely used actuation types. Some
robots utilizing these actuator types include Asimo of Honda,
HRP2,3,4 of AIST [19], HUBO of KAIST [20], REEM-C of
PAL Robotics, JOHNNIE and LOLA of Tech. Univ. of Munich
[21], [22], CHIMP of CMU [23], Robosimian of NASA JPL
[24], and more. These actuators have precise position control
and high torque density. For example, LOLA’s theoretical
knee peak torque-density (129N m/kg) is comparable to ours
(107N m/kg), although they did not validate their number
experimentally and their max speed is roughly 2/3 of our max
speed [22]. Compared to us, low shock tolerance, low fidelity
force sensing, and low efficiency gearboxes are common
drawbacks of these type of actuators. According to Harmonic
Drive AGs catalog, the efficiency of harmonic drives may be
as poor as 25% and only increases above 80% when optimal
combinations of input shaft speed, ambient temperature, gear
ratio, and lubrication are present. Conversely, the efficiency of
our VLCA is consistently above 80% due to the use of a ball
screw mechanism.
[25] used liquid cooling for electric, rigid, harmonic
drive actuators to enhance continuous power-to-weight ratio.
The robots using this type of actuation include SCHAFT
and Jaxon [26]. These actuators share the advantages and
disadvantages of electric, rigid, harmonic drive actuators, but
have a significant increase of the continuous power output
and torque density. One of our studies [13], indicates a 2x
increase in sustained power output by retrofitting an electric
motor with liquid cooling. Other published results indicate a
6x increase in torque density through liquid cooling [14], [27],
though such performance required custom-designing a motor
specifically for liquid cooling. In our case we use an off-theshelf electric motor. In contrast with our design, these actuators
do not employ viscoelastic materials reducing their mechanical
robustness and high quality force sensing and control.
Although the increased power density achieved via liquid cooling amplifies an electric actuator’s power, the rigid
drivetrain is still vulnerable to external impacts. To increase
impact tolerance, many robots (e.g. Walkman and COMAN
of IIT [28], Valkyrie of NASA [29], MABEL and MARLO in
UMich [30], [31], and StarlETH of ETH [32]) adopt electric,
air-cooled, elastic, harmonic drive actuators. This type of
actuation provides high quality force sensing, force control,
impact resistance, and energy efficiency. However, precise
joint position control is difficult because of the elasticity in
the drivetrain and the coupled effect of force feedback control
and realtime latencies [33]. Low efficiency originating from
the harmonic drives is another drawback.
As an alternative to harmonic drives, ball screws are great
drives for mechanical power transmission. SAFFiR, THOR,
and ESCHER of Virginia Tech [34]–[36], M2V2 of IHMC
[37], Spring Flamingo of MIT [38], Hume of UT Austin [11],
and the X1 Mina exoskeleton of NASA [39] use electric,
air-cooled, elastic, ball-screw drives. These actuators show
energy efficiency, good power and force density, low noise
force sensing, high fidelity force controllability, and low
backlash. Compared to these actuators our design significantly
reduces the bulk of the actuator and increases its joint position
controllability. There are some other actuators that have special
features such as the electric actuators used in MIT’s cheetah
[40], which allow for shock resistance through a transparent
but backlash-prone drivetrain. However, the lack of passive
damping limits the joint position controllability of these type
of actuators compared to us.
III. VISCOELASTIC MATERIAL CHARACTERIZATION
The primary driver for using elastomers instead of metal
springs is to benefit from their intrinsic damping properties.
However, the mechanical properties of viscoelastic materials
can be difficult to predict, thus making the design of an
actuator based on these materials a challenging endeavor.
The most challenging aspect of incorporating elastomers
into the structural path of an actuator is in estimating or modeling their complex mechanical properties. Elastomers possess both hysteresis and strain-dependent stress, which result
in nonlinear force displacement characteristics. Additionally,
elastomers also exhibit time-varying stress-relaxation effects
when exposed to a constant load. The result of this effect is a
gradual reduction of restoration forces when operating under a
load. A third challenge when using elastomers in compression
is compression set. This phenomenon occurs when elastomers
are subjected to compressive loads over long periods of time.
An elastomer that has been compressed will exhibit a shorter
free-length than an uncompressed elastomer. Compression set
is a common failure mode for o-rings, and in our application,
it could lead to actuator backlash if not accounted for properly.
To address these various engineering challenges we designed experiments to empirically measure the following four
properties of our viscoelastic springs: 1) force versus displacement, 2) stress relaxation, 3) compression set, and 4)
3
E-stop
Load cell
Displacement
sensor
Elastic material
EtherCAT-based
embedded
control system
Belt drive
BLDC motor
2500
2000
1500
Force (N)
1000
Ball screw drive
(a) Viscoelastic material Testbed
0
-500
Spring steel
Viton 75A
Polyurethane 80A
Polyurethane 90A
EPDM 80A
Reinforced Silicon 70A
Buna- N 90A
Silicone 90A
-1000
Reinforced silicone 70A
Viton 75A
Buna-N 90A
Polyurethane 80A
Polyurethane 90A
Spring steel
500
EPDM 80A
-1500
-2000
0
1
2
3
4
5
-2500
6
-5
Compression set (%)
-4
-3
-2
-1
0
1
2
3
Dispacement (m)
5
10 -4
(c) Force vs Displacement curve
Phase (deg)
Force (N)
Magnitude (dB)
(b) Complession set
4
time (sec)
(d) Stess relaxation
Increasing system bandwidth
Spring steel
Viton 75A
Buna-N 90A
Polyurethane 90A
Frequency (Hz)
(e) Dynamic response of four elastomer
Fig. 1. Viscoelastic material test. (a) The elastomer testbed is designed and constructed to study various material properties of candidate viscoelastic
materials. (b) We measured each elastomers free length both before and after they were placed in the preloaded testbed. (c) A strong correlation between
material hardness and the materials stiffness can be observed. An exception to this correlation is the fabric reinforced silicone which we hypothesize had
increased stiffness due to the inelastic nature of its reinforcing fabric. Nonlinear effects such as hysteresis can also be observed in this plot. (d) We command
a rapid change in material displacements and then measured the materials force change versus time for 300 seconds. Note that the test of reinforced silicone
70A is omitted due to its excessive stiffness. (d) Although the bandwidths of the four responses are different, their damping ratios (signal peak value) are
relatively constant, which implies different damping.
frequency response, which will be used to characterize each
material’s effective viscous damping. We built a viscoelastic
material testbed, depicted in Fig. 1(a), to measure each of
these properties. We selected and tested the seven candidate
materials that are listed in Table I. The dimension of the tested
materials are fairly regular, with 46mm diameter and 27mm
thickness.
A. Compression set
Compression set is the reduction in length of an elastomer
after prolonged compression. The drawback of using materials with compression set in compliant actuation is that the
materials must be installed with larger amounts of preload
forces to avoid the material sliding out of place during usage.
To measure this property, we measured each elastomers free
length both before and after the elastomer was placed in
the preloaded testbed. The result of our compression set
experiments are summarized in Table I.
B. Force versus displacement
In the design of compliant actuation, it is essential to know
how much a spring will compress given an applied force.
This displacement determines the required sensitivity of a
spring-deflection sensor and also affects mechanical aspects
of the actuator such as usable actuator range of motion and
clearance to other components due to Poisson ratio expansion.
In this experiment, we identify the force versus displacement
curves for the various elastomer springs. Experimental data
for all eight springs as shown in Fig 1(b). Note that there is
a disagreement between our empirical measurements and the
analytic model relating stiffness to hardness, i.e. the Gent’s
relation shown in [41]. This mismatch arises because in our
experiments the materials are preloaded whereas the analytical
models assume unloaded materials.
4
Materials
Compression Linearity
set (%)
(R-square)
Linear stiffness
(N/mm)
Spring steel
0
0.996
860.8
Polyurethane 90A
2
0.992
8109
Preloaded elastic
modulus (N/mm)
Material damping
(N s/m)
Creep
(%)
Material
Cost ($)
0
0
-
112.5
16000
15.3
19.40
Reinforced silicone 70A
2.7
0.978
57570
798.7
242000
-
29.08
Buna-N 90A
2.8
0.975
11270
156.4
29000
25
51.47
Viton 75A
4
0.963
2430
33.7
9000
30.14
105.62
Polyurethane 80A
4.5
0.993
2266
31.4
4000
16.8
19.40
EPDM 80A
6.48
0.939
6499
90.2
16000
23.4
35.28
Silicone 90A
-
0.983
12460
172.9
37000
10.7
29.41
TABLE I
S UMMARY OF VISCOELASTIC MATERIALS
C. Stress relaxation
E. Selection of Polyurethane 90A
Stress-relaxation is an undesirable property in compliant
actuators for two reasons. First, the time-varying force degrades the quality of the compliant material as a force sensor.
When a material with significant stress-relaxation properties
is used, the only way to accurately estimate actuator force
based on deflection data is to model the effect and then pass
deflection data through this model to obtain a force estimate.
This model introduces complexity and more room for error.
The second reason stress-relaxation can be problematic is that
it can lead to the loss of contact forces in compression-based
spring structures.
The experiment for stress relaxation is conducted as follow:
1) enforce a desired displacement to a material, 2) record
the force data over time from the load cell, 3) subtract the
initially measured force from all of the force data. Empirically
measured stress-relaxation properties for each of the materials
are shown in Fig. 1 (c), which represents force offsets as
time goes under the same displacement enforced. Note that
each material shows different initial force due to the different
stiffness and each initial force data is subtracted in the plot.
A variety of other experiments were conducted to strengthen
our analysis and are summarized in Table I. Based on these
results, Polyurethane 90A appears to be a strong candidate
for viscoelastic actuators based on its high linearity (0.992),
low compression set (2%), low creep (15%), and reasonably
high damping (16000 N s/m). It is also the cheapest of the
materials and comes in the largest variety of hardnesses and
sizes.
D. Dynamic response
In regards to compliant actuation, the primary benefit of
using an elastomer spring is its viscous properties, which can
characterize the dynamic response of an actuator in series with
such a component. To perform this experiment, we generate
motor current to track an exponential chirp signal, testing
frequencies between 0.001Hz and 200Hz. Given the inputoutput relation of the system, we can fit a second order transfer
function to the experimental data to obtain an estimate of
the system’s viscous properties. However, this measure also
includes the viscoelastic testbed’s ballscrew drive train friction
(Fig. 1(a)). To quantify the elastomer spring damping independently of the damping of the testbed drive train, the latter
(8000 N s/m) was first characterized using a metal spring,
and then subtracted from subsequent tests of the elastomer
springs to obtain estimates for the viscous properties of the
elastomer materials. Fig. 1(d) shows the frequency response
results for current input and force output of three different
springs, while controlling the damping ratio. The elastomers
have higher stiffness than the metal spring, hence their natural
frequencies are higher.
IV. V ISCOELASTIC L IQUID C OOLED ACTUATION
The design objectives of the VLCA are 1) power density, 2)
efficiency, 3) impact tolerance, 4) joint position controllability,
and 5) force controllability. Compactness of actuators is also
one of the critical design parameters, which encourage us
to use elastomers instead of metal springs and mechanical
dampers. Our previous work [13] shows a significant improvement in motor current, torque, output power and system
efficiency for liquid cooled commercial off-the-shelf (COTS)
electric motors and studied several Maxon motors for comparison. As an extension of this previous work, in this new
study we studied COTS motors and their thermal behavior
models and selected the Maxon EC-max 40 brushless 120 W
(Fig. 2(e)), with a custom housing designed for the liquid
cooling system (Fig. 2(h)). The limit of continuous current
increases by a factor of 3.59 when liquid convection is used
for cooling the motor. Therefore, a continuous motor torque
of 0.701 N · m is theoretically achievable. Energetically, this
actuator is designed to achieve 366 W continuous power
and 1098W short-term power output with an 85% ball screw
efficiency (Fig. 2(b)) since short-term power is generally three
time larger than continuous power. With the total actuator
mass of 1.692 kg, this translates into a continuous power of
216W/kg and a short-term power of 650W/kg. The liquid
pump, radiator, and reservoir are products of Swiftech which
weight approximately 1kg. By combining convection liquid
cooling, high power brushless DC (BLDC) motors, and a
high-efficiency ball screw, we aim to surpass existing electric
actuation technologies with COTS motors in terms of power
density.
In terms of controls, a common problem with conventional
SEAs is their lack of physical damping at their mechanical
output. As a result, active damping must be provided from
torque produced by the motor [42]. However, the presence of
[Fig. 2 about here. Labeled components: (a) timing belt transmission, (b) ball screw drive, (c) load cell, (d) actuator output, (e) BLDC motor, (f) quadrature encoder, (g) temperature sensor, (h) liquid cooling jacket, (i) tube connector, (j) polyurethane elastomer, (k) compliance deflection sensor, (l) mechanical ground pivot, (m) quadrature encoder (deflection).]
Fig. 2. Viscoelastic Liquid Cooled Actuator. The labels are self-explanatory. In addition, the actuator contains five sensors: a load cell, a quadrature encoder
for the electric motor, a temperature sensor, and two elastomer deflection sensors. One of the elastomer deflection sensors is absolute and the other one is a
quadrature encoder. The quadrature encoder gives high quality velocity data of the elastomer deflection.
signal latency and derivative signal filtering limit the amount
by which this active damping can be increased, resulting
in SEA driven robots achieving only relatively low output
impedances [33] and thus operating with limited joint position
control accuracy and bandwidth. Our VLCA design incorporates damping directly into the compliant element itself,
reducing the requirements placed on active damping efforts
from the controller. The incorporation of passive damping aims
to increase the output impedance while retaining compliance
properties, resulting in higher joint position control bandwidth.
The material properties we took into consideration were
introduced in Section III. The retention of a compliant element
in the VLCA drive enables the measurement of actuator forces
based on deflection. The inclusion of a load cell (Fig. 2(c)) on
the actuator's output serves as a redundant force sensor and is
used to calibrate the force-displacement characteristics of the
viscoelastic element.
Mechanical power is transmitted when the motor turns a
ball nut via a low-loss timing belt and pulley (Fig. 2 (a)),
which causes a ball screw to apply a force to the actuator’s
output (Fig. 2(d)). The rigid assembly consisting of the motor,
ball screw, and ball nut connects in series to a compliant viscoelastic element (Fig. 2(j)), which connects to the mechanical
ground of the actuator (Fig. 2(k)). When the actuator applies a
force, the reaction force compresses the viscoelastic element.
The viscoelastic element enables the actuator to be more shock
tolerant than rigid actuators yet also maintain high output
impedance due to the inherent damping in the elastomer.
V. ACTUATOR FORCE FEEDBACK CONTROL
To demonstrate various impedance behaviors in operational
space, robots must have a stable force controller. Stable and
accurate operational space control (OSC) is not trivial to
achieve because of the bandwidth interference between the outer
position feedback control (OSC) and the inner torque feedback
control [11]. Since stable torque control is a critical component
of a successful OSC implementation, we extensively study
various force feedback controllers.
TABLE II
ACTUATOR PARAMETERS

    Jm (kg m^2)    3.8e-5
    bm (N m s)     2.0e-4
    mr (kg)        1.3
    br (N s/m)     2.0e4
    kr (N/m)       5.5e6
The first step in this analysis is to identify the actuator
dynamics. The transfer functions of the reaction force sensed
in the series elastic actuators (elastomer deflection) are well
explained in [43]. When the actuator output is fixed, the
transfer function from the motor current input to the elastomer
deflection is given by
Px = xr / im = η kτ Nm / [(Jm Nm² + mr) s² + (bm Nm² + br) s + kr],    (1)
where η, kτ, Nm, and im are the ball screw efficiency,
the torque constant of the motor, the speed reduction ratio from
the motor to the ball screw, and the current input to the
motor, respectively. The equation follows the nomenclature in
Fig. 3(a). We take η = 0.9 and kτ = 0.0448 N · m/A from data
sheets. The speed reduction ratio Nm is computed by dividing the pulley
speed reduction (2.111) by the lead of the ball screw (0.004 m):
Nm = 2π × 2.111/0.004 ≈ 3316.
However, we need to experimentally identify kr, br, Jm,
and bm. We infer kr by dividing the force measurement
from the load cell by the elastomer deflection. The other
parameters are estimated by comparing the frequency response
of the model with experimental data. The frequency response
test is done with the ankle actuator while prohibiting joint
movement with a load and an offset force command. The
results are presented in Fig. 3 with solid gray lines. Note that
the dotted gray lines are the estimated response from the transfer function (measured elastomer force / input motor force)
using the parameters of Table II. The estimated response and
experimental result match closely with one another, implying
that the parameters we found are close to the actual values.
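For reference, the model response of Eq. (1) with the Table II parameters can be reproduced with scipy. This is a minimal sketch rather than the identification code used for Fig. 3.

```python
# Sketch: frequency response of the model Px from Eq. (1), Table II parameters.
import numpy as np
from scipy import signal

eta, k_tau, N_m = 0.9, 0.0448, 3316.0          # data-sheet / computed values
J_m, b_m = 3.8e-5, 2.0e-4                      # motor inertia and damping
m_r, b_r, k_r = 1.3, 2.0e4, 5.5e6              # reflected mass, damping, stiffness

num = [eta * k_tau * N_m]
den = [J_m * N_m**2 + m_r, b_m * N_m**2 + b_r, k_r]
Px = signal.TransferFunction(num, den)

w = 2 * np.pi * np.logspace(0, 2.5, 500)       # ~1 Hz to ~316 Hz
w, mag_db, phase_deg = signal.bode(Px, w=w)    # compare against Fig. 3 data
```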
We also study the frequency response for different load
masses to understand how the dynamics changes as the joint
moves. When 10 kg is attached to the end of the link, the reflected
[Fig. 3 about here: frequency response of the VLCA; panel (a) shows the fixed-output case, panel (b) shows varying load masses (inf. mass, 2500 kg, 2000 kg, 1500 kg); magnitude (dB) and phase (deg) versus frequency (Hz).]
Fig. 3. Frequency response of VLCA. Gray solid lines are experimental data and the other lines are the estimated response of the model using empirically identified parameters.

mass to the actuator varies from 1500 kg to 2500 kg because
the length of the effective moment arm changes depending on
the joint position. In Fig. 3(b), the bode plots are presented and
the response is not significantly different from the fixed output
case. Therefore, we design and analyze the feedback controller
based on the fixed output dynamics.
For the force feedback controller, we first compare two
options, which we have used in our previous studies [11],
[12]:
1) Proportional (P) + Derivative (Df) control using a velocity signal
obtained by low-pass derivative filtering of the elastomer
deflection
2) Proportional (P) + Derivative (Dm) control using the motor velocity
signal measured by a quadrature encoder connected to
the motor axis
The second controller (PDm) has benefits over the first one
(PDf) with respect to sensor signal quality. The velocity of the
motor is directly measured by a quadrature encoder rather than
derived from low-pass filtered elastomer deflection data, which is relatively
noisy and lagged. In addition, Fig. 4 shows that the phase
margin of the second controller (47.6) is larger than that of the first
one (17.1).
To remove the force tracking error at low frequencies, we
consider two options: augmenting the controller either with
integral control or with a DOB on the PDm controller. To
compare the two controllers, we analyzed the phase margins
of all the mentioned controllers. First, we chose to focus on
the location where the sensor data returns in order to address
the time delay of digital controllers (Fig. 4 (a) and (c)). Next,
we have to compute the open-loop transfer function for each
closed loop system. For example, the PDf controller's closed
loop transfer function is

Fk = (kr Px / N) [Kp (Fr − e^{−Ts} Fk) + Fr − Kd,f Qd e^{−Ts} Fk],    (2)

where Fk, Fr, T, and Qd are the measured force from the
elastomer deflection, a reference force, a time delay, and a low-pass
derivative filter, respectively. For convenience, we use N
instead of the multiplication of the three terms, η kτ Nm.

[Fig. 4 about here: magnitude and phase plots of the open-loop, PDm, PIDm, PDf, and PDm + DOB systems, with phase margins annotated.]
Fig. 4. Stability analysis of controllers. Phase margins of each controller and the open-loop system are presented.

When
gathering the term with e^{−Ts} of Eq. (2), we obtain

Fk / Fr = [kr Px (Kp + 1)/N] / [1 + e^{−Ts} kr Px (Kp + Kd,f Qd)/N].    (3)
Then, the open-loop transfer function of the closed system
with the time delay is

P^open_PDf = kr Px (Kp + Kd,f Qd)/N.    (4)

We can apply the same method for the PIDm and PDm + DOB
controllers.
The transfer function of PIDm, which is presented in
Fig. 4(c), is

Fk = (kr Px / N) [(Fr − e^{−Ts} Fk)(Kp + Ki/s) + Fr − Kd,m e^{−Ts} (s Nm / kr) Fk].    (5)

Then it becomes

Fk / Fr = [kr Px (Kp + Ki/s + 1)/N] / [1 + e^{−Ts} Px (kr (Kp + Ki/s) + Kd,m s Nm)/N].    (6)
When we apply a DOB instead of integral control, we need
the inverse of the plant. In our case, the plant of the DOB is
PDm, which is similar to Eq. (6) except that Ki and e^{−Ts} are
omitted:

P_PDm (= Pc) = kr Px (Kp + 1) / [N + Px (kr Kp + Kd,m s Nm)].    (7)
[Fig. 5 about here: robotic testbed.]
Fig. 5. Robotic testbed. Our testbed consists of two VLCAs at the ankle
and the knee. The foot of the testbed is fixed on the ground. The linkages
are designed to vary the maximum peak torques and velocities depending on
posture. As the joint positions change, the ratios between the ball screw velocities
(L̇0,1) and the joint velocities (q̇0,1) also change because the effective lengths of the
moment arms vary. The linkages are designed to exert more torque when the
robot crouches, which is the posture in which the gravitational loads on the joints
are large.
The formulation of PDm including the DOB, which is shown
in Fig. 4(c), is

Fk = kr Px (Kp + 1)(Fd − e^{−Ts} Pc^{−1} Qτd Fk) / [(N + e^{−Ts} Px (kr Kp + Kd,m s Nm))(1 − Qτd)],    (8)

where Qτd is a second order low-pass filter. Then the transfer
function is

Fk / Fd = kr Px (Kp + 1) / [N (1 − Qτd) + e^{−Ts} (N Qτd + Px (kr Kp + Kd,m s Nm))].    (9)

The open-loop transfer function is

P^open_PDm+DOB = [N Qτd + Px (kr Kp + Kd,m s Nm)] / [N (1 − Qτd)].    (10)

The bode plots of P^open_PDf, P^open_PDm, P^open_PIDm, and P^open_PDm+DOB
are presented in Fig. 4(b). The gains (Kp, Kd,m, Ki) are the
same as the values used in the experiments presented in
Section VII-A, which are 4, 15, and 300, respectively. The PDf
controller uses Kd Nm/kr for Kd,f to normalize the derivative
gain. The cutoff frequency of the DOB is set to 15 Hz because
this is where PDm + DOB shows a magnitude trend similar
to the integral controller (PIDm). The results imply that the
PDm + DOB controller is more stable than PIDm with respect
to phase margin and maximum phase lag. This analysis is also
experimentally verified in Section VII-A.
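The phase-margin analysis above can be reproduced numerically by evaluating the open-loop response, e.g. Eq. (4) multiplied by the delay e^{−Ts}, over a frequency grid. A minimal sketch follows; the loop delay T and the derivative-filter cutoff are illustrative assumptions, not measured values from our setup.

```python
# Sketch: phase margin of the PDf open-loop (Eq. (4)) including the delay e^{-Ts}.
# The delay T and the filter Qd below are illustrative assumptions.
import numpy as np

eta, k_tau, N_m = 0.9, 0.0448, 3316.0
J_m, b_m, m_r, b_r, k_r = 3.8e-5, 2.0e-4, 1.3, 2.0e4, 5.5e6
N = eta * k_tau * N_m
Kp, Kdf = 4.0, 15.0 * N_m / k_r           # Kd,f = Kd * Nm / kr (see text)
T = 1e-3                                   # assumed 1 ms loop delay
w = 2 * np.pi * np.logspace(-1, 3, 20000)
s = 1j * w

Px = N / ((J_m * N_m**2 + m_r) * s**2 + (b_m * N_m**2 + b_r) * s + k_r)
wc = 2 * np.pi * 50.0                      # assumed derivative-filter cutoff
Qd = s / (s / wc + 1.0)                    # low-pass derivative filter
L = k_r * Px * (Kp + Kdf * Qd) / N * np.exp(-T * s)

i = np.argmin(np.abs(np.abs(L) - 1.0))     # gain crossover index
pm = 180.0 + np.degrees(np.angle(L[i]))    # phase margin (valid near crossover)
print(f"phase margin ~ {pm:.1f} deg at {w[i] / (2 * np.pi):.1f} Hz")
```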
VI. ROBOTIC TESTBED

We built a robotic testbed, shown in Fig. 5, consisting of two
VLCAs - one for the ankle (q0) and one for the knee (q1).
To demonstrate dynamic motion, we implemented an operational space
controller (OSC) incorporating the multi-body dynamics of
the robot. The design constrains motion to the sagittal plane,
the robot carries 10 kg, 23 kg, or 32.5 kg of weight at the hip,
and the foot is fixed on the ground. With this testbed,
we intended to demonstrate coordinated position control with
two VLCAs, the viability of liquid cooling on an articulated
platform, cartesian position control of a weighted end effector,
and verification of a linkage design.
The two joints each have a different linkage structure that
was carefully designed so that the moment arm accommodates
the expected torques and joint velocities as the robot posture
changes (Fig. 5). For example, each joint can exert a peak
torque of approximately 270 N m, and the maximum joint
velocity ranges between 7.5 rad/s and 20+ rad/s depending
on the mechanical advantage of the linkage along the configurations. The joints can exert a maximum continuous torque
of 91 N m at the point of highest mechanical advantage. This
posture dependent ratio of torque and velocity is a unique
benefit of prismatic actuators.
Given cartesian motion trajectories, which are 2nd order B-spline or sinusoidal functions, the centralized controller computes the torque commands from the operational space position
and velocity, which are updated from the sensed joint positions
and velocities. The OSC formulation that we use is

τ = A J_hip^{−1} (ẍdes + Kp e + Kd ė − J̇hip q̇) + b + g,    (11)

where A, b, and g represent the inertia, coriolis, and gravity
joint torques, respectively. ẍdes, e, and ė are the desired trajectory
acceleration, the position error, and the velocity error, respectively. q̇ ∈ R²
is the joint velocity of the robot and τ is the joint torque. Jhip
is the jacobian of the hip, which is a 2 × 2 square matrix and
assumed to be full-rank.
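A minimal sketch of the torque computation in Eq. (11) is given below; the model terms (A, b, g, Jhip, J̇hip) and gains are placeholders rather than the actual testbed dynamics.

```python
# Sketch of the OSC torque computation in Eq. (11) for a 2-DOF mechanism.
# A, b, g, J_hip, dJ_hip would come from the robot's dynamics model; the
# values below are placeholders, not the real testbed model.
import numpy as np

def osc_torque(A, b, g, J_hip, dJ_hip, qdot, xddot_des, e, edot,
               Kp=100.0, Kd=20.0):
    """tau = A Jhip^{-1} (xddot_des + Kp e + Kd edot - dJhip qdot) + b + g."""
    xddot_cmd = xddot_des + Kp * e + Kd * edot - dJ_hip @ qdot
    return A @ np.linalg.solve(J_hip, xddot_cmd) + b + g

A = np.diag([2.0, 1.5])                        # placeholder joint-space inertia
b = np.zeros(2)
g = np.array([30.0, 15.0])                     # placeholder gravity torques
J_hip = np.array([[0.3, 0.1], [0.0, 0.25]])    # assumed full-rank 2x2 jacobian
dJ_hip = np.zeros((2, 2))

tau = osc_torque(A, b, g, J_hip, dJ_hip,
                 qdot=np.zeros(2), xddot_des=np.zeros(2),
                 e=np.array([0.0, 0.01]), edot=np.zeros(2))
print(tau)                                     # joint torque command (N m)
```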
VII. RESULTS

We first conducted various single actuator tests to show
basic performance such as torque and joint position controllability, continuous and peak torque, and impact resistance. Subsequently, we focused on the performance of OSC using the
robotic testbed integrated with DOB based torque controllers
to demonstrate actuator efficiency and high power motions.

A. Single Actuator Tests
Fig. 6(a) shows the experimental results of our frequency
response testing as well as the estimated response based on
the transfer functions. We compare three types of controllers:
PDm, PIDm, and PDm + DOB. As we predicted in the
analysis of Section V, the PDm + DOB controller shows less
phase drop and overshoot than PIDm. The integral control
feedback gain used in the experiment is 300 and the cutoff
frequency of the DOB's Qτd filter is 60 Hz, which shows
similar error to the PIDm controller (Fig. 6(b)). Another test
presented in Fig. 6(c) also supports the stability and accuracy
of torque control. In the test, we command a ramp in joint
torque from 1 to 25N m in 0.1s. The sensed torque (blue solid
line) almost overlaps the commanded torque (red dashed line).
Fig. 6(d) is the result of a joint position control test designed
to show that VLCAs have better joint position controllability
than SEAs using springs. In the experiment, we use a joint
encoder for position control and a motor quadrature encoder
for velocity feedback. To compare the VLCA's performance
[Fig. 6 about here. Panels: (a) frequency responses of different controllers; (b) error magnitude and chirp test trajectories; (c) torque fast response; (d) position fast response; (e) continuous force and core temperature; (f) peak force.]
Fig. 6. Torque Feedback Control Test. (a) Experimental data and estimated responses based on the transfer functions are presented. The estimated response of
the PD controller is identical to that of PD + DOB, since the DOB theoretically does not change the transfer function. The plots show that PD + DOB gives better performance
in terms of less overshoot and a smaller phase drop near the natural frequency. (b) We choose an integral controller feedback gain that gives accuracy similar
to PD + DOB's. The left plot is the error magnitude of the three controllers; the PD controller has a larger error than the other two controllers in the low frequency region. The
right plot shows torque trajectories in the time domain.
with that of spring-based SEAs, we present simulation results
for a spring-based SEA on the same plot as the experimental
result for the VLCA. The green dashed line is the simulated
step response of our actuator and the yellow dotted line is
the result of the simulation model using the same parameters
except the spring stiffness and damping. The spring stiffness
was selected to be 11% of the elastomer's, based on the
results of our tests in Section III, and the damping for the
spring case was set to 8000 N s/m, which only includes the
drivetrain friction. The results show a notable improvement in
joint position control when using an elastomer instead of a
steel spring.
Fig. 6(e) shows the continuous force and the motor core temperature trend with and without liquid cooling. The observed
continuous force is 860 N and the motor core temperature settles
at 115°C with liquid cooling. Fig. 6(f) is the result of the short-term torque test. In the experiment, we fix the output of the
actuator and command a 31 A current for 0.5 s. The observed
force measured by the load cell (Fig. 2(c)) is 4500 N, which is
a little smaller than the theoretically expected value, 5900 N.
Considering that the estimated core temperature surpassed
107°C (< 155°C limit), we expect that the theoretical value
is reasonable. Thus, we conclude that the maximum force
density of our actuator is larger than 2700 N/kg and potentially
3500 N/kg.
Fig. 7 shows loadcell and elastomer force data from the
impact tests. In the tests, we hit the loadcell connected to the
ball screw (Fig. 2(c)) with a hammer falling from a constant
height while fixing the actuator in two different places to
[Fig. 7 about here: load cell and rubber-deflection force traces versus time (ms) for the solid-holding and with-elastomer cases, with a 95% interval band.]
Fig. 7. Impact test. 83 trials are plotted and estimated with a gaussian process.
We can see the deflections of the elastomer, which imply that the elastic
element absorbs the external impact force.
compare the rigid actuator response to the viscoelastic actuator response. In
the rigid scenario, the outer case of the ball nut (a blue part in Fig. 2)
is fixed to exclude the elastomer from the external impact
force path. In the second case, we fixed the ground pin of the
actuator, which is depicted as a gray part in Fig. 2(l), to see
how the elastomers react to the impact.
The impact experiment is challenging because the number
of data points we can obtain is very small with a 1ms update
rate. To overcome the lack of data points, we estimate the
mean and variance of 83 trials by gaussian process regression.
The results presented in Fig. 7 imply that there is no significant
difference in the forces measured by the load cell in the two
cases, which is predictable because the elastic element is
placed behind the drivetrain. However, the elastomer does
play a significant role in absorbing energy from the impact,
which is evident from the large elastomer deflection in the second
case. Thus, the presence of the elastic element mitigates the
propagation of an impulse to the link where the actuator
grounds.

[Fig. 8 about here. Panels: (a) impedance control (stiff in the vertical direction, compliant in the horizontal direction); (b) operational space impact test; (c) fast up and down motion (1.7 Hz).]
Fig. 8. Operational Space Impedance Control Test. (a) The robot
demonstrates different impedances: stiff in the vertical direction and compliant
in the horizontal direction. The high tracking performance of the force feedback
control results in overlapping commanded and sensed torques. (b) To
show stability, we hit the weight with a hammer while operating the
impedance controller. Even under the impact, the force control shows stable and
accurate tracking. (c) The robot demonstrates a 1.7 Hz up and down motion
while carrying a 10 kg weight at the hip, and shows a position error of less than
2.5 cm.
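For reference, the pooled estimate used in Fig. 7 can be computed along the following lines. This is a sketch with scikit-learn and synthetic stand-in data, not our measurement pipeline.

```python
# Sketch: Gaussian process estimate of the mean impact-force trace from
# repeated trials, as in Fig. 7. `t` and `force_trials` are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(-2e-3, 14e-3, 17)        # 1 ms update rate -> few samples
force_trials = [200 * np.exp(-((t - 2e-3) / 2e-3) ** 2)
                + 30 * rng.standard_normal(t.size)
                for _ in range(83)]      # 83 synthetic trials

# Stack all trials so the GP sees every (time, force) sample
T = np.tile(t, 83).reshape(-1, 1)
F = np.concatenate(force_trials)

kernel = RBF(length_scale=2e-3) + WhiteKernel(noise_level=30.0**2)
gp = GaussianProcessRegressor(kernel=kernel).fit(T, F)
mean, std = gp.predict(t.reshape(-1, 1), return_std=True)
band = (mean - 1.96 * std, mean + 1.96 * std)   # ~95% interval as in Fig. 7
```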
B. Operational Space Impedance Control
Fig. 8 shows our OSC experimental tests (Section VI)
carrying a 10kg weight. In the first test presented in Fig. 8(a),
the commanded behavior is to be compliant in the horizontal
direction (x) and to be stiff in the vertical direction (y).
When pushing the hip with a sponge in the x direction, the
robot smoothly moves back to comply with the push, but it
strongly resists the given vertical disturbance to maintain the
commanded height. To show the stability of our controller, we
also test the response to impacts by hitting the weight with a
hammer (Fig. 8(b)). Even when there are sudden disturbances,
the torque controllers rapidly respond to maintain good torque
tracking performance, as shown in Fig. 6(d).

[Fig. 9 about here: power flow of the ankle actuator.]
Fig. 9. Efficiency analysis of the ankle actuator. The efficiency of the mechanical
system using electrical power has 3 steps from the power supply to the robot joints.
The graph shows the ratio of the mechanical power of the ankle joint to the
motor power and the ratio of the joint power to the power supply's input power.

Fig. 8(c) shows the tracking performance of our system
while following a fast vertical hip trajectory. While traveling 0.3 m at a frequency of 1.7 Hz, the hip position errors are
bounded by 0.025 m. This result demonstrates that our system
is capable of stable and accurate OSC, which is challenging
because of the bandwidth conflict induced by its cascaded
structure.
C. Efficiency Analysis
Fig. 9 explains the power flow from the power supply to the
robot joint. The input current (Ib) and voltage (Vb) are measured
in the micro-controllers and their product yields
the input power from the power supply. θ̇m is measured by the
quadrature encoder connected to the motor's axis (Fig. 2(f))
and τm is computed from kτ im, with im measured in the
micro-controller. The joint velocity is obtained by low-pass derivative
filtering of the joint positions measured at the absolute joint encoders. The
joint torque (τk) is computed by projecting the load cell data
across the linkage's effective moment arm.
In this test, the robot lifts a 23 kg load using five different
durations to observe efficiency over a range of speeds
and torques. The results are presented in Fig. 9 with a
description of the three different power measures. The sensed
torque data measured by the load cell is noisy; therefore, we
compute the average of the drivetrain efficiency for a clearer
comparison. The averages are the integrals of efficiency divided by the time durations. Here we only integrate efficiency
while the mechanical power is positive, to prevent confounding
our results by incorporating the work done by gravity.
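The averaging just described amounts to the following computation; this is a sketch with synthetic power traces, not the experimental data.

```python
# Sketch: average drivetrain efficiency over the interval where mechanical
# power is positive, as described above. Signal arrays are placeholders.
import numpy as np

def average_efficiency(t, p_joint, p_motor, eps=1e-6):
    """Integrate p_joint/p_motor only where joint mechanical power > 0."""
    mask = p_joint > 0
    eff = np.zeros_like(p_joint)
    eff[mask] = p_joint[mask] / np.maximum(p_motor[mask], eps)
    dt = np.gradient(t)
    duration = np.sum(dt[mask])
    return np.sum(eff[mask] * dt[mask]) / duration

t = np.linspace(0, 2, 2001)                     # hypothetical 2 s lift
p_motor = 100 + 50 * np.sin(2 * np.pi * t)      # synthetic power traces (W)
p_joint = 0.89 * p_motor                        # ~0.89 drivetrain efficiency
print(average_efficiency(t, p_joint, p_motor))  # ~0.89
```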
[Fig. 10 about here. Panels: (a) 2 Hz up and down motion - joint torques, mechanical power, and joint positions of the ankle and knee; (b) heavy weight lift - actuator forces, motor temperatures, and total power.]
Fig. 10. High power motion experiment. (a) Joint position data from the joint encoder and motor encoder are shown. In this experiment, the maximum observed
torque of the ankle joint is 250 N m and the maximum observed mechanical power of the knee joint is 310 W. (b) The robot lifts a 32.5 kg load by 0.3 m
in 0.4 s. There is still a safety margin with respect to the limits of 5900 N and 155°C.
The experimental results show that the drivetrain efficiency
is approximately 0.89, which means that we lose only a small
amount of power in the drivetrain and most of the torque
from the motor is delivered to the joint. This high efficiency
indicates only minor drivetrain friction, which is beneficial for
dynamics-based motion controllers.
D. High Power Motion Experiment
To demonstrate high power motions such as fast vertical
trajectories and heavy payload lifts, we use the motor position
control mode, which uses the quadrature encoders attached
directly to the motor for feedback. Fig. 10(a) presents the
results of a test comprised of a 2 Hz vertical motion with 0.32
m of travel while carrying a load of 10 kg at the hip.
With respect to mechanical power, the knee joint repeatedly
exerts 305 W, which is close to the predicted continuous power
(360 W). Although the limited range of motion makes it hard
to demonstrate continuous mechanical power, these results
convincingly support our claim of enhanced continuous power
enabled through liquid cooling.
Fig. 10(b) presents another test in which the robot lifts a
32.5 kg weight. We can see that the robot operates in the safe
region (≤ 5900 N and ≤ 155°C) while demonstrating high
power motion.
VIII. CONCLUDING REMARKS

Overall, our main contribution has been the design and
extensive testing of a new viscoelastic liquid cooled actuator
for robotics.
One of the tests addressed impedance control in the
operational space instead of joint impedance control. It is often
the case that humanoid robots require impedance control in
the operational space. For instance, controlling the operational
space impedance can enable improved locomotion behaviors
such as running. Our controllers demonstrate that we can
control the impedance in the Cartesian operational space as
a potential functionality for future robotic systems. The use
of liquid cooling has allowed us to sustain high output torque
for prolonged times, as shown in the experiments of Fig.
6(e). As we can see, when liquid cooling is turned off the
temperature rises quickly above safety limits, whereas when
the cooling is turned on we can sustain large payload torques
for long periods of time. The use of elastomers versus steel
springs has demonstrated a clear improvement in joint position
performance, as shown in Fig. 6(d). This capability is important
to achieve a large range of output joint or Cartesian space
impedances.
In the future we will explore further reducing the size
of our viscoelastic liquid cooled actuators. Maintaining the
current compact design structure, we can still reduce the bulk
of the actuator by another significant percentage by exploring
new types of bearings, ball nut sizes and piston bearings at the
front end of the actuator. We will also explore using a different
material for the liquid cooling actuator jacket. The current
polyoxymethylene material is easily breakable and develops
cracks due to the vibrations and impacts typical of this kind of robotic
application. In the future we will switch, for instance, to sealed metal
chambers. Further in the future we will consider
designing our own motor stators and rotors for improved performance. We expect this kind of actuator to make its way
into full humanoid robots and high performance exoskeleton
devices, and we look forward to participating in such interesting
future studies.

ACKNOWLEDGMENT

The authors would like to thank the members of the
Human Centered Robotics Laboratory at The University of
Texas at Austin for their help and support. This work was
supported by the Office of Naval Research, ONR Grant
[grant #N000141512507] and NASA Johnson Space Center,
NSF/NASA NRI Grant [grant #NNX12AM03G].
REFERENCES
[1] G. A. Pratt and M. M. Williamson, "Series elastic actuators," in Intelligent Robots and Systems 95, 'Human Robot Interaction and Cooperative
Robots', Proceedings, 1995 IEEE/RSJ International Conference on,
1995, pp. 399–406.
[2] B. Henze, M. A. Roa, and C. Ott, “Passivity-based whole-body balancing
for torque-controlled humanoid robots in multi-contact scenarios,” The
International Journal of Robotics Research, p. 0278364916653815, Jul.
2016.
[3] N. Paine, J. S. Mehling, and J. Holley, "Actuator Control for the NASA-JSC Valkyrie Humanoid Robot: A Decoupled Dynamics Approach for
Torque Control of Series Elastic Robots," Journal of Field Robotics,
2015.
[4] J. Hurst, A. Rizzi, and D. Hobbelen, “Series elastic actuation: Potential
and pitfalls,” in International Conference on Climbing and Walking
Robots, 2004.
[5] N. Kashiri, G. A. Medrano-Cerda, N. G. Tsagarakis, M. Laffranchi, and
D. Caldwell, “Damping control of variable damping compliant actuators,” in IEEE International Conference on Robotics and Automation
(ICRA). IEEE, 2015, pp. 850–856.
[6] C.-M. Chew, G.-S. Hong, and W. Zhou, “Series damper actuator: a
novel force/torque control actuator,” in 2004 4th IEEE/RAS International
Conference on Humanoid Robots. IEEE, pp. 533–546.
[7] D. Rollinson, Y. Bilgen, B. Brown, F. Enner, S. Ford, C. Layton,
J. Rembisz, M. Schwerin, A. Willig, P. Velagapudi, and H. Choset,
“Design and architecture of a series elastic snake robot,” in 2014
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS 2014). IEEE, 2014, pp. 4630–4636.
[8] K. Abe, T. Suga, and Y. Fujimoto, “Control of a biped robot driven by
elastomer-based series elastic actuator,” in 2012 12th IEEE International
Workshop on Advanced Motion Control (AMC). IEEE, 2012, pp. 1–6.
[9] J. Austin, A. Schepelmann, and H. Geyer, “Control and evaluation of
series elastic actuators with nonlinear rubber springs,” in 2015 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS).
IEEE, 2015, pp. 6563–6568.
[10] D. Rollinson, S. Ford, B. Brown, and H. Choset, “Design and Modeling
of a Series Elastic Element for Snake Robots,” ASME Proceedings
of the Dynamic Systems and Control Conference, pp. V001T08A002–
V001T08A002, Oct. 2013.
[11] D. Kim, Y. Zhao, G. Thomas, B. R. Fernandez, and L. Sentis, “Stabilizing Series-Elastic Point-Foot Bipeds Using Whole-Body Operational
Space Control,” Transactions on Robotics, vol. 32, no. 6, pp. 1362–1379,
2016.
[12] N. Paine, S. Oh, and L. Sentis, “Design and control considerations for
high-performance series elastic actuators,” IEEE/ASME Transactions on
Mechatronics, vol. 19, no. 3, pp. 1080–1091, 2014.
[13] N. Paine and L. Sentis, “Design and Comparative Analysis of a
Retrofitted Liquid Cooling System for High-Power Actuators,” Actuators, vol. 4, no. 3, pp. 182–202, 2015.
[14] I. W. Hunter, J. M. Hollerbach, and J. Ballantyne, “A comparative
analysis of actuator technologies for robotics,” Robotics Review, vol. 2,
pp. 299–342, 1991.
[15] N. A. Paine, “High-performance Series Elastic Actuation,” Ph.D. dissertation, Austin, 2014.
[16] A. B. Zoss, H. Kazerooni, and A. Chu, “Biomechanical design of
the berkeley lower extremity exoskeleton (bleex),” Transactions On
Mechatronics, vol. 11, no. 2, pp. 128–138, 2006.
[17] C. Semini, “Hyq-design and development of a hydraulically actuated
quadruped robot,” Doctor of Philosophy (Ph. D.), University of Genoa,
Italy, 2010.
[18] P. A. Bhounsule, J. Cortell, A. Grewal, B. Hendriksen, J. D. Karssen,
C. Paul, and A. Ruina, “Low-bandwidth reflex-based control for lower
power walking: 65 km on a single battery charge,” The International
Journal of Robotics Research, vol. 33, no. 10, pp. 1305–1321, 2014.
[19] N. Kanehira, T. Kawasaki, S. Ohta, T. Ismumi, T. Kawada, F. Kanehiro,
S. Kajita, and K. Kaneko, “Design and experiments of advanced leg
module (hrp-2l) for humanoid robot (hrp-2) development,” in International Conference on Intelligent Robots and Systems (IROS), vol. 3.
IEEE, 2002, pp. 2455–2460.
[20] I.-W. Park, J.-Y. Kim, J. Lee, and J.-H. Oh, “Mechanical design of
humanoid robot platform khr-3 (kaist humanoid robot 3: Hubo),” in 5th
International Conference on Humanoid Robots. IEEE, 2005, pp. 321–
326.
[21] M. Gienger, K. Loffler, and F. Pfeiffer, “Towards the design of a biped
jogging robot,” in International Conference on Robotics and Automation,
vol. 4. IEEE, 2001, pp. 4140–4145.
[22] S. Lohmeier, T. Buschmann, H. Ulbrich, and F. Pfeiffer, “Modular joint
design for performance enhanced humanoid robot lola,” in International
Conference on Robotics and Automation. IEEE, 2006, pp. 88–93.
[23] A. Stentz, H. Herman, A. Kelly, E. Meyhofer, G. C. Haynes, D. Stager,
B. Zajac, J. A. Bagnell, J. Brindza, C. Dellin et al., “Chimp, the cmu
highly intelligent mobile platform,” Journal of Field Robotics, vol. 32,
no. 2, pp. 209–228, 2015.
[24] S. Karumanchi, K. Edelberg, I. Baldwin, J. Nash, J. Reid, C. Bergh,
J. Leichty, K. Carpenter, M. Shekels, M. Gildner et al., "Team robosimian: Semi-autonomous mobile manipulation at the 2015 darpa
robotics challenge finals," Journal of Field Robotics, vol. 34, no. 2,
pp. 305–332, 2017.
[25] J. Urata, Y. Nakanishi, K. Okada, and M. Inaba, "Design of high torque
and high speed leg module for high power humanoid," in International
Conference on Intelligent Robots and Systems. IEEE, 2010, pp. 4497–4502.
[26] K. Kojima, T. Karasawa, T. Kozuki, E. Kuroiwa, S. Yukizaki, S. Iwaishi,
T. Ishikawa, R. Koyama, S. Noda, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, "Development of life-sized high-power
humanoid robot JAXON for real-world use," in 15th International
Conference on Humanoid Robots. IEEE, 2015, pp. 838–843.
[27] F. Aghili, J. M. Hollerbach, and M. Buehler, "A modular and high-precision motion control system with an integrated motor," Transactions
on Mechatronics, vol. 12, no. 3, pp. 317–329, 2007.
[28] N. G. Tsagarakis, S. Morfey, G. M. Cerda, L. Zhibin, and D. G.
Caldwell, "Compliant humanoid coman: Optimal joint stiffness tuning
for modal frequency control," in International Conference on Robotics
and Automation (ICRA). IEEE, 2013, pp. 673–678.
[29] N. A. Radford, P. Strawser, K. Hambuchen, J. S. Mehling, W. K.
Verdeyen, A. S. Donnan, J. Holley, J. Sanchez, V. Nguyen, L. Bridgwater
et al., "Valkyrie: Nasa's first bipedal humanoid robot," Journal of Field
Robotics, vol. 32, no. 3, pp. 397–419, 2015.
[30] J. W. Grizzle, J. Hurst, B. Morris, H.-W. Park, and K. Sreenath,
"Mabel, a new robotic bipedal walker and runner," in American Control
Conference (ACC). IEEE, 2009, pp. 2030–2036.
[31] A. Ramezani, "Feedback Control Design for MARLO, a 3D-Bipedal
Robot," Ph.D. dissertation, 2013.
[32] M. Hutter, C. Gehring, M. Bloesch, M. A. Hoepflinger, C. D. Remy,
and R. Siegwart, "Starleth: A compliant quadrupedal robot for fast,
efficient, and versatile locomotion," in Adaptive Mobile Robotics. World
Scientific, 2012, pp. 483–490.
[33] Y. Zhao, N. Paine, K. Kim, and L. Sentis, "Stability and Performance Limits of Latency-Prone Distributed Feedback Controllers," IEEE
Transactions on Industrial Electronics, vol. 62, no. 11, pp. 7151–7162,
November 2015.
[34] D. Lahr, V. Orekhov, B. Lee, and D. Hong, "Early developments of a
parallelly actuated humanoid, saffir," in ASME 2013 international design
engineering technical conferences and computers and information in
engineering conference, 2013, pp. V06BT07A054–V06BT07A054.
[35] B. Lee, C. Knabe, V. Orekhov, and D. Hong, "Design of a Human-Like Range of Motion Hip Joint for Humanoid Robots," in International
Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical
Engineers, Aug. 2014.
[36] C. Knabe, J. Seminatore, J. Webb, M. Hopkins, T. Furukawa,
A. Leonessa, and B. Lattimer, "Design of a series elastic humanoid
for the darpa robotics challenge," in 15th International Conference on
Humanoid Robots (Humanoids). IEEE, 2015, pp. 738–743.
[37] J. Pratt and B. Krupp, "Design of a bipedal walking robot," in Proc. of
SPIE, vol. 6962, 2008, pp. 69621F1–69621F13.
[38] J. E. Pratt, "Exploiting inherent robustness and natural dynamics in the
control of bipedal walking robots," Massachusetts Inst. of Tech. Dept.
of Electr. Eng. and Comp. Science, Tech. Rep., 2000.
[39] R. Rea, C. Beck, R. Rovekamp, P. Neuhaus, and M. Diftler, "X1: A
robotic exoskeleton for in-space countermeasures and dynamometry," in
AIAA SPACE 2013 Conference and Exposition, 2013, p. 5510.
[40] S. Seok, A. Wang, M. Y. M. Chuah, D. J. Hyun, J. Lee, D. M. Otten,
J. H. Lang, and S. Kim, "Design principles for energy-efficient legged
locomotion and implementation on the mit cheetah robot," Transactions
on Mechatronics, vol. 20, no. 3, pp. 1117–1129, 2015.
[41] E. Pucci and G. Saccomandi, "A note on the Gent model for rubber-like materials," Rubber chemistry and technology, vol. 75, no. 5, pp.
839–852, 2002.
[42] M. Hutter, C. D. Remy, M. A. Hoepflinger, and R. Siegwart, "High
compliant series elastic actuation for the robotic leg scarleth," in Proc.
of the International Conference on Climbing and Walking Robots
(CLAWAR), no. EPFL-CONF-175826, 2011.
[43] Y. Park, S. Oh, and H. Zoe, "Dynamic analysis of Reaction Force sensing
Series Elastic Actuator as Unlumped two mass system," in IECON -
42nd Annual Conference of the IEEE Industrial Electronics Society.
IEEE, 2016, pp. 5784–5789.
arXiv:1703.03372v3 [] 15 Mar 2017
LesionSeg: Semantic segmentation of skin lesions
using Deep Convolutional Neural Network
Dhanesh Ramachandram
School of Engineering
University of Guelph
Guelph, ON N1G 2W1, Canada
[email protected]
Terrance DeVries
School of Engineering
University of Guelph
Guelph, ON N1G 2W1, Canada
[email protected]
Executive Summary
We present a method for skin lesion segmentation for the ISIC 2017 Skin Lesion Segmentation
Challenge. Our approach is based on a Fully Convolutional Neural Network architecture which is
trained end to end, from scratch, on a small dataset. Our semantic segmentation architecture utilizes
several recent innovations in deep learning particularly in the combined use of (i) atrous convolutions
to increase the effective field of view of the network’s receptive field without increasing the number
of parameters, (ii) network-in-network 1 × 1 convolution layers to increase network capacity and (iii)
state-of-the-art super-resolution upsampling of predictions using subpixel CNN layers. We achieved an
IOU score of 0.642 on the validation set provided by the organizers.
1
Background
One of the fundamental and challenging tasks in digital image analysis is segmentation, which is the
process of assigning pixel-wise labels to regions in an image that share some high-level semantics,
hence the term “semantic segmentation”. In skin lesion segmentation, the goal is to assign pixel-wise
labels to regions in dermoscopy images that represents skin lesions, such as melanoma, seborrhoeic
keratosis or benign nevus.
Skin lesion segmentation is challenging due to a variety of factors, such as variations in skin tone,
uneven illumination, partial obstruction due to the presence of hair, low contrast between lesion and
surrounding skin, and the presence of freckles or gauze in the image frame, which may be mistaken
for lesions. A successful lesion segmentation technique should be robust enough to accommodate
this variability.
Skin lesion segmentation is a widely researched topic in medical image analysis[1–3]. Until recently,
most skin lesion segmentation approaches were based on hand-crafted algorithms[4–6]. Such
approaches require carefully designed pre-processing and post-processing steps such as hair
removal, edge-preserving smoothing and morphological operations. The robustness of such
approaches can be somewhat limited, however, as each new scenario may require custom tuning.
An alternative approach to manually crafting segmentation algorithms is to instead leverage machine
learning techniques to learn a model capable of successfully dealing with the numerous factors of
variation. Specifically applicable to this application are artificial neural networks, which have made
an impressive resurgence in recent years. The active research area, now known as deep learning, is
currently enjoying interest and support not only from the academic community, but also from industry.
The advances in hardware performance and low costs involved have made it viable to analyse very
large data sizes using very deep neural network architectures in a reasonable amount of time. Deep
learning-based techniques have resulted in state-of-the-art performance for many practical problems,
especially in areas involving high-dimensional unstructured data such as in computer vision, speech
and natural language processing. Medical imaging problems have also been given much attention by
deep learning researchers[7] and has seen tremendous success for the related skin lesion classification
problem[8]. The notion of being able to train in an end-to end manner, without requiring any manual
feature engineering or complicated hand-crafted algorithms, is very attractive indeed.
2
Semantic Segmentation
The deep learning approach to image segmentation is known as semantic segmentation. In contrast to
low-level image segmentation, which operate purely on local image characteristics such as colour,
shape and texture, deep semantic segmentation algorithms are trained using thousands of examples
to recognize and delineate regions in an image corresponding to some high-level semantics. A
convolutional neural network can be adapted to perform semantic segmentation by replacing
the top layer1 of a classification network with a convolutional layer. As fully convolutional neural networks (FCNNs) use downsampling,
implemented via max-pooling or strided convolutions, to capture context, these architectures often
employ a single or several progressive upsampling layers which are used to upscale lower resolution
pixel-wise predictions to match the dimensions of the input image. Ground truth segmentation masks
provide pixel-wise labels for the segmentation task, now cast as a pixel-wise classification problem.
The fully convolutional neural network architecture was first proposed by Long et al.[9]. Subsequently,
a number of similar architectures have been reported in the literature[10, 11].
For the ISIC 2017 challenge, the skin lesion segmentation task is a binary segmentation task - the
goal is to produce accurate segmentation of various skin lesions, benign and malignant, against a
variety of background which may consist of skin, colored markers, or dark-vignetted region produced
by the dermoscope. Fig. 1 shows an example lesion and its corresponding binary mask. The training
dataset consists of 2000 dermoscopy images and corresponding binary masks. The images cover
3 types of skin lesions: nevus, seborrhoeic keratosis and melanoma; the latter lesion being malignant.
The images are also of various dimensions.
Figure 1: Left: An example of a skin lesion image with a blue marker in the background. Right: The
corresponding ground truth binary segmentation mask.
3
Our Approach
Our approach for lesion segmentation is primarily a fully convolutional neural network, trained from
scratch, in an end-to-end manner.
3.1
Architecture
The inputs to the network are images resized to 448 × 448. The first convolution layer uses a stride
of 2 to downsample the image by a factor of 2. This is followed by a series of 2D convolution layers
interspersed with 1 × 1 convolutions. We also apply batch normalization to the convolutional
layers and use ReLU activations. The 1 × 1 convolution layers add capacity to the network with
only a modest increase in the number of parameters. The last convolution layers are dilated convolutional layers, or
atrous convolutions[12], with a rate of 2. These layers effectively enlarge the field of view of
the filters to incorporate larger context without increasing the number of parameters or the amount of
computation. Before upsampling is performed, the final subpixel convolution layer is added. This
layer has a number of filters equal to the upsampling factor and is applied with a stride of 1 and without
any non-linearities. The upsampling layer is a subpixel convolutional layer introduced by [13], which
produced state-of-the-art super-resolution reconstruction accuracy superior to the commonly used bilinear
upsampling used by [9].

1 typically the softmax layer

Layer      Type                  Filter Size   Stride/Rate   Padding   Non-linearity
Conv1-1    2D Convolution        5x5x64        2             Mirror    ReLU
Conv1-2    2D Convolution        3x3x96        1             Mirror    ReLU
Conv1-3    2D Convolution        1x1x96        1             Same      ReLU
Conv2-1    2D Convolution        3x3x128       2             Mirror    ReLU
Conv2-2    2D Convolution        3x3x256       1             Mirror    ReLU
Conv2-3    2D Convolution        1x1x256       1             Same      ReLU
Conv3-1    Atrous Convolution    3x3x256       2             Mirror    ReLU
Conv3-2    Atrous Convolution    3x3x256       2             Mirror    ReLU
Conv3-3    Atrous Convolution    3x3x128       2             Mirror    None
Subpixel   Subpixel CNN Layer    3x3x32        1             Mirror    None

Table 1: LesionSeg Architecture
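For illustration, the following is a compact PyTorch sketch of an architecture in the spirit of Table 1. It is not the authors' implementation: zero padding stands in for mirror padding, PixelShuffle stands in for the subpixel layer, and an overall downsampling factor of 4 is assumed.

```python
# Sketch of a LesionSeg-style network (Table 1), assuming PyTorch.
import torch
import torch.nn as nn

def block(cin, cout, k, stride=1, dilation=1, relu=True):
    layers = [nn.Conv2d(cin, cout, k, stride=stride, dilation=dilation,
                        padding=dilation * (k // 2)),
              nn.BatchNorm2d(cout)]
    if relu:
        layers.append(nn.ReLU(inplace=True))
    return layers

class LesionSeg(nn.Module):
    def __init__(self, upscale=4):                # assumed net downsampling of 4
        super().__init__()
        self.features = nn.Sequential(
            *block(3, 64, 5, stride=2),           # Conv1-1
            *block(64, 96, 3), *block(96, 96, 1),
            *block(96, 128, 3, stride=2),         # Conv2-1
            *block(128, 256, 3), *block(256, 256, 1),
            *block(256, 256, 3, dilation=2),      # atrous Conv3-1
            *block(256, 256, 3, dilation=2),      # atrous Conv3-2
            *block(256, 128, 3, dilation=2, relu=False),  # atrous Conv3-3
            # subpixel head: upscale**2 channels -> one full-resolution logit map
            nn.Conv2d(128, upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale),              # subpixel upsampling [13]
        )

    def forward(self, x):
        return self.features(x)                    # per-pixel lesion logits

net = LesionSeg()
out = net(torch.randn(1, 3, 448, 448))
print(out.shape)                                   # torch.Size([1, 1, 448, 448])
```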
3.2
Preprocessing
The input images and the corresponding ground-truth masks are resized to 448 by 448 pixels. We
perform data augmentation on-the-fly by randomly rotating both the image and its mask in 90-degree
increments, as well as flipping the images. In addition, we also perform per-image standardization of
the input image.
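The augmentation and standardization described above amount to a few array operations; a minimal numpy sketch (not the training code) is:

```python
# Minimal numpy sketch of the on-the-fly augmentation described above.
import numpy as np

def augment(image, mask, rng):
    """Random 90-degree rotation and flips, applied jointly to image and mask."""
    k = rng.integers(4)                       # 0, 90, 180, or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                    # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                    # vertical flip
        image, mask = image[::-1], mask[::-1]
    # per-image standardization (zero mean, unit variance)
    image = (image - image.mean()) / max(image.std(), 1e-8)
    return image, mask

rng = np.random.default_rng(0)
img, msk = augment(np.random.rand(448, 448, 3), np.zeros((448, 448)), rng)
```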
3.3
Training
We trained the network using Adam[14] optimization with a per-pixel cross-entropy loss function.
During training, we randomly sampled images using a batch size of 32. The network is
trained until no improvement in the mean IOU is observed.
3.4
Post-processing
As the test and validation sets contain images of different resolutions, up to 6688 × 4439 pixels,
we upscale the network output back to the original image dimensions using bicubic
interpolation and then binarize the upsampled output mask using a threshold of 128. We also apply
morphological opening with a 3 × 3 disk-shaped kernel to eliminate small, spurious errors made by the
semantic segmentation.
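A sketch of this post-processing chain, assuming scikit-image is available, is given below; the disk radius is our reading of the 3 × 3 kernel.

```python
# Sketch of the post-processing step, assuming scikit-image.
import numpy as np
from skimage.transform import resize
from skimage.morphology import opening, disk

def postprocess(prob_map, out_shape):
    """Upscale a [0,1] network output, binarize at 128/255, then open."""
    up = resize(prob_map, out_shape, order=3, preserve_range=True)  # bicubic
    binary = (up * 255.0 >= 128).astype(np.uint8)
    return opening(binary, disk(1))            # 3x3 disk kernel (assumed radius)

mask = postprocess(np.random.rand(448, 448), (4439, 6688))
print(mask.shape, mask.dtype)
```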
4
Results and Discussion
We achieved an IOU score of 0.642 on the validation set using our approach. Example segmentation
outputs from our lesion segmentation architecture are shown in Fig. 2.
(a) Sample 1
(b) Sample 2
(c) Sample 3
(d) Sample 4
Figure 2: Examples of segmentation output using our approach.
References
[1] M Emre Celebi, Quan Wen, Hitoshi Iyatomi, Kouhei Shimizu, Huiyu Zhou, and Gerald Schaefer.
A state-of-the-art survey on lesion border detection in dermoscopy images, 2015.
[2] Konstantin Korotkov and Rafael Garcia. Computerized analysis of pigmented skin lesions: a
review. Artificial intelligence in medicine, 56(2):69–90, 2012.
[3] M Emre Celebi, Hitoshi Iyatomi, Gerald Schaefer, and William V Stoecker. Lesion border
detection in dermoscopy images. Computerized medical imaging and graphics, 33(2):148–153,
2009.
[4] Huiyu Zhou, Gerald Schaefer, M Emre Celebi, Faquan Lin, and Tangwei Liu. Gradient vector
flow with mean shift for skin lesion segmentation. Computerized Medical Imaging and Graphics,
35(2):121–127, 2011.
[5] Xiaojing Yuan, Ning Situ, and George Zouridakis. A narrow band graph partitioning method
for skin lesion segmentation. Pattern Recognition, 42(6):1017–1028, 2009.
[6] Gerald Schaefer, Maher I Rajab, M Emre Celebi, and Hitoshi Iyatomi. Colour and contrast
enhancement for improved skin lesion segmentation. Computerized Medical Imaging and
Graphics, 35(2):99–104, 2011.
[7] Ge Wang. A perspective on deep imaging. IEEE Access, 4:8914–8924, 2016.
[8] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and
Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks.
Nature, 542(7639):115–118, 2017.
[9] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 3431–3440, 2015.
[10] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille.
Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv
preprint arXiv:1412.7062, 2014.
[11] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional
encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561, 2015.
[12] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille.
Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and
fully connected crfs. arXiv preprint arXiv:1606.00915, 2016.
[13] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop,
Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an
efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pages 1874–1883, 2016.
[14] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Submitted to the Annals of Statistics
arXiv:1710.11268v3 [] 11 Dec 2017
THEORETICAL AND COMPUTATIONAL GUARANTEES
OF MEAN FIELD VARIATIONAL INFERENCE
FOR COMMUNITY DETECTION
By Anderson Y. Zhang, and Harrison H. Zhou
Yale University
The mean field variational Bayes method is becoming increasingly
popular in statistics and machine learning. Its iterative Coordinate
Ascent Variational Inference algorithm has been widely applied to
large scale Bayesian inference. See Blei et al. (2017) for a recent comprehensive review. Despite the popularity of the mean field method,
there exists remarkably little fundamental theoretical justification.
To the best of our knowledge, the iterative algorithm has never been
investigated for any high dimensional and complex model. In this
paper, we study the mean field method for community detection under the Stochastic Block Model. For an iterative Batch Coordinate
Ascent Variational Inference algorithm, we show that it has a linear
convergence rate and converges to the minimax rate within log n iterations. This complements the results of Bickel et al. (2013) which
studied the global minimum of the mean field variational Bayes and
obtained asymptotic normal estimation of global model parameters.
In addition, we obtain similar optimality results for Gibbs sampling
and an iterative procedure to calculate maximum likelihood estimation, which can be of independent interest.
1. Introduction. A major challenge of large scale Bayesian inference is
the calculation of posterior distribution. For high dimensional and complex
models, the exact calculation of posterior distribution is often computationally intractable. To address this challenge, the mean field variational method
[2, 19, 30] is used to approximate posterior distributions in a wide range of
applications in many fields including natural language processing [6, 22],
computational neuroscience [14, 26], and network science [1, 8, 17]. This
method is different from Markov chain Monte Carlo (MCMC) [13, 28], another popular approximation algorithm. The variational inference approximation is deterministic for each iterative update, while MCMC is a randomized sampling algorithm, so that for large-scale data analysis, the mean
field variational Bayes usually converges faster than MCMC [7], which is
particularly attractive in the big data era.
MSC 2010 subject classifications: Primary 60G05
Keywords and phrases: mean field, variational inference, Bayesian, community detection, stochastic block model
In spite of a wide range of successful applications of the mean field variational Bayes, its fundamental theoretical properties are rarely investigated.
The existing literature [3, 8, 31, 33, 34] is mostly on low dimensional parameter estimation and on the global minimum of the variational Bayes
method. For example, in a recent inspiring paper, Wang and Blei [32] studied the frequentist consistency of the variational method for a general class
of latent variable models. They obtained consistency for low dimensional
global parameters and further showed asymptotic normality, assuming the
global minimum of the variational Bayes method can be achieved. However,
it is often computationally infeasible to attain the global minimum when the
model is high-dimensional or complex. This motivates us to investigate the
statistical properties of the mean field in high dimensional settings, and more
importantly, to understand the statistical and computational guarantees of
the iterative variational inference algorithms.
The success and the popularity of the mean field method in Bayesian
inference mainly lies in the success of its iterative algorithm: Coordinate
Ascent Variational Inference (CAVI) [7], which provides a computationally
efficient way to approximate the posterior distribution. It is important to
understand what statistical properties CAVI has and how do they compare
to the optimal statistical accuracy. In addition, we want to investigate how
fast CAVI converges for the purpose of implementation. With the ambition
of establishing a universal theory of the mean field iterative algorithm for
general models in mind, in this paper, we consider the community detection
problem [1, 4, 12, 24, 25, 35] under the Stochastic Block Model (SBM)
[4, 18, 21, 29] as our first step.
Community detection has been an active research area in recent years,
with the SBM as a popular choice of model. The Bayesian framework and
the variational inference for community detection are considered in [1, 3, 8,
11, 17, 27]. For high dimensional settings, Celisse et al. [8] and Bickel et al.
[3] are arguably the first to study the statistical properties of the mean field
for SBMs. The authors built an interesting connection between full likelihood and variational likelihood, and then studied the closeness of maximum
likelihood and maximum variational likelihood, from which they obtained
consistency and asymptotic normality for global parameter estimation. From
a personal communication with the authors of Bickel et al. [3], an implication of their results is that the variational method achieves exact community
recovery under a strong signal-to-noise (SNR) ratio. Their analysis idea is
fascinating, but it is not clear whether it is possible to extend the analysis
to other SNR conditions under which exact recovery may never be possible.
More importantly, it may not be computationally feasible to maximize the
variational likelihood for the SBM, as seen from Theorem 2.1.
In this paper, we consider the statistical and computational guarantees of
the iterative variational inference algorithm for community detection. The
primary goal of community detection problem is to recover the community
membership in a network. We measure the performance of the iterative variational inference algorithm by comparing its output with the ground truth.
Denote the underlying ground truth by Z*. For a network of n nodes and
k communities, Z* is an n × k matrix with each row a standard Euclidean
basis vector in R^k. The index of the non-zero coordinate of each row {Z*_{i,·}}_{i=1}^n gives the
community assignment information for the corresponding node. We propose
an iterative algorithm called Batch Coordinate Ascent Variational Inference
(BCAVI), a slight modification of CAVI with batch updates, to make parallel and distributed computing possible. Let π^{(s)} denote the output of the
s-th iteration, an n × k matrix with nonnegative entries. The sum
of each row {π^{(s)}_{i,·}}_{i=1}^n is equal to 1, and each row is interpreted as an approximate
posterior probability of assigning the corresponding node into the k
communities. The performance of π^{(s)} is measured by an ℓ1 loss ℓ(·, ·) compared with Z*.
An Informal Statement of Main Result: Let π (s) be the estimation of community membership from the iterative algorithm BCAVI after s iterations.
Under weak regularity condition, for some cn = on (1), with high probability,
we have for all s ≥ 0,
(1)
`(π (s+1) , Z ∗ ) ≤ minimax rate + cn `(π (s) , Z ∗ ).
The main contribution of this paper is Equation (1). The coefficient cn is
on(1) and is independent of s, which implies that ℓ(π^{(s)}, Z*) decreases at a fast
linear rate. In addition, we show that BCAVI converges to the statistical
optimality [35]. It is worth mentioning that after log n iterations BCAVI attains the minimax rate, up to an error on(n^{−a}) for any constant a > 0. The
conditions required for the analysis of BCAVI are relatively mild. We allow
the number of communities to grow. The sizes of the communities are not
assumed to be of the same order. The separation condition on global parameters covers a wide range of settings from consistent community detection
to exact recovery.
To the best of our knowledge this provides arguably the first theoretical
justification for the iterative algorithm of the mean field variational method
in a high-dimensional and complex setting. Though we focus on the problem
of community detection in this paper, we hope the analysis would shed some
light on analyzing other models, which may eventually lead to a general
framework of understanding the mean field theory.
The techniques of analyzing the mean field can be extended to providing
theoretical guarantees for other iterative algorithms, including Gibbs sampling and an iterative procedure for maximum likelihood estimation, which
can be of independent interest. Results similar to Equation (1) are obtained
for both methods under the SBM.
Organization. The paper is organized as follows. In Section 2 we introduce
the mean field theory and the implementation of BCAVI algorithm for community detection. All the theoretical justifications for the mean field method
are in Section 3. Discussions on the convergence of the global minimizer and
other iterative algorithms are presented in Section 4. The proofs of theorems
are in Section 5. We include all the auxiliary lemmas and propositions and
their corresponding proofs in the supplemental material.
Notation. Throughout this paper, for any matrix X ∈ R^{n×m}, its ℓ_1 norm is defined analogously to that of a vector, that is, ‖X‖_1 = Σ_{i,j} |X_{i,j}|. We use the notation X_{i,·} and X_{·,i} to indicate its i-th row and i-th column respectively. For matrices X, Y of the same dimension, their inner product is defined as ⟨X, Y⟩ = Σ_{i,j} X_{i,j} Y_{i,j}. For any set D, we use |D| for its cardinality. We denote by Ber(p) a Bernoulli random variable with success probability p. For two positive sequences x_n and y_n, x_n ≲ y_n means x_n ≤ c y_n for some constant c not depending on n. We adopt the notation x_n ≍ y_n if x_n ≲ y_n and y_n ≲ x_n. To distinguish them from the probabilities p, q, we use bold p and q to indicate distributions. The Kullback-Leibler divergence between two distributions is defined as KL(p‖q) = E_p log(p(x)/q(x)). We use ψ(·) for the digamma function, defined as the logarithmic derivative of the Gamma function, i.e., ψ(x) = (d/dx) log Γ(x). In any R^d, we denote by {e_a}_{a=1}^d the standard Euclidean basis with e_1 = (1, 0, 0, . . . , 0), e_2 = (0, 1, 0, . . . , 0), . . . , e_d = (0, 0, 0, . . . , 1). We let 1_d be a vector of length d whose entries are all 1. We use [d] to indicate the set {1, 2, . . . , d}. Throughout this paper, the superscript “pri” (e.g., π^{pri}) indicates a hyperparameter of a prior.
2. Mean Field Method for Community Detection. In this section,
we first give a brief introduction to the variational inference method in
Section 2.1. Then we introduce the community detection problem and the
Stochastic Block Model in Section 2.2. The Bayesian framework is presented
in Section 2.3. Its mean field approximation and CAVI updates are given in
Section 2.4 and Section 2.5 respectively. The BCAVI algorithm is introduced
in Section 2.6.
2.1. Mean Field Variational Inference. We first present the mean field
method in a general setting and then consider its application to the community detection problem. Let p(x|y) be an arbitrary posterior distribution
for x, given observation y. Here x can be a vector of latent variables, with
coordinates {x_i}. It may be difficult to compute the posterior p(x|y) exactly. The variational Bayes method ignores the dependence among {x_i}, simply taking a product measure q(x) = ∏_i q_i(x_i) to approximate it. Usually each q_i(x_i) is simple and easy to compute. The best approximation is obtained by minimizing the Kullback-Leibler divergence between q(x) and p(x|y):

(2)    q̂^{MF} = argmin_{q∈Q} KL(q ‖ p).
Despite the fact that every measure q has a simple product structure, the
global minimizer q̂MF remains computationally intractable.
To address this issue, an iterative Coordinate Ascent Variational Inference
(CAVI) is widely used to approximate the global minimum. It is a greedy
algorithm. The value of KL(q ‖ p) decreases in each coordinate update:

(3)    q̂_i = argmin_{q_i ∈ Q_i} KL( q_i ∏_{j≠i} q_j ‖ p ),    ∀i.

The coordinate update has an explicit formula

(4)    q̂_i(x_i) ∝ exp( E_{q_{−i}}[log p(x_i | x_{−i}, y)] ),

where x_{−i} indicates all the coordinates in x except x_i, and the expectation is over q_{−i} = ∏_{j≠i} q_j(x_j). Equation (4) is usually easy to compute, which makes CAVI computationally attractive, although CAVI is only guaranteed to reach a local minimum.
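To make the loop concrete, here is a minimal schematic sketch of CAVI in Python. It is our own illustration rather than code from the paper: the factors and the coordinate update of Equation (4) are passed in as abstract objects, and all names are hypothetical.

```python
def cavi(q_init, coordinate_update, n_iters=100):
    """Schematic CAVI loop for Equations (3) and (4).

    q_init            : list of initial factors [q_1, ..., q_m]
    coordinate_update : function (i, q) -> new factor q_i, implementing
                        q_i(x_i) proportional to exp(E_{q_-i}[log p(x_i | x_-i, y)])
    """
    q = list(q_init)
    for _ in range(n_iters):
        for i in range(len(q)):
            # Each coordinate update can only decrease KL(q || p).
            q[i] = coordinate_update(i, q)
    return q
```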
In summary, the mean field variational inference via CAVI can be represented in the following diagram:
p(x|y)  ⇐(approx.)  q̂^{MF}(x)  ⇐(approx.)  q̂^{CAVI}(x),
where q̂MF (x), the global minimum, serves mainly as an intermediate step
in the mean field methodology. What is implemented in practice to approximate the global minimum is an iterative algorithm like CAVI. This motivates
us to consider directly the theoretical guarantees of the iterative algorithm
in this paper.
We refer the reader to the nice review and tutorial by Blei et al. [7] for more detail on variational inference and CAVI. The derivation from Equation (3) to Equation (4) can be found in much of the variational inference literature [5, 7]. We include it in Appendix D in the supplemental material
for completeness.
2.2. Community Detection and Stochastic Block Model. The Stochastic
Block Model (SBM) has been a popular model for community detection.
Consider an n-node network with its adjacency matrix denoted by A.
It is an unweighted and undirected network without self-loops, with A ∈
{0, 1}^{n×n}, A = A^T and A_{i,i} = 0, ∀i ∈ [n]. Each edge is an independent
Bernoulli random variable with E A_{i,j} = P_{i,j}, ∀i < j. In the SBM, the value of the connectivity probability P_{i,j} depends on the communities the two endpoints i and j belong to. We assume P_{i,j} = p if both nodes come from the same community and P_{i,j} = q otherwise. There are k communities in the network. We denote by z ∈ [k]^n the assignment vector, with z_i indicating the index of the community to which the i-th node belongs. Thus, the connectivity probability
matrix P can be written as
P_{i,j} = B_{z_i, z_j},

where B ∈ [0, 1]^{k×k} has diagonal entries p and off-diagonal entries q. That is, B = q 1_k 1_k^T + (p − q) I_k. Let Z ∈ Π_0 be the assignment matrix, where

Π_0 = {π ∈ {0, 1}^{n×k} : ‖π_{i,·}‖_0 = 1, ∀i ∈ [n]}.

In each row {Z_{i,·}}_{i=1}^n there is exactly one 1, with all the other coordinates 0, indicating the community assignment of the corresponding node. Then P can be equivalently written as P_{i,j} = Z_{i,·} B Z_{j,·}^T, ∀i < j, or in matrix form, P_{i,j} = (Z B Z^T)_{i,j}, ∀i < j.
The goal of community detection is to recover the assignment vector z,
or equivalently, the assignment matrix Z. The equivalence can be seen by
observing that there is a bijection r between z ∈ [k]n and Z ∈ Π0 which is
defined as follows:

(5)    r(z) = Z, where Z_{i,a} = I{a = z_i}, ∀i ∈ [n], a ∈ [k].
Since they are uniquely determined by each other, in our paper we may use
z directly without explicitly defining z = r−1 (Z) (or vice versa) when there
is no ambiguity.
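To make the generating process concrete, here is a minimal sketch of sampling from the SBM just described; the function sample_sbm and its arguments are our own naming, and labels are 0-based as is natural in Python.

```python
import numpy as np

def sample_sbm(z, p, q, seed=None):
    """Sample a symmetric SBM adjacency matrix A with zero diagonal.
    z: length-n array of community labels in {0, ..., k-1};
    p: within-community edge probability; q: between-community probability."""
    rng = np.random.default_rng(seed)
    n = len(z)
    P = np.where(z[:, None] == z[None, :], p, q)   # P[i,j] = B_{z_i, z_j}
    upper = np.triu(rng.random((n, n)) < P, k=1)   # draw edges for i < j only
    return (upper | upper.T).astype(int)           # symmetrize; diagonal stays 0

z = np.repeat(np.arange(3), 50)          # three communities of 50 nodes each
A = sample_sbm(z, p=0.3, q=0.05, seed=0)
```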
2.3. A Bayesian Framework. Throughout the whole paper, we assume
k, the number of communities, is known. We observe the adjacency matrix
A. The global parameters p and q and the community assignment Z are
unknown. From the description of the model in Section 2.2, we can write
down the distribution of A as follows:
(6)    p(A|Z, p, q) = ∏_{i<j} B_{z_i,z_j}^{A_{i,j}} (1 − B_{z_i,z_j})^{1−A_{i,j}},
with B = q 1_k 1_k^T + (p − q) I_k and z = r^{−1}(Z). We are interested in Bayesian inference for estimating Z, with priors given on both p, q and Z.
We assume that {z_i}_{i=1}^n have independent categorical (a.k.a. multinomial with size one) priors with hyperparameters {π^{pri}_{i,·}}_{i=1}^n, where Σ_{a=1}^k π^{pri}_{i,a} = 1, ∀i ∈ [n]. In other words, {Z_{i,·}}_{i=1}^n are independently distributed with

P(Z_{i,·} = e_a^T) = π^{pri}_{i,a},    ∀a = 1, 2, . . . , k,

where {e_a}_{a=1}^k are the coordinate vectors. Here we allow the priors for Z_{i,·} to be different for different i. If additionally π^{pri}_{i,·} = π^{pri}_{j,·} is assumed for all i ≠ j, then this reduces to the usual case of i.i.d. priors.
Since {Ai,j }i<j are Bernoulli, it is natural to consider a conjugate Beta
prior for p and q. Let p ∼ Beta(α_p^{pri}, β_p^{pri}) and q ∼ Beta(α_q^{pri}, β_q^{pri}). Then the
joint distribution is
(7)    p(A, Z, p, q) = [ ∏_i π^{pri}_{i,z_i} ∏_{i<j} B_{z_i,z_j}^{A_{i,j}} (1 − B_{z_i,z_j})^{1−A_{i,j}} ]
            × [ Γ(α_p^{pri} + β_p^{pri}) / (Γ(α_p^{pri}) Γ(β_p^{pri})) ] p^{α_p^{pri}−1} (1 − p)^{β_p^{pri}−1}
            × [ Γ(α_q^{pri} + β_q^{pri}) / (Γ(α_q^{pri}) Γ(β_q^{pri})) ] q^{α_q^{pri}−1} (1 − q)^{β_q^{pri}−1}.
Our main interest is to infer Z from the posterior distribution p(Z, p, q|A).
However, the exact calculation of p(Z, p, q|A) is computationally intractable.
2.4. Mean Field Approximation. Since the posterior distribution p(Z, p, q|A)
is computationally intractable, we apply the mean field approximation to
approximate it by a product measure

q_{π,α_p,β_p,α_q,β_q}(Z, p, q) = q_π(Z) q_{α_p,β_p}(p) q_{α_q,β_q}(q),

where {r^{−1}(Z_{i,·})}_{i=1}^n are independent categorical variables with parameters {π_{i,·}}_{i=1}^n, i.e., q_π(Z) = ∏_{i=1}^n q_{π_{i,·}}(Z_{i,·}) with

q_{π_{i,·}}(Z_{i,·} = e_a) = π_{i,a},    ∀i ∈ [n], a ∈ [k],
and q_{α_p,β_p}(p) and q_{α_q,β_q}(q) are Beta distributions with parameters α_p, β_p, α_q, β_q due to conjugacy. See Figure 1 for the graphical presentation of q_{π,α_p,β_p,α_q,β_q}(Z, p, q). Note that the distribution class of q is fully captured by the parameters (π, α_p, β_p, α_q, β_q), and then the optimization in Equation (2) is equivalent
to

(8)    (π̂^{MF}, α̂_p^{MF}, β̂_p^{MF}, α̂_q^{MF}, β̂_q^{MF}) = argmin_{π∈Π_1, α_p,β_p,α_q,β_q>0} KL( q_{π,α_p,β_p,α_q,β_q}(Z, p, q) ‖ p(Z, p, q|A) ),

where Π_1 = {π ∈ [0, 1]^{n×k} : ‖π_{i,·}‖_1 = 1, ∀i ∈ [n]}. Here Π_1 can be viewed as a relaxation of Π_0: it uses an ℓ_1 constraint on each row instead of the ℓ_0 constraint used in Π_0. The approximating distribution q_{π̂^{MF}}(Z) gives approximate probabilities for classifying every node into each community.

[Fig 1. Graphical presentations of the full Bayesian inference (left panel) and the mean field approximation (right panel) for community detection. The edges show the dependence among variables.]

The optimization in Equation (8) can be shown to be equivalent to a more explicit optimization, as follows. Recall that ψ(·) is the digamma function with ψ(x) = (d/dx) log Γ(x).

Theorem 2.1. The mean field estimator (π̂^{MF}, α̂_p^{MF}, β̂_p^{MF}, α̂_q^{MF}, β̂_q^{MF}) defined in Equation (8) is equivalent to

(π̂^{MF}, α̂_p^{MF}, β̂_p^{MF}, α̂_q^{MF}, β̂_q^{MF}) = argmax_{π∈Π_1, α_p,β_p,α_q,β_q>0} f(π, α_p, β_p, α_q, β_q; A),

where

f(π, α_p, β_p, α_q, β_q; A) = t ⟨A − λ 1_n 1_n^T + λ I_n, π π^T⟩ + ([ψ(α_q) − ψ(β_q)]/2) ‖A‖_1 + (n(n−1)/2) [ψ(β_q) − ψ(α_q + β_q)]
        − Σ_{i=1}^n KL( Categorical(π_{i,·}) ‖ Categorical(π^{pri}_{i,·}) ) − KL( Beta(α_p, β_p) ‖ Beta(α_p^{pri}, β_p^{pri}) ) − KL( Beta(α_q, β_q) ‖ Beta(α_q^{pri}, β_q^{pri}) ),

and

(9)    t = ([ψ(α_p) − ψ(β_p)] − [ψ(α_q) − ψ(β_q)]) / 2,

(10)   λ = ([ψ(β_q) − ψ(α_q + β_q)] − [ψ(β_p) − ψ(α_p + β_p)]) / (2t).
The explicit formulation in Theorem 2.1 is helpful to understand the
global minimizer of the mean field method. However, the global minimizer
π̂ MF remains computationally infeasible as the objective function is not convex. Fortunately, there is a practically useful algorithm to approximate it.
2.5. Coordinate Ascent Variational Inference. CAVI is possibly the most
popular algorithm to approximate the global minimum of the mean field variational Bayes. It is an iterative algorithm. In Equation (8), there are latent
variables {Zi,· }ni=1 , p, q. CAVI updates them one by one. Since the distribution class of q is uniquely determined by the parameters {πi,· }ni=1 , αp , βp , αq , βq ,
equivalently we are updating those parameters iteratively. Theorem 2.2 gives
explicit formulas for the coordinate updates.
Theorem 2.2. Starting from some π, α_p, β_p, α_q, β_q, the CAVI update for each coordinate (i.e., Equation (3) and Equation (4)) has an explicit expression as follows:

• Update on p:

  α_p′ = α_p^{pri} + Σ_{i<j} Σ_{a=1}^k π_{i,a} π_{j,a} A_{i,j},    and    β_p′ = β_p^{pri} + Σ_{i<j} Σ_{a=1}^k π_{i,a} π_{j,a} (1 − A_{i,j}).

• Update on q:

  α_q′ = α_q^{pri} + Σ_{i<j} Σ_{a≠b} π_{i,a} π_{j,b} A_{i,j},    and    β_q′ = β_q^{pri} + Σ_{i<j} Σ_{a≠b} π_{i,a} π_{j,b} (1 − A_{i,j}).

• Update on Z_{i,·}, ∀i = 1, 2, . . . , n:

  π′_{i,a} ∝ π^{pri}_{i,a} exp( 2t Σ_{j≠i} π_{j,a} (A_{i,j} − λ) ),    ∀a = 1, 2, . . . , k,

where t and λ are defined in Equation (9) and Equation (10) respectively, and the normalization satisfies Σ_{a=1}^k π′_{i,a} = 1.
All coordinate updates in Theorem 2.2 have explicit formulas, which
makes CAVI a computationally attractive way to approximate the global
optimum q̂MF for the community detection problem.
2.6. Batch Coordinate Ascent Variational Inference. The Batch Coordinate Ascent Variational Inference (BCAVI) is a batch version of CAVI. The
difference lies in that CAVI updates the rows of π sequentially one by one, while BCAVI uses the current value of π to update all rows {π′_{i,·}} at once according to Theorem 2.2. This makes BCAVI especially suitable for parallel and distributed
computing, a nice feature for large scale network analysis.
We define a mapping h : Π1 → Π1 as follows. For any π ∈ Π1 , we have
(11)    [h_{t,λ}(π)]_{i,a} ∝ π^{pri}_{i,a} exp( 2t Σ_{j≠i} π_{j,a} (A_{i,j} − λ) ),

with parameters t and λ. For BCAVI, we update π by π′ = h_{t,λ}(π) in each batch iteration, with t, λ defined in Equations (14) and (15). See Algorithm 1 for the detailed implementation of the BCAVI algorithm.
Algorithm 1: Batch Coordinate Ascent Variational Inference (BCAVI)

Input: Adjacency matrix A, number of communities k, hyperparameters π^{pri}, α_p^{pri}, β_p^{pri}, α_q^{pri}, β_q^{pri}, initializer π^{(0)}, number of iterations S.
Output: Mean field variational Bayes approximation π̂, α̂_p, β̂_p, α̂_q, β̂_q.
for s = 1, 2, . . . , S do
  1. Update α_p^{(s)}, β_p^{(s)}, α_q^{(s)}, β_q^{(s)} by

     (12)    α_p^{(s)} = α_p^{pri} + Σ_{a=1}^k Σ_{i<j} A_{i,j} π^{(s−1)}_{i,a} π^{(s−1)}_{j,a},    β_p^{(s)} = β_p^{pri} + Σ_{a=1}^k Σ_{i<j} (1 − A_{i,j}) π^{(s−1)}_{i,a} π^{(s−1)}_{j,a},

     (13)    α_q^{(s)} = α_q^{pri} + Σ_{a≠b} Σ_{i<j} A_{i,j} π^{(s−1)}_{i,a} π^{(s−1)}_{j,b},    β_q^{(s)} = β_q^{pri} + Σ_{a≠b} Σ_{i<j} (1 − A_{i,j}) π^{(s−1)}_{i,a} π^{(s−1)}_{j,b}.

  2. Define

     (14)    t^{(s)} = (1/2) [ (ψ(α_p^{(s)}) − ψ(β_p^{(s)})) − (ψ(α_q^{(s)}) − ψ(β_q^{(s)})) ],

     (15)    λ^{(s)} = (1/(2t^{(s)})) [ (ψ(β_q^{(s)}) − ψ(α_q^{(s)} + β_q^{(s)})) − (ψ(β_p^{(s)}) − ψ(α_p^{(s)} + β_p^{(s)})) ],

     where ψ(·) is the digamma function.
  3. Update π^{(s)} with

     π^{(s)} = h_{t^{(s)},λ^{(s)}}(π^{(s−1)}),

     where the mapping h is defined as in Equation (11).
end
We have π̂ = π^{(S)}, α̂_p = α_p^{(S)}, β̂_p = β_p^{(S)}, α̂_q = α_q^{(S)}, β̂_q = β_q^{(S)}.
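To illustrate Algorithm 1 end to end, the following is a minimal numpy sketch of BCAVI as we read Equations (11)-(15), assuming A is a symmetric 0/1 matrix with zero diagonal and that all entries of pi_pri are positive; the function and variable names are ours, not the paper's.

```python
import numpy as np
from scipy.special import digamma

def bcavi(A, pi0, pi_pri, a_p, b_p, a_q, b_q, n_iters=50):
    """Sketch of Algorithm 1. A: symmetric 0/1 adjacency matrix, zero diagonal;
    pi0, pi_pri: n-by-k matrices with rows summing to one (initializer, prior);
    a_p, b_p, a_q, b_q: scalar prior hyperparameters."""
    n = A.shape[0]
    pi = pi0.copy()
    for _ in range(n_iters):
        S = pi @ pi.T                                  # S[i,j] = sum_a pi[i,a]*pi[j,a]
        same_edges = 0.5 * np.sum(A * S)               # sum_{i<j} A_ij S_ij
        same_pairs = 0.5 * (np.sum(S) - np.trace(S))   # sum_{i<j} S_ij
        all_edges = 0.5 * np.sum(A)                    # sum_{i<j} A_ij
        all_pairs = n * (n - 1) / 2
        # Equations (12) and (13): batch updates of the Beta parameters.
        ap = a_p + same_edges
        bp = b_p + same_pairs - same_edges
        aq = a_q + all_edges - same_edges
        bq = b_q + (all_pairs - same_pairs) - (all_edges - same_edges)
        # Equations (14) and (15).
        t = 0.5 * ((digamma(ap) - digamma(bp)) - (digamma(aq) - digamma(bq)))
        lam = ((digamma(bq) - digamma(aq + bq))
               - (digamma(bp) - digamma(ap + bp))) / (2 * t)
        # Equation (11): update all rows of pi at once. Since diag(A) = 0,
        # A @ pi already excludes the j = i term; subtract it for the -lambda part.
        logits = np.log(pi_pri) + 2 * t * (A @ pi - lam * (pi.sum(axis=0) - pi))
        logits -= logits.max(axis=1, keepdims=True)    # numerical stabilization
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi, ap, bp, aq, bq
```

With non-informative priors one may take a_p = b_p = a_q = b_q = 1 and pi_pri = np.full((n, k), 1/k), in line with Remark 1 of Section 3.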
Remark 2.1. The definitions of t^{(s)} and λ^{(s)} in Equations (14) and (15) involve the digamma function, which incurs a non-negligible computational cost each time it is called. Note that we have ψ(x) ∈ (log(x − 1/2), log x) for all x > 1/2. For computational purposes, we propose to use the logarithmic function instead of the digamma function in Algorithm 1, i.e., Equations (14) and (15) are replaced by

(16)    t^{(s)} = (1/2) log[ α_p^{(s)} β_q^{(s)} / (β_p^{(s)} α_q^{(s)}) ],    and    λ^{(s)} = (1/(2t^{(s)})) log[ β_q^{(s)} (α_p^{(s)} + β_p^{(s)}) / ((α_q^{(s)} + β_q^{(s)}) β_p^{(s)}) ].

Later we show that α_p^{(s)}, β_p^{(s)}, α_q^{(s)}, β_q^{(s)} are all at least of the order np, which goes to infinity, and thus the error caused by using the logarithmic function to replace the digamma function is negligible. All theoretical guarantees obtained in Section 3 for Algorithm 1 (i.e., Theorem 3.1, Theorem 3.2) still hold if we use Equation (16) to replace Equations (14) and (15).
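As a quick numerical sanity check of the bound ψ(x) ∈ (log(x − 1/2), log x) that justifies this substitution (our own snippet, not from the paper):

```python
import numpy as np
from scipy.special import digamma

# psi(x) is squeezed between log(x - 1/2) and log(x) for x > 1/2, so for the
# large Beta parameters arising here the logarithmic surrogate is accurate.
x = np.array([1.0, 10.0, 100.0, 1e4])
assert np.all(np.log(x - 0.5) < digamma(x)) and np.all(digamma(x) < np.log(x))
print(np.log(x) - digamma(x))   # the gap shrinks like 1/(2x)
```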
3. Theoretical Justifications. In this section, we establish theoretical justifications for BCAVI for community detection under the Stochastic
Block Model. Though Z, p and q are all unknown, the main interest of community detection is on the recovery of the assignment matrix Z, while p
and q are nuisance parameters. As a result, our main focus is on developing
the convergence rate of BCAVI for π.
3.1. Loss Function. We use the ℓ_1 norm to measure the performance of recovering Z. Let Φ be the set of all the bijections from [k] to [k]. Then for any Z, Z^* ∈ Π_1, the loss function is defined as

(17)    ℓ(Z, Z^*) = inf_{φ∈Φ} ‖Z − φ ∘ Z^*‖_1 = inf_{φ∈Φ} Σ_{i,a} |Z_{i,a} − Z^*_{i,φ(a)}|.
Note that the infimum over Φ addresses the issue of identifiability over the
labels. For instance, in the case of n = 4, k = 2, the assignment vector
z = (1, 1, 2, 2) and z′ = (2, 2, 1, 1) give the same partition. In Equation (17)
two equivalent assignments give the same loss.
There are a few reasons for the choice of the ℓ_1 norm. When both Z, Z′ ∈ Π_0, the ℓ_1 distance between Z and Z′ is equivalent to the ℓ_0 norm, i.e., the Hamming distance between the corresponding assignment vectors r^{−1}(Z) and r^{−1}(Z′), which is the default metric used in the community detection literature [12, 35]. The other reason is related to the interpretation of Π_1. Since each row of a matrix in Π_1 corresponds to a categorical distribution, it is natural to use the ℓ_1 norm, i.e., the total variation distance, to measure their difference.
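A direct implementation of Equation (17) simply enumerates the k! bijections, which is feasible for the moderate k of interest here; the sketch below, including its names, is ours.

```python
import numpy as np
from itertools import permutations

def l1_loss(Z, Z_star):
    """l1 loss of Equation (17): minimal ||Z - phi(Z*)||_1 over all
    permutations phi of the k column labels of Z*."""
    k = Z.shape[1]
    return min(np.abs(Z - Z_star[:, list(perm)]).sum()
               for perm in permutations(range(k)))

# The n = 4, k = 2 example from the text: equivalent assignments, zero loss.
Z  = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])   # z  = (1, 1, 2, 2)
Zp = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])   # z' = (2, 2, 1, 1)
assert l1_loss(Z, Zp) == 0
```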
3.2. Ground Truth. We use the superscript asterisk (∗ ) to indicate the
ground truth. The ground truth of the connectivity matrix is

B^* = q^* 1_k 1_k^T + (p^* − q^*) I_k,
where p∗ is the within community connection probability and q ∗ is the between community connection probability. Throughout the paper, we assume
p∗ > q ∗ such that the network satisfies the so-called “assortative” property, with the within-community connectivity probability larger than the
between-community connectivity probability.
We further assume the network is generated by the true assignment matrix
Z^* in the sense that P_{i,j} = (Z^* B^* Z^{*T})_{i,j} for all i ≠ j. We are interested in deriving a statistical guarantee for ℓ(π̂^{(s)}, Z^*). Throughout this section we consider the cases Z^* ∈ Π_0 or Z^* ∈ Π_0^{(ρ,ρ′)}, where Π_0^{(ρ,ρ′)} is defined to be the subset of Π_0 with all the community sizes bounded between ρn/k and ρ′n/k. That is,

Π_0^{(ρ,ρ′)} = {π ∈ Π_0 : ρn/k ≤ |{i ∈ [n] : π_{i,a} = 1}| ≤ ρ′n/k, ∀a ∈ [k]}.
It is worth mentioning that ρ, ρ′ are not necessarily constants. We allow the
community sizes not to be of the same order in the theoretical analysis.
3.3. Theoretical Justifications for BCAVI. In Theorem 3.1, we present
theoretical guarantees on the convergence rate of BCAVI when it is initialized properly. Define

w = max_{i∈[n]} max_{a,b∈[k]} π^{pri}_{i,a} / π^{pri}_{i,b},    and    n̄_min = min_{a≠b} [n_a + n_b]/2,

where n_a denotes the size of the a-th community under Z^*. When w = 1, the priors for {r^{−1}(Z_{i,·})}_{i=1}^n are i.i.d. Categorical(1/k, 1/k, . . . , 1/k), and n̄_min = n/2 when there are only two communities. The following quantity I plays a key role in the minimax theory [35]:
I = −2 log[ √(p^* q^*) + √((1 − p^*)(1 − q^*)) ],
which is the Rényi divergence of order 1/2 between two Bernoulli distributions: Ber(p∗ ) and Ber(q ∗ ). The proof of Theorem 3.1 is deferred to Section
5.3.
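In code, I is a one-liner (a sketch of ours):

```python
import numpy as np

def renyi_half(p_star, q_star):
    """Renyi divergence of order 1/2 between Ber(p*) and Ber(q*),
    i.e. the quantity I above."""
    return -2 * np.log(np.sqrt(p_star * q_star)
                       + np.sqrt((1 - p_star) * (1 - q_star)))
```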
Theorem 3.1. Let Z^* ∈ Π_0. Let 0 < c_0 < 1 be any constant. Assume

(18)    0 < c_0 p^* < q^* < p^* = o_n(1),    nI/[wk[n/n̄_min]^2] → ∞,    and    α_p^{pri}, β_p^{pri}, α_q^{pri}, β_q^{pri} = o_n((p^* − q^*) n^2 / k).
Under the assumption that the initializer π^{(0)} satisfies ℓ(π^{(0)}, Z^*) ≤ c_init n̄_min for some sufficiently small constant c_init with probability at least 1 − ε, there exist some constant c > 0 and some η = o_n(1) such that in each iteration of the BCAVI algorithm, we have

ℓ(π^{(s+1)}, Z^*) ≤ n exp(−(1 − η) n̄_min I) + ℓ(π^{(s)}, Z^*) / √(nI/[wk[n/n̄_min]^2]),    ∀s ≥ 0,

uniformly with probability at least 1 − exp[−(n̄_min I)^{1/2}] − n^{−c} − ε.
Theorem 3.1 establishes a linear convergence rate for the BCAVI algorithm. The coefficient [nI/[wk[n/n̄_min]^2]]^{−1/2} is independent of s, and goes to 0 as n grows. The following theorem is an immediate consequence of
Theorem 3.1.
Theorem 3.2. Under the same condition as in Theorem 3.1, for any s ≥ s_0 := [nI/k]/log[nI/(wk[n/n̄_min]^2)], we have

ℓ(π̂^{(s)}, Z^*) ≤ n exp(−(1 − 2η) n̄_min I) ≤ { n exp(−(1 − o(1)) ρnI/k), k ≥ 3;   n exp(−(1 − o(1)) nI/2), k = 2, }

with probability at least 1 − exp[−(n̄_min I)^{1/2}] − n^{−c} − ε.
Theorem 3.2 shows that BCAVI provably attains the statistical optimality from the minimax lower bound in Theorem 3.3 after at most s_0 iterations. When the network is sparse, i.e., p^* and q^* are at most of the order (log n)/n, the quantity s_0 can be shown to be o(log n), and then BCAVI converges to the minimax rate within log n iterations. When the network is dense, i.e., p^* and q^* are far bigger than (log n)/n, log n iterations are not enough to attain the minimax rate. However, ℓ(π^{(s)}, Z^*) = o(n^{−a}) for any a > 0 when s ≥ log n, and thus all the nodes can be correctly clustered with high probability by assigning each node to the community with the highest assignment probability. Therefore, it is enough to pick the number of
iterations to be log n in implementing BCAVI.
Theorem 3.3. Under the assumption nI/(k log k) → ∞, we have

inf_{π̂} sup_{Z^* ∈ Π_0^{(ρ,ρ′)}} E ℓ(π̂, Z^*) ≥ { n exp(−(1 − o(1)) ρnI/k), k ≥ 3;   n exp(−(1 − o(1)) nI/2), k = 2. }
Theorem 3.3 gives the minimax lower bound for community detection
problems with respect to the ℓ(·,·) loss. In Theorem 3.2, under the additional assumption that Z^* ∈ Π_0^{(ρ,ρ′)}, it immediately follows that BCAVI converges to the minimax rate after s_0 iterations. As a consequence, BCAVI is not only computationally efficient, but also achieves statistical optimality. The minimax lower bound in Theorem 3.3 is almost identical to the minimaxity established in [35]. The only difference is that [35] considers an ℓ_0 loss function. The proof of Theorem 3.3 is just a routine extension of that in
[35]. Therefore, we omit the proof.
To help understand Theorem 3.1, we add a remark on conditions on model
parameters and priors, and a remark on initialization.
Remark 1 (Conditions on model parameters and priors). The community
sizes are not necessarily of the same order in Theorem 3.1. If we further assume ρ, ρ′ are constants and the prior satisfies π^{pri}_{i,a} ≍ 1/k, ∀i ∈ [n], a ∈ [k] (for example, the uniform prior), then the first condition in Equation (18) is equivalent to

nI/k^3 → ∞,

noting that n/n̄_min ≍ k and w ≍ 1. This condition is necessary for consistent community detection [35] when k is finite. The assumptions in Equation (18) are slightly stronger than the assumption in [23], which is essentially nI ≥ Ck^2 log k for a sufficiently large constant C.
Under the assumption nI/k^3 → ∞, since we have I ≍ (p^* − q^*)^2/p^*, it can be shown that p^*, q^* are far bigger than n^{−1}, and then the second part of Equation (18) can also be easily satisfied. For instance, we can simply set α_p^{pri}, β_p^{pri}, α_q^{pri}, β_q^{pri} all equal to 1, i.e., consider non-informative priors.
Remark 2 (Initialization). The requirement on the initializers for BCAVI
in Theorem 3.1 is relatively weak. When k is a constant and the community sizes are of the same order, the condition needed is ℓ(π^{(0)}, Z^*) ≤ cn for some small constant c. Many existing methodologies in the community detection literature can be used. One popular choice is spectral clustering. As established in [9, 12, 21], spectral clustering has a mis-clustering error bound of order O(k^2/I). From Equation (18), this error is o(n̄_min), and then
the condition that Theorem 3.1 requires for initialization is satisfied. The
semidefinite programming (SDP), another popular method for community
detection, also enjoys satisfactory theoretical guarantees [10, 16], and is suitable as an initializer.
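For instance, a basic spectral initializer can be sketched as follows; this is our illustration of the standard recipe rather than the exact procedures of [9, 12, 21].

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_init(A, k):
    """Spectral clustering initializer: run k-means on the rows of the
    matrix of k leading eigenvectors of A; return a hard n-by-k assignment."""
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[-k:]    # k eigenvectors of largest |eigenvalue|
    _, labels = kmeans2(vecs[:, top], k, minit='++')
    return np.eye(k)[labels]
```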
4. Discussion.
4.1. Statistical Guarantee of Global Minimizer. Though it is often challenging to obtain the global minimizer of the mean field method, it is still interesting to understand the statistical property of the global minimizer π̂ MF .
If both p^* and q^* are known, the optimization problem stated in Theorem 2.1 can be further simplified. The posterior distribution becomes p(Z|A). We use a product measure q_π(Z) = ∏_{i=1}^n q_{π_{i,·}}(Z_{i,·}) for the approximation, and then π̂^{MF} = argmin_{π∈Π_1} KL[q_π(Z) ‖ p(Z|A)]. Theorem 4.1 reveals that
π̂ MF is rate-optimal, not surprisingly given the theoretical results obtained
for BCAVI, an approximation of π̂ MF .
Theorem 4.1. Assume p∗ and q ∗ are known. Under the assumption
ρnI/[wk 2 [n/n̄min ]2 ] → ∞, there exist some constant c > 0 and η = on (1)
such that
ℓ(π̂^{MF}, Z^*) ≤ n exp(−(1 − η) n̄_min I)

with probability at least 1 − exp[−(n̄_min I)^{1/2}] − n^{−c}.
4.2. Gibbs Sampling. In Section 3.3 we analyze an iterative algorithm,
BCAVI, and establish its linear convergence towards statistical optimality.
The framework and methodology we establish are not limited to BCAVI, but
can be extended to other iterative algorithms, including Gibbs sampling.
As a popular Markov chain Monte Carlo (MCMC) algorithm, Gibbs sampling has been widely used in practice to approximate the posterior distribution. There is a strong tie between Gibbs sampling and the mean field
variational inference: both implement coordinate updates using conditional
distributions. Using the general notation introduced in Section 2.1, to approximate p(x|y), Gibbs sampling obtains the update on x_i by a random draw from the conditional distribution p(x_i | x_{−i}, y), while the variational inference updates in a deterministic way with exp( E_{q_{−i}}[log p(x_i | x_{−i}, y)] ).
We present a batched version of Gibbs sampling for community detection.
It involves the following iterative updates:

• Generate p^{(s)} by sampling from p(p | q^{(s−1)}, Z^{(s−1)}, A);
• Generate q^{(s)} by sampling from p(q | p^{(s−1)}, Z^{(s−1)}, A);
• Generate Z^{(s)}_{i,·} independently by sampling from p(Z_{i,·} | Z^{(s−1)}_{−i,·}, p^{(s)}, q^{(s)}, A), for i ∈ [n].
We include the detailed implementation as Algorithm 2 in the supplemental
material (Section A.1). The similarity between Algorithm 1 and Algorithm 2
makes it possible for us to analyze the output of Gibbs sampling in a similar
way as we did for the variational inference.
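Since Algorithm 2 is deferred to the supplement, the following is our own sketch of one batched Gibbs iteration under the conjugate setup of Section 2.3, assuming a uniform categorical prior on Z; the sufficient statistics mirror Equations (12) and (13), with random draws in place of the deterministic updates.

```python
import numpy as np

def gibbs_step(A, Z, a_p, b_p, a_q, b_q, rng):
    """One batched Gibbs iteration. Z: n-by-k 0/1 assignment matrix;
    A: symmetric 0/1 adjacency matrix with zero diagonal.
    Assumes the sampled p, q stay in (0, 1) with p > q (assortative regime)."""
    n, k = Z.shape
    S = Z @ Z.T                            # S[i,j] = 1 iff i, j share a community
    same_edges = 0.5 * np.sum(A * S)
    same_pairs = 0.5 * (np.sum(S) - n)     # diag(S) = 1
    all_edges = 0.5 * np.sum(A)
    all_pairs = n * (n - 1) / 2
    # Sample p and q from their conditional Beta posteriors.
    p = rng.beta(a_p + same_edges, b_p + same_pairs - same_edges)
    q = rng.beta(a_q + all_edges - same_edges,
                 b_q + (all_pairs - same_pairs) - (all_edges - same_edges))
    # log p(Z_i = e_a | Z_-i, p, q, A) has the same 2t(A - lambda) form as
    # Equation (11), with p, q plugged in instead of digamma expectations.
    t = 0.5 * np.log(p * (1 - q) / (q * (1 - p)))
    lam = np.log((1 - q) / (1 - p)) / (2 * t)
    logits = 2 * t * (A @ Z - lam * (Z.sum(axis=0) - Z))
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Sample every row of Z in one batch via inverse-CDF lookup.
    u = rng.random((n, 1))
    labels = np.minimum((u > probs.cumsum(axis=1)).sum(axis=1), k - 1)
    return np.eye(k)[labels], p, q
```

A full sampler then iterates Z, p, q = gibbs_step(A, Z, 1, 1, 1, 1, rng) from an initializer Z^{(0)}.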
Theorem 4.2. Assume the initializer Z^{(0)} satisfies ℓ(Z^{(0)}, Z^*) ≤ c_init n̄_min for some sufficiently small constant c_init with probability at least 1 − ε. Under the same condition as in Theorem 3.1, there exist some constant c > 0 and some η, η′ = o_n(1) that go to 0 slowly, such that for all s ≥ 0 of the batched Gibbs sampling (Algorithm 2), we have

E_{Z^{(s+1)}}[ ℓ(Z^{(s+1)}, Z^*) | A, Z^{(0)} ] ≤ n exp(−(1 − η) n̄_min I) + c_n^s ℓ(Z^{(0)}, Z^*) + (s + 1) n b_n

with probability at least 1 − exp[−(n̄_min I)^{1/2}] − n^{−c} − ε, where b_n = exp(−η′^2 n̄_min) + exp(−η′^2 n^2 I) and c_n = 1/√(nI/[wk[n/n̄_min]^2]). Consequently, for s = [nI/k]/log[nI/(wk[n/n̄_min]^2)], we have

E_{Z^{(s+1)}}[ ℓ(Z^{(s+1)}, Z^*) | A, Z^{(0)} ] ≤ exp(−(1 − 2η) n̄_min I),

with probability at least 1 − exp[−(n̄_min I)^{1/2}] − n^{−c} − ε.
Theorem 4.2 establishes theoretical justification for batched Gibbs sampling for community detection. Although we have the same c_n and a similar convergence as in Theorem 3.1, some extra effort is needed due to the randomness in each iterative update. The additional term b_n is necessary to handle the extreme events caused by random generation. Note that (s + 1) n b_n is dominated by n exp(−(1 − η) n̄_min I) as long as s ≤ e^n. Thus, when s ≤ e^n, we have similar “linear convergence” results as in Theorem
3.1.
4.3. An Iterative Algorithm for Maximum Likelihood Estimation. The maximum likelihood estimator (MLE) usually yields statistical optimality. However, the maximization of the likelihood p(A|Z, p, q) over Z, p, q is computationally infeasible. Inspired by the procedures proposed in Algorithm 1 and Algorithm 2, we may approach max p(A|Z, p, q) by alternating maximization. We use a batched coordinate maximization:
• Maximize p(A|p, q (s−1) , Z (s−1) ) over p to obtain p(s) ;
• Maximize p(A|p(s−1) , q, Z (s−1) ) over q to obtain q (s) ;
(s−1)
(s)
• Maximize p(A|p(s−1) , q (s−1) , Zi,· , Z−i,· ) over Zi,· to obtain Zi,· , for
each i ∈ [n].
We include its detailed implementation in Algorithm 3 in the supplemental
material (Section A.2). We have the following theoretical guarantee of this
iterative algorithm to approximate the MLE.
17
MEAN FIELD FOR COMMUNITY DETECTION
Theorem 4.3. Assume the initializer Z (0) satisfies `(Z (0) , Z ∗ ) ≤ cinit n̄min
for some sufficiently small constant cinit with probability at least 1−. Under
the same condition as in Theorem 3.1, there exist some constant c > 0 and
some η = on (1), such that in each iteration of the BCAVI algorithm,
`(Z (s) , Z ∗ )
`(Z (s+1) , Z ∗ ) ≤ n exp(−(1 − η)n̄min I) + p
, ∀s ≥ 0,
nI/[wk[n/n̄min ]2 ]
1
holds with probability at least 1 − exp[−(n̄min I) 2 ] − n−c − .
Algorithm 3 is essentially the same with the procedure proposed in [12].
However, [12] can only analyze the performance of one single iteration from
Z (0) (i.e., `(Z (1) , Z ∗ )), and it requires extra data splitting steps. Theorem
4.3 provides a stronger and cleaner result compared with that of [12].
5. Proofs of Main Theorems. In this section, we give proofs of the
theorems in Section 2 and Section 3. We first present the proof of Theorem
2.1 in Section 5.1. Then we give the proof Theorem 2.2 in Section 5.2. The
proof of Theorem 3.1 is given in Section 5.3.
5.1. Proof of Theorem 2.1. From Equation (8), by some algebra (see
Equation (53) in Appendix D for detailed derivation) we have
(19)
(π̂ MF , α̂pMF , β̂pMF , α̂qMF , β̂qMF ) =
arg min
π∈Π1
αp ,βp ,αq ,βq >0
Eq [log p(A|Z, p, q)] − KL(q(Z, p, q)kp(Z, p, q)),
where we use q instead of qπ,αp ,βp ,αq ,βq for simplicity. From the conditional
distribution in Equation (6), the log-likelihood function can be simplified as
XX
Bab
log p(A|Z, p, q) =
Zia Zjb Ai,j log
+ log(1 − Bab ) .
1 − Bab
a,b i<j
Due to the independence of Z and p, q under q, we have
XX
Eq [log p(A|Z, p, q)] = Eq(p,q) Eq(Z)
Zi,a Zj,b Ai,j log
a,b i<j
XX
= Eq(p,q)
πi,a πj,b Ai,j log
a,b i<j
Bab
1 − Bab
+ log(1 − Bab )
Bab
+ log(1 − Bab ) .
1 − Bab
18
ZHANG, ZHOU
Since Ba,a = p, ∀a ∈ [k] and Ba,b = q, ∀a 6= b, we have
(20)
XX
1−p
p(1 − q)
+ log
Eq [log p(A|Z, p, q)] = Eq(p,q)
πi,a πj,a Ai,j log
q(1 − p)
1−q
a i<j
i
h
XX
q
+ Eq(p,q)
+ log(1 − q) .
πi,a πj,b Ai,j log
1−q
a,b i<j
By properties of Beta distribution, we obtain
Eq(p,q) log
p(1 − q)
= Eq(p) [log p − log(1 − p)] − Eq(q) [log q − log(1 − q)]
q(1 − p)
= [ψ(αp ) − ψ(βp )] − [ψ(αq ) − ψ(βq )] ,
and
Eq(p,q) log
1−q
= Eq(q) log(1 − q) − Eq(p) log(1 − p)
1−p
= [ψ(βq ) − ψ(αq + βq )] − [ψ(βp ) − ψ(αp + βp )] .
This leads to
(21)
X
X
XX
1−p
p(1 − q)
+ log
= 2t
Eq(p,q)
πi,a πj,a (Ai,j − λ)
πi,a πj,a Ai,j log
q(1
−
p)
1
−
q
a
a
i<j
i<j
= thA − λ1n 1Tn + λIn , ππ T i.
Similarly we can obtain
(22)
h
XX
Eq(p,q)
πi,a πj,b Ai,j log
a,b i<j
q
1−q
i
+ log(1 − q)
X
X
XX
q
= Eq(q) log
Ai,j
πi,a πj,b + Eq(q) log(1 − q)
πi,a πj,b
1−q
i<j
a,b
1
n
= [ψ(αq ) − ψ(βq )] kAk1 + [ψ(βq ) − ψ(αq + βq )] ,
2
2
i<j a,b
19
MEAN FIELD FOR COMMUNITY DETECTION
where we use the fact that kπi,· k1 = 1, ∀i ∈ [n]. Now consider the KullbackLeibler divergence between q(Z, p, q) and p(Z, p, q). Due to the independence
of p, q and {Zi,· }ni=1 in both distributions, we have
(23)
KL(q(Z, p, q)kp(Z, p, q)) = KL(q(Z)kp(Z)) + KL(q(p)kp(p)) + KL(q(q)kp(q))
n
h
i
X
pri
=
KL Categorical(πi,· )kCategorical(πi,·
)
i=1
+ KL Beta(αp , βp )kBeta(αppri , βppri ) + KL Beta(αq , βq )kBeta(αqpri , βqpri ) .
By Equations (19) - (23), we conclude with the desired result.
5.2. Proof of Theorem 2.2. Note that
" k
#
X
X
Bzi ,zj =
Zi,a Zj,a p +
Zi,a Zj,b q.
a=1
a6=b
We rewrite the joint distribution p(p, q, z, A) in Equation (7) as follows,
(24)
p(p, q, Z, A)
" n
#
Pk
Pk
Y pri
Y
Y
Z
Z
Z
Z
i,a j,b
i,a j,a
q Ai,j (1 − q)1−Ai,j a6=b
=
πi,zi
pAi,j (1 − p)1−Ai,j a=1
i=1
×
"
i<j
Γ(αppri + βppri )
Γ(αppri )Γ(βppri )
p
αpri
p −1
(1 − p)
βppri −1
#"
i<j
Γ(αqpri + βqpri )
Γ(αqpri )Γ(βqpri )
q
αpri
q −1
(1 − q)
βqpri −1
#
.
Updates on p and q. From Equation (24), p has conditional probability as
"
#
P
pri
pri
Y
k
pri
pri
Γ(α
+
β
)
Z
Z
p
p
i,a
j,a
pαp −1 (1 − p)βp −1 .
p(p|q, Z, A) ∝
pAi,j (1 − p)1−Ai,j a=1
pri
pri
Γ(α
)Γ(β
)
p
p
i<j
Then the CAVI update in Equation (4) leads to
q̂(p) ∝ exp Eq(q,Z) log p(p|q, Z, A)
"
#
k
pri
pri
XX
A
pri
pri
Γ(α
+
β
)
p
p
∝ exp Eq(Z)
Zi,a Zj,a log p i,j (1 − p)1−Ai,j
pαp −1 (1 − p)βp −1
pri
pri
Γ(α
)Γ(β
)
p
p
i<j a=1
"
#
k
pri
pri
XX
A
pri
pri
Γ(α
+
β
)
p
p
= exp
πi,a πj,a log p i,j (1 − p)1−Ai,j
pαp −1 (1 − p)βp −1 .
pri
pri
Γ(α
)Γ(β
)
p
p
i<j a=1
20
ZHANG, ZHOU
It can be written as
"
#
h P Pk
i Γ(αpri + β pri ) pri
P
Pk
pri
p
p
q̂(p) ∝ p i<j a=1 πi,a πj,a Ai,j (1 − p) i<j a=1 πi,a πj,a (1−Ai,j )
pαp −1 (1 − p)βp −1 .
Γ(αppri )Γ(βppri )
The distribution of p is still Beta p ∼ Beta(αp0 , βp0 ), with
αp0
=
αppri
+
k
XX
πi,a πj,a Ai,j , and
βp0
i<j a=1
=
βppri
+
k
XX
i<j a=1
πi,a πj,a (1 − Ai,j ).
Similar analysis on q yields updates on αq0 and βq0 . Hence, its proof is omitted.
Updates on {Zi,· }ni=1 .
Zi,· is
From Equation (24), the conditional distribution on
pri
p(Zi,· |Z−i,· , p, q, A) ∝ πi,z
i
Y Ai,j
Bzi ,zj (1 − Bzi ,zj )1−Ai,j .
j6=i
Consequently, up to a constant not depending on i, we have
log P(Zi,a = 1|Z−i,· , p, q, A)
X
pri
= log πi,a + log
Zj,a Ai,j log
XX
q
p
+ log(1 − p) +
Zj,b Ai,j log
+ log(1 − q)
1−p
1−q
j6=i b6=a
j6=i
X
X
1
−
q
q
p(1
−
q)
pri
= log πi,a
− log
+
Ai,j log
+ log(1 − q) .
+ log
Zj,a Ai,j log
q(1 − p)
1−p
1−q
j6=i
j6=i
Then the CAVI update from Equation (4) leads to
0
πi,a
= q̂Zi,· (Zi,a = 1)
∝ exp Eq(p,q,z−i ) log P(Zi,a = 1|Z−i,· , p, q, A)
h
i
= exp Eq(p) Eq(q) Eq(Z−i,· ) log P(Zi, = 1|Z−i,· , p, q, A)
X
1
−
q
p(1
−
q)
pri
,
(25)
− log
∝ πi,a
exp Eq(p) Eq(q)
πj,a Ai,j log
q(1 − p)
1−p
j6=i
where we use the property that p, q, Z are all independent of each other
under q. Recall that p ∼ Beta(αp , βp ) and q ∼ Beta(αq , βq ). It can be shown
that
p
Eq(p) log
= ψ(αp ) − ψ(βp ), and Eq(p) log(1 − p) = ψ(βp ) − ψ(αp + βp ),
1−p
MEAN FIELD FOR COMMUNITY DETECTION
21
where ψ(·) is digamma function. Similar results hold for Eq(q) log(q/(1 − q))
and Eq(q) log(1 − q). Plug in these expectations to Equation (25), we have
X
pri
0
πi,a
∝ πi,a
exp 2t
πj,a (Ai,j − λ) .
j6=i
5.3. Proof of Theorem 3.1. Theorem 3.1 gives a theoretical justification
for all iterations in the BCAVI algorithm. Due to the limit of pages, in this
section we assume `(π (0) , Z ∗ ) = o(n̄min ). The proof of the case `(π (0) , Z ∗ )
in a constant order of n̄min is essentially the same with slight modification,
and we defer it to Section B.1 in the supplemental material.
To prove the theorem, it is sufficient if we are able to show the loss `(·, Z ∗ )
decreases in a desired way for one BCAVI iteration, when the community
assignment is in an appropriate neighborhood of the truth. Let γ = o(1) be
any sequence that goes to zero when n grows. Define t∗ and λ∗ as the true
counterparts of t and λ, by
t∗ =
1
p∗ (1 − q ∗ )
1
1 − q∗
∗
log ∗
,
and
λ
=
log
.
2
q (1 − p∗ )
2t∗
1 − p∗
The proof of Theorem 3.1 involves three parts as follows.
Part One: One Iteration. Consider any π ∈ Π1 such that kπ − Z ∗ k1 ≤
γ n̄min . Let η 0 be any sequence such that η 0 = o(1). Consider any t and λ
with |t−t∗ | ≤ η 0 (p∗ −q ∗ )/p∗ and |λ−λ∗ | ≤ η 0 (p∗ −q ∗ ). We define F to be the
event, that after applying the mapping ht,λ (·), there exists some η = o(1)
such that
kπ − Z ∗ k1
kht,λ (π) − Z ∗ k1 ≤ n exp(−(1 − η)n̄min I) + p
,
nI/[wk[n/n̄min ]2 ]
holds uniformly over all the eligible π, t and λ. We have
1
P(F) ≥ 1 − exp[−(n̄min I) 2 )] − n−r ,
for some constant r > 0. We defer its proof to the later part of this section.
Part Two: Consistency of Model Parameters. Consider any π ∈ Π1
such that kπ − Z ∗ k1 ≤ γ n̄min . Define
(26)
αp = αppri +
k X
X
a=1 i<j
Ai,j πi,a πj,a ,
βp = βppri +
k X
X
(1 − Ai,j )πi,a πj,a ,
a=1 i<j
22
ZHANG, ZHOU
and
(27)
αq = αqpri +
XX
Ai,j πi,a πj,b ,
a6=b i<j
βq = βqpri +
XX
(1 − Ai,j )πi,a πj,b ,
a6=b i<j
and consequently,
(28)
(29)
1
[[ψ(αp ) − ψ(βp )] − [ψ(αq ) − ψ(βq )]]
2
1
λ=
[[ψ(βq ) − ψ(αq + βq )] − [ψ(βp ) − ψ(αp + βp )]] .
2t
t=
From Lemma C.1, we have a concentration of t, λ towards t∗ , λ∗ . That is,
there exists some η 0 = o(1), such that with probability at least 1 − e3 5−n ,
the following inequalities hold
|t − t∗ | ≤ η 0 (p∗ − q ∗ )/p∗ , and |λ − λ∗ | ≤ η 0 (p∗ − q ∗ ),
uniformly over all the eligible π.
Part Three: Multiple Iterations. Consider any π ∈ Π1 such that kπ − Z ∗ k1 ≤
γ n̄min . Define αp , βp , αq , βq , t, λ as Equations (26) - (29). A combination of
results from Part One and Part Two immediately implies that
(30)
kπ − Z ∗ k1
,
kht,λ (π) − Z ∗ k1 ≤ n exp(−(1 − η)n̄min I) + p
nI/[wk[n/n̄min ]2 ]
1
holds uniformly over all the eligible π with probability at least 1−exp[−(n̄min I) 2 )]−
n−r . This is sufficient to show Theorem 3.1.
The only thing left to be proved, the most critical part towards the proof
of Theorem 3.1, is the claim we made in Part One. We are going to prove
the claim as follow.
Proof Sketch of Part One. The error associated with the [ht,λ (π)]i,· is a
function of π and Ai,· . It can be decomposed into a summation of two terms,
one only involves the ground truth Z ∗ and the other involves the deviation
π − Z ∗ . That is,
∗
[ht,λ (π)]i,· − Zi,·
1
≤ fi,1 (Z ∗ , Ai,· ) + fi,2 (π − Z ∗ , Ai,· ).
23
MEAN FIELD FOR COMMUNITY DETECTION
Consequently,
(31)
∗
kht,λ (π) − Z k1 ≤
n
X
|i=1
∗
fi,1 (Z , Ai,· ) +
{z
involves
Z∗
}
n
X
|i=1
fi,2 (π − Z ∗ , Ai,· ) .
{z
involves π−Z ∗
}
With a proper choice of f·,1 and f·,2 , the first term on the RHS of Equation
(31) leads to the minimax rate n exp(−(1 − η)n̄min I). Up to a constant not
dependent on π, Z ∗ or A, the second term can be written as
n
X
i=1
fi,2 (π − Z ∗ , Ai,· ) .
X
a
∗ T
∗
(π·,a − Z·,a
) (A − EA)(A − EA)T (π·,a − Z·,a
).
In this way it is all about the random
EA and
P there exist∗ sharp
P matrix A −
∗ 2 ≤
bounds on kA − EAkop . Note that a π·,a − Z·,a
a π·,a − Z·,a 1 ≤
∗
kπ − Z k1 . The second term ends up being upper bounded by kπ − π ∗ k1
multiplied by a coefficient factor.
Proof of Part One. Denote z = r−1 (Z ∗ ). By the definition of ht,λ (·) in
Equation (11), we have
h P
i
P
pri
2 a6=zi πi,a
exp 2t j6=i πj,a (Ai,j − λ)
∗
i
h P
≤ P
[ht,λ (π)]i,· − Zi,·
1
pri
π
π
(A
−
λ)
exp
2t
j,a
i,j
a i,a
j6=i
X
X
≤ 2w
1 ∧ exp 2t
(πj,a − πj,zi )(Ai,j − λ) .
a6=zi
j6=i
Define f (x) = 1 ∧ exp(−x). It can be shown P
that for any x0 < 0 and any
integer m ≥ 1 we have f (x) ≤ exp(x0 ) + m−1
l=0 exp(lx0 /m)I{x ≥ (l +
1)x0 /m}, which can be seen as a stepwise approximation of theP
continuous
function f (x). By taking x0 = −(na +nzi )I/2 and letting x = 2t j6=i (πj,a −
πj,zi )(Ai,j − λ), we have
"
m−1
X
X
(n
+
n
)I
l(na + nzi )I
a
zi
∗
[ht,λ (π)]i,· − Zi,· 1 ≤ 2w
exp −
+ 2w
exp −
2
2m
a6=zi
l=0
#
X X
(l + 1)(na + nzi )I
×
I 2t
(πj,a − πj,zi )(Ai,j − λ) ≥ −
.
2m
a6=zi
j6=i
We choose some m → ∞ slowly such that
(32)
m = o(n̄min I) and m = o([wnI/[k[n/n̄min ]2 ]1/4 ).
24
ZHANG, ZHOU
Thus, we have
∗
kht,λ (π) − Z k1 ≤ 2wnk exp(−n̄min I) + 2w
m−1
k X
XX
l=0 a=1 b6=a
"
l(na + nb )I
exp −
2m
#
X X
(l + 1)(na + nb )I
×
I
(πj,a − πj,b )(Ai,j − λ) ≥ −
4mt
(33)
i:zi =b
j6=i
where we use the fact that mina6=b (na + bb )/2 ≥ n̄min .
The key to the rest of the analysis is to understand
Equation (33) through
P
the decomposition of the critical quantity j6=i (πj,a − πj,b )(Ai,j − λ). We
will show for any pair of a, b ∈ [k] such that a 6= b, and any i ∈ [n] such
that zi = b, it is equal to a summation of two terms: one only involves the
ground truth Z ∗ , and the other involves the deviation π − Z ∗ . The former
remains steady along iterations and contributes to the minimax rate, while
the latter needs to be connected with the error kπ − Z ∗ k1 .
∗ + Z∗ −
Let θa,b be a vector of length n such that [θa,b ]j = πj,a − Zj,a
j,b
πj,b , ∀j ∈ [n]. Then we have
(34)
X
X
X
∗
∗
∗
∗
− πj,b )(Ai,j − λ)
)(Ai,j − λ) +
(πj,a − Zj,a
+ Zj,b
(πj,a − πj,b )(Ai,j − λ) =
(Zj,a
− Zj,b
j6=i
j6=i
j6=i
=
X
j6=i
=
X
j6=i
|
∗
∗
)(Ai,j
(Zj,a
− Zj,b
∗
(Zj,a
−
∗
Zj,b
)(Ai,j
X
− λ) +
(Ai,j − λ)[θa,b ]j
j6=i
− λ) + (Ai,· − EAi,· )θa,b +
{z
}
involves Z ∗
|
With the help of Equation (34), Equation (33) can be written as
kht,λ (π) − Z ∗ k1
≤ 2wnk exp(−n̄min I) + 2w
m−1
k X
XX
l=0 a=1 b6=a
"
l(na + nb )I
exp −
2m
X
(EAi,j − λ)[θa,b ]j .
j6=i
{z
involves π−Z ∗
#
X X
X
(l
+
3/2)(n
+
n
)I
a
b
∗
∗
×
I
(Zj,a
− Zj,b
)(Ai,j − λ) ≥ −
−
(EAi,j − λ)[θa,b ]j
4mt
i:zi =b
j6=i
j6=i
"" m−1
#
#
k X
X
X
X
l(na + nb )I
n̄min I
+ 2w
exp −
×
I (Ai,· − EAi,· )θa,b ≥
.
2m
4mt
a=1 b6=a
l=0
i:zi =b
}
MEAN FIELD FOR COMMUNITY DETECTION
Equations (18) and (32) imply
we have
Pm−1
l=0
exp [−l(na + nb )I/(2m)] ≤ 2. Thus,
kht,λ (π) − Z ∗ k1 ≤ 2wnk exp(−n̄min I) + 2wLsum
+ 4wLsum
,
| {z1 }
| {z2 }
involves Z ∗
where
25
Lsum
1
,
m−1
k X
XX
l=0 a=1 b6=a
l(na + nb )I
exp −
2m
X
involves π−Z ∗
L1,i (a, b, l),
i:zi =b
P
∗
∗
with
L
(a,
b,
l)
,
I[
1,i
j6=i (Zj,a −Zj,b )(Ai,j −λ) ≥ −(l+3/2)(na +nb )I/(4mt)−
P
j6=i (EAi,j − λ)[θa,b ]j ], and
Lsum
2
,
k X X
X
a=1 b6=a i:zi =b
I (Ai,· − EAi,· )θa,b
n̄min I
≥
.
4mt
In this way we turn kht,λ (π) − Z ∗ k1 into calculations on Lsum
and Lsum
1
2 ,
where the former only involves the ground truth Z ∗ and the latter only
involves the deviation π − Z ∗ .
and Lsum
as follows. Their proofs
We can obtain upper bounds on Lsum
1
2
are deferred to the end of this section.
00
• For Lsum
1 , there exists a sequence η = o(1) such that with probability
1
at least 1 − exp[−2(n̄min I) 2 ], we have
(35)
Lsum
≤ nmk exp −(1 − 2η 00 )n̄min I .
1
• For Lsum
2 , there exist constants c and r such that with probability at
least 1 − n−r − exp(−5np∗ ), we have
(36)
Lsum
≤
2
cknp∗ kπ − Z ∗ k1 cn2 kp∗ exp(−5np∗ )
+
.
(n̄min I/(mt∗ ))2
n̄min I/(mt∗ )
Thus, we have
kht,λ (π) − Z ∗ k1 ≤ 2wnk exp(−n̄min I) + 2wnmk exp −(1 − 2η 00 )n̄min I
+
4cwknp∗ kπ − Z ∗ k1 4cwkn2 p∗ exp(−5np∗ )
,
+
(n̄min I/(mt∗ ))2
n̄min I/(mt∗ )
1
with probability at least 1−exp[−2(n̄min I) 2 ]−n−r −exp(−5np∗ ). By Propositions C.2 and C.3, we have p∗ t∗2 I. Then due to Equation (32), we have
"
#
wknp∗
1
n 2 k
2
wm
=o p
,
(n̄min I/(mt∗ ))2
n̄min nI
nI/[wk[n/n̄min ]2 ]
26
ZHANG, ZHOU
and
√ ∗
n
wkn2 p∗ exp(−5np∗ )
np
n exp(−5np∗ ) ≤ n exp(−5n̄min I).
wmk √
n̄min I/(mt∗ )
n̄
nI
min
1
Thus, with probability at least 1 − exp[−(n̄min I) 2 ] − n−r , there exists some
η = o(1), such that
kπ − Z ∗ k1
kht,λ (π) − Z ∗ k1 ≤ n exp(−(1 − η)n̄min I) + p
.
nI/[wk[n/n̄min ]2 ]
The proof for Part One is complete. The very last thing remained to be
obtained is upper bounds on Lsum
and Lsum
1
2 , i.e., Equations (35) and (36).
Recall the definition of θa,b . We have some properties on θa,b which will be
useful in the analysis for Lsum
and Lsum
1
2 : kθa,b k∞ ≤ 2 and
(37)
∗
kθa,b k1 ≤ π·,a − Z·,a
1
∗
+ π·,b − Z·,b
1
≤ kπ − Z ∗ k1 ≤ γ n̄min ,
1
≤ 2k kπ − Z ∗ k1 .
and
(38)
k X
X
a=1 b6=a
kθa,b k1 ≤ 2k
1. Bounds on Lsum
1 .
X
a
∗
π·,a − Z·,a
By applying Markov inequality, we have
EL1,i (a, b, l)
∗
X
X
t (l + 3/2)(na + nb )I
∗
∗
− t∗
(EAi,j − λ)[θa,b ]j
= P t∗
(Zj,a
− Zj,b
)(Ai,j − λ) ≥ −
4mt
j6=i
j6=i
∗
X
t (l + 3/2)(na + nb )I
∗
∗
≤ exp
+ t∗ (EAi,j − λ1Tn )θa,b E exp t∗
(Zj,a
− Zj,b
)(Ai,j − λ) .
4mt
j6=i
With the help of Proposition C.1, we have
X
∗
∗
)(Ai,j − λ)
E exp t∗
(Zj,a
− Zj,b
j6=i
= exp(−t∗ (λ − λ∗ )(na − nb )) exp(−t∗ λ∗ (na − nb ))
Y
j6=i
∗
∗
E exp(t∗ (Zj,a
− Zj,b
)Ai,j )
b
na −n
tX
2
b
tX −tY na +n
Ee
∗
∗
−tλ
2
= exp(−t (λ − λ )(na − nb )) e
Ee
Ee
Ee−tY
(na + nb )I
= exp(−t∗ (λ − λ∗ )(na − nb )) exp −
.
2
27
MEAN FIELD FOR COMMUNITY DETECTION
Hence
(39)
ELsum
1
m−1
k X
XX
"
X
t∗ (l
+ 3/2)(na + nb )I
+ t∗
(EAi,j − λ)[θa,b ]j
4mt
l=0 a=1 b6=a
j6=i
#
(na + nb )I
× exp(−t∗ (λ − λ∗ )(na − nb )) exp −
2
m−1
k X
t∗ (l+3/2)
l
XX
X
(1 + m − 2mt )(na + nb )I
≤
exp −
(EAi,j − λ)[θa,b ]j .
− t∗ (λ − λ∗ )(na − nb ) + t∗
2
=
exp −
l(na + nb )I
exp
2m
l=0 a=1 b6=a
j6=i
We are going to show −(1−η 00 )n̄min I upper bounds terms in the exponent of
RHS of Equation (39) by some η 00 = o(1). We first present some properties
of λ∗ , t∗ and I that will be helpful:
(40)
(41)
(42)
I (p∗ − q ∗ )2 /p∗ ,
λ∗ ∈ (q ∗ , p∗ ),
and t∗ (p∗ − q ∗ )/p∗ .
Here Equations (40) and (41) are proved by Propositions C.2 and C.3 respectively. Equation (42) is due to t∗ log(1 + (p∗ − q ∗ )/q ∗ ) (p∗ − q ∗ )/p∗
under the assumption that p∗ , q ∗ = o(1), p∗ q ∗ .
The first term in the exponent of Equation (39) is upper bounded by
−(1 − 7/(8m))n̄min I by the assumption t∗ /t = 1 + o(1). Since |t∗ (λ − λ∗ )| ≤
η 0 t∗ (p∗ − q ∗ ), by Equations (40) and (42) the second term is upper bounded
by η 0 n̄min I up to a constant factor. For the last term in the exponent of
Equation (39), since |λ − λ∗ | ≤ η 0 (p∗ − q ∗ ) we have
t∗
X
X
X
(λ∗ − λ)[θa,b ]i
(EAi,j − λ)[θa,b ]i ≤ t∗
(EAi,j − λ∗ )[θa,b ]i + t∗
j6=i
j6=i
j6=i
0
∗
∗
∗
≤ (1 + η )t (p − q ) kθa,b k1
≤ (1 + η 0 )t∗ (p∗ − q ∗ )γ n̄min
. γ n̄min I,
where we use Equations (37) and (40) - (42).
As a consequence, there exists a sequence η 00 = o(1) that goes to zero
slower than m−1 , γ, η 0 , such that the summation of three terms in the exponent of the RHS of Equation (39) is upper bounded by −(1 − η 00 )n̄min I.
28
ZHANG, ZHOU
Thus, Equation (39) can be written as
ELsum
≤ nmk exp −(1 − η 00 )n̄min I .
1
1
Since η 00 goes to 0 slower than m−1 , we have η 00 ≥ m−1 ≥ (n̄min I) 4 by
Equation (32). Then by applying Markov inequality, we have
h
i
1
P Lsum
≥ nmk exp −(1 − 2η 00 )n̄min I ≤ exp −η 00 n̄min I ≤ exp −2(n̄min I) 2 .
1
1
That is, with probability at least 1 − exp[−2(n̄min I) 2 ], Equation (35) holds.
2. Bounds on Lsum
2 . Depending on whether the network is dense or sparse,
we consider two scenarios.
(1) Dense Scenario: q ∗ ≥ (log n)/n. In this scenario, we have a sharp
bound on kA − EAkop . First we observe that
X
X
T
[(Ai,· − EAi,· )θa,b ]2 = θa,b
[(Ai,· − EAi,· )T (Ai,· − EAi,· )]θa,b
i:zi =b
i:zi =b
≤
=
T
θa,b
X
i
T
θa,b [(A
[(Ai,· − EAi,· )T (Ai,· − EAi,· )]θa,b
− EA)T (A − EA)]θa,b .
By applying Markov inequality, we have
Lsum
≤
2
k X T
X
θa,b [(A − EA)T (A − EA)]θa,b
a=1 b6=a
(n̄min I/(4mt))2
.
Since kθa,b k∞ ≤ 2, we have kθa,b k2 ≤ 2 kθa,b k1 . Lemma C.3 shows kA −
√
EAkop ≤ c1 np holds with probability at least 1 − n−r for some constants
c1 , r > 0. Together with Equation (38), we have
k X
X
a=1 b6=a
T
θa,b
[(A
T
− EA) (A − EA)]θa,b ≤
≤
k X
X
a=1 b6=a
k X
X
a=1 b6=a
kA − EAk2op kθa,b k2
2c1 np kθa,b k1
≤ 4c1 knp kπ − Z ∗ k1 .
Thus, with probability at least 1 − n−r ,
Lsum
≤
2
4c1 knp kπ − Z ∗ k1
.
(n̄min I/(4mt))2
29
MEAN FIELD FOR COMMUNITY DETECTION
(2) Sparse Scenario: q ∗ < (log n)/n. When the network is sparse, the
previous upper bound on kA − EAkop no longer holds. Instead, removing
nodes with large degrees is required
to yield provably sharp bound on kA −
P
EAkop . Define S = {i ∈ [n], j Ai,j ≥ 20np∗ }. We define Ã, P̃ such that
Ãi,j = Ai,j I{i, j ∈
/ S} and P̃i,j = (EAi,j )I{i, j ∈
/ S}. Then we have the
decomposition as
X
n̄min I
L2 (a, b) ,
I (Ai,· − EAi,· )θa,b ≥
4mt
i:zi =b
X
n̄min I
≤
I (Ãi,· − P̃i,· )θa,b ≥
8mt
i:zi =b
X
X
n̄min I
+
I (Ai,j − EAi,j )[θa,b ]i,j I{i ∈ S or j ∈ S} ≥
8mt
i:zi =b
j6=i
, L2,1 (a, b) + L2,2 (a, b).
Pk P
Define Lsum
2,1 ,
a=1
b6=a L2,1 (a, b). We have
Lsum
2,1
≤
k X T
X
θa,b [(Ã − P̃ )T (Ã − P̃ )]θa,b
(n̄min I/(8mt))2
a=1 b6=a
≤
k X
X
2kà − P̃ k2op kθa,b k
a=1 b6=a
(n̄min I/(8mt))2
1
.
√
Lemma C.4 shows kÃ− P̃ kop ≤ c2 np holds with probability at least 1−n−1
for some constant c2 > 0. Then we have
Lsum
2,1 ≤
4c2 knp kπ − Z ∗ k1
.
(n̄min I/(8mt))2
P
Lemma C.5 shows i,j |Ai,j − EAi,j |I{i ∈ S} ≤ 20n2 p∗ exp(−5np∗ ) holds
with probability at least 1 − exp(−5np∗ ). Then by applying Markov inequality, we have
k
X
X
Lsum
L2,2 (a, b)
2,2 ,
a=1
≤
≤
≤
k
X
b6=a
n
X
a=1 i,j=1
k
X
4
a=1
|Ai,j − EAi,j ||[θa,b ]i,j |I{i ∈ S or j ∈ S}
n̄min I/(8mt)
P
i,j
|Ai,j − EAi,j |I{i ∈ S}
n̄min I/(8mt)
80n2 kp∗ exp(−5np∗ )
.
n̄min I/(8mt)
30
ZHANG, ZHOU
As a consequence, we have
sum
Lsum
≤ Lsum
2
2,1 + L2,2 ≤
4c2 knp∗ kπ − Z ∗ k1 80n2 kp∗ exp(−5np∗ )
+
,
(n̄min I/(8mt))2
n̄min I/(8mt)
with probability at least 1 − n−1 − exp(−5np∗ ). By the bounds on Lsum
and
1
∗ = 1 + o(1), we obtain Equation (36).
Lsum
,
and
due
to
t/t
2
SUPPLEMENTARY MATERIAL
Supplement A: Supplement to “Theoretical and Computational
Guarantees of Mean Field Variational Inference for Community
Detection”
(url to be specified). In the supplement [36], we provide the detailed implementations of the batched Gibbs sampling and an iterative algorithm for
MLE in Algorithm 2 and Algorithm 3 respectively. We include proof of Theorem 4.1, Theorem 4.2 and Theorem 4.3. We also include all the auxiliary
propositions and lemmas in the supplement.
References.
[1] Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. Mixed
membership stochastic blockmodels. Journal of Machine Learning Research, 9(Sep):
1981–2014, 2008.
[2] Matthew James Beal. Variational algorithms for approximate Bayesian inference.
University of London, 2003.
[3] Peter Bickel, David Choi, Xiangyu Chang, and Hai Zhang. Asymptotic normality
of maximum likelihood and its variational approximation for stochastic blockmodels.
The Annals of Statistics, 41(4):1922–1943, 2013.
[4] Peter J Bickel and Aiyou Chen. A nonparametric view of network models and
Newman-Girvan and other modularities. Proceedings of the National Academy of
Sciences, 106(50):21068–21073, 2009.
[5] Christopher M Bishop. Pattern recognition and machine learning. springer, 2006.
[6] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation.
Journal of machine Learning research, 3(Jan):993–1022, 2003.
[7] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review
for statisticians. Journal of the American Statistical Association, (just-accepted),
2017.
[8] Alain Celisse, Jean-Jacques Daudin, and Laurent Pierre. Consistency of maximumlikelihood and variational estimators in the stochastic block model. Electronic Journal
of Statistics, 6:1847–1899, 2012.
[9] Peter Chin, Anup Rao, and Van Vu. Stochastic block model and community detection
in sparse graphs: A spectral algorithm with optimal rate of recovery. In COLT, pages
391–423, 2015.
[10] Yingjie Fei and Yudong Chen. Exponential error rates of SDP for block models:
Beyond Grothendieck’s inequality. arXiv preprint arXiv:1705.08391, 2017.
[11] Chao Gao, Aad W van der Vaart, and Harrison H Zhou. A general framework for
bayes structured linear models. arXiv preprint arXiv:1506.02174, 2015.
MEAN FIELD FOR COMMUNITY DETECTION
31
[12] Chao Gao, Zongming Ma, Anderson Y Zhang, and Harrison H Zhou. Achieving optimal misclassification proportion in stochastic block model. The Journal of Machine
Learning Research, 18(60):1–45, 2017.
[13] Alan E Gelfand and Adrian FM Smith. Sampling-based approaches to calculating
marginal densities. Journal of the American statistical association, 85(410):398–409,
1990.
[14] Agnieszka Grabska-Barwińska, Simon Barthelmé, Jeff Beck, Zachary F Mainen,
Alexandre Pouget, and Peter E Latham. A probabilistic approach to demixing odors.
Nature neuroscience, 20(1):98–106, 2017.
[15] Alexandre Grothendieck. Résumé de la théorie métrique des produits tensoriels
topologiques. Resenhas do Instituto de Matemática e Estatı́stica da Universidade
de São Paulo, 2(4):401–481, 1996.
[16] Olivier Guédon and Roman Vershynin. Community detection in sparse networks via
Grothendiecks inequality. Probability Theory and Related Fields, 165(3-4):1025–1049,
2016.
[17] Jake M Hofman and Chris H Wiggins. Bayesian approach to network modularity.
Physical review letters, 100(25):258701, 2008.
[18] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic
blockmodels: First steps. Social networks, 5(2):109–137, 1983.
[19] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul.
An introduction to variational methods for graphical models. Machine learning, 37
(2):183–233, 1999.
[20] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional
by model selection. Annals of Statistics, pages 1302–1338, 2000.
[21] Jing Lei and Alessandro Rinaldo. Consistency of spectral clustering in stochastic
block models. The Annals of Statistics, 43(1):215–237, 2015.
[22] Percy Liang, Slav Petrov, Michael I Jordan, and Dan Klein. The infinite PCFG using
hierarchical dirichlet processes. In EMNLP-CoNLL, pages 688–697, 2007.
[23] Yu Lu and Harrison H Zhou. Statistical and computational guarantees of Lloyd’s
algorithm and its variants. arXiv preprint arXiv:1612.02099, 2016.
[24] Elchanan Mossel, Joe Neeman, and Allan Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
[25] Mark EJ Newman. Modularity and community structure in networks. Proceedings
of the national academy of sciences, 103(23):8577–8582, 2006.
[26] William D Penny, Nelson J Trujillo-Barreto, and Karl J Friston. Bayesian fMRI time
series analysis with spatial priors. NeuroImage, 24(2):350–362, 2005.
[27] Zahra S Razaee, Arash A Amini, and Jingyi Jessica Li. Matched bipartite block
model with covariates. arXiv preprint arXiv:1703.04943, 2017.
[28] Christian P Robert. Monte carlo methods. Wiley Online Library, 2004.
[29] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral clustering and the highdimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.
[30] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families,
and variational inference. Foundations and Trends R in Machine Learning, 1(1–2):
1–305, 2008.
[31] Bo Wang and DM Titterington. Convergence properties of a general algorithm for
calculating variational bayesian estimates for a normal mixture model. Bayesian
Analysis, 1(3):625–650, 2006.
[32] Yixin Wang and David M Blei. Frequentist consistency of variational bayes. arXiv
preprint arXiv:1705.03439, 2017.
[33] Ted Westling and Tyler H McCormick. Establishing consistency and improving un-
32
ZHANG, ZHOU
certainty estimates of variational inference through M-estimation. arXiv preprint
arXiv:1510.08151, 2015.
[34] Chong You, John T Ormerod, and Samuel Müller. On variational bayes estimation
and variational information criteria for linear regression models. Australian & New
Zealand Journal of Statistics, 56(1):73–87, 2014.
[35] Anderson Y Zhang and Harrison H Zhou. Minimax rates of community detection in
stochastic block models. The Annals of Statistics, 44(5):2252–2280, 2016.
[36] Anderson Y Zhang and Harrison H Zhou. Supplement to “theoretical and computational guarantees of mean field variational inference for community detection”. 2017.
SUPPLEMENT TO “THEORETICAL AND COMPUTATIONAL
GUARANTEES OF MEAN FIELD VARIATIONAL INFERENCE
FOR COMMUNITY DETECTION”
BY Anderson Y. Zhang and Harrison H. Zhou
Yale University
APPENDIX A: ADDITIONAL ALGORITHMS
In this section, we provide the detailed implementations of the batched
Gibbs sampling and an iterative algorithm of MLE for community detection.
A.1. Batched Gibbs Sampling.
Algorithm 2: Batched Gibbs Sampling
1
Input: Adjacency matrix A, number of communities k, hyperparameters
π pri , αppri , βppri , αqpri , βqpri , some initializers Z (0) , number of iterations S.
Output: Gibbs sampling Ẑ, p̂, q̂.
for s = 1, 2, . . . , S do
(s)
(s)
(s)
(s)
Update αp , βp , αq , βq by
αp(s) = αppri +
k X
X
(s−1)
, βp(s) = βppri +
k X
X
(s−1) (s−1)
(1 − Ai,j )Zi,a Zj,a ,
(s−1)
, βq(s) = βqpri +
XX
(s−1) (s−1)
(1 − Ai,j )Zi,a Zj,b .
(s−1)
Zj,a
(s−1)
Zj,b
Ai,j Zi,a
a=1 i<j
a=1 i<j
αq(s) = αqpri +
XX
Ai,j Zi,a
a6=b i<j
a6=b i<j
(s)
2
(s)
(s)
(s)
Then generate p(s) ∼ Beta(αp , βp ) and q (s) ∼ Beta(αq , βq ) independently.
Define
t(s) =
p(s) (1 − q (s) )
1
log
,
2
(1 − p(s) )q (s)
and λ(s) =
1
1 − q (s)
log
.
(s)
2t
1 − p(s)
Then update π (s) with
π (s) = ht(s) ,λ(s) (Z (s−1) ),
where ht,λ (·) is defined as in Equation (11). Independently generate each row of
Z (s) from distributions
(s)
(s)
P(Zi,· = ea ) = πi,a , ∀a ∈ [k], ∀i ∈ [n].
3
end
We have ẑ = z (S) , p̂ = p(S) and q̂ = q (S) .
2
ZHANG, ZHOU
A.2. An Iterative Algorithm for Maximum Likelihood Estimation. We first define a mapping h0 : Π0 → Π0 as follows
X
[h0λ (Z)]i,a = I a = arg max
Zi,b (Ai,j − λ) .
(43)
b
j6=i
Here if the maximizer is not unique, we simply pick the smallest index.
Algorithm 3: An Iterative Algorithm for MLE
1
Input: Adjacency matrix A, number of communities k, some initializers z (0) ,
number of iterations S.
Output: Estimation Ẑ, p̂, q̂.
for s = 1, 2, . . . , S do
Update p(s) , q (s) by
Pk
p
(s)
P
a=1
= Pk
a=1
(s−1)
Zj,a
(s−1) (s−1)
Ai,j )Zi,a Zj,a
(s−1)
i<j
P
Ai,j Zi,a
i<j (1
−
P
Ai,j Zi,a
and
P
q
(s)
= P
a6=b
a6=b
2
(s−1)
i<j
P
i<j (1
(s−1)
Zj,b
(s−1)
− Ai,j )Zi,a
(s−1)
.
Zj,b
Define
t(s) =
p(s) (1 − q (s) )
1
log
,
2
(1 − p(s) )q (s)
and λ(s) =
1
1 − q (s)
log
.
(s)
2t
1 − p(s)
Then update π (s) with
Z (s) = h0λ(s) (Z (s−1) ),
where h0λ (·) is defined as in Equation (43).
3
end
We have ẑ = z (S) , p̂ = p(S) and q̂ = q (S) .
APPENDIX B: PROOFS OF OTHER THEOREMS
In this section, we first validate Theorem 3.1 when `(π (0) , π ∗ ) is in a
constant order of n̄min , which complements the proof presented in Section
5.3. The we give proofs of theorems stated in Section 4, including Theorem
4.1, Theorem 4.2 and Theorem 4.3.
B.1. Proof of Theorem 3.1 for the case `(π (0) , π ∗ ) in a constant
order of n̄min . For any π such that `(π, π ∗ ) ≤ cinit n̄min , we are going to
3
MEAN FIELD FOR COMMUNITY DETECTION
show when cinit is sufficiently small
(44)
`(π, Z ∗ )
`(ht,λ (π), Z ∗ ) ≤ n exp(−n̄min I/25) + p
,
2 nI/[wk[n/n̄min ]2 ]
with probability at least 1−exp(−n̄min I/10)−n−r for some constant r > 0. If
it holds, for any π (0) such that `(π (0) , Z ∗ ) = cn̄min for some
pconstant c ≤ cinit ,
(0)
∗
the term n exp(−n̄min I/25) is dominated by `(π , Z )/ nI/[wk[n/n̄min ]2 ]
which implies
`(π (1) , Z ∗ ) ≤ n exp(−(1 − η)/n̄min I) + p
`(π (0) , Z ∗ )
nI/[wk[n/n̄min ]2 ]
.
It also implies `(π (1) , Z ∗ ) = o(n̄min ), which means after the first iteration,
the results in Section 5.3 can be directly applied and the proof is complete.
The proof of Equation (44) mainly follows the proof of Part One in Section
5.3. We have
X
X
∗
[ht,λ (π)]i,· − Zi,·
≤ 2w
1 ∧ exp 2t
(πj,a − πj,zi )(Ai,j − λ) .
1
a6=zi
j6=i
Note that the inequality 1 ∧ exp(−x) ≤ f (x0 ) + I{x ≥ x0 } holds for any
x0 ≥ 0. By taking x0 = (na + nzi )I/4, we have
X
X
(n
+
n
)I
(n
+
n
)I
zi
a
zi
∗
exp − a
,
≤ 2w
+ I (πj,a − πj,zi )(Ai,j − λ) ≥ −
[ht,λ (π)]i,· − Zi,·
1
4
8t
a6=zi
j6=i
and consequently,
kht,λ (π) − Z ∗ k1 ≤ 2wnk exp(−n̄min I/2)
#
k X X X
X
(na + nzi )I
+ 2w
.
I
(πj,a − πj,b )(Ai,j − λ) ≥ −
8t
a=1 b6=a i:zi =b
j6=i
Define θa,b the same way as in Section 5.3, and by the same argument, we
have
k X X
X
n̄min I
∗
kht,λ (π) − Z k1 ≤ 2wnk exp(−n̄min I/2) + 2w
I (Ai,· − EAi,· )θa,b ≥
8t
a=1 b6=a i:zi =b
k X X X
X
(na + nb )I X
∗
∗
+ 2w
I
(Zj,a − Zj,b )(Ai,j − λ) ≥ −
−
(EAi,j − λ)[θa,b ]j .
4t
a=1 b6=a i:zi =b
j6=i
j6=i
4
ZHANG, ZHOU
From Lemma C.1, when cinit is sufficiently small, with probability at least
1 − e3 5−n we have
|λ − λ∗ |
|t − t∗ |
(45)
,
≤ 24c0 cinit .
max
(p∗ − q ∗ )/p∗ (p∗ − q ∗ )
Proposition C.3 shows that λ∗ ∈ (q ∗ + c(p∗ − q ∗ ), q ∗ + (1 − c)(p∗ − q ∗ )) for
some positive constant 0 < c < 1/2. Therefore, when cinit is sufficiently
small, we have λ ∈ (q ∗ , p∗ ). Thus,
X
j6=i
(EAi,j − λ)[θa,b ]j ≤ (p∗ − q ∗ ) kθa,b k1 ≤ (p∗ − q ∗ ) kπ − Z ∗ k1 ≤ cinit (p∗ − q ∗ )n̄min ,
where we use Equation (37). By Equations (40) - (42), it is smaller than
(na + nzi )/(8t) when cinit is sufficiently small. As a consequence, we have
∗
kht,λ (π) − Z k1 ≤ 2wnk exp(−n̄min I/2) + 2w
+ 2w
k X
X
a=1 b6=a i:zi =b
I (Ai,· − EAi,· )θa,b
n̄min I
≥
8t
X X
(na + nb )I
∗
∗
)(Ai,j − λ) ≥ −
(Zj,a
− Zj,b
I
.
8t
a=1 b6=a i:zi =b
Define Lsum
1
sum
and L2 =
k X X
X
Pk
=
Pk
j6=i
a=1
P
P
b6=a
P
P
i:zi =b I
hP
b6=a
i:zi =b I [(Ai,·
∗
j6=i (Zj,a
−
∗ )(A
Zj,b
i,j
i
− λ) ≥ −(na + nb )I/(8t)
− EAi,· )θa,b ≥ n̄min I/(8t)]. Our analysis on them is quite similar to that in Section 5.3. By Markov inequality,
k X X
X
X
∗
∗
)(Ai,j − λ) ≥ −t∗ (na + nb )I/(8t)
(Zj,a
− Zj,b
P t∗
ELsum
=
1
a=1
a=1 b6=a i:zi =b
≤
≤
k X X
X
a=1 b6=a i:zi =b
k X
X
X
a=1 b6=a i:zi =b
j6=i
exp
t∗ (n
X
a + nb )I
∗
∗
− t∗ (λ − λ∗ )(na − nb ) E exp t∗
(Zj,a
− Zj,b
)(Ai,j − λ∗ )
8t
j6=i
t∗ (na + nb )I
(na + nb )I
∗
∗
− t (λ − λ )(na − nb ) −
.
exp
8t
2
By Equations (40) - (42) and (45), when cinit is small enough, t∗ /t ≤ 2 and
t∗ |λ − λ∗ | ≤ I/6. Thus
ELsum
≤ nk exp(−n̄min I/12).
1
5
MEAN FIELD FOR COMMUNITY DETECTION
Hence, with probability at least 1 − exp(−n̄min I/24),
Lsum
≤ nk exp(−n̄min I/24).
1
For Lsum
we use the same argument as in Section 5.3 and obtain
2
Lsum
≤
2
4c2 knp∗ kπ − Z ∗ k1 80n2 kp∗ exp(−5np∗ )
+
,
(n̄min I/(8t))2
n̄min I/(8t)
with probability at least 1−n−r −exp(−5np∗ ) for some constants r, c1 , c2 > 0.
Recall that
kht,λ (π) − Z ∗ k1 ≤ 2wnk exp(−n̄min I/2) + 2wLsum
+ 2wLsum
1
2 .
Using the same argument as in Section 5.3, we conclude with
1
kπ − Z ∗ k1 ,
kht,λ (π) − Z ∗ k1 ≤ n exp(−n̄min I/25) + p
2 nI/[wk[n/n̄min ]2 ]
with probability at least 1 − exp(−n̄min I/10) − n−r .
B.2. Proof of Theorem 4.1. Define t∗ =
1
2t∗
∗
1
2
∗
∗
(1−q )
∗ =
log qp∗ (1−p
∗ ) and λ
1−q
log 1−p
∗ . By the same simplification we derive in Theorem 2.1, we have
π̂ MF = arg max f 0 (π; A),
π∈Π1
where
f 0 (π; A) = hA + λ∗ In − λ∗ 1n 1Tn , ππ T i −
n
1 X
pri
KL(Categorical(πi,· )kCategorical(πi,·
)).
t∗
i=1
Recall the definition of ht,λ (·) as in Equation (11). A key observation is
that π̂ MF = ht∗ ,λ∗ (π̂ MF ), otherwise if there exists some i ∈ [n] such that
MF . This indicates the implementation of CAVI
[ht∗ ,λ∗ (π̂ MF )]i,· not equal to π̂i,·
update on the i-th row of π will make change, leading to the decrease of
f 0 (·; A). This contradicts with the fact that π̂ MF is the global minimizer.
The fixed-point property of π̂ MF is the key to our analysis. It involves
three steps.
• Step One. For any π such that `(π, Z ∗ ) = o(n̄min ), by the same analysis
as in the proof of Theorem 3.1, we are able to show that there exist
constant r > 0 and sequence η = o(1) such that
kht∗ ,λ∗ (π) − Z ∗ k1 ≤ n exp(−(1 − η)n̄min I) + p
1
kπ − Z ∗ k1
nI/[wk[n/n̄min ]2 ]
with probability at least 1 − exp[−(n̄min I) 2 ] − n−r .
,
6
ZHANG, ZHOU
• Step Two. Lemma C.6 presents some loose upper bound for `(π̂ MF , Z ∗ ).
That is, under the assumption ρnI/[wk 2 [n/n̄min ]2 ] → ∞, with probability at least 1 − e3 5−n , we have
`(π̂ MF , Z ∗ ) ≤ o(n̄min ).
• Step Three. Using the property that ht∗ ,λ∗ (π̂ MF ) = π̂ MF , we have
π̂ MF − Z ∗
1
≤ n exp(−(1 − η)n̄min I) + p
π̂ MF − Z ∗
1
nI/[wk[n/n̄min ]2 ]
1
holds with probability at least 1 − exp[−(n̄min I) 2 ] − n−r . Then we
obtain the desired result by simple algebra.
B.3. Proof of Theorem 4.2. By law of total expectation, we have
(46)
EZ (s+1)
h
Z
(s+1)
−Z
∗
1
A, Z
(0)
i
h
i
(0)
(s+1)
∗
(s+1)
(0)
= Eπ(s+1) EZ (s+1) Z
A, Z
−Z
π
, A, Z
1
h
i
= Eπ(s+1) π (s+1) − Z ∗
A, Z (0) ,
1
where the first equation is due to that the conditional expectation of Z (s+1)
is π (s+1) . We are going to build the connection between π (s) and π (s+1) . In
Algorithm 2, there are intermediate steps between π (s) and π (s+1) as follows:
π (s)
Z (s)
(p(s+1) , q (s+1) ) → (t(s+1) , λ(s+1) ) → π (s+1) ,
where we use the plain right arrow (→) to indicate deterministic generation
and the curved right arrow ( ) to indicate random generation. Despite a
slight abuse of notation, we define π (0) = Z (0) .
Analogous to the proof of Theorem 3.1 in Section 5.3, we assume `(Z (0) , Z ∗ ) =
o(n̄min ). The proof for the case `(Z (0) , Z ∗ ) in the same order of n̄min is similar
and thus is omitted.
Let γ = o(1) be any sequence goes to 0 when n grows. We define a series
of events as follows:
• global event F: We define F exactly the same way as we define in the
proof of Theorem 3.1 in Section 5.3 with respect to sequences γ and
1
η 0 , and we have P(F) ≥ 1 − exp[−(n̄min I) 2 )] − n−r for some constant
r > 0. We have η 0 = o(1) whose value will be determined later.
7
MEAN FIELD FOR COMMUNITY DETECTION
• global event G: Consider any Z ∈ Π1 such that kZ − Z ∗ k1 ≤ γ n̄min .
Define
αp = αppri +
k X
X
Ai,j Zi,a Zj,a , βp = βppri +
a=1 i<j
αq = αqpri +
XX
k X
X
(1 − Ai,j )Zi,a Zj,a ,
a=1 i<j
Ai,j Zi,a Zj,b , βq = βqpri +
a6=b i<j
XX
a6=b i<j
(1 − Ai,j )Zi,a Zj,b .
Define G be the event that
αq
αp
∗
∗
−p ,
−q
≤ η 00 (p∗ − q ∗ )
max
αp + βp
αq + β q
holds uniformly over all the eligible Z for some sequence η 00 = o(1).
Then by the same analysis as in Lemma C.1, we have P(G) ≥ 1−e3 5−n .
(s)
(s)
• local events {H1 }Ss=1 : We define H1 = { π (s) − Z ∗ 1 ≥ γ n̄min /2}.
(s)
(s)
• local events {H2 }Ss=1 : We define H2 = { Z (s) − Z ∗
the conditional probability, we have
(s)
≤P
≥ γ n̄min }. For
(s)
P(H2 = 1|H1 = 0)
" n
X h (s)
∗
≤P
Zi,· − Zi,·
"
1
i=1
n h
X
(s)
Zi,·
i=1
Since π (s) − Z ∗
we have
(s)
P(H2
=
−
∗
Zi,·
1
−
∗
− Zi,·
−
(s)
πi,·
∗
Zi,·
−
1
1
i
i
≥ γ n̄min − π (s) − Z ∗
≥ γ n̄min /2
(s)
H1
#
1
(s)
H1
=0
≤ γ n̄min /2 given H1 = 0 by Bernstein inequality,
"
(γ n̄min )2 /8
= 0) ≤ exp − (s)
π − Z ∗ 1 + γ n̄min /6
≤ exp −3(γ n̄min )2 /16 .
(s)
#
=0
(s)
1
(s)
1|H1
(s)
1
(s)
πi,·
#
• local events {H3 }Ss=1 : We define H3 = {|t(s) −t∗ | ≥ η 0 (p∗ −q ∗ )/p∗ , or |λ(s) −
(s)
λ∗ | ≥ η 0 (p∗ − q ∗ )}. If the global event G holds and the local event H2
does not hold, we have
(
)
(s+1)
(s+1)
αp
αq
∗
∗
max
− p , (s+1)
−q
≤ η 00 (p∗ − q ∗ ).
(s+1)
(s+1)
(s+1)
αp
+ βp
αq
+ βq
8
ZHANG, ZHOU
P
P
(s) (s)
(s+1)
(s+1)
Note that αp
+ βp
= αppri + βppri + ka=1 i<j Zi,a Zj,a ≥ n2 /k.
Using the tail bound of Beta distribution (Lemma C.7) we are able to
show
"
#
(s+1)
α
p
(s)
P p(s+1) − (s+1)
≥ η 00 (p∗ − q ∗ ) H2 = 0, G = 1
(s+1)
αp
+ βp
∗
∗ 2
002 2 (p − q )
≤ exp −η n
2p∗
002 2
≤ exp −η n I/2 ,
where the last inequality is due to Proposition C.2. This leads to
h
i
(s)
P p(s+1) − p∗ ≥ 2η 00 (p∗ − q ∗ ) H2 = 0, G = 1 ≤ exp −η 002 n2 I/2 .
And similar result holds for q (s+1) . Then by the same analysis as in the
proof of Lemma C.1, max{|p(s+1) − p∗ |, |q (s+1) − q ∗ |} ≤ 2η 00 (p∗ − q ∗ )
leads to
(
)
∗
|t(s+1) − t∗ | |λ(s+1)−λ |
max
,
≤ 16c0 η 00 .
(p∗ − q ∗ )/p∗ p∗ − q ∗
By taking η 0 = 16c0 η 00 , we obtain
(s+1)
P(H3
(s)
= 1|H2 = 0, G = 1) ≤ 2 exp −η 002 n2 I/2 .
Note that events F and G are about the adjacency matrix A. The events
(s+1)
and H3
are for π (x) , Z (s) and (p(s+1) , q (s+1) ) respectively. With
all the above events defined, we can continue our analysis for Equation (46).
(s)
(s)
(s+1) C
Under the event F ∩ G ∩ (H1 ∪ H2 ∪ H3
) we have
(s)
(s)
H1 , H2
π (s+1) − Z ∗
(47)
1
≤ n exp(−(1 − η)n̄min I) + cn π (s) − Z ∗
1
,
where cn = [nI/[wk[n/n̄min ]2 ]]−1/2 . As a consequence, under the event F ∩
Q
(v)
(v)
(v+1) C
) , we have
G ∩ ( sv=0 H1 ∪ H2 ∪ H3
π (s+1) − Z ∗
1
≤ n exp(−(1 − 2η)n̄min I) + csn π (0) − Z ∗
1
.
Therefore, we have
(48)
Eπ(s+1)
h
i
(0)
H1 = 0, F = 1, G = 1 ≤ n exp(−(1 − 2η)n̄min I)
1
#
" s
Y (v)
(0)
(v)
(v+1)
H1 = 0, F = 1, G = 1 .
+ nP
H1 ∪ H 2 ∪ H 3
π (s+1) − Z ∗
+ csn π (0) − Z ∗
1
v=1
MEAN FIELD FOR COMMUNITY DETECTION
9
Due to the small value of cn , if π (s) − Z ∗ 1 ≤ γ n̄min , Equation (47) immediately implies π (s+1) − Z ∗ 1 ≤ γ n̄min . This implies that under the event
F ∪ G we have
(s+1)
H1
(s)
(s)
(s+1)
⊂ H1 ∪ H2 ∪ H3
, ∀s ≥ 0,
and consequently,
s
Y
v=0
(v)
(v)
(v+1)
H1 ∪ H 2 ∪ H 3
(0)
⊂ H1
s
Y
v=0
(v)
(v+1)
H2 ∪ H3
, ∀s ≥ 1.
Thus,
(49)
P
"
s
Y
v=0
≤P
≤
"
(v)
H1
s
Y
v=0
s
X
∪
(v)
H2
(v)
H2
∪
(v)
∪
(v+1)
H3
(v+1)
H3
(v)
(0)
H1
(0)
H1
= 0, F = 1, G = 1
#
= 0, F = 1, G = 1
P(H2 = 1|H1 = 0) +
v=0
n
X
(v+1)
P(H3
v=0
(v)
= 1|H2 = 0, G = 1)
≤ (s + 1) exp −3(γ n̄min )2 /16 + 2 exp −η 002 n2 I/2 .
(0)
#
1
Note that P(H1 = 0, F = 1, G = 1) ≥ 1−exp[−(n̄min I) 2 )]−n−r −e3 5−n −.
Recall we define π (0) = Z (0) . By Equations (46), (48) and (49), we have
h
i
EZ (s+1) Z (s+1) − Z ∗
A, Z (0) ≤ n exp(−(1 − 2η)n̄min I) + csn Z (0) − Z ∗ + (s + 1)nbn ,
1
1
1
−r
3 −n
2
with probability at least
1 − exp[−(n̄
002 2min I)
)] − n − e 5 − , where bn =
2
exp −3(γ n̄min ) /16 + 2 exp −η n I/2 .
B.4. Proof of Theorem 4.3. Note the similarity between Algorithm
3 and Algorithm 1. We can prove Theorem 4.3 with almost the identical
argument used in the proof of Theorem 3.1, thus omitted.
APPENDIX C: STATEMENTS AND PROOFS OF AUXILIARY
LEMMAS AND PROPOSITIONS
We include all the auxiliary propositions and lemmas in this section.
10
ZHANG, ZHOU
C.1. Statements and Proofs of Lemmas and Propositions for
Theorem 3.1.
Lemma C.1. Let cinit be some sufficiently small constant. Consider any
π ∈ Π1 such that kπ − Z ∗ k1 ≤ cinit n/k. Let αp , βp , αq , βq , t, λ be the outputs
after one step CAVI iteration from π described in Algorithm 1. That is, they
are defined as Equations (26) - (29). Define
P P
P Pk
i<j
a6=b πi,a πj,b Ai,j
i<j
a=1 πi,a πj,a Ai,j
, and q̂ = P P
p̂ = P Pk
.
i<j
a6=b πi,a πj,b
a=1 πi,a πj,a
i<j
Under the same assumption as in Theorem 3.1, there exists some sequence
= o(1) such that with probability at least 1 − e3 5−n , the following inequality
holds
kπ − Z ∗ k1
|p̂ − p∗ | |q̂ − q ∗ |
|t − t∗ |
|λ − λ∗ |
max
,
,
,
≤
+
24c
,
0
p∗ − q ∗ p∗ − q ∗ (p∗ − q ∗ )/p∗ p∗ − q ∗
n/k
uniformly over all the eligible π. In addition if we further assume cinit goes
to 0, the LHS of the above inequality will be simply upper bounded by .
Proof. We are going to obtain tight bounds on |p̂ − p∗ | and |q̂ − q ∗ | first.
Note that we have the “variance-bias” decomposition as in
P Pk
P P
| i<j ka=1 πi,a πj,a (Ai,j − EAi,j )|
i<j
a=1 πi,a πj,a EAi,j
∗
+
− p∗ .
|p̂ − p | ≤
P Pk
P Pk
a=1 πi,a πj,a
a=1 πi,a πj,a
i<j
i<j
We have concentration inequality holds for the numerator in the first term
by Lemma C.2. That is, with probability at least 1 − e3 5−n , we have
k
XX
i<j a=1
πi,a πj,a (Ai,j − EAi,j ) =
p
1
hA − EA, ππ T i ≤ 3n np∗
2
holds uniformly over all π ∈ Π1 . For the denominator, we have
k
k
n2 X X
1X
n2
≥
πi,a πj,a =
kπ·,a k21 ≥
,
2
2
2k
i<j a=1
a=1
P
since ka=1 kπ·,a k1 = n. Thus, we are able to obtain an upper bound on the
first term as
r
P P
| i<j ka=1 πi,a πj,a (Ai,j − EAi,j )|
k 2 p∗
.
≤6
P Pk
n
π
π
i,a
j,a
i<j
a=1
11
MEAN FIELD FOR COMMUNITY DETECTION
For the second term, since EAi,j = p∗
we have
P
Pk
a=1 πi,a πj,a EAi,j
Pk
i<j
a=1 πi,a πj,a
i<j
P
Pk
∗
∗
∗
a=1 Zi,a Zj,a + q (1 −
− p∗ = (p∗ − q ∗ )
hP
P
∗
∗
a=1 Zi,a Zj,a ),
k
a=1 πi,a πj,a
i<j
P
hππ T , 11T −
= (p∗ − q ∗ ) P Pk
= (p∗ − q ∗ )
Pk
i hP
Pk
i<j
a=1 πi,a πj,a
Z ∗ Z ∗T i
i<j
a=1 πi,a πj,a
hππ T − Z ∗ Z ∗T , 11T −
P
i<j
k
a=1 1
Pk
Z ∗ Z ∗T i
a=1 πi,a πj,a
∗ Z∗
− Zi,a
j,a
i
,
where in the last inequality we use the orthogonality between Z ∗ Z ∗T and
11T − Z ∗ Z ∗T . For its numerator, we have
hππ T − Z ∗ Z ∗T , 11T − Z ∗ Z ∗T i ≤ ππ T − Z ∗ Z ∗T
1
∗
≤ kπ − Z k1 (kπk1 + kZ ∗ k1 )
≤ kπ − Z ∗ k1 (2 kZ ∗ k1 + kπ − Z ∗ k1 )
≤ 3n kπ − Z ∗ k1 .
This leads to
P Pk
3n kπ − Z ∗ k1 (p∗ − q ∗ )
i<j
a=1 πi,a πj,a EAi,j
≤ 3kn−1 (p∗ − q ∗ ) kπ − Z ∗ k1 .
− p∗ ≤
P Pk
2 /k
n
π
π
a=1 i,a j,a
i<j
Thus,
|p̂ − p∗ | ≤ 6
r
k 2 p∗
+ 3kn−1 (p∗ − q ∗ ) kπ − Z ∗ k1 ≤
n
Similar result holds for |q̂ − q ∗ |. Denote η0 =
q
"s
#
3 kπ − Z ∗ k1
k 2 p∗
+
(p∗ − q ∗ ).
n(p∗ − q ∗ )2
n/k
k 2 p∗
n(p∗ −q ∗ )2
+
3kπ−Z ∗ k1
,
n/k
thus
max{|p̂ − p∗ |, |q̂ − q ∗ |} ≤ η0 (p∗ − q ∗ ).
By the assumption of nI in Equation (18) and Proposition C.2, we have
n(p∗ − q ∗ )2 /(k 2 p∗ ) nI/k 2 → ∞. Therefore, the first term in η0 goes to 0.
The second term in η0 is at most 3cinit which implies η0 ≤ 4cinit .
By the fact that the digamma function satisfies ψ(x) ∈ (log(x−1/2), log x), ∀x ≥
12
ZHANG, ZHOU
1/2, we have
αp − 1/2
βp
i hP P
i
h pri
P P
k
αp − 1/2 + i<j ka=1 πi,a πj,a Ai,j
π
π
i<j
a=1 i,a j,a
h
i hP P
i
= log
P
P
k
1 + βppri − i<j ka=1 πi,a πj,a Ai,j
π
π
i<j
a=1 i,a j,a
hP P
i
k
p̂ + (αppri − 1/2)
i<j
a=1 πi,a πj,a
h
i .
= log
Pk
pri P
1 − p̂ + βp
i<j
a=1 πi,a πj,a
ψ(αp ) − ψ(βp ) ≥ log
P P
Recall that we have shown i<j ka=1 πi,a πj,a lies in the interval of (n2 /(2k), n2 /2).
By Equation (18), there exists a sequence η 0 = o(1) such that αp , βp ≤
η 0 (p∗ − q ∗ )n2 /k. Then we have
ψ(αp ) − ψ(βp ) ≥ log
p∗ − |p∗ − p̂| − η 0 (p∗ − q ∗ )
.
1 − p∗ + |p∗ − p̂| + η 0 (p∗ − q ∗ )
Similar analysis leads to
ψ(αq ) − ψ(βq ) ≤ log
q ∗ + |q ∗ − q̂| + η 0 (p∗ − q ∗ )
.
1 − q ∗ − |q ∗ − q̂| − η 0 (p∗ − q ∗ )
Together we have
∗
p − |p∗ − p̂| − η 0 (p∗ − q ∗ ) 1 − q ∗ − |q ∗ − q̂| − η 0 (p∗ − q ∗ )
∗
− t∗
t − t ≥ log
1 − p∗ + |p∗ − p̂| + η 0 (p∗ − q ∗ ) q ∗ + |q ∗ − q̂| + η 0 (p∗ − q ∗ )
"
#
|p∗ − p̂| + η 0 (p∗ − q ∗ ) 4 p∗ (1 − q ∗ )
≥ log 1 −
− t∗
q∗
q ∗ (1 − p∗ )
∗
∗
0 p −q
= 4 log 1 − (η0 + η )
.
q∗
Recall that we assume c0 p∗ < q ∗ < p∗ . Thus (η0 + η 0 )(p∗ − q ∗ )/p∗ ≤ 5cinit c0 .
When cinit is sufficiently small, we have (η0 + η 0 )(p∗ − q ∗ )/p∗ ≤ 1/2. Then
using the fact −x ≥ log(1 − x) ≥ −2x, ∀x ∈ (0, 1/2). We have
t − t∗ ≥ −8(η0 + η 0 )(p∗ − q ∗ )/q ∗ .
Analogously we can obtain the same upper bound on t̂ − t∗ , and then
|t − t∗ | ≤ 8c0 (η0 + η 0 )
p∗ − q ∗
.
p∗
MEAN FIELD FOR COMMUNITY DETECTION
13
Identical analysis can be applied towards bounds on |λ̂ − λ∗ |. Note that
h
i
Pk
pri P
1
−
p̂
+
β
π
π
p
i,a
j,a
i<j
a=1
βp
i ,
= log
log
hP Pk
αp + βp
1 + (αppri + βppri )
π π
i<j
a=1
i,a j,a
similarly for αq , βq . Omitting the immediate steps, we end up with
|λ − λ∗ | = | [ψ(βq ) − ψ(αq + βq )] − [ψ(βp ) − ψ(αp + βp )] − λ∗ | ≤ 8(η0 + η 0 )(p∗ − q ∗ ).
The proof is complete after we unify and rephrase all the aforementioned
results.
Lemma C.2. Let A ∈ [0, 1]n×n such that A = AT and Ai,i = 0, ∀i ∈ [n].
Assume {Ai,j }i<j are independent
random variable, and there exists p ≤ 1
P
2
−1
such that 9n ≤ n(n−1) i<j Var(Ai,j ) ≤ p, and then we have
√
sup hA − EA, ππ T i ≤ 6n np,
π∈Π1
with probability at least 1 − e3 5−n .
Proof. This result is a direct consequence of Grothendieck inequality
[15] (see also Theorem 3.1 of [16] for a rephrased statement) on the matrix
A−EA. The Lemma 4.1 of [16] proves that with probability at least 1−e3 5−n ,
X
√
sup
(Ai,j − EAi,j )si tj ≤ 3n np.
s,t∈{−1,1}n
i,j
Then by applying Grothendieck inequality we obtain
X
√
sup
(Ai,j − EAi,j )XiT Xj ≤ 3cn np,
kXi k2 ≤1,∀i∈[n]
i,j
where c is a positive constant smaller than 2. This concludes with
√
sup hA − EA, ππ T i ≤ 6n np,
π∈Π1
Proposition C.1. Assume 0 < q < p < 1. Let X ∼ Ber(q) and Y ∼
p(1−q)
1−q
1
Ber(p). Recall the definition λ = log 1−p
/ log p(1−q)
q(1−p) , t = 2 log q(1−p) and
p
√
I = −2 log[ pq + (1 − p)(1 − q)]. Then the following two equations hold
(50)
e
tλ
=
EetX
Ee−tY
12
, and EetX Ee−tY = exp(−I).
14
ZHANG, ZHOU
Proof. The proof is straightforward and all by calculation. Note that
E exp(tX) = pet + 1 − p and E exp(tY ) = qet + 1 − q. We can easily obtain
p
√
EetX Ee−tY = (pet + 1 − p)(qe−t + 1 − q) = ( pq + (1 − p)(1 − q))2 = exp(−I).
We can justify the first part of Equation (50) in a similar way.
Lemma C.3. [Theorem 5.2 of [21]] Let A ∈ {0, 1}n×n be a symmetric binary matrix with Ai,i = 0, ∀i ∈ [n], and {Ai,j }i<j are independent Bernoulli
random variable. If p , maxi,j EAi,j ≥ log n/n. Then there exist constants
c, r > 0 such that
√
kA − EAkop ≤ c np,
with probability at least 1 − n−r .
The following lemma on the operator norm of sparse networks is from [9].
In the original statement of Lemma 12 in [9], “with probability 1 − o(1)” is
stated. However, its proof in [9] gives explicit form of the probability that
the statement holds, which is at least 1 − n−1 .
Lemma C.4. [Lemma 12 of [9]] Suppose M is random symmetric matrix
with zero on the diagonal whose entries above the diagonal are independent
with the following distribution
(
1 − pi,j , w.p. pi,j ;
Mi,j =
−pi,j , w.p. 1 − pi,j .
Let p , maxi,j pi,j and M̃ be the matrix obtained from M by zeroing out all
the rows and columns having more than 20np positive entries. Then there
exists some constant c > 0 such that
√
kM̃ kop ≤ c np,
holds with probability at least 1 − n−1 .
Lemma C.5. Let A ∈ {0, 1}n×n be a symmetric binary matrix with Ai,i =
0, ∀i ∈ [n], and {Ai,j }i<j are independent
Bernoulli random variable.
Let
P
P
p ≥ maxi,j EAi,j . Define S = {i ∈ [n], j Ai,j ≥ 20np} and Zi = j |Ai,j −
EAi,j |I{i ∈ S}. Then with probability at least 1 − exp(−5np), we have
X
Zi ≤ 20n2 p exp(−5np).
i
MEAN FIELD FOR COMMUNITY DETECTION
15
P
Proof. Note that E j |Ai,j − EAi,j | ≤ 2np(1 − p) ≤ 2np. For any s ≥
20np, we have
X
X
P(Zi > s) ≤ P
|Ai,j − EAi,j | − E
|Ai,j − EAi,j | > s − 2np
j
"
≤ exp −
1
2 (s
− 2np)2
np + 13 (s − 2np)
#
j
≤ exp(−s/2),
by implementing Bernstein inequality. Applying Bernstein inequality again
we have
X
P(Zi > 0) = P
Ai,j ≥ 20np
j
X
X
≤ P
Ai,j − E
Ai,j ≥ 18np
j
≤ exp −
j
(18np)2 /2
np + 18np/3
≤ exp(−21np/2).
Thus, we are able to bound EZi with
Z 20np
Z ∞
EZi ≤
P(Zi > 0) ds +
P(Zi > s) ds
0
20np
Z ∞
≤ 20np exp(−21np/2) +
exp(−s/2)
20np
≤ 20np exp(−10np).
By Markov inequality, we have
"
#
X
X
2
2
P
|Ai,j − EAi,j |I{i ∈ S} ≥ 20n p exp(−5np) = P
Zi ≥ 20n p exp(−5np)
i,j
i
≤
nEZ1
2
20n p exp(−5np)
≤ exp(−5np).
16
ZHANG, ZHOU
Proposition
C.2. Under the iassumption that 0 < q < p = o(1). For
h√
p
I = −2 log pq + (1 − p)(1 − q) we have
√
√
I = (1 + o(1))( p − q)2 .
Consequently, (p − q)2 /(4p) ≤ I ≤ (p − q)2 /p.
Proof. It is a partial result of Lemma B.1 in [35].
p(1−q)
1−q
Proposition C.3. Define λ = log 1−p
. For any p, q > 0 such
/ log q(1−p)
that p, q = o(1) and p q, there exists a constant 0 < c < 1/2 such that
λ−q
∈ (c, 1 − c).
p−q
Proof. First we are going to establish the lower bound. Let x = p − q,
and then we can rewrite λ as
λ=
1
1+
log(1+x/q)
log(1+x/(1−q−x))
.
Case I: x ≥ q/10. Define s = (p − q)/q. Since p q we have s ≥ 1/10 and
also upper bounded by some constant. We have
λ−q
1 1
1
=
− 1
log(1+s)
p−q
s q1+
log(1+sq/(1−(s+1)q))
1 (1 − q) log(1 + sq/(1 − (s + 1)q)) − q log(1 + s)
=
s
q log(1 + sq/(1 − (s + 1)q)) + q log(1 + s)
sq
1 (1 − q) 1−(s+1)q − q log(1 + s)
≥
s
2q log(1 + s)
1 1−q
≥
,
8 log(1 + s)
which is lower bounded by some constant c > 0.
Case II: x < q/10. By Taylor theorem, there exist constants 0 ≤ 1 , 2 ≤
1/10 such that
x
x 1 − 1 x 2
log 1 +
,
= −
q
q
2
q
2
x
x
1 − 2
x
and log 1 +
=
−
.
1−q−x
1−q−x
2
1−q−x
MEAN FIELD FOR COMMUNITY DETECTION
17
Thus, we have
log(1 + xq )
log(1 +
x
1−q−x )
=
q(1 − q)2 − 2q(1 − q) +
q 2 (1 −
1−1
2
2 (1 − q)
2 2
q) − 3−
2 q x
x + c1 x2 + c2 x3
,
where c1 = (1 − 1 )(1 − q) + q and c2 = −(1 − 1 )/2. Thus,
"
#
2 2
q 2 (1 − q) − 3−
q x
1
λ−q
2
=
−q
3−2 2
1
2
2
3
p−q
x q(1 − q) − 2q(1 − q) + 1−
2 (1 − q) + 2 q x + c1 x + c2 x
1
2 2
1
2
2
2 q(1 − q) + 2 q (1 − q) − 2 (1 − q) q + c1 qx + c2 qx
=
3−2 2
1
2
2
3
q(1 − q) − 2q(1 − q) + 1−
2 (1 − q) + 2 q x + c1 x + c2 x
Note that |c1 |, |c2 | ≤ 1. We have
1
q(1 − q)
λ−q
≥ 4
≥ 1/8.
p−q
2q(1 − q)
By using exactly the same discussion, we can show (p − λ)/(p − q) > c.
Thus, we proved the desired bound stated in the proposition.
C.2. Statements and Proofs of Lemmas and Propositions for
Theorem 4.1.
Lemma C.6. Let Z ∗ ∈ Π0 . Assume p∗ , q ∗ = o(1) and p∗ q ∗ . Define
t∗ , λ∗ and π̂ MF the same way as in Theorem 4.1. If nI/[k log kw] → ∞, we
have with probability at least 1 − e3 5−n ,
√
Z ∗ Z ∗T − π̂ MF (π̂ MF )T 1 . n2 / nI.
(ρ,ρ0 )
If we further assume Z ∗ ∈ Π0
probability at least 1 − e3 5−n ,
with arbitrary ρ, ρ0 , and then we have with
`(π̂ MF , Z ∗ ) . ρ−1 n
p
k 2 /(nI).
Proof. Form Lemma C.2, with probability at least 1 − e3 5−n , we have
uniformly for all π ∈ Π1
p
(51)
|hA − EA, ππ T i| ≤ 6n np∗ .
In the remaining part of the proof, we always assume
holds.
P the above event
pri
Denote f 0 (π) = hA+λ∗ In −λ∗ 1n 1Tn , ππ T i−(t∗ )−1 ni=1 KL(πi,· kπi,·
) for any
18
ZHANG, ZHOU
pri
pri
π ∈ Π1 . Here we adopt the notation KL(πi,· kπi,·
) short for KL(Categorical(πi,· )kCategorical(πi,·
)),
and we do it in the same way in the rest part of the proof. Thus,
p
hEA + λ∗ In − λ∗ 1n 1Tn , π̂ MF (π̂ MF )T i ≥ hA + λ∗ In − λ∗ 1n 1Tn , π̂ MF (π̂ MF )T i − 6n np∗
n
X
p
pri
0 MF
∗ −1
MF
∗
= f (π̂ ) − 6n np + (t )
KL(π̂i,·
kπi,·
)
i=1
n
X
p
pri
∗ −1
MF
0
∗
∗
KL(π̂i,·
kπi,·
)
≥ f (Z ) − 6n np + (t )
i=1
T
∗ ∗T
1n 1n , Z Z i
p
≥ hEA + λ In − λ
− 12n np∗
n
n
X
X
pri
pri
∗ −1
MF
∗ −1
∗
+ (t )
KL(π̂i,· kπi,· ) − (t )
KL(Zi,·
kπi,·
),
∗
∗
i=1
i=1
where we use Equation (51) twice in the first and last inequality. Note that
for any π ∈ Π1 , we have
X
X
pri
pri
|KL(πi,· kπi,·
)| ≤ |
πi,j log πi,j | + |
πi,j log πi,j
| ≤ log k + log w,
j
j
P
where the second inequality is due to 0 ≥ j πi,j log πi,j = KL(πi,· kk −1 1k )−
log k ≥ − log k, where k −1 1k can be explicitly written as a length-k vector
(1/k, 1/k, . . . , 1/k). Then we have
n
X
i=1
pri
MF
KL(π̂i,·
kπi,·
)
−
n
X
i=1
pri
∗
KL(Zi,·
kπi,·
) ≤ 2n log kw.
Thus,
hEA + λ∗ In − λ∗ 1n 1Tn , Z ∗ Z ∗T − π̂ MF (π̂ MF )T i ≤ 12n
By Proposition C.4, we have
∗
hEA + λ In − λ
∗
1n 1Tn , Z ∗ Z ∗T
− π̂
MF
(π̂
p
np∗ + 2(t∗ )−1 n log kw.
λ∗ − q ∗
λ∗ − q ∗
α+ ∗
γ ,
) i ≥ 2(p − q ) 1 − ∗
p − q∗
p − q∗
MF T
∗
∗
where α = hZ ∗ Z ∗T − π̂ MF (π̂ MF )T , Z ∗ Z ∗T − In i/2 and γ = hπ̂ MF (π̂ MF )T −
Z ∗ Z ∗T , 1n 1Tn − Z ∗ Z ∗T i/2. By Proposition C.3, there exists a constant c > 0
such that
(52)
hEA + λ∗ In − λ∗ 1n 1Tn , Z ∗ Z ∗T − π̂ MF (π̂ MF )T i ≥ 2c(p∗ − q ∗ )(α + γ).
MEAN FIELD FOR COMMUNITY DETECTION
19
Note that the following inequality holds
2(α + γ) = Z ∗ Z ∗T − π̂ MF (π̂ MF )T
≥ Z ∗ Z ∗T − π̂ MF (π̂ MF )T
1
1
− hZ ∗ Z ∗T − π̂ MF (π̂ MF )T , In i/2
− n/2.
These together lead to
Z ∗ Z ∗T − π̂ MF (π̂ MF )T
1
≤
i
h
p
1
∗ + 2(t∗ )−1 n log kw + c(p∗ − q ∗ )n/2 .
np
12n
c(p∗ − q ∗ )
Note that t∗ (p∗ − q ∗ )/p∗ when p∗ q ∗ . Together by Proposition C.2,
as long as nI/[k log kw] → ∞, the last two terms in the RHS of the above
formula is dominated by the first term. Thus,
Z ∗ Z ∗T − π̂ MF (π̂ MF )T
(ρ,ρ0 )
If we further assume Z ∗ ∈ Π0
to
1
n2
.√ .
nI
, Proposition C.5 and Equation (52) lead
hEA + λ∗ In − λ∗ 1n 1Tn , Z ∗ Z ∗T − π̂ MF (π̂ MF )T i ≥
ρcn(p∗ − q ∗ )
`(π̂ MF , Z ∗ ).
8k
So we have
p
8k
(12n np∗ + 2(t∗ )−1 n log kw)
∗
∗
ρcn(p − q )
s
192k
np∗
≤
.
∗
ρc
(p − q ∗ )2
`(π̂ MF , Z ∗ ) ≤
Before we state the remaining lemmas and propositions used in the Proof
of Lemma C.6, we first introduce two definitions. For any π, π 0 ∈ [0, 1]n×k ,
0 0
0 0
0 0
define α(π; π 0 ) = hπ π T −ππ T , π π T −In i/2 and γ(π; π 0 ) = hππ T −π π T , 1n 1Tn −
0 0T
π π i/2.
Proposition C.4. Define P = Z ∗ BZ ∗T − pIn , with B = q1k 1Tk + (p −
q)Ik . We have the equation
λ−q
λ−q
α(π; Z ∗ ) +
γ(π; Z ∗ ) .
hP + λIn − λ1n 1Tn , Z ∗ Z ∗T − ππ T i = 2(p − q) 1 −
p−q
p−q
20
ZHANG, ZHOU
Proof. Note that Z ∗ BZ ∗T − pIn = (p − q)Z ∗ Z ∗T + q1n 1Tn . We have
λ−q
λ−p
1n 1Tn +
In , Z ∗ Z ∗T − ππ T i
p−q
p−q
− In , Z ∗ Z ∗T − ππ T i
hP + λIn − λ1n 1Tn , Z ∗ Z ∗T − ππ T i = (p − q)hZ ∗ Z ∗T −
= (p − q)hZ ∗ Z ∗T
+ (λ − q)hIn − 1n 1Tn , Z ∗ Z ∗T − ππ T i
= (p − λ)hZ ∗ Z ∗T − In , Z ∗ Z ∗T − ππ T i
+ (λ − q)hZ ∗ Z ∗T − 1n 1Tn , Z ∗ Z ∗T − ππ T i
= 2(p − q)α(π; Z ∗ ) + 2(λ − q)γ(π; Z ∗ ).
Consequently, we obtain the desired bound.
(ρ,ρ0 )
If Z ∗ ∈ Π0
Proposition C.5.
, π ∈ Π1 , we have
α(π; Z ∗ ) + γ(π; Z ∗ ) ≥
ρn
`(π, Z ∗ ).
16k
Proof. We use α, γ instead of α(π; Z ∗ ), γ(π; Z ∗ ) for simplicity. Without
∗
∗ = 1}
loss of generality
Z ∗ ). Define Cu = {i : Zi,u
P we assume kπ − Z k1 = `(π, P
and Lu,v = i∈Cu πi,v . We have the equality v Lu,v = |Cu | and also
α=
and γ =
1X
2
u
1X
2
|Cu |2 −
X
X X
i,j∈Cu w
X
u6=v i∈Cu ,j∈Cv w
"
#
X
X
1
1X X
πi,w πj,w =
|Cu |2 −
L2u,w =
Lu,w Lu,w0
2 u
2 u
0
w
πi,w πj,w =
1 XX
2
w6=w
Lu,w Lv,w .
u6=v w
We define [k] into two disjoint subsets S1 and S2 where
o
n
3
S1 = u ∈ [k] : ∀v 6= u, Lu,v ≤ |Cu | ,
4
n
o
3
and S2 = i ∈ [k] : ∃v 6= u, Lu,v > |Cu | .
4
P
Define L = v6=u Lu,v . For any u ∈ S1 , if Lu,u ≥ |Cu |/4, we have |Cu |2 −
P 2
P 2 u
1
2
w Lu,w ≥
w Lu,w ≥ Lu,u Lu ≥ |Cu |Lu /4. If Lu,u < 4 |Cu | we have |Cu | −
3
2
8 |Cu | ≥ |Cu |Lu /4 as well. This leads to
"
#
X
1 X
1 X
α≥
|Cu |2 −
L2u,w ≥
|Cu |Lu .
2
8
w
u∈S1
u∈S1
MEAN FIELD FOR COMMUNITY DETECTION
21
For any u ∈ S2 there exists a v 6= u such that Lu,v > 43 |Cu |. We must have
Lu,u + Lv,v ≥ Lu,v + Lv,u otherwise kπ − Z ∗ k1 = `(π, Z ∗ ) does not hold since
we can switch the u-th and v-th columns of π to make
− Z ∗ k1 smaller.
P kπP
Consequently, we have Lv,v ≥ Lu /2. So we have
u0 6=u
w Lu,w Lu0 ,w ≥
Lu,v Lv,v ≥ 3|Cu |Lu /8. Then we have
γ≥
1 X XX
3 X
Lu,w Lu0 ,w ≥
|Cu |Lu .
2
8
0
w
u∈S2 u 6=u
u∈S2
Thus,
α+γ ≥
ρn X
ρn
ρn
1 X
|Cu |Lu ≥
Lu ≥
kπ − Z ∗ k1 =
`(π, Z ∗ ).
16 u
16k u
16k
16k
C.3. Statements and Proofs of Lemmas and Propositions for
Theorem 4.2.
Lemma C.7. Let X ∼ Beta(α, β) where α = n2 p and β = n2 (1 − p) with
p = o(1). Let η = o(1). Then we have
P(|X − p| ≥ ηp) ≤ exp(−η 2 n2 p/2).
Proof. Note X has the same distribution as Y /(Y + Z) where Y and
Z are independent χ2 random variables with Y ∼ χ2 (2α) and Z ∼ χ2 (2β).
Then by using tail bound of χ2 distribution (i.e., Proposition C.6)
P(|X − p| ≥ ηp) ≤ P(|Y − 2n2 p| ≥ 2ηn2 p) + P(|Y + Z − 2n2 | ≥ ηn2 )
≤ 2 exp(−η 2 n2 p/4) + 2 exp(−η 2 n2 /16)
≤ exp(−η 2 n2 p/2).
Proposition C.6. Let X ∼ χ2 (k) we have
P |X − k| ≥ kt ≤ 2 exp(−kt2 /8), ∀t ∈ (0, 1).
Proof. See Lemma 1 of [20].
22
ZHANG, ZHOU
APPENDIX D: GENERAL DERIVATIONS OF CAVI FOR
VARIATIONAL INFERENCE
In this section, we provide the derivation from Equation (3) to Equation
(4). First we have
(53)
h
q(x) i
KL(q(x)kp(x|y)) = Eq(x) log
p(x|y)
= Eq(x) [log q(x)] − Eq(x) [log p(x|y)]
= Eq(x) [log q(x)] − Eq(x) [log p(x, y)] + log p(y)
= −(Eq(x) [log p(x, y)] − Eq(x) [log q(x)]) + log p(y)
= − Eq(x) [log p(y|x)] − KL(q(x)kp(x)) + log p(y).
Thus, to minimize KL(q(x)kp(x|y)) w.r.t. q(x) is equivalent to maximize
Eq(x) [log p(y|x)] − KL(q(x)kp(x)).
n
Recall we have independence under both p and
Q q for {xi }i=1 . For simplicity, denote x−i to be {xj }j6=i and q−i to be j6=i qj . We have the decomposition
bi (qi ) , Eq(x) [log p(x, y)] − Eq(x) [log q(x)]
= Eqi Eq−i [log p(xi , x−i , y)] − Eqi Eq−i [log q(xi , x−i )]
= Eqi Eq−i [log p(xi |x−i , y)] − Eqi [log qi (xi )] + const
log qi (xi )
+ const,
= −Eqi log −1
c exp Eq−i [log p(xi |x−i , y)]
P
where the constant includes all terms not depending on xi and c = xi exp Eq−i [log p(xi |x−i , y)]
which is also independent of xi . It is obvious that to solve Equation (3) is
equivalent to
q̂i = arg max bi (qi )
qi
= arg min KL qi kc−1 exp Eq−i [log p(xi |x−i , y)] .
qi
Immediately we have q̂i (xi ) = c−1 exp Eq−i [log p(xi |x−i , y)] . Or we may
write it as
q̂i (xi ) ∝ exp Eq−i [log p(xi |x−i , y)] .
Department of Statistics
Yale University
New Haven, CT 06511
E-mail: [email protected]
E-mail: [email protected]
URL: http://www.stat.yale.edu/˜hz68/
| 10 |
A smooth transition from Wishart to GOE
Miklós Z. Rácz
∗
Jacob Richey
†
arXiv:1611.05838v1 [] 17 Nov 2016
November 18, 2016
Abstract
It is well known that an n × n Wishart matrix with d degrees of freedom is close to the
appropriately centered and scaled Gaussian Orthogonal Ensemble (GOE) if d is large enough.
Recent work of Bubeck, Ding, Eldan, and Racz, and independently Jiang and Li, shows that
the transition happens when d = Θ(n3 ). Here we consider this critical window and explicitly
compute the total variation distance between the Wishart and GOE matrices when d/n3 → c ∈
(0, ∞). This shows, in particular, that the phase transition from Wishart to GOE is smooth.
1
Introduction
The Wishart distribution is a fundamental object appearing in many domains, such as statistics,
geometry, quantum physics, and wireless communications, among others. In statistics it arises as
the distribution of the sample covariance matrix of a sample from a multivariate normal distribution.
In geometry it is known as the Gram matrix of inner products of n points in Rd , and it is also the
starting point for canonical models of random geometric graphs [5, 3, 8].
It is well known that an n × n Wishart matrix with d degrees of freedom is close to the
appropriately centered and scaled Gaussian Orthogonal Ensemble (GOE) if d islarge enough (see,
e.g., [5]). Recent work [3, 9] shows that the transition happens when d = Θ n3 and in this paper
we study this critical window. In Theorem 1.2 below we explicitly compute the total variation
distance between the Wishart and GOE matrices when d/n3 → c ∈ (0, ∞), showing, in particular,
that the phase transition from Wishart to GOE is smooth.
1.1
Main result
Let X be an n × d matrix where the entries are i.i.d. standard normal random variables, and let
W ≡ W (n, d) = XX T be the corresponding n × n Wishart matrix with d degrees of freedom.1 Let
M (n) be an n × n matrix drawn from the Gaussian Orthogonal Ensemble, i.e., a symmetric n × n
random matrix where the diagonal entries are i.i.d. normal random variables with mean zero and
variance 2, and the entries above the diagonal are i.i.d. standard normal random variables, with the
entries on and above the diagonal all independent. In order to match the first moment
√ and the scale
of the Wishart matrix, we center and scale M (n) appropriately: let M (n, d) := dM (n) + dIn ,
where In is the n × n identity matrix.
∗
Microsoft Research; [email protected].
University of Washington; [email protected].
1
In statistics the number of samples is usually denoted by n and the number of parameters is usually denoted
by p, resulting in a p × p Wishart matrix with n degrees of freedom. Here our notation is taken with the geometric
perspective in mind, following [5, 3, 4].
†
1
If d is large enough compared to n, then the Wishart matrix becomes approximately like the
GOE. Recent work of Bubeck, Ding, Eldan,and Racz [3], and independently Jiang and Li [9], shows
that the transition happens when d = Θ n3 . Specifically, they proved the following theorem, where
we write TV for total variation distance.
Theorem 1.1. Define the random matrix ensembles W (n, d) and M (n, d) as above.
(a) (Bubeck, Ding, Eldan, and Racz [3]) If d/n3 → 0 then
TV (W (n, d), M (n, d)) → 1.
(b) (Bubeck, Ding, Eldan, and Racz [3]; Jiang and Li [9]) If d/n3 → ∞ then
TV (W (n, d), M (n, d)) → 0.
Our focus is on the critical window and our main result is the explicit computation of the
limiting total variation distance between W (n, d) and M (n, d) when d/n3 → c ∈ (0, ∞).
Theorem 1.2. Define the random matrix ensembles W (n, d) and M (n, d) as above and let d = d(n)
be such that d/n3 → c ∈ (0, ∞). Then
1
√ √ ,
(1.1)
lim TV (W (n, d), M (n, d)) = Erf
n→∞
4 3 c
where recall that the error function is defined as
2
Erf (x) = √
π
Z
x
2
e−t dt.
0
From this result we can immediately read off that, as c → 0 and c → ∞, the total variation
distance goes to 1 and 0, respectively, recovering the previous results described in Theorem 1.1.
Since Erf (x) = √2π x (1 + o (1)) as x → 0, the limiting total variation distance decays as
1
1
√ √
Erf
∼ √ √
4 3 c
2 3π c
as c → ∞. The behavior of the limit when c is small is plotted in Figure 1.
Figure 1: The limiting total variation distance as a function of c, when c is close to 0.
From the proof we shall see that the limit in (1.1) is the expected value of an explicit function
of a two-dimensional Gaussian, which comes from the central limit theorem for the first and third
moments of the empirical spectral distribution of a GOE matrix.
2
1.2
Further related work and open problems
Several recent works have explored extensions of Theorem 1.1, and Theorem 1.2 raises further
questions.
Robustness. Bubeck and Ganguly [4] showed that the critical dimension is universal in the
following sense: Theorem 1.1 holds (up to logarithmic factors) if the entries of X are i.i.d. from a
sufficiently smooth distribution. What can be said about the transition in the critical regime? Are
there other distributions for which the limiting total variation distance can be computed explicitly?
If not, can one prove similar qualitative behavior?
Anisotropy. Eldan and Mikulincer [8] studied the effect of anisotropy on the power of detecting
geometry in random geometric graphs. This is directly related to studying Wishart matrices where
each row of X is a multivariate normal with a diagonal covariance matrix. The authors introduce
new notions of dimensionality and prove a theorem similar to Theorem 1.1 with appropriate upper
and lower bounds on the “effective critical dimension”. While the primary open problem is to close
the gap between these bounds, one may also ask about the nature of the transition at the effective
critical dimension: can anisotropy cause qualitatively different behavior?
Other regimes. Theorems 1.1 and 1.2 state that as d/n3 → ∞, all statistics of the Wishart
W (n, d) and the GOE M (n, d) have asymptotically the same distribution, but this is not the case
if d/n3 remains bounded. In the random matrix literature there has been lots of work showing that
particular statistics of these ensembles have asymptotically the same distribution even when d n3 .
For instance, when d = Θ (n), then the limiting empirical spectral distribution of the Wishart is
the Marchenko-Pastur law, which shows the difference between the Wishart and GOE, but the
largest eigenvalue of the Wishart already behaves like that of the GOE [10, 6, 7]. This naturally
raises the question of whether there are other regimes of d and n where there are interesting phase
transitions.
2
Proof of Theorem 1.2
The main reason that allows for an explicit computation of the limiting total variation distance in
Theorem 1.2 is that both W (n, d) and M (n, d) have explicit densities. The proof of Theorem 1.2
is similar to the case of d/n3 → ∞ presented in [3] and proceeds by a Taylor expansion of the
ratio of the densities of the two random matrix ensembles. The difference compared to the case of
d/n3 → ∞ is that here the Taylor expansion has to be done to one degree higher. As we shall see,
taking the limit of the total variation distance as d/n3 → c ∈ (0, ∞) then requires using the central
limit theorem for the moments of the empirical spectral distribution of a GOE matrix.
n(n+1)
Step 1: Writing out the total variation distance. Let P ⊂ R 2 denote the cone of
positive semidefinite matrices. It is well known (see, e.g., [11]) that when d ≥ n, W (n, d) has the
following density with respect to the Lebesgue measure on P:
1
(det (A)) 2 (d−n−1) exp − 21 Tr (A)
fn,d (A) := 1
,
Q
1
2 2 dn π 4 n(n−1) ni=1 Γ 12 (d + 1 − i)
where Tr (A) denotes the trace of the matrix A. The density of a GOE random matrix with respect
n(n+1)
1
n
to the Lebesgue measure on R 2 is A 7→ (2π)− 4 n(n+1) 2− 2 exp − 14 Tr A2 and so the density
3
n(n+1)
of M (n, d) with respect to the Lebesgue measure on R 2 is
1
Tr (A − dIn )2
exp − 4d
.
gn,d (A) :=
1
n
(2πd) 4 n(n+1) 2 2
n(n+1)
Denote the measure given by this density by µn,d , let λ denote the Lebesgue measure on R 2
and write A 0 if A is positive semidefinite. We can then write
Z
TV (W (n, d), M (n, d)) = n(n+1) gn,d (A) − fn,d (A) 1{A0} + dλ (A)
R 2
Z
fn,d (A) 1{A0}
= n(n+1) 1 −
dµn,d (A) ,
(2.1)
gn,d (A)
R 2
+
where x+ := max {x,
Let Q denote the
set of symmetric matrices for which all of the eigenvalues
h 0}. √
√ i
are in the interval d − 3 dn, d + 3 dn . Since d/n3 → c > 0, we have that Q ⊂ P for all n large
enough. It is known (see, e.g., [1]) that, with probability 1 − o (1), all the eigenvalues of M (n)
√
√
are in the interval [−3 n, 3 n], which implies that M (n, d) ∈ Q. Since the integrand in (2.1) is
bounded, we can then write
Z
fn,d (A)
TV (W (n, d), M (n, d)) =
1−
dµn,d (A) + o (1)
(2.2)
gn,d (A) +
Q
and so we may restrict our attention to Q.
Define αn,d (A) := log (fn,d (A) /gn,d (A)). Denote the eigenvalues of an n × n matrix A by
λ1 (A) ≤ · · · ≤ λn (A); when
P from the context, we omit the dependence on
Q the matrix is obvious
A. Recall that det (A) = ni=1 λi and Tr (A) = ni=1 λi . We then have that
n
1X
1
2
αn,d (A) =
(λi − d)
(d − n − 1) log λi − λi +
2
2d
i=1
n
X
n (n + 3) dn
n
n (n + 1)
1
+
−
log 2 + log π +
log d −
log Γ
(d + 1 − i) .
4
2
2
4
2
i=1
By Stirling’s formula we know that log Γ (z) = z − 12 log z − z + 12 log (2π) + O z1 as z → ∞, so
n
1X
1
2
αn,d (A) =
(d − n − 1) log λi − λi +
(λi − d)
2
2d
i=1
n
n
n
n (n + 1)
1X
1X
log d −
(d − i) log (d + 1 − i) +
(d + 1 − i) + O
.
4
2
2
d
i=1
i=1
(i−1)2
(i−1)3
i−1
Now writing log (d + 1 − i) = log d + log 1 − i−1
=
log
d
−
−
+
O
we get that
2
3
d
d
2d
d
+
n
1X
1
n3
n
2
αn,d (A) =
(d − n − 1) log λi − λi +
(λi − d) − {(d − n − 1) log d − d} −
+ o (1) .
2
2d
2
12d
i=1
n
o
1
Defining h (x) := 21 (d − n − 1) log (x/d) − (x − d) + 2d
(x − d)2 , we have that
αn,d (A) =
n
X
h (λi ) −
i=1
4
n3
+ o (1) .
12d
(2.3)
Step 2: Taylor expansion and taking the limit. The derivatives of h at d are h (d) = 0,
n+1
00
(3) (d) = d−n−1 , h(4) (d) = − 3(d−n−1) , and also h(5) (x) = 12(d−n−1) .
= − n+1
2d , h (d) = 2d2 , h
d3
d4
x5
Approximating h with its fourth order Taylor polynomial around d we get that
h0 (d)
h(x) = −
d−n−1
d−n−1
d−n−1
n+1
n+1
(x − d)2 +
(x − d)3 −
(x − d)4 +
(x − d)5 ,
(x − d)+
2d
4d2
6d3
8d4
10ξ 5
where ξ is some real number between x and d. From 2.3 we see that to compute αn,d (A), we need
to compute the sum over the eigenvalues {λi }ni=1 of each term in the expansion.
First, we argue
the contribution
from the remainder term
Recall that A ∈ Q,
h that
h is negligible.
√
√ i
√
√ i
and hence λi ∈ d − 3 dn, d + 3 dn for every i ∈ [n]. If x ∈ d − 3 dn, d + 3 dn , then
d−n−1
(x − d)5 ≤
10ξ 5
√ 5
d−n−1
1
√ 5 3 dn = O n2 ,
10 d − 3 dn
where we used that d = Θ n3 . Summing n such terms gives a term of order O(1/n), which is
negligible in the limit. Turning to the four terms that matter, defining
n
n
n+1X
S1 (n, d) := −
(λi − d) ,
2d
n+1X
S2 (n, d) :=
(λi − d)2 ,
4d2
i=1
S3 (n, d) :=
n
d−n−1X
6d3
i=1
n
d−n−1X
(λi − d)4 ,
S4 (n, d) := −
8d4
3
(λi − d) ,
i=1
i=1
and also letting S0 (n, d) := −n3 /(12d), we thus have that
αn,d (A) = S0 (n, d) + S1 (n, d) + S2 (n, d) + S3 (n, d) + S4 (n, d) + o (1) .
(2.4)
(λi − d). If {λi }ni=1 are the eigenvalues of M (n, d), then {µi }ni=1 are
P
the eigenvalues of √1n M (n). Recall that the empirical spectral distribution n1 ni=1 δµi converges
√
1
weakly to the semicircle distribution with density ρsc (x) = 2π
4 − x2 1{|x|≤2} (see, e.g., [1]). With
this notation we can rewrite the four quantities above as follows:
√
n
n
n (n + 1) X
n2 (n + 1) 1 X 2
√
S1 (n, d) = −
×
µi ,
S2 (n, d) =
×
µi ,
4d
n
2 d
i=1
i=1
r
n
n
d − n − 1 n3
1X 4
d−n−1
n3 X 3
S4 (n, d) = −
×
×
µi .
×
×
µi ,
S3 (n, d) =
8d
d
n
6d
d
For i ∈ [n] define µi :=
√1
dn
i=1
i=1
Let U be a random variable distributed according to the semicircle law. We know that for any
fixed k ∈ N, the k th moment of the empirical spectral distribution converges in probability to the
k th moment of the semicircle law (see [1, Lemmas 2.1.6 and 2.1.7]), i.e.,
n
h i
1X k
µi → E U k .
n
i=1
Since E U 2 = 1 and E U 4 = 2, we have that, as d/n3 → c ∈ (0, ∞), S2 (n, d) → 1/(4c) and
S4 (n, d) → −1/(4c), so put together we have that S2 (n, d) + S4 (n, d) → 0.
5
The central limit theorem for the moments of the empirical spectral distribution of a GOE
matrix (see [1, Theorem 2.1.31 and Exercise 2.1.35] and [2]) shows that
!
n
n
X
X
µi ,
µ3i =⇒ (N1 , N3 ) ,
i=1
i=1
6 ). The entries
where (N1 , N3 ) are jointly normal with mean zero and covariance matrix C = ( 26 24
of the covariance matrix C are special cases of the more general formulas found in [1, 2], so we do
not describe the computations here, but one can verify these numbers by using the identity
n
X
µki
= Tr
√1 M
n
n
k X
k
1
√
(n)
=
M (n)
n
i=1
i=1
i,i
and computing appropriate moments of normal random variables.
Putting everything together we see that, as d/n3 → c ∈ (0, ∞), we have that
1
1
1
1
1
√
√
,−
.
N1 , ,
N3 , −
(S0 , S1 , S2 , S3 , S4 ) =⇒ −
12c 2 c
4c 6 c
4c
(2.5)
Therefore, since the function (s0 , s1 , s2 , s3 , s4 ) 7→ (1 − exp (s0 + s1 + s2 + s3 + s4 ))+ is continuous
and bounded, we have by (2.2), (2.4), and (2.5) that
1
1
1
TV (W (n, d), M (n, d)) → E 1 − exp −
− √ N1 + √ N3
.
(2.6)
12c 2 c
6 c
+
Step 3: Evaluating the limit. What remains is to evaluate the expectation on the right
hand side of (2.6). Let Y and Z be independent normal random variables with mean zero and
variances 2 and 6, respectively. Then (N1 , N3 ) and (Y, 3Y + Z) have the same distribution, since
both are Gaussian with the same mean vector and covariance matrix. Notice that
1
1
1
− √ Y + √ (3Y + Z) = √ Z
2 c
6 c
6 c
and hence the right hand side of (2.6) is equal to
1
1
E 1 − exp −
+ √ Z
.
12c 6 c
+
√
√
Since 1 − exp (−1/(12c) + z/(6 c)) ≥ 0 if and only if z ≤ 1/(2 c), we have that
Z √1
1
2 c
z2
1
1
1
z
− 1 + √
E 1 − exp −
+ √ Z
=
1 − e 12c 6 c · √
e− 12 dz
12c 6 c
12π
−∞
+
Z √1
Z √1
√
2
(z−1/ c)2
2 c
2 c
1
1
− z12
−
12
√
√
=
e
dz −
e
dz
12π −∞
12π −∞
1
1
1
√ √ ,
=P Z< √
−P Z <− √
= Erf
2 c
2 c
4 3 c
which concludes the proof.
6
References
[1] G. W. Anderson, A. Guionnet, and O. Zeitouni. An Introduction to Random Matrices. Cambridge University Press, 2010.
[2] G. W. Anderson and O. Zeitouni. A CLT for a band matrix model. Probability Theory and
Related Fields, 134(2):283–338, 2006.
[3] S. Bubeck, J. Ding, R. Eldan, and M. Z. Rácz. Testing for high-dimensional geometry in
random graphs. Random Structures & Algorithms, 49(3):503–532, 2016.
[4] S. Bubeck and S. Ganguly. Entropic CLT and phase transition in high-dimensional Wishart
matrices. Preprint available at http://arxiv.org/abs/1509.03258, 2015.
[5] L. Devroye, A. György, G. Lugosi, and F. Udina. High-dimensional random geometric graphs
and their clique number. Electronic Journal of Probability, 16:2481–2508, 2011.
[6] N. El Karoui. On the largest eigenvalue of Wishart matrices with identity covariance when n,
p and p/n → ∞. arXiv preprint math/0309355, 2003.
[7] N. El Karoui. Tracy-Widom limit for the largest eigenvalue of a large class of complex sample
covariance matrices. The Annals of Probability, pages 663–714, 2007.
[8] R. Eldan and D. Mikulincer. Information and dimensionality of anisotropic random geometric
graphs. Preprint available at https://arxiv.org/abs/1609.02490, 2016.
[9] T. Jiang and D. Li. Approximation of Rectangular Beta-Laguerre Ensembles and Large Deviations. Journal of Theoretical Probability, 28:804–847, 2015.
[10] I. M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis.
Annals of Statistics, 29(2):295–327, 2001.
[11] J. Wishart. The Generalised Product Moment Distribution in Samples from a Normal Multivariate Population. Biometrika, 20A(1/2):32–52, 1928.
7
| 10 |
arXiv:1408.6821v3 [math.CO] 19 May 2016
Looking for vertex number one
Alan Frieze∗, Wesley Pegden†
Department of Mathematical Sciences,
Carnegie Mellon University,
Pittsburgh PA 15213,
U.S.A.
May 20, 2016
Abstract
Given an instance of the preferential attachment graph Gn = ([n], En ), we would like
to find vertex 1, using only ‘local’ information about the graph; that is, by exploring the
neighborhoods of small sets of vertices. Borgs et al. gave an an algorithm which runs in
time O(log4 n), which is local in the sense that at each step, it needs only to search the
neighborhood of a set of vertices of size O(log4 n). We give an algorithm to find vertex 1,
which w.h.p. runs in time O(ω log n) and which is local in the strongest sense of operating
only on neighborhoods of single vertices. Here ω = ω(n) is any function that goes to infinity
with n.
1
Introduction
The Preferential Attachment Graph Gn was first discussed by Barabási and Albert [2] and then
rigorously analysed by Bollobás, Riordan, Spencer and Tusnády [3]. It is perhaps the simplest
model of a natural process that produces a graph with a power law degree sequence.
The Preferential Attachment Graph can be viewed as a sequence of random graphs G1 , G2 , . . . , Gn
where Gt+1 is obtained from Gt as follows: Given Gt , we add vertex t + 1 and m random edges
{ei = (t + 1, ui ) : 1 ≤ i ≤ m} incident with vertex t + 1. Here the constant m is a parameter
of the model. The vertices ui are not chosen uniformly from Vt , instead they are chosen with
probabilities proportional to their degrees. This tends to generate some very high degree vertices,
∗
†
Research supported in part by NSF grant CCF1013110
Research supported in part by NSF grant DMS
1
compared with what one would expect in Erdős-Rényi models with the same edge-density. We
refer to u1 , u2 , . . . , um as the left choices of vertex t + 1. We also say that t + 1 is a right neighbor
of ui for i = 1, 2, . . . , m.
We consider the problem of searching through the preferential attachment graph looking for
vertex number 1, using only local information. This was addressed by Borgs, Brautbar, Chayes,
Khanna and Lucier [5] in the context of the Preferential Attachment Graph Gn = (Vn , En ). Here
Vn = [n] = {1, 2, . . . , n}. They present the following local algorithm that searches for vertex 1,
in a graph which may be too large to hold in memory in its entirety.
1: Initialize a list L to contain an arbitrary node u in the graph.
2: while L does not contain node 1 do
3:
Add a node of maximum degree in N(L) to L od;
4: return L
Here for vertex set L, we let N(L) = {w ∈
/ L : ∃v ∈ L s.t. {v, w} ∈ En }.
They show that w.h.p. the algorithm succeeds in reaching vertex 1 in O(log4 n) steps. (We
assume that an algorithm can recognize vertex 1 when it is reached.) In [5], they also show how
a local algorithm to find vertex 1 can be used to give local algorithms for some other problems.
We also note that Brautbar and Kearns [6] considered local algorithms in a more general context.
There the algorithm is allowed to jump to random vertices as well as crawl around the graph in
the search for vertices of high degree and high clustering coefficient.
We should note that, as the maximum degree in Gn is n1/2−o(1) w.h.p., one cannot hope to have a
polylog(n) time algorithm if we have to check the degrees of the neighbors as we progress. Thus
the algorithm above operates on the assumption that we can find the highest-degree neighbor of
a vertex in O(1) time. This would be the case, for example, if the neighborhood of a vertex is
stored as a linked-list which is sorted by degrees. In the same situation, we can also determine the
K highest degree neighbors of a vertex in constant time for any constant K, and in the present
manuscript we assume such a constant-time step is possible. In particular, in this setting, each
of steps 2-7 of the following Degree Climbing Algorithm takes constant time.
We let dn (v) denote the degree of vertex v ∈ Vn .
Algorithm DCA:
The algorithm generates a sequence of vertices v1 , v2 , . . . , until vertex 1 is reached.
Step 1 Carry out a random walk on G until it is mixed; i.e., until the variation distance between
the current vertex and the steady state is o(1). We let v1 be the terminal vertex of the
walk. (See Remark 1.1 for comments on this step.)
Step 2 t ← 1.
2
Step 3 repeat
Step 4
Let Ct = w1 , w2, . . . , wm/2 be the m/2 neighbors of vt of largest degree.
(In the case of ties for the m/2th largest degree, vertices will be placed randomly in Ct in
order to make |Ct | = m/2. Also m is large here and we could replace m/2 by ⌊m/2⌋ if m
is odd without affecting the analysis by very much.)
Step 5
Choose vt+1 randomly from Ct .
Step 6
t ← t + 1.
Step 7 until dn (vt ) ≥
arbitrary.
n1/2
log1/100 n
(SUCCESS) or t > 2ω log4/3 n (FAILURE), where ω → ∞ is
Step 8 Assuming Success, starting from vT , where T is the value of t at this point, do a random
1/2
walk on the vertices of degree at least logn1/20 n until vertex 1 is reached.
Remark 1.1. It is known that w.h.p. the mixing time of a random walk on Gn is O(log n), see
Mihail, Papadimitriou and Saberi [11]. So we can assume that the distribution of v1 is close to
n (v)
the steady state πv = d2mn
.
Note that Algorithm DCA is a local algorithm in a strong sense: the algorithm only requires
access to the current vertex and its neighborhood. (Unlike the algorithm from [5], it does not
need access to the neighborhood of the entire set Pt = {v1 , . . . vt } of vertices visited so far.) Our
main result is the following:
Theorem 1.2. If m is sufficiently large then then w.h.p. Algorithm DCA finds vertex 1 in Gn
in O(ω log n) time.
DCA is thus currently the fastest as well as the “most local” algorithm to find vertex 1. We
conjecture that the factor ω in the running time is unnecessary.
Conjecture 1.3. Algorithm finds vertex 1 in Gn in O(log n) time, w.h.p.
We note that w.h.p. the diameter of Gn is ∼
execution time much below O(log n).
log n
log log n
and so we cannot expect to improve the
The bulk of our proof consists of showing that the execution of Steps 2–7 requires only time
O(ω log n) w.h.p. for any ω = ω(n) → ∞. This analysis requires a careful accounting of conditional probabilities. This is facilitated by the conditional model of the preferential attachment
graph due to Bollobás and Riordan [4]. One contribution of our paper is to recast their model in
terms of sums of independent copies of the rate one exponential random variables; this will be
essential to our analysis.
3
Outline of the paper
In Section 2 we reformulate the construction of Bollobás and Riordan [4] in terms of sums of
independent copies of the exponential random variable of rate one.
Section 3 is the heart of the paper. The aim is to show that if vt is not too small, then the ratio
vt+1 /vt is bounded above by 3/4 in expectation. We deduce from this that w.h.p. the main loop,
Steps 2–7, only takes O(ω log n) rounds. The idea is to determine a degree bound ∆ such that
many of vt ’s left neighbors have degree at least ∆, while only few of vt ’s right neighbors have
degree at least ∆. In this way, vt+1 is likely to be significantly smaller than vt .
Once we find a vertex vT of high enough degree, then we know that w.h.p. vT is not very large
and lies in a small connected subgraph of vertices of high degree that contains vertex one. Then a
simple argument based on the worst-case covertime of a graph suffices to show that only o(log n)
more steps are required.
Our proofs will use various parameters. For convenience, we collect here in table form a dictionary
of some notations, giving a brief (and imprecise) description of the role each plays in our proof,
for later reference.
ω :=
λ0 :=
Definition
Role in proof
O(log log n)
An arbitrarily chosen slowly growing fucntion.
1
log
40/m
n
A (usually valid) lower bound on random variables ηi (cf. Section 2.1).
n1 :=
log1/100 n
W.h.p. the main loop never visits v ≤ n1 .
Pt :=
{v1 , . . . vt }
The set of vertices visited up to time t.
Ψ:=
(log log n)10
Vertices v > Ψvt will not be important in the search for vt+1 .
L:=
m1/5
A large constant, significantly smaller than m.
Notation: We write An ∼ Bn if An = (1 + o(1))Bn as n → ∞. We write α . β in place of
α ≤ o(1) + (1 + o(1))β.
4
2
2.1
Preliminaries
A different model of the preferential attachement graph
Bollobás and Riordan [4] gave an ingenious construction equivalent to the preferential attachment
graph model. We choose x1 , x2 , . . . , x2mn independently and uniformly from [0, 1]. We then let
{ℓi , ri } = {x2i−1 , x2i } where ℓi < ri for i = 1, 2, . . . , mn. We then sort the ri in increasing order
R1 < R2 < · · · < Rmn and let R0 = 0. We then let
Wj = Rmj and wj = Wj − Wj−1 and Ij = (Wj−1 , Wj ]
for j = 1, 2, . . . , n. Given this we can define Gn as follows: It has vertex set Vn = [n] and an
edge {x, y} , x ≤ y for each pair ℓi , ri , where ℓi ∈ Ix and ri ∈ Iy .
We recast the construction of Bollobás and Riordan as follows: we can generate the sequence
R1 , R2 , . . . , Rmn by letting
1/2
Υi
Ri =
,
(1)
Υmn+1
where Υ0 = 0 and
ΥN = ξ1 + ξ2 + · · · + ξN for N ≥ 1
and where ξ1 , ξ2, . . . , ξmn+1 are independent exponential rate one random variables i.e. Pr(ξi ≥
2
are independent and uniform in [0, 1] (as they
x) = e−x for all i. This is because r12 , r22 , . . . , rmn
are each chosen as the maximum of two uniform points) and the order statistics of N independent
uniform [0, 1] random variables can be expressed as the ratios Υi /ΥN +1 for 1 ≤ i ≤ N.
We refer to the distribution of ΥN as ERL(N), as it is known in the literature as the Erlang
distribution.
2.2
Important properties
The advantage of our modification of the variant of the Bollobàs and Riordan construction is
that if we define
ηi := ξ(i−1)m+1 + ξ(i−1)m+2 + · · · + ξim ,
then ηi is closely related to the size of Ii . It can then be used to estimate the degree of vertex i.
This will simplify the analysis since ηi is simply a sum of exponentials.
In this section, we make this claim (along with other more obscure asymptotic properties of this
model) precise. In particular, we let E denote the event that the following properties hold for
Gn . In the appendix, we prove that Gn has all these properties w.h.p.
5
(P1) For Υk,ℓ = Υk − Υℓ , we have
Υk,ℓ
"
1/2
Lθk,ℓ
∈ (k − ℓ) 1 ±
3(k − ℓ)1/2
#
for (k, ℓ) = (mn + 1, 0) or
l=0
1
k−ℓ
2
∈ {ω, ω + 1, . . . , n} and k − l ≥ log n
k ≥ log30 n, l > 0
m
1/300
log
n 0 < l < k < log30 n.
Here, where n0 =
λ20 n
,
ω log2 n
λ0 =
θk,ℓ
1
,
log20/m n
log k
1/2
k
= (k − ℓ)1/2
(k−ℓ)3/2 log n
n1/2
n
ω 3/2 log2 n
Similarly define
θk =
1/2
k3/2
k
log n
1/2
n
n
ω 3/2 log2 n
ω ≤ l < k ≤ log30 n,
ω ≤ k ≤ n2/5 , l = 0,
log30 n < k ≤ n2/5 ,
n2/5 < k ≤ n0 ,
n0 < k.
k ≤ n2/5 ,
n2/5 < k ≤ n0 ,
n0 < k.
#
1/2 "
1/2
1/2
Lθi
i
i
1 ± 1/2 ∼
for ω ≤ i ≤ n.
(P2) Wi ∈
n
i
n
"
#
1/2
ηi
2Lθi
ηi
(P3) wi ∈
1 ± 1/2 1/2 ∼
for ω ≤ i ≤ n.
1/2
2m(in)
m i
2m(in)1/2
(P4) λ0 ≤ ηi ≤ 40m log log n for i ∈ [log30 n].
(P5) ηi ≤ log n for i ∈ [n].
Some properties give asymptotics for intermediate quantities in the Bollobas/Riordan model
(e.g., (P2), (P3)), while the rest give worst-case bounds on parameters in various ranges for
i. The very technical (P1) is just giving constraints on the gaps between the points Υk in the
Bollobas/Riordan model.
2.3
Inequalities
We will use the following inequalities from Hoeffding [9] at several points in the paper. Let
Z = Z1 + Z2 + . . . + ZN be the sum of independent [0, 1] random variables and suppose that
6
µ = E(Z). Then if α > 1 we have
n 2 o
(
exp − α3µ , α ≤ 1.
α2 µ
≤
Pr(Z ≥ (1 + α)µ) ≤ exp −
2 + α/3
exp − αµ
, α > 1.
3
βµ
e
Pr(Z ≥ βµ) ≤ e−µ
, β>1
β
2
α µ
, 0 ≤ α ≤ 1.
Pr(Z ≤ (1 − α)µ) ≤ exp −
2
(2)
(3)
(4)
Our main use for these inequalities is to get a bound on vertex degrees, see Section 2.4.
In addition to these concentration inequalities, we use various inequalities bounding the tails of
the random variable η. We note that the probability density φ(x) of the sum η of m independent
exponential rate one random variables is given by
φ(x) :=
xm−1 e−x
.
(m − 1)!
That is,
Pr(a ≤ η ≤ b) =
Z
b
φ(y)dy.
(5)
a
The equation (5) is a standard result, which can be verified by induction on m (for example, see
exercise 4.14.10 of Grimmett and Stirzaker [8]). Although we will frequently need to bound the
probability (5), this integral cannot be evaluated exactly in general, and thus we will often use
simple bounds on φ(η). We summarise what we need in the following lemma:
Lemma 2.1.
(a)
Pr(η ≤ xm) ≤ m(xe1−x )m
for x ≤ 1 −
1
.
m
(6)
(b)
Pr(η ≤ x) ≤ (1 − e−x )m ≤ xm .
(c)
Pr(η ≥ βm) ≤
eβ
eβ
m
≤ e−3m/10 for β ≥ 2.
(d)
2 m/3
Pr(η ≥ (1 + α)m) ≤ e−α
for 0 < α < 1.
(e)
2 m/2
Pr(η ≤ (1 − α)m) ≤ e−α
7
for 0 < α < 1.
(7)
Proof. (a) φ(η) is maximized at η = m − 1. Taking φ(mx) (x ≤ 1 − 1/m) as an upper bound on
φ(y) for y ∈ [0, mx] and m! ≥ (m/e)m in (5) gives us (6).
Q
(b) Writing η = ξ1 + ξ2 + · · · + ξm we have Pr(η ≤ x) ≤ m
i=1 Pr(ξi ≤ x).
(c) If η = ξ1 + ξ2 + · · · + ξN , then with λ = (β − 1)/β,
Pr(η ≥ βm) = Pr(eλη ≥ eλβm ) ≤ e−λβm E(eλη ) = e−λβm
m
Y
E(eλξi ) =
i=1
−λβm
e
(1 − λ)−m = (βe−(β−1) )m . (8)
(d) Putting β = 1 + α into (8) we see that
2 m/3
Pr(η ≥ (1 + α)m) ≤ ((1 + α)e−α )m ≤ e−α
.
(e) With λ = α/(1 − α) we now have
Pr(η ≤ (1 − α)m) = Pr(e−λη ≥ e−λ(1−α)m ) ≤ eλ(1−α)m E(e−λη ) = eλ(1−α)m
m
Y
E(e−λξi ) =
i=1
λ(1−α)m
e
2.4
(1 + λ)
−m
2 m/2
= ((1 − α)eα )m ≤ e−α
.
Properties of the degree sequence
We will use the following properties of the degree sequence throughout: let
!
1/2
n 1/2
i
5L log log n
ζ(i) =
1−
− 3/4
i
n
ω log n
!
1/2
n 1/2
i
5L log log n
+
.
1−
+ 3/4
ζ (i) =
i
n
ω log n
(9)
(10)
Note that
+
ζ(i) ∼ ζ (i)
ζ(i) ∼
n 1/2
i
2 log log n
.
if i ≤ n 1 −
log n
if i = o(n).
Also, let d¯n (i) denote the expected value of dn (i) in Gn .
Lemma 2.2.
8
(11)
(12)
(a) If E occurs then d¯n − m ∈ [ηi ζ(i), ηi ζ + (i)].
2 η ζ(i)/2
i
(b) Pr(dn (i) − m ≤ (1 − α)ηi ζ(i)) ≤ e−α
for 0 ≤ α ≤ 1.
2 η ζ + (i)/3
i
(c) Pr(dn (i) − m ≥ (1 + α)ηi ζ + (i)) ≤ e−α
(d) Pr(dn (i) − m ≥ βηi ζ + (i)) ≤ (e/β)βηi ζ
+ (i)
for 0 ≤ α ≤ 1.
for β ≥ 2.
(e) W.h.p. ηi ≥ λ0 and ω ≤ i ≤ n1/2 implies that dn (i) ∼ ηi
(f ) W.h.p. ω ≤ i ≤ log30 n implies that dn (i) ∼ ηi
n 1/2
.
i
(g) W.h.p. ω ≤ i ≤ n1/2 implies dn (i) . max {1, ηi }
(h) W.h.p. n1/2 ≤ i ≤ n implies dn (i) ≤ n1/3 .
(i) W.h.p. 1 ≤ i ≤ log1/49 n implies that dn (i) ≥
(j) W.h.p. dn (i) ≥
n
log1/20 n
n 1/2
i
n 1/2
.
i
.
n1/2
.
log1/20 n
implies i ≤ log1/9 n.
Proof. We defer the proof, which is straightforward but tedious, to the appendix.
Remark 2.3. We will for the rest of the paper condition on the occurrence of E. All probabilities include this conditioning. We will omit the conditioning in the text in order to simplify
expressions.
3
Analysis of the main loop
Since the variation distance after Step 1 is o(1), it suffices to prove Theorem 1.2 under the
assumption that we begin Step 2, with v1 chosen randomly, exactly according to the stationary
distribution.
The main loop consists of Steps 2–7. Let v0 = 1 and v1 , v2 , . . . , vs for s ≥ 1 be the sequence of
vertices followed by the algorithm up to time s. Let ρt = vt+1 /vt , and define T1 , T2 by
T1 = min t : vt ≤ log30 n and T2 = T1 + 30ω log4/3 log n and T0 = min 2ω log4/3 n, T2 . (13)
We will prove, see Lemma 3.2, that
E(ρt ) ≤
3
for 1 ≤ t ≤ T0 .
4
(14)
Recalling that T is the time when Step 8 begins, we note that if T < t ≤ T0 then this statement
is meaningless. So, we will keep to the following notational convention: if Xt is some quantity
that depends on t ≤ T and t > T then Xt = XT .
9
Now, roughly speaking, if r = 2 log4/3 n and µ is the number of steps in the main loop, then we
would hope to have
1
1
Pr(µ ≥ r) ≤ Pr ρ0 ρ1 · · · ρr ≥
≤ n E(ρ0 ρ1 · · · ρr ) ≤
n
n
and so w.h.p. the algorithm will complete the main loop within 2 log4/3 n steps. Unfortunately,
we cannot justify Q
the last inequality, seeing as the ρt are not independent. I.e. we cannot replace
E(ρ0 ρ1 · · · ρr ) by ri=0 E(ρi ). We proceed instead as in the next lemma.
Lemma 3.1. Assuming (14) we have the w.h.p. DCA completes the main loop in at most T0
steps with SUCCESS.
Proof. We let s0 denote the number of vertices visited by the main loop, and then define Zs =
ρ0 ρ1 · · · ρs for s ≤ s0 , and Zs = ρ0 ρ1 · · · ρs0 ( 43 )s−s0 for s > s0 .
Suppose first that T1 > ω log4/3 n. Now (14) and Jensen’s inequality implies that for s ≥ 1,
min(s,s0 )
E(log(Zs )) =
X
E(log(ρi )) +
i=0
s
X
log 43
min(s,s0 )+1
≤
min(s,s0 )
X
log E(ρi ) +
i=0
s
X
log 43 ≤ s log(3/4). (15)
min(s,s0 )+1
Now
log(Zs ) ≥ (s − s0 ) log(3/4) − log n ≥ s log(3/4) − log n
(16)
since ρ1 ρ2 . . . ρs0 ≥ 1/n.
Now let
α = Pr(log(Zs ) ≤ (1 − β)s log(3/4))
where α, β are to be determined. Then, (15), (16) imply that
(1 − α)(1 − β)s log(3/4) + α(s log(3/4) − log n) ≤ E(log(Zs )) ≤ s log(3/4).
(17)
Equation (17) then implies that
α≥
βs log(4/3)
.
βs log(4/3) + log n
Now putting s = ω log4/3 n and β = 1/2 we see that (18) becomes
α ≥1−
2
= 1 − o(1).
ω+2
So w.h.p. after at most ω log4/3 n steps, we will have exited the main loop with SUCCESS.
10
(18)
Suppose now that T1 ≤ ω log4/3 n. Using the argument that gave us (18) we obtain
T − T1 ≤ ω log4/3 log30 n w.h.p.
To prove Lemma 3.2, we will use a method of deferred decisions, exposing various parameters of Gn as we proceed. At time t, we will consider all random variables in the model from
Section 2.1 as being exposed if they have affected the algorithm’s trajectory thus far, and condition on their particular evaluation. To reduce the conditioning necessary, we will actually
analyze a modified algorithm, NARROW-DCA(τ ), and then later show that the trajectory of
NARROW-DCA(τ ) is the same as that of the DCA algorithm, w.h.p., when identical sources
of randomness are used.
NARROW-DCA(τ ) is the same as the DCA algorithm, except that for the first τ rounds of
the algorithm, a modified version of Step 4 is used:
Modified Step 4 Let
Ct = w1 , w2 , . . . , wm/2
be the m/2 neighbors of vt of largest degree from {1, . . . , Ψvt } where Ψ := (log log n)10 .
For rounds τ + 1, τ + 2, . . . , the behavior of NARROW-DCA(τ ) is the same as for DCA.
Notice that NARROW-DCA “cheats” by using the indices of the vertices, which we do not
actually expect to be able to use. Nevertheless, we will see later that w.h.p., for τ = 2ω log4/3 n,
the path of this algorithm is the same as for the DCA algorithm, justifying its role in our analysis.
3.1
Analyzing one step
Our analysis of one step of the main loop consists of the following lemma:
Lemma 3.2. Let ρt be the ratio of vt+1 /vt which appears in a run of the algorithm NARROWDCA(t). Then for all t ≤ T0 (see (13)), we have that
E(ρt ) ≤
3
4
and
Pr (ρt ≥ Ψ) ≤
1
.
log2 n
(19)
The first statement ensures that NARROW-DCA(t) makes progress in expectation in the tth
jump. The second part of this statement implies by induction that for any t ≤ ω log n, the
behavior of NARROW-DCA(t) is identical to the behavior of the DCA algorithm for the first
t steps. Thus together these statements give (14).
To prove Lemma 3.2, we will prove a stronger statement which is conditioned on the history of
the algorithm at time t. The history Ht of the process at the end of step t consists of
11
(H1) The sequence v1 , v2 , . . . , vt .
(H2) The left-choices λ(vs , 1), λ(vs , 2), . . . , λ(vs , m), 1 ≤ s < t and the corresponding left neighbors NL (vs ) = {u1,s , u2,s , . . . , um,s }. These are the m ℓ′i s that correspond to the m ri ’s
associated with vs as defined at the beginning of Section 2.1.
(H3) The lists u′1,s , u′2,s , . . . , u′r,s of all vertices u′k,s which have the property that (i) vs ∈ NL (u′k,s)
and (ii) u′k,s ≤ Ψvs for 1 ≤ s < t. (It is important to notice that s < t here.)
(H4) The values ηvi and the intervals Ivi for i = 1, 2, . . . , t.
(H5) The values ηw and the intervals Iw and the degrees deg(w), for w ∈
Here,
St
i=1
N(vi ).
N(v) = NL (v) ∪ NR (v) where NR (v) = {w ≤ Ψv : v ∈ NL (w)} .
We note that at any step t, and for a fixed random sequence used in the NARROW-DCA(t)
algorithm, Ht contains all random variables which have determined the behavior of the algorithm
so far, in the sense that if we modify any random variables from the random graph model
described in Section 2.1 while preserving all values in the history, then the trajectory of the
algorithm will not change. We write Ht to refer to a particular evaluation of the history (so that
we will be conditioning on events of the form Ht = Ht ).
Structure of the proof
The essential structure of our proof of Lemma 3.2 is as follows:
Part 1 We will define the notion of a typical history Ht .
Part 2 We will prove that for t ≤ T0 and any typical history Ht , random variables ηv which are
not explicitly exposed in Ht are essentially unconditioned by the event Ht = Ht (Lemma
3.3).
Part 3 We will prove by induction that Ht is typical w.h.p., for t ≤ T0 .
Part 4 We will use Part 2 and Part 3 to prove that for t ≤ T0 ,
E(ρt | Ht ) ≤
L3
2 21ηvt
+
+ 2
3
mL
m
and
Pr (ρt ≥ Ψ | Ht ) ≤
1
log2 n
(20)
by using using nearly unconditioned distributions of random variables which are not revealed in Ht to estimate the probabilities of various events. Here E(ρt | Ht ) is short for
E(ρt | Ht = Ht ). (Note that ηvt in (20) is simply a real number determined by Ht .) In this
context, we always work under the assumption that Ht is typical.
12
Part 5 We will also prove for t ≤ T0 that
E(ηvt+1 ) ≤ 4m.
(21)
Now the expected value statement in (19) follows from (21) and the first part of (20), by removing
the conditioning on Ht .
Part 1
Let Pt denote the sequence of vertices v1 , v2 , . . . , vt determined by the history Ht . We now
define the notion of a typical history Ht . For this purpose, we consider the reordered values
(t)
(t)
(t)
0 ≤ λ1 < λ2 < · · · < λN (t) where
n
o
(t)
(t)
(t)
(t)
Λ0 = λ1 , λ2 , . . . , λN (t) = {λ(vs , i) : 1 ≤ s ≤ t, 1 ≤ i ≤ m} .
(t)
(t)
Given this we define v = vj to be the index such that λj ∈ Iv and then let
n
o
(t)
(t)
VL = vj : 1 ≤ j ≤ N(t) .
We also define
(t)
VR = {v : v ∈ NR (Pt )}
Now let us reorder
n
o
(t)
(t)
(t)
(t)
(t)
V (t) = x1 < x2 < · · · < xM (t) = VL ∪ VR .
(t)
(t)
We define the extreme points x0 = 0 and xM (t)+1 = n + 1 and define
M (t)+1
(t)
Xj
=
(t)
[xj−1
+
(t)
1, xj
− 1] and X
(t)
=
(t)
[
Xj = [n] \ V (t)
(t)
(t)
and Nj = |Xj |,
j=1
(t)
Uj = [Wx(t)
j−1 +1
, Wx(t) −1 ] and U (t) =
j
N
[
(t)
Uj
(t)
(t)
and Lj = |Uj |.
j=1
A typical history Ht , t ≤ T0 is now one with the following properties:
(S1) There do not exist s1 , s2 ≤ t such that either (i) s1 ≤ t − 2 and vs1 and vs2 are neighbors
or (ii) s1 ≤ t − 3 and there exists a vertex w such that w ∈ N(vs1 ) ∩ N(vs2 ). (We say that
the path is self-avoiding.)
(S2) The points of Λ(t) are well-separated, in the following sense:
(
(t)
log2 n
xj−1 ≥ log30 n.
(t)
(t)
|xj − xj−1 | ≥
log1/400 n Otherwise.
13
(22)
We observe that
(T1) If Ht is typical then vj+1 is chosen from X (j) for all j < t.
(t)
(t)
(T2) Each Uj is the union of intervals Iv , v ∈ Xj .
Part 2
We prove the following:
Lemma 3.3. For any vertex v ∈ X (t) , any interval R ⊆ R, and any typical history Ht , we have
that v ∈
/ Pt ∪ N(Pt ) implies
Pr (ηv ∈ R | Ht ) ∼ Pr (ERL(m) ∈ R) .
(23)
The following lemma is the starting point for the proof of Lemma 3.3.
(t)
Lemma 3.4. Let j ∈ [M(t)+1], let Ht be any typical history, and let X ′ be the value of Xj in Ht .
Then the distribution of the random variables ηv , v ∈ X ′ conditioned on Ht = Ht is equivalent
to
of the random variables ηv , v ∈ X ′ conditioned only on the relationship
P the distribution
2
2
v∈X ′ ηv = A1 − A0 , where A1 , A0 are the values of Wx(t) −1 and Wx(t) +1 , respectively, in Ht .
j
j−1
Proof. Suppose we fix everything except for ηv , v ∈ X ′ . By everything we mean every other ηw
and all of the λ(v, i) and the random bits we use to make our choices in Step 5 of DCA; we
′
′
′
let Ht be the corresponding
Phistory. Suppose now that we replace ηv , v ∈ X with ηv , v ∈ X
without changing the sum v∈X ′ ηv . Then Wx(t) +1 remains the same, as it depends only on ηv
j−1
for v ∈
/ X ′ , and thus Wx(t) −1 remains the same as well, since the difference A21 − A20 is unchanged.
j
In particular, this implies that Ht remains a valid history. We confirm this by induction. Suppose
that Hs , s < t remains valid. We first note that because the λ(vs , i) are unchanged, none of vs′ s
(t)
left neighbors are in Xj . Also, NR (vs ) and the vertex degrees for w ∈ NR (vs ) will not be affected
(t)
by the change, even if vs < min Xj . So Hs+1 will be unchanged, completing the induction.
We are now ready to prove Lemma 3.3.
(t)
(t)
Proof of Lemma 3.3. Suppose that v ∈ X ′ = Xj , then M = Nj
Lemma 3.4 to write
Pr(ηv ≤ x | Ht ) = Pr ηv ≤ x
X
w∈X ′
14
ηw =
A21
−
≥ ζn → ∞. We now use
A20 ,
!
where A1 and A0 are the values of Wx(t) −1 and Wx(t)
j
j−1)+1
, respectively, in Ht , so that A1 − A0 is
(t)
the value of Lj in Ht .
Now from (P1) we have that A := A21 − A22 ∈ [(1 − ε)mM, (1 + ε)mM] for M = |X ′ | w.h.p., for
any ε > 0. Thus we fix any µ ∈ [(1 − ε)mM, (1 + ε)mM] and show that
!
X
ηw = µ = (1 + O(ε)) Pr (ERL(m) ≤ x) .
Pr ηv ≤ x
w∈X ′
The lemma follows since ε is arbitrary.
We write
Pr ηv ≤ x
X
w∈X ′
x
ηw = µ
!
η m−1 e−η (µ − η)(M −1)m−1 e−(µ−η) (Mm − 1)!
·
· M m−1 −µ dη
((M − 1)m − 1)!
µ
e
η=0 (m − 1)!
(M −1)m−1 Q
Z x m−1 −η
1 − µη
eη m
i=1 (Mm − i)
η
e
·
dη
=
µm
η=0 (m − 1)!
2
Z x m−1 −η
m
η
η
η
e
dη
·
1
+
O
· exp η − ((M − 1)m − 1)
+O
=
µ
µ2
M
η=0 (m − 1)!
Z x m−1 −η
η
e
= (1 + O(ε))
dη.
η=0 (m − 1)!
=
Z
Here we used that Ht typical implies that M ≥ log1/400 n → ∞.
Part 3
In the next section we will need a lower bound on vt+1 . Let
(
1
v ≥ log30 n
log3 n
φv =
1
. v < log30 n.
(log log n)3
Lemma 3.5. W.h.p. ρt ≥ φvt for 1 ≤ t ≤ T0 .
Proof. The values of λ(vt , i), i = 1, 2, . . . , m are unconditioned by Ht , see (H2). It then follows
from (P2) that if vt ≥ log30 n then
Pr(vt+1 ≤ φvt vt | Ht ) . m
Wφvt
m
. mφ1/2
.
vt =
Wvt
log3/2 n
There are O(ω log n) choices for t and so this deals with vt ≥ log30 n.
15
(24)
Now there are O(log log n) choices of t ∈ [T1 , T0 ] for which vt ≤ log30 n. In this case we can
replace the RHS of (24) by 1/(log log n)3/2 .
We will also need to bound the size of NR (vt ) for all t.
Lemma 3.6. W.h.p., for all t ≤ T0 ,
|NR (vt )| ≤
(
log3 n
(log log n)20
vt ≥ log30 n.
vt ≤ log30 n.
Proof. The size of NR (v), v = vt is stochastically bounded by Bin(Ψv, ηv /v). This is because if
w ∈ NR (v) then w ≤ Ψv. Also, for any such w, the probability that it has v as a left neighbor
is at most mwv /Ww . ηv /(vw)1/2 ≤ ηv /v. This uses property (S1) to see that the values of
λ(w, i), i = 1, 2, . . . , m are unconditioned by Ht . Thus, if θv = log3 n if v ≥ log30 n and equal to
(log log n)20 otherwise,
θ
ηv θv
Ψv
eΨηv v
Pr(|NR (v)| ≥ θv | Ht ) ≤
≤
.
(25)
θv
v
θv
3
If v ≥ log30 n then the RHS of (25) is at most (e/ log n)log n which is clearly small enough to han20
dle T possible values for t. If v < log30 n then the RHS of (25) is at most (40e/(log log n)9 )(log log n)
which is small enough to handle O(ω log log n) possible values for t such that v < log30 n.
Continuing Part 3, we now show that the DCA walk doesn’t contain cycles.
Lemma 3.7. W.h.p. the path Pt , t ≤ T0 is self avoiding.
Proof. We proceed by induction and assume that the claim of the lemma is valid up to time
t − 1. Now consider the choice of vt .
Case 1: There is an edge vs vt where s ≤ t − 2:
(a): vt ∈ NL (vs ) ∩ NL (vt−1 ).
We bound the probability of this (conditional on E, Ht ) asymptotically by
X X mwv
X X
ηv
.
.
Wvt−1
2(vvt−1 )1/2
s∈[t−2] v∈NL (vs )
(26)
s∈[t−2] v∈NL (vs )
Here, and throughout the proof of Case 1, v denotes a possibility for vt and mwv /Wvt−1 bounds
the probability that vt−1 chooses v. Remember that these choices are still uniform, given the
history.
We split the sum in (26) as
X
X
s∈[t−2] v∈NL (vs )
vs >log30 n
ηv
+
2(vvt−1 )1/2
X
X
s∈[t−2] v∈NL (vs )
vs ≤log30 n
16
ηv
.
2(vvt−1 )1/2
Consider the first sum. There are less than t choices for s; m choices for v and ηv ≤ log n. Now
v ∈ NL (vs ) and Lemma 3.5 implies that v ≥ log27 n. So we can bound the first sum by
1
1
1
.
=o
(# of s) · (# of v) · (max ηv ) · 1/2 ≤ T0 · m · log n ·
v
log11 n
log27/2 n
Summing this estimate over t ≤ T0 gives o(1).
For the second sum, we bound the number of choices of s by O(ω log log n) and ηv by O(log log n),
since v ≤ vs . We use the fact (see Section 3.2) that vt−1 ≥ log1/100 n. So we can therefore bound
the second sum by
1
1
1
(# of s) · (# of v) · (max ηv ) · 1/2 ≤b ω log log n · m · log log n ·
=o
.
log1/200 n
log1/300 n
vt−1
(We use A ≤b B in place of A = O(B).)
There are O(ω log log n) choices for T0 ≥ t > s ≥ T1 and so we can sum this estimate over choices
of t.
(b): vt ∈ NL (vs ) ∩ NR (vt−1 ).
Using vt ∈ NR (vt−1 ), we bound the probability of this asymptotically by
X
X
s∈[t−2] v∈NL (vs )
vs >log30 n
ηvt−1
+
2(vvt−1 )1/2
X
X
s∈[t−2] v∈NL (vs )
vs ≤log30 n
ηvt−1
.
2(vvt−1 )1/2
For the first sum we use the argument of Case (a) without any change, except for bounding ηvt−1
by log n as opposed to bounding ηv by the same. This gives a bound
1
1
1
(# of s) · (# of v) · (max ηvt−1 ) · 1/2 ≤b T0 · m · log n ·
.
=o
v
log11 n
log27/2 n
This is small enough to inflate by the number of choices for t.
For the second sum we split into two cases: (i) vt−1 ≥ log30 n and (ii) vt−1 < log30 n. This enables
us to control ηvt−1 . For the first case we obtain
1
1
1
(# of s) · (# of v) · (max ηvt−1 ) · 1/2 ≤ ω log log n · m · log n ·
=o
.
log15 n
log13 n
vt−1
The RHS is small enough to handle the O(ω log n) choices for t.
For the second case we obtain
(# of s) · (# of v) · (max ηvt−1 ) ·
1
1/2
vt−1
≤ ω log log n · m · log log n ·
1
log1/200 n
The RHS is small enough to handle the O(ω log log n) choices for t.
17
=o
1
log1/300 n
.
(c): vt ∈ NR (vs ) ∩ NL (vt−1 ).
Using vt ∈ NL (vt−1 ), we bound the probability of this asymptotically by
X
X
X
X
ηv
ηv
+
.
1/2
2(vvt−1 )
2(vvt−1 )1/2
s∈[t−2] v∈NR (vs )
vs ≤log29 n
s∈[t−2] v∈NR (vs )
vs >log29 n
For the first sum we use v ≥ vs and the argument of Case (a) without change, but notice we
split over vs > log29 n or not here. This gives a bound of
1
1
1
(# of s) · (# of v) · (max ηv ) · 1/2 ≤ T0 · m · log n ·
=o
.
v
log12 n
log29/2 n
For the second sum we use v ≤ Ψvs to bound v by log30 n. We also use Lemma 3.6 to bound the
number of choices of v by (log log n)20 . This gives a bound of
1
1
1
20
.
(# of s)·(# of v)·(max ηv )· 1/2 ≤b ω log log n·(log log n) ·log log n· 1/200 = o
log
n
log1/300 n
vt−1
(d): vt ∈ NR (vs ) ∩ NR (vt−1 ).
Using vt ∈ NR (vt−1 ), we bound the probability of this asymptotically by
X
X
s∈[t−2] v∈NR (vs )
vs >log30 n
ηvt−1
+
2(vvt−1 )1/2
X
X
s∈[t−2] v∈NR (vs )
vs ≤log30 n
ηvt−1
.
2(vvt−1 )1/2
For the first sum we use v ≥ vs and Lemma 3.6 to bound the number of choices for v and then
we have a bound of
1
1
1
3
.
=o
(# of s) · (# of v) · (max ηvt−1 ) · 1/2 ≤ T0 · log n · log n ·
v
log15 n
log9 n
For the second sum we split into two cases: (i) vt−1 ≥ log30 n and (ii) vt−1 < log30 n. This enables
us to control ηvt−1 . We also use Lemma 3.6 to bound the number of choices for v in each case.
Thus in the first case we have the bound
1
1
1
3
(# of s) · (# of v) · (max ηvt−1 ) · 1/2 ≤b ω log log n · log n · log n ·
=o
.
log15 n
log10 n
vt−1
In the second case we have
(# of s)·(# of v)·(max ηvt−1 )·
1
1/2
vt−1
20
≤b ω log log n·(log log n) ·log log n·
1
log1/200 n
=o
1
log1/300 n
Case 2: There is a path vs , v, vt where s < t.
The calculations that we have done for Case 1 carry through unchanged. We just replace vt−1 by
vt throughout the calculation and treat v as an arbitrary vertex as opposed to a choice of vt .
18
.
(t)
The xj are separated
We now prove that w.h.p. points λi are well-separated. Let
J1 = j : vj ≥ log30 n .
Lemma 3.8. Equation (22) holds w.h.p. for all t ≤ T0 .
Proof. We consider cases.
(t)
(t)
(t)
Case 1: xj−1 , xj ∈ VR .
For this we write
ζv,w =
(
log2 n
min {v, w} ≥ log30 n.
.
log1/300 n Otherwise.
Pr(∃1 ≤ s ≤ t, v ∈ NR (vs ), w ∈ NR (vt ) : |v − w| ≤ ζv,w | E, Ht) ≤
X
X
ηvs ηvt
.
(vs vt vw)1/2
1≤s≤t≤T
0
≤
≤
X
v∈NR (vs ),w∈NR (vt )
|v−w|≤ζv,w
X
ζv,w ηvs ηvt
(v − ζv,w )(vs vt )1/2
(27)
1≤s≤t≤T0 v∈NR (vs )
∗
X ζs,t
ηvs ηvt |NR (vs )|
2
.
(vs vt )1/2
1≤s≤t≤T0
(28)
∗
Here ζs,t
will be a bound on the possible value of ζv,w in (27).
Case 1a: max {vs , vt } ≥ log29 n:
∗
In this case ζs,t
≤ log2 n and we can bound the summand of (28) by
∗
ζs,t
· log2 n · log3 n ·
1
log
29/2
n
=
1
log
15/2
n
.
Multiplying by a bound T02 on the number of summands gives a bound of o(1). Here, and in the
next case, we use Lemma 3.6 to bound |NR (vs )|.
Case 1b: max {vs , vt } < log29 n:
Here we have max {v, w} ≤ Ψ log29 n ≤ log30 n. In this case we can bound the summand of (28)
by
1
(log log n)5
1/300
2
20
log
n · (40m log log n) · (log log n) ·
.
=o
log1/100 n
log1/200
We only have to inflate this by (T0 − T1 )2 = O((ω log log n)2 ). This completes the case where
(t)
(t)
xj−1 , xj ∈ R(t) .
19
(t)
(t)
(t)
Case 2: xj−1 , xj ∈ VL
We first show that the gaps λj − λj−1 are large. Define
β1 =
and
log1/300 n
log15/2 n
and
β
=
.
2
n1/2
n1/2
(
β1
εj =
β2
and
σ1 =
λj = λ(vt , i), vt ∈ J1 .
otherwise.
1
log
15/2
n
and σ2 =
1
log
1/200
n
.
We drop the superscript t for the rest of the lemma.
Claim 3.9.
Pr(∃λj ∈ Λ0 : λj−1 > λj − εj | Ht ) = o(1).
Proof of Claim 3.9. This follows from the fact that
Pr(∃j : λj−1 > λj − εj ) ≤ o(1) + (1 + o(1))(m2 T12 σ1 + m2 (T0 − T1 )(T1 σ1 + (T0 − T1 )σ2 )).
We have fewer than m2 T12 choices for s = τ (j − 1), t = τ (j) ∈ J1 . Assume first that s < t.
Given such a choice, we have that w.h.p. Wvt & log15 n/n1/2 by (P2). Now λj will have been
chosen uniformly from 0 to ≈ Wvt and so the probability it lies in [λj−1, λj−1 + εj ] is at most
≈ β1 /Wvs , which explains the term m2 T12 σ1 . If s > t then we repeat the above argument with
[λj−1 , λj−1 + ε1 ] replaced by [λj − ε1 , λj ]
The term m2 (T0 − T1 )T1 σ1 arises in the same way with j − 1 ∈ J1 , j ∈
/ J1 or vice-versa.
The term m2 (T0 − T1 )2 σ2 arises from the case where j − 1, j ∈
/ J1 . Here we can only assume
1/200
1/2
that Wj & log
n/n . This follows from (P2), (P4) and Lemma 2.2 and the fact that
we exit the main loop with SUCCESS when we see a vertex of degree at least n1/2 / log1/100 n.
Assuming that s < t we see that the probability that λj lies in [λj−1 , λj−1 + ε2 ] is at most
β2 /Wvt ∼ β2 /(log1/200 n/n1/2 ) = o(1).
Given the Claim and (P4), (P5) we have that w.h.p.
(
n
1
β1 − log
1/2 ≥ 2 β1
n
Wv(t) −1 − Wv(t) +1 ≥
log n
j
j−1
≥ 21 β2
β2 − 40mnlog
1/2
j ∈ J1 .
j∈
/ J1
(29)
Now,
Wv(t) −1 − Wv(t)
j
j−1 +1
=
Υm(τ (j)−1)
Υmn+1
1/2
−
Υm(τ (j−1)+1)
Υmn+1
20
1/2
=
Υm(τ (j)−1) − Υm(τ (j−1)+1)
1/2
1/2
1/2
Υmn+1 (Υm(τ (j)−1) + Υm(τ (j−1)+1) )
.
Or,
(
β1 n1/2
1/2
1/2
1/2
ηu = (Wv(t) −1 − Wv(t) +1 )Υmn+1 (Υm(τ (j)−1) + Υm(τ (j−1)+1) ) ≥
j
j−1
β2 n1/2
(t)
j ∈ J1 .
.
j∈
/ J1
X
u∈Xj
It follows that w.h.p.
(t)
|xj−1
(t)
(t)
(t)
−
(t)
xj |
=
(t)
|Xj |
≥
(
β1 n1/2
log n
β2 n1/2
40m log log n
j ∈ J1 .
j∈
/ J1
.
(t)
Case 3: xj ∈ VL , xj−1 ∈ VR
Let θv = β1 , v ≥ log30 n and θv = β2 otherwise. We write
ηvs
wv + 2θv
·
1/2
(vvs )
Wvt
s,t,v,k
X ηv ηv
2n1/2 θv ηvs
s
+
.(30)
.
v(vs vt )1/2
v(vs vt )1/2
s,t,v,k
Pr(∃s < t, v, k : v ∈ NR (vs ), λ(vt , k) ∈ Iv ± θv | E, Ht ) ≤
X
We bound the sum in the RHS of (30) as follows: If max {v, vs } ≥ log30 n then we bound the
first sum by
n
X
1
1
1
2
≤b (ω 2 log2 n) · log2 n · log n ·
= o(1).
(#s, t, k) · log n ·
·
15
v log n
log15 n
v=1
We bound the second sum by
n
X
1
1
1
15/2
(#s, t, k) · 2 log
n · log n ·
·
≤ (ω 2 log2 n) · log15/2 n · log n · log n ·
= o(1).
15
15
v
log
n
log
n
v=1
When max {v, vs , vt } < log30 n we bound the first sum by
log30 n
(#s, t, k) · (40 log log n)2 ·
X 1
1
≤b
·
1/200
v log
n
v=1
(ω log log n)2 · (log log n)2 · log log n ·
1
log
1/200
log
1/200
n
= o(1). (31)
We bound the second sum by
log30 n
(#s, t, k) · log
1/300
n · 40 log log n ·
X 1
1
≤b
·
1/200
v
log
n
v=1
(ω log log n)2 · log1/300 n · log log n ·
1
n
= o(1). (32)
Finally, if max {v, vs } < log30 n ≤ log30 n then we have to replace (#s, t, k) in (31), (32) by
1/2
O(ω 2 log n log log n). But this is compensated by a factor 1/vt ≤ 1/ log15 n.
It follows that (29) holds w.h.p. and the proof continues as for Case 2.
21
Part 4
We now assume t ≤ T0 . We begin by showing that DCA only uncovers a small part of the
distribution of the η’s.
Let Ξt = Pt ∪ N(Pt ) and
St,j =
X
wv .
v∈Ξt
Lemma 3.10. W.h.p., St,j = o(Wj ) for log1/100 n ≤ j and 1 ≤ t ≤ T0 .
Proof. Assume first that j ≥ log30 n. It follows from (P2), (P3), (P5) and Lemma 3.6 that
w.h.p.
max ηvs
T0 log n(m + log3 n)
ω log5 n
St,j ≤ T0 ×
× (m + max |NR (vs )|) .
=O
.
n1/2
2mn1/2
n1/2
10
1/2
log n
j
.
=Ω
Wj ≥ (1 − o(1))
n
n1/2
This completes this case. Now assume that j ≤ log30 n. (P2), (P3), (P4) and Lemma 3.6 that
w.h.p.
1/2
ω log4/3 log n
j
20
St,j . 40m log log n ×
× (m + (log log n) ) ≪ Wj ∼
1/2
2mn
n
for log1/100 n ≤ j ≤ log30 n.
Dealing with left neighbors
The calculation of the ratio ρt takes contributions from two cases: where vt+1 is a left-neighbor
of vt , and where vt+1 is a right-neighbor of vt .
Lemma 3.11.
2
E(ρt 1vt+1 <vt | Ht ) ≤ .
3
Proof. Let D denote the (m/2)th largest degree of a vertex in NR (vt ). We write
X
E(ρt 1vt+1 <vt | Ht | D = d) Pr(D = d)
E(ρt 1vt+1 <vt | Ht ) =
d
ζd
≤
E
vt
d
ζD
,
=E
vt
X
22
Pr(D = d)
where ζd is the index of the smallest degree left neighbor of vt that has degree at least d. We
let ζd = 0 if there are no such left neighbors. We now couple ζ with a random variable that is
independent of the algorithm and can be used in its place.
Going back to Section 2.1 let us associate ℓk for k ≥ ω with an index µk chosen uniformly from
[⌊k/m⌋]. In this way, vertex i ≥ ω is associated with m uniformly chosen vertices ai,1 , ai,2 , . . . , ai,m
in [i − 1]. Furthermore, we can couple these choices so that if NL (i) = {bi,1 , bi,2 , . . . , bi,m } then
given we have (i) Pr(bi,j ≤ ai,j ) ≥ 1 − o(1) and (ii) bi,j ≤ 2ai,j for all i, j. This because
Pr(bi,j ≤ k) ∼ Wk /Wi ∼ (k/i)1/2 (giving (i)) and (k/i)1/2 ≥ k/i and (1 − o(1))(k/i)1/2 ≥ k/2i
(giving (ii)).
So now let µ be the index of the uniform choice associated with the largest degree left neighbor
of vt that has degree at least D. Thus
ζD
µ
1
2
.E
= + o(1) ≤ .
E
vt
vt
2
3
Dealing with right neighbors
It will be more difficult to consider the contribution of right-neighbors. In preparation, for
λ0 ≤ γ ≤ 1 − 1/m we define
∆iγ := m + γmζ(i)
where ζ(i), ζ + (i) are defined in (9), (10) respectively. We note that ηi ζ(i) is a lower bound for
the expected degree of vertex i, i ≥ ω, see Lemma 2.2(a). Note also that ηi ζ + (i) is an upper
bound for the expected degree of vertex i, i ≥ ω.
The parameter ∆iγ is a degree threshold. For a suitable parameter γ, we wish it to be the case
that there should be many left-neighbors but few right-neighbors which have degree greater than
∆iγ . We define
γi∗ = max γ : j ∈ NL (i) : dn (j) ≥ ∆iγ ≥ m/2 .
∆vγtv∗ is a lower bound on the degree needed for vertex j > vt to be considered by DCA as the
t
next vertex; thus we proceed by analyzing the distribution of γv∗t . We first derive upper bounds
for Pr(γv∗t ≤ γ | Ht ).
Lemma 3.12. There exists c1 > 0 such that
1/2
)c1 m + me−c1 γ
Pr(γv∗t ≤ γ | Ht ) . (γ 1/2 e1−γ
1/2
)c 1 m
Pr(γv∗t ≤ 45 | Ht ) . e−c1 m .
Pr(γv∗t ≥ γ | Ht ) . γ −c1 m ,
2
1
, 0≤γ≤ .
8
1
−c1 /γ 1/2
+ me
, 0≤γ≤ .
8
Pr(γv∗t ≤ γ | Ht ) . (γ 1/2 e1−γ
2
5
γ ≥ 10 .
23
1/2 m
(33)
(34)
(35)
(36)
Proof. For $j < v_t$, we define events $A_j = \{\eta_j \le \gamma^{1/2} m\}$ and $D_j = \{d_n(j) \le \Delta^{v_t}_\gamma\}$. We need to estimate $\Pr\big(\bigcap_{j\in S} D_j\big)$ for subsets $S \subseteq N_L(v_t)$ of size $m/2$. We write
$$\bigcap_{j\in S} D_j \subseteq \bigcap_{j\in S}\big(A_j \cup (\bar A_j \cap D_j)\big) \subseteq \bigcap_{j\in S} A_j \cup \bigcup_{j\in S} (\bar A_j \cap D_j). \qquad (37)$$
Now, using inequality (6) and equation (23), we see that if $0 \le \gamma \le 1/8$ then for $j < v_t$,
$$\Pr(\eta_j \le \gamma^{1/2} m \mid H_t) \lesssim m(\gamma^{1/2} e^{1-\gamma^{1/2}})^m. \qquad (38)$$
The RHS of (38) includes a factor of $1 + o(1)$ due to conditioning on $E, H_t$. So,
$$\Pr\Big(\bigcap_{j\in S} A_j \;\Big|\; H_t\Big) \lesssim \big(m(\gamma^{1/2} e^{1-\gamma^{1/2}})^m\big)^{|S|}. \qquad (39)$$
Furthermore, because $j \in N_L(v_t)$ implies that $v_t \ge j$ and hence $\zeta(v_t) \le \zeta(j)$,
$$\Pr\big((d_n(j) \le \Delta^{v_t}_\gamma) \wedge \bar A_j \mid H_t\big) \le \Pr\big(d_n(j) - m \le \gamma m \zeta(j) \mid H_t\big) \Pr(\eta_j > \gamma^{1/2} m \mid H_t) \lesssim \exp\left(-\frac{(1-\gamma^{1/2})^2}{2}\, \gamma^{1/2} m \zeta(j)\right). \qquad (40)$$
Explanation of (40): We remark first that the conditioning on $E, H_t$ only adds a $(1+o(1))$ factor to the upper bound on our probability estimate. We now apply Lemma 2.2(b) with $1 - \alpha = \gamma^{1/2}$ and $\eta_j \ge \gamma^{1/2} m$.
From (37) (summing over all $m/2$ subsets of $N_L(v_t)$) and (40) (summing over $N_L(v_t)$) we obtain
$$\Pr(\gamma^*_{v_t} \le \gamma \mid H_t) = \Pr\big(|\{j \in N_L(v_t) : d_n(j) \le \Delta^{v_t}_\gamma\}| \ge m/2 \;\big|\; H_t\big) \lesssim \binom{m}{m/2}\big(m(\gamma^{1/2} e^{1-\gamma^{1/2}})^m\big)^{m/2} + m\exp\left(-\frac{(1-\gamma^{1/2})^2}{2}\,\gamma^{1/2} m \zeta(v_t)\right). \qquad (41)$$
We observe that $j \in N_L(v_t)$ implies that $d_n(j) \ge m+1$. So,
$$m + 1 < \Delta^{v_t}_\gamma \text{ implies } \zeta(v_t) > \frac{1}{m\gamma}. \qquad (42)$$
Using (42) in (41) verifies (34), after bounding $\binom{m}{m/2}\big(m(\gamma^{1/2} e^{1-\gamma^{1/2}})^m\big)^{m/2}$ by $(\gamma^{1/2} e^{1-\gamma^{1/2}})^{c_1 m^2}$.
From (37) and (40),
$$\Pr(\gamma^*_{v_t} \le \gamma \mid H_t) \lesssim \big(m(\gamma^{1/2} e^{1-\gamma^{1/2}})^m\big)^{m/2} + \Pr(A_1 \mid H_t) + \Pr(\bar A_1 \mid H_t)^{-1}\, m \exp\left(-\frac{(1-\gamma^{1/2})^2}{2}\,\gamma^{1/2} m\, \zeta\!\left(\frac{9n}{10}\right)\right). \qquad (43)$$
Here
$$A_1 = \left\{\left|N_L(v_t) \cap \left[\frac{9n}{10}, n\right]\right| \ge \frac{m}{4}\right\}.$$
Explanation of (43): The first term is from (39). If $\bar A_1$ holds then $v_t$ has at least one left neighbor $j \le 9n/10$. The final term comes from using (40) and $\zeta(j) \ge \zeta(9n/10)$. The factor $\Pr(\bar A_1)^{-1}$ handles the conditioning on $\bar A_1$. The factor $m$ is the union bound for choices of $j$. Now $|N_L(v_t) \cap [\frac{9n}{10}, n]|$ is dominated by the binomial $\mathrm{Bin}(m, 1/10)$ and so $\Pr(A_1 \mid H_t) \le e^{-d_1 m}$. Now $\zeta(9n/10) \ge 1/20$ and plugging these facts into (43) yields (33). Here we have absorbed the $e^{-d_1 m}$ term into $m e^{-c_1 \gamma^{1/2} m}$ and we will do so again below.
We continue with the proof of (35). For $j \in N_L(v_t)$, we observe that if $d_n(j) \le \Delta^{v_t}_\gamma$ and $\gamma \le \frac{5}{4}$ then
$$d_n(j) - m \le \frac{5m}{4}\,\zeta(v_t).$$
We now estimate the probability that a uniform random choice of $j \in N_L(v_t)$ (for fixed $H_t$, which determines $v_t$) has certain properties. We first observe that
$$\Pr\left(j \ge \frac{3v_t}{5} \;\middle|\; H_t\right) \lesssim 1 - \frac{W_{3v_t/5}}{W_{v_t}} \sim 1 - \left(\frac{3}{5}\right)^{1/2} < \frac{2}{5}. \qquad (44)$$
(For this we used (P2).) Now (6) implies that
$$\Pr(\eta_j \le 0.99m \mid H_t) \le e^{-d_2 m}. \qquad (45)$$
Moreover, for $\eta_j > 0.99m$ and $j < 3v_t/5$, we have
$$\frac{\zeta(j)}{\zeta(v_t)} = \left(\frac{v_t}{j}\right)^{1/2} \cdot \frac{1 - \left(\frac{j}{n}\right)^{1/2} - \varepsilon}{1 - \left(\frac{v_t}{n}\right)^{1/2} - \varepsilon} \ge \left(\frac{v_t}{j}\right)^{1/2}, \quad \text{where } \varepsilon = \frac{5L\log\log n}{\omega^{3/4}\log n}. \qquad (46)$$
Thus we have
$$E(d_n(j) - m \mid H_t) \ge \eta_j \zeta(j) \gtrsim 0.99m\left(\frac{5}{3}\right)^{1/2} \zeta(v_t).$$
Now $0.99 \times (5/3)^{1/2} = 1.278\ldots > 1.01 \times 5/4$ and so
$$\Pr\left(d_n(j) - m \le \frac{5m\zeta(v_t)}{4} \;\middle|\; H_t\right) \le \Pr\left(d_n(j) - m \le \frac{\zeta(j)\eta_j}{1.01} \;\middle|\; H_t\right) \le e^{-d_3 \eta_j \zeta(j)} \le e^{-d_4 m} \qquad (47)$$
using Lemma 2.2(b). It follows from (44) and (45) and (47) that
$$\Pr\left(\gamma^*_{v_t} < \frac{5}{4} \;\middle|\; H_t\right) \le \Pr\left(\mathrm{Bin}\left(m,\ e^{-d_2 m} + e^{-d_4 m} + \frac{2}{5}\right) \ge \frac{m}{2}\right) \le e^{-d_3 m}.$$
This completes the proof of (35).
To deal with (36) we observe that if $d_n(j) \ge \Delta^{v_t}_\gamma$ and $\gamma \ge 10^5$ then
$$j \in N_L(v_t) \text{ and } j \le \frac{v_t}{\gamma^{1/2}}, \quad \text{or} \quad \eta_j \ge \gamma^{1/2}\left(\frac{j}{v_t}\right)^{1/2} m \ge \gamma^{1/4} m, \quad \text{or} \quad d_n(j) - m \ge \gamma^{3/4} \eta_j \zeta(j).$$
But
$$\Pr\left(j \in N_L(v_t) \text{ and } j \le \frac{v_t}{\gamma^{1/2}} \;\middle|\; H_t\right) \lesssim \sum_{l=2v_t/\gamma}^{v_t/\gamma^{1/2}} \frac{w_l}{W_{v_t}} \lesssim \frac{W_{v_t/\gamma^{1/2}}}{W_{v_t}} \lesssim \frac{1}{\gamma^{1/4}}. \qquad (48)$$
And, using (P3) and $\gamma \ge 10^5$,
$$\Pr(\eta_j \ge \gamma^{1/4} m \mid H_t) \lesssim \sum_{l=2v_t/\gamma}^{v_t} \frac{1}{2(v_t l)^{1/2}} \int_{\eta_l = \gamma^{1/4} m}^{\infty} \frac{\eta_l^m e^{-\eta_l}}{(m-1)!}\, d\eta_l \lesssim \sum_{l=2v_t/\gamma}^{v_t} \frac{e^{-\gamma^{1/4} m}}{2(v_t l)^{1/2}} \lesssim e^{-\gamma^{1/4} m}. \qquad (49)$$
Lastly, using (44), (45) and Lemma 2.2(d) and $\zeta^+(j) \lesssim \zeta(j)$ for $j \le 3n/5$ we have
$$\Pr\big(d_n(j) - m \ge \gamma^{3/4} \eta_j \zeta(j) \;\big|\; H_t\big) \le \frac{2}{5} + e^{-d_2 m} + e^{-\gamma^{3/4}\eta} \le 0.41. \qquad (50)$$
It follows from (48), (49) and (50) that
$$\Pr(\gamma^*_{v_t} \ge \gamma \mid H_t) \lesssim \Pr\left(\mathrm{Bin}\left(m,\ (1+o(1))\left(\frac{1}{\gamma^{1/4}} + e^{-\gamma^{1/4} m} + 0.41\right)\right) \ge \frac{m}{2}\right) \lesssim e^{-d_3 m}.$$
This completes the proof of the lemma.
Corollary 3.13. W.h.p. $\gamma^*_{v_s} \ge 1/(\log\log n)^2$ for $s = 1, 2, \ldots, T = O(\log n)$.

Proof. The value of $\gamma^*_{v_s}$ is determined when $v_s$ is first visited, and in this case we can apply Lemma 3.12. The result then follows directly from (34).
We now have a handle on the distribution of $\gamma^*_{v_t}$. We now put bounds on the expected number of $j > v_t$ that can be considered to be a candidate for $v_{t+1}$, conditioned on the value of $\gamma^*_{v_t}$. In particular, we let
$$D^{v_t}_\gamma = \left\{j > v_t : d_n(j) \ge \Delta^{v_t}_\gamma\right\}.$$
We will bound the size of $D^i_\gamma$ by dividing $D^i_\gamma$ into many parts and bounding each part; in particular, for $\kappa \in \mathbb{N}$ we let
$$J^{i,\kappa}_\gamma = \begin{cases} \left[i, \dfrac{i}{\gamma^2}\right] \cap D^i_\gamma & \kappa = 0, \\[6pt] \left(\dfrac{i}{\gamma^2}\left(1 + \dfrac{\kappa-1}{L}\right), \dfrac{i}{\gamma^2}\left(1 + \dfrac{\kappa}{L}\right)\right] \cap D^i_\gamma & 1 \le \kappa \le \dfrac{2n\gamma^2 L}{i}. \end{cases} \qquad (51)$$
Note that $J^{i,0}_\gamma = \emptyset$ if $\gamma \ge 1$.
Finally, we let
$$r^{i,\kappa}_\gamma := |J^{i,\kappa}_\gamma| \quad \text{and} \quad r^i_\gamma := \sum_{\kappa \ge 0} r^{i,\kappa}_\gamma \quad \text{and} \quad s^i_\gamma := \sum_{\kappa \ge 0}\sum_{j \in J^{i,\kappa}_\gamma} j \le \frac{i}{\gamma^2}\sum_{\kappa \ge 0}\left(1 + \frac{\kappa+1}{L}\right) r^{i,\kappa}_\gamma. \qquad (52)$$
Remark 3.14. We have that $E\big(\frac{2}{m}\cdot\frac{s^{v_t}_\gamma}{v_t} \mid H_t\big)$ is an upper bound on the expectation of the ratio $\rho_t = \frac{v_{t+1}}{v_t}$, conditioned on the event that $v_{t+1} > v_t$, since each right neighbor whose index is included in the sum $s^{v_t}_\gamma$ has probability of at most $\frac{2}{m}$ of being chosen by the algorithm.
Lemma 3.15. If $v_t \le n\left(1 - \frac{3L^3}{m}\right)$ then
$$E(r^{v_t}_\gamma \mid H_t) \le \frac{\eta_{v_t}}{\gamma L}\Big(L\max\{0, 1-\gamma\} + 7 + 10Le^{-c_2\gamma L}\Big). \qquad (53)$$
Moreover,
$$E\left(\frac{s^{v_t}_\gamma}{v_t} \;\middle|\; H_t\right) \le \frac{\eta_{v_t}}{\gamma^3 L}\Big(L\max\{0, 1-\gamma\} + 13 + 100Le^{-c_2\gamma L}\Big). \qquad (54)$$
If $v_t \le n/5$ and $\kappa \ge (\log\log n)^4$ and $\gamma \ge 1/(\log\log n)^2$ then
$$\Pr(r^{v_t,\kappa}_\gamma > 0) \le \frac{1}{\log^2 n}. \qquad (55)$$
Note that (55) implies the second inequality in (19).
Proof. Recall from Lemma 3.10 that w.h.p.,
$$S_{T,j} = o(W_j) \text{ for } j \ge \log^{1/100} n. \qquad (56)$$
We write
$$E(r^{v_t,\kappa}_\gamma \mid H_t) \lesssim \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{m w_{v_t}}{W_j - S_{t,j}} \int_{\eta_j=0}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j \qquad (57)$$
$$\lesssim \eta_{v_t} \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{1}{2(v_t j)^{1/2}} \int_{\eta_j=0}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j. \qquad (58)$$
Explanation of (57) and (58): We sum over the relevant $j$ and fix $\eta_j$. We multiply by the density of $\eta_j$ and integrate. Using (56) we see that
$$\frac{m w_{v_t}}{W_j - S_{t,j}} \sim \frac{m w_{v_t}}{W_j} \sim \frac{\eta_{v_t}}{2(v_t j)^{1/2}}.$$
This is asymptotically equal to the expected number of times $j$ chooses $v_t$ as a neighbor.
Thus
$$E(r^{v_t,\kappa}_\gamma \mid H_t) \lesssim \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}}\, I_j, \qquad (59)$$
where
$$I_j = \int_{\eta_j=0}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j \le 1.$$
If $m$ is large then
$$E\big(r^{v_t,0}_\gamma \mid H_t\big) \lesssim \sum_{j \in J^{v_t,0}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \lesssim \begin{cases} 0 & \gamma \ge 1, \\[4pt] \eta_{v_t}\dfrac{1-\gamma}{\gamma} & \gamma < 1. \end{cases} \qquad (60)$$
Continuing, for $\kappa \ge 1$, we write:
$$I_j \lesssim A_1 + A_2 + A_3, \qquad (61)$$
where
$$A_1 = \int_{\eta_j=0}^{(1-1/L)m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j\big)\, d\eta_j,$$
$$A_2 = \int_{\eta_j=(1-1/L)m}^{(1+1/L)m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j\big)\, d\eta_j,$$
$$A_3 = \int_{\eta_j=(1+1/L)m}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j\big)\, d\eta_j,$$
and then write $r^{v_t,\kappa}_\gamma = r^{v_t,\kappa,1}_\gamma + r^{v_t,\kappa,2}_\gamma + r^{v_t,\kappa,3}_\gamma$. Here $r^{v_t,\kappa,l}_\gamma$ is equal to the RHS of (59) with $I_j$ replaced by $A_l$. The implicit $(1+o(1))$ factor in (61) arises from replacing $\Pr(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j, H_t)$ by $\Pr(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j)$ in the integrals, i.e., ignoring the conditioning due to $H_t$. Since $j > v_t$, the only effect of $H_t$ on $W_j$ is through $w_{v_t}$. Here we have that w.h.p.
$$W_j \ge (1-o(1))\left(\frac{v_t}{n}\right)^{1/2} \quad \text{and} \quad w_{v_t} \sim \frac{\eta_{v_t}}{2m(v_t n)^{1/2}} = O\left(\frac{\log n}{2m(v_t n)^{1/2}}\right) = o(W_j).$$
Case 1: $n_1 \le v_t < n/5$:
Note that in this case
$$\zeta(v_t) \ge \frac{1}{2}\left(\frac{n}{v_t}\right)^{1/2} \ge 1. \qquad (62)$$
In the following we use Lemma 2.1 to estimate the integrals over $\eta_j$. We observe that
$$E(r^{v_t,\kappa,1}_\gamma \mid H_t) \lesssim \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=0}^{(1-\frac{1}{L})m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j$$
$$\lesssim \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \left(\int_{\eta_j=0}^{(1-\frac{1}{L})m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j\right) \cdot \begin{cases} 1 & 1 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_0\gamma m\zeta(v_t)} & \kappa > 10L. \end{cases} \qquad (63)$$
Explanation of (63): We remark first that the conditioning on $H_t$ only adds a $(1+o(1))$ factor to the upper bound on our probability estimate. We will use Lemma 2.2 to bound the probability that degrees are large. Now with our bound on $v_t$ and within the range of integration, the ratio of $\Delta^{v_t}_\gamma - m$ to the mean of $d_n(j) - m$ is
$$\frac{\Delta^{v_t}_\gamma - m}{\eta_j\zeta^+(j)} = \frac{\gamma m\left(\frac{n}{v_t}\right)^{1/2}\left(1 - \left(\frac{v_t}{n}\right)^{1/2} - o(1)\right)}{\eta_j\left(\frac{n}{j}\right)^{1/2}\left(1 - \left(\frac{j}{n}\right)^{1/2} + o(1)\right)} \gtrsim \frac{\gamma m}{\eta_j}\left(\frac{j}{v_t}\right)^{1/2} \ge \frac{L}{L-1}\left(1 + \frac{\kappa-1}{L}\right)^{1/2} \ge \left(1 + \frac{\kappa}{L}\right)^{1/2} \quad \text{when } \kappa \ge 10L. \qquad (64)$$
We then use (11) and Lemma 2.2(d) with $\beta = (\kappa/L)^{1/2}$.
Continuing, we observe that
$$\left(1 + \frac{\kappa+1}{L}\right)^{1/2} - \left(1 + \frac{\kappa-1}{L}\right)^{1/2} \le \frac{1}{L}$$
and so
$$E(r^{v_t,\kappa,1}_\gamma \mid H_t) \le \frac{\eta_{v_t} e^{-m/(2L^2)}}{\gamma}\left(\left(1+\frac{\kappa+1}{L}\right)^{1/2} - \left(1+\frac{\kappa-1}{L}\right)^{1/2}\right) \cdot \begin{cases} 1 & 1 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_0\gamma m\zeta(v_t)} & \kappa > 10L, \end{cases} \qquad (65)$$
$$\le \frac{\eta_{v_t} e^{-m/(2L^2)}}{\gamma L} \cdot \begin{cases} 1 & 1 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_0\gamma m\zeta(v_t)} & \kappa > 10L. \end{cases} \qquad (66)$$
Continuing, it follows from (65) that
$$\sum_{j \in J^{v_t,\kappa}_\gamma} \frac{1}{j^{1/2}} \le \frac{v_t^{1/2}}{\gamma L}. \qquad (67)$$
Next,
$$E(r^{v_t,\kappa,2}_\gamma \mid H_t) \le \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=(1-1/L)m}^{(1+1/L)m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j$$
$$\le \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \cdot \begin{cases} 1 & \kappa \le 3, \\ \exp\left\{-\dfrac{d_1\gamma m\zeta(v_t)}{L^2}\right\} & 4 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_0\gamma m\zeta(v_t)} & \kappa > 10L, \end{cases} \qquad (68)$$
$$\le \frac{\eta_{v_t}}{\gamma L} \cdot \begin{cases} 1 & 1 \le \kappa \le 3, \\ e^{-d_1\gamma m\zeta(v_t)/L^2} & 4 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_0\gamma m\zeta(v_t)} & \kappa > 10L, \end{cases} \qquad (69)$$
where we have used (67).
Explanation for (68): We proceed in a similar manner to (64) and use
$$\frac{\Delta^{v_t}_\gamma - m}{\eta_j\zeta^+(j)} = \frac{\gamma m\left(\frac{n}{v_t}\right)^{1/2}\left(1 - \left(\frac{v_t}{n}\right)^{1/2} - o(1)\right)}{\eta_j\left(\frac{n}{j}\right)^{1/2}\left(1 - \left(\frac{j}{n}\right)^{1/2} + \varepsilon\right)} \ge 1 + \frac{1}{L} \quad \text{if } \kappa \ge 4,\ \eta_j \ge \left(1 - \frac{1}{L}\right)m.$$
Then we use Lemma 2.2(c),(d).
Continuing,
$$E(r^{v_t,\kappa,3}_\gamma \mid H_t) \le \sum_{j \in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=(1+1/L)m}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j. \qquad (70)$$
We bound the integral in (70) by something independent of $j$ and then, as above, there is a factor $\eta_{v_t}/\gamma L$ arising from the sum over $j$. For all $1 \le \kappa \le 80L+1$, we simply use the bound
$$\int_{\eta_j=m(1+\frac{1}{L})}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j \le \exp\left(-\frac{d_2 m}{L^2}\right). \qquad (71)$$
For $\kappa \ge 80L+2$, we split the integral from (70) into pieces $B^\kappa_1$, $B^\kappa_2$ (whose definition depends on $\kappa$), which we will bound individually. In particular, we use
$$B^\kappa_1 = \int_{\eta_j=m(1+\frac{\kappa-1}{L})^{1/4}}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j \le \int_{\eta_j=m(1+\frac{\kappa-1}{L})^{1/4}}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j \le e^{-d_3 m(\kappa/L)^{1/4}} \qquad (72)$$
and
$$B^\kappa_2 = \int_{\eta_j=m(1+\frac{1}{L})}^{m(1+\frac{\kappa-1}{L})^{1/4}} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid H_t, \eta_j\big)\, d\eta_j \le \Pr\left(d_n(j) \ge \Delta^{v_t}_\gamma \;\middle|\; H_t,\ \eta_j \le m\left(1+\frac{\kappa-1}{L}\right)^{1/4}\right) \le \left(\frac{eL^{1/4}}{\kappa^{1/4}}\right)^{d_4\gamma m\zeta(v_t)} \qquad (73)$$
to bound the integral in (70) by $B^\kappa_1 + B^\kappa_2$ for all $\kappa \ge 80L+2$.
Therefore, gathering the many terms together (and using that $\kappa \le \frac{2nL\gamma^2}{v_t}$ from (51)) and relying on $m$ large to allow crude upper bounding, we see that
$$\frac{\gamma L}{\eta_{v_t}}\, E(r^{v_t}_\gamma \mid H_t) \lesssim L\max\{0, (1-\gamma)\} \text{ [from (60)]} + 10Le^{-m/(2L^2)} \text{ [from (66)]} + 10Le^{-d_1\gamma m\zeta(v_t)/L^2} \text{ [from (69)]}$$
$$+ \left(2 + e^{-m/(2L^2)}\right)\sum_{\kappa=10L}^{2nL\gamma^2/v_t} \left(\frac{eL^{1/2}}{\kappa^{1/2}}\right)^{d_0\gamma m\zeta(v_t)} \text{ [from (66) and (69)]} + 6 \text{ [from (69)]} + 100L\exp\left(-\frac{d_2 m}{L^2}\right) \text{ [from (71)]}$$
$$+ \sum_{\kappa=80L+2}^{2nL\gamma^2/v_t} \left(e^{-d_3 m(\kappa/L)^{1/4}} + \left(\frac{eL^{1/4}}{\kappa^{1/4}}\right)^{d_4\gamma m\zeta(v_t)}\right) \text{ [from (72) and (73)]}. \qquad (74)$$
We first observe that if $\frac{n}{v_t} < \frac{100}{\gamma^2}$ then the summations $\kappa = 10L, \ldots, 2nL\gamma^2/v_t$ etc. above are empty. For larger $n/v_t$ we can therefore assume that $\gamma m(n/v_t)^{1/2} \ge m$, which implies (see (62)) that $\gamma m\zeta(v_t) \ge m/2$, and then we can assume that
$$\sum_{\kappa=10L}^{2nL\gamma^2/v_t} \left(\frac{eL^{1/2}}{\kappa^{1/2}}\right)^{d_0\gamma m\zeta(v_t)} \le \frac{1}{1000} \quad \text{and} \quad \sum_{\kappa=80L+1}^{2nL\gamma^2/v_t} \left(\frac{eL^{1/4}}{\kappa^{1/4}}\right)^{d_4\gamma m\zeta(v_t)} \le \frac{1}{1000}. \qquad (75)$$
Plugging these estimates into (74) and making some simplifications, we obtain (53).
Going back to (52) we have
$$\frac{\gamma^3 L}{\eta_{v_t}}\, E\left(\frac{2s^{v_t}_\gamma}{m v_t} \;\middle|\; H_t\right) \le L\max\{0, (1-\gamma)\} + 200Le^{-m/(2L^2)} + 100Le^{-d_1\gamma m\zeta(v_t)/L^2}$$
$$+ \left(2 + e^{-m/(2L^2)}\right)\sum_{\kappa=10L}^{2nL\gamma^2/v_t} \frac{2\kappa}{L}\left(\frac{eL^{1/2}}{\kappa^{1/2}}\right)^{d_0\gamma m\zeta(v_t)} + 12 + 10^4 L\exp\left(-\frac{d_2 m}{L^2}\right) + \sum_{\kappa=80L+2}^{2nL\gamma^2/v_t} \frac{2\kappa}{L}\left(e^{-d_3 m(\kappa/L)^{1/4}} + \left(\frac{eL^{1/4}}{\kappa^{1/4}}\right)^{d_4\gamma m\zeta(v_t)}\right).$$
Making similar estimates to what we did for (75) gives us (54).
We obtain (55) from (P5), (66), (69), (72) and (73). Indeed, if $J^{v_t,\kappa}_\gamma \ne \emptyset$ then from its definition we must have $v_t \le \frac{2nL\gamma^2}{\kappa-1}$. Together with $v_t \le n/5$ we obtain that $\zeta(v_t) \ge \frac{\kappa^{1/2}}{2L^{1/2}\gamma}$. Thus, in this case,
$$\left(\frac{eL^{1/2}}{\kappa^{1/2}}\right)^{d_0\gamma m\zeta(v_t)} \le \left(\frac{eL^{1/2}}{(\log\log n)^2}\right)^{d_0 m(\log\log n)^2/2L^{1/2}} = o\left(\frac{1}{\log^{10} n}\right). \qquad (76)$$
This deals with the probabilities in (66) and (69). For (69) we rely on $m$ large to show that $e^{-d_3 m(\kappa/L)^{1/4}} = o(1/\log^{10} n)$. Equation (72) is dealt with in a similar manner to (66). Here we have $\left(\frac{eL^{1/4}}{\kappa^{1/4}}\right)^{d_4\gamma m\zeta(v_t)}$, which is the square root of (76).
Case 2: $n/5 \le v_t \le n\left(1 - \frac{3L^3}{m}\right)$:
The upper bound on $v_t$ implies that
$$m\zeta(v_t) \ge L^3.$$
Using the same definitions of $r^{v_t,\kappa,l}_\gamma$, $l = 1, 2, 3$ as above:
$$\sum_{\kappa\ge1} E(r^{v_t,\kappa,1}_\gamma \mid H_t) \le \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=0}^{(1-\frac{1}{L})m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j \le \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}}\, e^{-m/(2L^2)},$$
from Lemma 2.1(e),
$$\le \frac{\eta_{v_t}}{\gamma}\left(\frac{n}{v_t}\right)^{1/2} e^{-m/(2L^2)} \le \frac{5^{1/2}\eta_{v_t}}{\gamma}\, e^{-m/(2L^2)}.$$
Next,
$$\sum_{\kappa\ge1} E(r^{v_t,\kappa,2}_\gamma \mid H_t) \lesssim \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=(1-1/L)m}^{(1+1/L)m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, \Pr\big(d_n(j) \ge \Delta^{v_t}_\gamma \mid \eta_j\big)\, d\eta_j$$
$$\lesssim \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=(1-1/L)m}^{(1+1/L)m} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j \cdot \begin{cases} 1 & \kappa \le 3, \\ \exp\left\{-\dfrac{d_5\gamma m\zeta(v_t)}{L^2}\right\} & 4 \le \kappa \le 10L, \\ \big(e(L/\kappa)^{1/2}\big)^{d_6\gamma m\zeta(v_t)} & \kappa > 10L, \end{cases}$$
$$\le \frac{4\eta_{v_t}}{\gamma L}\, e^{-d_5\gamma L} + \sum_{\kappa > 10L} \frac{\eta_{v_t}}{\gamma L}\left(\frac{eL^{1/2}}{\kappa^{1/2}}\right)^{d_6\gamma L^3},$$
using $m\zeta(v_t) \ge L^3$. Finally,
$$\sum_{\kappa\ge1} E(r^{v_t,\kappa,3}_\gamma \mid H_t) \le \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \int_{\eta_j=(1+1/L)m}^{\infty} \frac{\eta_j^{m-1} e^{-\eta_j}}{(m-1)!}\, d\eta_j \le \sum_{\kappa\ge1}\sum_{j\in J^{v_t,\kappa}_\gamma} \frac{\eta_{v_t}}{2(v_t j)^{1/2}}\, e^{-m/(3L^2)},$$
from Lemma 2.1(d),
$$\le \frac{\eta_{v_t}}{\gamma}\, e^{-m/(3L^2)} \left(\frac{n}{v_t}\right)^{1/2} \le \frac{5^{1/2}\eta_{v_t}}{\gamma}\, e^{-m/(3L^2)}.$$
The above upper bounds are small enough to give the lemma in this case, without trouble.
We are now in a position to prove (20). We confirmed the second part of the statement (20)
above, using (55), so only the first part remains. The first part follows immediately from Lemma
3.11 and the following, by addition:
Lemma 3.16.
$$E(\rho_t 1_{v_{t+1}\ge v_t} \mid H_t) \le \frac{21\eta_{v_t}}{mL} + \frac{L^3}{m^2}.$$
Proof. We consider cases.
Case 1: $n_1 \le v_t \le n\left(1 - \frac{3L^3}{m}\right)$: Then,
$$E(\rho_t 1_{v_{t+1}\ge v_t} \mid H_t) \le I_1 + I_2 + I_3 + I_4,$$
where
$$I_1 = \int_{\gamma=1/8}^{5/4} E(\rho_t 1_{v_{t+1}\ge v_t} \mid \gamma^*_{v_t}=\gamma)\, d\Pr(\gamma^*_{v_t}\le\gamma) \le \int_{\gamma=1/8}^{5/4} \frac{2\times8^3}{mL}\,\eta_{v_t}\left(\frac{7L}{8} + 13 + 100Le^{-c_2\gamma L}\right) d\Pr(\gamma^*_{v_t}\le\gamma) \le \frac{1000\eta_{v_t}}{m}\, e^{-c_1 m}, \qquad (77)$$
by Remark 3.14, Lemma 3.15 and (35).
$$I_2 = \int_{\gamma=5/4}^{10000} \frac{2}{m\gamma^3 L}\,\eta_{v_t}\left(13 + 100Le^{-c_2\gamma L}\right) d\Pr(\gamma^*_{v_t}\le\gamma) \le \frac{20\eta_{v_t}}{mL}. \qquad (78)$$
$$I_3 = \int_{\gamma=10000}^{\infty} \frac{2}{m\gamma^3 L}\,\eta_{v_t}\left(13 + 100Le^{-c_2\gamma L}\right) d\Pr(\gamma^*_{v_t}\le\gamma) \le \frac{27\eta_{v_t}}{10^{15}Lm}\int_{\gamma=100}^{\infty}\gamma^{-cm}\, d\gamma = \frac{27\eta_{v_t}}{10^{15}Lm}\times\frac{1}{10^{2(cm-1)}(cm-1)}, \qquad (79)$$
from (36).
$$I_4 = \int_{\gamma=0}^{1/8} E(\rho_t 1_{v_{t+1}\ge v_t} \mid \gamma^*_{v_t}=\gamma)\, d\Pr(\gamma^*_{v_t}\le\gamma) \le e^{-d_0 m^{1/2}}. \qquad (80)$$
To obtain the term $e^{-d_0 m^{1/2}}$ in (80) we use (33) and (34) to obtain
$$\max\left\{\gamma^{-2}\Pr(\gamma^*\le\gamma) : \gamma\in[0,1/8]\right\} \le \max\left\{\big(\gamma^{1/2-4/(c_1m^2)}e^{1-\gamma^{1/2}}\big)^{c_1m^2} : \gamma\in[0,1/8]\right\} + \max\left\{m\gamma^{-2}\min\big\{e^{-c_1m\gamma^{1/2}}, e^{-c_1/\gamma^{1/2}}\big\} : \gamma\in[0,1/8]\right\}$$
$$\le \big(8^{4/(c_1m^2)-1/2}\, e^{1-1/8^{1/2}}\big)^{c_1m^2} + m^2 e^{-c_1m^{1/2}}.$$
The first case of the lemma now follows from (77), (78), (79) and (80).
Case 2: $v_t > n\left(1 - \frac{3L^3}{m}\right)$:
We observe first that $n \le v_t\left(1 + \frac{4L^3}{m}\right)$. Then we let $Z = d_n(v_t) - m$ be the number of right neighbors of $v_t$. Furthermore,
$$E(Z \mid H_t) \lesssim \sum_{j=v_t+1}^{n} \frac{m w_{v_t}}{W_j} \lesssim \sum_{j=v_t+1}^{v_t(1+\frac{4L^3}{m})} \frac{\eta_{v_t}}{2(v_t j)^{1/2}} \lesssim \eta_{v_t}\left(\left(1 + \frac{4L^3}{m}\right)^{1/2} - 1\right) \le \eta_{v_t}\,\frac{4L^3}{m}. \qquad (81)$$
Case 2a: $\eta_{v_t} \ge 1/L^{1/2}$.
We use (81) and Lemma 2.2(d) to prove
$$\Pr\left(Z \ge \frac{\eta_{v_t}}{L} \;\middle|\; H_t\right) \le e^{-d_1\eta_{v_t}L} \le e^{-d_1 L^{1/2}}. \qquad (82)$$
Then we can write
$$E(\rho_t 1_{v_{t+1}\ge v_t} \mid H_t) \le \left(1 + \frac{4L^3}{m}\right) e^{-d_1 L^{1/2}} + \frac{2\eta_{v_t}}{Lm} \le \frac{3\eta_{v_t}}{Lm}.$$
Explanation: $\rho_t$ will be at most $1 + \frac{4L^3}{m}$ if the unlikely event in (82) occurs. Failing this, the chance that $\rho_t > 1$ is at most
$$\frac{2Z}{m} \le \frac{2\eta_{v_t}}{Lm}.$$
Case 2b: $\eta_{v_t} < 1/L^{1/2}$.
It follows from (81) that $E(Z \mid H_t) \lesssim 4L^{5/2}/m$. It then follows from Lemma 2.2(d) that
$$\Pr\left(Z \ge \frac{L^3}{3m} \;\middle|\; H_t\right) \le e^{-d_2 L^{1/2}}.$$
We then have
$$E(\rho_t 1_{v_{t+1}\ge v_t} \mid H_t) \le \left(1 + \frac{4L^3}{m}\right) e^{-d_2 L^{1/2}} + \frac{2L^3}{3m^2} \le \frac{L^3}{m^2}.$$
Part 5
We now prove (21). To do this, we will obtain a recurrence for $E(\eta_{v_{t+1}} \mid H_t)$ and, at the end, obtain the bound $4m$ by averaging over the possible histories $H_t$. We begin by writing
$$E(\eta_{v_{t+1}} \mid H_t) = E(\eta_{v_{t+1}} 1_{v_{t+1}<v_t} \mid H_t) + E(\eta_{v_{t+1}} 1_{v_{t+1}>v_t} \mid H_t). \qquad (83)$$
We consider each term in (83) separately. For the first term, since
$$\eta_{v_{t+1}} 1_{v_{t+1}<v_t} \le \max\{\eta_l : 1 \le l < v_t,\ l \in N_L(v_t)\}\, 1_{v_{t+1}<v_t} \le \max\{\eta_l : 1 \le l < v_t,\ l \in N_L(v_t)\},$$
we have that
$$E(\eta_{v_{t+1}} 1_{v_{t+1}<v_t} \mid H_t) \le E\big(\max\{\eta_l : 1 \le l < v_t,\ l \in N_L(v_t)\} \mid H_t\big) = \int_{\eta=0}^{\infty}\Pr\big(\max\{\eta_l : 1 \le l < v_t,\ l \in N_L(v_t)\} \ge \eta \mid H_t\big)\, d\eta$$
$$= \int_{\eta=0}^{\infty}\Pr\big(\exists\, 1 \le l < v_t,\ l \in N_L(v_t) : \eta_l \ge \eta \mid H_t\big)\, d\eta \le \int_{\eta=0}^{\infty}\sum_{l=1}^{v_t-1}\Pr\big(l \in N_L(v_t) \text{ and } \eta_l \ge \eta \mid H_t\big)\, d\eta$$
$$\lesssim \int_{\eta=0}^{\infty}\sum_{l=1}^{v_t-1}\frac{w_l}{W_{v_t}}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta \lesssim \sum_{l=1}^{v_t-1}\int_{\eta=0}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l}{2m(lv_t)^{1/2}}\cdot\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta$$
$$\lesssim 2m + (1+o(1))\int_{\eta=2m}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta \lesssim 2m + \int_{\eta=2m}^{\infty}4e^{-3\eta/10}\, d\eta, \quad \text{from Lemma 2.1(c)},$$
$$\lesssim 2m + 20e^{-3m/5} \le 3m. \qquad (84)$$
We now bound the second term of (83). We consider two cases, according to properties of the history $H_t$ (which determines $v_t$ and $\eta_{v_t}$).
Case 1: $H_t$ is such that $v_t \le \left(1 - \frac{1}{\omega^{1/2}}\right)n$.
In this case, we have that
$$E(\eta_{v_{t+1}} 1_{v_{t+1}>v_t} \mid H_t) \le E\Big(\max\big\{\eta_l : v_t < l \le n,\ v_t \in N_L(l),\ d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}}\big\}\, 1_{v_{t+1}>v_t} \;\Big|\; H_t\Big) \le E\Big(\max\big\{\eta_l : v_t < l \le n,\ v_t \in N_L(l),\ d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}}\big\} \;\Big|\; H_t\Big).$$
So we have that
$$E(\eta_{v_{t+1}} 1_{v_{t+1}>v_t} \mid H_t) = \int_{\eta=0}^{\infty}\Pr\big(\max\{\eta_l : v_t < l \le n,\ v_t \in N_L(l),\ d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}}\} \ge \eta \mid H_t\big)\, d\eta$$
$$= \int_{\eta=0}^{\infty}\Pr\big(\exists\, v_t < l \le n,\ v_t \in N_L(l) : \eta_l \ge \eta,\ d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t\big)\, d\eta \le \sum_{l=v_t+1}^{n}\int_{\eta=0}^{\infty}\Pr\big((v_t \in N_L(l)) \wedge (\eta_l \ge \eta) \wedge (d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}}) \mid H_t, \eta_l\big)\, d\eta$$
$$\lesssim \sum_{l=v_t+1}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t, \eta_l\big)\, d\eta_l\, d\eta. \qquad (85)$$
Recall that in the final two lines, vt and ηvt are not random variables, but are the actual values of these random variables in the history Ht , so this is a deterministic upper bound on
E(ηvt+1 1vt+1 >vt | Ht ).
We split the sum in the RHS of (85) into E1 + E2 + E3 + E4 according to the ranges of l and η,
and bound each separately. The first part consists of
$$E_1 = \sum_{l=v_t+1}^{n}\frac{\eta_{v_t} 1_{l \le 4m^2v_t/(\gamma^*_{v_t})^2}}{2(lv_t)^{1/2}}\int_{\eta=0}^{2m}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t, \eta_l\big)\, d\eta_l\, d\eta.$$
Even though $v_t$ and $\eta_{v_t}$ are constants (determined by $H_t$), we caution that $\gamma^*_{v_t}$ and so also $E_1$ are random variables. Observe that we have that
$$\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t, \eta_l\big)\, d\eta_l \le \int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l \le 1,$$
which allows us to write
$$E_1 1_{\gamma^*_{v_t}\le5/4} \le 1_{\gamma^*_{v_t}\le5/4}\sum_{l=v_t+1}^{n}\frac{2m\eta_{v_t} 1_{l \le 4m^2v_t/(\gamma^*_{v_t})^2}}{2(lv_t)^{1/2}} \le \frac{5m^2\eta_{v_t}}{\gamma^*_{v_t}}\, 1_{\gamma^*_{v_t}\le5/4}. \qquad (86)$$
We will use this expression when we take the expectation over $\gamma^*_{v_t}\le5/4$. We also have that
$$\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big((d_n(l) \ge \Delta^{v_t}_{\gamma^*}) \wedge (\gamma^*_{v_t} > 5/4) \mid H_t, \eta_l\big)\, d\eta_l \le I_1 + I_2 + I_3, \qquad (87)$$
where
$$I_1 = \int_{\eta_l=\eta}^{7m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l \le m\left(\frac{7}{8}e^{1/8}\right)^m \le e^{-d_0 m}, \quad \text{from Lemma 2.1(a)},$$
$$I_2 = \int_{\eta_l=9m/8}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l \le e^{-d_1 m}, \quad \text{from Lemma 2.1(d)},$$
$$I_3 = \int_{\eta_l=7m/8}^{9m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big((d_n(l) - m \ge \gamma^*_{v_t} m\zeta(v_t)) \wedge (\gamma^*_{v_t} > 5/4) \mid H_t, \eta_l\big)\, d\eta_l \le \int_{\eta_l=7m/8}^{9m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\left(d_n(l) - m \ge \frac{5}{4}m\zeta(v_t) \;\middle|\; H_t, \eta_l\right)\, d\eta_l. \qquad (88)$$
We bound $I_3$ with two subcases:
Subcase 1a: $\zeta(l) > 0$.
$$I_3 \le \int_{\eta_l=7m/8}^{9m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\left(d_n(l) - m \ge \frac{10\zeta(v_t)}{9\zeta(l)}\cdot\frac{8}{9}\eta_l\zeta(l) \;\middle|\; H_t, \eta_l\right)\, d\eta_l \quad \left[\text{since } m \ge \frac{8}{9}\eta_l\right],$$
$$\le \int_{\eta_l=7m/8}^{9m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\cdot\begin{cases}\exp\left\{-\dfrac{(10\zeta(v_t)-9\zeta(l))^2\eta_l}{81\zeta(l)}\right\} & 10\zeta(v_t) \le 18\zeta(l),\\[6pt] \exp\left\{-\dfrac{(10\zeta(v_t)-9\zeta(l))\eta_l}{27}\right\} & 10\zeta(v_t) > 18\zeta(l),\end{cases}\, d\eta_l \quad [\text{from (2) and } l \ge v_t+1],$$
$$\le \begin{cases}\exp\left\{-\dfrac{7m\zeta(v_t)}{648}\right\} & 10\zeta(v_t) \le 18\zeta(l),\\[6pt] \exp\left\{-\dfrac{7m\zeta(v_t)}{216}\right\} & 10\zeta(v_t) > 18\zeta(l),\end{cases} \quad \le e^{-m\zeta(v_t)/100}. \qquad (89)$$
Subcase 1b: $\zeta(l) \le 0$.
In this case, we go back to (88) and use $\zeta^+(l)$ in place of $\zeta(l)$, see (10).
$$I_3 \le \int_{\eta_l=7m/8}^{9m/8}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\left(d_n(l) - m \ge \frac{10\zeta(v_t)}{9\zeta^+(l)}\,\eta_l\zeta^+(l) \;\middle|\; H_t, \eta_l\right)\, d\eta_l. \qquad (90)$$
For $\varepsilon$ as in (46) we see that $\zeta(l) \le 0$ implies that $l \ge n(1-\varepsilon)^2$. In which case
$$\zeta^+(l) \le \frac{2\varepsilon}{1-\varepsilon} \le 3\varepsilon. \qquad (91)$$
On the other hand, $v_t \le \left(1 - \frac{1}{\omega^{1/2}}\right)n$ implies that
$$\zeta(v_t) \ge \frac{1}{\omega^{1/2}} - 2\varepsilon \ge \frac{1}{2\omega^{1/2}}. \qquad (92)$$
Comparing (91) and (92), we see that $\zeta(v_t) \gg \zeta^+(l)$. From this and (3) with $\beta = \frac{10\zeta(v_t)}{9\zeta^+(l)} \ge \frac{1}{6\varepsilon\omega^{1/2}}$ we deduce that
$$\Pr\left(d_n(l) - m \ge \frac{10\zeta(v_t)}{9\zeta^+(l)}\,\eta_l\zeta^+(l) \;\middle|\; H_t, \eta_l\right) \le (6\varepsilon\omega^{1/2})^{10\eta_l\zeta(v_t)/9} \le (6\varepsilon\omega^{1/2})^{35m\zeta(v_t)/36}.$$
Plugging this estimate into (90) we obtain something stronger than (89), finishing Subcase 1b and giving that $I_3 \le e^{-m\zeta(v_t)/100}$ in all cases.
Having bounded the three terms in (87), we then have that
$$E_1 1_{\gamma^*_{v_t}>5/4} \le \sum_{l=v_t+1}^{n}\frac{\eta_{v_t} 1_{l \le 4m^2v_t/(\gamma^*_{v_t})^2}}{(lv_t)^{1/2}}\big(e^{-d_2 m} + e^{-m\zeta(v_t)/100}\big) \le \eta_{v_t}\left(\frac{5m}{\gamma^*_{v_t}}e^{-d_2 m} + e^{-m\zeta(v_t)/100}\cdot\frac{(n+1)^{1/2} - (v_t+1)^{1/2}}{v_t^{1/2}}\right)$$
$$\le \eta_{v_t}\big(4me^{-d_2 m} + 2\zeta(v_t)e^{-m\zeta(v_t)/100}\big) \le \eta_{v_t}\left(4me^{-d_2 m} + \frac{200}{m}\right). \qquad (93)$$
It follows from (86) and (93) that
$$E_1 \le \eta_{v_t}\left(\frac{5m^2}{\gamma^*_{v_t}}\, 1_{\gamma^*_{v_t}\le5/4} + 4me^{-d_2 m} + \frac{200}{m}\right).$$
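As a quick sanity check of the last step in (93) (our own verification, not part of the original argument), the function $x \mapsto 2xe^{-mx/100}$ is maximized at $x = 100/m$, so that regardless of the value of $\zeta(v_t)$,
$$2\zeta(v_t)e^{-m\zeta(v_t)/100} \le \max_{x>0} 2xe^{-mx/100} = \frac{200}{em} \le \frac{200}{m}.$$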
We continue with the other parts of the RHS of (85):
$$E_2 = \sum_{l=v_t+1}^{n}\frac{\eta_{v_t} 1_{l \le 4m^2v_t/(\gamma^*_{v_t})^2}}{2(lv_t)^{1/2}}\int_{\eta=2m}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*} \mid H_t, \eta_l\big)\, d\eta_l\, d\eta$$
$$\le \sum_{l=v_t+1}^{4m^2v_t/(\gamma^*_{v_t})^2}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=2m}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta \le \sum_{l=v_t+1}^{4m^2v_t/(\gamma^*_{v_t})^2}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\times m\int_{x=2}^{\infty}e^{-3mx/10}\, dx, \quad \text{from Lemma 2.1(c)},$$
$$\le \sum_{l=v_t+1}^{4m^2v_t/(\gamma^*_{v_t})^2}\frac{10\eta_{v_t}e^{-3m/5}}{6(lv_t)^{1/2}} \le \frac{e^{-d_3 m}\eta_{v_t}}{\gamma^*_{v_t}}. \qquad (94)$$
Note that we absorbed an $O(m)$ factor into the expression in (94). This is valid because $m$ is large. We continue to do this where possible.
$$E_3 = \sum_{l=v_t+1}^{n}\frac{\eta_{v_t} 1_{l > 4m^2v_t/(\gamma^*_{v_t})^2}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\gamma^*_{v_t}(l/v_t)^{1/2}}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t\big)\, d\eta_l\, d\eta$$
$$\le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\gamma^*_{v_t}(l/v_t)^{1/2}}\int_{\eta_l=\gamma^*_{v_t}(l/v_t)^{1/2}}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta$$
$$\le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\gamma^*_{v_t}(l/v_t)^{1/2}}\exp\left\{-\frac{3\gamma^*_{v_t}(l/v_t)^{1/2}}{10}\right\} d\eta, \quad \text{from Lemma 2.1(c)},$$
$$\le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\,\gamma^*_{v_t}\left(\frac{l}{v_t}\right)^{1/2}\exp\left\{-\frac{3\gamma^*_{v_t}(l/v_t)^{1/2}}{10}\right\} \le \frac{\eta_{v_t}\gamma^*_{v_t}}{v_t}\int_{x=4m^2v_t/(\gamma^*_{v_t})^2}^{\infty}\exp\left\{-\frac{3\gamma^*_{v_t}(x/v_t)^{1/2}}{10}\right\} dx$$
$$\le \frac{\eta_{v_t}\gamma^*_{v_t}}{v_t}\times\frac{8v_t}{(\gamma^*_{v_t})^2}\int_{y=m}^{\infty}ye^{-3y/5}\, dy = \frac{\eta_{v_t}e^{-d_4 m}}{\gamma^*_{v_t}}.$$
$$E_4 = \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=\gamma^*_{v_t}(l/v_t)^{1/2}}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, \Pr\big(d_n(l) \ge \Delta^{v_t}_{\gamma^*_{v_t}} \mid H_t, \eta_l\big)\, d\eta_l\, d\eta$$
$$\le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=\gamma^*_{v_t}(l/v_t)^{1/2}}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta \le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=\gamma^*_{v_t}(l/v_t)^{1/2}}^{\infty}e^{-3\eta/10}\, d\eta$$
$$\le \sum_{l=4m^2v_t/(\gamma^*_{v_t})^2}^{n}\frac{5\eta_{v_t}}{3(lv_t)^{1/2}}\exp\left\{-\frac{3\gamma^*_{v_t}(l/v_t)^{1/2}}{10}\right\} \le \frac{2\eta_{v_t}}{v_t^{1/2}}\int_{x=4m^2v_t/(\gamma^*_{v_t})^2}^{\infty}x^{-1/2}\exp\left\{-\frac{3\gamma^*_{v_t}(x/v_t)^{1/2}}{10}\right\} dx$$
$$= \frac{4\eta_{v_t}}{\gamma^*_{v_t}}\int_{y=2m}^{\infty}e^{-3y/10}\, dy \le \frac{\eta_{v_t}e^{-d_5 m}}{\gamma^*_{v_t}}.$$
Thus,
$$E(\eta_{v_{t+1}} 1_{v_{t+1}>v_t} \mid H_t) \le E_1 + E_2 + E_3 + E_4 \le \begin{cases} \left(\dfrac{5m^2}{\gamma^*_{v_t}} + \dfrac{e^{-d_6 m}}{\gamma^*_{v_t}}\right)\eta_{v_t} \le \dfrac{7m^2}{\gamma^*_{v_t}}\,\eta_{v_t} & \gamma^*_{v_t} \le 5/4,\\[10pt] \left(\dfrac{e^{-d_7 m}}{\gamma^*_{v_t}} + \dfrac{200}{m}\right)\eta_{v_t} & \gamma^*_{v_t} > 5/4. \end{cases}$$
We now integrate with respect to the value of $\gamma^*_{v_t}$. (Note that $\gamma^*_{v_t}$ is actually a discrete random variable, so that $\Pr(\gamma^*_{v_t} \le \gamma \mid H_t)$ is discontinuous, but one can view this as a Riemann–Stieltjes integral. We write $\Pr^\dagger(\gamma^*_{v_t} \le \gamma)$ below in place of $\Pr(\gamma^*_{v_t} \le \gamma \mid H_t)$.) Using Lemma 3.12 we see that if $m$ is large then integrating over $\gamma$,
$$E(\eta_{v_{t+1}} 1_{v_{t+1}>v_t} \mid H_t) \le \eta_{v_t}\left(\int_{\gamma=0}^{5/4}\frac{7m^2}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + \int_{\gamma=5/4}^{\infty}\frac{e^{-d_7 m}}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + \frac{200}{m}\right)$$
$$\le \eta_{v_t}\left(\int_{\gamma=0}^{5/4}\frac{7m^2}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + e^{-d_8 m} + \frac{200}{m}\right)$$
$$= \eta_{v_t}\left(\int_{\gamma=0}^{1/m}\frac{7m^2}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + \int_{\gamma=1/m}^{5/4}\frac{7m^2}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + e^{-d_8 m} + \frac{200}{m}\right)$$
$$\le \eta_{v_t}\left(\left[\frac{7m^2}{\gamma}\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma)\right]_{\gamma=0}^{1/m} + \int_{\gamma=0}^{1/m}\frac{7m^2}{\gamma^2}\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma)\, d\gamma + \int_{\gamma=1/m}^{5/4}\frac{7m^2}{\gamma}\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + e^{-d_8 m} + \frac{200}{m}\right)$$
$$\le \eta_{v_t}\left(e^{-d_9 m^{1/2}} + \int_{\gamma=0}^{1/m}\frac{7m^2}{\gamma^2}\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma)\, d\gamma + \int_{\gamma=1/m}^{5/4}7m^3\, d\Pr{}^\dagger(\gamma^*_{v_t}\le\gamma) + e^{-d_8 m} + \frac{200}{m}\right), \quad \text{from (34)},$$
$$\le \eta_{v_t}\left(e^{-d_9 m^{1/2}} + e^{-d_{10} m^{1/2}} + 7m^4\Pr{}^\dagger\!\left(\frac{1}{m} \le \gamma^*_{v_t} \le \frac{5}{4}\right) + e^{-d_8 m} + \frac{200}{m}\right)$$
$$\le \eta_{v_t}\left(e^{-d_9 m^{1/4}} + e^{-d_{10} m^{1/2}} + 7m^4 e^{-c_1 m} + e^{-d_8 m} + \frac{200}{m}\right), \quad \text{from (35)},$$
$$\le \eta_{v_t}\left(e^{-d_{12} m^{1/4}} + \frac{200}{m}\right). \qquad (95)$$
Combining (84) and (95) via (83), we have that
$$E(\eta_{v_{t+1}} \mid H_t) \le 3m + \left(e^{-cm^{1/4}} + \frac{200}{m}\right)\eta_{v_t}.$$
This completes Case 1. Case 2 is much shorter.
Case 2: $H_t$ is such that $v_t > \left(1 - \frac{1}{\omega^{1/2}}\right)n$.
$$E(\eta_{v_{t+1}} \mid H_t) \lesssim \sum_{l=v_t+1}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\infty}\int_{\eta_l=\eta}^{\infty}\frac{\eta_l^{m-1}e^{-\eta_l}}{(m-1)!}\, d\eta_l\, d\eta \sim \sum_{l=v_t+1}^{n}\frac{\eta_{v_t}}{2(lv_t)^{1/2}}\int_{\eta=0}^{\infty}e^{-\eta}\sum_{i=1}^{m}\frac{\eta^{m-i}}{(m-i)!}\, d\eta \sim \sum_{l=v_t+1}^{n}\frac{m\eta_{v_t}}{2(lv_t)^{1/2}}$$
$$\lesssim m\eta_{v_t}\cdot\frac{(n+1)^{1/2} - (v_t+1)^{1/2}}{(v_t+1)^{1/2}} \le \frac{m\eta_{v_t}}{\omega^{1/2}}. \qquad (96)$$
This completes Case 2. In particular, for sufficiently large $n$ we see that for any typical $H_t$ (i.e., in both Case 1 and Case 2), the bound from (96) is valid. Putting
$$E_t = E \cap \{H_t \text{ is typical}\}$$
we deduce from (96) that
$$E(\eta_{v_{t+1}} \mid E_t) \le 3m + \left(e^{-cm^{1/4}} + \frac{200}{m}\right)E(\eta_{v_t} \mid E_t) \qquad (97)$$
$$\lesssim 3m + \left(e^{-cm^{1/4}} + \frac{200}{m}\right)E(\eta_{v_t} \mid E_{t-1}). \qquad (98)$$
We obtain (98) from (97) because $E_t \subseteq E_{t-1}$ and so
$$E(\eta_{v_t} \mid E_{t-1}) \ge E(\eta_{v_t} \mid E_t)\Pr(E_t \mid E_{t-1}) \sim E(\eta_{v_t} \mid E_t).$$
Because $m$ is large, (21) will follow by induction once we have shown that
$$E(\eta_{v_1}) \le 3m. \qquad (99)$$
Here we will use the assumption that $v_1$ is chosen exactly according to the stationary distribution for a simple random walk on $G_n$. In particular, we have
$$\Pr(\eta_{v_1} \ge \eta) \le E\left(\sum_{i=1}^{n}\frac{d_n(i)}{2mn}\, 1_{\eta_i\ge\eta}\right),$$
and Lemma 2.2 implies that if $\eta \ge 2m$ then
$$E\big((d_n(i) - m)1_{\eta_i\ge\eta}\big) \lesssim \left(\frac{n}{i}\right)^{1/2}\int_{\eta_i=\eta}^{\infty}\frac{\eta_i^m e^{-\eta_i}}{(m-1)!}\, d\eta_i \lesssim \left(\frac{n}{i}\right)^{1/2}\times 4me^{-\eta/2}.$$
Furthermore,
$$E(m\, 1_{\eta_i\ge\eta}) = m\Pr(\eta_i \ge \eta) \le 5me^{-\eta/2}.$$
So, if $\eta \ge 2m$, then
$$\Pr(\eta_{v_1} \ge \eta) \lesssim \frac{9e^{-\eta/2}}{2n^{1/2}}\sum_{i=1}^{n}\frac{1}{i^{1/2}} \le 10e^{-\eta/2}.$$
Therefore,
$$E(\eta_{v_1}) \le 2m + \int_{\eta=2m}^{\infty}\Pr(\eta_{v_1} \ge \eta)\, d\eta \le 2m + 10\int_{\eta=2m}^{\infty}e^{-\eta/2}\, d\eta$$
and (99) follows.
3.2 Exiting the main loop with success
In summary, it follows that w.h.p. DCA reaches Step 7 in $O(\omega\log n)$ time. Also, at this time $v_T \le \log^{1/49} n$. This follows from Lemma 2.2(g), 2.2(h) and (P4). Furthermore, this justifies using $n_1$ as a lower bound on vertices visited during the main loop. The random walk of Step 8 will w.h.p. take place on $[\log^{1/9} n]$. This follows from Lemma 2.2(j). Vertex 1 will be in the same component as $v_T$ in the subgraph of $G_n$ induced by vertices of degree at least $\frac{n^{1/2}}{\log^{1/20} n}$. This is because there is a path from $v_T$ to vertex 1 through vertices in $[v_T]$ only, and furthermore it follows from Lemma 2.2(i) that w.h.p. every vertex on this path has degree at least $\frac{n^{1/2}}{\log^{1/20} n}$. The expected time to visit all vertices of a graph with $\nu$ vertices is $O(\nu^3)$, see for example Aleliunas, Karp, Lipton, Lovász and Rackoff [1]. Consequently, vertex 1 will be reached in a further $O((\log^{1/9} n)^3) = o(\log n)$ steps w.h.p., completing the proof of Theorem 1.2.
4 Concluding remarks
We have described an algorithm that finds a distinguished vertex quickly and which is local in a
strong sense. There are some natural questions that are left unanswered:
• Can the running time be improved from O(ω log n) to O(log n)?
• Can we get polylog expected running time for DCA if m = 2?
• Can we extend the analysis to other more general models of web graphs, e.g. Cooper and Frieze [7]. In this case, we would not be able to use the model described in Section 2.
As a final observation, the algorithm DCA could be used to find the vertex of largest degree: if
we replace Step 8 by “Do the random walk for log n steps and output the vertex of largest degree
encountered” then w.h.p. this will produce a vertex of highest degree. This is because log n will
be enough time to visit all vertices v ≤ log1/39 n, where the maximum degree vertex lies.
References
[1] R. Aleliunas, R.M. Karp, R.J. Lipton, L. Lovász and C. Rackoff, Random Walks, Universal
Traversal Sequences, and the Complexity of Maze Problems. Proceedings of the 20th Annual
IEEE Symposium on Foundations of Computer Science (1979) 218-223.
[2] A. Barabási and R. Albert, Emergence of scaling in random networks, Science 286 (1999)
509-512.
[3] B. Bollobás, O. Riordan, J. Spencer and G. Tusnády, The degree sequence of a scale-free
random graph process, Random Structures and Algorithms 18 (2001) 279-290.
[4] B. Bollobás and O. Riordan, The Diameter of a Scale-Free Random Graph, Combinatorica
24 (2004) 5-34.
[5] C. Borgs, M. Brautbar, J. Chayes, S. Khanna and B. Lucier, The Power of Local Information
in Social Networks, http://arxiv.org/abs/1212.0884.
[6] M. Brautbar and M. Kearns, Local Algorithms for Finding Interesting Individuals in Large Networks, in Innovations in Computer Science (ITCS) (2010) 188-199.
[7] C. Cooper and A.M. Frieze, On a general model of web graphs, Random Structures and
Algorithms 22 (2003) 311-335.
[8] G. Grimmett and D. Stirzaker, Probability and Random Processes, Third Edition, Oxford
University Press, Oxford UK, 2001.
[9] W. Hoeffding, Probability inequalities for sums of bounded random variables, Journal of the
American Statistical Association 58 (1963) 13-30.
[10] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. In Approximation Algorithms for NP-hard Problems.
(D. Hochbaum ed.) PWS (1996) 482-520.
[11] M. Mihail, C. Papadimitriou and A. Saberi, On Certain Connectivity Properties of the Internet Topology, Journal of Computer and System Sciences 72 (2006) 239-251.
A Proofs of properties (P1)–(P5)
In this section we give proofs of (P1)–(P5), which we list here for convenience.

(P1) For $\Upsilon_{k,\ell} = \Upsilon_k - \Upsilon_\ell$, we have
$$\Upsilon_{k,\ell} \in (k-\ell)\left[1 \pm \frac{L\theta_{k,\ell}^{1/2}}{3(k-\ell)^{1/2}}\right]$$
for $(k,\ell) = (mn+1, 0)$; or $\frac{k-\ell}{m} \in \{\omega, \omega+1, \ldots, n\}$, $\ell = 0$; or $k-\ell \ge \log^2 n$, $k \ge \log^{30} n$, $\ell > 0$; or $k-\ell \ge \log^{1/300} n$, $0 < \ell < k < \log^{30} n$. Here
$$n_0 = \frac{\lambda_0^2 n}{\omega\log^2 n}, \qquad \lambda_0 = \frac{1}{\log^{20/m} n}, \qquad \theta_{k,\ell} = \begin{cases} \log k & \omega \le \ell < k \le \log^{30} n,\ \ell > 1,\\ k^{1/2} & \omega \le k \le n^{2/5},\ \ell = 0,\\ (k-\ell)^{1/2} & \log^{30} n < k \le n^{2/5},\\[4pt] \dfrac{(k-\ell)^{3/2}\log n}{n^{1/2}} & n^{2/5} < k \le n_0,\\[6pt] \dfrac{n}{\omega^{3/2}\log^2 n} & n_0 < k. \end{cases}$$

(P2) $W_i \in \left(\dfrac{i}{n}\right)^{1/2}\left[1 \pm \dfrac{L\theta_i^{1/2}}{i^{1/2}}\right] \sim \left(\dfrac{i}{n}\right)^{1/2}$ for $\omega \le i \le n$.

(P3) $w_i \sim \dfrac{\eta_i}{2m(in)^{1/2}}$ for $\omega \le i \le n$.

(P4) $\lambda_0 \le \eta_i \le 40m\log\log n$ for $i \in [\log^{30} n]$.

(P5) $\eta_i \le \log n$ for $i \in [n]$.
Proof of (P1)

Applying Lemma 2.1(d),(e) to (1) for $i \ge 1$ we see that
$$\Pr(\neg(\mathrm{P1})) \le 2\sum_{k=\omega}^{n^{2/5}}\exp\left\{-\frac{L^2 k^{1/2}}{27}\right\} + 2\sum_{k=n^{2/5}+1}^{n_0}\exp\left\{-\frac{L^2 k^{3/2}\log n}{27n^{1/2}}\right\} + 2\sum_{k=n_0+1}^{n+1}\exp\left\{-\frac{L^2 n}{27\omega^{3/2}\log^2 n}\right\} + 2\exp\left\{-\frac{L^2\theta_{mn+1,0}}{27}\right\}$$
$$+ 2\sum_{k-\ell=\log^{1/300} n}^{\log^{30} n}\exp\left\{-\frac{L^2\log k}{27}\right\} + 2\sum_{k-\ell=\log^{1/300} n}^{n^{2/5}}\exp\left\{-\frac{L^2(k-\ell)^{1/2}}{27}\right\} + 2\sum_{k-\ell=n^{2/5}+1}^{n_0}\exp\left\{-\frac{L^2(k-\ell)^{3/2}\log n}{27n^{1/2}}\right\} + 2\sum_{k-\ell=n_0+1}^{n}\exp\left\{-\frac{L^2 n}{27\omega^{3/2}\log^2 n}\right\}$$
$$= o(1).$$
Proof of (P2)

For this we use
$$W_i = \left(\frac{\Upsilon_{mi}}{\Upsilon_{mn+1}}\right)^{1/2}.$$
Then,
$$W_i \notin \left(\frac{i}{n}\right)^{1/2}\left[1 \pm \frac{L\theta_i^{1/2}}{i^{1/2}}\right]$$
implies that either
$$\Upsilon_{mn+1} \notin (mn+1)\left[1 \pm \frac{L\theta_i^{1/2}}{3(mn+1)^{1/2}}\right] \quad \text{or} \quad \Upsilon_{mi} \notin mi\left[1 \pm \frac{L\theta_i^{1/2}}{3i^{1/2}}\right].$$
These events are ruled out w.h.p. by (P1).
Proof of (P3)

We use $(1+x)^{1/2} \le 1 + \frac{x}{2}$ for $0 \le |x| \le 1$. Then,
$$w_i = \left(\frac{\Upsilon_{mi}}{\Upsilon_{mn+1}}\right)^{1/2} - \left(\frac{\Upsilon_{m(i-1)}}{\Upsilon_{mn+1}}\right)^{1/2} = \left(\frac{\Upsilon_{m(i-1)}}{\Upsilon_{mn+1}}\right)^{1/2}\left(\left(1 + \frac{\eta_i}{\Upsilon_{m(i-1)}}\right)^{1/2} - 1\right)$$
$$\le \frac{\eta_i\left(m(i-1)\left(1 + \frac{L\theta_i^{1/2}}{3m^{1/2}(i-1)^{1/2}}\right)\right)^{1/2}}{2m(i-1)\left(1 - \frac{L\theta_i^{1/2}}{3m^{1/2}(i-1)^{1/2}}\right)\left((mn+1)\left(1 - \frac{L\theta_i^{1/2}}{3(mn+1)^{1/2}}\right)\right)^{1/2}} \le \frac{\eta_i}{2m(in)^{1/2}}\left(1 + \frac{2L\theta_i^{1/2}}{m^{1/2}i^{1/2}}\right).$$
A similar calculation gives
$$w_i \ge \frac{\eta_i}{2m(in)^{1/2}}\left(1 - \frac{2L\theta_i^{1/2}}{m^{1/2}i^{1/2}}\right).$$
Proof of (P4)

The upper bound follows from Lemma 2.1(c). For the lower bound, we observe by (7) that the expected number of $i \le \log^{30} n$ with $\eta_i \le \lambda_0$ is at most $\log^{30} n \times \lambda_0^m = o(1)$.

Proof of (P5)

This follows from Lemma 2.1(c).
B Proof of Lemma 2.2
We restate the lemma for convenience.

Lemma B.1.
(a) If $E$ occurs then $\bar d_n(i) - m \in [\eta_i\zeta(i), \eta_i\zeta^+(i)]$.
(b) $\Pr(d_n(i) - m \le (1-\alpha)\eta_i\zeta(i)) \le e^{-\alpha^2\eta_i\zeta(i)/2}$ for $0 \le \alpha \le 1$.
(c) $\Pr(d_n(i) - m \ge (1+\alpha)\eta_i\zeta^+(i)) \le e^{-\alpha^2\eta_i\zeta^+(i)/3}$ for $0 \le \alpha \le 1$.
(d) $\Pr(d_n(i) - m \ge \beta\eta_i\zeta^+(i)) \le (e/\beta)^{\beta\eta_i\zeta^+(i)}$ for $\beta \ge 2$.
(e) W.h.p. $\eta_i \ge \lambda_0$ and $\omega \le i \le n^{1/2}$ implies that $d_n(i) \sim \eta_i\left(\frac{n}{i}\right)^{1/2}$.
(f) W.h.p. $\omega \le i \le \log^{30} n$ implies that $d_n(i) \sim \eta_i\left(\frac{n}{i}\right)^{1/2}$.
(g) W.h.p. $\omega \le i \le n^{1/2}$ implies that $d_n(i) \lesssim \max\{1, \eta_i\}\left(\frac{n}{i}\right)^{1/2}$.
(h) W.h.p. $n^{1/2} \le i \le n$ implies $d_n(i) \le n^{1/3}$.
(i) W.h.p. $1 \le i \le \log^{1/49} n$ implies that $d_n(i) \ge \frac{n^{1/2}}{\log^{1/20} n}$.
(j) W.h.p. $d_n(i) \ge \frac{n^{1/2}}{\log^{1/20} n}$ implies $i \le \log^{1/9} n$.
Proof. (a) Suppose that we fix the values for $W_1, W_2, \ldots, W_n$. Then the degree $d_n(i)$ of vertex $i$ can be expressed
$$d_n(i) = m + \sum_{j=i}^{n}\sum_{k=1}^{m}\zeta_{j,k}$$
where the $\zeta_{j,k}$ are independent Bernoulli random variables such that
$$\Pr(\zeta_{j,k} = 1) \in \left[\frac{w_i}{W_j}, \frac{w_i}{W_{j-1}}\right].$$
So, putting $\bar d_n(i) = E(d_n(i))$ we have
$$mw_i\sum_{j=i}^{n}\frac{1}{W_j} \le \bar d_n(i) - m \le mw_i\sum_{j=i-1}^{n}\frac{1}{W_j}.$$
Now assuming (P2), we have for $\omega \le i \le n$,
$$\sum_{j=i}^{n}\frac{1}{W_j} \ge \sum_{j=i}^{n}\left(\frac{n}{j}\right)^{1/2}\left(1 - \frac{2L\theta_j^{1/2}}{j^{1/2}}\right).$$
But
$$\sum_{j=\omega}^{n}\frac{\theta_j^{1/2}}{j} \le \sum_{j=\omega}^{n^{2/5}}\frac{1}{j^{3/4}} + \sum_{j=n^{2/5}+1}^{n_0}\frac{\log^{1/2} n}{n^{1/4}j^{1/4}} + \sum_{j=n_0+1}^{n}\frac{n^{1/2}}{j\,\omega^{3/4}\log n} \le 4n^{1/10} + \frac{4n^{1/2}}{3\omega^{3/4}\log n} + \frac{3n^{1/2}\log\log n}{\omega^{3/4}\log n} \le \frac{4n^{1/2}\log\log n}{\omega^{3/4}\log n}.$$
It follows that
$$\bar d_n(i) \ge m + mw_i n^{1/2}\left(2\big(n^{1/2} - (i+1)^{1/2}\big) - \frac{9Ln^{1/2}\log\log n}{\omega^{3/4}\log n}\right) \ge m + \eta_i\left(\frac{n}{i}\right)^{1/2}\left(1 - \left(\frac{i}{n}\right)^{1/2} - \frac{1}{n^{1/2}i^{1/2}} - \frac{9L\log\log n}{2\omega^{3/4}\log n}\right),$$
after using (P3). A similar calculation gives a similar upper bound for $\bar d_n(i)$ and this proves that
$$i \ge \omega \text{ implies that } \bar d_n(i) \in m + \eta_i\left(\frac{n}{i}\right)^{1/2}\left(1 - \left(\frac{i}{n}\right)^{1/2} \pm \frac{5L\log\log n}{\omega^{3/4}\log n}\right).$$
It follows from (2) and (4) that
$$\Pr\big(d_n(i) - m \le (1-\alpha)\eta_i\zeta(i) \mid \eta_i\big) \le \exp\left\{-\frac{L^2\eta_i n^{1/2}}{4i^{1/2}\omega^{1/2}}\right\}, \qquad \Pr\big(d_n(i) - m \ge (1+\alpha)\eta_i\zeta(i) \mid \eta_i\big) \le \exp\left\{-\frac{L^2\eta_i n^{1/2}}{4i^{1/2}\omega^{1/2}}\right\}.$$
(a) For $\eta_i \ge \lambda_0$ and $\omega \le i \le n_0$ we have
$$\exp\left\{-\frac{L^2\eta_i n^{1/2}}{4i^{1/2}\omega^{1/2}}\right\} \le e^{-L^2\log n/4}.$$
(b) This follows from (a) and (4).
(c) This follows from (a) and (2).
(d) This follows from (a) and (3).
(e) This follows from (a), (b), (c) and (12).
(f) This follows from (e) and (P4).
(g) This follows from (c) and (12).
(h) The degree of $i \ge n^{1/2}$ is stochastically dominated by the degree of $n^{1/2}$. Also, the probability that $d_n(n^{1/2})$ exceeds the stated upper bound is $o(1/n)$. So (h) follows from (g).
(i) For $\omega \le i \le \log^{1/49} n$, this follows from (f) and (P4). For $1 \le i < \omega$ we can use (b) with $\eta_i \ge \lambda_0$ and $\alpha = n^{-1/10}$.
(j) This follows from (e), (f) and (g) and (P4).
Morphological Error Detection in 3D Segmentations
David Rolnick1∗, Yaron Meirovitch1 , Toufiq Parag2 , Hanspeter Pfister2
Viren Jain3 , Jeff W. Lichtman2 , Edward S. Boyden1 , Nir Shavit1
1 Massachusetts Institute of Technology, 2 Harvard University, 3 Google, Inc.
Abstract
Deep learning algorithms for connectomics rely upon localized classification, rather than
overall morphology. This leads to a high incidence of erroneously merged objects. Humans,
by contrast, can easily detect such errors by acquiring intuition for the correct morphology
of objects. Biological neurons have complicated and variable shapes, which are challenging
to learn, and merge errors take a multitude of different forms. We present an algorithm,
MergeNet, that shows 3D ConvNets can, in fact, detect merge errors from high-level neuronal morphology. MergeNet follows unsupervised training and operates across datasets. We
demonstrate the performance of MergeNet both on a variety of connectomics data and on a
dataset created from merged MNIST images.
1
Introduction
The neural network of the brain remains a mystery, even as engineers have succeeded in building
artificial neural networks that can solve a wide variety of problems. Understanding the brain
at a deeper level could significantly impact both biology and artificial intelligence [3, 8, 19, 23,
35]. Perhaps appropriately, artificial neural networks are now being used to map biological neural
networks. However, humans still outperform computer vision algorithms in segmenting brain tissue.
Deep learning has not yet attained the intuition that allows humans to recognize and trace the fine,
intermingled branches of neurons.
The field of connectomics aims to reconstruct three-dimensional networks of biological neurons
from high-resolution microscope images. Automated segmentation is a necessity due to the quantities of data involved. In one recent study [9], the brain of a larval zebrafish was annotated by
hand, requiring more than a year of human labor. It is estimated that mapping a single human
brain would require a zettabyte (one billion terabytes) of image data [17], clearly more than can be
manually segmented.
State-of-the-art algorithms apply a convolutional neural network (ConvNet) to predict, for each
voxel of an image, whether it is on the boundary (cell membrane) of a neuron. The predicted membranes are then filled in by subsequent algorithms [10]. Such methods are prone both to split errors,
in which true objects are subdivided, and to merge errors, in which objects are fused together. The
latter pose a particular challenge. Neurons are highly variable, unpredictably sprouting thousands
of branches, so their correct shapes cannot be catalogued. Erroneously merged neurons are obvious
to trained humans because they simply don’t look right, but it has hitherto been impossible to
make such determinations automatically.
∗
Correspondence should be addressed to: [email protected].
Figure 2: A single, relatively simple merge error
detected and localized by MergeNet. This object, within the Kasthuri dataset [12], occurred
within the training data of the algorithm, but
was not labeled as a merge error. MergeNet
was nonetheless able to correct the label. This
capability allows MergeNet to be trained on an
uncertain segmentation, then used to correct errors within the same segmentation, without requiring any manual annotation.
Figure 1: A probability map localizing merge
errors, as predicted by MergeNet, for an object within the ECS dataset. Orange indicates
a high probability of merge error, blue the absence of error. Location (A) illustrates a merge
between two neurons running in parallel, (B)
a merge between three neurons simultaneously
(the two parallel neurons, plus a third perpendicular to them), and (C) a merge between a
large neuron segment and a small branch from
another neuron. MergeNet is able to learn that
all of these diverse morphologies (and others not
illustrated in this example) represent merge errors, but that locations such as (D) and (E) are
normally occurring branch points within a single neuron.
We introduce a deep learning approach for detecting merge errors that leverages the morphological intuition of human annotators. Instead of relying upon voxelwise membrane predictions
or microscope images, we zoom out and capture as much context as possible. Using only three-dimensional binary masks, our algorithm is able to learn to distinguish the shapes of plausible
neurons from those that have been erroneously fused together.
We test our network, MergeNet, both on connectomics datasets and on an illustrative dataset
derived from MNIST [15]. The key contributions of this approach include:
• Localization of merge errors. MergeNet is able to detect merge errors with high accuracy
within a three-dimensional segmentation and to pinpoint their locations for correction (see
Figures 1 and 2).
• Unsupervised training. The algorithm can be trained using any reasonably accurate segmentation, without the need for any additional annotation. It is even able to correct errors
within its own training data.
• Generalizability and scalability across datasets. MergeNet can be applied irrespective
of the segmentation algorithm or imaging method. It can be trained on one dataset and run on another with high performance. By downsampling volumetric data, our ConvNet is able
to process three million voxels a second, faster than most membrane prediction systems.
2 Related work
There have been numerous recent advances in using neural networks to recognize general three-dimensional objects. Methods include taking 2D projections of the input [32], combined 2D-3D
approaches [6, 16], and purely 3D networks [20, 36]. Accelerated implementation techniques for 3D
networks have been introduced by Budden, et al. [4] and Zlateski, Lee, and Seung [38].
Within the field of connectomics, Maitin-Shepard et al. [18] describe CELIS, a neural network
approach for optimizing local features of a segmented image. Januszewski et al. [11] and Meirovitch
et al. [22] present approaches for directly segmenting individual neurons from microscope images,
without recourse to membrane prediction and agglomeration algorithms. Deep learning techniques
have likewise been used to detect synapses between neurons [29, 31] and to localize voltage measurements in neural circuits [2] (progress towards a functional connectome). New forms of data are
also being leveraged for connectomics [27, 34], thanks to advances in biochemical engineering.
Many authors cite the frequent problems posed by merge errors (see e.g. [26]); however, almost
no approaches have been proposed for detecting them automatically. Meirovitch et al. [22] suggest a
hard-coded heuristic to find “X-junctions”, one variety of merge error, by analyzing graph theoretical
representations of neurons as skeletons (see also [37]). Recent work including [13, 24] has considered
the problem of deep learning on graphs, and Farhoodi, Ramkumar, and Kording [7] use Generative
Adversarial Networks (GANs) to generate neuron skeletons. However, such methods have not to
date been brought to bear on connectomic reconstruction of neural circuits.
3 Methods
Our algorithm, MergeNet, operates on an image segmentation to correct errors within it. Given an
object within the proposed segmentation, MergeNet determines whether points chosen within the
object are the location of erroneous merges. If no such points exist, then the object is determined
to be free from merge errors.
Input and architecture.
The input to our network is a three-dimensional window of the object in question, representing a
51 × 51 × 51 section of the object, centered at the chosen point. (These dimensions are chosen as a
tradeoff between enhancing speed and capturing more information, as we discuss further in §4 and §5.) Crucially, the window is given as a binary mask: that is, each voxel is 0 or 1 depending on
whether it is assigned to the object. MergeNet is not given data from the original image, inducing
the network to learn general morphological features present in the binary mask. The network
follows a simple convolutional architecture, containing six convolutional layers with rectified linear
unit (ReLU) activation, and three max-pooling layers, followed by a densely connected layer and
softmax output. The desired output is a 1-hot vector for the two classes “merge” and “no merge”,
and is trained with cross-entropy loss.
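The following is a minimal sketch of this architecture in tf.keras (the paper reports using TensorFlow). The layer counts match the description above, but the filter widths, kernel sizes, and the width of the dense layer are our assumptions; the authors do not specify them.

```python
# Sketch of the MergeNet 3D ConvNet: six ReLU convolutions, three max-pools,
# a dense layer, and a 2-way softmax over 51x51x51 binary-mask windows.
# Filter counts (16/32/64) and dense width (256) are assumed, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mergenet(window=51):
    model = models.Sequential([
        layers.Input(shape=(window, window, window, 1)),  # binary mask, one channel
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 3, activation="relu", padding="same"),
        layers.Conv3D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),      # densely connected layer
        layers.Dense(2, activation="softmax"),     # classes: "merge" / "no merge"
    ])
    # 1-hot targets with cross-entropy loss, as described above.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```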
2D MergeNet.
We also constructed a simpler 2D version of MergeNet to illustrate the identification of merge errors
within two-dimensional images. In this case, the input to the network is a square binary mask, which
passes through four convolutional layers and two max-pooling layers. This network was trained to
recognize merges between binarized digits from the MNIST dataset [15]. Random digits were drawn
from the training dataset and chained together, with merge errors given by pixels at the points of
contact between neighboring digits. Testing was performed on similar merges created from the
testing dataset. The size of the input window to the network was varied to compare accuracy
across a variety of contextual scales.
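For concreteness, here is one way the merged-MNIST examples could be generated. This is our own sketch of the procedure described above; the overlap width and the binarization threshold are assumptions, since the paper does not specify them.

```python
# Chain binarized MNIST digits side by side; the merge label marks pixels
# where neighboring digits touch (covered by more than one digit).
import numpy as np

def merge_digits(digits, overlap=4, thresh=0.5):
    """digits: list of 28x28 arrays with values in [0, 1].
    Returns (image, merge_mask) for the chained digits."""
    h, w = digits[0].shape
    width = w + (len(digits) - 1) * (w - overlap)
    canvas = np.zeros((h, width))
    count = np.zeros((h, width), dtype=int)   # how many digits cover each pixel
    for k, d in enumerate(digits):
        x = k * (w - overlap)                 # horizontal offset of digit k
        binary = (d > thresh).astype(float)
        canvas[:, x:x + w] = np.maximum(canvas[:, x:x + w], binary)
        count[:, x:x + w] += (binary > 0).astype(int)
    merge_mask = count > 1                    # contact pixels between digits
    return canvas, merge_mask
```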
Downsampling.
To apply MergeNet to connectomics data, we begin by downsampling all objects. Segmentations of
neural data are typically performed at very high resolution, approximately 5 nm. The finest morphological details of neurons, however, are on the order of 100 nm. Commonly, data is anisotropic,
with resolution in the z direction being significantly lower than that in x and y. We tested MergeNet with downsampling ratios of 10 × 10 × 2 and 25 × 25 × 5 to compensate for anisotropy.
Downsampling an object is performed by a max-pooling procedure. That is, every voxel within the
downsampled image represents the intersection of the object with a corresponding subvolume of the
original image with dimensions e.g. 25 × 25 × 5.
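A short numpy sketch of this max-pooling downsampling follows; the cropping rule for volumes whose dimensions are not exact multiples of the ratio is our choice, not the authors'.

```python
# Anisotropic max-pool downsampling of a 3D binary mask: each output voxel is 1
# iff the object intersects the corresponding (e.g. 25x25x5) block of the input.
import numpy as np

def downsample_mask(mask, ratio=(25, 25, 5)):
    rx, ry, rz = ratio
    # Crop to a multiple of the ratio (assumed handling of ragged edges).
    x = (mask.shape[0] // rx) * rx
    y = (mask.shape[1] // ry) * ry
    z = (mask.shape[2] // rz) * rz
    blocks = mask[:x, :y, :z].reshape(x // rx, rx, y // ry, ry, z // rz, rz)
    return blocks.max(axis=(1, 3, 5))  # max over each block
```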
Training.
The network was trained on artificially induced merge errors between objects within segmentations
of neural tissue. Merges consisted of identifying immediately adjacent objects and designating
points of overlap as the locations of merge errors. Negative examples consisted of windows centered
at random points of objects within the segmentation. Artificial merge errors are used owing to
the impracticality of manually annotating data over large enough volumes to determine sufficient
merge errors for training. As we demonstrate, such training suffices for effective detection of real
merge errors and has numerous other advantages (detailed in section §5). Training was performed
on various segmentations of the Kasthuri dataset [12], and the algorithm was evaluated both on
this dataset and on segmented objects of the ECS dataset, a 20 × 20 × 20 micron cube of rat cortex
data to which we were given access.
Output.
To run a trained instance of MergeNet on objects within a segmentation, it is not necessary to apply
the network to every voxel since the predictions at nearby points may be interpolated. Sample points
are therefore taken within each downsampled object, and MergeNet is run on windows centered at
these points. The real-valued predictions at sample points are then interpolated over the entire
object to give a heatmap of probabilities for merge errors. As the distribution of training examples
is balanced between positive and negative examples, which is not true when the network is applied
in practice, the output must be normalized or thresholded after the softmax layer. We find it
effective to classify as a merge error any voxel at which the prediction exceeds 0.9.
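The inference loop described above might look as follows. This is a sketch under our own assumptions: we use nearest-sampled-point assignment as the interpolation scheme (the paper does not say which interpolation it uses), and all helper names are hypothetical.

```python
# Sample points inside a downsampled object, classify a window around each,
# spread predictions to all object voxels, and threshold at 0.9.
import numpy as np

def predict_object(model, mask, n_samples=1024, window=51, thresh=0.9):
    coords = np.argwhere(mask)                       # voxels of the object
    idx = np.random.choice(len(coords),
                           size=min(n_samples, len(coords)), replace=False)
    samples = coords[idx]
    half = window // 2
    padded = np.pad(mask, half)                      # windows never run off the edge
    windows = np.stack([
        padded[x:x + window, y:y + window, z:z + window]
        for x, y, z in samples
    ])[..., None].astype(np.float32)
    p_merge = model.predict(windows)[:, 1]           # softmax probability of "merge"
    # Nearest-sample interpolation over all object voxels (our assumption).
    d = ((coords[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
    probs = p_merge[d.argmin(axis=1)]
    return coords[probs > thresh]                    # voxels flagged as merge errors
```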
4 Results
We first consider the illustrative example of the merged MNIST dataset. After training on five
million examples within this dataset, we obtained a maximum pixelwise accuracy of 96.8 percent
on a test set constructed from held out digits and equally distributed between positive and negative
examples. This corresponds to almost perfect identification of individual merge error regions, as
shown in Figure 3. (Ambiguous pixels on the edges of merge error regions were the most likely
pixels to be misclassified.)
Accuracy increases with morphological information.
In Figure 4, we show the dependence of accuracy on the window size used. Intuitively, a larger
window gives the network more morphological context to work from, and plateaus in this case at
approximately the dimensions of a pair of fused MNIST digits (40-50 pixels across), which represents
the maximum scale at which morphology is useful to the network. Figure 3 provides a qualitative
Figure 3: Predictions of MergeNet on merged MNIST digits, shown on a sample of the test set.
The top image shows predictions of the network with 12 × 12 input windows, with predicted merges
shown in yellow. The middle image shows predictions with 24 × 24 input windows. The final image
shows the actual merges, in red. Note that both networks are quite accurate, but that for 12 × 12
input, the algorithm makes several erroneous predictions of merge errors (shown with blue arrows),
which are not made for 24 × 24 input. This illustrates how greater morphological context leads to
qualitatively better predictions. A quantitative assessment is shown in Figure 4.
comparison of performance between a smaller and a larger window size. The smaller window
size erroneously predicts merges within digits, while the large window size allows the network to
recognize the shapes of these digits and identify only merge errors between digits.
There is, of course, a tradeoff between accuracy and the time required to train and run the network. Slowdown resulting from larger window size is considerable for three-dimensional ConvNets.
We have attempted to choose the parameters of MergeNet with this tradeoff in mind.
MergeNet detects merge errors across datasets.
We trained MergeNet on two segmentations of the Kasthuri dataset [21] and tested performance
on artificially merged objects omitted from the training set. Segmentation A was relatively poor,
and training on it yielded performance of only 77.3 percent. Segmentation B was more accurate,
yielding performance of 90.0 percent. Training on both segmentations together yielded the best
performance on both test sets, showing that MergeNet is able to leverage contextual information
from one segmentation to improve performance on another. We also note that after training on
the low-quality Segmentation A, MergeNet was able to detect errors within its own training set, as
shown in Figure 2.
                     Training on Seg. A   Training on Seg. B   Training on both
Testing on Seg. A          77.3%                82.6%               84.5%
Testing on Seg. B          70.7%                90.0%               91.9%
MergeNet generalizes broadly across datasets as well as segmentations of the same dataset, and
applies to both artificial and natural merge errors, though the latter are harder to quantify owing
to the paucity of large-scale annotation. After training on the Kasthuri dataset, MergeNet was able
to detect naturally occurring merge errors within segmentations of the ECS dataset, obtained from
a state-of-the-art U-Net segmentation algorithm [30]. Example output is shown in Figures 1 and 5.
Figure 4: Performance of MergeNet on merged MNIST digits, shown as a function of input window
width (with example windows shown). Note that increasing window size increases performance,
but only up to a size of 40-50 pixels. We see that performance plateaus at the point at which
morphological context captures all of the information about the neighboring digits. For the case of
neurons, which are larger and more complicated, morphological context does not plateau; however,
there is a tradeoff between more context and greater speed, since the time required to run the 3D
ConvNet depends strongly upon input size.
5 Discussion
We will now consider the capabilities of the MergeNet algorithm and discuss opportunities that it
offers within the field of connectomics.
Detection and localization of merge errors.
MergeNet is a powerful tool for detecting and pinpointing merge errors. Once a merge location
has been flagged with high spatial precision, other algorithms can be used to create a more accurate local segmentation, thereby correcting any errors that occurred. Flood-filling networks [11]
and MaskExtend [22] are two examples of algorithms that have high accuracy, but are extremely
time-consuming to run over large volumes, making them ideally suited to segment at the merge
locations flagged by MergeNet. Alternatively, the agglomeration algorithm NeuroProof [25], used
in transforming membrane probabilities to segmentations, can be tuned to be more or less sensitive
to merge errors. A more merge-sensitive setting could be applied at those locations flagged by our
algorithm.
If the thresholding step is omitted, then the output of MergeNet may be thought of as a probability distribution of merge errors over objects within a segmentation. This distribution may be
treated as a Bayesian prior and updated if other information is available; multiple proofreading algorithms can work together. Thus, for example, synapse detection algorithms [29, 31] may provide
additional evidence for a merge error if synapses of two kinds are found on the same segmented
object, but are normally found on different types of neurons. In such a scenario, the probability at
the relevant location would be increased from its value computed by MergeNet, according to the
confidence of the synapse detection algorithm. We envision MergeNet being the first step towards
fully automated proofreading of connectomics data, which will become increasingly necessary as
such data is processed at ever greater scale.
Unsupervised training.
MergeNet is trained on merge errors created by fusing adjacent objects within a segmentation.
This allows training to proceed without any direct human annotation. In testing MergeNet, we
performed training on several automatically generated segmentations of EM data and obtained
good results, even though the training data was not free of merge errors and other mistakes. It
is highly advantageous to eliminate the need for further data annotation, since this is the step
in connectomics that has traditionally consumed by far the most human effort. The ability to
run on any (reasonably accurate) segmentation also means that MergeNet can be trained on far
larger datasets than those for which manual annotation could reasonably be obtained. Automated
segmentations already exist for volumes of neural tissue as large as 232,000 cubic microns [28].
Comparison of segmentations.
Automatic detection of merge errors allows us to compare the performance of alternative segmentation algorithms in the absence of ground truth annotation. We ran MergeNet with the same
parameter settings on two segmentations within the ECS dataset, after training on other data.
These alternative segmentations were produced by two different versions of a state-of-the-art U-Net segmentation algorithm [30]. For the simpler algorithm, 33 of the 300 largest objects within
the segmentation (those most likely to have merge errors) were flagged as unlikely, while, for the
more advanced algorithm, only 15 of the largest 300 objects were flagged. This indicates that the
latter pipeline produces more plausible objects, making fewer merge errors. The size of the objects
was comparable in each case, so there is no indication that this improvement came with a greater
propensity for erroneous splits within single objects. Thus, MergeNet was able to perform a fully
automatic comparison of two segmentation algorithms and confirm that one outperforms the other.
Correction of the training set.
We have already observed that MergeNet can be trained on any reasonably correct segmentation.
In fact, it is possible to leverage artificial merge errors within the training set to detect real merge
errors that may occur there (as shown in Figure 2). This is remarkable, since the predictions
MergeNet makes in this case should conflict with the labels it was itself trained on - namely, when
objects from the training set that have not been artificially merged are nonetheless the result of
real merge errors. Our observations align with results showing that neural networks are capable of
learning from data even when the labels given are unreliable [33].
Independence from lower-order errors.
Since MergeNet makes use of the global morphology of neurons, it is not reliant on earlier stages
of the connectomics pipeline, such as microscope images or membrane predictions. Thus, it is
able to correct errors that arise at early stages of the pipeline, including those at the experimental
stage. EM images are prone to various catastrophic errors; most notably, individual tissue slices
can tear before imaging, leading to distortion or gaps in the predicted membranes. Algorithms that
stitch together adjacent microscope images also sometimes fail, leading to pieces of neurons in one
image being erroneously aligned with pieces from the neighboring image. Typically, such errors
are propagated or magnified by later stages of the connectomics pipeline, since algorithms such as
watershed and NeuroProof [25] assume that their input is mostly true. By contrast, MergeNet can
look at the broader picture and use “common sense” as would a human proofreader.
The output of a membrane-detection algorithm also can induce errors in object morphology.
Common sources of error at this stage are ambiguously stained tissue slices and intracellular membranes, such as those from mitochondria, which can be confused with the external cell membrane.
Figure 5 shows an error in predictions from the U-Net algorithm, where a large gap in the predicted
membrane has allowed two objects to be fused together into one (shown in blue). By utilizing the
overall 3D context, MergeNet is able to detect and localize the error (shown in red).
Figure 5: The output of MergeNet at a merge
error, superimposed over the erroneously predicted membranes that led to the merge error. MergeNet output at individual pixels has
been thresholded above 0.9, with red denoting
predicted merge error and blue the absence of
error. Observe that the predicted membranes
have a wide gap at the region MergeNet has
flagged; this gap is incorrect and the membrane
should extend between the two objects, separating them. Note that MergeNet used only threedimensional morphological information to detect this error, and did not make use of the (erroneous) membrane predictions that are shown,
or the underlying microscope images.
Figure 6: Glial cell, flagged as a merge error by
MergeNet. While glia are not merge errors, they
are also not neurons and did not occur in the
training set for MergeNet. As the algorithm recognizes, the morphology of glia is markedly different from that of neurons. Specifically training MergeNet to recognize glia could be useful in
segmenting these cells, which occur along with
neurons in brain tissue.
Generalizability to different datasets.
One of the challenges of traditional connectomics algorithms is that there are numerous different
imaging techniques, which can each be applied to the nervous systems of various organisms, and
in some cases also to structurally distinct regions of the nervous system within a single organism.
For networks used in connectomics for image segmentation, it is often necessary to obtain ground
truth annotations on each new dataset, which consumes considerable time and effort.
MergeNet, by contrast, is highly transferable between datasets. Not only can the algorithm be
trained on an unverified segmentation and can correct it, but it can also be trained on one dataset
and then run on a segmentation of a different dataset, without any retraining. Figures 1 and 5
show images obtained by training on a segmentation of the Kasthuri dataset [12] and then running
on the ECS dataset.
Applicability to anisotropic data.
The microscope data underlying connectomics segmentation is often anisotropic, where the particular dimensions in the x, y, z directions depend upon the particular imaging procedure used. For
example, the Kasthuri dataset has resolution of 6 × 6 × 30 nanometers, while our ECS dataset is
even more anisotropic, with resolution of 4 × 4 × 30 nanometers. Some imaging technologies do
yield isotropic data, such as expansion microscopy (ExM) [5] and focused ion beam scanning electron microscopy (FIB-SEM) [14]. Various techniques have been proposed to work with anisotropic
data, including 2D ConvNets feeding into 3D ConvNets [16] and a combination of convolutional
and recurrent networks [6].
MergeNet cancels the effect of anisotropy, as necessary, by downsampling differentially along
the x, y, z directions. Thus, the network is able to transform any segmentation into one in which
morphology is approximately isotropic, making learning much easier. We also anticipate that it may
be possible to train on data from one imaging modality, then to apply a different downsampling
ratio to run on data with different anisotropy. For example, MergeNet could be trained on an EM
segmentation, then run on an ExM segmentation.
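To make the differential downsampling concrete, the following NumPy sketch (our illustration, not the authors' released code; the function name, the max-pooling choice, and the 64 nm target are assumptions) downsamples a mask by a different integer factor along each axis so that the resulting voxels are roughly cubic:

```python
import numpy as np

def isotropic_downsample(mask, resolution_nm=(30, 4, 4), target_nm=64):
    """Downsample a 3D object mask (z, y, x) by per-axis integer factors
    chosen so that the downsampled voxels are approximately isotropic."""
    factors = [max(1, round(target_nm / r)) for r in resolution_nm]
    z, y, x = mask.shape
    # Crop so each dimension divides evenly, then max-pool over blocks,
    # which preserves thin neuronal processes better than strided subsampling.
    crop = mask[: z - z % factors[0], : y - y % factors[1], : x - x % factors[2]]
    blocks = crop.reshape(
        crop.shape[0] // factors[0], factors[0],
        crop.shape[1] // factors[1], factors[1],
        crop.shape[2] // factors[2], factors[2],
    )
    return blocks.max(axis=(1, 3, 5))
```

With the ECS resolution of 4 × 4 × 30 nm, this yields per-axis factors of (2, 16, 16), so a downsampled voxel spans roughly 60 × 64 × 64 nm; running on data with a different anisotropy then amounts to choosing different factors.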
Scalability.
The MergeNet algorithm is designed to be scalable, so that it can be used to proofread segmentations
of extremely large datasets. The network is applied only once objects have been downsampled by a
large factor in each dimension, and it is then applied only to sampled points within the downsampled
object. These two reductions in cost allow the 3D ConvNet to be run at scale, even though 3D
kernels are computationally more expensive than 2D kernels.
We tested the speed of MergeNet on an object with 36,874,153 original voxels, downsampled
to 18,777 voxels, from which we sampled 1,024, allowing us to generate a dense probability map
across the entire object. The ConvNet ran in 11.3 seconds on an Nvidia Tesla K20m GPU. This
corresponds to a speed of over three million voxels per second within the original image. Thus, the
network could be applied to a volume of 1 billion voxels in a minute using five GPUs. By comparison,
the fastest membrane-prediction algorithm can process 1 billion voxels within 2 minutes on a 72-core machine [21], demonstrating that our algorithm can be integrated into a scalable connectomics
pipeline. Note that our experiments were performed using TensorFlow [1]; we have not attempted
to optimize time for training or running the network, though recent work indicates that significant
further speedup may be possible [4, 38].
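For reference, the quoted throughput follows from simple arithmetic on the numbers above:

\[
\frac{36{,}874{,}153\ \text{voxels}}{11.3\ \text{s}} \approx 3.3 \times 10^{6}\ \text{voxels/s},
\qquad
\frac{10^{9}\ \text{voxels}}{3.3 \times 10^{6}\ \text{voxels/s}} \approx 306\ \text{s} \approx 5.1\ \text{min},
\]

so a single GPU covers a billion voxels in about five minutes, and five GPUs in about one.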
Detection of non-neuronal objects.
While MergeNet is trained only on merge errors, it also seems to be able to detect non-neuronal
objects, as a byproduct of learning plausible shapes for neurons. In particular, we observe that
MergeNet often detects glia (non-neuronal cells that occur in neural tissue), the morphologies of
which are distinctively different from those of neurons. Figure 6 shows an example of a glial object
from the ECS dataset; notice that MergeNet finds the morphology implausible, even though it has
been trained on neither positive nor negative examples of glia. Quantifying the accuracy of glia
detection is challenging, however, since little ground truth has been annotated for this task, and
most connectomics algorithms are unable to distinguish glia.
Finally, let us consider when (and why) MergeNet succeeds and fails on different inputs. The
algorithm does not simply label all branch points within a neuron as merge errors, or else it would
be effectively useless. However, the network can be confused by examples such as two branches that
diverge from a main segment at approximately the same point, resembling the crossing of two distinct
objects. MergeNet also misses some merge errors. For example, when two neuronal segments run
closely in parallel, there may be points along their shared boundary with no morphological clues that
two objects are present. It is worth noting, however, that parallel neuronal segments can in fact be
detected by MergeNet, as shown at point (A) of Figure 1.
6 Conclusion
Though merge errors occur universally in automated segmentations of neural tissue, they have never
been addressed in generality, as they are difficult to detect using existing connectomics methods.
We have shown that a 3D ConvNet can proofread a segmented image for merge errors by “zooming
out”, ignoring the image itself, and instead leveraging the general morphological characteristics
of neurons. We have demonstrated that our algorithm, MergeNet, is able to generalize without
retraining to detect errors within a range of segmentations and across a range of datasets. Relying
solely upon unsupervised training, it can nonetheless detect errors within its own training set. Our
algorithm enables automatic comparison of segmentation methods, and can be integrated at scale
into existing pipelines without the requirement of additional annotation, opening up the possibility
of fully automated error detection and correction within neural circuits.
While MergeNet can detect and localize merge errors, it cannot, by itself, correct them. One
could conceive of a variation on the MergeNet algorithm that marks the exact boundary of a
merge error, allowing a cut to be made automatically along that boundary so as to correct the merge
without additional effort. In practice, however, this is a much more challenging task. Often it is
impossible to determine the exact division of objects at a merge error purely from morphology. For
instance, when two largely parallel objects touch, it may not be evident which is the continuation
of which past the point of contact, even if the erroneous merge itself is obvious. Likewise, some
merge errors consist of three or more objects that have been confused in some complex way, e.g. by
virtue of poor image quality at that location. In such cases, any merge-correction algorithm must
have recourse to the underlying microscope images or membrane probabilities, rather than relying
purely upon morphological cues.
Deep learning approaches leveraging morphology have the potential to transform biological image
analysis. It may, for instance, become possible to classify types of neurons automatically, or to
identify anomalies such as cancer cells. We anticipate a growth in such algorithms as the scale of
biological data grows and as progress in connectomics leads to a deeper understanding of the brain.
7 Acknowledgments
We are grateful for support from the National Science Foundation (NSF) under grants IIS-1447786,
CCF-1563880, and 1122374, and from the Intelligence Advanced Research Projects Activity (IARPA)
under grant 138076-5093555. We would like to thank Tom Dean, Peter Li, Art Pope, Jeremy Maitin-Shepard, Adam Marblestone, and Hayk Saribekyan for helpful discussions and contributions.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning
on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Noah Apthorpe, Alexander Riordan, Robert Aguilar, Jan Homann, Yi Gu, David Tank, and H Sebastian Seung. Automatic neuron detection in calcium imaging data using convolutional networks. In
Advances In Neural Information Processing Systems (NIPS), pages 3270–3278, 2016.
[3] Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, and Zhouhan Lin. Towards
biologically plausible deep learning. Preprint arXiv:1502.04156, 2015.
[4] David Budden, Alexander Matveev, Shibani Santurkar, Shraman Ray Chaudhuri, and Nir Shavit.
Deep tensor convolution on multicores. arXiv preprint arXiv:1611.06565, 2016.
[5] Fei Chen, Paul W Tillberg, and Edward S Boyden. Expansion microscopy. Science, 347(6221):543–548,
2015.
[6] Jianxu Chen, Lin Yang, Yizhe Zhang, Mark Alber, and Danny Z Chen. Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. In Advances in Neural
Information Processing Systems (NIPS), pages 3036–3044, 2016.
[7] Roozbeh Farhoodi, Pavan Ramkumar, and Konrad Kording. Deep learning approach towards generating neuronal morphology. Cosyne Abstracts, Salt Lake City USA, 2017.
[8] Moritz Helmstaedter, Kevin L Briggman, Srinivas C Turaga, Viren Jain, H Sebastian Seung, and
Winfried Denk. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature,
500(7461):168–174, 2013.
[9] David Grant Colburn Hildebrand, Marcelo Cicconet, Russel Miguel Torres, Woohyuk Choi, Tran Minh
Quan, Jungmin Moon, Arthur Willis Wetzel, Andrew Scott Champion, Brett Jesse Graham, Owen
Randlett, et al. Whole-brain serial-section electron microscopy in larval zebrafish. Nature, 2017.
[10] Viren Jain, Srinivas C Turaga, K Briggman, Moritz N Helmstaedter, Winfried Denk, and H Sebastian
Seung. Learning to agglomerate superpixel hierarchies. In Advances in Neural Information Processing
Systems (NIPS), pages 648–656, 2011.
[11] Michal Januszewski, Jeremy Maitin-Shepard, Peter Li, Jörgen Kornfeld, Winfried Denk, and Viren
Jain. Flood-filling networks. arXiv preprint arXiv:1611.00421, 2016.
[12] Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek,
José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig,
Thouis Raymond Jones, et al. Saturated reconstruction of a volume of neocortex. Cell, 162(3):648–661,
2015.
[13] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks.
arXiv preprint arXiv:1609.02907, 2016.
[14] Graham Knott, Herschel Marchman, David Wall, and Ben Lich. Serial section scanning electron
microscopy of adult brain tissue using focused ion beam milling. Journal of Neuroscience, 28(12):2959–
2964, 2008.
[15] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten
digits, 1998.
[16] Kisuk Lee, Aleksandar Zlateski, Ashwin Vishwanathan, and H Sebastian Seung. Recursive training of
2d-3d convolutional networks for neuronal boundary prediction. In Advances in Neural Information
Processing Systems (NIPS), pages 3573–3581, 2015.
[17] Jeff W Lichtman, Hanspeter Pfister, and Nir Shavit. The big data challenges of connectomics. Nature
neuroscience, 17(11):1448–1454, 2014.
[18] Jeremy B Maitin-Shepard, Viren Jain, Michal Januszewski, Peter Li, and Pieter Abbeel. Combinatorial energy learning for image segmentation. In Advances in Neural Information Processing Systems
(NIPS), pages 1966–1974, 2016.
[19] Adam H Marblestone, Greg Wayne, and Konrad P Kording. Toward an integration of deep learning
and neuroscience. Frontiers in Computational Neuroscience, 10, 2016.
[20] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference
on, pages 922–928. IEEE, 2015.
[21] Alexander Matveev, Yaron Meirovitch, Hayk Saribekyan, Wiktor Jakubiuk, Tim Kaler, Gergely Odor,
David Budden, Aleksandar Zlateski, and Nir Shavit. A multicore path to connectomics-on-demand.
In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 267–281. ACM, 2017.
[22] Yaron Meirovitch, Alexander Matveev, Hayk Saribekyan, David Budden, David Rolnick, Gergely
Odor, Seymour Knowles-Barley, Thouis Raymond Jones, Hanspeter Pfister, Jeff William Lichtman,
and Nir Shavit. A multi-pass approach to large-scale connectomics. arXiv preprint arXiv:1612.02120,
2016.
[23] Josh Lyskowski Morgan, Daniel Raimund Berger, Arthur Willis Wetzel, and Jeff William Lichtman.
The fuzzy logic of network connectivity in mouse visual thalamus. Cell, 165(1):192–206, 2016.
[24] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks
for graphs. In Proceedings of the 33rd annual international conference on machine learning. ACM,
2016.
[25] Toufiq Parag, Anirban Chakraborty, Stephen Plaza, and Louis Scheffer. A context-aware delayed
agglomeration framework for electron microscopy segmentation. PloS one, 10(5):e0125825, 2015.
[26] Toufiq Parag, Dan C Ciresan, and Alessandro Giusti. Efficient classifier training to minimize false
merges in electron microscopy segmentation. In Proceedings of the IEEE International Conference on
Computer Vision (ICCV), pages 657–665, 2015.
[27] Ian D Peikon, Justus M Kebschull, Vasily V Vagin, Diana I Ravens, Yu-Chi Sun, Eric Brouzes, Ivan R
Corrêa, Dario Bressan, and Anthony Zador. Using high-throughput barcode sequencing to efficiently
map connectomes. bioRxiv, page 099093, 2017.
[28] Stephen M Plaza and Stuart E Berg. Large-scale electron microscopy image segmentation in Spark.
arXiv preprint arXiv:1604.00385, 2016.
[29] William Gray Roncal, Michael Pekala, Verena Kaynig-Fittkau, Dean M Kleissas, Joshua T Vogelstein,
Hanspeter Pfister, Randal Burns, R Jacob Vogelstein, Mark A Chevillet, and Gregory D Hager.
Vesicle: Volumetric evaluation of synaptic interfaces using computer vision at large scale. arXiv
preprint arXiv:1403.3724, 2014.
[30] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical
image segmentation. In International Conference on Medical Image Computing and Computer-Assisted
Intervention, pages 234–241. Springer, 2015.
[31] Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin, Hayk Saribekyan, Yaron
Meirovitch, and Nir Shavit. Toward streaming synapse detection with compositional convnets. arXiv
preprint arXiv:1702.07386, 2017.
[32] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional
neural networks for 3d shape recognition. In Proceedings of the IEEE International Conference on
Computer Vision (ICCV), pages 945–953, 2015.
[33] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training
convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014.
[34] Uygar Sümbül, Douglas Roossien, Dawen Cai, Fei Chen, Nicholas Barry, John P Cunningham, Edward
Boyden, and Liam Paninski. Automated scalable segmentation of neurons from multispectral images.
In Advances in Neural Information Processing Systems (NIPS), pages 1912–1920, 2016.
[35] Shin-ya Takemura, Arjun Bharioke, Zhiyuan Lu, Aljoscha Nern, Shiv Vitaladevuni, Patricia K Rivlin,
William T Katz, Donald J Olbris, Stephen M Plaza, Philip Winston, et al. A visual motion detection
circuit suggested by Drosophila connectomics. Nature, 500(7461):175–181, 2013.
[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
[37] Ting Zhao and Stephen M Plaza. Automatic neuron type identification by neurite localization in the
Drosophila medulla. arXiv preprint arXiv:1409.1892, 2014.
[38] Aleksandar Zlateski, Kisuk Lee, and H Sebastian Seung. Scalable training of 3d convolutional networks
on multi-and many-cores. Journal of Parallel and Distributed Computing, 2017.
arXiv:1602.04286v1 [math.OC] 13 Feb 2016

Geometric Adaptive Control of Attitude Dynamics on SO(3) with State Inequality Constraints

Shankar Kulumani, Christopher Poole, and Taeyoung Lee

Shankar Kulumani, Christopher Poole, and Taeyoung Lee are with Mechanical and Aerospace Engineering, George Washington University, Washington DC 20052, {skulumani,poolec,tylee}@gwu.edu. This research has been supported in part by NSF under the grants CMMI-1243000, CMMI-1335008, and CNS-1337722.
Abstract— This paper presents a new geometric adaptive control system with state inequality constraints for the attitude dynamics of a rigid body. The control system is designed such that the desired attitude is asymptotically stabilized, while the controlled attitude trajectory avoids undesired regions defined by an inequality constraint. In addition, we develop an adaptive update law that enables attitude stabilization in the presence of unknown disturbances. The attitude dynamics and the proposed control systems are developed on the special orthogonal group such that singularities and ambiguities of other attitude parameterizations, such as Euler angles and quaternions, are completely avoided. The effectiveness of the proposed control system is demonstrated through numerical simulations and experimental results.
I. INTRODUCTION
Rigid body attitude control is an important problem for aerospace vehicles, ground and underwater vehicles, as well as robotic systems [1], [2]. One distinctive feature of the attitude dynamics of rigid bodies is that they evolve on a nonlinear manifold. The three-dimensional special orthogonal group, or SO(3), is the set of 3 × 3 orthogonal matrices whose determinant is one. This configuration space is non-Euclidean and yields unique stability properties which are not observable on a linear space. For example, it is impossible to achieve global attitude stabilization using continuous time-invariant feedback [3].
Attitude control is typically studied using a variety of attitude parameterizations, such as Euler angles or quaternions [4]. All attitude parameterizations fail to represent the nonlinear configuration space both globally and uniquely [5]. For example, minimal attitude representations, such as Euler angle sequences or modified Rodriguez parameters, suffer from singularities. These attitude representations are not suitable for large angular slews. Quaternions do not have singularities, but they double cover the special orthogonal group. As a result, any physical attitude is represented by a pair of antipodal quaternions on the three-sphere. During implementation, the designer must carefully resolve this non-unique representation in quaternion-based attitude control systems to avoid undesirable unwinding behavior [3].
Many physical rigid body systems must perform large
angular slews in the presence of state constraints. For example, autonomous spacecraft or aerial systems are typically
equipped with sensitive optical payloads, such as infrared
or interferometric sensors. These systems require retargeting
while avoiding direct exposure to sunlight or other bright
objects. The removal of constrained regions from the rotational configuration space results in a nonconvex region. The
attitude control problem in the absence of constraints has
been extensively studied [6], [7], [8]. However, the attitude
control problem in the presence of constraints has received
much less attention.
Several approaches have been developed to treat the
attitude control problem in the presence of constraints. A
conceptually straightforward approach is used in [9] to
determine feasible attitude trajectories prior to implementation. The algorithm determines an intermediate point such
that an unconstrained maneuver can be calculated for each
subsegment. Typically, an optimal or easily implementable
on-board control scheme for attitude maneuvers is applied to
maneuver the vehicle along these segments. In this manner
it is possible to accomplish constraint avoidance by linking
several intermediary unconstrained maneuvers. While this
method is conceptually simple, it is difficult to generalize
for an arbitrary number of constraints. In addition, this
approach is only applicable to problems where the selection
of intermediate points is computationally feasible.
The approach in [10] involves the use of randomized
motion planning algorithms to solve the constrained attitude
control problem. A graph is generated consisting of vertices
from an initial attitude to a desired attitude. A random
iterative search is conducted to determine a path through
a directed graph such that a given cost functional is minimized. The random search approach can only stochastically
guarantee attitude convergence, as it can be shown that as
the number of vertices in the graph grows, the probability of
nonconvergence goes to zero. However, the computational
demand grows as the size of the graph is increased. As a
result, random search approaches are ill-suited to on-board
implementation or in scenarios that require agile maneuvers.
Model predictive control for spacecraft attitude dynamics
is studied in [11], [12], [13]. These methods rely on linear
or non-linear state dynamics to repeatedly solve a finite-time optimal control problem. As a result, model predictive
control methods are also computationally expensive, as they apply
direct optimization methods to solve the necessary conditions
for optimality. Therefore these methods are complicated to
implement and not applicable for real-time control applications.
Artificial potential functions are commonly used to handle kinematic constraints for a wide range of problems
in robotics [14]. The goal is the design of attractive and
repulsive terms which drive the system toward or away from
a certain state, respectively. The superposition of these
functions allows one to apply standard feedback control
schemes for stabilization and tracking. More specifically,
artificial potential functions have previously been applied to
the spacecraft attitude control problem in [15], [16]. However, both of these approaches were developed using attitude
parameterizations, namely Euler angles and quaternions, and
as such, they are limited by the singularities of minimal
representations or the ambiguity of quaternions.
This paper is focused on developing an adaptive attitude
control scheme in the presence of attitude inequality constraints on SO(3). We apply a potential function based approach developed directly on the nonlinear manifold SO(3).
By characterizing the attitude both globally and uniquely on
SO(3), our approach avoids the issues of attitude parameterizations, such as kinematic singularities and ambiguities,
and is geometrically exact. A configuration error function
on SO(3) with a logarithmic barrier function is proposed
to avoid constrained regions. Instead of calculating a priori
trajectories, as in the geometric and randomized approaches,
our approach results in a closed-loop attitude control system.
This makes it ideal for on-board implementation on UAV or
spacecraft systems. In addition, unlike previous approaches
our control system can handle an arbitrary number of constrained regions without modification.
Furthermore, we formulate an adaptive update law to
enable attitude convergence in the presence of uncertain
disturbances. The stability of the proposed control systems
is verified via mathematically rigorous Lyapunov analysis on
SO(3). In short, the proposed attitude control system in the
presence of inequality constraints is computationally efficient
and able to handle uncertain disturbances. The effectiveness
of this approach is illustrated via numerical simulation and
experimental results.
II. PROBLEM FORMULATION
A. Attitude Dynamics
Consider the attitude dynamics of a rigid body. We define an inertial reference frame and a body frame whose origin is at the center of mass and aligned with the principal directions of the body. The configuration manifold of the attitude dynamics is the special orthogonal group:

\[ SO(3) = \{ R \in \mathbb{R}^{3\times 3} \mid R^T R = I,\ \det[R] = 1 \}, \]

where a rotation matrix R ∈ SO(3) represents the transformation of the representation of a vector from the body-fixed frame to the inertial reference frame. The equations of motion are given by

\[ J\dot\Omega + \Omega \times J\Omega = u + W(R, \Omega)\Delta, \tag{1} \]
\[ \dot R = R\hat\Omega, \tag{2} \]

where J ∈ ℝ^{3×3} is the inertia matrix, and Ω ∈ ℝ³ is the angular velocity represented with respect to the body-fixed frame. The control moment is denoted by u ∈ ℝ³, and it is expressed with respect to the body-fixed frame.

We assume that the external disturbance is expressed by W(R, Ω)Δ, where W(R, Ω) : SO(3) × ℝ³ → ℝ^{3×p} is a known function of the attitude and the angular velocity. The disturbance is represented by Δ ∈ ℝ^p and is an unknown, but fixed, uncertain parameter. In addition, we assume that a bound on W(R, Ω) and Δ is known and given by

\[ \|W\| \le B_W, \qquad \|\Delta\| \le B_\Delta. \tag{3} \]

This form of uncertainty enters the system dynamics through the input channel and as a result is referred to as a matched uncertainty. While this form of uncertainty is easier to treat than the unmatched variety, many physically realizable disturbances may be modeled in this manner. For example, orbital spacecraft are subject to gravity gradient torques caused by the non-spherical distribution of mass of both the spacecraft and the central gravitational body. This form of disturbance may be represented as a body-fixed torque on the vehicle. In addition, for typical scenarios, where the spacecraft is significantly smaller than the orbital radius, the disturbance torque may be assumed constant over short time intervals.

In (2), the hat map ∧ : ℝ³ → so(3) represents the transformation of a vector in ℝ³ to a 3 × 3 skew-symmetric matrix such that x̂y = x × y for any x, y ∈ ℝ³ [6]. More explicitly,

\[ \hat x = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}, \]

for x = [x₁, x₂, x₃]^T ∈ ℝ³. The inverse of the hat map is denoted by the vee map ∨ : so(3) → ℝ³. Several properties of the hat map are summarized as

\[ x \cdot \hat y z = y \cdot \hat z x, \qquad \hat x \hat y z = (x \cdot z)\,y - (x \cdot y)\,z, \tag{4} \]
\[ \widehat{(x \times y)} = \hat x \hat y - \hat y \hat x = y x^T - x y^T, \tag{5} \]
\[ \operatorname{tr}[A\hat x] = \frac{1}{2}\operatorname{tr}\!\left[ \hat x \left( A - A^T \right) \right] = -x^T \left( A - A^T \right)^\vee, \tag{6} \]
\[ \hat x A + A^T \hat x = \left( \left\{ \operatorname{tr}[A]\, I_{3\times 3} - A \right\} x \right)^\wedge, \tag{7} \]
\[ R\hat x R^T = (Rx)^\wedge, \qquad R(x \times y) = Rx \times Ry, \tag{8} \]

for any x, y, z ∈ ℝ³, A ∈ ℝ^{3×3}, and R ∈ SO(3). Throughout this paper, the dot product of two vectors is denoted by x · y = x^T y for any x, y ∈ ℝⁿ, and the maximum and minimum eigenvalues of J are denoted by λ_M and λ_m, respectively. The 2-norm of a matrix A is denoted by ‖A‖, and its Frobenius norm by \( \|A\|_F = \sqrt{\operatorname{tr}[A^T A]} \), with \( \|A\| \le \|A\|_F \le \sqrt{\operatorname{rank}(A)}\,\|A\| \).
B. State Inequality Constraint

The two-sphere is the manifold of unit vectors in ℝ³, S² = {q ∈ ℝ³ | ‖q‖ = 1}. We define r ∈ S² to be a unit vector from the mass center of the rigid body along a certain direction, represented with respect to the body-fixed frame. For example, r may represent the pointing direction of an on-board optical sensor. We define v ∈ S² to be a unit vector from the mass center of the rigid body toward an undesired pointing direction, represented in the inertial reference frame. For example, v may represent the inertial direction of a bright celestial object or the incoming direction of particles or other debris. It is further assumed that the optical sensor has a strict non-exposure constraint with respect to the celestial object. We formulate this hard constraint as

\[ r^T R^T v \le \cos\theta, \tag{9} \]

where we assume 0° ≤ θ ≤ 90° is the required minimum angular separation between r and R^T v.

The objective is to determine a control input u that stabilizes the system from an initial attitude R₀ to a desired attitude R_d while ensuring that (9) is always satisfied.
III. ATTITUDE CONTROL ON SO(3) WITH INEQUALITY CONSTRAINTS
The first step in designing a control system on a nonlinear
manifold Q is the selection of a proper configuration error
function. This configuration error function, Ψ : Q × Q →
R, is a smooth and proper positive definite function that
measures the error between the current configuration and
a desired configuration. Once an appropriate configuration
error function is chosen, one can then define a configuration
error vector and a velocity error vector in the tangent space
Tq Q through the derivatives of Ψ [6]. With the configuration
error function and vectors the remaining procedure is analogous to nonlinear control design on Euclidean vector spaces.
One chooses control inputs as functions of the state through
a Lyapunov analysis on Q.
To handle the attitude inequality constraint, we propose a
new attitude configuration error function. More explicitly,
we extend the trace form used in [6], [17] for attitude
control on SO(3) with the addition of a logarithmic barrier
function. Based on the proposed configuration error function,
nonlinear geometric attitude controllers are constructed. A
smooth control system is first developed assuming that there
is no disturbance, and then it is extended to include an
adaptive update law for stabilization in the presence of
unknown disturbances. The proposed attitude configuration
error function and several properties are summarized as
follows.
Proposition 1 (Attitude Error Function) Define an attitude error function Ψ : SO(3) → ℝ, an attitude error vector e_R ∈ ℝ³, and an angular velocity error vector e_Ω ∈ ℝ³ as follows:

\[ \Psi(R) = A(R)\,B(R), \tag{10} \]
\[ e_R = e_{R_A} B(R) + A(R)\, e_{R_B}, \tag{11} \]
\[ e_\Omega = \Omega, \tag{12} \]

with

\[ A(R) = \frac{1}{2}\operatorname{tr}\!\left[ G\left( I - R_d^T R \right) \right], \tag{13} \]
\[ B(R) = 1 - \frac{1}{\alpha}\,\ln\!\left( \frac{\cos\theta - r^T R^T v}{1 + \cos\theta} \right), \tag{14} \]
\[ e_{R_A} = \frac{1}{2}\left( G R_d^T R - R^T R_d G \right)^\vee, \tag{15} \]
\[ e_{R_B} = \frac{\widehat{\left( R^T v \right)}\, r}{\alpha\left( r^T R^T v - \cos\theta \right)}, \tag{16} \]

where α ∈ ℝ is a positive constant and the matrix G ∈ ℝ^{3×3} is a diagonal matrix of distinct, positive constants g₁, g₂, g₃ ∈ ℝ. Then, the following properties hold:

(i) Ψ is positive definite about R = R_d.
(ii) The variation of A(R) with respect to a variation δR = Rη̂ for η ∈ ℝ³ is given by
\[ D_R A \cdot \delta R = \eta \cdot e_{R_A}. \tag{17} \]
(iii) The variation of B(R) with respect to a variation δR = Rη̂ for η ∈ ℝ³ is given by
\[ D_R B \cdot \delta R = \eta \cdot e_{R_B}. \tag{18} \]
(iv) The critical points of Ψ are R_d, and R_d exp(πŝ) for s ∈ {e₁, e₂, e₃} satisfying R^T v = ±r.
(v) An upper bound of ‖e_{R_A}‖ is given as:
\[ \|e_{R_A}\|^2 \le \frac{A(R)}{b_1}, \tag{19} \]
where the constant b₁ is given by b₁ = h₁/(h₂ + h₃) for
\[ h_1 = \min\{ g_1 + g_2,\; g_2 + g_3,\; g_3 + g_1 \}, \]
\[ h_2 = \min\{ (g_1 - g_2)^2,\; (g_2 - g_3)^2,\; (g_3 - g_1)^2 \}, \]
\[ h_3 = \min\{ (g_1 + g_2)^2,\; (g_2 + g_3)^2,\; (g_3 + g_1)^2 \}. \]
Proof: See Appendix A.
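For concreteness, a direct transcription of (10)–(16) might look as follows. This is our own sketch (reusing the hypothetical `hat` and `vee` helpers defined earlier), and it is valid only in the feasible region r^T R^T v < cos θ, where the logarithm in (14) is defined:

```python
import numpy as np

def attitude_errors(R, Rd, r, v, G, alpha, theta):
    """Evaluate Psi and e_R for the barrier-augmented error function (10)-(16)."""
    c = np.cos(theta)
    A = 0.5 * np.trace(G @ (np.eye(3) - Rd.T @ R))              # (13)
    B = 1.0 - np.log((c - r @ R.T @ v) / (1.0 + c)) / alpha     # (14)
    eRA = 0.5 * vee(G @ Rd.T @ R - R.T @ Rd @ G)                # (15)
    eRB = hat(R.T @ v) @ r / (alpha * (r @ R.T @ v - c))        # (16)
    return A * B, eRA * B + A * eRB                             # (10), (11)
```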
Equation (10) is composed of an attractive term, A(R)
toward the desired attitude, and a repulsive term, B(R) away
from the undesired direction RT v. In order to visualize the
attitude error function on SO(3) we utilize a spherical coordinate representation. Recall that the spherical coordinate
system represents the position of a point relative to an origin
in terms of a radial distance, azimuth, and elevation. This
coordinate system is commonly used to define locations on
the Earth in terms of a latitude and longitude. Similarly,
the positions of celestial objects are defined on the celestial
sphere in terms of right ascension and declination. We apply
this concept and parametrize the rotation matrix R ∈ SO(3)
in terms of the spherical angles −180◦ ≤ λ ≤ 180◦ and
−90◦ ≤ β ≤ 90◦ . Using the elementary Euler rotations the
rotation matrix is now defined as R = exp(λê2 ) exp(βê3 ).
We iterate over the domains of λ and β in order to rotate the
body-fixed vector r throughout the two-sphere S2 . Applying
this method, Fig. 1 allows us to visualize the error function
on SO(3). The attractive error function, given by (13), has
been previously used for attitude control on SO(3). The
potential well of A(R) is illustrated in Fig. 1(a), where the
desired attitude lies at the minimum of A(R).
To incorporate the state inequality constraints we apply
a logarithmic barrier term. Barrier functions are typically
used in optimal control and motion planning applications. A
visualization of the configuration error function is presented
in Fig. 1(b) which shows that as the boundary of the
constraint is neared, or r^T R^T v → cos θ, the barrier term increases, B → ∞. We use the scale factor 1/(1 + cos θ) to ensure that Ψ remains positive definite. The logarithmic function is popular as it quickly decays away from the constraint boundary. The positive constant α serves to shape the barrier function: as α is increased, the impact of B(R) is reduced away from the constraint boundary. The superposition of the attractive and repulsive functions is shown in Fig. 1(c). The control system is defined such that the attitude trajectory follows the negative gradient of Ψ toward the minimum at R = R_d, while avoiding the constrained region.

Fig. 1: Configuration error function visualization. (a) Attractive A(R); (b) Repulsive B(R); (c) Configuration Ψ. (Surfaces plotted over longitude λ and latitude β.)

While (14) represents a single inequality constraint given as (9), it is readily generalized to multiple constraints of an arbitrary form. For example, the configuration error function can be formulated as \( \Psi = A\left[ 1 + \sum_i C_i \right] \), where C_i has the form C_i = B − 1 for the i-th constraint. In this manner, one may enforce multiple state inequality constraints, and we later demonstrate this through numerical simulation.

Proposition 2 (Error Dynamics) The attitude error dynamics for Ψ, e_R, e_Ω satisfy

\[ \frac{d}{dt}\Psi = e_R \cdot e_\Omega, \tag{20} \]
\[ \frac{d}{dt} e_R = \dot e_{R_A} B(R) + e_{R_A} \dot B(R) + \dot A(R)\, e_{R_B} + A(R)\, \dot e_{R_B}, \tag{21} \]
\[ \frac{d}{dt} e_{R_A} = E(R, R_d)\, e_\Omega, \tag{22} \]
\[ \frac{d}{dt} e_{R_B} = F(R)\, e_\Omega, \tag{23} \]
\[ \frac{d}{dt} A(R) = e_{R_A} \cdot e_\Omega, \tag{24} \]
\[ \frac{d}{dt} B(R) = e_{R_B} \cdot e_\Omega, \tag{25} \]
\[ \frac{d}{dt} e_\Omega = J^{-1}\left( -\Omega \times J\Omega + u + W(R, \Omega)\Delta \right), \tag{26} \]

where the matrices E(R, R_d), F(R) ∈ ℝ^{3×3} are given by

\[ E(R, R_d) = \frac{1}{2}\left( \operatorname{tr}\!\left[ R^T R_d G \right] I - R^T R_d G \right), \tag{27} \]
\[ F(R) = \frac{1}{\alpha\left( r^T R^T v - \cos\theta \right)} \left[ \left( v^T R r \right) I - R^T v\, r^T + \frac{\widehat{\left( R^T v \right)}\, R r\, v^T R\, \hat r}{r^T R^T v - \cos\theta} \right]. \tag{28} \]

Proof: See Appendix B.

A. Attitude Control without Disturbance

We introduce a nonlinear geometric controller for the attitude stabilization of a rigid body. We first assume that there is no disturbance, i.e., Δ = 0.

Proposition 3 (Attitude Control) Given a desired attitude command (R_d, Ω_d = 0), which satisfies the constraint (9), and positive constants k_R, k_Ω ∈ ℝ, we define a control input u ∈ ℝ³ as follows:

\[ u = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega. \tag{29} \]

Then the zero equilibrium of the attitude error is asymptotically stable, and the inequality constraint is satisfied.

Proof: See Appendix C.

This proposition only guarantees that the attitude error vector e_R asymptotically converges to zero. However, this does not necessarily imply that R → R_d as t → ∞, since there are at most three additional critical points of Ψ where e_R = 0 and R^T v = ±r. At an undesired equilibrium, R = exp(πêᵢ)R_d and e_Ω = 0. However, we can show that these undesired equilibrium points are unstable in the sense of Lyapunov [17]. As a result, we can claim that the desired equilibrium R = R_d and e_Ω = 0 is almost globally asymptotically stable, which means that the set of initial conditions that do not converge to the desired attitude has zero Lebesgue measure.
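As a minimal sketch of how (29) closes the loop (our illustration, reusing the helper functions from the earlier sketches; the explicit Euler integrator, time step, and default gains are our assumptions, not the authors'):

```python
import numpy as np
from scipy.linalg import expm

def control(R, Omega, Rd, r, v, G, alpha, theta, J, kR=0.4, kO=0.296):
    """Geometric attitude control (29) for the disturbance-free case."""
    _, eR = attitude_errors(R, Rd, r, v, G, alpha, theta)
    return -kR * eR - kO * Omega + np.cross(Omega, J @ Omega)

def step(R, Omega, u, J, dt=1e-3):
    """One explicit Euler step of (1)-(2) with Delta = 0; the matrix
    exponential keeps R on SO(3) up to numerical precision."""
    dOmega = np.linalg.solve(J, u - np.cross(Omega, J @ Omega))
    return R @ expm(hat(Omega) * dt), Omega + dOmega * dt
```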
B. Adaptive Control
We extend the results of the previous section with the
addition of a fixed but unknown disturbance ∆. This scenario
is typical of many mechanical systems and represents unmodeled dynamics or external moments acting on the system.
For example, Earth orbiting spacecraft typically experience
a torque due to a gravitational gradient. Aerial vehicles
will similarly experience external torques due to air currents
or turbulence. An adaptive control system is introduced to
asymptotically stabilize the system to a desired attitude while
ensuring that state constraints are satisfied.
Proposition 4 (Bound on ėR ) Consider a domain D about
the desired attitude defined as
\[ D = \left\{ R \in SO(3) \;\middle|\; \Psi < \psi < h_1,\; r^T R^T v < \beta < \cos\theta \right\}. \tag{30} \]

Then the following statements hold:

(i) Upper bounds of A(R) and B(R) are given by
\[ \|A\| < c_A, \qquad \|B\| < c_B. \tag{31} \]

(ii) Upper bounds of E(R, R_d) and F(R) are given by
\[ \|E\| \le \frac{1}{\sqrt{2}}\,\operatorname{tr}[G], \tag{32} \]
\[ \|F\| \le \frac{\left( \beta^2 + 1 \right)\left( \beta - \cos\theta \right)^2 + 1 + \beta^2\left( \beta^2 - 2 \right)}{\alpha^2\left( \beta - \cos\theta \right)^4}. \tag{33} \]

(iii) Upper bounds of the attitude error vectors e_{R_A} and e_{R_B} are given by
\[ \|e_{R_A}\| \le \sqrt{\frac{\psi}{b_1}}, \tag{34} \]
\[ \|e_{R_B}\| \le \frac{\sin\theta}{\alpha\left( \cos\theta - \beta \right)}. \tag{35} \]

These results are combined to yield a maximum upper bound on the time derivative of the attitude error vector ė_R as

\[ \|\dot e_R\| \le H \|e_\Omega\|, \tag{36} \]

where H ∈ ℝ is defined as

\[ H = \|B\|\|E\| + 2\|e_{R_A}\|\|e_{R_B}\| + \|A\|\|F\|. \tag{37} \]

Proof: See Appendix D.

Proposition 5 (Adaptive Attitude Control) Given a desired attitude command (R_d, Ω_d = 0) and positive constants k_R, k_Ω, k_Δ, c ∈ ℝ, we define a control input u ∈ ℝ³ and an adaptive update law for the estimated uncertainty Δ̄ as follows:

\[ u = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega - W\bar\Delta, \tag{38} \]
\[ \dot{\bar\Delta} = k_\Delta W^T \left( e_\Omega + c\, e_R \right). \tag{39} \]

If c is chosen such that

\[ 0 < c < \frac{4 k_R k_\Omega}{k_\Omega^2 + 4 k_R \lambda_M H}, \tag{40} \]

the zero equilibrium of the error vectors is stable in the sense of Lyapunov. Furthermore, e_R, e_Ω → 0 as t → ∞, and Δ̄ is uniformly bounded.

Proof: See Appendix E.

Nonlinear adaptive controllers have been developed for attitude stabilization in terms of modified Rodriguez parameters and quaternions, as well as attitude tracking in terms of Euler angles. The proposed control system is developed on SO(3) and avoids the singularities of Euler angles and Rodriguez parameters while incorporating state inequality constraints. In addition, the unwinding and double-coverage ambiguity of quaternions are also completely avoided. The control system handles uncertain disturbances while avoiding constrained regions.

Compared to the previous work on constrained attitude control, we present a geometrically exact control system without parameterizations. In addition, we incorporate state inequality constraints for the first time on SO(3). The presented control system is computed in real time and offers significant computational advantages over previous iterative methods. In addition, the rigorous mathematical proof guarantees stability.

IV. NUMERICAL EXAMPLES

We demonstrate the performance of the proposed control system via numerical simulation. The inertia tensor of a rigid body is given as

\[ J = \begin{bmatrix} 5.57\times 10^{-3} & 6.17\times 10^{-5} & -2.50\times 10^{-5} \\ 6.17\times 10^{-5} & 5.57\times 10^{-3} & 1.00\times 10^{-5} \\ -2.50\times 10^{-5} & 1.00\times 10^{-5} & 1.05\times 10^{-2} \end{bmatrix}\ \mathrm{kg\,m^2}. \]

The control system parameters are chosen as

G = diag[0.9, 1.1, 1.0], k_R = 0.4, k_Ω = 0.296, c = 1.0, k_Δ = 0.5, α = 15.

A body-fixed sensor is defined as r = [1, 0, 0], while multiple inequality constraints are defined in Table I. The simulation parameters are chosen to be similar to those found in [15]; however, we increase the size of the constraint regions to create a more challenging scenario for the control system.

TABLE I: Constraint Parameters

Constraint Vector (v)              Angle (θ)
[0.174, −0.934, −0.034]^T          40°
[0, 0.7071, 0.7071]^T              40°
[−0.853, 0.436, −0.286]^T          40°
[−0.122, −0.140, −0.983]^T         20°

The initial state is defined as R₀ = exp(225° × π/180 ê₃), Ω₀ = 0. The desired state is R_d = I, Ω_d = 0. We show simulation results for the system stabilizing about the desired attitude with and without the adaptive update law from Proposition 5. We assume a fixed disturbance of Δ = [0.2, 0.2, 0.2]^T, with the function W(R, Ω) = I. This form is equivalent to an integral control term which penalizes deviations from the desired configuration. The first term of (39) has the effect of increasing the proportional gain of the control system, since the time derivative of the attitude error vector, ė_R, is linear with respect to the angular velocity error vector e_Ω.

Fig. 2: Attitude stabilization without adaptive update law. (a) Attitude error vector e_R; (b) Configuration error Ψ.
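A corresponding sketch of one simulation step under the adaptive law (38)–(39), with W(R, Ω) = I as in this example (again our own illustration, reusing the helpers from the earlier sketches; the Euler integration and time step are our assumptions):

```python
import numpy as np
from scipy.linalg import expm

def adaptive_step(R, Omega, Dbar, Rd, r, v, G, alpha, theta, J, D_true,
                  kR=0.4, kO=0.296, kD=0.5, c=1.0, dt=1e-3):
    _, eR = attitude_errors(R, Rd, r, v, G, alpha, theta)
    W = np.eye(3)                                    # W(R, Omega) = I here
    u = (-kR * eR - kO * Omega + np.cross(Omega, J @ Omega)
         - W @ Dbar)                                 # control input (38)
    Dbar = Dbar + kD * W.T @ (Omega + c * eR) * dt   # update law (39)
    dOmega = np.linalg.solve(J, u - np.cross(Omega, J @ Omega) + W @ D_true)
    return R @ expm(hat(Omega) * dt), Omega + dOmega * dt, Dbar
```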
Fig. 3: Attitude stabilization with adaptive update law. (a) Configuration error Ψ; (b) Angle to constraints, arccos(r^T R^T v_i); (c) Disturbance estimate Δ̄; (d) Attitude trajectory.

Fig. 4: Attitude control testbed.
Simulation results without the adaptive update law are
shown in Fig. 2. Without the update law, the system does
not achieve zero steady state error. Fig. 2(b) shows that the
configuration error function does not converge to zero and
there exist steady state errors. Fig. 3 shows the results with
the addition of the adaptive update law. The addition of the
adaptive update law allows the system to converge to the
desired attitude in the presence of constraints. The path of
the body fixed sensor in the inertial frame, namely Rr, is
illustrated in Fig. 3(d). The initial attitude is represented with
the green circle while the final attitude is marked with a green
×. The inequality constraints from Table I are depicted as
red cones, where the cone half angle is θ. The control system
is able to asymptotically converge to zero attitude error.
Fig. 3(b) shows that the angle arccos(rT RT vi ) between the
body fixed sensor and each constraint is satisfied for the
entire maneuver. In addition, the estimate of the disturbance
converges to the true value, as shown in Fig. 3(c).
Both control systems are able to automatically avoid the
constrained regions. In addition, these results show that
it is straightforward to incorporate an arbitrary number of
large constraints. In spite of this challenging configuration
space the proposed control system offers a simple method
of avoiding constrained regions. These closed-loop feedback
results are computed in real time and offer a significant
advantage over typical open-loop planning methods. These
results show that the proposed geometric adaptive approach
is critical to attitude stabilization in the presence of state
constraints and disturbances.
V. EXPERIMENT ON HEXROTOR UAV
A hexrotor unmanned aerial vehicle (UAV) has been
developed at the Flight Dynamics and Controls Laboratory
(FDCL) at the George Washington University [18]. The UAV
is composed of three pairs of counter-rotating propellers.
The propeller pairs of the hexrotor are angled relative to
one another to allow for a fully actuated rigid body.
The hexrotor UAV, shown in Fig. 4, is composed of the
following hardware:
• Onboard ODROID XU3 computer module.
• VectorNav VN100 IMU operating via TTL serial
• BLDC motors with BL-Ctrl-2.0 ESC via I2C.
• Position and attitude over WiFi (TCP) communication
from Vicon motion capture system.
• Commands sent over WiFi to onboard controller.
In order to constrain the motion and test only the attitude
dynamics we attach the hexrotor to a spherical joint. The
center of rotation is below the center of gravity of the
hexrotor. As a result, there is a destabilizing gravitational
moment and the resulting attitude dynamics are similar to
an inverted pendulum model. We augment the control input
in (38) with an additional term to negate the effect of the
gravitational moment.
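The exact form of this compensating term is not given in the text. For a pivot offset ρ from the joint to the mass center (a hypothetical parameter here, as are the mass and offset values), a standard choice would be the following sketch, which cancels the gravitational moment expressed in the body frame:

```python
import numpy as np

def gravity_compensation(R, m=1.0, g=9.81, rho=np.array([0.0, 0.0, 0.10])):
    """Moment canceling gravity about the pivot for a body-frame offset rho
    (pivot to mass center); all numerical values are placeholders."""
    e3 = np.array([0.0, 0.0, 1.0])
    # Gravity moment about the pivot is -m g rho x (R^T e3); add its negative.
    return m * g * hat(rho) @ R.T @ e3
```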
A sensor pointing direction is defined in the body frame
of the hexrotor as r = [1, 0, 0]T . We define an obstacle in the
inertial frame as v = [1/√2, 1/√2, 0]^T with θ = 12°. An initial
state is defined as R(0) = exp(π/2 ê₃), while the desired state
is Rd = I. This results in the UAV performing a 90◦ yaw
rotation about the vertical axis of the spherical joint and
the constrained region is on the shortest path connecting R0
and Rd . The attitude control system is identical to the one
presented in Proposition 5 with the exception of a gravity
moment term and the following parameters: kR = 0.4, kΩ =
0.7, c = 0.1, α = 8 and k∆ = 0.05.
The experimental results are shown in Fig. 5. In order
to maneuver the system “close” to the constrained zone we
utilize several intermediary set points on either side of the
obstacle. From the initial attitude the hexrotor rotates to the
first set point, pauses, and then continues around the obstacle
to the second set point before continuing toward the desired
attitude. As a result this creates the stepped behavior of the
configuration error history as shown in Fig. 5(b).
The brushless motors of the hexrotor allow for large
control inputs which are critical to enable aggressive maneuvers. When constrained to the spherical joint the hexrotor
is capable of performing responsive attitude changes with
high angular velocities. In addition, the on-board control
and motion capture system operate at a discrete interval
of approximately 100 Hz. It is possible for the system to
violate the constraint between these discrete steps and cause
numerical exceptions within the embedded software. As a result, conservative control gains are chosen to ensure the hexrotor operates in a sedate manner and to allow sufficient time for the measurement and control software to operate.

There exist several sources of error in the experimental setup. The motion capture system uses a series of optical sensors to determine the relative position of several tracking markers. These markers, as well as the cameras, must remain fixed to ensure accurate attitude measurement. In addition, the spherical joint is not fixed at the center of mass but is instead offset due to the physical structure of the hexrotor. As a result, a disturbance moment is induced on the resultant motion.

This results in a small steady state error in the vicinity of the desired attitude. Over time, (39) will remain non-zero while e_R ≠ 0. This will cause an increase in control input until the steady-state error is reduced. Further tuning of the control gains would enable a faster response and a reduced settling time.

The hexrotor avoids the constrained region, illustrated by the circular cone in Fig. 5(d), by rotating around the boundary of the constraint. This verifies that the proposed control system exhibits the desired performance in the experimental setting as well. A video clip showing the attitude maneuver is available at https://youtu.be/dsmAbwQram4.

Fig. 5: Constrained attitude stabilization experiment. (a) Attitude error vector e_R; (b) Configuration error Ψ; (c) Control input u; (d) Attitude trajectory.

VI. CONCLUSIONS

We have developed a geometric adaptive control system which incorporates state inequality constraints on SO(3). The presented control system is developed directly on SO(3) and it avoids singularities and ambiguities that are inherent to attitude parameterizations. The attitude configuration error is augmented with a barrier function to avoid the constrained region, and an adaptive control law is proposed to cancel the effects of uncertainties. We show the stability of the proposed control system through a rigorous mathematical analysis. In addition, we have demonstrated the control system via numerical simulation and hardware experiments on a hexrotor UAV. A novel feature of this control is that it is computed autonomously on-board the UAV. This is in contrast to many state constrained attitude control systems which require an a priori attitude trajectory to be calculated. The presented method is simple, efficient and ideal for hardware implementation on embedded systems.

APPENDIX

A. Proof of Proposition 1

To prove (i), we note that (13) is a positive definite function about R = R_d [6]. The constraint angle is assumed 0° ≤ θ ≤ 90°, such that 0 ≤ cos θ. The term r^T R^T v represents the cosine of the angle between the body-fixed vector r and the inertial vector v. It follows that

\[ 0 \le \frac{\cos\theta - r^T R^T v}{1 + \cos\theta} \le 1, \]

for all R ∈ SO(3). As a result, its negative logarithm is always positive, and from (14), 1 < B. The error function Ψ = AB is composed of two positive terms and is therefore also positive definite, and it is minimized at R = R_d.

Next, we consider (ii). The variation of (13) is taken with respect to δR = Rη̂ as

\[ D_R A \cdot \delta R = \eta \cdot \frac{1}{2}\left( G R_d^T R - R^T R_d G \right)^\vee, \]

where we used (6).

A straightforward application of the chain and product rules of differentiation allows us to show (iii) as

\[ D_R B \cdot \delta R = \eta \cdot \frac{-\widehat{\left( R^T v \right)}\, r}{\alpha\left( \cos\theta - r^T R^T v \right)}, \]

where the scalar triple product (4) was used.

The critical points of e_{R_A} are derived in [6]. There are four critical points of e_{R_A}: the desired attitude R_d, as well as rotations about each body-fixed axis by 180°. The repulsive error vector e_{R_B} is zero only when the numerator \( \widehat{(R^T v)}\, r = 0 \). This condition only occurs if the desired attitude results in the body-fixed vector r becoming aligned with R^T v while simultaneously satisfying {R_d} ∪ {R_d exp(πŝ)} for s ∈ {e₁, e₂, e₃}. Since we assume the system will not operate in violation of the constraints, the addition of the barrier function does not add additional critical points to the control system. The desired equilibrium is e_R = 0 and A = 0. The proof of the bound on ‖e_{R_A}‖ given by (v) is available in [17].
B. Proof of Proposition 2

From the kinematics (2), and noting that Ṙ_d = 0, the time derivative of R_d^T R is given as

\[ \frac{d}{dt}\left( R_d^T R \right) = R_d^T R\, \hat e_\Omega. \]

Applying this to the time derivative of (13) gives

\[ \frac{d}{dt} A = -\frac{1}{2}\operatorname{tr}\!\left[ G R_d^T R\, \hat e_\Omega \right]. \]

Applying (6) to this shows (24). Next, the time derivative of the repulsive error function is given by

\[ \frac{d}{dt} B = \frac{r^T \hat\Omega R^T v}{\alpha\left( r^T R^T v - \cos\theta \right)}. \]

Using the scalar triple product, given by (4), one can reduce this to (25). The time derivative of the attractive attitude error vector, e_{R_A}, is given by

\[ \frac{d}{dt} e_{R_A} = \frac{1}{2}\left( \hat e_\Omega R^T R_d G + \left( R^T R_d G \right)^T \hat e_\Omega \right)^\vee. \]

Using the hat map property given in (7), this is further reduced to (22) and (27).

We take the time derivative of the repulsive attitude error vector, e_{R_B}, as

\[ \frac{d}{dt} e_{R_B} = a\, \Omega\, v^T R r - a\, R^T v\, \Omega^T r + b\, R^T \hat v\, R\, r, \]

with a ∈ ℝ and b ∈ ℝ given by

\[ a = \frac{1}{\alpha\left( r^T R^T v - \cos\theta \right)}, \qquad b = \frac{r^T \hat\Omega R^T v}{\alpha\left( r^T R^T v - \cos\theta \right)^2}. \]

Using the scalar triple product from (4) as \( r \cdot \left( \Omega \times R^T v \right) = R^T v \cdot \left( r \times \Omega \right) \) gives (23) and (28).

We show the time derivative of the configuration error function as

\[ \frac{d}{dt}\Psi = \dot A B + A \dot B. \]

A straightforward substitution of (13), (14), (24) and (25) into this, and applying (11), shows (20). We show (26) by rearranging (1) as

\[ \frac{d}{dt} e_\Omega = \dot\Omega = J^{-1}\left( u - \Omega \times J\Omega + W(R, \Omega)\Delta \right). \]
C. Proof of Proposition 3

Consider the following Lyapunov function:

\[ \mathcal{V} = \frac{1}{2} e_\Omega \cdot J e_\Omega + k_R \Psi(R, R_d). \tag{41} \]

From (i) of Proposition 1, V ≥ 0. Using (20) and (26) with Δ = 0, the time derivative of V is given by

\[ \dot{\mathcal{V}} = -k_\Omega \|e_\Omega\|^2. \tag{42} \]

Since V is positive definite and V̇ is negative semi-definite, the zero equilibrium point (e_R, e_Ω) is stable in the sense of Lyapunov. This also implies lim_{t→∞} ‖e_Ω‖ = 0, and ‖e_R‖ is uniformly bounded, as the Lyapunov function is non-increasing. From (22) and (23), lim_{t→∞} ė_R = 0. One can show that ‖ë_R‖ is bounded. From Barbalat's Lemma, it follows that lim_{t→∞} ‖ė_R‖ = 0 [19, Lemma 8.2]. Therefore, the equilibrium is asymptotically stable.

Furthermore, since V̇ ≤ 0, the Lyapunov function is uniformly bounded, which implies

\[ \Psi(R(t)) \le \mathcal{V}(t) \le \mathcal{V}(0). \]

In addition, the logarithmic term in (14) ensures Ψ(R) → ∞ as r^T R^T v → cos θ. Therefore, the inequality constraint is always satisfied, given that the desired equilibrium lies in the feasible set.
D. Proof of Proposition 4

The selected domain ensures that the configuration error function is bounded, Ψ < ψ. This implies that both A(R) and B(R) are bounded by constants c_A, c_B < ψ < h₁. Furthermore, since ‖B‖ > 1, this ensures that c_A, c_B < ψ, which shows (31).

Next, we show (32) and (33) using the Frobenius norm. The Frobenius norm ‖E‖_F is given in [17] as

\[ \|E\|_F = \sqrt{\operatorname{tr}\!\left[ E^T E \right]} = \frac{1}{2}\sqrt{\operatorname{tr}\!\left[ G^2 \right] + \operatorname{tr}\!\left[ R^T R_d G \right]^2}. \]

Applying Rodrigues' formula and the Matlab symbolic toolbox, this is simplified to

\[ \|E\|_F^2 \le \frac{1}{4}\left( \operatorname{tr}\!\left[ G^2 \right] + \operatorname{tr}[G]^2 \right) \le \frac{1}{2}\operatorname{tr}[G]^2, \]

which shows (32), since ‖E‖ ≤ ‖E‖_F.

To show (33), we apply the Frobenius norm ‖F‖_F:

\[ \|F\|_F^2 = \frac{1}{\alpha^2\left( r^T R^T v - \cos\theta \right)^2}\left( \operatorname{tr}\!\left[ a^T a \right] - 2\operatorname{tr}\!\left[ a^T b \right] + 2\operatorname{tr}\!\left[ a^T c \right] + \operatorname{tr}\!\left[ b^T b \right] - 2\operatorname{tr}\!\left[ b^T c \right] + \operatorname{tr}\!\left[ c^T c \right] \right), \]

where the terms a, b, and c are given by

\[ a = \left( v^T R r \right) I, \qquad b = R^T v\, r^T, \qquad c = \frac{\widehat{\left( R^T v \right)}\, R r\, v^T R\, \hat r}{r^T R^T v - \cos\theta}. \]

A straightforward computation of a^T a shows that

\[ \operatorname{tr}\!\left[ a^T a \right] = \left( v^T R r \right)^2 \operatorname{tr}[I] \le 3\beta^2, \]

where we used the fact that v^T R r = r^T R^T v < β in the given domain. Similarly, one can show that tr[a^T b] is equivalent to

\[ \operatorname{tr}\!\left[ a^T b \right] = \left( v^T R r \right)\operatorname{tr}\!\left[ R^T v\, r^T \right] = \left( v^T R r \right)^2 \le \beta^2, \]

where we used the fact that tr[x y^T] = x^T y. The product tr[a^T c] is given by

\[ \operatorname{tr}\!\left[ a^T c \right] = \frac{v^T R r}{r^T R^T v - \cos\theta}\,\operatorname{tr}\!\left[ \widehat{\left( R^T v \right)}\, R r\, v^T R\, \hat r \right], \]

where we used the hat map property (8). One can show that tr[a^T c] ≤ 0 over the range −1 ≤ v^T R r ≤ cos θ. Next, tr[b^T b] is equivalent to

\[ \operatorname{tr}\!\left[ b^T b \right] = \operatorname{tr}\!\left[ r\, v^T R R^T v\, r^T \right] = 1, \]

since r, v ∈ S². Finally, tr[c^T c] is reduced to

\[ \operatorname{tr}\!\left[ c^T c \right] = \frac{\operatorname{tr}\!\left[ \hat r\, R^T v\, r^T \left( -I + R^T v\, v^T R \right) r\, v^T R\, \hat r \right]}{\left( r^T R^T v - \cos\theta \right)^2}, \]

where we used the fact that x̂² = −‖x‖² I + x x^T. Expanding and collecting like terms gives

\[ \operatorname{tr}\!\left[ c^T c \right] = \frac{1 - 2\left( v^T R r \right)^2 + \left( v^T R r \right)^4}{\left( r^T R^T v - \cos\theta \right)^2}. \]

Using the given domain, r^T R^T v ≤ β, gives the upper bound (33). The bound on e_{R_A} is given in (19), while the bound on e_{R_B} arises from the definition of the cross product, ‖a × b‖ = ‖a‖‖b‖ sin θ. Finally, from (21) we can find the upper bound

\[ \|\dot e_R\| \le \left( \|B\|\|E\| + 2\|e_{R_A}\|\|e_{R_B}\| + \|A\|\|F\| \right)\|e_\Omega\|. \]

Using (31)–(35), one can define H in terms of known values.
E. Proof of Proposition 5

Consider the Lyapunov function V given by

\[ \mathcal{V} = \frac{1}{2} e_\Omega \cdot J e_\Omega + k_R \Psi + c\, J e_\Omega \cdot e_R + \frac{1}{2 k_\Delta} e_\Delta \cdot e_\Delta, \tag{43} \]

over the domain D in (30). From Proposition 4, the Lyapunov function is bounded in D by

\[ \mathcal{V} \le z^T W z, \tag{44} \]

where e_Δ = Δ − Δ̄, z = [‖e_R‖, ‖e_Ω‖, ‖e_Δ‖]^T ∈ ℝ³, and the matrix W ∈ ℝ^{3×3} is given by

\[ W = \begin{bmatrix} k_R \psi & \frac{1}{2} c \lambda_M & 0 \\ \frac{1}{2} c \lambda_M & \frac{1}{2} \lambda_M & 0 \\ 0 & 0 & \frac{1}{2 k_\Delta} \end{bmatrix}. \]

The time derivative of V with the control input (38) is given by

\[ \dot{\mathcal{V}} = -k_\Omega e_\Omega^T e_\Omega + \left( e_\Omega + c\, e_R \right)^T W e_\Delta - k_R c\, e_R^T e_R - k_\Omega c\, e_R^T e_\Omega + c\, e_\Omega^T J \dot e_R - \frac{1}{k_\Delta} e_\Delta^T \dot{\bar\Delta}, \tag{45} \]

where we used ė_Δ = −Δ̄˙. The terms linearly dependent on e_Δ are combined with (39) to yield

\[ e_\Delta^T \left( W^T \left( e_\Omega + c\, e_R \right) - \frac{1}{k_\Delta} \dot{\bar\Delta} \right) = 0. \]

Using Proposition 4, an upper bound on V̇ is written as

\[ \dot{\mathcal{V}} \le -\zeta^T M \zeta, \]

where ζ = [‖e_R‖, ‖e_Ω‖]^T ∈ ℝ², and the matrix M ∈ ℝ^{2×2} is

\[ M = \begin{bmatrix} k_R c & -\frac{k_\Omega c}{2} \\ -\frac{k_\Omega c}{2} & k_\Omega - c \lambda_M H \end{bmatrix}. \tag{46} \]

If c is chosen such that (40) is satisfied, the matrix M is positive definite. This implies that V̇ is negative semidefinite and lim_{t→∞} ζ = 0. As the Lyapunov function is non-increasing, z is uniformly bounded.
REFERENCES
[1] P. Hughes, Spacecraft Attitude Dynamics. Dover Publications, 2004.
[2] J. R. Wertz, Spacecraft Attitude Determination and Control. Springer,
1978, vol. 73.
[3] S. P. Bhat and D. S. Bernstein, “A topological obstruction to continuous global stabilization of rotational motion and the unwinding
phenomenon,” Systems & Control Letters, 2000.
[4] M. D. Shuster, “A survey of attitude representations,” Navigation,
vol. 8, no. 9, 1993.
[5] N. Chaturvedi, A. K. Sanyal, N. H. McClamroch, et al., “Rigid-body
attitude control,” Control Systems, IEEE, vol. 31, no. 3, pp. 30–51,
2011.
[6] F. Bullo and A. D. Lewis, Geometric Control of Mechanical Systems,
ser. Texts in Applied Mathematics. New York-Heidelberg-Berlin:
Springer Verlag, 2004, vol. 49.
[7] C. Mayhew and A. Teel, “Synergistic potential functions for hybrid
control of rigid-body attitude,” in Proceedings of the American Control
Conference, 2011, pp. 875–880.
[8] T. Lee, “Global exponential attitude tracking controls on SO(3),”
IEEE Transactions on Automatic Control, vol. 60, no. 10, pp. 2837–
2842, 2015.
[9] H. B. Hablani, “Attitude commands avoiding bright objects
and maintaining communication with ground station,” Journal of
Guidance, Control, and Dynamics, vol. 22, no. 6, pp. 759–767, 1999. [Online]. Available: http://dx.doi.org/10.2514/2.4469
[10] E. Frazzoli, M. Dahleh, E. Feron, and R. Kornfeld, “A randomized
attitude slew planning algorithm for autonomous spacecraft,” in AIAA
Guidance, Navigation, and Control Conference and Exhibit, Montreal,
Canada, 2001.
[11] A. Guiggiani, I. Kolmanovsky, P. Patrinos, and A. Bemporad, “Fixed-point constrained model predictive control of spacecraft attitude,”
arXiv:1411.0479, 2014. [Online]. Available: http://arxiv.org/abs/1411.
0479
[12] U. Kalabic, R. Gupta, S. Di Cairano, A. Bloch, and I. Kolmanovsky,
“Constrained spacecraft attitude control on SO(3) using reference
governors and nonlinear model predictive control,” in American Control Conference (ACC),
2014, June 2014, pp. 5586–5593.
[13] R. Gupta, U. Kalabic, S. Di Cairano, A. Bloch, and I. Kolmanovsky,
“Constrained spacecraft attitude control on SO(3) using fast nonlinear
model predictive control,” in American Control Conference (ACC),
2015, July 2015, pp. 2980–2986.
[14] E. Rimon and D. E. Koditschek, “Exact robot navigation using artificial potential functions,” Robotics and Automation, IEEE Transactions
on, vol. 8, no. 5, pp. 501–518, 1992.
[15] U. Lee and M. Mesbahi, “Spacecraft Reorientation in Presence of Attitude Constraints via Logarithmic Barrier Potentials,” in Proceedings
of the American Control Conference, 2011, pp. 450–455.
[16] C. R. McInnes, “Large angle slew maneuvers with autonomous sun
vector avoidance,” Journal of Guidance, Control, and Dynamics,
vol. 17, no. 4, pp. 875–877, 1994. [Online]. Available:
http://dx.doi.org/10.2514/3.21283
[17] T. Lee, “Robust adaptive tracking on SO(3) with an application to the
attitude dynamics of a quadrotor UAV,” IEEE Transactions on Control
Systems Technology, vol. 21, no. 5, pp. 1924–1930, September 2013.
[18] E. Kaufman, K. Caldwell, D. Lee, and T. Lee, “Design and development of a free-floating hexrotor UAV for 6-dof maneuvers,” in
Proceedings of the IEEE Aerospace Conference, 2014.
[19] H. K. Khalil, Nonlinear Systems, 3rd ed. Prentice Hall New Jersey,
2002.
arXiv:1803.00008v1 [] 28 Feb 2018

INSPECTRE: Privately Estimating the Unseen
Jayadev Acharya∗
ECE, Cornell University
[email protected]
Gautam Kamath†
EECS & CSAIL, MIT
[email protected]
Ziteng Sun‡
ECE, Cornell University
[email protected]
Huanyu Zhang§
ECE, Cornell University
[email protected]
March 2, 2018
Abstract
We develop differentially private methods for estimating various distributional properties. Given a
sample from a discrete distribution p, some functional f , and accuracy and privacy parameters α and ε,
the goal is to estimate f (p) up to accuracy α, while maintaining ε-differential privacy of the sample.
We prove almost-tight bounds on the sample size required for this problem for several functionals
of interest, including support size, support coverage, and entropy. We show that the cost of privacy
is negligible in a variety of settings, both theoretically and experimentally. Our methods are based on
a sensitivity analysis of several state-of-the-art methods for estimating these properties with sublinear
sample complexities.
1
Introduction
How can we infer a distribution given a sample from it? If data is in abundance, the solution may be simple
– the empirical distribution will approximate the true distribution. However, challenges arise when data
is scarce in comparison to the size of the domain, and especially when we wish to quantify “rare events.”
This is frequently the case: for example, it has recently been observed that there are several very rare
genetic mutations which occur in humans, and we wish to know how many such mutations exist [KC12,
TBO+ 12, NWE+ 12]. Many of these mutations have only been seen once, and we can infer that there are
many which have not been seen at all. Over the last decade, a large body of work has focused on developing
theoretically sound and effective tools for such settings (see [OSW16] and references therein), including the problem of estimating the frequency distribution of rare genetic variations [ZVV+ 16].
However, in many settings where one wishes to perform statistical inference, data may contain sensitive
information about individuals. For example, in medical studies the data may contain individuals’ health records, including whether they carry some disease which bears a social stigma. Alternatively, one can
consider a map application which suggests routes based on aggregate positions of individuals, which contains
delicate information including users’ residence data. In these settings, it is critical that our methods protect
sensitive information contained in the dataset. This does not preclude our overall goals of statistical analysis,
as we are trying to infer properties of the population p, and not the samples which are drawn from said
population.
That said, without careful experimental design, published statistical findings may be prone to leaking
sensitive information about the sample. As a notable example, it was recently shown that one can determine the identity of some individuals who participated in genome-wide association studies [HSR+ 08]. This
∗Supported by NSF CCF-1657471 and a Cornell University startup grant.
†Supported by ONR N00014-12-1-0999, NSF CCF-1617730, CCF-1650733, and CCF-1741137. Work partially done while the author was an intern at Microsoft Research, New England.
‡Supported by NSF CCF-1657471 and a Cornell University startup grant.
§Supported by NSF CCF-1657471 and a Cornell University startup grant.
realization has motivated a surge of interest in developing data sharing techniques with an explicit focus on
maintaining privacy of the data [JS13, USF13, YFSU14, SSB16].
Privacy-preserving computation has enjoyed significant study in a number of fields, including statistics and almost every branch of computer science: cryptography, machine learning, algorithms, and database theory – see, e.g., [Dal77, AW89, AA01, DN03, Dwo08, DR14] and references therein. Perhaps the most celebrated notion of privacy, proposed by theoretical computer scientists, is differential privacy [DMNS06]. Informally, an algorithm is differentially private if its outputs on neighboring datasets
(differing in a single element) are statistically close (for a more precise definition, see Section 2). Differential
privacy has become the standard for theoretically-sound data privacy, leading to its adoption by several large
technology companies, including Google and Apple [EPK14, Dif17].
Our focus in this paper is to develop tools for privately performing several distribution property estimation
tasks. In particular, we study the tradeoff between statistical accuracy, privacy, and error rate, in terms of the required sample size. Our model is that we are given sample access to some unknown discrete distribution p, over a domain
of size k, which is possibly unknown in some tasks. We wish to estimate the following properties:
• Support Coverage: If we take m samples from the distribution, what is the expected number of
unique elements we expect to see?
• Support Size: How many elements of the support have non-zero probability?
• Entropy: What is the Shannon entropy of the distribution?
For more formal statements of these problems, see Section 2.1. We require that our output is α-accurate,
satisfies (ε, 0)-differential privacy, and is correct with probability 1 − β. The goal is to give an algorithm
with minimal sample complexity n, while simultaneously being computationally efficient.
Theoretical Results. Our main results show that privacy can be achieved for all these problems at a
very low cost. For example, if one wishes to privately estimate entropy, this incurs an additional additive
cost in the sample complexity which is very close to linear in 1/αε. We draw attention to two features of
this bound. First, this is independent of k. All the problems we consider have complexity Θ(k/ log k), so
in the primary regime of study where k ≫ 1/αε, this small additive cost is dwarfed by the inherent sample
complexity of the non-private problem. Second, the bound is almost linear in 1/αε. We note that performing
even the most basic statistical task privately, estimating the bias of a coin, incurs this linear dependence.
Surprisingly, we show that much more sophisticated inference tasks can be privatized at almost no cost.
In particular, these properties imply that the additive cost of privacy is o(1) in the most studied regime
where the support size is large. In general, this is not true – for many other problems, including distribution
estimation and hypothesis testing, the additional cost of privacy depends significantly on the support size or
dimension [DHS15, CDK17, ASZ17, ADR17]. We also provide lower bounds, showing that our upper bounds
are almost tight. A more formal statement of our results appears in Section 3.
Experimental Results. We demonstrate the efficacy of our method with experimental evaluations. As
a baseline, we compare with the non-private algorithms of [OSW16] and [WY18]. Overall, we find that our
algorithms’ performance is nearly identical, showing that, in many cases, privacy comes (essentially) for free.
We begin with an evaluation on synthetic data. Then, inspired by [VV13, OSW16], we analyze a text corpus consisting of words from Hamlet, in order to estimate the number of unique words which occur. Finally, we investigate name frequencies in the US census data. This setting has been previously considered by [OSW16], but we emphasize that this is an application where private statistical analysis is critical. This is evidenced by the efforts of the US Census Bureau to incorporate differential privacy into the 2020 US census [DLS+ 17].
Techniques. Our approach works by choosing statistics for these tasks which possess bounded sensitivity,
which is well-known to imply privacy under the Laplace or Gaussian mechanism. We note that bounded
sensitivity of statistics is not always something that can be taken for granted. Indeed, for many fundamental
tasks, optimal algorithms for the non-private setting may be highly sensitive, thus necessitating crucial
modifications to obtain differential privacy [ADK15, CDK17]. Thus, careful choice and design of statistics
must be a priority when performing inference with privacy considerations.
To this end, we leverage recent results of [ADOS17], which studies estimators for non-private versions of
the problems we consider. The main technical work in their paper exploits bounded sensitivity to show sharp
cutoff-style concentration bounds for certain estimators, which operate using the principle of best-polynomial
approximation. They use these results to show that a single algorithm, the Profile Maximum Likelihood
(PML), can estimate all these properties simultaneously. On the other hand, we consider the sensitivity
of these estimators for purposes of privacy – the same property is utilized by both works for very different
purposes, a connection which may be of independent interest.
We note that bounded sensitivity of a statistic may be exploited for purposes other than privacy. For
instance, by McDiarmid’s inequality, any such statistic also enjoys very sharp concentration of measure,
implying that one can boost the success probability of the test at an additive cost which is logarithmic in
the inverse of the failure probability. One may naturally conjecture that, if a statistical task is based on a
primitive which concentrates in this sense, then it may also be privatized at a low cost. However, this is not
true – estimating a discrete distribution in ℓ1 distance is such a task, but the cost of privatization depends
significantly on the support size [DHS15].
One can observe that, algorithmically, our method is quite simple: compute the non-private statistic, and
add a relatively small amount of Laplace noise. The non-private statistics have recently been demonstrated
to be practical [OSW16, WY18], and the additional cost of the Laplace mechanism is minimal. This is
in contrast to several differentially private algorithms which invoke significant overhead in the quest for
privacy. Our algorithms attain almost-optimal rates (which are optimal up to constant factors for most
parameter regimes of interest), while simultaneously operating effectively in practice, as demonstrated in
our experimental results.
Related Work. Over the last decade, there has been a flurry of works on the problems we study in this paper by the computer science and information theory communities, including Shannon and Rényi entropy estimation [Pan03, VV17, JVHW17, AOST17, OS17, WY18], and support coverage and support size estimation [OSW16, WY18]. A recent paper studies the general problem of estimating functionals of discrete distributions from samples in terms of the smoothness of the functional [FS17]. These have culminated in a
nearly-complete understanding of the sample complexity of these properties, with optimal sample complexities (up to constant factors) for most parameter regimes.
Recently, there has been significant interest in performing statistical tasks under differential privacy
constraints. Perhaps most relevant to this work are [CDK17, ASZ17, ADR17], which study the sample
complexity of differentially privately performing classical distribution testing problems, including identity and
closeness testing. Other works investigating private hypothesis testing include [WLK15, GLRV16, KR17,
KSF17, Rog17, GR17], which focus less on characterizing the finite-sample guarantees of such tests, and
more on understanding their asymptotic properties and applications to computing p-values. There has
also been study on private distribution learning [DHS15, DJW17, KV18], in which we wish to estimate
parameters of the distribution, rather than just a particular property of interest. A number of other problems
have been studied with privacy requirements, including clustering [WWS15, BDL+ 17], principal component
analysis [CSS13, KT13, HP14], ordinary least squares [She17], and much more.
2 Preliminaries
We will start with some definitions.

Let $\Delta \stackrel{\text{def}}{=} \{(p(1), \ldots, p(k)) : p(i) \ge 0, \sum_{i=1}^{k} p(i) = 1, 1 \le k \le \infty\}$ be the set of discrete distributions over a countable support. Let $\Delta_k$ be the set of distributions in $\Delta$ with at most $k$ non-zero probability values. A property $f(p)$ is a mapping from $\Delta \to \mathbb{R}$. We now describe the classical distribution property estimation problem, and then state the problem under differential privacy.

Property Estimation Problem. Given $\alpha$, $\beta$, $f$, and independent samples $X_1^n$ from an unknown distribution $p$, design an estimator $\hat f : X_1^n \to \mathbb{R}$ such that with probability at least $1 - \beta$, $|\hat f(X_1^n) - f(p)| < \alpha$. The sample complexity of $\hat f$, defined as $C_{\hat f}(f, \alpha, \beta) \stackrel{\text{def}}{=} \min\{n : \Pr(|\hat f(X_1^n) - f(p)| > \alpha) < \beta\}$, is the smallest number of samples needed to estimate $f$ to accuracy $\alpha$ with error probability $\beta$. We study the problem for $\beta = 1/3$; by the median trick, we can boost the error probability to any $\beta$ with an additional multiplicative factor of $\log(1/\beta)$ samples. Let $C_{\hat f}(f, \alpha) \stackrel{\text{def}}{=} C_{\hat f}(f, \alpha, 1/3)$. The sample complexity of estimating a property $f(p)$ is the minimum sample complexity over all estimators: $C(f, \alpha) = \min_{\hat f} C_{\hat f}(f, \alpha)$.

An estimator $\hat f$ is $\varepsilon$-differentially private (DP) [DMNS06] if for any $X_1^n$ and $Y_1^n$ with $d_{\mathrm{ham}}(X_1^n, Y_1^n) \le 1$,
$$\frac{\Pr(\hat f(X_1^n) \in S)}{\Pr(\hat f(Y_1^n) \in S)} \le e^{\varepsilon} \quad \text{for all measurable } S.$$
Private Property Estimation. Given $\alpha$, $\varepsilon$, $\beta$, $f$, and independent samples $X_1^n$ from an unknown distribution $p$, design an $\varepsilon$-differentially private estimator $\hat f : X_1^n \to \mathbb{R}$ such that with probability at least $1 - \beta$, $|\hat f(X_1^n) - f(p)| < \alpha$. Similar to the non-private setting, the sample complexity of the $\varepsilon$-differentially private estimation problem is $C(f, \alpha, \varepsilon) = \min_{\hat f : \hat f \text{ is } \varepsilon\text{-DP}} C_{\hat f}(f, \alpha, 1/3)$, the smallest number of samples $n$ for which there exists such a $\pm\alpha$ estimator with error probability at most $1/3$.
In their original paper, [DMNS06] provide a scheme for differential privacy, known as the Laplace mechanism. This method adds Laplace noise to a non-private scheme in order to make it private. We first define the sensitivity of an estimator, and then state their result in our setting.

Definition 1. The sensitivity of an estimator $\hat f : [k]^n \to \mathbb{R}$ is $\Delta_{n,\hat f} \stackrel{\text{def}}{=} \max_{d_{\mathrm{ham}}(X_1^n, Y_1^n) \le 1} |\hat f(X_1^n) - \hat f(Y_1^n)|$. Let $D_{\hat f}(\alpha, \varepsilon) = \min\{n : \Delta_{n,\hat f} \le \alpha\varepsilon\}$.
Lemma 1.
$$C(f, \alpha, \varepsilon) = O\left(\min_{\hat f}\left\{ C_{\hat f}(f, \alpha/2) + D_{\hat f}\left(\frac{\alpha}{4}, \varepsilon\right) \right\}\right).$$
Proof. [DMNS06] showed that for a function with sensitivity $\Delta_{n,\hat f}$, adding Laplace noise $X \sim \mathrm{Lap}(\Delta_{n,\hat f}/\varepsilon)$ makes the output $\varepsilon$-differentially private. By the definition of $D_{\hat f}(\alpha/4, \varepsilon)$, the Laplace noise we add has parameter at most $\alpha/4$. Recall that the probability density function of $\mathrm{Lap}(b)$ is $\frac{1}{2b}e^{-|x|/b}$; hence we have $\Pr(|X| > \alpha/2) < \frac{1}{e^2}$. By the union bound, we get an additive error less than $\alpha = \frac{\alpha}{2} + \frac{\alpha}{2}$ with probability at most $1/3 + \frac{1}{e^2} < 0.5$. Hence, with the median trick, we can boost the error probability to $1/3$, at the cost of a constant factor in the number of samples.
To prove sample complexity lower bounds for differentially private estimators, we observe that the estimator can be used to test between two distributions with distinct property values; hence estimation is at least as hard as testing. For lower bounds on differentially private testing, [ASZ17] gives the following argument based on coupling:

Lemma 2. Suppose there is a coupling between distributions $p$ and $q$ over $\mathcal{X}^n$ such that $\mathbb{E}[d_{\mathrm{ham}}(X_1^n, Y_1^n)] \le D$. Then, any $\varepsilon$-differentially private algorithm that distinguishes between $p$ and $q$ with error probability at most $1/3$ must satisfy $D = \Omega\left(\frac{1}{\varepsilon}\right)$.
2.1 Problems of Interest
Support Size. The support size of a distribution $p$ is $S(p) = |\{x : p(x) > 0\}|$, the number of symbols with non-zero probability values. However, notice that estimating $S(p)$ from samples can be hard due to the presence of symbols with negligible, yet non-zero probabilities. To circumvent this issue, [RRSS09] proposed to study the problem when the smallest probability is bounded. Let $\Delta_{\ge \frac{1}{k}} \stackrel{\text{def}}{=} \{p \in \Delta : p(x) \in \{0\} \cup [1/k, 1]\}$ be the set of all distributions where all non-zero probabilities have value at least $1/k$. For $p \in \Delta_{\ge \frac{1}{k}}$, our goal is to estimate $S(p)$ up to $\pm\alpha k$ with the least number of samples from $p$.
Support Coverage. For a distribution $p$ and an integer $m$, let $S_m(p) = \sum_x \left(1 - (1 - p(x))^m\right)$ be the expected number of symbols that appear when we obtain $m$ independent samples from the distribution $p$. The objective is to find the least number of samples $n$ in order to estimate $S_m(p)$ to an additive $\pm\alpha m$.
Support coverage arises in many ecological and biological studies [CCG+ 12] to quantify the number of
new elements (gene mutations, species, words, etc) that can be expected to be seen in the future. Good and
Toulmin [GT56] proposed an estimator that for any constant α, requires m/2 samples to estimate Sm (p).
Entropy. The Shannon entropy of a distribution $p$ is $H(p) = \sum_x p(x)\log\frac{1}{p(x)}$. $H(p)$ is a central object
in information theory [CT06], and also arises in many fields such as machine learning [Now12], neuroscience [BWM97, NBdRvS04], and others. Estimating H(p) is hard with any finite number of samples due
to the possibility of infinite support. To circumvent this, a natural approach is to consider distributions in
∆k . The goal is to estimate the entropy of a distribution in ∆k to an additive ±α, where ∆k is all discrete
distributions over at most k symbols.
3 Statement of Results
Our theoretical results for estimating support coverage, support size, and entropy are given below. Algorithms for these problems and proofs of these statements are provided in Section 4. Our experimental results
are described and discussed in Section 5.
Theorem 1. For any $\varepsilon = \Omega(1/m)$, the sample complexity of support coverage is
$$C(S_m, \alpha, \varepsilon) = O\left(\frac{m\log(1/\alpha)}{\log m} + \frac{m\log(1/\alpha)}{\log(\varepsilon m)}\right).$$
Furthermore,
$$C(S_m, \alpha, \varepsilon) = \Omega\left(\frac{m\log(1/\alpha)}{\log m} + \frac{1}{\alpha\varepsilon}\right).$$
Theorem 2. For any $\varepsilon = \Omega(1/k)$, the sample complexity of support size estimation is
$$C(S, \alpha, \varepsilon) = O\left(\frac{k\log^2(1/\alpha)}{\log k} + \frac{k\log^2(1/\alpha)}{\log(\varepsilon k)}\right).$$
Furthermore,
$$C(S, \alpha, \varepsilon) = \Omega\left(\frac{k\log^2(1/\alpha)}{\log k} + \frac{1}{\alpha\varepsilon}\right).$$
Theorem 3. Let $\lambda > 0$ be any small fixed constant. For instance, $\lambda$ can be chosen to be any constant between 0.01 and 1. We have the following upper bounds on the sample complexity of entropy estimation:
$$C(H, \alpha, \varepsilon) = O\left(\frac{k}{\alpha} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \frac{1}{\alpha\varepsilon}\log\frac{1}{\alpha\varepsilon}\right)$$
and
$$C(H, \alpha, \varepsilon) = O\left(\frac{k}{\lambda^2\,\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \left(\frac{1}{\alpha\varepsilon}\right)^{1+\lambda}\right).$$
Furthermore,
$$C(H, \alpha, \varepsilon) = \Omega\left(\frac{k}{\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \frac{\log k}{\alpha\varepsilon}\right).$$
We provide some discussion of our results. At a high level, we wish to emphasize the following two points:
1. Our upper bounds show that the cost of privacy in these settings is often negligible compared to the
sample complexity of the non-private statistical task, especially when we are dealing with distributions
over a large support. Furthermore, our upper bounds are almost tight in all parameters.
2. The algorithmic complexity introduced by the requirement of privacy is minimal, consisting only of
a single step which noises the output of an estimator. In other words, our methods are realizable in
practice, and we demonstrate the effectiveness on several synthetic and real-data examples.
First, we examine our results on support size and support coverage estimation. We note that we focus on the regime where ε is not exceptionally small, as otherwise the privacy requirement becomes somewhat unusual. For instance, non-privately, if we have m samples for the problem of support coverage, then the empirical plug-in estimator is the best we can do. However, if ε = O(1/m), then group privacy [DR14] implies that the algorithm’s output distribution on any dataset of m samples must be very similar – however, these samples may have an arbitrary value of support coverage in [m], which precludes hopes for a highly accurate estimator. To avoid degeneracies of this nature, we restrict our attention to ε = Ω(1/m). In this regime, if ε = Ω(m^γ/m) for any constant γ > 0, then our upper bound is within a constant factor of the optimal sample complexity without privacy constraints. In other words, for most meaningful values of ε, privacy comes for free.
Next, we turn our attention to entropy estimation. We note that the second upper bound in Theorem 3 has a parameter λ that indicates a tradeoff between the sample complexity incurred in the first and third terms. This parameter determines the degree of a polynomial to be used for entropy estimation. As the degree becomes smaller (corresponding to a larger λ), the accuracy of the polynomial estimator decreases; however, at the same time, low-degree polynomials have a small sensitivity, allowing us to privatize the outcome.
In terms of our theoretical results, one can think of λ = 0.01. With this parameter setting, it can be observed that our upper bounds are almost tight. For example, one can see that the upper and lower bounds match up to either logarithmic factors (when looking at the first upper bound), or a very small polynomial factor in 1/αε (when looking at the second upper bound). For our experimental results, we experimentally determined an effective value for the parameter λ on a single synthetic instance. We then show that this choice of parameter generalizes, giving highly accurate private estimation in other instances, on both synthetic and real-world data.
4 Algorithms and Analysis
In this section, we prove our results for support coverage in Section 4.1, support size in Section 4.2, and
entropy in Section 4.3. In each section, we first describe and analyze our algorithms for the relevant problem.
We then go on to describe and analyze a lower bound construction, showing that our upper bounds are almost
tight.
All our algorithms fall into the following simple framework:
1. Compute a non-private estimate of the property;
2. Privatize this estimate by adding Laplace noise, where the parameter is determined through analysis of the estimator and potentially computation of the estimator’s sensitivity (see the sketch below).
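As a concrete illustration, here is a minimal sketch of this framework in Python. This is our own illustration, not the INSPECTRE implementation; names such as `privatize` are ours.

```python
import numpy as np

def privatize(estimate, sensitivity, eps, rng=None):
    """Laplace mechanism: if `sensitivity` upper-bounds how much the
    non-private estimate can change between neighboring datasets
    (differing in one sample), then releasing
    estimate + Lap(sensitivity / eps) is eps-differentially private."""
    if rng is None:
        rng = np.random.default_rng()
    return estimate + rng.laplace(loc=0.0, scale=sensitivity / eps)

# e.g., privatizing an entropy estimate whose sensitivity is 2*log(n)/n:
# h_private = privatize(h_hat, 2 * np.log(n) / n, eps=1.0)
```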
4.1 Support Coverage Estimation
In this section, we prove Theorem 1, about support coverage estimation:
Theorem 1. For any $\varepsilon = \Omega(1/m)$, the sample complexity of support coverage is
$$C(S_m, \alpha, \varepsilon) = O\left(\frac{m\log(1/\alpha)}{\log m} + \frac{m\log(1/\alpha)}{\log(\varepsilon m)}\right).$$
Furthermore,
$$C(S_m, \alpha, \varepsilon) = \Omega\left(\frac{m\log(1/\alpha)}{\log m} + \frac{1}{\alpha\varepsilon}\right).$$
Our upper bound is described and analyzed in Section 4.1.1, while our lower bound appears in Section 4.1.2.
4.1.1 Upper Bound for Support Coverage Estimation
Let ϕi be the number of symbols that appear i times in X1n . We will use the following non-private support
coverage estimator from [OSW16]:
$$\hat S_m(X_1^n) = \sum_{i=1}^{n} \varphi_i \left(1 + (-t)^i \cdot \Pr(Z \ge i)\right),$$
where Z is a Poisson random variable with mean r (which is a parameter to be instantiated later), and
t = (m − n)/n.
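A minimal sketch of this estimator and its privatized variant follows; this is our own illustration (not the authors' code), using the sensitivity bound derived below in equation (1):

```python
import numpy as np
from collections import Counter
from scipy.stats import poisson

def support_coverage_estimate(samples, m, r):
    """Non-private S_m-hat: sum over the fingerprint phi_i of
    phi_i * (1 + (-t)^i * Pr(Z >= i)) with Z ~ Poisson(r)."""
    n = len(samples)
    t = (m - n) / n
    phi = Counter(Counter(samples).values())  # phi[i] = #symbols seen i times
    return sum(c * (1 + (-t) ** i * poisson.sf(i - 1, r))
               for i, c in phi.items())

def private_support_coverage(samples, m, r, eps, rng=None):
    """Privatized version: by (1), the sensitivity of S_m-hat is at most
    2*(1 + e^{r(t-1)}), so Laplace noise of that scale over eps suffices."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(samples)
    t = (m - n) / n
    sensitivity = 2 * (1 + np.exp(r * (t - 1)))
    return support_coverage_estimate(samples, m, r) + rng.laplace(0, sensitivity / eps)
```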
Our private estimator of support coverage is derived by adding Laplace noise to this non-private estimator with the appropriate noise parameter; its performance is thus analyzed by bounding the sensitivity and the bias of the non-private estimator according to Lemma 1.
The sensitivity and bias of this estimator are bounded in the following lemmas.
Lemma 3. Suppose $m > 2n$; then the maximum coefficient of $\varphi_i$ in $\hat S_m(p)$ is at most $1 + e^{r(t-1)}$.
Proof. By the definition of $Z$, we know $\Pr(Z \ge i) = \sum_{k=i}^{\infty} e^{-r}\frac{r^k}{k!}$, hence we have:
$$\left|1 + (-t)^i \cdot \Pr(Z \ge i)\right| \le 1 + t^i \sum_{k=i}^{\infty} e^{-r}\frac{r^k}{k!} \le 1 + e^{-r}\sum_{k=i}^{\infty} \frac{(rt)^k}{k!} \le 1 + e^{-r}\sum_{k=0}^{\infty} \frac{(rt)^k}{k!} = 1 + e^{r(t-1)}.$$
The bias of the estimator is bounded in Lemma 4 of [ADOS17]:

Lemma 4. Suppose $m > 2n$; then
$$\left|\mathbb{E}[\hat S_m(X_1^n)] - S_m(p)\right| \le 2 + 2e^{r(t-1)} + \min(m, S(p)) \cdot e^{-r}.$$
Using these results, letting $r = \log(1/\alpha)$, [OSW16] showed that there is a constant $C$ such that with $n = C\frac{m}{\log m}\log(1/\alpha)$ samples, with probability at least 0.9,
$$\left|\frac{\hat S_m(X_1^n)}{m} - \frac{S_m(p)}{m}\right| \le \alpha.$$
Our upper bound in Theorem 1 is derived by the following analysis of the sensitivity of $\frac{\hat S_m(X_1^n)}{m}$. If we change one sample in $X_1^n$, at most two of the $\varphi_j$'s change. Hence by Lemma 3, the sensitivity of the estimator satisfies
$$\Delta\left(\frac{\hat S_m(X_1^n)}{m}\right) \le \frac{2}{m}\left(1 + e^{r(t-1)}\right). \tag{1}$$
By Lemma 1, there is a private algorithm for support coverage estimation as long as
$$\Delta\left(\frac{\hat S_m(X_1^n)}{m}\right) \le \alpha\varepsilon,$$
which by (1) holds if
$$2\left(1 + \exp(r(t-1))\right) \le \alpha\varepsilon m.$$
Let $r = \log(3/\alpha)$, and note that $t - 1 = \frac{m}{n} - 2$. Suppose $\alpha\varepsilon m > 2$; then the condition above reduces to
$$\log\frac{3}{\alpha}\cdot\left(\frac{m}{n} - 2\right) \le \log\left(\frac{1}{2}\alpha\varepsilon m - 1\right).$$
This is equivalent to
$$n \ge \frac{m\log(3/\alpha)}{\log\left(\frac{1}{2}\alpha\varepsilon m - 1\right) + 2\log(3/\alpha)} = \frac{m\log(3/\alpha)}{\log\left(\frac{3}{2}\varepsilon m - \frac{3}{\alpha}\right) + \log(3/\alpha)},$$
which reduces to the requirement that
$$n = O\left(\frac{m\log(1/\alpha)}{\log(\varepsilon m)}\right). \tag{2}$$
4.1.2 Lower Bound for Support Coverage Estimation
We now prove the lower bound described in Theorem 1. Note that the first term in the lower bound is the
sample complexity of non-private support coverage estimation, shown in [OSW16]. Therefore, we turn our
attention to proving the latter term in the sample complexity.
Consider the following two distributions. $u_1$ is uniform over $[m(1+\alpha)]$. $u_2$ is distributed over $m+1$ elements $[m] \cup \{\triangle\}$, where $u_2[i] = \frac{1}{m(1+\alpha)}$ for all $i \in [m]$ and $u_2[\triangle] = \frac{\alpha}{1+\alpha}$; here $\triangle \notin [m(1+\alpha)]$. Then,
$$S_m(u_1) = m(1+\alpha)\cdot\left(1 - \left(1 - \frac{1}{m(1+\alpha)}\right)^m\right),$$
and
$$S_m(u_2) = m\cdot\left(1 - \left(1 - \frac{1}{m(1+\alpha)}\right)^m\right) + \left(1 - \left(1 - \frac{\alpha}{1+\alpha}\right)^m\right),$$
hence
$$S_m(u_1) - S_m(u_2) = m\alpha\cdot\left(1 - \left(1 - \frac{1}{m(1+\alpha)}\right)^m\right) - \left(1 - \left(1 - \frac{\alpha}{1+\alpha}\right)^m\right) = \Omega(\alpha m).$$
Hence we know their support coverages differ by $\Omega(\alpha m)$. Moreover, their total variation distance is $\frac{\alpha}{1+\alpha}$.
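A quick numeric sanity check of this gap (our own sketch, not from the paper):

```python
import numpy as np

def S_m(probs, m):
    # Expected number of distinct symbols in m draws: sum_x 1 - (1 - p(x))^m.
    probs = np.asarray(probs)
    return np.sum(1 - (1 - probs) ** m)

m, alpha = 10000, 0.1
u1 = np.full(int(m * (1 + alpha)), 1 / (m * (1 + alpha)))
u2 = np.append(np.full(m, 1 / (m * (1 + alpha))), alpha / (1 + alpha))
print(S_m(u1, m) - S_m(u2, m))  # on the order of alpha * m, as claimed
```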
The following lemma is folklore, based on the coupling interpretation of total variation distance, and the
fact that total variation distance is subadditive for product measures.
Lemma 5. For any two distributions $p$ and $q$, there is a coupling between $n$ i.i.d. samples from the two distributions with an expected Hamming distance of $d_{TV}(p, q)\cdot n$.
Using Lemma 5 and $d_{TV}(u_1, u_2) = \frac{\alpha}{1+\alpha}$, we have

Lemma 6. Suppose $u_1$ and $u_2$ are as defined before; there is a coupling between $u_1^n$ and $u_2^n$ with expected Hamming distance equal to $\frac{\alpha}{1+\alpha}n$.
Moreover, given $n$ samples, we must be able to privately distinguish between $u_1$ and $u_2$ given an $\alpha$-accurate estimator of support coverage with privacy considerations. Thus, according to Lemmas 2 and 6, we have:
$$\frac{\alpha}{1+\alpha}\,n \ge \frac{1}{\varepsilon} \;\Rightarrow\; n = \Omega\left(\frac{1}{\varepsilon\alpha}\right).$$
4.2 Support Size Estimation
In this section, we prove our main theorem about support size estimation, Theorem 2:
Theorem 2. For any $\varepsilon = \Omega(1/k)$, the sample complexity of support size estimation is
$$C(S, \alpha, \varepsilon) = O\left(\frac{k\log^2(1/\alpha)}{\log k} + \frac{k\log^2(1/\alpha)}{\log(\varepsilon k)}\right).$$
Furthermore,
$$C(S, \alpha, \varepsilon) = \Omega\left(\frac{k\log^2(1/\alpha)}{\log k} + \frac{1}{\alpha\varepsilon}\right).$$
Our upper bound is described and analyzed in Section 4.2.1, while our lower bound appears in Section 4.2.2.
4.2.1 Upper Bound for Support Size Estimation
In [OSW16], it is shown that the support coverage estimator can be used to obtain optimal results for
estimating the support size of a distribution. In this fashion, taking m = k log(3/α), we may use an
estimate of the support coverage Sm (p) as an estimator of S(p). In particular, their result is based on the
following observation.
Lemma 7. Suppose $m \ge k\log(3/\alpha)$; then for any $p \in \Delta_{\ge \frac{1}{k}}$,
$$|S_m(p) - S(p)| \le \frac{\alpha k}{3}.$$
Proof. From the definition of $S_m(p)$, we have $S_m(p) \le S(p)$. For the other side,
$$S(p) - S_m(p) = \sum_x (1 - p(x))^m \le \sum_x e^{-m\,p(x)} \le k\cdot e^{-\log(3/\alpha)} = \frac{k\alpha}{3}.$$
Therefore, estimating $S_m(p)$ for $m = k\log(3/\alpha)$ up to $\pm\alpha k/3$ also estimates $S(p)$ up to $\pm\alpha k$. Thus, the goal is to find the smallest value of $n$ that solves this support coverage problem.
Suppose $r = \log(3/\alpha)$ and $m = k\log(3/\alpha) = k\cdot r$ in the support coverage problem. Then, we have
$$t = \frac{m}{n} - 1 = \frac{k\log(3/\alpha)}{n} - 1. \tag{3}$$
Then, by Lemma 4 in the previous section, we have
$$\begin{aligned}
\left|\mathbb{E}[\hat S_m(X_1^n)] - S(p)\right| &\le \left|\mathbb{E}[\hat S_m(X_1^n)] - S_m(p)\right| + |S_m(p) - S(p)| \\
&\le 2 + 2e^{r(t-1)} + \min\{m, k\}\cdot e^{-r} + \frac{k\alpha}{3} \\
&\le 2 + 2e^{r(t-1)} + k\cdot e^{-\log(3/\alpha)} + \frac{k\alpha}{3} \\
&\le 2 + 2e^{r(t-1)} + \frac{2k\alpha}{3}.
\end{aligned}$$
We will find conditions on $n$ such that the middle term above is at most $k\alpha$. Toward this end, note that $2e^{r(t-1)} \le \alpha k$ holds if and only if $r(t-1) \le \log\frac{\alpha k}{2}$. Plugging in (3), this holds when
$$\log(3/\alpha)\cdot\left(\frac{k\log(3/\alpha)}{n} - 2\right) \le \log\frac{\alpha k}{2},$$
which is equivalent to
$$n \ge \frac{k\log^2(3/\alpha)}{\log\frac{\alpha k}{2} + 2\log\frac{3}{\alpha}} = O\left(\frac{k\log^2(1/\alpha)}{\log k}\right),$$
where we have assumed without loss of generality that $\alpha > \frac{1}{k}$.
The computations for sensitivity are very similar. From Lemma 1, we need to find the value of $n$ such that
$$2 + 2e^{r(t-1)} \le \alpha\varepsilon k,$$
where we assume that $n \le \frac{1}{2}k\log(3/\alpha)$; otherwise we can simply add noise to the true number of observed distinct elements. By computations similar to the expectation case, this reduces to
$$n \ge \frac{k\log^2(3/\alpha)}{\log\frac{\alpha\varepsilon k}{2} + \log\frac{3}{\alpha}}.$$
Therefore, this gives us a sample complexity of
$$n = O\left(\frac{k\log^2(1/\alpha)}{\log(\varepsilon k)}\right) \tag{4}$$
for the sensitivity result to hold.
We note that the bound above blows up when $\varepsilon \le \frac{1}{k}$. However, our lower bound implies that we need at least $\Omega(1/\varepsilon) = \Omega(k)$ samples in this case, which is not in the sub-linear regime that we are interested in. We therefore consider only the regime where the privacy parameter $\varepsilon$ is at least $1/k$.
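Concretely, the reduction can reuse the support coverage sketch from Section 4.1.1 (our illustration; `private_support_coverage` is the hypothetical helper defined there):

```python
import numpy as np

def private_support_size(samples, k, alpha, eps, rng=None):
    """Estimate S(p) for p with smallest non-zero mass >= 1/k by privately
    estimating S_m(p) at m = k*log(3/alpha), per Lemma 7."""
    r = np.log(3 / alpha)
    m = int(k * r)
    return private_support_coverage(samples, m, r, eps, rng)
```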
4.2.2 Lower Bound for Support Size Estimation
In this section, we prove a lower bound for support size estimation, as described in Theorem 2. The techniques
are similar to those for support coverage in Section 4.1.2. The first term of the complexity is the lower bound
for the non-private setting. This follows by combining the lower bound of [OSW16] for support coverage with
the equivalence between estimation of support size and coverage as implied by Lemma 7. We focus on the
second term in the sequel.
Consider the following two distributions: u1 is a uniform distribution over [k] and u2 is a uniform
distribution over [(1 − α)k]. Then the support sizes of these two distributions differ by αk, and dTV (u1 , u2 ) = α.
Hence by Lemma 5, we know the following:
Lemma 8. Suppose $u_1 \sim U[k]$ and $u_2 \sim U[(1-\alpha)k]$; there is a coupling between $u_1^n$ and $u_2^n$ with expected Hamming distance equal to $\alpha n$.
Moreover, given $n$ samples, we must be able to privately distinguish between $u_1$ and $u_2$ given an $\alpha$-accurate estimator of support size with privacy considerations. Thus, according to Lemma 2 and Lemma 8, we have:
$$\alpha n \ge \frac{1}{\varepsilon} \;\Rightarrow\; n = \Omega\left(\frac{1}{\varepsilon\alpha}\right).$$
4.3 Entropy Estimation
In this section, we prove our main theorem about entropy estimation, Theorem 3:
Theorem 3. Let $\lambda > 0$ be any small fixed constant. For instance, $\lambda$ can be chosen to be any constant between 0.01 and 1. We have the following upper bounds on the sample complexity of entropy estimation:
$$C(H, \alpha, \varepsilon) = O\left(\frac{k}{\alpha} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \frac{1}{\alpha\varepsilon}\log\frac{1}{\alpha\varepsilon}\right)$$
and
$$C(H, \alpha, \varepsilon) = O\left(\frac{k}{\lambda^2\,\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \left(\frac{1}{\alpha\varepsilon}\right)^{1+\lambda}\right).$$
Furthermore,
$$C(H, \alpha, \varepsilon) = \Omega\left(\frac{k}{\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2} + \frac{\log k}{\alpha\varepsilon}\right).$$
We describe and analyze two upper bounds. The first is based on the empirical entropy estimator, and is described and analyzed in Section 4.3.1. The second is based on the method of best-polynomial approximation,
and appears in Section 4.3.2. Finally, our lower bound is in Section 4.3.3.
4.3.1 Upper Bound for Entropy Estimation: The Empirical Estimator
Our first private entropy estimator is derived by adding Laplace noise to the empirical estimator. The parameter of the Laplace distribution is $\frac{\Delta(H(\hat p_n))}{\varepsilon}$, where $\Delta(H(\hat p_n))$ denotes the sensitivity of the empirical estimator. By analyzing its sensitivity and bias, we prove an upper bound on the sample complexity for private entropy estimation and obtain the first upper bound in Theorem 3.
Let p̂n be the empirical distribution, and let H(p̂n ) be the entropy of the empirical distribution. The
theorem is based on the following three facts:
$$\Delta(H(\hat p_n)) = O\left(\frac{\log n}{n}\right), \tag{5}$$
$$|H(p) - \mathbb{E}[H(\hat p_n)]| = O\left(\frac{k}{n}\right), \tag{6}$$
$$\mathrm{Var}(H(\hat p_n)) = O\left(\frac{\log^2(\min\{k, n\})}{n}\right). \tag{7}$$
With these three facts in hand, the sample complexity of the empirical estimator can be bounded as follows. By Lemma 1, we need $\Delta(H(\hat p_n)) \le \alpha\varepsilon$, which gives $n = O\left(\frac{1}{\alpha\varepsilon}\log\frac{1}{\alpha\varepsilon}\right)$. We also need $|H(p) - \mathbb{E}[H(\hat p_n)]| = O(\alpha)$ and $\mathrm{Var}(H(\hat p_n)) = O(\alpha^2)$, which gives $n = O\left(\frac{k}{\alpha} + \frac{\log^2(\min\{k, n\})}{\alpha^2}\right)$.
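A minimal sketch of the resulting estimator (our own illustration), with noise calibrated to the sensitivity bound (5):

```python
import numpy as np
from collections import Counter

def private_plugin_entropy(samples, eps, rng=None):
    """Empirical (plug-in) entropy plus Laplace noise with scale
    (2*log(n)/n)/eps, matching the sensitivity bound (5)."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(samples)
    p_hat = np.array(list(Counter(samples).values())) / n
    h_hat = -np.sum(p_hat * np.log(p_hat))
    return h_hat + rng.laplace(0, (2 * np.log(n) / n) / eps)
```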
Proof of (5). The largest change in any $N_x$ when we change one symbol is one. Moreover, at most two $N_x$ change. Therefore,
$$\begin{aligned}
\Delta(H(\hat p_n)) &\le 2 \cdot \max_{j=1,\ldots,n-1}\left|\frac{j}{n}\log\frac{n}{j} - \frac{j+1}{n}\log\frac{n}{j+1}\right| \tag{8}\\
&= 2 \cdot \max_{j=1,\ldots,n-1}\left|\frac{1}{n}\log\frac{n}{j+1} - \frac{j}{n}\log\frac{j+1}{j}\right| \\
&\le 2 \cdot \max_{j=1,\ldots,n-1}\max\left\{\frac{j}{n}\log\frac{j+1}{j},\ \frac{1}{n}\log\frac{n}{j+1}\right\} \tag{9}\\
&\le 2 \cdot \max\left\{\frac{1}{n},\ \frac{\log n}{n}\right\} \tag{10}\\
&= 2 \cdot \frac{\log n}{n}.
\end{aligned}$$
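The maximum in this chain can be checked numerically (our sketch):

```python
import numpy as np

n = 1000
j = np.arange(1, n)  # j = 1, ..., n-1
gap = np.abs(j / n * np.log(n / j) - (j + 1) / n * np.log(n / (j + 1)))
print(2 * gap.max() <= 2 * np.log(n) / n)  # True: consistent with (5)
```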
Proof of (6). By the concavity of the entropy function, we know that
$$\mathbb{E}[H(\hat p_n)] \le H(p).$$
Therefore,
$$\begin{aligned}
|H(p) - \mathbb{E}[H(\hat p_n)]| &= H(p) - \mathbb{E}[H(\hat p_n)] \\
&= \mathbb{E}\left[\sum_x \left(\hat p_n(x)\log \hat p_n(x) - p(x)\log p(x)\right)\right] \\
&= \mathbb{E}\left[\sum_x \hat p_n(x)\log\frac{\hat p_n(x)}{p(x)}\right] + \mathbb{E}\left[\sum_x (\hat p_n(x) - p(x))\log p(x)\right] \tag{11}\\
&= \mathbb{E}[D(\hat p_n \,\|\, p)] \tag{12}\\
&\le \mathbb{E}[d_{\chi^2}(\hat p_n \,\|\, p)] \tag{13}\\
&= \mathbb{E}\left[\sum_x \frac{(\hat p_n(x) - p(x))^2}{p(x)}\right] \\
&\le \sum_x \frac{p(x)/n}{p(x)} = \frac{k}{n}, \tag{14}
\end{aligned}$$
where (12) follows since $\mathbb{E}[\hat p_n(x)] = p(x)$, so the second term in (11) vanishes.
Proof of (7). The variance bound of $\frac{\log^2 k}{n}$ is given precisely in Lemma 15 of [JVHW17]. To obtain the other half of the bound, we apply the bounded differences inequality in the form stated in Corollary 3.2 of [BLM13].
Lemma 9. Let $f : \Omega^n \to \mathbb{R}$ be a function. Suppose further that
$$\max_{z_1,\ldots,z_n,z_i'} \left|f(z_1,\ldots,z_n) - f(z_1,\ldots,z_{i-1},z_i',z_{i+1},\ldots,z_n)\right| \le c_i.$$
Then for independent variables $Z_1,\ldots,Z_n$,
$$\mathrm{Var}(f(Z_1,\ldots,Z_n)) \le \frac{1}{4}\sum_{i=1}^{n} c_i^2.$$
Therefore, using Lemma 9 and Equation (5),
$$\mathrm{Var}(H(\hat p_n)) \le n \cdot \frac{4\log^2 n}{n^2} = \frac{4\log^2 n}{n}.$$

4.3.2 Upper Bound for Entropy Estimation: Best-Polynomial Approximation
We prove an upper bound on the sample complexity for private entropy estimation if one adds Laplace noise to the best-polynomial estimator. This will give us the second upper bound in Theorem 3.
In the non-private setting, the optimal sample complexity of estimating $H(p)$ over $\Delta_k$ is given by Theorem 1 of [WY16]:
$$\Theta\left(\frac{k}{\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2}\right).$$
However, this estimator can have a large sensitivity. [ADOS17] designed an estimator that has the same sample complexity but a smaller sensitivity. We restate Lemma 6 of [ADOS17] here:

Lemma 10. Let $\lambda > 0$ be a fixed small constant, which may be taken to be any value between 0.01 and 1. Then there is an entropy estimator with sample complexity
$$\Theta\left(\frac{1}{\lambda^2}\cdot\frac{k}{\alpha\log k} + \frac{\log^2(\min\{k, n\})}{\alpha^2}\right),$$
and sensitivity $n^{\lambda}/n$.
We can now invoke Lemma 1 on the estimator in this lemma to obtain the upper bound on private
entropy estimation.
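Assuming access to such an estimator (the hypothetical `poly_entropy` below stands in for the best-polynomial estimator of [ADOS17]; it is not a real library function), the privatization step is again a single line of Laplace noise:

```python
import numpy as np

def private_poly_entropy(samples, lam, eps, rng=None):
    """Add Laplace noise scaled to the n^lam / n sensitivity of Lemma 10.
    `poly_entropy` is a stand-in for the [ADOS17] estimator."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(samples)
    return poly_entropy(samples, lam) + rng.laplace(0, (n ** lam / n) / eps)
```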
4.3.3 Lower Bound for Entropy Estimation
We now prove the lower bound for entropy estimation. Note that any lower bound on privately testing two distributions $p$ and $q$ such that $|H(p) - H(q)| = \Theta(\alpha)$ is a lower bound on estimating entropy.
We analyze the following construction from Proposition 2 of [WY16]. The two distributions $p$ and $q$ over $[k]$ are defined as:
$$p(1) = \frac{2}{3},\qquad p(i) = \frac{1-p(1)}{k-1}, \text{ for } i = 2,\ldots,k, \tag{15}$$
$$q(1) = \frac{2-\eta}{3},\qquad q(i) = \frac{1-q(1)}{k-1}, \text{ for } i = 2,\ldots,k. \tag{16}$$
Then, by the grouping property of entropy,
$$H(p) = h(2/3) + \frac{1}{3}\cdot\log(k-1), \quad\text{and}\quad H(q) = h((2-\eta)/3) + \frac{1+\eta}{3}\cdot\log(k-1),$$
which gives
$$H(q) - H(p) = \Omega(\eta\log k).$$
For $\eta = \alpha/\log k$, the entropy difference becomes $\Theta(\alpha)$.
The total variation distance between $p$ and $q$ is $\eta/3$. By Lemma 5, there is a coupling over $X_1^n$ and $Y_1^n$ generated from $p$ and $q$ with expected Hamming distance at most $d_{TV}(p, q)\cdot n$. Together with Lemma 2, this gives a lower bound of $\Omega(\log k/\alpha\varepsilon)$ on the sample complexity.
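A quick numeric check of this construction (our sketch):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

k, alpha = 10**6, 0.1
eta = alpha / np.log(k)
p = np.append(2 / 3, np.full(k - 1, (1 / 3) / (k - 1)))
q = np.append((2 - eta) / 3, np.full(k - 1, ((1 + eta) / 3) / (k - 1)))
print(entropy(q) - entropy(p), alpha)  # the gap is Theta(alpha)
```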
5 Experiments
We evaluated our methods for entropy estimation and support coverage on both synthetic and real data.
Overall, we found that privacy is quite cheap: private estimators achieve accuracy which is comparable or
near-indistinguishable to non-private estimators in many settings. Our results on entropy estimation and
support coverage appear in Sections 5.1 and 5.2, respectively. Code of our implementation is available at
https://github.com/HuanyuZhang/INSPECTRE.
5.1 Entropy
We compare the performance of our entropy estimator with a number of alternatives, both private and
non-private. Non-private algorithms considered include the plug-in estimator (plug-in), the Miller-Madow estimator (MM) [Mil55], and the sample-optimal polynomial approximation estimator (poly) of [WY16]. We analyze the privatized versions of plug-in and poly in Sections 4.3.1 and 4.3.2, respectively. The implementation
of the latter is based on code from the authors of [WY16]1 . We compare performance on different distributions including uniform, a distribution with two steps, Zipf(1/2), a distribution with Dirichlet-1 prior, and
a distribution with Dirichlet-1/2 prior, and over varying support sizes.
While plug-in and MM are parameter-free, poly (and its private counterpart) has to choose the degree L of the polynomial to use, which manifests in the parameter λ in the statement of Theorem 3. [WY16] suggest the value of L = 1.6 log k in their experiments. However, since we add further noise, we choose a single L as follows: (i) run privatized poly for different L values and distributions for k = 2000, ε = 1; (ii) choose the value of L that performs well across different distributions (see Figure 1). We choose L = 1.2 · log k from this, and use it for all other experiments. To evaluate the sensitivity of poly, we computed the estimator's value at all possible input values and computed the sensitivity (namely, $\Delta = \max_{d_{\mathrm{ham}}(X_1^n, Y_1^n)\le 1} |\mathrm{poly}(X_1^n) - \mathrm{poly}(Y_1^n)|$), and added noise distributed as $\mathrm{Lap}(0, \Delta/\varepsilon)$.
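A brute-force version of this sensitivity computation, feasible only for small n and k (our sketch; `estimator` maps a tuple of samples to a real number):

```python
import itertools
import numpy as np
from collections import Counter

def brute_force_sensitivity(estimator, n, k):
    """Max |estimator(X) - estimator(Y)| over datasets in [k]^n that
    differ in one coordinate. Exponential in n: sanity checks only."""
    best = 0.0
    for x in itertools.product(range(k), repeat=n):
        fx = estimator(x)
        for i in range(n):
            for v in range(k):
                y = x[:i] + (v,) + x[i + 1:]
                best = max(best, abs(fx - estimator(y)))
    return best

# e.g., sensitivity of the plug-in entropy on datasets in [3]^4:
plugin = lambda s: -sum(c / 4 * np.log(c / 4) for c in Counter(s).values())
print(brute_force_sensitivity(plugin, 4, 3))
```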
[Figure 1 here: five panels (Uniform, Two steps, Zipf 1/2, Dirichlet-1 prior, Dirichlet-1/2 prior) plotting RMSE vs. number of samples for L = 0.3 log(k) through L = 1.8 log(k).]
Figure 1: RMSE comparison for private Polynomial Approximation Estimator with various values for degree L, k = 2000, ε = 1.
The RMSE of various estimators for k = 1000 and ε = 1 on various distributions is illustrated in Figure 2. The RMSE is averaged over 100 iterations in the plots.
We observe that the performance of our private-poly is near-indistinguishable from the non-private
poly, particularly as the number of samples increases. It also performs significantly better than all other
alternatives, including the non-private Miller-Madow and the plug-in estimator. The cost of privacy is
minimal for several other settings of k and ε, for which results appear in Section A.
¹See https://github.com/Albuso0/entropy for their code for entropy estimation.
[Figure 2 here: five panels (Uniform, Two steps, Zipf 1/2, Dirichlet-1 prior, Dirichlet-1/2 prior) plotting RMSE vs. number of samples for Plug-in, Miller, Poly, Poly-Laplace, and Plug-in-Laplace.]
Figure 2: Comparison of various estimators for the entropy, k = 1000, ε = 1.
5.2 Support Coverage
We investigate the cost of privacy for the problem of support coverage. We provide a comparison between the
Smoothed Good-Toulmin estimator (SGT) of [OSW16] and our algorithm, which is a privatized version of
their statistic (see Section 4.1.1). Our implementation is based on code provided by the authors of [OSW16].
As shown in our theoretical results, the sensitivity of SGT is at most $2(1 + e^{r(t-1)})$, necessitating the addition of Laplace noise with parameter $2(1 + e^{r(t-1)})/\varepsilon$. Note that while the theory suggests we select the parameter $r = \log(1/\alpha)$, $\alpha$ is unknown. We instead set $r = \frac{1}{2t}\log_e\frac{n(t+1)^2}{t-1}$, as previously done in [OSW16].
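The corresponding noise calibration is then (our sketch; requires m > 2n so that t > 1):

```python
import numpy as np

def sgt_laplace_scale(n, m, eps):
    """Laplace scale 2*(1 + e^{r(t-1)})/eps for privatizing SGT, with
    r = (1/(2t)) * log(n*(t+1)^2 / (t-1)) as in [OSW16]; needs m > 2n."""
    t = (m - n) / n
    r = np.log(n * (t + 1) ** 2 / (t - 1)) / (2 * t)
    return 2 * (1 + np.exp(r * (t - 1))) / eps
```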
5.2.1 Evaluation on Synthetic Data
In our synthetic experiments, we consider different distributions over different support sizes k. We generate
n = k/2 samples, and then estimate the support coverage at m = n · t. For large t, estimation is harder.
Some results of our evaluation on synthetic data are displayed in Figure 3. We compare the performance of SGT and privatized versions of SGT with parameters ε = 1, 2, and 10. For this instance, we fixed the domain size
k = 20000. We ran the methods described above with n = k/2 samples, and estimated the support coverage
at m = nt, for t ranging from 1 to 10. The performance of the estimators is measured in terms of RMSE
over 1000 iterations.
[Figure 3 here: five panels (Uniform, Two steps, Zipf 1/2, Dirichlet-1 prior, Dirichlet-1/2 prior) plotting RMSE vs. t for the non-private SGT and its private versions with ε = 10, 2, and 1.]
Figure 3: Comparison of the private estimators with the non-private SGT when k = 20000.
We observe that, in this setting, the cost of privacy is relatively small for reasonable values of ε. This
is as predicted by our theoretical results, where unless ε is extremely small (less than 1/k) the non-private
sample complexity dominates the privacy requirement. However, we found that for smaller support sizes
(as shown in Section A.2), the cost of privacy can be significant. We provide an intuitive explanation for
why no private estimator can perform well on such instances. To minimize the number of parameters, we
instead argue about the related problem of support-size estimation. Suppose we are trying to distinguish
between distributions which are uniform over supports of size 100 and 200. We note that, if we draw n = 50
samples, the “profile” of the samples (i.e., the histogram of the histogram) will be very similar for the two
distributions. In particular, if one modifies only a few samples (say, five or six), one could convert one
profile into the other. In other words, these two profiles are almost-neighboring datasets, but simultaneously
correspond to very different support sizes. This pits the two goals of privacy and accuracy at odds with each
other, thus resulting in a degradation in accuracy.
5.2.2 Evaluation on Census Data and Hamlet
We conclude with experiments for support coverage on two real-world datasets, the 2000 US Census data and
the text of Shakespeare’s play Hamlet, inspired by investigations in [OSW16] and [VV17]. Our investigation
on US Census data is also inspired by the fact that this is a setting where privacy is of practical importance,
evidenced by the proposed adoption of differential privacy in the 2020 US Census [DLS+ 17].
The Census dataset contains a list of last names that appear at least 100 times. Since the dataset is so
oversampled, even a small fraction of the data is likely to contain almost all the names. As such, we make
the task non-trivial by subsampling mtotal = 86080 individuals from the data, obtaining 20412 distinct last
names. We then sample n of the mtotal individuals without replacement and attempt to estimate the total
number of last names. Figure 4 displays the RMSE over 100 iterations of this process. We observe that even with an exceptionally stringent privacy budget of ε = 0.5, the performance is almost indistinguishable from that of the non-private SGT estimator.
[Figure 4 here: RMSE vs. fraction of seen names for the non-private SGT and private versions with ε = 2, 1, and 0.5.]
Figure 4: Comparison of our private estimator with the SGT on Census data.
The Hamlet dataset has mtotal = 31,999 words, of which 4804 are distinct. Since the distribution is
not as oversampled as the Census data, we do not need to subsample the data. Besides this difference, the
experimental setup is identical to that of the Census dataset. Once again, as we can see in Figure 5, we
get near-indistinguishable performance between the non-private and private estimators, even for very small
values of ε. Our experimental results demonstrate that privacy is realizable in practice, with particularly
accurate performance on real-world datasets.
[Figure 5 here: RMSE vs. fraction of seen words for the non-private SGT and private versions with ε = 2, 1, and 0.5.]
Figure 5: Comparison of our private estimator with the SGT on Hamlet.
References
[AA01]
Dakshi Agrawal and Charu C. Aggarwal. On the design and quantification of privacy preserving data mining algorithms. In Proceedings of the 20th ACM SIGMOD-SIGACT-SIGART
Symposium on Principles of Database Systems, PODS ’01, pages 247–255, New York, NY,
USA, 2001. ACM.
[ADK15]
Jayadev Acharya, Constantinos Daskalakis, and Gautam Kamath. Optimal testing for properties of distributions. In Advances in Neural Information Processing Systems 28, NIPS ’15,
pages 3577–3598. Curran Associates, Inc., 2015.
[ADOS17]
Jayadev Acharya, Hirakendu Das, Alon Orlitsky, and Ananda Theertha Suresh. A unified
maximum likelihood approach for estimating symmetric properties of discrete distributions.
In Proceedings of the 34th International Conference on Machine Learning, ICML ’17, pages
11–21. JMLR, Inc., 2017.
[ADR17]
Maryam Aliakbarpour, Ilias Diakonikolas, and Ronitt Rubinfeld. Differentially private identity
and closeness testing of discrete distributions. arXiv preprint arXiv:1707.05497, 2017.
[AOST17]
Jayadev Acharya, Alon Orlitsky, Ananda Theertha Suresh, and Himanshu Tyagi. Estimating
rényi entropy of discrete distributions. IEEE Transactions on Information Theory, 63(1):38–56,
2017.
[ASZ17]
Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Differentially private testing of identity and
closeness of discrete distributions. arXiv preprint arXiv:1707.05128, 2017.
[AW89]
Nabil R. Adam and John C. Worthmann. Security-control methods for statistical databases:
A comparative study. ACM Computing Surveys (CSUR), 21(4):515–556, 1989.
[BDL+ 17]
Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, and Hongyang Zhang. Differentially private clustering in high-dimensional euclidean spaces. In Proceedings of the 34th
International Conference on Machine Learning, ICML ’17, pages 322–331. JMLR, Inc., 2017.
[BLM13]
Stephane Boucheron, Gabor Lugosi, and Pierre Massart. Concentration Inequalities: A
Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[BWM97]
Michael J. Berry, David K Warland, and Markus Meister. The structure and precision of retinal
spike trains. Proceedings of the National Academy of Sciences, 94(10):5411–5416, 1997.
[CCG+ 12]
Robert K. Colwell, Anne Chao, Nicholas J. Gotelli, Shang-Yi Lin, Chang Xuan Mao, Robin L.
Chazdon, and John T. Longino. Models and estimators linking individual-based and samplebased rarefaction, extrapolation and comparison of assemblages. Journal of Plant Ecology,
5(1):3–21, 2012.
[CDK17]
Bryan Cai, Constantinos Daskalakis, and Gautam Kamath. Priv’it: Private and sample efficient
identity testing. In Proceedings of the 34th International Conference on Machine Learning,
ICML ’17, pages 635–644. JMLR, Inc., 2017.
[CSS13]
Kamalika Chaudhuri, Anand D. Sarwate, and Kaushik Sinha. A near-optimal algorithm
for differentially-private principal components. Journal of Machine Learning Research,
14(Sep):2905–2943, 2013.
[CT06]
Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2006.
[Dal77]
Tore Dalenius. Towards a methodology for statistical disclosure control. Statistisk Tidskrift,
15:429–444, 1977.
[DHS15]
Ilias Diakonikolas, Moritz Hardt, and Ludwig Schmidt. Differentially private learning of structured discrete distributions. In Advances in Neural Information Processing Systems 28, NIPS
’15, pages 2566–2574. Curran Associates, Inc., 2015.
[Dif17]
Differential Privacy Team, Apple. Learning with privacy at scale. https://machinelearning.
apple.com/docs/learning-with-privacy-at-scale/appledifferentialprivacysystem.
pdf, December 2017.
[DJW17]
John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Minimax optimal procedures for
locally private estimation. Journal of the American Statistical Association, 2017.
[DLS+ 17]
Aref N. Dajani, Amy D. Lauger, Phyllis E. Singer, Daniel Kifer, Jerome P. Reiter, Ashwin
Machanavajjhala, Simson L. Garfinkel, Scot A. Dahl, Matthew Graham, Vishesh Karwa, Hang
Kim, Philip Lelerc, Ian M. Schmutte, William N. Sexton, Lars Vilhuber, and John M. Abowd.
The modernization of statistical disclosure limitation at the U.S. census bureau, 2017. Presented
at the September 2017 meeting of the Census Scientific Advisory Committee.
[DMNS06]
Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography,
TCC ’06, pages 265–284, Berlin, Heidelberg, 2006. Springer.
[DN03]
Irit Dinur and Kobbi Nissim. Revealing information while preserving privacy. In Proceedings
of the 22nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems,
PODS ’03, pages 202–210, New York, NY, USA, 2003. ACM.
[DR14]
Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. Now
Publishers, Inc., 2014.
[Dwo08]
Cynthia Dwork. Differential privacy: A survey of results. In Proceedings of the 5th International
Conference on Theory and Applications of Models of Computation, TAMC ’08, pages 1–19,
Berlin, Heidelberg, 2008. Springer.
[EPK14]
Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable
privacy-preserving ordinal response. In Proceedings of the 2014 ACM Conference on Computer
and Communications Security, CCS ’14, pages 1054–1067, New York, NY, USA, 2014. ACM.
[FS17]
Kazuto Fukuchi and Jun Sakuma. Minimax optimal estimators for additive scalar functionals
of discrete distributions. In Proceedings of the 2017 IEEE International Symposium on Information Theory, ISIT ’17, pages 2103–2107, Washington, DC, USA, 2017. IEEE Computer
Society.
[GLRV16]
Marco Gaboardi, Hyun-Woo Lim, Ryan M. Rogers, and Salil P. Vadhan. Differentially private
chi-squared hypothesis testing: Goodness of fit and independence testing. In Proceedings of
the 33rd International Conference on Machine Learning, ICML ’16, pages 1395–1403. JMLR,
Inc., 2016.
[GR17]
Marco Gaboardi and Ryan Rogers. Local private hypothesis testing: Chi-square tests. arXiv
preprint arXiv:1709.07155, 2017.
[GT56]
I.J. Good and G.H. Toulmin. The number of new species, and the increase in population
coverage, when a sample is increased. Biometrika, 43(1-2):45–63, 1956.
[HP14]
Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications.
In Advances in Neural Information Processing Systems 27, NIPS ’14, pages 2861–2869. Curran
Associates, Inc., 2014.
[HSR+ 08]
Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill
Muehling, John V. Pearson, Dietrich A. Stephan, Stanley F. Nelson, and David W. Craig.
Resolving individuals contributing trace amounts of dna to highly complex mixtures using
high-density snp genotyping microarrays. PLoS Genetics, 4(8):1–9, 2008.
[JS13]
Aaron Johnson and Vitaly Shmatikov. Privacy-preserving data exploration in genome-wide
association studies. In Proceedings of the 19th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, KDD ’13, pages 1079–1087, New York, NY, USA,
2013. ACM.
[JVHW17]
Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of
functionals of discrete distributions. IEEE Transactions on Information Theory, 61(5):2835–
2885, 2017.
[KC12]
Alon Keinan and Andrew G. Clark. Recent explosive human population growth has resulted
in an excess of rare genetic variants. Science, 336(6082):740–743, 2012.
[KR17]
Daniel Kifer and Ryan M. Rogers. A new class of private chi-square tests. In Proceedings of
the 20th International Conference on Artificial Intelligence and Statistics, AISTATS ’17, pages
991–1000. JMLR, Inc., 2017.
[KSF17]
Kazuya Kakizaki, Jun Sakuma, and Kazuto Fukuchi. Differentially private chi-squared test
by unit circle mechanism. In Proceedings of the 34th International Conference on Machine
Learning, ICML ’17, pages 1761–1770. JMLR, Inc., 2017.
[KT13]
Michael Kapralov and Kunal Talwar. On differentially private low rank approximation. In
Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’13,
pages 1395–1414, Philadelphia, PA, USA, 2013. SIAM.
[KV18]
Vishesh Karwa and Salil Vadhan. Finite sample differentially private confidence intervals. In
Proceedings of the 9th Conference on Innovations in Theoretical Computer Science, ITCS ’18,
New York, NY, USA, 2018. ACM.
[Mil55]
George A. Miller. Note on the bias of information estimates. Information Theory in Psychology:
Problems and Methods, 2:95–100, 1955.
[NBdRvS04] Ilya Nemenman, William Bialek, and Rob de Ruyter van Steveninck. Entropy and information
in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111:1–
056111:6, 2004.
[Now12]
Sebastian Nowozin. Improved information gain estimates for decision tree induction. In Proceedings of the 29th International Conference on Machine Learning, ICML ’12, pages 571–578.
JMLR, Inc., 2012.
[NWE+ 12]
Matthew R. Nelson, Daniel Wegmann, Margaret G. Ehm, Darren Kessner, Pamela St. Jean,
Claudio Verzilli, Judong Shen, Zhengzheng Tang, Silviu-Alin Bacanu, Dana Fraser, Liling
Warren, Jennifer Aponte, Matthew Zawistowski, Xiao Liu, Hao Zhang, Yong Zhang, Jun Li,
Yun Li, Li Li, Peter Woollard, Simon Topp, Matthew D. Hall, Keith Nangle, Jun Wang,
Gonçalo Abecasis, Lon R. Cardon, Sebastian Zöllner, John C. Whittaker, Stephanie L. Chissoe,
John Novembre, and Vincent Mooser. An abundance of rare functional variants in 202 drug
target genes sequenced in 14,002 people. Science, 337(6090):100–104, 2012.
[OS17]
Maciej Obremski and Maciej Skorski. Renyi entropy estimation revisited. In Proceedings of
the 20th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX ’17, pages 20:1–20:15, Dagstuhl, Germany, 2017. Schloss Dagstuhl–
Leibniz-Zentrum fuer Informatik.
[OSW16]
Alon Orlitsky, Ananda Theerta Suresh, and Yihong Wu. Optimal prediction of the number of
unseen species. Proceedings of the National Academy of Sciences, 113(47):13283–13288, 2016.
[Pan03]
Liam Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191–1253, 2003.
[Rog17]
Ryan Michael Rogers. Leveraging Privacy in Data Analysis. PhD thesis, University of Pennsylvania, May 2017.
[RRSS09]
Sofya Raskhodnikova, Dana Ron, Amir Shpilka, and Adam Smith. Strong lower bounds for
approximating distribution support size and the distinct elements problem. SIAM Journal on
Computing, 39(3):813–842, 2009.
[She17]
Or Sheffet. Differentially private ordinary least squares. In Proceedings of the 34th International
Conference on Machine Learning, ICML ’17, pages 3105–3114. JMLR, Inc., 2017.
[SSB16]
Sean Simmons, Cenk Sahinalp, and Bonnie Berger. Enabling privacy-preserving GWASs in
heterogeneous human populations. Cell Systems, 3(1):54–61, 2016.
[TBO+ 12]
Jacob A. Tennessen, Abigail W. Bigham, Timothy D. O’Connor, Wenqing Fu, Eimear E.
Kenny, Simon Gravel, Sean McGee, Ron Do, Xiaoming Liu, Goo Jun, Hyun Min Kang, Daniel
Jordan, Suzanne M. Leal, Stacey Gabriel, Mark J. Rieder, Goncalo Abecasis, David Altshuler,
Deborah A. Nickerson, Eric Boerwinkle, Shamil Sunyaev, Carlos D. Bustamante, Michael J.
Bamshad, Joshua M. Akey, Broad GO, Seattle GO, and on behalf of the NHLBI Exome Sequencing Project. Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science, 337(6090):64–69, 2012.
[USF13]
Caroline Uhler, Aleksandra Slavković, and Stephen E. Fienberg. Privacy-preserving data sharing for genome-wide association studies. The Journal of Privacy and Confidentiality, 5(1):137–
166, 2013.
[VV13]
Gregory Valiant and Paul Valiant. Estimating the unseen: Improved estimators for entropy
and other properties. In Advances in Neural Information Processing Systems 26, NIPS ’13,
pages 2157–2165. Curran Associates, Inc., 2013.
[VV17]
Gregory Valiant and Paul Valiant. Estimating the unseen: Improved estimators for entropy
and other properties. Journal of the ACM, 64(6):37:1–37:41, 2017.
[WLK15]
Yue Wang, Jaewoo Lee, and Daniel Kifer. Revisiting differentially private hypothesis tests for
categorical data. arXiv preprint arXiv:1511.03376, 2015.
[WWS15]
Yining Wang, Yu-Xiang Wang, and Aarti Singh. Differentially private subspace clustering. In
Advances in Neural Information Processing Systems 28, NIPS ’15, pages 1000–1008. Curran
Associates, Inc., 2015.
[WY16]
Yihong Wu and Pengkun Yang. Minimax rates of entropy estimation on large alphabets via
best polynomial approximation. IEEE Transactions on Information Theory, 62(6):3702–3720,
2016.
[WY18]
Yihong Wu and Pengkun Yang. Chebyshev polynomials, moment matching, and optimal estimation of the unseen. The Annals of Statistics, 2018.
[YFSU14]
Fei Yu, Stephen E. Fienberg, Aleksandra B. Slavković, and Caroline Uhler. Scalable privacypreserving data sharing methodology for genome-wide association studies. Journal of Biomedical Informatics, 50:133–141, 2014.
[ZVV+ 16]
James Zou, Gregory Valiant, Paul Valiant, Konrad Karczewski, Siu On Chan, Kaitlin Samocha,
Monkol Lek, Shamil Sunyaev, Mark Daly, and Daniel G. MacArthur. Quantifying unobserved
protein-coding variants in human populations provides a roadmap for large-scale sequencing
projects. Nature Communications, 7, 2016.
A  Additional Experimental Results
This section contains additional plots of our synthetic experimental results. Section A.1 contains experiments
on entropy estimation, while Section A.2 contains experiments on estimation of support coverage.
A.1  Entropy Estimation
We present four more plots of our synthetic experimental results for entropy estimation. Figures 6 and 7
are on a smaller support of k = 100, with ε = 1 and 2, respectively. Figures 8 and 9 are on a support of
k = 1000, with ε = 0.5 and 2.
[Plots (a)–(c): RMSE vs. number of samples for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the Plug-in, Miller, Poly, Poly-Laplace and Plug-in-Laplace estimators.]
Figure 6: Comparison of various estimators for the entropy, k = 100, ε = 1.
[Plots (a)–(c): RMSE vs. number of samples for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the Plug-in, Miller, Poly, Poly-Laplace and Plug-in-Laplace estimators.]
Figure 7: Comparison of various estimators for the entropy, k = 100, ε = 2.
[Plots (a)–(c): RMSE vs. number of samples for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the Plug-in, Miller, Poly, Poly-Laplace and Plug-in-Laplace estimators.]
Figure 8: Comparison of various estimators for the entropy, k = 1000, ε = 0.5.
[Plots (a)–(c): RMSE vs. number of samples for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the Plug-in, Miller, Poly, Poly-Laplace and Plug-in-Laplace estimators.]
Figure 9: Comparison of various estimators for the entropy, k = 1000, ε = 2.
[Plots (a)–(c): RMSE vs. t for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the non-private SGT with the private estimator at ε = 10, 2 and 1.]
Figure 10: Comparison between the private estimator and the non-private SGT when k = 1000.
A.2  Support Coverage
We present three additional plots of our synthetic experimental results for support coverage estimation. In
particular, Figures 10, 11, and 12 show support coverage for k = 1000, 5000, and 100000, respectively.
[Plots (a)–(c): RMSE vs. t for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the non-private SGT with the private estimator at ε = 10, 2 and 1.]
Figure 11: Comparison between the private estimator and the non-private SGT when k = 5000.
[Plots (a)–(c): RMSE vs. t for the Uniform, Two steps, Zipf 1/2, Zipf 1, Dirichlet-1 prior and Dirichlet-1/2 prior distributions, comparing the non-private SGT with the private estimator at ε = 10, 2 and 1.]
Figure 12: Comparison between the private estimator and the non-private SGT when k = 100000.
A Report of a Significant Error On a Frequently Used
Pseudo Random Number Generator
arXiv:1408.1900v1 [cs.MS] 7 Aug 2014
Ayşe Ferhan Yeşil, M. Cemal Yalabik
Department of Physics, Bilkent University, 06800 Ankara, Turkey
Abstract
Emergence of stochastic simulations as an extensively used computational tool for scientific purposes has intensified the need for more accurate ways of generating sufficiently long sequences of uncorrelated random numbers. Even though several different methods have been proposed to this end, deterministic algorithms known as pseudo-random number generators (PRNGs) have emerged as the most widely used tool, being a replicable, portable and easy-to-use way to generate such sequences. Here, we introduce a simple Poisson process whose simulation gives systematic errors when the very commonly used random number generator of the GNU C Library (Glibc) is utilised. The PRNG of Glibc is an additive lagged Fibonacci generator; this family of PRNGs is generally accepted as relatively safe compared with other PRNGs. The systematic errors indicate complex correlations among the random numbers, which require further explanation.
Keywords: pseudo-random numbers, pseudo-random number generators
PACS: 02.70.-c, 05.10.-a
1. Introduction
Random numbers are at the core of stochastic simulations for scientific
purposes. In nearly all of these cases, but especially in Monte Carlo simulations, long sequences of uncorrelated, uniformly distributed random numbers are required. Several different methods are used to generate such random number sequences, but deterministic algorithms, namely pseudo-random number generators (PRNGs), are extensively preferred as a replicable, portable and easy-to-use method. However, being confident that such a number sequence is “sufficiently” random, i.e., that any combination of random numbers within the sequence is uncorrelated, is not straightforward, since there is no complete and universal statistical test. In other words, even though one cannot be confident that a sequence is totally random by any analysis known so far, it can still be decided that a sequence is prone to correlations when empirical or theoretical tests indicate in that direction [1]. As a result, correlation tests for PRNGs have become necessary tools for physicists
[3, 5, 6, 4, 2, 12, 7, 9, 8, 10, 11]. Exemplifying the significance of these tests,
it is important to note that most of these tests are direct results of PRNGs gone wrong. One of the first such paradigmatic examples, dating back to 1992, was encountered in the simulation of the 2-D Ising model: the results of various algorithms differed from the exact result, and they also differed from algorithm to algorithm depending on the PRNG used. This analysis showed that Generalized Feedback Shift Register (GFSR) PRNGs (a type of lagged Fibonacci generator (LFG) which utilises exclusive-or operations) with smaller lags produce dramatically wrong results, up to a hundred times the standard deviation, when used with the Wolff algorithm [3]. The depth-first nature of the algorithm was suspected to interfere with the intrinsic triple correlations of the GFSRs. However, Schmid et al. showed that even when the algorithm randomly updates the system one spin at a time, if the model has a triple structure in itself, such as the Blume-Capel model, the use of random numbers produced by a GFSR in the simulations leads to dramatic errors
[4]. On the other hand, GFSRs are not the only faulty PRNGs. LFGs with
addition and subtraction operations also cause systematic errors if they have
small lags [5]. The default random number generator of Glibc, the C library used by the common C compilers in Unix-based environments, is an LFG with the addition operation, LFG(31,3,+). Its lag of 31 is generally considered small. Grassberger reported correlations in lagged Fibonacci generators with correlation lengths proportional to their lags; his method resembles Vattulainen’s n-block method [10]. He proposed that these correlations might be related to the triple correlations intrinsic to the random number sequence produced by any LFG. He also showed, while simulating 3-D self-avoiding walks on a cubic lattice, that even an LFG with lag 250, LFG(250,103,+), gives poor results. This led to the suspicion that LFG(55,24,+) and LFG(17,5,+) may operate poorly, which was then demonstrated to be so [6]. In light of these results, the default PRNG of Glibc should be considered unreliable, since it has a small lag as well.
In this paper, a simple test model is introduced which shows strong correlations when utilised with the default random number generator provided with the C compiler of Linux distributions. C is a very commonly used programming language and is also very popular among scientific programming communities. The GNU C Library (Glibc) is used as the C library in the GNU systems and in most systems with the Linux kernel. The PRNG of this library is constructed using a linear additive feedback method. We demonstrate the existence of relatively strong correlations through the use of a simple algorithm related to a one-dimensional diffusive gas problem.
To inform the unspecialised reader, we provide here further information about pseudo-random number generators. There are basically two common types of PRNGs: Linear Congruential Generators (LCGs) and Lagged
Fibonacci Generators (LFGs) [5]. LCGs are associated with the general
equation
X_i = (A X_{i−1} + B) mod M    (1)
which can be represented in short as LCG(A, B, M), with a maximum period of M that can be reached through a suitable choice of A and B. Here A, B and M are integers, and X_i is the i-th generated random number.
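To make the recursion concrete, one step of such a generator takes only a few lines of C. The following sketch is ours and is not part of the original paper; the constants are those of LCG(16807, 0, 2^31 − 1), which reappears below in the seeding procedure of Glibc:

    #include <stdint.h>

    /* One step of LCG(A, B, M): X_i = (A*X_{i-1} + B) mod M.
       Here A = 16807, B = 0, M = 2^31 - 1; the 64-bit state keeps
       the intermediate product A*X_{i-1} from overflowing. */
    static uint64_t lcg_state = 1;          /* X_0 */

    static uint32_t lcg_next(void)
    {
        lcg_state = (16807u * lcg_state) % 2147483647u;
        return (uint32_t)lcg_state;
    }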
LFGs have the general equation:
X_i = X_{i−P} ⊙ X_{i−Q}    (2)
where P and Q are the lags, conventionally chosen as P > Q. LFGs can be represented as LFG(P, Q, ⊙) [5]. The symbol ⊙ represents any binary arithmetic operation, such as addition, subtraction, multiplication or the exclusive-or (xor) operation. If this operation is addition or subtraction, the maximum period that can be reached for b-bit precision is (2^P − 1)2^{b−1}. If the arithmetic operation is multiplication, the maximum period is (2^P − 1)2^{b−3} [5]. Moreover, if the operator is xor, the maximum period becomes 2^P − 1. The random number generator of Glibc uses an additive lagged Fibonacci generator, which is a combination of an LCG and an LFG utilizing 32-bit integer operations. The initial values are produced by a linear congruential generator LCG(16807, 0, 2^31 − 1), and then, using those values, an LFG(31, 3, +) produces the random number sequence. (The first 344 members of the sequence are omitted.) The newest versions of Glibc allow the initialization to be done with any seed (X_0), but here we stick to the default case in which the initial seed is equal to 1. In order to apply Eq. (2), an initial set of X_i for i < P must be defined. This is accomplished by
X_0 = 1
X_i = 16807 X_{i−1} mod (2^31 − 1) for 1 ≤ i < 31
X_i = X_{i−31} for i = 31, ..., 33.    (3)
The generation part is,
X_i = (X_{i−3} + X_{i−31}) mod 2^32 for i ≥ 34.    (4)
All X_i's for 0 ≤ i ≤ 343 are discarded, and in the end the i-th output of the RNG becomes R_i = X_{i+344} ÷ 2 [13]. The integer division by 2 is carried out to convert the number to a 31-bit number; a leading zero bit is necessary to obtain a positive number. In order to obtain real pseudo-random numbers, the result is divided by the biggest 31-bit integer:

r_i = R_i / (2^31 − 1).    (5)
This result then provides a pseudo-random real number uniformly distributed between 0 and 1 with 31-bit precision.
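Equations (3)-(5) are simple enough to be transcribed directly into C. The sketch below is ours, not code from the paper: it stores the whole history in one array for clarity (Glibc itself keeps a circular buffer of 34 words), and the number of outputs is an arbitrary choice:

    #include <stdint.h>

    #define WARMUP 344
    #define NOUT   1000                      /* outputs kept in this sketch */

    static uint32_t x[WARMUP + NOUT];

    static void glibc_prng_init(void)
    {
        int64_t w = 1;                       /* default seed X_0 = 1 */
        x[0] = 1;
        for (int i = 1; i < 31; i++) {       /* Eq. (3): LCG(16807, 0, 2^31 - 1) */
            w = (16807 * w) % 2147483647;
            x[i] = (uint32_t)w;
        }
        for (int i = 31; i < 34; i++)        /* Eq. (3): X_i = X_{i-31} */
            x[i] = x[i - 31];
        for (int i = 34; i < WARMUP + NOUT; i++)
            x[i] = x[i - 3] + x[i - 31];     /* Eq. (4); uint32_t wraps mod 2^32 */
    }

    /* Eq. (5): the i-th output as a real number in [0, 1). */
    static double glibc_prng_real(int i)
    {
        uint32_t r = x[i + WARMUP] >> 1;     /* R_i = X_{i+344} / 2 */
        return (double)r / 2147483647.0;     /* divide by 2^31 - 1 */
    }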
2. Results
The model we study is simple: there are N boxes numbered 1 to N in a chain structure, and all boxes are equally likely to receive a ball at a random time driven by a Poisson process (Fig. 1). Each time a ball is received, a random time increment of δt has elapsed, where δt = −log(r)/N and r is a uniformly distributed random number between 0 and 1. This then defines a Poisson process in which an event occurs with rate N per unit time. With this choice of δt, on average N balls are placed per unit time. The box that will receive the ball is chosen randomly as well, such that the i-th box is chosen if i − 1 < rN < i (Fig. 2). In between these two steps (choosing the random time and choosing the random box), n samples are drawn from the random number sequence in vain, i.e., they are discarded from the used sequence. Furthermore, the simulation is paused at multiples of a time period T: whenever the time increment crosses a multiple of T, the time variable is set to that multiple (and no ball is placed in any box), and the simulation then continues as described in Fig. 1. The pseudocode for the model can be seen in Table 1.
Figure 1: a) Each box is equally likely to receive a ball at a random time. b) Time t is incremented by δt at each step. The small dots on the axis indicate n draws in vain from the PRNG, and the larger dots indicate that a ball is placed in a random box at that point in time. Whenever δt crosses a multiple of T, the process is paused, i.e., no ball is placed in any box and time is set to that multiple of T.
The periodic pausing at multiples of T is necessary to observe the correlation; indeed, Fig. 3 shows that the process is sensitive to the period. In our research we were analysing the effect of a periodic perturbation on a system, and that is how the problem was discovered. We then tested the process with a time increment derived from the simpler form δt = r/N, and still found correlations.
It is expected that over a large period of time each site will be equally likely to be occupied. However, for n = 0, boxes in the middle have a tendency to be occupied more than the sites near both ends (Fig. 2). Similarly, for n = 1 and n = 2, correlations are observed. However, if n > 2, the correlations vanish; in other words, there are no preferences left among the boxes. Note that the induced correlations are significant, i.e., they exceed
1% in our example. Also note that the simulations are done for 10^9 trials, which is well below the period of the PRNG.

[Plot: relative occupation of each box for n = 0, 1, 2 and 3.]
Figure 2: n = 0, 1, 2 and 3, T = 0.25, N = 20. n is the number of samples drawn in vain from the PRNG, T is the period and N is the number of boxes. The dependence on n is related to the correlations among consecutive numbers.
Note that the abscissa of the graphs in Figures 2 and 3 corresponds to the size of the random number used for selecting a box. The different shapes of the graphs in these plots correspond to a non-uniform distribution for the size of the random number. The dependences on n and T are then related to the correlations among the random numbers.
Since we wanted a measure of the systematic error in the probability distribution P, we looked at the first coefficient of the Fourier expansion of this distribution: c = Σ_{j=1}^{N} exp(iπj/N)(P(j) − c_0), where c_0 is the average value of P. Figure 4 shows the magnitude of c, which represents the size of the systematic error as a function of the period.
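In code this measure is a single pass over the occupation histogram. The following C function is our own sketch (it assumes P is indexed from 1 to N and that M_PI is available from math.h):

    #include <complex.h>
    #include <math.h>

    /* |c| with c = sum_{j=1}^{N} exp(i*pi*j/N) * (P(j) - c0),
       where c0 is the average value of P. */
    static double fourier_error(const double *P, int N)
    {
        double c0 = 0.0;
        for (int j = 1; j <= N; j++)
            c0 += P[j];
        c0 /= N;

        double complex c = 0.0;
        for (int j = 1; j <= N; j++)
            c += cexp(I * M_PI * j / N) * (P[j] - c0);
        return cabs(c);
    }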
Pseudocode for the process
1. SET N to lattice size
2. SET period to T
3. SET totalT ime to 0
4. SET n to number of draws in vain
5. SET a to 1
6. SET occupation of k th site o(k) to 0 for 1 ≤ k ≤ N
7. DRAW a random number r
8. SET incrementalT ime to −log(r)/N
9. IF totalT ime plus incrementalT ime is smaller than a × T THEN
10.
INCREASE totalT ime → totalT ime + incrementalT ime
11.
DRAW a random number r
12.
DRAW n random numbers in vain
13.
CHOOSE k th box as k = ⌈rN ⌉
14.
INCREMENT o(k) → o(k) + 1
15. ELSE
16.
INCREMENT a → a + 1
17.
SET totalT ime as a × T
18. REPEAT steps 7 − 17 one billion times
Table 1: The pseudocode necessary to produce the results in this paper.
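A direct C implementation of Table 1 might look as follows. This is our own sketch rather than the authors' code: the tie-breaking at the box boundary and the reduced trial count are our choices (the paper uses 10^9 trials), and rand() is the Glibc generator under test:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void)
    {
        const int    N = 20;                 /* number of boxes        */
        const double T = 0.25;               /* period                 */
        const int    n = 0;                  /* draws in vain per step */
        const long   trials = 100000000L;    /* 10^9 in the paper      */

        static long occ[21];                 /* occ[1..N], zero-initialised */
        double t = 0.0;
        long   a = 1, balls = 0;

        srand(1);
        for (long step = 0; step < trials; step++) {
            /* r in (0,1] so that log(r) stays finite */
            double r = (rand() + 1.0) / ((double)RAND_MAX + 1.0);
            double dt = -log(r) / N;
            if (t + dt < a * T) {
                t += dt;
                for (int j = 0; j < n; j++)
                    rand();                  /* draws in vain */
                r = rand() / ((double)RAND_MAX + 1.0);
                int k = (int)(r * N) + 1;    /* box i with i-1 <= rN < i */
                if (k > N) k = N;
                occ[k]++;
                balls++;
            } else {                         /* crossed a multiple of T */
                t = a * T;
                a++;
            }
        }
        for (int k = 1; k <= N; k++)         /* occupation relative to uniform */
            printf("%2d  %f\n", k, (double)occ[k] * N / (double)balls);
        return 0;
    }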
3. Conclusions
Given the common usage of C as a scientific programming language, we report correlations in the default random number generator of Glibc. Even though it can pass some tests, the PRNG of Glibc shows significant correlations under this very simple test. Systematic errors of this magnitude will cause problems in simulations which demand random numbers with low correlations, such as Monte Carlo analyses. The scientific community should beware of accepting default random number generators even for well-established, commonly used programming languages such as C. We could not identify the precise mechanism which leads to this behaviour; a mathematical analysis of this mechanism represents an interesting challenge for further study.
4. Acknowledgements
This work was supported by Turkish Academy of Sciences.
[Plot: relative occupation of each box for T = 0.1 to 0.5, with n = 0.]
Figure 3: n = 0, T = 0.1, 0.2, 0.3, 0.4 and 0.5. For different values of the period we see different shapes of graphs; however, for higher periods the correlations cease to exist.
5. References
[1] I. Vattulainen, arXiv:cond-mat/9411062
[2] R. M. Ziff, Comput. Phys. 12, 385 (1998)
[3] A. M. Ferrenberg, D. P. Landau and Y. J. Wong, Phys. Rev. Lett. 69,
3382 (1992).
[4] F. Schmid and N. Wilding, Int. J. Mod. Phys. C 6, 781 (1995)
[5] P. D. Coddington, Northeast Parallel Architecture Center, Paper 14
(1994).
[6] P. Grassberger, Physics Letters A, 181 (1993)
[7] S. Mertens and H. Bauke, Phys. Rev. E 69, 055702(R) (2004)
Figure 4: The magnitude of the first Fourier coefficient c for different values of the period τ. It represents the size of the systematic error as a function of the period.
[8] D. E. Knuth, The Art of Computer Programming, 3rd ed. (Addison-Wesley, New York, 1998), Vol. 2
[9] H. G. Katzgraber, arXiv:1005.4117
[10] I. Vattulainen, T. Ala-Nissila and K. Kankaala, Phys. Rev. E 52, 3205
(1995)
[11] I. Vattulainen, K. Kankaala, J. Saarinen, T. Ala-Nissila, Comp. Phys. Comm., 86 (1995) 209–226
[12] I. Vattulainen, T. Ala-Nissila and K. Kankaala, Phys. Rev. Lett. 73,
2513 (1994)
[13] Peter Selinger: The GLIBC Pseudo-random Number Generator. Web, accessed 24 Feb. 2014. This webpage also contains code to generate the random number sequence discussed in this paper.
Control refinement for discrete-time
descriptor systems: a behavioural approach
via simulation relations
F. Chen ∗ , S. Haesaert ∗ , A. Abate ∗∗ , and S. Weiland ∗
∗
arXiv:1704.01672v1 [] 6 Apr 2017
Department of Electrical Engineering
Eindhoven University of Technology, Eindhoven, The Netherlands
∗∗
Department of Computer Science
University of Oxford, Oxford, United Kingdom
Abstract: The analysis of industrial processes, modelled as descriptor systems, is often
computationally hard due to the presence of both algebraic couplings and difference equations
of high order. In this paper, we introduce a control refinement notion for these descriptor
systems that enables analysis and control design over related reduced-order systems. Utilising
the behavioural framework, we extend upon the standard hierarchical control refinement for
ordinary systems and allow for algebraic couplings inherent to descriptor systems.
Keywords: Descriptor systems, simulation relations, control refinement, behavioural theory.
1. INTRODUCTION
Complex industrial processes generally contain algebraic
couplings in addition to differential (or difference) equations of high order. These systems, referred to as descriptor
systems (Kunkel and Mehrmann, 2006; Dai, 1989), are
commonly used in the modelling of mechanical systems.
The presence of algebraic equations, or couplings, together
with large state dimensions renders numerical simulation
and controller design challenging. Instead model reduction
methods (Antoulas, 2005) can be applied to replace the
systems with reduced order ones. Even though most methods have been developed for systems with only ordinary
difference equations, recent research also targets descriptor
systems (Cao et al., 2015).
In this paper, we newly target the use of descriptor systems
of reduced order for the verifiable design of controllers.
A rich body of literature on verification and formal controller synthesis exists for systems solely composed of
difference equations. This includes the algorithmic design
of certifiable (hybrid) controllers and the verification of
pre-specified requirements (Tabuada, 2009; Kloetzer and
Belta, 2008). Usually, these methods first reduce the original, concrete systems to abstract systems with finite or
smaller dimensional state spaces over which the verification or controller synthesis can be run. Such a controller obtained for the abstract system can be refined over the
concrete system leveraging the existence of a similarity relation, e.g., an (approximate) simulation relation, between
the two systems (Tabuada, 2009; Girard and Pappas,
2011). For the application of these relations in control
problems, a hierarchical control framework is presented
by (Girard and Pappas, 2009). Currently, the control synthesis over descriptor systems cannot be dealt with in this
fashion due to the presence of algebraic equations.
The presence of similarity relations between descriptor systems has also been a topic under investigation
in (Megawati and Van der Schaft, 2015). This work on
similarity relations deals with continuous-time descriptor
systems that are unconstrained and non-deterministic, and
focuses on the conditions for bisimilarity and on the construction of similarity relations. Instead in this work, we
specifically consider the control refinement problem for
discrete-time descriptor systems via simulation relations
within a behavioural framework, such that properties verified over the future behaviour of the abstract system are
also verified over the concrete controlled system. Within
the behavioural theory (Willems and Polderman, 2013),
a formal distinction is made between a system (its behaviour) and its representations, enabling us to investigate descriptor systems and refinement control problems
without having to directly deal with their inherent anti-causality.
In the next section, we define the notion of dynamical
systems and control within a behavioural framework and
use it to formalise the control refinement problem. Subsequently, Section 3 is dedicated to the exact control refinement for descriptor systems and contains the main results
of the paper. The last section closes with the conclusions.
2. THE BEHAVIOURAL FRAMEWORK
2.1 Discrete-time descriptor systems
As introduced by (Willems and Polderman, 2013), we
define dynamical systems as follows.
Definition 1. A dynamical system Σ is defined as a triple
Σ = (T, W, B)
with the time axis T, the signal space W, and the behaviour
B ⊂ WT . ✷
In this definition, WT denotes the collection of all time-dependent functions w : T → W. The set of trajectories
or time-dependent functions given by B represents the
trajectories that are compatible with the system. This set
is referred to as the behaviour of the system (Willems
and Polderman, 2013). Generally, the representation of the
behaviour of a dynamical system by equations, such as a
set of ordinary differential equations, state space equations
and transfer functions, is non-unique. Hence we distinguish
a dynamical system (its behaviour) from the mathematical
equations used to represent its governing laws.
We consider dynamical systems evolving over discrete-time
(T := N = {0, 1, 2, . . .}) that can be represented by a
combination of linear difference and algebraic equations.
The dynamics of such a linear discrete-time descriptor
system (DS) are defined by the tuple (E, A, B, C) as
Ex(t + 1) = Ax(t) + Bu(t),
(1)
y(t) = Cx(t),
with the state x(t) ∈ X = Rn , the input u(t) ∈ U = Rp ,
and the output y(t) ∈ Y = Rk and t ∈ N. Further, E, A ∈
Rn×n , B ∈ Rn×p and C ∈ Rk×n are constant matrices and
we presume that rank(B) = p and rank(C) = k.
We say that a trajectory w = (u, x, y), with w : N → (U ×
X × Y), satisfies (1) if for all t ∈ N the equations in
(1) evaluated at u(t), x(t), x(t + 1), y(t) hold. Then the
collection of all trajectories w defines the full behaviour,
or equivalently the input-state-output behaviour as
Bi/s/o := {(u, x, y) ∈ (U × X × Y)N | (1) is satisfied}. (2)
The variable x is considered as a latent variable, therefore
the manifest, or equivalently the input-output behaviour
associated with (1) is defined by
Bi/o:= {(u, y)|∃x ∈ XN s.t. (u, x, y) ∈ Bi/s/o }.
If E is non-singular, we refer to the corresponding dynamical system as a non-singular DS. In that case, we can
transform (1) into standard state space equations, as
x(t + 1) = Ãx(t) + B̃u(t),
y(t) = Cx(t),    (3)
with à = E^{−1}A, B̃ = E^{−1}B. Further, Bi/s/o as in (2) is
{(u, x, y) ∈ (U × X × Y)N | (u, x, y) s.t. (3) holds}.
Similarly, if E is non-singular, Bi/o can be defined by (3).
The tuple with dynamics (1) defines a dynamical system
Σ evolving over the combined signal space W = U × X × Y
with behaviour B := Bi/s/o given in (2). Similarly, for W
restricted to input-output space, the tuple (N, U×Y, Bi/o )
defines the manifest or induced dynamical system.
We are specifically interested in the behaviour initialised
at t = 0 with a given set of initial states X0 ⊂ X. For this,
we say that a trajectory w : N → (U × X × Y) is initialised
with X0 if (1) holds and x(0) = x0 ∈ X0 . Such a trajectory,
initialised with x0 ∈ X0 , is also called the continuation
of x0 . We refer to the collection of initialised trajectories
related to X0 as the initialised behaviour Binit
i/s/o . This
allows us to formalise our definition of the descriptor
system evolving over N.
Definition 2. (Discrete-time descriptor systems (DS)). A (discrete-time) descriptor system is defined as a dynamical system Σ initialised with X0, whose behaviour can be represented by the combination of algebraic equations and difference equations given in (1), that is
Σ := (T, W, B) = (N, U × X × Y, B^init_{i/s/o})
with
• the time axis T := N = {0, 1, 2, . . .},
• the full signal space W := U × X × Y, and
• the initialised behaviour
B^init_{i/s/o} = {w ∈ W^N | w = (u, x, y) s.t. (1) and s.t. x(0) = x0 ∈ X0}.    (4)
(In the sequel the indexes init and i/s/o will be dropped.)
2.2 Control of descriptor systems
Controller synthesis amounts to synthesising a system Σc, called a controller, which, after interconnection with Σ, restricts the behaviour B of Σ to desirable (or controlled) trajectories. Thus, in the behavioural framework, control is defined through interconnections (or via variable sharing as specified next), rather than based on the causal transmission of signals or information, as in classical system theory. Let Σ1 = (T, C1 × W, B1) and Σ2 = (T, C2 × W, B2) be two dynamical systems. Then, as depicted in Fig. 1a and defined in (Willems and Polderman, 2013), the interconnection of Σ1 and Σ2 over W, denoted by Σ = Σ1 ×w Σ2 with the shared variable w ∈ W, yields the dynamical system Σ = (T, C1 × C2 × W, B) with B = {(c1, c2, w) : T → C1 × C2 × W | (c1, w) ∈ B1, (c2, w) ∈ B2}.
(a) The interconnected system Σ
obtained via the shared variables
w in W between dynamical systems Σ1 and Σ2 with signal spaces
C1 × W and C2 × W.
(b) The controlled behaviour
BΣ×Σc = BΣ ∩ BΣc is given as
the intersection of the behaviours
of the dynamical system Σ and
its controller Σc .
Fig. 1. The left figure (a) portrays the general interconnection of two dynamical systems. In figure (b), the more
specific case of behavioural intersection for a system
and its controller is depicted.
Observe that w ∈ W^T contains the signals shared by both Σ1 and Σ2, while c1 ∈ C1^T only belongs to Σ1 and c2 ∈ C2^T only belongs to Σ2.
system, the shared variable w satisfies the laws of both
B1 and B2 . Note that it is always possible to trivially
extend the signal spaces of Σ1 and Σ2 (and the associated
behaviour) such that a full interconnection structure is
obtained, that is, such that both C1 and C2 are empty and
the behaviour of the interconnected system is B = B1 ∩
B2 . Hence, a full interconnection of Σ = (T, W, BΣ ) and
Σc = (T, W, BΣc ) is simply Σ ×w Σc = (T, W, BΣ ∩
BΣc ), with the intersection of the behaviours, denoted by
BΣ×Σc , as portrayed in Fig. 1b. That is, interconnection
and intersection are equivalent in full interconnections.
Further, we define a well-posed controller Σc for Σ as
follows.
Definition 3. Consider a dynamical system Σ = (T, W, B),
with initialised behaviour as defined in (4). We say that a
system Σc = (T, W, Bc ) is a well-posed controller for Σ if
the following conditions are satisfied:
(1) BΣ×Σc := BΣ ∩ BΣc ≠ ∅;
(2) For every initial state x0 ∈ X0 , there exists a unique
continuation in BΣ×Σc .
Denote with C(Σ) the collection of all well-posed controllers for Σ.
We want a controller that accepts any initial state of
the system. This is formalised in the second condition
by requiring that for any initial state of Σ, there exists a
unique continuation in BΣ×Σc . We elucidate the properties
of a well-posed linear controller as follows.
Example 1. For a system Σ as in (1), consider a controller
Σc , which is a DS, and has dynamics given as
Ec x(t + 1) = Ac x(t) + Bc u(t),
(5)
with Ec , Ac ∈ Rnc ×n and Bc ∈ Rnc ×p . Suppose that the
controller shares the variables u and x with the system Σ.
That is, w = (u, x). The interconnected system Σ ×w Σc
yields the state evolutions of the combined system as
[E; Ec] x(t + 1) = [A; Ac] x(t) + [B; Bc] u(t),    (6)
and can be rewritten to
[E −B; Ec −Bc] [x(t + 1); u(t)] = [A; Ac] x(t)    (7)
(here [·; ·] denotes vertical, row-wise stacking).
If for any x(t) ∈ X, there exists a pair (x(t + 1), u(t)) such
that (7) holds, then this implies that for any initial state
x0 ∈ X0 of Σ there exists a continuation in the controlled
behaviour. In addition, if the pair (x(t + 1), u(t)) is unique
for any x(t) ∈ X, then this continuation is unique and we
say that Σc ∈ C(Σ). This existence and uniqueness of the
pairs (x(t+ 1), u(t)) depends on the solutions of the matrix
equality (7). We use the classical results on the solutions
of matrix equalities (cf. (Abadir and Magnus, 2005)) to
conclude that the first well-posedness condition is satisfied
if and only if
rank [E B; Ec Bc] = rank [E B A; Ec Bc Ac].    (8)
If in addition,
rank [E B A; Ec Bc Ac] = n + p,    (9)
then the second condition is also satisfied and Σc ∈ C(Σ).
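As a toy illustration of these rank conditions (our own example, not from the paper): take n = 2, p = 1, E = A = I_2, B = (0, 1)^T and the static state-feedback controller Ec = (0 0), Ac = (k1 k2), Bc = −1, i.e., 0 = k1 x1(t) + k2 x2(t) − u(t), so that u(t) = k1 x1(t) + k2 x2(t). Then
rank [E B; Ec Bc] = rank [1 0 0; 0 1 1; 0 0 −1] = 3 = n + p,
and appending the columns of [A; Ac] cannot raise the rank above the number of rows, so (8) and (9) both hold: this controller is well-posed for every choice of k1 and k2.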
Of interest is the design of well-posed controllers subject
to specifications over the future output behaviour of the
controlled system. We thus consider specifications defined
over the output space. In order to analyse the output
behaviour, we introduce a projection map. For B ⊂ (W1 ×
W2 )T we denote with ΠW2 a projection given as
ΠW2(B) := {w2 ∈ W2^T | ∃w1 ∈ W1^T s.t. (w1, w2) ∈ B}.
We focus here on finding a controller Σc for a given dynamical system Σ such that the output behaviour ΠY(BΣ×Σc) of the interconnected system satisfies some specifications.
2.3 Exact control refinement & problem statement
Let us refer to the original DS that represents the real physical system as the concrete DS. It is for this system that we would like to develop a well-posed controller. Recall that the DS is a dynamical system Σ with dynamics
(E, A, B, C) as in (1) and initialised with X0 . A well-posed
controller for Σ is referred to as Σc ∈ C(Σ). The controlled
concrete system is the interconnected system Σ×w Σc with
the shared variables w = (u, x).
Now, we consider a simpler DS Σa , related to the concrete
DS Σ, with dynamics given as (Ea, Aa, Ba, Ca) and initialised with Xa0. We assume that the synthesis of a well-posed controller Σca for Σa is substantially easier than for
Σ. We refer to this simpler system Σa as the abstract DS,
and we note that its signals take values ua (t), xa (t), ya (t)
with xa (t) ∈ Xa = Rm , ua (t) ∈ Ua = Rq , ya (t) ∈ Ya =
Y = Rk and t ∈ N. With respect to the concrete system,
the abstract DS is generally a reduced-order system. The
controlled abstract system Σa ×wa Σca is the interconnected system with the shared variables wa = (ua , xa ).
If we assume that we can compute a well-posed controller
for the abstract system, then the control synthesis problem
reduces to a control refinement problem.
Definition 4. (Exact control refinement). Let Σa and Σ be
the abstract and concrete DS, respectively. We say that
controller Σc ∈ C(Σ) refines the controller Σca ∈ C(Σa ) if
ΠY (BΣ×Σc ) ⊆ ΠY (BΣa ×Σca ).
Then we formalise the exact control refinement problem.
Problem 1. Let Σa and Σ be the abstract and concrete
DS, respectively. For any Σca ∈ C(Σa ), refine Σca to Σc ,
s.t. Σc ∈ C(Σ) and ΠY (BΣ×Σc ) ⊆ ΠY (BΣa ×Σca ).
In the next section, we will show that the existence of
a solution to this problem hinges on certain conditions
involving similarity relations between the concrete and
abstract DS. For this, we will first introduce simulation
relations to formally characterise this similarity.
3. EXACT CONTROL REFINEMENT
3.1 Similarity relations between DS
We give the notion of simulation relation as defined in
(Tabuada, 2009) for transition systems and applied to
pairs of DS Σ1 and Σ2 that share the same output space
Y1 = Y2 = Y.
Definition 5. Let Σ1 and Σ2 be two DS with respective
dynamics (E1 , A1 , B1 , C1 ) and (E2 , A2 , B2 , C2 ) over state
spaces X1 and X2 . A relation R ⊆ X1 × X2 is called a
simulation relation from Σ1 to Σ2 , if ∀(x1 , x2 ) ∈ R,
(1) for all (u1, x1^+) ∈ U1 × X1 subject to
E1 x1^+ = A1 x1 + B1 u1
there exists (u2, x2^+) ∈ U2 × X2 subject to
E2 x2^+ = A2 x2 + B2 u2
such that (x1^+, x2^+) ∈ R, and
(2) we have C1 x1 = C2 x2.
We say that Σ1 is simulated by Σ2, denoted by Σ1 ⪯ Σ2, if
there exists a simulation relation R from Σ1 to Σ2 and if in
addition ∀x10 ∈ X10 , ∃x20 ∈ X20 such that (x10 , x20 ) ∈ R.
We call R ⊆ X1 × X2 a bisimulation relation between Σ1
and Σ2 , if R is a simulation relation from Σ1 to Σ2 and
its inverse R−1 ⊆ X2 × X1 is a simulation relation from
Σ2 to Σ1 . We say that Σ1 and Σ2 are bisimilar, denoted
by Σ1 ≅ Σ2, if Σ1 ⪯ Σ2 w.r.t. R and Σ2 ⪯ Σ1 w.r.t. R^{−1}.
Simulation relations as defined above are transitive. Let
R12 and R23 be simulation relations respectively, from Σ1
to Σ2 and from Σ2 to Σ3 . Then a simulation relation from
Σ1 to Σ3 is given as a composition of R12 and R23 , namely
R12 ◦R23 = {(x1 , x3 ) | ∃x2 : (x1 , x2 ) ∈ R12 ∧(x2 , x3 ) ∈ R23 }.
We also have that Σ1 ⪯ Σ2 and Σ2 ⪯ Σ3 implies Σ1 ⪯ Σ3 and, in addition, Σ1 ≅ Σ2 and Σ2 ≅ Σ3 implies Σ1 ≅ Σ3.
Simulation relations have also implications on the properties of the output behaviours of the two systems. More
precisely, if a system is simulated by another system then
this implies output behaviour inclusion. This follows from
Proposition 4.9 in (Tabuada, 2009) and is formalised next.
Proposition 6. Let Σ1 and Σ2 be two DS with simulation
relations as defined in Definition 5. Then,
Σ1 ⪯ Σ2 =⇒ ΠY(BΣ1) ⊆ ΠY(BΣ2),
Σ1 ≅ Σ2 =⇒ ΠY(BΣ1) = ΠY(BΣ2).
Simulation relations can also be used for the controller
design for deterministic systems such as nonsingular DS
(Tabuada, 2009; Fainekos et al., 2007; Girard and Pappas,
2009). This will be used in the next subsection, where we
consider the exact control refinement for non-singular DS.
After that, we introduce a transformation of a singular
DS to an auxiliary nonsingular DS representation, referred
to as a driving variable (DV) system. The exact control
refinement problem is then solved based on the introduced
notions.
3.2 Control refinement for non-singular DS
Let us consider the simple case where the concrete and
abstract systems of interest are given with non-singular
dynamics. For these systems, the existence of a simulation
relation also implies the existence of an interface function
(Girard and Pappas, 2009), which is formulated as follows.
Definition 7. (Interface). Let Σ1 and Σ2 be two nonsingular DS defined over the same output space Y with a
simulation relation R from Σ1 to Σ2 . A mapping F : U1 ×
X1 × X2 → U2 is an interface related to R, if ∀(x1 , x2 ) ∈ R
and for all u1 ∈ U1 , u2 := F (u1 , x1 , x2 ) ∈ U2 is such that
(x1^+, x2^+) ∈ R with
x1^+ = A1 x1 + B1 u1 and x2^+ = A2 x2 + B2 u2.
It follows from Definition 5 that there exists at least one
interface related to R if two deterministic, or non-singular
systems are in a simulation relation. As such we can solve
the exact refinement problem as follows.
Theorem 8. Let Σ1 and Σ2 be two non-singular DS defined over the same output space Y with dynamics
(I, A1 , B1 , C1 ) and (I, A2 , B2 , C2 ), which are initialised
with X10 and X20 , respectively. If there exists a relation
R ⊆ X1 × X2 such that
(1) R is a simulation relation from Σ1 to Σ2 , and
(2) ∀x20 ∈ X20 , ∃x10 ∈ X10 s.t. (x10 , x20 ) ∈ R,
then for any controller Σc1 ∈ C(Σ1 ), there exists a
controller Σc2 ∈ C(Σ2 ) that is an exact control refinement
for Σc1 and thus achieves
ΠY (BΣ2 ×Σc2 ) ⊆ ΠY (BΣ1 ×Σc1 ).
Proof. Since R is a simulation relation from Σ1 to Σ2 ,
there exists an interface function F : U1 × X1 × X2 → U2
as given in Definition 7, cf (Tabuada, 2009; Girard and
Pappas, 2009). Additionally, due to (2) there exists a map,
F0 : X20 → X10 such that for all x20 ∈ X20 it holds that
(F0 (x20 ), x20 ) ∈ R.
Next, we construct the controller Σc2 that achieves exact
control refinement for Σc1 as
Σc2 := (Σ1 ×w1 Σc1 ) ×w1 ΣF ,
where w1 = (u1 , x1 ) and where ΣF := (N, W, BF ) is a
dynamical system taking values in the combined signal
space with
BF := {(x1 , u1 , x2 , u2 ) ∈ W|x10 = F0 (x20 ) and
u2 = F (x1 , u1 , x2 )}.
The dynamical system Σc2 is a well-posed controller for
Σ2 with Σ2 ×w2 Σc2 sharing w2 = (u2 , x2 ). Denote with
BΣ2 ×Σc2 the behaviour of the controlled system, then due
to the construction of ΣF it follows that BΣ2 ×Σc2 is nonempty and ∀x20 ∈ X20 , ∃x10 ∈ X10 such that (x10 , x20 ) has
a unique continuation in BΣ2 ×Σc2 . Furthermore it holds
that ΠY (BΣ2 ×Σc2 ) ⊆ ΠY (BΣ1 ×Σc1 ). ✷
The design of the controller Σc2 that achieves exact control
refinement for Σc1 is similar to that in (Tabuada, 2009),
which also holds in the behavioural framework.
3.3 Driving variable systems
Since it is difficult to control and analyse a DS directly, we
develop a transformation to a system representation that
is in non-singular DS form and is driven by an auxiliary
input. We refer to this non-singular DS as the driving
variable (DV) system (Weiland, 1991). We investigate
whether the DS and the obtained DV system are bisimilar
and behaviourally equivalent. Let us first introduce with
a simple example the apparent non-determinism or anti-causality in the DS. Later on, we show the connections
between a DS and its related DV system.
Example 2. Consider the DS with dynamics (E, A, B, C)
defined as
E = [1 0 0; 0 0 1; 0 0 0],  A = [−1 0 0; 0 1 0; 0 0 1],  B = [1; 1; 1],  C = [0 0.2 0.5],    (10)
and x(t) = [x1(t) x2(t) x3(t)]^T. In this case, the input
u(t) = −x3 (t) is constrained by the third state component.
Now the state trajectories of (10) can be found as follows:
• for a given input sequence u : N → U, we have
x2 (t) = −u(t)−u(t+1), and thus we can use this anticausal relation of the DS to find the corresponding
state trajectories;
• alternatively, we can allow the next state x2 (t + 1)
to be freely chosen, and for arbitrary state x2 (t),
the equations (10) impose constraints on the input
sequence that is, therefore, no longer free as u(t) =
−x3 (t).
We embrace the latter, non-deterministic interpretation of
the DS.
This non-determinism can be characterised by introducing
an auxiliary driving input of a so-called DV system. We
reorganise the state evolution of (1). For simplicity we omit
the time index in x(t) and u(t) and denote x(t + 1) as x+
+
x
= Ax,
(11)
M
u
where M = [E −B]. For any x, we notice that the pairs
(u, x+ ) are non-unique due to the non-determinism related
to x+ . If M has full row rank, then it has a right inverse.
This always holds when the DS is reachable (cf. Definition
2-1.1 (Dai, 1989)). In that case we can characterise the
non-determinism as follows. Let M + be a right inverse of
M such that MM^+ = I and N be a matrix such that im N = ker M and N^T N = I. Then all pairs (u, x^+) that
are compatible with state x in (11) are parametrised as
[x^+; u] = M^+ Ax + N s,    (12)
where s is a free variable. We now claim that all transitions
(x, u, x+ ) in (12) for some variable s satisfy (11). To see
this, multiply M on both sides of (12) to regain (11). Now
assume that there exists a tuple (x, u, x+ ) satisfying (11)
that does not satisfy (12). Then there exists an s and a
vector z ≠ 0 that is not an element of the kernel of M and
such that the right side of (12) becomes M + Ax + N s +
z. Multiplying again with M , we infer that there is an
additional non-zero term M z and that (11) cannot hold.
In conclusion any transition of (11) is also a transition of
(12) and vice versa.
Example 3. [Example 2: cont’d] For the DS of Example 2,
the related DV system ΣDV is developed as
x(t + 1) = [−1 0 −1; 0 0 0; 0 1 −1] x(t) + [0; −1; 0] s(t)
u(t) = [0 0 −1] x(t)    (13)
y(t) = [0 0.2 0.5] x(t).
As indicated by (13), the input u(t) is a function of
the state trajectory. The non-determinism of x2 (t + 1) is
characterised by −s(t) for which the auxiliary input s can
be freely selected.
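For concreteness, the matrices in (13) follow from (12) by a direct computation; the particular right inverse below is our choice, and any other choice yields an equivalent parametrisation. Here M = [E −B] = [1 0 0 −1; 0 0 1 −1; 0 0 0 −1], whose kernel is spanned by the unit vector N = (0, −1, 0, 0)^T, and
M^+ = [1 0 −1; 0 0 0; 0 1 −1; 0 0 −1]
satisfies MM^+ = I_3. Then M^+A = [−1 0 −1; 0 0 0; 0 1 −1; 0 0 −1]: its first three rows give the state matrix in (13), its last row gives the input map u(t) = [0 0 −1]x(t), and N yields the coefficients (0, −1, 0)^T of s(t) in the state equation and 0 in the input equation, which are exactly the entries in (13).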
Let us now formalise the notion of a driving variable
representation. We associate a driving variable representation with any given DS (1) by defining a tuple
(Ad , Bd , Cu , Du , C) and setting
[Ad; Cu] = M^+ A,   [Bd; Du] = N,    (14)
where N ∈ R^{(n+p)×p} has orthonormal columns, that is N^T N = I. For any given DS, this tuple defines the driving
variable system ΣDV = (N, W, BΣDV ), which maintains
the same set of initial states X0 and has dynamics
x(t + 1) = Ad x(t) + Bd s(t)
u(t) = Cu x(t) + Du s(t)
(15)
y(t) = Cx(t),
thereby yielding the initialised behaviour
BΣDV := {w ∈ W^N | w = (u, x, y), ∃s ∈ S^N s.t. (15) and x0 ∈ X0}.
Next, we propose the following assumption for DS, which
will be used in the sequel to develop our main results.
Assumption 1. The given DS Σ is a dynamical system
with dynamics (E, A, B, C) such that M = [E −B] has
full row rank.
The relationship between a DS and its related DV system
is characterised as follows.
Theorem 9. Let the DS Σ be given as in (1) satisfying
Assumption 1 and let ΣDV = (N, W, BΣDV ) be defined as
in (15). Then
(1) Σ and ΣDV are bisimilar, that is, Σ ≅ ΣDV,
(2) Σ and ΣDV have equal behaviour, i.e., BΣDV = BΣ ,
(3) Σ and ΣDV have equal output behaviour, that is,
ΠY (BΣ ) = ΠY (BΣDV ).
Proof. For the first statement (1), we define the diagonal
relation as I := {(x, x) | x ∈ X}. Then I is a bisimulation
relation between Σ and ΣDV , because by construction their
state evolutions can be matched, hence stay in I; and they
share the same output map. In addition, since they have
the same set of initial states, it follows that Σ ≅ ΣDV.
The second part (2) follows immediately from the derivation of ΣDV , because by construction all the transitions
in Σ can be matched by those of ΣDV and vice versa, in
addition, they have the same output map. Hence, they
share the same signal space (U × X × Y) and we can
conclude that Σ and ΣDV have equal behaviour.
Additionally, we have that (2) implies (3); via Proposition
6 also (1) implies (3). ✷
3.4 Main result: exact control refinement for DS
Based on the results developed in the previous subsections,
we now derive the solution to the exact control refinement
problem in Problem 1. More precisely, subject to the
assumption that there exists a simulation relation R from
Σa to Σ, for which in addition holds that ∀x0 ∈ X0 , ∃xa0 ∈
Xa0 s.t. (xa0 , x0 ) ∈ R, we show that for any Σca ∈ C(Σa ),
there exists a controller Σc for Σ that refines Σca such that
Σc ∈ C(Σ) and ΠY (BΣ×Σc ) ⊆ ΠY (BΣa ×Σca ).
In the case of Assumption 1, we construct DV systems
ΣDV and ΣDVa for the respective DS systems Σ and Σa
as a first step. For these systems, we develop the following
results on exact control refinement:
i) The exact control refinement for the DV systems:
∀ΣcDVa ∈ C(ΣDVa ), ∃ΣcDV ∈ C(ΣDV ), s.t.
ΠY(BΣDV×ΣcDV) ⊆ ΠY(BΣDVa×ΣcDVa);
ii) The exact control refinement from Σa to ΣDVa :
∀Σca ∈ C(Σa ), ∃ΣcDVa ∈ C(ΣDVa ), s.t.
ΠY(BΣa×Σca) = ΠY(BΣDVa×ΣcDVa);
iii) The exact control refinement from ΣDV to Σ:
∀ΣcDV ∈ C(ΣDV ), ∃Σc ∈ C(Σ), s.t.
ΠY(BΣDV×ΣcDV) = ΠY(BΣ×Σc).
It will be shown that the combination of the elements
i)–iii) also implies the construction of the exact control
refinement for the concrete and abstract DS.
i) Exact control refinement for the DV systems. From
Theorem 9, we know that Σ ≅ ΣDV and Σa ≅ ΣDVa
with respective diagonal relations I := {(x, x)|x ∈ X} and
Ia := {(xa , xa )|xa ∈ Xa }. Hence as depicted in Fig. 2 and
based on the transitivity of simulation relations, we also
derive that R is a simulation relation from ΣDVa to ΣDV .
[Diagram: Σ ≅ ΣDV w.r.t. I; Σa ≅ ΣDVa w.r.t. Ia; R is a simulation relation from Σa to Σ with ∀x0 ∃xa0 such that (xa0, x0) ∈ R; the induced relation from ΣDVa to ΣDV is Ia ◦ R ◦ I.]
Fig. 2. Connection between DS and DV systems for the
exact control refinement.
Since the DV systems ΣDV and ΣDVa share the same
initial states as the respective DS Σ and Σa , it also holds
that ∀x0 ∈ X0 , ∃xa0 ∈ Xa0 s.t. (xa0 , x0 ) ∈ R. According
to Theorem 8, we know that we can do exact control
refinement, that is, we have shown
∀ΣcDVa ∈ C(ΣDVa ), ∃ΣcDV ∈ C(ΣDV ), s.t.
ΠY(BΣDV×ΣcDV) ⊆ ΠY(BΣDVa×ΣcDVa).
ii) Exact control refinement from Σa to ΣDVa . Denote
with ΣDVa the abstract DV system related to Σa , with dynamics (Ada , Bda , Cua , Dua , Ca ) and initialised with Xa0 .
We first derive the static function Sa mapping transitions
of Σa to the auxiliary input sa of ΣDVa . From the definition
of DV systems, we can also derive the transitions of ΣDVa
indexed with a, which is similar to the derivation of (12).
[xa^+; ua] = Ma^+ Aa xa + Na sa.    (16)
Multiplying Na^T on both sides of (16), Sa is derived as
Sa : sa = Sa(xa^+, ua, xa) = Na^T [xa^+; ua] − Na^T Ma^+ Aa xa.    (17)
Sa maps the state evolutions of Σa ×wa Σca to the auxiliary
input sa for ΣDVa , where wa = (ua , xa ). Now, we consider
the exact control refinement from the abstract DS to the
abstract DV system.
Theorem 10. Let Σa be the abstract DS with dynamics
(Ea , Aa , Ba , Ca ) satisfying the condition of Assumption 1
and let ΣDVa be its related DV system with dynamics
(Ada , Bda , Cua , Dua , Ca ) such that both systems are initialised with Xa0 . Then, for any Σca ∈ C(Σa ), there exists
a controller ΣcDVa ∈ C(ΣDVa ) that is an exact control
refinement for Σca as defined in Definition 4 with
ΠY(BΣa×Σca) = ΠY(BΣDVa×ΣcDVa).
Proof. Denote with xa and xda the state variables of Σa
and ΣDVa , respectively. Next, we construct the controller
ΣcDVa that achieves exact control refinement for Σca as
ΣcDVa := (Σa ×wa Σca ) ×wa ΣSa ,
where wa = (ua , xa ) and where ΣSa := (N, W, BSa ) is a
dynamical system with
BSa := {(xa , ua , xda , sa ) ∈ W|xa0 = xda0 and
sa = Sa(xa^+, ua, xa)}.
The dynamical system ΣcDVa is a well-posed controller for
ΣDVa with ΣDVa ×wad ΣcDVa sharing wad = (sa, xda). Denote with BΣDVa×ΣcDVa the behaviour of the controlled system.
By construction, we know that the set of the behaviour
is non-empty and there is a unique continuation for any
xda0 ∈ Xa0 . Further based on the construction of ΣSa , the
behaviour is such that xda (t) = xa (t), ∀t ∈ N. Additionally,
since Σa and ΣDVa share the same set of initial states Xa0, it holds that ΠY(BΣa×Σca) = ΠY(BΣDVa×ΣcDVa). ✷
The proof is actually constructive in the design of the
controller ΣcDVa that achieves exact control refinement for Σca.
iii) Exact control refinement from ΣDV to Σ. Now, we
consider the exact control refinement from ΣDV to Σ.
Suppose we are given a well-posed controller ΣcDV for ΣDV ,
which shares the free variable s and the state variable x
with ΣDV . We want to design a well-posed controller for
Σ over w = (u, x), for which we consider the dynamical
system ΣC = (N, W, B) over the signal space W = U ×
X × S, the behaviour of which can be defined by
Bd^T x(t + 1) = Bd^T Ad x(t) + Bd^T Bd s(t),
u(t) = Cu x(t) + Du s(t).    (18)
Then the dynamics of the interconnected system Σ ×w ΣC as a function of x and s is derived as
[E; Bd^T] x(t + 1) = [A + BCu; Bd^T Ad] x(t) + [BDu; Bd^T Bd] s(t).    (19)
Note that A + BCu = EAd and BDu = EBd by
multiplying M = [E −B] on the left-hand side of the two
equations in (14). Therefore, (19) is simplified to
[E; Bd^T] x(t + 1) = [E; Bd^T] Ad x(t) + [E; Bd^T] Bd s(t).    (20)
Furthermore, [E; Bd^T] has full column rank because the matrix [M; N^T] is square and has full rank. Hence [E; Bd^T] has a left inverse and the dynamics of Σ ×w ΣC in (20) can be simplified as
x(t + 1) = Ad x(t) + Bd s(t),
which is exactly the same as the state evolutions of ΣDV
as shown in (15). Next we construct Σc := ΣC ×wd ΣcDV
with wd = (s, xd ) and it is a well-posed controller for Σ.
This allows us to state the following theorem regarding the
control refinement from ΣDV to Σ.
Theorem 11. Let Σ be the concrete DS with dynamics
(E, A, B, C) satisfying Assumption 1 and let ΣDV be its
related DV system with dynamics (Ad , Bd , Cu , Du , C) such
that both systems are initialised with X0 . Then, for any
ΣcDV ∈ C(ΣDV ), there exists a controller Σc ∈ C(Σ)
that is an exact control refinement for ΣcDV as defined in
Definition 4 with
ΠY(BΣDV×ΣcDV) = ΠY(BΣ×Σc).
Proof. Denote with x and xd the state variables of the
Σ and ΣDV , respectively. Next, we construct the controller
Σc that achieves exact control refinement for ΣcDV as
Σc := ΣC ×wd ΣcDV ,
where wd = (s, xd ) and the dynamics of ΣC is defined as
(18). Then, we can show that the dynamical system Σc is a
well-posed controller for Σ. Based on the analysis of (20),
it is shown that Σ ×w ΣC = ΣDV with w = (u, x), then we
can derive Σ ×w Σc = ΣDV ×wd ΣcDV. Therefore, we can conclude that Σc ∈ C(Σ) with ΠY(BΣDV×ΣcDV) = ΠY(BΣ×Σc), which immediately follows from ΣcDV ∈ C(ΣDV). ✷
Exact control refinement for descriptor systems. We can
now argue that there exists exact control refinement from
Σa to Σ, as stated in the following result.
Theorem 12. Consider two DS Σa (abstract, initialised
with Xa0 ) and Σ (concrete, initialised with X0 ) satisfying
Assumption 1 and let R be a simulation relation from Σa
to Σ, for which in addition holds that ∀x0 ∈ X0 , ∃xa0 ∈
Xa0 s.t. (xa0 , x0 ) ∈ R. Then, for any Σca ∈ C(Σa ), there
exists a controller Σc ∈ C(Σ) such that
ΠY(BΣ×Σc) ⊆ ΠY(BΣa×Σca).
Proof. Based on Assumption 1, we first construct ΣDV
and ΣDVa . Then to prove this we need to construct the
exact control refinement. This can be done based on
the subsequent control refinements given in Theorem 10,
Theorem 8 and Theorem 11. ✷
Theorem 12 claims the existence of such controller Σc that
achieves exact control refinement for Σca . More precisely,
we have shown in the proof that the refined controller Σc
is constructive, which provides the solution to Problem 1.
To elucidate how such an exact control refinement is
constructed, we consider the following example.
Example 4. [Example 2,3: cont’d] Consider the DS of
Example 2 and its related DV system (cf. Example 3)
such that both systems are initialised with X0 = {x0 |
x0 ∈ [−1, 1]3 ⊂ R3 }. According to Silverman-Ho algorithm (Dai, 1989), we can select an abstract DS Σa =
(Ea , Aa , Ba , Ca ) that is the minimal realisation of Σ and
is initialised with Xa0 = R2 , in addition
Ea = [0 0; 1 0],  Aa = [1 0; 0 1],  Ba = [1; 0],  Ca = [0.7 0.2].
Similarly, the related DV system ΣDVa of Σa is given as
xa(t + 1) = [0 1; 0 0] xa(t) + [0; −1] sa(t)
ua(t) = [−1 0] xa(t)    (21)
ya(t) = [0.7 0.2] xa(t).
Subsequently,
R := {(xa , x) | xa = Hx, xa ∈ Xa , x ∈ X}
is a simulation relation from Σa to Σ with
H = [0 0 1; 0 1 −1].
This can be proved by verifying the two properties of Definition 5; for instance, the output condition holds since Ca H = [0.7 0.2][0 0 1; 0 1 −1] = [0 0.2 0.5] = C. In addition, the condition ∀x0 ∈ X0, ∃xa0 ∈
Xa0 s.t. (xa0 , x0 ) ∈ R holds. According to Theorem 12,
we can refine any Σca ∈ C(Σa ) to attain a well-posed
controller Σc for Σ that solves Problem 1 as follows: Define
Σca ∈ C(Σa ) with dynamics as
[ 1 1 ]xa (t + 1) = [ 0.5 0.5 ]xa (t) + ua (t).
The controlled system Σa ×wa Σca is derived as
xa(t + 1) = [0 1; −0.5 −0.5] xa(t)
ya(t) = [0.7 0.2] xa(t),
with wa = (ua , xa ) and ua (t) = [ −1 0 ]xa (t). Then Σa ×wa
Σca is stable. According to Theorem 10, we derive the map
Sa for ΣDVa as sa (t) = [ 0 −1 ]xa (t + 1) = [ 0.5 0.5 ]xa (t).
Next, the related interface from ΣDVa to ΣDV is developed
as s(t) = sa (t) − [ 0 1 −1 ]x(t). According to Theorem 11,
we derive the well-posed controller Σc as
[ 0 −1 0 ]x(t + 1) = [ 0 −1 1 ]x(t) + [ 0.5 0.5 ]xa (t)
u(t) = [ 0 0 −1 ]x(t),
and the interconnected system Σ ×w Σc with w = (u, x), is
derived as
x(t + 1) = [−1 0 −1; 0 1 −1; 0 1 −1] x(t) + [0 0; −0.5 −0.5; 0 0] xa(t)
y(t) = [0 0.2 0.5] x(t).
Since (xa , x) ∈ R, that is xa = Hx, Σ ×w Σc can be
simplified by replacing xa (t):
x(t + 1) = [−1 0 −1; 0 0.5 −1; 0 1 −1] x(t)
y(t) = [0 0.2 0.5] x(t).
Finally, Σc ∈ C(Σ) and ΠY(BΣ×Σc) ⊆ ΠY(BΣa×Σca) are achieved.
4. CONCLUSION
In this paper, we have developed a control refinement
procedure for discrete-time descriptor systems that is
largely based on the behavioural theory of dynamical
systems and the theory of simulation relations among
dynamical systems. Our main results provide complete
solutions of the control refinement problem for this class
of discrete-time systems.
The exact control refinement that has been developed
in this work also opens the possibilities for approximate
control refinement notions, to be coupled with approximate similarity relations: these promise to leverage general
model reduction techniques and to provide more freedom
for the analysis and control of descriptor systems.
Future research includes a comparison of the control
refinement approach for descriptor systems to results in
perturbation theory, as well as control refinement for
nonlinear descriptor systems.
REFERENCES
Abadir, K.M. and Magnus, J.R. (2005). Matrix algebra.
Cambridge University Press.
Antoulas, A.C. (2005). Approximation of large-scale dynamical systems. SIAM.
Cao, X., Saltik, M., and Weiland, S. (2015). Hankel model
reduction for descriptor systems. In 2015 54th IEEE
CDC, 4668–4673.
Dai, L. (1989). Singular control systems. Springer-Verlag
New York, Inc.
Fainekos, G.E., Girard, A., and Pappas, G.J. (2007). Hierarchical synthesis of hybrid controllers from temporal logic specifications. In International Workshop on
HSCC, 203–216.
Girard, A. and Pappas, G.J. (2009). Hierarchical control
system design using approximate simulation. Automatica, 45(2), 566–571.
Girard, A. and Pappas, G.J. (2011). Approximate bisimulation: A bridge between computer science and control
theory. European Journal of Control, 17(5), 568–578.
Kloetzer, M. and Belta, C. (2008). A fully automated
framework for control of linear systems from temporal
logic specifications. IEEE Transactions on Automatic
Control, 53(1), 287–297.
Kunkel, P. and Mehrmann, V.L. (2006). Differential-algebraic equations: analysis and numerical solution.
European Mathematical Society.
Megawati, N.Y. and Van der Schaft, A. (2015). Bisimulation equivalence of DAE systems. arXiv:1512.04689.
Tabuada, P. (2009). Verification and control of hybrid
systems: a symbolic approach. Springer Science &
Business Media.
Van der Schaft, A. (2004). Equivalence of dynamical systems by bisimulation. IEEE transactions on automatic
control, 49(12), 2160–2172.
Weiland, S. (1991). Theory of approximation and disturbance attenuation for linear systems. University of
Groningen.
Willems, J.C. and Polderman, J.W. (2013). Introduction
to mathematical systems theory: a behavioral approach,
volume 26. Springer Science & Business Media.
Characterizing traits of coordination
Raphael ‘kena’ Poss
University of Amsterdam, The Netherlands
March 19, 2018
Abstract
How can one recognize coordination languages and technologies? As
this report shows, the common approach that contrasts coordination with
computation is intellectually unsound: depending on the selected understanding of the word “computation”, it either captures too many or too few
programming languages. Instead, we argue for objective criteria that can
be used to evaluate how well programming technologies offer coordination
services. Of the various criteria commonly used in this community, we are
able to isolate three that are strongly characterizing: black-box componentization, which we had identified previously, but also interface extensibility and customizability of run-time optimization goals. These criteria
are well matched by Intel’s Concurrent Collections and AstraKahn, and
also by OpenCL, POSIX and VMWare ESX.
Contents
1 Introduction
2 Qualifying criteria
3 Criteria evaluation
4 Problems of “coordination vs. computation”
5 Conclusion
Acknowledgements
References

1 Introduction
The author of this report studies a research community whose specialization is
the management of software components in multi-component applications. The
members of this community have agreed on a common linguistic referent for
their activities in this field: the word “coordination”.
The main output from this community is a combination of programming
languages and operating software aimed at optimizing the run-time execution
of applications built by hierarchical composition of components. Example technologies whose authors self-identify as “working on coordination” include S-NET [16, 4], AstraKahn [17] and Intel’s Concurrent Collections (CnC) [10, 2].
A recurring theme in the discussions within this community and with external observers is whether and how much coordination differs from other forms
of programming. This topic is usually introduced with either of two questions:
“what is coordination exactly?” and “what distinguishes research on coordination from other research on programming language design and implementation?”
As it happens, different answers are used in these conversations depending
on who is asking, who is answering and the topic at hand. This author has
observed a consensus in the community that these answers are all accepted by
the researchers as valid descriptions of their line of work.
Of these explanations, we can recognize four groups:
• self-referential explanations: a research activity is considered related to
“coordination” if it self-identifies as such. For example, “this language
is a coordination language because its designers call it a coordination
language”;
• negative space explanations: an existing field of study is selected ad hoc,
then a research activity is considered related to “coordination” if it self-identifies as “not related to” the selected research. For example, “this
language is a coordination language because its designers do not focus on
software modeling” (or functional programming, or model checking, etc.);
• void explanations: a word is selected ad hoc with no well-defined meaning, then a research activity is considered related to “coordination” if it
self-identifies as “not related to” the selected word. For example, “this
language is a coordination language because its designers do not intend
it to be a computation language” without a clear definition for the word
“computation” (cf. section 4);
• explanations by qualification: some well-defined, objective, observer-independent criteria on programming languages and operating software are
identified, then “coordination” is defined based on the criteria. For example, “this language is a coordination language because it offers facilities to
assemble applications from black-box components”, together with a careful
definition of “black-box component”, constitutes a qualified explanation.
The self-referential, negative space and void explanations are, by construction, factually vacuous: a newcomer audience exposed to them will not learn
anything about what the researcher using the explanation actually does in their
work. At best, the audience may understand that the researcher needs a keyword to motivate specialized attention and funding, but not more. These forms
of explanations are thus not further considered here.
Instead, this report reviews the criteria where consensus exists in the community that they can be used to recognize coordination objectively.
The discussion is presented in two parts. In section 2, we identify and detail
the objective criteria that have been previously named and used by members
of the research community. We then examine in section 3 how well this community’s technology matches their self-selected criteria, and also how well other
technologies match the same criteria. We then use this analysis to isolate which
criteria most strongly characterize the work of these researchers. Separately, in
section 4 we examine the arguments that oppose “coordination” to “computation”, and we analyze how much objective understanding can be extracted from
them. We then conclude in section 5.
2
Qualifying criteria
We reuse below the definition of “component” and component-based design from
[1, 15]: components are defined by their interface, which specifies how they can
be used in applications, and one or more implementations which define their
actual behavior. The two general principles of component-based design are then
phrased as follows. The first is interface-based integration: when a designer uses
a component for an application, he agrees to only assume what is guaranteed
from the interface, so that another implementation can be substituted if needed
without changing the rest of the application. The second is reusability: once
a component is implemented, a designer can reuse the component in multiple
applications without changing the component itself.
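To make these two principles concrete, consider the following minimal
sketch in Python (the names Sorter, QuickSorter and MergeSorter are
invented for illustration and do not come from the cited works):

    from typing import List, Protocol

    class Sorter(Protocol):
        # The interface: all an application may assume about the component.
        def sort(self, items: List[int]) -> List[int]: ...

    class QuickSorter:
        def sort(self, items: List[int]) -> List[int]:
            return sorted(items)  # stand-in implementation

    class MergeSorter:
        def sort(self, items: List[int]) -> List[int]:
            if len(items) <= 1:
                return list(items)
            mid = len(items) // 2
            left, right = self.sort(items[:mid]), self.sort(items[mid:])
            out: List[int] = []
            while left and right:
                out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return out + left + right

    def application(sorter: Sorter) -> List[int]:
        # Interface-based integration: any Sorter can be substituted here;
        # reusability: each Sorter can serve other applications unchanged.
        return sorter.sort([3, 1, 2])

    assert application(QuickSorter()) == application(MergeSorter()) == [1, 2, 3]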
Based on this definition, the word “coordination” is only used in the context
of languages and infrastructures that enable component-based design1 .
Separable provisioning (Sp): the language and its infrastructure enable
the reuse of components provided by physically separate programmers, and
where the considered technology is the only communication means between
these providers. For example, a technology that offers the ability to build a
specification from different files matches this criterion.
Interface extensibility (Ie): the infrastructure enables an application designer to redefine and extend component interfaces independently from component providers, and extended interfaces can influence execution. For example, a
technology that offers the ability to annotate a component to indicate post hoc
that it is functionally pure (without state and deterministic), e.g. via a pragma
or metadata, and which can exploit this annotation to increase execution parallelism, matches this criterion.
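A minimal sketch of what such post hoc annotation could look like in Python
(the mark_pure helper and the memoising run function are invented for
illustration; a real infrastructure could equally use the annotation to
parallelise calls):

    PURE = set()

    def mark_pure(component):
        # Post hoc annotation by the application designer: the component
        # provider is not involved and the component code is unchanged.
        PURE.add(component)
        return component

    def third_party_square(x):       # a black-box component from a provider
        return x * x

    mark_pure(third_party_square)    # extended interface: "this is pure"

    _cache = {}

    def run(component, arg):
        # The infrastructure exploits the annotation: calls to components
        # declared pure may be memoised, reordered or deduplicated.
        if component in PURE:
            if (component, arg) not in _cache:
                _cache[(component, arg)] = component(arg)
            return _cache[(component, arg)]
        return component(arg)

    assert run(third_party_square, 4) == run(third_party_square, 4) == 16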
Separable scheduling (Ss): the programmer can delegate to the technology the responsibility of choosing when (time) and where (space/resources) to
execute concurrent component activities. A different but equivalent phrasing
for the same criterion is the ability given to a programmer to define a partial
scheduling order between component activities, and ability given to the technology to decide arbitrary actual schedules as long as the partial order is respected.
1 arguably, most contemporary programming technologies already enable component-based
design; however, we explicitly state the requirement to clearly scope the discussion.
For example, a language where programmers can declare a data-parallel operation and where the infrastructure decides how to schedule the operation matches
this criterion.
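For instance, with Python's standard concurrent.futures module the
programmer only declares a data-parallel map and delegates the time and
resource placement of each call to the executor (a sketch; the work
function is illustrative):

    from concurrent.futures import ProcessPoolExecutor

    def work(x):
        return x * x

    if __name__ == "__main__":
        # The programmer fixes only a partial order (all calls complete
        # before the results are consumed); the runtime decides when and
        # on which worker each call actually executes.
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(work, range(10))))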
Adaptable optimization (Ao): the technology provides run-time optimization mechanisms that can adapt to different execution environments without
changing the application specification. For example, a technology which can
decide different placements when faced with different amounts of parallelism in
hardware matches this criterion, and so does a technology able to decide different schedules over time when faced with different constraints on data locality
(e.g. cache sizes).
Customizable optimization goals (Cog): the application designer can
specify different optimization goals at run-time (or no earlier than when the
specification work has completed) and the technology chooses different execution strategies based on them. For example, a technology which enables to
select between “run fast” and “use less memory” during execution matches this
criterion.
Black-box componentization (Bb): the application designer can specify
an application using components only known to the technology by name and
interface, and the technology provides a run-time interfacing mechanism previously agreed upon with component providers to integrate the components. For
example, a technology which can link component codes compiled from different
programming languages without requiring link-time cross-optimization matches
this criterion. This is the main criterion proposed in [15].
Exploitable Turing-incompleteness (Eti): the specification language is
not Turing complete but can still be used to define interesting / useful applications. For example, a technology whose advertised specification language can
only define static acyclic data flow graphs of components matches this criterion.
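As an illustration of such an exploitable, Turing-incomplete specification
language, consider the following sketch in Python (all names invented): an
application is plain data describing a static acyclic graph of named
components, and a tiny evaluator runs it in dependency order.

    # The specification is pure data: no loops, recursion or conditionals,
    # hence not Turing-complete, yet sufficient to wire components together.
    spec = {
        "double": {"component": "scale", "inputs": ["source"]},
        "report": {"component": "fmt",   "inputs": ["double"]},
    }

    components = {                  # black-box implementations, known by name
        "scale": lambda x: 2 * x,
        "fmt":   lambda x: "result=%d" % x,
    }

    def evaluate(spec, components, initial):
        values = dict(initial)
        pending = dict(spec)
        while pending:              # evaluate the DAG in topological order
            for name, node in list(pending.items()):
                if all(i in values for i in node["inputs"]):
                    args = [values[i] for i in node["inputs"]]
                    values[name] = components[node["component"]](*args)
                    del pending[name]
        return values

    print(evaluate(spec, components, {"source": 21})["report"])  # result=42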
3
Criteria evaluation
We evaluate in table 1 how well different technologies match the criteria defined above: the criteria are listed in columns, the technologies in rows, each
intersection states whether the technology matches the criterion, and a score
column at the right side sums the number of criteria matched. Arrows in the
score columns indicate the rows with highest scores.
We review both technologies that self-identify as “coordination”, including SNET and CnC named previously, and other technologies that do not identify as
such: various C and C++ implementations, Glasgow Haskell, Single-Assignment
C (SAC), the standard Unix shell in a POSIX environment and VMWare ESX.
While constructing table 1, we highlighted the following:
• granularity: each technology may offer multiple levels of component granularities, and may not match the same criteria depending on the granularity
considered. For example, the C language offers black-box componentization for entire functions but not for individual statements. To reflect this,
multiple rows with different granularities are used for each technology in
the table.

[Table 1: How various technologies match the proposed criteria. Rows list the
technologies (S-NET, S+NET, AstraKahn, CnC, C in its freestanding, OpenMP,
OpenCL, ISO11 hosted and ISO99/ISO11 POSIX variants, C++, Haskell/GHC, SAC,
the Unix shell and VMWare ESX), each at one or more component granularities
(boxes, networks, steps, functions, statements, expressions, kernels, threads,
processes, classes, packages, modules, virtual machines); columns list the
criteria Sp, Ie, Ss, Ao, Cog, Bb and Eti. Each match is marked “I” (by intent)
or “E” (emergent); two score columns give the total number of criteria matched
and the intent-only count, with arrows marking the highest-scoring rows.]
• intent: a technology may happen to match a criterion although this match
was not primarily intended by its designers. For example, a freestanding
implementation of the C language (without library) happens to be Turing
incomplete and still quite useful, although this was arguably not intended
by its designer (nor commonly known of its users). To reflect this, we
use the letters “I” (by intent) and “E” (emergent) at each intersection and
provide two score columns in the right side.
From this first evaluation table we observe the following.
First, separable provisioning (Sp) is generally prevalent. Although it is a
prerequisite to component-based design and thus coordination, its availability
in a particular technology does not predict its score in our table. Therefore, it
is a poor criterion to characterize coordination.
Similarly, separable scheduling (Ss) and adaptable optimization (Ao) are also
relatively prevalent. Although the benefits of separable scheduling and adaptable optimization wrt. performance speedups on parallel hardware are often used
to highlight the benefits of coordination, other technologies which do not self-identify as “coordination” (e.g. Haskell, OpenCL, SAC) also exhibit these features and can reap their associated benefits. These criteria may thus be phrased
as “prerequisites” to recognize coordination but they are not characterizing.
Also, the “exploitable Turing-incompleteness” (Eti) criterion is, perhaps surprisingly, difficult to match. The main reason, which we outline in section 4, is
that it is actually quite difficult to design a programming language which is not
Turing-equivalent.
Finally, the table reveals that none of the proposed criteria clearly separates
technology that self-identify as “coordinating” from those that don’t. The evaluation of whether a technology can be considered as coordination cannot yield a
boolean value and instead lies on a spectrum of “more or less able to
coordinate”.

[Table 2: How various technologies match the proposed criteria (simplified).
Rows repeat the technologies and granularities of table 1, restricted to the
three criteria Ie, Cog and Bb, again with total and intent-only score columns
and arrows marking the highest-scoring rows; cell entries are “I” (by intent)
or “E” (emergent).]
From these observations, we can select the criteria most strongly matched
by these technologies that the researchers would like to objectively describe as
“strongly coordinating.” This suggests the criteria Ie, Cog and Bb and the summary in table 2. As the table shows, AstraKahn, CnC, OpenCL, POSIX and
VMWare ESX can be considered strongly coordinating, each at their preferred
component granularity: boxes, steps, kernels, threads/processes and virtual machines, respectively.
4 Problems of “coordination vs. computation”
During the discussions around coordination, this author has observed a prevalent
use of the following arguments by the members of the community:
1. “coordination technologies can be distinguished from computation technologies”;
2. “what differentiates coordination and computation technologies
is the intent of the designer: the designers of coordination languages do not focus on computation”;
3. “there exist ‘pure’ coordination languages that cannot be used
to specify computations.”
All three arguments are motivated by a subjective, human desire of the
individuals involved: that of creating an “us-versus-them” vision of the research.
The ulterior motive is to generate specialized attention and attract dedicated
funding. In fairness to this community, we highlight here that this ulterior
motive is shared by most academic researchers regardless of their field of study.
However, despite and regardless of the motive and its subjectivity, the individuals involved claim (both implicitly and explicitly) that these three arguments can be recognized as objective by an external observer, i.e. they can stand
and be defended at face value.
What interests us here is that all three arguments require some shared understanding of what is meant by “computation.” If no shared understanding can
be found, then all three arguments are void and thus intellectually irrelevant.
Moreover, if a shared understanding can be found, then only argument #3
is objectively qualified. Even with a shared understanding of computation,
arguments #1 and #2 remain at best “negative space” arguments (cf. section 1)
and still do not inform about what coordination actually entails.
To see how much of argument #3 can be “saved” for the purpose of objective
discussions, we need to investigate two points. The first is how much shared
understanding can be gathered around the term “computation”. The second is
whether, assuming some shared understanding of what “computation” entails,
argument #3 actually holds: that languages that cannot express computation
actually exist, and can be called coordination languages.
4.1 About the notion of computation
As of this writing, there exists no formal definition of what constitutes a computation in general. What is known empirically is that for any function of
mathematics it is often possible to build a machine which can calculate the
value of this function for some input. What is known formally is that for any
given number function of mathematics it is always possible (in theory) to build
a machine that can calculate the value of this function. What is not known
however, is the set of all mathematical functions a given concrete (real-world)
machine can reproduce; and whether it is possible to build a machine for all
possible mathematical functions, not only number functions. Meanwhile, people can be observed to also build machines to perform work that is not described
formally but is still considered useful.
In this context, two approaches can be taken to define “computation”. One
can seek formalism at all costs, and restrict the shared understanding to Church
and Turing’s thesis: that the set of computations is exactly the set of all possible input-output transformations by any theoretical Turing machine. However,
this Manichean approach excludes a range of machine activities that are commonly considered to be “computations” in practice, too: transformation and
communication of real (physical) variables, non-deterministic operations over
parallel hardware with loosely synchronized time, ongoing processes without a
start event, etc.
The other way to define “computation” is to identify some useful real-world
artefacts and behaviors, call them “computation” axiomatically, then reverseengineer which languages and formal systems can be used to specify them.
There are multiple ways to do so; here are the two such definitions that seem
to gather most consensus:
• “terminating value computations”: any operation which consumes a finite
supply of static data as input, runs for a finite amount of time and produces a finite supply of static data as output. This includes but is not
limited to the observable behavior of halting Turing machines;
• “process computations”: any operation which is running within a wellformed space boundary (e.g. a specific component of a machine), running
at a measurable cost and that is controllable: where an external agent
(e.g. a person or another system) can start, stop, observe, accelerate, slow
(etc.) the operation.
Chosen definition for “computation”   Computation-less languages           Incomplete computation languages
Behavior of Turing machines           (none known)                         few languages, but includes C
Terminating value computations        (none known)                         most languages
Process computations                  some declarative languages,          most languages
                                      including Prolog, pure λ-calculus,
                                      HTML

Table 3: Languages that cannot specify computations.
The choice of approach also defines the objective substance of any discussion that capitalizes on the notion of computation. Different choices result in
different, possibly conflicting understandings. Therefore, any situation where
the word “computation” is used casually to support negative space arguments
should be reviewed with critical care; in particular, one should feel challenged
to isolate and clarify explicitly what assumptions are being made.
4.2 Languages that “cannot specify computations”
There are two interpretations for the phrase “cannot specify computations”:
either “cannot specify any computation” or “cannot specify all computations”.
The argument “There exist pure coordination languages that cannot specify
computation” thus defines two classes of languages: computation-less languages
which cannot be used to define any computation whatsoever; and incomplete
languages which can only be used to specify a limited subset of computations.
Both can only be discussed in the context of a specific, a priori chosen
understanding of the word “computation” as described in the previous section.
We collate in table 3 a condensed inventory of existing programming languages
that are either computation-less or incomplete for the various definitions of
“computation” isolated previously.
Table 3 enables three observations.
The first is that it is difficult to find concrete computation-less languages,
for any definition of “computation”. In general, it is actually difficult to design
a computation-less language: any language that is able to define a dynamic
evaluation that can react to state, regardless of how dynamic its input is, can
be tricked at a higher-level to define some computations. For example, with
S-NET one can define operations using Peano arithmetic on the depth of the
run-time expansion of a “star” combinator over a synchrocell, using only record
types to perform choices. A computation-less language should either prevent its
user from defining a dynamic evaluation, or restrict the evaluation to be state-insensitive (or both). It is debatable whether languages with such restrictions
can be called “programming” languages at all.
The second is that if we consider process computations in general and we
understand that “pure coordination languages are those languages that are
computation-less”, we would need to accept languages like Prolog, λ-calculus
or HTML as coordination languages. This does not appear compatible with the
vision of the coordination community being studied.
The third is that there are “too many” languages that are incomplete with
regards to each definition of “computation” to hold a strong us-versus-them argument. For the two informal definitions, i.e. terminating value computations
and process computations, if we understand that “pure coordination languages
are those languages that are incomplete with regard to specifying computa-
tion”, then virtually any programming language in use today is a coordination language. If we take the formal definition instead (Turing-incompleteness),
then C would also qualify as a coordination language because C is also Turing-incomplete2. Again, this does not appear compatible with the vision of this
community.
To summarize, it may not be possible to use argument #3 successfully to
motivate specialized attention to the work of this community.
5 Conclusion
We have reviewed in this report the commonly used, subjective argument that
“coordination can be contrasted to computation”. We have revealed that this
argument and all currently used related phrasings are largely intellectually unsound and we conclude they cannot be used to support specialized scientific
attention towards “coordination” as a research activity.
Instead, we have highlighted that research on “coordination” can be supported objectively using motivating arguments based on objective criteria. Of
the various candidate criteria that have been proposed so far, we have shown
that only three characterize the work of the researchers involved:
• interface extensibility: the ability to extend or replace component interfaces arbitrarily after components are provided, and define valid composite
behavior using the modified interfaces even if they conflict with the internal structure of the components;
• customizable optimization goals: the ability to specify different optimization goals after the application has been specified, e.g. during execution,
and the ability of the technology to use different execution strategies to
match the custom goals;
• black-box componentization: the ability to specify composite applications
from components only known by name and interface, and the existence
of run-time interfacing mechanisms that do not require the coordination
technology to know anything about the internal structure of components.
Of these three criteria, we had previously [15] identified the last as a clear
objective criterion to recognize coordination, and we had recognized that programming technologies are “more or less coordinating” depending on how well
they match the criterion. In the present report, we have extended this argument
to the other two criteria, and recognized several concrete coordination technologies: AstraKahn and Intel’s CnC, but also OpenCL, POSIX and VMWare ESX.
Acknowledgements
This document reports on thoughts nurtured during successive discussions with
Merijn Verstraaten, Alex Shafarenko, Sven-Bodo Scholz, Kath Knobe and Clemens
Grelck.
2 we consider here the C language without its standard library as defined in [6, 8]. This
author can provide a demonstration of C’s Turing-incompleteness upon request.
References
[1] Don Batory and Sean O’Malley. The design and implementation of hierarchical software systems with reusable components. ACM Trans. Softw. Eng.
Methodol., 1(4):355–398, 1992. ISSN 1049-331X. doi:10.1145/136586.136587.
[2] Zoran Budimlić, Michael Burke, Vincent Cavé, Kathleen Knobe, Geoff
Lowney, Ryan Newton, Jens Palsberg, David Peixotto, Vivek Sarkar,
Frank Schlimbach, and Sağnak Taşırlar. Concurrent collections. Scientific Programming, 18(3–4):203–217, 2011. ISSN 1875-919X. doi:10.3233/SPR-2011-0305.
[3] Clemens Grelck and Sven-Bodo Scholz. SAC: a functional array language
for efficient multi-threaded execution. International Journal of Parallel
Programming, 34(4):383–427, August 2006. ISSN 0885-7458 (Paper) 1573-7640 (Online). doi:10.1007/s10766-006-0018-x.
[4] Clemens Grelck, Sven-Bodo Scholz, and Alex Shafarenko. A gentle introduction to S-Net: Typed stream processing and declarative coordination
of asynchronous components. Parallel Processing Letters, 18(1):221–237,
2008.
IEEE Standards Association. IEEE Std. 1003.1-2008, Information Technology – Portable Operating System Interface (POSIX®). IEEE, 2008.
[6] International Standards Organization and International Electrotechnical
Commission. ISO/IEC 9899:1999(E), Programming Languages – C. American National Standards Institute (ANSI), 11 West 42nd Street, New York,
New York 10036, second edition, December 1999.
[7] International Standards Organization and International Electrotechnical
Commission. ISO/IEC 14882:2011, Programming languages – C++. American National Standards Institute (ANSI), 11 West 42nd Street, New
York, New York 10036, first edition, September 2011. Available from:
http://www.open-std.org/jtc1/sc22/wg21/.
[8] International Standards Organization and International Electrotechnical
Commission. ISO/IEC 9899:2011, Programming Languages – C. American
National Standards Institute (ANSI), 11 West 42nd Street, New York, New
York 10036, first edition, December 2011. Available from: http://www.
open-std.org/jtc1/sc22/wg14/.
[9] Khronos OpenCL Working Group. The OpenCL specification, version
1.0.43, 2009. Available from: http://www.khronos.org/registry/cl/
specs/opencl-1.0.43.pdf.
[10] Kathleen Knobe. Ease of use with concurrent collections (cnc). In Proc.
1st USENIX conference on Hot topics in parallelism, HotPar’09, page 17.
USENIX Association, Berkeley, CA, USA, 2009. Available from: http:
//dl.acm.org/citation.cfm?id=1855591.1855608.
[11] Simon Marlow, Simon Peyton Jones, and Satnam Singh. Runtime support
for multicore Haskell. SIGPLAN Not., 44(9):65–78, August 2009. ISSN
0362-1340. doi:10.1145/1631687.1596563.
[12] OpenMP Architecture Review Board. OpenMP application program interface, version 3.0, 2008. Available from: http://www.openmp.org/
mp-documents/spec30.pdf.
[13] Raphael Poss, Merijn Verstraaten, Frank Penczek, Clemens Grelck,
Raimund Kirner, and Alex Shafarenko.
S+Net: extending functional coordination with extra-functional semantics. Technical Report
arXiv:1306.2743v1 [], University of Amsterdam and University of
Hertfordshire, June 2013. Available from: http://arxiv.org/abs/1306.
2743.
[14] Raphael Poss, Merijn Verstraaten, and Alex Shafarenko. Towards S+Net:
compositional extra-functional specification for large systems. In Erik
D’Hollander, Gerhard Joubert, Jack Dongarra, Ian Foster, and Lucio
Grandinetti, editors, HPC: Transition Towards Exascale Processing, Advances in Parallel Computing. IOS Press, 2013. (to appear). Available
from: pub/poss.13.apc.pdf.
[15] Raphael ‘kena’ Poss. The essence of component-based design and coordination. Technical Report arXiv:1306.3375v1 [cs.SE], University of Amsterdam, June 2013. Available from: http://arxiv.org/abs/1306.3375.
[16] Alex Shafarenko. Non-deterministic coordination with S-Net. In Wolfgang Gentzsch, Lucio Grandinetti, and Gerhard Joubert, editors, High
Speed and Large Scale Scientific Computing, number 18 in Advances in
Parallel Computing. IOS Press, 2009. ISBN 978-1-60750-073-5. doi:
10.3233/978-1-60750-073-5-74.
[17] Alex Shafarenko. AstraKahn: A coordination language for streaming networks. Technical Report arXiv:1306.6029v1 [], University of Hertfordshire, June 2013. Available from: http://arxiv.org/abs/1306.6029.
Bounds on the Zero-Error List-Decoding Capacity
of the q/(q − 1) Channel
Siddharth Bhandari and Jaikumar Radhakrishnan
Tata Institute of Fundamental Research
Homi Bhabha Road
Mumbai 400005, INDIA
Email: {siddharth.bhandari, jaikumar}@tifr.res.in
Abstract—We consider the problem of determining the zero-error
list-decoding capacity of the q/(q − 1) channel studied by Elias
(1988). The q/(q − 1) channel has input and output alphabet
consisting of q symbols, say, X = {x1 , x2 , . . . , xq }; when the
channel receives an input x ∈ X , it outputs a symbol other than
x itself. Let n(m, q, ℓ) be the smallest n for which there is a code
C ⊆ X n of m elements such that for every list w1 , w2 , . . . , wℓ+1
of distinct code-words from C, there is a coordinate j ∈ [n]
that satisfies {w1[j], w2[j], . . . , wℓ+1[j]} = X. We show that for
ǫ < 1/6, for all large q and large enough m, n(m, q, ǫq ln q) ≥
Ω(exp(q^(1−6ǫ)/8) log2 m).
The lower bound obtained by Fredman and Komlós (1984) for
perfect hashing implies that n(m, q, q − 1) = exp(Ω(q)) log2 m;
similarly, the lower bound obtained by Körner (1986) for
nearly-perfect hashing implies that n(m, q, q) = exp(Ω(q)) log2 m.
These results show that the zero-error list-decoding capacity of
the q/(q − 1) channel with lists of size at most q is exponentially
small. Extending these bounds, Chakraborty et al. (2006) showed
that the capacity remains exponentially small even if the list size
is allowed to be as large as 1.58q. Our result implies that the
zero-error list-decoding capacity of the q/(q − 1) channel with list
size ǫq ln q for ǫ < 1/6 is exp(−Ω(q^(1−6ǫ))). This resolves the
conjecture raised by Chakraborty et al. (2006) about the zero-error
list-decoding capacity of the q/(q − 1) channel at larger list sizes.
I. INTRODUCTION
We study the zero-error list-decoding capacity of the q/(q − 1)
channel. The input and output alphabets of this channel are
the same set of q symbols, namely X = {x1, x2, . . . , xq}; when the
symbol x ∈ X is input, the output symbol can be anything
other than x itself. We wish to design good error correcting
codes for such a channel. For the q/(q − 1) channel it is
impossible to recover the message without error if the code
has at least two code-words: in fact, no matter how many
letters are used for encoding, for every set of up to (q − 1)
input code-words, one can construct an output word that
is compatible with all of them. It is, however, possible to
design codes where on receiving an output word from the
channel, one can narrow down the input message to a set of
size at most (q − 1)—that is, we can list-decode with lists
of size (q − 1). Such codes have rate exponentially small in q.
Definition I.1 (Code, Rate). A code C ⊆ {x1, . . . , xq}^n
is an ℓ-list-decoding code for the q/(q − 1) channel if,
for every output word σ′ ∈ X^n, we have |{σ ∈ C :
the input word σ is compatible with σ′}| ≤ ℓ. Let n(m, q, ℓ)
be the smallest n such that there exists an ℓ-list-decoding code
for the q/(q − 1) channel with m code-words. The zero-error
list-of-ℓ rate of C, |C| = m, is given by (1/n) log2(m/ℓ), and
the list-of-ℓ capacity of the q/(q − 1) channel, denoted by
cap(q, ℓ), is the least upper bound on the attainable zero-error
list-of-ℓ rate across all ℓ-list-decoding codes.
The list-of-2-capacity of the 3/2 channel was studied by
Elias [1], who showed that 0.08 ≈ log2 (3)−1.5 ≤ cap(3, 2) ≤
log2 (3) − 1 ≈ 0.58. For the 4/3 channel, Dalai, Guruswami
and Radhakrishnan [2] showed that cap(4, 3) ≤ 6/19 ≈
0.3158, improving slightly on an earlier upper bound of 0.3512
shown by Arikan [3]; it was shown by Körner and Marton [4]
that cap(4, 3) ≥ (1/3) log2 (32/29) ≈ 0.0473. For general
q, one can obtain the following upper bound using a routine
probabilistic argument.
Proposition I.1. n(m, q, q − 1) = exp(O(q)) lg m.
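A sketch of that routine argument, assuming the m × n code matrix is filled
with i.i.d. uniform symbols from X (this derivation is a reconstruction, not
quoted from the sources):

    Pr[a fixed set of q rows is not separated in a fixed column]
        = 1 − q!/q^q ≤ 1 − e^(−q),

since q! ≥ (q/e)^q. Over n independent columns, a fixed set thus fails
everywhere with probability at most (1 − e^(−q))^n ≤ exp(−n e^(−q)), so a
union bound over the at most m^q sets of q rows succeeds as soon as
n ≥ e^q · q ln m = exp(O(q)) log2 m.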
This implies that cap(q, q − 1) = exp(−O(q)). So for
each fixed q we do have codes with positive rate, but the
rate promised by this construction goes to zero exponentially with q. Fredman and Komlós [5] showed that this
exponential deterioration is inevitable; Körner showed that
cap(q, q) = exp(−Ω(q)). On the other hand, it can be
shown that cap(q, ⌈q ln q⌉) = 1/q, and that for all functions
ℓ : Z → Z we have cap(q, ℓ(q)) ≥ 1/q. Thus, the list-of-ℓ capacity of the q/(q − 1) channel cannot be better than 1/q
unless ℓ is allowed to grow with m.
We thus have the following situation. The list-of-ℓ-rate of
any code reaches the optimal value of 1/q when the list size is about q ln q; however, the list-of-(q − 1) (as well as
list-of-q) rate is exponentially small in q. It is interesting,
therefore, to study the trade-off between the list size and
the rate, and determine how the rate changes from inverse
polynomial in q to exponentially small in q. Chakraborty,
Radhakrishnan, Raghunathan and Sasatte [6] addressed this
question and showed the following.
Theorem I.2. For every ǫ > 0, there is a δ > 0 such that
for all large q and large enough m, we have n(m, q, (η −
ǫ)q) ≥ exp(δq) log2 m, where η = e/(e − 1) ≈ 1.58. Thus,
cap(q, (η − ǫ)q) = exp(−Ω(q)).
We show the following.
Theorem I.3 (Result). For every ǫ < 1/6,
for all large q and large enough m, we have
n(m, q, ǫq ln q) ≥ Ω(exp(q^(1−6ǫ)/8) log2 m). Thus, for
all ǫ < 1/6, cap(q, ǫq ln q) = exp(−Ω(q^(1−6ǫ))).
This establishes both parts of the conjecture of Chakraborty
et al. which states the following.
Conjecture I.1. (a) For all constants c > 0, there is a
constant α, such that for all large m, we have n(m, q, cq) ≥
exp (αq) log2 m.
(b) For all functions ℓ(q) = o(q log2 q) and all large m, we
have n(m, q, ℓ(q)) ≥ q^(ω(1)) log2 m.
A. Overview of our approach
We extend the approach of Chakraborty et al., which in turn
was based on the approach used by Fredman and Komlós [5]
to obtain lower bounds on the size of families of perfect
hash functions. To describe our adaptation of this approach,
it will be convenient to reformulate the problem using matrix
terminology.
Consider C ⊆ X n with m code-words. We can build an m×n
matrix C = (cij : i = 1, . . . , m and j = 1, . . . , n) (we use
the name C both for the code and the associated matrix) by
writing the code-words as rows of the matrix (the order does
not matter): so cij = k iff the j-th component of the i-th codeword is xk ∈ X . Then, C is an ℓ-list-decoding code iff the
matrix has the following property: for every choice R of ℓ + 1
rows, there is a column h such that {crh : r ∈ R} = X . In this
reformulation, n(m, q, ℓ) is the minimum n so that there exists
a matrix with this property. We refer to such a matrix as an
ℓ-list-decoding matrix. Furthermore, instead of writing crh we
write h(r); indeed, in the setting of hash families (originally
considered by Fredman and Komlós), the columns correspond
to hash functions that assign a symbol in X to each row-index
in [m].
We can now describe the approach of Chakraborty et al. Fix a
list-size ℓ = αq. Suppose there is an ℓ-list decoding matrix C
with n = exp(βq) log2 m columns. We wish to show that if
β is small then the matrix cannot have the required property;
that is, we can find a set R of ℓ + 1 rows for which h(R) is
a proper subset of [q] for every column h. To exhibit such a
set R we will proceed in stages. In the first stage, we pick
a subset R1 of q − 2 rows at random. Consider a column h.
What can we expect? We expect to see a good number of
collisions, where the same symbol appears in column h at
two different rows in R1 . In fact, we expect h(R) to contain
only about q(1 − 1/e) elements. By appealing to standard
results (e.g., McDiarmid’s inequality), we may conclude that
with probability exponentially close to 1 (that is, of the form
1 − exp(−γq)), h(R) is unlikely to have significantly more
elements. So we might settle on a choice of R, so that h(R)
deviates significantly (say by ǫq for some small ǫ) for at most
exp(−γq) exp(βq) log2 m columns. If the original β is chosen
to be much smaller than γ, this number is an exponentially
small fraction of log2 m.
The key idea now is to make these exceptional columns
ineffective. We do this by focusing our attention on a reduced
number of rows. For each exceptional column, we pick the
symbol that appears most often in that column, and restrict
attention to those rows that have this symbol in the exceptional
column. This depletes the number of rows by a factor at 1/q
for each exceptional column; after we do this sequentially for
all the exp(−(γ − β)q) log2 m ≪ log2 m rows, we will be
left with m′ rows, where log2 m′ = Ω(log2 m). We may
now add more rows to our existing list R1 . If we choose
these from the set of m′ rows, we are in no danger from
the exceptional columns; in the other columns R1 spans about
q(1 − 1/e) symbols, so we can add to R1 about q/e rows R2
(picked from the m′ -rows) and still ensure that in no column
h, we are in danger of h(R1 ∪ R2 ) becoming X . It is clear
that we can carry this approach further, e.g., by picking R2
randomly, expecting a significant number of internal collisions,
making the exceptional columns ineffective, focusing attention
on a smaller but still significant number of rows, etc., then
picking R3 from the rows that survive, and so on. In fact,
Chakraborty et al. derived Theorem I.2 using precisely this
approach.
In this paper, we follow the approach outlined above but
implement the idea more precisely. Before we describe our
contribution it will be useful to pin-point where the calculations in Chakraborty et al. were sub-optimal. We argued
above that after R1 is picked, we expect to span only about
q(1 − 1/e) symbols in a given column h. What about after
R2 is picked? R1 ∪ R2 contains a total of q + q/e rows: if
all symbols in column h appeared with the same frequency
(and continued to do so in the m′ rows after the exceptional
columns were eliminated), then we should expect h(R1 ∪ R2 )
to span about (q + q/e)(1 − exp(1 + 1/e)) symbols. Notice
that this is roughly the expected number of distinct coupons
collected in the classical coupon collector problem after q+q/e
attempts. Unfortunately, there are technical difficulties that
arise in claiming that this number will be reflected in our
process because (i) R1 and R2 are not picked independently,
and (ii) even if the symbols appeared with the same frequency
initially, they may not do so after we focus on a depleted
set of rows. Faced with these difficulties, Chakraborty et al.
settled for less. Instead of matching the bound suggested by
the coupon collector problem, when analysing the expected
size of h(R1 ∪ R2 ), they estimated h(R2 ) separately and
bounded |h(R1 ∪ R2 )| by |h(R1 )| + |h(R2 )|, thereby ignoring
h(R1 ∩ R2 ). The loss in precision resulting from the use of
this union bound increases as the number of phases increases.
Indeed, when the coupon collector process is carried in phases
by picking sets R1 , R2 , . . . , Rt for a large t, progress in
collecting coupons is retarded more by collisions across sets
(because for some i 6= j, h(Ri ) and h(Rj ) have elements in
common) than by collisions within some h(Ri ). By neglecting
collisions across phases, and by failing to track the coupon
collector process closely, the argument in Chakraborty et al.
was unable to push the list size in Theorem I.2 beyond
e/(e − 1).
What is new?: We attempt to track the progress of the coupon
collector faithfully. Instead of the set R1 of size q − 2 that was
picked earlier, we pick an ensemble (a collection of sets) R1
of sets of size q − 2. Similarly, in the later steps we will pick
ensembles R2 , R3 , . . .. However, in the end we pick one set
Ri from each of the ensembles Ri respectively, and assemble
our list of rows: R1 ∪ R2 ∪ · · · ∪ Rt . That this process is more
effective in bounding |h(R1 ∪ R2 ∪ . . . ∪ Rt )| will be formally
verified in later sections. For now, let us qualitatively see how
it helps in bounding |h(R1 ∪ R2 )|. We pick R1 at random:
if the number of sets in the ensemble is large enough (we
will set it to be exp(Θ(q))), then it should reflect a random
set of rows that was obtained by picking rows independently
(q − 2)-times from the set of all rows. Fix a choice for R2 , the
set to be picked at the second stage. Consider X = |h(R1 ∪
R2 )| where R1 is picked uniformly from the ensemble R1 ;
let Y = |h(R1 ∪ R2 )|, where R1 is picked uniformly from
the set of all rows. Then, we expect X and Y to have similar
distribution. So, we proceed as follows. We pick an ensemble
R1 at random. If for a certain column h, the ensemble R1
fails to deliver a good sample, we will need to make that
column ineffective as before. Further, if some set in R1 spans
a significantly larger number of symbols in some column, we
will again make that column ineffective. After this, we pick R2
from the remaining rows. We expect it to not only have a good
number of internal collisions but also be such that |h(R1 ∪
R2 )| and |h(R1 ∪ R2 )| (where the set R2 is chosen uniformly
from the available rows) are similar in expectation. Now, since
we ensured that the ensemble R1 was good for column h, a
random choice of R1 from the ensemble will deliver a value
of |h(R1 ∪ R2 )| that, with high probability, can be bounded
by the number of distinct coupons picked up at the same stage
by the coupon collector; in particular, it accounts for symbols
common to h(R1 ) and h(R2 ). The outline above illustrates
the advantages of picking an ensemble instead of committing
to just one randomly chosen set. However, a large ensemble
comes with its drawbacks. We need to ensure that no set in the
ensemble spans too many elements in any column, or rather,
we need to eliminate any column where some set spans many
elements. This forces a more drastic reduction in the number
of rows than before (that is, m′ is now much smaller relative to
m than in the calculation in [6]). Thus, it is
important to keep the sizes of the ensembles small. The tradeoff between these opposing concerns needs to be handled with
some care. The argument is presented in detail below.
II. PROOF OF THE RESULT
In what follows we assume that q is a large natural number
and m → ∞.
We will need the following concentration result due to McDiarmid (1989).
Lemma II.1 (McDiarmid). Let X1, X2, . . . , Xn be independent
random variables where each Xk takes values in a finite
set A. Let f : A^n → R be such that |f(x) − f(y)| ≤ c
whenever x and y differ in only one coordinate. Let Y =
f(X1, X2, . . . , Xn); then, for all t > 0,
    Pr[E[Y] − Y ≥ t], Pr[Y − E[Y] ≥ t] ≤ exp(−2t²/(nc²)).
Let C be an ℓ-list-decoding-code for the q/(q−1) channel with
ℓ < q ln q/6. As mentioned in the introduction, we will view
C as an m×n matrix with entries from [q]. In other words, the
rows are indexed by code-words and the columns are indexed
by hash functions. Let wt be a function from [q] to {0, 1};
for A ⊆ [q], let wt(A) := Σ_{a∈A} wt(a). Let R be a random
variable taking values in P([m]). Sometimes we use R to also
refer to the distribution of this random variable.
Following the idea mentioned in the introduction, we intend to
keep an ensemble R of sets of rows such that when we pick
a new set of rows R2 from a depleted number of rows m′ ,
we not only observe the correct number of internal collisions
within R2 but also observe the correct number of collision
between members of R and R2 . This motivates the following
definition.
Definition II.1 (Sampler). We say that an ensemble R =
(R1, R2, . . . , RL), where each Ri ⊆ [m], is a (γ, δ)-sampler
for R wrt column h if (A1, A2, . . . , AL) :=
(h(R1), h(R2), . . . , h(RL)) satisfies, for all wt : [q] → {0, 1},
    Pr_{j∈u [L]}[ |wt(Aj) − E[wt(h(R))]| ≥ γq ] ≤ exp(−δq).
The definition makes provision for all functions wt, because
it tries to anticipate the appropriate internal collisions (see
Lemma II.2) with very little advance knowledge of what the
distribution on [q] looks like in column h after a large number
of rows have been discarded.
Let π : S → [0, 1] be a probability mass function on a
finite set S. Let k ≥ 1, and let X1 , X2 , . . . , Xk be independent random variables each distributed according to π.
Then, let π^{k} denote the probability mass function of the
set {X1 , X2 , . . . , Xk }.
For distributions A and B on P([m]), let A ∨ B be the
distribution of S ∪ T where S ∼ A and T ∼ B, with S
and T chosen independently. The following lemma will be
the main workhorse for our argument.
Lemma II.2 (Ensemble Composition Lemma). Let R be
a distribution on P([m]) and let D be a distribution on
[m]. Let R be a (γ, δ)-sampler for R wrt a column h; let
(R1 , R2 , . . . , Rs ) be obtained by taking s independent samples
from the ensemble R. Similarly, let R′ = (R1′ , R2′ , . . . , Rs′ ) be
obtained by taking s independent samples according to R′ ∼
D^{tq} where t < 1. Let γ′, δ′ > 0 be such that δ ≤ 2(γ′)²/t
and δ > δ′. Let s = exp((δ − δ′)q), γ̃ = γ + γ′, δ̃ = δ − δ′. Then,
with probability 1 − 12 exp(−δ′q) over the random choices,
the composed ensemble
    R̃ := (R1 ∪ R1′, R2 ∪ R2′, . . . , Rs ∪ Rs′)
of cardinality s is a (γ̃, δ̃)-sampler for R ∨ R′ wrt the column
h, and furthermore, for all i ∈ [s],
    | |h(Ri ∪ Ri′)| − E[|h(R ∨ R′)|] | ≤ γ̃q.        (1)
Note that this ensemble is generated according to the product
distribution (R ∨ R′)^s.
Proof. Fix f : [q] → {0, 1} and let µf := E[f(h(R ∪ R′))];
similarly, for R′ ⊆ [m] let µf(R′) := E_R[f(h(R ∪ R′))].
First, we bound the probability that when R′ is chosen
according to R′, it fails to have µf(R′) close to µf. Using
McDiarmid’s inequality over the tq primitive choices for R′,
we have
    Pr_{R′∼R′}[ |µf(R′) − µf| ≥ γ′q ] ≤ 2 exp(−2(γ′)²q²/(tq))
                                       = 2 exp(−2(γ′)²q/t).     (2)
Now, let wt : [q] → {0, 1} be defined by wt(x) = f(x) if
x ∉ h(R′) and wt(x) = 0 otherwise. Then, for R ⊆ [m], we
have f(h(R ∪ R′)) = f(h(R′)) + wt(h(R)). Therefore (note
that here R′ is fixed and R varies randomly in R),
    Pr_{R∈u R}[ |f(h(R ∪ R′)) − µf(R′)| ≥ γq ]
        = Pr_{R∈u R}[ |wt(h(R)) − E[wt(h(R))]| ≥ γq ],
and since R is a (γ, δ)-sampler wrt R, we have
    Pr_{R∈u R}[ |wt(h(R)) − E[wt(h(R))]| ≥ γq ] ≤ exp(−δq).
Thus,
    Pr_{R∈u R, R′∼R′}[ |f(h(R ∪ R′)) − µf| ≥ (γ + γ′)q ]
        ≤ Pr_{R∈u R, R′∼R′}[ |µf − µf(R′)| ≥ γ′q ]
          + Pr_{R∈u R, R′∼R′}[ |µf(R′) − f(h(R ∪ R′))| ≥ γq ]
        ≤ 2 exp(−2(γ′)²q/t) + exp(−δq) ≤ 3 exp(−δq).     (3)
(We used δ ≤ 2(γ′)²/t to justify the last inequality.) Let ∆ :=
3 exp(−δq), the quantity on the right in (3). By taking f to be
the all-1’s function, we conclude from (3) that for each i, with
probability at least 1 − ∆, | |h(Ri ∪ Ri′)| − E[|h(R ∨ R′)|] | ≤
(γ + γ′)q; indeed, for this f, f(h(Ri ∪ Ri′)) = |h(Ri) ∪ h(Ri′)|
and µf = E[|h(R ∨ R′)|]. Now, by a union bound over the s
choices for i, we obtain
    Pr_{R̃}[ ∃i ∈ [s] : | |h(Ri ∪ Ri′)| − E[|h(R ∨ R′)|] | ≥ (γ + γ′)q ]
        ≤ ∆s ≤ 3 exp(−δ′q).     (4)
It remains to establish our first claim that whp the ensemble
e
picked according to (R ∨ R′ )s is a (e
γ , δ)-sampler
for R ∨ R′ .
Fix f : [q] → {0, 1}. Now, (3) implies that for each i ∈
[s], the probability that |f (h(Ri ∪ Ri′ )) − µf | ≥ (γ + γ ′ )q
is exponentially
for
small in q.′ Then, the tail probabilities
Ps
′
Y :=
i=1 I |f (h(Ri ∪ Ri )) − µf | ≥ (γ + γ )q can be
bounded by considering Bin(s, ∆). Therefore,
e
Pr [Y > exp (−δq)s]
e
R
s
e
≤
(∆)exp (−δq)s
e
exp (−δq)s
e
exp (−δq)s
e
≤ (e exp (δq)∆)
≤ 9 exp (−δ ′ q).
(5)
(We need to take a union bound against the 2q possible
functions f : [q] → {0, 1}: by changing s to qs we may
easily establsih this.) By (4) and (5), the probability that our
e
ensemble fails to be a (e
γ , δ)-sampler,
with γ
e = γ + γ ′ and
′
δe = δ − δ , or fails to satisfy (1) is at most 12 exp (−δ ′ q).
Let us recall the template of our argument. At any stage we
will have an ensemble of sets of rows, say R, and a universe
U ⊆ [m] to choose sets of rows from to add to R. We will
add a specific number of randomly chosen sets of rows of a
particular size from U and then declare those columns bad
where the modified R deviates from its expected behaviour.
Consider a set R ∈ R: we want to say that the coupon-collector
process at |R| probes into [q] is the gold standard for
good behaviour, i.e., no set in R will have expansion more
than the coupon-collector at the same stage. The expected
number of elements that the coupon-collector process picks
up after a i.i.d. uniform probes into [q] is approximately
q(1 − exp(−a/q)): we will denote this as µ_q^cc(a). So, we need
the following lemma, which is proved in the appendix.
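Before stating it, here is a quick empirical check of this approximation
(a Python sketch; it plays no role in the proof):

    import math, random

    def mean_image_size(q, a, trials=2000):
        # Empirical estimate of E[|{a i.i.d. uniform probes into [q]}|].
        return sum(len({random.randrange(q) for _ in range(a)})
                   for _ in range(trials)) / trials

    q = 500
    for a in (q // 2, q, 2 * q):
        print(a, round(mean_image_size(q, a), 1),
              round(q * (1 - math.exp(-a / q)), 1))  # mu_q^cc(a)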
Lemma II.3 (Phased Coupon Collector). Let a1, a2, . . . , ak
be positive integers; let a = a1 + a2 + · · · + ak,
and let π1, π2, . . . , πk be probability mass functions. Let
A1, A2, . . . , Ak be independent random variables taking
values in P([q]), where Ai ∼ πi^{ai}. Suppose a ≤ ǫq ln q and
k ≤ e·q^ǫ for some ǫ < 1/3; then,
    E[|A1 ∪ A2 ∪ · · · ∪ Ak|] ≤ q(1 − exp(−a/q)) + o(q^(1−ǫ))
                              = µ_q^cc(a) + o(q^(1−ǫ)).
Our next target is to understand the number of iterations we
wish to perform, i.e., the number of times we need to enlarge
the sizes of the sets surviving the ensemble R so that the list
size hits the target of ǫq ln q, where ǫ < 1/6. At the first stage
we will pick up sets of rows of size about ℓ1 = q, and expect
the image size to be close to µcc
q (ℓ1 ); we then prune out the
exceptional columns. In the next stage, we pick sets of size
about ℓ2 = q − µcc
q (ℓ1 ) and expect the combined image size
to be close to µcc
q (ℓ1 + ℓ2 ). Hence, in the third iteration we
pick sets of size close to ℓ3 = q − µcc
q (ℓ1 + ℓ2 ), and so on
for the subsequent iterations. P
We are interested in the list size
k
after k iterations, i.e, ℓ≤k := i=1 ℓi . We have the following
proposition, which is proved in the appendix.
Proposition II.4. Let ℓ1 = q, and for i ≥ 1 let ℓ_{i+1} =
q − µ_q^cc(Σ_{j=1}^{i} ℓj). Suppose k = e·q^ǫ for some ǫ < 1;
then ℓ≤k ≥ ǫq ln q.
Proof. (The series {ℓ≤k} tends to q ln q.)
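Heuristically, writing u_i = ℓ≤i/q, the recursion becomes u_{i+1} = u_i +
exp(−u_i), so exp(u_i) grows roughly linearly in i and ℓ≤k ≈ q ln k. The
proposition can also be checked numerically (a Python sketch, illustrative
only):

    import math

    def ell_sum(q, k):
        # Iterate l_{i+1} = q - mu_q^cc(l_{<=i}) = q*exp(-l_{<=i}/q), l_1 = q.
        total = q
        for _ in range(k - 1):
            total += q * math.exp(-total / q)
        return total

    q, eps = 1000, 0.25
    k = int(math.e * q ** eps)
    print(ell_sum(q, k) >= eps * q * math.log(q))  # prints True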
Finally, we need a lemma where we glue all the steps mentioned in the introduction. At each iteration k, we maintain an
ensemble Rk satisfying the requisite properties.
We call a distribution D on P([m]) a (g1, . . . , gk)-phased
coupon collector distribution if D = D1^{g1} ∨ D2^{g2} ∨ · · · ∨ Dk^{gk},
where each Di is a probability mass function on [m]. The
following lemma tracks how the parameters change with each
iteration.
Lemma II.5 (Iteration Lemma). Let k ≤ q^ǫ for some
ǫ < 1/5. Let γ = γ′ = q^(−2ǫ)/2 and δ′ = q^(−5ǫ)/4. Assume
n ≤ exp(δ′q) log2 m/(48 · q^ǫ log2 q). Then, there exists a
partition H1(k) ⊔ H2(k) of the columns of C, a universe of
rows Uk ⊆ [m], an ensemble Rk = (R1, R2, . . . , RLk), integers
(g1, . . . , gk) and a (g1, . . . , gk)-phased coupon collector
distribution Dk such that:
(a) g1 = q − 2, and g_{i+1} = q − µ_q^cc(g_i) − (i + 1)γq − 2;
(b) ∀i ∈ [Lk], |Ri| = g≤k ≥ ℓ≤k − 2k − k²γq/2;
(c) ∀h ∈ H2(k), ∀i ∈ [Lk], |h(Ri ∪ Uk)| ≤ q − 1;
(d) ∀h ∈ H1(k), Rk is a ((k + 1)γ, γ² − kδ′)-sampler for
Dk wrt h;
(e) ∀h ∈ H1(k), ∀i ∈ [Lk], | |h(Ri)| − E[|h(Dk)|] | ≤ (k + 1)γq;
(f) log2 |Uk| ≥ log2 m − k log2 q · 24 exp(−δ′q)n.
Proof. We will use induction on k. For k = 1 we have g1 =
q − 2. We use Lemma II.2 with R being the constant ∅, and
R = {∅}. Clearly, R is a (γ, γ²)-sampler for R. Let D be the
uniform distribution over [m] and let R′ = (R1′, R2′, . . . , Rs′)
be obtained by taking s = exp((γ² − δ′)q) independent
samples according to R′ ∼ D^{q−2}. So, D1 = D^{q−2}. For
a fixed column h we have the following: with probability
1 − 12 exp(−δ′q) over the random choices, the composed
ensemble
    R̃ = (R1′, R2′, . . . , Rs′)
is good wrt h, i.e., R̃ is a (2γ, γ² − δ′)-sampler for R′ wrt
the column h, and furthermore, for all i ∈ [s],
    | |h(Ri′)| − E[|h(R′)|] | ≤ 2γq.
Hence, in expectation only 12 exp(−δ′q)n columns are bad.
Therefore, with probability at least 1/2, at most 24 exp(−δ′q)n
columns are bad. Also, the probability of an Ri′ ∈ R′ having
size less than q − 2 (because some two of our q − 2 choices of
rows picked the same row) is at most q²/m. Thus, by the union
bound, the probability of (b) not holding is at most s · q²/m,
which is less than 1/2. Therefore, there is a choice of R̃, which
we call R^1, such that at most 24 exp(−δ′q)n columns are bad
and (b) holds. The set of bad columns is H2(1) and the set of
good columns is H1 (1). Then, clearly (d) and (e) are true.
Let H2 (1) = {h1 , . . . , hb } where b ≤ 24 exp (−δ ′ q)n and
WLOG assume that 1 is the most frequent symbol in h1 .
Retain only those rows in [m] that correspond to the symbol
1 in h1. Call this pruned universe U′: we have ensured that so
long as we add rows to Ri ∈ R^1 only from U′, the image size
in h1 is at most |h1(Ri)| + 1 ≤ q − 1. Thus, by taking a
multiplicative hit of at most 1/q we have rendered h1 ineffective.
Iterating this over H2(1) we take a multiplicative hit of (1/q)^b.
Hence, we obtain a universe U′, which will be U1, such that
log2 |U′| = log2 |U1| ≥ log2 m − 24 exp(−δ′q)n log2 q. This
establishes (c) and (f). This establishes the claims for k = 1;
the induction step in general is similar.
Now, as our IH let us assume that for (k − 1) we have
the partition H1 (k − 1) ⊔ H2 (k − 1), Uk−1 ⊆ [m], Rk−1 ,
integers (g1 , . . . , gk−1 ) and Dk−1 such that (a) through (f)
are satisfied. Then, we repeat the above argument. We have
gk = q − µ_q^cc(g_{k−1}) − kγq − 2. We use Lemma II.2 for
h ∈ H1(k − 1) with R being Dk−1, and R = Rk−1,
which is a (kγ, γ² − (k − 1)δ′)-sampler for Dk−1 wrt h. Let
(R1, . . . , Rs) be obtained by s = exp((γ² − kδ′)q) independent
samples from Rk−1. Let D be the uniform distribution over
Uk−1 and let R′ = (R1′, R2′, . . . , Rs′) be obtained by taking
s independent samples according to R′ ∼ D^{gk}. We let
Dk = Dk−1 ∨ D{gk } . For a fixed column h we have the
following: with probability 1 − 12 exp(−δ′q) over the random choices, the composed ensemble
R̃ = (R_1 ∪ R′_1, R_2 ∪ R′_2, . . . , R_s ∪ R′_s)
is good wrt h, i.e., R̃ is a ((k + 1)γ, γ² − kδ′)-sampler for D_k wrt h, and furthermore ∀i ∈ [s],
| |h(R_i ∪ R′_i)| − E[|h(D_k)|] | ≤ (k + 1)γq.
Hence, in expectation only 12 exp(−δ′q) n columns of H_1(k − 1) are bad. Therefore, with probability at least 1/2, at most 24 exp(−δ′q) n columns of H_1(k − 1) are bad. Also, the probability of an R_i ∪ R′_i ∈ R̃ having size less than g_{≤k} (because some two of our q − μ^{cc}_q(g_{k−1}) − kγq − 2 choices of rows for R′_i picked the same row or collided with some row in R_i) is at most (q ln q)²/|U_{k−1}|. Thus, by the union bound, the probability of (b) not holding is at most s · (q ln q)²/|U_{k−1}|, which is less than 1/2. Therefore, there is a choice of R̃, which we call R^k, such that at most 24 exp(−δ′q) n columns of H_1(k−1) are bad and (b) holds. Combining these bad columns with H_2(k−1) we obtain H_2(k), and the columns not in H_2(k) form the set H_1(k). Then, clearly (d) and (e) are true.
Let H2 (k) \ H2 (k − 1) = {h1 , . . . , hb } where b ≤
24 exp (−δ ′ q)n and WLOG assume that 1 is the most frequent
symbol in h1 . Retain only those rows in Uk−1 that correspond
to the symbol 1 in h1 . Call this pruned universe U ′ : this
pruning ensures that so long as we add rows to Ri ∈ Rk
only from U ′ , the image size in h1 is at most q − 1. Thus, by
taking a multiplicative hit of at most 1/q we have rendered h1
ineffective. Iterating this over H_2(k) \ H_2(k − 1) we take a multiplicative hit of (1/q)^b. Hence, we obtain a universe U′, which will
be Uk , such that log2 |U ′ | = log2 |Uk | ≥ log2 |Uk−1 | −
24 exp (−δ ′ q)n log2 q ≥ log2 m − k log2 q · 24 exp (−δ ′ q)n.
Together with property (c) of Uk−1 this establishes (c) and
(f). This completes the induction step.
Proof of Theorem I.3 (main result of the paper). Fix an ǫ′ <
1/6 and let C be an ǫ′ q ln q-list-decoding-code for the q/(q−1)
channel. Choose λ ≪ ε′ and let ε = ε′ + λ. Let q be sufficiently large so that k = q^ε ≥ e·q^{ε′+λ/2}. We will appeal to Lemma II.5
(with k and ǫ) and assume that n ≤ exp (δ ′ q) log2 m/(48 ·
q ǫ log2 q). Then, by choosing a set of rows R in the ensemble Rk and using (b) and Proposition II.4 we obtain that
|R| ≥ ǫ′ q ln q. However, using (c) we have that for all columns
h ∈ H2 (k), |h(R)| ≤ q − 1. Also, using (e) and Lemma II.3
we obtain that for all h ∈ H_1(k), |h(R)| < q. This is a contradiction, and hence n > exp(δ′q) log₂ m/(48 · q^ε log₂ q); for sufficiently large q this gives n = Ω(exp(q^{1−6ε′}/8) log₂ m).
We note that it is possible, by a more careful analysis, to improve the bound of Ω(exp(q^{1−6ε′}/8) log₂ m) to Ω(exp(q^{1−4ε′}/8) log₂ m), in which case we may apply the bound up to a list size of q ln q/4. This bound is obtained by
modifying Lemma II.5 to accommodate γ ′ and δ ′ which vary
across the induction steps and being more scrupulous about
the argument in the preceding paragraph.
ACKNOWLEDGEMENTS
We are grateful to Prahladh Harsha for the numerous detailed
discussions that led to the result reported in this paper, and also
for proof-reading it. We also thank Ramprasad Saptharishi for
his help with Lemma II.3.
APPENDIX
Proof of Lemma II.3. Consider a constant λ ≪ ǫ. For i =
1, 2, . . . , k, let Bi be the set of q 1−2ǫ−λ elements of [q] taking
the topmost values in πi . Let B = B1 ∪ B2 ∪ · · · ∪ Bk ; note
that |B| ≤ kq 1−2ǫ−λ = o(q 1−ǫ ). Then, E[|A1 ∪ A2 ∪ . . . Ak |]
is at most
|B| + Σ_{x∉B} (1 − Π_{i=1}^{k} (1 − π_i(x))^{a_i}).
Now, for x ∉ B, we have π_i(x) ≤ 1/q^{1−2ε−λ}, and
1 − π_i(x) ≥ exp(−π_i(x)/(1 − π_i(x))) ≥ exp(−π_i(x)(1 + 2/q^{1−2ε−λ})).
Then, by the AM–GM inequality we have the upper bound
|B| + q − q exp(−(1 + 2/q^{1−2ε−λ})(1/q) Σ_{i,x} a_i π_i(x)).
Our claim follows from this because
exp(−(1 + 2/√q)(1/q) Σ_{i,x} a_i π_i(x)) ≥ exp(−a/q) − o(1/q^{1−2ε}) ≥ exp(−a/q) − o(q^{1−ε})/q.
Proof of Proposition II.4. Suppose ℓ_{≤i} ∈ [jq, (j + 1)q] for some j ≥ 0; then ℓ_{i+1} ≥ q/e^{j+1}. Therefore, the number of i's for which ℓ_{≤i} ∈ [jq, (j + 1)q] is at most e^{j+1}. Suppose, for a contradiction, that ℓ_{≤k} < εq ln q; then we would have
k < e + e² + · · · + e^{ε ln q} ≤ e·q^ε.
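As a quick numeric sanity check of this final geometric-sum bound, the following Python snippet (ours, not part of the paper; the values of q and ε are arbitrary test choices) verifies that e + e² + · · · + e^{ε ln q} ≤ e·q^ε:

import math

for q in [10**3, 10**6, 10**9]:
    for eps in [0.05, 0.1, 0.19]:
        m = eps * math.log(q)                    # number of geometric terms
        lhs = sum(math.e**j for j in range(1, int(m) + 1))
        rhs = math.e * q**eps                    # note: q**eps == e**(eps * ln q)
        assert lhs <= rhs, (q, eps, lhs, rhs)
        print(f"q={q:>10}, eps={eps:.2f}: sum={lhs:10.2f} <= e*q^eps={rhs:10.2f}")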
arXiv:1710.02942v1 [] 9 Oct 2017
Downlink Energy Efficiency of Power Allocation and Wireless Backhaul Bandwidth Allocation in Heterogeneous Small Cell Networks
Haijun Zhang, Senior Member, IEEE, Hao Liu, Julian Cheng, Senior
Member, IEEE, and Victor C. M. Leung, Fellow, IEEE
Abstract
The widespread application of wireless services and dense device access has triggered huge
energy consumption. Because of the environmental and financial considerations, energy-efficient design
in wireless networks becomes an inevitable trend. To the best of the authors' knowledge, energy-efficient orthogonal frequency division multiple access heterogeneous small cell optimization that jointly considers energy efficiency maximization, power allocation, wireless backhaul bandwidth allocation, and user Quality of Service has not been investigated before.
In this paper, we study the energy-efficient power allocation and wireless backhaul bandwidth allocation
in orthogonal frequency division multiple access heterogeneous small cell networks. Different from the
existing resource allocation schemes that maximize the throughput, the studied scheme maximizes energy
efficiency by allocating both transmit power of each small cell base station to users and bandwidth for
backhauling, according to the channel state information and the circuit power consumption. The problem
is first formulated as a non-convex nonlinear programming problem and then it is decomposed into two
convex subproblems. A near optimal iterative resource allocation algorithm is designed to solve the
Haijun Zhang is with the Beijing Engineering and Technology Research Center for Convergence Networks and Ubiquitous
Services, University of Science and Technology Beijing, Beijing, 100083, China (e-mail: [email protected]).
Hao Liu and Julian Cheng are with School of Engineering, The University of British Columbia, Kelowna, BC, Canada (e-mail:
[email protected], [email protected]).
Victor C. M. Leung is with the Department of Electrical and Computer Engineering, The University of British Columbia,
Vancouver, BC V6T 1Z4 Canada (e-mail: [email protected]).
resource allocation problem. A suboptimal low-complexity approach is also developed by exploring
the inherent structure and property of the energy-efficient design. Simulation results demonstrate the
effectiveness of the proposed algorithms by comparison with existing schemes.
Index Terms
Bandwidth allocation, energy efficiency, heterogeneous network, power allocation, small cell, wireless backhaul.
I. INTRODUCTION
Wireless communication networks have experienced tremendous growth in the past few
decades. It is shown that higher capacity wireless links are expected to meet the increasing
quality of service (QoS) demands of multimedia applications, and these high data rate links also
result in increasing device power consumption. The next generation communication systems
need to provide higher data rate with limited power and bandwidth due to the rapidly increasing
demands for multimedia services. Designing energy-efficient wireless communication system
becomes an emerging trend, due to rapidly increasing system energy costs and rising requirements
of communication capacity [1]–[3]. According to [4] and [5], the radio access part is a major
energy consumer in conventional wireless cellular networks, and it accounts for more
than 70 percent of the total energy consumption. Therefore, increasing the energy efficiency of
typical wireless networks is important to overcome the challenges raised by the rising demands
of energy consumption and communication throughput.
To offload the overloaded traffics in macrocells and enhance the capacity and energy efficiency
of the wireless networks, one proposed method is to shorten the distance between the base stations
(BSs) and the user equipments. Small cells (e.g., picocells, femtocells and relay nodes) have been
used to improve system capacity in hotspots for relieving the burden on overloaded macrocells,
which is considered as a promising technique to provide an effective solution for the challenges
in current macrocells [6], [7]. Therefore, small cells have received much attention in recent years from academia and industry, because they enable spatial spectrum reuse with low power consumption and improve system coverage with low-cost infrastructure deployment [8]. Heterogeneous small cell networks, where small cells are overlaid within
a macrocell to improve coverage and increase system capacity beyond the initial deployment
of macrocells, have been regarded as a promising approach to meet the increasing data traffic
demand and reduce energy consumption. Although highly promising, many important problems
related to heterogeneous small cell networks such as interference mitigation, resource allocation,
and QoS provisioning [9]–[12] should be addressed to fully reap the potential gains.
Resource allocation, such as power allocation and bandwidth allocation, has been widely
used to maximize the energy efficiency under power limitation and QoS in heterogeneous
small cell networks. Power allocation for energy efficiency has been widely studied in the
literature. The distributed power control game was studied in [13] to maximize the energy
efficiency of transmission for secondary users in cognitive radio networks and an optimal
power control problem was formulated as a repeated game. In [14], based on the hardcore
point process (HCPP), the authors investigated the maximum achievable energy efficiency of
the considered multiuser multiantenna HCPP random cellular networks with the aforementioned
minimum distance constraint for adjacent BSs. Different from the authors in [14], who took
the minimum distance between adjacent BSs into consideration to maximize the energy efficiency, we
propose a suboptimal low-complexity approach of energy-efficient backhaul bandwidth allocation
by optimizing the fraction of bandwidth allocated for wireless backhauling at all small cell BSs
within a macrocell range. The authors in [15] studied energy-efficient power control and receiver
design in cognitive radio networks, and a non-cooperative power control game for maximum
energy efficiency of secondary users was considered with a fairness constraint and interference
threshold. The authors of [16] formulated the energy-efficient spectrum sharing and power
allocation in heterogeneous cognitive radio networks with femtocells as a Stackelberg game
and they proposed a gradient based iteration algorithm to obtain the Stackelberg equilibrium
solution to the energy-efficient resource allocation problem. Some works also have been done to
consider bandwidth allocation for energy efficiency. In [17], the authors studied the joint service
pricing and bandwidth allocation for energy and cost efficiency at the operator level in a multitier network where an operator deploys heterogeneous small cell networks, and they formulated
the problem as a Stackelberg game. The problem of joint link selection, power and bandwidth
allocation for energy efficiency maximization for Multi-Homing networks was investigated in
[18]. A new energy-efficient scheme was presented in [19] to statistically meet the demands for
QoS during the bandwidth allocation for wireless networks.
In this paper, we define the wireless backhaul as the connection between the macro BS and
small cell BSs, and it is necessary to jointly consider the design of the radio access and
backhaul network. Several related works considered the backhaul to improve energy efficiency
in wireless networks. The authors of [20] studied energy efficiency of resource allocation in
multi-cell orthogonal frequency division multiple access (OFDMA) downlink networks where
the limited backhaul capacity, the circuit power consumption and the minimum required data rate
are considered. The resource allocation problem for energy-efficient communication with limited
backhaul capacity is formulated as an optimization problem. In [21], an energy efficiency model
of small cell backhaul networks with Gauss–Markov mobile models has been proposed. In [22],
the authors maximized system energy efficiency in OFDMA small cell networks by optimizing
backhaul data rate and emission power, and they proposed a joint forward and backhaul link
optimization scheme by taking both the power consumption of forward links and the backhaul
links into consideration.
To the best of the authors’ knowledge, energy efficiency of power allocation and wireless
backhaul bandwidth allocation in heterogeneous small cell network has not been investigated. In
this work, we study the power allocation and bandwidth allocation problem in a heterogeneous
small cell network where the small cells use wireless backhauling to maximize energy efficiency
of all small cell users. Similar to the paper in [23], we also use Gradient Assisted Binary Search
(GABS) Algorithm to solve the energy-efficient power allocation problem. Reference [24] is a
conference version of this paper. Different from the conference version, we provide the detailed
proof for the theorem, complexity analysis for the proposed algorithms and more simulation
results in this paper. The key contributions of our work can be summarized as follows.
•
Design of an energy-efficient OFDMA heterogeneous small cell optimization: This is a
novel approach by considering energy efficiency maximization, power allocation, wireless
backhaul bandwidth allocation, and user QoS into the design of OFDMA heterogeneous
small cell optimization. We formulate the energy-efficient wireless backhaul bandwidth
allocation and power allocation problem in a heterogeneous small cell as a nonlinear
programming problem, where maximum transmit power constraints of each small cell BS
to each small cell user, the downlink data rate constraint of small cell BSs and the minimum
data rate between each small cell BS and its corresponding user are considered to provide
reliable and low-energy downlink transmission for small cell users. The non-convex optimization problem is then decomposed into two convex subproblems, and an algorithm is proposed for wireless backhaul bandwidth allocation and power allocation.
•
Support of the small cell backhauling in the context of designing power allocation schemes
for heterogeneous small cell networks: We study the wireless backhaul bandwidth allocation
at the small cell BS, which means a fraction of bandwidth is scheduled for backhauling
and the remainder is assigned for communication with the corresponding users. We formulate the
bandwidth allocation problem as a convex problem and obtain the optimum solution.
•
Design of suboptimal low-complexity algorithm by decomposing the power allocation and
bandwidth allocation: The energy-efficient wireless backhaul bandwidth allocation and
power allocation problem are decomposed and are optimized separately. Correspondingly,
a suboptimal low-complexity algorithm is proposed. The effectiveness of the proposed
suboptimal algorithm is demonstrated by simulations.
The rest of this paper is organized as follows. Section II describes the system model. In Section
III, the energy-efficient resource allocation and backhauling are presented, and in Section IV,
optimization algorithms are proposed. Simulation results are discussed in Section V. Finally,
Section VI concludes the paper.
II. SYSTEM MODEL
In this section, we formulate the problem of downlink energy efficiency of power allocation
and unified bandwidth allocation for wireless backhauling in heterogeneous small cell networks.
We consider a heterogeneous small cell network as shown in Fig. 1 with a single macro BS,
J small cells deployed within the macrocell range and K users randomly located in each small
cell.
The small cells share the same spectrum with the macrocell. In this work, the unified wireless backhaul bandwidth allocation is investigated. The unified bandwidth allocation factor is β ∈ [0, 1], which is the fraction of bandwidth allocated for wireless backhauling at all small cell BSs within a
macrocell range. For simplicity, all small cells are assumed to have the same bandwidth allocation
factor. We assume that the multiple antenna technology is used in the macro BS and each small
cell corresponds to a beamforming group, so the interference for wireless backhaul between
different small cells can be neglected. The antenna array size at macro BS is N, which is much
greater than the beamforming group size B and the number of small cells, i.e., N ≫ B and
N ≫ J. In this work, we also assume that B ≥ J. Each small cell BS is equipped with single
antenna. OFDMA technology is used in each small cell to support the communication between
BS and users.
Let P0 be the equal transmit power of the macro BS transmit antenna targeted at corresponding
small cell, and σ² is the additive white Gaussian noise (AWGN) power. Then the received signal-to-noise ratio (SNR) in the wireless backhaul downlink of small cell j is given by
γ_j = P_0 G_j / σ².   (1)
Let gj,k be the channel power gain between the jth small cell BS and its corresponding kth
user, where j ∈ {1, 2, ..., J}, k ∈ {1, 2, ..., K}. Let pj,k denote the transmit power from the jth
small cell BS to its corresponding kth user, and let P = [pj,k ]J×K denote the power allocation
matrix.
We assume that different users in each small cell use different subchannels, and we treat the co-channel interference between small cells as part of the thermal noise because of the severe wall penetration loss and the low power of small cell BSs [12]. The received signal-to-interference-plus-noise ratio (SINR) of small cell user k associated with small cell j is given by
γ_{j,k} = p_{j,k} g_{j,k} / (σ² + I_{j,k})   (2)
where Ij,k is the interference introduced by macro BS, Ij,k = P0 Gj,k , where Gj,k is the channel
power gain between macro BS and the kth user in the jth small cell. The achievable data
transmission rate between the jth small cell BS and its corresponding kth user is determined by
r_{j,k} = ((1 − β)/K) log₂(1 + γ_{j,k}).   (3)
Therefore, we have the relation between rj,k and pj,k
p_{j,k} = (2^{K r_{j,k}/(1−β)} − 1) (σ² + I_{j,k})/g_{j,k},
r_{j,k} = ((1 − β)/K) log₂(1 + p_{j,k} g_{j,k}/(σ² + I_{j,k})).   (4)
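The two directions of (4) are used repeatedly in what follows. The following Python sketch (function names and example values are ours, purely illustrative) implements both and checks that they are mutually inverse:

import math

def rate_from_power(p, g, sigma2, I, beta, K):
    """Achievable downlink rate r_{j,k} at transmit power p, eqs. (3)-(4)."""
    return (1.0 - beta) / K * math.log2(1.0 + p * g / (sigma2 + I))

def power_from_rate(r, g, sigma2, I, beta, K):
    """Transmit power p_{j,k} required to reach rate r, eq. (4)."""
    return (2.0 ** (K * r / (1.0 - beta)) - 1.0) * (sigma2 + I) / g

# Round-trip consistency check with arbitrary example values.
p = 0.05   # 50 mW
r = rate_from_power(p, g=1e-10, sigma2=3.9811e-14, I=1e-13, beta=0.3, K=5)
assert abs(power_from_rate(r, 1e-10, 3.9811e-14, 1e-13, 0.3, 5) - p) < 1e-9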
Besides the transmit power during the transmission, circuit energy consumption is also incurred
by device electronics in small cell BSs [25], [26]. Circuit power represents the additional device
power consumption of devices during transmissions [27], such as digital-to-analog converters,
mixers and filters, and this portion of energy consumption is independent of the transmission
state. If we denote the circuit power as P_C, the overall power consumption of the jth small cell
BS to the kth user is PC + pj,k .
For energy-efficient communication, it is desirable to send the maximum amount of data with
a given amount of energy for small cell BSs. Hence, given any amount of energy ∆e consumed
in a duration ∆t in each small cell BS to each user, ∆e = ∆t(PC + pj,k ), the small cell BSs
desire to send a maximum amount of data by choosing the power allocation vector and backhaul
bandwidth to maximize
Σ_{j=1}^{J} Σ_{k=1}^{K} r_{j,k}(β, p_{j,k}) Δt / Δe   (5)
which is equivalent to maximizing
U(β, P) = Σ_{j=1}^{J} Σ_{k=1}^{K} U_{j,k}(β, p_{j,k})   (6)
where
U_{j,k}(β, p_{j,k}) = r_{j,k}(β, p_{j,k}) / (P_C + p_{j,k}).   (7)
In (6), U(β, P) is called energy efficiency for all small cells; Uj,k (β, pj,k ) is the energy efficiency
of the kth user of the jth small cell. The unit of the energy efficiency is bits per Hertz per Joule,
which has been frequently used in the literature for energy-efficient communications [28]–[32].
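As a concrete illustration of (7), the short Python sketch below (our naming, with the rate expression of (3) inlined) computes the per-user energy efficiency for given channel and circuit parameters:

import math

def energy_efficiency(p, g, sigma2, I, beta, K, P_C):
    """Per-user energy efficiency U_{j,k} in bits/Hz/Joule, eq. (7)."""
    r = (1.0 - beta) / K * math.log2(1.0 + p * g / (sigma2 + I))   # eq. (3)
    return r / (P_C + p)                                           # eq. (7)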
When the downlink channel state information is estimated by the small cell BSs, the resource
allocation is performed by a small cell BS under the following constraints.
•
Transmit power constraint of each small cell BS to each user:
0 ≤ p_{j,k} ≤ P_max, ∀j, k   (8)
where Pmax denotes the maximal transmit power of each small cell BS to each user.
•
The downlink data rate constraint of each small cell BS: The throughput of the small cell
is given by
R_j = Σ_{k=1}^{K} r_{j,k}.   (9)
Due to the inter-user interference within the overlapping areas of the macrocell and small cell beamforming groups, we use a typical zero-forcing beamforming technique with equal-power allocation for each user transmission link to largely eliminate this interference [33], [34]. The capacity of the wireless backhaul downlink for small cell j is
C_j = β log₂(1 + ((N − B + 1)/B) γ_j).   (10)
The downlink wireless backhaul constraint requires
Rj ≤ Cj
(11)
such that the downlink traffic of the jth small cell can be accommodated by its wireless
backhaul.
•
Heterogeneous QoS guarantee: The QoS requirement Rt should be guaranteed for each
user in each small cell to maintain the performance of the communication system
rj,k ≥ Rt .
(12)
Our target is to maximize the energy efficiency of power allocation and unified bandwidth
allocation for wireless backhauling in heterogeneous small cell networks under power constraint
and data rate requirements. Thus, the corresponding problem for the downlink can be formulated
as the following nonlinear programming problem
max_{β,P} U(β, P) = max_{β,p_{j,k}} Σ_{j=1}^{J} Σ_{k=1}^{K} U_{j,k}(β, p_{j,k})   (13)
s.t. C1: 0 ≤ p_{j,k} ≤ P_max
C2: R_j ≤ C_j   (14)
C3: r_{j,k} ≥ R_t
C4: 0 ≤ β ≤ 1.
III. ENERGY-EFFICIENT RESOURCE ALLOCATION AND BACKHAULING
The optimization problem formulated in (13) and (14) is non-convex, and we notice that the continuous variables β and p_{j,k} are separable in (13). Therefore, we consider a decomposition
approach to solve the energy-efficient resource allocation problem. We decompose the nonconvex optimization problem into two convex subproblems: one for energy-efficient power
allocation and one for energy-efficient wireless backhaul bandwidth allocation. Then, we solve
the subproblems of energy-efficient power allocation and energy-efficient backhaul bandwidth
allocation individually.
A. Energy-Efficient Power Allocation
The concept of quasiconcavity will be used in our discussion and is defined in [35].
Definition 1. A function f , which maps from a convex set of real n-dimensional vectors, D, to
a real number, is called strictly quasiconcave if for any x_1, x_2 ∈ D and x_1 ≠ x_2,
f (λx1 + (1 − λ)x2 ) > min{f (x1 ), f (x2 )}
(15)
for any 0 < λ < 1.
Given a value β for unified wireless backhaul bandwidth allocation, the optimization algorithm
begins with the power allocation subproblem P1.1 that is formulated as
P1.1: max_P U(P) = max_{p_{j,k}} Σ_{j=1}^{J} Σ_{k=1}^{K} U_{j,k}(p_{j,k})   (16)
s.t. C1: 0 ≤ p_{j,k} ≤ P_max
C2: R_j ≤ C_j   (17)
C3: r_{j,k}(p_{j,k}) ≥ R_t
where
r_{j,k}(p_{j,k}) = ((1 − β)/K) log₂(1 + p_{j,k} g_{j,k}/(σ² + I_{j,k}))   (18)
is strictly concave and monotonically increasing in p_{j,k}, with r_{j,k}(0) = 0.
The optimal energy-efficient power allocation achieves the maximum energy efficiency, i.e.
P* = arg max_P U(P).   (19)
It is proved in Appendix A that U(P) has the following properties.
Lemma 1. If rj,k (pj,k ) is strictly concave in pj,k , Uj,k (pj,k ) ∈ U(P) is strictly quasiconcave.
Furthermore, Uj,k (pj,k ) is first strictly increasing and then strictly decreasing in any pj,k , i.e. the
local maximum of U(P) for each pj,k exists at a positive finite value.
For strictly quasiconcave functions, if a local maximum exists, it is also globally optimal [35].
Hence, a unique globally optimal transmission power vector always exists and its characteristics
are summarized in Theorem 1 according to the proofs in Appendix A.
Theorem 1. If r_{j,k}(p_{j,k}) is strictly concave, there exists a unique globally optimal transmission power vector P* = {p*_{j,k}; (j, k) ∈ J × K} with P* = arg max_P U(P). Each element p*_{j,k} = arg max_{p_{j,k}} U_{j,k}(p_{j,k}) is given by
∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p*_{j,k}} = 0, i.e., f(p*_{j,k}) = 0,
which means U_{j,k}(p*_{j,k}) = r_{j,k}(p*_{j,k})/(P_C + p*_{j,k}) = ∂r_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p*_{j,k}}.
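The characterization in Theorem 1 can be checked numerically: f(p_{j,k}) = (P_C + p_{j,k}) r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k}) is strictly decreasing, positive at p_{j,k} = 0 and eventually negative, so its unique zero is p*_{j,k}. The following Python sketch (ours; all parameter values are illustrative, not from the paper) evaluates f and exhibits the sign change that brackets p*_{j,k}:

import math

def f(p, g, sigma2, I, beta, K, P_C):
    """f(p) = (P_C + p) r'(p) - r(p) from Theorem 1."""
    snr = g / (sigma2 + I)
    r = (1 - beta) / K * math.log2(1 + p * snr)
    r_prime = (1 - beta) / K * snr / (math.log(2) * (1 + p * snr))
    return (P_C + p) * r_prime - r

args = dict(g=1e-10, sigma2=3.9811e-14, I=1e-13, beta=0.3, K=5, P_C=0.1)
print(f(0.0, **args) > 0, f(10.0, **args) < 0)   # True True: a root lies in (0, 10)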
In order to solve the problem P1.1 for power allocation, we rewrite the objective function in
(16) as
max_{p_{j,k}} U_{j,k}(p_{j,k}) = max_{p_{j,k}} r_{j,k}(p_{j,k})/(P_C + p_{j,k}).   (20)
If each small cell user could reach the maximum energy efficiency, all small cell users could
reach the maximum energy efficiency. The total data rate in each small cell cannot exceed the capacity of the wireless backhaul downlink for small cell j, R_j ≤ C_j, so we can approximate that the data rate for each user is less than C_j/K, i.e.,
r_{j,k}(p_{j,k}) ≤ C_j/K,   (21)
and the maximum power for user k in small cell j is P_S/K. Thus, P1.1 is equivalent to
P1.2: max_{p_{j,k}} U_{j,k}(p_{j,k})
s.t. C1: 0 ≤ p_{j,k} ≤ P_S/K
C2: r_{j,k}(p_{j,k}) ≤ C_j/K   (22)
C3: r_{j,k}(p_{j,k}) ≥ R_t.
We can rewrite C2 in (22) according to (4) as
p_{j,k} ≤ ((σ² + I_{j,k})/g_{j,k}) (2^{(β/(1−β)) log₂(1 + ((N−B+1)/B) P_0 G_j/σ²)} − 1).   (23)
We can rewrite C3 in (22) according to (4) as
p_{j,k} ≥ ((σ² + I_{j,k})/g_{j,k}) (2^{K R_t/(1−β)} − 1).   (24)
Therefore, we have
L_{j,k} ≤ p_{j,k} ≤ H_{j,k}   (25)
where
L_{j,k} = ((σ² + I_{j,k})/g_{j,k}) (2^{K R_t/(1−β)} − 1)   (26)
H_{j,k} = min{ ((σ² + I_{j,k})/g_{j,k}) (2^{(β/(1−β)) log₂(1 + ((N−B+1)/B) P_0 G_j/σ²)} − 1), P_max }   (27)
only if the following inequality is satisfied
L_{j,k} ≤ H_{j,k}.   (28)
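A direct transcription of the bounds (26)-(28) into Python may be helpful; this sketch (our naming, with illustrative inputs) returns the feasible power interval and checks feasibility:

import math

def power_bounds(g, sigma2, I, beta, K, R_t, P0, G_j, N, B, P_max):
    """Return (L, H) from eqs. (26)-(27); feasible only if L <= H, eq. (28)."""
    base = (sigma2 + I) / g
    L = base * (2 ** (K * R_t / (1 - beta)) - 1)                       # eq. (26)
    cap = (beta / (1 - beta)) * math.log2(1 + (N - B + 1) / B * P0 * G_j / sigma2)
    H = min(base * (2 ** cap - 1), P_max)                              # eq. (27)
    return L, H

L, H = power_bounds(g=1e-10, sigma2=3.9811e-14, I=1e-13, beta=0.3, K=5,
                    R_t=0.01, P0=2.0, G_j=1e-12, N=100, B=20, P_max=0.1)
print(L <= H)   # feasibility check, eq. (28)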
The energy-efficient power allocation is given by
p̂*_{j,k} = arg max_{p_{j,k}} r_{j,k}(p_{j,k})/(P_C + p_{j,k})   (29)
subject to
L_{j,k} ≤ p_{j,k} ≤ H_{j,k}.   (30)
We can solve (20) by using Theorem 1 to find the optimal power allocation solution. We
can also use the low-complexity iterative algorithms based on the GABS algorithm proposed in
[23] to realize the energy-efficient power allocation for the kth user in the jth small cell BS as
follows.
Algorithm GABS Algorithm
1: Initialization: Each small cell BS allocates the same transmit power to each user, p_{j,k} > 0.
2: Set p^{(1)}_{j,k} = p_{j,k}, h_1 ← ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p^{(1)}_{j,k}}, and c > 1 (let c = 2).
3: if h_1 < 0 then
4:   repeat
5:     p^{(2)}_{j,k} ← p^{(1)}_{j,k}, p^{(1)}_{j,k} ← p^{(1)}_{j,k}/c, and h_1 ← ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p^{(1)}_{j,k}}
6:   until h_1 ≥ 0
7: else
8:   p^{(2)}_{j,k} ← p^{(1)}_{j,k} × c and h_2 ← ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p^{(2)}_{j,k}}
9:   repeat
10:     p^{(1)}_{j,k} ← p^{(2)}_{j,k}, p^{(2)}_{j,k} ← p^{(2)}_{j,k} × c and h_2 ← ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p^{(2)}_{j,k}}
11:   until h_2 ≤ 0
12: end if
13: while no convergence do
14:   p̂*_{j,k} ← (p^{(1)}_{j,k} + p^{(2)}_{j,k})/2, h′ ← ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p̂*_{j,k}}
15:   if h′ > 0 then
16:     p^{(1)}_{j,k} ← p̂*_{j,k}
17:   else
18:     p^{(2)}_{j,k} ← p̂*_{j,k}
19:   end if
20: end while
21: Output p̂*_{j,k}.
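A compact Python rendering of the GABS search above may clarify its structure; this is a sketch of ours (not the authors' code), with the derivative of U_{j,k} approximated by a central difference:

import math

def gabs(U, p0, c=2.0, tol=1e-9):
    """Approximate maximizer of a strictly quasiconcave utility U on (0, inf)."""
    dU = lambda p: (U(p + 1e-9) - U(p - 1e-9)) / 2e-9   # numerical derivative
    p1 = p2 = p0
    if dU(p1) < 0:                      # the maximizer lies to the left of p0
        while dU(p1) < 0:
            p2, p1 = p1, p1 / c
    else:                               # the maximizer lies to the right of p0
        p2 = p1 * c
        while dU(p2) > 0:
            p1, p2 = p2, p2 * c
    while p2 - p1 > tol:                # bisection on the bracket [p1, p2]
        mid = (p1 + p2) / 2
        if dU(mid) > 0:
            p1 = mid
        else:
            p2 = mid
    return (p1 + p2) / 2

# Example with the utility U_{j,k} of (7); all channel values are illustrative.
U = lambda p: (0.7 / 5) * math.log2(1 + p * 1e-10 / 1.398e-13) / (0.1 + p)
print(gabs(U, 0.01))   # unconstrained optimum, to be projected onto [L, H]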
If the output p̂*_{j,k} satisfies the power constraint (30), then p*_{j,k} = p̂*_{j,k}. Otherwise, since U_{j,k}(p_{j,k}) is first strictly increasing and then strictly decreasing in any positive finite p_{j,k}, we obtain the maximum of U_{j,k}(p_{j,k}) at
p*_{j,k} = L_{j,k}   (31)
if p̂*_{j,k} < L_{j,k}, or at
p*_{j,k} = H_{j,k}   (32)
if p̂*_{j,k} > H_{j,k}.
B. Energy-Efficient Wireless Backhaul Bandwidth Allocation
Once the optimal solution P∗ = {p∗j,k ; (j, k) ∈ J × K} is obtained for the convex subproblem
P1.2 parameterized by β, it can be used in the following subproblem P1.3 for the unified wireless
backhaul bandwidth allocation
P1.3: max_β U(β, P*) = max_β Σ_{j=1}^{J} Σ_{k=1}^{K} U_{j,k}(β, p*_{j,k})   (33)
s.t. C1: 0 ≤ β ≤ 1
C2: R_j(β, P*) ≤ C_j(β, P*)   (34)
C3: r_{j,k}(β, p*_{j,k}) ≥ R_t
where Rj (β, P∗) is the function value of Rj evaluated at P∗ , Cj (β, P∗) is the function value of
Cj evaluated at P∗ . In order to obtain the solution to the original problem in (13) and (14), the
two subproblems P1.2 and P1.3 are solved iteratively until convergence.
Maximizing the objective function of P1.3 with respect to β is equivalent to maximizing
(1 − β) only, since (33) is a monotonically decreasing function of β. Problem P1.3 reduces to
a feasibility problem whose solution is the smallest feasible value of β given constraints (34).
According to C2, Rj (β, P∗) ≤ Cj (β, P∗), we have
β ≥ Σ_{k=1}^{K} log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k})) / [K log₂(1 + ((N − B + 1)/B) P_0 G_j/σ²) + Σ_{k=1}^{K} log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k}))].   (35)
According to C3, r_{j,k}(β, p*_{j,k}) ≥ R_t, we have
β ≤ 1 − K R_t / log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k})).   (36)
So we have
β = max_j {φ_j}   (37)
where
φ_j = Σ_{k=1}^{K} log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k})) / [K log₂(1 + ((N − B + 1)/B) P_0 G_j/σ²) + Σ_{k=1}^{K} log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k}))],   (38)
only if R_t satisfies the following condition
R_t ≤ min_j {ϕ_j}   (39)
where
ϕ_j = log₂(1 + ((N − B + 1)/B) P_0 G_j/σ²) · log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k})) / [K log₂(1 + ((N − B + 1)/B) P_0 G_j/σ²) + Σ_{k=1}^{K} log₂(1 + p*_{j,k} g_{j,k}/(σ² + I_{j,k}))].   (40)
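The closed form (37)-(38) is easy to compute once the optimal powers are known. The Python sketch below (our naming; p, g and I are assumed J×K arrays and G a length-J array of backhaul channel gains) mirrors the procedure in which each small cell computes φ_j and the macro BS takes the maximum:

import math

def beta_from_powers(p, g, sigma2, I, P0, G, N, B):
    """Unified bandwidth allocation factor beta = max_j phi_j, eqs. (37)-(38)."""
    J, K = len(p), len(p[0])
    phis = []
    for j in range(J):
        access = sum(math.log2(1 + p[j][k] * g[j][k] / (sigma2 + I[j][k]))
                     for k in range(K))
        backhaul = K * math.log2(1 + (N - B + 1) / B * P0 * G[j] / sigma2)
        phis.append(access / (backhaul + access))   # eq. (38)
    return max(phis)                                # eq. (37)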
IV. ALGORITHM DESIGN
According to the analysis of power allocation and wireless backhaul bandwidth allocation
discussed above, we propose an iterative optimization algorithm and a suboptimal low-complexity
algorithm.
A. Iterative Resource Allocation Algorithm
The proposed iterative resource allocation algorithm is shown in Algorithm 1.
Algorithm 1 Iterative Resource Allocation Algorithm
1: Initialization: Each small cell BS allocates the same transmit power to each user, p_{j,k} > 0, and set l = 1.
2: repeat
3:   Backhaul Bandwidth Allocation
4:   Compute the optimum β according to (37).
5:   Macro BS broadcasts the updated wireless backhaul bandwidth allocation factor to all small cell BSs.
6:   for each small cell BS do
7:     for each small cell user do
8:       Power Allocation
9:       a) find p̂*_{j,k} = arg max U_{j,k}(p_{j,k}) according to GABS;
10:      b) check the power constraint;
11:      if L_{j,k} ≤ p̂*_{j,k} ≤ H_{j,k} then
12:        p*_{j,k} = p̂*_{j,k}
13:      end if
14:      if p̂*_{j,k} < L_{j,k} then
15:        p*_{j,k} = L_{j,k}
16:      end if
17:      if p̂*_{j,k} > H_{j,k} then
18:        p*_{j,k} = H_{j,k}
19:      end if
20:    end for
21:  end for
22:  l = l + 1.
23: until total energy efficiency convergence or l = L_max
In Algorithm 1, each small cell BS calculates φj according to (38) and then sends φj to macro
BS. The macro BS chooses the maximum φj to be the optimal bandwidth allocation factor β
and broadcasts β to all small cell BSs.
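The overall structure of Algorithm 1 can be summarized in a few lines of Python; the sketch below is ours, with U_of, bounds and beta_of assumed to be callables implementing (7), (26)-(27) and (37), and maximize_U any one-dimensional search such as the gabs() sketch above:

def iterative_resource_allocation(U_of, bounds, beta_of, maximize_U,
                                  J, K, L_max=50, eps=1e-6):
    p = [[1e-3] * K for _ in range(J)]          # equal initial power (step 1)
    prev = None
    for _ in range(L_max):                      # outer iterations l = 1..L_max
        beta = beta_of(p)                       # step 4: beta = max_j phi_j
        for j in range(J):
            for k in range(K):
                L, H = bounds(j, k, beta)       # feasibility bounds (26)-(27)
                p_hat = maximize_U(lambda q: U_of(j, k, q, beta), p[j][k])
                p[j][k] = min(max(p_hat, L), H) # projection, eqs. (31)-(32)
        total = sum(U_of(j, k, p[j][k], beta)
                    for j in range(J) for k in range(K))
        if prev is not None and abs(total - prev) < eps:
            break                               # total energy efficiency converged
        prev = total
    return beta, p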
B. Low-Complexity Optimization Algorithm
To reduce the complexity of Algorithm 1, we propose a low-complexity optimization algorithm where the bandwidth allocation factor is calculated from the equal power allocation, and we then fix β to calculate the power allocation according to the scheme proposed in Section III. This low-complexity optimization algorithm is shown in Algorithm 2.
Algorithm 2 Fixed β and Optimum Power Allocation Algorithm
1: Initialization: Each small cell BS allocates the same transmit power to each user, p_{j,k} > 0.
2: Backhaul Bandwidth Allocation
3: Compute the optimum β according to (37).
4: Macro BS broadcasts the wireless backhaul bandwidth allocation factor to all small cell BSs.
5: for each small cell BS do
6:   for each small cell user do
7:     Power Allocation
8:     a) find p̂*_{j,k} = arg max U_{j,k}(p_{j,k}) according to GABS;
9:     b) check the power constraint;
10:    if L_{j,k} ≤ p̂*_{j,k} ≤ H_{j,k} then
11:      p*_{j,k} = p̂*_{j,k}
12:    end if
13:    if p̂*_{j,k} < L_{j,k} then
14:      p*_{j,k} = L_{j,k}
15:    end if
16:    if p̂*_{j,k} > H_{j,k} then
17:      p*_{j,k} = H_{j,k}
18:    end if
19:  end for
20: end for
C. Complexity Analysis
Since the problem formulated in (13) and (14) is not convex, the only way to obtain the
optimal solution is to use the method of exhaustion. If we assume that it costs P operations to
calculate rj,k and it costs Q operations to calculate Cj , the complexity of checking C2 and C3
in (14) entails KP + K + Q operations and P + 1 operations, respectively. If we assume that
it costs S operations to calculate Uj,k , the complexity of obtaining the total energy efficiency of
all small cell users entails JKS + (J − 1) (K − 1) operations. The total complexity of getting
the value of the objective function in (13) under the constraints in (14) entails KP + K + P + 1 + JKS + (J − 1)(K − 1) operations for specific p_{j,k} and β values. If we assume the value step size for p_{j,k} is a and the value step size for β is b, there are (1/b)(P_max/a)^{JK} choices for the values of p_{j,k} and β. Therefore, the complexity of the method of exhaustion is O(JKS · (1/b)(P_max/a)^{JK}).
In Algorithm 1, the worst-case complexity of calculating bandwidth allocation factor β from
(37) entails J operations in each iteration. If we assume that it costs Ω operations in each
GABS to search the optimum power allocation without power constraint, then the worst-case
complexity of finding the power allocation for every user in each small cell entails JK (Ω+4)
operations in each iteration. Suppose the Algorithm 1 needs ∆ iterations to converge, so the
total complexity of Algorithm 1 is O (JKΩ∆). Since iteration is not applied in Algorithm 2,
the total complexity of Algorithm 2 is O (JKΩ), which is less than that of Algorithm 1. In the
simulation, the typical value for ∆ is around 16, the typical value for Ω is less than 500, and the
typical values for 1/b and P_max/a are both 100. So the complexities of Algorithm 1 and Algorithm
2 are always less than that of the method of exhaustion. When the number of small cells J and
the number of users in each small cell K increase, the complexity of the method of exhaustion
increases exponentially, so the complexity of the method of exhaustion is much larger than the
complexities of two proposed algorithms.
V. SIMULATION RESULTS
Simulation results are given in this section to evaluate the performance of the proposed power
allocation and backhaul bandwidth allocation algorithms. In the simulations, it is assumed that
small cells are uniformly distributed in the macrocell coverage area, and small cell users are
uniformly distributed in the coverage area of their serving small cell. The AWGN power is σ² = 3.9811 × 10⁻¹⁴ W. The coverage radius of the macrocell is 500 m, and that of a small cell is 10 m. Small
cell has a minimal distance of 50 m from the macro BS. The minimal distance between small
cell BSs is 40 m. We assume that the channel fading is composed of path loss, shadowing fading,
and Rayleigh fading. The pathloss model for small cell users is based on [36]. The lognormal
shadowing between small cell BS and small cell users is 10 dB. At the macro BS, we assume
that the transmit power is 33 dBm, the antenna array size N = 100 and beamforming group size
is B = 20. We consider that all the small cell users have the same QoS requirement.
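For reference, the dBm figures used in this section convert to watts as follows (a convenience snippet of ours):

def dbm_to_watt(x):
    """Convert a power level in dBm to watts."""
    return 10 ** (x / 10) / 1000

print(dbm_to_watt(33))   # ~1.995 W: the macro BS transmit power
print(dbm_to_watt(20))   # 0.1 W: the small cell power cap P_max = 20 dBm used below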
Figure 2 shows the convergence in terms of the energy efficiency of all small cell users for
the proposed Algorithm 1 versus the number of iterations, where J = 5, Rt = 0.01 bps/Hz,
Pmax = 20 dBm. It can be observed that the proposed resource allocation algorithm takes nearly
16 iterations to converge to stable solutions. This result, together with the previous analysis,
ensures that the proposed Algorithm 1 is applicable in heterogeneous small cell networks.
Figure 3 shows the total energy efficiency of all small cell users when the number of users per
small cell is increased from 2 to 10, for Algorithm 2 under Pmax = 7 dBm, Pmax = 10 dBm and
Pmax = 20 dBm compared with Algorithm 1 under Pmax = 20 dBm. The simulation parameters
are set as J = 5, Rt = 0.01 bps/Hz. Fig. 3 shows that the energy efficiency performance of
Algorithm 1 is about 20% higher than that of Algorithm 2. It can also be seen from Fig. 3 that the more users there are per small cell, the better the performance, because of multi-user diversity.
Figure 4 shows the total energy efficiency of all small cell users when the number of small
cells is increased from 3 to 15, for Algorithm 2 under Pmax = 7 dBm, Pmax = 10 dBm and
Pmax = 20 dBm when compared with Algorithm 1 under Pmax = 20 dBm. The simulation
parameters are set as K = 5, Rt = 0.01 bps/Hz. Fig. 4 indicates that the more small cells there are, the better the performance. It can also be seen from Fig. 4 that the energy efficiency performance of Algorithm 1 is always better than that of Algorithm 2, and the gap between them grows as the number of small cells increases. The energy efficiency performance of Algorithm 1 is about 30% higher than that of Algorithm 2 when the number of small cells is 10.
Figure 5 shows the total downlink capacity of all small cell users when the number of users
per small cell is increased from 2 to 10, for Algorithm 2 under Pmax = 7 dBm, Pmax = 10
dBm and Pmax = 20 dBm compared with Algorithm 1 under Pmax = 20 dBm. The simulation
parameters are set as J = 5, Rt = 0.01 bps/Hz. Fig. 5 shows that the total downlink capacity of
Algorithm 1 is more than 3 bps/Hz higher than that of Algorithm 2. It can also be seen from
Fig. 5 that the more users there are per small cell, the better the performance, due to multi-user diversity. The total downlink capacity of Algorithm 1 is 21% higher than that of Algorithm 2 when the number of users in each small cell is 10.
Figure 6 shows the total downlink capacity of all small cell users when the number of small
cells is increased from 3 to 15, for Algorithm 2 under Pmax = 7 dBm, Pmax = 10 dBm and
Pmax = 20 dBm compared with Algorithm 1 under Pmax = 20 dBm. The simulation parameters
are set as K = 5, Rt = 0.01 bps/Hz. Fig. 6 illustrates that Algorithm 1 is superior to Algorithm
2 in terms of the total downlink capacity and the gap between them becomes larger when the
number of small cells increases. The total downlink capacity of Algorithm 1 is 29% larger than
that of Algorithm 2 when there are 14 small cells in the heterogeneous network.
Figure 7 shows the total energy efficiency of all the small cell users when using Algorithm
2 for power constraint Pmax ranging from 0 dBm to 12.79 dBm where the number of users
in each small cell is 3, 4, 5. The simulation parameters are set as J = 5, Rt = 0.01 bps/Hz.
Fig. 7 shows that the more users there are in each small cell, the higher the total energy efficiency, as already seen in Fig. 3. It can also be seen from Fig. 7 that the larger the power constraint, the better the performance. This is because a larger power constraint enlarges the feasible region of the optimization variable.
Figure 8 shows the total energy efficiency of all the small cell users when the number of users
per small cell is increased from 2 to 10, for different algorithms. Algorithm 1 and Algorithm 2 are
the iterative optimization algorithm and the low-complexity optimization algorithm, respectively,
which we have proposed in Section IV. Algorithm 3 is an existing energy efficiency optimization
algorithm with equal power allocation and Algorithm 4 is an algorithm that uses the optimum
power allocation we proposed given a random β to optimize energy efficiency. All the algorithms
are under the constraint Pmax = 20 dBm. Fig. 8 indicates that the more users there are in each small cell, the better the performance, as already shown in Fig. 3. It can also be seen from Fig. 8 that Algorithm 1 has the best performance, followed by Algorithm 2, Algorithm 3 and Algorithm 4. The energy efficiency performance of Algorithm 1 is 30.5%
and 56.6% higher than that of Algorithm 3 and Algorithm 4, respectively.
Figure 9 shows the total energy efficiency of all the small cell users when the number of small
cells is increased from 2 to 5, for the optimal solution and the two proposed algorithms. Since
the complexity of the method of exhaustion is high, we only consider the situation with small
dimension where there are two users located in each small cell, K = 2. All the algorithms are
under the setting of Pmax = 20 dBm and Rt = 0.01 bps/Hz. From Fig. 9, we can observe that the
difference between the optimal solution and Algorithm 1 in terms of energy efficiency is small,
which ensures the effectiveness of the proposed algorithms. The energy efficiency performance
of the optimal solution is only about 7% and 24% higher than that of Algorithm 1 and Algorithm
2 respectively when the number of small cells is 3. The difference between the optimal solution
and the proposed Algorithm 1 is mainly caused by the approximation of C2 in (17). We can also
observe that the performance of Algorithm 1 is slightly better than that of the existing algorithm,
which is the backhaul bandwidth allocation in conjunction with the resource allocation algorithm
in [14]. This phenomenon can be explained as follows. As the QoS requirement of small cell
users increases, more power is required to meet the higher QoS requirement.
VI. CONCLUSION
In this paper, we investigated the energy-efficient wireless backhaul bandwidth allocation and
power allocation in a heterogeneous small cell network. We demonstrated the existence of a
unique globally optimal energy efficiency solution and provided an iterative algorithm to obtain
this optimum. For the downlink scenario, we first found the near optimal energy-efficient resource
allocation approach, and then developed a low-complexity suboptimal algorithm by exploring the
inherent structure of the objective function and the feature of energy-efficient design. From the
simulation results, we observed that energy efficiency is improved by increasing the number of
small cells and the number of users per small cell, and the capacity is also improved by increasing
the number of small cells and the number of users per small cell. Simulation results showed a significant energy efficiency improvement of the proposed iterative optimization algorithm over the low-complexity optimization algorithm and the existing schemes. The proposed low-complexity
algorithms can achieve a promising tradeoff between performance and complexity. In the future,
we will investigate the nonunified backhaul bandwidth allocation and inter-small-cell interference
in heterogeneous small cell networks.
APPENDIX A
PROOF OF LEMMA 1
We first focus on Uj,k (pj,k ) and then we can obtain the properties of U(P). If every user
in each small cell can reach the maximum energy efficiency, all small cell users can reach the
maximum energy efficiency.
Denote the α–superlevel sets of Uj,k (pj,k ) as
Sα = { pj,k ≥ 0| Uj,k (pj,k ) ≥ α}
(41)
where pj,k is nonnegative. Based on the propositions in [35], Uj,k (pj,k ) is strictly quasiconcave if
and only if Sα is strictly convex for any real number α. In this case, when α < 0, no points exist
on the contour Uj,k (pj,k ) = α. When α = 0, only pj,k = 0 is on the contour Uj,k (0) = α. Hence,
Sα is strictly convex when α ≤ 0. Now, we investigate the case when α > 0. We can rewrite
the Sα as Sα = { pj,k ≥ 0| αPC + αpj,k − rj,k (pj,k ) ≤ 0}. Since rj,k (pj,k ) is strictly concave in
pj,k , −rj,k (pj,k ) is strictly convex in pj,k ; therefore, Sα is strictly convex. Hence, we have the
strict quasiconcavity of Uj,k (pj,k ).
Next, we can obtain the partial derivative of U_{j,k}(p_{j,k}) with respect to p_{j,k} as
∂U_{j,k}(p_{j,k})/∂p_{j,k} = [(P_C + p_{j,k}) r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k})] / (P_C + p_{j,k})² = f(p_{j,k}) / (P_C + p_{j,k})²   (42)
where f(p_{j,k}) = (P_C + p_{j,k}) r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k}) and r′_{j,k}(p_{j,k}) is the first partial derivative of r_{j,k}(p_{j,k}) with respect to p_{j,k}. If p*_{j,k} exists such that ∂U_{j,k}(p_{j,k})/∂p_{j,k} |_{p_{j,k}=p*_{j,k}} = 0, it is unique, i.e., there is a unique p*_{j,k} such that f(p*_{j,k}) = 0. In the following, we investigate the conditions under which p*_{j,k} exists.
The derivative of f (pj,k ) is
f′(p_{j,k}) = (P_C + p_{j,k}) r″_{j,k}(p_{j,k})   (43)
where r″_{j,k}(p_{j,k}) is the second partial derivative of r_{j,k}(p_{j,k}) with respect to p_{j,k}. Since r_{j,k}(p_{j,k}) is strictly concave in p_{j,k}, we have r″_{j,k}(p_{j,k}) < 0 and thus f′(p_{j,k}) < 0. Hence, f(p_{j,k}) is strictly decreasing.
lim_{p_{j,k}→∞} f(p_{j,k}) = lim_{p_{j,k}→∞} ((P_C + p_{j,k}) r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k})) = lim_{p_{j,k}→∞} (P_C r′_{j,k}(p_{j,k}) + p_{j,k} r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k}))   (44)
where
r′_{j,k}(p_{j,k}) = ((1 − β)/K) (g_{j,k}/(σ² + I_{j,k})) (1/ln 2) (1/(1 + p_{j,k} g_{j,k}/(σ² + I_{j,k})))   (45)
and
lim_{p_{j,k}→∞} r′_{j,k}(p_{j,k}) = 0.   (46)
So we have
lim_{p_{j,k}→∞} P_C r′_{j,k}(p_{j,k}) = 0.   (47)
According to L'Hôpital's rule, it is easy to show that
lim_{p_{j,k}→∞} p_{j,k} r′_{j,k}(p_{j,k}) = lim_{p_{j,k}→∞} ((1 − β)/K) (g_{j,k}/(σ² + I_{j,k})) (1/ln 2) (p_{j,k}/(1 + p_{j,k} g_{j,k}/(σ² + I_{j,k}))) = (1 − β)/(K ln 2),   (48)
and
lim_{p_{j,k}→∞} (−r_{j,k}(p_{j,k})) = lim_{p_{j,k}→∞} (−((1 − β)/K) log₂(1 + p_{j,k} g_{j,k}/(σ² + I_{j,k}))) = −∞.   (49)
So we have
lim_{p_{j,k}→∞} f(p_{j,k}) < 0.   (50)
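The limit in (48) can be confirmed numerically; the following Python snippet (ours, with arbitrary parameter values) shows p_{j,k} r′_{j,k}(p_{j,k}) approaching (1 − β)/(K ln 2):

import math

beta, K, g, sigma2, I = 0.3, 5, 1e-10, 3.9811e-14, 1e-13
snr = g / (sigma2 + I)

def r_prime(p):
    return (1 - beta) / K * snr / (math.log(2) * (1 + p * snr))

for p in (1e2, 1e4, 1e6):
    print(p * r_prime(p))                # tends to the limit below
print((1 - beta) / (K * math.log(2)))    # (1 - beta)/(K ln 2) ~ 0.2020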
Besides,
lim_{p_{j,k}→0} f(p_{j,k}) = lim_{p_{j,k}→0} ((P_C + p_{j,k}) r′_{j,k}(p_{j,k}) − r_{j,k}(p_{j,k})) = P_C r′_{j,k}(p_{j,k})|_{p_{j,k}=0} − r_{j,k}(0)   (51)
where
r′_{j,k}(p_{j,k})|_{p_{j,k}=0} = ((1 − β)/K) (g_{j,k}/(σ² + I_{j,k})) (1/ln 2)   (52)
and
r_{j,k}(0) = 0,   (53)
so that
lim_{p_{j,k}→0} f(p_{j,k}) = ((1 − β)/K) (P_C g_{j,k}/(σ² + I_{j,k})) (1/ln 2) > 0.   (54)
So, together with lim_{p_{j,k}→∞} f(p_{j,k}) < 0, we see that p*_{j,k} exists and U_{j,k}(p_{j,k}) is first strictly increasing and then strictly decreasing in p_{j,k}. Lemma 1 is readily obtained.
REFERENCES
[1] C. Jiang, H. Zhang, Y. Ren, and H.-H. Chen, “Energy-efficient non-cooperative cognitive radio networks: Micro, meso
and macro views,” IEEE Commun. Mag., vol. 52, no. 7, pp. 14–20, July 2014.
[2] F. R. Yu, X. Zhang, and V. C. M. Leung, Green Communications and Networking, CRC Press, 2012.
[3] C. Xu, M. Sheng, C. Yang, X. Wang, and L. Wang, “Pricing-based multiresource allocation in OFDMA cognitive radio
networks: an energy efficiency perspective,” IEEE Trans. Veh. Technol., vol. 63, no. 5, pp. 2336–2348, June 2014.
[4] T. Edler and S. Lundberg, “Energy efficiency enhancements in radio access networks,” Ericsson Review, 2004,
http://www.ericsson.com/ericsson/corpinfo/publications/review/2004 01/files/2004015.pdf
[5] Y. Chen, S. Zhang, S. Xu, and G. Y. Li, “Fundamental trade-offs on green wireless networks,” IEEE Commun. Mag., vol.
49, no. 6, pp. 30–37, June 2011.
[6] H. Zhang, X. Chu, W. Guo, and S. Wang, “Coexistence of WiFi and heterogeneous small cell networks sharing unlicensed
spectrum,” IEEE Commun. Mag., vol. 53, no. 3, pp. 158–164, Mar. 2015.
[7] D. Lopez-Perez, X. Chu, A. V. Vasilakos, and H. Claussen, “Power minimization based resource allocation for interference
mitigation in OFDMA femtocell networks,” IEEE J. Sel. Areas Commun., vol. 32, no. 2, pp. 333–344, Feb. 2014.
[8] H. Zhang, C. Jiang, J. Cheng, and V. C. M. Leung, “Cooperative interference mitigation and handover management for
heterogeneous cloud small cell networks,” IEEE Wireless Commun., vol. 22, no. 3, pp. 92–99, June 2015.
[9] H. Zhang, C. Jiang, N. C. Beaulieu, X. Chu, X. Wang, and T. Quek, “Resource allocation for cognitive small cell networks:
A cooperative bargaining game theoretic approach,” IEEE Trans. Wireless Commun., vol. 14, no. 6, pp. 3481–3493, June
2015.
[10] W. Cheng, X. Zhang, and H. Zhang, “Statistical-QoS driven energy-efficiency optimization over green 5G mobile wireless
networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3092-3107, Dec. 2016.
[11] H. Zhang, C. Jiang, X. Mao, and H.-H. Chen,“Interference-limited resource optimization in cognitive femtocells with
fairness and imperfect spectrum sensing,” IEEE Trans. Veh. Technol., vol. 65, no. 3, pp. 1761–1771, Mar. 2016.
[12] H. Zhang, C. Jiang, N. C. Beaulieu, X. Chu, X. Wen, and M. Tao, “Resource allocation in spectrum-sharing OFDMA
femtocells with heterogeneous services,” IEEE Trans. Commun., vol. 62, no. 7, pp. 2366–2377, July 2014.
[13] M. Le Treust and S. Lasaulce, “A repeated game formulation of energy-efficient decentralized power control,” IEEE Trans.
Wireless Commun., vol. 9, no. 9, pp. 2860–2869, Sep. 2010.
[14] X. Ge, B. Du, Q. Li, and D. S. Michalopoulos, “Energy efficiency of multi-user multi-antenna random cellular networks with minimum distance constraints,” IEEE Trans. Veh. Technol., vol. 66, no. 2, pp. 1696–1708, Feb. 2017.
[15] S. Buzzi and D. Saturnino, “A game-theoretic approach to energy-efficient power control and receiver design in cognitive
CDMA wireless networks,” IEEE J. Sel. Topics Signal Proc., vol. 5, no. 1, pp. 137–150, Feb. 2011.
[16] R. Xie, F. R. Yu, H. Ji, and Y. Li, “ Energy-efficient resource allocation for heterogeneous cognitive radio networks with
femtocells,” IEEE Trans. Wireless Commun., vol. 11, no. 11, pp. 3910–3920, Nov. 2012.
[17] C. M. G. Gussen, E. V. Belmega, and M. Debbah, “Pricing and bandwidth allocation problems in wireless multi-tier
networks,” Signals, Systems and Computers (ASILOMAR), 2011 Conference Record of the Forty Fifth Asilomar Conference
on, pp. 1633–1637, Nov. 2011.
[18] Q. Vu, L. Tran, M. Juntti, and E. Hong, “Energy-efficient bandwidth and power allocation for multi-homing networks,”
IEEE Trans. Signal Process., vol. 63, no. 7, pp. 1684–1699, Apr. 2015.
[19] W. Wang, X. Wang, and A. A. Nilsson, “Energy-efficient bandwidth allocation in wireless networks: Algorithms, analysis,
and simulations,” IEEE Trans. Wireless Commun., vol. 5, no. 5, pp. 1103–1114, May 2006.
[20] D. W. K. Ng, E. S. Lo, and R. Schober, “Energy-efficient resource allocation in multi-cell OFDMA systems with limited
backhaul capacity,” IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3618–3631, Oct. 2012.
[21] X. Ge, S. Tu, T. Han, Q. Li, and G. Mao, “Energy efficiency of small cell backhaul networks based on Gauss-Markov
mobile models,” IET Networks, vol. 4, no. 2, pp. 158–167, Mar. 2015.
[22] G. Nie, H. Tian, and J. Ren, “Energy efficient forward and backhaul link optimization in OFDMA small cell networks,”
IEEE Commun. Lett., vol. 19, no. 11, pp. 1989–1992, Nov. 2015.
[23] G. Miao, N. Himayat, and G. Y. Li, “Energy-efficient link adaptation in frequency-selective channels,” IEEE Trans.
Commun., vol. 58, no. 2, pp. 545–554, Feb. 2010.
[24] H. Liu, H. Zhang, J. Cheng, and V. C. M. Leung, “Energy efficient power allocation and backhaul design in heterogeneous small cell networks,” Proceedings of IEEE International Conference on Communications (ICC 2016), Kuala Lumpur,
Malaysia, May 23-27, 2016.
[25] S. Cui, A. Goldsmith, and A. Bahai, “Energy-constrained modulation optimization,” IEEE Trans. Wireless Commun., vol.
4, no. 5, pp. 2349–2360, Sep. 2005.
[26] A. Y. Wang, S. Chao, C. G. Sodini, and A. P. Chandrakasan, “Energy efficient modulation and MAC for asymmetric RF
microsensor system,” Proc. Int. Symp. Low Power Electronics Design, Huntington Beach, CA, pp. 106–111, Aug. 2001.
[27] S. Cui, A. Goldsmith, and A. Bahai, “Energy-efficiency of MIMO and cooperative MIMO techniques in sensor networks,”
IEEE J. Sel. Areas Commun., vol. 22, no. 6, pp. 1089–1098, Aug. 2004.
[28] S. Verdu, “Spectral efficiency in the wideband regime,” IEEE Trans. Inf. Theory., vol. 48, no. 6, pp. 1319–1343, June
2002.
[29] F. Meshkati, H. V. Poor, S. C. Schwartz, and N. B. Mandayam, “An energy-efficient approach to power control and receiver
design in wireless networks,” IEEE Trans. Commun., vol. 53, no. 11, pp. 1885–1894, Nov. 2005.
[30] R. G. Gallager, “Power limited channels: Coding, multiaccess, and spread spectrum,” in Proc. Conf. Inf. Sci. Syst., vol. 1,
Mar. 1988.
[31] D. Goodman and N. Mandayam, “Power control for wireless data,” IEEE Wireless Commun., vol. 7, no. 2, pp. 48–54,
Apr. 2000.
[32] N. Feng, S. C. Mau, and N. B. Mandayam, “Pricing and power control for joint network-centric and user-centric radio
resource management,” IEEE Trans. Commun., vol. 52, no. 9, pp. 1547–1557, Sep. 2004.
[33] N. Wang, E. Hossain, and V. K. Bhargava, “Joint downlink cell association and bandwidth allocation for wireless
backhauling in two-tier HetNets with large-scale antenna arrays,” IEEE Trans. Wireless Commun., vol. 15, no. 5, pp.
3251–3268, May 2016.
[34] D. Bethanabhotla, O. Y. Bursalioglu, H. C. Papadopoulos, and G. Caire, “User association and load balancing for cellular
massive MIMO,” Inf. Theory Appl. Workshop (ITA), Feb. 2014, pp. 1–10.
[35] E. Wolfstetter, Topics in Microeconomics: Industrial Organization, Auctions, and Incentives, Cambridge University Press,
1999.
[36] Further Advancements for E-UTRA, Physical Layer Aspects, 3GPP Std. TR 36.814 v9.0.0, 2010.
Fig. 1. Topology of a heterogeneous small cell network.
Fig. 2. The convergence in terms of energy efficiency of all small cell users over the number of iterations (energy efficiency, bits/Hz/Joule, versus iteration index; P_max = 20 dBm, K = 5 and K = 7).
Fig. 3. Energy efficiency versus the number of users per small cell (Algorithm 1 at P_max = 20 dBm; Algorithm 2 at P_max = 7, 10, 20 dBm).
Fig. 4. Energy efficiency versus the number of small cells (Algorithm 1 at P_max = 20 dBm; Algorithm 2 at P_max = 7, 10, 20 dBm).
Fig. 5. Capacity versus the number of users per small cell (Algorithm 1 at P_max = 20 dBm; Algorithm 2 at P_max = 7, 10, 20 dBm).
Fig. 6. Capacity versus the number of small cells (Algorithm 1 at P_max = 20 dBm; Algorithm 2 at P_max = 7, 10, 20 dBm).
Fig. 7. Energy efficiency versus the power constraint (Algorithm 2 with K = 3, 4, 5).
Fig. 8. Energy efficiency comparison for different algorithms (Algorithms 1-4, P_max = 20 dBm).
Fig. 9. Energy efficiency comparison for the optimal solution and proposed algorithms (optimal solution, Algorithm 1, Algorithm 2, and the existing algorithm; P_max = 20 dBm).
KAZHDAN CONSTANTS, CONTINUOUS PROBABILITY
MEASURES WITH LARGE FOURIER COEFFICIENTS AND
RIGIDITY SEQUENCES
arXiv:1804.01369v1 [math.DS] 4 Apr 2018
by
Catalin Badea & Sophie Grivaux
To the memory of Jean-Pierre Kahane (1926-2017)
Abstract. — Exploiting a construction of rigidity sequences for weakly mixing dynamical
systems by Fayad and Thouvenot, we show that for all integers p_1, . . . , p_r there exists a continuous probability measure µ on the unit circle T such that
inf_{k_1≥0,...,k_r≥0} |µ̂(p_1^{k_1} · · · p_r^{k_r})| > 0.
This result applies in particular to the Furstenberg set F = {2^k 3^{k′} ; k ≥ 0, k′ ≥ 0}, and
disproves a 1988 conjecture of Lyons inspired by Furstenberg’s famous ×2-×3 conjecture.
We also estimate the modified Kazhdan constant of F and obtain general results on rigidity
sequences which allow us to retrieve essentially all known examples of such sequences.
1. Introduction
Denote by T the unit circle T = {λ ∈ C ; |λ| = 1}, by M(T) the set of (finite) complex
Borel measures on T and by P(T) the set of Borel probability measures on T. The Fourier
coefficients of µ ∈ M(T) are defined here as
\[
\widehat{\mu}(n)=\int_{\mathbb{T}}\lambda^{n}\,d\mu(\lambda).
\]
A measure µ ∈ P(T) is said to be continuous, or atomless, if µ({λ}) = 0 for every λ ∈ T.
We denote the set of continuous probability measures on T by Pc(T). According to a
theorem of Wiener and the Koopman-von Neumann lemma, µ is continuous if and only
if µ̂(n) tends to zero as n tends to infinity along a sequence in N of density one.
2000 Mathematics Subject Classification. — 43A25, 37A05, 37A25.
Key words and phrases. — Fourier coefficients of continuous measures; non-lacunary semigroups of
integers; Furstenberg Conjecture; rigidity sequences for weakly mixing dynamical systems; Kazhdan subsets
of Z.
This work was supported in part by the project FRONT of the French National Research Agency (grant
ANR-17-CE40-0021) and by the Labex CEMPI (ANR-11-LABX-0007-01).
We are grateful to Étienne Matheron for pointing out a simplification of our original proof of Theorem 2.4,
and to Étienne Matheron, Martine Queffélec, Jean-Paul Thouvenot and Benjy Weiss for several interesting
discussions.
For every µ ∈ P(T), we define µ̃ by setting µ̃(A) = µ(A^c) for every Borel set A ⊆ T, with
A^c = {λ̄ ; λ ∈ A}. Then ν := µ ∗ µ̃ has the property that ν̂(n) = |µ̂(n)|² ≥ 0 for every
n ∈ Z, and ν belongs to Pc(T) as soon as µ does.
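The identity ν̂(n) = |µ̂(n)|² is easy to check numerically. The following minimal Python sketch (the atoms and weights are arbitrary illustrative choices, not taken from the paper) compares the Fourier coefficients of a discrete µ with those of ν = µ ∗ µ̃:

import numpy as np

# Toy discrete measure mu with atoms e^{2 pi i a_j} and weights w_j.
atoms = np.array([0.0, 0.17, 0.42, 0.78])
w = np.array([0.4, 0.3, 0.2, 0.1])

def mu_hat(n):
    return np.sum(w * np.exp(2j * np.pi * n * atoms))

# mu~ has atoms at the conjugate points, so mu * mu~ is supported on the
# pairwise angle differences a_j - a_l with weights w_j * w_l.
def nu_hat(n):
    diffs = atoms[:, None] - atoms[None, :]
    return np.sum(np.outer(w, w) * np.exp(2j * np.pi * n * diffs))

for n in range(1, 5):
    print(n, abs(mu_hat(n)) ** 2, nu_hat(n).real)   # the two columns agree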
A conjecture of Russell Lyons. — Our aim in this paper is to study some non-lacunary sets of positive integers from a Fourier analysis point of view, and to construct
some probability measures which have large Fourier coefficients on such sets. In particular,
we disprove a 1988 conjecture of Lyons [31], called there Conjecture (C4), which reads as
follows:
Lyons' Conjecture (C4): If S is a non-lacunary semigroup of integers, and if
µ ∈ Pc(T), there exists an infinite sequence (nk)k≥1 of elements of S such that
µ̂(nk) → 0 as k → +∞.
This conjecture of Lyons is inspired by Furstenberg's famous conjecture concerning
common invariant probability measures for two commuting automorphisms Tp : λ ↦ λ^p
and Tq : λ ↦ λ^q of the unit circle T when p and q are two multiplicatively independent
integers (i.e. p and q are not both powers of the same integer). In this setting, Furstenberg's
conjecture states that the only continuous probability measure on T invariant by both Tp
and Tq is the Lebesgue measure on T. Furstenberg himself proved [22] that if S is any
non-lacunary semigroup of integers (i.e. if S is not contained in any semigroup of the form
{a^n ; n ≥ 0}, a ≥ 2), the only infinite closed S-invariant subset of T is T itself. See [10]
for an elementary proof of this result and the references mentioned in [15, Chapter 2] for
several extensions. Since S = {p^k q^{k'} ; k, k' ≥ 0} is a non-lacunary semigroup whenever
p and q are multiplicatively independent, the only infinite closed subset of T which is
simultaneously Tp-invariant and Tq-invariant is T. Starting with the work of Lyons in
[31], Furstenberg's conjecture has given rise to an impressive amount of related questions
and results, concerning in particular the dynamics of commuting group automorphisms.
We refer the reader to the papers [35], [14] or [25] for example, as well as to the texts [29],
[26] or [36] for surveys of results related to this conjecture, as well as for perspectives.
As written in [31], conjecture (C4) is a natural version of Furstenberg's conjecture about
measures, but not involving invariance. If (C4) were true, it would obviously imply an
affirmative answer to the Furstenberg conjecture.
Kazhdan sets and modified Kazhdan constants. — It turns out that Lyons' conjecture is related to an important property of subsets of Z, namely that of being or not being a
Kazhdan subset of Z. Kazhdan subsets Q of a second-countable topological group G are
those for which there exists ε > 0 such that any strongly continuous representation π of
G on a complex separable Hilbert space H admitting a vector x ∈ H with ||x|| = 1 which
is ε-invariant on Q (i.e. supg∈Q ||π(g)x − x|| < ε) has a G-invariant vector. Such an ε is
called a Kazhdan constant for Q, and the supremum of all ε’s with this property is the
Kazhdan constant of Q. Groups with Property (T), also called Kazhdan groups, are those
which admit a compact Kazhdan set. See the book [7] for more on Property (T) and its
numerous important applications.
As suggested in [7, Sec. 7.12], it is of interest to study Kazhdan sets in groups which
do not have Property (T), such as locally compact abelian groups, Heisenberg groups,
etc. See [4] and also [17] for a study of such problems. In the case of the group Z, the
definition above is equivalent to the following one:
Definition 1.1. — (Kazhdan sets and constants) A subset Q ⊂ Z is said to be a Kazhdan
set if there exists ε > 0 such that any unitary operator U acting on a complex separable
Hilbert space H satisfies the following property: if there exists a vector x ∈ H with
||x|| = 1 such that sup_{n∈Q} ||U^n x − x|| < ε, then there exists a non-zero vector y ∈ H such
that Uy = y (i.e. 1 is an eigenvalue of U). We will say in this case that (Q, ε) is a Kazhdan
pair. We define the Kazhdan constant of Q as
\[
\mathrm{Kaz}(Q)=\inf_{U}\,\inf_{\|x\|=1}\,\sup_{q\in Q}\|U^{q}x-x\|,
\]
where the first infimum is taken over all unitary operators U on H without fixed vectors.
It follows from [7, p. 30] that 0 ≤ Kaz(Q) ≤ √2 for every Q ⊆ Z.
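For later use (this standard computation is implicit in the proofs of Section 4), note that for a unitary operator U and a unit vector x,
\[
\|U^{n}x-x\|^{2}=\langle U^{n}x-x,\,U^{n}x-x\rangle=2-2\,\Re e\,\langle U^{n}x,x\rangle .
\]
In particular, for U = Mλ the multiplication operator on L²(T, µ) and x the constant function 1, one has ⟨U^n x, x⟩ = µ̂(n), so that ||U^n x − x||² = 2(1 − ℜe µ̂(n)).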
Several characterizations of Kazhdan subsets of Z were obtained in [4] as consequences
of results applying to a much wider class of groups; self-contained proofs of these characterizations of Kazhdan subsets of Z, involving only classical tools from harmonic analysis,
were obtained in the paper [5]. One of the characterizations of generating Kazhdan sets
obtained in [4, Th. 6.1] (see also [5, Th. 4.12]) runs as follows. Recall that Q is said to be
generating in the group Z if the smallest subgroup containing Q is Z itself.
Theorem 1.2 ([4]). — Let Q be a generating subset of Z. Then Q is a Kazhdan subset of
Z if and only if there exists ε' ∈ (0, √2] such that (Q, ε') is a modified Kazhdan pair, that is,
any unitary operator V acting on a complex separable Hilbert space H satisfies the following
property: if there exists a vector x ∈ H with ||x|| = 1 such that sup_{n∈Q} ||V^n x − x|| < ε',
then V has at least one eigenvalue.
We now define the modified Kazhdan constant of Q as
\[
\widetilde{\mathrm{Kaz}}(Q)=\inf_{V}\,\inf_{\|x\|=1}\,\sup_{q\in Q}\|V^{q}x-x\|,
\]
where the first infimum is taken this time over unitary operators V on H without eigenvalues (that is, with continuous spectra). Therefore
\[
0\le \mathrm{Kaz}(Q)\le \widetilde{\mathrm{Kaz}}(Q)\le \sqrt{2},
\]
and for every Q ⊆ Z, Kaz(Q) = 0 if and only if K̃az(Q) = 0 if and only if Q is a non-Kazhdan set. The property of being or not being a Kazhdan set can also be expressed in terms
of Fourier coefficients of probability measures; see Section 4 for a discussion.
The characterization of Kazhdan subsets of Z obtained by the authors in [4] (see also
[5]) implies that the generating subsets Q of Z which satisfy the property stated in (C4)
(namely that there exists for every µ ∈ Pc(T) an infinite sequence (nk)k≥1 of elements
of Q such that µ̂(nk) → 0 as k → +∞) are exactly the Kazhdan subsets of Z with
modified Kazhdan constant K̃az(Q) = √2. Since √2 is the modified Kazhdan constant
of Z seen as a subset of itself, √2 is the maximal modified Kazhdan constant, and thus
(C4) can be reformulated as: every generating non-lacunary semigroup S of integers is
a Kazhdan subset of Z with maximal modified Kazhdan constant √2. The relationship
between the Furstenberg ×2-×3 conjecture and modified Kazhdan constants can also be seen
directly from Proposition 4.4 below.
2. Main results
The first main result of this paper is the following:
Theorem 2.1. — Let p1, . . . , pr be distinct positive integers and set
\[
E=\{p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\;;\;k_{1}\ge 0,\dots,k_{r}\ge 0\}.
\]
There exists a continuous probability measure µ on T such that
\[
\inf_{k_1\ge 0,\dots,k_r\ge 0}\bigl|\widehat{\mu}\bigl(p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\bigr)\bigr|>0.
\]
Equivalently, K̃az(E) < √2.
It should be noted that, as conjecture (C4) does not involve invariant measures, we do
not assume in Theorem 2.1 that the integers pj are multiplicatively independent. Notice
also that the statement of Theorem 2.1 is well-known in the lacunary case: if r = 1 it
suffices to consider the classical Riesz product associated to the sequence (p^k)k≥0. In
the non-lacunary case, Theorem 2.1 disproves Conjecture (C4), as well as the related
conjectures (C5) and (C6) of [31] (which are both stronger than (C4)). It applies in
particular to the Furstenberg set F = {2^k 3^{k'} ; k, k' ≥ 0} and shows the existence of a
measure µ ∈ Pc(T) such that
\[
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})>0.
\]
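For the lacunary case just mentioned, the lower bound can be observed numerically. The sketch below (a companion illustration, not part of the argument; the parameters p = 3 and K = 6 are our choices, and we take p ≥ 3 so that the frequencies are dissociate) approximates the partial Riesz product density on a grid and reads off µ̂(p^k) = 1/2 with the FFT:

import numpy as np

# Partial Riesz product: dmu(t) = prod_{k<K} (1 + cos(2 pi p^k t)) dt.
# With dissociate frequencies, the Fourier coefficient at p^k equals 1/2.
p, K = 3, 6
N = 4 * p ** K                      # grid size, above twice the top frequency
t = np.arange(N) / N
density = np.ones(N)
for k in range(K):
    density *= 1.0 + np.cos(2 * np.pi * p ** k * t)

coeffs = np.fft.fft(density) / N    # mu_hat(n) = (1/N) sum density(t_j) e^{-2 pi i n t_j}
print("mu_hat(0) =", coeffs[0].real)            # total mass 1
for k in range(K):
    print(f"mu_hat({p ** k}) =", coeffs[p ** k].real)   # each close to 1/2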
In view of this result, one may naturally wonder for which values of δ ∈ (0, 1) there exists
a measure µ ∈ Pc(T) such that
\[
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})\ge\delta,
\]
or, equivalently, whether the Furstenberg set F is a Kazhdan set in Z, and if yes, with
which (modified) Kazhdan constant. In this direction, we prove the following result:
Theorem 2.2. — Let F = {2^k 3^{k'} ; k, k' ≥ 0}. Then K̃az(F) ≤ 1. More precisely, there
exists for every δ ∈ (0, 1/2) a continuous probability measure µ on T with nonnegative
Fourier coefficients such that
\[
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})>\delta.
\]
Rigidity sequences. — Our strategy for proving Theorem 2.1 is to construct measures µ ∈ Pc(T) whose Fourier coefficients tend to 1 along a substantial part of the
set {p_1^{k_1} . . . p_r^{k_r} ; k_1 ≥ 0, . . . , k_r ≥ 0}. In other words, we show that certain large subsets of this set form, when taken in a strictly increasing order, rigidity sequences in the
sense of [8] or [18]. Recall that a dynamical system (X, B, m; T) on a Borel probability
space is called rigid if there exists a strictly increasing sequence of integers (nk)k≥1 such
that ||U_T^{nk} f − f|| → 0 as k → +∞ for every f ∈ L²(X, B, m), where U_T denotes as
usual the Koopman operator f ↦ f ∘ T associated to T on L²(X, B, m). Equivalently,
m(T^{−nk} A △ A) → 0 as k → +∞ for every A ∈ B. We say in this case that the system is rigid with respect to the sequence (nk)k≥1, or that (nk)k≥1 is a rigidity sequence
for (X, B, m; T). The case where the system (X, B, m; T) is weakly mixing is particularly
interesting, and is the object of the works [8] and [18]. A strictly increasing sequence
(nk)k≥1 of integers is called a rigidity sequence if there exists a weakly mixing system
which is rigid with respect to (nk)k≥1.
Using Gaussian dynamical systems, one can show that (nk)k≥1 is a rigidity sequence
if and only if there exists a measure µ ∈ Pc(T) such that µ̂(nk) → 1 as k → +∞.
The study of rigidity sequences was initiated in [8] and [18]. Further works on this topic
include the papers [1], [3], [2], [23], [21] and [20] among others. The paper [21] by
Fayad and Thouvenot is especially relevant here: the authors re-obtain a result of Adams
[3], showing that whenever (nk)k≥1 is a rigidity sequence for an ergodic rotation on the
circle, it is a rigidity sequence for a weakly mixing system. The proof of this result in
[3] relies on an involved construction of a suitable weakly mixing system by cutting and
stacking, while the authors of [21] proceed by a direct construction of suitable continuous
probability measures: they show that if λ^{nk} → 1 for some λ = e^{2iπθ} ∈ T with θ ∈ R \ Q,
there exists µ ∈ Pc(T) such that µ̂(nk) → 1. We obtain the following theorem, which
generalizes the result of Fayad and Thouvenot:
Theorem 2.3. — Let (nk)k≥0 be a strictly increasing sequence of integers. Suppose that
the set
\[
C=\{\lambda\in\mathbb{T}\;;\;\lambda^{n_{k}}\to 1\text{ as }k\to+\infty\}
\]
is dense in T. Then (nk)k≥1 is a rigidity sequence, and there exists a continuous probability
measure µ on T such that µ̂(nk) → 1 as k → +∞.
This result allows us to retrieve essentially all known examples of rigidity sequences (a
notable exception being the examples of [20]). Notice that C, like every subgroup of the
circle group, is dense in T as soon as it is infinite.
We deduce from Theorem 2.3 the following two-dimensional statement, which is asymmetric and involves a uniformity assumption.
Theorem 2.4. — Let (mk)k≥0 and (nk')k'≥0 be two strictly increasing sequences of integers. Let also ψ : N → N be such that ψ(k) → +∞ as k → +∞, and set
\[
D_{\psi}=\{(k,k')\in\mathbb{N}^{2}\;;\;0\le k'\le\psi(k)\}.
\]
Suppose that the set
\[
C'_{\psi}=\{\lambda\in\mathbb{T}\;;\;\lambda^{m_{k}n_{k'}}\to 1\text{ as }k\to+\infty,\ (k,k')\in D_{\psi}\}
\]
is dense in T.
Then there exists a continuous probability measure µ on T such that µ̂(mk nk') → 1
as k → +∞ with (k, k') ∈ Dψ. In other words, there exists for every ε > 0 an integer
k0 such that |µ̂(mk nk') − 1| < ε for every (k, k') ∈ N² with k ≥ k0 and 0 ≤ k' ≤ ψ(k).
Remark that the assumption of Theorem 2.4 is in particular satisfied if the set
\[
C'=\{\lambda\in\mathbb{T}\;;\;\lambda^{m_{k}n_{k'}}\to 1\text{ as }k\to+\infty\text{ uniformly in }k'\}
\]
is dense in T.
Theorem 2.1 is obtained by first observing that the set {p_1^{k_1} . . . p_r^{k_r} ; k_1 ≥ 0, . . . , k_r ≥ 0}
can be split into r sets to which Theorem 2.4 applies, and then considering a convex
combination of the continuous measures obtained in this way.
In order to prove Theorem 2.2, one needs to refine the statement of Theorem 2.4,
and to show that the sequences (mk nk')k,k'≥0 satisfying the assumption of Theorem 2.4
actually give rise to non-Kazhdan subsets of Z. More precisely, we prove the following
strengthenings of Theorems 2.3 and 2.4 respectively:
Theorem 2.5. — Under the assumption of Theorem 2.3, there exists for every ε > 0 a
measure µ ∈ Pc(T) such that µ̂(nk) → 1 as k → +∞ and sup_{k≥0} |µ̂(nk) − 1| < ε. In
particular, {nk ; k ≥ 0} is a non-Kazhdan subset of Z.
Theorem 2.6. — Under the assumption of Theorem 2.4, there exists for every ε > 0 a
measure µ ∈ Pc(T) such that µ̂(mk nk') → 1 as k → +∞ with (k, k') ∈ Dψ, and
\[
\sup_{k\ge 0,\;0\le k'\le\psi(k)}\bigl|\widehat{\mu}(m_{k}n_{k'})-1\bigr|<\varepsilon.
\]
In particular, {mk nk' ; k ≥ 0, 0 ≤ k' ≤ ψ(k)} is a non-Kazhdan subset of Z.
Organization of the paper. — The paper is structured as follows. We give in Section 3
the proofs of Theorems 2.3 and 2.4, which are very much inspired by the paper [21]. In
Section 4, we recall a characterization of generating Kazhdan subsets of Z from [4], and
detail the links between several natural constants involved in this characterization. We
explain in particular why the generating subsets of Z which satisfy the property stated in
(C4) are exactly the Kazhdan subsets of Z with modified Kazhdan constant √2. We then
prove Theorems 2.5 and 2.6. Section 5 is devoted to applications: we prove Theorems 2.1
and 2.2, and show how to retrieve many examples of rigidity sequences, using Theorems
2.3 and 2.4. We also provide an application of Theorem 2.2 to the study of the size
of the exceptional set of values θ ∈ R for which the sequence (nk θ)k≥0 is not almost
uniformly distributed modulo 1 with respect to a (finite) complex Borel measure ν ∈ M(T),
where (nk)k≥0 denotes the Furstenberg sequence: we show that this exceptional set is
uncountable, thus providing a new example of a sublacunary sequence with uncountable
exceptional set for (almost) uniform distribution.
3. Proof of Theorems 2.3 and 2.4
Given two integers a < b, we will, when the context is clear, denote by [a, b] the set of
integers k such that a ≤ k ≤ b.
Proof of Theorem 2.3. — The general strategy of the proof is the following: we construct a
sequence (λi)i≥1 of pairwise distinct elements of C, as well as a strictly increasing sequence
of integers (Np)p≥0, such that the measures
\[
\mu_{p}=2^{-p}\sum_{i=1}^{2^{p}}\delta_{\{\lambda_{i}\}},\qquad p\ge 0,
\]
satisfy certain properties stated below. At step p, we determine the elements λi for
2^{p−1} < i ≤ 2^p as well as the integer Np in such a way that λ1 = 1, N0 = 0, and
(1) for every p ≥ 1, every j ∈ [0, p − 1] and every k ∈ [Nj, Nj+1],
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)<2^{-(j-1)};
\]
(2) for every p ≥ 1, every q ∈ [0, p − 1], l ∈ [1, 2^{p−q}), r ∈ [1, 2^q],
\[
|\lambda_{l2^{q}+r}-\lambda_{r}|<\eta_{q},
\]
where ηq = (1/4) inf_{1≤i<j≤2^q} |λi − λj| for every q ≥ 1, and η0 = 1;
(3) for every p ≥ 1 and every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)<2^{-(p+1)}.
\]
Remark that property (2) implies that the sequence (λi)i≥1 satisfies
(4) for every q ≥ 0, every l ≥ 0, and every r ∈ [1, 2^q],
\[
|\lambda_{l2^{q}+r}-\lambda_{r}|<\eta_{q}.
\]
Suppose that the sequences (λi )i≥1 and (Np )p≥0 have been constructed so as to satisfy
(1), (2), and (3) above, and let µ be a w∗ -limit point of the sequence (µp )p≥0 in P(T).
Claim 3.1. — We have µ̂(nk) → 1 as k → +∞.
Proof. — For every k ≥ 0, denote by jk ≥ 0 the unique integer j such that k ∈ [Nj, Nj+1).
For every p > jk, we have by (1)
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)<2^{-(j_{k}-1)},
\qquad\text{so that}\qquad
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu(\lambda)\le 2^{-(j_{k}-1)}.
\]
Since jk → +∞ as k → +∞, ∫_T |λ^{nk} − 1| dµ(λ) → 0, i.e. µ̂(nk) → 1.
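The last step uses the elementary estimate
\[
|\widehat{\mu}(n_{k})-1|=\Bigl|\int_{\mathbb{T}}(\lambda^{n_{k}}-1)\,d\mu(\lambda)\Bigr|
\le\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu(\lambda).
\]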
Claim 3.2. — The probability measure µ is continuous.
Proof. — Fix q ≥ 1, and consider for every r ∈ [1, 2^q] the two arcs Γr and ∆r of T defined
by
\[
\Gamma_{r}=\{\lambda\in\mathbb{T}\;;\;|\lambda-\lambda_{r}|\le\eta_{q}\}
\quad\text{and}\quad
\Delta_{r}=\{\lambda\in\mathbb{T}\;;\;|\lambda-\lambda_{r}|<\tfrac{3}{2}\eta_{q}\}.
\]
The 2^q arcs ∆r are pairwise disjoint. Indeed, for every r, r' ∈ [1, 2^q] with r ≠ r', every
λ ∈ ∆r and every λ' ∈ ∆r', we have by the definition of ηq that
\[
|\lambda-\lambda'|\ge|\lambda_{r}-\lambda_{r'}|-3\eta_{q}\ge 4\eta_{q}-3\eta_{q}=\eta_{q}>0.
\]
So ∆r and ∆r' do not intersect.
Let us next estimate, for every r ∈ [1, 2^q] and every p ≥ q, the quantity µp(Γr). We
have
\[
\mu_{p}(\Gamma_{r})=2^{-p}\times\#\{i\in[1,2^{p}]\;;\;\lambda_{i}\in\Gamma_{r}\}.
\]
Every i ∈ [1, 2^p] can be written as i = l2^q + s for some l ≥ 0 and s ∈ [1, 2^q]. By (4), λi
belongs to Γs. Since the arcs ∆r', r' ∈ [1, 2^q], are pairwise disjoint, it follows that
\[
\mu_{p}(\Delta_{r})=\mu_{p}(\Gamma_{r})=2^{-p}\times\#\{i\in[1,2^{p}]\;;\;i\equiv r\ \mathrm{mod}\ 2^{q}\}\le 2^{-q}.
\]
Also,
\[
\mu_{p}\Bigl(\bigcup_{r=1}^{2^{q}}\Gamma_{r}\Bigr)=1.
\]
Since the arcs Γr are closed while the arcs ∆r are open, going to the limit as p goes to
infinity yields that µ(∆r) ≤ 2^{−q} for every r ∈ [1, 2^q] and
\[
\mu\Bigl(\bigcup_{r=1}^{2^{q}}\Gamma_{r}\Bigr)=1.
\]
If λ ∈ T is such that µ({λ}) > 0, there exists an r ∈ [1, 2^q] such that λ ∈ Γr ⊂ ∆r. So
µ({λ}) ≤ µ(∆r) ≤ 2^{−q}, a contradiction if q is sufficiently large. It follows that the measure
µ is continuous.
By Claims 3.1 and 3.2, it suffices to construct (λi )i≥0 and (Np )p≥0 satisfying (1), (2),
and (3) in order to prove Theorem 2.3. For p = 0, we set λ1 = 1 and N0 = 0, so that
µ0 = δ{1} .
For p = 1, we choose λ2 ∈ C distinct from λ1 with |λ2 − λ1| < 1. Since µ1 = (1/2)(δ{1} +
δ{λ2}), we have
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{1}(\lambda)=\tfrac{1}{2}\,|\lambda_{2}^{n_{k}}-1|\le 1<2
\quad\text{for every }k\ge 0.
\]
Hence property (1) is satisfied whatever the choice of N1. Since η0 = 1 and |λ2 − λ1| < 1,
property (2) is satisfied. It remains to choose N1 in such a way that property (3) is
satisfied. Since λ2 belongs to C,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{1}(\lambda)=\tfrac{1}{2}\,|\lambda_{2}^{n_{k}}-1|\to 0
\quad\text{as }k\to+\infty,
\]
so we can choose N1 so large that
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{1}(\lambda)<2^{-2}\quad\text{for every }k\ge N_{1}.
\]
This terminates the construction for p = 1.
Suppose now that the construction has been carried out until step p, i.e. that the
quantities λi, i ∈ [1, 2^p], and Nl, l ∈ [0, p], have been constructed satisfying (1), (2), and
(3).
We construct by induction on s ∈ [1, 2^p] elements λ_{2^p+s} of C, measures µp,s ∈ P(T),
and integers Np,s in such a way that the elements λi, i ∈ [1, 2^{p+1}], are all distinct, Np <
Np,1 < · · · < Np,2^p, and the following three properties are satisfied:
(a) for every j ∈ [0, p − 1] and every k ∈ [Nj, Nj+1],
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<2^{-(j-1)};
\]
(b) for every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<2^{-(p-1)};
\]
(c) for every k ≥ Np,s,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<2^{-(p+2)}.
\]
Let us start with the construction of λ_{2^p+1}. By density of C, one can choose λ_{2^p+1}
distinct from all the elements λi, i ∈ [1, 2^p], with |λ_{2^p+1} − λ1| arbitrarily small. Consider
the measure
\[
\mu_{p,1}=\mu_{p}+2^{-(p+1)}\bigl(\delta_{\{\lambda_{2^{p}+1}\}}-\delta_{\{\lambda_{1}\}}\bigr)
=2^{-(p+1)}\bigl(\delta_{\{\lambda_{1}\}}+\delta_{\{\lambda_{2^{p}+1}\}}\bigr)+2^{-p}\sum_{i=2}^{2^{p}}\delta_{\{\lambda_{i}\}}
\]
obtained by splitting the point mass δ{λ1} appearing in the expression of µp into (1/2)(δ{λ1} +
δ{λ2^p+1}). We have for every k ≥ 0
\[
(5)\qquad
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)\le
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)+2^{-(p+1)}\,|\lambda_{2^{p}+1}^{n_{k}}-\lambda_{1}^{n_{k}}|.
\]
If |λ_{2^p+1} − λ1| is sufficiently small, we have by (1) that
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)<2^{-(j-1)}
\]
for every j ∈ [0, p−1] and every k ∈ [Nj, Nj+1], i.e. that (1) still holds true for the measure
µp,1. Also (5) and (3) imply that for every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)<2^{-(p+1)}+2^{-p}<2^{-(p-1)},
\]
which is (b). Since all the elements λi, i ∈ [1, 2^p + 1], belong to C, there exists Np,1 > Np
such that
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)<2^{-(p+2)}\quad\text{for every }k\ge N_{p,1}.
\]
Properties (a), (b), and (c) are thus satisfied for s = 1. Suppose now that λ_{2^p+s'}, µ_{p,s'},
and N_{p,s'} have been constructed for s' < s. Let λ_{2^p+s} ∈ C \ {λ1, . . . , λ_{2^p+s−1}} be very
close to λs, and set
\[
\mu_{p,s}=\mu_{p,s-1}+2^{-(p+1)}\bigl(\delta_{\{\lambda_{2^{p}+s}\}}-\delta_{\{\lambda_{s}\}}\bigr)
\]
(this time, the point mass δ{λs} appearing in µp is split as (1/2)(δ{λs} + δ{λ2^p+s})). Since, for
every k ≥ 0,
\[
(6)\qquad
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)\le
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s-1}(\lambda)+2^{-(p+1)}\,|\lambda_{2^{p}+s}^{n_{k}}-\lambda_{s}^{n_{k}}|,
\]
the induction assumption implies that (a) holds true provided |λ_{2^p+s} − λs| is sufficiently
small. As to (b), we have to consider separately the cases Np ≤ k < Np,s−1 and k ≥ Np,s−1.
If |λ_{2^p+s} − λs| is sufficiently small,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<2^{-(p-1)}\quad\text{for every }N_{p}\le k<N_{p,s-1}.
\]
By property (c) at step s − 1 and (6),
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<2^{-(p+2)}+2^{-p}<2^{-(p-1)}
\]
for every k ≥ Np,s−1. Hence (b) is satisfied at step s. Property (c) is satisfied if Np,s is
chosen sufficiently large since all the elements λi, i ∈ [1, 2^p + s], belong to C.
Let us now set µp+1 = µp,2^p and Np+1 = Np,2^p. It remains to check that with these
choices of λi, i ∈ [1, 2^{p+1}], µp+1 and Np+1, properties (1), (2), and (3) are satisfied.
By (a), property (1) is satisfied for every j ∈ [0, p − 1]. The case where j = p follows
from (b). So (1) is true. Property (3) follows immediately from (c), so it only remains to
check (2).
Fix q ∈ [0, p], l ∈ [1, 2^{p+1−q}) and r ∈ [1, 2^q]. Consider first the case where q = p. In
this case l = 1, and the quantities under consideration have the form |λ_{2^p+r} − λr|, with
r ∈ [1, 2^p]. One can ensure in the construction that |λ_{2^p+r} − λr| < ηp for every r ∈ [1, 2^p],
and then (2) holds true for q = p.
Suppose then that q ∈ [0, p − 1], and write l as l = l' + ε2^{p−q} with ε ∈ {0, 1} and l' ∈
[1, 2^{p−q}). Then l2^q + r = l'2^q + r + ε2^p. Set s = l'2^q + r. Then 1 ≤ s ≤ (2^{p−q} − 1)2^q + 2^q = 2^p,
i.e. s ∈ [1, 2^p]. We have
\[
|\lambda_{l2^{q}+r}-\lambda_{r}|\le|\lambda_{s+\varepsilon 2^{p}}-\lambda_{s}|+|\lambda_{l'2^{q}+r}-\lambda_{r}|.
\]
If ε = 0, the first term is zero; if ε = 1, it is equal to |λ_{2^p+s} − λs|, which can be assumed
to be as small as we wish in the construction. As to the second term, it is less than ηq by
property (2) at step p, since l' ∈ [1, 2^{p−q}) and r ∈ [1, 2^q] with q ∈ [0, p − 1]. We can thus
ensure that |λ_{l2^q+r} − λr| < ηq for every q ∈ [0, p], l ∈ [1, 2^{p+1−q}), and r ∈ [1, 2^q]. This proves
that property (2) is satisfied at step p + 1, and this concludes the proof of Theorem 2.3.
Theorem 2.4 is now a formal consequence of Theorem 2.3.
Proof of Theorem 2.4. — Recall that Dψ = {(k, k') ∈ N² ; 0 ≤ k' ≤ ψ(k)} and
\[
C'_{\psi}=\{\lambda\in\mathbb{T}\;;\;\lambda^{m_{k}n_{k'}}\to 1\text{ as }k\to+\infty,\ (k,k')\in D_{\psi}\}.
\]
Order the set {mk nk' ; (k, k') ∈ Dψ} as a strictly increasing sequence (pl)l≥1 of integers.
Since there exists for every integer k1 ≥ 0 an integer l1 ≥ 0 such that
\[
\{p_{l}\;;\;l\ge l_{1}\}\subseteq\{m_{k}n_{k'}\;;\;(k,k')\in D_{\psi},\ k\ge k_{1}\},
\]
every element λ ∈ C'ψ has the property that λ^{pl} → 1 as l → +∞. By Theorem 2.3
applied to the sequence (pl)l≥1, there exists µ ∈ Pc(T) such that µ̂(pl) → 1 as l → +∞.
Using this time the fact that there exists for every integer l2 ≥ 0 an integer k2 ≥ 0 such
that
\[
\{m_{k}n_{k'}\;;\;(k,k')\in D_{\psi},\ k\ge k_{2}\}\subseteq\{p_{l}\;;\;l\ge l_{2}\},
\]
we deduce that µ̂(mk nk') → 1 as k → +∞ with (k, k') ∈ Dψ.
Remark 3.3. — Suppose that the set
\[
C'=\{\lambda\in\mathbb{T}\;;\;\lambda^{m_{k}n_{k'}}\to 1\text{ as }k\to+\infty\text{ uniformly in }k'\}
\]
is dense in T. It is natural to wonder whether there exists a measure µ ∈ Pc(T) such that
µ̂(mk nk') → 1 as k → +∞ uniformly in k'. The following example shows that it is not
the case: set mk = 2^k and nk' = k' for every k, k' ≥ 0. The set C' contains all 2^k-th
roots of 1, and so is dense in T. Suppose that µ ∈ P(T) is such that µ̂(2^k k') → 1 as
k → +∞ uniformly in k'. Then there exists an integer k0 ≥ 1 such that |µ̂(2^{k0} k')| ≥ 1/2
for every k' ≥ 0. Consider the measure ν = T_{2^{k0}}(µ). Since ν̂(n) = µ̂(2^{k0} n)
for every n ∈ Z, ν cannot be continuous (its Fourier coefficients do not tend to zero along
any sequence of density one, contradicting the Wiener criterion recalled in the Introduction). Also,
ν({λ0}) = µ({λ ∈ T ; λ^{2^{k0}} = λ0}) for every λ0 ∈ T, and so the measure µ itself cannot
be continuous.
So the conclusion of Theorem 2.4 seems to be essentially optimal.
4. Some non-Kazhdan subsets of Z
4.1. Kazhdan constants and Fourier coefficients of probability measures. —
We begin this section by recalling a characterization of generating Kazhdan subsets of
Z, obtained in [4, Th. 6.1] (see also [5, Th. 4.12]) and presenting some facts concerning
the (modified) Kazhdan constants of such sets. We state it here in a slightly modified
way (condition (ii) is not exactly the same as in [5, Th. 4.12]), and include a discussion
concerning the links between the various constants appearing in the equivalent conditions.
Theorem 4.1. — Let Q be a generating subset of Z. Then Q is a Kazhdan subset of Z
if and only if one of the following equivalent assertions holds true:
(i) there exists ε ∈ (0, √2) such that (Q, ε) is a modified Kazhdan pair. Equivalently,
K̃az(Q) ≥ ε;
(ii) there exists γ ∈ (0, 1) such that any measure µ ∈ P(T) with sup_{n∈Q}(1 − ℜe µ̂(n)) < γ
has a discrete part;
(iii) there exists δ ∈ (0, 1) such that any measure µ ∈ P(T) with inf_{n∈Q} |µ̂(n)| > δ has a
discrete part.
Moreover:
– (i) is satisfied for ε ∈ (0, √2) if and only if (ii) is satisfied for γ = ε²/2;
– if (ii) is satisfied for γ ∈ (0, 1), (iii) is satisfied for δ = √(1 − γ), while if (iii) is
satisfied for δ ∈ (0, 1), (ii) is satisfied for γ = 1 − δ;
– hence if (i) is satisfied for ε ∈ (0, √2), (iii) is satisfied for δ = √(1 − ε²/2), while if
(iii) is satisfied for δ ∈ (0, 1), (i) holds true for ε = √(2(1 − δ)).
γ, and δ appearing in (i), (ii), and (iii) respectively, following [4] and [5].
√
Proof. — Suppose that (i) is satisfied for ε ∈ (0, 2), and let µ ∈ P(T). Consider the
unitary operator U = Mλ of multiplication by λ on L2 (T, µ). Let f be the function
constantly equal to 1. Then ||U n f − f ||2 = 2(1 − ℜe µ
b(n)). If supn∈Q (1 − ℜe µ
b(n)) < ε2 /2,
g
U has an eigenvalue since Kaz(Q)
≥ ε, and so µ has a discrete part.
Conversely, suppose that (ii) is satisfied for γ ∈ (0, 1). Let U be a unitary operator on
a separable Hilbert space H, and let x ∈ H with ||x|| = 1 be such that
p
sup ||U n x − x|| < 2γ.
n∈Q
The proof of [5, Th. 4.6] shows then that there exists µ ∈ P(T) such that
2 sup (1 − ℜe µ
b(n)) = sup ||U n x − x||2 < 2γ.
n∈Q
n∈Q
So supn∈Q (1 − ℜe µ
b(n)) < γ. By (ii), µ has a discrete part, and so U has an eigenvalue.
√
g
Hence Kaz(Q) ≥ 2γ.
Suppose next that
√ property (ii) is satisfied for γ ∈ (0, 1). Let µ ∈ P(T) be such that
inf n∈Q |b
µ(n)| > 1 − γ. Set ν = µ ∗ µ
e. Then inf n∈Q νb(n) > 1 − γ. It follows that
supn∈Q (1 − νb(n)) < γ, and ν has a discrete part. So µ itself has a discrete part.
Lastly, suppose that (iii) is satisfied for δ ∈ (0, 1). Let µ ∈ P(T) be a measure satisfying
supn∈Q (1−ℜe µ
b(n)) < 1 − δ. Then inf n∈Q |b
µ(n)| ≥ inf n∈Q ℜe µ
b(n) > δ, so µ has a discrete
part.
Remark 4.2. — Given a subset Q of Z, one can prove, using the spectral theorem for
unitary operators, that the following assertions are equivalent (see [5, Th. 4.6]):
(i') Q is a Kazhdan subset of Z, i.e. there exists ε ∈ (0, √2) such that (Q, ε) is a Kazhdan
pair;
(ii') there exists γ ∈ (0, 1) such that any measure µ ∈ P(T) with sup_{n∈Q}(1 − ℜe µ̂(n)) < γ
is such that µ({1}) > 0.
Moreover (i') holds true for a certain constant ε ∈ (0, √2) (i.e. Kaz(Q) ≥ ε) if and only if
(ii') holds true for γ = ε²/2.
It is interesting to note that these two conditions (i') and (ii') are not equivalent to the
natural version (iii') of (iii) (namely, that there exists δ ∈ (0, 1) such that any measure
µ ∈ P(T) with inf_{n∈Q} |µ̂(n)| > δ satisfies µ({1}) > 0). Indeed, the condition inf_{n∈Q} |µ̂(n)| > δ
is satisfied by any Dirac mass δ{λ}, λ ∈ T, while δ{λ}({1}) = 0 as soon as λ ≠ 1. The proof
that (ii) implies (iii) in Theorem 4.1 above uses in a crucial way the fact that if µ ∈ P(T)
is such that µ ∗ µ̃ has a discrete part, µ itself has a discrete part. But µ ∗ µ̃ may very well
satisfy µ ∗ µ̃({1}) > 0 while µ({1}) = 0, and so (ii') does not imply (iii').
Theorem 4.1 is related to Conjecture (C4) in the following way:
Corollary 4.3. — Let Q be a generating subset of Z. The following assertions are equivalent:
(α) Q is a Kazhdan subset of Z with K̃az(Q) = √2;
(β) any measure µ ∈ Pc(T) satisfies inf_{n∈Q} |µ̂(n)| = 0;
(γ) any measure µ ∈ Pc(T) satisfies lim inf_{|n|→+∞, n∈Q} |µ̂(n)| = 0.
Proof. — The equivalence between (α) and (β) follows immediately from Theorem 4.1.
So only the implication (β)=⇒(γ) requires a proof. Suppose that any µ ∈ Pc(T) satisfies inf_{n∈Q} |µ̂(n)| = 0. We want to show that the conclusion can be reinforced into
lim inf_{|n|→+∞, n∈Q} |µ̂(n)| = 0. Let ρ ∈ Pc(T) be a Rajchman measure with positive coefficients, that is such that lim_{|n|→+∞} ρ̂(n) = 0 and ρ̂(n) > 0 for every n ∈ Z. Consider
the measure ν = (µ ∗ µ̃ + ρ)/2. It is continuous and satisfies ν̂(n) > 0 for every n ∈ Z.
Since inf_{n∈Q} ν̂(n) = 0 and ν̂(n) > 0 for every n ∈ Z, lim inf_{|n|→+∞, n∈Q} ν̂(n) = 0. Hence
lim inf_{|n|→+∞, n∈Q} |µ̂(n)|² = 0, and the conclusion follows.
So Conjecture (C4) is equivalent to the statement that any non-lacunary semigroup
of integers has modified Kazhdan constant √2. We can also estimate the Fourier coefficients of a continuous probability measure on T which is T2- and T3-invariant in terms
of the modified Kazhdan constant of the Furstenberg set. Notice that Proposition 4.4 is
meaningful only if κ̃ > 0.
Proposition 4.4. — Let F = {2^k 3^{k'} ; k, k' ≥ 0} and set κ̃ = K̃az(F). Let µ be a
continuous probability measure on T which is T2- and T3-invariant. Then
\[
|\widehat{\mu}(j)|\le 1-\frac{\tilde{\kappa}^{2}}{2}\quad\text{for every }j\in\mathbb{Z}\setminus\{0\}.
\]
Proof of Proposition 4.4. — Set, for every j ∈ Z \ {0}, µj = Tj µ. Then µj is a continuous
measure which satisfies µ̂j(2^k 3^{k'}) = µ̂(j) for every k, k' ≥ 0. It follows that if δ ∈ (0, 1)
is such that (iii) of Theorem 4.1 is satisfied, δ ≥ |µ̂(j)|. Hence, by Theorem 4.1 again,
κ̃ ≤ √(2(1 − |µ̂(j)|)).
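Squaring and rearranging this last inequality gives precisely the bound stated in Proposition 4.4:
\[
\tilde{\kappa}^{2}\le 2\bigl(1-|\widehat{\mu}(j)|\bigr)
\quad\Longleftrightarrow\quad
|\widehat{\mu}(j)|\le 1-\frac{\tilde{\kappa}^{2}}{2}.
\]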
Remark 4.5. — Although a generating subset Q of Z is a Kazhdan set if and only if
K̃az(Q) > 0, there is no link between the Kazhdan constant and the modified Kazhdan
constant of Q. Indeed, there exist Kazhdan subsets Q of Z with maximal modified constant K̃az(Q) = √2 and arbitrarily small Kazhdan constant Kaz(Q). This relies on the
following observation, which can be extracted from the proof of [5, Th 7.1] and results
from Proposition 5.9 below.
Proposition 4.6. — Let (nk)k≥0 be a strictly increasing sequence of integers with n0 = 1
such that (nk θ)k≥0 is uniformly distributed modulo 1 for every θ ∈ R \ D, where D is a
countable subset of R. Then the set Q = {nk ; k ≥ 0} is a Kazhdan subset of Z which
satisfies K̃az(Q) = √2.
Consider, for every integer p ≥ 2, the set Qp = pN + 1. By Proposition 4.6, Qp is a
Kazhdan subset of Z with K̃az(Qp) = √2. But the measure µ = δ{e^{2iπ/p}} satisfies
\[
\sup_{n\in Q_{p}}\bigl(1-\Re e\,\widehat{\mu}(n)\bigr)=1-\cos(2\pi/p).
\]
Hence Kaz(Qp) ≤ √(2(1 − cos(2π/p))), which can be arbitrarily small if p is sufficiently large.
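A two-line numerical companion to this example (our own check, using the elementary identity 2(1 − cos(2π/p)) = 4 sin²(π/p)):

import numpy as np

# Upper bound on Kaz(Q_p) from the Dirac mass at e^{2 pi i/p}:
# sqrt(2(1 - cos(2 pi/p))) = 2 sin(pi/p), which tends to 0 as p grows.
for p in [4, 8, 16, 64, 256, 1024]:
    print(p, np.sqrt(2 * (1 - np.cos(2 * np.pi / p))), 2 * np.sin(np.pi / p))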
4.2. Proofs of Theorems 2.5 and 2.6. — We only sketch the proof of Theorem 2.5
(Theorem 2.6 is a formal consequence of it).
Proof. — Fix ε ∈ (0, 1/2). Using the notation of the proof of Theorem 2.3, it suffices to
construct the measures µp, p ≥ 0, in such a way that they satisfy the assertions
(1ε) for every p ≥ 1, every j ∈ [0, p − 1] and every k ∈ [Nj, Nj+1],
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)<3\varepsilon(1-\varepsilon)^{j};
\]
(2) for every p ≥ 1, every q ∈ [0, p − 1], l ∈ [1, 2^{p−q}), r ∈ [1, 2^q],
\[
|\lambda_{l2^{q}+r}-\lambda_{r}|<\eta_{q},
\]
where ηq = (1/4) inf_{1≤i<j≤2^q} |λi − λj| for every q ≥ 1, and η0 = 1;
(3ε) for every p ≥ 1 and every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)<\varepsilon(1-\varepsilon)^{p+1};
\]
(4ε) µp({λi}) ≤ (1 − ε)^p for every i ∈ [1, 2^p].
Then any w∗-limit point µ of (µp)p≥0 will be a continuous measure which simultaneously
satisfies µ̂(nk) → 1 as k → +∞ and sup_{k≥0} |µ̂(nk) − 1| ≤ 3ε. The main difference with
the proof of Theorem 2.3 is that the measures µp will be defined as
\[
\mu_{p}=\sum_{i=1}^{2^{p}}a_{i}^{(p)}\,\delta_{\{\lambda_{i}\}}
\quad\text{for some suitable weights }a_{i}^{(p)}>0\text{ with }\sum_{i=1}^{2^{p}}a_{i}^{(p)}=1.
\]
For p = 0, we set λ1 = 1, N0 = 0, and µ0 = δ{1}.
For p = 1, we choose λ2 ∈ C \ {λ1} with |λ2 − λ1| < 1 and set µ1 = (1 − ε)δ{1} + εδ{λ2}.
We have for every k ≥ 0
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{1}(\lambda)=\varepsilon\,|\lambda_{2}^{n_{k}}-1|\le 2\varepsilon<3\varepsilon.
\]
Since |λ2 − λ1| < 1, (2) is true. If N1 is chosen sufficiently large, µ1 satisfies properties
(1ε) and (3ε). Moreover, µ1({1}) = 1 − ε and µ1({λ2}) = ε < 1 − ε, so (4ε) is true.
Suppose now that the construction has been carried out until step p. We can then
construct by induction on s ∈ [1, 2^p] measures µp,s which satisfy
(aε) for every j ∈ [0, p − 1] and every k ∈ [Nj, Nj+1],
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<3\varepsilon(1-\varepsilon)^{j};
\]
(bε) for every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<3\varepsilon(1-\varepsilon)^{p};
\]
(cε) for every k ≥ Np,s,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<\varepsilon(1-\varepsilon)^{p+2};
\]
(dε) µp,s({λi}) = µp({λi}) for every i ∈ (s, 2^p] and µp,s({λi}) ≤ (1 − ε)^{p+1} for every
i ∈ [1, s] ∪ [2^p + 1, 2^p + s].
We define µp,1 as
\[
\mu_{p,1}=\mu_{p}+\mu_{p}(\{1\})\,\varepsilon\,\bigl(\delta_{\{\lambda_{2^{p}+1}\}}-\delta_{\{\lambda_{1}\}}\bigr)
=\mu_{p}(\{1\})(1-\varepsilon)\,\delta_{\{\lambda_{1}\}}
+\sum_{i=2}^{2^{p}}\mu_{p}(\{\lambda_{i}\})\,\delta_{\{\lambda_{i}\}}
+\mu_{p}(\{1\})\,\varepsilon\,\delta_{\{\lambda_{2^{p}+1}\}},
\]
where λ_{2^p+1} ∈ C \ {λ1, . . . , λ_{2^p}} is such that |λ_{2^p+1} − λ1| = |λ_{2^p+1} − 1| is very small. Then
for every k ≥ 0,
\[
(7)\qquad
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)\le
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p}(\lambda)+(1-\varepsilon)^{p}\,\varepsilon\,|\lambda_{2^{p}+1}^{n_{k}}-\lambda_{1}^{n_{k}}|.
\]
It follows that (aε) holds true for µp,1, provided that |λ_{2^p+1} − λ1| is sufficiently small. Also,
we have by (7) and (3ε) that for every k ≥ Np,
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,1}(\lambda)<\varepsilon(1-\varepsilon)^{p+1}+2\varepsilon(1-\varepsilon)^{p}<3\varepsilon(1-\varepsilon)^{p},
\]
so that (bε) holds true. If Np,1 is sufficiently large, (cε) is true. Property (dε) follows from
the expression of µp,1, since µp,1({1}) = µp({1})(1 − ε) ≤ (1 − ε)^{p+1} and µp,1({λ_{2^p+1}}) =
µp({1}) ε ≤ ε(1 − ε)^p ≤ (1 − ε)^{p+1} by (4ε).
Supposing now that s ≥ 2 and that the construction has been carried out for every
s' < s, we choose λ_{2^p+s} ∈ C \ {λ1, . . . , λ_{2^p+s−1}} very close to λs, and set
\[
\mu_{p,s}=\mu_{p,s-1}+\mu_{p,s-1}(\{\lambda_{s}\})\,\varepsilon\,\bigl(\delta_{\{\lambda_{2^{p}+s}\}}-\delta_{\{\lambda_{s}\}}\bigr).
\]
Then, since µp,s−1({λs}) = µp({λs}) ≤ (1 − ε)^p by (dε) and (4ε), we have for every k ≥ 0
\[
(8)\qquad
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)\le
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s-1}(\lambda)+(1-\varepsilon)^{p}\,\varepsilon\,|\lambda_{2^{p}+s}^{n_{k}}-\lambda_{s}^{n_{k}}|.
\]
Thus (aε) holds true provided that |λ_{2^p+s} − λs| is sufficiently small. In order to prove (bε),
suppose first that Np ≤ k < Np,s−1. Then, if |λ_{2^p+s} − λs| is small enough, we have by (8)
and (bε) for s − 1 that
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<3\varepsilon(1-\varepsilon)^{p}\quad\text{for every }N_{p}\le k<N_{p,s-1}.
\]
If k ≥ Np,s−1 we have by (cε) at step s − 1 and (8) that
\[
\int_{\mathbb{T}}|\lambda^{n_{k}}-1|\,d\mu_{p,s}(\lambda)<\varepsilon(1-\varepsilon)^{p+2}+2\varepsilon(1-\varepsilon)^{p}<3\varepsilon(1-\varepsilon)^{p}.
\]
So (bε) is satisfied at step s. Property (cε) is true if Np,s is chosen sufficiently large.
As to property (dε), we have µp,s({λi}) = µp,s−1({λi}) for every index i distinct from s and
2^p + s. So µp,s({λi}) = µp({λi}) for every i ∈ (s, 2^p], and µp,s({λi}) ≤ (1 − ε)^{p+1} for every
i ∈ [1, s) ∪ [2^p + 1, 2^p + s). Also
\[
\mu_{p,s}(\{\lambda_{s}\})=\mu_{p,s-1}(\{\lambda_{s}\})(1-\varepsilon)=\mu_{p}(\{\lambda_{s}\})(1-\varepsilon)\le(1-\varepsilon)^{p+1},
\]
while µp,s({λ_{2^p+s}}) = µp,s−1({λs}) ε ≤ (1 − ε)^{p+1}. So (dε) holds true at step s. This
terminates the construction of the measures µp,s.
We then set µp+1 = µp,2^p and Np+1 = Np,2^p and check as in the proof of Theorem
2.3 that properties (1ε), (2), and (3ε) are satisfied. Since by (dε) for s = 2^p we have
µp,2^p({λi}) ≤ (1 − ε)^{p+1} for every i ∈ [1, 2^{p+1}], property (4ε) is satisfied as well. This
terminates the construction of the measures µp, and proves Theorem 2.5.
We plan to come back to the study of the links between Kazhdan sets and rigidity
sequences in a forthcoming preprint.
As a direct corollary of Theorems 2.5 and 2.6, we obtain
Corollary 4.7. — For any function ψ : N → N with ψ(k) → +∞ as k → +∞, the sets
\[
\{2^{k}3^{k'}\;;\;k\ge 0,\ 0\le k'\le\psi(k)\}
\quad\text{and}\quad
\{2^{k}3^{k'}\;;\;k'\ge 0,\ 0\le k\le\psi(k')\}
\]
are non-Kazhdan sets in Z.
It seems to be unknown whether the Furstenberg set {2^k 3^{k'} ; k, k' ≥ 0} is a Kazhdan
set in Z (see Question 5.1 below).
5. Applications
5.1. Proof of Theorem 2.1. — Our first and main application of Theorem 2.4 is
Theorem 2.1, which solves in particular Conjecture (C4) and shows that the invariance
assumption on the measure is indeed essential in the statement of Furstenberg's ×2-×3
conjecture.
Proof of Theorem 2.1. — If r = 1, Theorem 2.1 claims the existence, for every integer
p ≥ 2, of a measure µ ∈ Pc(T) such that inf_{k≥0} |µ̂(p^k)| > 0. As mentioned in Section 2,
this statement is well-known: it suffices to consider the classical Riesz product associated
to the sequence (p^k)k≥0. One can also show, either as in [8] or [18], or as an application
of Theorem 2.3, that (p^k)k≥0 is a rigidity sequence, so that there exists µ ∈ Pc(T) with
µ̂(p^k) → 1 as k → +∞.
Suppose now that r ≥ 2, and consider, for every fixed index 1 ≤ j ≤ r, the set
\[
C'_{j}=\bigl\{e^{2i\pi n p_{j}^{-l}}\;;\;n,l\ge 0\bigr\}
\]
of roots of all powers of pj. It is dense in T, and has the following property: there exists
for every λ ∈ C'j an integer lj such that λ^{p_1^{k_1} p_2^{k_2} ··· p_r^{k_r}} = 1 for every kj ≥ lj and ki ≥ 0,
1 ≤ i ≤ r with i ≠ j. Hence
\[
\sup_{\substack{k_{i}\ge 0\\ 1\le i\le r,\ i\ne j}}
\bigl|\lambda^{p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}}-1\bigr|\to 0
\quad\text{as }k_{j}\to+\infty.
\]
Consider the two sequences (mk)k≥0 and (nk')k'≥0 obtained by setting mk = p_j^k, k ≥ 0,
and ordering the set
\[
\bigl\{p_{1}^{k_{1}}\cdots p_{j-1}^{k_{j-1}}p_{j+1}^{k_{j+1}}\cdots p_{r}^{k_{r}}\;;\;k_{i}\ge 0,\ 1\le i\le r\text{ with }i\ne j\bigr\}
\]
as a strictly increasing sequence (nk')k'≥0, and let ψ : N → N be a strictly increasing
function such that
\[
\bigl\{p_{1}^{k_{1}}\cdots p_{j-1}^{k_{j-1}}p_{j+1}^{k_{j+1}}\cdots p_{r}^{k_{r}}\;;\;0\le k_{i}\le k,\ 1\le i\le r\text{ with }i\ne j\bigr\}
\]
is contained in the set {nk' ; 0 ≤ k' ≤ ψ(k)} for every k ≥ 0. By Theorem 2.4, there exists
a measure µj ∈ Pc(T) such that µ̂j(p_1^{k_1} ··· p_r^{k_r}) → 1 as kj → +∞ with 0 ≤ ki ≤ kj,
1 ≤ i ≤ r with i ≠ j. Replacing, for every 1 ≤ j ≤ r, µj by µj ∗ µ̃j, we can suppose
without loss of generality that µ̂j(n) ≥ 0 for every n ∈ Z.
Let now ρ ∈ Pc(T) be such that ρ̂(n) > 0 for every n ∈ Z, and set
\[
\mu=\frac{1}{r+1}\Bigl(\sum_{j=1}^{r}\mu_{j}+\rho\Bigr).
\]
Then µ is a continuous probability measure on T with µ̂(n) > 0 for every n ∈ Z. Moreover,
we have
\[
(9)\qquad
\liminf\;\widehat{\mu}\bigl(p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}\bigr)\ge\frac{1}{r+1}
\quad\text{as }\max(k_{1},\dots,k_{r})\to+\infty.
\]
Indeed, if (k_1^{(l)}, . . . , k_r^{(l)})l≥1 is an infinite sequence of elements of N^r, one can extract from
it a subsequence (still denoted by (k_1^{(l)}, . . . , k_r^{(l)})l≥1) with the following property: there exists
1 ≤ j ≤ r such that k_i^{(l)} ≤ k_j^{(l)} for every 1 ≤ i ≤ r. Then
\[
\liminf_{l\to+\infty}\widehat{\mu}\bigl(p_{1}^{k_{1}^{(l)}}\cdots p_{r}^{k_{r}^{(l)}}\bigr)
\ge\frac{1}{r+1}\,\liminf_{l\to+\infty}\widehat{\mu}_{j}\bigl(p_{1}^{k_{1}^{(l)}}\cdots p_{r}^{k_{r}^{(l)}}\bigr)
=\frac{1}{r+1}\,\cdot
\]
This yields (9). Since µ̂(n) > 0 for every n ≥ 0, it follows that
\[
\inf_{\substack{k_{i}\ge 0\\ 1\le i\le r}}\widehat{\mu}\bigl(p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\bigr)>0,
\]
and Theorem 2.1 is proved.
5.2. The case of the Furstenberg set. — Theorem 2.1 applies to the Furstenberg
set F = {2^k 3^{k'} ; k, k' ≥ 0} and shows the existence of a measure µ ∈ Pc(T) such that
\[
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})>0
\]
(the fact that the measure µ can be supposed to have nonnegative Fourier coefficients can
be extracted from the proof of Theorem 2.1, or deduced formally from Theorem 2.1 by
considering the measure µ ∗ µ̃). By Corollary 4.3, this means that K̃az(F) < √2.
As mentioned in the introduction, it is natural to look for the optimal constant δ ∈ (0, 1)
for which there exists a measure µ ∈ Pc(T) such that
\[
(10)\qquad
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})\ge\delta.
\]
This is equivalent to asking whether F is a Kazhdan set in Z, and if yes, with which
(modified) Kazhdan constant. The best result which can be obtained via the methods
presented here is that there exists a measure µ ∈ Pc (T) satisfying (10) for every δ ∈
(0, 1/2): this is the content of Theorem 2.2, which we now prove.
Proof of Theorem 2.2. — The proof goes along the same lines as that of Theorem 2.1, but
it involves Theorem 2.6 instead of Theorem 2.4.
Fix δ ∈ (0, 1/2). There exist by Theorem 2.6 two measures µ1, µ2 ∈ Pc(T) such that
\[
|\widehat{\mu}_{1}(2^{k}3^{k'})|\ge\sqrt{2\delta}\quad\text{for every }k\ge 0\text{ and every }0\le k'\le k
\]
and
\[
|\widehat{\mu}_{2}(2^{k}3^{k'})|\ge\sqrt{2\delta}\quad\text{for every }k'\ge 0\text{ and every }0\le k\le k'.
\]
The measure µ = (1/2)(µ1 ∗ µ̃1 + µ2 ∗ µ̃2) has nonnegative Fourier coefficients and satisfies
µ̂(2^k 3^{k'}) ≥ δ for every k, k' ≥ 0: indeed, for each pair (k, k') at least one of the two
terms |µ̂1(2^k 3^{k'})|², |µ̂2(2^k 3^{k'})|² is at least 2δ.
It then follows from Theorem 4.1 that if {2^k 3^{k'} ; k, k' ≥ 0} is a Kazhdan subset of Z,
its modified Kazhdan constant must be less than √(2(1 − δ)) for every δ ∈ (0, 1/2), so must
be at most 1.
That the bound 1/2 can be further improved does not seem clear at all, and we do not
know whether there exists for every δ ∈ [1/2, 1) a measure µ ∈ Pc(T) such that
\[
\inf_{k,k'\ge 0}\widehat{\mu}(2^{k}3^{k'})\ge\delta.
\]
Question 5.1. — Is the Furstenberg set {2^k 3^{k'} ; k, k' ≥ 0} a Kazhdan set in Z?
Note that a lacunary semigroup {a^n ; n ≥ 0}, a ≥ 2, cannot be a Kazhdan set (see [5,
Ex. 5.2]).
Along the same lines, one can also ask for which values of δ ∈ (0, 1] there exists a measure
µ ∈ Pc(T) such that lim inf µ̂(2^k 3^{k'}) ≥ δ as max(k, k') → +∞. The proof of Theorem
2.1 allows us to exhibit a measure µ ∈ Pc(T) with nonnegative Fourier coefficients (namely
µ = (µ1 + µ2)/2) such that lim inf µ̂(2^k 3^{k'}) ≥ 1/2 as max(k, k') → +∞. Again, we do
not know whether the constant 1/2 can be improved. The strongest statement which
could be expected in this direction is the existence of a measure µ ∈ Pc(T) such that
µ̂(2^k 3^{k'}) → 1 as max(k, k') → +∞. This would show that the Furstenberg sequence is
a rigidity sequence for weakly mixing dynamical systems. This natural question is raised
in Remark 3.12 (b) of [8] and we record it anew here:
Question 5.2. — Is the Furstenberg sequence a rigidity sequence for weakly mixing dynamical systems?
5.3. Examples of rigidity sequences. — Theorems 2.3 and 2.4 allow us to retrieve
directly all known examples of rigidity sequences from [8], [18], [2], [1] and [21]. The
only examples of rigidity sequences not covered by our results are those of [20]. Indeed,
Fayad and Kanigowski construct in [20] examples of rigidity sequences (nk)k≥0 such that
{λ^{nk} ; k ≥ 0} is dense in T for every λ = e^{2iπθ} ∈ T with θ ∈ R \ Q, and there exist for every
integer p ≥ 2 infinitely many integers k such that p does not divide nk. So such sequences
never satisfy the assumption of Theorem 2.3.
We briefly list here some of the examples of rigidity sequences which can be obtained
from Theorems 2.3 and 2.4. Our first example is that of Fayad and Thouvenot in [21].
Example 5.3. — [21] If the sequence (nk)k≥0 is such that there exists λ = e^{2iπθ} ∈ T,
with θ ∈ R \ Q, such that λ^{nk} → 1 as k → +∞, then (nk)k≥0 is a rigidity sequence.
This result of [21] follows directly from Theorem 2.3. Indeed, if λ^{nk} → 1 with λ = e^{2iπθ},
θ ∈ R \ Q, then λ^{p nk} → 1 for every p ∈ Z. Since θ is irrational, the set {λ^p ; p ∈ Z} is
dense in T, and Theorem 2.3 applies.
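A concrete classical instance of Example 5.3 (the numerical check below is our own illustration, not taken from [21]): take θ the golden ratio and nk the Fibonacci numbers, for which λ^{nk} → 1 because the distance from θnk to the nearest integer decays geometrically.

import numpy as np

# theta = golden ratio, n_k = Fibonacci numbers: |e^{2 pi i theta n_k} - 1| -> 0.
theta = (1 + np.sqrt(5)) / 2
F = [1, 2]
while len(F) < 40:
    F.append(F[-1] + F[-2])
for k in [10, 20, 30, 39]:
    lam_nk = np.exp(2j * np.pi * theta * F[k])
    print(k, abs(lam_nk - 1))     # tends to 0 (up to floating-point noise)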
Example 5.4. — [8], [18] If (nk)k≥0 is a strictly increasing sequence of integers such that
nk | nk+1 for every k ≥ 0, then (nk)k≥0 is a rigidity sequence.
Indeed, under the assumption of Example 5.4, the set C = {λ ∈ T ; λ^{nk} → 1} contains
all nk-th roots of 1, k ≥ 0, and is hence dense in T.
Theorem 2.4 shows that Example 5.4 can be improved into
Example 5.5. — Let (mk)k≥0 be a strictly increasing sequence of integers such that
mk | mk+1 for every k ≥ 0. Let ψ : N → N be a strictly increasing function. Order the
set {k' mk ; k ≥ 0 and 1 ≤ k' ≤ ψ(k)} as a strictly increasing sequence (nk)k≥0. Then
(nk)k≥0 is a rigidity sequence.
Indeed, the set C' = {λ ∈ T ; λ^{k' mk} → 1 as k → +∞ uniformly in k'} contains all
mk-th roots of 1, and is dense in T. So Theorem 2.4 applies.
For instance, if (rk)k≥0 is any sequence of positive integers, the sequence (nk)k≥0 obtained by ordering the set {k' 2^k ; k ≥ 0, 1 ≤ k' ≤ rk} in a strictly increasing sequence is
a rigidity sequence. This provides new examples of rigidity sequences (nk)k≥0 such that
nk+1/nk → 1 as k → +∞.
Example 5.6. — Let (rk)k≥0 be any sequence of positive integers with rk → +∞ as
k → +∞. The sequence (nl)l≥0 obtained by ordering in a strictly increasing fashion
the set {j2^k ; k ≥ 0, 1 ≤ j ≤ rk} is a rigidity sequence which satisfies nl+1/nl → 1 as
l → +∞.
Proof. — It suffices to show that for every ε > 0 and every l sufficiently large there exists
l' > l such that nl'/nl < 1 + ε.
– Suppose first that nl = j2^k for some k ≥ 0 and some 1/ε < j < rk. Then taking
nl' = (j + 1)2^k, we have nl'/nl = (j + 1)/j < 1 + ε.
– Suppose next that nl = j2^k for some k ≥ 0 and some 1 ≤ j ≤ 1/ε. Fix an integer
p such that 2^{−p} < ε. If l is sufficiently large, we have r_{k−p} > 2^p/ε. Set j' = j2^p. Since
j' ≤ 2^p/ε < r_{k−p}, the integer nl' = (j' + 1)2^{k−p} appears in the sequence (nl)l≥0. Also,
since nl' = (j' + 1)2^{k−p} > j2^k = nl, we have l' > l, and
\[
\frac{n_{l'}}{n_{l}}=\frac{(j'+1)2^{k-p}}{j2^{k}}=\frac{j'+1}{j}\,2^{-p}=\frac{j+2^{-p}}{j}<1+2^{-p}<1+\varepsilon.
\]
– The last case we have to deal with is when nl = rk 2^k for some k ≥ 0. Let j' ≥ 1 be
such that j' ≤ rk/2 < j' + 1. Then j' < r_{k+1}, and if we set nl' = (j' + 1)2^{k+1}, the integer
nl' appears in the sequence (nl)l≥0. We have
\[
\frac{n_{l'}}{n_{l}}=\frac{(j'+1)2^{k+1}}{r_{k}2^{k}}=\frac{2(j'+1)}{r_{k}}\le 1+\frac{2}{r_{k}}<1+\varepsilon
\]
if k is sufficiently large, and this terminates the proof.
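A quick numerical companion to this proof (rk = k + 1 is a purely illustrative choice of ours):

import numpy as np

# Order {j * 2^k ; 1 <= j <= r_k} up to a bound M and watch the worst
# consecutive ratio over dyadic blocks shrink towards 1.
M = 2 ** 22
vals = sorted({j * 2 ** k
               for k in range(23)
               for j in range(1, k + 2)      # r_k = k + 1
               if j * 2 ** k <= M})
n = np.array(vals, dtype=float)
ratios = n[1:] / n[:-1]
for a in [2 ** 10, 2 ** 14, 2 ** 18]:
    block = ratios[(n[:-1] >= a) & (n[:-1] < 2 * a)]
    print(a, block.max())                    # decreasing towards 1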
Example 5.7. — [1] (a) Let (dk )k≥0 be a strictly increasing sequence of positive integers
of density zero. There exists a strictly increasing sequence of integers (nk )k≥0 which is a
rigidity sequence and satisfies nk ≤ dk for every k ≥ 0.
(b) Let (dk )k≥0 be a sequence of real numbers with dk ≥ k for every k ≥ 0 and
limk→+∞ dk /k = +∞. There exists a strictly increasing sequence of integers (nk )k≥0
which is a rigidity sequence and satisfies nk ≤ dk for every k ≥ 0.
This has been proved by Aaronson in [1, Th. 4]; a simpler construction with the weaker
conclusion that nk ≤ dk for infinitely many k was given in [8, Prop. 3.18]. The proof
given below uses Theorem 2.3 and a result of Bugeaud [16].
Proof. — As the statement (a) is a simple consequence of (b), we only give the proof of
(b). Set g0 = 1 and gk = dk/k for every k ≥ 1. Then (gk)k≥0 is a sequence of reals with
gk ≥ 1 for every k ≥ 0 which tends to infinity (notice that for (a) this holds since (dk)k≥0
is a sequence of density zero). Using (a particular case of) [16, Th. 1], we obtain that
there exists for every fixed irrational number θ an increasing sequence (nk)k≥0 of positive
integers such that nk ≤ kgk = dk for every k ≥ 1 and exp(2iπθ)^{nk} → 1. It follows from
Example 5.3 that (nk)k≥0 is a rigidity sequence.
Example 5.8. — Let (mk)k≥0 be a strictly increasing sequence of positive integers with
mk+1 − mk → +∞. There exists a strictly increasing sequence of integers (nk)k≥0 which
is a rigidity sequence and satisfies mk ≤ nk < mk+1 for every k ≥ 0.
Proof. — The proof is exactly the same as the preceding one, replacing the result from
[16] by [9, Obs. 1.36].
5.4. Exceptional sets for (almost) uniform distribution. — Let (nk)k≥0 be a
strictly increasing sequence of integers, and let ν ∈ M(T) be a (finite) complex Borel
measure on T. We stress that ν is not necessarily a probability measure. Given θ ∈ R,
the sequence (nk θ)k≥0 is said ([30], [28, p. 53]) to be almost uniformly distributed with
respect to ν if there exists a strictly increasing sequence (Nj)j≥1 of positive integers such
that for every arc I ⊂ T whose endpoints are not atoms (mass-points) for ν one has
\[
\lim_{j\to+\infty}\frac{1}{N_{j}}\,\#\{k\le N_{j}\;:\;\exp(2i\pi n_{k}\theta)\in I\}=\nu(I).
\]
The analog of Weyl's criterion states that (nk θ)k≥0 is almost uniformly distributed with
respect to ν if and only if there exists a strictly increasing sequence (Nj)j≥1 of positive
integers such that
\[
\lim_{j\to\infty}\frac{1}{N_{j}}\sum_{k=1}^{N_{j}}\exp(2i\pi m n_{k}\theta)\quad\text{exists for every }m\in\mathbb{Z}.
\]
In this case, the limit is ν̂(m). It can also be proved that (nk θ)k≥0 is almost uniformly
distributed with respect to ν if and only if there exists a strictly increasing sequence
(Nj)j≥1 of positive integers such that
\[
\frac{1}{N_{j}}\sum_{k=1}^{N_{j}}f\bigl(e^{2i\pi n_{k}\theta}\bigr)\to\int_{\mathbb{T}}f\,d\nu
\quad\text{as }j\to+\infty
\]
for every f ∈ C(T).
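These averages are easy to experiment with. The following sketch (the values of θ and the truncation bound are our own illustrative choices) computes the Weyl averages with m = 1 along the Furstenberg sequence; for θ = 1/3 they stay close to 1 in modulus, since 3 divides most elements of the sequence, while for θ = √2 − 1 they come out much smaller:

import numpy as np

# Furstenberg numbers 2^k * 3^j up to 10^11, in increasing order.
vals = sorted({2 ** k * 3 ** j for k in range(40) for j in range(25)})
n = np.array([v for v in vals if v <= 10 ** 11], dtype=float)

for theta in [1 / 3, np.sqrt(2) - 1]:
    avg = np.mean(np.exp(2j * np.pi * n * theta))
    print(f"theta = {theta:.6f}: |Weyl average| = {abs(avg):.3f}")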
We now denote by W((nk)k≥0, ν) the exceptional set of almost uniform distribution of
(nk) with respect to ν. This is the set of all θ ∈ R such that (nk θ)k≥0 is not almost
uniformly distributed with respect to ν. We will write U((nk)k≥0, ν) for the exceptional
set of (classical) uniform distribution of (nk) with respect to ν, which corresponds to the
case where Nj = j for every j ≥ 1.
The size of the exceptional set U((nk)k≥0, ν) has been studied in many works, in particular in the case where ν is the normalized Lebesgue measure on T. In this case, we write
it as U((nk)k≥0). If the sequence (nk)k≥0 is lacunary, U((nk)k≥0) is uncountable, and
even of Hausdorff dimension 1 ([19], see also [24]). See also [34] and [32] for a stronger
result. On the other hand, it is known (see [11], [13]) that among various natural classes
of random sequences of integers, almost all sequences (nk)k≥0 satisfy U((nk)k≥0) = Q.
These typical random sequences (nk)k≥0 are sublacunary, i.e. satisfy nk+1/nk → 1 as
k → +∞. Nonetheless, examples of sublacunary sequences (nk)k≥0 with U((nk)k≥0) uncountable were constructed in [19] (see also [6]). Concerning the size of W((nk)k≥0, ν) we
refer for instance to [33], [24] and [27]. See also [15] for other references.
Our results about the size of W((nk)k≥0, ν) rely on the following generalization of Proposition 4.6, which provides a link between the size of the exceptional set W((nk)k≥0, ν) and
the modified Kazhdan constant of the set {nk ; k ≥ 0}.
Proposition 5.9. — Let (nk)k≥0 be a strictly increasing sequence of positive integers with
n0 = 1, and let ν ∈ M(T) with ν ≠ δ{1}. If W((nk)k≥0, ν) is finite or countably infinite,
then Q = {nk ; k ≥ 0} is a Kazhdan subset of Z, and
\[
\widetilde{\mathrm{Kaz}}(Q)\ge\sqrt{2\bigl(1-\Re e\,\widehat{\nu}(1)\bigr)}.
\]
Proof. — Fix γ ∈ (0, 1 − ℜe ν̂(1)), and let µ be a probability measure on T such that
sup_{k≥0}(1 − ℜe µ̂(nk)) < γ. Then
\[
1-\Re e\int_{\mathbb{T}}\frac{1}{N}\sum_{k=1}^{N}\lambda^{n_{k}}\,d\mu(\lambda)<\gamma
\quad\text{for every }N\ge 1.
\]
Suppose that the measure µ is continuous. Since there exists a strictly increasing sequence
(Nj)j≥1 of integers such that
\[
\frac{1}{N_{j}}\sum_{k=1}^{N_{j}}\lambda^{n_{k}}\to\widehat{\nu}(1)
\quad\text{as }j\to+\infty\quad\text{for every }\lambda\in\mathbb{T}\setminus C,
\]
where C is a finite or countably infinite subset of T, we have 1 − ℜe ν̂(1) ≤ γ, which
contradicts our initial assumption. So µ has a discrete part. It then follows from Theorem
4.1 that the modified Kazhdan constant of Q is at least √(2(1 − ℜe ν̂(1))).
The following result provides an example of a non-lacunary semigroup (nk)k≥0 whose
associated exceptional sets W((nk)k≥0, ν) with respect to ν are uncountable for a large
class of measures ν ∈ M(T).
Theorem 5.10. — Denote by (nk)k≥0 the sequence obtained by ordering the Furstenberg
set F = {2^k 3^{k'} ; k, k' ≥ 0} in a strictly increasing fashion. For every measure ν ∈ M(T)
such that ℜe ν̂(1) < 1/2, the set W((nk)k≥0, ν) is uncountable.
Proof of Theorem 5.10. — Fix ν ∈ M(T), and suppose that W((nk)k≥0, ν) is at most
countable. Since K̃az(F) ≤ 1 by Theorem 2.2, it follows from Proposition 5.9 that
√(2(1 − ℜe ν̂(1))) ≤ 1, i.e. that ℜe ν̂(1) ≥ 1/2. This proves Theorem 5.10.
References
[1] J. Aaronson, Rational ergodicity, bounded rational ergodicity and some continuous measures
on the circle, Israel J. Math. 33 (1979), 181–197.
[2] J. Aaronson, M. Hosseini, M. Lemańczyk, IP-rigidity and eigenvalue groups, Erg. Th.
Dynam. Syst. 34 (2014), 1057–1076.
[3] T. Adams, Tower multiplexing and slow weak mixing, Colloq. Math. 138 (2015), 47–72.
[4] C. Badea, S. Grivaux, Kazhdan sets in groups and equidistribution properties, J. Funct.
Anal. 273 (2017), 1931–1969.
[5] C. Badea, S. Grivaux, Sets of integers determined by operator-theoretical properties: Jamison and Kazhdan sets in the group Z (Actes du 1er congrès national de la SMF - Tours 2016),
Séminaires et Congrès 31, Société Mathématique de France (2017).
[6] R.C. Baker, On a theorem of Erdös and Taylor, Bull. London Math. Soc. 4 (1972), 373–374.
[7] B. Bekka, P. de la Harpe, A. Valette, Kazhdan’s Property (T), New Mathematical
Monographs 11, Cambridge University Press (2008).
[8] V. Bergelson, A. del Junco, M. Lemańczyk, J. Rosenblatt, Rigidity and nonrecurrence along sequences, Erg. Th. Dynam. Syst. 34 (2014), 1464–1502.
[9] V. Bergelson, D. Simmons, New examples of complete sets, with connections to a Diophantine theorem of Furstenberg, Acta Arith. 177 (2017), 101–131.
[10] M. Boshernitzan, Elementary proof of Furstenberg’s Diophantine result, Proc. Amer. Math.
Soc. 122 (1994), 67–70.
[11] M. Boshernitzan, Density modulo 1 of dilations of sublacunary sequences, Adv. Math. 108,
(1994) 104–117.
[12] M. Boshernitzan, Homogeneously distributed sequences and Poincaré sequences of integers
of sublacunary growth, Monatsh. Math. 96 (1983), 173–181.
[13] J. Bourgain, On the maximal ergodic theorem for certain subsets of the integers, Israel J.
Math. 61 (1988), 39–72.
[14] J. Bourgain, E. Lindenstrauss, P. Michel, A. Venkatesh, Some effective results for
×a × b, Erg. Th. Dynam. Syst. 29 (2009), 1705–1722.
[15] Y. Bugeaud, Distribution modulo one and Diophantine approximation, Cambridge Tracts
in Mathematics 193, Cambridge University Press (2012).
[16] Y. Bugeaud, On sequences (an ξ)n≥1 converging modulo 1, Proc. Amer. Math. Soc. 137
(2009), 2609–2612.
[17] I. Chatterji, D. Witte Morris, R. Shah, Relative property (T) for nilpotent subgroups,
preprint 2017, arXiv:1702.01801.
[18] T. Eisner, S. Grivaux, Hilbertian Jamison sequences and rigid dynamical systems, J. Funct.
Anal. 261 (2011), 2013–2052.
[19] P. Erdös, S. Taylor, On the set of points of convergence of a lacunary trigonometric series
and the equidistribution properties of related sequences, Proc. London Math. Soc. 7 (1957),
598–615.
[20] B. Fayad, A. Kanigowski, Rigidity times for a weakly mixing dynamical system which are
not rigidity times for any irrational rotation, Erg. Th. Dynam. Syst. 35 (2015), 2529–2534.
[21] B. Fayad, J.-P. Thouvenot, On the convergence to 0 of mn ξ mod 1, Acta Arith. 165
(2014), 327–332.
[22] H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine
approximation, Math. Systems Theory 1 (1967), 1–49.
[23] S. Grivaux, IP-Dirichlet measures and IP-rigid dynamical systems: an approach via generalized Riesz products, Studia Math. 215 (2013), 237–259.
[24] H. Helson, J.-P. Kahane, A Fourier method in Diophantine problems, J. Analyse Math.
15 (1965), 245–262.
[25] M. Hochman, Geometric rigidity of ×m invariant measures, J. of the European Math. Soc.
14 (2012), 1539–1563.
[26] M. Hochman, Lectures on dynamics, fractal geometry, and metric number theory, J. of Mod.
Dyn. 8 (2014), 437–497.
[27] J.-P. Kahane, Sur les mauvaises répartitions modulo 1, Ann. Inst. Fourier (Grenoble) 14
(1964), 519–526.
[28] L. Kuipers, H. Niederreiter, Uniform distribution of sequences, Pure and Applied Mathematics, Wiley-Interscience (1974).
[29] E. Lindenstrauss, Rigidity of multiparameter actions, Probability in mathematics, Israel
J. Math. 149 (2005), 199–226.
[30] R. Lyons, Fourier-Stieltjes coefficients and asymptotic distribution modulo 1, Ann. of Math.
122 (1985), 155–170.
[31] R. Lyons, On measures simultaneously 2- and 3-invariant, Israel J. Math. 61 (1988), 219–224.
[32] B. de Mathan, Sur un problème de densité modulo 1, C. R. Acad. Sci. Paris 287 (1978),
277–279.
[33] I. Piatetski-Shapiro, On the laws of distribution of the fractional parts of an exponential
function, Izv. Akad. Nauk SSSR Ser. Mat. 15 (1951), 47–52 (in Russian).
[34] A. Pollington, On the density of sequence (nk ξ), Illinois J. Math. 23 (1979), 511–515.
[35] D. Rudolph, ×2 and ×3 invariant measures and entropy, Erg. Th. Dynam. Syst. 10 (1990),
395–406.
[36] A. Venkatesh, The work of Einsiedler, Katok and Lindenstrauss on the Littlewood conjecture, Bull. Amer. Math. Soc. 45 (2008), 117–134.
Catalin Badea, Laboratoire Paul Painlevé, UMR 8524, Université de Lille, Cité Scientifique, Bâtiment
M2, 59655 Villeneuve d’Ascq Cedex, France • E-mail : [email protected]
Sophie Grivaux, CNRS, Laboratoire Paul Painlevé, UMR 8524, Université de Lille,
Cité Scientifique, Bâtiment M2, 59655 Villeneuve d’Ascq Cedex, France
E-mail : [email protected]
| 4 |
Estimation of Risk Contributions with MCMC
Takaaki Koike∗ and Mihoko Minami†

February 13, 2017

arXiv:1702.03098v1 [q-fin.RM] 10 Feb 2017
Abstract
Determining the risk contributions of unit exposures to portfolio-wide economic capital is an important task in financial risk management. Despite its practical demand, the computation of risk contributions is challenging for most risk models because it often requires rare-event simulation. In this paper, we address the problem of estimating risk contributions when the total risk is measured by Value-at-Risk (VaR). We propose a new estimator of VaR contributions that utilizes the Markov chain Monte Carlo (MCMC) method. Unlike the existing estimators, our MCMC-based estimator is computed from samples of the conditional loss distribution given the rare event of interest. The MCMC method enables us to generate such samples without evaluating the density of the total risk. Thanks to these features, our estimator has improved sample-efficiency compared with the crude Monte Carlo method. Moreover, our method is widely applicable to various risk models specified by a joint portfolio loss density. We show that our MCMC-based estimator has several attractive properties, such as consistency and asymptotic normality. Our numerical experiments also demonstrate that, in various risk models used in practice, our MCMC estimator has smaller bias and MSE than those of existing estimators.
Keywords: Quantitative risk management; Value-at-Risk; Economic capital; Risk allocation; risk
contributions; VaR contributions; Markov chain Monte Carlo
1 Introduction
In most financial institutions, the risk of their portfolios is measured by economic capital. For the
purpose of further risk analysis, it is necessary to decompose the portfolio-wide economic capital
into the sum of risk contributions by unit exposures. This risk-redistribution process is called
capital allocation, and the allocated capitals are called risk contributions (see, e.g., Dev, 2004).
Since it is non-trivial to allocate the economic capital in an economically meaningful way, various
principles to determine the risk contributions have been proposed. Among these principles, the
Euler principle is one of the most well-established rules, proposed in Tasche (1999). Its financial
justification is given from differing points of view (e.g., by Denault, 2001; Kalkbrener, 2005; Tasche,
1999). Detailed references for rationalizing the use of the Euler principle can also be found in Tasche
(2008).
Despite its good economic properties, actual computation of risk contributions poses theoretical and numerical difficulties, especially when the portfolio-wide risk is measured by Value-at-Risk
∗ Center for Mathematics, Graduate School of Science and Technology, Keio University, [email protected]
† Department of Mathematical Engineering, Faculty of Science and Technology, Keio University, [email protected]
(VaR). Although the explicit formula of VaR contributions is derived by Tasche (2001), it can rarely be calculated analytically, with very few exceptions (such as CreditRisk+ studied in Tasche, 2004). Therefore, the Monte Carlo (MC) method is often used to obtain a numerical solution. However, the MC estimator of VaR contributions suffers from significant bias caused by sample-inefficiency and inevitable numerical modification (see, e.g., Yamai and Yoshiba, 2002 for an example). To overcome these difficulties, several methods have been proposed in the literature. Hallerbach (2003) and Tasche and Tibiletti (2004) proposed approximation formulae obtained by regarding the VaR contributions as the best prediction of losses given the total loss, under some model assumptions. In this paper, we call the estimator derived in such a way the generalized regression (GR) estimator. Evaluation of the approximation error caused by the model misspecification is quite difficult. Furthermore, it is also challenging to find a well-fitted model among the losses so as to improve the accuracy of the approximation. Glasserman (2005) developed importance sampling (IS) estimators in the case of credit portfolios. Although the problem of sample-inefficiency is overcome by the IS method, the risk models for which the IS technique is available are quite limited. Consequently, the IS estimator cannot be applied to various risk models widely used in practice. Tasche (2009) proposed the Nadaraya-Watson (NW) estimator, constructed using the kernel estimation method. Although the NW estimator is applicable to most risk models, incorporation of the importance sampling technique is still indispensable to achieve efficient estimation. Moreover, the asymptotic standard error of the NW estimator cannot be computed easily.
In this paper, we propose a new estimator of VaR contributions that utilizes the Markov chain Monte Carlo (MCMC) method. Our MCMC-based estimator is available whenever the portfolio loss model is specified by a joint density. This case contains a wide variety of risk models, since the joint density can be completely specified for all models having marginal densities and a copula density (see, e.g., Yoshiba, 2013 for various examples of this type of model). In this case, the IS technique is hardly available, and thus the problem of estimating VaR contributions is generally unsettled. We study theoretical properties of our MCMC estimator, and provide a guideline for the efficient application of MCMC to our problem. The proposed estimation method is then carried out within some risk models often used in practice. For these risk models, we compare the performance of the MCMC estimator with the existing estimators.
The MCMC estimator has several attractive properties. First, the MCMC estimator is consistent. Since the MCMC method has improved sample-efficiency compared with the MC method, we can expect the MCMC estimate to have much smaller bias than that of MC. Second, the MCMC estimator satisfies asymptotic normality, and its standard error can be consistently estimated by using methods from the theory of Markov chains. Thanks to these features, we can construct an approximate confidence interval for the true VaR contributions. Moreover, together with the sample-efficiency of the MCMC method, we can expect the MCMC estimator to have a much smaller standard error compared with the existing methods. Finally, the MCMC estimator is free from model-misspecification bias, and can flexibly incorporate the features of the underlying loss distribution into the estimation. In contrast with the GR estimator, the MCMC estimator does not use any approximation; therefore, it is free from the approximation error caused by model misspecification. Moreover, by choosing an appropriate proposal distribution for the MCMC, we can directly capture the features of the underlying loss distributions. Thanks to these properties, the MCMC estimator can have stably good performance regardless of the underlying risk model.
The key idea behind the MCMC estimator is to generate samples directly from the joint loss distribution given a rare event of interest. This approach is completely different from the crude MC method, in which samples are generated from the loss distribution itself. In the MC method, a large portion of the samples has to be discarded because VaR contributions depend only on losses given the rare event. Our method, in contrast, does not waste any samples in the estimation. Thanks to this difference, great sample-efficiency can be achieved by the MCMC method. How can we generate samples directly from the conditional density on the rare event? Usually this is quite difficult because the conditional density contains the density of the total loss, which is too cumbersome to compute. Nevertheless, the MCMC method enables us to generate such samples directly by sequentially updating the samples, without evaluating the cumbersome term involving the total loss.
The paper is organized as follows. Section 2 introduces the mathematical setting of the capital allocation problem, and explains the problems of estimating VaR contributions with the existing estimators. Section 3 is devoted to a brief introduction to the Markov chain Monte Carlo method. Several families of proposal distributions are also introduced for the efficient use of MCMC. In Section 4, we propose the MCMC estimator, which combines the MCMC method with the estimation of VaR contributions to achieve greater sample efficiency. Based on the theory of MCMC, we also study some theoretical properties of our MCMC estimator. Numerical results and empirical studies are presented in Section 5. We demonstrate that, for risk models widely used in practice, the MCMC estimator has smaller bias and MSE than those of existing estimators. Through the empirical study, we provide a guideline for incorporating the MCMC method in our framework of estimating VaR contributions. Concluding remarks, future research and potential extensions of the MCMC estimator are summarized in Section 6.
Throughout the paper, integration of vector-valued functions is applied componentwise. Let M^{r×c}(E) be the set of E-valued r × c matrices and M^{r×c}_+(E) ⊂ M^{r×c}(E) be the set of E-valued positive definite r × c matrices. For a matrix M ∈ M^{r×c}(E), we write M^{(i,j)} for the (i,j)-th component of M, for i = 1, 2, ..., r and j = 1, 2, ..., c. For any vector x ∈ R^d, we write x_j for the j-th component of x.
2 Capital Allocation Problem
Throughout the paper, we consider the following portfolio loss model:
S = Σ_{j=1}^{d} X_j,    (2.1)
where d ≥ 3 is the size of the portfolio, X1 , X2 , . . . , Xd are random variables that represent the
losses incurred by the exposures j = 1, 2, . . . , d within a fixed time period. The random variable
S defined in (2.1) stands for the portfolio-wide loss. In this paper, a positive value of loss random
variable represents a financial loss, and a negative loss is interpreted as a profit. Let Fj be the
cumulative distribution function (cdf) of Xj for j = 1, 2, . . . , d, FX be the joint cdf of the random
vector X = (X1 , X2 , . . . , Xd ) and FS be the cdf of the total loss S. Let C be a copula of FX . By
Sklar’s theorem (see, e.g., Nelsen, 2013), it holds that
FX (x) = C(F1 (x1 ), . . . , Fd (xd ))
(2.2)
for any x ∈ Rd . This formula allows us to separate the modelling of the individual marginal losses
from their dependence structure. See McNeil et al. (2015) for the benefits of this copula modelling
approach, and its application in financial risk management.
For the sake of argument, we impose two assumptions on the portfolio loss model (2.1) in this paper. One assumption is supp(F_X) = R^d_+ or R^d, where supp(F) is the support of a function F and R^d_+ := {x ∈ R^d : x ≥ 0}. The former case focuses on modelling pure losses and the latter on modelling profits and losses (P&L). The other assumption is that the distributions F_1, ..., F_d, F_X, F_S and C are all absolutely continuous, i.e., they all have densities f_1, ..., f_d, f_X, f_S and c, respectively. In other words, we confine ourselves to continuous loss models; discrete loss models are beyond the scope of this paper. Although typical credit loss models such as factor models (see, e.g., Bluhm et al., 2016) are excluded, a wide variety of loss models are still included in
the sphere of our research. Under the assumption of continuity, we have the density form of Sklar's formula:
f_X(x) = c(F_1(x_1), ..., F_d(x_d)) · f_1(x_1) · · · f_d(x_d),    (2.3)
obtained by differentiating both sides of (2.2).
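To make the copula construction (2.3) concrete, the following Python sketch evaluates a joint density from user-supplied marginal pdfs and cdfs. The Gaussian copula is used only because its density has a short closed form; it is an illustrative choice of ours (the simulations in Section 5 use t and π-rotated Clayton copulas instead), and all function names are hypothetical.

import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, R):
    # c(u) = |R|^{-1/2} exp(-0.5 x^T (R^{-1} - I) x), with x = Phi^{-1}(u)
    x = norm.ppf(u)
    quad = x @ (np.linalg.inv(R) - np.eye(len(u))) @ x
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(R))

def joint_density(x, marginal_pdfs, marginal_cdfs, R):
    # Density-form Sklar formula (2.3): copula density evaluated at the
    # marginal cdf values, multiplied by the marginal densities.
    u = np.array([F(xj) for F, xj in zip(marginal_cdfs, x)])
    return gaussian_copula_density(u, R) * np.prod([f(xj) for f, xj in zip(marginal_pdfs, x)])

Swapping in a t or rotated Clayton copula density only changes gaussian_copula_density; the composition in joint_density stays the same.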
As mentioned in Section 1, computing the risk contributions is important in the framework of integrated risk management. In practice, a two-step procedure is conducted to determine the risk contributions. The first step is to compute the economic capital ρ(S), where ρ is a so-called risk measure. A risk measure is a map from a loss random variable to a capital buffer that is required to cover the loss with a certain high probability. One of the most popular risk measures is the Value-at-Risk (VaR), defined by
VaRp (X) = inf{x ∈ R : P(X ≤ x) ≥ p},
(2.4)
where p ∈ (0, 1) is called the confidence level. The second step is to allocate the capital ρ(S) to the d individual exposures according to some principle. Mathematically, capital allocation is the problem of determining the vector (AC_1, AC_2, ..., AC_d) that satisfies the full allocation property:
ρ(S) = Σ_{j=1}^{d} AC_j.    (2.5)
The Euler principle solves this problem by utilizing the well-known Euler rule for a function u ↦ ρ(u^T X):
ρ(u^T X) = Σ_{j=1}^{d} u_j · ∂ρ(u^T X)/∂u_j,   u ∈ Λ,    (2.6)
where Λ ⊂ R^d \ {0} is an open set such that 1 ∈ Λ and ρ is a positive homogeneous risk measure. Define
AC_j^ρ := ∂ρ(u^T X)/∂u_j |_{u=1},   j = 1, 2, ..., d.    (2.7)
Then the full allocation property (2.5) holds for this vector (AC_1^ρ, ..., AC_d^ρ) by taking u = 1 in the equation (2.6). Since VaR_p is positive homogeneous, the Euler principle can be applied and its solution is given by
AC_j^{VaR_p} := ∂VaR_p(u^T X)/∂u_j |_{u=1} = E[X_j | X_1 + · · · + X_d = VaR_p(S)].    (2.8)
See Tasche (2001) for the derivation of the second equality. We call the vector AC^{VaR_p} := (AC_1^{VaR_p}, ..., AC_d^{VaR_p}) the VaR contributions. Since we focus only on this form of allocated capital in this paper, we drop the superscript "VaR_p" and simply denote the VaR contributions by AC = (AC_1, ..., AC_d).
Although its good economic properties have been reported in the literature, VaR contributions cannot be computed easily. Even when the joint density of the portfolio loss vector f_X is analytically specified, theoretical computation of AC is difficult since deriving the joint distribution of (X_j, S) is challenging in general. A possible numerical method to calculate the VaR contributions is the Monte Carlo (MC) method. In the crude MC method, we consider the pseudo VaR
contributions, defined by
AC_δ = E[ X | S ∈ [VaR_p(S) − δ, VaR_p(S) + δ] ]    (2.9)
instead of the true AC defined in (2.8), for a sufficiently small δ > 0. Since the probability P(S ∈ [VaR_p(S) − δ, VaR_p(S) + δ]) is positive, the right-hand side of (2.9) can be written as
AC_δ = E[X · 1[S ∈ A_δ]] / P(S ∈ A_δ),   where   A_δ = [VaR_p(S) − δ, VaR_p(S) + δ].    (2.10)
This expression allows us to construct the estimator of the pseudo VaR contributions given by
\widehat{AC}^{MC}_{δ,N} = [ (1/N) Σ_{n=1}^{N} X^{(n)} · 1[S^{(n)} ∈ A_δ] ] / [ (1/N) Σ_{n=1}^{N} 1[S^{(n)} ∈ A_δ] ] = (1/M_{δ,N}) Σ_{n=1}^{N} X^{(n)} · 1[S^{(n)} ∈ A_δ],    (2.11)
where M_{δ,N} = Σ_{n=1}^{N} 1[S^{(n)} ∈ A_δ], X^{(1)}, ..., X^{(N)} i.i.d. ∼ F_X and {S^{(n)} := X_1^{(n)} + · · · + X_d^{(n)}}_{n=1}^{N} ∼ F_S. We call (2.11) the MC estimator. The MC estimator is used to estimate the true VaR contributions by setting δ sufficiently small and N sufficiently large. Note that this method is available only when δ > 0, since P(S ∈ A_0) = P(S = VaR_p(S)) = 0 by absolute continuity of F_S.
Although we can compute the MC estimator whenever we can generate i.i.d. samples from the
joint loss distribution F_X, it has an inevitable bias as an estimator of the true VaR contributions. The bias of the MC estimator can be decomposed as
\widehat{AC}^{MC}_{δ,N} − AC = b_δ(N) + b(δ),    (2.12)
where b_δ(N) = \widehat{AC}^{MC}_{δ,N} − AC_δ and b(δ) = AC_δ − AC. Because we generally do not know the true AC, the second term b(δ) cannot be computed easily. Therefore, δ should be as small as possible. However, when δ is quite small, it is difficult to maintain a sufficient sample size M_{δ,N} that keeps the first term b_δ(N) small enough. This is because E[M_{δ,N}] = N · P(S ∈ A_δ) and P(S ∈ A_δ) is usually much less than 1 − p. Due to this trade-off, the two parts of the bias are difficult to eliminate simultaneously.
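As an illustration, here is a minimal Python sketch of the crude MC estimator (2.11). It reads the empirical VaR as the ⌈Np⌉-th ascending order statistic of the simulated totals, which is our reading of the order-statistic convention used later in Section 4.1; the function name is hypothetical.

import numpy as np

def mc_var_contributions(X, p, delta):
    # X: (N, d) array of i.i.d. samples from F_X
    N = X.shape[0]
    S = X.sum(axis=1)                            # total losses S^(n)
    v = np.sort(S)[int(np.ceil(N * p)) - 1]      # empirical VaR_p(S)
    in_band = np.abs(S - v) <= delta             # indicator of S^(n) in A_delta
    M = in_band.sum()                            # M_{delta,N}
    if M == 0:
        raise ValueError("no samples fell in A_delta; increase delta or N")
    return X[in_band].mean(axis=0)               # estimate of the pseudo contributions AC_delta

The bias trade-off discussed above is visible here: shrinking delta shrinks M and inflates the Monte Carlo error b_delta(N), while enlarging delta inflates b(delta).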
To overcome this problem, several estimators have been proposed in the literature. We end this section by introducing some existing estimators and their problems. Firstly, the Nadaraya-Watson (NW) kernel estimator proposed in Tasche (2009) is defined by
\widehat{AC}^{NW}_{φ,h,N} = [ Σ_{n=1}^{N} X^{(n)} · φ((S^{(n)} − VaR_p(S))/h) ] / [ Σ_{n=1}^{N} φ((S^{(n)} − VaR_p(S))/h) ],    (2.13)
where φ is the kernel density and h > 0 is the bandwidth. Since this estimator is a slight modification of the MC estimator (2.11), it has the same problem as the MC estimator explained above. Moreover, the bias and the standard error of the NW estimator (see, e.g., Hansen, 2009 for their expressions) cannot be computed easily because they require the evaluation of the total loss density f_S(VaR_p(S)).
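A minimal Python sketch of (2.13) with a Gaussian kernel; the Silverman rule-of-thumb default anticipates the bandwidth choice made in Section 5, and the function name is ours.

import numpy as np
from scipy.stats import norm

def nw_var_contributions(X, v, h=None):
    # X: (N, d) samples from F_X; v: estimate of VaR_p(S)
    S = X.sum(axis=1)
    if h is None:
        h = 1.06 * S.std(ddof=1) * len(S) ** (-0.2)   # Silverman's rule of thumb
    w = norm.pdf((S - v) / h)                          # kernel weights phi((S^(n) - v)/h)
    return (X * w[:, None]).sum(axis=0) / w.sum()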
Secondly, Glasserman (2005) developed importance sampling (IS) estimators to overcome the problem of sample-inefficiency. By using the IS technique to increase the number of samples whose sums belong to A_δ, the bias of the IS estimators can be made sufficiently small. However, the risk models that can utilize this IS technique are limited to specific credit risk models (see, e.g., Huang et al., 2007; Mausser and Rosen, 2008). Unfortunately, the IS estimators are hardly available in
our case, where the loss model (2.1) has a joint loss density as specified in (2.3). Finally, Hallerbach (2003) and Tasche and Tibiletti (2004) constructed estimators by supposing a generalized regression model among the losses:
X = g_β(S) + ε,    (2.14)
where g_β(s) is a function parametrized by β ∈ R^q and ε is an error random vector such that E[ε | S = VaR_p(S)] = 0. Let β̂_N be an estimator of β. Then we call the following form of the estimator
\widehat{AC}^{GR}_{g_β,N} := g_{β̂_N}(VaR_p(S))    (2.15)
the generalized regression (GR) estimator. Although this estimator is intuitive and easy to apply,
it also has an inevitable bias. Let g be the true function such that E[X|S = v] = g(v) and let β ∗
be the minimizer of the error term ε in some sense. If β̂N goes to β ∗ as N → +∞, the bias of the
GR estimator can be decomposed as
\widehat{AC}^{GR}_{g_β,N} − AC = b(N) + b(g_β),    (2.16)
where b(N) = g_{β̂_N}(VaR_p(S)) − g_{β*}(VaR_p(S)) and b(g_β) = g_{β*}(VaR_p(S)) − g(VaR_p(S)). The first term b(N) is caused by the estimation error and the second term b(g_β) by the misspecification of the model (2.14). Since we do not know the true function g in general, the second term b(g_β) is
difficult to evaluate. To avoid the model-misspecification, we have to choose the family of functions
{gβ : β ∈ Rq } large enough to contain g. On the other hand, estimating the parameter β is not as
easy as in the ordinary regression model. Since we usually do not have samples from FX|S=VaRp (S) ,
it is non-trivial to estimate β by minimizing the error term ε given {S = VaRp (S)}. Due to these
obstacles, it is generally difficult to evaluate the bias of the GR estimator. A notable exception is
the case when the risk model X follows an elliptical distribution. In this case, it holds that (e.g.,
Corollary 8.43 in McNeil et al., 2015)
E[X | S = VaR_p(S)] − E[X] = (Cov(X, S)/Var(S)) · (VaR_p(S) − E[S]).    (2.17)
Therefore, the true VaR contributions are obtained when we choose g_β(x) = β_0 + β_1 x and
β_0* = E[X] − (Cov(X, S)/Var(S)) · E[S],   β_1* = Cov(X, S)/Var(S).    (2.18)
Since this β* is the minimizer of E[ε²] = E[(X − β_0 − β_1 S)²], the ordinary least squares (OLS) estimator β_N^{OLS} converges to β* as N goes to infinity. As a consequence, the second term b(g_β) is
zero and the first term b(N ) can be eliminated by taking N sufficiently large.
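A minimal Python sketch of the GR estimator (2.15) with the linear model g_β(s) = β_0 + β_1 s, fitted componentwise by OLS; as discussed above, this choice is justified (asymptotically) only in the elliptical case (2.17)-(2.18). The function name is hypothetical.

import numpy as np

def gr_var_contributions(X, v):
    # X: (N, d) samples from F_X; v: estimate of VaR_p(S)
    S = X.sum(axis=1)
    Sc = S - S.mean()
    beta1 = (X - X.mean(axis=0)).T @ Sc / (Sc @ Sc)   # OLS slope per exposure
    beta0 = X.mean(axis=0) - beta1 * S.mean()          # OLS intercept per exposure
    return beta0 + beta1 * v                           # g_{beta_hat}(v)

Note that beta1 sums to 1 and beta0 sums to 0 across exposures, so the full allocation property (2.5) holds automatically for this estimator.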
3 General theory of MCMC
Markov chain Monte Carlo (MCMC) is a method to generate samples from a probability distribution by constructing a Markov chain whose stationary distribution is the desired one. By allowing Markovian-type dependence within the samples, MCMC makes it possible to simulate a wide variety of distributions, even ones that cannot be simulated directly in practice. In this section, we briefly introduce the theory of the MCMC method. We review consistency and asymptotic normality of estimators computed from a Markov chain. To obtain an efficient estimator, it is important to choose an appropriate proposal distribution when generating the sample path of the Markov chain. In Section 3.2,
we summarize the family of proposal distributions used in this paper.
3.1 Brief introduction to MCMC
Let E ⊂ R^d be a set and E be a σ-algebra of subsets of E. A Markov chain is a sequence of E-valued random variables (X^{(1)}, X^{(2)}, ...) satisfying the Markov property:
P(X^{(n+1)} ∈ A | X^{(k)} = x^{(k)}, k ≤ n) = P(X^{(n+1)} ∈ A | X^{(n)} = x^{(n)}),    (3.1)
for all n ≥ 1, A ∈ E, and x^{(1)}, ..., x^{(n)} ∈ E. A Markov chain is characterized by its stochastic kernel P : E × E → R_+, given by (x, A) ↦ P(x, A) := P(X^{(n+1)} ∈ A | X^{(n)} = x). If there exists a probability distribution π such that π(A) = ∫ π(dx) P(x, A) for all A ∈ E, then π is called the stationary distribution. See, e.g., Nummelin (2004) for the general theory of Markov chains.
Markov chain Monte Carlo (MCMC) is a widely used statistical method for simulating a probability distribution by generating a Markov chain. MCMC is often used to estimate the quantity
π(h) := ∫ h(x) π(dx),    (3.2)
for some distribution π and a π-measurable function h. Its MCMC estimator is given by
π̂_N(h) := (1/N) ( h(X^{(1)}) + · · · + h(X^{(N)}) ),    (3.3)
where (X^{(1)}, ..., X^{(N)}) is an N-path of the Markov chain whose stationary distribution is π. In the framework of MCMC, we call this π the target distribution. Since the target distribution π is externally determined by the problem to solve, the problem of MCMC is to find a stochastic kernel P that has the stationary distribution π. Additionally, we require that a sample path of P can be generated easily.
One of the most popular stochastic kernels used in practice is the Metropolis-Hastings (MH) kernel (Metropolis et al., 1953; Hastings, 1970), defined by
P(x, dy) = p(x, y) dy + r(x) δ_x(y),    (3.4)
where
p(x, y) = q(x, y) α(x, y),
α(x, y) = min[ π(y) q(y, x) / (π(x) q(x, y)), 1 ]  if π(x) q(x, y) > 0, and α(x, y) = 0 otherwise,
r(x) = 1 − ∫ p(x, y) dy,
δ_x is the Dirac delta function, and q : E × E → R_+ is a function such that x ↦ q(x, y) is a measurable function for all y ∈ E, and y ↦ q(x, y) is a probability density function (pdf) for all
x ∈ E. This function q is called a proposal distribution. It can be shown that the MH kernel has
a stationary distribution π (Tierney, 1994). The great popularity of the MH kernel comes from
the fact that we can easily generate a sample path of the corresponding Markov chain by using the so-called MH algorithm (see, e.g., Chib and Greenberg, 1995 for a simple and intuitive exposition
of the algorithm). Under the mild conditions that (i) a vector x(1) ∈ supp(π) is known, (ii) samples
from q(x, ·) can be generated for all x ∈ E , and (iii) the ratio π(y)/π(x) can be calculated for all
x, y ∈ E, we can generate N -sample path of the desired Markov chain by the following algorithm:
Algorithm 1: MH Algorithm
1. Set N > 0, q: proposal distribution, and X^{(1)} = x^{(1)} ∈ supp(π)
2. For n = 1, 2, ..., N,
3.   Generate X_*^{(n)} ∼ q(X^{(n)}, ·) and U ∼ U(0, 1)
4.   Set
     α_n := α(X^{(n)}, X_*^{(n)}) = min[ (π(X_*^{(n)}) / π(X^{(n)})) · (q(X_*^{(n)}, X^{(n)}) / q(X^{(n)}, X_*^{(n)})), 1 ]    (3.5)
5.   Set
     X^{(n+1)} := 1[U ≤ α_n] · X_*^{(n)} + 1[U > α_n] · X^{(n)}    (3.6)
6. End for
7. Return (X^{(1)}, ..., X^{(N)}) as an N-sample path of the Markov chain with the kernel (3.4) and the initial value X^{(1)} = x^{(1)}
We call α_n := α(X^{(n)}, X_*^{(n)}) in (3.5) the acceptance probability at the n-th iteration. Based on the N-sample path (X^{(1)}, ..., X^{(N)}) generated in Algorithm 1, we can compute the MCMC estimator π̂_N(h) given by (3.3).
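The following Python sketch implements Algorithm 1 in log scale, which is a standard numerical safeguard and our choice rather than the paper's; `log_target` only needs to be known up to an additive constant, which is exactly what condition (iii) requires. Names are ours.

import numpy as np

def metropolis_hastings(log_target, propose, log_q, x1, N, rng=None):
    # log_target: log pi up to a constant; propose(x): draw from q(x, .);
    # log_q(x, y): log q(x, y); x1: initial state in supp(pi).
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x1, dtype=float)
    lp_x = log_target(x)
    path = np.empty((N, x.size))
    for n in range(N):
        y = propose(x)
        lp_y = log_target(y)
        # log of the acceptance probability (3.5)
        log_alpha = min(lp_y + log_q(y, x) - lp_x - log_q(x, y), 0.0)
        if np.log(rng.uniform()) <= log_alpha:
            x, lp_x = y, lp_y                  # accept the candidate, as in (3.6)
        path[n] = x
    return path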
The MCMC estimator π̂_N(h) has several attractive properties such as consistency and asymptotic normality. Firstly, the MCMC estimator is consistent (see, e.g., Theorem 1 in Nummelin, 2002), i.e.,
lim_{N→+∞} π̂_N(h) = π(h)   with probability 1,    (3.7)
for all π-integrable functions h and every initial state X^{(1)} = x^{(1)} ∈ supp(π). Moreover, a Central Limit Theorem (CLT) holds under some regularity conditions (see, e.g., Chapter 17 in Meyn and Tweedie, 2012), i.e.,
√N ( π̂_N(h) − π(h) ) →_d N_d(0, Σ_h)   as N → +∞,    (3.8)
where
Σ_h := Var_π[ h(X^{(1)}) ] + 2 Σ_{k=1}^{∞} Cov_π[ h(X^{(1)}), h(X^{(1+k)}) ] ∈ (0, ∞).    (3.9)
Since we can rarely compute the asymptotic variance (3.9) in real situations, we have to estimate it from the same sample path (X^{(1)}, ..., X^{(N)}) that we used to compute π̂_N(h). One simple and popular estimator of Σ_h is the so-called batch means estimator (see, e.g., Geyer, 2012). For the N-path of the Markov chain (X^{(1)}, ..., X^{(N)}), the batch means estimator Σ̂_{h,N} is defined by
Σ̂_{h,N} = (L_N / (B_N − 1)) Σ_{b=1}^{B_N} ( π̂_{N,b}(h) − π̂_N(h) )^T ( π̂_{N,b}(h) − π̂_N(h) ),    (3.10)
where L_N, B_N are positive integers satisfying N = L_N · B_N, and
π̂_{N,b}(h) = (1/L_N) Σ_{l=(b−1)L_N}^{bL_N − 1} h(X^{(l)})   for b = 1, 2, ..., B_N.    (3.11)
LN is called the batch length and BN is the number of batches. Under some regularity conditions,
the batch means estimator Σ̂h,N converges to Σh with probability 1 as N → +∞ (see, e.g., Jones
et al., 2006; Vats et al., 2015). By using the asymptotic relation (3.8) and the consistency of the
batch means estimator Σ̂h,N , we can construct an approximate 95% confidence interval of the true
quantity π(h), given by
π̂_N(h) − 1.96 · σ̂_{h,N}/√N < π(h) < π̂_N(h) + 1.96 · σ̂_{h,N}/√N,    (3.12)
where σ̂_{h,N} = ( (Σ̂_{h,N}^{(j,j)})^{1/2}, j = 1, 2, ..., d ) and the inequalities are understood componentwise.
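A minimal Python sketch of the batch means recipe (3.10)-(3.12) for a d-dimensional chain with h the identity map (the case needed later in this paper); it returns the componentwise approximate 95% confidence bounds. Names are ours.

import numpy as np

def batch_means_ci(path, z=1.96):
    N, d = path.shape
    L = int(np.sqrt(N))                       # batch length L_N
    B = N // L                                # number of batches B_N
    trimmed = path[:B * L]                    # drop the remainder so N = L*B
    overall = trimmed.mean(axis=0)            # pi_hat_N(h)
    batch_means = trimmed.reshape(B, L, d).mean(axis=1)   # pi_hat_{N,b}(h)
    # diagonal of the batch means estimator (3.10)
    var_hat = L * ((batch_means - overall) ** 2).sum(axis=0) / (B - 1)
    half = z * np.sqrt(var_hat / (B * L))     # z * sigma_hat / sqrt(N), as in (3.12)
    return overall - half, overall + half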
3.2 Choice of the proposal distribution
When implementing the MCMC, an appropriate choice of the proposal function q is necessary.
When choosing q, it is desirable that the Markov chain characterized by (3.4) theoretically satisfies the Central Limit Theorem (3.8), and also that its asymptotic variance (3.9) is as small as possible. Since
the asymptotic variance Σh can rarely be calculated explicitly in real situation, we usually rely
on post-implementation review, i.e. evaluating the goodness of the selected proposal distribution
after performing the MCMC. In this section, we introduce two empirical methods for checking
validity of the proposal selection. We also provide some families of the proposal distributions for
an appropriate choice of q. Theoretical verification of CLT will be carried over into Section 4.3.
In practice, there are two prevalent methods to check the validity of the proposal distribution
(see, e.g., Geyer, 2012 for details). One method is to check that the autocorrelation plots of the marginal sample paths steadily decline. For an N-sample path (X^{(1)}, ..., X^{(N)}) and the MCMC estimator π̂_N(h) = (π̂_{N,1}(h), ..., π̂_{N,d}(h)), we draw plots of the sample autocorrelations r̂_j(k) := R̂_j(k)/R̂_j(0), where
R̂_j(k) := (1/(N − k)) Σ_{n=1}^{N−k} ( h_j(X_j^{(n)}) − π̂_{N,j}(h) ) · ( h_j(X_j^{(n+k)}) − π̂_{N,j}(h) ),    (3.13)
versus the time lag k = 0, 1, 2, ..., for j = 1, 2, ..., d. We can expect that the asymptotic variance
Σh should be small if the autocorrelation plots steadily decline to zero as the lag increases. The
other method is to check the acceptance rate, i.e. the percentage of times a candidate is accepted.
In general, an extremely low acceptance rate implies that the chain tends to get stuck at one point. Conversely, an extremely high acceptance rate occurs when the chain moves only around the mode of the target density and does not traverse the entire support (Chib and Greenberg, 1995). The first situation yields a high asymptotic variance, and the second gives a deceptively low asymptotic
variance. To avoid such situations, it is important to check that the acceptance rate takes a
moderate value.
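Both diagnostics are a few lines of Python. The sketch below treats any repeated consecutive state as a rejection, which is exact for continuous proposals, and computes the sample ACF (3.13) for one marginal path. Names are ours.

import numpy as np

def acceptance_rate(path):
    # fraction of iterations in which the chain actually moved
    return np.any(path[1:] != path[:-1], axis=1).mean()

def sample_acf(x, max_lag=50):
    # sample autocorrelations r_hat(k) = R_hat(k)/R_hat(0) for one marginal path x
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])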
Typically, the proposal distribution q is selected from a certain class of distributions. To find a suitable q, several classes are in order. First, if q(x, y) = f(y − x) for some pdf f, then the candidate X_* is drawn according to the process
X_* = X + Z,   where Z ∼ f,    (3.14)
and X is the current state. We call this q the random walk proposal distribution. In the case when f is symmetric around the origin, the acceptance probability (3.5) is given simply by α(x, y) = min[π(y)/π(x), 1]. Second, when q(x, y) = f(y) for some pdf f, then the candidate X_* is updated by
X_* = Z,   where Z ∼ f.    (3.15)
Since the candidate is drawn independently of the current position, this q is called the independent
proposal distribution. The two proposal distributions, the random walk and the independent
proposals, are widely used due to their simplicity. However, these proposal distributions often
fail to perform well when the target distribution π is heavy-tailed. To overcome this problem, we
finally introduce the mixed preconditioned Crank-Nicolson (MpCN) proposal distribution (proposed
by Kamatani, 2014). This proposal distribution updates the candidate according to the following
process:
X_* = μ + ρ^{1/2} · (X − μ) + (1 − ρ)^{1/2} · Z^{−1/2} · W,    (3.16)
where Z follows the gamma distribution with shape parameter d/2 and scale parameter ||Σ^{−1/2}(X − μ)||²/2 for μ := E[X] and Σ := Var[X], and W ∼ N_d(0, Σ). Note that here we introduce the standardized version of the original MpCN in Kamatani (2014). The acceptance probability (3.5) can be written as
α(X, X_*) = min[ (π(X_*)/π(X)) · ( ||Σ^{−1/2}(X − μ)|| / ||Σ^{−1/2}(X_* − μ)|| )^{−d}, 1 ].    (3.17)
The major difference of this proposal distribution from the first two simple ones is that, not only
the mean but also the variance of the candidate changes with the current state X. Since the
MpCN proposal distribution admits larger jumps in the tail, we can expect better acceptance
rate even when π is heavy-tailed (see, e.g., Livingstone, 2015 for such position-dependent proposal
distributions).
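A Python sketch of one step of the standardized MpCN update (3.16)-(3.17). One caveat: we read the gamma parameter ||Σ^{−1/2}(X − μ)||²/2 as a rate (density ∝ z^{a−1} e^{−bz}), since that reading makes the proposal scale-preserving; this parametrization is our assumption, and Kamatani (2014) should be consulted for the canonical form. Names are ours.

import numpy as np

def mpcn_step(x, mu, Sigma, log_target, rho=0.8, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    r2 = lambda u: (u - mu) @ np.linalg.solve(Sigma, u - mu)   # ||Sigma^{-1/2}(u - mu)||^2
    z = rng.gamma(shape=d / 2.0, scale=2.0 / r2(x))            # rate r2/2 <=> scale 2/r2
    w = np.linalg.cholesky(Sigma) @ rng.standard_normal(d)     # W ~ N_d(0, Sigma)
    y = mu + np.sqrt(rho) * (x - mu) + np.sqrt(1.0 - rho) * w / np.sqrt(z)
    # log acceptance probability (3.17): the norm ratio enters with power -d
    log_alpha = log_target(y) - log_target(x) - 0.5 * d * (np.log(r2(x)) - np.log(r2(y)))
    return y if np.log(rng.uniform()) <= min(log_alpha, 0.0) else x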
4 Proposed method
We have shown that estimators constructed by MCMC have several attractive properties, such as consistency and asymptotic normality. In this section, we propose a new estimator of VaR contributions that utilizes the MCMC method to achieve efficient estimation. We also show consistency and asymptotic normality of our MCMC-based estimator under a certain class of risk models.
4.1 The set-up
We suppose the following situation: we have an explicit form of the joint loss density f_X, and thus we can compute the quantity f_X(x) for any x ∈ R^d. We can also generate i.i.d. samples X^{(1)}, ..., X^{(N)} from F_X up to sufficiently large N. Then, the i.i.d. samples S^{(1)}, ..., S^{(N)} from F_S can be generated by taking S^{(n)} = X_1^{(n)} + · · · + X_d^{(n)} for n = 1, 2, ..., N. On the other hand, suppose that we have neither an explicit form of the total loss density f_S nor a way to compute the quantity f_S(s) for any s ∈ R. The situation described above typically occurs when we model the joint loss density f_X by the copula approach, i.e., specifying the marginal loss densities f_1, f_2, ..., f_d and then specifying the copula density c. When we use this approach, the resulting joint loss density f_X is specified by the formula (2.3).
Under the situation above, a two-step procedure (see, e.g., Glasserman, 2005) is conducted
for computing the VaR contributions (2.8). The first step is to estimate VaRp (S) and the second
step is to estimate the VaR contributions AC= E[X|S = VaRp (S)] with the VaRp (S) replaced
by the estimate obtained in the first step. Estimation of VaR_p(S) in the first step is conducted by Monte Carlo simulation. Based on the i.i.d. samples S^{(1)}, ..., S^{(N)} from F_S, the estimator of VaR_p(S) is given by \widehat{VaR}_p(S) = S^{[Np]}, where S^{[Np]} is the Np-th largest sample among the N samples {S^{(1)}, ..., S^{(N)}}. Hereafter we refer to the estimate of \widehat{VaR}_p(S) as v for short, and regard it as a constant. In the second step, we aim at estimating AC = E[X|S = v]. According to the crude
Monte Carlo method as explained in Section 2, the VaR contributions are estimated by
\widehat{AC}^{MC}_{δ,N} = (1/M_{δ,N}) Σ_{n=1}^{N} X^{(n)} · 1[S^{(n)} ∈ A_δ],    (4.1)
where X^{(1)}, ..., X^{(N)} i.i.d. ∼ F_X, S^{(n)} := X_1^{(n)} + · · · + X_d^{(n)} for n = 1, ..., N, A_δ = [v − δ, v + δ] and M_{δ,N} = Σ_{n=1}^{N} 1[S^{(n)} ∈ A_δ].
As explained in Section 2, the problem of this two-step procedure is that the estimator of VaR
contributions in the second step often has a significant bias. To address this issue, we develop a Markov chain Monte Carlo (MCMC)-based estimator that achieves sample efficiency and consistency by utilizing the MCMC method. The major difference of our MCMC-based estimator from the MC one is that samples are generated not from the loss distribution F_X itself, but from the conditional distribution of X given the sum constraint {S = v}. Although this is almost impossible in the MC method, MCMC enables us to realize it.
4.2 The MCMC estimator
We begin describing the MCMC-based estimator by reformulating the problem of computing
VaR contributions. Let X 0 = (X1 , X2 , . . . , Xd0 ), where d0 = d − 1. By the full allocation property
(2.5), we have that
E[X | S = v] = ( E[X' | S = v], v − 1_{d'}^T · E[X' | S = v] )^T,    (4.2)
where S = X_1 + · · · + X_d. Therefore, computation of the VaR contributions AC = E[X|S = v] can be reduced to estimating the VaR contributions of the d'-subportfolio, given by AC' = E[X'|S = v]. In our method, we estimate this quantity AC' by generating samples directly from F_{X'|S=v}. The
conditional joint density of X' given {S = v} can be written as
f_{X'|S=v}(x') = f_{X',S}(x', v) / f_S(v) = f_X(x', v − 1_{d'}^T x') / f_S(v),   x' ∈ R^{d'},    (4.3)
where the second equation follows from the linear transformation (X', S) ↦ X. Since it is difficult to evaluate the total loss density f_S(v), we can hardly generate samples directly from f_{X'|S=v}.
By taking E = R^{d'}, h(x) = x and π(x) = f_{X'|S=v}(x) in the discussion of Section 3, our problem of estimating VaR contributions reduces to estimating π(h) = E[X'|S = v] in (3.2) by MCMC. Even though we can hardly compute f_{X'|S=v} itself, we can compute the acceptance probability (3.5), given by
α(x, y) = min[ f_{X'|S=v}(y) · q(y, x) / (f_{X'|S=v}(x) · q(x, y)), 1 ] = min[ f_X(y, v − 1_{d'}^T y) · q(y, x) / (f_X(x, v − 1_{d'}^T x) · q(x, y)), 1 ]    (4.4)
for all x, y. Note that the cumbersome term fS (v) disappears by taking the ratio of fX 0 |S=v (y) to
f_{X'|S=v}(x). Therefore, under an appropriate choice of the proposal function q, we can generate an N-sample path of the Markov chain whose stationary distribution is π(x) = f_{X'|S=v}(x). Based on the sample path of the Markov chain, we can construct the MCMC estimator π̂_N(h) defined by (3.3). Its consistency follows directly from the general theory of Markov chains presented in Section
3. The algorithm to compute the MCMC estimator of VaR contributions is summarized as follows:
Algorithm 2: MCMC estimator of VaR contributions E[X | S = VaR_p(S)]
1. Set v > 0, N > 0, q: proposal distribution, and X^{(1)} = x^{(1)} ∈ supp(π).
2. Perform Algorithm 1 for the given N, q, and x^{(1)} to generate an N-sample path (X'^{(1)}, ..., X'^{(N)}) of a Markov chain whose stationary distribution is f_{X'|S=v}.
3. Define
X^{(n)} := (X'^{(n)}, v − 1_{d'}^T X'^{(n)})^T,    (4.5)
and set
\widehat{AC}^{MCMC}_{q,N} = (1/N) Σ_{n=1}^{N} X^{(n)}    (4.6)
to estimate the VaR contributions AC = E[X|S = v].
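Putting the pieces together, here is a minimal Python sketch of Algorithm 2, reusing the `metropolis_hastings` sketch from Section 3. The only model input is `log_f_X`, the log joint loss density (2.3); the constant log f_S(v) cancels in (4.4) and never has to be evaluated. Names are ours.

import numpy as np

def mcmc_var_contributions(log_f_X, v, d, propose, log_q, N, x1=None):
    dprime = d - 1
    if x1 is None:
        x1 = np.full(dprime, v / d)               # a simple initial state on the chain's space

    def log_target(xp):                           # log f_{X'|S=v} up to the constant -log f_S(v)
        return log_f_X(np.append(xp, v - xp.sum()))

    path = metropolis_hastings(log_target, propose, log_q, x1, N)
    full = np.column_stack([path, v - path.sum(axis=1)])   # append X_d via (4.5)
    return full.mean(axis=0)                      # MCMC estimator (4.6)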
4.3 Some theoretical results
In this section, we provide some theoretical results to verify asymptotic normality of the MCMC
estimator (4.6). We will see that the CLT holds for various risk models when we model pure losses. On the other hand, when we model profits & losses, theoretical justification of the CLT is
challenging except for some special cases. We provide an example of justifying CLT only when we
model P&L by some elliptical distribution.
When we model pure losses, i.e. supp(F_X) = R^d_+, the conditional distribution F_{X'|S=v} is supported on the following bounded set, called the v-simplex:
S_v := {x ∈ R^{d'} : x ≥ 0, 0 ≤ x_1 + · · · + x_{d'} ≤ v}.    (4.7)
Thanks to the compactness of the support, CLT is verified under some mild conditions. The
following theorem is a direct consequence of well-known results in the theory of MCMC. However,
it clarifies conditions on the marginal loss densities and the copula densities which are easy to check in the framework of joint risk modeling with copulas.
Theorem 4.1. Suppose that the joint distribution F_X is supported on R^d_+, and has marginal densities f_1, f_2, ..., f_d and a copula density c. Then, a √N-CLT holds for the MCMC estimator (4.6) of VaR contributions if the following (C1)-(C3) hold:
(C1) ϵ := inf_{x,y ∈ [0,v]^{d'}} q(x, y) > 0,
(C2) f_j(x) is positive and bounded above on [0, v] for all j = 1, 2, ..., d,
(C3) c(u) is positive and bounded above on F_1([0, v]) × · · · × F_d([0, v]).
Proof. By Theorem 23 in Roberts and Rosenthal (2004), the √N-CLT holds if the Markov chain is uniformly ergodic and E[||X'||² | S = v] < ∞. The moment condition is satisfied since
E[X_i X_j | S = v] ≤ E[(X_1 + · · · + X_d)² | S = v] = v² < ∞    (4.8)
for any i, j = 1, 2, ..., d. Thus, it suffices to show that the Markov chain is uniformly ergodic. By Theorem 1.3 in Mengersen and Tweedie (1996), the Markov chain is uniformly ergodic if (and only
if) the minorization condition (see, e.g., Rosenthal, 1995) holds on the whole space Sv , i.e., there
exist a positive integer n, δ > 0 and a probability measure ν such that P n (x, A) > δ · ν(A) for all
x ∈ S_v, A ∈ B_v, where B_v := B(R^{d'}) ∩ S_v. By the conditions (C2), (C3), and the fact that S_v ⊂ [0, v]^{d'}, we have that
l := inf_{x ∈ S_v} π(x) > 0,   u := sup_{x ∈ S_v} π(x) < +∞.    (4.9)
Using (4.9) and the condition (C1), the minorization condition can be checked as follows. For any x ∈ S_v, define
Q_x := { y ∈ S_v : (π(y)/π(x)) · (q(y, x)/q(x, y)) < 1 }.    (4.10)
Then for all A ∈ B_v, we have
P(x, A) ≥ ∫_{A ∩ Q_x} q(x, y) · min[1, (π(y)/π(x)) · (q(y, x)/q(x, y))] dy + ∫_{A \ Q_x} q(x, y) · min[1, (π(y)/π(x)) · (q(y, x)/q(x, y))] dy
= ∫_{A ∩ Q_x} (π(y)/π(x)) · q(y, x) dy + ∫_{A \ Q_x} q(x, y) dy
≥ (ϵ/u) ∫_{A ∩ Q_x} π(y) dy + (ϵ/u) ∫_{A \ Q_x} π(y) dy
= (ϵ/u) π(A).
Taking n = 1, δ = ϵ/u and ν = π completes the proof.
We will see typical situations where the conditions (C1)-(C3) hold. Firstly, the condition (C1) holds when, for example, q(x, y) = f(y − x) and f is continuous and positive on [−v, v]^{d'}. These conditions are satisfied for typical choices of f, such as the Gaussian density or Student's t density. Secondly, the condition (C2) also holds when, for example, f_1, f_2, ..., f_d follow Pareto distributions with the density given by
f_j(x) = κ_j γ_j^{κ_j} / (x + γ_j)^{κ_j + 1},   κ_j > 0, γ_j > 0,   for j = 1, 2, ..., d.    (4.11)
Lastly, the condition (C3) can easily be checked for various copulas used in practice. For the use
in Section 4, we especially focus on the following two examples:
Example 4.1 (t copula). The d-dimensional t copula density is of the form
c_t(u) = [ Γ((ν+d)/2) Γ(ν/2)^{d−1} / ( |R|^{1/2} Γ((ν+1)/2)^{d} ) ] · ( 1 + x^T R^{−1} x/ν )^{−(ν+d)/2} / Π_{j=1}^{d} ( 1 + x_j²/ν )^{−(ν+1)/2},   ν > 0,    (4.12)
where R ∈ M^{d×d}_+([0, 1]) is a correlation matrix, x = (t_ν^{−1}(u_1), ..., t_ν^{−1}(u_d)) and t_ν is the cdf of the Student's t distribution with degree of freedom ν. It is clear that (4.12) satisfies the condition (C3) if F_j(0) > 0 for j = 1, 2, 3.
Example 4.2 (π-rotated Clayton copula). The d-dimensional π-rotated Clayton copula has a density of the form (Yoshiba, 2013)
c^{Clayton}_π(u) = θ^d · ( Γ(1/θ + d) / Γ(1/θ) ) · Π_{j=1}^{d} (1 − u_j)^{−θ−1} · ( Σ_{j=1}^{d} (1 − u_j)^{−θ} − d + 1 )^{−1/θ−d},   0 < θ < ∞.    (4.13)
Since the Clayton copula has lower tail dependence, the π-rotated Clayton copula has upper tail dependence. We will see that the π-rotated Clayton copula density (4.13) satisfies the condition
(C3) under some parameter constraint. Let lj := 1 − Fj (v) ∈ (0, 1). Then, for any uj ∈ Fj ([0, v]),
it holds that 0 < lj ≤ 1 − uj ≤ 1 − Fj (0) = 1 for all j = 1, 2, . . . , d. Thus, we have
1 ≤ Π_{j=1}^{d} (1 − u_j)^{−θ−1} ≤ Π_{j=1}^{d} l_j^{−θ−1} < ∞    (4.14)
and also
( Σ_{j=1}^{d} l_j^{−θ} − d + 1 )^{−1/θ−d} ≤ ( Σ_{j=1}^{d} (1 − u_j)^{−θ} − d + 1 )^{−1/θ−d} ≤ 1.    (4.15)
Since v is the p-th quantile of FS , we have lj = 1 − Fj (v) ≤ 1 − p. Using this inequality,
the lower bound of (4.15) is positive if θ < log(1 − p)/log(1 − 1/d). Putting (4.14) and (4.15) together, the π-rotated Clayton copula density (4.13) satisfies the condition (C3) when 0 < θ < log(1 − p)/log(1 − 1/d).
In contrast to the pure-loss case, theoretical verification of CLT is challenging in the case when
we model P&L, i.e. supp(F_X) = R^d. Since the conditional distribution F_{X'|S=v} is supported on the unbounded space R^{d'}, we need a careful investigation of the tail behavior of the conditional
density fX 0 |S=v . When the proposal distribution is MpCN, we can justify the CLT of our MCMC
estimator for a certain class of target distributions based on the result in Kamatani (2016). As an example, we consider the case where the underlying loss vector X follows a multivariate Student's t distribution.
Example 4.3 (Multivariate Student's t distribution). Let X ∼ t_ν(μ, Σ), where ν > 2, μ ∈ R^d and Σ ∈ M^{d×d}_+. The density of the multivariate Student's t distribution is given by
f_X(x) = [ Γ((ν+d)/2) / ( |Σ|^{1/2} Γ(ν/2) √((πν)^d) ) ] · ( 1 + (x − μ)^T Σ^{−1} (x − μ)/ν )^{−(ν+d)/2}.    (4.16)
Its conditional density f_{X'|S=v} can be specified as follows. Firstly, let μ = 0 for simplicity. Write
Σ^{−1} = ( A_1, A_2; A_2^T, A_3 ) =: A,    (4.17)
for A_1 ∈ M^{d'×d'}(R), A_2 ∈ R^{d'} and A_3 ∈ R. Then, it holds that
( x; v − 1_{d'}^T x )^T A ( x; v − 1_{d'}^T x ) = (x − w)^T V (x − w) + D,    (4.18)
where V := A_1 − A_2 1_{d'}^T − 1_{d'} A_2^T + A_3 1_{d'} 1_{d'}^T ∈ M^{d'×d'}_+, w := V^{−1}(v A_3 1_{d'} − v A_2) ∈ R^{d'}, and D := v² A_3 − w^T V w ∈ R. Using (4.18), we have that
f_{X'|S=v}(x) ∝ f_X(x, v − 1_{d'}^T x) ∝ ( 1 + ((x − w)^T V (x − w) + D)/ν )^{−(ν+d)/2} ∝ ( 1 + (x − w)^T V (x − w)/(ν + D) )^{−(ν+d)/2}.    (4.19)
Provided ν + D > 0, X'|S = v follows a d'-dimensional elliptical distribution with location parameter w, scale parameter V^{−1} and density generator g : R_+ → R_+ given by
g(x) = ( 1 + x/(ν + D) )^{−(ν+d)/2}.    (4.20)
This type of distribution is called the Pearson type VII distribution (see e.g., Schmidt, 2002).
Consider the MCMC estimator (4.6) where π = f_{X'|S=v} and the proposal distribution q is MpCN (3.16). By Theorem 25 in Roberts and Rosenthal (2004), a √N-CLT holds if the Markov chain is geometrically ergodic and E[||X'||² | S = v] < ∞. By Proposition 3.4 in Kamatani (2016), the Markov chain with MpCN proposal distribution is geometrically ergodic if E[||X'||^δ | S = v] < ∞ for some δ > 0, π(x) is strictly positive and continuous, and moreover, it is symmetrically regularly varying, i.e.,
lim_{r→+∞} π(rx)/π(r 1_{d'}) = λ(x),    (4.21)
for some λ : R^{d'} → (0, ∞) such that λ(x) = 1 for any x ∈ S_W^{d'−1}, where S_W^{d'−1} := {x ∈ R^{d'} : ||W^{−1/2} x|| = ||W^{−1/2} 1_{d'}||} and W ∈ M^{d'×d'}_+. We will first see that all the moment conditions hold, and then that the tail condition (4.21) is also satisfied for π = f_{X'|S=v}.
Write R := ||X'||. It can be shown that g is regularly varying (see, e.g., Resnick, 2013) at ∞ with index α = −(ν+d)/2, i.e.,
lim_{r→∞} g(rx)/g(r) = x^{−(ν+d)/2},   x > 0.    (4.22)
By Proposition 3.7 in Schmidt (2002), f_{R|S=v} is regularly varying with index −(ν + 1). Then, by Karamata's Theorem (see, e.g., Resnick, 2013, Theorem 0.6), F_{R|S=v} is regularly varying with index −ν. Therefore, E[R^δ | S = v] < ∞ holds for all δ < ν (see, e.g., Mikosch, 1999). Thus all the moment conditions above are satisfied so long as ν > 2. In the elliptical case, the tail condition (4.21) is a direct consequence of (4.22). Since (x − w)^T V (x − w) > 0 for all x ∈ R^{d'}, it holds that
lim_{r→+∞} f_{X'|S=v}(rx)/f_{X'|S=v}(r 1_{d'}) = ( ||V^{1/2} x|| / ||V^{1/2} 1_{d'}|| )^{−(ν+d)},   x ∈ R^{d'}.    (4.23)
Thus, taking
λ(x) := ( ||V^{1/2} x|| / ||V^{1/2} 1_{d'}|| )^{−(ν+d)},   W = V^{−1}    (4.24)
in (4.21) yields that π = f_{X'|S=v} is symmetrically regularly varying. Putting these together, we find that the MCMC estimator with MpCN proposal distribution satisfies the √N-CLT when the underlying loss vector follows a multivariate Student's t distribution with ν > 2 and D > −ν.
5 Simulation and numerical studies
We have shown that the MCMC estimator has various attractive properties for the estimation of
VaR contributions. In this section, we apply the proposed estimator to various risk models used
in practice. Our numerical experiment shows that the MCMC estimator has smaller bias and
Mean Squared Error (MSE) compared with the existing estimators that have been proposed in the
previous studies. Then, based on some numerical results, we study how to choose an appropriate proposal distribution for a given risk model.
5.1 Numerical study of the MCMC estimator
In the simulation study, we consider four risk models which are modeled with marginal densities and
copula densities. As is often done in risk management, we adopt heavy-tailed marginal distributions
and copulas with tail dependence. In all risk models, we set the size of the portfolio d = 3. The
models are described as follows:
(1) The loss random variable Xj follows Pareto distribution (4.11) with κj = 4 and γj = 3 for all
j = 1, 2, 3. The loss vector (X1 , X2 , X3 ) has a π-rotated Clayton copula (4.13) with θ = 0.5.
(2) Xj , (j = 1, 2, 3) have the same marginals as in the case (1). However, their copula is a
student’s t copula (4.12) with ν = 4 and the correlation matrix given by
      (  1     −0.5   0.3 )
P =   ( −0.5    1     0.5 ).    (5.1)
      (  0.3    0.5    1  )
(3) Xj follows student’s t distribution with νj = 4, µj = 0 and σj = 1 for all j = 1, 2, 3.
(X1 , X2 , X3 ) has a π-rotated Clayton copula with θ = 0.5.
(4) The loss random vector (X1 , X2 , X3 ) follows a multivariate student’s t distribution (4.16)
with ν = 4, µ = 0, and Σ given by (5.1).
The former two models (1) and (2) treat pure losses, and the latter two models (3) and (4) handle
P&L. In all models, marginal distributions have variance 2.0. Moreover, they are heavy-tailed
with tail index 5.0. The models (1) and (3) possess homogeneous upper tail dependence with tail coefficient λ^U_{i,j} = 0.025 (see, e.g., Joe, 2014). The models (2) and (4) have symmetric upper, lower, and upper-lower tail dependences with tail coefficients λ^U_{1,2} = λ^L_{1,2} = 0.012, λ^U_{1,3} = λ^L_{1,3} = 0.162, λ^U_{2,3} = λ^L_{2,3} = 0.253, λ^{UL}_{1,2} = λ^{LU}_{1,2} = 0.253, λ^{UL}_{1,3} = λ^{LU}_{1,3} = 0.029, and λ^{UL}_{2,3} = λ^{LU}_{2,3} = 0.012.
For each risk model, we compute several estimators of VaR contributions AC = E[X|S =
VaRp (S)] for p = 0.999, with the VaRp (S) replaced by its Monte Carlo estimator v = S [N p] .
In this setting, the condition (C3) in Theorem 4.1 holds in the risk models (1) and (2) since
log(1 − p)/ log(1 − 1/d) = 17.037 > 0.5 = θ. Also, as shown in Example 4.3, the risk model
(4) with MpCN proposal distribution satisfies CLT since D + ν = 137.935 > 0. The estimators
that we study are the Monte Carlo (MC) estimator (2.11), Nadaraya-Watson (NW) estimator
(2.13) (Tasche, 2009), Generalized regression (GR) estimator (2.15) (Tasche and Tibiletti, 2004;
Hallerbach, 2003), and the MCMC estimator (4.6) proposed in this paper:
\widehat{AC}^{MC}_{δ,N} = (1/M_{δ,N}) Σ_{n=1}^{N} X^{(n)} · 1[S^{(n)} ∈ A_δ],
\widehat{AC}^{NW}_{φ,h,N} = [ Σ_{n=1}^{N} X^{(n)} · φ((S^{(n)} − v)/h) ] / [ Σ_{n=1}^{N} φ((S^{(n)} − v)/h) ],
\widehat{AC}^{GR}_{g_β,N} = g_{β̂_N}(v),
\widehat{AC}^{MCMC}_{q,N} = (1/N) Σ_{n=1}^{N} X^{(n)}_{|S=v},
where X^{(n)} i.i.d. ∼ F_X, S^{(n)} := X_1^{(n)} + · · · + X_d^{(n)} ∼ F_S, and {X^{(n)}_{|S=v}}_{n=1,...,N} is an N-sample path of a Markov chain with stationary distribution F_{X|S=v}.
For all estimators, we fix the sample size N = 10^6. Other parameters of the estimators above are determined as follows. First, for the MC estimator, we set δ > 0 such that the MC sample size M_{δ,N} is around 10^3. As in Glasserman (2003), the asymptotic normality
√(M_{δ,N}) · ( \widehat{AC}^{MC}_{δ,N} − AC_δ ) → N_d(0, Σ_δ),   as N → +∞,    (5.2)
holds for a fixed δ. Using this result, we report the estimate of \widehat{AC}^{MC}_{δ,N} and its approximated standard error s_{MC}^{(j,j)}/√(M_{δ,N}) for j = 1, 2, 3, where s_{MC} is the sample standard error defined by
s_{MC} = √( (1/M_{δ,N}) Σ_{n=1}^{N} (X^{(n)} − \widehat{AC}^{MC}_{δ,N})^T (X^{(n)} − \widehat{AC}^{MC}_{δ,N}) · 1[S^{(n)} ∈ A_δ] ).    (5.3)
Second, for the NW estimator, we choose the kernel density φ to be the standard normal density. We determine the bandwidth h > 0 according to Silverman's rule of thumb h = 1.06 · σ̂_S · N^{−1/5} (Silverman, 1986). Although asymptotic normality holds for the NW estimator, its asymptotic variance can hardly be computed because it requires the calculation of f_S(v). Therefore, we report only the estimate of \widehat{AC}^{NW}_{φ,h,N}. Third, for the GR estimator, we choose g_β(s) = β_0 + β_1 · s and its coefficients are estimated by
β̂_{N,1} = Σ_{n=1}^{N} (X^{(n)} − X̄_N)(S^{(n)} − S̄_N) / Σ_{n=1}^{N} (S^{(n)} − S̄_N)²,   β̂_{N,0} = X̄_N − β̂_{N,1} · S̄_N,    (5.4)
where X̄_N = (1/N) Σ_{n=1}^{N} X^{(n)} and S̄_N = (1/N) Σ_{n=1}^{N} S^{(n)}. According to the general theory of OLS, it holds under some regularity conditions that
√N · ( (β̂_{N,0}^{(j)}, β̂_{N,1}^{(j)})^T − (β_0^{(j)}, β_1^{(j)})^T ) → N_2(0_2, σ²_{ε_j} Q^{−1}),   as N → +∞,    (5.5)
for j = 1, 2, 3, where β̂_{N,k}^{(j)} and β_k^{(j)} are the j-th components of β̂_{N,k} and β_k for k = 0 and 1, respectively, ε_j is the j-th component of the error term ε, σ²_{ε_j} is its conditional variance given S^{(1)}, ..., S^{(N)}, and
Q := lim_{N→∞} Y^T Y/N,   where   Y = ( 1, ..., 1; S^{(1)}, ..., S^{(N)} )^T.    (5.6)
According to this result, we report the estimate of \widehat{AC}^{GR}_{g_β,N} and its approximated standard error s_{GR}^{(j)}/√N for j = 1, 2, 3, where
s_{GR}^{(j)} := √( Σ̂_{GR,j}^{(1,1)} + 2v · Σ̂_{GR,j}^{(1,2)} + v² · Σ̂_{GR,j}^{(2,2)} ),    (5.7)
Σ̂_{GR,j} = σ̂²_{ε_j} · (Y^T Y/N)^{−1} and σ̂_{ε_j} is the sample standard error of the j-th residuals. Finally, for the MCMC estimator, we choose different proposal distributions depending on the risk models (1)-(4): (1) the random walk proposal q(x, y) = f(y − x) with f ∼ N_d(0, Σ̂_v), where Σ̂_v := s²_{MC} for s_{MC} defined in (5.3); (2) the independent proposal q(x, y) = f(y), where f is the density of the Dirichlet distribution with parameters (0.2, 0.28, 0.6); (3) and (4) the MpCN proposal with ρ = 0.8, μ = ( \widehat{AC}^{GR}_{g_β,N'}, v − 1_{d'}^T \widehat{AC}^{GR}_{g_β,N'} )^T and Σ := s²_{MC}. In Algorithm 2, we set the initial state x^{(1)} = (v/3, v/3, v/3)^T. We suppose that asymptotic normality (3.8) holds for all risk models. Also, its asymptotic variance is estimated by the batch means estimator Σ̂_N defined by (3.10). Following the recommendation of Jones et al. (2006), we choose L_N := ⌊N^{1/2}⌋ = 10^3 and B_N := ⌊N/L_N⌋ = ⌊N^{1/2}⌋ = 10^3. We report the estimate of \widehat{AC}^{MCMC}_{q,N} and its approximated standard error (Σ̂_N^{(j,j)})^{1/2}/√N for j = 1, 2, 3.
As mentioned in Section 3.2, the validity of the MCMC method can empirically be checked by
autocorrelation plots and the acceptance rate. Fig. 1 shows the autocorrelation plots of the Markov chains generated by Algorithm 2.

[Figure 1: Autocorrelation plots (ACF versus lag, for the chains of X1|S = v, X2|S = v and X3|S = v) of the Markov chains generated by Algorithm 2: (a) Pareto + π-rotated Clayton, (b) Pareto + t copula, (c) Student's t + π-rotated Clayton, and (d) Student's t + t copula. The dotted line represents y = 0.1.]

The acceptance rate of the MH algorithm in each risk model was (1) 0.556, (2) 0.186, (3) 0.643, and (4) 0.757. In Fig. 1, we can observe that the autocorrelation plots steadily decline below 0.1 by lag 30 for all risk models. Together with the moderate acceptance rates, we can confirm that the choice of the proposal distributions above is appropriate for all risk models.
Before showing the estimation results, we check the shapes of the conditional distributions
F_{X'|S=v} by plotting the Markov chains generated by Algorithm 2. Fig. 2 shows the contour plots of the generated Markov chains.

[Figure 2: Contour plots of the Markov chains generated by Algorithm 2: (a) Pareto + π-rotated Clayton, (b) Pareto + t copula, (c) Student's t + π-rotated Clayton, and (d) Student's t + t copula. When drawing the contour plots, we used subsamples, picked every 100-th point of the original Markov chains, to eliminate the dependence among samples. We plot only the first and second variables of the 3-dimensional Markov chains since the last variable is determined by the sum constraint. The gray diagonal lines represent y = −x + v, where v is the estimate of VaR_p(S). The contour plots approximately express the conditional distributions F_{X'|S=v}.]

According to these plots, the features of the conditional
distributions F_{X'|S=v} are summarized as follows:
(1) Pareto + π-rotated Clayton: the contour plot Fig. 2 (a) shows that F_{X'|S=v} has a unique mode at the center of the simplex. Most of the probability mass of F_{X'|S=v} is concentrated around the mode, and the probability decreases when moving away from the center. Also, the contour plot is almost symmetric about the diagonal line y = x.
(2) Pareto + t copula: Unlike the case (1), F_{X'|S=v} possesses two distinct modes near the x- and y-axes. Especially near the y-axis, there is a peak around which the probability changes significantly. Also, the contour plot Fig. 2 (b) is asymmetric about the diagonal line y = x.
(3) Student's t + π-rotated Clayton: Although X'|S = v can take negative values, it is supported almost entirely on the bounded simplex as in the case (1). The contour plot Fig. 2 (c) is unimodal and almost symmetric around y = x. Also, the tail of F_{X'|S=v} is apparently light.
(4) Student's t + t copula: As discussed in Section 4.3, the conditional distribution F_{X'|S=v} in this case follows the Pearson type VII distribution given by (4.19). From the contour plot Fig. 2 (d), we can observe some properties of this type of distribution, such as elliptical symmetry and tail-heaviness. Unlike the case (3), X'|S = v takes large negative values beyond the bounded simplex.
The results of our estimation are summarized in Table 1. For the four different risk models (1)-(4), we report the estimates, their approximated standard errors, biases, and root MSEs (RMSEs) of the four different estimators MC, NW, GR, and MCMC.
In the first risk model, where the marginals follow Pareto distributions coupled by a π-rotated Clayton copula, the true VaR contributions can be computed explicitly. The true value is obtained by
allocating the total VaR homogeneously because the marginal distributions are homogeneous and
the copula is exchangeable. In this case, we observe that the MC and the NW estimators have relatively large biases. In contrast, the GR and the MCMC estimators have smaller biases compared
with the first two. The GR estimator still has some bias caused by the model-misspecification,
although its standard error is quite small. From the viewpoint of MSE, the MCMC estimator outperforms all the other estimators.
The second risk model has the same marginal distributions as in the case (1), but their copula is a t copula having upper-lower tail dependence. Even though the true VaR contributions are not available in this case, we can construct a 95% confidence interval for them based on the MCMC estimator by using (3.12). Based on this interval of the true value, we computed the range of the bias
and RMSE for other estimators. The reported ranges of bias and RMSE in Table 1 (2) show
that the MCMC estimator significantly improves the performance of other estimators. Under the
risk model (2), all estimators suffer from bias caused by asymmetry and multi-modality of the
conditional distribution FX 0 |S=v as seen in Fig. 2 (b). Especially, in contrast with the good
performance of the GR in the case (1), the GR estimator has relatively large bias and RMSE in
this case.
The third risk model examines the case where the marginal distributions are student’s t distribution and their copula is π-rotated Clayton. The true VaR contributions can be computed
similarly to the case (1). Thanks to the symmetry and uni-modality of the conditional distribution F_{X'|S=v}, all estimators achieve small bias and RMSE. Together with the results in the cases (1)
and (2), it can be found that the GR estimator performs well so long as FX 0 |S=v does not have
irregular shape, such as asymmetry and multi-modality. Also, the MCMC estimator reduces bias
and RMSE compared with the MC and the NW estimators.
The final risk model illustrates the case where the underlying portfolio loss vector follows the multivariate Student's t distribution. In this case, the true VaR contributions are available by the formula (2.17). In such an elliptical case, the GR estimator provides a quite accurate estimate.
Although the conditional distribution FX 0 |S=v is heavy-tailed as seen in Fig. 2 (d), the MCMC
estimator keeps high performance compared with the MC and the NW estimators. Its bias is
significantly improved compared with the MC and the NW. Additionally, its standard error and
RMSE are also lower than those of the MC estimator.
Throughout the numerical study, the MCMC estimator provided small bias and RMSE regardless of the shape of the conditional distribution F_{X'|S=v}. When F_{X'|S=v} is unimodal and symmetric, the GR estimator also had good performance. On the other hand, at least in our numerical study, the MC and the NW estimators had larger bias and RMSE compared with the other estimators.
Table 1: Performance of the four different estimators of VaR contributions under four different risk models†.

(1) Pareto + π-rotated Clayton: True AC = (11.265, 11.265, 11.265)
       Estimate of AC (Bias):                                              Standard error (√MSE):
       MC               NW               GR               MCMC             MC             GR             MCMC
AC1    11.298 (0.033)   11.550 (0.2859)  11.297 (0.033)   11.258 (-0.007)  0.166 (0.169)  0.008 (0.033)  0.016 (0.017)
AC2    10.796 (-0.469)  11.481 (0.216)   11.204 (-0.061)  11.263 (-0.001)  0.164 (0.497)  0.008 (0.061)  0.017 (0.017)
AC3    10.631 (-0.634)  10.762 (-0.503)  11.293 (0.028)   11.273 (0.008)   0.167 (0.655)  0.008 (0.029)  0.016 (0.018)

(2) Pareto + t copula: True AC is not available‡
       MC                       NW                       GR                       MCMC                    MC                    GR                    MCMC
AC1    7.475 (0.039, 0.234)     6.907 (-0.529, -0.333)   7.772 (0.336, 0.532)     7.338 (-0.098, 0.098)   0.247 (0.250, 0.694)  0.010 (0.034, 0.532)  0.050 (0.050, 0.110)
AC2    8.156 (-0.649, -0.526)   8.878 (0.072, 0.196)     8.715 (-0.090, 0.033)    8.744 (-0.062, 0.062)   0.226 (0.229, 0.687)  0.010 (0.010, 0.532)  0.031 (0.031, 0.103)
AC3    11.803 (-0.371, -0.255)  12.415 (0.241, 0.357)    11.710 (-0.463, -0.347)  12.115 (-0.058, 0.058)  0.142 (0.147, 0.664)  0.006 (0.033, 0.532)  0.030 (0.030, 0.102)

(3) Student's t + π-rotated Clayton: True AC = (5.917, 5.917, 5.917)
       MC               NW               GR               MCMC             MC             GR             MCMC
AC1    5.919 (0.002)    6.046 (0.129)    5.921 (0.003)    5.915 (-0.002)   0.077 (0.077)  0.005 (0.006)  0.016 (0.016)
AC2    5.710 (-0.208)   6.009 (0.092)    5.918 (0.001)    5.940 (0.023)    0.077 (0.221)  0.005 (0.006)  0.016 (0.028)
AC3    5.638 (-0.279)   5.702 (-0.215)   5.913 (-0.004)   5.897 (-0.021)   0.078 (0.290)  0.005 (0.007)  0.014 (0.025)

(4) Student's t + t copula: True AC = (2.972, 3.715, 6.686)
       MC               NW               GR               MCMC             MC             GR             MCMC
AC1    3.028 (0.056)    3.559 (0.587)    2.989 (0.017)    3.004 (0.032)    0.125 (0.136)  0.007 (0.019)  0.034 (0.046)
AC2    3.482 (-0.233)   3.053 (-0.662)   3.701 (-0.013)   3.687 (-0.028)   0.112 (0.259)  0.006 (0.015)  0.033 (0.044)
AC3    6.526 (-0.161)   6.736 (0.050)    6.682 (-0.004)   6.683 (-0.004)   0.046 (0.167)  0.002 (0.005)  0.011 (0.011)

† The estimates are computed for the Monte Carlo (MC), the Nadaraya-Watson (NW), the generalized regression (GR), and the Markov chain Monte Carlo (MCMC) estimators. The standard error is computed except for the NW estimator. The bias and RMSE are also computed for the cases (1), (3), and (4), where true VaR contributions are available. The sample size is N = 10^6. The proposal distributions of the MCMC estimators are (1) Random Walk, (2) Independent, (3) MpCN, and (4) MpCN.
‡ In the case (2), where true VaR contributions are not available, the ranges of bias and RMSE are computed based on the 95% confidence interval of the true VaR contributions derived from the MCMC estimator via (3.12). Based on the confidence interval AC_L < AC < AC_U and the estimate ÂC_N, we compute the range of the bias as (ÂC_N − AC_U, ÂC_N − AC_L). The range of RMSE is computed in the same way.

At the end of this section, we summarize the advantages of the MCMC estimator compared with the other estimators. First, the MCMC estimator is consistent, whereas the other estimators are not always so. As explained in Section 2, the MC, the NW, and the GR estimators have biases which cannot be evaluated easily. In Table 1, we observed that these estimators have some biases even when their standard errors are sufficiently small. In contrast, the MCMC estimator
provides an accurate estimate of the VaR contributions due to its consistency. Together with the CLT, we can also construct a confidence interval for the true VaR contributions. Second, the MCMC estimator has great sample efficiency compared with the MC estimator. Since the MCMC estimator generates samples not from F_X but from F_{X'|S=v}, no samples have to be discarded. In addition, unlike the MC, there is no trade-off between the bias and the sample efficiency. Thanks to these properties, the MCMC estimator can maintain a low standard error without increasing the bias. Finally, the MCMC estimator can maintain high performance even when the conditional distribution F_{X'|S=v} has an irregular shape, such as multi-modality and heavy tails. Comparing the results in the case (2) with the others in Table 1, the performance of the GR estimator depends strongly on the shape of F_{X'|S=v}. For the existing estimators, it is generally difficult to reflect the shape of F_{X'|S=v} in their hyper-parameters. On the other hand, for the MCMC estimator, we can directly capture the shape of F_{X'|S=v} through the proposal distribution q. By choosing an appropriate proposal distribution q, the MCMC estimator performs stably well regardless of the shape of F_{X'|S=v}.
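To make this concrete, the following minimal sketch illustrates the kind of estimator discussed above: a random-walk Metropolis chain that moves on the constraint set {x : Σ x_i = v} and estimates the VaR contributions as sample means. The trivariate Student-t target, the level v, and all tuning values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_logpdf(x, nu=4.0):
    # Log-density, up to an additive constant, of a standard
    # multivariate Student-t vector with nu degrees of freedom.
    return -0.5 * (nu + len(x)) * np.log1p(x @ x / nu)

def mcmc_alloc(v, d=3, n=100_000, step=0.5, nu=4.0):
    # Random-walk Metropolis on the hyperplane {x : sum(x) = v}:
    # move the first d-1 coordinates freely and set the last one to
    # v - sum(rest), so the chain never leaves the constraint set.
    y = np.full(d - 1, v / d)
    lp = t_logpdf(np.append(y, v - y.sum()), nu)
    chain = np.empty((n, d))
    accepted = 0
    for k in range(n):
        cand = y + step * rng.standard_normal(d - 1)
        lp_cand = t_logpdf(np.append(cand, v - cand.sum()), nu)
        if np.log(rng.random()) < lp_cand - lp:
            y, lp, accepted = cand, lp_cand, accepted + 1
        chain[k] = np.append(y, v - y.sum())
    return chain, accepted / n

samples, rate = mcmc_alloc(v=10.0)
print("acceptance rate:", rate)
print("estimated VaR contributions:", samples.mean(axis=0))
```

Because every state of the chain satisfies the sum constraint exactly, no draws are wasted, which is precisely the sample-efficiency advantage noted above.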
5.2 Empirical study of the proposal distribution
In Section 5.1, we have shown that the MCMC estimator improves the performance of estimating VaR contributions. However, the performance of the MCMC estimator depends strongly on an appropriate choice of the proposal distribution q. Additionally, the choice of the proposal distribution is less apparent compared with that of the hyper-parameters for the other estimators. In this section, we first investigate the symptoms caused by a bad choice of q. Then, we consider how to overcome this problem based on the numerical results.
A bad choice of q falls largely into two cases. In the first, the proposal distribution q often generates candidates at which the probability measured by π is quite small. This case occurs, for example, when the variance of q is much larger than that of π, or when q does not capture the irregular shape of π, as appeared in Fig. 2 (b). In such cases, the Markov chain moves quite slowly, yielding a high asymptotic standard error of the MCMC estimator. This symptom appears as a quite low acceptance rate and high autocorrelation. In the second, the proposal distribution q covers only part of the support of the target distribution π. This case happens, for example, when π has distinct local modes and the variance of q is so small that the chain cannot pass between them. In this case, the estimate has a significant bias, although the acceptance rate and the autocorrelation plots are seemingly fine. This symptom appears as a distorted plot of the MCMC samples whose shape is clearly different from the target distribution π.
How can we detect and avoid such fallacious estimates? First of all, as mentioned in Section 3.2, it is indispensable to check the acceptance rate and the autocorrelation plots. Additionally, it is also important to plot the generated Markov chain and compare it with the plot of the MC samples whose sums belong to A_δ = [v − δ, v + δ]. Since such MC samples follow the distribution F_{X'|S∈A_δ}, we can recognize a distortion of the generated Markov chain by comparing the two plots of the samples. In the case of our numerical study, Fig. 3 shows the scatter plots of the MC samples whose sums belong to [v − δ, v + δ], overlaid on the scatter plots of the MCMC samples. We can check in Fig. 3 that the shape of the scatter plots of the MCMC samples bears a striking resemblance to that of the MC samples for all risk models.
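As a sketch of this diagnostic, the snippet below computes the empirical autocorrelation of the chain and builds the MC reference cloud by rejection: draw from F_X and keep the draws whose sum falls in A_δ. It reuses the `samples` array from the earlier sketch; the multivariate-t generator, v, and δ are again illustrative assumptions.

```python
import numpy as np

def autocorr(x, max_lag=50):
    # Empirical autocorrelation of a 1-d chain up to max_lag.
    x = x - x.mean()
    c0 = x @ x / len(x)
    return np.array([(x[:-k] @ x[k:]) / ((len(x) - k) * c0)
                     for k in range(1, max_lag + 1)])

rho = autocorr(samples[:, 0])          # chain from the sketch above
print("lag-1 autocorrelation:", rho[0])

# MC reference cloud: multivariate-t draws whose sum lies in A_delta.
rng = np.random.default_rng(1)
v, delta, nu, d, m = 10.0, 1.7, 4.0, 3, 500_000
z = rng.standard_normal((m, d))
w = rng.chisquare(nu, size=m) / nu
x = z / np.sqrt(w)[:, None]            # rows are multivariate-t draws
mc_cloud = x[np.abs(x.sum(axis=1) - v) <= delta]
print("MC retention rate:", len(mc_cloud) / m)
# Overlaying mc_cloud and samples[::100] in the (X1, X2)-plane should
# give clouds of the same shape, as in Fig. 3.
```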
Figure 3: Scatter plots of the MC (•) and the MCMC (◦) samples for the different risk models: (a) Pareto + π-rotated Clayton, (b) Pareto + t copula, (c) Student's t + π-rotated Clayton, (d) Student's t + t copula. We plot the MC samples generated from F_X such that their sums belong to A_δ = [v − δ, v + δ]. In the four risk models, the values of δ are (1) 4.8, (2) 3.9, (3) 2.2, and (4) 1.7. When drawing the scatter plots of the MCMC samples, we used subsamples, picked every 100th point of the original Markov chains. We plot only the first and second variables of the 3-dimensional samples since the last variable is determined by the sum constraint. The plots of the MC and the MCMC samples express the distributions F_{X'|S∈A_δ} and F_{X'|S=v}, respectively. [The four scatter panels, plotted on the (X1, X2)-plane, are omitted here.]
Finally, we found that the dependence information of the underlying risk model is quite helpful for the selection of q. When the copula C of the underlying risk model represents only upper
and/or lower tail dependence, then the target distribution π is likely to be tractable. For example, in the cases of risk models (1) and (3), where the copula C has an upper tail dependence, the contour plots Fig. 2 (a) and (c) show that the target distribution π is unimodal and light-tailed. These features facilitate the estimation with MCMC, since simple proposal distributions, such as the Random Walk proposal (3.14), perform well. Conversely, when the copula C has upper-lower tail dependence, the target π tends to be complicated. In the cases (2) and (4), where the copula C has upper-lower tail dependence, Fig. 2 (b) indicates that π is bimodal, and the contour plot Fig. 2 (d) shows that π is heavy-tailed. In such cases, careful selection of the proposal function q is required for good performance of the MCMC estimator. For instance, to capture the bi-modality appearing in the contour plot Fig. 2 (b), one good choice of q is an independent proposal distribution (3.15) with f the Dirichlet distribution on the v-simplex S_v. Since the Dirichlet distribution can possess two distinct modes around the edges of the simplex, we can generate candidates that often visit the two separated modes of π. In the case (4), to overcome the problem of the heavy-tailedness of π seen in the contour plot Fig. 2 (d), it is necessary to adopt a specialized MCMC for heavy-tailed target distributions, such as MpCN (3.16).
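A minimal sketch of the independence proposal mentioned above is given below. The function and parameter names are our own; the target log-density logpi, the level v, and the Dirichlet parameters are assumptions the user must supply, and the sketch applies only to positive losses, since the Dirichlet proposal lives on the positive part of the v-simplex.

```python
import numpy as np

rng = np.random.default_rng(2)

def mh_dirichlet(logpi, v, alpha, n=50_000):
    # Independence Metropolis-Hastings on the v-simplex.
    # Candidates are v * Dirichlet(alpha); with alpha_i < 1 the
    # proposal itself puts mass near the edges of the simplex,
    # which helps the chain hop between separated modes.
    alpha = np.asarray(alpha, dtype=float)
    def logq(x):                 # proposal log-density, up to a constant
        return np.sum((alpha - 1.0) * np.log(x / v))
    x = np.full(len(alpha), v / len(alpha))
    lw = logpi(x) - logq(x)      # log importance weight pi/q
    chain = np.empty((n, len(alpha)))
    for k in range(n):
        y = v * rng.dirichlet(alpha)
        lw_y = logpi(y) - logq(y)
        if np.log(rng.random()) < lw_y - lw:   # accept w.p. min(1, w(y)/w(x))
            x, lw = y, lw_y
        chain[k] = x
    return chain
```

For a bimodal target such as Fig. 2 (b), choosing alpha with entries below 1 concentrates the proposals near the axes, where the two modes sit.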
6 Concluding remarks
Computing VaR contributions for a continuous risk model is a difficult task in general. There is no standard method, since the existing estimators have their own deficiencies. In particular, the crude Monte Carlo estimator suffers from an unavoidable bias relative to the true VaR contributions. The MCMC estimator proposed here enables consistent estimation of VaR contributions. Its sample efficiency is significantly improved because the MCMC estimator is computed from samples generated directly from the conditional density of the portfolio loss given the sum constraint. Moreover, since the MCMC estimator can capture the features of the risk model more directly than the existing estimators, it can maintain high performance even when the shape of the underlying loss distribution is far from elliptical. We showed some theoretical properties of the MCMC estimator, such as consistency and asymptotic normality. The consistency follows naturally from the construction of the estimator. We provided some conditions and an example where the CLT holds. By the general theory of MCMC, its asymptotic variance can be consistently estimated. We numerically compared the performance of various estimators of VaR contributions for typical risk models used in practice. The simulation results showed that, in most risk models we considered, the MCMC estimator had smaller bias and MSE compared with the existing estimators.
Although the MCMC estimator has good theoretical properties and numerically shows great performance, some remaining issues merit future research. Firstly, although we provided guidelines for choosing a good proposal distribution, they are hardly applicable when the dimension of the target distribution is too high to visualize. For such high-dimensional cases, asymptotic analysis and adaptive methods should be developed to choose a good proposal distribution. Secondly, the theoretical investigation of the conditional joint distribution given the sum constraint is still unsatisfactory. Our concern mainly lies with the influence of the underlying copula of a risk model on the shape of the conditional joint distribution, such as its tail behavior and multi-modality. We believe that revealing these relations helps to find a better proposal distribution in our context of estimating VaR contributions. Finally, our MCMC method can potentially be extended to the computation of Expected Shortfall (ES) contributions. The ES contributions can be derived as a conditional expectation of losses given that the total loss is equal to or greater than VaR (see Tasche, 2001). Due to their similar structure to the VaR contributions, the ES contributions can also be estimated efficiently by a method analogous to the one proposed in this paper. These and other potential extensions are currently being studied.
Acknowledgements
We would like to thank Paul Embrechts for his valuable comments and Kengo Kamatani for a fruitful discussion on MCMC. Our research is supported by the Core-to-Core program (Japan Society for the Promotion of Science: JSPS) at Keio University.
References
[1] C. Bluhm, L. Overbeck, and C. Wagner. Introduction to credit risk modeling. CRC Press, 2016.
[2] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327-335, 1995.
[3] M. Denault. Coherent allocation of risk capital. Journal of Risk, 4(1):1-34, 2001.
[4] A. Dev, editor. Economic Capital: A Practitioner Guide. Risk Books, 2004.
[5] C. Geyer. Introduction to Markov chain Monte Carlo. In Handbook of Markov Chain Monte Carlo, 3-48, 2012.
[6] P. Glasserman. Monte Carlo methods in financial engineering. Springer Science & Business Media, 2003.
[7] P. Glasserman. Measuring marginal risk contributions in credit portfolios. Journal of Computational Finance, 9:1-41, 2005.
[8] W. Hallerbach. Decomposing portfolio value-at-risk: a general analysis. Journal of Risk, 5(2):1-18, 2003.
[9] B. Hansen. Nadaraya-Watson and local linear regression. Lecture note available at http://www.ssc.wisc.edu/%7Ebhansen/718/718.htm, 2009.
[10] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57:97-109, 1970.
[11] X. Huang, C.W. Oosterlee, and M.A.M. Mesters. Computation of VaR and VaR contribution in the Vasicek portfolio credit loss model: a comparative study. Journal of Credit Risk, 3(3):75-96, 2007.
[12] H. Joe. Dependence modeling with copulas. CRC Press, 2014.
[13] G. L. Jones, M. Haran, B. S. Caffo, and R. Neath. Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association, 101(476):1537-1547, 2006.
[14] M. Kalkbrener. An axiomatic approach to capital allocation. Mathematical Finance, 15(3):425-437, 2005.
[15] K. Kamatani. Efficient strategy for the Markov chain Monte Carlo in high-dimension with heavy-tailed target probability distribution. arXiv preprint arXiv:1412.6231, 2014.
[16] K. Kamatani. Ergodicity of Markov chain Monte Carlo with reversible proposal. arXiv preprint arXiv:1602.02889, 2016.
[17] S. Livingstone. Geometric ergodicity of the Random Walk Metropolis with position-dependent proposal covariance. arXiv preprint arXiv:1507.05780, 2015.
[18] H. Mausser and D. Rosen. Economic credit capital allocation and risk contributions. Handbooks in Operations Research and Management Science, 15:681-726, 2007.
[19] A. J. McNeil, R. Frey, and P. Embrechts. Quantitative risk management: Concepts, techniques and tools. Princeton University Press, 2015.
[20] K. L. Mengersen and R. L. Tweedie. Rates of convergence of the Hastings and Metropolis algorithms. The Annals of Statistics, 24(1):101-121, 1996.
[21] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087-1092, 1953.
[22] S. P. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Springer Science and Business Media, 2012.
[23] T. Mikosch. Regular variation, subexponentiality and their applications in probability theory. Eindhoven University of Technology, 1999.
[24] R. B. Nelsen. An introduction to copulas. Springer Science & Business Media, 2013.
[25] E. Nummelin. MC's for MCMC'ists. International Statistical Review, 70:215-240, 2002.
[26] E. Nummelin. General irreducible Markov chains and non-negative operators. Cambridge University Press, 2004.
[27] S. I. Resnick. Extreme values, regular variation and point processes. Springer, 2013.
[28] G. O. Roberts and J. S. Rosenthal. General state space Markov chains and MCMC algorithms. Probability Surveys, 1:20-71, 2004.
[29] J. S. Rosenthal. Minorization conditions and convergence rates for Markov chain Monte Carlo. Journal of the American Statistical Association, 90(430):558-566, 1995.
[30] R. Schmidt. Tail dependence for elliptically contoured distributions. Mathematical Methods of Operations Research, 55(2):301-327, 2002.
[31] B. W. Silverman. Density Estimation. Chapman & Hall, London, 1986.
[32] D. Tasche. Risk contributions and performance measurement. Working paper, Technische Universität München, 1999.
[33] D. Tasche. Conditional expectation as quantile derivative. arXiv preprint math/0104190, 2001.
[34] D. Tasche. Capital Allocation with CreditRisk+. In V. M. Gundlach and F. B. Lehrbass, editors, CreditRisk+ in the Banking Industry, 25-44. Springer, 2004.
[35] D. Tasche and L. Tibiletti. Approximations for the Value-at-Risk approach to risk-return analysis. The ICFAI Journal of Financial Risk Management, 1(4):44-61, 2004.
[36] D. Tasche. Capital allocation to business units and sub-portfolios: the Euler principle. In Pillar II in the New Basel Accord: The Challenge of Economic Capital (ed. A. Resti), 423-453. London: Risk Books, 2008.
[37] D. Tasche. Capital allocation for credit portfolios with kernel estimators. Quantitative Finance, 9(5):581-595, 2009.
[38] L. Tierney. Markov chains for exploring posterior distributions. The Annals of Statistics, 22(4):1701-1728, 1994.
[39] D. Vats, J. M. Flegal, and G. L. Jones. Multivariate output analysis for Markov chain Monte Carlo. arXiv preprint arXiv:1512.07713, 2015.
[40] Y. Yamai and T. Yoshiba. Comparative analyses of expected shortfall and value-at-risk: their estimation error, decomposition, and optimization. Monetary and Economic Studies, 20(1):87-121, 2002.
[41] T. Yoshiba. Risk aggregation by a copula with a stressed condition. No. 13-E-12, Bank of Japan, 2013.
Scheduling and Tiling Reductions on Realistic machines
Nirmal Prajapati
October 25, 2016
Computations in which the number of results is much smaller than the input data, with the results produced through some sort of accumulation, are called reductions. Reductions appear in many scientific applications. Usually, the accumulation in a reduction admits an associative and commutative binary operator. Reductions are therefore highly parallel. Given unbounded fan-in, one can execute a reduction in constant/linear time provided that the data is available. However, because real machines have bounded fan-in, accumulations cannot be performed in one time step and have to be broken into parts. Thus, a (partial) serialization of reductions becomes necessary. This makes scheduling reductions a difficult and interesting problem.
There have been a number of research works in the context of scheduling reductions. We focus on the scheduling techniques presented in [Gupta et al. 2002], identify a potential issue in their scheduling algorithm, and provide a solution. In addition, we demonstrate how these scheduling techniques can be extended to “tile” reductions, and briefly survey other studies that address the problem of scheduling reductions.
1. INTRODUCTION
Reductions are those computations in which an associative and commutative operator accumulates a set of points into a single value. The following example illustrates such a computation.
for i = 1 to N
    A[i] += B[i-1];
The reduction operator is associative and commutative, which implies that accumulations need not follow any particular order. Therefore, we should be able to exploit parallelism. With an unbounded number of processors and an unbounded fan-in operator, accumulations can be done in a single time step. However, for a real machine, both the number of processors and the fan-in are bounded. This necessitates an ordering of the accumulations.
Many scientific and engineering applications spend most of their execution time in nested loops. The task of optimizing such nested loops involves dataflow analysis of the program [Feautrier 1991]. The dataflow analysis of the above program reflects loop-carried data dependences that prevent parallelism. Therefore, scheduling reductions is a difficult problem.
A multidimensional affine function that imposes a particular order of execution is called a
Schedule. A schedule must satisfy the precedence constraints imposed by the dependences.
The polyhedral model provides a powerful abstraction that enables precise reasoning about the legality of transformations. Iterations of the nested loops can be viewed as integer points in
a polyhedron. The computations impose dependence constraints which are represented as
affine functions of the indices. Reductions can also be described as Systems of Affine Recurrence Equations (SAREs) over polyhedral domains. Reduction dependences are implicit
in the SAREs. Such a representation allows us to focus on non-reduction dependences that
must be satisfied by the schedule.
[Redon and Feautrier 1994] present a scheduling technique that optimally schedules reductions on the CRCW PRAM model. They assumed that accumulations can happen in one time step. This approach was extended by [Gupta et al. 2002] to work on realistic machines. They invented a scheduling technique for machines with binary operators and exclusive writes. They claim that on such machines, their scheduling method gives efficient solutions with a constant-factor slowdown compared to the best possible schedules on a CRCW PRAM model. We discover a flaw in their technique which violates their claim of “exclusive writes”.
Using a counter example, we prove that their scheduling technique requires a machine with
concurrent writes. We solve this problem by introducing additional causality constraints and
show that the modified scheduling constraints guarantee a schedule with exclusive writes.
Furthermore, we extend the scheduling approach to tile reductions. Tiling [Wolfe 1987;
Irigoin and Triolet 1988] is a classic iteration space partitioning technique which combines a
set of points into tiles, where each tile can be executed atomically. Tiling comes in handy for
exploiting data locality [Wolf and Lam 1991], minimizing communication [Andonov et al.
2001; Xue 1997] and maximizing parallelism. Reductions are usually serialized before tiling.
Most of the tiling techniques such as [Bondhugula et al. 2008a; Doerfert et al. 2015] take
as input serialized reduction programs. Loop transformation techniques are used to find
tiling hyperplanes. In the majority of the cases, serializing imposes uniform dependences
which are tileable. Serializing reductions, however, negates the fact that accumulations can
be carried out in any order and imposes unnecessary intra-tile and inter-tile dependences.
The tiling techniques developed in this paper maximize parallelism as well as improve data locality in a work-efficient manner.
We also discuss other limitations of Gupta’s scheduling algorithm and suggest possible
improvements. We highlight the unexplored areas and address other related work in the
context of scheduling reductions.
The rest of the paper is organized as follows. Section 2 describes necessary background.
In section 3, we describe [Gupta et al. 2002] scheduling technique using examples. Section
4 exposes the flaw in their technique using a counter example which shows that the “exclusive writes” condition is violated, and proposes a solution that is guaranteed to give schedules for machines with “exclusive writes”. Section 5 extends this scheduling technique to tile
reductions. Section 6 suggests other possible improvements, section 7 discusses related work
and section 8 concludes the paper.
2. BACKGROUND
A reduction of variable X can be represented as
X = reduce(⊕, f, R)    (1)
where ⊕ is an associative and commutative accumulation operator, the body of the reduction
is some variable R, and f is a projection function that maps a subset of the points in the
domain of R, represented as Dom(R), to zX ∈ Dom(X). For every zX ∈ Dom(X), the
Parametrized Reduction Domain of z_X, referred to as P(z_X), is the set of points in R that are mapped to z_X by the function f = z → A_P (z, p)^T + c_P, where A_P is a constant matrix, c_P is a constant vector, and p is the vector of size parameters.
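As a small, hypothetical illustration of such a projection, take the later example X(i) = Σ_j R(i, j): every z = (i, j) maps to z_X = i, so A_P has a single row selecting i (the trailing column acts on the parameter p = N).

```python
import numpy as np

A_P = np.array([[1, 0, 0]])   # picks out i from (i, j, N)
c_P = np.array([0])
N = 5
for i in range(N + 1):
    for j in range(i + 1):
        z_X = A_P @ np.array([i, j, N]) + c_P
        assert z_X[0] == i     # (i, j) lies in P(z_X) for z_X = i
```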
2.1. Schedule
The schedule of a variable X is a vector that represents the time instant at which z_X ∈ Dom(X) is computed and is given by λ_X(z_X), a multidimensional affine function. For any two variables X and Y, if X(z) depends on Y(z′) then Y(z′) must be computed before X(z). The dependence imposes the following causality constraints on the schedule
λ_X(z) ⪰ λ_Y(z′) + T_eqX(z, z′)    (2)
where T_eqX(z, z′) is the time to compute the RHS of the equation X(z) after Y(z′) becomes available.
The scheduling algorithm of [Redon and Feautrier 1994] assumes a CRCW PRAM with an unbounded number of processors and unbounded fan-in operators. The reductions can, therefore, be performed in a single time step. The causality constraints for equation (1) on such a machine are given by
∀ z_X ∈ Dom(X), z_R ∈ P(z_X):  λ_X(z_X) ⪰ λ_R(z_R) + 1    (3)
However, this technique is not applicable for scheduling reductions on real machines.
[Gupta et al. 2002] developed a technique that schedules reductions on realistic machines
with bounded fan-in (binary) operators and exclusive writes. They claim that their scheduling algorithm generates efficient schedules with a constant fold slow down compared to
optimal schedules obtained on a CRCW PRAM model.
The following section explains in detail the scheduling algorithm as presented in [Gupta
et al. 2002].
3. GUPTA’S SCHEDULING ALGORITHM
Let λ_R(z_R) be the schedule of z_R ∈ Dom(R). λ_R(z_R) = t are the equitemporal hyperplanes, or “slices”, defined as the sets of points z_R ∈ P(z_X) that become available for accumulation at time t. A temporary variable TempX(z_X, t) is defined as follows
TempX = reduce(⊕, z → [A_P ; Λ_R] (z, p)^T + [c_P ; α_R], R)    (4)
where λ_R(z_R) = Λ_R (z_R, p)^T + α_R is the schedule for R; stacking Λ_R, α_R below A_P, c_P makes the projection map z_R to the pair (z_X, t) = (f(z_R), λ_R(z_R)).
TempX(z_X, t) are the partial accumulations of the equitemporal hyperplanes in P(z_X). These intermediate results are accumulated to get the final answer z_X ∈ Dom(X). Equation (1) is modified as
X = reduce(⊕, (z_X, t) → z_X, TempX)    (5)
Consider X as the following reduction:
X = Σ_{i=0, j=0}^{i=N, j=i} R(i, j)    (6)
Dom(R) = {i, j|0 ≤ j ≤ i ≤ N }
Assume λ_R(z_R) = i gives the equitemporal hyperplanes. With this information, we can deduce that there are N equitemporal hyperplanes, i.e., Dom(TempX) has N elements, where the t-th element is a partial accumulation of the t-th equitemporal hyperplane in P(z_X).
Figure 1(b) shows the new set of equations obtained after decomposing the reduction as
shown in equations (4) and (5).
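As an illustration of the decomposition (4)-(5), here is a small Python sketch (ours; dummy data) that materializes TempX for the example in equation (6) under λR(zR) = i and then folds it into X:

# Sketch of the decomposition (4)-(5) for equation (6) with lambda_R = i:
# slice t collects the points of the t-th equitemporal hyperplane.
N = 4
R = {(i, j): 1 for i in range(N + 1) for j in range(i + 1)}

# TempX(t): partial accumulation of the slice {(i, j) : i = t}
TempX = {}
for (i, j), v in R.items():
    t = i                       # schedule lambda_R(zR) = i
    TempX[t] = TempX.get(t, 0) + v

# Equation (5): X accumulates the partial results over t
X = sum(TempX.values())
print(TempX, X)   # slice t holds t + 1 points; X counts all points of R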
Correspondingly, the causality constraints in equation (2) are updated to accommodate TempX:

    f(zX) ≤ t ≤ l(zX),
    λTempX(zX, t) ≽ t + TeqTempX(zX, t)    (7)
    λX(zX) ≽ λTempX(zX, t) + TeqX(zX, t)    (8)
where f(zX) and l(zX) are the first and last time steps at which values in P(zX) are available. TeqTempX(zX, t) is the time to compute the reduction of all the values in the equitemporal hyperplane (zX, t). For a machine with binary operators, size(zX, t) gives the time to accumulate the equitemporal hyperplane at (zX, t). The number of time steps required for linear accumulation of all values in any box B is defined as its size, which is also equal to the 1-norm of its principal diagonal.
Fig. 1: Gupta’s scheduling technique for the equation in (a).
For the example in Figure 1(a), the size(zX, t) of a hyperplane is given by t = i. Figure 1(c) shows the iteration space, and the table in Figure 1(d) gives the sizes of the equitemporal hyperplanes at (zX, t).
The scheduling constraints in equations (7) and (8) reduce to

    size(zX, t) ≼ λX(zX) − t    (9)

The slack of an equitemporal hyperplane is defined as sl(zX, t) = λX(zX) − t. The scheduling constraints in (9) are further rewritten as

    sl(zX, t) ≽ size(zX, t)    (10)

Note that by definition size(zX, t) is always a one-dimensional scalar. If the slack is truly multidimensional then the constraint in (10) is trivially satisfied. However, if the slack is not multidimensional, then the value of the innermost dimension of the slack must be greater than size(zX, t).
These causality constraints can be formulated as an integer linear program and solved using a PIP [Feautrier 1988] solver. Using the above constraints to formulate causality for the example in Figure 1, we obtain the schedule λX(zX) shown in Figure 1(d).
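The paper relies on PIP to solve these constraints parametrically; as a rough stand-in for a fixed value of N (not parametric in N, unlike PIP), the following Python fragment enumerates constraint (10) for the Figure 1 example and reports the earliest legal one-dimensional λX:

# Brute-force stand-in (our illustration, not the paper's PIP setup) for
# constraint (10) on the Figure 1 example: size(zX, t) = t, so the slack
# lambda_X - t must be at least t for every slice t in 0..N.
N = 9
size = lambda t: t                        # per the text, size(zX, t) = t = i
lam_X = max(t + size(t) for t in range(N + 1))
print(lam_X)                              # 2N = 18: earliest legal lambda_X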
For the same example, assume that λR(zR) = i − j defines the equitemporal hyperplanes. Figure 2(a) shows the hyperplanes at which values in P(zX) become available for accumulation.
Fig. 2: Gupta’s scheduling technique for equation (6), given λR(zR) = i − j.
For this example, P(zX) = Dom(R). Again, there are N equitemporal hyperplanes, i.e., Dom(TempX) has N elements, where the t-th element is a partial accumulation of the t-th equitemporal hyperplane in P(zX).
The size(zX, t) of each hyperplane is shown in Figure 2(b). We obtain a schedule for zX that requires a machine with concurrent writes: as shown in Figure 2(b), several accumulations are scheduled at the same time step. In such cases, [Gupta et al. 2002] suggest slowing down the schedule by a factor of 2 in the innermost dimension of time, which provides enough time for the accumulations in equation (5) on a machine with binary operators. Figures 2(c) and (d) show how this modification solves the problem.
In the next section, we provide a counterexample showing that this modification does not solve the problem of concurrent writes in all cases.
4. COUNTEREXAMPLE
Consider the variable X defined by the following reduction equation:

    X(i) = Σ_{j=0}^{i} R(i, j)    (11)

    Dom(R) = {(i, j) | 0 ≤ j ≤ i ≤ N}
Here, X is a one-dimensional variable, and Dom(R) is again two-dimensional. Assume that λR(zR) = j defines the equitemporal hyperplanes. With this information, we can see that every slice at (zX, t) has a single element: the t-th element is a partial accumulation of the t-th equitemporal hyperplane in P(zX), which is a single point, so there are no partial accumulations.
The size of every equitemporal hyperplane is equal to 1. Figure 3(c) shows the iteration space of equation (11) and Figure 3(d) shows the desired schedule.
Using these constraints, we now solve for t. Assume we get λR(zR) = t = 0. The schedule λX(zX) is calculated using the constraint in equation (10). This constraint is trivially satisfied with a size of 1 for all equitemporal hyperplanes. Figures 3(e) and (f) show the issue: here the schedule is zero-dimensional, and we can no longer slow it down by a constant factor to accumulate the intermediate results. This violates the “exclusive write" condition and requires a machine with concurrent writes!
In the previous example, note that the number of values to be accumulated into zX is larger than the number of time steps between f(zX) and l(zX). This prevents linear accumulation. We define size′(zX) as the total number of time steps required for linear accumulation of all values in the reduction body TempX(zX, t) of X in equation (5). Let the total number of time steps between f(zX) and l(zX) be TzX.
If the condition

    ∀zX ∈ Dom(X), TzX + 1 > size′(zX)    (12)

is satisfied, then concurrent writes into zX can be avoided. Suppose f(zX) = 1 and l(zX) = N; then TzX = N. If size′(zX) < N, the condition is trivially satisfied. However, this constraint cannot be satisfied for example (11), where TzX = 1 and size′(zX) = N.
Let the total number of equitemporal hyperplanes in P(zX) be EzX. Again, if EzX < size′(zX) < TzX, then exclusive writes cannot be guaranteed by equation (12).
Now let us see what happens if the condition

    ∀zX ∈ Dom(X), EzX + 1 > size′(zX)    (13)

is satisfied. There would be enough time steps for the accumulation of the intermediate values into the final answer zX. However, if there is only one equitemporal hyperplane, as in our example (11), then the constraint in (13) will not be satisfied. Neither the number of equitemporal hyperplanes EzX nor the total number of time steps TzX can be guaranteed to be larger than size′(zX).
Therefore, we suggest the following scheduling constraint

    λX(zX) ≽ size′(zX)    (14)

in addition to the constraints in (10). Linear accumulations are now guaranteed on a machine with bounded fan-in.
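The following Python fragment (our illustration; the concrete numbers are assumptions) evaluates conditions (12)-(14) for counterexample (11) under the degenerate schedule discussed above, where all of R lands in a single equitemporal hyperplane:

# Checking the exclusive-write conditions for counterexample (11) under
# the degenerate schedule lambda_R = 0 (all of R in one slice).
N = 8
E_zX = 1              # equitemporal hyperplanes in P(zX)
T_zX = 1              # time steps between f(zX) and l(zX)
size_prime = N        # steps needed to fold the partial results, per the text

print(T_zX + 1 > size_prime)   # condition (12): False -> concurrent writes
print(E_zX + 1 > size_prime)   # condition (13): False as well
lam_X = size_prime             # constraint (14) forces lambda_X >= size'
print(lam_X)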
The above scheduling technique can be further optimized to get better schedules. In the
next section, we will see how this scheduling technique can be extended to tile reductions
in order to maximize parallelism and improve data locality.
Fig. 3: Gupta’s scheduling technique violates the Exclusive Write condition for equation (11).
Fig. 4: [Gupta et al. 2002] scheduling technique for the equation in (a), given λR(zR) = i.
5. TILING REDUCTIONS
Tiling or blocking computations is a strategy of dividing the iteration space into tiles, where each tile is a set of points [Wolfe 1987; Irigoin and Triolet 1988]. A tiling is considered legal if there are no dependence-based cycles between tiles and if all tiles can be executed atomically.
Let θX(⃗i) define a set of tiling hyperplanes that tile the iteration space of a variable X. For any two variables X and Y, if X depends on Y then the following tiling legality constraint must be satisfied for all the dependences between X and Y:

    θX(⃗i) − θY(⃗i) ≥ 0    (15)

The above condition ensures the legality of tiling as shown in [Bondhugula et al. 2008b].
With the knowledge that accumulations can be carried out in any order, we can eliminate the reduction dependences from the dependence set. However, (15) must hold for all other dependences.
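For dependences given as constant vectors, the legality test (15) amounts to checking that each candidate hyperplane is non-negative along every remaining dependence; a minimal sketch (ours, with hypothetical dependence vectors):

# A sketch of the legality test (15) for candidate tiling hyperplanes:
# a hyperplane theta is legal if theta . d >= 0 for every dependence
# vector d (reduction dependences are dropped from the set first).
def legal(theta, dependences):
    return all(sum(t * d for t, d in zip(theta, dep)) >= 0
               for dep in dependences)

deps = [(1, 0), (0, 1)]          # hypothetical non-reduction dependences
print(legal((1, 0), deps))       # True
print(legal((1, -1), deps))      # False: violated along (0, 1)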
Consider the equation shown in Figure 4(a). Figure 4(b) shows the decomposition of X as per equation (4). Figures 4(c) and (d) show the iteration space and the schedule obtained using the formulation discussed in Section 3.
We provide an incremental approach to finding tiling hyperplanes for reductions. We first show how equitemporal hyperplanes can be tiled, followed by tiling P(zX), and finally suggest possible tilings for the reduction body R.
5.1. Tiling Equitemporal Hyperplanes
Tiling an equitemporal hyperplane (zX, t) using any tiling hyperplane is legal. This is due to the fact that all values in an equitemporal hyperplane are available for accumulation at the same time and that all of them contribute to a single value in TempX. Therefore, the tiling legality condition (15) holds for any tiling hyperplane. We are left with many possible choices; we choose orthogonal tiling hyperplanes with tiles of size s in every dimension. We introduce a new variable TileTempX such that (zX, t, b) ∈ Dom(TileTempX) maps to the b-th tile in TempX, where (zX, t) ∈ Dom(TempX).
    TileTempX = reduce(⊕, z → [A_P; Λ_R; γ] (z; p; s) + [c_P; α_R], R)    (16)
where γ is a function that divides every dimension of the equitemporal hyperplane at (zX, t) into tiles of size s such that 1 ≤ b ≤ τ(zX, t), where τ(zX, t) is the total number of tiles in the slice at (zX, t). With this definition of the variable TileTempX, equation (4) is modified as

    TempX = reduce(⊕, (zX, t, b) → (zX, t), TileTempX)    (17)

TempX(zX, t) is the accumulation of τ(zX, t) tiles. Equation (5) remains unchanged.
Consider the equation shown in Figure 5(a). Figure 5(b) shows the decomposition of X as per equations (16) and (17). We now obtain causality constraints for equations (16) and (17). The precedence constraints on TileTempX state that, for 1 ≤ b ≤ τ(zX, t), f(zX) ≤ t ≤ l(zX), zX ∈ Dom(X),

    λTileTempX(zX, t, b) ≽ t + TeqTileTempX(zX, t, b)    (18)

    λTempX(zX, t) ≽ λTileTempX(zX, t, b) + TeqTempX(zX, t)    (19)
In an equitemporal hyperplane, all the tiles can be executed simultaneously. TeqTileTempX(zX, t, b) is given by the size of the tile. We assume that the tile size is s in every dimension. Let the number of dimensions of the equitemporal hyperplane be d(zX, t). Hence, the size of a tile (zX, t, b) is given by d(zX, t) × s. We get

    λTileTempX(zX, t, b) ≽ t + [d(zX, t) × s]    (20)

as the constraint on λTileTempX(zX, t, b).
TeqTempX(zX, t) of TempX, defined as size(zX, t), is the time it takes to accumulate all the partial answers produced by the tiles, which is equal to τ(zX, t). Therefore, the causality constraint on the schedule for equation (17) can be formulated as:

    λTempX(zX, t) ≽ t + τ(zX, t) + [d(zX, t) × s]    (21)
Fig. 5: Extending the scheduling techniques for tiling equitemporal hyperplanes for the
equation in (a), given λR (zR ) = i.
Similarly, we deduce the following as the causality constraints for equation (5):

    λX(zX) ≽ t + τ(zX, t) + [d(zX, t) × s]    (22)

    λX(zX) ≽ size′(zX)    (23)
Figures 5(c) and (d) show the iteration space and schedule obtained using the formulation
discussed above. Using tile size s = 3 and N = 9, we get the number of tiles τ = 3 and
hence λX (zX ) can be scheduled as early as 15.
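The number 15 can be reproduced with a few lines of Python (our sketch; it assumes slice t holds size(zX, t) = t values, as in the running example):

import math

# Reproducing the earliest-start computation in the text: with s = 3 and
# N = 9, slice t needs tau = ceil(t/s) tiles, and constraint (22)
# requires lambda_X >= t + tau + d*s for every slice t.
N, s, d = 9, 3, 1
lam_X = max(t + math.ceil(t / s) + d * s for t in range(N + 1))
print(lam_X)   # 9 + 3 + 3 = 15, matching the text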
Note that when s does not evenly divide every dimension of an equitemporal hyperplane, we get partial tiles whose size is smaller than that of full tiles. Therefore, the above causality constraints are trivially satisfied for partial tiles.
With the above formulation, there are many possible choices for the tile size s. If we choose s = 1, then there will be only one point in each tile. We provide a cost function that leads to a good tile size and maximizes parallelism.
5.1.1. Towards finding good solutions. To minimize the total execution time, the reduction of an equitemporal hyperplane can be implemented using a binary-tree-like algorithm. However, this is not work-efficient. To accumulate N elements, binary-tree algorithms take log N steps. Brent’s theorem suggests N/log N parallel instances, where each instance performs log N work [Brent 1999]. Afterwards, all N/log N parallel instances contribute to the final accumulation of the partial results. The cost is now given by (N/log N) · log N = N, which is work-efficient. Our proposed cost function provides such work-efficient solutions.
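A minimal sketch of such a work-efficient reduction in Python (ours; it only illustrates the chunking idea, not an actual parallel runtime):

import math

# Work-efficient reduction in the spirit of Brent's bound: split the N
# inputs into roughly N/log N chunks, fold each chunk sequentially (the
# phase that would run in parallel), then combine the partial sums.
def work_efficient_sum(xs):
    n = len(xs)
    chunk = max(1, int(math.log2(n))) if n > 1 else 1
    partials = [sum(xs[i:i + chunk]) for i in range(0, n, chunk)]
    return sum(partials)      # final accumulation of the partial results

print(work_efficient_sum(list(range(1, 101))))   # 5050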
We seek to minimize the following cost function:

    ω = [d(zX, t) × s] − τ(zX, t),  minimize(ω)    (24)
The tile size s used in Figures 5(c) and (d) reflects the result of the cost function (24). Note that our tile-size optimization function assumes equal tile size in every dimension; this restriction can be lifted to enable rectangular tiling.
This technique of tiling equitemporal hyperplanes applies to cases where the equitemporal hyperplane is at least one-dimensional. If the hyperplanes are zero-dimensional, then the above formulation does not apply. This motivates tiling across equitemporal hyperplanes.
5.2. Tiling the Parametrized Reduction Domain of zX
As shown above, tiling equitemporal hyperplanes is straightforward. Let us now consider tiling the Parametrized Reduction Domain of zX. If there are no dependences between equitemporal hyperplanes, then orthogonal tiling hyperplanes can be chosen and all tiles can be launched independently. The partial answers can then be accumulated to get the final answer zX.
However, if there exist dependences in t, then orthogonal tiling hyperplanes might not be legal. Notice that the only dependences that affect tiling legality are those between equitemporal hyperplanes (by the definition of slices, all values in an equitemporal hyperplane become available for accumulation at the same time). The problem of tiling P(zX) is thus reduced to time tiling. A well-known approach is to enforce the forward-communication-only constraint presented in [Griebl et al. 2005]. If we want to maximize parallelism and minimize synchronization, then we can use the time-partitioning technique of [Lim and Lam 1998]. If minimizing communication is also a desired optimization, then the cost-optimization techniques of [Bondhugula et al. 2008a] can be used. If the dependences are uniform, then time-tiling techniques for stencil computations such as [Tang et al. 2011; Grosser et al. 2014; Bondhugula et al. 2016] can be used to maximize parallelism together with concurrent start.
The dependences in t impose additional constraints on tile sizes: the tile size along each tiling hyperplane must be greater than the length of the longest dependence in the hyperplane. Using these tiling hyperplanes and the additional dependence-based constraints, we can now optimize the tile size with our cost function.
We can eliminate the variable TempX and use only one variable TileX to tile P(zX). TileX is defined using an affine function ⃗γ, which represents the tiling hyperplanes and the tile size s. X is then an accumulation of the partial answers produced by each tile.
Additional analysis is needed to tile the reduction body. Let R be the reduction body of X. We want to tile all d dimensions of the reduction body using the same approach as above. In order to do so, the schedule of tiles must admit the schedules of both variables R and X. Tiling hyperplanes satisfying the tiling legality constraint (15) can be found, and the reduction body can be partitioned along these hyperplanes to decompose the reduction.
6. FURTHER IMPROVEMENTS
We identify the following potential areas of improvement to the scheduling techniques presented in this paper.
(1) Reiterating the scheduling approach discussed in Section 4: the causality constraints are formulated by making some assumptions regarding λR, and then these constraints are simultaneously resolved to find schedules for all the variables. Hence, while formulating the causality constraints, it is not possible to recognize the exact equitemporal hyperplanes without knowledge of λR. The scheduling techniques do not show how to make an optimal choice of equitemporal hyperplanes while formulating the causality constraints. The analysis is, therefore, sub-optimal and can be improved.
(2) The suggested scheduling and tiling techniques do not consider the program size parameters. Reconsider the example in Figure 4(a) with modified size parameters such that 0 ≤ i ≤ M and 0 ≤ j ≤ N. Assume λR(zR) = i as the equitemporal hyperplanes, as shown in Figure 4(c). The number of equitemporal hyperplanes is then given by M, and the size of an equitemporal hyperplane is given by N. The causality constraints in equation (10) impose the condition that the slack must be greater than or equal to the size. Either M ≥ N or M < N; without knowledge of the values of N and M, it is not possible to find a schedule that satisfies both inequalities. Assuming the values of N and M are known and M > N, if after solving we get λR(zR) = t = j, which used N as the size of the equitemporal hyperplanes, we obtain an incorrect schedule for X. Therefore, it becomes necessary to consider the size parameters.
(3) In situations where equitemporal hyperplanes have different slacks, it is suggested that the hyperplanes be scheduled in decreasing order of slack. However, the method does not discuss the ordering of equitemporal hyperplanes when they have the same slack.
(4) Furthermore, if the reduction operator is not commutative, then the accumulations must admit an order. The scheduling techniques presented in this paper can be extended to non-commutative operators. For example, a reduction with an associative but non-commutative operator can be tiled using the techniques presented in Section 5 with the additional constraint that accumulations within a tile are lexicographically ordered; see the sketch below. The accumulations of partial answers into the final answer can also be ordered in lexicographically increasing order of tiles. Such an ordering imposed by a non-commutative operator might slow down the schedule by a constant factor. Parallelism can, however, be exploited irrespective of the commutativity of the operator.
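A minimal sketch of such an order-preserving balanced reduction (ours; string concatenation stands in for an associative, non-commutative operator):

# Ordered balanced reduction for an associative, non-commutative
# operator: operands within and across tiles keep their lexicographic
# order, so the result matches left-to-right accumulation.
def ordered_tree_reduce(op, xs):
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

words = ["a", "b", "c", "d", "e"]
assert ordered_tree_reduce(lambda x, y: x + y, words) == "abcde"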
7. RELATED WORK
The scheduling technique of [Karp et al. 1967] solved the problem of scheduling Systems of Uniform Recurrence Equations (SUREs). Using the polyhedral model, [Rajopadhye et al. 1986] presented a technique for synthesizing systolic architectures from recurrence equations, which enables scheduling Affine Recurrence Equations. [Feautrier 1992a; 1992b] give closed-form schedules as affine functions of the indices of a nested loop program. The reader is referred to the book [Darte et al. 2000], which details scheduling algorithms for recurrence equations.
The problem of finding schedules in the presence of reductions was initially tackled by [Redon and Feautrier 1994]. They assumed a CRCW PRAM model where accumulations can be carried out in a single time step. They also showed that their technique can be extended to machines with a bounded number of processors by serializing reductions using a partial-binary-tree algorithm. However, they did not show how this can be done efficiently. Building on their scheduling technique, [Gupta et al. 2002] developed an algorithm to determine an effective serialization of reductions so as to achieve the fastest possible linear schedules on exclusive-write machines with bounded fan-in. Scheduling SAREs does not need to consider memory-based dependences, unlike [Doerfert et al. 2015; Sato and Iwasaki 2011; Bondhugula et al. 2008a], and hence provides more flexibility.
While scheduling SAREs with reductions, the reduction dependences are implicit, which also allows for maximal parallelism. Other techniques such as [Pugh and Wonnacott 1994; Stock et al. 2014] make the dependences explicit to improve parallelization. Automatic parallelization techniques such as Polly’s polyhedral optimizer [Doerfert et al. 2015] try to achieve parallelism by introducing privatization.
The problem of finding optimal schedules is directed towards optimizing some cost function that minimizes latency or delay, or maximizes fine-grained parallelism [Feautrier 1992a; 1992b; Redon and Feautrier 1994]. Tiling improves data locality, and works such as [Bondhugula et al. 2008a] process loops where reductions are serialized.
In certain cases, it becomes necessary to find piecewise linear schedules. It is, however, difficult to determine the pieces automatically. In [Wonnacott et al. 2015], the authors show how to find the optimal piecewise schedule for the Optimal String Parenthesization problem and use the Mostly-Tileable technique for tiling. The schedule was, however, found by hand.
8. CONCLUSION
We studied previous works that address the problem of scheduling SAREs in the presence of reductions. We showed that the method of scheduling reductions developed in [Gupta et al. 2002] has an error in the formulation of the causality constraints which leads to concurrent writes. We exposed this error with an example and provided a solution that guarantees exclusive writes. The scheduling technique presented in this paper gives optimal linear schedules.
Above all, reductions remain memory-bound computations. Therefore, exploiting data locality using tiling techniques can improve performance [Wolf and Lam 1991]. Using the knowledge that the reduction operator is associative and commutative, we extended Gupta’s scheduling technique to both scheduling and tiling reductions. Tiling is also useful for coarse-grained parallelism [Lim and Lam 1998; Xue 2000]. We demonstrated that tiling the equitemporal hyperplanes yields maximal parallelism. When the accumulations are serialized, as in most other techniques, similar parallelism cannot be achieved because serialization imposes an execution order on tiles.
The tile-size optimization technique presented in this paper maximizes parallelism, exploits data locality, and provides work-efficient solutions, which also reduces the total number of synchronizations. This is achieved by reducing the number of elements that contribute to the final accumulation.
REFERENCES
R. Andonov, S. Balev, S. Rajopadhye, and N. Yanev. 2001. Optimal Semi-Oblique Tiling. In Proceedings
of the thirteenth annual ACM symposium on Parallel algorithms and architectures (SPAA ’01). ACM,
New York, NY, USA.
U. Bondhugula, V. Bandishti, and I. Pananilath. 2016. Diamond Tiling: Tiling Techniques to Maximize
Parallelism for Stencil Computations. IEEE Transactions on Parallel and Distributed Systems PP, 99
(2016), 1–1. DOI:http://dx.doi.org/10.1109/TPDS.2016.2615094
U. Bondhugula, A. Hartono, J. Ramanujam, and P. Sadayappan. 2008a. PLuTo: A Practical and Fully
Automatic Polyhedral Program Optimization System. In ACM Conference on Programming Language
Design and Implementation. ACM SIGPLAN, Tucson, AZ, 101–113.
Uday Bondhugula, Albert Hartono, J. Ramanujam, and P. Sadayappan. 2008b. A Practical Automatic
Polyhedral Parallelizer and Locality Optimizer. In Proceedings of the 29th ACM SIGPLAN Conference
on Programming Language Design and Implementation (PLDI ’08). ACM, New York, NY, USA, 101–
113. DOI:http://dx.doi.org/10.1145/1375581.1375595
Richard P. Brent. 1999. Some Parallel Algorithms for Integer Factorization. In EuroPar’99 Parallel Processing (Lecture Notes in Computer Science, No. 1685), P. Amestoy, P. Berger, M. Daydé, I. Duff,
V. Frayssé, L. Giraud, and D. Ruiz (Eds.). Springer-Verlag, 1–22.
A. Darte, Y. Robert, and F. Vivien. 2000. Scheduling and Automatic Parallelization. Birkhäuser.
Johannes Doerfert, Kevin Streit, Sebastian Hack, and Zino Benaissa. 2015. Polly’s Polyhedral Scheduling
in the Presence of Reductions, In Fifth International Workshop on Polyhedral Compilation Techniques
in conjunction with HiPEAC 2015. CoRR abs/1505.07716 (2015). http://arxiv.org/abs/1505.07716
Paul Feautrier. 1988. Parametric integer programming. RAIRO Recherche Opérationnelle 22, 243–268.
P. Feautrier. 1991. Dataflow analysis of array and scalar references. International Journal of Parallel Programming 20, 1 (Feb 1991), 23–53.
P. Feautrier. 1992a. Some efficient solutions to the affine scheduling problem, Part I, One-dimensional Time. Technical Report 28. Laboratoire MASI, Institut Blaise Pascal.
P. Feautrier. 1992b. Some efficient solutions to the affine scheduling problem, Part II, Multidimensional Time. Technical Report 78. Laboratoire MASI, Institut Blaise Pascal.
Martin Griebl, Paul Feautrier, and Armin Größlinger. 2005. Forward Communication Only Placements and
Their Use for Parallel Program Construction. Springer Berlin Heidelberg, Berlin, Heidelberg, 16–30.
DOI:http://dx.doi.org/10.1007/11596110_2
Tobias Grosser, Albert Cohen, Justin Holewinski, P. Sadayappan, and Sven Verdoolaege. 2014. Hybrid
Hexagonal/Classical Tiling for GPUs. In CGO. Orlando, FL, 66.
Gautam Gupta, Sanjay Rajopadhye, and Patrice Quinton. 2002. Scheduling Reductions on Realistic Machines. In Proceedings of the Fourteenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA ’02). ACM, New York, NY, USA, 117–126. DOI:http://dx.doi.org/10.1145/564870.564888
F. Irigoin and R. Triolet. 1988. Supernode Partitioning. In the 15th ACM SIGPLAN-SIGACT symposium
on Principles of programming languages (POPL ’88). ACM, New York, NY, USA, 319–329.
R.M. Karp, R.E. Miller, and S. Winograd. 1967. The organization of computations for uniform recurrence
equations. Journal of the ACM (JACM) 14, 3 (1967), 563–590.
Amy W. Lim and Monica S. Lam. 1998. Maximizing parallelism and minimizing synchronization with affine
partitions.. In Parallel Computing. DOI:http://dx.doi.org/10.1016/S0167-8191(98)00021-0
W. Pugh and D. Wonnacott. 1994. Static Analysis of Upper and Lower Bounds on Dependences and
Parallelism. ACM Trans. Program. Lang. Syst. 16, 4 (1994), 1248–1278.
Sanjay V. Rajopadhye, S. Purushothaman, and Richard Fujimoto. 1986. On Synthesizing Systolic Arrays
from Recurrence Equations with Linear Dependencies. In Proceedings of the Sixth Conference on Foundations of Software Technology and Theoretical Computer Science. Springer-Verlag, London, UK, UK,
488–503. http://dl.acm.org/citation.cfm?id=646824.706926
Xavier Redon and Paul Feautrier. 1994. Scheduling Reductions. In Proceedings of the 8th International Conference on Supercomputing (ICS ’94). ACM, New York, NY, USA, 117–125.
DOI:http://dx.doi.org/10.1145/181181.181319
Shigeyuki Sato and Hideya Iwasaki. 2011. Automatic Parallelization via Matrix Multiplication. SIGPLAN
Not. 46, 6 (June 2011), 470–479. DOI:http://dx.doi.org/10.1145/1993316.1993554
Kevin Stock, Martin Kong, Tobias Grosser, Louis-Noël Pouchet, Fabrice Rastello, J. Ramanujam, and P.
Sadayappan. 2014. A Framework for Enhancing Data Reuse via Associative Reordering. In Proceedings
of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI
’14). ACM, New York, NY, USA, 65–76. DOI:http://dx.doi.org/10.1145/2594291.2594342
Yuan Tang, Rezaul Alam Chowdhury, Bradley C. Kuszmaul, Chi-Keung Luk, and Charles E. Leiserson.
2011. The Pochoir Stencil Compiler. In Proceedings of the 23rd ACM symposium on Parallelism in
algorithms and architectures (SPAA ’11). ACM, New York, NY, USA, 12.
M. E. Wolf and M. Lam. 1991. A Data Locality Optimizing Algorithm. In ACM SIGPLAN Conference on
Programming Language Design and Implementation (PLDI). Toronto, Canada.
M. J. Wolfe. 1987. Iteration space tiling for memory hierarchies. Parallel Processing for Scientific Computing
(SIAM) (1987), 357–361.
David Wonnacott, Tian Jin, and Allison Lake. 2015. Automatic Tiling of “Mostly-Tileable" Loop Nests.
In Fifth International Workshop on Polyhedral Compilation Techniques in conjunction with HiPEAC
2015 (IMPACT ’15).
J. Xue. 1997. Communication-Minimal Tiling of Uniform Dependence Loops. J. Parallel and Distrib. Comput. 42, 1 (January 1997), 42–59.
Jingling Xue. 2000. Loop Tiling for Parallelism. Kluwer Academic Publishers, Norwell, MA, USA.
PROBABILITY THEORY AND MATHEMATICAL STATISTICS 2017
International Kazan conference
Improved nonparametric estimation of the drift in diffusion processes

Evgeny Pchelintsev¹, Svyatoslav Perelevskiy¹ and Irina Makarova¹

¹ Tomsk State University, 36 Lenina avenue, Tomsk, 634050, Russian Federation;
[email protected], [email protected], [email protected]
http://en.tsu.ru/
Abstract. In this paper, we consider the robust adaptive nonparametric estimation problem for the drift coefficient in diffusion processes. An adaptive model selection procedure, based on improved weighted least squares estimates, is proposed. Sharp oracle inequalities for the robust risk are obtained.
Keywords: improved estimation, stochastic diffusion process, mean-square accuracy, model selection, sharp oracle inequality.
1
Introduction
Let (Ω, F, (F_t)_{t≥0}, P) be a filtered probability space on which the following stochastic differential equation is defined:

    dy_t = S(y_t) dt + dw_t ,  0 ≤ t ≤ T ,    (1)

where (w_t)_{t≥0} is a scalar standard Wiener process, the initial value y_0 is some given constant, and S(·) is an unknown function. The problem is to estimate the function S(x), x ∈ [a, b], from the observations (y_t)_{0≤t≤T}. The calibration problem for the model (1) is important in various applications.
In particular, it appears when constructing optimal strategies for investor behavior in diffusion financial markets. It is known that the optimal strategy depends on unknown market parameters, in particular on the unknown drift coefficient S. Therefore, in practical financial calculations it is necessary to use statistical estimates of the function S which are reliable on some fixed time interval [0, T ] [6]. Earlier, the problem of non-asymptotic estimation of the parameters of diffusion processes was studied in [9], where it was shown that many difficulties of asymptotic parameter estimation for one-dimensional diffusion processes can be overcome by using a sequential approach. It turns out that the theoretical analysis of sequential estimates is simpler than the analysis of classical procedures. In particular, it is possible to calculate non-asymptotic bounds for the quadratic risk. Owing to the use of a sequential approach, the problems of non-asymptotic parameter estimation were studied in [1] for multidimensional diffusion processes and recently in [2] for multidimensional continuous and discrete semimartingales. In [7] a truncated sequential method for estimating the parameters of diffusion processes was developed. We now turn to nonparametric estimation. A consistent approach to minimax estimation of the drift coefficient in (ergodic) diffusion processes was developed in [3], where sequential pointwise kernel estimates are considered. For such estimates, non-asymptotic upper bounds on the root-mean-square risk are obtained, and these estimates achieve the optimal convergence rate as T → ∞.
This paper deals with estimating the unknown function S(x), a ≤ x ≤ b, in the sense of the mean square risk

    R(Ŝ_T, S) = E_S ‖Ŝ_T − S‖² ,  ‖S‖² = ∫_a^b S²(x) dx ,    (2)
where Ŝ_T is an estimate of S based on the observations (y_t)_{0≤t≤T}, and a < b are some real numbers. Here E_S is the expectation with respect to the distribution P_S of the random process (y_t)_{0≤t≤T} given the drift function S.
The goal of this paper is to construct an adaptive estimate S* of the drift coefficient S in (1) and to show that the quadratic risk of this estimate is smaller than that of the estimate proposed in [3], i.e. we construct an improved estimate in the mean square accuracy sense. To this end we use the improved estimation approach proposed in [10] and [8] for parametric regression models and recently developed in [11] for a nonparametric estimation problem. Moreover, in this paper we consider the estimation problem in the adaptive setting, i.e. when the regularity of S is unknown. For this we use the model selection method proposed in [4]. Such an approach provides an adaptive solution for the nonparametric estimation through oracle inequalities which give a nonasymptotic upper bound for the quadratic risk of the estimate.
The rest of the paper is organized as follows. In Section 2 we reduce the initial problem to an estimation problem in a discrete time nonparametric regression model. In Section 3 we construct the improved weighted least squares estimates. In Section 4 the sharp nonasymptotic oracle inequality for the quadratic risk of the model selection procedure is given.
2. Passage to a discrete time regression model
To obtain a reliable estimate of the function S, it is necessary to impose on it certain conditions that are analogous to the periodicity of the deterministic signal in the white noise model [5]. One condition sufficient for this purpose is the assumption that the process (y_t)_{t≥0} in (1) returns to any neighborhood of each point x ∈ [a, b]. As in [3], to get the ergodicity of the process (1) we define the following functional class:

    Σ_{L,N} = { S ∈ Lip_L(R) : |S(N)| ≤ L ; ∀|x| ≥ N, ∃ Ṡ(x) ∈ C(R) such that −L ≤ inf_{|x|≥N} Ṡ(x) ≤ sup_{|x|≥N} Ṡ(x) ≤ −1/L } ,    (3)

where L > 1, N > |a| + |b|, Ṡ(x) is the derivative of S(x), and

    Lip_L(R) = { f ∈ C(R) : sup_{x,y∈R} |f(x) − f(y)| / |x − y| ≤ L } .
We note that if S ∈ Σ_{L,N}, then there exists an invariant density

    q(x) = q_S(x) = exp{2 ∫_0^x S(z) dz} / ∫_{−∞}^{+∞} exp{2 ∫_0^y S(z) dz} dy .    (4)
We note that the functions in Σ_{L,N} are uniformly bounded on [a, b], i.e.

    s* = sup_{a≤x≤b} sup_{S∈Σ_{L,N}} S²(x) < ∞ .

We start with the partition of the interval [a, b] by the points (x_k)_{1≤k≤n}, defined as

    x_k = a + (k/n)(b − a) ,    (5)
where n = n(T) is an integer-valued function of T such that

    n(T) ≤ T  and  lim_{T→∞} n(T)/T = 1 .    (6)
Now at each point x_k we estimate the function S by a sequential kernel estimator. We fix some 0 < t_0 < T and put

    τ_k = inf{ t ≥ t_0 : ∫_{t_0}^t Q((y_s − x_k)/h) ds ≥ H_k } ,
    S̃_k = (1/H_k) ∫_{t_0}^{τ_k} Q((y_s − x_k)/h) dy_s ,    (7)

where Q(z) = 1_{|z|≤1}, 1_A is the indicator of the set A, h = (b − a)/(2n) and H_k is a positive threshold, which will be specified below. From (1) it is easy to obtain that

    S̃_k = S(x_k) + ζ_k .
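To give a feel for the estimator (7), here is a minimal simulation sketch in Python (ours, not from the paper): the drift S(x) = -x, the Euler-Maruyama step, and all tuning constants are ad hoc choices.

import math, random

# Minimal simulation sketch of the sequential kernel estimate (7). The
# drift S(x) = -x (Ornstein-Uhlenbeck type) and every tuning constant
# below are our ad hoc choices, not values prescribed by the paper.
random.seed(0)
dt, T, t0 = 1e-3, 200.0, 5.0
S_true = lambda x: -x
Q = lambda z: 1.0 if abs(z) <= 1.0 else 0.0

xk, h, Hk = 0.5, 0.1, 5.0    # point, bandwidth, occupation threshold
y, t = 0.0, 0.0
occ, num = 0.0, 0.0          # int Q ds and int Q dy, accumulated from t0
while t < T:
    dy = S_true(y) * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    if t >= t0:
        w = Q((y - xk) / h)
        occ += w * dt
        num += w * dy
        if occ >= Hk:        # the stopping time tau_k has been reached
            break
    y += dy
    t += dt

print(num / Hk, S_true(xk))  # S~_k versus the true drift at xk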
The error ζ_k can be represented as the sum of an approximating part and a stochastic part, i.e.

    ζ_k = B_k + (1/√H_k) ξ_k ,  B_k = (1/H_k) ∫_{t_0}^{τ_k} Q((y_s − x_k)/h) ΔS(y_s, x_k) ds ,

where ΔS(y, x) = S(y) − S(x) and

    ξ_k = (1/√H_k) ∫_{t_0}^{τ_k} Q((y_s − x_k)/h) dw_s .

Taking into account that S is a Lipschitz function, we obtain an upper bound for the approximating part:

    |B_k| ≤ Lh .
It is easy to see that the random variables (ξ_k)_{1≤k≤n} are independent and identically distributed N(0, 1). In [3] it is established that an efficient kernel estimate of the form (7) has a stochastic part distributed as N(0, 2T h q_S(x_k)), where q_S(x_k) is the ergodic density defined in (4). Therefore, for an efficient estimate at each point x_k by the kernel estimator (7), we need to estimate the density (4) from the observations (y_t)_{0≤t≤t_0}. To this end, we set

    q̃_T(x_k) = max{ q̂(x_k) , ε_T } ,

where ε_T is positive, 0 < ε_T < 1, and

    q̂(x_k) = (1/(2 t_0 h)) ∫_0^{t_0} Q((y_s − x_k)/h) ds .

Now we choose the threshold H_k in (7) as

    H_k = (T − t_0)(2 q̃_T(x_k) − ε_T²) h .
Suppose that the parameters t_0 = t_0(T) and ε_T satisfy the following conditions:

H1) For any T ≥ 32, 16 ≤ t_0 ≤ T/2 and √2 / t_0^{1/8} ≤ ε_T ≤ 1.

H2) lim_{T→∞} t_0(T) = ∞ , lim_{T→∞} ε_T = 0 , lim_{T→∞} T ε_T / t_0(T) = ∞ .

H3) For any ν > 0 and m > 0, lim_{T→∞} T ε_T^m = ∞ and lim_{T→∞} T^m e^{−ν√t_0} = 0 .

For example, for T ≥ 32, one can take t_0 = max{min{ln⁴ T , T/2} , 16} and ε_T = √2 t_0^{−1/8}.
Let

    Γ = { max_{1≤l≤n} τ_l ≤ T }  and  Y_k = S̃_k 1_Γ .    (8)
Then, on the set Γ, the observations obey the heteroscedastic regression model

    Y_k = S(x_k) + ζ_k ,  ζ_k = σ_k ξ_k + δ_k    (9)

with δ_k = B_k and

    σ_k² = n / ( (T − t_0)(q̃_T(x_k) − ε_T²/2)(b − a) ) .

It should be noted that from (6) and H1) we get the following upper bound:

    max_{1≤k≤n} σ_k² ≤ 4 / ((b − a) ε_T) = σ_*    (10)

for which, by condition H3),

    lim_{T→∞} σ_* / T^m = 0  for any m > 0 .
To estimate the function S from the observations (9), we should study some properties of the set Γ in (8).

Proposition 1. Suppose that the parameters t_0 and ε_T satisfy the conditions H1) – H3). Then

    sup_{S∈Σ_{L,N}} P_S(Γ^c) ≤ Π_T ,

where lim_{T→∞} T^m Π_T = 0 for any m > 0.
3. Improved estimates
In this section we consider the estimation problem for the model (9). The function S(·) is unknown and has to be estimated from the observations Y_1, . . . , Y_n.
The accuracy of any estimator Ŝ will be measured by the empirical squared error of the form

    ‖Ŝ − S‖_n² = (Ŝ − S, Ŝ − S)_n = (b − a)/n Σ_{l=1}^n (Ŝ(x_l) − S(x_l))² .
Now we fix a basis (φ_j)_{1≤j≤n} which is orthonormal for the empirical inner product:

    (φ_i, φ_j)_n = (b − a)/n Σ_{l=1}^n φ_i(x_l) φ_j(x_l) = Kr_{ij} ,

where Kr_{ij} is the Kronecker symbol. By making use of this basis we apply the discrete Fourier transformation to (9) and obtain the Fourier coefficients

    θ̂_{j,n} = (b − a)/n Σ_{l=1}^n Y_l φ_j(x_l) ,  θ_{j,n} = (b − a)/n Σ_{l=1}^n S(x_l) φ_j(x_l) .
From (9) it follows directly that these Fourier coefficients satisfy the equation

    θ̂_{j,n} = θ_{j,n} + ζ_{j,n}  with  ζ_{j,n} = √((b − a)/n) ξ_{j,n} + δ_{j,n} ,

where

    ξ_{j,n} = √((b − a)/n) Σ_{l=1}^n σ_l ξ_l φ_j(x_l)  and  δ_{j,n} = (b − a)/n Σ_{l=1}^n δ_l φ_j(x_l) .

Note that the upper bound (10) and the Bunyakovsky–Cauchy–Schwarz inequality imply that

    |δ_{j,n}| ≤ ‖δ‖_n ‖φ_j‖_n = ‖δ‖_n .
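As an illustration of the discrete Fourier step, the following Python sketch (ours) uses a trigonometric basis that is orthonormal for the empirical inner product at the sieve points and computes the coefficients θ̂_{j,n} of synthetic data:

import math

# Sketch of the discrete Fourier step on the sieve (5): a trigonometric
# basis, orthonormal for the empirical inner product at the points x_l,
# and the coefficients theta_hat_{j,n} of synthetic observations Y.
a, b, n = 0.0, 1.0, 64
x = [a + (k + 1) * (b - a) / n for k in range(n)]

def phi(j, t):
    u = (t - a) / (b - a)
    if j == 1:
        return 1.0 / math.sqrt(b - a)
    m = j // 2
    f = math.cos if j % 2 == 0 else math.sin
    return math.sqrt(2.0 / (b - a)) * f(2.0 * math.pi * m * u)

def inner(f, g):                       # empirical inner product (., .)_n
    return (b - a) / n * sum(f(t) * g(t) for t in x)

print(round(inner(lambda t: phi(2, t), lambda t: phi(2, t)), 6))  # ~1.0
print(round(inner(lambda t: phi(2, t), lambda t: phi(3, t)), 6))  # ~0.0

Y = [math.sin(2.0 * math.pi * t) + 0.1 * math.cos(7.0 * t) for t in x]
theta_hat = [(b - a) / n * sum(Y[l] * phi(j, x[l]) for l in range(n))
             for j in range(1, 9)]
print([round(c, 3) for c in theta_hat])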
We estimate the function S in (9) on the sieve (5) by the weighted least squares estimator

    Ŝ_λ(x_l) = Σ_{j=1}^n λ(j) θ̂_{j,n} φ_j(x_l) 1_Γ ,  1 ≤ l ≤ n ,

where the weight vector λ = (λ(1), . . . , λ(n)) belongs to some finite set Λ ⊂ [0, 1]^n. We set, for any a ≤ x ≤ b,

    Ŝ_λ(x) = Ŝ_λ(x_1) 1_{a≤x≤x_1} + Σ_{l=2}^n Ŝ_λ(x_l) 1_{x_{l−1}<x≤x_l} .    (11)
Further we suppose that the first d ≤ n components of the weight vector λ are equal to 1, i.e.
λ(j) = 1 for any 1 ≤ j ≤ d.
We consider a new estimate for the function S in (9) of the form

    S*_λ(x_l) = Σ_{j=1}^n λ(j) θ*_{j,n} φ_j(x_l) 1_Γ ,  1 ≤ l ≤ n ,

where

    θ*_{j,n} = ( 1 − (c(d)/‖θ̃_n‖) 1_{1≤j≤d} ) θ̂_{j,n} ,  ‖θ̃_n‖² = Σ_{j=1}^d θ̂²_{j,n} ,

and

    c(d) = (d − 1) σ_*² L (b − a)^{1/2} / √( n (s* + d σ_*/n) ) .
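The shrinkage step can be sketched numerically as follows (our Python illustration; the constant c_d below is a placeholder, not the paper's exact c(d)):

import math, random

# Sketch of the shrinkage step: the first d empirical coefficients are
# multiplied by (1 - c_d / ||theta~_n||). The constant c_d here is a
# placeholder, not the paper's exact c(d), and the data are synthetic.
random.seed(1)
d, n = 8, 64
theta = [1.0 / (j + 1) for j in range(n)]                  # true coefficients
theta_hat = [t + 0.05 * random.gauss(0.0, 1.0) for t in theta]

norm_d = math.sqrt(sum(t * t for t in theta_hat[:d]))      # ||theta~_n||
c_d = 0.05                                                 # placeholder constant
theta_star = [(1.0 - c_d / norm_d) * t if j < d else t
              for j, t in enumerate(theta_hat)]

mse = lambda est: sum((e - t) ** 2 for e, t in zip(est, theta))
print(mse(theta_hat), mse(theta_star))    # shrinkage is typically no worse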
Now we define the estimate for S in (1). We set, for any a ≤ x ≤ b,

    S*_λ(x) = S*_λ(x_1) 1_{a≤x≤x_1} + Σ_{l=2}^n S*_λ(x_l) 1_{x_{l−1}<x≤x_l} .    (12)

We denote the difference of the quadratic risks of the estimates (12) and (11) by

    Δ_n(S) := E_S ‖S*_λ − S‖_n² − E_S ‖Ŝ_λ − S‖_n² .

The choice of the estimate (12) is motivated by the desire to control the quadratic risk.

Theorem 1. The estimate (12) outperforms the estimate (11) in mean square accuracy, i.e.

    sup_{S∈Σ_{L,N}} Δ_n(S) < −c²(d) .
4. Oracle inequalities
In order to obtain a good estimator, we have to specify a rule for choosing the weight vector λ ∈ Λ in (12). Obviously, the best way is to minimize the empirical squared error with respect to λ:

    Err_n(λ) = ‖S*_λ − S‖_n² → min .

Making use of (12) and the Fourier transformation of S, we obtain

    Err_n(λ) = Σ_{j=1}^n λ²(j) θ*²_{j,n} − 2 Σ_{j=1}^n λ(j) θ*_{j,n} θ_{j,n} + Σ_{j=1}^n θ²_{j,n} .
Since the coefficients θ_{j,n} are unknown, we need to replace the terms θ*_{j,n} θ_{j,n} by some estimator, which we choose as

    θ̃_{j,n} = θ̂_{j,n} θ*_{j,n} − (b − a)/n s_{j,n}  with  s_{j,n} = (b − a)/n Σ_{l=1}^n σ_l² φ_j²(x_l) .
One has to pay a penalty for this substitution in the empirical squared error. Finally, we define the cost function as

    J_n(λ) = Σ_{j=1}^n λ²(j) θ*²_{j,n} − 2 Σ_{j=1}^n λ(j) θ̃_{j,n} + ρ P_n(λ) ,

where the penalty term is defined as

    P_n(λ) = (b − a)/n Σ_{j=1}^n λ²(j) s_{j,n}

and 0 < ρ < 1 is some positive constant which will be chosen later. We set

    λ̂ = argmin_{λ∈Λ} J_n(λ)

and define the estimator of S of the form (12):

    S*(x) = S*_{λ̂}(x)  for  a ≤ x ≤ b .    (13)
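A schematic Python version of the selection rule (ours; the candidate weights, the penalty constant rho and the synthetic coefficients are illustrative, not the set Λ of the paper):

# Schematic version of the selection rule: evaluate the cost J_n over a
# finite family Lambda of weight vectors and keep the minimizer. The
# weights, penalty constant rho and coefficients below are illustrative.
def J_n(lam, theta_tilde, theta_star2, s, n, a, b, rho=0.1):
    fit = sum(l * l * t2 for l, t2 in zip(lam, theta_star2))
    cross = 2.0 * sum(l * t for l, t in zip(lam, theta_tilde))
    pen = rho * (b - a) / n * sum(l * l * sj for l, sj in zip(lam, s))
    return fit - cross + pen

n, a, b = 16, 0.0, 1.0
theta_star2 = [1.0 / (j + 1) ** 2 for j in range(n)]   # squared coefficients
theta_tilde = [1.0 / (j + 1) ** 2 for j in range(n)]   # surrogate products
s = [1.0] * n
Lambda = [[1.0] * k + [0.0] * (n - k) for k in (2, 4, 8, 16)]
best = min(Lambda, key=lambda lam: J_n(lam, theta_tilde, theta_star2,
                                       s, n, a, b))
print(int(sum(best)))   # number of retained coefficients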
Now we obtain a nonasymptotic upper bound for the quadratic risk of the estimator (13).
Theorem 2. Let Λ ⊂ [0, 1]^n be any finite set such that the first d ≤ n components of each weight vector λ are equal to 1. Then, for any n ≥ 3 and 0 < ρ < 1/6, the estimator (13) satisfies the following oracle inequality:

    E_S ‖S* − S‖_n² ≤ (1 + 6ρ)/(1 − 6ρ) min_{λ∈Λ} E_S ‖Ŝ_λ − S‖_n² + Ψ_n(ρ)/n ,

where lim_{n→∞} Ψ_n(ρ)/n = 0.
Now we consider the estimation problem (1) via the model (9). We apply the estimation procedure (13) with the special weight set introduced in [3] to the regression scheme (9). Denoting S*_α = S*_{λ_α}, we set

    S* = S*_{α̂}  with  α̂ = argmin_{α∈A_ε} J_n(λ_α) .

Through Theorem 2 we obtain the following oracle inequality.

Theorem 3. Assume that S ∈ Σ_{L,N} and that the number of points n = n(T) in the model (9) satisfies (6). Then the procedure S* satisfies, for any T ≥ 32, the inequality

    R(S*, S) ≤ (1 + ρ)²(1 + 6ρ)/(1 − 6ρ) min_{α∈A_ε} R(S*_α, S) + B_T(ρ)/n ,

where lim_{T→∞} B_T(ρ)/n(T) = 0.
Acknowledgement
The results of Section 3 of this work are supported by RSF grant number 17-11-01049. The results of Section 4 are supported by the Ministry of Education and Science of the Russian Federation in the framework of research project no. 2.3208.2017/4.6.
References
1. Galtchouk, L.I. and Konev, V.V. (1997) On Sequential Estimation of Parameters in Continuous-Time Stochastic Regression. in: Yu.M. Kabanov, B.L. Rozovskii, A.N. Shiryaev (Eds), Statistics and Control of Stochastic Processes. The
Liptser Festschrift, World Scientific, 123-138.
2. Galtchouk, L.I. and Konev, V.V. (2001) On Sequential Estimation of Parameters in Semimartingale Regression Models
with Continuous Time Parameter. Annals of Statistics, 29, 1508-2035.
3. Galtchouk, L.I. and Pergamenshchikov, S.M. (2006) Asymptotically efficient sequential kernel estimates of the drift
coefficient in ergodic diffusion processes. Statistical Inference for Stochastic Processes, 9, 1-16.
4. Galtchouk, L.I. and Pergamenshchikov, S. M. (2007) Adaptive sequential estimation for ergodic diffusion processes
in quadratic metric. Part 1. Sharp non-asymptotic oracle inequalities. Prépublication 2007/06, IRMA de Strasbourg,
http://hal.archives-ouvertes.fr/hal-00177875/fr/
5. Ibragimov, I.A. and Hasminskii, R.Z. (1979) Statistical Estimation: Asymptotic Theory. Springer, New York.
6. Karatzas, I. and Shreve, S.E. (1998) Methods of Mathematical Finance. Springer, New York.
7. Konev, V.V. and Pergamenshchikov, S.M. (1992) On Truncated Sequential Estimation of the Parameters of Diffusion Processes. In: Methods of Economical Analysis, Central Economical and Mathematical Institute of Russian Academy of Science, Moscow, 3-31.
8. Konev V., Pergamenshchikov, S. and Pchelintsev, E. (2014) Estimation of a regression with the pulse type noise from
discrete data. Theory Probab. Appl., 58 (3), 442–457.
9. Kutoyants, Yu. (2004) Statistical inference for ergodic diffusion processes. Springer-Verlag, London.
10. Pchelintsev E. (2013) Improved estimation in a non-Gaussian parametric regression. Stat. Inference Stoch. Process., 16
(1), 15 – 28.
11. Pchelintsev E., Pchelintsev V. and Pergamenshchikov S. (2017) Improved robust model selection methods for the Lévy
nonparametric regression in continuous time. Preprint https://arxiv.org/submit/2029866, 1 – 32.