Delete txt
This view is limited to 50 files because it contains too many changes.
- txt/2101.04493.txt +0 -414
- txt/2101.07621.txt +0 -1082
- txt/2102.04993.txt +0 -1202
- txt/2103.02372.txt +0 -655
- txt/2103.10051.txt +0 -435
- txt/2103.13922.txt +0 -1473
- txt/2103.14651.txt +0 -1336
- txt/2104.03109.txt +0 -919
- txt/2104.04958.txt +0 -0
- txt/2104.10602.txt +0 -840
- txt/2105.00696.txt +0 -0
- txt/2105.06948.txt +0 -0
- txt/2105.11519.txt +0 -0
- txt/2105.11780.txt +0 -644
- txt/2105.12205.txt +0 -554
- txt/2106.07286.txt +0 -970
- txt/2106.08858.txt +0 -901
- txt/2106.09564.txt +0 -366
- txt/2106.12269.txt +0 -827
- txt/2106.14625.txt +0 -752
- txt/2107.03444.txt +0 -1256
- txt/2107.04240.txt +0 -0
- txt/2107.13662.txt +0 -598
- txt/2107.14072.txt +0 -473
- txt/2108.06156.txt +0 -1272
- txt/2109.02417.txt +0 -514
- txt/2109.04223.txt +0 -998
- txt/2109.05783.txt +0 -15
- txt/2109.11406.txt +0 -0
- txt/2110.10486.txt +0 -1576
- txt/2110.12687.txt +0 -427
- txt/2111.01203.txt +0 -0
- txt/2111.09172.txt +0 -536
- txt/2111.13236.txt +0 -0
- txt/2112.01591.txt +0 -0
- txt/2112.03557.txt +0 -216
- txt/2201.00132.txt +0 -617
- txt/2201.01661.txt +0 -1600
- txt/2201.02942.txt +0 -1740
- txt/2202.08613.txt +0 -396
- txt/2202.10177.txt +0 -786
- txt/2202.13056.txt +0 -0
- txt/2203.01944.txt +0 -954
- txt/2203.10094.txt +0 -497
- txt/2203.10517.txt +0 -1153
- txt/2204.03592.txt +0 -0
- txt/2204.14047.txt +0 -944
- txt/2205.05391.txt +0 -362
- txt/2205.08365.txt +0 -823
- txt/2205.10663.txt +0 -265
txt/2101.04493.txt
DELETED
@@ -1,414 +0,0 @@
PVDECONV: POINT-VOXEL DECONVOLUTION FOR AUTOENCODING CAD CONSTRUCTION IN 3D

Kseniya Cherenkova*†, Djamila Aouada*, Gleb Gusev†
*SnT, University of Luxembourg   †Artec3D

arXiv:2101.04493v1 [cs.CV] 12 Jan 2021

ABSTRACT

We propose a Point-Voxel DeConvolution (PVDeConv) module for a 3D data autoencoder. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the pairs of 3D scans and CAD models. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time as compared to state-of-the-art models for 3D data generation.

Index Terms — CC3D, point cloud autoencoder, CAD models generation, Scan2CAD.
1. INTRODUCTION

Recently, deep learning (DL) for 3D data analysis has seen a boost in successful and competitive solutions for segmentation, detection and classification [1], and in real-life applications such as self-driving, robotics, medicine, and augmented reality. In industrial manufacturing, 3D scanning of fabricated parts is an essential step of product quality control, where the 3D scans of real objects are compared to the original Computer Aided Design (CAD) models. While most consumer solutions for 3D scanning are good enough for capturing the general shape of an object, artifacts can be introduced in the parts of the object that are physically inaccessible for 3D scanning, resulting in the loss of sharp features and fine details.

This paper focuses on recovering scanning artifacts in an autoencoding, data-driven manner. In addition to presenting a new point cloud autoencoder, we introduce a new 3D dataset, referred to as CC3D, which stands for CAD Construction in 3D.

Fig. 1. Examples of CC3D data: From left to right, CAD models, corresponding 3D scans, 10k input point clouds and results of the proposed autoencoder.

We further provide an analysis focused on real 3D scanned data, keeping in mind real-world constraints, i.e., variability, complexity, artifacts, memory and speed requirements. The first two columns in Fig. 1 give some examples from the CC3D data: the CAD model and its 3D scanned version in triangular mesh format. While the most recent existing solutions [2, 3, 4, 5] for 3D data autoencoders mostly focus on a low-resolution data configuration (approximately 2500 points), we see it as more beneficial for real data to experiment at higher resolution. This is what brings the important 3D object details into the big-data learning perspective.

Several publicly available datasets related to CAD modelling, such as ModelNet [6], ShapeNet [7], and ABC [8], have been released in the last years. A summary of the features they offer can be found in Table 1. These datasets have mainly boosted research on deep learning on 3D point clouds.

Similarly, our CC3D dataset should support research efforts in addressing real-world challenges. Indeed, this dataset provides various 3D scanned objects with their ground-truth CAD models. The models collected in the CC3D dataset are not restricted to any object category and/or complexity. The 3D scans offer challenging cases of missing data, smoothed geometry, and fusion artefacts in the form of varying protrusions and swept holes. Moreover, the resolution of the 3D scans is typically high, with more than 100k faces in the mesh.

In summary, the contributions of this paper include: (1) a 3D dataset, CC3D, a collection of 50k+ aligned pairs of meshes, a CAD model and its virtually 3D scanned counterpart with corresponding scanning artifacts; (2) a CC3D autoencoder architecture on 10k point clouds learned from CC3D data; (3) a Point-Voxel DeConvolution (PVDeConv) block for the decoder part of our model, combining point features on coarse and fine levels of the data.

The remainder of the paper is organized as follows: Section 2 reviews relevant state-of-the-art works in 3D data autoencoding. In Section 3 we give a brief overview of the core components our work is built upon. Section 4 describes the main contributions of this paper in more detail. In Section 5 the results and a comparison with related methods are presented. Section 6 gives the conclusions.
2. RELATED WORK

The choice of DL architecture and 3D data representation is usually defined by existing practices and available datasets for learning [9]. Voxel-based representations pioneered 3D data analysis, applying 3D Convolutional Neural Networks (CNN) directly on a regular voxel grid [10]. Despite models improved in terms of memory consumption, e.g., [11], their inability to resolve fine object details remains the main limiting factor in practical use.

Other works introduce convolutions directly on graph structures, e.g., [12]. They attempt to generalize DL models to non-Euclidean domains such as graphs and manifolds [13], and offer analogs of pooling/unpooling operations as well [14]. However, they are not applicable for learning on real unconstrained data, as they either require meshes to be registered to a common template, deal inefficiently with meshes of up to several thousand faces, or are specific to segmentation or classification tasks only.

Recent advances in developing efficient architectures for 3D data analysis are mainly related to point cloud based methods [15, 16]. Decoders [17, 2, 18, 3, 19] have made point clouds a highly promising representation for 3D object generation and completion using neural networks. Successful works on generative adversarial networks (GANs) (e.g., [20]) show the applicability of different GAN models operating on raw point clouds.

In this paper, we comply with the common autoencoder approach, i.e., we use a point cloud encoder to embed the point cloud input, and design a decoder to generate a complete point cloud from the embedding of the encoder.

3. BACKGROUND AND MOTIVATION
We herein present the fundamental building blocks that comprise the core of this paper, namely, the point cloud, the metric on it, and the DL backbone. All together, these elements make the CC3D autoencoder perform efficiently on high-resolution 3D data.

A point cloud S can be represented as S = {(p_k, f_k)}, where each p_k holds the 3D coordinates of the k-th input point, f_k is the feature corresponding to it, and the size of f_k defines the dimensionality of the point feature space. Note that while it is straightforward to include auxiliary information (such as point normals) in our architecture, in this paper we exclusively employ the xyz coordinates of p_k as the input data.

We build on Point-Voxel Convolution (PVConv), a memory-efficient architecture for learning on 3D point clouds presented in [21]. To the best of our knowledge, this is the first development of an autoencoder based on PVCNN as the encoder. Briefly, PVConv combines the fine-grained feature transformation on points with the coarse-grained neighboring feature aggregation in the voxel space of the point cloud. Three basic operations are performed in the coarse branch, namely voxelization, followed by voxel-based 3D convolution, and devoxelization. The point-based branch aggregates the features for each individual point with a multilayer perceptron (MLP), providing high-resolution details. The features from both branches are aggregated into a hidden feature representation.

The formulation of the convolution in both the voxel-based and point-based cases is the following:

    y_k = Σ_{x_i ∈ N(x_k)} K(x_k, x_i) F(x_i),                                        (1)

where, for each center x_k and its neighborhood N(x_k), the neighboring features F(x_i) are convolved with the kernel K(x_k, x_i). The choice of PVCNN is due to its efficiency in training on high-resolution 3D data. Indeed, this makes it a good candidate for working with real-life data. As stated in [21], PVConv combines the advantages of point-based methods, reducing memory consumption, with those of voxel-based methods, improving data locality and regularity.

For the loss function, the Chamfer distance [22] is used to measure the quality of the autoencoder. It is a differentiable metric, invariant to permutation of points in both the ground-truth and target point clouds, S_G and S, respectively. It is defined as follows:

    d_CD(S, S_G) = Σ_{x ∈ S} min_{y ∈ S_G} ‖x − y‖² + Σ_{y ∈ S_G} min_{x ∈ S} ‖x − y‖².      (2)

As follows from its definition, no correspondence or equal number of points in S and S_G is required for the computation of d_CD, making it possible to work with different resolutions for the encoder and decoder.
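As an aside for readers who want to reproduce this loss, a minimal NumPy sketch of Eq. (2) is given below; it is our own illustration, assumes the two point clouds are given as (num_points, 3) arrays, and omits the batching and averaging conventions of an actual training pipeline.

```python
import numpy as np

def chamfer_distance(S, S_G):
    """Chamfer distance of Eq. (2): nearest-neighbour squared distances
    summed in both directions; S and S_G may have different sizes."""
    # pairwise squared distances, shape (|S|, |S_G|)
    d2 = np.sum((S[:, None, :] - S_G[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
```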
4. PROPOSED AUTOENCODING OF 3D SCANS TO CAD MODELS

This paper studies the problem of 3D point cloud autoencoding in a deep learning setup, and in particular the choice of the architecture of a 3D point cloud decoder for efficient reconstruction of point clouds sampled from corresponding pairs of 3D scans and CAD models.

4.1. CC3D dataset

The CC3D dataset of 3D CAD models was collected from a free online service for sharing CAD designs [23]. In total, the collected dataset contains 50k+ models in STEP format, unrestricted to any category, with complexity varying from simple to highly detailed designs (see examples in Fig. 1). These CAD models are converted to meshes, and each mesh was virtually scanned using a proprietary 3D scanning pipeline developed by Artec3D [24]. The typical size of the resulting scans is in the order of 100K points and faces, while the meshes converted from CAD models are usually more than an order of magnitude lighter.

In order to illustrate the uniqueness of our dataset, Table 1 summarizes the available CAD-like datasets and the semantic information they provide. Unlike ShapeNet [7] and ModelNet [6], the CC3D dataset is a collection of 3D objects unrestricted to any category, with complexity varying from very basic to highly detailed models. One of the most recent datasets, the ABC dataset [8], would have been a valuable collection for our task due to its size if it had contained 3D scanned models alongside the ground-truth CAD objects. The availability of CAD-3D scan pairings, the high resolution of the meshes, and the variability of the models make the CC3D dataset stand out among the alternatives. The CC3D dataset will be shared with the research community.

Table 1. Summary of datasets with CAD-like data (✓ = provided, ✗ = not provided). Note that only ABC and CC3D offer CAD models in b-rep (boundary representation) format in addition to triangular meshes.

    Dataset            #Models   CAD   Curves   Patches   Semantics   Categories   3D scan
    CC3D (ours)        50k+      ✓     ✗        ✗         ✗           ✗            ✓
    ABC [8]            1M+       ✓     ✓        ✓         ✗           ✗            ✗
    ShapeNet [7]       3M+       ✗     ✗        ✗         ✓           ✓            ✗
    ShapeNetCore [7]   51k+      ✗     ✗        ✗         ✓           ✓            ✗
    ShapeNetSem [7]    12k       ✗     ✗        ✗         ✓           ✓            ✗
    ModelNet [6]       151k+     ✗     ✗        ✗         ✓           ✓            ✗

4.2. CC3D Autoencoder

Our decoder is a modified version of PVCNN, where we cut the final classification/segmentation layer. The proposed PVDeConv structure is depicted in Fig. 2. The fine point-based branch is implemented as a shared transposed MLP, allowing the same number of points to be maintained throughout the autoencoder, while the coarse branch allows the features to be aggregated at different voxel grid resolutions, thus modelling the neighborhood information at different scales.

Fig. 2. Overview of the CC3D autoencoder architecture and the PVDeConv module. The features from the coarse voxel-based and fine point-based branches are fused to be unwrapped to the output point cloud.

The PVDeConv block consists of 3D volumetric deconvolutions to aggregate the features, with dropout, batch normalization and a nonlinear activation function after each 3D deconvolution. Features from both branches are fused at the final level and passed through an MLP to produce the output points.

The transposed 3D convolution operator used in PVDeConv multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature channels. This operation can be seen as the gradient of a 3D convolution, although it is not an actual deconvolution operation.
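The paper describes the block only at this level of detail, so the following PyTorch sketch is purely illustrative: it pairs a coarse voxel branch (transposed 3D convolution, batch normalization, activation, dropout) with a fine point branch (shared MLP) and fuses the two, as outlined above. The kernel size, the choice of activation, and the `devoxelize` interpolation callable are our assumptions, not the authors' implementation.

```python
import torch.nn as nn

class PVDeConvBlock(nn.Module):
    """Illustrative point-voxel deconvolution block: a coarse voxel branch
    (ConvTranspose3d + BN + activation + dropout) fused with a fine point
    branch (shared MLP implemented as 1x1 Conv1d). Layer sizes are
    placeholders, not the paper's exact configuration."""
    def __init__(self, in_ch, out_ch, dropout=0.1):
        super().__init__()
        self.voxel_branch = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.LeakyReLU(0.2),
            nn.Dropout(dropout),
        )
        self.point_branch = nn.Sequential(  # shared MLP over points
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def forward(self, voxel_feats, point_feats, devoxelize):
        # voxel_feats: (B, C, R, R, R); point_feats: (B, C, N)
        coarse = self.voxel_branch(voxel_feats)   # upsampled voxel grid
        fine = self.point_branch(point_feats)     # per-point features
        # `devoxelize` (assumed) interpolates voxel features back to the N points
        return devoxelize(coarse) + fine, coarse
```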
5. EXPERIMENTS

We evaluate the proposed autoencoder by training first on our CC3D dataset, and then on the ShapeNetCore [7] dataset.

5.1. Training on CC3D

Dataset. The CC3D dataset is randomly split into three non-intersecting folds: 80% for training, 10% for testing and 10% for validation. Ground-truth point clouds are generated by uniformly sampling N = 10k points on the CAD model surfaces, while the input point clouds are sampled in the same manner from the corresponding 3D scans of the models. The data is normalized to (0, 1).

Implementation Details. The encoder follows the structure in [21]; the coarse blocks are ((64, 1, 32), (64, 2, 16), (128, 1, 16), 1024), where the triplets describe a voxel-based convolutional PVConv block in terms of number of channels, number of blocks, and voxel resolution. The last number describes the resulting embedding size for the coarse part and, combined with the shared MLP cloud blocks = (256, 128), gives a feature embedding size of 1472. The decoder coarse blocks are ((128, 1, 16), (64, 2, 16), (64, 1, 32), 128), where the triplets are PVDeConv blocks concatenated with decoder point-based fine blocks of size (256, 128).
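For readability, the configuration above can be restated as plain Python data; the variable names are ours and only mirror the numbers given in the text.

```python
# (channels, number of blocks, voxel resolution) per PVConv/PVDeConv block
encoder_coarse_blocks = ((64, 1, 32), (64, 2, 16), (128, 1, 16))
encoder_coarse_embedding = 1024
encoder_point_mlp = (256, 128)     # shared MLP "cloud" blocks
feature_embedding_size = 1472      # as stated in the text

decoder_coarse_blocks = ((128, 1, 16), (64, 2, 16), (64, 1, 32))
decoder_coarse_embedding = 128
decoder_point_mlp = (256, 128)     # point-based fine blocks
```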
Training setup. The autoencoder is trained with the Chamfer loss for 50 epochs on two Quadro P6000 GPUs with batch size 80 in data-parallel mode. The overall training takes approximately 15 hours. The best model is chosen based on the validation set.

Evaluation. The qualitative results of our autoencoder on the CC3D data are presented in Fig. 3. We notice that the fine details are captured in these challenging cases.

Fig. 3. Results of our autoencoder on CC3D data with 10k points for input and output. The left of each pair of results is the input point cloud of 10k points, the right is the autoencoder reconstruction of 10k points.

Table 2. CC3D autoencoder results on the ShapeNetCore dataset: comparison against previous works (N = 2.5k).

    Method            Chamfer distance (×10⁻³)
    AtlasNet [2]      1.769
    FoldingNet [17]   1.648
    PCN [19]          1.472
    TopNet [3]        0.972
    Ours              0.804

5.2. Training on ShapeNetCore

To demonstrate the competitive performance of our CC3D autoencoder, we train it on the ShapeNetCore dataset following the train/test/val split of [3], with the number of sampled points N = 2500 for a fair comparison. Since we do not have scanned models for the ShapeNet data, we add 3% Gaussian noise to each point's location. The rest of the training setup is replicated from the CC3D configuration. The final metric is the mean Chamfer distance averaged per model across all classes. The numbers for the other methods are reported from [3]. The results of the evaluation of our method against state-of-the-art methods are shown in Table 2. We note that our result surpasses the previous works by a significant margin. Qualitative examples on ShapeNetCore data are given in Fig. 5. The distribution of distances given in Fig. 4 implies that the CC3D dataset presents advanced challenges for our autoencoder, where it performs at 1.26×10⁻³ average Chamfer distance, while it reaches 0.804×10⁻³ on ShapeNetCore.

Fig. 4. Chamfer distance distribution for our autoencoder. On the test set of CC3D for point clouds of size N = 10k, the mean Chamfer distance is 1.26×10⁻³ with a standard deviation of 0.794×10⁻³. On the ShapeNetCore test set with N = 2.5k, it is 0.804×10⁻³ with a standard deviation of 0.766×10⁻³.

Fig. 5. Results of our autoencoder on ShapeNetCore data. The top row is the input 2.5k point clouds, the bottom row is the reconstruction of our autoencoder.

6. CONCLUSIONS

In this work, we proposed a Point-Voxel Deconvolution (PVDeConv) block for fast and efficient deconvolution on 3D point clouds. It was used in combination with a new dataset, CC3D, for autoencoding 3D scans to their corresponding synthetic CAD models. The CC3D dataset offers pairs of CAD models and 3D scans, totaling 50k+ objects. Our CC3D autoencoder on point clouds is memory and time efficient. Furthermore, it demonstrates superior results compared to existing methods on ShapeNet data. As future work, different types of losses will be investigated to improve the sharpness of edges, such as the quadric loss [5]. Testing variants of the CC3D autoencoder with different configurations of stacked PVConv and PVDeConv layers will also be considered. Finally, we believe that the CC3D dataset itself could assist in real 3D scanned data analysis with deep learning methods.

7. REFERENCES
[1] Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun, "Deep learning for 3D point clouds: A survey," arXiv:1912.12033, 2019.
[2] Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry, "AtlasNet: A papier-mâché approach to learning 3D surface generation," arXiv:1802.05384, 2018.
[3] Lyne P. Tchapmi, Vineet Kosaraju, S. Hamid Rezatofighi, Ian Reid, and Silvio Savarese, "TopNet: Structural point cloud decoder," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[4] Isaak Lim, Moritz Ibing, and Leif Kobbelt, "A convolutional decoder for point clouds using adaptive instance normalization," arXiv:1906.11478, 2019.
[5] Nitin Agarwal, Sung-Eui Yoon, and M. Gopi, "Learning embedding of 3D models with quadric loss," arXiv:1907.10250, 2019.
[6] Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and J. Xiao, "3D ShapeNets: A deep representation for volumetric shapes," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1912–1920.
[7] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu, "ShapeNet: An information-rich 3D model repository," 2015.
[8] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo, "ABC: A big CAD model dataset for geometric deep learning," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[9] Eman Ahmed, Alexandre Saint, Abd El Rahman Shabayek, Kseniya Cherenkova, Rig Das, Gleb Gusev, Djamila Aouada, and Björn E. Ottersten, "Deep learning advances on different 3D data representations: A survey," arXiv:1808.01462, 2018.
[10] D. Maturana and S. Scherer, "VoxNet: A 3D convolutional neural network for real-time object recognition," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2015, pp. 922–928.
[11] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger, "OctNet: Learning deep 3D representations at high resolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[12] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black, "Generating 3D faces using convolutional mesh autoencoders," in European Conference on Computer Vision (ECCV), 2018, pp. 725–741.
[13] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, "Geometric deep learning: Going beyond Euclidean data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, July 2017.
[14] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or, "MeshCNN: A network with an edge," ACM Transactions on Graphics (TOG), vol. 38, no. 4, p. 90, 2019.
[15] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," arXiv:1706.02413, 2017.
[16] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon, "Dynamic graph CNN for learning on point clouds," ACM Trans. Graph., vol. 38, no. 5, Oct. 2019.
[17] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian, "FoldingNet: Interpretable unsupervised learning on 3D point clouds," arXiv:1712.07262, 2017.
[18] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari, "3D point-capsule networks," arXiv:1812.10775, 2018.
[19] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert, "PCN: Point completion network," in 3D Vision (3DV), 2018 International Conference on, 2018.
[20] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabás Póczos, and Ruslan Salakhutdinov, "Point cloud GAN," arXiv:1810.05795, 2018.
[21] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han, "Point-Voxel CNN for efficient 3D deep learning," in Advances in Neural Information Processing Systems, 2019.
[22] Haoqiang Fan, Hao Su, and Leonidas Guibas, "A point set generation network for 3D object reconstruction from a single image," July 2017, pp. 2463–2471.
[23] "3D ContentCentral," https://www.3dcontentcentral.com, accessed 2020-02-02.
[24] "Artec3D," https://www.artec3d.com/, accessed 2020-02-02.
txt/2101.07621.txt
DELETED
@@ -1,1082 +0,0 @@
Trading Transforms of Non-weighted Simple Games and Integer Weights of Weighted Simple Games∗

Akihiro Kawana†   Tomomi Matsui‡

June 1, 2021

arXiv:2101.07621v2 [cs.GT] 29 May 2021

Abstract

This study investigates simple games. A fundamental research question in this field is to determine necessary and sufficient conditions for a simple game to be a weighted majority game. Taylor and Zwicker (1992) showed that a simple game is non-weighted if and only if there exists a trading transform of finite size. They also provided an upper bound on the size of such a trading transform, if it exists. Gvozdeva and Slinko (2011) improved that upper bound; their proof employed a property of linear inequalities demonstrated by Muroga (1971). In this study, we provide a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof employs Farkas' lemma (1894), and yields an improved upper bound on the size of a trading transform.

We also discuss an integer-weight representation of a weighted simple game, improving the bounds obtained by Muroga (1971). We show that our bound on the quota is tight when the number of players is less than or equal to five, based on the computational results obtained by Kurz (2012).

Furthermore, we discuss the problem of finding an integer-weight representation under the assumption that we have minimal winning coalitions and maximal losing coalitions. In particular, we show the performance of a rounding method.

Lastly, we address roughly weighted simple games. Gvozdeva and Slinko (2011) showed that a given simple game is not roughly weighted if and only if there exists a potent certificate of non-weightedness. We give an upper bound on the length of a potent certificate of non-weightedness. We also discuss an integer-weight representation of a roughly weighted simple game.

∗ A preliminary version of this paper was presented at the Seventh International Workshop on Computational Social Choice (COMSOC-2018), Rensselaer Polytechnic Institute, Troy, NY, USA, 25–27 June 2018.
† Graduate School of Engineering, Tokyo Institute of Technology
‡ Graduate School of Engineering, Tokyo Institute of Technology
1 Introduction

A simple game consists of a pair G = (N, W), where N is a finite set of players, and W ⊆ 2^N is an arbitrary collection of subsets of N. Throughout this paper, we denote |N| by n. Usually, the property

    (monotonicity): if S′ ⊇ S ∈ W, then S′ ∈ W,                                    (1)

is assumed. Subsets in W are called winning coalitions. We denote 2^N \ W by L, and subsets in L are called losing coalitions. A simple game (N, W) is said to be weighted if there exist a weight vector w ∈ R^N and q ∈ R satisfying the following property:

    (weightedness): for any S ⊆ N, S ∈ W if and only if Σ_{i∈S} w_i ≥ q.           (2)

Previous research established necessary and sufficient conditions that guarantee the weightedness of a simple game. [Elgot, 1961] and [Chow, 1961] investigated the theory of threshold logic and stated the condition for weightedness in terms of asummability. [Muroga, 1971] proved the sufficiency of asummability based on the theory of linear inequality systems and discussed some variations of these results in the case of a few variables. [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999] obtained necessary and sufficient conditions independently in terms of a trading transform. A trading transform of size j is a coalition sequence (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j), which may contain repetitions of coalitions, satisfying the condition: ∀p ∈ N, |{i | p ∈ X_i}| = |{i | p ∈ Y_i}|. A simple game is called k-trade robust if there is no trading transform of size j satisfying 1 ≤ j ≤ k, X_1, X_2, ..., X_j ∈ W, and Y_1, Y_2, ..., Y_j ∈ L. A simple game is called trade robust if it is k-trade robust for all positive integers k.
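As a concrete restatement of this definition (our own illustration, not part of the paper), the following check tests whether a pair of coalition sequences forms a trading transform.

```python
from collections import Counter

def is_trading_transform(X, Y):
    """Check the defining condition of a trading transform
    (X_1,...,X_j; Y_1,...,Y_j): every player appears in the X_i's
    exactly as often as in the Y_i's. Coalitions are given as sets."""
    if len(X) != len(Y):
        return False
    count_X = Counter(p for S in X for p in S)
    count_Y = Counter(p for S in Y for p in S)
    return count_X == count_Y
```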
Taylor and Zwicker showed that a given simple game G with n players is weighted if and only if G is 2^{2^n}-trade robust. In 2011, [Gvozdeva and Slinko, 2011] showed that a given simple game G is weighted if and only if G is (n+1)n^{n/2}-trade robust. [Freixas and Molinero, 2009b] proposed a variant of trade robustness, called invariant-trade robustness, which determines whether a simple game is weighted. The relations between the results in threshold logic and simple games are clarified in [Freixas et al., 2016, Freixas et al., 2017].

In Section 2, we show that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value of determinants of (n+1)×(n+1) 0–1 matrices. It is well known that α_{n+1} ≤ (n+2)^{(n+2)/2} (1/2)^{n+1}.

Our definition of a weighted simple game allows for arbitrary real weights. However, any weighted simple game can be represented by integer weights (e.g., see [Freixas and Molinero, 2009a]). An integer-weight representation of a weighted simple game consists of an integer vector w ∈ Z^N and some q ∈ Z satisfying the weightedness property (2). [Isbell, 1956] found an example of a weighted simple game with 12 players without a unique minimum-sum integer-weight representation. Examples for 9, 10, or 11 players are given in [Freixas and Molinero, 2009a, Freixas and Molinero, 2010]. In the field of threshold logic, examples of threshold functions requiring large weights are discussed by [Myhill and Kautz, 1961, Muroga, 1971, Håstad, 1994]. Some previous studies enumerate (minimal) integer-weight representations of simple games with a small number of players (e.g., [Muroga et al., 1962, Winder, 1965, Muroga et al., 1970, Krohn and Sudhölter, 1995]). For the case of n = 9 players, refer to [Kurz, 2012]. In general, [Muroga, 1971] (proof of Theorem 9.3.2.1) showed that (under the monotonicity property (1) and ∅ ∉ W ∋ N) every weighted simple game has an integer-weight representation satisfying 0 ≤ w_i ≤ α_n ≤ (n+1)^{(n+1)/2} (1/2)^n (∀i ∈ N) and 0 ≤ q ≤ nα_n ≤ n(n+1)^{(n+1)/2} (1/2)^n simultaneously. Here, α_n denotes the maximal value of determinants of n×n 0–1 matrices. [Wang and Williams, 1991] discussed Boolean functions that require more general surfaces to separate their true vectors from false vectors. [Hansen and Podolskii, 2015] investigates the complexity of computing Boolean functions by polynomial threshold functions. [Freixas, 2021] discusses a point-set-additive pseudo-weighting for a simple game, which assigns weights directly to coalitions.

In Section 3, we slightly improve Muroga's result and show that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w⊤) satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1 simultaneously. Based on the computational results of [Kurz, 2012], we also demonstrate the tightness of our bound on the quota when n ≤ 5.

For a family of minimal winning coalitions, [Peled and Simeone, 1985] proposed a polynomial-time algorithm for checking the weightedness of a given simple game. They also showed that, for weighted simple games represented by minimal winning coalitions, all maximal losing coalitions can be computed in polynomial time. When we have minimal winning coalitions and maximal losing coalitions, there exists a linear inequality system whose solution gives a weight vector w ∈ R^N and q ∈ R satisfying property (2). However, it is less straightforward to find an integer-weight representation, as the problem transforms from linear programming to integer programming.

In Section 4, we address the problem of finding an integer-weight representation under the assumption that we have minimal winning coalitions and maximal losing coalitions. We show that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by at most (2 − √2)n + (√2 − 1).

A simple game G = (N, W) is called roughly weighted if there exist a non-negative vector w ∈ R^N_+ and a real number q ∈ R, not all equal to zero ((q; w⊤) ≠ 0⊤), such that for any S ⊆ N the condition Σ_{i∈S} w_i < q implies S ∉ W, and Σ_{i∈S} w_i > q implies S ∈ W. We say that (q; w⊤) is a rough voting representation for G. Roughly weighted simple games were initially introduced by [Baugh, 1970]. [Muroga, 1971] (p. 208) studied them under the name of pseudothreshold functions. [Taylor and Zwicker, 1999] discussed roughly weighted simple games and constructed several examples. [Gvozdeva and Slinko, 2011] developed a theory of roughly weighted simple games. A trading transform (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j) with all coalitions X_1, X_2, ..., X_j winning and Y_1, Y_2, ..., Y_j losing is called a certificate of non-weightedness. This certificate is said to be potent if the grand coalition N is among X_1, X_2, ..., X_j and the empty coalition is among Y_1, Y_2, ..., Y_j. [Gvozdeva and Slinko, 2011] showed that, under the monotonicity property (1) and ∅ ∉ W ∋ N, a given simple game G is not roughly weighted if and only if there exists a potent certificate of non-weightedness whose length is less than or equal to (n+1)n^{n/2}. Further research on roughly weighted simple games appears in [Gvozdeva et al., 2013, Freixas and Kurz, 2014, Hameed and Slinko, 2015].

In Section 5, we show that (under the monotonicity property (1) and ∅ ∉ W ∋ N) the length of a potent certificate of non-weightedness is less than or equal to 2α_{n+1}, if it exists. We also show that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w⊤) of rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 0 ≤ Σ_{i∈N} w_i ≤ 2α_n.
2 Trading Transforms of Non-weighted Simple Games

In this section, we discuss the size of a trading transform that guarantees the non-weightedness of a given simple game. Throughout this section, we do not need to assume the monotonicity property (1). First, we introduce a linear inequality system for determining the weightedness of a given simple game. For any nonempty family of player subsets ∅ ≠ 𝒩 ⊆ 2^N, we introduce a 0–1 matrix A(𝒩) = (a(𝒩)_{Si}) whose rows are indexed by subsets in 𝒩 and whose columns are indexed by players in N, defined by

    a(𝒩)_{Si} = 1 if i ∈ S ∈ 𝒩, and 0 otherwise.

A given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P1:   (  A(W)    1    0 )   (  w )
          ( −A(L)   −1   −1 ) · ( −q )  ≥  0,        ε > 0,
                                (  ε )

where 0 (1) denotes a zero vector (all-one vector) of an appropriate dimension.
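The feasibility question behind P1 can be handed to an off-the-shelf LP solver. The sketch below is our own illustration and is not part of the paper; it normalizes the strict inequality of P1 to a gap of 1, which is equivalent up to scaling of (w, q, ε).

```python
import numpy as np
from scipy.optimize import linprog

def is_weighted(n, winning, losing):
    """Feasibility test: look for (w, q) with sum_{i in S} w_i >= q for every
    winning S and <= q - 1 for every losing S. `winning`/`losing` are lists
    of coalitions given as sets of players 0..n-1."""
    def row(S):
        a = np.zeros(n + 1)
        a[list(S)] = 1.0
        a[n] = -1.0                  # coefficient of q
        return a
    A_ub = np.array([-row(S) for S in winning] +   # -(sum w_i - q) <= 0
                    [row(S) for S in losing])      #   sum w_i - q  <= -1
    b_ub = np.array([0.0] * len(winning) + [-1.0] * len(losing))
    res = linprog(c=np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))
    return res.status == 0           # 0 = feasible, 2 = infeasible
```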
Farkas' Lemma [Farkas, 1902] states that P1 is infeasible if and only if the following system is feasible:

    D1:   ( A(W)⊤   −A(L)⊤ )               (  0 )
          (   1⊤      −1⊤  ) · ( x )   =   (  0 ),       x ≥ 0,  y ≥ 0.
          (   0⊤      −1⊤  )   ( y )       ( −1 )

For simplicity, we denote D1 by A₁z = c, z ≥ 0, where A₁ is the coefficient matrix above, z = (x⊤, y⊤)⊤, and c = (0⊤, 0, −1)⊤.

Subsequently, we assume that D1 is feasible. Let Ã₁z = c̃ be a linear equality system obtained from A₁z = c by repeatedly removing redundant equalities. A column submatrix B̂ of Ã₁ is called a basis matrix if B̂ is a square invertible matrix. Variables corresponding to the columns of B̂ are called basic variables, and J_B̂ denotes an index set of basic variables. A basic solution with respect to B̂ is a vector z defined by z_i = ẑ_i (i ∈ J_B̂) and z_i = 0 (i ∉ J_B̂), where ẑ is the vector of basic variables satisfying ẑ = B̂⁻¹c̃. It is well known that if the linear inequality system D1 is feasible, then it has a basic feasible solution.

Let z′ be a basic feasible solution of D1 with respect to a basis matrix B. By Cramer's rule, z′_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is the matrix formed by replacing the i-th column of B by c̃. Because B_i is an integer matrix, det(B)z′_i = det(B_i) is an integer for any i ∈ J_B. Let (x′⊤, y′⊤)⊤ be the vector corresponding to z′, and let (x*⊤, y*⊤) = |det(B)|(x′⊤, y′⊤). Cramer's rule states that both x* and y* are integer vectors. The pair of vectors x* and y* satisfies the following conditions:

    A(W)⊤x* − A(L)⊤y* = |det(B)| (A(W)⊤x′ − A(L)⊤y′) = |det(B)| · 0 = 0,
    Σ_{S∈W} x*_S − Σ_{S∈L} y*_S = |det(B)| (1⊤x′ − 1⊤y′) = |det(B)| · 0 = 0,
    Σ_{S∈L} y*_S = |det(B)| 1⊤y′ = |det(B)|,
    x* = |det(B)| x′ ≥ 0,  and  y* = |det(B)| y′ ≥ 0.

Next, we construct a trading transform corresponding to the pair x* and y*. Let X = (X_1, X_2, ..., X_{|det(B)|}) be a sequence of winning coalitions, where each winning coalition S ∈ W appears in X exactly x*_S times. Similarly, we introduce a sequence Y = (Y_1, Y_2, ..., Y_{|det(B)|}), where each losing coalition S ∈ L appears in Y exactly y*_S times. The above equalities imply that (X; Y) is a trading transform of size |det(B)|. Therefore, we have shown that if D1 is feasible, then a given simple game G = (N, W) is not |det(B)|-trade robust.

Finally, we provide an upper bound on |det(B)|. Let α_n be the maximum of the determinants of n×n 0–1 matrices. For any n×n 0–1 matrix M, it is easy to show that det(M) ≥ −α_n by swapping two rows of M (when n ≥ 2). If a column of B is indexed by a component of x (i.e., indexed by a winning coalition), then each component of the column is either 0 or 1. Otherwise, a column of B is indexed by a component of y (i.e., indexed by a losing coalition), and its components are either 0 or −1. Now, we apply elementary matrix operations to B (see Figure 1). For each column of B indexed by a component of y, we multiply the column by (−1). The resulting matrix, denoted by B′, is a 0–1 matrix satisfying |det(B)| = |det(B′)|.

Figure 1: Example of elementary matrix operations for D1.

    B  =  ( 0 0 1 1  0 −1 )        B′ =  ( 0 0 1 1 0 1 )
          ( 0 1 0 1  0  0 )              ( 0 1 0 1 0 0 )
          ( 1 0 0 1  0 −1 )              ( 1 0 0 1 0 1 )
          ( 1 1 1 0 −1 −1 )              ( 1 1 1 0 1 1 )
          ( 1 1 1 1 −1 −1 )              ( 1 1 1 1 1 1 )
          ( 0 0 0 0 −1 −1 )              ( 0 0 0 0 1 1 )

As B is a submatrix of A₁, the number of rows (columns) of B, denoted by n′, is less than or equal to n + 2. When n′ < n + 2, we obtain the desired result: |det(B)| = |det(B′)| ≤ α_{n′} ≤ α_{n+1}. If n′ = n + 2, then B has a row vector corresponding to the equality 1⊤x − 1⊤y = 0, which satisfies the condition that each component is either 1 or −1, and thus B′ has an all-one row vector. Lemma 2.1 (c1), which appears below, states that |det(B)| = |det(B′)| ≤ α_{n′−1} ≤ α_{n+1}.

Lemma 2.1. Let M be an n×n 0–1 matrix, where n ≥ 2.
(c1) If a row (column) vector of M is the all-one vector, then |det(M)| ≤ α_{n−1}.
(c2) If a row (column) vector of M is a 0–1 vector consisting of a unique 0-component and n−1 1-components, then |det(M)| ≤ 2α_{n−1}.

Proof of (c1). Assume that the first column of M is the all-one vector. We apply the following elementary matrix operations to M (see Figure 2). For each column of M except the first, if its first component is equal to 1, then we multiply the column by (−1) and add the all-one column vector. The obtained matrix, denoted by M′, is an n×n 0–1 matrix satisfying |det(M)| = |det(M′)|, and its first row is a unit vector. Thus, it is obvious that |det(M′)| ≤ α_{n−1}.

Figure 2: Example of elementary matrix operations for (c1).

    M  =  ( 1 1 0 1 0 )        M′ =  ( 1 0 0 0 0 )
          ( 1 1 1 1 0 )              ( 1 0 1 0 0 )
          ( 1 0 1 0 0 )              ( 1 1 1 1 0 )
          ( 1 1 1 0 1 )              ( 1 0 1 1 1 )
          ( 1 0 0 1 1 )              ( 1 1 0 0 1 )

Proof of (c2). Assume that the first column vector of M, denoted by a, contains exactly one 0-component. Obviously, e = 1 − a is a unit vector. Let M₁ and M_e be the pair of matrices obtained from M with the first column replaced by 1 and e, respectively. Then it is easy to prove that

    |det(M)| = |det(M₁) − det(M_e)| ≤ |det(M₁)| + |det(M_e)| ≤ 2α_{n−1}.      QED

From the above discussion, we obtain the following theorem (without the assumption of the monotonicity property (1)).

Theorem 2.2. A given simple game G = (N, W) with n players is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} is the maximum of the determinants of (n+1)×(n+1) 0–1 matrices.

Proof. If a given simple game is not α_{n+1}-trade robust, then it is not trade robust and, thus, not weighted, as shown by [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999]. We have discussed the inverse implication: if a given simple game G is not weighted, then the linear inequality system P1 is infeasible. Farkas' lemma [Farkas, 1902] implies that D1 is feasible. From the above discussion, we have a trading transform (X_1, ..., X_j; Y_1, ..., Y_j) satisfying j ≤ α_{n+1}, X_1, ..., X_j ∈ W, and Y_1, ..., Y_j ∈ L. QED

Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain Theorem 2.3.

Theorem 2.3. For any positive integer n, α_n ≤ (n+1)^{(n+1)/2} (1/2)^n.

The exact values of α_n for small positive integers n appear in "The On-Line Encyclopedia of Integer Sequences (A003432)" [Sloane et al., 2018] and in Table 1.
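For small n, the quantity α_n can be verified directly by enumerating all 0–1 matrices; the following brute-force snippet is only an illustration and its running time grows as 2^(n²).

```python
import itertools
import numpy as np

def max_01_determinant(n):
    """Brute-force alpha_n, the maximum determinant of an n x n 0-1 matrix
    (OEIS A003432); only practical for small n."""
    best = 0
    for bits in itertools.product([0, 1], repeat=n * n):
        M = np.array(bits, dtype=float).reshape(n, n)
        best = max(best, int(round(abs(np.linalg.det(M)))))
    return best

# max_01_determinant(3) -> 2, max_01_determinant(4) -> 3
```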
3 Integer Weights of Weighted Simple Games

This section reviews integer-weight representations of weighted simple games. Throughout this section, we do not need to assume the monotonicity property (1), except in Table 1.

Theorem 3.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If the given simple game G is weighted, then there exists an integer-weight representation (q; w⊤) of G satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1.

Proof. It is easy to show that a given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P2:   A(W) w ≥ q1,
          A(L) w ≤ q1 − 1,
          1⊤w ≤ u − 1.

We define

          (  A(W)    1   0 )         (  w )         ( 0 )
    A₂ =  ( −A(L)   −1   0 ),   v =  ( −q ),   d =  ( 1 ),
          (  −1⊤     0   1 )         (  u )         ( 1 )

and denote the inequality system P2 by A₂v ≥ d.

Subsequently, we assume that P2 is feasible. A non-singular square submatrix B̂ of A₂ is called a basis matrix. Variables corresponding to the columns of B̂ are called basic variables, and J_B̂ denotes an index set of basic variables. Let d_B̂ be the subvector of d corresponding to the rows of B̂. A basic solution with respect to B̂ is a vector v defined by v_i = v̂_i (i ∈ J_B̂) and v_i = 0 (i ∉ J_B̂), where v̂ is the vector of basic variables satisfying v̂ = B̂⁻¹d_B̂. It is well known that if the linear inequality system P2 is feasible, there exists a basic feasible solution.

Let (w′⊤, −q′, u′)⊤ be a basic feasible solution of P2 with respect to a basis matrix B. The assumption ∅ ∉ W implies that 0 ≤ q′ − 1 and, thus, −q′ ≠ 0. As N ∈ W, we have the inequalities u′ − 1 ≥ 1⊤w′ ≥ q′ ≥ 1, which imply that u′ ≠ 0. The definition of a basic solution implies that −q and u are basic variables with respect to the basis matrix B. Thus, B has columns corresponding to the basic variables −q and u. The column of B indexed by u is called the last column. As B is invertible, the last column of B is not the zero vector, and thus B includes a row corresponding to the inequality 1⊤w ≤ u − 1, which is called the last row (see Figure 3). Here, the number of rows (columns) of B, denoted by n′, is less than or equal to n + 2.

For simplicity, we denote the basic feasible solution (w′⊤, −q′, u′)⊤ by v′. By Cramer's rule, v′_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is obtained from B with the column corresponding to variable v_i replaced by d_B. Because B_i is an integer matrix, det(B)v′_i = det(B_i) is an integer for any i ∈ J_B. Cramer's rule states that (w*⊤, −q*, u*) = |det(B)|(w′⊤, −q′, u′) is an integer vector satisfying the following conditions:

    A(W) w* = |det(B)| A(W) w′ ≥ |det(B)| q′1 = q*1,
    A(L) w* = |det(B)| A(L) w′ ≤ |det(B)| (q′1 − 1) ≤ q*1 − 1,  and
    1⊤w* = |det(B)| 1⊤w′ ≤ |det(B)| (u′ − 1) ≤ u* − 1.

From the above, (q*; w*⊤) is an integer-weight representation of G. As N ∈ W, we obtain 1⊤w* ≥ q* = |det(B)| q′ ≥ 1.

Figure 3: Examples of elementary matrix operations for P2, showing a basis matrix B (columns indexed by w₁, ..., w₄, −q, u) and the matrices B_q, B′_q, B′′_q, B_i, B′_i, B′′_i, B_u, B′_u derived from it in the proof below.

Now, we discuss the magnitude of |q*| = |det(B_q)|, where B_q is obtained from B with the column corresponding to variable −q replaced by d_B. As the last column of B_q is a unit vector, we delete the last column and the last row from B_q and obtain a matrix B′_q satisfying det(B_q) = det(B′_q). We apply the following elementary matrix operations to B′_q. First, we multiply the column corresponding to variable −q (which is equal to d_B) by (−1). Next, we multiply the rows indexed by losing coalitions by (−1). The resulting matrix, denoted by B′′_q, is 0–1 valued and satisfies

    |q*| = |det(B_q)| = |det(B′_q)| = |det(B′′_q)| ≤ α_{n′−1} ≤ α_{n+1}.

Next, we show that |w*_i| ≤ α_n (i ∈ N). If w*_i ≠ 0, then w_i is a basic variable and satisfies |w*_i| = |det(B_i)|, where B_i is obtained from B with the column corresponding to variable w_i replaced by d_B. In a manner similar to that above, we delete the last column and the last row from B_i and obtain a matrix B′_i satisfying det(B_i) = det(B′_i). Next, we multiply the column corresponding to variable w_i by (−1), multiply the rows indexed by losing coalitions by (−1), and obtain a 0–1 matrix B′′_i. Matrix B_i contains a column corresponding to the original variable −q, which contains the values 1 or −1. Thus, matrix B′′_i contains a column vector that is equal to the all-one vector. Lemma 2.1 (c1) implies that

    |w*_i| = |det(B_i)| = |det(B′_i)| = |det(B′′_i)| ≤ α_{n′−2} ≤ α_n.

Lastly, we discuss the value of |u*| = |det(B_u)|, where B_u is obtained from B with the last column (the column indexed by variable u) replaced by d_B. In a manner similar to that above, we multiply the last column by (−1), multiply the rows indexed by losing coalitions by (−1), and multiply the last row by (−1). The resulting matrix, denoted by B′_u, is a 0–1 matrix in which the last row contains exactly one 0-component (indexed by variable −q). Lemma 2.1 (c2) implies that

    |u*| = |det(B_u)| = |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_{n+1},

and thus 1⊤w* ≤ u* − 1 ≤ |u*| − 1 ≤ 2α_{n+1} − 1. QED

[Kurz, 2012] exhaustively generated all weighted voting games satisfying the monotonicity property (1) for up to nine voters. Table 1 shows the maxima of the exact values of minimal integer-weight representations obtained by [Kurz, 2012], Muroga's bounds from [Muroga, 1971], and our upper bounds. The table shows that our bound on the quota is tight when n ≤ 5.

Table 1: Exact values of integer-weight representations.

    n                                     1   2   3   4   5   6    7    8    9     10    11
    α_n †                                 1   1   2   3   5   9    32   56   144   320   1458
    max_{(N,W)} min_{[q;w]} max_i w_i ‡   1   1   2   3   5   9    18   42   110
    Muroga's bound (α_n) •                1   1   2   3   5   9    32   56   144   320   1458
    max_{(N,W)} min_{[q;w]} q ‡           1   2   3   5   9   18   40   105  295
    Our bound (α_{n+1})                   1   2   3   5   9   32   56   144  320   1458
    Muroga's bound (nα_n) •               1   2   6   12  25  54   224  448  1296  3200  16038
    max_{(N,W)} min_{[q;w]} Σ_i w_i ‡     1   2   4   8   15  33   77   202  568
    Our bound (2α_{n+1} − 1)              1   3   5   9   17  63   111  287  639   2915

    † [Sloane et al., 2018],  ‡ [Kurz, 2012],  • [Muroga, 1971].
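The bounds of Theorem 3.1 can be read off from the α_n values in the first row of Table 1; the small helper below (ours, using the tabulated values) makes the comparison with the exact values explicit.

```python
# alpha_n for n = 1..11, from the first row of Table 1 (OEIS A003432)
ALPHA = {1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 9, 7: 32, 8: 56, 9: 144, 10: 320, 11: 1458}

def theorem_3_1_bounds(n):
    """Per-player weight, quota, and total-weight bounds of Theorem 3.1."""
    return ALPHA[n], ALPHA[n + 1], 2 * ALPHA[n + 1] - 1

# theorem_3_1_bounds(5) -> (5, 9, 17), matching the n = 5 column of Table 1
```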
|
518 |
-
4 Rounding Method

This section addresses the problem of finding integer-weight representations. In this section, we assume the monotonicity property (1). In addition, a weighted simple game is given by a triplet (N, W_m, L_M), where W_m and L_M denote the set of minimal winning coalitions and the set of maximal losing coalitions, respectively. We also assume that the empty set is a losing coalition, N is a winning coalition, and every player in N is not a null player. Thus, there exists an integer-weight representation in which q ≥ 1 and w_i ≥ 1 (∀i ∈ N).

We discuss a problem for finding an integer-weight representation, which is formulated by the following integer programming problem:

    Q: find a vector (q; w) satisfying
       Σ_{i∈S} w_i ≥ q         (∀S ∈ W_m),    (3)
       Σ_{i∈S} w_i ≤ q − 1     (∀S ∈ L_M),    (4)
       q ≥ 1,  w_i ≥ 1         (∀i ∈ N),      (5)
       q ∈ Z,  w_i ∈ Z         (∀i ∈ N).      (6)

A linear relaxation problem Q is obtained from Q by dropping the integer constraints (6).
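For illustration, the sketch below checks a candidate pair (q; w) against constraints (3)–(5); the encoding of coalitions as tuples of 0-based player indices and the three-player majority example are assumptions made only for this example.

```python
def is_integer_representation(q, w, minimal_winning, maximal_losing):
    """Check constraints (3)-(5) of problem Q for a candidate (q; w).

    `w` is a list of voter weights; coalitions are given as iterables of
    0-based voter indices (this encoding is assumed for illustration).
    """
    if q < 1 or any(wi < 1 for wi in w):
        return False
    if any(sum(w[i] for i in S) < q for S in minimal_winning):     # (3)
        return False
    if any(sum(w[i] for i in S) > q - 1 for S in maximal_losing):  # (4)
        return False
    return True

# Example: the 3-player majority game, W_m = all 2-player coalitions.
print(is_integer_representation(2, [1, 1, 1],
                                minimal_winning=[(0, 1), (0, 2), (1, 2)],
                                maximal_losing=[(0,), (1,), (2,)]))  # True
```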
Let (q*; w*⊤) be a basic feasible solution of the linear inequality system Q. Our proof in the previous section showed that |det(B*)| (q*; w*⊤) gives a solution of Q (i.e., an integer-weight representation), where B* denotes a corresponding basis matrix of Q. When |det(B*)| > n, there exists a simple method for generating a smaller integer-weight representation. For any weight vector w = (w_1, w_2, ..., w_n)⊤, we denote the integer vector (⌊w_1⌋, ⌊w_2⌋, ..., ⌊w_n⌋)⊤ by ⌊w⌋. Given a solution (q*; w*⊤) of Q, we introduce an integer vector w′ = ⌊n w*⌋ and an integer q′ = ⌊n(q* − 1)⌋ + 1. For any minimal winning coalition S ∈ W_m, we have that

    Σ_{i∈S} w′_i > Σ_{i∈S} (n w*_i − 1) ≥ n Σ_{i∈S} w*_i − n ≥ n q* − n = n(q* − 1) ≥ ⌊n(q* − 1)⌋,
    Σ_{i∈S} w′_i ≥ ⌊n(q* − 1)⌋ + 1 = q′.

Each maximal losing coalition S ∈ L_M satisfies

    Σ_{i∈S} w′_i ≤ Σ_{i∈S} n w*_i ≤ n(q* − 1),
    Σ_{i∈S} w′_i ≤ ⌊n(q* − 1)⌋ = q′ − 1.

Thus, the pair of w′ and q′ gives an integer-weight representation satisfying (q′; w′⊤) ≤ n (q*; w*⊤). In the remainder of this section, we show that there exists an integer-weight representation (vector) that is less than or equal to ((2 − √2)n + (√2 − 1)) (q*; w*⊤) < (0.5858n + 0.4143) (q*; w*⊤) for any solution (q*; w*⊤) of Q.
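A minimal sketch of the n-rounding just described, assuming a fractional solution (q*, w*) of the relaxation is already available; the example input is a (fractional but feasible) solution for the three-player majority game.

```python
import math

def round_by_n(q_star, w_star):
    """Scale a fractional solution (q*, w*) of the relaxation by n = len(w*)
    and round: w' = floor(n*w*), q' = floor(n*(q*-1)) + 1.
    By the inequalities above, (q'; w') is an integer-weight representation."""
    n = len(w_star)
    w_rounded = [math.floor(n * wi) for wi in w_star]
    q_rounded = math.floor(n * (q_star - 1)) + 1
    return q_rounded, w_rounded

# q* = 2.2, w* = (1.2, 1.2, 1.2) satisfies (3)-(5) for the 3-player majority game.
print(round_by_n(2.2, [1.2, 1.2, 1.2]))  # (4, [3, 3, 3]): a valid, if not minimal, representation
```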
Theorem 4.1. Let (q*; w*⊤) be a solution of Q. We define ℓ1 = (2 − √2)n − (√2 − 1) and u1 = (2 − √2)n + (√2 − 1). Then, there exists a real number λ• ∈ [ℓ1, u1] so that the pair Q = ⌊λ•(q* − 1)⌋ + 1 and W = ⌊λ• w*⌋ gives a feasible solution of Q (i.e., an integer-weight representation).
Proof. For any positive real λ, it is easy to see that each maximal losing coalition S ∈ L_M satisfies

    Σ_{i∈S} ⌊λ w*_i⌋ ≤ Σ_{i∈S} λ w*_i ≤ λ(q* − 1),
    Σ_{i∈S} ⌊λ w*_i⌋ ≤ ⌊λ(q* − 1)⌋.

To discuss the weights of minimal winning coalitions, we introduce a function g(λ) = λ − Σ_{i∈N} (λ w*_i − ⌊λ w*_i⌋). In the second part of this proof, we show that if we choose Λ ∈ [ℓ1, u1] uniformly at random, then E[g(Λ)] ≥ 0. This implies that there exists λ• ∈ [ℓ1, u1] satisfying g(λ•) > 0, because g(λ) is right-continuous, piecewise linear, and not a constant function. When g(λ•) > 0, each minimal winning coalition S ∈ W_m satisfies

    λ• > Σ_{i∈N} (λ• w*_i − ⌊λ• w*_i⌋) ≥ Σ_{i∈S} (λ• w*_i − ⌊λ• w*_i⌋) = Σ_{i∈S} λ• w*_i − Σ_{i∈S} ⌊λ• w*_i⌋,    (7)

which implies

    Σ_{i∈S} ⌊λ• w*_i⌋ > Σ_{i∈S} λ• w*_i − λ• = λ• ( Σ_{i∈S} w*_i − 1 ) ≥ λ•(q* − 1) ≥ ⌊λ•(q* − 1)⌋,

and thus

    Σ_{i∈S} ⌊λ• w*_i⌋ ≥ ⌊λ•(q* − 1)⌋ + 1.

Finally, we show that E[g(Λ)] ≥ 0 if we choose Λ ∈ [ℓ1, u1] uniformly at random. It is obvious that

    E[g(Λ)] = E[Λ] − Σ_{i∈N} E[Λ w*_i − ⌊Λ w*_i⌋]
            = (ℓ1 + u1)/2 − Σ_{i∈N} (1/(u1 − ℓ1)) ∫_{ℓ1}^{u1} (λ w*_i − ⌊λ w*_i⌋) dλ
            = (2 − √2)n − Σ_{i∈N} (1/(u1 − ℓ1)) ∫_{ℓ1}^{u1} (λ w*_i − ⌊λ w*_i⌋) dλ.

Let us discuss the last term appearing above. By substituting µ for λ w*_i, we obtain

    (1/(u1 − ℓ1)) ∫_{ℓ1}^{u1} (λ w*_i − ⌊λ w*_i⌋) dλ = (1/(w*_i (u1 − ℓ1))) ∫_{ℓ1 w*_i}^{u1 w*_i} (µ − ⌊µ⌋) dµ
        ≤ (1/(w*_i (u1 − ℓ1))) ∫_{−w*_i (u1 − ℓ1)}^{0} (µ − ⌊µ⌋) dµ = (1/x) ∫_{−x}^{0} (µ − ⌊µ⌋) dµ,

where the last equality is obtained by setting x = w*_i (u1 − ℓ1). As u1 − ℓ1 = 2(√2 − 1) and w*_i ≥ 1, it is clear that x = w*_i (u1 − ℓ1) ≥ 2(√2 − 1). Here, we introduce a function f(x) = (1/x) ∫_{−x}^{0} (µ − ⌊µ⌋) dµ. According to numerical calculations (see Figure 4), the inequality x ≥ 2(√2 − 1) implies that f(x) ≤ 2 − √2.

[Figure 4: Plot of the function f(x) = (1/x) ∫_{−x}^{0} (µ − ⌊µ⌋) dµ; the plotted values of f(x) lie roughly between 0.45 and 0.8 for x ∈ (0, 5].]

From the above, we obtain the desired result

    E[g(Λ)] ≥ (2 − √2)n − Σ_{i∈N} (2 − √2) = (2 − √2)n − (2 − √2)n = 0.    QED
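The proof of Theorem 4.1 suggests a simple randomized procedure: draw λ uniformly from [ℓ1, u1], keep it once g(λ) > 0, and round. The sketch below only illustrates that sampling idea; as discussed in the conclusion, it is not claimed to be an efficient algorithm, and the stopping rule (a fixed number of trials) is an assumption made for the example.

```python
import math
import random

def g(lam, w_star):
    """g(lambda) = lambda - sum_i (lambda*w_i - floor(lambda*w_i))."""
    return lam - sum(lam * wi - math.floor(lam * wi) for wi in w_star)

def randomized_round(q_star, w_star, trials=10000, seed=0):
    """Sample lambda uniformly from [l1, u1] until g(lambda) > 0, then round
    as in Theorem 4.1: Q = floor(lambda*(q*-1)) + 1, W_i = floor(lambda*w_i)."""
    n = len(w_star)
    l1 = (2 - math.sqrt(2)) * n - (math.sqrt(2) - 1)
    u1 = (2 - math.sqrt(2)) * n + (math.sqrt(2) - 1)
    rng = random.Random(seed)
    for _ in range(trials):
        lam = rng.uniform(l1, u1)
        if g(lam, w_star) > 0:
            return (math.floor(lam * (q_star - 1)) + 1,
                    [math.floor(lam * wi) for wi in w_star])
    raise RuntimeError("no suitable lambda found in the sampled trials")

# Same fractional solution of the 3-player majority game as above.
print(randomized_round(2.2, [1.2, 1.2, 1.2]))  # (3, [2, 2, 2]) for any accepted lambda
```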
5 Roughly Weighted Simple Games

In this section, we discuss roughly weighted simple games. First, we show an upper bound on the length of a potent certificate of non-weightedness.

Theorem 5.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N and the monotonicity property (1). If a given simple game G is not roughly weighted, then there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.
Proof. Let us introduce a linear inequality system:

    P3:  [ A(W)  1 ; −A(L)  −1 ] (w; −q) ≥ 0,    1⊤ w > 0,

that is, A(W) w ≥ q·1, A(L) w ≤ q·1, and 1⊤ w > 0.

First, we show that if P3 is feasible, then a given simple game is roughly weighted. Let (q′; w′⊤) be a feasible solution of P3. We introduce a new voting weight w″_i = max{w′_i, 0} for each i ∈ N. We show that (q′; w″⊤) is a rough voting representation. As 1⊤ w′ > 0, vector w′ includes at least one positive component, and thus w″ ≠ 0. If a coalition S satisfies Σ_{i∈S} w″_i < q′, then q′ > Σ_{i∈S} w″_i ≥ Σ_{i∈S} w′_i, and thus S is losing. Consider the case in which a coalition S satisfies Σ_{i∈S} w″_i > q′. Let S′ = {i ∈ S | w′_i > 0}. It is obvious that q′ < Σ_{i∈S} w″_i = Σ_{i∈S′} w″_i = Σ_{i∈S′} w′_i, and thus S′ is winning. The monotonicity property (1) and S′ ⊆ S imply that S is winning.

From the above discussion, it is obvious that if a given simple game is not roughly weighted, then P3 is infeasible. Farkas' Lemma [Farkas, 1902] states that

    D3:  [ A(W)⊤  −A(L)⊤ ; 1⊤  −1⊤ ] (x; y) = (−1; 0),    x ≥ 0,  y ≥ 0,

is feasible if and only if P3 is infeasible. By introducing an artificial non-negative variable u ≥ 0 and the equality 1⊤ x = u − 1, we obtain a linear inequality system:

    D3+: [ A(W)⊤  −A(L)⊤  0 ; 1⊤  −1⊤  0 ; 1⊤  0⊤  −1 ] (x; y; u) = (−1; 0; −1),    x ≥ 0,  y ≥ 0,  u ≥ 0.

It is obvious that D3 is feasible if and only if D3+ is feasible.

Next, we construct a trading transform from a basic feasible solution of D3+. Let Ã3 z = c̃′ be a linear equality system obtained from D3+ by repeatedly removing redundant equalities. As D3+ is feasible, there exists a basic feasible solution, denoted by z′, and a corresponding basis matrix B of Ã3. From Cramer's rule, z′_S ≠ 0 implies that z′_S = det(B_S)/det(B) for each S ⊆ N, where B_S is obtained from B with the column corresponding to variable z_S replaced by c̃′. Obviously, |det(B)| z′ is a non-negative integer vector. We denote by (x′⊤, y′⊤, u′)⊤ the basic feasible solution z′. We recall that ∅ ∉ W ∋ N and introduce a pair of non-negative integer vectors (x*, y*) defined as follows:

    x*_S = |det(B)| x′_S          (if S ∈ W \ {N}),
    x*_S = |det(B)| (x′_N + 1)    (if S = N),
    y*_S = |det(B)| y′_S          (if S ∈ L \ {∅}),
    y*_S = |det(B)| (y′_∅ + 1)    (if S = ∅).

Subsequently, χ(S) denotes the characteristic vector of a coalition S. It is easy to see that the pair (x*, y*) satisfies

    A(W)⊤ x* − A(L)⊤ y* = Σ_{S∈W} χ(S) x*_S − Σ_{S∈L} χ(S) y*_S
      = Σ_{S∈W} χ(S) |det(B)| x′_S + χ(N) |det(B)| − Σ_{S∈L} χ(S) |det(B)| y′_S − χ(∅) |det(B)|
      = |det(B)| ( ( Σ_{S∈W} χ(S) x′_S − Σ_{S∈L} χ(S) y′_S ) + χ(N) − χ(∅) )
      = |det(B)| ( ( A(W)⊤ x′ − A(L)⊤ y′ ) + 1 − 0 )
      = |det(B)| (−1 + 1 − 0) = 0

and

    Σ_{S∈W} x*_S − Σ_{S∈L} y*_S = Σ_{S∈W} |det(B)| x′_S + |det(B)| − Σ_{S∈L} |det(B)| y′_S − |det(B)|
      = |det(B)| ( ( Σ_{S∈W} x′_S − Σ_{S∈L} y′_S ) + 1 − 1 )
      = |det(B)| (0 + 1 − 1) = 0.

Next, we can construct a trading transform (X; Y) corresponding to the pair of x* and y* by analogy with the proof of Theorem 2.2. Both x*_N and y*_∅ are positive and ∅ ∉ W ∋ N; therefore (X; Y) is a potent certificate of non-weightedness.

Lastly, we discuss the length of (X; Y). The number of rows (columns) of B, denoted by n′, is less than or equal to n + 2. The basic feasible solution z′ satisfies u′ = 1 + 1⊤ x′ ≥ 1 > 0, and thus Cramer's rule states that det(B) u′ = det(B_u) (Figure 5 shows an example). We multiply the columns of B_u corresponding to components in (y⊤, u) by (−1) and obtain a 0–1 matrix B′_u satisfying |det(B_u)| = |det(B′_u)|. As c̃′ includes at most one 0-component, Lemma 2.1 implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_{n+1}. Thus, the length of (X; Y) satisfies

    Σ_{S∈W} x*_S = Σ_{S∈W} |det(B)| x′_S + |det(B)| = |det(B)| (1⊤ x′ + 1) = |det(B)| (u′ − 1 + 1)
                 = |det(B)| u′ = |det(B) u′| = |det(B_u)| = |det(B′_u)| ≤ 2α_{n+1}.    QED
In the rest of this section, we discuss integer voting weights and a quota of a rough voting representation. We say that a player i ∈ N is a passer if and only if every coalition S ∋ i is winning.
[Figure 5: Example of elementary matrix operations for D3+, showing the basis matrix B with right-hand side c̃′, the matrix B_u, and the resulting 0–1 matrix B′_u (matrix entries omitted here).]
Theorem 5.2. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If a given simple game G is roughly weighted, then there exists an integer vector (q; w⊤) of the rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_n.
Proof. First, we show that if a given game is roughly weighted, then either

    P4:  [ A(W)  0 ; −A(L)  0 ; −1⊤  1 ] (w; u) ≥ (1; −1; 0),    w ≥ 0,  u ≥ 0,

that is, A(W) w ≥ 1, A(L) w ≤ 1, and 1⊤ w ≤ u, is feasible, or there exists at least one passer. Suppose that a given simple game has a rough voting representation (q; w⊤). If q > 0, then (1/q) w becomes a feasible solution of P4 by setting u to a sufficiently large positive number. Consider the case q ≤ 0. Assumption ∅ ∉ W implies that 0 ≤ q, and thus we obtain q = 0. Properties (q, w⊤) ≠ 0⊤ and w ≥ 0 imply that there exists i◦ ∈ N with w_{i◦} > 0, i.e., a given game G has a passer i◦.

When a given game G has a passer i◦ ∈ N, then there exists a rough voting representation (q◦; w◦⊤) defined by

    w◦_i = 1 (i = i◦),    w◦_i = 0 (i ≠ i◦),    q◦ = 0,

which produces the desired result.

Lastly, we consider the case in which P4 is feasible. It is well known that when P4 is feasible, there exists a basic feasible solution. Let (w′⊤, u′)⊤ be a basic feasible solution of P4 and B be a corresponding basis matrix. It is easy to see that (1; w′⊤) is a rough voting representation of G. Assumption N ∈ W implies the positivity of u′ because u′ ≥ 1⊤ w′ ≥ 1. Then, variable u is a basic variable, and thus B includes a column corresponding to u, which is called the last column. The non-singularity of B implies that the column corresponding to u is not the zero vector, and thus B includes a row corresponding to the inequality 1⊤ w ≤ u, which is called the last row (see Figure 6). The number of rows (columns) of the basis matrix B, denoted by n′, is less than or equal to n + 1.

Cramer's rule states that (q*, w*⊤, u*) = |det(B)| (1, w′⊤, u′) is a non-negative integer vector. It is easy to see that (q*, w*⊤, u*) satisfies

    A(W) w* = |det(B)| A(W) w′ ≥ |det(B)| 1 = q* 1,
    A(L) w* = |det(B)| A(L) w′ ≤ |det(B)| 1 = q* 1,  and
    1⊤ w*   = |det(B)| 1⊤ w′ ≤ |det(B)| u′ = u*.

From the above, (q*; w*⊤) is an integer vector of a rough voting representation. Assumption N ∈ W implies that 1⊤ w* ≥ q* = |det(B)| ≥ 1.

Let d′_B be the subvector of the right-hand-side vector of the inequality system in P4 corresponding to the rows of B. Cramer's rule states that det(B) u′ = det(B_u), where B_u is obtained from B but the column corresponding to the basic variable u is replaced by d′_B (see Figure 6). We multiply the rows of B_u that correspond to losing coalitions by (−1) and multiply the last row by (−1). The resulting matrix, denoted by B′_u, is a 0–1 matrix whose last row includes exactly one 0-component (indexed by u). Lemma 2.1 (c2) implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_n. Thus, we obtain that

    1⊤ w* ≤ u* ≤ |u*| = |det(B) u′| = |det(B_u)| = |det(B′_u)| ≤ 2α_n.

By analogy with the proof of Theorem 3.1, we can prove the desired inequalities: q* = |det(B)| ≤ α_n and w*_i ≤ α_{n−1} (∀i ∈ N). QED
[Figure 6: Examples of elementary matrix operations for P4, showing the basis matrix B and the matrices B_u and B′_u over the columns w1, ..., w5, u (matrix entries omitted here).]
6 Conclusion

In this paper, we discussed the smallest value of k* such that every k*-trade robust simple game would be weighted. We provided a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof yields an improved upper bound on the required length of a trading transform. We showed that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value of determinants of (n+1)×(n+1) 0–1 matrices. Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain k* ≤ α_{n+1} ≤ (n+2)^{(n+2)/2} (1/2)^{n+1}, which improves the existing bound k* ≤ (n+1) n^{n/2} obtained by [Gvozdeva and Slinko, 2011].

Next, we discussed upper bounds for the maximum possible integer weights and the quota needed to represent any weighted simple game with n players. We showed that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w⊤) ∈ Z × Z^N such that |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1. We demonstrated the tightness of our bound on the quota when n ≤ 5.

We described a rounding method based on a linear relaxation of an integer programming problem for finding an integer-weight representation. We showed that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by λ• ≤ (2 − √2)n + (√2 − 1) < 0.5858n + 0.4143. Our proof of Theorem 4.1 indicates the existence of a randomized rounding algorithm for finding an appropriate value λ•. However, from a theoretical point of view, Theorem 4.1 only showed the existence of a real number λ•. Even if there exists an appropriate "rational" number λ•, we need to determine the size of the rational number (its numerator and denominator) to implement a naive randomized rounding algorithm. Thus, it remains open whether there exists an efficient algorithm for finding an integer-weight representation satisfying the properties in Theorem 4.1.

Lastly, we showed that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w⊤) of the rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_n. When a given simple game is not roughly weighted, we showed that (under the monotonicity property (1) and ∅ ∉ W ∋ N) there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.
References

[Baugh, 1970] Baugh, C. R. (1970). Pseudo-threshold logic: A generalization of threshold logic. PhD thesis, University of Illinois at Urbana-Champaign.
[Chow, 1961] Chow, C.-K. (1961). On the characterization of threshold functions. In 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961), pages 34-38. IEEE.
[Elgot, 1961] Elgot, C. C. (1961). Decision problems of finite automata design and related arithmetics. Transactions of the American Mathematical Society, 98(1):21-51.
[Farkas, 1902] Farkas, J. (1902). Theorie der einfachen Ungleichungen. Journal für die reine und angewandte Mathematik, 1902(124):1-27.
[Freixas, 2021] Freixas, J. (2021). A characterization of weighted simple games based on pseudoweightings. Optimization Letters, 15:1371-1383.
[Freixas et al., 2016] Freixas, J., Freixas, M., and Kurz, S. (2016). Characterization of threshold functions: state of the art, some new contributions and open problems. Available at SSRN, https://ssrn.com/abstract=2740475 (March 1, 2016).
[Freixas et al., 2017] Freixas, J., Freixas, M., and Kurz, S. (2017). On the characterization of weighted simple games. Theory and Decision, 83(4):469-498.
[Freixas and Kurz, 2014] Freixas, J. and Kurz, S. (2014). On α-roughly weighted games. International Journal of Game Theory, 43(3):659-692.
[Freixas and Molinero, 2009a] Freixas, J. and Molinero, X. (2009a). On the existence of a minimum integer representation for weighted voting systems. Annals of Operations Research, 166(1):243-260.
[Freixas and Molinero, 2009b] Freixas, J. and Molinero, X. (2009b). Simple games and weighted games: a theoretical and computational viewpoint. Discrete Applied Mathematics, 157(7):1496-1508.
[Freixas and Molinero, 2010] Freixas, J. and Molinero, X. (2010). Weighted games without a unique minimal representation in integers. Optimisation Methods & Software, 25(2):203-215.
[Gvozdeva et al., 2013] Gvozdeva, T., Hemaspaandra, L. A., and Slinko, A. (2013). Three hierarchies of simple games parameterized by "resource" parameters. International Journal of Game Theory, 42(1):1-17.
[Gvozdeva and Slinko, 2011] Gvozdeva, T. and Slinko, A. (2011). Weighted and roughly weighted simple games. Mathematical Social Sciences, 61(1):20-30.
[Hadamard, 1893] Hadamard, J. (1893). Résolution d'une question relative aux déterminants. Bull. des Sciences Math., 2:240-246.
[Hameed and Slinko, 2015] Hameed, A. and Slinko, A. (2015). Roughly weighted hierarchical simple games. International Journal of Game Theory, 44(2):295-319.
[Hansen and Podolskii, 2015] Hansen, K. A. and Podolskii, V. V. (2015). Polynomial threshold functions and Boolean threshold circuits. Information and Computation, 240:56-73.
[Håstad, 1994] Håstad, J. (1994). On the size of weights for threshold gates. SIAM Journal on Discrete Mathematics, 7(3):484-492.
[Isbell, 1956] Isbell, J. R. (1956). A class of majority games. The Quarterly Journal of Mathematics, 7(1):183-187.
[Krohn and Sudhölter, 1995] Krohn, I. and Sudhölter, P. (1995). Directed and weighted majority games. Zeitschrift für Operations Research, 42(2):189-216.
[Kurz, 2012] Kurz, S. (2012). On minimum sum representations for weighted voting games. Annals of Operations Research, 196(1):361-369.
[Muroga, 1971] Muroga, S. (1971). Threshold Logic and its Applications. Wiley, New York.
[Muroga et al., 1962] Muroga, S., Toda, I., and Kondo, M. (1962). Majority decision functions of up to six variables. Mathematics of Computation, 16(80):459-472.
[Muroga et al., 1970] Muroga, S., Tsuboi, T., and Baugh, C. R. (1970). Enumeration of threshold functions of eight variables. IEEE Transactions on Computers, 100(9):818-825.
[Myhill and Kautz, 1961] Myhill, J. and Kautz, W. H. (1961). On the size of weights required for linear-input switching functions. IRE Transactions on Electronic Computers, EC-10(2):288-290.
[Peled and Simeone, 1985] Peled, U. N. and Simeone, B. (1985). Polynomial-time algorithms for regular set-covering and threshold synthesis. Discrete Applied Mathematics, 12(1):57-69.
[Sloane et al., 2018] Sloane, N. J. et al. (2018). The On-Line Encyclopedia of Integer Sequences (A003432). Published electronically.
[Taylor and Zwicker, 1992] Taylor, A. D. and Zwicker, W. S. (1992). A characterization of weighted voting. Proceedings of the American Mathematical Society, 115(4):1089-1094.
[Taylor and Zwicker, 1999] Taylor, A. D. and Zwicker, W. S. (1999). Simple Games: Desirability Relations, Trading, Pseudoweightings. Princeton University Press.
[Wang and Williams, 1991] Wang, C. and Williams, A. (1991). The threshold order of a Boolean function. Discrete Applied Mathematics, 31(1):51-69.
[Winder, 1965] Winder, R. O. (1965). Enumeration of seven-argument threshold functions. IEEE Transactions on Electronic Computers, EC-14(3):315-325.
txt/2102.04993.txt
DELETED
@@ -1,1202 +0,0 @@

JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020

Attention-Based Neural Networks for Chroma Intra Prediction in Video Coding

Marc Górriz Blanch, Student Member IEEE, Saverio Blasi, Alan F. Smeaton, Fellow IEEE, Noel E. O'Connor, Member IEEE, and Marta Mrak, Senior Member IEEE

Abstract—Neural networks can be successfully used to improve several modules of advanced video coding schemes. In particular, compression of colour components was shown to greatly benefit from usage of machine learning models, thanks to the design of appropriate attention-based architectures that allow the prediction to exploit specific samples in the reference region. However, such architectures tend to be complex and computationally intense, and may be difficult to deploy in a practical video coding pipeline. This work focuses on reducing the complexity of such methodologies, to design a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. A novel size-agnostic multi-model approach is proposed to reduce the complexity of the inference process. The resulting simplified architecture is still capable of outperforming state-of-the-art methods. Moreover, a collection of simplifications is presented in this paper, to further reduce the complexity overhead of the proposed prediction architecture. Thanks to these simplifications, a reduction in the number of parameters of around 90% is achieved with respect to the original attention-based methodologies. Simplifications include a framework for reducing the overhead of the convolutional operations, a simplified cross-component processing model integrated into the original architecture, and a methodology to perform integer-precision approximations with the aim to obtain fast and hardware-aware implementations. The proposed schemes are integrated into the Versatile Video Coding (VVC) prediction pipeline, retaining compression efficiency of state-of-the-art chroma intra-prediction methods based on neural networks, while offering different directions for significantly reducing coding complexity.

Index Terms—Chroma intra prediction, convolutional neural networks, attention algorithms, multi-model architectures, complexity reduction, video coding standards.
-
I. I NTRODUCTION
|
39 |
-
EFFICIENT video compression has become an essential
|
40 |
-
component of multimedia streaming. The convergence
|
41 |
-
of digital entertainment followed by the growth of web ser-
|
42 |
-
vices such as video conferencing, cloud gaming and real-time
|
43 |
-
high-quality video streaming, prompted the development of
|
44 |
-
advanced video coding technologies capable of tackling the
|
45 |
-
increasing demand for higher quality video content and its con-
|
46 |
-
sumption on multiple devices. New compression techniques
|
47 |
-
enable a compact representation of video data by identifying
|
48 |
-
Manuscript submitted July 1, 2020. The work described in this paper has
|
49 |
-
been conducted within the project JOLT funded by the European Union’s Hori-
|
50 |
-
zon 2020 research and innovation programme under the Marie Skłodowska
|
51 |
-
Curie grant agreement No 765140.
|
52 |
-
M. G ´orriz Blanch, S. Blasi and M. Mrak are with BBC Research &
|
53 |
-
Development, The Lighthouse, White City Place, 201 Wood Lane, Lon-
|
54 |
-
don, UK (e-mail: [email protected], [email protected],
|
55 | |
56 |
-
A. F. Smeaton and N. E. O’Connor are with Dublin City University, Glas-
|
57 |
-
nevin, Dublin 9, Ireland (e-mail: [email protected], [email protected]).
|
58 |
-
Fig. 1. Visualisation of the attentive prediction process. For each reference
|
59 |
-
sample 0-16 the attention module generates its contribution to the prediction
|
60 |
-
of individual pixels from a target 44block.
|
61 |
-
and removing spatial-temporal and statistical redundancies
|
62 |
-
within the signal. This results in smaller bitstreams, enabling
|
63 |
-
more efficient storage and transmission as well as distribution
|
64 |
-
of content at higher quality, requiring reduced resources.
|
65 |
-
Advanced video compression algorithms are often complex
|
66 |
-
and computationally intense, significantly increasing the en-
|
67 |
-
coding and decoding time. Therefore, despite bringing high
|
68 |
-
coding gains, their potential for application in practice is
|
69 |
-
limited. Among the current state-of-the-art solutions, the next
|
70 |
-
generation Versatile Video Coding standard [1] (referred to as
|
71 |
-
VVC in the rest of this paper), targets between 30-50% better
|
72 |
-
compression rates for the same perceptual quality, supporting
|
73 |
-
resolutions from 4K to 16K as well as 360videos. One
|
74 |
-
fundamental component of hybrid video coding schemes, intra
|
75 |
-
prediction, exploits spatial redundancies within a frame by
|
76 |
-
predicting samples of the current block from already recon-
|
77 |
-
structed samples in its close surroundings. VVC allows a
|
78 |
-
large number of possible intra prediction modes to be usedarXiv:2102.04993v1 [eess.IV] 9 Feb 2021JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020 2
|
79 |
-
on the luma component at the cost of a considerable amount
|
80 |
-
of signalling data. Conversely, to limit the impact of mode
|
81 |
-
signalling, chroma components employ a reduced set of modes
|
82 |
-
[1].
|
83 |
-
In addition to traditional modes, more recent research intro-
|
84 |
-
duced schemes which further exploit cross-component correla-
|
85 |
-
tions between the luma and chroma components. Such corre-
|
86 |
-
lations motivated the development of the Cross-Component
|
87 |
-
Linear Model (CCLM, or simply LM in this paper) intra
|
88 |
-
modes. When using CCLM, the chroma components are
|
89 |
-
predicted from already reconstructed luma samples using a
|
90 |
-
linear model. Nonetheless, the limitation of simple linear
|
91 |
-
predictions comes from its high dependency on the selection
|
92 |
-
of predefined reference samples. Improved performance can
|
93 |
-
be achieved using more sophisticated Machine Learning (ML)
|
94 |
-
mechanisms [2], [3], which are able to derive more complex
|
95 |
-
representations of the reference data and hence boost the
|
96 |
-
prediction capabilities.
|
97 |
-
Methods based on Convolutional Neural Networks (CNNs)
|
98 |
-
[2], [4] provided significant improvements at the cost of two
|
99 |
-
main drawbacks: the associated increase in system complex-
|
100 |
-
ity and the tendency to disregard the location of individual
|
101 |
-
reference samples. Related works deployed complex neural
|
102 |
-
networks (NNs) by means of model-based interpretability
|
103 |
-
[5]. For instance, VVC recently adopted simplified NN-based
|
104 |
-
methods such as Matrix Intra Prediction (MIP) modes [6]
|
105 |
-
and Low-Frequency Non Separable Transform (LFNST) [7].
|
106 |
-
For the particular task of block-based intra-prediction, the
|
107 |
-
usage of complex NN models can be counterproductive if
|
108 |
-
there is no control over the relative position of the reference
|
109 |
-
samples. When using fully-connected layers, all input samples
|
110 |
-
contribute to all output positions, and after the consecutive
|
111 |
-
application of several hidden layers, the location of each
|
112 |
-
input sample is lost. This behaviour clearly runs counter
|
113 |
-
to the design of traditional approaches, in which predefined
|
114 |
-
directional modes carefully specify which boundary locations
|
115 |
-
contribute to each prediction position. A novel ML-based
|
116 |
-
cross-component intra-prediction method is proposed in [4],
|
117 |
-
introducing a new attention module capable of tracking the
|
118 |
-
contribution of each neighbouring reference sample when
|
119 |
-
computing the prediction of each chroma pixel, as shown in
|
120 |
-
Figure 1. As a result, the proposed scheme better captures
|
121 |
-
the relationship between the luma and chroma components,
|
122 |
-
resulting in more accurate prediction samples. However, such
|
123 |
-
NN-based methods significantly increase the codec complex-
|
124 |
-
ity, increasing the encoder and decoder times by up to 120%
|
125 |
-
and 947%, respectively.
|
126 |
-
This paper focuses on complexity reduction in video coding
|
127 |
-
with the aim to derive a set of simplified and cost-effective
|
128 |
-
attention-based architectures for chroma intra-prediction. Un-
|
129 |
-
derstanding and distilling knowledge from the networks en-
|
130 |
-
ables the implementation of less complex algorithms which
|
131 |
-
achieve similar performance to the original models. Moreover,
|
132 |
-
a novel training methodology is proposed in order to design a
|
133 |
-
block-independent multi-model which outperforms the state-
|
134 |
-
of-the-art attention-based architectures and reduces inference
|
135 |
-
complexity. The use of variable block sizes during training
|
136 |
-
helps the model to better generalise on content variety whileensuring higher precision on predicting large chroma blocks.
|
137 |
-
The main contributions of this work are the following:
|
138 |
-
A competitive block-independent attention-based multi-
|
139 |
-
model and training methodology;
|
140 |
-
A framework for complexity reduction of the convolu-
|
141 |
-
tional operations;
|
142 |
-
A simplified cross-component processing model using
|
143 |
-
sparse auto-encoders;
|
144 |
-
A fast and cost-effective attention-based multi-model with
|
145 |
-
integer precision approximations.
|
146 |
-
This paper is organised as follows: Section II provides a
|
147 |
-
brief overview on the related work, Section III introduces
|
148 |
-
the attention-based methodology in detail and establishes the
|
149 |
-
mathematical notation for the rest of the paper, Section IV
|
150 |
-
presents the proposed simplifications and Section V shows
|
151 |
-
experimental results, with conclusion drawn in Section VI.
|
152 |
-
II. BACKGROUND

Colour images are typically represented by three colour components (e.g. RGB, YCbCr). The YCbCr colour scheme is often adopted by digital image and video coding standards (such as JPEG, MPEG-1/2/4 and H.261/3/4) due to its ability to compact the signal energy and to reduce the total required bandwidth. Moreover, chrominance components are often sub-sampled by a factor of two to conform to the YCbCr 4:2:0 chroma format, in which the luminance signal contains most of the spatial information. Nevertheless, cross-component redundancies can be further exploited by reusing information from already coded components to compress another component. In the case of YCbCr, the Cross-Component Linear Model (CCLM) [8] uses a linear model to predict the chroma signal from a subsampled version of the already reconstructed luma block signal. The model parameters are derived at both the encoder and decoder sides without needing explicit signalling in the bitstream.

Another example is the Cross-Component Prediction (CCP) [9], which resides at the transform unit (TU) level regardless of the input colour space. In the case of YCbCr, a subsampled and dequantised luma transform block (TB) is used to modify the chroma TB at the same spatial location based on a context parameter signalled in the bitstream. An extension of this concept modifies one chroma component using the residual signal of the other one [10]. Such methodologies significantly improved the coding efficiency by further exploiting the cross-component correlations within the chroma components.

In parallel, the recent success of deep learning applications in computer vision and image processing influenced the design of novel video compression algorithms. In particular, in the context of intra-prediction, a new algorithm [3] was introduced based on fully-connected layers and CNNs to map the prediction of block positions from the already reconstructed neighbouring samples, achieving BD-rate (Bjontegaard Delta rate) [11] savings of up to 3.0% on average over HEVC, for approximately 200% increase in decoding time. The successful integration of CNN-based methods for luma intra-prediction into existing codec architectures has motivated research into alternative methods for chroma prediction, exploiting cross-component redundancies similar to the aforementioned LM methods.

Fig. 2. Baseline attention-based architecture for chroma intra prediction presented in [4] and described in Section III.

A novel hybrid neural network for chroma intra prediction was recently introduced in [2]. A first CNN was designed to extract features from reconstructed luma samples. This was combined with another fully-connected network used to extract cross-component correlations between neighbouring luma and chroma samples. The resulting architecture uses complex non-linear mapping for end-to-end prediction of chroma channels. However, this is achieved at the cost of disregarding the spatial location of the boundary reference samples and a significant increase of the complexity of the prediction process. As shown in [4], after a consecutive application of fully-connected layers in [2], the location of each input boundary reference sample is lost. Therefore, the fully-convolutional architecture in [4] better matches the design of the directional VVC modes and is able to provide significantly better performance.

The use of attention models enables effective utilisation of the individual spatial location of the reference samples [4]. The concept of "attention-based" learning is a well-known idea used in deep learning frameworks to improve the performance of trained networks in complex prediction tasks [12], [13], [14]. In particular, self-attention is used to assess the impact of particular input variables on the outputs, whereby the prediction is computed focusing on the most relevant elements of the same sequence [15]. The novel attention-based architecture introduced in [4] reports average BD-rate reductions of -0.22%, -1.84% and -1.78% for the Y, Cb and Cr components, respectively, although it significantly impacts the encoder and decoder time.

One common aspect across all related work is that whilst the result is an improvement in compression, this comes at the expense of increased complexity of the encoder and decoder. In order to address the complexity challenge, this paper aims to design a set of simplified attention-based architectures for performing chroma intra-prediction faster and more efficiently. Recent works addressed complexity reduction in neural networks using methods such as channel pruning [16], [17], [18] and quantisation [19], [20], [21]. In particular for video compression, many works used integer arithmetic in order to efficiently implement trained neural networks on different hardware platforms. For example, the work in [22] proposes a training methodology to handle low precision multiplications, proving that very low precision is sufficient not just for running trained networks but also for training them. Similarly, the work in [23] considers the problem of using variational latent-variable models for data compression and proposes integer networks as a universal solution of range coding as an entropy coding technique. They demonstrate that such models enable reliable cross-platform encoding and decoding of images using variational models. Moreover, in order to ensure deterministic implementations on hardware platforms, they approximate non-linearities using lookup tables. Finally, an efficient implementation of matrix-based intra prediction is proposed in [24], where a performance analysis evaluates the challenges of deploying models with integer arithmetic in video coding standards. Inspired by this knowledge, this paper develops a fast and cost-effective implementation of the proposed attention-based architecture using integer precision approximations. As shown in Section V-D, while such approximations can significantly reduce the complexity, the associated drop of performance is still not negligible.
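For orientation, a CCLM-style mode boils down to a single linear mapping chroma ≈ a·luma + b whose parameters are derived from the neighbouring reconstructed samples. The sketch below uses an ordinary least-squares fit purely as a simplified illustration of that idea; it is not the parameter derivation specified in VVC, and the numeric inputs are invented for the example.

```python
import numpy as np

def cclm_like_prediction(neigh_luma, neigh_chroma, block_luma):
    """Toy CCLM-style predictor: fit chroma ~ a*luma + b on the boundary
    samples (least squares here, whereas VVC uses a simpler derivation),
    then apply the model to the co-located (subsampled) luma block."""
    a, b = np.polyfit(np.asarray(neigh_luma, float),
                      np.asarray(neigh_chroma, float), 1)
    return a * np.asarray(block_luma, float) + b

# Illustrative values only.
pred = cclm_like_prediction([100, 120, 140, 160], [60, 70, 80, 90],
                            [[110, 130], [150, 170]])
print(pred)  # roughly [[65, 75], [85, 95]]
```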
III. ATTENTION-BASED ARCHITECTURES

This section describes in detail the attention-based approach proposed in [4] (Figure 2), which will be the baseline for the presented methodology in this paper. The section also provides the mathematical notation used for the rest of this paper.

Fig. 3. Proposed multi-model attention-based architectures with the integration of the simplifications introduced in this paper. More details about the model's hyperparameters and a description of the referred schemes can be found in Section V.

Without loss of generality, only square blocks of pixels are considered in this work. After intra-prediction and reconstruction of a luma block in the video compression chain, luma samples can be used for prediction of co-located chroma components. In this discussion, the size of a luma block is assumed to be (downsampled to) N×N samples, which is the size of the co-located chroma block. This may require the usage of conventional downsampling operations, such as in the case of using chroma sub-sampled picture formats such as 4:2:0. Note that a video coding standard treats all image samples as unsigned integer values within a certain precision range based on the internal bit depth. However, in order to utilise common deep learning frameworks, all samples are converted to floating point and normalised to values within the range [0, 1]. For the chroma prediction process, the reference samples used include the co-located luma block X_0 ∈ R^(N×N), and the array of reference samples B_c ∈ R^b, b = 4N + 1, from the left and from above the current block (Figure 1), where c = Y, Cb or Cr refers to the three colour components. B is constructed from samples on the left boundary (starting from the bottom-most sample), then the corner is added, and finally the samples on top are added (starting from the left-most sample). In case some reference samples are not available, these are padded using a predefined value, following the standard approach defined in VVC. Finally, S_0 ∈ R^(3×b) is the cross-component volume obtained by concatenating the three reference arrays B_Y, B_Cb and B_Cr. Similar to the model in [2], the attention-based architecture adopts a scheme based on three network branches that are combined to produce prediction samples, illustrated in Figure 2. The first two branches work concurrently to extract features from the input reference samples.

The first branch (referred to as the cross-component boundary branch) extracts cross-component features from S_0 ∈ R^(3×b) by applying I consecutive D_i-dimensional 1×1 convolutional layers to obtain the S_i ∈ R^(D_i×b) output feature maps, where i = 1, 2, ..., I. By applying 1×1 convolutions, the boundary input dimensions are preserved, resulting in a D_i-dimensional vector of cross-component information for each boundary location. The resulting volumes are activated using a Rectified Linear Unit (ReLU) non-linear function.

In parallel, the second branch (referred to as the luma convolutional branch) extracts spatial patterns over the co-located reconstructed luma block X_0 by applying convolutional operations. The luma convolutional branch is defined by J consecutive C_j-dimensional 3×3 convolutional layers with a stride of 1, to obtain X_j ∈ R^(C_j×N²) feature maps from the N² input samples, where j = 1, 2, ..., J. Similar to the cross-component boundary branch, in this branch a bias and a ReLU activation are applied within each convolutional layer.

The feature maps (S_I and X_J) from both branches are each convolved using a 1×1 kernel, to project them into two corresponding reduced feature spaces. Specifically, S_I is convolved with a filter W_F ∈ R^(h×D) to obtain the h-dimensional feature matrix F. Similarly, X_J is convolved with a filter W_G ∈ R^(h×C) to obtain the h-dimensional feature matrix G. The two matrices are multiplied together to obtain the pre-attention map M = G⊤F. Finally, the attention matrix A ∈ R^(N²×b) is obtained by applying a softmax operation to each element of M, to generate the probability of each boundary location being able to predict a sample location in the block. Each value of A at position (j, i) is obtained as

    A_{j,i} = exp(m_{i,j}/T) / Σ_{n=0}^{b−1} exp(m_{n,j}/T),    (1)

where j = 0, ..., N²−1 represents the sample location in the predicted block, i = 0, ..., b−1 represents a reference sample location, and T is the softmax temperature parameter controlling the smoothness of the generated probabilities, with 0 < T ≤ 1. Notice that the smaller the value of T, the more localised are the obtained attention areas, resulting in correspondingly fewer boundary samples contributing to a given prediction location. The weighted sum of the contribution of each reference sample in predicting a given sample at a specific location is obtained by computing the matrix multiplication between the cross-component boundary features S_I and the attention matrix A, or formally S_I⊤A. In order to further refine S_I⊤A, this weighted sum can be multiplied by the output of the luma branch. To do so, the output of the luma branch must be transformed to change its dimensions by means of a 1×1 convolution using a matrix W_x ∈ R^(D×C) to obtain a transformed representation X̄, then O = X̄ ⊙ (S_I⊤A), where ⊙ is the element-wise product.

Finally, the output of the attention model is fed into the third network branch, to compute the predicted chroma samples. In this branch, a final CNN is used to map the fused features from the first two branches, as combined by means of the attention model, into the final chroma prediction. The prediction head branch is defined by two convolutional layers, applying E-dimensional 3×3 convolutional filters and then 2-dimensional 1×1 filters for deriving the two chroma components at once.
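The attention mechanism described above can be summarised in a few lines of NumPy: project both feature maps with 1×1 convolutions (plain matrix products here), apply the temperature softmax of Eq. (1) over the boundary locations, and combine the weighted boundary features with the transformed luma features. The dimensions, random inputs and matrix layout are assumptions chosen for this sketch and stand in for the learned tensors of the real network.

```python
import numpy as np

def softmax_rows(M, T=1.0):
    """Eq. (1): softmax over the b boundary locations (last axis), with
    temperature T controlling how localised the attention is."""
    E = np.exp((M - M.max(axis=-1, keepdims=True)) / T)  # numerically stabilised
    return E / E.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
N, b, D, C, h = 4, 4 * 4 + 1, 32, 32, 16       # block size, boundary length, feature dims

S_I = rng.standard_normal((D, b))              # cross-component boundary features
X_J = rng.standard_normal((C, N * N))          # luma features (flattened block)
W_F = rng.standard_normal((h, D))              # 1x1 projections of both branches
W_G = rng.standard_normal((h, C))
W_x = rng.standard_normal((D, C))              # 1x1 transform of the luma branch

F = W_F @ S_I                                  # h x b
G = W_G @ X_J                                  # h x N^2
A = softmax_rows(G.T @ F, T=0.5)               # N^2 x b: one distribution per output pixel
weighted = S_I @ A.T                           # D x N^2 weighted sum of boundary features
O = (W_x @ X_J) * weighted                     # element-wise refinement by the luma branch
print(A.sum(axis=1)[:4], O.shape)              # rows of A sum to 1; O is D x N^2
```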
IV. M ULTI -MODEL ARCHITECTURES
|
354 |
-
This section introduces a new multi-model architecture
|
355 |
-
which improves the baseline attention-based approach (Section
|
356 |
-
III, [4]). The main improvement comes from its block-size
|
357 |
-
agnostic property as the proposed approach only requires one
|
358 |
-
model for all block sizes. Furthermore, a range of simpli-
|
359 |
-
fications is proposed with the aim to reduce the complex-
|
360 |
-
ity of related attention-based architectures while preserving
|
361 |
-
prediction performance as much as possible. The proposed
|
362 |
-
simplifications include a framework for complexity reduction
|
363 |
-
of the convolutional operations, a simplified cross-component
|
364 |
-
boundary branch using sparse autoencoders and insights for
|
365 |
-
fast and cost-effective implementations with integer precision
|
366 |
-
approximations. Figure 3 illustrates the proposed multi-model
|
367 |
-
attention-based schemes with the integration of the simplifica-
|
368 |
-
tions described in this section.
|
369 |
-
A. Multi-model size agnostic architecture
|
370 |
-
In order to handle variable block sizes, previous NN-based
|
371 |
-
chroma intra-prediction methods employ different architec-
|
372 |
-
tures for blocks of different sizes. These architectures differ
|
373 |
-
in the dimensionality of the networks, which depend on give
|
374 |
-
block size, as a trade-off between model complexity and
|
375 |
-
prediction performance [2]. Given a network structure, the
|
376 |
-
depth of the convolutional layers is the most predominant
|
377 |
-
factor when dealing with variable input sizes. This means that
|
378 |
-
increasingly complex architectures are needed for larger block
|
379 |
-
sizes, in order to ensure proper generalisation for these blocks
|
380 |
-
which have higher content variety. Such a factor significantly
|
381 |
-
increases requirements for inference because of the number of
|
382 |
-
multiple architectures.
|
383 |
-
In order to streamline the inference process, this work
|
384 |
-
proposes a novel multi-model architecture that is independent
|
385 |
-
of the input block size. Theoretically, a convolutional filter
|
386 |
-
can be applied over any input space. Therefore, the fully-
|
387 |
-
convolutional nature of the proposed architecture ( 11kernels
|
388 |
-
for the cross-component boundary branch and 33kernels
|
389 |
-
for the luma convolutional one) allows the design of a size
|
390 |
-
agnostic architecture. As shown in Figure 4, the same task
|
391 |
-
can be achieved using multiple models with different input
|
392 |
-
sizes sharing the weights, such that a unified set of filters can
|
393 |
-
be used a posterior, during inference. The given architecture
|
394 |
-
must employ a number of parameters that is sufficiently large
|
395 |
-
to ensure proper performance for larger blocks, but not too
|
396 |
-
large to incur overfitting for smaller blocks.
|
397 |
-
Figure 5 describes the algorithmic methodology employed
|
398 |
-
to train the multi-model approach. As defined in Section III,
|
399 |
-
the co-located luma block $X_0 \in \mathbb{R}^{N \times N}$ and the cross-component volume $S_0 \in \mathbb{R}^{3 \times b}$ are considered as inputs to the chroma prediction network. Furthermore, for training of a
|
402 |
-
Fig. 4. Illustration of the proposed multi-model training and inference
|
403 |
-
methodologies. Multiple block-dependent models $f_N(W^{(t)})$ are used during
training time. A size-agnostic model with a single set of trained weights $W$
|
405 |
-
is then used during inference.
|
406 |
-
Require: {X_m^(N), S_m^(N), Z_m^(N)}, m ∈ [0, M), N ∈ {4, 8, 16}
Require: f_N(W^(t)): block-size-N model with shared weights W^(t)
Require: L_reg^(t): objective function at training step t
 1: t ← 0 (initialise timestep)
 2: while t not converged do
 3:   for m ∈ [0, M) do
 4:     for N ∈ {4, 8, 16} do
 5:       t ← t + 1
 6:       L_reg^(t) ← MSE(Z_m^(N), f_N(X_m^(N), S_m^(N), W^(t-1)))
 7:       g^(t) ← ∇_W L_reg^(t) (get gradients at step t)
 8:       W^(t) ← optimiser(g^(t))
 9:     end for
10:   end for
11: end while
Fig. 5. Training algorithm for the proposed multi-model architecture.
|
430 |
-
multi-model the ground-truth is defined as $Z_m^{(N)}$, for a given input $\{X_m^{(N)}, S_m^{(N)}\}$, and the set of instances from a database of $M$ samples or batches is defined as $\{X_m^{(N)}, S_m^{(N)}, Z_m^{(N)}\}$, where $m = 0, 1, \dots, M-1$ and $N \in \{4, 8, 16\}$ is the set of supported square block sizes $N \times N$ (the method can be extended to a different set of sizes). As shown in Figure 4, multiple block-dependent models $f_N(W)$ with shared weights $W$ are updated in a concurrent way following the order of supported block sizes. At training step $t$, the individual model $f_N(W^{(t)})$ is updated, obtaining a new set of weights $W^{(t+1)}$. Finally, a single set of trained weights $W$ is used during inference, obtaining a size-agnostic model $f(W)$. Model parameters are updated by minimising the Mean Square Error (MSE) regression loss $L_{reg}$, as in:
|
450 |
-
$$L_{reg}^{(t)} = \frac{1}{C N^2} \left\| Z_m^{(N)} - f_N(X_m^{(N)}, S_m^{(N)}, W^{(t-1)}) \right\|_2^2 \qquad (2)$$
where $C = 2$ refers to the number of predicted chroma components, and $f_N(W^{(t-1)})$ is the block-dependent model at training step $t-1$.
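A minimal sketch of this shared-weight training loop is given below, assuming a PyTorch model whose fully-convolutional layers accept any block size and a user-provided batches(n, m) function returning the luma block, boundary volume and ground-truth chroma for size n; both names are illustrative and not part of the paper.

import torch

def train_multi_model(model, batches, optimiser, num_batches, sizes=(4, 8, 16), epochs=10):
    # Shared-weight multi-model training (cf. Fig. 5): the same weights W are
    # updated in turn for every supported block size.
    mse = torch.nn.MSELoss()
    for _ in range(epochs):                  # "while t not converged" in Fig. 5
        for m in range(num_batches):
            for n in sizes:
                x, s, z = batches(n, m)      # luma block, boundary volume, target chroma
                loss = mse(model(x, s), z)   # MSE regression loss of Eq. 2 (up to scaling)
                optimiser.zero_grad()
                loss.backward()              # gradients w.r.t. the shared weights
                optimiser.step()             # single set of weights W updated at this step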
|
460 |
-
Fig. 6. Visualisation of the receptive field of a 2-layer convolutional branch with 3×3 kernels. Observe that an output pixel in layer 2 is computed by applying a 3×3 kernel over a field F1 of 3×3 samples from the first layer's output space. Similarly, each of the F1 values are computed by means of another 3×3 kernel looking at a field F0 of 5×5 samples over the input.
|
465 |
-
B. Simplified convolutions
|
466 |
-
Convolutional layers are responsible for most of the net-
|
467 |
-
work’s complexity. For instance, based on the network hyper-
|
468 |
-
parameters from experiments in Section V, the luma convo-
|
469 |
-
lutional branch and the prediction head branch (with 3×3 convolutional kernels) alone contain 46,882 out of 51,714 parameters, which constitute more than 90% of the parameters
|
472 |
-
in the entire model. Therefore, the model complexity can be
|
473 |
-
significantly reduced if convolutional layers can be simpli-
|
474 |
-
fied. This subsection explains how a new simplified structure
|
475 |
-
beneficial for practical implementation can be devised by
|
476 |
-
removing activation functions, i.e. by removing non-linearities.
|
477 |
-
It is important to stress that such process is devised only for
|
478 |
-
application on carefully selected layers, i.e. for branches where
|
479 |
-
such simplification does not significantly reduce expected
|
480 |
-
performance.
|
481 |
-
Consider a specific two-layer convolutional branch (e.g. the luma convolutional branch from Figure 2) formulated as:
$$Y = R(W_2 * R(W_1 * X + b_1) + b_2) \qquad (3)$$
where $C_i$ are the number of features in layer $i$, $b_i \in \mathbb{R}^{C_i}$ are biases, $K_i \times K_i$ are square convolutional kernel sizes, $W_1 \in \mathbb{R}^{K_1^2 \times C_0 \times C_1}$ and $W_2 \in \mathbb{R}^{K_2^2 \times C_1 \times C_2}$ are the weights of the first ($i = 1$) and the second ($i = 2$) layers, respectively, $C_0$ the dimension of the input feature map, $R$ is a Rectified Linear Unit (ReLU) non-linear activation function and $*$ denotes the convolution operation. Input to the branch is $X \in \mathbb{R}^{N^2 \times C_0}$ and the result is a volume of features $Y \in \mathbb{R}^{N^2 \times C_2}$, which correspond to $X_0$ and $X_2$ from Figure 2, respectively. Removing non-linearities, the given branch can be simplified as:
$$\hat{Y} = W_2 * (W_1 * X + b_1) + b_2 \qquad (4)$$
|
498 |
-
where it can be observed that new convolution and bias terms can be defined using trained parameters from the two initial layers, to form a new single layer:
$$\hat{Y} = W_c * X + b_c \qquad (5)$$
Fig. 7. Visualisation of the learnt colour space resulting from encoding input YCbCr colours to the 3-dimensional hidden space of the autoencoder.
where $W_c \in \mathbb{R}^{[\hat{K}^2 \cdot C_0] \times C_2}$ is a function of $W_1$ and $W_2$ with $\hat{K} = K_1 + K_2 - 1$, and $b_c$ is a constant vector derived from $W_2$, $b_1$ and $b_2$. Figure 6 (a) illustrates the operations performed in
|
507 |
-
Eq. 4 for $K_1 = K_2 = 3$ and $C = 1$. Analysing the receptive field of the whole branch, a pixel within the output volume $Y$ is computed by applying a $K_2 \times K_2$ kernel over a field $F_1$ from the first layer's output space. Similarly, each of the $F_1$ values is computed by means of another $K_1 \times K_1$ kernel looking at a field $F_0$. Without the non-linearities, an equivalent of this process is obtained in simplified form, see Figure 6 (b) and Eq. 5. Notice that $\hat{K} = K_1 + K_2 - 1$ equals 5 in the example in Figure 6. For a
|
515 |
-
variety of parameters, including the values of C0,CiandKi
|
516 |
-
used in [4] and in this paper, this simplification of concatenated
|
517 |
-
convolutional layers allows reduction of model’s parameters at
|
518 |
-
inference time, which will be shown in Section V-C.
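The identity behind this merge can be checked numerically; the sketch below does so for two single-channel 3×3 kernels and unpadded ("valid") convolutions, the setting in which the equality of Eqs. 4 and 5 is exact (with "same" padding the two differ slightly near block borders). The kernels and input are arbitrary test values.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
w1, b1 = torch.randn(1, 1, 3, 3), torch.randn(1)   # first linear conv layer
w2, b2 = torch.randn(1, 1, 3, 3), torch.randn(1)   # second linear conv layer
x = torch.randn(1, 1, 16, 16)

# sequential application without activations (Eq. 4): 16 -> 14 -> 12
y_seq = F.conv2d(F.conv2d(x, w1, b1), w2, b2)

# merged single layer (Eq. 5): K_hat = 3 + 3 - 1 = 5, and the bias b1 is
# folded through the second kernel into a constant term
w_c = F.conv2d(w1, torch.flip(w2, dims=[-1, -2]), padding=2)   # 5x5 combined kernel
b_c = b2 + b1 * w2.sum()
y_merged = F.conv2d(x, w_c, b_c)                               # 16 -> 12 in one step

print(torch.allclose(y_seq, y_merged, atol=1e-4))              # True up to float error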
|
519 |
-
Finally, it should be noted that we limit the removal of
|
520 |
-
activation functions only to branches which include more than
|
521 |
-
one layer, from which at least one layer has Ki>1, and only
|
522 |
-
the activation functions between layers in the same branch are
|
523 |
-
removed (to be able to merge them as in Equation 5). In such
|
524 |
-
branches with at least one Ki>1the number of parameters
|
525 |
-
is typically very high, while the removal of non-linearities
|
526 |
-
typically does not impact prediction performance. Activation
|
527 |
-
functions are not removed from the remaining layers. It should
|
528 |
-
be noted that in the attention module and at the intersections
|
529 |
-
of various branches the activation functions are critical and
|
530 |
-
therefore are left unchanged. Section V-C performs an ablation
|
531 |
-
test to evaluate the effect of removing the non-linearities, and
|
532 |
-
a test to evaluate how would a convolutional branch directly
|
533 |
-
trained with large-support kernels ^Kperform.
|
534 |
-
C. Simplified cross-component boundary branch
|
535 |
-
In the baseline model, the cross-component boundary
|
536 |
-
branch transforms the boundary inputs $S \in \mathbb{R}^{3 \times b}$ into $D$-dimensional feature vectors. More specifically, after applying $J = 2$ consecutive 1×1 convolutional layers, the branch encodes each boundary colour into a high dimensional feature
|
540 |
-
space. It should be noted that a colour is typically represented
|
541 |
-
by 3 components, indexed within a system of coordinates
|
542 |
-
(referred to as the colour space). As such, a three-dimensional
|
543 |
-
feature space can be considered as the space with minimum
|
544 |
-
dimensionality that is still capable of representing colour
|
545 |
-
information. Therefore, this work proposes the use of autoen-
|
546 |
-
coders (AE) to reduce the complexity of the cross-component
|
547 |
-
boundary branch, by compacting the D-dimensional feature
|
548 |
-
space into a reduced, 3-dimensional space. An AE tries to learn
|
549 |
-
an approximation to the identity function $h(x) \approx x$ such that
|
550 |
-
the reconstructed output ^xis as close as possible to the input
|
551 |
-
x. The hidden layer will have a reduced dimensionality with
|
552 |
-
respect to the input, which also means that the transformation
|
553 |
-
process may introduce some distortion, i.e. the reconstructed
|
554 |
-
output will not be identical to the input.
|
555 |
-
An AE consists of two networks, the encoder fwhich maps
|
556 |
-
the input to the hidden features, and the decoder gwhich
|
557 |
-
reconstructs the input from the hidden features. Applying this
|
558 |
-
concept, a compressed representation of the input can be
|
559 |
-
obtained by using the encoder part alone, with the goal of
|
560 |
-
reducing the dimensionality of the input vectors. The encoder
|
561 |
-
network automatically learns how to reduce the dimensions
|
562 |
-
of the input vectors, in a similar fashion to what could be
|
563 |
-
obtained applying a manual Principal Component Analysis
|
564 |
-
(PCA) transformation. The transformation learned by the AE
|
565 |
-
can be trained using the same loss function that is used in the
|
566 |
-
PCA process [25]. Figure 7 shows the mapping function of
|
567 |
-
the resulting colour space when applying the encoder network
|
568 |
-
over the YCbCr colour space.
|
569 |
-
Overall, the proposed simplified cross-component boundary
|
570 |
-
branch consists of two 1×1 convolutional layers using Leaky ReLU activation functions with a slope $\alpha = 0.2$. First, a $D$-dimensional layer is applied over the boundary inputs $S$ to obtain $S_1 \in \mathbb{R}^{D \times b}$ feature maps. Then, $S_1$ is fed to the AE's encoder layer $f$ with output 3 dimensions, to obtain the hidden feature maps $S_2 \in \mathbb{R}^{3 \times b}$. Finally, a third 1×1 convolutional layer (corresponding to the AE decoder layer $g$) is applied to generate the reconstructed maps $\tilde{S}_1$ with $D$ dimensions.
|
578 |
-
Notice that the decoder layer is only necessary during the
|
579 |
-
training stage to obtain the reconstructed inputs necessary to
|
580 |
-
derive the values of the loss function. Only the encoder layer
|
581 |
-
is needed when using the network, in order to transform the
|
582 |
-
input feature vectors into the 3dimensional, reduced vectors.
|
583 |
-
Figure 3 illustrates the branch architecture and its integration
|
584 |
-
within the simplified multi-model.
|
585 |
-
Finally, in order to interpret the behaviour of the branch
|
586 |
-
and to identify prediction patterns, a sparsity constraint can
|
587 |
-
be imposed on the loss function. Formally, the following can
|
588 |
-
be used:
|
589 |
-
$$L_{AE} = \frac{\lambda_r}{D b} \left\| S_1 - \tilde{S}_1 \right\|_2^2 + \frac{\lambda_s}{3 b} \left\| S_2 \right\|_1 \qquad (6)$$
|
593 |
-
where the right-most term is used to keep the activation
|
594 |
-
functions in the hidden space remain inactive most of the
|
595 |
-
time, and only return non-zero values for the most descriptive
|
596 |
-
samples. In order to evaluate the effect of the sparsity term,
|
597 |
-
Section V-C performs an ablation test that shows its positive
|
598 |
-
regularisation properties during training.
|
599 |
-
The objective function in Equation 2 can be updated such that the global multi-model loss $L$ considers both $L_{reg}$ and $L_{AE}$ as:
$$L = \lambda_{reg} L_{reg} + \lambda_{AE} L_{AE} \qquad (7)$$
where $\lambda_{reg}$ and $\lambda_{AE}$ control the contribution of both losses.
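Putting the pieces of this subsection together, the sketch below shows one possible PyTorch rendering of the simplified boundary branch and of the sparse autoencoder loss; the feature depth D = 32, the loss weights and the exact placement of the Leaky ReLU activations are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryBranch(nn.Module):
    # 1x1 convolutions over the b boundary samples: colour -> D features,
    # an encoder f down to 3 dimensions, and a decoder g used only in training.
    def __init__(self, d=32):
        super().__init__()
        self.embed = nn.Conv1d(3, d, kernel_size=1)
        self.encoder = nn.Conv1d(d, 3, kernel_size=1)
        self.decoder = nn.Conv1d(3, d, kernel_size=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, s):                       # s: (batch, 3, b) boundary colours
        s1 = self.act(self.embed(s))            # S1: D-dimensional features
        s2 = self.act(self.encoder(s1))         # S2: reduced 3-dimensional features
        return s1, s2, self.decoder(s2)         # S1, S2 and the reconstruction of S1

def sparse_ae_loss(s1, s2, s1_rec, lam_r=1.0, lam_s=1.0):
    # Eq. 6 in spirit: scaled L2 reconstruction error plus an L1 sparsity term.
    return lam_r * F.mse_loss(s1_rec, s1) + lam_s * s2.abs().mean()

# the global objective of Eq. 7 then combines it with the regression loss:
# total_loss = lam_reg * l_reg + lam_ae * sparse_ae_loss(s1, s2, s1_rec)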
|
603 |
-
D. Integer precision approximation
|
604 |
-
While the training algorithm results in IEEE-754 64-bit
|
605 |
-
floating point weights and prediction buffers, an additional
|
606 |
-
simplification is proposed in this paper whereby the network
|
607 |
-
weights and prediction buffers are represented using fixed-
|
608 |
-
point integer arithmetic. This is beneficial for deployment of
|
609 |
-
resulting multi-models in efficient hardware implementations,
|
610 |
-
in which complex operations such as Leaky ReLU and softmax
|
611 |
-
activation functions can become serious bottlenecks. All the
|
612 |
-
network weights obtained after the training stage are therefore
|
613 |
-
appropriately quantised to fit 32-bit signed integer values. It
|
614 |
-
should be noted that integer approximation introduces quanti-
|
615 |
-
sation errors, which may have an impact on the performance
|
616 |
-
of the overall predictions.
|
617 |
-
In order to prevent arithmetic overflows after performing
|
618 |
-
multiplications or additions, appropriate scaling factors are
|
619 |
-
defined for each layer during each of the network predic-
|
620 |
-
tion steps. To further reduce the complexity of the integer
|
621 |
-
approximation, the scaling factor $K_l$ for a given layer $l$ is
obtained as a power of 2, namely $K_l = 2^{O_l}$, where $O_l$ is the
|
623 |
-
respective precision offset. This ensures that multiplications
|
624 |
-
can be performed by means of simple binary shifts. Formally,
|
625 |
-
the integer weights $\tilde{W}_l$ and biases $\tilde{b}_l$ for each layer $l$ in the network with weights $W_l$ and bias $b_l$ can be obtained as:
$$\tilde{W}_l = \lfloor W_l \cdot 2^{O_l} \rfloor, \qquad \tilde{b}_l = \lfloor b_l \cdot 2^{O_l} \rfloor \qquad (8)$$
|
628 |
-
The offset $O_l$ depends on the offset used on the previous layer $O_{l-1}$, as well as on an internal offset $O_x$ necessary to preserve as much decimal information as possible, compensating for the quantisation that occurred in the previous layer, namely $O_l = O_x - O_{l-1}$.
|
633 |
-
Furthermore, in this approach the values predicted by the
|
634 |
-
network are also integers. In order to avoid defining large
|
635 |
-
internal offsets at each layer, namely having large values of
|
636 |
-
Ox, an additional stage of compensation is applied to the
|
637 |
-
predicted values, to keep their values in the range of 32-bit
|
638 |
-
signed integer. For this purpose, another offset Oyis defined,
|
639 |
-
computed as Oy=Ox Ol. The values generated by layer l
|
640 |
-
are then computed as:
|
641 |
-
Yl= (( ~WT
|
642 |
-
lXl+~bl) + (1<<(Oy 1)))>>O y; (9)
|
643 |
-
where<< and>> represent the left and right binary shifts,
|
644 |
-
respectively, and the offset (1<<(Oy 1))is considered to
|
645 |
-
reduce the rounding error.
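A small numpy sketch of this fixed-point scheme follows; the layer shape and the offsets O_{l-1} and O_x are illustrative values, and the code only mirrors the mechanics of Eqs. 8 and 9 rather than the offsets actually chosen in the codec.

import numpy as np

def quantise_layer(w, b, o_l):
    # Eq. 8: scale by 2^O_l and floor to signed integers
    return (np.floor(w * 2 ** o_l).astype(np.int64),
            np.floor(b * 2 ** o_l).astype(np.int64))

def integer_layer(x_int, w_int, b_int, o_y):
    # Eq. 9: integer product, rounding offset, then right shift by O_y
    acc = w_int.T @ x_int + b_int
    return (acc + (1 << (o_y - 1))) >> o_y

rng = np.random.default_rng(0)
w, b = rng.normal(size=(4, 3)), rng.normal(size=3)
o_prev, o_x = 6, 14                                  # previous-layer offset and internal offset
o_l = o_x - o_prev                                   # O_l = O_x - O_{l-1}
w_int, b_int = quantise_layer(w, b, o_l)
x_int = np.array([12, -7, 3, 25], dtype=np.int64)    # integer inputs from the previous layer
y_int = integer_layer(x_int, w_int, b_int, o_y=o_x - o_l)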
|
646 |
-
Complex operations requiring floating point divisions need
|
647 |
-
to be approximated to integer precision. The Leaky ReLU
|
648 |
-
activation functions applied on the cross-component boundary
|
649 |
-
branch use a slope $\alpha = 0.2$ which multiplies the negative values. Such an operation can be simply approximated by defining a new activation function $\tilde{A}(x)$ for any input $x$ as follows:
$$\tilde{A}(x) = \begin{cases} x & x \geq 0 \\ 26x \gg 7 & x < 0 \end{cases} \qquad (10)$$
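This activation translates directly into integer code; a short numpy rendering (written under the reading that the positive branch is the identity, as in a standard Leaky ReLU, with 26/128 approximating the 0.2 slope) is:

import numpy as np

def leaky_relu_int(x):
    # Eq. 10: x unchanged for x >= 0, otherwise (26 * x) >> 7, i.e. roughly 0.2 * x
    x = np.asarray(x, dtype=np.int64)
    return np.where(x >= 0, x, (26 * x) >> 7)

print(leaky_relu_int([-640, -128, 0, 100]))   # [-130  -26    0  100]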
|
656 |
-
Conversely, the softmax operations used in the attention
|
657 |
-
module are approximated following a more complex method-
|
658 |
-
ology, similar to the one used in [26]. Consider the matrix $M$ as defined in Equation 1, a given row $j$ in $M$, and a vector $m_j$ as input to the softmax operation. First, all elements $m_j$ in a row are subtracted by the maximum element in the row, namely:
$$\hat{m}_{i,j} = m_{i,j}/T - \max_i(m_{i,j}/T) \qquad (11)$$
|
664 |
-
where $T$ is the temperature of the softmax operation, set to 0.5 as previously mentioned. The transformed elements $\hat{m}_{i,j}$ range between the minimum signed integer value and zero, because the arguments $\hat{m}_{i,j}$ are obtained by subtracting the elements in $M$ by the maximum element in each row. To further reduce the possibility of overflows, this range is further clipped to a minimum negative value, set to a pre-determined number $V_e$, so that any $\hat{m}_{i,j} < V_e$ is set equal to $V_e$.
The elements $\hat{m}_{i,j}$ are negative integer numbers within the range $[V_e, 0]$, meaning there is a fixed number of $N_e = |V_e| + 1$ possible values they can assume. To further simplify the process, the exponential function is replaced by a pre-computed look-up table containing $N_e$ integer elements. To
|
677 |
-
minimise the approximation error, the exponentials are scaled
|
678 |
-
by a given scaling factor before being approximated to the
|
679 |
-
nearest integer and stored in the corresponding look-up table
|
680 |
-
LUT-EXP. Formally, for a given index $k$, where $0 \leq k \leq N_e - 1$, the $k$-th integer input is obtained as $s_k = V_e + k$. The $k$-th element in the look-up table can then be computed as the approximated, scaled exponential value for $s_k$, or:
$$LUT\text{-}EXP(k) = \lfloor K_e \, e^{s_k} \rfloor \qquad (12)$$
where $K_e = 2^{O_e}$ is the scaling factor, chosen in a way to maximise the preservation of the original decimal information.
|
687 |
-
When using the look-up table during the prediction process,
|
688 |
-
given an element $\hat{m}_{i,j}$ the corresponding index $k$ can be retrieved as $k = |V_e - \hat{m}_{i,j}|$, to produce the numerator
|
690 |
-
in the softmax function.
|
691 |
-
The integer approximation of the softmax function can then
|
692 |
-
be written as:
|
693 |
-
$$\hat{\alpha}_{j,i} = \frac{LUT\text{-}EXP(|V_e - \hat{m}_{i,j}|)}{D(j)} \qquad (13)$$
where:
$$D(j) = \sum_{n=0}^{b-1} LUT\text{-}EXP(|V_e - \hat{m}_{n,j}|) \qquad (14)$$
|
698 |
-
Equation 13 implies performing an integer division between
|
699 |
-
the numerator and denominator. This is not ideal, and integer
|
700 |
-
divisions are typically avoided in low complexity encoder
|
701 |
-
implementations. A simple solution to remove the integer
|
702 |
-
division can be obtained by replacing it with a binary shift.
|
703 |
-
However, a different approach is proposed in this paper to
|
704 |
-
provide a more robust approximation that introduces smaller
|
705 |
-
errors in the division. The denominator $D(j)$ as in Equation 14 is obtained as the sum of $b$ values extracted from LUT-EXP, where $b$ is the number of reference samples extracted from the boundary of the block. As such, the largest blocks under consideration (16×16) will result in the largest possible value of reference samples $b_{MAX}$. This means that the maximum value that this denominator can assume is obtained when $b = b_{MAX}$ and when all inputs $\hat{m}_{i,j} = 0$ (which correspond to $LUT\text{-}EXP(|V_e|) = K_e$), corresponding to $V_s = b_{MAX} K_e$. Similarly, the minimum value (obtained when $\hat{m}_{i,j} = V_e$) is 0. Correspondingly, $D(j)$ can assume any positive integer value in the range $[0, V_s]$.
|
716 |
-
Considering a given scaling factor $K_s = 2^{O_s}$, integer division by $D(j)$ can be approximated using a multiplication by the factor $M(j) = \lfloor K_s / D(j) \rfloor$. A given value of $M(j)$ could be computed for all $V_s + 1$ possible values of $D(j)$. Such values can then be stored in another look-up table LUT-SUM. Clearly though, $V_s$ is too large, which means LUT-SUM would be impractical to use due to storage and complexity constraints. For that reason, a smaller table is used, obtained by quantising the possible values of $D(j)$. A pre-defined step $Q$ is used, resulting in $N_s = (V_s + 1)/Q$ quantised values of $D(j)$. The table LUT-SUM of size $N_s$ is then filled accordingly, where each element in the table is obtained as:
$$LUT\text{-}SUM(l) = \lfloor K_s / (l \cdot Q) \rfloor \qquad (15)$$
|
729 |
-
Finally, when using the table during the prediction process, given an integer sum $D(j)$, the corresponding index $l$ can be retrieved as $l = \lfloor D(j)/Q \rfloor$. Following from these simplifications, given an input $\hat{m}_{i,j}$ obtained as in Equation 11, the integer sum $D(j)$ obtained from Equation 14, and a quantisation step $Q$, the simplified integer approximation of the softmax function can eventually be obtained as:
$$\tilde{\alpha}_{j,i} = LUT\text{-}EXP(|V_e - \hat{m}_{i,j}|) \cdot LUT\text{-}SUM(\lfloor D(j)/Q \rfloor) \qquad (16)$$
Notice that the $\tilde{\alpha}_{j,i}$ values are finally scaled by $K_o = K_e K_s$.
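The complete LUT-based path of Eqs. 11-16 can be prototyped in a few lines of numpy; the constants V_e, O_e, O_s, Q and the maximum boundary length below are illustrative choices, and the output of this particular sketch carries a scale of K_s (the scale actually folded into later processing stages by an implementation may differ).

import numpy as np

V_E, O_E, O_S, Q, B_MAX = -64, 10, 16, 64, 32        # illustrative constants
K_E, K_S = 2 ** O_E, 2 ** O_S
LUT_EXP = np.floor(K_E * np.exp(np.arange(V_E, 1))).astype(np.int64)   # Eq. 12, k = s_k - V_e
V_S = B_MAX * K_E                                     # largest possible denominator D(j)
N_S = (V_S + 1) // Q + 1
LUT_SUM = np.zeros(N_S, dtype=np.int64)
LUT_SUM[1:] = K_S // (np.arange(1, N_S) * Q)          # Eq. 15

def int_softmax_row(m_row):
    # Eqs. 11-16 for one row of the attention matrix (temperature T = 0.5 -> multiply by 2)
    m = 2 * m_row - np.max(2 * m_row)                 # Eq. 11
    m = np.maximum(m, V_E)                            # clip to [V_e, 0]
    num = LUT_EXP[np.abs(V_E - m)]                    # numerators of Eq. 13
    d = num.sum()                                     # Eq. 14
    return num * LUT_SUM[d // Q]                      # Eq. 16

row = np.array([3, 1, 0, -2], dtype=np.int64)
print(int_softmax_row(row) / K_S)                     # close to softmax(2 * row), up to LUT error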
|
738 |
-
V. EXPERIMENTS
|
739 |
-
A. Training settings
|
740 |
-
Training examples were extracted from the DIV2K dataset
|
741 |
-
[27], which contains high-definition high-resolution content of
|
742 |
-
large diversity. This database contains 800 training samples
|
743 |
-
and 100 samples for validation, providing 6 lower resolution versions with downsampling by factors of 2, 3 and 4 with bilinear and unknown filters. For each data instance, one resolution was randomly selected and then M blocks of each N×N size (N = 4, 8, 16) were chosen, making balanced sets
|
748 |
-
between block sizes and uniformed spatial selections within
|
749 |
-
each image. Moreover, 4:2:0 chroma sub-sampling is assumed,
|
750 |
-
where the same downsampling filters implemented in VVC are
|
751 |
-
used to downsample co-located luma blocks to the size of the
|
752 |
-
corresponding chroma block. All the schemes were trained
|
753 |
-
from scratch using the Adam optimiser [28] with a learning
|
754 |
-
rate of $10^{-4}$.
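In code, this training configuration corresponds to the usual optimiser setup (the network being trained is passed in as model):

import torch

def make_optimiser(model):
    # Adam optimiser with the learning rate used for all schemes
    return torch.optim.Adam(model.parameters(), lr=1e-4)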
|
755 |
-
B. Integration into VVC
|
756 |
-
The methods introduced in the paper were integrated
|
757 |
-
within a VVC encoder, using the VVC Test Model (VTM)
|
758 |
-
7.0 [29]. The integration of the proposed NN-based cross-
|
759 |
-
component prediction into the VVC coding scheme requires
|
760 |
-
normative changes not only in the prediction process, but also
|
761 |
-
in the way the chroma intra-prediction mode is signalled in
|
762 |
-
the bitstream and parsed by the decoder.
|
763 |
-
TABLE I
NETWORK HYPERPARAMETERS DURING TRAINING
Branch (C_in; K×K; C_out)  | Scheme 1 & 3                        | Scheme 2
CC Boundary                | 3;1×1;32 / 32;1×1;32                | 3;1×1;32 / 32;1×1;3
Luma Convolutional         | 1;3×3;64 / 64;3×3;64                | 1;3×3;64 / 64;3×3;64
Attention Module           | 32;1×1;16 / 64;1×1;16 / 64;1×1;32   | 32;1×1;16 / 64;1×1;16 / 64;1×1;3
Prediction Head            | 32;3×3;32 / 32;1×1;2                | 3;3×3;3 / 3;1×1;2
|
780 |
-
TABLE II
NETWORK HYPERPARAMETERS DURING INFERENCE
Branch (C_in; K×K; C_out)  | Scheme 1 & 3                        | Scheme 2
CC Boundary                | 3;1×1;32 / 32;1×1;32                | 3;1×1;32 / 32;1×1;3
Luma Convolutional         | 1;5×5;64                            | 1;5×5;64
Attention Module           | 32;1×1;16 / 64;1×1;16 / 64;1×1;32   | 32;1×1;16 / 64;1×1;16 / 64;1×1;3
Prediction Head            | 32;3×3;2                            | 3;3×3;2
|
793 |
-
A new block-level syntax flag is introduced to indicate
|
794 |
-
whether a given block makes use of one of the proposed
|
795 |
-
schemes. If the proposed NN-based method is used, a pre-
|
796 |
-
diction is computed for the two chroma components. No
|
797 |
-
additional information is signalled related to the chroma intra-
|
798 |
-
prediction mode for the block. Conversely, if the method
|
799 |
-
is not used, the encoder proceeds in signalling the chroma
|
800 |
-
intra-prediction mode as in conventional VVC specifications.
|
801 |
-
For instance, a subsequent flag is signalled to identify if
|
802 |
-
conventional LM modes are used in the current block or not.
|
803 |
-
The prediction path also needs to accommodate the new NN-
|
804 |
-
based predictions. This largely reuses prediction blocks that
|
805 |
-
are needed to perform conventional CCLM modes. In terms
|
806 |
-
of mode selection at the encoder side, the new NN-based mode
|
807 |
-
is added to the conventional list of modes to be tested in full
|
808 |
-
rate-distortion sense.
|
809 |
-
C. Architecture configurations
|
810 |
-
The proposed multi-model architectures and simplifications
|
811 |
-
(Section IV) are implemented in 3 different schemes:
|
812 |
-
Scheme 1: Multi-model architecture (Section IV-A) ap-
|
813 |
-
plying the methodology in Section IV-B to simplify the
|
814 |
-
convolutional layers within the luma convolutional branch
|
815 |
-
and the prediction branch, as illustrated in Figure 3.
|
816 |
-
Scheme 2: The multi-model architecture in Scheme 1
|
817 |
-
applying the methodology in Section IV-C to simplify
|
818 |
-
the cross-component boundary branch. As shown in Fig-
|
819 |
-
ure 3, the integration of the simplified branch requires
|
820 |
-
modification of the initial architecture with changes in
|
821 |
-
the attention module and the prediction branch.
|
822 |
-
Scheme 3: Architecture in Scheme 1 with the integer
|
823 |
-
precision approximations described in Section IV-D.
|
824 |
-
In contrast to previous state-of-the-art methods, the pro-
|
825 |
-
posed multi-model does not need to adapt its architecture to the input block size. Notice that the fully-convolutional
|
826 |
-
architecture introduced in [4] enables this design and is able
|
827 |
-
to significantly reduce the complexity of the cross-component
|
828 |
-
boundary branch in [2], which uses size-dependent fully-
|
829 |
-
connected layers. Table I shows the network hyperparameters
|
830 |
-
of the proposed schemes during training, whereas Table II
|
831 |
-
shows the resulting hyperparameters for inference after ap-
|
832 |
-
plying the proposed simplifications. As shown in Tables III
|
833 |
-
and IV, the employed number of parameters in the proposed
|
834 |
-
schemes represents the trade-off between complexity and
|
835 |
-
prediction performance, within the order of magnitude of
|
836 |
-
related attention-based CNNs in [4]. The proposed simplifi-
|
837 |
-
cations significantly reduce (around 90%) the original training
|
838 |
-
parameters, achieving lighter architectures for inference time.
|
839 |
-
Table III shows that the inference version of Scheme 2 reduces
|
840 |
-
to around 85%, 96% and 99% the complexity of the hybrid
|
841 |
-
CNN models in [2] and to around 82%, 96% and 98% the
|
842 |
-
complexity of the attention-based models in [4], for 4×4, 8×8
and 16×16 input block sizes, respectively. Finally, in order
|
844 |
-
to provide more insights about the computational cost and
|
845 |
-
compare the proposed schemes with the state-of-the-art meth-
|
846 |
-
ods, Table V shows the number of floating point operations
|
847 |
-
(FLOPs) for each architecture per block size. The reduction of
|
848 |
-
operations (e.g. additions and matrix multiplications) to arrive
|
849 |
-
at the predictions is one of the predominant factors towards the
|
850 |
-
given speedups. Notice the significant reduction of FLOPs for
|
851 |
-
the proposed inference models.
|
852 |
-
In order to obtain a preliminary evaluation of the proposed
|
853 |
-
schemes and to compare their prediction performance with the
|
854 |
-
state-of-the-art methods, the trained models were tested on the
|
855 |
-
DIV2K validation set (with 100 multi-resolution images) by
|
856 |
-
means of averaged PSNR. Test samples were obtained with
|
857 |
-
the same methodology as used in Section V-A for generating
|
858 |
-
the training dataset. Notice that this test uses the training
|
859 |
-
version of the proposed schemes. As shown in Table IV, the
|
860 |
-
multi-model approach introduced in Scheme 1 improves the
|
861 |
-
attention-based CNNs in [4] for 4×4 and 8×8 blocks, while
only a small performance drop can be observed for 16×16
|
863 |
-
blocks. However, because of using a fixed architecture for all
|
864 |
-
block sizes, the proposed multi-model architecture averages
|
865 |
-
the complexity of the individual models in [4] (Table III),
|
866 |
-
slightly increasing the complexity of the 4×4 model and simplifying the 16×16 architecture. The complexity reduction in the 16×16 model leads to a small drop in performance. As can be observed from Table IV, the generalisation process
|
870 |
-
induced by the multi-model methodology ([4] with multi-
|
871 |
-
model, compared to [4]) can minimise such drop by distilling
|
872 |
-
knowledge from the rest of block sizes, which is especially
|
873 |
-
evident for 8×8 blocks where a reduced architecture can
|
874 |
-
improve the state-of-the-art performance.
|
875 |
-
Finally, the simplifications introduced in Scheme 2 (e.g.
|
876 |
-
the architecture changes required to integrate the modified
|
877 |
-
cross-component boundary branch within the original model)
|
878 |
-
lower the prediction performance of Scheme 1. However,
|
879 |
-
the highly simplified architecture is capable of outperforming
|
880 |
-
the hybrid CNN models in [2], observing training PSNR
|
881 |
-
improvements of an additional 1.30, 2.21 and 2.31 dB for
|
882 |
-
4×4, 8×8 and 16×16 input block sizes, respectively. The
|
883 |
-
TABLE III
MODEL COMPLEXITY PER BLOCK SIZE
Model (parameters)              | 4×4    | 8×8    | 16×16
Hybrid CNN [2]                  | 24435  | 96116  | 369222
Attention-based CNN [4]         | 21602  | 83106  | 186146
Scheme 1 & 3 (train/inference)  | 51714 / 7074 (all block sizes)
Scheme 2 (train/inference)      | 39371 / 3710 (all block sizes)
|
890 |
-
TABLE IV
PREDICTION PERFORMANCE PER BLOCK SIZE
Model (PSNR)                    | 4×4   | 8×8   | 16×16
Hybrid CNN [2]                  | 28.61 | 31.47 | 33.36
Attention-based CNN [4]         | 30.23 | 33.13 | 36.13
[4] with multi-model            | 30.55 | 33.21 | 36.05
Scheme 1 single layer training  | 30.36 | 33.05 | 35.88
Scheme 2 without sparsity       | 29.89 | 32.66 | 35.64
(proposed) Scheme 1             | 30.54 | 33.20 | 35.99
(proposed) Scheme 2             | 29.91 | 32.68 | 35.67
|
900 |
-
combination of attention-based architectures with the proposed
|
901 |
-
multi-model methodology (Scheme 1) considerably improves
|
902 |
-
the NN-based chroma intra-prediction methods in [2], showing
|
903 |
-
training PSNR improvements by additional 1.93, 1.73 and
|
904 |
-
2.68 dB for the supported block sizes. In Section V-D it will
|
905 |
-
be shown how these relatively small PSNR differences lead to
|
906 |
-
significant differences in codec performance.
|
907 |
-
Several ablations were performed in order to evaluate the
|
908 |
-
effects of the proposed simplifications. First, the effect of
|
909 |
-
the multi-model methodology is evaluated by directly con-
|
910 |
-
verting the models in [4] to the size-agnostic architecture in
|
911 |
-
Scheme 1 but without the simplifications in Section IV-B
|
912 |
-
([4] with multi-model). As can be shown in Table IV, such
|
913 |
-
methodology improves the 4×4 and 8×8 models, with
special emphasis in the 8×8 case where the number of
|
915 |
-
parameters is smaller than in [4]. Moreover, the removal of
|
916 |
-
non-linearities towards Scheme 1 does not significantly affect
|
917 |
-
the performance, with a negligible PSNR loss of around 0.3
|
918 |
-
dB ([4] with multi-model compared with Scheme 1). Secondly,
|
919 |
-
in order to evaluate the simplified convolutions methodology
|
920 |
-
in Section IV-B, a version of Scheme 1 was trained with
|
921 |
-
single-layer convolutional branches with large support kernels
|
922 |
-
(e.g. instead of training 2 linear layers with 3×3 kernels and then combining them into 5×5 kernels for inference, training directly a single-layer branch with 5×5 kernels). Experimental
|
925 |
-
results show the positive effects of the proposed methodology,
|
926 |
-
observing a significant drop of performance when a single-
|
927 |
-
layer trained branch is applied (Scheme 1 with single layer
|
928 |
-
training compared with Scheme 1). Finally, the effect of the
|
929 |
-
sparse autoencoder of Scheme 2 is evaluated by removing
|
930 |
-
the sparsity term in Equation 7. As can be observed, the
|
931 |
-
regularisation properties of the sparsity term, i.e. preventing
|
932 |
-
large activations, boosts the generalisation capabilities of the
|
933 |
-
multi-model and slightly increases the prediction performance
|
934 |
-
by around 0.2 dB. (Scheme 2 without sparsity compared with
|
935 |
-
Scheme 2).
TABLE V
FLOPS PER BLOCK SIZE
Model                           | 4×4    | 8×8     | 16×16
Hybrid CNN [2]                  | 51465  | 187273  | 711945
Attention-based CNN [4]         | 42795  | 165451  | 186146
Scheme 1 & 3 (train/inference)  | 102859 / 13770 (all block sizes)
Scheme 2 (train/inference)      | 79103 / 7225 (all block sizes)
|
942 |
-
D. Simulation Results
|
943 |
-
The VVC reference software VTM-7.0 is used as our
|
944 |
-
benchmark and our proposed methodology is tested under the
|
945 |
-
Common Test Conditions (CTC) [30], using the suggested all-
|
946 |
-
intra configuration for VVC with a QP of 22, 27, 32 and 37. In
|
947 |
-
order to fully evaluate the performance of the proposed multi-
|
948 |
-
models, the encoder configuration is constrained to support
|
949 |
-
only square blocks of 4×4, 8×8 and 16×16 pixels.
|
950 |
-
A corresponding VVC anchor was generated under these
|
951 |
-
conditions. BD-rate is adopted to evaluate the relative com-
|
952 |
-
pression efficiency with respect to the latest VVC anchor. Test
|
953 |
-
sequences include 26 video sequences of different resolutions:
|
954 |
-
3840×2160 (Class A1 and A2), 1920×1080 (Class B),
832×480 (Class C), 416×240 (Class D), 1280×720 (Class
|
956 |
-
E) and screen content (Class F). The “EncT” and “DecT” are
|
957 |
-
“Encoding Time” and “Decoding Time”, respectively.
|
958 |
-
A colour analysis is performed in order to evaluate the
|
959 |
-
impact of the chroma channels on the final prediction per-
|
960 |
-
formance. As suggested in previous colour prediction works
|
961 |
-
[31], standard regression methods for chroma prediction may
|
962 |
-
not be effective for content with wide distributions of colours.
|
963 |
-
A parametric model which is trained to minimise the Euclidean
|
964 |
-
distance between the estimations and the ground truth com-
|
965 |
-
monly tends to average the colours of the training examples
|
966 |
-
and hence produce desaturated results. As shown in Figure 8,
|
967 |
-
several CTC sequences are analysed by computing the loga-
|
968 |
-
rithmic histogram of both chroma components. The width of
|
969 |
-
the logarithmic histograms is compared to the compression
|
970 |
-
performance in Table VI. Gini index [32] is used to quantify
|
971 |
-
the width of the histograms, obtained as
|
972 |
-
$$Gini(H) = 1 - \sum_{b=0}^{B-1} \left( \frac{H(b)}{\sum_{k=0}^{B-1} H(k)} \right)^2 \qquad (17)$$
where $H$ is a histogram of $B$ bins for a given chroma component.
|
978 |
-
Notice that the average value between both chroma compo-
|
979 |
-
nents is used in Table VI. A direct correlation between Gini
|
980 |
-
index and coding performance can be observed in Table VI,
|
981 |
-
suggesting that Scheme 1 performs better for narrower colour
|
982 |
-
distributions. For instance, the Tango 2 sequence with a Gini
|
983 |
-
index of 0.63 achieves an average Y/Cb/Cr BD-rates of -
|
984 |
-
0.46%/-8.13%/-3.13%, whereas Campfire with wide colour
|
985 |
-
histograms (Gini index of 0.98), obtains average Y/Cb/Cr
|
986 |
-
BD-rates of -0.21%/0.14%/-0.88%. Although the distributions
|
987 |
-
of chroma channels can be a reliable indicator of prediction
|
988 |
-
performance, wide colour distributions may not be the only
|
989 |
-
factor in restricting chroma prediction capabilities of proposed
|
990 |
-
methods, which can be investigated in future work.
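For reference, the Gini index of Eq. 17 is straightforward to compute from a histogram; the sketch below uses plain (not logarithmic) histograms and arbitrary synthetic data purely to illustrate that wider colour distributions yield larger values.

import numpy as np

def gini_index(hist):
    # Eq. 17: one minus the sum of squared normalised bin counts
    p = hist / hist.sum()
    return 1.0 - np.sum(p ** 2)

rng = np.random.default_rng(0)
narrow = np.bincount(np.clip(rng.normal(128, 3, 10000).astype(int), 0, 255), minlength=256)
wide = np.bincount(rng.integers(0, 256, 10000), minlength=256)
print(gini_index(narrow), gini_index(wide))   # the wider histogram gives the larger index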
|
991 |
-
A summary of the component-wise BD-rate results for
|
992 |
-
all the proposed schemes and the related attention-based
|
993 |
-
Fig. 8. Comparison of logarithmic colour histograms for different sequences.
|
994 |
-
TABLE VI
BD-RATES (%) SORTED BY GINI INDEX
Sequence        | Scheme 1: Y | Cb    | Cr    | Gini
Tango2          | -0.46       | -8.13 | -3.13 | 0.63
MarketPlace     | -0.59       | -2.46 | -3.06 | 0.77
FoodMarket4     | -0.16       | -1.60 | -1.55 | 0.85
DaylightRoad2   | -0.09       | -5.74 | -1.85 | 0.89
Campfire        | -0.21       |  0.14 | -0.88 | 0.98
ParkRunning3    | -0.31       | -0.73 | -0.77 | 0.99
|
1003 |
-
approach in [4] is shown in Table VII for all-intra conditions.
|
1004 |
-
Scheme 1 achieves an average Y/Cb/Cr BD-rates of -0.25%/-
|
1005 |
-
2.38%/-1.80% compared with the anchor, suggesting that the
|
1006 |
-
proposed multi-model size agnostic methodology can improve
|
1007 |
-
the coding performance of the related attention-based block-
|
1008 |
-
dependent models. Besides improving the coding performance,
|
1009 |
-
Scheme 1 significantly reduces the encoding (from 212% to
|
1010 |
-
164%) and decoding (from 2163% to 1302%) times demon-
|
1011 |
-
strating the positive effect of the inference simplification.
|
1012 |
-
Finally, the proposed simplifications introduced in Scheme
|
1013 |
-
2 and Scheme 3 further reduce the encoding and decoding time
|
1014 |
-
at the cost of a drop in the coding performance. In particular,
|
1015 |
-
the simplified cross-component boundary branch introduced
|
1016 |
-
in Scheme 2, achieves an average Y/Cb/Cr BD-rates of -
|
1017 |
-
0.13%/-1.56%/-1.63% and, compared to Scheme 1, reduces the
|
1018 |
-
encoding (from 164% to 146%) and decoding (from 1302%
|
1019 |
-
to 665%) times. Scheme 3 has lower reduction of encoding
|
1020 |
-
time (154%) than Scheme 2, but it achieves higher reduction
|
1021 |
-
in decoding time (665%), although the integer approximations
|
1022 |
-
lowers the performance achieving average Y/Cb/Cr BD-rates
|
1023 |
-
of -0.16%/-1.72%/-1.38%.
|
1024 |
-
As described in Section IV, the simplified schemes introduced here tackle the complexity reduction of Scheme 1
|
1025 |
-
with two different methodologies. Scheme 2 proposes direct
|
1026 |
-
modifications on the original architecture which need to be
|
1027 |
-
retrained before being integrated in the prediction pipeline.
|
1028 |
-
Conversely, Scheme 3 directly simplifies the final prediction
|
1029 |
-
process by approximating the already trained weights from
|
1030 |
-
Scheme 1 with integer-precision arithmetic. Therefore, the
|
1031 |
-
simulation results suggest that the methodology in Scheme
|
1032 |
-
3 is better at retaining the original performance since a
|
1033 |
-
retraining process is not required. However, the highly reduced
|
1034 |
-
architecture in Scheme 2 is capable of approximating the
|
1035 |
-
performance of Scheme 3 and further reduce the decoder time.
|
1036 |
-
Overall, the comparison results in Table VII demonstrate
|
1037 |
-
that proposed models offer various trade-offs between com-
|
1038 |
-
pression performance and complexity. While it has been shown
|
1039 |
-
that the complexity can be significantly reduced, it is still not
|
1040 |
-
negligible. Challenges for future work include integerisation
|
1041 |
-
of the simplified scheme (Scheme 2) while preventing the
|
1042 |
-
compression drop observed for Scheme 3. Recent approaches,
|
1043 |
-
including a published one which focuses on intra prediction
|
1044 |
-
[24], demonstrate that sophisticated integerisation approaches
|
1045 |
-
can help retain compression performance of originally trained
|
1046 |
-
models while enabling them to become significantly less com-
|
1047 |
-
plex and thus be integrated into future video coding standards.
|
1048 |
-
VI. CONCLUSION
|
1049 |
-
This paper showcased the effectiveness of attention-based
|
1050 |
-
architectures in performing chroma intra-prediction for video
|
1051 |
-
coding. A novel size-agnostic multi-model and its corre-
|
1052 |
-
sponding training methodology were proposed to reduce the
|
1053 |
-
inference complexity of previous attention-based approaches.
|
1054 |
-
Moreover, the proposed multi-model was proven to better
|
1055 |
-
generalise to variable input sizes, outperforming state-of-the-
|
1056 |
-
art attention-based models with a fixed and much simpler
|
1057 |
-
architecture. Several simplifications were proposed to further
|
1058 |
-
reduce the complexity of the original multi-model. First,
|
1059 |
-
a framework for reducing the complexity of convolutional
|
1060 |
-
operations was introduced and was able to derive an infer-
|
1061 |
-
ence model with around 90% fewer parameters than its rela-
|
1062 |
-
tive training version. Furthermore, sparse autoencoders were
|
1063 |
-
applied to design a simplified cross-component processing
|
1064 |
-
model capable of further reducing the coding complexity
|
1065 |
-
of its preceding schemes. Finally, algorithmic insights were
|
1066 |
-
proposed to approximate the multi-model schemes in integer-
|
1067 |
-
precision arithmetic, which could lead to fast and hardware-
|
1068 |
-
aware implementations of complex operations such as softmax
|
1069 |
-
and Leaky ReLU activations.
|
1070 |
-
The proposed schemes were integrated into the VVC an-
|
1071 |
-
chor VTM-7.0, signalling the prediction methodology as a
|
1072 |
-
new chroma intra-prediction mode working in parallel with
|
1073 |
-
traditional modes towards predicting the chroma component
|
1074 |
-
samples. Experimental results show the effectiveness of the
|
1075 |
-
proposed methods, retaining compression efficiency of pre-
|
1076 |
-
viously introduced neural network models, while offering 2
|
1077 |
-
different directions for significantly reducing coding complex-
|
1078 |
-
ity, translated to reduced encoding and decoding times. As
|
1079 |
-
future work, we aim to implement a complete multi-model
|
1080 |
-
TABLE VII
BD-RATE (%) OF Y, Cb AND Cr FOR ALL PROPOSED SCHEMES AND [4] UNDER ALL-INTRA COMMON TEST CONDITIONS

              | Class A1            | Class A2            | Class B             | Class C
              | Y     Cb     Cr     | Y     Cb     Cr     | Y     Cb     Cr     | Y     Cb     Cr
Scheme 1      | -0.28 -3.20  -1.85  | -0.25 -3.11  -1.54  | -0.26 -2.28  -2.33  | -0.30 -1.92  -1.57
Scheme 2      | -0.08 -1.24  -1.26  | -0.12 -1.59  -1.31  | -0.15 -1.80  -2.21  | -0.20 -1.41  -1.62
Scheme 3      | -0.19 -2.25  -1.56  | -0.13 -2.44  -1.12  | -0.16 -1.78  -2.05  | -0.20 -1.44  -1.29
Anchor + [4]  | -0.26 -2.17  -1.96  | -0.22 -2.37  -1.64  | -0.23 -2.00  -2.17  | -0.26 -1.64  -1.41

              | Class D             | Class E             | Class F             | Overall             | EncT[%] | DecT[%]
              | Y     Cb     Cr     | Y     Cb     Cr     | Y     Cb     Cr     | Y     Cb     Cr     |         |
Scheme 1      | -0.29 -1.70  -1.77  | -0.13 -1.59  -1.45  | -0.50 -1.58  -1.99  | -0.25 -2.38  -1.80  | 164%    | 1302%
Scheme 2      | -0.18 -1.42  -1.73  | -0.08 -1.67  -1.40  | -0.34 -1.50  -1.90  | -0.13 -1.56  -1.63  | 146%    | 665%
Scheme 3      | -0.20 -1.64  -1.41  | -0.07 -0.75  -0.46  | -0.37 -1.24  -1.43  | -0.16 -1.72  -1.38  | 154%    | 512%
Anchor + [4]  | -0.25 -1.55  -1.67  | -0.03 -1.35  -1.77  | -0.44 -1.30  -1.55  | -0.21 -1.90  -1.81  | 212%    | 2163%
|
1093 |
-
for all VVC block sizes in order to ensure a full usage
|
1094 |
-
of the proposed approach building on the promising results
|
1095 |
-
shown in the constrained test conditions. Finally, an improved
|
1096 |
-
approach for integer approximations may enable the fusion of
|
1097 |
-
all proposed simplifications, leading to a fast and powerful
|
1098 |
-
multi-model.
|
1099 |
-
REFERENCES
|
1100 |
-
[1] B. Bross, J. Chen, and S. Liu, “Versatile Video Coding (VVC) draft 7,”
|
1101 |
-
Geneva, Switzerland, October 2019.
|
1102 |
-
[2] Y . Li, L. Li, Z. Li, J. Yang, N. Xu, D. Liu, and H. Li, “A hybrid neural
|
1103 |
-
network for chroma intra prediction,” in 2018 25th IEEE International
|
1104 |
-
Conference on Image Processing (ICIP) . IEEE, 2018, pp. 1797–1801.
|
1105 |
-
[3] J. Pfaff, P. Helle, D. Maniry, S. Kaltenstadler, B. Stallenberger,
|
1106 |
-
P. Merkle, M. Siekmann, H. Schwarz, D. Marpe, and T. Wiegand, “Intra
|
1107 |
-
prediction modes based on neural networks,” Document JVET-J0037-
|
1108 |
-
v2, Joint Video Exploration Team of ITU-T VCEG and ISO/IEC MPEG ,
|
1109 |
-
2018.
|
1110 |
-
[4] M. G ´orriz, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak,
|
1111 |
-
“Chroma intra prediction with attention-based CNN architectures,” arXiv
|
1112 |
-
preprint arXiv:2006.15349 , accepted for publication at IEEE ICIP,
|
1113 |
-
October 2020.
|
1114 |
-
[5] L. Murn, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak, “Inter-
|
1115 |
-
preting cnn for low complexity learned sub-pixel motion compensation
|
1116 |
-
in video coding,” arXiv preprint arXiv:2006.06392 , 2020.
|
1117 |
-
[6] P. Helle, J. Pfaff, M. Sch ¨afer, R. Rischke, H. Schwarz, D. Marpe,
|
1118 |
-
and T. Wiegand, “Intra picture prediction for video coding with neural
|
1119 |
-
networks,” in 2019 Data Compression Conference (DCC) . IEEE, 2019,
|
1120 |
-
pp. 448–457.
|
1121 |
-
[7] X. Zhao, J. Chen, A. Said, V . Seregin, H. E. Egilmez, and M. Kar-
|
1122 |
-
czewicz, “Nsst: Non-separable secondary transforms for next generation
|
1123 |
-
video coding,” in 2016 Picture Coding Symposium (PCS) . IEEE, 2016,
|
1124 |
-
pp. 1–5.
|
1125 |
-
[8] K. Zhang, J. Chen, L. Zhang, X. Li, and M. Karczewicz, “Enhanced
|
1126 |
-
cross-component linear model for chroma intra-prediction in video
|
1127 |
-
coding,” IEEE Transactions on Image Processing , vol. 27, no. 8, pp.
|
1128 |
-
3983–3997, 2018.
|
1129 |
-
[9] L. T. Nguyen, A. Khairat and D. Marpe, “Adaptive inter-plane prediction
|
1130 |
-
for RGB content,” Document JCTVC-M0230 , Incheon, April 2013.
|
1131 |
-
[10] M. Siekmann, A. Khairat, T. Nguyen, D. Marpe, and T. Wiegand,
|
1132 |
-
“Extended cross-component prediction in hevc,” APSIPA transactions
|
1133 |
-
on signal and information processing , vol. 6, 2017.
|
1134 |
-
[11] G. Bjontegaard, “Calculation of average PSNR differences between rd-
|
1135 |
-
curves,” VCEG-M33 , 2001.
|
1136 |
-
[12] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
|
1137 |
-
Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances
|
1138 |
-
in neural information processing systems , 2017, pp. 5998–6008.
|
1139 |
-
[13] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and
|
1140 |
-
Y . Bengio, “A structured self-attentive sentence embedding,” arXiv
|
1141 |
-
preprint arXiv:1703.03130 , 2017.
|
1142 |
-
[14] A. P. Parikh, O. T ¨ackstr ¨om, D. Das, and J. Uszkoreit, “A decompos-
|
1143 |
-
able attention model for natural language inference,” arXiv preprint
|
1144 |
-
arXiv:1606.01933 , 2016.
|
1145 |
-
[15] J. Cheng, L. Dong, and M. Lapata, “Long short-term memory-networks
|
1146 |
-
for machine reading,” arXiv preprint arXiv:1601.06733 , 2016.[16] Y . He, X. Zhang, and J. Sun, “Channel pruning for accelerating
|
1147 |
-
very deep neural networks,” in Proceedings of the IEEE International
|
1148 |
-
Conference on Computer Vision , 2017, pp. 1389–1397.
|
1149 |
-
[17] Z. Zhuang, M. Tan, B. Zhuang, J. Liu, Y . Guo, Q. Wu, J. Huang,
|
1150 |
-
and J. Zhu, “Discrimination-aware channel pruning for deep neural
|
1151 |
-
networks,” in Advances in Neural Information Processing Systems , 2018,
|
1152 |
-
pp. 875–886.
|
1153 |
-
[18] T.-W. Chin, R. Ding, C. Zhang, and D. Marculescu, “Towards efficient
|
1154 |
-
model compression via learned global ranking,” in Proceedings of the
|
1155 |
-
IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
|
1156 |
-
2020, pp. 1518–1528.
|
1157 |
-
[19] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam,
|
1158 |
-
and D. Kalenichenko, “Quantization and training of neural networks
|
1159 |
-
for efficient integer-arithmetic-only inference,” in Proceedings of the
|
1160 |
-
IEEE Conference on Computer Vision and Pattern Recognition , 2018,
|
1161 |
-
pp. 2704–2713.
|
1162 |
-
[20] Y . Cai, Z. Yao, Z. Dong, A. Gholami, M. W. Mahoney, and K. Keutzer,
|
1163 |
-
“Zeroq: A novel zero shot quantization framework,” in Proceedings of
|
1164 |
-
the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
|
1165 |
-
2020, pp. 13 169–13 178.
|
1166 |
-
[21] S. Xu, H. Li, B. Zhuang, J. Liu, J. Cao, C. Liang, and M. Tan,
|
1167 |
-
“Generative low-bitwidth data free quantization,” arXiv preprint
|
1168 |
-
arXiv:2003.03603 , 2020.
|
1169 |
-
[22] M. Courbariaux, Y . Bengio, and J.-P. David, “Training deep neu-
|
1170 |
-
ral networks with low precision multiplications,” arXiv preprint
|
1171 |
-
arXiv:1412.7024 , 2014.
|
1172 |
-
[23] J. Ball ´e, N. Johnston, and D. Minnen, “Integer networks for data
|
1173 |
-
compression with latent-variable models,” in International Conference
|
1174 |
-
on Learning Representations , 2018.
|
1175 |
-
[24] M. Sch ¨afer, B. Stallenberger, J. Pfaff, P. Helle, H. Schwarz, D. Marpe,
|
1176 |
-
and T. Wiegand, “Efficient fixed-point implementation of matrix-based
|
1177 |
-
intra prediction,” in 2020 IEEE International Conference on Image
|
1178 |
-
Processing (ICIP) . IEEE, 2020, pp. 3364–3368.
|
1179 |
-
[25] Y . Bengio, A. Courville, and P. Vincent, “Representation learning: A
|
1180 |
-
review and new perspectives,” IEEE transactions on pattern analysis
|
1181 |
-
and machine intelligence , vol. 35, no. 8, pp. 1798–1828, 2013.
|
1182 |
-
[26] X. Geng, J. Lin, B. Zhao, A. Kong, M. M. S. Aly, and V . Chandrasekhar,
|
1183 |
-
“Hardware-aware softmax approximation for deep neural networks,” in
|
1184 |
-
Asian Conference on Computer Vision . Springer, 2018, pp. 107–122.
|
1185 |
-
[27] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang,
|
1186 |
-
“Ntire 2017 challenge on single image super-resolution: Methods and
|
1187 |
-
results,” in Proceedings of the IEEE conference on computer vision and
|
1188 |
-
pattern recognition workshops , 2017, pp. 114–125.
|
1189 |
-
[28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
|
1190 |
-
arXiv preprint arXiv:1412.6980 , 2014.
|
1191 |
-
[29] S. K. J. Chen, Y . Ye, “Algorithm description for versatile video coding
|
1192 |
-
and test model 7 (vtm 7),” Document JVET-P2002 , Geneva, October
|
1193 |
-
2019.
|
1194 |
-
[30] J. Boyce, K. Suehring, X. Li, and V . Seregin, “JVET common test
|
1195 |
-
conditions and software reference configurations,” Document JVET-
|
1196 |
-
J1010 , Ljubljana, Slovenia, July 2018.
|
1197 |
-
[31] M. G. Blanch, M. Mrak, A. F. Smeaton, and N. E. O’Connor, “End-to-
|
1198 |
-
end conditional gan-based architectures for image colourisation,” in 2019
|
1199 |
-
IEEE 21st International Workshop on Multimedia Signal Processing
|
1200 |
-
(MMSP) . IEEE, 2019, pp. 1–6.
|
1201 |
-
[32] R. Davidson, “Reliable inference for the gini index,” Journal of econo-
|
1202 |
-
metrics , vol. 150, no. 1, pp. 30–40, 2009.
|
txt/2103.02372.txt
DELETED
@@ -1,655 +0,0 @@
|
|
1 |
-
Root cause prediction based on bug reports
|
2 |
-
Thomas Hirsch
|
3 |
-
Institute of Software Technology
|
4 |
-
Graz University of Technology
|
5 |
-
Graz, Austria
|
6 |
-
[email protected]
Birgit Hofer
|
7 |
-
Institute of Software Technology
|
8 |
-
Graz University of Technology
|
9 |
-
Graz, Austria
|
10 | |
11 |
-
Abstract —This paper proposes a supervised machine learning
|
12 |
-
approach for predicting the root cause of a given bug report.
|
13 |
-
Knowing the root cause of a bug can help developers in the de-
|
14 |
-
bugging process—either directly or indirectly by choosing proper
|
15 |
-
tool support for the debugging task. We mined 54 755 closed
|
16 |
-
bug reports from the issue trackers of 103 GitHub projects and
|
17 |
-
applied a set of heuristics to create a benchmark consisting of
|
18 |
-
10 459 reports. A subset was manually classified into three groups
|
19 |
-
(semantic, memory, and concurrency) based on the bugs’ root
|
20 |
-
causes. Since the types of root cause are not equally distributed, a
|
21 |
-
combination of keyword search and random selection was applied.
|
22 |
-
Our data set for the machine learning approach consists of 369 bug
|
23 |
-
reports (122 concurrency, 121 memory, and 126 semantic bugs).
|
24 |
-
The bug reports are used as input to a natural language processing
|
25 |
-
algorithm. We evaluated the performance of several classifiers
|
26 |
-
for predicting the root causes for the given bug reports. Linear
|
27 |
-
Support Vector machines achieved the highest mean precision
|
28 |
-
(0.74) and recall (0.72) scores. The created bug data set and
|
29 |
-
classification are publicly available.
|
30 |
-
Index Terms —bug report, bug benchmark, root cause prediction
|
31 |
-
I. INTRODUCTION
|
32 |
-
Debugging is one of the most time-consuming parts in the
|
33 |
-
software development process. While there exist numerous
|
34 |
-
fault localization [1] and repair [2] techniques to support
|
35 |
-
programmers in the debugging process, it is often unclear which
|
36 |
-
techniques work best for a given bug. For this reason, Sobreira et
|
37 |
-
al.[3] investigated the structure of Defects4J [4] bugs. For each
|
38 |
-
bug, they determined the size of the patch, the repair action,
|
39 |
-
and the change pattern. They have invited other researchers to
|
40 |
-
investigate which types of bugs1can be handled by their repair
|
41 |
-
tools.
|
42 |
-
In this paper, we change the perspective of this research topic:
|
43 |
-
instead of only providing root cause information for a bench-
|
44 |
-
mark to help researchers in evaluating their tools, we predict
|
45 |
-
the root cause for a given bug description so that programmers
|
46 |
-
can choose a proper tool for their debugging problem. There
|
47 |
-
are tools that focus on concurrency (e.g. ConcBugAssist [5])
|
48 |
-
or memory (e.g. Valgrind) bugs, while others are better suited
|
49 |
-
for semantic bugs (e.g. Jaguar [6]). While some root causes
|
50 |
-
can easily be determined when reading a bug report, other
|
51 |
-
0©2020 IEEE. Personal use of this material is permitted. Permission from
|
52 |
-
IEEE must be obtained for all other uses, in any current or future media,
|
53 |
-
including reprinting/republishing this material for advertising or promotional
|
54 |
-
purposes, creating new collective works, for resale or redistribution to servers
|
55 |
-
or lists, or reuse of any copyrighted component of this work in other works.
|
56 |
-
1http://program-repair.org/defects4j-dissection/
root causes are not that obvious. Consider for example issue
|
57 |
-
ticket #514 from TwelveMonkeys project2:
|
58 |
-
TIFF: Invalid StripByteCounts when writing large resolution
|
59 |
-
files (9800x8000)
|
60 |
-
Hello, when writing a high resolution tiff file the stripByteCounts
|
61 |
-
appears to be corrupt. An approx 300 mb output file has a single
|
62 |
-
image strip with the byte count of: 4071696385 which is larger than
|
63 |
-
the file itself. However when working with lower (more common)
|
64 |
-
resolutions the meta for the image strips is created properly. [. . . ]
|
65 |
-
This code creates the file with the incorrect meta data:
|
66 |
-
// Input high resolution 48 bit depth final
|
67 |
-
InputStream inStream = [. . . ]
|
68 |
-
Attaching zipped image: 9800x8000 resolution 48bit depth.zip
|
69 |
-
I’ve tested and reproduced the issue with the following versions:
|
70 |
-
3.4.1, 3.4.2, 3.4.3
|
71 |
-
Thanks in advance,
|
72 |
-
-Jesse
|
73 |
-
Our goal is to provide information to the programmer about
|
74 |
-
the root cause of this bug. For instance, the incorrect byte
|
75 |
-
count mentioned in this bug report together with the information
|
76 |
-
about high resolution can raise suspicion of an integer overflow
|
77 |
-
occurring.
|
78 |
-
We propose a supervised machine learning (ML) approach
|
79 |
-
that uses the bug description from issue tickets to predict the
|
80 |
-
root cause of the bug. For processing the text from the issue
|
81 |
-
tickets, we make use of natural language processing (NLP). For
|
82 |
-
creating the training set, we have mined bug reports from 103
|
83 |
-
GitHub projects and manually examined a subset, classifying
|
84 |
-
them as memory, concurrency or semantic bugs based on the
|
85 |
-
actual fix. Since the number of concurrency and memory bugs
|
86 |
-
is usually very low [7], we have performed a keyword search in
|
87 |
-
the commit messages of fixes to find more instances with these
|
88 |
-
root causes.
|
89 |
-
While the primary goal of this paper is the root cause
|
90 |
-
prediction approach, the generated training data can be used
|
91 |
-
as a benchmark for specific types of faults. Often, researchers
|
92 |
-
focus on certain bug types when developing a fault localization
|
93 |
-
or repair method. While these approaches have a high potential,
|
94 |
-
their evaluation is often limited to a few real-world bugs or
|
95 |
-
artificially seeded bugs, as mentioned in [8]. The training data
|
96 |
-
set created in this paper can be used as a bug benchmark by
|
97 |
-
researchers who are interested in certain types of bugs. It can
|
98 |
-
be seen as a Java pendant to the C/C++ benchmark BugBench
|
99 |
-
that also distinguishes memory, concurrency, and semantic bugs.
|
100 |
-
Furthermore, it can be used to evaluate information retrieval
|
101 |
-
based bug localization approaches [9].
|
102 |
-
2https://github.com/haraldk/TwelveMonkeys/issues/514
|
103 |
-
The contributions of this work can be summarized as:
|
104 |
-
a machine learning approach for predicting the root cause
|
105 |
-
for a given bug report with a mean precision of 0.74 and
|
106 |
-
a mean recall of 0.72,
|
107 |
-
a data set consisting of 10 459 bug reports and fixes from
|
108 |
-
103 GitHub repositories,
|
109 |
-
a data set of 122 concurrency, 121 memory, and 269
|
110 |
-
semantic bugs with detailed sub-categories, and
|
111 |
-
a framework for building such data sets.
|
112 |
-
The created data sets, all scripts, and the categorization are
|
113 |
-
publicly available.3The structure of this paper is as follows:
|
114 |
-
Section II introduces the main root cause categories and their
|
115 |
-
sub-categories. Section III explains how we have collected
|
116 |
-
closed bug reports and their corresponding fixes. Section IV
|
117 |
-
presents the machine learning approach. We discuss the results
|
118 |
-
and threats to validity in Section V. Section VI discusses the
|
119 |
-
related work and Section VII concludes the paper.
|
120 |
-
II. CLASSIFICATION SCHEMA
|
121 |
-
We use three main categories and 18 detailed root causes as
|
122 |
-
described in Table I. The semantic and memory sub-categories
|
123 |
-
are based on Tan et al. [7]; the concurrency sub-categories are
|
124 |
-
based on Zhou et al. [10].
|
125 |
-
A problem with post mortem bug classification arises through
|
126 |
-
often unclear separation of the actual fix from other code
|
127 |
-
changes, e.g., commits that include more than one bug fix,
|
128 |
-
commits that include the bug fix aside of some refactoring
|
129 |
-
or new extension, or bug fixes that are scattered over multiple
|
130 |
-
commits. Additionally, it is difficult to distinguish a workaround
|
131 |
-
from a fix [11]. All of the above make it hard to correctly
|
132 |
-
identify the fix and to properly categorize the root cause. To deal
|
133 |
-
with these issues, we have added a confidence value ranging
|
134 |
-
from 1-10 that reflects our confidence on the correctness of
|
135 |
-
our classification: A confidence level of 10 indicates showcase
|
136 |
-
quality; 9 indicates that we are very confident about the main
|
137 |
-
category and the subcategory; 8 indicates that we are very
|
138 |
-
confident about main category and subcategory assigned, but a
|
139 |
-
different subcategory cannot be ruled out with 100 % certainty.
|
140 |
-
For example, differentiating “processing” and “missing case”
|
141 |
-
is often not possible without having the knowledge of the
|
142 |
-
programmer who wrote the code. A confidence level of 7 or be-
|
143 |
-
low indicates doubts about the chosen subcategory. Confidence
|
144 |
-
levels between 3 and 5 indicate a strong confidence about the
|
145 |
-
main category, but the subcategories were not identifiable. A
|
146 |
-
confidence level of 2 indicates doubts about the main category
|
147 |
-
while a level of 1 indicates that it was not possible to determine
|
148 |
-
the main root cause category for the bug.
|
149 |
-
III. DATA ACQUISITION
|
150 |
-
In this section, we provide details on the collection of the
|
151 |
-
bug data set that builds the basis for creating the training set
|
152 |
-
for the machine learning approach.
|
153 |
-
Purpose of the data set. The data set should provide a realistic
|
154 |
-
distribution of different bug types, and should serve as basis for
|
155 |
-
3https://doi.org/10.5281/zenodo.3973048
TABLE I
|
156 |
-
ROOT CAUSE CATEGORIES AND SUB-CATEGORIES

Semantic:
  Exception handl.    Missing or improper exception handling.
  Missing case        Faults due to unawareness of a certain case or simply a forgotten implementation.
  Processing          Incorrect implementation (e.g. miscalculations, incorrect method output, wrong method/library usage).
  Typo                Ambiguous naming, typos in SQL calls/URLs/paths.
  Dependency          The code can be built but behaves unexpected because of changes in a foreign system (e.g. update of utilized library or underlying OS).
  Other               All other semantic faults.
Memory:
  Buffer overflow     Buffer overflows, not overflowing numeric types.
  Null pointer deref. All null pointer dereferences.
  Uninit. mem. read   All uninitialized memory reads except null pointer dereference.
  Memory leak         Memory leak.
  Dangling pointer    Dangling pointer.
  Double free         Double free.
  Other               All other memory bugs.
Concurrency:
  Order violation     Missing or incorrect synchronization, e.g. object is dereferenced by thread B before it is initialized by thread A.
  Race condition      Two or more threads access the same resource with at least one being a write access and the access is not ordered properly.
  Atomic violation    Constraints on the interleaving of operations are missing. This happens when atomicity of a certain code region was assumed but failed to guarantee atomicity in the implementation.
  Deadlock            Two or more threads wait for the other one to release a resource.
  Other               All other concurrency bugs.
191 |
-
experiments with various fault localization and ML experiments.
|
192 |
-
The bugs should be real world Java bugs.
|
193 |
-
Project selection. We chose 103 GitHub Java projects to
|
194 |
-
source our data set. Primary selection criteria were a well known
|
195 |
-
organization driving the project, or the project having a high
|
196 |
-
star rating on GitHub. However, the list also contains lesser
|
197 |
-
known projects that were already used in other research [4],
|
198 |
-
[12], [13]. The selection process was performed manually. All of
|
199 |
-
the projects utilize GitHub’s built-in issue tracker together with
|
200 |
-
its labeling system, and have at least 100 closed issues identified
|
201 |
-
as bugs. The project sizes range from 13k Java LOC (Lines Of
|
202 |
-
Code) for HikariCP4to 1.7M Java LOC for Elasticsearch5. The
|
203 |
-
full list of mined projects can be found in the online appendix3.
|
204 |
-
Bug ticket identification. We identified bugs via the labels
|
205 |
-
used in the issue tickets and we only considered closed issue
|
206 |
-
tickets. In order to omit feature requests, maintenance tickets
|
207 |
-
and other non-bug issues, we only considered issues whose
|
208 |
-
labels contain “bug”, “defect”, or “regression”.
|
209 |
-
Filtering criteria. GitHub automatically links commits to
|
210 |
-
issue tickets based on issue ids and provides this data together
|
211 |
-
with issue tickets. We only consider issues for which at least
|
212 |
-
4https://github.com/brettwooldridge/HikariCP
|
213 |
-
5https://github.com/elastic/elasticsearch
|
214 |
-
one commit is linked to the issue, and all linked commits are
|
215 |
-
still available in the Git repository. If any of the commits are
|
216 |
-
linked to multiple issues, or the commit message suggests that
|
217 |
-
the fix is done aside of other changes, the issue is discarded. As
|
218 |
-
of writing this, we omit issues whose commits do not contain
|
219 |
-
changes in production Java code. We plan to lift this limitation
|
220 |
-
to incorporate other root causes, e.g. in the documentation or
|
221 |
-
build system in the future.
|
222 |
-
We use Gumtree Spoon AST Diff [14] to create Java
|
223 |
-
aware diffs. To manage overwhelming runtime and memory
|
224 |
-
requirements arising from the size of the data set, we limit the
|
225 |
-
size and number of the commits per issue. We only consider
|
226 |
-
issues where the number of commits linked to the issue is
|
227 |
-
smaller than 10, the number of files changed per commit is
|
228 |
-
smaller than 20, and the number of lines changed per commit
|
229 |
-
is smaller than 250. Our analysis shows that these limitations
|
230 |
-
only remove less than 3 % of the issues.
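As a rough illustration of these filtering rules, a Python predicate could look like the sketch below. The issue and commit dictionaries and their field names are hypothetical, not the authors' actual mining framework.

```python
# Hypothetical sketch of the filtering rules above; field names are illustrative.

def is_usable_issue(issue: dict) -> bool:
    """Keep an issue only if its linked fix commits meet the stated size limits."""
    commits = issue.get("commits", [])
    if not commits or len(commits) >= 10:        # fewer than 10 commits per issue
        return False
    return all(
        c.get("files_changed", 0) < 20           # fewer than 20 files per commit
        and c.get("lines_changed", 0) < 250      # fewer than 250 changed lines per commit
        for c in commits
    )

# toy usage
print(is_usable_issue({"commits": [{"files_changed": 3, "lines_changed": 42}]}))  # True
```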
|
231 |
-
The data set. In total, 54 755 issues have been mined from
|
232 |
-
GitHub. Following the filtering criteria described above leaves
|
233 |
-
us with 10 459 issues that form the basis for our further
|
234 |
-
investigations. This bug data set consists of:
|
235 |
-
textual bug report including metadata in form of time-
|
236 |
-
stamps and user names,
|
237 |
-
all commits associated to each bug report including meta-
|
238 |
-
data as commit message and git stats, and
|
239 |
-
Java aware diff statistics and location of the changes in
|
240 |
-
terms of file, class, and method.
|
241 |
-
IV. MACHINE LEARNING APPROACH
|
242 |
-
We employ an NLP approach, vectorizing the textual bug
|
243 |
-
reports into unigrams and bigrams, to train a model for auto-
|
244 |
-
mated classification along our fault classification schema. This
|
245 |
-
approach calculates a frequency vector from words and word
|
246 |
-
pairs occurring in the input text that is used as feature vector
|
247 |
-
for the classifier.
|
248 |
-
Input preprocessing. To increase performance of the classifier,
|
249 |
-
we applied the following preprocessing steps:
|
250 |
-
Stop word removal (i.e. removing common words that are
|
251 |
-
not adding any value)
|
252 |
-
Case folding (i.e. converting all characters to lower case)
|
253 |
-
Stemming (i.e. reducing each word to its word stem)
|
254 |
-
The bug reports often include stack traces, exceptions, and log
|
255 |
-
outputs. Currently, we process them in the same way as the
|
256 |
-
rest of the input text. In future work, we will investigate the
|
257 |
-
usefulness of domain specific preprocessing of these artifacts.
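One plausible way to realize this preprocessing and unigram/bigram frequency vectorization is with NLTK and scikit-learn; the exact tokenizer, stemmer, and parameters are not specified in the paper, so the following is an assumption-laden sketch rather than the authors' pipeline.

```python
# Plausible sketch with NLTK and scikit-learn (requires nltk.download("stopwords")).
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(report: str) -> str:
    # case folding, stop word removal, and stemming as described above
    tokens = [t for t in report.lower().split() if t not in stop_words]
    return " ".join(stemmer.stem(t) for t in tokens)

# frequency vectors over unigrams and bigrams of the preprocessed reports
vectorizer = CountVectorizer(preprocessor=preprocess, ngram_range=(1, 2))
X = vectorizer.fit_transform([
    "TIFF: Invalid StripByteCounts when writing large resolution files",
    "WebSocket08FrameDecoder leaks ByteBuf when payload is masked",
])
print(X.shape)
```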
|
258 |
-
Training set creation. Figure 1 provides an overview of the
|
259 |
-
training set creation. We manually classified 160 randomly
|
260 |
-
selected issues and identified 119 semantic, 2 memory, and 4
|
261 |
-
concurrency bugs. 35 issues were not classified because the bug
|
262 |
-
reports were non-English, feature requests, deemed not a bug, or
|
263 |
-
issues for which we were not confident about the sub-category
|
264 |
-
(confidence level <8).
|
265 |
-
Concurrency and memory bugs are usually rare, accounting
|
266 |
-
for 2 % respectively 6 % of all bugs [7], [15], [16], which poses
|
267 |
-
6https://github.com/SpoonLabs/gumtree-spoon-ast-diff
|
268 |
-
[Fig. 1. Training set creation: 54 755 issues labeled as bug from 103 GitHub projects are filtered (commits linked, commits only address one issue, change of production Java code, reasonable size for fix) down to 10 459 complete issues; a random sample of 160 issues and a keyword search yielding 756 potential memory or concurrency bugs are manually classified (classification confidence > 7, English issue text, bug rather than hidden feature request), producing the 369 issues used for the machine learning approach.]
|
307 |
-
a challenge for the creation of reasonably sized training sets.
|
308 |
-
For this reason, we have performed a keyword search on the
|
309 |
-
commit messages linked to the issues to identify candidates of
|
310 |
-
memory and concurrency bugs analog to Ray et al. ’s approach
|
311 |
-
[15], resulting in a total of 756 issues. As of writing this, 471
|
312 |
-
randomly selected issues from this set have been examined and
|
313 |
-
classified. 150 semantic, 119 memory, and 118 concurrency
|
314 |
-
bugs have been identified in this sample. 84 issues could not be
|
315 |
-
classified due to the reasons mentioned above.
|
316 |
-
Training set composition. To avoid a bias towards the se-
|
317 |
-
mantic bugs that were “accidentally found” during the manual
|
318 |
-
classification of the keyword search results and to have approx-
|
319 |
-
imately equally large training sets, we reduced their volume
|
320 |
-
to 5 % of all semantic bugs. This is a rather high estimate
|
321 |
-
given the fact, that only 7.2 % of all bugs have been reported
|
322 |
-
in the keyword search and only one third of these bugs are
|
323 |
-
actually semantic bugs. Further, using separate data bases for
|
324 |
-
the keyword search (commit messages) and training set for our
|
325 |
-
ML classifier (bug reports) makes us confident that the bias
|
326 |
-
introduced by the keywords is limited. As of writing this, our
|
327 |
-
training set consists of 122 concurrency bugs, 121 memory
|
328 |
-
bugs, and 126 semantic bugs. The complete training set consists
|
329 |
-
of 369 textual bug reports.
|
330 |
-
Classifiers. We applied various supervised ML classifier
|
331 |
-
algorithms on our data set, namely Multinomial Naive Bayes
|
332 |
-
(MNB), Linear Support Vector (LSVC), Linear Support Vector
|
333 |
-
with Stochastic Gradient Descent learning (SGDC), Random
|
334 |
-
Forest (RFC), and Logistic Regression (LRC). The selection
|
335 |
-
of classifiers is based on their suitability for multi-class clas-
|
336 |
-
sification problems based on textual inputs, and their application
|
337 |
-
in similar research. Support vector machines have been used in
|
338 |
-
comparable endeavors [7], [15]–[18]; the same applies to naive
|
339 |
-
Bayes [7], [16], [17], [19]–[21], logistic regression [20], [21],
|
340 |
-
and decision tree based algorithms [7], [20], [21].
|
341 |
-
Experiment. The 369 bug reports were split into a training
|
342 |
-
set (80 %) and a test set (20 %). We performed 5-fold cross
|
343 |
-
validation on the training set for each classifier, using grid
|
344 |
-
search for hyperparameter tuning. Mean accuracy was used as
|
345 |
-
scoring metric in the grid search. The highest scoring model for
|
346 |
-
each classifier was then evaluated on the test set yielding the
|
347 |
-
final score for this classifier. This experiment was performed
|
348 |
-
10 times, followed by manual examination of its results and
|
349 |
-
corresponding hyperparameters to reduce the grid search space.
|
350 |
-
Finally, to enable comparison of the classifiers given the small
|
351 |
-
input data set, the above described experiment with the reduced
|
352 |
-
hyperparameter set was performed 100 times with randomized
|
353 |
-
test and training splits. The employed set of hyperparameters,
|
354 |
-
and grid search results for each experiment, can be found in the
|
355 |
-
online appendix.
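A minimal sketch of this protocol is given below. The parameter grid and the feature pipeline are illustrative (the real grids are in the online appendix), and the LinearSVC stands in for the best-performing classifier only.

```python
# Minimal sketch of the 80/20 split plus 5-fold grid search protocol.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def run_once(reports, labels):
    X_train, X_test, y_train, y_test = train_test_split(
        reports, labels, test_size=0.2, stratify=labels)
    pipe = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
    grid = GridSearchCV(pipe, {"linearsvc__C": [0.01, 0.1, 1, 10]},
                        cv=5, scoring="accuracy")
    grid.fit(X_train, y_train)
    y_pred = grid.best_estimator_.predict(X_test)
    # weighted-average precision, recall, and F1 as reported in Section V
    return precision_recall_fscore_support(y_test, y_pred, average="weighted")
```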
|
356 |
-
V. RESULTS
|
357 |
-
A. Classification results
|
358 |
-
We graphically compare the classifiers’ performance by
|
359 |
-
means of the weighted averages of F1, precision, and recall
|
360 |
-
in Figure 2 and we report mean, median, standard deviation,
|
361 |
-
min, and max of each classifier in Table II based on the scores
|
362 |
-
of 100 runs. Please note that the F1 scores are computed for
|
363 |
-
the individual test runs and then the mean, median, standard
|
364 |
-
deviation, min, and max values of these F1 scores are computed.
|
365 |
-
Thus, they cannot be computed from the precision and recall
|
366 |
-
given in the table.
|
367 |
-
We observed a tight clustering of classifiers, which is also
|
368 |
-
evident in individual runs, although individual runs exhibit
|
369 |
-
varying performances. We attribute this behavior to the small
|
370 |
-
data set size and high variance in data quality. The best overall
|
371 |
-
performance was achieved with LSVC, with mean F1 (0.72),
|
372 |
-
precision (0.74), and recall (0.72). LSVC also produced the
|
373 |
-
highest observed scores in an individual run, yielding F1 (0.85),
|
374 |
-
precision (0.88), and recall (0.85).
|
375 |
-
B. Discussion
|
376 |
-
The biggest challenge lies in the creation of a reasonably
|
377 |
-
sized data set. Further, varying data quality constitutes a sig-
|
378 |
-
nificant problem. The textual bug reports in our data set range
|
379 |
-
from only 5 words to 60kB of text per report. However, our
|
380 |
-
examination of those bug tickets shows that the length is not
|
381 |
-
necessarily correlating with the quality in terms of usefulness
|
382 |
-
for the developer. Issue #4960 from Elasticsearch7is an example
|
383 |
-
of a bug report that requires context and knowledge about the
|
384 |
-
project for understanding:
|
385 |
-
Filtered query parses name incorrectly
|
386 |
-
There are bug reports that merely describe the impact, e.g.
|
387 |
-
issue #338 from the Redisson project8:
|
388 |
-
7https://github.com/elastic/elasticsearch/issues/4960
|
389 |
-
8https://github.com/redisson/redisson/issues/338
|
390 |
-
Fig. 2. Mean weighted average precision, recall and F1 score
|
391 |
-
TABLE II
|
392 |
-
WEIGHTED AVERAGE PRECISION, RECALL, AND F1 SCORES (100 RUNS)

          Classifier  Precision  Recall  F1
Mean      LRC         0.73       0.71    0.71
          RFC         0.72       0.62    0.62
          SGDC        0.74       0.71    0.71
          LSVC        0.74       0.72    0.72
          MNB         0.72       0.70    0.70
Median    LRC         0.73       0.70    0.70
          RFC         0.72       0.62    0.62
          SGDC        0.74       0.70    0.71
          LSVC        0.74       0.72    0.72
          MNB         0.73       0.70    0.70
Std. dev. LRC         0.046      0.048   0.048
          RFC         0.053      0.076   0.082
          SGDC        0.045      0.049   0.049
          LSVC        0.046      0.046   0.046
          MNB         0.045      0.049   0.049
Min       LRC         0.63       0.62    0.62
          RFC         0.53       0.39    0.38
          SGDC        0.64       0.62    0.63
          LSVC        0.65       0.64    0.63
          MNB         0.60       0.57    0.57
Max       LRC         0.84       0.82    0.82
          RFC         0.85       0.77    0.77
          SGDC        0.86       0.84    0.84
          LSVC        0.88       0.85    0.85
          MNB         0.83       0.82    0.82
|
419 |
-
New version 2.1.4 or greater performance is low.
|
420 |
-
When I use redisson 2.1.3 the ubuntu’s load average is 1.8 2.3; but
|
421 |
-
I use 2.1.4 or greater, the load average is often greater than 3.00,
|
422 |
-
my java application often overload.
|
423 |
-
In some cases, the bug reports point right away at the fault,
|
424 |
-
e.g. Netty issue #18789:
|
425 |
-
WebSocket08FrameDecoder leaks ByteBuf when payload is
|
426 |
-
masked
|
427 |
-
Further research is required to determine metrics to mea-
|
428 |
-
sure bug report quality for our purpose. On the other end of
|
429 |
-
the spectrum, for very long bug reports, additional text pre-
|
430 |
-
processing is required. Heuristics for reduction or removal of
|
431 |
-
artifacts have to be implemented. Such artifacts are stack traces,
|
432 |
-
code snippets, log outputs, or similar text portions, whose size
|
433 |
-
is disproportionate to the added information.
|
434 |
-
C. Threats to validity
|
435 |
-
The selection of bugs from the issue tickets by searching for
|
436 |
-
certain labels is a threat to the internal validity. While we have
|
437 |
-
considered a wide range of bug labels, we cannot rule out to
|
438 |
-
miss bugs with special labels or wrongly labeled bugs. A study
|
439 |
-
on 7000 issue reports from five open-source projects showed
|
440 |
-
that up to 40 % of the issues were wrongly labeled [22].
|
441 |
-
Manually categorizing the root cause might be error-prone
|
442 |
-
and the true root cause of the bug can only be determined
|
443 |
-
by the original programmer. For this reason, we indicated the
|
444 |
-
confidence level for each bug we categorized and excluded bugs
|
445 |
-
with a low confidence level. Furthermore, the fix might be a
|
446 |
-
workaround instead of a fix of the true fault.
|
447 |
-
The keyword search might only reveal certain types of
|
448 |
-
memory and concurrency bugs. We have tried to avoid a bias in
|
449 |
-
the classification towards the words used in the keyword search
|
450 |
-
by performing the keyword search on the commit messages and
|
451 |
-
NLP for classification on the bug description.
|
452 |
-
The small sample size is the biggest threat to external validity.
|
453 |
-
In future work, we will therefore enlarge the training set. The
|
454 |
-
performance of this approach may vary based on the software
|
455 |
-
domain of the examined project. We tried to counteract this
|
456 |
-
by including a variety of software projects in our data set. However,
|
457 |
-
data mining was exclusively performed on open source projects.
|
458 |
-
Further, most of the examined projects are libraries. In contrast
|
459 |
-
to end-user software, bug reports for libraries are almost ex-
|
460 |
-
clusively written by other developers. Such bug reports often
|
461 |
-
already contain insights into the underlying problem. Further,
|
462 |
-
our approach may not work as good for bug descriptions of
|
463 |
-
software written in a different programming language.
|
464 |
-
VI. RELATED WORK
|
465 |
-
Ray et al. [15] analyzed more 560 000 bug fixes from
|
466 |
-
729 GitHub projects written in 17 languages. They classified
|
467 |
-
the root causes and impacts for 10 % of the bugs by searching
|
468 |
-
for keywords in the commit messages and trained a supervised
|
469 |
-
ML approach to classify the remaining 90 % of the bugs. They
|
470 |
-
9https://github.com/netty/netty/issues/1878
validated their approach by manually classifying 180 bug fixes
|
471 |
-
(83.7 % precision, 84.3 % recall). While we also rely on a
|
472 |
-
keyword search, we did not perform the keyword search on
|
473 |
-
the same text that was used in NLP to avoid biasing.
|
474 |
-
Li and colleagues [16] classified the root cause, impact,
|
475 |
-
and software component of nearly 30 000 Bugzilla entries
|
476 |
-
using NLP with SVM, Winnow, Perceptron and Naive Bayes
|
477 |
-
as classifiers. Their training set consists of 709 bugs (51 %
|
478 |
-
randomly sampled, 36 % security-related, and 13 % concurrency
|
479 |
-
bugs).
|
480 |
-
Tan et al. [7] manually classified 339 bugs from randomly
|
481 |
-
selected fixed issues of three open-source projects into the
|
482 |
-
dimensions root cause, impact, and component. Because of
|
483 |
-
the low number of concurrency bugs in the sample, they
|
484 |
-
performed a keyword search to identify additional concurrency
|
485 |
-
bugs. Semantic bugs are the dominant root cause with 70-
|
486 |
-
87 %. The Linux kernel has nearly 13.6 % concurrency bugs;
|
487 |
-
the other projects (Mozilla and Apache) have a lower number
|
488 |
-
of concurrency bugs with 1.2 % and 5.2 %. Furthermore, the
|
489 |
-
authors automatically classified more than 100 000 bugs using
|
490 |
-
a supervised ML (precision: 67 % for memory and 93 % for
|
491 |
-
semantic bugs, recall: 57 % resp. 95 %).
|
492 |
-
Ortu et al. [17] investigated whether there are differences
|
493 |
-
in the characteristics of high and low priority defects in more
|
494 |
-
than 1200 open-source software projects. Therefore, they trained
|
495 |
-
different supervised machine learning classifiers to predict the
|
496 |
-
root cause, impact, and software component.
|
497 |
-
Thung et al. [18] used machine learning to classify bugs ac-
|
498 |
-
cording to the Orthogonal Defect Classification (ODC) scheme.
|
499 |
-
They distinguished three defect groups: data and control flow,
|
500 |
-
structural, and non-functional. They manually classified 500
|
501 |
-
bugs that serve as training set. They use the description of the
|
502 |
-
bug as well as the fixes to train a model. The SVM multi-
|
503 |
-
class classification algorithm performed best (69 % precision,
|
504 |
-
70 % recall). Lopes and colleagues [23] applied different ML
|
505 |
-
algorithms on bug descriptions to classify bugs according to
|
506 |
-
different ODC dimensions. They manually categorized more
|
507 |
-
than 4000 fixed bugs from three NoSQL databases. Recurrent
|
508 |
-
Neural Networks have the highest accuracy when predicting
|
509 |
-
the activity (47.6 %) and impact (33.3 %). Linear support vector
|
510 |
-
machines are suited best to predict the target (accuracy 85.5 %)
|
511 |
-
and the defect type (34.7 %).
|
512 |
-
Hern ´andez-Gonz ´alez and colleagues [19] proposed a learning
|
513 |
-
from crowds ML approach. The training data consists of bug
|
514 |
-
reports and labels for OCD’s impact dimension. Each bug report
|
515 |
-
was labeled by five annotators. In the majority of the cases, the
|
516 |
-
annotators disagree on the labels. In the learning from crowds
|
517 |
-
paradigm, the individual labels are taken in the machine learning
|
518 |
-
training instead of the label that was assigned by the majority
|
519 |
-
of the annotators.
|
520 |
-
Antoniol et al. [20] use decision trees, naive Bayes and
|
521 |
-
logistic regression to classify issue reports as bug or feature
|
522 |
-
request. Their approach was able to correctly classify 77-82 %
|
523 |
-
of the issues. Chawla and Singh [21] also classify issue reports
|
524 |
-
as bug or other request. They receive an accuracy of 84-91 %.
|
525 |
-
VII. CONCLUSION AND FUTURE WORK
|
526 |
-
The presented approach automatically predicts the root cause
|
527 |
-
for a given bug report. This information can be used by the
|
528 |
-
developer to choose a proper debugging tool. It can also be
|
529 |
-
used by a meta-debugging approach to recommend a debugging
|
530 |
-
tool. The data set created in this work can be used to evaluate
|
531 |
-
which debugging tools are particularly well-suited to support
|
532 |
-
programmers in the debugging process of a particular bug
|
533 |
-
instance. In addition, the proposed approach can be utilized
|
534 |
-
for building benchmarks of specific bug types. This benchmark
|
535 |
-
is especially suitable for evaluating IR-based fault localization
|
536 |
-
techniques since it includes textual data in form of bug reports
|
537 |
-
and commit messages, as well as detailed information on the
|
538 |
-
fix location.
|
539 |
-
During the manual classification, we have noticed recurring
|
540 |
-
fault patterns. We will investigate if we can establish links be-
|
541 |
-
tween these fault patterns and code-smells detected by existing
|
542 |
-
code analysis tools such as SonarQube10. If so, knowledge about
|
543 |
-
the bug type combined with reports from code analysis tools can
|
544 |
-
be utilized to aid fault localization.
|
545 |
-
Besides that, we will improve the approach by pre-processing
|
546 |
-
stack trace information and other artifacts (if available in the bug
|
547 |
-
report). Currently, stack trace information is treated the same
|
548 |
-
way as human written text.
|
549 |
-
Since a detailed predicted root cause is even more helpful,
|
550 |
-
we will refine the prediction to the sub-categories. To do so, we
|
551 |
-
have to enlarge the training set. Since certain subcategories will
|
552 |
-
be underrepresented in the training set, we will up-sample those
|
553 |
-
categories by means of the Synthetic Minority Over-sampling
|
554 |
-
TEchnique (SMOTE) [24].
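If imbalanced-learn is used, such up-sampling could look like the following sketch, applied to the already vectorized reports rather than the raw text; whether the authors will use this particular library is an assumption.

```python
# Sketch only: SMOTE from imbalanced-learn on the n-gram feature matrix X
# and sub-category labels y.
from imblearn.over_sampling import SMOTE

def upsample(X, y):
    # synthesizes new minority-class samples by interpolating between neighbors
    return SMOTE(random_state=0).fit_resample(X, y)
```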
|
555 |
-
ACKNOWLEDGMENT
|
556 |
-
The work described in this paper has been funded by the
|
557 |
-
Austrian Science Fund (FWF): P 32653 (Automated Debugging
|
558 |
-
in Use).
|
559 |
-
REFERENCES
|
560 |
-
[1] W. E. Wong, R. Gao, Y . Li, R. Abreu, and F. Wotawa, “A Survey on
|
561 |
-
Software Fault Localization,” IEEE Transactions on Software Engineering ,
|
562 |
-
vol. 42, no. 8, pp. 707–740, aug 2016.
|
563 |
-
[2] L. Gazzola, D. Micucci, and L. Mariani, “Automatic Software Repair: A
|
564 |
-
Survey,” IEEE Transactions on Software Engineering , vol. 45, no. 1, pp.
|
565 |
-
34–67, jan 2019.
|
566 |
-
[3] V . Sobreira, T. Durieux, F. Madeiral, M. Monperrus, and M. A. Maia,
|
567 |
-
“Dissection of a Bug Dataset: Anatomy of 395 Patches from Defects4J,”
|
568 |
-
25th IEEE International Conference on Software Analysis, Evolution and
|
569 |
-
Reengineering (SANER 2018) , vol. 2018-March, pp. 130–140, jan 2018.
|
570 |
-
[Online]. Available: http://dx.doi.org/10.1109/SANER.2018.8330203
|
571 |
-
[4] R. Just, D. Jalali, and M. D. Ernst, “Defects4J: a database of
|
572 |
-
existing faults to enable controlled testing studies for Java programs,”
|
573 |
-
inInternational Symposium on Software Testing and Analysis (ISSTA
|
574 |
-
2014) . ACM Press, jul 2014, pp. 437–440. [Online]. Available:
|
575 |
-
http://dl.acm.org/citation.cfm?doid=2610384.2628055
|
576 |
-
[5] S. Khoshnood, M. Kusano, and C. Wang, “ConcBugAssist: Constraint
|
577 |
-
solving for diagnosis and repair of concurrency bugs,” in Int. Symp. on
|
578 |
-
Software Testing and Analysis (ISSTA 2015) . ACM, 2015, pp. 165–176.
|
579 |
-
[6] H. L. Ribeiro, H. A. De Souza, R. P. A. De Araujo, M. L. Chaim, and
|
580 |
-
F. Kon, “Jaguar: A Spectrum-Based Fault Localization Tool for Real-
|
581 |
-
World Software,” in 11th International Conference on Software Testing,
|
582 |
-
Verification and Validation (ICST 2018) . IEEE, may 2018, pp. 404–409.
|
583 |
-
10https://www.sonarqube.org/
[7] L. Tan, C. Liu, Z. Li, X. Wang, Y. Zhou, and C. Zhai, “Bug characteristics
|
584 |
-
in open source software,” Empirical Software Engineering , vol. 19, no. 6,
|
585 |
-
pp. 1665–1705, oct 2014.
|
586 |
-
[8] Y . Tang, Q. Gao, and F. Qin, “LeakSurvivor: Towards Safely Tolerating
|
587 |
-
Memory Leaks for Garbage-Collected Languages,” in USENIX Annual
|
588 |
-
Technical conference , 2008, pp. 307–320.
|
589 |
-
[9] T. D. B. Le, F. Thung, and D. Lo, “Will this localization tool be effective
|
590 |
-
for this bug? Mitigating the impact of unreliability of information retrieval
|
591 |
-
based bug localization tools,” Empirical Software Engineering , vol. 22,
|
592 |
-
no. 4, pp. 2237–2279, aug 2017.
|
593 |
-
[10] B. Zhou, I. Neamtiu, and R. Gupta, “Predicting concurrency bugs:
|
594 |
-
How many, what kind and where are they?” in 9th International
|
595 |
-
Conference on Evaluation and Assessment in Software Engineering
|
596 |
-
(EASE’15) . ACM, apr 2015, pp. 1–10. [Online]. Available: https:
|
597 |
-
//doi.org/10.1145/2745802.2745807
|
598 |
-
[11] M. B ¨ohme, E. O. Soremekun, S. Chattopadhyay, E. Ugherughe, and
|
599 |
-
A. Zeller, “Where is the bug and how is it fixed? An experiment
|
600 |
-
with practitioners,” in 11th Joint Meeting on Foundations of Software
|
601 |
-
Engineering (ESEC/FSE 2017) , vol. Part F1301. Association for
|
602 |
-
Computing Machinery, aug 2017, pp. 117–128. [Online]. Available:
|
603 |
-
http://dl.acm.org/citation.cfm?doid=3106237.3106255
|
604 |
-
[12] P. Gyimesi, G. Gyimesi, Z. T ´oth, and R. Ferenc, “Characterization of
|
605 |
-
source code defects by data mining conducted on GitHub,” in Lecture
|
606 |
-
Notes in Computer Science , vol. 9159. Springer, 2015, pp. 47–62.
|
607 |
-
[13] Z. T ´oth, P. Gyimesi, and R. Ferenc, “A public bug database of GitHub
|
608 |
-
projects and its application in bug prediction,” in 16th Int. Conference
|
609 |
-
on Computational Science and Its Applications (ICCSA’16) , vol. 9789.
|
610 |
-
Lecture Notes in Computer Science, Springer, 2016, pp. 625–638.
|
611 |
-
[14] J. R. Falleri, F. Morandat, X. Blanc, M. Martinez, and M. Monperrus,
|
612 |
-
“Fine-grained and accurate source code differencing,” in 29th ACM/IEEE
|
613 |
-
International Conference on Automated Software Engineering (ASE
|
614 |
-
2014) . ACM, 2014, pp. 313–323. [Online]. Available: http://dl.acm.org/
|
615 |
-
citation.cfm?doid=2642937.2642982
|
616 |
-
[15] B. Ray, D. Posnett, V . Filkov, and P. Devanbu, “A large scale study
|
617 |
-
of programming languages and code quality in GitHub,” in ACM
|
618 |
-
SIGSOFT Symposium on the Foundations of Software Engineering
|
619 |
-
(FSE’14) . ACM, nov 2014, pp. 155–165. [Online]. Available:
|
620 |
-
http://dl.acm.org/citation.cfm?doid=2635868.2635922
|
621 |
-
[16] Z. Li, L. Tan, X. Wang, S. Lu, Y . Zhou, and C. Zhai, “Have things
|
622 |
-
changed now?: An empirical study of bug characteristics in modern open
|
623 |
-
source software,” in 1st Workshop on Architectural and System Support
|
624 |
-
for Improving Software Dependability (ASID’06) , 2006, pp. 25–33.
|
625 |
-
[17] M. Ortu, G. Destefanis, S. Swift, and M. Marchesi, “Measuring high and
|
626 |
-
low priority defects on traditional and mobile open source software,”
|
627 |
-
in7th International Workshop on Emerging Trends in Software Metrics
|
628 |
-
(WETSoM 2016) . ACM, may 2016, pp. 1–7. [Online]. Available:
|
629 |
-
http://dl.acm.org/citation.cfm?doid=2897695.2897696
|
630 |
-
[18] F. Thung, D. Lo, and L. Jiang, “Automatic defect categorization,” in
|
631 |
-
Working Conf. on Reverse Engineering (WCRE) , 2012, pp. 205–214.
|
632 |
-
[19] J. Hern ´andez-Gonz ´alez, D. Rodriguez, I. Inza, R. Harrison, and J. A.
|
633 |
-
Lozano, “Learning to classify software defects from crowds: A novel
|
634 |
-
approach,” Applied Soft Computing Journal , vol. 62, pp. 579–591, 2018.
|
635 |
-
[20] G. Antoniol, K. Ayari, M. Di Penta, F. Khomh, and Y .-G. Gu ´eh´eneuc,
|
636 |
-
“Is it a bug or an enhancement?” in Conference of the center
|
637 |
-
for advanced studies on collaborative research meeting of minds
|
638 |
-
(CASCON ’08) . ACM Press, 2008, pp. 304—-318. [Online]. Available:
|
639 |
-
http://portal.acm.org/citation.cfm?doid=1463788.1463819
|
640 |
-
[21] I. Chawla and S. K. Singh, “An automated approach for bug
|
641 |
-
categorization using fuzzy logic,” in 8th India Software Engineering
|
642 |
-
Conference (ISEC) . ACM, feb 2015, pp. 90–99. [Online]. Available:
|
643 |
-
http://dl.acm.org/citation.cfm?doid=2723742.2723751
|
644 |
-
[22] K. Herzig, S. Just, and A. Zeller, “It’s not a bug, it’s a feature: How
|
645 |
-
misclassification impacts bug prediction,” in International Conference on
|
646 |
-
Software Engineering (ICSE 2013) , 2013, pp. 392–401.
|
647 |
-
[23] F. Lopes, J. Agnelo, C. A. Teixeira, N. Laranjeiro, and J. Bernardino,
|
648 |
-
“Automating orthogonal defect classification using machine learning al-
|
649 |
-
gorithms,” Future Generation Computer Systems , vol. 102, pp. 932–947,
|
650 |
-
jan 2020.
|
651 |
-
[24] N. V . Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer,
|
652 |
-
“SMOTE: Synthetic minority over-sampling technique,” Journal of
|
653 |
-
Artificial Intelligence Research , vol. 16, pp. 321–357, jan 2002. [Online].
|
654 |
-
Available: https://www.jair.org/index.php/jair/article/view/10302
|
655 |
-
©2020 IEEE, DOI: 10.1109/ISSREW51248.2020.000676
txt/2103.10051.txt
DELETED
@@ -1,435 +0,0 @@
|
|
1 |
-
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any
|
2 |
-
current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new
|
3 |
-
collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other
|
4 |
-
works.
arXiv:2103.10051v1 [cs.LG] 18 Mar 2021
DATA-FREE MIXED-PRECISION QUANTIZATION USING NOVEL SENSITIVITY METRIC
|
5 |
-
Donghyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song, and Changkyu Choi
|
6 |
-
Samsung Advanced Institute of Technology, Samsung Electronics, South Korea
|
7 |
-
ABSTRACT
|
8 |
-
Post-training quantization is a representative technique for
|
9 |
-
compressing neural networks, making them smaller and more
|
10 |
-
efficient for deployment on edge devices. However, an in-
|
11 |
-
accessible user dataset often makes it difficult to ensure the
|
12 |
-
quality of the quantized neural network in practice. In addi-
|
13 |
-
tion, existing approaches may use a single uniform bit-width
|
14 |
-
across the network, resulting in significant accuracy degrada-
|
15 |
-
tion at extremely low bit-widths. To utilize multiple bit-width,
|
16 |
-
sensitivity metric plays a key role in balancing accuracy and
|
17 |
-
compression. In this paper, we propose a novel sensitivity
|
18 |
-
metric that considers the effect of quantization error on task
|
19 |
-
loss and interaction with other layers. Moreover, we develop
|
20 |
-
labeled data generation methods that are not dependent on a
|
21 |
-
specific operation of the neural network. Our experiments
|
22 |
-
show that the proposed metric better represents quantization
|
23 |
-
sensitivity, and generated data are more feasible to be applied
|
24 |
-
to mixed-precision quantization.
|
25 |
-
Index Terms —Deep Learning, Quantization, Data Free
|
26 |
-
1. INTRODUCTION
|
27 |
-
In recent years, deep neural networks have simplified and en-
|
28 |
-
abled applications in numerous domains, especially for vision
|
29 |
-
tasks [1, 2, 3]. Meanwhile, there is a need to minimize the
|
30 |
-
memory footprint and reduce the network computation cost to
|
31 |
-
deploy on edge devices. Significant efforts have been made to
|
32 |
-
reduce the network size or accelerate inference of the neural
|
33 |
-
network [4, 5, 6]. Several approaches exist to this problem,
|
34 |
-
and quantization has been studied as one of the most reliable
|
35 |
-
solutions. In the quantization process, low-bit representations
|
36 |
-
of both weights and activations introduce quantization noise,
|
37 |
-
which results in accuracy degradation. To alleviate the accu-
|
38 |
-
racy loss, retraining or fine-tuning methods are developed by
|
39 |
-
exploiting extensive training datasets [7, 8, 9].
|
40 |
-
However, these methods are not applicable in many real-
|
41 |
-
world scenarios, where the user dataset is inaccessible due to
|
42 |
-
confidential or personal issues [10]. It is impossible to train
|
43 |
-
the network and verify the quality of a quantized neural net-
|
44 |
-
work without a user dataset. Although post-training quantiza-
|
45 |
-
tion is a frequently suggested method to address this problem
|
46 |
-
[11, 12], a small dataset is often required to decide the optimal
|
47 |
-
Equal contribution
|
48 |
-
(a) Data generation
|
49 |
-
(b) Profiling for quantization
|
50 |
-
(c) Sensitivity measurement
|
51 |
-
(d) Mixed-precision inference
|
52 |
-
Fig. 1 . Overall process of a post-training mixed-precision
|
53 |
-
quantization method. (a) Generate dataset from pretrained
|
54 |
-
network. (b) Post-training quantization using statistics. (c)
|
55 |
-
Measure the sensitivity of each layer. (d) Quantize to higher
|
56 |
-
precision for a more sensitive layer.
|
57 |
-
clipping range of each layer. The credential statistics of lay-
|
58 |
-
ers are important to ensure the performance of the quantized
|
59 |
-
network in increasing task loss at ultra-low-bit precision.
|
60 |
-
Considering that most quantization approaches have
|
61 |
-
used uniform bit allocation across the network, the mixed-
|
62 |
-
precision quantization approach takes a step further in pre-
|
63 |
-
serving the accuracy by lifting the limit of those approaches
|
64 |
-
[13, 14]. As a necessity, several sensitivity measurement
|
65 |
-
methods for maximizing the effects of mixed-precision have
|
66 |
-
also been proposed because it is difficult to determine which
|
67 |
-
sections of the network are comparatively less susceptible
|
68 |
-
to quantization [15, 16, 17]. To measure the quantization
|
69 |
-
robustness of activations and weights, it is necessary to an-
|
70 |
-
alyze the statistics generated during forward and backward
|
71 |
-
processes by using a reliable dataset. The prior sensitivity
|
72 |
-
metrics use the difference between outputs of the original
|
73 |
-
model and the quantized model when quantizing each layer
|
74 |
-
separately [10, 15]. However, this approach does not con-
|
75 |
-
sider the interaction of quantization error and other layers.
|
76 |
-
It cannot be neglected because a lower bit quantization im-
|
77 |
-
plies a larger quantization error. Other prior studies require
|
78 |
-
severe approximations to compute higher-order derivatives
|
79 |
-
efficiently [16, 18].Fig. 2 . Top-2 confidences of each generated data sample on
|
80 |
-
ResNet50. Confidence means the output values of the soft-
|
81 |
-
max layer. For all classes except one (Class 744), the gener-
|
82 |
-
ated labeled data samples pointed their target classes corre-
|
83 |
-
sponding to the labels with the highest confidence.
|
84 |
-
In this work, we provide a straightforward method to com-
|
85 |
-
pute the layer-wise sensitivity for mixed-precision quantiza-
|
86 |
-
tion, considering the interaction of quantization error. In addi-
|
87 |
-
tion, we propose a data generation method, which is effective
|
88 |
-
in the process of post-training mixed-precision quantization,
|
89 |
-
as shown in Figure 1. Prior works [10, 19] use statistics of
|
90 |
-
a particular operation (such as batch normalization), and it is
|
91 |
-
impotent when the networks that do not have the specific op-
|
92 |
-
eration explicitly. The proposed synthetic data engineering
|
93 |
-
approach is independent of network structures. Moreover, it
|
94 |
-
generates labeled data to verify the quantized network.
|
95 |
-
The remainder of this paper is organized as follows. Sec-
|
96 |
-
tion 2 describes the data generation method and proposes a
|
97 |
-
sensitivity measurement metric. Section 3 provides an exper-
|
98 |
-
imental demonstration of mixed-precision quantization using
|
99 |
-
the proposed metric and generated data, and the results are
|
100 |
-
compared with previous approaches.
|
101 |
-
2. METHODOLOGY
|
102 |
-
2.1. Data Generation
|
103 |
-
For a general data generation method, we seek to avoid any
|
104 |
-
dependency on a specific operation in a convolutional net-
|
105 |
-
work. As shown in Figure 1(a), we first forward a noisy image
|
106 |
-
initialized from a uniform distribution in the range between 0
|
107 |
-
and 255, inclusive. To produce a set of labeled data, we use
|
108 |
-
a set of class-relevant vectors/matrices, which means one-hot
|
109 |
-
vectors per class in the classification task. In the forward pass,
|
110 |
-
the loss is computed as
|
111 |
-
x
|
112 |
-
c= argmin
|
113 |
-
xcLCE(f(y(xc)); vc) +y(xc) (1)
|
114 |
-
wherexcis the trainable input feature, vcis the set of class-
|
115 |
-
relevant vectors/matrices, called the golden set, y()is theneural network, and f()is an activation function (i.e., soft-
|
116 |
-
max) used to compute cross-entropy loss ( LCE) with the
|
117 |
-
golden set. In addition, we reinforce the loss by maximizing
|
118 |
-
the activation of output of network to enhance the efficiency
|
119 |
-
of data generation process by referring the prior works for
|
120 |
-
model interpretation [20, 21]. The calculated loss is prop-
|
121 |
-
agated in the backward pass and generates input feature x
|
122 |
-
c
|
123 |
-
for each class, c2f1:::NumOfClass g. Finally, we have
|
124 |
-
crafted synthetic data for each class after several iterations of
|
125 |
-
forward and backward processes.
|
126 |
-
Figure 2 demonstrates the reliability of crafting labeled
|
127 |
-
data for each class by showing the top-2 confidences per class
|
128 |
-
of synthetic data. All the generated data are used not only to
|
129 |
-
measure the layer-wise sensitivity of the network but also to
|
130 |
-
obtain the statistics of activations to improve the quality of the
|
131 |
-
post-training quantized network.
|
132 |
-
2.2. Sensitivity Metric
|
133 |
-
The objective of mixed-precision quantization is to allocate
|
134 |
-
appropriate bits to each layer to reduce the cost (e.g., bit op-
|
135 |
-
erations [22]) of neural network models while suppressing the
|
136 |
-
task loss growth. The sensitivity metric is an important factor
|
137 |
-
for finding the optimal point between the effects of quanti-
|
138 |
-
zation error and cost. First, we would like to measure the
|
139 |
-
effect of quantization error on the loss of a network that has
|
140 |
-
totalLlayers when weights of ithlayer (1iL),Wi
|
141 |
-
are quantized through quantization function Qk()intok-bit.
|
142 |
-
Given input data xand quantized neural network ^y, we con-
|
143 |
-
sider the Euclidean distance between y(x)and^y(x)as theL,
|
144 |
-
the loss of network output, for sensitivity measurement in-
|
145 |
-
stead of task loss. We can define the effect of quantization
|
146 |
-
errorQk(Wi) WiofQk(Wi)onLas follows:
|
147 |
-
|
148 |
-
Wi(k) =1
|
149 |
-
NX
|
150 |
-
x
|
151 |
-
@(Qk(Wi) Wi)
|
152 |
-
whereNdenotes the total size of the dataset, and
|
153 |
-
Wi(k)are
|
154 |
-
gradients for the quantization error. Wiis not variable in post-
|
155 |
-
training quantization; thus, we can represent Eq. 2 as weight
|
156 |
-
gradients of quantized parameters by using the chain rule as
|
157 |
-
follows:
|
158 |
-
@L
|
159 |
-
@Qk(W)@Qk(W)
|
160 |
-
@(Qk(W) W)'@L
|
161 |
-
@Qk(W)(3)
|
162 |
-
The effects of the activation’s quantization error on Lis
|
163 |
-
represented as activation gradients of quantized activation by
|
164 |
-
applying the derivation of the formula shown in the previous
|
165 |
-
equations. Subsequently, we calculate the expectation for the
|
166 |
-
effects of the quantized network on Lby using the geometric
|
167 |
-
mean of
|
168 |
-
ofm-bit quantized activation Qm(A)and quan-
|
169 |
-
tized weights Qk(W), which is formulated asE[jS^yj] =LY
|
170 |
-
i
|
171 |
-
Ai(mi)
|
172 |
-
Wi(ki)1
|
173 |
-
L
|
174 |
-
(4)
|
175 |
-
The gradients of the converged single-precision neural
|
176 |
-
network are not changed for the same data. To measure
|
177 |
-
the effect of Qk(Wi) Wion other connected layers, we
|
178 |
-
observe the gradient perturbations of WandA, which are
|
179 |
-
caused byQk(Wi). Consequently, we can measure the effect
|
180 |
-
of quantization error on other layers and loss of the network
|
181 |
-
together, i.e., sensitivity of quantization by using E[jS^yj]
|
182 |
-
when we only quantize activations or weights of a layer for
|
183 |
-
which we would like to measure the sensitivity. Expressing
|
184 |
-
the single-precision as QFP32(), we have
|
185 |
-
E[jSWjj] =LY
|
186 |
-
i
|
187 |
-
Ai(FP32)
|
188 |
-
Wi(ki)1
|
189 |
-
L
|
190 |
-
(5)
|
191 |
-
s:t: k i=(
|
192 |
-
ki<32i=j
|
193 |
-
FP32i6=j
|
194 |
-
for quantization sensitivity metric for Wj. It is straightfor-
|
195 |
-
ward by using the information of back-propagation of quan-
|
196 |
-
tized network and considers the effect of quantization error
|
197 |
-
on other layers.
|
198 |
-
3. EXPERIMENTS
|
199 |
-
In this section, we first demonstrate the effectiveness of the
|
200 |
-
proposed data generation method in post-training quantiza-
|
201 |
-
tion. Then, we show that the proposed sensitivity metric rep-
|
202 |
-
resents the quantization sensitivity of the layer effectively by
|
203 |
-
using our generated data. Finally, sensitivity value by using
|
204 |
-
various datasets indicates that the proposed data generation
|
205 |
-
method is also credible in sensitivity measurement.
|
206 |
-
To demonstrate our methodology, we use classification
|
207 |
-
models VGG19, ResNet50, and InceptionV3 on the ImageNet
|
208 |
-
validation dataset.
|
209 |
-
3.1. Efficacy of Data Generation Method
|
210 |
-
We evaluate our method using the generated data to determine
|
211 |
-
the clipping range of each layer in post-training quantization.
|
212 |
-
Our experiments exploit a simple symmetric quantization ap-
|
213 |
-
proach, which uses the maximum value of activation as the
|
214 |
-
clipping range of each layer. Hence, maximum values are
|
215 |
-
extremely crucial for confining the dynamic range to utilize
|
216 |
-
the bit-width efficiently, preventing a severe accuracy drop in
|
217 |
-
low-bit quantization.
|
218 |
-
For our proposed method, input images are initialized ac-
|
219 |
-
cording to uniform distribution, which follows standardiza-
|
220 |
-
tion or normalization. To generate data, we use Adam op-
|
221 |
-
timizer to optimize the loss function with a learning rate of
|
222 |
-
0.04, and we found that =1 works best empirically.Model Dataset Top-1 Top-5
|
223 |
-
VGG19ImageNet 72.31 90.81
|
224 |
-
Noise 3.45 10.45
|
225 |
-
ZeroQ - -
|
226 |
-
Proposed 71.84 90.53
|
227 |
-
ResNet50ImageNet 75.89 92.74
|
228 |
-
Noise 11.28 29.88
|
229 |
-
ZeroQ 75.47 92.62
|
230 |
-
Proposed 75.68 92.72
|
231 |
-
InceptionV3ImageNet 77.14 93.43
|
232 |
-
Noise 62.49 85.00
|
233 |
-
ZeroQ 76.85 93.31
|
234 |
-
Proposed 76.83 93.26
|
235 |
-
Table 1 . Results of post-training quantization (8-bit quantiza-
|
236 |
-
tion for activations and weights) using different dataset. Ima-
|
237 |
-
geNet presents the oracle performance in terms of using train-
|
238 |
-
ing dataset and others are the results of the data-free methods.
|
239 |
-
For InceptionV3, the input data are initialized according to U(0, 255) and then standardized with the mean and variance, considering that the model requires the synthetic data to be converted into the range [-1, 1] through preprocessing. For VGG19 and ResNet50, the input data are initialized according to U(0, 255) and then normalized by a factor of 255, because these models require the range [0, 1].
|
246 |
-
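A minimal sketch of this initialization step (the helper below and its preprocessing choices are illustrative assumptions, not taken from the paper):

    import torch

    def init_synthetic_batch(batch_size=32, size=224, mode="unit_range"):
        # Draw images from U(0, 255) and map them into the range the model expects.
        x = torch.rand(batch_size, 3, size, size) * 255.0
        if mode == "inception":        # models expecting inputs in [-1, 1]
            x = (x / 255.0 - 0.5) / 0.5
        elif mode == "unit_range":     # models expecting inputs in [0, 1]
            x = x / 255.0
        return x.requires_grad_(True)  # the images themselves are the optimization variables

    images = init_synthetic_batch(mode="inception")
    optimizer = torch.optim.Adam([images], lr=0.04)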
Table 1 shows the empirical results for ImageNet, random
|
247 |
-
noise, and ZeroQ [10]. In the experiment, ImageNet data are
|
248 |
-
produced by choosing one image per class from the training
|
249 |
-
data, and we generate 1000 data samples randomly using the
|
250 |
-
existing method and the proposed method. As one can see,
|
251 |
-
our method shows similar or higher performances over exist-
|
252 |
-
ing data-free method, having less than 0.5% differences from
|
253 |
-
the results of ImageNet cases. As shown in VGG19, although
|
254 |
-
the existing research is hard to generalize, the method main-
|
255 |
-
tains sound performance regardless of the network structure.
|
256 |
-
3.2. Comparison of Sensitivity Metrics
|
257 |
-
To verify several metrics in weight sensitivity, we measure
|
258 |
-
the task loss by switching floating-point weights in the or-
|
259 |
-
der of layers with higher sensitivity values, where all weights
|
260 |
-
are quantized to 4-bit, and activations do not have quanti-
|
261 |
-
zation error. We denote this experiment as A32W f32,4g.
|
262 |
-
Af32,4gW32 indicates the evaluation of the metrics in acti-
|
263 |
-
vation sensitivity when switching the 4-bit quantized activa-
|
264 |
-
tion to floating-point, where weights are single-precision. Ze-
|
265 |
-
roQ [10] measures the KL divergence and [15] measures the
|
266 |
-
Euclidean distance between the original model and the quan-
|
267 |
-
tized model. ZeroQ [10] only measures the weight sensitivity.
|
268 |
-
HAWQ-V2 [16] uses the average Hessian trace and L2 norm
|
269 |
-
of quantization perturbation. We use the data generated by
|
270 |
-
the proposed method for all metrics.
|
271 |
-
Figure 3 shows the results for ResNet50 and InceptionV3 over the ImageNet dataset.

Fig. 3. Results of the sensitivity metric evaluation on ResNet50 and InceptionV3 over the ImageNet dataset. Panels: (a) ResNet50 A32W{32,4}, (b) ResNet50 A{32,4}W32, (c) InceptionV3 A32W{32,4}, (d) InceptionV3 A{32,4}W32. A32W{32,4} is the evaluation for weight sensitivity and A{32,4}W32 is for activation sensitivity.

Model       | Metric      | A32W{32,4} | A{32,4}W32
ResNet50    | Proposed    | 1          | 1
            | L2 [15]     | 1.22       | 1.11
            | ZeroQ (KLD) | 1.14       | 18.48
            | HAWQ-V2     | 1.42       | 2.09
InceptionV3 | Proposed    | 1          | 1
            | L2          | 1.04       | 1.50
            | ZeroQ (KLD) | 1.69       | 8.06
            | HAWQ-V2     | 1.25       | 6.17

Table 2. Quantitative comparison of different sensitivity metrics through the relative area calculation of the task-loss graph.

Our sensitivity metric reliably and
|
289 |
-
quickly lowered the task loss, which means that the proposed
|
290 |
-
sensitivity metric is good at identifying the weights and activations that are
|
291 |
-
comparatively more susceptible to quantization. To express
|
292 |
-
the results of Figure 3 quantitatively, we calculate the relative
|
293 |
-
area under the curve of task loss graph considering the result
|
294 |
-
of the proposed metric as 1 and summarize it in Table 2. Our
|
295 |
-
proposed method has the smallest area in all results.
|
296 |
-
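The relative-area comparison can be illustrated with a short sketch (a hypothetical helper; it assumes each metric yields a task-loss curve sampled as layers are switched back to FP32 in order of decreasing measured sensitivity):

    import numpy as np

    def relative_auc(loss_curves: dict, reference: str = "Proposed") -> dict:
        # Area under each task-loss curve, normalized so that the reference metric equals 1.
        areas = {name: np.trapz(np.asarray(curve)) for name, curve in loss_curves.items()}
        return {name: area / areas[reference] for name, area in areas.items()}

    # Toy example: loss after restoring the k most sensitive layers to FP32.
    curves = {
        "Proposed": [2.3, 1.1, 0.6, 0.4, 0.3],
        "KLD":      [2.3, 1.9, 1.2, 0.8, 0.3],
    }
    print(relative_auc(curves))  # {'Proposed': 1.0, 'KLD': > 1.0}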
3.3. Sensitivity According to Dataset Type
|
297 |
-
To show the importance of the data in measuring the sensitiv-
|
298 |
-
ity, we evaluate the proposed sensitivity metric over the differ-
|
299 |
-
ent datasets of Section 3.1 on the 4-bit quantized network that
|
300 |
-
clipped the activation range using ImageNet dataset. We mea-
|
301 |
-
sure the task loss as in Section 3.2. The proposed dataset is the
|
302 |
-
most similar in sensitivity to the ImageNet dataset, as shown
|
303 |
-
in Table 3. Notably, the proposed data generation method pro-
|
304 |
-
vides reliable statistics similar to the original training dataset.
|
305 |
-
For InceptionV3, preprocessed ImageNet data are in the range
|
306 |
-
of [-2.03, 2.52]. However, the range of the preprocessed data
|
307 |
-
from [10] is measured as [-11.50, 11.21], while that of our
|
308 |
-
data is in [-2.35, 2.36], whose maximum value is almost simi-
|
309 |
-
lar to that of the ImageNet dataset. These similar statistics are
|
310 |
-
seen in ResNet50. The incorrect statistics of the activation
|
311 |
-
data corresponding to the original training dataset imply inaccurate sensitivity measurement [10]. Thus, our generation method can be used for reliable sensitivity measurement.

Model       | Dataset  | A32W{32,4} | A{32,4}W32
ResNet50    | Proposed | 1          | 1
            | ImageNet | 0.74       | 1.60
            | Noise    | 1.35       | 6.76
            | ZeroQ    | 1.11       | 3.17
InceptionV3 | Proposed | 1          | 1
            | ImageNet | 1.07       | 2.48
            | Noise    | 1.36       | 3.54
            | ZeroQ    | 1.47       | 2.63

Table 3. Quantitative comparison of using different datasets in the sensitivity measurement, through the relative area calculation of the task-loss graph.
|
325 |
-
4. CONCLUSION
|
326 |
-
In this paper, we proposed an effective data generation
|
327 |
-
method for post-training mixed-precision quantization. Our
|
328 |
-
approach is to train the random noise to generate data by using
|
329 |
-
class-relevant vectors. It is not only independent of network
|
330 |
-
structure but also provides a labeled dataset. We demonstrate
|
331 |
-
that the generated data are sufficient to ensure the quality of
|
332 |
-
post-training quantization. Furthermore, we proposed a novel
|
333 |
-
sensitivity metric, which is important to optimize bit alloca-
|
334 |
-
tion. The proposed sensitivity metric considers the effect of
|
335 |
-
quantization on other relative layers and task loss together us-
|
336 |
-
ing the gradient perturbation of the quantized neural network.
|
337 |
-
Comparisons of sensitivity metrics were made to show the
|
338 |
-
extent to which a layer with high sensitivity, measured with
|
339 |
-
sensitivity metrics of other methods, affects task loss. The
|
340 |
-
proposed sensitivity metric outperformed the other metrics in representing the effect of quantization error. We leave optimizing
|
342 |
-
bit allocation using the proposed metric and applying to other
|
343 |
-
tasks as part of future work.

5. REFERENCES
|
344 |
-
[1] C. Szegedy, V . Vanhoucke, S. Ioffe, J. Shlens, and
|
345 |
-
Z. Wojna, “Rethinking the Inception architecture for
|
346 |
-
computer vision,” in Proceedings of the IEEE confer-
|
347 |
-
ence on computer vision and pattern recognition , 2016,
|
348 |
-
pp. 2818–2826.
|
349 |
-
[2] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed,
|
350 |
-
C.-Y . Fu, and A. C. Berg, “SSD: Single shot multibox
|
351 |
-
detector,” in European conference on computer vision .
|
352 |
-
Springer, 2016, pp. 21–37.
|
353 |
-
[3] L.-C. Chen, Y . Zhu, G. Papandreou, F. Schroff, and
|
354 |
-
H. Adam, “Encoder-decoder with atrous separable con-
|
355 |
-
volution for semantic image segmentation,” in Proceed-
|
356 |
-
ings of the European conference on computer vision
|
357 |
-
(ECCV) , 2018, pp. 801–818.
|
358 |
-
[4] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen,
|
359 |
-
M. Tan, W. Wang, Y . Zhu, R. Pang, V . Vasudevan, Q. V .
|
360 |
-
Le, and H. Adam, “Searching for mobilenetv3,” arXiv
|
361 |
-
preprint arxiv:1905.02244 , 2019.
|
362 |
-
[5] G. Hinton, O. Vinyals, and J. Dean, “Distilling
|
363 |
-
the knowledge in a neural network,” arXiv preprint
|
364 |
-
arXiv:1503.02531 , 2015.
|
365 |
-
[6] S. Han, H. Mao, and W. J. Dally, “Deep compres-
|
366 |
-
sion: Compressing deep neural networks with prun-
|
367 |
-
ing, trained quantization and huffman coding,” arXiv
|
368 |
-
preprint arXiv:1510.00149 , 2015.
|
369 |
-
[7] S. Jung, C. Son, S. Lee, J. Son, J.-J. Han, Y . Kwak,
|
370 |
-
S. J. Hwang, and C. Choi, “Learning to quantize deep
|
371 |
-
networks by optimizing quantization intervals with task
|
372 |
-
loss,” in 2019 IEEE/CVF Conference on Computer Vi-
|
373 |
-
sion and Pattern Recognition , 2019.
|
374 |
-
[8] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang,
|
375 |
-
V . Srinivasan, and K. Gopalakrishnan, “PACT: Param-
|
376 |
-
eterized clipping activation for quantized neural net-
|
377 |
-
works,” arXiv preprint arXiv:1805.06085 , 2018.
|
378 |
-
[9] D. Zhang, J. Yang, D. Ye, and G. Hua, “LQ-Nets:
|
379 |
-
Learned quantization for highly accurate and compact
|
380 |
-
deep neural networks,” in 15th European Conference on
|
381 |
-
Computer Vision , 2018.
|
382 |
-
[10] Y . Cai, Z. Yao, Z. Dong, A. Gholami, M. W. Mahoney,
|
383 |
-
and K. Keutzer, “ZeroQ: A novel zero shot quantiza-
|
384 |
-
tion framework,” in 17th Computer Vision and Pattern
|
385 |
-
Recognition , 2020.
|
386 |
-
[11] P. Nayak, D. Zhang, and S. Chai, “Bit efficient quan-
|
387 |
-
tization for deep neural networks,” arXiv preprint
|
388 |
-
arXiv:1910.04877 , 2019.[12] M. Nagel, M. van Baalen, T. Blankevoort, and
|
389 |
-
M. Welling, “Data-free quantization through weight
|
390 |
-
equalization and bias correction,” in 2019 International Conference on Computer Vision, 2019.
|
392 |
-
[13] K. Wang, Z. Liu, Y . Lin, J. Lin, and S. Han, “HAQ:
|
393 |
-
Hardware-aware automated quantization with mixed
|
394 |
-
precision,” in The 16th Computer Vision and Pattern
|
395 |
-
Recognition , 2019.
|
396 |
-
[14] B. Wu, Y . Wang, P. Zhang, Y . Tian, P. Vajda, and
|
397 |
-
K. Keutzer, “Mixed precision quantization of ConvNets
|
398 |
-
via differentiable neural architecture search,” arXiv
|
399 |
-
preprint arxiv:1812.00090 , 2018.
|
400 |
-
[15] W. Zhe, J. Lin, V . Chandrasekhar, and B. Girod, “Op-
|
401 |
-
timizing the bit allocation for compression of weights
|
402 |
-
and activations of deep neural networks,” in 2019 IEEE
|
403 |
-
International Conference on Image Processing (ICIP) .
|
404 |
-
IEEE, 2019, pp. 3826–3830.
|
405 |
-
[16] Z. Dong, Z. Yao, Y . Cai, D. Arfeen, A. Gholami, M. W.
|
406 |
-
Mahoney, and K. Keutzer, “HAWQ-V2: Hessian aware
|
407 |
-
trace-weighted quantization of neural networks,” arXiv
|
408 |
-
preprint arxiv:1911.03852 , 2019.
|
409 |
-
[17] P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and
|
410 |
-
J. Kautz, “Importance estimation for neural network
|
411 |
-
pruning,” in Proceedings of the IEEE Conference on
|
412 |
-
Computer Vision and Pattern Recognition , 2019, pp.
|
413 |
-
11 264–11 272.
|
414 |
-
[18] Z. Dong, Z. Yao, A. Gholami, M. W. Mahoney, and
|
415 |
-
K. Keutzer, “HAWQ: Hessian aware quantization of
|
416 |
-
neural networks with mixed-precision,” in 2019 International Conference on Computer Vision, 2019.
|
418 |
-
[19] H. Yin, P. Molchanov, J. M. Alvarez, Z. Li, A. Mallya,
|
419 |
-
D. Hoiem, N. K. Jha, and J. Kautz, “Dreaming to dis-
|
420 |
-
till: Data-free knowledge transfer via DeepInversion,” in
|
421 |
-
Proceedings of the IEEE/CVF Conference on Computer
|
422 |
-
Vision and Pattern Recognition , 2020, pp. 8715–8724.
|
423 |
-
[20] M. T. Alexander Mordvintsev, Christopher Olah,
|
424 |
-
“Inceptionism: Going deeper into neural networks,”
|
425 |
-
2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
|
427 |
-
[21] F. M. Graetz, “How to visualize convolutional
|
428 |
-
features in 40 lines of code,” 2019. [Online].
|
429 |
-
Available: https://towardsdatascience.com/how-to-visualize-convolutional-features-in-40-lines-of-code-70b7d87b0030
|
432 |
-
[22] M. van Baalen, C. Louizos, M. Nagel, R. A. Amjad,
|
433 |
-
Y . Wang, T. Blankevoort, and M. Welling, “Bayesian
|
434 |
-
bits: Unifying quantization and pruning,” arXiv preprint
|
435 |
-
arXiv:2005.07093 , 2020.
txt/2103.13922.txt
DELETED
@@ -1,1473 +0,0 @@
|
|
1 |
-
ScanGAN360: A Generative Model of Realistic Scanpaths for 360Images
|
2 |
-
Daniel Martin1Ana Serrano2Alexander W. Bergman3Gordon Wetzstein3
|
3 |
-
Belen Masia1
|
4 |
-
1Universidad de Zaragoza, I3A2Centro Universitario de la Defensa, Zaragoza3Stanford University
|
5 |
-
Abstract
|
6 |
-
Understanding and modeling the dynamics of human
|
7 |
-
gaze behavior in 360environments is a key challenge in
|
8 |
-
computer vision and virtual reality. Generative adversar-
|
9 |
-
ial approaches could alleviate this challenge by generat-
|
10 |
-
ing a large number of possible scanpaths for unseen im-
|
11 |
-
ages. Existing methods for scanpath generation, however,
|
12 |
-
do not adequately predict realistic scanpaths for 360im-
|
13 |
-
ages. We present ScanGAN360, a new generative adver-
|
14 |
-
sarial approach to address this challenging problem. Our
|
15 |
-
network generator is tailored to the specifics of 360im-
|
16 |
-
ages representing immersive environments. Specifically, we
|
17 |
-
accomplish this by leveraging the use of a spherical adapta-
|
18 |
-
tion of dynamic-time warping as a loss function and propos-
|
19 |
-
ing a novel parameterization of 360scanpaths. The quality
|
20 |
-
of our scanpaths outperforms competing approaches by a
|
21 |
-
large margin and is almost on par with the human baseline.
|
22 |
-
ScanGAN360 thus allows fast simulation of large numbers
|
23 |
-
ofvirtual observers , whose behavior mimics real users, en-
|
24 |
-
abling a better understanding of gaze behavior and novel
|
25 |
-
applications in virtual scene design.
|
26 |
-
1. Introduction
|
27 |
-
Virtual reality (VR) is an emerging medium that unlocks
|
28 |
-
unprecedented user experiences. To optimize these expe-
|
29 |
-
riences, however, it is crucial to develop computer vision
|
30 |
-
techniques that help us understand how people explore im-
|
31 |
-
mersive virtual environments. Models for time-dependent
|
32 |
-
visual exploration behavior are important for designing and
|
33 |
-
editing VR content [42], for generating realistic gaze trajec-
|
34 |
-
tories of digital avatars [18], for understanding dynamic vi-
|
35 |
-
sual attention and visual search behavior [60], and for devel-
|
36 |
-
oping new rendering, display, and compression algorithms,
|
37 |
-
among other applications.
|
38 |
-
Current approaches that model how people explore vir-
|
39 |
-
tual environments often leverage saliency prediction [43,
|
40 |
-
13, 31, 2]. While this is useful for some applications, the
|
41 |
-
fixation points predicted by these approaches do not account
|
42 |
-
Figure 1. We present ScanGAN360, a generative adversarial ap-
|
43 |
-
proach to scanpath generation for 360images. ScanGAN360
|
44 |
-
generates realistic scanpaths ( bottom rows ), outperforming state-
|
45 |
-
of-the-art methods and mimicking the human baseline ( top row ).
|
46 |
-
for the time-dependent visual behavior of the user, making
|
47 |
-
it difficult to predict the order of fixations, or give insight
|
48 |
-
into how people explore an environment over time. For this
|
49 |
-
purpose, some recent work has explored scanpath predic-
|
50 |
-
tion [2, 3, 62, 4], but these algorithms do not adequately
|
51 |
-
model how people explore immersive virtual environments,
|
52 |
-
resulting in erratic or non-plausible scanpaths.
|
53 |
-
In this work, we present ScanGAN360, a novel frame-
|
54 |
-
work for scanpath generation for 360images (Figure 1).
|
55 |
-
Our model builds on a conditional generative adversarial
|
56 |
-
network (cGAN) architecture, for which we discuss and val-
|
57 |
-
idate two important insights that we show are necessary for
|
58 |
-
realistic scanpath generation. First, we propose a loss func-
|
59 |
-
tion based on a spherical adaptation of dynamic time warp-
|
60 |
-
ing (DTW), which is a key aspect for training our GAN ro-
|
61 |
-
bustly. DTW is a metric for measuring similarity between
|
62 |
-
two time series, such as scanpaths, which to our knowledge
|
63 |
-
has not been used to train scanpath-generating GANs. Sec-
|
64 |
-
ond, to adequately tackle the problem of scanpath genera-
|
65 |
-
tion in 360images, we present a novel parameterization ofarXiv:2103.13922v1 [cs.CV] 25 Mar 2021the scanpaths. These insights allow us to demonstrate state-
|
66 |
-
of-the-art results for scanpath generation in VR, close to the
|
67 |
-
human baseline and far surpassing the performance of ex-
|
68 |
-
isting methods. Our approach is the first to enable robust
|
69 |
-
scanpath prediction over long time periods up to 30 sec-
|
70 |
-
onds, and, unlike previous work, our model does not rely
|
71 |
-
on saliency, which is typically not available as ground truth.
|
72 |
-
Our model produces about 1,000 scanpaths per second,
|
73 |
-
which enables fast simulation of large numbers of virtual
|
74 |
-
observers , whose behavior mimics that of real users. Us-
|
75 |
-
ing ScanGAN360, we explore applications in virtual scene
|
76 |
-
design, which is useful in video games, interior design,
|
77 |
-
cinematography, and tourism, and scanpath-driven video
|
78 |
-
thumbnail generation of 360images, which provides pre-
|
79 |
-
views of VR content for social media platforms. Beyond
|
80 |
-
these applications, we propose to use ScanGAN360 for
|
81 |
-
applications such as gaze behavior simulation for virtual
|
82 |
-
avatars or gaze-contingent rendering. Extended discussion
|
83 |
-
and results on applications are included in the supplemen-
|
84 |
-
tary material and video.
|
85 |
-
We will make our source code and pre-trained model
|
86 |
-
publicly available to promote future research.
|
87 |
-
2. Related work
|
88 |
-
Modeling and predicting attention The multimodal na-
|
89 |
-
ture of attention [30], together with the complexity of hu-
|
90 |
-
man gaze behavior, make this a very challenging task. Many
|
91 |
-
works devoted to it have relied on representations such as
|
92 |
-
saliency, which is a convenient representation for indicat-
|
93 |
-
ing the regions of an image more likely to attract atten-
|
94 |
-
tion. Early strategies for saliency modeling have focused
|
95 |
-
on either creating hand-crafted features representative of
|
96 |
-
saliency [19, 52, 61, 29, 20, 7], or directly learning data-
|
97 |
-
driven features [49, 22]. With the proliferation of exten-
|
98 |
-
sive datasets of human attention [43, 39, 20, 8, 59], deep
|
99 |
-
learning–based methods for saliency prediction have been
|
100 |
-
successfully applied, yielding impressive results [37, 36, 14,
|
101 |
-
50, 54, 55, 58].
|
102 |
-
However, saliency models do not take into account the
|
103 |
-
dynamic nature of human gaze behavior, and therefore, they
|
104 |
-
are unable to model or predict time-varying aspects of at-
|
105 |
-
tention. Being able to model and predict dynamic explo-
|
106 |
-
ration patterns has been proven to be useful, for example,
|
107 |
-
for avatar gaze control [12, 41], video rendering in virtual
|
108 |
-
reality [26], or for directing users’ attention over time in
|
109 |
-
many contexts [9, 38]. Scanpath models aim to predict vi-
|
110 |
-
sual patterns of exploration that an observer would perform
|
111 |
-
when presented with an image. In contrast to saliency mod-
|
112 |
-
els, scanpath models typically focus on predicting plausi-
|
113 |
-
ble scanpaths, i.e., they do not predict a unique scanpath
|
114 |
-
and instead they try to mimic human behavior when ex-
|
115 |
-
ploring an image, taking into account the variability be-
|
116 |
-
tween different observers. Ellis and Smith [16] were pio-neers in this field: they proposed a general framework for
|
117 |
-
generating scanpaths based on Markov stochastic processes.
|
118 |
-
Several approaches have followed this work, incorporating
|
119 |
-
behavioral biases in the process in order to produce more
|
120 |
-
plausible scanpaths [24, 47, 27, 48]. In recent years, deep
|
121 |
-
learning models have been used to predict human scanpaths
|
122 |
-
based on neural network features trained on object recogni-
|
123 |
-
tion [22, 53, 14, 5].
|
124 |
-
Attention in 360images Predicting plausible scanpaths
|
125 |
-
in 360imagery is a more complex task: Observers do not
|
126 |
-
only scan a given image with their gaze, but they can now
|
127 |
-
also turn their head or body, effectively changing their view-
|
128 |
-
port over time. Several works have been proposed for mod-
|
129 |
-
eling saliency in 360images [33, 43, 31, 11, 44]. However,
|
130 |
-
scanpath prediction has received less attention. In their re-
|
131 |
-
cent work, Assens et al. [3] generalize their 2D model to
|
132 |
-
360images, but their loss function is unable to reproduce
|
133 |
-
the behavior of ground truth scanpaths (see Figure 4, third
|
134 |
-
column). A few works have focused on predicting short-
|
135 |
-
term sequential gaze points based on users’ previous his-
|
136 |
-
tory for 360videos, but they are limited to small temporal
|
137 |
-
windows (from one to ten seconds) [56, 25, 35]. For the
|
138 |
-
case of images, a number of recent methods focus on devel-
|
139 |
-
oping improved saliency models and principled methods to
|
140 |
-
sample from them [2, 4, 62].
|
141 |
-
Instead, we directly learn dynamic aspects of attention
|
142 |
-
from ground truth scanpaths by training a generative model
|
143 |
-
in an adversarial manner, with an architecture and loss
|
144 |
-
function specifically designed for scanpaths in 360im-
|
145 |
-
ages. This allows us to (i) effectively mimic human be-
|
146 |
-
havior when exploring scenes, bypassing the saliency gen-
|
147 |
-
eration and sampling steps, and (ii) optimize our network to
|
148 |
-
stochastically generate 360scanpaths, taking into account
|
149 |
-
observer variability.
|
150 |
-
3. Our Model
|
151 |
-
We adopt a generative adversarial approach, specifically
|
152 |
-
designed for 360content in which the model learns to gen-
|
153 |
-
erate a plausible scanpath, given the 360image as a con-
|
154 |
-
dition. In the following, we describe the parameterization
|
155 |
-
employed for the scanpaths, the design of our loss function
|
156 |
-
for the generator, and the particularities of our conditional
|
157 |
-
GAN architecture, ending with details about the training
|
158 |
-
process.
|
159 |
-
3.1. Scanpath Parameterization
|
160 |
-
Scanpaths are commonly provided as a sequence of two-
|
161 |
-
dimensional values corresponding to the coordinates (i;j)
|
162 |
-
of each gaze point in the image. When dealing with 360
|
163 |
-
images in equirectangular projections, gaze points are also
|
164 |
-
often represented by their latitude and longitude (θ, φ), with θ ∈ [-π/2, π/2] and φ ∈ [-π, π]. However, these parameterizations either suffer from discontinuities at the borders of a 360° image, or result in periodic, ambiguous values. The same point of the scene can have two different representations in these parameterizations, hindering the learning process.

Figure 2. Illustration of our generator and discriminator networks. Both networks have a two-branch structure: features extracted from the 360° image with the aid of a CoordConv layer and an encoder-like network are concatenated with the input vector for further processing. The generator learns to transform this input vector, conditioned by the image, into a plausible scanpath. The discriminator takes as its input vector a scanpath (either captured or synthesized by the generator), as well as the corresponding image, and determines the probability of this scanpath being real (or fake). We train them end-to-end in an adversarial manner, following a conditional GAN scheme. Please refer to the text for details on the loss functions and architecture.

We therefore resort to a three-dimensional parameterization of our scanpaths, where each gaze point p = (θ, φ) is transformed into its three-dimensional representation P = (x, y, z) such that:

    x = cos(θ) cos(φ),   y = cos(θ) sin(φ),   z = sin(θ).

This transformation assumes, without loss of generality, that the panorama is projected over a unit sphere. We use this parameterization for our model, which learns a scanpath P as a set of three-dimensional points over time. Specifically, given a number of samples T over time, P = (P_1, ..., P_T) ∈ R^(3×T). The results of the model are then converted back to a two-dimensional parameterization in terms of latitude (θ = atan2(z, sqrt(x² + y²))) and longitude (φ = atan2(y, x)) for display and evaluation purposes.
|
193 |
-
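A minimal NumPy sketch of this mapping and its inverse (an illustrative snippet, not the released implementation):

    import numpy as np

    def latlon_to_xyz(theta, phi):
        # Latitude/longitude (radians) to points on the unit sphere.
        x = np.cos(theta) * np.cos(phi)
        y = np.cos(theta) * np.sin(phi)
        z = np.sin(theta)
        return np.stack([x, y, z], axis=-1)

    def xyz_to_latlon(p):
        # Inverse mapping, used to bring generated points back to 2D.
        x, y, z = p[..., 0], p[..., 1], p[..., 2]
        theta = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))   # latitude
        phi = np.arctan2(y, x)                            # longitude
        return theta, phi

    # Round trip for a short scanpath of T gaze points.
    theta = np.array([0.1, -0.3, 0.5])
    phi = np.array([1.2, -2.8, 0.0])
    assert np.allclose((theta, phi), xyz_to_latlon(latlon_to_xyz(theta, phi)))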
3.2. Overview of the Model
|
194 |
-
Our model is a conditional GAN, where the condition
|
195 |
-
is the RGB 360image for which we wish to estimate a
|
196 |
-
scanpath. The generator Gis trained to generate a scanpath
|
197 |
-
from a latent code z (drawn randomly from a uniform distribution, U(-1, 1)), conditioned by the RGB 360° image y. The discriminator D takes as input a potential scanpath (x or G(z, y)), as well as the condition y (the RGB 360° im-
|
200 |
-
age), and outputs the probability of the scanpath being real
|
201 |
-
(or fake). The architecture of both networks, generator and
|
202 |
-
discriminator, can be seen in Figure 2, and further details
|
203 |
-
related to the architecture are described in Section 3.4.
|
204 |
-
3.3. Loss Function
|
205 |
-
The objective function of a conventional conditional
|
206 |
-
GAN is inspired by a minimax objective from game theory,
|
207 |
-
with an objective [32]:
|
208 |
-
    min_G max_D V(D, G) = E_x[log D(x, y)] + E_z[log(1 - D(G(z, y), y))].    (1)

We can separate this into two losses, one for the generator, L_G, and one for the discriminator, L_D:

    L_G = E_z[log(1 - D(G(z, y), y))],    (2)
    L_D = E_x[log D(x, y)] + E_z[log(1 - D(G(z, y), y))].    (3)
|
216 |
-
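In code, these two terms can be written directly from the discriminator's output probabilities (a schematic PyTorch-style sketch of Equations 2 and 3, with the maximization in L_D expressed as a negated loss; any details beyond the equations are assumptions):

    import torch

    def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
        # Negated objective of Eq. (3): maximize log D(x,y) + log(1 - D(G(z,y),y)).
        eps = 1e-7
        return -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()

    def generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
        # Eq. (2): the generator minimizes log(1 - D(G(z,y),y)).
        eps = 1e-7
        return torch.log(1.0 - d_fake + eps).mean()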
While this objective function suffices in certain cases, as
|
217 |
-
the complexity of the problem increases, the generator may
|
218 |
-
not be able to learn the transformation from the input distri-
|
219 |
-
bution into the target one. One can resort to adding a loss
|
220 |
-
term toLG, and in particular one that enforces similarity to
|
221 |
-
the scanpath ground truth data. However, using a conven-
|
222 |
-
tional data term, such as MSE, does not yield good results
|
223 |
-
(Section 4.4 includes an evaluation of this). To address this
|
224 |
-
issue, we introduce a novel term in LGspecifically targeted
|
225 |
-
to our problem, and based on dynamic time warping [34].Dynamic time warping (DTW) measures the similar-
|
226 |
-
ity between two temporal sequences, considering both the
|
227 |
-
shape and the order of the elements of a sequence, with-
|
228 |
-
out forcing a one-to-one correspondence between elements
|
229 |
-
of the time series. For this purpose, it takes into account
|
230 |
-
all the possible alignments of two time series rands, and
|
231 |
-
computes the one that yields the minimal distance between
|
232 |
-
them. Specifically, the DTW loss function between two
|
233 |
-
time series r ∈ R^(k×n) and s ∈ R^(k×m) can be expressed as [15]:

    DTW(r, s) = min_A ⟨A, Δ(r, s)⟩,    (4)

where Δ(r, s) = [δ(r_i, s_j)]_ij ∈ R^(n×m) is a matrix containing the distances δ(·, ·) between each pair of points in r and s, A is a binary matrix that accounts for the alignment (or correspondence) between r and s, and ⟨·, ·⟩ is the inner product between both matrices.

In our case, r = (r_1, ..., r_T) ∈ R^(3×T) and s = (s_1, ..., s_T) ∈ R^(3×T) are two scanpaths that we wish to compare. While the Euclidean distance between each pair of points is usually employed when computing δ(r_i, s_j) for Equation 4, in our scenario that would yield erroneous distances derived from the projection of the 360° image (both if done in 2D over the image, or in 3D with the parameterization described in Section 3.1). We instead use the distance over the surface of a sphere, or spherical distance, and define Δ_sph(r, s) = [δ_sph(r_i, s_j)]_ij ∈ R^(n×m) such that:

    δ_sph(r_i, s_j) = 2 arcsin( (1/2) sqrt( (r_i^x - s_j^x)² + (r_i^y - s_j^y)² + (r_i^z - s_j^z)² ) ),    (5)

leading to our spherical DTW:

    DTW_sph(r, s) = min_A ⟨A, Δ_sph(r, s)⟩.    (6)

We incorporate the spherical DTW into the loss function of the generator (L_G, Equation 2), yielding our final generator loss function L*_G:

    L*_G = L_G + λ E_z[DTW_sph(G(z, y), x)],    (7)

where x is a ground truth scanpath for the conditioning image y, and the weight λ is empirically set to 0.1.
|
275 |
-
While a loss function incorporating DTW (or spherical
|
276 |
-
DTW) is not differentiable, a differentiable version, soft-
|
277 |
-
DTW, has been proposed. We use this soft-DTW in our
|
278 |
-
model; details on it can be found in Section S1 in the sup-
|
279 |
-
plementary material or in the original publication [15].
|
280 |
-
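For reference, the quantities in Equations 4-6 can be computed with a plain dynamic-programming DTW over the spherical distance matrix (a classic, non-differentiable sketch; the soft-DTW actually used for training replaces the hard minimum with a smoothed one):

    import numpy as np

    def spherical_distance_matrix(r, s):
        # Pairwise spherical distances (Eq. 5) between two scanpaths of unit-sphere points.
        diff = r[:, None, :] - s[None, :, :]           # (n, m, 3) chordal differences
        chord = np.linalg.norm(diff, axis=-1)
        return 2.0 * np.arcsin(np.clip(0.5 * chord, 0.0, 1.0))

    def dtw_spherical(r, s):
        # Classic DTW recursion over the spherical cost matrix (Eqs. 4 and 6).
        delta = spherical_distance_matrix(r, s)
        n, m = delta.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = delta[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
        return acc[n, m]

    # Two toy 4-point scanpaths on the unit sphere.
    rng = np.random.default_rng(0)
    r = rng.normal(size=(4, 3)); r /= np.linalg.norm(r, axis=1, keepdims=True)
    s = rng.normal(size=(4, 3)); s /= np.linalg.norm(s, axis=1, keepdims=True)
    print(dtw_spherical(r, s))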
3.4. Model Architecture
|
281 |
-
Both our generator and discriminator are based on a two-
|
282 |
-
branch structure (see Figure 2), with one branch for the con-
|
283 |
-
ditioning image y and the other for the input vector (z in the generator, and x or G(z, y) in the discriminator). The im-
|
284 |
-
age branch extracts features from the 360image, yielding
|
285 |
-
a set of latent features that will be concatenated with the
|
286 |
-
input vector for further processing. Due to the distortion
|
287 |
-
inherent to equirectangular projections, traditional convo-
|
288 |
-
lutional feature extraction strategies are not well suited for
|
289 |
-
360images: They use a kernel window where neighboring
|
290 |
-
relations are established uniformly around a pixel. Instead,
|
291 |
-
we extract features using panoramic (or spherical) convolu-
|
292 |
-
tions [13]. Spherical convolutions are a type of dilated con-
|
293 |
-
volutions where the relations between elements in the im-
|
294 |
-
age are not established in image space, but in a gnomonic,
|
295 |
-
non-distorted space. These spherical convolutions can rep-
|
296 |
-
resent kernels as patches tangent to a sphere where the 360
|
297 |
-
is reprojected.
|
298 |
-
In our problem of scanpath generation, the location of
|
299 |
-
the features in the image is of particular importance. There-
|
300 |
-
fore, to facilitate spatial learning of the network, we use the
|
301 |
-
recently presented CoordConv strategy [28], which gives
|
302 |
-
convolutions access to its own input coordinates by adding
|
303 |
-
extra coordinate channels. We do this by concatenating a
|
304 |
-
CoordConv layer to the input 360image (see Figure 2).
|
305 |
-
This layer also helps stabilize the training process, as shown
|
306 |
-
in Section 4.4.
|
307 |
-
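The CoordConv idea of giving convolutions access to their own input coordinates can be sketched in a few lines (a minimal PyTorch illustration of concatenating coordinate channels to the panorama, not the exact layer used in our implementation):

    import torch
    import torch.nn as nn

    class AddCoords(nn.Module):
        # Concatenate normalized (x, y) coordinate channels to an image tensor.
        def forward(self, img: torch.Tensor) -> torch.Tensor:
            b, _, h, w = img.shape
            ys = torch.linspace(-1.0, 1.0, h, device=img.device).view(1, 1, h, 1).expand(b, 1, h, w)
            xs = torch.linspace(-1.0, 1.0, w, device=img.device).view(1, 1, 1, w).expand(b, 1, h, w)
            return torch.cat([img, xs, ys], dim=1)

    # A panorama batch gains two extra channels before feature extraction.
    panorama = torch.rand(2, 3, 128, 256)
    with_coords = AddCoords()(panorama)   # shape: (2, 5, 128, 256)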
3.5. Dataset and Training Details
|
308 |
-
We train our model using Sitzmann et al.’s [43] dataset,
|
309 |
-
composed of 22 different 360images and a total of 1,980
|
310 |
-
scanpaths from 169 different users. Each scanpath contains
|
311 |
-
gaze information captured during 30 seconds with a binoc-
|
312 |
-
ular eye tracking recorder at 120 Hz. We sample these cap-
|
313 |
-
tured scanpaths at 1 Hz (i.e., T = 30), and reparameterize them (Section 3.1), so that each scanpath is a sequence P = (P_0, ..., P_29) ∈ R^(3×T). Given the relatively small size
|
316 |
-
of the dataset, we perform data augmentation by longitu-
|
317 |
-
dinally shifting the 360images (and adjusting their scan-
|
318 |
-
paths accordingly); specifically, for each image we generate
|
319 |
-
six different variations with random longitudinal shifting.
|
320 |
-
We use 19 of the 22 images in this dataset for training, and
|
321 |
-
reserve three to be part of our test set (more details on the
|
322 |
-
full test set are described in Section 4). With the data aug-
|
323 |
-
mentation process, this yields 114 images in the training set.
|
324 |
-
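The longitudinal-shift augmentation can be sketched as follows (hypothetical helper names; since equirectangular images wrap around in longitude, a horizontal roll plus a matching shift of the gaze longitudes produces a valid new sample):

    import numpy as np

    def shift_panorama_and_scanpath(image, phi, shift_fraction):
        # image: (H, W, 3) equirectangular panorama; phi: (T,) gaze longitudes in [-pi, pi];
        # shift_fraction: horizontal shift as a fraction of the full 360 degrees.
        h, w, _ = image.shape
        shift_px = int(round(shift_fraction * w))
        shifted_image = np.roll(image, shift_px, axis=1)
        # Wrap the shifted longitudes back into [-pi, pi] (assumes longitude grows with column index).
        shifted_phi = (phi + shift_fraction * 2 * np.pi + np.pi) % (2 * np.pi) - np.pi
        return shifted_image, shifted_phi

    # Example: six random variations per panorama, as described above.
    rng = np.random.default_rng(42)
    panorama = rng.random((256, 512, 3))
    phi = rng.uniform(-np.pi, np.pi, size=30)
    variations = [shift_panorama_and_scanpath(panorama, phi, rng.random()) for _ in range(6)]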
During our training process we use the Adam optimizer [21], with constant learning rates l_G = 10^-4 for the generator and l_D = 10^-5 for the discriminator, both of them with momentum β = (0.5, 0.99). Further training and
|
328 |
-
implementation details can be found in the supplementary
|
329 |
-
material.
|
330 |
-
4. Validation and Analysis
|
331 |
-
We evaluate the quality of the generated scanpaths with
|
332 |
-
respect to the measured, ground truth scanpaths, as well as to other approaches. We also ablate our model to illustrate the contribution of the different design choices.

Figure 3. Results of our model for two different scenes: market and mall from Rai et al.'s dataset [39]. From left to right: 360° image, ground truth sample scanpath, and three scanpaths generated by our model. The generated scanpaths are plausible and focus on relevant parts of the scene, yet they exhibit the diversity expected among different human observers. Please refer to the supplementary material for a larger set of results.

We evaluate our model on two different test sets. First,
|
339 |
-
using the three images from Sitzmann et al.’s dataset [43]
|
340 |
-
left out of the training (Section 3.5): room ,chess androbots .
|
341 |
-
To ensure our model has an ability to extrapolate, we also
|
342 |
-
evaluate it with a different dataset from Rai et al. [39]. This
|
343 |
-
dataset consists of 60 scenes watched by 40 to 42 observers
|
344 |
-
for 25 seconds. Thus, when comparing to their ground truth,
|
345 |
-
we cut our 30-second scanpaths to the maximum length of
|
346 |
-
their data. Please also refer to the supplementary material
|
347 |
-
for more details on the test set, as well as further evaluation
|
348 |
-
and results.
|
349 |
-
4.1. Scanpath Similarity Metrics
|
350 |
-
Our evaluation is both quantitative and qualitative. Eval-
|
351 |
-
uating scanpath similarity is not a trivial task, and a num-
|
352 |
-
ber of metrics have been proposed in the literature, each fo-
|
353 |
-
cused on a different context or aspect of gaze behavior [17].
|
354 |
-
Proposed metrics can be roughly categorized into: (i) di-
|
355 |
-
rect measures based on Euclidean distance; (ii) string-based
|
356 |
-
measures based on string alignment techniques (such as the
|
357 |
-
Levenshtein distance, LEV); (iii) curve similarity methods;
|
358 |
-
(iv) metrics from time-series analysis (like DTW, on which
|
359 |
-
our loss function is based); and (v) metrics from recurrence
|
360 |
-
analysis ( e.g., recurrence measure REC and determinism
|
361 |
-
measure DET). We refer the reader to supplementary mate-
|
362 |
-
rial and the review by Fahimi and Bruce [17] for an in-depth
|
363 |
-
explanation and comparison of existing metrics. Here, we
|
364 |
-
include a subset of metrics that take into account both the
|
365 |
-
position and the ordering of the points (namely LEV and
|
366 |
-
DTW), and two metrics from recurrence analysis (REC and
|
367 |
-
DET), which have been reported to be discriminative in
|
368 |
-
revealing viewing behaviors and patterns when comparing
|
369 |
-
scanpaths. We nevertheless compute our evaluation for the
|
370 |
-
full set of metrics reviewed by Fahimi and Bruce [17] in the
|
371 |
-
supplementary material.
|
372 |
-
Since for each image we have a number of ground truthscanpaths, and a set of generated scanpaths, we compute
|
373 |
-
each similarity metric for all possible pairwise comparisons
|
374 |
-
(each generated scanpath against each of the ground truth
|
375 |
-
scanpaths), and average the result. In order to provide an
|
376 |
-
upper baseline for each metric, we also compute the human
|
377 |
-
baseline ( Human BL ) [57], which is obtained by comparing
|
378 |
-
each ground truth scanpath against all the other ground truth
|
379 |
-
ones, and averaging the results. In a similar fashion, we
|
380 |
-
compute a lower baseline based on sampling gaze points
|
381 |
-
randomly over the image ( Random BL ).
|
382 |
-
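This evaluation protocol reduces to a few lines of code (a hedged sketch; `metric` stands for any of the scanpath similarity measures above, for instance the spherical DTW defined earlier):

    import numpy as np

    def mean_pairwise(metric, generated, ground_truth):
        # Average a metric over every generated-vs-ground-truth pair for one image.
        return float(np.mean([metric(g, r) for g in generated for r in ground_truth]))

    def human_baseline(metric, ground_truth):
        # Compare each ground-truth scanpath against all the other ones and average.
        vals = [metric(a, b) for i, a in enumerate(ground_truth)
                for j, b in enumerate(ground_truth) if i != j]
        return float(np.mean(vals))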
4.2. Results
|
383 |
-
Qualitative results of our model can be seen in Figures 3
|
384 |
-
and 1 for scenes with different layouts. Figure 3, from left
|
385 |
-
to right, shows: the scene, a sample ground truth (captured)
|
386 |
-
scanpath, and three of our generated scanpaths sampled
|
387 |
-
from the generator. Our model is able to produce plausible,
|
388 |
-
coherent scanpaths that focus on relevant parts of the scene.
|
389 |
-
In the generated scanpaths we observe regions where the
|
390 |
-
user focuses (points of a similar color clustered together), as
|
391 |
-
well as more exploratory behavior. The generated scanpaths
|
392 |
-
are diverse but plausible, as one would expect if different
|
393 |
-
users watched the scene (the supplementary material con-
|
394 |
-
tains more ground truth, measured scanpaths, showing this
|
395 |
-
diversity). Further, our model is not affected by the inherent
|
396 |
-
distortions of the 360image. This is apparent, for exam-
|
397 |
-
ple, in the market scene: The central corridor, narrow and
|
398 |
-
seemingly featureless, is observed by generated virtual ob-
|
399 |
-
servers . Quantitative results in Table 1 further show that our
|
400 |
-
generated scanpaths are close to the human baseline ( Hu-
|
401 |
-
man BL ), both in the test set from Sitzmann et al.’s dataset,
|
402 |
-
and over Rai et al.’s dataset. A value close to Human BL in-
|
403 |
-
dicates that the generated scanpaths are as valid or as plau-
|
404 |
-
sible as the captured, ground truth ones. Note that obtaining
|
405 |
-
a value lower than Human BL is possible, if the generated
|
406 |
-
scanpaths are on average closer to the ground truth ones,
|
407 |
-
and exhibit less variance.
|
408 |
-
Since our model is generative, it can generate as manyFigure 4. Qualitative comparison to previous methods for five different scenes from Rai et al.’s dataset. In each row, from left to right:
|
409 |
-
360image, and a sample scanpath obtained with our method, PathGAN [3], SaltiNet [4], and Zhu et al.’s [62]. Note that, in the case
|
410 |
-
of PathGAN, we are including the results directly taken from their paper, thus the different visualization. Our method produces plausible
|
411 |
-
scanpaths focused on meaningful regions, in comparison with other techniques. Please see text for details, and the supplementary material
|
412 |
-
for a larger set of results, also including ground truth scanpaths.
|
413 |
-
scanpaths as needed and model many different potential ob-
|
414 |
-
servers. We perform our evaluations on a random set of 100
|
415 |
-
scanpaths generated by our model. We choose this num-
|
416 |
-
ber to match the number of generated scanpaths available
|
417 |
-
for competing methods, to perform a fair comparison. Nev-
|
418 |
-
ertheless, we have analyzed the stability of our generative
|
419 |
-
model by computing our evaluation metrics for a variable
|
420 |
-
number of generated scanpaths: Our results are very sta-
|
421 |
-
ble with the number of scanpaths (please see Table 2 in the
|
422 |
-
supplementary material).
|
423 |
-
4.3. Comparison to Other Methods
|
424 |
-
We compare ScanGAN360 to three methods devoted to
|
425 |
-
scanpath prediction in 360images: SaltiNet-based scan-
|
426 |
-
path prediction [2, 4] (we will refer to it as SaltiNet in the
|
427 |
-
following), PathGAN [3] and Zhu et al.’s method [62]. For
|
428 |
-
comparisons to SaltiNet we use the public implementation
|
429 |
-
of the authors, while the authors of Zhu et al. kindly pro-
|
430 |
-
vided us with the results of their method for the images from
|
431 |
-
Rai et al.’s dataset (but not for Sitzmann et al.’s); we there-
|
432 |
-
fore have both qualitative (Figure 4) and quantitative (Ta-
|
433 |
-
ble 1) comparisons to these two methods. In the case of
|
434 |
-
PathGAN, no model or implementation could be obtained,
|
435 |
-
so we compare qualitatively to the results extracted from
|
436 |
-
their paper (Figure 4, third column).
|
437 |
-
Table 1 shows that our model consistently provides re-sults closer to the ground truth scanpaths than Zhu et al.’s
|
438 |
-
and SaltiNet. The latter is based on a saliency-sampling
|
439 |
-
strategy, and thus these results indicate that indeed the tem-
|
440 |
-
poral information learnt by our model is relevant for the fi-
|
441 |
-
nal result. Our model, as expected, also amply surpasses the
|
442 |
-
random baseline. In Figure 4 we see how PathGAN scan-
|
443 |
-
paths fail to focus on the relevant parts of the scene (see,
|
444 |
-
e.g.,snow orsquare ), while SaltiNet exhibits a somewhat
|
445 |
-
erratic behavior, with large displacements and scarce areas
|
446 |
-
of focus ( train ,snow orsquare show this). Finally, Zhu
|
447 |
-
et al.’s approach tends to place gaze points at high contrast
|
448 |
-
borders (see, e.g.,square orresort ).
|
449 |
-
4.4. Ablation Studies
|
450 |
-
We also evaluate the contribution of different elements of
|
451 |
-
our model to the final result. For this purpose, we analyze
|
452 |
-
a standard GAN strategy ( i.e., using only the discriminative
|
453 |
-
loss), as the baseline. Figure 5 shows how the model is un-
|
454 |
-
able to learn both the temporal nature of the scanpaths, and
|
455 |
-
their relation to image features. We also analyze the results
|
456 |
-
yielded by adding a term based on the MSE between the
|
457 |
-
ground truth and the generated scanpath to the loss function,
|
458 |
-
instead of our DTW sphterm (the only previous GAN ap-
|
459 |
-
proach for scanpath generation [3] relied on MSE for their
|
460 |
-
loss term). The MSE only measures a one-to-one corre-
|
461 |
-
spondence between points, considering for each time instantFigure 5. Qualitative ablation results. From top to bottom : ba-
|
462 |
-
sic GAN strategy (baseline); adding MSE to the loss function of
|
463 |
-
the former; our approach; and an example ground truth scanpath.
|
464 |
-
These results illustrate the need for our DTW sphloss term.
|
465 |
-
Table 1. Quantitative comparisons of our model against
|
466 |
-
SaltiNet [4] and Zhu et al. [62]. We also include upper (human
|
467 |
-
baseline, Human BL ) and lower (randomly sampling over the im-
|
468 |
-
age, Random BL ) baselines. Arrows indicate whether higher or
|
469 |
-
lower is better, and boldface highlights the best result for each met-
|
470 |
-
ric (excluding the ground truth Human BL). SaltiNet is trained with Rai et al.'s dataset; we include it for completeness.

Dataset                       | Method            | LEV↓  | DTW↓    | REC↑ | DET↑
Test set from Sitzmann et al. | Random BL         | 52.33 | 2370.56 | 0.47 | 0.93
                              | SaltiNet          | 48.00 | 1928.85 | 1.45 | 1.78
                              | ScanGAN360 (ours) | 46.15 | 1921.95 | 4.82 | 2.32
                              | Human BL          | 43.11 | 1843.72 | 7.81 | 4.07
Rai et al.'s dataset          | Random BL         | 43.11 | 1659.75 | 0.21 | 0.94
                              | SaltiNet          | 48.07 | 1928.41 | 1.43 | 1.81
                              | Zhu et al.        | 43.55 | 1744.20 | 1.64 | 1.50
                              | ScanGAN360 (ours) | 40.99 | 1549.59 | 1.72 | 1.87
                              | Human BL          | 39.59 | 1495.55 | 2.33 | 2.31
|
484 |
-
a single point, unrelated to the rest. This hinders the learn-
|
485 |
-
ing process, leading to non-plausible results (Figure 5, sec-
|
486 |
-
ond row). This behavior is corrected when our DTW sphis
|
487 |
-
added instead, since it is specifically targeted for time series
|
488 |
-
data and takes into account the actual spatial structure of the
|
489 |
-
data (Figure 5, third row). The corresponding quantitative
|
490 |
-
measures over our test set from Sitzmann et al. can be found
|
491 |
-
in Table 2. We also analyze the effect of removing the Co-
|
492 |
-
ordConv layer from our model: Results in Table 2 indicate
|
493 |
-
that the use of CoordConv does have a positive effect on the
|
494 |
-
results, helping learn the transformation from the input to
|
495 |
-
the target domain.

Table 2. Quantitative results of our ablation study. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). Please refer to the text for details on the ablated models.

Metric                    | LEV↓  | DTW↓    | REC↑ | DET↑
Basic GAN                 | 49.42 | 2088.44 | 3.01 | 1.74
MSE                       | 48.90 | 1953.21 | 2.41 | 1.73
DTW_sph (no CoordConv)    | 47.82 | 1988.38 | 3.67 | 1.99
DTW_sph (ours)            | 46.19 | 1925.20 | 4.50 | 2.33
Human Baseline (Human BL) | 43.11 | 1843.72 | 7.81 | 4.07
|
505 |
-
4.5. Behavioral Evaluation
|
506 |
-
While the previous subsections employ well-known met-
|
507 |
-
rics from the literature to analyze the performance of our
|
508 |
-
model, in this subsection we perform a higher-level analysis
|
509 |
-
of its results. We assess whether the behavioral characteris-
|
510 |
-
tics of our scanpaths match those which have been reported
|
511 |
-
from actual users watching 360images.
|
512 |
-
Exploration time Sitzmann et al. [43] measure the explo-
|
513 |
-
ration time as the average time that users took to move their
|
514 |
-
eyes to a certain longitude relative to their starting point,
|
515 |
-
and measure how long it takes for users to fully explore the
|
516 |
-
scene. Figure 6 (left) shows this exploration time, measured
|
517 |
-
by Sitzmann et al. from captured data, for the three scenes
|
518 |
-
from their dataset included in our test set ( room ,chess , and
|
519 |
-
robots ). To analyze whether our generated scanpaths mimic
|
520 |
-
this behavior and exploration speed, we plot the exploration
|
521 |
-
time of our generated scanpaths (Figure 6, center left) for
|
522 |
-
the same scenes and number of scanpaths. We can see how
|
523 |
-
the speed and exploration time are very similar between
|
524 |
-
real and generated data. Individual results per scene can
|
525 |
-
be found in the supplementary material.
|
526 |
-
Fixation bias Similar to the center bias of human eye fix-
|
527 |
-
ations observed in regular images [20], the existence of a
|
528 |
-
Laplacian-like equator bias has been measured in 360im-
|
529 |
-
ages [43]: The majority of fixations fall around the equa-
|
530 |
-
tor, in detriment of the poles. We have evaluated whether
|
531 |
-
the distribution of scanpaths generated by our model also
|
532 |
-
presents this bias. This is to be expected, since the data our
|
533 |
-
model is trained with exhibits it, but is yet another indicator
|
534 |
-
that we have succeeded in learning the ground truth distri-
|
535 |
-
bution. We test this by generating, for each scene, 1,000
|
536 |
-
different scanpaths with our model, and aggregating them
|
537 |
-
over time to produce a pseudo- saliency map, which we term
|
538 |
-
aggregate map . Figure 6 (right) shows this for two scenes
|
539 |
-
in our test set: We can see how this equator bias is indeed
|
540 |
-
present in our generated scanpaths.
|
541 |
-
Inter-observer congruency It is common in the literature
|
542 |
-
analyzing users’ gaze behavior to measure inter-observerFigure 6. Behavioral evaluation. Left: Exploration time for real captured data ( left) and scanpaths generated by our model ( center left ).
|
543 |
-
Speed and exploration time of our scanpaths are on par with that of real users. Center right : ROC curve of our generated scanpaths for
|
544 |
-
each individual test scene (gray), and averaged across scenes (magenta). The faster it converges to the maximum rate, the higher the
|
545 |
-
inter-observer congruency. Right : Aggregate maps for two different scenes, computed as heatmaps from 1,000 generated scanpaths. Our
|
546 |
-
model is able to produce aggregate maps that focus on relevant areas of the scenes and exhibit the equator bias reported in the literature.
|
547 |
-
congruency, often by means of a receiver operating char-
|
548 |
-
acteristic (ROC) curve. We compute the congruency of our
|
549 |
-
“generated observers” through this ROC curve for the three
|
550 |
-
scenes in our test set from the Sitzmann et al. dataset (Fig-
|
551 |
-
ure 6, center right). The curve calculates the ability of the
|
552 |
-
ithscanpath to predict the aggregate map of the correspond-
|
553 |
-
ing scene. Each point in the curve is computed by gener-
|
554 |
-
ating a map containing the top n%most salient regions of
|
555 |
-
the aggregate map (computed without the ithscanpath), and
|
556 |
-
calculating the percentage of gaze points of the ithscanpath
|
557 |
-
that fall into that map. Our ROC curve indicates a strong
|
558 |
-
agreement between our scanpaths, with around 75% of all
|
559 |
-
gaze points falling within 25% of the most salient regions.
|
560 |
-
These values are comparable to those measured in previous
|
561 |
-
studies with captured gaze data [43, 23].
|
562 |
-
Temporal and spatial coherence Our generated scan-
|
563 |
-
paths have a degree of stochasticity, to be able to model the
|
564 |
-
diversity of real human observers. However, human gaze
|
565 |
-
behavior follows specific patterns, and each gaze point is
|
566 |
-
conditioned not only by the features in the scene but also by
|
567 |
-
the previous history of gaze points of the user. If two users
|
568 |
-
start watching a scene in the same region, a certain degree
|
569 |
-
of coherence between their scanpaths is expected, that may
|
570 |
-
diverge more as more time passes. We analyze the temporal
|
571 |
-
coherence of generated scanpaths that start in the same re-
|
572 |
-
gion, and observe that indeed our generated scanpaths fol-
|
573 |
-
low a coherent pattern. Please refer to the supplementary
|
574 |
-
for more information on this part of the analysis.
|
575 |
-
5. Conclusion
|
576 |
-
In summary, we propose ScanGAN360, a conditional
|
577 |
-
GAN approach to generating gaze scanpaths for immersive
|
578 |
-
virtual environments. Our unique parameterization tailored
|
579 |
-
to panoramic content, coupled with our novel usage of a
|
580 |
-
DTW loss function, allow our model to generate scanpaths
|
581 |
-
of significantly higher quality and duration than previousapproaches. We further explore applications of our model:
|
582 |
-
Please refer to the supplementary material for a description
|
583 |
-
and examples of these.
|
584 |
-
Our GAN approach is well suited for the problem of
|
585 |
-
scanpath generation: A single ground truth scanpath does
|
586 |
-
not exist, yet real scanpaths follow certain patterns that
|
587 |
-
are difficult to model explicitly but that are automatically
|
588 |
-
learned by our approach. Note that our model is also very
|
589 |
-
fast and can produce about 1,000 scanpaths per second.
|
590 |
-
This may be a crucial capability for interactive applications:
|
591 |
-
our model can generate virtual observers in real time.
|
592 |
-
Limitations and future work Our model is trained with
|
593 |
-
30-second long scanpaths, sampled at 1 Hz. Although
|
594 |
-
this is significantly longer than most previous approaches
|
595 |
-
[16, 23, 27], exploring different or variable lengths or sam-
|
596 |
-
pling rates remains interesting for future work. When train-
|
597 |
-
ing our model, we focus on learning higher-level aspects of
|
598 |
-
visual behavior, and we do not explicitly enforce low-level
|
599 |
-
ocular movements ( e.g., fixations or saccades). Currently,
|
600 |
-
our relatively low sampling rate prevents us from model-
|
601 |
-
ing very fast dynamic phenomena, such as saccades. Yet,
|
602 |
-
fixation patterns naturally emerge in our results, and future
|
603 |
-
work could explicitly take low-level oculomotor aspects of
|
604 |
-
visual search into account.
|
605 |
-
The model, parameterization, and loss function are tai-
|
606 |
-
lored to 360images. In a similar spirit, a DTW-based loss
|
607 |
-
function could also be applied to conventional 2D images
|
608 |
-
(using an Euclidean distance in 2D instead of our sph), po-
|
609 |
-
tentially leading to better results than current 2D approaches
|
610 |
-
based on mean-squared error.
|
611 |
-
We believe that our work is a timely effort and a first step
|
612 |
-
towards understanding and modeling dynamic aspects of at-
|
613 |
-
tention in 360images. We hope that our work will serve
|
614 |
-
as a basis to advance this research, both in virtual reality
|
615 |
-
and in conventional imagery, and extend it to other scenar-
|
616 |
-
ios, such as dynamic or interactive content, analyzing the
|
617 |
-
influence of the task, including the presence of motion par-allax, or exploring multimodal experiences. We will make
|
618 |
-
our model and training code available in order to facilitate
|
619 |
-
the exploration of these and other possibilities.
|
620 |
-
References
[1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade landing position prediction for gaze-contingent rendering. ACM Transactions on Graphics (TOG), 36(4):1-12, 2017.
[2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and Noel E O'Connor. Saltinet: Scan-path prediction on 360 degree images using saliency volumes. In Proceedings of the IEEE ICCV Workshops, pages 2331-2338, 2017.
[3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and Noel E O'Connor. PathGAN: Visual scanpath prediction with generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0-0, 2018.
[4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and Noel E O'Connor. Scanpath and saliency prediction on 360 degree images. Signal Processing: Image Communication, 69:8-14, 2018.
[5] Wentao Bao and Zhenzhong Chen. Human scanpath prediction based on deep convolutional saccadic model. Neurocomputing, 404:154-164, 2020.
[6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert. Differentiable divergences between time series. arXiv preprint arXiv:2010.08354, 2020.
[7] A. Borji. Boosting bottom-up and top-down visual features for saliency estimation. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Durand, Aude Oliva, and Antonio Torralba. MIT saliency benchmark. http://saliency.mit.edu/, 2019.
[9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over here: Attention-directing composition of manga elements. ACM Trans. Graph., 33(4):1-11, 2014.
[10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and Juan Carlos Niebles. D3TW: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier Deforges. SalGAN360: Visual saliency prediction on 360 degree images with generative adversarial networks. In 2018 IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW), pages 01-04. IEEE, 2018.
[12] Alex Colburn, Michael F Cohen, and Steven Drucker. The role of eye gaze in avatar mediated conversational interfaces. Technical report, Citeseer, 2000.
[13] Benjamin Coors, Alexandru Paul Condurache, and Andreas Geiger. SphereNet: Learning spherical representations for detection and classification in omnidirectional images. In Proc. of the European Conference on Computer Vision (ECCV), pages 518-533, 2018.
[14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. Predicting human eye fixations via an LSTM-based saliency attentive model. IEEE Transactions on Image Processing, 27(10):5142-5154, 2018.
[15] Marco Cuturi and Mathieu Blondel. Soft-DTW: a differentiable loss function for time-series. arXiv preprint arXiv:1703.01541, 2017.
[16] Stephen R Ellis and James Darrell Smith. Patterns of statistical dependency in visual scanning. Eye Movements and Human Information Processing, pages 221-238, 1985.
[17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring scanpath similarity. Behavior Research Methods, pages 1-20, 2020.
[18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and Evian Gordon. Face to face: visual scanpath evidence for abnormal processing of facial expressions in social phobia. Psychiatry Research, 127(1-2):43-53, 2004.
[19] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259, 1998.
[20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Torralba. Learning to predict where humans look. In IEEE ICCV, pages 2106-2113. IEEE, 2009.
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014. Last updated in arXiv in 2017.
[22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias Bethge. DeepGaze II: Reading fixations from deep features trained on object recognition. arXiv preprint arXiv:1610.01563, 2016.
[23] O. Le Meur and T. Baccino. Methods for comparing scanpaths and saliency maps: strengths and weaknesses. Behavior Research Methods, pages 251-266, 2013.
[24] Olivier Le Meur and Zhi Liu. Saccadic model of eye movements for free-viewing condition. Vision Research, 116:152-164, 2015.
[25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very long term field of view prediction for 360-degree video streaming. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pages 297-302. IEEE, 2019.
[26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet. Prediction of the influence of navigation scan-path on perceived quality of free-viewpoint videos. IEEE Journal on Emerging and Sel. Topics in Circ. and Sys., 9(1):204-216, 2019.
[27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu, and Stephen Lin. Semantically-based human scanpath estimation with HMMs. In Proceedings of the IEEE International Conference on Computer Vision, pages 3232-3239, 2013.
[28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Neural Information Processing Systems, pages 9605-9616, 2018.
[29] Y. Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map from images. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Masia, and Ana Serrano. Multimodality in VR: A survey. arXiv preprint arXiv:2101.07906, 2021.
[31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic convolutions for 360° single-image saliency prediction. In CVPR Workshop on CV for AR/VR, 2020.
[32] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa Smolic. SalNet360: Saliency maps for omni-directional images with CNN. Signal Processing: Image Communication, 69:26-34, 2018.
[34] Meinard Müller. Dynamic time warping. Information Retrieval for Music and Motion, pages 69-84, 2007.
[35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your attention is unique: Detecting 360-degree video saliency in head-mounted display for head movement prediction. In Proc. ACM Intern. Conf. on Multimedia, pages 1190-1198, 2018.
[36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E. O'Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-i Nieto. SalGAN: Visual saliency prediction with generative adversarial networks. 2018.
[37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin McGuinness, and Noel E. O'Connor. Shallow and deep convolutional networks for saliency prediction. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B Chan. Directing user attention via visual flow on web designs. ACM Trans. on Graph., 35(6):1-11, 2016.
[39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset of head and eye movements for 360 degree images. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 205-210, 2017.
[40] Kerstin Ruhland, Christopher E Peters, Sean Andrist, Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge Mutlu, and Rachel McDonnell. A review of eye gaze in virtual agents, social robotics and HCI: Behaviour generation, user interaction and perception. In Computer Graphics Forum, volume 34, pages 299-326. Wiley Online Library, 2015.
[41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Navalpakkam, and Dmitry Lagun. GazeGAN: Unpaired adversarial image generation for gaze estimation. arXiv preprint arXiv:1711.09767, 2017.
[42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon Wetzstein, Diego Gutierrez, and Belen Masia. Movie editing and cognitive event segmentation in virtual reality video. ACM Trans. Graph. (SIGGRAPH), 36(4), 2017.
[43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wetzstein. Saliency in VR: How do people explore virtual environments? IEEE Trans. on Vis. and Comp. Graph., 24(4):1633-1642, 2018.
[44] Mikhail Startsev and Michael Dorr. 360-aware saliency estimation with conventional image saliency predictors. Signal Proces.: Image Comm., 69:43-52, 2018.
[45] Yu-Chuan Su and Kristen Grauman. Making 360° video watchable in 2D: Learning videography for click free viewing. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1368-1376. IEEE, 2017.
[46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman. Pano2Vid: Automatic cinematography for watching 360° videos. In Asian Conf. on CV, pages 154-171. Springer, 2016.
[47] Benjamin W Tatler and Benjamin T Vincent. The prominence of behavioural biases in eye guidance. Visual Cognition, 17(6-7):1029-1054, 2009.
[48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne Heikkilä. Stochastic bottom-up fixation prediction and saccade generation. Image and Vision Computing, 31(9):686-693, 2013.
[49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and John M Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review, 113(4):766, 2006.
[50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
[51] LE Vincent and Nicolas Thome. Shape and time distortion loss for training deep time series forecasting models. In Advances in Neural Information Processing Systems, pages 4189-4201, 2019.
[52] Dirk Walther and Christof Koch. Modeling attention to salient proto-objects. Neural Networks, 19:1395-1407, 2006.
[53] Wenguan Wang and Jianbing Shen. Deep visual attention prediction. IEEE Transactions on Image Processing, 27(5):2368-2378, 2017.
[54] W. Wang and J. Shen. Deep visual attention prediction. IEEE Transactions on Image Processing, 27(5):2368-2378, 2018.
[55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali Borji. Salient object detection driven by fixation prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A spherical convolution approach for learning long term viewport prediction in 360 immersive video. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 14003-14040, 2020.
[57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Predicting human saccadic scanpaths based on iterative representation learning. IEEE Transactions on Image Processing, 28(7):3502-3515, 2019.
[58] M. Xu, Y. Song, J. Wang, M. Qiao, L. Huo, and Z. Wang. Predicting head movement in panoramic video: A deep reinforcement learning approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(11):2693-2708, 2019.
[59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3166-3173. IEEE, 2013.
[60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelinsky, and Tamara L Berg. Exploring the role of gaze behavior and object detection in scene understanding. Frontiers in Psychology, 4:917, 2013.
[61] Qi Zhao and Christof Koch. Learning a saliency map using fixated locations in natural scenes. Journal of Vision, 11:9, 2011.
[62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The prediction of head and eye movement for 360 degree images. Signal Processing: Image Communication, 69:15-25, 2018.

Supplementary Material
This document offers additional information and details on the following topics:

• (S1) Extended description of the soft-DTW (differentiable version of DTW) distance metric used in our model.
• (S2) Additional results (scanpaths generated with our method) for different scenes used in our evaluation in the main paper.
• (S3) Additional ground truth scanpaths for the scenes used in our evaluation in the main paper.
• (S4) Further details on our training process.
• (S5) Further details on metrics and evaluation, including a larger set of metrics (which we briefly introduce), and extended analysis.
• (S6) Further details on the behavioral evaluation of our scanpaths.
• (S7) Example applications of our method.
S1. Differentiable Dynamic Time Warping: soft-DTW

One of the key aspects of our framework relies on the addition of a second term to the generator's loss function, based on dynamic time warping [34]. As pointed out in Section 3.3 of the main paper, dynamic time warping (DTW) measures the similarity between two temporal sequences (see Figure 7 and footnote 1; Equation 5 in the main paper gives the original DTW formulation, and Equations 6 and 7 our spherical modification of DTW). However, the original DTW function is not differentiable and is therefore not suitable as a loss function. Instead, we use a differentiable version of it, soft-DTW, which has been recently proposed [15] and used as a loss function in different problems dealing with time series [6, 10, 51].
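For readers unfamiliar with DTW, the following is a minimal illustrative sketch (in NumPy) of the classic DTW recursion; it is not the implementation used in our experiments, and the toy trajectories are invented for the example. The hard minimum taken at every cell of the table is precisely what makes the original measure unsuitable as a training loss.

import numpy as np

def dtw(r, s, dist=lambda a, b: np.linalg.norm(a - b)):
    # Classic dynamic time warping between sequences r (n x d) and s (m x d).
    # D[i, j] holds the cost of the best alignment of r[:i] with s[:j].
    n, m = len(r), len(s)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(r[i - 1], s[j - 1])
            # The hard min over the three predecessor cells is the
            # non-differentiable step that soft-DTW smooths out.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy 2D trajectories that trace roughly the same path at different speeds.
r = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])
s = np.array([[0.0, 0.0], [0.2, 0.0], [0.6, 0.1], [1.0, 0.2]])
print(dtw(r, s))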
Differently from the original DTW formulation (Equation 5 in the main paper), soft-DTW is defined as follows:

$$\text{soft-DTW}^{\gamma}(r, s) = {\min_{A \in \mathcal{A}}}^{\gamma} \, \langle A, \Delta(r, s) \rangle,$$

where, as with traditional DTW, $\Delta(r, s) = [\delta(r_i, s_j)]_{ij} \in \mathbb{R}^{n \times m}$ is a matrix containing the distances $\delta(\cdot, \cdot)$ between each pair of points in $r$ and $s$, $A$ is a binary matrix that accounts for the alignment (or correspondence) between $r$ and $s$, and $\langle \cdot, \cdot \rangle$ is the inner product between both matrices.

Footnote 1: https://databricks.com/blog/2019/04/30/understanding-dynamic-time-warping.html
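As an illustration only, a possible NumPy sketch of this computation is given below. The great-circle pairwise cost, the smoothing parameter gamma = 0.1, and the toy scanpaths are assumptions made for the example and do not correspond to the settings or code used in our experiments.

import numpy as np

def soft_min(values, gamma):
    # Smoothed minimum: equals min(values) when gamma == 0,
    # and -gamma * log(sum(exp(-v / gamma))) otherwise.
    values = np.asarray(values, dtype=float)
    if gamma == 0.0:
        return values.min()
    z = -values / gamma
    z_max = z.max()  # subtract the max for numerical stability
    return -gamma * (z_max + np.log(np.exp(z - z_max).sum()))

def soft_dtw(delta, gamma=0.1):
    # Soft-DTW [15] given a pairwise cost matrix delta (n x m):
    # the classic DTW recursion with the hard min replaced by soft_min.
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = delta[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

def pairwise_spherical(r, s):
    # Great-circle distances between two sequences of unit vectors (n x 3, m x 3).
    dots = np.clip(r @ s.T, -1.0, 1.0)
    return np.arccos(dots)

# Toy scanpaths as unit vectors on the sphere (illustrative only).
r = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
s = np.array([[1, 0, 0], [0.7071, 0.7071, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(soft_dtw(pairwise_spherical(r, s), gamma=0.1))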
Figure 7. Simple visualization of dynamic time warping (DTW) alignment. Instead of assuming a pair-wise strict correspondence, DTW optimizes the alignment between two sequences to minimize their distance.
In our case, $r = (r_1, \ldots, r_T) \in \mathbb{R}^{3 \times T}$ and $s = (s_1, \ldots, s_T) \in \mathbb{R}^{3 \times T}$ are two scanpaths that we wish to compare.

The main difference lies in the replacement of the $\min_A$ operator with the soft-min operator ${\min_A}^{\gamma}$, defined such that

$${\min}^{\gamma}\{a_1, \ldots, a_n\} = \min_{i \le n} a_i \quad \text{when } \gamma = 0,$$