billxbf committed on
Commit
cfe77f2
·
verified ·
1 Parent(s): ced71ef

Delete txt

This view is limited to 50 files because it contains too many changes. See raw diff
txt/2101.04493.txt DELETED
@@ -1,414 +0,0 @@
PVDECONV: POINT-VOXEL DECONVOLUTION FOR AUTOENCODING CAD CONSTRUCTION IN 3D

Kseniya Cherenkova¹², Djamila Aouada¹, Gleb Gusev²
¹SnT, University of Luxembourg  ²Artec3D

ABSTRACT

We propose a Point-Voxel DeConvolution (PVDeConv) module for a 3D data autoencoder. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires the ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the pairs of 3D scans and CAD models. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time compared to state-of-the-art models for 3D data generation.

Index Terms — CC3D, point cloud autoencoder, CAD model generation, Scan2CAD.
1. INTRODUCTION

Recently, deep learning (DL) for 3D data analysis has seen a boost in successful and competitive solutions for segmentation, detection and classification [1], and in real-life applications such as self-driving, robotics, medicine, and augmented reality. In industrial manufacturing, 3D scanning of fabricated parts is an essential step of product quality control, in which the 3D scans of real objects are compared to the original Computer Aided Design (CAD) models. While most consumer solutions for 3D scanning are good enough for capturing the general shape of an object, artifacts can be introduced in the parts of the object that are physically inaccessible for 3D scanning, resulting in the loss of sharp features and fine details.

This paper focuses on recovering scanning artifacts in an autoencoding, data-driven manner. In addition to presenting a new point cloud autoencoder, we introduce a new 3D dataset, referred to as CC3D, which stands for CAD Construction in 3D. We further provide an analysis focused on real 3D scanned data, keeping in mind real-world constraints, i.e., variability, complexity, artifacts, memory and speed requirements. The first two columns in Fig. 1 give some examples from the CC3D data: the CAD model and its 3D scanned version in triangular mesh format. While the most recent existing solutions [2, 3, 4, 5] on 3D data autoencoders mostly focus on low-resolution data configurations (approximately 2500 points), we see it as more beneficial for real data to experiment at higher resolution, since this is what brings the important 3D object details into the big-data learning perspective.

Fig. 1. Examples of CC3D data: from left to right, CAD models, corresponding 3D scans, 10k input point clouds and results of the proposed autoencoder.
Several publicly available datasets related to CAD modelling, such as ModelNet [6], ShapeNet [7], and ABC [8], have been released in recent years. A summary of the features they offer can be found in Table 1. These datasets have mainly boosted deep learning research on 3D point clouds.

Similarly, our CC3D dataset should support research efforts in addressing real-world challenges. Indeed, this dataset provides various 3D scanned objects with their ground-truth CAD models. The models collected in the CC3D dataset are not restricted to any object category or complexity. The 3D scans offer challenging cases of missing data, smoothed geometry and fusion artifacts in the form of varying protrusions and swept holes. Moreover, the resolution of the 3D scans is typically high, with more than 100k faces per mesh.

In summary, the contributions of this paper include: (1) a 3D dataset, CC3D, a collection of 50k+ aligned pairs of meshes, each pair consisting of a CAD model and its virtually 3D scanned counterpart with corresponding scanning artifacts; (2) a CC3D autoencoder architecture on 10k point clouds learned from CC3D data; (3) a Point-Voxel DeConvolution (PVDeConv) block for the decoder part of our model, combining point features at coarse and fine levels of the data.
The remainder of the paper is organized as follows: Section 2 reviews relevant state-of-the-art work in 3D data autoencoding. In Section 3, we give a brief overview of the core components our work is built upon. Section 4 describes the main contributions of this paper in more detail. In Section 5, the results and a comparison with related methods are presented. Section 6 gives the conclusions.
2. RELATED WORK

The choice of DL architecture and 3D data representation is usually defined by existing practices and available datasets for learning [9]. Voxel-based representations pioneered 3D data analysis, applying 3D Convolutional Neural Networks (CNNs) directly on a regular voxel grid [10]. Despite models improved in terms of memory consumption, e.g., [11], their inability to resolve fine object details remains the main limiting factor in practical use.

Other works introduce convolutions directly on graph structures, e.g., [12]. They attempt to generalize DL models to non-Euclidean domains such as graphs and manifolds [13], and offer analogues of pooling/unpooling operations as well [14]. However, they are not applicable for learning on real unconstrained data, as they either require meshes to be registered to a common template, deal inefficiently with meshes of up to several thousand faces, or are specific to segmentation or classification tasks only.

Recent advances in developing efficient architectures for 3D data analysis are mainly related to point cloud based methods [15, 16]. Decoders [17, 2, 18, 3, 19] have made point clouds a highly promising representation for 3D object generation and completion using neural networks. Successful works on generative adversarial networks (GANs), e.g., [20], show the applicability of different GAN models operating on raw point clouds.

In this paper, we comply with the common autoencoder approach, i.e., we use a point cloud encoder to embed the point cloud input, and design a decoder to generate a complete point cloud from the embedding of the encoder.

3. BACKGROUND AND MOTIVATION
We herein present the fundamental building blocks that form the core of this paper, namely the point cloud, the metric defined on it, and the DL backbone. Together, these elements make the CC3D autoencoder perform efficiently on high-resolution 3D data.

A point cloud S can be represented as S = {(p_k, f_k)}, where each p_k holds the 3D coordinates of the k-th input point, f_k is the feature corresponding to it, and the size of f_k defines the dimensionality of the point feature space. Note that while it is straightforward to include auxiliary information (such as point normals) in our architecture, in this paper we exclusively employ the xyz coordinates of p_k as the input data.
We build on Point-Voxel Convolution (PVConv), a memory-efficient architecture for learning on 3D point clouds presented in [21]. To the best of our knowledge, this is the first development of an autoencoder based on PVCNN as the encoder. Briefly, PVConv combines fine-grained feature transformation on points with coarse-grained neighboring feature aggregation in the voxel space of the point cloud. Three basic operations are performed in the coarse branch, namely voxelization, followed by voxel-based 3D convolution, and devoxelization. The point-based branch aggregates the features of each individual point with a multilayer perceptron (MLP), providing high-resolution details. The features from both branches are aggregated into a hidden feature representation.

The formulation of convolution in both the voxel-based and point-based cases is the following:

    y_k = \sum_{x_i \in N(x_k)} K(x_k, x_i) * F(x_i),    (1)

where for each center x_k and its neighborhood N(x_k), the neighboring features F(x_i) are convolved with the kernel K(x_k, x_i). The choice of PVCNN is due to its efficiency in training on high-resolution 3D data; indeed, this makes it a good candidate for working with real-life data. As stated in [21], PVConv combines the advantages of point-based methods, reducing memory consumption, and of voxel-based methods, improving data locality and regularity.
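For readers who want to see the two branches side by side, here is a minimal, hypothetical PyTorch sketch of a PVConv-style block. It is not the authors' implementation: it assumes average-pooling voxelization, nearest-voxel devoxelization (instead of the trilinear interpolation used in [21]), and simple additive fusion of the two branches.

```python
import torch
import torch.nn as nn

class SimplePVConv(nn.Module):
    def __init__(self, in_ch, out_ch, resolution=16):
        super().__init__()
        self.r = resolution
        # coarse branch: volumetric convolution on the voxelized features
        self.voxel_conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))
        # fine branch: shared MLP (pointwise convolution) on the raw points
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True))

    def forward(self, feats, coords):
        # feats: (B, C, N) per-point features; coords: (B, 3, N) normalized to [0, 1]
        B, C, N = feats.shape
        r = self.r
        idx = (coords.clamp(0, 1 - 1e-6) * r).long()            # per-point voxel index
        flat = idx[:, 0] * r * r + idx[:, 1] * r + idx[:, 2]    # (B, N) flattened index
        grid = feats.new_zeros(B, C, r * r * r)
        cnt = feats.new_zeros(B, 1, r * r * r)
        grid.scatter_add_(2, flat.unsqueeze(1).expand(-1, C, -1), feats)      # voxelization
        cnt.scatter_add_(2, flat.unsqueeze(1), torch.ones_like(feats[:, :1]))
        grid = (grid / cnt.clamp(min=1)).view(B, C, r, r, r)                  # mean per voxel
        grid = self.voxel_conv(grid).view(B, -1, r * r * r)                   # coarse 3D conv
        coarse = grid.gather(2, flat.unsqueeze(1).expand(-1, grid.size(1), -1))  # devoxelize
        fine = self.point_mlp(feats)                                          # per-point MLP
        return coarse + fine                                                  # fuse both branches

# usage sketch: block = SimplePVConv(3, 64); y = block(xyz_feats, xyz_normalized)
```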
For the loss function, the Chamfer distance [22] is used to measure the quality of the autoencoder. It is a differentiable metric, invariant to permutation of the points in both the ground-truth and target point clouds, S_G and S, respectively. It is defined as follows:

    d_{CD}(S, S_G) = \sum_{x \in S} \min_{y \in S_G} \|x - y\|^2 + \sum_{y \in S_G} \min_{x \in S} \|x - y\|^2.    (2)

As follows from its definition, no correspondence or equal number of points in S and S_G is required for the computation of d_{CD}, making it possible to work with different resolutions for the encoder and decoder.
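For concreteness, a direct transcription of Eq. (2) could look as follows. This is an illustrative sketch rather than the training code used in the paper; it sums squared Euclidean distances and works for clouds of different sizes.

```python
import torch

def chamfer_distance(S, S_G):
    """S: (N, 3) output points; S_G: (M, 3) ground-truth points; sizes may differ."""
    d = torch.cdist(S, S_G) ** 2                   # (N, M) squared pairwise distances
    return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()

pred = torch.rand(10000, 3)                        # e.g. decoder output at 10k points
gt = torch.rand(2500, 3)                           # ground truth at a different resolution
loss = chamfer_distance(pred, gt)                  # differentiable, permutation-invariant
```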
4. PROPOSED AUTOENCODING OF 3D SCANS TO CAD MODELS
This paper studies the problem of 3D point cloud autoencoding in a deep learning setup and, in particular, the choice of architecture of a 3D point cloud decoder for efficient reconstruction of point clouds sampled from corresponding pairs of 3D scans and CAD models.
4.1. CC3D dataset

The CC3D dataset of 3D CAD models was collected from a free online service for sharing CAD designs [23]. In total, the collected dataset contains 50k+ models in STEP format, unrestricted to any category, with complexity varying from simple to highly detailed designs (see examples in Fig. 1). These CAD models are converted to meshes, and each mesh was virtually scanned using a proprietary 3D scanning pipeline developed by Artec3D [24]. The typical size of the resulting scans is on the order of 100K points and faces, while the meshes converted from CAD models are usually more than an order of magnitude lighter.

In order to illustrate the uniqueness of our dataset, Table 1 summarizes the available CAD-like datasets and the semantic information they provide. Unlike ShapeNet [7] and ModelNet [6], the CC3D dataset is a collection of 3D objects unrestricted to any category, with complexity varying from very basic to highly detailed models. One of the most recent datasets, the ABC dataset [8], would have been a valuable collection for our task due to its size if it had contained 3D scanned models alongside the ground-truth CAD objects. The availability of CAD-3D scan pairings, the high resolution of the meshes and the variability of the models make the CC3D dataset stand out among the alternatives. The CC3D dataset will be shared with the research community.
4.2. CC3D Autoencoder

Our decoder is a modified version of PVCNN, where we cut the final classification/segmentation layer. The proposed PVDeConv structure is depicted in Fig. 2. The fine point-based branch is implemented as a shared transposed MLP, allowing the same number of points to be maintained throughout the autoencoder, while the coarse branch allows the features to be aggregated at different voxel grid resolutions, thus modelling the neighborhood information at different scales.

The PVDeConv block consists of 3D volumetric deconvolutions to aggregate the features, with dropout, batch normalization and a nonlinear activation function after each 3D deconvolution. Features from both branches are fused at the final level and passed through an MLP to produce the output points.

The transposed 3D convolution operator used in PVDeConv multiplies each input value element-wise by a learnable kernel and sums over the outputs from all input feature channels. This operation can be seen as the gradient of 3D convolution, although it is not an actual deconvolution operation.

Dataset          #Models  CAD  Curves  Patches  Semantics  Categories  3D scan
CC3D (ours)      50k+     ✓    ✗       ✗        ✗          ✗           ✓
ABC [8]          1M+      ✓    ✓       ✓        ✗          ✗           ✗
ShapeNet [7]     3M+      ✗    ✗       ✗        ✓          ✓           ✗
ShapeNetCore [7] 51k+     ✗    ✗       ✗        ✓          ✓           ✗
ShapeNetSem [7]  12k      ✗    ✗       ✗        ✓          ✓           ✗
ModelNet [6]     151k+    ✗    ✗       ✗        ✓          ✓           ✗

Table 1. Summary of datasets with CAD-like data. Note that only ABC and CC3D offer CAD models in b-rep (boundary representation) format in addition to triangular meshes.

Fig. 2. Overview of the CC3D autoencoder architecture and the PVDeConv module. The features from the coarse voxel-based and fine point-based branches are fused to be unwrapped to the output point cloud.
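The following is a minimal sketch of what one decoder stage could look like, assuming PyTorch building blocks; the kernel size, stride and dropout rate are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PVDeConvStage(nn.Module):
    def __init__(self, in_ch, out_ch, dropout=0.1):
        super().__init__()
        # coarse branch: 3D transposed convolution (here doubling the voxel resolution),
        # followed by dropout, batch normalization and a nonlinear activation
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.Dropout(dropout), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))
        # fine branch: shared (pointwise) MLP that keeps the number of points fixed
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 1), nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True))

    def forward(self, voxel_feats, point_feats):
        # voxel_feats: (B, C, r, r, r); point_feats: (B, C, N)
        return self.deconv(voxel_feats), self.point_mlp(point_feats)

# usage sketch:
# stage = PVDeConvStage(128, 64)
# coarse, fine = stage(torch.rand(2, 128, 16, 16, 16), torch.rand(2, 128, 10000))
```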
5. EXPERIMENTS

We evaluate the proposed autoencoder by training first on our CC3D dataset, and then on the ShapeNetCore [7] dataset.

5.1. Training on CC3D

Dataset. The CC3D dataset is randomly split into three non-intersecting folds: 80% for training, 10% for testing and 10% for validation. Ground-truth point clouds are generated by uniformly sampling N = 10k points on the CAD model surfaces, while the input point clouds are sampled in the same manner from the corresponding 3D scans of the models. The data is normalized to (0, 1).
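The data preparation described above (uniform sampling of N = 10k points on a mesh surface, followed by normalization) can be sketched as follows. This is an illustrative NumPy version, not the authors' pipeline, and the exact normalization convention is an assumption.

```python
import numpy as np

def sample_surface(vertices, faces, n=10000, seed=0):
    """vertices: (V, 3) float array; faces: (F, 3) int array of a triangle mesh."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())   # area-weighted face choice
    u, v = rng.random((2, n))
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]               # fold samples into the triangle
    pts = v0[idx] + u[:, None] * (v1[idx] - v0[idx]) + v[:, None] * (v2[idx] - v0[idx])
    pts = (pts - pts.min(axis=0)) / (pts.max(axis=0) - pts.min(axis=0)).max()
    return pts                                                     # roughly in the (0, 1) range
```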
Implementation Details. The encoder follows the structure in [21]; the coarse blocks are ((64, 1, 32), (64, 2, 16), (128, 1, 16), 1024), where each triplet describes a voxel-based convolutional PVConv block in terms of number of channels, number of blocks, and voxel resolution. The last number gives the resulting embedding size for the coarse part which, combined with shared MLP cloud blocks = (256, 128), gives a feature embedding size of 1472. The decoder coarse blocks are ((128, 1, 16), (64, 2, 16), (64, 1, 32), 128), where the triplets are PVDeConv blocks concatenated with decoder point-based fine blocks of size (256, 128).

Fig. 3. Results of our autoencoder on CC3D data with 10k points for input and output. The left of each pair of results is the input point cloud of 10k points, the right is the autoencoder reconstruction of 10k points.

Training setup. The autoencoder is trained with the Chamfer loss for 50 epochs on two Quadro P6000 GPUs with batch size 80 in data-parallel mode. The overall training takes approximately 15 hours. The best model is chosen based on the validation set.
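For reference, the reported configuration can be restated compactly as a plain Python dictionary; the key names below are hypothetical, while the values are taken from the text above.

```python
config = {
    "encoder": {
        "pvconv_blocks": [(64, 1, 32), (64, 2, 16), (128, 1, 16)],  # (channels, blocks, voxel res.)
        "coarse_embedding": 1024,
        "point_mlp_blocks": (256, 128),
        "embedding_size": 1472,          # as reported
    },
    "decoder": {
        "pvdeconv_blocks": [(128, 1, 16), (64, 2, 16), (64, 1, 32)],
        "coarse_out": 128,
        "point_fine_blocks": (256, 128),
    },
    "training": {"loss": "Chamfer", "epochs": 50, "batch_size": 80, "gpus": 2},
}
```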
Evaluation. The qualitative results of our autoencoder on the CC3D data are presented in Fig. 3. We notice that the fine details are captured in these challenging cases.

Method            Chamfer distance (×10⁻³)
AtlasNet [2]      1.769
FoldingNet [17]   1.648
PCN [19]          1.472
TopNet [3]        0.972
Ours              0.804

Table 2. CC3D autoencoder results on the ShapeNetCore dataset: comparison against previous works (N = 2.5k).
5.2. Training on ShapeNetCore

To demonstrate the competitive performance of our CC3D autoencoder, we train it on the ShapeNetCore dataset following the train/test/val split of [3], with the number of sampled points N = 2500 for a fair comparison. Since we do not have scanned models for the ShapeNet data, we add 3% Gaussian noise to each point's location. The rest of the training setup is replicated from the CC3D configuration. The final metric is the mean Chamfer distance averaged per model across all classes. The numbers for the other methods are reported from [3]. The results of the evaluation of our method against state-of-the-art methods are shown in Table 2. We note that our result surpasses the previous works by a significant margin. Qualitative examples on ShapeNetCore data are given in Fig. 5. The distribution of distances given in Fig. 4 implies that the CC3D dataset presents advanced challenges for our autoencoder, where it performs at 1.26×10⁻³ average Chamfer distance, while it reaches 0.804×10⁻³ on ShapeNetCore.

Fig. 4. Chamfer distance distribution for our autoencoder. On the CC3D test set with point clouds of size N = 10k, the mean Chamfer distance is 1.26×10⁻³ with a standard deviation of 0.794×10⁻³. On the ShapeNetCore test set with N = 2.5k, it is 0.804×10⁻³ with a standard deviation of 0.766×10⁻³.

Fig. 5. Results of our autoencoder on ShapeNetCore data. The top row is the input 2.5k point clouds, the bottom is the reconstruction of our autoencoder.
6. CONCLUSIONS

In this work, we proposed a Point-Voxel Deconvolution (PVDeConv) block for fast and efficient deconvolution on 3D point clouds. It was used in combination with a new dataset, CC3D, for autoencoding 3D scans to their corresponding synthetic CAD models. The CC3D dataset offers pairs of CAD models and 3D scans, totaling 50k+ objects. Our CC3D autoencoder on point clouds is memory and time efficient. Furthermore, it demonstrates superior results compared to existing methods on ShapeNet data. As future work, different types of losses will be investigated to improve the sharpness of edges, such as the quadric loss [5]. Testing variants of the CC3D autoencoder with different configurations of stacked PVConv and PVDeConv layers will also be considered. Finally, we believe that the CC3D dataset itself could assist in the analysis of real 3D scanned data with deep learning methods.

7. REFERENCES
[1] Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun, "Deep learning for 3d point clouds: A survey," ArXiv, vol. abs/1912.12033, 2019.
[2] Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry, "AtlasNet: A papier-mâché approach to learning 3d surface generation," CoRR, vol. abs/1802.05384, 2018.
[3] Lyne P. Tchapmi, Vineet Kosaraju, S. Hamid Rezatofighi, Ian Reid, and Silvio Savarese, "TopNet: Structural point cloud decoder," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[4] Isaak Lim, Moritz Ibing, and Leif Kobbelt, "A convolutional decoder for point clouds using adaptive instance normalization," CoRR, vol. abs/1906.11478, 2019.
[5] Nitin Agarwal, Sung-Eui Yoon, and M. Gopi, "Learning embedding of 3d models with quadric loss," CoRR, vol. abs/1907.10250, 2019.
[6] Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and J. Xiao, "3d ShapeNets: A deep representation for volumetric shapes," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1912–1920.
[7] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu, "ShapeNet: An information-rich 3d model repository," 2015.
[8] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo, "ABC: A big CAD model dataset for geometric deep learning," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[9] Eman Ahmed, Alexandre Saint, Abd El Rahman Shabayek, Kseniya Cherenkova, Rig Das, Gleb Gusev, Djamila Aouada, and Björn E. Ottersten, "Deep learning advances on different 3d data representations: A survey," ArXiv, vol. abs/1808.01462, 2018.
[10] D. Maturana and S. Scherer, "VoxNet: A 3d convolutional neural network for real-time object recognition," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2015, pp. 922–928.
[11] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger, "OctNet: Learning deep 3d representations at high resolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[12] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black, "Generating 3D faces using convolutional mesh autoencoders," in European Conference on Computer Vision (ECCV), 2018, pp. 725–741.
[13] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, "Geometric deep learning: Going beyond euclidean data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, July 2017.
[14] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or, "MeshCNN: A network with an edge," ACM Transactions on Graphics (TOG), vol. 38, no. 4, pp. 90, 2019.
[15] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," CoRR, vol. abs/1706.02413, 2017.
[16] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon, "Dynamic graph CNN for learning on point clouds," ACM Trans. Graph., vol. 38, no. 5, Oct. 2019.
[17] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian, "FoldingNet: Interpretable unsupervised learning on 3d point clouds," ArXiv, vol. abs/1712.07262, 2017.
[18] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari, "3d point-capsule networks," CoRR, vol. abs/1812.10775, 2018.
[19] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert, "PCN: Point completion network," in 3D Vision (3DV), 2018 International Conference on, 2018.
[20] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabás Póczos, and Ruslan Salakhutdinov, "Point cloud GAN," CoRR, vol. abs/1810.05795, 2018.
[21] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han, "Point-voxel CNN for efficient 3d deep learning," in Advances in Neural Information Processing Systems, 2019.
[22] Haoqiang Fan, Hao Su, and Leonidas Guibas, "A point set generation network for 3D object reconstruction from a single image," in CVPR, July 2017, pp. 2463–2471.
[23] "3D ContentCentral," https://www.3dcontentcentral.com, Accessed: 2020-02-02.
[24] "Artec3D," https://www.artec3d.com/, Accessed: 2020-02-02.
txt/2101.07621.txt DELETED
@@ -1,1082 +0,0 @@
arXiv:2101.07621v2 [cs.GT] 29 May 2021

Trading Transforms of Non-weighted Simple Games and Integer Weights of Weighted Simple Games*

Akihiro Kawana†   Tomomi Matsui‡

June 1, 2021

Abstract

This study investigates simple games. A fundamental research question in this field is to determine necessary and sufficient conditions for a simple game to be a weighted majority game. Taylor and Zwicker (1992) showed that a simple game is non-weighted if and only if there exists a trading transform of finite size. They also provided an upper bound on the size of such a trading transform, if it exists. Gvozdeva and Slinko (2011) improved that upper bound; their proof employed a property of linear inequalities demonstrated by Muroga (1971). In this study, we provide a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof employs Farkas' lemma (1894), and yields an improved upper bound on the size of a trading transform.

We also discuss an integer-weight representation of a weighted simple game, improving the bounds obtained by Muroga (1971). We show that our bound on the quota is tight when the number of players is less than or equal to five, based on the computational results obtained by Kurz (2012).

Furthermore, we discuss the problem of finding an integer-weight representation under the assumption that we have minimal winning coalitions and maximal losing coalitions. In particular, we show the performance of a rounding method.

Lastly, we address roughly weighted simple games. Gvozdeva and Slinko (2011) showed that a given simple game is not roughly weighted if and only if there exists a potent certificate of non-weightedness. We give an upper bound on the length of a potent certificate of non-weightedness. We also discuss an integer-weight representation of a roughly weighted simple game.

* A preliminary version of this paper was presented at the Seventh International Workshop on Computational Social Choice (COMSOC-2018), Rensselaer Polytechnic Institute, Troy, NY, USA, 25–27 June 2018.
† Graduate School of Engineering, Tokyo Institute of Technology
‡ Graduate School of Engineering, Tokyo Institute of Technology
1 Introduction

A simple game consists of a pair G = (N, W), where N is a finite set of players and W ⊆ 2^N is an arbitrary collection of subsets of N. Throughout this paper, we denote |N| by n. Usually, the property

    (monotonicity): if S' ⊇ S ∈ W, then S' ∈ W,    (1)

is assumed. Subsets in W are called winning coalitions. We denote 2^N \ W by L, and subsets in L are called losing coalitions. A simple game (N, W) is said to be weighted if there exist a weight vector w ∈ R^N and q ∈ R satisfying the following property:

    (weightedness): for any S ⊆ N, S ∈ W if and only if \sum_{i ∈ S} w_i ≥ q.    (2)

Previous research established necessary and sufficient conditions that guarantee the weightedness of a simple game. [Elgot, 1961] and [Chow, 1961] investigated the theory of threshold logic and showed a condition for weightedness in terms of asummability. [Muroga, 1971] proved the sufficiency of asummability based on the theory of linear inequality systems and discussed some variations of these results in cases of a few variables. [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999] independently obtained necessary and sufficient conditions in terms of a trading transform. A trading transform of size j is a coalition sequence (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j), which may contain repetitions of coalitions, satisfying the condition: for all p ∈ N, |{i | p ∈ X_i}| = |{i | p ∈ Y_i}|. A simple game is called k-trade robust if there is no trading transform of size j satisfying 1 ≤ j ≤ k, X_1, X_2, ..., X_j ∈ W, and Y_1, Y_2, ..., Y_j ∈ L. A simple game is called trade robust if it is k-trade robust for all positive integers k.

Taylor and Zwicker showed that a given simple game G with n players is weighted if and only if G is 2^{2^n}-trade robust. In 2011, [Gvozdeva and Slinko, 2011] showed that a given simple game G is weighted if and only if G is (n+1)n^{n/2}-trade robust. [Freixas and Molinero, 2009b] proposed a variant of trade robustness, called invariant-trade robustness, which determines whether a simple game is weighted. The relations between the results in threshold logic and simple games are clarified in [Freixas et al., 2016, Freixas et al., 2017].
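As an illustration of the defining condition of a trading transform, the following small sketch (not part of the original paper) checks that every player appears in the X's exactly as often as in the Y's.

```python
from collections import Counter

def is_trading_transform(X, Y):
    """X, Y: sequences of coalitions (frozensets of players) of equal length j."""
    if len(X) != len(Y):
        return False
    count_X = Counter(p for S in X for p in S)
    count_Y = Counter(p for S in Y for p in S)
    return count_X == count_Y

# Example: two coalitions swap players 2 and 3, so the multiset of memberships is preserved.
X = [frozenset({1, 2}), frozenset({3, 4})]
Y = [frozenset({1, 3}), frozenset({2, 4})]
print(is_trading_transform(X, Y))   # True
```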
In Section 2, we show that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value of determinants of (n+1)×(n+1) 0–1 matrices. It is well known that α_{n+1} ≤ (n+2)^{(n+2)/2} (1/2)^{n+1}.

Our definition of a weighted simple game allows for arbitrary real weights. However, any weighted simple game can be represented by integer weights (e.g., see [Freixas and Molinero, 2009a]). An integer-weight representation of a weighted simple game consists of an integer vector w ∈ Z^N and some q ∈ Z satisfying the weightedness property (2). [Isbell, 1956] found an example of a weighted simple game with 12 players without a unique minimum-sum integer-weight representation. Examples for 9, 10, or 11 players are given in [Freixas and Molinero, 2009a, Freixas and Molinero, 2010]. In the field of threshold logic, examples of threshold functions requiring large weights are discussed by [Myhill and Kautz, 1961, Muroga, 1971, Håstad, 1994]. Some previous studies enumerate (minimal) integer-weight representations of simple games with a small number of players (e.g., [Muroga et al., 1962, Winder, 1965, Muroga et al., 1970, Krohn and Sudhölter, 1995]). For the case of n = 9 players, refer to [Kurz, 2012]. In general, [Muroga, 1971] (proof of Theorem 9.3.2.1) showed that (under the monotonicity property (1) and ∅ ∉ W ∋ N) every weighted simple game has an integer-weight representation satisfying 0 ≤ w_i ≤ α_n ≤ (n+1)^{(n+1)/2} (1/2)^n (∀i ∈ N) and 0 ≤ q ≤ nα_n ≤ n(n+1)^{(n+1)/2} (1/2)^n simultaneously. Here, α_n denotes the maximal value of determinants of n×n 0–1 matrices. [Wang and Williams, 1991] discussed Boolean functions that require more general surfaces to separate their true vectors from false vectors. [Hansen and Podolskii, 2015] investigates the complexity of computing Boolean functions by polynomial threshold functions. [Freixas, 2021] discusses a point-set-additive pseudo-weighting for a simple game, which assigns weights directly to coalitions.

In Section 3, we slightly improve Muroga's result and show that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w^⊤) satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ \sum_{i ∈ N} w_i ≤ 2α_{n+1} − 1 simultaneously. Based on the computational results of [Kurz, 2012], we also demonstrate the tightness of our bound on the quota when n ≤ 5.
For a family of minimal winning coalitions, [Peled and Simeone, 1985] proposed a polynomial-time algorithm for checking the weightedness of a given simple game. They also showed that for weighted simple games represented by minimal winning coalitions, all maximal losing coalitions can be computed in polynomial time. When we have the minimal winning coalitions and the maximal losing coalitions, there exists a linear inequality system whose solution gives a weight vector w ∈ R^N and q ∈ R satisfying property (2). However, it is less straightforward to find an integer-weight representation, as the problem transforms from linear programming to integer programming.

In Section 4, we address the problem of finding an integer-weight representation under the assumption that we have the minimal winning coalitions and the maximal losing coalitions. We show that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by at most (2 − √2)n + (√2 − 1).
A simple game G = (N, W) is called roughly weighted if there exist a non-negative vector w ∈ R^N_+ and a real number q ∈ R, not all equal to zero ((q; w^⊤) ≠ 0^⊤), such that for any S ⊆ N the condition \sum_{i ∈ S} w_i < q implies S ∉ W, and \sum_{i ∈ S} w_i > q implies S ∈ W. We say that (q; w^⊤) is a rough voting representation for G. Roughly weighted simple games were initially introduced by [Baugh, 1970]. [Muroga, 1971] (p. 208) studied them under the name of pseudothreshold functions. [Taylor and Zwicker, 1999] discussed roughly weighted simple games and constructed several examples. [Gvozdeva and Slinko, 2011] developed a theory of roughly weighted simple games. A trading transform (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j) with all coalitions X_1, X_2, ..., X_j winning and Y_1, Y_2, ..., Y_j losing is called a certificate of non-weightedness. This certificate is said to be potent if the grand coalition N is among X_1, X_2, ..., X_j and the empty coalition is among Y_1, Y_2, ..., Y_j. [Gvozdeva and Slinko, 2011] showed that under the monotonicity property (1) and ∅ ∉ W ∋ N, a given simple game G is not roughly weighted if and only if there exists a potent certificate of non-weightedness whose length is less than or equal to (n+1)n^{n/2}. Further research on roughly weighted simple games appears in [Gvozdeva et al., 2013, Freixas and Kurz, 2014, Hameed and Slinko, 2015].

In Section 5, we show that (under the monotonicity property (1) and ∅ ∉ W ∋ N) the length of a potent certificate of non-weightedness is less than or equal to 2α_{n+1}, if it exists. We also show that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w^⊤) of rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n and 0 ≤ \sum_{i ∈ N} w_i ≤ 2α_n.
2 Trading Transforms of Non-weighted Simple Games

In this section, we discuss the size of a trading transform that guarantees the non-weightedness of a given simple game. Throughout this section, we do not need to assume the monotonicity property (1). First, we introduce a linear inequality system for determining the weightedness of a given simple game. For any nonempty family of player subsets ∅ ≠ \mathcal{N} ⊆ 2^N, we introduce a 0–1 matrix A(\mathcal{N}) = (a(\mathcal{N})_{Si}) whose rows are indexed by the subsets S ∈ \mathcal{N} and whose columns are indexed by the players i ∈ N, defined by

    a(\mathcal{N})_{Si} = 1 if i ∈ S, and 0 otherwise.
A given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P1:  \begin{pmatrix} A(W) & \mathbf{1} & \mathbf{0} \\ -A(L) & -\mathbf{1} & -\mathbf{1} \end{pmatrix}
         \begin{pmatrix} w \\ -q \\ \varepsilon \end{pmatrix} \ge \mathbf{0},
         \qquad \varepsilon > 0,

where 0 (1) denotes a zero vector (all-one vector) of appropriate dimension.
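For small games, the feasibility of P1 can be tested directly with an off-the-shelf LP solver. The sketch below (an illustration, not part of the paper) fixes ε = 1, which is possible because P1 is homogeneous in (w, −q, ε), enumerates all 2^n coalitions, and uses scipy.optimize.linprog as a feasibility oracle.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def incidence(coalitions, n):
    """0-1 matrix with one row per coalition and one column per player."""
    A = np.zeros((len(coalitions), n))
    for r, S in enumerate(coalitions):
        A[r, list(S)] = 1.0
    return A

def is_weighted(n, winning):
    all_coalitions = [frozenset(c) for k in range(n + 1)
                      for c in combinations(range(n), k)]
    losing = [S for S in all_coalitions if S not in winning]
    AW, AL = incidence(list(winning), n), incidence(losing, n)
    # Variables (w_1, ..., w_n, q).  P1 with eps = 1:
    #   A(W) w - q 1 >= 0    ->   -A(W) w + q <= 0
    #   A(L) w <= (q - 1) 1  ->    A(L) w - q <= -1
    A_ub = np.vstack([np.hstack([-AW, np.ones((len(AW), 1))]),
                      np.hstack([AL, -np.ones((len(AL), 1))])])
    b_ub = np.concatenate([np.zeros(len(AW)), -np.ones(len(AL))])
    res = linprog(np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.status == 0      # 0: feasible optimum found

maj = {frozenset(c) for k in (2, 3) for c in combinations(range(3), k)}
print(is_weighted(3, maj))      # True: 2-out-of-3 majority is weighted

# A classic non-weighted game: {0,1} and {2,3} win (with all supersets), nothing else.
base = [frozenset({0, 1}), frozenset({2, 3})]
wins = {frozenset(c) for k in range(5) for c in combinations(range(4), k)
        if any(b <= frozenset(c) for b in base)}
print(is_weighted(4, wins))     # False
```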
Farkas' Lemma [Farkas, 1902] states that P1 is infeasible if and only if the following system is feasible:

    D1:  \begin{pmatrix} A(W)^\top & -A(L)^\top \\ \mathbf{1}^\top & -\mathbf{1}^\top \\ \mathbf{0}^\top & -\mathbf{1}^\top \end{pmatrix}
         \begin{pmatrix} x \\ y \end{pmatrix}
         = \begin{pmatrix} \mathbf{0} \\ 0 \\ -1 \end{pmatrix},
         \qquad x \ge \mathbf{0}, \; y \ge \mathbf{0}.

For simplicity, we denote D1 by A_1 z = c, where

    A_1 = \begin{pmatrix} A(W)^\top & -A(L)^\top \\ \mathbf{1}^\top & -\mathbf{1}^\top \\ \mathbf{0}^\top & -\mathbf{1}^\top \end{pmatrix},
    \quad z = \begin{pmatrix} x \\ y \end{pmatrix},
    \quad c = \begin{pmatrix} \mathbf{0} \\ 0 \\ -1 \end{pmatrix}.
Subsequently, we assume that D1 is feasible. Let \tilde{A}_1 z = \tilde{c} be a linear equality system obtained from A_1 z = c by repeatedly removing redundant equalities. A column submatrix \hat{B} of \tilde{A}_1 is called a basis matrix if \hat{B} is a square invertible matrix. Variables corresponding to the columns of \hat{B} are called basic variables, and J_{\hat{B}} denotes the index set of basic variables. A basic solution with respect to \hat{B} is a vector z defined by

    z_i = \hat{z}_i  (i ∈ J_{\hat{B}}),    z_i = 0  (i ∉ J_{\hat{B}}),

where \hat{z} is the vector of basic variables satisfying \hat{z} = \hat{B}^{-1} \tilde{c}. It is well known that if the linear inequality system D1 is feasible, then it has a basic feasible solution.
Let z' be a basic feasible solution of D1 with respect to a basis matrix B. By Cramer's rule, z'_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is the matrix formed by replacing the i-th column of B by \tilde{c}. Because B_i is an integer matrix, det(B) z'_i = det(B_i) is an integer for any i ∈ J_B. Let (x'^⊤, y'^⊤)^⊤ be the vector corresponding to z', and (x*^⊤, y*^⊤) = |det(B)| (x'^⊤, y'^⊤). Cramer's rule states that both x* and y* are integer vectors. The pair of vectors x* and y* satisfies the following conditions:

    A(W)^⊤ x* − A(L)^⊤ y* = |det(B)| (A(W)^⊤ x' − A(L)^⊤ y') = |det(B)| \mathbf{0} = \mathbf{0},
    \sum_{S ∈ W} x*_S − \sum_{S ∈ L} y*_S = |det(B)| (\mathbf{1}^⊤ x' − \mathbf{1}^⊤ y') = |det(B)| \cdot 0 = 0,
    \sum_{S ∈ L} y*_S = |det(B)| \mathbf{1}^⊤ y' = |det(B)|,
    x* = |det(B)| x' ≥ \mathbf{0},  and  y* = |det(B)| y' ≥ \mathbf{0}.

Next, we construct a trading transform corresponding to the pair x* and y*. Let X = (X_1, X_2, ..., X_{|det(B)|}) be a sequence of winning coalitions, where each winning coalition S ∈ W appears in X exactly x*_S times. Similarly, we introduce a sequence Y = (Y_1, Y_2, ..., Y_{|det(B)|}), where each losing coalition S ∈ L appears in Y exactly y*_S times. The above equalities imply that (X; Y) is a trading transform of size |det(B)|. Therefore, we have shown that if D1 is feasible, then the given simple game G = (N, W) is not |det(B)|-trade robust.
Finally, we provide an upper bound on |det(B)|. Let α_n be the maximum of the determinants of n×n 0–1 matrices. For any n×n 0–1 matrix M, it is easy to show that det(M) ≥ −α_n by swapping two rows of M (when n ≥ 2). If a column of B is indexed by a component of x (i.e., indexed by a winning coalition), then each component of that column is either 0 or 1. Otherwise, a column of B is indexed by a component of y (i.e., indexed by a losing coalition) and its components are either 0 or −1. Now, we apply elementary matrix operations to B (see Figure 1). For each column of B indexed by a component of y, we multiply the column by (−1). The resulting matrix, denoted by B', is a 0–1 matrix satisfying |det(B)| = |det(B')|.

As B is a submatrix of A_1, the number of rows (columns) of B, denoted by n', is less than or equal to n + 2. When n' < n + 2, we obtain the desired result: |det(B)| = |det(B')| ≤ α_{n'} ≤ α_{n+1}. If n' = n + 2, then B has a row vector corresponding to the equality 1^⊤ x − 1^⊤ y = 0, which satisfies the condition that each component is either 1 or −1, and thus B' has an all-one row vector. Lemma 2.1 (c1) below states that |det(B)| = |det(B')| ≤ α_{n'−1} ≤ α_{n+1}.

Figure 1: Example of elementary matrix operations for D1.
Lemma 2.1. Let M be an n×n 0–1 matrix, where n ≥ 2.
(c1) If a row (column) vector of M is the all-one vector, then |det(M)| ≤ α_{n−1}.
(c2) If a row (column) vector of M is a 0–1 vector consisting of a unique 0-component and n−1 1-components, then |det(M)| ≤ 2α_{n−1}.

Proof of (c1). Assume that the first column of M is the all-one vector. We apply the following elementary matrix operations to M (see Figure 2). For each column of M except the first, if its first component is equal to 1, then we multiply the column by (−1) and add the all-one column vector. The resulting matrix, denoted by M', is an n×n 0–1 matrix satisfying |det(M)| = |det(M')|, and its first row is a unit vector. Thus, it is obvious that |det(M')| ≤ α_{n−1}.

Figure 2: Example of elementary matrix operations for (c1).

Proof of (c2). Assume that the first column vector of M, denoted by a, contains exactly one 0-component. Obviously, e = 1 − a is a unit vector. Let M_1 and M_e be the pair of matrices obtained from M with the first column replaced by 1 and e, respectively. Then, it is easy to prove that

    |det(M)| = |det(M_1) − det(M_e)| ≤ |det(M_1)| + |det(M_e)| ≤ 2α_{n−1}.

QED
From the above discussion, we obtain the following theorem (without assuming the monotonicity property (1)).

Theorem 2.2. A given simple game G = (N, W) with n players is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} is the maximum of determinants of (n+1)×(n+1) 0–1 matrices.

Proof. If a given simple game is not α_{n+1}-trade robust, then it is not trade robust and, thus, not weighted, as shown by [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999]. We have discussed the inverse implication: if a given simple game G is not weighted, then the linear inequality system P1 is infeasible. Farkas' lemma [Farkas, 1902] implies that D1 is feasible. From the above discussion, we have a trading transform (X_1, ..., X_j; Y_1, ..., Y_j) satisfying j ≤ α_{n+1}, X_1, ..., X_j ∈ W, and Y_1, ..., Y_j ∈ L. QED

Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain Theorem 2.3.

Theorem 2.3. For any positive integer n, α_n ≤ (n+1)^{(n+1)/2} (1/2)^n.

The exact values of α_n for small positive integers n appear in "The On-Line Encyclopedia of Integer Sequences (A003432)" [Sloane et al., 2018] and Table 1.
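For very small n, the values α_n and the bound of Theorem 2.3 can be checked by brute force; the following illustrative sketch (not part of the paper) enumerates all n×n 0–1 matrices.

```python
import itertools
import numpy as np

def alpha(n):
    """Brute-force the maximal determinant of an n x n 0-1 matrix (feasible only for tiny n)."""
    best = 0
    for bits in itertools.product((0, 1), repeat=n * n):
        M = np.array(bits, dtype=float).reshape(n, n)
        best = max(best, abs(int(round(np.linalg.det(M)))))
    return best

for n in range(1, 5):
    bound = (n + 1) ** ((n + 1) / 2) * 0.5 ** n     # Theorem 2.3
    print(n, alpha(n), round(bound, 3))
# Expected alpha_n for n = 1..4: 1, 1, 2, 3 (cf. Table 1 / OEIS A003432).
```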
3 Integer Weights of Weighted Simple Games

This section reviews the integer-weight representations of weighted simple games. Throughout this section, we do not need to assume the monotonicity property (1), except in Table 1.

Theorem 3.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If the given simple game G is weighted, then there exists an integer-weight representation (q; w^⊤) of G satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ \sum_{i ∈ N} w_i ≤ 2α_{n+1} − 1.
Proof. It is easy to show that a given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P2:  A(W) w ≥ q \mathbf{1},
         A(L) w ≤ q \mathbf{1} − \mathbf{1},
         \mathbf{1}^⊤ w ≤ u − 1.

We define

    A_2 = \begin{pmatrix} A(W) & \mathbf{1} & \mathbf{0} \\ -A(L) & -\mathbf{1} & \mathbf{0} \\ -\mathbf{1}^\top & 0 & 1 \end{pmatrix}, \quad
    v = \begin{pmatrix} w \\ -q \\ u \end{pmatrix}, \quad
    d = \begin{pmatrix} \mathbf{0} \\ \mathbf{1} \\ 1 \end{pmatrix},

and denote the inequality system P2 by A_2 v ≥ d.

Subsequently, we assume that P2 is feasible. A non-singular submatrix \hat{B} of A_2 is called a basis matrix. Variables corresponding to the columns of \hat{B} are called basic variables, and J_{\hat{B}} denotes the index set of basic variables. Let d_{\hat{B}} be the subvector of d corresponding to the rows of \hat{B}. A basic solution with respect to \hat{B} is a vector v defined by

    v_i = \hat{v}_i  (i ∈ J_{\hat{B}}),    v_i = 0  (i ∉ J_{\hat{B}}),

where \hat{v} is the vector of basic variables satisfying \hat{v} = \hat{B}^{-1} d_{\hat{B}}. It is well known that if the linear inequality system P2 is feasible, there exists a basic feasible solution.

Let (w'^⊤, −q', u')^⊤ be a basic feasible solution of P2 with respect to a basis matrix B. The assumption ∅ ∉ W implies that 0 ≤ q' − 1 and, thus, −q' ≠ 0. As N ∈ W, we have the inequalities u' − 1 ≥ 1^⊤ w' ≥ q' ≥ 1, which imply that u' ≠ 0. The definition of a basic solution implies that −q and u are basic variables with respect to the basis matrix B. Thus, B has columns corresponding to the basic variables −q and u. The column of B indexed by u is called the last column. As B is invertible, the last column of B is not the zero vector, and thus B includes a row corresponding to the inequality 1^⊤ w ≤ u − 1, which is called the last row (see Figure 3). Here, the number of rows (columns) of B, denoted by n', is less than or equal to n + 2.

For simplicity, we denote the basic feasible solution (w'^⊤, −q', u')^⊤ by v'. By Cramer's rule, v'_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is obtained from B with the column corresponding to variable v_i replaced by d_B. Because B_i is an integer matrix, det(B) v'_i = det(B_i) is an integer for any i ∈ J_B. Cramer's rule states that (w*^⊤, −q*, u*) = |det(B)| (w'^⊤, −q', u') is an integer vector satisfying the following conditions:

    A(W) w* = |det(B)| A(W) w' ≥ |det(B)| q' \mathbf{1} = q* \mathbf{1},
    A(L) w* = |det(B)| A(L) w' ≤ |det(B)| (q' \mathbf{1} − \mathbf{1}) ≤ q* \mathbf{1} − \mathbf{1},  and
    \mathbf{1}^⊤ w* = |det(B)| \mathbf{1}^⊤ w' ≤ |det(B)| (u' − 1) ≤ u* − 1.

From the above, (q*; w*^⊤) is an integer-weight representation of G. As N ∈ W, we obtain 1^⊤ w* ≥ q* = |det(B)| q' ≥ 1.
Figure 3: Examples of elementary matrix operations for P2 (showing the matrices B, B_q, B'_q, B''_q, B_2, B'_2, B''_2, B_u, and B'_u used in the proof).
Now, we discuss the magnitude of |q*| = |det(B_q)|, where B_q is obtained from B with the column corresponding to variable −q replaced by d_B. As the last column of B_q is a unit vector, we delete the last column and the last row from B_q and obtain a matrix B'_q satisfying det(B_q) = det(B'_q). We apply the following elementary matrix operations to B'_q. First, we multiply the column corresponding to variable −q (which is equal to d_B) by (−1). Next, we multiply the rows indexed by losing coalitions by (−1). The resulting matrix, denoted by B''_q, is 0–1 valued and satisfies the following condition:

    |q*| = |det(B_q)| = |det(B'_q)| = |det(B''_q)| ≤ α_{n'−1} ≤ α_{n+1}.

Next, we show that |w*_i| ≤ α_n (i ∈ N). If w*_i ≠ 0, then w_i is a basic variable that satisfies |w*_i| = |det(B_i)|, where B_i is obtained from B but with the column corresponding to variable w_i replaced by d_B. In a manner similar to that above, we delete the last column and the last row from B_i and obtain a matrix B'_i satisfying det(B_i) = det(B'_i). Next, we multiply the column corresponding to variable w_i by (−1). We multiply the rows indexed by losing coalitions by (−1) and obtain a 0–1 matrix B''_i. The matrix B_i contains a column corresponding to the original variable −q, whose entries are 1 or −1. Thus, the matrix B''_i contains a column vector that is equal to the all-one vector. Lemma 2.1 (c1) implies that

    |w*_i| = |det(B_i)| = |det(B'_i)| = |det(B''_i)| ≤ α_{n'−2} ≤ α_n.

Lastly, we discuss the value of |u*| = |det(B_u)|, where B_u is obtained from B but with the last column (the column indexed by variable u) replaced by d_B. In a manner similar to that above, we multiply the last column by (−1), multiply the rows indexed by losing coalitions by (−1), and multiply the last row by (−1). The resulting matrix, denoted by B'_u, is a 0–1 matrix in which the last row contains exactly one 0-component (indexed by variable −q). Lemma 2.1 (c2) implies that

    |u*| = |det(B_u)| = |det(B'_u)| ≤ 2α_{n'−1} ≤ 2α_{n+1},

and thus 1^⊤ w* ≤ u* − 1 ≤ |u*| − 1 ≤ 2α_{n+1} − 1. QED
[Kurz, 2012] exhaustively generated all weighted voting games satisfying the monotonicity property (1) for up to nine voters. Table 1 shows the maxima of the exact values of minimal integer-weight representations obtained by [Kurz, 2012], Muroga's bounds from [Muroga, 1971], and our upper bounds. The table shows that our bound on the quota is tight when n ≤ 5.

Table 1: Exact values of integer-weight representations.

n                                      1  2  3  4   5   6   7    8    9     10    11
α_n †                                  1  1  2  3   5   9   32   56   144   320   1458
max_{(N,W)} min_{[q;w]} max_i w_i ‡    1  1  2  3   5   9   18   42   110
Muroga's bound (α_n) •                 1  1  2  3   5   9   32   56   144   320   1458
max_{(N,W)} min_{[q;w]} q ‡            1  2  3  5   9   18  40   105  295
Our bound (α_{n+1})                    1  2  3  5   9   32  56   144  320   1458
Muroga's bound (nα_n) •                1  2  6  12  25  54  224  448  1296  3200  16038
max_{(N,W)} min_{[q;w]} Σ_i w_i ‡      1  2  4  8   15  33  77   202  568
Our bound (2α_{n+1}−1)                 1  3  5  9   17  63  111  287  639   2915

† [Sloane et al., 2018], ‡ [Kurz, 2012], • [Muroga, 1971].
4 Rounding Method

This section addresses the problem of finding integer-weight representations. In this section, we assume the monotonicity property (1). In addition, a weighted simple game is given by a triplet (N, W^m, L^M), where W^m and L^M denote the set of minimal winning coalitions and the set of maximal losing coalitions, respectively. We also assume that the empty set is a losing coalition, N is a winning coalition, and no player in N is a null player. Thus, there exists an integer-weight representation in which q ≥ 1 and w_i ≥ 1 (∀i ∈ N).

We discuss the problem of finding an integer-weight representation, which is formulated as the following integer programming problem:

    Q: find a vector (q; w)
       satisfying  \sum_{i ∈ S} w_i ≥ q      (∀S ∈ W^m),    (3)
                   \sum_{i ∈ S} w_i ≤ q − 1  (∀S ∈ L^M),    (4)
                   q ≥ 1,  w_i ≥ 1  (∀i ∈ N),               (5)
                   q ∈ Z,  w_i ∈ Z  (∀i ∈ N).               (6)

A linear relaxation problem Q̄ is obtained from Q by dropping the integer constraints (6).
Let (q*; w*^⊤) be a basic feasible solution of the linear inequality system Q̄. Our proof in the previous section shows that |det(B*)| (q*; w*^⊤) gives a solution of Q (i.e., an integer-weight representation), where B* denotes a corresponding basis matrix of Q̄. When |det(B*)| > n, there exists a simple method for generating a smaller integer-weight representation. For any weight vector w = (w_1, w_2, ..., w_n)^⊤, we denote the integer vector (⌊w_1⌋, ⌊w_2⌋, ..., ⌊w_n⌋)^⊤ by ⌊w⌋. Given a solution (q*; w*^⊤) of Q̄, we introduce the integer vector w' = ⌊n w*⌋ and the integer q' = ⌊n(q* − 1)⌋ + 1. For any minimal winning coalition S ∈ W^m, we have that

    \sum_{i ∈ S} w'_i > \sum_{i ∈ S} (n w*_i − 1) ≥ n \sum_{i ∈ S} w*_i − n ≥ n q* − n = n(q* − 1) ≥ ⌊n(q* − 1)⌋,

and hence

    \sum_{i ∈ S} w'_i ≥ ⌊n(q* − 1)⌋ + 1 = q'.

Each maximal losing coalition S ∈ L^M satisfies

    \sum_{i ∈ S} w'_i ≤ \sum_{i ∈ S} n w*_i ≤ n(q* − 1),  and thus  \sum_{i ∈ S} w'_i ≤ ⌊n(q* − 1)⌋ = q' − 1.

Thus, the pair of w' and q' gives an integer-weight representation satisfying (q'; w'^⊤) ≤ n (q*; w*^⊤). In the remainder of this section, we show that there exists an integer-weight representation (vector) that is less than or equal to ((2 − √2)n + (√2 − 1)) (q*; w*^⊤) < (0.5858 n + 0.4143) (q*; w*^⊤) for any solution (q*; w*^⊤) of Q̄.
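As an illustration of the simple scaling-and-rounding step just described, the following sketch (not part of the paper) rounds a fractional solution of the relaxation and verifies the result against a hypothetical small game.

```python
import math

def round_by_n(q_star, w_star):
    """Scale a fractional representation by n and round as described above."""
    n = len(w_star)
    w = [math.floor(n * wi) for wi in w_star]
    q = math.floor(n * (q_star - 1)) + 1
    return q, w

def is_representation(q, w, minimal_winning, maximal_losing):
    ok_win = all(sum(w[i] for i in S) >= q for S in minimal_winning)
    ok_lose = all(sum(w[i] for i in S) <= q - 1 for S in maximal_losing)
    return ok_win and ok_lose

# Hypothetical 2-out-of-3 example: minimal winning = pairs, maximal losing = singletons.
Wm = [{0, 1}, {0, 2}, {1, 2}]
LM = [{0}, {1}, {2}]
q, w = round_by_n(2.5, [1.5, 1.5, 1.5])          # a feasible fractional solution of the relaxation
print(q, w, is_representation(q, w, Wm, LM))      # 5 [4, 4, 4] True
```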
Theorem 4.1. Let (q*; w*^⊤) be a solution of Q̄. We define ℓ_1 = (2 − √2)n − (√2 − 1) and u_1 = (2 − √2)n + (√2 − 1). Then, there exists a real number λ• ∈ [ℓ_1, u_1] such that the pair Q = ⌊λ•(q* − 1)⌋ + 1 and W = ⌊λ• w*⌋ gives a feasible solution of Q (i.e., an integer-weight representation).
Proof. For any positive real λ, it is easy to see that each maximal losing coalition S ∈ L^M satisfies

    \sum_{i ∈ S} ⌊λ w*_i⌋ ≤ \sum_{i ∈ S} λ w*_i ≤ λ(q* − 1),  and thus  \sum_{i ∈ S} ⌊λ w*_i⌋ ≤ ⌊λ(q* − 1)⌋.

To discuss the weights of minimal winning coalitions, we introduce the function g(λ) = λ − \sum_{i ∈ N} (λ w*_i − ⌊λ w*_i⌋). In the second part of this proof, we show that if we choose Λ ∈ [ℓ_1, u_1] uniformly at random, then E[g(Λ)] ≥ 0. This implies that there exists λ• ∈ [ℓ_1, u_1] satisfying g(λ•) > 0, because g(λ) is right-continuous, piecewise linear, and not a constant function. When g(λ•) > 0, each minimal winning coalition S ∈ W^m satisfies

    λ• > \sum_{i ∈ N} (λ• w*_i − ⌊λ• w*_i⌋) ≥ \sum_{i ∈ S} (λ• w*_i − ⌊λ• w*_i⌋) = \sum_{i ∈ S} λ• w*_i − \sum_{i ∈ S} ⌊λ• w*_i⌋,    (7)

which implies

    \sum_{i ∈ S} ⌊λ• w*_i⌋ > \sum_{i ∈ S} λ• w*_i − λ• = λ• \left( \sum_{i ∈ S} w*_i − 1 \right) ≥ λ•(q* − 1) ≥ ⌊λ•(q* − 1)⌋,

and thus

    \sum_{i ∈ S} ⌊λ• w*_i⌋ ≥ ⌊λ•(q* − 1)⌋ + 1.

Finally, we show that E[g(Λ)] ≥ 0 if we choose Λ ∈ [ℓ_1, u_1] uniformly at random. It is obvious that

    E[g(Λ)] = E[Λ] − \sum_{i ∈ N} E[Λ w*_i − ⌊Λ w*_i⌋]
            = \frac{ℓ_1 + u_1}{2} − \sum_{i ∈ N} \frac{\int_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) \, dλ}{u_1 − ℓ_1}
            = (2 − √2) n − \sum_{i ∈ N} \frac{\int_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) \, dλ}{u_1 − ℓ_1}.

Let us discuss the last term appearing above. By substituting µ for λ w*_i, we obtain

    \frac{\int_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) \, dλ}{u_1 − ℓ_1}
    = \frac{\int_{ℓ_1 w*_i}^{u_1 w*_i} (µ − ⌊µ⌋) \, dµ}{w*_i (u_1 − ℓ_1)}
    ≤ \frac{\int_{−w*_i (u_1 − ℓ_1)}^{0} (µ − ⌊µ⌋) \, dµ}{w*_i (u_1 − ℓ_1)}
    = \frac{\int_{−x}^{0} (µ − ⌊µ⌋) \, dµ}{x},

where the last equality is obtained by setting x = w*_i (u_1 − ℓ_1). As u_1 − ℓ_1 = 2(√2 − 1) and w*_i ≥ 1, it is clear that x = w*_i (u_1 − ℓ_1) ≥ 2(√2 − 1). Here, we introduce the function f(x) = \frac{\int_{−x}^{0} (µ − ⌊µ⌋) \, dµ}{x}. According to numerical calculations (see Figure 4), the inequality x ≥ 2(√2 − 1) implies that f(x) ≤ 2 − √2.
Figure 4: Plot of the function f(x) = \frac{\int_{−x}^{0} (µ − ⌊µ⌋) \, dµ}{x}.
From the above, we obtain the desired result:

    E[g(Λ)] ≥ (2 − √2) n − \sum_{i ∈ N} (2 − √2) = (2 − √2) n − (2 − √2) n = 0.

QED
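In practice, a suitable λ• can be found by a simple grid search over [ℓ_1, u_1]; the sketch below (an illustration, not part of the paper) does exactly that and applies the rounding of Theorem 4.1. The grid search is only a practical stand-in for the existence argument above.

```python
import math

def round_by_lambda(q_star, w_star, grid=10000):
    n = len(w_star)
    l1 = (2 - math.sqrt(2)) * n - (math.sqrt(2) - 1)
    u1 = (2 - math.sqrt(2)) * n + (math.sqrt(2) - 1)
    for k in range(grid + 1):
        lam = l1 + (u1 - l1) * k / grid
        g = lam - sum(lam * wi - math.floor(lam * wi) for wi in w_star)
        if g > 0:                                 # Theorem 4.1 guarantees such a lambda exists
            W = [math.floor(lam * wi) for wi in w_star]
            Q = math.floor(lam * (q_star - 1)) + 1
            return Q, W
    return None

# Same hypothetical 2-out-of-3 example with the fractional solution (2.5, (1.5, 1.5, 1.5)).
print(round_by_lambda(2.5, [1.5, 1.5, 1.5]))      # -> (3, [2, 2, 2]), smaller than the n-scaling above
```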
5 Roughly Weighted Simple Games

In this section, we discuss roughly weighted simple games. First, we show an upper bound on the length of a potent certificate of non-weightedness.

Theorem 5.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N and the monotonicity property (1). If the given simple game G is not roughly weighted, then there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.

Proof. Let us introduce a linear inequality system:

    P3:  \begin{pmatrix} A(W) & \mathbf{1} \\ -A(L) & -\mathbf{1} \end{pmatrix}
         \begin{pmatrix} w \\ -q \end{pmatrix} \ge \mathbf{0},
         \qquad \mathbf{1}^\top w > 0.
- First, we show that if P3 is feasible, then a given simple game is roughly weighted. Let (q′; w′⊤) be a feasible solution of P3. We introduce a new
- voting weight w′′_i = max{w′_i, 0} for each i ∈ N. We show that (q′; w′′⊤) is a rough voting representation. As 1⊤w′ > 0, vector w′ includes at least one
- positive component, and thus w′′ ≠ 0. If a coalition S satisfies ∑_{i∈S} w′′_i < q′, then q′ > ∑_{i∈S} w′′_i ≥ ∑_{i∈S} w′_i, and thus S is losing. Consider the case in
- which a coalition S satisfies ∑_{i∈S} w′′_i > q′. Let S′ = {i ∈ S | w′_i > 0}. It is obvious that q′ < ∑_{i∈S} w′′_i = ∑_{i∈S′} w′′_i = ∑_{i∈S′} w′_i, and thus S′ is winning.
- The monotonicity property (1) and S′ ⊆ S imply that S is winning.
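- The feasibility of a system like P3 can be tested with an off-the-shelf LP solver; the following is a hedged Python sketch (not from the paper) in which the homogeneous strict inequality 1⊤w > 0 is replaced by the normalisation ∑w_i = 1, which is equivalent up to positive scaling:
import numpy as np
from scipy.optimize import linprog

def p3_feasible(n, winning, losing):
    """Feasibility check for a normalised variant of P3; feasibility certifies
    that a rough voting representation exists (a sufficient condition)."""
    def row(S):
        r = np.zeros(n)
        r[list(S)] = 1.0
        return r
    # Variables (w_0, ..., w_{n-1}, q); A_ub @ x <= 0 encodes
    #   q - sum_{i in S} w_i <= 0  for winning S,  sum_{i in S} w_i - q <= 0  for losing S.
    A_ub = [np.append(-row(S), 1.0) for S in winning] + \
           [np.append(row(S), -1.0) for S in losing]
    res = linprog(c=np.zeros(n + 1),
                  A_ub=np.array(A_ub), b_ub=np.zeros(len(A_ub)),
                  A_eq=np.array([np.append(np.ones(n), 0.0)]), b_eq=[1.0],
                  bounds=[(None, None)] * (n + 1))
    return res.status == 0

# Example: 3-player majority game (winning iff |S| >= 2).
wins = [{0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]
loses = [set(), {0}, {1}, {2}]
print(p3_feasible(3, wins, loses))  # True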
729
- From the above discussion, it is obvious that if a given simple game is not roughly weighted, then P3 is infeasible. Farkas' Lemma [Farkas, 1902] states that
- D3:  [ A(W)⊤  −A(L)⊤ ; 1⊤  −1⊤ ] (x; y) = (−1; 0),   x ≥ 0, y ≥ 0,
- is feasible if and only if P3 is infeasible. By introducing an artificial non-negative variable u ≥ 0 and the equality 1⊤x = u − 1, we obtain a linear
- inequality system:
- D3+:  [ A(W)⊤  −A(L)⊤  0 ; 1⊤  −1⊤  0 ; 1⊤  0⊤  −1 ] (x; y; u) = (−1; 0; −1),   x ≥ 0, y ≥ 0, u ≥ 0.
- It is obvious that D3 is feasible if and only if D3+ is feasible.
757
- Next, we construct a trading transform from a basic feasible solution of D3+. Let Ã₃ z = c̃′ be a linear equality system obtained from D3+ by
- repeatedly removing redundant equalities. As D3+ is feasible, there exists a basic feasible solution, denoted by z′, and a corresponding basis matrix B
- of Ã₃. From Cramer's rule, z′_S ≠ 0 implies that z′_S = det(B_S)/det(B) for each S ⊆ N, where B_S is obtained from B with the column corresponding to
- variable z_S replaced by c̃′. Obviously, |det(B)| z′ is a non-negative integer vector. We denote by (x′⊤, y′⊤, u′)⊤ the basic feasible solution z′. We recall
- that ∅ ∉ W ∋ N and introduce a pair of non-negative integer vectors (x*, y*) defined as follows:
- x*_S = |det(B)| x′_S (if S ∈ W \ {N}),   x*_S = |det(B)| (x′_N + 1) (if S = N),
- y*_S = |det(B)| y′_S (if S ∈ L \ {∅}),   y*_S = |det(B)| (y′_∅ + 1) (if S = ∅).
779
- Subsequently, χ(S) denotes the characteristic vector of a coalition S. It is easy to see that the pair (x*, y*) satisfies
- A(W)⊤x* − A(L)⊤y* = ∑_{S∈W} χ(S) x*_S − ∑_{S∈L} χ(S) y*_S
-   = ∑_{S∈W} χ(S) |det(B)| x′_S + χ(N) |det(B)| − ∑_{S∈L} χ(S) |det(B)| y′_S − χ(∅) |det(B)|
-   = |det(B)| ( ( ∑_{S∈W} χ(S) x′_S − ∑_{S∈L} χ(S) y′_S ) + χ(N) − χ(∅) )
-   = |det(B)| ( ( A(W)⊤x′ − A(L)⊤y′ ) + 1 − 0 )
-   = |det(B)| (−1 + 1 − 0) = 0
- and
- ∑_{S∈W} x*_S − ∑_{S∈L} y*_S = ∑_{S∈W} |det(B)| x′_S + |det(B)| − ∑_{S∈L} |det(B)| y′_S − |det(B)|
-   = |det(B)| ( ( ∑_{S∈W} x′_S − ∑_{S∈L} y′_S ) + 1 − 1 )
-   = |det(B)| (0 + 1 − 1) = 0.
- Next, we can construct a trading transform (X; Y) corresponding to the pair of x* and y* by analogy with the proof of Theorem 2.2. Both x*_N and
- y*_∅ are positive and ∅ ∉ W ∋ N; therefore (X; Y) is a potent certificate of non-weightedness.
824
- Lastly, we discuss the length of (X; Y). The number of rows (columns) of B, denoted by n′, is less than or equal to n+2. The basic feasible solution
- z′ satisfies u′ = 1 + 1⊤x′ ≥ 1 > 0, and thus Cramer's rule states that det(B) u′ = det(B_u) (Figure 5 shows an example). We multiply the columns
- of B_u corresponding to components in (y⊤, u) by (−1) and obtain a 0–1 matrix B′_u satisfying |det(B_u)| = |det(B′_u)|. As c̃′ includes at most one
- 0-component, Lemma 2.1 implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_{n+1}. Thus, the length of (X; Y) satisfies
- ∑_{S∈W} x*_S = ∑_{S∈W} |det(B)| x′_S + |det(B)| = |det(B)| (1⊤x′ + 1) = |det(B)| (u′ − 1 + 1) = |det(B)| u′ = |det(B) u′| = |det(B_u)| = |det(B′_u)| ≤ 2α_{n+1}.
- QED
844
- In the rest of this section, we discuss integer voting weights and a quota of a rough voting representation. We say that a player i ∈ N is a passer if
- and only if every coalition S ∋ i is winning.
847
- Figure 5: Example of elementary matrix operations for D3+ (the matrices B, B_u and B′_u; entries omitted).
871
- Theorem 5.2. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If a given simple game G is roughly weighted, then there exists
- an integer vector (q; w⊤) of the rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ ∑_{i∈N} w_i ≤ 2α_n.
- Proof. First, we show that if a given game is roughly weighted, then either
- P4:  [ A(W)  0 ; −A(L)  0 ; −1⊤  1 ] (w; u) ≥ (1; −1; 0),   w ≥ 0, u ≥ 0,
- is feasible or there exists at least one passer. Suppose that a given simple game has a rough voting representation (q; w⊤). If q > 0, then (1/q)w
- becomes a feasible solution of P4 by setting u to a sufficiently large positive number. Consider the case q ≤ 0. Assumption ∅ ∉ W implies that 0 ≤ q,
- and thus we obtain q = 0. Properties (q, w⊤) ≠ 0⊤ and w ≥ 0 imply that ∃i◦ ∈ N with w_{i◦} > 0, i.e., a given game G has a passer i◦.
- When a given game G has a passer i◦ ∈ N, there exists a rough voting representation (q◦; w◦⊤) defined by
- w◦_i = 1 (i = i◦),  w◦_i = 0 (i ≠ i◦),  q◦ = 0,
- which produces the desired result.
900
- Lastly, we consider the case in which P4 is feasible. It is well-known that when P4 is feasible, there exists a basic feasible solution. Let (w′⊤, u′)⊤ be
- a basic feasible solution of P4 and B be a corresponding basis matrix. It is easy to see that (1; w′⊤) is a rough voting representation of G. Assumption
- N ∈ W implies the positivity of u′ because u′ ≥ 1⊤w′ ≥ 1. Then, variable u is a basic variable, and thus B includes a column corresponding to u,
- which is called the last column. The non-singularity of B implies that the column corresponding to u is not the zero vector, and thus B includes a row
- corresponding to the inequality 1⊤w ≤ u, which is called the last row (see Figure 6). The number of rows (columns) of the basis matrix B, denoted by n′,
- is less than or equal to n+1.
- Cramer's rule states that (q*, w*⊤, u*) = |det(B)| (1, w′⊤, u′) is a non-negative integer vector. It is easy to see that (q*, w*⊤, u*) satisfies
- A(W)w* = |det(B)| A(W)w′ ≥ |det(B)| 1 = q* 1,
- A(L)w* = |det(B)| A(L)w′ ≤ |det(B)| 1 = q* 1, and
- 1⊤w* = |det(B)| 1⊤w′ ≤ |det(B)| u′ = u*.
- From the above, (q*; w*⊤) is an integer vector of a rough voting representation. Assumption N ∈ W implies that 1⊤w* ≥ q* = |det(B)| ≥ 1.
- Let d′_B be a subvector of the right-hand-side vector of the inequality system in P4 corresponding to rows of B. Cramer's rule states that det(B) u′ =
- det(B_u), where B_u is obtained from B but with the column corresponding to the basic variable u replaced by d′_B (see Figure 6). We multiply the rows of B_u
- that correspond to losing coalitions by (−1) and multiply the last row by (−1). The resulting matrix, denoted by B′_u, is a 0–1 matrix whose last row
- includes exactly one 0-component (indexed by u). Lemma 2.1 (c2) implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_n. Thus, we obtain that
- 1⊤w* ≤ u* ≤ |u*| = |det(B) u′| = |det(B_u)| = |det(B′_u)| ≤ 2α_n.
- By analogy with the proof of Theorem 3.1, we can prove the desired inequalities: q* = |det(B)| ≤ α_n and w*_i ≤ α_{n−1} (∀i ∈ N). QED
935
- Figure 6: Examples of elementary matrix operations for P4 (matrices B, B_u and B′_u over variables w1, ..., w5, u; entries omitted).
959
- 6 Conclusion
- In this paper, we discussed the smallest value k* such that every k*-trade robust simple game would be weighted. We provided a new proof of the
- existence of a trading transform when a given simple game is non-weighted. Our proof yields an improved upper bound on the required length of a
- trading transform. We showed that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value
- of determinants of (n+1)×(n+1) 0–1 matrices. Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain k* ≤ α_{n+1} ≤
- (n+2)^{(n+2)/2} (1/2)^{n+1}, which improves the existing bound k* ≤ (n+1) n^{n/2} obtained by [Gvozdeva and Slinko, 2011].
- Next, we discussed upper bounds for the maximum possible integer weights and the quota needed to represent any weighted simple game with n
- players. We showed that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w⊤) ∈ Z × Z^N such that |w_i| ≤ α_n
- (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ ∑_{i∈N} w_i ≤ 2α_{n+1} − 1. We demonstrated the tightness of our bound on the quota when n ≤ 5.
- We described a rounding method based on a linear relaxation of an integer programming problem for finding an integer-weight representation.
- We showed that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by λ• ≤
- (2−√2)n + (√2−1) < 0.5858n + 0.4143. Our proof of Theorem 4.1 indicates the existence of a randomized rounding algorithm for finding an appropriate
- value λ•. However, from a theoretical point of view, Theorem 4.1 only showed the existence of a real number λ•. Even if there exists an appropriate
- "rational" number λ•, we need to determine the size of the rational number (its numerator and denominator) to implement a naive randomized rounding
- algorithm. Thus, it remains open whether there exists an efficient algorithm for finding an integer-weight representation satisfying the properties
- in Theorem 4.1.
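- As an illustration only, the randomized rounding step suggested by the proof of Theorem 4.1 can be sketched in Python as follows, assuming a real-valued representation (q, w) with positive weights is given (the exact normalisation used in the paper's proof is not reproduced here):
import math, random

def try_randomized_rounding(w, q, winning, losing, trials=1000):
    """Sample lambda uniformly from [l1, u1] and keep the floor-rounded weights
    only if they still separate winning from losing coalitions."""
    n = len(w)
    l1 = (2 - math.sqrt(2)) * n - (math.sqrt(2) - 1)
    u1 = (2 - math.sqrt(2)) * n + (math.sqrt(2) - 1)
    for _ in range(trials):
        lam = random.uniform(l1, u1)
        wi = [math.floor(lam * x) for x in w]        # assumes w >= 0
        qi = math.floor(lam * (q - 1)) + 1
        if all(sum(wi[i] for i in S) >= qi for S in winning) and \
           all(sum(wi[i] for i in S) <= qi - 1 for S in losing):
            return qi, wi
    return None

# Example: 3-player majority game with real representation q = 2, w = (1, 1, 1).
print(try_randomized_rounding([1.0, 1.0, 1.0], 2.0,
                              winning=[{0, 1}, {0, 2}, {1, 2}, {0, 1, 2}],
                              losing=[set(), {0}, {1}, {2}]))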
993
- Lastly, we showed that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w⊤) of the rough voting representation
- satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ ∑_{i∈N} w_i ≤ 2α_n. When a given simple game is not roughly weighted, we showed that (under
- the monotonicity property (1) and ∅ ∉ W ∋ N) there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.
1000
- References
- [Baugh, 1970] Baugh, C. R. (1970). Pseudo-threshold logic: A generalization of threshold logic. PhD thesis, University of Illinois at Urbana-Champaign.
- [Chow, 1961] Chow, C.-K. (1961). On the characterization of threshold functions. In 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961), pages 34–38. IEEE.
- [Elgot, 1961] Elgot, C. C. (1961). Decision problems of finite automata design and related arithmetics. Transactions of the American Mathematical Society, 98(1):21–51.
- [Farkas, 1902] Farkas, J. (1902). Theorie der einfachen Ungleichungen. Journal für die reine und angewandte Mathematik, 1902(124):1–27.
- [Freixas, 2021] Freixas, J. (2021). A characterization of weighted simple games based on pseudoweightings. Optimization Letters, 15:1371–1383.
- [Freixas et al., 2016] Freixas, J., Freixas, M., and Kurz, S. (2016). Characterization of threshold functions: state of the art, some new contributions and open problems. Available at SSRN, https://ssrn.com/abstract=2740475 (March 1, 2016).
- [Freixas et al., 2017] Freixas, J., Freixas, M., and Kurz, S. (2017). On the characterization of weighted simple games. Theory and Decision, 83(4):469–498.
- [Freixas and Kurz, 2014] Freixas, J. and Kurz, S. (2014). On α-roughly weighted games. International Journal of Game Theory, 43(3):659–692.
- [Freixas and Molinero, 2009a] Freixas, J. and Molinero, X. (2009a). On the existence of a minimum integer representation for weighted voting systems. Annals of Operations Research, 166(1):243–260.
- [Freixas and Molinero, 2009b] Freixas, J. and Molinero, X. (2009b). Simple games and weighted games: a theoretical and computational viewpoint. Discrete Applied Mathematics, 157(7):1496–1508.
- [Freixas and Molinero, 2010] Freixas, J. and Molinero, X. (2010). Weighted games without a unique minimal representation in integers. Optimisation Methods & Software, 25(2):203–215.
- [Gvozdeva et al., 2013] Gvozdeva, T., Hemaspaandra, L. A., and Slinko, A. (2013). Three hierarchies of simple games parameterized by "resource" parameters. International Journal of Game Theory, 42(1):1–17.
- [Gvozdeva and Slinko, 2011] Gvozdeva, T. and Slinko, A. (2011). Weighted and roughly weighted simple games. Mathematical Social Sciences, 61(1):20–30.
- [Hadamard, 1893] Hadamard, J. (1893). Résolution d'une question relative aux déterminants. Bull. des Sciences Math., 2:240–246.
- [Hameed and Slinko, 2015] Hameed, A. and Slinko, A. (2015). Roughly weighted hierarchical simple games. International Journal of Game Theory, 44(2):295–319.
- [Hansen and Podolskii, 2015] Hansen, K. A. and Podolskii, V. V. (2015). Polynomial threshold functions and Boolean threshold circuits. Information and Computation, 240:56–73.
- [Håstad, 1994] Håstad, J. (1994). On the size of weights for threshold gates. SIAM Journal on Discrete Mathematics, 7(3):484–492.
- [Isbell, 1956] Isbell, J. R. (1956). A class of majority games. The Quarterly Journal of Mathematics, 7(1):183–187.
- [Krohn and Sudhölter, 1995] Krohn, I. and Sudhölter, P. (1995). Directed and weighted majority games. Zeitschrift für Operations Research, 42(2):189–216.
- [Kurz, 2012] Kurz, S. (2012). On minimum sum representations for weighted voting games. Annals of Operations Research, 196(1):361–369.
- [Muroga, 1971] Muroga, S. (1971). Threshold Logic and its Applications. Wiley, New York.
- [Muroga et al., 1962] Muroga, S., Toda, I., and Kondo, M. (1962). Majority decision functions of up to six variables. Mathematics of Computation, 16(80):459–472.
- [Muroga et al., 1970] Muroga, S., Tsuboi, T., and Baugh, C. R. (1970). Enumeration of threshold functions of eight variables. IEEE Transactions on Computers, 100(9):818–825.
- [Myhill and Kautz, 1961] Myhill, J. and Kautz, W. H. (1961). On the size of weights required for linear-input switching functions. IRE Transactions on Electronic Computers, EC-10(2):288–290.
- [Peled and Simeone, 1985] Peled, U. N. and Simeone, B. (1985). Polynomial-time algorithms for regular set-covering and threshold synthesis. Discrete Applied Mathematics, 12(1):57–69.
- [Sloane et al., 2018] Sloane, N. J. et al. (2018). The on-line encyclopedia of integer sequences (A003432). Published electronically.
- [Taylor and Zwicker, 1992] Taylor, A. D. and Zwicker, W. S. (1992). A characterization of weighted voting. Proceedings of the American Mathematical Society, 115(4):1089–1094.
- [Taylor and Zwicker, 1999] Taylor, A. D. and Zwicker, W. S. (1999). Simple Games: Desirability Relations, Trading, Pseudoweightings. Princeton University Press.
- [Wang and Williams, 1991] Wang, C. and Williams, A. (1991). The threshold order of a Boolean function. Discrete Applied Mathematics, 31(1):51–69.
- [Winder, 1965] Winder, R. O. (1965). Enumeration of seven-argument threshold functions. IEEE Transactions on Electronic Computers, EC-14(3):315–325.
 
txt/2102.04993.txt DELETED
@@ -1,1202 +0,0 @@
1
- JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020 1
2
- Attention-Based Neural Networks for Chroma Intra
3
- Prediction in Video Coding
4
- Marc Górriz Blanch, Student Member IEEE, Saverio Blasi, Alan F. Smeaton, Fellow IEEE,
5
- Noel E. O’Connor, Member IEEE, and Marta Mrak, Senior Member IEEE
6
- Abstract —Neural networks can be successfully used to im-
7
- prove several modules of advanced video coding schemes. In
8
- particular, compression of colour components was shown to
9
- greatly benefit from usage of machine learning models, thanks
10
- to the design of appropriate attention-based architectures that
11
- allow the prediction to exploit specific samples in the reference
12
- region. However, such architectures tend to be complex and
13
- computationally intense, and may be difficult to deploy in a
14
- practical video coding pipeline. This work focuses on reducing
15
- the complexity of such methodologies, to design a set of simpli-
16
- fied and cost-effective attention-based architectures for chroma
17
- intra-prediction. A novel size-agnostic multi-model approach is
18
- proposed to reduce the complexity of the inference process. The
19
- resulting simplified architecture is still capable of outperforming
20
- state-of-the-art methods. Moreover, a collection of simplifications
21
- is presented in this paper, to further reduce the complexity
22
- overhead of the proposed prediction architecture. Thanks to
23
- these simplifications, a reduction in the number of parameters
24
- of around 90% is achieved with respect to the original attention-
25
- based methodologies. Simplifications include a framework for re-
26
- ducing the overhead of the convolutional operations, a simplified
27
- cross-component processing model integrated into the original
28
- architecture, and a methodology to perform integer-precision
29
- approximations with the aim to obtain fast and hardware-aware
30
- implementations. The proposed schemes are integrated into the
31
- Versatile Video Coding (VVC) prediction pipeline, retaining
32
- compression efficiency of state-of-the-art chroma intra-prediction
33
- methods based on neural networks, while offering different
34
- directions for significantly reducing coding complexity.
35
- Index Terms —Chroma intra prediction, convolutional neural
36
- networks, attention algorithms, multi-model architectures, com-
37
- plexity reduction, video coding standards.
38
- I. INTRODUCTION
39
- EFFICIENT video compression has become an essential
40
- component of multimedia streaming. The convergence
41
- of digital entertainment followed by the growth of web ser-
42
- vices such as video conferencing, cloud gaming and real-time
43
- high-quality video streaming, prompted the development of
44
- advanced video coding technologies capable of tackling the
45
- increasing demand for higher quality video content and its con-
46
- sumption on multiple devices. New compression techniques
47
- enable a compact representation of video data by identifying
48
- Manuscript submitted July 1, 2020. The work described in this paper has
49
- been conducted within the project JOLT funded by the European Union’s Hori-
50
- zon 2020 research and innovation programme under the Marie Skłodowska
51
- Curie grant agreement No 765140.
52
- M. Górriz Blanch, S. Blasi and M. Mrak are with BBC Research &
53
- Development, The Lighthouse, White City Place, 201 Wood Lane, Lon-
54
- don, UK (e-mail: [email protected], [email protected],
55
56
- A. F. Smeaton and N. E. O’Connor are with Dublin City University, Glas-
57
- nevin, Dublin 9, Ireland (e-mail: [email protected], [email protected]).
58
- Fig. 1. Visualisation of the attentive prediction process. For each reference
59
- sample 0-16 the attention module generates its contribution to the prediction
60
- of individual pixels from a target 4×4 block.
61
- and removing spatial-temporal and statistical redundancies
62
- within the signal. This results in smaller bitstreams, enabling
63
- more efficient storage and transmission as well as distribution
64
- of content at higher quality, requiring reduced resources.
65
- Advanced video compression algorithms are often complex
66
- and computationally intense, significantly increasing the en-
67
- coding and decoding time. Therefore, despite bringing high
68
- coding gains, their potential for application in practice is
69
- limited. Among the current state-of-the-art solutions, the next
70
- generation Versatile Video Coding standard [1] (referred to as
71
- VVC in the rest of this paper), targets between 30-50% better
72
- compression rates for the same perceptual quality, supporting
73
- resolutions from 4K to 16K as well as 360° videos. One
74
- fundamental component of hybrid video coding schemes, intra
75
- prediction, exploits spatial redundancies within a frame by
76
- predicting samples of the current block from already recon-
77
- structed samples in its close surroundings. VVC allows a
78
- large number of possible intra prediction modes to be used
79
- on the luma component at the cost of a considerable amount
80
- of signalling data. Conversely, to limit the impact of mode
81
- signalling, chroma components employ a reduced set of modes
82
- [1].
83
- In addition to traditional modes, more recent research intro-
84
- duced schemes which further exploit cross-component correla-
85
- tions between the luma and chroma components. Such corre-
86
- lations motivated the development of the Cross-Component
87
- Linear Model (CCLM, or simply LM in this paper) intra
88
- modes. When using CCLM, the chroma components are
89
- predicted from already reconstructed luma samples using a
90
- linear model. Nonetheless, the limitation of simple linear
91
- predictions comes from its high dependency on the selection
92
- of predefined reference samples. Improved performance can
93
- be achieved using more sophisticated Machine Learning (ML)
94
- mechanisms [2], [3], which are able to derive more complex
95
- representations of the reference data and hence boost the
96
- prediction capabilities.
97
- Methods based on Convolutional Neural Networks (CNNs)
98
- [2], [4] provided significant improvements at the cost of two
99
- main drawbacks: the associated increase in system complex-
100
- ity and the tendency to disregard the location of individual
101
- reference samples. Related works deployed complex neural
102
- networks (NNs) by means of model-based interpretability
103
- [5]. For instance, VVC recently adopted simplified NN-based
104
- methods such as Matrix Intra Prediction (MIP) modes [6]
105
- and Low-Frequency Non Separable Transform (LFNST) [7].
106
- For the particular task of block-based intra-prediction, the
107
- usage of complex NN models can be counterproductive if
108
- there is no control over the relative position of the reference
109
- samples. When using fully-connected layers, all input samples
110
- contribute to all output positions, and after the consecutive
111
- application of several hidden layers, the location of each
112
- input sample is lost. This behaviour clearly runs counter
113
- to the design of traditional approaches, in which predefined
114
- directional modes carefully specify which boundary locations
115
- contribute to each prediction position. A novel ML-based
116
- cross-component intra-prediction method is proposed in [4],
117
- introducing a new attention module capable of tracking the
118
- contribution of each neighbouring reference sample when
119
- computing the prediction of each chroma pixel, as shown in
120
- Figure 1. As a result, the proposed scheme better captures
121
- the relationship between the luma and chroma components,
122
- resulting in more accurate prediction samples. However, such
123
- NN-based methods significantly increase the codec complex-
124
- ity, increasing the encoder and decoder times by up to 120%
125
- and 947%, respectively.
126
- This paper focuses on complexity reduction in video coding
127
- with the aim to derive a set of simplified and cost-effective
128
- attention-based architectures for chroma intra-prediction. Un-
129
- derstanding and distilling knowledge from the networks en-
130
- ables the implementation of less complex algorithms which
131
- achieve similar performance to the original models. Moreover,
132
- a novel training methodology is proposed in order to design a
133
- block-independent multi-model which outperforms the state-
134
- of-the-art attention-based architectures and reduces inference
135
- complexity. The use of variable block sizes during training
136
- helps the model to better generalise on content variety whileensuring higher precision on predicting large chroma blocks.
137
- The main contributions of this work are the following:
138
- A competitive block-independent attention-based multi-
139
- model and training methodology;
140
- A framework for complexity reduction of the convolu-
141
- tional operations;
142
- A simplified cross-component processing model using
143
- sparse auto-encoders;
144
- A fast and cost-effective attention-based multi-model with
145
- integer precision approximations.
146
- This paper is organised as follows: Section II provides a
147
- brief overview on the related work, Section III introduces
148
- the attention-based methodology in detail and establishes the
149
- mathematical notation for the rest of the paper, Section IV
150
- presents the proposed simplifications and Section V shows
151
- experimental results, with conclusion drawn in Section VI.
152
- II. BACKGROUND
153
- Colour images are typically represented by three colour
154
- components (e.g. RGB, YCbCr). The YCbCr colour scheme
155
- is often adopted by digital image and video coding standards
156
- (such as JPEG, MPEG-1/2/4 and H.261/3/4) due to its ability
157
- to compact the signal energy and to reduce the total required
158
- bandwidth. Moreover, chrominance components are often sub-
159
- sampled by a factor of two to conform to the YCbCr 4:2:0
160
- chroma format, in which the luminance signal contains most of
161
- the spatial information. Nevertheless, cross-component redun-
162
- dancies can be further exploited by reusing information from
163
- already coded components to compress another component.
164
- In the case of YCbCr, the Cross-Component Linear model
165
- (CCLM) [8] uses a linear model to predict the chroma signal
166
- from a subsampled version of the already reconstructed luma
167
- block signal. The model parameters are derived at both the
168
- encoder and decoder sides without needing explicit signalling
169
- in the bitstream.
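- To illustrate the linear-model idea, the following Python sketch fits chroma ≈ a·luma + b on the neighbouring reference samples and applies it to the co-located luma block; the actual VVC derivation of (a, b) is different (it uses selected min/max reference pairs), so this is only an assumption-laden approximation:
import numpy as np

def cclm_like_predict(rec_luma_ds, nbr_luma, nbr_chroma):
    # Least-squares fit of a first-order model on the boundary reference pairs,
    # then prediction of the chroma block from the downsampled luma block.
    a, b = np.polyfit(np.asarray(nbr_luma, float), np.asarray(nbr_chroma, float), 1)
    return a * np.asarray(rec_luma_ds, float) + b

# Toy usage: 4x4 luma block and 8 boundary reference pairs (made-up values).
luma_block = np.arange(16, dtype=float).reshape(4, 4)
print(cclm_like_predict(luma_block,
                        [10, 20, 30, 40, 50, 60, 70, 80],
                        [5, 11, 14, 22, 24, 31, 36, 40]).round(1))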
170
- Another example is the Cross-Component Prediction (CCP)
171
- [9] which resides at the transform unit (TU) level regardless
172
- of the input colour space. In case of YCbCr, a subsampled and
173
- dequantised luma transform block (TB) is used to modify the
174
- chroma TB at the same spatial location based on a context
175
- parameter signalled in the bitstream. An extension of this
176
- concept modifies one chroma component using the residual
177
- signal of the other one [10]. Such methodologies significantly
178
- improved the coding efficiency by further exploiting the cross-
179
- component correlations within the chroma components.
180
- In parallel, recent success of deep learning application
181
- in computer vision and image processing influenced design
182
- of novel video compression algorithms. In particular in the
183
- context of intra-prediction, a new algorithm [3] was introduced
184
- based on fully-connected layers and CNNs to map the predic-
185
- tion of block positions from the already reconstructed neigh-
186
- bouring samples, achieving BD-rate (Bjontegaard Delta rate)
187
- [11] savings of up to 3.0% on average over HEVC, for approx.
188
- 200% increase in decoding time. The successful integration
189
- of CNN-based methods for luma intra-prediction into existing
190
- codec architectures has motivated research into alternative
191
- methods for chroma prediction, exploiting cross-component
192
- Fig. 2. Baseline attention-based architecture for chroma intra prediction presented in [4] and described in Section III.
193
- redundancies similar to the aforementioned LM methods. A
194
- novel hybrid neural network for chroma intra prediction was
195
- recently introduced in [2]. A first CNN was designed to
196
- extract features from reconstructed luma samples. This was
197
- combined with another fully-connected network used to extract
198
- cross-component correlations between neighbouring luma and
199
- chroma samples. The resulting architecture uses complex non-
200
- linear mapping for end-to-end prediction of chroma channels.
201
- However, this is achieved at the cost of disregarding the spatial
202
- location of the boundary reference samples and significant
203
- increase of the complexity of the prediction process. As shown
204
- in [4], after a consecutive application of fully-connected layers
205
- in [2], the location of each input boundary reference sample
206
- is lost. Therefore, the fully-convolutional architecture in [4]
207
- better matches the design of the directional VVC modes and
208
- is able to provide significantly better performance.
209
- The use of attention models enables effective utilisation
210
- of the individual spatial location of the reference samples
211
- [4]. The concept of “attention-based” learning is a well-
212
- known idea used in deep learning frameworks, to improve the
213
- performance of trained networks in complex prediction tasks
214
- [12], [13], [14]. In particular, self-attention is used to assess the
215
- impact of particular input variables on the outputs, whereby
216
- the prediction is computed focusing on the most relevant
217
- elements of the same sequence [15]. The novel attention-
218
- based architecture introduced in [4] reports average BD-rate
219
- reductions of -0.22%, -1.84% and -1.78% for the Y , Cb and
220
- Cr components, respectively, although it significantly impacts
221
- the encoder and decoder time.
222
- One common aspect across all related work is that whilst
223
- the result is an improvement in compression this comes at the
224
- expense of increased complexity of the encoder and decoder.
225
- In order to address the complexity challenge, this paper aims
226
- to design a set of simplified attention-based architectures for
227
- performing chroma intra-prediction faster and more efficiently.
228
- Recent works addressed complexity reduction in neural net-
229
- works using methods such as channel pruning [16], [17],
230
- [18] and quantisation [19], [20], [21]. In particular for videocompression, many works used integer arithmetic in order
231
- to efficiently implement trained neural networks on different
232
- hardware platforms. For example, the work in [22] proposes a
233
- training methodology to handle low precision multiplications,
234
- proving that very low precision is sufficient not just for
235
- running trained networks but also for training them. Similarly,
236
- the work in [23] considers the problem of using variational
237
- latent-variable models for data compression and proposes
238
- integer networks as a universal solution of range coding as
239
- an entropy coding technique. They demonstrate that such
240
- models enable reliable cross-platform encoding and decoding
241
- of images using variational models. Moreover, in order to
242
- ensure deterministic implementations on hardware platforms,
243
- they approximate non-linearities using lookup tables. Finally,
244
- an efficient implementation of matrix-based intra prediction
245
- is proposed in [24], where a performance analysis evaluates
246
- the challenges of deploying models with integer arithmetic
247
- in video coding standards. Inspired by this knowledge, this
248
- paper develops a fast and cost-effective implementation of the
249
- proposed attention-based architecture using integer precision
250
- approximations. As shown Section V-D, while such approxi-
251
- mations can significantly reduce the complexity, the associated
252
- drop of performance is still not negligible.
253
- III. ATTENTION-BASED ARCHITECTURES
254
- This section describes in detail the attention-based approach
255
- proposed in [4] (Figure 2), which will be the baseline for the
256
- presented methodology in this paper. The section also provides
257
- the mathematical notation used for the rest of this paper.
258
- Without loss of generality, only square blocks of pixels
259
- are considered in this work. After intra-prediction and recon-
260
- struction of a luma block in the video compression chain,
261
- luma samples can be used for prediction of co-located chroma
262
- components. In this discussion, the size of a luma block is
263
- assumed to be (downsampled to) NNsamples, which is
264
- the size of the co-located chroma block. This may require the
265
- usage of conventional downsampling operations, such as in
266
- the case of using chroma sub-sampled picture formats suchJOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020 4
267
- Fig. 3. Proposed multi-model attention-based architectures with the integration of the simplifications introduced in this paper. More details about the model’s
268
- hyperparameters and a description of the referred schemes can be found in Section V.
269
- as 4:2:0. Note that a video coding standard treats all image samples as unsigned integer values within a certain precision range based on the internal
- bit depth. However, in order to utilise common deep learning frameworks, all samples are converted to floating point and normalised to values within
- the range [0, 1]. For the chroma prediction process, the reference samples used include the co-located luma block X₀ ∈ ℝ^(N×N), and the array of
- reference samples B_c ∈ ℝ^b, b = 4N+1, from the left and from above the current block (Figure 1), where c = Y, Cb or Cr refers to the three colour
- components. B is constructed from samples on the left boundary (starting from the bottom-most sample), then the corner is added, and finally the
- samples on top are added (starting from the left-most sample). In case some reference samples are not available, these are padded using a predefined
- value, following the standard approach defined in VVC. Finally, S₀ ∈ ℝ^(3×b) is the cross-component volume obtained by concatenating the three
- reference arrays B_Y, B_Cb and B_Cr. Similar to the model in [2], the attention-based architecture adopts a scheme based on three network branches
- that are combined to produce prediction samples, illustrated in Figure 2. The first two branches work concurrently to extract features from the input
- reference samples.
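- A minimal Python sketch of the reference-array construction just described (left boundary bottom-up, corner, then top boundary left-to-right, giving b = 4N+1 samples); the padding rule and the (row, column) layout are assumptions, not the exact VVC procedure:
import numpy as np

def build_reference_array(frame, y, x, n, pad_value=0.5):
    # frame: one [0, 1]-normalised component plane; block top-left at row y, column x.
    h, w = frame.shape
    left = [frame[y + i, x - 1] if x > 0 and y + i < h else pad_value
            for i in range(2 * n - 1, -1, -1)]
    corner = [frame[y - 1, x - 1] if x > 0 and y > 0 else pad_value]
    top = [frame[y - 1, x + j] if y > 0 and x + j < w else pad_value
           for j in range(2 * n)]
    return np.array(left + corner + top, dtype=np.float32)   # length 4*n + 1

def build_cross_component_volume(b_y, b_cb, b_cr):
    # S_0 stacks the Y, Cb and Cr reference arrays into a 3 x (4N+1) volume.
    return np.stack([b_y, b_cb, b_cr], axis=0)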
292
- The first branch (referred to as the cross-component boundary branch) extracts cross-component features from S₀ ∈ ℝ^(3×b) by applying I consecutive
- D_i-dimensional 1×1 convolutional layers to obtain the S_i ∈ ℝ^(D_i×b) output feature maps, where i = 1, 2, ..., I. By applying 1×1 convolutions, the
- boundary input dimensions are preserved, resulting in a D_i-dimensional vector of cross-component information for each boundary location. The
- resulting volumes are activated using a Rectified Linear Unit (ReLU) non-linear function.
- In parallel, the second branch (referred to as the luma convolutional branch) extracts spatial patterns over the co-located reconstructed luma block
- X₀ by applying convolutional operations. The luma convolutional branch is defined by J consecutive C_j-dimensional 3×3 convolutional layers with a
- stride of 1, to obtain X_j ∈ ℝ^(C_j×N²) feature maps from the N² input samples, where j = 1, 2, ..., J. Similar to the cross-component boundary branch,
- in this branch a bias and a ReLU activation are applied within each convolutional layer.
309
- The feature maps (S_I and X_J) from both branches are each convolved using a 1×1 kernel, to project them into two corresponding reduced feature
- spaces. Specifically, S_I is convolved with a filter W_F ∈ ℝ^(h×D) to obtain the h-dimensional feature matrix F. Similarly, X_J is convolved with a filter
- W_G ∈ ℝ^(h×C) to obtain the h-dimensional feature matrix G. The two matrices are multiplied together to obtain the pre-attention map M = Gᵀ F.
- Finally, the attention matrix A ∈ ℝ^(N²×b) is obtained applying a softmax operation to each element of M, to generate the probability of each boundary
- location being able to predict a sample location in the block. Each value α_{j,i} in A is obtained as:
- α_{j,i} = exp(m_{i,j} / T) / ∑_{n=0}^{b−1} exp(m_{n,j} / T),   (1)
- where j = 0, ..., N²−1 represents the sample location in the predicted block, i = 0, ..., b−1 represents a reference sample location, and T is the softmax
- temperature parameter controlling the smoothness of the generated probabilities, with 0 < T ≤ 1. Notice that the smaller the value of T, the more
- localised are the obtained attention areas, resulting in correspondingly fewer boundary samples contributing to a given prediction location.
- The weighted sum of the contribution of each reference sample in predicting a given sample at a specific location is obtained by computing the matrix
- multiplication between the cross-component boundary features S_I and the attention matrix A, or formally S_Iᵀ A. In order to further refine S_Iᵀ A, this
- weighted sum can be multiplied by the output of the luma branch. To do so, the output of the luma branch must be transformed to change its dimensions
- by means of a 1×1 convolution using a matrix W_x ∈ ℝ^(D×C) to obtain a transformed representation X̄, then O = X̄ ⊙ (S_Iᵀ A), where ⊙ is the
- element-wise product.
346
- network branch, to compute the predicted chroma samples. InJOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020 5
347
- this branch, a final CNN is used to map the fused features from
348
- the first two branches as combined by means of the attention
349
- model into the final chroma prediction. The prediction head
350
- branch is defined by two convolutional layers, applying E-
351
- dimensional 33convolutional filters and then 2-dimensional
352
- 11filters for deriving the two chroma components at once.
353
- IV. M ULTI -MODEL ARCHITECTURES
354
- This section introduces a new multi-model architecture
355
- which improves the baseline attention-based approach (Section
356
- III, [4]). The main improvement comes from its block-size
357
- agnostic property as the proposed approach only requires one
358
- model for all block sizes. Furthermore, a range of simpli-
359
- fications is proposed with the aim to reduce the complex-
360
- ity of related attention-based architectures while preserving
361
- prediction performance as much as possible. The proposed
362
- simplifications include a framework for complexity reduction
363
- of the convolutional operations, a simplified cross-component
364
- boundary branch using sparse autoencoders and insights for
365
- fast and cost-effective implementations with integer precision
366
- approximations. Figure 3 illustrates the proposed multi-model
367
- attention-based schemes with the integration of the simplifica-
368
- tions described in this section.
369
- A. Multi-model size agnostic architecture
370
- In order to handle variable block sizes, previous NN-based
371
- chroma intra-prediction methods employ different architec-
372
- tures for blocks of different sizes. These architectures differ
373
- in the dimensionality of the networks, which depend on the given
374
- block size, as a trade-off between model complexity and
375
- prediction performance [2]. Given a network structure, the
376
- depth of the convolutional layers is the most predominant
377
- factor when dealing with variable input sizes. This means that
378
- increasingly complex architectures are needed for larger block
379
- sizes, in order to ensure proper generalisation for these blocks
380
- which have higher content variety. Such a factor significantly
381
- increases requirements for inference because of the number of
382
- multiple architectures.
383
- In order to streamline the inference process, this work
384
- proposes a novel multi-model architecture that is independent
385
- of the input block size. Theoretically, a convolutional filter
386
- can be applied over any input space. Therefore, the fully-
387
- convolutional nature of the proposed architecture (1×1 kernels for the cross-component boundary branch and 3×3 kernels for the luma convolutional
- one) allows the design of a size-agnostic architecture. As shown in Figure 4, the same task can be achieved using multiple models with different input
- sizes sharing the weights, such that a unified set of filters can be used a posteriori, during inference. The given architecture
394
- must employ a number of parameters that is sufficiently large
395
- to ensure proper performance for larger blocks, but not too
396
- large to incur overfitting for smaller blocks.
397
- Figure 5 describes the algorithmic methodology employed to train the multi-model approach. As defined in Section III, the co-located luma block
- X₀ ∈ ℝ^(N×N) and the cross-component volume S₀ ∈ ℝ^(3×b) are considered as inputs to the chroma prediction network. Furthermore, for training of a
- Fig. 4. Illustration of the proposed multi-model training and inference methodologies. Multiple block-dependent models Φ_N(W^(t)) are used during
- training time. A size-agnostic model with a single set of trained weights W is then used during inference.
406
- Require: {X_m^(N), S_m^(N), Z_m^(N)}, m ∈ [0, M), N ∈ {4, 8, 16}
- Require: Φ_N(W^(t)): model for block size N with shared weights W^(t)
- Require: L_reg^(t): objective function at training step t
- 1:  t ← 0 (initialise timestep)
- 2:  while t not converged do
- 3:    for m ∈ [0, M) do
- 4:      for N ∈ {4, 8, 16} do
- 5:        t ← t + 1
- 6:        L_reg^(t) ← MSE(Z_m^(N), Φ_N(X_m^(N), S_m^(N); W^(t−1)))
- 7:        g^(t) ← ∇_W L_reg^(t) (get gradients at step t)
- 8:        W^(t) ← optimiser(g^(t))
- 9:      end for
- 10:   end for
- 11: end while
- Fig. 5. Training algorithm for the proposed multi-model architecture.
430
- multi-model the ground-truth is defined as Z_m^(N), for a given input {X_m^(N), S_m^(N)}, and the set of instances from a database of M samples or
- batches is defined as {X_m^(N), S_m^(N), Z_m^(N)}, where m = 0, 1, ..., M−1 and N ∈ {4, 8, 16} is the set of supported square block sizes N×N (the
- method can be extended to a different set of sizes). As shown in Figure 4, multiple block-dependent models Φ_N(W) with shared weights W are updated
- in a concurrent way following the order of supported block sizes. At training step t, the individual model Φ_N(W^(t)) is updated, obtaining a new set of
- weights W^(t+1). Finally, a single set of trained weights W is used during inference, obtaining a size-agnostic model Φ(W). Model parameters are
- updated by minimising the Mean Square Error (MSE) regression loss L_reg, as in:
- L_reg^(t) = (1 / (C·N²)) ‖Z_m^(N) − Φ_N(X_m^(N), S_m^(N); W^(t−1))‖²₂,   (2)
- where C = 2 refers to the number of predicted chroma components, and Φ_N(W^(t−1)) is the block-dependent model at training step t−1.
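- A PyTorch-flavoured sketch of this multi-model training loop is shown below; the data-loader interface, the model signature model(X, S) and the optimiser are assumptions for illustration only:
import torch

def train_multi_model(model, loader_by_size, optimiser, epochs=1):
    # `loader_by_size[N]` is assumed to yield (X, S, Z) batches for block size N;
    # the same shared-weight model is updated on 4x4, 8x8 and 16x16 blocks in turn.
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for batches in zip(*(loader_by_size[n] for n in (4, 8, 16))):
            for X, S, Z in batches:              # one optimisation step per block size
                optimiser.zero_grad()
                loss = mse(model(X, S), Z)       # matches eq. (2) up to batch averaging
                loss.backward()
                optimiser.step()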
460
- Fig. 6. Visualisation of the receptive field of a 2-layer convolutional branch with 3×3 kernels. Observe that an output pixel in layer 2 is computed by
- applying a 3×3 kernel over a field F₁ of 3×3 samples from the first layer's output space. Similarly, each of the F₁ values is computed by means of
- another 3×3 kernel looking at a field F₀ of 5×5 samples over the input.
465
- B. Simplified convolutions
466
- Convolutional layers are responsible for most of the net-
467
- work’s complexity. For instance, based on the network hyper-
468
- parameters from experiments in Section V, the luma convo-
469
- lutional branch and the prediction head branch (with 3×3 convolutional kernels) alone contain 46,882 out of 51,714
471
- parameters, which constitute more than 90% of the parameters
472
- in the entire model. Therefore, the model complexity can be
473
- significantly reduced if convolutional layers can be simpli-
474
- fied. This subsection explains how a new simplified structure
475
- beneficial for practical implementation can be devised by
476
- removing activation functions, i.e. by removing non-linearities.
477
- It is important to stress that such process is devised only for
478
- application on carefully selected layers, i.e. for branches where
479
- such simplification does not significantly reduce expected
480
- performance.
481
- Consider a specific two-layer convolutional branch (e.g. the luma convolutional branch from Figure 2) formulated as:
- Y = R(W₂ ∗ R(W₁ ∗ X + b₁) + b₂),   (3)
- where C_i is the number of features in layer i, b_i ∈ ℝ^(C_i) are biases, K_i×K_i are square convolutional kernel sizes, W₁ ∈ ℝ^(K₁²×C₀×C₁) and
- W₂ ∈ ℝ^(K₂²×C₁×C₂) are the weights of the first (i = 1) and the second (i = 2) layers, respectively, C₀ the dimensions of the input feature map, R is a
- Rectified Linear Unit (ReLU) non-linear activation function and ∗ denotes the convolution operation. Input to the branch is X ∈ ℝ^(N²×C₀) and the
- result is a volume of features Y ∈ ℝ^(N²×C₂), which correspond to X₀ and X₂ from Figure 2, respectively. Removing non-linearities, the given branch
- can be simplified as:
- Ŷ = W₂ ∗ (W₁ ∗ X + b₁) + b₂,   (4)
- where it can be observed that new convolution and bias terms can be defined using the trained parameters from the two initial layers, to form a new
- single layer:
- Ŷ = W_c ∗ X + b_c,   (5)
502
- Fig. 7. Visualisation of the learnt colour space resulting of encoding input
503
- YCbCr colours to the 3-dimensional hidden space of the autoencoder.
504
- where W_c ∈ ℝ^((K̂²·C₀)×C₂) is a function of W₁ and W₂ with K̂ = K₁ + K₂ − 1, and b_c is a constant vector derived from W₂, b₁ and b₂. Figure 6 (a)
- illustrates the operations performed in Eq. 4 for K₁ = K₂ = 3 and C = 1. Analysing the receptive field of the whole branch, a pixel within the output
- volume Y is computed by applying a K₂×K₂ kernel over a field F₁ from the first layer's output space. Similarly, each of the F₁ values is computed by
- means of another K₁×K₁ kernel looking at a field F₀. Without the non-linearities, an equivalent of this process is obtained in a single step, as in
- Figure 6 (b) and Eq. 5. Notice that K̂ = K₁ + K₂ − 1 equals 5 in the example in Figure 6. For a variety of parameters, including the values of C₀, C_i and
- K_i used in [4] and in this paper, this simplification of concatenated convolutional layers allows a reduction of the model's parameters at inference
- time, which will be shown in Section V-C.
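- The merging of two linear convolutions into one can be verified numerically; the sketch below uses single-channel 2-D convolutions with 'full' padding for simplicity, which differs from the multi-channel, 'same'-padded layers in the actual network:
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
k1 = rng.standard_normal((3, 3))     # first-layer kernel, K1 = 3
k2 = rng.standard_normal((3, 3))     # second-layer kernel, K2 = 3

# Two linear convolutions applied in sequence ...
two_step = convolve2d(convolve2d(x, k1, mode="full"), k2, mode="full")
# ... equal a single convolution with the merged kernel of size K1 + K2 - 1 = 5.
k_c = convolve2d(k1, k2, mode="full")
one_step = convolve2d(x, k_c, mode="full")
print(k_c.shape)                      # (5, 5)
print(np.allclose(two_step, one_step))  # True
# With biases, b_c = b1 * k2.sum() + b2 for interior samples; border handling
# depends on the padding scheme actually used by the implementation.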
519
- Finally, it should be noted that we limit the removal of
520
- activation functions only to branches which include more than
521
- one layer, from which at least one layer has Ki>1, and only
522
- the activation functions between layers in the same branch are
523
- removed (to be able to merge them as in Equation 5). In such
524
- branches with at least one Ki>1the number of parameters
525
- is typically very high, while the removal of non-linearities
526
- typically does not impact prediction performance. Activation
527
- functions are not removed from the remaining layers. It should
528
- be noted that in the attention module and at the intersections
529
- of various branches the activation functions are critical and
530
- therefore are left unchanged. Section V-C performs an ablation
531
- test to evaluate the effect of removing the non-linearities, and
532
- a test to evaluate how a convolutional branch directly trained with large-support kernels K̂ would perform.
534
- C. Simplified cross-component boundary branch
535
- In the baseline model, the cross-component boundary branch transforms the boundary inputs S ∈ ℝ^(3×b) into D_J-dimensional feature vectors. More
- specifically, after applying J = 2 consecutive 1×1 convolutional layers, the branch encodes each boundary colour into a high-dimensional feature space.
- It should be noted that a colour is typically represented by 3 components, indexed within a system of coordinates (referred to as the colour space). As
- such, a three-dimensional feature space can be considered as the space with minimum dimensionality that is still capable of representing colour
- information. Therefore, this work proposes the use of autoencoders (AE) to reduce the complexity of the cross-component boundary branch, by
- compacting the D-dimensional feature space into a reduced, 3-dimensional space. An AE tries to learn an approximation to the identity function
- h(x) ≈ x such that the reconstructed output x̂ is as close as possible to the input x. The hidden layer will have a reduced dimensionality with respect to
- the input, which also means that the transformation process may introduce some distortion, i.e. the reconstructed output will not be identical to the input.
555
- An AE consists of two networks, the encoder f which maps the input to the hidden features, and the decoder g which reconstructs the input from the
- hidden features. Applying this
558
- concept, a compressed representation of the input can be
559
- obtained by using the encoder part alone, with the goal of
560
- reducing the dimensionality of the input vectors. The encoder
561
- network automatically learns how to reduce the dimensions
562
- of the input vectors, in a similar fashion to what could be
563
- obtained applying a manual Principal Component Analysis
564
- (PCA) transformation. The transformation learned by the AE
565
- can be trained using the same loss function that is used in the
566
- PCA process [25]. Figure 7 shows the mapping function of
567
- the resulting colour space when applying the encoder network
568
- over the YCbCr colour space.
569
- Overall, the proposed simplified cross-component boundary branch consists of two 1×1 convolutional layers using Leaky ReLU activation functions
- with a slope α = 0.2. First, a D-dimensional layer is applied over the boundary inputs S to obtain S₁ ∈ ℝ^(D×b) feature maps. Then, S₁ is fed to the AE's
- encoder layer f with output dimension 3, to obtain the hidden feature maps S₂ ∈ ℝ^(3×b). Finally, a third 1×1 convolutional layer (corresponding to
- the AE decoder layer g) is applied to generate the reconstructed maps S̃₁ with D dimensions. Notice that the decoder layer is only necessary during
- the training stage to obtain the reconstructed inputs necessary to derive the values of the loss function. Only the encoder layer is needed when using
- the network, in order to transform the input feature vectors into the 3-dimensional, reduced vectors. Figure 3 illustrates the branch architecture and its
- integration within the simplified multi-model.
585
- Finally, in order to interpret the behaviour of the branch and to identify prediction patterns, a sparsity constraint can be imposed on the loss function.
- Formally, the following can be used:
- L_AE = (λ_r / (D·b)) ‖S₁ − S̃₁‖²₂ + (λ_s / (3b)) ‖S₂‖₁,   (6)
- where the right-most term is used to keep the activation functions in the hidden space inactive most of the time, so that they only return non-zero values
- for the most descriptive samples. In order to evaluate the effect of the sparsity term, Section V-C performs an ablation test that shows its positive
- regularisation properties during training.
599
- The objective function in Equation 2 can be updated such that the global multi-model loss L considers both L_reg and L_AE as:
- L = λ_reg·L_reg + λ_AE·L_AE,   (7)
- where λ_reg and λ_AE control the contribution of both losses.
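- A PyTorch sketch of such a boundary branch with a 3-dimensional AE bottleneck and the loss of Eq. (6) is given below; the feature dimension D, the λ values and the use of an activation on the decoder are assumptions:
import torch
import torch.nn as nn

class BoundaryBranchAE(nn.Module):
    def __init__(self, d=32):
        super().__init__()
        self.embed = nn.Conv1d(3, d, kernel_size=1)     # D-dimensional features S1
        self.encoder = nn.Conv1d(d, 3, kernel_size=1)   # hidden maps S2
        self.decoder = nn.Conv1d(3, d, kernel_size=1)   # reconstruction S1~ (training only)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, s0):                              # s0: (batch, 3, b)
        s1 = self.act(self.embed(s0))
        s2 = self.act(self.encoder(s1))
        s1_rec = self.act(self.decoder(s2))
        return s1, s2, s1_rec

def ae_loss(s1, s2, s1_rec, lam_r=1.0, lam_s=0.1):
    # Eq. (6): reconstruction term plus L1 sparsity on the hidden maps.
    d, b = s1.shape[1], s1.shape[2]
    rec = lam_r / (d * b) * (s1 - s1_rec).pow(2).sum(dim=(1, 2)).mean()
    sparsity = lam_s / (3 * b) * s2.abs().sum(dim=(1, 2)).mean()
    return rec + sparsity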
603
- D. Integer precision approximation
604
- While the training algorithm results in IEEE-754 64-bit
605
- floating point weights and prediction buffers, an additional
606
- simplification is proposed in this paper whereby the network
607
- weights and prediction buffers are represented using fixed-
608
- point integer arithmetic. This is beneficial for deployment of
609
- resulting multi-models in efficient hardware implementations,
610
- in which complex operations such as Leaky ReLU and softmax activation functions can become serious bottlenecks. All the network weights obtained
- after the training stage are therefore appropriately quantised to fit 32-bit signed integer values. It
614
- should be noted that integer approximation introduces quanti-
615
- sation errors, which may have an impact on the performance
616
- of the overall predictions.
617
- In order to prevent arithmetic overflows after performing
618
- multiplications or additions, appropriate scaling factors are
619
- defined for each layer during each of the network predic-
620
- tion steps. To further reduce the complexity of the integer
621
- approximation, the scaling factor K_l for a given layer l is
- obtained as a power of 2, namely K_l = 2^O_l, where O_l is the
- respective precision offset. This ensures that multiplications
- can be performed by means of simple binary shifts. Formally,
- the integer weights ~W_l and biases ~b_l for each layer l in the
- network with weights W_l and bias b_l can be obtained as:
- ~W_l = floor(W_l * 2^O_l),   ~b_l = floor(b_l * 2^O_l)    (8)
- The offset O_l depends on the offset used on the previous layer
- O_(l-1), as well as on an internal offset O_x necessary to preserve
- as much decimal information as possible, compensating for
- the quantisation that occurred in the previous layer, namely
- O_l = O_x - O_(l-1).
633
- Furthermore, in this approach the values predicted by the
634
- network are also integers. In order to avoid defining large
635
- internal offsets at each layer, namely having large values of
636
- Ox, an additional stage of compensation is applied to the
637
- predicted values, to keep their values in the range of 32-bit
638
- signed integers. For this purpose, another offset O_y is defined,
- computed as O_y = O_x - O_l. The values generated by layer l
- are then computed as:
- Y_l = ((~W_l^T * X_l + ~b_l) + (1 << (O_y - 1))) >> O_y    (9)
- where << and >> represent the left and right binary shifts,
- respectively, and the offset (1 << (O_y - 1)) is considered to
- reduce the rounding error.
646
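A minimal sketch of the quantisation of Equation (8) and the integer layer evaluation of Equation (9); the use of 64-bit accumulators and the shape conventions are assumptions of this sketch.

import numpy as np

def quantise_layer(W, b, O_l):
    # Equation (8): floor the weights and biases scaled by 2^O_l
    W_q = np.floor(W * (1 << O_l)).astype(np.int64)
    b_q = np.floor(b * (1 << O_l)).astype(np.int64)
    return W_q, b_q

def integer_layer(X_q, W_q, b_q, O_y):
    # X_q: (in_dim, b) integer inputs, W_q: (in_dim, out_dim), b_q: (out_dim,)
    # Equation (9): integer accumulation, rounding offset, arithmetic right shift
    acc = W_q.T @ X_q + b_q[:, None]
    return (acc + (1 << (O_y - 1))) >> O_y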
- Complex operations requiring floating point divisions need
- to be approximated to integer precision. The Leaky ReLU
- activation functions applied on the cross-component boundary
- branch use a slope of 0.2 which multiplies the negative
- values. Such an operation can be simply approximated by
- defining a new activation function ~A(x) for any input x as
- follows:
- ~A(x) = x             if x >= 0
- ~A(x) = (26*x) >> 7   if x < 0    (10)
656
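In code the approximation of Equation (10) is a one-liner, since 26/128 = 0.203125 is close to the 0.2 slope:

def leaky_relu_int(x):
    # Equation (10): the arithmetic right shift realises the 0.2 slope as 26/128
    return x if x >= 0 else (26 * x) >> 7

# e.g. leaky_relu_int(-128) == -26 and leaky_relu_int(50) == 50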
- Conversely, the softmax operations used in the attention
- module are approximated following a more complex method-
- ology, similar to the one used in [26]. Consider the matrix M
- as defined in Equation 1, a given row j in M, and a vector
- m_j as input to the softmax operation. First, all elements m_j
- in a row are subtracted by the maximum element in the row,
- namely:
- ^m_(i,j) = m_(i,j)/T - max_i(m_(i,j)/T)    (11)
- where T is the temperature of the softmax operation, set to 0.5
- as previously mentioned. The transformed elements ^m_(i,j) range
- between the minimum signed integer value and zero, because
- the arguments ^m_(i,j) are obtained by subtracting the elements
- in M by the maximum element in each row. To further reduce
- the possibility of overflows, this range is further clipped to a
- minimum negative value, set to a pre-determined number V_e, so
- that any ^m_(i,j) < V_e is set equal to V_e.
672
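A small sketch of this preparation step; writing the division by T as a floating point operation followed by rounding is an assumption of the sketch, the text only defines the transformation itself.

def transform_row(m_row, V_e, T=0.5):
    # Equation (11) plus clipping: scale by 1/T, subtract the row maximum,
    # then clip the non-positive results at the negative bound V_e
    scaled = [int(round(m / T)) for m in m_row]
    top = max(scaled)
    return [max(s - top, V_e) for s in scaled]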
- The elements ^m_(i,j) are negative integer numbers within the
- range [V_e, 0], meaning there is a fixed number of N_e = |V_e| + 1
- possible values they can assume. To further simplify the
- process, such an exponential function is replaced by a pre-
- computed look-up table containing N_e integer elements. To
- minimise the approximation error, the exponentials are scaled
- by a given scaling factor before being approximated to the
- nearest integer and stored in the corresponding look-up table
- LUT-EXP. Formally, for a given index k, where 0 <= k <=
- N_e - 1, the k-th integer input is obtained as s_k = V_e + k. The
- k-th element in the look-up table can then be computed as the
- approximated, scaled exponential value for s_k, or:
- LUT-EXP(k) = floor(K_e * exp(s_k))    (12)
- where K_e = 2^O_e is the scaling factor, chosen in a way to
- maximise the preservation of the original decimal information.
- When using the look-up table during the prediction process,
- given an element ^m_(i,j) the corresponding index k can be
- retrieved as k = |V_e - ^m_(i,j)|, to produce the numerator
- in the softmax function.
691
- The integer approximation of the softmax function can then
- be written as:
- ^alpha_(j,i) = LUT-EXP(|V_e - ^m_(i,j)|) / D(j)    (13)
- where:
- D(j) = sum_(n=0..b-1) LUT-EXP(|V_e - ^m_(n,j)|)    (14)
698
- Equation 13 implies performing an integer division between
699
- the numerator and denominator. This is not ideal, and integer
700
- divisions are typically avoided in low complexity encoder
701
- implementations. A simple solution to remove the integer
702
- division can be obtained by replacing it with a binary shift.
703
- However, a different approach is proposed in this paper to
704
- provide a more robust approximation that introduces smaller
705
- errors in the division. The denominator D(j)as in Equation 14
706
- is obtained as the sum of bvalues extracted from LUT-EXP ,
707
- wherebis the number of reference samples extracted from
708
- the boundary of the block. As such, the largest blocks under
709
- consideration ( 1616) will result in the largest possible value
710
- of reference samples bMAX . This means that the maximumvalue that this denominator can assume is obtained when
711
- b=bMAX and when all input ^mi;j= 0 (which correspond
712
- toLUT-EXP (jVej) =Ke), corresponding to Vs=bMAXKe.
713
- Similarly, the minimum value (obtained when ^mi;j=Ve) is0.
714
- Correspondingly, D(j), can assume any positive integer value
715
- in the range [0;Vs].
716
- Considering a given scaling factor Ks= 2Os, integer divi-
717
- sion byD(j)can be approximated using a multiplication by
718
- the factorM(j) =bKs=D(j)c. A given value of M(j)could
719
- be computed for all Vs+1possible values of D(j). Such values
720
- can then be stored in another look-up table LUT-SUM . Clearly
721
- though,Vsis too large which means LUT-SUM would be
722
- impractical to use due to storage and complexity constraints.
723
- For that reason, a smaller table is used, obtained by quantising
724
- the possible values of D(j). A pre-defined step Qis used,
725
- resulting in Ns= (Vs+ 1)=Qquantised values of D(j). The
726
- table LUT-SUM of sizeNsis then filled accordingly, where
727
- each element in the table is obtained as:
728
- LUT-SUM (l) =bKs=(lQ)c (15)
729
- Finally, when using the table during the prediction process,
- given an integer sum D(j), the corresponding index l can
- be retrieved as l = floor(D(j)/Q). Following from these
- simplifications, given an input ^m_(i,j) obtained as in Equation 11,
- the integer sum D(j) obtained from Equation 14, and a
- quantisation step Q, the simplified integer approximation of
- the softmax function can eventually be obtained as:
- ~alpha_(j,i) = LUT-EXP(|V_e - ^m_(i,j)|) * LUT-SUM(floor(D(j)/Q))    (16)
- Notice that the ~alpha_(j,i) values are finally scaled by K_o = K_e * K_s.
738
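Putting Equations (12) to (16) together, a possible pre-computation and use of the two tables is sketched below. The guard entry for a zero denominator and the example parameter values are assumptions; the printed values are only meant to be roughly proportional to the true softmax of the row.

import math

def build_luts(V_e, K_e, K_s, Q, b_max):
    # LUT-EXP, Equation (12): scaled exponentials of the inputs s_k = V_e + k
    lut_exp = [math.floor(K_e * math.exp(V_e + k)) for k in range(abs(V_e) + 1)]
    # LUT-SUM, Equation (15): quantised reciprocals of the denominator D(j)
    V_s = b_max * K_e
    N_s = (V_s + 1) // Q
    lut_sum = [0] + [K_s // (l * Q) for l in range(1, N_s + 1)]
    return lut_exp, lut_sum

def integer_softmax_row(m_hat_row, lut_exp, lut_sum, V_e, Q):
    # m_hat_row holds the clipped values in [V_e, 0] from Equation (11)
    idx = [abs(V_e - m) for m in m_hat_row]     # table indices
    D = sum(lut_exp[i] for i in idx)            # Equation (14)
    mult = lut_sum[D // Q]                      # multiplication replaces 1/D(j)
    return [lut_exp[i] * mult for i in idx]     # Equation (16), integer scaled

# assumed parameters: V_e = -10, K_e = 2^10, K_s = 2^20, Q = 64, b_max = 64
lut_exp, lut_sum = build_luts(-10, 1 << 10, 1 << 20, 64, 64)
print(integer_softmax_row([0, -1, -10], lut_exp, lut_sum, -10, 64))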
- V. EXPERIMENTS
739
- A. Training settings
740
- Training examples were extracted from the DIV2K dataset
741
- [27], which contains high-definition high-resolution content of
742
- large diversity. This database contains 800 training samples
743
- and 100 samples for validation, providing 6 lower resolution
- versions with downsampling by factors of 2, 3 and 4 with
- bilinear and unknown filters. For each data instance, one
- resolution was randomly selected and then M blocks of each
- N×N size (N = 4, 8, 16) were chosen, making balanced sets
- between block sizes and uniform spatial selections within
- each image. Moreover, 4:2:0 chroma sub-sampling is assumed,
- where the same downsampling filters implemented in VVC are
- used to downsample co-located luma blocks to the size of the
- corresponding chroma block. All the schemes were trained
- from scratch using the Adam optimiser [28] with a learning
- rate of 10^-4.
755
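A minimal sketch of such a block-extraction step, assuming 4:2:0 content so that the co-located luma area is twice the chroma block size in each direction; array layouts and names are purely illustrative.

import numpy as np

def sample_training_blocks(luma, chroma, blocks_per_size, sizes=(4, 8, 16), seed=0):
    rng = np.random.default_rng(seed)
    samples = []
    h, w = chroma.shape
    for n in sizes:
        for _ in range(blocks_per_size):
            y = int(rng.integers(0, h - n + 1))
            x = int(rng.integers(0, w - n + 1))
            chroma_block = chroma[y:y + n, x:x + n]
            luma_area = luma[2 * y:2 * (y + n), 2 * x:2 * (x + n)]  # 2N x 2N
            samples.append((n, luma_area, chroma_block))
    return samples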
- B. Integration into VVC
756
- The methods introduced in the paper were integrated
757
- within a VVC encoder, using the VVC Test Model (VTM)
758
- 7.0 [29]. The integration of the proposed NN-based cross-
759
- component prediction into the VVC coding scheme requires
760
- normative changes not only in the prediction process, but also
761
- in the way the chroma intra-prediction mode is signalled in
762
- the bitstream and parsed by the decoder.
763
- TABLE I
- NETWORK HYPERPARAMETERS DURING TRAINING
- Branch (Cin; K×K; Cout) | Scheme 1 & 3                      | Scheme 2
- CC Boundary             | 3;1×1;32  32;1×1;32               | 3;1×1;32  32;1×1;3
- Luma Convolutional      | 1;3×3;64  64;3×3;64               | 1;3×3;64  64;3×3;64
- Attention Module        | 32;1×1;16  64;1×1;16  64;1×1;32   | 32;1×1;16  64;1×1;16  64;1×1;3
- Prediction Head         | 32;3×3;32  32;1×1;2               | 3;3×3;3  3;1×1;2
- TABLE II
- NETWORK HYPERPARAMETERS DURING INFERENCE
- Branch (Cin; K×K; Cout) | Scheme 1 & 3                      | Scheme 2
- CC Boundary             | 3;1×1;32  32;1×1;32               | 3;1×1;32  32;1×1;3
- Luma Convolutional      | 1;5×5;64                          | 1;5×5;64
- Attention Module        | 32;1×1;16  64;1×1;16  64;1×1;32   | 32;1×1;16  64;1×1;16  64;1×1;3
- Prediction Head         | 32;3×3;2                          | 3;3×3;2
793
- A new block-level syntax flag is introduced to indicate
794
- whether a given block makes use of one of the proposed
795
- schemes. If the proposed NN-based method is used, a pre-
796
- diction is computed for the two chroma components. No
797
- additional information is signalled related to the chroma intra-
798
- prediction mode for the block. Conversely, if the method
799
- is not used, the encoder proceeds in signalling the chroma
800
- intra-prediction mode as in conventional VVC specifications.
801
- For instance, a subsequent flag is signalled to identify if
802
- conventional LM modes are used in the current block or not.
803
- The prediction path also needs to accommodate the new NN-
804
- based predictions. This largely reuses prediction blocks that
805
- are needed to perform conventional CCLM modes. In terms
806
- of mode selection at the encoder side, the new NN-based mode
807
- is added to the conventional list of modes to be tested in full
808
- rate-distortion sense.
809
- C. Architecture configurations
810
- The proposed multi-model architectures and simplifications
811
- (Section IV) are implemented in 3 different schemes:
812
- Scheme 1: Multi-model architecture (Section IV-A) ap-
813
- plying the methodology in Section IV-B to simplify the
814
- convolutional layers within the luma convolutional branch
815
- and the prediction branch, as illustrated in Figure 3.
816
- Scheme 2: The multi-model architecture in Scheme 1
817
- applying the methodology in Section IV-C to simplify
818
- the cross-component boundary branch. As shown in Fig-
819
- ure 3, the integration of the simplified branch requires
820
- modification of the initial architecture with changes in
821
- the attention module and the prediction branch.
822
- Scheme 3: Architecture in Scheme 1 with the integer
823
- precision approximations described in Section IV-D.
824
- In contrast to previous state-of-the-art methods, the pro-
825
- posed multi-model does not need to adapt its architecture to the input block size. Notice that the fully-convolutional
826
- architecture introduced in [4] enables this design and is able
827
- to significantly reduce the complexity of the cross-component
828
- boundary branch in [2], which uses size-dependent fully-
829
- connected layers. Table I shows the network hyperparameters
830
- of the proposed schemes during training, whereas Table II
831
- shows the resulting hyperparameters for inference after ap-
832
- plying the proposed simplifications. As shown in Tables III
833
- and IV, the employed number of parameters in the proposed
834
- schemes represents the trade-off between complexity and
835
- prediction performance, within the order of magnitude of
836
- related attention-based CNNs in [4]. The proposed simplifi-
837
- cations significantly reduce (around 90%) the original training
838
- parameters, achieving lighter architectures for inference time.
839
- Table III shows that the inference version of Scheme 2 reduces
- the complexity of the hybrid CNN models in [2] by around
- 85%, 96% and 99%, and the complexity of the attention-based
- models in [4] by around 82%, 96% and 98%, for 4×4, 8×8
- and 16×16 input block sizes, respectively. Finally, in order
844
- to provide more insights about the computational cost and
845
- compare the proposed schemes with the state-of-the-art meth-
846
- ods, Table V shows the number of floating point operations
847
- (FLOPs) for each architecture per block size. The reduction of
848
- operations (e.g. additions and matrix multiplications) needed to arrive
- at the predictions is one of the predominant factors towards the
850
- given speedups. Notice the significant reduction of FLOPs for
851
- the proposed inference models.
852
- In order to obtain a preliminary evaluation of the proposed
853
- schemes and to compare their prediction performance with the
854
- state-of-the-art methods, the trained models were tested on the
855
- DIV2K validation set (with 100 multi-resolution images) by
856
- means of averaged PSNR. Test samples were obtained with
857
- the same methodology as used in Section V-A for generating
858
- the training dataset. Notice that this test uses the training
859
- version of the proposed schemes. As shown in Table IV, the
860
- multi-model approach introduced in Scheme 1 improves the
861
- attention-based CNNs in [4] for 4×4 and 8×8 blocks, while
- only a small performance drop can be observed for 16×16
863
- blocks. However, because of using a fixed architecture for all
864
- block sizes, the proposed multi-model architecture averages
865
- the complexity of the individual models in [4] (Table III),
866
- slightly increasing the complexity of the 4×4 model and
- simplifying the 16×16 architecture. The complexity reduction
- in the 16×16 model leads to a small drop in performance. As
869
- can be observed from Table IV , the generalisation process
870
- induced by the multi-model methodology ([4] with multi-
871
- model, compared to [4]) can minimise such drop by distilling
872
- knowledge from the rest of block sizes, which is especially
873
- evident for 8×8 blocks where a reduced architecture can
874
- improve the state-of-the-art performance.
875
- Finally, the simplifications introduced in Scheme 2 (e.g.
876
- the architecture changes required to integrate the modified
877
- cross-component boundary branch within the original model)
878
- lower the prediction performance of Scheme 1. However,
879
- the highly simplified architecture is capable of outperforming
880
- the hybrid CNN models in [2], observing training PSNR
881
- improvements of an additional 1.30, 2.21 and 2.31 dB for
882
- 4×4, 8×8 and 16×16 input block sizes, respectively. The
883
- TABLE III
- MODEL COMPLEXITY PER BLOCK SIZE
- Model (parameters)             | 4×4    | 8×8    | 16×16
- Hybrid CNN [2]                 | 24435  | 96116  | 369222
- Attention-based CNN [4]        | 21602  | 83106  | 186146
- Scheme 1 & 3 (train/inference) | 51714 / 7074 (all block sizes)
- Scheme 2 (train/inference)     | 39371 / 3710 (all block sizes)
890
- TABLE IV
- PREDICTION PERFORMANCE PER BLOCK SIZE
- Model (PSNR)                   | 4×4   | 8×8   | 16×16
- Hybrid CNN [2]                 | 28.61 | 31.47 | 33.36
- Attention-based CNN [4]        | 30.23 | 33.13 | 36.13
- [4] with multi-model           | 30.55 | 33.21 | 36.05
- Scheme 1 single layer training | 30.36 | 33.05 | 35.88
- Scheme 2 without sparsity      | 29.89 | 32.66 | 35.64
- (proposed) Scheme 1            | 30.54 | 33.20 | 35.99
- (proposed) Scheme 2            | 29.91 | 32.68 | 35.67
900
- combination of attention-based architectures with the proposed
901
- multi-model methodology (Scheme 1) considerably improves
902
- the NN-based chroma intra-prediction methods in [2], showing
903
- training PSNR improvements by additional 1.93, 1.73 and
904
- 2.68 dB for the supported block sizes. In Section V-D it will
905
- be shown how these relatively small PSNR differences lead to
906
- significant differences in codec performance.
907
- Several ablations were performed in order to evaluate the
908
- effects of the proposed simplifications. First, the effect of
909
- the multi-model methodology is evaluated by directly con-
910
- verting the models in [4] to the size-agnostic architecture in
911
- Scheme 1 but without the simplifications in Section IV-B
912
- ([4] with multi-model). As can be seen in Table IV, such
- methodology improves the 4×4 and 8×8 models, with
- special emphasis on the 8×8 case where the number of
915
- parameters is smaller than in [4]. Moreover, the removal of
916
- non-linearities towards Scheme 1 does not significantly affect
917
- the performance, with a negligible PSNR loss of around 0.3
918
- dB ([4] with multi-model compared with Scheme 1). Secondly,
919
- in order to evaluate the simplified convolutions methodology
920
- in Section IV-B, a version of Scheme 1 was trained with
921
- single-layer convolutional branches with large support kernels
922
- (e.g. instead of training 2 linear layers with 3×3 kernels and
- then combining them into 5×5 kernels for inference, training
- directly a single-layer branch with 5×5 kernels). Experimental
925
- results show the positive effects of the proposed methodology,
926
- observing a significant drop of performance when a single-
927
- layer trained branch is applied (Scheme 1 with single layer
928
- training compared with Scheme 1). Finally, the effect of the
929
- sparse autoencoder of Scheme 2 is evaluated by removing
930
- the sparsity term in Equation 7. As can be observed, the
931
- regularisation properties of the sparsity term, i.e. preventing
- large activations, boost the generalisation capabilities of the
- multi-model and slightly increase the prediction performance
- by around 0.2 dB (Scheme 2 without sparsity compared with
- Scheme 2).
- TABLE V
- FLOPS PER BLOCK SIZE
- Model (FLOPs)                  | 4×4    | 8×8    | 16×16
- Hybrid CNN [2]                 | 51465  | 187273 | 711945
- Attention-based CNN [4]        | 42795  | 165451 | 186146
- Scheme 1 & 3 (train/inference) | 102859 / 13770 (all block sizes)
- Scheme 2 (train/inference)     | 79103 / 7225 (all block sizes)
942
- D. Simulation Results
943
- The VVC reference software VTM-7.0 is used as our
944
- benchmark and our proposed methodology is tested under the
945
- Common Test Conditions (CTC) [30], using the suggested all-
946
- intra configuration for VVC with a QP of 22, 27, 32 and 37. In
947
- order to fully evaluate the performance of the proposed multi-
948
- models, the encoder configuration is constrained to support
949
- only square blocks of 4×4, 8×8 and 16×16 pixels.
950
- A corresponding VVC anchor was generated under these
951
- conditions. BD-rate is adopted to evaluate the relative com-
952
- pression efficiency with respect to the latest VVC anchor. Test
953
- sequences include 26 video sequences of different resolutions:
954
- 3840×2160 (Class A1 and A2), 1920×1080 (Class B),
- 832×480 (Class C), 416×240 (Class D), 1280×720 (Class
956
- E) and screen content (Class F). The “EncT” and “DecT” are
957
- “Encoding Time” and “Decoding Time”, respectively.
958
- A colour analysis is performed in order to evaluate the
959
- impact of the chroma channels on the final prediction per-
960
- formance. As suggested in previous colour prediction works
961
- [31], standard regression methods for chroma prediction may
962
- not be effective for content with wide distributions of colours.
963
- A parametric model which is trained to minimise the Euclidean
964
- distance between the estimations and the ground truth com-
965
- monly tends to average the colours of the training examples
966
- and hence produce desaturated results. As shown in Figure 8,
967
- several CTC sequences are analysed by computing the loga-
968
- rithmic histogram of both chroma components. The width of
969
- the logarithmic histograms is compared to the compression
970
- performance in Table VI. Gini index [32] is used to quantify
971
- the width of the histograms, obtained as
972
- Gini(H) = 1 - sum_(b=0..B-1) ( H(b) / sum_(k=0..B-1) H(k) )^2    (17)
- being H a histogram of B bins for a given chroma component.
978
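Equation (17) amounts to one minus the sum of squared bin probabilities; a short sketch with toy histograms (rather than real chroma data) illustrates that a concentrated histogram scores lower than a uniform one.

import numpy as np

def gini_index(hist):
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    return 1.0 - float(np.sum(p ** 2))   # Equation (17)

print(gini_index([0, 90, 10, 0]), gini_index([25, 25, 25, 25]))  # roughly 0.18 and 0.75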
- Notice that the average value between both chroma compo-
979
- nents is used in Table VI. A direct correlation between Gini
980
- index and coding performance can be observed in Table VI,
981
- suggesting that Scheme 1 performs better for narrower colour
982
- distributions. For instance, the Tango 2 sequence with a Gini
983
- index of 0.63 achieves average Y/Cb/Cr BD-rates of -
984
- 0.46%/-8.13%/-3.13%, whereas Campfire with wide colour
985
- histograms (Gini index of 0.98), obtains average Y/Cb/Cr
986
- BD-rates of -0.21%/0.14%/-0.88%. Although the distributions
987
- of chroma channels can be a reliable indicator of prediction
988
- performance, wide colour distributions may not be the only
989
- factor in restricting chroma prediction capabilities of proposed
990
- methods, which can be investigated in future work.
991
- A summary of the component-wise BD-rate results for
992
- all the proposed schemes and the related attention-based
993
- Fig. 8. Comparison of logarithmic colour histograms for different sequences.
994
- TABLE VI
- BD-RATES (%) SORTED BY GINI INDEX
- Sequence      | Scheme 1: Y | Cb    | Cr    | Gini
- Tango2        | -0.46       | -8.13 | -3.13 | 0.63
- MarketPlace   | -0.59       | -2.46 | -3.06 | 0.77
- FoodMarket4   | -0.16       | -1.60 | -1.55 | 0.85
- DaylightRoad2 | -0.09       | -5.74 | -1.85 | 0.89
- Campfire      | -0.21       |  0.14 | -0.88 | 0.98
- ParkRunning3  | -0.31       | -0.73 | -0.77 | 0.99
1003
- approach in [4] is shown in Table VII for all-intra conditions.
1004
- Scheme 1 achieves average Y/Cb/Cr BD-rates of -0.25%/-
1005
- 2.38%/-1.80% compared with the anchor, suggesting that the
1006
- proposed multi-model size agnostic methodology can improve
1007
- the coding performance of the related attention-based block-
1008
- dependent models. Besides improving the coding performance,
1009
- Scheme 1 significantly reduces the encoding (from 212% to
1010
- 164%) and decoding (from 2163% to 1302%) times demon-
1011
- strating the positive effect of the inference simplification.
1012
- Finally, the proposed simplifications introduced in Scheme
1013
- 2 and Scheme 3 further reduce the encoding and decoding time
1014
- at the cost of a drop in the coding performance. In particular,
1015
- the simplified cross-component boundary branch introduced
1016
- in Scheme 2, achieves average Y/Cb/Cr BD-rates of -
1017
- 0.13%/-1.56%/-1.63% and, compared to Scheme 1, reduces the
1018
- encoding (from 164% to 146%) and decoding (from 1302%
1019
- to 665%) times. Scheme 3 has lower reduction of encoding
1020
- time (154%) than Scheme 2, but it achieves higher reduction
1021
- in decoding time (665%), although the integer approximations
1022
- lower the performance, achieving average Y/Cb/Cr BD-rates
1023
- of -0.16%/-1.72%/-1.38%.
1024
- As described in Section IV, the simplified schemes introduced here tackle the complexity reduction of Scheme 1
1025
- with two different methodologies. Scheme 2 proposes direct
1026
- modifications on the original architecture which need to be
1027
- retrained before being integrated in the prediction pipeline.
1028
- Conversely, Scheme 3 directly simplifies the final prediction
1029
- process by approximating the already trained weights from
1030
- Scheme 1 with integer-precision arithmetic. Therefore, the
1031
- simulation results suggest that the methodology in Scheme
1032
- 3 is better at retaining the original performance since a
1033
- retraining process is not required. However, the highly reduced
1034
- architecture in Scheme 2 is capable of approximating the
1035
- performance of Scheme 3 and further reduce the decoder time.
1036
- Overall, the comparison results in Table VII demonstrate
1037
- that proposed models offer various trade-offs between com-
1038
- pression performance and complexity. While it has been shown
1039
- that the complexity can be significantly reduced, it is still not
1040
- negligible. Challenges for future work include integerisation
1041
- of the simplified scheme (Scheme 2) while preventing the
1042
- compression drop observed for Scheme 3. Recent approaches,
1043
- including a published one which focuses on intra prediction
1044
- [24], demonstrate that sophisticated integerisation approaches
1045
- can help retain compression performance of originally trained
1046
- models while enabling them to become significantly less com-
1047
- plex and thus be integrated into future video coding standards.
1048
- VI. CONCLUSION
1049
- This paper showcased the effectiveness of attention-based
1050
- architectures in performing chroma intra-prediction for video
1051
- coding. A novel size-agnostic multi-model and its corre-
1052
- sponding training methodology were proposed to reduce the
1053
- inference complexity of previous attention-based approaches.
1054
- Moreover, the proposed multi-model was proven to better
1055
- generalise to variable input sizes, outperforming state-of-the-
1056
- art attention-based models with a fixed and much simpler
1057
- architecture. Several simplifications were proposed to further
1058
- reduce the complexity of the original multi-model. First,
1059
- a framework for reducing the complexity of convolutional
1060
- operations was introduced and was able to derive an infer-
1061
- ence model with around 90% fewer parameters than its
- corresponding training version. Furthermore, sparse autoencoders were
1063
- applied to design a simplified cross-component processing
1064
- model capable of further reducing the coding complexity
1065
- of its preceding schemes. Finally, algorithmic insights were
1066
- proposed to approximate the multi-model schemes in integer-
1067
- precision arithmetic, which could lead to fast and hardware-
1068
- aware implementations of complex operations such as softmax
1069
- and Leaky ReLU activations.
1070
- The proposed schemes were integrated into the VVC an-
1071
- chor VTM-7.0, signalling the prediction methodology as a
1072
- new chroma intra-prediction mode working in parallel with
1073
- traditional modes towards predicting the chroma component
1074
- samples. Experimental results show the effectiveness of the
1075
- proposed methods, retaining compression efficiency of pre-
1076
- viously introduced neural network models, while offering 2
1077
- different directions for significantly reducing coding complex-
1078
- ity, translated to reduced encoding and decoding times. As
1079
- future work, we aim to implement a complete multi-model
1080
- TABLE VII
- BD-RATE (%) OF Y, Cb AND Cr FOR ALL PROPOSED SCHEMES AND [4] UNDER ALL-INTRA COMMON TEST CONDITIONS
-              | Class A1          | Class A2          | Class B           | Class C
-              | Y     Cb    Cr    | Y     Cb    Cr    | Y     Cb    Cr    | Y     Cb    Cr
- Scheme 1     | -0.28 -3.20 -1.85 | -0.25 -3.11 -1.54 | -0.26 -2.28 -2.33 | -0.30 -1.92 -1.57
- Scheme 2     | -0.08 -1.24 -1.26 | -0.12 -1.59 -1.31 | -0.15 -1.80 -2.21 | -0.20 -1.41 -1.62
- Scheme 3     | -0.19 -2.25 -1.56 | -0.13 -2.44 -1.12 | -0.16 -1.78 -2.05 | -0.20 -1.44 -1.29
- Anchor + [4] | -0.26 -2.17 -1.96 | -0.22 -2.37 -1.64 | -0.23 -2.00 -2.17 | -0.26 -1.64 -1.41
-              | Class D           | Class E           | Class F           | Overall           | EncT[%] | DecT[%]
-              | Y     Cb    Cr    | Y     Cb    Cr    | Y     Cb    Cr    | Y     Cb    Cr    |         |
- Scheme 1     | -0.29 -1.70 -1.77 | -0.13 -1.59 -1.45 | -0.50 -1.58 -1.99 | -0.25 -2.38 -1.80 | 164%    | 1302%
- Scheme 2     | -0.18 -1.42 -1.73 | -0.08 -1.67 -1.40 | -0.34 -1.50 -1.90 | -0.13 -1.56 -1.63 | 146%    | 665%
- Scheme 3     | -0.20 -1.64 -1.41 | -0.07 -0.75 -0.46 | -0.37 -1.24 -1.43 | -0.16 -1.72 -1.38 | 154%    | 512%
- Anchor + [4] | -0.25 -1.55 -1.67 | -0.03 -1.35 -1.77 | -0.44 -1.30 -1.55 | -0.21 -1.90 -1.81 | 212%    | 2163%
1093
- for all VVC block sizes in order to ensure a full usage
1094
- of the proposed approach building on the promising results
1095
- shown in the constrained test conditions. Finally, an improved
1096
- approach for integer approximations may enable the fusion of
1097
- all proposed simplifications, leading to a fast and powerful
1098
- multi-model.
1099
- REFERENCES
1100
- [1] B. Bross, J. Chen, and S. Liu, “Versatile Video Coding (VVC) draft 7,”
1101
- Geneva, Switzerland, October 2019.
1102
- [2] Y . Li, L. Li, Z. Li, J. Yang, N. Xu, D. Liu, and H. Li, “A hybrid neural
1103
- network for chroma intra prediction,” in 2018 25th IEEE International
1104
- Conference on Image Processing (ICIP) . IEEE, 2018, pp. 1797–1801.
1105
- [3] J. Pfaff, P. Helle, D. Maniry, S. Kaltenstadler, B. Stallenberger,
1106
- P. Merkle, M. Siekmann, H. Schwarz, D. Marpe, and T. Wiegand, “Intra
1107
- prediction modes based on neural networks,” Document JVET-J0037-
1108
- v2, Joint Video Exploration Team of ITU-T VCEG and ISO/IEC MPEG ,
1109
- 2018.
1110
- [4] M. Górriz, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak,
1111
- “Chroma intra prediction with attention-based CNN architectures,” arXiv
1112
- preprint arXiv:2006.15349 , accepted for publication at IEEE ICIP,
1113
- October 2020.
1114
- [5] L. Murn, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak, “Inter-
1115
- preting cnn for low complexity learned sub-pixel motion compensation
1116
- in video coding,” arXiv preprint arXiv:2006.06392 , 2020.
1117
- [6] P. Helle, J. Pfaff, M. Sch ¨afer, R. Rischke, H. Schwarz, D. Marpe,
1118
- and T. Wiegand, “Intra picture prediction for video coding with neural
1119
- networks,” in 2019 Data Compression Conference (DCC) . IEEE, 2019,
1120
- pp. 448–457.
1121
- [7] X. Zhao, J. Chen, A. Said, V . Seregin, H. E. Egilmez, and M. Kar-
1122
- czewicz, “Nsst: Non-separable secondary transforms for next generation
1123
- video coding,” in 2016 Picture Coding Symposium (PCS) . IEEE, 2016,
1124
- pp. 1–5.
1125
- [8] K. Zhang, J. Chen, L. Zhang, X. Li, and M. Karczewicz, “Enhanced
1126
- cross-component linear model for chroma intra-prediction in video
1127
- coding,” IEEE Transactions on Image Processing , vol. 27, no. 8, pp.
1128
- 3983–3997, 2018.
1129
- [9] L. T. Nguyen, A. Khairat and D. Marpe, “Adaptive inter-plane prediction
1130
- for RGB content,” Document JCTVC-M0230 , Incheon, April 2013.
1131
- [10] M. Siekmann, A. Khairat, T. Nguyen, D. Marpe, and T. Wiegand,
1132
- “Extended cross-component prediction in hevc,” APSIPA transactions
1133
- on signal and information processing , vol. 6, 2017.
1134
- [11] G. Bjontegaard, “Calculation of average PSNR differences between rd-
1135
- curves,” VCEG-M33 , 2001.
1136
- [12] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
1137
- Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances
1138
- in neural information processing systems , 2017, pp. 5998–6008.
1139
- [13] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and
1140
- Y . Bengio, “A structured self-attentive sentence embedding,” arXiv
1141
- preprint arXiv:1703.03130 , 2017.
1142
- [14] A. P. Parikh, O. T ¨ackstr ¨om, D. Das, and J. Uszkoreit, “A decompos-
1143
- able attention model for natural language inference,” arXiv preprint
1144
- arXiv:1606.01933 , 2016.
1145
- [15] J. Cheng, L. Dong, and M. Lapata, “Long short-term memory-networks
1146
- for machine reading,” arXiv preprint arXiv:1601.06733 , 2016.[16] Y . He, X. Zhang, and J. Sun, “Channel pruning for accelerating
1147
- very deep neural networks,” in Proceedings of the IEEE International
1148
- Conference on Computer Vision , 2017, pp. 1389–1397.
1149
- [17] Z. Zhuang, M. Tan, B. Zhuang, J. Liu, Y . Guo, Q. Wu, J. Huang,
1150
- and J. Zhu, “Discrimination-aware channel pruning for deep neural
1151
- networks,” in Advances in Neural Information Processing Systems , 2018,
1152
- pp. 875–886.
1153
- [18] T.-W. Chin, R. Ding, C. Zhang, and D. Marculescu, “Towards efficient
1154
- model compression via learned global ranking,” in Proceedings of the
1155
- IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
1156
- 2020, pp. 1518–1528.
1157
- [19] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam,
1158
- and D. Kalenichenko, “Quantization and training of neural networks
1159
- for efficient integer-arithmetic-only inference,” in Proceedings of the
1160
- IEEE Conference on Computer Vision and Pattern Recognition , 2018,
1161
- pp. 2704–2713.
1162
- [20] Y . Cai, Z. Yao, Z. Dong, A. Gholami, M. W. Mahoney, and K. Keutzer,
1163
- “Zeroq: A novel zero shot quantization framework,” in Proceedings of
1164
- the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
1165
- 2020, pp. 13 169–13 178.
1166
- [21] S. Xu, H. Li, B. Zhuang, J. Liu, J. Cao, C. Liang, and M. Tan,
1167
- “Generative low-bitwidth data free quantization,” arXiv preprint
1168
- arXiv:2003.03603 , 2020.
1169
- [22] M. Courbariaux, Y . Bengio, and J.-P. David, “Training deep neu-
1170
- ral networks with low precision multiplications,” arXiv preprint
1171
- arXiv:1412.7024 , 2014.
1172
- [23] J. Ballé, N. Johnston, and D. Minnen, “Integer networks for data
1173
- compression with latent-variable models,” in International Conference
1174
- on Learning Representations , 2018.
1175
- [24] M. Schäfer, B. Stallenberger, J. Pfaff, P. Helle, H. Schwarz, D. Marpe,
1176
- and T. Wiegand, “Efficient fixed-point implementation of matrix-based
1177
- intra prediction,” in 2020 IEEE International Conference on Image
1178
- Processing (ICIP) . IEEE, 2020, pp. 3364–3368.
1179
- [25] Y . Bengio, A. Courville, and P. Vincent, “Representation learning: A
1180
- review and new perspectives,” IEEE transactions on pattern analysis
1181
- and machine intelligence , vol. 35, no. 8, pp. 1798–1828, 2013.
1182
- [26] X. Geng, J. Lin, B. Zhao, A. Kong, M. M. S. Aly, and V . Chandrasekhar,
1183
- “Hardware-aware softmax approximation for deep neural networks,” in
1184
- Asian Conference on Computer Vision . Springer, 2018, pp. 107–122.
1185
- [27] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang,
1186
- “Ntire 2017 challenge on single image super-resolution: Methods and
1187
- results,” in Proceedings of the IEEE conference on computer vision and
1188
- pattern recognition workshops , 2017, pp. 114–125.
1189
- [28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
1190
- arXiv preprint arXiv:1412.6980 , 2014.
1191
- [29] S. K. J. Chen, Y . Ye, “Algorithm description for versatile video coding
1192
- and test model 7 (vtm 7),” Document JVET-P2002 , Geneva, October
1193
- 2019.
1194
- [30] J. Boyce, K. Suehring, X. Li, and V . Seregin, “JVET common test
1195
- conditions and software reference configurations,” Document JVET-
1196
- J1010 , Ljubljana, Slovenia, July 2018.
1197
- [31] M. G. Blanch, M. Mrak, A. F. Smeaton, and N. E. O’Connor, “End-to-
1198
- end conditional gan-based architectures for image colourisation,” in 2019
1199
- IEEE 21st International Workshop on Multimedia Signal Processing
1200
- (MMSP) . IEEE, 2019, pp. 1–6.
1201
- [32] R. Davidson, “Reliable inference for the gini index,” Journal of econo-
1202
- metrics , vol. 150, no. 1, pp. 30–40, 2009.

txt/2103.02372.txt DELETED
@@ -1,655 +0,0 @@
1
- Root cause prediction based on bug reports
2
- Thomas Hirsch
3
- Institute of Software Technology
4
- Graz University of Technology
5
- Graz, Austria
6
7
- Institute of Software Technology
8
- Graz University of Technology
9
- Graz, Austria
10
11
- Abstract —This paper proposes a supervised machine learning
12
- approach for predicting the root cause of a given bug report.
13
- Knowing the root cause of a bug can help developers in the de-
14
- bugging process—either directly or indirectly by choosing proper
15
- tool support for the debugging task. We mined 54 755 closed
16
- bug reports from the issue trackers of 103 GitHub projects and
17
- applied a set of heuristics to create a benchmark consisting of
18
- 10 459 reports. A subset was manually classified into three groups
19
- (semantic, memory, and concurrency) based on the bugs’ root
20
- causes. Since the types of root cause are not equally distributed, a
21
- combination of keyword search and random selection was applied.
22
- Our data set for the machine learning approach consists of 369 bug
23
- reports (122 concurrency, 121 memory, and 126 semantic bugs).
24
- The bug reports are used as input to a natural language processing
25
- algorithm. We evaluated the performance of several classifiers
26
- for predicting the root causes for the given bug reports. Linear
27
- Support Vector machines achieved the highest mean precision
28
- (0.74) and recall (0.72) scores. The created bug data set and
29
- classification are publicly available.
30
- Index Terms —bug report, bug benchmark, root cause prediction
31
- I. I NTRODUCTION
32
- Debugging is one of the most time-consuming parts in the
33
- software development process. While there exist numerous
34
- fault localization [1] and repair [2] techniques to support
35
- programmers in the debugging process, it is often unclear which
36
- techniques work best for a given bug. For this reason, Sobreira et
37
- al.[3] investigated the structure of Defects4J [4] bugs. For each
38
- bug, they determined the size of the patch, the repair action,
39
- and the change pattern. They have invited other researchers to
40
- investigate which types of bugs1can be handled by their repair
41
- tools.
42
- In this paper, we change the perspective of this research topic:
43
- instead of only providing root cause information for a bench-
44
- mark to help researchers in evaluating their tools, we predict
45
- the root cause for a given bug description so that programmers
46
- can choose a proper tool for their debugging problem. There
47
- are tools that focus on concurrency (e.g. ConcBugAssist [5])
48
- or memory (e.g. Valgrind) bugs, while others are better suited
49
- for semantic bugs (e.g. Jaguar [6]). While some root causes
50
- can easily be determined when reading a bug report, other
51
- 0©2020 IEEE. Personal use of this material is permitted. Permission from
52
- IEEE must be obtained for all other uses, in any current or future media,
53
- including reprinting/republishing this material for advertising or promotional
54
- purposes, creating new collective works, for resale or redistribution to servers
55
- or lists, or reuse of any copyrighted component of this work in other works.
56
- 1http://program-repair.org/defects4j-dissection/
- root causes are not that obvious. Consider for example issue
57
- ticket #514 from TwelveMonkeys project2:
58
- TIFF: Invalid StripByteCounts when writing large resolution
59
- files (9800x8000)
60
- Hello, when writing a high resolution tiff file the stripByteCounts
61
- appears to be corrupt. An approx 300 mb output file has a single
62
- image strip with the byte count of: 4071696385 which is larger than
63
- the file itself. However when working with lower (more common)
64
- resolutions the meta for the image strips is created properly. [. . . ]
65
- This code creates the file with the incorrect meta data:
66
- // Input high resolution 48 bit depth final
67
- InputStream inStream = [. . . ]
68
- Attaching zipped image: 9800x8000 resolution 48bit depth.zip
69
- I’ve tested and reproduced the issue with the following versions:
70
- 3.4.1, 3.4.2, 3.4.3
71
- Thanks in advance,
72
- -Jesse
73
- Our goal is to provide information to the programmer about
74
- the root cause of this bug. For instance, the incorrect byte
75
- count mentioned in this bug report together with the information
76
- about high resolution can raise suspicion of an integer overflow
77
- occurring.
78
- We propose a supervised machine learning (ML) approach
79
- that uses the bug description from issue tickets to predict the
80
- root cause of the bug. For processing the text from the issue
81
- tickets, we make use of natural language processing (NLP). For
82
- creating the training set, we have mined bug reports from 103
83
- GitHub projects and manually examined a subset, classifying
84
- them as memory, concurrency or semantic bugs based on the
85
- actual fix. Since the number of concurrency and memory bugs
86
- is usually very low [7], we have performed a keyword search in
87
- the commit messages of fixes to find more instances with these
88
- root causes.
89
- While the primary goal of this paper is the root cause
90
- prediction approach, the generated training data can be used
91
- as a benchmark for specific types of faults. Often, researchers
92
- focus on certain bug types when developing a fault localization
93
- or repair method. While these approaches have a high potential,
94
- their evaluation is often limited to a few real-world bugs or
95
- artificially seeded bugs, as mentioned in [8]. The training data
96
- set created in this paper can be used as a bug benchmark by
97
- researchers who are interested in certain types of bugs. It can
98
- be seen as a Java pendant to the C/C++ benchmark BugBench
99
- that also distinguishes memory, concurrency, and semantic bugs.
100
- Furthermore, it can be used to evaluate information retrieval
101
- based bug localization approaches [9].
102
- 2https://github.com/haraldk/TwelveMonkeys/issues/514
103
- The contributions of this work can be summarized as:
104
- a machine learning approach for predicting the root cause
105
- for a given bug report with a mean precision of 0.74 and
106
- a mean recall of 0.72,
107
- a data set consisting of 10 459 bug reports and fixes from
108
- 103 GitHub repositories,
109
- a data set of 122 concurrency, 121 memory, and 269
110
- semantic bugs with detailed sub-categories, and
111
- a framework for building such data sets.
112
- The created data sets, all scripts, and the categorization are
113
- publicly available.3The structure of this paper is as follows:
114
- Section II introduces the main root cause categories and their
115
- sub-categories. Section III explains how we have collected
116
- closed bug reports and their corresponding fixes. Section IV
117
- presents the machine learning approach. We discuss the results
118
- and threats to validity in Section V. Section VI discusses the
119
- related work and Section VII concludes the paper.
120
- II. C LASSIFICATION SCHEMA
121
- We use three main categories and 18 detailed root causes as
122
- described in Table I. The semantic and memory sub-categories
123
- are based on Tan et al. [7]; the concurrency sub-categories are
124
- based on Zhou et al. [10].
125
- A problem with post mortem bug classification arises through
126
- often unclear separation of the actual fix from other code
127
- changes, e.g., commits that include more than one bug fix,
128
- commits that include the bug fix aside of some refactoring
129
- or new extension, or bug fixes that are scattered over multiple
130
- commits. Additionally, it is difficult to distinguish a workaround
131
- from a fix [11]. All of the above make it hard to correctly
132
- identify the fix and to properly categorize the root cause. To deal
133
- with these issues, we have added a confidence value ranging
134
- from 1-10 that reflects our confidence on the correctness of
135
- our classification: A confidence level of 10 indicates showcase
136
- quality; 9 indicates that we are very confident about the main
137
- category and the subcategory; 8 indicates that we are very
138
- confident about main category and subcategory assigned, but a
139
- different subcategory cannot be ruled out with 100 % certainty.
140
- For example, differentiating “processing” and “missing case”
141
- is often not possible without having the knowledge of the
142
- programmer who wrote the code. A confidence level of 7 or be-
143
- low indicates doubts about the chosen subcategory. Confidence
144
- levels between 3 and 5 indicate a strong confidence about the
145
- main category, but the subcategories were not identifiable. A
146
- confidence level of 2 indicates doubts about the main category
147
- while a level of 1 indicates that it was not possible to determine
148
- the main root cause category for the bug.
149
- III. D ATA ACQUISITION
150
- In this section, we provide details on the collection of the
151
- bug data set that builds the basis for creating the training set
152
- for the machine learning approach.
153
- Purpose of the data set. The data set should provide a realistic
154
- distribution of different bug types, and should serve as basis for
155
- 3https://doi.org/10.5281/zenodo.3973048
- TABLE I
156
- ROOT CAUSE CATEGORIES AND SUB -CATEGORIES
157
- Semantic Description
158
- Exception handl. Missing or improper exception handling.
159
- Missing case Faults due to unawareness of a certain case or simply
160
- a forgotten implementation.
161
- Processing Incorrect implementation (e.g. miscalculations, in-
162
- correct method output, wrong method/library usage).
163
- Typo Ambiguous naming, typos in SQL calls/URLs/paths.
164
- Dependency The code can be built but behaves unexpected be-
165
- cause of changes in a foreign system (e.g. update of
166
- utilized library or underlying OS).
167
- Other All other semantic faults.
168
- Memory
169
- Buffer overflow Buffer overflows, not overflowing numeric types.
170
- Null pointer deref. All null pointer dereferences.
171
- Uninit. mem. read All uninitialized memory reads except null pointer
172
- dereference.
173
- Memory leak Memory leak.
174
- Dangling pointer Dangling pointer.
175
- Double free Double free.
176
- Other All other memory bugs.
177
- Concurrency
178
- Order violation Missing or incorrect synchronization, e.g. object is
179
- dereferenced by thread B before it is initialized by
180
- thread A.
181
- Race condition Two or more threads access the same resource with
182
- at least one being a write access and the access is
183
- not ordered properly.
184
- Atomic violation Constraints on the interleaving of operations are
185
- missing. This happens when atomicity of a certain
186
- code region was assumed but failed to guarantee
187
- atomicity in the implementation.
188
- Deadlock Two or more threads wait for the other one to release
189
- a resource.
190
- Other All other concurrency bugs.
191
- experiments with various fault localization and ML experiments.
192
- The bugs should be real world Java bugs.
193
- Project selection. We chose 103 GitHub Java projects to
194
- source our data set. Primary selection criteria were a well known
195
- organization driving the project, or the project having a high
196
- star rating on GitHub. However, the list also contains lesser
197
- known projects that were already used in other research [4],
198
- [12], [13]. The selection process was performed manually. All of
199
- the projects utilize GitHub’s built-in issue tracker together with
200
- its labeling system, and have at least 100 closed issues identified
201
- as bugs. The project sizes range from 13k Java LOC (Lines Of
202
- Code) for HikariCP4to 1.7M Java LOC for Elasticsearch5. The
203
- full list of mined projects can be found in the online appendix3.
204
- Bug ticket identification. We identified bugs via the labels
205
- used in the issue tickets and we only considered closed issue
206
- tickets. In order to omit feature requests, maintenance tickets
207
- and other non-bug issues, we only considered issues whose
208
- labels contain “bug”, “defect”, or “regression”.
209
- Filtering criteria. GitHub automatically links commits to
210
- issue tickets based on issue ids and provides this data together
211
- with issue tickets. We only consider issues for which at least
212
- 4https://github.com/brettwooldridge/HikariCP
213
- 5https://github.com/elastic/elasticsearch
214
- one commit is linked to the issue, and all linked commits are
215
- still available in the Git repository. If any of the commits are
216
- linked to multiple issues, or the commit message suggests that
217
- the fix is done aside of other changes, the issue is discarded. As
218
- of writing this, we omit issues whose commits do not contain
219
- changes in production Java code. We plan to lift this limitation
220
- to incorporate other root causes, e.g. in the documentation or
221
- build system in the future.
222
- We use Gumtree Spoon AST Diff6[14] to create Java
223
- aware diffs. To manage overwhelming runtime and memory
224
- requirements arising from the size of the data set, we limit the
225
- size and number of the commits per issue. We only consider
226
- issues where the number of commits linked to the issue is
227
- smaller than 10, the number of files changed per commit is
228
- smaller than 20, and the number of lines changed per commit
229
- is smaller than 250. Our analysis shows that these limitations
230
- only remove less than 3 % of the issues.
231
- The data set. In total, 54 755 issues have been mined from
232
- GitHub. Following the filtering criteria described above leaves
233
- us with 10 459 issues that form the basis for our further
234
- investigations. This bug data set consists of:
235
- textual bug report including metadata in form of time-
236
- stamps and user names,
237
- all commits associated to each bug report including meta-
238
- data as commit message and git stats, and
239
- Java aware diff statistics and location of the changes in
240
- terms of file, class, and method.
241
- IV. M ACHINE LEARNING APPROACH
242
- We employ an NLP approach, vectorizing the textual bug
243
- reports into unigrams and bigrams, to train a model for auto-
244
- mated classification along our fault classification schema. This
245
- approach calculates a frequency vector from words and word
246
- pairs occurring in the input text that is used as feature vector
247
- for the classifier.
248
- Input preprocessing. To increase performance of the classifier,
249
- we applied the following preprocessing steps:
250
- Stop word removal (i.e. removing common words that are
251
- not adding any value)
252
- Case folding (i.e. converting all characters to lower case)
253
- Stemming (i.e. reducing each word to its word stem)
254
- The bug reports often include stack traces, exceptions, and log
255
- outputs. Currently, we process them in the same way as the
256
- rest of the input text. In future work, we will investigate the
257
- usefulness of domain specific preprocessing of these artifacts.
258
- Training set creation. Figure 1 provides an overview of the
259
- training set creation. We manually classified 160 randomly
260
- selected issues and identified 119 semantic, 2 memory, and 4
261
- concurrency bugs. 35 issues were not classified because the bug
262
- reports were non-English, feature requests, deemed not a bug, or
263
- issues for which we were not confident about the sub-category
264
- (confidence level <8).
265
- Concurrency and memory bugs are usually rare, accounting
266
- for 2 % respectively 6 % of all bugs [7], [15], [16], which poses
267
- 6https://github.com/SpoonLabs/gumtree-spoon-ast-diff
268
- [Fig. 1 (flow chart): training set creation, from 54 755 issues labeled as bug in 103
- GitHub projects, through the filter criteria (10 459 complete issues), random sampling
- and keyword search, down to 369 issues for the machine learning approach.]
- Fig. 1. Training set creation
307
- a challenge for the creation of reasonably sized training sets.
308
- For this reason, we have performed a keyword search on the
309
- commit messages linked to the issues to identify candidates of
310
- memory and concurrency bugs analogous to Ray et al.’s approach
311
- [15], resulting in a total of 756 issues. As of writing this, 471
312
- randomly selected issues from this set have been examined and
313
- classified. 150 semantic, 119 memory, and 118 concurrency
314
- bugs have been identified in this sample. 84 issues could not be
315
- classified due to the reasons mentioned above.
316
- Training set composition. To avoid a bias towards the se-
317
- mantic bugs that were “accidentally found” during the manual
318
- classification of the keyword search results and to have approx-
319
- imately equally large training sets, we reduced their volume
320
- to 5 % of all semantic bugs. This is a rather high estimate
321
- given the fact that only 7.2 % of all bugs have been reported
322
- in the keyword search and only one third of these bugs are
323
- actually semantic bugs. Further, using separate data bases for
324
- the keyword search (commit messages) and training set for our
325
- ML classifier (bug reports) makes us confident that the bias
326
- introduced by the keywords is limited. As of writing this, our
327
- training set consists of 122 concurrency bugs, 121 memory
328
- bugs, and 126 semantic bugs. The complete training set consists
329
- of 369 textual bug reports.
330
- Classifiers. We applied various supervised ML classifier
331
- algorithms on our data set, namely Multinomial Naive Bayes
332
- (MNB), Linear Support Vector (LSVC), Linear Support Vector
333
- with Stochastic Gradient Descent learning (SGDC), Random
334
- Forrest (RFC), and Logistic Regression (LRC). The selection
335
- of classifiers is based on their suitability for multi-class clas-
336
- sification problems based on textual inputs, and their application
337
- in similar research. Support vector machines have been used in
338
- comparable endeavors [7], [15]–[18]; the same applies to naive
339
- Bayes [7], [16], [17], [19]–[21], logistic regression [20], [21],
340
- and decision tree based algorithms [7], [20], [21].
341
- Experiment. The 369 bug reports were split into a training
342
- set (80 %) and a test set (20 %). We performed 5-fold cross
343
- validation on the training set for each classifier, using grid
344
- search for hyperparameter tuning. Mean accuracy was used as
345
- the scoring metric in the grid search. The highest-scoring model for
346
- each classifier was then evaluated on the test set yielding the
347
- final score for this classifier. This experiment was performed
348
- 10 times, followed by manual examination of its results and
349
- corresponding hyperparameters to reduce the grid search space.
350
- Finally, to enable comparison of the classifiers given the small
351
- input data set, the above described experiment with the reduced
352
- hyperparameter set was performed 100 times with randomized
353
- test and training splits. The employed set of hyperparameters,
354
- and grid search results for each experiment, can be found in the
355
- online appendix.
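A minimal scikit-learn sketch of this protocol (80/20 split, 5-fold grid search scored by mean accuracy, evaluation of the best model on the held-out test set, repeated with randomized splits); the TF-IDF features, the LinearSVC example, and the reduced parameter grid are assumptions, not the exact setup of these experiments:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    def run_once(reports, labels, seed):
        # 80/20 split; the seed is randomized across the repeated runs.
        X_tr, X_te, y_tr, y_te = train_test_split(
            reports, labels, test_size=0.2, random_state=seed, stratify=labels)
        pipe = Pipeline([("tfidf", TfidfVectorizer()), ("clf", LinearSVC())])
        # Hypothetical reduced hyperparameter grid, tuned by 5-fold CV on mean accuracy.
        grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="accuracy")
        grid.fit(X_tr, y_tr)
        # The highest-scoring model is evaluated once on the held-out test set.
        return f1_score(y_te, grid.predict(X_te), average="weighted")

    # scores = [run_once(reports, labels, seed) for seed in range(100)]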
356
- V. RESULTS
357
- A. Classification results
358
- We graphically compare the classifiers’ performance by
359
- means of the weighted averages of F1, precision, and recall
360
- in Figure 2 and we report mean, median, standard deviation,
361
- min, and max of each classifier in Table II based on the scores
362
- of 100 runs. Please note that the F1 scores are computed for
363
- the individual test runs and then the mean, median, standard
364
- deviation, min, and max values of these F1 scores are computed.
365
- Thus, they cannot be computed from the precision and recall
366
- given in the table.
367
- We observed a tight clustering of classifiers, which is also
368
- evident in individual runs, although individual runs exhibit
369
- varying performances. We attribute this behavior to the small
370
- data set size and high variance in data quality. The best overall
371
- performance was achieved with LSVC, with mean F1 (0.72),
372
- precision (0.74), and recall (0.72). LSVC also produced the
373
- highest observed scores in an individual run, yielding F1 (0.85),
374
- precision (0.88), and recall (0.85).
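A minimal sketch of how the per-run weighted averages behind these numbers can be computed and aggregated, assuming the true and predicted labels of each run are available:

    import numpy as np
    from sklearn.metrics import precision_recall_fscore_support

    def weighted_scores(y_true, y_pred):
        # Weighted-average precision, recall, and F1 for a single run.
        p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
        return p, r, f1

    # f1_runs = np.array([weighted_scores(t, p)[2] for t, p in runs])  # one entry per run
    # summary = (f1_runs.mean(), np.median(f1_runs), f1_runs.std(), f1_runs.min(), f1_runs.max())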
375
- B. Discussion
376
- The biggest challenge lies in the creation of a reasonably
377
- sized data set. Further, varying data quality constitutes a sig-
378
- nificant problem. The textual bug reports in our data set range
379
- from only 5 words to 60kB of text per report. However, our
380
- examination of those bug tickets shows that the length is not
381
- necessarily correlating with the quality in terms of usefulness
382
- for the developer. Issue #4960 from Elasticsearch7 is an example
383
- of a bug report that requires context and knowledge about the
384
- project for understanding:
385
- Filtered query parses name incorrectly
386
- There are bug reports that merely describe the impact, e.g.
387
- issue #338 from the Redisson project8:
388
- 7https://github.com/elastic/elasticsearch/issues/4960
389
- 8https://github.com/redisson/redisson/issues/338
390
- Fig. 2. Mean weighted average precision, recall and F1 score
391
- TABLE II
- WEIGHTED AVERAGE PRECISION, RECALL, AND F1 SCORES (100 RUNS)
-            Classifier  Precision  Recall  F1
- Mean       LRC         0.73       0.71    0.71
-            RFC         0.72       0.62    0.62
-            SGDC        0.74       0.71    0.71
-            LSVC        0.74       0.72    0.72
-            MNB         0.72       0.70    0.70
- Median     LRC         0.73       0.70    0.70
-            RFC         0.72       0.62    0.62
-            SGDC        0.74       0.70    0.71
-            LSVC        0.74       0.72    0.72
-            MNB         0.73       0.70    0.70
- Std. dev.  LRC         0.046      0.048   0.048
-            RFC         0.053      0.076   0.082
-            SGDC        0.045      0.049   0.049
-            LSVC        0.046      0.046   0.046
-            MNB         0.045      0.049   0.049
- Min        LRC         0.63       0.62    0.62
-            RFC         0.53       0.39    0.38
-            SGDC        0.64       0.62    0.63
-            LSVC        0.65       0.64    0.63
-            MNB         0.60       0.57    0.57
- Max        LRC         0.84       0.82    0.82
-            RFC         0.85       0.77    0.77
-            SGDC        0.86       0.84    0.84
-            LSVC        0.88       0.85    0.85
-            MNB         0.83       0.82    0.82
419
- New version 2.1.4 or greater performance is low.
420
- When I use redisson 2.1.3 the ubuntu’s load average is 1.8 2.3; but
421
- I use 2.1.4 or greater, the load average is often greater than 3.00,
422
- my java application often overload.
423
- In some cases, the bug reports point right away at the fault,
424
- e.g. Netty issue #1878:
425
- WebSocket08FrameDecoder leaks ByteBuf when payload is
426
- masked
427
- Further research is required to determine metrics to mea-
428
- sure bug report quality for our purpose. On the other end of
429
- the spectrum, for very long bug reports, additional text pre-
430
- processing is required. Heuristics for reduction or removal of
431
- artifacts have to be implemented. Such artifacts are stack traces,
432
- code snippets, log outputs, or similar text portions, whose size
433
- is disproportionate to the added information.
434
- C. Threats to validity
435
- The selection of bugs from the issue tickets by searching for
436
- certain labels is a threat to the internal validity. While we have
437
- considered a wide range of bug labels, we cannot rule out that we
438
- miss bugs with special labels or wrongly labeled bugs. A study
439
- on 7000 issue reports from five open-source projects showed
440
- that up to 40 % of the issues were wrongly labeled [22].
441
- Manually categorizing the root cause might be error-prone
442
- and the true root cause of the bug can only be determined
443
- by the original programmer. For this reason, we indicated the
444
- confidence level for each bug we categorized and excluded bugs
445
- with a low confidence level. Furthermore, the fix might be a
446
- workaround instead of a fix of the true fault.
447
- The keyword search might only reveal certain types of
448
- memory and concurrency bugs. We have tried to avoid a bias in
449
- the classification towards the words used in the keyword search
450
- by performing the keyword search on the commit messages and
451
- NLP for classification on the bug description.
452
- The small sample size is the biggest threat to external validity.
453
- In future work, we will therefore enlarge the training set. The
454
- performance of this approach may vary based on the software
455
- domain of the examined project. We tried to counteract this
456
- by sourcing our data set from a broad range of software projects. However,
457
- data mining was exclusively performed on open source projects.
458
- Further, most of the examined projects are libraries. In contrast
459
- to end-user software, bug reports for libraries are almost ex-
460
- clusively written by other developers. Such bug reports often
461
- already contain insights into the underlying problem. Further,
462
- our approach may not work as well for bug descriptions of
463
- software written in a different programming language.
464
- VI. RELATED WORK
465
- Ray et al. [15] analyzed more than 560 000 bug fixes from
466
- 729 GitHub projects written in 17 languages. They classified
467
- the root causes and impacts for 10 % of the bugs by searching
468
- for keywords in the commit messages and trained a supervised
469
- ML approach to classify the remaining 90 % of the bugs. They
470
- 9https://github.com/netty/netty/issues/1878
- validated their approach by manually classifying 180 bug fixes
471
- (83.7 % precision, 84.3 % recall). While we also rely on a
472
- keyword search, we did not perform the keyword search on
473
- the same text that was used in NLP to avoid biasing.
474
- Li and colleagues [16] classified the root cause, impact,
475
- and software component of nearly 30 000 Bugzilla entries
476
- using NLP with SVM, Winnow, Perceptron and Naive Bayes
477
- as classifiers. Their training set consists of 709 bugs (51 %
478
- randomly sampled, 36 % security-related, and 13 % concurrency
479
- bugs).
480
- Tan et al. [7] manually classified 339 bugs from randomly
481
- selected fixed issues of three open-source projects into the
482
- dimensions root cause, impact, and component. Because of
483
- the low number of concurrency bugs in the sample, they
484
- performed a keyword search to identify additional concurrency
485
- bugs. Semantic bugs are the dominant root cause with 70-
486
- 87 %. The Linux kernel has nearly 13.6 % concurrency bugs;
487
- the other projects (Mozilla and Apache) have a lower number
488
- of concurrency bugs with 1.2 % and 5.2 %. Furthermore, the
489
- authors automatically classified more than 100 000 bugs using
490
- a supervised ML (precision: 67 % for memory and 93 % for
491
- semantic bugs; recall: 57 % and 95 %, respectively).
492
- Ortu et al. [17] investigated whether there are differences
493
- in the characteristics of high and low priority defects in more
494
- than 1200 open-source software projects. Therefore, they trained
495
- different supervised machine learning classifiers to predict the
496
- root cause, impact, and software component.
497
- Thung et al. [18] used machine learning to classify bugs ac-
498
- cording to the Orthogonal Defect Classification (ODC) scheme.
499
- They distinguished three defect groups: data and control flow,
500
- structural, and non-functional. They manually classified 500
501
- bugs that serve as training set. They use the description of the
502
- bug as well as the fixes to train a model. The SVM multi-
503
- class classification algorithm performed best (69 % precision,
504
- 70 % recall). Lopes and colleagues [23] applied different ML
505
- algorithms on bug descriptions to classify bugs according to
506
- different ODC dimensions. They manually categorized more
507
- than 4000 fixed bugs from three NoSQL databases. Recurrent
508
- Neural Networks have the highest accuracy when predicting
509
- the activity (47.6 %) and impact (33.3 %). Linear support vector
510
- machines are suited best to predict the target (accuracy 85.5 %)
511
- and the defect type (34.7 %).
512
- Hernández-González and colleagues [19] proposed a learning
513
- from crowds ML approach. The training data consists of bug
514
- reports and labels for ODC’s impact dimension. Each bug report
515
- was labeled by five annotators. In the majority of the cases, the
516
- annotators disagree on the labels. In the learning from crowds
517
- paradigm, the individual labels are taken in the machine learning
518
- training instead of the label that was assigned by the majority
519
- of the annotators.
520
- Antoniol et al. [20] use decision trees, naive Bayes and
521
- logistic regression to classify issue reports as bug or feature
522
- request. Their approach was able to correctly classify 77-82 %
523
- of the issues. Chawla and Singh [21] also classify issue reports
524
- as bug or other request. They achieve an accuracy of 84–91 %.
525
- VII. CONCLUSION AND FUTURE WORK
526
- The presented approach automatically predicts the root cause
527
- for a given bug report. This information can be used by the
528
- developer to choose a proper debugging tool. It can also be
529
- used by a meta-debugging approach to recommend a debugging
530
- tool. The data set created in this work can be used to evaluate
531
- which debugging tools are particularly well-suited to support
532
- programmers in the debugging process of a particular bug
533
- instance. In addition, the proposed approach can be utilized
534
- for building benchmarks of specific bug types. This benchmark
535
- is especially suitable for evaluating IR-based fault localization
536
- techniques since it includes textual data in form of bug reports
537
- and commit messages, as well as detailed information on the
538
- fix location.
539
- During the manual classification, we have noticed recurring
540
- fault patterns. We will investigate if we can establish links be-
541
- tween these fault patterns and code-smells detected by existing
542
- code analysis tools such as SonarQube10. If so, knowledge about
543
- the bug type combined with reports from code analysis tools can
544
- be utilized to aid fault localization.
545
- Besides that, we will improve the approach by pre-processing
546
- stack trace information and other artifacts (if available in the bug
547
- report). Currently, stack trace information is treated the same
548
- way as human written text.
549
- Since a detailed predicted root cause is even more helpful,
550
- we will refine the prediction to the sub-categories. To do so, we
551
- have to enlarge the training set. Since certain subcategories will
552
- be underrepresented in the training set, we will up-sample those
553
- categories by means of the Synthetic Minority Over-sampling
554
- TEchnique (SMOTE) [24].
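A minimal sketch of this planned up-sampling with imbalanced-learn, assuming TF-IDF features of the bug-report texts; `reports` and `subcategory_labels` are placeholders for the enlarged training set:

    from imblearn.over_sampling import SMOTE
    from sklearn.feature_extraction.text import TfidfVectorizer

    X = TfidfVectorizer().fit_transform(reports)          # vectorized bug reports
    # SMOTE synthesizes new minority-class samples in feature space.
    X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, subcategory_labels)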
555
- ACKNOWLEDGMENT
556
- The work described in this paper has been funded by the
557
- Austrian Science Fund (FWF): P 32653 (Automated Debugging
558
- in Use).
559
- REFERENCES
560
- [1] W. E. Wong, R. Gao, Y . Li, R. Abreu, and F. Wotawa, “A Survey on
561
- Software Fault Localization,” IEEE Transactions on Software Engineering ,
562
- vol. 42, no. 8, pp. 707–740, aug 2016.
563
- [2] L. Gazzola, D. Micucci, and L. Mariani, “Automatic Software Repair: A
564
- Survey,” IEEE Transactions on Software Engineering , vol. 45, no. 1, pp.
565
- 34–67, jan 2019.
566
- [3] V . Sobreira, T. Durieux, F. Madeiral, M. Monperrus, and M. A. Maia,
567
- “Dissection of a Bug Dataset: Anatomy of 395 Patches from Defects4J,”
568
- 25th IEEE International Conference on Software Analysis, Evolution and
569
- Reengineering (SANER 2018) , vol. 2018-March, pp. 130–140, jan 2018.
570
- [Online]. Available: http://dx.doi.org/10.1109/SANER.2018.8330203
571
- [4] R. Just, D. Jalali, and M. D. Ernst, “Defects4J: a database of
572
- existing faults to enable controlled testing studies for Java programs,”
573
- inInternational Symposium on Software Testing and Analysis (ISSTA
574
- 2014) . ACM Press, jul 2014, pp. 437–440. [Online]. Available:
575
- http://dl.acm.org/citation.cfm?doid=2610384.2628055
576
- [5] S. Khoshnood, M. Kusano, and C. Wang, “ConcBugAssist: Constraint
577
- solving for diagnosis and repair of concurrency bugs,” in Int. Symp. on
578
- Software Testing and Analysis (ISSTA 2015) . ACM, 2015, pp. 165–176.
579
- [6] H. L. Ribeiro, H. A. De Souza, R. P. A. De Araujo, M. L. Chaim, and
580
- F. Kon, “Jaguar: A Spectrum-Based Fault Localization Tool for Real-
581
- World Software,” in 11th International Conference on Software Testing,
582
- Verification and Validation (ICST 2018) . IEEE, may 2018, pp. 404–409.
583
- 10https://www.sonarqube.org/
- [7] L. Tan, C. Liu, Z. Li, X. Wang, Y. Zhou, and C. Zhai, “Bug characteristics
584
- in open source software,” Empirical Software Engineering , vol. 19, no. 6,
585
- pp. 1665–1705, oct 2014.
586
- [8] Y . Tang, Q. Gao, and F. Qin, “LeakSurvivor: Towards Safely Tolerating
587
- Memory Leaks for Garbage-Collected Languages,” in USENIX Annual
588
- Technical conference , 2008, pp. 307–320.
589
- [9] T. D. B. Le, F. Thung, and D. Lo, “Will this localization tool be effective
590
- for this bug? Mitigating the impact of unreliability of information retrieval
591
- based bug localization tools,” Empirical Software Engineering , vol. 22,
592
- no. 4, pp. 2237–2279, aug 2017.
593
- [10] B. Zhou, I. Neamtiu, and R. Gupta, “Predicting concurrency bugs:
594
- How many, what kind and where are they?” in 9th International
595
- Conference on Evaluation and Assessment in Software Engineering
596
- (EASE’15) . ACM, apr 2015, pp. 1–10. [Online]. Available: https:
597
- //doi.org/10.1145/2745802.2745807
598
- [11] M. Böhme, E. O. Soremekun, S. Chattopadhyay, E. Ugherughe, and
599
- A. Zeller, “Where is the bug and how is it fixed? An experiment
600
- with practitioners,” in 11th Joint Meeting on Foundations of Software
601
- Engineering (ESEC/FSE 2017) , vol. Part F1301. Association for
602
- Computing Machinery, aug 2017, pp. 117–128. [Online]. Available:
603
- http://dl.acm.org/citation.cfm?doid=3106237.3106255
604
- [12] P. Gyimesi, G. Gyimesi, Z. Tóth, and R. Ferenc, “Characterization of
605
- source code defects by data mining conducted on GitHub,” in Lecture
606
- Notes in Computer Science , vol. 9159. Springer, 2015, pp. 47–62.
607
- [13] Z. Tóth, P. Gyimesi, and R. Ferenc, “A public bug database of GitHub
608
- projects and its application in bug prediction,” in 16th Int. Conference
609
- on Computational Science and Its Applications (ICCSA’16) , vol. 9789.
610
- Lecture Notes in Computer Science, Springer, 2016, pp. 625–638.
611
- [14] J. R. Falleri, F. Morandat, X. Blanc, M. Martinez, and M. Monperrus,
612
- “Fine-grained and accurate source code differencing,” in 29th ACM/IEEE
613
- International Conference on Automated Software Engineering (ASE
614
- 2014) . ACM, 2014, pp. 313–323. [Online]. Available: http://dl.acm.org/
615
- citation.cfm?doid=2642937.2642982
616
- [15] B. Ray, D. Posnett, V . Filkov, and P. Devanbu, “A large scale study
617
- of programming languages and code quality in GitHub,” in ACM
618
- SIGSOFT Symposium on the Foundations of Software Engineering
619
- (FSE’14) . ACM, nov 2014, pp. 155–165. [Online]. Available:
620
- http://dl.acm.org/citation.cfm?doid=2635868.2635922
621
- [16] Z. Li, L. Tan, X. Wang, S. Lu, Y . Zhou, and C. Zhai, “Have things
622
- changed now?: An empirical study of bug characteristics in modern open
623
- source software,” in 1st Workshop on Architectural and System Support
624
- for Improving Software Dependability (ASID’06) , 2006, pp. 25–33.
625
- [17] M. Ortu, G. Destefanis, S. Swift, and M. Marchesi, “Measuring high and
626
- low priority defects on traditional and mobile open source software,”
627
- in7th International Workshop on Emerging Trends in Software Metrics
628
- (WETSoM 2016) . ACM, may 2016, pp. 1–7. [Online]. Available:
629
- http://dl.acm.org/citation.cfm?doid=2897695.2897696
630
- [18] F. Thung, D. Lo, and L. Jiang, “Automatic defect categorization,” in
631
- Working Conf. on Reverse Engineering (WCRE) , 2012, pp. 205–214.
632
- [19] J. Hernández-González, D. Rodriguez, I. Inza, R. Harrison, and J. A.
633
- Lozano, “Learning to classify software defects from crowds: A novel
634
- approach,” Applied Soft Computing Journal , vol. 62, pp. 579–591, 2018.
635
- [20] G. Antoniol, K. Ayari, M. Di Penta, F. Khomh, and Y.-G. Guéhéneuc,
636
- “Is it a bug or an enhancement?” in Conference of the center
637
- for advanced studies on collaborative research meeting of minds
638
- (CASCON ’08) . ACM Press, 2008, pp. 304—-318. [Online]. Available:
639
- http://portal.acm.org/citation.cfm?doid=1463788.1463819
640
- [21] I. Chawla and S. K. Singh, “An automated approach for bug
641
- categorization using fuzzy logic,” in 8th India Software Engineering
642
- Conference (ISEC) . ACM, feb 2015, pp. 90–99. [Online]. Available:
643
- http://dl.acm.org/citation.cfm?doid=2723742.2723751
644
- [22] K. Herzig, S. Just, and A. Zeller, “It’s not a bug, it’s a feature: How
645
- misclassification impacts bug prediction,” in International Conference on
646
- Software Engineering (ICSE 2013) , 2013, pp. 392–401.
647
- [23] F. Lopes, J. Agnelo, C. A. Teixeira, N. Laranjeiro, and J. Bernardino,
648
- “Automating orthogonal defect classification using machine learning al-
649
- gorithms,” Future Generation Computer Systems , vol. 102, pp. 932–947,
650
- jan 2020.
651
- [24] N. V . Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer,
652
- “SMOTE: Synthetic minority over-sampling technique,” Journal of
653
- Artificial Intelligence Research , vol. 16, pp. 321–357, jan 2002. [Online].
654
- Available: https://www.jair.org/index.php/jair/article/view/10302
655
- ©2020 IEEE, DOI: 10.1109/ISSREW51248.2020.000676
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
txt/2103.10051.txt DELETED
@@ -1,435 +0,0 @@
1
- © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any
2
- current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new
3
- collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other
4
- works.
- arXiv:2103.10051v1 [cs.LG] 18 Mar 2021
- DATA-FREE MIXED-PRECISION QUANTIZATION USING NOVEL SENSITIVITY METRIC
5
- Donghyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song, and Changkyu Choi
6
- Samsung Advanced Institute of Technology, Samsung Electronics, South Korea
7
- ABSTRACT
8
- Post-training quantization is a representative technique for
9
- compressing neural networks, making them smaller and more
10
- efficient for deployment on edge devices. However, an in-
11
- accessible user dataset often makes it difficult to ensure the
12
- quality of the quantized neural network in practice. In addi-
13
- tion, existing approaches may use a single uniform bit-width
14
- across the network, resulting in significant accuracy degrada-
15
- tion at extremely low bit-widths. To utilize multiple bit-widths, a
16
- sensitivity metric plays a key role in balancing accuracy and
17
- compression. In this paper, we propose a novel sensitivity
18
- metric that considers the effect of quantization error on task
19
- loss and interaction with other layers. Moreover, we develop
20
- labeled data generation methods that are not dependent on a
21
- specific operation of the neural network. Our experiments
22
- show that the proposed metric better represents quantization
23
- sensitivity, and generated data are more feasible to be applied
24
- to mixed-precision quantization.
25
- Index Terms —Deep Learning, Quantization, Data Free
26
- 1. INTRODUCTION
27
- In recent years, deep neural networks have simplified and en-
28
- abled applications in numerous domains, especially for vision
29
- tasks [1, 2, 3]. Meanwhile, there is a need to minimize the
30
- memory footprint and reduce the network computation cost to
31
- deploy on edge devices. Significant efforts have been made to
32
- reduce the network size or accelerate inference of the neural
33
- network [4, 5, 6]. Several approaches exist to this problem,
34
- and quantization has been studied as one of the most reliable
35
- solutions. In the quantization process, low-bit representations
36
- of both weights and activations introduce quantization noise,
37
- which results in accuracy degradation. To alleviate the accu-
38
- racy loss, retraining or fine-tuning methods are developed by
39
- exploiting extensive training datasets [7, 8, 9].
40
- However, these methods are not applicable in many real-
41
- world scenarios, where the user dataset is inaccessible due to
42
- confidentiality or privacy issues [10]. It is impossible to train
43
- the network and verify the quality of a quantized neural net-
44
- work without a user dataset. Although post-training quantiza-
45
- tion is a frequently suggested method to address this problem
46
- [11, 12], a small dataset is often required to decide the optimal
47
- Equal contribution
48
- (a) Data generation
49
- (b) Profiling for quantization
50
- (c) Sensitivity measurement
51
- (d) Mixed-precision inference
52
- Fig. 1 . Overall process of a post-training mixed-precision
53
- quantization method. (a) Generate dataset from pretrained
54
- network. (b) Post-training quantization using statistics. (c)
55
- Measure the sensitivity of each layer. (d) Quantize to higher
56
- precision for a more sensitive layer.
57
- clipping range of each layer. Reliable statistics of the lay-
- ers are important to ensure the performance of the quantized
- network, since task loss increases at ultra-low-bit precision.
60
- Considering that most quantization approaches have
61
- used uniform bit allocation across the network, the mixed-
62
- precision quantization approach takes a step further in pre-
63
- serving the accuracy by lifting the limit of those approaches
64
- [13, 14]. As a necessity, several sensitivity measurement
65
- methods for maximizing the effects of mixed-precision have
66
- also been proposed because it is difficult to determine which
67
- sections of the network are comparatively less susceptible
68
- to quantization [15, 16, 17]. To measure the quantization
69
- robustness of activations and weights, it is necessary to an-
70
- alyze the statistics generated during forward and backward
71
- processes by using a reliable dataset. The prior sensitivity
72
- metrics use the difference between outputs of the original
73
- model and the quantized model when quantizing each layer
74
- separately [10, 15]. However, this approach does not con-
75
- sider the interaction of quantization error and other layers.
76
- It cannot be neglected because a lower bit quantization im-
77
- plies a larger quantization error. Other prior studies require
78
- severe approximations to compute higher-order derivatives
79
- efficiently [16, 18].
- Fig. 2. Top-2 confidences of each generated data sample on
80
- ResNet50. Confidence means the output values of the soft-
81
- max layer. For all classes except one (Class 744), the gener-
82
- ated labeled data samples assigned the highest confidence to
- the target classes corresponding to their labels.
84
- In this work, we provide a straightforward method to com-
85
- pute the layer-wise sensitivity for mixed-precision quantiza-
86
- tion, considering the interaction of quantization error. In addi-
87
- tion, we propose a data generation method, which is effective
88
- in the process of post-training mixed-precision quantization,
89
- as shown in Figure 1. Prior works [10, 19] use statistics of
90
- a particular operation (such as batch normalization), and it is
91
- ineffective when networks do not have the specific op-
92
- eration explicitly. The proposed synthetic data engineering
93
- approach is independent of network structures. Moreover, it
94
- generates labeled data to verify the quantized network.
95
- The remainder of this paper is organized as follows. Sec-
96
- tion 2 describes the data generation method and proposes a
97
- sensitivity measurement metric. Section 3 provides an exper-
98
- imental demonstration of mixed-precision quantization using
99
- the proposed metric and generated data, and the results are
100
- compared with previous approaches.
101
- 2. METHODOLOGY
102
- 2.1. Data Generation
103
- For a general data generation method, we seek to avoid any
104
- dependency on a specific operation in a convolutional net-
105
- work. As shown in Figure 1(a), we first forward a noisy image
106
- initialized from a uniform distribution in the range between 0
107
- and 255, inclusive. To produce a set of labeled data, we use
108
- a set of class-relevant vectors/matrices, which means one-hot
109
- vectors per class in the classification task. In the forward pass,
110
- the loss is computed as
111
- x_c* = argmin_{x_c} L_CE(f(y(x_c)), v_c) + λ · y(x_c)    (1)
114
- where x_c is the trainable input feature, v_c is the set of class-
- relevant vectors/matrices, called the golden set, y(·) is the neural network, and f(·) is an activation function (i.e., soft-
116
- max) used to compute cross-entropy loss ( LCE) with the
117
- golden set. In addition, we reinforce the loss by maximizing
118
- the activation of the network output to enhance the efficiency
- of the data generation process, following prior works on
120
- model interpretation [20, 21]. The calculated loss is prop-
121
- agated in the backward pass and generates the input feature x_c*
- for each class, c ∈ {1, ..., NumOfClass}. Finally, we have
124
- crafted synthetic data for each class after several iterations of
125
- forward and backward processes.
126
- Figure 2 demonstrates the reliability of crafting labeled
127
- data for each class by showing the top-2 confidences per class
128
- of synthetic data. All the generated data are used not only to
129
- measure the layer-wise sensitivity of the network but also to
130
- obtain the statistics of activations to improve the quality of the
131
- post-training quantized network.
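A minimal PyTorch-style sketch of this procedure for a single class, assuming a frozen pretrained classifier; the input resolution, iteration count, and the exact form of the activation-maximization term are assumptions based on the description above:

    import torch
    import torch.nn.functional as F

    def generate_sample(model, target_class, steps=500, lam=1.0, lr=0.04):
        """Craft a labeled synthetic input for `target_class` starting from random noise."""
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)
        x = torch.rand(1, 3, 224, 224, requires_grad=True)   # noise input, already in [0, 1]
        target = torch.tensor([target_class])
        optimizer = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            logits = model(x)
            # Cross-entropy against the one-hot "golden set" vector of this class, plus a
            # term rewarding a large target activation (our reading of the text above).
            loss = F.cross_entropy(logits, target) - lam * logits[0, target_class]
            loss.backward()
            optimizer.step()
        return x.detach()

For example, a torchvision ResNet-50 could be passed as `model`, and the loop repeated for every class to obtain one labeled sample per class.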
132
- 2.2. Sensitivity Metric
133
- The objective of mixed-precision quantization is to allocate
134
- appropriate bits to each layer to reduce the cost (e.g., bit op-
135
- erations [22]) of neural network models while suppressing the
136
- task loss growth. The sensitivity metric is an important factor
137
- for finding the optimal point between the effects of quanti-
138
- zation error and cost. First, we would like to measure the
139
- effect of quantization error on the loss of a network that has
140
- a total of L layers when the weights W_i of the i-th layer (1 ≤ i ≤ L)
- are quantized through the quantization function Q_k(·) into k-bit.
- Given input data x and the quantized neural network ŷ, we con-
- sider the Euclidean distance between y(x) and ŷ(x) as L,
- the loss of the network output, for sensitivity measurement in-
- stead of task loss. We can define the effect of the quantization
- error Q_k(W_i) − W_i of Q_k(W_i) on L as follows:
147
- Γ_{W_i}(k) = (1/N) Σ_x ∂L/∂(Q_k(W_i) − W_i)    (2)
152
- where N denotes the total size of the dataset, and Γ_{W_i}(k) are the
- gradients for the quantization error. W_i is not variable in post-
155
- training quantization; thus, we can represent Eq. 2 as weight
156
- gradients of quantized parameters by using the chain rule as
157
- follows:
158
- ∂L/∂Q_k(W) · ∂Q_k(W)/∂(Q_k(W) − W) ≃ ∂L/∂Q_k(W)    (3)
162
- The effect of the activation’s quantization error on L is
163
- represented as activation gradients of quantized activation by
164
- applying the derivation of the formula shown in the previous
165
- equations. Subsequently, we calculate the expectation for the
166
- effects of the quantized network on L by using the geometric
- mean of Γ of the m-bit quantized activations Q_m(A) and the quan-
- tized weights Q_k(W), which is formulated as
- E[|S_ŷ|] = ( ∏_{i}^{L} Γ_{A_i}(m_i) · Γ_{W_i}(k_i) )^{1/L}    (4)
175
- The gradients of the converged single-precision neural
176
- network are not changed for the same data. To measure
177
- the effect of Q_k(W_i) − W_i on other connected layers, we
- observe the gradient perturbations of W and A, which are
- caused by Q_k(W_i). Consequently, we can measure the effect
- of quantization error on other layers and the loss of the network
- together, i.e., the sensitivity of quantization, by using E[|S_ŷ|]
- when we only quantize the activations or weights of the layer for
- which we would like to measure the sensitivity. Expressing
- the single-precision case as Q_FP32(·), we have
185
- E[|S_{W_j}|] = ( ∏_{i}^{L} Γ_{A_i}(FP32) · Γ_{W_i}(k_i) )^{1/L}    (5)
- s.t.  k_i < 32 if i = j,  k_i = FP32 if i ≠ j,
194
- as the quantization sensitivity metric for W_j. It is straightfor-
- ward to compute using the back-propagation information of the quan-
- tized network, and it considers the effect of quantization error
197
- on other layers.
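A minimal sketch of this measurement for the weights of a single layer, assuming `q_model` is a copy of the network in which only that layer is quantized and everything else is kept in FP32; reducing each gradient tensor to its mean absolute value and the symbol Γ are our assumptions, not the paper's exact formulation:

    import torch

    def weight_sensitivity(fp_model, q_model, data_loader):
        """Approximate E[|S_Wj|] for the layer that is quantized inside q_model."""
        q_model.zero_grad()
        for x, _ in data_loader:
            with torch.no_grad():
                ref = fp_model(x)                    # full-precision output y(x)
            # The Euclidean distance between outputs plays the role of the loss L.
            loss = torch.norm(q_model(x) - ref)
            loss.backward()                          # gradients accumulate over the dataset
        # One scalar per parameter tensor: mean |gradient|, standing in for Γ.
        gamma = [p.grad.abs().mean() for p in q_model.parameters() if p.grad is not None]
        # Geometric mean across layers (cf. Eqs. (4)-(5)), computed in log space.
        return torch.exp(torch.stack([g.log() for g in gamma]).mean())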
198
- 3. EXPERIMENTS
199
- In this section, we first demonstrate the effectiveness of the
200
- proposed data generation method in post-training quantiza-
201
- tion. Then, we show that the proposed sensitivity metric rep-
202
- resents the quantization sensitivity of the layer effectively by
203
- using our generated data. Finally, the sensitivity values measured using
- various datasets indicate that the proposed data generation
205
- method is also credible in sensitivity measurement.
206
- To demonstrate our methodology, we use classification
207
- models VGG19, ResNet50, and InceptionV3 on the ImageNet
208
- validation dataset.
209
- 3.1. Efficacy of Data Generation Method
210
- We evaluate our method using the generated data to determine
211
- the clipping range of each layer in post-training quantization.
212
- Our experiments exploit a simple symmetric quantization ap-
213
- proach, which uses the maximum value of activation as the
214
- clipping range of each layer. Hence, maximum values are
215
- extremely crucial for confining the dynamic range to utilize
216
- the bit-width efficiently, preventing a severe accuracy drop in
217
- low-bit quantization.
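A minimal sketch of this symmetric scheme; the signed integer level count is a common convention and an assumption here:

    import numpy as np

    def symmetric_quantize(x, clip_max, bits=8):
        """Uniform symmetric quantization of activations to [-clip_max, clip_max]."""
        levels = 2 ** (bits - 1) - 1               # e.g. 127 levels for 8-bit signed values
        scale = clip_max / levels
        q = np.clip(np.round(x / scale), -levels, levels)
        return q * scale                           # simulated (de-quantized) activations

    # Profiling: the clipping range of a layer is the maximum activation observed on the
    # generated data, e.g. clip_max = max(np.abs(a).max() for a in profiled_batches)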
218
- For our proposed method, input images are initialized ac-
219
- cording to uniform distribution, which follows standardiza-
220
- tion or normalization. To generate data, we use Adam op-
221
- timizer to optimize the loss function with a learning rate of
222
- 0.04, and we found that λ = 1 works best empirically.
- Model        Dataset    Top-1   Top-5
- VGG19        ImageNet   72.31   90.81
-              Noise       3.45   10.45
-              ZeroQ          -       -
-              Proposed   71.84   90.53
- ResNet50     ImageNet   75.89   92.74
-              Noise      11.28   29.88
-              ZeroQ      75.47   92.62
-              Proposed   75.68   92.72
- InceptionV3  ImageNet   77.14   93.43
-              Noise      62.49   85.00
-              ZeroQ      76.85   93.31
-              Proposed   76.83   93.26
235
- Table 1 . Results of post-training quantization (8-bit quantiza-
236
- tion for activations and weights) using different datasets. Ima-
- geNet presents the oracle performance in terms of using the train-
- ing dataset, and the others are the results of the data-free methods.
239
- For InceptionV3, input data are initialized according to
240
- U(0, 255), which follows standardization with mean and vari-
241
- ance by considering that the model requires synthetic data to
242
- be converted into the range of [-1, 1] through preprocessing.
243
- For VGG19 and ResNet50, input data are initialized accord-
244
- ing to U(0, 255), which follows normalization with the factor
245
- of 255 because they require the range of [0, 1].
246
- Table 1 shows the empirical results for ImageNet, random
247
- noise, and ZeroQ [10]. In the experiment, ImageNet data are
248
- produced by choosing one image per class from the training
249
- data, and we generate 1000 data samples randomly using the
250
- existing method and the proposed method. As one can see,
251
- our method performs similarly to or better than the existing
- data-free method, with less than 0.5% difference from
- the ImageNet results. As the VGG19 case shows, the existing
- method does not generalize to every architecture, whereas ours main-
- tains sound performance regardless of the network structure.
256
- 3.2. Comparison of Sensitivity Metrics
257
- To verify several metrics in weight sensitivity, we measure
258
- the task loss by switching floating-point weights in the or-
259
- der of layers with higher sensitivity values, where all weights
260
- are quantized to 4-bit, and activations do not have quanti-
261
- zation error. We denote this experiment as A32W{32,4}.
- A{32,4}W32 indicates the evaluation of the metrics in acti-
263
- vation sensitivity when switching the 4-bit quantized activa-
264
- tion to floating-point, where weights are single-precision. Ze-
265
- roQ [10] measures the KL divergence and [15] measures the
266
- Euclidean distance between the original model and the quan-
267
- tized model. ZeroQ [10] only measures the weight sensitivity.
268
- HAWQ-V2 [16] uses the average Hessian trace and L2 norm
269
- of quantization perturbation. We use the data generated by
270
- the proposed method for all metrics.
271
- Figure 3 shows the results of ResNet50 and InceptionV3
- [Fig. 3. Results of sensitivity metric evaluation on ResNet50 and InceptionV3 over
- the ImageNet dataset; panels (a) ResNet50 A32W{32,4}, (b) ResNet50 A{32,4}W32,
- (c) InceptionV3 A32W{32,4}, (d) InceptionV3 A{32,4}W32. A32W{32,4} is the
- evaluation for weight sensitivity and A{32,4}W32 is for activation sensitivity.]
277
- Model        Metric      A32W{32,4}  A{32,4}W32
- ResNet50     Proposed    1           1
-              L2 [15]     1.22        1.11
-              ZeroQ(KLD)  1.14        18.48
-              HAWQ-V2     1.42        2.09
- InceptionV3  Proposed    1           1
-              L2          1.04        1.50
-              ZeroQ(KLD)  1.69        8.06
-              HAWQ-V2     1.25        6.17
286
- Table 2 . Quantitative comparison of different sensitivity met-
287
- rics through relative area calculation of task loss graph.
288
- over ImageNet dataset. Our sensitivity metric reliably and
289
- quickly lowered the task loss, which means that the proposed
290
- sensitivity metric is good at identifying weights and activations that are
291
- comparatively more susceptible to quantization. To express
292
- the results of Figure 3 quantitatively, we calculate the relative
293
- area under the curve of task loss graph considering the result
294
- of the proposed metric as 1 and summarize it in Table 2. Our
295
- proposed method has the smallest area in all results.
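A minimal sketch of this relative-area comparison, assuming one task-loss curve per metric (the task loss after switching 0, 1, ..., L layers back to floating point); trapezoidal integration is an assumption:

    import numpy as np

    def relative_areas(task_loss_curves):
        """task_loss_curves: dict mapping metric name -> task-loss curve (list of floats)."""
        areas = {m: np.trapz(curve) for m, curve in task_loss_curves.items()}
        base = areas["Proposed"]                  # normalize so the proposed metric is 1
        return {m: a / base for m, a in areas.items()}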
296
- 3.3. Sensitivity According to Dataset Type
297
- To show the importance of the data in measuring the sensitiv-
298
- ity, we evaluate the proposed sensitivity metric over the differ-
299
- ent datasets of Section 3.1 on the 4-bit quantized network that
300
- clipped the activation range using ImageNet dataset. We mea-
301
- sure the task loss as in Section 3.2. The proposed dataset is the
302
- most similar in sensitivity to the ImageNet dataset, as shown
303
- in Table 3. Notably, the proposed data generation method pro-
304
- vides reliable statistics similar to the original training dataset.
305
- For InceptionV3, preprocessed ImageNet data are in the range
306
- of [-2.03, 2.52]. However, the range of the preprocessed data
307
- from [10] is measured as [-11.50, 11.21], while that of our
308
- data is in [-2.35, 2.36], whose maximum value is almost simi-
309
- lar to that of the ImageNet dataset. These similar statistics are
310
- seen in ResNet50. The incorrect statistics of the activation
311
- data corresponding to the original training dataset implies in-
- Model        Dataset   A32W{32,4}  A{32,4}W32
- ResNet50     Proposed  1           1
-              ImageNet  0.74        1.60
-              Noise     1.35        6.76
-              ZeroQ     1.11        3.17
- InceptionV3  Proposed  1           1
-              ImageNet  1.07        2.48
-              Noise     1.36        3.54
-              ZeroQ     1.47        2.63
320
- Table 3 . Quantitative comparison of using different datasets
321
- in sensitivity measurement through relative area calculation
322
- of task loss graph.
323
- accurate sensitivity measurement [10]. Thus, our generation
324
- method can be used for reliable sensitivity measurement.
325
- 4. CONCLUSION
326
- In this paper, we proposed an effective data generation
327
- method for post-training mixed-precision quantization. Our
328
- approach is to train the random noise to generate data by using
329
- class-relevant vectors. It is not only independent of network
330
- structure but also provides a labeled dataset. We demonstrate
331
- that the generated data are sufficient to ensure the quality of
332
- post-training quantization. Furthermore, we proposed a novel
333
- sensitivity metric, which is important to optimize bit alloca-
334
- tion. The proposed sensitivity metric considers the effect of
335
- quantization on other relative layers and task loss together us-
336
- ing the gradient perturbation of the quantized neural network.
337
- Comparisons of sensitivity metrics were made to show the
338
- extent to which a layer with high sensitivity, measured with
339
- sensitivity metrics of other methods, affects task loss. The
340
- proposed sensitivity metric outperformed the other metrics in
- representing the effect of quantization error. We leave optimizing
342
- bit allocation using the proposed metric and applying to other
343
- tasks as part of future work.
- 5. REFERENCES
344
- [1] C. Szegedy, V . Vanhoucke, S. Ioffe, J. Shlens, and
345
- Z. Wojna, “Rethinking the Inception architecture for
346
- computer vision,” in Proceedings of the IEEE confer-
347
- ence on computer vision and pattern recognition , 2016,
348
- pp. 2818–2826.
349
- [2] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed,
350
- C.-Y . Fu, and A. C. Berg, “SSD: Single shot multibox
351
- detector,” in European conference on computer vision .
352
- Springer, 2016, pp. 21–37.
353
- [3] L.-C. Chen, Y . Zhu, G. Papandreou, F. Schroff, and
354
- H. Adam, “Encoder-decoder with atrous separable con-
355
- volution for semantic image segmentation,” in Proceed-
356
- ings of the European conference on computer vision
357
- (ECCV) , 2018, pp. 801–818.
358
- [4] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen,
359
- M. Tan, W. Wang, Y . Zhu, R. Pang, V . Vasudevan, Q. V .
360
- Le, and H. Adam, “Searching for mobilenetv3,” arXiv
361
- preprint arxiv:1905.02244 , 2019.
362
- [5] G. Hinton, O. Vinyals, and J. Dean, “Distilling
363
- the knowledge in a neural network,” arXiv preprint
364
- arXiv:1503.02531 , 2015.
365
- [6] S. Han, H. Mao, and W. J. Dally, “Deep compres-
366
- sion: Compressing deep neural networks with prun-
367
- ing, trained quantization and huffman coding,” arXiv
368
- preprint arXiv:1510.00149 , 2015.
369
- [7] S. Jung, C. Son, S. Lee, J. Son, J.-J. Han, Y . Kwak,
370
- S. J. Hwang, and C. Choi, “Learning to quantize deep
371
- networks by optimizing quantization intervals with task
372
- loss,” in 2019 IEEE/CVF Conference on Computer Vi-
373
- sion and Pattern Recognition , 2019.
374
- [8] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang,
375
- V . Srinivasan, and K. Gopalakrishnan, “PACT: Param-
376
- eterized clipping activation for quantized neural net-
377
- works,” arXiv preprint arXiv:1805.06085 , 2018.
378
- [9] D. Zhang, J. Yang, D. Ye, and G. Hua, “LQ-Nets:
379
- Learned quantization for highly accurate and compact
380
- deep neural networks,” in 15th European Conference on
381
- Computer Vision , 2018.
382
- [10] Y . Cai, Z. Yao, Z. Dong, A. Gholami, M. W. Mahoney,
383
- and K. Keutzer, “ZeroQ: A novel zero shot quantiza-
384
- tion framework,” in 17th Computer Vision and Pattern
385
- Recognition , 2020.
386
- [11] P. Nayak, D. Zhang, and S. Chai, “Bit efficient quan-
387
- tization for deep neural networks,” arXiv preprint
388
- arXiv:1910.04877 , 2019.[12] M. Nagel, M. van Baalen, T. Blankevoort, and
389
- M. Welling, “Data-free quantization through weight
390
- equalization and bias correction,” in 2019 international
391
- Conference on Computer Vison , 2019.
392
- [13] K. Wang, Z. Liu, Y . Lin, J. Lin, and S. Han, “HAQ:
393
- Hardware-aware automated quantization with mixed
394
- precision,” in The 16th Computer Vision and Pattern
395
- Recognition , 2019.
396
- [14] B. Wu, Y . Wang, P. Zhang, Y . Tian, P. Vajda, and
397
- K. Keutzer, “Mixed precision quantization of ConvNets
398
- via differentiable neural architecture search,” arXiv
399
- preprint arxiv:1812.00090 , 2018.
400
- [15] W. Zhe, J. Lin, V . Chandrasekhar, and B. Girod, “Op-
401
- timizing the bit allocation for compression of weights
402
- and activations of deep neural networks,” in 2019 IEEE
403
- International Conference on Image Processing (ICIP) .
404
- IEEE, 2019, pp. 3826–3830.
405
- [16] Z. Dong, Z. Yao, Y . Cai, D. Arfeen, A. Gholami, M. W.
406
- Mahoney, and K. Keutzer, “HAWQ-V2: Hessian aware
407
- trace-weighted quantization of neural networks,” arXiv
408
- preprint arxiv:1911.03852 , 2019.
409
- [17] P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and
410
- J. Kautz, “Importance estimation for neural network
411
- pruning,” in Proceedings of the IEEE Conference on
412
- Computer Vision and Pattern Recognition , 2019, pp.
413
- 11 264–11 272.
414
- [18] Z. Dong, Z. Yao, A. Gholami, M. W. Mahoney, and
415
- K. Keutzer, “HAWQ: Hessian aware quantization of
416
- neural networks with mixed-precision,” in 2019 Intena-
417
- tional Conference on Computer Vison , 2019.
418
- [19] H. Yin, P. Molchanov, J. M. Alvarez, Z. Li, A. Mallya,
419
- D. Hoiem, N. K. Jha, and J. Kautz, “Dreaming to dis-
420
- till: Data-free knowledge transfer via DeepInversion,” in
421
- Proceedings of the IEEE/CVF Conference on Computer
422
- Vision and Pattern Recognition , 2020, pp. 8715–8724.
423
- [20] A. Mordvintsev, C. Olah, and M. Tyka,
424
- “Inceptionism: Going deeper into neural networks,”
425
- 2015. [Online]. Available: https://ai.googleblog.com/
- 2015/06/inceptionism-going-deeper-into-neural.html
427
- [21] F. M. Graetz, “How to visualize convolutional
428
- features in 40 lines of code,” 2019. [Online].
429
- Available: https://towardsdatascience.com/how-to-
430
- visualize-convolutional-features-in-40-lines-of-code-
431
- 70b7d87b0030
432
- [22] M. van Baalen, C. Louizos, M. Nagel, R. A. Amjad,
433
- Y . Wang, T. Blankevoort, and M. Welling, “Bayesian
434
- bits: Unifying quantization and pruning,” arXiv preprint
435
- arXiv:2005.07093 , 2020.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
txt/2103.13922.txt DELETED
@@ -1,1473 +0,0 @@
1
- ScanGAN360: A Generative Model of Realistic Scanpaths for 360° Images
2
- Daniel Martin1, Ana Serrano2, Alexander W. Bergman3, Gordon Wetzstein3,
- Belen Masia1
- 1Universidad de Zaragoza, I3A   2Centro Universitario de la Defensa, Zaragoza   3Stanford University
5
- Abstract
6
- Understanding and modeling the dynamics of human
7
- gaze behavior in 360environments is a key challenge in
8
- computer vision and virtual reality. Generative adversar-
9
- ial approaches could alleviate this challenge by generat-
10
- ing a large number of possible scanpaths for unseen im-
11
- ages. Existing methods for scanpath generation, however,
12
- do not adequately predict realistic scanpaths for 360im-
13
- ages. We present ScanGAN360, a new generative adver-
14
- sarial approach to address this challenging problem. Our
15
- network generator is tailored to the specifics of 360im-
16
- ages representing immersive environments. Specifically, we
17
- accomplish this by leveraging the use of a spherical adapta-
18
- tion of dynamic-time warping as a loss function and propos-
19
- ing a novel parameterization of 360scanpaths. The quality
20
- of our scanpaths outperforms competing approaches by a
21
- large margin and is almost on par with the human baseline.
22
- ScanGAN360 thus allows fast simulation of large numbers
23
- of virtual observers, whose behavior mimics real users, en-
24
- abling a better understanding of gaze behavior and novel
25
- applications in virtual scene design.
26
- 1. Introduction
27
- Virtual reality (VR) is an emerging medium that unlocks
28
- unprecedented user experiences. To optimize these expe-
29
- riences, however, it is crucial to develop computer vision
30
- techniques that help us understand how people explore im-
31
- mersive virtual environments. Models for time-dependent
32
- visual exploration behavior are important for designing and
33
- editing VR content [42], for generating realistic gaze trajec-
34
- tories of digital avatars [18], for understanding dynamic vi-
35
- sual attention and visual search behavior [60], and for devel-
36
- oping new rendering, display, and compression algorithms,
37
- among other applications.
38
- Current approaches that model how people explore vir-
39
- tual environments often leverage saliency prediction [43,
40
- 13, 31, 2]. While this is useful for some applications, the
41
- fixation points predicted by these approaches do not account
42
- Figure 1. We present ScanGAN360, a generative adversarial ap-
43
- proach to scanpath generation for 360images. ScanGAN360
44
- generates realistic scanpaths ( bottom rows ), outperforming state-
45
- of-the-art methods and mimicking the human baseline ( top row ).
46
- for the time-dependent visual behavior of the user, making
47
- it difficult to predict the order of fixations, or give insight
48
- into how people explore an environment over time. For this
49
- purpose, some recent work has explored scanpath predic-
50
- tion [2, 3, 62, 4], but these algorithms do not adequately
51
- model how people explore immersive virtual environments,
52
- resulting in erratic or non-plausible scanpaths.
53
- In this work, we present ScanGAN360, a novel frame-
54
- work for scanpath generation for 360images (Figure 1).
55
- Our model builds on a conditional generative adversarial
56
- network (cGAN) architecture, for which we discuss and val-
57
- idate two important insights that we show are necessary for
58
- realistic scanpath generation. First, we propose a loss func-
59
- tion based on a spherical adaptation of dynamic time warp-
60
- ing (DTW), which is a key aspect for training our GAN ro-
61
- bustly. DTW is a metric for measuring similarity between
62
- two time series, such as scanpaths, which to our knowledge
63
- has not been used to train scanpath-generating GANs. Sec-
64
- ond, to adequately tackle the problem of scanpath genera-
65
- tion in 360° images, we present a novel parameterization of the scanpaths. These insights allow us to demonstrate state-
66
- of-the-art results for scanpath generation in VR, close to the
67
- human baseline and far surpassing the performance of ex-
68
- isting methods. Our approach is the first to enable robust
69
- scanpath prediction over long time periods up to 30 sec-
70
- onds, and, unlike previous work, our model does not rely
71
- on saliency, which is typically not available as ground truth.
72
- Our model produces about 1,000 scanpaths per second,
73
- which enables fast simulation of large numbers of virtual
74
- observers , whose behavior mimics that of real users. Us-
75
- ing ScanGAN360, we explore applications in virtual scene
76
- design, which is useful in video games, interior design,
77
- cinematography, and tourism, and scanpath-driven video
78
- thumbnail generation of 360images, which provides pre-
79
- views of VR content for social media platforms. Beyond
80
- these applications, we propose to use ScanGAN360 for
81
- applications such as gaze behavior simulation for virtual
82
- avatars or gaze-contingent rendering. Extended discussion
83
- and results on applications are included in the supplemen-
84
- tary material and video.
85
- We will make our source code and pre-trained model
86
- publicly available to promote future research.
87
- 2. Related work
88
- Modeling and predicting attention The multimodal na-
89
- ture of attention [30], together with the complexity of hu-
90
- man gaze behavior, make this a very challenging task. Many
91
- works devoted to it have relied on representations such as
92
- saliency, which is a convenient representation for indicat-
93
- ing the regions of an image more likely to attract atten-
94
- tion. Early strategies for saliency modeling have focused
95
- on either creating hand-crafted features representative of
96
- saliency [19, 52, 61, 29, 20, 7], or directly learning data-
97
- driven features [49, 22]. With the proliferation of exten-
98
- sive datasets of human attention [43, 39, 20, 8, 59], deep
99
- learning–based methods for saliency prediction have been
100
- successfully applied, yielding impressive results [37, 36, 14,
101
- 50, 54, 55, 58].
102
- However, saliency models do not take into account the
103
- dynamic nature of human gaze behavior, and therefore, they
104
- are unable to model or predict time-varying aspects of at-
105
- tention. Being able to model and predict dynamic explo-
106
- ration patterns has been proven to be useful, for example,
107
- for avatar gaze control [12, 41], video rendering in virtual
108
- reality [26], or for directing users’ attention over time in
109
- many contexts [9, 38]. Scanpath models aim to predict vi-
110
- sual patterns of exploration that an observer would perform
111
- when presented with an image. In contrast to saliency mod-
112
- els, scanpath models typically focus on predicting plausi-
113
- ble scanpaths, i.e., they do not predict a unique scanpath
114
- and instead they try to mimic human behavior when ex-
115
- ploring an image, taking into account the variability be-
116
- tween different observers. Ellis and Smith [16] were pio-neers in this field: they proposed a general framework for
117
- generating scanpaths based on Markov stochastic processes.
118
- Several approaches have followed this work, incorporating
119
- behavioral biases in the process in order to produce more
120
- plausible scanpaths [24, 47, 27, 48]. In recent years, deep
121
- learning models have been used to predict human scanpaths
122
- based on neural network features trained on object recogni-
123
- tion [22, 53, 14, 5].
124
- Attention in 360images Predicting plausible scanpaths
125
- in 360imagery is a more complex task: Observers do not
126
- only scan a given image with their gaze, but they can now
127
- also turn their head or body, effectively changing their view-
128
- port over time. Several works have been proposed for mod-
129
- eling saliency in 360images [33, 43, 31, 11, 44]. However,
130
- scanpath prediction has received less attention. In their re-
131
- cent work, Assens et al. [3] generalize their 2D model to
132
- 360images, but their loss function is unable to reproduce
133
- the behavior of ground truth scanpaths (see Figure 4, third
134
- column). A few works have focused on predicting short-
135
- term sequential gaze points based on users’ previous his-
136
- tory for 360videos, but they are limited to small temporal
137
- windows (from one to ten seconds) [56, 25, 35]. For the
138
- case of images, a number of recent methods focus on devel-
139
- oping improved saliency models and principled methods to
140
- sample from them [2, 4, 62].
141
- Instead, we directly learn dynamic aspects of attention
142
- from ground truth scanpaths by training a generative model
143
- in an adversarial manner, with an architecture and loss
144
- function specifically designed for scanpaths in 360im-
145
- ages. This allows us to (i) effectively mimic human be-
146
- havior when exploring scenes, bypassing the saliency gen-
147
- eration and sampling steps, and (ii) optimize our network to
148
- stochastically generate 360scanpaths, taking into account
149
- observer variability.
150
- 3. Our Model
151
- We adopt a generative adversarial approach, specifically
152
- designed for 360content in which the model learns to gen-
153
- erate a plausible scanpath, given the 360image as a con-
154
- dition. In the following, we describe the parameterization
155
- employed for the scanpaths, the design of our loss function
156
- for the generator, and the particularities of our conditional
157
- GAN architecture, ending with details about the training
158
- process.
159
- 3.1. Scanpath Parameterization
160
- Scanpaths are commonly provided as a sequence of two-
161
- dimensional values corresponding to the coordinates (i;j)
162
- of each gaze point in the image. When dealing with 360
163
- images in equirectangular projections, gaze points are also
164
- often represented by their latitude and longitude (θ, φ),
- Figure 2. Illustration of our generator and discriminator networks. Both networks have a two-branch structure: Features extracted from the
165
- 360image with the aid of a CoordConv layer and an encoder-like network are concatenated with the input vector for further processing.
166
- The generator learns to transform this input vector, conditioned by the image, into a plausible scanpath. The discriminator takes as input
167
- vector a scanpath (either captured or synthesized by the generator), as well as the corresponding image, and determines the probability of
168
- this scanpath being real (or fake). We train them end-to-end in an adversarial manner, following a conditional GAN scheme. Please refer
169
- to the text for details on the loss functions and architecture.
170
- 2[
171
- 2;
172
- 2]and2[;]. However, these parame-
173
- terizations either suffer from discontinuities at the borders
174
- of a 360image, or result in periodic, ambiguous values.
175
- The same point of the scene can have two different repre-
176
- sentations in these parameterizations, hindering the learning
177
- process.
178
- We therefore resort to a three-dimensional parameteriza-
179
- tion of our scanpaths, where each gaze point p= (;)is
180
- transformed into its three-dimensional representation P=
181
- (x;y;z )such that:
182
- x=cos()cos();y=cos()sin();z=sin():
183
- This transformation assumes, without loss of generality,
184
- that the panorama is projected over a unit sphere. We
185
- use this parameterization for our model, which learns a
186
- scanpath Pas a set of three-dimensional points over time.
187
- Specifically, given a number of samples Tover time, P=
188
- (P1;:::;PT)2R3T. The results of the model are then
189
- converted back to a two-dimensional parameterization in
190
- terms of latitude ( =atan2 (z;p
191
- x2+y2)) and longitude
192
- (=atan2 (y;x)) for display and evaluation purposes.
193
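For illustration, a minimal NumPy sketch of this conversion (our own illustration, not code released with the paper; function names are ours) could look as follows:

    import numpy as np

    def latlon_to_unit_sphere(theta, phi):
        # theta: latitude in [-pi/2, pi/2]; phi: longitude in [-pi, pi]
        x = np.cos(theta) * np.cos(phi)
        y = np.cos(theta) * np.sin(phi)
        z = np.sin(theta)
        return np.stack((x, y, z), axis=-1)

    def unit_sphere_to_latlon(p):
        # p: (..., 3) points on the unit sphere
        x, y, z = p[..., 0], p[..., 1], p[..., 2]
        theta = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))   # latitude
        phi = np.arctan2(y, x)                            # longitude
        return theta, phi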
- 3.2. Overview of the Model
194
Our model is a conditional GAN, where the condition is the RGB 360° image for which we wish to estimate a scanpath. The generator G is trained to generate a scanpath from a latent code z (drawn randomly from a uniform distribution U(-1, 1)), conditioned by the RGB 360° image y. The discriminator D takes as input a potential scanpath (x or G(z, y)), as well as the condition y (the RGB 360° image), and outputs the probability of the scanpath being real (or fake). The architecture of both networks, generator and discriminator, can be seen in Figure 2, and further details related to the architecture are described in Section 3.4.
204
- 3.3. Loss Function
205
- The objective function of a conventional conditional
206
- GAN is inspired by a minimax objective from game theory,
207
- with an objective [32]:
208
min_G max_D V(D, G) = E_x[log D(x, y)] + E_z[log(1 - D(G(z, y), y))].   (1)
212
- We can separate this into two losses, one for the generator,
213
- LG, and one for the discriminator, LD:
214
L_G = E_z[log(1 - D(G(z, y), y))],   (2)

L_D = E_x[log D(x, y)] + E_z[log(1 - D(G(z, y), y))].   (3)
216
While this objective function suffices in certain cases, as the complexity of the problem increases the generator may not be able to learn the transformation from the input distribution into the target one. One can resort to adding a loss term to L_G, and in particular one that enforces similarity to the scanpath ground truth data. However, using a conventional data term, such as MSE, does not yield good results (Section 4.4 includes an evaluation of this). To address this issue, we introduce a novel term in L_G specifically targeted to our problem, and based on dynamic time warping [34].

Dynamic time warping (DTW) measures the similarity between two temporal sequences, considering both the shape and the order of the elements of a sequence, without forcing a one-to-one correspondence between elements of the time series. For this purpose, it takes into account all the possible alignments of two time series r and s, and computes the one that yields the minimal distance between them. Specifically, the DTW loss function between two time series r ∈ R^{k×n} and s ∈ R^{k×m} can be expressed as [15]:

DTW(r, s) = min_A ⟨A, Δ(r, s)⟩,   (4)

where Δ(r, s) = [δ(r_i, s_j)]_ij ∈ R^{n×m} is a matrix containing the distances δ(·,·) between each pair of points in r and s, A is a binary matrix that accounts for the alignment (or correspondence) between r and s, and ⟨·,·⟩ is the inner product between both matrices.
242
In our case, r = (r_1, ..., r_T) ∈ R^{3×T} and s = (s_1, ..., s_T) ∈ R^{3×T} are two scanpaths that we wish to compare. While the Euclidean distance between each pair of points is usually employed when computing δ(r_i, s_j) for Equation 4, in our scenario that would yield erroneous distances derived from the projection of the 360° image (both if done in 2D over the image, or in 3D with the parameterization described in Section 3.1). We instead use the distance over the surface of a sphere, or spherical distance, and define Δ_sph(r, s) = [δ_sph(r_i, s_j)]_ij ∈ R^{n×m} such that:

δ_sph(r_i, s_j) = 2 arcsin( (1/2) √( (r_i^x - s_j^x)² + (r_i^y - s_j^y)² + (r_i^z - s_j^z)² ) ),   (5)

leading to our spherical DTW:

DTW_sph(r, s) = min_A ⟨A, Δ_sph(r, s)⟩.   (6)

We incorporate the spherical DTW to the loss function of the generator (L_G, Equation 2), yielding our final generator loss function L′_G:

L′_G = L_G + λ E_z[DTW_sph(G(z, y), x_gt)],   (7)

where x_gt denotes a ground truth scanpath for the conditioning image y, and the weight λ is empirically set to 0.1.
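As an illustration of Equations 4-6, a minimal NumPy sketch (our own, not the reference implementation; function names are ours) that builds the spherical distance matrix and evaluates the DTW dynamic program is:

    import numpy as np

    def spherical_distance_matrix(r, s):
        # delta_sph(r_i, s_j) = 2 * arcsin(0.5 * ||r_i - s_j||), Equation 5,
        # for two scanpaths given as (n, 3) and (m, 3) arrays of unit vectors.
        diff = r[:, None, :] - s[None, :, :]          # (n, m, 3)
        chord = np.linalg.norm(diff, axis=-1)         # chord length between unit vectors
        return 2.0 * np.arcsin(np.clip(0.5 * chord, 0.0, 1.0))

    def dtw(cost):
        # Classic O(n*m) dynamic program over a pairwise cost matrix (Equations 4 and 6).
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                     acc[i, j - 1],
                                                     acc[i - 1, j - 1])
        return acc[n, m]

    def dtw_sph(r, s):
        return dtw(spherical_distance_matrix(r, s))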
275
- While a loss function incorporating DTW (or spherical
276
- DTW) is not differentiable, a differentiable version, soft-
277
- DTW, has been proposed. We use this soft-DTW in our
278
- model; details on it can be found in Section S1 in the sup-
279
- plementary material or in the original publication [15].
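A schematic PyTorch-style sketch of these losses (our own illustration under the assumptions above; D, G and the helper soft_dtw_sph are placeholders, the latter standing for a differentiable spherical soft-DTW such as the one described in Section S1) is:

    import torch

    EPS = 1e-7

    def d_loss(D, G, x_real, y, z):
        # Discriminator: maximize Equation 3, i.e. minimize its negative.
        d_real = D(x_real, y).clamp(EPS, 1 - EPS)
        d_fake = D(G(z, y).detach(), y).clamp(EPS, 1 - EPS)
        return -(torch.log(d_real).mean() + torch.log(1 - d_fake).mean())

    def g_loss(D, G, x_gt, y, z, lam=0.1):
        # Generator: Equation 2 plus the spherical (soft-)DTW term of Equation 7.
        fake = G(z, y)
        d_fake = D(fake, y).clamp(EPS, 1 - EPS)
        adversarial = torch.log(1 - d_fake).mean()
        return adversarial + lam * soft_dtw_sph(fake, x_gt)   # soft_dtw_sph: assumed helper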
280
- 3.4. Model Architecture
281
Both our generator and discriminator are based on a two-branch structure (see Figure 2), with one branch for the conditioning image y and the other for the input vector (z in the generator, and x or G(z, y) in the discriminator). The image branch extracts features from the 360° image, yielding a set of latent features that will be concatenated with the input vector for further processing. Due to the distortion inherent to equirectangular projections, traditional convolutional feature extraction strategies are not well suited for 360° images: They use a kernel window where neighboring relations are established uniformly around a pixel. Instead, we extract features using panoramic (or spherical) convolutions [13]. Spherical convolutions are a type of dilated convolutions where the relations between elements in the image are not established in image space, but in a gnomonic, non-distorted space. These spherical convolutions can represent kernels as patches tangent to a sphere onto which the 360° image is reprojected.

In our problem of scanpath generation, the location of the features in the image is of particular importance. Therefore, to facilitate spatial learning of the network, we use the recently presented CoordConv strategy [28], which gives convolutions access to their own input coordinates by adding extra coordinate channels. We do this by concatenating a CoordConv layer to the input 360° image (see Figure 2). This layer also helps stabilize the training process, as shown in Section 4.4.
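A minimal PyTorch sketch of the coordinate-channel concatenation (our own illustration of the CoordConv idea [28], not the authors' exact layer) is:

    import torch

    def add_coord_channels(img):
        # img: (B, C, H, W) equirectangular panorama; returns (B, C + 2, H, W).
        b, _, h, w = img.shape
        ys = torch.linspace(-1.0, 1.0, h, device=img.device)   # normalized latitude axis
        xs = torch.linspace(-1.0, 1.0, w, device=img.device)   # normalized longitude axis
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack((xx, yy)).unsqueeze(0).expand(b, -1, -1, -1)   # (B, 2, H, W)
        return torch.cat((img, coords), dim=1)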
307
- 3.5. Dataset and Training Details
308
We train our model using Sitzmann et al.'s [43] dataset, composed of 22 different 360° images and a total of 1,980 scanpaths from 169 different users. Each scanpath contains gaze information captured during 30 seconds with a binocular eye tracking recorder at 120 Hz. We sample these captured scanpaths at 1 Hz (i.e., T = 30), and reparameterize them (Section 3.1), so that each scanpath is a sequence P = (P_0, ..., P_29) ∈ R^{3×T}. Given the relatively small size of the dataset, we perform data augmentation by longitudinally shifting the 360° images (and adjusting their scanpaths accordingly); specifically, for each image we generate six different variations with random longitudinal shifting. We use 19 of the 22 images in this dataset for training, and reserve three to be part of our test set (more details on the full test set are described in Section 4). With the data augmentation process, this yields 114 images in the training set.
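A minimal NumPy sketch of this longitudinal-shift augmentation (our own illustration, not the released training code; names and signatures are ours) is:

    import numpy as np

    def augment_longitudinal(img, lon, lat, rng):
        # img: (H, W, 3) equirectangular panorama; lon/lat: (T,) gaze angles in radians.
        h, w, _ = img.shape
        shift_px = rng.integers(0, w)                    # random horizontal shift in pixels
        shift_rad = 2.0 * np.pi * shift_px / w
        img_aug = np.roll(img, shift_px, axis=1)         # wrap around the 360° border
        lon_aug = np.mod(lon + shift_rad + np.pi, 2.0 * np.pi) - np.pi   # keep in [-pi, pi)
        return img_aug, lon_aug, lat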
324
During our training process we use the Adam optimizer [21], with constant learning rates l_G = 10^-4 for the generator and l_D = 10^-5 for the discriminator, both of them with momentum β = (0.5, 0.99). Further training and implementation details can be found in the supplementary material.
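A schematic PyTorch sketch of the corresponding optimizer setup and one adversarial update (our own illustration; the loss callables are assumed to implement Equations 2-3 and 7, and the two generator updates per discriminator update follow the supplementary material, Section S4) is:

    import torch

    def make_optimizers(G, D):
        # Adam with the learning rates and momentum reported above.
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.99))
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-5, betas=(0.5, 0.99))
        return opt_g, opt_d

    def train_step(G, D, opt_g, opt_d, batch, g_loss_fn, d_loss_fn):
        x_real, y, z = batch                 # scanpaths, images, latent codes
        # one discriminator update
        opt_d.zero_grad()
        d_loss_fn(D, G, x_real, y, z).backward()
        opt_d.step()
        # two generator updates per discriminator update (reusing z for simplicity)
        for _ in range(2):
            opt_g.zero_grad()
            g_loss_fn(D, G, x_real, y, z).backward()
            opt_g.step()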
330
- 4. Validation and Analysis
331
We evaluate the quality of the generated scanpaths with respect to the measured, ground truth scanpaths, as well as to other approaches. We also ablate our model to illustrate the contribution of the different design choices.

Figure 3. Results of our model for two different scenes: market and mall from Rai et al.'s dataset [39]. From left to right: 360° image, ground truth sample scanpath, and three scanpaths generated by our model. The generated scanpaths are plausible and focus on relevant parts of the scene, yet they exhibit the diversity expected among different human observers. Please refer to the supplementary material for a larger set of results.

We evaluate our model on two different test sets. First, using the three images from Sitzmann et al.'s dataset [43] left out of the training (Section 3.5): room, chess and robots. To ensure our model has an ability to extrapolate, we also evaluate it with a different dataset from Rai et al. [39]. This dataset consists of 60 scenes watched by 40 to 42 observers for 25 seconds. Thus, when comparing to their ground truth, we cut our 30-second scanpaths to the maximum length of their data. Please also refer to the supplementary material for more details on the test set, as well as further evaluation and results.
349
- 4.1. Scanpath Similarity Metrics
350
- Our evaluation is both quantitative and qualitative. Eval-
351
- uating scanpath similarity is not a trivial task, and a num-
352
- ber of metrics have been proposed in the literature, each fo-
353
- cused on a different context or aspect of gaze behavior [17].
354
- Proposed metrics can be roughly categorized into: (i) di-
355
- rect measures based on Euclidean distance; (ii) string-based
356
- measures based on string alignment techniques (such as the
357
- Levenshtein distance, LEV); (iii) curve similarity methods;
358
- (iv) metrics from time-series analysis (like DTW, on which
359
- our loss function is based); and (v) metrics from recurrence
360
- analysis ( e.g., recurrence measure REC and determinism
361
- measure DET). We refer the reader to supplementary mate-
362
- rial and the review by Fahimi and Bruce [17] for an in-depth
363
- explanation and comparison of existing metrics. Here, we
364
- include a subset of metrics that take into account both the
365
- position and the ordering of the points (namely LEV and
366
- DTW), and two metrics from recurrence analysis (REC and
367
- DET), which have been reported to be discriminative in
368
- revealing viewing behaviors and patterns when comparing
369
- scanpaths. We nevertheless compute our evaluation for the
370
- full set of metrics reviewed by Fahimi and Bruce [17] in the
371
- supplementary material.
372
Since for each image we have a number of ground truth scanpaths, and a set of generated scanpaths, we compute
373
- each similarity metric for all possible pairwise comparisons
374
- (each generated scanpath against each of the ground truth
375
- scanpaths), and average the result. In order to provide an
376
- upper baseline for each metric, we also compute the human
377
- baseline ( Human BL ) [57], which is obtained by comparing
378
- each ground truth scanpath against all the other ground truth
379
- ones, and averaging the results. In a similar fashion, we
380
- compute a lower baseline based on sampling gaze points
381
- randomly over the image ( Random BL ).
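A minimal sketch of this pairwise evaluation protocol (our own illustration; metric stands for any of the similarity measures above, for instance the spherical DTW sketched earlier) is:

    import numpy as np

    def mean_pairwise(metric, generated, ground_truth):
        # Average a similarity metric over all generated-vs-ground-truth pairs.
        scores = [metric(g, gt) for g in generated for gt in ground_truth]
        return float(np.mean(scores))

    def human_baseline(metric, ground_truth):
        # Compare each ground truth scanpath against all the other ground truth ones.
        scores = [metric(a, b)
                  for i, a in enumerate(ground_truth)
                  for j, b in enumerate(ground_truth) if i != j]
        return float(np.mean(scores))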
382
- 4.2. Results
383
- Qualitative results of our model can be seen in Figures 3
384
- and 1 for scenes with different layouts. Figure 3, from left
385
- to right, shows: the scene, a sample ground truth (captured)
386
- scanpath, and three of our generated scanpaths sampled
387
- from the generator. Our model is able to produce plausible,
388
- coherent scanpaths that focus on relevant parts of the scene.
389
- In the generated scanpaths we observe regions where the
390
- user focuses (points of a similar color clustered together), as
391
- well as more exploratory behavior. The generated scanpaths
392
- are diverse but plausible, as one would expect if different
393
- users watched the scene (the supplementary material con-
394
- tains more ground truth, measured scanpaths, showing this
395
- diversity). Further, our model is not affected by the inherent
396
distortions of the 360° image. This is apparent, for example, in the market scene: The central corridor, narrow and seemingly featureless, is observed by generated virtual observers. Quantitative results in Table 1 further show that our
400
- generated scanpaths are close to the human baseline ( Hu-
401
- man BL ), both in the test set from Sitzmann et al.’s dataset,
402
- and over Rai et al.’s dataset. A value close to Human BL in-
403
- dicates that the generated scanpaths are as valid or as plau-
404
- sible as the captured, ground truth ones. Note that obtaining
405
- a value lower than Human BL is possible, if the generated
406
- scanpaths are on average closer to the ground truth ones,
407
- and exhibit less variance.
408
Since our model is generative, it can generate as many scanpaths as needed and model many different potential observers. We perform our evaluations on a random set of 100 scanpaths generated by our model. We choose this number to match the number of generated scanpaths available for competing methods, to perform a fair comparison. Nevertheless, we have analyzed the stability of our generative model by computing our evaluation metrics for a variable number of generated scanpaths: Our results are very stable with the number of scanpaths (please see Table 2 in the supplementary material).

Figure 4. Qualitative comparison to previous methods for five different scenes from Rai et al.'s dataset. In each row, from left to right: 360° image, and a sample scanpath obtained with our method, PathGAN [3], SaltiNet [4], and Zhu et al.'s [62]. Note that, in the case of PathGAN, we are including the results directly taken from their paper, thus the different visualization. Our method produces plausible scanpaths focused on meaningful regions, in comparison with other techniques. Please see text for details, and the supplementary material for a larger set of results, also including ground truth scanpaths.
423
- 4.3. Comparison to Other Methods
424
We compare ScanGAN360 to three methods devoted to scanpath prediction in 360° images: SaltiNet-based scanpath prediction [2, 4] (we will refer to it as SaltiNet in the following), PathGAN [3] and Zhu et al.'s method [62]. For
428
- comparisons to SaltiNet we use the public implementation
429
- of the authors, while the authors of Zhu et al. kindly pro-
430
- vided us with the results of their method for the images from
431
- Rai et al.’s dataset (but not for Sitzmann et al.’s); we there-
432
- fore have both qualitative (Figure 4) and quantitative (Ta-
433
- ble 1) comparisons to these two methods. In the case of
434
- PathGAN, no model or implementation could be obtained,
435
- so we compare qualitatively to the results extracted from
436
- their paper (Figure 4, third column).
437
Table 1 shows that our model consistently provides results closer to the ground truth scanpaths than Zhu et al.'s and SaltiNet. The latter is based on a saliency-sampling strategy, and thus these results indicate that indeed the temporal information learnt by our model is relevant for the final result. Our model, as expected, also amply surpasses the random baseline. In Figure 4 we see how PathGAN scanpaths fail to focus on the relevant parts of the scene (see, e.g., snow or square), while SaltiNet exhibits a somewhat erratic behavior, with large displacements and scarce areas of focus (train, snow or square show this). Finally, Zhu et al.'s approach tends to place gaze points at high contrast borders (see, e.g., square or resort).
449
- 4.4. Ablation Studies
450
We also evaluate the contribution of different elements of our model to the final result. For this purpose, we analyze a standard GAN strategy (i.e., using only the discriminative loss) as the baseline. Figure 5 shows how the model is unable to learn both the temporal nature of the scanpaths, and their relation to image features. We also analyze the results yielded by adding a term based on the MSE between the ground truth and the generated scanpath to the loss function, instead of our DTW_sph term (the only previous GAN approach for scanpath generation [3] relied on MSE for their loss term). The MSE only measures a one-to-one correspondence between points, considering for each time instant a single point, unrelated to the rest. This hinders the learning process, leading to non-plausible results (Figure 5, second row). This behavior is corrected when our DTW_sph is added instead, since it is specifically targeted for time series data and takes into account the actual spatial structure of the data (Figure 5, third row). The corresponding quantitative measures over our test set from Sitzmann et al. can be found in Table 2. We also analyze the effect of removing the CoordConv layer from our model: Results in Table 2 indicate that the use of CoordConv does have a positive effect on the results, helping learn the transformation from the input to the target domain.

Figure 5. Qualitative ablation results. From top to bottom: basic GAN strategy (baseline); adding MSE to the loss function of the former; our approach; and an example ground truth scanpath. These results illustrate the need for our DTW_sph loss term.

Table 1. Quantitative comparisons of our model against SaltiNet [4] and Zhu et al. [62]. We also include upper (human baseline, Human BL) and lower (randomly sampling over the image, Random BL) baselines. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). (*) SaltiNet is trained with Rai et al.'s dataset; we include it for completeness.

    Test set from Sitzmann et al.
      Method               LEV↓     DTW↓       REC↑    DET↑
      Random BL            52.33    2370.56    0.47    0.93
      SaltiNet             48.00    1928.85    1.45    1.78
      ScanGAN360 (ours)    46.15    1921.95    4.82    2.32
      Human BL             43.11    1843.72    7.81    4.07

    Rai et al.'s dataset
      Method               LEV↓     DTW↓       REC↑    DET↑
      Random BL            43.11    1659.75    0.21    0.94
      SaltiNet (*)         48.07    1928.41    1.43    1.81
      Zhu et al.           43.55    1744.20    1.64    1.50
      ScanGAN360 (ours)    40.99    1549.59    1.72    1.87
      Human BL             39.59    1495.55    2.33    2.31

Table 2. Quantitative results of our ablation study. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). Please refer to the text for details on the ablated models.

    Metric                        LEV↓     DTW↓       REC↑    DET↑
    Basic GAN                     49.42    2088.44    3.01    1.74
    MSE                           48.90    1953.21    2.41    1.73
    DTW_sph (no CoordConv)        47.82    1988.38    3.67    1.99
    DTW_sph (ours)                46.19    1925.20    4.50    2.33
    Human Baseline (Human BL)     43.11    1843.72    7.81    4.07
505
- 4.5. Behavioral Evaluation
506
- While the previous subsections employ well-known met-
507
- rics from the literature to analyze the performance of our
508
- model, in this subsection we perform a higher-level analysis
509
- of its results. We assess whether the behavioral characteris-
510
- tics of our scanpaths match those which have been reported
511
- from actual users watching 360images.
512
- Exploration time Sitzmann et al. [43] measure the explo-
513
- ration time as the average time that users took to move their
514
- eyes to a certain longitude relative to their starting point,
515
- and measure how long it takes for users to fully explore the
516
- scene. Figure 6 (left) shows this exploration time, measured
517
- by Sitzmann et al. from captured data, for the three scenes
518
- from their dataset included in our test set ( room ,chess , and
519
- robots ). To analyze whether our generated scanpaths mimic
520
- this behavior and exploration speed, we plot the exploration
521
- time of our generated scanpaths (Figure 6, center left) for
522
- the same scenes and number of scanpaths. We can see how
523
- the speed and exploration time are very similar between
524
- real and generated data. Individual results per scene can
525
- be found in the supplementary material.
526
- Fixation bias Similar to the center bias of human eye fix-
527
- ations observed in regular images [20], the existence of a
528
Laplacian-like equator bias has been measured in 360° images [43]: The majority of fixations fall around the equator, in detriment of the poles. We have evaluated whether
531
- the distribution of scanpaths generated by our model also
532
- presents this bias. This is to be expected, since the data our
533
- model is trained with exhibits it, but is yet another indicator
534
- that we have succeeded in learning the ground truth distri-
535
- bution. We test this by generating, for each scene, 1,000
536
- different scanpaths with our model, and aggregating them
537
- over time to produce a pseudo- saliency map, which we term
538
- aggregate map . Figure 6 (right) shows this for two scenes
539
- in our test set: We can see how this equator bias is indeed
540
- present in our generated scanpaths.
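A minimal sketch of how such an aggregate map can be accumulated (our own illustration; we use a Gaussian-smoothed gaze histogram as a simple stand-in for the kernel density estimate described in the supplementary material, and the kernel width is an arbitrary choice) is:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def aggregate_map(scanpaths_lonlat, height, width, sigma_px=20.0):
        # scanpaths_lonlat: list of (T, 2) arrays with (longitude, latitude) in radians.
        hist = np.zeros((height, width))
        for sp in scanpaths_lonlat:
            cols = ((sp[:, 0] + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
            rows = ((np.pi / 2 - sp[:, 1]) / np.pi * (height - 1)).astype(int)
            np.add.at(hist, (rows, cols), 1.0)
        # smooth; wrap horizontally to respect the 360° border
        hist = gaussian_filter(hist, sigma=sigma_px, mode=("nearest", "wrap"))
        return hist / max(hist.max(), 1e-8)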
541
Figure 6. Behavioral evaluation. Left: Exploration time for real captured data (left) and scanpaths generated by our model (center left). Speed and exploration time of our scanpaths are on par with that of real users. Center right: ROC curve of our generated scanpaths for each individual test scene (gray), and averaged across scenes (magenta). The faster it converges to the maximum rate, the higher the inter-observer congruency. Right: Aggregate maps for two different scenes, computed as heatmaps from 1,000 generated scanpaths. Our model is able to produce aggregate maps that focus on relevant areas of the scenes and exhibit the equator bias reported in the literature.

Inter-observer congruency  It is common in the literature analyzing users' gaze behavior to measure inter-observer congruency, often by means of a receiver operating characteristic (ROC) curve. We compute the congruency of our
549
- “generated observers” through this ROC curve for the three
550
- scenes in our test set from the Sitzmann et al. dataset (Fig-
551
- ure 6, center right). The curve calculates the ability of the
552
- ithscanpath to predict the aggregate map of the correspond-
553
- ing scene. Each point in the curve is computed by gener-
554
- ating a map containing the top n%most salient regions of
555
- the aggregate map (computed without the ithscanpath), and
556
- calculating the percentage of gaze points of the ithscanpath
557
- that fall into that map. Our ROC curve indicates a strong
558
- agreement between our scanpaths, with around 75% of all
559
- gaze points falling within 25% of the most salient regions.
560
- These values are comparable to those measured in previous
561
- studies with captured gaze data [43, 23].
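A minimal sketch of this leave-one-out ROC computation (our own illustration; names are ours) is:

    import numpy as np

    def roc_curve(gaze_px, aggregate_map, steps=20):
        # gaze_px: (T, 2) integer (row, col) gaze positions of one scanpath;
        # aggregate_map: (H, W) map built from the remaining scanpaths.
        flat = np.sort(aggregate_map.ravel())[::-1]
        fractions = np.linspace(0.0, 1.0, steps + 1)
        hits = []
        for frac in fractions:
            k = max(1, int(frac * flat.size))
            thresh = flat[k - 1]                       # threshold of the top-frac salient pixels
            inside = aggregate_map[gaze_px[:, 0], gaze_px[:, 1]] >= thresh
            hits.append(inside.mean())
        return fractions, np.array(hits)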
562
- Temporal and spatial coherence Our generated scan-
563
- paths have a degree of stochasticity, to be able to model the
564
- diversity of real human observers. However, human gaze
565
- behavior follows specific patterns, and each gaze point is
566
- conditioned not only by the features in the scene but also by
567
- the previous history of gaze points of the user. If two users
568
- start watching a scene in the same region, a certain degree
569
- of coherence between their scanpaths is expected, that may
570
- diverge more as more time passes. We analyze the temporal
571
- coherence of generated scanpaths that start in the same re-
572
- gion, and observe that indeed our generated scanpaths fol-
573
- low a coherent pattern. Please refer to the supplementary
574
- for more information on this part of the analysis.
575
- 5. Conclusion
576
- In summary, we propose ScanGAN360, a conditional
577
- GAN approach to generating gaze scanpaths for immersive
578
- virtual environments. Our unique parameterization tailored
579
- to panoramic content, coupled with our novel usage of a
580
- DTW loss function, allow our model to generate scanpaths
581
- of significantly higher quality and duration than previousapproaches. We further explore applications of our model:
582
- Please refer to the supplementary material for a description
583
- and examples of these.
584
- Our GAN approach is well suited for the problem of
585
- scanpath generation: A single ground truth scanpath does
586
- not exist, yet real scanpaths follow certain patterns that
587
- are difficult to model explicitly but that are automatically
588
- learned by our approach. Note that our model is also very
589
- fast and can produce about 1,000 scanpaths per second.
590
- This may be a crucial capability for interactive applications:
591
- our model can generate virtual observers in real time.
592
- Limitations and future work Our model is trained with
593
- 30-second long scanpaths, sampled at 1 Hz. Although
594
- this is significantly longer than most previous approaches
595
- [16, 23, 27], exploring different or variable lengths or sam-
596
- pling rates remains interesting for future work. When train-
597
- ing our model, we focus on learning higher-level aspects of
598
- visual behavior, and we do not explicitly enforce low-level
599
- ocular movements ( e.g., fixations or saccades). Currently,
600
- our relatively low sampling rate prevents us from model-
601
- ing very fast dynamic phenomena, such as saccades. Yet,
602
- fixation patterns naturally emerge in our results, and future
603
- work could explicitly take low-level oculomotor aspects of
604
- visual search into account.
605
- The model, parameterization, and loss function are tai-
606
lored to 360° images. In a similar spirit, a DTW-based loss function could also be applied to conventional 2D images (using a Euclidean distance in 2D instead of our δ_sph), po-
609
- tentially leading to better results than current 2D approaches
610
- based on mean-squared error.
611
- We believe that our work is a timely effort and a first step
612
- towards understanding and modeling dynamic aspects of at-
613
tention in 360° images. We hope that our work will serve
614
- as a basis to advance this research, both in virtual reality
615
- and in conventional imagery, and extend it to other scenar-
616
- ios, such as dynamic or interactive content, analyzing the
617
- influence of the task, including the presence of motion par-allax, or exploring multimodal experiences. We will make
618
- our model and training code available in order to facilitate
619
- the exploration of these and other possibilities.
620
- References
621
- [1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol
622
- Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade
623
- landing position prediction for gaze-contingent rendering.
624
- ACM Transactions on Graphics (TOG) , 36(4):1–12, 2017. 5
625
- [2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
626
- Noel E O’Connor. Saltinet: Scan-path prediction on 360
627
- degree images using saliency volumes. In Proceedings of
628
- the IEEE ICCV Workshops , pages 2331–2338, 2017. 1, 2, 6,
629
- 4
630
- [3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
631
- Noel E O’Connor. Pathgan: visual scanpath prediction with
632
- generative adversarial networks. In Proceedings of the Eu-
633
- ropean Conference on Computer Vision (ECCV) , pages 0–0,
634
- 2018. 1, 2, 6, 4
635
- [4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
636
- Noel E O’Connor. Scanpath and saliency prediction on 360
637
- degree images. Signal Processing: Image Communication ,
638
- 69:8–14, 2018. 1, 2, 6, 7
639
- [5] Wentao Bao and Zhenzhong Chen. Human scanpath predic-
640
- tion based on deep convolutional saccadic model. Neuro-
641
- computing , 404:154 – 164, 2020. 2
642
- [6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert.
643
- Differentiable divergences between time series. arXiv
644
- preprint arXiv:2010.08354 , 2020. 1
645
- [7] A. Borji. Boosting bottom-up and top-down visual features
646
- for saliency estimation. In 2012 IEEE Conference on Com-
647
- puter Vision and Pattern Recognition , 2012. 2
648
[8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Du-
649
- rand, Aude Oliva, and Antonio Torralba. Mit saliency bench-
650
- mark. http://saliency.mit.edu/, 2019. 2
651
- [9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over
652
- here: Attention-directing composition of manga elements.
653
- ACM Trans. Graph. , 33(4):1–11, 2014. 2, 3
654
- [10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and
655
- Juan Carlos Niebles. D3tw: Discriminative differentiable dy-
656
- namic time warping for weakly supervised action alignment
657
- and segmentation. In Proceedings of the IEEE/CVF Confer-
658
- ence on Computer Vision and Pattern Recognition (CVPR) ,
659
- June 2019. 1
660
- [11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier
661
- Deforges. Salgan360: Visual saliency prediction on 360 de-
662
- gree images with generative adversarial networks. In 2018
663
- IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW) ,
664
- pages 01–04. IEEE, 2018. 2
665
- [12] Alex Colburn, Michael F Cohen, and Steven Drucker. The
666
- role of eye gaze in avatar mediated conversational interfaces.
667
- Technical report, Citeseer, 2000. 2
668
- [13] Benjamin Coors, Alexandru Paul Condurache, and An-
669
- dreas Geiger. Spherenet: Learning spherical representations
670
- for detection and classification in omnidirectional images.
671
- InProc. of the European Conference on Computer Vision
672
- (ECCV) , pages 518–533, 2018. 1, 4[14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita
673
- Cucchiara. Predicting human eye fixations via an lstm-based
674
- saliency attentive model. IEEE Transactions on Image Pro-
675
- cessing , 27(10):5142–5154, 2018. 2
676
- [15] Marco Cuturi and Mathieu Blondel. Soft-dtw: a dif-
677
- ferentiable loss function for time-series. arXiv preprint
678
- arXiv:1703.01541 , 2017. 4, 1
679
- [16] Stephen R Ellis and James Darrell Smith. Patterns of sta-
680
- tistical dependency in visual scanning. Eye movements and
681
- human information processing , pages 221–238, 1985. 2, 8
682
- [17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring
683
- scanpath similarity. Behavior Research Methods , pages 1–
684
- 20, 2020. 5, 2
685
- [18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and
686
- Evian Gordon. Face to face: visual scanpath evidence for
687
- abnormal processing of facial expressions in social phobia.
688
- Psychiatry research , 127(1-2):43–53, 2004. 1
689
- [19] Laurent Itti, Christof Koch, and Ernst Niebur. A model
690
- of saliency-based visual attention for rapid scene analysis.
691
- IEEE Transactions on pattern analysis and machine intelli-
692
- gence , 20(11):1254–1259, 1998. 2
693
[20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Tor-
694
- ralba. Learning to predict where humans look. In IEEE
695
- ICCV , pages 2106–2113. IEEE, 2009. 2, 7
696
- [21] Diederik P. Kingma and Jimmy Ba. Adam: A method for
697
- stochastic optimization. In ICLR , 2014. Last updated in
698
- arXiv in 2017. 4
699
[22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias
700
- Bethge. Deepgaze ii: Reading fixations from deep
701
- features trained on object recognition. arXiv preprint
702
- arXiv:1610.01563 , 2016. 2
703
- [23] O. Le Meur and T. Baccino. Methods for comparing scan-
704
- paths and saliency maps: strengths and weaknesses. Behav-
705
- ior Research Methods , pages 251–266, 2013. 8
706
- [24] Olivier Le Meur and Zhi Liu. Saccadic model of eye move-
707
- ments for free-viewing condition. Vision Research , 116:152
708
- – 164, 2015. 2
709
- [25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very
710
- long term field of view prediction for 360-degree video
711
- streaming. In 2019 IEEE Conference on Multimedia Infor-
712
- mation Processing and Retrieval (MIPR) , pages 297–302.
713
- IEEE, 2019. 2
714
[26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet.
715
- Prediction of the influence of navigation scan-path on per-
716
- ceived quality of free-viewpoint videos. IEEE Journal on
717
- Emerging and Sel. Topics in Circ. and Sys. , 9(1):204–216,
718
- 2019. 2
719
- [27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu,
720
- and Stephen Lin. Semantically-based human scanpath esti-
721
- mation with hmms. In Proceedings of the IEEE International
722
- Conference on Computer Vision , pages 3232–3239, 2013. 2,
723
- 8
724
- [28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski
725
- Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An in-
726
- triguing failing of convolutional neural networks and the co-
727
- ordconv solution. In Neural information processing systems ,
728
- pages 9605–9616, 2018. 4[29] Y . Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map
729
- from images. In 2012 IEEE Conference on Computer Vision
730
- and Pattern Recognition , 2012. 2
731
- [30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Ma-
732
- sia, and Ana Serrano. Multimodality in VR: A survey. arXiv
733
- preprint arXiv:2101.07906 , 2021. 2
734
- [31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic
735
- convolutions for 360single-image saliency prediction. In
736
- CVPR Workshop on CV for AR/VR , 2020. 1, 2
737
- [32] Mehdi Mirza and Simon Osindero. Conditional generative
738
- adversarial nets. arXiv preprint arXiv:1411.1784 , 2014. 3
739
- [33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa
740
- Smolic. Salnet360: Saliency maps for omni-directional im-
741
- ages with cnn. Signal Processing: Image Communication ,
742
- 69:26 – 34, 2018. 2
743
[34] Meinard Müller. Dynamic time warping. Information re-
744
- trieval for music and motion , pages 69–84, 2007. 3, 1
745
- [35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your at-
746
- tention is unique: Detecting 360-degree video saliency in
747
- head-mounted display for head movement prediction. In
748
- Proc. ACM Intern. Conf. on Multimedia , pages 1190–1198,
749
- 2018. 2
750
- [36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E.
751
- O’Connor, Jordi Torres, Elisa Sayrol, and Xavier and Giro-
752
- i Nieto. Salgan: Visual saliency prediction with generative
753
- adversarial networks. 2018. 2
754
- [37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin
755
- McGuinness, and Noel E. O’Connor. Shallow and deep con-
756
- volutional networks for saliency prediction. In The IEEE
757
- Conference on Computer Vision and Pattern Recognition
758
- (CVPR) , June 2016. 2
759
- [38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B
760
- Chan. Directing user attention via visual flow on web de-
761
- signs. ACM Trans. on Graph. , 35(6):1–11, 2016. 2, 3
762
[39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset
763
- of head and eye movements for 360 degree images. In Pro-
764
- ceedings of the 8th ACM on Multimedia Systems Conference ,
765
- pages 205–210, 2017. 2, 5, 1
766
- [40] Kerstin Ruhland, Christopher E Peters, Sean Andrist,
767
- Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge
768
- Mutlu, and Rachel McDonnell. A review of eye gaze in
769
- virtual agents, social robotics and hci: Behaviour genera-
770
- tion, user interaction and perception. In Computer graph-
771
- ics forum , volume 34, pages 299–326. Wiley Online Library,
772
- 2015. 4
773
- [41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Naval-
774
- pakkam, and Dmitry Lagun. Gazegan-unpaired adversar-
775
- ial image generation for gaze estimation. arXiv preprint
776
- arXiv:1711.09767 , 2017. 2
777
- [42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon
778
- Wetzstein, Diego Gutierrez, and Belen Masia. Movie edit-
779
- ing and cognitive event segmentation in virtual reality video.
780
- ACM Trans. Graph. (SIGGRAPH) , 36(4), 2017. 1
781
- [43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh
782
- Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wet-
783
- zstein. Saliency in VR: How do people explore virtual
784
- environments? IEEE Trans. on Vis. and Comp. Graph. ,
785
- 24(4):1633–1642, 2018. 1, 2, 4, 5, 7, 8, 3[44] Mikhail Startsev and Michael Dorr. 360-aware saliency esti-
786
- mation with conventional image saliency predictors. Signal
787
- Proces.: Image Comm. , 69:43–52, 2018. 2
788
- [45] Yu-Chuan Su and Kristen Grauman. Making 360 video
789
- watchable in 2d: Learning videography for click free view-
790
- ing. In 2017 IEEE Conference on Computer Vision and Pat-
791
- tern Recognition (CVPR) , pages 1368–1376. IEEE, 2017. 3
792
- [46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman.
793
- Pano2vid: Automatic cinematography for watching 360
794
- videos. In Asian Conf. on CV , pages 154–171. Springer,
795
- 2016. 3
796
- [47] Benjamin W Tatler and Benjamin T Vincent. The promi-
797
- nence of behavioural biases in eye guidance. Visual Cogni-
798
- tion, 17(6-7):1029–1054, 2009. 2
799
- [48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne
800
- Heikkil ¨a. Stochastic bottom–up fixation prediction and sac-
801
- cade generation. Image and Vision Computing , 31(9):686–
802
- 693, 2013. 2
803
- [49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and
804
- John M Henderson. Contextual guidance of eye movements
805
- and attention in real-world scenes: the role of global features
806
- in object search. Psychological review , 113(4):766, 2006. 2
807
- [50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale
808
- optimization of hierarchical features for saliency prediction
809
- in natural images. In Proceedings of the IEEE Conference
810
- on Computer Vision and Pattern Recognition (CVPR) , June
811
- 2014. 2
812
- [51] LE Vincent and Nicolas Thome. Shape and time distortion
813
- loss for training deep time series forecasting models. In
814
- Advances in neural information processing systems , pages
815
- 4189–4201, 2019. 1
816
- [52] Dirk Walther and Christof Koch. Modeling attention to
817
- salient proto-objects. Neural Networks , 19:1395–1407,
818
- 2006. 2
819
- [53] Wenguan Wang and Jianbing Shen. Deep visual atten-
820
- tion prediction. IEEE Transactions on Image Processing ,
821
- 27(5):2368–2378, 2017. 2
822
- [54] W. Wang and J. Shen. Deep visual attention prediction. IEEE
823
- Transactions on Image Processing , 27(5):2368–2378, 2018.
824
- 2
825
- [55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali
826
- Borji. Salient object detection driven by fixation prediction.
827
- InProceedings of the IEEE Conference on Computer Vision
828
- and Pattern Recognition (CVPR) , June 2018. 2
829
- [56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A
830
- spherical convolution approach for learning long term view-
831
- port prediction in 360 immersive video. In Proceedings of
832
- the AAAI Conference on Artificial Intelligence , volume 34,
833
- pages 14003–14040, 2020. 2
834
- [57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Pre-
835
- dicting human saccadic scanpaths based on iterative repre-
836
- sentation learning. IEEE Transactions on Image Processing ,
837
- 28(7):3502–3515, 2019. 5
838
- [58] M. Xu, Y . Song, J. Wang, M. Qiao, L. Huo, and Z. Wang.
839
- Predicting head movement in panoramic video: A deep re-
840
- inforcement learning approach. IEEE Transactions on Pat-
841
- tern Analysis and Machine Intelligence , 41(11):2693–2708,
842
[59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and
843
- Ming-Hsuan Yang. Saliency detection via graph-based man-
844
- ifold ranking. In Computer Vision and Pattern Recogni-
845
- tion (CVPR), 2013 IEEE Conference on , pages 3166–3173.
846
- IEEE, 2013. 2
847
- [60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelin-
848
- sky, and Tamara L Berg. Exploring the role of gaze behavior
849
- and object detection in scene understanding. Frontiers in
850
- psychology , 4:917, 2013. 1
851
- [61] Qi Zhao and Christof Koch. Learning a saliency map using
852
- fixated locations in natural scenes. Journal of Vision , 11:9,
853
- 2011. 2
854
- [62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The pre-
855
- diction of head and eye movement for 360 degree images.
856
- Signal Processing: Image Communication , 69:15–25, 2018.
857
1, 2, 6, 7, 4

Supplementary Material
858
- This document offers additional information and details
859
- on the following topics:
860
- • (S1) Extended description of the soft-DTW (differen-
861
- tiable version of DTW) distance metric used in our
862
- model.
863
- • (S2) Additional results (scanpaths generated with our
864
- method) for different scenes used in our evaluation in
865
- the main paper.
866
- • (S3) Additional ground truth scanpaths for the scenes
867
- used in our evaluation in the main paper.
868
- • (S4) Further details on our training process.
869
- • (S5) Further details on metrics and evaluation, includ-
870
- ing a larger set of metrics (which we briefly introduce),
871
- and extended analysis.
872
- • (S6) Further details on the behavioral evaluation of our
873
- scanpaths.
874
- • (S7) Example applications of our method.
875
- S1. Differentiable Dynamic Time Warping:
876
- soft-DTW
877
One of the key aspects of our framework relies on the addition of a second term to the generator's loss function, based on dynamic time warping [34]. As pointed out in Section 3.3 in the main paper, dynamic time warping (DTW) measures the similarity between two temporal sequences (see Figure 7 and footnote 1, and Equation 5 in the main paper for the original
883
- DTW formulation, and Equations 6 and 7 in the main pa-
884
- per for our spherical modification on DTW). However, the
885
- original DTW function is not differentiable, therefore it is
886
- not suitable as a loss function. Instead, we use a differen-
887
- tiable version of it, soft-DTW, which has been recently pro-
888
- posed [15] and used as a loss function in different problems
889
- dealing with time series [6, 10, 51].
890
Differently from the original DTW formulation (Equation 5 in the main paper), the soft-DTW is defined as follows:

soft-DTW_γ(r, s) = min^γ_A ⟨A, Δ(r, s)⟩,   (8)

where, as with traditional DTW, Δ(r, s) = [δ(r_i, s_j)]_ij ∈ R^{n×m} is a matrix containing the distances δ(·,·) between each pair of points in r and s, A is a binary matrix that accounts for the alignment (or correspondence) between r and s, and ⟨·,·⟩ is the inner product between both matrices.
900
¹ https://databricks.com/blog/2019/04/30/understanding-dynamic-time-warping.html

Figure 7. Simple visualization of dynamic time warping (DTW) alignment. Instead of assuming a pair-wise strict correspondence, DTW optimizes the alignment between two sequences to minimize their distance.
906
In our case, r = (r_1, ..., r_T) ∈ R^{3×T} and s = (s_1, ..., s_T) ∈ R^{3×T} are two scanpaths that we wish to compare.

The main difference lies in the replacement of the min_A operator with a soft-min operator min^γ_A:

min^γ{a_1, ..., a_n} = min_{i≤n} a_i                    if γ = 0,
min^γ{a_1, ..., a_n} = -γ log Σ_{i≤n} e^{-a_i / γ}       if γ > 0.   (9)

This soft-min function allows DTW to be differentiable, with the parameter γ controlling the smoothing between the soft implementation and the original DTW algorithm, both being the same when γ → 0.
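A minimal NumPy sketch of this soft-DTW recursion (our own illustration based on [15], not the authors' implementation) is:

    import numpy as np

    def soft_min(values, gamma):
        # Equation 9: hard min for gamma = 0, smoothed log-sum-exp otherwise.
        values = np.asarray(values, dtype=float)
        if gamma == 0.0:
            return values.min()
        z = -values / gamma
        z_max = z.max()                               # for numerical stability
        return -gamma * (z_max + np.log(np.exp(z - z_max).sum()))

    def soft_dtw(cost, gamma=0.1):
        # cost: (n, m) pairwise distance matrix, e.g. the spherical distances of Eq. 5.
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + soft_min(
                    [acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]], gamma)
        return acc[n, m]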
919
- S2. Additional Results
920
- We include in this section a more extended set of results.
921
- First, we include results for the scenes room (see Figures 17
922
- to 20), chess (see Figures 21 to 24), and robots (see Fig-
923
- ures 25 to 28) from the Sitzmann et al. dataset [43]. Then,
924
- we include results for the five scenes from the Rai et al.
925
- dataset [39] used in comparisons throughout the main pa-
926
- per: train (see Figures 29 to 32), resort (see Figures 33
927
- to 36), square (see Figures 37 to 40), snow (see Figures 41
928
- to 44), and museum (see Figures 45 to 48).
929
- S3. Ground Truth Scanpaths for Comparison
930
- Scenes
931
- We include in Figures 49 to 53 sets of ground truth scan-
932
- paths for all the images shown in Figure 4 in the main pa-
933
- per, which is devoted to comparisons of our method against
934
- other models; and in Figures 54 to 56 sets of ground truth
935
- scanpaths for the three images from our test set from Sitz-
936
- mann et al.’s dataset.
937
- S4. Additional Details on our Training Process
938
- In addition to the details commented in Section 3.5 in
939
- the main paper, our generator trains two cycles per discrim-
940
- inator cycle, to avoid the latter from surpassing the former.
941
- To enhance the training process, we also resort to a mini-
942
batching strategy: Instead of inputting to our model a set containing all available scanpaths for a given image, we
943
- split our data in different mini-batches of eight scanpaths
944
- each. This way, the same image is input in our network mul-
945
- tiple times per epoch, also allowing more images to be in-
946
- cluded in the same batch, and therefore enhancing the train-
947
- ing process. We trained our model for 217 epochs, as we
948
found that epoch to yield the best evaluation results.
949
- S5. Additional Details on Metrics and Evalua-
950
- tion
951
- Throughout this work, we evaluate our model and com-
952
- pare to state-of-the-art works by means of several widely
953
- used metrics, recently reviewed by Fahimi and Bruce [17].
954
- Table 3 shows a list of these metrics, indicating which ones
955
- take into account position and/or order of gaze points. In
956
- the following, we briefly introduce these metrics (please re-
957
- fer to Fahimi and Bruce [17] for a formal description):
958
- • Levenshtein distance: Transforms scanpaths into
959
- strings, and then calculates the minimum number of
960
- single-character edits (insertions, deletions, or substi-
961
- tutions) required to change one string (scanpath) into
962
- the other. All edits costs are treated equally.
963
- • ScanMatch: Improved version of Levenshtein dis-
964
- tance. Different from Levenshtein distance, Scan-
965
- Match takes into account semantic information (as a
966
- score matrix), and can even take into account duration
967
- of data points. This way, each of the edit operations
968
- can be differently weighted.
969
- • Hausdorff distance: Represents the degree of mis-
970
- match between two sets by measuring the farthest spa-
971
- tial distance from one set to the other, i.e., the distance
972
- between two different curves.
973
- • Frechet distance: Similar to Hausdorff distance, it
974
- measures the similarity between curves. However,
975
Frechet distance takes into account both the position
976
- and ordering of all the points in the curves.
977
- • Dynamic time warping: Metric that compares two
978
- time-series with varying (and differing) lengths to
979
- find an optimal path to match both sequences while
980
- preserving boundary, continuity, and monotonicity to
981
- make sure that the path respects time.
982
- • Time delay embedding: Splits a scanpath into sev-
983
- eral sub-samples, i.e., small sub-scanpaths. This met-
984
- rics calculates a similarity score by performing several
985
- pair-wise Hausdorff comparisons over sub-samples
986
- from both scanpaths to compare.
987
- • Recurrence: Measures the percentage of gaze points
988
that match (are close) between the two scanpaths.

• Determinism: Percentage of cross-recurrent points that
989
- form diagonal lines (i.e., percentage of gaze trajecto-
990
- ries common to both scanpaths).
991
- • Laminarity: Measures locations that were fixated in
992
- detail in one of the scanpaths, but only fixated briefly
993
- in the other scanpath. This way, it indicates whether
994
- specific areas of a scene are repeatedly fixated.
995
- • Center of recurrence mass: Defined as the distance of
996
- the center of gravity from the main diagonal, indicates
997
- the dominant lag of cross recurrences, i.e., whether the
998
- same gaze point in both scanpaths tends to occur close
999
- in time.
1000
Table 3. Set of metrics to quantitatively evaluate scanpath similarity [17]. Each metric specializes in specific aspects of the scanpaths, and as a result using any of them in isolation may not be representative.

    Metric                       Abrv   Position   Order
    Levenshtein distance         LEV    ✓          ✓
    ScanMatch                    SMT    ✓          ✓
    Hausdorff distance           HAU    ✓          ✗
    Frechet distance             FRE    ✓          ✓
    Dynamic time warping         DTW    ✓          ✓
    Time delay embedding         TDE    ✓          ✗
    Recurrence                   REC    ✓          ✗
    Determinism                  DET    ✗          ✓
    Laminarity                   LAM    ✗          ✗
    Center of recurrence mass    COR    ✗          ✗
1015
- Our model is stochastic by nature. This means that the
1016
- scanpaths that it generates for a given scene are always dif-
1017
- ferent, simulating observer variability. We have analyzed
1018
- whether the reported metrics vary depending on the num-
1019
ber of scanpaths generated, to assess the stability and overall goodness of our model. Results can be seen in Table 4.
1021
- We include in Table 5 the evaluation results with the full
1022
- set of metrics shown in Table 3 (extension to Table 1 in the
1023
- main paper), and in Tables 6 and 7 the evaluation results of
1024
- our ablation studies over the full set of metrics (extension to
1025
- Table 2 in the main paper).
1026
- Images for one of our test sets belong to Rai et al.’s
1027
- dataset [39]. This dataset is larger than Sitzmann et al.’s
1028
- in size (number of images), but provides gaze data in the
1029
- form of fixations with associated timestamps, and not the
1030
- raw gaze points. Note that most of the metrics proposed in
1031
- the literature for scanpath similarity are designed to work
1032
- with time series of different length, and do not necessarily
1033
- assume a direct pairwise equivalence, making them valid to
1034
- compare our generated scanpaths to the ground truth ones
1035
from Rai et al.'s dataset.

Table 4. Quantitative results of our model with sets of generated
1036
- scanpaths with different number of samples. Our results are stable
1037
- regardless the number of generated samples.
1038
    Dataset    # of samples    LEV↓    DTW↓    REC↑    DET↑
1039
- Test set from
1040
- Sitzmann et al.100 46.19 1925.20 4.50 2.33
1041
- 800 46.10 1916.26 4.75 2.34
1042
- 2500 46.15 1921.95 4.82 2.32
1043
- Human BL 43.11 1843.72 7.81 4.07
1044
- Rai et al.’s
1045
- dataset100 40.95 1548.86 1.91 1.85
1046
- 800 40.94 1542.82 1.86 1.86
1047
- 2500 40.99 1549.59 1.72 1.87
1048
- Human BL 39.59 1495.55 2.33 2.31
1049
- S6. Behavioral Evaluation
1050
- In this section, we include further analysis and additional
1051
- details on behavioral aspects of our scanpaths, extending
1052
- Section 4.5 in the main paper.
1053
- Temporal and spatial coherence As discussed in the
1054
- main paper, our generated scanpaths have a degree of
1055
- stochasticity, and different patterns arise depending on
1056
- users’ previous history. To assess whether our scanpaths
1057
- actually follow a coherent pattern, we generate a set of ran-
1058
- dom scanpaths for each of the scenes in our test dataset, and
1059
- separate them according to the longitudinal region where
1060
the scanpath begins (e.g., [0°, 40°), [40°, 80°), etc.). Then,
1061
- we estimate the probability density of the generated scan-
1062
- paths from each starting region using kernel density esti-
1063
- mation (KDE) for each timestamp. We include the com-
1064
- plete KDE results for the three images from our test set in
1065
- Figures 11 to 16, for different starting regions, at different
1066
- timestamps, and computed over 1000 generated scanpaths.
1067
- During the first seconds (first column), gaze points tend to
1068
- stay in a smaller area, and closer to the starting region; as
1069
- time progresses, they exhibit a more exploratory behavior
1070
- with higher divergence, and eventually may reach a conver-
1071
- gence close to regions of interest. We can also see how the
1072
- behavior can differ depending on the starting region.
1073
- Exploration time As introduced in the main paper, we
1074
- also explore the time that users took to move their eyes to a
1075
- certain longitude relative to their starting point, and measure
1076
- how long it takes for users to fully explore the scene. We in-
1077
- clude in Figure 8 the comparison between ground truth and
1078
- generated scanpaths in terms of time to explore the scene,
1079
- for all the three scenes from our test set ( room ,chess , and
1080
- robots ), both individual and aggregated. We can see how
1081
- the speed and exploration time are very similar between real
1082
- and generated data.
1083
- S7. Applications of the Model
1084
- Our model is able to generate plausible 30-second scan-
1085
paths, drawn from a distribution that mimics the behavior of human observers. As we briefly discuss throughout the paper,
1086
- this enables a number of applications, starting with avoiding
1087
- the need to recruit and measure gaze from high numbers of
1088
- observers in certain scenarios. We show here two applica-
1089
- tions of our model, virtual scene design and scanpath-driven
1090
video thumbnail creation for static 360° images, and discuss
1091
- other potential application scenarios.
1092
- Virtual scene design In an immersive environment, the
1093
- user has control over the camera when exploring it. This
1094
- poses a challenge to content creators and designers, who
1095
- have to learn from experience how to layout the scene to
1096
- elicit a specific viewing or exploration behavior. This is
1097
- not only a problem in VR, but has also received attention
1098
- in,e.g., manga composition [9] or web design [38]. How-
1099
- ever, actually measuring gaze from a high enough number
1100
- of users to determine optimal layouts can be challenging
1101
- and time-consuming. While certain goals may require real
1102
- users, others can make use of our model to generate plausi-
1103
- ble and realistic generated observers.
1104
- As a proof of concept, we have analyzed our model’s
1105
- ability to adapt its behavior to different layouts of a scene
1106
- (Figure 9). Specifically, we have removed certain elements
1107
- from a scene, and run our model to analyze whether these
1108
- changes affect the behavior of our generated scanpaths. We
1109
- plot the resulting probability density (using KDE, see Sec-
1110
- tion S6) as a function of time. The presence of different ele-
1111
- ments in the scene affects the general viewing behavior, in-
1112
- cluding viewing direction, or time spent on a certain region.
1113
- These examples are particularly promising if we consider
1114
- that our model is trained with a relatively small number of
1115
- generic scenes.
1116
Scanpath-driven video thumbnails of static 360° images
360° images capture the full sphere and are thus unintuitive
1118
- when projected into a conventional 2D image. To address
1119
- this problem, a number of approaches have proposed to re-
1120
- target 360images or videos to 2D [46, 43, 45]. In the case
1121
- of images, extracting a representative 2D visualization of
1122
- the 360image can be helpful to provide a thumbnail of it,
1123
- for example as a preview on a social media platform. How-
1124
- ever, these thumbnails are static. The Ken Burns effect can
1125
- be used to animate static images by panning and zooming
1126
- a cropping window over a static image. In the context of
1127
- 360, however, it seems unclear what the trajectory of such
1128
- a moving window would be.
1129
- To address this question, we leverage our generated scan-
1130
- paths to drive a Ken Burns–like video thumbnail of a static
1131
- panorama. For this purpose, we use an average scanpath,
1132
- computed as the probability density of several generated
1133
- scanpaths using KDE (see Section S6), as the trajectory of
1134
- the virtual camera. Specifically, KDE allows us to find the
1135
- point of highest probability, along with its variance, of all
- Table 5. Quantitative comparison of our model against different approaches, following the metrics introduced in Table 1. We evaluate our
1136
- model over the test set we separated from Sitzmann et al.'s dataset, and compare against SaltiNet [2]. On the other hand, we validate our
- model over Rai et al.'s dataset, and compare against Zhu et al. [62], whose results over this dataset were provided by the authors; and
- against SaltiNet, which was trained over that specific dataset (*). HB denotes the human baseline, computed with the set of ground-
- truth scanpaths. We also report a lower baseline, computed by randomly sampling the image. The arrow next to each metric indicates
1140
- whether higher or lower is better. Best results are in bold.
1141
- Dataset                         Method             LEV↓   SMT↑   HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
- Test set from Sitzmann et al.   Random BL          52.33  0.22   59.88  146.39  2370.56  27.93  0.47  0.93   9.19  33.19
-                                 SaltiNet           48.00  0.18   64.23  149.34  1928.85  28.19  1.45  1.78  10.45  29.23
-                                 ScanGAN360 (ours)  46.15  0.39   43.28  141.23  1921.95  18.62  4.82  2.32  24.51  35.78
-                                 Human BL           43.11  0.43   41.38  142.91  1843.72  16.05  7.81  4.07  24.69  35.32
- Rai et al.'s dataset            Random BL          43.11  0.17   65.71  144.73  1659.75  35.41  0.21  0.94   4.30  19.08
-                                 SaltiNet (*)       48.07  0.18   63.86  148.76  1928.41  28.42  1.43  1.81  10.22  29.33
-                                 Zhu et al.         43.55  0.20   73.09  136.37  1744.20  30.62  1.64  1.50   9.18  26.05
-                                 ScanGAN360 (ours)  40.99  0.24   61.86  139.10  1549.59  28.14  1.72  1.87  12.23  26.15
-                                 Human BL           39.59  0.24   66.23  136.70  1495.55  27.24  2.33  2.31  14.36  23.14
1153
- Table 6. Results of our ablation study over Sitzmann et al.’s test set. We take a basic GAN strategy as baseline, and evaluate the effects
1154
- of adding a second term into our generator’s loss function. We ablate a model with an MSE error (as used in the only GAN approach for
1155
- scanpath generation so far [3]), and compare it against our spherical DTW approach. We also analyze the importance of the CoordConv
1156
- layer, whose absence slightly worsens the results. See Section 4 in the main paper for further discussion. Qualitative results of this ablation
1157
- study can be seen in Figure 5 in the main paper.
1158
- Metric                   LEV↓   SMT↑   HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
- Basic GAN                49.42  0.36   43.69  145.95  2088.44  20.05  3.01  1.74  18.55  34.51
- MSE                      48.90  0.37   42.27  133.24  1953.21  19.48  2.41  1.73  18.47  37.34
- DTW_sph (no CoordConv)   47.82  0.37   46.59  144.92  1988.38  20.13  3.67  1.99  18.09  35.66
- DTW_sph (ours)           46.15  0.39   43.28  141.23  1921.95  18.62  4.82  2.32  24.21  35.78
- Human BL                 43.11  0.43   41.38  142.91  1843.72  16.05  7.81  4.07  24.69  35.32
1164
- Table 7. Results of our ablation study over Rai et al.’s dataset. We take a basic GAN strategy as baseline, and evaluate the effects of adding
1165
- a second term into our generator’s loss function. We ablate a model with an MSE error (as used in the only GAN approach for scanpath
1166
- generation so far [3]), and compare it against our spherical DTW approach. We also analyze the importance of the CoordConv layer,
1167
- whose absence slightly worsens the results. See Section 4 in the main paper for further discussion. Qualitative results of this ablation study
1168
- can be seen in Figure 5 in the main paper.
1169
- Metric                   LEV↓   SMT↑   HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
- Basic GAN                41.73  0.23   59.11  142.42  1542.52  28.40  0.99  1.47   8.08  24.55
- MSE                      41.81  0.23   61.30  139.59  1541.44  28.66  1.01  1.51   8.56  24.45
- DTW_sph (no CoordConv)   41.42  0.23   61.55  148.13  1610.10  28.78  1.61  1.65  10.25  24.68
- DTW_sph (ours)           40.99  0.24   61.86  139.10  1549.59  28.14  1.72  1.87  12.23  26.15
- Human BL                 39.59  0.24   66.23  136.70  1495.55  27.24  2.33  2.31  14.36  23.14
1175
- generated scanpaths at any point in time. Note that this
1176
- point is not necessarily the average of the scanpaths. We
1177
- use the time-varying center point as the center of our 2D
1178
- viewport, and its variance to drive the FOV or zoom of the
1179
- moving viewport.
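As a concrete illustration, the following is a minimal sketch (not the authors' implementation) of how such a guiding trajectory could be computed with an off-the-shelf KDE. It assumes the generated scanpaths are available as (longitude, latitude) samples in degrees and, for simplicity, ignores the wrap-around at ±180° and the spherical nature of the domain.

import numpy as np
from scipy.stats import gaussian_kde

def viewport_trajectory(scanpaths, base_fov=60.0, fov_gain=1.5):
    # scanpaths: array of shape (n_paths, n_steps, 2) holding (longitude, latitude)
    # in degrees for every generated observer and time step.
    # For each time step, fit a KDE over the gaze points of all generated
    # scanpaths, take the highest-density point as the viewport centre, and
    # widen the field of view proportionally to the spread of the samples.
    n_paths, n_steps, _ = scanpaths.shape
    centers, fovs = [], []
    for t in range(n_steps):
        pts = scanpaths[:, t, :]            # (n_paths, 2)
        kde = gaussian_kde(pts.T)           # gaussian_kde expects (dims, n_samples)
        density = kde(pts.T)                # density at each sample location
        centers.append(pts[np.argmax(density)])
        spread = np.sqrt(pts.var(axis=0).sum())
        fovs.append(base_fov + fov_gain * spread)
    return np.asarray(centers), np.asarray(fovs)

The time-varying centres then drive the cropping window, and the per-step spread controls how much the virtual camera zooms out when the generated observers disagree.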
1180
- Figure 10 shows several representative steps of this pro-
1181
- cess for two different scenes ( chess andstreet ). Full videos
1182
- of several scenes are included in the supplementary video.
1183
- The generated Ken Burns–style panorama previews look
- as if a human observer were exploring these panoramas and pro-
1185
- vide a very intuitive preview of the complex scenes they
1186
- depict.
- Other applications Our model has the potential to en-
1187
- able other applications beyond what we have shown in this
1188
- section. One such example is gaze simulation for virtual
1189
- avatars . When displaying or interacting with virtual char-
1190
- acters, eye gaze is one of the most critical, yet most diffi-
1191
- cult, aspects to simulate [40]. Accurately simulating gaze
1192
- behavior not only aids in conveying realism, but can also
1193
- provide additional information such as signalling interest,
1194
- aiding the conversation through non-verbal cues, facilitating
1195
- turn-taking in multi-party conversations, or indicating atten-
1196
- tiveness, among others. Given an avatar immersed within
1197
- a virtual scene, generating plausible scanpaths conditioned
1198
- by a 360° image of their environment could be an efficient,
1199
- affordable way of driving the avatar's gaze behavior in a
- Figure 8. Time to explore each of the scenes from the Sitzmann et al. test set, together with their ground-truth counterparts.
1200
- Figure 9. Our model can be used to aid the design of virtual scenes. We show two examples, each with two possible layouts (original,
1201
- and removing some significant elements). We generate a large number of scanpaths (virtual observers) starting from the same region, and
1202
- compute their corresponding probability density function as a function of time, using KDE (see Section S6). room scene: The presence of
1203
- the dining table and lamps (top) retains the viewers’ attention longer, while in their absence they move faster towards the living room area,
1204
- performing a more linear exploration. gallery scene: When the central picture is present (top), the viewers linger there before splitting to
1205
- both sides of the scene. In its absence, observers move towards the left, then explore the scene linearly in that direction.
1206
- realistic manner.
1207
- Another potential application of our model is its use for
1208
- gaze-contingent rendering . These approaches have been
1209
- proposed to save rendering time and bandwidth in VR sys-
1210
- tems or drive the user’s accommodation. Eye trackers are
1211
- required for these applications, but they are often too slow, making computationally efficient approaches for predicting
1212
- gaze trajectories or landing positions important [1]. Our
1213
- method for generating scanpaths could not only help proto-
1214
- type and evaluate such systems in simulation, without the
1215
- need for a physical eye tracker and actual users, but also in
1216
- optimizing their latency and performance during runtime.
- Figure 10. Scanpath-driven video thumbnails of 360° images. We
1217
- propose a technique to generate these videos that results in relevant
1218
- and intuitive explorations of the 360° scenes. Top row: Points of
1219
- highest probability at each time instant, displayed as scanpaths.
1220
- These are used as a guiding trajectory for the virtual camera. Mid-
1221
- dle rows: Two viewports from the guiding trajectory, correspond-
1222
- ing to the temporal window with lowest variance. Bottom row: 2D
1223
- images retargeted from those viewports. Please refer to the text for
1224
- details.
1225
- References
1226
- [1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol
1227
- Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade
1228
- landing position prediction for gaze-contingent rendering.
1229
- ACM Transactions on Graphics (TOG) , 36(4):1–12, 2017. 5
1230
- [2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
1231
- Noel E O’Connor. Saltinet: Scan-path prediction on 360
1232
- degree images using saliency volumes. In Proceedings of
1233
- the IEEE ICCV Workshops , pages 2331–2338, 2017. 1, 2, 6,
1234
- 4
1235
- [3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
1236
- Noel E O’Connor. Pathgan: visual scanpath prediction with
1237
- generative adversarial networks. In Proceedings of the Eu-
1238
- ropean Conference on Computer Vision (ECCV) , pages 0–0,
1239
- 2018. 1, 2, 6, 4
1240
- [4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
1241
- Noel E O’Connor. Scanpath and saliency prediction on 360
1242
- degree images. Signal Processing: Image Communication ,
1243
- 69:8–14, 2018. 1, 2, 6, 7
1244
- [5] Wentao Bao and Zhenzhong Chen. Human scanpath predic-tion based on deep convolutional saccadic model. Neuro-
1245
- computing , 404:154 – 164, 2020. 2
1246
- [6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert.
1247
- Differentiable divergences between time series. arXiv
1248
- preprint arXiv:2010.08354 , 2020. 1
1249
- [7] A. Borji. Boosting bottom-up and top-down visual features
1250
- for saliency estimation. In 2012 IEEE Conference on Com-
1251
- puter Vision and Pattern Recognition , 2012. 2
1252
- [8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Durand,
- Aude Oliva, and Antonio Torralba. MIT saliency bench-
- mark. http://saliency.mit.edu/, 2019. 2
1255
- [9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over
1256
- here: Attention-directing composition of manga elements.
1257
- ACM Trans. Graph. , 33(4):1–11, 2014. 2, 3
1258
- [10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and
1259
- Juan Carlos Niebles. D3tw: Discriminative differentiable dy-
1260
- namic time warping for weakly supervised action alignment
1261
- and segmentation. In Proceedings of the IEEE/CVF Confer-
1262
- ence on Computer Vision and Pattern Recognition (CVPR) ,
1263
- June 2019. 1
1264
- [11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier
1265
- Deforges. Salgan360: Visual saliency prediction on 360 de-
1266
- gree images with generative adversarial networks. In 2018
1267
- IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW) ,
1268
- pages 01–04. IEEE, 2018. 2
1269
- [12] Alex Colburn, Michael F Cohen, and Steven Drucker. The
1270
- role of eye gaze in avatar mediated conversational interfaces.
1271
- Technical report, Citeseer, 2000. 2
1272
- [13] Benjamin Coors, Alexandru Paul Condurache, and An-
1273
- dreas Geiger. Spherenet: Learning spherical representations
1274
- for detection and classification in omnidirectional images.
1275
- InProc. of the European Conference on Computer Vision
1276
- (ECCV) , pages 518–533, 2018. 1, 4
1277
- [14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita
1278
- Cucchiara. Predicting human eye fixations via an lstm-based
1279
- saliency attentive model. IEEE Transactions on Image Pro-
1280
- cessing , 27(10):5142–5154, 2018. 2
1281
- [15] Marco Cuturi and Mathieu Blondel. Soft-dtw: a dif-
1282
- ferentiable loss function for time-series. arXiv preprint
1283
- arXiv:1703.01541 , 2017. 4, 1
1284
- [16] Stephen R Ellis and James Darrell Smith. Patterns of sta-
1285
- tistical dependency in visual scanning. Eye movements and
1286
- human information processing , pages 221–238, 1985. 2, 8
1287
- [17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring
1288
- scanpath similarity. Behavior Research Methods , pages 1–
1289
- 20, 2020. 5, 2
1290
- [18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and
1291
- Evian Gordon. Face to face: visual scanpath evidence for
1292
- abnormal processing of facial expressions in social phobia.
1293
- Psychiatry research , 127(1-2):43–53, 2004. 1
1294
- [19] Laurent Itti, Christof Koch, and Ernst Niebur. A model
1295
- of saliency-based visual attention for rapid scene analysis.
1296
- IEEE Transactions on pattern analysis and machine intelli-
1297
- gence , 20(11):1254–1259, 1998. 2
1298
- [20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Tor-
1299
- ralba. Learning to predict where humans look. In IEEE
1300
- ICCV, pages 2106–2113. IEEE, 2009. 2, 7
- Figure 11. KDE for the room scene, including scanpaths starting from 0° up to 160°.
1301
- [21] Diederik P. Kingma and Jimmy Ba. Adam: A method for
1302
- stochastic optimization. In ICLR , 2014. Last updated in
1303
- arXiv in 2017. 4
1304
- [22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias Bethge. DeepGaze II: Reading fixations from deep
1305
- features trained on object recognition. arXiv preprint
1306
- arXiv:1610.01563 , 2016. 2
1307
- [23] O. Le Meur and T. Baccino. Methods for comparing scan-
- Figure 12. KDE for the room scene, including scanpaths starting from 180° up to 340°.
1308
- paths and saliency maps: strengths and weaknesses. Behav-
1309
- ior Research Methods , pages 251–266, 2013. 8
1310
- [24] Olivier Le Meur and Zhi Liu. Saccadic model of eye move-ments for free-viewing condition. Vision Research , 116:152
1311
- – 164, 2015. 2
1312
- [25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very
- Figure 13. KDE for the chess scene, including scanpaths starting from 0° up to 160°.
1313
- long term field of view prediction for 360-degree video
1314
- streaming. In 2019 IEEE Conference on Multimedia Infor-
1315
- mation Processing and Retrieval (MIPR) , pages 297–302.
1316
- IEEE, 2019. 2
- [26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet.
1317
- Prediction of the influence of navigation scan-path on per-
1318
- ceived quality of free-viewpoint videos. IEEE Journal on
1319
- Emerging and Sel. Topics in Circ. and Sys., 9(1):204–216,
- Figure 14. KDE for the chess scene, including scanpaths starting from 180° up to 340°.
1320
- 2019. 2
1321
- [27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu,
1322
- and Stephen Lin. Semantically-based human scanpath esti-
1323
- mation with HMMs. In Proceedings of the IEEE International Conference on Computer Vision, pages 3232–3239, 2013. 2,
1324
- 8
1325
- [28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski
1326
- Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An in-
- Figure 15. KDE for the robots scene, including scanpaths starting from 0° up to 160°.
1327
- triguing failing of convolutional neural networks and the co-
1328
- ordconv solution. In Neural information processing systems ,
1329
- pages 9605–9616, 2018. 4
- [29] Y. Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map
1330
- from images. In 2012 IEEE Conference on Computer Vision
1331
- and Pattern Recognition, 2012. 2
- Figure 16. KDE for the robots scene, including scanpaths starting from 180° up to 340°.
1332
- [30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Ma-
1333
- sia, and Ana Serrano. Multimodality in VR: A survey. arXiv
1334
- preprint arXiv:2101.07906 , 2021. 2
1335
- [31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic convolutions for 360° single-image saliency prediction. In
1336
- CVPR Workshop on CV for AR/VR , 2020. 1, 2
1337
- [32] Mehdi Mirza and Simon Osindero. Conditional generative
1338
- adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 3
- Figure 17. Generated scanpaths for the room scene.
1339
- [33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa
1340
- Smolic. Salnet360: Saliency maps for omni-directional im-
1341
- ages with cnn. Signal Processing: Image Communication ,
1342
- 69:26 – 34, 2018. 2
1343
- [34] Meinard Müller. Dynamic time warping. Information re-
1344
- trieval for music and motion , pages 69–84, 2007. 3, 1
1345
- [35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your at-
1346
- tention is unique: Detecting 360-degree video saliency in
1347
- head-mounted display for head movement prediction. In
1348
- Proc. ACM Intern. Conf. on Multimedia , pages 1190–1198,
1349
- 2018. 2
1350
- [36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E.
1351
- O’Connor, Jordi Torres, Elisa Sayrol, and Xavier and Giro-
1352
- i Nieto. Salgan: Visual saliency prediction with generative
1353
- adversarial networks. 2018. 2
1354
- [37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin
1355
- McGuinness, and Noel E. O’Connor. Shallow and deep con-
1356
- volutional networks for saliency prediction. In The IEEE
1357
- Conference on Computer Vision and Pattern Recognition
1358
- (CVPR) , June 2016. 2
1359
- [38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B
1360
- Chan. Directing user attention via visual flow on web de-
1361
- signs. ACM Trans. on Graph. , 35(6):1–11, 2016. 2, 3
1362
- [39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset
1363
- of head and eye movements for 360 degree images. In Pro-
1364
- ceedings of the 8th ACM on Multimedia Systems Conference ,
1365
- pages 205–210, 2017. 2, 5, 1
- [40] Kerstin Ruhland, Christopher E Peters, Sean Andrist,
1366
- Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge
1367
- Mutlu, and Rachel McDonnell. A review of eye gaze in
1368
- virtual agents, social robotics and hci: Behaviour genera-
1369
- tion, user interaction and perception. In Computer graph-
1370
- ics forum , volume 34, pages 299–326. Wiley Online Library,
1371
- 2015. 4
1372
- [41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Naval-
1373
- pakkam, and Dmitry Lagun. Gazegan-unpaired adversar-
1374
- ial image generation for gaze estimation. arXiv preprint
1375
- arXiv:1711.09767 , 2017. 2
1376
- [42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon
1377
- Wetzstein, Diego Gutierrez, and Belen Masia. Movie edit-
1378
- ing and cognitive event segmentation in virtual reality video.
1379
- ACM Trans. Graph. (SIGGRAPH) , 36(4), 2017. 1
1380
- [43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh
1381
- Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wet-
1382
- zstein. Saliency in VR: How do people explore virtual
1383
- environments? IEEE Trans. on Vis. and Comp. Graph. ,
1384
- 24(4):1633–1642, 2018. 1, 2, 4, 5, 7, 8, 3
1385
- [44] Mikhail Startsev and Michael Dorr. 360-aware saliency esti-
1386
- mation with conventional image saliency predictors. Signal
1387
- Proces.: Image Comm. , 69:43–52, 2018. 2
1388
- [45] Yu-Chuan Su and Kristen Grauman. Making 360 video
1389
- watchable in 2d: Learning videography for click free view-
1390
- ing. In 2017 IEEE Conference on Computer Vision and Pat-
1391
- tern Recognition (CVPR), pages 1368–1376. IEEE, 2017. 3
- Figure 18. Generated scanpaths for the room scene.
1392
- [46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman.
1393
- Pano2vid: Automatic cinematography for watching 360
1394
- videos. In Asian Conf. on CV , pages 154–171. Springer,
1395
- 2016. 3
1396
- [47] Benjamin W Tatler and Benjamin T Vincent. The promi-
1397
- nence of behavioural biases in eye guidance. Visual Cogni-
1398
- tion, 17(6-7):1029–1054, 2009. 2
1399
- [48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne
1400
- Heikkilä. Stochastic bottom-up fixation prediction and sac-
1401
- cade generation. Image and Vision Computing , 31(9):686–
1402
- 693, 2013. 2
1403
- [49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and
1404
- John M Henderson. Contextual guidance of eye movements
1405
- and attention in real-world scenes: the role of global features
1406
- in object search. Psychological review , 113(4):766, 2006. 2
1407
- [50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale
1408
- optimization of hierarchical features for saliency prediction
1409
- in natural images. In Proceedings of the IEEE Conference
1410
- on Computer Vision and Pattern Recognition (CVPR) , June
1411
- 2014. 2
1412
- [51] LE Vincent and Nicolas Thome. Shape and time distortion
1413
- loss for training deep time series forecasting models. In
1414
- Advances in neural information processing systems , pages
1415
- 4189–4201, 2019. 1
1416
- [52] Dirk Walther and Christof Koch. Modeling attention to
1417
- salient proto-objects. Neural Networks , 19:1395–1407,
1418
- 2006. 2
- [53] Wenguan Wang and Jianbing Shen. Deep visual atten-
1419
- tion prediction. IEEE Transactions on Image Processing ,
1420
- 27(5):2368–2378, 2017. 2
1421
- [54] W. Wang and J. Shen. Deep visual attention prediction. IEEE
1422
- Transactions on Image Processing , 27(5):2368–2378, 2018.
1423
- 2
1424
- [55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali
1425
- Borji. Salient object detection driven by fixation prediction.
1426
- InProceedings of the IEEE Conference on Computer Vision
1427
- and Pattern Recognition (CVPR) , June 2018. 2
1428
- [56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A
1429
- spherical convolution approach for learning long term view-
1430
- port prediction in 360 immersive video. In Proceedings of
1431
- the AAAI Conference on Artificial Intelligence , volume 34,
1432
- pages 14003–14040, 2020. 2
1433
- [57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Pre-
1434
- dicting human saccadic scanpaths based on iterative repre-
1435
- sentation learning. IEEE Transactions on Image Processing ,
1436
- 28(7):3502–3515, 2019. 5
1437
- [58] M. Xu, Y . Song, J. Wang, M. Qiao, L. Huo, and Z. Wang.
1438
- Predicting head movement in panoramic video: A deep re-
1439
- inforcement learning approach. IEEE Transactions on Pat-
1440
- tern Analysis and Machine Intelligence , 41(11):2693–2708,
1441
- 2019. 2
1442
- [59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and
- Ming-Hsuan Yang. Saliency detection via graph-based man-
- ifold ranking. In Computer Vision and Pattern Recogni-
- Figure 19. Generated scanpaths for the room scene.
1445
- tion (CVPR), 2013 IEEE Conference on , pages 3166–3173.
1446
- IEEE, 2013. 2
1447
- [60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelin-
1448
- sky, and Tamara L Berg. Exploring the role of gaze behavior
1449
- and object detection in scene understanding. Frontiers in
1450
- psychology , 4:917, 2013. 1
1451
- [61] Qi Zhao and Christof Koch. Learning a saliency map using
1452
- fixated locations in natural scenes. Journal of Vision , 11:9,
1453
- 2011. 2
1454
- [62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The pre-
1455
- diction of head and eye movement for 360 degree images.
1456
- Signal Processing: Image Communication , 69:15–25, 2018.
1457
- 1, 2, 6, 7, 4
- Figure 20. Generated scanpaths for the room scene.
1458
- Figure 21. Generated scanpaths for the chess scene.
- Figure 22. Generated scanpaths for the chess scene.
- Figure 23. Generated scanpaths for the chess scene.
- Figure 24. Generated scanpaths for the chess scene.
- Figure 25. Generated scanpaths for the robots scene.
- Figure 26. Generated scanpaths for the robots scene.
- Figure 27. Generated scanpaths for the robots scene.
- Figure 28. Generated scanpaths for the robots scene.
- Figure 29. Generated scanpaths for the train scene.
- Figure 30. Generated scanpaths for the train scene.
- Figure 31. Generated scanpaths for the train scene.
- Figure 32. Generated scanpaths for the train scene.
- Figure 33. Generated scanpaths for the resort scene.
- Figure 34. Generated scanpaths for the resort scene.
- Figure 35. Generated scanpaths for the resort scene.
- Figure 36. Generated scanpaths for the resort scene.
- Figure 37. Generated scanpaths for the square scene.
- Figure 38. Generated scanpaths for the square scene.
- Figure 39. Generated scanpaths for the square scene.
- Figure 40. Generated scanpaths for the square scene.
- Figure 41. Generated scanpaths for the snow scene.
- Figure 42. Generated scanpaths for the snow scene.
- Figure 43. Generated scanpaths for the snow scene.
- Figure 44. Generated scanpaths for the snow scene.
- Figure 45. Generated scanpaths for the museum scene.
- Figure 46. Generated scanpaths for the museum scene.
- Figure 47. Generated scanpaths for the museum scene.
- Figure 48. Generated scanpaths for the museum scene.
- Figure 49. Ground truth scanpaths for the train scene.
- Figure 50. Ground truth scanpaths for the resort scene.
- Figure 51. Ground truth scanpaths for the snow scene.
- Figure 52. Ground truth scanpaths for the museum scene.
- Figure 53. Ground truth scanpaths for the square scene.
- Figure 54. Ground truth scanpaths for the room scene.
- Figure 55. Ground truth scanpaths for the chess scene.
- Figure 56. Ground truth scanpaths for the robots scene.
[orphaned fragment of Eq. (9) from the deleted file: a case definition evaluating to 0 when its argument equals 0 and to 1 when it is greater than 0]
txt/2103.14651.txt DELETED
@@ -1,1336 +0,0 @@
1
- Local Explanations via Necessity and Sufficiency:
2
- Unifying Theory and Practice
3
- David S. Watson*1Limor Gultchin*2,3Ankur Taly4Luciano Floridi5,3
4
- *Equal contribution1Department of Statistical Science, University College London, London, UK
5
- 2Department of Computer Science, University of Oxford, Oxford, UK
6
- 3The Alan Turing Institute, London, UK4Google Inc., Mountain View, USA
7
- 5Oxford Internet Institute, University of Oxford, Oxford, UK
8
- Abstract
9
- Necessity and sufficiency are the building blocks
10
- of all successful explanations. Yet despite their im-
11
- portance, these notions have been conceptually un-
12
- derdeveloped and inconsistently applied in explain-
13
- able artificial intelligence (XAI), a fast-growing re-
14
- search area that is so far lacking in firm theoretical
15
- foundations. Building on work in logic, probabil-
16
- ity, and causality, we establish the central role of
17
- necessity and sufficiency in XAI, unifying seem-
18
- ingly disparate methods in a single formal frame-
19
- work. We provide a sound and complete algorithm
20
- for computing explanatory factors with respect to
21
- a given context, and demonstrate its flexibility and
22
- competitive performance against state of the art al-
23
- ternatives on various tasks.
24
- 1 INTRODUCTION
25
- Machine learning algorithms are increasingly used in a va-
26
- riety of high-stakes domains, from credit scoring to medi-
27
- cal diagnosis. However, many such methods are opaque , in
28
- that humans cannot understand the reasoning behind partic-
29
- ular predictions. Post-hoc, model-agnostic local explanation
30
- tools (e.g., feature attributions, rule lists, and counterfactu-
31
- als) are at the forefront of a fast-growing area of research
32
- variously referred to as interpretable machine learning or
33
- explainable artificial intelligence (XAI).
34
- Many authors have pointed out the inconsistencies between
35
- popular XAI tools, raising questions as to which method
36
- is more reliable in particular cases [Mothilal et al., 2020a;
37
- Ramon et al., 2020; Fernández-Loría et al., 2020]. Theoret-
38
- ical foundations have proven elusive in this area, perhaps
39
- due to the perceived subjectivity inherent to notions such
40
- as “intelligible” and “relevant” [Watson and Floridi, 2020].
41
- Practitioners often seek refuge in the axiomatic guarantees
42
- of Shapley values, which have become the de facto stan-
43
- Figure 1: We describe minimal sufficient factors (here, sets
44
- of features) for a given input (top row), with the aim of
45
- preserving or flipping the original prediction. We report a
46
- sufficiency score for each set and a cumulative necessity
47
- score for all sets, indicating the proportion of paths towards
48
- the outcome that are covered by the explanation. Feature
49
- colors indicate source of feature values (input or reference).
50
- dard in many XAI applications, due in no small part to their
51
- attractive theoretical properties [Bhatt et al., 2020]. How-
52
- ever, ambiguities regarding the underlying assumptions of
53
- the method [Kumar et al., 2020] and the recent prolifera-
54
- tion of mutually incompatible implementations [Sundarara-
55
- jan and Najmi, 2019; Merrick and Taly, 2020] have com-
56
- plicated this picture. Despite the abundance of alternative
57
- XAI tools [Molnar, 2021], a dearth of theory persists. This
58
- has led some to conclude that the goals of XAI are under-
59
- specified [Lipton, 2018], and even that post-hoc methods do
60
- more harm than good [Rudin, 2019].
61
- We argue that this lacuna at the heart of XAI should be filled
62
- by a return to fundamentals – specifically, to necessity and
63
- sufficiency . As the building blocks of all successful expla-
64
- nations, these dual concepts deserve a privileged position
65
- in the theory and practice of XAI. Following a review of re-
66
- lated work (Sect. 2), we operationalize this insight with a
67
- unified framework (Sect. 3) that reveals unexpected affinities
68
- Accepted for the 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021). arXiv:2103.14651v2 [cs.LG] 10 Jun 2021
- between various XAI tools and probabilities of causation
69
- (Sect. 4). We proceed to implement a novel procedure for
70
- computing model explanations that improves upon the state
71
- of the art in various quantitative and qualitative comparisons
72
- (Sect. 5). Following a brief discussion (Sect. 6), we conclude
73
- with a summary and directions for future work (Sect. 7).
74
- We make three main contributions. (1) We present a formal
75
- framework for XAI that unifies several popular approaches,
76
- including feature attributions, rule lists, and counterfactu-
77
- als. (2) We introduce novel measures of necessity and suf-
78
- ficiency that can be computed for any feature subset. The
79
- method enables users to incorporate domain knowledge,
80
- search various subspaces, and select a utility-maximizing
81
- explanation. (3) We present a sound and complete algorithm
82
- for identifying explanatory factors, and illustrate its perfor-
83
- mance on a range of tasks.
84
- 2 NECESSITY AND SUFFICIENCY
85
- Necessity and sufficiency have a long philosophical tradi-
86
- tion [Mackie, 1965; Lewis, 1973; Halpern and Pearl, 2005b],
87
- spanning logical, probabilistic, and causal variants. In propo-
88
- sitional logic, we say that x is a sufficient condition for y
- iff x → y, and x is a necessary condition for y iff y → x.
- So stated, necessity and sufficiency are logically converse.
- However, by the law of contraposition, both definitions ad-
- mit alternative formulations, whereby sufficiency may be
- rewritten as ¬y → ¬x and necessity as ¬x → ¬y. By pair-
94
- ing the original definition of sufficiency with the latter def-
95
- inition of necessity (and vice versa), we find that the two
96
- concepts are also logically inverse .
97
- These formulae suggest probabilistic relaxations, measur-
98
- ing x's sufficiency for y by P(y | x) and x's necessity for y
- by P(x | y). Because there is no probabilistic law of contra-
- position, these quantities are generally uninformative w.r.t.
- P(¬x | ¬y) and P(¬y | ¬x), which may be of independent
102
- interest. Thus, while necessity is both the converse and in-
103
- verse of sufficiency in propositional logic, the two formula-
104
- tions come apart in probability calculus. We revisit the dis-
105
- tinction between probabilistic conversion and inversion in
106
- Rmk. 1 and Sect. 4.
107
- These definitions struggle to track our intuitions when we
108
- consider causal explanations [Pearl, 2000; Tian and Pearl,
109
- 2000]. It may make sense to say in logic that if xis a neces-
110
- sary condition for y, thenyis a sufficient condition for x;
111
- it does not follow that if xis a necessary cause ofy, theny
112
- is a sufficient cause ofx. We may amend both concepts us-
113
- ingcounterfactual probabilities – e.g., the probability that
114
- Alice would still have a headache if she had not taken an as-
115
- pirin, given that she does not have a headache and did take
116
- an aspirin. Let P(yxjx0;y0)denote such a quantity, to be
117
- read as “the probability that Ywould equal yunder an in-
118
- tervention that sets Xtox, given that we observe X=x0andY=y0.” Then, according to Pearl [2000, Ch. 9], the
119
- probability that xis a sufficient cause of yis given by
120
- suf(x;y) :=P(yxjx0;y0), and the probability that xis a
121
- necessary cause of yis given by nec(x;y) :=P(y0
122
- x0jx;y):
123
- Analysis becomes more difficult in higher dimensions,
124
- where variables may interact to block or unblock causal path-
125
- ways. VanderWeele and Robins [2008] analyze sufficient
126
- causal interactions in the potential outcomes framework,
127
- refining notions of synergism without monotonicity con-
128
- straints. In a subsequent paper, VanderWeele and Richard-
129
- son [2012] study the irreducibility and singularity of interac-
130
- tions in sufficient-component cause models. Halpern [2016]
131
- devotes an entire monograph to the subject, providing vari-
132
- ous criteria to distinguish between subtly different notions
133
- of “actual causality”, as well as “but-for” (similar to nec-
134
- essary) and sufficient causes. These authors generally limit
135
- their analyses to Boolean systems with convenient structural
136
- properties, e.g. conditional ignorability and the stable unit
137
- treatment value assumption [Imbens and Rubin, 2015]. Op-
138
- erationalizing their theories in a practical method without
139
- such restrictions is one of our primary contributions.
140
- Necessity and sufficiency have begun to receive explicit at-
141
- tention in the XAI literature. Ribeiro et al. [2018a] propose
142
- a bandit procedure for identifying a minimal set of Boolean
143
- conditions that entails a predictive outcome (more on this in
144
- Sect. 4). Dhurandhar et al. [2018] propose an autoencoder
145
- for learning pertinent negatives and positives, i.e. features
146
- whose presence or absence is decisive for a given label,
147
- while Zhang et al. [2018] develop a technique for generat-
148
- ing symbolic corrections to alter model outputs. Both meth-
149
- ods are optimized for neural networks, unlike the model-
150
- agnostic approach we develop here.
151
- Another strand of research in this area is rooted in logic pro-
152
- gramming. Several authors have sought to reframe XAI as
153
- either a SAT [Ignatiev et al., 2019; Narodytska et al., 2019]
154
- or a set cover problem [Lakkaraju et al., 2019; Grover et al.,
155
- 2019], typically deriving approximate solutions on a pre-
156
- specified subspace to ensure computability in polynomial
157
- time. We adopt a different strategy that prioritizes complete-
158
- ness over efficiency, an approach we show to be feasible in
159
- moderate dimensions (see Sect. 6 for a discussion).
160
- Mothilal et al. [2020a] build on Halpern [2016]’s definitions
161
- of necessity and sufficiency to critique popular XAI tools,
162
- proposing a new feature attribution measure with some pur-
163
- ported advantages. Their method relies on the strong as-
164
- sumption that predictors are mutually independent. Galho-
165
- tra et al. [2021] adapt Pearl [2000]’s probabilities of cau-
166
- sation for XAI under a more inclusive range of data gen-
167
- erating processes. They derive analytic bounds on multidi-
168
- mensional extensions of nec andsuf, as well as an algo-
169
- rithm for point identification when graphical structure per-
170
- mits. Oddly, they claim that non-causal applications of ne-
171
- cessity and sufficiency are somehow “incorrect and mislead-ing” (p. 2), a normative judgment that is inconsistent with
172
- many common uses of these concepts.
173
- Rather than insisting on any particular interpretation of ne-
174
- cessity and sufficiency, we propose a general framework that
175
- admits logical, probabilistic, and causal interpretations as
176
- special cases. Whereas previous works evaluate individual
177
- predictors, we focus on feature subsets , allowing us to detect
178
- and quantify interaction effects. Our formal results clarify
179
- the relationship between existing XAI methods and proba-
180
- bilities of causation, while our empirical results demonstrate
181
- their applicability to a wide array of tasks and datasets.
182
- 3 A UNIFYING FRAMEWORK
183
- We propose a unifying framework that highlights the role of
184
- necessity and sufficiency in XAI. Its constituent elements
185
- are described below.
186
- Target function. Post-hoc explainability methods assume
187
- access to a target function f:X7!Y , i.e. the model whose
188
- prediction(s) we seek to explain. For simplicity, we restrict
189
- attention to the binary setting, with Y2f0;1g. Multi-class
190
- extensions are straightforward, while continuous outcomes
191
- may be accommodated via discretization. Though this in-
192
- evitably involves some information loss, we follow authors
193
- in the contrastivist tradition in arguing that, even for con-
194
- tinuous outcomes, explanations always involve a juxtapo-
195
- sition (perhaps implicit) of “fact and foil” [Lipton, 1990].
196
- For instance, a loan applicant is probably less interested in
197
- knowing why her credit score is precisely ythan she is in
198
- discovering why it is below some threshold (say, 700). Of
199
- course, binary outcomes can approximate continuous values
200
- with arbitrary precision over repeated trials.
201
- Context. The contextDis a probability distribution over
202
- which we quantify sufficiency and necessity. Contexts may
203
- be constructed in various ways but always consist of at least
204
- some input (point or space) and reference (point or space).
205
- For instance, we may want to compare xiwith all other
206
- samples, or else just those perturbed along one or two axes,
207
- perhaps based on some conditioning event(s).
208
- In addition to predictors and outcomes, we optionally in-
209
- clude information exogenous to f. For instance, if any
210
- events were conditioned upon to generate a given refer-
211
- ence sample, this information may be recorded among a
212
- set of auxiliary variables W. Other examples of potential
213
- auxiliaries include metadata or engineered features such as
214
- those learned via neural embeddings. This augmentation al-
215
- lows us to evaluate the necessity and sufficiency of factors
216
- beyond those found in X. Contextual data take the form
217
- Z = (X, W) ∼ D. The distribution may or may not en-
218
- code dependencies between (elements of) Xand (elements
219
- of)W. We extend the target function to augmented inputs
220
- by defining f(z) := f(x).
- Factors. Factors pick out the properties whose necessity
221
- and sufficiency we wish to quantify. Formally, a factor
222
- c : Z → {0, 1} indicates whether its argument satisfies
- some criteria with respect to predictors or auxiliaries. For
- instance, if x is an input to a credit lending model, and w
- contains information about the subspace from which data
- were sampled, then a factor could be c(z) = 1[x[gender =
- "female"] ∧ w[do(income > $50k)]], i.e. checking if z is
228
- female and drawn from a context in which an intervention
229
- fixes income at greater than $50k. We use the term “factor”
230
- as opposed to “condition” or “cause” to suggest an inclusive
231
- set of criteria that may apply to predictors xand/or auxil-
232
- iaries w. Such criteria are always observational w.r.t. zbut
233
- may be interventional or counterfactual w.r.t. x. We assume
234
- a finite space of factors C.
235
- Partial order. When multiple factors pass a given neces-
236
- sity or sufficiency threshold, users will tend to prefer some
237
- over others. For instance, factors with fewer conditions are
238
- often preferable to those with more, all else being equal;
239
- factors that change a variable by one unit as opposed to two
240
- are preferable, and so on. Rather than formalize this pref-
241
- erence in terms of a distance metric, which unnecessarily
242
- constrains the solution space, we treat the partial ordering
243
- as primitive and require only that it be complete and transi-
244
- tive. This covers not just distance-based measures but also
245
- more idiosyncratic orderings that are unique to individual
246
- agents. Ordinal preferences may be represented by cardi-
247
- nal utility functions under reasonable assumptions (see, e.g.,
248
- [von Neumann and Morgenstern, 1944]).
249
- We are now ready to formally specify our framework.
250
- Definition 1 (Basis) .Abasis for computing necessary and
251
- sufficient factors for model predictions is a tuple B=
252
- hf;D;C;i, wherefis a target function, Dis a context,C
253
- is a set of factors, and is a partial ordering on C.
254
- 3.1 EXPLANATORY MEASURES
255
- For some fixed basis B=hf;D;C;i, we define the fol-
256
- lowing measures of sufficiency and necessity, with probabil-
257
- ity taken overD.
258
- Definition 2 (Probability of Sufficiency). The probability
- that c is a sufficient factor for outcome y is given by:
- PS(c, y) := P(f(z) = y | c(z) = 1).
- The probability that factor set C = {c_1, ..., c_k} is sufficient
- for y is given by:
- PS(C, y) := P(f(z) = y | Σ_{i=1}^k c_i(z) ≥ 1).
265
- Definition 3 (Probability of Necessity). The probability
- that c is a necessary factor for outcome y is given by:
- PN(c, y) := P(c(z) = 1 | f(z) = y).
- The probability that factor set C = {c_1, ..., c_k} is neces-
- sary for y is given by:
- PN(C, y) := P(Σ_{i=1}^k c_i(z) ≥ 1 | f(z) = y).
271
- Remark 1. These probabilities can be likened to the “pre-
272
- cision” (positive predictive value) and “recall” (true posi-
273
- tive rate) of a (hypothetical) classifier that predicts whether
274
- f(z) = y based on whether c(z) = 1. By examining the
- confusion matrix of this classifier, one can define other
- related quantities, e.g. the true negative rate P(c(z) =
- 0 | f(z) ≠ y) and the negative predictive value P(f(z) ≠
- y | c(z) = 0), which are contrapositive transformations of
- our proposed measures. We can recover these values exactly
- via PS(1−c, 1−y) and PN(1−c, 1−y), respectively.
281
- When necessity and sufficiency are defined as probabilistic
282
- inversions (rather than conversions), such transformations
283
- are impossible.
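As a minimal illustration (a sketch, not the reference implementation), PS and PN and their contrapositive variants can be estimated from context samples exactly as one would read precision and recall off a confusion matrix. The factor c, target function f, and samples below are assumed to be supplied by the user.

import numpy as np

def estimate_ps(f, c, z_samples, y=1):
    # Estimate PS(c, y) = P(f(z) = y | c(z) = 1) over samples from D.
    cz = np.array([bool(c(z)) for z in z_samples])
    fy = np.array([f(z) == y for z in z_samples])
    return (cz & fy).sum() / cz.sum()

def estimate_pn(f, c, z_samples, y=1):
    # Estimate PN(c, y) = P(c(z) = 1 | f(z) = y) over samples from D.
    cz = np.array([bool(c(z)) for z in z_samples])
    fy = np.array([f(z) == y for z in z_samples])
    return (cz & fy).sum() / fy.sum()

# The contrapositive quantities of Remark 1 are obtained by negating both
# the factor and the outcome, e.g. PS(1-c, 1-y):
#   estimate_ps(f, lambda z: 1 - c(z), z_samples, y=0)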
284
- 3.2 MINIMAL SUFFICIENT FACTORS
285
- We introduce Local Explanations via Necessity and Suffi-
286
- ciency (LENS), a procedure for computing explanatory fac-
287
- tors with respect to a given basis B and threshold parame-
- ter τ (see Alg. 1). First, we calculate a factor's probability
- of sufficiency (see probSuff) by drawing n samples from
- D and taking the maximum likelihood estimate P̂S(c, y).
- Next, we sort the space of factors w.r.t. ⪯ in search of those
- that are τ-minimal.
293
- Definition 4 (τ-minimality). We say that c is τ-minimal iff
- (i) PS(c, y) ≥ τ and (ii) there exists no factor c′ such that
- PS(c′, y) ≥ τ and c′ ≺ c.
296
- Since a factor is necessary to the extent that it covers all
297
- possible pathways towards a given outcome, our next step is
298
- to span the τ-minimal factors and compute their cumulative
- PN (see probNec). As a minimal factor c stands for all c′
- such that c ⪯ c′, in reporting probability of necessity, we
301
- expandCto its upward closure.
302
- Thms. 1 and 2 state that this procedure is optimal in a sense
303
- that depends on whether we assume access to oracle or
304
- sample estimates of PS (see Appendix A for all proofs).
- Theorem 1. With oracle estimates PS(c, y) for all c ∈ C,
- Alg. 1 is sound and complete. That is, for any C returned
- by Alg. 1 and all candidate factors c, c is τ-minimal iff c ∈ C.
308
- Population proportions may be obtained if data fully saturate
309
- the spaceD, a plausible prospect for categorical variables
310
- of low to moderate dimensionality. Otherwise, proportions
311
- will need to be estimated.
312
- Theorem 2. With sample estimates P̂S(c, y) for all c ∈ C,
- Alg. 1 is uniformly most powerful. That is, Alg. 1 identifies
- the most τ-minimal factors of any method with fixed type I
- error α.
315
- Multiple testing adjustments can easily be accommodated,
316
- in which case modified optimality criteria apply [Storey,
317
- 2007].
318
- Remark 2. We take it that the main quantity of interest
319
- in most applications is sufficiency, be it for the original or
320
- alternative outcome, and therefore define τ-minimality w.r.t.
- sufficient (rather than necessary) factors. However, necessity
- serves an important role in tuning τ, as there is an inherent
- trade-off between the parameters. More factors are excluded
- at higher values of τ, thereby inducing lower cumulative
- PN; more factors are included at lower values of τ, thereby
326
- inducing higher cumulative PN. See Appendix B.
327
- Algorithm 1 LENS
- 1: Input: B = ⟨f, D, C, ⪯⟩; τ, α
- 2: Output: factor set C, (∀c ∈ C) PS(c, y), PN(C, y)
- 3: Sample D̂ = {z_i}_{i=1}^n ∼ D
- 4: function probSuff(c, y)
- 5:   n(c&y) = Σ_{i=1}^n 1[c(z_i) = 1 ∧ f(z_i) = y]
- 6:   n(c) = Σ_{i=1}^n c(z_i)
- 7:   return n(c&y) / n(c)
- 8: function probNec(C, y, upward_closure_flag)
- 9:   if upward_closure_flag then
- 10:    C = upward closure of C, i.e. {c | ∃c′ ∈ C : c′ ⪯ c}
- 11:  end if
- 12:  n(C&y) = Σ_{i=1}^n 1[Σ_{j=1}^k c_j(z_i) ≥ 1 ∧ f(z_i) = y]
- 13:  n(y) = Σ_{i=1}^n 1[f(z_i) = y]
- 14:  return n(C&y) / n(y)
- 15: function minimalSuffFactors(y, τ, sample_flag, α)
- 16:  sorted_factors = topological_sort(C, ⪯)
- 17:  cands = []
- 18:  for c in sorted_factors do
- 19:    if ∃(c′, _) ∈ cands : c′ ⪯ c then
- 20:      continue
- 21:    end if
- 22:    ps = probSuff(c, y)
- 23:    if sample_flag then
- 24:      p = binom.test(n(c&y), n(c), τ, alt = >)
- 25:      if p ≤ α then
- 26:        cands.append(c, ps)
- 27:      end if
- 28:    else if ps ≥ τ then
- 29:      cands.append(c, ps)
- 30:    end if
- 31:  end for
- 32:  cum_pn = probNec({c | (c, _) ∈ cands}, y, TRUE)
- 33:  return cands, cum_pn
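Below is a compact Python sketch of the search in Alg. 1 under simplifying assumptions: factors are conjunctions represented as frozensets of conditions, so the subset relation plays the role of ⪯ and visiting factors by increasing size approximates the topological sort; SciPy's one-sided binomial test stands in for binom.test. It is illustrative rather than a faithful reimplementation.

from scipy.stats import binomtest

def minimal_sufficient_factors(f, factors, z_samples, y=1, tau=0.9,
                               alpha=0.05, sample_flag=True):
    # factors: iterable of (frozenset_of_conditions, callable_factor) pairs.
    cands = []
    for conds, c in sorted(factors, key=lambda fc: len(fc[0])):
        # Skip c if a preferred (subset) candidate has already been accepted.
        if any(prev <= conds for prev, _, _ in cands):
            continue
        n_c = sum(bool(c(z)) for z in z_samples)
        n_cy = sum(bool(c(z)) and f(z) == y for z in z_samples)
        if n_c == 0:
            continue
        ps = n_cy / n_c
        if sample_flag:
            # One-sided test of H0: PS(c, y) <= tau against PS(c, y) > tau.
            if binomtest(n_cy, n_c, tau, alternative='greater').pvalue <= alpha:
                cands.append((conds, c, ps))
        elif ps >= tau:
            cands.append((conds, c, ps))
    # Cumulative PN of the accepted factors. The upward closure is implicit:
    # any sample satisfying a superset of an accepted conjunction also
    # satisfies the accepted conjunction itself.
    n_y = sum(f(z) == y for z in z_samples)
    n_any = sum(any(c(z) for _, c, _ in cands) and f(z) == y for z in z_samples)
    cum_pn = n_any / n_y if n_y else float('nan')
    return cands, cum_pn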
367
- 4 ENCODING EXISTING MEASURES
- Explanatory measures can be shown to play a central role in
368
- many seemingly unrelated XAI tools, albeit under different
369
- assumptions about the basis tuple B. In this section, we
370
- relate our framework to a number of existing methods.
371
- Feature attributions. Several popular feature attribution
372
- algorithms are based on Shapley values [Shapley, 1953],
373
- which decompose the predictions of any target function as a
374
- sum of weights over d input features:
- f(x_i) = φ_0 + Σ_{j=1}^d φ_j,   (1)
- where φ_0 represents a baseline expectation and φ_j the
- weight assigned to X_j at point x_i. Let v : 2^d → R be a
- value function such that v(S) is the payoff associated with
- feature subset S ⊆ [d] and v(∅) = 0. Define the comple-
- ment R = [d] \ S such that we may rewrite any x_i as a pair
- of subvectors, (x_i^S, x_i^R). Payoffs are given by:
- v(S) = E[f(x_i^S, X^R)],   (2)
- although this introduces some ambiguity regarding the ref-
- erence distribution for X^R (more on this below). The Shap-
- ley value φ_j is then j's average marginal contribution to all
- subsets that exclude it:
- φ_j = Σ_{S ⊆ [d]\{j}} [ |S|! (d − |S| − 1)! / d! ] [ v(S ∪ {j}) − v(S) ].   (3)
394
- It can be shown that this is the unique solution to the attri-
395
- bution problem that satisfies certain desirable properties, in-
396
- cluding efficiency, linearity, sensitivity, and symmetry.
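To connect Eq. 3 with the sufficiency reading developed next, here is a brute-force sketch (exponential in d, so only suitable for toy problems) that computes the attribution exactly from an arbitrary value function; pairing it with v(S) = PS(c_S, y), as in Proposition 1 below, recovers the correspondence discussed in the text.

from itertools import combinations
from math import factorial

def shapley_values(v, d):
    # Exact Shapley values (Eq. 3) for a value function v over subsets of
    # {0, ..., d-1}, with v(frozenset()) == 0 assumed.
    phi = [0.0] * d
    for j in range(d):
        others = [i for i in range(d) if i != j]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[j] += weight * (v(S | {j}) - v(S))
    return phi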
397
- Reformulating this in our framework, we find that the value
398
- function v is a sufficiency measure. To see this, let each
- z ∼ D be a sample in which a random subset of variables
- S are held at their original values, while remaining features
- R are drawn from a fixed distribution D(· | S).¹
- Proposition 1. Let c_S(z) = 1 iff x ⊆ z was constructed
- by holding x_S fixed and sampling X^R according to D(· | S).
- Then v(S) = PS(c_S, y).
- Thus, the Shapley value φ_j measures X_j's average marginal
406
- increase to the sufficiency of a random feature subset. The
407
- advantage of our method is that, by focusing on particular
408
- subsets instead of weighting them all equally, we disregard
409
- irrelevant permutations and home in on just those that meet
410
- a τ-minimality criterion. Kumar et al. [2020] observe that,
411
- 1The diversity of Shapley value algorithms is largely due to
412
- variation in how this distribution is defined. Popular choices in-
413
- clude the marginal P(X^R) [Lundberg and Lee, 2017]; conditional
- P(X^R | x^S) [Aas et al., 2019]; and interventional P(X^R | do(x^S))
- [Heskes et al., 2020] distributions.
- "since there is no standard procedure for converting Shapley
416
- values into a statement about a model’s behavior, developers
417
- rely on their own mental model of what the values represent”
418
- (p. 8). By contrast, necessary and sufficient factors are more
419
- transparent and informative, offering a direct path to what
420
- Shapley values indirectly summarize.
421
- Rule lists. Rule lists are sequences of if-then statements
422
- that describe a hyperrectangle in feature space, creating par-
423
- titions that can be visualized as decision or regression trees.
424
- Rule lists have long been popular in XAI. While early work
425
- in this area tended to focus on global methods [Friedman
426
- and Popescu, 2008; Letham et al., 2015], more recent efforts
427
- have prioritized local explanation tasks [Lakkaraju et al.,
428
- 2019; Sokol and Flach, 2020].
429
- We focus in particular on the Anchors algorithm [Ribeiro
430
- et al., 2018a], which learns a set of Boolean conditions A
431
- (the eponymous "anchors") such that A(x_i) = 1 and
- P_{D(x|A)}(f(x_i) = f(x)) ≥ τ.   (4)
- The lhs of Eq. 4 is termed the precision, prec(A), and proba-
- bility is taken over a synthetic distribution in which the con-
- ditions in A hold while other features are perturbed. Once τ
- is fixed, the goal is to maximize coverage, formally defined
- as E[A(x) = 1], i.e. the proportion of datapoints to which
438
- the anchor applies.
439
- The formal similarities between Eq. 4 and Def. 2 are imme-
440
- diately apparent, and the authors themselves acknowledge
441
- that Anchors are intended to provide “sufficient conditions”
442
- for model predictions.
443
- Proposition 2. Let c_A(z) = 1 iff A(x) = 1. Then
- prec(A) = PS(c_A, y).
445
- While Anchors outputs just a single explanation, our method
446
- generates a ranked list of candidates, thereby offering a
447
- more comprehensive view of model behavior. Moreover, our
448
- necessity measure adds a mode of explanatory information
449
- entirely lacking in Anchors.
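Proposition 2 can be checked directly with the estimator sketched after Remark 1: an anchor is simply a conjunctive factor, so its precision is the same quantity as PS when both are computed over the same context samples. The anchor conditions below are purely hypothetical.

# Hypothetical anchor A on two tabular features, wrapped as a factor c_A.
anchor = lambda x: x["age"] > 30 and x["income"] > 50_000
c_A = lambda z: int(anchor(z))   # factors act on augmented samples z = (x, w)
# prec(A) and PS(c_A, y) then coincide:
#   estimate_ps(f, c_A, z_samples, y=1)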
450
- Counterfactuals. Counterfactual explanations identify
451
- one or several nearest neighbors with different outcomes, e.g.
452
- all datapoints x within an ε-ball of x_i such that labels f(x)
- and f(x_i) differ (for classification) or f(x) > f(x_i) + ε
- (for regression).2 The optimization problem is:
- x* = argmin_{x ∈ CF(x_i)} cost(x_i, x),   (5)
- where CF(x_i) denotes a counterfactual space such that
- f(x_i) ≠ f(x) and cost is a user-supplied cost function, typ-
459
- ically equated with some distance measure. [Wachter et al.,
460
- 2Confusingly, the term “counterfactual” in XAI refers to any
461
- point with an alternative outcome, which is distinct from the causal
462
- sense of the term (see Sect. 2). We use the word in both senses
463
- here, but strive to make our intended meaning explicit in each case.2018] recommend using generative adversarial networks
464
- to solve Eq. 5, while others have proposed alternatives de-
465
- signed to ensure that counterfactuals are coherent and ac-
466
- tionable [Ustun et al., 2019; Karimi et al., 2020a; Wexler
467
- et al., 2020]. As with Shapley values, the variation in these
468
- proposals is reducible to the choice of context D.
469
- For counterfactuals, we rewrite the objective as a search for
470
- minimal perturbations sufficient to flip an outcome.
471
- Proposition 3. Letcost be a function representing , and
472
- letcbe some factor spanning reference values. Then the
473
- counterfactual recourse objective is:
474
- c= argmin
475
- c2Ccost(c)s.t.PS(c;1y); (6)
476
- wheredenotes a decision threshold. Counterfactual out-
477
- puts will then be any zD such thatc(z) = 1 .
478
- Probabilities of causation. Our framework can describe
479
- Pearl [2000]’s aforementioned probabilities of causation,
480
- however in this case Dmust be constructed with care.
481
- Proposition 4. Consider the bivariate Boolean setting, as
482
- in Sect. 2. We have two counterfactual distributions: an in-
483
- put spaceI, in which we observe x;ybut intervene to set
484
- X=x0; and a reference space R, in which we observe x0;y0
485
- but intervene to set X=x. LetDdenote a uniform mixture
486
- over both spaces, and let auxiliary variable Wtag each sam-
487
- ple with a label indicating whether it comes from the origi-
488
- nal (W= 1) or contrastive ( W= 0) counterfactual space.
489
- Definec(z) =w. Then we have suf(x;y) =PS(c;y)and
490
- nec(x;y) =PS(1c;y0).
491
- In other words, we regard Pearl’s notion of necessity as suf-
492
- ficiency of the negated factor for the alternative outcome .
493
- By contrast, Pearl [2000] has no analogue for our proba-
494
- bility of necessity. This is true of any measure that defines
495
- sufficiency and necessity via inverse, rather than converse
496
- probabilities. While conditioning on the same variable(s)
497
- for both measures may have some intuitive appeal, it comes
498
- at a cost to expressive power. Whereas our framework can
499
- recover all four explanatory measures, corresponding to the
500
- classical definitions and their contrapositive forms, defini-
501
- tions that merely negate instead of transpose the antecedent
502
- and consequent are limited to just two.
503
- Remark 3. We have assumed that factors and outcomes
504
- are Boolean throughout. Our results can be extended to
505
- continuous versions of either or both variables, so long as
- c(Z) ⊥⊥ Y | Z. This conditional independence holds when-
- ever W ⊥⊥ Y | X, which is true by construction since
- f(z) := f(x). However, we defend the Boolean assump-
511
- tion on the grounds that it is well motivated by contrastivist
512
- epistemologies [Kahneman and Miller, 1986; Lipton, 1990;
513
- Blaauw, 2013] and not especially restrictive, given that parti-
514
- tions of arbitrary complexity may be defined over ZandY.
515
- Figure 2: Comparison of top-k features ranked by SHAP
- against the best performing LENS subset of size k in
- terms of PS(c, y). German results are over 50 inputs;
- SpamAssassins results are over 25 inputs.
519
- 5 EXPERIMENTS
520
- In this section, we demonstrate the use of LENS on a va-
521
- riety of tasks and compare results with popular XAI tools,
522
- using the basis configurations detailed in Table 1. A com-
523
- prehensive discussion of experimental design, including
524
- datasets and pre-processing pipelines, is left to Appendix
525
- C. Code for reproducing all results is available at https:
526
- //github.com/limorigu/LENS .
527
- Contexts. We consider a range of contexts Din our exper-
528
- iments. For the input-to-reference (I2R) setting, we replace
529
- input values with reference values for feature subsets S; for
530
- the reference-to-input (R2I) setting, we replace reference
531
- values with input values. We use R2I for examining suffi-
532
- ciency/necessity of the original model prediction, and I2R
533
- for examining sufficiency/necessity of a contrastive model
534
- prediction. We sample from the empirical data in all exper-
535
- iments, except in Sect. 5.3, where we assume access to a
536
- structural causal model (SCM).
537
- Partial Orderings. We consider two types of partial or-
538
- derings in our experiments. The first, subset , evaluates
539
- subset relationships. For instance, if c(z) =1[x[gender =
540
- “female” ]]andc0(z) = 1[x[gender =“female”^
541
- age40]], then we say that csubsetc0. The second,
542
- ccostc0:=csubsetc0^cost(c)cost(c0), adds the
543
- additional constraint that chas cost no greater than c0. The
544
- cost function could be arbitrary. Here, we consider distance
545
- measures over either the entire state space or just the inter-
546
- vention targets corresponding to c.
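- The two orderings can be written down directly if each factor is summarized by its set of intervention targets and a scalar cost; the Factor container below is an illustrative assumption, not a data structure from the paper.
-
-     from dataclasses import dataclass
-
-     @dataclass(frozen=True)
-     class Factor:
-         targets: frozenset    # features whose values the factor fixes
-         cost: float           # user-supplied cost of realizing those interventions
-
-     def precedes_subset(c, c_prime):
-         # c precedes c_prime under the subset ordering iff c's targets are contained in c_prime's.
-         return c.targets <= c_prime.targets
-
-     def precedes_cost(c, c_prime):
-         # subset relation plus cost(c) <= cost(c_prime).
-         return precedes_subset(c, c_prime) and c.cost <= c_prime.cost
-
-     c1 = Factor(frozenset({"gender"}), cost=1.0)
-     c2 = Factor(frozenset({"gender", "age"}), cost=2.0)
-     assert precedes_subset(c1, c2) and precedes_cost(c1, c2)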
547
- 5.1 FEATURE ATTRIBUTIONS
548
- Feature attributions are often used to identify the top- kmost
549
- important features for a given model outcome [Barocas et al.,
550
- 2020]. However, we argue that these feature sets may not
551
- be explanatory with respect to a given prediction. To show
552
- this, we compute R2I and I2R sufficiency – i.e., PS(c, y)
- and PS(1 - c, 1 - y), respectively – for the top-k most
- influential features (k ∈ [1, 9]) as identified by SHAP
- [Lundberg and Lee, 2017] and LENS. Fig. 2 shows results from
- the R2I setting for the German credit [Dua and Graff, 2017]
- and SpamAssassin [SpamAssassin, 2006] datasets. Our
-
- Table 1: Overview of experimental settings by basis configuration.
-
- Experiment                                        | Datasets              | f           | D          | C                    | ⪯
- Attribution comparison                            | German, SpamAssassins | Extra-Trees | R2I, I2R   | Intervention targets | -
- Anchors comparison: Brittle predictions           | IMDB                  | LSTM        | R2I, I2R   | Intervention targets | subset
- Anchors comparison: PS and Prec                   | German                | Extra-Trees | R2I        | Intervention targets | subset
- Counterfactuals: Adversarial                      | SpamAssassins         | MLP         | R2I        | Intervention targets | subset
- Counterfactuals: Recourse, DiCE comparison        | Adult                 | MLP         | I2R        | Full interventions   | cost
- Counterfactuals: Recourse, causal vs. non-causal  | German                | Extra-Trees | I2R causal | Full interventions   | cost
565
- method attains higher PS for all cardinalities. We repeat
566
- the experiment over 50 inputs, plotting means and 95% con-
567
- fidence intervals for all k. Results indicate that our rank-
568
- ing procedure delivers more informative explanations than
569
- SHAP at any fixed degree of sparsity. Results from the I2R
570
- setting are in Appendix C.
571
- 5.2 RULE LISTS
572
- Sentiment sensitivity analysis. Next, we use LENS to
573
- study model weaknesses by considering minimal factors
574
- with high R2I and I2R sufficiency in text models. Our
575
- goal is to answer questions of the form, “What are words
576
- with/without which our model would output the origi-
577
- nal/opposite prediction for an input sentence?” For this ex-
578
- periment, we train an LSTM network on the IMDB dataset
579
- for sentiment analysis [Maas et al., 2011]. If the model mis-
580
- labels a sample, we investigate further; if it does not, we
581
- inspect the most explanatory factors to learn more about
582
- model behavior. For the purpose of this example, we only
583
- inspect sentences of length 10 or shorter. We provide two
584
- examples below and compare with Anchors (see Table 2).
585
- Consider our first example: READ BOOK FORGET MOVIE is
586
- a sentence we would expect to receive a negative prediction,
587
- but our model classifies it as positive. Since we are inves-
588
- tigating a positive prediction, our reference space is condi-
589
- tioned on a negative label. For this model, the classic UNK
590
- token receives a positive prediction. Thus we opt for an al-
591
- ternative, PLATE . Performing interventions on all possible
592
- combinations of words with our token, we find the conjunc-
593
- tion of READ ,FORGET , and MOVIE is a sufficient factor for
594
- a positive prediction (R2I). We also find that changing any
595
- ofREAD ,FORGET , or MOVIE to PLATE would result in a
596
- negative prediction (I2R). Anchors, on the other hand, per-
597
- turbs the data stochastically (see Appendix C), suggesting
598
- the conjunction READ AND BOOK . Next, we investigate
599
- the sentence: YOU BETTER CHOOSE PAUL VERHOEVEN
600
- EVEN WATCHED . Since the label here is negative, we use
601
- theUNK token. We find that this prediction is brittle – a
602
- change of almost any word would be sufficient to flip the
603
- outcome. Anchors, on the other hand, reports a conjunction
604
- including most words in the sentence. Taking the R2I view,
605
- we still find a more concise explanation: CHOOSE orEVEN
606
- would be enough to attain a negative prediction. These brief
607
- examples illustrate how LENS may be used to find brittle
608
- predictions across samples, search for similarities between
609
- Figure 3: We compare PS(c, y) against precision scores
- attained by the output of LENS and Anchors for examples
- from German. We repeat the experiment for 100 inputs,
- and each time consider the single example generated by
- Anchors against the mean PS(c, y) among LENS's candidates.
- Dotted line indicates τ = 0.9.
615
- errors, or test for model reliance on sensitive attributes (e.g.,
616
- gender pronouns).
617
- Anchors comparison. Anchors also includes a tabular
618
- variant, against which we compare LENS’s performance
619
- in terms of R2I sufficiency. We present the results of this
620
- comparison in Fig. 3, and include additional comparisons
621
- in Appendix C. We sample 100 inputs from the German
622
- dataset, and query both methods with τ = 0.9 using the
- classifier from Sect. 5.1. Anchors satisfies a PAC bound
- controlled by a confidence parameter. At its default value of 0.1,
625
- Anchors fails to meet the threshold on 14% of samples;
626
- LENS meets it on 100% of samples. This result accords
627
- with Thm. 1, and vividly demonstrates the benefits of our
628
- optimality guarantee. Note that we also go beyond Anchors
629
- in providing multiple explanations instead of just a single
630
- output, as well as a cumulative probability measure with no
631
- analogue in their algorithm.
632
- 5.3 COUNTERFACTUALS
633
- Adversarial examples: spam emails. R2I sufficiency an-
634
- swers questions of the form, “What would be sufficient
635
- for the model to predict y?”. This is particularly valuable
636
- in cases with unfavorable outcomes y0. Inspired by adver-
637
- sarial interpretability approaches [Ribeiro et al., 2018b;
638
- Lakkaraju and Bastani, 2020], we train an MLP classifier
639
- on the SpamAssassins dataset and search for minimal
640
- factors sufficient to relabel a sample of spam emails as non-
641
- spam. Our examples follow some patterns common to spam
642
- emails: received from unusual email addresses, includes sus-
-
- Table 2: Example prediction given by an LSTM model trained on the IMDB dataset. We compare τ-minimal factors identified
- by LENS (as individual words), based on PS(c, y) and PS(1 - c, 1 - y), against the output of Anchors.
-
- Text                                            | Original model prediction    | Suggested anchors (Anchors)                  | Precision (Anchors) | Sufficient R2I factors (LENS) | Sufficient I2R factors (LENS)
- 'read book forget movie'                        | wrongly predicted positive   | [read, movie]                                | 0.94                | [read, forget, movie]         | read, forget, movie
- 'you better choose paul verhoeven even watched' | correctly predicted negative | [choose, better, even, you, paul, verhoeven] | 0.95                | choose, even                  | better, choose, paul, even
648
- Table 3: (Top) A selection of emails from SpamAssassins , correctly identified as spam by an MLP. The goal is to find
649
- minimal perturbations that result in non-spam predictions. (Bottom) Minimal subsets of feature-value assignments that
650
- achieve non-spam predictions with respect to the emails above.
651
- From To Subject First Sentence Last Sentence
652
- resumevalet info resumevalet com yyyy cv spamassassin taint org adv put resume back work dear candidate professionals online network inc
653
- jacqui devito goodroughy ananzi co za picone linux midrange com enlargement breakthrough zibdrzpay recent survey conducted increase size enter detailsto come open
654
- rose xu email com yyyyac idt net adv harvest lots target email address quickly want advertisement persons 18yrs old
655
- Gaming options Feature subsets for value changes
656
- From To
657
- 1crispin cown crispin wirex com example com mailing... list secprog securityfocus... moderator
658
- From First Sentence
659
- 2crispin cowan crispin wirex com scott mackenzie wrote
660
- From First Sentence
661
- 3tim one comcast net tim peters tim
662
- picious keywords such as ENLARGEMENT or ADVERTISE-
663
- MENT in the subject line, etc. We identify minimal changes
664
- that will flip labels to non-spam with high probability. Op-
665
- tions include altering the incoming email address to more
666
- common domains, and changing the subject or first sen-
667
- tences (see Table 3). These results can improve understand-
668
- ing of both a model’s behavior and a dataset’s properties.
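- The search used here can be sketched as follows: enumerate feature subsets of increasing size, substitute reference (non-spam) values for the input's values on those features, and keep the minimal subsets that flip the classifier. The sketch assumes a fitted scikit-learn-style classifier and 1-D numpy feature vectors, and uses a single reference row rather than the sampled references of the actual experiment.
-
-     from itertools import combinations
-
-     def minimal_flips(clf, x, reference, feature_names, target=0, max_size=3):
-         # Smallest feature subsets whose replacement by reference values yields `target`.
-         found = []
-         for size in range(1, max_size + 1):
-             for subset in combinations(range(len(feature_names)), size):
-                 if any(set(prev) <= set(subset) for prev in found):
-                     continue                      # skip supersets of already-found flips
-                 z = x.copy()
-                 z[list(subset)] = reference[list(subset)]
-                 if clf.predict(z.reshape(1, -1))[0] == target:
-                     found.append(subset)
-         return [[feature_names[i] for i in s] for s in found]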
669
- Diverse counterfactuals. Our explanatory measures can
670
- also be used to secure algorithmic recourse. For this experi-
671
- ment, we benchmark against DiCE [Mothilal et al., 2020b],
672
- which aims to provide diverse recourse options for any
673
- underlying prediction model. We illustrate the differences
674
- between our respective approaches on the Adult dataset
675
- [Kochavi and Becker, 1996], using an MLP and following
676
- the procedure from the original DiCE paper.
677
- According to DiCE, a diverse set of counterfactuals is
678
- one that differs in values assigned to features, and can
679
- thus produce a counterfactual set that includes different
680
- interventions on the same variables (e.g., CF1: age=
681
- 91;occupation = “retired”; CF2: age= 44;occupation =
682
- “teacher”). Instead, we look at diversity of counterfactuals
683
- in terms of intervention targets , i.e. features changed (in
684
- this case, from input to reference values) and their effects.
685
- We present minimal cost interventions that would lead to re-
686
- course for each feature set but we summarize the set of paths
687
- to recourse via subsets of features changed. Thus, DiCE pro-
688
- vides answers of the form “Because you are not 91 and re-
689
- tired” or “Because you are not 44 and a teacher”; we answer
690
- “Because of your age and occupation”, and present the low-
691
- est cost intervention on these features sufficient to flip the
692
- prediction.
693
- With this intuition in mind, we compare outputs given by
694
- DiCE and LENS for various inputs. For simplicity, we let
695
- all features vary independently. We consider two metrics for
696
- comparison: (a) the mean cost of proposed factors, and (b)
697
- the number of minimally valid candidates proposed, where a
698
- Figure 4: A comparison of mean cost of outputs by LENS
699
- and DiCE for 50 inputs sampled from the Adult dataset.
700
- factor c from a method M is minimally valid iff, for all c'
- proposed by M', ¬(c' ⪯_cost c) (i.e., M' does not report a
- factor preferable to c). We report results based on 50 randomly
703
- sampled inputs from the Adult dataset, where references
704
- are fixed by conditioning on the opposite prediction. The
705
- cost comparison results are shown in Fig. 4, where we find
706
- that LENS identifies lower cost factors for the vast majority
707
- of inputs. Furthermore, DiCE finds no minimally valid can-
708
- didates that LENS did not already account for. Thus LENS
709
- emphasizes minimality anddiversity of intervention targets,
710
- while still identifying low cost intervention values.
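- The minimal-validity criterion used for this comparison is easy to state in code. Below, each method's output is a list of (targets, cost) pairs; this representation, and the two factors shown, are assumptions made for illustration.
-
-     def precedes_cost(c, c_prime):
-         targets, cost = c
-         targets_p, cost_p = c_prime
-         return set(targets) <= set(targets_p) and cost <= cost_p
-
-     def minimally_valid(c, competing):
-         # c is minimally valid iff no competing factor c' satisfies c' <=_cost c.
-         return not any(precedes_cost(c_prime, c) for c_prime in competing)
-
-     lens = [({"education"}, 1.0)]                 # invented factors for illustration
-     dice = [({"age", "education"}, 3.2)]
-     print([minimally_valid(c, lens) for c in dice])   # [False]: LENS already covers it
-     print([minimally_valid(c, dice) for c in lens])   # [True]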
711
- Causal vs. non-causal recourse. When a user relies on
712
- XAI methods to plan interventions on real-world systems,
713
- causal relationships between predictors cannot be ignored.
714
- In the following example, we consider the DAG in Fig. 5,
715
- intended to represent dependencies in the German credit
716
- dataset. For illustrative purposes, we assume access to the
717
- structural equations of this data generating process. (There
718
- are various ways to extend our approach using only partial
719
- causal knowledge as input [Karimi et al., 2020b; Heskes
720
- et al., 2020].) We construct Dby sampling from the SCM
721
- under a series of different possible interventions. Table 4
722
- describes an example of how using our framework with
723
- augmented causal knowledge can lead to different recourse
724
- options. Computing explanations under the assumption of
725
- feature independence results in factors that span a large
726
- part of the DAG depicted in Fig. 5. However, encoding
727
- structural relationships in D, we find that LENS assigns
728
- high explanatory value to nodes that appear early in the
729
- topological ordering. This is because intervening on a single
730
- root factor may result in various downstream changes once
731
- effects are fully propagated.
-
- Table 4: Recourse example comparing causal and non-causal (i.e., feature-independent) D. We sample a single input
732
- example with a negative prediction, and 100 references with the opposite outcome. For I2R causal we propagate the effects
733
- of interventions through a user-provided SCM.
734
- input I2R I2Rcausal
735
- Age Sex Job Housing Savings Checking Credit Duration Purpose -minimal factors ( = 0)Cost-minimal factors ( = 0)Cost
736
- Job: Highly skilled 1 Age: 24 0.07
737
- Checking: NA 1 Sex: Female 1
738
- Duration: 30 1.25 Job: Highly skilled 1
739
- Age: 65, Housing: Own 4.23 Housing: Rent 123 Male Skilled Free Little Little 1845 45 Radio/TV
740
- Age: 34, Savings: N/A 1.84 Savings: N/A 1
741
- [Figure 5 drawing: a DAG over the features Age, Sex, Job, Savings, Housing, Checking, Credit, Duration, and Purpose]
745
- Figure 5: Example DAG for German dataset.
746
- 6 DISCUSSION
747
- Our results, both theoretical and empirical, rely on access to
748
- the relevant context Dand the complete enumeration of all
749
- feature subsets. Neither may be feasible in practice. When
750
- elements of Zare estimated, as is the case with the genera-
751
- tive methods sometimes used in XAI, modeling errors could
752
- lead to suboptimal explanations. For high-dimensional set-
753
- tings such as image classification, LENS cannot be naïvely
754
- applied without substantial data pre-processing. The first is-
755
- sue is extremely general. No method is immune to model
756
- misspecification, and attempts to recreate a data generat-
757
- ing process must always be handled with care. Empirical
758
- sampling, which we rely on above, is a reasonable choice
759
- when data are fairly abundant and representative. However,
760
- generative models may be necessary to correct for known
761
- biases or sample from low-density regions of the feature
762
- space. This comes with a host of challenges that no XAI al-
763
- gorithm alone can easily resolve. The second issue – that
764
- a complete enumeration of all variable subsets is often im-
765
- practical – we consider to be a feature, not a bug. Complex
766
- explanations that cite many contributing factors pose cog-
767
- nitive as well as computational challenges. In an influen-
768
- tial review of XAI, Miller [2019] finds near unanimous con-
769
- sensus among philosophers and social scientists that, “all
770
- things being equal, simpler explanations – those that cite
771
- fewer causes... are better explanations” (p. 25). Even if we
772
- could list all -minimal factors for some very large value of
773
- d, it is not clear that such explanations would be helpful to
774
- humans, who famously struggle to hold more than seven ob-
775
- jects in short-term memory at any given time [Miller, 1955].
776
- That is why many popular XAI tools include some sparsity
777
- constraint to encourage simpler outputs.
778
- Rather than throw out some or most of our low-level fea-
779
- tures, we prefer to consider a higher level of abstraction,where explanations are more meaningful to end users. For
780
- instance, in our SpamAssassins experiments, we started
781
- with a pure text example, which can be represented via
782
- high-dimensional vectors (e.g., word embeddings). How-
783
- ever, we represent the data with just a few intelligible com-
784
- ponents: From andToemail addresses, Subject , etc. In
785
- other words, we create a more abstract object and consider
786
- each segment as a potential intervention target, i.e. a candi-
787
- date factor. This effectively compresses a high-dimensional
788
- dataset into a 10-dimensional abstraction. Similar strategies
789
- could be used in many cases, either through domain knowl-
790
- edge or data-driven clustering and dimensionality reduction
791
- techniques [Chalupka et al., 2017; Beckers et al., 2019; Lo-
792
- catello et al., 2019]. In general, if data cannot be represented
793
- by a reasonably low-dimensional, intelligible abstraction,
794
- then post-hoc XAI methods are unlikely to be of much help.
795
- 7 CONCLUSION
796
- We have presented a unified framework for XAI that fore-
797
- grounds necessity and sufficiency, which we argue are the
798
- fundamental building blocks of all successful explanations.
799
- We defined simple measures of both, and showed how they
800
- undergird various XAI methods. Our formulation, which re-
801
- lies on converse rather than inverse probabilities, is uniquely
802
- flexible and expressive. It covers all four basic explanatory
803
- measures – i.e., the classical definitions and their contra-
804
- positive transformations – and unambiguously accommo-
805
- dates logical, probabilistic, and/or causal interpretations, de-
806
- pending on how one constructs the basis tuple B. We illus-
807
- trated illuminating connections between our measures and
808
- existing proposals in XAI, as well as Pearl [2000]’s proba-
809
- bilities of causation. We introduced a sound and complete
810
- algorithm for identifying minimally sufficient factors, and
811
- demonstrated our method on a range of tasks and datasets.
812
- Our approach prioritizes completeness over efficiency, suit-
813
- able for settings of moderate dimensionality. Future research
814
- will explore more scalable approximations, model-specific
815
- variants optimized for, e.g., convolutional neural networks,
816
- and developing a graphical user interface.
817
- Acknowledgements
818
- DSW was supported by ONR grant N62909-19-1-2096.
- References
819
- Kjersti Aas, Martin Jullum, and Anders Løland. Explain-
820
- ing individual predictions when features are dependent:
821
- More accurate approximations to Shapley values. arXiv
822
- preprint, 1903.10464v2, 2019.
823
- Solon Barocas, Andrew D Selbst, and Manish Raghavan.
824
- The Hidden Assumptions behind Counterfactual Explana-
825
- tions and Principal Reasons. In FAT* , pages 80–89, 2020.
826
- Sander Beckers, Frederick Eberhardt, and Joseph Y Halpern.
827
- Approximate causal abstraction. In UAI, pages 210–219,
828
- 2019.
829
- Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian
830
- Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir
831
- Puri, José M F Moura, and Peter Eckersley. Explainable
832
- machine learning in deployment. In FAT* , pages 648–
833
- 657, 2020.
834
- Steven Bird, Ewan Klein, and Edward Loper. Natural lan-
835
- guage processing with Python: Analyzing text with the
836
- natural language toolkit . O’Reilly, 2009.
837
- Martijn Blaauw, editor. Contrastivism in Philosophy . Rout-
838
- ledge, New York, 2013.
839
- Krzysztof Chalupka, Frederick Eberhardt, and Pietro Perona.
840
- Causal feature learning: an overview. Behaviormetrika ,
841
- 44(1):137–164, 2017.
842
- Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen
843
- Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel
844
- Das. Explanations based on the missing: Towards con-
845
- trastive explanations with pertinent negatives. In NeurIPS ,
846
- pages 592–603, 2018.
847
- Dheeru Dua and Casey Graff. UCI machine learning
848
- repository, 2017. URL http://archive.ics.uci.
849
- edu/ml .
850
- C. Fernández-Loría, F. Provost, and X. Han. Explaining
851
- data-driven decisions made by AI systems: The counter-
852
- factual approach. arXiv preprint, 2001.07417, 2020.
853
- Jerome H Friedman and Bogdan E Popescu. Predictive
854
- learning via rule ensembles. Ann. Appl. Stat. , 2(3):916–
855
- 954, 2008.
856
- Sainyam Galhotra, Romila Pradhan, and Babak Salimi. Ex-
857
- plaining black-box algorithms using probabilistic con-
858
- trastive counterfactuals. In SIGMOD , 2021.
859
- Pierre Geurts, Damien Ernst, and Louis Wehenkel. Ex-
860
- tremely randomized trees. Mach. Learn. , 63(1):3–42,
861
- 2006.Sachin Grover, Chiara Pulice, Gerardo I. Simari, and V . S.
862
- Subrahmanian. Beef: Balanced english explanations of
863
- forecasts. IEEE Trans. Comput. Soc. Syst. , 6(2):350–364,
864
- 2019.
865
- Joseph Y Halpern. Actual Causality . The MIT Press, Cam-
866
- bridge, MA, 2016.
867
- Joseph Y Halpern and Judea Pearl. Causes and explanations:
868
- A structural-model approach. Part I: Causes. Br. J. Philos.
869
- Sci., 56(4):843–887, 2005a.
870
- Joseph Y Halpern and Judea Pearl. Causes and explanations:
871
- A structural-model approach. Part II: Explanations. Br. J.
872
- Philos. Sci. , 56(4):889–911, 2005b.
873
- Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom
874
- Claassen. Causal Shapley values: Exploiting causal
875
- knowledge to explain individual predictions of complex
876
- models. In NeurIPS , 2020.
877
- Alexey Ignatiev, Nina Narodytska, and Joao Marques-Silva.
878
- Abduction-based explanations for machine learning mod-
879
- els. In AAAI , pages 1511–1519, 2019.
880
- Guido W Imbens and Donald B Rubin. Causal Inference
881
- for Statistics, Social, and Biomedical Sciences: An Intro-
882
- duction . Cambridge University Press, Cambridge, 2015.
883
- Daniel Kahneman and Dale T. Miller. Norm theory: Com-
884
- paring reality to its alternatives. Psychol. Rev. , 93(2):136–
885
- 153, 1986.
886
- Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf,
887
- and Isabel Valera. A survey of algorithmic recourse:
888
- Definitions, formulations, solutions, and prospects. arXiv
889
- preprint, 2010.04050, 2020a.
890
- Amir-Hossein Karimi, Julius von Kügelgen, Bernhard
891
- Schölkopf, and Isabel Valera. Algorithmic recourse under
892
- imperfect causal knowledge: A probabilistic approach. In
893
- NeurIPS , 2020b.
894
- Diederik P. Kingma and Jimmy Ba. Adam: A method for
895
- stochastic optimization. In The 3rd International Confer-
896
- ence for Learning Representations , 2015.
897
- Ronny Kochavi and Barry Becker. Adult income dataset,
898
- 1996. URL https://archive.ics.uci.edu/
899
- ml/datasets/adult .
900
- Indra Kumar, Suresh Venkatasubramanian, Carlos Scheideg-
901
- ger, and Sorelle Friedler. Problems with Shapley-value-
902
- based explanations as feature importance measures. In
903
- ICML , pages 5491–5500, 2020.
904
- Himabindu Lakkaraju and Osbert Bastani. “How do I fool
905
- you?”: Manipulating user trust via misleading black box
906
- explanations. In AIES , pages 79–85, 2020.Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure
907
- Leskovec. Faithful and customizable explanations of
908
- black box models. In AIES , pages 131–138, 2019.
909
- E.L. Lehmann and Joseph P. Romano. Testing Statistical
910
- Hypotheses . Springer, New York, Third edition, 2005.
911
- Benjamin Letham, Cynthia Rudin, Tyler H McCormick, and
912
- David Madigan. Interpretable classifiers using rules and
913
- Bayesian analysis: Building a better stroke prediction
914
- model. Ann. Appl. Stat. , 9(3):1350–1371, 2015.
915
- David Lewis. Causation. J. Philos. , 70:556–567, 1973.
916
- Peter Lipton. Contrastive explanation. Royal Inst. Philos.
917
- Suppl. , 27:247–266, 1990.
918
- Zachary Lipton. The mythos of model interpretability. Com-
919
- mun. ACM , 61(10):36–43, 2018.
920
- Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar
921
- Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier
922
- Bachem. Challenging common assumptions in the un-
923
- supervised learning of disentangled representations. In
924
- ICML , pages 4114–4124, 2019.
925
- Scott M Lundberg and Su-In Lee. A unified approach to
926
- interpreting model predictions. In NeurIPS , pages 4765–
927
- 4774. 2017.
928
- Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan
929
- Huang, Andrew Y . Ng, and Christopher Potts. Learning
930
- word vectors for sentiment analysis. In ACL, pages 142–
931
- 150, 2011.
932
- J.L. Mackie. Causes and conditions. Am. Philos. Q. , 2(4):
933
- 245–264, 1965.
934
- Luke Merrick and Ankur Taly. The explanation game: Ex-
935
- plaining machine learning models using shapley values.
936
- InCD-MAKE , pages 17–38. Springer, 2020.
937
- George A. Miller. The magical number seven, plus or minus
938
- two: Some limits on our capacity for processing informa-
939
- tion. Psychol. Rev. , 101(2):343–352, 1955.
940
- Tim Miller. Explanation in artificial intelligence: Insights
941
- from the social sciences. Artif. Intell. , 267:1–38, 2019.
942
- Christoph Molnar. Interpretable Machine Learning: A
943
- Guide for Making Black Box Models Interpretable .
944
- Münich, 2021. URL https://christophm.
945
- github.io/interpretable-ml-book/ .
946
- Ramaravind K. Mothilal, Divyat Mahajan, Chenhao Tan,
947
- and Amit Sharma. Towards unifying feature attribution
948
- and counterfactual explanations: Different means to the
949
- same end. arXiv preprint, 2011.04917, 2020a.Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan.
950
- Explaining machine learning classifiers through diverse
951
- counterfactual explanations. In FAT* , pages 607–617,
952
- 2020b.
953
- Nina Narodytska, Aditya Shrotri, Kuldeep S Meel, Alexey
954
- Ignatiev, and Joao Marques-Silva. Assessing heuristic
955
- machine learning explanations with model counting. In
956
- SAT, pages 267–278, 2019.
957
- Judea Pearl. Causality: Models, Reasoning, and Inference .
958
- Cambridge University Press, New York, 2000.
959
- Jeffrey Pennington, Richard Socher, and Christopher D Man-
960
- ning. GloVe: Global vectors for word representation. In
961
- EMNLP , pages 1532–1543, 2014.
962
- Yanou Ramon, David Martens, Foster Provost, and
963
- Theodoros Evgeniou. A comparison of instance-level
964
- counterfactual explanation algorithms for behavioral and
965
- textual data: SEDC, LIME-C and SHAP-C. Adv. Data
966
- Anal. Classif. , 2020.
967
- Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin.
968
- Anchors: High-precision model-agnostic explanations. In
969
- AAAI , pages 1527–1535, 2018a.
970
- Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin.
971
- Semantically equivalent adversarial rules for debugging
972
- NLP models. In ACL, pages 856–865, 2018b.
973
- Cynthia Rudin. Stop explaining black box machine learning
974
- models for high stakes decisions and use interpretable
975
- models instead. Nat. Mach. Intell. , 1(5):206–215, 2019.
976
- Lloyd Shapley. A value for n-person games. In Contribu-
977
- tions to the Theory of Games , chapter 17, pages 307–317.
978
- Princeton University Press, Princeton, 1953.
979
- Kacper Sokol and Peter Flach. LIMEtree: Interactively
980
- customisable explanations based on local surrogate multi-
981
- output regression trees. arXiv preprint, 2005.01427, 2020.
982
- Apache SpamAssassin, 2006. URL https:
983
- //spamassassin.apache.org/old/
984
- publiccorpus/ . Accessed 2021.
985
- John D Storey. The optimal discovery procedure: A new
986
- approach to simultaneous significance testing. J. Royal
987
- Stat. Soc. Ser. B Methodol. , 69(3):347–368, 2007.
988
- Mukund Sundararajan and Amir Najmi. The many Shapley
989
- values for model explanation. In ACM , New York, 2019.
990
- Jin Tian and Judea Pearl. Probabilities of causation: Bounds
991
- and identification. Ann. Math. Artif. Intell. , 28(1-4):287–
992
- 313, 2000.
993
- Berk Ustun, Alexander Spangher, and Yang Liu. Actionable
994
- recourse in linear classification. In FAT* , pages 10–19,
995
- 2019.Tyler J VanderWeele and Thomas S Richardson. General
996
- theory for interactions in sufficient cause models with
997
- dichotomous exposures. Ann. Stat. , 40(4):2128–2161,
998
- 2012.
999
- Tyler J VanderWeele and James M Robins. Empirical and
1000
- counterfactual conditions for sufficient cause interactions.
1001
- Biometrika , 95(1):49–61, 2008.
1002
- John von Neumann and Oskar Morgenstern. Theory of
1003
- Games and Economic Behavior . Princeton University
1004
- Press, Princeton, NJ, 1944.
1005
- Sandra Wachter, Brent Mittelstadt, and Chris Russell. Coun-
1006
- terfactual explanations without opening the black box:
1007
- Automated decisions and the GDPR. Harvard J. Law
1008
- Technol. , 31(2):841–887, 2018.
1009
- David S Watson and Luciano Floridi. The explanation game:
1010
- a formal framework for interpretable machine learning.
1011
- Synthese , 2020.
1012
- J. Wexler, M. Pushkarna, T. Bolukbasi, M. Wattenberg,
1013
- F. Viégas, and J. Wilson. The what-if tool: Interactive
1014
- probing of machine learning models. IEEE Trans. Vis.
1015
- Comput. Graph. , 26(1):56–65, 2020.
1016
- Xin Zhang, Armando Solar-Lezama, and Rishabh Singh. In-
1017
- terpreting neural network judgments via minimal, stable,
1018
- and symbolic corrections. In NeurIPS , page 4879–4890,
1019
- 2018.
1020
- A PROOFS
1021
- A.1 THEOREMS
1022
- A.1.1 Proof of Theorem 1
1023
- Theorem. With oracle estimates PS(c, y) for all c ∈ C,
- Alg. 1 is sound and complete.
- Proof. Soundness and completeness follow directly from the
- specification of (P1) C and (P2) ⪯ in the algorithm's input
- B, along with (P3) access to oracle estimates PS(c, y) for
- all c ∈ C. Recall that the partial ordering must be complete
- and transitive, as noted in Sect. 3.
- Assume that Alg. 1 generates a false positive, i.e. outputs
- some c that is not τ-minimal. Then by Def. 4, either the
- algorithm failed to properly evaluate PS(c, y), thereby violating
- (P3); or it failed to identify some c' such that (i) PS(c', y) ≥ τ
- and (ii) c' ≺ c. (i) is impossible by (P3), and (ii) is
- impossible by (P2). Thus there can be no false positives.
- Assume that Alg. 1 generates a false negative, i.e. fails to
- output some c that is in fact τ-minimal. By (P1), this c
- cannot exist outside the finite set C. Therefore there must be
- some c ∈ C for which either the algorithm failed to properly
- evaluate PS(c, y), thereby violating (P3); or it wrongly identified
- some c' such that (i) PS(c', y) ≥ τ and (ii) c' ≺ c.
- Once again, (i) is impossible by (P3), and (ii) is impossible
- by (P2). Thus there can be no false negatives.
1043
- A.1.2 Proof of Theorem 2
1044
- Theorem. With sample estimates P̂S(c, y) for all c ∈ C,
- Alg. 1 is uniformly most powerful.
- Proof. A testing procedure is uniformly most powerful
- (UMP) if it attains the lowest type II error of all tests with
- fixed type I error α. Let Θ_0, Θ_1 denote a partition of the
- parameter space into null and alternative regions, respectively.
- The goal in frequentist inference is to test the null hypothesis
- H_0 : θ ∈ Θ_0 against the alternative H_1 : θ ∈ Θ_1 for
- some parameter θ. Let ψ(X) be a testing procedure of the
- form 1[T(X) ≥ c_α], where X is a finite sample, T(X) is a
- test statistic, and c_α is the critical value. This latter parameter
- defines a rejection region such that test statistics integrate
- to α under H_0. We say that ψ(X) is UMP iff, for any
- other test ψ'(X) such that
-
-     sup_{θ ∈ Θ_0} E_θ[ψ'(X)] ≤ α,
-
- we have
-
-     (∀θ ∈ Θ_1)  E_θ[ψ'(X)] ≤ E_θ[ψ(X)],
-
- where E_{θ ∈ Θ_1}[ψ(X)] denotes the power of the test to
- detect the true θ, 1 - β(θ). The UMP-optimality of Alg. 1
- follows from the UMP-optimality of the binomial test (see
- [Lehmann and Romano, 2005, Ch. 3]), which is used to decide
- between H_0 : PS(c, y) < τ and H_1 : PS(c, y) ≥ τ
- on the basis of observed proportions P̂S(c, y), estimated
- from n samples for all c ∈ C. The proof now takes the same
- structure as that of Thm. 1, with (P3) replaced by (P3'): access
- to UMP estimates of PS(c, y). False positives are no
- longer impossible but bounded at level α; false negatives
- are no longer impossible but occur with frequency β. Because
- no procedure can find more τ-minimal factors for any
- fixed α, Alg. 1 is UMP.
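- In practice, the decision between H_0 : PS(c, y) < τ and H_1 : PS(c, y) ≥ τ can be carried out with a one-sided exact binomial test. A minimal sketch, assuming a reasonably recent SciPy; the counts are placeholders:
-
-     from scipy.stats import binomtest
-
-     def accept_factor(successes, n, tau=0.9, alpha=0.05):
-         # One-sided exact binomial test of H0: PS < tau against H1: PS >= tau.
-         result = binomtest(successes, n, p=tau, alternative="greater")
-         return result.pvalue <= alpha
-
-     # e.g., 192 of 200 sampled z with c(z) = 1 had the target outcome y
-     print(accept_factor(successes=192, n=200))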
1075
- A.2 PROPOSITIONS
1076
- A.2.1 Proof of Proposition 1
1077
- Proposition. Let c_S(z) = 1 iff x_z was constructed by
- holding x_S fixed and sampling X_R according to D(· | S).
- Then v(S) = PS(c_S, y).
- As noted in the text, D(x | S) may be defined in a variety of
- ways (e.g., via marginal, conditional, or interventional
- distributions). For any given choice, let c_S(z) = 1 iff x is
- constructed by holding x_S^i fixed and sampling X_R according
- to D(x | S). Since we assume binary Y (or binarized, as
- discussed in Sect. 3), we can rewrite Eq. 2 as a probability:
-
-     v(S) = P_{D(x|S)}( f(x_i) = f(x) ),
-
- where x_i denotes the input point. Since conditional sampling
- is equivalent to conditioning after sampling, this value
- function is equivalent to PS(c_S, y) by Def. 2.
1090
- A.2.2 Proof of Proposition 2
1091
- Proposition. Let c_A(z) = 1 iff A(x) = 1. Then
- prec(A) = PS(c_A, y).
- The proof for this proposition is essentially identical, except
- in this case our conditioning event is A(x) = 1. Let c_A = 1
- iff A(x) = 1. Precision prec(A), given by the lhs of
- Eq. 3, is defined over a conditional distribution D(x | A).
- Since conditional sampling is equivalent to conditioning
- after sampling, this probability reduces to PS(c_A, y).
1099
- A.2.3 Proof of Proposition 3
1100
- Proposition. Let cost be a function representing ⪯, and
- let c be some factor spanning reference values. Then the
- counterfactual recourse objective is:
-
-     c* = argmin_{c ∈ C} cost(c)   s.t.   PS(c, 1 - y) ≥ τ,      (7)
-
- where τ denotes a decision threshold. Counterfactual
- outputs will then be any z ∼ D such that c*(z) = 1.
1107
- There are two closely related ways of expressing the counter-
1108
- factual objective: as a search for optimal points , or optimal
1109
- actions . We start with the latter interpretation, reframing ac-
1110
- tions as factors. We are only interested in solutions that flip
1111
- the original outcome, and so we constrain the search to fac-
1112
- tors that meet an I2R sufficiency threshold, PS(c, 1 - y) ≥ τ.
- Then the optimal action is attained by whatever factor
1114
- (i) meets the sufficiency criterion and (ii) minimizes cost.
1115
- Call this factor c. The optimal point is then any zsuch that
1116
- c(z) = 1 .
1117
- A.2.4 Proof of Proposition 4
1118
- Proposition. Consider the bivariate Boolean setting, as in
- Sect. 2. We have two counterfactual distributions: an input
- space I, in which we observe x, y but intervene to set X = x';
- and a reference space R, in which we observe x', y' but
- intervene to set X = x. Let D denote a uniform mixture
- over both spaces, and let auxiliary variable W tag each sample
- with a label indicating whether it comes from the original
- (W = 1) or contrastive (W = 0) counterfactual space.
- Define c(z) = w. Then we have suf(x, y) = PS(c, y) and
- nec(x, y) = PS(1 - c, y').
- Recall from Sect. 2 that Pearl [2000, Ch. 9] defines
-
-     suf(x, y) := P(y_x | x', y')   and   nec(x, y) := P(y'_{x'} | x, y).
-
- We may rewrite the former as P_R(y), where the reference
- space R denotes a counterfactual distribution conditioned on
- x', y', do(x). Similarly, we may rewrite the latter as P_I(y'),
- where the input space I denotes a counterfactual distribution
- conditioned on x, y, do(x'). Our context D is a uniform
- mixture over both spaces.
- The key point here is that the auxiliary variable W indicates
- whether samples are drawn from I or R. Thus conditioning
- on different values of W allows us to toggle between
- probabilities over the two spaces. Therefore, for c(z) = w,
- we have suf(x, y) = PS(c, y) and nec(x, y) = PS(1 - c, y').
1142
- B ADDITIONAL DISCUSSIONS OF
1143
- METHOD
1144
- B.1 τ-MINIMALITY AND NECESSITY
1145
- As a follow up to Remark 2 in Sect. 3.2, we expand here
1146
- upon the relationship between τ and cumulative probabili-
1147
- ties of necessity, which is similar to a precision-recall curve
1148
- quantifying and qualifying errors in classification tasks. In
1149
- this case, as we lower τ, we allow more factors to be taken
1150
- into account, thus covering more pathways towards a desired
1151
- outcome in a cumulative sense. We provide an example of
1152
- such a precision-recall curve in Fig. 6, using an R2I view of
1153
- theGerman credit dataset. Different levels of cumulative
1154
- necessity may be warranted for different tasks, depending on
1155
- how important it is to survey multiple paths towards an out-
1156
- come. Users can therefore adjust to accommodate desired
1157
- levels of cumulative PN over successive calls to LENS.
1158
- Figure 6: An example curve exemplifying the relationship
- between τ and the cumulative probability of necessity attained by
- selected τ-minimal factors.
-
- C ADDITIONAL DISCUSSIONS OF
1161
- EXPERIMENTAL RESULTS
1162
- C.1 DATA PRE-PROCESSING AND MODEL
1163
- TRAINING
1164
- German Credit Risk. We first download the dataset from
1165
- Kaggle,3which is a slight modification of the UCI version
1166
- [Dua and Graff, 2017]. We follow the pre-processing steps
1167
- from a Kaggle tutorial.4In particular, we map the categori-
1168
- cal string variables in the dataset ( Savings ,Checking ,
1169
- Sex,Housing ,Purpose and the outcome Risk ) to nu-
1170
- meric encodings, and mean-impute values missing values
1171
- forSavings andChecking . We then train an Extra-Tree
1172
- classifier [Geurts et al., 2006] using scikit-learn, with ran-
1173
- dom state 0 and max depth 15. All other hyperparameters
1174
- are left to their default values. The model achieves a 71%
1175
- accuracy.
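- The classifier described here can be reproduced in a few lines of scikit-learn. This is a sketch rather than the authors' exact script: the train/test split is an assumption (it is not specified in the text), and X, y stand for the encoded German credit features and Risk labels produced by the pre-processing above.
-
-     from sklearn.ensemble import ExtraTreesClassifier
-     from sklearn.model_selection import train_test_split
-
-     def train_german_classifier(X, y):
-         X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)  # split is assumed
-         clf = ExtraTreesClassifier(random_state=0, max_depth=15)   # other hyperparameters at defaults
-         clf.fit(X_train, y_train)
-         return clf, clf.score(X_test, y_test)   # about 0.71 in the setting described above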
1176
- German Credit Risk - Causal. We assume a partial order-
1177
- ing over the features in the dataset, as described in Fig. 5.
1178
- We use this DAG to fit a structural causal model (SCM)
1179
- based on the original data. In particular, we fit linear regres-
1180
- sions for every continuous variable and a random forest clas-
1181
- sifier for every categorical variable. When sampling from
1182
- D, we let variables remain at their original values unless ei-
1183
- ther (a) they are directly intervened on, or (b) one of their
1184
- ancestors was intervened on. In the latter case, changes are
1185
- propagated via the structural equations. We add stochastic-
1186
- ity via Gaussian noise for continuous outcomes, with vari-
1187
- ance given by each model’s residual mean squared error.
1188
- For categorical variables, we perform multinomial sampling
1189
- over predicted class probabilities. We use the same fmodel
1190
- as for the non-causal German credit risk description above.
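- The sampling scheme just described amounts to a topological pass over the DAG: intervened variables are clamped and every descendant of a changed node is re-drawn from its fitted structural equation, while untouched variables keep their observed values. The two-node example below is illustrative only; the fitted regressions/classifiers and noise models for the full German DAG are as described above.
-
-     import numpy as np
-
-     def sample_scm(parents, equations, order, observation, interventions, rng):
-         # One draw from the SCM under do(interventions), starting from an observed row.
-         z = dict(observation)
-         affected = set(interventions)          # intervened nodes and, later, their descendants
-         for var in order:
-             if var in interventions:
-                 z[var] = interventions[var]
-             elif any(p in affected for p in parents[var]):
-                 z[var] = equations[var]({p: z[p] for p in parents[var]}, rng)
-                 affected.add(var)
-             # otherwise the variable keeps its originally observed value
-         return z
-
-     parents = {"age": [], "credit": ["age"]}      # toy DAG: Age -> Credit
-     equations = {"credit": lambda pa, rng: 50.0 * pa["age"] + rng.normal(scale=100.0)}
-     rng = np.random.default_rng(0)
-     print(sample_scm(parents, equations, ["age", "credit"], {"age": 30, "credit": 1500}, {"age": 24}, rng))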
1191
- SpamAssassins. The original spam assassins dataset comes
1192
- in the form of raw, multi-sentence emails captured on
1193
- the Apache SpamAssassins project, 2003-2015.5We seg-
1194
- mented the emails to the following “features”: From
1195
- is the sender; Tois the recipient; Subject is the
1196
- email’s subject line; Urls records any URLs found in
1197
- the body; Emails denotes any email addresses found
1198
- in the body; First Sentence ,Second Sentence ,
1199
- Penult Sentence , andLast Sentence refer to the
1200
- first, second, penultimate, and final sentences of the email,
1201
- respectively. We use the original outcome label from the
1202
- dataset (indicated by which folder the different emails were
1203
- saved to). Once we obtain a dataset in the form above, we
1204
- continue to pre-process by lower-casing all characters, only
1205
- [Footnote 3: https://www.kaggle.com/kabure/german-credit-data-with-risk?select=german_credit_data.csv]
- [Footnote 4: https://www.kaggle.com/vigneshj6/german-credit-data-analysis-python]
- [Footnote 5: https://spamassassin.apache.org/old/credits.html]
- keeping words or digits, clearing most punctuation (except
1212
- for ‘-’ and ‘_’), and removing stopwords based on nltk’s pro-
1213
- vided list [Bird et al., 2009]. Finally, we convert all clean
1214
- strings to their mean 50-dim GloVe vector representation
1215
- [Pennington et al., 2014]. We train a standard MLP classi-
1216
- fier using scikit-learn, with random state 1, max iteration
1217
- 300, and all other hyperparameters set to their default values.^6
- This model attains an accuracy of 98.3%.
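- The featurization and model described above amount to averaging 50-dimensional GloVe vectors and fitting a scikit-learn MLP. A sketch, assuming an embeddings dict mapping tokens to 50-dimensional numpy vectors and one cleaned token list per sample (the real pipeline builds one mean vector per email field):
-
-     import numpy as np
-     from sklearn.neural_network import MLPClassifier
-
-     def mean_glove(tokens, embeddings, dim=50):
-         # Average the vectors of tokens that have an embedding; zeros if none do.
-         vectors = [embeddings[t] for t in tokens if t in embeddings]
-         return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
-
-     def train_spam_classifier(token_lists, labels, embeddings):
-         X = np.stack([mean_glove(tokens, embeddings) for tokens in token_lists])
-         clf = MLPClassifier(random_state=1, max_iter=300)   # remaining hyperparameters at defaults
-         return clf.fit(X, labels)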
1219
- IMDB. We follow the pre-processing and modeling steps
1220
- taken in a standard tutorial on LSTM training for sentiment
1221
- prediction with the IMDB dataset.7The CSV is included in
1222
- the repository named above, and can be additionally down-
1223
- loaded from Kaggle or ai.standford.8In particular, these
1224
- include removal of HTML-tags, non-alphabetical charac-
1225
- ters, and stopwords based on the the list provided in the ntlk
1226
- package, as well as changing all alphabetical characters to
1227
- lower-case. We then train a standard LSTM model, with 32
1228
- as the embedding dimension and 64 as the dimensionality
1229
- of the output space of the LSTM layer, and an additional
1230
- dense layer with output size 1. We use the sigmoid activa-
1231
- tion function, binary cross-entropy loss, and optimize with
1232
- Adam [Kingma and Ba, 2015]. All other hyperparameters
1233
- are set to their default values as specified by Keras.9The
1234
- model achieves an accuracy of 87.03%.
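- The architecture described above (32-dimensional embeddings, a 64-unit LSTM layer, a single sigmoid output, binary cross-entropy, Adam) can be assembled as follows; vocab_size depends on the tokenization and is a placeholder:
-
-     from tensorflow.keras import Sequential
-     from tensorflow.keras.layers import Embedding, LSTM, Dense
-
-     def build_imdb_lstm(vocab_size):
-         model = Sequential([
-             Embedding(vocab_size, 32),          # 32-dimensional word embeddings
-             LSTM(64),                           # 64-dimensional LSTM output space
-             Dense(1, activation="sigmoid"),     # positive/negative sentiment
-         ])
-         model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
-         return model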
1235
- Adult Income. We obtain the adult income dataset via
1236
- DiCE’s implementation10and followed Haojun Zhu’s pre-
1237
- processing steps.11For our recourse comparison, we use a
1238
- pretrained MLP model provided by the authors of DiCE,
1239
- which is a single layer, non-linear model trained with Ten-
1240
- sorFlow and stored in their repository as ‘adult.h5’.
1241
- C.2 TASKS
1242
- Comparison with attributions. For completeness, we also
1243
- include here comparison of cumulative attribution scores
1244
- per cardinality with probabilities of sufficiency for the I2R
1245
- view (see Fig. 7).
1246
- Sentiment sensitivity analysis. We identify sentences in
1247
- the original IMDB dataset that are up to 10 words long. Out
1248
- of those, for the first example we only look at wrongly pre-
1249
- dicted sentences to identify a suitable example. For the other
1250
- 6Seehttps://scikit-learn.org/stable/
1251
- modules/generated/sklearn.\neural_network.
1252
- MLPClassifier.html .
1253
- 7Seehttps://github.com/hansmichaels/
1254
- sentiment-analysis-IMDB-Review-using-LSTM/
1255
- blob/master/sentiment_analysis.py.ipynb .
1256
- 8See
1257
- https://www.kaggle.com/lakshmi25npathi/
1258
- imdb-dataset-of-50k-movie-reviews orhttp:
1259
- //ai.stanford.edu/~amaas/data/sentiment/ .
1260
- 9Seehttps://keras.io .
1261
- 10Seehttps://github.com/interpretml/DiCE .
1262
- 11Seehttps://rpubs.com/H_Zhu/235617 .Table 5: Recourse options for a single input given by DiCE and our method. We report targets of interventions as suggested
1263
- options, but they could correspond to different values of interventions. Our method tends to propose more minimal and
1264
- diverse intervention targets. Note that all of DiCE’s outputs are already subsets of LENS’s two top suggestions, and due to
1265
- -minimality LENS is forced to pick the next factors to be non-supersets of the two top rows. This explains the higher cost
1266
- of LENS’s bottom three rows.
1267
- input DiCE output LENS output
1268
- Age Wrkcls Edu. Marital Occp. Race Sex Hrs/week Targets of intervention Cost Targets of intervention Cost
1269
- Age, Edu., Marital, Hrs/week 8.13 Edu. 1
1270
- Age, Edu., Marital, Occp., Sex, Hrs/week 5.866 Martial 1
1271
- Age, Wrkcls, Educ., Marital, Hrs/week 5.36 Occp., Hrs/week 19.3
1272
- Age, Edu., Occp., Hrs/week 3.2 Wrkcls, Occp., Hrs/week 12.642 Govt. HS-grad Single Service White Male 40
1273
- Edu., Hrs/week 11.6 Age, Wrkcls, Occp., Hrs/week 12.2
1274
- Figure 7: Comparison of degrees of sufficiency in I2R set-
1275
- ting, for top kfeatures based on SHAP scores, against the
1276
- best performing subset of cardinality kidentified by our
1277
- method. Results for German are averaged over 50 inputs;
1278
- results for SpamAssassins are averaged over 25 inputs.
1279
- example, we simply consider a random example from the
1280
- 10-word maximum length examples. We noted that Anchors
1281
- uses stochastic word-level perturbations for this setting. This
1282
- leads them to identify explanations of higher cardinality for
1283
- some sentences, which include elements that are not strictly
1284
- necessary. In other words, their outputs are not minimal, as
1285
- required for descriptions of “actual causes” [Halpern and
1286
- Pearl, 2005a; Halpern, 2016].
1287
- Comparison with Anchors. To complete the picture of
1288
- our comparison with Anchors on the German Credit Risk
1289
- dataset, we provide here additional results. In the main text,
1290
- we included a comparison of Anchors’s single output preci-
1291
- sion against the mean degree of sufficiency attained by our
1292
- multiple suggestions per input. We sample 100 different in-
1293
- puts from the German Credit dataset and repeat this same
1294
- comparison. Here we additionally consider the minimum
1295
- and maximum PS(c;y)attained by LENS against Anchors.
1296
- Note that even when considering minimum PSsuggestions
1297
- by LENS, i.e. our worst output, the method shows more con-
1298
- sistent performance. We qualify this discussion by noting
1299
- that Anchors may generate results comparable to our own
1300
- by setting the hyperparameter to a lower value. However,
1301
- Ribeiro et al. [2018a] do not discuss this parameter in de-
1302
- tail in either their original article or subsequent notebook
1303
- guides. They use default settings in their own experiments,
1304
- and we expect most practitioners will do the same.
1305
- Recourse: DiCE comparison First, we provide a single
1306
- Figure 8: We compare degree of sufficiency against preci-
1307
- sion scores attained by the output of LENS and Anchors for
1308
- examples from German . We repeat the experiment for 100
1309
- sampled inputs, and each time consider the single output
1310
- by Anchors against the min (left) and max (right) PS(c;y)
1311
- among LENS’s multiple candidates. Dotted line indicates
1312
- = 0:9, the threshold we chose for this experiment.
1313
- illustrative example of the lack of diversity in intervention
1314
- targets we identify in DiCE’s output. Let us consider one
1315
- example, shown in Table 5. While DiCE outputs are diverse
1316
- in terms of values and target combinations, they tend to
1317
- have great overlap in intervention targets. For instance, Age
1318
- andEducation appear in almost all of them. Our method
1319
- would focus on minimal paths to recourse that would involve
1320
- different combinations of features.
1321
- Figure 9: We show results over 50 input points sampled
1322
- from the original dataset, and all possible references of the
1323
- opposite class, across two metrics: the min cost (left) of
1324
- counterfactuals suggested by our method vs. DiCE, and the
1325
- max cost (right) of counterfactuals.
1326
- Next, we also provide additional results from our cost com-
1327
- parison with DiCE’s output in Fig. 8. While in the main text
1328
- we include a comparison of our mean cost output against
1329
- DiCE’s, here we additionally include a comparison of min
1330
- and max cost of the methods’ respective outputs. We see thateven when considering minimum and maximum cost, our
1331
- method tends to suggest lower cost recourse options. In par-
1332
- ticular, note that all of DiCE’s outputs are already subsets of
1333
- LENS’s two top suggestions. The higher costs incurred by
1334
- LENS for the next two lines are a reflection of this fact: due
1335
- to τ-minimality, LENS is forced to find other interventions
1336
- that are no longer supersets of options already listed above.
txt/2104.03109.txt DELETED
@@ -1,919 +0,0 @@
1
- VGF-Net: Visual-Geometric Fusion Learning
2
- for Simultaneous Drone Navigation and Height Mapping
3
- Yilin Liu, Shenzhen University
- Ke Xie, Shenzhen University
- Hui Huang*, Shenzhen University
7
- Abstract
8
- The drone navigation requires the comprehensive un-
9
- derstanding of both visual and geometric information
10
- in the 3D world. In this paper, we present a Visual-
11
- Geometric Fusion Network (VGF-Net), a deep network
12
- for the fusion analysis of visual/geometric data and
13
- the construction of 2.5D height maps for simultaneous
14
- drone navigation in novel environments. Given an initial
15
- rough height map and a sequence of RGB images, our
16
- VGF-Net extracts the visual information of the scene,
17
- along with a sparse set of 3D keypoints that capture
18
- the geometric relationship between objects in the scene.
19
- Driven by the data, VGF-Net adaptively fuses visual
20
- and geometric information, forming a unified Visual-
21
- Geometric Representation . This representation is fed to
22
- a new Directional Attention Model (DAM), which helps
23
- enhance the visual-geometric object relationship and
24
- propagates the informative data to dynamically refine
25
- the height map and the corresponding keypoints. An
26
- entire end-to-end information fusion and mapping sys-
27
- tem is formed, demonstrating remarkable robustness
28
- and high accuracy on the autonomous drone navigation
29
- across complex indoor and large-scale outdoor scenes.
30
- 1. Introduction
31
- In recent years, we have witnessed the development of
32
- autonomous robotic systems that have been broadly used
33
- in many scenarios (e.g., autonomous driving, manufactur-
34
- ing and surveillance). Drone belongs to the robotic system,
35
- and is well-known for its flying capacity. Navigation is ex-
36
- tremely important to the drone fly, as it facilitates the effec-
37
- tive exploration and recognition of the unknown environ-
38
- ments. Yet, the navigation of drone remains a challenging
39
- task, especially for planning the pathway as short as pos-
40
- sible to the target/destination whilst avoiding the potential
41
- collision with objects in the unexplored space. The conven-
42
- tional navigation heavily relies on the expertise of human,
43
- who intuitively designs the drone flyby trajectory based on
44
- the spatial layout within the visible range. The resulting
45
- *Corresponding author: Hui Huang ([email protected])
46
- Figure 1: We show a drone navigation trajectory (yellow
47
- curve) in 3D scene, which connects the starting and target
48
- points (red dots). During the navigation, our VGF-Net dy-
49
- namically updates the 2.5D height map (see the bottom-left
50
- corner) in new places (see pictures in red rectangles), which
51
- is used to timely update the navigation trajectory.
52
- navigation system lacks the global knowledge of scenes,
53
- leading to unsatisfactory or even failed path planning.
54
- To better leverage the global information of 3D en-
55
- vironment, researches on drone navigation have focused
56
- on collecting and memorizing the environmental informa-
57
- tion during the navigating process. Typically, the exist-
58
- ing works [11, 2, 3] employ the mapping techniques to
59
- construct 2D/3D maps with respect to the vacant/occupied
60
- space. The mapping result contains rich geometric relation-
61
- ship between objects, which helps to navigate. There have
62
- also been navigation approaches based on visual informa-
63
- tion [6, 10, 2], saving the computational overhead to con-
64
- struct maps. Nonetheless, these works purely condition the
65
- accuracy of navigation on either geometric or visual infor-
66
- mation.
67
- In this paper, we utilize 2.5D height map for autonomous
68
- drone navigation. There are growing computer applica-
69
- tions that use height map to represent the boundaries of
70
- objects (e.g., buildings or furniture). Nonetheless, there is
71
- nothing guaranteed for the quality of given height maps,
72
- as the mapping process likely involves incomplete or out-
73
- of-date information. Here, we advocate the importance
74
- of fusing geometric and visual information for a more ro-
75
- bust construction of the height map. The new trend of re-
76
- searches [24, 5] on the 3D object/scene understanding has
77
- also demonstrated that the geometric relationship between
78
- objects and visual appearance of scenes are closely corre-
79
- lated. We thus propose a Visual-Geometric Fusion Network
80
- (VGF-Net) to dynamically update the height map during
81
- drone navigation by utilizing the timely captured new im-
82
- ages (see Figure 1).
83
- More specifically, as illustrated in Figure 2, the network
84
- takes an initial rough height map together with a sequence
85
- of RGB images as input. We use convolutional layers to
86
- compute the visual and geometric information to renew the
87
- height map. Next, we apply the simultaneous localization
88
- and mapping (SLAM) [20] module to extract a sparse set
89
- of 3D keypoints from the image sequence. These key-
90
- points are used along with the renewed height map to con-
91
- struct a novel Visual-Geometric Representation , which is
92
- passed to a Directional Attention Model . This attention
93
- model exchanges visual and geometric information among
94
- objects in the scene, providing quite useful object relation-
95
- ship for simultaneous refinement of the height map and
96
- the corresponding keypoints, leading to the successful path
97
- planning [15] at each navigation moment. Compared to
98
- dense point clouds that require time-consuming depth es-
99
- timation [4] and costly processing, the sparse keypoints we
100
- use are fast to compute yet effective in terms of capturing
101
- useful geometric information without much redundancy. As
102
- the drone flies over more and more places, our network acquires and fuses more and more visual and geometric
- information, largely increasing the precision of the height map and consequently the reliability of autonomous navigation.
106
- We intensively train and evaluate our method on a bench-
107
- mark of seven large-scale urban scenes and six complex
108
- indoor scenes for height map construction and drone nav-
109
- igation. The experimental results and comparative statistics clearly demonstrate the effectiveness and the robustness of
- our proposed VGF-Net.
111
- 2. Related Work
112
- There has been an array of research on navigation systems that allow robots to smartly explore the real
- world. Below, we mainly survey drone navigation and environment mapping, as they are highly relevant
116
- to our work in the sense that their navigation systems are
117
- driven by the critical environment data.
118
- 2.1. Drone Navigation
119
- The modern drone systems are generally equipped with
120
- various sensors (e.g., RGB-D camera, radar and GPS),
121
- which help the hardware devices to achieve accurate percep-
122
- tion of the real world. Typically, the data captured by sen-
123
- sors is used for mapping (i.e., the construction of map), pro-
124
- viding comprehensive information for planning the moving
125
- path of the drone. During the navigation process, traditional methods [11, 22] compute the trajectory of the drone
- based on prescribed maps. However, the construction of a precise map is generally expensive and time-consuming.
- Thus, recent works [6, 10, 2] simplify map construction to facilitate cheaper and more practical navigation.
131
- Advances in deep learning have significantly improved the robustness of visual navigation, leading to the
- emergence of many navigation systems that do not rely on
134
- the given maps. Kim et al. [14] and Padhy et al. [21] use the
135
- classification neural network to predict the direction (e.g.,
136
- right, left or straight) of moving drone. Furthermore, Lo-
137
- quercio et al. [17] and Mirowski et al. [19] use neural net-
138
- works to compute the angle of flying and the risk of colli-
139
- sion, which provide more detailed information to control
140
- the drone flyby. Note that the above methods learn the
141
- actions of drone from the human annotations. The latest
142
- works employ deep reinforcement learning [23, 29, 26] to
143
- optimize the network, enabling more flexible solutions for
144
- autonomous drone navigation in novel environments.
145
- Our approach utilizes a rough 2.5D height map to in-
146
- crease the success rate of navigation in different complex
147
- scenes, which may have various spatial layouts of objects.
148
- Compared to the existing methods that conduct the mapping
149
- before navigation, we allow for real-time intelligent update
150
- of the height map during navigation, largely alleviating neg-
151
- ative impacts of problematic mapping results.
152
- 2.2. Mapping Technique
153
- The mapping technique is fundamental in the drone nav-
154
- igation. The techniques of 2D mapping have been widely
155
- used in the navigation task. Henriques et al. [11] and Savi-
156
- nov et al. [22] use 2D layout map to store useful informa-
157
- tion, which is learned by neural networks from the image
165
- Figure 2: Overview of VGF-Net. At the t-th moment, the network uses convolutional layers to learn visual and geometric
- representations from the RGB image I_t and the 2.5D height map M_t (produced at the (t-1)-th moment). The representations
- are combined to compute the residual update map R^c_t, which is added to the 2.5D height map to form a renewed height
- map M^c_t. Based on the new height map and the 3D keypoints {p_{t,1}, ..., p_{t,N}} (produced by SLAM), we construct the VG
- representation for each keypoint (yellow dot), which is used by DAM to select useful information to refine object boundaries
- and 3D keypoints at the next moment. Note that the refined height map M^r_{t+1} is used for path planning, which is omitted for
- a simple illustration.
175
- data of 3D scenes. Chen et al. [6] use the 2D topologi-
176
- cal map, which can be constructed using the coarse spatial
177
- layout of objects, to navigate the robot in an indoor scene.
178
- Different from the methods that consider the 2D map of an
179
- entire scene, Gupta et al. [10] unify the mapping and 2D
180
- path planning to rapidly adjust the navigation with respect
181
- to the surrounding local environment. Bansal et al. [2] uti-
182
- lize sparse waypoints to represent the map, which can be
183
- used to generate a smooth pathway to the target object or
184
- destination.
185
- Compared to 2D mapping, 3D mapping provides much
186
- richer spatial information for the navigation system. Wang
187
- et al. [27] use visual odometry to capture the geometric re-
188
- lationship between 3D points, which is important to recon-
189
- struct the 3D scene. Engel et al. [8, 7] integrate the tracking
190
- of keypoints into the mapping process, harnessing tempo-
191
- ral information to produce a more consistent mapping of
192
- the global environment. Furthermore, Huang et al. [12, 13]
193
- use a probabilistic Conditional Random Field model and a
194
- noise-aware motion affinity matrix to effectively track both
195
- moving and static objects. Wang et al. [28] use planes as a geometric constraint to reconstruct the whole scene. Be-
197
- sides 3D points, depth information is also important to 3D
198
- mapping. During the mapping process, Tateno et al. [25]
199
- and Ma et al. [18] use neural networks to estimate the depth
200
- map of a single image, for a faster construction of the 3D
201
- map. However, the fidelity of depth estimation is bounded
202
- by the scale of training data. To improve this, Kuznietsov et al. [16], Godard et al. [9] and Bian et al. [3] train the depth
- estimation network in a semi-supervised/unsupervised manner, where the consistency between images is learned.
205
- Nowadays, a vast number of real-world 3D models and applications, such as Google Earth, have emerged, so
- abundant height map data is available for training drone navigation systems. Nonetheless, the accuracy and
- timeliness of such data cannot be guaranteed, which makes it hard to use directly in practice. We deeply exploit the
- visual-geometric information fusion representation to effectively and dynamically update the given height map during
- navigation, yielding a significant increase in the success rate of autonomous drone navigation in various novel scenes.
215
- 3. Overview
216
- The core idea behind our approach is to fuse the visual
217
- and geometric information for the construction of height
218
- map. This is done by our Visual-Geometric Fusion Net-
219
- work (VGF-Net) to compute the visual-geometric represen-
220
- tation with respect to the visual and geometric consistency
221
- between the 3D keypoints and object boundaries character-
222
- ized in the height map. VGF-Net uses the fused representa-
223
- tion to refine the keypoints and height map at each moment
224
- during drone navigation. Below, we outline the architecture
225
- of VGF-Net.
226
- As illustrated in Figure 2, at the t-th moment (t ≥ 0), the network takes the RGB image I_t and the associated height
- map M_t as input. The image I_t is fed to convolutional layers to compute the visual representation V_t. The height map
- M_t is also input to convolutional layers to obtain the geometric representation G_t. The visual and geometric representa-
- tions are fused to compute the residual update map R^c_t that updates the height map to M^c_t, providing more consistent
- information for the subsequent steps.
236
- Next, we use the SLAM [20] module to compute a sparse set of 3D keypoints {p_{t,1}, ..., p_{t,N}}, based on the images
- {I_1, ..., I_t}. We project these keypoints onto the renewed height map M^c_t. For the keypoint p_{t,i}, we compute a set
- of distances {d_{t,i,1}, ..., d_{t,i,K}}, where d_{t,i,k} denotes the distance from the keypoint p_{t,i} to the nearest object
- boundary along the k-th direction (see Figure 3(a)). Intuitively, a keypoint, which is extracted around the objects in the
- 3D scene, is also near the boundaries of the corresponding objects in the height map. This relationship between the
- keypoint p_{t,i} and the object can be represented by the visual and geometric information in the scene. Specifically,
- this is done by fusing the visual representation V_t, the geometric representation G^c_t (learned from the renewed height
- map M^c_t) and the distances {d_{t,i,1}, ..., d_{t,i,K}} to form a novel Visual-Geometric (VG) representation U_{t,i} for the
- keypoint p_{t,i}. For all keypoints, we compute a set of VG representations {U_{t,1}, ..., U_{t,N}}.
257
- Finally, we employ a Directional Attention Model (DAM), which takes the VG representations {U_{t,1}, ..., U_{t,N}} as
- input, to learn a residual update map R^r_t that refines the height map M^c_t. The DAM produces a new height map
- M^r_{t+1} that respects the importance of each keypoint to the object boundaries in different directions (see Figure 3(b)).
- Meanwhile, we use DAM to compute a set of spatial offsets {Δp_{t+1,1}, ..., Δp_{t+1,N}} to update the keypoints, whose
- locations are imperfectly estimated by SLAM. We use the height map M^r_{t+1} for dynamic path planning [15] at the
- (t+1)-th moment, and meanwhile input the image I_{t+1} and the height map M^r_{t+1} to VGF-Net at that moment for the
- next update. As the drone flies, the network acquires more accurate information and works more robustly for simultaneous
- drone navigation and height mapping.
277
- 4. Method
278
- We now introduce our VGF-Net in more detail. The net-
279
- work extracts visual and geometric information from the
280
- RGB images, the associated 2.5D height map and 3D key-
281
- points. In what follows, we formally define the informa-
282
- tion fusion that produces the visual-geometric representa-
283
- tion, which is then used for the refinement of the height
284
- map and keypoints.
285
- 4.1. Residual Update Strategy
286
- The VGF-Net refines the height map and keypoints iteratively, as the drone flies to new places and captures new
- images. We divide this refinement process into separate moments. At the t-th moment, we feed the RGB image
- I_t ∈ R^{H_I × W_I × 3} and the height map M_t ∈ R^{H_M × W_M} into the VGF-Net, computing the global visual
- representation V_t ∈ R^{H_M × W_M × C} and the geometric representation G_t ∈ R^{H_M × W_M × C} as:
- V_t = F^v(I_t),  G_t = F^g(M_t),  (1)
- where F^v and F^g denote the two sets of convolutional layers. Note that the value of each location on M_t represents
- the height of the object, and we set the height of the ground to be 0. We concatenate the representations V_t and G_t
- to compute a residual update map R^c_t ∈ R^{H_M × W_M}, which is used to update the height map M_t as:
- M^c_t = M_t + R^c_t,  (2)
- where
- R^c_t = F^c(V_t, G_t).  (3)
- Here, M^c_t ∈ R^{H_M × W_M} is a renewed height map, and F^c denotes a set of convolutional layers. Compared to directly
- computing a new height map, the residual update strategy (as formulated by Eq. (2)) adaptively reuses the information
- of M_t. More importantly, we learn the residual update map R^c_t from the new content captured at the t-th moment. It
- facilitates a more focused update on the height values of regions that were unexplored before the t-th moment. The
- height map M^c_t is fed to an extra set of convolutional layers to produce the representation G^c_t, which will be used for the
- construction of the visual-geometric representation.
322
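As a concrete reference, the residual update of Eqs. (1)-(3) can be sketched in PyTorch as below. The module names (feat_v, feat_g, head_c), the layer choices and the pooling to a 20x20 map are illustrative assumptions rather than the exact architecture (the paper uses a ResNet-18 backbone for F^v and F^g).

    import torch
    import torch.nn as nn

    class ResidualHeightUpdate(nn.Module):
        # Minimal sketch of Eqs. (1)-(3): V_t = F^v(I_t), G_t = F^g(M_t),
        # R^c_t = F^c(V_t, G_t), M^c_t = M_t + R^c_t.
        def __init__(self, channels=64):
            super().__init__()
            self.feat_v = nn.Sequential(                 # F^v: visual branch (stand-in for the backbone)
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((20, 20)))
            self.feat_g = nn.Sequential(                 # F^g: geometric branch on the 20x20 height map
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
            self.head_c = nn.Conv2d(2 * channels, 1, 3, padding=1)   # F^c: residual head

        def forward(self, image, height_map):
            v = self.feat_v(image)                                    # V_t
            g = self.feat_g(height_map)                               # G_t
            residual = self.head_c(torch.cat([v, g], dim=1))          # R^c_t
            return height_map + residual                              # M^c_t = M_t + R^c_t

A separate convolutional stack on M^c_t would then yield G^c_t for the fusion step of Sec. 4.2.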
- 4.2. Visual-Geometric Representation
323
- We conduct the visual-geometric information fusion to further refine the height map. To capture the geometric
- relationship between objects, we use a standard SLAM [20] module to extract a sparse set of 3D keypoints
- {p_{t,1}, ..., p_{t,N}} from the sequence of images {I_1, ..., I_t}. Given the keypoint p_{t,i} ∈ R^{1×3} in the camera
- coordinate system, we project it to the 2.5D space as:
- p'_{t,i} = p_{t,i} S R + T.  (4)
- Here, S ∈ R^{3×3} is decided by a pre-defined scale factor, which can be calculated at the initialization of the SLAM
- system or by GPS adjustment. T ∈ R^{1×3} and R ∈ R^{3×3} translate the origin of the 3D point set from the camera to
- the height map coordinate system. In the height map coordinate system, the drone is located at (W/2, 0), where W
- represents the width of the height map.
- Note that the first two dimensions of p'_{t,i} ∈ R^{1×3} indicate the location on the height map, and the third dimension
- indicates the corresponding height value. The set of keypoints {p'_{t,1}, ..., p'_{t,N}} is used for constructing the visual-
- geometric representations.
348
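A minimal sketch of the projection in Eq. (4) is given below; it assumes S, R and T have already been estimated (e.g., at SLAM initialization or via GPS), and the function and variable names are chosen for illustration only.

    import torch

    def project_keypoints(points_cam, scale, rotation, translation):
        # points_cam: (N, 3) SLAM keypoints p_{t,i} in the camera frame.
        # scale, rotation: (3, 3); translation: (3,). Eq. (4): p' = p S R + T.
        # The first two output dimensions index the height map, the third is the height value.
        return points_cam @ scale @ rotation + translation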
- Figure 3: Illustration of fusing visual and geometric information for updating the 2.5D height map. (a) We construct
- the VG representation for each 3D keypoint (yellow dot) projected to the 2.5D height map. The information of the VG
- representation is propagated to surrounding object boundaries along different directions (indicated by different colors).
- The distance between the keypoint and the object boundary (black arrow) determines the weight for adjusting the
- information propagation. The dashed arrow means that there is no object along the corresponding direction. (b) Given
- an existing object boundary, we use DAM to select the most relevant keypoint along each direction. We use the selected
- keypoints to provide fused visual and geometric information, which is used for refining the object boundary.
- Next, for each keypoint p'_{t,i}, we compute its distances to the nearest objects in K different directions. Here, we refer
- to objects as the regions that have larger height values than the ground (with height value 0) in the height map M^c_t.
369
- As illustrated in Figure 3(a), we compute the Euclidean distance d_{t,i,k} along the k-th direction, from p'_{t,i} to the
- first location where the height value is larger than 0. We compute a set of distances {d_{t,i,1}, ..., d_{t,i,K}} for the K
- directions, and then use V_t (see Eq. (1)), G^c_t and this distance set to form the VG representation U_{t,i} ∈ R^K as:
- U_{t,i,k} = F^v_k(W_{t,i,k} V_t) + F^g_k(W_{t,i,k} G^c_{t,i}),  (5)
- where
- W_{t,i,k} = Σ_{k'=1}^{K} exp(−|d_{t,i,k} − d_{t,i,k'}|).  (6)
- Here, G^c_{t,i} ∈ R^C denotes the feature vector located at p'_{t,i} in the map G^c_t. In Eq. (5), U_{t,i,k} is represented as a
- weighted map with the resolution equal to that of the geometric representation (20×20 by default), where W_{t,i,k} acts as a
- weight of importance determined by the distance from the keypoint p'_{t,i} to the nearest object boundary along the k-th
- direction. As formulated in Eq. (5) and Eq. (6), a longer distance decays the importance. Besides, we use independent sets
- of fully connected layers (i.e., F^v_k and F^g_k in Eq. (5)) to learn important information from V_t and G^c_{t,i}. This allows
- content that is far from p'_{t,i} to still have an impact on U_{t,i,k}. We construct the VG representation for each keypoint in
- {p'_{t,1}, ..., p'_{t,N}}, where each VG representation captures the visual and geometric information around the corresponding
- keypoint. Based on the VG representations, we propagate the information of the keypoints to each location on the height
- map, where the corresponding height value is refined. We also learn temporal information from the VG representations to
- refine the spatial locations of the keypoints at the (t+1)-th moment, as detailed below.
414
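The directional distances d_{t,i,k} and the weights of Eq. (6) can be computed as in the following sketch. The simple one-cell ray-marching loop and the helper names are assumptions made for illustration; the fully connected layers F^v_k and F^g_k of Eq. (5) are omitted.

    import math
    import torch

    def directional_distances(height_map, point, num_dirs=16, max_steps=20):
        # height_map: (H, W) tensor; point: (row, col) of a projected keypoint p'_{t,i}.
        # Returns d_{t,i,k}: steps to the first cell with height > 0 along each of K directions.
        H, W = height_map.shape
        dists = torch.full((num_dirs,), float(max_steps))
        for k in range(num_dirs):
            angle = 2.0 * math.pi * k / num_dirs
            dr, dc = math.sin(angle), math.cos(angle)
            for step in range(1, max_steps + 1):
                r, c = int(round(point[0] + dr * step)), int(round(point[1] + dc * step))
                if not (0 <= r < H and 0 <= c < W):
                    break
                if height_map[r, c] > 0:          # first location occupied by an object
                    dists[k] = float(step)
                    break
        return dists

    def direction_weights(dists):
        # Eq. (6): the weight of direction k is the sum of exp(-|d_k - d_k'|) over all directions k'.
        diffs = (dists.unsqueeze(0) - dists.unsqueeze(1)).abs()
        return torch.exp(-diffs).sum(dim=1)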
- 4.3. Directional Attention Model
415
- We use DAM to propagate the visual and geometric information, from each keypoint to each location on the height
- map, along different directions. More formally, for a location p^h_j ∈ R^{1×3} on the height map M^c_t, we conduct the
- information propagation that yields a new representation Q_{t,j} ∈ R^{C×K} as:
- Q_{t,j} = Σ_{i=1}^{N} G^c_{t,j} U^⊤_{t,i}.  (7)
- Along the second dimension of the representation Q_{t,j}, we perform max pooling to yield Q'_{t,j} ∈ R^C as:
- Q'_{t,j,c} = max(Q_{t,j,c,1}, ..., Q_{t,j,c,K}).  (8)
- As illustrated in Eq. (7), Q_{t,j,c,k} summarizes the influence of all keypoints along the k-th direction. We perform max
- pooling on the set {Q_{t,j,c,1}, ..., Q_{t,j,c,K}} (see Eq. (8)), attending to the most information along a direction to form
- the representation Q'_{t,j,c} (see Figure 3(b)). To further refine the height map, we use the representation
- Q'_t ∈ R^{H_M × W_M × C} to compute another residual update map R^r_t ∈ R^{H_M × W_M}, which is added to the height
- map M^c_t to form a new height map M^r_{t+1} ∈ R^{H_M × W_M} as:
- M^r_{t+1} = M^c_t + R^r_t,  (9)
- where
- R^r_t = F^r(V_t, Q'_t).  (10)
- Again, F^r denotes a set of convolutional layers. We make use of the new height map M^r_{t+1} for the path planning at the
- (t+1)-th moment.
458
- We refine not only the 2.5D height map but also the 3D keypoints at the (t+1)-th moment. Assume that we use SLAM
- to produce a new set of keypoints {p_{t+1,1}, ..., p_{t+1,N}}. We remark that the keypoint sets at the t-th and (t+1)-th
- moments are not necessarily the same. To refine the new keypoint p_{t+1,j} ∈ R^{1×3}, we use DAM to compute the
- representation Δp'_{t+1,j} ∈ R^{3×K} as:
- Δp'_{t+1,j} = Σ_{i=1}^{N} p_{t,i} U^⊤_{t,i}.  (11)
- Figure 4: Overview of our 3D urban navigation dataset, including 7 city scenes with different characteristics.
- In this way, DAM distills the information of keypoints at the t-th moment, which is propagated to the next moment.
- Again, we use max pooling to form the spatial offset Δp_{t+1,j} ∈ R^{1×3} for updating keypoint p_{t+1,j}, whose c-th
- component is
- Δp_{t+1,j,c} = max(Δp'_{t+1,j,c,1}, ..., Δp'_{t+1,j,c,K}).  (12)
- We take the average of the updated keypoints p_{t+1,j} + Δp_{t+1,j} and the estimated keypoints p_{t+1,j} in place of the
- original ones to construct the VG representation at the (t+1)-th moment.
481
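A compact sketch of the directional aggregation in Eqs. (7)-(8) is given below; the batching over height-map locations is an illustrative assumption, not the exact implementation.

    import torch

    def directional_attention(G_c, U):
        # G_c: (P, C) geometric features G^c_{t,j} at P height-map locations.
        # U:   (N, K) VG representations U_{t,i} of N keypoints.
        # Eq. (7): Q_{t,j} = sum_i G^c_{t,j} U_{t,i}^T, one (C, K) map per location j.
        Q = torch.einsum('pc,nk->pck', G_c, U)       # sums over the keypoint index i
        # Eq. (8): max over the K directions keeps the strongest response per channel.
        return Q.max(dim=-1).values                  # Q'_{t,j} with shape (P, C)

The keypoint offsets of Eqs. (11)-(12) follow the same sum-then-max pattern, with the previous keypoints p_{t,i} taking the place of G^c_{t,j}.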
- 4.4. Training Details
482
- We use the L1 loss function for training the VGF-Net:
- L(M^{gt}_t, M^r_t) = Σ_{t=1}^{T} Σ_{j=1}^{H×W} |M^{gt}_{t,j} − M^r_{t,j}|,  (13)
- where M^{gt}_t ∈ R^{H×W} is the ground-truth height map. In practice, we select 8 pairs of RGB image and height map
- (T = 8) to construct each mini-batch for the standard SGD solver. Each RGB image is of size 224×224 and each height
- map is of size 20×20. The overall training set consists of nearly 24000 images randomly sampled in 3 scenes, while we
- test the model on 24000 samples sampled from the other 3 scenes. Details about the dataset can be found in Sec. 5. We
- train the network for 30 epochs, and use the final snapshot of network parameters for testing. The learning rate is set to
- 0.001 for the first 15 epochs, and decayed to 0.0001 for a more stable optimization.
- By default, the backbone of F^v and F^g is a ResNet-18, while the remaining F^c and F^r are two stacked 3×3
- convolutional layers with max-pooling and batch normalization.
- Note that it is our contribution to learn spatial offsets
506
- of 3D keypoints, without explicitly using any ground-truth
507
- data. This is done by modeling the computation of spatial
508
- offsets as a differentiable function with respect to the VG
509
- representation. In this way, we enable the end-to-end learn-
510
- ing of spatial offsets, where the related network parameters
511
- can be optimized by the back-propagated gradients. It sig-
512
- nificantly reduces the effort for data annotation, while al-
513
- lows the network training to be flexibly driven by data.
514
- When constructing the VG representation, we set the
515
- number of directions K= 16 for each keypoint, and the
516
- number of keypoints N= 50 at each moment. We remark
517
- that these hyper-parameters are chosen based on the valida-
518
- tion results.
519
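For reference, the mapping loss of Eq. (13) above reduces to a few lines per mini-batch; the variable names are illustrative.

    import torch

    def height_map_l1_loss(pred_maps, gt_maps):
        # pred_maps, gt_maps: (T, H, W) refined and ground-truth height maps,
        # with T = 8 image/height-map pairs per mini-batch, as in Eq. (13).
        return (pred_maps - gt_maps).abs().sum()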
- 5. Results and Discussion
520
- 5.1. Description of Experimental Dataset
521
- To promote the related research on drone navigation, we
522
- newly collect a 3D urban navigation dataset. This dataset
523
- contains 7 models of different city scenes (see Figure 4).
524
- Note that New York, Chicago, San Francisco, and Las Vegas are Google Earth models that we downloaded; they are
- similar to the real-world scenes in appearance, but most objects inside are buildings only. Shenzhen, Suzhou and
- Shanghai are manually built from maps by professional modelers, and contain rich 3D objects (e.g., buildings, trees,
- street lights and road signs) and other stuff (e.g., ground, sky and sea). There
532
- are various spatial configurations of objects, building styles
533
- and weather conditions in these 3D scenes. Thus, we pro-
534
- vide challenging data for evaluating the navigation system.
- Table 1: Statistics of our 3D urban navigation dataset. Note that in addition to buildings, there may also exist many other
- objects we must consider, such as trees, flower beds, and street lights, which highly increase the challenge for height mapping
- and autonomous navigation task.
- scene    area (km²)    objects (#)    model size (MB)    texture images (#)    texture size (MB)
538
- New York 7.4 744 86.4 762 122
539
- Chicago 24 1629 146 2277 227
540
- San Francisco 55 2801 225 2865 322
541
- Las Vegas 20 1408 108 1756 190
542
- Shenzhen 3 1126 50.3 199 72.5
543
- Suzhou 7 168 191 395 23.7
544
- Shanghai 37 6850 308 2285 220
545
- Table 2: Comparisons with different strategies of information fusion, in terms of the accuracy of height mapping (average L1
546
- error). We also show the accuracies (%) of predicting height values, with respect to different ranges of error (< 3 m, < 5 m and
- < 10 m). All strategies are evaluated on the testing (i.e., unknown and novel) scenes of San Francisco, Shenzhen and Chicago.
548
- method    average L1 error (m)    accuracy w.r.t. error ∈ [0, 3] m (%)
549
- San Francisco Shenzhen Chicago San Francisco Shenzhen Chicago
550
- w/o fusion 4.57 4.57 4.49 68.95% 68.02% 70.05%
551
- w/ fusion 2.37 2.93 3.41 85.09% 83.63% 78.44%
552
- w/ fusion and memory 2.81 3.44 4.02 79.86% 79.20% 72.86%
553
- w/ fusion, memory and exchange 2.35 3.04 3.80 80.54% 82.36% 74.73%
554
- full strategy 1.98 2.72 3.10 85.71% 86.13% 80.46%
555
- method    accuracy w.r.t. error ∈ [0, 5] m (%)    accuracy w.r.t. error ∈ [0, 10] m (%)
556
- San Francisco Shenzhen Chicago San Francisco Shenzhen Chicago
557
- w/o fusion 75.02% 74.08% 76.86% 83.96% 83.96% 85.71%
558
- w/ fusion 89.20% 87.39% 84.12% 93.87% 92.25% 91.18%
559
- w/ fusion and memory 86.35% 84.56% 80.36% 93.00% 91.31% 89.51%
560
- w/ fusion, memory and exchange 86.13% 86.43% 81.41% 93.33% 91.85% 89.94%
561
- full strategy 89.22% 88.90% 85.30% 94.10% 92.56% 91.67%
562
- The models are input to the renderer for producing sequences
563
- of RGB images. All RGB images and the associated 2.5D
564
- height maps are used to form a training set (i.e., New York,
565
- Las Vegas and Suzhou) and a testing set (i.e., San Francisco,
566
- Shenzhen, and Chicago). We provide more detailed statistics of the dataset in Table 1.
- To train our VGF-Net, which takes as input a rough im-
568
- perfect height map and outputs an accurate height map,
569
- we use 5 types of manipulations (i.e., translation, height
570
- increase/decrease, size dilation/contraction, creation and
571
- deletion) to disturb the object boundaries in the ground-
572
- truth height map. One time of the disturbance increases or decreases height values by 10 m in certain map locations.
- Figure 5: Illustration of disturbance manipulations. Actually, these manipulations can be combined to yield the
- disturbance results (e.g., translation and dilation). The bottom row of this figure shows the difference between height
- maps before/after disturbance. The residual map is learned by our VGF-Net, for recovering the disturbed height map to
- the undisturbed counterpart.
585
- See Figure 5 for an illustration of our manipulations.
586
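As an assumed minimal example of such a disturbance, the sketch below raises or lowers one rectangular region of the map by up to 10 m; the actual manipulation set in the paper (translation, dilation/contraction, creation, deletion) is richer.

    import torch

    def disturb_height_map(height_map, max_offset=10.0):
        # height_map: (H, W). Raise or lower one random rectangular region, mimicking
        # the height increase/decrease manipulation (one of the five used here).
        H, W = height_map.shape
        r0 = int(torch.randint(0, H - 2, (1,)))
        c0 = int(torch.randint(0, W - 2, (1,)))
        r1 = r0 + int(torch.randint(1, H - r0, (1,)))
        c1 = c0 + int(torch.randint(1, W - c0, (1,)))
        offset = (2 * torch.rand(1).item() - 1) * max_offset
        disturbed = height_map.clone()
        disturbed[r0:r1, c0:c1] = (disturbed[r0:r1, c0:c1] + offset).clamp(min=0)
        return disturbed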
- 5.2. Different Strategies of Information Fusion
587
- The residual update, VG representation and DAM are
588
- critical components of VGF-Net, defining the strategy of
589
- information fusion. Below, we conduct an internal study by
590
- removing these components, and examine the effect on the
591
- accuracy of height mapping (see Table 2).
592
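The quantities reported in Table 2 can be computed as below, reading "average L1 error" and "accuracy w.r.t. an error range" literally; the thresholds are those quoted in the table and the helper name is illustrative.

    import torch

    def height_map_metrics(pred, gt, thresholds=(3.0, 5.0, 10.0)):
        # pred, gt: (H, W) height maps in meters.
        err = (pred - gt).abs()
        metrics = {'avg_l1': err.mean().item()}
        for t in thresholds:
            metrics['acc@%gm' % t] = (err <= t).float().mean().item()
        return metrics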
- First, we report the performance using visual informa-
593
- tion only for height mapping, disabling any visual and geo-
594
- metric fusion. Here, the visual information is learned from
595
- RGB images (see the entries “w/o fusion” in Table 2). But
596
- visual information is insufficient for reconstructing height
597
- maps, which requires the modeling of geometric relation-
598
- ship between objects, yielding lower performances com-
599
- pared to other methods using geometric information.
600
- Next, we examine the efficiency of residual update strat-
601
- egy. At each moment, the residual update allows VGF-Net
602
- to reuse the mapping result produced earlier. This strategy,
603
- where the useful visual and geometric contents can be ef-
604
- fectively distilled and memorized at all moments, improves
605
- the reliability of height mapping. Thus, by removing the
606
- residual update (see the entries “w/ fusion” in Table 2) from
607
- VGF-Net (see the entries “full strategy”), we degrade the
608
- performance of height mapping.
609
- We further study the effect of VG representation on the
610
- performance. The VG representation can be regarded as an
611
- information linkage. It contains fused visual and geometric
612
- information, which is exchanged among objects. Without the VG representation, we use independent sets of convo-
613
- lutional layers to extract the visual and geometric represen-
614
- tations from the image and height map, respectively. The
615
- representations are simply concatenated for computing the
616
- residual update map (see the entries “w/ fusion and mem-
617
- ory” in Table 2). This manner successfully disconnects the
618
- communication between objects and leads to performance
619
- drops on almost all scenes, compared to our full strategy of
620
- information fusion.
621
- We find that the performance of using memory of height
622
- values lags behind the second method without using mem-
623
- ory (see the entries “w/ fusion” in Table 2). We explain that
624
- the information fusion with memory easily accumulates er-
625
- rors in the height map over time. Thus, it is critical to com-
626
- pute the VG representation based on the memorized infor-
627
- mation, enabling the information exchange between objects
628
- (see the entries “w/ fusion, memory and exchange”). Such
629
- exchange process provides richer object relationship to ef-
630
- fectively address the error accumulation problem, signifi-
631
- cantly assisting height mapping at each moment.
632
- Finally, we investigate the importance of DAM (see the
633
- entries “w/ fusion, memory and exchange” in Table 2). We
634
- solely remove DAM from the full model, by directly us-
635
- ing VG representations to compute the residual update map
636
- and spatial offsets for refining the height map and key-
637
- points. Compared to this fusion strategy, our full strategy
638
- with DAM provides a more effective way to adjust the im-
639
- pact of each keypoint along different directions. Therefore,
640
- our method achieves the best results on all testing scenes.
641
- 5.3. Sensitivity to the Quality of Height Map
642
- As demonstrated in the above experiment, iterative information fusion is important for achieving a more global
- understanding of the 3D scene and perfecting the height map estimation. During the iterative procedure, problematic
646
- height values may be memorized to make a negative im-
647
- pact on the production of height map at future moment. In
648
- this experiment, we investigate the sensitivity of different
649
- approaches to the quality of height maps, by controlling
650
- the percentage of height values that are dissimilar to the
651
- ground-truth height maps. Again, we produce dissimilar
652
- height maps by using disturbance manipulations to change
653
- the object boundaries.
654
- At each moment, the disturbed height map is input to
655
- the trained model to compute the new height map, which is
656
- compared to the ground-truth height map for calculating the
657
- average L1error. In Figure 6, we compare the average L1
658
- errors produced by 4 different information fusion strategies
659
- (i.e., see the entries “w/ fusion”, “w/ fusion and memory”,
660
- “w/ fusion, memory and exchange” and “full strategies” in
661
- Table 2), which learn geometric information from height
662
- maps. As we can see, heavier disturbances generally lead to
663
- the degradation of all strategies.
- Figure 6: We disturb the 2.5D height maps, which are used to examine the robustness of different information fusion
- approaches (curves: w/ fusion, w/ fusion and memory, w/ fusion, memory and exchange, and full strategy; x-axis:
- dissimilarity to GT from 20% to 80%; y-axis: error in meters). We evaluate different approaches on the testing sets of
- San Francisco, Shenzhen and Chicago. All results are reported in terms of L1 errors.
673
- Figure 7: The five indoor training scenes selected from the S3DIS dataset [1].
674
- Figure 8: The successful navigation trajectories produced by VGF-Net in a complicated indoor testing scene from the S3DIS
675
- dataset [1].
676
- The strategy “w/ fusion and memory” performs the worst
677
- among all approaches, showing very high sensitivity to the
678
- quality of height maps. This result further evidences our
679
- finding in Sec. 5.2, where we have shown the unreliabil-
680
- ity of the method with memory of height information but
681
- without information exchange. Compared to other meth-
682
- ods, our full strategy yields better results. Especially, given a very high percentage (80%) of incorrect height values, our
683
- full strategy outperforms other methods by remarkable mar-
684
- gins. These results clearly demonstrate the robustness of
685
- our strategy.
- Table 3: We compare VGF-Net with/without using depth to other methods. All methods are evaluated on the outdoor sets
686
- (i.e., San Francisco, Shenzhen and Chicago) and the indoor set (i.e., S3DIS). Results are reported in terms of the success
687
- rates of navigation.
688
- outdoor test    w/ depth    w/o depth
689
- ground-truth depth estimated depth [3] VGF-Net
690
- San Francisco 100% 27% 85%
691
- Shenzhen 100% 34% 83%
692
- Chicago 100% 19% 82%
693
- indoor test    w/ depth    w/o depth
694
- LSTM [10] CMP [10] VGF-Net LSTM [10] CMP [10] VGF-Net
695
- S3DIS 71.8% 78.3% 92% 53% 62.5% 76%
696
- 5.4. Comparison on the Navigation Task
697
- The quality of 2.5D height maps, which are estimated
698
- by the height mapping, largely determines the accuracy of
699
- drone navigation. In this experiment, we compare our VGF-
700
- Net to different mapping approaches. All methods are di-
701
- vided into two groups. In the first group, the approaches
702
- apply depth information for height mapping. Note that the
703
- depth information can be achieved by scanner [10], or esti-
704
- mated by deep network based on the RGB images [3]. The
705
- second group consists of approaches that only use RGB im-
706
- ages to reconstruct the height map. In addition to an ini-
707
- tial height map that can be easily obtained from various re-
708
- sources, our VGF-Net only requires image inputs, but can
709
- also accept depth information if available without changing
710
- any scheme architecture. We set the flying height of the drone to 10-30 m, evaluating the success rate of 3D navi-
- gation on our outdoor dataset. An excessive flying height (e.g., 100 m) always leads to successful navigation, making the evaluation
714
- meaningless. On the indoor dataset [1] (see also Figure 7
715
- and Figure 8) , we report the success rate of 2D drone navi-
716
- gation, by fixing the height of flight to 0.5 m. All results can
717
- be found in Table 3.
718
- Obviously, using accurate depth information can yield a
719
- perfect success rate of navigation (see the entry “ground-
720
- truth depth”). Here, the depth data is directly computed
721
- from the synthesized 3D urban scenes, without involving
722
- any noise. However, due to the limitation of hardware de-
723
- vice, it is difficult for the scanner to really capture the accu-
724
- rate depth data of outdoor scenes. A simple alternative is to
725
- use deep network to estimate the depth based on the RGB
726
- image (see the entry “estimated depth”). Depth estimation
727
- often produces erroneous depth values for the height map-
728
- ping, even with the most advanced method [3], thus severely
729
- Figure 9: Examples of height mapping. All the height maps are selected from the outdoor dataset. Here, we compare
- the height maps with noise (in the first column), predicted height maps (in the second column) and ground-truth height
- maps (in the last column).
- misleading the navigation process. Similar to depth infor-
734
- mation, the sparse 3D keypoints used in our approach also
735
- provide valuable geometry information of objects. More
736
- importantly, our VGF-Net uses visual cues to assist the
737
- learning of geometric representations. Therefore, our ap-
738
- proach without using depth produces better results than that
739
- of using depth estimated by state-of-the-art techniques. We
740
- have shown an example of trajectory for 3D drone naviga-
741
- tion in Figure 1. We also show examples of height mapping
742
- in Figure 9, where the height map with redundant boundary
743
- (see the first two rows of Figure 9) or missing boundary (see
744
- the last two rows of Figure 9) is input to the VGF-Net. Even
745
- given the input height maps with much noise, our network
746
- still precisely recovers the height information.
747
- Depth data of indoor scenes (see Figure 7) can be more
748
- easily achieved. With the available depth information, we
749
- can trivially input the RGB image along with the associated
750
- depth to the VGF-Net, producing the height map. We com-
751
- pare VGF-Net to the recent approach [10] (see the entries
752
- “LSTM ” and “CMP”) that produces state-of-the-art indoor
753
- navigation accuracies. Our method achieves a better result
754
- under the same condition of training and testing. Without
755
- depth, our approach still leads to the best result among all
756
- image based methods. It demonstrates the generality and
757
- ability of our approach, in terms of stably learning useful
758
- information from different data sources. In Figure 8, we
759
- show more navigation trajectories planned by our approach
760
- in an indoor testing scene.
761
- 6. Conclusions and Future Work
762
- The latest progress on drone navigation is largely driven by actively sensing and selecting useful visual and
- geometric information of the surrounding 3D scenes. In this pa-
765
- per, we have presented VGF-Net, where we fuse visual and
766
- geometric information for simultaneous drone navigation
767
- and height mapping. Our network distills the fused infor-
768
- mation, which is learned from the RGB image sequences
769
- and an initial rough height map, constructing a novel VG
770
- representation to better capture object/scene relation infor-
771
- mation. Based on the VG representation, we propose DAM
772
- to establish information exchange among objects and select
773
- essential object relationship in a data-driven fashion. By us-
774
- ing residual update strategy, DAM progressively refines the
775
- object boundaries in the 2.5D height map and the extracted
776
- 3D keypoints, showing its generality to various complicated
777
- outdoor/indoor scenes. The mapping module runs at nearly
778
- 0.2sec on a mobile GPU, which could be further optimized
779
- by compression and pruning in an embedded system.
780
- VGF-Net eventually outputs the residual update map and
781
- spatial offsets, which are used for explicitly updating the
782
- geometric information of objects (i.e., the 2.5D height map
783
- and 3D keypoints). It should be noted that we currently use
784
- convolutional layers to learn implicit representation from the fused information, and update the visual representa-
785
- tion. The visual content of the sequence of RGB image
786
- shows complex patterns, which together form the global ob-
787
- ject/scene relationship. However, these patterns may be ne-
788
- glected by the implicit representation during the learning
789
- process. Thus, in the near future, we would like to investi-
790
- gate a more controllable way to update the visual represen-
791
- tation. Additionally, complex occlusion relations in the real
792
- scenarios often lead to inaccurate height mappings in the oc-
793
- cluded areas. In the future, we would like to further utilize
794
- the uncertainty map of the environment, together with the
795
- multi-view information to improve both the accuracy and
796
- the efficiency of the mapping process. Moreover, since the
797
- geometric modeling (triangulation of sparse keypoints) is
798
- commonly involved in the optimization pipeline of SLAM,
799
- effectively coordinating the 3D keypoint detection and the
800
- height mapping would be quite interesting to explore.
801
- Acknowledgment
802
- We would like to thank the anonymous reviewers for
803
- their constructive comments. This work was supported
804
- in parts by NSFC Key Project (U2001206), Guangdong
805
- Outstanding Talent Program (2019JC05X328), Guangdong
806
- Science and Technology Program (2020A0505100064,
807
- 2018A030310441, 2015A030312015), DEGP Key Project
808
- (2018KZDXM058), Shenzhen Science and Technology
809
- Program (RCJC20200714114435012), and Guangdong
810
- Laboratory of Artificial Intelligence and Digital Economy
811
- (Shenzhen University).
812
- References
813
- [1] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis,
814
- M. Fischer, and S. Savarese. 3D semantic parsing of large-
815
- scale indoor spaces. In Proc. IEEE Conf. on Computer Vision
816
- & Pattern Recognition , pages 1534–1543, 2016. 9, 10
817
- [2] S. Bansal, V . Tolani, S. Gupta, J. Malik, and C. Tomlin. Com-
818
- bining optimal control and learning for visual navigation in
819
- novel environments. In Proc. Conf. on Robot Learning , vol-
820
- ume 100, pages 420–429, 2020. 2, 3
821
- [3] J. Bian, Z. Li, N. Wang, H. Zhan, C. Shen, M.-M. Cheng, and
822
- I. Reid. Unsupervised scale-consistent depth and ego-motion
823
- learning from monocular video. In Proc. of Advances in Neu-
824
- ral Information Processing Systems , pages 35–45, 2019. 2,
825
- 3, 10
826
- [4] G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and
827
- G. Drettakis. Depth synthesis and local warps for plau-
828
- sible image-based navigation. ACM Trans. on Graphics ,
829
- 32(3):30:1–30:12, 2013. 2
830
- [5] D. Chen, B. Zhou, V. Koltun, and P. Krähenbühl. Learning
831
- by cheating. In Proc. Conf. on Robot Learning , volume 100,
832
- pages 66–75, 2019. 2
833
- [6] K. Chen, J. P. de Vicente, G. Sepulveda, F. Xia, A. Soto,
834
- M. Vázquez, and S. Savarese. A behavioral approach to vi-
835
- sual navigation with graph localization networks. In Proc. of
836
- Robotics: Science and Systems , pages 1–10, 2019. 2, 3
- [7] J. Engel, V. Koltun, and D. Cremers. Direct sparse odome-
837
- try. IEEE Trans. Pattern Analysis & Machine Intelligence ,
838
- 40(3):611–625, 2017. 3
839
- [8] J. Engel, T. Schöps, and D. Cremers. Lsd-slam: Large-scale
840
- direct monocular slam. In Proc. Euro. Conf. on Computer
841
- Vision , pages 834–849, 2014. 3
842
- [9] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised
843
- monocular depth estimation with left-right consistency. In
844
- Proc. IEEE Conf. on Computer Vision & Pattern Recogni-
845
- tion, pages 270–279, 2017. 3
846
- [10] S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Ma-
847
- lik. Cognitive mapping and planning for visual navigation.
848
- InProc. IEEE Conf. on Computer Vision & Pattern Recog-
849
- nition , pages 2616–2625, 2017. 2, 3, 10, 11
850
- [11] J. F. Henriques and A. Vedaldi. Mapnet: An allocentric spa-
851
- tial memory for mapping environments. In Proc. IEEE Conf.
852
- on Computer Vision & Pattern Recognition , pages 8476–
853
- 8484, 2018. 2
854
- [12] J. Huang, S. Yang, T.-J. Mu, and S.-M. Hu. Clustervo: Clus-
855
- tering moving instances and estimating visual odometry for
856
- self and surroundings. In Proc. IEEE Conf. on Computer
857
- Vision & Pattern Recognition , pages 2165–2174, 2020. 3
858
- [13] J. Huang, S. Yang, Z. Zhao, Y .-K. Lai, and S.-M. Hu. Clus-
859
- terslam: A slam backend for simultaneous rigid body cluster-
860
- ing and motion estimation. In Proc. Int. Conf. on Computer
861
- Vision , pages 5874–5883, 2019. 3
862
- [14] D. K. Kim and T. Chen. Deep neural network for real-time
863
- autonomous indoor navigation. arXiv preprint:1511.04668 ,
864
- 2015. 2
865
- [15] S. Koenig and M. Likhachev. D* lite. In Proc. of Association
866
- for the Advancement of Artificial Intelligence , pages 476–
867
- 483, 2002. 2, 4
868
- [16] Y . Kuznietsov, J. Stuckler, and B. Leibe. Semi-supervised
869
- deep learning for monocular depth map prediction. In Proc.
870
- IEEE Conf. on Computer Vision & Pattern Recognition ,
871
- pages 6647–6655, 2017. 3
872
- [17] A. Loquercio, A. I. Maqueda, C. R. Del-Blanco, and
873
- D. Scaramuzza. Dronet: Learning to fly by driving. IEEE
874
- Robotics and Automation Letters , 3(2):1088–1095, 2018. 2
875
- [18] F. Ma and S. Karaman. Sparse-to-dense: Depth prediction
876
- from sparse depth samples and a single image. In Proc. IEEE
877
- Int. Conf. on Robotics & Automation , pages 1–8, 2018. 3
878
- [19] P. Mirowski, M. Grimes, M. Malinowski, K. M. Hermann,
879
- K. Anderson, D. Teplyashin, K. Simonyan, A. Zisserman,
880
- R. Hadsell, et al. Learning to navigate in cities without a
881
- map. In Proc. of Advances in Neural Information Processing
882
- Systems , pages 2419–2430, 2018. 2
883
- [20] R. Mur-Artal and J. D. Tardós. Orb-slam2: An open-source
884
- slam system for monocular, stereo, and rgb-d cameras. IEEE
885
- Trans. on Robotics , 33(5):1255–1262, 2017. 2, 4
886
- [21] R. P. Padhy, S. Verma, S. Ahmad, S. K. Choudhury, and P. K.
887
- Sa. Deep neural network for autonomous UA V navigation in
888
- indoor corridor environments. Procedia Computer Science ,
889
- 133:643–650, 2018. 2
890
- [22] N. Savinov, A. Dosovitskiy, and V . Koltun. Semi-parametric
891
- topological memory for navigation. In Proc. Int. Conf. on
892
- Learning Representations , pages 1–16, 2018. 2
- [23] L. Tai, G. Paolo, and M. Liu. Virtual-to-real deep rein-
893
- forcement learning: Continuous control of mobile robots for
894
- mapless navigation. In Proc. IEEE Int. Conf. on Intelligent
895
- Robots & Systems , pages 31–36, 2017. 2
896
- [24] M. Tatarchenko, S. R. Richter, R. Ranftl, Z. Li, V . Koltun,
897
- and T. Brox. What do single-view 3D reconstruction net-
898
- works learn? In Proc. IEEE Conf. on Computer Vision &
899
- Pattern Recognition , pages 3405–3414, 2019. 2
900
- [25] K. Tateno, F. Tombari, I. Laina, and N. Navab. Cnn-slam:
901
- Real-time dense monocular slam with learned depth predic-
902
- tion. In Proc. IEEE Conf. on Computer Vision & Pattern
903
- Recognition , pages 6243–6252, 2017. 3
904
- [26] C. Wang, J. Wang, Y . Shen, and X. Zhang. Autonomous nav-
905
- igation of uavs in large-scale complex environments: A deep
906
- reinforcement learning approach. IEEE Trans. on Vehicular
907
- Technology , 68(3):2124–2136, 2019. 2
908
- [27] R. Wang, M. Schworer, and D. Cremers. Stereo dso: Large-
909
- scale direct sparse visual odometry with stereo cameras.
910
- InProc. Int. Conf. on Computer Vision , pages 3903–3911,
911
- 2017. 3
912
- [28] W. Wang, W. Gao, and Z. Hu. Effectively modeling piece-
913
- wise planar urban scenes based on structure priors and cnn.
914
- Science China Information Sciences , 62:1869–1919, 2019. 3
915
- [29] Y . Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-
916
- Fei, and A. Farhadi. Target-driven visual navigation in in-
917
- door scenes using deep reinforcement learning. In Proc.
918
- IEEE Int. Conf. on Robotics & Automation , pages 3357–
919
- 3364, 2017. 2
 
txt/2104.04958.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2104.10602.txt DELETED
@@ -1,840 +0,0 @@
1
- Visualizing Adapted Knowledge in Domain Transfer
2
- Yunzhong Hou Liang Zheng
3
- Australian National University
4
- ffirstname.lastname [email protected]
5
- Abstract
6
- A source model trained on source data and a target
7
- model learned through unsupervised domain adaptation
8
- (UDA) usually encode different knowledge. To understand
9
- the adaptation process, we portray their knowledge dif-
10
- ference with image translation. Specifically, we feed a
11
- translated image and its original version to the two mod-
12
- els respectively, formulating two branches. Through up-
13
- dating the translated image, we force similar outputs from
14
- the two branches. When such requirements are met, dif-
15
- ferences between the two images can compensate for and
16
- hence represent the knowledge difference between models.
17
- To enforce similar outputs from the two branches and de-
18
- pict the adapted knowledge, we propose a source-free im-
19
- age translation method that generates source-style images
20
- using only target images and the two models. We visual-
21
- ize the adapted knowledge on several datasets with differ-
22
- ent UDA methods and find that generated images success-
23
- fully capture the style difference between the two domains.
24
- For application, we show that generated images enable fur-
25
- ther tuning of the target model without accessing source
26
- data. Code available at https://github.com/hou-
27
- yz/DA_visualization .
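To make the two-branch idea concrete, the following PyTorch-style sketch shows one generator update step. It assumes a frozen source model, a frozen target model and a trainable generator, and uses a plain KL-based knowledge distillation loss as a stand-in for the full objective described later.

    import torch
    import torch.nn.functional as F

    def sfit_step(generator, source_model, target_model, target_images, optimizer):
        # One branch: the frozen target model sees the original target images.
        # Other branch: the frozen source model sees the generated source-style images.
        translated = generator(target_images)
        with torch.no_grad():
            teacher_logits = target_model(target_images)
        student_logits = source_model(translated)
        loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                        F.softmax(teacher_logits, dim=1),
                        reduction='batchmean')       # knowledge distillation loss
        optimizer.zero_grad()
        loss.backward()                              # gradients only update the generator
        optimizer.step()
        return loss.item()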
28
- 1. Introduction
29
- Domain transfer or domain adaptation aims to bridge
30
- the distribution gap between source and target domains.
31
- Many existing works study the unsupervised domain adap-
32
- tation (UDA) problem, where the target domain is unla-
33
- beled [27, 6, 46, 1, 11]. In this process, we are interested
34
- in what knowledge neural networks learn and adapt.
35
- Essentially, we should visualize the knowledge differ-
36
- ence between models: a source model trained on the source
37
- domain, and a target model learned through UDA for the
38
- target domain. We aim to portray the knowledge difference
39
- with image generation. Given a translated image and its
40
- original version, we feed the two images to the source and
41
- the target model, respectively. It is desired that differences
42
- between image pairs can compensate for the knowledge dif-
43
- (a) Target images (real-world)
44
- (b) Generated source-style images
45
- (c) Unseen source images (synthetic)
46
- Figure 1: Visualization of adapted knowledge in unsuper-
47
- vised domain adaptation (UDA) on the VisDA dataset [38].
48
- To depict the knowledge difference, in our source-free im-
49
- age translation (SFIT) approach, we generate source-style
50
- images (b) from target images (a). Instead of accessing
51
- source images (c), the training process is guided entirely
52
- by the source and target models, so as to faithfully portray
53
- the knowledge difference between them.
54
- ference between models, leading to similar outputs from
55
- the two branches (two images fed to two different models).
56
- Achieving this, we could also say that the image pair repre-
57
- sent the knowledge difference.
58
- This visualization problem is very challenging and
59
- heretofore yet to be studied in the literature. It focuses on a
60
- relatively understudied field in transfer learning, where we
61
- distill knowledge differences from models and embed it in
62
- generated images . A related line of works, traditional image
63
- translation, generates images in the desired style utilizing
64
- content images and style images [7, 13, 48], and is applied
65
- in pixel-level alignment methods for UDA [26, 2, 44, 11].
66
- However, relying on images from both domains to indicate
67
- the style difference, such works cannot faithfully portray
68
- the knowledge difference between source and target models ,
69
- and are unable to help us understand the adaptation process.
70
- In this paper, we propose a source-free image translation
71
- (SFIT) approach, where we translate target images to the
72
- source style without using source images. The exclusion of
73
- source images prevents the system from relying on image
74
- pairs for style difference indication, and ensures that the
75
- system only learns from the two models . Specifically, we
76
- feed translated source-style images to the source model and
77
- original target images to the target model, and force similar
78
- outputs from these two branches by updating the generator
79
- network. To this end, we use the traditional knowledge dis-
80
- tillation loss and a novel relationship preserving loss, which
81
- maintains relative channel-wise relationships between fea-
82
- ture maps. We show that the proposed relationship preserv-
83
- ing loss also helps to bridge the domain gap while chang-
84
- ing the image style, further explaining the proposed method
85
- from a domain adaptation point of view. Some results of
86
- our method are shown in Fig. 1. We observe that even un-
87
- der the source-free setting, knowledge from the two models
88
- can still power the style transfer from the target style to the
89
- source style (SFIT decreases color saturation and whitens
90
- background to mimic the unseen source style).
91
- On several benchmarks [19, 36, 39, 38], we show that
92
- generated images from the proposed SFIT approach signifi-
93
- cantly decrease the performance gap between the two mod-
94
- els, suggesting a successful distillation of adapted knowl-
95
- edge. Moreover, we find SFIT transfers the image style
96
- at varying degrees, when we use different UDA methods
97
- on the same dataset. This further verifies that the SFIT
98
- visualizations are faithful to the models and that different
99
- UDA methods can address varying degrees of style differ-
100
- ences. For applications, we show that generated images can
101
- serve as an additional cue and enable further tuning of target
102
- models. This also falls into a demanding setting of UDA,
103
- source-free domain adaptation (SFDA) [17, 20, 24], where
104
- the system has no access to source images.
105
- 2. Related Work
106
- Domain adaptation aims to reduce the domain gap be-
107
- tween source and target domains. Feature-level distribution
108
- alignment is a popular strategy [27, 6, 46, 40]. Long et
109
- al. [27] use the maximum mean discrepancy (MMD) loss
110
- for this purpose. Tzeng et al. [46] propose an adversarial
111
- method, ADDA, with a loss function based on the gen-
112
- erative adversarial network (GAN). Pixel-level alignment
113
- with image translation is another popular choice in UDA
114
- [26, 2, 44, 42, 1, 11]. Hoffman et al . propose the Cy-
115
- CADA [11] method based on CycleGAN [48] image trans-
116
- lation. Other options are also investigated. Saito et al. [40] align the task-specific decision boundaries of two classi-
117
- fiers. Source-free domain adaptation (SFDA) does not use
118
- the source data and therefore greatly alleviates the privacy
119
- concerns in releasing the source dataset. As an early at-
120
- tempt, AdaBN [22] adapts the statistics of the batch normal-
121
- ization layers in the source CNN to the target domain. Li et
122
- al. [20] generate images with the same distribution of the
123
- target images and use them to fine-tune the classifier. Liang
124
- et al. [24] fine-tune a label smoothed [34] source model on
125
- the target images. To the authors' knowledge, there is still
- no visualization that can indicate what models
- learn during adaptation.
128
- Knowledge distillation transfers knowledge from a pre-
129
- trained teacher model to a student model [10], by maxi-
130
- mizing the mutual information between teacher outputs and
131
- student outputs. Some existing works consider the relation-
132
- ship between instance or pixels for better distillation per-
133
- formance [45, 23, 37]. Instead of distilling teacher knowl-
134
- edge on a given training dataset, data-free knowledge dis-
135
- tillation (DFKD) [30, 35, 3, 33, 8, 47] first generates train-
136
- ing data and then learns a student network on this gener-
137
- ated dataset. Training data can be generated by aligning
138
- feature statistics [30, 8, 47], enforcing high teacher confi-
139
- dence [30, 35, 3, 8, 47], and adversarial generation of hard
140
- examples for the student [33, 47]. In [8, 47], batch normal-
141
- ization statistics are matched as regularization. Our work,
142
- while also assuming no access to source images, differs sig-
143
- nificantly from these works in that our image translation
144
- has to portray the transferred knowledge, whereas data-free
145
- knowledge distillation just generates whatever images
- satisfy the teacher networks.
147
- Image translation renders the same content in a differ-
148
- ent artistic style. Some existing works adopt a GAN-based
149
- system for this task [26, 44, 14, 48, 11], while others use a
150
- pre-trained feature extractor for style transfer [7, 15, 32, 13].
151
- Zhu et al. adopt a cycle consistency loss in the image trans-
152
- lation loop to train the CycleGAN system [48]. Gatys
153
- et al . consider a content loss on high-level feature maps,
154
- and a style loss on feature map statistics for style transfer
155
- [7]. Huang and Belongie [13] propose a real-time AdaIN
156
- style transfer method by changing the statistics in instance
157
- normalization layers. Based on AdaIN, Karras et al. pro-
158
- pose StyleGAN for state-of-the-art image generation [16].
159
- Our work differs from traditional image translations in that
160
- rather than images from the two domains, only models from
161
- two domains are used to guide the image update.
162
- 3. Problem Formulation
163
- To achieve our goal, i.e., visualizing adapted knowledge
- in UDA, we translate an image x from a certain domain to
- a new image x̃. It is hoped that feeding the original image
166
- to its corresponding model (trained for that certain domain)
167
- and the generated image to the other model can minimize
168
- [Figure 2 diagram: target image x → generator → generated image x̃; the generated image feeds the source CNN and the target image feeds the target CNN; a shared classifier follows each CNN; the relationship preserving loss and knowledge distillation loss connect the two branches.]
- Figure 2: The proposed source-free image translation (SFIT) method for visualizing the adapted knowledge in UDA. The
175
- system includes two branches: original target images are fed to the target CNN, whereas generated source-style images are
176
- fed to the source CNN. We minimize the knowledge distillation loss and the relationship preserving loss, and update the
177
- generator network accordingly. If the two branches get similar results while adopting different models, then the difference
178
- between the original target image x and the generated source-style image x̃ should be able to mitigate and therefore exhibit
179
- the knowledge difference between models. Dashed lines indicate fixed network parameters.
180
- the output difference between these two branches. The up-
181
- date process is directed only by the source model f_S(·) and
- the target model f_T(·), and we prevent access to the images
183
- from the other domain to avoid distractions. We formulate
184
- the task of visualizing adapted knowledge as a function of
185
- the source model, the target model, and the image from a
186
- certain domain,
187
- G(f_S, f_T, x) → x̃    (1)
188
- In contrast, traditional image translation needs access to im-
189
- ages from both domains for content and style specification.
190
- In addition to the source image x_S and the target image x_T,
191
- traditional image translation also relies on certain neural
192
- network d(·) as the criterion. Instead of the source and tar-
193
- get models, ImageNet [4] pre-trained VGG [43] and adver-
194
- sarially trained discriminator networks are used for this task
195
- in style transfer [7, 13] and GAN-based methods [48, 11],
196
- respectively. Traditional image translation task can thus be
197
- formulated as,
198
- G(d, x_S, x_T) → x̃    (2)
199
- Comparing our goal in Eq. 1 and traditional image transla-
200
- tion in Eq. 2, we can see a clear gap between them. Tradi-
201
- tional image translation learns the style difference indicated
202
- byimages from both domains, whereas our goal is to learn
203
- to visualize the knowledge difference between the source
204
- and target models f_S(·) and f_T(·).
205
- 4. Method
206
- To investigate what neural networks learn in do-
207
- main adaptation, we propose source-free image translation
208
- (SFIT), a novel method that generates source-style images
209
- from original target images, so as to mitigate and represent
210
- the knowledge difference between models.
- 4.1. Overview
211
- Following many previous UDA works [6, 27, 46, 24], we
212
- assume that only the feature extractor CNN in the source
213
- model is adapted to the target domain. Given a source CNN
214
- fS()and a target CNN fT()sharing the same classifier
215
- p(), we train a generator g()for the SFIT task. We discuss
216
- why we choose this translation direction in Section 4.3. As
217
- the training process is source-free, for simplicity, we refer
218
- to the target image as x instead of x_T in what follows.
219
- As shown in Fig. 2, given a generated image x̃ = g(x),
- the source model outputs a feature map f_S(x̃) and a prob-
- ability distribution p(f_S(x̃)) over all C classes. To depict
222
- the adapted knowledge in the generated image, in addition
223
- to the traditional knowledge distillation loss, we introduce a
224
- novel relationship preserving loss, which maintains relative
225
- channel-wise relationships between the target-image-target-
226
- model feature map f_T(x) and the generated-image-source-
- model feature map f_S(x̃).
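To make the two-branch setup concrete, the following is a minimal PyTorch-style sketch rather than the authors' implementation: `generator`, `source_cnn`, `target_cnn`, and `classifier` are assumed stand-ins for g(·), f_S(·), f_T(·), and the shared classifier p(·), and the global average pooling before the classifier is an assumption.

```python
import torch

def sfit_forward(generator, source_cnn, target_cnn, classifier, x):
    """Two-branch pass: generated source-style image -> source CNN,
    original target image -> target CNN (illustrative sketch)."""
    x_gen = generator(x)                            # x~ = g(x)
    feat_src = source_cnn(x_gen)                    # f_S(x~): [B, D, H, W]
    feat_tgt = target_cnn(x)                        # f_T(x):  [B, D, H, W]
    # shared classifier p(.) applied to globally pooled features (assumed pooling)
    logits_src = classifier(feat_src.mean(dim=(2, 3)))
    logits_tgt = classifier(feat_tgt.mean(dim=(2, 3)))
    return feat_src, feat_tgt, logits_src, logits_tgt
```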
228
- 4.2. Loss Functions
229
- With a knowledge distillation loss L_KD and a relationship
- preserving loss L_RP, we have the overall loss function,
231
- L = L_KD + L_RP    (3)
232
- In the following sections, we detail the loss terms.
233
- Knowledge distillation loss. In the proposed source-
234
- free image translation method, portraying the adapted
235
- knowledge in the target model f_T(·) with source model
- and generator combined f_S(g(·)) can be regarded as a spe-
237
- cial case of knowledge distillation, where we aim to distill
238
- the adapted knowledge to the generator. In this case, we
239
- include a knowledge distillation loss between generated-
240
- image-source-model output p(f_S(x̃)) and target-image-
- target-model output p(f_T(x)),
242
- L_KD = D_KL( p(f_T(x)), p(f_S(x̃)) )    (4)
- where D_KL(·,·) denotes the Kullback-Leibler divergence.
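A minimal sketch of Eq. 4, assuming the two classifier outputs are available as logits and treating the target branch as a fixed teacher (function and variable names are illustrative):

```python
import torch.nn.functional as F

def knowledge_distillation_loss(logits_src_gen, logits_tgt):
    """Eq. 4 sketch: L_KD = D_KL(p(f_T(x)), p(f_S(x~))), with the
    target-image-target-model branch treated as the fixed teacher."""
    p_teacher = F.softmax(logits_tgt, dim=1).detach()
    log_p_student = F.log_softmax(logits_src_gen, dim=1)
    # F.kl_div expects log-probabilities for the input and probabilities for
    # the target, and computes KL(target || input), averaged over the batch
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```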
244
- Relationship preserving loss. Similar classification
245
- outputs indicate a successful depiction of the target model
246
- knowledge on the generated images. As we assume a fixed
247
- classifier for UDA, the global feature vectors from the tar-
248
- get image target CNN and the generated image source CNN
249
- should be similar after a successful knowledge distillation.
250
- Promoting similar channel-wise relationships between fea-
251
- ture maps f_T(x) and f_S(x̃) helps to achieve this goal.
252
- Previous knowledge distillation works preserve relative
253
- batch-wise or pixel-wise relationships [45, 23]. However,
254
- they are not suitable here for the following reasons. Relative
255
- batch-wise relationships can not effectively supervise the
256
- per-image generation task. Besides, the efficacy of pixel-
257
- wise relationship preservation can be overshadowed by the
258
- global pooling before the classifier. By contrast, channel-
259
- wise relationships are computed on a per-image basis, and
260
- are effective even after global pooling. As such, we choose
261
- the channel-wise relationship preserving loss that is com-
262
- puted in the following manner.
263
- Given feature maps f_T(x) and f_S(x̃), we first reshape them
- into feature vectors F_S and F_T,
- f_S(x̃) ∈ R^(D×H×W) → F_S ∈ R^(D×HW),
- f_T(x) ∈ R^(D×H×W) → F_T ∈ R^(D×HW),    (5)
267
- where D, H, and W are the feature map depth (number of
268
- channels), height, and width, respectively. Next, we calcu-
269
- late their channel-wise self correlations, or Gram matrices,
270
- G_S = F_S F_S^T,   G_T = F_T F_T^T    (6)
273
- where G_S, G_T ∈ R^(D×D). Like other similarity-preserving
274
- losses for knowledge distillation [45, 23], we then apply the
275
- row-wise L2 normalization,
- G̃_S[i,:] = G_S[i,:] / ||G_S[i,:]||_2,   G̃_T[i,:] = G_T[i,:] / ||G_T[i,:]||_2    (7)
279
- where [i,:] indicates the i-th row in a matrix. Finally, we
280
- define the relationship preserving loss as mean square error
281
- (MSE) between the normalized Gram matrices,
282
- L_RP = (1/D) · ||G̃_S − G̃_T||_F²    (8)
285
- where ||·||_F denotes the Frobenius norm (entry-wise L2
- norm of a matrix). In Section 4.3, we further discuss the rela-
287
- tionship preserving loss from the viewpoint of style transfer
288
- and domain adaptation, and show it can align feature map
289
- distributions in a similar way as style loss [7] for style trans-
290
- fer and MMD loss [27] for UDA, forcing the generator to
291
- portray the knowledge difference between the two models.
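The following is a compact sketch of Eqs. 5-8, assuming PyTorch tensors with an explicit batch dimension (batch handling and names are assumptions, not taken from the paper's code):

```python
import torch
import torch.nn.functional as F

def relationship_preserving_loss(feat_src_gen, feat_tgt):
    """Sketch of Eqs. 5-8. feat_src_gen = f_S(x~), feat_tgt = f_T(x),
    both of shape [B, D, H, W]."""
    B, D, H, W = feat_src_gen.shape
    F_S = feat_src_gen.reshape(B, D, H * W)          # Eq. 5: [B, D, HW]
    F_T = feat_tgt.reshape(B, D, H * W)
    G_S = torch.bmm(F_S, F_S.transpose(1, 2))        # Eq. 6: Gram matrices [B, D, D]
    G_T = torch.bmm(F_T, F_T.transpose(1, 2))
    G_S = F.normalize(G_S, p=2, dim=2)               # Eq. 7: row-wise L2 normalization
    G_T = F.normalize(G_T, p=2, dim=2)
    # Eq. 8: squared Frobenius norm of the difference, scaled by 1/D
    return ((G_S - G_T) ** 2).sum(dim=(1, 2)).div(D).mean()
```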
292
- (a) Relationship preserving loss
293
- (b) Traditional style loss
294
- Figure 3: Comparison between the proposed relationship
295
- preserving loss and the traditional style loss. In (a) and (b),
296
- given 256-dimensional feature maps, we show differences
297
- of row-wise normalized Gram matrix (Eq. 8) and original
298
- Gram matrix (Eq. 9). Deeper colors indicate larger dif-
299
- ferences and therefore stronger supervision. The proposed
300
- relationship preserving loss provides evenly distributed su-
301
- pervision for all channels, whereas the traditional style loss
302
- focuses primarily on several channels.
303
- 4.3. Discussions
304
- Why transfer target images to the source style. Ac-
305
- cording to the problem formulation in Eq. 1, we should be
306
- able to visualize the adapted knowledge by generating ei-
307
- ther source-style images from target images, or target-style
308
- images from source images. In this paper, we select the for-
309
- mer direction as it might be further applied in fine-tuning
310
- the target model (see Section 5.4 for application).
311
- Style transfer with the relationship preserving loss.
312
- The proposed relationship preserving loss can be regarded
313
- as a normalized version of the traditional style loss intro-
314
- duced by Gatys et al. [7],
315
- L_style = (1/D²) · ||G_S − G_T||_F²    (9)
318
- which computes MSE between Gram matrices.
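For comparison, a sketch of Eq. 9 on the same, unnormalized Gram matrices (reusing G_S, G_T as computed in the previous sketch; names are illustrative):

```python
def style_loss(G_S, G_T, D):
    """Eq. 9 sketch: L_style = (1/D^2) * ||G_S - G_T||_F^2 on the
    unnormalized Gram matrices of Eq. 6 (shape [B, D, D])."""
    return ((G_S - G_T) ** 2).sum(dim=(1, 2)).div(D ** 2).mean()
```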
319
- In the proposed relationship preserving loss (Eq. 8), in-
320
- stead of original Gram matrices, we use a row-wise normal-
321
- ized version. It focuses on relative relationships between
322
- channels, rather than absolute values of self correlations as
323
- in the traditional style loss. Preserving relative relation-
324
- ships provides more evenly-distributed supervision for all
325
- channels, instead of prioritizing several channels as in the
326
- traditional style loss (Fig. 3). Experiments show that this evenly-
327
- distributed supervision better preserves the foreground ob-
328
- ject and allows for easier training and higher performance,
329
- while also changing the image style (see Section 5.5).
330
- Distribution alignment with the relationship preserv-
331
- ing loss. As proved by Li et al. [21], the traditional style
332
- loss L_style is equivalent to the MMD loss [27] for UDA. We
333
- can also see the relationship preserving loss as a modified
334
- version of the MMD loss, which aligns the distribution of
335
- the generated image source CNN feature map f_S(x̃) to the
- target image target CNN feature map f_T(x).
337
- 5. Experiments
338
- 5.1. Datasets
339
- We visualize the knowledge difference between source
340
- and target models on the following datasets.
341
- Digits is a standard UDA benchmark that focuses on
342
- 10-class digit recognition. Specifically, we experiment on
343
- MNIST [19], USPS, and SVHN [36] datasets.
344
- Office-31 [39] is a standard benchmark for UDA that
345
- contains 31 classes from three distinct domains: Amazon
346
- (A), Webcam (W), and DSLR (D).
347
- VisDA [38] is a challenging large-scale UDA benchmark
348
- for domain adaptation from 12 classes of synthetic CAD
349
- model images to real-world images in COCO [25].
350
- 5.2. Implementation Details
351
- Source and target models. We adopt source and tar-
352
- get models from a recent SFDA work SHOT-IM [24] if not
353
- specified. SFDA is a special case of UDA, and it is even
354
- more interesting to see what machines learn in the absence
355
- of source data. We also include UDA methods DAN [27]
356
- and ADDA [46] for SFIT result comparisons. For network
357
- architectures, on digits dataset, following Long et al. [28],
358
- we choose a LeNet [18] classifier. On Office-31 and VisDA,
359
- we choose ResNet-50 and ResNet-101 [9], respectively.
360
- Generator for SFIT. We use a modified CycleGAN [48]
361
- architecture with 3 residual blocks due to memory concerns.
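The exact layer configuration of the generator is not given in the text, so the sketch below is only an illustrative ResNet-style generator with 3 residual blocks, not the authors' architecture:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)

class SFITGenerator(nn.Module):
    """CycleGAN-style generator with 3 residual blocks (illustrative only)."""
    def __init__(self, ch=64, n_blocks=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            *[ResidualBlock(ch) for _ in range(n_blocks)],
            nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh(),   # output range is an assumption
        )

    def forward(self, x):
        return self.net(x)
```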
362
- Training schemes. During training, we first initialize
363
- the generator as a transparent filter, which generates im-
364
- ages identical to the original input. To this end, we use the
- ID loss L_ID = ||x̃ − x||_1 and the content loss L_content =
- ||f_S(x̃) − f_S(x)||_2 to train the generator for initialization.
367
- The initialization performance is shown in Table 4, where
368
- we can see a mild 1.9% accuracy drop from original tar-
369
- get images. Then, we train the generator with the overall
370
- loss function in Eq. 3 for visualizing the adapted knowl-
371
- edge. Specifically, we use an Adam optimizer with a cosine
372
- decaying [31] learning rate starting from 3×10⁻⁴ and a
373
- batch size of 16. All experiments are finished using one
374
- RTX-2080Ti GPU.
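A sketch of the transparent-filter initialization and the optimizer setup described above; the loss normalization constants, `num_steps`, and all names are assumptions:

```python
import torch

def initialization_loss(generator, source_cnn, x):
    """Transparent-filter initialization sketch: L_ID = ||x~ - x||_1 and
    L_content = ||f_S(x~) - f_S(x)||_2 (normalization constants assumed)."""
    x_gen = generator(x)
    l_id = (x_gen - x).abs().mean()
    l_content = (source_cnn(x_gen) - source_cnn(x)).pow(2).mean()
    return l_id + l_content

# Assumed optimizer setup matching the text: Adam, lr = 3e-4 with cosine decay,
# batch size 16; num_steps is a placeholder for the total training iterations.
# optimizer = torch.optim.Adam(generator.parameters(), lr=3e-4)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
```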
375
- 5.3. Evaluation
376
- Recognition accuracy on generated images. To ex-
377
- amine whether the proposed SFIT method can depict the
378
- knowledge difference, in Table 1-3, we report recogni-
379
- tion results using the generated-image-source-model branch
380
- (referred to as "generated images"). On the digits dataset,
- Method SVHN→MNIST USPS→MNIST MNIST→USPS
381
- Source only [11] 67.1±0.6 69.6±3.8 82.2±0.8
- DAN [27] 71.1 - 81.1
- DANN [6] 73.8 73 85.1
- CDAN+E [28] 89.2 98.0 95.6
- CyCADA [11] 90.4±0.4 96.5±0.1 95.6±0.4
- MCD [40] 96.2±0.4 94.1±0.3 94.2±0.7
- GTA [41] 92.4±0.9 90.8±1.3 95.3±0.7
- 3C-GAN [20] 99.4±0.1 99.3±0.1 97.3±0.2
- Source model [24] 72.3±0.5 90.5±1.6 72.7±2.3
- Target model [24] 98.8±0.1 98.1±0.5 97.9±0.2
- Generated images 98.6±0.1 97.4±0.3 97.6±0.3
392
- Table 1: Classification accuracy (%) on digits datasets. In
393
- Table 1-3, “Generated images” refers to feeding images
394
- generated by SFIT to the source model.
395
- Method A→W D→W W→D A→D D→A W→A Avg.
396
- ResNet-50 [9] 68.4 96.7 99.3 68.9 62.5 60.7 76.1
397
- DAN [27] 80.5 97.1 99.6 78.6 63.6 62.8 80.4
398
- DANN [6] 82.6 96.9 99.3 81.5 68.4 67.5 82.7
399
- ADDA [46] 86.2 96.2 98.4 77.8 69.5 68.9 82.9
400
- JAN [29] 86.0 96.7 99.7 85.1 69.2 70.7 84.6
401
- CDAN+E [28] 94.1 98.6 100.0 92.9 71.0 69.3 87.7
402
- GTA [41] 89.5 97.9 99.8 87.7 72.8 71.4 86.5
403
- 3C-GAN [20] 93.7 98.5 99.8 92.7 75.3 77.8 89.6
404
- Source model [24] 76.9 95.6 98.5 80.3 60.6 63.4 79.2
405
- Target model [24] 90.8 98.4 99.9 88.8 73.6 71.7 87.2
406
- Generated images 89.1 98.1 99.9 87.3 69.8 68.7 85.5
407
- Fine-tuning 91.8 98.7 99.9 89.9 73.9 72.0 87.7
408
- Table 2: Classification accuracy (%) on the Office-31
409
- dataset. In Table 2 and Table 3, “Fine-tuning” refers to tar-
410
- get model fine-tuning result with both generated images and
411
- target images (see Section 5.4 for more details).
412
- Method plane bcycl bus car horse knife mcycl person plant sktbrd train truck per-class
413
- ResNet-101 [9] 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81.0 26.5 73.5 8.5 52.4
414
- DAN [27] 87.1 63.0 76.5 42.0 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1
415
- DANN [6] 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4
416
- JAN [29] 75.7 18.7 82.3 86.3 70.2 56.9 80.5 53.8 92.5 32.2 84.5 54.5 65.7
417
- ADDA [46] 88.8 65.7 85.6 53.1 74.9 96.2 83.3 70.7 75.9 26.4 83.9 32.4 69.7
418
- MCD [40] 87.0 60.9 83.7 64.0 88.9 79.6 84.7 76.9 88.6 40.3 83.0 25.8 71.9
419
- CDAN+E [28] 85.2 66.9 83.0 50.8 84.2 74.9 88.1 74.5 83.4 76.0 81.9 38.0 73.9
420
- SE [5] 95.9 87.4 85.2 58.6 96.2 95.7 90.6 80.0 94.8 90.8 88.4 47.9 84.3
421
- 3C-GAN [20] 94.8 73.4 68.8 74.8 93.1 95.4 88.6 84.7 89.1 84.7 83.5 48.1 81.6
422
- Source model [24] 58.3 17.6 54.2 69.9 64.4 5.5 82.2 30.7 62.2 24.6 86.2 6.0 46.8
423
- Target model [24] 92.5 84.7 81.3 54.6 90.5 94.7 80.9 79.1 90.8 81.5 87.9 50.1 80.7
424
- Generated images 88.9 65.8 83.0 61.7 88.5 76.8 89.5 69.6 91.4 51.9 84.3 34.3 73.8
425
- Fine-tuning 94.3 79.0 84.9 63.6 92.6 92.0 88.4 79.1 92.2 79.8 87.6 43.0 81.4
426
- Table 3: Classification accuracy (%) on the VisDA dataset.
427
- in terms of performance gaps, the knowledge differ-
428
- ences between source and target models are 26.5% on
429
- SVHN→MNIST, 7.6% on USPS→MNIST, and 25.2%
- on MNIST→USPS. Generated images from SFIT bridge
431
- these differences to 0.2%, 0.7%, and 0.3%, respectively.
432
- On the Office-31 dataset, the performance gap between the
433
- two models is 8.0% on average, and the generated images
434
- shrink this down to 1.7%. Notably, the performance drops
435
- from the target-image-target-model branch to the generated-
436
- image-source-model branch are especially pronounced on
437
- D→A and W→A, two settings that transfer Amazon im-
438
- ages with white or no background to real-world background
439
- (a) Target images (MNIST)
440
- (b) Generated source-style images
441
- (c) Unseen source images (SVHN)
442
- Figure 4: Results from the SFIT method on digits datasets
443
- SVHN→MNIST. In Fig. 1 and Fig. 4-6, we show in (a):
444
- target images, (b): generated source-style images, each of
445
- which corresponds to the target image above it, and (c):
446
- the unseen source images. For gray-scale target images
447
- from MNIST, our SFIT approach adds random RGB colors
448
- to mimic the full-color style in the unseen source (SVHN)
449
- without changing the content.
450
- (a) Target images (Webcam)
451
- (b) Generated source-style images
452
- (c) Unseen source images (Amazon)
453
- Figure 5: Results from the SFIT method on the Office-
454
- 31 dataset Amazon→Webcam. Our translation method
455
- whitens backgrounds while increasing contrast ratios of the
456
- object (Webcam) for more appealing appearances as in the
457
- online shopping images (Amazon).
458
- in Webcam or DSLR. In fact, in experiments we find gen-
459
- erating an overall consistent colored background is very de-
460
- manding, and the system usually generates a colored back-
461
- ground around the outline of the object. On the VisDA dataset, generated images bridge the performance gap from
462
- 33.9% to 6.9%, even under a more demanding setting and
463
- a larger domain gap going from real-world images to syn-
464
- thetic CAD model images. Overall, on all three datasets,
465
- generated images significantly mitigate the knowledge dif-
466
- ference in terms of performance gaps, indicating that the
467
- proposed SFIT method can successfully distill the adapted
468
- knowledge from the target model to the generated images.
469
- Visualization of source-free image translation results.
470
- For digits datasets SVHN→MNIST (Fig. 4), the generator
471
- learns to add RGB colors to the gray-scale MNIST (target)
472
- images, which mimics the full-color SVHN (source) im-
473
- ages. For Office-31 dataset Amazon→Webcam (Fig. 5), the
474
- generated images whiten the background, while having a
475
- white or no background rather than real-world background
476
- is one of the main characteristics of the Amazon (source)
477
- domain when compared to Webcam (target). Moreover,
478
- Amazon online shopping images also have higher contrast
479
- ratios for more appealing appearances, and our translated
480
- images also capture these characteristics, e.g., keys in the
481
- calculator, case of the desktop computer. For VisDA dataset
482
- SYN→REAL (Fig. 1 and Fig. 6), the generator learns to
483
- decrease the overall saturation of the real-world (target)
484
- objects which makes them more similar to the synthetic
485
- (source) scenario, while at the same time whitens the back-
486
- ground, e.g., horse, truck, and plane in Fig. 1, car and skate-
487
- board in Fig. 6, and brings out the green color in the plants.
488
- Overall, image generation results exhibit minimal content
489
- changes from target images, while successfully capturing
490
- the unseen source style.
491
- In terms of visual quality, it is noteworthy that generation
492
- results for digits datasets SVHN→MNIST contain colors
493
- and patterns that are not from the source domain, whereas
494
- our results on the Office-31 dataset and VisDA dataset are
495
- more consistent with the unseen source. Due to the lack
- of source images, SFIT, unlike traditional image translation
- approaches [7, 13, 11, 44], relies only on the source and
- target models, and portrays the adapted knowledge according
- to the two models. Since a weaker LeNet classifier is used
500
- for the digits dataset, it is easier to generate images that sat-
501
- isfy the proposed loss terms without requiring the generated
502
- images to perfectly mimic the source style. On Office-31
503
- and VisDA datasets, given stronger models like ResNet, it
504
- is harder to generate images that can satisfy the loss terms.
505
- Stricter restrictions and longer training time lead to gener-
506
- ation results more coherent with unseen source images that
507
- also have better visual quality.
508
- Visualization for different UDA methods. In Fig. 7,
509
- we show SFIT visualization results using different UDA
510
- methods. Given source and target domain, a traditional im-
511
- age translation method generates a certain type of images
512
- regardless of the UDA methods, indicating its incapabil-
513
- ity of presenting the knowledge difference between mod-
514
- (a) Target images (real-world)
515
- (b) Generated source-style images
516
- (c) Unseen source images (synthetic)
517
- Figure 6: Results from the SFIT method on the VisDA dataset SYN→REAL. Our translation method decreases the target
518
- (real-world) image saturation and whitens the background while keeping the semantics unchanged.
519
- (a)
520
- (b)
521
- (c)
522
- (d)
523
- Figure 7: SFIT results on VisDA dataset with different UDA
524
- methods. (a) Target images; (b) DAN [27]; (c) ADDA [46];
525
- (d) SHOT-IM [24].
526
- els. In contrast, the proposed SFIT method generates differ-
527
- ent images for different UDA methods. Specifically, when
528
- comparing visualization results of the adapted knowledge
529
- in DAN [27], ADDA [46], and SHOT-IM [24], we find
530
- stronger UDA methods can better transfer the target style
531
- to the unseen source style. As shown in Fig. 7, in terms
532
- of whitening the background for style transfer, SFIT results on ADDA are less coherent than SHOT-IM but better
533
- than DAN. This further verifies that our SFIT method in-
534
- deed visualizes the knowledge difference between models,
535
- and stronger adaptation methods can better endure the style
536
- difference (leading to larger knowledge difference and thus
537
- stronger style transfer results).
538
- 5.4. Application
539
- The generated images from SFIT allow for further tun-
540
- ing of the target model in SFDA systems, where no source
541
- image is available. We include a diversity loss on all train-
542
- ing samples to promote even class-wise distributions,
543
- L_div = −H( E_(x∼p_target(x)) [ p(f_T(x)) ] )    (10)
546
- where H(·) denotes the information entropy function. We
547
- also include a pseudo-label fine-tuning loss, applied when the pseudo
- label ŷ_S = argmax p(f_S(x̃)) from the generated-
- image-source-model branch equals the pseudo label
- ŷ_T = argmax p(f_T(x)) from the target-image-target-
- model branch. We then use this pseudo label ŷ = ŷ_S = ŷ_T
- to fine-tune the target model,
553
- L_pseudo = { H( p(f_T(x)), ŷ )  if ŷ = ŷ_S = ŷ_T;   0  otherwise }    (11)
556
- where H(·,·) denotes the cross entropy function. We com-
557
- bine these two loss terms in Eq. 10 and Eq. 11 to give an
558
- overall fine-tuning loss L_FT = L_div + L_pseudo.
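A sketch of the fine-tuning objective L_FT = L_div + L_pseudo, approximating the expectation in Eq. 10 with the current batch (an assumption) and applying Eq. 11 only to samples whose pseudo labels agree:

```python
import torch
import torch.nn.functional as F

def finetune_loss(logits_tgt, logits_src_gen):
    """Sketch of Eqs. 10-11. logits_tgt: target model on target images;
    logits_src_gen: source model on generated source-style images."""
    p_tgt = F.softmax(logits_tgt, dim=1)
    # Eq. 10: negative entropy of the batch-mean prediction (diversity loss)
    p_mean = p_tgt.mean(dim=0)
    l_div = (p_mean * torch.log(p_mean + 1e-8)).sum()
    # Eq. 11: cross entropy on samples where the two pseudo labels agree
    y_s = logits_src_gen.argmax(dim=1)
    y_t = logits_tgt.argmax(dim=1)
    agree = (y_s == y_t)
    l_pseudo = (F.cross_entropy(logits_tgt[agree], y_t[agree])
                if agree.any() else logits_tgt.new_zeros(()))
    return l_div + l_pseudo
```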
559
- (a)
560
- (b)
561
- (c)
562
- (d)
563
- Figure 8: Visualization results on VisDA dataset with dif-
564
- ferent distribution alignment methods. (a) Target images;
565
- (b) BN stats alignment [12]; (c) traditional style loss [7];
566
- (d) relationship preserving loss.
567
- As an additional cue, supervision from generated-image-
568
- source-model further boosts target model SFDA perfor-
569
- mance. On Office-31, fine-tuning brings a performance
570
- improvement of 0.4% according to Table 2. On VisDA,
571
- fine-tuning improves the target model accuracy by 0.7% as
572
- shown in Table 3. These improvements are statistically very
573
- significant (i.e., p-value < 0.001 over 5 runs), and introduce
574
- a real-world application for images generated by SFIT.
575
- 5.5. Comparison and Variant Study
576
- Comparison with the BatchNorm statistics alignment
577
- method [12]. Hou et al. propose to match the batch-wise
578
- feature map statistics so as to directly generate images that
579
- mimic the source style. Specifically, they explore the Batch-
580
- Norm (BN) statistics stored in the BN layers in the source
581
- model for style indication, and match them against that of
582
- the generated images. Using their approach, we can mildly
583
- change the image to the unseen source style (see Fig. 8) and
584
- slightly reduce the performance difference between the two
585
- branches (see Table 4). With that said, their lack of output
586
- alignments between the two branches (only supervisions
587
- from the source branch ) results in much lower quantita-
588
- tive performance and under-performing style transfer qual-
589
- ity when compared to the proposed method.
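A rough sketch of how such BN-statistics matching can be implemented; the hook-based collection of per-layer inputs is an assumption and this is not the code of [12]:

```python
import torch

def bn_stats_alignment_loss(bn_inputs):
    """Align batch statistics of generated-image feature maps with the
    running statistics stored in the source model's BatchNorm layers.
    'bn_inputs' is assumed to map each nn.BatchNorm2d layer to the
    feature map entering it (e.g., collected with forward hooks)."""
    loss = 0.0
    for bn, feat in bn_inputs.items():              # feat: [B, C, H, W]
        mu = feat.mean(dim=(0, 2, 3))
        var = feat.var(dim=(0, 2, 3), unbiased=False)
        loss = loss + (mu - bn.running_mean).pow(2).mean() \
                    + (var - bn.running_var).pow(2).mean()
    return loss
```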
590
- Effect of the knowledge distillation loss. The knowl-
591
- edge distillation loss transfers the adapted knowledge to the
592
- generated images, and the removal of it results in a 1.1%
593
- performance drop.
594
- Effect of the relationship preserving loss. As shown in
595
- Fig. 8, the traditional style loss can successfully transfer the
596
- target image to the source style on its own. However, using
- Variant L_KD L_RP accuracy (%)
597
- Target image - - 46.8
- Initialized g(·) - - 44.9
- BN stats alignment [12] - - 51.7
- w/o L_KD - ✓ 72.7
- w/o L_RP ✓ - 71.2
- L_RP→L_style ✓ L_style [7] 66.4
- L_RP→L_batch ✓ L_batch [45] 71.2
- L_RP→L_pixel ✓ L_pixel [23] 70.9
- SFIT ✓ ✓ 73.8
606
- Table 4: Variant study on VisDA dataset. “Initialized g(·)”
607
- refers to our transparent filter initialization in Section 5.2.
608
- it causes a 4.8% performance drop compared to the "w/o
- L_RP" variant (see Table 4), suggesting that it is unsuitable
610
- for SFIT. On the other hand, the batch-wise or pixel-wise
611
- relationship preserving variants [45, 23] are found not use-
612
- ful, as they fail to improve over the "w/o L_RP" variant.
613
- In contrast, the proposed channel-wise relationship pre-
614
- serving loss L_RP can effectively improve the recognition ac-
615
- curacy on the generated images, as the inclusion of it leads
616
- to a 2.6% performance increase. Moreover, as shown in
617
- Fig. 8, similar to the traditional style loss, using only the re-
618
- lationship preserving loss can also effectively transfer the
619
- target image to the unseen source style. Besides, focus-
620
- ing on the relative channel-wise relationship instead of the
621
- absolute correlation values, the proposed relationship pre-
622
- serving loss can better maintain the foreground object (less
623
- blurry and more prominent) while transferring the overall
624
- image style, leading to higher recognition accuracy.
625
- 6. Conclusion
626
- In this paper, we study the scientific problem of visu-
627
- alizing the adapted knowledge in UDA. Specifically, we
628
- propose a source-free image translation (SFIT) approach,
629
- which generates source-style images from original target
630
- images under the guidance of source and target models.
631
- Translated images on the source model achieve similar re-
632
- sults as target images on the target model, indicating a suc-
633
- cessful depiction of the adapted knowledge. Such images
634
- also exhibit the source style, and the extent of style trans-
635
- fer follows the performance of UDA methods, which fur-
636
- ther verifies that stronger UDA methods can better address
637
- the distribution difference between domains. We show that
638
- the generated images can be applied to fine-tune the target
639
- model, and might help other tasks like incremental learning.
640
- Acknowledgement
641
- This work was supported by the ARC Discovery Early
642
- Career Researcher Award (DE200101283) and the ARC
643
- Discovery Project (DP210102801).
644
- References
645
- [1] Konstantinos Bousmalis, Nathan Silberman, David Dohan,
646
- Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-
647
- level domain adaptation with generative adversarial net-
648
- works. In Proceedings of the IEEE conference on computer
649
- vision and pattern recognition , pages 3722–3731, 2017. 1, 2
650
- [2] Konstantinos Bousmalis, George Trigeorgis, Nathan Silber-
651
- man, Dilip Krishnan, and Dumitru Erhan. Domain separa-
652
- tion networks. In Advances in neural information processing
653
- systems , pages 343–351, 2016. 2
654
- [3] Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang,
655
- Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, and Qi
656
- Tian. Data-free learning of student networks. In Proceed-
657
- ings of the IEEE International Conference on Computer Vi-
658
- sion, pages 3514–3522, 2019. 2
659
- [4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei.
660
- ImageNet: A Large-Scale Hierarchical Image Database. In
661
- CVPR09 , 2009. 3
662
- [5] Geoff French, Michal Mackiewicz, and Mark Fisher. Self-
663
- ensembling for visual domain adaptation. In International
664
- Conference on Learning Representations , 2018. 5
665
- [6] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pas-
666
- cal Germain, Hugo Larochelle, Franc ¸ois Laviolette, Mario
667
- Marchand, and Victor Lempitsky. Domain-adversarial train-
668
- ing of neural networks. The Journal of Machine Learning
669
- Research , 17(1):2096–2030, 2016. 1, 2, 3, 5
670
- [7] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Im-
671
- age style transfer using convolutional neural networks. In
672
- Proceedings of the IEEE conference on computer vision and
673
- pattern recognition , pages 2414–2423, 2016. 1, 2, 3, 4, 6, 8
674
- [8] Matan Haroush, Itay Hubara, Elad Hoffer, and Daniel
675
- Soudry. The knowledge within: Methods for data-free model
676
- compression. arXiv preprint arXiv:1912.01274 , 2019. 2
677
- [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
678
- Deep residual learning for image recognition. In Proceed-
679
- ings of the IEEE conference on computer vision and pattern
680
- recognition , pages 770–778, 2016. 5
681
- [10] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distill-
682
- ing the knowledge in a neural network. arXiv preprint
683
- arXiv:1503.02531 , 2015. 2
684
- [11] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu,
685
- Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Dar-
686
- rell. CyCADA: Cycle-consistent adversarial domain adap-
687
- tation. In Jennifer Dy and Andreas Krause, editors, Pro-
688
- ceedings of the 35th International Conference on Machine
689
- Learning , volume 80 of Proceedings of Machine Learning
690
- Research, pages 1989–1998, Stockholmsmässan, Stockholm
691
- Sweden, 10–15 Jul 2018. PMLR. 1, 2, 3, 5, 6
692
- [12] Yunzhong Hou and Liang Zheng. Source free do-
693
- main adaptation with image translation. arXiv preprint
694
- arXiv:2008.07514 , 2020. 8
695
- [13] Xun Huang and Serge Belongie. Arbitrary style transfer in
696
- real-time with adaptive instance normalization. In Proceed-
697
- ings of the IEEE International Conference on Computer Vi-
698
- sion, pages 1501–1510, 2017. 1, 2, 3, 6
699
- [14] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A
700
- Efros. Image-to-image translation with conditional adver-sarial networks. In Proceedings of the IEEE conference on
701
- computer vision and pattern recognition , pages 1125–1134,
702
- 2017. 2
703
- [15] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual
704
- losses for real-time style transfer and super-resolution. In
705
- European conference on computer vision , pages 694–711.
706
- Springer, 2016. 2
707
- [16] Tero Karras, Samuli Laine, and Timo Aila. A style-based
708
- generator architecture for generative adversarial networks.
709
- InProceedings of the IEEE Conference on Computer Vision
710
- and Pattern Recognition , pages 4401–4410, 2019. 2
711
- [17] Jogendra Nath Kundu, Naveen Venkat, and R Venkatesh
712
- Babu. Universal source-free domain adaptation. arXiv
713
- preprint arXiv:2004.04393 , 2020. 2
714
- [18] Yann LeCun, L ´eon Bottou, Yoshua Bengio, and Patrick
715
- Haffner. Gradient-based learning applied to document recog-
716
- nition. Proceedings of the IEEE , 86(11):2278–2324, 1998.
717
- 5
718
- [19] Yann LeCun, Corinna Cortes, and CJ Burges. Mnist hand-
719
- written digit database. ATT Labs [Online]. Available:
720
- http://yann.lecun.com/exdb/mnist , 2, 2010. 2, 5
721
- [20] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and
722
- Si Wu. Model adaptation: Unsupervised domain adaptation
723
- without source data. In Proceedings of the IEEE/CVF Con-
724
- ference on Computer Vision and Pattern Recognition , pages
725
- 9641–9650, 2020. 2, 5
726
- [21] Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. De-
727
- mystifying neural style transfer. In Proceedings of the 26th
728
- International Joint Conference on Artificial Intelligence , IJ-
729
- CAI’17, page 2230–2236. AAAI Press, 2017. 4
730
- [22] Yanghao Li, Naiyan Wang, Jianping Shi, Xiaodi Hou, and
731
- Jiaying Liu. Adaptive batch normalization for practical do-
732
- main adaptation. Pattern Recognition , 80:109–117, 2018. 2
733
- [23] Zeqi Li, Ruowei Jiang, and Parham Aarabi. Semantic re-
734
- lation preserving knowledge distillation for image-to-image
735
- translation. In European conference on computer vision .
736
- Springer, 2020. 2, 4, 8
737
- [24] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need
738
- to access the source data? source hypothesis transfer for un-
739
- supervised domain adaptation. In International Conference
740
- on Machine Learning (ICML) , pages xx–xx, July 2020. 2, 3,
741
- 5, 7
742
- [25] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays,
743
- Pietro Perona, Deva Ramanan, Piotr Doll ´ar, and C Lawrence
744
- Zitnick. Microsoft coco: Common objects in context. In
745
- European conference on computer vision , pages 740–755.
746
- Springer, 2014. 5
747
- [26] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversar-
748
- ial networks. In Advances in neural information processing
749
- systems , pages 469–477, 2016. 2
750
- [27] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I
751
- Jordan. Learning transferable features with deep adaptation
752
- networks. arXiv preprint arXiv:1502.02791 , 2015. 1, 2, 3,
753
- 4, 5, 7
754
- [28] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and
755
- Michael I Jordan. Conditional adversarial domain adapta-
756
- tion. In Advances in Neural Information Processing Systems ,
757
- pages 1645–1655, 2018. 5
758
- 9[29] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I
759
- Jordan. Deep transfer learning with joint adaptation net-
760
- works. In International conference on machine learning ,
761
- pages 2208–2217. PMLR, 2017. 5
762
- [30] Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner.
763
- Data-free knowledge distillation for deep neural networks.
764
- arXiv preprint arXiv:1710.07535 , 2017. 2
765
- [31] Ilya Loshchilov and Frank Hutter. Sgdr: Stochas-
766
- tic gradient descent with warm restarts. arXiv preprint
767
- arXiv:1608.03983 , 2016. 5
768
- [32] Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala.
769
- Deep photo style transfer. In Proceedings of the IEEE Con-
770
- ference on Computer Vision and Pattern Recognition , pages
771
- 4990–4998, 2017. 2
772
- [33] Paul Micaelli and Amos J Storkey. Zero-shot knowledge
773
- transfer via adversarial belief matching. In Advances in
774
- Neural Information Processing Systems , pages 9547–9557,
775
- 2019. 2
776
- [34] Rafael M ¨uller, Simon Kornblith, and Geoffrey E Hinton.
777
- When does label smoothing help? In Advances in Neural
778
- Information Processing Systems , pages 4694–4703, 2019. 2
779
- [35] Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj,
780
- Venkatesh Babu Radhakrishnan, and Anirban Chakraborty.
781
- Zero-shot knowledge distillation in deep networks. In Ka-
782
- malika Chaudhuri and Ruslan Salakhutdinov, editors, Pro-
783
- ceedings of the 36th International Conference on Machine
784
- Learning , volume 97 of Proceedings of Machine Learning
785
- Research , pages 4743–4751, Long Beach, California, USA,
786
- 09–15 Jun 2019. PMLR. 2
787
- [36] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bis-
788
- sacco, Bo Wu, and Andrew Y . Ng. Reading digits in natural
789
- images with unsupervised feature learning. In NIPS Work-
790
- shop on Deep Learning and Unsupervised Feature Learning
791
- 2011 , 2011. 2, 5
792
- [37] Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Rela-
793
- tional knowledge distillation. In Proceedings of the IEEE
794
- Conference on Computer Vision and Pattern Recognition ,
795
- pages 3967–3976, 2019. 2
796
- [38] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman,
797
- Dequan Wang, and Kate Saenko. Visda: The visual domain
798
- adaptation challenge, 2017. 1, 2, 5
799
- [39] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Dar-
800
- rell. Adapting visual category models to new domains. In
801
- European conference on computer vision , pages 213–226.
802
- Springer, 2010. 2, 5
803
- [40] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tat-
804
- suya Harada. Maximum classifier discrepancy for unsuper-
805
- vised domain adaptation. In Proceedings of the IEEE Con-
806
- ference on Computer Vision and Pattern Recognition , pages
807
- 3723–3732, 2018. 2, 5
808
- [41] Swami Sankaranarayanan, Yogesh Balaji, Carlos D Castillo,
809
- and Rama Chellappa. Generate to adapt: Aligning domains
810
- using generative adversarial networks. In Proceedings of the
811
- IEEE Conference on Computer Vision and Pattern Recogni-
812
- tion, pages 8503–8512, 2018. 5
813
- [42] Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua
814
- Susskind, Wenda Wang, and Russell Webb. Learningfrom simulated and unsupervised images through adversarial
815
- training. In Proceedings of the IEEE conference on computer
816
- vision and pattern recognition , pages 2107–2116, 2017. 2
817
- [43] Karen Simonyan and Andrew Zisserman. Very deep convo-
818
- lutional networks for large-scale image recognition. arXiv
819
- preprint arXiv:1409.1556 , 2014. 3
820
- [44] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised
821
- cross-domain image generation. In ICLR , 2017. 2, 6
822
- [45] Frederick Tung and Greg Mori. Similarity-preserving knowl-
823
- edge distillation. In Proceedings of the IEEE International
824
- Conference on Computer Vision , pages 1365–1374, 2019. 2,
825
- 4, 8
826
- [46] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Dar-
827
- rell. Adversarial discriminative domain adaptation. In Pro-
828
- ceedings of the IEEE Conference on Computer Vision and
829
- Pattern Recognition , pages 7167–7176, 2017. 1, 2, 3, 5, 7
830
- [47] Hongxu Yin, Pavlo Molchanov, Jose M. Alvarez, Zhizhong
831
- Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz.
832
- Dreaming to distill: Data-free knowledge transfer via deep-
833
- inversion. In The IEEE/CVF Conf. Computer Vision and Pat-
834
- tern Recognition (CVPR) , 2020. 2
835
- [48] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A
836
- Efros. Unpaired image-to-image translation using cycle-
837
- consistent adversarial networks. In Proceedings of the IEEE
838
- international conference on computer vision , pages 2223–
839
- 2232, 2017. 1, 2, 3, 5
840
- 10
txt/2105.00696.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2105.06948.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2105.11519.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2105.11780.txt DELETED
@@ -1,644 +0,0 @@
1
- Look Inside. Predicting Stock Prices by Analysing an Enterprise
2
- Intranet Social Network and Using Word Co-Occurrence
3
- Networks
4
-
5
- Fronzetti Colladon, A., & Scettri, G.
6
-
7
-
8
- This is the accepted manuscript after the review process, but prior to final layout
9
- and copyediting. Please cite as:
10
-
11
- Fronzetti Colladon, A., & Scettri, G. (2019). Look Inside. Predicting Stock
12
- Prices by Analysing an Enterprise Intranet Social Network and Using Word Co-
- Occurrence Networks. International Journal of Entrepreneurship and Small
- Business, 36(4), 378-391. https://dx.doi.org/10.1504/IJESB.2019.098986
15
-
16
- This work is licensed under the Creative Commons Attribution -
17
- NonCommercial -NoDerivatives 4.0 International License. To view a copy of
18
- this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ or send a
- letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
20
- Look Inside. Predicting Stock Prices by Analysing an
21
- Enterprise Intranet Social Network and Using Word Co -
22
- Occurrence Networks
23
-
24
- Fronzetti Colladon , A., & Scettri, G.
25
-
26
-
27
-
28
-
29
- Abstract
30
- This study looks into the employees' communication behaviours taking place in an intranet social
- network, offering novel metrics of Semantic and Social Network Analysis, which can help predict a
- company stock price. To this purpose, we studied the intranet forum of a large Italian company,
- exploring the interactions and the use of language of about 8,000 employees. We analysed more
- than 48,000 news and comments, over a period of 94 weeks. In addition to using more traditional
- semantic and social network metrics, we built a network linking words included in the general
- discourse. In this network, we focused on the position of the node representing the company brand.
- We found that a lower sentiment of the language used, a higher betweenness centrality of the
- company brand, a denser word co-occurrence network and more equally distributed centrality
- scores of employees (lower group betweenness centrality) are all significant predictors of higher
- stock prices. Our findings contribute to the stream of research concerned with the prediction of
- stock prices, offering new metrics that can be helpful for scholars, company managers and
- professional investors and could be integrated into existing forecasting models to improve their
- accuracy. We also show the importance of looking at internal communication streams while
- analysing a company's financial performance. Lastly, we contribute to the research on word co-
- occurrence networks by extending their field of application.
46
- 2
47
- Keywords: stock price; economic forecasting; intranet; social network; web forum; semantic
48
- analysis; word co-occurrence network.
49
-
50
- 1. Introduction
51
- The question whether stock market prices are predictable is frequent in the literature, especially after
- financial crises (Cowles, 1933; Bollerslev and Ole Mikkelsen, 1996; Barro, 2015). In recent years,
- new approaches were presented, comprising the analysis of complex systems and data mining and
- machine learning techniques (Kuo, Chen and Hwang, 2001; Sornette, 2003). With the complex
- system approach it is possible to model the influence of the various agents that operate in a market
- (investors, companies, banks, nations, etc.), investigating their reciprocal interactions (Grimm,
- 2005). Data mining offers the possibility to analyse large volumes of data, retrieved from different
- sources, such as stock indices, social media or other online platforms. Nowadays researchers can
- take advantage of datasets never seen before, in which interesting information can be hidden.
60
- In this paper, we combine methods and tools from Social Network and Semantic Analysis
61
- (Wasserman and Faust, 1994; Aggarwal and Zhai, 2013) , while investigating the interactions taking
62
- place in the intranet forum of a large Italian company. We study the interaction patterns of the
63
- company employees with regard to their activity levels, network positions an d use of language ,
64
- while posting news and comments . Additionally, we pro pose a novel method to transform the
65
- general discourse into a network of words (Danowski, 2009) , thus analys ing the positions and
66
- fluctuations of the company brand in that network. Our scope is to understand if there are hidden
67
- associations between th ese patterns and the company stock price at specific time lags . In this
68
- regard, we offer a novel contribut ion since we use a new information source – which, to the extent
69
- of our knowledge, has never been used for the same exact purpose – and study original metrics
70
- obtained from the combination of semantic and social network analysis . In other words, the
71
- objective of this paper is to extract new variables fr om the analysis of the communication taking 3
72
- place in a large intranet social network , show ing the value of these new predictors for the
73
- forecasting of stock prices : we investigate if it is possible to look at employees’ communication
74
- behaviours to better predict the firm market value . This research also offers an additional
75
- contribut ion to increase the knowledge about possible use s of word co -occurrence network s.
76
-
77
- 2. Network and Semantic Analysis for Stock Market Predictions
78
- The use of network theory to analyse stock market trend s evolved around two main approaches:
79
- empirical and theoretical. T he first is centred on the experimental way s by which network s are
80
- created, as in the work of Sun et al. (2015) , who connect ed stock prices with other market attributes
81
- (e.g., trading volume s or net return s). The t heoretical approach , on the other hand, is focused on the
82
- evaluation of the problem in a general form : given some mathematical conditions related to the
83
- network, the aim is to demonstra te the existence and uniqueness of a solution (Barucca et al. , 2016) .
84
- Studies applying network theor ies were successfully implemented to analyse the behaviour of
85
- Brazilian (Tabak et al. , 2009) , Chinese (Huang, Zhuang and Yao, 2009) and Indian (Pan and Sinha,
86
- 2007) stock marke ts. Tabak et al. (2009) tried to figure out if the Brazilian stock price returns
87
- present ed a power law distribution . Their result s show ed that, for most of the study period , a power
88
- law distribution is not representative of the phenomenon. This suggests that the dynamics of stock
89
- prices may change abruptly , being more complicated than a power law distribution, especially when
90
- critical events occur . Huang and colleagues (2009) used a threshold method to build a social
91
- network considering the correlations among 1080 stocks in the Chinese market and their daily
92
- prices for about four years. Testing the topological stability of the network, they found it to be
93
- robust against random vertex failures, but fragile against targeted attack s; in this way they offered
94
- insights for risk and portfolio management . Similarly, Pan and Sinha (2007) analysed the cross -
95
- correlation matrix of stock price s fluctuations in the Indian National Stock Exchange. This market
96
- exhibit ed strong er correlations when compared to more mature markets , such as the New York 4
97
- Stock Exchange . Pan and Sinha (2007) showed that the existence of an internal structure made of
98
- multiple connected groups is a possible indicator of market development.
99
- Other authors addressed the problem of interdependence between stock markets (Morana and
100
- Beltratti, 2008; Sedik and Williams, 2011) , suggesting the inclusion of control indices while
101
- making stock price predictions. In particular, Sedik and Williams (2011) used a GARCH model to
102
- study the influence of the volatility of U.S. and regional equity markets on the conditional volatility
103
- of stock prices in the Gulf Cooperation Council’s m arket. Morana and Beltratti (2008) , showed a
104
- link among the variance s of the stock market indices of US, UK, Japan and Germany .
105
- Recently , the improvement of sentiment analysis and other social media related research , offered
106
- new possibilities, combining stock market predictions with metrics extracted from online platforms ,
107
- such as Twitter or Facebook (Zhang, Fuehres and Gloor, 2011; Chen and Lazer, 2013; Makrehchi,
108
- Shah and Liao, 2013) . Measuring the sentiment of the conversations about a specific company,
109
- scholars proved the influence of positive and negative feelings on stock market prices (Chung and
110
- Liu, 2011; Elshendy et al. , 2017) . To this purpose, Khadjeh et al. (2014) and Schumaker and Chen
111
- (2009) used sentiment analysis combined with linear regression models and support vector
112
- machines . In the former study, authors used the text of breaking financial new s-headlines to
113
- forecast currency movements in the foreign exchange market . In the latter study, autho rs succeeded
114
- in partially predict ing future stock price s twenty minutes after a financial article was released : they
115
- used several different textual representations – such as Bag of Words, Noun Phrases (Caropreso and
116
- Matwin, 2006) and Named Entities (Diesner and Carley, 2005 ). Zhang et al. (2011) analysed
117
- Twitter with a similar purpose, tagging tweets according to feelings of fear, general concern and
118
- hope ; they found a negative correlation between the sentiment trend of the tagging variables and the
119
- Dow Jones, NASDAQ and S&P 500 . Antweiler and Frank (2004) studied the influence of messages
120
- in Yahoo!Finance over a set of 45 stock market prices of private companies . Similarly, Xie et al.
121
- (2013) crawled Yahoo!Finance and developed a tree representation for word s in sentences, which
122
- performed significantly better than approaches based on bag-of-words in predicting the polarity of 5
123
- stock price s trends . A different approach for the sentiment analysis of tweets is to use non-
124
- parametric topic mode lling algorithms , as proposed by Si et al. (2013) .
125
- In the paper of Bollen Mao and Zeng (2011) the prediction of the Dow Jones Industrial Average
126
- was done by analysing the text content of daily Twitter feeds by means of two mood -tracking tools :
127
- OpinionFinder – which tracks positive and negative mood s – and Google Profile of Mood States
128
- (GPOMS) , which measure s language mood in term of six dimensions (Calm, Alert, Sure, Vital,
129
- Kind, and Happy) . Rechenthin et al. (2013) tried to understand if there were financial agents that
130
- could manipulate sentiment trend s, to influence stock market price s posting specific, either positive
131
- or negative, messages on specialized financial website s. Nguyen et al. (2014) analyzed the main
132
- topics and sentiment of eighteen Yahoo!Finance message boards for eighteen stocks . On message
133
- boards, users discussed company news, fa cts or comments (often negative) about specific company
134
- events, and personal forecasts. Analysing such platforms can be helpful, also because users have the
135
- possibility to annotate message with tags (e.g., Strong Buy, Buy, Hold, Sell and Strong Sell).
136
- Auth ors proved that adding sentiment analysis to models based on historical prices trends can
137
- increase the predictive power of such models.
138
- Yang et al. (2015) demonstrated the existence of Twitter ’s financial communities – which presented
139
- a small -world structure – inferred studying friend -following relationship s and user profile s,
140
- including language preference s, location s, account creation date s and time zone s. Looking at the
141
- sentiment of the tweets sent by people in the most central network positions, the authors could
142
- predict financial market indices.
143
- Lastly, other techniques were also implemented with the aim of improving stock prices prediction s,
144
- such as a rtificial neural network s. Patel and Yalamalle (2014) , for example, achieved good results
145
- using neural networks to predict stock price s in India.
146
- Semantic and social network analysis of the communication of employees proved their value for
147
- several purposes, such as predicting turnover intentions (Gloor et al., 2017 a), improving customer 6
148
- satisfaction (Gloor and Giacomelli, 2014; Gloor et al., 2017 b), or promoting innova tion within, and
149
- across, organis ational boundaries (Wright and Dana, 2003; Dana, Etemad and Wright, 2008; Allen
150
- et al., 2016) . These analyses , which also stress ed the importance of looking at the communication
151
- styles and interactions taking place within the organizational boundaries, were often carried out
152
- using traditional surveys or exploring e -mail networks.
153
- Starting from the insights that emerge from past research, we developed t he present study with the
154
- idea of offering a triple contribution: first ly, we present semantic and social network metrics that
155
- can be integrated in existing financial models to increase their forecasting accuracy; second ly, we
156
- give evidence to the value of transforming text data into words of networks, to extend the
157
- informative power of traditional semantic analysis; third ly, we show how the market value of a firm
158
- can be at least partially inferred by looking at the internal interactions among employees, when
159
- considering an intranet social network.
160
- An intranet is a private network based on web protocols , belonging to an organization, and usually
161
- accessible only by the organization's members. The websites and software applications of an
162
- intranet look and act just like any others, but the firewall surrounding an intranet fends off
163
- unauthorized access and use (Beal, 2017) . An intranet forum is a social network among all the
164
- people inside a company, in which employees can exchange text messages, share news and
165
- comments, or multimedia files.
166
- Enterprise intranets proved to offer important insights to business managers (Eppler, 2001), also
167
- being a valuable tool to promote knowledge sharing (Hollingshead, Fulk and Monge, 2002),
168
- facilitate HR activities and assess the internal mood (Sulaiman, Zailani and Ramayah, 2012) . From
169
- an intranet social network it is often possible to extract knowledge maps (Eppler, 2001) . These are
170
- graphs that provide visual information about knowledge sources and help evaluating strengths and
171
- weaknesses of knowledge-related assets. Intranets are useful also from a Human Resource point of
172
- view, to understand roles and skills within the organization, current activities and possessed
173
- knowledge (DiMicco et al., 2009).
174
- To the extent of our knowledge, there is no research trying to combine words of networks with
175
- semantic and social network analysis of an intranet social network, with the aim of forecasting the
176
- stock price of a company.
177
-
178
- 2.1. Words and Networks
179
- In this paper, we propose the analysis of a word co -occurrence network , as an addition to traditional
180
- semantic analysis. There are several studies in this field, for example Diesner and Carley (2005)
181
- showed how to detect social structures through text analysis: by analysing major newspapers they
182
- managed to discover the social structure of covert networks – terrorist groups operating in the West
183
- Bank.
184
- Networks of words can be built in different ways, such as the analysis of word co-occurrences in
185
- single sentences or text excerpts (Dagan, Marcus and Markovitch, 1995; Bullinaria and Levy,
186
- 2012) . Centering resonance analysis was also proposed as a method for creating a network from a
187
- text by analysing its centres (Corman et al. , 2002) . Other scholars used techniques based on
188
- hypertexts (Trigg and Weiser, 1986) or semantic webs (Tim and Berners-Lee, 2006; van Atteveldt,
189
- 2008) focusing on specific words and links between online pages.
190
- In our explorative study, we created a word co-occurrence network, considering the messages
191
- extracted from the intranet forum of a large Italian company and the co-occurrence of these words.
192
- We investigated the position in the graph of the company brand – i.e. its centrality measures
193
- (Freeman, 1978) – to better predict the company stock price (together with the use of more
194
- conventional social network and semantic variables) .
195
-
196
- 3. Case Study
197
- In our case study, we were able to fully crawl the intranet forum of a large Italian company , with
198
- more than 50,000 employees, out of which about 8,000 were actively participating in the online
199
- discussions. As per agreed privacy arrangements, we are prohibited from revealing more details
200
- about the company.
201
- In the intranet forum, employees were allowed to post news and to comment on their own posts or
202
- those of others, with no restrictions or pre -approval s from moderators. Starting from the analysis of
203
- these interaction patterns, we were able to create a first Interaction Network , with n nodes
204
- (employees) and m directed arcs (posts) . In this network, there is an arc originating at the node i and
205
- terminating at the node j, if the social actor i answers to a comment or a news item of the actor j. This
206
- network – built considering the news and comments posted between September 2014 and June
207
- 2016 , for a total of 94 weeks – comprises 8320 nodes and 48020 links.
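The construction of such a network can be sketched as follows (an illustrative example, not the original code; the file and column names are hypothetical):

import pandas as pd
import networkx as nx

# Hypothetical export of the forum: one row per news item or comment, with the
# author and, for comments, the author of the message being answered.
posts = pd.read_csv("forum_posts.csv")

G = nx.MultiDiGraph()          # parallel arcs keep one link per answer
for _, row in posts.iterrows():
    if pd.notna(row["replies_to_author"]):
        # arc i -> j: employee i answers a news item or a comment of employee j
        G.add_edge(row["author"], row["replies_to_author"])

print(G.number_of_nodes(), G.number_of_edges())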
208
- In addition to studying the interactions among people working in the company, we also analysed the
209
- relationships among the words used in the posts. In particular, we transformed the text of news and
210
- comments into a Word Network, based on word co-occurrences (Dagan, Marcus and Markovitch,
211
- 1995; Bullinaria and Levy, 2012). In this network, nodes are representative of single words and arcs
212
- connect words that co-occur in the text, either considering the words before, or after a specific one,
213
- with a maximum distance of seven words. In other words, for each co-occurrence within
214
- this range we created a link going from one word to the other, with a directionality which respects
215
- the order of appearance in the text. To give an example, if a post is “Hello Dolly”, then we would
216
- have two nodes – “Hello” and “Dolly” – with an arc originating at the first node and terminating at
217
- the second one. Before building the word network, we corrected or removed the misspelled words
218
- and removed the stop words - i.e. most common words in the language which usually do not contain
219
- important significance - using the Python NLTK library (Perkins, 2014) . The resulting network is
220
- made of about 16,000 words and more than 6,000,000 links.
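A possible sketch of this construction with the NLTK and networkx libraries (an illustration under assumed tokenization choices, not the authors' code; the message list is hypothetical):

import networkx as nx
from nltk.corpus import stopwords        # may require nltk.download('stopwords')
from nltk.tokenize import word_tokenize  # may require nltk.download('punkt')

WINDOW = 7
stops = set(stopwords.words("italian"))

def add_message(graph, text):
    tokens = [t.lower() for t in word_tokenize(text)
              if t.isalpha() and t.lower() not in stops]
    for pos, word in enumerate(tokens):
        # link each word to the words that follow it within the window,
        # preserving the order of appearance in the text
        for other in tokens[pos + 1: pos + 1 + WINDOW]:
            graph.add_edge(word, other)

word_net = nx.DiGraph()
for message in ["Hello Dolly"]:          # replace with the forum messages
    add_message(word_net, message)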
221
- When building networks of words, different techniques can be used and the maximum range for
222
- considering a co-occurrence can vary. The choice of a maximum interval of 7 words provided the
223
- best results in our case study; however , we maintain that the analyst should be free to adjust this
224
- interval according to the specific context and dataset analysed. We also considered the possibility to
225
- simplify our network, by applying lemmatization or stemming algorithms to reduce words to their root
226
- forms (Korenius et al., 2004; Perkins, 2014). However, a test in this direction produced no better
227
- results than those obtained when we worked on the full network.
228
-
229
- 3.1. Study Variables
230
- Considering the two networks described in the previous section, we extracted weekly measures for
231
- a specific set of variables, representative of the social structure, the activity and the employees’ use
232
- of language. To measure structural positions, we referred to well -known centrality metrics of Social
233
- Network Analysis : betweenness and degree centrality (Freeman, 1978) .
234
- Degree centrality measures the number of direct connections of a node in the network and
235
- corresponds to the number of incident arcs, either originating or terminating at that node .
236
- Group Degree Centrality quantifies the variability of individual degree centrality scores and it is
237
- used to measure how centralized a network is, and so how much it is dominated by a set of highly
238
- central nodes. This measure reaches its maximum of 1 for a star graph (Wasserman and Faust,
239
- 1994) .
240
- Betweenness Centrality is a measure that reflects the number of times a node lies in-between the
241
- shortest paths that connect every other pair of nodes (Wasserman and Faust, 1994) .
242
- Group Betweenness Centrality reflects the heterogeneity of betweenness centrality scores of single
243
- actors: for a star graph the value is maximum and equal to 1; for a network where each actor has the
244
- same level of betweenness, this index is equal to zero (Wasserman and Faust, 1994) .
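The following sketch shows how these individual and group-level measures could be computed with networkx (one common normalization, following Wasserman and Faust (1994); the Condor software used in the study may normalize differently):

import networkx as nx

G = nx.gnp_random_graph(50, 0.1, seed=1)       # stand-in for a weekly interaction network

n = G.number_of_nodes()
degree = dict(G.degree())                      # number of incident arcs per node
betweenness = nx.betweenness_centrality(G)     # normalized betweenness per node

# Group degree centralization: 1 for a star graph, 0 when all degrees are equal.
d_max = max(degree.values())
group_degree = sum(d_max - d for d in degree.values()) / ((n - 1) * (n - 2))

# Group betweenness centralization, from normalized individual scores:
# 1 for a star graph, 0 when every actor has the same betweenness.
b_max = max(betweenness.values())
group_betweenness = sum(b_max - b for b in betweenness.values()) / (n - 1)

print(group_degree, group_betweenness)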
245
- With regard to the interaction network, we considered group-level metrics with the intent of
246
- investigating the full network activity, without focusing on single employees; for the word network,
247
- on the other hand, we focused on the structural positions of the node associated with the company
248
- brand (which also corresponds to the company name in the stock market). Our aim was to test if the
249
- fluctuations of the brand in this network could be used as a predictor of the company stock price at
250
- the end of each week (which is our dependent variable) .
251
- To investigate the use of language by the employees in the intranet forum, we carried out a
252
- semantic analysis of the content of news and comments, mapping their average weekly level of
253
- sentiment, emotionality and complexity.
254
- Sentiment is a measure of the positivity or negativity of a text, calculated by using a multi -lingual
255
- classifier based on a machine learning algorithm (trained on large datasets extracted from Twitter)
256
- (Brönnimann, 2014). A score of 0.5 represents a neutral sentiment, whereas higher scores represent
257
- a positive sentiment , and lower scores a negative one.
258
- Emotionality measures the variation in the overall sentiment and it is calculated as its standard
259
- deviation. If all the messages converge towards a positive sentiment score, the emotionality will be
260
- low; by contrast , if there is a frequent alternation of positive and negative messages, the
261
- emotionality will be high (Brönnimann, 2014) .
262
- Complexity measures the distribution of words within a text, considering as more complex the
263
- words that are less frequently used in a specific context (Brönnimann, 2014) . Accordingly, a rare
264
- word that is commonly used in an intranet forum will not increase the general level of complexity.
265
- As the last predictor in our study, we considered the weekly levels of activity as the number of
266
- messages posted during a week (Activity) or the number of word co-occurrences (Activity Words),
267
- i.e. the number of links in the word network.
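A minimal sketch of the weekly aggregation of these measures (the per-message sentiment scores are assumed to come from an external classifier, as in the study; file and column names are hypothetical):

import pandas as pd

msgs = pd.read_csv("forum_posts.csv", parse_dates=["timestamp"])
msgs["week"] = msgs["timestamp"].dt.to_period("W")

weekly = msgs.groupby("week").agg(
    sentiment=("sentiment_score", "mean"),     # average weekly sentiment
    emotionality=("sentiment_score", "std"),   # variation of sentiment = emotionality
    activity=("sentiment_score", "size"),      # number of messages posted in the week
)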
268
- The semantic analysis, as well as the computation of the centrality and activity measures, were
269
- implemented by using the software Condor1 and the package NLTK developed for the
270
- programming language Python 3 (Perkins, 2014) .
271
-
272
- 1 www.galaxyadvisors.com/product-extended.html
273
- Lastly, to control for the general stock market fluctuations, and the events which could affect the whole
274
- market, we included the FTSE MIB in the predictive models. This index is the primary benchmark
275
- for the Italian equity market; it captures about 76% of the domestic market capitalization.
276
- As per agreed privacy arrangements, we could not collect other control variables, such as
277
- employees’ age, gender, job role or business department.
278
-
279
- 4. Results
280
- We combined the variables presented in Section 3 to see if they could be used as predictors of a
281
- company price on the Italian stock market. Our aim is not to present a final model to make stock
282
- market predictions, but to draw the attention of scholars to new variables that can be extracted from
283
- a company intranet forum and easily integrated in more complex models, to improve their
284
- predictive power.
285
- We started our analysis by looking at the correlations between our variables and the company stock
286
- price , at various time lags. Specifically, we analysed the insights coming from the intranet forum
287
- during the same week in which the stock price was collected (Current Week ) and in the two weeks
288
- before (named respectively 1 Week Lag and 2 Weeks Lag), to see which time interval was more
289
- informative (Table 1) .
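The lagged correlations of Table 1 can be computed, for instance, as follows (a sketch assuming a weekly DataFrame with the predictors and a 'price' column):

import pandas as pd

def lagged_correlations(data, target="price", lags=(0, 1, 2)):
    """Pearson correlation of each predictor at week t with the price at week t + lag."""
    rows = {}
    for col in data.columns.drop(target):
        rows[col] = {f"{lag} week(s) lag": data[col].corr(data[target].shift(-lag))
                     for lag in lags}
    return pd.DataFrame(rows).T

# lagged_correlations(weekly_data)  # reproduces the layout of Table 1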
290
-
291
-
292
-
293
-
294
-
295
-
296
-
297
- 12
298
-
299
- Table 1. Pearson’s correlation coefficients with stock price (N=94).
300
-
301
- For an easier reading , we only reported the correlation coefficients of each variable with the
302
- company stock price . As the table shows , there are relatively high correlation s of group degree and
303
- group betweenne ss centrality wi th price . These correlations are negative at all lags, suggesting that
304
- a more heterogeneous network, less centred on few dominant nodes, is a better indicator of higher
305
- prices . As regards the semantic variables, sentiment is the only one which shows a significant
306
- correlation (at lag 1 an d 2); this correlation is negative, probably reflecting the fact that a more
307
- technical language is usually associated to news related to firm performance and to financial
308
- events. As expected, FTS E MIB is highly correlated with stock price, proving its val ue as a control
309
- variable. Lastly, the degree centrality of the company brand in the word network proved to be
310
- significantly correlated with price: when the company name is more frequently mentioned in the
311
- general discourse, in association with a higher number of different co-occurring words, the stock
312
- price is higher. To consider possible autocorrelation effects, we implemented a first differencing of
313
- the dependent variable and the Granger causality tests presented in Table 2. We considered up to
314
- three-week lags, as suggested by the model diagnostics.
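A sketch of such tests with the statsmodels library (illustrative only; file and column names are hypothetical):

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

weekly_data = pd.read_csv("weekly_measures.csv")    # hypothetical weekly dataset
price_diff = weekly_data["price"].diff()            # first differencing of the dependent variable

for predictor in ["Activity Words", "Sentiment", "Group Betweenness Centrality"]:
    pair = pd.concat([price_diff, weekly_data[predictor]], axis=1).dropna()
    # tests whether the predictor Granger-causes the (differenced) price, up to 3 lags
    result = grangercausalitytests(pair, maxlag=3)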
315
-
316
- Variable                                               Price (Current Week)   Price (1 Week Lag)   Price (2 Weeks Lag)
- Activity Words                                         .421**                 .447**               .426**
- Activity (Interaction Network)                         -.128                  -.121                -.180
- Group Betweenness Centrality (Interaction Network)     -.302**                -.256*               -.272**
- Betweenness Centrality (Word Network)                  .142                   .107                 .283*
- Complexity (Interaction Network)                       .091                   .043                 -.034
- Degree Centrality (Word Network)                       .369**                 .358**               .330**
- Emotionality (Interaction Network)                     -.165                  -.194                -.148
- Sentiment (Interaction Network)                        -.185                  -.230*               -.264*
- FTSE MIB                                               .877**                 .863**               .877**
- Group Degree Centrality (Interaction Network)          -.389**                -.286**              -.252*
- *p<.05; **p<.01.
331
- Dependent: Price                                       Chi-squared
- FTSE MIB                                               3.796^
- Activity (Interaction Network)                         .994^
- Activity Words                                         5.246^
- Group Betweenness Centrality (Interaction Network)     3.984^
- Betweenness Centrality (Word Network)                  3.777^
- Complexity (Interaction Network)                       5.806^
- Degree Centrality (Word Network)                       .023^
- Emotionality (Interaction Network)                     1.543^
- Sentiment (Interaction Network)                        .831^
- Group Degree Centrality (Interaction Network)          1.574^
- ^p > .05.
-
- Table 2. Granger causality tests (Chi-squared values, N=94).
349
-
350
- As Table 2 shows, all our predictors can significantly Granger-cause the company stock price,
351
- suggesting that they could all be considered while making new predictive models.
352
- As a last step of our analysis, we built multiple regression models to forecast the company stock
353
- price and comment on the variance explained by each predictor (Table 3). We did not use more
354
- complex time series models – such as ARIMAX – because the autocorrelation of the stock price
355
- was not significant in the Durbin-Watson test.
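One of these regression models could be fitted, for instance, as follows (a sketch with hypothetical column names, not the models actually reported in Table 3):

import pandas as pd
import statsmodels.api as sm

weekly = pd.read_csv("weekly_measures.csv")     # hypothetical weekly dataset

X = pd.DataFrame({
    "ftse_mib": weekly["ftse_mib"],
    "sentiment_lag2": weekly["sentiment"].shift(2),
    "activity_words_lag1": weekly["activity_words"].shift(1),
    "group_betweenness_lag2": weekly["group_betweenness"].shift(2),
    "brand_betweenness": weekly["brand_betweenness"],
})
y = weekly["price"]

model = sm.OLS(y, sm.add_constant(X), missing="drop").fit()
print(model.rsquared_adj)                       # adjusted R-squared, as in Table 3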
356
-
357
-
358
-
359
-
360
-
361
-
362
-
363
-
364
-
365
- 14
366
-
367
- Table 3. Stock price prediction: multiple regression models.
368
-
369
- In the first models, we tested the contribution of predictors in blocks, avoiding putting together
370
- measures for which we found multicollinearity problems (e.g., group degree and betweenness
371
- centrality). In Model 8, we combined all the significant variables which could better explain the
372
- price variance. With respect to the time lags of each variable, we chose those which performed better
373
- in the models. We found that the FTSE MIB alone explained 75.8% of the price variance. The new
374
- predictors introduced by this study could increase the variance explained by 9.2%. In particular, we
375
- found that negative sentiment at two weeks lag, higher activity in the word network at one week
376
- lag, higher betweenness centrality of the company brand and lower group betweenness centrality
377
- are all significant indicators of a higher stock price. Figure 1 summarizes our findings.
378
-
379
-
380
- (Coefficients by predictor; Models 1-7 test blocks of predictors, Model 8 combines the significant variables.)
- FTSE MIB:                                                   5.34x10E-5** (Model 1); 5.88x10E-5** (Model 8)
- Complexity (Interaction Network), Lag 0:                    .029
- Emotionality (Interaction Network), Lag 1:                  -.058
- Sentiment (Interaction Network), Lag 2:                     -.108*; -.053** (Model 8)
- Activity Words, Lag 1:                                      0.222**; .076** (Model 8)
- Activity (Interaction Network), Lag 0:                      1.59x10E-5
- Group Degree Centrality (Interaction Network), Lag 0:       -.409*
- Group Betweenness Centrality (Interaction Network), Lag 2:  -.236*; -.053* (Model 8)
- Betweenness Centrality (Word Network), Lag 0:               .100*; .114** (Model 8)
- Degree Centrality (Word Network), Lag 0:                    .001**
- Constant (Models 1-8):                                      -.085, 1.081**, .899**, 1.081**, 1.069*, .963**, .922**, -.179**
- Adjusted R Squared (Models 1-8):                            .758, .046, .205, .069, .142, .032, .127, .851
- *p<.05; **p<.01
438
-
439
-
440
-
441
- Figure 1. Significant predictors from the intranet social network.
442
-
443
- 5. Discussion and Conclusions
444
- The objective of this paper was to study the employees’ communication behaviour on a large
445
- company intranet forum, to extract new variables that could be useful to forecast a company’s stock
446
- price. We did not mean to present final forecasting models but to give evidence to the importance of
447
- new metrics that can be integrated in existing financial models to improve their accuracy. In this
448
- sense, our work has practical and theoretical implications for scholars, financial analysts and
449
- professional investors.
450
- By means of Granger causality tests, we proved the value of our metrics up to two -week lags.
451
- Subsequently, we included our predictors in multiple regression models to see whether they could
452
- improve the models’ accuracy with respect to the control variable FTSE MIB.
453
- Our work is consistent with previous studies linking sentiment analysis and stock prices (Chung and
454
- Liu, 2011; Zhang, Fuehres and Gloor, 2011): we found a negative correlation of the stock price with
455
- the sentiment of messages posted in the company intranet forum. This could be partially explained
456
- by a more technical language used in news which deal with firm performance (with a lower
457
- sentiment , for example, when compared to posts where employees discuss their leisure activities
458
- 16
459
- promoted by the company) . It could be the case that more irrelevant and positive messages are
460
- dominant in the network when there are no big events that can positively affect the stock price (this
461
- hypothesis was partially confirmed by an interview with the people in charge of monitoring the
462
- employees’ activity on the intranet forum). A more “democratic” network structure (lower group
463
- betweenness centrality) , less dominated by a smaller number of social actors, also proved to be
464
- predictive of higher prices.
465
- We offer a new contribution since we are among the first scholars linking a company stock price
466
- with the communication behaviours on its intranet forum . Even if the informative power of the
467
- analysis of the employees’ activity on enterprise intranets has been discussed in the past ( Eppler,
468
- 2001; DiMicco et al., 2009) , its association with the market value of a company remain s mostly
469
- unexplored. We support the id ea that internal communication behaviours can affect and partially
470
- mirror business performance (Gloor et al., 2017 b), thus influencing a company ’s stock price.
471
- An additional contribution of this research is to present a new use of word co-occurrence network s,
472
- which helps financial predictions . We transformed the content of the messages posted on the
473
- intranet forum into a word co -occurrence network – where every node, representative of a specific
474
- word, was linked to the others based on th eir co -occurrences in messages (Dagan, Marcus and
475
- Markovitch, 1995 ); on this network , we studied the associations between specific positions of the
476
- company brand and its stock price. Looking at the patt erns interconnecting words, we saw that a
477
- higher activity – i.e. a denser graph with more co -occurrences – and a higher betweenness centrality
478
- of the company brand can predict higher prices. This last result could be representative of cases
479
- where the brand is more widely used, spreading ac ross many different news and comments and
480
- therefore being more central in the discourse .
481
- Some of the l imitations of this study are due to our impossibility to analyse more companies or
482
- different business sector s. Moreover, we could not consider the communication and the interactions 17
483
- between employees which went through other media – such as emails, instant messaging apps,
484
- phone calls or face -to-face communication.
485
- To extend our results , we advocate further researc h to study the intranet forum s of more firms, also
486
- taking into account their size and business sector. It would also be important to consider larger time
487
- frames , to test other variables and network construction techniques , and to develop predictive
488
- models based on more complex machine learning algorithms (which already proved their value in
489
- the past).
490
- Due to privacy agreements, we could not access to more control variables to differentiate the
491
- interactions based on the individual differe nces of employees. Therefore, future researchers should
492
- consider taking into account factors such as employees’ age, gender and job role . Lastly , it could be
493
- interesting to study if more accurate models c an be obtained by isolating the communication taking
494
- place in specific business departments or geographical locations . Depending on the business
495
- context, one could focus the attention to the communication of employees working in specific
496
- business units (customer care, financial department, etc ...).
497
-
498
- References
499
- Aggarwal, C. C. and Zhai, C. X. (2013) Mining text data , Mining Text Data . Edited by C. C.
500
- Aggarwal and C. Zhai. Boston, MA: Springer US. doi: 10.1007/978 -1-4614 -3223 -4.
501
- Allen, T. J., Gloor, P. A. , Fronzetti Colladon, A., Woerner, S. L. and Raz, O. (2016) ‘The Power of
502
- Reciprocal Knowledge Sharing Relationships for Startup Success’, Journal of Small
503
- Business and Enterprise Development , 23(3), pp. 636 –651. doi: 10.1108/JSBED -08-2015 -
504
- 0110.
505
- Antweiler, W. and Frank, M. Z. (2004) ‘Is All That Talk Just Noise? The Information Content of
506
- Internet Stock Message Boards’, The Journal of Finance , 59(3), pp. 1259 –1294. doi:
507
- 10.1111/j.1540 -6261.2004.00662.x.
508
- van Atteveldt, W. H. (2008) Semantic Network Analysis Techniques for Extracting , Representing , 18
509
- and Querying Media Content . Charleston , SC: BookSurge Publishing.
510
- Barro, R. J. (2015) ‘The Stock Market and Investment’, The Review of Financial Studies , 3(1), p.
511
- 115. doi: 10.1093/rfs/3.1.115.
512
- Barucca, P., Bardoscia, M., Caccioli, F., D’Errico, M., Visentin, G., Battiston, S. and Caldarelli, G.
513
- (2016) ‘Network Valuation in Financial Systems’, available at SSRN:
514
- https://ssrn.com/abstract=2795583
515
- Beal, V. (2017) Intranet, Webopedia . Available at:
516
- http://www.webopedia.com/TERM/I/Intranet.html (Accessed: 23 May 2017). Bollen, J.,
517
- Mao, H. and Zeng, X. (2011) ‘Twitter mood predicts the stock market’, Journal of
518
- Computational Science , 2(1), pp. 1 –8. doi: 10.1016/j.jocs.2010.12.007.
519
- Bollerslev, T. and Ole Mikkelsen, H. (1996) ‘Modeling and pricing long memory in stock market
520
- volatility’, Journal of Econometrics , 73(1), pp. 151 –184. doi: 10.1016/0304 -4076(95)01736 -
521
- 4.
522
- Brönnimann, L. (2014) Analyse der Verbreitung von Innovationen in sozialen Netzwerken .
523
- Univ ersity of Applied Sciences Northwestern Switzerland.
524
- Bullinaria, J. a and Levy, J. P. (2012) ‘Extracting semantic representations from word co -
525
- occurrence statistics: stop -lists, stemming, and SVD.’, Behavior research methods , 44(3),
526
- pp. 890 –907. doi: 10.37 58/s13428 -011-0183 -8.
527
- Caropreso, M. F. and Matwin, S. (2006) ‘Beyond the Bag of Words: A Text Representation for
528
- Sentence Selection’, in Lecture Notes in Computer Science (including subseries Lecture
529
- Notes in Artificial Intelligence and Lecture Notes in Bi oinformatics) , pp. 324 –335. doi:
530
- 10.1007/11766247_28.
531
- Chen, R. and Lazer, M. (2013) Sentiment Analysis of Twitter Feeds for the Prediction of Stock
532
- Market Movement , Cs 229 Machine Learning: Final Project, University of Stanford.
533
- Chung, S. and Liu, S. (2011 ) Predicting stock market fluctuations from Twitter: A analysis of the
534
- predictive powers of real -time social media . University of Berkeley, Berkeley, CA. 19
535
- Corman, S. R., Kuhn, T., Mcphee, R. D. and Dooley, K. J. (2002) ‘Studying Complex Discursive
536
- Systems Centering Resonance Analysis of Communication’, Human Communication
537
- Research , pp. 157 –206. doi: 10.1093/hcr/28.2.157.
538
- Cowles, A. (1933) ‘Can Stock Market F orecasters Forecast?’, Econometrica , 1(3), pp. 309 –324.
539
- doi: 10.2307/1907042.
540
- Dagan, I., Marcus, S. and Markovitch, S. (1995) ‘Contextual Word Similarity and Estimation from
541
- Sparse Data’, Computer Speech and Language , 9, pp. 123 –152.
542
- Dana, L. P., Etemad, H . and Wright, R. W. (2008) ‘Toward a paradigm of symbiotic
543
- entrepreneurship’, International Journal of Entrepreneurship and Small Business , 5(2), pp.
544
- 109–126. doi: 10.1504/IJESB.2008.016587.
545
- Danowski, J. (2009) ‘Inferences from word networks in messages’, in Krippendorff, K. and Bock,
546
- M. A. (eds) The content analysis reader . London, UK: Sage Publications, Inc, pp. 421 –429.
547
- Diesner, J. and Carley, K. M. (2005) ‘Revealing Social Structure from Texts’, in Causal Mapping
548
- for Research in Information Technology . IGI Global, pp. 81 –108. doi: 10.4018/978 -1-
549
- 59140 -396-8.ch004.
550
- DiMicco, J. M., Geyer, W., Millen, D. R., Dugan, C. and Brownholtz, B. (2009) ‘People
551
- Sensemaking and Relationship Building on an Enterprise Social Network Site’, in 2009
552
- 42nd Hawaii Internation al Conference on System Sciences . IEEE, pp. 1 –10. doi:
553
- 10.1109/HICSS.2009.343.
554
- Elshendy, M., Fronzetti Colladon, A., Battistoni, E. and Gloor, P. A. (2017) ‘Using Four Different
555
- Online Media Sources to Forecast the Crude Oil Price’, Journal of Information Science , p.
556
- in press. doi: 10.1177/0165551517698298
557
- Eppler, M. J. (2001) ‘Making knowledge visible through intranet knowledge maps: concepts,
558
- elements, cases’, in Proceedings of the 34th Annual Hawaii International Conference on
559
- System Sciences . IEEE Comput. Soc. doi: 10.1109/HICSS.2001.926495.
560
- Freeman, L. C. (1978) ‘Centrality in Social Networks’, Social Networks , 1(3), pp. 215 –239. doi: 20
561
- 10.1016/0378 -8733(78)90021 -7.
562
- Gloor, P. A., Fronzetti Colladon, A., Grippa, F. and Giacomelli, G. (2017a) ‘Forecast ing managerial
563
- turnover through e -mail based social network analysis’, Computers in Human Behavior , 71,
564
- pp. 343 –352. doi: 10.1016/j.chb.2017.02.017.
565
- Gloor, P. A., Fronzetti Colladon, A., Grippa, F., Giacomelli, G., and Saran, T. (2017 b) ‘The Impact
566
- of Virt ual Mirroring on Customer Satisfaction’, Journal of Business Research , 75, pp. 67-
567
- 76. doi: 10.1016/j.jbusres.2017.02.010
568
- Gloor, P. A. and Giacomelli, G. (2014) ‘Reading Global Clients’ Signals’, MIT Sloan Management
569
- Review , 55(3), pp. 23 –29.
570
- Grimm, V. (200 5) ‘Pattern -Oriented Modeling of Agent -Based Complex Systems: Lessons from
571
- Ecology’, Science , 310(5750), pp. 987 –991. doi: 10.1126/science.1116681.
572
- Hollingshead, A. B., Fulk, J. and Monge, P. (2002) ‘Fostering Intranet Knowledge Sharing: An
573
- Integration of Transactive Memory and Public Goods Approaches BT - Distributed Work’,
574
- in Distributed Work , pp. 335 –355.
575
- Huang, W., Zhuang, X. and Yao, S. (2009) ‘A network analysis of the Chinese stock market’,
576
- Physica A , 388(14), pp. 2956 –2964. doi: 10.1016/j.physa.2009.03.028.
577
- Khadjeh Nassirtoussi, A., Aghabozorgi, S., Ying Wah, T. and Ngo, D. C. L. (2014) ‘Text mining
578
- for market prediction: A systematic review’, Expert Systems with Applications . Elsevier Ltd,
579
- 41(16), pp. 7653 –7670. doi: 10.1016/j.eswa.2014.06.009.
580
- Korenius, T., Laurikkala, J., Järvelin, K. and Juhola, M. (2004) ‘Stemming and lemmatization in the
581
- clusterin g of finnish text documents’, in Proceedings of the Thirteenth ACM conference on
582
- Information and knowledge management - CIKM ’04 . New York, NY: ACM Press, p p. 625 -
583
- 633. doi: 10.1145/1031171.1031285.
584
- Kuo, R. J., Chen, C. H. and Hwang, Y. C. (2001) ‘An intell igent stock trading decision support
585
- system through integration of genetic algorithm based fuzzy neural network and artificial
586
- neural network’, Fuzzy Sets and Systems , 118(1), pp. 21 –45. doi: 10.1016/S0165 -21
587
- 0114(98)00399 -6.
588
- Makrehchi, M., Shah, S. and Liao, W. (2013) ‘Stock Prediction Using Event -Based Sentiment
589
- Analysis’, in 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence
590
- (WI) and Intelligent Agent Technologies (IAT) . IEEE, pp. 337 –342. doi: 10.1109/WI -
591
- IAT.2013.48.
592
- Morana, C. and Beltratti, A. (2008) ‘Comovements in international stock markets’, Journal of
593
- International Financial Markets, Institutions and Money , 18(1), pp. 31 –45. doi:
594
- 10.1016/j.intfin.2006.05.001.
595
- Nguyen, T. H., Shirai, K. and Velcin, J. (2014) ‘Sentiment analysis on social media for stock
596
- movement prediction’, Expert Systems with Applications . Elsevier Ltd., 42(24), pp. 9603 –
597
- 9611. doi: 10.1016/j.eswa.2015.07.052.
598
- Pan, R. K. and Sinha, S. (2007) ‘Collective behavior of stock price movemen ts in an emerging
599
- market’, Physical Review E , 76(046116 ), pp. 1-9. doi: 10.1103/PhysRevE.76.046116.
600
- Patel, M. B. and Yalamalle, S. R. (2014) ‘Stock Price Prediction Using Artificial Neural Network’,
601
- International Journal of Innovative Research in Science, Engineering and Technology , 3(6),
602
- pp. 13755 –13762.
603
- Perkins, J. (2014) Python 3 Text Processing With NLTK 3 Cookbook , Python 3 Text Processing
604
- With NLTK 3 Cookbook . Birmingham, UK: Packt Publishing.
605
- Rechenthin, M., Street, W. N. and Srinivasan, P. (2013) ‘Stock chatter : Using stock sentiment to
606
- predict price direction’, Algorithmic Finance , 2(3–4), pp. 169 –196. doi: 10.3233/AF -13025.
607
- Schumaker, R. P. and Chen, H. (2009) ‘Textual analysis of stock mar ket prediction using breaking
608
- financial news’, ACM Transactions on Information Systems , 27(2), pp. 1 –19. doi:
609
- 10.1145/1462198.1462204.
610
- Sedik, T. S. and Williams, O. H. (2011) Global and Regional Spillovers to GCC Equity Markets ,
611
- International Monetary Fund .
612
- Si, J., Mukherjee, A., Liu, B., Li, Q., Li, H. and Deng, X. (2013) ‘Exploiting Topic based Twitter 22
613
- Sentiment for Stock Prediction’, in Proceedings of the 51st Annual Meeting of the
614
- Association for Computational Linguistics (Volume 2: Short Papers) . Sofia , Bulgaria:
615
- Association for Computational Linguistics (ACL), pp. 24 –29.
616
- Sornette, D. (2003) ‘Why stock market crash’, Vasa , 32(1), pp. 54 –55. doi: 10.1024/0301 -
617
- 1526.32.1.54.
618
- Sulaiman, F., Zailani, S. and Ramayah, T. (2012) ‘Intranet Portal Utilization: Mon itoring Tool for
619
- Productivity - Quality and Acceptance Point of View’, Procedia - Social and Behavioral
620
- Sciences . The Authors, 65, pp. 381 –386. doi: 10.1016/j.sbspro.2012.11.138.
621
- Tabak, B. M., Takami, M. Y., Cajueiro, D. O. and Petitinga, A. (2009) ‘Quanti fying price
622
- fluctuations in the Brazilian stock market’, Physica A: Statistical Mechanics and its
623
- Applications . Elsevier B.V., 388(1), pp. 59 –62. doi: 10.1016/j.physa.2008.09.028.
624
- Tim and Berners -Lee (2006) ‘The Semantic Web Revisited’, IEEE Intelligent Sy stems , 21(3), pp.
625
- 96–101. doi: 10.1109/MIS.2006.62.
626
- Trigg, R. H. and Weiser, M. (1986) ‘TEXTNET: a network -based approach to text handling’, ACM
627
- Transactions on Information Systems . New York, NY, USA: ACM, 4(1), pp. 1 –23. doi:
628
- 10.1145/5401.5402.
629
- Wasserman, S. and Faust, K. (1994) Social Network Analysis: Methods and Applications . New
630
- York, NY: Cambridge University Press. doi: 10.1525/ae.1997.24.1.219.
631
- Wenyue, S., Chuan, T. and Guang, Y. (2015) Network Analysis of the stock market . CS224W
632
- Project Report, Sta ndford University.
633
- Wright, R. W. and Dana, L. P. (2003) ‘Changing Paradigms of International Entrepreneurship
634
- Strategy’, Journal of International Entrepreneurship , 1(1), pp. 135 –152. doi:
635
- 10.1023/A:1023384808859.
636
- Xie, B., Passonneau, R. J. and Wu, L. (2013 ) ‘Semantic Frames to Predict Stock Price Movement’,
637
- in Proocedings of the 51st Annual Meeting of ACL . Sofia, Bulgaria, pp. 873 –883.
638
- Yang, S. Y., Mo, S. Y. K. and Liu, A. (2015) ‘Twitter financial community sentiment and its 23
639
- predictive relationship to stoc k market movement’, Quantitative Finance , 15(10), pp. 1637 –
640
- 1656. doi: 10.1080/14697688.2015.1071078.
641
- Zhang, X., Fuehres, H. and Gloor, P. A. (2011) ‘Predicting Stock Market Indicators Through
642
- Twitter “I hope it is not as bad as I fear”’, Procedia - Social and Behavioral Sciences , 26,
643
- pp. 55 –62. doi: 10.1016/j.sbspro.2011.10.56 2.
644
-
txt/2105.12205.txt DELETED
@@ -1,554 +0,0 @@
1
- A New Score for Adaptive Tests
2
- in Bayesian and Credal Networks
3
- Alessandro Antonucci, Francesca Mangili,
4
- Claudio Bonesana, and Giorgia Adorni
5
- Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Lugano, Switzerland
6
- {alessandro,francesca,claudio.bonesana,giorgia.adorni}@idsia.ch
7
- Abstract. A test is adaptive when its sequence and number of questions
8
- is dynamically tuned on the basis of the estimated skills of the taker.
9
- Graphical models, such as Bayesian networks, are used for adaptive tests
10
- as they allow to model the uncertainty about the questions and the skills
11
- in an explainable fashion, especially when coping with multiple skills.
12
- A better elicitation of the uncertainty in the question/skills relations
13
- can be achieved by interval probabilities. This turns the model into a
14
- credal network, thus making more challenging the inferential complexity
15
- of the queries required to select questions. This is especially the case for
16
- the information theoretic quantities used as scores to drive the adaptive
17
- mechanism. We present an alternative family of scores, based on the
18
- mode of the posterior probabilities, and hence easier to explain. This
19
- makes considerably simpler the evaluation in the credal case, without
20
- significantly affecting the quality of the adaptive process. Numerical tests
21
- on synthetic and real-world data are used to support this claim.
22
- Keywords: computer adaptive tests · information theory · credal net-
23
- works · Bayesian networks · index of qualitative variation
24
- 1 Introduction
25
- A test or an exam can be naturally intended as a measurement process, with the
26
- questions acting as sensors measuring the skills of the test taker in a particular
27
- discipline. Such measurement is typically imperfect with the skills modelled as
28
- latent variables whose actual values cannot be revealed in a perfectly reliable
29
- way. The role of the questions, whose answers are regarded instead as mani-
30
- fest variables, is to reduce the uncertainty about the latent skills. Following this
31
- perspective, probabilistic models are an obvious framework to describe tests.
32
- Consider for instance the example in Figure 1, where a Bayesian network evalu-
33
- ates the probability that the test taker knows how to multiply integers. In such
34
- framework making the test adaptive, i.e., picking a next question on the basis of
35
- the current knowledge level of the test taker is also very natural. The information
36
- gain for the available questions might be used to select the question leading to
37
- the more informative results (e.g., according to Table 1, Q1 is more informative
38
- than Q2 no matter what the answer is). This might also be done before the
39
- answer on the basis of expectations over the possible alternatives.
40
- A critical point when coping with such approaches is to provide a realistic
41
- assessment for the probabilistic parameters associated with the modelling of the
42
- relations between the questions and the skills. Having to provide sharp numerical
43
- values for these probabilities might be difficult. As the skill is a latent quantity,
44
- complete data are not available for statistical learning and a direct elicitation
45
- should be typically demanded to experts (e.g., a teacher). Yet, it might be not
46
- obvious to express such a domain knowledge by single numbers and a more
47
- robust elicitation, such as a probability interval (e.g., P(Q1 = 1|S1 = 1) ∈
48
- [0.85, 0.95]), might add realism and robustness to the modelling process [13].
49
- With such generalized assessments of the parameters a Bayesian network simply
50
- becomes a credal network [20]. The counterpart of such increased realism is
51
- the higher computational complexity characterizing inference in credal networks
52
- [19]. This is an issue especially when coping with information theoretic measures
53
- such as information gain, whose computation in credal networks might lead to
54
- complex non-linear optimization tasks [17].
55
- The goal of this paper is to investigate the potential of alternatives to the
56
- information-theoretic scores driving the question selection in adaptive tests based
57
- on directed graphical models, no matter whether these are Bayesian or credal
58
- networks. In particular, we consider a family of scores based on the (expected)
59
- mode of the posterior distributions over the skills. We show that, when coping
60
- with credal networks, the computation of these scores can be reduced to a simpler
61
- sequence of linear programming tasks. Moreover, we show that these scores benefit
62
- from better explainability properties, thus allowing for a more transparent process
63
- in the question selection.
64
- Fig. 1. A Bayesian network over Boolean variables modelling a simple test to evaluate
- integer multiplication skill with two questions. Skill node: Knows multiplication (S);
- question nodes: 10×5? (Q1) and 13×14? (Q2). Quantification: P(Q1=1|S=1) = 0.9,
- P(Q1=1|S=0) = 0.3, P(Q2=1|S=1) = 0.6, P(Q2=1|S=0) = 0.4.
71
- The paper is organized as follows. A critical discussion about the existing
72
- work in this area is in Section 2. The necessary background material is reviewed
73
- in Section 3. The adaptive testing concepts are introduced in Section 4 and
74
- specialized to graphical models in Section 5. The technical part of the paper is in Section
75
- 6, where the new scores are discussed and specialized to the credal case, while
76
- the experiments are in Section 7. Conclusions and outlooks are in Section 8.
77
- Table 1. Posterior probabilities of the skill after one or two questions in the test based
- on the Bayesian network in Figure 1. A uniform prior over the skill is considered.
- Probabilities are regarded as grades and sorted from the lowest one. Bounds obtained
- with a perturbation ǫ = ±0.05 of all the input parameters are also reported.
- Q1  Q2   P(S=1|q1,q2)   lower bound   upper bound
- 0   0    0.087          0.028         0.187
- 0   −    0.125          0.052         0.220
- 0   1    0.176          0.092         0.256
- −   0    0.400          0.306         0.506
- −   1    0.600          0.599         0.603
- 1   0    0.667          0.626         0.708
- 1   −    0.750          0.748         0.757
- 1   1    0.818          0.784         0.852
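As an illustration (not part of the paper), the central column of this table can be reproduced by applying Bayes' rule to the network of Figure 1:

# Minimal sketch: posterior P(S=1 | answers) for the two-question network of Figure 1,
# with a uniform prior over the skill S.
p_s = {1: 0.5, 0: 0.5}                                  # uniform prior
p_q1 = {1: {1: 0.9, 0: 0.3}, 0: {1: 0.1, 0: 0.7}}       # P(Q1=q1 | S=s)
p_q2 = {1: {1: 0.6, 0: 0.4}, 0: {1: 0.4, 0: 0.6}}       # P(Q2=q2 | S=s)

def posterior_s1(q1=None, q2=None):
    """P(S=1 | answers); None means the question has not been asked."""
    joint = {}
    for s in (0, 1):
        p = p_s[s]
        if q1 is not None:
            p *= p_q1[q1][s]
        if q2 is not None:
            p *= p_q2[q2][s]
        joint[s] = p
    return joint[1] / (joint[0] + joint[1])

print(round(posterior_s1(q1=0, q2=0), 3))   # 0.087, as in the first row of Table 1
print(round(posterior_s1(q1=1, q2=1), 3))   # 0.818, as in the last row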
90
- 2 Related Work
91
- Modelling a test as a process relating latent and manifest variables dates back to the clas-
92
- sical item response theory (IRT), which has been widely used even to implement
93
- adaptive sequences [12]. Despite its success related to the ease of implementation
94
- and inference, IRT might be inadequate when coping with multiple latent skills,
95
- especially when these are dependent. This moved researchers towards the area of
96
- probabilistic graphical models [15], as practical tools to implement IRT in more
97
- complex setups [2]. Eventually, Bayesian networks have been iden-
98
- tified as a suitable formalism to model tests, even beyond the IRT framework
99
- [23], this being especially the case for adaptive models [24] and coached solving
100
- [10]. In order to cope with latent skills, some authors successfully adopted EM
101
- approaches to these models [21], this also involving the extreme situation of no
102
- ground truth information about the answers [5]. As an alternative approach to
103
- the same issue, some authors considered relaxations of the Bayesian formalism,
104
- such as fuzzy models [6] and imprecise probabilities [17]. The latter is the di-
105
- rection we consider here, but trying to overcome the computational limitations
106
- of that approach when coping with information-theoretic scores. This has some
107
- analogy with the approach in [9], which is focused on the Bayesian case only, but
108
- whose score, based on the same-decision problem, appears hard to be extended
109
- to the imprecise framework without affecting the computational complexity.
110
- 3 Background on Bayesian and Credal Networks
111
- We denote variables by Latin uppercase letters, while using lowercas e for their
112
- generic values, and calligraphic for the set of their possible values. T hus,v∈ V
113
- is a possible value of V. Here we only consider discrete variables.1
114
- 1IRT uses instead with continuous skills. Yet, when coping pr obabilistic models, hav-
115
- ing discrete skill does no prevent evaluations to range over a continuous domain.
116
- E.g., see Table 1, where the grade corresponds to a (continuo us) probability.4 Antonucci et al.
117
- 3.1 Bayesian Networks
118
- A probability mass function (PMF) over Vis denoted as P(V), whileP(v) is
119
- the probability assigned to state v. Given a function fofV, its expectation with
120
- respect to P(V) isEP(f) :=/summationtext
121
- v∈VP(v)f(v). The expectation of −logb[P(V)] is
122
- calledentropyand denoted also as H(X).2We setb:=|V|to have the maximum
123
- of the entropy, achieved for uniform PMFs, equal to one.
124
- Given joint PMF P(U,V), the marginal PMF P(V) is obtained by sum-
125
- ming out the other variable, i.e., P(v) =/summationtext
126
- u∈UP(u,v). Conditional PMFs such
127
- asP(U|v) are similarly obtained by Bayes’s rule, i.e., P(u|v) =P(u,v)/P(v)
128
- provided that P(v)>0. Notation P(U|V) :={P(U|v)}v∈Vis used for such con-
129
- ditional probability table (CPT). The entropy of a conditional PMF is d efined
130
- as in the unconditional case and denoted as H(U|v). The conditional entropy
131
- is a weighted average of entropies of the conditional PMFs, i.e., H(U|V) :=/summationtext
132
- v∈VH(U|v)P(v). IfP(u,v) =P(u)P(v) for each u∈ Uandv∈ V, variables
133
- UandVare independent. Conditional formulations are also considered.
134
- We assume the set of variables V:= (V1,...,V r) to be in one-to-one corre-
135
- spondence with a directed acyclic graph G. For each V∈V, the parents of V,
136
- i.e., the predecessors of VinG, are denoted as Pa V. GraphGtogether with the
137
- collection of CPTs {P(V|PaV)}V∈Vprovides a Bayesian network (BN) specifi-
138
- cation [15]. Under the Markov condition, i.e., every variable is condition ally in-
139
- dependent of its non-descendants non-parents given its parent s, a BN compactly
140
- defines a joint PMF P(V) that factorizes as P(v) =/producttext
141
- V∈VP(v|paV). Inference,
142
- intended as the computation of the posterior PMF of a single (querie d) variables
143
- given some evidence about other variables, is in general NP-hard, b ut exact and
144
- approximate schemes are available (see [15] for details).
145
- 3.2 Credal Sets and Credal Networks
146
- A set of PMFs over Vis denoted as K(V) and called credal set (CS). Expec-
147
- tations based on CSs are the bounds of the PMF expectations with r espect to
148
- the CS. Thus E[f] := inf P(V)∈K(V)E[f] and similarly for the supremum E. Ex-
149
- pectations of events are in particular called lower and upper probab ilities and
150
- denoted as PandP. Notation K(U|v) is used for a set of conditional CSs, while
151
- K(U|V) :={K(U|v)}v∈Vis a credal CPT (CCPT).
152
- Analogously to a BN, a credal network (CN) is specified by graph Gtogether
153
- with a family of CCPTs {K(V|PaV)}V∈V[11]. A CN defines a joint CS K(V)
154
- corresponding to all the joint PMFs induced by BNs whose CPTs are c onsis-
155
- tent with the CN CCPTs. For CNs, we intend inference as the comput ation of
156
- the lower and upper posterior probabilities. The task generalizes BN inference
157
- being therefore NP-hard, see [19] for a deeper characterization . Yet, exact and
158
- approximate schemes are also available to practically compute infere nces [4].
159
- 2We set 0·logb0 = 0 to cope with zero probabilities.A New Score for Adaptive Tests in Bayesian and Credal Network s 5
160
- 4 Testing Algorithms
161
- Atypicaltestaimsatevaluatingthe knowledgelevelofatesttaker σonthe basis
162
- of her answers to a number of questions. Let Qdenote a repository of questions
163
- availabletothe instructor.The orderand the number ofquestions pickedfrom Q
164
- to be asked to σmight not be defined in advance. We call testing algorithm (TA)
165
- a procedure taking care of the selection of the sequence of quest ions asked to the
166
- test taker, and to decide when the test stops. Algorithm 1 depicts a general TA
167
- scheme, with edenoting the array of the answers collected from test taker σ.
168
- Algorithm 1 General TA: given the profile σand repository Q, an evaluation
169
- based on answers eis returned.
170
- 1:e←∅
171
- 2:while not Stopping (e)do
172
- 3:Q∗←Pick(Q,e)
173
- 4:q∗←Answer(Q∗,σ)
174
- 5:e←e∪{Q∗=q∗}
175
- 6:Q←Q\{Q∗}
176
- 7:end while
177
- 8:returnEvaluate (e)
178
- Boolean function Stopping decides whether the test should end, this choice
179
- being possibly based on the previous answers in e. Trivial stopping rules might
180
- be based on the number of questions asked to the test takes ( Stopping (e) = 1
181
- if and only if |e|> n) or on the number of correct answers provided that a
182
- maximum number of questions is not exceeded. Function Pickselects instead
183
- the question to be asked to the student from the repository Q. A TA is called
184
- adaptive when this function takes into account the previous answers e. Trivial
185
- non-adaptive strategies might consist in randomly picking an element ofQor
186
- following a fixed order. Function Answeris simply collecting (or simulating) the
187
- answeroftest taker σtoaparticularquestion Q. In ourassumptions,this answer
188
- is not affected by the previous answers to other questions.3
189
- Finally,Evaluate is a function returning the overall judgement of the test
190
- (e.g., a numerical grade or a pass/fail Boolean) on the basis of all th e answers
191
- collected after the test termination. Trivial examples of such func tions are the
192
- percentage of correct answers or a Boolean that is true when a su fficient number
193
- of correct answers has been provided. Note also that in our assum ptions the TA
194
- isexchangeable , i.e., the stopping rule, the question finder and the evaluation
195
- function are invariant with respect to permutations in e[22]. In other words,
196
- 3Generalized setupswhere thequalityofthestudentanswer i s affectedbytheprevious
197
- answers will be discussed at the end of the paper. This might i nclude a fatigue
198
- model negatively affecting the quality of the answers when ma ny questions have
199
- been already answered as well as the presence of revealing questions that might
200
- improve the quality of other answers [16].6 Antonucci et al.
201
- the same next question, the same evaluation and the same stopping decision is
202
- produced for any two students, who provided the same list of answ ers in two
203
- different orders.
204
- A TA is supposed to achieve reliable evaluation of taker σfrom the answers
205
- e. As each answer is individually assumed to improve such quality, asking all the
206
- questions, no matter the order because of the exchangeability as sumption, is an
207
- obvious choice. Yet, this might be impractical (e.g., because of time lim itations)
208
- or just provide an unnecessary burden to the test taker. The go al of a good TA
209
- is therefore to trade off the evaluation accuracy and the number o f questions.4
210
- 5 Adaptive Testing in Bayesian and Credal Networks
211
- The general TA setup in Algorithm 1 can be easily specialized to BNs as f ollows.
212
- First, we identify the profile σof the test taker with the actual states of a
213
- number of latent discrete variables, called skills. LetS={Si}n
214
- j=1denote these
215
- skill variables, and sσthe actual values of the skills for the taker. Skills are
216
- typically ordinal variables, whose states corresponds to increasin g knowledge
217
- levels. Questions in Qare still described as manifest variables whose actual
218
- values are returned by the answerfunction. This is achieved by a (possibly
219
- stochastic) function of the actual profile sσ. This reflects the taker perspective,
220
- while the teacher has clearly no access to sσ. As a remark, note that we might
221
- often coarsenthe set ofpossible values Qfor eachQ∈Q: for instance, amultiple
222
- choicequestionwiththreeoptionsmighthaveasinglerightanswer,t hetwoother
223
- answers being indistinguishable from the evaluation point of view.5
224
- A joint PMF over the skills Sand the questions Qis supposed to be avail-
225
- able. In particular we assume this to correspond to a BN whose grap h has the
226
- questions as leaf nodes. Thus, for each Q∈Q,PaQ⊆Sand we call PaQ
227
- thescopeof question Q. Note that this assumption about the graph is simply
228
- reflecting a statement about the conditional independence betwe en (the answer
229
- to) a question and all the other skills and questions given scope of th e ques-
230
- tion. This basically means that the answers to other questions are n ot directly
231
- affecting the answer to a particular question, and this naturally follo ws from the
232
- exchangeability assumption.6
233
- As the available data are typically incomplete because of the latent na ture
234
- of the skills, dedicated learning strategies, such as various form of constrained
235
- EM should be considered to train a BN from data. We refer the reade r to the
236
- variouscontributionsofPlajnerand Vomlel in this field (e.g.,[21]) for acomplete
237
- discussion of that approach. Here we assume the BN quantification available.
238
- 4In some generalized setups, other elements such as a serendipity in choice in order
239
- to avoid tedious sequences of questions might be also consid ered [7].
240
- 5The case of abstention to an answer and the consequent problem of modelling the
241
- incompleteness is a topic we do not consider here for the sake of conciseness. Yet,
242
- general approaches based on the ideas in [18] could be easily adopted.
243
- 6Moving to other setups would not be really critical because o f the separation prop-
244
- erties of observed nodes in Bayesian and credal networks, se e for instance [3,8].A New Score for Adaptive Tests in Bayesian and Credal Network s 7
245
- In such a BN framework, Stopping (e) might be naturally based on an eval-
246
- uation of the posterior PMF P(S|e), this being also the case for Evaluate . Re-
247
- garding the question selection, Pickmight be similarly based on the (posterior)
248
- CPTP(S|Q,e), whose values for the different answers to Qmight be weighted
249
- by the marginal P(Q|e). More specifically, entropies and conditional entropies
250
- are considered by Algorithm 2, while the evaluation is based on a condit ional
251
- expectation for a given utility function.
252
- Algorithm 2 Information Theoretic TA in BN over the questions Qand the
253
- skillsS: given the student profile sσ, the algorithms returns an evaluation cor-
254
- responding to the expectation of an evaluation function fwith respect to the
255
- posterior for the skills given the answers e.
256
- 1:e=∅
257
- 2:whileH(S|e)> H∗do
258
- 3:Q∗←argmax Q∈Q[H(S|e)−H(S|Q,e)]
259
- 4:q∗←Answer(Q∗,sσ)
260
- 5:e←e∪{Q∗=q∗}
261
- 6:Q←Q\{Q∗}
262
- 7:end while
263
- 8:returnEP(S|e)[f(S)]
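As an illustration (not the authors' code), the selection step of Algorithm 2 can be reproduced by brute-force enumeration on the toy model of Figure 1:

import math

P_S = [0.5, 0.5]                                    # uniform prior on the skill
P_Q_given_S = {"Q1": [0.3, 0.9], "Q2": [0.4, 0.6]}  # P(Q=1 | S=s) for s = 0, 1

def posterior(evidence):
    """P(S | e) for evidence like {'Q1': 1}."""
    weights = []
    for s in (0, 1):
        w = P_S[s]
        for q, a in evidence.items():
            p1 = P_Q_given_S[q][s]
            w *= p1 if a == 1 else 1 - p1
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

def entropy(pmf):
    return -sum(p * math.log(p, len(pmf)) for p in pmf if p > 0)

def pick(available, evidence):
    """Question with the highest expected entropy reduction H(S|e) - H(S|Q,e)."""
    def conditional_entropy(q):
        h = 0.0
        for a in (0, 1):
            # P(Q=a | e) by marginalizing the current posterior over S
            p_a = sum(posterior(evidence)[s] *
                      (P_Q_given_S[q][s] if a == 1 else 1 - P_Q_given_S[q][s])
                      for s in (0, 1))
            h += p_a * entropy(posterior({**evidence, q: a}))
        return h
    base = entropy(posterior(evidence))
    return max(available, key=lambda q: base - conditional_entropy(q))

print(pick({"Q1", "Q2"}, {}))    # 'Q1', the more informative question (cf. Table 1 discussion)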
264
- When no data are available for the BN training, elicitation techniques s hould
265
- beconsideredinstead.AsalreadydiscussedCNsmightofferabette rformalismto
266
- capture domain knowledge, especially by providing interval-valued pr obabilities
267
- instead of sharp values. If this is the case, a CN version of Algorithm 2 can be
268
- equivalentlyconsidered.MovingtoCNsisalmostthesame,providedt hatbounds
269
- on the entropy are used instead for decisions. Yet, the price of su ch increased
270
- realism in the elicitation is the higher complexity characterizinginferen ces based
271
- on CNs. The work in [17] offers a critical discussion of those issues, that are
272
- only partially addressed by heuristic techniques used there to appr oximate such
273
- bounds. In the next section we consider an alternative approach t o cope with
274
- CNs and adaptive TAs based on different scores used to select the q uestions.
275
- 6 Coping with the Mode
276
- Following[25], we can regardthe PMF entropy(and its conditionalver sion)used
277
- by Algorithm 2 as an example of index of qualitative variation (IQV). An IQV
278
- is just a normalized number that takes value zero for degenerate P MFs, one on
279
- uniform ones, being independent on the number of possible states ( and samples
280
- for empirical models). The closer to uniform is the PMF, the higher is t he index
281
- and vice versa.
282
- In order to bypass the computational issues related to its application with
283
- CNs and the explainability limits with both BNs and CNs, we want to consider
284
- alternative IQVs to replace entropy in Algorithm 2. Wilcox's deviation from the
285
- mode (DM) appears a sensible option. Given a PMF P(V), this corresponds to:
286
- M(V) := 1 − Σ_{v∈V} [max_{v′∈V} P(v′) − P(v)] / (|V| − 1).   (1)
287
-
288
-
289
- It is a trivial exercise to check that this is a proper IQV, with the sam e unimodal
290
- behaviour of the entropy. In terms of explainability, being a linear fu nction of
291
- the modal probability, the numerical value of the DM offers a more tr ansparent
292
- interpretation than the entropy. From a computational point of v iew, for both
293
- marginaland unconditional PMFs, both the entropy and the DM can be directly
294
- obtained from the probabilities of the singletons.
295
- The situation is different when computing the bounds of these quant ities
296
- with respect to a CS. The bounds of M(V) are obtained from the upper and
297
- lower probabilities of the singletons by simple algebra, i.e,
298
-   \overline{M}(V) := max_{P(V)∈K(V)} M(V) = ( |V| − max_{v′∈V} P(v′) ) / ( |V| − 1 ),   (2)
301
- and analogously with the lower probabilities for \underline{M}(V). Maximizing the entropy
302
- requires instead a non-trivial, but convex, optimization. See for instance [1] for
303
- an iterative procedure to find such maximum when coping with CSs defined by
304
- probability intervals. The situation is even more critical for the minimization,
305
- that has been proved to be NP-hard in [26].
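- For credal sets specified by probability intervals, the bounds of the DM can be
- sketched as follows. Since Equation (1) depends on the PMF only through its
- modal probability, and decreasingly so, the two bounds reduce to two small
- linear programs over the interval constraints. This is an editorial sketch under
- that interval assumption, not the paper's implementation:
-     import numpy as np
-     from scipy.optimize import linprog
-     def dm_of_modal_probability(p_max, k):
-         # Eq. (1) rewritten as a function of the modal probability alone.
-         return 1.0 - (k * p_max - 1.0) / (k - 1.0)
-     def dm_bounds(lower, upper):
-         # lower[v] <= P(v) <= upper[v], sum_v P(v) = 1 (reachable intervals assumed).
-         k = len(lower)
-         box = list(zip(lower, upper))
-         # Smallest achievable modal probability: minimise t subject to P(v) <= t.
-         c = np.zeros(k + 1); c[-1] = 1.0
-         a_ub = np.hstack([np.eye(k), -np.ones((k, 1))])
-         res = linprog(c, A_ub=a_ub, b_ub=np.zeros(k),
-                       A_eq=np.hstack([np.ones((1, k)), [[0.0]]]), b_eq=[1.0],
-                       bounds=box + [(0.0, 1.0)])
-         p_max_min = res.x[-1]
-         # Largest achievable modal probability: maximise each P(v) in turn.
-         p_max_max = 0.0
-         for v in range(k):
-             c = np.zeros(k); c[v] = -1.0           # linprog minimises, so negate
-             res = linprog(c, A_eq=np.ones((1, k)), b_eq=[1.0], bounds=box)
-             p_max_max = max(p_max_max, res.x[v])
-         return dm_of_modal_probability(p_max_max, k), dm_of_modal_probability(p_max_min, k)
-     print(dm_bounds([0.2, 0.2, 0.2], [0.5, 0.5, 0.5]))    # (0.75, 1.0)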
306
- The optimization becomes even more challenging for conditional entropies,
307
- which basically are mixtures of conditional entropies based on imprecise weights.
308
- Consequently, in [17], only inner approximations for the upper bound have been
309
- derived. The situation is different for conditional DMs. The following result offers
310
- a feasible approach in a simplified setup, to be later extended to the general case.
311
- Theorem 1. Under the setup of Section 5, consider a CN with a single skill
- S and a single question Q, that is a child of S. Let K(S) and K(Q|S) be the
- CCPTs of such a CN. Let also Q = {q_1, ..., q_n} and S = {s_1, ..., s_m}. The upper
- conditional DM, i.e.,
-   \overline{M}(S|Q) := |S| − max_{P(S)∈K(S), P(Q|S)∈K(Q|S)} Σ_{i=1,...,n} [ max_{j=1,...,m} P(s_j|q_i) ] P(q_i),   (3)
- whose normalizing denominator was omitted for the sake of brevity, is such that:
-   \overline{M}(S|Q) := m − max_{ĵ_i=1,...,m; i=1,...,n} Ω(ĵ_1, ..., ĵ_n),   (4)
326
- where Ω(ĵ_1, ..., ĵ_n) is the solution of the following linear programming task below:
-   max  Σ_{i} x_{i ĵ_i}
-   s.t. Σ_{i,j} x_{ij} = 1                                       (5)
-        x_{ij} ≥ 0                              ∀ i,j            (6)
-        Σ_i x_{ij} ≥ \underline{P}(s_j)          ∀ j             (7)
-        Σ_i x_{ij} ≤ \overline{P}(s_j)           ∀ j             (8)
-        \underline{P}(q_i|s_j) Σ_k x_{kj} ≤ x_{ij}   ∀ i,j       (9)
-        \overline{P}(q_i|s_j) Σ_k x_{kj} ≥ x_{ij}    ∀ i,j       (10)
-        x_{i ĵ_i} ≥ x_{ij}                      ∀ i,j            (11)
- Note that the bounds on the sums over the indexes and on the universal quanti-
- fiers are also omitted for the sake of brevity.
344
- Proof. Equation (3) rewrites as:
-   \overline{M}(S|Q) = m − max_{P(S)∈K(S), P(Q|S)∈K(Q|S)} Σ_{i=1}^{n} [ max_{j=1,...,m} P(s_j) P(q_i|s_j) ].   (12)
- Let us define the variables of such constrained optimization task as:
-   x_{ij} := P(s_j) · P(q_i|s_j),   (13)
- for each i = 1,...,n and j = 1,...,m. Let us show how the CCPT constraints
- can be easily reformulated with respect to such new variables by simply noticing
- that x_{ij} = P(s_j, q_i), and hence P(s_j) = Σ_i x_{ij} and P(q_i|s_j) = x_{ij} / (Σ_k x_{kj}).
- Consequently, the interval constraints on P(S) correspond to the linear con-
- straints in Equations (7) and (8). Similarly, for P(Q|S), we obtain:
-   \underline{P}(q_i|s_j) ≤ x_{ij} / Σ_k x_{kj} ≤ \overline{P}(q_i|s_j),   (14)
- that easily gives the linear constraints in Equations (9) and (10). The non-
- negativity of the probabilities corresponds to Equation (6), while Equation (5)
- gives the normalization of P(S), and the normalization of P(Q|S) holds by con-
- struction. Equation (12) rewrites therefore as:
-   \overline{M}(S|Q) = m − max_{{x_{ij}}∈Γ} Σ_i max_j x_{ij},   (15)
- where Γ denotes the linear constraints in Equations (5)-(10). If we set
-   ĵ_i := argmax_j x_{ij},   (16)
- Equation (15) rewrites as
-   \overline{M}(S|Q) = m − max_{{x_{ij}}∈Γ′} Σ_i x_{i ĵ_i},   (17)
- where Γ′ are the constraints in Γ with the additional (linear) constraints in
- Equation (11), that are implementing Equation (16).
- The optimization on the right-hand side of Equation (17) is not a linear
- programming task, as the values of the indexes ĵ_i cannot be decided in advance,
- being potentially different for different assignments of the optimization variables
- consistent with the constraints in Γ. Yet, we might address such optimization as a
- brute-force task with respect to all the possible assignations of the indexes ĵ_i. This
- is exactly what is done by Equation (4), where all the m^n possible assignations
- are considered. This proves the thesis. ⊓⊔
387
- An analogous result with the linear programming tasks minimizing the same
388
- objective functions with exactly the same constraints allows to compute \underline{M}(S|Q).
389
- The overall complexity is clearly O(m^n) with n := |Q|. This means quadratic
390
- complexity for any test where only the difference between a wrong and a right
391
- answer is considered from an elicitation perspective, and tractable computations
392
- provided that the number of possible answers to the same question we distinguish
393
- is bounded by a small constant. Coping with multiple answers becomes trivial
394
- by means of the results in [3], that allow to merge multiple observed children
395
- into a single one. Finally, the case of multiple skills might be similarly considered
396
- by using the marginal bounds of the single skills in Equations (7) and (8).
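- To make the brute-force scheme of Theorem 1 concrete, the following Python
- sketch enumerates the m^n index assignments and solves the linear program of
- Equations (5)-(11) with scipy. It is an editorial sketch of the scheme as stated
- above, taking the interval bounds of the CCPTs as inputs; it is not the Java
- code released with the paper:
-     import itertools
-     import numpy as np
-     from scipy.optimize import linprog
-     def omega(j_hat, p_s_low, p_s_up, p_q_low, p_q_up):
-         # LP of Eqs. (5)-(11) for a fixed assignment j_hat = (j_1, ..., j_n).
-         # Variables x[i, j] ~ P(s_j, q_i), flattened to x[i * m + j].
-         n, m = p_q_low.shape                  # p_q_low[i, j] = lower P(q_i | s_j)
-         idx = lambda i, j: i * m + j
-         c = np.zeros(n * m)
-         for i, j in enumerate(j_hat):
-             c[idx(i, j)] = -1.0               # maximise sum_i x_{i j_i} (linprog minimises)
-         rows, rhs = [], []
-         for j in range(m):                    # (7)-(8): interval bounds on P(s_j)
-             row = np.zeros(n * m); row[[idx(i, j) for i in range(n)]] = 1.0
-             rows += [-row, row]; rhs += [-p_s_low[j], p_s_up[j]]
-         for i in range(n):                    # (9)-(10): interval bounds on P(q_i | s_j)
-             for j in range(m):
-                 lo = np.zeros(n * m); lo[[idx(k, j) for k in range(n)]] = p_q_low[i, j]
-                 lo[idx(i, j)] -= 1.0
-                 up = np.zeros(n * m); up[[idx(k, j) for k in range(n)]] = -p_q_up[i, j]
-                 up[idx(i, j)] += 1.0
-                 rows += [lo, up]; rhs += [0.0, 0.0]
-         for i, j_star in enumerate(j_hat):    # (11): x_{i j_i} >= x_ij
-             for j in range(m):
-                 if j != j_star:
-                     row = np.zeros(n * m); row[idx(i, j)] = 1.0; row[idx(i, j_star)] = -1.0
-                     rows.append(row); rhs.append(0.0)
-         res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
-                       A_eq=np.ones((1, n * m)), b_eq=[1.0],   # (5); (6) via bounds
-                       bounds=(0.0, 1.0))
-         return -res.fun if res.success else -np.inf
-     def upper_conditional_dm(p_s_low, p_s_up, p_q_low, p_q_up):
-         # Brute force of Eq. (4): one LP per assignment of the indexes j_i.
-         n, m = p_q_low.shape
-         best = max(omega(j, p_s_low, p_s_up, p_q_low, p_q_up)
-                    for j in itertools.product(range(m), repeat=n))
-         return m - best          # normalising denominator omitted, as in Eq. (4)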
397
- 7 Experiments
398
- In this section we validate the ideas outlined in the previous section, in order to
399
- check whether or not the DM can be used for TAs as a sensible alternative to
400
- information-theoretic scores such as the entropy. In the BN context, this is simply
401
- achieved by computing the necessary updated probabilities, while Theorem 1 is
402
- used instead for CNs.
403
- 7.1 Single-Skill Experiments on Synthetic Data
404
- For a very first validation of our approach, we consider a simple setup made of a
- single Boolean skill S and a repository with 18 Boolean questions based on nine
- different parametrizations (two questions per parametrization). In such a BN, the
- CPT of a question can be parametrized by two numbers. E.g., in the example
- in Figure 1, we used the probabilities of correctly answering the question given
- that the skill is present or not, i.e., P(Q=1|S=1) and P(Q=1|S=0). A
- more interpretable parametrization can be obtained as follows:
-   δ := 1 − (1/2) [ P(Q=1|S=1) + P(Q=1|S=0) ],   (18)
-   κ := P(Q=1|S=1) − P(Q=1|S=0).   (19)
- Note that P(Q=1|S=1) > P(Q=1|S=0) is an obvious rationality con-
- straint for questions, otherwise having the skill would make it less likely to answer
- properly to a question. Both parameters are therefore non-negative. Parameter
- δ, corresponding to the (arithmetic) average of the probability of a wrong an-
- swer over the different skill values, can be regarded as a normalized index of
- the question difficulty. E.g., in Figure 1, Q1 (δ = 0.4) is less difficult than Q2
- (δ = 0.5). Parameter κ can instead be regarded as a descriptor of the differ-
- ence between the conditional PMFs associated with the different skill values. In the
- most extreme case κ = 1, the CPT P(Q|S) is diagonal, implementing an iden-
- tity mapping between the skill and the question. We therefore regard κ as an
- indicator of the discriminative power of the question. In our tests, for the BN
- quantification, we consider the nine possible parametrizations corresponding to
- (δ, κ) ∈ {0.4, 0.5, 0.6}^2. For P(S) we use instead a uniform quantification. For the
- CN approach we perturb all the BN parameters with ε = ±0.05, thus obtaining
- a CN quantification. A group of 1024 simulated students, half of them having
- S = 0 and half with S = 1, is used for simulations. The student answers are
- sampled from the CPT of the asked question on the basis of the student profile.
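- For instance, the CPTs of the repository can be generated from (δ, κ) by
- inverting Equations (18)-(19), and student answers sampled accordingly; a
- minimal sketch, with illustrative names only:
-     import random
-     def question_cpt(delta, kappa):
-         # Invert Eqs. (18)-(19): recover P(Q=1|S=0) and P(Q=1|S=1)
-         # from the difficulty delta and the discriminative power kappa.
-         p_right = (1.0 - delta) + kappa / 2.0   # P(Q=1 | S=1)
-         p_wrong = (1.0 - delta) - kappa / 2.0   # P(Q=1 | S=0)
-         return p_wrong, p_right
-     def simulate_answer(cpt, skill):
-         # Sample a Boolean answer from P(Q | S=skill) for a simulated student.
-         return int(random.random() < cpt[skill])
-     # The nine parametrizations used for the BN quantification.
-     repository = [question_cpt(d, k) for d in (0.4, 0.5, 0.6) for k in (0.4, 0.5, 0.6)]
-     print(repository[0], simulate_answer(repository[0], skill=1))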
431
- Figure 2 (left) depicts the accuracy of the BN and CN approaches based on
433
- both the entropy and the DM scores. For credal models, decisions are based on
434
- the mid-point between the lower and the upper probability, while lower entropies
435
- and conditional entropies are used. We notably see all the adaptive approaches
436
- outperforming a non-adaptive, random, choice of the questions. To better in-
437
- vestigate the strong overlap between these trajectories, in Figure 2 (right) we
438
- compute the Brier score and we might observe the strong similarity between
439
- DM and entropy approaches in both the Bayesian and the credal case, with the
440
- credal approaches slightly outperforming the Bayesian ones.
440
- 7.2 Multi-Skill Experiments on Real Data
441
- For a validation on real data, we consider an online German language placement
442
- test (see also [17]). Four different Boolean skills associated with different abilities
443
- (vocabulary, communication, listening and reading) are considered and modeled
444
- by a chain-shaped graph, for which BN and CN quantifications are already avail-
445
- able. A repository of 64 Boolean questions, 16 for each skill, with four different
446
- levels of difficulty and discriminative power, has been used.
447
- Experiments have been achieved by means of the CREMA library for credal
448
- networks [14].7 The Java code used for the simulations is available together with
449
- the Python scripts used to analyze the results and the model specifications.8
450
- 7 github.com/IDSIA/crema
451
- 8 github.com/IDSIA/adaptive-tests
452
- [Figure 2 plots: accuracy (left) and Brier distance (right) versus the number of
- questions (0-20), for Credal Entropy, Credal Mode, Random, Bayesian Entropy
- and Bayesian Mode.]
- Fig. 2. Accuracy (left) and Brier distance (right) of TAs for a single-skill BN/CN
461
- Performances are evaluated as for the previous model, the only difference be-
462
- ing that here the accuracy is aggregated by average over the separate accuracies
463
- for the four skills. The observed behaviour, depicted in Figure 3, is analogous
464
- to that of the single skill case: entropy-based and mode-based scores are provid-
465
- ing similar results, with the credal approach typically leading to more accurate
466
- evaluations (or evaluations of the same quality with fewer questions).
467
- [Figure 3 plot: aggregated accuracy versus the number of questions (0-60), for
- Credal Entropy, Credal Mode, Random, Bayesian Entropy and Bayesian Mode.]
- Fig. 3. Aggregated Accuracy for a multi-skill TA
474
- 8 Outlooks and Conclusions
475
- A new score for adaptive testing in Bayesian and credal networks has been
476
- proposed. Our proposal is based on indexes of qualitative variation, being in
477
- particular focused on the modal probability for their explainability features. An
478
- algorithm to evaluate this quantity in the credal case is derived. Our experi-
479
- ments show that moving to these scores does not really affect the quality of the
480
- selection process. Besides a deeper experimental validation, a necessary future
481
- work consists in the derivation of simpler elicitation strategies for these models
482
- in order to promote their application to real-world testing environments.
483
- References
- 1. Abellan, J., Moral, S.: Maximum of entropy for credal sets. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 11(05), 587-597 (2003)
- 2. Almond, R.G., Mislevy, R.J.: Graphical models and computerized adaptive testing. Applied Psychological Measurement 23(3), 223-237 (1999)
- 3. Antonucci, A., Piatti, A.: Modeling unreliable observations in Bayesian networks by credal networks. In: Godo, L., Pugliese, A. (eds.) Scalable Uncertainty Management, Third International Conference, SUM 2009. Proceedings. Lecture Notes in Computer Science, vol. 5785, pp. 28-39. Springer (2009)
- 4. Antonucci, A., de Campos, C.P., Huber, D., Zaffalon, M.: Approximating credal network inferences by linear programming. In: van der Gaag, L.C. (ed.) Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty. Lecture Notes in Artificial Intelligence, vol. 7958, pp. 13-25. Springer, Utrecht, The Netherlands (2013)
- 5. Bachrach, Y., Graepel, T., Minka, T., Guiver, J.: How to grade a test without knowing the answers - a Bayesian graphical model for adaptive crowdsourcing and aptitude testing. arXiv preprint arXiv:1206.6386 (2012)
- 6. Badaracco, M., Martínez, L.: A fuzzy linguistic algorithm for adaptive test in intelligent tutoring system based on competences. Expert Systems with Applications 40(8), 3073-3086 (2013)
- 7. Badran, M.E.K., Abdo, J.B., Al Jurdi, W., Demerjian, J.: Adaptive serendipity for recommender systems: Let it find you. In: ICAART (2). pp. 739-745 (2019)
- 8. Bolt, J.H., De Bock, J., Renooij, S.: Exploiting Bayesian network sensitivity functions for inference in credal networks. In: Proceedings of the Twenty-Second European Conference on Artificial Intelligence (ECAI). vol. 285, pp. 646-654. IOS Press (2016)
- 9. Chen, S.J., Choi, A., Darwiche, A.: Computer adaptive testing using the same-decision probability. In: BMA@UAI. pp. 34-43 (2015)
- 10. Conati, C., Gertner, A.S., VanLehn, K., Druzdzel, M.J.: On-line student modeling for coached problem solving using Bayesian networks. In: User Modeling. pp. 231-242. Springer (1997)
- 11. Cozman, F.G.: Credal networks. Artificial Intelligence 120(2), 199-233 (2000)
- 12. Embretson, S.E., Reise, S.P.: Item response theory. Psychology Press (2013)
- 13. Hájek, A., Smithson, M.: Rationality and indeterminate probabilities. Synthese 187(1), 33-48 (2012)
- 14. Huber, D., Cabañas, R., Antonucci, A., Zaffalon, M.: CREMA: a Java library for credal network inference. In: Jaeger, M., Nielsen, T. (eds.) Proceedings of the 10th International Conference on Probabilistic Graphical Models (PGM 2020). Proceedings of Machine Learning Research, PMLR, Aalborg, Denmark (2020)
- 15. Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press (2009)
- 16. Laitusis, C.C., Morgan, D.L., Bridgeman, B., Zanna, J., Stone, E.: Examination of fatigue effects from extended-time accommodations on the SAT reasoning test. ETS Research Report Series 2007(2), i-13 (2007)
- 17. Mangili, F., Bonesana, C., Antonucci, A.: Reliable knowledge-based adaptive tests by credal networks. In: Antonucci, A., Cholvy, L., Papini, O. (eds.) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2017. Lecture Notes in Computer Science, vol. 10369, pp. 282-291. Springer, Cham (2017)
- 18. Marchetti, S., Antonucci, A.: Reliable uncertain evidence modeling in Bayesian networks by credal networks. In: Brawner, K.W., Rus, V. (eds.) Proceedings of the Thirty-First International Florida Artificial Intelligence Research Society Conference (FLAIRS-31). pp. 513-518. AAAI Press, Melbourne, Florida, USA (2018)
- 19. Mauá, D.D., De Campos, C.P., Benavoli, A., Antonucci, A.: Probabilistic inference in credal networks: new complexity results. Journal of Artificial Intelligence Research 50, 603-637 (2014)
- 20. Piatti, A., Antonucci, A., Zaffalon, M.: Building knowledge-based expert systems by credal networks: a tutorial. In: Baswell, A. (ed.) Advances in Mathematics Research, vol. 11, chap. 2. Nova Science Publishers, New York (2010)
- 21. Plajner, M., Vomlel, J.: Monotonicity in practice of adaptive testing. arXiv preprint arXiv:2009.06981 (2020)
- 22. Sawatzky, R., Ratner, P.A., Kopec, J.A., Wu, A.D., Zumbo, B.D.: The accuracy of computerized adaptive testing in heterogeneous populations: A mixture item-response theory analysis. PloS One 11(3), e0150563 (2016)
- 23. Vomlel, J.: Bayesian networks in educational testing. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12(supp01), 83-100 (2004)
- 24. Vomlel, J.: Building adaptive tests using Bayesian networks. Kybernetika 40(3), 333-348 (2004)
- 25. Wilcox, A.R.: Indices of qualitative variation and political measurement. Western Political Quarterly 26(2), 325-343 (1973)
- 26. Xiang, G., Kosheleva, O., Klir, G.J.: Estimating information amount under interval uncertainty: algorithmic solvability and computational complexity. Tech. Rep. 158, Departmental Technical Reports (CS) (2006)
txt/2106.07286.txt DELETED
@@ -1,970 +0,0 @@
1
- This paper has been accepted for publication at the
2
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, 2021. ©IEEE
3
- Time Lens: Event-based Video Frame Interpolation
4
- Stepan Tulyakov*;1Daniel Gehrig;2Stamatios Georgoulis1Julius Erbach1
5
- Mathias Gehrig2Yuanyou Li1Davide Scaramuzza2
6
- 1Huawei Technologies, Zurich Research Center
7
- 2Dept. of Informatics, Univ. of Zurich and Dept. of Neuroinformatics, Univ. of Zurich and ETH Zurich
8
- Figure 1: Qualitative results comparing our proposed method, Time Lens, with DAIN [3] and BMBC [29]. Our method can
9
- interpolate frames in highly-dynamic scenes, such as while spinning an umbrella (top row) and bursting a balloon (bottom
10
- row). It does this by combining events (b) and frames (a).
11
- Abstract
12
- State-of-the-art frame interpolation methods generate
13
- intermediate frames by inferring object motions in the
14
- image from consecutive key-frames. In the absence of
15
- additional information, first-order approximations, i.e.
16
- optical flow, must be used, but this choice restricts the
17
- types of motions that can be modeled, leading to errors
18
- in highly dynamic scenarios. Event cameras are novel
19
- sensors that address this limitation by providing auxiliary
20
- visual information in the blind-time between frames. They
21
- asynchronously measure per-pixel brightness changes and
22
- do this with high temporal resolution and low latency.
23
- Event-based frame interpolation methods typically adopt a
24
- synthesis-based approach, where predicted frame residuals
25
- are directly applied to the key-frames. However, while these
26
- approaches can capture non-linear motions they suffer
27
- from ghosting and perform poorly in low-texture regions
28
- with few events. Thus, synthesis-based and flow-based
29
- approaches are complementary. In this work, we introduce
30
- *indicates equal contribution
- Time Lens, a novel method that leverages the advantages of
31
- both. We extensively evaluate our method on three synthetic
32
- and two real benchmarks where we show an up to 5.21
33
- dB improvement in terms of PSNR over state-of-the-art
34
- frame-based and event-based methods. Finally, we release
35
- a new large-scale dataset in highly dynamic scenarios,
36
- aimed at pushing the limits of existing methods.
37
- Multimedia Material
38
- The High-Speed Event and RGB (HS-ERGB) dataset
39
- and evaluation code can be found at: http://rpg.ifi.
40
- uzh.ch/timelens
41
- 1. Introduction
42
- Many things in real life can happen in the blink of an
43
- eye. A hummingbird flapping its wings, a cheetah accel-
44
- erating towards its prey, a tricky stunt with the skateboard,
45
- or even a baby taking its first steps. Capturing these mo-
46
- ments as high-resolution videos with high frame rates typically requires professional high-speed cameras that are in-
47
- accessible to casual users. Modern mobile device producers
48
- have tried to incorporate more affordable sensors with sim-
49
- ilar functionalities into their systems, but they still suffer
50
- from the large memory requirements and high power con-
51
- sumption associated with these sensors.
52
- Video Frame Interpolation (VFI) addresses this problem,
53
- by converting videos with moderate frame rates into high frame
54
- rate videos in post-processing. In theory, any number of
55
- new frames can be generated between two keyframes of
56
- the input video. Therefore, VFI is an important problem
57
- in video processing with many applications, ranging from
58
- super slow motion [10] to video compression [42].
59
- Frame-based interpolation approaches rely solely
60
- on input from a conventional frame-based camera that
61
- records frames synchronously and at a fixed rate. There are
62
- several classes of such methods that we describe below.
63
- Warping-based approaches [20, 10, 44, 21, 29] combine
64
- optical flow estimation [8, 16, 36] with image warping [9],
65
- to generate intermediate frames in-between two consecutive
66
- key frames. More specifically, under the assumptions of lin-
67
- ear motion and brightness constancy between frames, these
68
- works compute optical flow and warp the input keyframe(s)
69
- to the target frame, while leveraging concepts, like contex-
70
- tual information [20], visibility maps [10], spatial trans-
71
- former networks [44], forward warping [21], or dynamic
72
- blending filters [29], to improve the results. While most of
73
- these approaches assume linear motion, some recent works
74
- assume quadratic [43] or cubic [5] motions. Although these
75
- methods can address non-linear motions, they are still lim-
76
- ited by their order, failing to capture arbitrary motion.
77
- Kernel-based approaches [22, 23] avoid the explicit mo-
78
- tion estimation and warping stages of warping-based ap-
79
- proaches. Instead, they model VFI as local convolution over
80
- the input keyframes, where the convolutional kernel is esti-
81
- mated from the keyframes. This approach is more robust to
82
- motion blur and light changes. Alternatively, phase-based
83
- approaches [18] pose VFI as a phase shift estimation prob-
84
- lem, where a neural network decoder directly estimates the
85
- phase decomposition of the intermediate frame. However,
86
- while these methods can in theory model arbitrary motion,
87
- in practice they do not scale to large motions due to the lo-
88
- cality of the convolution kernels.
89
- In general, all frame-based approaches assume simplis-
90
- tic motion models (e.g. linear) due to the absence of vi-
91
- sual information in the blind-time between frames, which
92
- poses a fundamental limitation of purely frame-based VFI
93
- approaches. In particular, the simplifying assumptions rely
94
- on brightness and appearance constancy between frames,
95
- which limits their applicability in highly dynamic scenar-
96
- ios such as ( i) for non-linear motions between the input
97
- keyframes, ( ii) when there are changes in illumination or
98
- motion blur, and (iii) non-rigid motions and new objects appearing in the scene between keyframes.
99
- Multi-camera approaches. To overcome this limita-
100
- tion, some works seek to combine inputs from several
101
- frame-based cameras with different spatio-temporal trade-
102
- offs. For example, [1] combined low-resolution video with
103
- high resolution still images, whereas [25] fused a low-
104
- resolution high frame rate video with a high resolution low
105
- frame rate video. Both approaches can recover the miss-
106
- ing visual information necessary to reconstruct true object
107
- motions, but this comes at the cost of a bulkier form factor,
108
- higher power consumption, and a larger memory footprint.
109
- Event-based approaches. Compared to standard frame-
110
- based cameras, event cameras [14, 4] do not incur the afore-
111
- mentioned costs. They are novel sensors that only report the
112
- per-pixel intensity changes, as opposed to the full intensity
113
- images and do this with high temporal resolution and low
114
- latency on the order of microseconds. The resulting output
115
- is an asynchronous stream of binary “events” which can be
116
- considered a compressed representation of the true visual
117
- signal. These properties render them useful for VFI under
118
- highly dynamic scenarios (e.g. high-speed non-linear mo-
119
- tion, or challenging illumination).
120
- Events-only approaches reconstruct high frame rate
121
- videos directly from the stream of incoming events using
122
- GANs [38], RNNs [32, 33, 34], or even self-supervised
123
- CNNs [28], and can be thought of as a proxy to the VFI
124
- task. However, since the integration of intensity gradients
125
- into an intensity frame is an ill-posed problem, the global
126
- contrast of the interpolated frames is usually miscalculated.
127
- Moreover, as in event cameras intensity edges are only ex-
128
- posed when they move, the interpolation results are also de-
129
- pendent on the motion.
130
- Events-plus-frames approaches. As certain event cam-
131
- eras such as the Dynamic and Active VIsion Sensor
132
- (DA VIS) [4] can simultaneously output the event stream and
133
- intensity images – the latter at low frame rates and prone
134
- to the same issues as frame-based cameras (e.g. motion
135
- blur) – several works [26, 41, 11, 37] use both streams of
136
- information. Typically, these works tackle VFI in conjunc-
137
- tion with de-blurring, de-noising, super-resolution, or other
138
- relevant tasks. They synthesize intermediate frames by
139
- accumulating temporal brightness changes, represented by
140
- events, from the input keyframes and applying them to the
141
- key frames. While these methods can handle illumination
142
- changes and non-linear motion they still perform poorly
143
- compared to the frame-based methods (please see § 3.2),
144
- as due to the inherent instability of the contrast threshold
145
- and sensor noise, not all brightness changes are accurately
146
- registered as events.
147
- Our contributions are as follows
148
- 1. We address the limitations of all aforementioned
149
- methods by introducing a CNN framework, named
150
- Figure 2: Proposed event-based VFI approach.
- Time Lens, that marries the advantages of warping-
151
- and synthesis-based interpolation approaches. In our
152
- framework, we use a synthesis-based approach to
153
- ground and refine results of high-quality warping-
154
- based approach and provide the ability to handle illu-
155
- mination changes and new objects appearing between
156
- keyframes (refer Fig. 7),
157
- 2. We introduce a new warping-based interpolation ap-
158
- proach that estimates motion from events, rather than
159
- frames and thus has several advantages: it is more ro-
160
- bust to motion blur and can estimate non-linear mo-
161
- tion between frames. Moreover, the proposed method
162
- provides a higher quality interpolation compared to
163
- synthesis-based methods that use events when event
164
- information is not sufficient or noisy.
165
- 3. We empirically show that the proposed Time Lens
166
- greatly outperforms state-of-the-art frame-based and
167
- event-based methods, published over recent months,
168
- on three synthetic and two real benchmarks where we
169
- show an up to 5.21 dB improvement in terms of PSNR.
170
- 2. Method
171
- Problem formulation. Let us assume an event-based
172
- VFI setting, where we are given as input the left I_0 and right I_1 RGB key
- frames, as well as the left E_{0→τ} and right E_{τ→1} event sequences, and we
- aim to interpolate (one or more) new frames Î_τ at random timesteps τ in-between
- the key frames. Note that the event sequences (E_{0→τ}, E_{τ→1}) contain all
- asynchronous events that are triggered from the moment the respective (left I_0
- or right I_1) key RGB frame is synchronously sampled, till the timestep τ at
- which we want to interpolate a new frame Î_τ. Fig. 2 illustrates the
181
- proposed event-based VFI setting.
182
- System overview. To tackle the problem under consid-
183
- eration we propose a learning-based framework, namely
184
- Time Lens , that consists of four dedicated modules that
185
- serve complementary interpolation schemes, i.e. warping-
186
- based and synthesis-based interpolation. In particular, (1)
187
- thewarping-based interpolation module estimates a new
188
- frame by warping the boundary RGB keyframes using op-
189
- tical flow estimated from the respective event sequence; (2)
190
- thewarping refinement module aims to improve this esti-
191
- mate by computing residual flow; (3) the interpolation by
192
- synthesis module estimates a new frame by directly fusing
193
- the input information from the boundary keyframes and the
194
- event sequences; finally (4) the attention-based averaging
195
- module aims to optimally combine the warping-based and synthesis-based results. In doing so, Time Lens marries
196
- the advantages of warping- and synthesis-based interpola-
197
- tion techniques, allowing us to generate new frames with
198
- color and high textural details while handling non-linear
199
- motion, light changes, and motion blur. The workflow of
200
- our method is shown in Fig. 3a.
201
- All modules of the proposed method use the same back-
202
- bone architecture, which is an hourglass network with skip
203
- connections between the contracting and expanding parts,
204
- similar to [10]. The backbone architecture is described
205
- in more detail in the supplementary materials. Regarding
206
- the learning representation [7] used to encode the event
207
- sequences, all modules use the voxel grid representation.
208
- Specifically, for an event sequence E_{0→end} we compute a voxel grid V_{0→end}
- following the procedure described in [46]. In the following paragraphs, we analyze
- each module and its scope within the overall framework.
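- The exact voxel-grid construction follows [46]; purely as an illustration, a common
- variant of this representation (bilinear accumulation of event polarities over a
- fixed number of temporal bins) can be sketched as below. This is an editorial
- sketch, not the authors' implementation:
-     import numpy as np
-     def events_to_voxel_grid(events, num_bins, height, width):
-         # events: (N, 4) array of (t, x, y, polarity in {-1, +1}), sorted by t.
-         # Accumulates polarities into num_bins temporal slices with bilinear
-         # weighting in time (a common voxel-grid variant, see e.g. [46]).
-         voxel = np.zeros((num_bins, height, width), dtype=np.float32)
-         t = events[:, 0]
-         x, y, p = events[:, 1].astype(int), events[:, 2].astype(int), events[:, 3]
-         t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
-         left = np.floor(t_norm).astype(int)
-         right = np.clip(left + 1, 0, num_bins - 1)
-         w_right = t_norm - left
-         np.add.at(voxel, (left, y, x), p * (1.0 - w_right))
-         np.add.at(voxel, (right, y, x), p * w_right)
-         return voxel
-     # Toy usage: 3 events on a 4x4 sensor, 5 temporal bins.
-     ev = np.array([[0.00, 1, 2, 1.0], [0.01, 3, 0, -1.0], [0.02, 2, 2, 1.0]])
-     print(events_to_voxel_grid(ev, 5, 4, 4).shape)   # (5, 4, 4)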
212
- Interpolation by synthesis , as shown in Fig. 3b, directly
213
- regresses a new frame Î^syn_τ given the left I_0 and right I_1
- RGB keyframes and the event sequences E_{0→τ} and E_{τ→1} re-
215
- spectively. The merits of this interpolation scheme lie in
216
- its ability to handle changes in lighting, such as water re-
217
- flections in Fig. 6 and a sudden appearance of new objects
218
- in the scene, because unlike warping-based method, it does
219
- not rely on the brightness constancy assumption. Its main
220
- drawback is the distortion of image edges and textures when
221
- event information is noisy or insufficient because of high
222
- contrast thresholds, e.g. triggered by the book in Fig. 6.
223
- Warping-based interpolation, shown in Fig. 3d, first estimates the optical flow
- F_{τ→0} and F_{τ→1} between a latent new frame Î_τ and the boundary keyframes
- I_0 and I_1, using the events E_{τ→0} and E_{τ→1} respectively. We compute
- E_{τ→0} by reversing the event sequence E_{0→τ}, as shown in Fig. 4. Then our
- method uses the computed optical flow to warp the boundary keyframes to
- timestep τ using differentiable interpolation [9], which in turn produces two new
- frame estimates Î^warp_{0→τ} and Î^warp_{1→τ}.
234
- The major difference of our approach from the tradi-
235
- tional warping-based interpolation methods [20, 10, 21, 43],
236
- is that the latter compute optical flow between keyframes
237
- using the frames themselves and then approximate opti-
238
- cal flow between the latent middle frame and boundary
239
- by using a linear motion assumption. This approach does
240
- not work when motion between frames is non-linear and
241
- keyframes suffer from motion blur. By contrast, our ap-
242
- proach computes the optical flow from the events, and thus
243
- can naturally handle blur and non-linear motion. Although
244
- events are sparse, the resulting flow is sufficiently dense as
245
- shown in Fig. 3d, especially in textured areas with dominant
246
- motion, which is most important for interpolation.
247
- Moreover, the warping-based interpolation approach re-
248
- lying on events also works better than synthesis-based
249
- method in the scenarios when event data is noisy or not
- (a) Overview of the proposed method.
250
- (b) Interpolation by synthesis module.
251
- (c) Attention-based averaging module.
252
- (d) Warping-based interpolation module.
253
- (e) Warping refinement module.
254
- Figure 3: Structure of the proposed method. The overall workflow of the method is shown in Fig. 3a and individual modules
255
- are shown in Fig. 3d, 3b, 3e and 3c. In the figures we also show loss function that we use to train each module. We show
256
- similar modules in the same color across the figures.
257
- Figure 4: Example of an event sequence reversal.
258
- sufficient due to high contrast thresholds, e.g. the book in
259
- Fig. 6. On the down side, this method still relies on the
260
- brightness constancy assumption for optical flow estimation
261
- and thus can not handle brightness changes and new ob-
262
- jects appearing between keyframes, e.g. water reflections
263
- in Fig. 6.
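- As an illustration of the event-sequence reversal of Fig. 4, a plausible sketch is
- given below: timestamps are mirrored inside the interval and polarities are
- flipped, since a brightness increase played backwards becomes a decrease. The
- exact convention used by the authors may differ:
-     import numpy as np
-     def reverse_event_sequence(events, t_start, t_end):
-         # events: (N, 4) array of (t, x, y, polarity), sorted by t.
-         rev = events[::-1].copy()
-         rev[:, 0] = t_end + t_start - rev[:, 0]   # mirror timestamps
-         rev[:, 3] = -rev[:, 3]                    # flip polarities
-         return rev
-     ev = np.array([[0.0, 5, 5, 1.0], [0.4, 6, 5, -1.0], [1.0, 7, 5, 1.0]])
-     print(reverse_event_sequence(ev, 0.0, 1.0))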
264
- Warping refinement module computes refined interpolated frames, Î^refine_{0→τ}
- and Î^refine_{1→τ}, by estimating residual optical flow, F_{τ→0} and F_{τ→1}
- respectively, between the warping-based interpolation results, Î^warp_{0→τ} and
- Î^warp_{1→τ}, and the synthesis result Î^syn_τ. It then proceeds by warping
- Î^warp_{0→τ} and Î^warp_{1→τ} for a second time using the estimated residual
277
- optical flow, as shown in Fig. 3e. The refinement module
278
- draws inspiration from the success of optical flow and dis-
279
- parity refinement modules in [8, 27], and also by our ob-
280
- servation that the synthesis interpolation results are usually
281
- perfectly aligned with the ground-truth new frame. Besides
282
- computing residual flow, the warping refinement module
283
- also performs inpainting of the occluded areas, by filling them with values from nearby regions.
284
- Finally, the attention averaging module, shown in
285
- Fig. 3c, blends in a pixel-wise manner the results of synthesis Î^syn_τ and of
- warping-based interpolation Î^refine_{0→τ} and Î^refine_{1→τ} to
- achieve the final interpolation result Î_τ. This module leverages
291
- the complementarity of the warping- and synthesis-based
292
- interpolation methods and produces a final result, which is
293
- better than the results of both methods by 1.73 dB in PSNR
294
- as shown in Tab. 1 and illustrated in Fig. 6.
295
- A similar strategy was used in [21, 10], however these
296
- works only blended the warping-based interpolation results
297
- to fill the occluded regions, while we blend both warping
298
- and synthesis-based results, and thus can also handle light
299
- changes. We estimate the blending coefficients using an at-
300
- tention network that takes as an input the interpolation re-
301
- sults, Î^refine_{0→τ}, Î^refine_{1→τ} and Î^syn_τ, the optical flow results F_{τ→0}
- and F_{τ→1}, and the bi-linear coefficient τ, which depends on the
- position of the new frame, as a channel with constant value.
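- The pixel-wise blending itself can be illustrated with the short PyTorch sketch
- below, where per-pixel attention weights over the three candidate frames are
- normalized with a softmax; the attention network producing the logits is
- omitted, and all names are illustrative rather than the authors' implementation:
-     import torch
-     import torch.nn.functional as F
-     def attention_blend(i_refine_0, i_refine_1, i_syn, logits):
-         # Blend three candidate frames with per-pixel attention weights.
-         # logits: (B, 3, H, W) from an attention network (sketch only).
-         w = F.softmax(logits, dim=1)                                 # weights sum to 1
-         stack = torch.stack([i_refine_0, i_refine_1, i_syn], dim=1)  # (B, 3, C, H, W)
-         return (w.unsqueeze(2) * stack).sum(dim=1)                   # (B, C, H, W)
-     b, c, h, w = 1, 3, 64, 64
-     out = attention_blend(torch.rand(b, c, h, w), torch.rand(b, c, h, w),
-                           torch.rand(b, c, h, w), torch.randn(b, 3, h, w))
-     print(out.shape)   # torch.Size([1, 3, 64, 64])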
306
- 2.1. High Speed Events-RGB (HS-ERGB) dataset
307
- Due to the lack of available datasets that combine
308
- synchronized, high-resolution event cameras and standard
309
- RGB cameras, we build a hardware synchronized hybrid
310
- [Figure 5 annotations: Event Camera: Prophesee Gen4M 720p, resolution 1280 × 720;
- RGB Camera: FLIR BlackFly S, resolution 1440 × 1080; baseline 2.5 cm.]
- Figure 5: Illustration of the dual camera setup. It comprises
- a Prophesee Gen4 720p monochrome event camera (top)
- and a FLIR BlackFly S RGB camera (bottom). Both cam-
- eras are hardware synchronized with a baseline of 2.5 cm.
- sensor which combines a high-resolution event camera with
321
- a high resolution and high-speed color camera. We use this
322
- hybrid sensor to record a new large-scale dataset which we
323
- term the High-Speed Events and RGB (HS-ERGB) dataset
324
- which we use to validate our video frame interpolation ap-
325
- proach. The hybrid camera setup is illustrated in Fig. 5.
326
- It features a Prophesee Gen4 (1280 720) event camera
327
- (Fig. 5 top) and a FLIR BlackFly S global shutter RGB cam-
328
- era (1440 1080) (Fig. 5 bottom), separated by a baseline
329
- of 2.5 cm. Both cameras are hardware synchronized and
330
- share a similar field of view (FoV). We provide a detailed
331
- comparison of our setup against the commercially available
332
- DA VIS 346 [4] and the recently introduced setup [40] in
333
- the appendix.Compared to both [4] and [40] our setup is
334
- able to record events at much higher resolution (1280 720
335
- vs. 240 180 or 346 260) and standard frames at much
336
- higher framerate (225 FPS vs. 40 FPS or 35 FPS) and with
337
- a higher dynamic range (71.45 dB vs. 55 dB or 60 dB).
338
- Moreover, standard frames have a higher resolution com-
339
- pared to the DA VIS sensor (1440 1080 vs. 240 180) and
340
- provide color. The higher dynamic range and frame rate,
341
- enable us to more accurately compare event cameras with
342
- standard cameras in highly dynamic scenarios and high dy-
343
- namic range. Both cameras are hardware synchronized and
344
- aligned via rectification and global alignment. For more
345
- synchronization and alignment details see the appendix.
346
- We record data in a variety of conditions, both indoors
347
- and outdoors. Sequences were recorded outdoors with ex-
348
- posure times as low as 100µsor indoors with exposure
349
- times up to 1000 µs. The dataset features frame rates of
350
- 160 FPS, which is much higher than previous datasets, en-
351
- abling larger frame skips with ground truth color frames.
352
- The dataset includes highly dynamic close scenes with non-
353
- linear motions and far-away scenes featuring mainly cam-
354
- era ego-motion. For far-away scenes, stereo rectification is
355
- sufficient for good per-pixel alignment. For each sequence,
356
- alignment is performed depending on the depth either by
357
- stereo rectification or using feature-based homography esti-
358
- mation.To this end, we perform standard stereo calibration
359
- between RGB images and E2VID [32] reconstructions and
360
- rectify the images and events accordingly. For the dynamic
361
- close scenes, we additionally estimate a global homogra-
362
- phy by matching SIFT features [17] between these two images. Note that for feature-based alignment to work well,
363
- the camera must be static and objects of interest should only
364
- move in a fronto-parallel plane at a predetermined depth.
365
- While recording we made sure to follow these constraints.
366
- For a more detailed dataset overview we refer to the sup-
367
- plementary material.
368
- 3. Experiments
369
- All experiments in this work are done using the Py-
370
- Torch framework [30]. For training, we use the Adam
371
- optimizer[12] with standard settings, batches of size 4 and
372
- learning rate 10^-4, which we decrease by a factor of 10 ev-
373
- ery 12 epochs. We train each module for 27 epochs. For the
374
- training, we use large dataset with synthetic events gener-
375
- ated from Vimeo90k septuplet dataset [44] using the video to
376
- events method [6], based on the event simulator from [31].
377
- We train the network by adding and training modules
378
- one by one, while freezing the weights of all previously
379
- trained modules. We train modules in the following or-
380
- der: synthesis-based interpolation, warping-based interpo-
381
- lation, warping refinement, and attention averaging mod-
382
- ules. We adopted this training because end-to-end training
383
- from scratch does not converge, and fine-tuning of the en-
384
- tire network after pretraining only marginally improved the
385
- results. We supervise our network with perceptual [45] and
386
- L1losses as shown in Fig. 3b, 3d, 3e and 3c. We fine-tune
387
- our network on real data module-by-module in the order
388
- of training. To measure the quality of interpolated images
389
- we use structural similarity (SSIM) [39] and peak signal to
390
- noise ratio (PSNR) metrics.
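- For reference, PSNR follows the standard definition sketched below (SSIM is
- available in common libraries such as scikit-image); this is a generic sketch,
- not the evaluation code used in the paper:
-     import numpy as np
-     def psnr(pred, target, max_val=1.0):
-         # Peak signal-to-noise ratio in dB between two images in [0, max_val].
-         mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
-         return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
-     print(psnr(np.full((8, 8), 0.5), np.full((8, 8), 0.6)))   # 20 dB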
391
- Note, that the computational complexity of our interpo-
392
- lation method is among the best: on our machine for image
393
- resolutions of 640480, a single interpolation on the GPU
394
- takes 878 ms for DAIN [3], 404 ms for BMBC [29], 138 ms
395
- for ours, 84 ms for RRIN [13], 73 ms for Super SloMo [10]
396
- and 33 ms for LEDVDI [15] methods.
397
- 3.1. Ablation study
398
- To study the contribution of every module of the pro-
399
- posed method to the final interpolation, we investigate the
400
- interpolation quality after each module in Fig. 3a, and re-
401
- port their results in Tab. 1. The table shows two notable re-
402
- sults. First, it shows that adding a warping refinement block
403
- after the simple warping block significantly improves the
404
- interpolation result. Second, it shows that by attention aver-
405
- aging synthesis-based and warping-based results, the inter-
406
- polations are improved by 1.7 dB in terms of PSNR. This
407
- is because the attention averaging module combines the ad-
408
- vantages of both methods. To highlight this further, we il-
409
- lustrate example reconstructions from these two modules in
410
- Fig. 6. As can be seen, the warping-based module excels at
411
- reconstructing textures in non-occluded areas (fourth col-
412
- umn) while the synthesis module performs better in regionswith difficult lighting conditions (fifth column). The atten-
413
- tion module successfully combines the best parts of both
414
- modules (first column).
415
- Figure 6: Complementarity of warping- and synthesis-
416
- based interpolation.
417
- Table 1: Quality of interpolation after each module on
418
- Vimeo90k (denoising) validation set. For SSIM and PSNR
419
- we show mean and one standard deviation. The best result
420
- is highlighted.
421
- Module PSNR SSIM
422
- Warping interpolation 26.68 3.68 0.926 0.041
423
- Interpolation by synthesis 34.10 3.98 0.964 0.029
424
- Warping refinement 33.02 3.76 0.963 0.026
425
- Attention averaging (ours) 35.83 3.70 0.976 0.019
426
- 3.2. Benchmarking
427
- Synthetic datasets. We compare the proposed
428
- method, which we call Time Lens , to four state-of-the-art
429
- frame-based interpolation methods DAIN [3],RRIN [13],
430
- BMBC [29], SuperSloMo [10], event-based video recon-
431
- struction method E2VID [33] and two event and frame-
432
- based methods EDI [26] and LEDVDI [15] on pop-
433
- ular video interpolation benchmark datasets, such as
434
- Vimeo90k (interpolation) [44], Middlebury [2]. During
435
- the evaluation, we take original video sequence, skip 1
436
- or 3 frames respectively, reconstruct them using interpola-
437
- tion method and compare to ground truth skipped frames.
438
- Events for event-based methods we simulate using [6] from
439
- the skipped frames. We do not fine-tune the methods for
440
- each dataset but simply use pre-trained models provided by
441
- the authors. We summarise the results in Tab. 2.
442
- As we can see, the proposed method outperforms other
443
- method across datasets in terms of average PSNR (up to
444
- 8.82 dB improvement) and SSIM scores (up to 0.192 im-
445
- provement). As before these improvements stem from the
446
- use of auxiliary events during the prediction stage which
447
- allow our method to perform accurate frame interpolation,
448
- event for very large non-linear motions. Also, it has signif-
449
- icantly lower standard deviation of the PSNR (2.53 dB vs.
450
- 4.96 dB) and SSIM (0.025 vs. 0.112) scores, which sug-
451
- gests more consistent performance across examples. Also,we can see that PSNR and SSIM scores of the proposed
452
- method degrades to much lesser degree than scores of the
453
- frame-based methods (up to 1.6 dB vs. up to 5.4 dB), as we
454
- skip and attempt to reconstruct more frames. This suggests
455
- that our method is more robust to non-linear motion than
456
- frame-based methods.
457
- High Quality Frames (HQF) dataset. We also evalu-
458
- ate our method on High Quality Frames (HQF) dataset [35]
459
- collected using DA VIS240 event camera that consists of
460
- video sequences without blur and saturation. During eval-
461
- uation, we use the same methodology as for the synthetic
462
- datasets, with the only difference that in this case we use
463
- real events. In the evaluation, we consider two versions of
464
- our method: Time Lens-syn , which we trained only on syn-
465
- thetic data, and Time Lens-real , which we trained on syn-
466
- thetic data and fine-tuned on real event data from our own
467
- DA VIS346 camera. We summarise our results in Tab. 3.
468
- The results on the dataset are consistent with the re-
469
- sults on the synthetic datasets: the proposed method outper-
470
- forms state-of-the-art frame-based methods and produces
471
- more consistent results over examples. As we increase the
472
- number of frames that we skip, the performance gap be-
473
- tween the proposed method and the other methods widens
474
- from 2.53 dB to 4.25 dB, also the results of other methods
475
- become less consistent which is reflected in higher devia-
476
- tion of PSNR and SSIM scores. For a more detailed dis-
477
- cussion about the impact of frame skip length and perfor-
478
- mance, see the appendix. Interestingly, fine-tuning of the
479
- proposed method on real event data, captured by another
480
- camera, greatly boosts the performance of our method by
481
- an average of 1.94 dB. This suggest that existence of large
482
- domain gap between synthetic and real event data.
483
- High Speed Event-RGB dataset. Finally, we evaluate
484
- our method on our dataset introduced in § 2.1. As clear from
485
- Tab. 4, our method, again significantly outperforms frame-
486
- based and frame-plus-event-based competitors. In Fig. 7 we
487
- show several examples from the HS-ERGB test set which
488
- show that, compared to competing frame-based method,
489
- our method can interpolate frames in the case of nonlin-
490
- ear (“Umbrella” sequence) and non-rigid motion (“Water
491
- Bomb”), and also handle illumination changes (“Fountain
492
- Schaffhauserplatz” and “Fountain Bellevue”).
493
- 4. Conclusion
494
- In this work, we introduce Time Lens, a method that
495
- can show us what happens in the blind-time between
496
- two intensity frames using high temporal resolution in-
497
- formation from an event camera. It works by leveraging
498
- the advantages of synthesis-based approaches, which can
499
- handle changing illumination conditions and non-rigid
500
- motions, and flow-based approach, relying on motion
501
- estimation from events. It is therefore robust to motion blur
502
- and non-linear motions. The proposed method achievesTable 2: Results on standard video interpolation benchmarks such as Middlebury [2],Vimeo90k (interpolation) [44] and
503
- GoPro [19]. In all cases, we use a test subset of the datasets. To compute SSIM and PSNR, we downsample the original
504
- video and reconstruct the skipped frames. For Middlebury and Vimeo90k (interpolation), we skip 1 and 3 frames, and for
505
- GoPro we skip 7 and 15 frames due its its high frame rate of 240 FPS. Uses frames andUses events indicate if a method uses
506
- frames and events for interpolation. For event-based methods we generate events from the skipped frames using the event
507
- simulator [6]. Color indicates if a method works with color frames. For SSIM and PSNR we show mean and one standard
508
- deviation. Note, that we can not produce results with 3 skips on the Vimeo90k dataset, since it consists of frame triplet. We
509
- show the best result in each column in bold and the second-best using underscore text.
510
- Method Uses frames Uses events Color PSNR SSIM PSNR SSIM
511
- Middlebury [2] 1 frame skip 3 frames skips
512
- DAIN [3] 4 8 4 30.875.38 0.899 0.110 26.674.53 0.838 0.130
513
- SuperSloMo [10] 4 8 4 29.755.35 0.880 0.112 26.43 5.30 0.823 0.141
514
- RRIN [13] 4 8 4 31.085.55 0.8960.112 27.18 5.57 0.8370.142
515
- BMBC [29] 4 8 4 30.836.01 0.897 0.111 26.86 5.82 0.834 0.144
516
- E2VID [32] 8 4 8 11.262.82 0.427 0.184 26.86 5.82 0.834 0.144
517
- EDI [26] 4 4 8 19.722.95 0.725 0.155 18.44 2.52 0.669 0.173
518
- Time Lens (ours) 4 4 4 33.273.11 0.929 0.027 32.13 2.81 0.908 0.039
519
- Vimeo90k (interpolation) [44] 1 frame skip 3 frames skips
520
- DAIN [3] 4 8 4 34.204.43 0.962 0.023 - -
521
- SuperSloMo [10] 4 8 4 32.934.23 0.948 0.035 - -
522
- RRIN [13] 4 8 4 34.724.40 0.9620.029 - -
523
- BMBC [29] 4 8 4 34.564.40 0.962 0.024 - -
524
- E2VID [32] 8 4 8 10.082.89 0.395 0.141 - -
525
- EDI [26] 4 4 8 20.743.31 0.748 0.140 - -
526
- Time Lens (ours) 4 4 4 36.313.11 0.962 0.024 - -
527
- GoPro [19] 7 frames skip 15 frames skips
528
- DAIN [3] 4 8 4 28.814.20 0.876 0.117 24.39 4.69 0.7360.173
529
- SuperSloMo [10] 4 8 4 28.984.30 0.875 0.118 24.38 4.78 0.747 0.177
530
- RRIN [13] 4 8 4 28.964.38 0.876 0.119 24.324.80 0.749 0.175
531
- BMBC [29] 4 8 4 29.084.58 0.8750.120 23.68 4.69 0.736 0.174
532
- E2VID [32] 8 4 8 9.742.11 0.549 0.094 9.75 2.11 0.549 0.094
533
- EDI [26] 4 4 8 18.792.03 0.670 0.144 17.45 2.23 0.603 0.149
534
- Time Lens (ours) 4 4 4 34.811.63 0.959 0.012 33.21 2.00 0.942 0.023
535
- Table 3: Benchmarking on the High Quality Frames (HQF) DA VIS240 dataset. We do not fine-tune our method and other
536
- methods and use models provided by the authors. We evaluate methods on all sequences of the dataset. To compute SSIM
537
- and PSNR, we downsample the original video by skip 1 and 3 frames, reconstruct these frames and compare them to the
538
- skipped frames. In Uses frames andUses events columns we specify if a method uses frames and events for interpolation.
539
- In the Color column, we indicate if a method works with color frames. In the table, we present two versions of our method:
540
- Time Lens-syn , which we trained only on synthetic data, and Time Lens-real , which we trained on synthetic data and fine-
541
- tuned on real event data from our own DA VIS346 camera. For SSIM and PSNR, we show mean and one standard deviation.
542
- We show the best result in each column in bold and the second-best using underscore text.
543
- Method Uses frames Uses events Color PSNR SSIM PSNR SSIM
544
- 1 frame skip 3 frames skips
545
- DAIN [3] 4 8 4 29.826.91 0.875 0.124 26.10 7.52 0.782 0.185
546
- SuperSloMo [10] 4 8 4 28.766.13 0.861 0.132 25.54 7.13 0.761 0.204
547
- RRIN [13] 4 8 4 29.767.15 0.874 0.132 26.11 7.84 0.778 0.200
548
- BMBC [29] 4 8 4 29.967.00 0.8750.126 26.327.78 0.7810.193
549
- E2VID [32] 8 4 8 6.702.19 0.315 0.124 6.70 2.20 0.315 0.124
550
- EDI [26] 4 4 8 18.76.53 0.574 0.244 18.8 6.88 0.579 0.274
551
- Time Lens-syn (our) 4 4 4 30.575.01 0.903 0.067 28.98 5.09 0.873 0.086
552
- Time Lens-real (ours) 4 4 4 32.494.60 0.927 0.048 30.57 5.08 0.900 0.069Figure 7: Qualitative results for the proposed method and its closes competitor DAIN [3] on our Dual Event and Color Camera
553
- Dataset test sequences: “Fountain Schaffhauserplatz” (top-left), “Fountain Bellevue” (bottom-left) “Water bomb” (top-right)
554
- and “Umbrella” (bottom-right). For each sequence, the figure shows interpolation results on the left (the animation can be
555
- viewed in Acrobat Reader) and close-up interpolation results on the right. The close-ups, show input left and right frame and
556
- intermediate interpolated frames.
557
- Table 4: Benchmarking on the test set of the High Speed Event and RGB camera (HS-ERGB) dataset. We report PSNR and
558
- SSIM for all sequences by skipping 5 and 7 frames respectively, and reconstructing the missing frames with each method. By
559
- design LEDVDI [15] can interpolate only 5 frames. Uses frames andUses events indicate if a method uses frames or events
560
- respectively. Color indicates whether a method works with color frames. For SSIM and PSNR the scores are averaged over
561
- the sequences. Best results are shown in bold and the second best are underlined.
562
- Method Uses frames Uses events Color PSNR SSIM PSNR SSIM
563
- Far-away sequences 5 frame skip 7 frames skips
564
- DAIN [3] 4 8 4 27.921.55 0.7800.141 27.131.75 0.7480.151
565
- SuperSloMo [10] 4 8 4 25.666.24 0.727 0.221 24.16 5.20 0.692 0.199
566
- RRIN [13] 4 8 4 25.265.81 0.738 0.196 23.73 4.74 0.703 0.170
567
- BMBC [29] 4 8 4 25.626.13 0.742 0.202 24.13 4.99 0.710 0.175
568
- LEDVDI [15] 4 4 8 12.501.74 0.393 0.174 n/a n/a
569
- Time Lens (ours) 4 4 4 33.132.10 0.877 0.092 32.31 2.27 0.869 0.110
570
- Close planar sequences 5 frame skip 7 frames skips
571
- DAIN [3] 4 8 4 29.034.47 0.807 0.093 28.50 4.54 0:8010:096
572
- SuperSloMo [10] 4 8 4 28.354.26 0.788 0.098 27.27 4.26 0:7750:099
573
- RRIN [13] 4 8 4 28.694.17 0.813 0.083 27.46 4.24 0.800 0.084
574
- BMBC [29] 4 8 4 29.224.45 0.8200.085 27.994.55 0.808 0.084
575
- LEDVDI [15] 4 4 8 19.464.09 0.602 0.164 n/a n/a
576
- Time Lens (ours) 4 4 4 32.194.19 0.839 0.090 31.68 4.18 0.835 0.091
577
- an up to 5.21 dB improvement over state-of-the-art
578
- frame-based and event-plus-frames-based methods on both
579
- synthetic and real datasets. In addition, we release the
580
- first High Speed Event and RGB (HS-ERGB) dataset,
581
- which aims at pushing the limits of existing interpola-
582
- tion approaches by establishing a new benchmark for both
583
- event- and frame-based video frame interpolation methods.5. Acknowledgement
584
- This work was supported by Huawei Zurich Research
585
- Center; by the National Centre of Competence in Re-
586
- search (NCCR) Robotics through the Swiss National Sci-
587
- ence Foundation (SNSF); the European Research Coun-
588
- cil (ERC) under the European Union’s Horizon 2020 re-
589
- search and innovation programme (Grant agreement No.
590
- 864042).6. Video Demonstration
591
- This PDF is accompanied with a video showing advan-
592
- tages of the proposed method compared to state-of-the-art
593
- frame-based methods published over recent months, as well
594
- as potential practical applications of the method.
595
- 7. Backbone network architecture
596
- Figure 8: Backbone hourglass network that we use in all
597
- modules of the proposed method.
598
- For all modules in the proposed method, we use the same
599
- backbone architecture which is an hourglass network with
600
- shortcut connections between the contracting and the ex-
601
- panding parts similar to [10] which we show in Fig. 8.
602
- 8. Additional Ablation Experiments
603
- 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35
604
- Percentage of locationsSynthesisWarped successiveWarp preceeding
605
- Figure 9: Percentage of pixels each interpolation method
606
- contributes on average to the final interpolation result for
607
- Vimeo90k (denoising) validation set. Note, that all meth-
608
- ods contribute almost equally to the final result and thus are
609
- equally important.
610
- Table 5: Importance of inter-frame events on Middlebury
611
- test set. To compute SSIM and PSNR, we skip one frame
612
- of the original video, reconstruct it and compare to the
613
- skipped frame. One version of the proposed method has
614
- access to the events synthesized from the skipped frame
615
- and another version does not have inter-frame information.
616
- We also show performance of frame-based SuperSloMo
617
- method [10], that is used in event simulator for reference.
618
- We highlight the best performing method.
619
- Method PSNR SSIM
620
- With inter-frame events (ours) 33.27 3.11 0.929 0.027
621
- Without inter-frame events 29.03 4.85 0.866 0.111
622
- SuperSloMo [10] 29.75 5.35 0.880 0.112
623
- Importance of inter-frame events. To study the importance of the additional
- information provided by events, we skip every second frame of the original
- video and attempt to reconstruct it using two versions of the proposed
- method. One version has access to the events synthesized from the skipped
- frame and the other version does not have inter-frame information. As we can
- see from Tab. 5, the former significantly outperforms the latter by a margin
- of 4.24 dB. This large improvement can be explained by the fact that the
- method with inter-frame events has implicit access to the ground truth image
- it tries to reconstruct, albeit in the form of asynchronous events. This
- highlights that our network is able to efficiently decode the asynchronous
- intermediate events to recover the missing frame. Moreover, this shows that
- the addition of events has a significant impact on the final task
- performance, proving the usefulness of an event camera as an auxiliary
- sensor.
639
- Importance of each interpolation method. To study the relative importance
- of the synthesis-based and warping-based interpolation methods, we compute
- the percentage of pixels that each method contributes on average to the
- final interpolation result for the Vimeo90k (denoising) validation dataset
- and show the result in Fig. 9. As is clear from the figure, all the methods
- contribute almost equally to the final result and thus are all equally
- important.
647
- [Figure 10 plot: PSNR [dB] (22-30) as a function of frame index (1-7) for
- BMBC, DAIN, RRIN, SuperSloMo and ours.]
652
- Figure 10: “Rope plot” showing interpolation quality as a function of the
- distance from the input boundary frames on the High Quality Frames dataset.
- We skip all but every 7th frame and restore them using events and the
- remaining frames. For each skip position, we compute the average PSNR of the
- restored frame over the entire dataset. We do not fine-tune the proposed and
- competing methods on the HQF dataset and simply use the pre-trained models
- provided by the authors. Note that the proposed method has the highest PSNR.
- Also, its PSNR decreases much slower than the PSNR of the other methods as
- we move away from the input boundary frames.
663
- “Rope” plot. To study how the interpolation quality decreases with the
- distance to the input frames, we skip all but every 7th frame in the input
- videos from the High Quality Frames dataset, restore them using our method
- and compare to the original frames. For each skipped frame position, we
- compute the average PSNR of the restored frame over the entire dataset and
- show the results in Fig. 10. As is clear from the figure, the proposed
- method has the highest PSNR. Also, its PSNR decreases much slower than the
- PSNR of the competing methods as we move away from the boundary frames.
672
- 9. Additional Benchmarking Results
673
- To make sure that the fine-tuning does not affect our general conclusions,
- we fine-tuned the proposed method and the RRIN method [13] on a subset of
- the High Quality Frames dataset and tested them on the remaining part
- (“poster pillar 1”, “slow and fast desk”, “bike bay hdr” and “desk”
- sequences). We chose the RRIN method for this experiment because it showed
- good performance across synthetic and real datasets and it is fairly simple.
- As is clear from Tab. 6, after the fine-tuning, the performance of the
- proposed method remained very strong compared to the RRIN method.
684
- 10. High Speed Events and RGB Dataset
685
- In this section we describe the sequences in the High Speed Event and RGB
- (HS-ERGB) dataset. The commercially available DAVIS 346 [4] already allows
- the simultaneous recording of events and grayscale frames, which are
- temporally and spatially synchronized. However, it has some shortcomings,
- such as the relatively low resolution of only 346×260 pixels for both frames
- and events. This is far below the resolution of typical frame-based consumer
- cameras. Additionally, the DAVIS 346 has a very limited dynamic range of
- 55 dB and a maximum frame rate of 40 FPS. These properties render it not
- ideal for many event-based methods, which aim to outperform traditional
- frame-based cameras in certain applications. The setup described in [40]
- shows improvements in the resolution of frames and in dynamic range, but has
- a reduced event resolution instead. The lack of publicly available
- high-resolution event and color frame datasets and of off-the-shelf hardware
- motivated the development of our dual camera setup. It features
- high-resolution, high-frame-rate, high-dynamic-range color frames combined
- with high-resolution events. A comparison of our setup with the DAVIS 346 [4]
- and the beam-splitter setup of [40] is shown in Tab. 7. With this new setup
- we collect the new High Speed Events and RGB (HS-ERGB) dataset that we
- summarize in Tab. 8. We show several fragments from the dataset in Fig. 12.
- In the following paragraphs we describe the temporal synchronization and
- spatial alignment of frame and event data that we performed for our dataset.
712
- Synchronization. In our setup, the two cameras are hardware-synchronized
- through the use of external triggers. Each time the standard camera starts
- and ends exposure, a trigger is sent to the event camera, which records an
- external trigger event with precise timestamp information. This information
- allows us to assign accurate timestamps to the standard frames, as well as
- to group events during exposure or between consecutive frames.
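As a rough illustration of this grouping (a sketch under assumed array names, not the authors' code), events can be assigned to inter-frame intervals by a simple search over sorted timestamps:

```python
# Hypothetical sketch: group events between consecutive frame trigger timestamps.
import numpy as np

def group_events_between_frames(event_ts, frame_ts):
    """event_ts: sorted (N,) event timestamps; frame_ts: sorted (M,) frame timestamps.
    Returns a list of M-1 index arrays, one per inter-frame interval."""
    # Index of the first event at or after each frame timestamp.
    bounds = np.searchsorted(event_ts, frame_ts)
    return [np.arange(bounds[i], bounds[i + 1]) for i in range(len(frame_ts) - 1)]

# Example with synthetic timestamps (in microseconds).
events = np.sort(np.random.uniform(0, 1e6, size=10000))
frames = np.linspace(0, 1e6, num=41)  # e.g. 40 FPS over one second
groups = group_events_between_frames(events, frames)
```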
719
- Alignment. In our setup, the event and RGB cameras are arranged in a stereo
- configuration; therefore, event and frame data require spatial alignment in
- addition to temporal alignment. We perform the alignment in three steps:
- (i) stereo calibration, (ii) rectification and (iii) feature-based global
- alignment. We first calibrate the cameras using a standard checkerboard
- pattern. The recorded asynchronous events are converted to temporally
- aligned video reconstructions using E2VID [32, 33]. Finally, we find the
- intrinsics and extrinsics by applying the stereo calibration tool Kalibr [24]
- to the video reconstructions and the standard frames recorded by the color
- camera. We then use the found intrinsics and extrinsics to rectify the
- events and frames.
- Due to the small baseline and similar fields of view (FoV), stereo
- rectification is usually sufficient to align the output of both sensors for
- scenes with a large average depth (> 40 m). This is illustrated in
- Fig. 11 (a).
- For close scenes, however, events and frames are misaligned (Fig. 11 (b)).
- For this reason we perform the third step, global alignment, using a
- homography, which we estimate by matching SIFT features [17] extracted on
- the standard frames and the video reconstructions. The homography estimation
- also utilizes RANSAC to eliminate false matches. When the cameras are static
- and the objects of interest move within a plane, this yields accurate
- alignment between the two sensors (Fig. 11 (c)).
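For illustration only (this is our own sketch, not the authors' calibration pipeline; the function names, matcher settings and thresholds are assumptions), this kind of SIFT + RANSAC homography estimation can be done with OpenCV:

```python
# Hypothetical sketch of feature-based global alignment with SIFT + RANSAC.
import cv2
import numpy as np

def estimate_alignment_homography(frame_gray, e2vid_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame_gray, None)
    kp2, des2 = sift.detectAndCompute(e2vid_gray, None)
    # Lowe's ratio test on brute-force matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects false matches while fitting the global homography.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H

# Warping the standard frame into the coordinates of the event reconstruction:
# aligned = cv2.warpPerspective(frame_gray, H, (e2vid_gray.shape[1], e2vid_gray.shape[0]))
```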
745
- References
746
- [1] Enhancing and experiencing spacetime resolution with
747
- videos and stills. In ICCP , pages 1–9. IEEE, 2009. 2
748
- [2] Simon Baker, Daniel Scharstein, JP Lewis, Stefan Roth,
749
- Michael J Black, and Richard Szeliski. A database and evalu-
750
- ation methodology for optical flow. IJCV , 92(1):1–31, 2011.
751
- 6, 7
752
- [3] Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang,
753
- Zhiyong Gao, and Ming-Hsuan Yang. Depth-aware video
754
- frame interpolation. In CVPR , pages 3703–3712, 2019. 1, 5,
755
- 6, 7, 8
756
- [4] Christian Brandli, Raphael Berner, Minhao Yang, Shih-Chii Liu, and Tobi
- Delbruck. A 240×180 130 dB 3 µs latency global shutter spatiotemporal
- vision sensor. JSSC, 49(10):2333–2341, 2014. 2, 5, 10, 11
760
- [5] Zhixiang Chi, Rasoul Mohammadi Nasiri, Zheng Liu, Juwei
761
- Lu, Jin Tang, and Konstantinos N Plataniotis. All at once:
762
- Temporally adaptive multi-frame interpolation with ad-
763
- vanced motion modeling. arXiv preprint arXiv:2007.11762 ,
764
- 2020. 2
765
- [6] Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carri ´o, and
766
- Davide Scaramuzza. Video to events: Recycling video
767
- datasets for event cameras. In CVPR , June 2020. 5, 6, 7
768
- [7] Daniel Gehrig, Antonio Loquercio, Konstantinos G. Derpa-
769
- nis, and Davide Scaramuzza. End-to-end learning of repre-
770
- sentations for asynchronous event-based data. In Int. Conf.
771
- Comput. Vis. (ICCV), 2019. 3
- Table 6: Results on High Quality Frames [35] with fine-tuning. Due to time
- limitations, we only fine-tuned the proposed method and the RRIN [13] method,
- which performed well across synthetic and real datasets. For evaluation, we
- used the “poster pillar 1”, “slow and fast desk”, “bike bay hdr” and “desk”
- sequences of the set; the other sequences we used for the fine-tuning. For
- SSIM and PSNR, we show the mean and one standard deviation across the frames
- of all sequences.
- Method                 1 skip                         3 skips
-                        PSNR          SSIM             PSNR          SSIM
- RRIN [13]              28.62±5.51    0.839±0.132      25.36±5.70    0.750±0.173
- Time Lens (Ours)       33.42±3.18    0.934±0.041      32.27±3.44    0.917±0.054
779
- Table 7: Comparison of our HS-ERGB dataset against the publicly available
- High Quality Frames (HQF) dataset, acquired by a DAVIS 346 [4], and the
- Guided Event Filtering (GEF) dataset, acquired by a setup with a DAVIS240
- and an RGB camera mounted with a beam splitter [40]. Note that, in contrast
- to the previous datasets, the proposed dataset has a high resolution of the
- event data and a high frame rate. Also, it is the first dataset acquired by
- a dual system with event and frame sensors arranged in a stereo
- configuration.
-                           Frames                                            Events
-                 FPS   Dynamic Range [dB]   Resolution   Color   Dynamic Range [dB]   Resolution   Sync.   Aligned
- DAVIS 346 [4]   40    55                   346×260      ✗       120                  346×260      ✓       ✓
- GEF [40]        35    60                   2480×2048    ✓       120                  240×180      ✓       ✓
- HS-ERGB (Ours)  226   71.45                1440×1080    ✓       120                  720×1280     ✓       ✓
789
- [8] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper,
790
- Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolu-
791
- tion of optical flow estimation with deep networks. In CVPR ,
792
- pages 2462–2470, 2017. 2, 4
793
- [9] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al.
794
- Spatial transformer networks. In NIPS , pages 2017–2025,
795
- 2015. 2, 3
796
- [10] Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan
797
- Yang, Erik Learned-Miller, and Jan Kautz. Super slomo:
798
- High quality estimation of multiple intermediate frames for
799
- video interpolation. In CVPR , pages 9000–9008, 2018. 2, 3,
800
- 4, 5, 6, 7, 8, 9
801
- [11] Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng
802
- Lv, and Yebin Liu. Learning event-based motion deblurring.
803
- InCVPR , pages 3320–3329, 2020. 2
804
- [12] Diederik P. Kingma and Jimmy L. Ba. Adam: A method for
805
- stochastic optimization. Int. Conf. Learn. Representations
806
- (ICLR) , 2015. 5
807
- [13] Haopeng Li, Yuan Yuan, and Qi Wang. Video frame interpo-
808
- lation via residue refinement. In ICASSP 2020 , pages 2613–
809
- 2617. IEEE, 2020. 5, 6, 7, 8, 10, 11
810
- [14] Patrick Lichtsteiner, Christoph Posch, and Tobi Delbruck. A 128×128
- 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE J.
- Solid-State Circuits, 43(2):566–576, 2008. 2
814
- [15] Songnan Lin, Jiawei Zhang, Jinshan Pan, Zhe Jiang,
815
- Dongqing Zou, Yongtian Wang, Jing Chen, and Jimmy Ren.
816
- Learning event-driven video deblurring and interpolation.
817
- ECCV , 2020. 5, 6, 8
818
- [16] Ziwei Liu, Raymond A Yeh, Xiaoou Tang, Yiming Liu, and
819
- Aseem Agarwala. Video frame synthesis using deep voxel
820
- flow. In ICCV , pages 4463–4471, 2017. 2
821
- [17] David G. Lowe. Distinctive image features from scale-
822
- invariant keypoints. Int. J. Comput. Vis. , 60(2):91–110, Nov.
823
- 2004. 5, 10
- [18] Simone Meyer, Abdelaziz Djelouah, Brian McWilliams,
824
- Alexander Sorkine-Hornung, Markus Gross, and Christo-
825
- pher Schroers. Phasenet for video frame interpolation. In
826
- CVPR , 2018. 2
827
- [19] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep
828
- multi-scale convolutional neural network for dynamic scene
829
- deblurring. In CVPR , July 2017. 7
830
- [20] Simon Niklaus and Feng Liu. Context-aware synthesis for
831
- video frame interpolation. In CVPR , pages 1701–1710,
832
- 2018. 2, 3
833
- [21] Simon Niklaus and Feng Liu. Softmax splatting for video
834
- frame interpolation. In CVPR , pages 5437–5446, 2020. 2, 3,
835
- 4
836
- [22] Simon Niklaus, Long Mai, and Feng Liu. Video frame inter-
837
- polation via adaptive convolution. In CVPR , 2017. 2
838
- [23] Simon Niklaus, Long Mai, and Feng Liu. Video frame inter-
839
- polation via adaptive separable convolution. In ICCV , 2017.
840
- 2
841
- [24] L. Oth, P. Furgale, L. Kneip, and R. Siegwart. Rolling shutter
842
- camera calibration. In CVPR , 2013. 10
843
- [25] Avinash Paliwal and Nima Khademi Kalantari. Deep slow
844
- motion video reconstruction with hybrid imaging system.
845
- PAMI , 2020. 2
846
- [26] Liyuan Pan, Cedric Scheerlinck, Xin Yu, Richard Hartley,
847
- Miaomiao Liu, and Yuchao Dai. Bringing a blurry frame
848
- alive at high frame-rate with an event camera. In CVPR ,
849
- pages 6820–6829, 2019. 2, 6, 7
850
- [27] Jiahao Pang, Wenxiu Sun, JS Ren, Chengxi Yang, and Qiong
851
- Yan. Cascade Residual Learning: A Two-stage Convolu-
852
- tional Neural Network for Stereo Matching. In ICCV , pages
853
- 887–895, 2017. 4
854
- [28] Federico Paredes-Vall ´es and Guido CHE de Croon. Back to
855
- event basics: Self-supervised learning of image reconstruc-
856
- tion for event cameras via photometric constancy. CoRR ,
857
- 2020. 2
- (a) far away scenes (b) misaligned close scenes (c) after global alignment
858
- Figure 11: Alignment of standard frames with events. Aggregated events (blue positive, red negative) are overlaid on the
- standard frame. For scenes with sufficient depth (more than 40 m), stereo rectification of both outputs yields accurate
- per-pixel alignment (a). However, for close scenes (b), events and frames are misaligned. When the camera is static and the
- motion occurs in a plane, the views can be aligned with a global homography (c).
862
- [29] Junheum Park, Keunsoo Ko, Chul Lee, and Chang-Su Kim.
863
- Bmbc: Bilateral motion estimation with bilateral cost vol-
864
- ume for video interpolation. ECCV , 2020. 1, 2, 5, 6, 7, 8
865
- [30] PyTorch web site. http://pytorch.org/ Accessed: 08 March 2019. 5
867
- [31] Henri Rebecq, Daniel Gehrig, and Davide Scaramuzza.
868
- ESIM: an open event camera simulator. In Conf. on Robotics
869
- Learning (CoRL) , 2018. 5
870
- [32] Henri Rebecq, Ren ´e Ranftl, Vladlen Koltun, and Davide
871
- Scaramuzza. Events-to-video: Bringing modern computer
872
- vision to event cameras. In CVPR , pages 3857–3866, 2019.
873
- 2, 5, 7, 10
874
- [33] Henri Rebecq, Ren ´e Ranftl, Vladlen Koltun, and Davide
875
- Scaramuzza. High speed and high dynamic range video with
876
- an event camera. TPAMI , 2019. 2, 6, 10
877
- [34] Cedric Scheerlinck, Henri Rebecq, Daniel Gehrig, Nick
878
- Barnes, Robert Mahony, and Davide Scaramuzza. Fast im-
879
- age reconstruction with an event camera. In WACV , pages
880
- 156–163, 2020. 2
881
- [35] Timo Stoffregen, Cedric Scheerlinck, Davide Scaramuzza,
882
- Tom Drummond, Nick Barnes, Lindsay Kleeman, and
883
- Robert Mahony. Reducing the sim-to-real gap for event cam-
884
- eras. In ECCV , 2020. 6, 11
885
- [36] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz.
886
- Pwc-net: Cnns for optical flow using pyramid, warping, and
887
- cost volume. In CVPR , pages 8934–8943, 2018. 2
888
- [37] Bishan Wang, Jingwei He, Lei Yu, Gui-Song Xia, and Wen
889
- Yang. Event enhanced high-quality image recovery. ECCV ,
890
- 2020. 2
891
- [38] Lin Wang, Yo-Sung Ho, Kuk-Jin Yoon, et al. Event-
892
- based high dynamic range image and very high frame rate
893
- video generation using conditional generative adversarial
894
- networks. In CVPR , pages 10081–10090, 2019. 2
895
- [39] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Si-
896
- moncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing,
897
- 13(4):600–612, 2004. 5
898
- [40] Zihao Wang, Peiqi Duan, Oliver Cossairt, Aggelos Kat-
899
- saggelos, Tiejun Huang, and Boxin Shi. Joint filtering of in-
900
- tensity images and neuromorphic events for high-resolution
901
- noise-robust imaging. In CVPR , 2020. 5, 10, 11
902
- [41] Zihao W Wang, Weixin Jiang, Kuan He, Boxin Shi, Aggelos
903
- Katsaggelos, and Oliver Cossairt. Event-driven video frame
904
- synthesis. In ICCV Workshops , pages 0–0, 2019. 2
905
- [42] Chao-Yuan Wu, Nayan Singhal, and Philipp Krahenbuhl.
906
- Video compression through image interpolation. In ECCV ,
907
- pages 416–431, 2018. 2
908
- [43] Xiangyu Xu, Li Siyao, Wenxiu Sun, Qian Yin, and Ming-
909
- Hsuan Yang. Quadratic video interpolation. In NeurIPS ,
910
- pages 1647–1656, 2019. 2, 3
911
- [44] Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and
912
- William T Freeman. Video enhancement with task-oriented
913
- flow. IJCV , 127(8):1106–1125, 2019. 2, 5, 6, 7
914
- [45] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman,
915
- and Oliver Wang. The unreasonable effectiveness of deep
916
- features as a perceptual metric. In CVPR , pages 586–595,
917
- 2018. 5
918
- [46] Alex Zihao Zhu, Liangzhe Yuan, Kenneth Chaney, and
919
- Kostas Daniilidis. Unsupervised event-based optical flow us-
920
- ing motion compensation. In ECCV, pages 0–0, 2018. 3
- Table 8: Overview of all sequences of the High Speed Event-RGB (HS-ERGB) dataset.
- Sequence Name                      Subset   Camera Settings                           Description
- Close planar sequences
- Water bomb air (Fig. 12a)          Train    163 FPS, 1080 µs exposure, 1065 frames    accelerating object, water splash
- Lighting match                     Train    150 FPS, 2972 µs exposure, 666 frames     illumination change, fire
- Fountain Schaffhauserplatz 1       Train    150 FPS, 977 µs exposure, 1038 frames     illumination change, fire
- Water bomb ETH 2 (Fig. 12c)        Train    163 FPS, 323 µs exposure, 3494 frames     accelerating object, water splash
- Waving arms                        Train    163 FPS, 3476 µs exposure, 762 frames     non-linear motion
- Popping air balloon                Test     150 FPS, 2972 µs exposure, 335 frames     non-linear motion, object disappearance
- Confetti (Fig. 12e)                Test     150 FPS, 2972 µs exposure, 832 frames     non-linear motion, periodic motion
- Spinning plate                     Test     150 FPS, 2971 µs exposure, 1789 frames    non-linear motion, periodic motion
- Spinning umbrella                  Test     163 FPS, 3479 µs exposure, 763 frames     non-linear motion
- Water bomb floor 1 (Fig. 12d)      Test     160 FPS, 628 µs exposure, 686 frames      accelerating object, water splash
- Fountain Schaffhauserplatz 2       Test     150 FPS, 977 µs exposure, 1205 frames     non-linear motion, water
- Fountain Bellevue 2 (Fig. 12b)     Test     160 FPS, 480 µs exposure, 1329 frames     non-linear motion, water, periodic movement
- Water bomb ETH 1                   Test     163 FPS, 323 µs exposure, 3700 frames     accelerating object, water splash
- Candle (Fig. 12f)                  Test     160 FPS, 478 µs exposure, 804 frames      illumination change, non-linear motion
- Far-away sequences
- Kornhausbruecke letten x 1         Train    163 FPS, 266 µs exposure, 831 frames      fast camera rotation around z-axis
- Kornhausbruecke rot x 5            Train    163 FPS, 266 µs exposure, 834 frames      fast camera rotation around x-axis
- Kornhausbruecke rot x 6            Train    163 FPS, 266 µs exposure, 834 frames      fast camera rotation around x-axis
- Kornhausbruecke rot y 3            Train    163 FPS, 266 µs exposure, 833 frames      fast camera rotation around y-axis
- Kornhausbruecke rot y 4            Train    163 FPS, 266 µs exposure, 833 frames      fast camera rotation around y-axis
- Kornhausbruecke rot z 1            Train    163 FPS, 266 µs exposure, 857 frames      fast camera rotation around z-axis
- Kornhausbruecke rot z 2            Train    163 FPS, 266 µs exposure, 833 frames      fast camera rotation around z-axis
- Sihl 4                             Train    163 FPS, 426 µs exposure, 833 frames      fast camera rotation around z-axis
- Tree 3                             Train    163 FPS, 978 µs exposure, 832 frames      camera rotation around z-axis
- Lake 4                             Train    163 FPS, 334 µs exposure, 833 frames      camera rotation around z-axis
- Lake 5                             Train    163 FPS, 275 µs exposure, 833 frames      camera rotation around z-axis
- Lake 7                             Train    163 FPS, 274 µs exposure, 833 frames      camera rotation around z-axis
- Lake 8                             Train    163 FPS, 274 µs exposure, 832 frames      camera rotation around z-axis
- Lake 9                             Train    163 FPS, 274 µs exposure, 832 frames      camera rotation around z-axis
- Bridge lake 4                      Train    163 FPS, 236 µs exposure, 836 frames      camera rotation around z-axis
- Bridge lake 5                      Train    163 FPS, 236 µs exposure, 834 frames      camera rotation around z-axis
- Bridge lake 6                      Train    163 FPS, 235 µs exposure, 832 frames      camera rotation around z-axis
- Bridge lake 7                      Train    163 FPS, 235 µs exposure, 832 frames      camera rotation around z-axis
- Bridge lake 8                      Train    163 FPS, 235 µs exposure, 834 frames      camera rotation around z-axis
- Kornhausbruecke letten random 4    Test     163 FPS, 266 µs exposure, 834 frames      random camera movement
- Sihl 03                            Test     163 FPS, 426 µs exposure, 834 frames      camera rotation around z-axis
- Lake 01                            Test     163 FPS, 335 µs exposure, 784 frames      camera rotation around z-axis
- Lake 03                            Test     163 FPS, 334 µs exposure, 833 frames      camera rotation around z-axis
- Bridge lake 1                      Test     163 FPS, 237 µs exposure, 833 frames      camera rotation around z-axis
- Bridge lake 3                      Test     163 FPS, 236 µs exposure, 834 frames      camera rotation around z-axis
- (a) Water bomb air (b) Fountain Bellevue (c) Water bomb ETH 2 (d) Water bomb floor 1 (e) Confetti (f) Candle
- Figure 12: Example sequences of the HS-ERGB dataset. This figure contains animation that can be viewed in Acrobat Reader.
txt/2106.08858.txt DELETED
@@ -1,901 +0,0 @@
1
- Grounding Spatio-Temporal Language with
2
- Transformers
3
- Tristan Karch, Laetitia Teodorescu
4
- Inria - Flowers Team
5
- Université de Bordeaux
6
- [email protected] Hofmann
7
- Microsoft Research
8
- Cambridge, UKClément Moulin-Frier
9
- Inria - Flowers team
10
- Université de Bordeaux
11
- ENSTA ParisTech
12
- Pierre-Yves Oudeyer
13
- Inria - Flowers team
14
- Université de Bordeaux
15
- ENSTA ParisTech
16
- Abstract
17
- Language is an interface to the outside world. In order for embodied agents to use
18
- it, language must be grounded in other, sensorimotor modalities. While there is
19
- an extended literature studying how machines can learn grounded language, the
20
- topic of how to learn spatio-temporal linguistic concepts is still largely uncharted.
21
- To make progress in this direction, we here introduce a novel spatio-temporal
22
- language grounding task where the goal is to learn the meaning of spatio-temporal
23
- descriptions of behavioral traces of an embodied agent. This is achieved by
24
- training a truth function that predicts if a description matches a given history of
25
- observations. The descriptions involve time-extended predicates in past and present
26
- tense as well as spatio-temporal references to objects in the scene. To study the role
27
- of architectural biases in this task, we train several models including multimodal
28
- Transformer architectures; the latter implement different attention computations
29
- between words and objects across space and time. We test models on two classes of
30
- generalization: 1) generalization to randomly held-out sentences; 2) generalization
31
- to grammar primitives. We observe that maintaining object identity in the attention
32
- computation of our Transformers is instrumental to achieving good performance
33
- on generalization overall, and that summarizing object traces in a single token has
34
- little influence on performance. We then discuss how this opens new perspectives
35
- for language-guided autonomous embodied agents. We also release our code under
36
- open-source license as well as pretrained models and datasets to encourage the
37
- wider community to build upon and extend our work in the future.
38
- 1 Introduction
39
- Building autonomous agents that learn to represent and use language is a long standing-goal in
40
- Artificial Intelligence [20,39]. In developmental robotics [ 7], language is considered a cornerstone
41
- of development that enables embodied cognitive agents to learn more efficiently through social
42
- interactions with humans or through cooperation with other autonomous agents. It is also considered
43
- essential to develop complex sensorimotor skills, facilitating the representation of behaviors and
44
- actions.
45
- Embodied Language Grounding [50] is the field that studies how agents can align language with
46
- their behaviors in order to extract the meaning of linguistic constructions. Early approaches in
47
- Equal Contribution
48
- 35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2106.08858v2 [cs.AI] 11 Oct 2021
- [Figure 1 panels: (a) grow action with spatial or attribute reference to an object,
- e.g. ‘Grow blue chameleon’, ‘Grow thing left of tv’, ‘Grow thing left most’,
- ‘Grasp green water’; (b) shake action with attribute, spatial or spatio-temporal
- reference, e.g. ‘Shake red cat’, ‘Shake thing right of lamp’, ‘Shake thing was left
- of lamp’; (c) grasp past-action descriptions with spatio-temporal reference, e.g.
- ‘Was grasp blue cactus’, ‘Was grasp thing top most’, ‘Was grasp thing was right of
- parrot’. Natural English versions: ‘You are currently growing the blue chameleon’,
- ‘You are currently shaking something that used to be at the left of the lamp’,
- ‘You previously grasped something that used to be at the right of the parrot’.]
- Figure 1: Visual summary of the Temporal Playground environment: At each episode (column
71
- a, b and c), the actions of an agent (represented by a hand) unfold in the environment and generate
72
- a trace of interactions between objects and the agent body. Given such a trace, the environment
73
- automatically generates a set of synthetic linguistic descriptions that are true at the end of the trace.
74
- In (a) the agent grows an object which is described with spatial (underlined) or attribute (highlighted)
75
- reference. In (b) it shakes an object which is described with attribute, spatial or spatio-temporal
76
- (underlined) reference. In (c) it has grasped an object (past action underlined) which is described
77
- with attribute, spatial or spatio-temporal (highlighted) reference. The natural english version is given
78
- for illustrative purposes.
79
- developmental robotics studied how various machine learning techniques, ranging from neural
80
- networks [ 40,45,24] to non-negative matrix factorization [ 32], could enable the acquisition of
81
- grounded compositional language [ 42,41]. This line of work was recently extended using techniques
82
- forLanguage conditioned Deep Reinforcement Learning [31]. Among these works we can distinguish
83
- mainly three language grounding strategies. The first one consists of directly grounding language
84
- in the behavior of agents by training goal-conditioned policies satisfying linguistic instructions
85
- [40,45,23,22,9]. The second aims at extracting the meaning of sentences from mental simulations
86
- (i.e. generative models) of possible sensorimotor configurations matching linguistic descriptions
87
- [32,1,12,34]. The third strategy searches to learn the meaning of linguistic constructs in terms of
88
- outcomes that agents can observe in the environment. This is achieved by training a truth function that
89
- detects if descriptions provided by an expert match certain world configurations. This truth function
90
- can be obtained via Inverse Reinforcement Learning [49,3] or by training a multi-modal binary
91
- classifier [ 14]. Previous work [ 14,3] has shown that access to this reward is enough for sucessfully
92
- grounding language in instruction-following agents.
93
- While all the above-mentioned approaches consider language that describes immediate and instanta-
94
- neous actions, we argue that it is also important for agents to grasp language expressing concepts
95
- that are relational and that span multiple time steps. We thus propose to study the grounding of new
96
- spatio-temporal concepts enabling agents to ground time extended predicates (Fig. 1a) with complex
97
- spatio-temporal references to objects (Fig. 1b) and understand both present and past tenses (Fig. 1c).
98
- To do so we choose the third strategy mentioned above, i.e. to train a truth function that predicts when
99
- descriptions match traces of experience. This choice is motivated by two important considerations.
100
- First, prior work showed that learning truth functions was key to foster generalization [ 3], enabling
101
- agents to efficiently self-train policies via goal imagination [ 14] and goal relabeling [ 12]. Hence the
102
- truth function is an important and self-contained component of larger learning systems. Second, this
103
- strategy allows to carefully control the distribution of experiences and descriptions perceived by the
104
- agent.
105
- 2The Embodied Language Grounding problem has a relational structure. We understand the meaning of
106
- words by analyzing the relations they state in the world [ 18]. The relationality of spatial and temporal
107
- concepts has long been identified in the field of pragmatics [ 43,44] (see Supplementary Section C
108
- for additional discussion). Furthermore actions themselves are relations between subjects and objects,
109
- and can be defined in terms of affordances of the agent [ 19]. We acknowledge this and provide
110
- the right relational inductive bias [ 5] to our architectures through the use of Transformers [ 46]. We
111
- propose a formalism unifying three variants of a multi-modal transformer inspired by Ding et al.
112
- [16] that implement different relational operations through the use of hierarchical attention. We
113
- measure the generalization capabilities of these architectures along three axes 1) generalization to
114
- new traces of experience; 2) generalization to randomly held out sentences; 3) generalization to
115
- grammar primitives, systematically held out from the training set as in Ruis et al. [37]. We observe
116
- that maintaining object identity in the attention computation of our Transformers is instrumental to
117
- achieving good performance on generalization overall. We also identify specific relational operations
118
- that are key to generalize on certain grammar primitives.
119
- Contributions. This paper introduces:
120
- 1. A new Embodied Language Grounding task focusing on spatio-temporal language;
121
- 2.A formalism unifying different relational architectures based on Transformers expressed as
122
- a function of mapping and aggregation operations;
123
- 3.A systematic study of the generalization capabilities of these architectures and the identifica-
124
- tion of key components for their success on this task.
125
- 2 Methods
126
- 2.1 Problem Definition
127
- We consider the setting of an embodied agent behaving in an environment. This agent interacts with
128
- the surrounding objects over time, during an episode of fixed length ( T). Once this episode is over, an
129
- oracle provides exhaustive feedback in a synthetic language about everything that has happened. This
130
- language describes actions of the agent over the objects and includes spatial and temporal concepts.
131
- The spatial concepts are reference to an object through its spatial relation with others (Fig. 1a), and
132
- the temporal concepts are the past modality for the actions of the agent (Fig. 1c), past modality for
133
- spatial relations (Fig. 1b), and actions that unfold over time intervals. The histories of states of the
134
- agent’s body and of the objects over the episode as well as the associated sentences are recorded in
135
- a bufferB. From this setting, and echoing previous work on training agents from descriptions, we
136
- frame the Embodied Language Grounding problem as learning a parametrized truth function Rover
137
- couples of observations traces and sentences, tasked with predicting whether a given sentence Wis
138
- true of a given episode history Sor not. Formally, we aim to minimize:
139
- E(S;W )B
140
- L(R(S;W);r(S;W))
141
- whereLdenotes the cross-entropy loss and rdenotes the ground truth boolean value for sentence W
142
- about traceS.
143
- 2.2 Temporal Playground
144
- In the absence of any dedicated dataset providing spatio-temporal descriptions from behavioral traces
145
- of an agent, we introduce Temporal Playground (Fig. 1) an environment coupled with a templated
146
- grammar designed to study spatio-temporal language grounding. The environment is a 2D world,
147
- with procedurally-generated scene containing N= 3objects sampled from 32 different object types
148
- belonging to 5 categories. Each object has a continuous 2D position, a size, a continuous color
149
- code specified by a 3D vector in RGB space, a type specified by a one-hot vector, and a boolean
150
- unit specifying whether it is grasped. The size of the object feature vector ( o) isjoj= 39 . The
151
- agent’s body has its 2D position in the environment and its gripper state (grasping or non-grasping) as
152
- features (body feature vector ( b) of sizejbj= 3). In this environment, the agent can perform various
153
- actions over the length ( T) of an episode. Some of the objects (the animals) can move independently.
154
- Objects can also interact: if the agent brings food or water to an animal, it will grow in size; similarly,
155
- if water is brought to a plant, it will grow. At the end of an episode, a generative grammar generates
156
- 3sentences describing all the interactions that occurred. A complete specification of the environment
157
- as well as the BNF of the grammar can be found in Supplementary Section A.2.
158
- Synthetic language. To enable a controlled and systematic study of how different types of spatio-
159
- temporal linguistic meanings can be learned, we argue it is necessary to first conduct a systematic
160
- study with a controlled synthetic grammar. We thus consider a synthetic language with a vocabulary of
161
- size53and sentences with a maximum length of 8. This synthetic language facilitates the generation
162
- of descriptions matching behavioral traces of the agent. Moreover, it allows us to express four
163
- categories of concepts associated with specific words. Thus, the generated sentences consist in four
164
- conceptual types based on the words they involve:
165
- •Sentences involving basic concepts. This category of sentences talk about present-time
166
- events by referring to objects and their attributes. Sentences begin with the ’grasp’ token
167
- combined with any object. Objects can be named after their category (e.g. ’animal’, ’thing’)
168
- or directly by their type ( ’dog’, ’door’, ’algae’, etc. ). Finally, the color (’red’, ’blue’, ’green’ )
169
- of objects can also be specified.
170
- •Sentences involving spatial concepts. This category of sentences additionally involve
171
- one-to-one spatial relations and one-to-all spatial relations to refer to objects. An object
172
- can be ’left of’ another object (reference is made in relation to a single other object), or
173
- can be the ’top most’ object (reference is made in relation with all other objects). Example
174
- sentences include ’grasp thing bottom of cat’ or’grasp thing right most’ .
175
- •Sentences involving temporal concepts . This category of sentences involves talking about
176
- temporally-extended predicates and the past tense, without any spatial relations. The two
177
- temporal predicates are denoted with the words ’grow’ and’shake’ . The truth value of these
178
- predicates can only be decided by looking at the temporal evolution of the object’s size and
179
- position respectively. A predicate is transposed at the past tense if the action it describes
180
- was true at some point in the past and is no longer true in the present, this is indicated by
181
- adding the modifier ’was’ before the predicate. Example sentences include ’was grasp red
182
- chameleon’ (indicating that the agent grasped the red chameleon and then released it) and
183
- ’shake bush’ ;
184
- •Sentences involving spatio-temporal concepts . Finally, we consider the broad class of
185
- spatio-temporal sentences that combine spatial reference and temporal or past-tense predi-
186
- cates. These are sentences that involve both the spatial and temporal concepts defined above.
187
- Additionally, there is a case of where the spatial and the temporal aspects are entangled:
188
- past spatial reference. This happens when an object is referred to by its previous spatial
189
- relationship with another object. Consider the case of an animal that was at first on the
190
- bottom of a table, then moved on top, and then is grasped. In this case we could refer to this
191
- animal as something that was previously on the bottom of the table. We use the same ’was’
192
- modifier as for the past tense predicates; and thus we would describe the action as ’Grasp
193
- thing was bottom of table’ .
194
- 2.3 Architectures
195
- In this section we describe the architectures used as well as their inputs. Let one input sample to
196
- our model be I = (S, W), where (S_{i,t})_{i,t} represents the objects’ and body’s evolution, and (W_l)_l
- represents the linguistic observations. S has a spatial (or entity) dimension indexed by i in [0..N] and
- a temporal dimension indexed by t in [1..T]; for any i, t, S_{i,t} is a vector of observational features.
- Note that by convention, the trace (S_{0,t})_t represents the body’s features, and the traces (S_{i,t})_t, i > 0,
- represent the other objects’ features. W is a 2-dimensional tensor indexed by the sequence l in [1..L];
- for any l, W_l in R^{d_W} is a one-hot vector defining the word in the dictionary. The output of our models
- is a single scalar between 0 and 1 representing the probability that the sentence encoded by W is true
- of the observation trace S.
204
- Transformer Architectures. To systematically study the influence of architectural choices on
205
- language performance and generalization in our spatio-temporal grounded language context, we
206
- define a set of mapping and aggregation operations that allows us to succinctly describe different
207
- models in a unified framework. We define:
208
- [Figure 2 diagram: (a) input encoding with object, body and language encoders plus
- positional encoding; (b) optional word aggregation (-WA); (c) Unstructured
- Transformer (UT); (d) Spatial-First Transformer (SFT); (e) Temporal-First
- Transformer (TFT), each built from reduce operations over object and word tokens.]
- Figure 2: Visual summary of the architectures used. We show the details of UT, SFT and TFT
- respectively in subfigures (c), (d), (e), as well as a schematic illustration of the preprocessing phase
- (a) and the optional word-aggregation procedure (b).
241
- • An aggregation operation based on a Transformer model, called reduce. reduce is a
- parametrized function that takes 3 inputs: a tensor, a dimension tuple D over which to
- reduce, and a query tensor (that has to have the size of the reduced tensor). R layers of a
- Transformer are applied to the input-query concatenation and are then queried at the positions
- corresponding to the query tokens. This produces an output reduced over the dimensions D.
- • A casting operation called cast. cast takes as input 2 tensors A and B and a dimension d.
- A is flattened, expanded so as to fit the tensor B in all dimensions except d, and concatenated
- along the d dimension.
- • A helper expand operation called expand that takes as arguments a tensor and an integer n
- and repeats the tensor n times.
251
- Using those operations, we define three architectures: one with no particular bias ( Unstructured
252
- Transformer , inspired by Ding et al. [16], or UT); one with a spatial-first structural bias – objects and
253
- words are aggregated along the spatial dimension first ( Spatial-First Transformer orSFT); and one
254
- with a temporal-first structural bias – objects and words are aggregated along the temporal dimension
255
- first ( Temporal-First Transformer , or TFT).
256
- Before inputting the observations of bodies and objects Sand the language Winto any of the Trans-
257
- former architectures, they are projected to a common dimension (see Supplementary Section B.2 for
258
- more details). A positional encoding [ 46] is then added along the time dimension for observations and
259
- along the sequence dimension for language; and finally a one-hot vector indicating whether the vector
260
- is observational or linguistic is appended at the end. This produces the modified observation-language
261
- tuple (Ŝ, Ŵ). We let:
- UT(Ŝ, Ŵ) := reduce(cast(Ŝ, Ŵ, 0), 0, q)
- SFT(Ŝ, Ŵ, q) := reduce(reduce(cast(Ŵ, Ŝ, 0), 0, expand(q, T)), 0, q)
- TFT(Ŝ, Ŵ, q) := reduce(reduce(cast(Ŵ, Ŝ, 1), 1, expand(q, N + 1)), 0, q)
- where T is the number of time steps, N is the number of objects and q is a learned query token. See
- Fig. 2 for an illustration of these architectures.
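As an informal sketch of these operations (our own simplified pseudocode, not the authors' implementation; the layer sizes, the single-query case and the readout convention are assumptions):

```python
# Hypothetical sketch of the reduce/cast operations and the Unstructured Transformer (UT).
import torch
import torch.nn as nn

class Reduce(nn.Module):
    """Aggregate a set of tokens into the query position with R self-attention layers."""
    def __init__(self, dim, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, tokens, query):
        # tokens: (batch, n_tokens, dim); query: (batch, 1, dim).
        x = torch.cat([query, tokens], dim=1)       # input-query concatenation
        return self.encoder(x)[:, 0]                # read out at the query position

def cast(a, b, dim=1):
    # Flatten both inputs over their token dimensions and concatenate along `dim`.
    return torch.cat([a.flatten(1, -2), b.flatten(1, -2)], dim=dim)

class UT(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.reduce = Reduce(dim)
        self.q = nn.Parameter(torch.randn(1, 1, dim))
        self.head = nn.Linear(dim, 1)

    def forward(self, S_hat, W_hat):
        # S_hat: (batch, T, N+1, dim) encoded traces; W_hat: (batch, L, dim) encoded words.
        tokens = cast(S_hat, W_hat)                        # (batch, T*(N+1)+L, dim)
        q = self.q.expand(S_hat.size(0), -1, -1)
        return torch.sigmoid(self.head(self.reduce(tokens, q)))
```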
267
- Note that SFT and TFT are transposed versions of each other: SFT performs aggregation over space
- first and then time, and the reverse is true for TFT. Additionally, we define a variant of each of these
- architectures where the words are aggregated before being related with the observations. We name
- these variants by appending -WA (word-aggregation) to the name of the model (see Fig. 2 (b)):
- Ŵ ← reduce(Ŵ, 0, q)
- We examine these variants to study the effect of letting word tokens directly interact with object tokens
273
- through the self-attention layers vs simply aggregating all language tokens in a single embedding
274
- and letting this vector condition the processing of observations. The latter is commonly done in
275
- the language-conditioned RL and language grounding literature [ 11,3,25,37], using the language
276
- embedding in FiLM layers [ 36] for instance. Finding a significant effect here would encourage using
277
- architectures which allow direct interactions between the word tokens and the objects they refer to.
278
- LSTM Baselines. We also compare some LSTM -based baselines on this task; their architecture is
279
- described in more detail in Supplementary Section B.3.
280
- 2.4 Data Generation, Training and Testing Procedures
281
- We use a bot to generate the episodes we train on. The data collected consists of 56837 trajectories of
- T = 30 time steps. Among the traces some descriptions are less frequent than others, but we make
- sure to have at least 50 traces representing each of the 2672 descriptions we consider. We record
- the observed episodes and sentences in a buffer, and when training a model we sample (S, W, r)
- tuples with one observation coupled with either a true sentence from the buffer or a false
- sentence generated from the grammar. More details about the data generation can be found in
- Supplementary Section B.1.
- For each of the Transformer variants (6 models) and the LSTM baselines (2 models) we perform a
- hyperparameter search using 3 seeds in order to extract the best configuration. We extract the best
- condition for each model by measuring the mean F1 on a testing set made of uniformly sampled
- descriptions from each of the categories defined in Section 2.2. We use the F1 score because the
- testing sets are imbalanced (the number of traces fulfilling each description is low). We then retrain
- the best configurations over 10 seeds and report the mean and standard deviation (reported as solid
- black lines in Fig. 3 and Fig. 4) of the averaged F1 score computed on each set of sentences. When
- statistical significance is reported in the text, it is systematically computed using a two-tailed Welch’s
- t-test with null hypothesis µ1 = µ2, at level α = 0.05 [13]. Details about the training procedure and
- the hyperparameter search are provided in Supplementary Section B.4.
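As an informal illustration of this evaluation protocol (not the authors' evaluation code), the per-category F1 and the Welch's t-test between the seed scores of two models could be computed as:

```python
# Hypothetical sketch of the evaluation protocol: per-category F1 and Welch's t-test across seeds.
import numpy as np
from sklearn.metrics import f1_score
from scipy import stats

def category_f1(y_true, y_pred):
    # y_true, y_pred: binary arrays over one category's (trace, sentence) test pairs.
    return f1_score(y_true, y_pred)

def compare_models(scores_a, scores_b):
    # scores_a, scores_b: per-seed mean F1 scores (e.g. 10 values each).
    # Welch's t-test (unequal variances), two-tailed, null hypothesis of equal means.
    t, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return t, p

# Example: significance at level alpha = 0.05.
# t, p = compare_models(np.array(seeds_model_a), np.array(seeds_model_b)); significant = p < 0.05
```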
298
- 3 Experiments and Results
299
- 3.1 Generalization abilities of models on non-systematic split by categories of meaning
300
- In this experiment, we perform a study of generalization to new sentences from known observations.
301
- We divide our set of test sentences in four categories based on the categories of meanings listed in
302
- Section 2.2: Basic, Spatial, Spatio-Temporal and Temporal. We remove 15% of all possible sentences
303
- in each category from the train set and evaluate the F1 score on those sentences. The results are
304
- provided in Fig. 3.
305
- First, we notice that over all categories of meanings, all UTand TFTmodels, with or without word-
306
- aggregation, perform extremely well compared to the LSTM baselines, with all these four models
307
- achieving near-perfect test performance on the Basic sentences, with very little variability across the
308
- 10 seeds. We then notice that all SFTvariants perform poorly on all test categories, in line or worse
309
- than the baselines. This is particularly visible on the spatio-temporal category, where the SFTmodels
310
- perform at 0.75±0.020 whereas the baselines perform at 0.80±0.019. This suggests that across
311
- tasks, it is harmful to aggregate each scene plus the language information into a single vector. This
312
- may be due to the fact that objects lose their identity in this process, since information about all the
313
- objects becomes encoded in the same vector. This may make it difficult for the network to perform
314
- computations about the truth value of predicate on a single object.
315
- Secondly, we notice that the word-aggregation condition seems to have little effect on the performance
316
- on all three Transformer models. We only observe a significant effect for UTmodels on spatio-
317
- temporal concepts ( p-value = 2:38e-10). This suggests that the meaning of sentences can be
318
- adequately summarised by a single vector; while maintaining separated representations for each
319
- object is important for achieving good performance it seems unnecessary to do the same for linguistic
320
- input. However we notice during our hyperparameter search that our -WA models are not very robust
321
- to hyperparameter choice, with bigger variants more sensitive to the learning rate.
322
- Thirdly, we observe that for our best-performing models, the basic categories of meanings are the
- easiest, with a mean score of 1.0±0.003 across all UT and TFT models, then the spatial ones
- at 0.96±0.020, then the temporal ones at 0.96±0.009, and finally the spatio-temporal ones at
- 0.89±0.027. This effectively suggests, as we hypothesised, that sentences containing spatial relations
- or temporal concepts are harder to ground than those that do not.
327
- Known sentences with novel observations. We also examine the mean performance of our models
328
- for sentences in the training set but evaluated on a set of new observations : we generate a new set
329
- of rollouts on the environment, and only evaluate the model on sentences seen at train time (plots
330
- are reported in Supplementary Section D ). We see the performance is slightly better in this case,
331
- especially for the LSTM baselines ( 0:820:031versus 0:790:032), but the results are comparable
332
- in both cases, suggesting that the main difficulty for models lies in grounding spatio-temporal
333
- meanings and not in linguistic generalization for the type of generalization considered in this section.
334
- Figure 3: F1scores for all the models on randomly held-out sentences. F1is measured on
335
- separated sets representing each category of concepts defined in Section 2.2.
336
- 3.2 Systematic generalization on withheld combinations of words
337
- In addition to the previous generalization studies, we perform an experiment in a harder linguistic
338
- generalization setting where we systematically remove binary combinations in our train set. This is in
339
- line with previous work on systematic generalization on deep learning models [ 29,37,26]. We create
340
- five test sets to examine the abilities of our models to generalize on binary combinations of words
341
- that have been systematically removed from the set of training sentences, but whose components have
342
- been seen before in other contexts. Our splits can be described by the set of forbidden combinations
343
- of words as:
344
- 1.Forbidden object-attribute combinations. remove from the train set all sentences contain-
345
- ing’red cat’ ,’blue door’ and’green cactus’ . This tests the ability of models to recombine
346
- known objects with known attributes;
347
- 2.Forbidden predicate-object combination. remove all sentences containing ’grow’ and all
348
- objects from the ’plant’ category. This tests the model’s ability to apply a known predicate
349
- to a known object in a new combination;
350
- 3.Forbidden one-to-one relation. remove all sentences containing ’right of’ . Since the
351
- ’right’ token is already seen as-is in the context of one-to-all relations ( ’right most’ ), and
352
- other one-to-one relations are observed during training, this tests the abilities of models to
353
- recombine known directions with in a known template;
354
- 4.Forbidden past spatial relation. remove all sentences containing the contiguous tokens
355
- ’was left of’ . This tests the abilities of models to transfer a known relation to the past
356
- modality, knowing other spatial relations in the past;
357
- 5.Forbidden past predicate. remove all sentences containing the contiguous tokens ’was
358
- grasp’ . This tests the ability of the model to transfer a known predicate to the past modality,
359
- knowing that it has already been trained on other past-tense predicates.
360
- 7To avoid retraining all models for each split, we create one single train set with all forbidden sentences
361
- removed and we test separately on all splits. We use the same hyperparameters for all models than in
362
- the previous experiments. The results are reported in Fig. 4.
363
- Figure 4: F1scores of all the models on systematic generalization splits. F1is measured on
364
- separated sets representing each of the forbidden combinations of word defined above.
365
- First we can notice that the good test scores obtained by the UTand TFTmodels on the previous
366
- sections are confirmed in on this experiment: they are the best performing models overall. We then
367
- notice that the first two splits, corresponding to new attribute-object and predicate-object combinations,
368
- are solved by the UTand TFTmodels, while the SFTmodels and the LSTM baselines struggle to
369
- achieve high scores. For the next 3 splits, which imply new spatial and temporal combinations, the
370
- scores overall drop significantly; we also observe much wider variability between seeds for each
371
- model, perhaps suggesting the various strategies adopted by the models to fit the train set have very
372
- different implications in terms of systematic generalization on spatial and temporal concepts. This
373
- very high variability between seeds on systematic generalization scores are reminiscent of the results
374
- obtained on the gSCAN benchmark [37].
375
- Additionally, for split 3, which implies combining known tokens to form a new spatial relation, we
376
- observe a significant drop in generalization for the word-aggregation ( WA) conditions, consistent
377
- across models (on average across seeds, 0:140:093,0:150:234and0:200:061for
378
- UT,SFTand TFTresp. with p-values <1e-04 for UTand SFT). This may be due to the fact that
379
- recombining any one-to-one relation with the known token right seen in the context of one-to-all
380
- relations requires a separate representation for each of the linguistic tokens. The same significant
381
- drop in performance for the WAcondition can be observed for UTand TFTin split 4, which implies
382
- transferring a known spatial relation to the past.
383
- However, very surprisingly, for split 5 – which implies transposing the known predicate grasp to the
384
- past tense – we observe a very strong effect in the opposite direction: the WAcondition seems to help
385
- generalizing to this unknown past predicate (from close-to-zero scores for all transformer models,
386
- the WA adds on average 0.71±0.186, 0.45±0.178 and 0.52±0.183 points for UT, SFT and TFT
- resp., with p-values < 1e-05). This may be due to the fact that models without WA learn a direct and
388
- systematic relationship between the grasp token and grasped objects, as indicated in their features;
389
- this relation is not modulated by the addition of the wasmodifier as a prefix to the sentence. Models
390
- do not exhibit the same behavior on split 4, which has similar structure (transfer the relation left of
391
- to the past). This may be due to the lack of variability in instantaneous predicates (only the grasp
392
- predicate); whereas there are several spatial relations (4 one-to-one, 4 one-to-all).
393
- Control experiment. We evaluate previously trained models on a test set containing hard negative
394
- examples. The aim of this experiment is to ensure that models truly identify the compositional
395
- structure of our spatio-temporal concepts and do not simply perform unit concept recognition. We
396
- select negative pairs (trajectory, description) so that the trajectories contain either the object or the
397
- action described in the positive example. Results are provided in Fig. 5. We observe a slight decrease
398
- of performances on all 5 categories (drop is less than 5%), demonstrating that the models do in fact
399
- represent the meaning of the sentence and not simply the presence or absence of a particular object or
400
- predicate.
401
- 8Figure 5: Control experiment: F1scores for all the models on systematic generalization splits in
402
- the hard negative examples setting.
403
- 4 Related Work
404
- The idea that agents should learn to represent and ground language in their experience of the world
405
- has a long history in developmental robotics [ 50,39,40,7] and was recently extended in the context
406
- of Language Conditioned Deep Reinforcement Learning [ 11,22,31,3]. These recent approaches
407
- often consider navigation [ 10,9] or object manipulation [ 1,22] tasks and are always using instructive
408
- language. Meanings typically refer to instantaneous actions and rarely consider spatial reference
409
- to objects [ 35]. Although our environment includes object manipulations, we here tackle novel
410
- categories of meanings involving the grounding of spatio-temporal concepts such as the past modality
411
- or complex spatio-temporal reference to objects.
412
- We evaluate our learning architectures on their ability to generalise to sets of descriptions that contain
413
- systematic differences with the training data so as to assess whether they correctly model grammar
414
- primitives. This procedure is similar to the gSCAN benchmark [ 37]. This kind of compositional
415
- generalisation is referred to as ’systematicity’ by Hupkes et al. [26]. Environmental drivers that facilitate
416
- systematic generalization are also studied by Hill et al. [23]. Although Hupkes et al. [26] consider
417
- relational models in their work, they do not evaluate their performance on a Language Grounding
418
- task. Ruis et al. [37] consider an Embodied Language Grounding setup involving one form of time-
419
- extended meanings (adverbs), but do not consider the past modality and spatio-temporal reference to
420
- objects, and do not consider learning truth functions. Also, they do not consider learning architectures
421
- that process sequences of sensorimotor observations. To our knowledge, no previous work has
422
- conducted systematic generalization studies on an Embodied Language Grounding task involving
423
- spatio-temporal language with Transformers.
424
- The idea that relational architectures are relevant models for Language Grounding has been previously
425
- explored in the context of Visual Reasoning . They were indeed successfully applied for spatial
426
- reasoning in the visual question answering task CLEVR [38]. With the recent publication of the video
427
- reasoning dataset CLEVRER [47], those models were extended and demonstrated abilities to reason
428
- over spatio-temporal concepts, correctly answering causal, predictive and counterfactual questions
429
- [16]. In contrast to our study, these works around CLEVRER do not aim to analyze spatio-temporal
430
- language and therefore do not consider time-extended predicates or spatio-temporal reference to
431
- objects in their language, and do not study properties of systematic generalization over sets of new
432
- sentences.
433
- 5 Discussion and Conclusion
434
- In this work, we have presented a first step towards learning Embodied Language Grounding of
435
- spatio-temporal concepts, framed as the problem of learning a truth function that can predict if a
436
- given sentence is true of temporally-extended observations of an agent interacting with a collection
437
- of objects. We have studied the impact of architectural choices on successful grounding of our
438
- artificial spatio-temporal language. We have modelled different possible choices for aggregation of
439
- observations and language as hierarchical Transformer architectures. We have demonstrated that in
440
- our setting, it is beneficial to process temporally-extended observations and language tokens side-by-
441
- side, as evidenced by the good score of our Unstructured Transformer variant. However, there seems
442
- to be only minimal effect on performance in aggregating temporal observations along the temporal
443
- dimension first – compared to processing all traces and the language in an unstructured manner – as
444
- long as object identity is preserved. This can inform architectural design in cases where longer episode
445
- lengths make it impossible to store all individual timesteps for each object; our experiments provide
446
- evidence that a temporal summary can be used in these cases. Our experiments with systematic
447
- dimensions of generalization provide mixed evidence for the influence of summarizing individual
448
- words into a single vector, showing it can be detrimental to generalize to novel word combinations
449
- but also can help prevent overgeneralization of a relation between a single word and a single object
450
- without considering the surrounding linguistic context.
451
- Limitations and further work. There are several limitations of our setup which open important
452
- opportunities for further work. First, we have used a synthetic language that could be extended:
453
- for instance with more spatial relations and relations that are more than binary. Another axis for
454
- further research is using low-level observations. In our setting, we wanted to disentangle the effect
455
- of structural biases on learning spatio-temporal language from the problem of extracting objects
456
- from low level observations [ 6,21,17,30,8] in a consistent manner over time (object permanence
457
- [15,48]). Further steps in this direction are needed, and it could allow us to define richer attributes
458
- (related to material or texture) and richer temporal predicates (such as breaking, floating, etc). Finally,
459
- we use a synthetic language which is far from the richness of the natural language used by humans,
460
- but previous work has shown that natural language can be projected onto the subspace defined by
461
- synthetic language using the semantic embeddings learned by large language models [ 33]: this opens
462
- up a fruitful avenue for further investigation.
463
- A further interesting avenue for future work would be to use the grounding provided by this reward
464
- function to allow autonomous language-conditioned agents to target their own goals [ 14]. In this sense,
465
- the truth function can be seen as a goal-achievement function or reward function. While generalization
466
- performance of our method is not perfect, the good overall f1 scores of our architectures imply that
467
- they can be directly transferred to a more complete RL setup to provide signal for policies conditioned
468
- on spatio-temporal language.
469
- Broader Impact. This work provides a step in the direction of building agents that better understand
470
- how language relates to the physical world; this can lead to personal robots that can better suit the
471
- needs of their owners because they can be interacted with using language. If successfully implemented,
472
- this technology can raise issues concerning automation of certain tasks resulting in loss of jobs for
473
- less-qualified workers.
474
- Links. The source code as well as the generated datasets can be found at
- https://github.com/flowersteam/spatio-temporal-language-transformers
476
- Acknowledgments and Disclosure of Funding
477
- Tristan Karch is partly funded by the French Ministère des Armées - Direction Générale de
478
- l’Armement. Laetitia Teodorescu is supported by Microsoft Research through its PhD Scholar-
479
- ship Programme. This work was performed using HPC resources from GENCI-IDRIS (Grant
480
- 2020-A0091011996)
481
- References
482
- [1]Ahmed Akakzia, Cédric Colas, Pierre-Yves Oudeyer, Mohamed Chetouani, and Olivier
483
- Sigaud. Grounding Language to Autonomously-Acquired Skills via Goal Generation.
484
- InICLR 2021 - Ninth International Conference on Learning Representation , Vienna / Virtual,
485
- Austria, May 2021. URL https://hal.inria.fr/hal-03121146 .
486
- [2]James F. Allen. Towards a general theory of action and time. Artificial Intelligence , 23(2):
487
- 123–154, 1984. ISSN 0004-3702. doi: https://doi.org/10.1016/0004-3702(84)90008-0. URL
488
- https://www.sciencedirect.com/science/article/pii/0004370284900080 .
489
- 10[3]Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Pushmeet Kohli, and Edward
490
- Grefenstette. Learning to understand goal specifications by modelling reward. In International
491
- Conference onLearning Representations , 2019. URL https://openreview.net/forum?
492
- id=H1xsSjC9Ym .
493
- [4]John A. Bateman, Joana Hois, Robert Ross, and Thora Tenbrink. A linguistic ontology of space
494
- for natural language processing. Artificial Intelligence , 174(14):1027–1071, 2010. ISSN 0004-
495
- 3702. doi: https://doi.org/10.1016/j.artint.2010.05.008. URL https://www.sciencedirect.
496
- com/science/article/pii/S0004370210000858 .
497
- [5]Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zam-
498
- baldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner,
499
- Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani,
500
- Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra,
501
- Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational
502
- inductive biases, deep learning, and graph networks, 2018.
503
- [6]Christopher P. Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt
504
- Botvinick, and Alexander Lerchner. Monet: Unsupervised scene decomposition and representa-
505
- tion, 2019.
506
- [7]Angelo Cangelosi, Giorgio Metta, Gerhard Sagerer, Stefano Nolfi, Chrystopher Nehaniv, Kerstin
507
- Fischer, Jun Tani, Tony Belpaeme, Giulio Sandini, Francesco Nori, Luciano Fadiga, Britta
508
- Wrede, Katharina Rohlfing, Elio Tuci, Kerstin Dautenhahn, Joe Saunders, and Arne Zeschel.
509
- Integration of action and language knowledge: A roadmap for developmental robotics. IEEE
510
- Transactions onAutonomous Mental Development , 2(3):167–195, 2010. doi: 10.1109/TAMD.
511
- 2010.2053034.
512
- [8]Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and
513
- Sergey Zagoruyko. End-to-end object detection with transformers, 2020.
514
- [9]Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj
515
- Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language
516
- grounding, 2018.
517
- [10] David L. Chen and Raymond J. Mooney. Learning to interpret natural language navigation
518
- instructions from observations. In Proceedings oftheTwenty-Fifth AAAI Conference on
519
- Artificial Intelligence, AAAI’11, page 859–865. AAAI Press, 2011.
520
- [11] Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan
521
- Saharia, Thien Huu Nguyen, and Yoshua Bengio. Babyai: A platform to study the sample
522
- efficiency of grounded language learning, 2019.
523
- [12] Geoffrey Cideron, Mathieu Seurin, Florian Strub, and Olivier Pietquin. Higher: Improving
524
- instruction following with hindsight generation for experience replay. In 2020 IEEE Symposium
525
- Series onComputational Intelligence (SSCI) , pages 225–232, 2020. doi: 10.1109/SSCI47803.
526
- 2020.9308603.
527
- [13] Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. A hitchhiker’s guide to statistical
528
- comparisons of reinforcement learning algorithms, 2019.
529
- [14] Cédric Colas, Tristan Karch, Nicolas Lair, Jean-Michel Dussoux, Clément Moulin-Frier, Pe-
530
- ter Ford Dominey, and Pierre-Yves Oudeyer. Language as a cognitive tool to imagine goals in
531
- curiosity-driven exploration, 2020.
532
- [15] Antonia Creswell, Kyriacos Nikiforou, Oriol Vinyals, Andre Saraiva, Rishabh Kabra, Loic
533
- Matthey, Chris Burgess, Malcolm Reynolds, Richard Tanburn, Marta Garnelo, and Murray
534
- Shanahan. Alignnet: Unsupervised entity alignment, 2020.
535
- [16] David Ding, Felix Hill, Adam Santoro, and Matt Botvinick. Object-based attention for spatio-
536
- temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architec-
537
- tures, 2020.
538
- 11[17] Martin Engelcke, Adam R. Kosiorek, Oiwi Parker Jones, and Ingmar Posner. Genesis: Genera-
539
- tive scene inference and sampling with object-centric latent representations, 2020.
540
- [18] Dedre Gentner and Jeffrey Loewenstein. Relational language and relational thought, 2002.
541
- [19] James J. (James Jerome) Gibson. The senses considered as perceptual systems. Allen & Unwin,
- London, 1968.
543
- [20] Arthur M. Glenberg and Michael P. Kaschak. Grounding language in action. Psychonomic
544
- Bulletin &Review , 9(3):558–565, September 2002. ISSN 1531-5320. doi: 10.3758/
545
- BF03196313. URL https://doi.org/10.3758/BF03196313 .
546
- [21] Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel
547
- Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-object representation
548
- learning with iterative variational inference, 2020.
549
- [22] Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer,
550
- David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus
551
- Wainwright, Chris Apps, Demis Hassabis, and Phil Blunsom. Grounded language learning in a
552
- simulated 3d world, 2017.
553
- [23] Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L.
554
- McClelland, and Adam Santoro. Environmental drivers of systematicity and generalization in a
555
- situated agent, 2020.
556
- [24] Xavier Hinaut, Maxime Petit, Gregoire Pointeau, and Peter Ford Dominey. Exploring the
557
- acquisition and production of grammatical constructions through human-robot interaction with
558
- echo state networks. Frontiers inneurorobotics, 8:16, 2014.
559
- [25] David Yu-Tung Hui, Maxime Chevalier-Boisvert, Dzmitry Bahdanau, and Yoshua Bengio.
560
- Babyai 1.1, 2020.
561
- [26] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed:
562
- how do neural networks generalise?, 2020.
563
- [27] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick,
564
- and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary
565
- visual reasoning, 2016.
566
- [28] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
567
- [29] Brenden M. Lake and Marco Baroni. Generalization without systematicity: On the composi-
568
- tional skills of sequence-to-sequence recurrent networks, 2018.
569
- [30] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg
570
- Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with
571
- slot attention, 2020.
572
- [31] Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward
573
- Grefenstette, Shimon Whiteson, and Tim Rocktäschel. A survey of reinforcement learning
574
- informed by natural language, 2019.
575
- [32] Olivier Mangin, David Filliat, Louis Ten Bosch, and Pierre-Yves Oudeyer. Mca-nmf: Multi-
576
- modal concept acquisition with non-negative matrix factorization. PloS one, 10(10):e0140732,
577
- 2015.
578
- [33] Alana Marzoev, Samuel Madden, M. Frans Kaashoek, Michael Cafarella, and Jacob Andreas.
579
- Unnatural language processing: Bridging the gap between synthetic and natural language data,
580
- 2020.
581
- [34] Khanh Nguyen, Dipendra Misra, Robert Schapire, Miro Dudík, and Patrick Shafto. Interactive
582
- learning from activity description, 2021.
583
- 12[35] Rohan Paul, Jacob Arkin, N. Roy, and T. Howard. Efficient grounding of abstract spatial
584
- concepts for natural language interaction with robot manipulators. In Robotics: Science and
585
- Systems, 2016.
586
- [36] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. Film:
587
- Visual reasoning with a general conditioning layer, 2017.
588
- [37] Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, and Brenden M. Lake. A
589
- benchmark for systematic generalization in grounded language understanding, 2020.
590
- [38] Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter
591
- Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning,
592
- 2017.
593
- [39] L. Steels. Semiotic dynamics for embodied agents. IEEE Intelligent Systems , 21(3):32–38,
594
- 2006. doi: 10.1109/MIS.2006.58.
595
- [40] Yuuya Sugita and Jun Tani. Learning semantic combinatoriality from the interaction between
596
- linguistic and behavioral processes. Adaptive behavior, 13(1):33–52, 2005.
597
- [41] Jun Tani. Exploring robotic minds: actions, symbols, and consciousness as self-organizing
- dynamic phenomena. Oxford University Press, 2016.
599
- [42] Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata,
600
- and Hideki Asoh. Symbol emergence in robotics: a survey. Advanced Robotics , 30(11-12):
601
- 706–728, 2016.
602
- [43] Thora Tenbrink. Space, Time, and the Use of Language: An Investigation of Relationships. De
- Gruyter Mouton, 2008. ISBN 978-3-11-019882-9. doi: 10.1515/9783110198829. URL
- https://doi.org/10.1515/9783110198829.
605
- [44] Thora Tenbrink. Reference frames of space and time in language. Journal ofPragmatics , 43(3):
606
- 704–722, 2011. ISSN 0378-2166. doi: https://doi.org/10.1016/j.pragma.2010.06.020. URL
607
- https://www.sciencedirect.com/science/article/pii/S037821661000192X . The
608
- Language of Space and Time.
609
- [45] Elio Tuci, Tomassino Ferrauto, Arne Zeschel, Gianluca Massera, and Stefano Nolfi. An
610
- experiment on behavior generalization and the emergence of linguistic compositionality in
611
- evolving robots. IEEE Transactions onAutonomous Mental Development , 3(2):176–189, 2011.
612
- [46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
613
- Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017.
614
- [47] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B.
615
- Tenenbaum. Clevrer: Collision events for video representation and reasoning, 2020.
616
- [48] Honglu Zhou, Asim Kadav, Farley Lai, Alexandru Niculescu-Mizil, Martin Renqiang Min,
617
- Mubbasir Kapadia, and Hans Peter Graf. Hopper: Multi-hop transformer for spatiotemporal
618
- reasoning. In International Conference onLearning Representations , 2021. URL https:
619
- //openreview.net/forum?id=MaZFq7bJif7 .
620
- [49] Li Zhou and Kevin Small. Inverse reinforcement learning with natural language goals, 2020.
621
- [50] Rolf A. Zwaan and Carol J. Madden. Embodied Sentence Comprehension , page 224–245.
622
- Cambridge University Press, 2005. doi: 10.1017/CBO9780511499968.010.
623
- Checklist
624
- 1. For all authors...
625
- (a)Do the main claims made in the abstract and introduction accurately reflect the paper’s
626
- contributions and scope? [Yes] , we propose a summary of our results supporting our
627
- contributions in Section 5.
628
- (b) Did you describe the limitations of your work? [Yes] , in the concluding Section 5.
629
- (c) Did you discuss any potential negative societal impacts of your work? [Yes], in the
- Broader Impact paragraph in the Conclusion section 5.
631
- (d)Have you read the ethics review guidelines and ensured that your paper conforms to
632
- them? [Yes]
633
- 2. If you are including theoretical results...
634
- (a) Did you state the full set of assumptions of all theoretical results? [N/A]
635
- (b) Did you include complete proofs of all theoretical results? [N/A]
636
- 3. If you ran experiments...
637
- (a)Did you include the code, data, and instructions needed to reproduce the main experi-
638
- mental results (either in the supplemental material or as a URL)? [Yes] , the code is
639
- given as supplementary material.
640
- (b)Did you specify all the training details (e.g., data splits, hyperparameters, how they
641
- were chosen)? [Yes] , see Sections 3.1 and 3.2.
642
- (c)Did you report error bars (e.g., with respect to the random seed after running exper-
643
- iments multiple times)? [Yes] , all reported scores were averaged over 10 seeds and
644
- std is reported in all plots (Fig. 3, Fig. 4 of Section 3) and in the main text. We also
645
- performed statistical tests to measure significance.
646
- (d)Did you include the total amount of compute and the type of resources used (e.g.,
647
- type of GPUs, internal cluster, or cloud provider)? [Yes] , details are given in
648
- Supplementary Section D.2.
649
- 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
650
- (a) If your work uses existing assets, did you cite the creators? [N/A]
651
- (b) Did you mention the license of the assets? [N/A]
652
- (c)Did you include any new assets either in the supplemental material or as a URL? [N/A]
653
- (d)Did you discuss whether and how consent was obtained from people whose data you’re
654
- using/curating? [N/A]
655
- (e)Did you discuss whether the data you are using/curating contains personally identifiable
656
- information or offensive content? [N/A]
657
- 5. If you used crowdsourcing or conducted research with human subjects...
658
- (a)Did you include the full text of instructions given to participants and screenshots, if
659
- applicable? [N/A]
660
- (b)Did you describe any potential participant risks, with links to Institutional Review
661
- Board (IRB) approvals, if applicable? [N/A]
662
- (c)Did you include the estimated hourly wage paid to participants and the total amount
663
- spent on participant compensation? [N/A]
664
- SUPPLEMENTARY MATERIAL
665
- A Supplementary: Temporal Playground Specifications
666
- A.1 Environment
667
- Temporal Playground is a procedurally generated environment consisting of 3 objects and an agent’s
668
- body. There are 32 types of objects, listed in Fig. 6 along with 5 object categories. Each object has a
669
- continuous 2D position, a size, a continuous color code specified by a 3D vector in RGB space, a type
670
- specified by a one-hot vector, and a boolean unit specifying whether it is grasped. Note that categories
671
- are not encoded in the objects’ features. The agent’s body has its 2D position in the environment and
672
- its gripper state (grasping or non-grasping) as features. The size of the body feature vector is 3 while
673
- the object feature vector has a size of 39. This environment is a spatio-temporal extension of the one
674
- used in this work [14].
675
- All positions are constrained within [-1, 1]^2. The initial position of the agent is (0, 0) while the
- initial object positions are randomized so that they are not in contact (d(obj_1, obj_2) > 0.3). Object
- sizes are sampled uniformly in [0.2, 0.3]; the size of the agent is 0.05. Objects can be grasped
- when the agent has nothing in hand, when it is close enough to the object center (d(agent, obj) <
- (size(agent) + size(obj)) / 2) and the gripper is closed (1; -1 when open). When a supply is on an
- animal or water is on a plant (contact being defined as the distance between objects being equal to the
- mean size of the two objects, d = (size(obj_1) + size(obj_2)) / 2), the object will grow over time with a
- constant growth rate until it reaches the maximum size allowed for objects or until contact is lost.
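To make these grasping and growing rules concrete, here is a minimal sketch (function names, the dict-based object representation and the gripper sign convention are our own assumptions, not the paper's released code):

    import numpy as np

    def can_grasp(agent_pos, agent_size, gripper, obj_pos, obj_size, holding):
        # Grasping requires an empty hand, a closed gripper and proximity to the object center.
        close_enough = np.linalg.norm(agent_pos - obj_pos) < (agent_size + obj_size) / 2
        return (not holding) and gripper > 0 and close_enough

    def grow_if_fed(obj, other, growth_rate=0.005, max_size=0.3):
        # A supply on an animal, or water on a plant, makes the object grow at a constant rate.
        in_contact = np.linalg.norm(obj["pos"] - other["pos"]) <= (obj["size"] + other["size"]) / 2
        feeds = (other["category"] == "supply" and obj["category"] == "animal") or \
                (other["type"] == "water" and obj["category"] == "plant")
        if in_contact and feeds:
            obj["size"] = min(obj["size"] + growth_rate, max_size)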
- [Figure 6 is a table of the 32 object types grouped into the categories furniture, animal, plant, supply and
- living thing: animals (dog, cat, chameleon, human, fly, parrot, mouse, lion, pig, cow), plants (cactus,
- carnivorous, flower, tree, bush, grass, algae, tea, rose, bonsai), furniture (cupboard, sink, window, sofa,
- carpet, door, chair, desk, lamp, table) and supplies (water, food). Annotations indicate which objects can
- move independently and be grown with food or water, and which can be grown with water.]
- Figure 6: Representation of possible object types and categories. Information about the possible
- interactions between objects is also given.
719
- A.2 Language
720
- Grammar. The synthetic language we use can be decomposed into two components: the instanta-
721
- neous grammar and the temporal logic. Both are specified through the BNF given in Figure 7.
722
- Instantaneous grammar:
723
- <S> ::= <pred> <thing_A>
724
- <pred> ::= grow | grasp | shake
725
- <thing_A> ::= <thing_B> | <attr> <thing_B> | thing <localizer> |
726
- thing <localizer_all>
727
- <localizer> ::= left of <thing_B> | right of <thing_B> |
728
- bottom of <thing_B> | top of <thing_B>
729
- <localizer_all> ::= left most | right most | bottom most | top most
730
- <thing_B> ::= dog | cat | … | thing
731
- <attr> ::= blue | green | red
732
- Temporal aspect:
733
- <S> ::= was <pred> <thing_A>
734
- <thing_A> ::= thing was <localizer> | thing was <localizer_all>
735
- Figure 7: BNF of the grammar used in Temporal Playground . The instantaneous grammar allows
736
- generating true sentences about predicates, spatial relations (one-to-one and one to all). These
737
- sentences are then processed by the temporal logic to produce the linguistic descriptions of our
738
- observations; this step is illustrated in the Temporal Aspect rules. See the main text for information
739
- on how these sentences are generated.
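As an illustration, a toy sampler over a small subset of this grammar could look as follows (the rule set here is deliberately abbreviated and the function name is ours, purely illustrative):

    import random

    # Abbreviated subset of the instantaneous grammar from Figure 7.
    GRAMMAR = {
        "<S>": [["<pred>", "<thing_A>"]],
        "<pred>": [["grasp"], ["grow"], ["shake"]],
        "<thing_A>": [["<thing_B>"], ["<attr>", "<thing_B>"], ["thing", "<localizer>"]],
        "<localizer>": [["left", "of", "<thing_B>"], ["right", "of", "<thing_B>"]],
        "<thing_B>": [["dog"], ["cat"], ["thing"]],
        "<attr>": [["blue"], ["green"], ["red"]],
    }

    def expand(symbol):
        # Recursively expand a non-terminal into a flat list of tokens.
        if symbol not in GRAMMAR:
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [token for part in production for token in expand(part)]

    print(" ".join(expand("<S>")))  # e.g. "grasp blue dog"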
740
- Concept Definition. We split the set of all possible descriptions output by our grammar into four
741
- conceptual categories according to the rules given in Table 1.
742
- Concept 1. Basic (size 152):
-   <S> ::= <pred> <thing_A>
-   <pred> ::= grasp
-   <thing_A> ::= <thing_B> | <attr> <thing_B>
- Concept 2. Spatial (size 156):
-   <S> ::= <pred> <thing_A>
-   <pred> ::= grasp
-   <thing_A> ::= thing <localizer> | thing <localizer_all>
- Concept 3. Temporal (size 648):
-   <S> ::= <pred_A> <thing_A> | was <pred_B> <thing_A>
-   <pred_A> ::= grow | shake
-   <pred_B> ::= grasp | grow | shake
-   <thing_A> ::= <thing_B> | <attr> <thing_B>
- Concept 4. Spatio-Temporal (size 1716):
-   <S> ::= <pred_A> <thing_A> | was <pred_B> <thing_A> | <pred_C> <thing_C>
-   <pred_A> ::= grow | shake
-   <pred_B> ::= grasp | grow | shake
-   <pred_C> ::= grasp
-   <thing_A> ::= thing <localizer> | thing <localizer_all> | thing was <localizer> | thing was <localizer_all>
-   <thing_C> ::= thing was <localizer> | thing was <localizer_all>
- Table 1: Concept categories with their associated BNF. <thing_B>, <attr>, <localizer> and
- <localizer_all> are given in Fig. 7.
766
- B Supplementary Methods
767
- B.1 Data Generation
768
- Scripted bot. To generate the traces matching the descriptions of our grammar we define a set of
769
- scenarii that correspond to sequences of actions required to fulfill the predicates of our grammar,
770
- namely grasp, grow and shake. Those scenarii are then conditioned on a boolean that modulates them
771
- to obtain a mix of predicates in the present and the past tenses. For instance, if a grasp scenario is
772
- sampled, there will be a 50% chance that the scenario will end with the object being grasped, leading
773
- to a present-tense description; and a 50% chance that the agent releases the object, yielding a past
774
- tense description.
775
- Description generation from behavioral traces of the agent. For each time step, the instanta-
776
- neous grammar generates the set of all true instantaneous sentences using a set of filtering operations
777
- similar to the one used in CLEVR [ 27], without the past predicates and past spatial relations. Then
778
- the temporal logic component uses these linguistic traces in the following way: if a given sentence
779
- for a predicate is true in a past time step and false in the present time step, the prefix token ’was’ is
780
- prepended to the sentence; similarly, if a given spatial relation is observed in a previous time step and
781
- unobserved in the present, the prefix token ’was’ is prepended to the spatial relation.
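A minimal sketch of this 'was'-prefixing rule is given below (names are ours; for simplicity the sketch prefixes whole sentences, whereas the actual system places 'was' inside spatial-relation sentences):

    def add_past_descriptions(linguistic_trace):
        """linguistic_trace: list over time steps, each a set of true instantaneous sentences."""
        present = linguistic_trace[-1]
        seen_before = set().union(*linguistic_trace[:-1]) if len(linguistic_trace) > 1 else set()
        # A sentence true at some past step but false now is reported in the past tense.
        past_only = seen_before - present
        return present | {"was " + sentence for sentence in past_only}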
782
- B.2 Input Encoding
783
- We present the input processing in Fig. 8. At each time step t, the body feature vector b_t and the
- object feature vectors o_{i,t}, i = 1, 2, 3 are encoded using two single-layer neural networks whose
- outputs are of size h. Similarly, each of the words of the sentence describing the trace (represented
- as one-hot vectors) is encoded and projected into the dimension of size h. We concatenate to the
- vector obtained a modality token m that defines whether the output belongs to the scene (1, 0) or to the
- description (0, 1). We then feed the resulting vectors to a positional encoding that modulates the
- vectors according to the time step in the trace for b_t and o_{i,t}, i = 1, 2, 3, and according to the
- position of the word in the description for w_l.
- We call the encoded body features \hat{b}_t, which correspond to \hat{S}_{0,t} of the input tensor of our
- model (see Fig. 2 in the main document). Similarly, \hat{o}_{i,t}, i = 1, 2, 3 are the encoded object
- features corresponding to \hat{S}_{i,t}, i = 1, 2, 3. Finally, \hat{w}_l are the encoded words, i.e. the
- components of tensor \hat{W}.
- We call h the hidden size of our models and recall that |\hat{b}_t| = |\hat{o}_{i,t}| = |\hat{w}_l| = h + 2.
- This parameter is varied during the hyper-parameter search.
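A rough PyTorch sketch of this encoding step (dimensions, the vocabulary size and the learned positional encoding are our own simplifying assumptions):

    import torch
    import torch.nn as nn

    class InputEncoder(nn.Module):
        def __init__(self, body_dim=3, obj_dim=39, vocab_size=40, h=256, max_len=64):
            super().__init__()
            self.body_proj = nn.Linear(body_dim, h)   # single-layer encoders
            self.obj_proj = nn.Linear(obj_dim, h)
            self.word_proj = nn.Linear(vocab_size, h)
            # Simplified learned positional encoding over time steps / word positions.
            self.pos = nn.Parameter(torch.randn(max_len, h + 2))

        @staticmethod
        def _add_modality(x, is_scene):
            # Appends the 2-d modality token: (1, 0) for scene entities, (0, 1) for words.
            token = x.new_tensor([1.0, 0.0] if is_scene else [0.0, 1.0]).expand(*x.shape[:-1], 2)
            return torch.cat([x, token], dim=-1)

        def forward(self, body, objects, words):
            # body: (T, 3); objects: (T, 3, 39); words: (L, vocab_size) one-hot.
            T, L = body.shape[0], words.shape[0]
            b_hat = self._add_modality(self.body_proj(body), True) + self.pos[:T]
            o_hat = self._add_modality(self.obj_proj(objects), True) + self.pos[:T].unsqueeze(1)
            w_hat = self._add_modality(self.word_proj(words), False) + self.pos[:L]
            return b_hat, o_hat, w_hat   # each feature vector has size h + 2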
796
- [Figure 8 is a diagram showing the body encoder, object encoder and language encoder, each output being
- concatenated with the modality token m and modulated by a positional encoding.]
- Figure 8: Input encoding. Body, words and objects are all projected in the same dimension.
811
- B.3 Details on LSTM models
812
- To provide baseline models on our tasks we consider two LSTM variants. They are interesting
813
- baselines because they do not perform any relational computation except for relations between inputs
814
- at successive time steps. We consider the inputs as they were defined in Section 2.3 of the main paper.
815
- We consider two LSTM variants:
816
- 1. LSTM-FLAT: This variant has two internal LSTMs: one that processes the language and one
- that processes the scenes as concatenations of all the body and object features. This produces
- two vectors that are concatenated into one, which is then run through an MLP and a final
- softmax to produce the final output.
- 2. LSTM-FACTORED: This variant independently processes the different body and object
- traces, which have previously been projected to the same dimension using a separate linear
- projection for the object and for the body. The language is processed by a separate LSTM.
- These body, object and language vectors are finally concatenated and fed to a final MLP and
- a softmax to produce the output.
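For concreteness, LSTM-FLAT might be implemented roughly as follows (a sketch with our own names and sizes, not the released code):

    import torch
    import torch.nn as nn

    class LSTMFlat(nn.Module):
        def __init__(self, scene_dim=3 + 3 * 39, vocab_size=40, h=512):
            super().__init__()
            self.scene_lstm = nn.LSTM(scene_dim, h, batch_first=True)
            self.lang_lstm = nn.LSTM(vocab_size, h, batch_first=True)
            self.head = nn.Sequential(nn.Linear(2 * h, h), nn.ReLU(), nn.Linear(h, 2))

        def forward(self, scenes, words):
            # scenes: (B, T, scene_dim) = body and object features concatenated per time step.
            # words: (B, L, vocab_size) one-hot description tokens.
            _, (scene_h, _) = self.scene_lstm(scenes)
            _, (lang_h, _) = self.lang_lstm(words)
            joint = torch.cat([scene_h[-1], lang_h[-1]], dim=-1)
            return torch.softmax(self.head(joint), dim=-1)  # probability of (false, true)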
825
- B.4 Details on Training Schedule
826
- Implementation Details. The architectures are trained via backpropagation using the Adam
827
- Optimizer[ 28]. The data is fed to the model in batches of 512 examples for 150 000 steps. We use a
828
- modular buffer to sample an important variety of different descriptions in each batch and to impose a
829
- ratio of positive samples of 0.1 for each description in each batch.
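One way to realize such a buffer is sketched below (the 512 batch size and 0.1 positive ratio come from the text above; the class interface and storage layout are our own assumptions):

    import random
    from collections import defaultdict

    class ModularBuffer:
        """Stores (trace, description, label) tuples grouped by description."""
        def __init__(self, positive_ratio=0.1):
            self.positive_ratio = positive_ratio
            self.store = defaultdict(lambda: {"pos": [], "neg": []})

        def add(self, trace, description, label):
            self.store[description]["pos" if label else "neg"].append(trace)

        def sample_batch(self, batch_size=512):
            batch = []
            descriptions = list(self.store)
            for _ in range(batch_size):
                d = random.choice(descriptions)                   # spread descriptions across the batch
                positive = random.random() < self.positive_ratio  # enforce the positive-sample ratio
                pool = self.store[d]["pos" if positive else "neg"]
                if pool:
                    batch.append((random.choice(pool), d, int(positive)))
            return batch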
830
- Model implementations. We used the standard implementations of TransformerEncoderLayer and
831
- TransformerEncoder from pytorch version 1.7.1, as well as the default LSTM implementation. For
832
- initialization, we also use pytorch defaults.
833
- Hyper-parameter search. To pick the best set of parameters for each of our eight models, we
834
- train them on 18 conditions and select the best models. Note that each condition is run for 3 seeds
835
- and best models are selected according to their averaged F1score on randomly held-out descriptions
836
- (15% of the sentences in each category given in Table 1).
837
- Best models. Best models obtained thanks to the parameter search are given in Table 2.
838
- Model          | Learning rate | hidden size | layer count | head count | param count
- UT             | 1e-4          | 256         | 4           | 8          | 1.3M
- UT-WA          | 1e-5          | 512         | 4           | 8          | 14.0M
- TFT            | 1e-4          | 256         | 4           | 4          | 3.5M
- TFT-WA         | 1e-5          | 512         | 4           | 8          | 20.3M
- SFT            | 1e-4          | 256         | 4           | 4          | 3.5M
- SFT-WA         | 1e-4          | 256         | 2           | 8          | 2.7M
- LSTM-FLAT      | 1e-4          | 512         | 4           | N/A        | 15.6M
- LSTM-FACTORED  | 1e-4          | 512         | 4           | N/A        | 17.6M
- Table 2: Hyperparameters for all models.
849
- Robustness to hyperparameters For some models, we have observed a lack of robustness to
850
- hyperparameters during our search. This translated to models learning to predict all observation-
851
- sentence tuples as false since the dataset is imbalanced (the proportion of true samples is 0.1). This
852
- behavior was systematically observed with a series of models whose hyperparameters are listed in
853
- Table 3. This happens with the biggest models with high learning rates, especially with the -WA
854
- variants.
855
- Model  | Learning rate | hidden size | layer count | head count
- UT-WA  | 1e-4          | 512         | 4           | 4
- UT-WA  | 1e-4          | 512         | 4           | 8
- SFT    | 1e-4          | 512         | 4           | 4
- SFT-WA | 1e-4          | 512         | 4           | 8
- SFT-WA | 1e-4          | 512         | 2           | 4
- SFT-WA | 1e-4          | 512         | 4           | 4
- TFT    | 1e-4          | 512         | 4           | 4
- TFT-WA | 1e-4          | 512         | 4           | 8
- TFT-WA | 1e-4          | 512         | 2           | 4
- TFT-WA | 1e-4          | 512         | 4           | 4
- Table 3: Unstable models. Models and hyperparameters collapsing into uniform false prediction.
868
- C Supplementary Discussion: Formal descriptions of spatio-temporal
869
- meanings
870
- The study of spatial and temporal aspects of language has a long history in Artificial Intelligence and
871
- linguistics, where researchers have tried to define formally the semantics of such uses of language.
872
- For instance, work in temporal logic [ 2] has tried to create rigorous definitions of various temporal
873
- aspects of action reflected in the English language, such as logical operations on time intervals (an
874
- action fulfilling itself simultaneously with another, before, or after), non-action events (standing
875
- still for one hour), and event causality. These formal approaches have been complemented by work
876
- in pragmatics trying to define language user’s semantics as relates to spatial and temporal aspects
877
- of language. For instance, Tenbrink [43] examines the possible analogies to be made between
878
- relationships between objects in the spatial domain and relationships between events in a temporal
879
- domain, and concludes empirically that these aspects of language are not isomorphic and have their
880
- own specific rules. Within the same perspective, a formal ontology of space is developed in [ 4],
881
- whose complete system can be used to achieve contextualized interpretations of language users’
882
- spatial language. Spatial relations in everyday language use are also specified by the perspective
883
- used by the speaker; a formal account of this system is given in [ 44], where the transferability of
884
- these representations to temporal relations between events is also studied. These lines of work are of
885
- great relevance to our approach, especially the ones involving spatial relationships. We circumvent
886
- the problem of reference frames by placing ourselves in an absolute reference system where the x-y
887
- directions unambiguously define the concepts of left,right ,top,bottom ; nevertheless these studies
888
- would be very useful in a context where the speaker would also be embodied and speak from a
889
- different perspective. As for the temporal aspect, these lines of work focus on temporal relations
890
- between separate events, which is not the object of our study here; we are concerned about single
891
- actions (as opposed to several events) unfolding, in the past or present, over several time steps.
892
- D Supplementary Results
893
- D.1 Generalization to new observations from known sentences
894
- Figure 9: Generalization to new traces of observations. F1 scores of all models on the train
895
- sentences with new observations. UT and TFT outperform other models on all four categories of
896
- meanings.
897
- D.2 Computing Resources
898
- This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-A0091011996).
899
- We used 22k GPU-hours on nvidia-V100 GPUs for the development phase, hyperparameter search,
900
- and the main experiments.
901
txt/2106.09564.txt DELETED
@@ -1,366 +0,0 @@
1
- Knowledge distillation from multi-modal to
2
- mono-modal segmentation networks
3
- Minhao Hu1;2?, Matthis Maillard2?( ), Ya Zhang1( ), Tommaso Ciceri2,
4
- Giammarco La Barbera2, Isabelle Bloch2, and Pietro Gori2
5
- 1CMIC, Shanghai Jiao Tong University, Shanghai, China
6
- 2 LTCI, Télécom Paris, Institut Polytechnique de Paris, France
7
8
9
- Abstract. The joint use of multiple imaging modalities for medical im-
10
- age segmentation has been widely studied in recent years. The fusion of
11
- information from different modalities has been shown to improve the
12
- segmentation accuracy, with respect to mono-modal segmentations, in
13
- several applications. However, acquiring multiple modalities is usually
14
- not possible in a clinical setting due to a limited number of physicians
15
- and scanners, and to limit costs and scan time. Most of the time, only
16
- one modality is acquired. In this paper, we propose KD-Net, a framework
17
- to transfer knowledge from a trained multi-modal network (teacher) to
18
- a mono-modal one (student). The proposed method is an adaptation
19
- of the generalized distillation framework where the student network is
20
- trained on a subset (1 modality) of the teacher's inputs (n modalities).
21
- We illustrate the effectiveness of the proposed framework in brain tumor
- segmentation with the BraTS 2018 dataset. Using different architectures,
- we show that the student network effectively learns from the teacher and
24
- always outperforms the baseline mono-modal network in terms of seg-
25
- mentation accuracy.
26
- 1 Introduction
27
- Using multiple modalities to automatically segment medical images has become
28
- a common practice in several applications, such as brain tumor segmentation [11]
29
- or ischemic stroke lesion segmentation [10]. Since different image modalities can
- accentuate and better describe different tissues, their fusion can improve the seg-
- mentation accuracy. Although multi-modal models usually give the best results,
- it is often difficult to obtain multiple modalities in a clinical setting due to a
33
- limited number of physicians and scanners, and to limit costs and scan time. In
34
- many cases, especially for patients with pathologies or for emergency, only one
35
- modality is acquired.
36
- Two main strategies have been proposed in the literature to deal with prob-
37
- lems where multiple modalities are available at training time but some or most
38
- * The two first authors contributed equally to this paper.
39
- of them are missing at inference time. The first one is to train a generative model
40
- to synthesize the missing modalities and then perform multi-modal segmenta-
41
- tion. In [13], the authors have shown that using a synthesized modality helps
42
- improving the accuracy of classification of brain tumors. Ben Cohen et al. [1]
43
- generated PET images from CT scans to reduce the number of false positives in
44
- the detection of malignant lesions in livers. Generating a synthesized modality
45
- has also been shown to improve the quality of the segmentation of white matter
46
- hypointensities [12]. The main drawback of this strategy is that it is compu-
47
- tationally cumbersome, especially when many modalities are missing. In fact,
48
- one needs to train one generative network per missing modality in addition to a
49
- multi-modal segmentation network.
50
- The second strategy consists in learning a modality-invariant feature space
51
- that encodes the multi-modal information during training, and that allows for all
52
- possible combinations of modalities during inference. Within this second strat-
53
- egy, Havaei et al. proposed HeMIS [4], a model that, for each modality, trains a
54
- different feature extractor. The first two moments of the feature maps are then
55
- computed and concatenated in the latent space from which a decoder is trained
56
- to predict the segmentation map. Dorent et al. [3], inspired by HeMIS, proposed
57
- U-HVED where they introduced skip-connections by considering intermediate
58
- layers, before each down-sampling step, as a feature map. This network outper-
59
- formed HeMIS on BraTS 2018 dataset. In [2], instead of fusing the layers by
60
- computing mean and variance, the authors learned a mapping function from the
61
- multiple feature maps to the latent space. They claimed that computing the
62
- moments to fuse the maps is not satisfactory since it makes each modality con-
63
- tribute equally to the final result which is inconsistent with the fact that each
- modality highlights different zones. They obtained better results than HeMIS
65
- on BraTS 2015 dataset. This second strategy has good results only when one
66
- or two modalities are missing, however, when only one modality is available, it
67
- has worse results than a model trained on this specific modality. This kind of
- method is therefore not suitable for a clinical setting where only one modality
69
- is usually acquired, such as pre-operative neurosurgery or radiotherapy.
70
- In this paper, in contrast to the previously presented methods, we propose a
71
- framework to transfer knowledge from a multi-modal network to a mono-modal
72
- one. The proposed method is based on generalized knowledge distillation [9]
73
- which is a combination of distillation [5] and privileged information [14]. Distil-
74
- lation has originally been designed for classification problems to make a small
75
- network (Student) learn from an ensemble of networks or from a large network
76
- (Teacher). It has been applied to image segmentation in [8,15] where the same in-
77
- put modalities have been used for the Teacher network and the Student network.
78
- In [15], the Student learns from the Teacher only thanks to a loss term between
79
- their outputs. In [8], the authors also constrained the intermediate layers of the
80
- Student to be similar to the ones of the Teacher. With a different perspective,
81
- the framework of privileged information was designed to boost the performance
82
- of a Student model by learning from both the training data and a Teacher model
83
- with privileged and additional information. In generalized knowledge distillation,
84
- one uses distillation to extract useful knowledge from the privileged information
85
- of the Teacher [9]. In our case, Teacher and Student have the same architec-
86
- ture (i.e. same number of parameters) but the Teacher can learn from multiple
87
- input modalities (additional information) whereas the Student from only one.
88
- The proposed framework is based on two encoder-decoder networks, which have
89
- demonstrated to work well in image segmentation [7], one for the Student and
90
- one for the Teacher. Importantly, the proposed framework is generic since it
91
- works for different architectures of the encoder-decoder networks. Each encoder
92
- summarizes its input space to a latent representation that captures important
93
- information for the segmentation. Since the Teacher and the Student process
94
- different inputs but aim at extracting the same information, we make the as-
- sumption that their first layers should be different, whereas the last layers and
96
- especially the latent representations (i.e. bottleneck) should be similar. By forc-
97
- ing the latent space of the Student to resemble the one of the Teacher, we make
98
- the hypothesis that the Student should learn from the additional information
99
- of the Teacher. To the best of our knowledge, this is the first time that the
100
- generalized knowledge distillation strategy is adapted to guide the learning of
101
- a mono-modal student network using a multi-modal teacher network. We show
102
- the effectiveness of the proposed method using the BraTS 2018 dataset [11] for
103
- brain tumor segmentation.
104
- The paper is organized as follows. First, we present the proposed framework,
105
- called KD-Net and illustrated in Figure 1, and how the Student learns from the
106
- Teacher and the reference segmentation. Then, we present the implementation
107
- details and the results on the BraTS 2018 dataset [11].
108
- [Fig. 1 is a diagram of the framework: a multi-modal Teacher (input 128×128×128×4) and a mono-modal
- Student (input 128×128×128×1), both built from Conv3d, InstanceNorm3d, LeakyReLU, MaxPool3d,
- trilinear interpolation and softmax blocks, and connected to the reference segmentation through the
- KD, GT and KL losses.]
- Fig. 1. Illustration of the proposed framework. Both Teacher and Student have the
- same architecture adapted from nnUNet [7]. First, the Teacher is trained using only
- the reference segmentation (GT loss). Then, the student network is trained using all
- proposed losses: KL loss, KD loss and GT loss.
119
- 2 KD-Net
120
- The goal of the proposed framework is to train a mono-modal segmentation
121
- network (Student) by leveraging the knowledge from a well-trained multi-modal
122
- segmentation network (Teacher). Except for the number of input channels, both
123
- networks have the same encoder-decoder architecture with skip connections. The
- multi-modal input x^i = {x^i_n, n = 1...N} is the concatenation of the N modalities
- for the i-th sample of the dataset. Let E_t and D_t (resp. E_s and D_s) denote the
- encoder and decoder parts of the Teacher (resp. Student). The Teacher network
- f_t(x^i) = D_t(E_t(x^i)) receives as input multiple modalities whereas the student
- network f_s(x^i_k) = D_s(E_s(x^i_k)) receives only one modality x^i_k, k being a fixed
- integer between 1 and N.
- We first train the Teacher, using only the reference segmentation as target.
- Then, we train the Student using three different losses: the knowledge distillation
- term, the dissimilarity between the latent spaces, and the reference segmentation
- loss. Note that the weights of the Teacher are frozen during the training of the
- Student and the error of the Student is not back-propagated to the Teacher.
- The first two terms allow the Student to learn from the Teacher by using the
- soft prediction of the latter as target and by forcing the encoded information
- (i.e. bottleneck) of the Student to be similar to the one of the Teacher. The last
- term makes the predicted segmentation of the Student similar to the reference
- segmentation.
144
- 2.1 Generalized knowledge distillation
145
- Following the strategy of generalized knowledge distillation [9], we transfer useful
- knowledge from the additional information of the Teacher to the Student using
- the soft label targets of the Teacher. These are computed as follows:
- s^i = \sigma(f_t(x^i) / T)    (1)
- where \sigma is the softmax function and T, the temperature parameter, is a strictly
- positive value. The parameter T controls the softness of the target, and the higher
- it is, the softer the target. The idea of using soft targets is to uncover relations
- between classes that would be harder to detect with hard labels. The effectiveness
- of using a temperature parameter to soften the labels was demonstrated in [5].
- The knowledge distillation loss is defined as:
- L_KD = \sum_i [ (1 - Dice(s^i, \sigma(f_s(x^i_k)))) + BCE(\bar{s}^i, \sigma(f_s(x^i_k))) ]    (2)
- where Dice is the Dice score, BCE the binary cross-entropy measure and \bar{s}^i
- the binary prediction of the teacher. We need to binarize s^i since the soft labels
- cannot be used in the binary cross-entropy. The Dice score measures the
- similarity of the shape of two ensembles. Hence, it globally measures how close
- the Teacher and Student's segmentation maps are to each other. By contrast,
- the binary cross-entropy (BCE) is computed for each pixel independently and
- therefore it is a local measure. We use the combination of these two terms to
- globally and locally measure the distance between the Student prediction and
- the Teacher soft labels.
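As an illustration, the soft targets and KD loss of Eq. (1)-(2) could be written in PyTorch roughly as follows (a sketch under our own naming and tensor layout; the binarization by argmax and the reduction over the whole volume are assumptions, not details taken from the paper):

    import torch
    import torch.nn.functional as F

    def dice_loss(pred, target, eps=1e-6):
        # Soft Dice over the whole volume: 1 - 2|A.B| / (|A| + |B|).
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def kd_loss(teacher_logits, student_logits, temperature=5.0):
        # logits: (B, C, D, H, W); softmax over the class channel, as in Eq. (1).
        s_i = F.softmax(teacher_logits / temperature, dim=1)
        s_bar = F.one_hot(s_i.argmax(dim=1), s_i.shape[1]).movedim(-1, 1).float()  # binarized teacher prediction
        student_prob = F.softmax(student_logits, dim=1)
        return dice_loss(student_prob, s_i) + F.binary_cross_entropy(student_prob, s_bar)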
172
- 2.2 Latent space
173
- We speculate that Teacher and Student, having different inputs, should also
- encode the information differently in the first layers, the ones related to low-
- level image properties, such as color, texture and edges. By contrast, the deepest
- layers closer to the bottleneck, and related to higher level properties, should be
- more similar. Furthermore, we make the assumption that an encoder-decoder
- network encodes the information to correctly segment the input images in its
- latent space. Based on that, we propose to force the Student to learn from the
- additional information of the Teacher encoded in its bottleneck (and partially in
- the deepest layers) by making their latent representations as close as possible.
- To this end, we apply the Kullback-Leibler (KL) divergence as a loss function
- between the teacher and student's bottlenecks:
- L_KL(p, q) = \sum_i \sum_j q^i(j) \log( q^i(j) / p^i(j) )    (3)
- where p^i (resp. q^i) is the distribution derived from the Student's bottleneck E_s(x^i_k)
- (resp. the Teacher's bottleneck E_t(x^i)). Note that this function is not symmetric and we put
- the vectors in that order because we want the distribution of the Student's bottleneck
- to be similar to the one of the Teacher.
194
- 2.3 Objective function
195
- We add a third term to the objective function to make the predicted segmen-
- tation as close as possible to the reference segmentation. It is the sum of the
- Dice loss and the binary cross-entropy (BCE), for the same reasons as in
- Section 2.1. We call it L_GT:
- L_GT = \sum_i [ (1 - Dice(y^i, \sigma(f_s(x^i_k)))) + BCE(y^i, \sigma(f_s(x^i_k))) ]    (4)
- where y^i denotes the reference segmentation of the i-th sample in the dataset.
- The complete objective function is then:
- L = \lambda L_KD + (1 - \lambda) L_GT + \gamma L_KL    (5)
- with \lambda in [0, 1] and \gamma in R+. The imitation parameter \lambda balances the influence
- of the reference segmentation with the one of the Teacher's soft labels. The
- greater \lambda is, the greater the influence of the Teacher's soft labels. The \gamma
- parameter is instead needed to balance the magnitude of the KL loss with
- respect to the other two losses.
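Putting the terms of Eq. (3)-(5) together might look as follows (again a sketch reusing the dice_loss and kd_loss helpers sketched earlier; treating the flattened bottleneck features as distributions via a softmax, and a one-hot float reference map, are our assumptions):

    import torch.nn.functional as F

    def kl_bottleneck_loss(student_bottleneck, teacher_bottleneck):
        # Eq. (3): KL(q || p) with q from the Teacher and p from the Student.
        p = F.log_softmax(student_bottleneck.flatten(1), dim=1)
        q = F.softmax(teacher_bottleneck.flatten(1), dim=1)
        return F.kl_div(p, q, reduction="batchmean")

    def total_loss(student_logits, teacher_logits, student_bottleneck, teacher_bottleneck,
                   reference, lam=0.75, gamma=10.0, temperature=5.0):
        # Eq. (5): L = lambda * L_KD + (1 - lambda) * L_GT + gamma * L_KL.
        student_prob = F.softmax(student_logits, dim=1)
        l_gt = dice_loss(student_prob, reference) + F.binary_cross_entropy(student_prob, reference)
        l_kd = kd_loss(teacher_logits, student_logits, temperature)
        l_kl = kl_bottleneck_loss(student_bottleneck, teacher_bottleneck)
        return lam * l_kd + (1.0 - lam) * l_gt + gamma * l_kl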
213
- 3 Results and Discussion
214
- 3.1 Dataset
215
- We evaluate the performance of the proposed framework on a publicly avail-
216
- able dataset from the BraTS 2018 Challenge [11]. It contains MR scans from
217
- 285 patients with four modalities: T1, T2, T1 contrasted-enhanced (T1ce) and
218
- Flair. The goal of the challenge is to segment three sub-regions of brain tumors:
219
- whole tumor (WT), tumor core (TC) and enhancing tumor (ET). We apply a
220
- central crop of size 128 x 128 x 128 and a random flip for data
- augmentation. For each modality, only non-zero voxels have been normalized by
- subtracting the mean and dividing by the standard deviation. Due to memory and
- time constraints, we subsample the images to the size 64 x 64 x 64.
224
- 3.2 Implementation details
225
- We adopt the encoder-decoder architecture described in Figure 1. Empirically,
226
- we found that the best parameters for the objective function are \lambda = 0.75, T = 5
- and \gamma = 10. We used the Adam optimizer for 500 epochs with a learning rate equal
- to 0.0001 that is multiplied by 0.2 when the validation loss has not decreased
229
- for 50 epochs. We run a three fold cross validation on the 285 training cases
230
- of BraTS 2018. The training of the baseline, the Teacher or the Student takes
231
- approximately 12 hours on a NVIDIA P100 GPU.
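The optimizer and learning-rate schedule described above map naturally onto standard PyTorch components (sketch; the factor 0.2 and patience of 50 epochs come from the text, while `student`, `train_one_epoch` and `validate` are hypothetical helpers):

    import torch

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.2, patience=50)  # x0.2 if the validation loss stalls for 50 epochs

    for epoch in range(500):
        train_one_epoch(student, teacher, optimizer)  # hypothetical training step over the BraTS cases
        val_loss = validate(student)
        scheduler.step(val_loss)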
232
- 3.3 Results
233
- In our experiments, the Teacher uses all four modalities (T1, T2, T1ce and
234
- Flair concatenated) and the Student uses only T1ce. We choose T1ce for the
235
- Student since this is the standard modality used in pre-operative neurosurgery
236
- or radiotherapy.
237
- Model comparison: To demonstrate the effectiveness of the proposed frame-
- work, we first compare it to a baseline model. Its architecture is the same as the
239
- encoder-decoder network in Figure 1 and it is trained using only the T1ce modal-
240
- ity as input. We also compare it to two other models, U-HVED and HeMIS, using
241
- only T1ce as input. Results were directly taken from [3]. The results are visible
242
- in Table 1. Our method outperforms U-HVED and HeMIS in the segmentation
243
- of all three tumor components. KD-Net also seems to obtain better results than
244
- the method proposed in [2] (again when using only T1ce as input). The authors
245
- show results on the BraTS 2015 dataset and therefore they are not directly
246
- comparable to KD-Net. Furthermore, we could not find their code online. Nev-
247
- ertheless, the results of HeMIS [4] on BraTS 2015 (in [2]) and on BraTS 2018
248
- (in [3]) suggest that the observations of BraTS 2018 seem to be more difficult
- to segment. Since the method proposed in [2] has worse results than ours on a
250
- dataset that seems easier to segment, this should also be the case for the BraTS
251
- 2018 dataset. However, this should be confirmed.
252
- Table 1. Comparison of 3 models using the Dice score on the tumor regions. Results
- of U-HVED and HeMIS are taken from the article [3], where the standard deviations
- were not provided.
- Model                  | ET           | TC           | WT
- Baseline (nnUNet [7])  | 68.1 ± 1.27  | 80.28 ± 2.44 | 77.06 ± 1.47
- Teacher (4 modalities) | 69.47 ± 1.86 | 80.77 ± 1.18 | 88.48 ± 0.79
- U-HVED                 | 65.5         | 66.7         | 62.4
- HeMIS                  | 60.8         | 58.5         | 58.5
- Ours                   | 71.67 ± 1.22 | 81.45 ± 1.25 | 76.98 ± 1.54
261
- Ablation study: To evaluate the contribution of each loss term, we did an
- ablation study by removing each term from the objective function defined in
- Eq. 5. Table 2 shows the results using either 0 or 4 skip-connections both in the
- Student and Teacher networks. We observe that both the KL and KD losses im-
- prove the results with respect to the baseline model, especially for the enhanced
- tumor and tumor core. This also demonstrates that the proposed framework is
- generic and it works with different encoder-decoder architectures. More results
- can be found in the supplementary material.
269
- Table 2. Ablation study of the loss terms. We compare the results of the model
- trained with 3 different objective functions: the baseline using only the GT loss, KD-
- Net trained with only the KL term and KD-Net with the complete objective function.
- We also tested it with 0 or 4 skip-connections for both the Student and the Teacher.
- Skip connections | Model    | Loss     | ET           | TC           | WT
- 4                | Baseline | GT       | 68.1 ± 1.27  | 80.28 ± 2.44 | 77.06 ± 1.47
- 4                | Teacher  | GT       | 69.47 ± 1.86 | 80.77 ± 1.18 | 88.48 ± 0.79
- 4                | KD-Net   | GT+KL    | 70.00 ± 1.51 | 80.85 ± 1.82 | 77.08 ± 1.29
- 4                | KD-Net   | GT+KD    | 69.22 ± 1.19 | 80.54 ± 1.66 | 76.83 ± 1.36
- 4                | KD-Net   | GT+KL+KD | 71.67 ± 1.22 | 81.45 ± 1.25 | 76.98 ± 1.54
- 0                | Baseline | GT       | 42.95 ± 3.42 | 69.44 ± 1.37 | 69.41 ± 1.52
- 0                | Teacher  | GT       | 42.59 ± 2.54 | 69.79 ± 1.63 | 75.93 ± 0.33
- 0                | KD-Net   | GT+KL    | 47.59 ± 0.98 | 70.96 ± 1.73 | 71.41 ± 1.2
- 0                | KD-Net   | GT+KD    | 44.8 ± 1.1   | 70.12 ± 2.42 | 70.19 ± 1.4
- 0                | KD-Net   | GT+KL+KD | 46.23 ± 2.91 | 70.73 ± 2.47 | 71.93 ± 1.26
285
- Qualitative results: In Figure 2, we show some qualitative results of the
286
- proposed framework and compare them with the ones obtained using the base-
287
- line method. We can see that the proposed framework allows the Student to
288
- discard some outliers and predict segmentation labels of higher quality. In the
289
- experiments, the student uses as input only T1ce, which clearly highlights the
290
- enhancing tumor. Remarkably, it seems that the Student learns more in this
291
- region (see Figure 2 and Table 1). The knowledge distilled from the Teacher
292
- seems to help the Student learn more where it is supposed to be "stronger".
293
- More qualitative results can be found in the supplementary material.
294
- Fig. 2. Qualitative results obtained using the baseline and the proposed framework
295
- (Student). We show the slice of a subject with the corresponding 3 segmentation labels.
296
- Observations: It is important to remark that we also tried to expand the
297
- Student network by first synthesizing another modality, such as the Flair, from
298
- the T1ce and then using it, together with the T1ce, for segmenting the tumor
299
- labels. Results were actually worse than the baseline and the computational
300
- time quite prohibitive. We also tried sharing the weights between the Teacher
301
- and the Student in the deepest layers of the networks to help transferring the
302
- knowledge. The intuition behind it was that since the bottlenecks should be the
303
- same, the information in the deepest layers should be handled identically. The
304
- results were almost identical to, but slightly worse than, the ones obtained with the
305
- proposed framework presented in Figure 1. In this paper, we used the nnUNet [7]
306
- as network for the Student and Teacher, but theoretically any other encoder-
307
- decoder architecture, such as the one in [6], could be used.
308
- 4 Conclusions
309
- We present a novel framework to transfer knowledge from a multi-modal segmen-
310
- tation network to a mono-modal one. To this end, we propose to use a twofold
311
- approach. We employ the strategy of generalized knowledge distillation and, in
312
- addition, we also constrain the latent representation of the Student to be similar
313
- to the one of the Teacher. We validate our method in brain tumor segmen-
314
- tation, achieving better results than state-of-the-art methods when using only
315
- T1ce on BraTS 2018. The proposed framework is generic and can be applied to
316
- any encoder-decoder segmentation network. The gain in segmentation accuracy
317
- and robustness to errors produced by the proposed framework makes it highly
318
- valuable for real-world clinical scenarios where only one modality is available at
319
- test time.
320
- 5 Acknowledgment
321
- M. Hu is grateful for financial support from the China Scholarship Council. This work
322
- is supported by SHEITC (No. 2018-RGZN-02046), 111 plan (No. BP0719010),
323
- and STCSM (No. 18DZ2270700). M. Maillard was supported by a grant of IMT,
324
- Fondation Mines-Télécom and Institut Carnot TSN, through the "Futur & Rup-
325
- tures" program.
326
- References
327
- 1. Ben-Cohen, A., Klang, E., Raskin, S., Soffer, S., Ben-Haim, S., Konen, E., Amitai,
328
- M., Greenspan, H.: Cross-modality synthesis from CT to PET using FCN and
329
- GAN networks for improved automated lesion detection. Engineering Applications
330
- of Artificial Intelligence 78, 186–194 (2018)
331
- 2. Chen, C., Dou, Q., Jin, Y., Chen, H., Qin, J., Heng, P.A.: Robust Multimodal
332
- Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion. In:
333
- MICCAI. vol. LNCS 11766, pp. 447–456. Springer, Cham (2019)
334
- 3. Dorent, R., Joutard, S., Modat, M., Ourselin, S., Vercauteren, T.: Hetero-Modal
335
- Variational Encoder-Decoder for Joint Modality Completion and Segmentation.
336
- In: MICCAI. vol. LNCS 11765, pp. 74–82. Springer (2019)
337
- 4. Havaei, M., Guizard, N., Chapados, N., Bengio, Y.: HeMIS: Hetero-Modal Image
338
- Segmentation. In: MICCAI. vol. LNCS 9901, pp. 469–477. Springer (2016)
339
- 5. Hinton, G., Vinyals, O., Dean, J.: Distilling the Knowledge in a Neural Network.
340
- Deep Learning and Representation Learning Workshop: NIPS 2015 (2015)
341
- 6. Ibtehaz, N., Rahman, M.S.: MultiResUNet : Rethinking the U-Net Architecture for
342
- Multimodal Biomedical Image Segmentation. Neural Networks 121, 74{87 (2020)
343
- 7. Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., Maier-Hein, K.H.: No New-
344
- Net. In: BrainLes - MICCAI Workshop. vol. LNCS 11384, pp. 234{244. Springer
345
- (2019)
346
- 8. Liu, Y., Chen, K., Liu, C., Qin, Z., Luo, Z., Wang, J.: Structured Knowledge
347
- Distillation for Semantic Segmentation. In: CVPR. pp. 2604–2613 (2019)
348
- 9. Lopez-Paz, D., Bottou, L., Schölkopf, B., Vapnik, V.: Unifying distillation and
349
- privileged information. In: ICLR (2016)
350
- 10. Maier, O., et al.: ISLES 2015 - A public evaluation benchmark for ischemic stroke
351
- lesion segmentation from multispectral MRI. Medical Image Analysis 35, 250–269
352
- (2017)
353
- 11. Menze, B.H., et al.: The multimodal brain tumor image segmentation benchmark
354
- (BRATS). IEEE Transactions on Medical Imaging 34(10), 1993–2024 (2015)
355
- 12. Orbes-Arteaga, M., Cardoso, M.J., Sørensen, L., Modat, M., Ourselin, S., Nielsen,
356
- M., Pai, A.: Simultaneous synthesis of FLAIR and segmentation of white matter
357
- hypointensities from T1 MRIs. In: MIDL (2018)
358
- 13. van Tulder, G., de Bruijne, M.: Why Does Synthesized Data Improve Multi-
359
- sequence Classification? In: MICCAI. vol. LNCS 9349, pp. 531–538. Springer,
360
- Cham (2015)
361
- 14. Vapnik, V., Izmailov, R.: Learning using privileged information: Similarity control
362
- and knowledge transfer. Journal of Machine Learning Research 16(61), 2023–2049
363
- (2015)
364
- 15. Xie, J., Shuai, B., Hu, J.F., Lin, J., Zheng, W.S.: Improving Fast Segmentation
365
- With Teacher-student Learning. In: British Machine Vision Conference (BMVC)
366
- (2018)
txt/2106.12269.txt DELETED
@@ -1,827 +0,0 @@
1
- arXiv:2106.12269v1 [cs.AI] 23 Jun 2021 Improved Acyclicity Reasoning for Bayesian Network Structure Learning with
2
- Constraint Programming
3
- Fulya Trösser1∗, Simon de Givry1 and George Katsirelos2
4
- 1Université de Toulouse, INRAE, UR MIAT, F-31320, Castanet-Tolosan, France
5
- 2UMR MIA-Paris, INRAE, AgroParisTech, Univ. Paris-Saclay, 75005 Paris, France
6
- {fulya.ural, simon.de-givry}@inrae.fr, [email protected]
7
- Abstract
8
- Bayesian networks are probabilistic graphical mod-
9
- els with a wide range of application areas includ-
10
- ing gene regulatory networks inference, risk anal-
11
- ysis and image processing. Learning the structure
12
- of a Bayesian network (BNSL) from discrete data
13
- is known to be an NP-hard task with a superexpo-
14
- nential search space of directed acyclic graphs. In
15
- this work, we propose a new polynomial time algo-
16
- rithm for discovering a subset of all possible cluster
17
- cuts, a greedy algorithm for approximately solving
18
- the resulting linear program, and a generalised arc
19
- consistency algorithm for the acyclicity constraint.
20
- We embed these in the constraint programming-
21
- based branch-and-bound solver CPBayes and show
22
- that, despite being suboptimal, they improve per-
23
- formance by orders of magnitude. The resulting
24
- solver also compares favourably with GOBNILP, a
25
- state-of-the-art solver for the BNSL problem which
26
- solves an NP-hard problem to discover each cut and
27
- solves the linear program exactly.
28
- 1 Introduction
29
- Towards the goal of explainable AI, Bayesian networks offer
30
- a rich framework for probabilistic reasoning. Bayesian Net -
31
- work Structure Learning (BNSL) from discrete observations
32
- corresponds to finding a compact model which best explains
33
- the data. It defines an NP-hard problem with a superexponen-
34
- tial search space of Directed Acyclic Graphs (DAG). Several
35
- constraint-based (exploiting local conditional independ ence
36
- tests) and score-based (exploiting a global objective form ula-
37
- tion) BNSL methods have been developed in the past.
38
- Complete methods for score-based BNSL include dynamic
39
- programming [Silander and Myllym¨ aki, 2006 ], heuristic
40
- search [Yuan and Malone, 2013; Fan and Yuan, 2015 ],
41
- maximum satisfiability [Berg et al. , 2014 ], branch-and-
42
- cut [Bartlett and Cussens, 2017 ]and constraint program-
43
- ming [van Beek and Hoffmann, 2015 ]. Here, we focus on
44
- the latter two.
45
- GOBNILP [Bartlett and Cussens, 2017 ]is a state-of-the-
46
- art solver for BNSL. It implements branch-and-cut in an in-
47
- ∗Contact Authorteger linear programming (ILP) solver. At each node of the
48
- branch-and-bound tree, it generates cuts that improve the l in-
49
- ear relaxation. A major class of cuts generated by GOBNILP
50
- arecluster cuts , which identify sets of parent sets that cannot
51
- be used together in an acyclic graph. In order to find cluster
52
- cuts, GOBNILP solves an NP-hard subproblem created from
53
- the current optimal solution of the linear relaxation.
54
- CPBayes [van Beek and Hoffmann, 2015 ]is a constraint
55
- programming-based (CP) method for BNSL. It uses a CP
56
- model that exploits symmetry and dominance relations
57
- present in the problem, subproblem caching, and a pattern
58
- database to compute lower bounds, adapted from heuris-
59
- tic search [Fan and Yuan, 2015 ]. van Beek and Hoffmann
60
- showed that CPBayes is competitive with GOBNILP in many
61
- instances. In contrast to GOBNILP, the inference mech-
62
- anisms of CPBayes are very lightweight, which allows it
63
- to explore many orders of magnitude more nodes per time
64
- unit, even accounting for the fact that computing the patter n
65
- databases before search can sometimes consume considerabl e
66
- time. On the other hand, the lightweight pattern-based boun d-
67
- ing mechanism can take into consideration only limited in-
68
- formation about the current state of the search. Specificall y,
69
- it can take into account the current total ordering implied b y
70
- the DAG under construction, but no information that has been
71
- derived about the potential parent sets of each vertex, i.e. , the
72
- current domains of parent set variables.
73
- In this work, we derive a lower bound that is computation-
74
- ally cheaper than that computed by GOBNILP. We give in
75
- Section 3 a polynomial-time algorithm that discovers a clas s
76
- of cluster cuts that provably improve the linear relaxation . In
77
- Section 4, we give a greedy algorithm for solving the linear
78
- relaxation, inspired by similar algorithms for MaxSAT and
79
- Weighted Constraint Satisfaction Problems (WCSP). Finall y,
80
- in Section 5 we give an algorithm that enforces generalised
81
- arc consistency on the acyclicity constraint, based on pre-
82
- vious work by van Beek and Hoffmann, but with improved
83
- complexity and practical performance. In Section 6, we show
84
- that our implementation of these techniques in CPBayes lead s
85
- to significantly improved performance, both in the size of th e
86
- search tree explored and in runtime.
87
- 2 Preliminaries
88
- We give here only minimal background on (inte-
89
- ger) linear programming and constraint program-ming, and refer the reader to existing literature
90
- [Papadimitriou and Steiglitz, 1998; Rossi et al. , 2006 ]
91
- for more.
92
- Constraint Programming
93
- A constraint satisfaction problem (CSP) is a tuple ⟨V, D, C⟩,
94
- where V is a set of variables, D is a function mapping vari-
95
- ables to domains and C is a set of constraints. An assignment
96
- A to V′ ⊆ V is a mapping from each v ∈ V′ to D(v). A
97
- complete assignment is an assignment to V. If an assignment
98
- maps v to a, we say it assigns v = a. A constraint is a pair
99
- ⟨S, P⟩, where S ⊆ V is the scope of the constraint and P is
100
- a predicate over ∏_{V∈S} D(V)
101
- which accepts assignments to
102
- S that satisfy the constraint. For an assignment A to S′ ⊇ S,
103
- let A|_S be the restriction of A to S. We say that A satisfies
104
- c = ⟨S, P⟩ if A|_S satisfies c. A problem is satisfied by A if
105
- A satisfies all constraints.
106
- For a constraint c = ⟨S, P⟩ and for v ∈ S, a ∈ D(v),
107
- v = a is generalized arc consistent (GAC) for c if there exists
108
- an assignment A that assigns v = a and satisfies c. If for all
109
- v ∈ S, a ∈ D(v), v = a is GAC for c, then c is GAC. If
110
- all constraints are GAC, the problem is GAC. A constraint is
111
- associated with an algorithm f_c, called the propagator for c,
112
- that removes (or prunes) values from the domains of variables
113
- in S that are not GAC.
114
- CSPs are typically solved by backtracking search, using
115
- propagators to reduce domains at each node and avoid parts
116
- of the search tree that are proved to not contain any solution s.
117
- Although CSPs are decision problems, the technology can be
118
- used to solve optimization problems like BNSL by, for ex-
119
- ample, using branch-and-bound and embedding the bounding
120
- part in a propagator. This is the approach used by CPBayes.
121
- Integer Linear Programming
122
- A linear program (LP) is the problem of finding
123
- min { c^T x | x ∈ R^n ∧ Ax ≥ b ∧ x ≥ 0 }
124
- where c and b are vectors, A is a matrix, and x is a vector
125
- of variables. A feasible solution of this problem is one that
126
- satisfies x ∈ R^n ∧ Ax ≥ b ∧ x ≥ 0 and an optimal solution is a
127
- feasible one that minimizes the objective function c^T x. This
128
- can be found in polynomial time. A row A_i corresponds to
129
- an individual linear constraint and a column A^T_j
130
- to a variable.
131
- The dual of a linear program P in the above form is another
132
- linear program D:
133
- max { b^T y | y ∈ R^m ∧ A^T y ≤ c ∧ y ≥ 0 }
134
- where A, b, c are as before and y is the vector of dual vari-
135
- ables. Rows of the dual correspond to variables of the primal
136
- and vice versa. The objective value of any dual feasible so-
137
- lution is a lower bound on the optimum of P. When P is
138
- satisfiable, its dual is also satisfiable and the values of their
139
- optima meet. For a given feasible solution x̂ of P, the slack
140
- of constraint i is slack_x̂(i) = A^T_i x̂ − b_i. Given a dual feasible
141
- solution ŷ, slack^D_ŷ(i)
142
- is the reduced cost of primal variable
143
- i, rc_ŷ(i). The reduced cost rc_ŷ(i) is interpreted as a lower
144
- bound on the amount that the dual objective would increase
145
- over b^T ŷ if x_i is forced to be non-zero in the primal.
146
- An integer linear program (ILP) is a linear program in
147
- which we replace the constraint x ∈ R^n by x ∈ Z^n and it
148
- is an NP-hard optimization problem.
149
- Bayesian Networks
150
- A Bayesian network is a directed graphical model B=
151
- /a\}bracketle{tG,P/a\}bracketri}htwhereG=/a\}bracketle{tV,E/a\}bracketri}htis a directed acyclic graph (DAG)
152
- called the structure of BandPare its parameters. A BN
153
- describes a normalised joint probability distribution. Ea ch
154
- vertex of the graph corresponds to a random variable and
155
- presence of an edge between two vertices denotes direct con-
156
- ditional dependence. Each vertex viis also associated with
157
- a Conditional Probability Distribution P(vi|parents(vi)).
158
- The CPDs are the parameters of B.
159
- The approach which we use here for learning a BN from
160
- data is the score-and-search method. Given a set of mul-
161
- tivariate discrete data I={I1,...,I N}, a scoring func-
162
- tionσ(G|I)measures the quality of the BN with un-
163
- derlying structure G. The BNSL problem asks to find a
164
- structure Gthat minimises σ(G|I)for some scoring
165
- function σand it is NP-hard [Chickering, 1995 ]. Several
166
- scoring functions have been proposed for this purpose, in-
167
- cluding BDeu [Buntine, 1991; Heckerman et al. , 1995 ]and
168
- BIC [Schwarz, 1978; Lam and Bacchus, 1994 ]. These func-
169
- tions are decomposable and can be expressed as the sum
170
- of local scores which only depend on the set of parents
171
- (from now on, parent set ) of each vertex: σF(G|I) =/summationtext
172
- v∈Vσv
173
- F(parents(v)|I)forF∈ {BDeu,BIC}. In
174
- this setting, we first compute local scores and then com-
175
- pute the structure of minimal score. Although there are
176
- potentially an exponential number of local scores that have
177
- to be computed, the number of parent sets actually con-
178
- sidered is often much smaller, for example because we re-
179
- strict the maximum cardinality of parent sets considered or
180
- we exploit dedicated pruning rules [de Campos and Ji, 2010;
181
- de Campos et al. , 2018 ]. We denote PS(v)the set of candi-
182
- date parent sets of vandPS−C(v)those parent sets that do
183
- not intersect C. In the following, we assume that local scores
184
- are precomputed and given as input, as is common in similar
185
- works. We also omit explicitly mentioning IorF, as they are
186
- constant for solving any given instance.
187
- LetCbe a set of vertices of a graph G.Cis a violated
188
- cluster if the parent set of each vertex v∈Cintersects C.
189
- Then, we can prove the following property:
190
- Property 1. A directed graph G=/a\}bracketle{tV,E/a\}bracketri}htis acyclic if and
191
- only if it contains no violated clusters, i.e., for all C⊆V,
192
- there exists v∈C, such that parents(v)∩C=∅.
193
- The GOBNILP solver [Bartlett and Cussens, 2017 ]formu-
194
- lates the problem as the following 0/1 ILP:
195
- min   Σ_{v∈V, S⊆V\{v}}  σ_v(S) x_{v,S}                              (1)
196
- s.t.  Σ_{S∈PS(v)}  x_{v,S} = 1              ∀ v ∈ V                 (2)
197
-       Σ_{v∈C, S∈PS−C(v)}  x_{v,S} ≥ 1       ∀ C ⊆ V                 (3)
198
-       x_{v,S} ∈ {0,1}                       ∀ v ∈ V, S ∈ PS(v)      (4)
199
200
201
- Algorithm 1: Acyclicity Checker
202
- acycChecker(V, D)
203
-     order ← {}
204
-     changes ← true
205
-     while changes do
206
-         changes ← false
207
-         foreach v ∈ V \ order do
208
-             if ∃ S ∈ D(v) s.t. (S ∩ V) ⊆ order then
209
- 1               order ← order + v
210
-                 changes ← true
211
-     return order
212
- This ILP has a 0/1 variable xv,Sfor each candidate parent
213
- setSof each vertex vwherexv,S= 1means that Sis the par-
214
- ent set of v. The objective (1) directly encodes the decompo-
215
- sition of the scoring function. The constraint (2) asserts t hat
216
- exactly one parent set is selected for each random variable.
217
- Finally, the cluster inequalities (3) are violated when Cis a
218
- violated cluster. We denote the cluster inequality for clus ter
219
- Cascons(C)and the 0/1 variables involved as varsof(C).
220
- As there is an exponential number of these, GOBNILP gen-
221
- erates only those that improve the current linear relaxatio n
222
- and they are referred to as cluster cuts . This itself is an NP-
223
- hard problem [Cussens et al. , 2017 ], which GOBNILP also
224
- encodes and solves as an ILP. Interestingly, these inequali -
225
- ties are facets of the BNSL polytope [Cussens et al. , 2017 ],
226
- so stand to improve the relaxation significantly.
227
- The CPBayes solver [van Beek and Hoffmann, 2015 ]
228
- models BNSL as a constraint program. The CP model has a
229
- parent set variable for each random variable, whose domain
230
- is the set of possible parent sets, as well as order variables ,
231
- which give a total order of the variables that agrees with
232
- the partial order implied by the DAG. The objective is the
233
- same as (1). It includes channelling constraints between
234
- the set of variables and various symmetry breaking and
235
- dominance constraints. It computes a lower bound using
236
- two separate mechanisms: a component caching scheme
237
- and a pattern database that is computed before search and
238
- holds the optimal graphs for all orderings of partitions
239
- of the variables. Acyclicity is enforced using a global
240
- constraint with a bespoke propagator. The main routine
241
- of the propagator is acycChecker (Algorithm 1), which
242
- returns an order of all variables if the current set of domain s
243
- of the parent set variables may produce an acyclic graph, or
244
- a partially completed order if the constraint is unsatisfiab le.
245
- This algorithm is based on Property 1.
246
- Briefly, the algorithm takes the domains of the parent set
247
- variables as input and greedily constructs an ordering of th e
248
- variables, such that if variable vis later in the order than v′,
249
- thenv /∈parents(v′)1. It does so by trying to pick a parent
250
- setSfor an as yet unordered vertex such that Sis entirely
251
- contained in the set of previously ordered vertices2. If all
252
- assignments yield cyclic graphs, it will reach a point where
253
- 1We treatorder as both a sequence and a set, as appropriate.
254
- 2When propagating the acyclicity constraint it always holds that
255
- a∩V=a, so this statement is true. In section 3.1, we use the
256
- algorithm in a setting where this is not always the case.all remaining vertices are in a violated cluster in all possi ble
257
- graphs, and it will return a partially constructed order. If there
258
- exists an assignment that gives an acyclic graph, it will be
259
- possible by property 1 to select from a variable in V\order
260
- a parent set which does not intersect V\order , hence is a
261
- subset of order . The value Schosen for each variable in
262
- line 1 also gives a witness of such an acyclic graph.
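As a concrete rendering of Algorithm 1, here is a small Python sketch of the acyclicity check; the function name and the representation of domains (a dict mapping each vertex to a collection of candidate parent sets, taken to be frozensets) are our own choices, not the CPBayes implementation.

def acyc_checker(vertices, domains):
    # domains[v]: iterable of candidate parent sets (frozensets of vertices).
    # Greedily build an order; a vertex can be appended once one of its
    # allowed parent sets lies entirely inside the already-ordered prefix.
    order, ordered = [], set()
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in ordered:
                continue
            if any((set(s) & set(vertices)) <= ordered for s in domains[v]):
                order.append(v)
                ordered.add(v)
                changed = True
    return order  # covers all vertices iff an acyclic assignment exists

If the returned order misses some vertices, every leftover vertex has all of its allowed parent sets intersecting the leftover set, i.e. the leftovers form a violated cluster in the sense of Property 1; this is exactly the behaviour exploited in Section 3.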
263
- An immediate connection between the GOBNILP and CP-
264
- Bayes models is that the ILP variables xv,S,∀S∈PS(v)are
265
- the direct encoding [Walsh, 2000 ]of the parent set variables
266
- of the CP model. Therefore, we use them interchangeably,
267
- i.e., we can refer to the value SinD(v)asxv,S.
268
- 3 Restricted Cluster Detection
269
- One of the issues hampering the performance of CPBayes is
270
- that it computes relatively poor lower bounds at deeper leve ls
271
- of the search tree. Intuitively, as the parent set variable d o-
272
- mains get reduced by removing values that are inconsistent
273
- with the current ordering, the lower bound computation dis-
274
- cards more information about the current state of the proble m.
275
- We address this by adapting the branch-and-cut approach of
276
- GOBNILP. However, instead of finding all violated cluster
277
- inequalities that may improve the LP lower bound, we only
278
- identify a subset of them.
279
- Consider the linear relaxation of the ILP (1)– (4), restrict ed
280
- to a subsetCof all valid cluster inequalities, i.e., with equa-
281
- tion (4) replaced by 0≤xv,S≤1∀v∈V,S∈PS(v)and
282
- with equation (3) restricted only to clusters in C. We denote
283
- thisLPC. We exploit the following property of this LP.
284
- Theorem 1. Letˆybe a dual feasible solution of LPCwith
285
- dual objective o. Then, if Cis a cluster such that C /∈C
286
- and the reduced cost rcof all variables varsof(C)is greater
287
- than 0, there exists a dual feasible solution ˆyofLPC∪Cwith
288
- dual objective o′≥o+minrc(C)whereminrc(C) =
289
- minx∈varsof(C)rcˆy(x).
290
- Proof. The only difference from LPCtoLPC∪Cis the ex-
291
- tra constraint cons(C)in the primal and corresponding dual
292
- variableyC. In the dual, yConly appears in the dual con-
293
- straints of the variables varsof(C)and in the objective, al-
294
- ways with coefficient 1. Under the feasible dual solution
295
- ˆy∪{yC= 0}, these constraints have slack at least minrc(C),
296
- by the definition of reduced cost. Therefore, we can set
297
- ˆy= ˆy∪{yC=minrc(C)}, which remains feasible and
298
- has objective o′=o+minrc(C), as required.
299
- Theorem 1 gives a class of cluster cuts, which we call RC-
300
- clusters, for reduced-cost clusters, guaranteed to improv e the
301
- lower bound. Importantly, this requires only a feasible, pe r-
302
- haps sub-optimal, solution.
303
- Example 1 (Running example) .Consider a BNSL instance
304
- with domains as shown in Table 1 and let C=∅. Then,ˆy= 0
305
- leaves the reduced cost of every variable to exactly its prim al
306
- objective coefficient. The corresponding ˆxassigns 1 to vari-
307
- ables with reduced cost 0 and 0 to everything else. These are
308
- both optimal solutions, with cost 0 and ˆxis integral, so it is
309
- also a solution of the corresponding ILP . However, it is not a
310
- solution of the BNSL, as it contains several cycles, including
311
- Variable   Domain value   Cost
312
- 0          {2}            0
313
- 1          {2,4}          0
314
-            {}             6
315
- 2          {1,3}          0
316
-            {}             10
317
- 3          {0}            0
318
-            {}             5
319
- 4          {2,3}          0
320
-            {3}            1
321
-            {2}            2
322
-            {}             3
- Table 1: BNSL instance used as running example.
323
- Algorithm 2: Lower bound computation with RC-
324
- clusters
325
- lowerBoundRC (V, D,C)
326
- ˆy←DualSolve (LPC(D))
327
- while True do
328
- 2C←V\acycChecker (V,Drc
329
- C,ˆy)
330
- 3 ifC=∅then
331
- return/a\}bracketle{tcost(ˆy),C/a\}bracketri}ht
332
- C←minimise (C)
333
- C←C∪{ C}
334
- ˆy←DualImprove (ˆy,LPC(D),C)
335
- C={0,2,3}. The cluster inequality cons(C)is violated in
336
- the primal and allows the dual bound to be increased.
337
- We consider the problem of discovering RC-clusters within
338
- the CP model of CPBayes. First, we introduce the nota-
339
- tionLPC(D)which is LPCwith the additional constraint
340
- xv,S= 0 for eachS /∈D(v). Conversely, Drc
341
- C,ˆyis the set
342
- of domains minus values whose corresponding variable in
343
- LPC(D)has non-zero reduced cost under ˆy, i.e.,Drc
344
- C,ˆy=D′
345
- whereD′(v) ={S|S∈D(v)∧rcˆy(xv,S) = 0}. With
346
- this notation, for values S /∈D(v),xv,S= 1 is infeasible in
347
- LPC(D), hence effectively rcˆy(xv,S) =∞.
348
- Theorem 2. Given a collection of clusters C, a set of domains
349
- Dandˆy, a feasible dual solution of LPC(D), there exists
350
- an RC-cluster C /∈C if and only if Drc
351
- C,ˆydoes not admit an
352
- acyclic assignment.
353
- Proof.(⇒)LetCbe such a cluster. Since for all xv,S∈
354
- varsof(C), none of these are in Drc
355
- C,ˆy, socons(C)is violated
356
- and hence there is no acyclic assignment.
357
- (⇐)Consider once again acycChecker , in Algorithm 1.
358
- When it fails to find a witness of acyclicity, it has reached
359
- a point where order/subsetnoteqlVand for the remaining variables
360
- C=V\order , all allowed parent sets intersect C. So if
361
- acycChecker is called with Drc
362
- C,ˆy, all values in varsof(C)
363
- have reduced cost greater than 0, so Cis an RC-cluster.
364
- Theorem 2 shows that detecting unsatisfiability of Drc
365
- C,ˆyis
366
- enough to find an RC-cluster. Its proof also gives a way to
367
- extract such a cluster from acycChecker .
368
- Algorithm 2 shows how theorems 1 and 2 can be used
369
- to compute a lower bound. It is given the current set ofdomains and a set of clusters as input. It first solves the
370
- dual ofLPC(D), potentially suboptimally. Then, it uses
371
- acycChecker iteratively to determine whether there exists
372
- an RC-cluster Cunder the current dual solution ˆy. If that
373
- cluster is empty, there are no more RC-clusters, and it termi -
374
- nates and returns a lower bound equal to the cost of ˆyunder
375
- LPC(D)and an updated pool of clusters. Otherwise, it min-
376
- imisesC(see section 3.1), adds it to the pool of clusters and
377
- solves the updated LP. It does this by calling DualImprove ,
378
- which solves LPC(D)exploiting the fact that only the cluster
379
- inequality cons(C)has been added.
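To make the interplay of Theorems 1 and 2 concrete, here is a hedged Python sketch of the bound computation in Algorithm 2, reusing acyc_checker from the sketch above. It keeps reduced costs explicitly instead of dual values, skips the cluster-minimisation step (Algorithm 3) and the cluster pool, so it is an illustration rather than a faithful reimplementation.

def lower_bound_rc(vertices, domains, scores):
    # scores[v][S] is the local score sigma_v(S); parent sets S are frozensets.
    # Trivial dual solution: shift each vertex's minimum score into the bound.
    bound = sum(min(scores[v].values()) for v in vertices)
    rc = {v: {S: scores[v][S] - min(scores[v].values()) for S in domains[v]}
          for v in vertices}
    while True:
        # Restrict each domain to its zero-reduced-cost parent sets.
        zero_dom = {v: [S for S in domains[v] if rc[v][S] == 0] for v in vertices}
        cluster = set(vertices) - set(acyc_checker(vertices, zero_dom))
        if not cluster:
            return bound  # no RC-cluster left (Theorem 2)
        # (Algorithm 3 would minimise `cluster` here; Example 3 shows why that matters.)
        # Theorem 1: every parent set escaping the cluster has rc > 0, so the
        # bound can be raised by the smallest such reduced cost.
        escaping = [(v, S) for v in cluster for S in domains[v]
                    if not (set(S) & cluster)]
        delta = min(rc[v][S] for v, S in escaping)  # assumes escaping values exist
        bound += delta
        for v, S in escaping:
            rc[v][S] -= delta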
380
- Example 2. Continuing our example, consider the behav-
381
- ior ofacycChecker with domains Drc
382
- ∅,ˆyafter the initial dual
383
- solutionˆy= 0. Since the empty set has non-zero reduced
384
- cost for all variables, acycChecker fails with order={},
385
- henceC=V. We postpone discussion of minimization for
386
- now, other than to observe that Ccan be minimized to C1=
387
- {1,2}. We add cons(C1)to the primal LP and set the dual
388
- variable of C1to 6 in the new dual solution ˆy1. The reduced
389
- costs ofx1,{}andx2,{}are decreased by 6 and, importantly,
390
- rcˆy1(x1,{}) = 0 . In the next iteration of lowerBoundRC ,
391
- acycChecker is invoked on Drc
392
- {C1},ˆy1and returns the clus-
393
- ter{0,2,3,4}. This is minimized to C2={0,2,3}. The
394
- parent sets in the domains of these variables that do not in-
395
- tersectC2arex2,{}andx3,{}, sominrc(C2) = 4 , so we add
396
- cons(C2)to the primal and we set the dual variable of C2
397
- to 4 inˆy2. This brings the dual objective to 10. The reduced
398
- cost ofx2,{}is 0, so in the next iteration acycChecker runs
399
- onDrc
400
- {C1,C2},ˆy2and succeeds with the order {2,0,3,4,1}, so
401
- the lower bound cannot be improved further. This also hap-
402
- pens to be the cost of the optimal structure.
403
- Theorem 3. Algorithm 2 terminates but is not confluent.
404
- Proof. It terminates because there is a finite number of cluster
405
- inequalities and each iteration generates one. In the extre me,
406
- all cluster inequalities are in Cand the test at line 3 succeeds,
407
- terminating the algorithm.
408
- To see that it is not confluent, consider an example with 3
409
- clustersC1={v1,v2},C2={v2,v3}andC3={v3,v4}
410
- and assume that the minimum reduced cost for each cluster is
411
- unit and comes from x2,{4}andx3,{1}, i.e., the former value
412
- has minimum reduced cost for C1andC2and the latter for C2
413
- andC3. Then, if minimisation generates first C1, the reduced
414
- cost ofx3,{1}is unaffected by DualImprove , so it can then
415
- discoverC3, to get a lower bound of 2. On the other hand,
416
- if minimisation generates first C2, the reduced costs of both
417
- x2,{4}andx3,{1}are decreased to 0 by DualImprove , so
418
- neitherC1norC3are RC-clusters under the new dual solution
419
- and the algorithm terminates with a lower bound of 1.
420
- Related Work. The idea of performing propagation on the
421
- subset of domains that have reduced cost 0 has been used
422
- in the V AC algorithm for WCSPs [Cooper et al. , 2010 ]. Our
423
- method is more light weight, as it only performs propagation
424
- on the acyclicity constraint, but may give worse bounds. The
425
- bound update mechanism in the proof of theorem 1 is also
426
- simpler than V AC and more akin to the “disjoint core phase”
427
- in core-guided MaxSAT solvers [Morgado et al. , 2013 ].3.1 Cluster Minimisation
428
- It is crucial for the quality of the lower bound produced by Al -
429
- gorithm 2 that the RC-clusters discovered by acycChecker
430
- are minimised, as the following example shows. Empirically ,
431
- omitting minimisation rendered the lower bound ineffectiv e.
432
- Example 3. Suppose that we attempt to use lowerBoundRC
433
- without cluster minimization. Then, we use the cluster
434
- given byacycChecker ,C1={0,1,2,3,4}. We have
435
- minrc(C1) = 3 , given from the empty parent set value of
436
- all variables. This brings the reduced cost of x4,{}to 0.
437
- It then proceeds to find the cluster C2={0,1,2,3}with
438
- minrc(C2) = 2 and decrease the reduced cost of x3,{}to
439
- 0, thenC3={0,1,2}withminrc(C3) = 1 , which brings
440
- the reduced cost of x1,{}to 0. At this point, acycChecker
441
- succeeds with the order {4,3,1,2,0}andlowerBoundRC re-
442
- turns a lower bound of 6, compared to 10 with minimization.
443
- The order produced by acycChecker also disagrees with the
444
- optimum structure.
445
- Therefore, when we get an RC-cluster Cat line 2 of algo-
446
- rithm 2, we want to extract a minimal RC-cluster (with re-
447
- spect to set inclusion) from C, i.e., a cluster C′⊆C, such
448
- that for all∅⊂C′′⊂C′,C′′is not a cluster.
449
- Minimisation problems like this are handled with an ap-
450
- propriate instantiation of QuickXPlain [Junker, 2004 ]. These
451
- algorithms find a minimal subset of constraints, not variabl es.
452
- We can pose this as a constraint set minimisation problem by
453
- implicitly treating a variable as the constraint “this vari able is
454
- assigned a value” and treating acyclicity as a hard constrai nt.
455
- However, the property of being an RC-cluster is not mono-
456
- tone. For example, consider the variables {v1,v2,v3,v4}
457
- andˆysuch that the domains restricted to values with 0 re-
458
- duced cost are{{v2}},{{v1}},{{v4}},{{v3}}, respectively.
459
- Then{v1,v2,v3,v4},{v1,v2}and{v3,v4}are RC-clusters.
460
- but{v1,v2,v3}is not because the sole value in the do-
461
- main ofv3does not intersect{v1,v2,v3}. We instead min-
462
- imise the set of variables that does not admit an acyclic so-
463
- lution and hence contains an RC-cluster. A minimal un-
464
- satisfiable set that contains a cluster is an RC-cluster, so
465
- this allows us to use the variants of QuickXPlain. We fo-
466
- cus on RobustXPlain, which is called the deletion-based al-
467
- gorithm in SAT literature for minimising unsatisfiable sub-
468
- sets[Marques-Silva and Menc´ ıa, 2020 ]. The main idea of the
469
- algorithm is to iteratively pick a variable and categorise i t as
470
- either appearing in all minimal subsets of C, in which case
471
- we mark it as necessary, or not, in which case we discard
472
- it. To detect if a variable appears in all minimal unsatisfi-
473
- able subsets, we only have to test if omitting this variable
474
- yields a set with no unsatisfiable subsets, i.e., with no vio-
475
- lated clusters. This is given in pseudocode in Algorithm 3.
476
- This exploits a subtle feature of acycChecker as described
477
- in Algorithm 1: if it is called with a subset of V, it does not
478
- try to place the missing variables in the order and allows par -
479
- ent sets to use these missing variables. Omitting variables
480
- from the set given to acycChecker acts as omitting the con-
481
- straint that these variables be assigned a value. The com-
482
- plexity ofMinimiseCluster isO(n3d), wheren=|V|and
483
- d= max v∈V|D(v)|, a convention we adopt throughout.Algorithm 3: Find a minimal RC-cluster subset of C
484
- MinimiseCluster (V, D, C)
485
- N=∅
486
- whileC/\e}atio\slash=∅do
487
- Pickc∈C
488
- C←C\{c}
489
- C′←V\acycChecker (N∪C,D)
490
- ifC′=∅then
491
- N←N∪{c}
492
- else
493
- C←C′\N
494
- returnN
495
- 4 Solving the Cluster LP
496
- Solving a linear program is in polynomial time, so in princip le
497
- DualSolve can be implemented using any of the commercial
498
- or free software libraries available for this. However, sol ving
499
- this LP using a general LP solver is too expensive in this set-
500
- ting. As a data point, solving the instance steelBIC with
501
- our modified solver took 25,016 search nodes and 45 sec-
502
- onds of search, and generated 5,869RC-clusters. Approx-
503
- imately 20% of search time was spent solving the LP using
504
- the greedy algorithm that we describe in this section. CPLEX
505
- took around 70 seconds to solve LPCwith these cluster in-
506
- equalities once. While this data point is not proof that solv ing
507
- the LP exactly is too expensive, it is a pretty strong indicat or.
508
- We have also not explored nearly linear time algorithms for
509
- solving positive LPs [Allen-Zhu and Orecchia, 2015 ].
510
- Our greedy algorithm is derived from theorem 1. Observe
511
- first thatLPCwithC=∅, i.e., only with constraints (2) has
512
- optimal dual solution ˆy0that assigns the dual variable yvof/summationtext
513
- S∈PS(v)xv,S= 1 tominS∈PS(v)σv(S). That leaves at
514
- least one of xv,S,S∈PS(v)with reduced cost 0 for each
515
- v∈V.DualSolve starts with ˆy0and then iterates over C.
516
- Givenˆyi−1and a cluster C, it setsˆyi= ˆyi−1ifCis not
517
- an RC-cluster. Otherwise, it increases the lower bound by
518
- c=minrc(C)and setsˆyi= ˆyi−1∪{yC=c}. It remains to
519
- specify the order in which we traverse C.
520
- We sort clusters by increasing size |C|, breaking ties by
521
- decreasing minimum cost of all original parent set values
522
- invarsof(C). This favours finding non-overlapping cluster
523
- cuts with high minimum cost. In section 6, we give experi-
524
- mental evidence that this computes better lower bounds.
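As a small illustration (the names are ours), this ordering heuristic amounts to sorting the cluster pool with a composite key before the greedy pass:

def cluster_sort_key(cluster, domains, scores):
    # Smallest clusters first; ties broken by decreasing minimum original
    # score among the parent sets that leave the cluster, i.e. varsof(C).
    min_cost = min(scores[v][S] for v in cluster for S in domains[v]
                   if not (set(S) & set(cluster)))
    return (len(cluster), -min_cost)

# clusters.sort(key=lambda C: cluster_sort_key(C, domains, scores))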
525
- DualImprove can be implemented by discarding previ-
526
- ous information and calling DualSolve (LPC(D)). Instead,
527
- it uses the RC-cluster Cto update the solution without revis-
528
- iting previous clusters.
529
- In terms of implementation, we store varsof(C)for each
530
- cluster, not cons(C). During DualSolve , we maintain the
531
- reduced costs of variables rather than the dual solution, ot h-
532
- erwise computing each reduced cost would require iterating
533
- over all cluster inequalities that contain a variable. Spec ifi-
534
- cally, we maintain ∆v,S=σv(S)−rcˆy(xv,S). In order to
535
- test whether a cluster Cis an RC-cluster, we need to com-
536
- puteminrc(C). To speed this up, we associate with each
537
- stored cluster a support pair (v,S)corresponding to the lastminimum cost found. If rcˆy(v,S) = 0 , the cluster is not an
538
- RC-cluster and is skipped. Moreover, parent set domains are
539
- sorted by increasing score σv(S), soS≻S′⇐⇒σv(S)>
540
- σv(S′). We also maintain the maximum amount of cost
541
- transferred to the lower bound, ∆max
542
- v= max S∈D(v)∆v,S
543
- for every v∈V. We stop iterating over D(v)as soon as
544
- σv(S)−∆max
545
- v is greater than or equal to the current mini-
546
- mum because∀S′≻S,σv(S′)−∆v,b≥σv(S)−∆max
547
- v.
548
- In practice, on very large instances 97.6%of unproductive
549
- clusters are detected by support pairs and 8.6%of the current
550
- domains are visited for the rest3.
551
- To keep a bounded-memory cluster pool, we discard fre-
552
- quently unproductive clusters. We throw away large cluster s
553
- with a productive ratio#productive
554
- #productive +#unproductivesmaller
555
- than1
556
- 1,000. Clusters of size 10 or less are always kept because
557
- they are often more productive and their number is bounded.
558
- 5 GAC for the Acyclicity Constraint
559
- Previously, van Beek and
560
- Hoffmann [van Beek and Hoffmann, 2015 ]showed that
561
- usingacycChecker as a subroutine, one can construct a
562
- GAC propagator for the acyclicity constraint by probing,
563
- i.e., detecting unsatisfiability after assigning each indi vidual
564
- value and pruning those values that lead to unsatisfiability .
565
- acycChecker is inO(n2d), so this gives a GAC propagator
566
- inO(n3d2). We show here that we can enforce GAC in time
567
- O(n3d), a significant improvement given that dis usually
568
- much larger than n.
569
- SupposeacycChecker finds a witness of acyclicity and
570
- returns the order O={v1,...,v n}. Every parent set Sof
571
- a variable vthat is a subset of {v′|v′≺Ov}is supported
572
- byO. We call such values consistent with O. Consider now
573
- S∈D(vi)which is inconsistent with O, therefore we have to
574
- probe to see if it is supported. We know that during the probe,
575
- nothing forcesacycChecker to deviate from{v1,...,v i−1}.
576
- So in a successful probe, acycChecker constructs a new or-
577
- derO′which is identical to Oin the first i−1positions and
578
- in which it moves vifurther down. Then all values consistent
579
- withO′are supported. This suggests that instead of probing
580
- each value, we can probe different orders.
581
- Acyclicity-GAC , shown in Algorithm 4, exploits this in-
582
- sight. It ensures first that acycChecker can produce a valid
583
- orderO. For each variable v, it constructs a new order O′
584
- fromOso thatvis as late as possible. It then prunes all par-
585
- ent set values of vthat are inconsistent with O′.
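The order-probing idea can be sketched as follows in Python, again reusing acyc_checker from the earlier sketch; this is our paraphrase of Algorithm 4, not the CPBayes code.

def acyclicity_gac(vertices, domains):
    order = acyc_checker(vertices, domains)
    if len(order) < len(vertices):
        return None  # acyclicity is unsatisfiable under the current domains
    new_domains = {}
    for v in vertices:
        prefix = set(order[:order.index(v)])
        changed = True
        while changed:  # push v as far down the order as possible
            changed = False
            for w in vertices:
                if w == v or w in prefix:
                    continue
                if any(set(s) <= prefix for s in domains[w]):
                    prefix.add(w)
                    changed = True
        # keep only the parent sets of v supported by this extended prefix
        new_domains[v] = [s for s in domains[v] if set(s) <= prefix]
    return new_domains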
586
- Theorem 4. Algorithm 4 enforces GAC on the Acyclicity con-
587
- straint in O(n3d).
588
- Proof. Letv∈VandS∈D(v). LetO={O1,...,O n}
589
- andQ={Q1,...,Q n}be two valid orders such that O
590
- does not support SwhereasQdoes. It is enough to show
591
- that we can compute from Oa new order O′that supports
592
- Sby pushing vtowards the end. Let Oi=Qj=vand
593
- letOp={O1,...,O (i−1)},Qp={Q1,...,Q (j−1)}and
594
- Os={Oi+1,...,O n}.
595
- 3See the supplementary material for more.Algorithm 4: GAC propagator for acyclicity
596
- Acyclicity-GAC (V , D)
597
- O←acycChecker (V,D)
598
- ifO/subsetnoteqlVthen
599
- return Failure
600
- foreachv∈Vdo
601
- changes←true
602
- i←O−1(v)
603
- prefix←{O1,...,O i−1}
604
- 4 whilechanges do
605
- changes←false
606
- foreachw∈O\(prefix∪{v})do
607
- if∃S∈D(w)s.t.S⊆prefix then
608
- prefix←prefix∪{w}
609
- changes←true
610
- Prune{S|S∈D(v)∧S/notsubseteqlprefix}
611
- return Success
612
- LetO′be the order Opfollowed by Qp, followed by v,
613
- followed by Os, keeping only the first occurrence of each
614
- variable when there are duplicates. O′is a valid order: Op
615
- is witnessed by the assignment that witnesses O,Qpby the
616
- assignment that witnesses Q,vbyS(as inQ) andOsby the
617
- assignment that witnesses O. It also supports S, as required.
618
- Complexity is dominated by repeating O(n)times the loop
619
- at line 4, which is a version of acycChecker so has complex-
620
- ityO(n2d)for a total O(n3d).
621
- 6 Experimental Results
622
- 6.1 Benchmark Description and Settings
623
- The datasets come from the UCI Machine Learning Reposi-
624
- tory4, the Bayesian Network Repository5, and the Bayesian
625
- Network Learning and Inference Package6. Local scores
626
- were computed from the datasets using B. Malone’s code7.
627
- BDeu and BIC scores were used for medium size instances
628
- (less than 64 variables) and only BIC score for large instanc es
629
- (above 64 variables). The maximum number of parents was
630
- limited to 5 for large instances (except for accidents.test
631
- with maximum of 8), a high value that allows even learning
632
- complex structures [Scanagatta et al. , 2015 ]. For example,
633
- jester.test has 100 random variables, a sample size of
634
- 4,116and770,950parent set values. For medium instances,
635
- no restriction was applied except for some BDeu scores (limi t
636
- sets to 6 or 8 to complete the computation of the local scores
637
- within 24 hours of CPU-time [Lee and van Beek, 2017 ]).
638
- We have modified the C++ source of CPBayes v1.1 by
639
- adding our lower bound mechanism and GAC propagator.
640
- We call the resulting solver ELSA and have made it publicly
641
- available. For the evaluation, we compare with GOBNILP
642
- v1.6.3 using SCIP v3.2.1 with cplex v12.7.0. All compu-
643
- tations were performed on a single core of Intel Xeon E5-
644
- 2680 v3 at 2.50 GHz and 256 GB of RAM with a 1-hour
645
- 4http://archive.ics.uci.edu/ml
646
- 5http://www.bnlearn.com/bnrepository
647
- 6https://ipg.idsia.ch/software.php?id=132
648
- 7http://urlearning.orgInstance |V|/summationtext|ps(v)|GOBNILP CPBayes ELSA ELSA\GAC ELSAchrono
649
- carpo100 BIC 60 424 0.6 78.5 (29.7) 40.6 (0.0) 40.7 (0.0) 40.6 (0.0)
650
- alarm1000 BIC 37 1003 1.2 204.2 (172.9) 27.8 (0.7) 28.8 (1.5) 29.9 (2.7)
651
- flagBDe 29 1325 4.4 19.0 (18.1) 0.9(0.1) 0.9 (0.1) 1.3 (0.5)
652
- wdbc BIC 31 14614 99.8 629.8 (576.6) 48.9 (1.6) 49.1 (1.7) 50.3 (3.1)
653
- kdd.ts 64 43584 327.6 † 1314.5 (158.2) 1405.4 (239.5) 1663.2 (512.4)
654
- steel BIC 28 93027 †1270.9 (1218.9) 98.0 (49.2) 99.2 (50.1) 130.0 (81.2)
655
- kdd.test 64 152873 1521.7 † 1475.3 (120.6) 1515.9 (128.5) 1492.4 (109.5)
656
- mushroom BDe 23 438186 † 176.4 (56.0) 135.4 (33.7) 137.0 (35.0) 133.7 (31.9)
657
- bnetflix.ts 100 446406 † 629.0 (431.4) 1065.1 (878.4) 1111.4 (931.0) 1132.4 (936.3)
658
- plants.test 69 520148 † †18981.9 (17224.0) 30791.2 (29073.0) †
659
- jester.ts 100 531961 † † 10166.0 (9697.9) 14915.9 (14470.1) 23877.6 (23325.7)
660
- accidents.ts 111 568160 1274.0 † 2238.7 (904.5) 2260.3 (986.1) 2221.1 (904.8)
661
- plants.valid 69 684141 † † 12347.6 (8509.7) 19853.1 (15963.1) †
662
- jester.test 100 770950 † †17637.8 (16979.2) 21284.0 (20661.9) †
663
- bnetflix.test 100 1103968 †3525.2 (3283.8) 8197.7 (7975.6) 8057.3 (7841.4) 7915.0 (7686.3)
664
- bnetflix.valid 100 1325818 †1456.6 (1097.0) 9282.0 (8950.3) 10220.5 (9898.4) 9619.7 (9257.4)
665
- accidents.test 111 1425966 4975.6 † 3661.7 (641.5) 4170.1 (1213.6) 3805.2 (687.6)
666
- Table 2: Comparison of ELSA against GOBNILP and CPBayes. Tim e limit for instances above the line is 1h, for the rest 10h. In stances are
667
- sorted by increasing total domain size. For variants of CPBa yes we report in parentheses time spent in search, after prep rocessing finishes. †
668
- indicates a timeout.
669
- (resp. 10-hour) CPU time limit for medium (resp. large)
670
- size instances. We used default settings for GOBNILP with
671
- no approximation in branch-and-cut ( limits/gap= 0 ). We
672
- used the same settings in CPBayes and ELSA for their pre-
673
- processing phase (partition lower bound sizes lmin,lmax and
674
- local search number of restarts rmin,rmax). We used two
675
- different settings depending on problem size |V|:lmin=
676
- 20,lmax= 26,rmin= 50,rmax= 500 if|V|≤64, else
677
- lmin= 20,lmax= 20,rmin= 15,rmax= 30 .
678
- 6.2 Evaluation
679
- In Table 2 we present the runtime to solve each instance to
680
- optimality with GOBNILP, CPBayes, and ELSA with default
681
- settings, without the GAC algorithm and without sorting the
682
- cluster pool (leaving clusters in chronological order, rat her
683
- than the heuristic ordering presented in Section 4). For the in-
684
- stances with/bardblV/bardbl≤64(resp.>64), we had a time limit of 1
685
- hour (resp. 10 hours). We exclude instances that were solved
686
- within the time limit by GOBNILP and have a search time of
687
- less than 10 seconds for CPBayes and all variants of ELSA.
688
- We also exclude 8 instances that were not solved to optimalit y
689
- by any method. This leaves us 17 instances to analyse here
690
- out of 69 total. More details are given in the supplemental
691
- material, available from the authors’ web pages.
692
- Comparison to GOBNILP. CPBayes was al-
693
- ready proven to be competitive to GOBNILP
694
- [van Beek and Hoffmann, 2015 ]. Our results in Table 2
695
- confirm this while showing that neither is clearly better.
696
- When it comes to our solver ELSA, for all the variants, all
697
- instances solved within the time limit by GOBNILP are
698
- solved, unlike CPBayes. On top of that, ELSA solves 9 more
699
- instances optimally.
700
- Comparison to CPBayes. We have made some low-level
701
- performance improvements in preprocessing of CPBayes, so
702
- for a more fair comparison, we should compare only thesearch time, shown in parentheses. ELSA takes several or-
703
- ders of magnitude less search time to optimally solve most
704
- instances, the only exception being the bnetflix instances.
705
- ELSA also proved optimality for 8 more instances within the
706
- time limit.
707
- Gain from GAC. The overhead of GAC pays off as the in-
708
- stances get larger. While we do not see either a clear im-
709
- provement nor a downgrade for the smaller instances, search
710
- time for ELSA improves by up to 47% for larger instances
711
- compared to ELSA \GAC.
712
- Gain from Cluster Ordering. We see that the ordering
713
- heuristic improves the bounds computed by our greedy dual
714
- LP algorithm significantly. Compared to not ordering the
715
- clusters, we see improved runtime throughout and 3 more in-
716
- stances solved to optimality.
717
- 7 Conclusion
718
- We have presented a new set of inference techniques for
719
- BNSL using constraint programming, centered around the ex-
720
- pression of the acyclicity constraint. These new technique s
721
- exploit and improve on previous work on linear relaxations o f
722
- the acyclicity constraint and the associated propagator. T he
723
- resulting solver explores a different trade-off on the axis of
724
- strength of inference versus speed, with GOBNILP on one
725
- extreme and CPBayes on the other. We showed experimen-
726
- tally that the trade-off we achieve is a better fit than either ex-
727
- treme, as our solver ELSA outperforms both GOBNILP and
728
- CPBayes. The major obstacle towards better scalability to
729
- larger instances is the fact that domain sizes grow exponen-
730
- tially with the number of variables. This is to some degree
731
- unavoidable, so our future work will focus on exploiting the
732
- structure of these domains to improve performance.Acknowledgements
733
- We thank the GenoToul (Toulouse, France) Bioinformatics
734
- platform for its support. This work has been partly funded
735
- by the “Agence nationale de la Recherche” (ANR-16-CE40-
736
- 0028 Demograph project and ANR-19-PIA3-0004 ANTI-
737
- DIL chair of Thomas Schiex).
738
- References
739
- [Allen-Zhu and Orecchia, 2015 ]Zeyuan Allen-Zhu and
740
- Lorenzo Orecchia. Nearly-linear time positive LP solver
741
- with faster convergence rate. In Proc. of the Forty-Seventh
742
- Annual ACM Symposium on Theory of Computing ,
743
- STOC’15, page 229–236, New York, NY , USA, 2015.
744
- [Bartlett and Cussens, 2017 ]Mark Bartlett and James
745
- Cussens. Integer linear programming for the bayesian net-
746
- work structure learning problem. Artificial Intelligence ,
747
- pages 258–271, 2017.
748
- [Berg et al. , 2014 ]Jeremias Berg, Matti J¨ arvisalo, and Bran-
749
- don Malone. Learning optimal bounded treewidth
750
- bayesian networks via maximum satisfiability. In Artificial
751
- Intelligence and Statistics , pages 86–95. PMLR, 2014.
752
- [Buntine, 1991 ]Wray Buntine. Theory refinement on
753
- bayesian networks. In Proc. of UAI , pages 52–60. Else-
754
- vier, 1991.
755
- [Chickering, 1995 ]David Maxwell Chickering. Learning
756
- bayesian networks is NP-Complete. In Proc. of Fifth Int.
757
- Workshop on Artificial Intelligence and Statistics (AIS-
758
- TATS) , pages 121–130, Key West, Florida, USA, 1995.
759
- [Cooper et al. , 2010 ]Martin C Cooper, Simon de Givry,
760
- Martı S´ anchez, Thomas Schiex, Matthias Zytnicki, and
761
- Tomas Werner. Soft arc consistency revisited. Artificial
762
- Intelligence , 174(7-8):449–478, 2010.
763
- [Cussens et al. , 2017 ]James Cussens, Matti J¨ arvisalo,
764
- Janne H Korhonen, and Mark Bartlett. Bayesian network
765
- structure learning with integer programming: Polytopes,
766
- facets and complexity. Journal of Artificial Intelligence
767
- Research , 58:185–229, 2017.
768
- [de Campos and Ji, 2010 ]Cassio Polpo de Campos and
769
- Qiang Ji. Properties of bayesian dirichlet scores to learn
770
- bayesian network structures. In Proc. of AAAI-00 , Atlanta,
771
- Georgia, USA, 2010.
772
- [de Campos et al. , 2018 ]Cassio P de Campos, Mauro
773
- Scanagatta, Giorgio Corani, and Marco Zaffalon. Entropy-
774
- based pruning for learning bayesian networks using BIC.
775
- Artificial Intelligence , 260:42–50, 2018.
776
- [Fan and Yuan, 2015 ]Xiannian Fan and Changhe Yuan. An
777
- improved lower bound for bayesian network structure
778
- learning. In Proc. of AAAI-15 , Austin, Texas, 2015.
779
- [Heckerman et al. , 1995 ]David Heckerman, Dan Geiger,
780
- and David M Chickering. Learning bayesian networks:
781
- The combination of knowledge and statistical data. Ma-
782
- chine learning , 20(3):197–243, 1995.
783
- [Junker, 2004 ]Ulrich Junker. Preferred explanations and re-
784
- laxations for over-constrained problems. In Proc. of AAAI-
785
- 04, pages 167–172, San Jose, California, USA, 2004.[Lam and Bacchus, 1994 ]Wai Lam and Fahiem Bacchus.
786
- Using new data to refine a bayesian network. In Proc. of
787
- UAI, pages 383–390, 1994.
788
- [Lee and van Beek, 2017 ]Colin Lee and Peter van Beek. An
789
- experimental analysis of anytime algorithms for bayesian
790
- network structure learning. In Advanced Methodologies
791
- for Bayesian Networks , pages 69–80, 2017.
792
- [Marques-Silva and Menc´ ıa, 2020 ]Jo˜ ao Marques-Silva and
793
- Carlos Menc´ ıa. Reasoning about inconsistent formulas.
794
- In Christian Bessiere, editor, Proc. of IJCAI-2020 , pages
795
- 4899–4906, 2020.
796
- [Morgado et al. , 2013 ]Ant´ onio Morgado, Federico Heras,
797
- Mark H. Liffiton, Jordi Planes, and Jo˜ ao Marques-Silva.
798
- Iterative and core-guided MaxSAT solving: A survey and
799
- assessment. Constraints An Int. J. , 18(4):478–534, 2013.
800
- [Papadimitriou and Steiglitz, 1998 ]Christos H Papadim-
801
- itriou and Kenneth Steiglitz. Combinatorial optimization:
802
- algorithms and complexity . Courier Corporation, 1998.
803
- [Rossi et al. , 2006 ]Francesca Rossi, Peter Van Beek, and
804
- Toby Walsh. Handbook of constraint programming . El-
805
- sevier, 2006.
806
- [Scanagatta et al. , 2015 ]Mauro Scanagatta, Cassio P
807
- de Campos, Giorgio Corani, and Marco Zaffalon. Learn-
808
- ing bayesian networks with thousands of variables. Proc.
809
- of NeurIPS , 28:1864–1872, 2015.
810
- [Schwarz, 1978 ]Gideon Schwarz. Estimating the dimension
811
- of a model. The Annals of Statistics , 6(2):461–464, 1978.
812
- [Silander and Myllym¨ aki, 2006 ]Tomi Silander and Petri
813
- Myllym¨ aki. A simple approach for finding the globally
814
- optimal bayesian network structure. In Proc. of UAI’06 ,
815
- Cambridge, MA, USA, 2006.
816
- [van Beek and Hoffmann, 2015 ]Peter van Beek and Hella-
817
- Franziska Hoffmann. Machine learning of bayesian net-
818
- works using constraint programming. In Proc. of Inter-
819
- national Conference on Principles and Practice of Con-
820
- straint Programming , pages 429–445, Cork, Ireland, 2015.
821
- [Walsh, 2000 ]Toby Walsh. SAT vs CSP. In Proc. of the
822
- Sixth International Conference on Principles and Practice
823
- of Constraint Programming , pages 441–456, 2000.
824
- [Yuan and Malone, 2013 ]Changhe Yuan and Brandon Mal-
825
- one. Learning optimal bayesian networks: A shortest path
826
- perspective. J. of Artificial Intelligence Research , 48:23–
827
- 65, 2013.
txt/2106.14625.txt DELETED
@@ -1,752 +0,0 @@
1
- AMU-EURANOVA at CASE 2021 Task 1: Assessing the stability of
2
- multilingual BERT
3
- Léo Bouscarrat1,2, Antoine Bonnefoy1, Cécile Capponi2, Carlos Ramisch2
4
- 1EURA NOVA, Marseille, France
5
- 2Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
6
- {leo.bouscarrat, antoine.bonnefoy}@euranova.eu
7
- {leo.bouscarrat, cecile.capponi, carlos.ramisch}@lis-lab.fr
8
- Abstract
9
- This paper explains our participation in task 1
10
- of the CASE 2021 shared task. This task is
11
- about multilingual event extraction from news.
12
- We focused on sub-task 4, event information
13
- extraction. This sub-task has a small training
14
- dataset and we fine-tuned a multilingual BERT
15
- to solve this sub-task. We studied the instabil-
16
- ity problem on the dataset and tried to mitigate
17
- it.
18
- 1 Introduction
19
- Event extraction is becoming more and more impor-
20
- tant as the number of online news increases. This
21
- task consists of extracting events from documents,
22
- especially news. An event is defined by a group of
23
- entities that give some information about the event.
24
- Therefore, the goal of this task is to extract, for
25
- each event, a group of entities that define the event,
26
- such as the place and time of the event.
27
- This task is related but still different from named
28
- entity recognition (NER) as the issue is to group
29
- the entities that are related to the same event, and
30
- differentiate those related to different events. This
31
- difference makes the task harder and also compli-
32
- cates the annotation.
33
- In the case of this shared task, the type of events
34
- to extract is protests (H ¨urriyeto ˘glu et al., 2021a,b).
35
- This shared task is in the continuation of two previ-
36
- ous shared tasks at CLEF 2019 (H ¨urriyeto ˘glu et al.,
37
- 2019) and AESPEN (H ¨urriyeto ˘glu et al., 2020).
38
- The first one deals with English event extraction
39
- with three sub-tasks: document classification, sen-
40
- tence classification, and event information extrac-
41
- tion. The second focuses on event sentence co-
42
- reference identification, whose goal is to group
43
- sentences related to the same events.
44
- This year, task 1 is composed of the four afore-
45
- mentioned tasks and adds another difficulty: multi-
46
linguality. This year’s data is available in English, Spanish, and Portuguese. Thus, it is important to
47
- note that there is much more data in English than
48
- in the other languages. For the document classi-
49
- fication sub-task, to test multilingual capabilities,
50
- Hindi is available on the testing set only.
51
- We have mainly focused on the last sub-task
52
- (event information extraction), but we have also
53
- submitted results for the first and second sub-tasks
54
- (document and sentence classification). We used
55
- multilingual BERT (Devlin et al., 2019), hence-
56
- forth M-BERT, which is a model known to obtain
57
- near state-of-the-art results on many tasks. It is also
58
- supposed to work well for zero-or-few-shot learn-
59
- ing on different languages (Pires et al., 2019). We
60
- will see the results on these sub-tasks, especially
61
- for sub-task 4 where the training set available for
62
- Spanish and Portuguese is small.
63
Thus, one of the issues with transformer-based models such as M-BERT is their instability on small datasets (Dodge et al., 2020; Ruder, 2021). The instability issue refers to the fact that changing some random seeds before the learning phase, while keeping the same architecture, data and hyper-parameters, can produce results with large variance. We will look at some solutions to mitigate this issue, and at how it impacts our results for sub-task 4.1
72
- 2 Tasks and data
73
- Sub-tasks 1 and 2 can be seen as binary sequence
74
- classification, where the goal is to say if a given
75
- sequence is part of a specific class. In our case, a
76
- classifier must predict whether a document contains
77
- information about an event for sub-task 1 or if a
78
- sentence contains information about an event for
79
- sub-task 2.
80
- Document and sentence classification tasks, sub-
81
- tasks 1 and 2, are not our main research interest.
82
1 Our code is available here: https://github.com/euranova/AMU-EURANOVA-CASE-2021
Figure 1: Example of a snippet from sub-task 4.
84
- Moreover, the datasets provided for these tasks
85
- are less interesting (reasonable amount of training
86
- data).
87
- On the other hand, sub-task 4 not only has less
88
- training data available but also requires more fine-
89
- grained token-based prediction. The goal of sub-
90
- task 4 is to extract event information from snippets
91
- that contain sentences speaking about the same
92
- event. H ¨urriyeto ˘glu et al. (2019) have defined that
93
- an event has the following information classes (ex-
94
- ample in Figure 1):
95
- •Time, which indicates when the protest took
96
- place,
97
- •Facility name, which indicates in which facil-
98
- ity the protest took place,
99
- •Organizer, which indicates who organized the
100
- protest,
101
- • Participant, which indicates who participated
102
- in the protest,
103
- •Place, which indicates where the protest took
104
- place in a more general area than the facility
105
- (city, region, ...),
106
- •Target, which indicates against whom or what
107
- the protest took place,
108
- •Trigger, which is a specific word or group of
109
- words that indicate that a protest took place
110
- (examples: protested, attack, ...),
111
Note that not all the snippets contain all the classes, and a snippet can contain the same class several times.
113
- Each information can be composed of one or sev-
114
- eral adjacent words. Each snippet contains infor-
115
- mation related to one and only one event.
116
- As the data is already separated into groups
117
- of sentences related to the same event, our ap-
118
- proach consists of considering a task of named
119
- entity recognition with the aforementioned classes.
120
- Multilingual BERT has already been used for multi-
121
- lingual named entity recognition and showed great
122
- results compared to state-of-the-art models (Hakala
123
and Pyysalo, 2019).
The data is in BIO format (Ramshaw and Mar-
124
- cus, 1995), where each word has a B tag or an I tag
125
- of a specific class or an O tag. The B tag means
126
- beginning and marks the beginning of a new entity.
127
- The tag I means inside, which has to be preceded
128
- by another I tag or a B tag, and marks that the word
129
- is inside an entity but not the first word of the entity.
130
- Finally, the O-tag means outside, which means the
131
- word is not part of an entity.
132
- 3 System overview
133
- Our model is based on pre-trained multilingual
134
- BERT (Devlin et al., 2019). This model has been
135
- pretrained on multilingual Wikipedia texts. To bal-
136
- ance the fact that the data is not equally distributed
137
- between all the languages the authors used expo-
138
- nential smoothed weighting to under-sample the
139
- most present languages and over-sample the rarest
140
- ones. This does not perfectly balance all the lan-
141
- guages but it reduces the impact of low-resourced
142
- languages.
143
- The authors of the M-BERT paper shared the
144
- weights of a pretrained model that we use to do fine-
145
- tuning. Fine-tuning a model consists of taking an
146
- already trained model on a specific task and using
147
- this model as a starting point of the training for the
148
- task of interest. This approach has reached state-
149
- of-the-arts in numerous tasks. In the case of M-
150
- BERT, the pre-training tasks are Masked Language
151
- Modeling (MLM) and Next Sentence Prediction
152
- (NSP).
153
- To be able to learn our task, we add a dense layer
154
- on top of the outputs of M-BERT and learn it during
155
- the fine-tuning. All our models are fine-tuning all
156
- the layers of M-BERT.
157
- The implementation is the one from Hugging-
158
- Face’s ‘transformers’ library (Wolf et al., 2020). To
159
- train it on our data, the model is fine-tuned on each
160
- sub-task.
161
3.1 Sub-tasks 1 and 2
162
- For sub-tasks 1 and 2, we approach these tasks
163
- as binary sequence classification, as the goal is to
164
- predict whether or not a document (sub-task 1) orsentence (sub-task 2) contains relevant information
165
- about a protest event. Thus the size of the output
166
- of the dense layer is 2. We then perform an argmax
167
- on these values to predict a class. We use the base
168
- parameters in HuggingFace’s ’transformers’ library.
169
- The loss is a cross-entropy, the learning rate is
170
- handled by an AdamW optimizer (Loshchilov and
171
- Hutter, 2019) and the activation function is a gelu
172
- (Hendrycks and Gimpel, 2016). We use a dropout
173
- of 10% for the fully connected layers inside M-
174
- BERT and the attention probabilities.
175
- One of the issues with M-BERT is the limited
176
- length of the input, as it can only take 512 tokens,
177
- which are tokenized words. M-BERT uses the
178
- wordpiece tokenizer (Wu et al., 2016). A token is
179
- either a word if the tokenizer knows it, if it does not
180
- it will separate it into several sub-tokens which are
181
- known. For sub-task 1, as we are working with en-
182
- tire documents, it can be frequent that a document
183
- is longer than this limit and has to be broken down
184
- into several sub-documents. To retain contexts in
185
- each sub-documents we use an overlap of 150 to-
186
- kens, which means between two sub-documents,
187
they will have 150 tokens in common. Our method to output a class, in this case, is as follows (a code sketch is given after the list):
189
- • tokenize a document,
190
- •if the tokenized document is longer than
191
- the 512-tokens limit, create different sub-
192
- documents with 150-tokens overlaps between
193
- each sub-document,
194
- • generate a prediction for each sub-document,
195
- •average all the predictions from sub-
196
- documents originated from the same docu-
197
- ment,
198
- • take the argmax of the final prediction.
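The following Python sketch illustrates this sliding-window procedure. It assumes HuggingFace 'transformers'-style tokenizer and model objects, and it omits special tokens and padding for brevity; the function and variable names are illustrative, not taken from our released code.

import torch

def classify_long_document(text, tokenizer, model, max_len=512, overlap=150):
    # Tokenize the whole document once, then cut it into overlapping sub-documents.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    stride = max_len - overlap
    windows = [ids[i:i + max_len] for i in range(0, max(len(ids) - overlap, 1), stride)]
    probs = []
    for window in windows:
        with torch.no_grad():
            logits = model(input_ids=torch.tensor([window])).logits
        probs.append(torch.softmax(logits, dim=-1))
    # Average the predictions of all sub-documents, then take the argmax.
    return torch.cat(probs, dim=0).mean(dim=0).argmax().item()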
199
- 3.2 Sub-task 4
200
- For sub-task 4, our approach is based on word
201
- classification where we predict a class for each
202
- word of the documents.
203
- One issue is that as words are tokenized and can
204
- be transformed into several sub-tokens we have to
205
- choose how to choose the prediction of a multi-
206
- token word. Our approach is to take the prediction
207
- of the first token composing a word as in Hakala
208
- and Pyysalo (2019).
209
- We also have to deal with the input size as some
210
documents are longer than the limit. In this case, we separate them into sub-documents with an overlap of 150 tokens. Our approach, sketched in code after the list, is:
212
- • tokenize a document,
213
- •if the tokenized document is longer than
214
- the 512-tokens limit, create different sub-
215
- documents with 150-tokens overlaps between
216
- each sub-document,
217
- • generate a prediction for each sub-document,
218
- •reconstruct the entire document: take the first
219
- and second sub-documents, average the pre-
220
- diction for the same tokens (from the overlap),
221
- keep the prediction for the others, then use
222
- the same process with the obtained document
223
- and the next sub-document. As the size of
224
- each sequence is 512 and the overlap is only
225
- 150, no tokens can be in more than 2 different
226
- sequences,
227
- •take the argmax of the final prediction for each
228
- word.
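A sketch of the reconstruction step, assuming the per-window probabilities have already been computed and aligned; the array shapes and the NumPy-based merging are our illustrative choices, not the exact implementation.

import numpy as np

def merge_window_predictions(window_probs, overlap=150):
    # window_probs: list of (window_len, n_labels) arrays for consecutive sub-documents,
    # each sharing `overlap` tokens with the previous one.
    merged = window_probs[0].copy()
    for probs in window_probs[1:]:
        shared = min(overlap, len(probs), len(merged))
        # Average the predictions on the overlapping tokens, keep the rest unchanged.
        merged[-shared:] = (merged[-shared:] + probs[:shared]) / 2.0
        merged = np.concatenate([merged, probs[shared:]], axis=0)
    # One label per token; a word takes the label predicted for its first sub-token.
    return merged.argmax(axis=-1)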
229
- 3.2.1 Soft macro-F1 loss
230
- We used a soft macro-F1 loss (Lipton et al., 2014).
231
- This loss is closer than categorical cross-entropy on
232
- BIO labels to the metric used to evaluate systems
233
- in the shared task. The main issue with F1 is its
234
- non-differentiability, so it cannot be used as is but
235
- must be modified to become differentiable. The F1
236
- score is based on precision and recall, which in turn
237
- are functions of the number of true positives, false
238
- positives, and false negatives. These quantities are
239
- usually defined as follows:
240
tp = \sum_{i \in tokens} pred(i) \cdot true(i)
fp = \sum_{i \in tokens} pred(i) \cdot (1 - true(i))
fn = \sum_{i \in tokens} (1 - pred(i)) \cdot true(i)
246
- With:
247
- •tokens , the list of tokens in a document,
248
- •true(i) , 0 if the true label of the token i is of
249
- the negative class, 1 if the true label is of the
250
- positive class
251
- •pred(i) , 0 if the predicted label of the token i
252
- is of the negative class, 1 if the predicted label
253
is of the positive class.
As we use a macro-F1 loss, we compute the F1
254
- score for each class where the positive class is the
255
- current class and negative any other class, e.g. if
256
- the reference class is B-trigger, then true(i)=1 for
257
- B-trigger and true(i)=0 for all other classes when
258
- macro-averaging the F1.
259
- We replace the binary function pred(i) by a func-
260
- tion outputting the predicted probability of the to-
261
- ken i to be of the positive class:
262
soft_tp = \sum_{i \in tokens} proba(i) \cdot true(i)
soft_fp = \sum_{i \in tokens} proba(i) \cdot (1 - true(i))
soft_fn = \sum_{i \in tokens} (1 - proba(i)) \cdot true(i)
268
- With proba(i) outputting the probability of the
269
- token i to be of the positive class, this probability is
270
- the predicted probability resulting from the softmax
271
- activation of the fine-tuning network.
272
- Then we compute, in a similar fashion as a nor-
273
- mal F1, the precision and recall using the soft defi-
274
- nitions of the true positive, false positive, and false
275
- negative. And finally we compute the F1 score with
276
- the given precision and recall. As a loss function is
277
- a criterion to be minimized whereas F1 is a score
278
- that we would like to maximize, the final loss is
279
1 - F1.
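In code, the loss can be written as follows (a PyTorch sketch; the epsilon terms are our addition to avoid divisions by zero and are not described above):

import torch

def soft_macro_f1_loss(probs, targets, eps=1e-8):
    # probs: (n_tokens, n_classes) softmax outputs of the prediction layer.
    # targets: (n_tokens, n_classes) one-hot encoding of the BIO labels.
    soft_tp = (probs * targets).sum(dim=0)
    soft_fp = (probs * (1 - targets)).sum(dim=0)
    soft_fn = ((1 - probs) * targets).sum(dim=0)
    precision = soft_tp / (soft_tp + soft_fp + eps)
    recall = soft_tp / (soft_tp + soft_fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return 1 - f1.mean()   # macro-average over classes, turned into a loss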
280
- 3.2.2 Recommendation for improved stability
281
- A known problem of Transformers-based models
282
- is the training instability, especially with small
283
- datasets (Dodge et al., 2020; Ruder, 2021). Dodge
284
- et al. (2020) explain that two elements that have
285
- much influence on the stability are the data order
286
- and the initialization of the prediction layer, both
287
- controlled by pseudo-random numbers generated
288
- from a seed. To study the impact of these two el-
289
- ements on the models’ stability, we freeze all the
290
- randomness on the other parts of the models and
291
- change only two different random seeds:
292
- •the data order, i.e. the different batches and
293
- their order. Between two runs the model will
294
- see the same data during each epoch but the
295
- batches will be different, as the batches are
296
- built beforehand and do not change between
297
- epochs,
298
- •the initialization of the linear layer used to
299
predict the output of the model.
Another recommendation to work with
300
- Transformers-based models and small data made
301
- by Mosbach et al. (2021) is to use smaller learning
302
- rates but compensating with more epochs. We have
303
- taken this into account during the hyper-parameter
304
- search.
305
- Ruder (2021) recommend using behavioral fine-
306
- tuning to reduce fine-tuning instabilities. It is sup-
307
- posed to be especially helpful to have a better ini-
308
- tialization of the final prediction layer. It has also al-
309
- ready been used on named entity recognition tasks
310
- (Broscheit, 2019) and has shown that it has im-
311
- proved results for a task with a very small training
312
- dataset. Thus, to do so, we need a task with the
313
- same number of classes, but much larger training
314
- datasets. As we did not find such a task, we de-
315
- cided to fine-tune our model on at least the different
316
- languages we are working with, English, Spanish
317
- and Portuguese. We used named entity recognition
318
- datasets and kept only three classes in common in
319
- all the datasets: person, organization, and location.
320
- These three types of entities can be found in the
321
- shared task.
322
To perform this test, training proceeded as follows (a code sketch follows the list):
324
- •the first fine-tuning is done on the concatena-
325
- tion of NER datasets in different languages,
326
- once the training is finished we save all the
327
- weights of the model,
328
- •we load the weights of the previous model,
329
- except for the weights of the final prediction
330
- layer which are randomized with a given seed,
331
- •we train the model on the dataset of the shared
332
- task.
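A minimal sketch of the second step, assuming HuggingFace 'transformers'; the checkpoint path, label counts and seed variable are placeholders:

import torch
from transformers import AutoModelForTokenClassification, set_seed

# Load the model fine-tuned on the concatenated NER datasets (first step).
model = AutoModelForTokenClassification.from_pretrained(
    "ner-finetuned-mbert", num_labels=num_ner_labels)

# Re-initialize only the final prediction layer with a given seed, keep all other weights.
set_seed(layer_init_seed)
model.classifier = torch.nn.Linear(model.config.hidden_size, num_shared_task_labels)

# The model is then fine-tuned on the shared-task dataset as in the other experiments.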
333
- 4 Experimental setup
334
- 4.1 Data
335
- The dataset of the shared task is based on articles
336
- from different newspapers in different languages.
337
More information about this dataset can be found in (Hürriyetoğlu et al., 2021a).
339
- For the final submissions of sub-tasks 1, 2, and 4
340
- we divided the dataset given for training purposes
341
- into two parts with 80% for training and 20% for
342
- evaluation during the system training phase. We
343
- then predicted the data given for testing purposes
344
- during the shared task evaluation phase. The quan-
345
- tity of data for each sub-task and language can be
346
found in Table 1. We can note that the majority of
Sub-task    English    Spanish    Portuguese
347
- Sub-task 1 9,324 1,000 1,487
348
- Sub-task 2 22,825 2,741 1,182
349
- Sub-task 4 808 33 30
350
- Table 1: Number of elements for each sub-task for each
351
- language in the data given for training purposes. Docu-
352
- ments for sub-task 1, sentences for sub-task 2, snippet
353
- (group of sentences about one event) for sub-task 4.
354
- Dataset Train Eval Test
355
- CoNLL 2003 14,041 3,250 3,453
356
- CoNLL 2002 8,324 1,916 1,518
357
- HAREM 121 8 128
358
- Table 2: Number of elements for each dataset used in
359
- the behavioral fine-tuning in each split.
360
- the data is in English. Spanish and Portuguese are
361
- only a small part of the dataset.
362
- For all the experiments made on sub-task 4, we
363
- divided the dataset given for training purposes into
364
- three parts with 60% for training, 20% for evaluat-
365
- ing and 20% for testing.
366
- To be able to do our approach of behavioral fine-
367
- tuning, we needed some Named Entity Recognition
368
- datasets in English, Spanish and Portuguese. For
369
- English we used the CoNLL 2003 dataset (Tjong
370
- Kim Sang and De Meulder, 2003), for Spanish the
371
- Spanish part of the CoNLL 2002 dataset (Tjong
372
- Kim Sang, 2002) and for Portuguese the HAREM
373
- dataset (Santos et al., 2006). Each of these datasets
374
- had already three different splits for training, devel-
375
- opment and test. Information about their size can
376
- be found in Table 2.
377
- The dataset for Portuguese is pretty small com-
378
- pared to the two others, but the impact of the size
379
- can be interesting to study.
380
- 4.2 Hyper-parameter search
381
- For sub-task 4, we did a hyper-parameter search
382
- to optimize the results. We used Ray Tune (Liaw
383
et al., 2018) and the HyperOpt algorithm (Bergstra et al., 2013). We launched 30 different trainings; all the information about the search space and the hyper-parameters can be found in Appendix A.1. The goal is
387
- to optimize the macro-F1 on the evaluation set.
388
- Our goal was to find a set of hyper-parameters
389
- that performs well to use always the same in the
390
- following experiments. We also wanted to evaluate
391
- the impacts of the hyper-parameters on the training.4.3 Behavioral fine-tuning
392
- For the first part of the behavioral fine-tuning,
393
- we trained an M-BERT model on the three NER
394
- datasets for one epoch. We only learn for one epoch
395
- for timing issues, as the learning on this datasets
396
- takes several hours. We then fine-tune the resulting
397
- models with the best set of hyper-parameters found
398
- with the hyper-parameter search.
399
- 4.4 Stability
400
- To study the stability of the model and the impact
401
- of behavioral fine-tuning we made 6 sets of experi-
402
- ments with 20 experiments in each set:
403
- •normal fine-tuning with random data order
404
- and frozen initialization of final layer,
405
- •normal fine-tuning with frozen data order and
406
- random initialization of final layer,
407
- •normal fine-tuning with random data order
408
- and random initialization of final layer,
409
- •behavioral fine-tuning with random data order
410
- and frozen initialization of final layer,
411
- •behavioral fine-tuning with frozen data order
412
- and random initialization of final layer,
413
- •behavioral fine-tuning with random data order
414
- and random initialization of final layer,
415
- Once again it is important to note that what we
416
- called behavioral fine-tuning is different from be-
417
- havioral fine-tuning as proposed by Ruder (2021),
418
- as we reset the final layer. Only the weights of all
419
- the layers of M-BERT are modified.
420
- For each set of experiments we will look at
421
- the average of the macro-F1, as implemented in
422
- Nakayama (2018), and the standard deviation of
423
- the macro-F1 on the training dataset, on the evalua-
424
- tion dataset, and on three different test datasets, one
425
- for each language. Thus we will be able to assess
426
- the importance of the instability, if our approach to
427
- behavioral fine-tuning helps to mitigate it and if it
428
- has similar results across the languages.
429
- We can also note that in our implementation
430
- the batches are not randomized. They are built
431
- once before the learning phase and do not change,
432
- neither in content nor order of passage, between
433
each epoch.
Figure 2: (Top) Parallel coordinates plot of the 30 experiments on sub-task 4 during the hyper-parameter search in
434
- function of the value of the hyper-parameters and the value of the F1 on the evaluation set. Each line represents an
435
- experiment, and each column a specific hyper-parameter, except the last which is the value of the metric. (Bottom)
436
- Same plot with the worst results removed to have a better view of the best results.
437
- 5 Results
438
- 5.1 Hyper-parameter search
439
- The results of the hyper-parameter search can be
440
- seen in Figure 2. On the top pictures which repre-
441
- sent the 30 experiments, we can see that a specific
442
- hyper-parameter seems to impact the worst results
443
- (in blue). This parameter is the learning rate, we
444
- can see it in the red box on the top image, all the
445
- blue lines are at the bottom, which means these
446
- experiments had a small learning rate. It seems
447
- that we obtain the best results with a learning rate
448
- around 5e-05 (0.00005), lower than 1e-06 seems to
449
- give bad results.
450
- We can then focus on the bottom picture, with
451
- the same type of plot but with the worst results
452
- removed. Another hyper-parameter that seems to
453
- have an impact is the number of training epochs,
454
- 40 seems better than 20. We use a high number of
455
- epochs as recommended by Mosbach et al. (2021)
456
- to limit the instability. Beyond the learning rate and
457
- number of epochs, it is then hard to find impactful
458
- hyper-parameters.
459
- Finally, the set of hyper-parameters that has been
460
- selected is:• Adafactor: True
461
- • Number of training epochs: 40
462
- • Adam beta 2: 0.99
463
- • Adam beta 1: 0.74
464
- • Maximum gradient norm: 0.17
465
- • Adam epsilon: 3e-08
466
- • Learning rate: 5e-05
467
- • Weight decay: 0.36
468
- For the stability experiments, the number of
469
- training epochs have been reduced to 20 for speed
470
- purposes. For the first part of the behavioral fine-
471
- tuning, the learning rate has been set to 1e-05 as
472
- more data were available.
473
- 5.2 Behavioral fine-tuning
474
- The results on the test dataset of each model after
475
- one epoch of training can be found in Table 5.
476
- We could not compare to state-of-the-art NER
477
- models on these three datasets as we do not take all
478
the classes (classes such as MISC were removed

     Data    Init layer    Train         Eval          Test EN       Test ES       Test PT
N    Rand    Fix           86.11 (1.08)  69.34 (1.01)  71.80 (.85)   54.33 (3.43)  73.14 (1.96)
N    Fix     Rand          86.88 (.53)   70.03 (.63)   71.68 (.53)   55.02 (3.28)  74.51 (2.41)
N    Rand    Rand          86.63 (1.08)  69.56 (.97)   71.94 (.72)   54.73 (3.44)  74.08 (3.37)
B    Rand    Fix           85.79 (.97)   69.32 (1.00)  71.60 (.54)   54.69 (2.99)  74.01 (2.92)
B    Fix     Rand          86.20 (.55)   69.57 (.51)   71.80 (.58)   53.97 (3.90)  74.50 (2.67)
B    Rand    Rand          86.11 (.87)   69.40 (.80)   71.85 (.73)   55.51 (2.82)  74.97 (2.66)
485
- Table 3: Average macro-F1 score, higher is better (standard deviation, lower is better) of the 20 experiments with
486
- the specified setup. N means normal fine-tuning and B behavioral fine-tuning. Data means data order and Init layer
487
- means initialization of the final layer. Rand means random, and fix refers to frozen.
488
- English Spanish Portuguese Hindi
489
- Sub-task 1 53.46 (84.55) 46.47 (77.27) 46.47 (84.00) 29.66 (78.77)
490
- Sub-task 2 75.64 (85.32) 76.39 (88.61) 81.61 (88.47) /
491
- Sub-task 4 69.96 (78.11) 56.64 (66.20) 61.87 (73.24) /
492
- Table 4: Score of our final submissions for each sub-task, in parenthesis the score achieved by the best scoring
493
- team on each sub-task.
494
- Dataset Test macro-F1
495
- CoNLL 2003 89.8
496
- CoNLL 2002 86.1
497
- HAREM 76.1
498
- Table 5: Macro-F1 score of the NER task on the test
499
- split of each dataset used in behavioral fine-tuning after
500
- training the base M-BERT for 1 epoch.
501
- before the learning phase). The metrics used on
502
- these datasets are not by classes, so the comparison
503
- cannot be made. However, the results are already
504
- much better than what a random classifier would
505
- output, thus the weights of the models should al-
506
- ready be better than the weights of the base model.
507
- 5.3 Stability
508
- The results of the different sets of experiments can
509
- be found in Table 3. First, we can see that the dif-
510
- ference between behavioral fine-tuning and normal
511
- fine-tuning is not important enough to say one is
512
- better than the other. We can also note that the
513
- standard deviation is small for English, but not
514
- negligible for Spanish and Portuguese.
515
- 5.4 Final submission
516
- The results of the final submissions can be found
517
- in Table 4. We can see that our results are lower
518
- than the best results, especially for sub-task 1 with
519
- a difference of between 30 to 50 macro-F1 score
520
depending on the language, whereas for sub-tasks 2 and 4 the difference is close to 10 macro-F1 score
521
- for all the languages.
522
- 6 Conclusion
523
- 6.1 Sub-task 1 and 2
524
- As we can see in Table 4, our final results for sub-
525
- task 1 are much lower than the best results, but for
526
sub-task 2 the difference is smaller. This is interesting: since the tasks are quite similar, we expected the difference between our results and the best results to be of the same magnitude.
530
- One explanation could be our approach to han-
531
- dle documents longer than the input of M-BERT.
532
- We have chosen to take the average of the sub-
533
- documents, but if one part of a document contains
534
an event, the entire document does too. We might have obtained better results by labeling a document as positive when at least one of its sub-documents is predicted to contain an event.
537
- It is then hard to compare to other models as we
538
- have chosen to use one model for all the languages
539
- and we do not know the other approaches.
540
- 6.2 Sub-task 4
541
- For sub-task 4 we have interesting results for all
542
- the languages, even for Spanish and Portuguese,
543
- as we were not sure that we could learn this task
544
- in a supervised fashion with the amount of data
545
- available. In a further study, we could compare our
546
- results with results obtained by fine-tuning mono-
547
- lingual models, where we fine-tune one model for
548
each language with only the data of that language. This would show whether, when little data is available, using a multilingual model instead of several monolingual models improves the results. We do not
551
- expect good results for Spanish and Portuguese as
552
- the training dataset is pretty limited. The results
553
seem to support the claim of Pires et al. (2019)
554
- that M-BERT works well for few-shot learning on
555
- other languages.
556
- The other question for sub-task 4 was about in-
557
- stability. In Table 3 we can see that the instability is
558
- way more pronounced for Spanish and Portuguese.
559
- It seems logical as we have fewer data available
560
- in Spanish and Portuguese than in English. The
561
- standard deviation for Spanish and Portuguese is
562
- large and can have a real impact on the final re-
563
- sults. Finding good seeds could help to improve
564
- the results for Spanish and Portuguese.
565
- Furthermore, our approach of behavioral fine-
566
- tuning did not help to reduce the instabilities. It
567
- was expected that one of the sources of the insta-
568
- bility is the initialization of the prediction, and in
569
- our approach, the initialization of this layer is still
570
- random. In our approach, we only fine-tune the
571
- weights of M-BERT. This does not seem to work
572
- and reinforces the advice of Ruder (2021) that us-
573
- ing behavioral fine-tuning is more useful for having
574
- a good initialization of the final prediction layer.
575
- On the two sources of randomness we studied,
576
- data order seems the most impactful for English,
577
- where we have more data. Nonetheless, for Span-
578
- ish and Portuguese, the two sources have a large
579
- impact. In a further study, we could see how the
580
- quantity of data helps to decrease the impact of
581
- these sources of instabilities.
582
- For the final submissions, the macro-F1 score
583
- for English and Portuguese is beneath the average
584
- macro-F1 score we found during our development
585
- phases. This could be due to bad seeds for random-
586
- ness or because the splits are different. We did not
587
- try to find the best-performing seeds for the final
588
- submissions.
589
- Acknowledgments
590
- We thank Damien Fourrure, Arnaud Jacques, Guil-
591
- laume Stempfel and our anonymous reviewers for
592
- their helpful comments.
593
- References
594
- James Bergstra, Daniel Yamins, and David Cox. 2013.
595
- Making a science of model search: Hyperparameteroptimization in hundreds of dimensions for vision ar-
596
- chitectures. In International conference on machine
597
- learning , pages 115–123. PMLR.
598
- Samuel Broscheit. 2019. Investigating entity knowl-
599
- edge in BERT with simple neural end-to-end en-
600
- tity linking. In Proceedings of the 23rd Confer-
601
- ence on Computational Natural Language Learning
602
- (CoNLL) , pages 677–685, Hong Kong, China. Asso-
603
- ciation for Computational Linguistics.
604
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
605
- Kristina Toutanova. 2019. Bert: Pre-training of
606
- deep bidirectional transformers for language under-
607
- standing. In Proceedings of the 2019 Conference of
608
- the North American Chapter of the Association for
609
- Computational Linguistics: Human Language Tech-
610
- nologies, Volume 1 (Long and Short Papers) , pages
611
- 4171–4186.
612
- Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali
613
- Farhadi, Hannaneh Hajishirzi, and Noah Smith.
614
- 2020. Fine-tuning pretrained language models:
615
- Weight initializations, data orders, and early stop-
616
- ping. arXiv preprint arXiv:2002.06305 .
617
- Kai Hakala and Sampo Pyysalo. 2019. Biomedical
618
- named entity recognition with multilingual BERT.
619
- InProceedings of The 5th Workshop on BioNLP
620
- Open Shared Tasks , pages 56–61, Hong Kong,
621
- China. Association for Computational Linguistics.
622
- Dan Hendrycks and Kevin Gimpel. 2016. Gaus-
623
- sian error linear units (gelus). arXiv preprint
624
- arXiv:1606.08415 .
625
- Ali H ¨urriyeto ˘glu, Osman Mutlu, Farhana Ferdousi
626
- Liza, Erdem Y ¨or¨uk, Ritesh Kumar, and Shyam
627
- Ratan. 2021a. Multilingual protest news detection
628
- - shared task 1, case 2021. In Proceedings of the 4th
629
- Workshop on Challenges and Applications of Auto-
630
- mated Extraction of Socio-political Events from Text
631
- (CASE 2021) , online. Association for Computational
632
- Linguistics (ACL).
633
- Ali H ¨urriyeto ˘glu, Hristo Tanev, Vanni Zavarella, Jakub
634
- Piskorski, Reyyan Yeniterzi, and Erdem Y ¨or¨uk.
635
- 2021b. Challenges and applications of automated
636
- extraction of socio-political events from text (case
637
- 2021): Workshop and shared task report. In
638
- Proceedings of the 4th Workshop on Challenges
639
- and Applications of Automated Extraction of Socio-
640
- political Events from Text (CASE 2021) , online. As-
641
- sociation for Computational Linguistics (ACL).
642
- Ali H ¨urriyeto ˘glu, Erdem Y ¨or¨uk, Deniz Y ¨uret, C ¸ a ˘grı
643
- Yoltar, Burak G ¨urel, Fırat Durus ¸an, Osman Mutlu,
644
- and Arda Akdemir. 2019. Overview of clef 2019
645
- lab protestnews: Extracting protests from news
646
- in a cross-context setting. In Experimental IR
647
- Meets Multilinguality, Multimodality, and Interac-
648
- tion, pages 425–432, Cham. Springer International
649
- Publishing.Ali H ¨urriyeto ˘glu, Vanni Zavarella, Hristo Tanev, Er-
650
- dem Y ¨or¨uk, Ali Safaya, and Osman Mutlu. 2020.
651
- Automated extraction of socio-political events from
652
- news (AESPEN): Workshop and shared task report.
653
- InProceedings of the Workshop on Automated Ex-
654
- traction of Socio-political Events from News 2020 ,
655
- pages 1–6, Marseille, France. European Language
656
- Resources Association (ELRA).
657
- Richard Liaw, Eric Liang, Robert Nishihara, Philipp
658
- Moritz, Joseph E Gonzalez, and Ion Stoica.
659
- 2018. Tune: A research platform for distributed
660
- model selection and training. arXiv preprint
661
- arXiv:1807.05118 .
662
- Zachary C Lipton, Charles Elkan, and Balakrishnan
663
- Naryanaswamy. 2014. Optimal thresholding of clas-
664
- sifiers to maximize f1 measure. In Joint European
665
- Conference on Machine Learning and Knowledge
666
- Discovery in Databases , pages 225–239. Springer.
667
- Ilya Loshchilov and Frank Hutter. 2019. Decoupled
668
- weight decay regularization. In International Con-
669
- ference on Learning Representations .
670
- Marius Mosbach, Maksym Andriushchenko, and Diet-
671
- rich Klakow. 2021. On the stability of fine-tuning
672
- fbertg: Misconceptions, explanations, and strong
673
- baselines. In International Conference on Learning
674
- Representations .
675
- Hiroki Nakayama. 2018. seqeval: A python framework
676
- for sequence labeling evaluation. Software available
677
- from https://github.com/chakki-works/seqeval.
678
- Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
679
- How multilingual is multilingual bert? In Proceed-
680
- ings of the 57th Annual Meeting of the Association
681
- for Computational Linguistics , pages 4996–5001.
682
- Lance Ramshaw and Mitch Marcus. 1995. Text chunk-
683
- ing using transformation-based learning. In Third
684
- Workshop on Very Large Corpora .
685
- Sebastian Ruder. 2021. Recent Advances in Lan-
686
- guage Model Fine-tuning. http://ruder.io/
687
- recent-advances-lm-fine-tuning .
688
Diana Santos, Nuno Seco, Nuno Cardoso, and Rui Vilela. 2006. HAREM: An advanced NER evaluation contest for Portuguese. In Nicoletta Calzolari, Khalid Choukri, Aldo Gangemi, Bente Maegaard, Joseph Mariani, Jan Odjik, and Daniel Tapias, editors, Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy, 22-28 May 2006.
696
- Erik F. Tjong Kim Sang. 2002. Introduction to the
697
- CoNLL-2002 shared task: Language-independent
698
- named entity recognition. In COLING-02: The
699
- 6th Conference on Natural Language Learning 2002
700
- (CoNLL-2002) .
701
- Erik F. Tjong Kim Sang and Fien De Meulder.
702
- 2003. Introduction to the CoNLL-2003 shared task:
703
- Language-independent named entity recognition. InProceedings of the Seventh Conference on Natu-
704
- ral Language Learning at HLT-NAACL 2003 , pages
705
- 142–147.
706
- Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
707
- Chaumond, Clement Delangue, Anthony Moi, Pier-
708
- ric Cistac, Tim Rault, R ´emi Louf, Morgan Funtow-
709
- icz, Joe Davison, Sam Shleifer, Patrick von Platen,
710
- Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
711
- Teven Le Scao, Sylvain Gugger, Mariama Drame,
712
- Quentin Lhoest, and Alexander M. Rush. 2020.
713
- Transformers: State-of-the-art natural language pro-
714
- cessing. In Proceedings of the 2020 Conference on
715
- Empirical Methods in Natural Language Processing:
716
- System Demonstrations , pages 38–45, Online. Asso-
717
- ciation for Computational Linguistics.
718
- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V .
719
- Le, Mohammad Norouzi, Wolfgang Macherey,
720
- Maxim Krikun, Yuan Cao, Qin Gao, Klaus
721
- Macherey, Jeff Klingner, Apurva Shah, Melvin John-
722
- son, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws,
723
- Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith
724
- Stevens, George Kurian, Nishant Patil, Wei Wang,
725
- Cliff Young, Jason Smith, Jason Riesa, Alex Rud-
726
- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes,
727
- and Jeffrey Dean. 2016. Google’s neural machine
728
- translation system: Bridging the gap between human
729
- and machine translation. CoRR , abs/1609.08144.
730
- A Appendix
731
- A.1 Hyper-parameter search
732
The search space for our hyper-parameter search was:
734
- •Number of training epochs: value in [20, 25,
735
- 30, 40],
736
- •Weight decay: uniform distribution between
737
- 0.001 and 1,
738
- •Learning rate: value in [1e-5, 2e-5, 3e-5, 4e-5,
739
- 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8],
740
- • Adafactor: value in ”True”, ”False”,
741
- •Adam beta 1: uniform distribution between 0
742
- and 1,
743
- •Adam beta 2: uniform distribution between 0
744
- and 1,
745
- •Epsilon: value in [1e-8, 2e-8, 3e-8, 1e-9, 2e-9,
746
- 3e-10],
747
- •Maximum gradient norm: uniform distribu-
748
tion between 0 and 1.
For the HyperOpt algorithm we used two sets
749
- of hyper-parameters to help finding a good sub-
750
- space. We maximized the macro-F1 on the evalu-
751
- ation dataset, and set the number of initial points
752
- before starting the algorithm to 5.
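For reference, the search space above maps to a Ray Tune configuration along these lines (a sketch only: the training function, the metric name and the exact import path depend on the Ray version used):

from ray import tune
from ray.tune.suggest.hyperopt import HyperOptSearch  # newer Ray versions use ray.tune.search.hyperopt

config = {
    "num_train_epochs": tune.choice([20, 25, 30, 40]),
    "weight_decay": tune.uniform(0.001, 1.0),
    "learning_rate": tune.choice([1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8]),
    "adafactor": tune.choice([True, False]),
    "adam_beta1": tune.uniform(0.0, 1.0),
    "adam_beta2": tune.uniform(0.0, 1.0),
    "adam_epsilon": tune.choice([1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10]),
    "max_grad_norm": tune.uniform(0.0, 1.0),
}

search_alg = HyperOptSearch(metric="eval_macro_f1", mode="max", n_initial_points=5)
analysis = tune.run(train_model, config=config, search_alg=search_alg, num_samples=30)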
txt/2107.03444.txt DELETED
@@ -1,1256 +0,0 @@
1
- Keep it Simple: Unsupervised Simplification of Multi-Paragraph Text
2
Philippe Laban (UC Berkeley), Tobias Schnabel (Microsoft), Paul N. Bennett (Microsoft), Marti A. Hearst (UC Berkeley)
7
- Abstract
8
- This work presents Keep it Simple (KiS), a
9
- new approach to unsupervised text simplifica-
10
- tion which learns to balance a reward across
11
- three properties: fluency, salience and simplic-
12
- ity. We train the model with a novel algorithm
13
- to optimize the reward ( k-SCST), in which
14
- the model proposes several candidate simpli-
15
- fications, computes each candidate’s reward,
16
- and encourages candidates that outperform the
17
- mean reward. Finally, we propose a realis-
18
- tic text comprehension task as an evaluation
19
- method for text simplification. When tested on
20
- the English news domain, the KiS model out-
21
- performs strong supervised baselines by more
22
- than 4 SARI points, and can help people com-
23
- plete a comprehension task an average of 18%
24
- faster while retaining accuracy, when com-
25
- pared to the original text.
26
- 1 Introduction
27
- The main objective of text simplification is to make
28
- a complex text accessible to a wide audience by
29
- increasing its readability. In contrast with text sum-
30
- marization – in which key content is selected to
31
- remain in the summary and other content is elided
32
- – in text simplification, ideally all relevant content
33
- is preserved.
34
- We propose that text simplification algorithms
35
- need to balance three properties: (1) fluency : the
36
- simplified text should use well-formed English sen-
37
- tences, (2) salience : the simplified text should relay
38
- the same information as the original, and (3) sim-
39
- plicity : the simplified text should be syntactically
40
- and lexically simpler than the original.
41
- Figure 1 provides intuition for the necessity of
42
- each of the three properties. It shows the origi-
43
- nal text and the output of the full proposed model
44
- compared to three reduced versions:
45
Author emails: {phillab, hearst} [email protected], {Tobias.Schnabel, Paul.N.Bennett} [email protected]
47
- Original: NASA's Curiosity rover just celebrated a major
48
- milestone — 3,000 days on the surface of Mars. T o mark the
49
- occasion, the space agency has released a stunning new
50
- panorama of the red planet, captured by the rover .
51
- Model Full: NASA's Curiosity rover has now passed 3,000
52
- days of travel on the surface of Mars. T o mark the milestone,
53
- the space agency released a huge panorama of Mars, as
54
- seen by the rover .
55
- Model No Fluency: NASA's Curiosity rover . celebrated. A
56
- major milestone — 3,000 days on. The of.. T o mark. The
57
- space agency has. a stunning new panorama.. red planet.
58
- captured by . The rover . However
59
- Model No Salience: NASA's Curiosity rover just celebrated a
60
- major milestone. The space agency has released a stunning
61
- new panoramic of the red planet, captured by the team. It
62
- was by the rover's panoramic camera.
63
- Model No Simplicity: NASA's Curiosity rover has celebrated
64
- a major milestone, 3,000 days on the ground of Mars. T o
65
- mark the occasion, the space agency has unveiled a stunning
66
new panoramic view of the red planet, captured by the rover.
Figure 1: Motivating example for the KiS method,
67
- based on a CBS article (Lewis, 2021). We optimize a
68
- three-component reward: fluency, salience and simplic-
69
- ity. We show model outputs when trained with all three
70
- components, and with a missing component.
71
- Without Fluency , the generator has no incen-
72
- tive to generate full sentences, and learns it can
73
- boost the simplicity score by generating short
74
- phrases with excessive punctuation.
75
- Without Salience , the generator does not gain
76
- by covering facts in the original text, and can im-
77
- prove the simplicity score by learning to remove
78
- facts (e.g., not mentioning planet Mars by name).
79
- Without Simplicity , the generator is not guided
80
- to favor syntactically and lexically simpler re-
81
- writes. In Figure 1, Model No Simplicity is in fact
82
- more complex than the original according to read-
83
- ability measures.
84
- As we show in the related work section (Sec-
85
- tion 2), there are no high-quality, large datasets
86
- publicly released for text simplification. In this
87
- work, we build on recent progress of reinforcement
88
learning (RL)-based training of text generators: we formulate a reference-free reward for text simplifi-
89
- cation and directly optimize it, circumventing the
90
- need for aligned data.
91
- Our main contribution is the Keep it Simple
92
- (KiS) procedure, a novel unsupervised method for
93
- text simplification. Applied to the English news do-
94
- main, KiS outperforms several supervised models
95
- on common simplification metrics such as SARI
96
- (Xu et al., 2016) and the Flesch-Kincaid Grade
97
- Level (Kincaid et al., 1975).
98
- A second contribution is a new algorithm for RL-
99
- based training of text generators, k-SCST, which
100
- is an extension of Self-Critical Sequence Training
101
- (Rennie et al., 2017). For each input, we generate
102
k sampled outputs (vs. 2 in SCST), and use the
103
- mean population reward as a baseline. We show in
104
- Section 4 that in our domain, k-SCST outperforms
105
- models trained with SCST.
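As an illustration, the k-SCST update reduces to a policy-gradient step with the mean reward of the k candidates as baseline. The following PyTorch sketch assumes that sampling the k candidates and computing their rewards happen elsewhere; it is an illustration, not the released implementation.

import torch

def k_scst_loss(seq_logprobs, rewards):
    # seq_logprobs: (k,) summed log-probabilities of k sampled simplifications of one input.
    # rewards: (k,) total reward of each candidate (fluency, salience, simplicity, guardrails).
    rewards = rewards.detach()
    baseline = rewards.mean()          # mean population reward as baseline
    advantage = rewards - baseline     # candidates above the mean are encouraged
    return -(advantage * seq_logprobs).mean()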
106
- A third contribution is a novel evaluation method
107
- for text simplification. Based on the assumption
108
- that simplified text should enable faster reading
109
- with better understanding, we propose a realistic
110
- Text Comprehension task. We show that people
111
- reading texts simplified by KiS are able to complete
112
- comprehension tasks faster than comparison texts.
113
- Another departure from previous work is that we
114
- work with paragraphs as units of text. Most work
115
- in text simplification is done at the sentence level,
116
- despite work such as Zhong et al. (2020) showing
117
- that common simplification phenomena occur at
118
- the level of the paragraph, (e.g., the deletion, inser-
119
- tion or re-ordering of full sentences). Specifically,
120
- we train our models to simplify full paragraphs,
121
- and evaluate our models in a human evaluation on
122
- short documents (i.e., 3-4 paragraphs).
123
- Through rigorous empirical evaluation, we
124
- demonstrate the strong performance of our ap-
125
- proach; automated results show that this unsuper-
126
- vised approach is able to outperform strong su-
127
- pervised models by 4 SARI points or more. We
128
- publicly released the code and model checkpoints1.
129
- 2 Related Work
130
- Simplification Datasets. Early datasets were first
131
- based on Simple Wikipedia2: WikiSmall (Zhu
132
- et al., 2010), later expanded into WikiLarge (Zhang
133
- and Lapata, 2017). Xu et al. (2015) show there are
134
- quality concerns with Simple Wikipedia datasets,
135
1 https://github.com/tingofurro/keep_it_simple
2 https://simple.wikipedia.org/
and propose Newsela3 as a replacement. Newsela
138
- is a project led by educators re-writing news ar-
139
- ticles targeting different school grade levels. We
140
- view Newsela as the gold-standard for our work,
141
- and use the public Newsela release of 1,911 groups
142
- of articles to design and evaluate our work. Us-
143
- ing a coarse paragraph alignment algorithm, we
144
- extract 40,000 paired simple/complex paragraphs
145
- targeting a separation of 4 grade levels. We call
146
- this dataset the paired Newsela dataset , which we
147
- use for analysis and baseline training.
148
- Seq2Seq for Simplification . Text simplifica-
149
- tion is most commonly framed as a sequence-to-
150
- sequence (seq2seq) task, leveraging model archi-
151
- tectures of other seq2seq tasks, such as natural ma-
152
- chine translation (Zhu et al., 2010; Wubben et al.,
153
- 2012). Martin et al. (2020) introduce ACCESS, a
154
- finetuned Transformer model that achieves state-
155
- of-the-art performance on WikiLarge. ACCESS
156
- can customize simplifications on parameters such
157
- as compression rate and paraphrase amount. We
158
- directly compare our approach to ACCESS.
159
- Data availability remains one of the main lim-
160
- itations to seq2seq-based text simplification. We
161
- side-step this issue entirely by working with unsu-
162
- pervised data, only requiring a small dataset with
163
- coarse-level alignments for calibration.
164
- Lexical Simplification focuses on the substi-
165
- tution of single words or phrases with simpler
166
- equivalents, with diverse approaches using lexical
167
- databases such as WordNet (Thomas and Anderson,
168
- 2012), to using contextualized word vectors (Qiang
169
- et al., 2020). These methods tend to be limited, as
170
- they do not consider syntactic complexity, and have
171
- no direct way of modeling deletions and insertions.
172
- We incorporate a lexical score ( LScore ) as one of
173
- the rewards in our simplicity component.
174
- Text-edit for Simplification . Recent work
175
- (Dong et al., 2019; Stahlberg and Kumar, 2020)
176
- has modeled text simplification as a text-edit task,
177
- learning sequences of word-edits that transform the
178
- input into the output. Text editing offers explain-
179
- ability, at the cost of added model complexity. We
180
- find that without explicitly representing edits, the
181
- KiS model easily learns to copy (using attention
182
- heads) and deviate from the original text. Outputs
183
- can be post-processed into edits, if desired.
184
- Unsupervised Simplification has mostly been
185
- limited to lexical simplification. Recently Surya
186
- et al. (2019) (Unsup NTS) proposed a system that
187
3 https://newsela.com/
can perform both lexical and syntactic simplifica-
188
- tion, with a joint encoder, and two decoders (simple
189
- and complex). We directly compare our unsuper-
190
- vised approach to Unsup NTS.
191
- RL for Simplification . Prior work (Zhang and
192
- Lapata, 2017; Guo et al., 2018) used Reinforce-
193
- ment Learning (RL)-based simplification. How-
194
- ever, in both cases, components of the reward or
195
- training procedure involved reference simplifica-
196
- tions, requiring an aligned dataset. By designing
197
- a reference-free reward, we are able to train our
198
- model with RL without supervision.
199
- Evaluation of Simplification . This usually falls
200
- into two categories: automatic offline evaluation,
201
- and human evaluation. Automatic evaluations usu-
202
- ally involve using n-gram overlap calculations such
203
- as BLEU (Papineni et al., 2002) and SARI (Xu
204
- et al., 2016)). SARI was shown to correlate better
205
- with human judgements of simplicity than BLEU,
206
- and it has since become a standard (Zhang and Lap-
207
- ata, 2017; Surya et al., 2019; Martin et al., 2020). In
208
- our experiments, we report both SARI and BLEU.
209
- Human evaluation is typically done in an intrin-
210
- sicway – e.g., by directly rating factors like fluency,
211
- simplicity and relevance of model outputs (Surya
212
- et al., 2019; Wubben et al., 2012). In this work,
213
- we propose an extrinsic, task-based protocol. In
214
- our comprehension study, we directly measure how
215
- much simplified texts can help a human reader an-
216
- swer questions more efficiently. The closest to our
217
- evaluation design is that of Angrosh et al. (2014)
218
- with the important difference that we require par-
219
- ticipants to resubmit after erroneous answers. In
220
- pilot studies, we found this step to be crucial for
221
- high-quality responses.
222
- 3 KiS Components
223
- In KiS, we approach unsupervised simplification as
224
- a (non-differentiable) reward maximization prob-
225
- lem. As shown in Figure 2, there are four compo-
226
- nents to the reward: simplicity, fluency, salience
227
- and guardrails which are jointly optimized. This
228
- is essential to avoid trivial solutions that only con-
229
- sider subsets. We therefore use the product of all
230
- components as the total reward, because the prod-
231
- uct is sensitive to the sharp decrease of a single
232
- component. For example, the triggering of a single
233
- guardrail leads to the zeroing of the total reward.
234
- Each component is normalized to the [0;1]range.
235
[Figure 2 diagram: the Original Text is fed to the Generator, which produces a Simplified Text; the output is scored for Fluency, Simplicity and Salience, checked by Guardrails, and the resulting Score drives the Optimization.]
Figure 2: Keep it Simple is an unsupervised training procedure for text simplification. The text generator (GPT-2) produces candidate simplifications, scored according to fluency, simplicity, salience. Guardrails enforce the model does not learn high-scoring shortcuts.
248
def S_Score(original, simple):
    # fkgl computes the Flesch-Kincaid grade level; clip bounds a value to [0, 1]
    # (e.g., textstat.flesch_kincaid_grade and numpy.clip -- assumed helpers).
    Fstart = fkgl(original)
    tgt = target_delta(Fstart)
    Fend = fkgl(simple)
    D = Fstart - Fend   # achieved drop in grade level
    # linear ramp, highest when the achieved drop matches the target (see text)
    return clip(1 - abs(D - tgt) / tgt, 0, 1)

def target_delta(Fstart):
    # Line-fitted from analysis
    if Fstart < 4.0:
        return 0.1
    if Fstart < 12:
        return 0.5 * Fstart - 1.9
    return 0.8 * Fstart - 5.6
261
- Figure 3: SScore algorithm. fkgl computes the
262
- Flesch-Kincaid grade level.
263
- 3.1 Simplicity
264
- The simplicity score should establish whether the
265
- generator’s output uses simpler language than the
266
- original text. We follow prior work (Ferr ´es et al.,
267
- 2016) and organize our score into a syntactic score
268
- SScore , and a lexical score LScore . Syntactic sim-
269
- plification focuses on reducing the complexity of a
270
- sentence, for example by reducing the number of
271
- words in a clause, or reducing distant dependencies.
272
- In lexical simplification, the objective is to replace
273
- complex phrases with simpler synonyms. To pro-
274
- duce a single simplicity score, we take the product
275
- ofSScore andLScore (both in [0;1]).
276
- 3.1.1 Syntactic Simplicity: SScore
277
- We measure syntactic complexity via the Flesch-
278
- Kincaid grade level (FKGL) as it is easy to compute
279
- and maps to a grade-level which also corresponds
280
- to the scale used by Newsela. Other readability met-
281
- rics such as Dale-Chall formula (Dale and Chall,
282
- 1948), or the Gunning-Fog index (Gunning, 1969)
283
- could be used, and future work could examine the
284
effect of choosing one readability metric over the
[Figure 4 plot: x-axis: FKGL of original paragraph; y-axis: \Delta FKGL in Newsela rewrite; a fitted linear approximation is overlaid.]
Figure 4: Analysis (Kernel Density Estimate plot)
288
- of change in Flesch-Kincaid Grade Level in the
289
- paired Newsela dataset. Most simple paragraphs have
290
lower FKGL than the original paragraphs (positive \Delta FKGL). When the original paragraph’s FKGL is
292
- higher (x-axis), the change in FKGL tends to be larger
293
- (y-axis). We fit a linear approximation, which we use
294
- to compute the Sscore .
295
- other. Another viable option is the Lexile score
296
- (Smith et al., 2016), however, because its imple-
297
- mentation is not publicly released, we cannot use it
298
- during training and we report it only for evaluation
299
- (done manually on the Lexile Hub4).
300
- Figure 3 shows the SScore algorithm. We com-
301
- pute the original paragraph’s FKGL ( FStart ),
302
- used to compute a target FKGL ( tgt). The score
303
- is a linear ramp measuring how close the achieved
304
- FKGL ( Fend ) is to the target, clipped to [0;1].
305
- In the initial design, the target drop was a con-
306
- stant: 4 grade levels, independent of FStart .
307
- However, analysis on the paired Newsela corpus
308
- revealed that the target FKGL should depend on
309
- the initial FKGL. This makes sense intuitively: an
310
- already syntactically simple paragraph should not
311
- require further simplification, while more complex
312
- paragraphs require more simplification. Figure 4
313
- shows the positive correlation between the original
314
- paragraph’s FKGL and the drop of FKGL in the
315
- simplified text. We fit a piece-wise linear function
316
- to calculate the target FKGL drop from the initial
317
- paragraph.
318
- 3.1.2 Lexical Simplicity: LScore
319
- Lexical simplicity focuses on whether words in
320
- the input paragraph ( W1) are more complex than
321
- ones in the output paragraph ( W2). We rely on the
322
- observation that word frequency and difficulty are
323
- correlated (Breland, 1996), and use word frequency
324
- in a large corpus of text (Brysbaert and New, 2009)
325
- to determine simplicity.
326
4 https://hub.lexile.com
Because word frequency follows a Zipf power law, we use Speer et al. (2018)’s log normalization, adjusting the frequency on a [0, 8] range, with words at 0 being non-existent in the corpus, and 8 for most common words. As an example, the word vigorous has a frequency of 3.54, while its more common synonym strong obtains 5.23.
333
- We compute the average Zipf frequency of the
- set of inserted words, Z(W2 \ W1), and of the set
- of deleted words, Z(W1 \ W2). The difference
-
-    ΔZ(W1, W2) = Z(W2 \ W1) − Z(W1 \ W2)    (1)
-
337
- should be positive. Analysis of the paired Newsela
- corpus reveals that 91% of pairs have a positive
- ΔZ(W1, W2), with a median value of 0.4. We use
- this median as the target Zipf shift in the LScore,
- and use a ramp shape similar to the SScore, clipped
- between 0 and 1 (denoted as [·]+):
343
-    LScore(W1, W2) = [ 1 − |ΔZ(W1, W2) − 0.4| / 0.4 ]+    (2)
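- A minimal sketch of this lexical score, assuming W1 and W2 are plain sets of
- tokens and using the wordfreq package (Speer et al., 2018) for Zipf frequencies;
- tokenisation details are left out.
-
- from wordfreq import zipf_frequency
-
- def avg_zipf(words):
-     words = list(words)
-     if not words:
-         return 0.0
-     return sum(zipf_frequency(w.lower(), "en") for w in words) / len(words)
-
- def lscore(w1, w2, target_shift=0.4):
-     # Eq. (1): Zipf shift between inserted and deleted words.
-     delta_z = avg_zipf(set(w2) - set(w1)) - avg_zipf(set(w1) - set(w2))
-     # Eq. (2): ramp around the target shift, clipped to [0, 1].
-     return min(1.0, max(0.0, 1.0 - abs(delta_z - target_shift) / target_shift))
-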
347
- 3.2 Fluency
348
- We use two sub-components for the fluency com-
349
- ponent: a pre-trained language-model, and a dis-
350
- criminator trained dynamically with the generator.
351
- 3.2.1 Language-Model Fluency
352
- Language models assign a probability to a sequence
353
- of words. This probability is often used to measure
354
- fluency of generated text (Kann et al., 2018; Salazar
355
- et al., 2020). The KiS fluency score is based on
356
- a language model in a way similar to Laban
357
- et al. (2020). The language model is used to ob-
358
- tain a likelihood of the original paragraph ( LM(p))
359
- and of the generated output LM(q). We use av-
360
- erage log-likelihood, for numerical stability. The
361
- language model fluency score is then:
362
-    LMScore(p, q) = [ 1 − (LM(p) − LM(q)) / δ ]+    (3)
366
- δ is a tunable hyper-parameter. If LM(q) is
- lower than LM(p) by δ or more, LMScore(p, q) =
- 0. If LM(q) is above or equal to LM(p), then
- LMScore(p, q) = 1, and otherwise, it is a linear
- interpolation.
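- A sketch of this score using GPT-2 from the transformers library to obtain
- average token log-likelihoods; the specific language model used for scoring is
- not stated here, and δ is the threshold discussed in the next paragraph.
-
- import torch
- from transformers import GPT2LMHeadModel, GPT2TokenizerFast
-
- tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
- model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
-
- def avg_log_likelihood(text):
-     ids = tokenizer(text, return_tensors="pt").input_ids
-     with torch.no_grad():
-         # The LM loss is the mean negative log-likelihood per token.
-         loss = model(ids, labels=ids).loss
-     return -loss.item()
-
- def lm_score(p, q, delta=1.3):
-     gap = avg_log_likelihood(p) - avg_log_likelihood(q)
-     return min(1.0, max(0.0, 1.0 - gap / delta))
-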
371
- We set δ = 1.3 as it is the value for which
- the paired Newsela dataset achieves an average
- LMScore of 0.9.
-
- 3.2.2 Discriminator Fluency
374
- The LMScore is static and deterministic, which can
375
- be limiting, as the generator can learn during train-
376
- ing how to adapt and exploit flaws in the language-
377
- model (e.g., learning to alter capitalization).
378
- Inspired from the Generative Adversarial Net-
379
- work (GAN) framework (Goodfellow et al., 2014),
380
- we create a dynamic discriminator, trained in con-
381
- junction with the generator, dynamically adapting
382
- the fluency score during training.
383
- Specifically, we use a RoBERTa model (Liu
384
- et al., 2019) as the basis for the discriminator, a clas-
385
- sifier with two labels: 1 for authentic paragraphs,
386
- and 0 for generator outputs.
387
- As the generator produces outputs, they are as-
388
- signed a label of 0 and added to a training buffer ,
389
- while the original paragraphs are assigned a label
390
- of 1 and added to the training buffer as well.
391
- Once the training buffer reaches a size of 2,000
392
- samples, the discriminator is trained, using 90% of
393
- the training buffer. We train the discriminator for
394
- 5 epochs (details of training are in Appendix A.1).
395
- At the end of each epoch, we checkpoint the dis-
396
- criminator model. We compare the 5 checkpoints
397
- in terms of F-1 performance on the remaining 10%
398
- of the training buffer, and keep the best checkpoint
399
- as the new discriminator.
400
- The discriminator’s probability that a paragraph
401
- (q) is authentic is the discriminator score:
402
-    DScore(q) = p_disc(Y = 1 | X = q)    (4)
403
- As with GANs, there is an equilibrium between
404
- the generator attempting to maximize the proba-
405
- bility of generating real outputs (“fooling” the dis-
406
- criminator), and the discriminator succeeding at
407
- distinguishing generated and authentic texts.
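- A sketch of the discriminator score of Eq. (4); RoBERTa is loaded through the
- transformers library, and the buffer-based retraining loop described above is
- omitted (the untrained classification head below is only a placeholder).
-
- import torch
- from transformers import RobertaForSequenceClassification, RobertaTokenizerFast
-
- tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
- discriminator = RobertaForSequenceClassification.from_pretrained(
-     "roberta-base", num_labels=2
- ).eval()
-
- def d_score(paragraph):
-     inputs = tokenizer(paragraph, return_tensors="pt", truncation=True)
-     with torch.no_grad():
-         logits = discriminator(**inputs).logits
-     # Probability of label 1 ("authentic"), as in Eq. (4).
-     return torch.softmax(logits, dim=-1)[0, 1].item()
-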
408
- 3.3 Salience
409
- For the salience component, we use the coverage
410
- model introduced in the summary loop (Laban
411
- et al., 2020) for the domain of text summarization,
412
- and adapt it to the simplification domain.
413
- The coverage model is a Transformer-based
414
- model trained to look at generated text and answer
415
- fill-in-the-blank questions about the original text.
416
- The score is based on model accuracy at filling in
417
- the blanks: the more is filled in, the more relevant
418
- the generated content is, and the higher the score.
419
- A key element of the coverage model is its mask-
420
- ing procedure, which decides which words to mask.
421
- In the summary loop, a limited number of extracted keywords (up to 15 words) are masked. By contrast,
422
- for simplification, we mask all non-stop words,
423
- amounting to a masking rate of about 40%.
424
- This change reflects a difference in expectation
425
- between summarization and simplification: in sum-
426
- marization, only key components are expected to
427
- be recovered from a summary, whereas in simpli-
428
- fication most of the original paragraph should be
429
- recoverable. Coverage ranges in [0, 1], and refer-
430
- ence simplifications in the paired Newsela corpus
431
- obtain an average score of 0.76, confirming that
432
- manual simplification can achieve high coverage.
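- The masking step can be sketched as follows, with spaCy providing the stop-word
- information; the fill-in-the-blank model itself is left abstract (a callable),
- since its architecture comes from the summary loop rather than this excerpt.
-
- import spacy
-
- nlp = spacy.load("en_core_web_sm")
- MASK = "[MASK]"
-
- def mask_non_stop_words(original):
-     doc = nlp(original)
-     masked, answers = [], []
-     for tok in doc:
-         if tok.is_alpha and not tok.is_stop:
-             masked.append(MASK)
-             answers.append(tok.text)
-         else:
-             masked.append(tok.text)
-     return " ".join(masked), answers
-
- def coverage_score(original, generated, fill_in_model):
-     masked, answers = mask_non_stop_words(original)
-     # fill_in_model is assumed to return one guess per [MASK],
-     # conditioned on the generated (simplified) paragraph.
-     guesses = fill_in_model(masked, generated)
-     correct = sum(g == a for g, a in zip(guesses, answers))
-     return correct / max(1, len(answers))
-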
433
- 3.4 Guardrails
434
- We use guardrails as simple pattern-based scores to
435
- avoid common pathological generation problems
436
- that we observed. Unlike the main components,
437
- guardrails are binary, giving a score of 1 (pass) un-
438
- less they trigger (score of 0). We use two guardrails:
439
- brevity and inaccuracy.
440
- 3.4.1 Brevity guardrail
441
- The brevity guardrail ensures the length of gen-
442
- erated paragraph (L2) falls in a range around the
- original paragraph's length (L1). We compute a
- compression ratio C = L2 / L1. If Cmin ≤ C ≤
- Cmax, the guardrail passes, otherwise it triggers.
- We set [Cmin, Cmax] = [0.6, 1.5], because these
447
- values ensure the guardrail is not triggered on 98%
448
- of the paired Newsela dataset; this can be adapted
449
- depending on the application.
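- The brevity guardrail amounts to a few lines; lengths are taken as word counts
- here, which is an assumption.
-
- def brevity_guardrail(original, generated, c_min=0.6, c_max=1.5):
-     c = len(generated.split()) / max(1, len(original.split()))
-     return 1 if c_min <= c <= c_max else 0
-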
450
- 3.4.2 Inaccuracy guardrail
451
- Modern text generation models are known to hallu-
452
- cinate facts (Huang et al., 2020), which has led the
453
- community to create models to detect and correct
454
- hallucinations (Cao et al., 2020; Zhang et al., 2020;
455
- Wang et al., 2020).
456
- We propose a light-weight inaccuracy detector
457
- as a guardrail. We use a Named Entity Recognition
458
- (NER) model (Honnibal et al., 2020) to extract
459
- entities present in the original paragraph (E1) and
- the model's output (E2). We trigger the guardrail
- if an entity present in E2 is not in E1.
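- A sketch of this guardrail with spaCy's NER; entities are compared by surface
- string, which is an assumption, as the matching criterion is not spelled out here.
-
- import spacy
-
- nlp = spacy.load("en_core_web_sm")
-
- def inaccuracy_guardrail(original, generated):
-     e1 = {ent.text for ent in nlp(original).ents}
-     e2 = {ent.text for ent in nlp(generated).ents}
-     # Trigger (score 0) if the output mentions an entity absent from the input.
-     return 0 if e2 - e1 else 1
-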
462
- Even though human writers can successfully in-
463
- troduce new entities without creating inaccuracies
464
- (e.g., replacing the city La Paz with the country Bo-
465
- livia), we find that text generators predominantly
466
- introduce inaccuracies with novel entities. This
467
- simple heuristic can eventually be replaced once
468
- inaccuracy detection technology matures.
-
- Figure 5: Training KiS models comparing SCST with k-SCST (x-axis: Hours of
- Training; y-axis: Total Score, log scale; curves: SCST, 4-SCST, 6-SCST, 8-SCST).
- We try 4, 6 and 8 as values for k. Increasing k improves performance and stability.
479
- 4 KiS Training
480
- Rennie et al. (2017) introduced Self-Critical Se-
481
- quence Training (SCST) as an effective algorithm
482
- for reward-based training of text generators, suc-
483
- cessfully applying it to image captioning. The effi-
484
- cacy of SCST was later confirmed on other text gen-
485
- eration tasks such as question generation (Zhang
486
- and Bansal, 2019), and summarization (Celikyil-
487
- maz et al., 2018; Laban et al., 2020). In SCST, a
488
- probabilistic model is used to generate two distinct
- candidates: C^S, a candidate constructed by sam-
- pling the word distribution at each step, and Ĉ, by
- taking the argmax of the word distribution at each
- step. Each candidate is scored, obtaining rewards
- of R^S and R̂, respectively, and the loss is:
-
-    L = (R̂ − R^S) Σ_{i=0}^{N} log p(w_i^S | w_1^S … w_{i−1}^S, P)    (5)
-
499
- where p(w_i^S | …) represents the probability of the
501
- i-th word conditioned on previously generated sam-
502
- pled sequence according to the model, P is the input
503
- paragraph, and N the number of words in the gen-
504
- erated sequence. Intuitively, minimizing this loss
505
- increases the likelihood of the sampled sequence if
506
- R^S > R̂, and decreases it otherwise, both increas-
507
- ing the expected total reward.
508
- One limitation in SCST occurs when the two
509
- sequences achieve comparable rewards (R^S ≈ R̂):
510
- the loss nears zero, and the model has little to learn,
511
- wasting a training sample. In our experiments with
512
- SCST, this can occur with 30% of samples.
513
- We propose an extension of SCST, which we
- call k-SCST. We generate k sampled candidates
- (k > 2), compute the rewards of each candidate,
- R^{S_1}, …, R^{S_k}, as well as the mean reward achieved
- by this sampled population, R̄^S = (R^{S_1} + … + R^{S_k}) / k,
- which we use as the baseline, instead of R̂.
- The loss L becomes:
-
-    L = Σ_{j=1}^{k} (R̄^S − R^{S_j}) Σ_{i=0}^{N} log p(w_i^{S_j} | w_1^{S_j} … w_{i−1}^{S_j}, P)    (6)
-
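- A sketch of the k-SCST loss in PyTorch, assuming the per-candidate rewards and
- the summed token log-probabilities of each sampled candidate have already been
- computed; names and shapes are illustrative.
-
- import torch
-
- def k_scst_loss(rewards, seq_log_probs):
-     # rewards:       shape (k,), total reward of each sampled candidate
-     # seq_log_probs: shape (k,), sum over tokens of log p(w_i | w_1..w_{i-1}, P),
-     #                kept on the computation graph
-     baseline = rewards.mean()            # mean reward of the sampled population
-     advantages = baseline - rewards      # (R̄^S − R^{S_j}) as in Eq. (6)
-     return (advantages.detach() * seq_log_probs).sum()
-
- With k = 2 and the argmax candidate's reward used in place of the mean, this
- reduces to an objective of the same shape as the original SCST loss in Eq. (5).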
526
- We use a GPT2-medium for the generator, ini-
527
- tialized with the released pre-trained checkpoint.
528
- Experimental details such as data and optimizer
529
- used are provided in Appendix A.1.
530
- In Figure 5, we show results of a direct compar-
531
- ison of SCST (k = 2) with k-SCST, varying k in
- {4, 6, 8}, while keeping other components of the
533
- training fixed. Because of the variance involved in
534
- RL training, we recorded six independent training
535
- runs for each setting (for a total of 24 runs), and
536
- plot the average reward across runs of a setting, as
537
- well as the standard error of the mean (SEM).
538
- We observe that increasing k leads to higher
- average reward, and less variation in the reward.
- In our setting, k-SCST boosts performance and
- stabilizes training. We use k = 8 in all final models,
- as increasing k further is impractical due to GPU
- memory limitations.
544
- We believe k-SCST's advantage stems from two
545
- factors: first, obtaining a better estimate of the
546
- distribution of rewards by sampling more outputs,
547
- second, by using the mean reward as the baseline,
548
- saving on computation of a separate baseline gener-
549
- ation. We believe k-SCST can also improve learn-
550
- ing in other text generation applications and plan
551
- to pursue this in future work.
552
- 5 Experiments
553
- We present results experimentally validating the
554
- KiS procedure for text simplification. We give re-
555
- sults based on automatic metrics, on a novel human
556
- comprehension task, and from an ablation study.
557
- 5.1 Models Compared
558
- We compare the KiS Model to three strong super-
559
- vised models, and an unsupervised approach.
560
- ACCESS from (Martin et al., 2020), is a state-
561
- of-the-art Transformer model trained on WikiLarge
562
- (300,000 pairs of complex/simple sentences). This
563
- model uses default parameters ( NBChar =0.95,
564
- LevSim =0.75).
565
- ACCESS90 is identical to ACCESS , with dif-
566
- ferent parameters ( NBChar =0.90, LevSim =0.75),
567
- reducing target compression from 95% to 90%,
568
- matching the average compression rate in Newsela.
-
- Model               SARI   BLEU   %FKGL  %Lexile  Comp.  Cov.
- Newsela              -      -      87     79      .918   .754
- Finetune Baseline   .470   .719    68     52      .903   .894
- ACCESS Default      .666   .649    86     63      .958   .805
- ACCESS90            .674   .644    93     64      .921   .789
- Unsup NTS           .677   .535    48     57      .753   .618
- KiS Model           .709   .526   100     72      .852   .640
-
- Table 1: Automatic results on Newsela test-set. SARI
- and BLEU are reference-based metrics. %FKGL and
- %Lexile are percentages of model outputs lowering the
- grade level. Comp. is the average compression ratio (#
- words), and Cov. the output's average coverage score.
580
- Finetune Baseline is a GPT2-medium model
581
- finetuned on the paired Newsela dataset . Large
582
- pre-trained models often perform competitively in
583
- low-resource environments, making this a strong
584
- point of comparison.
585
- Unsup NTS from (Surya et al., 2019) is an unsu-
586
- pervised approach based on successively encoding
587
- and denoising text using a GRU architecture.
588
- Training details for the KiS Model and Finetune
589
- Baseline are in Appendix A.1.
590
- 5.2 Automatic Results
591
- We put aside 500 samples from the paired Newsela
592
- dataset as a test set to compare models on auto-
593
- matic metrics. We compare models on SARI and
594
- BLEU, report the percentage when readability mea-
595
- sures see an improvement in readability: %FKGL,
596
- and %Lexile and compute the average compres-
597
- sion rate (Comp.), and coverage (Cov.). Results are
598
- summarized in Table 1.
599
- The KiS model achieves the highest SARI score
600
- by a margin of 0.04, even though it is an unsuper-
601
- vised approach.
602
- Finetune Baseline achieves the highest BLEU
603
- and salience scores, but lowest SARI score. We
604
- interpret this as showing the model takes the least
605
- risk: high salience, with little simplification.
606
- We observe that all models are able to increase
607
- readability in terms of FKGL and Lexile compared
608
- to original paragraphs. We note that for almost all
609
- models, the percentage is lower for the Lexile mea-
610
- sure than for FKGL, showing that an improvement
611
- in Lexile score is more difficult to achieve than
612
- FKGL. The KiS model achieves an increase in Lex-
613
- ile readability 72% of the time, the closest figure
614
- to 79% of the Newsela human-written reference.
615
- We note that the perfect performance of KiS on
616
- %FKGL could be explained by the fact that FKGL
617
- is a part of a component being optimized ( SScore ),
618
- however Lexile was not.
-
- In terms of compression, the KiS model com-
619
- presses the second most, most likely hurting its
620
- coverage. Adjusting the Brevity guardrail could
621
- encourage the model to compress less. ACCESS90
622
- has the compression rate closest to Newsela refer-
623
- ences, but this only leads to a modest improvement
624
- in SARI when compared to ACCESS.
625
- Overall, the Newsela references achieve the
626
- best percentage of Lexile readability improvement,
627
- while outperforming the KiS model at coverage:
628
- there is still a gap between human-written simplifi-
629
- cations and model-generated ones.
630
- 5.3 Human Comprehension Study
631
- We propose a human comprehension study to evalu-
632
- ate the usefulness of simplification results. Simpli-
633
- fied text should be easier to read than the original
634
- text, while retaining accuracy and understanding.
635
- We design a task to evaluate how well both manual
636
- and automated simplifications achieve this objec-
637
- tive. The main idea is to show readers a text and
638
- ask them to answer multiple-choice questions, eval-
639
- uating the texts based on time and retries needed to
640
- select the correct answer.
641
- 5.3.1 Study Design
642
- Five different versions of each document were
643
- generated as stimuli: the original document, the
644
- Newsela reference, and versions from the three
645
- best-performing methods from the last section:
646
- KiS, Finetune Baseline, and ACCESS. We did not
647
- include Unsup NTS in our analysis, because of its
648
- low performance on %FKGL and %Lexile metrics.
649
- Associated with each document are five manually
650
- generated multiple-choice questions, each with one
651
- or more correct answers and one to four distractors.
652
- The original and the Newsela texts were checked
653
- manually by experimenters to ensure that all allow
654
- for questions to be answered correctly. Crowd-
655
- workers were shown four documents in succession,
656
- in a between-participants design. Order of docu-
657
- ment and stimuli type were randomized. Figure 6
658
- shows two stimuli of a document (original and KiS)
659
- along with the comprehension questions. (The en-
660
- tire set of five stimuli can be found in Figure A2 in
661
- the Appendix.)
662
- After several rounds of pilot testing, we arrived
663
- at the following design choices:
664
- Document theme. We chose recent news arti-
665
- cles involving complex themes (e.g., trajectory of
666
- iceberg) as the source of documents. For news ar-
667
- ticles, recency seems to engage participants, and
- technical terms increase the impact of simplifica-
- tion.
-
- Figure 6: Example Task (from a Washington Post article (Kelati, 2020)) for the
- Comprehension Study. Shown are two of five stimuli: the original document (left,
- ORIGINAL [Lexile Grade 11]) and the KiS model output (right, KIS MODEL [Lexile
- Grade 9]), above the five comprehension questions (bottom). Participants read a
- text and answered the comprehension questions. Average completion time was 160
- seconds (original) and 136 seconds (KiS model output).
712
- Section length. We chose document length of
713
- 3-4 paragraphs (or 200 words), and five compre-
714
- hension questions. Document length should not be
715
- too short (makes some questions trivial), or too long
716
- (adds a retrieval component to the task).
717
- Selection of questions. Questions were gener-
718
- ated via a GPT2 question generation model fine-
719
- tuned on the NewsQA dataset (Trischler et al.,
720
- 2017). We select questions answerable by both
721
- the original and Newsela references, attempting to
722
- have both factoid (answer is entity) and reasoning
723
- questions.
724
- Re-submission until correct. When submitting
725
- answers, participants received feedback on which
726
- were incorrect, and were required to re-submit un-
727
- til all answers were correct. This aligns the ob-
728
- jective of the participant (i.e., finishing the task
729
- rapidly), with the task’s objective (i.e., measuring
730
- participant’s efficiency at understanding). This also
731
- gives a way to discourage participants from “brute-
732
- forcing” the task, re-submitting many combinations
733
- until one works.
734
- We note that some components of the study such
735
- as the choice of document themes and the selection
736
- of comprehension questions are elements that cre-
737
- ate variability in the results. We release the models
738
- used in the study, as well all generated texts that
739
- were evaluated to enable follow-up research and to
740
- aid reproducibility.
-
- Model                  Time (sec)     # Subs.    Comp.   CASpeed
- (a) Original             174.0          4.23      1.0      1.00
- (b) Newsela              163.3          5.10      1.08     1.15
- (c) ACCESS               188.5          6.69      0.96     0.88
- (d) Finetune Baseline    161.0 (c)      4.70      0.97     1.04
- (e) KiS Model            142.6 (a,b,c)  4.10 (c)  0.87     1.06
-
- Table 2: Results of the Human Comprehension Study. We measure average completion
- time (Time), number of submissions (#Subs.), compression ratio (Comp.) and a
- compression-accounted speed-up (CASpeed). Each text version is assigned a label
- (a-e); a label next to a value indicates a statistically significant difference
- (p < 0.05) from that version.
752
- 5.3.2 Study Results
753
- We ran the study on Mechanical Turk, accepting
754
- crowd-workers with 1700+ completed tasks, and
755
- an acceptance rate of 97%+. The study was active
756
- for two weeks in December 2020, and remunerated
757
- participants completing all four sections at a rate of
758
- $10/hour. (Appendix A.2 shows crowd-worker in-
759
- structions and the document/version distributions.)
760
- When removing “brute-forced” submissions (10+
761
- re-submissions), we are left with 244 submissions,
762
- used for result analysis reported in Table 2. (A more
763
- detailed results table is included in Appendix A.4.)
764
- We measure two outcomes: question comple-
765
- tion time (in seconds), and number of submissions
766
- to correctness. We performed a Kruskal-Wallis
767
- test (Kruskal and Wallis, 1952) with a Dunn post-
768
- hoc test (Dunn, 1964) for statistical significance
769
- between pairs of conditions.
770
- In line with study objectives, simplified texts help participants complete the task faster than read-
771
- ing original texts, with three of the four simplified
772
- versions leading to improvements in completion
773
- times. Participants were fastest with KiS simpli-
774
- fications (18% faster). The KiS model led to a
775
- statistically significant speed-up compared to the
776
- originals, Newsela references, and ACCESS sim-
777
- plifications. ACCESS simplifications surprisingly
778
- led to a non-significant slow-down, which we at-
779
- tribute to a potential loss in fluency that might have
780
- confused participants.
781
- One important factor we consider is that shorter
782
- passages (i.e., smaller compression) might lead to a
783
- speed-up regardless of simplicity. We confirm this
784
- by finding a small positive correlation between pas-
785
- sage length and completion time of 0.09. We com-
786
- pute a compression-adjusted speed-up (CASpeed )
787
- ratio by: (1) computing the passage length of each
788
- simplified version, (2) linearly extrapolating the ex-
789
- pected completion time for this passage length for
790
- original paragraphs, and (3) computing the ratio of
791
- the extrapolation to the observed completion time.
792
- If CASpeed > 1, participants were faster than ex-
793
- pected for the passage length. Newsela reference
794
- paragraphs achieve the best CASpeed , followed by
795
- the KiS model. This suggests that good simplifica-
796
- tion can involve making texts longer.
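- A sketch of the CASpeed computation in steps (1)-(3) above; a least-squares line
- is used for the extrapolation, which is an assumption, as the exact fitting
- procedure is not given.
-
- import numpy as np
-
- def ca_speed(orig_lengths, orig_times, passage_length, observed_time):
-     # (2) Linear fit of completion time vs. passage length on original passages.
-     slope, intercept = np.polyfit(orig_lengths, orig_times, deg=1)
-     expected_time = slope * passage_length + intercept
-     # (3) Ratio of the extrapolated time to the observed completion time.
-     return expected_time / observed_time
-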
797
- 5.4 Ablation Study
798
- We train three ablated models, each missing a re-
799
- ward component to gain understanding in the value
800
- of each component of the KiS procedure.
801
- Figure 1 gives a qualitative perspective on each
802
- ablation. Without fluency, the generator learns to
803
- generate incomplete sentences, without salience, it
804
- omits important information, and without simplic-
805
- ity, it can sometimes “complexify”.
806
- We computed complete automatic results for the
807
- ablated models, and find that each ablation leads to
808
- a decrease on an evaluation metric, confirming that
809
- all three components are necessary to generate high-
810
- quality simplifications (details in Appendix A.5).
811
- 6 Limitations and Future Work
812
- Improved Accuracy Scoring . The current
813
- guardrail for inaccuracy is rudimentary; trained
814
- models still generate non-factual simplifications.
815
- Recent work in fact-checking for the summariza-
816
- tion domain (Kryscinski et al., 2020; Li et al., 2018)
817
- could be adapted to the simplification domain to
818
- improve this.
-
- Inclusion of Supervised Signal. In this work,
819
- we establish that text simplification can be ap-
820
- proached in an unsupervised manner. In future
821
- work, Keep it Simple could be used as a pre-
822
- training strategy, or used jointly with supervised
823
- training.
824
- Reproducibility of Human Evaluation . Even
825
- though we release the models, stimuli and compre-
826
- hension questions used in the human evaluation,
827
- some elements of the procedure introduce random-
828
- ness. Participating crowd-workers differ in literacy
829
- level which may have an effect on their perfor-
830
- mance at the task (Alonzo et al., 2021).
831
- New Settings, Domains and Languages . We
832
- limited our experiments to the simplification of En-
833
- glish news articles following prior work, but plan
834
- to pursue other languages in the future. Similarly,
835
- because Keep it Simple does not require labeled
836
- data, it can be applied to new settings (e.g., rewrit-
837
- ing to inverse the effects of simplification), or to
838
- new domains (e.g., legal texts).
839
- 7 Conclusion
840
- We have shown that text simplification can be ap-
841
- proached in an unsupervised manner via KiS. By
842
- optimizing a reward comprised of simplicity, flu-
843
- ency and salience components, KiS is able to out-
844
- perform strong supervised models on automatic
845
- metrics (+0.04 in SARI). We propose a human
846
- comprehension task to evaluate the usefulness of
847
- simplification and show that simplifications tend to
848
- lead to a measurable speed-up in task completion,
849
- with KiS texts producing the best speed-up of 18%
850
- on average. These are first steps for unsupervised
851
- text simplification, and we suggest that future work
852
- should focus on adapting the methodology to new
853
- domains (i.e., legal), non-English languages, and
854
- refining optimized rewards to take factuality into
855
- account.
856
- Acknowledgments
857
- We would like to thank Katie Stasaski, Dongyeop
858
- Kang, and the ACL reviewers for their helpful com-
859
- ments, as well as Newsela for providing a version
860
- of their simplified news corpus. This work was
861
- supported by a Microsoft BAIR Commons grant as
862
- well as a Microsoft Azure Sponsorship.
-
- References
863
- Oliver Alonzo, Jessica Trussell, Becca Dingman, and
864
- Matt Huenerfauth. 2021. Comparison of methods
865
- for evaluating complexity of simplified texts among
866
- deaf and hard-of-hearing adults at different literacy
867
- levels. In Proceedings of the 2021 CHI Conference
868
- on Human Factors in Computing Systems , pages 1–
869
- 12.
870
- Mandya Angrosh, Tadashi Nomoto, and Advaith Sid-
871
- dharthan. 2014. Lexico-syntactic text simplification
872
- and compression with typed dependencies. In Pro-
873
- ceedings of COLING 2014, the 25th International
874
- Conference on Computational Linguistics: Tech-
875
- nical Papers , pages 1996–2006, Dublin, Ireland.
876
- Dublin City University and Association for Compu-
877
- tational Linguistics.
878
- H. Breland. 1996. Word frequency and word difficulty:
879
- A comparison of counts in four corpora. Psycholog-
880
- ical Science , 7:96 – 99.
881
- M. Brysbaert and B. New. 2009. Moving beyond
882
- kucera and francis: A critical evaluation of current
883
- word frequency norms and the introduction of a new
884
- and improved word frequency measure for american
885
- english. Behavior Research Methods , 41:977–990.
886
- Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit
887
- Cheung. 2020. Factual error correction for abstrac-
888
- tive summarization models. In Proceedings of the
889
- 2020 Conference on Empirical Methods in Natural
890
- Language Processing (EMNLP) , pages 6251–6258.
891
- Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and
892
- Yejin Choi. 2018. Deep communicating agents for
893
- abstractive summarization. In Proceedings of the
894
- 2018 Conference of the North American Chapter of
895
- the Association for Computational Linguistics: Hu-
896
- man Language Technologies, Volume 1 (Long Pa-
897
- pers) , pages 1662–1675.
898
- Edgar Dale and Jeanne S Chall. 1948. A formula for
899
- predicting readability: Instructions. Educational re-
900
- search bulletin , pages 37–54.
901
- Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and
902
- Jackie Chi Kit Cheung. 2019. Editnts: An neural
903
- programmer-interpreter model for sentence simplifi-
904
- cation through explicit editing. In Proceedings of
905
- the 57th Annual Meeting of the Association for Com-
906
- putational Linguistics , pages 3393–3402.
907
- Olive Jean Dunn. 1964. Multiple comparisons using
908
- rank sums. Technometrics , 6(3):241–252.
909
- Daniel Ferr ´es, Montserrat Marimon, Horacio Saggion,
910
- et al. 2016. Yats: yet another text simplifier. In
911
- International Conference on Applications of Natural
912
- Language to Information Systems , pages 335–342.
913
- Springer.
914
- Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza,
915
- Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad-
916
- versarial nets. In Advances in Neural Information
917
- Processing Systems , volume 27.
918
- Robert Gunning. 1969. The fog index after twenty
919
- years. Journal of Business Communication , 6(2):3–
920
- 13.
921
- Han Guo, Ramakanth Pasunuru, and Mohit Bansal.
922
- 2018. Dynamic multi-level multi-task learning for
923
- sentence simplification. In Proceedings of the 27th
924
- International Conference on Computational Linguis-
925
- tics, pages 462–476.
926
- Matthew Honnibal, Ines Montani, Sofie Van Lan-
927
- deghem, and Adriane Boyd. 2020. spaCy:
928
- Industrial-strength Natural Language Processing in
929
- Python. Doi.org/10.5281/zenodo.1212303.
930
- Dandan Huang, Leyang Cui, Sen Yang, Guangsheng
931
- Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020.
932
- What have we achieved on text summarization? In
933
- Proceedings of the 2020 Conference on Empirical
934
- Methods in Natural Language Processing (EMNLP) ,
935
- pages 446–469.
936
- Katharina Kann, Sascha Rothe, and Katja Filippova.
937
- 2018. Sentence-level fluency evaluation: Refer-
938
- ences help, but can be spared! In Proceedings of
939
- the 22nd Conference on Computational Natural Lan-
940
- guage Learning , pages 313–323.
941
- Haben Kelati. 2020. Librarians find creative ways to
942
- serve kids when buildings are closed for browsing.
943
- The Washington Post .
944
- J. Peter Kincaid, Robert P. Fishburne Jr., Richard L.
945
- Rogers, and Brad S. Chissom. 1975. Derivation of
946
- new readability formulas (automated readability in-
947
- dex, fog count and flesch reading ease formula) for
948
- navy enlisted personnel. Technical report, Naval
949
- Technical Training Command Millington TN Re-
950
- search Branch.
951
- William H Kruskal and W Allen Wallis. 1952. Use of
952
- ranks in one-criterion variance analysis. Journal of
953
- the American statistical Association , 47(260):583–
954
- 621.
955
- Wojciech Kryscinski, Bryan McCann, Caiming Xiong,
956
- and Richard Socher. 2020. Evaluating the factual
957
- consistency of abstractive text summarization. In
958
- Proceedings of the 2020 Conference on Empirical
959
- Methods in Natural Language Processing (EMNLP) ,
960
- pages 9332–9346.
961
- Philippe Laban, Andrew Hsi, John Canny, and Marti A.
962
- Hearst. 2020. The summary loop: Learning to write
963
- abstractive summaries without examples. In Pro-
964
- ceedings of the 58th Annual Meeting of the Asso-
965
- ciation for Computational Linguistics , pages 5135–
966
- 5150. Association for Computational Linguistics.
967
- Sophie Lewis. 2021. Nasa curiosity rover celebrates
968
- 3,000th day on mars with stunning panorama of
969
- planet. CBS News.
- Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing
970
- Zong. 2018. Ensure the correctness of the summary:
971
- Incorporate entailment knowledge into abstractive
972
- sentence summarization. In Proceedings of the 27th
973
- International Conference on Computational Linguis-
974
- tics, pages 1430–1441.
975
- Y . Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar
976
- Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke
977
- Zettlemoyer, and Veselin Stoyanov. 2019. Roberta:
978
- A robustly optimized bert pretraining approach.
979
- ArXiv , abs/1907.11692.
980
- Louis Martin, ´Eric Villemonte de la Clergerie, Beno ˆıt
981
- Sagot, and Antoine Bordes. 2020. Controllable sen-
982
- tence simplification. In Proceedings of The 12th
983
- Language Resources and Evaluation Conference ,
984
- pages 4689–4698.
985
- Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
986
- Jing Zhu. 2002. Bleu: a method for automatic eval-
987
- uation of machine translation. In Proceedings of the
988
- 40th annual meeting of the Association for Compu-
989
- tational Linguistics , pages 311–318.
990
- Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xin-
991
- dong Wu. 2020. Lexical simplification with pre-
992
- trained encoders. In Proceedings of the AAAI Con-
993
- ference on Artificial Intelligence , volume 34, pages
994
- 8649–8656.
995
- Steven J Rennie, Etienne Marcheret, Youssef Mroueh,
996
- Jerret Ross, and Vaibhava Goel. 2017. Self-critical
997
- sequence training for image captioning. In Proceed-
998
- ings of the IEEE Conference on Computer Vision
999
- and Pattern Recognition , pages 7008–7024.
1000
- Julian Salazar, Davis Liang, Toan Q Nguyen, and Ka-
1001
- trin Kirchhoff. 2020. Masked language model scor-
1002
- ing. In Proceedings of the 58th Annual Meeting
1003
- of the Association for Computational Linguistics ,
1004
- pages 2699–2712.
1005
- Malbert Smith, J. Turner, Eleanor E. Sanford-Moore,
1006
- and Heather H. Koons. 2016. The lexile framework
1007
- for reading: An introduction to what it is and how to
1008
- use it.
1009
- Robyn Speer, Joshua Chin, Andrew Lin, Sara Jewett,
1010
- and Lance Nathan. 2018. Luminosoinsight / word-
1011
- freq: v2.2. Doi.org/10.5281/zenodo.1443582.
1012
- Felix Stahlberg and Shankar Kumar. 2020. Seq2edits:
1013
- Sequence transduction using span-level edit opera-
1014
- tions. In Proceedings of the 2020 Conference on
1015
- Empirical Methods in Natural Language Processing
1016
- (EMNLP) , pages 5147–5159.
1017
- Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain,
1018
- and Karthik Sankaranarayanan. 2019. Unsupervised
1019
- neural text simplification. In Proceedings of the
1020
- 57th Annual Meeting of the Association for Compu-
1021
- tational Linguistics , pages 2058–2068.
1022
- S Rebecca Thomas and Sven Anderson. 2012.
1023
- Wordnet-based lexical simplification of a document.
1024
- In KONVENS, pages 80–88.
- Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har-
1025
- ris, Alessandro Sordoni, Philip Bachman, and Ka-
1026
- heer Suleman. 2017. Newsqa: A machine compre-
1027
- hension dataset. In Proceedings of the 2nd Work-
1028
- shop on Representation Learning for NLP , pages
1029
- 191–200.
1030
- Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020.
1031
- Asking and answering questions to evaluate the fac-
1032
- tual consistency of summaries. In Proceedings of
1033
- the 58th Annual Meeting of the Association for Com-
1034
- putational Linguistics , pages 5008–5020.
1035
- Sander Wubben, Antal van den Bosch, and Emiel Krah-
1036
- mer. 2012. Sentence simplification by monolingual
1037
- machine translation. In Proceedings of the 50th An-
1038
- nual Meeting of the Association for Computational
1039
- Linguistics (Volume 1: Long Papers) , pages 1015–
1040
- 1024.
1041
- W. Xu, Chris Callison-Burch, and Courtney Napoles.
1042
- 2015. Problems in current text simplification re-
1043
- search: New data can help. Transactions of the Asso-
1044
- ciation for Computational Linguistics , 3:283–297.
1045
- Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze
1046
- Chen, and Chris Callison-Burch. 2016. Optimizing
1047
- statistical machine translation for text simplification.
1048
- Transactions of the Association for Computational
1049
- Linguistics , 4:401–415.
1050
- Shiyue Zhang and Mohit Bansal. 2019. Address-
1051
- ing semantic drift in question generation for semi-
1052
- supervised question answering. In Proceedings of
1053
- the 2019 Conference on Empirical Methods in Nat-
1054
- ural Language Processing and the 9th International
1055
- Joint Conference on Natural Language Processing
1056
- (EMNLP-IJCNLP) .
1057
- Xingxing Zhang and Mirella Lapata. 2017. Sentence
1058
- simplification with deep reinforcement learning. In
1059
- Proceedings of the 2017 Conference on Empirical
1060
- Methods in Natural Language Processing , pages
1061
- 584–594.
1062
- Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D
1063
- Manning, and Curtis Langlotz. 2020. Optimizing
1064
- the factual correctness of a summary: A study of
1065
- summarizing radiology reports. In Proceedings of
1066
- the 58th Annual Meeting of the Association for Com-
1067
- putational Linguistics , pages 5108–5120.
1068
- Yang Zhong, Chao Jiang, Wei Xu, and Junyi Jessy Li.
1069
- 2020. Discourse level factors for sentence deletion
1070
- in text simplification. In Proceedings of the AAAI
1071
- Conference on Artificial Intelligence , volume 34,
1072
- pages 9709–9716.
1073
- Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych.
1074
- 2010. A monolingual tree-based translation model
1075
- for sentence simplification. In Proceedings of the
1076
- 23rd International Conference on Computational
1077
- Linguistics (Coling 2010), pages 1353–1361.
-
- Ethical Considerations
1078
- We present a method for text simplification and ver-
1079
- ify its performance on text from the news domain
1080
- in the English language. Even though we expect
1081
- the method to be adaptable to other domains and
1082
- languages, we have not verified this assumption
1083
- experimentally and limit our claims to the English
1084
- news domain.
1085
- When comparing to prior work (e.g., ACCESS
1086
- model), we obtained implementations directly from
1087
- the authors (through Github repositories) and pro-
1088
- duced results following the recommended setting,
1089
- with an objective to present prior work as a strong
1090
- comparison point.
1091
- For the human evaluation, we paid the annota-
1092
- tors above the minimum wage, and did not collect
1093
- any personal identifiable information. We selected
1094
- topics to avoid sensitive or political subjects and
1095
- had our protocols reviewed by the university’s IRB
1096
- committee (Protocol ID: 2018-07-11230). We re-
1097
- lied on a third party (Amazon Mechanical Turk) to
1098
- remunerate the crowd-workers.
1099
- A Appendices
1100
- A.1 Training Details
1101
- We detail the model architecture size, data, opti-
1102
- mizer of the models we train in the paper. All
1103
- models were trained using Pytorch and Hugging-
1104
- Face’s Transformers library5. We use the Apex6
1105
- library to enable half-precision training.
1106
- The KiS procedure was trained on a single
1107
- GPU, either an Nvidia V-100 (16Gb memory) or
1108
- a Quadro RTX 8000 (48 Gb memory). We ran a
1109
- total of around 200 experiments, with an average
1110
- run-time of one week.
1111
- Because the procedure is unsupervised, the
1112
- model was trained using a large unreleased cor-
1113
- pus of news articles, containing 7 million news
1114
- articles in English.
1115
- KiS Model is initialized with a GPT2-medium
1116
- model. We used the Adam optimizer, with a learn-
- ing rate of 10^-6, a batch-size of 1, using k-SCST
- with k = 8.
1119
- Finetune Baseline is initialized with a GPT2-
1120
- medium model. We train using standard teacher
- forcing on the 40,000 samples in the paired
- Newsela dataset, reserving 2,000 samples for val-
- idation. We use the Adam optimizer, use the
- validation set to choose a learning rate of 10^-5
- and a batch-size of 8, and run for 3 epochs before
- seeing a plateau in the validation loss.
-
- 5 https://github.com/huggingface/transformers
- 6 https://github.com/nvidia/apex
1128
- Discriminator Model is initialized with a
1129
- Roberta-base , and retrained every time the train-
1130
- ing buffer reaches 2,000 samples. The discrim-
1131
- inator is reset to the original Roberta-base each
1132
- time the training buffer is full. We use a standard
1133
- cross-entropy loss, the ADAM optimizer with a
1134
- learning rate of 10^-5 and a batch size of 8. Each
1135
- time we retrain, we run for 5 epochs, and check-
1136
- point one model after each epoch. The checkpoint
1137
- that achieves the highest performance on a valida-
1138
- tion set becomes the new discriminator for the next
1139
- round.
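- The retraining-and-selection loop described above can be outlined as follows;
- batching, optimisation and the F-1 computation are only stubbed, so this is a
- schematic rather than the authors' implementation.
-
- import copy
-
- def retrain_discriminator(base_model_factory, buffer, train_epoch, f1_score, epochs=5):
-     # buffer: list of (text, label) pairs; 90% for training, 10% held out.
-     split = int(0.9 * len(buffer))
-     train_set, held_out = buffer[:split], buffer[split:]
-     model = base_model_factory()              # reset to the original roberta-base
-     best_model, best_f1 = None, -1.0
-     for _ in range(epochs):
-         train_epoch(model, train_set)         # one epoch of cross-entropy training
-         f1 = f1_score(model, held_out)        # checkpoint comparison on held-out 10%
-         if f1 > best_f1:
-             best_model, best_f1 = copy.deepcopy(model), f1
-     return best_model
-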
1140
- A.2 Human Evaluation Instructions
1141
- Figure A1 shows the instructions given to crowd-
1142
- worker participants for the manual evaluation.
1143
- •The entire HIT should take no more than 15
1144
- minutes:
1145
- (1) You will answer a pre-questionnaire.
1146
- (2) Read 4 short news stories and answer
1147
- comprehension questions about each.
1148
- •If you believe the answer is not in the
1149
- document, you can select the option “Answer
1150
- not in document”.
1151
- •There is no time limit for each individual
1152
- document or question.
1153
- •You can leave at any point but will not
1154
- complete the HIT.
1155
- • You can complete this task at most once.
1156
- •If you have a question/problem, contact us at
1157
- email .
1158
- Figure A1: Instructions given to participants of the
1159
- comprehension evaluation. Participants were recruited
1160
- on Amazon Mechanical Turk (MTurk), on which jobs
1161
- are named “HIT”.
1162
- A.3 Full Example of Generated Texts
1163
- Figure A2 is a complement to Figure 6, with the
1164
- five stimuli that were shown for the Covid Libraries
1165
- document.
1166
- A.4 Detail of Human Evaluation Results
1167
- Table A1 details the timing and number of par-
1168
- ticipants for each combination of document and
1169
- stimuli.
-
- Figure A2: Complement to Figure 6. Example Task for the Comprehension Study,
- showing all five stimuli for the Covid Libraries document: ORIGINAL [Lexile Grade
- 11], KIS MODEL [Lexile Grade 9], NEWSELA [Lexile Grade 7], FINETUNE BASELINE
- [Lexile Grade 9], and ACCESS [Lexile Grade 11]. Participants were assigned to one
- of five settings: original, Newsela, KiS, Finetune Baseline, and ACCESS.
- Participants were instructed to answer the five comprehension questions.
1223
-                                Simplification Model
- Document Id          Original    Newsela     Sup. Base.   ACCESS      KiS
- Marvel Show          152 (12)    209 (11)    140 (11)     209 (14)    126 (13)
- Covid Libraries      167 (14)    180 (12)    182 (10)     190 (13)    171 (12)
- Sustainable Food     163 (13)    144 (10)    181 (13)     242 (13)    154 (12)
- Iceberg Collision    208 (14)    116 (11)    139 (12)     104 (12)    119 (12)
- Version Aggregate    174 (53)    163 (44)    161 (46)     188 (52)    143 (49)
-
- Table A1: Average time taken and number of participants in each of the
- document/stimuli combinations. Also shown are aggregates (mean time taken and
- total number of participants).
-
- Model               SARI    BLEU    %FKGL   %Lexile   Comp.   Cov.
- KiS Full            0.709   0.526    100      72      0.85    0.636
- KiS No Fluency      0.718   0.611     99      95      1.02    0.901
- KiS No Salience     0.695   0.591    100      65      1.01    0.701
- KiS No Simplicity   0.672   0.617     51      23      0.92    0.809
-
- Table A2: Automatic results of the three ablation models. SARI and BLEU are
- reference-based metrics. %FKGL and %Lexile are the percentage of simplified
- paragraphs with a lower FKGL and Lexile score than the original paragraph.
- Comp. is the average compression ratio (# of words), and Cov. is the average
- coverage score of the simplifications.
1240
- A.5 Detail of Ablation Study Results
1241
- Table A2 details the metric results of the three ab-
1242
- lated models, an extension to Table 1. An example
1243
- output of each ablated model, illustrating the limi-
1244
- tation when a score component is missing, is given
1245
- in Figure 1.
1246
- One surprising element is that the model trained
1247
- without fluency achieves higher scores on almost
1248
- all metrics, compared to the full model. This sur-
1249
- prising result arises because, without fluency,
1250
- the model does not learn to generate full sentences
1251
- (see the example in Figure 1). Instead, the model
1252
- learns to concatenate high-scoring phrases together,
1253
- which can boost automatic metrics artificially. In
1254
- fact, the strong performance of a model generating
1255
- incomplete sentences reveals a limitation of current
1256
- automatic metrics, such as BLEU and SARI.
txt/2107.04240.txt DELETED
Binary file (123 kB)
 
txt/2107.13662.txt DELETED
@@ -1,598 +0,0 @@
1
- Investigating Text Simplification Evaluation
2
- Laura Vásquez-Rodríguez1, Matthew Shardlow2, Piotr Przybyła3, Sophia Ananiadou1
3
- 1National Centre for Text Mining,
4
- The University of Manchester, Manchester, United Kingdom
5
- 2Department of Computing and Mathematics,
6
- Manchester Metropolitan University, Manchester, United Kingdom
7
- 3Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
8
- flaura.vasquezrodriguez, sophia.ananiadou [email protected]
9
10
- Abstract
11
- Modern text simplification (TS) heavily relies
12
- on the availability of gold standard data to
13
- build machine learning models. However, ex-
14
- isting studies show that parallel TS corpora
15
- contain inaccurate simplifications and incor-
16
- rect alignments. Additionally, evaluation is
17
- usually performed by using metrics such as
18
- BLEU or SARI to compare system output to
19
- the gold standard. A major limitation is that
20
- these metrics do not match human judgements
21
- and the performance on different datasets and
22
- linguistic phenomena varies greatly. Further-
23
- more, our research shows that the test and
24
- training subsets of parallel datasets differ sig-
25
- nificantly. In this work, we investigate existing
26
- TS corpora, providing new insights that will
27
- motivate the improvement of existing state-of-
28
- the-art TS evaluation methods. Our contribu-
29
- tions include the analysis of TS corpora based
30
- on existing modifications used for simplifica-
31
- tion and an empirical study on TS models per-
32
- formance by using better-distributed datasets.
33
- We demonstrate that by improving the distribu-
34
- tion of TS datasets, we can build more robust
35
- TS models.
36
- 1 Introduction
37
- Text Simplification transforms natural language
38
- from a complex to a simple format, with the aim to
39
- not only reach wider audiences (Rello et al., 2013;
40
- De Belder and Moens, 2010; Aluisio et al., 2010;
41
- Inui et al., 2003) but also as a preprocessing step in
42
- related tasks (Shardlow, 2014; Silveira and Branco,
43
- 2012).
44
- Simplifications are achieved by using parallel
45
- datasets to train sequence-to-sequence text gen-
46
- eration algorithms (Nisioi et al., 2017) to make
47
- complex sentences easier to understand. They are
48
- typically produced by crowdsourcing (Xu et al.,
49
- 2016; Alva-Manchego et al., 2020a) or by align-
50
- ment (Cao et al., 2020; Jiang et al., 2020). They are infamously noisy and models trained on these give
51
- poor results when evaluated by humans (Cooper
52
- and Shardlow, 2020). In this paper we add to the
53
- growing narrative around the evaluation of natu-
54
- ral language generation (van der Lee et al., 2019;
55
- Caglayan et al., 2020; Pang, 2019), focusing on
56
- parallel text simplification datasets and how they
57
- can be improved.
58
- Why do we need to re-evaluate TS resources?
59
- In the last decade, TS research has relied on
60
- Wikipedia-based datasets (Zhang and Lapata, 2017;
61
- Xu et al., 2016; Jiang et al., 2020), despite their
62
- known limitations (Xu et al., 2015; Alva-Manchego
63
- et al., 2020a) such as questionable sentence pairs
64
- alignments, inaccurate simplifications and a limited
65
- variety of simplification modifications. Apart from
66
- affecting the reliability of models trained on these
67
- datasets, their low quality influences the evaluation
68
- relying on automatic metrics that requires gold-
69
- standard simplifications, such as SARI (Xu et al.,
70
- 2016) and BLEU (Papineni et al., 2001).
71
- Hence, evaluation data resources must be further
72
- explored and improved to achieve reliable evalu-
73
- ation scenarios. There is a growing body of ev-
74
- idence (Xu et al., 2015) (including this work) to
75
- show that existing datasets do not contain accurate
76
- and well-constructed simplifications, significantly
77
- impeding the progress of the TS field.
78
- Furthermore, well-known evaluation metrics
79
- such as BLEU are not suitable for simplification
80
- evaluation. According to previous research (Sulem
81
- et al., 2018) BLEU does not significantly correlate
82
- with simplicity (Xu et al., 2016), making it inap-
83
- propriate for TS evaluation. Moreover, it does not
84
- correlate (or the correlation is low) with grammati-
85
- cality and meaning preservation when performing
86
- syntactic simplification such as sentence splitting.
87
- Therefore in most recent TS research BLEU has
88
- not been considered as a reliable evaluation metric.
89
- We use SARI as the preferred method for TS evaluation, which has also been used as the standard
90
- evaluation metric in all the corpora analysed in this
91
- research.
92
- Our contributions include 1) the analysis of the
93
- most common TS corpora based on quantifying
94
- modifications used for simplification, evidencing
95
- their limitations and 2) an empirical study on TS
96
- models performance by using better-distributed
97
- datasets. We demonstrate that by improving the
98
- distribution of TS datasets, we can build TS mod-
99
- els that gain a higher SARI score in our evaluation
100
- setting.
101
- 2 Related Work
102
- The exploration of neural networks in TS started
103
- with the work of Nisioi et al. (2017), using
104
- the largest parallel simplification resource avail-
105
- able (Hwang et al., 2015). Neural-based work
106
- focused on state-of-the-art deep learning and
107
- MT-based methods, such as reinforcement learn-
108
- ing (Zhang and Lapata, 2017), adversarial train-
109
- ing (Surya et al., 2019), pointer-copy mecha-
110
- nism (Guo et al., 2018), neural semantic en-
111
- coders (Vu et al., 2018) and transformers supported
112
- by paraphrasing rules (Zhao et al., 2018).
113
- Other successful approaches include the usage
114
- of control tokens to tune the level of simplification
115
- expected (Alva-Manchego et al., 2020a; Scarton
116
- and Specia, 2018) and the prediction of operations
117
- using parallel corpora (Alva-Manchego et al., 2017;
118
- Dong et al., 2020). The neural methods are trained
119
- mostly on Wikipedia-based sets, varying in size
120
- and improvements in the quality of the alignments.
121
- Xu et al. (2015) carried out a systematic study on
122
- Wikipedia-based simplification resources, claim-
123
- ing Wikipedia is not a quality resource, based on
124
- the observed alignments and the type of simplifi-
125
- cations. Alva-Manchego et al. (2020a) proposed
126
- a new dataset, performing a detailed analysis in-
127
- cluding edit distance and proportion of words that
128
- are deleted, inserted and reordered, and evaluation
129
- metrics performance for their proposed corpus.
130
- Chasing the state-of-the-art is rife in NLP (Hou
131
- et al., 2019), and no less so in TS, where a SARI
132
- score is too often considered the main quality indi-
133
- cator. However, recent work has shown that these
134
- metrics are unreliable (Caglayan et al., 2020) and
135
- gains in performance according to them may not de-
136
- liver improvements in simplification performance
137
- when the text is presented to an end user.
- 3 Simplification Datasets: Exploration
138
- 3.1 Data and Methods
139
- In the initial exploration of TS datasets, we investi-
140
- gated the training, test and validation subsets (when
141
- available) of the following: WikiSmall and Wiki-
142
- Large (Zhang and Lapata, 2017), TurkCorpus (Xu
143
- et al., 2015), MSD dataset (Cao et al., 2020), AS-
144
- SET (Alva-Manchego et al., 2020a) and WikiMan-
145
- ual (Jiang et al., 2020). For the WikiManual dataset,
146
- we only considered sentences labelled as “aligned”.
147
- We computed the number of changes between
148
- the original and simplified sentences through the
149
- token edit distance. Traditionally, edit distance
150
- quantifies character-level changes from one char-
151
- acter string to another (additions, deletions and re-
152
- placements). In this work, we calculated the token-
153
- based edit distance by adapting the Wagner–Fischer
154
- algorithm (Wagner and Fischer, 1974) to determine
155
- changes at a token level. We preprocessed our
156
- sentences by changing them into lowercase prior
157
- to this analysis. To make the results comparable
158
- across sentences, we divide the number of changes
159
- by the length of the original sentence and obtain
160
- values between 0% (no changes) to 100% (com-
161
- pletely different sentence).
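- The token-level measure described here can be sketched as follows (a minimal Python illustration, not the authors' released code; the whitespace tokenisation and the cap at 100% are our assumptions):
- from typing import List
- def token_edit_distance(src_tokens: List[str], tgt_tokens: List[str]) -> int:
-     # Wagner-Fischer dynamic programming over tokens instead of characters.
-     n, m = len(src_tokens), len(tgt_tokens)
-     dp = [[0] * (m + 1) for _ in range(n + 1)]
-     for i in range(n + 1):
-         dp[i][0] = i
-     for j in range(m + 1):
-         dp[0][j] = j
-     for i in range(1, n + 1):
-         for j in range(1, m + 1):
-             cost = 0 if src_tokens[i - 1] == tgt_tokens[j - 1] else 1
-             dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
-                            dp[i][j - 1] + 1,        # insertion
-                            dp[i - 1][j - 1] + cost) # replacement (or match)
-     return dp[n][m]
- def normalised_change(complex_sent: str, simple_sent: str) -> float:
-     # Lowercase before comparison, as described above.
-     src = complex_sent.lower().split()
-     tgt = simple_sent.lower().split()
-     if not src:
-         return 0.0
-     return min(token_edit_distance(src, tgt) / len(src), 1.0) * 100.0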
162
- In addition to token-based edit operation exper-
163
- iments, we analysed the difference in sentence
164
- length between complex and simple variants, the
165
- quantity of each edit operation type (INSERT, DELETE
166
- and REPLACE) and an analysis of redundant oper-
167
- ations such as deletions and insertions in the same
168
- sentence over the same text piece (we define this as
169
- the MOVE operation). Based on our objective to
170
- show how different split configurations affect TS
171
- model performance, we have presented the percent-
172
- age of edit operations as the more informative anal-
173
- ysis performed on the most representative datasets.
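- As a rough sketch of how such counts can be derived with Python's difflib (not the exact procedure used here; the MOVE heuristic below is our assumption), consider:
- from collections import Counter
- from difflib import SequenceMatcher
- def edit_operations(src_tokens, tgt_tokens):
-     # Token-level INSERT/DELETE/REPLACE counts for one complex/simple pair.
-     ops = Counter()
-     deleted, inserted = Counter(), Counter()
-     sm = SequenceMatcher(a=src_tokens, b=tgt_tokens, autojunk=False)
-     for tag, i1, i2, j1, j2 in sm.get_opcodes():
-         if tag == "delete":
-             ops["DELETE"] += i2 - i1
-             deleted.update(src_tokens[i1:i2])
-         elif tag == "insert":
-             ops["INSERT"] += j2 - j1
-             inserted.update(tgt_tokens[j1:j2])
-         elif tag == "replace":
-             ops["REPLACE"] += max(i2 - i1, j2 - j1)
-     # A token removed in one place and added in another is counted as MOVE here.
-     ops["MOVE"] = sum((deleted & inserted).values())
-     return ops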
174
- 3.2 Edit Distance Distribution
175
- Except for the recent work of Alva-Manchego et al.
176
- (2020b), there has been little work on new TS
177
- datasets. Most prior datasets are derived by align-
178
- ing English and Simple English Wikipedia, for ex-
179
- ample WikiSmall and WikiLarge (Zhang and La-
180
- pata, 2017).
181
- In Figure 1 we can see that the edit distance
182
- distribution of the splits in the selected datasets is
183
- not even. By comparing the test and development
184
- subsets in WikiSmall (Figure 1a) we can see dif-
185
- ferences in the number of modifications involved
186
- in simplification.
- Figure 1: Comparison of TS datasets with respect to the number of edit operations between the original and
- simplified sentences. Panels: (a) WikiSmall Test/Dev/Train, (b) WikiLarge Test/Dev/Train, (c) TurkCorpus Test,
- (d) MSD Test, (e) ASSET Test, (f) WikiManual Test/Dev/Train. X-axis: token edit distance normalised by sentence
- length; Y-axis: probability density for the change percentage between complex and simple sentence pairs.
- Moreover, the WikiLarge dataset
195
- (Figure 1b) shows a complete divergence of the test
196
- subset. Additionally, it is possible to notice a signif-
197
- icant number of unaligned or noisy cases, between
198
- the 80% and 100% of change in the WikiLarge
199
- training and validation subsets (Figure 1b).
200
- We manually checked a sample of these cases
201
- and confirmed they were poor-quality simplifica-
202
- tions, including incorrect alignments. The simplifi-
203
- cation outputs (complex/simple pairs) were sorted
204
- by their edit distances and then manually checked
205
- to determine an approximate heuristic for detect-
206
- ing noisy sentences. Since many of these alignments
207
- were of very poor quality, it was easy to choose a
208
- threshold that removed a significant number of cases
209
- without dramatically reducing the size of
210
- the dataset.
211
- Datasets such as Turk Corpus (Xu et al., 2015)
212
- are widely used for evaluation and their opera-
213
- tions mostly consist of lexical simplification (Alva-
214
- Manchego et al., 2020a). We can see this behaviour
215
- in Figure 1c, where most edits involve a small per-
216
- centage of the tokens. This can be noticed when a
217
- large proportion of the sample cases are between
218
- 0% (no change) and 40%.
219
- In the search for better evaluation resources, Turk-
220
- Corpus was improved with the development of
221
- ASSET (Alva-Manchego et al., 2020a) including
222
- more heterogeneous modification measures. As
223
- we can see in Figure 1e, the data are more evenly
224
- distributed than in Figure 1c.
- Recently proposed datasets, such as WikiMan-
225
- ual (Jiang et al., 2020), as shown in Figure 1f, have
226
- an approximately consistent distribution, and their
227
- simplifications are less conservative. Based on a
228
- visual inspection of the uppermost values of the
229
- distribution (80%), we can tell that often most
230
- of the information in the original sentence is re-
231
- moved or the target simplification does not express
232
- accurately the original meaning.
233
- MSD dataset (Cao et al., 2020) is a domain-
234
- specific dataset, developed for style transfer in the
235
- health domain. In the style transfer setting, the
236
- simplifications are aggressive (i.e., not limited to
237
- individual words), to promote the detection of a
238
- difference between one style (expert language) and
239
- another (lay language). Figure 1d shows how their
240
- change-percentage distribution differs dramatically
241
- in comparison to the other datasets, placing most
242
- of the results at the right-side of the distribution.
243
- Among TS datasets, it is important to mention
244
- that the raw text of the Newsela (Xu et al., 2015)
245
- dataset was produced by professional writers and is
246
- likely of higher quality than other TS datasets. Un-
247
- fortunately, it is not aligned at the sentence level by
248
- default and its usage and distribution are limited by
249
- a restrictive data agreement. We have not included
250
- this dataset in our analysis due to the restrictive
251
- licence under which it is distributed.
- Table 1: KL-divergence between testing (Test) and development (Dev) or training (Tr) subsets.
- Dataset     Split     KL-div   p-value
- WikiSmall   Test/Dev  0.0696   0.51292
- WikiSmall   Test/Tr   0.0580   0.83186
- WikiLarge   Test/Dev  0.4623   <0.00001
- WikiLarge   Test/Tr   0.4639   <0.00001
- WikiManual  Test/Dev  0.1020   0.00003
- WikiManual  Test/Tr   0.0176   0.04184
- TurkCorpus  Test/Dev  0.0071   0.00026
- ASSET       Test/Dev  0.0491   <0.00001
262
- 3.3 KL Divergence
263
- In addition to edit distance measurements presented
264
- in Figure 1, we further analysed KL divergence
265
- (Kullback and Leibler, 1951) of those distributions
266
- to understand how much dataset subsets diverge.
267
- Specifically, we compared the distribution of the
268
- test set to the development and training sets for
269
- WikiSmall, WikiLarge, WikiManual, TurkCorpus
270
- and ASSET Corpus (when available). We did not
271
- include MSD dataset since it only has a testing set.
272
- We performed randomised permutation
273
- tests (Morgan, 2006) to confirm the statistical
274
- significance of our results. Each dataset was
275
- joined together and split randomly for 100,000
276
- iterations. We then computed the p-value as a
277
- percentage of random splits that result in the KL
278
- value equal to or higher than the one observed in
279
- the data. Based on the p-value, we can decide
280
- whether the null hypothesis (i.e. that the original
281
- splits are truly random) can be accepted. We reject
282
- the hypothesis for p-value lower than 0.05. In
283
- Table 1 we show the computed KL-divergence and
284
- p-values. The p-values below 0.05 for WikiManual
285
- and WikiLarge confirm that these datasets do not
286
- follow a truly random distribution.
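- A compact way to reproduce this kind of check is sketched below (the bin count, smoothing constant and seed are illustrative choices, not values taken from the paper):
- import numpy as np
- def kl_divergence(p_samples, q_samples, bins=20, eps=1e-9):
-     # Histogram both samples of %-change values on a shared grid over [0, 100],
-     # then compute KL(P || Q).
-     edges = np.linspace(0.0, 100.0, bins + 1)
-     p, _ = np.histogram(p_samples, bins=edges)
-     q, _ = np.histogram(q_samples, bins=edges)
-     p = p / p.sum() + eps
-     q = q / q.sum() + eps
-     return float(np.sum(p * np.log(p / q)))
- def permutation_p_value(test_vals, dev_vals, n_iter=100_000, seed=0):
-     # Pool the two subsets, re-split them at random n_iter times, and count how
-     # often the KL divergence is at least as large as the observed one.
-     rng = np.random.default_rng(seed)
-     observed = kl_divergence(test_vals, dev_vals)
-     pooled = np.concatenate([test_vals, dev_vals])
-     n_test = len(test_vals)
-     hits = 0
-     for _ in range(n_iter):
-         rng.shuffle(pooled)
-         if kl_divergence(pooled[:n_test], pooled[n_test:]) >= observed:
-             hits += 1
-     return observed, hits / n_iter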
287
- 4 Simplification Datasets: Experiments
288
- We carried out the following experiments to eval-
289
- uate the variability in performance of TS models
290
- caused by the issues described in Wiki-based data.
291
- 4.1 Data and Methods
292
- For the proposed experiments, we used the
293
- EditNTS model, a Programmer-Interpreter
294
- Model (Dong et al., 2020). Although the original
295
- code was published, its implementation required
296
- minor modifications to run in our setting. The
297
- modifications performed, the experimental subsets as well as the source code are documented via
298
- GitHub1. We selected the EditNTS model due to its
299
- competitive performance in both WikiSmall and
300
- WikiLarge datasets2. Hence, we consider this
301
- model as a suitable candidate for evaluating the
302
- different limitations of TS datasets. In future work,
303
- we will definitely consider testing our assumptions
304
- under additional metrics and models.
305
- In relation to TS datasets, we trained our mod-
306
- els on the training and development subsets from
307
- WikiLarge and WikiSmall, widely used in most
308
- of TS research. In addition, these datasets have
309
- a train, development and test set, which is essen-
310
- tial for retraining and testing the model with new
311
- split configurations. The model was first trained
312
- with the original splits, and then with the following
313
- variations:
314
- Randomised split: as explained in Section 3.3,
315
- the original WikiLarge split does not have an even
316
- distribution of edit-distance pairs between subsets.
317
- For this experiment, we resampled two of our
318
- datasets (WikiSmall and WikiLarge). For each
319
- dataset, we joined all subsets together and per-
320
- formed a new random split.
321
- Refined and randomised split: we created sub-
322
- sets that minimise the impact of poor alignments.
323
- These alignments were selected by edit distance
324
- and then subsets were randomised as above. We
325
- presume that the high-distance cases correspond
326
- to noisy and misaligned sentences. For both Wik-
327
- iSmall and WikiLarge, we reran our experiments
328
- removing 5% and 2% of the worst alignments.
329
- Finally, we evaluated the models by using the
330
- test subsets of external datasets, including: Turk-
331
- Corpus, ASSET and WikiManual.
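- A simplified version of this refinement-plus-resplit step might look as follows (building on the normalised_change helper sketched in Section 3.1; the subset sizes and random seed are placeholders, not the values used here):
- import random
- def refined_random_split(pairs, keep_fraction=0.95, dev_size=2000, test_size=2000, seed=42):
-     # pairs: list of (complex_sentence, simple_sentence) tuples.
-     ranked = sorted(pairs, key=lambda p: normalised_change(p[0], p[1]))
-     kept = ranked[: int(len(ranked) * keep_fraction)]  # refined: drop the worst alignments
-     random.Random(seed).shuffle(kept)                   # randomised: draw a new split
-     test = kept[:test_size]
-     dev = kept[test_size:test_size + dev_size]
-     train = kept[test_size + dev_size:]
-     return train, dev, test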
332
- 5 Discussion
333
- Figure 2 shows the results for WikiSmall. We can
334
- see a minor decrease in SARI score with the ran-
335
- dom splits, which means that the noisy alignments
336
- were equivalently present in all the sets rather than
337
- using the best cases for training. On the other hand,
338
- when the noisy cases are removed from the datasets
339
- the increase in model performance is clear.
340
- Likewise, we show WikiLarge results in Figure
341
- 3. When the data is randomly distributed, we obtain
342
- better performance than the original splits. This
343
- 1 https://github.com/lmvasque/ts-explore
- 2 https://github.com/sebastianruder/NLP-progress/blob/master/english/simplification.md
- Figure 2: SARI scores for evaluating WikiSmall-based
348
- models on external test sets.
349
- is consistent with WikiLarge having the largest
350
- discrepancy according to our KL-divergence mea-
351
- surements, as shown in Section 3.3. We also found
352
- that the 95% split gave a similar behaviour to Wiki-
353
- Large Random. Meanwhile, the 98% dataset gave
354
- a similar performance to the original splits for AS-
355
- SET and TurkCorpus3.
356
- We can also note that although there is a per-
357
- formance difference between WikiSmall Random
358
- and WikiSmall 95%, in WikiLarge the same splits
359
- have quite similar results. We believe these dis-
360
- crepancies are related to the size and distribution
361
- of the training sets. WikiLarge subset is three
362
- times bigger than WikiSmall in the number of sim-
363
- ple/complex pairs. Also, WikiLarge has a higher
364
- KL-divergence (0.46) than WikiSmall (0.06),
365
- which means that WikiLarge could benefit more
366
- from a random distribution experiment than Wik-
367
- iSmall, resulting in higher performance on Wiki-
368
- Large. Further differences may be caused by the
369
- procedures used to make the training/test splits in
370
- the original research, which were not described in
371
- the accompanying publications.
372
- Using randomised permutation testing, we have
373
- confirmed that the SARI differences between the
374
- models based on the original split and our best
375
- alternative (95% refined) are statistically significant
376
- (p < 0.05) for each configuration discussed above.
377
- In this study, we have shown the limitations of
378
- TS datasets and the variations in performance in
379
- different split configurations. In contrast, exist-
380
- ing evidence cannot determine which is the most
381
- suitable split, especially since this could depend
382
- on each specific scenario or target audience (e.g.,
383
- model data similar to “real world” applications).
384
- 3 ASSET and Turk Corpus results are an average over their
385
- multiple reference scores.
386
- Figure 3: SARI scores for evaluating WikiLarge-based
387
- models on external test sets.
388
- Also, we have measured our results using SARI,
389
- not only because it is the standard evaluation metric
390
- in TS but also because there are no better automatic
391
- alternatives to measure simplicity. We use SARI
392
- as a way to expose and quantify SOTA TS datasets
393
- limitations. The increase in SARI scores should be
394
- interpreted as the variability in the relative quality
395
- of the output simplifications. By relative we mean
396
- that there is a change in simplicity gain but we
397
- cannot state the simplification is at its best quality
398
- since the metric itself has its own weaknesses.
399
- 6 Conclusions
400
- In this paper, we have shown 1) the statistical limita-
401
- tions of TS datasets, and 2) the relevance of subset
402
- distribution for building more robust models. To
403
- our knowledge, distribution-based TS dataset anal-
404
- ysis has not been considered before. We hope that
405
- the exposure of these limitations kicks off a discus-
406
- sion in the TS community on whether we are in the
407
- correct direction regarding evaluation resources in
408
- TS and more widely in NLG. The creation of new
409
- resources is expensive and complex; however, we
410
- have shown that current resources can be refined,
411
- motivating future studies in the field of TS.
412
- Acknowledgments
413
- We would like to thank Nhung T.H. Nguyen and
414
- Jake Vasilakes for their valuable discussions and
415
- comments. Laura Vásquez-Rodríguez’s work was
416
- funded by the Kilburn Scholarship from the Uni-
417
- versity of Manchester. Piotr Przybyła’s work was
418
- supported by the Polish National Agency for Aca-
419
- demic Exchange through a Polish Returns grant
420
- number PPN/PPO/2018/1/00006.
- References
421
- Sandra Aluisio, Lucia Specia, Caroline Gasperin, and
422
- Carolina Scarton. 2010. Readability assessment for
423
- text simplification. Proceedings of the NAACL HLT
424
- 2010 Fifth Workshop on Innovative Use of NLP for
425
- Building Educational Applications , pages 1–9.
426
- Fernando Alva-Manchego, Joachim Bingel, Gustavo H
427
- Paetzold, Carolina Scarton, and Lucia Specia. 2017.
428
- Learning How to Simplify From Explicit Labeling
429
- of Complex-Simplified Text Pairs. Proceedings of
430
- the Eighth International Joint Conference on Natu-
431
- ral Language Processing (Volume 1: Long Papers) ,
432
- pages 295–305.
433
- Fernando Alva-Manchego, Louis Martin, Antoine Bor-
434
- des, Carolina Scarton, Benoît Sagot, and Lucia Spe-
435
- cia. 2020a. ASSET: A Dataset for Tuning and Eval-
436
- uation of Sentence Simplification Models with Mul-
437
- tiple Rewriting Transformations. arXiv .
438
- Fernando Alva-Manchego, Louis Martin, Antoine Bor-
439
- des, Carolina Scarton, Benoît Sagot, and Lucia Spe-
440
- cia. 2020b. ASSET: A dataset for tuning and eval-
441
- uation of sentence simplification models with multi-
442
- ple rewriting transformations. In Proceedings of the
443
- 58th Annual Meeting of the Association for Compu-
444
- tational Linguistics , pages 4668–4679, Online. As-
445
- sociation for Computational Linguistics.
446
- Ozan Caglayan, Pranava Madhyastha, and Lucia Spe-
447
- cia. 2020. Curious case of language generation
448
- evaluation metrics: A cautionary tale. In Proceed-
449
- ings of the 28th International Conference on Com-
450
- putational Linguistics , pages 2322–2328, Barcelona,
451
- Spain (Online). International Committee on Compu-
452
- tational Linguistics.
453
- Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan,
454
- Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise
455
- Style Transfer: A New Task Towards Better Com-
456
- munication between Experts and Laymen. In arXiv ,
457
- pages 1061–1071. Association for Computational
458
- Linguistics (ACL).
459
- Michael Cooper and Matthew Shardlow. 2020. Com-
460
- biNMT: An exploration into neural text simplifica-
461
- tion models. In Proceedings of the 12th Language
462
- Resources and Evaluation Conference , pages 5588–
463
- 5594, Marseille, France. European Language Re-
464
- sources Association.
465
- Jan De Belder and Marie-Francine Moens. 2010. Text
466
- Simplification for Children. Proceedings of the SI-
467
- GIR Workshop on Accessible Search Systems , pages
468
- 19–26.
469
- Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and
470
- Jackie Chi Kit Cheung. 2020. EditNTS: A neu-
471
- ral programmer-interpreter model for sentence sim-
472
- plification through explicit editing. In ACL 2019 -
473
- 57th Annual Meeting of the Association for Com-
474
- putational Linguistics, Proceedings of the Confer-
475
- ence, pages 3393–3402. Association for Computa-
476
- tional Linguistics (ACL).
- Han Guo, Ramakanth Pasunuru, and Mohit Bansal.
477
- 2018. Dynamic Multi-Level Multi-Task Learning
478
- for Sentence Simplification. In Proceedings of the
479
- 27th International Conference on Computational
480
- Linguistics (COLING 2018) , pages 462–476.
481
- Yufang Hou, Charles Jochim, Martin Gleize, Francesca
482
- Bonin, and Debasis Ganguly. 2019. Identifica-
483
- tion of tasks, datasets, evaluation metrics, and nu-
484
- meric scores for scientific leaderboards construction.
485
- In Proceedings of the 57th Annual Meeting of the
486
- Association for Computational Linguistics , pages
487
- 5203–5213, Florence, Italy. Association for Compu-
488
- tational Linguistics.
489
- William Hwang, Hannaneh Hajishirzi, Mari Ostendorf,
490
- and Wei Wu. 2015. Aligning sentences from stan-
491
- dard Wikipedia to simple Wikipedia. In NAACL
492
- HLT 2015 - 2015 Conference of the North American
493
- Chapter of the Association for Computational Lin-
494
- guistics: Human Language Technologies, Proceed-
495
- ings of the Conference , pages 211–217. Association
496
- for Computational Linguistics (ACL).
497
- Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu
498
- Iida, and Tomoya Iwakura. 2003. Text Simplifica-
499
- tion for Reading Assistance: A Project Note. In
500
- Proceedings of the Second International Workshop
501
- on Paraphrasing - Volume 16 , PARAPHRASE ’03,
502
- pages 9–16, USA. Association for Computational
503
- Linguistics (ACL).
504
- Chao Jiang, Mounica Maddela, Wuwei Lan, Yang
505
- Zhong, and Wei Xu. 2020. Neural CRF Model
506
- for Sentence Alignment in Text Simplification. In
507
- arXiv , pages 7943–7960. arXiv.
508
- S. Kullback and R. A. Leibler. 1951. On Information
509
- and Sufficiency. The Annals of Mathematical Statis-
510
- tics, 22(1):79–86.
511
- Chris van der Lee, Albert Gatt, Emiel van Miltenburg,
512
- Sander Wubben, and Emiel Krahmer. 2019. Best
513
- practices for the human evaluation of automatically
514
- generated text. In Proceedings of the 12th Interna-
515
- tional Conference on Natural Language Generation ,
516
- pages 355–368, Tokyo, Japan. Association for Com-
517
- putational Linguistics.
518
- William Morgan. 2006. Statistical Hypothesis Tests for
519
- NLP or: Approximate Randomization for Fun and
520
- Profit.
521
- Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto,
522
- and Liviu P. Dinu. 2017. Exploring neural text sim-
523
- plification models. In ACL 2017 - 55th Annual Meet-
524
- ing of the Association for Computational Linguistics,
525
- Proceedings of the Conference (Long Papers) , vol-
526
- ume 2, pages 85–91. Association for Computational
527
- Linguistics (ACL).
528
- Richard Yuanzhe Pang. 2019. The Daunting Task of
529
- Real-World Textual Style Transfer Auto-Evaluation.
530
- arXiv.
- Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
531
- Jing Zhu. 2001. BLEU: a method for automatic eval-
532
- uation of machine translation. ACL, Proceedings
533
- of the 40th Annual Meeting of the Association for
534
- Computational Linguistics(July):311–318.
535
- Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Ho-
536
- racio Saggion. 2013. Simplify or help? Text simpli-
537
- fication strategies for people with dyslexia. In W4A
538
- 2013 - International Cross-Disciplinary Conference
539
- on Web Accessibility .
540
- Carolina Scarton and Lucia Specia. 2018. Learning
541
- simplifications for specific target audiences. In ACL
542
- 2018 - 56th Annual Meeting of the Association for
543
- Computational Linguistics, Proceedings of the Con-
544
- ference (Long Papers) , volume 2, pages 712–718,
545
- Stroudsburg, PA, USA. Association for Computa-
546
- tional Linguistics.
547
- Matthew Shardlow. 2014. A Survey of Automated Text
548
- Simplification. International Journal of Advanced
549
- Computer Science and Applications , 4(1).
550
- Sara Botelho Silveira and António Branco. 2012. En-
551
- hancing multi-document summaries with sentence
552
- simplification. In Proceedings of the 2012 Inter-
553
- national Conference on Artificial Intelligence, ICAI
554
- 2012 , volume 2, pages 742–748.
555
- Elior Sulem, Omri Abend, and Ari Rappoport. 2018.
556
- BLEU is Not Suitable for the Evaluation of Text
557
- Simplification. In Proceedings of the 2018 Con-
558
- ference on Empirical Methods in Natural Language
559
- Processing , pages 738–744, Stroudsburg, PA, USA.
560
- Association for Computational Linguistics.
561
- Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain,
562
- and Karthik Sankaranarayanan. 2019. Unsupervised
563
- Neural Text Simplification. ACL 2019 - 57th An-
564
- nual Meeting of the Association for Computational
565
- Linguistics, Proceedings of the Conference , pages
566
- 2058–2068.
567
- Tu Vu, Baotian Hu, Tsendsuren Munkhdalai, and Hong
568
- Yu. 2018. Sentence simplification with memory-
569
- augmented neural networks. In NAACL HLT 2018
570
- - 2018 Conference of the North American Chapter of
571
- the Association for Computational Linguistics: Hu-
572
- man Language Technologies - Proceedings of the
573
- Conference , volume 2, pages 79–85. Association for
574
- Computational Linguistics (ACL).
575
- Robert A. Wagner and Michael J. Fischer. 1974. The
576
- String-to-String Correction Problem. Journal of the
577
- ACM (JACM) , 21(1):168–173.
578
- Wei Xu, Chris Callison-Burch, and Courtney Napoles.
579
- 2015. Problems in Current Text Simplification Re-
580
- search: New Data Can Help. Transactions of the As-
581
- sociation for Computational Linguistics , 3:283–297.
582
- Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze
583
- Chen, and Chris Callison-Burch. 2016. Optimizing
584
- Statistical Machine Translation for Text Simplifica-
585
- tion. Transactions of the Association for Computa-
586
- tional Linguistics, 4:401–415.
- Xingxing Zhang and Mirella Lapata. 2017. Sentence
587
- Simplification with Deep Reinforcement Learning.
588
- In EMNLP 2017 - Conference on Empirical Meth-
589
- ods in Natural Language Processing, Proceedings ,
590
- pages 584–594. Association for Computational Lin-
591
- guistics (ACL).
592
- Sanqiang Zhao, Rui Meng, Daqing He, Saptono Andi,
593
- and Parmanto Bambang. 2018. Integrating trans-
594
- former and paraphrase rules for sentence simplifi-
595
- cation. In Proceedings of the 2018 Conference on
596
- Empirical Methods in Natural Language Process-
597
- ing, EMNLP 2018 , pages 3164–3173. Association
598
- for Computational Linguistics.
 
 
 
 
txt/2107.14072.txt DELETED
@@ -1,473 +0,0 @@
1
- What Does TERRA-REF’s High Resolution, Multi Sensor Plant Sensing Public
2
- Domain Data Offer the Computer Vision Community?
3
- David LeBauer
4
- University of Arizona
5
- [email protected]
- Maxwell Burnette
6
- University of Illinois
7
- [email protected]
- Noah Fahlgren
8
- Donald Danforth Plant Science Center
9
10
- Rob Kooper
11
- University of Illinois
12
- [email protected]
- Kenton McHenry
13
- University of Illinois
14
- [email protected]
- Abby Stylianou
15
- St. Louis University
16
17
- Abstract
18
- A core objective of the TERRA-REF project was to gen-
19
- erate an open-access reference dataset for the evaluation
20
- of sensing technologies to study plants under field condi-
21
- tions. The TERRA-REF program deployed a suite of high-
22
- resolution, cutting edge technology sensors on a gantry sys-
23
- tem with the aim of scanning 1 hectare (10^4 m^2) at around 1
24
- mm^2 spatial resolution multiple times per week. The system
25
- contains co-located sensors including a stereo-pair RGB
26
- camera, a thermal imager, a laser scanner to capture 3D
27
- structure, and two hyperspectral cameras covering wave-
28
- lengths of 300-2500nm. This sensor data is provided along-
29
- side over sixty types of traditional plant phenotype measure-
30
- ments that can be used to train new machine learning mod-
31
- els. Associated weather and environmental measurements,
32
- information about agronomic management and experimen-
33
- tal design, and the genomic sequences of hundreds of plant
34
- varieties have been collected and are available alongside
35
- the sensor and plant phenotype data.
36
- Over the course of four years and ten growing seasons,
37
- the TERRA-REF system generated over 1 PB of sensor data
38
- and almost 45 million files. The subset that has been re-
39
- leased to the public domain accounts for two seasons and
40
- about half of the total data volume. This provides an un-
41
- precedented opportunity for investigations far beyond the
42
- core biological scope of the project.
43
- The focus of this paper is to provide the Computer Vi-
44
- sion and Machine Learning communities an overview of the
45
- available data and some potential applications of this one
46
- of a kind data.
- 1. Introduction
47
- In 2015, the Advanced Research Projects Agency for En-
48
- ergy (ARPA-E) funded the TERRA-REF Phenotyping Plat-
49
- form (Figure 1). The scientific aim was to transform plant
50
- breeding by providing a reference dataset generated by de-
51
- ploying a suite of co-located high-resolution sensors un-
52
- der field conditions. The goal of these sensors was to use
53
- proximate sensing from approximately 2m above the plant
54
- canopy to quantify plant characteristics.
55
- The study has evaluated diverse populations of sorghum,
56
- wheat, and lettuce over the course of four years and ten
57
- cropping cycles. Future releases of additional data will be
58
- informed by user interests.
59
- Figure 1. TERRA-REF field scanner at the University of Arizona’s
60
- Maricopa Agricultural Center.
61
- The TERRA-REF reference dataset can be used to char-
62
- acterize phenotype-to-genotype associations, on a genomic
63
- scale, that will enable knowledge-driven breeding and the
64
- development of higher-yielding cultivars of sorghum and
65
- wheat. The data is also being used to develop new algo-
66
- rithms for machine learning, image analysis, genomics, and
67
- optical sensor engineering. Beyond applications in plantbreeding, the resulting dataset provides opportunities for the
68
- study and integration of diverse remote sensing modalities.
69
- 1.1. Types of Data
70
- The TERRA-REF field scanner platform utilizes a sensor
71
- suite of co-located instruments (Figure 2 and Table 1). The
72
- TERRA-REF reference dataset includes several data types
73
- (Figures 3 and 4, Table 2) including raw and processed
74
- outputs from sensors, environmental sensor measurements,
75
- manually measured and computationally derived pheno-
76
- types, and raw and processed genomics datasets [16]. Ex-
77
- tensive contextual measurements and metadata include sen-
78
- sor information and extensive documentation for each of the
79
- sensors, the field scanner, calibration targets, and the results
80
- of sensor validation tests [16].
81
- In addition to raw sensor data, the first release of
82
- TERRA-REF data includes derived sensor data products in
83
- enhanced formats including calibrated and georeferenced
84
- images and point clouds (Table 2). Many of the data prod-
85
- ucts are provided in formats that follow Open Geospatial
86
- Consortium (OGC) standards and work with GIS software.
87
- Figure 2. TERRA-REF field scanner sensor suite.
88
- 1.2. Sensors
89
- Sensors available on the TERRA-REF field scanner in-
90
- clude snapshot and line-scan imaging, multi-spectral radio-
91
- metric, and environmental sensors. Table 1 and Figure 2)
92
- provide a high level overview of the sensors deployed on
93
- this system. Full documentation and metadata for each sen-
94
- sor as well as the configuration and geometry of the sensor
95
- box are provided as metadata alongside the TERRA-REF
96
- data release.
97
- 2. Computer Vision and Machine Learning
98
- Problems
99
- There are a variety of questions that the TERRA-REF
100
- dataset could be used to answer that are of high importance
101
- to the agricultural and plant science communities, while
102
- also posing extremely interesting and challenging computer
103
- vision and machine learning problems. In this section, weconsider example research areas or topics within computer
104
- vision and discuss the relevant agricultural and plant sci-
105
- ence questions those research communities could help ad-
106
- dress using the TERRA-REF data.
107
- Measurement, Prediction and Causal Inference. The
108
- TERRA-REF sensor data can be used to drive development
109
- of vision-based algorithms for fundamental problems in
110
- plant phenotyping, such as making measurements of plant
111
- height, leaf length, flower counting, or estimating environ-
112
- mental stress. Additional challenges include attempting to
113
- predict end of season phenotypes, such as end of season
114
- yield, from early season sensor data – an accurate predic-
115
- tor of end of season yield from early season visual data, for
116
- example, could help growers and breeders invest resources
117
- only in the most promising of candidate crops. There are
118
- additional opportunities to investigate the causal relation-
119
- ship between genotypes or environmental conditions and
120
- their expressed phenotypes, as the TERRA-REF dataset in-
121
- cludes both comprehensive genetic information, as well as
122
- high temporal resolution environmental information. The
123
- TERRA-REF data contain over sixty hand measurements
124
- that could be used to train models from one or more sensors.
125
- In addition, there are opportunities to train models that pre-
126
- dict plot-level phenotypes measured by an expensive sensor
127
- with a less expensive sensor. Further, many events includ-
128
- ing insect damage, heat stress, and plant lodging (falling)
129
- could be labeled in new images.
130
- Fine Grained Visual Categorization. The TERRA-REF
131
- data is a rich source of visual sensor data collected from
132
- crop species that are visually similar. Differentiating be-
133
- tween data with low inter-class variance is an interesting
134
- categorization challenge, requiring visual models that learn
135
- the fine-grained differences between varieties of the same
136
- crop.
137
- Transfer Learning. There are a variety of interesting
138
- transfer learning challenges of utmost importance to the
139
- agricultural and plant science communities, including dis-
140
- covering approaches that generalize across sensors, across
141
- crops, or across environmental conditions. The TERRA-
142
- REF data additionally presents an opportunity to help solve
143
- the greenhouse-to-field gap, where models that perform
144
- well in greenhouse conditions tend to not generalize to field
145
- conditions; because the TERRA-REF data includes both
146
- greenhouse and field data for the exact same varieties, re-
147
- searchers in transfer learning could help build models that
148
- bridge this gap.
149
- Multi-sensor Integration. The TERRA-REF data in-
150
- cludes data captured from a variety of visual sensors (de-
151
- scribed in Section 1.2). These sensors have similar, but notTable 1. Summary of TERRA-REF sensor instruments.
152
- Sensor Name Model Technical Specifications
153
- Imaging Sensors
154
- Stereo RGB Camera Allied Vision Prosilica GT3300C
155
- Laser Scanner Custom Fraunhofer 3D Spatial Resolution: 0.3 to 0.9 mm
156
- Thermal Infrared FLIR A615 Thermal Sensitivity: ≤50mK @ 30◦C
157
- PS II Camera LemnaTec PS II Fluorescence Prototype Illumination 635nm x 4000 µmol/m2/s, Camera 50 fps
158
- Multi-spectral Radiometers
159
- Dedicated NDVI Multispectral Radiometer Skye Instruments SKR 1860D/A 650 nm, 800 nm ±5 nm; 1 down, 1 up
160
- Dedicated PRI Multispectral Radiometer Skye Instruments SKR 1860ND/A 531nm +/- 3nm; PRI = Photochemical Reflectance Index
161
- Active Reflectance Holland Scientific Crop Circle ACS-430 670 nm, 730 nm, 780 nm
162
- Hyper-spectral Cameras
163
- VNIR Hyperspectral Imager Headwall Inspector VNIR 380-1000 nm @ 2/3 nm resolution
164
- SWIR Hyperspectral Imager Headwall Inspector SWIR 900-2500 nm @ 12 nm resolution
165
- Environmental Sensors
166
- Climate Sensors Thies Clima 4.9200.00.000
167
- VNIR Spectroradiometer Ocean Optics STS-Vis Range: 337-824 nm @ 1/2 nm
168
- VNIR+SWIR Spectroradiometer Spectral Evolution PSR+3500 Range 800-2500nm @3-8 nm; Installed 2018
169
- PAR Sensor Quantum SQ–300 Spectral Range 410 to 655 nm
170
- Table 2. Summary of the sensor data products included in the first release of TERRA-REF data.
171
- Data Product Sensor Algorithm File Format Plot Clip Full Field
172
- Environment Thies Clima envlog2netcdf netcdf NA NA
173
- Thermal Image FLIR ir geotiff geotiff +
174
- Point Cloud Fraunhofer Laser 3D laser3d las las +
175
- Point Cloud Fraunhofer Laser 3D scanner3DTop ply
176
- Images Time-Series PSII Camera ps2png png
177
- Color Images RGB Stereo bin2tiff geotiff + +
178
- Plant Mask RGB Stereo rgb mask geotiff x
179
- identical, viewpoints from within the gantry box, may not
180
- have captured data at the exact same time, and may have
181
- captured different perspectives of the same part of the field
182
- on different days. This presents interesting challenges in
183
- terms of how to incorporate information across the various
184
- sensors, and how to work with time-series data that is not
185
- necessarily well-aligned or continuously captured.
186
- Explainable Models. All too often in machine learning
187
- research, datasets and models are built solely to drive the
188
- development of machine learning algorithms. When build-
189
- ing models to answer questions like “should I cut this plant
190
- down because it won’t produce sufficient yield?” or “is this
191
- plant under environmental stress?,” it is important not just
192
- to have maximally accurate models but to also understand
193
- why the models make the determinations that they make.
194
- This makes the TERRA-REF data, and the biologically rel-
195
- evant questions it supports, an excellent opportunity to drive
196
- development of new approaches for explainable machine
197
- learning, conveying the decisions made by algorithms to
198
- non-machine learning experts.
199
- Information Content. The TERRA-REF field scanner
200
- and sensors represent a substantial investment, and it is still
201
- not clear which sensors, sensor configurations, and spatialand temporal resolutions are useful to answer a particular
202
- question. Presently, much less expensive sensors and sens-
203
- ing platforms are available [11, 1]. What do we gain from
204
- the 1mm spatial resolution on this platform relative to unoc-
205
- cupied aerial systems (UAS) that are quickly approaching
206
- 1cm spatial resolution? Or, which subset of hyperspectral
207
- wavelengths provide the most useful information? Can we
208
- predict the useful parts of a hyperspectral image from RGB
209
- images? Or get most of the information from a multispec-
210
- tral camera with a half-dozen bands? At the outset, the team
211
- recognized that this configuration would be oversampling
212
- the plant subjects, but it wasn’t clear what the appropriate
213
- resolutions or most useful sensors would be.
214
- Overall Challenges. Within all of these topic areas in
215
- computer vision and machine learning, the challenges that
216
- must be addressed require addressing interesting questions
217
- such as determining the most appropriate sensors and data
218
- processing choices for specific questions, addressing dif-
219
- ficult domain transfer issues, considering how to integrate
220
- noisy side channels of information, such as genetic infor-
221
- mation that may conflict or conflate with each other, or deal-
222
- ing with nuisance parameters like environmental or weather
223
- variations that simultaneously influence plant subjects and
224
- the sensor data content.Figure 3. Example data from the TERRA-REF gantry system. (top left) RGB data (center top) RGB data with soil masked (top right)
225
- close up of RGB data. 3D-scanner data (center row, right to left) depth, reflectance, surface normals, and point cloud data produced by
226
- the 3D scanner. (bottom row, right to left) FLIR thermal image with transpiring leaves shown as cooler than soil; F v/Fmderived from
227
- active fluorescence PSII sensor providing a measure of photosynthetic efficiency; the reflectance of light at 543nm wavelength measured
228
- by the VNIR hyperspectral camera. Because these are two-dimensional representations of three-dimensional systems, all scale bars are
229
- approximate.
230
- 3. Algorithm Development.
231
- The process of converting raw sensor outputs into usable
232
- data products required geometric, radiometric, and geospa-
233
- tial calibration. In this regard, each sensor presented its own
234
- challenges. Combining these steps into an automated com-
235
- puting pipeline also represented a substantial effort that is
236
- described by Burnette et al. [3].
237
- Radiometric calibration was particularly challenging,
238
- owing that many images contain both sunlit and shaded ar-
239
- eas. In the case of hyperspectral images, the white sensor
240
- box and scans spread out over multiple days confounded
241
- an already challenging problem. Radiometric calibration
242
- of images taken by the two hyperspectral cameras exempli-
243
- fies these challenges, and a robust solution is described by
244
- Sagan et al. [21] and implemented in [19]. Even process-
245
- ing images from an RGB camera was challenging due to
246
- fixed settings resulting in high variability in quality and ex-
247
- posure, requiring the novel approach described by Li et al.
248
- [18]. Herritt et al. [14, 13] demonstrate and provide soft-
249
- ware used in analysis of a sequence of images that capture
250
- plant fluorescence response to a pulse of light.Most of the algorithms used to generate data products
251
- have not been published as papers but are made available
252
- on GitHub ( https://github.com/terraref ); code
253
- used to release the data publication in 2020 is available on
254
- Zenodo [25, 15, 10, 6, 4, 19, 8, 7, 5, 9, 17].
255
- Pipeline development continues to support ongoing use
256
- of the field scanner as well as more general applica-
257
- tions in plant sensing pipelines. Recent advances have
258
- improved pipeline scalability and modularity by adopt-
259
- ing workflow tools and making use of heterogeneous
260
- computing environments. The TERRA-REF computing
261
- pipeline has been adapted and extended for continuing
262
- use with the Field Scanner with the new name ”Phy-
263
- toOracle” and is available at https://github.com/
264
- LyonsLab/PhytoOracle . Related work generalizing
265
- the pipeline for other phenomics applications has been re-
266
- leased under the name ”AgPipeline” https://github.
267
- com/agpipeline with applications to aerial imaging de-
268
- scribed by Schnaufer et al . [22]. All of these software
269
- are made available with permissive open source licenses on
270
- GitHub to enable access and community development.Figure 4. Summary of public sensor datasets from Seasons 4 and 6. Each dot represents the dates for which a particular data product is
271
- available, and the size of the dot indicates the number of files available.
272
- 4. Uses to date.
273
- TERRA-REF is being used in a variety of ways. For ex-
274
- ample, hyperspectral images have been used to measure soil
275
- moisture [2], but the potential to predict leaf chemical com-
276
- position and biophysical traits related to photosynthesis and
277
- water use are particularly promising based on prior work
278
- [23, 24].
279
- Plant Science and Computer Vision Research. A few
280
- projects are developing curated datasets for specific ma-
281
- chine learning challenges related to classification, objectrecognition, and prediction.
282
- We currently know of at least three datasets curated
283
- for CVPPA 2021. The Sorghum-100 dataset was cre-
284
- ated to support development of algorithms that can clas-
285
- sify sorghum varieties from RGB images from Ren et al.
286
- [20]. Another set of RGB images curated for the Sorghum
287
- Biomass Prediction Challenge on Kaggle was developed
288
- with the goal of developing methods to predict end of sea-
289
- son biomass from images taken of different sorghum geno-
290
- types over the course of the growing season. Finally, RGB
291
- images from the TERRA-REF field scanner in Maricopa
292
- accounted for 250 of the 6000 1024x1024 pixel images in the Global Wheat Head Dataset 2021 [12]. The goal of the
293
- Global Wheat Challenge 2021 on AIcrowd is to develop an
294
- algorithm that can identify wheat heads from a collection of
295
- images from around the world that represent diverse fields
296
- conditions, sensors, settings, varieties, and growth stages.
297
- Most of the research applications to date have focused on
298
- analysis of plot-level phenotypes and genomic data rather
299
- than the full resolution sensor data.
300
- 5. Data Access
301
- Public Domain Data. A curated subset of the TERRA-
302
- REF data was released to the public domain in 2020 (Figure
303
- 4) [16]. These data are intended to be re-used and are acces-
304
- sible as a combination of files and databases linked by spa-
305
- tial, temporal, and genomic information. In addition to pro-
306
- viding open access data, the entire computational pipeline is
307
- open source, and we can assist academic users with access
308
- to high-performance computing environments.
309
- The total size of raw (Level 0) data generated by these
310
- sensors is 60 TB. Combined, the Level 1 and Level 2 sen-
311
- sor data products are 490 TB. This size could be substan-
312
- tially reduced through compression and removal of dupli-
313
- cate data. For example, the same images at the same resolu-
314
- tion appear in the georeferenced Level 1 files, the full field
315
- mosaics, and the plot-level clip.
316
- Other Data Available. The complete TERRA-REF
317
- dataset is not publicly available because of the effort and
318
- cost of processing, reviewing, curating, describing, and
319
- hosting the data. Instead, we focused on an initial public
320
- release and plan to make new datasets available based on
321
- need. Access to unpublished data can be requested from
322
- the authors, and as data are curated they will be added to
323
- subsequent versions of the public domain release ( https:
324
- //terraref.org/data/access-data ).
325
- In addition to hosting an archival copy of data on Dryad
326
- [16], the documentation includes instructions for browsing
327
- and accessing these data through a variety of online portals.
328
- These portals provide access to web user interfaces as well
329
- as databases, APIs, and R and Python clients. In some cases
330
- it will be easier to access data through these portals using
331
- web interfaces and software libraries.
332
- The public domain data is archived on Dryad, with the
333
- exception of the large sensor data files. The Dryad archive
334
- provides a catalog of these files that can be accessed via
335
- Globus or directly on the host computer at the National Cen-
336
- ter for Supercomputing Applications.
337
- 6. Acknowledgements
338
- The work presented herein was funded in part by the
339
- Advanced Research Projects Agency-Energy (ARPA-E),U.S. Department of Energy, under Award Numbers DE-
340
- AR0000598 and DE-AR0001101, and the National Science
341
- Foundation, under Award Numbers 1835834 and 1835543.
342
- References
343
- [1] Jonathan A Atkinson, Robert J Jackson, Alison R Bentley,
344
- Eric Ober, and Darren M Wells. Field Phenotyping for the
345
- Future , pages 1–18. John Wiley & Sons, Ltd, Chichester,
346
- UK, Nov. 2018. 3
347
- [2] Ebrahim Babaeian, Paheding Sidike, Maria S New-
348
- comb, Maitiniyazi Maimaitijiang, Scott A White, Jeffrey
349
- Demieville, Richard W Ward, Morteza Sadeghi, David S
350
- LeBauer, Scott B Jones, et al. A new optical remote sens-
351
- ing technique for high-resolution mapping of soil moisture.
352
- Frontiers in Big Data , 2:37, 2019. 5
353
- [3] Maxwell Burnette, Rob Kooper, J D Maloney, Gareth S Ro-
354
- hde, Jeffrey A Terstriep, Craig Willis, Noah Fahlgren, Todd
355
- Mockler, Maria Newcomb, Vasit Sagan, Pedro Andrade-
356
- Sanchez, Nadia Shakoor, Paheding Sidike, Rick Ward, and
357
- David LeBauer. TERRA-REF data processing infrastructure.
358
- InProceedings of the Practice and Experience on Advanced
359
- Research Computing , page 27. ACM, July 2018. 4
360
- [4] Max Burnette, David LeBauer, Solmaz Hajmohammadi,
361
- ZongyangLi, Craig Willis, Wei Qin, Sidke Paheding, and JD
362
- Maloney. terraref/extractors-multispectral: Season 6 Data
363
- Publication (2019), Sept. 2019. 4
364
- [5] Max Burnette, David LeBauer, Wei Qin, and Yan Liu.
365
- terraref/extractors-metadata: Season 6 Data Publication
366
- (2019), Sept. 2019. 4
367
- [6] Max Burnette, David LeBauer, ZongyangLi, Wei Qin, Sol-
368
- maz Hajmohammadi, Craig Willis, Sidke Paheding, and
369
- Nick Heyek. terraref/extractors-stereo-rgb: Season 6 Data
370
- Publication (2019), Sept. 2019. 4
371
- [7] Max Burnette, Zongyang Li, Solmaz Hajmohammadi, David
372
- LeBauer, Nick Heyek, and Craig Willis. terraref/extractors-
373
- 3dscanner: Season 6 Data Publication (2019), Sept. 2019.
374
- 4
375
- [8] Max Burnette, Jerome Mao, David LeBauer, Charlie Zen-
376
- der, and Harsh Agrawal. terraref/extractors-environmental:
377
- Season 6 Data Publication (2019), Sept. 2019. 4
378
- [9] Max Burnette, Craig Willis, Chris Schnaufer, David
379
- LeBauer, Nick Heyek, Wei Qin, Solmaz Hajmohammadi,
380
- and Kristina Riemer. terraref/terrautils: Season 6 Data Pub-
381
- lication (2019), Sept. 2019. 4
382
- [10] Max Burnette, Charlie Zender, JeromeMao, David LeBauer,
383
- Rachel Shekar, Noah Fahlgren, Craig Willis, Henry Bu-
384
- towsky, Xingchen Hong, ZongyangLi, Fengling Wang, Tin-
385
- oDornbusch, JD Maloney, Wei Qin, Stuart Marshall, Abby
386
- Stylianou, and Ting Li. terraref/computing-pipeline: Season
387
- 4 & 6 Data Publication (2019), Feb. 2020. 4
388
- [11] Anna L Casto, Haley Schuhl, Jose C Tovar, Qi Wang, Re-
389
- becca S Bart, Noah Fahlgren, and Malia A Gehan. Picturing
390
- the future of food. Plant phenome j. , 4(1), Jan. 2021. 3
391
- [12] Etienne David, Mario Serouart, Daniel Smith, Simon
392
- Madec, Kaaviya Velumani, Shouyang Liu, Xu Wang, Fran-
393
- cisco Pinto Espinosa, Shahameh Shafiee, Izzat S. A. Tahir,Hisashi Tsujimoto, Shuhei Nasuda, Bangyou Zheng, Norbert
394
- Kichgessner, Helge Aasen, Andreas Hund, Pouria Sadhegi-
395
- Tehran, Koichi Nagasawa, Goro Ishikawa, S ´ebastien Dan-
396
- drifosse, Alexis Carlier, Benoit Mercatoris, Ken Kuroki,
397
- Haozhou Wang, Masanori Ishii, Minhajul A. Badhon, Cur-
398
txt/2108.06156.txt DELETED
@@ -1,1272 +0,0 @@
1
- EEEA-NET: AN EARLY EXIT EVOLUTIONARY NEURAL ARCHITECTURE SEARCH
3
- PUBLISHED AT ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE . DOI:
4
- https://doi.org/10.1016/j.engappai.2021.104397
5
- Chakkrit Termritthikun
6
- STEM, University of South Australia
7
- Adelaide, SA, 5095, Australia
8
- [email protected]
- Jamtsho
9
- College of Science and Technology
10
- Royal University of Bhutan
11
- Phuentsholing, 21101, Bhutan
12
13
- Jirarat Ieamsaard
14
- Department of Electrical and Computer Engineering
15
- Faculty of Engineering, Naresuan University
16
- Phitsanulok, 65000, Thailand
17
- [email protected]
- Muneesawang
18
- Department of Electrical and Computer Engineering
19
- Faculty of Engineering, Naresuan University
20
- Phitsanulok, 65000, Thailand
21
22
- Ivan Lee
23
- STEM, University of South Australia
24
- Adelaide, SA, 5095, Australia
25
26
- ABSTRACT
27
- The goals of this research were to search for Convolutional Neural Network (CNN) architectures,
28
- suitable for an on-device processor with limited computing resources, performing at substantially
29
- lower Network Architecture Search (NAS) costs. A new algorithm entitled an Early Exit Population
30
- Initialisation (EE-PI) for Evolutionary Algorithm (EA) was developed to achieve both goals. The
31
- EE-PI reduces the total number of parameters in the search process by filtering the models with fewer
32
- parameters than the maximum threshold. It will look for a new model to replace those models with
33
- parameters more than the threshold. Thereby, reducing the number of parameters, memory usage
34
- for model storage and processing time while maintaining the same performance or accuracy. The
35
- search time was reduced to 0.52 GPU day. This is a huge and significant achievement compared to
36
- the NAS of 4 GPU days achieved using NSGA-Net, 3,150 GPU days by the AmoebaNet model, and
37
- the 2,000 GPU days by the NASNet model. As well, Early Exit Evolutionary Algorithm networks
38
- (EEEA-Nets) yield network architectures with minimal error and computational cost suitable for a
39
- given dataset as a class of network algorithms. Using EEEA-Net on CIFAR-10, CIFAR-100, and
40
- ImageNet datasets, our experiments showed that EEEA-Net achieved the lowest error rate among
41
- state-of-the-art NAS models, with 2.46% for CIFAR-10, 15.02% for CIFAR-100, and 23.8% for
42
- ImageNet dataset. Further, we implemented this image recognition architecture for other tasks, such
43
- as object detection, semantic segmentation, and keypoint detection tasks, and, in our experiments,
44
- EEEA-Net-C2 outperformed MobileNet-V3 on all of these various tasks. (The algorithm code is
45
- available at https://github.com/chakkritte/EEEA-Net ).
46
- Keywords: Deep learning · Neural Architecture Search · Multi-Objective Evolutionary Algorithms · Image classification
47
- This work was done while Chakkrit Termritthikun was a visiting research student at the University of South Australia.
- arXiv:2108.06156v1 [cs.CV] 13 Aug 2021
48
- 1 Introduction
49
- Deep convolutional neural networks (CNNs) have been widely used in computer vision applications, including image
50
- recognition, image detection, and image segmentation. In the ImageNet Large Scale Visual Recognition Challenge
51
- (ILSVRC) Russakovsky et al. [2015], the AlexNet Krizhevsky et al. [2017], GoogLeNet Szegedy et al. [2015], ResNet
52
- He et al. [2016], and SENet Hu et al. [2018] were representative models that have been widely used in various applications.
53
- The SqueezeNet Iandola et al. [2016], MobileNets Howard et al. [2017], NUF-Net Termritthikun et al. [2019, 2020],
54
- and ShuffleNet Zhang et al. [2018] models were simultaneously developed to be used on devices with limited resources.
55
- All of these architecture networks have been strengthened and advanced by developers for many years.
56
- However, a significant drawback of the useability of these models, and to the development of the efficient CNNs models,
57
- was the dependence on the designer’s expertise and experience, including utilising resources such as high-performance
58
- computing (HPC) for the experimentation. The datasets used for analysis also affect the model efficiency, depending on
59
- the different features that different datasets manifest, and all image recognition datasets require specialised research
60
- knowledge for each dataset when modelling. One algorithm, the NAS Zoph and Le [2016] was designed to search the
61
- CNN network architecture for different datasets and thereby avoided the hitherto human intervention or design activity
62
- except during the definition of the initial hyper-parameters.
63
- The CNN network architecture is flexible, allowing the model to be developed in different structures, with the module
64
- structure consisting of layers, linked in sequence with different parameters. Models obtained from NAS methods differ
65
- in structure and parameters, making NAS searches more efficient in finding dataset models, resulting in finding a model
66
- for each particular dataset which have a unique structure and parameter set.
67
- Reinforcement Learning (RL) and gradient descent algorithms, automate the search for models of deep learning.
68
- Searching a model with an RL algorithm takes 2,000 GPU days to uncover an effective model. However, when the
69
- gradient descent algorithm is focused on searching a model with the highest accuracy for only one objective; it takes
70
- 4 GPU days. Both algorithms have difficulty in dealing with a multi-objective problem. EA, however, can solve
71
- multi-objective optimisation problems.
72
- EA apply optimisation methods that mimic the evolution of living organisms in nature, including reproduction, mutation,
73
- recombination, and selection. EA can find the most suitable candidate solution with quality function rates. EA-based
74
- NAS approaches are very robust with shallow error values for experiments with CIFAR-10 and CIFAR-100 datasets.
75
- However, past search algorithms in models of the EA-based NAS approaches have taken up to 3,150 GPU days in Real
76
- et al. [2019], 300 GPU days in Liu et al. [2018a], and 4 GPU days in Liu et al. [2018b]. Many network architectures
77
- built from NAS have a high number of parameters and high computing costs. It is obvious that extensive network
78
- architectures must be avoided when attempting to identify a network architecture that is suitable for other applications.
79
- A network architecture, built from the DARTS Liu et al. [2018b] search space, is a multi-path NAS. However, excessive
80
- path-level selection is a problem for multi-path NASs where every operation of the super network of a multi-path NAS
81
- takes a separate path. This means that all connected weights across all paths require considerable memory.
82
- The single-path NAS Stamoulis et al. [2019] was introduced to solve the NAS problem by finding a subset of kernel
83
- weights in the single layer. This layer is called a super kernel. We called the model built from the super kernel, the
84
- Supernet. Based on the Supernet, which has many subnets, whole subnets are trained at the same time through weight
85
- sharing. The Supernet can be sampled and deployed without re-training.
86
- In our current work, we developed the EE-PI method for Evolutionary NASs. EE-PI was applied to the Multi-Objective
87
- Evolutionary Algorithm (MOEA) in the first generation to locate the newly created model with a certain number of
88
- parameters, discarding models with parameters more than the set criteria. This process iteratively continues until the
89
- model with fewer parameters than the initial set criteria is found. Thus, EE-PI complements MOEA. We created this
90
- add-on to prevent models with a high number of parameters and a high number of Floating point Operations Per Second
91
- (FLOPS). Also, using a small number of parameters helps reduce model search costs.
92
- The key contributions of this paper are:
93
- •We introduce Evolutionary Neural Architecture Search (EA-Net), which adopts a multi-objective evolutionary
94
- algorithm for neural architecture search with three objectives: minimisation of errors, minimal number of
95
- parameters, and lowest computing costs.
96
- •We proposed a simple method called Early Exit Population Initialisation (EE-PI) to avoid a model with a high
97
- number of parameters and high computational cost, by filtering the neural network architecture based on the
98
- number of parameters in the first-generation of the evolution. The architectures obtained by this method are
99
- called Early Exit Evolutionary Algorithm networks (EEEA-Net).
100
- Figure 1: NASNet Zoph and Le [2016] network architecture (left) and NSGA-Net Lu et al. [2019] network architecture (right).
119
- •We conduct extensive experiments to evaluate the effectiveness of an EEEA-Net by outperforming MobileNet-
120
- V3 for all of the image recognition, object detection, semantic segmentation, and keypoint detection. Also,
121
- the EEEA-Net was widely tested on standard CIFAR-10 Krizhevsky [2009], CIFAR-100 Krizhevsky [2009],
122
- ImageNet Russakovsky et al. [2015], PASCAL VOC Everingham et al. [2010], Cityscapes Cordts et al. [2016],
123
- and MS COCO Lin et al. [2014] datasets.
124
- 2 Related Work
125
- NAS was designed to search and design the model structure that suits the best to the applied dataset. Thus, the model
126
- obtained by NAS has a small number of parameters with high performance. NAS can find models suitable for both
127
- small and large datasets. The NAS can be of single-objective NAS and multi-objective NAS: a single-objective NAS
128
- considers models from a single objective such as error rate, number of parameters, or FLOPS. The multi-objective NAS
129
- considers models considering more than one objective, which we have adopted in this paper to optimise the model
130
- performance.
131
- 2.1 Network Architecture Search
132
- Setting optimised parameters in each layer, such as kernel size, kernel scroll position (stride), zero paddings, as well as
133
- the output size, is the main challenge in creating CNN architectures efficiently for a given dataset. The total parameters
134
- are directly proportional to the number of layers. Manually designing a model takes too long and requires considerable
135
- experimentation to achieve optimal performance, which is why an automated model discovery is essential.
136
- The ability of NASs to find automated models suitable for datasets has proved a popular area of experimentation. Deep
137
- learning also is gaining popularity and is now being widely used. The NAS model has also been designed and expanded
138
- to enable applications in new tasks such as a NAS for semantic image segmentation, NAS for object detection, and
139
- NAS for skin lesion classification. The approaches used in NAS are RL Zoph and Le [2016], EA Real et al. [2019], Liu
140
- et al. [2018a], Lu et al. [2019], and relaxation Liu et al. [2018b]. The three critical steps for NAS are:
141
- Step 1: Search the CNN model’s search space to find the suitable architecture. A CNN architecture search model
142
- contains many search spaces which are optimised in the image classification application. Google Brain has launched a
143
- search space for the NASNet model. It is a feed-forward network in which layers are broken into subgroups called cells.
144
- The normal cell can learn from images, and the normal cell maintains an image output equal to the input size. However,
145
- the kernel stride in each reduction cell is 2, halving the input image size. Normal and reduction cells are linked. Each
146
- cell is stacked during modelling, where N normal cells are connected. Reduction cells are added between the N normal
147
- cells, as shown in Fig. 1 (left), to halve the image size, helping the next normal cell to process faster.
148
149
- The output from the search space thereby includes normal cells and reduction cells used in the evaluation. NAS has
150
- directed acyclic graphs (DAGs) connection between inputs X1 and X2 of cells, as shown in Fig. 1 (right). In each
- normal cell, there are two input activations and one output activation. In the first normal cell, inputs X1 and X2 are
- copied from the input (image). The next normal cell uses, as input, both the X1 from the last normal cell, and the X2
153
- from the second to last normal cell. All cells are connected the same way until the end of the model. Also, each cell has
154
- a greater number of cell output channels in each layer.
155
- Step 2: Evaluate the CNN model on a standard dataset for benchmarks. These benchmarks include number of errors,
156
- number of parameters, and search cost. The normal cells and reduction cells that are found in the search space are
157
- evaluated to measure the error rate, the number of parameters, and the computing costs, using CIFAR-10. Due to
158
- limited processor resources and GPU memory, parameters such as cell count ( N), number of epochs, initial channel,
159
- and channel increment, are different for each search space and evaluation.
160
- Step 3: Evaluate with a large-scale dataset. When the model from the search space has been identified, it is evaluated
161
- with a larger dataset. Model evaluation with the CIFAR-10 dataset cannot be compared with other models because the
162
- CIFAR-10 dataset contains only ten classes. Given this constraint, CIFAR-100 datasets with 100 classes are required.
163
- The search space of NASNet used RL and was tested with the CIFAR-10 dataset, which takes up to 2,000 GPU days to
164
- model. The AmoebaNet Real et al. [2019], based on an evolutionary algorithm model, takes up to 3,150 GPU days for
165
- the same dataset. Also, the search space of NASNet was designed to use shorter search times. However, the sequential
166
- model-based optimisation (SMBO) method Liu et al. [2018c] takes 335 GPU days, the gradient descent method Liu
167
- et al. [2018b] takes just 4 GPU days, whereas weight-sharing across different structures Pham et al. [2018] takes only
168
- 0.5 GPU days.
169
- As indicated, the AmoebaNet takes 3,150 GPU days, whereas the NSGA-Net Lu et al. [2019], which uses a multi-
170
- objective evolutionary algorithm to find models, takes 4 GPU days. However, although the error rate of NSGA-Net is
171
- higher than that of AmoebaNet, based on a standard CIFAR-10 evaluation, the main focus of this area of research has
172
- been the reduction of search costs.
173
- 2.2 Multi-objective Network Architecture Search
174
- NAS aims to minimise errors, hyper-parameters, FLOPS, and delays, making it challenging to identify a network
175
- architecture suitable for each objective simultaneously. Thus, the best network architecture should reduce or minimise
176
- all of these dimensions. For the evolution-based NASs, NSGA-Nets Lu et al. [2019] considers FLOPS and error count,
177
- CARS Yang et al. [2020] and LEMONADE Elsken et al. [2018] consider device-agnostic and device-aware objectives.
178
- In our work, however, we sought the achievement of the three goals; minimising errors, and reducing parameters and
179
- FLOPS, simultaneously.
180
- The NASs mostly focus on creating a network architecture for image recognition, then transferring that architecture to
181
- other tasks. However, for object detection and semantic segmentation, the same network architecture can be used as a
182
- backbone.
183
- Many network architectures, such as the EfficientNet Tan and Le [2019], FBNetV2 Wan et al. [2020], DARTS Liu et al.
184
- [2018b], P-DARTS Chen et al. [2019], CDARTS Yu and Peng [2020], CARS Yang et al. [2020], LEMONADE Elsken
185
- et al. [2018], NSGA-Net Lu et al. [2019] and NSGA-NetV2 Lu et al. [2020a] were tested only on image recognition
186
- datasets. It is challenging to design and evaluate a network architecture for general purposes.
187
- Table 1 shows the research objectives of the various NASs, illustrating that the image identification architectures were,
188
- in some cases, transferred to object detection, with one, MobileNetV3 Howard et al. [2019] also being applied to
189
- transfer specifically researched image identification architecture to both object detection and semantic segmentation.
190
- Our objective was to extend this image identification architecture, using the ImageNet dataset, to object detection,
191
- semantic segmentation, as well the further purpose of keypoint detection.
192
- 2.3 Inverted Residuals Network (IRN)
193
- The Inverted Residuals Network (IRN) Tan et al. [2019] concept is needed to reduce the Residuals Network (RN)
194
- parameters. In contrast, the RN concept integrates data from the previous layer into the last layer. Fig. 2 (left) shows
195
- that the RN structure has three layers: wide, narrow, and wide approach layers. The wide layers have N×16 output
- channels whereas the narrow layers have N×16 channels each. The wide approach layer has N×32 output channels
- (N is the number of input channels in each case). However, all the convolution layers used the standard convolution. Batch
198
- normalisation (BN) and activation functions (ReLU) were also added into each convolution layer.
199
200
- Methods | Search Method | Multiple Objective | Dataset Searched | Architecture transfer†
- MobileNetV3 Howard et al. [2019] | RL + expert | - | ImageNet | IR, OD, SS
- EfficientNet Tan and Le [2019] | RL + scaling | - | ImageNet | IR
- FBNetV2 Wan et al. [2020] | gradient | - | ImageNet | IR
- DARTS Liu et al. [2018b] | gradient | - | CIFAR-10 | IR
- P-DARTS Chen et al. [2019] | gradient | - | CIFAR-10, CIFAR-100 | IR
- PC-DARTS Xu et al. [2020] | gradient | - | CIFAR-10, ImageNet | IR, OD
- CDARTS Yu and Peng [2020] | gradient | - | CIFAR-10, ImageNet | IR
- CARS Yang et al. [2020] | EA | Yes | CIFAR-10 | IR
- LEMONADE Elsken et al. [2018] | EA | Yes | CIFAR-10, CIFAR-100, ImageNet64 | IR
- NSGA-Net Lu et al. [2019] | EA | Yes | CIFAR-10 | IR
- NSGA-NetV2 Lu et al. [2020a] | EA | Yes | ImageNet | IR
- EEEA-Net (this paper) | EA + EE-PI | Yes | CIFAR-10, ImageNet | IR, OD, SS, KD
- † IR = Image Recognition, OD = Object Detection, SS = Semantic Segmentation, KD = Keypoint Detection.
- Table 1: Comparison of different NAS search methods with multiple objectives.
215
- Figure 2: The difference between the residual network (left) and the inverted residual network (right).
244
- The RN structure is modified and reversed to obtain the IRN. The layers in IRN are defined as narrow layer, wide layer
245
- and narrow approach layer. In IRN, the number of output channels obtained is equal to the number of input channels,
246
- N, as shown in Fig. 2 (right). When the data is fed into the 1×1 convolution layer, the number of channels will be
- expanded to N×16. The wide layer changes to a 3×3 depth-wise separable convolution instead of a 3×3 standard
- convolution, reducing the FLOPS and number of parameters. There are N×16 channels in a wide layer, which is equal
- to the previous layer. Also, a 1×1 standard convolution is used in the narrow approach to reduce the channels' size to be
- equal to the input channel N. Then, the input data (Xi) is combined with the IRN's output to get data (Xi+1).
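- As a minimal sketch (not the authors' implementation), the inverted residual block of Fig. 2 (right) can be written in PyTorch as follows; the channel count N and the expansion factor of 16 are taken from the figure, all other names and defaults are illustrative:
```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Narrow -> wide -> narrow block with a residual (skip) connection."""
    def __init__(self, channels: int, expansion: int = 16):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            # 1x1 conv expands N channels to N * expansion
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            # 3x3 depth-wise separable convolution in the wide layer
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            # 1x1 conv projects back to N channels (narrow approach)
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # residual connection: X_{i+1} = X_i + block(X_i)
        return x + self.block(x)

# Example: a 32-channel feature map keeps its shape.
# y = InvertedResidual(32)(torch.randn(1, 32, 56, 56))  # -> (1, 32, 56, 56)
```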
251
- Convolutional layers can be defined in various formats to find a model structure with EA. Convolutional layers are also
252
- called cells. The types of convolutional layers used in our experiment are presented in Table 2.
253
- 3 Method
254
- The most commonly used methods to develop NAS are RL and gradient descent algorithms. However, these algorithms
255
- possess limitations in solving multi-objective problems. EA automates the model search process, is easier to implement,
256
- and enables discovery of solutions while considering multiple objectives.
257
- A general description of an EA, including encoding, a presentation of the multi-objective genetic algorithm, and genetic
258
- operations used with NAS, is provided in Section 3.1. The Early Exit Population Initialisation concept and method, its
259
- simple, yet effective application in EA, mitigating the complexity and parameters used in the previous models, while
260
- maintaining the same accuracy, are described in Section 3.2.
261
- 3.1 Evolutionary Neural Architecture Search
262
- The Genetic Algorithm (GA) is an algorithm based on Darwinian concepts of evolution. The GA is part of a random-
263
- based EA Xie and Yuille [2017], Baldominos et al. [2017], Real et al. [2017]. GA’s search-based solutions stem from
264
- the genetic selection of robust members that can survive. The population is integral to GA because GA solutions are
265
266
- Kernel | Type of layer
- 3×3 | max and average pooling
- 3×3 and 5×5 | depth-wise separable convolution
- 3×3 and 5×5 | dilated convolution
- 3×3 and 5×5 | inverted residuals convolution
- - | skip connection
- Table 2: Search space of EA-Net and EEEA-Net.
273
- Figure 3: The chromosome structure of the Evolutionary Algorithm. (L = type of convolution layer, drawn from [0, 1, 2, 3, 4, 5, 6, 7, 8]; A, B, C, D = indices of cells, with A = [0], B = [0, 1], C = [0, 1, 2], D = [0, 1, 2, 3]; example chromosome = LA1LA2, LB1LB2, LC1LC2, LD1LD2 = 3080-0121-1202-6373.)
308
- like organisms that evolve with their environment. The most suitable solutions need to rely on genetic diversity. Thus, a
309
- greater number of genetically diverse populations enables more effective GA solutions.
310
- The initial phase of GA creates the first population for candidate solutions. The population is determined by population
311
- size, which describes total solutions. Each solution is called an individual, where an individual consists of a chromosome.
312
- The chromosome is a mix of genes. In the initial population, it is possible to provide unique information about each
313
- gene to all other genes, at random. A fitness function computes the fitness value for each individual. CNN’s model
314
- structure is searched with the NAS search space, defining error rate as a fitness function where fitness value represents
315
- dataset error value. As shown in Equation 1, the fitness value is calculated where n is the number of individuals.
316
- fitness(i) = f(x_i),   i = 1, 2, 3, ..., n        (1)
317
- Organisms consist of different phenotypes and genotypes. Appearances such as external features (such as eye colour)
318
- and internal features (such as blood types) are called phenotypes. Genes of different organisms, called genotypes, can
319
- be transferred from model to model by gene transfer.
320
- The CNN model’s architecture is represented as a genotype in the NAS search space. A subset of the NAS search space
321
- includes normal cells and reduction cells. The cells are stacked in the complete architecture. Normal cells or reduction
322
- cells consist of connected layers such as convolution layers, average pooling, max pooling, and skip connection. A
323
- complete model must connect cells to create a genotype for training or testing.
324
- 3.1.1 Encoding
325
- The genotype of the NAS model consists of normal cells and reduction cells, called chromosomes. There are various
326
- genes linked to chromosomes. The number of genes is defined as LA1LA2, LB1LB2, LC1LC2, and LD1LD2 as
- in Equation 2. The gene consists of operations (L) and indices of operations (A, B, C, D). Operation (L) can be
328
- considered a type of CNN layer, such as max pooling, average pooling, depth-wise separable convolution, dilated
329
- convolution, inverted residuals block, and skip connection.
330
- chromosome(x) = LA1LA2, LB1LB2, LC1LC2, LD1LD2        (2)
331
- For example, consider nine different operations (L) in the experiment, numbered [0, 1, 2, 3, 4, 5, 6, 7, 8]. Moreover, the operation
- index (A, B, C, D) refers to the operation location to be connected with other operations (L). From Fig. 3 (left), the
- index is defined as follows: A = [0], B = [0, 1], C = [0, 1, 2], and D = [0, 1, 2, 3]. The connection between operations (L)
- and the operation index (A, B, C, D) determines the location of the connection between operations. For example, LA's
- gene code LA1LA2 is 30, 80, meaning output data processed by operations 3 and 8 will be linked to index 0.
- Similarly, LB1LB2 equals 01, 21, meaning data processed by operations 0 and 1 are connected at index 1. However, the
- genes LC1LC2 and LD1LD2 are linked sequentially to the output of LA1LA2 and LB1LB2, to
- help reduce the number of model parameters. If LC1LC2 and LD1LD2 were connected to the same input as LA1LA2
- and LB1LB2 (a parallel network), the processing time and the parameters would increase.
341
- PreviousIndex = index − 2        (3)
342
- The position of the previous index can be computed from Equation 3, where the index is greater than 1. If the index
- is an even number, it is linked to the even previous index; otherwise, it is linked to the odd previous index. Thus, the
- LC1LC2 gene is 12-02, which has an index of 2, and the LC1LC2 gene is linked from index 0. While the LD1LD2
- gene is 63-73, it has an index of 3. The LD1LD2 gene is linked from index 1. However, if there are different indices in
- a gene, for example, a gene 63-72, operator 6 is connected from index 1 and operator 7 is from index 0.
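- The encoding can be illustrated with a small Python sketch that decodes the example chromosome 3080-0121-1202-6373 into (operation, index) pairs; the mapping of the digits 0-8 to the layer types of Table 2 is an assumption made here for illustration only:
```python
# Hypothetical digit-to-operation mapping (Table 2 lists the layer types,
# but not the exact numbering used by the authors).
OPS = [
    "max_pool_3x3", "avg_pool_3x3",
    "sep_conv_3x3", "sep_conv_5x5",
    "dil_conv_3x3", "dil_conv_5x5",
    "inv_res_3x3", "inv_res_5x5",
    "skip_connect",
]

def decode(chromosome: str):
    """Split 'LA1LA2-LB1LB2-LC1LC2-LD1LD2' into a list of (op, index) pairs."""
    genes = []
    for block in chromosome.split("-"):          # e.g. "3080"
        for gene in (block[:2], block[2:]):      # e.g. "30", "80"
            op, idx = int(gene[0]), int(gene[1])
            genes.append((OPS[op], idx))
    return genes

print(decode("3080-0121-1202-6373"))
# [('sep_conv_5x5', 0), ('skip_connect', 0), ('max_pool_3x3', 1), ...]
```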
347
- 3.1.2 Multi-objective Genetic Algorithm
348
- Initially, GA was used for single-objective optimisation problems (SOOP) and, later, GA was developed to solve
349
- the multi-objective optimisation problem (MOOP) Deb et al. [2002], which has more than one objective function to
350
- minimise fitness values. The GA that can solve the MOOP problems Carrau et al. [2017], Hasan et al. [2019] is called a
351
- multi-objective genetic algorithm (MOGA).
352
- min {f1(x), f2(x), ..., fk(x)}   s.t. x ∈ X        (4)
354
- The optimisation of the CNN model is generally a problem with more than one objective. As illustrated in Equation 4,
- where f denotes the objective (fitness) functions, the integer k ≥ 2 is the number of objectives, x is an individual, and X is the set of individuals.
- All these objectives must be as small as possible.
357
- Indicators used to measure CNNs model performance include model accuracy, model size, and processing speed. There
358
- are three objectives to consider during a model search: lowest validation error, minimum parameters, and computational
359
- cost.
360
- min {Error(x), FLOPS(x), Params(x)}   s.t. x ∈ X
- w_error + w_flops + w_params = 1,   w_error, w_flops, w_params ≥ 0        (5)
364
- The evolutionary algorithm finds the most effective model for each objective by finding the lowest objective values of
- the entire population. We defined the three objective values as being equally important. Thus, it is necessary to set the
- weight of each of the three objective values to 1/3 to find the best model for each value. As illustrated in Equation 5,
- where x is an individual, X is the set of individuals, and w_error, w_flops, w_params are the weights of the respective objectives.
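- A toy illustration of the equally weighted objectives of Equation 5 (the normalisation bounds below are illustrative assumptions, not values from the paper):
```python
def weighted_objectives(error, flops_m, params_m,
                        weights=(1/3, 1/3, 1/3),
                        bounds=((0.0, 1.0), (0.0, 600.0), (0.0, 10.0))):
    """Combine (error, FLOPS, params), all minimised, with equal weights."""
    values = (error, flops_m, params_m)
    normed = [(v - lo) / (hi - lo) for v, (lo, hi) in zip(values, bounds)]
    return sum(w * v for w, v in zip(weights, normed))

# Lower is better: a 2.5% error / 137 MFLOPS / 3.6 M-parameter candidate.
score = weighted_objectives(0.025, 137, 3.6)
```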
368
- For the MOOP problem, it is almost impossible to find one solution that provides the optimal value for each objective
369
- function. For each solution given by the MOOP, the best solution group is called nondominated or Pareto optimal
370
- because these solutions are compared using the Pareto Domination principle. Many solutions can be obtained from the
371
- search using MOOP. These solutions will be reviewed again to find the best solution within the searched solution group.
372
- The best solution group should not be dominated by any other solution. For example, a solution v that dominates
- a solution w can be written as v ≺ w: if v is not worse than w in any objective and is better in at least one objective,
- then v is better than w.
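- The Pareto domination test can be sketched as follows (all objectives minimised; a generic illustration, not the authors' code):
```python
def dominates(v, w):
    """v and w are tuples of objective values, e.g. (error, flops, params)."""
    no_worse = all(a <= b for a, b in zip(v, w))
    strictly_better = any(a < b for a, b in zip(v, w))
    return no_worse and strictly_better

# (2.5% error, 137 MFLOPS, 3.6 M params) vs (3.3%, 140, 2.9): neither dominates.
print(dominates((0.025, 137, 3.6), (0.033, 140, 2.9)))  # False
```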
375
- 3.1.3 Genetic Operations
376
- The processes used to create offspring in the new generation are called genetic operations. A new population must
377
- replace an ancestor group that cannot survive. The population can be created in two ways, by crossover or mutation.
378
379
392
- Figure 4: Crossover operation (top): the parent has two different network architectures; the chromosome of each parent
393
- architecture can be visualised as a digits string. Child architecture was mixed with a chromosome index from their
394
- parent’s chromosome. Mutation operation (bottom): The location of a normal chromosome was randomly selected.
395
- Then this pair of genes will be replaced by a new random pair.
396
- Crossover is the creation of a new population by switching genes of different chromosomes from two populations. The
397
- genotype of the parent chromosomes will be recombined to create a novel chromosome, which can be done in various
398
- ways. For example, a point crossover that performs random cutting points or chromosomes to produce offspring is a
399
- crossover between two chromosomes with a random probability of 0.5. Crossover creates offspring using random genes
400
- from parents.
401
- Fig. 4 (top) demonstrates the uniform crossover operation used in this implementation, requiring two-parent architectures.
402
- We visualised the architectures as follows: 40-30-61-31-00-60-42-13, as parent 1 and 40-30-21-61-22-72-53-11, as
403
- parent 2. Then, in the crossover operation, the random probability of 0.5 is defined. The fifty-fifty chance was used
404
- to cross the gene between the two parent architectures for child modelling (40-30-61-61-02-72-52-13). The common
405
- parent gene is coloured black, but if the gene derived from the first parent is red, then the gene derived from the second
406
- parent is represented by blue.
407
- A mutation is an operation to reduce population uniformity and contribute to genetic diversity. The mutation changes
408
- data in the gene by randomly locating the gene and replacing the original gene with random new genes. The mutation
409
- causes offspring chromosomes to be different from parents. The individual being mutated is called a mutant.
410
- Fig. 4 (bottom) shows the mutation operation used during implementation; the location of a chromosome of architecture
411
- was determined by randomly selecting only one pair of gene locations (72, orange). Then it was replaced with a random
412
- pair of gene value (33, magenta) of the newly mutated gene.
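- A compact Python sketch of the uniform crossover and single-gene mutation described above (gene values are drawn uniformly at random here, which is an illustrative assumption):
```python
import random

OPS = range(9)          # operation codes 0-8 (Table 2)
INDICES = range(4)      # connection indices 0-3

def crossover(parent1: str, parent2: str, p: float = 0.5) -> str:
    """Uniform crossover: each gene pair is copied from parent1 with probability p."""
    genes1, genes2 = parent1.split("-"), parent2.split("-")
    child = [g1 if random.random() < p else g2 for g1, g2 in zip(genes1, genes2)]
    return "-".join(child)

def mutate(chromosome: str) -> str:
    """Replace one randomly chosen gene pair with a new random (op, index) pair."""
    genes = chromosome.split("-")
    pos = random.randrange(len(genes))
    genes[pos] = f"{random.choice(OPS)}{random.choice(INDICES)}"
    return "-".join(genes)

child = crossover("40-30-61-31-00-60-42-13", "40-30-21-61-22-72-53-11")
mutant = mutate(child)
```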
413
- Figure 5: An Early Exit Evolutionary Algorithm. (The flowchart generates and encodes initial populations under a defined maximum number of parameters β, then evaluates, selects, crosses over, and mutates the non-dominated population while minimising error, FLOPS, and parameters.)
434
- Algorithm 1 Multi-objective evolutionary algorithm with an Early Exit Population Initialisation.
- Input: the number of generations G, population size n, validation dataset D, objective weights w.
- Output: a set of K individuals on the Pareto front.
- Initialisation: an Early Exit Population Initialisation of P1 and Q1.
- for i = 1 to G do
-     Ri = Pi ∪ Qi
-     for all p ∈ Ri do
-         Train model p on D
-         Evaluate model p on D
-     end for
-     M = tournament-selection(Pi+1)
-     F = non-dominated-sorting(Ri)
-     Pick n individuals to form Pi+1 by ranks and the crowding distance weighted by w, based on Equation 5
-     Qi+1 = crossover(M) ∪ mutation(M)
- end for
- Select K models at an equal distance near the Pareto front from PG
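- The selection step of Algorithm 1 can be sketched as follows; this simplified, self-contained illustration fills the next population front by front and omits tournament selection and crowding-distance tie-breaking:
```python
def non_dominated_sort(objectives):
    """objectives: list of (error, flops, params) tuples; returns fronts as index lists."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def select_next_population(objectives, n):
    """Pick n individuals by rank (earlier fronts first)."""
    selected = []
    for front in non_dominated_sort(objectives):
        for i in front:
            if len(selected) < n:
                selected.append(i)
    return selected
```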
450
- 3.2 Early Exit Population Initialisation (EE-PI)
451
- EA can find cell patterns by selecting the best model with the lowest error rate. However, the discovery process takes
452
- longer to search and select cells in each generation. Each population has to be trained and evaluated, which increases
453
- the time needed to find cell patterns. The single-objective EA uses the error value to find network architecture. However,
454
- the network architecture with the lowest error may have too many parameters. In our experiment, the maximum number
455
- of generations was set to 30 with 40 populations per generation due to the limited resources of a single GPU.
456
- The population is the only factor affecting processing time. The evaluation must examine the entire population to select
457
- the population with the lowest error rate, thus obtaining an effective model structure. However, a longer search time is
458
- required to evaluate every population using a single processor unit. Therefore, the EE-PI method was introduced into
459
- the evolutionary algorithm to reduce the search time and control the number of parameters, as illustrated in Fig. 5 and
460
- detailed in Algorithm 1.
461
- The EE-PI method filters the CNN models based on the number of parameters in the network architecture, which is
462
- iteratively compared to a pre-set maximum value ( ). The EE-PI obtains the CNN network architecture which has less
463
- parameters than the maximum number of parameters attached to the EA, as illustrated in Fig. 5, which shows the Early
464
- Exit as the dashed-line block.
465
- EarlyExit(θ, β) = 1 if θ ≤ β, and 0 otherwise        (6)
- In Equation 6, θ is the number of parameters of the discovered model and β is the maximum number of parameters. If
- the number of parameters found in the model is less than or equal to the maximum number of parameters (θ ≤ β),
- then the model will be considered as a part of the first-generation population.
471
- For example, to select a network architecture with a maximum of 3 million parameters (β = 3), EA selects the model
- by considering the number of parameters lower than the maximum number of parameters. Suppose a network architecture
- is not considered because it has more parameters than the maximum (θ > β). In this case, it chooses a new structure with less
474
- than 3 million parameters. Therefore, in conjunction with the EA in the selection process, Early Exit facilitates the
475
- filtering out of the model with the number of parameters greater than the maximum number of parameters. The best
476
- network architecture is also discovered using the EA with Early Exit by considering the error rate and the number of
477
- parameters.
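- A minimal sketch of Early Exit Population Initialisation; sample_random_architecture and count_parameters_m are hypothetical helpers standing in for the search-space sampling and parameter counting used in the paper:
```python
def early_exit(theta_m: float, beta_m: float) -> bool:
    """EarlyExit(theta, beta) = 1 if theta <= beta else 0 (Equation 6), in millions."""
    return theta_m <= beta_m

def initial_population(pop_size, beta_m, sample_random_architecture, count_parameters_m):
    """Keep sampling random architectures until pop_size candidates fit the budget."""
    population = []
    while len(population) < pop_size:
        arch = sample_random_architecture()
        if early_exit(count_parameters_m(arch), beta_m):
            population.append(arch)   # accepted: within the parameter budget
        # otherwise discard and sample a new candidate
    return population
```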
478
- 4 Experiments and Results
479
- Experiments were carried out in three parts: First, finding and evaluating the network architecture with EEEA on
480
- CIFAR-10 and CIFAR-100. The second part was the finding and evaluation of the EEEA-Net using the ImageNet
481
- datasets. In the third part, the EEEA-Net obtained from the second part is applied for other tasks such as object detection,
482
- semantic segmentation, and keypoint detection. The PyTorch deep learning library was used in the experiments. The
483
484
- experiment was carried out on Intel(R) Xeon(R) W-3235 CPU @ 3.30GHz 12 Core CPU, 192 GB RAM and NVIDIA
485
- RTX 2080 Ti GPU, running on the Ubuntu 18.04.3 operating systems.
486
- 4.1 CIFAR-10 and CIFAR-100 datasets
487
- This subsection searched for a model with the CIFAR-10 dataset; it was evaluated on CIFAR-10 and CIFAR-100
488
- datasets. Both CIFAR-10 and CIFAR-100 datasets consist of 60,000 images, with 50,000 images and 10,000 images
- in the training set and test set, respectively. CIFAR-10 and CIFAR-100 have 10 and 100 classes, respectively, with
- 6,000 and 600 images per class.
491
- 4.1.1 Architecture Search on CIFAR-10
492
- Thirty generations with 40 populations in each generation were defined to locate the network architecture with EA. The
493
- first-generation populations were randomly generated, with subsequent populations in generations 2-30 being evolved
494
- with EA. Each population was defined with a depth of two normal cells instead of the usual six normal cells. Thus, it
495
- reduced the search time of the network architecture. The search and evolution process happens more rapidly when
496
- Early Exit is used in the initial populations' process. Early Exit selects the network architectures having fewer than the
- pre-specified maximum number of parameters (β). Thus, population evolution will choose only network architectures that are
- efficient and have fewer parameters.
499
- The hyper-parameters for the search process were defined as: the total number of cells (normal cells and reduce cells)
500
- equal to eight layers with 32 initial channels by training the network from scratch for one epoch on the CIFAR-10
501
- dataset. The hyper-parameters used included a batch size of 128, with SGD optimiser with weight decay equal to
502
- 0.0003 and momentum equal to 0.9. The initial learning rate was 0.05. Using the cosine rule scheduler, the Cutout
503
- regularisation had a length set to 16, a drop-path of the probability of 0.2, and the maximum number of parameters
504
- equal to 3, 4, and 5 million.
505
- The evolutionary algorithm (EA-Net, β = 0) took 0.57 GPU days to find the network architecture with an NVIDIA RTX
- 2080 Ti. However, the early exit evolutionary algorithm (EEEA-Net-A, β = 3) took 0.38 GPU days, EEEA-Net-B
- (β = 4) took up to 0.36 GPU days, and EEEA-Net-C (β = 5) took up to 0.52 GPU days. These architectures are used
508
- for performance evaluation in the next section.
509
- 4.1.2 Architecture Evaluation on the CIFAR-10 dataset
510
- The network architecture had to be changed to find the normal and reduced cells with the Early Exit evolutionary
511
- algorithms. The CIFAR-10 dataset was used for the evaluation. The hyper-parameters were defined with the number of
512
- all cells (normal and reduce cells) set to 20 layers with 32 initial channels, the network was trained from scratch with
513
- 600 epochs with a batch size of 96, SGD optimiser with weight decay was 0.0003 and momentum 0.9, and the initial
514
- learning rate set to 0.025. Using the cosine rule scheduler, the Cutout regularisation had a length set to 16, a drop-path
515
- of the probability of 0.2, and auxiliary towers of weight equal to 0.4.
516
- Table 3 shows the comparisons and evaluations of EEEA-Net with other state-of-the-art models. EEEA-Net-C was
517
- evaluated with the test dataset, giving an error rate of 2.46% for CIFAR-10. It took 0.52 GPU days to find normal and
518
- reduce cells.
519
- By comparison, our EEEA-Net-C model achieved a lower error rate and search time than all of those other models.
520
- AmoebaNet-B was the lowest of the other state-of-the-art models: NASNet-A model, PNAS, both DARTS versions,
521
- and NSGA-Net. However, the AmoebaNet-B model required 3,150 GPU days to complete, according to Real et al.
522
- [2019]. This is clearly a hugely greater amount of search resources than required in our model (0.52 GPU days).
523
- 4.1.3 Performance and Computational Complexity Analysis
524
- The multi-objective search tested error rate, number of FLOPS, and parameters. Optimisation on the effectiveness
525
- of a multi-objective uses Hypervolume (HV) as a measure of performance that computes the dominated area, using
526
- a reference point (Nadir point) with the most significant objective value from the first-generation population. Then,
527
- the Pareto-frontier solution computes the area between the reference point and Pareto. The higher HV shows that a
528
- multi-objective solution performs better in all objectives.
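- The hypervolume indicator can be sketched for the two-objective case as follows (the paper uses three objectives; the 2-D version is shown only to illustrate the dominated-area idea):
```python
def hypervolume_2d(front, reference):
    """front: list of (f1, f2) non-dominated points (minimised);
    reference: (r1, r2) Nadir point bounding the dominated area."""
    pts = sorted(front)                      # ascending in f1, so f2 decreases
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        hv += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Example: three non-dominated points against reference (1.0, 1.0) -> 0.39
print(hypervolume_2d([(0.2, 0.8), (0.5, 0.4), (0.9, 0.1)], (1.0, 1.0)))
```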
529
- After the model search, the HV for all the solutions obtained from the search is calculated to compare the performance
530
- of two variants: a model that does not use Early Exit (EA-Net) and the models that use Early Exit (EEEA-Nets). In Fig. 6, the values
531
- shown in the vertical axis are normalised HV , and the horizontal axis is generations. When we closely look at the HV
532
- value, it was found that the search using the EA-Net model yielded HV values greater than the three EEEA-Net models.
533
534
- Architecture | CIFAR-10 Error (%) | CIFAR-100 Error (%) | Params (M) | Search cost (GPU days) | Search Method
- NASNet-A + CO Zoph and Le [2016] | 2.83 | 16.58 | 3.1 | 2,000 | RL
- ENAS + CO Pham et al. [2018] | 2.89 | 17.27 | 4.6 | 0.5 | RL
- PNAS Liu et al. [2018c] | 3.41 | 17.63 | 3.2 | 225 | SMBO
- DARTS-V1 + CO Liu et al. [2018b] | 2.94 | - | 2.9 | 1.5 | gradient
- DARTS-V2 + CO Liu et al. [2018b] | 2.83 | 17.54 | 3.4 | 4 | gradient
- P-DARTS + CO Chen et al. [2019] | 2.50 | 15.92 | 3.4 | 0.3 | gradient
- PC-DARTS + CO Xu et al. [2020] | 2.57 | 17.36 | 3.6 | 0.1 | gradient
- CDARTS + CO Yu and Peng [2020] | 2.48 | 15.69 | 3.8 | 0.3 | gradient
- AmoebaNet-A + CO Real et al. [2019] | 3.12 | 18.93 | 3.1 | 3,150 | evolution
- AmoebaNet-B + CO Real et al. [2019] | 2.55 | - | 2.8 | 3,150 | evolution
- LEMONADE Elsken et al. [2018] | 3.05 | - | 4.7 | 80 | evolution
- NSGA-Net + CO Lu et al. [2019] | 2.75 | 20.74 | 3.3 | 4 | evolution
- CARS-I + CO Yang et al. [2020] | 2.62 | 16.00 | 3.6 | 0.4 | evolution
- EA-Net (β = 0) + CO | 3.30 | 17.58 | 2.9 | 0.57 | evolution
- EEEA-Net-A (β = 3) + CO | 3.69 | 20.16 | 1.8 | 0.34 | evolution
- EEEA-Net-B (β = 4) + CO | 2.88 | 16.90 | 1.8 | 0.36 | evolution
- EEEA-Net-C (β = 5) + CO | 2.46 | 15.02 | 3.6 | 0.52 | evolution
- Table 3: Comparing EEEA-Net with other architectures from RL, SMBO, gradient, and evolution search methods on CIFAR-10 and CIFAR-100 datasets.
559
- Figure 6: Performance Metric of EA-Net and EEEA-Nets. (Normalised hypervolume versus generation for EA-Net, EEEA-Net-A, EEEA-Net-B, and EEEA-Net-C.)
566
- However, considering only the models with an Early Exit, it was found that searches using β equal to 5 performed
- better than β equal to 3 and 4, since β is a parameter that determines the model size by the number of parameters.
- Consequently, creating larger models by increasing the size of β gave superior performance.
569
- In addition, when considering a model without an Early Exit (EA-Net) and a model that used an Early Exit (β = 5,
570
- EEEA-Net-C), it was found that the search efficiency of the EEEA-Net-C model was nearly similar to that of EA-Net
571
- because the EA-Net search does not control the model size while searching for the model. Therefore, the model obtained
572
- by EA-Net’s search may is likely to obtain a model of a large parameter size. On the other hand, the model with an
573
- Early Exit better controls the model size, and the resulting model provides performance similar to that achievable in an
574
- uncontrolled search.
575
- Figure 7: Progress of trade-offs after each generation of EA-Net and EEEA-Nets. (Panels (a)-(h): CIFAR-10 accuracy versus generations and versus FLOPS for the search spaces of EA-Net, EEEA-Net-A, EEEA-Net-B, and EEEA-Net-C, with the Pareto front highlighted.)
613
614
- We present progress trade-offs after each generation of the EA-Net and EEEA-Nets search through Fig. 7. The whole
615
- population is demonstrated by two-dimensional coordinates such as CIFAR-10 accuracy vs. Generations and CIFAR-10
616
- accuracy vs. FLOPS.
617
- 4.1.4 Data augmentation
618
- The results from the evaluation of EEEA-Net are represented in precision floating-point (FP32). Our experimental goals
619
- were to create the most effective EEEA-Net without modifying the model structure. In the evaluation, we added the
620
- AutoAugment (AA) Cubuk et al. [2019] technique to the data augmentation process. AA created a more diverse set of
621
- data, making the model more effective. When Cutout DeVries and Taylor [2017] and AA Cubuk et al. [2019] were
622
- used, we observed that the error rate of EEEA-Net-C was reduced to 2.42%. Without AA, however, an error rate of
623
- 2.46% occurred, as shown in Table 4.
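- A sketch of the CIFAR-10 training-time augmentation with AutoAugment and Cutout; torchvision's AutoAugment transform is used here for convenience, and the exact pipeline may differ from the one used in the paper:
```python
import torch
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

class Cutout:
    """Zero out a random square patch (length x length) of a CHW tensor."""
    def __init__(self, length: int = 16):
        self.length = length

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        _, h, w = img.shape
        y, x = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        y1, y2 = max(0, y - self.length // 2), min(h, y + self.length // 2)
        x1, x2 = max(0, x - self.length // 2), min(w, x + self.length // 2)
        img[:, y1:y2, x1:x2] = 0.0
        return img

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    AutoAugment(AutoAugmentPolicy.CIFAR10),   # AA policy learned for CIFAR-10
    transforms.ToTensor(),
    Cutout(length=16),                        # Cutout with length 16, as in the text
])
```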
624
- 4.1.5 Architecture Evaluation on CIFAR-100 dataset
625
- The AA technique was used to optimise the search and evaluation process of the EEEA-Net. The EEEA-Net-C
626
- was trained using the CIFAR-10 dataset, which was not sufficient for our purposes. Consequently, the EEEA-Net
627
- architectures obtained from CIFAR-10 dataset were used with CIFAR-100.
628
- The hyper-parameters used in the training process were changed to evaluate the EEEA-Net with the CIFAR-100 dataset,
629
- where the number of all cells (normal and reduce cells) was set to 20 layers with 36 initial channels. This was the
630
- outcome of training the network from scratch in 600 epochs with a batch size of 128, setting the SGD optimiser with a
631
- weight decay of 0.0003 and momentum of 0.9, and the initial learning rate set to 0.025 and running with the cosine rule
632
- scheduler. The Cutout regularisation length was equal to 16, and the drop-path of probability was 0.2, with auxiliary
633
- towers of the weight of 0.4.
634
- When the EEEA-Net-C (same model structure) was evaluated with CIFAR-100 datasets, it showed an error rate of
635
- 15.02%, as shown in Table 3. Further, this evaluation, with 3.6 million parameters, resulted in the lowest error rate of all
636
- the state-of-the-art models.
637
- 4.2 ImageNet dataset
638
- In this subsection, we used the ImageNet dataset for the search and model evaluation. The ImageNet dataset is a
639
- large-scale standard dataset for benchmarking performance for image recognition for 1,000 classes with 1,281,167
640
- images for the training set, 50,000 images for the test set, divided into 1,000 classes.
641
- 4.2.1 Architecture Search on ImageNet
642
- Early Exit was used to discover a network architecture using the CIFAR-10 dataset. However, this network architecture
643
- was constructed from a multi-path NAS, which requires considerable memory. Given this, we used a single-path NAS
644
- to find the network architecture on ImageNet to reduce this search time, which also allows a multi-objective search
645
- with early exit population initialisation to be used on the OnceForAll Cai et al. [2020] super-network (called Supernet)
646
- to discover all network architectures that offer the best trade-off. Supernet can also search for the four dimensions
647
- of the network architecture, including kernel size, width (number of channels), depth (number of layers), and input
648
- resolution resize. We set all hyper-parameters for our architecture searches following the process in NSGA-NetV2 Lu
649
- et al. [2020a].
650
- The two objectives of accuracy and FLOPS were the criteria for searching for 300 high accuracy samples with low
651
- FLOPS. However, these sample architectures have a diverse number of parameters. The number of parameters affects
652
- Architecture | CIFAR-10 Error (%) | Training time (GPU Hours) | AutoAugment
- EEEA-Net-A (β = 3) + CO | 3.69 | 25.75 | -
- EEEA-Net-B (β = 4) + CO | 2.88 | 25.95 | -
- EEEA-Net-C (β = 5) + CO | 2.46 | 48.05 | -
- EEEA-Net-A (β = 3) + CO | 3.35 | 30.38 | Yes
- EEEA-Net-B (β = 4) + CO | 2.87 | 31.26 | Yes
- EEEA-Net-C (β = 5) + CO | 2.42 | 54.26 | Yes
- Table 4: Results of CIFAR-10 using Cutout (CO) and AutoAugment (AA).
662
663
- architecture size when running on devices that may have memory constraints. Thus, to prevent the architecture from
664
- having too many parameters, we appended the Early Exit to create the first population with limited parameters.
665
- In this experiment, we compiled the number of architecture parameters shown in Table 5 to calculate the average
666
- number of parameters equal to 5. Thus, the maximum number of parameters (β), where β equals 5, 6 or 7, was defined
- as follows: EEEA-Net-A (β = 5), EEEA-Net-B (β = 6), EEEA-Net-C (β = 7). For a fair comparison, we set β equal
- to 0, and called that EA-Net-N (β = 0). We categorised our networks using the number of MobileNetV3 FLOPS to
- define the network size architectures of EEEA-Net-A1, EEEA-Net-B1, EEEA-Net-C1, and EEEA-Net-N1 as small-
- scale architectures (< 155 MFLOPS). The large-scale architectures (< 219 MFLOPS) are EEEA-Net-A2, EEEA-Net-B2,
671
- EEEA-Net-C2, and EEEA-Net-N2.
672
- 4.2.2 Architecture Evaluation on ImageNet dataset
673
- The discovery of architecture from Supernet is the separation of some layers from Supernet called subnets. Since
674
- Supernet and the subnets have different network architectures, the accuracy of the subnets, with pre-trained weight from
675
- Supernet, is very low when they were tested on the validation dataset. So, the subnets have calibrated the statistics of
676
- batch normalisation (BN) after searching on Supernet. The new BN statistics from the subnets were calculated using
677
- the validation dataset and updating the BN of all of the subnets. Thus, BN calibration can improve test accuracy value
678
- efficiency with the ImageNet dataset.
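- Batch-normalisation calibration for a sampled subnet can be sketched in PyTorch as follows (reset the running statistics, then re-estimate them on a calibration split without updating any weights; calibration_loader is a hypothetical data loader):
```python
import torch
import torch.nn as nn

def calibrate_bn(subnet: nn.Module, calibration_loader, device="cuda"):
    for m in subnet.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None          # use a cumulative moving average
    subnet.train()                     # BN updates running stats only in train mode
    with torch.no_grad():              # no gradients, no weight updates
        for images, _ in calibration_loader:
            subnet(images.to(device))
    subnet.eval()
    return subnet
```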
679
- Table 5 shows the comparison of EEEA-Net performance with other models, using three main comparison factors: error
680
- rate, number of parameters, and FLOPS. We classify models using architectural search methods such as auto, manual,
681
- or a combination. When comparing our small-scale architectures (EEEA-Net-A1, EEEA-Net-B1, EEEA-Net-C1, and
682
- EEEA-Net-N1) with GhostNet 1.0 Han et al. [2020], we found that all our architectures outperform GhostNet 1.0.
683
- Also, EEEA-Net-A1, EEEA-Net-B1, EEEA-Net-C1, and EEEA-Net-N1 provide lower error and FLOPS counts than
684
- MobileNetsV3 Large 0.75 Howard et al. [2019]. However, the MobileNetsV3 Large 0.75 has fewer parameters than our
685
- models.
686
- Similarly, when we compared our large-scale architectures with other architectures, we found that EEEA-Net-C2
687
- (β = 7) has a Top-1 error and FLOPS lower than all other architectures, as shown in Table 5. When we compare
688
- our architecture with MobileNetsV3 Large 1.0, EEEA-Net-C2 provides a 1% less error value than MobileNetsV3 [28],
689
- and the FLOPS count of EEEA-Net-C2 is reduced by 2 Million from MobileNetsV3. However, EEEA-Net-C2 had 0.6
690
- Million more parameters than MobileNetsV3.
691
- Model Top-1 Error (%) Top-5 Error (%) Params (M) FLOPS (M) Type
692
- GhostNet 1.0 Han et al. [2020] 26.1 8.6 5.2 141 manual
693
- MobileNetsV3 Large 0.75 Howard et al. [2019] 26.7 - 4.0 155 combined
694
- EA-Net-N1 (ours) 26.1 8.6 4.4 140 auto
695
- EEEA-Net-A1 (β = 5) (ours) 26.3 8.8 5.0 127 auto
- EEEA-Net-B1 (β = 6) (ours) 26.0 8.5 5.0 138 auto
- EEEA-Net-C1 (β = 7) (ours) 25.7 8.5 5.1 137 auto
698
- MobileNetsV1 Howard et al. [2017] 29.4 - 4.2 575 manual
699
- MobileNetsV2 Sandler et al. [2018] 28.0 - 3.4 300 manual
700
- GhostNet 1.3 Han et al. [2020] 24.3 7.3 7.3 226 manual
701
- MobileNetsV3 Large 1.0 Howard et al. [2019] 24.8 - 5.4 219 combined
702
- NASNet-A Zoph and Le [2016] 26.0 8.4 5.3 564 auto
703
- MnasNet-A1 Tan et al. [2019] 24.8 7.5 3.9 312 auto
704
- FBNet-C Wu et al. [2019] 25.1 - 5.5 375 auto
705
- MOGA-A Chu et al. [2020] 24.1 7.2 5.1 304 auto
706
- FairNAS-A Chu et al. [2019] 24.7 7.6 4.6 388 auto
707
- PNASNet-5 Liu et al. [2018c] 25.8 8.1 5.1 588 auto
708
- NSGANetV1-A2 Lu et al. [2020b] 25.5 8.0 4.1 466 auto
709
- OnceForAll Cai et al. [2020] 24.0 - 6.1 230 auto
710
- MSuNAS Cai et al. [2020] 24.1 - 6.1 225 auto
711
- EA-Net-N2 (ours) 24.4 7.6 5.9 226 auto
712
- EEEA-Net-A2 (β = 5) (ours) 24.1 7.4 5.6 198 auto
- EEEA-Net-B2 (β = 6) (ours) 24.0 7.5 5.7 219 auto
- EEEA-Net-C2 (β = 7) (ours) 23.8 7.3 6.0 217 auto
715
- Table 5: Comparing EEEA-Net with other architectures from manual, combined, and auto search method on ImageNet
716
- datasets.
718
- [Figure 8 plots: (a) ImageNet Top-1 accuracy (%) vs FLOPs (M); (b) ImageNet Top-1 accuracy (%) vs Params (M); curves for EA-Net, EEEA-Net (β=5, 6, 7), MobileNetV3, and GhostNet.]
735
- Figure 8: Comparison of Top-1 accuracy, FLOPS (left), and parameters (right) between EEEA-Nets and MobileNetV3
736
- [28] and GhostNet [40] on ImageNet dataset.
737
- We chose MobileNetV3 and GhostNet, including both small and large versions, to compare with our architecture, as
738
- shown in Fig. 8. Overall, we observed that EEEA-Net-C (β = 7) significantly outperforms MobileNetV3 and GhostNet
739
- in Top-1 accuracy and FLOPS. Furthermore, EEEA-Net-C (β = 7) uses fewer parameters than GhostNet.
740
- 4.3 Architecture Transfer
741
- After searching and evaluating the model using the ImageNet dataset for image recognition, the models trained with the
742
- ImageNet dataset can be further developed and applied to object detection, semantic segmentation and human keypoint
743
- detection applications.
744
- 4.3.1 Object detection
745
- EEEA-Net-C2 (β = 7) was used as the backbone for the object detection task to compare the effectiveness of our
746
- architecture in a real-world application. We used the same architecture, trained on the ImageNet dataset, with the
747
- object detection frameworks Single-Shot Detector (SSD) Liu et al. [2016] and You Only Look Once version four (YOLOv4)
748
- Bochkovskiy et al. [2020].
749
- PASCAL VOC is a standard dataset used to measure an architecture’s performance on object detection. It
750
- consists of 20 classes, with bottles and plants being small objects with the lowest Average Precision (AP) of all classes.
751
- We used the SSDLite framework Sandler et al. [2018] for fast and optimised processing on mobile devices. We also
752
- used the YOLOv4 framework Bochkovskiy et al. [2020] for high precision object detection.
753
- All models were trained on the PASCAL VOC 2007 and VOC 2012 Everingham et al. [2010] train sets for 200 epochs
754
- with a batch size of 32, using the SGD optimiser with weight decay 0.0005, momentum 0.9, and an initial learning
755
- rate of 0.01, scheduled with a cosine rule without restarts. All input images
756
- are resized to 320×320 pixels, and these models were evaluated on the PASCAL VOC test set.
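As an illustration of the training recipe above, a hedged PyTorch sketch of the optimiser and cosine schedule is given below; `model`, `train_loader` and `detection_loss` are placeholders for the SSDLite pipeline rather than actual API names.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
# Cosine rule without restarts, stepped once per epoch over the 200 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    for images, targets in train_loader:        # images resized to 320x320
        loss = detection_loss(model, images, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```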
757
- For YOLOv4, we adopted MobileNet-V2 Sandler et al. [2018], MobileNet-V3 Howard et al. [2019] and EEEA-Net-C2
758
- as backbones. All models were trained for 140 epochs with a batch size of 4. All inputs are randomly
759
- rescaled, with multi-scale images ranging from 320 to 640 pixels. Label smoothing is 0.1, and the SGD optimiser uses weight
760
- decay 0.0005 and momentum 0.9. The initial learning rate was set to 0.01 with a cosine-rule scheduler and a
761
- warm-up strategy performed twice.
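The warm-up and label-smoothing parts of this setup can be sketched as below; the exact warm-up length and schedule used here are not specified, so the values in this snippet are assumptions for illustration only.

```python
import math

def lr_at(step, total_steps, base_lr=0.01, warmup_steps=1000):
    # Linear warm-up followed by a cosine decay of the learning rate.
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing with epsilon = 0.1 for a one-hot target vector/tensor.
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / num_classes
```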
762
- Table 6 shows the performance of our architecture for object detection. EEEA-Net-C2 achieved a higher AP than NAS-
763
- Net, DARTS, ShuffleNet-V2, MobileNet-V2, MobileNet-V3, and MnasNet for the SSDLite framework. Nonetheless,
764
- EEEA-Net-C2 has 152 million more FLOPS than MobileNet-V3. For fairness, we used MobileNet-V2, MobileNet-V3
765
- and EEEA-Net-C2 for training and evaluated these models using the PASCAL VOC test dataset via the YOLOv4
766
- framework. The EEEA-Net-C2 significantly outperformed both MobileNet-V2 and MobileNet-V3.
768
- Model Framework Params (M) FLOPS (M) Small-Object AP (%): Bottle Plant VOC2007 mAP (%)
769
- NASNet Zoph and Le [2016] SSDLite 5.22 1238 41.5 46.1 71.6
770
- DARTS Liu et al. [2018b] SSDLite 4.73 1138 38.3 49.3 71.2
771
- ShuffleNet-V2 Ma et al. [2018] SSDLite 2.17 355 29.9 38.1 65.4
772
- MobileNet-V2 Sandler et al. [2018] SSDLite 3.30 680 37.9 43.9 69.4
773
- MobileNet-V3 Howard et al. [2019] SSDLite 3.82 485 38.1 45.6 69.2
774
- MnasNet Tan et al. [2019] SSDLite 4.18 708 37.7 44.4 69.6
775
- EEEA-Net-C2 (ours) SSDLite 5.57 637 40.9 48.9 71.7
776
- MobileNet-V2 Sandler et al. [2018] YOLOv4 46.34 8740 66.4 58.4 81.5
777
- MobileNet-V3 Howard et al. [2019] YOLOv4 47.30 8520 68.2 50.7 78.9
778
- EEEA-Net-C2 (ours) YOLOv4 31.15 5540 68.6 56.7 81.8
779
- Table 6: Result of Object detection with different backbones on PASCAL VOC 2007 test set.
780
- Model Params (M) FLOPS (G) mIoU (%)
781
- NASNet Zoph and Le [2016] 7.46 36.51 77.9
782
- DARTS Liu et al. [2018b] 6.64 34.77 77.5
783
- ShuffleNet-V2 Ma et al. [2018] 4.10 26.30 73.0
784
- MobileNet-V2 Sandler et al. [2018] 5.24 29.21 77.1
785
- MobileNet-V3 Howard et al. [2019] 5.60 27.09 75.9
786
- MnasNet Tan et al. [2019] 6.12 29.50 76.8
787
- EEEA-Net-C2 (ours) 7.34 28.65 76.8
788
- Table 7: Results of BiSeNet with different backbones on Cityscapes validation set. (single scale and no flipping).
789
- 4.3.2 Semantic Segmentation
790
- The Cityscapes dataset Cordts et al. [2016] was chosen for the semantic segmentation experiments. It is a large-scale
791
- dataset of street scenes in 50 cities. Cityscapes provides dense pixel annotations for 5,000 images. These images were
792
- divided into three groups of 2,975, 500, and 1,525 images for training, validation, and testing. We used BiSeNet Yu et al.
793
- [2018] with different backbones to evaluate our architecture’s performance for semantic segmentation on the Cityscapes
794
- dataset. NASNet, DARTS, ShuffleNet-V2, MobileNet-V2, MobileNet-V3, MnasNet, and EEEA-Net-C2 were trained for
795
- 80,000 iterations with a poly learning-rate scheduler, an initial learning rate of 0.01, and a batch size of 16. All training
796
- images were resized to 1024×1024 pixels, with image augmentation using colour jitter, random scaling, and random
797
- horizontal flipping.
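The "poly" learning-rate rule referred to above can be written as a small helper; the exponent 0.9 is the conventional choice and is an assumption here.

```python
def poly_lr(iteration, max_iter=80_000, base_lr=0.01, power=0.9):
    # Polynomial decay: lr = base_lr * (1 - iteration / max_iter) ** power
    return base_lr * (1.0 - iteration / max_iter) ** power

# e.g. poly_lr(0) == 0.01, decaying smoothly towards 0 at 80,000 iterations
```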
798
- Table 7 shows that ShuffleNet-V2 achieved a smaller number of parameters and lower FLOPS than other architectures.
799
- However, MobileNet-V2 achieved a greater Mean Intersection over Union (mIoU) than ShuffleNet-V2, MobileNet-V3,
800
- MnasNet, and EEEA-Net-C2. The mIoU of EEEA-Net-C2 is the same as MnasNet. It is better than ShuffleNet-V2 and
801
- MobileNet-V3.
802
- 4.3.3 Keypoint Detection
803
- Human keypoint detection, also known as human pose estimation, is the visual sensing of human gestures from
804
- keypoints such as the head, hips, or ankles. MS COCO Lin et al. [2014] is a comprehensive dataset to measure keypoint
805
- detection performance, consisting of data for 250,000 persons labelled with 17 keypoints. SimpleBaseline
806
- Xiao et al. [2018] is a framework for keypoint detection, enabling easier changes to backbones. Given this, it allowed
807
- us to adapt to other architectures more simply.
808
- All architectures were trained on the MS COCO train2017 set for 140 epochs with a batch size
809
- of 128 and the Adam optimiser; the initial learning rate is 0.001, reduced to 0.0001 at the 90th epoch and to
810
- 0.00001 at the 120th epoch. The training images are resized to 256×192 pixels using random rotation, scaling, and flipping.
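A hedged PyTorch sketch of this optimiser and step-wise learning-rate schedule is shown below; `model` and `train_one_epoch` are placeholders, not part of the SimpleBaseline code.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Drop the learning rate by 10x at epochs 90 and 120, as described above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[90, 120], gamma=0.1)
for epoch in range(140):
    train_one_epoch(model, optimizer)   # placeholder training step
    scheduler.step()
```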
811
- Table 8 shows the experimental results of SimpleBaseline with different backbones. Our EEEA-Net-C2 uses fewer
812
- parameters than the other backbones. EEEA-Net-C2 also outperforms the other small architectures in AP (excluding
813
- NASNet and DARTS).
815
- Model Params (M) FLOPS (M) AP (%)
816
- NASNet Zoph and Le [2016] 10.66 569.11 67.9
817
- DARTS Liu et al. [2018b] 9.20 531.77 66.9
818
- ShuffleNet-V2 Ma et al. [2018] 7.55 154.37 60.4
819
- MobileNet-V2 Sandler et al. [2018] 9.57 306.80 64.9
820
- MobileNet-V3 Howard et al. [2019] 9.01 223.16 65.3
821
- MnasNet Tan et al. [2019] 10.45 320.17 62.5
822
- EEEA-Net-C2 (ours) 7.47 297.49 66.7
823
- Table 8: Results of SimpleBaseline with different backbone settings on MS COCO2017 validation set. Flip is used
824
- during validation.
825
- 4.4 Limitations
826
- The development of a NAS search with only one GPU processor was a challenge for the reasons set out below. Setting
827
- the appropriate number of populations, the number of generations of each population, and the number of search epochs
828
- suitable for one GPU processor presents considerable difficulties. All of these parameters affect the model search time.
829
- Increasing the number of generations increases the computing cost, but it also allows
830
- greater recombination within the population, improving the chance of discovering better candidates.
831
- Moreover, an increased number of search epochs helps improve each population’s error fitness value.
832
- All these settings help to improve the NAS search, but increasing them also increases the search time. For
833
- example, increasing the number of search epochs from 1 epoch to 10 epochs results in a 10× increase in search time.
834
- 5 Conclusion
835
- We achieved our research goals by successfully developing a CNN architecture suitable for an on-device processor with
836
- limited computing resources and applying it in real-world applications.
837
- This outcome was achieved by significantly reducing the computational cost of a neural architecture search. We
838
- introduced the Early Exit Population Initialisation (EE-PI) for Evolutionary Algorithm method to create the EEEA-Nets
839
- model. Our method achieved a massive reduction in search time on the CIFAR-10 dataset: 0.34 to 0.52 GPU days. This
840
- must be seen as an outstanding outcome compared against other state-of-the-art models, such as the NSGA-Net model,
841
- which required 4 GPU days, the 2,000 GPU days of the NASNet model and the 3,150 GPU days of the AmoebaNet
842
- model.
843
- In the EEEA-Nets architecture, our emphasis was on reducing the number of parameters, the error rate and the
844
- computing cost. We were able to achieve this by introducing an Early Exit step into the Evolutionary Algorithm.
845
- Our EEEA-Nets architectures were searched on the image recognition task and then transferred to other tasks.
846
- Experimentally, EEEA-Net-C2 is significantly better than MobileNet-V3 on image recognition, object detection,
847
- semantic segmentation, and keypoint detection tasks. Addressing this latter task had not been achieved or even
848
- attempted in any other CNN model. Therefore, our architectures can be deployed on devices with limited memory
849
- and processing capacity by achieving these significant reductions, allowing real-time processing on smartphones or
850
- on-device systems.
851
- The task of optimising the search for multi-objective evolutionary algorithms shall be continued as our future work to
852
- find better-performing models. In addition, we will consider applying a multi-objective evolutionary algorithm with
853
- EE-PI to find mobile-suitable models in other applications such as marine detection or pest detection.
854
- Acknowledgements
855
- The authors would like to acknowledge the Thailand Research Fund’s financial support through the Royal Golden
856
- Jubilee PhD. Program (Grant No. PHD/0101/2559). The study was undertaken using the National Computational
857
- Infrastructure (NCI) in Australia under the National Computational Merit Allocation Scheme (NCMAS). Further, we
858
- would like to extend our appreciation to Mr Roy I. Morien of the Naresuan University Graduate School for his assistance
859
- in editing the English grammar, syntax, and expression in the paper.
861
- References
862
- Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy,
863
- Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition
864
- challenge. International Journal of Computer Vision , 115(3):211–252, 2015.
865
- Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural
866
- networks. Communications of The ACM , 60(6):84–90, 2017.
867
- Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent
868
- Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer
869
- Vision and Pattern Recognition (CVPR) , pages 1–9, 2015.
870
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016
871
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 770–778, 2016.
872
- Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In 2018 IEEE/CVF Conference on Computer Vision
873
- and Pattern Recognition , pages 7132–7141, 2018.
874
- Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. Squeezenet:
875
- Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. arXiv preprint arXiv:1602.07360 , 2016.
876
- Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto,
877
- and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv
878
- preprint arXiv:1704.04861 , 2017.
879
- Chakkrit Termritthikun, Yeshi Jamtsho, and Paisarn Muneesawang. On-device facial verification using nuf-net model
880
- of deep learning. Engineering Applications of Artificial Intelligence , 85:579–589, 2019.
881
- Chakkrit Termritthikun, Yeshi Jamtsho, and Paisarn Muneesawang. An improved residual network model for image
882
- recognition using a combination of snapshot ensembles and the cutout technique. Multimedia Tools and Applications ,
883
- 79(1):1475–1495, 2020.
884
- Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural
885
- network for mobile devices. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages
886
- 6848–6856, 2018.
887
- Barret Zoph and Quoc V . Le. Neural architecture search with reinforcement learning. In ICLR , 2016.
888
- Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V . Le. Regularized evolution for image classifier architecture
889
- search. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 33, pages 4780–4789, 2019.
890
- Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representa-
891
- tions for efficient architecture search. In International Conference on Learning Representations , 2018a.
892
- Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In International Conference
893
- on Learning Representations , 2018b.
894
- Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, and Diana
895
- Marculescu. Single-path nas: Designing hardware-efficient convnets in less than 4 hours. In Joint European
896
- Conference on Machine Learning and Knowledge Discovery in Databases , pages 481–497, 2019.
897
- Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
898
- Mark Everingham, Luc Gool, Christopher K. Williams, John Winn, and Andrew Zisserman. The pascal visual object
899
- classes (voc) challenge. International Journal of Computer Vision , 88(2):303–338, 2010.
900
- Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke,
901
- Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In 2016 IEEE
902
- Conference on Computer Vision and Pattern Recognition (CVPR) , pages 3213–3223, 2016.
903
- Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and
904
- C. Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision ,
905
- pages 740–755, 2014.
906
- Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, and Wolfgang Banzhaf.
907
- Nsga-net: neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and
908
- Evolutionary Computation Conference on , pages 419–427, 2019.
909
- Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan L. Yuille, Jonathan
910
- Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on
911
- Computer Vision (ECCV) , pages 19–35, 2018c.
913
- Hieu Pham, Melody Y . Guan, Barret Zoph, Quoc V . Le, and Jeff Dean. Efficient neural architecture search via parameter
914
- sharing. arXiv preprint arXiv:1802.03268 , 2018.
915
- Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, and Chang Xu. Cars:
916
- Continuous evolution for efficient neural architecture search. In 2020 IEEE/CVF Conference on Computer Vision
917
- and Pattern Recognition (CVPR) , pages 1829–1838, 2020.
918
- Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via
919
- lamarckian evolution. In International Conference on Learning Representations , 2018.
920
- Mingxing Tan and Quoc V . Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In
921
- International Conference on Machine Learning , pages 6105–6114, 2019.
922
- Alvin Wan, Xiaoliang Dai, Peizhao Zhang, Zijian He, Yuandong Tian, Saining Xie, Bichen Wu, Matthew Yu, Tao Xu,
923
- Kan Chen, Peter Vajda, and Joseph E. Gonzalez. Fbnetv2: Differentiable neural architecture search for spatial and
924
- channel dimensions. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages
925
- 12965–12974, 2020.
926
- Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap
927
- between search and evaluation. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) , pages
928
- 1294–1303, 2019.
929
- Hongyuan Yu and Houwen Peng. Cyclic differentiable architecture search. arXiv preprint arXiv:2006.10724 , 2020.
930
- Zhichao Lu, Kalyanmoy Deb, Erik D. Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. Nsganetv2:
931
- Evolutionary multi-objective surrogate-assisted neural architecture search. In European Conference on Computer
932
- Vision , pages 35–51, 2020a.
933
- Andrew Howard, Ruoming Pang, Hartwig Adam, Quoc Le, Mark Sandler, Bo Chen, Weijun Wang, Liang-Chieh Chen,
934
- Mingxing Tan, Grace Chu, Vijay Vasudevan, and Yukun Zhu. Searching for mobilenetv3. In 2019 IEEE/CVF
935
- International Conference on Computer Vision (ICCV) , pages 1314–1324, 2019.
936
- Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. Pc-darts: Partial channel
937
- connections for memory-efficient architecture search. In ICLR 2020 : Eighth International Conference on Learning
938
- Representations , 2020.
939
- Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V . Le. Mnasnet:
940
- Platform-aware neural architecture search for mobile. In 2019 IEEE/CVF Conference on Computer Vision and
941
- Pattern Recognition (CVPR) , pages 2820–2828, 2019.
942
- Lingxi Xie and Alan Yuille. Genetic cnn. In 2017 IEEE International Conference on Computer Vision (ICCV) , 2017.
943
- Alejandro Baldominos, Yago Saez, and Pedro Isasi. Evolutionary convolutional neural networks: An application to
944
- handwriting recognition. Neurocomputing , 283:38–52, 2017.
945
- Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V . Le, and Alexey
946
- Kurakin. Large-scale evolution of image classifiers. In ICML’17 Proceedings of the 34th International Conference
947
- on Machine Learning - Volume 70 , pages 2902–2911, 2017.
948
- K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. IEEE
949
- Transactions on Evolutionary Computation , 6(2):182–197, 2002.
950
- Jesús Velasco Carrau, Gilberto Reynoso-Meza, Sergio García-Nieto, and Xavier Blasco. Enhancing controller’s
951
- tuning reliability with multi-objective optimisation: From model in the loop to hardware in the loop. Engineering
952
- Applications of Artificial Intelligence , 64:52–66, 2017.
953
- Mahmudul Hasan, Khin Lwin, Maryam Imani, Antesar M. Shabut, Luiz F. Bittencourt, and Mohammed Alamgir
954
- Hossain. Dynamic multi-objective optimisation using deep reinforcement learning: benchmark, algorithm and an
955
- application to identify vulnerable zones based on water quality. Engineering Applications of Artificial Intelligence ,
956
- 86:107–135, 2019.
957
- Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V . Le. Autoaugment: Learning augmentation
958
- strategies from data. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages
959
- 113–123, 2019.
960
- Terrance DeVries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv
961
- preprint arXiv:1708.04552 , 2017.
962
- Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. Ghostnet: More features from cheap
963
- operations. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 1580–1589,
964
- 2020.
966
- Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted
967
- residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages
968
- 4510–4520, 2018.
969
- Bichen Wu, Kurt Keutzer, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter
970
- Vajda, and Yangqing Jia. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search.
971
- In2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 10734–10742, 2019.
972
- Xiangxiang Chu, Bo Zhang, and Ruijun Xu. Moga: Searching beyond mobilenetv3. In ICASSP 2020 - 2020 IEEE
973
- International Conference on Acoustics, Speech and Signal Processing (ICASSP) , pages 4042–4046, 2020.
974
- Xiangxiang Chu, Bo Zhang, Ruijun Xu, and Jixiang Li. Fairnas: Rethinking evaluation fairness of weight sharing
975
- neural architecture search. arXiv preprint arXiv:1907.01845 , 2019.
976
- Zhichao Lu, Ian Whalen, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh
977
- Boddeti. Multi-objective evolutionary design of deep convolutional neural networks for image classification. IEEE
978
- Transactions on Evolutionary Computation , pages 1–1, 2020b.
979
- Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once for all: Train one network and specialize it
980
- for efficient deployment. In ICLR 2020 : Eighth International Conference on Learning Representations , 2020.
981
- Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C.
982
- Berg. Ssd: Single shot multibox detector. In 14th European Conference on Computer Vision, ECCV 2016 , pages
983
- 21–37, 2016.
984
- Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object
985
- detection. arXiv preprint arXiv:2004.10934 , 2020.
986
- Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn
987
- architecture design. In Proceedings of the European Conference on Computer Vision (ECCV) , pages 122–138, 2018.
988
- Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation
989
- network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision
990
- (ECCV) , pages 325–341, 2018.
991
- Bin Xiao, Haiping Wu, and Yichen Wei. Simple baselines for human pose estimation and tracking. In Proceedings of
992
- the European Conference on Computer Vision (ECCV) , pages 472–487, 2018.
993
- Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. Nas-bench-101: Towards
994
- reproducible neural architecture search. In International Conference on Machine Learning , pages 7105–7114, 2019.
995
- Arber Zela, Julien Siems, and Frank Hutter. Nas-bench-1shot1: Benchmarking and dissecting one-shot neural
996
- architecture search. In ICLR 2020 : Eighth International Conference on Learning Representations , 2020.
997
- Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. In Eighth
998
- International Conference on Learning Representations , 2020.
999
- Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in
1000
- Artificial Intelligence , pages 367–377, 2019.
1001
- Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine
1002
- Learning , 8(3):229–256, 1992.
1003
- Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In 2019 IEEE/CVF Conference
1004
- on Computer Vision and Pattern Recognition (CVPR) , pages 1761–1770, 2019.
1005
- Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architecture search. In International
1006
- Conference on Learning Representations , 2018.
1007
- Shoukang Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, and Dahua Lin. Dsnas: Direct neural
1008
- architecture search without parameter retraining. In 2020 IEEE/CVF Conference on Computer Vision and Pattern
1009
- Recognition (CVPR) , pages 12084–12092, 2020.
1010
- 6 Appendix
1011
- 6.1 Architecture Visualisation
1012
- This section visualises the architectures obtained by searching for EEEA-Nets with CIFAR-10 datasets, as shown in
1013
- Fig. 9, and EEEA-Nets with ImageNet datasets shown in Fig. 10. These architectures are the most reliable, minimising
1014
- three goals.
1016
- [Figure 9 diagrams, panels (a)–(h): Normal and Reduction cells of EA-Net (β=0), EEEA-Net-A (β=3.0), EEEA-Net-B (β=4.0), and EEEA-Net-C (β=5.0); cell nodes are operations such as sep_conv, dil_conv, inv_res, max_pool, avg_pool, and skip_connect.]
1081
- Figure 9: Normal and Reduction cells learned on CIFAR-10: EA-Net (β=0.0), EEEA-Net-A (β=3.0), EEEA-Net-B
1082
- (β=4.0), and EEEA-Net-C (β=5.0).
1084
- [Figure 10 diagram: stage-wise layouts (stem, Stages 1–5, tail) of EA-Net-N1/N2, EEEA-Net-A1/A2 (β=5), EEEA-Net-B1/B2 (β=6), and EEEA-Net-C1/C2 (β=7); blocks are labelled by kernel size K ∈ {3, 5, 7} and expansion rate E ∈ {3, 4, 6}, plus skip connections.]
1107
- Figure 10: EA-Nets and EEEA-Nets architectures were searched from ImageNet datasets. The stem and tail layers in
1108
- all architectures are the same.
1112
- Figure 11: An example of object detection results of MobileNetv2, MobileNetv3, and EEEA-Net-C2 models.
1113
- 6.2 Error Analysis of EEEA-Net-C2
1114
- Our experiment applied EEEA-Net-C2 for detection, semantic segmentation, and human keypoint detection, where
1115
- we concluded that the EEEA-Net-C2 model was better than the MobileNet-V3 model. For error analysis of the
1116
- EEEA-Net-C2 model, images for each application were processed to check the correct results. In this appendix, error
1117
- analysis of the EEEA-Net-C2 model is divided into three parts: error analysis of object detection, error analysis of
1118
- semantic segmentation, and error analysis of human keypoint detection.
1119
- 6.2.1 Object detection
1120
- Object detection results from MobileNetv2, MobileNetv3 and EEEA-Net-C2 models are shown in Fig. 11. Given the
1121
- error from the images in the first column, the MobileNetv2 model was found to have mistakenly identified “bird" as
1122
- “person", while the MobileNetv3 model was unable to find “bird". However, the EEEA-Net-C2 model detected the
1123
- location of all “birds".
1124
- As with the second column in Fig. 11, the EEEA-Net-C2 model can identify all “persons" positions. However, only
1125
- the EEEA-Net-C2 model could not locate the hidden “person" behind the middle woman in the third column image.
1126
- Additionally, in the fourth column pictures, the EEEA-Net-C2 model was able to identify more “plant pots" than the
1127
- MobileNetv2 and MobileNetv3 models.
1132
- Figure 12: An example of semantic segmentation results of MobileNetv2, MobileNetv3 and EEEA-Net-C2 models.
1135
- Figure 13: An example of human keypoint detection results of MobileNetv2, MobileNetv3 and EEEA-Net-C2 models.
1136
- 6.2.2 Semantic segmentation
1137
- The results of visual image segmentation using MobileNetv3 and EEEA-Net-C2 models are shown in Fig. 12. The
1138
- error of the semantic segmentation results can be determined from the pictures of the first column. It was found that
1139
- MobileNetv3 models could segment only “Traffic sign pole". However, the MobileNetv3 model cannot segment the left
1140
- “traffic sign", while the EEEA-Net-C2 model can segment both “pole and sign".
1141
- The pictures in the second column of Fig. 12 show that the EEEA-Net-C2 model segmented less of the “traffic
1142
- island" than the MobileNetv3 model. Next, in the third column, the EEEA-Net-C2 model segmented the
1143
- “footpath" more precisely than the MobileNetv3 model.
1144
- 6.2.3 Human keypoint detection
1145
- The human keypoint detection results from the MobileNetv2, MobileNetv3 and EEEA-Net-C2 models are shown in
1146
- Fig. 13. When considering the error from the pictures in the first column, the MobileNetv3 model was found to mislocate
1147
- the “left arm" position, while the MobileNetv2 and EEEA-Net-C2 models were able to locate it.
1149
- Figure 14: Comparison of latency between EEEA-Net and other state-of-the-art models on non-GPU processing.
1150
- Only the MobileNetv3 model can pinpoint the “leg" of the person sitting in the second column pictures. However, in the
1151
- third column pictures, the EEEA-Net-C2 model can locate the middle person’s “arms and legs", while the MobileNetv3
1152
- model identifies the person’s wrong location. Additionally, in the fourth column pictures, the EEEA-Net-C2 model
1153
- could locate the “arm and leg" more accurately than the MobileNetv2 and MobileNetv3 models.
1154
- The above examples show that the EEEA-Net-C2 model produces both correct and incorrect results. The EEEA-Net-C2 model was
1155
- designed and searched with the ImageNet dataset, so it may make errors when used with other
1156
- datasets or tasks. However, the EEEA-Net-C2 model achieves higher performance than the MobileNetv2 and MobileNetv3
1157
- models on the same dataset and framework used in the three applications.
1158
- 6.3 Mobile Processing
1159
- This appendix measures the performance of our EEEA-Net-C2 model and other state-of-the-art models on a smart-
1160
- phone and on a CPU-only setting. All models trained with the ImageNet dataset are converted to the PyTorch JIT version to enable
1161
- easy implementation on different platforms.
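A minimal sketch of this conversion and of a CPU latency measurement is given below; `model` stands for any of the compared networks, the file name is hypothetical, and measurements on the Pixel 3 XL would additionally require the PyTorch Mobile runtime.

```python
import time
import torch

model.eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)   # PyTorch JIT (TorchScript) version
scripted.save("model_jit.pt")                # hypothetical file name

with torch.no_grad():
    start = time.time()
    for _ in range(100):                     # 100 images, as in the measurements
        scripted(example)
latency_ms = (time.time() - start) / 100 * 1000.0
print(f"average latency: {latency_ms:.1f} ms per image")
```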
1162
- Fig. 14 shows the latency performance over 100 images of 224×224 pixels on the Google Pixel 3 XL smartphone
1163
- (blue bars) and Intel i7-6700HQ CPU (red bars) devices with non-GPU resources by DARTSv2, P-DARTS, NASNet,
1164
- RelativeNAS, ShuffleNetV2, MNASNet 1.0, MobileNetV2, MobileNetV3, and EEEA-Net-C2 models.
1165
- On the Google Pixel 3 XL, the EEEA-Net-C2 model processed each image in 86 milliseconds, whereas the
1166
- MobileNetV2 and MobileNetV3 models took 90 and 88 milliseconds, respectively. The EEEA-Net-C2 model has a
1167
- shorter latency than the state-of-the-art models (including DARTSv2, P-DARTS, NASNet, and RelativeNAS) and than the
1168
- MobileNets models, which are primarily designed for smartphones.
1169
- Also, on the Intel i7-6700HQ CPU, the EEEA-Net-C2 model has shorter latency than the state-of-the-art
1170
- models and the lightweight models (including MNASNet 1.0, MobileNetV2, and MobileNetV3).
1171
- 6.4 NAS-Bench dataset
1172
- Experimental results with CIFAR-10, CIFAR-100, and ImageNet datasets were compared between NAS methods. The
1173
- results obtained from different methods are achieved with different settings such as hyperparameters (e.g., learning rate
1174
- and batch size), data augmentation (e.g., Cutout and AutoAugment). Thus, the comparison may not be fair.
1176
- [Figure 15 plot: accuracy vs total training time spent (seconds) for random search, regularised evolution, and early exit evolution.]
1181
- Figure 15: Comparison of accuracy between random search, regularised evolution and Early Exit evolution algorithms
1182
- on NAS-Bench-101.
1183
- This section applies the Early Exit method to model search on the NAS-Bench datasets, which avoids
1184
- unfair comparisons and provides a uniform benchmark for NAS algorithms. The benchmarks used in this experiment were
1185
- NAS-Bench-101, NAS-Bench-1Shot1 and NAS-Bench-201.
1186
- 6.4.1 NAS-Bench-101
1187
- NAS-Bench-101 Ying et al. [2019] provides a tabular dataset of 423,624 unique architectures. These architectures
1188
- have been trained and evaluated on the CIFAR-10 dataset, which allows our work to search and query the mapped performance in the
1189
- dataset in a few milliseconds.
1190
- We have re-implemented model search on the NAS-Bench-101 dataset using random search, regularised evolution
1191
- and Early Exit evolution algorithms to search and query the performance of the resulting models. We used a re-
1192
- implemented regularised evolution with the Early Exit method, taking a population size of 100, a tournament size of
1193
- 10, and a maximum-parameters Early Exit threshold (β) of 25 million.
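A simplified sketch of tournament-based regularised evolution with an Early Exit filter on the parameter count is shown below; `random_architecture`, `mutate`, `count_params` and `query_accuracy` are placeholders for the NAS-Bench-101-specific code and are not part of any real API.

```python
import collections
import random

def early_exit_evolution(cycles=1000, population_size=100,
                         tournament_size=10, max_params=25e6):
    population = collections.deque()
    # Early Exit population initialisation: discard oversized candidates.
    while len(population) < population_size:
        arch = random_architecture()
        if count_params(arch) <= max_params:
            population.append((arch, query_accuracy(arch)))
    for _ in range(cycles):
        sample = random.sample(list(population), tournament_size)
        parent = max(sample, key=lambda item: item[1])   # best of the tournament
        child = mutate(parent[0])
        if count_params(child) > max_params:             # Early Exit on children
            continue
        population.append((child, query_accuracy(child)))
        population.popleft()                             # age out the oldest member
    return max(population, key=lambda item: item[1])
```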
1194
- The results in Fig. 15 show that our early exit evolution algorithm tends to be higher in accuracy than the regularised
1195
- evolution from 2 million seconds to 5 million seconds. Overall, the regularised evolution algorithm appears to perform
1196
- better than the random search algorithm. However, our early exit evolution tends to outperform both random search and
1197
- regularised evolution algorithms.
1198
- 6.4.2 NAS-Bench-1Shot1
1199
- NAS-Bench-1Shot1 Zela et al. [2020] is the benchmark for a one-shot neural architecture search, developed from the
1200
- NAS-Bench-101 search space by tracking the trajectory and performance of the obtained architectures for three search
1201
- spaces: 6,240 architectures for search space 1, 29,160 architectures for search space 2, and 363,648 architectures for
1202
- search space 3.
1203
- Fig. 16 shows the mean performance on validation regret of architectures obtained by random search, regularised
1204
- evolution and Early Exit evolution algorithms. For search space 1, our algorithm achieves validation regret close to
1205
- the regularised evolution algorithm. For search space 2, our algorithm converges better than the regularised evolution
1206
- algorithm. Our algorithm outperforms random search and regularised evolution algorithms for search space 3, the most
1207
- significant (100× more architectures than space 1, and 10× more than space 2).
1209
- [Figure 16 plots, panels (a)–(c): validation regret vs estimated wallclock time [s] on search spaces 1–3, comparing RS, RE, and EE.]
1236
- Figure 16: Comparison of accuracy between random search (RS), regularised evolution (RE) and Early Exit evolution
1237
- (EE) algorithms on NAS-Bench-1Shot1.
1238
- Method CIFAR-10 CIFAR-100 ImageNet16-120
1239
- validation test validation test validation test
1240
- ResNet He et al. [2016] 90.83 93.97 70.42 70.86 44.53 43.63
1241
- RSPS Li and Talwalkar [2019] 84.16±1.69 87.66±1.69 45.78±6.33 46.60±6.57 31.09±5.65 30.78±6.12
1242
- Reinforce Williams [1992] 91.09±0.37 93.85±0.37 70.05±1.67 70.17±1.61 43.04±2.18 43.16±2.28
1243
- ENAS Pham et al. [2018] 39.77±0.00 54.30±0.00 10.23±0.12 10.62±0.27 16.43±0.00 16.32±0.00
1244
- DARTS Liu et al. [2018b] 39.77±0.00 54.30±0.00 38.57±0.00 38.97±0.00 18.87±0.00 18.41±0.00
1245
- GDAS Dong and Yang [2019] 90.01±0.46 93.23±0.23 24.05±8.12 24.20±8.08 40.66±0.00 41.02±0.00
1246
- SNAS Xie et al. [2018] 90.10±1.04 92.77±0.83 69.69±2.39 69.34±1.98 42.84±1.79 43.16±2.64
1247
- DSNAS Hu et al. [2020] 89.66±0.29 93.08±0.13 30.87±16.40 31.01±16.38 40.61±0.09 41.07±0.09
1248
- PC-DARTS Xu et al. [2020] 89.96±0.15 93.41±0.30 67.12±0.39 67.48±0.89 40.83±0.08 41.31±0.22
1249
- EA-Net (SO) 91.53±0.00 94.22±0.00 73.13±0.00 73.17±0.00 46.32±0.00 46.48±0.00
1250
- EA-Net (β = 0) 88.97±2.48 91.54±2.69 66.84±5.08 67.00±4.90 39.93±5.54 39.27±6.21
1251
- EEEA-Net (β = 0.3) 87.07±1.59 89.76±1.87 64.04±3.21 64.31±3.21 35.42±3.81 34.98±4.13
1252
- EEEA-Net (β = 0.4) 89.91±0.77 92.68±0.69 68.70±1.50 68.65±1.51 41.71±1.58 41.25±1.61
1253
- EEEA-Net (β = 0.5) 90.21±0.58 92.83±0.46 69.15±1.36 68.95±1.25 42.14±1.14 41.98±1.22
1254
- Optimal 91.61 94.37 73.49 73.51 46.77 47.31
1255
- Table 9: Comparison of a single objective and multi-objective evolution algorithm with the 8 NAS methods provided by
1256
- NAS-Bench-201 benchmark. Optimal shows the best architecture in the search space.
1257
- 6.4.3 NAS-Bench-201
1258
- NAS-Bench-201 Dong and Yang [2020] is an extension of NAS-Bench-101 that uses a different search space,
1259
- and it covers a wider range of datasets, including CIFAR-10, CIFAR-100, and ImageNet-16-120. It contains 15,625
1260
- architectures built from five operations, with 6-dimensional vectors indicating the operations in a cell. All architectures are evaluated
1261
- on the validation and test sets of CIFAR-10, CIFAR-100, and ImageNet-16-120.
1262
- We compare our Early Exit Evolution Algorithm, EEEA-Net (β = 0.3, 0.4 and 0.5), with the single-objective evolution
1264
- algorithm (SO) and the multi-objective evolution algorithm (β = 0). The hyper-parameters for this search process were
1265
- 10 generations of the EA with 100 populations, a retaining probability of 0.5, a
1266
- mutation probability of 0.1, and an Early Exit maximum number of parameters equal to 0.3, 0.4 and 0.5 million.
1266
- The results are shown in Table 9; our EEEA-Net (β = 0.4 and 0.5) outperforms EEEA-Net (β = 0 and 0.3). However,
1267
- the EA-Net (SO) using accuracy as the optimisation objective performed better than all EEEA-Nets.
1268
- Furthermore, when we compared our EEEA-Net (β = 0.5) with 8 NAS methods, including RSPS Li and Talwalkar
1269
- [2019], Reinforce Williams [1992], ENAS Pham et al. [2018], DARTS Liu et al. [2018b], GDAS Dong and Yang
1270
- [2019], SNAS Xie et al. [2018], DSNAS Hu et al. [2020], and PC-DARTS Xu et al. [2020], we found that EEEA-Net (β = 0.5) achieved
1271
- higher accuracy than all other NAS methods except for the Reinforce method.
txt/2109.02417.txt DELETED
@@ -1,514 +0,0 @@
1
-
2
-
3
- This is a preprint: it has been accepted for publication in the Conference: 2020 6th IEEE International
4
- Conference on Network Softwarization (NetSoft) .
5
- • DOI: 10.1109/NetSoft48620.2020.9165337
6
-
7
-
8
-
9
-
10
-
11
-
12
-
13
-
14
-
15
-
16
-
17
-
18
-
19
-
20
-
21
-
22
-
23
- Detection of Insider Threats using Artificial
24
- Intelligence and Visualisation
25
- Vasileios Koutsouvelis∗, Stavros Shiaeles†, Bogdan Ghita‡, Gueltoum Bendiab‡
26
- ∗Open University of Cyprus, 33 Yiannou Kranidioti Ave., Latsia, Nicosia, Cyprus
27
28
- †Network and Security Research Group (NSRG), School of Computing,
29
- University of Portsmouth, Winston Churchill Avenue, Portsmouth, PO1 2UP, Hampshire, UK
30
31
- ‡ Centre for Security, Communications and Networks Research (CSCAN),
32
- School of Computing and Mathematics, Plymouth University, Drake Circus, Plymouth PL4 8AA, UK
33
34
-
35
-
36
- Abstract —Insider threats are one of the most damaging risk
37
- factors for the IT systems and infrastructure of a company or an
38
- organization; identification of insider threats has prompted the
39
- interest of the world academic research community, with several
40
- solutions having been proposed to alleviate their potential impact.
41
- For the implementation of the experimental stage described in this
42
- study, the Convolutional Neural Network (from now on CNN)
43
- algorithm was used and implemented via the Google TensorFlow
44
- program, which was trained to identify potential threats from
45
- images produced by the available dataset. From the examination
46
- of the images that were produced and with the help of Machine
47
- Learning, the question whether the activity of each user is
48
- classified as “malicious” or not for the Information System was
49
- answered.
50
- Index Terms —Threats, visualization, security, artificial intelli -
51
- gence, machine learning
52
-
53
- I. INTRODUCTION
54
- Computers nowadays are used in every human activity and
55
- all kinds of operations, from residential to commercial and from
56
- basic service provisioning to research. Beyond their benefit,
57
- the risks and cases of deliberate or accidental destruction,
58
- tampering or unauthorised use of data and computer resources,
59
- in general, are increasing. The consequences of possible
60
- tampering, leakage, or misuse of data can lead not only to
61
- significant damage and costs but also risks for the protection of
62
- the human rights of individuals [1]. In the current security
63
- monitoring landscape, artificial neural networks play an
64
- extremely important role in the prevention and handling of
65
- internal threats [2] and alleviating the risks associated with
66
- information systems infrastructures. While a typical infras-
67
- tructure would have a level of protection against an external
68
- attack/threat, an internal threat refers to a person who may have
69
- privileged access to classified, sensitive, or proprietary data and
70
- uses this advantage to remove information from an organization
71
- and transfer it to unauthorised external users. Such attackers
72
- may include the users of the company, who can bypass the
73
- control procedures for access to classified data, and the users
74
- who gain access to user accounts with more rights than those they already have. The purpose of
75
- this survey was to answer the question of whether Artificial
76
- Intelligence can be successfully used to detect malicious
77
- activity. The answer to this question went through three (3)
78
- steps: (a) collecting, processing, and classifying the data of the
79
- users tested; (b) visualizing the extracted data; (c) categorize
80
- the behaviour as malicious or normal.
81
- II. RELATED WORK
82
- The detection of malicious activity with the help of Artificial
83
- Intelligence has been a matter of concern to scholars, who have
84
- occasionally dealt with this issue, using different approaches.
85
- For instance, X. Ren, L. Wang, [3] presented a hybrid insider
86
- threat detection system, consisting of data processing, entity
87
- portrait, rule matching as well as iterative attention. Based
88
- on the results of the experiments, the authors claim that the
89
- proposed system provides a higher detection rate and better
90
- timeliness since it incorporates multiple complementary
91
- detection techniques and components. In [4], Sajjanhar, Atul et
92
- al proposed an image -based feature representation of the daily
93
- resource usage pattern of users in the organization. The authors
94
- reported an overall accuracy of 99% when compared with
95
- other techniques. In another recent study [5], Kim, Junhong et
96
- al introduced an insider threat detection framework based on
97
- user behaviour modelling and anomaly detection algorithms
98
- and they report that the experimental results indicate that the
99
- proposed framework can work well for imbalanced datasets in
100
- which there are only a few insider threats and where no domain
101
- experts’ knowledge is provided. In the same context, Hu, Teng
102
- et al [6] proposed a continuous identity authentication method
103
- based on mouse dynamic behaviour and deep learning to solve
104
- the insider threat attack detection problem and they claim that
105
- the experimental results showed that the proposed method could
106
- identify the user’s identities in seconds and has a lower false
107
- accept rate (FAR) and false reject rate (FRR).
108
- Tuor, Kaplan and Hutchinson [7] referred to a system of
109
- profound knowledge for filtering log files’ data and analysing
110
- them. According to the writers, an internal threat behaviour varies widely, so the researcher does not attempt to formulate
111
- the pattern of behaviour which is a threat. Instead, new variants
112
- of Neural Networks (DNN) and Recurring Neural Networks
113
- (RNNs) are trained to recognize the activity that is typical
114
- for each user on a network. At the same time, these Neural
115
- Networks assess whether the behaviour of the user is normal or
116
- suspicious. The authors note that detecting malicious cases is
117
- particularly difficult because attackers often try to imitate the
118
- typical behaviour of a normal user. In another study, Sanzgiri
119
- and Dasgupta [8] presented the techniques that have been
120
- developed to detect internal threats referring to the researchers
121
- and their techniques. In particular, the objective of this paper is
122
- to present a categorization of the techniques used by
123
- researchers (Hu, Giordano, Kandias, Maybury, Greitzer,
124
- Eldardiry) to deal with insider threats. Finally, the researchers
125
- remarked that one of the main reasons why it is still difficult
126
- to detect attacks by internal users is the lack of sufficient
127
- real available data in order to build and test models and
128
- mechanisms for detecting internal threats.
129
- Breier and Branisova [9] proposed a threat detection method
130
- based on data mining techniques for analysing system log
131
- files (log analysis). Their approach was based on Apache
132
- Hadoop, which allows the processing of large volumes of data
133
- in a parallel way. The method detects new types of viola -
134
- tions without further human intervention, while the overall
135
- error reaches a value below 10%. Legg, Buckley, Goldsmith
136
- and Creese (2015) [10] proposed Corporate Insider Threat
137
- Detection (CITD), a corporate threat detection system which
138
- incor porates technical and behavioural activities for the evalu -
139
- ation of threats caused by individuals. In particular, the system
140
- recognised the users and the role-based profiles and measured
141
- how they deviate from their observed behaviour in order to
142
- estimate the potential threat that a set of user activities can
143
- cause. Some other studies have used approaches [11] based on
144
- graphs to find malicious cases in structural data models that
145
- represent an internal threat activity that is looking for activities
146
- that display similarities to normal data transactions but are
147
- structurally different from them. Others [12] have suggested
148
- approaches that combine Structural Anomaly Detection - SA.
149
- Despite the work to date, the challenge to holistically
150
- observe and analyse user and application behaviour remains a
151
- current one, due to its volume and complexity. The benefits
152
- of AI-based solution are obvious when faced with the large
153
- amounts of data collected, but interpreting the data and results
154
- demands further research.
155
- III. PROPOSED APPROACH
156
- As identified in the previous section, existing research
157
- demonstrated the efficiency of machine learning approaches,
158
- but also exposed their limitations in segregating the complexity
159
- of user behaviour into normal and malicious. To account
160
- for this complexity, we apply the CNN algorithm on user
161
- interaction data, as reflected through the log files collected from
162
- individual users. Unlike other studies focusing on domain
163
- knowledge to detect malicious behaviours through the use of
164
- specific structural anomaly detection [13], our proposed approach focuses exclusively on the behaviour of each user
165
- of the Information Technology System. In addition to the
166
- standard approach of log data collection [10, 14], the method
167
- also considers the organizational role for each user when estab -
168
- lishing the behaviour profile, which improved significantly the
169
- accuracy of the method. For example, the “malicious” activity
170
- of a user, who holds an IT Admin position in the organization,
171
- may justify - to a certain extent - this activity.
172
- In the context of internal threats, the method aims to
173
- discriminate between legitimate and malicious behaviour by
174
- investigating the differences in the associated visualisation
175
- maps. For each user, the approach creates an image that
176
- depicted his/her activity and behaviour, as emerged from their
177
- interaction with various information systems. While the result -
178
- ing images may appear visually different, they were processed
179
- through a machine learning algorithm in order to automatically
180
- recognize which subset of the users appear to exhibit malicious
181
- behaviour (and therefore posing a threat for the respective
182
- information systems) and which are legitimate/benign ones.
183
- Two stages are critical for the success of the method: the
184
- input data and the processing method. Given the proposed
185
- approach is aiming to provide a holistic view of the user
186
- interactions, the most appropriate method was considered to
187
- be converting log events into a visual representation. A full
188
- overview of the process is provided in the following section,
189
- including the attempts to highlight the most relevant features
190
- through the chosen visualisation. The second critical stage,
191
- information processing, was biased by the choice of input data.
192
- Research undertaken in recent years demonstrated that
193
- Convolutional Neural Networks are indeed extremely efficient
194
- at image analysis, as shown by [15]. However, they have also
195
- proved their efficiency in the security area. Given their ability
196
- to deal with complex relationship, CNNs have been applied
197
- from basic security problems, as in [16] which introduced a
198
- CNN -based generic detection engine, to [17] for analysis of
199
- encrypted content.
200
- The approach presented in this paper follows from the mal -
201
- ware classification method from Wang et al. [18] of converting
202
- the data analysis task into an image recognition one. Unlike
203
- [18] and [19], which look at either malware or network activity
204
- to detect attacks, this paper aims to extend the analysis into
205
- the logging footprint to detect malicious vs normal behaviour.
206
- As highlighted in previous studies, the challenge is to ensure
207
- that sufficient input data is available for the method to be
208
- successful, and that the applied transformation makes it compatible
209
- with the image analysis task.
210
- A. Methodology
211
- The input data was the Insider Threat Test Dataset from
212
- CERT, which is part of the Software Engineering Institute
213
- (SEI). The dataset included CSV log files, which recorded
214
- activity covering eighteen (18) months, collected between
215
- 01.01.2010 and 31.05.2011. Through these files and after
216
- analysis and processing, we attempted to present an image
217
- of the Information System and to analyse the behaviour of users
218
- identified as malicious. For each user, the inputs included login records, files/documents used or opened, emails sent and
219
- received, web browsing history, devices used, and user role
220
- within the organization.
221
- The steps we followed to complete the process and draw
222
- our conclusions were: 1) splitting the data and creating files
223
- based on the data of the user under consideration, 2) importing
224
- the data files we created into the ”D3.js” library, selecting an
225
- appropriate image creation plan, examining the application
226
- library’s patterns and creating images of the user in question
227
- that included his/her activity during each day, 3) creating
228
- images, 4) implementing and training the CNN algorithm in
229
- TensorFlow and examining user behaviour, which we
230
- have described as ”normal” or ”malicious”; and 5) drawing
231
- conclusions.
232
- 1) First Stage: In the first stage of pre-processing, the raw
233
- input files were parsed to classify the log files from fifteen users
234
- according to the user role (salesman, IT admin, electrical
235
- engineer, mechanical engineer, administrator, production line
236
- worker, computer scientist, and software quality engineer).
237
- First, log files were parsed for each of the 15 users to create
238
- separate profiles consisting of three files (file.csv, email.csv,
239
- http.csv), which included the complete activity. In the second
240
- step, the profile files were compared against a number of rules
241
- that defined threats and malicious behaviour. In the website
242
- category, we chose terms that were associated with social
243
- networks, job search sites, malware, and file sharing. The
244
- parsing scripts also searched for attached files, which were
245
- divided into three size intervals, defining small [50 KB-100 KB],
246
- medium [100 KB-200 KB], and large files (over 200 KB).
247
- In parallel, we also parsed the profile files for terms associ -
248
- ated with malware, such as Keylogger, files that may have been
249
- leaked to the Internet, and files that may have been distributed
250
- over the Internet. The result of the pre-processing was a set of
251
- three files per user, which had the following structure: date,
252
- user (username), host (from which the action was carried out),
253
- keyword (defining the type of threat), and threat index (each
254
- specific threat was assigned a numerical category); these files
255
- contained the respective threats. The activity was classified
256
- using a separate record field which defined a specific type of
257
- [illegal] activity. At a subsequent stage, the field was
258
- transcoded to a unique colour for visualization of the images
259
- associated with the user activity map. The collected content
260
- for the users was then aggregated into weekly and monthly
261
- activity for long-term analysis.
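The per-user parsing and keyword-based threat tagging described above can be sketched as follows. This is a minimal Python/pandas illustration; the file layout, keyword lists, user id and numeric threat indices are assumptions made for the example, not the authors' actual rules or scripts.

```python
# Illustrative sketch of the first-stage pre-processing (assumed pandas-based;
# keywords, indices and paths are hypothetical).
import pandas as pd

THREAT_KEYWORDS = {
    "social":      (["facebook", "linkedin"], 1),   # social / professional networking
    "jobsearch":   (["careerbuilder", "indeed"], 2),
    "filesharing": (["dropbox", "wikileaks"], 3),
    "malware":     (["keylogger"], 4),
}

def size_category(size_kb):
    """Bucket e-mail attachment sizes as in the paper: small, medium, large."""
    if 50 <= size_kb <= 100:
        return "small"
    if 100 < size_kb <= 200:
        return "medium"
    return "large" if size_kb > 200 else None

def tag_threats(df, text_column):
    """Return rows whose text matches a threat keyword, with a numeric threat index."""
    rows = []
    for _, row in df.iterrows():
        text = str(row[text_column]).lower()
        for label, (keywords, index) in THREAT_KEYWORDS.items():
            if any(k in text for k in keywords):
                rows.append({"date": row["date"], "user": row["user"],
                             "host": row["pc"], "keyword": label,
                             "threat_index": index})
    return pd.DataFrame(rows)

# One profile per user: http.csv, email.csv, file.csv filtered to that user's activity.
http = pd.read_csv("r4.2/http.csv")           # assumed CERT r4.2-style layout
user_http = http[http["user"] == "AAM0658"]   # hypothetical user id
threats = tag_threats(user_http, "url")
threats.to_csv("profiles/AAM0658_http_threats.csv", index=False)
```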
262
- 2) Second Stage: In the second stage of the experiment,
263
- the numerical input was processed through the D3.js JavaScript
264
- library to visualize and produce images that depict the activity
265
- of each user. D3.js is an open-source JavaScript
266
- library, used for document and data handling, which is based
267
- on web standards. It is used through modern web browsers,
268
- combining powerful data visualization methods and elements.
269
- Large datasets can be easily linked to SVG objects using simple
270
- D3.js functions to create rich text and graphics diagrams. With
271
- a minimal resource burden on a computer system, D3.js is
272
- extremely fast, supporting large data sets and dynamic
273
- interactions through static or moving images. As mentioned above, the illegal activity was colour-coded for
274
- visual processing as follows: blue for social and professional
275
- interaction, such as job search and social networking sites, red
276
- for file-sharing websites, pink for email threats,
277
- green for file threats, yellow for 50-100 KB email attachments,
278
- orange for 100-200 KB email attachments, and brown for
279
- > 200KB email attachments. Using this coding, the user data
280
- was converted to an hourly resolution heatmap, summarising
281
- week-long and month-long data; this allowed a harmonised
282
- view of the data for the two timeframes. This process aimed
283
- to convert the numerical data into a visual representation and
284
- characterization of the activity to determine whether it is
285
- malicious or normal.
286
- The result is a two-dimensional heatmap, colour-coded as
287
- described above, with the days of the month on the vertical axis
288
- and the hours of the day on the horizontal axis. The dimensions of the
289
- design area were determined according to the size of the grid
290
- that was set. The result was the creation of a space consisting
291
- of hourly activity squares, each of which was coloured with
292
- the dominant activity identified during that respective timeslot.
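The construction of the day-by-hour grid and its colour coding can be illustrated with the following sketch. The paper renders these images with D3.js; matplotlib is used here purely as a stand-in to show the idea, and the grid size and the example data are assumptions.

```python
# Illustrative rendering of the activity heatmap (days x hours).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# 0 = no dominant activity in that hour, then one code per colour used in the paper.
colours = ["white",   # 0: no activity
           "blue",    # 1: social / job-search sites
           "red",     # 2: file-sharing sites
           "pink",    # 3: e-mail threats
           "green",   # 4: file threats
           "yellow",  # 5: 50-100 KB attachments
           "orange",  # 6: 100-200 KB attachments
           "brown"]   # 7: >200 KB attachments
cmap = ListedColormap(colours)

# grid[day, hour] holds the dominant activity code for that timeslot
# (random data here, purely for illustration).
rng = np.random.default_rng(0)
grid = rng.integers(0, len(colours), size=(31, 24))

fig, ax = plt.subplots(figsize=(6, 4))
ax.imshow(grid, cmap=cmap, vmin=0, vmax=len(colours) - 1, aspect="auto")
ax.set_xlabel("hour of day")
ax.set_ylabel("day of month")
fig.savefig("user_month_heatmap.png", dpi=150)
```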
293
- Following the generation of the activity-based heatmap, the
294
- full dataset included a total of 1199 images; these were
295
- manually labelled as normal or malicious. The labelling was
296
- based on specific criteria, indicative of the density of the
297
- activity at specific time intervals, its colours, and the time
298
- of day. Following manual analysis, 769 images were labelled
299
- as containing malicious activity and 430 images were labelled
300
- as normal. Indicatively, Figure 1 presents four such images.
301
- Fig. 1. Display of a user’s normal and malicious activity, weekly and monthly
- Fig. 2. Schematic layout of the CNN algorithm implemented
309
- 3) Third Stage: In the third stage of the experiment,
310
- Google’s TensorFlow framework was used to design a six-layer
311
- CNN that would recognize whether the image was classified
312
- as normal or malicious. CNN is a class of feed-forward
313
- artificial neural networks and has been successfully applied
314
- to the analysis and recognition of visual images, videos, and
315
- natural language processing systems. It also uses relatively
316
- little pre-processing compared to other image classification algorithms.
317
- This means that the network learns the filters that, in
318
- traditional algorithms, are hand-engineered. It is also known as a shift-invariant artificial
319
- neural network, based on its shared-weights architecture. Finally, in
320
- the previous step, one thousand one hundred ninety-nine (1199)
321
- images were produced, of which - as reported - four hundred
322
- and thirty (430) were assessed as containing normal activity.
323
- For this reason, a corresponding number of malicious images
324
- were selected. Finally, eight hundred and sixty (860) images
325
- were used for this stage, of which eight hundred forty (840)
326
- were the training data images as follows:
327
- 1) Training Data : 80% of training images were used.
328
- 2) Validation Data : 20% of training images were used for
329
- validation.
330
- Figure 2 shows the resulting implementation.
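A minimal sketch of such a six-layer CNN in TensorFlow/Keras is given below. The paper does not report the exact layer configuration, image resolution or hyperparameters, so the values used here (and the directory layout) are illustrative assumptions only.

```python
# A minimal six-layer CNN for binary (normal vs malicious) heatmap classification.
# Architecture, image size and hyperparameters are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # assumed heatmap image size
    layers.Conv2D(32, 3, activation="relu"),  # layer 1
    layers.MaxPooling2D(),                    # layer 2
    layers.Conv2D(64, 3, activation="relu"),  # layer 3
    layers.MaxPooling2D(),                    # layer 4
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # layer 5
    layers.Dense(1, activation="sigmoid"),    # layer 6: normal vs malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 840 training images split 80/20 into training and validation data, as described above.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", label_mode="binary", validation_split=0.2,
    subset="training", seed=42, image_size=(128, 128))
val_ds = tf.keras.utils.image_dataset_from_directory(
    "images", label_mode="binary", validation_split=0.2,
    subset="validation", seed=42, image_size=(128, 128))
history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```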
331
-
332
- B. Evaluation
333
- In order to ensure a balanced dataset, the 769 malicious
334
- activity images were reduced through random selection to 430
335
- images, matching the number of normal activity images. The
336
- resulting dataset had 860 samples, including an equal number
337
- of normal and malicious activity samples; the dataset was split
338
- as follows: 20 images were set aside for validation and the
339
- remaining 840 images were split 80% training (672 images)
340
- and 20% testing (168 images). The validation images were also
341
- selected in a balanced fashion, with an equal number
342
- (10) of normal and malicious activity samples. The prediction
343
- of the results was successful, with an accuracy that reached
344
- 100%; the CNN algorithm was traced graphically to
345
- illustrate training accuracy, validation accuracy, and cost.
346
- Below are presented the three training curves (training
347
- accuracy, validation accuracy, cost) and the implementation graph of the
348
- CNN algorithm.
349
- According to the graphs in Figure 3, training accuracy starts
350
- at a value close to 0.450, i.e., 45%, and increases to reach a
351
- rate close to 100%. Validation accuracy (Figure 3) starts
352
- from a value close to 0.400, i.e., 40%, and rises to reach a
353
- rate close to 90%, while the cost starts from a value
354
- close to 0.700 and decreases steadily to
355
- arrive at a value close to 0.
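These three curves can be reproduced from the training history, for example as follows (assuming the Keras `history` object returned by `model.fit()` in the sketch above):

```python
# Plot training accuracy, validation accuracy and cost (loss) per epoch.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(history.history["accuracy"], label="training accuracy")
ax1.plot(history.history["val_accuracy"], label="validation accuracy")
ax1.set_xlabel("epoch"); ax1.legend()
ax2.plot(history.history["loss"], label="cost")
ax2.set_xlabel("epoch"); ax2.legend()
fig.savefig("training_curves.png", dpi=150)
```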
356
- IV. CONCLUSION
357
- The purpose of this study was to investigate the feasibility of
358
- using machine learning techniques to detect malicious activity
359
- by converting activity reports into a visual representation. The
360
- answer to this question has gone through three stages: a) the
361
- collection, processing, and classification of the data of the users
362
- under consideration, b) the visualization of the extracted data,
363
- and c) the use of the CNN algorithm to classify behaviour
364
- into malicious or normal. The algorithm was trained and tested
365
- using a dataset of 860 created images, including both malicious
366
- and normal activity. Our conclusion is that, with the
367
- methodology used, the detection of malicious activity by users of the
368
- information system was achieved. The prediction of the
369
- results was successful, with an accuracy that reached 100%.
370
- Table I concisely summarises the results of several insider threat
371
- detection methods in terms of reported accuracy, together with
372
- a comparison against our method.
373
- In conclusion, it should be noted that the characterization
374
- of a user’s behaviour as malicious is also dependent on the
375
- Security Policy that the Company or Organization adopts to
376
- protect its Information System. For the analysis, one of the
377
- set policies was that visiting social networking sites or job
378
- search websites falls outside normal activity and should
379
- be categorised as malicious. In addition, an important role in
380
- characterising a user’s activity is played by the position held
381
- in the organization; the set policy and the associated analysis
382
- should also consider this.
392
- Fig. 3. Graphic representation of Training Accuracy, Validation Accuracy and Cost
393
- TABLE I
- COMPARISON OF OUR METHOD WITH OTHERS
-
- Study | Accuracy | Comparison
- Proposed method | Testing accuracy: 100%; Training accuracy: 100%; Validation accuracy: 90.6%; Cost: 0.582 | Machine learning: training of the CNN algorithm for the classification of user behaviour as normal or malicious.
- W. Eberle et al. [11] | Testing accuracy: 95% | A graph-based theoretical approach was used to detect malicious user behaviour in an Information System.
- O. Brdiczka et al. [12] | Server Eitrigg: 82.74%; Server Cenarion Circle: 89.09%; Server Bleeding Hollow: 79.84% | Proposes an approach combining structural anomaly detection (SA) from social and information networks with the psychological profiling (PP) of individuals.
- W. T. Young et al. [13] | Indicators: 87.4%; Anomalies: 97.9%; Scenarios: 80.6% | Focuses on the use of domain knowledge to detect malicious behaviour through specific SA algorithms.
- J. Breier et al. [14] | - | The error rate of the method used was less than 10%. In our method, the error rate approaches zero (0).
- P. A. Legg et al. [10] | - | Based on collecting log data and building a profile for each user and his/her property. In our method, the user property was not evaluated.
- X. Ren, L. Wang [3] | - | Proposes a hybrid intelligent system for insider threat detection that aims to realize more effective detection of security incidents by incorporating multiple complementary detection techniques, such as entity portrait, rule matching and iterative attention. In our method, only the user activity was evaluated.
- A. Sajjanhar et al. [4] | Accuracy: 99% | Proposes an image-based feature representation of the daily resource usage pattern of users in the organization. Classification models are applied to the representative images to detect anomalous behaviour of insiders. The images are classified as malicious or non-malicious.
- J. Kim et al. [5] | Accuracy > 90% | Proposes insider-threat detection methods based on user behaviour modelling and anomaly detection algorithms. Experimental results show that the proposed framework can work reasonably well to detect insiders’ malicious behaviours. Although the framework was empirically verified, there are some limitations in the current research, which led the authors to future research directions.
430
-
431
- ACKNOWLEDGMENT
432
- This project has received funding from the Euro -
433
- pean Union’s Horizon 2020 research and innovation
434
- programme under grant agreement no. 786698. The work re-
435
- flects only the authors’ view and the Agency is not responsible
436
- for any use that may be made of the information it contains.
437
-
438
- REFERENCES
439
- [1] C. Chousiadis, I. Mavridis, and G. Pangalos, “An authen -
440
- tication architecture for healthcare information systems,”
441
- Health Informatics Journal , vol. 8, no. 4, pp. 199 –204,
442
- 2002.
443
- [2] J. R. Vacca, Computer and information security hand -
444
- book . Newnes, 2012.
445
- [3] X. Ren and L. Wang, “A hybrid intelligent system for
446
- insider threat detection using iterative attention,” in
447
- Proceedings of 2020 the 6th International Conference on
448
- Computing and Data Engineering , 2020, pp. 189–194.
449
- [4] A. Sajjanhar, Y. Xiang et al. , “Image -based feature repre -
450
- sentation for insider threat classification,” arXiv preprint
451
- arXiv:1911.05879 , 2019.
452
- [5] J. Kim, M. Park, H. Kim, S. Cho, and P. Kang, “Insider
453
- threat detection based on user behavior modeling and
454
- anomaly detection algorithms,” Applied Sciences , vol. 9,
455
- no. 19, p. 4018, 2019.
- [6] T. Hu, W. Niu, X. Zhang, X. Liu, J. Lu, and Y. Liu,
456
- “An insider threat detection approach based on mouse
457
- dynamics and deep learning,” Security and Communica -
458
- tion Networks , vol. 2019, 2019.
459
- [7] A. Tuor, S. Kaplan, B. Hutchinson, N. Nichols, and
460
- S. Robinson, “Deep learning for unsupervised insider
461
- threat detection in structured cybersecurity data streams,”
462
- in Workshops at the Thirty -First AAAI Conference on
463
- Artificial Intelligence , 2017.
464
- [8] A. Sanzgiri and D. Dasgupta, “Classification of insider
465
- threat detection techniques,” in Proceedings of the 11th
466
- annual cyber and information security research confer -
467
- ence, 2016, pp. 1–4.
468
- [9] J. Breier and J. Branišová, “Anomaly detection from
469
- log files using data mining techniques,” in Information
470
- Science and Applications . Springer, 2015, pp. 449–457.
471
- [10] P. A. Legg, O. Buckley, M. Goldsmith, and S. Creese,
472
- “Caught in the act of an insider attack: detection and
473
- assessment of insider threat,” in 2015 IEEE Interna -
474
- tional Symposium on Technologies for Homeland Secu -
475
- rity (HST) . IEEE, 2015, pp. 1–6.
476
- [11] W. Eberle, J. Graves, and L. Holder, “Insider threat
477
- detection using a graph -based approach,” Journal of
478
- Applied Security Research , vol. 6, no. 1, pp. 32–81, 2010.
479
- [12] O. Brdiczka, J. Liu, B. Price, J. Shen, A. Patil, R. Chow, E. Bart, and N. Ducheneaut, “Proactive insider threat
480
- detection through graph learning and psychological con -
481
- text,” in 2012 IEEE Symposium on Security and Privacy
482
- Workshops . IEEE, 2012, pp. 142–149.
483
- [13] W. T. Young, H. G. Goldberg, A. Memory, J. F. Sartain,
484
- and T. E. Senator, “Use of domain knowledge to detect
485
- insider threats in computer activities,” in 2013 IEEE
486
- Security and Privacy Workshops . IEEE, 2013, pp. 60 – 67.
487
- [14] J. Breier and J. Branišová, “A dynamic rule creation
488
- based anomaly detection method for identifying security
489
- breaches in log records,” Wireless Personal Communica -
490
- tions , vol. 94, no. 3, pp. 497–511, 2017.
491
- [15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet
492
- classification with deep convolutional neural networks,”
493
- in Advances in neural information processing systems ,
494
- 2012, pp. 1097 –1105.
495
- [16] R. Vinayakumar, K. Soman, and P. Poornachandran,
496
- “Applying convolutional neural network for network in -
497
- trusion detection,” in 2017 International Conference on
498
- Advances in Computing, Communications and Informat -
499
- ics (ICACCI) . IEEE, 2017, pp. 1222 –1228.
500
- [17] R. Gilad -Bachrach, N. Dowlin, K. Laine, K. Lauter,
501
- M. Naehrig, and J. Wernsing, “Cryptonets: Applying
502
- neural networks to encrypted data with high throughput
503
- and accuracy,” in International Conference on Machine
504
- Learning , 2016, pp. 201–210.
505
- [18] W. Wang, M. Zhu, X. Zeng, X. Ye, and Y. Sheng,
506
- “Malware traffic classification using convolutional neural
507
- network for representation learning,” in 2017 Interna -
508
- tional Conference on Information Networking (ICOIN) .
509
- IEEE, 2017, pp. 712–717.
510
- [19] Z. Li, Z. Qin, K. Huang, X. Yang, and S. Ye, “Intru -
511
- sion detection using convolutional neural networks for
512
- representation learning,” in International Conference on
513
- Neural Information Processing . Springer, 2017, pp. 858–
514
- 866.
txt/2109.04223.txt DELETED
@@ -1,998 +0,0 @@
1
- Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
2
- KELM: KNOWLEDGE ENHANCED PRE-TRAINED LAN-
3
- GUAGE REPRESENTATIONS WITH MESSAGE PASSING
4
- ON HIERARCHICAL RELATIONAL GRAPHS
5
- Yinquan Lu 1,5, Haonan Lu 2,†, Guirong Fu 3, Qun Liu 4
6
- 1 Huawei Technologies Co., Ltd.   2 OPPO Guangdong Mobile Telecommunications Co., Ltd.
7
- 3 ByteDance   4 Huawei Noah’s Ark Lab   5 Shanghai AI Laboratory
8
9
10
- ABSTRACT
11
- Incorporating factual knowledge into pre-trained language models (PLM) such as
12
- BERT is an emerging trend in recent NLP studies. However, most of the existing
13
- methods combine the external knowledge integration module with a modified
14
- pre-training loss and re-implement the pre-training process on the large-scale
15
- corpus. Re-pretraining these models is usually resource-consuming, and difficult
16
- to adapt to another domain with a different knowledge graph (KG). Besides,
17
- those works either cannot embed knowledge context dynamically according to
18
- textual context or struggle with the knowledge ambiguity issue. In this paper, we
19
- propose a novel knowledge-aware language model framework based on fine-tuning
20
- process, which equips PLM with a unified knowledge-enhanced text graph that
21
- contains both text and multi-relational sub-graphs extracted from KG. We design
22
- a hierarchical relational-graph-based message passing mechanism, which allows
23
- the representations of injected KG and text to mutually update each other and can
24
- dynamically select ambiguous mentioned entities that share the same text1. Our
25
- empirical results show that our model can efficiently incorporate world knowledge
26
- from KGs into existing language models such as BERT, and achieve significant
27
- improvement on the machine reading comprehension (MRC) tasks compared with
28
- other knowledge-enhanced models.
29
- 1 I NTRODUCTION
30
- Pre-trained language models benefit from the large-scale corpus and can learn complex linguistic
31
- representation (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2020). Although they have achieved
32
- promising results in many NLP tasks, they neglect to incorporate structured knowledge for language
33
- understanding. Limited by implicit knowledge representation, existing PLMs are still difficult to
34
- learn world knowledge efficiently (Poerner et al., 2019; Yu et al., 2020). For example, hundreds
35
- of related training samples in the corpus are required to understand the fact “ban means an official
36
- prohibition or edict against something” for PLMs.
37
- By contrast, knowledge graphs (KGs) explicitly organize the above fact as a triplet “(ban, hypernyms,
38
- prohibition)” . Although domain knowledge can be represented more efficiently in KG form, entities
39
- with different meanings share the same text may happen in a KG (knowledge ambiguity issue). For
40
- example, one can also find “(ban, hypernyms, moldovan monetary unit)” in WordNet (Miller, 1995).
41
- Recently, many efforts have been made on leveraging heterogeneous factual knowledge in KGs to
42
- enhance PLM representations. These models generally adopt two methods: (1). Injecting pre-trained
43
- entity embeddings into PLM explicitly, such as ERNIE (Zhang et al., 2019), which injects entity
44
- embeddings pre-trained on a knowledge graph by using TransE (Bordes et al., 2013). (2). Implicitly
45
- This work is done when Yinquan Lu, Haonan Lu and Guirong Fu work at Huawei Technologies Co., Ltd.
46
- yCorresponding author
47
- 1Words or phrases in the text corresponding to certain entities in KGs are often named “entity mentions” .
48
- While entities in KGs that correspond to entity mentions in the text are often named “mentioned entities”
49
- arXiv:2109.04223v2 [cs.CL] 5 May 2022
50
- learning factual knowledge by adding extra pre-training tasks such as entity-level mask, entity-based
51
- replacement prediction, etc. (Wang et al., 2020c; Sun et al., 2020). Some studies use both of the
52
- above two methods such as CokeBERT (Su et al., 2020).
53
- However, as summarized in Table 4 of Appendix, most of the existing knowledge-enhanced PLMs
54
- need to re-pretrain the models based on an additional large-scale corpus, they mainly encounter
55
- two problems below: (1) Incorporating external knowledge during pretraining is usually resource-
56
- consuming and difficult to adapt to other domains with different KGs. By checking the third column
57
- of Table 4 in Appendix, one can see that most of the pretrain-based models use Wiki-related KG as
58
- their injected knowledge source. These models also use English Wikipedia as pre-training corpus.
59
- They either use an additional entity linking tool (e.g. TAGME (Ferragina & Scaiella, 2010)) to align
60
- the entity mention in the text to a single mentioned entity in a Wiki-related KG uniquely or directly
61
- treat hyperlinks in Wikipedia as entity annotations. These models depend heavily on the one-to-one
62
- mapping relationship between Wikipedia corpus and Wiki-related KG, thus they never consider
63
- handling knowledge ambiguity issue. (2) These models with explicit knowledge injection usually use
64
- algorithms like BILINEAR (Yang et al., 2015) to obtain pre-trained KG embeddings, which contain
65
- information about graph structure. Unfortunately, their knowledge context is usually static and cannot
66
- be embedded dynamically according to textual context.
67
- Several works (Qiu et al., 2019; Yang et al., 2019) concentrate on injecting external knowledge based
68
- on fine-tuning PLM on downstream tasks, which is much easier to change the injected KGs and adapt
69
- to relevant domain tasks. They either cannot consider multi-hop relational information, or struggle
70
- with knowledge ambiguity issue. How to fuse heterogeneous information dynamically based on the
71
- fine-tuning process on the downstream tasks and use the information of injected KGs more efficiently
72
- remains a challenge.
73
- Figure 1: Unified Knowledge-enhanced Text Graph (UKET)
74
- consists of three parts corresponding to our model: (1) KG
75
- only part, (2) Entity link to token graph, (3) Text only graph.To overcome the challenges mentioned
76
- above, we propose a novel frame-
77
- work named KELM , which injects
78
- world knowledge from KGs during
79
- the fine-tuning phase by building a
80
- Unified Knowledge-enhanced Text
81
- Graph (UKET) that contains both in-
82
- jected sub-graphs from external knowl-
83
- edge and text. The method extends
84
- the input sentence by extracting sub-
85
- graphs centered on every mentioned
86
- entity from KGs. In this way, we can
87
- get a Unified Knowledge-enhanced
88
- Text Graph as shown in Fig. 1, which is
89
- made of three kinds of graph: (1) The
90
- injected knowledge graphs, referred to
91
- as the “ KG only ” part; (2) The graph
92
- about entity mentions in the text and mentioned entities in KGs, referred to as the “ entity link to
93
- token ” part. Entity mentions in the text are linked with mentioned entities in KGs by string matching,
94
- so one entity mention may trigger several mentioned entities that share the same text in the injected
95
- KGs (e.g. “Ford” in Fig. 1); (3) The “ text only ” part, where the input text sequence is treated as a
96
- fully-connected word graph just like classical Transformer architecture (Vaswani et al., 2017).
97
- Based on this unified graph, we design a novel Hierarchical relational-graph-based Message Passing
98
- (HMP) mechanism to fuse heterogeneous information on the output layer of PLM. The implemen-
99
- tation of HMP is via a Hierarchical Knowledge Enhancement Module as depicted in Fig. 2, which
100
- also consists of three parts, and each part is designed for solving the different problems above: (1)
101
- For reserving the structure information and dynamically embedding injected knowledge, we utilize
102
- a relational GNN (e.g. rGCN (Schlichtkrull et al., 2017)) to aggregate and update representations
103
- of extracted sub-graphs for each injected KG (corresponding to the “KG only” part of UKET). All
104
- mentioned entities and their K-hop neighbors in sub-graphs are initialized by pre-trained vectors
105
- obtained from the classical knowledge graph embedding (KGE) method (we adopt BILINEAR here).
106
- In this way, knowledge context can be dynamically embedded, the structural information about the
107
- graph is also kept; (2) For handling knowledge ambiguity issue and selecting relevant mentioned
108
- entities according to the input context, we leverage a specially designed attention mechanism to
109
- 2Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
110
- Figure 2: Framework of KELM (left) and illustrates how to generate knowledge-enriched token
111
- embeddings (right).
112
- weight these ambiguous mentioned entities by using the textual representations of words/tokens to
113
- query the representations of their related mentioned entities in KGs (corresponding to the “entity
114
- link to token” graph of UKET). The attention score can help to select knowledge according to the
115
- input sentence dynamically. By concatenating the outputs of this step with the original outputs
116
- of PLM, we can get a knowledge-enriched representation for each token; (3) For further interac-
117
- tions between knowledge-enriched tokens, we employ a self-attention mechanism that operates on
118
- the fully-connected word graph (corresponding to the “text only” graph of UKET) to allow the
119
- knowledge-enriched representation of each token to further interact with others.
120
- We conduct experiments on the MRC task, which requires a system to comprehend a given text
121
- and answer questions about it. In this paper, to prove the generalization ability of our method,
122
- we evaluate KELM on both the extractive-style MRC task (answers can be found in a span of the
123
- given text) and the multiple-response-items-style MRC task (each question is associated with several
124
- choices for answer-options, the number of correct answer-options is not pre-specified). MRC is a
125
- challenging task and represents a valuable path towards natural language understanding (NLU). With
126
- the rapid increment of knowledge, NLU becomes more difficult since the system needs to absorb new
127
- knowledge continuously. Pre-training models on large-scale corpus is inefficient. Therefore, fine-
128
- tuning the knowledge-enhanced PLM on the downstream tasks directly is crucial in the application.2
129
- 2 R ELATED WORK
130
- 2.1 K NOWLEDGE GRAPH EMBEDDING
131
- We denote a directed knowledge graph as G(E;R), whereEandRare sets of entities and relations,
132
- respectively. We also define Fas a set of facts, a fact stored in a KG can be expressed as a triplet
133
- (h;r;t )2F, which indicates a relation rpointing from the head entity hto tail entity t, where
134
- h;t2E andr2R . KGE aims to extract topological information in KG and to learn a set of
135
- low-dimensional representations of entities and relations by knowledge graph completion task (Yang
136
- et al., 2015; Lu & Hu, 2020).
137
- 2.2 M ULTI -RELATIONAL GRAPH NEURAL NETWORK
138
- Real-world KGs usually include several relations. However, traditional GNN models such as
139
- GCN (Kipf & Welling, 2017), and GAT (Veli ˇckovi ´c et al., 2018) can only be used in the graph
140
- 2Code is available at here.
141
- 3Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
142
- with one type of relation. (Schlichtkrull et al., 2017; Haonan et al., 2019) generalizes traditional
143
- GNN models by performing relation-specific aggregation, making it possible to encode relational
144
- graphs. The use of multi-relational GNN makes it possible to encode injected knowledge embeddings
145
- dynamically in SKG and CokeBERT.
146
- 2.3 J OINT LANGUAGE AND KNOWLEDGE MODELS
147
- Since BERT was published in 2018, many efforts have been made for further optimization, basically
148
- focusing on the design of the pre-training process and the variation of the encoder. For studies of
149
- knowledge-enhanced PLMs, they also fall into the above two categories or combine both of them
150
- sometimes. Despite their success in leveraging external factual knowledge, the gains are limited by
151
- computing resources, knowledge ambiguity issue, and the expressivity of their methods for the fusion
152
- of heterogeneous information, as summarized in Table 4 of Appendix and the introduction part.
153
- Recent studies notice that the architecture of Transformer treats input sequences as fully-connected
154
- word graphs, thus some of them try to integrate injected KGs and textual context into a unified
155
- data structure. Here we argue that UKET in our KELM is different from the WK graph proposed
156
- in CoLAKE/K-BERT. These two studies heuristically convert textual context and entity-related
157
- sub-graph into input sequences, both entities and relations are treated as input words of the PLM,
158
- then they leverage a Transformer with a masked attention mechanism to encode those sequences
159
- from the embedding layer and pre-train the model based on the large-scale corpus. Unfortunately, it
160
- is not trivial for them to convert the second or higher order neighbors related to textual context (Su
161
- et al., 2020), the structural information about the graph is lost. UKET differs from the WK graph
162
- of CoLAKE/K-BERT in that, instead of converting mentioned entities, relations, and text into a
163
- sequence of words and feeding them together into the input layer of PLM (they unify text and KG into
164
- a sequence), UKET unifies text and KG into a graph. Besides, by using our UKET framework, the
165
- knowledge fusion process of KELM is based on the representation of the last hidden layer of PLM,
166
- making it possible to directly fine-tune the PLM on the downstream tasks without re-pretraining the
167
- model. SKG also utilizes relational GNN to fuse information of KGs and text representation encoded
168
- by PLM. However, SKG only uses GNN to dynamically encode the injected KGs, which corresponds
169
- to part one of Fig. 1. Outputs of SKG are made by directly concatenating outputs of graph encoder
170
- with the outputs of PLM. It cannot select ambiguous knowledge and forbids the interactions between
171
- knowledge-enriched tokens corresponding to part two and part three of Fig. 1, respectively. KT-NET
172
- uses a specially designed attention mechanism to select relevant knowledge from KGs. For example,
173
- it treats all synsets of entity mentions within the WN183as candidate KB concepts. This limits the
174
- ability of KT-NET to select the most relevant mentioned entities4. Moreover, the representations
175
- of injected knowledge are static in KT-NET, they cannot dynamically change according to textual
176
- context, the information about the original graph structure in KG is also lost.
177
- 3 M ETHODOLOGY
178
- The architecture of KELM is shown in Fig. 2. It consists of three main modules: (1) PLM Encoding
179
- Module; (2) Hierarchical Knowledge Enhancement Module; (3) Output Module.
180
- 3.1 PLM E NCODING MODULE
181
- This module utilizes PLM (e.g.BERT) to encode text to get textual representations for pas-
182
- sages and questions. An input example of the MRC task includes a paragraph and a ques-
183
- tion with a candidate answer, represented as a single sequence of tokens of the length n:
184
- T=f[CLS];Q;(A);[SEP ];P;[SEP ]g=ftign
185
- i=1, whereQ,AandPrepresent all tokens for ques-
186
- tion, candidate answer and paragraph, respectively5.[SEP ]and[CLS]are special tokens in BERT
187
- and defined as a sentence separator and a classification token, respectively. i-th token in the sequence
188
- is represented by ~ht
189
- i2Rdt, wheredtis the last hidden layer size of used PLM.
190
- 3A subset of WordNet.
191
- 4Refer the example given in the case study of KT-NET, the most relevant concept for the word “ban” is
192
- “forbidding_NN_1” (with the probability of 86.1%), but not “ban_NN_4”.
193
- 5Depending on the type of MRC task (extractive-style v.s. multiple-response-items-style), candidate answer
194
- A is not required in the sequence of tokens for the extractive-style MRC task.
195
- 4Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
196
- 3.2 H IERARCHICAL KNOWLEDGE ENHANCEMENT MODULE
197
- This module is the implementation of our proposed HMP mechanism to fuse information of textual
198
- and graph context. We will formally introduce graph construction for UKET, and the three sub-
199
- processes of HMP in detail in the following sections.
200
- 3.2.1 C ONSTRUCTION OF UKET G RAPH
201
- (1) Given a set with jQjelements:fGq
202
- k(Eq
203
- k;Rq
204
- k)gjQj
205
- q=1and input text, where jQjis the total number of
206
- injected KGs, and qindicates the q-th KG. We denote the set of entity mentions related to the q-th
207
- KG asXq=fxq
208
- igjXqj
209
- i=1, wherejXqjis the number of entity mentions in the text. The corresponding
210
- mentioned entities are shared by all tokens in the same entity mention. All mentioned entities
211
- Mq=fmq
212
- igjMqj
213
- i=1are linked with their relevant entity mentions in the text, where jMqjis the number
214
- of mentioned entities in the q-th KG. We define this "entity link to token graph" in Fig. 1 as
215
- Gq
216
- m(Eq
217
- m;Rq
218
- m), whereEq
219
- m=Xq[Mqis the union of entity mentions and their relevant mentioned
220
- entities,Rq
221
- mis a set with only one element that links mentioned entities and their relevant entity
222
- mentions. (2) For i-th mentioned entity mq
223
- iinMq, we retrieve all its K-hop neighbors fNx
224
- mq
225
- igK
226
- x=0
227
- from theq-th knowledge graph, where Nx
228
- mq
229
- iis a set ofi-th mentioned entity’s x-hop neighbors, hence
230
- we haveN0
231
- mq
232
- i=fmq
233
- ig. We define "KG-only graph": Gq
234
- s(Eq
235
- s;Rq
236
- s), whereEq
237
- s=SjMqj
238
- i=0SK
239
- x=0Nx
240
- mq
241
- iis the
242
- union of all mentioned entities and their neighbors within the K-hops sub-graph, and Rq
243
- sis a set
244
- of all relations in the extracted sub-graph of q-th KG. (3) The text sequence can be considered as
245
- a fully-connected word graph as pointed out previously. This “text-only graph” can be denoted as
246
- Gt(Et;Rt), whereEtis all tokens in text and Rtis a set with only one element that connects all
247
- tokens. Finally, we define the full hierarchical graph consisting of all three parts fGq
248
- sgjQj
249
- q=1,fGq
250
- mgjQj
251
- q=1,
252
- andGt, as Unified Knowledge-enhanced Text Graph (UKET).
253
- 3.2.2 D YNAMICALLY EMBEDDING KNOWLEDGE CONTEXT
254
- We use pre-trained vectors obtained from the KGE method to initialize representations of entities in
255
- Gq
256
- s(Eq
257
- s;Rq
258
- s). Considering the structural information of injected knowledge graph forgotten during
259
- training, we utilize jQjindependent GNN encoders (i.e. g1(:),g2(:)in Fig. 2, which is the case of
260
- injecting two independent KGs in our experiment setting) to dynamically update entity embeddings
261
- ofjQjinjected KGs. We use rGCN to model the multi-relational nature of the knowledge graph. To
262
- updatei-th node ofq-th KG inl-th rGCN layer:
263
- ~ sq(l+1)
264
- i =(X
265
- r2Rq
266
- sX
267
- j2Nr
268
- i1
269
- jNr
270
- ijWq(l)
271
- r~ sq(l)
272
- j)(1)
273
- WhereNr
274
- iis a set of neighbors of i-th node under relation r2Rq
275
- s.Wq(l)
276
- ris trainable weight matrix
277
- atl-th layer and ~ sq(l+1)
278
- i is the hidden state of i-th node at ( l+1)-th layer. After Lupdates,jQjsets
279
- of node embeddings are obtained. The output of the q-th KG can be represented as Sq2RjEq
280
- sjdq,
281
- wherejEq
282
- sjanddqare the numbers of nodes of extracted sub-graph and the dimension of pre-trained
283
- KGE, respectively.
284
- 3.2.3 D YNAMICALLY SELECTING SEMANTICS -RELATED MENTIONED ENTITIES
285
- To handle the knowledge ambiguity issue, we introduce an attention layer to weight these ambiguous
286
- mentioned entities by using the textual representations of tokens (outputs of Section 3.1) to query
287
- their semantics-related mentioned entities representations in KGs. Here, we follow the attention
288
- mechanism of GAT to update each entity mention embedding in Gq
289
- m:
290
- ~ xq
291
- i=(X
292
- j2Nq
293
- i q
294
- ijWq~ sq
295
- j)(2)
296
- Where~ sq
297
- jis the output embeddings from the q-th rGCN in the previous step. ~ xq
298
- iis the hidden state
299
- ofi-th entity mention xq
300
- iinXq, andNq
301
- iis a set of neighbors of xq
302
- iinGq
303
- m.Wq2Rdoutdinis a
304
- 5Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
305
- trainable weight matrix, we set din=dout=dq(thus~ xq
306
- i2Rdq).is a nonlinear activation function.
307
- q
308
- ijis the attention score that weights ambiguous mentioned entities in the q-th KG:
309
- q
310
- ij=exp(LeakyReLU (~ T
311
- q[Wq~ht0
312
- ijjWq~ sq
313
- j]))
314
- P
315
- k2Nq
316
- iexp(LeakyReLU (~ Tq[Wq~ht0
317
- ijjWq~ sq
318
- k]))(3)
319
- The representation ~ht
320
- iwith a dimension of dtis projected to the dimension of dq, before using it
321
- to query the related mentioned entity embeddings of Sq:~ht0
322
- i=Wq
323
- proj~ht
324
- i, whereWq
325
- proj2Rdqdt.
326
- ~ q2R2dqis a trainable weight vector. Tis the transposition operation and jjis the concatenation
327
- operation.
328
- Finally, we concatenate outputs of jQjKGs with textual context representation to get final knowledge-
329
- enriched representation:
330
- ~hk
331
- i= [~ht
332
- i;~ x1
333
- i;:::;~ xjQj
334
- i]2Rdt+d1++djQj (4)
335
- If tokentican’t match any entity in q-th KG (say ti=2Xq), we fill~ xq
336
- iin Eq.4 with zeros. Note
337
- that mentioned entities in KGs are not always useful, to prevent noise, we follow (Yang & Mitchell,
338
- 2017)’s work and add an extra sentinel node linked to each entity mention in Gq
339
- m. The sentinel node
340
- is initialized by zeros and not trainable, which is the same as the case of no retrieved entities in the
341
- KG. In this way, according to the textual context, KELM can dynamically select mentioned entities
342
- and avoid introducing knowledge noise.
343
- 3.2.4 I NTERACTION BETWEEN KNOWLEDGE -ENRICHED TOKEN EMBEDDINGS
344
- To allow knowledge-enriched tokens’ representations to propagate to each other in the text, we
345
- use a fully-connected word graph Gt, with knowledge-enriched representations from outputs of
346
- the previous step, and employ the self-attention mechanism similar to KT-NET to update token’s
347
- embedding. The final representation for i-th token in the text is ~hf
348
- i2R6(dt+d1++djQj).
349
- 3.3 O UTPUT MODULE
350
- 3.3.1 E XTRACTIVE -STYLE MRC TASK
351
- A simple linear transformation layer and softmax operation are used to predict start and end positions
352
- of answers. For i-th token, the probabilities to be the start and end position of answer span are:
353
- ps
354
- i=exp(wT
355
- s~hf
356
- i)
357
- nP
358
- j=1exp(wTs~hf
359
- j);pe
360
- i=exp(wT
361
- e~hf
362
- i)
363
- nP
364
- j=1exp(wTe~hf
365
- j), wherews;we2R6(dt+d1++djQj)are trainable vectors
366
- andnis the number of tokens. The training loss is calculated by the log-likelihood of the true start
367
- and end positions: L=1
368
- NNP
369
- i=1(logps
370
- ys
371
- i+logpe
372
- ye
373
- i), whereNis the total number of examples in the
374
- dataset,ys
375
- iandye
376
- iare the true start and end positions of i-th query’s answer, respectively. During
377
- inference, we pick the span (a;b)with maximum ps
378
- ape
379
- bwhereabas predicted anwser.
380
- 3.3.2 M ULTIPLE -RESPONSE -ITEMS -STYLE MRC TASK
381
- Since answers to a given question are independent of each other, to predict the correct probability
382
- of each answer, a fully connected layer followed by a sigmoid function is applied on the final
383
- representation of [CLS]token in BERT.
384
- 4 E XPERIMENTS
385
- 4.1 D ATASETS
386
- In this paper, we empirically evaluate KELM on both two types of MRC benchmarks in Super-
387
- GLUE (Wang et al., 2020a): ReCoRD (Zhang et al., 2018) (extractive-style) and MultiRC (Khashabi
388
- 6Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
389
- et al., 2018) (multiple-response-items-style). Detailed descriptions of the two datasets can be found
390
- in Appendix B. On both datasets, the test set is not public, one has to submit the predicted results to
391
- the organization to get the final test score. Since frequent submissions to probe the unseen test set are
392
- not encouraged, we only submit our best model once for each of the datasets, thus the statistics of the
393
- results (e.g., mean, variance, etc.) are not applicable. We use Exact Match (EM) and (macro-averaged)
394
- F1 as the evaluation metrics.
395
- External Knowledge We adopt knowledge sources the same as used in KT-NET: WordNet and
396
- NELL (Carlson et al., 2010). Representations of injected knowledge are initialized by resources
397
- provided by (Yang & Mitchell, 2017). The size of these embeddings is 100. We retrieve related
398
- knowledge from the two KGs in a given sentence and construct UKET graph (as shown in Section
399
- 3.2.1). More details about entity embedding and concepts retrieval are available in Appendix B.
400
- 4.2 E XPERIMENTAL SETUPS
401
- Baselines and Comparison Setting Because we use BERT largeas the base model in our method,
402
- we use it as our primary baseline for all tasks. For fair comparison, we mainly compare our results
403
- with two fine-tune-based knowledge-enhanced models: KT-NET and SKG, which also evaluate
404
- their results on ReCoRD with BERT largeas the encoder part. As mentioned in the original paper
405
- of KT-NET, KT-NET mainly focuses on the extractive-style MRC task. We also evaluate KT-NET
406
- on the multiple-response-items-style MRC task and compare the results with KELM. We evaluate
407
- our approach in three different KB settings: KELM WordNet ,KELM NELL, and KELM Both, to inject KG
408
- from WordNet, NELL, and both of the two, respectively (The same as KT-NET). Implementation
409
- details of our model are presented in Appendix C.
410
- Dev Test
411
- Model EM F1 EM F1
412
- BERT large 70.2 72.2 71.3 72.0
413
- SKG+BERT large 70.9 71.6 72.2 72.8
414
- KT-NET WordNet 70.6 72.8 - -
415
- KT-NET NELL 70.5 72.5 - -
416
- KT-NET BOTH 71.6 73.6 73.0 74.8
417
- KELM WordNet 75.4 75.9 75.9 76.5
418
- KELM NELL 74.8 75.3 75.9 76.3
419
- KELM Both 75.1 75.6 76.2 76.7
420
- Table 1: Result on ReCoRD.Dev Test
421
- Model EM F1 EM F1
422
- BERT large - - 24.1 70.0
423
- KT-NET
424
- BOTH 26.7 71.7 25.4 71.1
425
- KELM WordNet 29.2 70.6 25.9 69.2
426
- KELM NELL 27.3 70.4 26.5 70.6
427
- KELM Both 30.3 71.0 27.2 70.8
428
- Table 2: Result on MultiRC. [*] are from our
429
- implementation.
430
- 4.3 R ESULTS
431
- The results for the extractive-style MRC task and multiple-response-items-style MRC task are given in
432
- Table 1 and Table 2, respectively. The scores of other models are taken directly from the leaderboard
433
- of SuperGLUE6and literature (Qiu et al., 2019; Yang et al., 2019). In this paper, our implementation
434
- is based on a single model, and hence comparing with ensemble based models is not considered. Best
435
- results are labeled in bold and the second best are underlined.
436
- Results on the dev set of ReCoRD show that: (1) KELM outperforms BERT large, irrespective of
437
- which external KG is used. Our best KELM offers a 5.2/3.7 improvement in EM/F1 over BERT large.
438
- (2) KELM outperforms previous SOTA knowledge-enhanced PLM (KT-NET) by +3.8 EM/+2.3 F1 .
439
- In addition, KELM outperforms KT-NET significantly in all three KB settings. On the dev set of
440
- MultiRC, the best KELM offers a 3.6improvement in EM over KT-NET. Although the performance
441
- on F1 drop a little compared with KT-NET, we still get a gain of +2.9 (EM+F1) over the former
442
- SOTA model7.
443
- Results on the test set further demonstrate the effectiveness of KELM and its superiority over the
444
- previous works. On ReCoRD, it significantly outperforms the former SOTA knowledge-enhanced
445
- PLM (finetuning based model) by +3.2 EM/+1.9 F1 . And on MultiRC, KELM offers a 3.1/0.8
446
- improvement in EM/F1 over BERT large, and achieves a gain of +1.5 (EM+F1) over KT-NET.
447
- 6https://super.gluebenchmark.com/leaderboard (Nov.14th, 2021)
448
- 7The best model is chosen according to the EM+F1 score (same as KT-NET).
449
- 7Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
450
- 5 C ASE STUDY
451
- This section uses an example in ReCoRD to show how KELM avoids knowledge ambiguity issue
452
- and selects the most relevant mentioned entities adaptively w.r.t the textual context. Recall that given
453
- a tokenti, the importance of a mentioned entity mq
454
- jinq-th KG is scored by the attention weight
455
- q
456
- ijin Eq.2. To illustrate how KELM can select the most relevant mentioned entities, we analyze
457
- the example that was also used in the case study part of KT-NET. The question of this example is
458
- “Sudan remains a XXX-designated state sponsor of terror and is one of six countries subject to the
459
- Trump administration’s ban”, where the “XXX” is the answer that needs to be predicted. The case
460
- study in KT-NET shows the top 3 most relevant concepts from WordNet for the word “ban” are
461
- “forbidding.n.01”, “proscription.n.01”, and “ban.v.02”, with the weights of 0.861, 0.135, and 0.002,
462
- respectively. KT-NET treats all synsets of a word as candidate KG concepts, both “forbidding.n.01”
463
- and “ban.v.02” will be the related concepts of the word “ban” in the text. Although KT-NET can
464
- select relevant concepts and suppress the knowledge noise through its specially designed attention
465
- mechanism, we still observe two problems from the previous case study: (1) KT-NET cannot select
466
- the most relevant mentioned entities in KG that share the same string in the input text. (2) Lack
467
- of ability to judge the part of speech (POS) of the word (e.g. “ban.v.02” gets larger weights than
468
- “ban.n.04”).
469
- Word in text
470
- (prototype)The most relevant
471
- mentioned entity in
472
- WordNet (predicted)Golden mentioned entity
473
- ford ford.n.05 (0.56) ford.n.05
474
- pardon pardon.v.02 (0.86) pardon.v.02
475
- nixon nixon.n.01 (0.74) nixon.n.01
476
- lead lead.v.03 (0.73) lead.v.03
477
- outrage outrage.n.02 (0.62) outrage.n.02
478
- Table 3: Case study. Comparisons between the golden label
479
- with the most relevant mentioned entity in WordNet. The
480
- importance of selected mentioned entities is provided in the
481
- parenthesis.For KELM, by contrast, we focus on
482
- selecting the most relevant mentioned
483
- entities to solve the knowledge ambi-
484
- guity issue (based on the “entity link
485
- to token graph” part of UKET). For in-
486
- jecting WordNet, by allowing message
487
- passing on the extracted sub-graphs
488
- (“KG only” part of UKET), knowl-
489
- edge context can be dynamically em-
490
- bedded according to the textual con-
491
- text. Thus the neighbors’ information
492
- of mentioned entities in WordNet can
493
- be used to help the word in a text to
494
- correspond to a particular POS based on its context. The top 3 most relevant mentioned entities in
495
- WordNet for the word “ban” in the above example are “ban.n.04”, “ban.v.02”, and “ban.v.01”, with
496
- the weights of 0.715, 0.205, and 0.060, respectively.
497
- To vividly show the effectiveness of KELM, we analyze ambiguous words in the motivating example
498
- show in Fig. 1 (The example comes from ReCoRD):
499
- “President Ford then pardoned Richard Nixon, leading to a further firestorm of outrage. ”
500
- Table. 3 presents 5 words in the above passage. For each word, the most relevant mentioned entity in
501
- WordNet with the highest score is given. The golden mentioned entity for each word is labeled by
502
- us. Definitions of mentioned entities in WordNet that correspond to the word examples are listing in
503
- Table 5 of Appendix.
504
- 6 C ONCLUSION
505
- In this paper, we have proposed KELM for MRC, which enhances PLM representations with
506
- structured knowledge from KGs based on the fine-tuning process. Via a unified knowledge-enhanced
507
- text graph, KELM can embed the injected knowledge dynamically, and select relevant mentioned
508
- entities in the input KGs. In the empirical analysis, KELM shows the effectiveness of fusing external
509
- knowledge into representations of PLM and demonstrates the ability to avoid knowledge ambiguity
510
- issue. Injecting emerging factual knowledge into PLM during finetuning without re-pretraining
511
- the whole model is quite important in the application of PLMs and is still barely investigated.
512
- Improvements achieved by KELM over vanilla baselines indicate a potential direction for future
513
- research.
514
- 8Accepted at the ICLR 2022 Workshop on Deep Learning on Graphs for Natural Language Processing
515
- ACKNOWLEDGEMENTS
516
- The authors thank Ms. X. Lin for insightful comments on the manuscript. We also thank Dr. Y . Guo
517
- for helpful suggestions in parallel training settings. We also thank all the colleagues in AI Application
518
- Research Center (AARC) of Huawei Technologies for their supports.
519
- REFERENCES
- Antoine Bordes, Nicolas Usunier, Alberto García-Durán, J. Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, 2013.
- A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E.R. Hruschka Jr., and T.M. Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the Conference on Artificial Intelligence (AAAI), pp. 1306–1313. AAAI Press, 2010.
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://www.aclweb.org/anthology/N19-1423.
- P. Ferragina and Ugo Scaiella. TAGME: on-the-fly annotation of short text fragments (by Wikipedia entities). Proceedings of the 19th ACM International Conference on Information and Knowledge Management, 2010.
- Lu Haonan, Seth H. Huang, Tian Ye, and Guo Xiuyan. Graph Star Net for generalized multi-task learning, 2019.
- Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: a challenge set for reading comprehension over multiple sentences. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
- Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.
- Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. K-BERT: Enabling language representation with knowledge graph, 2019a.
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach, 2019b.
- Edward Loper and Steven Bird. NLTK: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP '02, pp. 63–70, USA, 2002. Association for Computational Linguistics. doi: 10.3115/1118108.1118117. URL https://doi.org/10.3115/1118108.1118117.
- Haonan Lu and Hailin Hu. DensE: An enhanced non-abelian group representation for knowledge graph embedding, 2020.
- George A. Miller. WordNet: A lexical database for English. Communications of the ACM, 38:39–41, 1995.
- Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. Knowledge enhanced contextual word representations, 2019.
- Nina Poerner, Ulli Waltinger, and Hinrich Schütze. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. ArXiv, abs/1911.03681, 2019.
- Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, and Jun Zhao. Machine reading comprehension using structural knowledge graph-aware network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5896–5901, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1602. URL https://www.aclweb.org/anthology/D19-1602.
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020.
- Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text, 2016.
- Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium, Technical Report SS-11-06, Stanford, California, USA, March 21-23, 2011. AAAI, 2011. URL http://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418.
- Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks, 2017.
- Yusheng Su, Xu Han, Zhengyan Zhang, Peng Li, Zhiyuan Liu, Yankai Lin, Jie Zhou, and Maosong Sun. CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models, 2020.
- Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, and Zheng Zhang. CoLAKE: Contextualized language and knowledge embedding, 2020.
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017.
- Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks, 2018.
- Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems, 2020a.
- Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep Graph Library: A graph-centric, highly-performant package for graph neural networks, 2020b.
- Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. K-Adapter: Infusing knowledge into pre-trained models with adapters, 2020c.
- Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. KEPLER: A unified model for knowledge embedding and pre-trained language representation, 2020d.
- Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. HuggingFace's Transformers: State-of-the-art natural language processing, 2020.
- Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model, 2019.
- Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. LUKE: Deep contextualized entity representations with entity-aware self-attention, 2020.
- An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2346–2357, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1226. URL https://www.aclweb.org/anthology/P19-1226.
- Bishan Yang and Tom Mitchell. Leveraging knowledge bases in LSTMs for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1436–1446, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1132. URL https://www.aclweb.org/anthology/P17-1132.
- Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases, 2015.
- Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding, 2020.
- Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. JAKET: Joint pre-training of knowledge graph and language understanding, 2020.
- Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension, 2018.
- Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. ERNIE: Enhanced language representation with informative entities, 2019.
- A SUMMARY AND COMPARISON OF RECENT KNOWLEDGE-ENHANCED PLMS
- Table 4 shows a brief summary and comparison of recent knowledge-enhanced PLMs. Most recent work concentrates on injecting external knowledge graphs during the pre-training phase, which makes these models inefficient at injecting external knowledge (e.g., LUKE takes about 1000 V100 GPU days to re-pretrain the RoBERTa-based PLM). In addition, nearly all of them use an additional entity linking tool to uniquely align entities in Wikidata to the entity mentions in the pre-training corpus (English Wikipedia). These methods do not attempt to resolve the knowledge ambiguity problem.
- B DATASET AND KNOWLEDGE GRAPH DETAILS
- ReCoRD (an acronym for the Reading Comprehension with Commonsense Reasoning Dataset) is a large-scale dataset for extractive-style MRC requiring commonsense reasoning. There are 100,730, 10,000, and 10,000 examples in the training, development (dev), and test sets, respectively. An example of ReCoRD consists of three parts: passage, question, and answer. The passage is formed by the first few paragraphs of an article from CNN or Daily Mail, with named entities recognized and marked. The question is a sentence from the rest of the article, with a missing entity specified as the golden answer. The model needs to find the golden answer among the entities marked in the passage. Questions that can be easily answered by pattern matching are filtered out. By the design of the data collection process, one can see that answering the questions requires external background knowledge and the ability to reason.
- MultiRC (Multi-Sentence Reading Comprehension) is a multiple-response-items-style MRC dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. Each example of MultiRC includes a question associated with several answer-option choices, and the number of correct answer-options is not pre-specified. The correct answer is not required to be a span in the text. The dataset consists of about 10k questions (6k multi-sentence questions), and about 60% of this data makes up the training/dev data. Paragraphs in the dataset have diverse provenance, being extracted from 7 different domains such as news, fiction and historical text, and hence are expected to be more complicated in their contents as compared to single-domain datasets.
- Model | Downstream Task | Used KGs | Need Pre-train | Dynamically Embedding KG Context | Inject External KG's Representations | Support Multi-relational | Support Multi-hop | Handle Knowledge Ambiguity Issue | Base Model
- ERNIE (Zhang et al., 2019) | Glue, Ent Typing, Rel CLS | Wikidata | Yes (MLM, NSP, Ent Mask task) | No | Inject pretrained entity embeddings (TransE) explicitly | No (only entity embedding) | No | No (anchored entity mention to the unique id of Wikidata) | BERT base
- K-BERT (Liu et al., 2019a) | Q&A, NER, Sent CLS | CN-DBpedia, HowNet, MedicalKG | Optional (MLM, NSP) | No | No | Yes (treat relations as words) | No | No (designed ATT mechanism can solve KN issue) | BERT base
- KnowBERT (Peters et al., 2019) | Rel Extraction, Ent Typing | CrossWikis, WordNet | Yes (MLM, NSP, Ent Linking task) | No | Inject both pretrained entity embeddings (TuckER) and entity definitions explicitly | No (only entity embedding) | No | Yes (weighed entity embeddings shared the same text) | BERT base
- WKLM (Xiong et al., 2019) | Q&A, Ent Typing | Wikidata | Yes (MLM, Ent replacement task) | No | No | No | No | No (anchored entity mention to the unique id of Wikidata) | BERT base
- K-Adapter (Wang et al., 2020c) | Q&A, Ent Typing | Wikidata, Dependency Parsing | Yes (MLM, Rel prediction task) | No | No | Yes (via Rel prediction task during pretraining) | No | No (anchored entity mention to the unique id of Wikidata) | RoBERTa large
- KEPLER (Wang et al., 2020d) | Ent Typing, Glue, Rel CLS, Link Prediction | Wikidata | Yes (MLM, Link prediction task) | Yes | Inject embeddings of entity and relation descriptions explicitly | Yes (via link prediction task during pretraining) | No | No (anchored entity mention to the unique id of Wikidata) | RoBERTa base
- JAKET (Yu et al., 2020) | Rel CLS, KGQA, Ent CLS | Wikidata | Yes (MLM, Ent Mask task, Ent category prediction, Rel type prediction) | Yes | Inject embeddings of entity descriptions | Yes (via Rel type prediction during pretraining) | Yes | No (anchored entity mention to the unique id of Wikidata) | RoBERTa base
- CoLAKE (Sun et al., 2020) | Glue, Ent Typing, Rel Extraction | Wikidata | Yes (MLM, Ent Mask task, Rel type prediction) | Yes | No | Yes (treat relations as words) | No | No (anchored entity mention to the unique id of Wikidata) | RoBERTa base
- LUKE (Yamada et al., 2020) | Ent Typing, Rel CLS, NER, Q&A | Ent from Wikipedia | Yes (MLM, Ent Mask task) | No | No | No | No | No (treat hyperlinks in Wikipedia as entity annotations) | RoBERTa large
- CokeBERT (Su et al., 2020) | Rel CLS, Ent Typing | Wikidata | Yes (MLM, NSP, Ent Mask task) | Yes | Inject pretrained entity embeddings (TransE) explicitly | Yes (via S-GNN to encode KG context dynamically) | Yes | No (anchored entity mention to the unique id of Wikidata) | RoBERTa large
- SKG (Qiu et al., 2019) | MRC | WordNet, ConceptNet | No | Yes | Inject pretrained entity embeddings (BILINER) explicitly | Yes (via multi-relational GNN to encode KG context dynamically) | Yes | No | BERT large
- KT-NET (Yang et al., 2019) | MRC | WordNet, NELL | No | No | Inject pretrained entity embeddings (BILINER) explicitly | No (only entity embedding) | No | Yes (dynamically selecting KG context) | BERT large
- KELM | MRC | WordNet, NELL | No | Yes | Inject pretrained entity embeddings (BILINER) explicitly | Yes (via multi-relational GNN to encode KG context dynamically) | Yes | Yes (dynamically selecting related mentioned entities) | BERT large
- Table 4: A brief summary and comparison of recent knowledge-enhanced PLMs. The full names of some abbreviations are as follows. MLM: masked language model, NSP: next sentence prediction, Ent: entity, Rel: relation, CLS: classification, Sent: sentence, ATT: attention. Comments/descriptions of features are written in parentheses. Desired properties are written in bold.
- WordNet contains 151,442 triplets with 40,943 synsets and 18 relations. We look up mentioned entities in WordNet by a string matching operation, and link all tokens in the same word to the retrieved mentioned entities (tokens are produced by the BERT tokenizer). Then, we extract all 1-hop neighbors for each mentioned entity and construct sub-graphs. In this paper, our experimental results are based on the 1-hop case. However, our framework generalizes easily to the multi-hop case, and we leave this for future work.
- NELL contains 180,107 entities and 258 concepts. We link entity mentions to the whole KG, and return the associated concepts.
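To make the sub-graph construction concrete, the following minimal Python sketch shows one way the WordNet lookup could be done with NLTK's WordNet interface; it is an illustration under our own assumptions, not the authors' released code, and only two of the relations used in the paper appear here (derivationally_related_form is a lemma-level pointer in NLTK).

import nltk
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def one_hop_subgraph(surface_form):
    # string-match the token against WordNet and collect 1-hop neighbours
    triples = []
    for syn in wn.synsets(surface_form):                       # candidate mentioned entities
        for rel, getter in (('hypernym', syn.hypernyms), ('hyponym', syn.hyponyms)):
            for neigh in getter():
                triples.append((syn.name(), rel, neigh.name()))
    return triples

print(one_hop_subgraph('ban')[:5])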
- C IMPLEMENTATION DETAILS
- Our implementation is based on HuggingFace (Wolf et al., 2020) and DGL (Wang et al., 2020b). For all three settings of KELM, the parameters of the BERT large encoding layer are initialized with the pre-trained model released by Google. The other trainable parameters in HMP are randomly initialized. The total number of trainable parameters of KELM is 340.4M (roughly the same as BERT large, which has 340M parameters). Since including all neighbors around the mentioned entities of WordNet is not efficient, for simplicity we use the top 3 most common relations in WordNet in our experiments (i.e., hyponym, hypernym, derivationally_related_form). For both datasets, we use a "two stage" fine-tuning strategy to achieve our best performance; the FullTokenizer built into BERT is used to segment input words into wordpieces.
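The "two stage" strategy amounts to freezing the PLM first and unfreezing it later; the sketch below illustrates this under our assumptions (the class name and the hmp attribute are hypothetical stand-ins, not the actual KELM modules).

import torch
from transformers import BertModel

class KELMSketch(torch.nn.Module):
    # hypothetical stand-in: a PLM encoder plus a small randomly initialised knowledge module
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-large-uncased')
        self.hmp = torch.nn.Linear(self.bert.config.hidden_size, self.bert.config.hidden_size)

model = KELMSketch()

# stage 1: freeze the PLM and train only the knowledge module (e.g. Adam, lr 1e-3 on ReCoRD)
for p in model.bert.parameters():
    p.requires_grad = False
stage1_optimizer = torch.optim.Adam(model.hmp.parameters(), lr=1e-3)

# stage 2: unfreeze everything and fine-tune the PLM and HMP jointly (lr 2e-5)
for p in model.bert.parameters():
    p.requires_grad = True
stage2_optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)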
- For ReCoRD, the maximum length of an answer during inference is set to 30, and the maximum length of a question is set to 64. Questions longer than that are truncated. The maximum length of the input sequence T (see footnote 8) is set to 384. Input sequences longer than that are segmented into chunks with a stride of 128. Fine-tuning our model on ReCoRD costs about 18 hours on 4 V100 GPUs with a batch size of 48. In the first stage, we freeze the parameters of BERT and use the Adam optimizer with a learning rate of 1e-3 to train our knowledge module. The maximum number of training epochs of the first stage is 10. The purpose of this stage is to provide a good weight initialization for our HMP. In the second stage, the pre-trained BERT parameters and our HMP part are fine-tuned together. The maximum number of training epochs is chosen from {4, 6, 8}. The learning rate is set to 2e-5 with a warmup over the first 6% of max steps, followed by linear decay until the maximum number of epochs. For both stages, early stopping is applied according to the best EM+F1 score on the dev set, evaluated every 500 steps.
- 8 Refer to the PLM Encoding Module of the Methodology Section.
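The warmup-then-linear-decay schedule described above can be expressed compactly; this is a generic PyTorch sketch, not the authors' implementation.

from torch.optim.lr_scheduler import LambdaLR

def warmup_linear_decay(optimizer, total_steps, warmup_frac=0.06):
    # linear warmup over the first 6% of steps, then linear decay to zero
    warmup_steps = max(1, int(total_steps * warmup_frac))
    def lr_lambda(step):
        if step < warmup_steps:
            return step / warmup_steps
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return LambdaLR(optimizer, lr_lambda)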
- For MultiRC, the maximum length of the input sequence T is set to 256. The summed length of the question (Q) and of a candidate answer (A) is not limited. The paragraph (P) is truncated to fit the maximum input sequence length. Fine-tuning KELM on MultiRC needs about 12 hours on 4 V100 GPUs with a batch size of 48. For the first-stage fine-tuning, the learning rate is 1e-4 and the maximum number of training epochs is 10. For the second stage, the maximum number of training steps is chosen from {10000, 15000, 20000}. The learning rate is set to 2e-5 with a warmup over the first 10% of max steps.
- D SUPPLEMENTATION OF THE CASE STUDY SECTION
- We provide definitions of the top 3 most relevant mentioned entities in WordNet that correspond to the word examples discussed in the Case Study Section. Descriptions are obtained using NLTK (Loper & Bird, 2002). From the motivating example in the case study section, we can see that KELM correctly selects the most relevant mentioned entities in the KG.
- Word in text (prototype) | Mentioned entity in WordNet | Definition
- ban | ban.n.04 (0.72) | an official prohibition or edict against something
- ban | ban.v.02 (0.21) | prohibit especially by legal means or social pressure
- ban | ban.v.01 (0.06) | forbid the public distribution of (a movie or a newspaper)
- ford | ford.n.05 (0.56) | 38th President of the United States; appointed vice president and succeeded Nixon when Nixon resigned (1913-)
- ford | ford.n.07 (0.24) | a shallow area in a stream that can be forded
- ford | ford.v.01 (0.08) | cross a river where it's shallow
- pardon | pardon.v.02 (0.86) | a warrant granting release from punishment for an offense
- pardon | sentinel (0.10) | -
- pardon | pardon.n.02 (0.04) | grant a pardon to
- nixon | nixon.n.01 (0.74) | vice president under Eisenhower and 37th President of the United States; resigned after the Watergate scandal in 1974 (1913-1994)
- nixon | sentinel (0.26) | -
- lead | lead.v.03 (0.73) | tend to or result in
- lead | lead.n.03 (0.12) | evidence pointing to a possible solution
- lead | lead.v.04 (0.05) | travel in front of; go in advance of others
- outrage | outrage.n.02 (0.62) | a wantonly cruel act
- outrage | sentinel (0.38) | -
- Table 5: Definitions of mentioned entities in WordNet corresponding to the word examples in the case study. The importance of each mentioned entity is given in parentheses. "sentinel" is meaningless and is used to avoid knowledge noise.
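The glosses in Table 5 can be reproduced directly from NLTK; the importance weights come from KELM's attention and are not computed here, and the first-three ordering below is NLTK's default rather than the relevance ranking in the table.

from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

for word in ('ban', 'ford', 'pardon', 'nixon', 'lead', 'outrage'):
    for syn in wn.synsets(word)[:3]:
        print(word, syn.name(), '-', syn.definition())
# e.g. ford ford.n.05 - 38th President of the United States; ...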
- E EXPERIMENT ON A COMMONSENSE CAUSAL REASONING TASK
- To further explore the generalization ability of KELM, we also evaluate our method on COPA (Roemmele et al., 2011) (Choice of Plausible Alternatives), which is also a benchmark dataset in SuperGLUE and can be used for evaluating progress in open-domain commonsense causal reasoning. COPA consists of 1000 questions, split equally into development and test sets of 500 questions each. Each question is composed of a premise and two alternatives, and the task is to select the alternative that more plausibly has a causal relation with the premise. As with the previous two MRC tasks, the development set is publicly available, but the test set is hidden; one has to submit the predictions for the test set to SuperGLUE to retrieve the final test score. Since the implementation of KELM is based on BERT large, we use it as our baseline for the comparison. The result of BERT large is taken directly from the SuperGLUE leaderboard. Table 6 shows the experimental results. The injected KG is WordNet, and we use accuracy as the evaluation metric.
- The large improvement over the baseline on this task demonstrates that the knowledge in WordNet is indeed helpful for BERT in improving its generalization to this out-of-domain downstream task.
- Model | dev | test
- BERT large | - | 70.6
- KELM (BERT large, WordNet) | 76.1 | 78.0
- Table 6: Performance comparison on COPA. The effectiveness of injecting knowledge (WordNet) is shown.
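For reference, a COPA question can be scored with a vanilla multiple-choice head as in the simplified sketch below; it ignores the cause/effect question type, uses the well-known illustrative COPA example as input, and is not the KELM model itself.

import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertForMultipleChoice.from_pretrained('bert-large-uncased')

premise = 'The man broke his toe. What was the cause of this?'
alternatives = ['He got a hole in his sock.', 'He dropped a hammer on his foot.']
enc = tokenizer([premise] * 2, alternatives, return_tensors='pt', padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}   # shape (batch=1, num_choices=2, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits                    # shape (1, 2)
prediction = logits.argmax(dim=-1).item()              # index of the more plausible alternative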
- F KELM: A FRAMEWORK OF FINETUNE-BASED MODEL-AGNOSTIC KNOWLEDGE-ENHANCED PLM
- We also implement KELM on top of RoBERTa large, which has a similar number of trainable parameters to BERT large but uses nearly 10 times as much training corpus. Since the RoBERTa results on the SuperGLUE leaderboard are based on ensembling, we also finetune RoBERTa large on ReCoRD ourselves to produce single-model results. A comparison of the results can be found in Table 7, where an improvement is again visible. However, that improvement is not as significant as the one we observed for BERT large. The reasons are two-fold: (1) Passages in ReCoRD are collected from articles in CNN/Daily Mail, while BERT is pre-trained on BookCorpus and English Wikipedia. RoBERTa uses not only the corpus used in BERT (16 GB), but also an additional corpus collected from the CommonCrawl News dataset (76 GB). The ReCoRD dataset is therefore in-domain for RoBERTa but out-of-domain for BERT. It appears that the improvements of KELM from injecting general KGs (e.g., WordNet) on in-domain downstream tasks are not as large as on out-of-domain downstream tasks. A similar phenomenon can also be observed in the SQuAD 1.1 experiment (refer to Appendix G). (2) The same external knowledge (WordNet, NELL) cannot help RoBERTa large much: since RoBERTa is pre-trained on a much larger corpus than BERT, the knowledge in WordNet/NELL has already been learned by RoBERTa.
- Model | Dev EM | Dev F1 | Test EM | Test F1
- PLM w/o external knowledge: BERT large | 70.2 | 72.2 | 71.3 | 72.0
- PLM w/o external knowledge: RoBERTa large | 87.9 | 88.4 | 88.4 | 88.9
- Knowledge-enhanced PLM (finetune-based): KELM (BERT large, both KGs) | 75.1 | 75.6 | 76.2 | 76.7
- Knowledge-enhanced PLM (finetune-based): KELM (RoBERTa large, both KGs) | 88.2 | 88.7 | 89.1 | 89.6
- Knowledge-enhanced PLM (pretrain-based): LUKE | 90.8 | 91.4 | 90.6 | 91.2
- Table 7: Comparison of the effectiveness of injecting external knowledge between BERT and RoBERTa. [*] Results are from our implementation.
- We also list the results of LUKE (Yamada et al., 2020) in Table 7. LUKE is a pretrain-based knowledge-enhanced PLM and uses Wiki-related golden entities (a one-to-one mapping) as the injected knowledge source (about 500k entities; see footnote 9). It has 128M more parameters than the vanilla RoBERTa. As summarized in Table 4 in the main text, its pre-training task is also different from RoBERTa's. Although LUKE obtains better results than vanilla RoBERTa and KELM, it needs 16 NVIDIA Tesla V100 GPUs and its training takes approximately 30 days. Relying on hyperlinks in Wikipedia as golden entity annotations, lacking the flexibility to adapt to external knowledge from other domains, and needing re-pretraining whenever new knowledge is incorporated are limitations that hinder its applicability.
- G EXPERIMENT ON SQUAD 1.1
- SQuAD 1.1 (Rajpurkar et al., 2016) is a well-known extractive-style MRC dataset that consists of questions created by crowdworkers for Wikipedia articles. It contains 100,000+ question-answer pairs on 536 articles. We implement KELM based on BERT large, and compare our results on the development set of SQuAD 1.1 with KT-NET (the best result of KT-NET is based on injecting WordNet only). Results are shown in Table 8.
- 9 For KELM, we only use the 40,943 entities in WordNet and the 258 concepts in NELL.
- Model | Dev EM | Dev F1
- PLM w/o external knowledge: BERT large | 84.4 | 91.2
- Knowledge-enhanced PLM (finetune-based): KT-NET (BERT large, WordNet) | 85.1 | 91.7
- Knowledge-enhanced PLM (finetune-based): KELM (BERT large, WordNet) | 84.7 | 91.5
- Table 8: Performance comparison on the development set of SQuAD 1.1.
- The KELM results show an improvement over vanilla BERT. Both BERT and RoBERTa use English Wikipedia as part of their pretraining corpus. Since SQuAD is also created from Wikipedia, it is an in-domain downstream task for both BERT and RoBERTa (while the ReCoRD dataset is in-domain for RoBERTa but out-of-domain for BERT). This explains why RoBERTa achieves a much larger improvement over BERT on ReCoRD (71.3 → 88.4 in EM on the test set) than on SQuAD 1.1 (84.1 → 88.9). The rest of the improvement comes from RoBERTa using 10 times as much training corpus as BERT and from the different pre-training strategies they use.
- Interestingly, we find that the performance of KELM on SQuAD 1.1 is sub-optimal compared with KT-NET. As mentioned in the last paragraph of the Related Work Section, KT-NET treats all synsets of entity mentions within WN18 as candidate KB concepts. Via a specially designed attention mechanism, KT-NET can directly use all 1-hop neighbors of the mentioned entities. Although this limits the ability of KT-NET to select the most relevant mentioned entities (as discussed in the Case Study Section), the information of these neighbors can be considered directly. Using the neighbors of the mentioned entities indirectly, via the HMP mechanism, makes it possible for KELM to dynamically embed the injected knowledge and to select semantics-related mentioned entities. However, SQuAD is an in-domain downstream task for BERT, and the problem of ambiguous word meanings can be alleviated by pretraining the model on an in-domain corpus. Compared with KT-NET, the longer message passing path in KELM may lead to a sub-optimal improvement on this in-domain task.
- H FURTHER DISCUSSIONS ABOUT THE NOVELTY W.R.T. SKG/KT-NET
- The UKET defined in KELM consists of three subgraphs in a hierarchical structure; each subgraph corresponds to one sub-process of our proposed HMP mechanism and solves one of the problems presented in the Hierarchical Knowledge Enhancement Module part of the Methodology Section. SKG only uses a GNN to dynamically encode the extracted KG, which corresponds to the first part of UKET; it cannot solve the knowledge ambiguity issue and forbids interactions among knowledge-enriched tokens. KT-NET defines a graph similar to the third part of UKET, but the first and second subgraphs of UKET are absent. The second subgraph of UKET is independent of the ideas of KT-NET and SKG, so KELM is not a simple combination of these two methods. We are the first to unify text and KG into a single graph and to propose this hierarchical message passing framework to incorporate the two heterogeneous sources of information. SKG/KT-NET can be interpreted as parts of an ablation study of the components of KELM: the SKG result is an ablation with only the component related to the first subgraph of UKET, while KT-NET only contains the third subgraph with a modified knowledge integration module. KELM uses a dedicatedly designed HMP mechanism that lets the information of farther neighbors be considered. However, the longer information passing path makes it less efficient than KT-NET: in our experiments, KELM takes about 30% more training time than KT-NET on both ReCoRD and MultiRC.
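To make the three-stage structure concrete, the PyTorch sketch below is our own schematic approximation of the UKET stages (KG context into mentioned entities, entities into tokens with a sentinel for ambiguity, then knowledge-enriched token representations); it is not the released KELM code and replaces the relational GNN with simple mean pooling.

import torch

def hmp_sketch(tok_h, ent_e, neigh_e, tok2ent_mask):
    # tok_h: (T, d) token states from the PLM; ent_e: (E, d) mentioned-entity embeddings
    # neigh_e: (E, N, d) their 1-hop neighbours; tok2ent_mask: (T, E) token-entity links
    # stage 1 (first subgraph): propagate KG context into each mentioned entity
    ent_ctx = ent_e + neigh_e.mean(dim=1)                  # mean pooling stands in for the GNN
    # stage 2 (second subgraph): attend from tokens to entities, with a sentinel vector
    # standing for "no relevant entity" to mitigate knowledge ambiguity
    sentinel = torch.zeros(1, ent_ctx.size(1))
    cand = torch.cat([ent_ctx, sentinel], dim=0)           # (E+1, d)
    mask = torch.cat([tok2ent_mask, torch.ones(tok_h.size(0), 1)], dim=1)
    scores = tok_h @ cand.t() + (mask - 1.0) * 1e9         # mask out unlinked entities
    att = torch.softmax(scores, dim=-1)
    knowledge = att @ cand                                  # (T, d)
    # stage 3 (third subgraph): knowledge-enriched token representations
    return torch.cat([tok_h, knowledge], dim=-1)            # (T, 2d)

out = hmp_sketch(torch.randn(5, 8), torch.randn(3, 8), torch.randn(3, 4, 8),
                 torch.tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.],
                               [0., 0., 1.], [0., 0., 0.]]))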
- I LIMITATIONS AND FURTHER IMPROVEMENTS OF KELM
- The limitations of KELM are two-fold: (1) The meanings of mentioned entities in different KGs that share the same entity mentions in the text may conflict with each other. Although HMP can help to select the most relevant mentioned entities within a single KG, there is no mechanism to guarantee consistent selections across different KGs. (2) The knowledge-enriched representation in Eq. 3 is obtained by simple concatenation of the embeddings from different KGs; incorporating too much knowledge may divert the sentence from its correct meaning (the knowledge noise issue). We expect these two potential improvements to be a promising avenue for future research.
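As described above, the Eq. 3 fusion amounts to a plain concatenation; schematically (our own notation, not the paper's code):

import torch

def knowledge_enriched(tok_h, wordnet_vec, nell_vec):
    # Eq. 3-style fusion: concatenate the PLM token states with the knowledge
    # vectors retrieved from each injected KG (WordNet, NELL)
    return torch.cat([tok_h, wordnet_vec, nell_vec], dim=-1)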
- J FURTHER ANALYSIS AND DISCUSSION
- KELM incorporates knowledge from KGs into the representations in the last hidden layer of the PLM (refer to the Methodology Section). It is essentially a model-agnostic, KG-agnostic, and task-agnostic framework for enhancing language model representations with factual knowledge from KGs: it can be used to enhance any PLM, with any injected KG, on any downstream task. Besides the two Q&A-related MRC tasks discussed in the main text, we also evaluate KELM on COPA and SQuAD 1.1 based on BERT large; results are presented in Appendix E and Appendix G, respectively. To demonstrate that KELM is a model-agnostic framework, we also implement KELM based on RoBERTa large and evaluate it on ReCoRD; this experiment is presented in Appendix F. The improvements achieved by KELM over all vanilla base PLMs indicate the effectiveness of injecting external knowledge.
- However, the improvements of KELM over RoBERTa on ReCoRD and over BERT on SQuAD 1.1 are marginal compared with the ones on ReCoRD/MultiRC/COPA (BERT large based). The reason is that pretraining a model on in-domain unlabeled data can already boost performance on the corresponding downstream tasks. Passages in ReCoRD are collected from articles in CNN/Daily Mail, while BERT is pre-trained on BookCorpus and English Wikipedia. RoBERTa uses not only the corpus used in BERT (16 GB), but also an additional corpus collected from the CommonCrawl News dataset (76 GB). ReCoRD is therefore in-domain for RoBERTa but out-of-domain for BERT. Similarly, SQuAD 1.1 is created from Wikipedia, so it is an in-domain downstream task for both BERT and RoBERTa. This partially explains why RoBERTa achieves a much larger improvement over BERT on ReCoRD (71.3 → 88.4 in EM on the test set) than on SQuAD 1.1 (84.1 → 88.9). A similar analysis can also be found in T5 (Raffel et al., 2020). From our empirical results, we conclude that a general KG (e.g., WordNet) does not help much for PLMs pretrained on in-domain data, but it can still improve the performance of the model when the downstream task is out-of-domain. Further detailed analysis can be found in our appendix.
- Finding a popular NLP task/dataset that is not related to the training corpus of modern PLMs is difficult. Pre-training on a large-scale corpus is always beneficial if we have unlimited computational resources and plenty of in-domain corpus. However, it has become evident that simple finetuning of a PLM is not sufficient for domain-specific applications. KELM provides another option when a large-scale in-domain corpus is not available and one wants to incorporate incremental, domain-related structural knowledge into domain-specific applications.
txt/2109.05783.txt DELETED
@@ -1,15 +0,0 @@
- The State of the Art when using GPUs in Devising Image Generation Methods Using Deep Learning Yasuko Kawahata1 1 Department of Mathematical Informatics Graduate School of Information Science and Technology, The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan. Abstruct Deep learning is a technique for machine learning using multi-layer neural networks. It has been used for image synthesis and image recognition, but in recent years, it has also been used for various social detection and social labeling. In this analysis, we compared (1) the number of Iterations per minute between the GPU and CPU when using the VGG model and the NIN model, and (2) the number of Iterations per minute by the number of pixels when using the VGG model, using an image with 128 pixels. When the number of pixels was 64 or 128, the processing time was almost the same when using the GPU, but when the number of pixels was changed to 256, the number of iterations per minute decreased and the processing time increased by about three times. In this case study, since the number of pixels becomes core dumping when the number of pixels is 512 or more, we can consider that we should consider improvement in the vector calculation part. If we aim to achieve 8K highly saturated computer graphics using neural networks, we will need to consider an environment that allows computation even when the size of the image becomes even more highly saturated and massive, and parallel computation when performing image recognition and tuning. 1. About Deep Learning(2015~2016) Deep learning is a technique for machine learning using multilayer neural networks. In the past, technologies such as fuzzy networks and neural networks have been used for image synthesis [1] and image recognition [2] that incorporate elements of opportunity learning, but in recent years, they have quickly become one of the technologies that have attracted attention. Due to the progress of learning algorithms and the availability of a large amount of input data, the performance of neural networks has improved dramatically compared to conventional neural networks. Many fields where the market is expected to expand in the future, such as automatic driving, IoT (Internet of Things), and robotics, have a close relationship with machine learning or artificial intelligence, and are expected to develop into future applications. The general recognition seems to be caused by the acquisition of DeepMind by Google, and the appearance of Google, IBM, and Caffe. In the 2014s, simple character recognition and image recognition packages such as Theano and Pylearn2 appeared, but after 2015, more powerful image recognition and processing packages using CUDA from NIVIDIA started to appear. From Google, APIs such as Cloud Vision API [3], Tensoflow, MXnet, and H2O, as well as packages and libraries that can be implemented more easily using Python and R have also appeared. Chainer, which uses the same Python library as Tensorflow, has also been introduced. Unlike Caffe, Chainer and Tensorflow are relatively easy to integrate with CUDA libraries, but they are even faster because they can be integrated with Python's acceleration package Cyton. It is also highly versatile in that it can be used with cuDNN, a GPU library for deep learning developed by NIVIDIA. METAMIND has started to use it in the field of x-ray image diagnosis in medicine[4]. 
Furthermore, in Japan, UEI announced the DEEPstation DK-1, a personal computer dedicated to deep learning, and Alpaca has started distributing APIs for image recognition and tools for predicting financial data[5, 6]. In January 2016, Microsoft also distributed toolkits such as CNTK, and we can say that it is becoming more familiar. As for Microsoft, in a project called "Catapult," we have been conducting research on improving computational efficiency by attaching FPGAs to servers in data centers. In the research of Catapult, they are not only building an accelerator engine by attaching FPGAs to the CPUs of servers, but also challenging the structure of a two-dimensional torus network using FPGAs to handle the communication between nodes. In a report in February 2016, we succeeded in accelerating the image recognition flow (FPGA) in Google's Tensorflow by a factor of although the performance of FPGA is usually 1/5 of that of GPU, it can be compensated by scaling up. In deep learning, a CNN (Convolutional Neural Network), which mimics a neural system, is used, and a huge amount of computation is required to evaluate and train the input. We aim to improve the performance by one order of magnitude or more, with a cost increase of less than 30% and a power increase of less than 10%, by using the abundant resources of FPGAs to perform these calculations. Microsoft is aiming to popularize deep learning by providing the FPGA accelerator as a software library for users to use in the future[7]. In the field of deep learning, it can be said that there is a lot of discussion on how to speed up the learning of a large number of images, how to compete with other models for more detailed image recognition, and how to improve the development environment. 2. Expression techniques using Deep Learning(2015~2016) As described in the previous section, the idea of using multilayer neural networks such as neural networks itself has been around for a long time, and these techniques have been used not only in industrial applications but also in the field of artistic expression[9]. With the advocacy of Genda et al. (1980), a new technique for expressing graphics has emerged, in which mathematical models are devised based on graphics such as images to be generated, using phenomena and things that occur in society and nature as input [9-13]. In addition, representation techniques that generate new images by evolutionary computation reflecting neural network operations in real time have been used in situations such as the self-organizing CG generation by Kawaguchi (2001)[14]. In addition, the environment on GPUs is becoming more and more important in the generation of graphics that require large-scale operations. Since the dawn of computer graphics, Genda et al(1980). have proposed a number of techniques for computer animation, such as the production of animation using random scan displays that take advantage of machine power[13]. And new graphic expressions based on the application of deep learning techniques are expected to increase in the future[15]. GPU-based operations are also necessary in the field of image recognition, which has outstanding results in deep learning compared to conventional machine learning. Nowadays, optimized GPU libraries have appeared to speed up image recognition techniques for deep learning such as cuDNN [16]. 
In this report, we compare the results of CPU and GPU benchmarks to see the difference in graphics generation between CPU and GPU benchmarks, in order to explore the potential of deep learning in image recognition, image generation, and graphics representation. By conducting this test, we would like to consider the scale, data size and machine for graphic representation using deep learning and batch plotting of analysis results of large-scale data in the future. 3. Consider which package to Use Cases(2015~2016) As described in Section 1, there are many libraries, packages, and toolkits related to deep learning, but in this case, we used Chainer, which has been reported to be an effective benchmark when using GPUs among the packages available until 2015[17]. Under the environment of a 6-core Intel Core i7-5930K CPU @ 3.50GHz + NVIDIA Titan X + Ubuntu 14.04 x86_64, the following results were obtained, and we decided to use Chainer in this report. Model Alexnet[18] Overfeat[19] OxfordNet[20] Googlenet V1[21] Chainer 177(sec) 620(sec) 885(sec) 687(sec) TensorFlow 277(sec) 843(sec) 1510(sec) 1084(sec) Caffe (native) 324(sec) 823(sec) 1068(sec) 1935(sec) Table 1: Execution times (in seconds) for various deep learning models under 6-core Intel Core i7-5930K CPU @ 3.50GHz + NVIDIA Titan X + Ubuntu 14.04 x86_64. )[18-21]. 4. About Chainer(2015~2016) Chainer is a library for implementing neural networks developed by Preferred Networks [17]. This package is characterized by its support for CUDA. It is also possible to implement various types of neural networks, such as convolutional and recurrent, using Python. 5. Points to consider when installing Chainer(2015~2016) When you install Chainer, you need to install some related libraries. You also need to install CUDA and cuDNN. Note that you need to register an account for cuDNN at the CUDA official website, answer a questionnaire about your usage, and download the cuDNN for your environment [16]. In addition, since Chainer depends on the Python library, it is also important to have an environment where the pip command can be used as follows. $ su $ yum install python-setuptools $ wget http://peak.telecommunity.com/dist/ez_setup.py $ /usr/local/bin/python ez_setup.py $ easy_install pip $ easy_install numpy $ easy_install six $ easy_install Mako $ easy_install scipy $ pip install scikit-learn lspci | grep -i nvidia Table 2: Codes for configuration (excerpt) Before using GPU and cuDNN in Chainer, you should check your graphics board with the above command. Also, make sure to run Devicequery before running the program. The following assumes that CUDA7.5 is installed. Note that we recommend you to install Chainer 1.5 or later. The reason is that Chainer1.5 or later has no dependency with Pycuda, and thus there is no difficulty in passing through cuDNN in the later versions. 
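Once the environment above is configured, switching a Chainer model to the GPU takes only a few lines; the snippet below is an illustrative sketch for Chainer 1.x-era APIs, with a toy layer standing in for the network used in this report.

import numpy as np
from chainer import cuda, links as L, Variable

assert cuda.available, 'CUDA is not visible to Chainer: check nvcc and LD_LIBRARY_PATH'
cuda.get_device(0).use()                        # select GPU 0 (cf. lspci / deviceQuery above)

model = L.Linear(128 * 128 * 3, 10)             # toy layer standing in for the real network
model.to_gpu()                                  # copy parameters to GPU memory

x = Variable(cuda.to_gpu(np.zeros((1, 128 * 128 * 3), dtype=np.float32)))
y = model(x)                                    # the forward pass now runs on the GPU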
CPATH=$CPATH:/usr/local/cuda-7.5/include PATH=$PATH:/usr/local/cuda-7.5/bin CUDA_ROOT=/usr/local/cuda-7.5 export LD_LIBRARY_PATH=/usr/local/cuda-7.5:$LD_LIBRARY_PATH which nvcc ./deviceQuery sudo sh -c " cd /root/NVIDIA_CUDA-7.5_Samples/1_Utilities/deviceQuery/; ./deviceQuery " CPATH=$CPATH:/usr/local/cuda-7.5/lib64 PATH=$PATH:/usr/local/cuda-7.5/lib64 CUDA_ROOT=/usr/local/cuda-7.5/lib64 export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH CPATH=$CPATH:/usr/local/cuda-7.5/bin PATH=$PATH:/usr/local/cuda-7.5/bin CUDA_ROOT=/usr/local/cuda-7.5/bin export LD_LIBRARY_PATH=/usr/local/cuda-7.5/bin:$LD_LIBRARY_PATH CPATH=$CPATH:/usr/local/cuda-7.5/include PATH=$PATH:/usr/local/cuda-7.5/include CUDA_ROOT=/usr/local/cuda-7.5/include export LD_LIBRARY_PATH=/usr/local/cuda-7.5/include:$LD_LIBRARY_PATH Table 3: Codes for setting up the system (excerpt) 6. Models Handled(2015~2016) In this report, we compare the computational results of the NIN model and the VGG model [22] based on A Neural Algorithm of Artistic Stlye proposed as a method for image generation in deep learning, and the benchmarks of CPU, GPU, and cuDNN. The basis of our model is the Convolutional Learning Model. The basis of our model is to generate an image using Convolutional Neural Network.
- Figure 1: Picture of the model flow.
- The numbers in Figure 1 mean [number of channels * height * width]. The basic model is based on a Convolutional Neural Network, and is generated from parameters that are intermediate between the main features of the two images. Assume that the input images are two images of 128*128 pixels, and the number of channels is 3 since the input images are RGB tri-color. As the layers increase, the number of channels increases, and the intermediate values of line thickness and line thinness are acquired and redrawn for representation. In this figure, we used 256*256 as an example, but we found that the algorithm works even if the resolution is changed, and the processing time becomes longer as the size is larger. In this algorithm, the output from the intermediate layers 1-4 is used.
- Figure 2: Channel flow The following are the generated images.
- Figure 3: Generated image (VGG model)
- In addition, the correlation between the three channels output by Stylenet is calculated, and the features of the entire image, as well as the thickness and fineness of the lines, are represented in each intermediate layer.
- Figure 4: Flow of feature acquisition. By outputting these results, the computational results are reflected in the generated image. The idea of the overall objective function is to correlate the difference between the image to be synthesized and the image of the middle layer to be minimized, and the difference in color and line between the Stylenet and the image, and to make a vector.
- Figure 5: The flow of obtaining image features. In this way, the difference between the former and the latter style images can be measured in both shallow and deep layers, and information such as fine brush strokes can be extracted in the shallow layers, while larger spatial patterns can be extracted in the deep layers.
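Since the method follows A Neural Algorithm of Artistic Style [24], the per-layer "correlation" described above is the Gram matrix of the channel activations, and the overall objective mixes a content term with a style term. The NumPy sketch below is illustrative only; the weighting factors are assumptions, not values from this report.

import numpy as np

def gram_matrix(feature):
    # feature: (channels, height, width) activations from one intermediate layer
    c, h, w = feature.shape
    f = feature.reshape(c, h * w)
    return f.dot(f.T) / (c * h * w)             # channel-to-channel correlation

def style_content_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=100.0):
    # content term: match deep-layer activations of the generated and content images
    content_loss = np.mean((gen_feats[-1] - content_feats[-1]) ** 2)
    # style term: match Gram matrices of the intermediate layers (1-4)
    style_loss = sum(np.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
                     for g, s in zip(gen_feats, style_feats))
    return alpha * content_loss + beta * style_loss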
-
- Figure 6: The overall flow As shown in Fig. 6, in the initial stage, the image was generated by gradually repeating the calculation, starting from a figure reflecting only RGB information, gradually reflecting the special quantity of lines, and obtaining the intermediate quantity of line depth. 7. Calculation Results The environment used in this report is Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz + NVIDIA Corporation GM206 [GeForce GTX 960] + CentOS Linux release 7.2.1511 (Core). In this section, we compare (1) the number of Iterations per minute between the GPU and the CPU when the VGG model is used and when the NIN model is used, and (2) the number of Iterations per minute by the number of pixels when the VGG model is used, using an image with 128 pixels. Through (1) and (2), we discussed the number of calculations per minute on the GPU according to each model and the number of pixels handled.
- Figure 7: Comparison of GPU and CPU when using the VGG model and the NIN model (iterations per minute for each model, using an image with 128 pixels; x-axis: time in minutes, y-axis: iteration count).
- Time (minutes) | VGG (CPU) iterations | VGG (GPU) iterations | NIN (CPU) iterations | NIN (GPU) iterations
- 1 | 71.4 | 721.9 | 108.9 | 980
- 2 | 142.9 | 1443.7 | 217.8 | 1960
- 3 | 214.3 | 2165.6 | 326.7 | 2940
- 4 | 285.7 | 2887.4 | 435.6 | 3920
- 5 | 357.1 | 3609.3 | 544.4 | 4900
- Table 4: Comparison of the number of iterations per minute between the GPU and CPU when using the VGG model and when using the NIN model (iterations up to 5 minutes, using an image with 128 pixels). Using an image with 128 pixels, we compared the number of iterations per minute between the GPU and CPU for the VGG model and the NIN model. In Figure 7 and Table 4, the cuDNN results are omitted: cuDNN processing is supposed to improve the speed, yet the runs were faster in GPU-only mode.
- As can be seen from Figures 2 and 3, VGG is a model that strictly reflects the accuracy of the generated image and the intermediate amount of pixels, so its computation was somewhat slower than NIN, with fewer iterations per minute even in GPU mode. The open issue is how to speed up the computation of VGG, which will be a theme for future work.
- Figure 8: Comparison by image size when using the VGG model (number of iterations per minute; x-axis: time in minutes, y-axis: iteration count).
- Time (minutes) | VGG 64 px (CPU) | VGG 64 px (GPU) | VGG 128 px (CPU) | VGG 128 px (GPU) | VGG 256 px (CPU) | VGG 256 px (GPU)
- 1 | 94.2 | 847.8 | 71.4 | 721.9 | 32.0 | 286.7
- 2 | 188.3 | 1695.5 | 142.9 | 1443.7 | 64.1 | 573.3
- 3 | 282.6 | 2543.3 | 214.3 | 2165.6 | 96.1 | 860.1
- 4 | 376.8 | 3391.0 | 285.7 | 2887.4 | 128.1 | 1146.7
- 5 | 471.0 | 4238.8 | 357.1 | 3609.3 | 160.1 | 1433.4
- Table 5: Comparison of iterations per minute by pixel count when using the VGG model (iterations up to 5 minutes).
- VGG(64:CPU)VGG(64:GPU)VGG(128:CPU)VGG(128:GPU)VGG(256:CPU)VGG(256:GPU)We also compared the number of Iterations per minute by the number of pixels when using the VGG model. In the above comparison of models and environments in the VGG model in Fig. 8 and Table 5, we omitted the cuDNN processing in this example because it should have improved the speed, but it was faster in the GPU mode only. This is an issue to be addressed in the future. However, when the number of pixels was changed to 256, the number of Iteations per minute was reduced, and the processing time was increased by a factor of about three. As a future prospect, when we consider the case of generating highly saturated images with the size of 1920 pixels or more, we can consider that we should consider improvements in the vector calculation part, because in this case, the core dumping occurs when the number of pixels is 512 or more. 8. Prospect In this report, we were able to generate images with clearer material detection by using GPU operations, but the problem is that the deep learning flow itself requires a large amount of computation time, and the image size of 512 pixels causes core dumping in the environment of this report. In the future, if we aim to achieve 8K highly saturated computer graphics using neural networks such as those of Genda(1990) and Kawaguchi(2000), it will be necessary to consider the construction of an environment in which computation is possible even when the size of the image becomes even more saturated and massive, and parallel computation when performing image recognition and tuning.In the future, we can expect to use GPUs for machine learning of big data using deep learning, and we would like to discuss image generation based on image learning in this report, as well as application examples. Acknowledgments This research is the Result of Research Project 15J06703(Japan Society for the Promotion of Science PD) "Mathematical empirical analysis of past complex social phenomena using historical materials". In addition, the Graduate School of Information Science and Engineering, the University of Tokyo, who provided the GPU analysis environment for this research. I would like to thank Professor NAKATA Toshiyuki of the Social ICT Research Center. References [1] Keita Nakamura, et al. "Construction of the underwater environment of an object based on the laws of physics." Proceedings of the Academic Lecture Meeting of the Japan Society for Precision Engineering 2008.0 (2008): 1001-1002. [2] Hitoshi Yatomi, and Masafumi Hagiwara. "General-purpose image recognition using adaptive fuzzy inference neural networks and active search methods (video / multimedia and pattern recognition / understanding)." Technical Report of the Society of Electronics, Information and Communication Engineers. IE, Image Engineering 103.206 (2003): 17-22. [3]Cloud Vision(Ref:2016/01) https://cloud.google.com/vision/ [4] $ 8 Million from Deep Learning Entrepreneur Marc Benioff et al.(2014) (Ref:2016/01) http://thebridge.jp/2014/12/deep-learning-startup-metamind-launches-with-8m-from-benioff-khosla-ventures-pickupnews [5]Alpaca(Ref:2016/01) http://www.alpaca.ai/ [6] Deep Station(Ref:2016/01) https://deepstation.jp/ [7]Microsoft News (Ref:2016/01) http://research.microsoft.com/apps/catalog/default.aspx?t=news [8] Lane, Nicholas D., et al. "An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices." 
Proceedings of the 2015 International Workshop on Internet of Things towards Applications. ACM, 2015. [9] Tomohiro Ohira, Etsuo Genda. "Graphics as seen from the click of ITECa (Computer collection of the 1980 Spring Tournament)" Design Research 31 (1980): 26-27. [10] Takashi Yoneyama, Etsuo Genda, Kunio Kondo. "Vision is the parameter of painting." Painting Studies 43.4 (2009): 13-21. [11] Takeshi Shibamoto, Etsuo Genda. "Application of Laser Beam to Display (25th Research Laser Program Collection)" Design Study 28 (1978): 48-49. [12] Minoru Takayama, Etsuo Genda. "Shaping metaball" A shape expression that shapes a "metaball". JSSD Japanese Society for Design Research Competition 51.0 (2004): C04-C04. [13] Etsuo Genda. "Computer Animation Virtual Research (2): Creation of Random Scan Display Animation (29th Research Display Conference Online Collection)" Design Studies 39 (1982): 15-16. [14] Takahiro Harada et al. "Simulation of particle method simulation on the GPU of the scope" Doctor of Computational Engineering Proceedings 13.1 (2008): 289-290. [15] Yoichiro Kawaguchi. "Self-organizing CG art-new" Jemotion "mother (re-digital & art)." Collage 3 (2001): 15-19. [16] https://developer.nvidia.com/cudnn [17]Chainer(Ref:2015/12) http://chainer.org/ [18]Nguyen, Anh, Jason Yosinski, and Jeff Clune. "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images." Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE, 2015. [19] Sermanet, Pierre, et al. "Overfeat: Integrated recognition, localization and detection using convolutional networks." arXiv preprint arXiv:1312.6229 (2013). [20] Kiros, Ryan, Ruslan Salakhutdinov, and Richard S. Zemel. "Unifying visual-semantic embeddings with multimodal neural language models." arXiv preprint arXiv:1411.2539 (2014). [21] Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. [22] VGG Convolutional Neural Networks Practical(Ref:2016/01) http://www.robots.ox.ac.uk/~vgg/practicals/cnn/ [23] Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." arXiv preprint arXiv:1312.4400 (2013). [24]Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. "A neural algorithm of artistic style." arXiv preprint arXiv:1508.06576 (2015). [25]Belanger, David, and Andrew McCallum. "Structured Prediction Energy Networks." arXiv preprint arXiv:1511.06350 (2015). [26]Lin, Tsung-Yu, and Subhransu Maji. "Visualizing and Understanding Deep Texture Representations." arXiv preprint arXiv:1511.05197 (2015).
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
txt/2109.11406.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2110.10486.txt DELETED
@@ -1,1576 +0,0 @@
1
- 1
2
- A TinyML Platform for On-Device Continual
3
- Learning with Quantized Latent Replays
4
- Leonardo Ravaglia, Manuele Rusci, Davide Nadalini, Alessandro Capotondi,
5
- Francesco Conti, Member, IEEE , Luca Benini, Fellow, IEEE
6
- Abstract —In the last few years, research and development on
7
- Deep Learning models & techniques for ultra-low-power devices
8
- – in a word, TinyML – has mainly focused on a train-then-
9
- deploy assumption, with static models that cannot be adapted to
10
- newly collected data without cloud-based data collection and fine-
11
- tuning. Latent Replay-based Continual Learning (CL) techniques
12
- [1] enable online, serverless adaptation in principle, but so far
13
- they have still been too computation- and memory-hungry for
14
- ultra-low-power TinyML devices, which are typically based on
15
- microcontrollers. In this work, we introduce a HW/SW platform
16
- for end-to-end CL based on a 10-core FP32 -enabled parallel
17
- ultra-low-power (PULP) processor. We rethink the baseline La-
18
- tent Replay CL algorithm, leveraging quantization of the frozen
19
- stage of the model and Latent Replays (LRs) to reduce their
20
- memory cost with minimal impact on accuracy. In particular,
21
- 8-bit compression of the LR memory proves to be almost lossless
22
- (-0.26% with 3000LR) compared to the full-precision baseline
23
- implementation, but requires 4 less memory, while 7-bit can
24
- also be used with an additional minimal accuracy degradation
25
- (up to 5%). We also introduce optimized primitives for forward
26
- and backward propagation on the PULP processor, together
27
- with data tiling strategies to fully exploit its memory hierarchy,
28
- while maximizing efficiency. Our results show that by combining
29
- these techniques, continual learning can be achieved in practice
30
- using less than 64MB of memory – an amount compatible with
31
- embedding in TinyML devices. On an advanced 22nm prototype
32
- of our platform, called VEGA , the proposed solution performs on
33
- average 65× faster than a low-power STM32L4 microcontroller,
34
- being 37× more energy efficient – enough for a lifetime of 535h
35
- when learning a new mini-batch of data once every minute.
36
- Index Terms —TinyML, Continual Learning, Deep Neural
37
- Networks, Parallel Ultra-Low-Power, Microcontrollers.
38
- I. I NTRODUCTION
39
- The internet-of-Things ecosystem is made possible by
40
- miniaturized and smart end-node devices, which can sense the
41
- surrounding environment and take decisions based on the in-
42
- formation inferred from sensor data. Because of their tiny form
43
- L. Ravaglia, M. Rusci, D. Nadalini, F. Conti, and L. Benini are
44
- with the Department of Electrical, Electronic and Information Engineering
45
- (DEI) of the University of Bologna, Viale del Risorgimento 2, 40136
46
- Bologna, Italy (e-mail: fleonardo.ravaglia2, manuele.rusci, d.nadalini, f.conti,
47
48
- A. Capotondi is with the Department of Physics, Informatics and Mathe-
49
- matics of the University of Modena and Reggio Emilia, Via Campi 213/A,
50
- 41125 Modena, Italy (e-mail: [email protected]).
51
- L. Benini is also with the integrated Systems Laboratory (IIS) of
52
- ETH Z ¨urich, ETZ, Gloriastrasse 35, 8092 Z ¨urich, Switzerland (e-mail:
53
54
- This work was supported in part by the ECSEL Horizon 2020 project
55
- AI4DI (Artificial intelligence for Digital Industry, g.a. no. 826060); and by EU
56
- Horizon 2020 project BonsAPPs (g.a. no. 101015848). We also acknowledge
57
- CINECA for the availability of high-performance computing resources and
58
- support awarded under the ISCRA initiative through the NAS4NPC project.
59
- Manuscript received May 15, 2021.
- factor and the requirement for low cost and battery-operated
60
- nature, these smart networked devices are severely constrained
61
- in terms of memory capacity and maximum performance and
62
- use small Microcontroller Units (MCUs) as their main on-
63
- board computing device [2]. At the same time, there is an ever-
64
- growing interest in deploying more accurate and sophisticated
65
- data analytics pipelines, such as Deep Learning (DL) inference
66
- models, directly on IoT end-nodes. These competing needs
67
- have given rise in the last few years to a specific branch of
68
- machine learning (ML) and DL research called TinyML [3] –
69
- focused on shrinking and compressing top-accurate DL models
70
- with respect to the target device characteristics.
71
- The primary limitation of the current generation of TinyML
72
- hardware and software is that it is mostly focused on inference .
73
- The inference task can be strongly optimized by quantizing [4]
74
- or pruning [5] the trained model. Many vendors of AI-oriented
75
- system-on-chips (SoCs) provide deployment frameworks to
76
- automatically translate DL inference graphs into human-
77
- readable or machine code [6]. This train-then-deploy design
78
- process rigidly separates the learning phase from the runtime
79
- inference, resulting in a static intelligence model design flow,
80
- incapable of adapting to phenomena such as data distribution
81
- shift: a shift in the statistical properties of real incoming data
82
- vs the training set that often impacts applications, causing the
83
- smart sensors platform to be unreliable when deployed in the
84
- field [7].
85
- Even if the algorithms themselves are technically capable
86
- to learn and adapt to new incoming data, the update process
87
- can only be handled from a centralized service, running on
88
- the cloud or host servers [8]. In this regard, the original
89
- training dataset would have to be enriched with the newly
90
- collected dataset, and the model would have to be retrained
91
- from scratch on the enlarged dataset, adapting to the new
92
- data without forgetting the original information [8]. Such an
93
- adaptive mechanism belongs to the rehearsal category and
94
- requires the storage of the full training set, often amounting
95
- to gigabytes of data. Additionally, large amounts of data
96
- have to be collected in a centralized fashion by network
97
- communication, resulting in potential security and privacy
98
- concerns, as well as issues of radio power consumption and
99
- network reliability in non-urban areas.
100
- We argue that a robust and privacy-aware solution to these
101
- challenges is enabling future smart IoT end-nodes to Life-
102
- long Learning, also known as Continual Learning [9](CL):
103
- the capability to autonomously adapt to the ever-changing
104
- surrounding environment by learning continually (only) from
105
- incoming data without forgetting the original knowledge – a
106
- phenomenon known as catastrophic forgetting [10]. Despite
107
- many approaches exist to learn from data [11], recently the
108
- focus has moved to improve the recognition accuracy of DL
109
- models because of their superior capabilities, considering
110
- new data belonging to known classes ( domain-incremental
111
- CL) or new classes (class-incremental CL) [12], [13]. The
112
- CL techniques recently proposed are grouped in three cate-
113
- gories: architectural, regularization and memory (or rehearsal)
114
- strategies. The architectural approaches specialize a subset
115
- of parameters for every (new and old) task but require the
116
- task-ID information at inference time, indicating the nature of
117
- current task in a multi-head network, and therefore they are
118
- not suitable for class or domain incremental continual learning.
119
- Concerning these latter scenarios, memory-based approaches,
120
- which preserve samples from previous tasks for replaying,
121
- perform better than regularization techniques, which simply
122
- address catastrophic forgetting by imposing constraints on the
123
- network parameter update at low memory cost [13]–[15]. This
124
- finding was confirmed during the recent CL competition at
125
- CVPR2020 [16], where the best entry leveraged on rehearsal
126
- based strategies.
127
- The main drawback of memory-based CL approaches con-
128
- cerns the high memory overhead for the storage of previous
129
- samples: the memory requirement can potentially grow over
130
- time preventing the applicability of these methods at the tiny
131
- scale, e.g. [17]. To address this problem, Pellegrini et al. [1]
132
- have recently introduced Continual Learning based on Latent
133
- Replays (LRs). The idea behind this is to combine a few old
134
- data points taken from the original training set, but encoded
135
- into a low-dimensional latent space to reduce the memory
136
- cost, with the new data for the incremental learning tasks.
137
- Hence, the previous knowledge is retained by means of Latent
138
- Replays samples, i.e. the intermediate feature maps of the DL
139
- model inference, selected so that they require less space with
140
- respect to the input data (up to 48× smaller compared to raw
141
- images [1]). This strategy also leads to reduced computational
142
- cost: the Latent intermediate layer splits the network in a
143
- frozen stage at the front and an adaptive stage at the back,
144
- and only layers in the latter need to be updated. So far, LR-
145
- based Continual Learning has been successfully prototyped
146
- on high-performance embedded devices such as smartphones,
147
- including a Snapdragon-845 CPU running Android OS in the
148
- power envelope of a few Watts1. On the contrary, in this
149
- work, we focus on IoT applications and TinyML devices, with
150
- 100× tighter power constraints and 1000× smaller memories
151
- available.
152
- In our preliminary work [18], we proposed the early design
153
- concept of a HW/SW platform for Continual Learning based
154
- on the Parallel Ultra Low Power (PULP) paradigm [19], and
155
- assessed the computational and memory costs to deploy Latent
156
- Replay-based CL algorithms.
157
- In this paper, we complete and extend that effort by in-
158
- troducing several novel contributions from the software stack,
159
- system integration and algorithm viewpoint. To the best of our
160
- knowledge, we present the first TinyML processing platform
161
- 1https://hothardware.com/reviews/qualcomm-snapdragon-845-
162
- performance-benchmarks
- and framework capable of on-device CL, together with the
163
- design flow required to sustain learning tasks within a few
164
- tens of mW of power envelope (>10× lower than state-of-
165
- the-art solutions). The proposed platform is based on VEGA ,
166
- a recently introduced end-node System-on-Chip prototype
167
- fabricated in 22nm technology [20]. Unlike traditional low-
168
- power and flexible MCUs design, VEGA exploits explicit
169
- data parallelism, by featuring a multi-core SW programmable
170
- RISC-V cluster with shared Floating Point Units (FPUs), DSP-
171
- oriented ISA and optimized memory management to enable
172
- the learning paradigm on low-end IoT devices. Additionally,
173
- to gain minimum-cost on-device retention of Latent Replays
174
- and better enable deployment on an ultra-low-power platform,
175
- we extend the LR algorithm proposed by Pellegrini et al. [1]
176
- to work with a fully quantized frozen front-end and compress
177
- Latent Replays using quantization down to 7 bits, with a small
178
- accuracy drop (almost lossless for 8-bit) when compared to the
179
- single-precision floating-point datatype ( FP32 ) on the Core50
180
- CL classification benchmark.
181
- In summary, the contributions of this work are:
182
- 1) We extend the LR algorithm to work with an 8-bit
183
- quantized and frozen front-end without impact on the
184
- CL process and to support LR compression with quan-
185
- tization, reducing up to 4.5× the memory needed for
186
- rehearsing. We call this extension Quantized Latent
187
- Replay-based Continual Learning or QLR-CL.
188
- 2) We propose a set of CL primitives including forward
189
- and backward propagation of common layers such as
190
- convolution, depthwise convolution, and fully connected
191
- layers, fine-tuned for optimized execution on VEGA, a
192
- TinyML platform for Deep Learning based on PULP
193
- [19], fabricated in 22nm technology. We also introduce
194
- a tiling scheme to manage data movement for the CL
195
- primitives.
196
- 3) We compare the performance of our CL primitives on
197
- VEGA with that on other devices that could in the future
198
- target on-chip at-edge learning, such as a state-of-the-art
199
- low-power STM32L4 microcontroller.
200
- Our results show that the Quantized Latent Replay based Con-
201
- tinual Learning leads to a minimal accuracy loss on the Core50
202
- dataset compared to the FP32 baseline, when compressing the
203
- Latent Replay memory by 4× by means of 8-bit quantization.
204
- Compression to 7 bit can also be exploited but at the cost of
205
- a slightly lower accuracy, up to 5% wrt the baseline when
206
- retraining one of the intermediate layers. When testing the
207
- QLR-CL pipeline on the proposed VEGA platform, our CL
208
- primitives demonstrated to run up to 65× faster with respect
209
- to the MCUs for TinyML that can be found currently on the
210
- market. Compared against edge devices with a power envelope
211
- of 4 W, our solution is about 6× more energy-efficient, enough
212
- to operate 317h with a typical battery for embedded devices.
213
- The rest of the paper is organized as follows: Section II
214
- discusses related work in CL, inference and learning at the
215
- edge, and hardware architectures targeted at edge learning.
216
- Section III introduces the proposed methodology for Quan-
217
- tized Continual Learning. Section IV describes the HW/SW
218
- architecture of the proposed TinyML platform. Section V evaluates and
219
- discusses experimental results. Section VI concludes the paper.
220
- II. R ELATED WORK
221
- In this section, we first review the recent memory-efficient
222
- Continual Learning approaches before discussing the main
223
- solutions and methods for the TinyML ecosystem, including
224
- the first attempts for on-device learning on embedded systems.
225
- A. Memory-efficient Continual Learning
226
- Differently from Transfer Learning [21], [22], which by
227
- design does not retain the knowledge of the primitive learned
228
- task when learning a new one, Continual Learning (CL) has
229
- recently emerged as a new technique to tackle the acquisition
230
- of new/extended capabilities without losing the original ones
231
- – a phenomenon known as catastrophic forgetting [12], [13].
232
- One of the main causes of this phenomenon is that the newly
233
- acquired set breaks one of the main assumptions underlying
234
- supervised learning – i.e., that training data are statistically
235
- independent and identically distributed (IID). Instead, CL deals
236
- with training data that is organized in non-IID learning events .
237
- Maltoni et al. in [26] sort the main CL techniques into three
238
- groups: rehearsal , which includes a periodic replay of the past
239
- information; architectural , relying on a specialized architec-
240
- ture, layers, and activation functions to mitigate forgetting;
241
- and regularization-based, where the loss term is extended to
242
- encourage retaining memory of pre-learned tasks.
243
- Among these groups, rehearsal CL strategies have emerged
244
- as the most effective to deal with catastrophic forgetting, at the
245
- cost of an additional replay memory [1], [27], [28]. In the re-
246
- cent CL challenge at CVPR2020 on the Core50 image dataset,
247
- 90% of the competitors used rehearsal strategies [16]. The
248
- best entry of the more challenging New Instances and Classes
249
- track (the same scenario considered in our work) [17], which
250
- is evaluated in terms of test accuracy but also memory and
251
- computation requirements, scores 91% by replaying image
252
- data. Unfortunately, this strategy is intractable for an
253
- IoT platform because of the expanding replay memory (up
254
- to 78k images) and the usage of a large DenseNet-161 model.
255
- Conversely, the Latent Replay-based approach [1] relies on
256
- a fixed, and relatively small, amount of compressed latent
257
- activations as replay data; it scores 71% if retraining only the
258
- last layer, with a peak of 52× fewer (compressed)
259
- data points than the winning solution. Additionally, the Jodelet
260
- entry – also employing LR-based CL – achieves 83% thanks
261
- to 3× more replays and a more accurate pre-trained model
262
- (ResNet50) [16]. In our work, we focus on [1] because of the
263
- tunable accuracy-memory setting. Nevertheless, our proposed
264
- platform and compression methodology can be applied to any
265
- replay-based CL approach.
266
- Also related to our work, ExStream [29] clusters in a
267
- streaming fashion the training samples before pushing them
268
- into the replay buffer while [30] uses discrete autoencoders
269
- to compress the input data for rehearsing. In contrast, we
270
- propose low-bitwidth quantization to compress the Latent
271
- Replay memory by >4× and, at the same time, reduce the
272
- inference latency and the memory requirement of the inference
273
- task of the frozen stage if compared to a full-precision FP32
274
- implementation.
- B. Deep Learning at the Extreme Edge
275
- Two main trends can be identified for TinyML platforms
276
- targeting the extreme edge. On the one hand, Deep Learning
277
- applications are dominated by linear algebra which is an
278
- ideal target for application-specific HW acceleration [31], [32].
279
- Most efforts in this direction employ a variety of inference-
280
- only acceleration techniques such as pruning [33] and byte
281
- and sub-byte integer quantization [4]; the use of large arrays
282
- of simple MAC units [34] or even mixed-signal techniques
283
- such as in-memory computing [35].
284
- On the other hand, there are also many reasons for the
285
- alternative approach: running TinyML applications as soft-
286
- ware on top of commercial off-the-shelf (COTS) extreme-
287
- edge platforms, such as MCUs. Extreme-edge TinyML de-
288
- vices need to be very cheap; they have to be flexible due
289
- both to economy of scale and to their need for integration
290
- within larger applications, composed of both neural and non-
291
- neural tasks [36]. For these reasons, there is a strong push
292
- towards squeezing the maximal performance out of platforms
293
- based on COTS ARM Cortex-M class microcontrollers and
294
- DSPs, such as STMicroelectronics STM32 microcontrollers2,
295
- or on multi-core parallel ultra-low-power (PULP) end-nodes,
296
- like GreenWaves Technologies GAP-83. To cope with the
297
- severe constraints in terms of memory and maximum compute
298
- throughput of these platforms, a large number of deploy-
299
- ment tools have been recently proposed. Examples of this
300
- trend include non-vendor-locked tools such as Google TFLite
301
- Micro [6], ARM CMSIS-NN [37], Apache TVM [38], as
302
- well as frameworks that only support specific families of de-
303
- vices, such as STMicroelectronics X-CUBE-AI4, GreenWaves
304
- Technologies NNTOOL5, and DORY [39]. Internally, these
305
- tools employ hardware-independent techniques, such as post-
306
- training compression & quantization [40]–[42], as well as
307
- hardware-dependent ones such as data tiling [43] and loop
308
- unrolling to boost data reuse exploitation [37], coupled with
309
- automated generation of optimized backend code [44].
310
- As previously discussed, all of these efforts are mostly
311
- targeted at extreme edge inference, with little hardware and/or
312
- software dedicated to training. Most of the techniques used to
313
- boost inference efficiency are not as effective for learning.
314
- For example, the vast majority of training is done in full
315
- precision floating-point ( FP32 ) or, with some restrictions,
316
- using half-precision floats ( FP16 ) [45] – whereas inference is
317
- commonly pushed to INT8 or even below [4], [40]. IBM has
318
- recently proposed a specialized 8-bit format for training called
319
- HFP8 [46], but its effectiveness is still under investigation.
320
- Hardware-accelerated on-device learning has so far been
321
- limited to high-performance embedded platforms (e.g.,
322
- NVIDIA TensorCores on Tegra Xavier6and mobile platforms
323
- such as Qualcomm Snapdragon 845 [1]) or very narrow in
324
- scope. For example, Shin et al. [47] claim to implement an
325
- online adaptable architecture, but this is done using a simple
326
- 2https://www.st.com/content/st com/en/ecosystems/stm32-ann.html
327
- 3https://greenwaves-technologies.com/gap8 gap9/
328
- 4https://www.st.com/en/embedded-software/x-cube-ai.html
329
- 5https://greenwaves-technologies.com/sdk-manuals/nn quick start guide
330
- 6https://www.nvidia.com/en-us/autonomous-machines/embedded-
331
- systems/jetson-xavier-nx
332
- TABLE I
- ON-DEVICE LEARNING METHODS ON TINY EMBEDDED SYSTEMS.
- Method | Learning Approach | Problem | Proc. Device | Tiny Device | On-Device Learning | Compute Cost | Memory Cost | Continual Learning
- Transfer Learning [21] | Retraining last layer's weights | Image Classification | Coral Edge TPU | - | X | LOW | LOW | -
- TinyTL [22] | Retraining biases | Image Classification | EPYC AMD 7302 | - | X | MEDIUM | LOW/MEDIUM | -
- TinyOL [23] | Add layer for transfer-learning based on streaming data | Anomaly Detection | Arduino Nano 33 BLE | X | X | LOW | LOW | -
- TinyML Minicar [8] | CNN backprop. from scratch on increasing dataset | Linear Camera Class., 7 actions | GAP8 | X | - | - | - | X
- TML [24] | kNN Classifier | Audio/Image Class., 2 classes | STM32F7 | X | X | LOW | HIGH (unbounded) | X
- PULP-HD [25] | Hyperdimensional Computing | EMG Classification, 10 gestures | Mr. Wolf | X | X | MEDIUM | LOW | X
- LR-CL [1] | CNN backprop. w/ LRs | Image Class., 50 classes | Qualcomm Snapdragon | - | X | HIGH | HIGH/MEDIUM | X
- QLR-CL [This Work] | CNN backprop. w/ Quantized LRs | Image Class., 50 classes | VEGA | X | X | HIGH | MEDIUM | X
352
- LUT to selectively activate parameters, and does not support
353
- more powerful mechanisms based on gradient descent. A
354
- few recently proposed hardware accelerators for low-power
355
- training platforms [48]–[51] enable partial gradient back-
356
- propagation by using selective and compressed weight updates,
357
- but they do not address the large memory footprint required
358
- by training. Finally, several online-learning devices using bio-
359
- inspired algorithms such as Spiking Neural Networks [52] and
360
- High-Dimensional Computing [25] have been proposed [53]–
361
- [55]. Most of these approaches, however, have only been
362
- demonstrated on simple MNIST-like tasks.
363
- In this work, we propose the first, to the best of our
364
- knowledge, MCU-class hardware-software system capable of
365
- continual learning based on gradient back-propagation with
366
- a LR approach. We achieve these results by leveraging on
367
- few key ideas in the state-of-the-art: INT8 inference, FP32
368
- continual learning, and exploitation of linear algebra kernels,
369
- back-propagation, and aggressive parallelization by deploying
370
- them on a multi-core FPU-enhanced PULP cluster.
371
- C. On-Device Learning on low-end platforms
372
- Table I lists the main edge solutions featuring on-device
373
- learning capabilities. Every approach is evaluated by consid-
374
- ering the memory and computational costs for the continual
375
- learning task and the suitability for deployment on highly
376
- resource-constrained (tiny) devices.
377
- A first group of works deals with on-device transfer learn-
378
- ing. The Coral Edge TPU, which presents a power budget
379
- of several Watts, features SW support for on-device fine-
380
- tuning of the parameters of the last fully-connected layer [21].
381
- TinyTL [22] demonstrated on a high-end CPU that the transfer
382
- learning task proves more effective (+32% on the target Image
383
- Classification task) by retraining the bias terms and adding lite
384
- residual modules. TinyOL [23] brought the transfer learning
385
- task to a tiny device, i.e. an Arduino Nano platform featuring
386
- a 64MHz ARM Cortex-M4, by adding a trainable layer on top
387
- of a frozen inference model. Because only the coefficients of
388
- the last layer are updated during the online training process, no backpropagation of error gradients applies. Compared to these
389
- works, we address a continual learning scenario and therefore
390
- we provide a more capable and optimized HW/SW solution
391
- to match the memory and computational requirements of the
392
- adopted CL method.
393
- Differently from the above works, de Prado et al. [8] pro-
394
- posed a Continual Learning framework for self-driving mini-
395
- cars. The embedded PULP-based MCU engine streams new
396
- data to a remote server, where the inference model is retrained
397
- from scratch on the enhanced dataset to improve the accu-
398
- racy over time. This fully-rehearsal methodology cannot be
399
- migrated to low-end devices because of the unconstrained in-
400
- crease of the memory footprint. In contrast, Disabato et al. [24]
401
- presented an online adaptive scheme based on a kNN classifier
402
- placed on top of a frozen feature extraction CNN model. The
403
- final stage is updated by incrementally adding the labeled
404
- samples to the knowledge memory of the kNN classifier.
405
- This approach has been evaluated on a tiny STM32F76ZI
406
- device but unfortunately has proven effective only
407
- on limited 2-class problems and presents an unbounded
408
- memory requirement, which scales linearly with the number of
409
- training samples. PULP-HD [25] showed few-shot continual
410
- learning capabilities on an ultra-low power prototype using
411
- Hyperdimensional Computing. During the training phase the
412
- new data are mapped into a limited hyperdimensional space
413
- by making use of a complex encoding procedure; at inference
414
- time the incoming samples are compared to the computed
415
- class prototypes. The method has been demonstrated on a
416
- 10 gesture classification scenario based on EMG data but
417
- lacks experimental evidence of being effective on complex
418
- image classification problems. In contrast to these works,
419
- we demonstrate superior learning capabilities for a TinyML
420
- platform by i)running backpropagation on-device to update in-
421
- termediate layers, and ii)supporting a memory-efficient Latent
422
- Replay-based strategy to address catastrophic forgetting on a
423
- more complex Continual Learning scenario. An initial CNN-
424
- based prototype of a Continual Learning system was presented
425
- in [1] using Latent Replays. The authors demonstrated the
426
- Fig. 1. Continual Learning with Latent Replays. The frozen stage is the
427
- light-blue part (first half) of the network. After the first forward of the inputs
428
- (yellow arrow), the activations (namely the LRs) are stored. They are later mixed
- with the new images coming through the frozen stage and used to retrain the
- adaptive portion of the network.
431
- on-device learning capabilities using a Qualcomm Snapdragon
432
- processor, which features a power envelope 100× higher than
433
- our target and is therefore not suitable for battery-
434
- operated tiny devices. In contrast to them, we also extend the
435
- LR algorithm by leveraging on quantization to compress the
436
- LR memory requirements.
437
- III. M ETHODS
438
- In this section, we analyze the memory requirements of the
439
- Latent Replay-based Continual Learning method and present
440
- QLR-CL , our strategy to reduce the memory footprint of the
441
- LR vectors based on a quantization process.
442
- A. Background: Continual Learning with Latent Replays
443
- In general, supervised learning aims at fitting an unknown
444
- function by using a set of known examples – the training
445
- dataset. In the case of Deep Neural Networks, the training
446
- procedure returns the values of the network parameters, such
447
- as weights and biases, that minimize a loss function. Among
448
- the used optimization strategies, the mini-batch Stochastic
449
- Gradient Descent (SGD), which is an iterative method applied
450
- over multiple learning steps (i.e. the epochs), is widely adopted.
451
- In particular, the SGD algorithm computes the gradient of the
452
- parameters based on the loss function by back-propagating the
453
- error value through the network. This error function compares
454
- the model prediction, i.e. the output of the forward pass, with
455
- the expected outcome (the data label). Parameter gradients
456
- obtained after the backward pass are weighted over a mini-
457
- batch of data before updating the model coefficients.
458
- As introduced at the beginning of this work, the Latent
459
- Replay CL method [1] is a viable solution to obtain TinyML
460
- adaptive systems with on-device learning capabilities based
461
- on the availability of new labeled data. In Fig. 1 we illustrate
462
- the CL process with Latent Replays. The new data are injected
463
- into the model to obtain the latent embeddings, which are the
464
- feature maps of a specific intermediate layer. We indicate such
465
- a layer with the index l, where l ∈ [0, L), assuming the tar-
466
- geted model to be composed of L stacked layers. At runtime,
467
- the new latent vectors are combined with the precomputed
468
- NLR Latent Replay vectors to execute the learning algorithm
469
- on the last L - l - 1 layers. More specifically, the coefficient
- parameters of the adaptive stage are updated by using a mini-
470
- batch gradient descent algorithm. Every mini-batch includes
471
- both new data (in the latent embedding form) and LR vectors.
472
- The typical ratio of new data over the full mini-batch is 1/6 [1].
473
- The coefficient gradients are computed through forward and
474
- backward passes over the adaptive (learned) layers. Multiple
475
- iterations, i.e. the epochs, of the learning algorithms take place
476
- within the training procedure.
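As an illustration of the mini-batch composition just described, the following Python sketch (PyTorch-style, not the released AR1/LR code) runs one learning event on a toy adaptive stage; the roughly 1/6 ratio of new data per mini-batch follows [1], while the tensor sizes, learning rate and number of epochs are placeholders.

    # Illustrative sketch of one Latent Replay learning event (toy sizes, not the released code).
    import torch
    import torch.nn as nn

    latent_dim, n_classes = 512, 50
    adaptive_stage = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
    optimizer = torch.optim.SGD(adaptive_stage.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    # Precomputed Latent Replays: frozen-stage activations stored with their labels.
    lr_memory = torch.randn(3000, latent_dim)
    lr_labels = torch.randint(0, n_classes, (3000,))

    def learning_event(new_latents, new_labels, epochs=4, n_new=21, n_replay=107):
        """Mini-batches mix new latent activations with stored LRs (about 1/6 new data)."""
        for _ in range(epochs):
            idx_new = torch.randint(0, new_latents.shape[0], (n_new,))
            idx_old = torch.randint(0, lr_memory.shape[0], (n_replay,))
            x = torch.cat([new_latents[idx_new], lr_memory[idx_old]])
            y = torch.cat([new_labels[idx_new], lr_labels[idx_old]])
            optimizer.zero_grad()
            loss = criterion(adaptive_stage(x), y)   # forward pass, adaptive stage only
            loss.backward()                          # backward stops at the Latent Replay layer
            optimizer.step()                         # mini-batch SGD update of the adaptive stage

    # new_latents would come from running the frozen stage on the newly collected images
    learning_event(torch.randn(300, latent_dim), torch.randint(0, n_classes, (300,)))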
477
- B. Memory Requirements
478
- We model the Latent Replay-based Continual Learning task
479
- as operating on a set of new data coming from a sensor (e.g.,
480
- a camera), which is interfaced with an embedded digital pro-
481
- cessing engine, namely the TinyML Platform , and its memory
482
- subsystem. Given the limited memory capacity of IoT end-
483
- nodes, the quantification of the learning algorithm’s memory
484
- requirements is essential. We distinguish between two different
485
- memory requirements: additional memory necessary for CL,
486
- e.g., the LR memory, and that required to save intermediate
487
- tensors during forward-prop to be used for back-prop – a
488
- requirement common to all algorithms based on gradient
489
- descent, not specific to CL.
490
- Concerning the LR memory, the system has to save a
491
- set of NLR LRs, each one of the size of the feature map
492
- computed at the l-th layer of the network. In our scenario,
493
- LR vectors are represented employing floating-point ( FP32 )
494
- datatype and typically determine the majority of the memory
495
- requirement [18]. Since LRs are part of the static long-term
496
- memory of the CL system, for their storage, we use non-
497
- volatile memory, e.g., external Flash.
498
- On the other hand, forward- and back-prop of the network
499
- model require to allocate the space for NP network parame-
500
- ters statically. In addition, forward-prop requires dynamically
501
- allocated buffers to store the activation feature maps for all
502
- layers. Up to the l-th layer, these buffers are temporary and
503
- can be released after their usage. Conversely, the system must
504
- keep in memory the feature maps after l to compute the
505
- gradients during back-prop. They can only be released after
506
- the corresponding layer has been back-propagated. Lastly, the
507
- system must also keep in memory the coefficients’ gradients,
508
- demanding a second array of NP elements. To keep accu-
509
- racy on the learning process, every tensor, i.e. coefficients,
510
- gradients, and activations, employ a FP32 format in our
511
- baseline scenario. Different from LRs, these tensors are kept
512
- in volatile memories, except the frozen weights, which are
513
- stored in a non-volatile memory.
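As a rough, purely illustrative budget of the two memory components discussed above (the layer size is taken from Table III further below; the helper names and any adaptive-stage figures are placeholders, not measurements), one can estimate:

    # Back-of-the-envelope memory budget for LR-based CL (illustrative numbers only).
    def lr_memory_mib(n_lr, lr_elements, bits_per_element):
        """Static Latent Replay storage: n_lr vectors of lr_elements each."""
        return n_lr * lr_elements * bits_per_element / 8 / 2**20

    def training_memory_mib(n_params_adaptive, act_elements_adaptive):
        """FP32 adaptive-stage weights + their gradients, plus activations kept for back-prop."""
        return (2 * n_params_adaptive + act_elements_adaptive) * 4 / 2**20

    # Example: 1500 LRs taken at layer 19 (8*8*512 = 32768 elements per LR, see Table III)
    print(lr_memory_mib(1500, 8 * 8 * 512, 32))   # ~187.5 MiB in FP32
    print(lr_memory_mib(1500, 8 * 8 * 512, 8))    # ~46.9 MiB in UINT-8, i.e. 4x smaller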
514
- C. Quantized Latent Replay-based Continual Learning
515
- Quantization techniques have been extensively used to re-
516
- duce the data size of model parameters, and activation feature
517
- maps for the inference task, i.e. the forward pass. An effective
518
- quantization strategy reduces the data bitwidth from 32-bit
519
- (FP32 ) to low bit-precision, 8-bit or less ( Qbits, in general)
520
- while paying an almost negligible accuracy loss.
521
- In this paper, we introduce the Quantized Latent Replay-
522
- based Continual Learning method (QLR-CL) relying on low-
523
- bitwidth quantization to speed up the execution of the network
524
- Fig. 2. Architecture outline of the proposed PULP-based System-on-Chip for Continual Learning.
577
- up to the l-th layer and at the same time reduce the memory
578
- requirement of the LR vectors from the baseline FP32 arrays.
579
- To do so, we split the deep model into two sub-networks,
580
- namely the frozen stage and the adaptive stage . The frozen
581
- stage includes the lower layers of the network, up to the
582
- Latent Replay layer l. The coefficients of this sub-network,
583
- including batch normalization statistics, are frozen during the
584
- incremental learning process. On the contrary, the parameters
585
- of the adaptive stage are updated based on the new data
586
- samples.
587
- In QLR-CL, the Latent Replay vectors are generated by
588
- feeding the frozen stage sub-network with a random subset
589
- of training samples from the CL dataset, which we denote
590
- as Xtrain. The frozen stage is initialized using pre-trained
591
- weights from a related problem – in the case of Core50,
592
- we use a network pre-trained on the ImageNet-1k dataset.
593
- Post-Training Quantization of the frozen stage is based on
594
- training samples Xtrain. We apply a standard Post-Training
595
- Quantization process that works by i)determining the dynamic
596
- range of coefficient and activation tensors, ii)dividing the
597
- range into equal steps, using a uniform affine quantization
598
- scheme [56]. While the statistics of the parameters can be
599
- drawn without relying on data, the dynamic range of the acti-
600
- vation feature maps is estimated using Xtrain as a calibration
601
- set. If we denote the dynamic range of the weights at the i-th
602
- layer of the network as [w_{i,min}, w_{i,max}], we can define the
- INT-Q representation w_{i,quant} of the parameters as
-     w_{i,quant} = round(w_i / S_{w,i}),    S_{w,i} = (w_{i,max} - w_{i,min}) / (2^Q - 1)    (1)
608
- where Q is the number of bits and w_i is the full-precision output
- of the frozen stage. The representation of activations is similar,
- but we further restrict (1) for activations a_i by considering the
- effect of ReLUs: a_i are always positive and a_{i,quant} can be
- represented using an unsigned UINT-Q format:
-     a_{i,quant} = round(a_i / S_{a,i}),    S_{a,i} = a_{i,max} / (2^Q - 1)    (2)
617
- where a_{i,max} is obtained through calibration on Xtrain.
618
- Quantized Latent Replays (QLRs) a_{l,replay} are represented
619
- similarly to other quantized activations, setting the layer i to
620
- the LR l. Their value is initialized during the initial setup
621
- of the QLR-CL process using the latent quantized activations
622
- a_{l,quant} over the Xtrain set.
- During the QLR-CL process, the adaptive stage is fed by
623
- dequantized vectors obtained as S_{a,l} · a_{l,replay}, along with the
- dequantized latent representation of the new data sample
- S_{a,l} · a_{l,quant}. Hence, the single FP32 parameter S_{a,l} is also stored
626
- in memory as part of the frozen stage . In our experiments, we
627
- set the bitwidth Q of all activations and coefficients to 8-bit,
628
- while the output of the frozen stage is compressed to 8-bit or
629
- less, as further explored in Section V.
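A minimal NumPy sketch of this quantize/dequantize round trip for the Latent Replays is given below; the stand-in calibration data, rounding mode and tensor sizes are illustrative assumptions (the actual flow relies on a post-training quantization library), but the scale definitions follow Eqs. (1)-(2).

    # Illustrative UINT-Q quantization of Latent Replays following Eq. (2) (assumed rounding mode).
    import numpy as np

    def calibrate_scale(calib_activations, q_bits):
        """S_a = a_max / (2^Q - 1), with a_max estimated on the calibration set X_train."""
        return calib_activations.max() / (2 ** q_bits - 1)

    def quantize_lr(a, scale, q_bits):
        """Store a Latent Replay as unsigned integers (ReLU outputs are always >= 0)."""
        return np.clip(np.round(a / scale), 0, 2 ** q_bits - 1).astype(np.uint8)

    def dequantize_lr(a_q, scale):
        """What the adaptive stage consumes during CL: S_a * a_quant, back in FP32."""
        return a_q.astype(np.float32) * scale

    q = 8
    calib = np.abs(np.random.randn(1000, 8 * 8 * 512)).astype(np.float32)   # stand-in for X_train activations
    s_a = calibrate_scale(calib, q)
    lr_fp32 = np.abs(np.random.randn(8 * 8 * 512)).astype(np.float32)       # one latent activation vector
    lr_uint8 = quantize_lr(lr_fp32, s_a, q)        # 1 byte/element in the LR memory
    lr_replayed = dequantize_lr(lr_uint8, s_a)     # fed to the adaptive stage together with new data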
630
- IV. H ARDWARE /SOFTWARE PLATFORM
631
- In this section, we describe the hardware architecture of
632
- the proposed platform for TinyML learning and the related
633
- software stack.
634
- A. Hardware architecture
635
- The CL platform we propose is inspired by and extends our
636
- previous work [18]. We build it upon an advanced PULP-based
637
- SoC, called VEGA , which combines parallel programming for
638
- high-performance with ultra-low-power features. An advanced
639
- prototype of this platform has been taped out in Global-
640
- Foundries 22nm technology [20]. The system architecture,
641
- which is outlined in Fig. 2, is based on an I/O-rich MCU
642
- platform coupled with a multi-core cluster of RISC-V ISA
643
- digital signal processing cores which are used to accelerate
644
- data-parallel machine learning & linear algebra code. The
645
- MCU side features a single RISC-V core, namely the Fabric-
646
- Controller (FC), and a large set of peripherals. Besides the
647
- FC core, the MCU-side of the platform includes a large L2
648
- SRAM, organized in an FC-private section of 64kB and a
649
- larger interleaved section of 1.5MB. The interleaved L2 is
650
- shared between the FC core and an autonomous I/O DMA
651
- controller, connected to a broad set of peripherals such as
652
- OctaSPI/HyperBus to access an external Flash or DRAM
653
- of up to 64MB, as well as camera interfaces (CPI, MIPI)
654
- and standard MCU interfaces (SPI, UART, I2C, I2S, and
655
- GPIO). The I/O DMA controller is connected to an on-chip
656
- magnetoresistive RAM (MRAM) of 4MB, which resides in its
657
- power and clock domain and can be accessed through the I/O
658
- DMA to move data to/from the L2 SRAM.
659
- The multi-core cluster features nine processing elements
660
- (PE) that share data on a 128kB multi-banked L1 tightly
661
- coupled data memory (TCDM) through a 1-cycle latency
662
- logarithmic interconnect. All cores are identical, using
663
- an in-order 4-stage architecture implementing the RISC-V
664
- RV32IMCFXpulpv2 ISA. The cluster includes a set of four
665
- highly flexible FPUs shared between all nine cores, capable
666
- of FP32 and FP16 computation [57]. Eight cores are meant
667
- to execute primarily data-parallel code, and therefore they
668
- use a hierarchical Instruction cache (I$) with a small private
669
- part (512B) plus 4kB of shared I$ [58]. The ninth core is
670
- meant to be used as a cluster controller for control-heavy
671
- data tiling & marshaling operations; it has a private I$ of
672
- 1kB. The cluster also features a multi-channel DMA engine
673
- that autonomously handles data transfers between the shared
674
- L1 and the external memories through a 64-bit AXI4 cluster
675
- bus. The DMA can transfer up to 8B/cycle between L2 and
676
- L1 TCDM in both directions simultaneously and perform 2D
677
695
- Fig. 3. Clockwise from top-left: im2col transform, Forward and Backward
696
- propagation for error and gradient calculation for a K×K Conv layer.
697
- Fig. 4. Tiling scheme between L2 and L1 memories exploiting double-
698
- buffering. Two equal buffers are filled with the matrix multiplication terms
699
- that fit into half the size of L1. The second buffer is filled with the next terms
700
- of the convolution that have to be matrix-multiplied.
701
- strided access on the L2 side by generating multiple AXI4
702
- bursts. The cluster can be switched on and off at runtime
703
- by the FC core employing clock-gating; it also resides on a
704
- separate power domain than the MCU, making it possible to
705
- completely turn it off and to tune its Vdd using an embedded
706
- DC-DC regulator.
707
- B. Software stack
708
- To execute the CL algorithm, the workload is largely
709
- dominated by the execution of convolutional layers, such as
710
- pointwise and depthwise, or fully connected layers (98% of
711
- operations in MobileNet-V1). Consequently, the main load on
712
- computations is due to variants of matrix multiplications dur-
713
- ing the forward and backward steps, which can be efficiently
714
- parallelized on the 8 compute PEs of the cluster, leaving one
715
- core out to manage tiling and program data transfers. Thus,
716
- to enable the learning paradigm on the PULP platform, we
717
- propose a SW stack composed of parallel layer-wise primitives
718
- that realize the forward step and the back-propagation. The
719
- latter concerns both the computation of the activation gradi-
720
- ents (backward error step) and coefficient gradients (backward
- gradient step). Fig. 3 depicts the dataflow of the forward and
721
- backward for commonly used convolutional kernels such as
722
- pointwise (PW), depthwise (DW), and linear (L) layers. To
723
- reshape all convolution operations into matrix multiplications,
724
- the im2col transformation is applied to the activation tensors to
725
- reshape them into 2D matrix operands [37]. The FP32 matrix
726
- multiplication kernel is parallelized over the eight cores of the
727
- cluster according to a data-parallelism strategy, making use of
728
- fmadd.s (floating multiply-add) instructions made available by
729
- the shared FPU engines.
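To make the reshaping of Fig. 3 concrete, the short NumPy sketch below expresses the forward, backward-error and backward-gradient steps of a K×K convolution as matrix multiplications on im2col columns; stride 1, no padding and the small tensor sizes are assumptions chosen only for brevity, not the shapes used on the platform.

    # Illustrative im2col-based forward/backward for a KxK convolution (stride 1, no padding).
    import numpy as np

    def im2col(x, k):
        """x: (C_in, H, W) -> columns of shape (K*K*C_in, H_out*W_out)."""
        c, h, w = x.shape
        ho, wo = h - k + 1, w - k + 1
        cols = np.empty((k * k * c, ho * wo), dtype=x.dtype)
        for i in range(ho):
            for j in range(wo):
                cols[:, i * wo + j] = x[:, i:i + k, j:j + k].reshape(-1)
        return cols

    c_in, c_out, k, h = 8, 16, 3, 10
    x = np.random.randn(c_in, h, h).astype(np.float32)
    weight = np.random.randn(c_out, k * k * c_in).astype(np.float32)   # filters flattened to a matrix

    cols = im2col(x, k)                    # (K*K*C_in, H_out*W_out)
    out = weight @ cols                    # forward step: (C_out, H_out*W_out)

    grad_out = np.random.randn(*out.shape).astype(np.float32)          # gradient from the layer above
    grad_weight = grad_out @ cols.T        # backward gradient step: same shape as `weight`
    grad_cols = weight.T @ grad_out        # backward error step, still in im2col layout
    # a col2im scatter (omitted here) would map grad_cols back to the (C_in, H, W) input layout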
730
- The cores must operate on data from arrays located in
731
- the low-latency L1 TCDM to maximize throughput and com-
732
- putational efficiency (i.e., IPC). However, the operands of a
733
- layer function may not entirely fit into the lower memory
734
- level because of the limited space (128kB). For instance, the
735
- tensors of the PW layer #22 of the used MobileNet-V1 occupy
736
- 1.25MB. Hence, the operands have to be sliced into reduced-
737
- size blocks that can fit into the available L1 memory and
738
- convolutional functions are applied on L1 tensor slices to
739
- increase the computational efficiency.
740
- This approach is generally referred to as tiling [39], which
741
- is schematized in Fig. 4. By locating layer-wise data on the
742
- larger L2 memory (1.5MB), the DMA firstly copies individual
743
- slices of operand data, also referred to as tiles, into L1 buffers,
744
- to be later fetched by the cores. Since the cluster DMA engine
745
- is capable of 2D-strided access on the L2 side, this operation
746
- can also be designed to perform im2col , without any manual
747
- data marshaling overhead on L1.
748
- To increase the computation efficiency, we implement a
749
- software flow that interleaves DMA transfers between L2 and
750
- L1 and calls to parallel primitives, e.g. forward ,backward
751
- error , orbackward gradient steps , which operate on individual
752
- tiles of data. Hence, every layer is expected to load and
753
- process all the tiles of any operand tensor. To reduce the
754
- overhead due to the data copy, the DMA transfers take place
755
- in the background of the multi-core computation: the copy
756
- of the next tile is launched before invoking the computation
757
- on loaded tiles. On the other side, this optimization requires
758
- doubling the L1 memory requirement: while one L1 buffer is
759
- used for computation, an equally-sized buffer is used by the
760
- data movement task. From a different viewpoint, the maximum
761
- tile size must not exceed half of the available memory. At
762
- runtime, layer-wise tiled kernels are invoked sequentially to
763
- run the learning algorithm with respect to the input data. To
764
- this aim, LRs are loaded from external embedded memory, if
765
- not fitting the internal memory, and copied to the on-chip L2
766
- memory thanks to the I/O DMA.
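The double-buffered tile loop described above can be summarized by the following schematic Python pseudo-driver; the DMA calls are placeholders rather than the real cluster-DMA API, and the only constraint encoded is that a tile must fit in half of the L1 scratchpad.

    # Schematic double-buffered tiling loop (placeholder DMA calls, not the real driver API).
    L1_SIZE = 128 * 1024
    BUF_SIZE = L1_SIZE // 2            # two equal L1 buffers: one computed on, one being filled

    def dma_load(tile_id):             # placeholder: start a non-blocking L2 -> L1 copy
        return tile_id

    def dma_wait(handle):              # placeholder: block until the copy has landed in L1
        pass

    def compute_tile(tile_id):         # placeholder: parallel FW / BW-err / BW-grad primitive on 8 cores
        pass

    def run_layer(n_tiles, tile_bytes):
        assert tile_bytes <= BUF_SIZE, "a tile must not exceed half of the available L1"
        pending = dma_load(0)                         # prefetch the first tile
        for t in range(n_tiles):
            dma_wait(pending)                         # ensure tile t is resident in L1
            if t + 1 < n_tiles:
                pending = dma_load(t + 1)             # next transfer runs in the background
            compute_tile(t)                           # cores work while the DMA fills the other buffer

    run_layer(n_tiles=10, tile_bytes=48 * 1024)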
767
- V. E XPERIMENTAL RESULTS
768
- In this section, we provide the experimental evidence about
769
- our proposed TinyML platform for on-device Continual Learn-
770
- ing. First, we evaluate the impact of quantization of the frozen
771
- stage and the LR vectors upon the overall accuracy, and we
772
- analyze the memory-accuracy trade-off.
773
- Secondly, we study the efficiency of the proposed SW ar-
774
- chitecture with respect to multiple HW configurations, namely
775
- #cores, L1 size and DMA bandwidth, introducing the tiling
776
- Fig. 5. Accuracy plots for NLR = {375, 750, 1500, 3000} and different
777
- levels of quantization. From these plots it is visible that below UINT-7
778
- accuracy degrades rapidly.
779
- requirements and evaluating the latency for each kernel of
780
- computation. Then, we measure performance on an advanced
781
- PULP prototype, VEGA, fabricated in GlobalFoundries 22nm
782
- technology with 4 FPUs shared among all cores. We analyze
783
- the latency results for individual layers forward and backward
784
- and estimate the overall energy consumption to perform a CL
785
- task on our platform. Finally, we compare the efficiency of our
786
- TinyML platform to other devices used for on-device learning.
787
- A. Experimental Setup
788
- We benchmark the compression technique for the Latent
789
- Replay memory on the image-classification Core50 dataset,
790
- which includes 120k 128×128 RGB images of 50 objects
791
- for the training and about 40k images for the testing. On the
792
- Core50 dataset, the CL setting is regulated by the NICv2-
793
- 391 protocol [59]. According to this protocol, 3000 images
794
- belonging to ten classes are made available during the initial
795
- phase to fine-tune the targeted deep model on the Core50
796
- problem. Afterward, the remaining 40 classes are introduced at
797
- training time in 390 learning events. Each event, as described
798
- more in detail in Section III-A, comprises iterations over mini-
799
- batches of 128 samples each: 21 coming from actual images,
800
- all from the same class and typically not independent (e.g.,
801
- coming from a video), and 107 latent replays. After each
802
- learning event, the accuracy is measured on the test set, which
803
- includes samples from the complete set of classes.
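In outline, the evaluation loop just described looks roughly as follows (stub model and data, names are placeholders; only the control flow and the 3000 / 390 / 21+107 figures come from the text).

    # Rough outline of the NICv2-391 evaluation loop (stub model/data, control flow only).
    import random

    class StubModel:
        def fine_tune(self, images): pass
        def frozen_forward(self, images): return images
        def update_adaptive_stage(self, batch): pass
        def evaluate(self, test_set): return random.random()

    def minibatches(new_latents, lr_memory, n_new=21, n_replay=107):
        """128-sample mini-batches: 21 new latent activations + 107 stored Latent Replays."""
        for i in range(0, len(new_latents), n_new):
            yield new_latents[i:i + n_new] + random.sample(lr_memory, n_replay)

    def nicv2_391(model, initial_images, events, lr_memory, test_set):
        model.fine_tune(initial_images)                  # 3000 images of the first 10 classes
        accuracies = []
        for event_images in events:                      # 390 learning events over the remaining classes
            latents = model.frozen_forward(event_images)
            for batch in minibatches(latents, lr_memory):
                model.update_adaptive_stage(batch)
            accuracies.append(model.evaluate(test_set))  # accuracy measured on all 50 classes
        return accuracies

    accs = nicv2_391(StubModel(), [0] * 3000, [[0] * 105] * 5, list(range(3000)), [0] * 100)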
804
- Following [1], we use a MobileNet-V1 model with an
805
- input resolution of 128×128 and width multiplier 1, pre-
806
- trained on ImageNet; we start from their public released
807
- code7and use PyTorch 1.5. In our experiments, we replace
808
- BatchReNormalization with BatchNormalization layers and
809
- we freeze the statistics of the frozen stage after fine-tuning.
810
- B. QLR-CL memory usage and accuracy
811
- To evaluate the proposed QLR-CL setting, we quantize the
812
- frozen stage of the model using the PyTorch-based NEMO
813
- library [60] after fine-tuning the MobileNet-V1 model with
814
- 7Available at https://github.com/vlomonaco/ar1-pytorch/. While Pelle-
815
- grini et al. [1] report lower accuracies in their paper, our FP32 baseline results
816
- are aligned with their released code.
- the initially available 3000 images. We set the activation and
817
- parameters bitwidth of the frozen stage to Q = 8 bit while we
818
- vary the bitwidth QLR of the latent replay layer. The quantized
819
- frozen stage is used to generate a set of NLR Latent Replays,
820
- as sampled from the initial images.
821
- The plots in Fig. 5 show the test accuracy on the Core50
822
- that is achieved at the end of the NICv2-391 training protocol
823
- for a varying NLR = {375, 750, 1500, 3000} while sweeping
824
- the LR layer l. Depending on the selected layer type, the size
825
- of the LR vector varies as reported in Table III.
826
- Each subplot of Fig. 5 compares the baseline FP32 ver-
827
- sion with our 8-bit fully-quantized solutions with a varying
828
- QLR = {8, 7, 6}, denoted in the figures, respectively, as UINT-
829
- 8, UINT-7 and UINT-6. For QLR < 6, we observe that the
830
- Continual Learning process does not converge on the Core50
831
- dataset.
832
- From the obtained results, we can observe the UINT-8
833
- compressed solution featuring a small accuracy drop with
834
- respect to the full-precision FP32 baseline. When increasing
835
- the number of latent replays NLRto 3000, the UINT-8
836
- quantized version is almost lossless (-0.26%) if LR = 19.
837
- On the contrary, if the LR layer is moved towards the last
838
- layer (LR= 27), the accuracy drop increases up to 3.4%. The
839
- same effect is observed when reducing NLRto 1500, 750 or
840
- 375. In particular, when NLR = 1500, the UINT-8 quantized
841
- version presents an accuracy drop from 1.2% (LR = 19 ) to
842
- 2.9% (LR = 27 ). On the other hand, lowering the bit precision
843
- to UINT-7, the accuracy reduces on average by up to 5.2%, if
844
- compared to the FP32 baseline. Bringing this further down to
845
- UINT-6 largely degrades the accuracy by more than 10%.
846
- To deeply investigate the impact of the quantization process
847
- on the overall accuracy, we perform an ablation study to
848
- distinguish the individual effects of i)the quantization of
849
- the front-end and ii)the quantization of the LRs. In case
850
- of NLR = 1500, Table II compares the accuracy on the
851
- Core50 dataset for different LR layers, if applying quantization
852
- to both the LR memory and the frozen stage or only to
853
- the LR memory. The accuracy statistics are averaged over
854
- 5 experiments; we report in the table the mean and the
855
- std deviation of the obtained results. In particular, we see
856
- that quantizing the LRs has a larger effect on the accuracy
857
- than quantizing the frozen graph. By quantizing only the LR
858
- memory to UINT-8, the accuracy drops by up to 1.2-2.6%
859
- (higher in case of larger adaptive stages) with respect to the
860
- FP32 baseline. On the contrary, the UINT-8 quantized frozen
861
- graph brings only an additional 0.5-1% of accuracy drop.
862
- With UINT-7 LRs, the accuracy drop is mainly due to the
863
- LR quantization: when compressing also the frozen stage to
864
- 8-bit the accuracy drop is up to -1%, which is small compared
865
- to the total 4-7% of accuracy degradation.
866
- To facilitate the interpretation of the results, Fig. 6 reports
867
- the test accuracy for multiple quantization settings compared
868
- to the size (in MB) of the Latent Replay Memory. In red,
869
- we highlight a Pareto frontier of non-dominated points, to
870
- have a range of options to maximize accuracy and minimize
871
- the memory footprint. Among the best solutions, we detect
872
- two clusters of points on the frontier. The first cluster ( A),
873
- corresponding to the low-memory side of the frontier, is9
874
- TABLE II
- ACCURACY ON CORE50 DATASET WITH MULTIPLE QUANTIZATION SETTINGS A+B, WHERE A DENOTES THE QUANTIZATION
- OF THE FROZEN STAGE (FP32 OR UINT-8) AND B INDICATES THE QUANTIZATION SCHEME OF THE LR VECTORS
- (FP32, UINT-8, UINT-7). THE BASELINE IS FP32.
- LR layer | FP32 baseline | FP32 + UINT-8 | UINT-8 + UINT-8 | FP32 + UINT-7 | UINT-8 + UINT-7
- 27 | 72.7±0.34 | 70.1±0.54 | 69.2±0.48 | 68.0±0.63 | 67.8±1.14
- 25 | 73.3±0.58 | 70.9±0.65 | 70.2±0.67 | 66.2±0.75 | 66.1±0.94
- 23 | 75.0±0.83 | 73.2±0.46 | 73.4±0.66 | 71.1±0.63 | 69.9±1.25
- 21 | 76.5±0.63 | 74.9±0.51 | 73.9±1.67 | 72.7±0.74 | 72.6±1.30
- 19 | 77.7±0.73 | 76.5±0.48 | 76.0±0.80 | 74.0±0.57 | 75.2±1.10
886
- TABLE III
- SIZE OF TILES FOR THE MOBILENET-V1 LAYERS.
- LR Layer l | Layer Type | LR Dim. (H×W×C) | LR Size (#elements)
- 19 | DW | 8×8×512 | 32k
- 20 | PW | 8×8×512 | 32k
- 21 | DW | 8×8×512 | 32k
- 22 | PW | 8×8×512 | 32k
- 23 | DW | 4×4×512 | 8k
- 24 | PW | 4×4×1024 | 16k
- 25 | DW | 4×4×1024 | 16k
- 26 | PW | 4×4×1024 | 16k
- 27 | Linear | 1×1×1024 | 1k
899
- constituted by experiments that use l = 27 with 1500 or 3000
900
- LRs and UINT-7 or UINT-8 representation. On the other hand,
901
- if we aim at the highest accuracy possible for our QLR-CL
902
- classification algorithm, we can follow the Pareto frontier to
903
- the right towards higher accuracies at steeper memory cost,
904
- reaching cluster B. All points in cluster B feature l = 23
905
- as Latent Replay layer, which is a bottleneck layer of the
906
- network and allows storing more compact tensors as LRs (refer
907
- to Table III). Adopting LR layers within B leads accuracy to
908
- an average of 76%, gaining 5% on average with respect to
909
- the layers within cluster A. A single point C1 is shown further
910
- to the right, but still below 128MB.
911
- For a deeper analysis of the Pareto frontier, in Fig. 7, we
912
- detail the memory requirements when analyzing the points
913
- in the two clusters A and B, as well as C1. We make two
914
- observations: first, in all Apoints, it would be possible to
915
- fit entirely within the on-chip memory available on VEGA,
916
- exploiting the 4MB of non-volatile MRAM. This would allow
917
- avoiding any external memory access, increasing the energy
918
- efficiency of the algorithm by a factor of up to 3× [20].
919
- Moreover, considering that the maximization of accuracy is
920
- often the primary objective in CL, we observe that accumu-
921
- lating features at l = 19 with 1500 UINT-8 LRs (point C1)
922
- enables accuracy to grow above 77%, almost 10% more than
923
- the compact solutions in A(Fig. 7). This analysis allows us to
924
- also speculate over possible future architectural explorations to
925
- design optimized bottleneck layers that could facilitate better
926
- memory accuracy trade-off for QLR-CL.
927
- C. Hardware/Software Efficiency
928
- To assess the performance of the proposed solution, we
929
- study the efficiency of the CL Software primitives on the target
930
- platform and the sensitivity to some of the HW architectural parameters, namely the #cores, the L1 memory size and the
931
- cluster DMA Bandwidth.
932
- Single-tile performance on L1 TCDM: Based on the tiling
933
- strategy described in Section IV-B, we run experiments con-
934
- cerning the CL primitives of the software stack that operate
935
- on individual tiles of data placed in the L1 memory. Figure 8
936
- shows the latency performance, expressed as MAC/cyc , i.e.
937
- the ratio between Multiply-Accumulate operations ( MAC ) and
938
- elapsed clock cycles ( cyc), for each of the main FP32 compu-
939
- tation kernels in case of single-core ( 1-CORE ) or multi-core
940
- (2-4-8-CORES ) execution. We highlight that a higher value of
941
- MAC/cyc denotes a more efficient processing scheme, leading
942
- to lower latency for a given computation workload, i.e. fixed
943
- MAC. More specifically, in this plot, we evaluate the forward
944
- (FW), backward error ( BW ERR ), and backward gradient ( BW
945
- GRAD) for each of the considered layers for a varying size of
946
- the L1 TCDM memory, i.e. 128, 256 or 512kB. The shapes
947
- of the tiles for PointWise ( PW), DepthWise ( DW), and Linear
948
- (Lin) layers used for the experiments are reported in the tables
949
- on the left of the figure. Such dimensions are defined to fit
950
- three different sizes of the TCDM, considering buffers of size
951
- 64kB, 128kB and 256kB.
952
- Focusing firstly on the PW layers (histograms at the top
953
- of the figure), we observe a peak performance in the 8-cores
954
- FW step, achieving up to 1.91 MAC/cyc for a L1 memory
955
- size of 512kB. We observe also a performance improvement
956
- of up to 11% by increasing the L1 size from 128kB to 512kB,
957
- which is motivated by the higher computational density of
958
- the kernel: if L1 = 512 kB the inner loop features 4× more iterations
959
- than a scenario with 128kB of L1 size. Moreover, the parallel
960
- speedup scales almost linearly with respect to the number of
961
- cores and achieves 7.2× in the case of 8 cores. With respect to
962
- the theoretical maximum of 8×, the parallel implementation
963
- presents some overheads mainly due to increased L1 TCDM
964
- contentions and cluster’s cache misses.
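As a quick sanity check of these figures, the single-core throughput and the parallel efficiency implied by the quoted numbers can be derived directly; the short snippet below only restates the arithmetic.

    # Parallel-efficiency check for the quoted PW forward figures (512 kB L1 configuration).
    mac_per_cyc_8_cores = 1.91              # measured 8-core throughput (from the text)
    parallel_speedup = 7.2                  # measured speedup vs. a single core (from the text)

    mac_per_cyc_1_core = mac_per_cyc_8_cores / parallel_speedup   # ~0.27 MAC/cyc on one core
    parallel_efficiency = parallel_speedup / 8                    # ~90% of the ideal 8x speedup
    print(f"{mac_per_cyc_1_core:.2f} MAC/cyc single-core, {parallel_efficiency:.0%} efficiency")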
965
- If we look at DW convolutions, their performance is lower
966
- with respect to the others. The main reason is that it requires
967
- a software-based im2col data layout transformation, which
968
- increases the amount of data marshaling operations and adds an
969
- extra L1 buffer, thus reducing the size of matrices in the matrix
970
- Fig. 6. Accuracy achieved by considering NLR = {750, 1500, 3000} and
971
- different precision, highlighting the Pareto frontier.
972
- Fig. 7. Memory requirements for the points highlighted in Fig. 6. Each layer
973
- belongs to the Pareto frontier and accounts for all the memory components.
974
- Going deeper into the network, LRs (gray) dominate memory consumption.
975
- The other components are the parameters of the frozen stage, the gradient and
976
- the activations needed during the training.
977
- multiplication, leading to increased overheads. Specifically, we
978
- measure that the im2col workload accounts for up to 70%
979
- of the FW kernel’s latency. As mentioned in Section IV, the
980
- primitives we introduce also support performing the im2col
981
- directly when moving the data tile from L2 via DMA transfer –
982
- in that case, this source of performance loss is not present, and
983
- the MAC/cyc necessary for depthwise convolutions increases
984
- up to 1 MAC/cycles for depthwise forward-prop, depending
985
- also on the L1 size selected. The remaining overhead with
986
- respect to pointwise convolutions is justified by the fact that
987
- depthwise convolutions can only exploit filter reuse (of size
988
- 3×3, for example, in MobileNet-V1 DW layers) and no input
989
- channel data-reuse, resulting in much shorter inner loops and
990
- more visible effect of overheads. This latter effect cannot be
991
- counteracted by efficient DMA usage; on the other hand, since
992
- depthwise convolutions account for less than 1.5% of the
993
- computation, their impact on the overall latency is limited,
994
- as we further explore in the following section.
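- To make the im2col overhead discussed above concrete, the following NumPy
- sketch unrolls one depthwise-convolution channel; the 6x6 tile and the 3x3
- filter are assumptions of the sketch, not the library's actual buffers.
- import numpy as np
- # Unroll kxk patches of a 2-D input tile into the rows of a matrix (im2col),
- # so that the convolution becomes a single matrix product.
- def im2col_single_channel(x: np.ndarray, k: int = 3) -> np.ndarray:
-     h, w = x.shape
-     out_h, out_w = h - k + 1, w - k + 1
-     cols = np.empty((out_h * out_w, k * k), dtype=x.dtype)
-     for i in range(out_h):
-         for j in range(out_w):
-             cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
-     return cols
- x = np.arange(36, dtype=np.float32).reshape(6, 6)   # one input-channel tile
- w = np.ones(9, dtype=np.float32)                    # flattened 3x3 filter
- y = im2col_single_channel(x) @ w                    # depthwise conv as a matmul
- print(y.reshape(4, 4))
- The extra "cols" buffer in this sketch corresponds to the additional L1 allocation mentioned above.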
995
- Moving our analysis towards the different performance
996
- between forward- and backward-prop layers (particularly BW
997
- grad), we observe that this effect is again due to different
998
- data re-use between the matrix multiplication kernels. The
999
- reduction in re-use in the backward-prop is due to the fact that the tiling
1000
- strategy adopted (see Fig. 3) has a grad output vector which
1001
- is shorter than the input in the forward matrix multiplication.
1002
- Specifically, the input to the matrix multiplication has size
1003
- 8x1x1 in backward, while the input shape in forward changes
1004
- accordingly with the L1 memory: 512x1x1 for 128kB L1,
1005
- 1024x1x1 for 256kB L1 and 2048 for 512kB L1. In this
1006
- scenario, the inner loop of the matrix multiplication of a
1007
- forward computation is 64×, 128× or 256× larger with
1008
- respect to the backward kernels’ cases. This fact motivates the
1009
- lower MAC/cyc of the BW ERR step (-22%) and BW GRAD
1010
- step (-46%) if compared to the FW kernel.
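- The inner-loop ratios quoted above follow directly from the tile shapes; the
- short check below only re-derives them from the numbers given in the text.
- backward_len = 8                                   # grad-output vector (8x1x1)
- forward_len = {128: 512, 256: 1024, 512: 2048}     # forward input length per L1 size [kB]
- for l1_kb, n in forward_len.items():
-     print(f"L1={l1_kb} kB: forward inner loop is {n // backward_len}x longer")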
1011
- L2-L1 DMA Bandwidth effects on performance: Next we
1012
- analyze the impact of L2-L1 DMA Bandwidth variations, due
1013
- to the Cluster DMA, on the overall performance of the learning
1014
- task. In particular, we monitor the latency and the MAC/cyc
1015
- for multiple values of L2-L1 bandwidth ranging from 8 to
1016
- 128 bits per clock cycle (bit/cyc) and different configurations
1017
- of #cores and L1 size. We remark that a higher value of
1018
- MAC/cyc indicates a better performing HW configuration. Our analysis assumes a single half-duplex DMA channel, hence the
1019
- bandwidth value accounts for either read or write transfers.
1020
- Fig. 9 reports the average MAC/cyc when running the
1021
- forward and backward steps with respect to the L2-L1 cluster’s
1022
- DMA bandwidth. As a benchmark, we consider the adaptive
1023
- stage of the MobileNetV1 model when the LR layer is set to
1024
- the 19th layer. Hence, we adopt our tiling strategy and double-
1025
- buffering scheme to realize the training. When increasing
1026
- the L1 size, the tensor tiles feature a larger size, therefore
1027
- demanding a higher transfer time to copy data between the
1028
- L1 memory (used for computation) and L2 memory (used for
1029
- storage). Thanks to the adopted double-buffering technique,
1030
- such transfer time can be hidden by the computation time
1031
- because the DMA works in the background of CPU operation
1032
- (compute-bound). On the contrary, if the transfer time ends up
1033
- dominating, the computation becomes DMA transfer-bound,
1034
- with lower benefits from the multi-core acceleration.
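- The compute-bound versus transfer-bound distinction can be summarized with a
- small break-even check; the tile size, MAC count and MAC/cyc values below are
- placeholders, not measurements of the actual kernels.
- # A double-buffered tile hides the DMA only if its compute time covers its transfer time.
- def is_compute_bound(tile_macs: int, mac_per_cyc: float,
-                      tile_bytes: int, dma_bw_bit_per_cyc: float) -> bool:
-     compute_cycles = tile_macs / mac_per_cyc
-     transfer_cycles = tile_bytes * 8 / dma_bw_bit_per_cyc
-     return compute_cycles >= transfer_cycles
- print(is_compute_bound(tile_macs=1_000_000, mac_per_cyc=1.9,
-                        tile_bytes=64 * 1024, dma_bw_bit_per_cyc=64))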
1035
- In case of single core execution, the measured MAC/cyc
1036
- does not vary with respect to the L1 size (128kB, 256kB or
1037
- 512kB) as can be seen from the plot. In this scenario, the CPU
1038
- time results as the dominant contribution with respect to the
1039
- transfer time: the execution is compute-bound and a higher
1040
- L2-L1 bandwidth does not impact the overall performance.
1041
- Differently, in a multi-core execution (2, 4 or 8 cores), the
1042
- average MAC/cyc increases and therefore the ratio between
1043
- transfer time and the computation time decreases: from the plot
1044
- we can observe higher performance if the DMA bandwidth is
1045
- increased. If featuring an L1 size of 128kB, the sweet spots
1046
- between DMA and compute bound are observed when the
1047
- L2-L1 DMA bandwidth is 16 (2 cores), 32 (4 cores) and 64
1048
- (8 cores) bit/cyc, respectively, as highlighted by the red circles
1049
- in the plot. These configurations denote the sweet spots to tune
1050
- the DMA requirements with respect to the chosen L1 memory
1051
- size and #cores.
1052
- If focusing more on the impact of the L1 memory size to
1053
- the multi-core performance, we observe up to 2× efficiency
1054
- gain with 8 cores with a larger L1 memory, increasing from
1055
- 0.25 MAC/cyc for a 128kB L1 memory to 0.4 MAC/cyc at
1056
- L1=256kB and to 0.53 MAC/cyc for 512kB of L1. At 64
1057
- bit/cyc of L2-L1 DMA bandwidth, the execution, which is
1058
- dominated by the computation, reaches 0.52 MAC/cyc, 2.12×
1059
- faster than the low-bandwidth configuration.
1060
- From this analysis we can conclude that the best design
1061
- point for the learning task on a low-end multi-core architecture
1062
- can be pinpointed leveraging the L2-L1 DMA Bandwidth and
1063
- the L1 memory size tuning: when using 8 cores, 128kB of
1064
- L1 memory, which is typically the main expensive resource
1065
- for the system, can lead already to the highest performance as
1066
- long as the DMA features a bandwidth of 64 bit/cyc. On the
1067
- contrary, if the DMA’s bandwidth is as low as 8 bit/cyc, a 512
1068
- kB L1 memory is needed to gain maximum performance. The
1069
- target chip VEGA includes a L1 memory of 128 kB; the DMA
1070
- follows a full-duplex scheme and can provide up to 64 bit/cyc
1071
- for read transactions and 64 bit/cyc for write transactions.
1072
- Therefore the VEGA HW architecture can fully exploit the
1073
- presented SW architecture and optimization schemes to reach
1074
- the optimal utilization and performance for the learning task.
1075
- Fig. 8. Efficiency, expressed in MAC/cyc, of the proposed CL primitives for forward and backward pass: PointWise, DepthWise, and Linear layers. The
1076
- analysis concerns a varying number of cores (1, 2, 4 or 8) and L1 memory size (128, 256 or 512 kB), which impacts on the dimension of the layer’s tensor
1077
- tiles as reported in the tables on the left.
1078
- Fig. 9. SW efficiency, expressed as average MAC/cyc, when running forward
1079
- and backward steps with respect to a varying L1-L2 bandwidth. Every line
1080
- corresponds to a configuration of #cores (1, 2, 4 or 8 cores) and L1 memory
1081
- size (128, 256 or 512kB).
1082
- D. Latency Evaluation on VEGA SoC
1083
- We run experiments on the VEGA SoC to assess the on-
1084
- device learning performance, in terms of latency and energy
1085
- consumption, of the proposed QLR-CL framework. Specifi-
1086
- cally, we report the computation time, i.e. the latency, at the
1087
- running frequency of 375MHz and the power consumption by
1088
- measuring the current absorbed by the chip when powered at
1089
- 1.8V . To measure the full layer latency, we profile forward
1090
- and backward tiled kernels, which include DMA transfers
1091
- of data, initially stored in L2, and calls to low-level kernel
1092
- primitives, introduced above. On average, we observe a 7% of
1093
- tiling overhead with respect to the single-tile execution on L1.
1094
- This is not surprising, due to the large bandwidth availability
1095
- between L1 and L2 and the presence of compute-bound matrix
1096
- multiplication operations.
- TABLE IV
1097
- CUMULATIVE LATENCY VALUES PER LEARNING EVENT FOR VEGA,
1098
- STM32, AND SNAPDRAGON 845.
1099
- LR VEGA @ 375 MHz STM32L4 @ 80 MHz Snapdragon
1100
- Layer Adaptive Frozen Cumul. Total Cumul. Total
1101
- l [s] [s] En. [J] [s] En. [J] [s]
1102
- 20  2.49×10^3  0.87  154   1.65×10^5  5688  n.a.
1103
- 21  1.73×10^3  0.94  107   1.15×10^5  3981  n.a.
1104
- 22  1.64×10^3  0.95  101   1.08×10^5  3728  n.a.
1105
- 23  8.77×10^2  1.03  54.3  5.86×10^4  2020  n.a.
1106
- 24  7.81×10^2  1.03  48.4  5.12×10^4  1769  n.a.
1107
- 25  4.01×10^2  1.09  24.9  2.65×10^4  915   n.a.
1108
- 26  3.81×10^2  1.10  23.5  2.49×10^4  859   n.a.
1109
- 27  2.07       1.25  0.13  1.39×10^2  4.80  0.50
1110
- Based on the implemented tiled functions, we report the
1111
- layer-wise performance in Table IV for any of the layers of
1112
- the MobileNet-V1 model. We consider as complete time for
1113
- the execution of a layer the cumulated time for frozen stage
1114
- and adaptive stage. The latency of the frozen stage is obtained
1115
- using DORY [39] to deploy the network, as this operation is
1116
- performed as pure 8-bit quantized inference. We compute the
1117
- full latency of the adaptive stage as the time needed to execute
1118
- the forward and backward phases of each layer. Since we have
1119
- multiple configurations, latencies for retraining start growing
1120
- from the last layer (#27) up to layer #20, where retraining
1121
- comprises a total of eight layers.
1122
- First of all, we note that frozen stage latencies are utterly
1123
- dominated by the adaptive stage . Apart from the faster infer-
1124
- ence backend, which can rely on 8-bit SIMD vectorization,
1125
- this is because only 21 images per mini-batch pass through
1126
- the frozen stage, while the adaptive stage has to be executed
1127
- on 128 latent inputs (107 LRs and the 21 dequantized outputs
1128
- from the frozen stage ), and it has to run for multiple epochs
1129
- (by default, 4) in order to work.
1130
- When l = 27, the adaptive stage is very fast thanks to
1131
- its very small number of parameters (it produces just the 50
1132
- output classes). This is the only case in which the frozen
1133
- stage is non-negligible (about 1/6 of the overall time). Progressing
1134
- upward in the table, the frozen stage becomes negligible. The
1135
- cumulative impact of forward and backward passes through
1136
- all the other layers we take into account (l from #20 to #26) is
1137
- in the range between 0.3h and 1.5h. In particular, l= 23
1138
- corresponds to about 14 min per learning event; this LR layer
1139
- corresponds to high accuracy ( >75% in Core50, see Fig. 6),
1140
- which means that in this time the proposed system is capable
1141
- of acquiring a very significant new capability (e.g., a new
1142
- task/object to classify) while retaining previous knowledge to
1143
- a high degree.
1144
- Having the basic mini-batch measurements, we can estimate
1145
- any scenario, by considering that to train with 1500 LR and
1146
- l= 27 , we will need 300 new images, thus we need 14 mini-
1147
- batches (300/21), which leads to 3.30 seconds to learn a new
1148
- set of images, with an accuracy of 69.2%. If we push back
1149
- the LR layer l, this leads to an increase of accuracy up to 76.5%,
1150
- at the expense of much larger latency, up to 42 minutes for
1151
- layer #20 (see Table IV).
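- The sketch below only replays the arithmetic of this estimate with the values
- quoted in the text.
- new_images = 300        # new images per learning event in the 1500-LR scenario
- batch_new  = 21         # new images entering each mini-batch
- print(new_images / batch_new)   # ~14.3, i.e. the ~14 mini-batches quoted above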
1152
- E. Energy Evaluation on CL Use-Cases and Comparison with
1153
- other Solutions
1154
- To understand the performance of our system and its real-
1155
- world applicability, we study two use-cases: a single mini-
1156
- batch of the Core50 training we used, and the simplified sce-
1157
- nario presented by Pellegrini et al. [1] in their demonstration
1158
- video. We compare our results with another MCU targeting
1159
- ultra-power consumption: a NUCLEO-64 board based on the
1160
- STM32L476RG MCU, on which we ran a direct port of the
1161
- same code we use on the PULP-based platforms. It has two on-
1162
- chip SRAMs with 1-cycle access time and an overall capacity
1163
- of 96kB. Performance results, in terms of latency, are reported
1164
- in Table IV, where we take into account the cumulative latency
1165
- values both for VEGA and STM32 implementations, along
1166
- with the cumulative energy consumption. Cumulative latency
1167
- is computed by adding from the linear layer of the network
1168
- the latencies of the preceding layers.
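- The accumulation rule for the cumulative latencies can be sketched as follows;
- the per-layer values are placeholders, not the measured entries of Table IV.
- from itertools import accumulate
- per_layer_s = [2.0, 380.0, 20.0]        # hypothetical latencies for l = 27, 26, 25
- print(list(accumulate(per_layer_s)))    # cumulative latency reported per LR layer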
1169
- On average, execution on VEGA’s 8 cores performs
1170
- 65× faster with respect to the STM32 solution thanks to
1171
- three main factors. Firstly, the clock frequency of VEGA is
1172
- 4.7× higher than the max clock frequency of the STM32L4
1173
- (375MHz vs 80MHz), also thanks to the superior technology
1174
- node. Secondly, VEGA presents a parallel speed-up of up
1175
- to 7.2×. Lastly, thanks to the more optimized ISA and the
1176
- core microarchitecture, VEGA performs fewer operations while
1177
- executing the same learning task. For example, the inner
1178
- loop of the matrix multiplication on VEGA requires only 4
1179
- instructions while the STM32L4 takes 9 instructions, resulting
1180
- in 2.25× faster execution, mainly thanks to the HW loop extension and the
1181
- fmadd.s instruction.
1182
- The latency speed-up leads to an energy gain of around
1183
- 37×, because the average power consumption of VEGA is
1184
- 2× higher than the STM32L4 at full load.
1185
- Notice that the latency measurement of the STM32L4 does
1186
- not account for possible overheads due to the tiling of data
1187
- Fig. 10. Battery Lifetime of the VEGA SoC and the STM32L4 devices when
1188
- handling multiple learning events per hour.
1189
- between the small on-chip SRAM banks and off-chip memory.
1190
- Even then, our results show that fine-tuning from any layer
1191
- above the last one results in too large a latency to be realistic
1192
- on STM32L4 – in the order of a day per learning event with
1193
- l= 23 . On the contrary, CL on VEGA can be completed in
1194
- 14minutes if selecting l= 23 or as fast as 3.3 seconds if
1195
- retraining only the last layer.
1196
- Given the reported energy consumption, we estimated the
1197
- battery lifetime of our device when adapting the inference
1198
- model by means of multiple learning events per hour; we
1199
- assumed no extra energy consumption for the remaining
1200
- time. In particular, Fig. 10 shows the battery lifetime (in
1201
- hours) depending on the selected Latent Replay layer and the
1202
- adaptation rate, expressed as the amount of learning events
1203
- per hour. We considered a 3300 mAh battery as the only
1204
- energy source for the device. By retraining only the last layer
1205
- (LR= 27 ), an intelligent node featuring our device can perform
1206
- more than 1080 continual learning events per hour, leading
1207
- to a lifetime of about 175h. On the contrary, if retraining
1208
- larger portions of the network, the training time increases
1209
- and the maximum rate of the learning events reduces to less
1210
- than 10/hour, with a lifetime in the range 200-1000h. In
1211
- comparison, on a STM32L4, if retraining the coefficients of
1212
- the last layer, the maximum learning rate per hour is limited to
1213
- 750, with a lifetime of about 10h. This latter can be increased
1214
- up to 10000h but retraining only once in one hour. At the
1215
- same learning event rate, the battery lifetime of VEGA is 20x
1216
- higher.
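- The lifetime curves of Fig. 10 follow from a simple energy budget; the sketch
- below assumes a 3.7 V nominal battery voltage, which is not stated in the text.
- def lifetime_hours(capacity_mah: float, energy_per_event_j: float,
-                    events_per_hour: float, v_batt: float = 3.7) -> float:
-     battery_j = capacity_mah / 1000 * 3600 * v_batt    # mAh -> Joules (assumed voltage)
-     return battery_j / (energy_per_event_j * events_per_hour)
- # e.g. retraining only the last layer (l = 27) on VEGA, 0.13 J per event:
- print(lifetime_hours(3300, 0.13, 1080))   # on the order of a few hundred hours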
1217
- Lastly, we compare with the use-case presented by Pel-
1218
- legrini et al. [1], where they developed a mobile phone
1219
- application that performs CL with LRs on a OnePlus6 with
1220
- Snapdragon845. For this scenario, they consider only 500 LRs
1221
- before the linear layer, these will be shuffled with 100 new
1222
- images. Then, by construction the mini-batch is composed of
1223
- 100 LRs and 20 new images, thus, for each of the 8 training
1224
- epochs, the network will process 5 times over the 20 new
1225
- images and the 100 LRs. This scenario leads them to obtain
1226
- an average latency of 502 ms for a single learning event. On
1227
- the other hand, considering our measurements on VEGA we
1228
- obtain a forward latency of 1.25s and a training time of 2.07s
1229
- for a whole learning event.
1230
- Considering the power envelope of a Snapdragon845 of
1231
- about 4W, and the average power consumption of VEGA of
1232
- 62mW, this implies that our solution is 9.7× more efficient in
1233
- terms of energy. We additionally assess the energy consump-
1234
- tion and the duration of a battery in the mobile application
1235
- scenario, provided the energy measurements on VEGA, when
1236
- using a 3300mAh battery. Thus, if we consider performing
1237
- learning over a mini-batch of images once every minute in
1238
- the ultra-fast scenario (just retraining the linear layer) and
1239
- to perform an inference each second, we obtain an energy
1240
- consumption of 0.25J per minute. This leads the accuracy of
1241
- the model to achieve an average of 69.2%, with an overall
1242
- lifetime of about 108 days.
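- The 9.7× figure can be recovered from the numbers above; the lines below are
- only a back-of-the-envelope check, not an additional measurement.
- snapdragon_energy_j = 4.0 * 0.502            # ~4 W for a 502 ms learning event
- vega_energy_j = 0.062 * (2.07 + 1.25)        # 62 mW over training plus forward time
- print(snapdragon_energy_j / vega_energy_j)   # ~9.7x in favour of VEGA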
1243
- VI. CONCLUSION
1244
- In this work, we presented what, to the best of our knowl-
1245
- edge, is the first HW/SW platform for TinyML Continual
1246
- Learning – together with the novel Quantized Latent Replay-
1247
- based Continual Learning (QLR-CL) methodology. More
1248
- specifically, we propose to use low-bitwidth quantization to
1249
- reduce the high memory requirements of a Continual Learning
1250
- strategy based on Latent Replay rehearsing. We show a small
1251
- accuracy drop as small as 0.26% if using 8-bit quantized LR
1252
- memory if compared to floating-point vectors and an average
1253
- degradation of 5% if lowering the bit precision to 7-bit,
1254
- depending on the LR layer selected. Our results demonstrate
1255
- that sophisticated adaptive behavior based on CL is within
1256
- reach for next-generation TinyML devices, such as PULP
1257
- devices; we show the capability to learn a new Core50 class
1258
- with accuracy up to 77%, using less than 64MB of memory –
1259
- a typical constraint to fit Flash memories. We show that our
1260
- QLR-CL library based on VEGA achieves up to 65× better
1261
- performance than a conventional STM32 microcontroller.
1262
- These results constitute an initial step towards moving the
1263
- TinyML from a strict train-then-deploy approach to a more
1264
- flexible and adaptive scenario, where low power devices are
1265
- capable to learn and adapt to changing tasks and conditions
1266
- directly in the field.
1267
- Although this work focused on a single CL method, we
1268
- remark that, thanks to the flexibility of the proposed platform,
1269
- other adaptation methods or models can be also supported,
1270
- especially if relying on the back-propagation algorithm and
1271
- CNN primitives, such as convolution operations.
1272
- ACKNOWLEDGEMENT
1273
- We thank Vincenzo Lomonaco and Lorenzo Pellegrini for
1274
- the insightful discussions.
1275
- REFERENCES
1276
- [1] L. Pellegrini, G. Graffieti, V . Lomonaco, and D. Maltoni, “Latent Replay
1277
- for Real-Time Continual Learning,” 2020 IEEE/RSJ International Con-
1278
- ference on Intelligent Robots and Systems (IROS) , pp. 10 203–10 209,
1279
- 2020.
1280
- [2] A. Kumar, S. Goyal, and M. Varma, “Resource-efficient machine learn-
1281
- ing in 2 kb ram for the internet of things,” in International Conference
1282
- on Machine Learning . PMLR, 2017, pp. 1935–1944.
1283
- [3] C. R. Banbury, V . Janapa Reddi, M. Lam, W. Fu, A. Fazel, J. Holleman,
1284
- X. Huang, R. Hurtado, D. Kanter, A. Lokhmotov et al. , “Benchmarking
1285
- tinyml systems: Challenges and direction,” arXiv e-prints , pp. arXiv–
1286
- 2003, 2020.[4] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang, V . Srinivasan,
1287
- and K. Gopalakrishnan, “PACT: Parameterized Clipping Activation for
1288
- Quantized Neural Networks,” arXiv e-prints , pp. arXiv–1805, 2018.
1289
- [5] D. Blalock, J. J. Gonzalez Ortiz, J. Frankle, and J. Guttag, “What is the
1290
- state of neural network pruning?” in Proceedings of Machine Learning
1291
- and Systems , I. Dhillon, D. Papailiopoulos, and V . Sze, Eds., vol. 2,
1292
- 2020, pp. 129–146.
1293
- [6] R. David, J. Duke, A. Jain, V . J. Reddi, N. Jeffries, J. Li, N. Kreeger,
1294
- I. Nappier, M. Natraj, S. Regev, R. Rhodes, T. Wang, and P. Warden,
1295
- “TensorFlow Lite Micro: Embedded Machine Learning on TinyML
1296
- Systems,” arXiv e-prints , pp. arXiv–2010, 2020.
1297
- [7] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and
1298
- D. Man ´e, “Concrete Problems in AI Safety,” arXiv e-prints , pp. arXiv–
1299
- 1606, 2016.
1300
- [8] M. de Prado, M. Rusci, A. Capotondi, R. Donze, L. Benini, and N. Pa-
1301
- zos, “Robustifying the Deployment of tinyML Models for Autonomous
1302
- mini-vehicles,” Sensors , vol. 21, no. 4, p. 1339, 2021.
1303
- [9] M. Song, K. Zhong, J. Zhang, Y . Hu, D. Liu, W. Zhang, J. Wang, and
1304
- T. Li, “In-situ ai: Towards autonomous and incremental deep learning
1305
- for iot systems,” in 2018 IEEE International Symposium on High
1306
- Performance Computer Architecture (HPCA) . IEEE, 2018, pp. 92–103.
1307
- [10] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins,
1308
- A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska,
1309
- D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming
1310
- catastrophic forgetting in neural networks,” in Proceedings of the na-
1311
- tional academy of sciences , N. A. Sciences, Ed., vol. 114, no. 13, 2017,
1312
- pp. 3521–3526.
1313
- [11] S. Dhar, J. Guo, J. Liu, S. Tripathi, U. Kurup, and M. Shah, “A
1314
- survey of on-device machine learning: An algorithms and learning theory
1315
- perspective,” ACM Transactions on Internet of Things , vol. 2, no. 3, pp.
1316
- 1–49, 2021.
1317
- [12] M. Delange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis,
1318
- G. Slabaugh, and T. Tuytelaars, “A continual learning survey: Defying
1319
- forgetting in classification tasks,” IEEE Transactions on Pattern Analysis
1320
- and Machine Intelligence , 2021.
1321
- [13] Z. Mai, R. Li, J. Jeong, D. Quispe, H. Kim, and S. Sanner, “Online
1322
- continual learning in image classification: An empirical survey,” arXiv
1323
- preprint arXiv:2101.10423 , 2021.
1324
- [14] G. M. Van de Ven and A. S. Tolias, “Three scenarios for continual
1325
- learning,” arXiv preprint arXiv:1904.07734 , 2019.
1326
- [15] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania,
1327
- P. H. Torr, and M. Ranzato, “On tiny episodic memories in continual
1328
- learning,” arXiv preprint arXiv:1902.10486 , 2019.
1329
- [16] V . Lomonaco, L. Pellegrini, P. Rodriguez, M. Caccia, Q. She, Y . Chen,
1330
- Q. Jodelet, R. Wang, Z. Mai, D. Vazquez, G. I. Parisi, N. Churamani,
1331
- M. Pickett, I. Laradji, and D. Maltoni, “CVPR 2020 Continual Learning
1332
- in Computer Vision Competition: Approaches, Results, Current Chal-
1333
- lenges and Future Directions,” arXiv preprint arXiv:2009.09929 , 2020.
1334
- [17] Z. Mai, H. Kim, J. Jeong, and S. Sanner, “Batch-level experience replay
1335
- with review for continual learning,” arXiv preprint arXiv:2007.05683 ,
1336
- 2020.
1337
- [18] L. Ravaglia, M. Rusci, A. Capotondi, F. Conti, L. Pellegrini,
1338
- V . Lomonaco, D. Maltoni, and L. Benini, “Memory-Latency-Accuracy
1339
- Trade-offs for Continual Learning on a RISC-V Extreme-Edge Node,”
1340
- in2020 IEEE Workshop on Signal Processing Systems (SiPS) . IEEE,
1341
- 2020, pp. 1–6.
1342
- [19] D. Rossi, F. Conti, A. Marongiu, A. Pullini, I. Loi, M. Gautschi,
1343
- G. Tagliavini, A. Capotondi, P. Flatresse, and L. Benini, “PULP: A
1344
- parallel ultra low power platform for next generation IoT applications,”
1345
- in2015 IEEE Hot Chips 27 Symposium (HCS) . IEEE, 2015, pp. 1–39.
1346
- [20] D. Rossi, F. Conti, M. Eggiman, S. Mach, A. D. Mauro, M. Guermandi,
1347
- G. Tagliavini, A. Pullini, I. Loi, J. Chen, E. Flamand, and L. Benini, “4.4
1348
- a 1.3tops/w @ 32gops fully integrated 10-core soc for iot end-nodes with
1349
- 1.7uw cognitive wake-up from mram-based state-retentive sleep mode,”
1350
- in2021 IEEE International Solid- State Circuits Conference (ISSCC) ,
1351
- vol. 64, 2021, pp. 60–62.
1352
- [21] S. Cass, “Taking ai to the edge: Google’s tpu now comes in a maker-
1353
- friendly package,” IEEE Spectrum , vol. 56, no. 5, pp. 16–17, 2019.
1354
- [22] H. Cai, C. Gan, L. Zhu, and S. Han, “TinyTL: Reduce Memory,
1355
- Not Parameters for Efficient On-Device Learning,” Advances in Neural
1356
- Information Processing Systems , vol. 33, 2020.
1357
- [23] H. Ren, D. Anicic, and T. Runkler, “TinyOL: TinyML with Online-
1358
- Learning on Microcontrollers,” arXiv e-prints , pp. arXiv–2103, 2021.
1359
- [24] S. Disabato and M. Roveri, “Incremental On-Device Tiny Machine
1360
- Learning,” in Proceedings of the 2nd International Workshop on Chal-
1361
- lenges in Artificial Intelligence and Machine Learning for Internet of
1362
- Things , 2020, pp. 7–13.14
1363
- [25] S. Benatti, F. Montagna, V . Kartsch, A. Rahimi, D. Rossi, and L. Benini,
1364
- “Online learning and classification of emg-based gestures on a parallel
1365
- ultra-low power platform using hyperdimensional computing,” IEEE
1366
- transactions on biomedical circuits and systems , vol. 13, no. 3, pp. 516–
1367
- 528, 2019.
1368
- [26] D. Maltoni and V . Lomonaco, “Continuous learning in single-
1369
- incremental-task scenarios,” Neural Networks , vol. 116, pp. 56–73, 2019.
1370
- [27] F. M. Castro, M. J. Mar ´ın-Jim ´enez, N. Guil, C. Schmid, and K. Alahari,
1371
- “End-to-end incremental learning,” in Proceedings of the European
1372
- conference on computer vision (ECCV) , 2018, pp. 233–248.
1373
- [28] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “icarl:
1374
- Incremental classifier and representation learning,” in Proceedings of the
1375
- IEEE conference on Computer Vision and Pattern Recognition , 2017, pp.
1376
- 2001–2010.
1377
- [29] T. L. Hayes, N. D. Cahill, and C. Kanan, “Memory efficient experience
1378
- replay for streaming learning,” in 2019 International Conference on
1379
- Robotics and Automation (ICRA) . IEEE, 2019, pp. 9769–9776.
1380
- [30] L. Caccia, E. Belilovsky, M. Caccia, and J. Pineau, “Online learned con-
1381
- tinual compression with adaptive quantization modules,” in International
1382
- Conference on Machine Learning . PMLR, 2020, pp. 1240–1250.
1383
- [31] B. Moons, R. Uytterhoeven, W. Dehaene, and M. Verhelst, “Envi-
1384
- sion: A 0.26-to-10TOPS/W subword-parallel dynamic-voltage-accuracy-
1385
- frequency-scalable Convolutional Neural Network processor in 28nm
1386
- FDSOI,” in 2017 IEEE International Solid-State Circuits Conference
1387
- (ISSCC) , Feb. 2017, pp. 246–247.
1388
- [32] V . Sze, Y .-H. Chen, T.-J. Yang, and J. Emer, “Efficient Processing of
1389
- Deep Neural Networks: A Tutorial and Survey,” arXiv:1703.09039 [cs] ,
1390
- Mar. 2017.
1391
- [33] S. Han, H. Mao, and W. J. Dally, “Deep Compression: Compressing
1392
- Deep Neural Networks with Pruning, Trained Quantization and Huffman
1393
- Coding,” arXiv:1510.00149 [cs] , Feb. 2016.
1394
- [34] Y . H. Chen, T. Krishna, J. S. Emer, and V . Sze, “Eyeriss: An Energy-
1395
- Efficient Reconfigurable Accelerator for Deep Convolutional Neural
1396
- Networks,” IEEE Journal of Solid-State Circuits , vol. 52, no. 1, pp.
1397
- 127–138, Jan. 2017.
1398
- [35] M. Le Gallo, A. Sebastian, R. Mathis, M. Manica, H. Giefers, T. Tuma,
1399
- C. Bekas, A. Curioni, and E. Eleftheriou, “Mixed-precision in-memory
1400
- computing,” Nature Electronics , vol. 1, no. 4, pp. 246–253, Apr. 2018.
1401
- [36] M. Zemlyanikin, A. Smorkalov, T. Khanova, A. Petrovicheva, and
1402
- G. Serebryakov, “512KiB RAM Is Enough! Live Camera Face Recog-
1403
- nition DNN on MCU,” in Proceedings of the IEEE/CVF International
1404
- Conference on Computer Vision Workshops , 2019, pp. 0–0.
1405
- [37] L. Lai, N. Suda, and V . Chandra, “CMSIS-NN: Efficient Neural Network
1406
- Kernels for Arm Cortex-M CPUs,” arXiv e-prints , p. arXiv:1801.06601,
1407
- Jan. 2018.
1408
- [38] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan,
1409
- L. Wang, Y . Hu, L. Ceze, C. Guestrin, and A. Krishnamurthy,
1410
- “TVM: An automated end-to-end optimizing compiler for deep
1411
- learning,” in 13th USENIX Symposium on Operating Systems
1412
- Design and Implementation (OSDI 18) . Carlsbad, CA: USENIX
1413
- Association, Oct. 2018, pp. 578–594. [Online]. Available: https:
1414
- //www.usenix.org/conference/osdi18/presentation/chen
1415
- [39] A. Burrello, A. Garofalo, N. Bruschi, G. Tagliavini, D. Rossi, and
1416
- F. Conti, “Dory: Automatic end-to-end deployment of real-world dnns
1417
- on low-cost iot mcus,” IEEE Transactions on Computers , p. 1–1, 2021.
1418
- [Online]. Available: http://dx.doi.org/10.1109/TC.2021.3066883
1419
- [40] A. Capotondi, M. Rusci, M. Fariselli, and L. Benini, “CMix-NN: Mixed
1420
- low-precision CNN library for memory-constrained edge devices,” IEEE
1421
- Transactions on Circuits and Systems II: Express Briefs , vol. 67, no. 5,
1422
- pp. 871–875, 2020.
1423
- [41] J. Cheng, J. Wu, C. Leng, Y . Wang, and Q. Hu, “Quantized CNN: A
1424
- unified approach to accelerate and compress convolutional networks,”
1425
- IEEE transactions on neural networks and learning systems , vol. 29,
1426
- no. 10, pp. 4730–4743, 2017.
1427
- [42] S. Ghamari, K. Ozcan, T. Dinh, A. Melnikov, J. Carvajal, J. Ernst, and
1428
- S. Chai, “Quantization-Guided Training for Compact TinyML Models,”
1429
- arXiv e-prints , pp. arXiv–2103, 2021.
1430
- [43] L. Cecconi, S. Smets, L. Benini, and M. Verhelst, “Optimal Tiling
1431
- Strategy for Memory Bandwidth Reduction for CNNs,” in Advanced
1432
- Concepts for Intelligent Vision Systems , ser. Lecture Notes in Computer
1433
- Science, J. Blanc-Talon, R. Penne, W. Philips, D. Popescu, and P. Sche-
1434
- unders, Eds. Springer International Publishing, 2017, pp. 89–100.
1435
- [44] T. Moreau, T. Chen, L. Vega, J. Roesch, E. Yan, L. Zheng, J. Fromm,
1436
- Z. Jiang, L. Ceze, C. Guestrin, and A. Krishnamurthy, “A hardware–
1437
- software blueprint for flexible deep learning specialization,” IEEE Micro ,
1438
- vol. 39, no. 5, pp. 8–16, 2019.[45] D. Kalamkar, D. Mudigere, N. Mellempudi, D. Das, K. Banerjee,
1439
- S. Avancha, D. T. V ooturi, N. Jammalamadaka, J. Huang, H. Yuen,
1440
- J. Yang, J. Park, A. Heinecke, E. Georganas, S. Srinivasan, A. Kundu,
1441
- M. Smelyanskiy, B. Kaul, and P. Dubey, “A Study of BFLOAT16 for
1442
- Deep Learning Training,” arXiv e-prints , pp. arXiv–1905, 2019.
1443
- [46] X. Sun, J. Choi, C.-Y . Chen, N. Wang, S. Venkataramani, V . V .
1444
- Srinivasan, X. Cui, W. Zhang, and K. Gopalakrishnan, “Hybrid 8-bit
1445
- floating point (HFP8) training and inference for deep neural networks,”
1446
- Advances in neural information processing systems , vol. 32, pp. 4900–
1447
- 4909, 2019.
1448
- [47] D. Shin, J. Lee, J. Lee, and H.-J. Yoo, “14.2 DNPU: An 8.1 TOPS/W
1449
- reconfigurable CNN-RNN processor for general-purpose deep neural
1450
- networks,” in 2017 IEEE International Solid-State Circuits Conference
1451
- (ISSCC) . IEEE, 2017, pp. 240–241.
1452
- [48] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y . Chen, and O. Temam,
1453
- “Diannao: A small-footprint high-throughput accelerator for ubiqui-
1454
- tous machine-learning,” ACM SIGARCH Computer Architecture News ,
1455
- vol. 42, no. 1, pp. 269–284, 2014.
1456
- [49] J. Shin, S. Choi, Y . Choi, and L.-S. Kim, “A pragmatic approach to
1457
- on-device incremental learning system with selective weight updates,”
1458
- in2020 57th ACM/IEEE Design Automation Conference (DAC) . IEEE,
1459
- 2020, pp. 1–6.
1460
- [50] D. Han, D. Im, G. Park, Y . Kim, S. Song, J. Lee, and H.-J. Yoo, “HNPU:
1461
- An Adaptive DNN Training Processor Utilizing Stochastic Dynamic
1462
- Fixed-Point and Active Bit-Precision Searching,” IEEE Journal of Solid-
1463
- State Circuits , pp. 1–1, 2021.
1464
- [51] S. Kang, D. Han, J. Lee, D. Im, S. Kim, S. Kim, and H.-J. Yoo,
1465
- “7.4 GANPU: A 135TFLOPS/W Multi-DNN Training Processor for
1466
- GANs with Speculative Dual-Sparsity Exploitation,” in 2020 IEEE
1467
- International Solid- State Circuits Conference - (ISSCC) , 2020, pp. 140–
1468
- 142.
1469
- [52] J. L. Lobo, J. Del Ser, A. Bifet, and N. Kasabov, “Spiking Neural
1470
- Networks and online learning: An overview and perspectives,” Neural
1471
- Networks , vol. 121, pp. 88–100, Jan. 2020.
1472
- [53] M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y . Cao, S. H. Choday,
1473
- G. Dimou, P. Joshi, N. Imam, S. Jain, Y . Liao, C.-K. Lin, A. Lines,
1474
- R. Liu, D. Mathaikutty, S. McCoy, A. Paul, J. Tse, G. Venkataramanan,
1475
- Y .-H. Weng, A. Wild, Y . Yang, and H. Wang, “Loihi: A Neuromorphic
1476
- Manycore Processor with On-Chip Learning,” IEEE Micro , vol. 38,
1477
- no. 1, pp. 82–99, Jan. 2018.
1478
- [54] J. Pei, L. Deng, S. Song, M. Zhao, Y . Zhang, S. Wu, G. Wang,
1479
- Z. Zou, Z. Wu, W. He, F. Chen, N. Deng, S. Wu, Y . Wang, Y . Wu,
1480
- Z. Yang, C. Ma, G. Li, W. Han, H. Li, H. Wu, R. Zhao, Y . Xie, and
1481
- L. Shi, “Towards artificial general intelligence with hybrid Tianjic chip
1482
- architecture,” Nature , vol. 572, no. 7767, pp. 106–111, Aug. 2019.
1483
- [55] G. Karunaratne, M. Schmuck, M. Le Gallo, G. Cherubini, L. Benini,
1484
- A. Sebastian, and A. Rahimi, “Robust high-dimensional memory-
1485
- augmented neural networks,” Nature Communications , vol. 12, no. 1,
1486
- p. 2468, Apr. 2021.
1487
- [56] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam,
1488
- and D. Kalenichenko, “Quantization and training of neural networks
1489
- for efficient integer-arithmetic-only inference,” in Proceedings of the
1490
- IEEE Conference on Computer Vision and Pattern Recognition , 2018,
1491
- pp. 2704–2713.
1492
- [57] S. Mach, D. Rossi, G. Tagliavini, A. Marongiu, and L. Benini, “A
1493
- transprecision floating-point architecture for energy-efficient embedded
1494
- computing,” in 2018 IEEE International Symposium on Circuits and
1495
- Systems (ISCAS) . IEEE, 2018, pp. 1–5.
1496
- [58] I. Loi, A. Capotondi, D. Rossi, A. Marongiu, and L. Benini, “The quest
1497
- for energy-efficient i$ design in ultra-low-power clustered many-cores,”
1498
- IEEE Transactions on Multi-Scale Computing Systems , vol. 4, no. 2, pp.
1499
- 99–112, 2017.
1500
- [59] V . Lomonaco, D. Maltoni, and L. Pellegrini, “Rehearsal-free continual
1501
- learning over small non-iid batches,” in 2020 IEEE/CVF Conference on
1502
- Computer Vision and Pattern Recognition Workshops (CVPRW) . IEEE
1503
- Computer Society, 2020, pp. 989–998.
1504
- [60] F. Conti, “Technical Report: NEMO DNN Quantization for Deployment
1505
- Model,” arXiv preprint arXiv:2004.05930 , 2020.15
1506
- Leonardo Ravaglia received his M.Sc. degree in
1507
- Automation Engineering from the University of
1508
- Bologna in 2019. He is currently a Doctoral Student
1509
- in Data Science and Computation at the University
1510
- of Bologna. His research interests include DNN al-
1511
- gorithms for Continual Learning, parallel computing
1512
- on Ultra Low Power devices and Quantized Neural
1513
- Networks.
1514
- Dr. Manuele Rusci received the Ph.D. degree
1515
- in electronic engineering from the University of
1516
- Bologna in 2018. He is currently a Post-Doctoral
1517
- Researcher at the same University at the Department
1518
- of Electrical, Electronic and Information Engineer-
1519
- ing “Guglielmo Marconi” and closely collaborates
1520
- with Greenwaves Technologies. His main research
1521
- interests include low-power embedded systems and
1522
- AI-powered smart sensors.
1523
- Davide Nadalini Davide Nadalini received the
1524
- M.Sc. degree in electronic engineering from the
1525
- University of Bologna in 2021. Since then, he is
1526
- a Ph.D. student at Politecnico di Torino. His main
1527
- research topic is Hardware-Software co-design and
1528
- optimization of embedded Artificial Intelligence. His
1529
- research interests include parallel computing, Quan-
1530
- tized Neural Networks and low-level optimization.
1531
- Dr. Alessandro Capotondi Dr. Alessandro Capo-
1532
- tondi (IEEE Member) is a postdoctoral researcher
1533
- at the Università di Modena e Reggio Emilia (IT).
1534
- His main research interests focus on heteroge-
1535
- neous architectures, parallel programming models,
1536
- and TinyML. He received his Ph.D. in Electrical,
1537
- Electronic, and Information Engineering from the
1538
- University of Bologna in 2016.
1539
- Prof. Francesco Conti received the Ph.D. in elec-
1540
- tronic engineering from the University of Bologna,
1541
- Italy, in 2016. He is currently an Assistant Professor
1542
- in the DEI Department of the University of Bologna.
1543
- From 2016 to 2020, he was a postdoctoral researcher
1544
- at the Integrated Systems Laboratory of ETH Zürich
1545
- in the Digital Systems group. His research is focused
1546
- on the development of deep learning based intelli-
1547
- gence on top of ultra-low power, ultra-energy effi-
1548
- cient programmable Systems-on-Chip. In particular,
1549
- he works on Deep Learning-aware architecture, on
1550
- tinyML hardware acceleration facilities such as dedicated accelerator cores
1551
- and ISA extensions, as well as on automated DNN architecture search, quan-
1552
- tization, and deployment methodologies tuned to maximally exploit hardware
1553
- features. He has been involved in the development of the RISC-V based open-
1554
- source Parallel Ultra-Low-Power (PULP) project since its inception (2013).
1555
- From 2020, he has collaborated with GreenWaves Technologies, France as a
1556
- consultant for the development of DNN and RNN acceleration IP. His research
1557
- has resulted in 50+ publications in international conferences and journals and
1558
- has been awarded several times, including the 2020 IEEE TCAS-I Darlington
1559
- Best Paper Award, the 2018 Hipeac Tech Transfer Award, the 2018 ESWEEK
1560
- Best Paper Award, and the 2014 ASAP Best Paper Award.
1561
- Prof. Luca Benini (Fellow, IEEE) received the
1562
- Ph.D. degree in electrical engineering from Stanford
1563
- University, Stanford, CA, USA, in 1997. He was
1564
- the Chief Architect of the Platform 2012/STHORM
1565
- Project with STMicroelectronics, Grenoble, France,
1566
- from 2009 to 2013. He held visiting/consulting po-
1567
- sitions with École Polytechnique Fédérale de Lau-
1568
- sanne, Lausanne, Switzerland; Stanford University;
1569
- and IMEC, Leuven, Belgium. He is currently a Full
1570
- Professor with the University of Bologna, Bologna,
1571
- Italy. He is also the Chair of Digital Circuits and
1572
- Systems with ETH Zürich, Zürich, Switzerland. He has authored over 700
1573
- papers in peer-reviewed international journals and conferences, four books,
1574
- and several book chapters. His current research interests include energy-
1575
- efficient system design and multicore system-on-chip design. Dr. Benini is
1576
- a member of Academia Europaea.
txt/2110.12687.txt DELETED
@@ -1,427 +0,0 @@
1
- arXiv:2110.12687v1 [cs.CL] 25 Oct 2021
- Fine-tuning of Pre-trained Transformers for Hate, Offensive,
2
- and Profane Content Detection in English and Marathi
3
- Anna Glazkova1,*, Michael Kadantsev2, Maksim Glazkov3,
4
- 1 University of Tyumen, Tyumen, Russia
5
- 2 Thales Canada, Transportation Solutions, Toronto, Canada
6
- 3 Neuro.net, Nizhny Novgorod, Russia
7
8
- Abstract
9
- This paper describes neural models developed for the Hate Speech and Offensive Content
10
- Identification in English and Indo-Aryan Languages Shared Task 2021. Our team called
11
- neuro-utmn-thales participated in two tasks on binary and fine-grained classification of
12
- English tweets that contain hate, offensive, and profane content (English Subtasks A &
13
- B) and one task on identification of problematic content in Marathi (Marathi Subtask
14
- A). For English subtasks, we investigate the impact of additional corpora for hate speech
15
- detection to fine-tune transformer models. We also apply a one-vs-rest approach based
16
- on Twitter-RoBERTa to discrimination between hate, profane and offensive posts. Our
17
- models ranked third in English Subtask A with the F1-score of 81.99% and ranked
18
- second in English Subtask B with the F1-score of 65.77%. For the Marathi tasks, we
19
- propose a system based on the Language-Agnostic BERT Sentence Embedding (LaBSE).
20
- This model achieved the second result in Marathi Subtask A obtaining an F1 of 88.08%.
21
- Introduction
22
- Social media has a great impact on our society. Social networks give us almost limitless
23
- freedom of speech and contribute to the rapid dissemination of information. However,
24
- these positive properties often lead to unhealthy usage of social media. Thus, hate
25
- speech spreading affects users’ psychological state, promotes violence, and reinforces
26
- hateful sentiments [4, 5]. This problem attracts many scholars to apply modern tech-
27
- nologies in order to make social media safer. The Hate Speech and Offensive Content
28
- Identification in English and Indo-Aryan Languages Shared Task (HASOC) 2021 [27]
29
- aims to compare and analyze existing approaches to identifying hate speech not only
30
- for English, but also for other languages. It focused on detecting hate, offensive, and
31
- profane content in tweets, and offering six subtasks. We participated in three of them:
32
- •English Subtask A: identifying hate, offensive, and profane content from the
33
- post in English [24].
34
- •English Subtask B: discrimination between hate, profane, and offensive posts
35
- in English.
36
- •Marathi Subtask A: identifying hate, offensive, and profane content from the
37
- post in Marathi [14].
38
- 1/10The source code for our models is freely available1.
39
- The paper is organized as follows. Section 2 contains a brief review of related works.
40
- Next, we describe our experiments on the binary and fine-grained c lassification of En-
41
- glish tweets in Section 3. In Section 4, we present our model for hat e, offensive, and
42
- profane language identification in Marathi. We conclude this paper in S ection 5. Finally,
43
- Section 6 contains acknowledgments.
44
- 1 Related Works
45
- We briefly discuss work related to harmful content detection in the past few
46
- years. Shared tasks related to hate speech and offensive language detection from tweets
47
- were organized as a part of some workshops and conferences, such as FIRE [22, 23], Se-
48
- mEval [3, 10], GermEval [40, 43], IberLEF [42], and OSACT [29]. The participants
49
- proposed a broad range of approaches from traditional machine learning techniques
50
- (for example, Support Vector Machines [15, 38], Random Forest [34]) to various neural
51
- architectures (Convolutional Neural Networks, CNN [35]; Long Short Term Memory,
52
- LSTM [26, 28]; Embeddings from Language Models, ELMo [6]; and Bidirectional En-
53
- coder Representations from Transformers, BERT [19, 36]). In most cases, BERT-based
54
- systems outperformed other approaches.
55
- Most research on hate speech detection continues to be based on English corpora.
56
- Despite this, the harmful content is distributed in different languages. Therefore, there
57
- have been previous attempts at creating corpora and developing models for hate speech
58
- detection in common non-English languages, such as Arabic [1, 29], German [22, 23, 40,
59
- 43], Italian [7, 37], Spanish [3, 42], Hindi [22, 23], Tamil and Malayalam [22]. Several
60
- studies have focused on collecting hate speech corpora for Chinese [41], Portuguese [11],
61
- Polish [33], Turkish [8] and Russian [16] languages.
62
- 2 English Subtasks A & B: Identification and Fine-
63
- grained Classification of Hate, Offensive, and Pro-
64
- fane Tweets
65
- The objective of English Subtasks A & B is to identify whether a tweet in English
66
- contains harmful content (Subtask A) and perform a fine-grained classification of posts
67
- into three categories, including: hate, offensive, or profane (Subtask B).
68
- 2.1 Data
69
- The dataset provided to the participants of the shared task contains 4355 manually
70
- annotated social media posts divided into training (3074) and test (1281) sets. Table 1
71
- presents the data description.
72
- Further, we tested several data sampling techniques using different hate speech cor-
73
- pora as additional training data. Firstly, we evaluated the joint use of multilingual data
74
- provided by the organizers of HASOC 2021, including the English, the Hindi, and the
75
- Marathi training sets. Secondly, as the training sets were highly imbalanced, we applied
76
- the positive class random oversampling technique so that each training batch contained
77
- approximately the same number of samples. Besides, we experimented with the seq2seq-
78
- based data augmentation technique [17]. For this purpose, we fine-tuned the BART-base
79
- model for the denoising reconstruction task where 40% of tokens are masked and the
80
- goal of the decoder is to reconstruct the original sequence. Since the BART model [18]
81
- 1https://github.com/ixomaxip/hasoc
82
- Table 1. Data description.
- Label | Description | Number of examples (training set)
- Subtask A
- NOT  | Non Hate-Offensive: the post does not contain hate speech, profane, offensive content | 1102
- HOF  | Hate and Offensive: the post contains hate, offensive, or profane content | 1972
- Subtask B
- NONE | The post does not contain hate speech, profane, offensive content | 1102
- HATE | Hate speech: the post contains hate speech content | 542
- OFFN | Offensive: the post contains offensive content | 482
- PRFN | Profane: the post contains profane words | 948
96
- Table 2. Hate-related dataset characteristics.
- Dataset | Size | Labels
- HASOC 2020 | 4522 | HOF - 50.4%; NOT - 49.6%
- HatebaseTwitter | 24783 | hate speech - 20.15%; offensive language - 85.98%; neither - 23.77%
- HatEval | 13000 | 1 (hate speech) - 42.08%; 0 (not hate speech) - 57.92%
- OLID | 14100 | OFF - 32.91%; NOT - 67.09%
107
- already contains the <mask> token, we use it to replace mask tokens. We generated
108
- one synthetic example for every tweet in the training set. Thus, the augmented data
109
- size is the same as the size of the original training set. Finally, we evaluated the
110
- impact of additional training data, including: (a) the English dataset, used at HASOC
111
- 2020 [22]; (b) HatebaseTwitter, based on the hate speech lexicon from Hatebase2 [10];
112
- (c) HatEval, a dataset presented at SemEval-2019 Task 5 [3]; (d) Offensive Language
113
- Identification Dataset (OLID), used in the SemEval-2019 Task 6 (OffensEval) [45]. All
114
- corpora except the HatebaseTwitter dataset contain non-intersective classes. Besides,
115
- all listed datasets are collected from Twitter. A representative sampling of additional
116
- data is shown in Table 2.
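- A minimal sketch of the mask-and-reconstruct augmentation described above is
- given below. It calls the public facebook/bart-base checkpoint directly instead
- of our fine-tuned model, and everything except the 40% masking ratio is
- illustrative rather than the exact submission code.
- import random
- from transformers import BartForConditionalGeneration, BartTokenizer
- tok = BartTokenizer.from_pretrained("facebook/bart-base")
- model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
- def augment(tweet: str, mask_ratio: float = 0.4) -> str:
-     # mask ~40% of the words and let BART reconstruct the sequence
-     words = tweet.split()
-     masked = [tok.mask_token if random.random() < mask_ratio else w for w in words]
-     ids = tok(" ".join(masked), return_tensors="pt").input_ids
-     out = model.generate(ids, max_length=64, num_beams=4)
-     return tok.decode(out[0], skip_special_tokens=True)
- print(augment("this game was absolutely awful and the referee did not care"))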
117
- We preprocessed the datasets for Subtasks A & B in a similar manner. Inspired
118
- by [2], we used the following text preprocessing technique3: (a) removed all URLs; (b)
119
- replaced all user mentions with the $MENTION$ placeholder.
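- A regex-based equivalent of this preprocessing step is sketched below; the
- actual runs relied on the tweet-preprocessor package referenced in the footnote.
- import re
- def preprocess(tweet: str) -> str:
-     tweet = re.sub(r"https?://\S+|www\.\S+", "", tweet)   # (a) remove URLs
-     tweet = re.sub(r"@\w+", "$MENTION$", tweet)           # (b) mask user mentions
-     return tweet.strip()
- print(preprocess("@user1 check this out https://t.co/abc123 #hasoc"))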
120
- 2.2 Models
121
- We conduct our experiments with neural models based on BERT [12] as they have
122
- achieved state-of-the-art results in harmful content detection. For example, BERT-
123
- based models proved efficient at previous HASOC shared tasks [22, 23] and SemEval
124
- [32,45].
125
- We used the following models:
126
- 2https://hatebase.org/
127
- 3https://pypi.org/project/tweet-preprocessor
128
- Table 3. Model validation results for English Subtask A, %.
129
- Model F1 P R
130
- BERT 79.24 79.74 78.82
131
- BERTweet 78.65 79.36 78.08
132
- Twitter-RoBERTa 81.1 80.01 82.65
133
- LaBSE (English dataset) 78.83 79.5 78.29
134
- LaBSE (English + Hindi) 79.32 79.95 78.8
135
- LaBSE (English, Hindi, and Marathi) 79.27 81.74 77.79
136
- Adding extra data to Twitter-RoBERTa
137
- + random oversampling 79.97 79.9 80.04
138
- + BART data augmentation 79.24 78.44 80.31
139
- + HASOC 2020 78.79 77.66 80.47
140
- + HatabaseTwitter 81.19 79.99 82.93
141
- + HatEval 74.31 75.53 73.64
142
- + OLID 79.29 78.17 80.93
143
- •BERT-base [12], a model pre-trained on BookCorpus [46] and English Wikipedia
144
- using a masked language modeling objective.
145
- •BERTweet-base [30], a pre-trained language model for English tweets. The corpus
146
- used to pre-train BERTweet consists of 850M English Tweets including 845M
147
- Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the COVID-
148
- 19 pandemic.
149
- •Twitter-RoBERTa-base for Hate Speech Detection [2], a RoBERTa-base [20] model
150
- trained on 58M tweets and fine-tuned for hate speech detection with the Tweet-
151
- Eval benchmark.
152
- •LaBSE [13], a language-agnostic BERT sentence embedding model supporting 109
153
- languages.
154
- 2.3 Experiments
155
- For both Subtask A and Subtask B, we adopted pre-trained models from HuggingFace
156
- [44] and fine-tuned them using PyTorch [31]. We fine-tuned each pre-trained language
157
- model for 3 epochs with the learning rate of 2e-5 using the AdamW optimizer [21]. We
158
- set batch size to 32 and maximum sequence size to 64. To validate our models during
159
- the development phase, we divided labelled data using the train and validation split in
160
- the ratio 80:20.
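- A condensed sketch of this fine-tuning configuration is shown below; the
- checkpoint identifier and the dummy texts/labels are illustrative assumptions,
- while the hyper-parameters match the ones listed above.
- import torch
- from torch.utils.data import DataLoader, TensorDataset, random_split
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
- name = "cardiffnlp/twitter-roberta-base-hate"    # assumed checkpoint id
- tok = AutoTokenizer.from_pretrained(name)
- model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
- texts, labels = ["dummy tweet"] * 100, [0, 1] * 50          # placeholder data
- enc = tok(texts, truncation=True, padding="max_length", max_length=64,
-           return_tensors="pt")
- ds = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
- train_ds, _val_ds = random_split(ds, [80, 20])              # 80:20 split
- loader = DataLoader(train_ds, batch_size=32, shuffle=True)
- opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
- model.train()
- for epoch in range(3):                                      # 3 epochs
-     for input_ids, attn, y in loader:
-         loss = model(input_ids=input_ids, attention_mask=attn, labels=y).loss
-         opt.zero_grad()
-         loss.backward()
-         opt.step()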
161
- Table 3 shows the performance of our models on the validation subset for Subtask A
162
- in terms of macro-averaging F1-score (F1), precision (P), and recall (R). As can be seen
163
- from the table, BERT, BERTweet, and LaBSE show very close results during validation.
164
- Despite this, LaBSE jointly fine-tuned on three mixed multilingual datasets shows the
165
- highest precision score. The use of Twitter-RoBERTa increases the F1-score by 1.5-2.5%
166
- compared to other classification models. Based on this, we chose Twitter-RoBERTa for
167
- further experiments. We found out that neither the random oversampling technique
168
- nor the use of the augmented and additional data shows a performance improvement,
169
- except the joint use of the original dataset and the HatebaseTwitter dataset that gives
170
- an F1-score growth of 0.09% and a precision growth of 0.28% compared to basic Twitter-
171
- RoBERTa.
172
- For our official submission for Subtask A, we designed a soft-voting ensemble of five
173
- Twitter-RoBERTa jointly fine-tuned on the original training set and the HatebaseTwit-
174
- Table 4. Performance of our final models for English Subtasks A & B, official results,
175
- %.
176
- Subtask | F1 (our model) | F1 (winning solution) | P (our model) | P (winning solution) | Avg F1 | Number of submitted teams | Rank
- A | 81.99 | 83.05 | 84.68 | 84.14 | 75.7 | 56 | 3
- B | 65.77 | 66.57 | 66.32 | 66.88 | 57.07 | 37 | 2
185
- ter dataset (see Table 4). For Subtask B, we used the following one-vs-rest approach to
186
- discrimination between hate, profane, and offensive posts.
187
- •First, we applied our Subtask A binary models to identify non hate-offensive
188
- examples.
189
- •Second, we fine-tuned three Twitter-RoBERTa binary models to delimit exam-
190
- ples of hate-vs-profane, hate-vs-offensive, and offensive-vs-profane classes. The
191
- training dataset was extended with the HatebaseTwitter dataset.
192
- •Finally, we compared the results of the binary models. If the result was defined
193
- uniquely, we used it as the predicted label. Otherwise, we chose the label in propor-
194
- tion to the number of examples in the training set (a code sketch of this voting rule is given after the examples below).
195
- This can be illustrated briefly by the following examples.
196
- –Let the models show the following results:
197
- ∗hate-vs-profane →hate;
198
- ∗hate-vs-offensive →hate;
199
- ∗offensive-vs-profane →offensive.
200
- Thus, classes have the following votes: hate – 2, offensive – 1, profane – 0.
201
- Then we predict the HATE label.
202
- –If the results are:
203
- ∗hate-vs-profane →profane;
204
- ∗hate-vs-offensive →hate;
205
- ∗offensive-vs-profane →offensive,
206
- we have the class votes, such as hate – 1, offensive - 1, profane – 1. Then we
207
- choose the PRFN label as the most common label in the training set.
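- The voting rule illustrated by these examples can be sketched as follows; the
- deterministic tie-break towards the most frequent training label (PRFN) is a
- simplification of the proportional choice described above.
- from collections import Counter
- MOST_COMMON_TRAIN_LABEL = "PRFN"
- def vote(hate_vs_prfn: str, hate_vs_offn: str, offn_vs_prfn: str) -> str:
-     # each argument is the winner of one pairwise binary model for a HOF post
-     counts = Counter([hate_vs_prfn, hate_vs_offn, offn_vs_prfn]).most_common()
-     if len(counts) == 1 or counts[0][1] > counts[1][1]:   # a unique majority exists
-         return counts[0][0]
-     return MOST_COMMON_TRAIN_LABEL                        # 1-1-1 tie
- print(vote("HATE", "HATE", "OFFN"))   # -> HATE (2-1-0 votes, first example)
- print(vote("PRFN", "HATE", "OFFN"))   # -> PRFN (1-1-1 tie, second example)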
208
- 3 Marathi Subtask A: Identifying Hate, Offensive,
209
- and Profane Content from the Post
210
- 3.1 Data
211
- For the Marathi task, we used the original training and test sets provided by the orga-
212
- nizers of the HASOC 2021. The whole dataset contains 2499 tweets, including 1874
213
- training and 625 test examples. The training set consists of 1205 texts of the NOT
214
- class and 669 texts of the HOF class. We used raw data as an input for our models.
215
- Following [25, 39], we experimented with the combination of the English, the Hindi, and
216
- the Marathi training sets provided by the organizers.
217
- Table 5. Model validation results for Marathi Subtask A, %.
218
- Model F1 P R
219
- XLM-RoBERTa (Marathi dataset) 83.87 85.39 83.39
220
- XLM-RoBERTa (Marathi + Hindi) 83.23 83.82 82.76
221
- XLM-RoBERTa (Marathi + English) 84.83 85.03 84.64
222
- XLM-RoBERTa (Marathi + Hindi + English) 84.35 84.82 83.95
223
- LaBSE (Marathi) 87.76 87.82 87.68
224
- LaBSE (Marathi + Hindi) 87.62 88.21 87.13
225
- LaBSE (Marathi + English) 87.62 88.21 87.13
226
- LaBSE (Marathi + Hindi + English) 86.34 86.63 86.08
227
- Table 6. Performance of our final model for the Marathi Subtask A, official results, %.
- F1 (our model) | F1 (winning solution) | P (our model) | P (winning solution) | Avg F1 | Number of submitted teams | Rank
- 88.08 | 91.44 | 87.58 | 91.82 | 82.55 | 25 | 2
236
- 3.2 Models
237
- We evaluated the following models:
238
- •XLM-RoBERTa-base [9], a transformer-based multilingual masked language model
239
- supporting 100 languages.
240
- •LaBSE [13], a language-agnostic BERT sentence embedding model pre-trained on
241
- texts in 109 languages.
242
- 3.3 Experiments
243
- We experimented with the above-mentioned language models fine-tuned on monolingual
244
- and multilingual data. For model evaluation during the development phase, we used
245
- a random train and validation split in the ratio 80:20 with a fixed seed. We set the
246
- same model parameters as for the English tasks.
247
- Table 5 illustrates the results. It can be seen that LaBSE outperforms XLM-
248
- RoBERTa in all cases. Moreover, the F1-score of LaBSE fine-tuned only on the Marathi
249
- dataset is higher than the results of LaBSE fine-tuned on multilingual data. XLM-
250
- RoBERTa, on the other hand, mostly benefits from multilingual fine-tuning.
251
- For our final submission, we used a soft-voting ensemble of five LaBSE models fine-tuned on
252
- the official Marathi dataset provided by the organizers of the competition. The results
253
- of this model on the test set are shown in Table 6.
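A soft-voting ensemble of this kind is commonly implemented by averaging the class probabilities of the individually fine-tuned models. The sketch below assumes HuggingFace-style classifiers and is not the authors' code.

    import torch

    def soft_vote(models, batch):
        """Average the softmax outputs of several fine-tuned classifiers (e.g. five LaBSE runs)."""
        probs = []
        with torch.no_grad():
            for model in models:
                logits = model(**batch).logits           # batch: tokenized inputs
                probs.append(torch.softmax(logits, dim=-1))
        return torch.stack(probs).mean(dim=0).argmax(dim=-1)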
254
- Conclusion
255
- In this paper, we have presented the details about our participation in the HASOC
256
- Shared Task 2021. We have explored an application of domain-specific monolingual and
257
- multilingual BERT-based models to the tasks of binary and fine-grained classification of
258
- Twitter posts. We also proposed a one-vs-rest approach to discrimination between hate,
259
- offensive, and profane tweets. Further research can focus on analyzing the effectiveness
260
- of various text preprocessing techniques for harmful content detection and exploring
261
- how different transfer learning approaches can affect classification performance.
262
- Acknowledgments
263
- The work on multi-label text classification was carried out by Anna Glazkova and sup-
264
- ported by the grant of the President of the Russian Federation no. MK-637.2020.9.
265
- References
266
- 1. N. Albadi, M. Kurdi, and S. Mishra. Are they our brothers? analys is and de-
267
- tection of religious hate speech in the Arabic twittersphere. In 2018 IEEE/ACM
268
- International Conference on Advances in Social Networks An alysis and Mining
269
- (ASONAM) , pages 69–76. IEEE, 2018.
270
- 2. F. Barbieri, J. Camacho-Collados, L. E. Anke, and L. Neves. Twe etEval: Unified
271
- benchmark and comparative evaluation for tweet classification. In Proceedings
272
- of the 2020 Conference on Empirical Methods in Natural Langu age Processing:
273
- Findings , pages 1644–1650, 2020.
274
- 3. V. Basile, C. Bosco, E. Fersini, N. Debora, V. Patti, F. M. R. Pard o, P. Rosso,
275
- M. Sanguinetti, et al. Semeval-2019 task 5: Multilingual detection of hate speech
276
- against immigrants and women in Twitter. In 13th International Workshop on
277
- Semantic Evaluation , pages 54–63, 2019.
278
- 4. L. E. Beausoleil. Free, hateful, and posted: rethinking first ame ndment protection
279
- of hate speech in a social media world. BCL Rev. , 60:2101, 2019.
280
- 5. M. Bilewicz and W. Soral. Hate speech epidemic. the dynamic effect s of deroga-
281
- tory language on intergroup relations and political radicalization. Political Psy-
282
- chology , 41:3–33, 2020.
283
- 6. M. Bojkovsky and M. Pikuliak. STUFIIT at SemEval-2019 task 5: M ultilingual
284
- hate speech detection on twitter with MUSE and ELMo embeddings. I nProceed-
285
- ings of the 13th International Workshop on Semantic Evaluat ion, pages 464–468,
286
- 2019.
287
- 7. C. Bosco, D. Felice, F. Poletto, M. Sanguinetti, and T. Maurizio. O verview of the
288
- EVALITA 2018 hate speech detection task. In EVALITA 2018-Sixth Evaluation
289
- Campaign of Natural Language Processing and Speech Tools fo r Italian , volume
290
- 2263, pages 1–9. CEUR, 2018.
291
- 8. C ¸ . C ¸ ¨ oltekin. A corpus of Turkish offensive language on social media. In Proceed-
292
- ings of the 12th Language Resources and Evaluation Conferen ce, pages 6174–6184,
293
- 2020.
294
- 9. A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzm´ an,
295
- ´E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. Unsupervise d cross-lingual
296
- representation learning at scale. In Proceedings of the 58th Annual Meeting of
297
- the Association for Computational Linguistics , pages 8440–8451, 2020.
298
- 10. T. Davidson, D. Warmsley, M. Macy, and I. Weber. Automated h ate speech de-
299
- tection and the problem of offensive language. In Proceedings of the International
300
- AAAI Conference on Web and Social Media , volume 11, 2017.
301
- 11. R. P. de Pelle and V. P. Moreira. Offensive comments in the Brazilia n web:
302
- a dataset and baseline results. In Anais do VI Brazilian Workshop on Social
303
- Network Analysis and Mining . SBC, 2017.
304
- 12. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of
305
- deep bidirectional transformers for language understanding. arXiv preprint
306
- arXiv:1810.04805 , 2018.
307
- 13. F. Feng, Y. Yang, D. Cer, N. Arivazhagan, and W. Wang. Langu age-agnostic
308
- BERT sentence embedding. arXiv preprint arXiv:2007.01852 , 2020.
309
- 14. S. Gaikwad, T. Ranasinghe, M. Zampieri, and C. M. Homan. Cross -lingual offen-
310
- sive language identification for low resource languages: The case of Marathi. In
311
- Proceedings of RANLP , 2021.
312
- 15. S. Hassan, Y. Samih, H. Mubarak, A. Abdelali, A. Rashed, and S. A. Chowdhury.
313
- Alt submission for osact shared task on offensive language detect ion. InProceed-
314
- ings of the 4th Workshop on Open-Source Arabic Corpora and Pr ocessing Tools,
315
- with a Shared Task on Offensive Language Detection , pages 61–65, 2020.
316
- 16. L. Komalova, A. Glazkova, D. Morozov, R. Epifanov, L. Motovs kikh, and E. May-
317
- orova. Automated classification of potentially insulting speech acts on social net-
318
- work sites. In International Conference on Digital Transformation and Gl obal
319
- Society . Springer, 2021.
320
- 17. V. Kumar, A. Choudhary, and E. Cho. Data augmentation using pre-trained
321
- transformer models. In Proceedings of the 2nd Workshop on Life-long Learning
322
- for Spoken Language Systems , pages 18–26, 2020.
323
- 18. M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Lev y, V. Stoy-
324
- anov, and L. Zettlemoyer. Bart: Denoising sequence-to-sequen ce pre-training for
325
- natural language generation, translation, and comprehension. I nProceedings of
326
- the 58th Annual Meeting of the Association for Computationa l Linguistics , pages
327
- 7871–7880, 2020.
328
- 19. P. Liu, W. Li, and L. Zou. NULI at SemEval-2019 task 6: Transfe r learning for
329
- offensive language detection using bidirectional transformers. I nProceedings of
330
- the 13th international workshop on semantic evaluation , pages 87–91, 2019.
331
- 20. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Le wis, L. Zettle-
332
- moyer, and V. Stoyanov. Roberta: A robustly optimized bert pret raining ap-
333
- proach.arXiv preprint arXiv:1907.11692 , 2019.
334
- 21. I. Loshchilov and F. Hutter. Decoupled weight decay regulariza tion. InInterna-
335
- tional Conference on Learning Representations , 2018.
336
- 22. T. Mandl, S. Modha, A. Kumar M, and B. R. Chakravarthi. Overv iew of the
337
- HASOC track at FIRE 2020: Hate speech and offensive language ide ntification
338
- in Tamil, Malayalam, Hindi, English and German. In Forum for Information
339
- Retrieval Evaluation , pages 29–32, 2020.
340
- 23. T. Mandl, S. Modha, P. Majumder, D. Patel, M. Dave, C. Mandlia, and A. Patel.
341
- Overview of the HASOC track at FIRE 2019: Hate speech and offen sive content
342
- identification in Indo-European languages. In Proceedings of the 11th forum for
343
- information retrieval evaluation , pages 14–17, 2019.
344
- 24. T. Mandl, S. Modha, G. K. Shahi, H. Madhu, S. Satapara, P. Maj umder,
345
- J. Sch¨ afer, T. Ranasinghe, M. Zampieri, D. Nandini, and A. K. Jaisw al. Overview
346
- of the HASOC subtrack at FIRE 2021: Hate speech and offensive c ontent identi-
347
- fication in English and Indo-Aryan languages. In Working Notes of FIRE 2021 -
348
- Forum for Information Retrieval Evaluation . CEUR, December 2021.
349
- 25. S. Mishra, S. Prasad, and S. Mishra. Multilingual joint fine-tuning of transformer
350
- models for identifying trolling, aggression and cyberbullying at TRAC 2 020. In
351
- Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying ,
352
- pages 120–125, 2020.
353
- 26. A. K. Mishraa, S. Saumyab, and A. Kumara. IIIT DWD@ HASOC 2020: Iden-
354
- tifying offensive content in Indo-European languages. 2020.
355
- 27. S. Modha, T. Mandl, G. K. Shahi, H. Madhu, S. Satapara, T. Ran asinghe, and
356
- M. Zampieri. Overview of the HASOC subtrack at FIRE 2021: Hate sp eech
357
- and offensive content identification in English and Indo-Aryan langu ages and
358
- conversational hate speech. In FIRE 2021: Forum for Information Retrieval
359
- Evaluation, Virtual Event, 13th-17th December 2021 . ACM, December 2021.
360
- 28. A. Montejo-R´ aez, S. M. Jim´ enez-Zafra, M. A. Garc´ ıa-Cum breras, and M. C. D´ ıaz-
361
- Galiano. SINAI-DL at SemEval-2019 task 5: Recurrent networks a nd data aug-
362
- mentation by paraphrasing. In Proceedings of the 13th International Workshop
363
- on Semantic Evaluation , pages 480–483, 2019.
364
- 29. H. Mubarak, K. Darwish, W. Magdy, T. Elsayed, and H. Al-Khalifa . Overview
365
- of OSACT4 Arabic offensive language detection shared task. In Proceedings of
366
- the 4th Workshop on Open-Source Arabic Corpora and Processi ng Tools, with a
367
- Shared Task on Offensive Language Detection , pages 48–52, 2020.
368
- 30. D. Q. Nguyen, T. Vu, and A. T. Nguyen. BERTweet: A pre-train ed language
369
- model for English tweets. In Proceedings of the 2020 Conference on Empirical
370
- Methods in Natural Language Processing: System Demonstrat ions, pages 9–14,
371
- 2020.
372
- 31. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Ch anan, T. Killeen,
373
- Z. Lin, N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, h igh-
374
- performance deep learning library. Advances in neural information processing
375
- systems , 32:8026–8037, 2019.
376
- 32. J. Pavlopoulos, J. Sorensen, L. Laugier, and I. Androutsopo ulos. SemEval-2021
377
- task 5: Toxic spans detection. In Proceedings of the 15th International Workshop
378
- on Semantic Evaluation (SemEval-2021) , pages 59–69, 2021.
379
- 33. M. Ptaszynski, A. Pieciukiewicz, and P. Dyba/suppress la. Results of the P olEval 2019
380
- shared task 6: First dataset and open shared task for automatic cyberbullying
381
- detection in Polish Twitter. 2019.
382
- 34. B. Ray and A. Garain. JU at HASOC 2020: Deep learning with RoBER Ta
383
- and random forest for hate speech and offensive content identif ication in Indo-
384
- European languages. In FIRE (Working Notes) , pages 168–174, 2020.
385
- 35. A. Ribeiro and N. Silva. Inf-hateval at semeval-2019 task 5: Co nvolutional neural
386
- networks for hate speech detection against women and immigrants on Twitter. In
387
- Proceedings of the 13th International Workshop on Semantic Evaluation , pages
388
- 420–425, 2019.
389
- 36. J. Risch, A. Stoll, M. Ziegele, and R. Krestel. hpiDEDIS at GermEv al 2019:
390
- Offensive language identification using a German BERT model. In KONVENS ,
391
- 2019.
392
- 37. M. Sanguinetti, F. Poletto, C. Bosco, V. Patti, and M. Stranisci. An Italian
393
- Twitter corpus of hate speech against immigrants. In Proceedings of the Eleventh
394
- International Conference on Language Resources and Evalua tion (LREC 2018) ,
395
- 2018.
396
- 38. F. Schmid, J. Thielemann, A. Mantwill, J. Xi, D. Labudde, and M. Sp ranger.
397
- Fosil-offensive language classification of German tweets combining S VMs and
398
- deep learning techniques. In KONVENS , 2019.
399
- 39. P. Singh and P. Bhattacharyya. CFILT IIT Bombay at HASOC 20 20: Joint
400
- multitask learning of multilingual hate speech and offensive content detection
401
- system. In FIRE (Working Notes) , pages 325–330, 2020.
402
- 40. J. M. Struß, M. Siegel, J. Ruppenhofer, M. Wiegand, M. Klenner , et al. Overview
403
- of GermEval task 2, 2019 shared task on the identification of offe nsive language.
404
- 2019.
405
- 41. X. Tang, X. Shen, Y. Wang, and Y. Yang. Categorizing offensiv e language in so-
406
- cial networks: A Chinese corpus, systems and an explanation tool. InChina Na-
407
- tional Conference on Chinese Computational Linguistics , pages 300–315. Springer,
408
- 2020.
409
- 42. M. Taul´ e, A. Ariza, M. Nofre, E. Amig´ o, and P. Rosso. Overv iew of detoxis at
410
- IberLEF 2021: Detection of toxicity in comments in Spanish. Procesamiento del
411
- Lenguaje Natural , 67:209–221, 2021.
412
- 43. M. Wiegand, M. Siegel, and J. Ruppenhofer. Overview of the ger meval 2018
413
- shared task on the identification of offensive language. In 14th Conference on
414
- Natural Language Processing KONVENS 2018 , 2018.
415
- 44. T. Wolf, J. Chaumond, L. Debut, V. Sanh, C. Delangue, A. Moi, P . Cistac,
416
- M. Funtowicz, J. Davison, S. Shleifer, et al. Transformers: State -of-the-art natural
417
- language processing. In Proceedings of the 2020 Conference on Empirical Methods
418
- in Natural Language Processing: System Demonstrations , pages 38–45, 2020.
419
- 45. M. Zampieri, S. Malmasi, P. Nakov, S. Rosenthal, N. Farra, and R . Kumar.
420
- Semeval-2019 task 6: Identifying and categorizing offensive langu age in social
421
- media (OffensEval). In Proceedings of the 13th International Workshop on Se-
422
- mantic Evaluation , pages 75–86, 2019.
423
- 46. Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. To rralba, and
424
- S. Fidler. Aligning books and movies: Towards story-like visual explan ations by
425
- watching movies and reading books. In Proceedings of the IEEE international
426
- conference on computer vision , pages 19–27, 2015.
427
 
 
 
 
 
 
 
 
 
 
 
 
txt/2111.01203.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2111.09172.txt DELETED
@@ -1,536 +0,0 @@
1
- End-to-end optimized image compression with competition of prior distributions
2
- Benoit Brummer
3
- intoPIX
4
- Mont-Saint-Guibert, Belgium
5
- [email protected] De Vleeschouwer
6
- Universit ´e catholique de Louvain
7
- Louvain-la-Neuve, Belgium
8
9
- Abstract
10
- Convolutional autoencoders are now at the forefront of
11
- image compression research. To improve their entropy cod-
12
- ing, encoder output is typically analyzed with a second
13
- autoencoder to generate per-variable parametrized prior
14
- probability distributions. We instead propose a compression
15
- scheme that uses a single convolutional autoencoder and
16
- multiple learned prior distributions working as a competition
17
- of experts. Trained prior distributions are stored in a static
18
- table of cumulative distribution functions. During inference,
19
- this table is used by an entropy coder as a look-up-table
20
- to determine the best prior for each spatial location. Our
21
- method offers rate-distortion performance comparable to
22
- that obtained with a predicted parametrized prior with only
23
- a fraction of its entropy coding and decoding complexity.
24
- 1. Introduction
25
- Image compression typically consists of a transforma-
26
- tion step (including quantization) and an entropy coding
27
- step that attempts to capture the probability distribution of
28
- a transformed context to generate a smaller compressed bit-
29
- stream. Entropy coding ranges in complexity from simple
30
- non-adaptive encoders [ 26,24] to complex arithmetic coders
31
- with adaptive context models [ 15,23]. The entropy cod-
32
- ing strategy has been revised to address the specificities of
33
- learned compression. More specifically, for recent works
34
- that make use of a convolutional autoencoder [ 12] (AE) as
35
- the all-inclusive transformation and quantization step, the en-
36
- tropy coder relies on a cumulative probability model (CPM)
37
- trained alongside the AE [ 5]. This model estimates the cumu-
38
- lative distribution function (CDF) of each channel coming
39
- out of the AE and passes these learned CDFs to an entropy
40
- coder such as range encoding [16].
41
- Such a simple method outperforms traditional codecs
42
- like JPEG2000 but work is still needed to surpass complex
43
- codecs like BPG. Johannes Ball ´e et al. (2018) [ 6] proposed
44
- analyzing the output of the convolutional encoder with an-
45
- other AE to generate a floating-point scale parameter that differs for every variable that needs to be encoded by the
46
- entropy coder, thus for every location in every channel. This
47
- method has been widely used in subsequent works but in-
48
- troduces substantial complexity in the entropy coding step
49
- because a different CDF is needed to encode every variable
50
- in the latent representation of the image, whereas the single
51
- AE method by Ball ´e et al. (2017) [ 5] reused the same CDF
52
- table for every latent spatial location.
53
- Our work uses the principle of competition of experts
54
- [22,14] to get the best out of both worlds. Multiple prior
55
- distributions compete for the lowest bit cost on every spatial
56
- location in the quantized latent representation. During train-
57
- ing, only the best prior distribution is updated in each spatial
58
- location, further improving the prior distributions special-
59
- ization. CDF tables are fixed at the end of training. Hence,
60
- at testing, the CDF table resulting in the lowest bitcost is
61
- assigned to each spatial location of the latent representation.
62
- The rate-distortion (RD) performance obtained is compa-
63
- rable to that obtained with a parametrized distribution [ 6],
64
- yet the entropy coding process is greatly simplified since it
65
- does not require a per-variable CDF and can build on look-
66
- up-tables (LUT) rather than the computation of analytical
67
- distributions.
68
- 2. Background
69
- Entropy coders such as range encoding [ 16] require cdfs
70
- where, for each variable to be encoded, the probability that a
71
- smaller or equal value appears is defined for every allowable
72
- value in the latent representation space. Johannes Ball ´e et
73
- al.’s seminal work (2017) [ 5] consists of an AE, computing a
74
- latent image representation consisting of C_L channels of size
75
- H_L × W_L, and a CPM, consisting of one CDF per latent out-
76
- put channel, which are trained conjointly. The latent repre-
77
- sentation coming out of the encoder is quantized then passed
78
- through the CPM. The CPM defines, in a parametrized and
79
- differentiable manner, a CDF per channel. At the end of
80
- training, the CPM is evaluated at every possible value¹ to
81
- generate the static CDF table. The CDF table is not differen-
82
- tiable, but going from a differentiable CPM to a static CDF
83
- table speeds up the encoding and decoding process. The
84
- CDF table is used to compress latent representations with an
85
- entropy coder; the approximate bit cost of a symbol is minus the
86
- binary logarithm of its probability.
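As a toy illustration of that relation (ours, not from the paper): a symbol whose probability under the prior is 0.25 costs about 2 bits in the entropy-coded bitstream.

    import math

    p = 0.25               # probability of the symbol under the prior
    bits = -math.log2(p)   # approximate cost charged by the entropy coder
    print(bits)            # 2.0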
87
- Ball´e et al. (2018) improved the RD efficiency by re-
88
- placing the unique CDF table with a Gaussian distribution
89
- parametrized with a hyperprior (HP) sub-network [ 6]. The
90
- HP generates a scale parameter, and in turn a different CDF,
91
- for every variable to be encoded. Thus, complexity is added
92
- by exploiting the parametrized Gaussian prior during the
93
- entropy coding process, since a different CDF is required for
94
- each variable in the channel and spatial dimensions.
95
- Minnen et al. proposed a scheme where one of multi-
96
- ple probability distributions is chosen to adapt the entropy
97
- model locally [ 21]. However, these distributions are defined
98
- a posteriori, given the encoder trained with a global entropy
99
- model. Thus [ 21] does not perform as well as the HP scheme
100
- [6] per [ 19, Fig. 2a]. In contrast, the present method jointly
101
- optimizes the local entropy models and the AE in an end-to-
102
- end fashion that results in greater performance. Minnen et al.
103
- [19] later proposed to improve RD with the use of an autore-
104
- gressive sequential context model. However, as highlighted
105
- in [13], this is obtained at the cost of increased runtime
106
- by several orders of magnitude. Subsequent works have
107
- attempted to reduce complexity of the neural network archi-
108
- tecture [ 10] and to bridge the RD gap with Minnen’s work
109
- [13], but entropy coding complexity has remained largely
110
- unaddressed and has instead evolved towards increased com-
111
- plexity [ 19,7,20] compared to [ 6]. The present work builds
112
- on Ball ´e et al. (2017) [ 5] and achieves the performance of
113
- Ball´e et al. (2018) [ 6] without the complexity introduced
114
- by a per-variable parametrized probability distribution. We
115
- chose Ball ´e et al. (2017) as a baseline because it corresponds
116
- to the basic unit adopted as a common reference and starting
117
- point for most models proposed in the recent literature to im-
118
- prove compression quality [ 6,19,13,20]. Due to its generic
119
- nature, our contribution remains relevant for the newer, often
120
- computationally more complex, incremental improvements
121
- on Ball ´e et al. (2017).
122
- 3. Competition of prior distributions
123
- Our proposed method introduces competitions of expert
124
- [22,14] prior distributions: a single AE transforms the image
125
- and a set of prior distributions are trained to model the CDF
126
- of the latent representation in each spatial location. For each
127
- latent spatial dimension the CDF table which minimizes
128
- bit cost is selected; that prior is either further optimized
129
- on the features it won in the training mode, or its index is
130
- stored for decoding in the inference mode. This scheme is
131
- illustrated in Figure 1, a set of 16 optimized CDF tables is
132
- shown in Figure 2, and three sample images are segmented
133
- by “winning” CDF table in Figure 3.
134
- All prior distributions are estimated in parallel by consid-
135
- ering N_CDF CDF tables, and selecting, as a function of the
136
- [Figure 1. AE compression scheme with competition of prior distri-
- butions. The AE architecture is detailed in [6, Fig. 4]. The indices
- i denote the indices of the CDF tables that minimize the bitcount for
- each latent spatial dimension. Loss = Distortion + bitCost.]
- [Figure 2. We observe some diversity among the 16 cumulative
- distribution functions learned by a network trained with MSE loss
- and λ = 4096. Each box presents a CDF table and each colored
- line corresponds to the CDF of one of 256 latent channels. The best
- fitting CDF table is selected for each latent spatial location.]
173
- encoded latent spatial location, the one that minimizes the
174
- entropy coder bitcount. The CDF table index is determined
175
- for each spatial location by evaluating each CDF table in
176
- inference. This can be done in a vectorized operation given
177
- sufficient memory. During training the CPM is evaluated
178
- instead of CDF tables such that the probabilities are up to
179
- date and the model is differentiable, and the bit cost is re-
180
- turned as it contributes to the loss function. The cost of CDF
181
- table indices has been shown to be negligible due to the
182
- reasonably small number of priors, which in turn results
183
- from the fact that little gain in latent code entropy has been
184
- obtained by increasing the number of priors.
185
- In all our experiments, the AE architecture follows the
- one in Ballé et al. (2018) [6], without the HP, since we found
- that the AE from [6] offers better RD than the one described
- in Ballé et al. (2017) [5], even with a single CDF table. A
- functional training loop is described in Algorithm 1.
- [Figure 3. Segmentation of three test images [1]: each distinct color
- represents one of 64 CDF tables used to encode a latent spatial
- location (16×16-pixel patch)]
192
- Algorithm 1 Training loop
- y ← model.Encoder(x)
- ŷ ← quantize(y)
- x̂ ← clip(model.Decoder(ŷ), 0, 1)
- distortion ← visualLossFunction(x̂, x)
- for 0 ≤ k < H_L and 0 ≤ l < W_L do
-     bitCost[k,l] ← min_{i<N_CDF} −log2( CPM_i(ŷ[k,l] + 0.5) − CPM_i(ŷ[k,l] − 0.5) )
- end for    ⊲ CPM is the differentiable version of the CDF
- Loss ← distortion + |bitCost|
202
- Loss.backward()
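A possible PyTorch-style rendering of the inner competition step (ours; tensor shapes, the summation over channels, and the cpm interface are assumptions based on the description above):

    import torch

    def competing_bit_cost(y_hat, cpm):
        # y_hat: quantized latents, shape (C_L, H_L, W_L).
        # cpm(v): differentiable CDF values under each prior, shape (N_CDF, C_L, H_L, W_L).
        p = cpm(y_hat + 0.5) - cpm(y_hat - 0.5)            # per-variable probability under each prior
        bits = -torch.log2(p.clamp_min(1e-9)).sum(dim=1)   # sum over channels -> (N_CDF, H_L, W_L)
        best_bits, best_idx = bits.min(dim=0)              # competition: keep the cheapest prior per location
        return best_bits.sum(), best_idx                   # total bit cost and the chosen table indices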
203
- 4. Experiments
204
- 4.1. Method
205
- These experiments are based on the PyTorch implemen-
206
- tation of Ball ´e et al. (2018) [ 6] published by Liu Jia-
207
- heng [ 9,13]. To implement our proposed method, the
208
- HP is omitted in favor of competition of expert prior dis-
209
- tributions. The CPM is that defined in [ 9] with an addi-
210
- tional N_CDF dimension to compute all CDF tables in par-
211
- allel. Theoretical results are verified using the torchac
212
- range coder [ 18,17,16]. A functional training loop is de-
213
- scribed in Algorithm 1, and source code is provided on
214
- https://github.com/trougnouf/Manypriors .
215
- To ensure that all priors get an opportunity to train, the prior
216
- distributions that have not been used for at least fifty steps are randomly assigned to spatial locations with the largest bit-
217
- counts, to be forced to train. The Adam optimizer [ 11] is
218
- used with a starting learning rate (LR) of 0.0001 for the AE
219
- and 0.001 for the CPM. Performance is tested every 2500
220
- steps in inference mode on the validation set, and the LR
221
- is decayed by a factor of 0.99 if the performance has not
222
- improved for two tests. Reported performance is that of
223
- the model that minimizes (visualLoss + bitCost) on the
224
- validation set at the end of training. Base models are trained
225
- for six million steps at λ = 4096 with the mean squared er-
226
- ror (MSE) loss. Smaller λ values and MS-SSIM models are
227
- trained for four million steps starting from the base model
228
- with their LR and optimizer reset. All models use C_H = 192
229
- (hidden layer channels) and C_L = 256 (output channels)
230
- such that a single base model is needed for each prior con-
231
- figuration. The training and validation dataset is made of
232
- free-license images from Wikimedia Commons [ 3]; mainly
233
- “Category:Featured pictures on Wikimedia Common” which
234
- consists of 13928 images of the highest quality. The images
235
- are cropped into 10242pixels patches on disk to speed up
236
- further resizing, then they are resized on-the-fly by a random
237
- factor down to 2562pixels during training. A batch size of 4
238
- patches is used. The kodak set [ 2] is used as a validation set
239
- and the CLIC professional test dataset [ 4] is used for testing.
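The on-the-fly augmentation described above could look roughly like this (our sketch; only the 1024 and 256 sizes come from the text):

    import random
    from torchvision import transforms
    from PIL import Image

    def random_downscale(patch: Image.Image, min_size: int = 256) -> Image.Image:
        """patch: a pre-cropped 1024x1024 image; downscale by a random factor, never below 256x256."""
        scale = random.uniform(min_size / patch.width, 1.0)
        new_size = max(min_size, int(patch.width * scale))
        return transforms.Resize(new_size)(patch)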
240
- The RD curve of our “multiprior” model is compared
241
- with that of the HP model [ 6], which is trained from scratch
242
- using Liu Jiaheng’s PyTorch implementation [ 9,13]. Liu
243
- Jiaheng’s code differs slightly from the paper’s definition [ 6]
244
- in that a Laplace distribution is used in place of the normal
245
- distribution to stabilize training. Complexity is measured as
246
- the number of GMac (billion multiply-accumulate operation)
247
- using the ptflops counter [ 25] and the number of memory
248
- lookup operations is calculated manually.
249
- 4.2. Results
250
- The PSNR RD curve measured on the CLIC professional
251
- test set [ 4] is shown on top of Figure 4. The performance
252
- of a 64-priors model is in line with that of the HP model
253
- : they both perform slightly better than BPG at high bpp,
254
- and achieve significantly better RD than the single-prior
255
- model. In the middle, the RD value at = 4096 , the highest
256
- bitrate, is shown for 1, 2, 4, 8, 16, 32, 64, and 128 prior
257
- distributions. 128-priors offer marginal gains and costs an
258
- increased training time (1.5) and encoding time. MS-SSIM
259
- performance of fine-tuned models is shown in the bottom
260
- of Figure 4; the 64-priors model still performs similarly to
261
- [6], and both learned compression models benefit from this
262
- more perceptual metric compared with traditional codecs. A
263
- visual comparison of images compressed with the MSE loss
264
- (λ = 512) and the equivalent bitrate settings in conventional
265
- codecs is shown in Figure 5.
266
- Computational complexity of our Manypriors has been
267
- compared to the one of the HP model [6]. This complex-
- [Figure 4. Top: PSNR RD curve of a 64-priors model on the CLIC
- pro. test set, compared with the HP model [6], and the BPG and
- JPEG codecs. Middle: Zoom in on models with 1, 2, 4, 8, 16, 32,
- and 64 priors. Bottom: MS-SSIM RD curve.]
302
- Table 1. Complexity of the HP model [6] compared to Manypriors
- (ours), expressed in GMac for the neural network parts and number
- of memory lookup operations (* or parametrized Laplace CDF
- generations in full-precision) for the CDF tables generation, to
- process a 4K image.
- (#) | Hyperprior | Manypriors | ratio MP/HP
- Encoding, GMac: main encoder | 769.82 | 769.82 |
- Encoding, GMac: hyper encoder | 23.75 | |
- Encoding, GMac: hyper decoder | 23.86 | |
- Encoding, GMac: total | 817.43 | 769.82 | 0.942
- Encoding, lookups: indices | | 530.84 M |
- Encoding, lookups: CDF | 829.44 K* | 32.400 K |
- Encoding, lookups: total | 829.44 K* | 530.87 M | N_CDF = 64
- Decoding, GMac: hyper decoder | 23.154 | |
- Decoding, GMac: main decoder | 769.60 | 769.60 |
- Decoding, GMac: total | 792.75 | 769.60 | 0.971
- Decoding, lookups: CDF (total) | 829.44 K* | 32.400 K | 1/C_L = 0.004
320
- ity is expressed in GMac for the neural network parts and
321
- number of memory lookup operations. It is summarized in
322
- Table 1. The lack of a HP AE saves 3 % to 6 % GMac, de-
323
- pending on whether only the HP decoder (image decoding)
324
- Figure 5. Visual comparison of Larry the cat [ 1] compressed with
325
- learned (λ = 512) and conventional methods. Top-left: uncom-
326
- pressed, top-middle: JPEG (PSNR: 29.3, 0.224 bpp), top-right:
327
- BPG (PSNR: 32.9, 0.217 bpp), bottom-left: 1-prior (PSNR: 32.4,
328
- 0.252 bpp), bottom-middle: hyperprior (PSNR: 32.8, 0.217 bpp),
329
- bottom-right: 64-priors/ours (PSNR: 32.9, 0.218 bpp)
330
- or the whole HP codec (image encoding) is used. Decoding
331
- with the Manypriors scheme is greatly simplified compared
332
- to [6] because the CDF tables generation process takes the
333
- optimal indices stored as side-information and looks up one
334
- static CDF table per latent spatial dimension, that is C_L (typ-
335
- ically 256) fewer lookups than with a HP. During encoding,
336
- the Manypriors scheme must look up every latent variable
337
- with every CDF table in order to determine the most cost-
338
- effective CDF tables. This results in N_CDF (typically 64)
339
- times more lookup operations than the HP scheme overall,
340
- although these lookup operations are relatively cheap be-
341
- cause only two values are needed (variable ± 0.5), whereas
342
- each CDF table lookup in [6] returns L probabilities. More-
343
- over, it is challenging to make an accurate CDF LUT for the
344
- HP scheme, because quantizing the distribution scale param-
345
- eter reduces the accuracy of the resulting CDFs, negatively
346
- impacting the bitrate. This challenge is exacerbated when
347
- the distribution has multiple parameters [ 19] or a mixture
348
- of distributions [ 7] is used. In Figure 4, LUT are replaced
349
- by accurate but complex Laplace distribution computation
350
- for the HP scheme in order to maximize the reported RD
351
- performance.
352
- Time complexity is measured for every step on CPU,
353
- where it can be reliably profiled due to synchronous execu-
354
- tion. It is summarized in Table 2 with the following distinct
355
- sub-categories: NN (neural network) is the time spent in
356
- the AE, CDF generation is the time spent building the CDF
357
- tables for a specific image, and entropy is the bitstream gener-
358
- ation. All operations are done using the PyTorch framework
359
- in python, except for entropy encoding which makes use of
360
- the torchac range coding library [ 18,17], written in C++, andTable 2. Breaking down the image encoding and decoding time, in
361
- seconds. Image: 4.5 MP snail [ 1]. CPU: AMD Ryzen 7 2700X.
362
- Time avg. of 50 runs.
363
- (#)Hyperprior
364
- (Ball ´e2018)64-priors
365
- (ours)ratio
366
- (oursHP)
367
- EncodingNN encode: main + hyperprior 3.81 + 0.41 3.79 + 0.00 0.90
368
- entropy encode, main + hyperprior 0.15 + 0.02 0.15 + 0.00
369
- CDF : select indices + gather tables 0.00 +FP: 15.95
370
- LUT: 5.661.90 + 0.81FP: 0.17
371
- LUT: 0.48
372
- encode (total)FP: 20.33
373
- LUT: 10.046.65FP: 0.32
374
- LUT: 0.66
375
- DecodingNN decode : main + hyperprior 10.66 + 0.34 10.50 0.95
376
- CDF : gather tablesFP: 15.95
377
- LUT: 5.660.81FP: 0.05
378
- LUT: 0.14
379
- entropy decode : main + hyperprior 0.24 + 0.02 0.24 0.92
380
- decode (total)FP: 27.21
381
- LUT: 16.9211.54FP: 0.42
382
- LUT: 0.68
383
- the prior indices are compressed using the LZMA library [ 8].
384
- The total encoding time of the 64-priors model is 0.32 time
385
- that of the HP model and the decoding time is 0.42 times that
386
- of the HP model. The timing is more significant when it is
387
- broken down by sub-category because each component has
388
- a different response time depending on the hardware (and
389
- software) architecture in place. The AE (“NN”) encoding
390
- time is 0.90 that of the HP scheme and decoding time is
391
- 0.95 time as much as the HP. Both the hyper-encoder and
392
- hyper-decoder are called during encoding, thus it appears
393
- that each part of the HP sub-network costs 5 % of the AE
394
- time. The time taken to build the CDF tables for the HP
395
- model was measured both by estimating the per-variable
396
- Laplace distributions (“full-precision”) and with a quantized
397
- scale parameter LUT. In any case, finding the best indices of
398
- a 64-priors model appears to be relatively inexpensive and
399
- the total CDF tables generation time is only 0.17 to 0.48 that
400
- of the HP model (depending on whether the HP model uses
401
- full-precision or LUT) for encoding. During decoding, the
402
- 64-priors model spends 0.05 to 0.14 as much time building
403
- the CDF tables as the HP model, because the optimal CDF
404
- table indices have already been determined during encoding
405
- and they are included in the bitstream.
406
- 5. Conclusion
407
- Convolutional autoencoders trained for compression are
408
- optimized for both rate and distortion. Rate is estimated with
409
- a cumulative probability model, which in turns generates a
410
- CDF for every latent variable to be encoded. A single CDF
411
- per latent channel is not sufficient to capture the statistics at
412
- the output of the encoder, nor to allow the encoder to express
413
- a wide variety of features. To support multiple statistics, the
414
- hyperprior [ 6] parametrizes a standard distribution, but this
415
- introduces a great deal of complexity in the entropy coding
416
- stage because the CDF differs for every latent variable to be
417
- encoded. The proposed method uses multiple prior distri-
418
- butions working as a competition of experts to capture the
419
- relevant features which they specialize on. This approach is
420
- advantageous because the learned CDF tables are stored in
421
- a static LUT once training is finished, and a model trainedwith 64 prior distributions performs with a similar RD as
422
- one trained with a HP sub-network. Moreover, a learned
423
- CDF table includes the CDF for all channels in the latent
424
- code. Hence, accessing the CDF table for a spatial location
425
- provides the CDF for each of its channels and the number of
426
- lookups is reduced to the number of latent spatial locations.
427
- In our experiments, CDF tables generation in the encoding
428
- step takes 0.17 to 0.48 as much time with a 64-priors model
429
- as it does with the HP model (depending on the precision
430
- of the HP model). This ratio is lowered to 0.05 to 0.14 dur-
431
- ing decoding because the prior indices have already been
432
- determined during the encoding.
433
- 6. Acknowledgements
434
- This research has been funded by the Walloon Region.
435
- Computational resources have been provided by the super-
436
- computing facilities of the Universit ´e catholique de Lou-
437
- vain (CISM/UCL) and the Consortium des ´Equipements de
438
- Calcul Intensif en F ´ed´eration Wallonie Bruxelles (C ´ECI)
439
- funded by the Fond de la Recherche Scientifique de Bel-
440
- gique (F.R.S.-FNRS) under convention 2.5020.11 and by the
441
- Walloon Region.
442
- References
443
- [1]Commons test photographs. https : / / commons .
444
- wikimedia . org / wiki / Category : Commons _
445
- Test_Photographs . Accessed: 2020-10-22. 3, 4, 5
446
- [2]True color kodak images. http://r0k.us/graphics/
447
- kodak/ . Accessed: 2020-01-20. 3
448
- [3]Wikimedia commons. https://commons.wikimedia.
449
- org. Accessed: 2020-04-03. 3
450
- [4]Challenge on learned image compression. http://
451
- challenge.compression.cc/tasks/ , 2020. 3
452
- [5]Johannes Ball ´e, Valero Laparra, and Eero Simoncelli. End-
453
- to-end optimized image compression. In 5th International
454
- Conference on Learning Representations, ICLR 2017 , 2017.
455
- 1, 2, 3
456
- [6]Johannes Ball ´e, David Minnen, Saurabh Singh, Sung Jin
457
- Hwang, and Nick Johnston. Variational image compression
458
- with a scale hyperprior. In International Conference on Learn-
459
- ing Representations , 2018. 1, 2, 3, 4, 5
460
- [7]Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro
461
- Katto. Learned image compression with discretized gaussian
462
- mixture likelihoods and attention modules. In Proceedings of
463
- the IEEE/CVF Conference on Computer Vision and Pattern
464
- Recognition (CVPR) , June 2020. 2, 4
465
- [8] Lasse Collin and Igor Pavlov. Xz utils, Jul 2009. 5
466
- [9]Liu Jiaheng. compression. https://github.com/
467
- liujiaheng/compression , 2020. 3
468
- [10] Nick Johnston, Elad Eban, Ariel Gordon, and Johannes Ball ´e.
469
- Computationally efficient neural image compression. CoRR ,
470
- abs/1912.08771, 2019. 2
471
- [11] Diederik P. Kingma and Jimmy Ba. Adam: A method for
472
- stochastic optimization. In Yoshua Bengio and Yann LeCun,editors, 3rd International Conference on Learning Represen-
473
- tations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015,
474
- Conference Track Proceedings , 2015. 3
475
- [12] Mark A. Kramer. Nonlinear principal component analy-
476
- sis using autoassociative neural networks. AIChE Journal ,
477
- 37(2):233–243, 1991. 1
478
- [13] Jiaheng Liu, Guo Lu, Zhihao Hu, and Dong Xu. A unified
479
- end-to-end framework for efficient deep image compression.
480
- arXiv preprint arXiv:2002.03370 , 2020. 2, 3
481
- [14] Shunta Maeda. Fast and flexible image blind denoising
482
- via competition of experts. In 2020 IEEE/CVF Conference
483
- on Computer Vision and Pattern Recognition Workshops
484
- (CVPRW) , pages 2239–2247, 2020. 1, 2
485
- [15] Detlev Marpe, Heiko Schwarz, and Thomas Wiegand.
486
- Context-based adaptive binary arithmetic coding in the
487
- h.264/avc video compression standard. IEEE Transactions on
488
- Circuits and Systems for Video Technology , 13(7):620–636,
489
- 2003. 1
490
- [16] Gloria Mart ´ın. Range encoding: an algorithm for removing
491
- redundancy from a digitised message. In Video and Data
492
- Recording Conference, Southampton, 1979 , pages 24–27,
493
- 1979. 1, 3
494
- [17] Fabian Mentzer. torchac. https://github.com/fab-
495
- jul/L3C-PyTorch/tree/master/src/torchac ,
496
- 2020. 3, 4
497
- [18] Fabian Mentzer, Eirikur Agustsson, Michael Tschannen,
498
- Radu Timofte, and Luc Van Gool. Practical full resolution
499
- learned lossless image compression. In Proceedings of the
500
- IEEE Conference on Computer Vision and Pattern Recogni-
501
- tion (CVPR) , 2019. 3, 4
502
- [19] David Minnen, Johannes Ball ´e, and George D Toderici. Joint
503
- autoregressive and hierarchical priors for learned image com-
504
- pression. In S. Bengio, H. Wallach, H. Larochelle, K. Grau-
505
- man, N. Cesa-Bianchi, and R. Garnett, editors, Advances
506
- in Neural Information Processing Systems 31 , pages 10771–
507
- 10780. Curran Associates, Inc., 2018. 2, 4
508
- [20] D. Minnen and S. Singh. Channel-wise autoregressive en-
509
- tropy models for learned image compression. In 2020 IEEE
510
- International Conference on Image Processing (ICIP) , pages
511
- 3339–3343, 2020. 2
512
- [21] D. Minnen, G. Toderici, S. Singh, S. J. Hwang, and M. Covell.
513
- Image-dependent local entropy models for learned image
514
- compression. In 2018 25th IEEE International Conference
515
- on Image Processing (ICIP) , pages 430–434, 2018. 2
516
- [22] Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-
517
- Carulla, and Bernhard Sch ¨olkopf. Learning independent
518
- causal mechanisms. volume 80 of Proceedings of Machine
519
- Learning Research , pages 4036–4044, Stockholmsm ¨assan,
520
- Stockholm Sweden, 10–15 Jul 2018. PMLR. 1, 2
521
- [23] Alexander Rhatushnyak, Jan Wassenberg, Jon Sneyers, Jyrki
522
- Alakuijala, Lode Vandevenne, Luca Versari, Robert Obryk,
523
- Zoltan Szabadka, Evgenii Kliuchnikov, Iulia-Maria Comsa,
524
- Krzysztof Potempa, Martin Bruse, Moritz Firsching, Renata
525
- Khasanova, Ruud van Asseldonk, Sami Boukortt, Sebastian
526
- Gomez, and Thomas Fischbacher. Committee draft of jpeg xl
527
- image coding system, 2019. 1[24] Thomas Richter, Joachim Keinert, Antonin Descampe, and
528
- Gael Rouvroy. Entropy coding and entropy coding improve-
529
- ments of jpeg xs. In 2018 Data Compression Conference ,
530
- pages 87–96, 2018. 1
531
- [25] Vladislav Sovrasov. Flops counter for convolutional net-
532
- works in pytorch framework. https://github.com/
533
- sovrasov/flops-counter.pytorch , 2020. 3
534
- [26] Gregory K Wallace. The jpeg still picture compression stan-
535
- dard. IEEE transactions on consumer electronics , 38(1):xviii–
536
- xxxiv, 1992. 1
 
 
 
 
 
 
 
 
 
 
 
 
txt/2111.13236.txt DELETED
Binary file (69.2 kB)
 
txt/2112.01591.txt DELETED
Binary file (34.9 kB)
 
txt/2112.03557.txt DELETED
@@ -1,216 +0,0 @@
1
- Multi-speaker Emotional Text-to-speech Synthesizer
2
- Sungjae Cho1,†, Soo-Young Lee2
3
- 1Korea Institute of Science and Technology, Republic of Korea
4
- 2Korea Advanced Institute of Science and Technology, Republic of Korea
5
6
- Abstract
7
- We present a methodology to train our multi-speaker emotion al
8
- text-to-speech synthesizer that can express speech for 10 s peak-
9
- ers’ 7 different emotions. All silences from audio samples a re
10
- removed prior to learning. This results in fast learning by o ur
11
- model. Curriculum learning is applied to train our model effi -
12
- ciently. Our model is first trained with a large single-speak er
13
- neutral dataset, and then trained with neutral speech from a ll
14
- speakers. Finally, our model is trained using datasets of em o-
15
- tional speech from all speakers. In each stage, training sam ples
16
- of each speaker-emotion pair have equal probability to appe ar
17
- in mini-batches. Through this procedure, our model can synt he-
18
- size speech for all targeted speakers and emotions. Our synt he-
19
- sized audio sets are available on our web page.
20
- Index Terms : emotional speech synthesis, text-to-speech, ma-
21
- chine learning, neural network, deep learning
22
- 1. Introduction
23
- Emotional speech synthesis has been achieved through deep
24
- neural networks [1, 2, 3, 4]. However, most studies have trai ned
25
- models on a small number of speakers or balanced class distri -
26
- butions because it is challenging to guarantee speech quali ty for
27
- each speaker and emotion, given imbalanced data distributi ons
28
- with respect to speakers and emotions. In this paper, we pres ent
29
- a methodology for training our multi-speaker emotional tex t-to-
30
- speech (TTS) synthesizer capable of generating speech for a ll
31
- targeted speakers’ voices and emotions. The main methods ar e
32
- silence removal, curriculum learning [5], and oversamplin g [6].
33
- The synthesized audios are demonstrated through a web page.
34
- 2. Datasets
35
- 4 datasets were used to train the multi-speaker emotional TT S
36
- synthesizer. The first dataset, the Korean single speaker sp eech
37
- (KSS) dataset [7], is publicly available and contains speec h
38
- samples of a single female speaker: kss-f. We labeled their
39
- emotion as neutral. The remaining 3 datasets consist of spee ch
40
- of the Ekman’s 7 basic emotions [8]: neutral, anger, disgust ,
41
- fear, happiness, sadness, and surprise.
42
- The first Korean emotional TTS (KETTS) dataset consists
43
- of 1 female and 1 male speaker: ketts-30f and ketts-30m, whic h
44
- are abbreviations of a 30’s female and male in KETTS. The 2
45
- speakers were assigned to different sets of sentences; howe ver,
46
- the same sentences were recorded across 7 emotions for a sing le
47
- speaker. In the female case, only happy speech samples have a
48
- different set of sentences. KETTS is balanced with respect t o
49
- speakers and emotions, except for the female’s happy speech
50
- subset (Table 1).
51
- The second Korean emotional TTS (KETTS2) dataset con-
52
- sists of 3 female and 3 male speakers, totally 6 speakers: ket t2-
53
- †work done at KAISTTable 1: Hours of preprocessed training datasets
54
- Speaker all neu ang dis fea hap sad sur
55
- kss-f 12.59 12.59
56
- ketts-30f 26.61 3.52 3.46 3.51 3.68 5.13 3.75 3.56
57
- ketts-30m 24.12 3.37 3.29 3.31 3.51 3.50 3.73 3.40
58
- ketts2-20m 5.09 0.72 0.72 0.74 0.76 0.69 0.75 0.70
59
- ketts2-30f 4.69 0.66 0.65 0.67 0.65 0.70 0.68 0.68
60
- ketts2-40m 4.98 0.73 0.69 0.70 0.75 0.69 0.74 0.69
61
- ketts2-50f 4.98 0.73 0.71 0.71 0.70 0.72 0.71 0.69
62
- ketts2-50m 4.73 0.68 0.68 0.69 0.67 0.68 0.68 0.65
63
- ketts2-60f 4.90 0.77 0.68 0.67 0.68 0.72 0.72 0.67
64
- ketts3-f 9.64 3.96 1.34 1.27 1.44 1.64
65
- ketts3-m 9.38 3.90 1.43 1.18 1.39 1.48
66
- all 111.70 31.63 13.65 11.01 13.85 15.64 14.87 11.05
67
- 20m, ketts2-30f, ketts2-40m, ketts-50f, ketts2-50m, and k etts2-
68
- 60f. The same sentences were recorded across 7 emotions and 6
69
- speakers. Hence, KETTS2 is balanced with respect to speaker s
70
- and emotions (Table 1).
71
- The third Korean emotional TTS (KETTS3) dataset con-
72
- sists of 1 female and 1 male speaker: ketts3-f and ketts3-m. I t
73
- includes 5 emotions, excluding disgust and surprise. The sa me
74
- sentences were recorded across 2 speakers; however, differ ent
75
- sentences were spoken for the 5 emotions. KETTS3 is balanced
76
- for speakers but not for emotions. Therefore, the whole trai ning
77
- dataset is balanced for neither speakers nor emotions (Tabl e 1).
78
- 3. Methodology
79
- 3.1. Preprocessing
80
- The WebRTC voice activity detector, py-webrtcvad1, is utilized
81
- to remove unvoiced segments in audios, with its settings of a n
82
- aggressiveness level of 3, frame duration 30ms, and padding
83
- duration 150ms. These settings remove silences at the start , end,
84
- and middle of speech. However, the amount of silence removed
85
- does not distort emotional expression. All audios are resam pled
86
- to sampling rate 22,050Hz. Mel spectrograms are computed
87
- through a short-time Fourier transform (STFT) using frame s ize
88
- 1024, hop size 256, window size 1024, and a Hann window
89
- function. The STFT magnitudes are transformed to the libros a
90
- Slaney mel scale using an 80-channel mel filterbank spanning
91
- 0Hz to 8kHz, and the results are then clipped to a minimum
92
- value of10−5, followed by log dynamic range compression.
93
- Every Korean character in an input sentence is decomposed
94
- into 3 elements: an onset, nucleus, and coda. In total, 19 ons ets,
95
- 21 nuclei, and 28 codas including the empty coda are employed
96
- as defined by Unicode. A sequence of these elements becomes
97
- a grapheme sequence taken as input by our synthesizer.
98
- 1https://github.com/wiseman/py-webrtcvad3.2. Model
99
- Our multi-speaker emotional TTS synthesizer takes 3 inputs —
100
- the grapheme sequence of a Korean sentence, 1 of 10 speakers
101
- (5 females, 5 males), and 1 of the 7 Ekman’s emotion classes. I t
102
- then generates a waveform in which the speaker utters the inp ut
103
- sentence with the given emotion. Our synthesizer consists o f 2
104
- sub-models: Tacotron 2 [9], mapping a grapheme sequence to
105
- a mel spectrogram, and WaveGlow [10], transforming the mel
106
- spectrogram to a waveform. Tacotron 2 is an auto-regressive
107
- sequence-to-sequence neural network with a location-sens itive
108
- attention mechanism. WaveGlow is a flow-based generative
109
- neural network without auto-regression. We adapted NVIDIA
110
- Tacotron 2 and WaveGlow repositories2,3to synthesize speech
111
- for multiple speakers and emotions. The WaveGlow model
112
- was utilized without modification but the Tacotron 2 model wa s
113
- modified as outlined in the following paragraph.
114
- Speaker identity is represented as a 5-dimensional train-
115
- able speaker vector . Emotion identity is represented as a 3-
116
- dimensional trainable emotion vector , except for the neutral
117
- emotion vector, which is a non-trainable zero vector. To syn -
118
- thesize speech of a given speaker and emotion, in the decoder
119
- of Tacotron 2, speaker and emotion vectors are concatenated to
120
- attention context vectors taken by the first and second LSTM
121
- layers and the linear layer estimating a mel spectrogram.
122
- 3.3. Training
123
- Tacotron 2 was trained with a batch size of 64 equally dis-
124
- tributed to 4 GPUs. The Adam optimizer [11] of the default
125
- settings (β1= 0.9,β2= 0.999,ǫ= 10−6) was used with a
126
- learning rate of 10−3andL2regularization with weight 10−6.
127
- If the norm of gradients exceeded 1, their norm was normalize d
128
- to 1 to ensure stable learning.
129
- Using a curriculum learning [5] strategy, Tacotron 2 was
130
- trained to learn single-speaker neutral speech, multi-spe aker
131
- neutral speech, and multi-speaker emotional speech in this or-
132
- der. More specifically, the model was trained with the KSS
133
- dataset for 20,000 iterations, then additionally with all d atasets
134
- of neutral speech for 30,000 iterations, and finally with all train-
135
- ing datasets for 65,000 iterations. Transitioning to train ing on
136
- the next dataset was done when the model stably pronounced
137
- given whole sentences for all training speaker-emotion pai rs.
138
- In each training stage, we oversampled [6] the training set
139
- with respect to speaker-emotion pairs, which means samples of
140
- each speaker-emotion pair appear in a mini-batch with equal
141
- probability. For example, samples of (ketts-30f, neutral) and
142
- those of (ketts2-20m, happy) appear in a mini-batch with equ al
143
- probability. This helped overcome difficulty in learning to syn-
144
- thesize speech of speaker-emotion pairs with relatively sc arce
145
- samples.
146
- WaveGlow was trained with a batch size of 24, equally dis-
147
- tributed to 3 GPUs using 24 clips of 16,000 mel spectrogram
148
- frames randomly chosen from each training sample. Training
149
- samples shorter than 16,000 mel frames were excluded from th e
150
- training set since these samples padded with zeros caused un sta-
151
- ble learning such as exploding gradients. Similar to Tacotr on 2,
152
- we oversampled the training set with respect to speaker-emo tion
153
- pairs. The Adam optimizer was used with the default settings
154
- and learning rate 10−4. Weight normalization was applied, as
155
- described in the original paper [10]. To ensure stable learn ing,
156
- if the norm of gradients exceeded 1, their norm was normalize d
157
- 2https://github.com/NVIDIA/tacotron2
158
- 3https://github.com/NVIDIA/waveglowto 1. The model was initialized with the pretrained weights4of-
159
- fered in the WaveGlow repository. The network was trained fo r
160
- 400,000 iterations until its loss curve formed a plateau. Th ez
161
- elements were sampled from Gaussians with standard deviati on
162
- 1 during training and 0.75 during inference.
163
- 4. Results and Discussion
164
- Through this procedure, our speech synthesizer is able to sy n-
165
- thesize speech for all available 10 speakers and 7 emotions. Un-
166
- expectedly, disgusted and surprised expressions of the KET TS3
167
- speakers can be synthesized even without training supervis ion.
168
- Synthesized speech samples can be found on this web page5.
169
- Although our model expresses speaker and emotion iden-
170
- tities, there are some minor inconsistencies in the quality of
171
- synthesized samples across speakers and emotions. Thus, in
172
- production, it is reasonable to fine-tune for each speaker an d
173
- respectively preserve the model parameters.
174
- Our silence removal settings substantially accelerated th e
175
- learning of Tacotron 2. This was probably because silence re -
176
- moval at the start, end, and middle of speech resulted in the
177
- linear relationship between text and speech, and this relat ion-
178
- ship helped the location-sensitive attention network easi ly learn
179
- text-to-speech alignments.
180
- 5. Acknowledgements
181
- This work was supported by Ministry of Culture, Sports and
182
- Tourism andKorea Creative Content Agency [R2019020013,
183
- R2020040298].
184
- 6. References
185
- [1] Y. Lee, A. Rabiee, and S.-Y. Lee, “Emotional end-to-end neural
186
- speech synthesizer,” ArXiv , vol. abs/1711.05447, 2017.
187
- [2] H. Choi, S. Park, J. Park, and M. Hahn, “Multi-speaker emotional
188
- acoustic modeling for CNN-based speech synthesis,” in ICASSP ,
189
- 2019.
190
- [3] S.-Y . Um, S. Oh, K. Byun, I. Jang, C. Ahn, and H.-G. Kang,
191
- “Emotional speech synthesis with rich and granularized control,”
192
- in ICASSP, 2020.
193
- [4] T.-H. Kim, S. Cho, S. Choi, S. Park, and S.-Y. Lee, “Emotional
194
- voice conversion using multitask learning with text-to-speech,” in
195
- ICASSP , 2020.
196
- [5] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum
197
- learning,” in ICML , 2009.
198
- [6] M. Buda, A. Maki, and M. A. Mazurowski, “A systematic study
199
- of the class imbalance problem in convolutional neural networks,”
200
- Neural Networks , vol. 106, 2018.
201
- [7] K. Park, “KSS dataset: Korean single speaker speech dataset,”
202
- https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset,
203
- 2018.
204
- [8] P. Ekman and D. Cordaro, “What is meant by calling emotions
205
- basic,” Emotion Review , vol. 3, no. 4, 2011.
206
- [9] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang,
207
- Z. Chen, Y . Zhang, Y . Wang, R.-S. Ryan, R. A. Saurous,
208
- Y. Agiomyrgiannakis, and Y. Wu, “Natural TTS synthesis by con-
209
- ditioning WaveNet on mel spectrogram predictions,” in ICASSP,
210
- 2018.
211
- [10] R. Prenger, R. Valle, and B. Catanzaro, “Waveglow: A flow-based
212
- generative network for speech synthesis,” in ICASSP , 2019.
213
- [11] D. P. Kingma and J. Ba, “Adam: A method for stochastic opti-
214
- mization,” in ICLR , 2015.
215
- 4“waveglow_256channels_universal_v5.pt” was used.
216
- 5https://sungjae-cho.github.io/InterSpeech2021_STDemo/
 
 
 
 
txt/2201.00132.txt DELETED
@@ -1,617 +0,0 @@
1
- SAFL: A Self-Attention Scene Text Recognizer
2
- with Focal Loss
3
- Bao Hieu Tran, Thanh Le-Cong, Huu Manh Nguyen, Duc Anh Le, Thanh Hung Nguyeny, Phi Le Nguyeny
4
- School of Information and Communication Technology
5
- Hanoi University of Science and Technology
6
- Hanoi, Vietnam
7
- fhieu.tb167182, thanh.ld164834, manh.nh166428, anh.nd160126 [email protected],flenp, [email protected]
8
- Abstract —In the last decades, scene text recognition has gained
9
- worldwide attention from both the academic community and
10
- actual users due to its importance in a wide range of applications.
11
- Despite achievements in optical character recognition, scene
12
- text recognition remains challenging due to inherent problems
13
- such as distortions or irregular layout. Most of the existing
14
- approaches mainly leverage recurrence or convolution-based
15
- neural networks. However, while recurrent neural networks
16
- (RNNs) usually suffer from slow training speed due to sequential
17
- computation and encounter problems as vanishing gradient or
18
- bottleneck, CNN endures a trade-off between complexity and
19
- performance. In this paper, we introduce SAFL, a self-attention-
20
- based neural network model with the focal loss for scene text
21
- recognition, to overcome the limitation of the existing approaches.
22
- The use of focal loss instead of negative log-likelihood helps the
23
- model focus more on low-frequency samples training. Moreover,
24
- to deal with the distortions and irregular texts, we exploit Spatial
25
- TransformerNetwork (STN) to rectify text before passing to the
26
- recognition network. We perform experiments to compare the
27
- performance of the proposed model with seven benchmarks.
28
- The numerical results show that our model achieves the best
29
- performance.
30
- Index Terms —Scene Text Recognition, Self-attention, Focal
31
- loss,
32
- I. I NTRODUCTION
33
- In recent years, text recognition has attracted the attention
34
- of both academia and actual users due to its application
35
- on various domains such as translation in mixed reality,
36
- autonomous driving, or assistive technology for the blind.
37
- Text recognition can be classified into two main categories:
38
- scanned document recognition and scene text recognition.
39
- While the former has achieved significant advancements, the
40
- latter remains challenging due to scene texts’ inherent char-
41
- acteristics such as the distortion and irregular shapes of the
42
- texts. Recent methods in scene text recognition are inspired
43
- by the success of deep learning-based recognition models.
44
- Generally, these methods can be classified in two approaches:
45
- recurrent neural networks (RNN) based and convolutional
46
- neural networks (CNN) based. RNN-based models have shown
47
- their effectiveness, thanks to capturing contextual information
48
- and dependencies between different patches. However, RNNs
49
- typically compute along with the symbol positions of the
50
- input and output sequences, which cannot be performed in
51
- Authors contribute equallyyCorresponding authorparallel fashion, thus leads to high training time. Furthermore,
52
- RNNs also encounter problems such as vanishing gradient
53
- [1] or bottleneck [2]. CNN-based approach, which allows
54
- computing the hidden representation parallelly, have been
55
- proposed to speed up the training procedure. However, to
56
- capture the dependencies between distant patches in long input
57
- sequences, CNN models require stacking more convolutional
58
- layers, which significantly increases the network’s complexity.
59
- Therefore, CNN-based methods suffer the trade-off between
60
- complexity and accuracy. To remedy these limitations, in nat-
61
- ural language processing (NLP) fields, a self-attention based
62
- mechanism named transformer [3] has been proposed. In the
63
- transformer, dependencies between different input and output
64
- positions are captured using a self-attention mechanism instead
65
- of sequential procedures in RNN. This mechanism allows
66
- more computation parallelization with higher performance. In
67
- the computer vision domain, some research have leveraged the
68
- transformer architecture and showed the effectiveness of some
69
- problems [4] [5]
70
- Inspired by the transformer network, in this paper, we
71
- propose a self-attention based scene text recognizer with focal
72
- loss, namely as SAFL. Moreover, to tackle irregular shapes of
73
- scene texts, we also exploit a text rectification named Spatial
74
- Transformer Network (STN) to enhance the quality of text
75
- before passing to the recognition network. SAFL, as depicted
76
- in Figure 1, contains three components: rectification, feature
77
- extraction, and recognition. First, given an input image, the
78
- rectification network, built based on the Spatial Transformer
79
- Network (STN) [6], transforms the image to rectify its text.
80
- Then, the features of the rectified image are extracted using
81
- a convolutional neural network. Finally, a self-attention based
82
- recognition network is applied to predict the output character
83
- sequence. Specifically, the recognition network is an encoder-
84
- decoder model, where the encoder utilizes multi-head self-
85
- attention to transform input sequence to hidden feature rep-
86
- resentation, then the decoder applies another multi-head self-
87
- attention to output character sequence. To balance the training
88
- data for improving the prediction accuracy, we exploit focal
89
- loss instead of negative log-likelihood as in most recent works
90
- [7] [8].
91
- To evaluate our proposed model’s performance, we train
92
- SAFL with two synthetic datasets: Synth90k [9] and SynthText
93
- [10], and compare its accuracy with standard benchmarks,arXiv:2201.00132v1 [cs.CV] 1 Jan 2022Fig. 1. Overview of SAFL
94
- on both regular and irregular datasets. The experiment results
95
- show that our method outperforms the state-of-the-art on all
96
- datasets. Furthermore, we also perform experiments to study
97
- the effectiveness of focal loss. The numerical results show the
98
- superiority of focal loss over the negative log-likelihood loss
99
- on all datasets.
100
- The remainder of the paper is organized as follows. Section
101
- II introduces related works. We describe the details of the
102
- proposed model in Section III and present the evaluation
103
- results in Section IV. Finally, we conclude the paper and
104
- discuss the future works in Section V.
105
- II. R ELATED WORK
106
- Scene text recognition has attracted great interest over
107
- the past few years. Comprehensive surveys for scene text
108
- recognition may be found in [11] [12] [13]. As categorized
109
- by previous works [8] [14] [15], scene text may be divided
110
- into two categories: regular and irregular text. The regular text
111
- usually has a nearly horizontal shape, while the irregular text
112
- has an arbitrary shape, which may be distorted.
113
- A. Regular text recognition
114
- Early work mainly focused on regular text and used a
115
- bottom-up scheme, which first detects individual characters
116
- using a sliding window, then recognizing the characters us-
117
- ing dynamic programming or lexicon search [16] [17] [18].
118
- However, these methods have an inherent limitation, which is
119
- ignoring contextual dependencies between characters. Shi et al.
120
- [19] and He et al. [20] typically regard text recognition as a
121
- sequence-to-sequence problem. Input images and output texts
122
- are typically represented as patch sequences and character
123
- sequences, respectively. This technique allows leveraging deep
124
- learning techniques such as RNNs or CNNs to capture con-
125
- textual dependencies between characters [7] [19] [20], lead to
126
- significant improvements in accuracy on standard benchmarks.
127
- Therefore, recent work has shifted focus to the irregular text,
128
- a more challenging problem of scene text recognition.
129
- B. Irregular text recognition
130
- Irregular text is a recent challenging problem of scene text
131
- recognition, which refers to texts with perspective distortionsand arbitrary shape. The early works correct perspective distor-
132
- tions by using hand-craft features. However, these approaches
133
- require correct tunning by expert knowledge for achieving
134
- the best results because of a large variety of hyperparame-
135
- ters. Recently, Yang et al. [21] proposed an auxiliary dense
136
- character detection model and an alignment loss to effectively
137
- solve irregular text problems. Liu et al. [22] introduced a
138
- Character-Aware Neural Network (Char-Net) to detect and rec-
139
- tify individual characters. Shi et al. [7] [8] addressed irregular
140
- text problems with a rectification network based on Spatial
141
- Transformer Network (STN), which transform input image for
142
- better recognition. Zhan et al. [23] proposed a rectification
143
- network employing a novel line-fitting transformation and an
144
- iterative rectification pipeline for correction of perspective and
145
- curvature distortions of irregular texts.
146
- III. P ROPOSED MODEL
147
- Figure 1 shows the structure of SAFL, which is comprised
148
- of three main components: rectification, feature extraction, and
149
- recognition. The rectification module is a Spatial Transformer
150
- Network (STN) [6], which receives the original image and
151
- rectifies the text to enhance the quality. The feature extraction
152
- module is a convolution neural network that extracts the
153
- information of the rectified image and represents it into a
154
- vector sequence. The final module, i.e., recognition, is based
155
- on the self-attention mechanism and the transformer network
156
- architecture [3], to predict character sequence from the feature
157
- sequence. In the following, we first present the details of the
158
- three components in Section III-A, III-B and III-C, respec-
159
- tively. Then, we describe the training strategy using focal loss
160
- in Section III-D.
161
- A. Rectification
162
- In this module, we leverage a Thin Plate Spline (TPS) trans-
163
- formation [8], a variant of STN, to construct a rectification
164
- network. Given the input image Iwith an arbitrary size, the
165
- rectification module first resizes Iinto a predefined fixed size.
166
- Then the module detects several control points along the top
167
- and bottom of the text’s bounding. Finally, TPS applies asmooth spline interpolation between a set of control points
168
- to rectify the predicted region to obtain a fixed-size image.
169
- B. Feature Extraction
170
- We exploit the convolution neural network (CNN) to extract
171
- the features of the rectified image (obtained from a rectification
172
- network) into a sequence of vectors. Specifically, the input
173
- image is passed through convolution layers (ConvNet) to
174
- produce a feature map. Then, the model separates the feature
175
- map by rows. The output received after separating the feature
176
- map are feature vectors arranged in sequences. The scene
177
- text recognition problem then becomes a sequence-to-sequence
178
- problem whose input is a sequence of characteristic vectors,
179
- and whose output is a sequence of characters predicted. Based
180
- on the proposal in [3], we further improve information about
181
- the position of the text in the input image by using positional
182
- encoding. Each position posis represented by a vector whose
183
- value of the ithdimension, i.e., PE (pos;i ), is defined as
184
- PE (pos;i )=8
185
- <
186
- :sinpos
187
- 100002i
188
- dmodel;if0idmodel
189
- 2
190
- cospos
191
- 100002i
192
- dmodel;ifdmodel
193
- 2idmodel;
194
- (1)
195
- wheredmodel is the vector size. The position information is
196
- added into the encoding vectors.
197
- C. Self-attention based recognition network
198
- The architecture of the recognition network follows the
199
- encoder-decoder model. Both encoder blocks and decoder
200
- blocks are built based on the self-attention mechanism. We
201
- will briefly review this mechanism before describing each
202
- network’s details.
203
- 1) Self-attention mechanism: Self-attention is a mechanism
204
- that extracts the correlation between different positions of a
205
- single sequence to compute a representation of the sequence.
206
- In this paper, we utilize the scaled dot-product attention
207
- proposed in [3]. This mechanism consists of queries and keys
208
- of dimension dk, and values of dimension dv. Each query
209
- performs the dot product of all keys to obtain their correlation.
210
- Then, we obtain the weights on the values by using the softmax
211
- function. In practice, the keys, values, and queries are also
212
- packed together into matrices K,VandQ. The matrix of the
213
- outputs is computed as follow:
214
- Attention (Q;K;V ) =softmaxQKT
215
- pdk
216
- V (2)
217
- The dot product is scaled by1pdkto alleviate the small softmax
218
- values which lead to extremely small gradients with large
219
- values ofdk. [3].
220
- 2) Encoder: Encoder is a stack of Neblocks. Each block
221
- consists of two main layers. The first layer is a multi-head
222
- attention layer, and the second layer is a fully-connected feed-
223
- forward layer. The multi-head attention layer is the combi-
224
- nation of multiple outputs of the scale dot product attention.
225
- Each scale-dot product attention returns a matrix representing
226
- the feature sequences, which is called head attention. The
227
- combination of multiple head attentions to the multi-head
228
- Fig. 2. Frenquency of characters in training lexicon
229
- attention allows our model to learn more representations of
230
- feature sequences, thereby increasing the diversity of the
231
- extracted information, and thereby enhance the performance.
232
- Multi-head attention can be formulated as follows:
233
- MultiHead (Q;K;V ) =Concat (head 1;:::;head h)WO
234
- (3)
235
- whereheadi=Attention
236
- QWQ
237
- i;KWK
238
- i;VWV
239
- i
240
- ,his the
241
- number of heads, WQ
242
- i2Rdmodeldk;WK
243
- i2Rdmodeldk;WV
244
- i2
245
- Rdmodeldv,WO2Rhdvdmodel are weight matrices. dk;dv
246
- anddmodel are set to the same value. Layer normalization
247
- [24] and residual connection [25] are added into each main
248
- layer (i.e., multi-head attention layer and fully-connected
249
- layer) to improve the training effect. Specifically, the residual
250
- connections helps to decrease the loss of information in the
251
- backpropagation process, while the normalization makes the
252
- training process more stable. Consequently, the output of
253
- each main layer with the input xcan be represented as
254
- LayerNorm (x+Layer (x)), whereLayer (x)is the function
255
- implemented by the layer itself, and LayerNorm ()represents
256
- the normalization operation. The blocks of the encoder are
257
- stacked sequentially, i.e., the output of the previous block is
258
- the input of the following block.
259
- 3) Decoder: The decoding process predicts the words in
260
- a sentence from left to right, starting with the hstartitag
261
- until encountering the henditag. The decoder is comprised of
262
- Nddecoder blocks. Each block is also built based on multi-
263
- head attention and a fully connected layer. The multi-head
264
- attention in the decoder does not consider words that have
265
- not been predicted by weighting these positions with 1.
266
- Furthermore, the decoder uses additional multi-head attention
267
- that receives keys and values from the encoder and queries
268
- from the decoder. Finally, the decoder’s output is converted
269
- into a probability distribution through a linear transformation
270
- and softmax function.
271
- D. Training
272
- Figure 2 shows that the lexicon of training datasets suffers
273
- from an unbalanced sample distribution. The unbalance may
274
- lead to severe overfitting for high-frequency samples and un-
275
- derfitting for low-frequency samples. To this end, we propose
276
- to use focal loss [26] instead of negative log-likelihood as in
277
- most of recent methods [7] [8]. By exploiting focal loss, themodel will not encounter the phenomenon of ignoring to train
278
- low-frequency samples.
279
- Focal loss is known as an effective loss function to address
280
- the unbalance of datasets. By reshaping the standard cross-
281
- entropy loss, focal loss reduces the impacts of high-frequency
282
- samples and thus focus training on low-frequency ones [26].
283
- The focal loss is defined as follows:
284
- FL(pt) = t(1pt)
285
- where,ptis the probability of the predicted value, computed
286
- using softmax function, and
287
- used to balance the loss. Intuitively, focal loss is obtained
288
- by multiplying cross entropy by t(1pt)
289
- weight t(1pt)
290
- the focal loss helps to reduce the impact of high-frequency
291
- samples (whose value of ptis usually high) and focus more
292
- on low-frequency ones (which usually have low value of pt).
293
- Based on focal loss, we define our training objective as
294
- follows:
295
- L=TX
296
- t=1( t(1p(ytjI))
297
- whereytare the predicted characters, Tis the length of the
298
- predicted sequence, and Iis the input image.
299
- IV. P ERFORMANCE EVALUATION
300
- In this section, we conduct experiments to demonstrate the
301
- effectiveness of our proposed model. We first briefly introduce
302
- datasets used for training and testing, then we describe our
303
- implementation details. Next, we analyze the effect of focal
304
- loss on our model. Finally, we compare our model against
305
- state-of-the-art techniques on seven public benchmark datasets,
306
- including regular and irregular text.
307
- A. Datasets
308
- The training datasets contains two datasets: Synth90k and
309
- SynthText . Synth90k is a synthetic dataset introduced in [9].
310
- This dataset contains 9 million images created by combining
311
- 90.000 common English words and random variations and
312
- effects. SynthText is a synthetic dataset introduced in [10],
313
- which contains 7 million samples by the same generation
314
- process as Synth90k [9]. However, SynthText is targeted for
315
- text detection so that an image may contain several words. All
316
- experiments are evaluated on seven well-known public bench-
317
- marks described, which can be divided into two categories:
318
- regular text and irregular text. Regular text datasets include
319
- IIIT5K, SVT, ICDAR03, ICDAR13.
320
- IIIT5K [27] contains 3000 test images collected from
321
- Google image searches.
322
- ICDAR03 [28] contains 860 word-box cropped images.
323
- ICDAR13 [29] contains 1015 word-box cropped images.
324
- SVT contains 647 testing word-box collected from
325
- Google Street View.
326
- Irregular text datasets include ICDAR15, SVT-P, CUTE.ICDAR15 [30] contains 1811 testing word-box cropped
327
- images collected from Google Glass without careful
328
- positioning and focusing.
329
- SVT-P [31] contains 645 testing word-box cropped im-
330
- ages collected from Google Street View. Most of them
331
- are heavily distorted by the non-frontal view angle.
332
- CUTE [32] contains 288 word-box cropped images,
333
- which are curved text images.
334
- B. Configurations
335
- 1) Implementation Detail: We implement the proposed
336
- model by Pytorch library and Python programming language.
337
- The model is trained and tested on an NDIVIA RTX 2080 Ti
338
- GPU with 12GB memory. We train the model from scratch
339
- using Adam optimizer with the learning rate of 0:00002 .
340
- To evaluate the trained model, we use dataset III5K. The
341
- pretrained model and code are available at [33]
342
- 2) Rectification Network: All input images are resized
343
- to64256 before applying the rectification network. The
344
- rectification network consists of three components: a localiza-
345
- tion network, a thin plate spline (TPS) transformation, and a
346
- sampler. The localization network consists of 6 convolutional
347
- layers with the kernel size of 33and two fully-connected
348
- (FCN) layers. Each FCN is followed by a 22max-pooling
349
- layer. The number of the output filters is 32, 64, 128, 256,
350
- and 256. The number of output units of FCN is 512 and 2K,
351
- respectively, where Kis the number of the control points. In
352
- all experiments, we set Kto 20, as suggested by [8]. The
353
- sampler generates the rectified image with a size of 32100.
354
- The size of the rectified image is also the input size of the
355
- feature extraction module.
356
- 3) Feature Extraction: We construct the feature extraction
357
- module based on Resnet architecture [25]. The configurations
358
- of the feature extraction network are listed in Table I. Our
359
- feature extraction network contains five blocks of 45 residual
360
- layers. Each residual unit consists of a 11convolutional
361
- layer, followed by a 33convolution layer. In the first
362
- two blocks, we use 22stride to reduce the feature map
363
- dimension. In the next blocks, we use 21stride to down-
364
- sampled feature maps. The 21stride also allows us to
365
- retain more information horizontally to distinguish neighbor
366
- characters effectively.
367
- TABLE I
368
- FEATURE EXTRACTION NETWORK CONFIGURATIONS .EACH BLOCK IS A
369
- RESIDUAL NETWORK BLOCK . ”S”STANDS FOR STRIDE OF THE FIRST
370
- CONVOLUTIONAL LAYER IN A BLOCK .
371
- Layer Feature map size Configuration
372
- EncoderBlock 0 32100 33conv, s( 11)
373
- Block 1 1650
374
- 11 conv ;32
375
- 33 conv ;32
376
- 3, s(22)
377
- Block 2 825
378
- 11 conv ;64
379
- 33 conv ;32
380
- 3, s(22)
381
- Block 3 425
382
- 11 conv ;128
383
- 33 conv ;32
384
- 3, s(21)
385
- Block 4 225
386
- 11 conv ;256
387
- 33 conv ;32
388
- 3, s(21)
389
- Block 5 125
390
- 11 conv ;512
391
- 33 conv ;32
392
- 3, s(21)4) Recognition: The number of blocks in the encoder and
393
- the decoder are set both to 4. In each block of the encoder and
394
- the decoder, the dimension of the feed forward vector and the
395
- ouput vector are set to 2048 and512, respectively. The number
396
- of head attention layers is set to 8. The decoder recognizes 94
397
- different characters, including numbers, alphabet characters,
398
- uppercase, lowercase, and 32 punctuation in ASCII.
399
- C. Result and Discussion
400
- 1) Impact of focal loss: To analyze the effect of focal loss,
401
- we study two variants of the proposed model. The first variant
402
- uses negative log-likelihood, and the second one leverages
403
- focal loss.
404
- TABLE II
405
- RECOGNITION ACCURACIES WITH NEGATIVE LOG -LIKELIHOOD AND
406
- FOCAL LOSS
407
- Variant Negative log-likelihood Focal Loss
408
- IIIT5K 92.6 93.9
409
- SVT 85.8 88.6
410
- ICDAR03 94.1 95
411
- ICDAR13 92 92.8
412
- ICDAR15 76.1 77.5
413
- SVT-P 79.4 81.7
414
- CUTE 80.6 85.4
415
- Avarage 86.9 88.2
416
- As shown in Table II, the model with focal loss outperforms
417
- the one with log-likelihood on all datasets. Notably, on aver-
418
- age, focal loss improves the accuracy by 2.3 % compared to
419
- log-likelihood. For the best case, i.e., CUTE, the performance
420
- gap between the two variants is 4.8 %
421
- 2) Impact of rectification network: In this section, we study
422
- the effect of text rectification by comparing SAFL and a
423
- variant which does not include the rectification module.
424
- TABLE III
425
- RECOGNITION ACCURACIES WITH AND WITHOUT RECTIFICATION
426
- Variant SAFL w/o text rectification SAFL
427
- IIIT5K 90.7 93.9
428
- SVT 83.3 88.6
429
- ICDAR03 93 95
430
- ICDAR13 90.7 92.8
431
- ICDAR15 72.9 77.5
432
- SVT-P 71.6 81.7
433
- CUTE 77.4 85.4
434
- Avarage 84.1 88.2
435
- Table III depicts the recognition accuracies of the two mod-
436
- els over seven datasets. It can be observed that the rectification
437
- module increases the accuracy significantly. Specifically, the
438
- performance gap between SAFL and the one without the
439
- rectification module is 4:1%on average. In the best cases,
440
- SAFL improves the accuracy by 10:1%and7%compared
441
- to the other on the datasets SVT-P and CUTE, respectively.
442
- The reason is that both SVT-P and CUTE contains many both
443
- irregular texts such as perspective texts or curved texts.
444
- 3) Comparison with State-of-the-art: In this section, we
445
- compare the performance of SAFL with the latest approaches
446
- in scene text recognition. The evaluation results are shownin Table IV. In each column, the best value is bolded.
447
- the ”Avarage” column is the weighted average over all the
448
- data sets. Concerning the irregular text, it can be observed
449
- that SAFL achieves the best performance on 3data sets.
450
- Particularly, SAFL outperforms the current state-of-the-art,
451
- ESIR [23], by a margin of 1:2% on average, particulary
452
- on CUTE ( +2:1%) and SVT-P ( +2:1%). Concerning the
453
- regular datasets, SAFL outperforms the other methods on two
454
- datasets IIIT5K and ICDAR03. Moreover, SAFL also shows
455
- the highest average accuracy over all the regular text datasets.
456
- To summarize, SAFL achieves the best performance on 5 of
457
- 7 datasets and the highest average accuracy on both irregular
458
- and regular texts.
459
- V. C ONCLUSION
460
- In this paper, we proposed SAFL, a deep learning model
461
- for scene text recognition, which exploits self-attention mecha-
462
- nism and focal loss. The experiment results showed that SAFL
463
- achieves the highest average accuracy on both the regular
464
- datasets and irregular datasets. Moreover, SAFL outperforms
465
- the state-of-the-art on CUTE dataset by a margin of 2:1%.
466
- Summary, SAFL shows superior performance on 5 out of 7
467
- benchmarks, including IIIT5k, ICDAR 2003, ICDAR 2015,
468
- SVT-P and CUTE.
469
- ACKNOWLEDGMENT
470
- We would like to thank AIMENEXT Co., Ltd. for support-
471
- ing our research.
472
- REFERENCES
473
- [1] Y . Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies
474
- with gradient descent is difficult,” IEEE transactions on neural networks ,
475
- vol. 5, no. 2, pp. 157–166, 1994.
476
- [2] K. Cho, B. Van Merri ¨enboer, D. Bahdanau, and Y . Bengio, “On the
477
- properties of neural machine translation: Encoder-decoder approaches,”
478
- arXiv preprint arXiv:1409.1259 , 2014.
479
- [3] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
480
- Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances
481
- in neural information processing systems , 2017, pp. 5998–6008.
482
- [4] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and
483
- S. Zagoruyko, “End-to-end object detection with transformers,” arXiv
484
- preprint arXiv:2005.12872 , 2020.
485
- [5] N. Parmar, A. Vaswani, J. Uszkoreit, Ł. Kaiser, N. Shazeer, A. Ku, and
486
- D. Tran, “Image transformer,” arXiv preprint arXiv:1802.05751 , 2018.
487
- [6] M. Jaderberg, K. Simonyan, A. Zisserman et al. , “Spatial transformer
488
- networks,” in Advances in neural information processing systems , 2015,
489
- pp. 2017–2025.
490
- [7] B. Shi, X. Wang, P. Lyu, C. Yao, and X. Bai, “Robust scene text
491
- recognition with automatic rectification,” in Proceedings of the IEEE
492
- conference on computer vision and pattern recognition , 2016, pp. 4168–
493
- 4176.
494
- [8] B. Shi, M. Yang, X. Wang, P. Lyu, C. Yao, and X. Bai, “Aster:
495
- An attentional scene text recognizer with flexible rectification,” IEEE
496
- transactions on pattern analysis and machine intelligence , vol. 41, no. 9,
497
- pp. 2035–2048, 2018.
498
- [9] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Synthetic
499
- data and artificial neural networks for natural scene text recognition,”
500
- arXiv preprint arXiv:1406.2227 , 2014.
501
- [10] A. Gupta, A. Vedaldi, and A. Zisserman, “Synthetic data for text
502
- localisation in natural images,” in Proceedings of the IEEE conference
503
- on computer vision and pattern recognition , 2016, pp. 2315–2324.
504
- [11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
505
- large-scale image recognition,” arXiv preprint arXiv:1409.1556 , 2014.TABLE IV
506
- SCENE TEXT ACCURANCIES (%) OVER SEVEN PUBLIC BENCHMARK TEST DATASETS .
507
- MethodRegular test dataset Irregular test dataset
508
- IIIT5k SVT ICDAR03 ICDAR13 Average ICDAR15 SVT-P CUTE Average
509
- Jaderberg et al. [34] - 80.7 93.1 90.8 - - - - -
510
- CRNN [19] 78.2 80.8 89.4 86.7 81.8 - - - -
511
- RARE [7] 81.9 81.9 90.1 88.6 85.3 - 71.8 59.2 -
512
- Lee et al. 78.4 80.7 88.7 90.0 82.4 - - - -
513
- Yang et al. [21] - 75.8 - - - - 75.8 69.3 -
514
- FAN [35] 87.4 85.9 94.2 93.3 89.4 70.6 - - -
515
- Shi et al. [7] 81.2 82.7 91.9 89.6 84.6 - - - -
516
- Yang et al. [21] - - - - - - 75.8 69.3 -
517
- Char-Net [22] 83.6 84.4 91.5 90.8 86.2 60.0 73.5 - -
518
- AON [36] 87.0 82.8 91.5 - - 68.2 73.0 76.8 70.0
519
- EP [37] 88.3 87.5 94.6 94.4 90.3 73.9 - - -
520
- Liao et al. [38] 91.9 86.4 - 86.4 - - - 79.9 -
521
- Baek et al. [14] 87.9 87.5 94.9 92.3 89.8 71.8 79.2 74.0 73.6
522
- ASTER [8] 93.4 89.5 94.5 91.8 92.8 76.1 78.5 79.5 76.9
523
- SAR [39] 91.5 84.5 - 91.0 - 69.2 76.4 83.3 72.1
524
- ESIR [23] 93.3 90.2 - 91.3 - 76.9 79.6 83.3 78.1
525
- SAFL 93.9 88.6 95 92.8 93.3 77.5 81.7 85.4 79.3
526
- [12] Y . Zhu, C. Yao, and X. Bai, “Scene text detection and recognition:
527
- Recent advances and future trends,” Frontiers of Computer Science ,
528
- vol. 10, no. 1, pp. 19–36, 2016.
529
- [13] Q. Ye and D. Doermann, “Text detection and recognition in imagery: A
530
- survey,” IEEE transactions on pattern analysis and machine intelligence ,
531
- vol. 37, no. 7, pp. 1480–1500, 2014.
532
- [14] J. Baek, G. Kim, J. Lee, S. Park, D. Han, S. Yun, S. J. Oh, and
533
- H. Lee, “What is wrong with scene text recognition model comparisons?
534
- dataset and model analysis,” in Proceedings of the IEEE International
535
- Conference on Computer Vision , 2019, pp. 4715–4723.
536
- [15] P. Wang, L. Yang, H. Li, Y . Deng, C. Shen, and Y . Zhang, “A simple and
537
- robust convolutional-attention network for irregular text recognition,”
538
- arXiv preprint arXiv:1904.01375 , vol. 6, 2019.
539
- [16] K. Wang, B. Babenko, and S. Belongie, “End-to-end scene text recog-
540
- nition,” in 2011 International Conference on Computer Vision . IEEE,
541
- 2011, pp. 1457–1464.
542
- [17] C. Yao, X. Bai, B. Shi, and W. Liu, “Strokelets: A learned multi-scale
543
- representation for scene text recognition,” in Proceedings of the IEEE
544
- Conference on Computer Vision and Pattern Recognition , 2014, pp.
545
- 4042–4049.
546
- [18] K. Wang and S. Belongie, “Word spotting in the wild,” in European
547
- Conference on Computer Vision . Springer, 2010, pp. 591–604.
548
- [19] B. Shi, X. Bai, and C. Yao, “An end-to-end trainable neural network
549
- for image-based sequence recognition and its application to scene
550
- text recognition,” IEEE transactions on pattern analysis and machine
551
- intelligence , vol. 39, no. 11, pp. 2298–2304, 2016.
552
- [20] P. He, W. Huang, Y . Qiao, C. C. Loy, and X. Tang, “Reading scene
553
- text in deep convolutional sequences,” in Thirtieth AAAI conference on
554
- artificial intelligence , 2016.
555
- [21] X. Yang, D. He, Z. Zhou, D. Kifer, and C. L. Giles, “Learning to read
556
- irregular text with attention mechanisms.” in IJCAI , vol. 1, no. 2, 2017,
557
- p. 3.
558
- [22] W. Liu, C. Chen, and K.-Y . K. Wong, “Char-net: A character-aware
559
- neural network for distorted scene text recognition,” in Thirty-Second
560
- AAAI Conference on Artificial Intelligence , 2018.
561
- [23] F. Zhan and S. Lu, “Esir: End-to-end scene text recognition via iter-
562
- ative image rectification,” in Proceedings of the IEEE Conference on
563
- Computer Vision and Pattern Recognition , 2019, pp. 2059–2068.
564
- [24] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv
565
- preprint arXiv:1607.06450 , 2016.
566
- [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
567
- recognition,” in Proceedings of the IEEE conference on computer vision
568
- and pattern recognition , 2016, pp. 770–778.
569
- [26] T.-Y . Lin, P. Goyal, R. Girshick, K. He, and P. Doll ´ar, “Focal loss
570
- for dense object detection,” in Proceedings of the IEEE international
571
- conference on computer vision , 2017, pp. 2980–2988.
572
- [27] A. Mishra, K. Alahari, and C. Jawahar, “Top-down and bottom-up cues
573
- for scene text recognition,” in 2012 IEEE Conference on Computer
574
- Vision and Pattern Recognition . IEEE, 2012, pp. 2687–2694.[28] S. M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, R. Young,
575
- K. Ashida, H. Nagai, M. Okamoto, H. Yamamoto et al. , “Icdar 2003
576
- robust reading competitions: entries, results, and future directions,”
577
- International Journal of Document Analysis and Recognition (IJDAR) ,
578
- vol. 7, no. 2-3, pp. 105–122, 2005.
579
- [29] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i Bigorda, S. R.
580
- Mestre, J. Mas, D. F. Mota, J. A. Almazan, and L. P. De Las Heras,
581
- “Icdar 2013 robust reading competition,” in 2013 12th International
582
- Conference on Document Analysis and Recognition . IEEE, 2013, pp.
583
- 1484–1493.
584
- [30] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov,
585
- M. Iwamura, J. Matas, L. Neumann, V . R. Chandrasekhar, S. Lu et al. ,
586
- “Icdar 2015 competition on robust reading,” in 2015 13th International
587
- Conference on Document Analysis and Recognition (ICDAR) . IEEE,
588
- 2015, pp. 1156–1160.
589
- [31] T. Quy Phan, P. Shivakumara, S. Tian, and C. Lim Tan, “Recognizing
590
- text with perspective distortion in natural scenes,” in Proceedings of the
591
- IEEE International Conference on Computer Vision , 2013, pp. 569–576.
592
- [32] A. Risnumawan, P. Shivakumara, C. S. Chan, and C. L. Tan, “A robust
593
- arbitrary text detection system for natural scene images,” Expert Systems
594
- with Applications , vol. 41, no. 18, pp. 8027–8048, 2014.
595
- [33] https://github.com/ICMLA-SAFL/SAFL pytorch.
596
- [34] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep
597
- structured output learning for unconstrained text recognition,” arXiv
598
- preprint arXiv:1412.5903 , 2014.
599
- [35] Z. Cheng, F. Bai, Y . Xu, G. Zheng, S. Pu, and S. Zhou, “Focusing
600
- attention: Towards accurate text recognition in natural images,” in
601
- Proceedings of the IEEE international conference on computer vision ,
602
- 2017, pp. 5076–5084.
603
- [36] Z. Cheng, Y . Xu, F. Bai, Y . Niu, S. Pu, and S. Zhou, “Aon: Towards
604
- arbitrarily-oriented text recognition,” in Proceedings of the IEEE Con-
605
- ference on Computer Vision and Pattern Recognition , 2018, pp. 5571–
606
- 5579.
607
- [37] F. Bai, Z. Cheng, Y . Niu, S. Pu, and S. Zhou, “Edit probability for scene
608
- text recognition,” in Proceedings of the IEEE Conference on Computer
609
- Vision and Pattern Recognition , 2018, pp. 1508–1516.
610
- [38] M. Liao, J. Zhang, Z. Wan, F. Xie, J. Liang, P. Lyu, C. Yao, and
611
- X. Bai, “Scene text recognition from two-dimensional perspective,” in
612
- Proceedings of the AAAI Conference on Artificial Intelligence , vol. 33,
613
- 2019, pp. 8714–8721.
614
- [39] H. Li, P. Wang, C. Shen, and G. Zhang, “Show, attend and read: A simple
615
- and strong baseline for irregular text recognition,” in Proceedings of the
616
- AAAI Conference on Artificial Intelligence , vol. 33, 2019, pp. 8610–
617
- 8617.
 
 
 
 
 
txt/2201.01661.txt DELETED
@@ -1,1600 +0,0 @@
1
- 1
2
-
3
-
4
- Evaluation of Thermal Imaging on Embedded GPU
5
- Platforms for Application in Vehicular Assistance
6
- Systems
7
-
8
- Muhammad Ali Farooq, Waseem Shariff, Peter C orcoran, Fellow, IEEE
9
- Abstract—This study is focused on evaluating the real-time
10
- performance of thermal object detection for smart and safe
11
- vehicular systems by deploying the trained networks on GPU &
12
- single -board EDGE -GPU computing platforms for onboard
13
- automotive sensor suite testing. A novel large -scale thermal
14
- dataset comprising of > 35,000 distinct frames is acquired,
15
- processed, and open -sourced in challenging weather and
16
- environmental scenarios . The dataset is a recorded from lost -cost
17
- yet effective uncooled LWIR thermal camera , mounted stand -
18
- alone and on an electric vehicle to minimize mechanical vibrations .
19
- State -of-the-art YOLO -V5 networks variants are trained using
20
- four different public datasets as well newly acquired local dataset
21
- for optimal generalization of DNN by employing SGD optimizer.
22
- The effectiveness of trained networks is validated on extensive test
23
- data u sing various quantitative metrics which include precision,
24
- recall curve, mean average precision, and frames per second. T he
25
- smaller network variant of YOLO is further optimized using
26
- TensorRT inference accelerator to explicitly boost the frames per
27
- second rate. Optimized network engine increases the frames per
28
- second rate by 3.5 times when testing on low power edge devices
29
- thus achieving 11 fps on Nvidia Jetson Nano and 60 fps on Nvidia
30
- Xavier NX development boards .
31
-
32
- Index Terms — ADAS, Object detection, Thermal imaging,
33
- LWIR, CNN, Edge computing
34
- I. INTRODUCTION
35
- hermal imaging is the digital interpretation of the
36
- infrared radiations emitted from the object. Thermal
37
- imaging cameras with microbolometer focal plane
38
- arrays (FPA) is a type of uncooled detector that provides low -
39
- cost solutions for acquiring thermal images in different weather
40
- and environmental conditions. These cameras when integrated
41
- with AI -based imaging pipelines can be used for various real-
42
- world applications. In this work, the core focus is to design an
43
- intelligent thermal object detection -based video analysis system
44
- for automotive sensor suite application that should be effective
45
- in all light conditions thus enabling safe and more reli able road
46
- journeys. Unlike other video solutions such as visible imaging
47
- which mainly relies on reflected light thus having the greater
48
- chances of being blocked by visual impediments, thermal
49
- imaging does not require any external lighting conditions to
50
- capture quality images and it can see through visual obscurants
51
-
52
- October 25th, 2021 , “This research work is funded by the ECSEL Joint
53
- Undertaking (JU) under grant agreement No 826131 (Heliaus project
54
- https://www.heliaus.eu/ ). The JU receives support from the European Union’s
55
- Horizon 2020 research and innovation program and National funders from
56
- France, Germany, Ireland (Enterprise Ireland , International Research Fund),
57
- and Italy ”. such as dust, light fog, smoke, or other such occlusions.
58
- Moreover, the integration of AI-based thermal imaging systems
59
- can provide us with a multitude of advantages from better
60
- analytics with fe wer false alarms to increased coverage , provide
61
- redundancy and, higher return on investment.
62
- In th is research work , we have focused on utilizing thermal
63
- data for designing efficient AI -based object detection and
64
- classification pipeline for Advance Driver -Assistance Systems.
65
- Such type of thermal imaging -based forward sensing (F -sense)
66
- system is useful in providing enhance d safety and security
67
- feature s thus enabling the driver to better scrutinize the
68
- complete road -side environment. For this purpose, we ha ve
69
- used a state-of-the-art (SoA) end-to-end deep learning
70
- framework YOLO -V5 on thermal data. In the first phase , a
71
- novel thermal data set is acquired for training and validation
72
- purposes of different network variants of YOLO -V5. The data
73
- is captured using a prototype low -cost uncooled LWIR thermal
74
- camera specifically designed under the ECSEL Heliaus
75
- research project [ 32]. The raw thermal data is processed using
76
- shutterless camera calibration, automatic gain control, bad -
77
- pixel removal , and temporal denoising methods.
78
- Furthermore, the trained network variants are deployed and
79
- tested on two state-of-the-art embedded GPU platforms, which
80
- include NVIDIA Jetson nano [23] and Nvidia Jetson Xavier NX
81
- [25]. Thus, studying the extensive real -time and on -board
82
- feasibility in terms of various quantitative metrics, inference
83
- time, FPS, and hardware sensor temperatures.
84
- The core contributions of the proposed research work are
85
- summarized below :
86
- • Preparation and annotation of a large o pen-access dataset of
87
- thermal images captured in different weather and
88
- environmental conditions.
89
- • A detailed comparative evaluation of SoA object detection
90
- based on a modified YOLO -V5 network , fine-tuned for
91
- thermal images using this newly acquired dataset .
92
- • Model optimization using TensorRT inference accelerator
93
- to implement a fast inference network on SoA embedded
94
- GPU boards (Jetson, Xavier) with comparative evaluations .
95
- Muhammad Ali Farooq, Peter Corcoran, and Waseem Shariff are with the
96
- National University of Ireland Galway, (NUIG), Coll ege of Science &
97
- Engineering Galway, H91TK33, Ireland (e -mail: [email protected] ,
98
99
- Thermal Dataset Link: https://bit.ly/3tAkJ0J
100
- GitHub Link : https://git hub.com/MAli -Farooq/Thermal -YOLO T 2
101
-
102
- • A determination of realistic frame rates that can be achieved
103
- for the rmal object detection on SoA embedded GPU
104
- platforms .
105
- II. BACK GROUND
106
- ADAS (Advanced Driver Assistance Systems) are classified
107
- as AI -based intelligent systems integrated with core vehicular
108
- systems to assist the driver by providing a wide range of digital
109
- features for safe and reliable road journeys. Such type of system
110
- is designed by employing an array of electronic sensors and
111
- optical mixtures such as different types of cameras to identify
112
- surrounding impediments, driver faults, and reacts
113
- automatically.
114
- The second part of this section will mainly summarize the
115
- existing/ p ublished thermal datasets along with their respective
116
- attributes. These datasets can be effectively used for training
117
- and testing the machine learning algorithms for object detection
118
- in thermal spectrum for ADAS. The complete dataset details are
119
- provided i n Table I .
120
- TABLE I
121
- EXISTING THERMAL DATASETS
122
- A. Related Literature
123
- We can find numerous studies regarding the implementation
124
- of object detection algorithms using AI based conventional
125
- machine learning as well as deep learning algorithms. Such type
126
- of optical imaging -based systems system can be deployed and
127
- effectively used as forward sensing methods for ADAS.
128
- Advanced Driver -Assistance Systems (ADAS) is an active area
129
- of research that seeks to make road trips more safe and secure.
130
- Real time object detection pl ays a critical role to warn the driver
131
- thus allowing them to make timely decisions [ 8]. Ziyatdinov et
132
- al [8] proposed an automated system to detect road signs. This method uses the GTSRB dataset [ 20] to train on conventional
133
- machine learning algorithms whi ch include SVM, KNN , and
134
- Decision Trees classifier. The results proved that SVM and K –
135
- nearest neighbour ( k-NN) outperforms all other classifiers.
136
- Autonomous cars on the road require the abilities to
137
- consistently perceive and comprehend their surroundings [9].
138
- Oliver et al [ 9] presented a procedure to use Bernoulli particle
139
- filter, which is suitable for object identification because it can
140
- handle a wide range of sensor measurements as well as object
141
- appearance -disappearance . Gang Yan et al [ 10] proposed a
142
- novel method to use HOG to extract features and AdaBoost and
143
- SVM classifiers to detect vehicles in real -time. The histogram
144
- of oriented gradients (HOG) is a feature extraction technique
145
- used for object detection in the domai n of computer vision and
146
- machine learning . The study concluded that the AdaBoost
147
- classification technique performed slightly better than SVM
148
- since it uses the ensemble method. Authors in [ 11], proposed
149
- another approach to detect vehicles on road using HOG filters
150
- to again extract features from the frames and then classify them
151
- using support vector machines and decision tree classification
152
- algorithms. Furthermore, SVM achieved 93.75% accuracy,
153
- which outperformed decision tree accuracy on classifying the
154
- vehicles. These are some of the conventional machine learning
155
- object detection techniques used for driver assistance system till
156
- date. The main drawback of traditional machine learning
157
- technique s is that the features are extracted and predefined prior
158
- to training and tes ting of the algorithms . When dealing with
159
- high-dimensional data, and with many classes conventional
160
- machine learning techniques are often ineffective [ 21].
161
- Deep learning approaches have emerged as more reliable and
162
- effective solutions than these classic approaches. There are
163
- many state -of-the-art pre -trained deep learning classifiers and
164
- object detection models which can be retrain ed and rapidly
165
- deployed for designing efficient forward sensing algorithms
166
- [22]. YOLO (you only look once) object classifier provides
167
- sufficient performance to operate at real -time speeds on
168
- conventional video data without compromising the overall
169
- detector precision [ 15]. Veta et al [12] presented a technique for
170
- detecting objects at a distance by employing YOLO on low -
171
- quality thermal images . Another research [ 13] focused on
172
- pedestrian detection in thermal images using the histogram of
173
- gradient (HOG) and YOLO methods on FLIR [ 7] datas et and
174
- computed performance with a 70 % accuracy on test data using
175
- the intersection over union technique. Further, Rumi et al [ 14]
176
- proposed a real -time human detection technique using YOLO -
177
- v3 on KAIST [ 5] thermal dataset, achieving 95.5% average
178
- precision on test data. Authors in [ 16] proposed a human
179
- detection system using YOLO object detector. The authors used
180
- their custom dataset recorded in different weather conditions
181
- using FLIR Therma -CAM P10 thermal camera.
182
- Focusing on road-side objects, authors in [ 17] used YOLO -v2
183
- object detection model to enhance the recognition of tiny
184
- vehicle objects by combining low -level and high -level features
185
- of the image . In [18], the authors proposed a deep learning -
186
- based vehicle occupancy detection system in a parking lot using
187
- a thermal camera . In this study authors had establis hed that
188
- YOLO, Yolo -Conv, GoogleNet , and ResNet18 are
189
- computationally more efficient, take less processing time, and
190
- are suitable for real -time object detection. In one of the most
191
- recent studies [ 24], the efficacy of typical state-of-the-art object
- Table I (EXISTING THERMAL DATASETS):
- Datasets        | Day | Night | Annotations | Objects                                  | Total no. of frames | Image Resolution
- OSU Thermal [2] | ✓   | ✓     | -           | Person, Cars, Poles                      | 284                 | 360 x 240
- CVC [19]        | ✓   | ✓     | -           | Person, Cars, Poles, Bicycle, Bus, Bikes | 11K                 | 640 x 480
- LITIV [3]       | -   | -     | -           | Person                                   | 6K                  | 320 x 240
- TIV [4]         | -   | -     | -           | Person, Cars, Bicycle, Bat               | 63K                 | 1024 x 1024
- SCUT [6]        | -   | ✓     | ✓           | Person                                   | 211K                | 384 x 288
- FLIR [7]        | ✓   | ✓     | ✓           | Person, Cars, Poles, Bicycle, Bus, Dog   | 14K                 | 640 x 512
- KAIST [5]       | ✓   | ✓     | ✓           | Person, Cars, Poles, Bicycle, Bus        | 95K                 | 640 x 480
230
-
231
-
232
- detectors which includes Faster R -CNN, SSD, Cascade R -
233
- CNN, and YOLO -v3 was assessed by retrain ing them on a
234
- thermal dataset. The results demonstrated that Yolo -v3
235
- outclassed other object SoA object detect ors.
236
- B. Object Detection on Edge Devices
237
- AI on edge devices benefit s us in various methods such that
238
- it speeds up decision -making, makes data processing more
239
- reliable, enhan ces user experience with hyper -personalization,
240
- and cuts down the costs . While machine learning models ha ve
241
- shown immense strength in diversified consumer electronic
242
- applications , the increased prevalence of AI on edge has
243
- contributed to the growth of spec ial-purpose embedded boards
244
- for various applications. Such type of embedded boards can
245
- achieve AI inference at higher frames per second (fps) and low
246
- power usage . Some of these board includes Nvidia Jetson Nano,
247
- Nvidia Xavier, Google Coral, AWS DeepLens , and Intel AI-
248
- Stick . Authors in [ 26-27] proposed a r aspberry pi-based edge
249
- computing system to detect thermal objects. Sen Cao et al [ 28]
250
- developed a roadside object detector using KITTI dataset [ 29]
251
- by training an efficient and lightweight neural network on
252
- Nvidia Jetson TX2 embedded GPU [28].
253
- In another study [ 30] authors proposed deep learning -based
254
- smart task scheduling for self -driving vehicles. This task
255
- management module was implemented on multicore SoCs
256
- (Odroid Xu4 and Nvidia Jetson).
257
- The overall goal of this study is to analy se the real -time
258
- performance feasibility of Thermal -YOLO object detector by
259
- deploying on edge devices. Different network variants of yolo -
260
- v5 framework are trained and fine -tuned on thermal image data
261
- and implemented on the Nvidia Jetson Nano [ 23] and Nvidia
262
- Jetson Xavier NX [ 25]. These two platforms, although from the
263
- same manufacturer provide very differe nt levels of performance
264
- and may be regarded as close to current SoA in terms of
265
- performance for embedded neural inference algorithms.
266
- III. THERMAL DATA ACQUISITION AT SCALE FOR ADAS
267
- This section will mainly cover the thermal data collection
268
- process using the LWIR pr ototype thermal imaging camera.
269
- The overall data is consisting of more than 35K distinct thermal
270
- frames acquired in different weather and environmental
271
- conditions . The data collection process includes shutterless
272
- camera calibration and thermal data processing [36], using the
273
- Lynred Display Kit (LDK) [ 1], data collection methods , and
274
- overall dataset attributes with different weather and
275
- environmental conditions for comprehensive data formation.
276
- A. Prototype Thermal Camera
277
- For the proposed research work we have utilized micro -
278
- bolometer technology based uncooled thermal imaging camera
279
- developed under the HELIAUS project [ 32]. The main
280
- characteristic of this camera includes low -cost, lightweight and
281
- its sleek compact design thus allowing to easily integrate it with
282
- artificially intelligent imaging pipelines for building effective
283
- in-cabin driver -passenger monitoring and road mon itoring
284
- systems for ADAS. It enables us to capture high -quality thermal
285
- frames with low -power consumption thus proving the agility of
286
- configurations and data processing algorithms in real -time.
287
- Fig. 1 shows the prototype thermal camera. The technical
288
- specifications of the camera are as follows, the camera type is a QVGA long-wave infrared (LWIR) with a spectral range from
289
- 8-14 µm and a camera resolution of 640 X 480 pixels. The focal
290
- length (f) of the camera is 7.5 mm, F -number is 1.2, the pixel
291
- pitch is 17 µm, and the power consumption is less than 950mW.
292
- The camera relates to a high-speed USB 3.0 (micro -USB) port
293
- for the interface.
294
-
295
-
296
-
297
-
298
-
299
-
300
- Fig. 1 . LWIR thermal imaging module images from different
301
- view angles.
302
-
303
- The data is recorded using a specifically designed toolbox. The
304
- complete camera calibration process along with the data
305
- processing pipeline is explained in the next section.
306
- B. Shutterless Calibration and Real-time Data Processing
- This section highlights the thermal camera calibration process for the shutterless camera configuration, along with the real-time data processing methods used to convert the raw thermal data into refined outputs. Shutterless technology allows uncooled IR engines and thermal imaging sensors to operate continuously without the need for a mechanical shutter for Non-Uniformity Correction (NUC) operations. This type of technology provides proven and effective results in poor visibility conditions, ensuring good-quality thermal frames in real-time testing situations. For this, we have used a low-cost blackbody source to provide three different constant reference temperature values, referred to as T-ambient1-BB1 (hot uniform scene with a temperature of 40 degrees centigrade), T-ambient1-BB2 (cold uniform scene with a temperature of 20 degrees centigrade), and T-ambient2-BB1 (either a hot or a cold uniform scene but with a different temperature value). The imager can store up to 50 snapshots and select the best uniform temperature scenes for calibration purposes. Fig. 2 shows the blackbody used for the thermal camera calibration.
- Fig. 2. Thermal camera calibration: a) blackbody source used for LWIR thermal camera calibration, b) uniform scene with the temperature set to 40.01 degrees centigrade.
- Once the uniform temperature images are recorded, they are loaded into the camera SDK, as shown in Fig. 3, to finally calibrate the shutterless camera stream. Fig. 4 shows the results before applying the shutterless calibration and the processed results obtained with the shutterless algorithms on a thermal frame captured through the prototype thermal IR camera.
- Fig. 3. Prototype thermal camera SDK for loading constant reference temperature values for shutterless camera calibration.
- Fig. 4. Shutterless algorithm results on a sample thermal frame captured from the 640x480 LWIR thermal camera designed by Lynred France [1].
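For intuition, a classic two-point non-uniformity correction built from two uniform blackbody frames can be sketched as follows. This is only an illustration of the general idea using the cold/hot reference scenes described above; it is not the proprietary shutterless algorithm implemented in the Lynred LDK, and all names are placeholders.

```python
import numpy as np

def two_point_nuc(cold, hot):
    """Per-pixel gain/offset maps from two uniform reference frames (sketch)."""
    cold = cold.astype(np.float64)
    hot = hot.astype(np.float64)
    # Normalise each pixel's response so all pixels map the cold/hot references
    # to the same mean levels.
    gain = (hot.mean() - cold.mean()) / np.maximum(hot - cold, 1e-6)
    offset = cold.mean() - gain * cold
    return gain, offset

def apply_nuc(raw, gain, offset):
    return gain * raw.astype(np.float64) + offset

# usage (frames are assumed 2-D arrays from the camera):
# gain, offset = two_point_nuc(cold_frame, hot_frame)
# corrected = apply_nuc(raw_frame, gain, offset)
```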
- In the next phase, various real-time image-processing-based correction methods are applied to convert the original thermal data into good-quality thermal frames. Fig. 5 shows the complete image processing pipeline.
- Fig. 5. Thermal image correction pipeline.
- As shown in Fig. 5, the image processing pipeline consists of three different image correction methods: gain correction, bad-pixel replacement, and temporal denoising. Further details of these methods are provided as follows.
- 1) Gain Correction / Automatic Gain Control (AGC)
- Thermal image detectors based on flat panels suffer from irregular gains due to non-uniform amplifiers. To correct the irregular gains, a common yet effective technique referred to as automatic gain control is applied. It is usually based on a gain map. The gain map is designed by averaging uniformly illuminated images without any objects; increasing the number of images used for averaging improves the gain-correction performance, since the quantum noise remaining in the gain map is reduced [1].
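A minimal sketch of this gain-map idea, assuming a simple flat-field formulation followed by a min-max AGC stretch (both are assumptions for illustration; the camera pipeline described above may differ in detail):

```python
import numpy as np

def build_gain_map(uniform_frames):
    """Average many frames of a uniform scene and normalise by the global mean."""
    mean_frame = np.mean(np.stack(uniform_frames).astype(np.float64), axis=0)
    return mean_frame.mean() / np.maximum(mean_frame, 1e-6)

def apply_gain_and_agc(raw, gain_map, out_bits=8):
    """Gain correction, then a simple min-max automatic gain control."""
    corrected = raw.astype(np.float64) * gain_map
    lo, hi = corrected.min(), corrected.max()
    scaled = (corrected - lo) / max(hi - lo, 1e-6)
    return (scaled * (2 ** out_bits - 1)).astype(np.uint8)
```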
- 2) Bad Pixel Replacement (BPR)
- This step uses the list of bad pixels estimated at the calibration stage. It also tracks potential new bad pixels by looking at the pixel neighbourhood, known as the nearest-neighbour method. Once bad pixels are traced in the nearest neighbourhood, they are replaced with good pixels. Fig. 6 demonstrates one such example.
- Fig. 6. Bad pixel replacement algorithm output on a sample thermal frame: the left frame contains some bad pixels and the right frame is the processed result.
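A hedged sketch of this kind of neighbourhood-based replacement is shown below; the 3x3 window and the outlier threshold are illustrative choices, not the camera's actual parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

def replace_bad_pixels(frame, bad_mask, outlier_thresh=200):
    """Replace calibrated bad pixels and newly detected outliers with the
    median of their neighbourhood (simplified illustration)."""
    frame = frame.astype(np.float64)
    local_median = median_filter(frame, size=3)
    dynamic_bad = np.abs(frame - local_median) > outlier_thresh   # new outliers
    mask = bad_mask | dynamic_bad
    out = frame.copy()
    out[mask] = local_median[mask]
    return out
```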
- 3) Temporal Denoising (TD)
- The consistent reduction of image noise is a frequently recurring problem in digitized thermal imaging systems, especially for uncooled thermal imagers [34]. To mitigate these limitations and obtain better outputs, different methods are used, including hardware- as well as software-based image processing methods such as temporal and spatial denoising algorithms. The temporal denoising method is used to decrease the temporal noise between different frames of the video. In commercial solutions it usually works by gathering multiple frames and averaging them to cancel out the random noise among the frames. In our data acquisition process, this method is applied after the shutterless algorithm. Fig. 7 shows sample thermal images obtained after applying the shutterless algorithm and all the image-processing-based correction methods shown in Fig. 5.
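Frame averaging of this kind can be approximated with an exponential running average; the sketch below is illustrative only, and the smoothing factor is an assumed tuning parameter rather than a value taken from the camera pipeline.

```python
import numpy as np

class TemporalDenoiser:
    """Exponential running average over consecutive frames; suppresses random
    temporal noise at the cost of slight motion blur."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # assumed smoothing factor
        self.state = None

    def __call__(self, frame):
        frame = frame.astype(np.float64)
        if self.state is None:
            self.state = frame
        else:
            self.state = (1 - self.alpha) * self.state + self.alpha * frame
        return self.state
```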
- Fig. 7. High-quality thermal frames after applying the shutterless calibration algorithm and the image correction methods.
- C. Data Collection Methods and Overall Dataset Attributes
- This section highlights the different data collection approaches adopted in this research work. The data is collected using two approaches. In the first approach (M-1), the data is gathered in an immobile manner by placing the camera at a fixed place. The camera is mounted on a tripod stand at a fixed height of nearly 30 inches such that the roadside objects are covered in the video stream. The thermal video stream is recorded at 30 frames per second (FPS), in different weather and environmental conditions. Fig. 8 shows the M-1 data acquisition setup. In the second method (M-2), the thermal imaging system is mounted on a car and the data is acquired while driving. The prime reason for collecting the data using two different methods is to bring variation and collect distinctive local data in different environmental and weather conditions. For this, a specialized waterproof camera housing case was designed to hold the thermal camera at the correct position and angle to cover the entire roadside scene. The housing case is fixed on a suction-based tripod stand, allowing us to easily fix and remove the complete structure from the car bonnet. The housing case also contains a visible camera that captures visible images as reference data, allowing us to adjust both camera positions to the proper angle and field of view.
- Fig. 8. Data acquisition setup with the camera placed at a fixed location: a) camera mounted on a tripod stand, b) complete daytime roadside view, c) video recording setup at 30 fps, d) evening-time alleyway view.
- Fig. 9 shows the camera housing case along with the initial data acquisition setup, whereas Fig. 10 shows the housing case fixed on the tripod structure and the complete M-2 acquisition setup mounted on the car.
- Fig. 9. Data acquisition setup through the car: a) camera housing case holding the thermal and visible cameras, b) initial data acquisition testing phase.
- The overall dataset is acquired in County Galway, Ireland. The data is collected in the form of short video clips, and more than 35,000 unique thermal frames have been extracted from the recorded video clips. The data is recorded in the daytime, evening time, and night-time, distributed in the ratio of 44.61%, 31.78%, and 23.61% of the overall data, respectively. The complete dataset attributes are summarized in Table II. The acquired data comprises distinct stationary classes, such as road signs and poles, as well as moving object classes such as pedestrians, cars, buses, bikes, and bicycles.
- Fig. 10. Complete data acquisition setup mounted on the car: a) camera housing fixed on a suction tripod stand, b) data acquisition kit from the front view, c) data acquisition kit from the side view.
- TABLE II
- NEW THERMAL DATASET ATTRIBUTES (LOCALLY ACQUIRED DATASET)
- Frame properties for both methods: 96 dpi (horizontal and vertical resolution), 640x480 image dimension; processing method: Shutterless, AGC, BPR, TD.
-
- Method | Extracted frames | Environment | Time and weather conditions
- M-1 (camera mounted at a fixed place) | 8,140  | Roadside        | Daytime with cloudy weather
- M-1                                   | 680    | Alleyway        | Evening time with cloudy weather
- M-1                                   | 4,790  | Roadside        | Night-time with light cloudy and windy weather
- M-2 (camera mounted on the car, driving) | 9,600  | Industrial Park | Daytime with clear weather and light foggy weather
- M-2                                   | 11,960 | Downtown        | Evening time with partially cloudy and windy weather
- M-2                                   | 4,600  | Downtown        | Night-time with clear weather conditions
-
- Totals: Daytime 17,740 (44.61%), Evening time 12,640 (31.78%), Night-time 9,390 (23.61%); Total 39,770 frames
- Fig. 11 shows six distinct samples of thermal frames captured in different environmental and weather conditions using the M1 and M2 methods. These samples show different class objects such as buses, bicycles, poles, persons, and cars. Most of these objects are commonly found on the roadside, thus providing the driver with a comprehensive video analysis of the car's surroundings.
- Fig. 11. Six different thermal samples acquired using the 640x480 LWIR prototype thermal camera, showing various class objects.
- IV. PROPOSED METHODOLOGY
- This section details the proposed methodology and the training outcomes of the various network variants tested in this study.
- A. Network Training and Learning Perspectives
- The overall training data comprises both locally acquired and publicly available datasets. The complete training data is divided in a 50%-50% ratio, where 50% of the data is selected from the locally acquired thermal frames and the remaining 50% comes from public datasets. Six distinct types of roadside objects for driving assistance are included in the training and validation sets. These include bicycles, motorcycles, buses, cars, pedestrians or people, and static roadside objects such as poles or road signs, as shown in Fig. 12.
- Fig. 12. Block diagram depicting the steps taken to evaluate the performance of YOLO v5 on the local and public datasets.
- Fig. 13 shows the class-wise data distribution. In the training phase of the YOLO-v5 framework, a total of 59,150 class-wise data samples were utilized, along with their corresponding class labels.
- Fig. 13. Respective class-wise training sample distributions.
- B. Data Annotation and Augmentation
- The overall data annotations were performed manually using LabelImg [31], an open-source bounding-box-based annotation tool, for all the thermal classes in our study. Annotations are stored in YOLO format as text files; an example of this format is shown below. During the training phase, all the YOLO-v5 network variants, which include the small, medium, large, and x-large networks, were trained to detect and classify six different classes in different environmental conditions.
- Large-scale datasets are considered a vital requirement for achieving optimal training results with deep learning architectures. Without the need to gather new data, data augmentation allows us to significantly improve the diversity of the data available for training the DNN models. In the proposed study we have incorporated a variety of data augmentation techniques, including cropping, flipping, rotation, shearing, translation, and mosaic transformation, for optimum training of all the network variants of the YOLO-v5 framework.
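For reference, a YOLO-format label file holds one row per object: the class index followed by the normalised box centre and size. The file name, class-index ordering and values below are made-up examples for illustration, not entries taken from the dataset.

```
# frame_000123.txt -- one row per object:
# <class_id> <x_center> <y_center> <width> <height>   (all normalised to 0..1)
0 0.512 0.430 0.084 0.210
2 0.271 0.655 0.190 0.160
```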
- C. Training Results
- As discussed in subsection A of Section IV, all the networks are trained using a combination of public datasets and the locally gathered dataset. Training data from public sources is taken from four different datasets: FLIR [7], OST [2], CVC [19], and KAIST [5]. Secondly, we have used thermal frames acquired from the locally gathered video sets using both the M1 and M2 methods. The training process is performed on a server-grade machine with a XEON E5-1650 v4 3.60 GHz processor, 64 GB of RAM, and a GEFORCE RTX 2080 Ti graphical processing unit, which provides 12 GB of dedicated graphical memory, a memory bandwidth of 616 GB/second, and 4352 CUDA cores. During the training phase the batch size is fixed to 32, and both stochastic gradient descent (SGD) and the ADAM optimizer were tried. However, we were unable to achieve satisfactory training results with the ADAM optimizer as compared to SGD, and therefore selected the SGD optimizer for training purposes. Table III shows the performance evaluation of all the trained models in the form of mean average precision (mAP), recall rate, precision, and losses.
- [Fig. 12 block-diagram content: training data (locally acquired + public datasets); classes (car, pole, bike, bicycle, person, bus); data annotation; data augmentation techniques (flipping, cropping, shearing, rotation, translation, mosaic); YOLO v5 network variants - Small (7.3 million parameters), Medium (21 million), Large (47 million), X-Large (87.7 million); inference testing on GPU, Nvidia Jetson and Nvidia Xavier, on both public and local datasets.]
- TABLE III
- TRAINING RESULTS (OPTIMIZER: SGD; BEST MODEL MARKED *)
-
- Network  | P %   | R %   | mAP % | Box Loss | Object Loss | Classification Loss
- Small    | 75.58 | 65.75 | 70.71 | 0.032    | 0.034       | 0.0017
- Medium   | 71.06 | 64.74 | 65.34 | 0.027    | 0.030       | 0.0013
- Large *  | 82.29 | 68.67 | 71.8  | 0.025    | 0.0287      | 0.0011
- X-Large  | 74.23 | 65.03 | 64.94 | 0.025    | 0.0270      | 0.0010
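Training runs of this kind are typically launched through the ultralytics YOLO-v5 train.py entry point (for example `python train.py --img 640 --batch 32 --data thermal.yaml --weights yolov5l.pt`, with SGD as the default optimizer). A possible dataset configuration file is sketched below; all paths and the class-index ordering are assumptions rather than the authors' actual files.

```yaml
# thermal.yaml -- assumed YOLOv5 dataset config (paths and ordering are placeholders)
train: ../thermal_dataset/images/train
val: ../thermal_dataset/images/val

nc: 6                                    # number of classes
names: [car, pole, bike, bicycle, person, bus]
```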
- By analysing Table III, it can be observed that the large model performed significantly better than the other models, with an overall precision of 82.29%, a recall rate of 68.67%, and a mean average precision of 71.8% mAP. Fig. 14 shows the graph results of the YOLO-v5 large model; the figure visualizes the obtained PR curve, box loss, object loss, and classification loss. During the training process, the x-large model consumes the largest amount of hardware resources and has the longest training time of all the network variants, with an overall GPU usage of 9.78 GB and a total training time of 14 hours. Fig. 15 shows the overall GPU memory usage, GPU power requirement in percent, and GPU temperature in centigrade while training the x-large network variant of the YOLO-v5 model.
- Fig. 14. Training results of the YOLO-v5 large model using the SGD optimizer.
- V. VALIDATION RESULTS ON GPU AND EDGE DEVICES
- This section demonstrates the object detection validation results on GPU as well as on two different embedded boards.
- A. Testing Methodology and Overall Test Data
- In this research study, we have used three different testing approaches: the conventional test-time method with no augmentation (NA), test-time augmentation (TTA), and test-time model ensembling (ME). TTA is an extensive application of data augmentation to the test dataset. It works by creating multiple augmented copies of each image in the test set, having the model make a prediction for each, and then returning an ensemble of those predictions. However, since the test dataset is enlarged with a new set of augmented images, the overall inference time also increases compared to NA, which is one of the downsides of this approach. Model ensembling, or ensemble learning, refers to using multiple trained networks at the same time in a parallel manner to produce one optimal predictive inference model [35]. In this study, we have tested the performance of the individually trained variants of the YOLO-v5 framework and selected the best combination of models, which in turn helps in achieving better validation results.
- Fig. 15. GPU resource utilization during the training process of the x-large network: a) 85% (9.78 GB) of GPU memory utilized, b) 90% (585 watts) of GPU power required, and c) 68 C GPU temperature against a maximum rating of 89 C.
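In the ultralytics YOLO-v5 tooling, TTA is enabled with the `--augment` flag and ensembling by passing several weight files to the val/detect scripts. The snippet below sketches the same box-level ensembling idea in Python; the file names are placeholders, and the class-agnostic NMS here is a simplification of what the framework actually does.

```python
import torch
from torchvision.ops import nms

# Assumed local weight files for the small and large variants (combination A1).
small = torch.hub.load('ultralytics/yolov5', 'custom', path='thermal_small.pt')
large = torch.hub.load('ultralytics/yolov5', 'custom', path='thermal_large.pt')

def ensemble_detect(img, conf_thres=0.2, iou_thres=0.4):
    # Each result row is (x1, y1, x2, y2, confidence, class).
    dets = [m(img).xyxy[0] for m in (small, large)]
    merged = torch.cat(dets, dim=0)
    merged = merged[merged[:, 4] >= conf_thres]          # drop low-confidence boxes
    keep = nms(merged[:, :4], merged[:, 4], iou_thres)   # merge overlapping boxes
    return merged[keep]
```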
- After training all the network variants of YOLO-v5, the performance of each model is cross-validated on a comprehensive set of test data selected from the public datasets as well as the locally gathered thermal data. Table IV provides the numeric data distribution of the overall validation set.
- TABLE IV
- TEST DATASET ATTRIBUTES (FRAMES USED)
-
- Public dataset | OST | CVC-09 (day + night-time) | KAIST | FLIR | Total frames
-                | 50  | 5,360                     | 149   | 130  | 5,689
-
- Local dataset  | Method (M1) | Method (M2) | Total frames
-                | 8,820       | 16,560      | 25,380
-
- Total: 31,069
- B. Inference Results Using YOLO Network Variants
- In the first phase, we ran rigorous inference tests on the GPU as well as on the Edge-GPU platforms with our test data, using the newly trained network variants of the YOLO framework. The overall test data consists of nearly 31,000 thermal frames. Fig. 16 shows the inference results on nine different thermal frames selected from both the public and the locally acquired data. These frames contain data complications such as multiple class objects, occlusion, overlapping classes, scale variation, and varying environmental conditions. The complete inference results are available in our local repository (https://bit.ly/3lfvxhd).
- In the second phase, we ran different combinations of models in a parallel manner using the model ensembling approach to produce one optimal predictive engine, which can then be used to run the inference test on the validation set. The different combinations of these models are shown in Table V, where 1 indicates that a model is in the active state and 0 means the model is in a non-active state.
- TABLE V
- MODEL ENSEMBLING: MODEL COMBINATIONS (1 = ACTIVE, 0 = NOT ACTIVE)
-
- No | Small | Medium | Large | X-Large | Combination
- 1  | 1     | 1      | 0     | 0       | A0
- 2  | 1     | 0      | 1     | 0       | A1
- 3  | 1     | 0      | 0     | 1       | A2
- 4  | 0     | 1      | 1     | 0       | A3
- 5  | 0     | 0      | 1     | 1       | A4
- Fig. 16. Inference results on nine different frames selected from the test data.
- With the model ensembling method, the small and large models (A1) turned out to be the best combination in terms of achieving the best mAP and recall with a relatively small inference time per frame, thus producing optimal validation results. These results are examined in the later parts of this section. Fig. 17 shows the inference results using the A1 model ensembling engine on three different thermal frames selected from the test data.
- Fig. 17. Inference results on three different frames using model ensembling.
- C. Quantitative Validation Results on GPU
- The third part of the testing phase presents the quantitative numerical results of all the trained models on GPU. To better analyse and validate the overall performance of all the trained models on test data, a relatively small set of test images was selected from the overall test set. For this purpose, a subset of 402 thermal frames is used to compute all the evaluation metrics. The selected images consist of different roadside objects such as pedestrians, cars and buses under different illumination and environmental conditions, times of day, and distances from the camera. The objects are either far-field (between 11-18 meters), mid-field (between 7-10 meters) or near-field (between 3-6 meters) from the camera. Fig. 18 shows selected views from the test data for the reader's quick reference.
- Fig. 18. Test data samples with objects at varying distances from the camera: (a) near-field distance, (b) mid-field distance, (c) far-field distance.
- The performance evaluation of each model is computed using four different metrics: recall, precision, mean average precision (mAP), and frames-per-second rate (FPS). Table VI shows all the quantitative validation results on GPU. During the testing phase the batch size is fixed to 8. Three different testing configurations are used, each with its own confidence threshold and intersection-over-union (IoU) threshold at each validation phase. The confidence threshold defines the minimum confidence score above which we consider a prediction as true; if a prediction falls below this threshold value, it is discarded. The last row of Table VI shows the best ME results using the A1 configuration from Table V, with a selected confidence threshold of 0.2 and an IoU threshold of 0.4.
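To make the role of these two thresholds concrete, the sketch below scores a set of single-class detections against ground truth: predictions under the confidence threshold are discarded, and a kept prediction only counts as a true positive if it overlaps an unmatched ground-truth box by at least the IoU threshold. It is a simplified illustration, not the evaluation code used in this study.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(preds, gts, conf_thres=0.2, iou_thres=0.4):
    """preds: (x1, y1, x2, y2, conf) tuples; gts: (x1, y1, x2, y2) tuples."""
    kept = sorted((p for p in preds if p[4] >= conf_thres), key=lambda p: -p[4])
    matched, tp = set(), 0
    for p in kept:
        candidates = [(iou(p[:4], g), i) for i, g in enumerate(gts) if i not in matched]
        if candidates:
            best_iou, best_i = max(candidates)
            if best_iou >= iou_thres:          # matched an unmatched ground truth
                matched.add(best_i)
                tp += 1
    fp, fn = len(kept) - tp, len(gts) - tp
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)
```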
- TABLE VI
- QUANTITATIVE RESULTS ON GPU (INFERENCE IMAGE SIZE: 800 x 800)
-
-                     |  No Augmentation (NA)    |  Test-time Augmentation (TTA)
- Network             | P% | R% | mAP% | FPS     | P% | R% | mAP% | FPS
- Confidence Threshold 0.4, IoU Threshold 0.6
- Small               | 72 | 46 | 43   | 79      | 76 | 48 | 50   | 45
- Medium              | 73 | 54 | 49   | 53      | 76 | 58 | 57   | 26
- Large               | 75 | 56 | 52   | 34      | 77 | 63 | 60   | 16
- X-Large             | 74 | 53 | 49   | 20      | 71 | 59 | 55   | 10
- Confidence Threshold 0.2, IoU Threshold 0.4
- Small               | 66 | 50 | 47   | 82      | 64 | 55 | 52   | 45
- Medium              | 66 | 57 | 51   | 53      | 77 | 58 | 59   | 27
- Large               | 71 | 61 | 56   | 35      | 78 | 63 | 63   | 16
- X-Large             | 70 | 54 | 50   | 21      | 68 | 62 | 56   | 10
- Confidence Threshold 0.1, IoU Threshold 0.2
- Small               | 65 | 52 | 48   | 81      | 65 | 53 | 53   | 45
- Medium              | 69 | 54 | 51   | 53      | 77 | 58 | 59   | 26
- Large               | 73 | 61 | 57   | 34      | 79 | 63 | 63   | 16
- X-Large             | 71 | 54 | 52   | 21      | 69 | 62 | 57   | 10
- Model Ensembling (ME), Confidence Threshold 0.2, IoU Threshold 0.4
- A = Small, B = Large, Comb. A1 | --- | --- | --- | --- | 77 | 66 | 65 | 25
- D. Quantitative Validation Results on Edge-GPU Devices
- This section reviews the quantitative validation results on two different Edge-GPU platforms (Jetson Nano and Jetson Xavier NX). It is pertinent to mention that the Jetson Xavier NX development kit embeds more computational power in terms of GPU, CPU, and memory than the Nvidia Jetson Nano. Table VII shows the specification comparison of both boards.
- TABLE VII
- HARDWARE SPECIFICATION COMPARISON OF NVIDIA JETSON NANO AND NVIDIA JETSON XAVIER NX
-
- Board                | Jetson Nano [23]                                | Jetson Xavier NX [25]
- CPU                  | Quad-core ARM Cortex-A57 MPCore, 2 MB L2,       | 6-core NVIDIA Carmel ARM v8.2 64-bit CPU, 6 MB L2 + 4 MB L3,
-                      | max operating frequency 1.43 GHz                | max operating frequency 1.9 GHz
- GPU                  | 128-core Maxwell GPU, 512 GFLOPS (FP16),        | 384 CUDA cores + 48 Tensor cores (Volta GPU), 21 TOPS,
-                      | max operating frequency 921 MHz                 | max operating frequency 1100 MHz
- RAM                  | 4 GB 64-bit LPDDR4 @ 1600 MHz, 25.6 GB/s        | 8 GB 128-bit LPDDR4x @ 1600 MHz, 51.2 GB/s
- On-module storage    | 16 GB eMMC 5.1 flash storage, bus width 8-bit, max bus frequency 200 MHz (HS400), on both boards
- Thermal design power | 5 W - 10 W                                      | 10 W - 15 W
- AI performance       | 0.5 TFLOPS (FP16)                               | 6 TFLOPS (FP16), 21 TOPS (INT8)
-
1068
- On Jetson Nano we have validated the performance of the
1069
- small version only whereas on Jetson Xavier NX we have
1070
- evaluated the performance of smaller and medium versions of
1071
- models due to the memory limitations and constrained
1072
- hardware resources on these boar ds. During the testing phase,
1073
- we have selected the highest power modes on both boards to
1074
- provide the utmost efficiency thus utilizing maximum hardware
1075
- resources. For instance, on Nvidia Xavier board NX we have selected ‘Mode Id: 2’ which means the board is operating in 15 -
1076
- watt power mode with all the six cores active with a maximal
1077
- CPU frequency of 1.4 gigahertz and GPU frequency of 1.1
1078
- gigahertz. Similarly, on Nvidia Jetson Nano all the four CPU
1079
- cores were utilized with overall power utilization of 5 watts .
1080
- Table VIII shows the quantitative validation results on ARM
1081
- processor based embedded boards
1082
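On JetPack-based boards these power modes are switched from the command line; the commands below are the usual way to do it (the mode ids are the ones quoted above, and tegrastats is used for live monitoring), though the exact id-to-mode mapping depends on the JetPack release installed.

```bash
sudo nvpmodel -m 0     # Jetson Nano: MAXN power mode (as used in Fig. 21)
sudo nvpmodel -m 2     # Jetson Xavier NX: 15 W, 6-core mode ("Mode Id: 2")
sudo jetson_clocks     # pin CPU/GPU/EMC clocks to their maximum
tegrastats             # live CPU/GPU load, memory and thermal readout
```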
- TABLE VIII
- QUANTITATIVE RESULTS ON EDGE PLATFORMS (INFERENCE IMAGE SIZE: 128 x 128)
-
-                     |  NA                      |  TTA
- Network             | P% | R% | mAP% | FPS     | P% | R% | mAP% | FPS
- Platform: Nvidia Jetson Nano
- Conf. 0.4, IoU 0.6:  Small  | 75 | 44 | 45 | 3  | 77 | 47 | 49 | 1
- Conf. 0.2, IoU 0.4:  Small  | 75 | 44 | 47 | 3  | 71 | 51 | 51 | 1
- Conf. 0.1, IoU 0.2:  Small  | 66 | 47 | 48 | 2  | 73 | 50 | 52 | 1
- Platform: Nvidia Jetson Xavier NX
- Conf. 0.4, IoU 0.6:  Small  | 75 | 44 | 45 | 18 | 77 | 47 | 49 | 10
-                      Medium | 76 | 53 | 50 | 12 | 79 | 50 | 52 | 6
- Conf. 0.2, IoU 0.4:  Small  | 75 | 44 | 47 | 19 | 71 | 51 | 51 | 10
-                      Medium | 76 | 52 | 53 | 12 | 73 | 54 | 53 | 6
- Conf. 0.1, IoU 0.2:  Small  | 66 | 47 | 48 | 18 | 73 | 50 | 52 | 10
-                      Medium | 76 | 51 | 52 | 12 | 81 | 49 | 53 | 6
- E. Real-time Hardware Feasibility Testing
- While running these tests we closely monitored the temperature ratings of different hardware peripherals on both Edge-GPU platforms. This is done to prevent overheating effects that could damage the onboard processor or affect the overall operational capability of the system. In the case of the Nvidia Jetson Nano, a cooling fan was mounted on top of the processor heatsink to reduce the overheating effect, as shown in Fig. 19.
- Fig. 19. External 5-volt fan unit mounted on the Nvidia Jetson Nano processor heatsink to avoid onboard overheating while running the inference testing.
- The temperature ratings of the various hardware peripherals are monitored using eight different on-die thermal sensors and one on-die thermal diode. These temperature monitors are referred to as CPU-Thermal, GPU-Thermal, Memory-Thermal, and PLL-Thermal (part of the thermal zone). The external fan helps in drastically reducing the temperature ratings of the various hardware peripherals compared to running without a fan. Fig. 20 shows the difference in the temperature ratings of the onboard thermal sensors while running the smaller version of the model on the Nvidia Jetson Nano without and with the external cooling fan mounted.
- Fig. 20. Temperature rating difference of different onboard hardware peripherals on the Jetson Nano: (a) without fan: A0 thermal zone = 65.50 C, CPU = 55 C, GPU = 52 C, PLL = 53.50 C, overall thermal temperature = 53.50 C; (b) with external fan: A0 thermal zone = 45.50 C, CPU = 33 C, GPU = 33 C, PLL = 33 C, overall thermal temperature = 32.75 C.
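On Jetson boards these sensor readings are exposed through sysfs thermal zones, so they can be polled during a test run with a small loop like the following (the zone names differ slightly between the Nano and the Xavier NX):

```bash
# Dump all thermal zones in degrees C (values are reported in millidegrees).
for z in /sys/devices/virtual/thermal/thermal_zone*; do
  echo "$(cat $z/type): $(($(cat $z/temp) / 1000)) C"
done
```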
- It can be seen from Fig. 20 that by mounting an external cooling fan the temperature ratings of the various onboard peripherals of the Jetson Nano were reduced by nearly 30%, allowing us to operate the board at its maximum capacity for rigorous model testing. Fig. 21 shows the Nvidia Jetson Nano running at its full pace (with an external fan), with all four cores running at their maximum limit (100% capacity) while running the quantitative and inference tests with the smaller network variant of the YOLO-v5 framework deployed.
- Fig. 21. Nvidia Jetson Nano running in MAXN power mode with all cores at maximum capacity while running the inference test and quantitative validation test.
- Fig. 22 shows the temperature ratings of the onboard thermal sensors while running the smaller version of the model on the Nvidia Jetson Xavier NX board, whereas Fig. 23 shows the CPU and GPU usage while running the smaller variant of the YOLO-v5 framework for the quantitative validation and inference tests on the Nvidia Xavier NX development kit.
- Fig. 22. Temperature ratings of different onboard hardware peripherals on the Jetson Xavier NX: (a) A0 thermal zone = 41.50 C, AUX = 42.5 C, CPU = 44 C, GPU = 42 C, overall thermal temperature = 42.80 C.
- Fig. 23. Nvidia Jetson Xavier running in the 15-watt 6-core power mode: (a) all CPU cores running at maximum capacity while running the quantitative validation test, (b) 69% GPU utilization while running the inference test with an image size of 128 x 128.
- VI. MODEL PERFORMANCE OPTIMIZATION(S)
- This section mainly aims at further model optimization using the TensorRT [33] inference accelerator tool. The prime reason for this is to further increase the FPS rate for real-time evaluation and onboard feasibility testing on edge devices. Secondly, it helps in saving the onboard memory footprint of the target device by performing various optimization methods. TensorRT [33] works by performing five modes of optimization to increase the throughput of deep neural networks. In the first step, it maximizes throughput by quantizing models to the 8-bit integer data type or FP16 precision while preserving model accuracy; this significantly reduces the model size, since it is transformed from the original FP32 to an FP16 version. In the next step, it uses layer and tensor fusion techniques to further optimize the usage of onboard GPU memory. The third step is kernel auto-tuning; this is the most important step, where the TensorRT engine shortlists the best network layers and the optimal batch size based on the target GPU hardware. In the second-to-last step, it minimizes the memory footprint and re-uses memory by allocating memory to a tensor only for the duration of its usage. In the last step, it processes multiple input streams in parallel and optimizes neural networks periodically with dynamically generated kernels [33].
- In the proposed research work we have deployed the smaller variant of YOLO-v5 using the TensorRT inference accelerator on both edge platforms, the Nvidia Jetson Nano and Nvidia Jetson Xavier NX development boards, to further improve the performance of the trained model. It produces faster inference times, increasing the FPS on thermal data, which in turn helps us in building an effective real-time forward sensing system for ADAS embedded applications. Fig. 24 depicts a block diagram representation of the deployment phase of the TensorRT inference accelerator on the embedded platforms.
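One common route for this conversion is to export the trained PyTorch weights to ONNX and then build a device-specific FP16 engine with TensorRT's trtexec utility. The commands below are only a sketch of that route: the file names are placeholders, and the exact export options depend on the YOLO-v5 and JetPack versions installed on the board.

```bash
# On the host or the board: export the trained weights to ONNX (ultralytics yolov5 export script)
python export.py --weights thermal_small.pt --include onnx

# On the Jetson: build an FP16 TensorRT engine tuned for that specific GPU
trtexec --onnx=thermal_small.onnx --fp16 --saveEngine=thermal_small_fp16.engine
```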
- Fig. 24. Overall block diagram representation of deploying and running the TensorRT inference accelerator on two different embedded platforms.
- Table IX shows the overall inference time along with the FPS rate on the thermal test data using the TensorRT run-time engine. By analyzing the results in Table IX we can deduce that the TensorRT API boosts the overall FPS rate on the ARM-based embedded platforms by nearly 3.5 times compared to the FPS rate achieved by running the non-optimized smaller variant on the Nvidia Jetson Nano and Nvidia Jetson Xavier boards. The same is demonstrated in the graphical chart results in Fig. 25.
- TABLE IX
- TENSORRT INFERENCE ACCELERATOR RESULTS: FPS ON NVIDIA JETSON NANO AND NVIDIA JETSON XAVIER NX
-
- Board                  | Nvidia Jetson Nano                     | Nvidia Jetson Xavier NX
- Test data              | 402 images, 128x128 resolution         | 402 images, 128x128 resolution
- Overall inference time | 35,090 ms (~35.1 s)                    | 6,675 ms (~6.7 s)
- FPS                    | 35.1 s / 402 frames = 0.087 s/frame;   | 6.7 s / 402 frames = 0.0166 s/frame;
-                        | 1 / 0.087 = 11.49, i.e. ~11 fps        | 1 / 0.0166 = 60.24, i.e. ~60 fps
- Fig. 25. FPS increase of nearly 3.5 times on the Jetson Nano and Jetson Xavier NX embedded boards using the TensorRT-built optimized inference engine.
- Fig. 26 shows the thermal object detection inference results produced through the neural accelerator on six different thermal frames from the public as well as the locally acquired test data.
- Fig. 26. Inference results using the TensorRT neural accelerator: (a) object detection results on public data, (b) object detection results on locally acquired thermal frames.
- DISCUSSION / ANALYSIS
- This section reviews the training and testing performance of all the YOLO-v5 framework model variants.
- • During the training phase, the large YOLO-v5 network outperforms the other network variants, scoring the highest precision of 82.29% and a mean average precision (mAP) score of 71.8%.
- • Although the large network variant performed significantly better during the training phase, the small network variant also performed well, with an overall precision of 75.58% and an mAP of 70.71%. It also achieves a higher FPS rate on GPU during the testing phase compared to the large model. Fig. 27 summarizes the quantitative performance comparison of the small and large network variants of the YOLO framework.
- Fig. 27. Quantitative metrics comparison of the small and large network variants.
- • Due to its smaller number of model parameters compared to the larger network variant (7.3M vs 47M parameters) and its faster FPS rate on GPU during the testing phase, as shown in Fig. 26, the small model is shortlisted for validation and deployment purposes on both edge embedded platforms, the Nvidia Jetson Nano and Nvidia Jetson Xavier NX kits.
- • During the testing phase, it was noticed that by reducing the confidence threshold from 0.4 to 0.1 and the IoU threshold from 0.6 to 0.2 in three stepwise intervals, the model's mAP and recall rates increase significantly, but the precision level decreases. However, the FPS rate remains effectively constant for most of the trained models.
- • TTA achieved improved testing results compared to the NA method; however, the main drawback of this method is that the FPS rate drops substantially, which is not suitable for real-time deployment. To overcome this problem, a model ensembling (ME) based inference engine is proposed. Table VI shows the ME results obtained by running the large and small models in a parallel configuration with a confidence threshold of 0.2 and an IoU threshold of 0.4. The ensembling engine attains an overall mAP of 66% with 25 frames per second.
- • When comparing the individual hardware resources of the two edge platforms (Nvidia Jetson Nano and Jetson Xavier), the Xavier is computationally more powerful than the Jetson Nano. Note that, due to the memory limitations and lower computational power of the Jetson Nano, only the small network variant was evaluated on it, whereas both the small and medium network variants were evaluated on the Jetson Xavier NX.
- • Throughout the testing phase it was important to keep a close eye on the operational temperature ratings of the different onboard thermal sensors to avoid overheating, which might damage the onboard components or affect the system's typical operational performance. Active cooling fans were used on both boards during testing, and both ran close to their rated temperature limits.
- • This study also included model optimization using the TensorRT [33] inference accelerator tool. It was determined that TensorRT leads to an approximate increase of the FPS rate by a factor of 3.5 when compared to the non-optimized smaller variant of YOLO-v5 on the Nvidia Jetson Nano and Nvidia Jetson Xavier devices.
- • After performing model optimization, the Nvidia Jetson Nano produced 11 FPS and the Nvidia Jetson Xavier achieved 60 FPS on the test data.
- CONCLUSION
- Thermal imaging provides superior and effective results in challenging environments such as low-lighting scenarios and has strong immunity to visual limitations, making it an optimal solution for intelligent and safer vehicular systems. In this study, we presented a new benchmark thermal dataset that comprises over 35K distinct frames recorded, analyzed, and open-sourced in challenging weather and environmental conditions, utilizing a low-cost yet reliable uncooled LWIR thermal camera. All the YOLO-v5 network variants were trained using locally gathered data as well as four different publicly available datasets. The performance of the trained networks is analysed on GPU as well as on ARM-processor-based edge devices for onboard automotive sensor suite feasibility testing. On the edge devices, the small and medium network editions of YOLO are deployed and tested, owing to the memory limitations and lower computational power of these boards. Lastly, we further optimized the smaller network variant using the TensorRT inference accelerator to explicitly increase the FPS on the edge devices. This allowed the system to achieve 11 frames per second on the Jetson Nano, while the Nvidia Jetson Xavier delivered a significantly higher performance of 60 frames per second. These results validate the potential of thermal imaging as a core component of ADAS systems for intelligent vehicles.
- As future directions, the system's performance can be further enhanced by porting the trained networks to more advanced and powerful edge devices, tailoring it for real-time onboard deployments. Moreover, the current system focuses on object recognition, but it can be extended to incorporate image segmentation, road and lane detection, traffic signal and road sign classification, and object tracking to provide comprehensive driver assistance.
- ACKNOWLEDGMENT
- The authors would like to acknowledge Cosmin Rotariu from Xperi-Ireland and the rest of the team members for their support in preparing the data acquisition setup and helping throughout the data collection, and Quentin Noir from Lynred France for his feedback. Moreover, the authors would like to acknowledge the contributors of all the public datasets for providing the image resources to carry out this research work, and ultralytics for sharing the YOLO-V5 PyTorch version.
- REFERENCES
1429
- [1] Lynred France, https://www.lynred.com / (Last accessed on 20th July
1430
- 2021) .
1431
- [2] Davis, James W., and Mark A. Keck. "A two -stage template approach to
1432
- person detection in thermal imagery." in proceedings of 2005 Seventh
1433
- IEEE Workshops on Applications of Computer Vision
1434
- (WACV/MOTION'05) -Volume 1 , vol. 1, IEEE, 2005 , pp. 364 -369.
1435
- https://doi.org/10.1109/ACVMOT.2005.14 .
1436
- [3] A. Torabi, G. Massé, and G. A. Bilodeau, “An iterative integrated
1437
- framework for thermal -visible image registration, sensor fusion, and
1438
- people tracking for video su rveillance applications,” Comput. Vis. Image
1439
- Underst., vol. 116, no. 2, 2012, doi: 10.1016/j.cviu.2011.10.006.
1440
- [4] Z. Wu, N. Fuller, D. Theriault, and M. Betke, “A thermal infrared video
1441
- benchmark for visual analysis,” 2014, doi: 10.1109/CVPRW.2014.39.
1442
- http://www.vcipl.okstate.edu/otcbvs/bench/
1443
- [5] Hwang, S., Park, J., Kim, N., Choi, Y., & Kweon, I. S. (n.d.).
1444
- Multispectral Pedestrian Detection: Benchmark Dataset and Baseline .
1445
- http://rcv.kaist.ac.kr/multisp ectral -pedestrian/
1446
- [6] Z. Xu, J. Zhuang, Q. Liu, J. Zhou, and S. Peng, “Benchmarking a large -
1447
- scale FIR dataset for on -road pedestrian detection,” Infrared Phys.
1448
- Technol. , vol. 96, 2019, doi: 10.1016/j.infrared.2018.11.007.
1449
- [7] FLIR Thermal Dataset . [Online] Available:
1450
- ‘https://www.flir.com/oem/adas/adas -dataset -form/’ , (Last accessed on
1451
- 22nd July 2021).
1452
- [8] R. R. Ziyatdinov and R. A. Biktimirov, “Automated system of
1453
- recognition of road signs for ADAS systems,” in IOP Conference Series:
1454
- Materials Scienc e and Engineering , 2018, vol. 412, no. 1, doi:
1455
- 10.1088/1757 -899X/412/1/012081.
1456
- [9] O. Töro, T. Bécsi, and S. Aradi, “Performance Evaluation of a Bernoulli
1457
- filter Based Multi -Vehicle Cooperative Object Detection,” in
1458
- Transportation Research Procedia , 2017, vol. 27, doi:
1459
- 10.1016/j.trpro.2017.12.042.
1460
- [10] G. Yan, M. Yu, Y. Yu, and L. Fan, “Real -time vehicle detection using
1461
- histograms of oriented gradients and AdaBoost classification,” Optik
1462
- (Stuttg). , vol. 127, no. 19, 2016, doi: 10.1016/j.ijleo.2016.05.092.
1463
- [11] A. M. Ali , W. I. Eltarhouni, and K. A. Bozed, “On -road vehicle detection
1464
- using support vector machine and decision tree classifications,” 2020,
1465
- doi: 10.1145/3410352.3410803.
1466
- [12] V. Ghenescu, E. Barnoviciu, S. V. Carata, M. Ghenescu, R. Mihaescu,
1467
- and M. Chindea, “Object recognition on long range thermal image using
1468
- state of the art dnn,” 2018, doi: 10.1109/ROLCG.2018.8572026.
1469
- [13] A. Narayanan, R. Darshan Kumar, R. Roselinkiruba, and T. Sree
1470
- Sharmila, “Study and Analysis of Pedestrian Detection in Thermal
1471
- Images Using YOLO an d SVM,” 2021, doi:
1472
- 10.1109/WiSPNET51692.2021.9419443.
1473
- [14] R. Kalita, A. K. Talukdar, and K. Kumar Sarma, “Real -Time Human
1474
- Detection with Thermal Camera Feed using YOLOv3,” 2020, doi:
1475
- 10.1109/INDICON49873.2020.9342089.
1476
- [15] A. Corovic, V. Ilic, S. Duric, M. Marijan, and B. Pavkovic, “The Real -
1477
- Time Detection of Traffic Participants Using YOLO Algorithm,” 2018,
1478
- doi: 10.1109/TELFOR.2018.8611986.
1479
- [16] M. Ivašić -Kos, M. Krišto, and M. Pobar, “Human detection in thermal
1480
- imaging using YOLO,” in ACM International Conferen ce Proceeding
1481
- Series , 2019, vol. Part F148262, doi: 10.1145/3323933.3324076.
1482
- [17] X. Han, J. Chang, and K. Wang, “Real -time object detection based on
1483
- YOLO -v2 for tiny vehicle object,” in Procedia Computer Science , 2021,
1484
- vol. 183, doi: 10.1016/j.procs.2021.02.03 1.
1485
- [18] V. Paidi, H. Fleyeh, and R. G. Nyberg, “Deep learning -based vehicle
1486
- occupancy detection in an open parking lot using thermal camera,” IET
1487
- Intell. Transp. Syst. , vol. 14, no. 10, 2020, doi: 10.1049/iet -
1488
- its.2019.0468.
1489
- [19] CVC -09 FIR Sequence Pedestrian Dataset. Available
1490
- online: http://adas.cvc.uab.es/elektra/enigma -portfolio/item -1/ (Last
1491
- accessed on 22nd June 2021).
1492
- [20] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel,
1493
- “Detection of traffic signs in real -world images: The German traffic sign
1494
- detection benchmark,” 2013, doi: 10.1109/IJCNN.2013.6706807.
1495
- [21] L. Zhang, J. Tan, D. Han, and H. Zhu, “From machine learning to deep
1496
- learning: progress in machine intelligence for rational drug discovery,”
1497
- Drug Discovery Today , vol. 22, no. 11. 2017, doi:
1498
- 10.1016/j.drudis.2017.08.010.
1499
- [22] Pretrained object detection and classification models , [Online]
1500
- Available, https://pytorc h.org/serve/model_zoo.html , (Last accessed on
1501
- 29th August 2021). [23] Nvidia Jetson Nano, [Online] Available :
1502
- https://developer.nvidia.com/embedded/jetson -nano -developer -kit,
1503
- (Last accessed on 14th J une 2021).
1504
- [24] M. Kristo, M. Ivasic -Kos, and M. Pobar, “Thermal Ob ject Detection in
1505
- Difficult Weather Conditions Using YOLO,” IEEE Access , vol. 8, 2020,
1506
- doi: 10.1109/ACCESS.2020.3007481.
1507
- [25] Nvidia Jetson Xavier NX Development kit , [Online] Available :
1508
- https://developer.nvidia.com/embedded/jetson -xavier -nx-devkit , (Last
1509
- accessed on 14th J une 2021).
1510
- [26] A. Farouk Khalifa, E. Badr, and H. N. Elmahdy, “A survey on human
1511
- detection surveillance systems for Raspberry Pi,” Image Vis. Comput. ,
1512
- vol. 85, 2019, doi: 10 .1016/j.imavis.2019.02.010.
1513
- [27] D. S. Breland, S. B. Skriubakken, A. Dayal, A. Jha, P. K. Yalavarthy,
1514
- and L. R. Cenkeramaddi, “Deep Learning -Based Sign Language Digits
1515
- Recognition from Thermal Images with Edge Computing System,” IEEE
1516
- Sens. J. , vol. 21, no. 9, 2021, doi: 10.1109/JSEN.2021.3061608.
1517
- [28] Y. Liu, S. Cao, P. Lasang, and S. Shen, “Modular Lightweight Network
1518
- for Road Object Detection Using a Feature Fusion Approach,” IEEE
1519
- Trans. Syst. Man, Cybern. Syst. , vol. 51, no. 8, 2021, doi:
1520
- 10.1109/TSMC.2019.294505 3.
1521
- [29] J. Fritsch, T. Kuhnl, and A. Geiger, “A new performance measure and
1522
- evaluation benchmark for road detection algorithms,” 2013, doi:
1523
- 10.1109/ITSC.2013.6728473.
1524
- [Online] Available, http://www.cvlibs.n et/datasets/kitti/ (Last accessed
1525
- on 14th June 2021)
1526
- [30] G. Balasekaran, S. Jayakumar, and R. Pérez de Prado, “An intelligent
1527
- task scheduling mechanism for autonomous vehicles via deep learning,”
1528
- Energies , vol. 14, no. 6, 2021, doi: 10.3390/en14061788.
1529
- [31] LabelIm g object annotation tool, [Online] Available
1530
- https://github.com/tzutalin/labelImg , (Last accessed on 20th July 2021) .
1531
- [32] Heliaus European Union Project, https://www.heliaus.eu/ (Last accessed
1532
- on 20th February 2021) .
1533
- [33] Nvidia TensorRT for developers, [Online] Available,
1534
- https://developer.nvidia.com/tensorrt, (Last accessed on 14th February
1535
- 2021).
1536
- [34] J. de Vries, “Image proc essing and noise reduction techniques for
1537
- thermographic images from large -scale industrial fires,” 2014.
1538
- [35] Ganaie, M. A., and Minghui Hu. "Ensemble deep learning: A
1539
- review." arXiv preprint repository arXiv:2104.02395 (2021) .
1540
- [36] Saragaglia, Amaury, and Alain Durand. "Method of infrared image
1541
- processing for non -uniformity correction." U.S. Patent No. 10,015,425.
1542
- 3 Jul. 2018.
1543
-
1544
- Muhammad Ali Farooq received his BE
1545
- degree in electronic engineering from
1546
- IQRA University in 2012 and his MS
1547
- degree in electrical control engineering
1548
- from the National University of Sciences
1549
- and Technology (NUST) in 2017. He is
1550
- currently pursuing the Ph.D. degree at the
1551
- National University of Ireland Galway
1552
- (NUIG) . His research interests include
1553
- machine vision , computer vision, video analytics, and sensor
1554
- fusion. He has won the prestigious H2020 European Union
1555
- (EU) scholarship and currently working at NUIG as one of the
1556
- consortium partners in the H eliaus (thermal v ision augmented
1557
- awarenes s) project funded by EU.
1558
- Waseem Shariff received his B.E degree
1559
- in computer science from Nagarjuna
1560
- College of Engineering and Technology
1561
- (NCET) in 2019 and his M.S. degree in
1562
- computer science, specializing in artificial
1563
- intelligence from National University of
1564
- Ireland Galway (NUIG) in 2020. He is
1565
- working as research assistant at National
1566
- University of Ireland Galway (NUIG). He
1567
- is associated with Heliaus (thermal v ision augmented
1568
- awarenes s) project. He is also allied with FotoNation/Xperi
1569
- 14
1570
-
1571
- research team. His research interests include machine learning
1572
- utilizing deep neural networks for computer vision applications,
1573
- including working with synthetic data, thermal data, and RGB .
1574
-
1575
- Peter Corcoran (Fellow, IEEE) holds a
1576
- Personal Chair in Electronic Engineering at
1577
- the College of Science and Engineering,
1578
- National University of Ireland Galway
1579
- (NUIG) . He was the Co -Founder in several
1580
- start-up companies, notably FotoNation,
1581
- now the Imaging Division of Xperi
1582
- Corporation. He has more th an 600 cited
1583
- technical publications and patents, more than 120 peer -
1584
- reviewed journal articles, 160 international conference papers,
1585
- and a co -inventor on more than 300 granted U.S. patents. He is
1586
- an IEEE Fellow recognized for his contributions to digital
1587
- camera technologies, notably in -camera red -eye correction and
1588
- facial detection. He is a member of the IEEE Consumer
1589
- Technology Society for more than 25 years and the Founding
1590
- Editor of IEEE Consumer Electronics Magazine.
1591
-
1592
-
1593
-
1594
-
1595
-
1596
-
1597
-
1598
-
1599
-
1600
-
txt/2201.02942.txt DELETED
@@ -1,1740 +0,0 @@
- Article in Journal of Guidance, Control, and Dynamics, December 2021. DOI: 10.2514/1.G006091
- Fast solver for J2-perturbed Lambert problem using deep
27
- neural network
28
- Bin Yang1 and Shuang Li2*
29
- Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
30
- Jinglang Feng3 and Massimiliano Vasile4
31
- University of Strathclyde, Glasgow, Scotland G1 1XJ, United Kingdom
32
This paper presents a novel and fast solver for the J2-perturbed Lambert problem. The solver consists of an intelligent initial guess generator combined with a differential correction procedure. The intelligent initial guess generator is a deep neural network that is trained to correct the initial velocity vector coming from the solution of the unperturbed Lambert problem. The differential correction module takes the initial guess and uses a forward shooting procedure to further update the initial velocity and exactly meet the terminal conditions. Eight sample forms are analyzed and compared to find the optimum form to train the neural network on the J2-perturbed Lambert problem. The accuracy and performance of this novel approach will be demonstrated on a representative test case: the solution of a multi-revolution J2-perturbed Lambert problem in the Jupiter system. We will compare the performance of the proposed approach against a classical standard shooting method and a homotopy-based perturbed Lambert algorithm. It will be shown that, for a comparable level of accuracy, the proposed method is significantly faster than the other two.

1 Ph.D. candidate, Advanced Space Technology Laboratory, No. 29 Yudao Str., Nanjing 211106, China.
2 Professor, Advanced Space Technology Laboratory, Email: lishua [email protected], Corresponding Author.
3 Assistant Professor, Department of Mechanical and Aerospace Engineering, University of Strathclyde, 75 Montrose Street, Glasgow, UK.
4 Professor, Department of Mechanical and Aerospace Engineering, University of Strathclyde, 75 Montrose Street, Glasgow, UK.
52
- I. Introduction
53
- The effect of orbital perturbations, such as those coming from a non-spherical, inhomogeneous gravity field,
54
leads a spacecraft to depart from the trajectory prescribed by the solution of the Lambert problem in a simple
55
- two-body model [1], [2]. Since the perturbation due to the J2 z onal harmonics has the most significant effect around
56
- all planets in the solar system, a body of research exists that addressed the problem of solving the perturbed Lambert
57
- problem accounting for the J2 effect [3], [4]. This body of res earch can be classified into two categories: indirect
58
- methods and shooting methods [5]. Indirect methods transform th e perturbed Lambert problem into the solution of a
59
system of parametric nonlinear algebraic equations. For instance, Engels and Junkins [1] proposed an indirect
60
- method that uses the Kustaanheimo-Stiefel (KS) transformation t o derive a system of two nonlinear algebraic
61
- equations. Der [6] presented a superior Lambert algorithm by us ing the modified iterative method of Laguerre that
62
has good computational performance if given a good initial guess. Armellin et al. [7] proposed two algorithms,
63
- based on Differential Algebra, for the multi-revolution perturb ed Lambert problems (MRPLP). One uses homotopy
64
- over the value of the perturbati on and the solution of the unpe rturbed, or Keplerian, Lambert problem as initial guess.
65
- The other uses a high-order Taylor polynomial expansion to map the dependency of the terminal position on the
66
- initial velocity, and solves a system of three nonlinear equati ons. A refinement step is th en added to obtain a solution
67
- with the required accuracy. A common problem of indirect method s is the need for a good initial guess to solve the
68
- system of nonlinear algebraic equations. A bad initial guess in creases the time to solve the algebraic system or can
69
lead to a failure of the solution procedure, especially when the transfer time is long.
70
- Shooting methods transcribe the perturbed Lambert problem into the search for the initial velocity vector that
71
- provides the desired terminal conditions at a given time. Kraig e et al. [8] investigated the efficiency of different
72
- shooting approaches and found that a straightforward differenti al correction algorithm combined with the
73
- Rectangular Encke’s motion predictor is more efficient than the analytical KS approach. Junkins and Schaub [9]
74
- transformed the problem into a two-point boundary value problem and applied Newton iteration method to solve it.
75
- The main problem with shooting methods is that, with the increa se of the transfer time, the terminal conditions
76
- become more sensitive to the variations of the initial velocity and the derivatives of the final states with respect to
77
- the initial velocity are more affected by the propagation of nu merical errors. In order to mitigate this problem, Arora
78
- et al. [10] proposed to compute the derivatives of the initial and final velocity vectors with respect to the initial and
79
- final position vectors, and the time of flight, with the state transition matrix. Woollands et al. [11] applied the KS
80
- transformation and the modified Chebyshev–Picard iteration to o btain the perturbed solution starting from the
81
- solution of the Keplerian Lambert problem, which is to solve th e initial velocity vector corresponding to the transfer
82
- between two given points with a given time of free flight in a two-body gravitational field [12]. For the
83
- multi-revolution perturbed Lambert problem with long flight tim e, Woollands et al. [13] also utilized the modified
84
- Chebyshev-Picard iteration and the method of particular solutio ns based on the local-linearity, to improve the
85
- computational efficiency, but its solution relies on the soluti on of the Keplerian Lambert problem as the initial
86
- guesses. Alhulayil et al. [14] proposed a high-order perturbati on expansion method that accelerates convergence,
87
- compared to conventional first-order Newton’s methods, but requ ires a good initial guess to guarantee convergence.
88
- Yang et al. [15] developed a targeting technique using homotopy to reduce the sensitivity of the terminal position
89
- errors on the variation of the initial velocity. However, often techniques that improve robustness of convergence by
90
reducing the sensitivity of the terminal conditions on the initial velocity vector incur a higher computational cost.
91
- The major problem of both classes of methods can be identified in the need for a judicious initial guess, often
92
- better than the simple solution of the Keplerian Lambert proble m. To this end, this paper proposes a novel method
93
- combining the generation of a first guess with machine learning and a shooting method based on finite-differences.
94
We propose to train a deep neural network (DNN) to generate initial guesses for the solution of the J2-perturbed Lambert problem, in line with a growing interest in the application of machine learning (ML) to space trajectory design [16], [17]. In Ref. [18] one can find a recent survey of the application of ML to spacecraft guidance
97
- dynamics and control. Deep neural network is a technology in th e field of ML, which has at least one hidden layer
98
- and can be trained using a back-propagation algorithm [18]. Sán chez-Sánchez and Izzo [19] used DNNs to achieve
99
- online real-time optimal control for precise landing. Li et al. [16] used DNN to estimate the parameters of low-thrust
100
- and multi-impulse trajectories in multi-target missions. Zhu an d Luo [20] proposed a rapid assessment approach of
101
low-thrust transfer trajectory using a classification multilayer perceptron and a regression multilayer perceptron.
102
- Song and Gong [21] utilized a DNN to approximate the flight tim e of the transfer trajectory with solar sail. Cheng et
103
- al. [22] adopted the multi-scale deep neural network to achieve real-time on-board trajectory optimization with
104
- guaranteed convergence for optimal transfers. However, to the b est of our knowledge ML has not yet been applied
105
- to improve the solution of the perturbed Lambert problem.
106
- The DNN-based solver proposed in this paper was applied to the design of trajectories in the Jovian system. The
107
- strong perturbation induced by the J2 harmonics of the gravity field of Jupiter creates significant differences
108
- between the J2-perturbed and Keplerian Lambert solutions, even for a small number of revolutions. Hence Jupiter
109
- was chosen to put the proposed DNN-based solver to the test. Th e performance of the combination of the DNN first
110
- guess generation and shooting will be compared against two solv ers: one implementing the homotopy method of
111
- Yang et al. [15], the other implementing a direct application o f Newton method starting from a first guess generated 5
112
- with the solution of the Keplerian Lambert problem. The homotop y method in Ref. [15] was chosen for its
113
- simplicity of implementation and robustness also in the case of long transfer times.
114
- The rest of this paper is organized as follows. In Sec. II, the J2-perturbed Lambert problem and the shooting
115
- method are presented. Sec. III investigates eight sample forms and their learning features for the DNN. With
116
- comparative analysis of the different sample forms and standard ization technologies, the optimal sample form for
117
- the J2-perturbed Lambert problem is found. The algorithm using the deep neural network and the finite
118
- difference-based shooting method is proposed and implemented to solve the J2-perturbed Lambert problem in Sec.
119
- IV. Considering Jupiter’s J2 perturbation, Sec. V compares the numerical simulation results of the proposed
120
- algorithm, the traditional shooting method and the method with homotopy technique. Finally, the conclusions are
121
- made in Sec. VI.
122
- II. J2-perturbed Lambert Problem
123
- This section presents the dynamical model we used to study the J2-perturbed Lambert problem and the shooting
124
- method we implemented to solve it.
125
- A. Dynamical modeling with J2 perturbation
126
- The J2 non-spherical term of the gravity field of planets and m oons in the solar system induces a significant
127
- variation of the orbital parameters of an object orbiting those celestial bodies. Thus, the accurate solution of the
128
- Lambert problem [12] needs to account for the J2 perturbation, especially in the case of a multi-revolution transfer.
129
- The dynamic equations of an object subject to the effect of J2 can be written, in Cartesian coordinates, in the
130
following form:

\dot{x} = v_x, \qquad \dot{y} = v_y, \qquad \dot{z} = v_z

\dot{v}_x = -\dfrac{\mu x}{r^3}\left[ 1 + \dfrac{3}{2} J_2 \left(\dfrac{R}{r}\right)^2 \left( 1 - \dfrac{5 z^2}{r^2} \right) \right]

\dot{v}_y = -\dfrac{\mu y}{r^3}\left[ 1 + \dfrac{3}{2} J_2 \left(\dfrac{R}{r}\right)^2 \left( 1 - \dfrac{5 z^2}{r^2} \right) \right]

\dot{v}_z = -\dfrac{\mu z}{r^3}\left[ 1 + \dfrac{3}{2} J_2 \left(\dfrac{R}{r}\right)^2 \left( 3 - \dfrac{5 z^2}{r^2} \right) \right]    (1)
160
where \mu, R, and J_2 represent the gravitational constant, mean equatorial radius and oblateness of the celestial body, respectively. (x, y, z, v_x, v_y, v_z) are the Cartesian coordinates of the state of the spacecraft, and r = \sqrt{x^2 + y^2 + z^2} is the distance from the spacecraft to the center of the celestial body.
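The dynamics of Eq. (1) can be integrated with any general-purpose propagator. The following is a minimal Python sketch, assuming SciPy is available; the numerical values of \mu and J_2 for Jupiter are indicative, since only R_J = 71492 km is quoted explicitly later in this paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 1.26686534e8   # km^3/s^2, Jovian gravitational parameter (indicative value)
    RJ = 71492.0        # km, Jovian mean equatorial radius
    J2 = 0.014736       # Jovian oblateness coefficient (indicative value)

    def j2_dynamics(t, s):
        """Right-hand side of Eq. (1): two-body acceleration plus the J2 term."""
        x, y, z, vx, vy, vz = s
        r2 = x * x + y * y + z * z
        r = np.sqrt(r2)
        k = 1.5 * J2 * (RJ * RJ) / r2             # (3/2) J2 (R/r)^2
        c_xy = 1.0 + k * (1.0 - 5.0 * z * z / r2)
        c_z = 1.0 + k * (3.0 - 5.0 * z * z / r2)
        f = -MU / r**3
        return [vx, vy, vz, f * x * c_xy, f * y * c_xy, f * z * c_z]

    def propagate(r0, v0, tof):
        """Propagate a state [r0, v0] for a time of flight tof (seconds)."""
        sol = solve_ivp(j2_dynamics, (0.0, tof), np.hstack((r0, v0)), rtol=1e-10, atol=1e-10)
        return sol.y[:3, -1], sol.y[3:, -1]       # terminal position and velocity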
164
- B. Shooting Method for the J2-perturbed Lambert Problem
165
- The classical Lambert problem (or Keplerian Lambert problem in the following) considers only an unperturbed
166
- two-body dynamics [12]. However, perturbations can induce a sig nificant deviation of the actual trajectory from the
167
- solution of the Keplerian Lambert problem. One way to take pert urbations into account is to propagate the dynamics
168
- in Eqs. (1) and use a standard shooting method for the solution of two-point boundary value problems.
169
Fig. 1 depicts the problem introduced by orbit perturbations. The solution of the Keplerian Lambert problem, dashed line, provides an initial velocity v_0. Because of the dynamics in Eq. (1), the velocity v_0 corresponds to a difference \Delta r_{f0} = r_f - r_{f0} between the desired terminal position r_f and the propagated one r_{f0}, when the dynamics is integrated forward in time, for a period tof, from the initial conditions [r_0, v_0]. In order to eliminate this error, one can use a shooting method to calculate a correction \Delta v to v_0. Fig. 1 shows an example with two subsequent varied velocity vectors v_i and the corresponding terminal conditions.

[Fig. 1 Illustration of the shooting method based on Newton's iteration algorithm for the J2-perturbed Lambert problem]
185
- As mentioned in the introduction, shooting methods have been ex tensively applied to solve the perturbed
186
- Lambert problem. Different algorithms have been proposed in the literature to improve both computational
187
- efficiency and convergence, e.g. the Picard iteration [11] and the Newton’s iteration [23]. In this section, the
188
standard shooting method based on Newton's algorithm is presented [23]. Given the terminal position r_{fi} = [x_i, y_i, z_i]^T and the initial velocity v_i = [v_{xi}, v_{yi}, v_{zi}]^T at the i-th iteration, the shooting method requires the Jacobian matrix:

H_i = \begin{bmatrix}
\partial x_i / \partial v_{xi} & \partial x_i / \partial v_{yi} & \partial x_i / \partial v_{zi} \\
\partial y_i / \partial v_{xi} & \partial y_i / \partial v_{yi} & \partial y_i / \partial v_{zi} \\
\partial z_i / \partial v_{xi} & \partial z_i / \partial v_{yi} & \partial z_i / \partial v_{zi}
\end{bmatrix}    (2)

to compute the correction term:

\Delta v_i = H_i^{-1} \left( r_f - r_{fi} \right)    (3)

where H_i^{-1} is the inverse of the Jacobian matrix H_i, and r_f is the desired terminal position, as shown in Fig. 1. The corrected initial velocity then becomes v_{i+1} = v_i + \Delta v_i.
211
Here the partial derivatives in the Jacobian matrix are approximated with forward differences. Finite differences are computed by introducing a variation \Delta v = 10^{-6} in the three components of the initial velocity and computing the corresponding variation of the three components of the terminal conditions \Delta r_{ix}, \Delta r_{iy}, and \Delta r_{iz}. Consequently, the Jacobian matrix can be written as follows:

H_i = \left[ \dfrac{\Delta r_i}{\Delta v_x} \;\; \dfrac{\Delta r_i}{\Delta v_y} \;\; \dfrac{\Delta r_i}{\Delta v_z} \right]    (4)
220
- Because of the need to compute the Jacobian matrix in Eq. (2), finite-difference-based shooting methods need to
221
- perform at least three integrations for each iteration. Further more, if the accuracy of the calculation of the Jacobian
222
- matrix in Eq.(2) is limited, this algorithm could fail to conve rge to the specified accuracy or diverge, which is a
223
- common situation if the time of flight is long (e.g., tens of r evolutions). Homotopy techniques are an effective way
224
- to improve the convergence of standard shooting methods for MRP LP but still require an initial guess to initiate the
225
- homotopy process and can require the solution of multiple two-p oint boundary value problems over a number of
226
- iterations. Here a DNN is employed to globally map the change i n the initial velocity to the variation of the terminal
227
- position for a variety of initial state vectors and transfer ti mes. This mapping allows one to generate a first guess for
228
the initial velocity change \Delta v_i by simply passing the required initial state, transfer time and terminal condition as
229
- input to the DNN.
230
- In the following, we will present how we trained the DNN to gen erate good first guesses to initiate a standard
231
- shooting method. We will show that an appropriately trained DNN can generate initial guesses that provide
232
- improved convergence of the shooting method even for multi-revo lution trajectories. It will be shown that the use of
233
- this initial guess improves the robustness of convergence of a standard shooting method and makes it significantly
234
faster than the homotopy method in [15].
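The finite-difference shooting iteration of Eqs. (2)-(4) can be sketched as follows. This reuses the propagate helper from the sketch in Section II.A and is an illustration rather than the authors' implementation.

    def shooting_correction(r0, v_guess, rf, tof, tol=1e-3, max_iter=2000, dv=1e-6):
        """Newton shooting on the initial velocity; Jacobian built by forward differences (Eq. 4)."""
        v = np.array(v_guess, dtype=float)
        for _ in range(max_iter):
            r_end, _ = propagate(r0, v, tof)
            err = rf - r_end                       # terminal position error r_f - r_fi
            if np.linalg.norm(err) < tol:          # tolerance in km
                return v, True
            H = np.zeros((3, 3))
            for j in range(3):                     # one extra propagation per velocity component
                v_pert = v.copy()
                v_pert[j] += dv
                r_pert, _ = propagate(r0, v_pert, tof)
                H[:, j] = (r_pert - r_end) / dv    # column of Eq. (4)
            v += np.linalg.solve(H, err)           # Delta v_i = H_i^{-1} (r_f - r_fi), Eq. (3)
        return v, False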
235
- III. Sample Learning Feature Analysis
236
- DNN consists of multiple layers of neurons with a specific arch itecture, which is an analytical mapping from
237
- inputs to outputs once its parameters are given. The typical st ructure of DNN and its neuron computation is
238
- illustrated in Fig. 2. The output of each neuron is generated f rom the input vector x, the weights of each component
239
- w, the offset value b, and the activation function y=f(x). The inputs are provided according to the specific problem or
240
- the outputs of the neurons of the previous layer. The weight an d offset values are obtained through the sample
241
- training. The activation function is fixed once the network is built. The training process includes two steps: the
242
- forward propagation of the input from the input layer to the ou tput layer; and then the back propagation of the output
243
- error from the output layer to the input layer. During this pro cess, the weight and the offset between adjacent layers
244
- are adjusted or trained to reduce the error of the outputs.
245
s = \sum_i w_i x_i + b, \qquad y = f(s)

[Fig. 2 The diagram of the DNN structure and neuron computation]
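As a concrete illustration of the neuron computation above, a forward pass through a stack of fully connected layers can be written in a few lines (a generic sketch, not the paper's TensorFlow code):

    def dense_forward(x, weights, biases, f=np.tanh):
        """Forward propagation: every neuron computes y = f(w . x + b), layer by layer."""
        a = np.asarray(x, dtype=float)
        for W, b in zip(weights, biases):
            a = f(W @ a + b)
        return a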
249
The ability of a DNN to return a good initial guess depends highly on the representation and quality of the samples used to train the network. High-quality samples can not only improve the output accuracy of the network, but also reduce the training cost. Therefore, in the following, we present the procedure used to generate samples with the appropriate features.
253
- A. Definition of Sample Form and Features
254
In this work two groups of sample forms have been considered: one has the initial velocity v_0 solving the J2-perturbed Lambert problem as output, the other has the velocity correction \Delta v_0 to an initial guess of v_0 as output.
For the first group of sample forms, the input to the neural network includes the known initial and terminal positions r_0, r_f and the time of flight tof. The output is only the initial velocity v_0, as the terminal velocity can be obtained through orbital propagation once the initial velocity is solved. This type of sample form is defined as

S_v = \{ (r_0, r_f, tof), \; v_0 \}    (5)

where the subscripts 0 and f denote the start and end of the transfer trajectory, respectively. Thus, when trained with the sample form in Eq. (5), the DNN is used to build a functional relationship between (r_0, r_f, tof) and v_0.
The second group of sample forms was further divided in two subgroups. One uses the initial state of the spacecraft r_0, the time of flight tof and the terminal error \Delta r_f as input, and the other uses the initial state r_0, the time of flight tof, the terminal position error \Delta r_f and the initial velocity vector from the Keplerian solution v_d as inputs. These two sample forms are defined as follows:

S_{dv1} = \{ (r_0, \Delta r_f, tof), \; \Delta v_0 \}
S_{dv2} = \{ (r_0, v_d, \Delta r_f, tof), \; \Delta v_0 \}    (6)

In Eq. (6) the output \Delta v_0 is always the initial velocity correction \Delta v_0 = v_0 - v_d, in which v_0 is the initial velocity that solves the J2-perturbed Lambert problem. Thus, when trained with sample forms S_dv1 and S_dv2, the DNN realizes a mapping between \Delta v_0 and (r_0, \Delta r_f, tof) or (r_0, v_d, \Delta r_f, tof) respectively. The difference between S_dv1 and S_dv2 is whether the input includes the initial velocity v_d that is necessary for solving the Jacobian matrix. Therefore, it is theoretically easier to obtain the desired mapping with the input including the initial velocity, i.e. S_dv2. However, this increases the dimensionality of the sample and might increase the difficulty of training.
281
- For each group of sample forms there are three main ways of par ameterizing the state of the spacecraft: Cartesian
282
- coordinates, spherical coordinates and the mean orbital element s. Cartesian coordinates provide a general and
283
- straightforward way to describe the motion of a spacecraft but state variables change significantly over time even for
284
- circular orbits with no orbital perturbations. Spherical coordi nates can provide a more contained and simpler
285
- variation of the state variables but are singular at the poles. Double averaged mean orbital elements present no
286
- variation of semimajor axis, eccentric and inclination due to J 2 and a constant variation of argument of the perigee
287
- and right ascension of the ascending node [24]. Which parameter ization to choose for the training of the DNN will
288
- be established in the remainder of this section. The structures of Eqs. (5) and (6) expressed in terms of these three
289
- coordinate systems are as follows:
290
S_{v-Car} = \{ (x_0, y_0, z_0, x_f, y_f, z_f, tof), \; (v_{0x}, v_{0y}, v_{0z}) \}
S_{v-Sph} = \{ (r_0, \theta_0, \varphi_0, r_f, \theta_f, \varphi_f, tof), \; (v_0, \theta_{v0}, \varphi_{v0}) \}
S_{v-OEm} = \{ (oe_0, oe_f, tof), \; v_0 \}
S_{dv1-Car} = \{ (x_0, y_0, z_0, \Delta x_f, \Delta y_f, \Delta z_f, tof), \; (\Delta v_{0x}, \Delta v_{0y}, \Delta v_{0z}) \}
S_{dv1-Sph} = \{ (r_0, \theta_0, \varphi_0, \Delta r_f, \Delta\theta_f, \Delta\varphi_f, tof), \; (\Delta v_0, \Delta\theta_{v0}, \Delta\varphi_{v0}) \}
S_{dv2-Car} = \{ (x_0, y_0, z_0, v_{dx}, v_{dy}, v_{dz}, \Delta x_f, \Delta y_f, \Delta z_f, tof), \; (\Delta v_{0x}, \Delta v_{0y}, \Delta v_{0z}) \}
S_{dv2-Sph} = \{ (r_0, \theta_0, \varphi_0, v_d, \theta_{vd}, \varphi_{vd}, \Delta r_f, \Delta\theta_f, \Delta\varphi_f, tof), \; (\Delta v_0, \Delta\theta_{v0}, \Delta\varphi_{v0}) \}
S_{dv2-OEm} = \{ (oe_d, \Delta r_f, tof), \; \Delta v_0 \}    (7)
343
where the subscripts Car, Sph and OEm denote the Cartesian coordinates, the spherical coordinates and the mean orbital elements, respectively. x, y, and z are the Cartesian coordinates of the position vector; r, \theta, and \varphi are the distance, azimuth, and elevation angle of the position vector in the spherical coordinate system; oe = [a, e, i, \Omega, \omega, M]^T represents the mean orbital elements.
347
- B. Performance Analysis of Different Sample Forms
348
- In this section the performance of the eight sample forms defin ed in Eq.(7) is assessed in order to identify the
349
- best one to train the DNN. We always generate a value for the i nitial conditions starting from an initial set of orbital
350
- elements. Values of the orbital parameters for each sample are randomly generated with the rand function in
351
- MATLAB using a uniform distribution over the intervals defined in Table 1. Note that semimajor axis and
352
- eccentricity are derived from the radii of the perijove and apo jove. Considering the strong radiation environment of
353
- Jupiter and the distribution of Galilean moons, we want to limi t the radius of the pericentre rp of the initial orbit of
354
- each sample to be in the interval [5 RJ, 30RJ], where RJ = 71492 km is the Jovian mean radius. The value of the
355
- inclination is set to range in the interval [0, 1] radians. The time of flight does not exceed one orbital period T, which
356
- is approximately calculated using the following formula
357
T = 2\pi \sqrt{a^3 / \mu_J}    (8)
359
- where a is the semi-major axis, a = ( ra + rp ) / 2.
360
- Table 1 Parameters’ ranges of the sample
361
- Parameters Range
362
- Apojove radius ra (×RJ) [ rp, 30]
363
- Perijove radius rp (×RJ) [5, 30]
364
- inclination (rad) [0, 1]
365
- RAAN (rad) [0, 2)
366
- Argument of perigee (rad) [0, 2)
367
- Mean anomaly (rad) [0, 2)
368
tof (T) (0, 1)
369
- The following procedure is proposed to efficiently generate a l arge number of samples without solving the
370
- J2-perturbed Lambert problem:
371
- Step 1: The initial state [ r0, v0] and time of flight tof are randomly generated.
372
- Step 2: The terminal state [ rf, vf] is obtained by propagating the initial state [ r0, v0] under the J2 perturbation
373
- dynamics model, for the propagation period tof.
374
- Step 3: The Keplerian solution vd is solved from the classical Lambert problem with the initial and terminal
375
- position r0, rf and flight time tof.
376
- Step 4: The end state [ rfd, vfd] is obtained by propagating the initial Keplerian state [ r0, vd] under the J2
377
- perturbation dynamics model, and for the propagation period tof.
378
Step 5: The initial velocity correction \Delta v_0 and the end state error \Delta r_f are computed with \Delta v_0 = v_0 - v_d and \Delta r_f = r_f - r_{fd}.
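A compact sketch of Steps 1-5 is given below. It reuses the propagate helper sketched in Section II; sample_orbital_elements, elements_to_cartesian and kepler_lambert are hypothetical helpers standing in for a random-element generator over the ranges of Table 1, an element-to-Cartesian conversion and any Keplerian Lambert routine.

    def orbital_period(a):
        """Eq. (8): approximate orbital period from the semi-major axis a (km)."""
        return 2.0 * np.pi * np.sqrt(a**3 / MU)

    def generate_sample(rng):
        """Steps 1-5: one training pair without solving the J2-perturbed Lambert problem."""
        oe = sample_orbital_elements(rng)          # Step 1 (hypothetical helper, ranges of Table 1)
        r0, v0 = elements_to_cartesian(oe)         # hypothetical helper
        tof = rng.uniform(0.0, 1.0) * orbital_period(oe[0])
        rf, vf = propagate(r0, v0, tof)            # Step 2: J2-perturbed propagation
        vd = kepler_lambert(r0, rf, tof)           # Step 3: Keplerian Lambert solution (hypothetical helper)
        rfd, _ = propagate(r0, vd, tof)            # Step 4: propagate the Keplerian guess under J2
        dv0 = v0 - vd                              # Step 5: initial velocity correction
        drf = rf - rfd                             #         and terminal position error
        return (r0, vd, drf, tof), dv0             # an S_dv2-type sample before coordinate conversion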
380
- Using these five steps, we generated 100000 samples and then gr ouped them in the eight sample forms given in
381
- Eq.(7). Before training, a preliminary learning feature analysi s is performed on the distribution of sample data and
382
- the correlation between the inputs and the output. Specifically , the mean, standard deviation, and magnitude
383
- difference coefficients are used to describe the distribution o f the data, and the Pearson correlation coefficient is
384
- chosen to evaluate the correlation of the data. Their mathemati cal definitions are given as follows
385
\bar{X} = \dfrac{1}{n} \sum_{j=1}^{n} X_j, \qquad
\sigma = \sqrt{ \dfrac{1}{n} \sum_{j=1}^{n} \left( X_j - \bar{X} \right)^2 }, \qquad
\lambda = \log \dfrac{\max |X_j|}{\min_{X_j \neq 0} |X_j|}    (9)
408
where \bar{X} and \sigma are the mean and standard deviation of the data, respectively, and n is the total number of data points. \lambda denotes the magnitude difference coefficient that assesses the internal diversity of the data.
410
- The statistical characteristics of the variables in the sample are given in Table 2. For the variables described in
411
- Cartesian coordinate, the mean values are close to 0 but the st andard deviations are generally large. Furthermore,
412
- their magnitude difference coefficients are all more than 5, wh ich indicate a large difference in the absolute values
413
- of the variables. For the variables described in spherical coor dinate, the most of their standard deviations are less
414
- than these described in the Cartesian coordinate. In addition, the magnitude difference coefficients of the magnitude
415
- of the position and velocity vectors are less than 1. The varia bles with smaller standard deviation have better
416
- performance in the training process. Therefore, the samples wit h the variables represented in spherical coordinate
417
- are easier to learn than those described in Cartesian coordinat es.
418
- Table 2 The statistical distributi ons of the variables in the s amples
419
- Parameters
420
- of sample Mean Standard deviations Magnitude difference
421
- coefficients
422
- r0-Car [-0.014; 0.087; 0.001] [11.424; 11.424; 1.145] [5.125; 5.949; 7.077]
423
- r0-Sph [14.954; 0.007229; 0.000193] [6.221; 1.815; 0.070] [0.777; 4.9 26; 6.607]
424
- rf-Car [0.001; -0.031; -0.005] [12.438; 12.469; 1.246] [5.094; 4.371; 6.442]
425
- rf-Sph [16.503; -0.005966; -0.000386] [6.275; 1.813; 0.070] [0.777; 4 .454; 6.321]
426
- v0-Car [-0.088351; -0.032116;
427
- -0.006130] [8.916; 8.887; 0.895] [5.450; 5.185; 6.338]
428
- v0-Sph [12.082577; -0.002034;
429
- -0.000529] [3.647; 1.821; 0.071] [0.773; 5.257; 5.851]
430
- oe0 [15.771; 0.257895; 0.087045;
431
- 3.148016; 3.137227; 3.138286] [5.225; 0.177; 0.050;
432
- 1.813;1.812;1.816] [0.774; 5.273; 4.145;
433
- 4.653; 5.422; 4.634]
434
- oef [15.771; 0.257850; 0.087045;
435
- 3.147935; 3.137600; 3.151625] [5.225; 0.177; 0.050;
436
- 1.813; 1.812; 1.528] [0.774; 4.721; 4.145;
437
- 5.917; 5.228; 5.163]
438
- vd-Car [-0.087568; -0.031006;
439
- -0.006340] [8.915; 8.886; 0.895] [5.654; 5.578; 6.673]
440
- vd-Sph [12.081729; -0.001599;
441
- -0.000538] [3.647; 1.821; 0.071] [0.774; 5.651; 6.658]
442
- f-Carr [-3.162; -11.384; -0.075] [1369.838; 1395.080;
443
- 187.322] [10.495; 10.828; 11.371]15
444
- f-Sphr [1154.249; -0.004; -0.001] [1589.222; 1.817; 0.135] [10.283; 4. 831; 8.576]
445
- oed [15.769; 0.258264; 0.087096;
446
- 3.147709; 3.136805; 3.139263] [5.226179; 0.177; 0.051;
447
- 1.813; 1.812; 1.814] [9.556; 4.800; 5.049;
448
- 5.240; 5.927; 4.639]
449
- tof 4.023 3.220 5.481
450
- 0-Carv [-0.000782; -0.001109; 0.000210] [0.326; 0.283; 0.063] [9.917; 1 0.432; 10.471]
451
- 0-Sphv [0.013321; 0.003913; 0.002884] [0.436; 1.818; 0.503] [8.948; 4. 672; 5.924]
452
- It is also known that the learning process is easier if the cor relation between the input and output of the sample is
453
- stronger. Here the Pearson correlation coefficient is used to d escribe this correlation and is defined as follows
454
R_{XY} = \dfrac{1}{n} \sum_{j=1}^{n} \dfrac{\left( X_j - \bar{X} \right) \left( Y_j - \bar{Y} \right)}{\sigma_X \sigma_Y}    (10)

where n is the total number of sample data, \bar{Y} and \sigma_Y represent the mean and standard deviation of the data Y, and \bar{X} and \sigma_X denote the mean and standard deviation of the data X.
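The screening statistics of Eqs. (9) and (10) are straightforward to evaluate per variable; a short sketch, assuming a base-10 logarithm for the magnitude-difference coefficient, is:

    def feature_statistics(X):
        """Mean, standard deviation and magnitude-difference coefficient (Eq. 9) of one variable."""
        X = np.asarray(X, dtype=float)
        nonzero = np.abs(X[X != 0.0])
        lam = np.log10(nonzero.max() / nonzero.min())
        return X.mean(), X.std(), lam

    def pearson(X, Y):
        """Pearson correlation coefficient (Eq. 10) between an input and an output variable."""
        X, Y = np.asarray(X, float), np.asarray(Y, float)
        return np.mean((X - X.mean()) * (Y - Y.mean())) / (X.std() * Y.std())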
464
- The matrix of the Pearson correlation coefficients of the propo sed sample’s inputs and outputs are given in Table
465
- 3. The elements of Pearson correlation coefficients matrix are the correlation coefficient between the corresponding
466
- input and output variables. The signs of the elements indicate positive and negative correlations, respectively. The
467
- absolute values of elements represent the strength of correlati on. The greater the absolute value is, the stronger the
468
- correlation is.
469
- Table 3 The matrix of the Pearson correlation coefficients of t he input and output for different sample forms
470
- Sample
471
- Forms Pearson correlation coefficients matrix
472
- Sv-Car 0.003 0.004 0.000 0.002 0.002
473
- 0.002 0.003 0.001 0.001 0.003
474
- 0.005 0.000 0.002 0.001 0.002 0.000 0.002 
475
- 
476
-   
477
- 
478
- -0.764 0.126
479
- 0.764 -0.122
480
- Sv-Sph 0.005 0.001 0.003 0.000
481
- 0.002 0.000 0.004 0.000 0.001 0.003
482
- 0.001 0.001 0.002 0.003 0.002 0.001 0.003
483
- 
484
-   
485
- 
486
- -0.898 -0.459 -0.344
487
- -0.116
488
- Sv-OEm 0.002 0.002 0.002 0.000 0.002 0.003 0.002 0.001
489
- 0.002 0.000 0.000 0.002 0.003 0.007 0.002 0.000 0.000 0.002 0.003 0.001 0.0 03
490
- 0.002 0.001 0.007 0.004 0.002 0.001 0.002 0.001 0.007 0.004 0.000 
491
-  -0.587 0.291 -0.587 0.291 -0.344
492
- 0.008 0.003
493
- 
494
- 16
495
- Sdv1-Car 0.006 0.002 0.002
496
- 0.004 0.000 0.002
497
- 0.003 0.006 0.003 0.004 0.004
498
-  
499
- 
500
- 
501
- -0.011 -0.049 -0.053 -0.046
502
- 0.010 0.037 -0.041 -0.013
503
- 0.011 -0.090
504
- Sdv1-Sph 0.003 0.005 0.004 0.002
505
- 0.002 0.000 0.002 0.000 0.001
506
- 0.001 0.002 0.001 0.001 0.004
507
- 
508
- 
509
- 
510
- -0.025 0.081 0.010
511
- 0.377 0.254
512
- 0.512 0.045
513
- Sdv2-Car 0.006 0.002 0.000 0.002
514
- 0.004 0.000 0.002 0.002
515
- 0.003 0.006 0.003 0.004 0.004 0.004
516
-   
517
- 
518
- 
519
- -0.011 -0.017 -0.014 -0.049 -0.053 -0.046
520
- 0.010 0.011 -0.012 0.037 -0.041 -0.013
521
- 0.010 -0.040 0.011 -0.090
522
- Sdv2-Sph - . 0.003 -0.005 . 0.004 -0.005 . 0.004 -0.002 .
523
- -0.002 . 0.000 0.005 . -0.001 0.002 . 0.000 -0.001
524
- -0.001 -0.002 . 0.003 0.004 . 0.001 -0.001 . 0.004      
525
-       
526
-             0 025 0 032 0 081 0 010
527
- 0 377 0 259 0 254
528
- 05 1 2 02 9 7 00 4 5
529
- Sdv2-OEm 0.001 0.004 0.001 0.004 0.002
530
- 0.002 -0.003 0.002 0.004 0.002 0.001 0.002 0.000 0.001
531
- 0.000 0.001 0.002 0.003 0.002 0.001 0.001 0.004 
532
-
533
-  
534
- 
535
- -0.019 0.082 0.076 0.081 0.010
536
- 0.254
537
- 0.010 0.045
538
- First, it is seen that most elements of the matrix are less tha n 0.01, indicating the correlations between the inputs
539
- and the outputs are generally weak. Second, for the first three sample forms of Table 3, the absolute values of all
540
- elements for some rows are less than 0.01. This means that some components of the output variable are in
541
- weak-correlation with all input variables, and hence the mappin g from these output components to the input
542
- variables is very difficult to capture. Therefore, samples with the initial velocity as output, i.e. Sv-Car, Sv-Sph, and
543
- Sv-OEm, are not deemed to be ideal for the training of the neural net work. Third, by comparing the matrix listed in
544
- rows 4 to 7 of Table 3, the absolute values of the elements for the samples described in Cartesian coordinates are
545
- smaller than those for the samples described in spherical coord inates. Furthermore, for the samples in spherical
546
- coordinates, it is seen that the submatrix of each input variab le in the Pearson correlation coefficients matrix is a
547
- diagonally dominant matrix, where the elements with large absol ute values for each input variable are distributed in
548
- different rows and columns, and are independent. Therefore, the samples described in the spherical coordinate have
549
- better learning features and performance due to the strong corr elations. Additionally, for Sdv2-Sph that includes the
550
- Keplerian solution vd as one of the inputs, the correlation with the initial velocit y correction 0v i s [0.032 , 0.004, 17
551
- -0.005; 0.005, 0.259 , -0.001; 0.003, 0.004, 0.297 ], which is diagonally dominant with large diagonal values, whi ch
552
- demonstrates that the Keplerian solution is an important input. Finally, for the sample in the mean orbital elements
553
- in the last row of Table 3, the matrix only contains a few elem ents whose absolute values are greater than 0.01, and
554
- most of them are distributed in the first row. The mean variati ons of semimajor axis, eccentricity and inclination are
555
- not affected by the J2 perturbation but only by the variation o f the initial velocity. Therefore, only the first row in the
556
- matrix displays larger values. In addition, the elements in the first six columns of the Pearson correlation matrix of
557
- Sdv2-OEm are generally smaller than others in Table 3, because the outp uts of the sample is the initial velocity
558
- correction, which is calculated using the osculating orbital el ements that contain both the long and the short term
559
- effects of the J2 perturbation. Thus the correlation using the mean orbital elements is moderate. This would suggest
560
- that the sample Sdv2-Sph is the best option for the training of the DNN among the eight tested sample forms. We will
561
- now quantify the training performance for each of the eight sam ple forms by comparing the training convergence of
562
- a given DNN. It has to be noted that the structure of the DNN p lays a role as well. For example, a high dimensional
563
- sample with more variables needs a larger size DNN with more la yers and neurons. However, we argue that, since
564
- the sample form selection mainly depends on the problem and the dynamics, a better sample form will have better
565
- training performance than other sample forms given the same DNN structure. For this reason, it is reasonable to
566
- compare sample forms even on DNN structures that are not optima l. The effect of the structure of DNN on the
567
- training performance will be discussed in section V.
568
- Some data pretreatment is necessary to facilitate the training process and improve the prediction accuracy.
569
- Standardization, normalization and logarithms are used to pre-p rocess data with large ranges or magnitude
570
- differences. Tests in this section were performed using a four- layer fully connected DNN with 50 neurons per
571
hidden layer. The activation functions of the hidden layers and the output layer are all Tanh. The Adaptive moment estimation algorithm (Adam) [25] was employed for the optimization. The construction and training of the DNN are based on the Python implementation of TensorFlow. The maximum epoch (or number of times that the algorithm works through the entire training dataset) was set to 10000 and the initial learning rate was set to 0.001. During the training process, the variations of the mean square error (MSE) between the output of the neural network and the output of the sample for different sample forms are given in Fig. 3. The mathematical expression of the MSE is:

MSE = \dfrac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2    (11)

where n is the number of samples, and \hat{y}_i and y_i are the output predicted by the DNN and the true output respectively. Here the MSE has no units because the data has been normalized before training.

[Fig. 3 The training convergence history for different sample forms]

From Fig. 3 one can see that the MSE of the neural network with the initial velocity correction as the output is significantly smaller than that with the initial velocity as the output. This is because the initial velocity in the sample has a larger range of values and therefore has a more scattered distribution. Also, the MSE of sample Sdv1-Sph is an order of magnitude higher than that of sample Sdv2-Sph. Therefore, the accuracy of predicting the initial velocity correction is effectively improved by including the Keplerian velocity in the input of the sample. The blue line in Fig. 3 has obvious fluctuations due to the weak correlations between the output and the input of Sv-OEm, as shown in Table 3. Finally, the training results of the samples in spherical coordinates are better than those in Cartesian coordinates, which is consistent with the conclusions drawn in previous sections.
In summary, for the J2-perturbed Lambert problem, the samples described in spherical coordinates appear to be more suitable for the training of a DNN. In fact, among all eight sample forms, the sample form Sdv2-Sph yielded the best learning convergence, given the initial position, Keplerian velocity, the terminal position error of the Keplerian solution and time of flight as inputs and the initial velocity correction as output. Therefore, in the remainder of this paper, the Sdv2-Sph sample form is selected for the training of the DNN.
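For reference, the training setup used in this section (four hidden layers of 50 Tanh neurons, a Tanh output layer, Adam with a 0.001 learning rate and an MSE loss) corresponds to a few lines of tf.keras. The code below is a sketch under the assumption that X and Y hold the normalized S_dv2-Sph inputs (ten values per sample) and outputs (three values per sample).

    import tensorflow as tf

    def build_sdv2_model(n_outputs=3):
        """Fully connected network used for the sample-form comparison (all-Tanh activations)."""
        model = tf.keras.Sequential(
            [tf.keras.layers.Dense(50, activation="tanh") for _ in range(4)]
            + [tf.keras.layers.Dense(n_outputs, activation="tanh")]
        )
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
        return model

    # model = build_sdv2_model()
    # history = model.fit(X, Y, epochs=10000, batch_size=256, validation_split=0.1)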
683
- IV. Solution of the J2-perturbed La mbert Problem Using DNN
684
- The proposed solution algorithm (see the flow diagram in Fig. 4 ) is made of an Intelligent initial Guess
685
- Generator (IGG) and a Shooting Correction Module (SCM). The DNN is used in the IGG to estimate the correction
686
- of the Keplerian solution and provide an initial guess to the s hooting module. The shooting method discussed in part
687
B of Section II is employed in the SCM to converge to the required accuracy.

[Fig. 4 The flow chart of the proposed J2-perturbed Lambert problem solver]

As shown in Fig. 4, first the Keplerian Lambert problem is solved with the desired initial and final position vectors. Then the initial conditions [r_0, v_d] are propagated forward in time under the effect of J2 to obtain the terminal position error \Delta r_{fd}. With this error, the initial velocity correction is calculated using the trained DNN. The sample form and the generation method of the samples are described in Section III. Then the finite difference-based shooting method in Section II is applied to correct the initial velocity to make the terminal position meet the rendezvous constraint. The Jacobian matrix is calculated according to Eq. (4), where the partial derivative is approximated with the difference quotient to reduce the computational load.
The method proposed here performs a total of 4i+2 numerical propagations to obtain the Jacobian matrix and the terminal state, where i is the number of iterations. Additionally, one solution of the Keplerian Lambert problem and one call to the DNN are necessary to obtain the initial velocity guess. Therefore, the calculation time of the proposed method mainly depends on the SCM. As it will be shown in the next section, the initial guess provided by the IGG is close enough to the final solution that the number of iterations required by the SCM to converge to the required accuracy is significantly reduced.
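Putting the two modules together, the solver of Fig. 4 reduces to a short pipeline. The sketch below reuses the propagate and shooting_correction helpers introduced earlier, while kepler_lambert, encode_sdv2_sample and decode_correction are hypothetical stand-ins for a Keplerian Lambert routine and for the spherical-coordinate normalization used by the DNN.

    def solve_j2_lambert(r0, rf, tof, model):
        """IGG + SCM: Keplerian guess, DNN correction, then finite-difference shooting."""
        vd = kepler_lambert(r0, rf, tof)              # Keplerian Lambert solution (hypothetical helper)
        rfd, _ = propagate(r0, vd, tof)               # terminal error of the Keplerian guess under J2
        drf = rf - rfd
        x = encode_sdv2_sample(r0, vd, drf, tof)      # hypothetical helper: normalized spherical inputs
        dv0 = decode_correction(model.predict(x[None, :], verbose=0)[0])   # hypothetical helper
        v_guess = vd + dv0                            # intelligent initial guess (IGG output)
        return shooting_correction(r0, v_guess, rf, tof)   # refined by the SCM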
791
- V. Case Study of Jupiter Scenario
792
In this section, taking the Jovian system as an example, some numerical simulations are performed to demonstrate the effectiveness and efficiency of the proposed J2-perturbed Lambert solver. Firstly, different network structures and training parameters are tested to find the optimal ones for this application. Then, we simulate the typical use of the proposed solver with a Monte Carlo simulation whereby a series of transfer trajectories are computed starting from a random set of boundary conditions and transfer times. Note that although the tests in this section use the J2, \mu and R of Jupiter, the proposed method can be generalized to other celestial bodies by training the corresponding DNNs with a different triplet of values J2, \mu and R, but using the same sample form.
A. DNN Structure Selection and Training
With reference to the results in Section III, the samples used to train the DNN include the initial position, the initial velocity coming from the solution of the Keplerian Lambert problem, the terminal position error of the Keplerian solution, and the time of flight. The output is the initial velocity correction of the Keplerian solution, and all vectors in a sample are expressed in spherical coordinates. In order to generalize the applicability of this method, the ranges of the parameters of the sample given in Table 1 have been appropriately expanded. The range of orbital inclinations is [0, \pi] in radians. The range of times of flight is now in the open interval (0, 10 T), where T is calculated using Eq. (8) from the initial state (r_0, v_0). The ranges of other parameters are consistent with Table 1. In total,
807
200000 training samples are obtained using the rapid sample generation algorithm given in part B of Section III.
Since the structure and training parameters of the neural network also have a significant impact on the training results, in this section we analyzed different DNN structures and settings. Note that once the structure is optimized one would need to loop back and check the optimality of the sample form; however, in this paper we assume that the sample form remains reasonably good even once the DNN structure is changed.
We start by defining the activation functions. Tanh and ReLU are the common activation functions for deep learning, while Sigmoid functions are less used because the gradient tends to vanish [26]; thus in the following Tanh and ReLU will be used. The output ranges of Tanh and ReLU are [-1, 1] and [0, \infty) respectively, as shown in Fig. 5. The spherical coordinates (magnitude, azimuth, and elevation) of the output of the sample are [0, \infty), [0, 2\pi] and [-0.5\pi, 0.5\pi]. Because the range of the elevation angle can be transformed from [-0.5\pi, 0.5\pi] to [0, \pi], the ranges of the three components of the spherical coordinates can all meet the requirements of ReLU. Therefore, ReLU is chosen as the activation function of the output layer.

[Fig. 5 The typical activation functions for DNN]

Also in this case the Adaptive moment estimation is used as optimizer. The maximum epoch is 50000 and the other training parameters are the same as in Section III. The training results of DNNs with different sizes are listed in Table 4.
924
- Table 4 Training results of DNNs with different sizes
925
Hidden layers   Neurons per hidden layer   Activation function   MSE   Training time (s)
930
- 2 20 ReLU 9.423e-05 762
931
- Tanh 3.286e-05 839
932
- 50 ReLU 1.435e-05 951
933
- Tanh 1.226e-05 1084
934
- 100 ReLU 9.423e-06 1210
935
- Tanh 9.163e-06 1425
936
- 3 20 ReLU 2.423e-06 1198
937
- Tanh 2.154e-06 1267
938
- 50 ReLU 1.315e-06 1347
939
- Tanh 1.258e-06 1523
940
- 100 ReLU 5.631e-06 1746
941
- Tanh 1.226e-06 1935
942
- 4 20 ReLU 9.423e-06 1648
943
- Tanh 3.286e-06 1864
944
- 50 ReLU 7.522e-07 1977
945
- Tanh 4.816e-07 2186
946
- 100 ReLU 6.395e-05 2361
947
- Tanh 2.861e-05 2643
948
The neural network with the minimum MSE has 4 hidden layers, each with 50 neurons. The activation function of its hidden layers is Tanh. Additionally, some conclusions can be made from Table 4. Firstly, the networks with ReLU as the activation function take less time for training. Secondly, the networks with Tanh as the activation function achieve smaller MSEs. Thirdly, the network with 4 hidden layers and 100 neurons in each hidden layer has overfitted during the training process.
The variation of the MSE of the neural network with 4 hidden layers and with 50 neurons for each layer is shown in Fig. 6. The MSE finally converges to 4.816e-07, which transforms into the mean absolute error (MAE) of the DNN's
956
- trained s a
957
- of the tr a
958
- terminal p
959
- of the tra i
960
-
961
- Fig. 7
962
- DNN. [Δ v
963
- Kepleria n
964
- the termi n
965
- points in
966
- also redu c
967
- After the der to verify t
968
- amples were r
969
- ained DNN. T
970
- position rf are
971
- ined DNN ( vc,
972
- 7 and Fig. 8 s
973
- v0dx; Δv0dy; Δv
974
- n solutions, re s
975
- nal position a
976
- Fig. 7 and Fi g
977
- ced significa n
978
- correction b yFig. 6 M S
979
- the predictio n
980
- randomly reg e
981
- The initial vel o
982
- used as refer e
983
- , rfc) are calcu
984
- show the co m
985
- v0dz] and [Δ rfdx
986
- spectively. [ Δ
987
- after the DN N
988
- g. 8) is much c
989
- ntly after the c
990
- y the DNN, th
991
- SE of the sele c
992
- n accuracy of
993
- enerated with t
994
- ocity v0, whi c
995
- ence values. T
996
- lated as follo w
997
- 0d
998
- 0cv
999
- v
1000
- mparison betw e
1001
- x; Δrfdy; Δrfdz]
1002
- Δv0cx; Δv0cy; Δv
1003
- N’s correc tion
1004
- closer to 0 aft e
1005
- correction, wh
1006
- e initial velo c
1007
- cted DNN du r
1008
- the trained D
1009
- the algorithm
1010
- ch is the exac t
1011
- The errors of t h
1012
- ws
1013
- 0df d
1014
- 0cf c,
1015
- ,
1016
- vv r
1017
- vvr
1018
- een the Kepl e
1019
- are the errors
1020
- v0cz] and [Δ rfcx
1021
- s, respectivel y
1022
- er the DNN’s c
1023
- ich is indicat e
1024
- city error is li m
1025
- ring the trai n
1026
- DNN, 1000 n e
1027
- in part B of S
1028
- t solution of t
1029
- he Keplerian s
1030
- ff d
1031
- ff c
1032
- rr
1033
- rr
1034
- erian solution s
1035
- of the initial v
1036
- x; Δrfcy; Δrfcz]
1037
- y. It can be s
1038
- correction. T h
1039
- ed by the len g
1040
- mited to 10 m
1041
-
1042
- ning process.
1043
- ew samples t h
1044
- Section III to
1045
- the J2-pertur b
1046
- solutions ( vd, r
1047
- s and the app r
1048
- velocity and t h
1049
- are the errors
1050
- een that the m
1051
- he standard de v
1052
- gth of the blue
1053
- m/s, and the te r
1054
- hat are differ e
1055
- examine the p
1056
- bed Lambert p
1057
- rfd) and the ap p
1058
- roximation o f
1059
- he terminal p o
1060
- of the initial v
1061
- mean of thes e
1062
- viation of the s
1063
- bars in Fig. 7
1064
- rminal positio n
1065
- 24ent fro m t h e
1066
- performance
1067
- proble m an d
1068
- proximation
1069
- (12)
1070
- f t h e t r a i n e d
1071
- osition of the
1072
- velocity and
1073
- e errors (red
1074
- se errors has
1075
- 7 and Fig. 8.
1076
- n error does not exce e
1077
- initial va l
1078
- Fig. 7 T
1079
- Fig.
1080
- B. Perfo
1081
- In this s e
1082
- method ued 100 km. T
1083
- lue with respe c
1084
- The statistica l
1085
- . 8 The statis t
1086
- ormance Ana l
1087
- ection the pr o
1088
- using NewtonThis proves th a
1089
- ct to a simple
1090
- l results of th e
1091
- tical results o
1092
- lysis for MR P
1093
- oposed DNN- b
1094
- ’s iteration a l
1095
- at the applic a
1096
- Keplerian La m
1097
- e initial velo c
1098
- of the termin a
1099
- PLP
1100
- based metho d
1101
- lgorithm (SN )
1102
- ation of the D
1103
- mbert solutio n
1104
- city errors of t
1105
- al position er r
1106
- correctio n
1107
- d is compare d
1108
- ) a n d t h e h o m
1109
- DNN has sign i
1110
- n.
1111
- the Kepleria n
1112
- rors of the K e
1113
- n
1114
- d a g a i n s t o t h e
1115
- motopic pertu r
1116
- ificantly imp r
1117
-
1118
- n solution an d
1119
- eplerian solu t
1120
- er two metho d
1121
- rbed Lamber t
1122
- roved the acc u
1123
- d the DNN’s c
1124
- tion and the D
1125
- ds: a traditio n
1126
- t algorith m (H
1127
- 25uracy of the
1128
- correction
1129
- DNN’s
1130
- nal shooting
1131
- HL) in [15]. 26
1132
When applying the HL, the C++ version of the Vinit6 algorithm in literature [27] is employed to implement the HL method in Ref. [15] and to decrease the CPU computation time of the HL. The HL runs in Matlab and a MEX function calls the Vinit6 algorithm, running in the Visual Studio 2015 C++ compiler, to analytically propagate the perturbed trajectory. The accuracy tolerance of the Vinit6 algorithm is set at 1e-12. The homotopy parameter is
1136
- defined as the deviation in the terminal position and other det ails of implementation and settings are the same as
1137
- these given in Ref. [15]. For the SN and the proposed method, t heir dynamical models only include the J2
1138
- perturbation. For the Vinit6 algorithm, the dynamical model inc ludes the J2, J3 and partial J4 perturbations.
1139
- However, the magnitudes of J3 and J4 of Jupiter are much smalle r than that of J2. Their perturbation effects are very
1140
- weak compared with that of J2. Therefore, the slight difference in the dynamical model has very limited impact on
1141
- the number of iterations and running time of the HL since the V init6 algorithm has high computational efficiency.
1142
- Therefore, the comparison among the three methods is still vali d.
1143
- The performance of the three methods is compared over 11 datase ts one per number of full revolutions from 0
1144
- to 10. Each dataset has 1000 samples, which are regenerated wit h the method described in Section III to validate the
1145
- DNN. The maximum iterations and tolerances of the three methods are listed in Table 5.
1146
- Table 5 The maximum iterations an d tolerance of three methods
1147
- Algorithm Tolerance (km) Maximum iterations
1148
- SN 0.001 2000
1149
- HL 0.001 10000
1150
- DNN-based method 0.001 2000
1151
- If the algorithm converges to a solution that meets the specifi ed tolerance within the set number of iterations, it is
1152
- recorded as a valid convergence, otherwise, as a failed converg ence. The result is displayed in Fig. 9 and Fig. 10, in
1153
- terms of convergence ratio (number of converged solutions over number of samples) and average number of
1154
- iterations to converge.
- Fig. 9 The convergence ratio of different algorithms for the Jupiter J2-perturbed Lambert problem
- Fig. 10 Average number of iterations of different algorithms on the Jupiter J2-perturbed Lambert problem
- According to Fig. 9, the HL and the proposed method could converge to the required accuracy in all cases, while
- the valid convergence ratio of the SN decreases as the number of revolutions increases. Then, according to Fig. 10,
- the number of iterations of HL appears to increase linearly in log-scale as the number of revolutions increases while
- the number of iterations of SN and the proposed DNN-based method remain nearly constant. The proposed method
- requires the least number of iterations. The lack of convergence of the SN with the increase in the number of
- revolutions is due to the growing difference between the exact solution and the solution of the Keplerian Lambert
- problem. For the same reason the HL progressively requires more iterations to converge. The proposed method
- mitigates this problem by providing a good initial guess for every number of revolutions.
- The average CPU computational time of the three methods is given in Fig. 11, in which the proposed method
- only accounts the time of the SCM. For the zero-revolution case, the average CPU computation time of SN,
- DNN-based method and HL are 0.051 seconds, 0.027 seconds and 0.329 seconds, respectively. It is seen that the
- CPU calculation time of the proposed method is the shortest. This advantage becomes more obvious as the number
- of the revolution increases because the accurate initial guess obtained using IGG reduces the number of iterations of
- the SCM. In general, the computational time increases with the increase in the number of revolutions, due to both
- the increase in the number of iterations and the longer propagation time. As shown in Fig. 11, the computational
- time of the SN and the proposed method appear to increase linearly with the number of revolutions, while the
- computational time of HL appears to increase more rapidly. The figure shows that the initial guess obtained with the
- DNN effectively reduces the number of iterations and provides a slower increase of the computational time with the
- number of revolutions. The computational time of the proposed method is below 0.5 seconds for the numbers of
- revolutions tested in this paper. The average computational time per iteration of SN, HL, and the proposed method
- are respectively 0.0082 s, 0.0018 s, and 0.0078 s. The proposed method and SN use the same shooting algorithm,
- for which each iteration needs additional three integral operations to calculate the Jacobian matrix. Their
- computational time per iteration is higher than that of the HL. However, though the single-iteration of HL takes
- less time, the HL requires much more iterations than the other two methods, as shown in Fig. 11.
- Fig. 11 Average CPU computational time of different methods for the Jupiter J2-perturbed Lambert problem
- C. Monte Carlo Analysis
- In this section we simulate the repeated use of the DNN-based method by taking a random set of boundary
- conditions and transfer times. Since it is essential to generate samples and train DNN before using the proposed
- method, the total computational time should include the time of sample generation, the training of the DNN and
- the SCM. To compare the total CPU time of the above three methods, four sets of Monte Carlo simulations with
- 1000, 5000, 10000, and 100000 sets of boundary conditions and transfer times are performed. For each set, the
- number of revolutions is equally distributed between 0 and 10. The DNN is trained only once, using 200000
- samples and the parameters setting presented in previous section, and is called one time per MC simulation to
- generate the first guess. The training of DNN was implemented in Python while the solutions of the J2-perturbed
- Lambert problem using the proposed method, HL and SN run in Matlab. All computations are performed on the
- personal computer with Intel Core-i7 4.2 GHz CPU and 128GB of RAM. The final results are given in Fig. 12. It
- can be seen that the efficiency of the proposed method improves with the increase in the number of Lambert
- solutions to be computed. In particular, when the number of simulations is equal or larger than 5000, the proposed
- method outperforms the other two methods even when including the cost of the sample generation and the training
- of the DNN.
- Fig. 12 Total CPU time of different methods for the multiple J2-perturbed Lambert problem
- In addition, two stress cases, where the angle between the initial and terminal positions is 180 deg or 360 deg,
- have been tested with the proposed method. For each revolution, 100 MC tests are performed for each case. All
- tests converge successfully and their average CPU computational time is given in Fig. 13, which is similar to the
- trend in Fig. 11. For the zero revolution case, the CPU computation time of the 180 degree and the 360 degree
- scenarios are 0.024 seconds and 0.029 seconds, respectively. The case of 360 deg costs a bit more time than the
- case of 180 deg due to its longer time of flight for each revolution.
- Fig. 13 Average CPU computational time of two stress cases for the Jupiter J2-perturbed Lambert problem
- VI. Conclusion
- A fast and novel method using DNN and the finite-difference-based shooting algorithm has been proposed to
- solve the J2-perturbed Lambert problem. DNN composed of several layers is the extension of conventional
- artificial neural networks, which has an excellent performance on approximating nonlinear systems. The major
- contribution of the novel method is to use a DNN to generate a first guess of the correction of the initial velocity to
- solve a J2-perturbed Lambert problem. We demonstrated that the DNN is capable of correcting the initial velocity
- error of the Keplerian solution and provide good initial values for the subsequent differential correction method.
- When applied to the Jupiter J2-perturbed Lambert problem, the errors in the initial velocity and terminal position are
- limited to 5 m/s and 100 km, respectively. In addition, when compared to a direct application of a shooting method
- using Newton's iterations and to a homotopy perturbed Lambert algorithm, the proposed method displayed a
- computational time that appears to increase linearly with a slope of 0.047 with the number of revolutions. In the
- application scenario presented in this paper the computational time is less than 0.5 seconds even for ten revolutions.
- It was also shown that compared to a direct application of a shooting method it provides convergence to the required
- accuracy in all the cases analyzed in this paper. Thus, we can conclude that the proposed DNN-based generation of a
- first guess is a promising method to increase robustness and reduce computational cost of shooting methods for the
- solution of the J2-perturbed Lambert problem.
- The method proposed in this paper can be used to solve the J2-perturbed Lambert problem for other celestial
- bodies, by training the corresponding DNN with the corresponding J2 and μ parameters. Thus a library of
- pre-trained DNNs could be easily used to have a more general application to missions around any celestial body. On
- the other hand, adding these dynamical parameters as part of the training set would allow a single more general
- DNN to be used with all celestial bodies. This latter option is the object of the current investigation.
- Acknowledgments
- The work described in this paper was supported by the National Natural Science Foundation of China (Grant No.
- 11672126), sponsored by Qing Lan Project, Science and Technology on Space Intelligent Control Laboratory (Grant
- No. 6142208200203 and HTKJ2020KL502019), the Funding for Outstanding Doctoral Dissertation in NUAA
- (Grant No. BCXJ19-12), State Scholarship from China Scholarship Council (Grant No. 201906830066). The authors
- fully appreciate their financial support.
 
 
 
 
 
 
 
 
 
 
 
 
txt/2202.08613.txt DELETED
@@ -1,396 +0,0 @@
1
- On the evaluation of (meta-)solver approaches
2
- Research Note
3
- On the evaluation of (meta-)solver approaches
4
- Roberto Amadini [email protected]
5
- Maurizio Gabbrielli [email protected]
6
- Department of Computer Science and Engineering,
7
- University of Bologna, Italy
8
- Tong Liu [email protected]
9
- Meituan,
10
- Beijing, China
11
- Jacopo Mauro [email protected]
12
- Department of Mathematics and Computer Science,
13
- University of Southern Denmark, Denmark
14
- Abstract
15
- Meta-solver approaches exploit a number of individual solvers to potentially build a
- better solver. To assess the performance of meta-solvers, one can simply adopt the metrics
- typically used for individual solvers (e.g., runtime or solution quality), or employ more
- specific evaluation metrics (e.g., by measuring how close the meta-solver gets to its virtual
- best performance). In this paper, based on some recently published works, we provide an
- overview of different performance metrics for evaluating (meta-)solvers, by underlining their
- strengths and weaknesses.
22
- 1. Introduction
23
- A famous quote attributed to Aristotle says that “ the whole is greater than the sum of its
24
- parts”. This principle has been applied in several contexts, including the field of constraint
25
- solving and optimization. Combinatorial problems arising from application domains such as
26
- scheduling, manufacturing, routing or logistics can be tackled by combining and leveraging
27
- the complementary strengths of different solvers to create a better global meta-solver .1
28
- Several approaches for combining solvers and hence creating effective meta-solvers have
29
- been developed. Over the last decades we witnessed the creation of new Algorithm Selec-
30
- tion (Kotthoff, 2016) and Configuration (Hoos, 2012) approaches2that reached peak results
31
- in various solving competitions (SAT competition, 2021; Stuckey, Becket, & Fischer, 2010;
32
- ICAPS, 2021). To compare different meta-solvers, new competitions were created, e.g., the
33
- 2015 ICON challenge (Kotthoff, Hurley, & O’Sullivan, 2017) and 2017 OASC competition
34
- on algorithm selection (Lindauer, van Rijn, & Kotthoff, 2019). However, the discussion of
35
- why a particular evaluation metric has been chosen to rank the solvers is lacking.
36
- 1. Meta-solvers are sometimes referred in the literature as portfolio solvers , because they take advantage of
37
- a “portfolio” of different solvers.
38
- 2. A fine-tuned solver can be seen as a meta-solver where we consider different configurations of the same
39
- solver as different solvers.
40
41
- We believe that further study on this issue is necessary because often meta-solvers are
42
- evaluated on heterogeneous scenarios, characterized by a different number of problems, dif-
43
- ferent timeouts, and different individual solvers from which the meta-solvers approaches
44
- are built. In this paper, starting from some surprising results presented by Liu, Amadini,
45
- Mauro, and Gabbrielli (2021) showing dramatic ranking changes with different, but rea-
46
- sonable, metrics we would like to draw more attention on the evaluation of meta-solvers
47
- approaches by shedding some light on the strengths and weaknesses of different metrics.
48
- 2. Evaluation metrics
49
- Before talking about the evaluation metrics, we should spend some words on what we need
50
- to evaluate: the solvers. In our context, a solver is a program that takes as input the
51
- description of a computational problem in a given language, and returns an observable
52
- outcome providing zero or more solutions for the given problem. For example, for decision
53
- problems the outcome may be simply “yes” or “no” while for optimization problems we might
54
- be interested in the sub-optimal solutions found along the search. An evaluation metric , or
55
- performance metric, is a function mapping the outcome of a solver on a given instance to a
56
- number representing “how good” the solver on this instance is.
57
- An evaluation metric is often not just defined by the output of the (meta-)solver but
58
- can also be influenced by other actors such as the computational resources available, the
59
- problems on which we evaluate the solver, and the other solvers involved in the evaluation.
60
- For example, it is often unavoidable to set a timeout on the solver's execution when there
- is no guarantee of termination in a reasonable amount of time (e.g., NP-hard problems).
- Timeouts make the evaluation feasible, but inevitably couple the evaluation metric to the
63
- execution context. For this reason, the evaluation of a meta-solver should also take into
64
- account the scenario that encompasses the solvers to evaluate, the instances used for the
65
- validation, and the timeout. Formally, at least for the purposes of this paper, we can define
66
- a scenario as a triple (I, S, τ) where: I is a set of problem instances, S is a set of individual
- solvers, and τ ∈ (0, +∞) is a timeout such that the outcome of solver s ∈ S over instance i ∈ I
- is always measured in the time interval [0, τ).
69
- Evaluating meta-solvers over heterogeneous scenarios is complicated by the fact that
70
- the set of instances, solvers and the timeout can have high variability. As we shall see in
71
- Sect. 2.3, things are even trickier in scenarios including optimization problems.
72
- 2.1 Absolute vs relative metrics
73
- A sharp distinction between evaluation metrics can be drawn depending on whether their
74
- values depend on the outcome of other solvers or not. We say that an evaluation metric is
- relative in the former case, absolute otherwise. For example, a well-known absolute metric
- is the penalized average runtime with penalty λ ≥ 1 (PAR_λ) that compares the solvers by
- using the average solving runtime and penalizes the timeouts with λ times the timeout.
- Formally, let time(i, s, τ) be the function returning the runtime of solver s on instance
- i with timeout τ, assuming time(i, s, τ) = τ if s cannot solve i before the timeout τ. For
- optimization problems, we consider the runtime as the time taken by s to solve i to opti-
- mality³ assuming w.l.o.g. that an optimization problem is always a minimization problem.
- We can define PAR_λ as follows.
84
- Definition 1 (Penalized Average Runtime). Let (I, S, τ) be a scenario. The PAR_λ score of
- solver s ∈ S over I is given by (1/|I|) Σ_{i∈I} par_λ(i, s, τ), where:
-   par_λ(i, s, τ) = time(i, s, τ) if time(i, s, τ) < τ, and λ·τ otherwise.
- Well-known PAR measures are, e.g., the PAR_2 adopted in the SAT competitions (SAT
- competition, 2021) or the PAR_10 used by Lindauer et al. (2019). Evaluating (meta-)solvers
- with scenarios having different timeouts should imply a normalization of PAR_λ in a fixed
- range to avoid misleading comparisons.
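As a small illustration (not taken from the paper), the following Python sketch computes the PAR_λ score from per-instance runtimes; the list-based input format and the function name are assumptions made for the example.

    # Sketch: PAR_lambda score of one solver over a set of instances.
    # runtimes[i] is the measured runtime on instance i, or None when it hit the timeout;
    # `timeout` plays the role of tau and `penalty` the role of lambda.
    def par_score(runtimes, timeout, penalty=10):
        def par(rt):
            return rt if (rt is not None and rt < timeout) else penalty * timeout
        return sum(par(rt) for rt in runtimes) / len(runtimes)

    # Example usage: PAR10 with a 1000 s timeout.
    # par_score([12.3, None, 456.0], timeout=1000)  ->  (12.3 + 10000 + 456.0) / 3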
95
- Another absolute metric for decision problems is the number (or percentage) of instances
- solved, where ties are broken by favoring the solver minimizing the average running time,
- i.e., minimizing the PAR_1 score. This metric has been used in various tracks of the planning
- competition (ICAPS, 2021), the XCSP competition (XCSP Competition, 2019), and the
- QBF evaluations (QBFEVAL, 2021).
100
- A well-established relative metric is instead the Borda count, adopted for example by the
- MiniZinc Challenge (Stuckey, Feydy, Schutt, Tack, & Fischer, 2014) for both single solvers
- and meta-solvers. The Borda count is a family of voting rules that can be applied to the
- evaluation of a solver by considering the comparison as an election where the solvers are
- the candidates and the problem instances are the voters. The MiniZinc challenge uses a
- variant of Borda⁴ where each solver scores points proportionally to the number of solvers it
- beats. Assuming that obj(i, s, t) is the best objective value found by solver s on optimization
- problem i at time t, with obj(i, s, t) = ∞ when no solution is found at time t, the MiniZinc
- challenge score is defined as follows.
109
- Definition 2 (MiniZinc challenge score). Let (I, S, τ) be a scenario where I = I_dec ∪ I_opt,
- with I_dec decision problems and I_opt optimization problems. The MiniZinc challenge
- (MZNC) score of s ∈ S over I is Σ_{i∈I, s'∈S∖{s}} ms(i, s', τ), where ms(i, s', τ) is:
-   0 if unknown(i, s) ∨ better(i, s', s);
-   1 if better(i, s, s');
-   0.5 if time(i, s, τ) = time(i, s', τ) and obj(i, s, τ) = obj(i, s', τ);
-   time(i, s', τ) / (time(i, s, τ) + time(i, s', τ)) otherwise,
- where the predicate unknown(i, s) holds if s does not produce a solution within the timeout:
-   unknown(i, s) = (i ∈ I_dec ∧ time(i, s, τ) = τ) ∨ (i ∈ I_opt ∧ obj(i, s, τ) = ∞)
- and better(i, s, s') holds if s finishes earlier than s' or it produces a better solution:
-   better(i, s, s') = (time(i, s, τ) < time(i, s', τ) = τ) ∨ (obj(i, s, τ) < obj(i, s', τ))
- 3. If s cannot solve i to optimality before τ, then time(i, s, τ) = τ even if sub-optimal solutions are found.
- 4. In the original definition, the lowest-ranked candidate gets 0 points, the next-lowest 1 point, and so on.
128
- This is clearly a relative metric because changing the set of available solvers can affect
129
- the MiniZinc scores.
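To make the pairwise nature of this metric concrete, here is a hypothetical Python sketch of ms from Definition 2 for a single instance; the outcome records (runtime capped at the timeout, objective value set to infinity when no solution is found) are assumptions of the example, not part of the paper.

    # Sketch of the pairwise MZNC score ms for one instance.
    # `a` and `b` are the outcomes of solvers s and s': dicts with 'time' and 'obj'.
    import math

    def ms(a, b, timeout, is_decision):
        def unknown(x):
            return x["time"] >= timeout if is_decision else math.isinf(x["obj"])
        def better(x, y):
            # x finishes while y times out, or x has a strictly better objective
            return (x["time"] < y["time"] == timeout) or (x["obj"] < y["obj"])
        if unknown(a) or better(b, a):
            return 0.0
        if better(a, b):
            return 1.0
        if a["time"] == b["time"] and a["obj"] == b["obj"]:
            return 0.5
        return b["time"] / (a["time"] + b["time"])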
130
- To handle the disparate nature of the scenarios when comparing meta-solver approaches,
- the evaluation function adopted in the ICON and OASC challenges was relative: the closed
- gap score. This metric assigns to a meta-solver a value in [0, 1] proportional to how much it
- closes the gap between the best individual solver available, or single best solver (SBS), and
- the virtual best solver (VBS), i.e., an oracle-like meta-solver always selecting the best individ-
- ual solver. The closed gap is actually a “meta-metric”, defined in terms of another evaluation
- metric. Formally, if (I, S, τ) is a scenario and m an evaluation metric to minimize, we have
- m(i, VBS, τ) = min{m(i, s, τ) | s ∈ S} for each i ∈ I and SBS = argmin_{s∈S} Σ_{i∈I} m(i, s, τ).
- We can define the closed gap as follows.
- Definition 3 (Closed gap). Let m be an evaluation metric to minimize for a scenario
- (I, S, τ), and let m_VBS = Σ_{i∈I} m(i, VBS, τ) and m_SBS = Σ_{i∈I} m(i, SBS, τ). The closed
- gap of a (meta-)solver s w.r.t. m on that scenario is:
-   (m_SBS − Σ_{i∈I} m(i, s, τ)) / (m_SBS − m_VBS)
- If not specified, we will assume the closed gap computed w.r.t. the PAR_10 score as done
- in the AS challenges 2015 and 2017.⁵
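A minimal sketch of Definition 3, assuming per-instance metric values are stored in dictionaries keyed by solver name (these data structures are illustrative choices, not the authors' implementation):

    # Sketch: closed gap of a meta-solver w.r.t. a metric m that is minimized.
    # scores[s][i] is m(i, s, tau) for each individual solver s and instance i;
    # meta[i] is m(i, meta_solver, tau).
    def closed_gap(scores, meta, instances):
        m_vbs = sum(min(scores[s][i] for s in scores) for i in instances)
        m_sbs = min(sum(scores[s][i] for i in instances) for s in scores)
        m_meta = sum(meta[i] for i in instances)
        return (m_sbs - m_meta) / (m_sbs - m_vbs)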
151
- 2.2 A surprising outcome
152
- An interesting outcome reported in Liu et al. (2021) was the profound difference between
153
- the closed gap and the MiniZinc challenge scores. Liu et al. compared the performance of
154
- six meta-solvers approaches across 15 decision-problems scenarios taken from ASlib (Bischl
155
- et al., 2016) and coming from heterogeneous domains such as Answer-Set Programming,
156
- Constraint Programming, Quantified Boolean Formula, Boolean Satisfiability.
157
- Tab. 1 reports the performance of ASAP and RF, respectively the best approach accord-
158
- ing to the closed gap score and the MZNC score. The scores in the four leftmost columns
159
- clearly show a remarkable difference rank if we swap the evaluation metric. With the closed
160
- gap, ASAP is the best approach and RF the worst among all the meta-solvers considered,
161
- while with the MZNC score RF climbs to the first position while ASAP drops to the last
162
- position.
163
- Another thing that catches the eye in Tab. 1 is the presence of negative scores . This
164
- happens because, by definition, the closed gap has upper bound 1 (no meta-solver can
165
- improve the VBS) but not a fixed lower bound. Hence, when the performance of the meta-
166
- solver is worse than the performance of the single best solver, the closed gap drops below
167
- zero. While on a first glance this seems reasonable—meta-solvers should perform no worse
168
- than the individual solvers—it is worth noting that the penalty for performing worse than
169
- 5. In the 2015 edition, the closed gap was computed as
- 1 − (m_SBS − m(i, s, τ)) / (m_SBS − m_VBS) = (m_VBS − m(i, s, τ)) / (m_VBS − m_SBS).
173
- Table 1: Comparison ASAP vs RF. The MZNC column reports the average MZNC score
- per scenario. Negative scores are in bold font.
- Scenario                    Closed gap           MZNC               Better than other
-                             ASAP       RF        ASAP     RF        ASAP     RF
- ASP-POTASSCO                0.7444     0.5314    2.2235   2.6163    275      671
- BNSL-2016                   0.8463     0.7451    1.2830   3.0250    98       993
- CPMP-2015                   0.6323     0.1732    2.0501   2.3660    137      334
- CSP-MiniZinc-Time-2016      0.6251     0.2723    2.1552   2.7214    17       53
- GLUHACK-2018                0.4663     0.4057    1.9040   2.4528    62       147
- GRAPHS-2015                 0.758     -0.6412    2.3045   3.3731    489      3663
- MAXSAT-PMS-2016             0.5734     0.3263    1.4747   2.8616    66       439
- MAXSAT-WPMS-2016            0.7736    -1.1826    1.5168   2.4043    126      386
- MAXSAT19-UCMS               0.6583    -0.2413    2.0893   2.5189    145      269
- MIP-2016                    0.35      -0.3626    2.4035   2.4239    81       105
- QBF-2016                    0.7568    -0.1366    1.8642   2.7154    193      467
- SAT03-16_INDU               0.3997     0.1503    2.1508   2.5812    491      1116
- SAT12-ALL                   0.7617     0.6528    1.6785   2.8250    262      1227
- SAT18-EXP                   0.5576     0.3202    1.9239   2.4998    61       164
- TSP-LION2015                0.4042   -19.1569    2.4352   2.6979    1115     1949
- Tot.                        9.3077   -18.1439   29.4573  40.0826    3618     11983
- Tot. - TSP-LION2015         8.9035     1.013    27.0221  37.3846    2503     10034
194
- the SBS also depends on the denominator m_SBS − m_VBS. This means that in scenarios
- where the performance of the SBS is close to the perfect performance of the VBS this
- penalty can be significantly magnified. The TSP-LION2015 scenario is a clear example: the
- RF approach gets a penalization of more than 19 points, meaning that RF should perform
- flawlessly in about 20 other scenarios to expiate this punishment. In fact, in TSP-LION2015
- the PAR_10 distributions of SBS and VBS are very close: the SBS is able to solve 99.65% of
- the instances solved by the VBS, leaving little room for improvement. RF scores -19.1569
- while still solving more than 90% of the instances of the scenario and having a difference
- with ASAP of slightly more than 5% instances solved.
203
- Why are the closed gap and the MZNC rankings so different? Looking at the rightmost
204
- two columns in Tab. 1 showing, for each scenario, the number of instances where an approach
205
- is faster than the other, one may conclude that RF is far better than ASAP. In all the
206
- scenarios the number of instances where its runtime is lower than ASAP runtime is greater
207
- than the number of instances where ASAP is faster. Overall, it is quite impressive to see
208
- that RF beats ASAP on 11983 instances while ASAP beats RF on 3618 times only.
209
- An initial clue of why this happens is revealed in Liu et al. (2021), where a parametric
- version of the MZNC score is used. In practice, Def. 2 is generalized by assuming the performance
- of two solvers equivalent if their runtime difference is below a given time threshold δ.⁶ This
- variant was considered because a time difference of few seconds could be considered irrelevant
- 6. Formally, if |time(i, s, τ) − time(i, s', τ)| ≤ δ then both s and s' score 0.5 points; note that if δ = 0 we
- get the original MZNC score as in Def. 2.
215
216
- Figure 1: Cumulative Borda count by varying the threshold.
217
- Figure 2: Solve instances difference ASAP vs RF.
218
- if solving a problem can take minutes or hours. The parametric MZNC score is depicted in
219
- Fig. 1, where different thresholds are considered on the x-axis. It is easy to see how the
220
- performance of ASAP and RF reverses when δ increases: ASAP bubbles from the bottom
221
- to the top, while RF gradually sinks to the bottom.
222
- Let us further investigate this anomaly. Fig. 2 shows the runtime distributions of the
223
- instances solved by ASAP and RF, sorted by ascending runtime. We can see that ASAP
224
- solves more instances, but for around 15k instances RF is never slower than ASAP. Sum-
225
226
- Table 2: Average closed gap, speedup, and normalized runtime. Peak performance in bold
227
- font.
228
- Meta-solver Closed gap Speedup Norm. runtime
229
- ASAP 0.4866 0.4026 0.8829
230
- sunny-as2 0.4717 0.4122 0.8879
231
- autofolio 0.4713 0.4110 0.8855
232
- SUNNY-original 0.4412 0.3905 0.8790
233
- *Zilla 0.3416 0.3742 0.8753
234
- Random Forest -0.1921 0.3038 0.8507
235
- marizing, ASAP solves more instances but RF is in general quicker when it solves an (often
- easy) instance. This entails the significant difference between closed gap and Borda metrics.
- In our opinion, on the one hand, it is fair to think that ASAP performs better than RF on
- these scenarios. The MZNC score seems to over-penalize ASAP w.r.t. RF. Moreover, from
- Fig. 1 we can also note that for δ ≤ 10³ the parametric MZNC score of RF is still better,
- but 10³ seconds looks quite a high threshold to consider two performances as equivalent.
- On the other hand, the closed gap score can also be over-penalizing due to negative outliers.
- We also would like to point out that the definitions of SBS found in the literature do
- not clarify how it is computed in scenarios where the set of instances I is split into test and
- training sets. Should the SBS be computed on the instances of the training set, the test set,
- or the whole dataset I? One would be inclined to use the test set to select the SBS, but this
- choice might be problematic because the test set is usually quite small w.r.t. the training set
- when using, e.g., cross-validation methods. In this case issues with negative outliers might
- be amplified. If not clarified, this could lead to confusion. For example, in the 2015 ICON
- challenge the SBS was computed by considering the entire dataset (training and testing
- instances together). In the 2017 OASC instead the SBS was originally computed on the
- test set of the scenarios, but then the results were amended by computing the SBS on the
- training set.
253
- An alternative to the above metrics is the speedup of a single solver w.r.t. the SBS or the
- VBS, i.e., how much a meta-solver can improve a baseline solver. Tab. 2 reports, using the
- data of (Liu et al., 2021), for each meta-solver s in a scenario (I, S, τ) the average speedup
- computed as (1/|I|) Σ_{i∈I} time(i, VBS, τ) / time(i, s, τ). Unlike the closed gap, that has no
- lower bound, the speedup always falls in [0, 1] with bigger values meaning better performance.
- We compared this with the average normalized runtime score, computed as
- 1 − (1/|I|) Σ_{i∈I} time(i, s, τ) / τ, and the
- average closed gap score w.r.t. PAR_1. We use PAR_1 instead of PAR_10 to be consistent with
- speedup and normalized runtime, which do not get any penalization.
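For illustration only, the two averages just mentioned can be computed as in the sketch below; runtimes are assumed to be capped at the timeout, as in the definitions above.

    # Sketch: average speedup w.r.t. the VBS and average normalized runtime.
    # vbs_time[i] and meta_time[i] are runtimes capped at `timeout`.
    def avg_speedup(vbs_time, meta_time, instances):
        return sum(vbs_time[i] / meta_time[i] for i in instances) / len(instances)

    def avg_normalized_runtime(meta_time, instances, timeout):
        return 1 - sum(meta_time[i] / timeout for i in instances) / len(instances)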
267
- The rank with speedup and normalised runtime is the same. The podium changes if we
268
- use the closed gap: in this case sunny-as2 and autofolio lose one position while ASAP rises
269
- from third to first position. However, as we shall see in the next section, the generalization
270
- of these metrics to optimization problems is not trivial.
271
272
- 2.3 Optimization problems
273
- So far we have mainly talked about evaluating meta-solvers on decision problems. While the
274
- MZNC score takes into account also optimization problems, for the closed gap, (normalized)
275
- runtime, and speedup the generalization is not as obvious as it might seem. Here using
276
- the runtime might not be the right choice: often a solver cannot prove the optimality of a
277
- solution, even when it actually finds it. Hence, the obvious alternative is to consider just
278
- the objective value of a solution. But this value needs to be normalized , and to do so what
279
- bounds should we choose? Furthermore: how to reward a solver that actually proves the
280
- optimality of a solution? And how to penalize solvers that cannot find any solution?
281
- If solving to optimality is not rewarded, metrics such as the ratio score of the satisfiability
- track of the planning competition can be used.⁷ This score is computed as the ratio between
283
- the best known solution and the best objective value found by the solver, giving 0 points in
284
- case no solution is found.
285
- A different metric that focuses on quickly reaching good solutions is the area score,
286
- introduced in the MZNC starting from 2017. This metric computes the integral of a step
287
- function of the solution value over the runtime horizon. Intuitively, a solver that finds good
288
- solutions earlier can outperform a solver that finds better solutions much later in the solving
289
- stage.
290
- Other attempts have been proposed to take into account the objective value and the run-
291
- ning times. For example, the ASP competition (Calimeri, Gebser, Maratea, & Ricca, 2016)
292
- adopted an elaborate scoring system that combines together the percentage of instances
293
- solved within the timeout, the evaluation time, and the quality of a solution. Similarly,
294
- Amadini, Gabbrielli, and Mauro (2016) proposed a relative metric where each solver s gets
- a reward in {0} ∪ [α, β] ∪ {1} according to the objective value obj(s, i, τ) of the best solution
- it finds, with 0 ≤ α ≤ β ≤ 1. If no solution is found then s scores 0, if it solves i to optimality
- it scores 1, otherwise the score is computed by linearly scaling obj(s, i, τ) in [α, β] according
- to the best and worst objective values found by any other available solver on problem i.
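A sketch of this linear-scaling reward, under the assumption that the two bounds of the scaled range are denoted α and β (the helper below and its default values are illustrative, not the cited authors' code):

    # Sketch: reward in {0} U [alpha, beta] U {1} for a minimization objective.
    # `obj` is the solver's best objective (None if no solution); `best`/`worst`
    # are the best and worst objective values found by any available solver.
    def reward(obj, solved_to_optimality, best, worst, alpha=0.25, beta=0.75):
        # alpha/beta defaults are arbitrary example values, not from the paper
        if obj is None:
            return 0.0
        if solved_to_optimality:
            return 1.0
        if worst == best:
            return beta
        frac = (worst - obj) / (worst - best)   # 1 for the best value, 0 for the worst
        return alpha + frac * (beta - alpha)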
299
- 2.4 Randomness and aggregation
300
- We conclude the section with some remarks about randomness and data aggregation.
301
- When evaluating a meta-solver s on scenario (I, S, τ), it is common practice to partition
- I into a training set I_tr, on which s “learns” how to leverage its individual solvers, and a
- test set I_ts where the performance of s on unforeseen problems is measured. In particular,
- to prevent overfitting, it is possible to use a k-fold cross validation by first splitting I into
- k disjoint folds, and then using in turn one fold as test set and the union of the other
- folds as training set. In the AS challenge 2015 (Lindauer et al., 2019) the submissions were
307
- indeed evaluated with a 10-fold cross validation, while in the OASC in 2017 the dataset of
308
- the scenarios was divided only in one test set and one training set. As also underlined by
309
- the organizers of the competition, this is risky because it may reward a lucky meta-solver
310
- performing well on that split but poorly on other splits.
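As a purely illustrative sketch of the k-fold protocol described above (the random fold assignment, seed and fold count are example choices, not prescribed by the paper):

    # Sketch: split instances into k disjoint folds and iterate train/test pairs.
    import random

    def k_fold_splits(instances, k=10, seed=0):
        shuffled = list(instances)
        random.Random(seed).shuffle(shuffled)
        folds = [shuffled[f::k] for f in range(k)]
        for f in range(k):
            test = folds[f]
            train = [i for g, fold in enumerate(folds) for i in fold if g != f]
            yield train, test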
311
- Note that so far we have always assumed deterministic solvers, i.e., solvers always provid-
312
- ing the same outcome if executed on the same instance in the same execution environment.
313
- Unfortunately, the scenario may contain randomized solvers potentially producing different
314
- 7. This track includes optimization problems where the goal is to minimize the length of a plan.
315
316
- results with a high variability. In this case, solvers should be evaluated over a number of
317
- runs and particular care must be taken because the assumption that a solver can never
318
- outperform the VBS would be no longer true.
319
- A cautious choice to decrease the variance of model predictions would be to repeat the
320
- k-fold cross validation n > 1 times with different random splits. However, this might imply
321
- a tremendous computational effort—the training phase of a meta-solver might take hours or
322
- days—and therefore a significant energy consumption. This issue is becoming an increasing
323
- concern. For example, in their recent work Matricon, Anastacio, Fijalkow, Simon, and Hoos
324
- (2021) propose an approach to early stop running an individual solver that it is likely to
325
- perform worse than another solver on a subset of the instances of the scenario. In this way,
326
- less resources are wasted for solvers that most likely will not bring any improvement.
327
- Finally, we spend a few words on the aggregation of the results. It is quite common
328
- to use the arithmetic mean, or just the sum, when it comes to aggregate the outcomes
329
- of a meta-solver over different problems of the same scenario (e.g., when evaluating the
330
- results on the n·k test sets of a k-fold cross validation repeated n times). The same
331
- applies when evaluating different scenarios. The choice of how to aggregate the metric
332
- values into a unique value should however be motivated since the arithmetic mean can lead
333
- to misleading conclusions when summarizing normalized benchmark (Fleming & Wallace,
334
- 1986). For example, to amortize the effect of outliers, one may use the median or use the
335
- geometric mean to average over normalized numbers.
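For example (an illustration, not something the paper prescribes), median and geometric-mean aggregation of normalized per-scenario scores can be written as:

    # Sketch: aggregating normalized scores across problems or scenarios.
    import math
    import statistics

    def aggregate(scores, how="geometric"):
        if how == "median":
            return statistics.median(scores)
        if how == "geometric":            # only meaningful for positive values
            return math.exp(sum(math.log(x) for x in scores) / len(scores))
        return sum(scores) / len(scores)  # arithmetic mean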
336
- 3. Conclusions
337
- As it happens in many other fields, the choice of reasonable metrics can have divergent
338
- effects on the assessment of (meta-)solvers. While these issues are mitigated when comparing
339
- individual solvers in competitions having uniform scenarios in term of size, difficulty, and
340
- nature, the comparison of meta-solver approaches poses new challenges due to diversity of
341
- the scenarios on which they are evaluated. Although it is impossible to define a fits-all
342
- metric, we believe that we should aim at more robust metrics avoiding as much as possible
343
- the under- and over-penalization of meta-solvers.
344
- Particular care should be taken when using relative measurements, because the risk is
345
- to amplify small performance variations into large differences of the metric’s value. Present-
346
- ing the results in terms of orthogonal evaluation metrics allows a better understanding of
347
- the (meta-)solvers performance, and these insights may help the researchers to build meta-
348
- solvers that better fit their needs, as well as to prefer an evaluation metric over another.
349
- Moreover, well-established metrics may be combined into hybrid “meta-metrics” folding to-
350
- gether different performance aspects and handling the possible presence of randomness.
351
352
- References
353
- Amadini, R., Gabbrielli, M., & Mauro, J. (2016). Portfolio approaches for constraint optimization problems. Annals of Mathematics and Artificial Intelligence, 76(1-2), 229–246.
- Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., ... Vanschoren, J. (2016). ASlib: A benchmark library for algorithm selection. Artificial Intelligence, 237, 41–58.
- Calimeri, F., Gebser, M., Maratea, M., & Ricca, F. (2016). Design and results of the fifth answer set programming competition. Artificial Intelligence, 231, 151–181. doi: 10.1016/j.artint.2015.09.008
- Fleming, P. J., & Wallace, J. J. (1986). How not to lie with statistics: The correct way to summarize benchmark results. Communications of the ACM, 29(3), 218–221. doi: 10.1145/5666.5673
- Hoos, H. H. (2012). Automated algorithm configuration and parameter tuning. In Y. Hamadi, É. Monfroy, & F. Saubion (Eds.), Autonomous search (pp. 37–71). Springer.
- ICAPS. (2021). The international planning competition web page. https://www.icaps-conference.org/competitions/. (Accessed: 2021-12-10)
- Kotthoff, L. (2016). Algorithm selection for combinatorial search problems: A survey. In Data mining and constraint programming (pp. 149–190). Springer.
- Kotthoff, L., Hurley, B., & O'Sullivan, B. (2017). The ICON challenge on algorithm selection. AI Magazine, 38(2), 91–93.
- Lindauer, M., van Rijn, J. N., & Kotthoff, L. (2019). The algorithm selection competitions 2015 and 2017. Artificial Intelligence, 272, 86–100.
- Liu, T., Amadini, R., Mauro, J., & Gabbrielli, M. (2021). sunny-as2: Enhancing SUNNY for algorithm selection. Journal of Artificial Intelligence Research, 72, 329–376.
- Matricon, T., Anastacio, M., Fijalkow, N., Simon, L., & Hoos, H. H. (2021). Statistical comparison of algorithm performance through instance selection. In L. D. Michel (Ed.), 27th International Conference on Principles and Practice of Constraint Programming, CP 2021, Montpellier, France (virtual conference), October 25-29, 2021 (Vol. 210, pp. 43:1–43:21). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
- QBFEVAL. (2021). QBF evaluations web page. http://www.qbflib.org/index_eval.php. (Accessed: 2021-12-10)
- SAT competition. (2021). The international SAT competition web page. http://www.satcompetition.org/. (Accessed: 2021-12-10)
- Stuckey, P. J., Becket, R., & Fischer, J. (2010). Philosophy of the MiniZinc challenge. Constraints: An International Journal, 15(3), 307–316. doi: 10.1007/s10601-010-9093-0
- Stuckey, P. J., Feydy, T., Schutt, A., Tack, G., & Fischer, J. (2014). The MiniZinc Challenge 2008-2013. AI Magazine, 35(2), 55–60. http://www.aaai.org/ojs/index.php/aimagazine/article/view/2539
- XCSP Competition. (2019). XCSP competition. http://www.cril.univ-artois.fr/XCSP19/. (Accessed: 2021-12-10)
 
 
 
 
 
 
 
 
 
 
 
 
 
txt/2202.10177.txt DELETED
@@ -1,786 +0,0 @@
1
- CELL NUCLEI CLASSIFICATION IN HISTOPATHOLOGICAL
- IMAGES USING HYBRID OLCONVNET
3
- Suvidha Tripathi
4
- Department of Information Technology
5
- Indian Institute of Information Technology Allahabad
6
- Jhalwa, Deoghat, Prayagraj, Uttar Pradesh 211015
7
- [email protected] Kumar Singh
8
- Department of Information Technology
9
- Indian Institute of Information Technology Allahabad
10
- Jhalwa, Deoghat, Prayagraj, Uttar Pradesh 211015
11
12
- February 22, 2022
13
- ABSTRACT
14
- Computer-aided histopathological image analysis for cancer detection is a major research challenge
15
- in the medical domain. Automatic detection and classification of nuclei for cancer diagnosis impose
16
- a lot of challenges in developing state of the art algorithms due to the heterogeneity of cell nuclei
17
- and data set variability. Recently, a multitude of classification algorithms has used complex deep
18
- learning models for their dataset. However, most of these methods are rigid and their architectural
19
- arrangement suffers from inflexibility and non-interpretability. In this research article, we have pro-
20
- posed a hybrid and flexible deep learning architecture O LConvNet that integrates the interpretability
21
- of traditional object-level features and generalization of deep learning features by using a shallower
22
- Convolutional Neural Network (CNN) named as CNN 3L.CNN 3Lreduces the training time by
23
- training fewer parameters and hence eliminating space constraints imposed by deeper algorithms.
24
- We used F1-score and multiclass Area Under the Curve (AUC) performance parameters to compare
25
- the results. To further strengthen the viability of our architectural approach, we tested our proposed
26
- methodology with state of the art deep learning architectures AlexNet, VGG16, VGG19, ResNet50,
27
- InceptionV3, and DenseNet121 as backbone networks. After a comprehensive analysis of classi-
28
- fication results from all four architectures, we observed that our proposed model works well and
29
- perform better than contemporary complex algorithms.
30
- Keywords Deep Learning, Hybrid networks, Object level features, Transfer Learning, Histopathological Images,
31
- Cell Nuclei Classification, Class balancing, Convolutional Neural Networks, Multi Layer Perceptron
32
- 1 Introduction
33
- Early cancer detection is a major challenge in the medical domain. Even today the medical community is largely
34
- dependent upon the expert pathologist for detecting and classifying such cell anomalies that cause cancer, in whole
35
- slide histopathological images. The job of the pathologists becomes very cumbersome and may take several days for
36
- annotating the whole slide images of biopsy samples. Moreover, the reliability of predictions also depends upon the
37
- experience of the pathologist and some times, consensus of more than one pathologists are required for confirming
38
- such anomalies. These factors provide adequate motivation for research and development of a computer-assisted
39
- diagnostic (CAD) systems which classifies cell nuclei and improves the understanding of some of the underlying
40
- biological phenomenon, e.g., monitoring cancer cell cycle progress [1], type, shape, size and arrangement of the cells
41
- in the affected organ sites, and the knowledge about metastasis, if the cells are present at some unlikely locations .
42
- All these observations can be comprehended if we know the type of cell present in the diseased tissue sample. Early
43
- diagnosis of cell anomalies can largely affect the disease prognosis [2]. Such as in the case of a colon or colorectal
44
- carcinoma, epithelial cells lining the colon or rectum of the gastrointestinal tract are affected and timely detection of
45
- these cells can help in quick diagnosis, which eventually would increase the prognostic value of the disease. Similarly,
46
- the lymphocytes can also be analyzed for sentinel lymph node disease [2]. The other examples are the Myeloma orarXiv:2202.10177v1 [cs.CV] 21 Feb 2022APREPRINT - FEBRUARY 22, 2022
47
- Figure 1: Example sub-images of different classes of nuclei starting from first row to fourth: Epithelial, Fibroblasts,
48
- Inflammatory, Miscellaneous (in sets of 2 - Adipocyte, Endothelial, Mitotic Figure, and Necrotic Nucleus (from left
49
- to right)
50
- multiple Myeloma detections through the plasma cells which are the types of the white blood cells and cause the
51
- cancer [3]. Therefore, a sample biopsy from a specific location can be quickly analyzed using the information of cell
52
- environment provided by appropriate CAD system.
53
- In particular medical image analysis, for all diagnosis, is attributed to the knowledge and skills possessed by trained and
54
- experienced pathologists. Although pathologists have the ability and means to single out the affected cancerous lesions
55
- in a tissue biopsy samples, most of such detections are still done manually and hence time-consuming. Numerous
56
- challenges are involved in diagnosing cancer due to data set variability and heterogeneity in cell structures, which
57
- makes the process extremely tedious even for experts. Software intervention for early detection is therefore important
58
- for the purpose of effective control and treatment of the diseased organs [4]. To develop such automated cell detection
59
- and classification algorithms, the knowledge of histology is vital and requires the annotated or labeled data set to
60
- be prepared by the expert histo-pathologists. Once the labelled data is acquired, then the routine intervention of
61
- pathologists can be eliminated while analyzing the whole slide samples under test by using the developed automated
62
- CAD algorithms. Cell nuclei in a Hematoxylin and Eosin (H&E) stained histopathological slide sample have a specific
63
- shade of blue caused by hematoxylin’s reaction with cellular protein present in the nuclei of the cells [5]. A shape
64
- of the cell varies with cell type, cell-cycle stage, and also with the presence or absence of cancer. Fig. 1 shows four
65
- different classes of nuclei namely, Inflammatory, Fibroblast, Epithelial, and Miscellaneous, which include adipocyte,
66
- endothelial nucleus, mitotic figures, and necrotic nucleus [6]. The nuclei structures as shown in Fig.1 have different
67
- shapes, texture, and intensity features, which vary by the factors i.e. nuclei type (epithelial, fibroblasts, lymphocytes,
68
- etc.), the malignancy of the disease (or grade of cancer), and nuclei life cycle (interphase or mitotic phase) [7]. For
69
- example, the inflammatory nuclei, a type of white blood cells and called as lymphocyte nuclei (LN) are smaller in size
70
- and regular spherical shape as compared to epithelial nuclei (EN) [8]. Fibroblasts have long spindle-like shape and
71
- appearance and having very little cytoplasm [9]. Activated fibroblasts and infiltrating inflammatory cells are the signs
72
- of potential tumor growth [9]. All these histological and biological differences between cell structures and their site
73
- of origin highlight the clinical relevance of classifying different types of nuclei.
74
In this paper, we adopt a feature-based approach to automated nuclei classification. Feature-based approaches fall into
two general categories: hand-crafted features and deep learning based features. In histopathology images, morphological
and architectural features, whose accuracy depends on the magnification and the class type and which exhibit a unique
mixture of visual patterns, qualify as hand-crafted features, whereas deep learning features are a by-product of filter
responses obtained from a large number of training samples and fine-tuning of the network. In the proposed work we
demonstrate, on complex medical data, the benefit of a combined feature set consisting of both object-level features and
learned deep features over a feature set acquired from a single domain. Moreover, for a detailed analysis, the
accuracy-generalization tradeoff and the space-time complexity issues exhibited by traditional and DL methods,
respectively, have been considered in the proposed architectural arrangement.
84
In summary, the key contributions of this work include:
1. The strength of our method lies in a flexible architecture that supports a deep learning backbone to extract deep
   features together with a simple object-level framework for extracting cell-level features.
2. We achieve a high level of nuclei classification performance through simple concatenation of the features derived
   from the two domains.
3. Through a series of experiments we show that, in the case of nuclei structures, even a very small number of basic,
   locally focused object-level features can enhance performance when combined with three or more layers of a deep
   learning architecture.
4. To the best of our knowledge, this is the first study on this hypothesis to develop a custom architecture for the
   problem, highlighting the need to design lighter architectures for specific problems rather than using deeper
   pre-trained architectures.
5. To the best of our knowledge, this is the first study on this hypothesis to experimentally compare end-to-end
   learning with stage-wise learning.
98
- The rest of the paper is organized as follows. Section-II describes the reviewed literature for handcrafted and deep
99
- features. Section-III describes the complete methodology of the proposed work. The experimental setup is elaborated
100
- in Section-IV including the database and workflow. Section-V contains various results and necessary discussion.
101
The discussion also justifies the appropriateness of the proposed method with respect to flexibility and robustness.
Section-VI concludes the work presented in the paper, followed by the acknowledgments and references.
104
- 2 Reviewed Literature
105
- Owing to the above-mentioned properties exhibited by cell nuclei, many traditional handcrafted cell nuclei classi-
106
- fication algorithms have been reported in [10–16]. Authors in [10] have first segmented the nuclei objects using
107
- morphological region growing, wavelet decomposition and then found the shape and texture features for classifying
108
- cancer cells vs. normal cells using SVM classifier. Another handcrafted feature-based method for cell nuclei clas-
109
- sification in histopathological images while using shape, statistical, and texture (Gabor and Markov Random Field)
110
- features from localized cells has been reported in [11]. Other object level (OL) feature extraction from localized cell
111
- objects based methods have been reported in [12]. All the methods [10–12, 15, 16] using the OL features have been
112
- critically analyzed against the utility of those features for individual problems related to a histological and/or cytologi-
113
- cal analysis of cancer cells in [13,14]. The quality of the extracted features from various handcrafted methods [10–16]
114
- is then assessed after passing them through appropriate classifiers. The success of their findings motivated the use
115
- of targeted OL features in our methodology. To design effective handcrafted feature-based models requires complex
116
- algorithms to achieve high performance and a decent level of domain-specific knowledge [15, 16]. Moreover, it be-
117
- comes extremely difficult to resolve the issues due to dataset variability and heterogeneity within each cell type. These
118
- issues lead to the inability of the reported novel but complex models to generalize well with varying datasets. It is
119
- worth mentioning that, most of these methods are reported on a very small sample size in general, causing robustness
120
- issues. To overcome the generalization problem, it is required to model features, which are common for a particular
121
- class of cell nuclei but highly discriminating among different classes. Recently, deep learning architectures have been
122
- known to produce generalized feature sets and hence have proved their niche in classification algorithms [17–24].
123
- To put it more clearly, the key advantage of using deep learning architectures can be explained by highlighting the
124
- problems of linear or other shallow classifiers. The traditional classifiers do not use raw pixel data and possibly cannot
125
- distinguish two similar objects on different backgrounds which is a case of selectivity-invariance dilemma [17]. That
126
- is why we need good feature extractors with such classifiers to solve the selectivity-invariance dilemma. The Deep
127
- Learning (DL) based architectures automatically learn the good feature sets from the large histopathological image
128
- data sets. In 2016, the CAMELYON challenge has also reported the use of extensive deep learning architectures for
129
- solving various problems of Localization and Classification. Detailed methodologies of all these methods have been
130
reported in [21]. More recently, the authors in [25] used a pre-trained VGG19 to classify extensively augmented
multi-grade brain tumour samples, whereas the authors in [26] used a similar approach to identify alcoholism in subjects
from their brain scans. DL methods therefore find applicability in a wide range of applications due to their robust and
better-performing architectures.
133
- However, there are some issues with deep learning based methods as well. DL features lack interpretability and can-
134
- not be confirmed as global or local features. Moreover, there is always the lack of a large number of datasets in the
135
- medical domain, which hampers or restricts DL algorithms to scale well on all other test data sets not used for the
136
- training. Another major issue with deep architectures is the huge parameters with greater depths, causing the opti-
137
- mization problem very time-consuming. At the same time, the complexity of a model increases, as the depth increases
138
- and eventually the intermediate processes become less and less interpretable. One of the approaches for minimizing
139
- the training time on the medical dataset is to use the concept of transfer learning and fine-tuning pre-trained models
140
- such as AlexNet [18], VGG16, VGG19 [20], ResNet50 [19], DenseNet [27], and InceptionV3 [28]. Originally these
141
- models have been trained on natural image datasets, which is from an entirely different domain but can be fine-tuned to
142
- extract features on medical images. But, medical data has very little to no correspondence with natural images. Hence,
143
144
- relying solely on transfer learning based fine-tune approach should not be preferred. Rather, the training should be
145
- done on the networks that have either not been pre-trained on natural images or have been pre-trained on similar medi-
146
- cal dataset. But, training DL networks on medical dataset has its own set of challenges, including lack of huge amount
147
- of annotated medical data for training. Moreover, the diverse nature of medical images prevents the generalization and
148
- standardization of datasets on which DL networks could be trained for transfer learning.
149
- An exhaustive survey of deep learning methods as reported in [29] thoroughly highlights the merits of applying DL
150
- methods in the field of medical imaging, medical informatics, translational bioinformatics, and public health. The
151
- amalgamated use of both OL and DL features for the purpose of nuclei detection, segmentation, and classification
152
- have also been suggested in [8, 29].
153
- Therefore, OL features in combination with DL features could help to bridge the gap between issues that two domains
154
- bring individually. Some recent articles have worked on the similar hypothesis of inter-domain feature combination
155
- and developed a method that combines the two feature sets as reported in [22,23]. But, the drawback of their method is
156
- in their complexity and huge training times due to very deep network models. Authors in [22] combined different deep
157
- learning features extracted from Caffe-ref [30], VGG-f [31] and VGG19 models with Bag of Features (BoF) and Local
158
- Binary Pattern (LBP) features. They then used ensemble classifiers to produce better classification accuracy than that
159
- of the softmax classification method used by deep learning models. However, the dataset under experiments in [22]
160
- was imbalanced hence, the reported accuracy trend may not hold good for other imbalanced datasets which are highly
161
- probable in case of medical image datasets. F1 score and AUC are better parameters for assessing the performance of
162
- classification algorithms for imbalanced datasets. Also, the authors [22, 23] reported the complex models which were
163
- based on pre-trained deep architectures with 7 and more layers and did not analyze the performance trend on other
164
- customized architectures that could have minimized the space and time constraints. It is difficult to design and test
165
- such relatively inflexible algorithms on a new dataset and deploy in real time applications. For example, it is difficult
166
- to change the design if one wishes to add a new functionality and re-train the algorithm. Furthermore, the reported
167
- handcrafted features in these studies lack direct relevance to the nuclei structural properties.
168
- 3 METHODOLOGY
169
A hybrid feature-based flexible classification framework trained on the dataset from [32] is used to determine the
suitability of combining different feature sets. A few pre-processing steps are performed to segment the cell nuclei from
the background stroma; this step is necessary to extract the OL features. This feature set comprises relevant visual,
shape and texture features from each nucleus. DL features are extracted from the original input images. Both sets of
features are then fused to produce a final feature set, which is used as input to a Multi-Layer Perceptron
(MLP) for classifying the cell nuclei into one of the four categories. The block diagram of the proposed
175
- architectural setup has been shown in Fig. 2.
176
- The entire flow has been modeled in the Algorithm-1.
177
- Various steps involved in the proposed methodology has been elaborated in the following sub-sections (3.1)-(3.5)
178
- 3.1 Segmentation
179
Cytologic and histologic images hinder the generalization of segmentation algorithms because of the inherent
variability of the nuclei structures present in them. For this reason, determining which state-of-the-art algorithm for
nuclei segmentation would work for our dataset was not straightforward. Therefore, we developed an application-
specific segmentation algorithm for OL feature extraction. Our dataset contains H&E (Hematoxylin and Eosin) stained
183
- RGB image blocks that stains the nuclei region in bright blue and cell region in pink. The staining helped us to roughly
184
- extract the nucleus contour. Segmentation of an object then allowed for the calculation of OL features such as homo-
185
- geneous color, texture, size, and shape of the segmented region.
186
- Firstly, we enhanced the blue intensity of the nuclei through contrast adjustment. For this purpose blue color channel
187
- intensities were mapped from initial values to 255. Similarly, Red and Green channel pixel values less than a certain
188
- range were also tweaked towards higher range. This technique of adjusting intensity values in each channel to new
189
- values in an output image helped in highlighting poorly contrasted nuclei regions from cell cytoplasm and background
190
- noise. We assigned a higher value to blue intensity pixels relative to red and green components because blue-ratio is
191
- proven to be capable of highlighting nuclei regions in H&E stained histopathological images [33]. This step is followed
192
- by color normalization so that the intensity values follow normal distribution and as well remove any noise/artefact
193
- that may have been introduced due to contrast enhancement. For the next step, we computed the binary image and
194
- calculated the convex hull of the labelled region having the highest number of pixels. Convex hull of the binary
195
- image ensured that the largest area containing most blue pixels is retained and the defined boundary of the nuclei can
196
197
- Figure 2: Block Diagram of proposed OLConvNet. Raw training images of cell nuclei is passed through Branch-1 of
198
- the network for DL feature extraction and further for classification using fully connected (FC1) and softmax layer of
199
- the DL network (OUTPUT-1). OL features are extracted from segmented nuclei images after segmentation pipeline.
200
- Extracted OL features are classified in Branch 2. Switch between branch 1 and 2 make the decision about which kind
201
- of output we would want for our dataset (OUTPUT-1 or OUTPUT-2 or Both).
202
- be obtained for calculating OL features. In other words, the perturbation in nuclei structure due to staining process
203
- may distort original nuclei structures, so obtaining a convex hull defines the smooth boundary around the nucleus.
204
- This further helps in following procedural steps of extracting OL features. Convex hull step is then followed by edge
205
- extraction of the convex hull. Lastly, we did the scalar multiplication of the resultant image with the original image to
206
- obtain the final output of a segmented RGB Nuclear image. Segmentation results helped in delineating nuclei region
207
- from the surrounding tissues. Figure 3 shows the pipeline of segmentation. Some of the segmented classwise nuclei
208
- examples are shown in Figure 4.
209
- Figure 3: Segmentation Pipeline of our network
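To make the pipeline above concrete, the following is a minimal sketch of the segmentation steps using NumPy and
scikit-image. The rescaling ranges, the Otsu threshold, and the helper name segment_nucleus are illustrative assumptions,
not the exact settings used in this work.

import numpy as np
from skimage import exposure, filters, measure, morphology

def segment_nucleus(rgb):
    """Illustrative sketch: contrast adjustment, binarisation, largest region,
    convex hull, and masking of the original RGB patch."""
    img = rgb.astype(np.float64) / 255.0
    # Emphasise the hematoxylin-stained (blue) nuclei and normalise the result
    blue = exposure.rescale_intensity(img[..., 2], in_range=(0.2, 0.9))
    blue = (blue - blue.mean()) / (blue.std() + 1e-8)
    blue = exposure.rescale_intensity(blue, out_range=(0.0, 1.0))
    # Binarise and keep the connected component with the most pixels
    labels = measure.label(blue > filters.threshold_otsu(blue))
    if labels.max() == 0:
        return np.zeros_like(rgb)
    counts = np.bincount(labels.ravel())[1:]
    largest = labels == (np.argmax(counts) + 1)
    # The convex hull smooths the nuclear boundary distorted by staining
    hull = morphology.convex_hull_image(largest)
    # Scalar multiplication of the mask with the original image gives the segmented nucleus
    return (rgb * hull[..., None]).astype(rgb.dtype)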
210
211
Algorithm 1 OLConvNet
Input: Training data set D_tr with m samples. Each instance f_i(X), i = 1,...,m, lives in the 3-dimensional image space
       X in R^(U x V x d), and y_i in Y = {1,2,3,4} is the class identity label associated with f_i(X).
Output: Output1: y_hat_cnn, the 1-of-4 softmax output from CNN_3L; Output2: y_hat_mlp, the 1-of-4 softmax output from the MLP.

Pre-processing (balancing the dataset): {f_i(X), y_i}_M <- ADASYN({f_i(X), y_i}_m)
    // M = number of samples after balancing; ADASYN is the function used to balance the dataset
for x_i in X do
    Segmentation: x_i^s <- SegN(x_i)                              // SegN: the segmentation pipeline
    Object-level features: (FV_OL^i)_(M x N1) <- OLFV(x_i^s)      // OLFV extracts nuclei features; N1 = number of OL features

Stage-wise training:
for epoch e in (1, 100) do
    for batch b do
        for x_i in b do
            CNN inference: y_hat_i^cnn <- CNN_w_cnn(x_i)          // w: weight parameters
            dL/dw_cnn <- (y_hat_i^cnn, y_i)                       // L: cross-entropy loss
            Compute dL/dw_cnn using backpropagation
            Update CNN: w_cnn <- w_cnn - eta * dL/dw_cnn          // eta: learning rate
Extract features: {FV_cnn}_(M x N2) <- CNN_3L({f(X)}, layer)      // N2 = number of CNN features
Concatenation: {TotalFeatures}_(M x (N1+N2)) <- FV_cnn + FV_OL
for TF_i in TotalFeatures do
    MLP inference: y_hat_i^mlp <- MLP_w_mlp(TF_i)
    dL/dw_mlp <- (y_hat_i^mlp, y_i)
    Compute dL/dw_mlp using backpropagation
    Update MLP: w_mlp <- w_mlp - eta * dL/dw_mlp

End-to-end training:
for epoch e in (1, 100) do
    for batch b do
        for x_i in b do
            CNN inference: y_hat_i^cnn <- CNN_w_cnn(x_i)
            dL/dw_cnn <- (y_hat_i^cnn, y_i)
            Compute dL/dw_cnn using backpropagation
            Update CNN: w_cnn <- w_cnn - eta * dL/dw_cnn
            Extract features: {FV_cnn^i}_(1 x N2) <- CNN_3L({f_i(X)}, layer)
            Concatenation: {TotalFeatures_i}_(1 x (N1+N2)) <- FV_cnn^i + FV_OL^i
            MLP inference: y_hat_i^mlp <- MLP_w_mlp(TotalFeatures_i)
            dL/dw_mlp <- (y_hat_i^mlp, y_i)
            Compute dL/dw_mlp using backpropagation
            Update MLP: w_mlp <- w_mlp - eta * dL/dw_mlp
272
- Figure 4: Example segmented-images of different classes of nuclei starting from first row to fourth: Epithelial nuclei,
273
- Fibroblasts, Inflammatory nuclei, and Miscellaneous
274
275
- 3.2 Object Level Feature Extraction
276
- We have extracted a set of nine features. These nine features include color, texture and shape information of the nuclei
277
- calculated on the segmented nuclei image. Color information contains the pixel intensity value that has the highest
278
- frequency in the histogram. Since the nuclei of the cells take the shades of blue after H&E staining, the area which
279
has the maximum intensity of blue would be the area inside the nucleus. If the intensity with the maximum frequency
is not in the blue range, the region is not a nucleus, and this acts as a differentiating factor when classifying nuclei
against miscellaneous or poorly segmented regions. Texture information consists of GLCM texture features [34].
282
- Among many GLCM texture features, we calculated four statistical texture features which are Contrast, Homogeneity,
283
- Correlation, and Energy of the nucleus surface. Texture, being a fundamental property of tissue surfaces, helps to
284
- differentiate between different types of cells such as epithelial, fibroblasts and lymph node cells. In papers [9], [11]
285
- and [12] authors described the shape and morphology features of different classes of nuclei. The considered shape and
286
- morphological features in this paper are the areas, major axis length, minor axis length, and eccentricity.
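A possible implementation of these nine features with scikit-image is sketched below; the GLCM settings (a single offset
and angle) and the helper name object_level_features are assumptions rather than the exact configuration used here.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def object_level_features(segmented_rgb):
    """Nine OL features: dominant intensity, four GLCM statistics, four shape measures."""
    gray = (rgb2gray(segmented_rgb) * 255).astype(np.uint8)
    mask = gray > 0
    # Colour: the intensity value with the highest frequency inside the segmented nucleus
    dominant = int(np.bincount(gray[mask], minlength=256).argmax()) if mask.any() else 0
    # Texture: GLCM contrast, homogeneity, correlation and energy of the nucleus surface
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p)[0, 0])
               for p in ("contrast", "homogeneity", "correlation", "energy")]
    # Shape: area, major/minor axis lengths and eccentricity of the largest region
    regions = regionprops(label(mask))
    if regions:
        r = max(regions, key=lambda p: p.area)
        shape = [r.area, r.major_axis_length, r.minor_axis_length, r.eccentricity]
    else:
        shape = [0.0, 0.0, 0.0, 0.0]
    return np.array([dominant, *texture, *shape], dtype=np.float64)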
287
- 3.3 Convolutional Feature Extraction
288
CNN Architecture: The general CNN architecture was first proposed by Yann LeCun et al. in their paper [35]. In a CNN,
unlike traditional machine learning approaches that take features as input to classify the data into categories, raw
images are used as input. The network operates on the input images by learning filter weights and modifies itself until
convergence.
291
- The basic architecture of CNN comprises seven major layers, namely, Input image layer, Convolution layer, ReLU
292
- layer (Non-Linearity), Pooling (Local Max), Fully connected layer, Softmax Layer, and Classification Layer.
293
An exhaustive theoretical account of CNNs can be found in [36]. These layers, when combined in a pattern, create
deeper networks with an increased number of hidden units. The deeper the network, the more exhaustive the set of
features that can be extracted from the image [37]. However, this holds only partially and depends largely on the properties
296
- of the datasets. For example, the size of the image data should be large enough to be processed into a meaningful
297
- representation in case of architectures with great depths. Also, in a few cases, the number of parameters resulting
298
- due to large depths pose a major disadvantage in terms of computational load and efficiency of the algorithm. Hence,
299
- we did experiments to develop a custom three layer convolutional network to extract features from the nuclei dataset.
300
- We named it CNN 3Lfor ease of reference. Figure 2 shows the network in detail. After training the whole network
301
- for a set number of epochs, the final network yields the best set of features which are then further progressed to the
302
- classification layer. For the purpose of extracting DL features, it is preferred to extract features from the last layer
303
- located before the fully connected layer since the final convolution layer features are a more specific representation
304
- of the dataset images. Our proposed setup also has the flexibility to change the backbone architecture from shallow
305
- three-layer network to any number of layers.
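The framework used by the authors is not stated (the tansig activation in Section 3.5 suggests MATLAB), so the following
PyTorch sketch of CNN_3L is only illustrative; the filter sizes and counts follow Section 4.2, while the padding and the
fully connected width are assumptions.

import torch
import torch.nn as nn

class CNN3L(nn.Module):
    """Illustrative three-layer backbone: 5x5x100, 3x3x50, 3x3x100 convolutions,
    each followed by ReLU and 2x2 max pooling with stride 1 (see Section 4.2)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 100, kernel_size=5), nn.ReLU(inplace=True), nn.MaxPool2d(2, stride=1),
            nn.Conv2d(100, 50, kernel_size=3), nn.ReLU(inplace=True), nn.MaxPool2d(2, stride=1),
            nn.Conv2d(50, 100, kernel_size=3), nn.ReLU(inplace=True), nn.MaxPool2d(2, stride=1),
        )
        # With the valid-padding assumption, a 27x27 patch yields 16x16x100 activations after the last pooling layer
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 100, num_classes))

    def forward(self, x, return_features: bool = False):
        feats = self.features(x)              # DL feature maps from the last max pooling layer
        if return_features:
            return torch.flatten(feats, 1)    # flattened DL feature vector for fusion with OL features
        return self.classifier(feats)         # OUTPUT-1: 1-of-4 class scores (softmax is applied in the loss)

For feature extraction, the flattened activations of the last pooling layer are used, mirroring the statement above that
features are taken from the last layer before the fully connected layer.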
306
- 3.4 Fusion
307
- We get two set of features which are globally extracted from segmented nuclei (OL features) and automatically ex-
308
- tracted features that include both local and global information (DL features) of the image. We have concatenated
309
- these two sets along the axis that contained the feature vector of a sample and produced one combined set for further
310
- classification (Fig. 5). This exhaustive set of features are then used as input to the first MLP layer for the purpose of
311
- categorizing them into four classes.
312
- Figure 5: Diagram displaying fusion of OL and DL features
313
314
- 3.5 Classification
315
- For classification, we have used Multi-Layer Perceptron (MLP). This process is called Transfer Learning where CNN
316
- is used only for feature extraction while the next step of classification is performed by another machine learning
317
- algorithm such as MLP for multiclass classification. In this paper, we used the MLP network with one input layer, one
318
- hidden layer having ten nodes, and one output layer. Combination of features is fed as an input to the first MLP layer.
319
The hidden layer uses tansig as its activation function, defined by

    \mathrm{tansig}(n) = \frac{2}{1 + \exp(-2n)} - 1,                                  (1)

where n is the input vector to the hidden layer. This activation function is faster than tanh, although the numerical
differences are small [38]. The output prediction scores from the output layer of the MLP are given by the softmax
function [39], [40], defined as

    p_j(\hat{y}(x)) = \frac{\exp(\hat{y}_j(x))}{\sum_k \exp(\hat{y}_k(x))},            (2)

where \hat{y}_j(x) denotes the j-th element of the output vector \hat{y}(x). We performed a 2-fold cross-validation test
on our network
328
- to observe the efficacy of our model.
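As a sketch only (assuming PyTorch and the CNN3L class from the earlier snippet), the fusion of Section 3.4 and the MLP
of this section can be written as follows; tansig is replaced by tanh, to which it is mathematically identical, and the
25,600-dimensional DL feature length is an assumption.

import numpy as np
import torch
import torch.nn as nn

def fuse(dl_features: np.ndarray, ol_features: np.ndarray) -> np.ndarray:
    """Concatenate per-sample DL feature vectors with the nine OL features (Fig. 5)."""
    return np.concatenate([dl_features, ol_features], axis=1)

def make_mlp(fused_dim: int, num_classes: int = 4) -> nn.Module:
    """One hidden layer of ten nodes; tansig(n) = 2/(1 + exp(-2n)) - 1 equals tanh(n)."""
    return nn.Sequential(
        nn.Linear(fused_dim, 10),
        nn.Tanh(),
        nn.Linear(10, num_classes),   # class scores; the softmax of Eq. (2) is applied inside the loss
    )

# Example with assumed dimensions: a 25,600-d DL vector fused with 9 OL features
# mlp = make_mlp(25_600 + 9)
# logits = mlp(torch.randn(8, 25_609))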
329
- 4 Experimental Setup
330
- 4.1 Database
331
- For our experiments, we have taken the database from [32]. This database has a total of 100 H&E stained histology
332
- images of size 500 by 500 pixels. The center coordinate location of each nucleus in each image has been provided
333
- as ground truth. Different types of nuclei are divided into sets belonging to four classes which are Epithelial nuclei,
334
- Fibroblast nuclei, Inflammatory nuclei and the rest of the small types are categorized as one class called ’miscella-
335
- neous’. We obtain subimages of size 27x27 extracted around the center coordinate locations and maintained them
336
- in four sub-folders segregated by class types. Four classes in the dataset have 7,722 Epithelial, 5,712 Fibroblasts,
337
- 6,970 Inflammatory, and 2039 miscellaneous nuclei totaling up to 22,444 nuclei in 100 H&E stained histopathological
338
- images. The number of samples in each class are imbalanced which can cause classification results biased towards the
339
- majority class ( the class having the highest number of samples). A systematic study of the class imbalance problem
340
- in convolutional neural networks by authors in [41] have concluded through their research that the class imbalance
341
- problem, if not addressed may have a detrimental effect on classification performance. The influence of imbalance
342
- on classification performance increases with the size of the dataset. They also reported that in case of convolutional
343
- neural networks, the method that works best for eliminating class imbalance problem is oversampling with thresh-
344
- olding which opposed to other machine learning models, does not cause overfitting in CNN. Thus, we performed
345
- our experiments after balancing our dataset. We used adaptive synthetic sampling for eliminating class imbalance by
346
- synthetically creating new examples in the vicinity of the boundary between the two classes than in the interior of
347
- the minority class via linear interpolation between existing minority class examples [42]. So, after creating synthetic
348
- examples for minority classes (Fibroblast, Inflammatory, and miscellaneous) with respect to number of samples in
349
- majority class (Epithelial) we accumulated 29,771 total data points having 7,722, 7,275, 6,970, and 7804 samples
350
- in class 1 (Epithelial), class 2 (Fibroblast), class 3 (Inflammatory), and class 4 (Miscellaneous), respectively. After
351
- acquiring the balanced nuclei dataset, for the purpose of training, validation, and testing, we divided each class in
352
- the ratio 0.7:0.15:0.15, respectively. All the networks are evaluated on the testing set and performance metrics are
353
- reflected in Section V .
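A sketch of this data preparation, assuming the imbalanced-learn and scikit-learn packages (ADASYN here operates on
flattened patches, which may differ from the authors' exact procedure):

import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.model_selection import train_test_split

def balance_and_split(X, y, seed=0):
    """X: (n, 27*27*3) flattened nucleus patches; y: labels in {1, 2, 3, 4}."""
    X_bal, y_bal = ADASYN(random_state=seed).fit_resample(X, y)   # synthetic minority samples
    # 0.7 / 0.15 / 0.15 stratified split into training, validation and test sets
    X_tr, X_rest, y_tr, y_rest = train_test_split(
        X_bal, y_bal, train_size=0.70, stratify=y_bal, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(
        X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (X_tr, y_tr), (X_val, y_val), (X_te, y_te)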
354
- 4.2 CNN 3Land Parameter Settings
355
- We used the CNN framework with three convolutional layers and one fully connected layer as shown in Figure 2.
356
- Traditionally, CONV RELU layers are stacked, which are then followed by POOL layer. This pattern is repeated until
357
- we get a relatively small number of parameters from the image or when the optimal performance is achieved. The
358
- aim to stack up layers is to extract a more exhaustive set of features. The last fully-connected layer holds the output,
359
- such as the class scores. In our research work, experiments with a different number of layers yielded the optimal value
360
- of three convolution layers and one fully connected layer. We have used our knowledge about the problem, applied
361
- a heuristic approach to select parameters, and observed the outputs. Based on the outputs obtained, we tweaked
362
- the parameters. This process was repeated until we got the optimal set. The architecture was trained with different
363
- hyperparameters such as the number of convolutional layers and fully connected layers, learning rate and the number
364
- of epochs. The accuracy achieved at each instance was recorded and plotted as shown in figures 6a and 6b. The plots
365
366
- show the accuracy values on the Y-axis corresponding to the number of epochs on X-axis for each type of architecture
367
- taken into consideration. Here, NCMFCdenotes N Convolutional layers and M Fully Connected layers. The two
368
- figures show observations recorded from two learning rates.
369
Figure 6: Epochs vs. accuracy at different hyperparameters: (a) accuracy at learning rate 1e-4; (b) accuracy at learning rate 1e-5.
372
From the experiments, we found that neither stacking up more layers nor decreasing the learning rate had a
positive impact on the performance of the network. With a learning rate of 1e-4, three convolutional layers and one
fully connected layer, the accuracy recorded at epoch 100 was the highest (refer to Figure 6a). Increasing the
learning rate further to 1e-3 for CNN_3L decreased the accuracy to 61.77% at epoch 70 and further down to 59.41%
at epoch 100.
377
For training the final optimal network (CNN_3L), we normalized our dataset images by subtracting the mean image of
the training set from all training set images before using them as input vectors. If the input training vectors are not
scaled, the distribution range of the values is likely to differ for each feature, and thus the corrections applied in
each dimension after each epoch (via the learning rate) will differ between dimensions. This can cause the weight of one
dimension to be over-compensated (very high variance from the mean weight) while another is under-compensated (very low
variance from the mean weight). This is a non-ideal situation, since it becomes difficult to center the weight space,
and it also hurts time efficiency, as the optimization travels too slowly towards a good optimum.
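The normalization itself is a single mean-image subtraction; a minimal sketch (array shapes are illustrative):

import numpy as np

def mean_image_normalize(X_train: np.ndarray, X_test: np.ndarray):
    """Subtract the per-pixel mean of the training set from both splits, e.g. for (n, 27, 27, 3) patches."""
    mean_image = X_train.mean(axis=0)          # computed on the training set only
    return X_train - mean_image, X_test - mean_image   # the same training mean is reused at test time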
384
In our architecture CNN_3L, we used 5 x 5 filters with 100 channels, 3 x 3 filters with 50 channels, and 3 x 3 filters
with 100 channels for the three convolutional layers, respectively. We experimented with different filter window sizes
and numbers of filters and observed little to no improvement in performance; for example, an experiment with filter
numbers 32, 64 and 128 on the three convolutional layers of the optimized CNN_3L achieved an F1-score of 0.7907. There
is an essentially unlimited number of parameter permutations to choose from; we performed numerous experiments to settle
on CNN_3L as our optimal network, but a theoretical justification is out of the scope of this work. Each convolution
layer is followed by a ReLU layer, which does not change the dimensions of its output. Next is a max pooling layer with
size 2 and stride 1. Since our network depth is small, keeping stride 1 in the max pooling layer was a design choice to
avoid loss of information. The total number of learnable parameters is only 118,000, as compared to
393
- heavier networks like AlexNet ( 56,000,000), VGG16 ( 134,000,000), VGG19( 139,000,000), ResNet50( 23,000,000),
394
- InceptionV3( 22,000,000), and DenseNet121( 7,000,000). After fixing the network architecture, first branch 1 of the
395
network was trained. The trained Branch 1 (CNN layers) was tested on the test set and the results (OUTPUT-1) are
accumulated in Table 1. Applying a transfer learning approach [43], we then used the MLP for the final classification
outcome (OUTPUT-2) instead of the 1-of-4 classification output from the softmax layer, with the trained CNN backbone
used as a fixed feature extractor. This approach is stage-wise learning, and the OUTPUT-2 observations are recorded in
Table 2 under the column named "stage-wise".
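In the stage-wise setting the trained backbone is frozen and only used to produce features for the MLP; a minimal sketch,
building on the CNN3L snippet above, is:

import torch

@torch.no_grad()
def extract_dl_features(cnn, loader):
    """Run the trained, frozen CNN over a data loader and collect the flattened DL features."""
    cnn.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(cnn(x, return_features=True))   # activations of the last max pooling layer
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

# The returned features are then concatenated with the OL features and used to train the MLP (Branch 2).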
400
- 4.3 End-to-End Training
401
We performed end-to-end training of our three-layer CNN architecture (CNN_3L) to show the performance of our network
when the learning happens end-to-end rather than stage-wise. By end-to-end learning, we mean that both the
CNN network (CNN_3L) and the MLP network were trained together, using the intermediate outputs of CNN_3L as inputs to
404
- the MLP network. So, MLP network received two sets of input features, activated features from the last Max pooling
405
layer of CNN_3L and the other from the pre-calculated handcrafted features. These two sets of features were
concatenated before being given as input to the MLP layers. At the end of training, two outputs were accumulated, one
407
408
- from the CNN 3L1 of 4 softmax outputs and another from the MLP layer classification. The diagram showing the
409
- architectural setup is shown in Figure 2. We performed two experiments in different settings.
410
- 1. Branch 1 (CNN layers) of the network (refer Figure 2) was trained keeping branch 2 (MLP layers) inactive
411
- by turning off the switch and the results of 1-of-4 classification layer of CNN was recorded.
412
- 2. Both branch 1 and branch 2 were trained simultaneously to obtain two outputs, one from CNN layers
413
- (OUTPUT-1) and second from MLP layers (OUTPUT-2) (refer Figure 2). The switch was connected in
414
- this case.
415
- We have reported the output of MLP classification of combined feature set (OUTPUT-2) in Table 2.
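A sketch of one end-to-end update, reusing the CNN3L and make_mlp snippets above; the equal weighting of the two
cross-entropy terms is an assumption, since only a combined loss is described here.

import torch
import torch.nn as nn

def end_to_end_step(cnn, mlp, optimizer, x, ol_feats, y):
    """One joint update of Branch 1 (CNN) and Branch 2 (MLP) on the summed cross-entropy losses.
    The optimizer is assumed to be built over list(cnn.parameters()) + list(mlp.parameters())."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    feature_maps = cnn.features(x)                                         # shared convolutional activations
    logits_cnn = cnn.classifier(feature_maps)                              # OUTPUT-1
    fused = torch.cat([torch.flatten(feature_maps, 1), ol_feats], dim=1)   # shared concatenation layer
    logits_mlp = mlp(fused)                                                # OUTPUT-2
    loss = criterion(logits_cnn, y) + criterion(logits_mlp, y)             # assumed equally weighted joint loss
    loss.backward()
    optimizer.step()
    return loss.item()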
416
- 5 Results and Discussion
417
- The aim of this work is to highlight the different outcomes of classifying the database with different settings. To
418
- represent the outcomes in order of increasing classification performance we have divided our results in the following
419
- categories.
420
- 1. Classification output of only deep learning features from softmax layer of CNN 3L
421
- 2. Classification results of combined feature set using MLP (O LConvNet) - Stagewise.
422
- 3. Classification results of combined feature set using MLP (O LConvNet) - End-to-end.
423
- 5.1 Results
424
- Data imbalance has been eliminated using the method described in Section 4.1. We evaluated the proposed model
425
- on the basis of F1-scores and multiclass AUC. Multiclass AUC is calculated using the prediction scores given by the
426
- softmax function described in (2).
427
- Table 1 shows the comparative classification performance of MLP and softmax CNN networks, separately. OL features
428
- classified the nuclei classes using MLP classifier (Branch - 2 of Fig. 2) while DL features were propagated along the
429
- fully connected and classification layers of CNN networks (Branch - 1 of Fig. 2). To visualize the network flow
430
- in both OL classification and deep learning classification, connection through the switch in Fig. 2 was broken to
431
- create two separate branches where, both the branches were trained mutually exclusively. We compared the OL
432
- features performance with CRImage [44] (Table 1) which also calculates features after the segmentation of the nuclei
433
- using basic thresholding, morphological operations, distance transform, and watershed. CRImage also calculates
434
- statistical features from the segmented nucleus but lacks visual, shape and texture features. Besides statistical features,
435
- incorporation of histopathologically deduced features such as nuclei size, shape, area, color, and texture hold direct
436
- relevance to the dataset and therefore the absence of such features prevents CRImage to perform well on the dataset.
437
- So, from the observations reported in Table 1, we deduced that our OL features, though only a small number, performed
438
- better than [44]. Deep models CNN 3L, AlexNet, VGG16, VGG19, ResNet50, InceptionV3, and DenseNet121 when
439
- tested (without OL features) on the dataset produced F1-score and AUC better than OL features by a huge margin due
440
- to exhaustive feature set produced after convolution operations. These features are non-interpretable but have been
441
- known to perform quite well in classification tasks. Hence, Table 1 reports the individual performance of OL features
442
- and DL features and proves the statements made in the ’Introduction’ Section about the need to shift from traditional
443
- feature engineering to convolutional feature learning.
444
- Next, we trained the network in Fig. 2 stage wise. Features from trained deep learning networks from the previous
445
- experiment were concatenated with OL handcrafted features. Then, the second stage of the training of concatenated
446
- features was done by MLP classifier. The results obtained on test dataset after MLP training was then reported as
447
- combined features classification performance. End to end training followed stagewise experiments to analyze the
448
- effect of the performance of our network when trained in two ways. We have shown the difference through F1-score,
449
- multiclass AUC and cross-entropy loss in Table 2 which reports performance of classification on a 2-fold cross-
450
- validation experiment. The obtained results from stage-wise training, as expected, showed improvement in F1-Score
451
- and Multiclass AUC in comparison to individual results obtained after classifying with only DL features and only OL
452
- features (Table 1). While there is only a 2% increase in F1-score and 1% increase in AUC score in case of CNN 3L,
453
- a marked improvement has been recorded in case of deeper pre-trained architectures. Whereas, in the case of end-2-
454
- end training, no improvement in the performance metrics has been observed in any of the backbone networks. Also,
455
- the higher loss value recorded in the case of end-2-end approach reflects decreased performance in comparison to
456
- the stage-wise approach. These cross-entropy loss values highlight that the joint loss propagation after the shared
457
- layer (concatenation layer) might have affected the overall performance of the model. The additional OL feature
458
459
Table 1: Comparison between methods stratified by classifier

Method (Backbone)    Classifier         Precision   Recall   F1-Score   Multiclass AUC
Only Object Level    MLP                0.5154      0.5156   0.5135     0.7857
CRImage [44]         SVM (RBF kernel)   *           *        0.4880     0.6840
CNN_3L               Softmax            0.8043      0.8046   0.8040     0.9441
AlexNet              Softmax            0.8280      0.8281   0.8216     0.9386
VGG16                Softmax            0.8693      0.8699   0.8689     0.9757
VGG19                Softmax            0.8575      0.8581   0.8578     0.9701
ResNet50             Softmax            0.8900      0.8893   0.8892     0.9799
InceptionV3          Softmax            0.8164      0.8175   0.8175     0.9538
DenseNet121          Softmax            0.8784      0.8706   0.8706     0.9756
(* data unavailable)
471
- set concatenated simultaneously during training on the shared concatenation layer did not improve the discriminative
472
- property of the DL features and hence, the subsequent MLP layers performed in the same way as the fully connected
473
- and softmax layers of DL models.
474
Fig. 7 shows the ROC curves obtained after end-to-end training on all seven backbone networks. Subfigures 7a-7g show
the ROC curves of the four classes for each backbone, discriminated through four colors, together with the AUC value for
each class. The micro and macro averages of the four per-class ROC curves show similar values because our dataset is
balanced: the micro-average method sums the true positives, false positives, and false negatives over the different
sets, whereas the macro-average method averages the per-class precision and recall; the micro average is preferred when
there is a class imbalance problem. After recording the values, we observed a dip of 2% to 4% for class 2 (Fibroblast)
and class 3 (Inflammatory), whereas the AUC values for classes 1 (Epithelial) and 4 (Miscellaneous) are comparable
across all networks. The decrease in performance can be attributed to the number of samples, which is relatively lower
for class 2 (7,275) and class 3 (6,970) than for class 1 (7,722) and class 4 (7,804). Also, in the case of Fibroblasts
(class 2), the long spindle-shaped cell barely has a visible nucleus, which prevents segmentation algorithms from
detecting the nucleus area effectively. Consistent class-wise performance across all backbones and high AUC are further
observations deduced from Figure 7.
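For reference, a hedged sketch of how such F1 and micro/macro AUC values can be computed with scikit-learn from the
softmax scores of Eq. (2); the function name and the zero-based class indices are illustrative.

import numpy as np
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.preprocessing import label_binarize

def multiclass_metrics(y_true, y_score, classes=(0, 1, 2, 3)):
    """y_true: integer labels; y_score: (n, 4) softmax probabilities."""
    y_pred = np.asarray(y_score).argmax(axis=1)
    f1_macro = f1_score(y_true, y_pred, average="macro")
    auc_macro = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
    # Micro-averaging requires binarised labels (one-vs-rest indicator matrix)
    y_bin = label_binarize(y_true, classes=list(classes))
    auc_micro = roc_auc_score(y_bin, y_score, average="micro")
    return f1_macro, auc_macro, auc_micro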
487
2-fold cross-validation performance metrics for CNN_3L along with the deep architectures AlexNet_H, VGG16_H, VGG19_H,
ResNet50_H, InceptionV3_H, and DenseNet121_H, where H stands for Hybrid, are reported in Table 3 and Figure 8. The
489
- OLConvNet metrics recorded in Table 3 are stagewise observations from OUTPUT - 2 of the network obtained after
490
- combined feature set testing. We compared our results with Softmax CNN + SSPP (Standard Single Patch Predictor)
491
- and Softmax CNN + NEP (Neighboring Ensemble Predictor) [32] architectures used for classification of nuclei from
492
- the nuclei dataset cited by [32]. The authors in this article worked on the theory that the pixel of interest is likely
493
- the center of the nucleus and using this theory they formulated the classification algorithm by spatially constraining
494
- the high probability pixels in the vicinity of the centers of nuclei. They proposed two algorithms called Neighboring
495
- Ensemble Predictor (NEP) and Standard Single Patch Predictor(SSPP) which when coupled with SC-CNN (Spatially
496
Constrained CNN) produced the classification results mentioned in Table 3. The drawback of their method is that it
builds a complex, target-specific model, which leads to lower classification performance. Their CNN architecture is
custom, but they did not test their methodology on deeper pre-trained architectures, which might have performed better
499
- with or without their elaborate model. Further comparison with superpixel descriptor [15] and CRImage on the same
500
- dataset show that both the methods performed relatively poor and that our method exhibited a higher performance
501
- regardless of the backbone architecture used. The motivation behind devising Superpixel descriptor was to distinguish
502
- the area with the different histologic pattern. This descriptor lacks direct features related to the visual appearance of
503
the nucleus, like color, texture, and shape, and thus yielded lower classification performance. CRImage [44] only
calculates segmented nuclei features, which are insufficient to classify complex nuclei patterns
in the presence of weak staining or overlapping boundaries. This prevents CRImage from performing well on this
506
- dataset. We have shown the same through our only OL features classification results in Table 1. Figure 8 reports
507
- comparative class-wise classification performance of all the methods. F1-score of Miscellaneous class show a marked
508
- jump in values obtained from O LConvNet and an improvement of more than 47% is recorded between the highest
509
- and lowest performing algorithms which are ResNet50 and CRImage, respectively. For Epithelial and Fibroblast, the
510
511
Figure 7: Test ROC curves from the seven backbone networks using OLConvNet (end-to-end learning): (a) CNN_3L, (b) AlexNet, (c) VGG16, (d) VGG19, (e) ResNet50, (f) InceptionV3, (g) DenseNet121.
519
520
Table 2: Performance parameters from stage-wise and end-to-end learning experiments (OUTPUT-2)

Method      Backbone        Stage-wise                        End-to-end
                            F1-score   AUC      Loss          F1-score   AUC      Loss
OLConvNet   CNN_3L          0.8243     0.9587   0.1211        0.8084     0.9500   1.2750
            AlexNet_H       0.9542     0.9903   0.0915        0.8359     0.9600   1.0860
            VGG16_H         0.9569     0.9923   0.1036        0.8731     0.9700   0.9407
            VGG19_H         0.9546     0.9879   0.1221        0.8675     0.9700   1.1213
            ResNet50_H      0.9677     0.9973   0.0272        0.8892     0.9799   0.7335
            InceptionV3_H   0.9618     0.9963   0.0309        0.8175     0.9538   1.0795
            DenseNet121_H   0.9616     0.9961   0.0318        0.8706     0.9756   0.7972
530
class-wise performance of CNN_3L was marginally lower than SSPP, and a difference of 4% to 7% was recorded
with NEP, whereas with the deeper DL models used in OLConvNet the F1-scores are consistently better than all
the algorithms used for comparison. The same can be seen in Figure 8, where the class-wise performance of the
OLConvNet backbones is consistently better than the contemporary algorithms.
534
Table 3: Comparative performance parameters for nucleus classification

Method                       Backbone        Precision   Recall   F1-Score   Multiclass AUC
Softmax CNN + SSPP [32]      CNN             *           *        0.7480     0.8930
Softmax CNN + NEP [32]       CNN             *           *        0.7840     0.9170
Superpixel Descriptor [15]   -               *           *        0.6870     0.8530
CRImage [44]                 -               *           *        0.4880     0.6840
OLConvNet                    CNN_3L          0.8241      0.8245   0.8243     0.9587
                             AlexNet_H       0.9578      0.9577   0.9578     0.9953
                             VGG16_H         0.9611      0.9610   0.9610     0.9960
                             VGG19_H         0.9544      0.9548   0.9546     0.9879
                             ResNet50_H      0.9676      0.9678   0.9677     0.9973
                             InceptionV3_H   0.9618      0.9618   0.9618     0.9963
                             DenseNet121_H   0.9616      0.9617   0.9616     0.9961
(* data unavailable)
548
- 5.2 Observations and Discussions
549
- The experiments performed in this study gave some interesting points for discussion. Formally, whenever we per-
550
- formed deep learning based classification tasks, our focus generally remained on improving the classification perfor-
551
- mance of the architecture by fine-tuning or transfer learning. Transfer learning is generally used in the case when there
552
- are fewer samples of similar data. In our case as well, samples were not enough for deep architectures to generalize
553
well on the dataset. Moreover, pre-trained weights for discriminative nuclei types, to transfer-learn on our database,
were also not available. We also chose not to use the publicly available ImageNet pre-trained weights because our
dataset is very dissimilar to ImageNet. Hence, we chose to build our own CNN layers (CNN_3L) and fine-tune them to
556
- produce optimized performance parameters.
557
- DL features from CNN 3Land OL features representing visual and structural properties of nuclei were concatenated
558
- to produce a combined feature set which consequently improved classification results. F1-Score and multiclass AUC
559
- values recorded in Table 1 reflected the individual performance of the traditional handcrafted OL features with MLP
560
- classifier and DL features with Softmax classifier whereas, F1-Score and multiclass AUC values in Table 3 show
561
the performance of the OLConvNet network with the combined feature sets. These observations show that even though the
OL feature set is very small (9 features) relative to the CNN_3L feature length (48,400), the marked difference between
the 1-of-4 softmax classification (OUTPUT-1 of Fig. 2) and the OLConvNet classification (OUTPUT-2 of Fig. 2) reflects the
applicability and importance of including object-specific features. Further, the high performance metric values in Table
565
3, exhibited by the deeper pre-trained architectures (AlexNet_H, VGG16_H, VGG19_H, ResNet50_H, InceptionV3_H, and
DenseNet121_H), show that increasing the depth of the network increases the capability to extract discriminative DL
features, and hence yields better F1-score and AUC than CNN_3L (as reflected in Table 1); but the discriminative ability
of the DL feature set is enhanced many-fold when combined with OL features. This can be verified from Table 2, which
shows an increase of around 10% in the F1-score and AUC values of these pre-trained architectures. However, training
OLConvNet jointly for classification (end-to-end, Table 2) using the combined feature set did not improve the F1-score
and AUC values. This is likely because the network was trained with the combined loss of the CNN classification layer
and the MLP output layer, and this combined cross-entropy loss affected the performance of the joint network. Indeed,
Table 2 shows that the loss values are higher in end-to-end training than in stage-wise training. Hence, the stage-wise
2-fold cross-validation results are preferred over the end-to-end classification. The results also suggest that the
classification performance increases quite dramatically as the number of convolution layers increases from just three
layers in CNN_3L to 121 layers in DenseNet121. In all cases, though, our hypothesis remained true and our architectural
setup provided the room to make modifications in the configuration of the backbone networks.
Figure 8: Comparative results for nucleus classification stratified with respect to class label
581
The question that can be raised at this point is why to use CNN_3L when we are getting better performance with deeper
architectures. In support of CNN_3L, when we look at Table 2 and observe the cross-entropy loss of all backbones in
stage-wise learning, we see that CNN_3L has a loss value comparable with the deeper networks, but the number of
parameters and the training time taken by AlexNet, VGG16, VGG19, ResNet50, InceptionV3, and DenseNet121, shown in Table
4, are much larger than for CNN_3L. This opens up the discussion on the need to use an optimal number of layers to
586
- achieve satisfactory performance by an application instead of very deep CNN models. To further strengthen our point,
587
we know that in deep learning, a number of successful early and recent state-of-the-art architectures such as AlexNet,
VGG16, VGG19, ResNet50, InceptionV3, and DenseNet121 were developed to increase the abstraction in
extracting deep, better-quality feature maps, primarily for the purpose of classification. However, the standard
590
- datasets used to test these networks were real life natural images.
591
- Medical data is a completely different modality that has high variations in features from the same class. Therefore,
592
- the state of the art deep network models on such datasets do not perform well and suffer from high validation and
593
testing loss. Another main reason for their failure is the scarcity of labeled data in the medical image domain.
To cope with these limitations, every new model being developed for medical image data classification
is function-specific and does not address the global problems in the domain. Most of the recent literature on the
596
- classification of structures in histology images do not use raw images for deep learning models. A fair amount of pre
597
- and post processing of data is required to enhance the performance which consequently, hampers the generalization
598
- capability of the model. So, instead of building novel models with less or limited applicability in the cell nuclei image
599
- classification problem, it seemed better to change the way these models are used. With simple architectural change
600
- and introduction of visually and structurally relevant handcrafted feature set, through experiments, we have established
601
- gain in performance values. Our model is flexible in a way that any deep model can be fitted in the architecture to
602
- extract deep features. Another highlight of our work is to show how the addition of a handful of basic OL handcrafted
603
604
- features can bring a notable change in the final classification output. We do not need to design special descriptors or
605
- use complex handcrafted features to complement deep learning features. Instead, a small number of discriminative
606
- OL features in case of nuclei classification like color, texture, and shape can enhance the discriminative capability of
607
- the neural network classifier.
608
- While we agree that the architecture is simple, we first need a fine-tuned and flexible model that scales well with
609
- the number of options of novel models available today. More than that, we need a compact model with less training
610
- time ( a comparatively shallow convolutional network ) that fairly works well and give comparable results with deeper
611
- architectures. We compared the time taken by each of the methods to train in Table 4. We observed that CNN 3L
612
- took the least amount of time and very huge differences were noted when compared with ResNet50, InceptionV3, and
613
- DenseNet121. Time may be regarded as an insignificant paradigm with advanced computer systems available today.
614
- However here, it is important to mention that, we trained our algorithms using three NVIDIA GeForce GTX 1080
615
- Ti GPUs with 11GB RAM each, in parallel. This much computing power is not still really common in most of the
616
- places and, also it becomes difficult to train heavy deep networks in light mobile applications or hand-held systems.
617
- These much computing resources are impossible to accommodate in lighter applications as of yet and hence, shallow
618
- networks that work fairly well for general diagnostic procedures can help in reducing space and time constraints for
619
- such systems. In this research, we are dealing with time sensitive application where the quick delivery of classification
620
- results are important to help pathologists to proceed for further analysis in cancerous tissues and diagnose cancer as
621
- soon as possible. Besides time, we would also like to highlight that our architecture CNN 3Lis not using pre-trained
622
- weights of ImageNet which has several other advantages such as, applications can use the custom size of their dataset
623
- directly without the need of resizing it to conform according to the size of ImageNet data. This is beneficial at times
624
- where very large or very small images may lose quite a significant amount of details when scaled. While one may
625
- argue in this case that we could have trained all DL backbones from scratch without the need of using pre-trained
626
weights. However, these deep networks require a certain minimum image size (48 x 48 in the case of VGG16 and
VGG19, 197 for ResNet50 and 221 for DenseNet121) at the input layer to train. Hence, the limitations of these
state-of-the-art deep learning architectures on complex datasets are too significant to ignore. Additionally, training
such deep networks from scratch would need a very large number of dataset samples for the networks to learn efficiently.
630
Table 4 summarizes the time taken and the number of trainable parameters used by CNN_3L, AlexNet, VGG16, VGG19,
ResNet50, InceptionV3, and DenseNet121 to train on the dataset.
632
Table 4: Time and trainable parameters comparison between backbone architectures

Method        No. of parameters   Training time (seconds)
CNN_3L        118 thousand        303
AlexNet       56 million          3,304
VGG16         134 million         1,408
VGG19         139 million         1,420
ResNet50      23 million          18,951
InceptionV3   22 million          13,314
DenseNet121   7 million           24,515
641
Hence, a simple architectural setup whose components can be modelled as per the dataset requirements, and which combines
OL features with a shallow model that still incorporates the properties of a DL model (CNN_3L), has proved, for a complex
dataset such as ours, to be a better approach than traditional OL models or trending DL algorithms alone, which do not
allow changes in the architecture and require specific configurations to work well. It is important to note
645
that there are no standard datasets yet for histological nuclei images, so comparing the various methods mentioned
in the literature on only one database does not guarantee the expected results. According to the no free lunch theorem,
which is most certainly applicable in the case of medical image databases, there is no global algorithm that could be
developed to give good results across all kinds of histopathological data. Each experiment is conducted with a different
649
- dataset acquired by the research team themselves or from the pathologists, whose property changes with the location
650
- of the disease. Their results are validated using different performance metrics. Hence, standard datasets and ground
651
- truth in the case of complex histopathological cancer images is a current challenge in this field of research.
652
- 6 Conclusion
653
Knowledge about the cell of origin of a tumor may help doctors treat the tumor more effectively, since a correct
classification greatly increases the biological understanding of carcinogenesis. With this motivation, in this paper
655
656
we have built our classification model by emphasizing a hybrid and flexible design that can incorporate the two
wide domains of features: the traditional object-level features such as intensity, texture, and shape, and the
658
- recent deep learning based features. While object-level features have proved their efficiency in various domains and
659
- purposes of biomedical image processing, including cancer-based disease recognition and classification, the current
660
- trend has drastically shifted towards using various deep learning methods. Our work tried to highlight through the
661
- results that using only deep learning might not work in case of all datasets. Therefore, a need to develop a shallow
662
- yet effective architecture such as CNN 3Lincorporated in our proposed skeleton called O LConvNet, that could robustly
663
- combine the benefits of object level features in this study, motivated our work. Moreover, to guarantee a wide range of
664
- applicability, a model that is easy to understand, and deploy is required, also, which has the flexibility to incorporate
665
- both deeper, more efficient networks like AlexNet, VGG16, VGG19, ResNet50, InceptionV3, and DenseNet121, and
666
- shallow, light model like CNN 3Lso that the O LConvNet can be adapted to a wider range of applications. The results
667
- were encouraging and our network performed better than the recent state of the art implementation on the same dataset.
668
- Future work could incorporate better algorithms that combine the best of both worlds, i.e., traditional object-
669
- level and deep learning features. Approaches can also differ in the choice of classifier. In conclusion, our method opens
670
- the possibility of further research in developing more robust nuclei classification models that can scale well on all kinds
671
- of datasets.
672
- 7 Acknowledgments
673
- This research was carried out in Indian Institute of Information Technology, Allahabad and supported by the Ministry
674
- of Human Resource and Development, Government of India. We are also grateful to the NVIDIA corporation for
675
- supporting our research in this area by granting us TitanX (PASCAL) GPU.
676
- References
677
- [1] Xiaowei Chen, Xiaobo Zhou, and Stephen TC Wong. Automated segmentation, classification, and tracking of
678
- cancer cell nuclei in time-lapse microscopy. IEEE Transactions on Biomedical Engineering , 53(4):762–766,
679
- 2006.
680
- [2] Cătoi C. Baba AI. Comparative Oncology , chapter 3. The Publishing House of the Romanian Academy,
681
- Bucharest, 2007.
682
- [3] Cancer Research UK. Types of cancer, November 2017.
683
- [4] Anant Madabhushi and George Lee. Image analysis and machine learning in digital pathology: challenges and
684
- opportunities, 2016.
685
- [5] Andrew H Fischer, Kenneth A Jacobson, Jack Rose, and Rolf Zeller. Hematoxylin and eosin staining of tissue
686
- and cell sections. Cold Spring Harbor Protocols , 2008(5):pdb–prot4986, 2008.
687
- [6] S. Tripathi, S. Mishra, and S. K. Singh. Routine colon cancer detection using local image descriptors. In 2016
688
- IEEE Region 10 Conference (TENCON) , pages 2062–2065, Nov 2016.
689
- [7] Daniele Zink, Andrew H Fischer, and Jeffrey A Nickerson. Nuclear structure in cancer cells. Nature reviews
690
- cancer , 4(9):677, 2004.
691
- [8] Humayun Irshad, Antoine Veillard, Ludovic Roux, and Daniel Racoceanu. Methods for nuclei detection, seg-
692
- mentation, and classification in digital histopathology: a review—current status and future potential. IEEE
693
- reviews in biomedical engineering , 7:97–114, 2014.
694
- [9] H Peter Rodemann and Hans-Oliver Rennekampff. Functional diversity of fibroblasts. In Tumor-Associated
695
- Fibroblasts and their Matrix , pages 23–36. Springer, 2011.
696
- [10] Pin Wang, Xianling Hu, Yongming Li, Qianqian Liu, and Xinjian Zhu. Automatic cell nuclei segmentation and
697
- classification of breast cancer histopathology images. Signal Processing , 122:1–13, 2016.
698
- [11] James Diamond, Neil H Anderson, Peter H Bartels, Rodolfo Montironi, and Peter W Hamilton. The use of mor-
699
- phological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.
700
- Human pathology , 35(9):1121–1131, 2004.
701
- [12] Scott Doyle, Mark Hwang, Kinsuk Shah, Anant Madabhushi, Michael Feldman, and John Tomaszeweski. Auto-
702
- mated grading of prostate cancer using architectural and textural image features. In 2007 4th IEEE International
703
- Symposium on Biomedical Imaging: From Nano to Macro , pages 1284–1287. IEEE, 2007.
704
- [13] Metin N Gurcan, Laura Boucheron, Ali Can, Anant Madabhushi, Nasir Rajpoot, and Bulent Yener. Histopatho-
705
- logical image analysis: A review. IEEE reviews in biomedical engineering , 2:147, 2009.
707
- [14] Laura E Boucheron. Object-and spatial-level quantitative analysis of multispectral histopathology images for
708
- detection and characterization of cancer . University of California at Santa Barbara, 2008.
709
- [15] Korsuk Sirinukunwattana, David RJ Snead, and Nasir M Rajpoot. A novel texture descriptor for detection of
710
- glandular structures in colon histology images. In Medical Imaging 2015: Digital Pathology , volume 9420, page
711
- 94200S. International Society for Optics and Photonics, 2015.
712
- [16] Jun Shi, Jinjie Wu, Yan Li, Qi Zhang, and Shihui Ying. Histopathological image classification with color pat-
713
- tern random binary hashing-based pcanet and matrix-form classifier. IEEE journal of biomedical and health
714
- informatics , 21(5):1327–1337, 2017.
715
- [17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature , 521(7553):436, 2015.
716
- [18] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan,
717
- Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE
718
- conference on computer vision and pattern recognition , pages 1–9, 2015.
719
- [19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
720
- Proceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778, 2016.
721
- [20] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.
722
- arXiv preprint arXiv:1409.1556 , 2014.
723
- [21] Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes Van Diest, Bram Van Ginneken, Nico Karssemeijer,
724
- Geert Litjens, Jeroen AWM Van Der Laak, Meyke Hermsen, Quirine F Manson, Maschenka Balkenhol, et al.
725
- Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast
726
- cancer. Jama , 318(22):2199–2210, 2017.
727
- [22] Jianpeng Zhang, Yong Xia, Yutong Xie, Michael Fulham, and David Dagan Feng. Classification of medical
728
- images in the biomedical literature by jointly using deep and handcrafted visual features. IEEE journal of
729
- biomedical and health informatics , 22(5):1521–1530, 2018.
730
- [23] Haibo Wang, Angel Cruz Roa, Ajay N Basavanhally, Hannah L Gilmore, Natalie Shih, Mike Feldman, John
731
- Tomaszewski, Fabio Gonzalez, and Anant Madabhushi. Mitosis detection in breast cancer pathology images by
732
- combining handcrafted and convolutional neural network features. Journal of Medical Imaging , 1(3):034003,
733
- 2014.
734
- [24] Dan C Cireşan, Alessandro Giusti, Luca M Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast
735
- cancer histology images with deep neural networks. In International Conference on Medical Image Computing
736
- and Computer-assisted Intervention , pages 411–418. Springer, 2013.
737
- [25] Sajjad Muhammad, Khan Salman, Khan Muhammad, Wanqing Wu, Amin Ullah, and Sung Wook Baik. Multi-
738
- grade brain tumor classification using deep cnn with extensive data augmentation. Journal of computational
739
- science , 30:174–182, 2019.
740
- [26] Shui-Hua Wang, Khan Muhammad, Jin Hong, Arun Kumar Sangaiah, and Yu-Dong Zhang. Alcoholism iden-
741
- tification via convolutional neural network based on parametric relu, dropout, and batch normalization. Neural
742
- Computing and Applications , Dec 2018.
743
- [27] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional
744
- networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 4700–4708,
745
- 2017.
746
- [28] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the
747
- inception architecture for computer vision. arxiv 2015. arXiv preprint arXiv:1512.00567 , 1512, 2015.
748
- [29] Daniele Ravì, Charence Wong, Fani Deligianni, Melissa Berthelot, Javier Andreu-Perez, Benny Lo, and Guang-
749
- Zhong Yang. Deep learning for health informatics. IEEE journal of biomedical and health informatics , 21(1):4–
750
- 21, 2017.
751
- [30] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar-
752
- rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the
753
- 22nd ACM international conference on Multimedia , pages 675–678. ACM, 2014.
754
- [31] Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the details:
755
- Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531 , 2014.
756
- [32] Korsuk Sirinukunwattana, Shan e Ahmed Raza, Yee-Wah Tsang, David RJ Snead, Ian A Cree, and Nasir M Ra-
757
- jpoot. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology
758
- images. IEEE Trans. Med. Imaging , 35(5):1196–1206, 2016.
760
- [33] Hang Chang, Leandro A Loss, and Bahram Parvin. Nuclear segmentation in h&e sections via multi-reference
761
- graph cut (mrgc). In International symposium biomedical imaging , 2012.
762
- [34] Robert M Haralick, Karthikeyan Shanmugam, et al. Textural features for image classification. IEEE Transactions
763
- on systems, man, and cybernetics , (6):610–621, 1973.
764
- [35] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and
765
- Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation ,
766
- 1(4):541–551, 1989.
767
- [36] Andrej Karpathy. Stanford university cs231n: convolutional neural networks for visual recognition. URL:
768
- http://cs231n. stanford. edu/syllabus. html , 2018.
769
- [37] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf:
770
- A deep convolutional activation feature for generic visual recognition. In International conference on machine
771
- learning , pages 647–655, 2014.
772
- [38] Thomas P V ogl, JK Mangis, AK Rigler, WT Zink, and DL Alkon. Accelerating the convergence of the back-
773
- propagation method. Biological cybernetics , 59(4-5):257–263, 1988.
774
- [39] Christopher M Bishop. Pattern recognition and machine learning . springer, 2006.
775
- [40] Kevin P Murphy. Machine learning: a probabilistic perspective . MIT press, 2012.
776
- [41] Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in
777
- convolutional neural networks. Neural Networks , 106:249–259, 2018.
778
- [42] Haibo He, Yang Bai, Edwardo A Garcia, and Shutao Li. Adasyn: Adaptive synthetic sampling approach for
779
- imbalanced learning. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress
780
- on Computational Intelligence) , pages 1322–1328. IEEE, 2008.
781
- [43] L Torrey and J Shavlik. Transfer learning. handbook of research on machine learning applications, vol. 3, 2009.
782
- [44] Yinyin Yuan, Henrik Failmezger, Oscar M Rueda, H Raza Ali, Stefan Gr ¨af, Suet-Feung Chin, Roland F Schwarz,
783
- Christina Curtis, Mark J Dunning, Helen Bardwell, et al. Quantitative image analysis of cellular heterogeneity
784
- in breast tumors complements genomic profiling. Science translational medicine , 4(157):157ra143–157ra143,
785
- 2012.
786
- 18
txt/2202.13056.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2203.01944.txt DELETED
@@ -1,954 +0,0 @@
1
- 1
2
- A multi-stream convolutional neural network for
3
- classification of progressive MCI in Alzheimer’s
4
- disease using structural MRI images
5
- Mona Ashtari-Majlan, Abbas Seifi, Mohammad Mahdi Dehshibi
6
- Abstract — Early diagnosis of Alzheimer’s disease and
7
- its prodromal stage, also known as mild cognitive impair-
8
- ment (MCI), is critical since some patients with progressive
9
- MCI will develop the disease. We propose a multi-stream
10
- deep convolutional neural network fed with patch-based
11
- imaging data to classify stable MCI and progressive MCI.
12
- First, we compare MRI images of Alzheimer’s disease with
13
- cognitively normal subjects to identify distinct anatomical
14
- landmarks using a multivariate statistical test. These land-
15
- marks are then used to extract patches that are fed into
16
- the proposed multi-stream convolutional neural network to
17
- classify MRI images. Next, we train the architecture in a
18
- separate scenario using samples from Alzheimer’s disease
19
- images, which are anatomically similar to the progressive
20
- MCI ones and cognitively normal images to compensate for
21
- the lack of progressive MCI training data. Finally, we trans-
22
- fer the trained model weights to the proposed architecture
23
- in order to fine-tune the model using progressive MCI and
24
- stable MCI data. Experimental results on the ADNI-1 dataset
25
- indicate that our method outperforms existing methods for
26
- MCI classification, with an F1-score of 85.96%.
27
- Index Terms — Alzheimer’s disease, Brain-shaped map,
28
- Convolutional Neural Network, Multivariate statistical test,
29
- Transfer learning.
30
- I. INTRODUCTION
31
- ALZHEIMER’S disease (AD) is a progressive neurode-
32
- generative disorder that is one of the leading causes
33
- of dementia in the elderly. According to [1], this disorder
34
- affects over 30 million people worldwide. Early diagnosis
35
- of this disease and its prodromal stage, also known as mild
36
- cognitive impairment (MCI), is crucial since 10% to 15% of
37
- MCI patients progress to AD, which is classified as progressive
38
- MCI (pMCI) [2].
39
- As AD advances, several brain regions develop structural
40
- deformation and atrophy [3]. Structural Magnetic Resonance
41
- Imaging (sMRI) is one of the most widely employed neu-
42
- roimaging tools for predicting this disorder through identifying
43
- brain atrophy [4] (see Fig. 1). In addition to sMRI (referred
44
- to as MRI in this paper), non-invasive biomarkers such as (1)
45
- M. Ashtari-Majlan is with the Department of Computer Sci-
46
- ence, Universitat Oberta de Catalunya, Barcelona, Spain (e-mail:
47
48
- A. Seifi is with the Department of Industrial Engineering, Amirkabir
49
- University of Technology, Tehran, Iran (e-mail: aseifi@aut.ac.ir).
50
- M. M. Dehshibi is with the Department of Computer Science,
51
- Universitat Oberta de Catalunya, Barcelona, Spain (e-mail: moham-
52
- [email protected]).demographic information ( e.g., age and education) [5], and (2)
53
- cognitive test scores [4] can also be used to provide possible
54
- discriminative information for diagnosing AD in the early
55
- stages. Several studies [4], [6], [7], [8], [9] have addressed
56
- the MCI-to-AD conversion issue using neuroimaging methods
57
- in conjunction with the biomarkers.
58
- (a)
59
- (b)
60
- Fig. 1. MRI samples of (a) Cognitively Normal (CN) and (b) AD
61
- classes from Alzheimer’s Disease Neuroimaging Initiative database
62
- (ADNI-1) [10]. In order to demonstrate the subtle brain atrophy, we
63
- highlighted the affected regions.
64
- Conventional methods in medical image processing typ-
65
- ically use prior knowledge to segment brain images into
66
- Regions of Interest (ROI) [11] or V oxel-Based Morphometry
67
- (VBM) [12] to predict AD. While these methods can classify
68
- stable MCI (sMCI) and pMCI, the focus was mainly on
69
- Alzheimer’s disease. Deep learning algorithms, on the other
70
- hand, have made it possible for researchers to combine feature
71
- extraction, dimensionality reduction, and classification in an
72
- end-to-end way. These algorithms also outperform conven-
73
- tional methods in identifying AD patterns because they can
74
- discover hidden representations among multiple regions of
75
- neuroimages. Hence, these models have gained prominence
76
- in analysing Alzheimer’s disease.
77
- In this paper, we propose a multi-stream convolutional
78
- neural network (CNN) for classifying sMCI and pMCI, which
79
- is fed with patch-based imaging data extracted using a novel
80
- data-driven technique. To accomplish so, we first use the
81
- multivariate T2 Hotelling test to compare MRI images of AD
82
- and cognitively normal (CN) individuals in order to identify
83
- distinct anatomical landmarks. Following that, the statistical
84
- test is performed on textural and statistical features extracted
85
- from MRI images. These landmarks are then used to generate
86
- 19×19×19 patches, with each landmark serving as the
87
- centre of the patch. Finally, the extracted patches are fed
88
- into the proposed multi-stream CNN to classify MRI images.
89
- To compensate for the lack of pMCI training data, we first
90
- train the proposed architecture using AD/CN images that are
91
- anatomically similar to the pMCI ones. Then, we transfer the
92
- weights of the trained model to the proposed architecture in
93
- order to fine-tune the model using pMCI and sMCI data. The
94
- contribution of this study is two-fold:
95
- 1) Rather than utilising non-rigid registration to identify
96
- anatomical landmarks in the brain, we propose employ-
97
- ing rigid registration to reduce computational complexity
98
- and eliminate morphological structure deformations that
99
- could cause inherent errors in the classification step.
100
- Therefore, we partition each MRI image during the
101
- anatomical landmark detection phase and perform the
102
- T2 Hotelling test on these partitions to capture subtle
103
- anatomical differences, to account for more information,
104
- and to reduce the impact of potential errors caused by
105
- inter-subject brain shape variations. Finally, the identi-
106
- fied statistically significant landmarks centres from the
107
- partitions are used as anchoring forces for selecting the
108
- patches fed into the proposed multi-stream CNN.
109
- 2) We propose using transfer learning to classify
110
- pMCI/sMCI classes in training the proposed architecture
111
- to overcome the complexity of learning caused by the
112
- subtle structural changes in MCI brains compared to AD
113
- and CN. In the Ablation study, we also demonstrate the
114
- importance of employing transfer learning. Furthermore,
115
- we address intrinsic errors caused by inter-subject brain
116
- shape differences by conducting experiments to deter-
117
- mine an ideal image patch size in order to feed to the
118
- proposed CNN model.
119
- The rest of this paper is organised as follows: Section II
120
- surveys the previous studies. Section III describes the proposed
121
- method. Experimental results are given in Section IV. Finally,
122
- the paper is concluded in Section V.
123
- II. L ITERATURE REVIEW
124
- Mild cognitive impairment (MCI) is a stage of cognitive
125
- decline that occurs between the predicted cognitive loss as-
126
- sociated with normal ageing and the more severe decline
127
- associated with dementia. MCI may result in a higher level
128
- of developing dementia caused by Alzheimer’s disease if the
129
- anatomical changes in the brain are proactive. Progressive
130
- MCI differs from stable MCI in the progression of functional
131
- connectivity values over time. However, classifying pMCI and
132
- sMCI patients is challenging due to the subtle anatomical
133
- differences in the brain [13]. The four conventional feature
134
- extraction approaches usually mentioned in the literature for
135
- classifying pMCI and sMCI are voxel-based, slice-based, ROI-
136
- based, and patch-based [14], although they are not entirely
137
- mutually exclusive. In this section, before surveying recent
138
- advances in deep learning-based methods for classifying pMCI
139
- and sMCI, we will briefly review these four approaches by
140
- discussing the advantages and disadvantages of each group.The voxel-based techniques [15], [16], [17] use the voxel
141
- intensity values from all neuroimaging modalities. Although
142
- voxel-based techniques are simple to implement, they typically
143
- require spatial co-alignment of the input image to standard 3D
144
- space and suffer from high dimension feature space compared
145
- to available sample numbers. Ortiz et al. [18] used the t-
146
- test algorithm to partition the brain area into 3D patches
147
- in order to address the mentioned drawbacks and eliminate
148
- non-significant voxels. The patches were then used to train
149
- an ensemble of deep belief networks, and a voting scheme
150
- was used to make the final prediction. However, as mentioned
151
- by [19], there is an inherent over-fitting challenge with voxel-
152
- based techniques.
153
- The sliced-based techniques [20], [21] extract slices from
154
- the 3D neuroimaging brain scan by projecting the sagittal,
155
- coronal, and axial to the 2D image slices. Indeed, because
156
- non-affected regions and normal slices must be chosen as the
157
- reference distribution, they cannot account for the disease and
158
- may be considered an anomaly [22]. Furthermore, choosing
159
- separate 2D slices may neglect the spatial dependencies of
160
- voxels in adjacent slices due to inter/intra anatomical variances
161
- in the brain images [14]. However, sliced-based techniques
162
- allow for the usage of a broader range of conventional and
163
- deep learning-based approaches. For instance, different pre-
164
- trained deep learning models on ImageNet, such as DenseNet,
165
- VGG16, GoogLeNet, and ResNet, can be fine-tuned by 2D
166
- slices to classify AD from CN [19]. In [21], researchers
167
- extracted features from MRI image slices using a pre-trained
168
- 2D CNN and fed the extracted feature sequence to a recurrent
169
- neural network (RNN). The RNN was in charge of determining
170
- the relationship between the sequence of extracted features
171
- corresponding to MRI image slices. However, sliced-based
172
- techniques are computationally expensive due to the use of
173
- additional learnable parameters which cannot directly benefit
174
- from transfer learning.
175
- ROI-based techniques consider brain regions that have
176
- been predefined physically or functionally [23], [8], [24].
177
- These methods use spatial information such as automated
178
- anatomical labelling [25] and diffusion-weighted imaging in
179
- MRI to extract features. The prominent regions that have
180
- been considered by almost all ROI-based feature extraction
181
- studies on AD prediction are the hippocampus, amygdala,
182
- and entorhinal. However, one of the advantages of employing
183
- ROI-based approaches is like a double-edged sword which can
184
- be a disadvantage because ROI identification requires expert
185
- human expertise. Furthermore, these techniques are considered
186
- time-consuming due to the need for non-linear registration
187
- and brain tissue segmentation. There is also the possibility
188
- of information loss because the abnormal region may spread
189
- from a single ROI to multiple ROIs.
190
- Patched-based approaches [26], [27] partition the entire
191
- brain into multiple patches from which numerous feature
192
- vectors are extracted. The extracted patches include shapes,
193
- texture, and volume features generated from distinct brain
194
- regions or specific patterns. This computational approach
195
- eliminates the need for manual ROI identification, and makes
196
- it possible to use landmark-based patch extraction or other
197
- discriminative patch-based biomarkers [8]. However, selecting
198
- useful patches from a complete image is problematic, mainly
199
- due to increasing computational complexity when a non-
200
- rigid registration approach is used. Researchers have used
201
- rigid registration or embedded the registration step into deep
202
- learning-based methods to address this challenge [28]. There
203
- is another issue with patched-based approaches related to
204
- multiple instance learning. Although this challenge is almost
205
- addressed in classifying AD and CN by leveraging patch
206
- relationships [29], [30], the difficulty in classifying pMCI
207
- in Alzheimer’s disease is not still resolved. This issue is
208
- linked to the bag’s labelling in classifying pMCI from sMCI,
209
- where a bag can get a negative label even though it contains
210
- informative anatomical landmark(s), but it cannot meet the
211
- majority rule.
212
- Convolutional neural networks (CNNs) are a popular deep
213
- learning approach for classifying Alzheimer’s disease. Islam
214
- and Zhang [31] proposed three 2D CNNs to generate three
215
- distinct MRI views. Each CNN in their architecture comprised
216
- three convolutional layers and four dense blocks, where the
217
- final decision was made by majority vote. Oh et al. [32] pro-
218
- posed a convolutional autoencoder-based approach for the AD
219
- and CN classification task, addressing the pMCI data limitation
220
- with transfer learning. Liu et al. [8] suggested a coarse-to-
221
- fine hierarchical ensemble learning method for simultaneous
222
- hippocampus segmentation and Alzheimer’s disease classi-
223
- fication that employed a multi-task deep CNN architecture
224
- and 3D densely connected CNNs. In this method, an MRI
225
- image is first divided into multiple slices, and then a pre-
226
- trained deep neural network is used to extract features from
227
- the slices. The coarse predictions were then used in ensemble
228
- learning to obtain refined results for all slices. Ebrahimi et
229
- al. [21] extracted a sequence of features from 2D MRI slices
230
- using a pre-trained ResNet18 [33], which they subsequently
231
- trained a temporal convolutional network and several types
232
- of RNNs to classify AD/CN. Zhao et al. [24] introduced aregion ensemble model with three sequential sub-networks
233
- to account for a global feature map derived from the entire
234
- brain and regional feature maps extracted using a segmentation
235
- model. The feature representations were fused in their method,
236
- and the classification was performed using an attention-based
237
- technique. Researchers employed a data-driven technique to
238
- select informative patches in [34], which resulted in specific
239
- landmark localisation in brain MRI images. Each landmark
240
- patch was then fed into the CNN models, which produced the
241
- final classification result using the maximum voting strategy.
242
- P¨olsterl et al. [35] proposed the dynamic affine feature map
243
- transform, an auxiliary module for CNNs that dynamically
244
- incites or represses each feature map of a convolutional layer
245
- based on both image and tabular biomarkers. A more detailed
246
- overview of deep learning algorithms for Alzheimer’s disease
247
- classification can be found in [36], [14], [37].
248
- III. P ROPOSED METHOD
249
- Given a dataset of N samples D = {(x_i, y_i)}_{i=1}^{N}, with x_i ∈ R^{d_x} and y_i ∈ R^{d_y}, our goal is to train a multi-stream
- deep convolutional neural network H(x) = E[Y | X = x] to classify sMCI and pMCI by minimising the cross-entropy between
- the class labels and the softmax output as in Eq. 1:
- p(y_i | x; w, b) = \frac{\exp(x^T w_i + b_i)}{\sum_{j \in d_y} \exp(x^T w_j + b_j)}   (1)
- where w and b are the network's weights and bias terms, respectively. In this study, we use the baseline 1.5T T1-weighted
- MRI images of subjects from the ADNI-1 dataset [10], where the input image X has a size of d_x = 185 × 155 × 150 and is
- labelled by d_y = {0, 1}. The output label Y consists of two probability values in the [0, 1] range, with H(x_i) = 0 if the i-th
- sample belongs to the sMCI class and H(x_i) = 1 otherwise.
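- The following minimal Keras sketch illustrates the Eq. (1) classification head, i.e. a two-way softmax trained with cross-entropy; the fused feature dimension (32) is an assumption made only for illustration, and this is not the full multi-stream network.
- import tensorflow as tf
- from tensorflow.keras import layers, models
-
- features = layers.Input(shape=(32,), name="fused_features")   # assumed fused feature size
- logits = layers.Dense(2, name="class_scores")(features)       # x^T w_j + b_j for j in {sMCI, pMCI}
- probs = layers.Softmax(name="p_y_given_x")(logits)            # softmax of Eq. (1)
- head = models.Model(features, probs)
-
- head.compile(optimizer="adam",
-              loss="sparse_categorical_crossentropy",          # cross-entropy against the class labels
-              metrics=["accuracy"])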
264
- It has been shown [38] that in the early stages of
265
- Alzheimer’s disease, only certain brain regions are subject
266
- (a)
267
- Fig. 2. The schematic of the proposed multi-stream convolutional neural network.
268
- to morphological changes caused by the disease. Therefore,
269
- we conduct a statistical test to identify these informative
270
- landmarks in the MRI images and extract L patches from
271
- each MRI image, x_i = {s_{i,j}}_{j=1}^{L}, with s_{i,j} ∈ R^{19×19×19}.
273
- The proposed data-driven approach for extracting patches from
274
- the MRI image, on which a preprocessing step has been
275
- performed, is described in the following subsections, followed
276
- by the details of the multi-stream CNN. Figure 2 shows the
277
- schematic of the proposed multi-stream architecture.
278
- A. Preprocessing
279
- We pre-process the MRI images to use them in the proposed
280
- method. There are four steps in the preprocessing phase: (1)
281
- anterior commissure-posterior commissure correction using
282
- the 3D Slicer software1; (2) intensity inhomogeneity correction
283
- using N4ITK [39], an enhanced version of nonparametric
284
- nonuniform normalisation; (3) skull stripping using a pre-
285
- trained U-Net2to remove both the skull and the dura; and
286
- (4) rigid registration, which involves linearly aligning MRI
287
- images to the Colin27 template and resampling them to a size
288
- of155185150with a resolution of 111 mm3. Figure 3
289
- shows a sample of MRI image from ADNI-1 dataset on which
290
- the preprocessing is performed.
291
- (a)
292
- (b)
293
- (c)
294
- (d)
295
- Fig. 3. The visual representation of the preprocessing steps for an
296
- MRI sample. (a) A raw MRI image, (b) the MRI image with anterior
297
- commissure-posterior commissure correction (c) the MRI image with
298
- intensity inhomogeneity correction, and (4) skull stripped MRI image.
299
- The yellow line in (a) and (b) depicts the anterior commissure-posterior
300
- commissure line.
301
- B. Anatomical landmark detection
302
- We must first identify the anatomical locations in the brain
303
- that are most influenced by the disease before we can classify
304
- sMCI and pMCI patients. As a result, we randomly divide
305
- samples from AD and CN individuals into the train, validation,
306
- and test sets with sizes of 0.7×N, 0.1×N, and 0.2×N,
307
- respectively, where N is the total number of samples. Then,
308
- we select M = 0.7×N (i.e., the training set) and propose a
309
- novel data-driven landmark detection method in which MRI
310
- images are partitioned into 5×5×5 patches. As mentioned
311
- in [40], [41], when a patient is diagnosed with Alzheimer’s
312
- disease, several regions in the brain are subject to anatomical
313
- degeneration. At the pMCI stage, the same regions undergo
314
- anatomical changes, but the degeneration is not as severe as
315
- those seen at the onset of Alzheimer’s disease. With respect to
316
- this fact, we use identical anatomical locations for classifying
317
- sMCI and pMCI patients.
318
- 1 http://www.slicer.org/
319
- 2 https://github.com/iitzco/deepbrain
- Each partition is then represented by a 29-dimensional
320
- feature vector. This feature vector includes the Gray-Level Co-
321
- Occurrence Matrix (GLCM) [42], Structural Similarity Index
322
- Measure (SSIM) [43], Mean Square Error (MSE), entropy, and
323
- the mean and standard deviation of the partition voxels. To
324
- extract the GLCM elements of the feature vector, we generate
325
- six GLCM matrices with three adjacency directions, namely
326
- horizontal, vertical, and in-depth, each of which is associated
327
- with two distance values. Then, we extract contrast, corre-
328
- lation, homogeneity, and entropy, from each GLCM matrix
329
- resulting in a total of 24 elements. The Colin27 template [44]
330
- is used as the reference image for measuring SSIM and MSE
331
- at each patch location. Finally, we apply the multivariate T2
332
- Hotelling statistical test [45] to generate a brain-shaped p-
333
- value map (see Fig. 4). Algorithm 1 details the steps of
334
- generating the brain-shaped p-value map.
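- A simplified sketch of such a per-partition descriptor is given below. It is not the authors' implementation: for brevity the GLCM statistics are computed on the central 2D slice of the 5×5×5 partition with scikit-image, rather than with the six 3D co-occurrence matrices (three adjacency directions × two distances) that yield the full 29-dimensional vector.
- import numpy as np
- from skimage.feature import graycomatrix, graycoprops
- from skimage.metrics import structural_similarity
-
- def partition_features(patch, template_patch, levels=16):
-     """patch, template_patch: 5x5x5 uint8 arrays (intensities scaled to 0-255)."""
-     p = patch.astype(float)
-     t = template_patch.astype(float)
-     feats = [p.mean(), p.std()]                                   # mean / std of the voxels
-     counts, _ = np.histogram(patch, bins=levels, range=(0, 255))
-     prob = counts[counts > 0] / counts.sum()
-     feats.append(float(-(prob * np.log2(prob)).sum()))            # entropy
-     feats.append(float(((p - t) ** 2).mean()))                    # MSE vs. the Colin27 template
-     feats.append(structural_similarity(patch, template_patch,
-                                        data_range=255, win_size=5))  # SSIM on the 3D block
-     mid = patch[patch.shape[0] // 2]                              # central slice for the 2D GLCM
-     q = (mid // (256 // levels)).astype(np.uint8)
-     glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2], levels=levels)
-     for prop in ("contrast", "correlation", "homogeneity"):
-         feats.extend(graycoprops(glcm, prop).ravel())             # 2 distances x 2 angles each
-     return np.asarray(feats, dtype=float)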
335
- Fig. 4. A brain-shaped p-value map in which top 50 landmark locations
336
- are represented by spheres in a gradient colour from red to blue, with p-
337
- values ranging from 0 to 0.001. Each p-value is paired with a landmark
338
- (lx; ly; lz).
339
- After obtainingPset, we exclude landmarks with a spatial
340
- Euclidean distance of less than 15 to reduce the redundancy of
341
- overlapped adjacent patches to identify the most discriminative
342
- anatomical landmarks in the brain. The top 50 landmarks with
343
- the lowest p-values are then chosen (see Fig. 4).
344
- The registration and landmark detection steps in Algo-
345
- rithm 1 are affected by the MRI image partitioning size, i.e.,
346
- 555. In the case of selecting a smaller partition size,
347
- the lack of adequate morphological variations could lead to
348
- discarding informative landmarks. In contrast, larger partition
349
- sizes cause intrinsic physiological differences to eclipse subtle
350
- disease-related changes.
351
- Therefore, after obtaining the top 50 landmarks, we sample
352
- 27 3D image patches with a 3×3×3 displacement around
353
- the centre of each landmark to increase the size of patches
354
- to 19×19×19 with two intentions: (i) compensating for
355
- regions that may unintentionally be discarded in anatomical
356
- landmark detection, and (ii) providing sufficient morphological
357
- structures for each stream of the proposed CNN to construct
358
- a discriminative latent space.
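- A small numpy sketch of this patch sampling is shown below; the function and variable names are hypothetical, and the boundary handling (clipping centres so the crop stays inside the volume) is an added assumption.
- import numpy as np
- from itertools import product
-
- def crop_patch(volume, centre, size=19):
-     """volume: 3D numpy array; centre: (x, y, z) landmark; returns a size^3 crop."""
-     half = size // 2
-     c = np.clip(centre, half, np.array(volume.shape) - half - 1)  # keep the crop inside the volume
-     x, y, z = c
-     return volume[x - half:x + half + 1, y - half:y + half + 1, z - half:z + half + 1]
-
- def sample_displaced_patches(volume, centre, size=19, step=1):
-     """27 patches whose centres lie on a 3x3x3 grid around the landmark."""
-     offsets = product((-step, 0, step), repeat=3)
-     return np.stack([crop_patch(volume, np.asarray(centre) + np.array(o), size)
-                      for o in offsets])
-
- # usage: patches = sample_displaced_patches(mri, landmark_xyz)  -> shape (27, 19, 19, 19)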
359
- Algorithm 1: Generating the brain-shaped p-value map.
- Input:  AD = {x^AD_1, x^AD_2, ..., x^AD_M},
-         CN = {x^CN_1, x^CN_2, ..., x^CN_M},
-         K = 34,410, the total number of patches of size 5×5×5.
- Output: P, a set of p-values forming the brain-shaped p-value map.
- Step 1: Partitioning
- 1   V_AD <- Partition(AD)    // V_AD = {p_{1,1}, ..., p_{1,K}, ..., p_{M,1}, ..., p_{M,K}}
- 2   V_CN <- Partition(CN)    // V_CN = {q_{1,1}, ..., q_{1,K}, ..., q_{M,1}, ..., q_{M,K}}
- Step 2: Feature extraction & T2 Hotelling test
- 3   for j <- 1 to K do
- 4       for i <- 1 to M do
- 5           f^AD_{i,j} <- Feature-Extraction(p_{i,j})
- 6           f^CN_{i,j} <- Feature-Extraction(q_{i,j})
- 7       end
- 8       p-value_j <- Hotelling-Test(f^AD_{M,j}, f^CN_{M,j})    // f_{M,j} is an M×29 matrix of features extracted from the j-th patch
- 9   end
- 10  P <- Sort(p-value, asc)    // each p-value is paired with a landmark (l_x, l_y, l_z)
- 11  return P
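- The Hotelling-Test step of Algorithm 1 can be realised, for example, with the standard two-sample Hotelling T² statistic and its F-distribution p-value, as in the hedged scipy/numpy sketch below (not the authors' implementation; the small ridge term is an added numerical-stability assumption).
- import numpy as np
- from scipy import stats
-
- def hotelling_t2_pvalue(f_ad, f_cn):
-     """f_ad, f_cn: (M x 29) feature matrices of one partition for the AD and CN groups."""
-     n1, p = f_ad.shape
-     n2, _ = f_cn.shape
-     diff = f_ad.mean(axis=0) - f_cn.mean(axis=0)
-     # pooled covariance with a small ridge for numerical stability
-     s_pooled = (((n1 - 1) * np.cov(f_ad, rowvar=False) +
-                  (n2 - 1) * np.cov(f_cn, rowvar=False)) / (n1 + n2 - 2))
-     s_pooled += 1e-8 * np.eye(p)
-     t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
-     f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2      # equivalent F statistic
-     return stats.f.sf(f_stat, p, n1 + n2 - p - 1)
-
- # p_value_j = hotelling_t2_pvalue(features_ad_patch_j, features_cn_patch_j)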
396
- C. Multi-stream classifier architecture
397
- We propose a multi-stream CNN architecture with L
398
- streams, each fed with the patch s_{i,j} extracted from the input
399
- image x_i and centred on the identified landmark location
400
- (l_{x,j}, l_{y,j}, l_{z,j}). We construct the patch s_{i,j} with a size of
401
- 19×19×19, surrounding the corresponding landmark location,
402
- in order to better represent morphological variations in the
403
- MRI images. The local spectral-spatial feature is extracted
404
- from each 3D image patch by each stream of the proposed
405
- CNN architecture.
406
- As depicted in Fig. 2, the proposed multi-stream CNN has
407
- L = 50 streams, with an identical structure. Each stream has
408
- five convolutional layers (Conv), followed by a rectified linear
409
- unit (ReLU) activation function. The convolutional layers have
410
- 32, 64, 64, 128, and 128 3×3×3 convolution filters,
411
- respectively. After Conv2, Conv4, and Conv5, we consider
412
- batch normalisation and 2×2×2 max-pooling layers. There
413
- are three fully connected layers with 128, 64, and 8 units at the
414
- end of each stream, which are followed by a dropout layer with
415
- a ratio of 0.4 to prevent overfitting. Although the architecture
416
- of all 50 streams is the same, their weights are tuned and
417
- updated separately, where the input patches for each stream
418
- are randomly selected to avoid the unintentional bias towards
419
- ordering streams. We concatenate the outputs of 50 streams
420
- and add a dropout layer with a ratio of 0.6 to fuse the locally
421
- extracted spectral-spatial features. Before passing the feature vector into the softmax function for the final classification,
422
- we add three fully connected layers with 64, 64, and 32 units,
423
- respectively.
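- A minimal Keras sketch of this multi-stream design is given below; 'same' convolution padding and a reduced number of streams (4 instead of L = 50) are assumptions made only to keep the example small and runnable.
- from tensorflow.keras import layers, models
-
- def make_stream(name):
-     inp = layers.Input(shape=(19, 19, 19, 1), name=f"{name}_patch")
-     x = inp
-     for i, filters in enumerate([32, 64, 64, 128, 128], start=1):
-         x = layers.Conv3D(filters, 3, padding="same", activation="relu",
-                           name=f"{name}_conv{i}")(x)
-         if i in (2, 4, 5):                      # BN + 2x2x2 max-pooling after Conv2/4/5
-             x = layers.BatchNormalization(name=f"{name}_bn{i}")(x)
-             x = layers.MaxPooling3D(2, name=f"{name}_pool{i}")(x)
-     x = layers.Flatten()(x)
-     for j, units in enumerate([128, 64, 8], start=1):
-         x = layers.Dense(units, activation="relu", name=f"{name}_fc{j}")(x)
-     x = layers.Dropout(0.4)(x)                  # per-stream dropout
-     return inp, x
-
- def build_multistream(n_streams=4):             # 4 streams for illustration only
-     inputs, embeddings = zip(*(make_stream(f"s{i}") for i in range(n_streams)))
-     x = layers.Concatenate()(list(embeddings))
-     x = layers.Dropout(0.6)(x)                  # fusion dropout
-     for units in (64, 64, 32):
-         x = layers.Dense(units, activation="relu")(x)
-     out = layers.Dense(2, activation="softmax", name="pMCI_vs_sMCI")(x)
-     return models.Model(list(inputs), out)
-
- model = build_multistream()
- model.summary()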
424
- IV. E XPERIMENTS
425
- A. ADNI-1 dataset
426
- In this study, we use the baseline 1.5T T1-weighted MRI
427
- images of subjects from the ADNI-1 dataset [10]. The vol-
428
- umetric 3D MPRAGE protocol is used to acquire sagittal
429
- T1-weighted MRI images with an in-plane spatial resolution
430
- of 1.25×1.25 mm² and 1.2 mm thick sagittal slices. The
431
- imaging dataset contains baseline images from 695 partic-
432
- ipants including 200 Alzheimer’s disease, 231 cognitively
433
- normal, 164 progressive MCI, and 100 stable MCI. Figure 5
434
- shows four samples from this dataset, and Table I presents the
435
- demographic and clinical information of subjects in ADNI-1.
436
- (a)
437
- (b)
438
- (c)
439
- (d)
440
- Fig. 5. Four samples from ADNI-1 dataset [10] (a) AD, (b) CN, (c) pMCI,
441
- and (d) sMCI
442
- B. Architecture details and Evaluation metrics
443
- The proposed architecture is implemented using Python
444
- based on the Keras package3, on a computer with Intel(R)
445
- Core(TM) i7-4790K @4.00 GHz CPU and 16G RAM. We
446
- trained the network using Adam optimiser [46] with the first
447
- momentum of 0.9 and the second momentum of 0.999. The
448
- initial learning rate and the constant for numerical stability
449
- are set to 10⁻³ and 10⁻⁶, respectively. We set the maximum
450
- number of training epochs to 40 and used a mini-batch size of
451
- 5 at each iteration, where the training data was shuffled before
452
- each training epoch. There are two other hyper-parameters
453
- which are the number of streams L and the patch size.
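- The training configuration quoted above corresponds, for example, to the following Keras calls (a sketch; `model`, `train_patches` and `train_labels` are assumed to exist):
- from tensorflow.keras.optimizers import Adam
-
- optimizer = Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-6)
- model.compile(optimizer=optimizer,
-               loss="sparse_categorical_crossentropy",
-               metrics=["accuracy"])
- model.fit(train_patches, train_labels,
-           epochs=40, batch_size=5, shuffle=True)   # data reshuffled before each epoch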
454
- We evaluate the performance of the proposed ar-
455
- chitecture with the number of streams in the range
456
- {10, 20, 30, 40, 50, 60} and the size of patches in the range
457
- {9, 11, 15, 19, 23} using the validation set from the AD and
458
- CN classes. It is worth noting that because there are two
459
- hyperparameters, one must be fixed in order to generate the
460
- other. Figure 6 shows the accuracy of the proposed multi-
461
- stream CNN as a function of these two hyper-parameters.
462
- While increasing the number of streams improves classifi-
463
- cation accuracy (see Fig. 6a), it also increases computational
464
- complexity. We can also observe in Fig. 6b that a larger patch
465
- size (i.e., 23×23×23) shows a performance drop when
466
- compared with the 19×19×19 patch size. This is owing
467
- to the fact that enlarging the patch size includes tissues in that
468
- 3 https://github.com/fchollet/keras
469
- TABLE I
470
- DEMOGRAPHIC AND CLINICAL INFORMATION OF SUBJECTS IN ADNI-1. VALUES ARE REPORTED AS MEAN ± STANDARD DEVIATION.
471
- Class Male/Female Age ADAS CDR-sb FAQ MMSE NPI-Q
472
- AD 103/97 75.64 7.71 13.02 5.23 4.39 1.6 13.16 6.71 23.31 2.03 3.44 3.27
473
- CN 119/112 76.155 4.99 28.51 4.89 0.03 0.12 0.13 0.59 29.09 0.98 0.36 0.95
474
- sMCI 66/34 75.44 7.27 22.93 5.78 1.24 0.62 1.65 3.00 27.65 1.70 1.45 2.40
475
- pMCI 97/67 74.54 7.05 17.67 5.14 1.87 0.96 5.64 5.15 26.62 1.71 2.30 3.11
476
- ADAS: Alzheimer’s Disease Assessment Scale
477
- CDR-sb: Clinical Dementia Rating ‘sum of boxes’
478
- FAQ: Functional Activities Questionnaire; MMSE: Mini-Mental State Examination
479
- NPI-Q: Neuropsychiatric Inventory Questionnaire
480
- (a)
481
- (b)
482
- Fig. 6. The accuracy of the proposed multi-stream CNN classifier as a
483
- function of (a) the number of streams, and (b) the patch size.
484
- patch of the brain whose intrinsic morphological similarity
485
- is dominant to subtle changes that aid in the early diagnosis
486
- of the disease. Therefore, instead of discriminating the subtle
487
- changes induced by the disease’s development in the early
488
- stages, the network discriminates irrelevant tissues. As a result,
489
- to establish a trade-off between computational complexity and
490
- accuracy, we set the number of streams L to 50 and the patch
491
- size to 19×19×19 and preserve these values in the rest of
492
- the experiments.
493
- To evaluate the performance of the proposed architecture,
494
- we use the metrics as in Eq. 2.
495
- ACC = \frac{TP + TN}{TP + TN + FP + FN}, \qquad SEN = \frac{TP}{TP + FN}, \qquad SPE = \frac{TN}{TN + FP},
- F1\text{-}score = \frac{2 \cdot SEN \cdot SPE}{SEN + SPE}   (2)
- where TP, TN, FP, and FN denote true positive, true
504
- negative, false positive, and false negative, respectively. We
505
- also report area under receiver operating characteristic curve
506
- (AUC).
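- Eq. (2) translates directly into a small helper such as the one below; AUC itself would come from, e.g., sklearn.metrics.roc_auc_score on the predicted probabilities, and the counts in the usage line are illustrative only.
- def classification_metrics(tp, tn, fp, fn):
-     acc = (tp + tn) / (tp + tn + fp + fn)
-     sen = tp / (tp + fn)                      # sensitivity (recall)
-     spe = tn / (tn + fp)                      # specificity
-     f1 = 2 * sen * spe / (sen + spe)          # F1-score as defined in Eq. (2)
-     return {"ACC": acc, "SEN": sen, "SPE": spe, "F1": f1}
-
- print(classification_metrics(tp=124, tn=18, fp=6, fn=14))  # illustrative counts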
507
- C. Experimental results
508
- To assess the performance of the proposed architecture,
509
- we performed three steps: (1) training the proposed multi-
510
- stream CNN with MRI images of AD and CN subjects, (2)
511
- transferring weights from the trained model in Step 1 to the
512
- identical architecture and fine-tuning it with the data related
513
- to sMCI and pMCI patients, and (3) adding biomarkers as an auxiliary modality to examine the reciprocal influence of
514
- spectral-spatial features and biomarkers.
515
- (1) Classification of AD and CN subjects : To train the pro-
516
- posed multi-stream CNN, we randomly select 70% of the MRI
517
- patches from the two classes of AD and CN as the training
518
- set, 20% as the test set, and 10% as the validation set. We
519
- also compare the trained model in this step with 10 additional
520
- approaches, including regions of interest (ROI) [47], V oxel-
521
- Based Morphometry (VBM) [12], and five deep learning-based
522
- methods. We used the FAST algorithm [48] to segment brain
523
- MRI images into three different tissues for the ROI and VBM
524
- comparison: White Matter (WM), Gray Matter (GM), and
525
- Cerebrospinal Fluid (CSF). We followed the implementations
526
- described by the researchers for the deep learning-based
527
- approaches.
528
- For the RIO-based method, we align the anatomical auto-
529
- mated labelling template [25] with the native space of each
530
- MRI image. This template contains 116 predefined regions
531
- in which the extracted GM tissue volumes are normalised
532
- using the summation of GM, WM, and CSF volumes. This
533
- normalised tissue volume is used as the representative feature
534
- for the MRI images. For the VBM method, we use affine reg-
535
- istration with 12 degrees of freedom to align MRI images with
536
- the Colin27 template [44] in order to extract the GM density
537
- as the representative feature. To reduce the dimensionality of
538
- this feature vector, we perform the t-student statistical test on
539
- the extracted features from AD and CN subjects and chose
540
- GM densities with p-values less than 0.001.
541
- Finally, we classify AD and CN using these two feature
542
- representations using a linear support vector machine
543
- (SVM) [50] with soft-margin and a multilayer perceptron
544
- (MLP). The MLP comprises two hidden layers, with 13
545
- and 15 neurons, and a binary output layer with a logistic
546
- activation function. We train these classifiers using the 5-fold
547
- cross-validation strategy. To ensure a fair comparison for
548
- MCI conversion prediction, we train the VBM and ROI
549
- models on AD/CN and test them on an independent test set
550
- of pMCI/sMCI.
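- A sketch of these comparison classifiers with scikit-learn is shown below; `roi_features` and `labels` are assumed arrays, and the soft margin is controlled through the SVM's C parameter.
- from sklearn.svm import SVC
- from sklearn.neural_network import MLPClassifier
- from sklearn.model_selection import cross_val_score
-
- svm = SVC(kernel="linear", C=1.0)                            # soft-margin linear SVM
- mlp = MLPClassifier(hidden_layer_sizes=(13, 15), activation="logistic",
-                     max_iter=2000, random_state=0)           # two hidden layers: 13 and 15 neurons
-
- for name, clf in [("ROI+SVM", svm), ("ROI+MLP", mlp)]:
-     scores = cross_val_score(clf, roi_features, labels, cv=5)  # 5-fold cross-validation
-     print(name, scores.mean())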
551
- (2) Classifying sMCI and pMCI using transfer learning :
552
- The availability of MRI images for sMCI and pMCI patients
553
- is less than that of AD because the symptoms of Alzheimer’s
554
- disease are not as severe at these stages. As a result, many
555
- patients may not be asked to get an MRI test. Furthermore,
556
- structural changes in MCI brains caused by dementia may be
557
- TABLE II
558
- RESULTS OF CLASSIFYING AD/CN AND PMCI/SMCI USING THE PROPOSED MULTI-STREAM CNNS ALONG WITH 10 ADDITIONAL APPROACHES.
559
- Method AD vs. CN (%) pMCI vs. sMCI (%)
560
- ACC SEN SPE F1-score AUC ACC SEN SPE F1-score AUC
561
- ROI + SVM 70.30 68.51 67.50 67.83 79.39 59.47 66.86 68.90 67.87 60.99
562
- VBM + SVM 82.84 82.54 80.50 81.34 90.39 70.08 81.48 67.07 73.58 74.38
563
- ROI + MLP 73.08 70.00 71.19 70.59 77.52 58.75 70.00 66.04 67.96 53.93
564
- VBM + MLP 83.85 78.33 85.45 81.74 89.92 70.00 72.00 78.26 75.00 73.93
565
- Shmulev et al. [49] - - - - - 76.00 70.00 88.00 77.97 86.00
566
- DM2L[5] 91.09 93.50 88.05 90.69 95.86 76.90 82.43 42.11 55.74 77.64
567
- H-FCN [6] 90.30 96.50 82.40 85.08 95.10 80.90 85.40 52.60 65.10 78.10
568
- HybNet [28] 91.90 82.40 94.50 91.50 96.50 82.70 57.90 86.60 69.39 79.30
569
- Zhao et al. [24] - - - - - 85.90 50.00 91.60 64.68 85.40
570
- Proposed architecture + biomarkers 97.54 95.54 99.40 97.43 99.38 69.21 68.56 93.15 78.98 77.35
571
- Proposed architecture 97.78 95.59 99.82 97.66 99.97 79.90 75.55 99.70 85.96 94.39
572
- very subtle compared to CN and AD, making the convergence
573
- of the proposed multi-stream CNN challenging. To overcome
574
- these limitations, we first trained our model using data from
575
- CN and AD MRI images and fine-tune the model with data
576
- gathered from sMCI and pMCI patients.
577
- Since the pre-trained streams of the proposed architecture
578
- are designed to extract structural features that correspond to
579
- AD, we freeze the initial layers and only re-train the last
580
- three fully connected layers. To fine-tune the model, we used
581
- 70% of the pMCI and sMCI MRI patches and tested the
582
- fine-tuned model with the remaining data.
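- The freeze-and-fine-tune step can be expressed in Keras roughly as follows (a sketch, assuming `ad_cn_model` is the multi-stream network already trained on AD/CN and `mci_patches`/`mci_labels` hold the pMCI/sMCI patches; which layers count as the last fully connected stack is an assumption of this example):
- for layer in ad_cn_model.layers:
-     layer.trainable = False                   # freeze the pre-trained streams
- for layer in ad_cn_model.layers[-4:]:
-     layer.trainable = True                    # re-train only the final FC layers and softmax
-
- ad_cn_model.compile(optimizer="adam",
-                     loss="sparse_categorical_crossentropy",
-                     metrics=["accuracy"])
- ad_cn_model.fit(mci_patches, mci_labels, epochs=40, batch_size=5)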
583
- (3) Unleashing the impact of biomarkers : In addition to
584
- MRI images, non-invasive biomarkers such as demographic
585
- information and cognitive test scores [4] can also be used to
586
- provide potentially discriminatory information for diagnosing
587
- Alzheimer’s disease in its early stages. We propose adding
588
- biomarkers as an auxiliary modality to the architecture in order
589
- to evaluate the reciprocal influence of spectral-spatial features
590
- and biomarkers. Therefore, we introduce an additional input
591
- stream to the proposed architecture to incorporate numerical
592
- values offAge, ADAS, CDR-sb, FAQ, MMSE, NPI-Q g(see
593
- Table I). However, a subset of the MRI images in the ADNI-
594
- 1 dataset does not include biomarkers, resulting in missing
595
- values when incorporating them into the proposed CNN archi-
596
- tecture. We use k-Nearest Neighbour (kNN) as a preprocessing
597
- step to handle missing values. First, we execute the kNN
598
- with k= 6 (analogous to 6 biomarkers) on the available
599
- data. Then, for those entries that do not have a biomarker,
600
- we fill in the value with the average values of the knearest
601
- neighbours. Finally, we incorporate these biomarkers into the
602
- architecture at the ‘Concatenate’ layer while preserving the
603
- learning parameters as described in Section IV-B.
604
- Table II shows the performance of the proposed architec-
605
- tures compared with 10 other approaches. As shown in Ta-
606
- ble II, the proposed multi-stream CNNs achieve considerably
607
- better F1-scores than both conventional and deep learning-
608
- based approaches. While the biomarkers can improve the
609
- classification performance, the lack of large-scale data is
610
- a barrier to incorporating them into the proposed method.
611
- Furthermore, as previously stated, the cognitive test scores
612
- of individuals suffering from various types of MCI differ
613
- slightly. This subtle difference could explain why the multi-stream architecture combined with biomarkers performs poorly
614
- in distinguishing pMCI and sMCI when compared with the AD
615
- and CN classification. We also plot the ROC curves in Fig. 7
616
- for classifying both AD vs. CN and pMCI vs. sMCI.
617
- (a)
618
- (b)
619
- Fig. 7. ROC curves of the proposed multi-stream architecture, the
620
- proposed architecture integrated with biomarkers, ROI, and VBM-based
621
- methods for (a) AD/CN classification, and (b) pMCI/sMCI classification.
622
- In the case of classifying AD and CN, Fig. 7a shows that
623
- the proposed architecture achieves an AUC of 99.97%, outper-
624
- forming ROI, VBM, and multi-stream architecture integrated
625
- with biomarkers, which achieve AUCs of 79.39%, 90.39%,
626
- and 99.38%, respectively. Furthermore, as demonstrated in
627
- Fig. 7b, when we use transfer learning, the fine-tuned model
628
- can provide appropriate discriminative information to classify
629
- pMCI and sMCI subjects. This evidence can also convey that
630
- AD and pMCI have structural deformations similar to the
631
- extracted features from various landmarks in different streams
632
- of the proposed architecture.
633
- The results in Table II show that the proposed multi-stream
634
- architecture combined with biomarkers is slightly less efficient
635
- than the multi-stream CNN in distinguishing pMCI from
636
- sMCI when compared with the AD and CN classification. To
637
- highlight these subtle differences and better comprehend the
638
- reciprocal influence of spectral-spatial features and biomark-
639
- ers, we have visualised the class-discriminative localisation
640
- map and the latent space in Fig. 8 using Grad-CAM [51] and
641
- t-SNE [52], respectively. Figure 8a shows an example of a
642
- class-discriminative localisation map for the last convolutional
643
- layers in the proposed multi-stream CNN in which the salient
644
- regions of each brain patch have been identified so that the
645
- classifier could distinguish between AD/CN and pMCI/sMCI
646
- (a)
647
- (b)
648
- Fig. 8. (a) An example of a class-discriminative localisation map for the last convolutional layers in the proposed multi-stream CNN using Grad-
649
- CAM [51]. Each stream has 128 filters. However, in order to keep the publication page limits, we only illustrate five out of 50 patches, each
650
- representing five filters. The colourbar represents the relevance of each pixel in the Grad-CAM, where the red colour has the highest importance
651
- in the trained network’s final decision and the blue colour has the least importance. (b) Visualisation of the feature space using t-SNE [52] for the
652
- classification of AD/CN and pMCI/sMCI with and without biomarkers.
653
- with high potential. Furthermore, we have observed in Fig. 8b
654
- that introducing biomarkers as an auxiliary modality causes the
655
- clusters ( i.e., AD/CN and pMCI/sMCI) to become tighter with
656
- a degree of overlap than without the biomarkers. This evidence
657
- can explain why the proposed multi-stream CNN without
658
- biomarkers performs marginally better, where the latent space
659
- provides for clean cluster separation.
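- The t-SNE projection used for Fig. 8b can be produced along the following lines (a sketch; `fusion_features` and `labels` are assumed to be the concatenated stream embeddings and the corresponding class labels):
- import matplotlib.pyplot as plt
- from sklearn.manifold import TSNE
-
- embedding = TSNE(n_components=2, perplexity=30, init="pca",
-                  random_state=0).fit_transform(fusion_features)
-
- for cls, marker in [(0, "o"), (1, "^")]:          # 0 = sMCI, 1 = pMCI
-     pts = embedding[labels == cls]
-     plt.scatter(pts[:, 0], pts[:, 1], marker=marker, label=["sMCI", "pMCI"][cls])
- plt.legend()
- plt.title("t-SNE of fused multi-stream features")
- plt.show()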
660
- D. Ablation study
661
- In the ablation study, we conduct two sets of experiments
662
- to better understand the efficacy of the proposed multi-stream
663
- CNN for the classification of pMCI in Alzheimer’s disease.
664
- We therefore examine:
665
- — Contribution of transfer learning : In this experiment,
666
- instead of using transfer learning, we test the baseline perfor-
667
- mance when the model is directly trained on pMCI/sMCI data.
668
- We have 100 and 164 sMCI and pMCI samples, respectively,
669
- and use the same data split as in the transfer learning (70/30)
670
- for a fair comparison. The other learning parameters are
671
- preserved as described in Section IV-B. The evaluation metrics
672
- (i.e., ACC: 63.53%, SEN: 100%, SPE: 0.0%, and F1-score:
673
- 77.70%) reveal that the model cannot converge to classify
674
- pMCI/sMCI classes without using transfer learning. In fact,
675
- the trained model predicts the same optimal output regardless
676
- of the input, with the average answer minimising loss.
677
- — Contribution of multi-stream architecture : In this ex-
678
- periment, instead of the proposed multi-stream CNN, we use
679
- a single stream CNN to which 3D patches are fed. The
680
- model is trained on the AD/CN dataset and fine-tuned using
681
- pMCI/sMCI data with the same data split used in the previous
682
- experiments. The evaluation metrics ( i.e., ACC: 56.96%, SEN:
683
- 67.35%, SPE: 40.00%, and F1-score: 66.00%) show that the
684
- model is unable to converge to classify pMCI/sMCI classes,
685
- and the trained model is predicting random class for all the
686
- data points regardless of the input. These metrics highlighttwo main facts: (1) with the multi-stream architecture, the
687
- model can build a more discriminative latent space as a result
688
- of focusing on the landmarks identified in the anatomical
689
- landmark detection phase, which are more likely to be signs
690
- of Alzheimer’s disease. However, with a single-stream ar-
691
- chitecture, the model cannot distinguish between anatomical
692
- abnormalities caused by the disease and those caused by the
693
- morphology of the brain. Indeed, the model’s sensitivity and
694
- specificity are inadvertently changed due to the morphological
695
- similarity of the inputs; (2) input patches to the single-stream
696
- architecture predominantly capture local information of the
697
- image, and the global relationship between different landmark
698
- locations is no longer taken into account.
699
- V. C ONCLUSION
700
- Alzheimer’s disease is the most common type of dementia,
701
- resulting in memory impairment and cognitive decline. Mild
702
- cognitive impairment is a prodromal stage of AD, also known
703
- as the transition stage. MCI patients either progress to AD or
704
- remain at the same stage over time. Therefore, it is critical to
705
- distinguish between progressive MCI and stable MCI in early
706
- stages to prevent rapid progression of the disease.
707
- In this study, we have proposed a method for classifying
708
- pMCI and sMCI patients using MRI images. The proposed
709
- method consists of two main steps. (1) We have developed
710
- a novel data-driven approach based on the multivariate T2
711
- Hotelling statistical test to identify anatomical landmarks in
712
- MRI images and generate a brain-shaped p-value map. Each
713
- landmark is paired with a 3D coordinate, allowing us to extract
714
- patches of 191919. (2) We have proposed a multi-stream
715
- deep convolutional neural network in which each stream is
716
- fed by one of the patches. This multi-stream CNN employed
717
- transfer learning to classify pMCI and sMCI patients using the
718
- ADNI-1 dataset. We assessed the proposed method in three
719
- experimental steps and demonstrated the significance of our
720
- contributions to transfer learning and multi-stream architecture
721
- in the Ablation study. We have performed several experiments
722
- to evaluate the efficiency of the proposed architecture based
723
- on the best practices. Experimental results have shown that
724
- our method outperformed existing approaches, particularly in
725
- the classification of MCI patients. Thus, this method can
726
- assist practitioners to expand investigating on various diseases
727
- associated with structural atrophy.
728
- REFERENCES
729
- [1] D. E. Barnes and K. Yaffe, “The projected effect of risk factor reduction
730
- on alzheimer’s disease prevalence,” The Lancet Neurology , vol. 10, no. 9,
731
- pp. 819–828, 2011.
732
- [2] R. C. Petersen, R. O. Roberts, D. S. Knopman, B. F. Boeve, Y . E.
733
- Geda, R. J. Ivnik, G. E. Smith, and J. Jack, Clifford R., “Mild Cognitive
734
- Impairment: Ten Years Later,” Archives of neurology , vol. 66, no. 12,
735
- pp. 1447–1455, 2009.
736
- [3] M. A. DeTure and D. W. Dickson, “The neuropathological diagnosis of
737
- Alzheimer’s disease,” Molecular neurodegeneration , vol. 14, no. 1, pp.
738
- 1–18, 2019.
739
- [4] K. Hett, V .-T. Ta, I. Oguz, J. V . Manj ´on, and P. Coup ´e, “Multi-scale
740
- graph-based grading for alzheimer’s disease prediction,” Medical Image
741
- Analysis , vol. 67, pp. 101 850: 1–13, 2021.
742
- [5] M. Liu, J. Zhang, E. Adeli, and D. Shen, “Joint classification and
743
- regression via deep multi-task multi-channel learning for alzheimer’s
744
- disease diagnosis,” IEEE Transactions on Biomedical Engineering ,
745
- vol. 66, no. 5, pp. 1195–1206, 2019.
746
- [6] C. Lian, M. Liu, J. Zhang, and D. Shen, “Hierarchical Fully Convolu-
747
- tional Network for Joint Atrophy Localization and Alzheimer’s Disease
748
- Diagnosis Using Structural MRI,” IEEE transactions on pattern analysis
749
- and machine intelligence , vol. 42, no. 4, pp. 880–893, 2020.
750
- [7] Q. Li, X. Xing, Y . Sun, B. Xiao, H. Wei, Q. Huo, M. Zhang, X. S. Zhou,
751
- Y . Zhan, Z. Xue, and F. Shi, “Novel Iterative Attention Focusing Strategy
752
- for Joint Pathology Localization and Prediction of MCI Progression,”
753
- inMedical Image Computing and Computer Assisted Intervention –
754
- MICCAI 2019 . Springer International Publishing, 2019, pp. 307–315.
755
- [8] M. Liu, F. Li, H. Yan, K. Wang, Y . Ma, L. Shen, and M. Xu, “A multi-
756
- model deep convolutional neural network for automatic hippocampus
757
- segmentation and classification in Alzheimer’s disease,” NeuroImage ,
758
- vol. 208, pp. 116 459: 1–15, 2020.
759
- [9] S. Qiu, P. S. Joshi, M. I. Miller, C. Xue, X. Zhou, C. Karjadi, G. H.
760
- Chang, A. S. Joshi, B. Dwyer, S. Zhu, M. Kaku, Y . Zhou, Y . J. Alderazi,
761
- A. Swaminathan, S. Kedar, M.-H. Saint-Hilaire, S. H. Auerbach, J. Yuan,
762
- E. A. Sartor, R. Au, and V . B. Kolachalama, “Development and
763
- validation of an interpretable deep learning framework for alzheimer’s
764
- disease classification,” Brain , vol. 143, no. 6, pp. 1920–1933, 2020.
765
- [10] C. R. Jack Jr., M. A. Bernstein, N. C. Fox, P. Thompson, G. Alexander,
766
- D. Harvey, B. Borowski, P. J. Britson, J. L. Whitwell, C. Ward, A. M.
767
- Dale, J. P. Felmlee, J. L. Gunter, D. L. Hill, R. Killiany, N. Schuff,
768
- S. Fox-Bosetti, C. Lin, C. Studholme, C. S. DeCarli, G. Krueger, H. A.
769
- Ward, G. J. Metzger, K. T. Scott, R. Mallozzi, D. Blezek, J. Levy, J. P.
770
- Debbins, A. S. Fleisher, M. Albert, R. Green, G. Bartzokis, G. Glover,
771
- J. Mugler, and M. W. Weiner, “The alzheimer’s disease neuroimaging
772
- initiative (adni): Mri methods,” Journal of Magnetic Resonance Imaging ,
773
- vol. 27, no. 4, pp. 685–691, 2008.
774
- [11] C. Cabral, P. M. Morgado, D. Campos Costa, and M. Silveira, “Predict-
775
- ing conversion from mci to ad with fdg-pet brain images at different
776
- prodromal stages,” Computers in Biology and Medicine , vol. 58, pp.
777
- 101–109, 2015.
778
- [12] J. Ashburner and K. J. Friston, “Voxel-based morphometry—the meth-
779
- ods,” Neuroimage , vol. 11, no. 6, pp. 805–821, 2000.
780
- [13] U. R. Acharya, S. L. Fernandes, J. E. WeiKoh, E. J. Ciaccio, M. K. M.
781
- Fabell, U. J. Tanik, V . Rajinikanth, and C. H. Yeong, “Automated De-
782
- tection of Alzheimer’s Disease Using Brain MRI Images– A Study with
783
- Various Feature Extraction Techniques,” Journal of Medical Systems ,
784
- vol. 43, no. 9, pp. 1–14, 2019.
785
- [14] X. Zhao, C. K. E. Ang, U. R. Acharya, and K. H. Cheong, “Application
786
- of Artificial Intelligence techniques for the detection of Alzheimer’s
787
- disease using structural MRI images,” Biocybernetics and Biomedical
788
- Engineering , vol. 41, no. 2, pp. 456–473, 2021.
789
- [15] S. Basheera and M. S. S. Ram, “A novel CNN based Alzheimer’s
790
- disease classification using hybrid enhanced ICA segmented gray matter
791
- of MRI,” Computerized Medical Imaging and Graphics , vol. 81, pp.
792
- 101 713: 1–15, 2020.[16] J. Ruiz, M. Mahmud, M. Modasshir, and M. S. Kaiser, “3d densenet
793
- ensemble in 4-way classification of alzheimer’s disease,” in 13th In-
794
- ternational Conference on Brain Informatics . Springer International
795
- Publishing, 2020, pp. 85–96.
796
- [17] W. Li, X. Lin, and X. Chen, “Detecting alzheimer’s disease based on 4d
797
- fmri: An exploration under deep learning framework,” Neurocomputing ,
798
- vol. 388, pp. 280–287, 2020.
799
- [18] A. Ortiz, J. Munilla, J. M. G ´orriz, and J. Ram ´ırez, “Ensembles of deep
800
- learning architectures for the early diagnosis of the alzheimer’s disease,”
801
- International journal of neural systems , vol. 26, no. 07, p. 1650025,
802
- 2016.
803
- [19] M. A. Ebrahimighahnavieh, S. Luo, and R. Chiong, “Deep learning to
804
- detect Alzheimer’s disease from neuroimaging: A systematic literature
805
- review,” Computer Methods and Programs in Biomedicine , vol. 187, p.
806
- 105242, 2020.
807
- [20] Y . Zhang, Q. Teng, Y . Liu, Y . Liu, and X. He, “Diagnosis of alzheimer’s
808
- disease based on regional attention with smri gray matter slices,” Journal
809
- of Neuroscience Methods , vol. 365, pp. 109 376: 1–8, 2022.
810
- [21] C. R. A. D. N. I. Ebrahimi A, Luo S, “Deep sequence modelling for
811
- alzheimer’s disease detection using mri,” Computers in Biology and
812
- Medicine , p. 104537, 2021.
813
- [22] R. Dey and Y . Hong, “Asc-net: Adversarial-based selective network
814
- for unsupervised anomaly segmentation,” in International Conference
815
- on Medical Image Computing and Computer-Assisted Intervention .
816
- Springer, 2021, pp. 480–489.
817
- [23] V . S. Rallabandi, K. Tulpule, M. Gattu, A. D. N. Initiative et al. , “Au-
818
- tomatic classification of cognitively normal, mild cognitive impairment
819
- and Alzheimer’s disease using structural MRI analysis,” Informatics in
820
- Medicine Unlocked , vol. 18, pp. 100 305: 1–7, 2020.
821
- [24] Y .-X. Zhao, Y .-M. Zhang, M. Song, and C.-L. Liu, “Region Ensemble
822
- Network for MCI Conversion Prediction with a Relation Regularized
823
- Loss,” in International Conference on Medical Image Computing and
824
- Computer-Assisted Intervention . Springer, 2021, pp. 185–194.
825
- [25] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello,
826
- O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot, “Automated Anatom-
827
- ical Labeling of Activations in SPM Using a Macroscopic Anatomi-
828
- cal Parcellation of the MNI MRI Single-Subject Brain,” Neuroimage ,
829
- vol. 15, no. 1, pp. 273–289, 2002.
830
- [26] M. Liu, J. Zhang, D. Nie, P.-T. Yap, and D. Shen, “Anatomical
831
- landmark based deep feature representation for mr images in brain
832
- disease diagnosis,” IEEE journal of biomedical and health informatics ,
833
- vol. 22, no. 5, pp. 1476–1485, 2018.
834
- [27] J. Wen, E. Thibeau-Sutre, M. Diaz-Melo, J. Samper-Gonz ´alez,
835
- A. Routier, S. Bottani, D. Dormont, S. Durrleman, N. Burgos,
836
- and O. Colliot, “Convolutional neural networks for classification of
837
- alzheimer’s disease: Overview and reproducible evaluation,” Medical
838
- Image Analysis , vol. 63, pp. 101 694: 1–19, 2020.
839
- [28] C. Lian, M. Liu, Y . Pan, and D. Shen, “Attention-guided hybrid network
840
- for dementia diagnosis with structural mr images,” IEEE Transactions
841
- on Cybernetics , 2020.
842
- [29] M. Liu, J. Zhang, E. Adeli, and D. Shen, “Landmark-based deep multi-
843
- instance learning for brain disease diagnosis,” Medical image analysis ,
844
- vol. 43, pp. 157–168, 2018.
845
- [30] M. Ilse, J. Tomczak, and M. Welling, “Attention-based Deep Multi-
846
- ple Instance Learning,” in 35th International Conference on Machine
847
- Learning . PMLR, 2018, pp. 2127–2136.
848
- [31] J. Islam and Y . Zhang, “Brain MRI analysis for Alzheimer’s disease
849
- diagnosis using an ensemble system of deep convolutional neural
850
- networks,” Brain Informatics , vol. 5, no. 2, pp. 1–14, 2018.
851
- [32] K. Oh, Y .-C. Chung, K. W. Kim, W.-S. Kim, and I.-S. Oh, “Classification
852
- and Visualization of Alzheimer’s Disease using Volumetric Convolu-
853
- tional Neural Network and Transfer Learning,” Scientific Reports , vol. 9,
854
- no. 1, pp. 1–16, 2019.
855
- [33] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for
856
- Image Recognition,” in 2016 IEEE Conference on Computer Vision and
857
- Pattern Recognition (CVPR) , 2016, pp. 770–778.
858
- [34] M. Liu, C. Lian, and D. Shen, “Anatomical-landmark-based deep learn-
859
- ing for alzheimer’s disease diagnosis with structural magnetic resonance
860
- imaging,” in Deep Learning in Healthcare: Paradigms and Applications .
861
- Springer International Publishing, 2020, pp. 127–147.
862
- [35] S. P ¨olsterl, T. N. Wolf, and C. Wachinger, “Combining 3d image
863
- and tabular data via the dynamic affine feature map transform,” in
864
- International Conference on Medical Image Computing and Computer-
865
- Assisted Intervention . Springer, 2021, pp. 688–698.
866
- [36] N. Goenka and S. Tiwari, “Deep learning for alzheimer prediction using
867
- brain biomarkers,” Artificial Intelligence Review , vol. 54, no. 7, pp. 1–45,
868
- 2021.10
869
- [37] G. Chamarajan, Y . Charishma et al. , “Alzheimer’s disease: A survey,”
870
- International Journal of Artificial Intelligence , vol. 8, no. 1, pp. 33–39,
871
- 2021.
872
- [38] H. Braak, E. Braak, D. Yilmazer, R. A. I. De V os, E. N. H. Jansen,
873
- and J. Bohl, “Pattern of brain destruction in parkinson’s and alzheimer’s
874
- diseases,” Journal of Neural Transmission , vol. 103, no. 4, pp. 455–490,
875
- 1996.
876
- [39] N. J. Tustison, B. B. Avants, P. A. Cook, Y . Zheng, A. Egan, P. A.
877
- Yushkevich, and J. C. Gee, “N4itk: Improved n3 bias correction,” IEEE
878
- Transactions on Medical Imaging , vol. 29, no. 6, pp. 1310–1320, 2010.
879
- [40] Y . Fan, N. Batmanghelich, C. M. Clark, and C. Davatzikos, “Spatial
880
- patterns of brain atrophy in MCI patients, identified via high-dimensional
881
- pattern classification, predict subsequent cognitive decline,” NeuroIm-
882
- age, vol. 39, no. 4, pp. 1731–1743, 2008.
883
- [41] M. L. Schroeter, T. Stein, N. Maslowski, and J. Neumann, “Neural corre-
884
- lates of alzheimer’s disease and mild cognitive impairment: A systematic
885
- and quantitative meta-analysis involving 1351 patients,” NeuroImage ,
886
- vol. 47, no. 4, pp. 1196–1206, 2009.
887
- [42] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features
888
- for image classification,” IEEE Transactions on Systems, Man, and
889
- Cybernetics , vol. SMC-3, no. 6, pp. 610–621, 1973.
890
- [43] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assess-
891
- ment: from error visibility to structural similarity,” IEEE Transactions
892
- on Image Processing , vol. 13, no. 4, pp. 600–612, 2004.
893
- [44] C. J. Holmes, R. Hoge, L. Collins, R. Woods, A. W. Toga, and
894
- A. C. Evans, “Enhancement of mr images using registration for signal
895
- averaging,” Journal of computer assisted tomography , vol. 22, no. 2, pp.
896
- 324–333, 1998.
897
- [45] H. Hotelling, The Generalization of Student’s Ratio . Springer New
898
- York, 1992, pp. 54–65.
899
- [46] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
900
- inThird International Conference on Learning Representations, ICLR ,
901
- 2015, pp. 1—-15.
902
- [47] D. Zhang, Y . Wang, L. Zhou, H. Yuan, D. Shen, A. D. N. Initiative et al. ,
903
- “Multimodal classification of alzheimer’s disease and mild cognitive
904
- impairment,” NeuroImage , vol. 55, no. 3, pp. 856–867, 2011.
905
- [48] S. M. Smith, M. Jenkinson, M. W. Woolrich, C. F. Beckmann, T. E.
906
- Behrens, H. Johansen-Berg, P. R. Bannister, M. De Luca, I. Drobnjak,
907
- D. E. Flitney, R. K. Niazy, J. Saunders, J. Vickers, Y . Zhang, N. De Ste-
908
- fano, J. M. Brady, and P. M. Matthews, “Advances in functional and
909
- structural mr image analysis and implementation as fsl,” NeuroImage ,
910
- vol. 23, pp. S208–S219, 2004.
911
- [49] Y . Shmulev, M. Belyaev, A. D. N. Initiative et al. , “Predicting Con-
912
- version of Mild Cognitive Impairments to Alzheimer’s Disease and
913
- Exploring Impact of Neuroimaging,” in Graphs in Biomedical Image
914
- Analysis and Integrating Medical Imaging and Non-Imaging Modalities .
915
- Springer, 2018, pp. 83–91.
916
- [50] C. Cortes and V . Vapnik, “Support-vector networks,” Machine Learning ,
917
- vol. 20, no. 3, pp. 273–297, 1995.
918
- [51] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and
919
- D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via
920
- Gradient-Based Localization,” in 2017 IEEE International Conference
921
- on Computer Vision (ICCV) , 2017, pp. 618–626.
922
- [52] G. Hinton and S. Roweis, “Stochastic neighbor embedding,” in Pro-
923
- ceedings of the 15th International Conference on Neural Information
924
- Processing Systems . MIT Press, 2002, pp. 857—-864.
925
- Mona Ashtari-Majlan received her Master’s de-
926
- gree in Health Systems Engineering from Amirk-
927
- abir University of Technology, Tehran, in 2021.
928
- She is a PhD candidate in computer science
929
- at Universitat Oberta de Catalunya, Spain. Her
930
- area of interest includes Biomedical Image Pro-
931
- cessing, Computer Vision, and Deep Learning.
932
- Abbas Seifi is a professor of Industrial Engi-
933
- neering and Management Systems at Amirkabir
934
- University of Technology in Iran. He did his
935
- BASc and MASc in Industrial Engineering at
936
- Sharif University of Technology. He received his
937
- PhD in Systems Design Engineering from the
938
- University of Waterloo in Canada and worked
939
- there as a postdoctoral research associate for
940
- over 2 years. His teaching and research interests
941
- include optimisation and simulation of various
942
- operational research problems, data driven op-
943
- timisation, data science, machine learning and system dynamics.
944
- Mohammad Mahdi Dehshibi received his PhD
945
- in Computer Science in 2017 from IAU, Iran. He
946
- is currently a research scientist at Universitat
947
- Oberta de Catalunya, Spain. He was also a
948
- visiting researcher at Unconventional Computing
949
- Lab, UWE, Bristol, UK. He has contributed to
950
- more than 60 papers published in scientific jour-
951
- nals and international conferences. His research
952
- interests include Affective Computing, Medical
953
- data processing, Deep Learning, and Unconven-
954
- tional Computing.
txt/2203.10094.txt DELETED
@@ -1,497 +0,0 @@
1
- K. López de Ipiña et al., "Selection of entropy based features for the analysis of the Archimedes' spiral applied to
2
- essential tremor," 2015 4th International Work Conference on Bioinspired Intelligence (IWOBI ), 2015, pp. 157 -
3
- 162, doi: 10.1109/IWOBI.2015.7160160 .
4
- _________________________________________________________________________________________
5
- Selection of entropy based features for the analysis of
6
- the Archimedes’ spiral applied to essential tremor
7
-
8
- K. López de Ipiña, M. Iturrate, P. Calvo, B. Beitia, J.
9
- Garcia -Melero
10
- Universidad del País Vasco/Euskal Herriko Unibertsitatea
11
- {karmele.ipina, mikel.iturrate, itziar.gurruchaga,
12
- mariablanca.beitia, joseba.garcia}@ehu.eus
13
-
14
- A. Bergareche, P. De la Riva, J.F. Marti -Masso,
15
- BioDonostia Health Institute, Donostia, Spain
16
- {jesusalberto.bergarecheyarza, patricia.delarivajuez,
17
- josefelix.martimasso}@osakidetza.eus
- M. Faundez -Zanuy, E. Sesa -Nogueras, J. Roure
18
- Escola Universitaria Politècnica de Mataró (UPF),
19
- Tecnocampus
20
- {faundez, sesa, roure }@tecnocampus.cat
21
-
22
- J. Solé-Casals
23
- Data and Signal Processing Group. University of Vic –
24
- Central University of Catalonia
25
26
-
27
- Abstract — Biomedical systems are regulated by interacting
28
- mechanisms that operate across multiple spatial and temporal
29
- scales and produce biosignals with linear and non-linear
30
- information inside. In this sense entropy could provide a useful
31
- measure about disorder in the system, lack of information in
32
- time -series and/or irregularity of the signals. Essential tremor
33
- (ET) is the most common movement disorder, being 20 times
34
- more common than Parkinson’s disease, and 50-70% of this
35
- disease cases are estimated to be genetic in origin. Archimedes
36
- spiral drawing is one of the most used standard tests for clinical
37
- diagnosis. This work, on selection of nonlinear biomarkers from
38
- drawings and handwriting, is part of a wide -ranging cross study
39
- for the diagnosis of essential tremor in BioDonostia Health
40
- Institute. Several entropy algorithms are used to generate non -
41
- linear features. The automatic analysis system consists of several
42
- Machine Learning paradigms.
43
-
44
- Keywords — Permutation entropy; Essential tremor; Automatic
45
- drawing analysis; Archimedes’ spiral; Non-linear features;
46
- automatic selection of features
47
- I. INTRODUCTION
48
- Biomedical systems are regulated by interacting
49
- mechanisms that operate across multiple spatial and temporal
50
- scales and produce biosignals with linear and non-linear
51
- information inside. Output variables of real systems often have
52
- complex fluctuations that are not only due to noise but also
53
- contain information about the intrinsic dynamics and the
54
- underlying system. In all cases the dynamics’ global aspects
55
- can be somehow captured by classic linear methods, but the
56
- different approaches are not equivalent to discern all the
57
- relevant physical details [1,2]. In this sense the measurement
58
- of non-linear features such as the system entropy is an essential
- and useful tool to analyse the system state. The analysis of
60
- system entropy provides not only the probability distributions
61
- of the possible state of a system but also the information
62
- encoded in it [1]. However the applicability of entropy based
63
- methodologies depends on particular characteristics of the
64
- data, such as stationarity, time series length, variation of the parameters, level of noise contamination, etc., and important
65
- information may be codified also in the temporal dynamics, an
66
- aspect which is not usually taken into account [1,3]. Time
67
- series generated by biological and biomedical systems most
68
- likely contain deterministic and stochastic components [4].
69
- Classical methods of signal and noise analysis can quantify the
70
- degree of regularity of a time series by evaluating the
71
- appearance of repetitive patterns, but most such methods only
72
- model linear components without introducing any information
73
- about non -linearity, irregularities or stochastic components.
74
- This complex information could be essential when subtle
75
- changes are analysed. Massimiliano Zanin et al. [1] present a
76
- review based on biomedical applications which includes
77
- analysis about EEG, anesthesia, cognitive neuroscience or
78
- heart rhythms. Among biomedical applications, those related to
79
- neurological diseases are a challenge due to their variability
80
- and impact in society. Essential tremor is one of the most
81
- common.
82
- Essential tremor is a condition that affects individuals
83
- worldwide, being 20 times more common than Parkinson’s
84
- disease. The prevalence of essential tremor (ET) in the
- western world is about 0.3-4.0%; males and females over 40 years
- of age are affected more or less equally, with an incidence of
87
- 23.7 per 100,000 people per year. Studies in the elderly
88
- suggest that prevalence in these patients ranges between 3.9%
89
- and 14.0%. 50 -70% of essential tremor cases are estimated to
90
- be genetic in origin [5]. Essential tremor presents itself as a
91
- rhythmic tremor (4 –12 Hz) that occurs only when the affected
92
- muscle is exerting effort. The amplitude of the tremor
93
- increases its variability with regard to age but there is no
94
- gender predilection. Physical or mental stress could make the
95
- tremor worse and the prevalence of Parkinson's disease, in
96
- people with essential tremor is greater than in the general
97
- population. Parkinson's disease and parkinsonism can also
98
- occur simultaneously with essential tremor. With regard to
99
- symptoms hand tremor predominates (as it does in Parkinson’s
100
- disease), and occurs in nearly all cases, followed by head
101
- tremor, voice tremor, neck, face, leg, tongue and trunk tremor.
105
- ET is characterized by postural and kinetic tremor which often
106
- maximally affects the hands. PD and ET can appear in
107
- individuals of the same family [5].
108
- Identifying the clinical hallmark and earliest manifestation of the
109
- disorder is essential to manage and palliate the symptoms. All
110
- these symptoms lead to impaired performance in everyday
111
- activities. Approaches to the early diagnosis of ET have in the
112
- past few years made significant advances in the development
113
- of reliable clinical biomarkers. Despite the usefulness of
114
- biomarkers, the cost and technology requirements involved
115
- make it impossible to apply such tests to all patients with
116
- motor troubles. Given these problems, non-invasive intelligent
117
- techniques of diagnosis may become valuable tools for early
118
- detection of disorders. Non-technical staff in the habitual
119
- environments of the patient could use these methodologies,
120
- without altering or blocking the patients' abilities, as speech
121
- analysis, handwriting or drawing analysis involved in these
122
- techniques is not perceived as a stressful test by the patient.
123
- Moreover, these techniques are very low -cost and do not
124
- require extensive infrastructure or the availability of medical
125
- equipment. They are thus capable of yielding information
126
- easily, quickly, and inexpensively [6 -8]. It is well established
127
- that handwritten tasks can be used for diagnosis of essential
128
- tremor. In this sense Archimedes’s spiral is one of the most
129
- used standard tests in clinical diagnosis [14].
130
- In the past, the analysis of handwriting had to be
131
- performed in an offline manner. Only the writing itself
132
- (strokes on a paper) were available for analysis. Nowadays,
133
- modern capturing devices, such as digitizing tablets and pens
134
- (with or without ink) can gather data without losing its
135
- temporal dimension. When spatiotemporal information is
136
- available, its analysis is referred to as online. Modern digitizing
137
- tablets not only gather the x and y coordinates that describe the
138
- movement of the writing device as it changes its position, but
139
- it can also collect other data, such as the pressure exerted by
140
- the writing device on the writing surface, to the azimuth, the
141
- angle of the pen in the horizontal plane, the altitude, the angle
142
- of the pen with respect to the vertical axis [9]. This gives the
143
- possibility to analyze not only static (“off-line”) but also
- dynamic (“on-line”) features [10].
- Figure 1. The Archimedes’ spiral drawing performed by an
- individual with essential tremor.
146
- This work is part of a wide -ranging cross study for the
147
- diagnosis of essential tremor. The general transversal study is
148
- focused to characterize ET (Biodonostia Health Institute) in a
149
- study based on families with identified genetics loci.
150
- Archimedes’s spiral has been selected for the evaluation of
151
- nonlinear biomarkers from drawings and handwriting. The
152
- presence of integrated features of other diseases such as stress
153
- is also analyzed. In the next sections not only classical linear
154
- features static and dynamics but also non-linear features based
155
- on several entropy algorithms will be analyzed. In that
156
- biomarker selection, automatic methodologies will be used.
157
- Finally an automatic analysis system based on Machine
158
- Learning paradigms measures the quality of the selected
159
- features.
160
- II. MATERIALS
161
- The acquisition is carried out using an Intuos Wacom 4
162
- digitizing tablet. The pen tablet USB [11] captures the
163
- following information. The tablet acquired 100 samples per
164
- second including the spatial coordinates ( x, y), the pressure,
165
- and azimuth and altitude angles. Using this set of dynamic
166
- data, further information can be inferred, such as acceleration,
167
- velocity, instantaneous trajectory angle, instantaneous
168
- displacement, tangential acceleration, curvature radius,
169
- centripetal acceleration, etc. [12]. The database BIODARW
170
- consists of 21 control people (CR) and 29 ET people with
171
- identified genetics loci and register of electrophysiological test
172
- (EPT) and fMRI. The test consists of a line, the Archimedes’
173
- spiral drawing and handwriting with dominant hand and non -
174
- dominant hand. Table 1 summarizes the features of the group
175
- with ET with regard to EPT, diagnosis and demography [13].
176
- In this work Archimedes’ spiral is used (Figure 1). The
177
- database presents variability with regard to: tremor frequency,
178
- amplitude and pattern, diagnosis scale and demography data
179
- (age and gender). From the original database, a subset of the
180
- samples of Archimedes’s spiral is selected. Moreover from the
181
- group with ET, only the samples of the hand with essential
182
- tremor are selected. Thus this sub-database BIODARWO
183
- consists of 51 samples: 27 samples of the control group and
184
- 24 samples of the group with ET.
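As an illustration of how such dynamic quantities can be derived from the sampled pen coordinates, the sketch below assumes a uniform 100 Hz sampling rate and simple finite differences; it is not the exact formulation of [12].

import numpy as np

FS = 100.0  # tablet sampling rate (samples per second)

def kinematic_features(x, y):
    """Derive speed, acceleration magnitude and instantaneous trajectory
    angle from the pen coordinates using finite differences."""
    dt = 1.0 / FS
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    speed = np.hypot(vx, vy)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    accel = np.hypot(ax, ay)
    angle = np.unwrap(np.arctan2(vy, vx))
    return speed, accel, angle

# Example with a synthetic Archimedes-like spiral
t = np.linspace(0.0, 10.0, 1000)
x, y = t * np.cos(4 * t), t * np.sin(4 * t)
speed, accel, angle = kinematic_features(x, y)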
185
- III. METHODS
186
- A. Feature extraction
187
- The research presented here is in the nature of a
188
- preliminary experiment; its aim is to define thresholds for a
189
- number of biomarkers related to handwriting. It forms part of
190
- a broader study focused on early ET detection. Feature search
191
- in this work aims at preclinical evaluation so as to formulate
192
- useful tests for ET diagnosis [5,13,14].
193
- 1) Linear features
194
- In this study, we aim at automatically distinguishing the
- handwriting of an ET patient from that of a healthy subject by
196
- analyzing different linear features (LF) and their variants
197
- (max, min, mean and median) in handwriting: time, spatial
198
- components, pressure, speed, acceleration, zero crossing rate,
199
- and spectral components for on-surface and in-air signals.
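A rough sketch of how such per-signal feature variants might be computed is given below; the zero-crossing-rate definition and the choice of statistics are illustrative assumptions rather than the study's exact feature definitions.

import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive samples where the mean-removed signal changes sign."""
    s = np.asarray(signal, dtype=float) - np.mean(signal)
    return np.mean(np.abs(np.diff(np.sign(s))) > 0)

def summary_variants(values):
    """The max/min/mean/median variants computed for each linear feature."""
    v = np.asarray(values, dtype=float)
    return {"max": v.max(), "min": v.min(), "mean": v.mean(), "median": np.median(v)}

pressure = np.random.rand(500)            # placeholder on-surface pressure signal
features = summary_variants(pressure)
features["zcr"] = zero_crossing_rate(pressure)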
200
204
- 2) Non-linear features: entropy
- Entropy is a measure of disorder in physical systems, and
- also a basic quantity with multiple field-specific
208
- interpretations. It has been associated with disorder, state -
209
- space volume, or lack of information [1,2,15,16]. When trying to
210
- analyze information content, the Shannon entropy is often
211
- considered as the classic, foundational and most natural one
212
- [3,4,17]. Richman et al., analyze that entropy, as it relates to
213
- dynamical systems, is the rate of information production [18].
214
- On the one hand, some authors point out that the calculation of
- entropy usually requires very long data sets that can be
- difficult or impossible to obtain, mainly for biomedical signals.
217
- On the other hand methods for estimation of the entropy of a
218
- system represented by a time series are not well suited to
219
- analysis of the short and noisy data sets encountered in
220
- biomedical studies [1,18]. Several proposals for calculating
221
- entropy used in this work are presented.
222
- The entropy H(X) of a single discrete random variable is
223
- a measure of its average uncertainty. Shannon entropy [17] is
224
- calculated by the equation:
225
-
226
- H(X) = -\sum_{x_i \in \Theta} p(x_i) \log p(x_i) = -E[\log p(x_i)]   (1)
228
- where X represents a random variable with a set of values
- and probability mass function
- p(x_i) = Pr{X = x_i}, x_i ∈ Θ, and E represents the expectation
231
- operator. Note that p logp = 0 if p = 0.
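A minimal numerical sketch of Eq. (1) for a sampled signal is shown below; estimating p(x) with a fixed-width histogram is an assumption made for illustration, since the discretization used in the study is not restated here.

import numpy as np

def shannon_entropy(signal, bins=32):
    """Estimate H(X) = -sum p(x) log p(x) from a histogram of the samples."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # convention: p*log(p) = 0 when p = 0
    return -np.sum(p * np.log(p))      # natural log; use np.log2 for bits

print(shannon_entropy(np.random.randn(5000)))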
232
- The Approximate Entropy (ApEn) is a statistical measure
233
- that smooths transient interference and can suppress the
- influence of noise by properly setting the algorithm's
- parameters. It can be employed in the analysis of both
236
- stochastic and deterministic signals [19,20]. This is crucial in
237
- the case of biological signals, which are outputs of complex
238
- biological networks and may be deterministic or stochastic, or
239
- both. ApEn provides a model -independent measure of the
240
- irregularity of the signals. The algorithm summarizes a time
241
- series into a non-negative number, with higher values
242
- representing more irregular systems [19,20]. The method
243
- examines time series for similar epochs [21]: more frequent
244
- and more similar epochs lead to lower values of ApEn.
245
- ApEn(m, r, n) measures whether, for n points, sequences that are
- similar for m points remain similar, within a tolerance r, at the
247
- next point. Thus a low value of ApEn reflects a high degree of
248
- regularity. ApEn algorithm counts each sequence as matching
249
- itself. In order to reduce this bias, Sample Entropy
250
- SampEn (m, r, n ) was developed, which does not count self-
251
- matches.
252
- The sample entropy statistic SampEn(m, r, n) is defined as:
- SampEn(m, r, n) = \lim_{n \to \infty} \{ -\ln( A^m(r) / B^m(r) ) \} = -\ln(A/B),   (2)
- where
- A = [(n - m - 1)(n - m)/2] \, A^m(r),   (3)
- B = [(n - m - 1)(n - m)/2] \, B^m(r),   (4)
- B^m(r) is the probability that two sequences match for m
- points, and A^m(r) is the probability that two sequences match
- for m + 1 points, with
- A^m(r) = (n - m)^{-1} \sum_{i=1}^{n-m} A_i^m(r).   (5)
254
- The scalar r is the tolerance for accepting matches. In the
255
- present investigation, we used the parameters recommended in
256
- [22], with m = 2 and r = 0.2 (standard deviation of the
257
- sources is normalized to 1). SampEn is a robust quantifier of
258
- complexity of, for instance, EEG signals [23], and can be used
- to detect artifacts in EEG recordings [24].
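A compact reference-style sketch of the SampEn estimate with the parameters recommended above (m = 2 and r = 0.2 times the standard deviation) is given below; it follows the definition in Eqs. (2)-(5) in a direct, unoptimized form and is not the code used in the study.

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, n) with r expressed as a fraction of the signal's std;
    self-matches are excluded, as in the definition above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(d <= r) - 1                            # exclude self-match
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

print(sample_entropy(np.random.randn(1000)))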
260
- Permutation entropy directly accounts for the temporal
261
- information contained in the time series; furthermore, it has
262
- the quality of simplicity, robustness and very low
263
- computational cost [1,3,4]. Bandt and Pompe [25] introduce a
264
- simple and robust method based on the Shannon entropy
265
- measurement that takes into account time causality by
266
- comparing neighboring values in a time series. The
267
- appropriate symbol sequence arises naturally from the time
268
- series, with no prior knowledge assumed [1]. The Permutation
269
- entropy is calculated for a given time series {x_1, x_2, … , x_n} as a
- function of the scale factor s. In order to be able to compute
- the permutation of a new time vector X_j,
- S_t = [X_t, X_{t+1}, … , X_{t+m−1}] is generated with the embedding
- dimension m and then arranged in an increasing order:
- [X_{t+j_1−1} ≤ X_{t+j_2−1} ≤ ⋯ ≤ X_{t+j_n−1}]. Given m different values,
275
- there will be m! possible patterns , also known as
276
- permutations. If f(π) denotes its frequency in the time series,
277
- its relative frequency is p(π) = f(π)⁄(L⁄s − m + 1 ). The
278
- permutation entropy is then defined as:
279
- PE = -\sum_{i=1}^{m!} p(\pi_i) \ln p(\pi_i)   (6)
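A short sketch of Eq. (6) for order m and time delay t, following the Bandt-Pompe construction described above, is given below; it is illustrative only.

import numpy as np
from math import factorial

def permutation_entropy(x, m=3, t=1, normalize=False):
    """Permutation entropy of order m and time delay t (Eq. (6))."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * t
    counts = {}
    for i in range(n_vectors):
        pattern = tuple(np.argsort(x[i:i + m * t:t]))   # ordinal pattern of the vector
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_vectors
    pe = -np.sum(p * np.log(p))
    return pe / np.log(factorial(m)) if normalize else pe

print(permutation_entropy(np.random.randn(5000), m=7, t=7))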
282
- B. Automatic classification
283
- The main goal of the present work is feature search in
284
- handwriting aiming at preclinical evaluation in order to define
285
- tests for ET diagnosis. These features will define the control
286
- group (CR) and the essential tremor group (ET). A secondary
287
- goal is the optimization of computational cost with the aim of
288
- making these techniques useful for real -time applications in
289
- real environments. Thus, automatic classification will be
290
- modeled with this in mind. We used five different classifiers:
291
- • A Support Vector Machine (SVM) with polynomial
- kernel
- • A multi-layer perceptron (MLP) with neuron number
- in hidden layer (NNHL) =
- max(Attribute Number + Classes Number) and training
- step (TS) = NNHL*10
- • k-NN Algorithm.
298
- The WEKA software suite [26] has been used to carry out
299
- the experiments. The results were evaluated using
- Classification Error Rate (CER, %). For training and
- validation steps we used k-fold cross-validation with k=10.
- Cross-validation is a robust validation method for variable
311
- selection [27]. Repeated cross -validation (as calculated by the
312
- WEKA environment) allows robust statistical tests. We also
313
- use the measurement provided automatically by WEKA
314
- “Coverage of cases” (0.95 level), that is, the confidence interval
- at the 95% level.
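For illustration, an evaluation protocol of this kind (the three classifier families, 10-fold cross-validation, CER as the error measure) can be reproduced outside WEKA, for example with scikit-learn as sketched below; the feature matrix, class labels and hyperparameters are placeholders and not the study's actual configuration.

import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 51 drawings (27 CR + 24 ET), 40 linear + entropy features
X = np.random.rand(51, 40)
y = np.array([0] * 27 + [1] * 24)

classifiers = {
    "SVM (poly kernel)": SVC(kernel="poly", degree=2),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=cv)
    print(f"{name}: CER = {100 * (1 - acc.mean()):.2f}%")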
316
320
334
- Figure 2. CER (%) for paradigms with linear and non -linear
335
- features sets based on entropy: Shannon, ApEn, SmEn and
336
- Permutation Entropy.
337
- IV. RESULTS AND DISCUSSION
338
- The experimentation has been carried out with the
339
- balanced subset BIODARWO. The goal of these experiments
340
- was to examine the potential of entropy algorithms and
341
- selected features for automatic measurement of the
342
- degradation of Archimedes’s spiral drawing with ET. Thus,
343
- previously defined feature sets have been evaluated in order to
344
- properly define control and ET groups. In a first stage linear
345
- and non-linear features have been extracted by several
346
- methods described in section III: Shannon entropy, ApEn and
347
- SmEn. Automatic classification with the paradigms described
- in Section III was performed over the database. The results of
349
- CER (%) for paradigms with linear and non -linear features
350
- sets are summarized in Figure 2. For both algorithms m=2,3
351
- and tolerance r=0.2.
352
- • Non-Linear Feature (NLF) sets increase the number of
- features by about 7%
- • Non-linear feature sets improve the results for all the
- paradigms
- • Shannon entropy outperforms LF (LFSE) for MLP
- and k-NN
- • ApEn improves system performance for m=3 with
- MLP and SVM. k-NN has better performance for
- m=2.
- • SmEn with m=3 appears as the best option for all
- paradigms.
- • The best option is SmEn with m=3 and MLP, with a
- CER of 15.69%.
365
-
366
- In a second stage permutation entropy is evaluated for
- different orders m and time delays t. In our particular case,
- because the signals are composed of 5000-1000 samples, the
- order was fixed up to m=7. The results are also compared
- with SmEn with m=3 and tolerance r=0.2. The results are
- shown in Figure 3.
- • PE improves the results in most of the cases
- • The best results are obtained with PE and m=7, t=7.
- • MLP obtains the best results for this last option
- • The best option is PE-m7t7 with MLP and a CER of
- 15.65%.
- • Good results are achieved even with k-NN, with less
- computational cost.
378
- V. CONCLUSIONS
379
- This work, on selection of nonlinear biomarkers from
380
- drawings and handwriting, is part of a wide -ranging cross
381
- study for the diagnosis of essential tremor which is developed
382
- in Biodonostia Health Institute. Specifically the main goal of
383
- the present work is the analysis of features in Archimedes’
- spiral drawing, one of the most used standard tests for clinical
- diagnosis of ET. In this sense entropy-based features have
- been added to a set of classical linear features (static and
- dynamic). Several entropy algorithms have been evaluated by
- an automatic analysis system consisting of several Machine
- Learning paradigms. The best option is MLP with permutation
- entropy, and good results are obtained even with k-NN. These
- new biomarkers will be integrated in future works with those
- obtained in the Biodonostia study. It should be highlighted
393
- that the use of this technology could provide undoubted
394
- benefits towards the development of more sustainable, low
395
- cost, high quality, non -invasive technologies. These systems
396
- are easily adaptable to the user and environment, and from a
397
- social and economic point of view can be very useful in real
398
- complex environments. In future works new non-linear
399
- features, entropy algorithms and automatic selection
400
- methodologies will be used.
401
-
402
-
403
- Figure 3. CER (%) for the paradigms with the references
404
- Linear Features (LF) and Shannon Entropy (SE) and other
405
- entropy paradigms
406
- K. López de Ipiña et al., "Selection of entropy based features for the analysis of the Archimedes' spiral applied to
407
- essential tremor," 2015 4th International Work Conference on Bioinspired Intelligence (IWOBI ), 2015, pp. 157 -
408
- 162, doi: 10.1109/IWOBI.2015.7160160 .
409
- _________________________________________________________________________________________
410
- Acknowledgments
411
- This work has been partially supported by the University
412
- of the Basque Country by UPV/EHU —58/14 project,
413
- SAIOTEK from the Basque Government, University of Vic -
414
- Central University of Catalonia under the research grant
415
- R0904, and the Spanish Ministerio de Ciencia e Innovación
416
- TEC2012 -38630 -C04-03.
417
- References
418
- [1] Zanin M., L Zunino, OA Rosso, D Papo. Permutation entropy and its
419
- main biomedical and econophysics applications: a review. Entropy 14
420
- (8), 1553 -1577.
421
- [2] Morabito F.C., D. Labate, F. La Foresta, A. Bramanti, G. Morabit, and I.
422
- Palamara. Multivariate Multi -Scale Permutation Entropy for Complexity
423
- Analysis of Alzheimer's Disease EEG. Entropy 14(7):1186 -1202 (2012).
424
- [3] H Eguiraun, K López -de-Ipiña, I Martinez. Application of Entropy and
425
- Fractal Dimension Analyses to the Pattern Recognition of Contaminated
426
- Fish Respon ses in Aquaculture. Entropy
427
- [4] Costa, M.; Goldberger, A.; Peng, C. -K. Multiscale entropy analysis of
428
- biological signals. Phys. Rev. E 2005, 71, 021906:1 –021906:18. - See
429
- more at: http://www.mdpi.com/1099 -
430
- 4300/16/1 1/6133/htm#sthash.ufFUh4mq.dpuf, 2005.
431
- [5] Louis ED, JP. Vonsattel. The emerging neuropathology of essential
432
- tremor. Movement Disorders, 23(2),174 18, 2007.
433
- [6] Faundez -Zanuy M., et al., 2012. Biometric Applications Related to
434
- Human Beings: There Is Life beyond Secu -rity, Cognitive Computation,
435
- 5(1), 136-15,1 DOI: 10.1007/s12559 -012-9169 -9, 2013.
436
- [7] Lopez -de-Ipiña K., J.B. Alonso, C.M. Travieso, J. Solé -Casals , H.
437
- Egiraun, M. Faundez -Zanuy, A. Ezeiza, N. Barroso, M. Ecay , P.
438
- Martinez -Lage, and U. Martinez -de-Lizardui. On the selection of non -
439
- invasive methods based on speech analysis oriented to Automatic
440
- Alzheimer Disease Diagnosis, Sensors, 13 (5), 6730 -6745, 2013.
441
- [8] Laske C., H.R. Sohrabi, S.M. Frost, K. López -de-Ipiña, P. Garrard, M.
442
- Buscem, J. Dauwels, S.R. Soekadar, S. Mueller, C. Linnemann, S.A.
443
- Bridenbaugh, Y. Kanagasingam, R.N Martins, S.E. O'Bryant, Innovative
444
- diagnostic tools for early detection of Alzheimer's disease, Alzheimer &
445
- Demen -tia, 2014, doi:10.1016/j.jalz.2014.06.004, 2013.
446
- [9] Sesa-Nogueras E. and M. Faundez -Zanuy “Biometric recognition using
447
- online uppercase handwritten text” Pattern Recognition, 45 (2012), 128 –
448
- 144, 2012. https://doi.org/10.1016/j.patcog.2011.06.002
449
- [10] Faundez -Zanuy M. On-line signature recognition based on VQ-DTW,
450
- Journal. Pattern Recognition, vol 40(3), pp. 981-982, 2007
451
- https://doi.org/10.1016 /j.patcog.2006.06.007
452
- [11] http://www.wacom.com
453
- [12] J. Ortega -Garcia, J. Gonzalez -Rodriguez, D. Simon -Zorita, S. Cruz -
454
- Llanas “From biometrics technology to applications regarding face,
455
- voice, signature and fingerprint recognition systems”. Chapter 12, pp.
456
- 289-337 in Biometrics Solutions for Authentication in an E-World.
457
- Kluwer Academic Publishers 2002
458
- [13] Lopez -de-Ipiña K., A. Bergareche, P. de la R iva, M. Faundez -Zanuy, J.
459
- Roure, E. Sesa -Nogeras, F.Zelarain. Nonlinear biomarkers from non -
460
- invasive writing signals, applied to Essential Tremor, 3rd SPLab
461
- Workshop, University of technology, Brno, Czech Republic, 2013,JAL
462
- [14] Pullman S.L., MD, FRCPC. Spiral Analysis: A New Technique for
463
- Measuring Tremor With a Digitizing Tablet Movement Disorders, 13(3),
464
- 85-89, DOI: 10.1002/mds.870131315, 1998.
465
- [15] Gray, R.M. Entropy and Information Theory; Springer:
466
- Berlin/Heidelberg, Germany, 1990.
467
- [16] Brissaud, J.B. The meaning of entropy. Entropy 2005, 7, 68–96.
468
- [17] Shannon, C.; Weaver, W. The Mathematical Theory of Communication;
469
- University of Illinois Press: Champaign, IL, USA, 1949.
470
- [18] Richman, J.S.; Moorman, J.R. (2000). "Physiological time-series
471
- analysis using approximate entropy and sample entropy". American
472
- journal of physiology. Heart and circulatory physiology 278 (6): 2039 –
473
- 2049. PMID 10843903. [19] Pincus SM, Huang WM. Approximate entropy, statistical properties and
474
- applications. Commun Stat Theory Methods. 1992;21:3061 –3077. doi:
475
- 10.1080/03610929208830963. [Cross Ref]
476
- [20] Pincus SM. Approximate entropy as a measure of system complexity.
477
- Proc Natl Acad Science. 1991;88:2297 –2301. doi:
478
- 10.1073/pnas.88.6.2297. [PMC free art icle] [PubMed] [Cross Ref]
479
- [21] Dragomir A., Akay Y., Curran A.K. Investigating the complexity of
480
- respiratory patterns during the laryngeal chemoreflex. ournal of
481
- NeuroEngineering and Rehabilitation, 5: 17, 2008.
482
- [22] J. M. Yentes, N. Hunt, K. Schmid, J. P. Kaipust, D. McGrath and N.
483
- Stergiou, "The appropriate use of approximate entropy and sample
484
- entropy with short data sets.," Ann Biomed Eng, vol. 41, no. 2, pp. 349 -
485
- 65, 2013.
486
- [23] P. Ramanand, V. P. Nampoori and R. Sreenivasan, "Complexity
487
- quantification of dense array EEG using sample entropy analysis.," J
488
- Integr Neurosci., vol. 3, no. 3, pp. 343-58, 2004.
489
- [24] R. Mahajan and B. Morshed, "Unsupervised Eye Blink Artifact
490
- Denoising of EEG Data with Modified Multiscale Samp le Entropy,
491
- Kurtosis and Wavelet -ICA," IEEE J Biomed Health Inform., vol. in
492
- press, 2014
493
- [25] Bandt, C.; Pompe, B. Permutation entropy: A natural complexity
494
- measure for time series. Phys. Rev. Lett. 2002, 88, 174102:1 –174102:4.
495
- [26] WEKA. Available online: http://www.cs.waikato.ac.nz/ml/weka
496
- [27] Picard, R.; Cook, D. Cross -Validation of Regression Models. Journal of
497
- the American Statistical As-sociation (1984), vol. 79(387), pp. 575–583
txt/2203.10517.txt DELETED
@@ -1,1153 +0,0 @@
1
- IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. XX, NO. XX, XXXX 2020 1
2
- Learning Whole Heart Mesh Generation From
3
- Patient Images For Computational Simulations
4
- Fanwei Kong, Shawn C. Shadden
5
- Abstract — Patient-specific cardiac modeling combines
6
- geometries of the heart derived from medical images
7
- and biophysical simulations to predict various aspects of
8
- cardiac function. However, generating simulation-suitable
9
- models of the heart from patient image data often requires
10
- complicated procedures and significant human effort. We
11
- present a fast and automated deep-learning method to
12
- construct simulation-suitable models of the heart from
13
- medical images. The approach constructs meshes from
14
- 3D patient images by learning to deform a small set of
15
- deformation handles on a whole heart template. For both
16
- 3D CT and MR data, this method achieves promising ac-
17
- curacy for whole heart reconstruction, consistently outper-
18
- forming prior methods in constructing simulation-suitable
19
- meshes of the heart. When evaluated on time-series CT
20
- data, this method produced more anatomically and tempo-
21
- rally consistent geometries than prior methods, and was
22
- able to produce geometries that better satisfy modeling
23
- requirements for cardiac flow simulations. Our source code
24
- and pretrained networks are available at https://github.
25
- com/fkong7/HeartDeformNets .
26
- Index Terms — Geometric Deep Learning, Mesh Genera-
27
- tion, Shape Deformation, Cardiac Simulations
28
- I. INTRODUCTION
29
- IMAGE-based cardiac modeling is used to simulate various
30
- aspects of cardiac function, including electrophysiology
31
- [1], hemodynamics [2] and tissue mechanics [3]. This method
32
- derives geometries of the heart from patient image data and
33
- numerically solves mathematical equations that describe var-
34
- ious physiology on discretized computational domains. Such
35
- “digital twin” modeling of a patient’s heart can provide infor-
36
- mation that cannot be readily measured to facilitate diagnosis
37
- and treatment planning [4]–[6], or to quantify biomechanical
38
- underpinnings of diseases [7]. This paradigm has motivated
39
- numerous research efforts on a wide range of clinical applica-
40
- tions, such as, simulations of the stress and strain of cardiac
41
- tissues when interacting with implantable cardiac devices [8],
42
- the cardiac flow pattern after surgical corrections [4], [6], and
43
- cardiac rhythm outcome after ablation surgery [5].
44
- This work was supported by the NSF , Award No. 1663747. We
45
- thank Drs. Shone Almeida, Amirhossein Arzani and Kashif Shaikh for
46
- providing the time-series CT image data.
47
- Fanwei Kong, is with the Department of Mechanical Engineering,
48
- University of California at Berkeley, Berkeley, CA 94720 USA (e-mail:
49
- fanwei [email protected]).
50
- Shawn C. Shadden, is with the Department of Mechanical Engineer-
51
- ing, University of California at Berkeley, Berkeley, CA 94720 USA (e-
52
- mail: [email protected]).Generating simulation-suitable models of the heart from
53
- image data has remained a time-consuming and labor-intensive
54
- process. It is the major bottleneck limiting large-cohort val-
55
- idations and clinical translations of functional computational
56
- heart modeling [2], [9]. Indeed, prior studies have been limited
57
- to only a few subjects [2], [4], [5]. The entwined nature of the
58
- heart makes it difficult to differentiate individual cardiac struc-
59
- tures, and typically a complicated series of steps are needed
60
- to identify and label various structures for the assignment
61
- of boundary conditions or modeling parameters. Deforming-
62
- domain computational fluid dynamics (CFD) simulations of
63
- the intracardiac hemodynamics, is particularly labor-intensive
64
- since it requires reconstructing temporally-consistent deforma-
65
- tions of the heart from a sequence of image snapshots.
66
- Deep learning methods can train neural networks from
67
- existing data to automatically process medical images and gen-
68
- erate whole heart reconstructions. Most deep learning methods
69
- have, however, focused on segmentation (pixel classification)
70
- rather than construction of a computer model of the heart,
71
- usually represented by tessellated meshes [10]. Prior studies
72
- on automated cardiac mesh reconstruction thus adopted multi-
73
- stage approaches, where segmentation of the heart was first
74
- generated by convolutional neural networks (CNN) and surface
75
- meshes of the heart were then created from the marching cube
76
- algorithms and following surface post processing methods
77
- [11]. However, the intermediate segmentation steps often intro-
78
- duce extraneous regions containing topological anomalies that
79
- are unphysical and unintelligible for simulation-based analyses
80
- [11]. Direct mesh reconstruction using geometric deep learning
81
- [12], [13] provides a recent avenue to address the end-to-end
82
- learning between volumetric medical images and simulation-
83
- ready surface meshes of the heart [14]–[16]. However, these
84
- approaches often assume the connectivity of the meshes.
85
- That is, the shape and topology of the predicted meshes from
86
- these approaches are pre-determined by the mesh template
87
- and cannot be easily changed to accommodate various mesh
88
- requirements for different cardiac simulations.
89
- To overcome these shortcomings, we propose to learn to
90
- deform the space enclosing a whole heart template mesh to
91
- automatically and directly generate meshes that are suitable
92
- for computational simulations of cardiac function. Here we
93
- propose to leverage a control-handle-based shape deformation
94
- method to parameterize smooth deformation of the template
95
- with the displacements of a small set of control handles and
96
- their biharmonic coordinates. Our approach learns to predict
97
- the control handle displacements to fit the whole heart template
98
- to the target image data. We also introduce learning biases to
99
- produce meshes that better satisfy the modeling requirements
100
- for computational simulation of cardiac flow. The contributions
101
- of this work are summarized as follows:
102
- 1) We propose a novel end-to-end learning method combin-
103
- ing deformation handles to predict deformation of whole
104
- heart mesh templates from volumetric patient image
105
- data. We show that our approach achieves comparable
106
- geometric accuracy for whole heart segmentation as
107
- prior state-of-the-art segmentation methods.
108
- 2) We introduced novel mesh regularization losses on
109
- vessel inlet and outlet structures to better satisfy the
110
- meshing requirements for CFD simulations. Namely, our
111
- method predicts meshes with coplanar vessel caps that
112
- are orthogonal to vessel walls for CFD simulations.
113
- 3) We validated our method on creating 4D dynamic whole
114
- heart and left ventricle meshes for CFD simulation
115
- of cardiac flow. Our method can efficiently generate
116
- simulation-ready meshes with minimal post-processing
117
- to facilitate large-cohort computational simulations of
118
- cardiac function.
119
- A. Learning-Based Shape Deformation
120
- Shape deformation using low-dimensional control of de-
121
- formation fields has been extensively studied for decades in
122
- computer graphics and has been ubiquitously used in anima-
123
- tion. These methods usually interpolate the transformation of a
124
- sparse set of control points to all points on the shape. Among
125
- the most popular approaches are free-form deformation
126
- that uses a regular control point lattice to deform the shape
127
- enclosed within the lattice [17], cage-deformation that uses
128
- a convex control cage that encloses the shape [18], as well
129
- as control-handle-based approaches that directly place control
130
- points on the surface of the shape [19]–[21]. Recent works
131
- have shown success in integrating these shape deformation
132
- methods in deep-learning frameworks for automated mesh
133
- reconstruction from single-view camera images [22], gener-
134
- ative shape modeling [23] as well as deformation transfer
135
- [24]. However, these approaches were designed to take 2D
136
- camera images or 3D meshes as input and used memory-
137
- intensive CNNs or fully connected neural network to predict
138
- the transformation of control points. They thus cannot be
139
- directly applied to deform complicated whole heart structures
140
- from high-resolution 3D medical image data. Therefore, we
141
- herein propose to use graph convolutional networks (GCN)
142
- and sparse sampling of the volumetric image feature map to
143
- predict control point translations and thus efficiently produce
144
- meshes from 3D medical images.
145
- B. Mesh Reconstruction From 3D Medical Images
146
- Recent works on direct mesh reconstruction from volumet-
147
- ric images aim to deform an initial mesh with pre-defined
148
- topology to a target mesh [14], [15]. Our previous approach
149
- leveraged a GCN to predict deformation on mesh vertices
150
- from a pre-defined mesh template to fit multiple anatomical
151
- structures in a 3D image [15]. However, different structures
152
- were represented by decoupled mesh templates and thus still required post-processing to merge different structures for com-
153
- putational simulations involving multiple cardiac structures.
154
- Similarly, [16] used deep neural networks and patient metadata
155
- to predict cardiac shape parameters of a pre-built statistical
156
- shape model of the heart. Our approach presented herein, in
157
- contrast, deforms the space enclosing the mesh template. Once
158
- being trained on the whole heart template, our network can
159
- deform alternative template meshes that represent a subset
160
- of the geometries in the template to accommodate different
161
- modeling requirements.
162
- A few studies have focused on learning space deformation
163
- fields. [25] used a 3D UNet to predict a deformation field to
164
- deform heart valve templates from CT images. Additionally,
165
- our preliminary work combined free-form deformation (FFD)
166
- with deep learning to predict the displacement of a control
167
- point grid to deform the space enclosing a simulation-ready
168
- whole heart template [26]. However, predicting the defor-
169
- mation fields requires many degrees of freedom to produce
170
- accurate results. For example, since FFD has limited capability
171
- for complex shape deformation, our prior method required a
172
dense grid of thousands of control points to achieve acceptable
173
- whole heart reconstruction accuracy. Herein we demonstrate
174
- that using control-handle-based deformation with biharmonic
175
- coordinates achieves higher reconstruction accuracy while
176
using far fewer control points than the FFD-based approach.
177
II. METHODS
178
- A. Shape Deformation Using Biharmonic Coordinates
179
We parameterize deformations of whole heart meshes with the translations of a small set of deformation handles sampled from the mesh template. Given a set of mesh vertices V ∈ R^{n×3} and a set of control points P ∈ R^{c×3}, we compute the biharmonic coordinates W ∈ R^{n×c}, which define a linear map V = WP, where n and c are the number of vertices and the number of control points, respectively. W is defined based on biharmonic functions and can be pre-computed by minimizing a quadratic deformation energy function while satisfying the handle constraints with linear precision [21]. Namely, let Q ∈ R^{c×n} be the binary selector matrix that selects the rows of X corresponding to the control handles, and let T ∈ R^{(n−c)×n} be the complementary selector matrix of Q corresponding to the free vertices. W is computed by

V = argmin_{X ∈ R^{n×3}} (1/2) trace(X^T A X),  subject to QX = P,    (1)

V = (Q^T − T^T (T A T^T)^{−1} T A Q^T) P = WP,    (2)

where A is a positive semi-definite quadratic form based on the squared Laplacian energy to encourage smoothness [21]. Under this framework, displacements of the control handles can smoothly deform the underlying mesh template.
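As an illustration of Eqs. (1)-(2), the following is a minimal NumPy sketch of handle-based deformation on a toy chain of vertices standing in for the whole heart template. The chain topology, the handle indices, and the bi-Laplacian energy built from a simple graph Laplacian are illustrative assumptions rather than the actual template and energy matrix used in the paper.

import numpy as np

# Toy "mesh": a 1D chain of n vertices standing in for the whole heart template.
n, c = 8, 3                              # vertices, control handles
handles = np.array([0, 4, 7])            # indices of the control handles
free = np.setdiff1d(np.arange(n), handles)

# Graph Laplacian of the chain and the squared-Laplacian energy matrix A.
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1
A = L.T @ L                              # positive semi-definite bi-Laplacian

# Binary selector matrices Q (handles) and T (free vertices).
Q = np.eye(n)[handles]                   # c x n
T = np.eye(n)[free]                      # (n-c) x n

# Biharmonic coordinates, Eq. (2): W = Q^T - T^T (T A T^T)^-1 T A Q^T.
W = Q.T - T.T @ np.linalg.solve(T @ A @ T.T, T @ A @ Q.T)   # n x c

# Deform: translate the handles and map every vertex with V = W P.
template = np.stack([np.linspace(0, 1, n), np.zeros(n), np.zeros(n)], axis=1)
P = template[handles] + np.array([[0, 0.2, 0], [0, -0.1, 0], [0, 0.3, 0]])
V = W @ P                                # n x 3 deformed vertices

assert np.allclose(V[handles], P)        # handles are interpolated exactly
print(V.round(3))

Because Q T^T = 0 and Q Q^T = I, the deformed handles coincide exactly with the prescribed positions P, which is the linear-precision property exploited above.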
- B. Network Architecture
203
Figure 1 shows the overall architecture of our network. Its central component is the novel control-handle-based mesh deformation module, which learns to predict the displacements of control handles based on image features, so that the underlying mesh template can be smoothly deformed to match the input 3D image data.
Fig. 1. Diagram of the proposed automatic whole heart reconstruction approach. A total of three deformation blocks are used to progressively deform the mesh template, using 75, 75, and 600 control handles, respectively.
211
- 1) Image Encoding and Segmentation Modules: We applied
212
- a residual 3D CNN backbone to extract and encode image
213
- features at multiple resolutions [27]. The CNN backbone
214
- involves 4 down-sampling operations so that image feature
215
volumes at 5 different resolutions are obtained. These image
216
- feature volumes are used as inputs to GCN layers to predict
217
- the displacements of control handles. Similar to [15], we also
218
- used a segmentation module that predicted a binary segmen-
219
- tation map to enable additional supervision using ground truth
220
- annotations. This module was only used during training.
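As a rough sketch of what such a backbone can look like, the following PyTorch snippet builds a small residual 3D CNN with four stride-2 down-samplings, returning feature volumes at five resolutions. The channel widths, normalization, activation, and the omission of the auxiliary segmentation decoder are simplifying assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Simple residual block: two 3x3x3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.norm1 = nn.InstanceNorm3d(ch)
        self.norm2 = nn.InstanceNorm3d(ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        y = self.act(self.norm1(self.conv1(x)))
        y = self.norm2(self.conv2(y))
        return self.act(x + y)

class Encoder3D(nn.Module):
    """Backbone with 4 down-samplings -> feature volumes at 5 resolutions."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(5)]          # e.g. 16, 32, 64, 128, 256
        self.stem = nn.Sequential(nn.Conv3d(in_ch, chs[0], 3, padding=1),
                                  ResBlock3D(chs[0]))
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Conv3d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                           ResBlock3D(chs[i + 1]))
             for i in range(4)])

    def forward(self, x):
        feats = [self.stem(x)]
        for stage in self.stages:
            feats.append(stage(feats[-1]))
        return feats                                      # 5 feature volumes

image = torch.randn(1, 1, 64, 64, 64)                    # dummy CT/MR patch
for f in Encoder3D()(image):
    print(f.shape)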
221
- 2) Mesh Deformation Module: Biharmonic coordinates con-
222
- strain the displaced control handles to be located on the
223
- deformed mesh template. Therefore, regardless of which set
224
- of control handles are sampled, these handles will be located
225
- at the corresponding positions on the template mesh. We
226
- used a neural network to update the coordinates of all points
227
(S ∈ R^{n×3}) on the mesh template and obtain the coordinates of the selected control handles (P ∈ R^{c×3}) from the updated
229
- mesh vertex locations to deform the template using the pre-
230
- computed biharmonic coordinates. This design allows picking
231
- arbitrary sets of control handles to deform the template at
232
- various resolutions after training. Furthermore, training to pre-
233
- dict the coordinates of every mesh vertex provides additional
234
- supervision that can speed up training.
235
- Since the mesh template can be represented as a graph,
236
- a GCN was used to predict the mesh vertex displacements.
237
- We chose to approximate the graph convolutional kernel with
238
- a first order Chebyshev polynomial of the normalized graph
239
- Laplacian matrix [12]. At each mesh vertex, we extracted the
240
image feature vectors at the corresponding image coordinates from multiple image feature volumes at various resolutions.
241
- These image feature vectors were then concatenated with the
242
- mesh feature vectors following a GCN layer. The combined
243
- vertex feature vectors were then processed by three graph
244
- residual blocks. We then used an additional GCN layer to
245
- predict displacements as 3D feature vectors on mesh vertices.
246
- We used a total of three deformation blocks to progressively
247
- deform the template mesh. The first and second deformation
248
- blocks used 75 control handles to deform the mesh, whereas
249
- the last deformation block used more control handles, 600, for
250
- a more detailed prediction.
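The following PyTorch sketch illustrates the two mechanisms described above: trilinear sampling of image features at (normalized) mesh vertex coordinates and a first-order Chebyshev-style graph convolution that predicts per-vertex displacements. The class and function names are hypothetical, a single feature volume is used instead of several resolutions, and the graph operator is a row-normalized adjacency rather than the scaled Laplacian, so this is only a simplified stand-in for one deformation block.

import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_point_features(feature_volume, points):
    """Sample a (1, C, D, H, W) feature volume at normalized vertex
    coordinates `points` (P, 3) in [-1, 1]; 'bilinear' is trilinear for 5D."""
    grid = points.view(1, -1, 1, 1, 3)                      # (1, P, 1, 1, 3)
    sampled = F.grid_sample(feature_volume, grid, mode='bilinear',
                            align_corners=True)             # (1, C, P, 1, 1)
    return sampled.reshape(feature_volume.shape[1], -1).t() # (P, C)

class ChebGCNLayer(nn.Module):
    """First-order Chebyshev-style graph convolution: Y = X W0 + (A_hat X) W1."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.w0 = nn.Linear(in_ch, out_ch)
        self.w1 = nn.Linear(in_ch, out_ch, bias=False)

    def forward(self, x, a_hat):
        return self.w0(x) + self.w1(a_hat @ x)

class DeformationBlock(nn.Module):
    """Predict 3D displacements for every template vertex."""
    def __init__(self, feat_ch, hidden=64):
        super().__init__()
        self.gcn1 = ChebGCNLayer(feat_ch + 3, hidden)
        self.gcn2 = ChebGCNLayer(hidden, hidden)
        self.out = ChebGCNLayer(hidden, 3)

    def forward(self, verts, a_hat, feature_volume):
        img_feat = sample_point_features(feature_volume, verts)
        x = torch.cat([verts, img_feat], dim=1)              # mesh + image features
        x = torch.relu(self.gcn1(x, a_hat))
        x = torch.relu(self.gcn2(x, a_hat))
        return verts + self.out(x, a_hat)                    # displaced vertices

# Toy usage: 100 template vertices on a chain graph, one feature volume.
P = 100
verts = torch.rand(P, 3) * 2 - 1                             # normalized coordinates
adj = torch.diag(torch.ones(P - 1), 1); adj = adj + adj.t()
deg = adj.sum(1, keepdim=True).clamp(min=1)
a_hat = adj / deg                                            # row-normalized adjacency
features = torch.randn(1, 16, 32, 32, 32)
new_verts = DeformationBlock(feat_ch=16)(verts, a_hat, features)
print(new_verts.shape)                                        # torch.Size([100, 3])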
251
- C. Network Training
252
Fig. 2. Graphical illustration of the different loss functions. Yellow and teal on the right panel show the caps and walls to which the mesh regularization losses are applied, respectively, and arrows show cap normal vectors.
255
- The training of the networks was supervised by 3D ground
256
- truth meshes of the whole heart as well as a binary segmenta-
257
- tion indicating occupancy of the heart on the input image grid.
258
- We used the following loss functions (illustrated in figure 2)
259
- in training to produce accurate whole heart geometries while
260
- ensuring the resulting mesh is suitable to support simulations.
261
- 1) Geometric Consistency Losses: The geometric consis-
262
tency loss L_geo is the geometric mean between the point
263
- and normal consistency losses to supervise the geometric
264
- consistency between the prediction and the ground truth [15].
265
- We note that edge length and Laplacian regularization losses,
266
- such as used in [15], are not necessary since the smoothness of
267
- the mesh prediction is naturally constrained by the biharmonic
268
- coordinates used to deform the template. Since only the
269
- selected control points were used to deform the mesh template
270
- while the displacements of all mesh points were predicted, we
271
needed to regularize the L2 distances between the mapped mesh points (S ∈ R^{n×3}) and the corresponding mesh vertices on the deformed mesh template (V ∈ R^{n×3}). This consistency
274
- loss between the points and the mesh ensures that coordinates
275
- of other unselected control points also result in reasonable
276
- deformations. In other words, the deformation results should
277
- not be sensitive to the choice of pre-selected control points.
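As a concrete but simplified example of a point/normal consistency term, the sketch below computes a symmetric Chamfer distance and a normal consistency term between two point sets using nearest-neighbour correspondences, and combines them with the geometric mean mentioned above. This is an assumption-laden stand-in for L_geo, not the authors' exact implementation.

import torch

def point_and_normal_consistency(pred_pts, pred_nrm, gt_pts, gt_nrm):
    """Symmetric Chamfer distance plus a normal consistency term between two
    point sets with per-point unit normals (a simplified stand-in for L_geo)."""
    d = torch.cdist(pred_pts, gt_pts)            # (Np, Ng) pairwise distances
    idx_p2g = d.argmin(dim=1)                    # nearest GT point per prediction
    idx_g2p = d.argmin(dim=0)                    # nearest prediction per GT point

    point_loss = d.min(dim=1).values.pow(2).mean() + d.min(dim=0).values.pow(2).mean()
    normal_loss = (1 - torch.abs((pred_nrm * gt_nrm[idx_p2g]).sum(-1))).mean() \
                + (1 - torch.abs((gt_nrm * pred_nrm[idx_g2p]).sum(-1))).mean()
    return point_loss, normal_loss

pred = torch.rand(500, 3, requires_grad=True)
gt = torch.rand(800, 3)
pred_n = torch.nn.functional.normalize(torch.randn(500, 3), dim=-1)
gt_n = torch.nn.functional.normalize(torch.randn(800, 3), dim=-1)
pl, nl = point_and_normal_consistency(pred, pred_n, gt, gt_n)
l_geo = torch.sqrt(pl * nl)                      # geometric-mean combination
l_geo.backward()
print(float(pl), float(nl), float(l_geo))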
278
2) Mesh Regularization for CFD Simulations: Cardiac models generally include portions of the great vessels connected to the heart (e.g., pulmonary veins and arteries, venae cavae, and aorta). For CFD simulation of cardiac flow, locations where these vessels are truncated (so-called inflow and outflow boundaries, or "caps") should be planar and nominally orthogonal to the vessel. On our training template, we labeled these caps, as well as the associated vessel walls. Figure 2 shows the identified cap and wall faces on the left atrium (LA), right atrium (RA) and aorta. We applied a coplanar loss on each cap that penalizes the L2 differences of the surface normals on the cap. Namely,

L_coplanar = Σ_k Σ_{j∈C_k} || n_j − (1/|C_k|) Σ_{j'∈C_k} n_{j'} ||_2^2,

where C_k is the set of mesh faces for the k-th cap and n_j is the normal vector of the j-th face on C_k. For mesh vertices that are on the vessel walls near the caps, we minimized the absolute value of the dot product between the surface normal vectors and the surface normal vector of the caps to encourage orthogonality. Namely,

L_orthogonal = Σ_k Σ_{j∈W_k} | ⟨ n_j , (1/|C_k|) Σ_{p∈C_k} n_p ⟩ |,

where W_k is the set of mesh faces on the vessel wall that corresponds to the k-th cap.
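The two regularizers above translate directly into a few lines of tensor code. The sketch below evaluates the per-cap coplanarity and cap-wall orthogonality terms from unit face normals of one cap and its adjacent wall; summing the returned values over all caps k gives L_coplanar and L_orthogonal. The toy cap and wall normals are fabricated for illustration.

import torch

def cap_regularization(cap_normals, wall_normals):
    """L_coplanar and L_orthogonal contributions for one cap, given unit face
    normals of the cap faces (|C_k| x 3) and of the adjacent wall faces (|W_k| x 3)."""
    mean_cap_normal = cap_normals.mean(dim=0)
    coplanar = ((cap_normals - mean_cap_normal) ** 2).sum(dim=1).sum()
    orthogonal = torch.abs(wall_normals @ mean_cap_normal).sum()
    return coplanar, orthogonal

# Toy example: a nearly flat cap facing +z and a cylindrical wall around it.
torch.manual_seed(0)
cap_n = torch.nn.functional.normalize(
    torch.tensor([[0., 0., 1.]]) + 0.05 * torch.randn(20, 3), dim=-1)
theta = torch.linspace(0, 6.28, 30)
wall_n = torch.stack([torch.cos(theta), torch.sin(theta), torch.zeros_like(theta)], dim=1)
cop, orth = cap_regularization(cap_n, wall_n)
print(float(cop), float(orth))   # both small for a well-formed inlet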
306
- 3) Weighted Mesh Losses: Patient images may not always
307
- contain the targeted cardiac structures. As shown in Figure
308
- 2 (left), cardiac structures such as pulmonary veins, pul-
309
- monary arteries and the aorta are often not captured in full,
310
although the trunks of these structures can be necessary for
311
- simulations. We thus aim to predict “complete” whole heart
312
- structures from incomplete image data. Namely, we computed
313
- the bounding box of the ground truth meshes and assigned zero
314
- weights within the geometric consistency loss for predicted
315
- mesh vertices that were located outside the bounding box.
316
- Furthermore, as the geometry of inlet vessels are important
317
- to the accuracy of CFD results, we applied a higher weight
318
- for the geometric consistency loss on mesh vertices that are
319
located on vessel walls near the inlets.
4) Total Losses: The total loss on a predicted mesh M is

L_mesh(M, G, W) = Σ_i L_geo(M_i, G_i, w_i) + α Σ_k ( L_coplanar(M) + β L_orthogonal(M) ),    (3)

where G_i represents the ground truth mesh for an individual cardiac structure, and w_i represents the weighting vector for each mesh point. M can be both the deformed whole heart mesh template V and the mesh obtained from mapping all mesh points S. The total loss for training is a weighted sum of the mesh losses and the segmentation loss, which is the sum of the binary cross-entropy and the Dice losses between the predicted occupancy probability map I_p and the ground truth binary segmentation of the whole heart I_g:

L_total = λ_1 L_mesh(S, G, W) + λ_2 L_mesh(V, G, W) + λ_3 ||S − V||_F^2 + L_seg(I_g, I_p).    (4)
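A minimal sketch of how the terms in Eq. (4) can be assembled is given below, with a soft Dice plus binary cross-entropy segmentation loss and placeholder λ weights; the actual weighting and implementation details used for training are not specified here and the values shown are assumptions.

import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss between a predicted occupancy probability map and a
    binary ground-truth segmentation."""
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def segmentation_loss(logits, target):
    prob = torch.sigmoid(logits)
    return F.binary_cross_entropy_with_logits(logits, target) + dice_loss(prob, target)

def total_loss(mesh_loss_S, mesh_loss_V, S, V, logits, target,
               lambdas=(1.0, 1.0, 0.5)):
    """Weighted sum in the spirit of Eq. (4): mesh losses on S and V, the
    S-V consistency term, and the segmentation loss (lambdas are placeholders)."""
    l1, l2, l3 = lambdas
    consistency = ((S - V) ** 2).sum()          # squared Frobenius norm ||S - V||_F^2
    return l1 * mesh_loss_S + l2 * mesh_loss_V + l3 * consistency \
           + segmentation_loss(logits, target)

# Dummy tensors standing in for network outputs and ground truth.
S = torch.rand(1000, 3, requires_grad=True)
V = torch.rand(1000, 3)
logits = torch.randn(1, 1, 32, 32, 32, requires_grad=True)
target = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
loss = total_loss(torch.tensor(0.8), torch.tensor(0.9), S, V, logits, target)
loss.backward()
print(float(loss))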
336
- 5) Image Augmentation for Shape Completion: Leveraging
337
- the mesh template, we trained our method to predict the
338
- geometries of the whole heart represented by the template
339
- mesh when the images do not cover the complete cardiac
340
- structures. Since CT images often do not cover the whole
341
- heart, we selected CT images that did cover the whole heart
342
- (n=10) from our training set and then generated 10 random
343
- crops for each image while keeping the ground truth meshes
344
- to be the same. Figure 3 visualizes example image crops
345
and the corresponding ground truth meshes. We also applied random shearing, rotations, and elastic deformations, following the same augmentation strategies described in our prior work [15].
Fig. 3. Visualization of augmented input image crops and the corresponding ground truth meshes.
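A simple version of this cropping augmentation is sketched below: the image volume is randomly truncated along each axis (up to a chosen fraction) while the ground truth meshes, which live in physical coordinates, are left untouched and only the image origin is updated. The crop-fraction parameter and the handling of the origin are illustrative assumptions.

import numpy as np

def random_axis_crop(image, spacing, origin, max_fraction=0.3, rng=None):
    """Randomly crop up to `max_fraction` of the image extent along each axis
    while leaving the ground-truth meshes (in physical coordinates) untouched.
    Returns the cropped volume together with its updated origin."""
    rng = rng or np.random.default_rng()
    starts, stops = [], []
    for size in image.shape:
        cut = int(rng.uniform(0, max_fraction) * size)
        lo = rng.integers(0, cut + 1)           # split the cut between both ends
        starts.append(lo)
        stops.append(size - (cut - lo))
    cropped = image[starts[0]:stops[0], starts[1]:stops[1], starts[2]:stops[2]]
    new_origin = origin + np.array(starts) * spacing
    return cropped, new_origin

image = np.random.rand(128, 128, 96).astype(np.float32)   # dummy CT volume
spacing = np.array([1.0, 1.0, 1.5])                        # mm per voxel
origin = np.zeros(3)
crops = [random_axis_crop(image, spacing, origin) for _ in range(10)]
print([c[0].shape for c in crops])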
351
III. EXPERIMENTS
352
- A. Datasets and Experiments
353
- 1) Task-1: Whole Heart Segmentation for 3D Images: We
354
- applied our method to public datasets of contrast-enhanced
355
- 3D CT images and 3D MR images from both normal and
356
- abnormal hearts. For training and validation, we used a total
357
- of 102 CT images and 47 MR images from the multi-
358
- modality whole heart segmentation challenge (MMWHS) [10],
359
- orCalScore challenge [28], left atrial wall thickness challenge
360
- [29] and left atrial segmentation challenge [30]. Among them,
361
- we used 87 CT and 41 MR images for training, and we used 15
362
- CT images and 6 MR images for validation, where we tuned
363
- the hyper-parameters and selected the model trained with the
364
- hyper-parameter set that performed best on the validation data.
365
We then evaluated the final performance of the selected model
366
- on a held out test dataset from the MMWHS challenge, which
367
- contained 40 CT and 40 MR images. For CT images, the
368
in-plane resolutions vary from 0.4×0.4 mm to 0.78×0.78 mm and the through-plane resolutions vary from 0.5 mm to 1.6 mm. For MR images, the in-plane resolutions vary from 1.25×1.25 mm to 2×2 mm and the through-plane resolutions vary from 2 mm to 2.3 mm.
373
- Fig. 4. Illustration of example CT and MR image data, the correspond-
374
- ing surface meshes generated from manual segmentation using the
375
- marching cube algorithm, and the resulting ground truth surface meshes
376
- after post processing.
377
- For each image in the dataset, we followed the MMWHS
378
- challenge [10] and created ground truth segmentation of 7
379
- cardiac structures to supervise the training and evaluate model
380
- performance on validation and test datasets. The 7 cardiac
381
- structures included the blood cavities of left ventricle (LV),
382
- right ventricle (RV), left atrium (LA), right atrium (RA),
383
- LV myocardium (Myo), aorta (Ao), and pulmonary artery
384
- (PA), for all images. Figure 4 illustrates our pipeline to
385
- generate smooth ground truth surface meshes from the manual
386
- segmentations. We resampled the segmentation to a resolution
387
of 1×1×1 mm, then used the marching cubes algorithm
388
- to generate the surface meshes for each cardiac structure. We
389
- then applied a Windowed-Sinc smoothing filter [31] with a low
390
- pass band of 0.01 and 20 iterations of smoothing to generate
391
- smooth ground truth meshes. Furthermore, as visualized in the
392
- example CT case in Figure 4, surface meshes were clipped at
393
- the image bounding box to remove the fictitious surface at
394
- the image boundaries for cardiac structures that exceeded the
395
- coverage of the image data.
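The following sketch mirrors this ground-truth pipeline with common Python tools: the segmentation is resampled to 1 mm isotropic voxels with scipy, a surface is extracted with skimage's marching cubes, and a simple Laplacian smoothing loop is used as a stand-in for the VTK windowed-sinc filter (pass band 0.01, 20 iterations) described above. The smoothing weight and the dummy segmentation are assumptions.

import numpy as np
from scipy import ndimage
from skimage import measure

def segmentation_to_surface(binary_seg, spacing, smooth_iters=20, lam=0.5):
    """Resample a binary segmentation to 1 mm isotropic voxels, extract a
    surface with marching cubes, and apply simple Laplacian smoothing
    (a stand-in for the windowed-sinc filter used in the paper)."""
    zoom = np.asarray(spacing, dtype=float)            # target: 1 x 1 x 1 mm
    iso = ndimage.zoom(binary_seg.astype(float), zoom, order=1)
    verts, faces, _, _ = measure.marching_cubes(iso, level=0.5, spacing=(1.0, 1.0, 1.0))

    # Build vertex adjacency from faces and smooth by averaging neighbours.
    neighbours = [set() for _ in range(len(verts))]
    for a, b, c in faces:
        neighbours[a].update((b, c)); neighbours[b].update((a, c)); neighbours[c].update((a, b))
    for _ in range(smooth_iters):
        means = np.stack([verts[list(nb)].mean(axis=0) if nb else v
                          for v, nb in zip(verts, neighbours)])
        verts = (1 - lam) * verts + lam * means
    return verts, faces

seg = np.zeros((40, 40, 20), dtype=np.uint8)
seg[10:30, 10:30, 5:15] = 1                            # dummy "cardiac structure"
verts, faces = segmentation_to_surface(seg, spacing=(1.0, 1.0, 2.0))
print(verts.shape, faces.shape)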
396
- We compared the geometric accuracy of the reconstructed
397
- whole heart surfaces against prior deep-learning methods that
398
- demonstrated strong performance of segmenting whole heart
399
- geometries from 3D medical images. Namely, we considered
400
- HeartFFDNet [26], our prior work that generates simulation-
401
- ready whole heart surface meshes from images by learning
402
- free-form deformation from a template mesh, MeshDeformNet
403
- [15] that predicts displacements on sphere mesh templates, as
404
- well as 2D UNet [32] and a residual 3D UNet [27] that are
405
arguably the most successful neural network architectures for
406
- image segmentation. We also implemented a SpatialConfigu-
407
- ration Net (SCN) [33] that ranked first for CT and second for
408
- MRI in the MMWHS challenge, using our residual 3D UNet
409
- backbone. This segmentation-based approach incorporates rel-
410
- ative positions among structures to focus on anatomically
411
feasible regions. All methods were trained on the same training and validation data splits, and used the same pre-processing
412
- and augmentation procedures to ensure a fair comparison.
413
- 2) Task-2: Whole Heart Mesh Construction for 4D Images:
414
- We applied our method on time-series CT images to evaluate
415
- its performance on creating whole heart meshes for CFD
416
- simulations. Since the MMWHS dataset does not include
417
- pulmonary veins, LA appendage or venae cavae, we prepared
418
- another set of ground truth segmentations to include these
419
- structures. The geometric accuracy and the mesh quality of
420
- the reconstructed meshes for CFD simulations were then
421
- evaluated on 10 sets of time-series CT images against the
422
- learning-based mesh reconstruction baselines, HeartFFDNet
423
- and MeshDeformNet.
424
- Fig. 5. Visualization of simulation-ready templates with trimmed
425
- inlet/outlet geometries and tagged face IDs for prescribing boundary
426
- conditions.
427
- 3) Task-3: CFD Simulations: We conducted CFD simula-
428
- tions of cardiac flow using the predicted whole heart meshes
429
- from time-series CT images. Since our predicted model does
430
- not contain heart valves, only diastolic flow was simulated.
431
- We also conducted CFD simulations for the LV and simulated
432
- the LV flow for the entire cardiac cycle. Figure 5 visualizes
433
the simulation-ready templates of the 4 heart chambers and
434
- the LV with trimmed inlet/outlet geometries and tagged face
435
- IDs for prescribing boundary conditions. These simulation-
436
- ready templates were manually created from the training whole
437
- heart template in a surface processing software, SimVascular
438
[34]. We linearly interpolated the pre-computed biharmonic
439
- coordinates onto the new templates so that our trained mod-
440
- els can readily deform these new templates. The simulation
441
- results were compared against results obtained from time-
442
- series ground truth meshes created manually in SimVascular
443
- [34]. We also compared simulation results among our method,
444
- HeartFFDNet, and a conventional semi-automatic model con-
445
- struction pipeline based on image registration, where a manu-
446
- ally created ground truth mesh was morphed based on trans-
447
- formations obtained from registering images across different
448
- time points.
449
- B. Evaluation Metrics
450
- We used Dice similarity coefficient (DSC) and Hausdorff
451
- Distance (HD) to measure segmentation accuracy. The DSC
452
- and HD values for the MMWHS test dataset were evalu-
453
- ated with an executable provided by MMWHS organizers.
454
- For mesh-based methods, we converted the predicted surface
455
- meshes to segmentation prior to evaluation. Mesh quality was
456
- compared in terms of the percentage self-intersection, which
457
measures the local topological correctness of the meshes,
458
- orthogonality of the vessel caps with respect to the vessel
459
- walls, as well as the coplanarity of the vessel caps. The
460
- percentage mesh self-intersection was calculated as the per-
461
- centage of intersected mesh facets detected by TetGen [35]
462
- among all mesh facets. The orthogonality between vessel
463
caps and walls (Cap-Wall Orthogonality, CWO) was measured by the normal consistency between the mean cap normal vector and the vector connecting the centroids of the mesh points on the cap and on the wall, respectively. Namely,

CWO = Σ_k [ 1 − ⟨ (1/|C_k|) Σ_{j∈C_k} n_j , d_k ⟩ ],

where d_k is the unit vector pointing from the centroid of the cap vertices, (1/|C_k|) Σ_{j∈C_k} x_j, to the centroid of the wall vertices, (1/|W_k|) Σ_{i∈W_k} x_i, and W_k and C_k represent the sets containing the mesh vertices on a vessel wall and the corresponding vessel cap. Vessel cap coplanarity was measured by the projected distance between the mesh vertices on the cap and the best-fit plane over those mesh vertices. For CFD simulations, we compared integrative measures during a cardiac cycle, namely, LV volume and average kinetic energy

KE′ = (1 / (2 V_LV)) ∫∫∫ ρ u² dV,

where V_LV is the volume of the LV and u is the flow velocity. We also compared the mean velocity near the mitral valve opening (MO) and aortic valve opening (AO) during a cardiac cycle. Paired t-tests were used to assess statistical significance.
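For illustration, the snippet below re-implements two of these metrics in NumPy/SciPy: the Dice similarity coefficient on binary masks and the symmetric Hausdorff distance between two surface point sets. The official MMWHS evaluation executable was used in the paper, so this is only an approximate, illustrative reimplementation on dummy data.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g. surface
    vertices of predicted and ground-truth meshes), in the same units."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

pred = np.zeros((32, 32, 32), dtype=np.uint8); pred[8:24, 8:24, 8:24] = 1
gt = np.zeros((32, 32, 32), dtype=np.uint8); gt[10:26, 9:25, 8:24] = 1
print("DSC:", round(dice_coefficient(pred, gt), 3))

pa = np.argwhere(pred).astype(float)      # voxel coordinates as point sets
pb = np.argwhere(gt).astype(float)
print("HD (voxels):", round(hausdorff_distance(pa, pb), 3))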
485
- C. Deforming-Domain CFD simulations of Cardiac Flow
486
- We applied the Arbitrary Lagrangian-Eulerian (ALE) for-
487
- mulation of the incompressible Navier-Stokes equations to
488
- simulate the intraventricular flow and account for deforming
489
- volumetric mesh using the finite element method. Since time
490
- resolution of image data is too coarse (about 0.1s) to be used
491
- directly in time-stepping of the Navier–Stokes equations, cubic
492
- spline interpolation was applied to interpolate the meshes
493
- predicted at different imaging time points so that the time
494
- resolution of the interpolated meshes was 0.001s, which cor-
495
- responded to the simulation time step. For the fluid domain,
496
- the mesh motions computed from these interpolated meshes
497
- were imposed as Dirichlet boundary conditions on the chamber
498
- walls. For simulations of LV flow, we imposed Dirichlet
499
- boundary conditions on the mitral inlet during systole, and
500
- on the aortic outlet during diastole. Neumann (prescribed
501
- pressure) boundary conditions were applied to the mitral inlet
502
during diastole or to the aortic outlet during systole. Diastole
503
- and systole phases were determined based on the increase and
504
- decrease of the LV volume. For simulations of diastolic cardiac
505
- flow within 4 heart chambers, we applied Neumann boundary
506
- conditions to the pulmonary vein inlets, and imposed Dirichlet
507
boundary conditions on the aortic outlet. Blood was assumed to have a viscosity μ of 4.0×10^-3 Pa·s and a density ρ of 1.06 g/cm^3. The volumetric mesh was created automatically
510
- from our predicted surface mesh using TetGen [35], using a
511
- maximum edge size of 1.5mm. The equations were solved with
512
- the open-source svFSI solver from the SimVascular project
513
- [36].
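The temporal up-sampling step described above (from roughly 0.1 s imaging frames down to the 0.001 s simulation time step) can be illustrated with SciPy's cubic splines applied to the per-vertex positions; the periodic boundary handling and the dummy vertex trajectories below are assumptions made for the sake of a runnable example.

import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_mesh_motion(times, vertex_sequences, dt=0.001, periodic=True):
    """Interpolate per-vertex positions predicted at the imaging time points
    (e.g. every ~0.1 s) down to the CFD time step (e.g. 0.001 s)."""
    bc = 'periodic' if periodic else 'not-a-knot'
    spline = CubicSpline(times, vertex_sequences, axis=0, bc_type=bc)
    n_steps = int(round((times[-1] - times[0]) / dt))
    fine_times = times[0] + dt * np.arange(n_steps + 1)
    return fine_times, spline(fine_times)

# Ten imaging phases of a dummy mesh with 500 vertices over one cardiac cycle.
times = np.linspace(0.0, 1.0, 11)                    # s; last frame repeats the first
motion = np.random.rand(11, 500, 3)
motion[-1] = motion[0]                               # enforce periodicity
fine_t, fine_motion = interpolate_mesh_motion(times, motion)
print(fine_motion.shape)                             # (1001, 500, 3)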
514
IV. RESULTS
515
- A. Comparative Studies of Whole Heart Segmentation
516
- Performance on MMWHS Dataset
517
- 1) Comparison of Geometric Accuracy with Other Methods:
518
Table I compares the average Dice scores and Hausdorff distances of the reconstruction results of both the whole heart
519
- and the individual cardiac structures for the MMWHS test
520
- dataset. We show the accuracy of deforming the template by
521
- mapping all mesh points ( S) and by interpolating the mesh
522
- deformation using only 600 uniformly-sampled control han-
523
- dles ( V). Mapping all points consistently achieved higher dice
524
- scores than using 600 selected control handles, but the HDs are
525
- worse for some cardiac structures. For both CT and MR data,
526
- in terms of Dice scores, our method consistently outperformed
527
- HeartFFDNet and 3D UNet for all cardiac structures and
528
- achieved comparable performance with MeshDeformNet, 2D
529
- UNet, 3D SCNet for most cardiac structures. Our method
530
- achieved the best HDs for LA, RA and RV for CT data and
531
- for all cardiac structures except for aorta and PA for MR data.
532
- Figure 6 presents the best, median, and worst segmentation
533
- results of our method on CT and MRI test images and pro-
534
- vides qualitative comparisons of the results from the different
535
- methods. As shown, mesh-based approaches, ours, HeartFFD-
536
- Net and MeshDeformNet produced smooth and anatomically
537
- consistent cardiac geometries while segmentation-based ap-
538
- proaches, 2D UNet, residual 3D UNet, 3D SCNet produced
539
- segmentations with topological artifacts such as missing parts,
540
- holes, and isolated islands. Although 3D SCNet produced
541
- higher Dice scores than residual 3D UNet, it produced a
542
- few misclassifications where the LA was incorrectly classified
543
- into RV . Although MeshDeformNet produced smooth and
544
- anatomically consistent cardiac geometries, it was prone to
545
- gaps between adjacent cardiac structures by deforming un-
546
- coupled spheres. Our method and the HeartFFDNet were able
547
- to avoid this limitation by deforming the space enclosing
548
- a whole heart template, preserving the connections among
549
- cardiac structures.
550
- 2) Effect of Varying Control Handle Numbers: We investi-
551
- gated the effect of various design choices on the whole heart
552
segmentation performance of our proposed method. Table II
553
- presents the effect of varying the number of control handles
554
- used during training on the average Dice scores and Hausdorff
555
- distances of the reconstruction results. Increasing the number
556
- of control handles used in the last deformation block from
557
- 75 to 900 generally resulted in increased performance for
558
- most cardiac structures. However, the resulting improvement
559
- in terms of Dice scores was only around 1%, indicating
560
- the robustness of our method towards using relatively fewer
561
- numbers of control handles. Similarly, using more control
562
- handles in the first and/or second deformation blocks did not
563
- result in significant improvement for most cardiac structures.
564
- Therefore, in our final network model, we chose to use a small
565
- number of control handles (75) in the first and second blocks
566
- to reduce the computational cost, and used 600 control handles
567
- in the last deformation block for a slightly better performance.
568
- 3) Effect of Individual Loss Components on Whole Heart
569
- Segmentation Performance: Since our training pipeline in-
570
- volves a joint supervision of multiple objectives, we performed
571
- an ablation study on the total training loss Ltotal to evaluate
572
- the contribution of individual loss components. Namely, we
573
- trained network models while removing the segmentation loss
574
- Lseg, and the L2consistency loss ||S−V||2
575
- Fto investigate
576
the effectiveness of supervising a segmentation branch, and
577
- TABLE I
578
COMPARISON OF WHOLE-HEART SEGMENTATION PERFORMANCE, DSC (↑) AND HD (MM) (↓), FROM DIFFERENT METHODS ON THE MMWHS
CT AND MR TEST DATASETS. * DENOTES SIGNIFICANT DIFFERENCE OF "OURS (S)" FROM THE OTHERS (P-VALUES < 0.05)
580
- CT MR
581
- Method Myo LA LV RA RV Ao PA WH Myo LA LV RA RV Ao PA WH
582
- DCSOurs (S) 90.07 93.18 93.47 89.48 91.48 93.33 85.60 91.76 80.45 86.98 91.61 88.08 88.09 85.76 78.14 87.41
583
- Ours (V) 88.38* 92.53* 91.99* 88.76* 90.59* 91.25* 84.73* 90.53* 78.62* 86.27* 89.38* 87.79* 87.20* 83.30* 77.55 86.04*
584
- HeartFFDNet 83.85* 90.55* 89.38* 86.33* 87.65* 90.65* 80.20* 87.82* 70.67* 83.27* 86.92* 84.47* 82.77* 79.71* 69.68* 81.33*
585
- MeshDeformNet 89.94 93.23 93.98 * 89.18 91.00 94.98 * 85.22 91.80 79.71 88.13 92.23 88.82 89.24 *88.98 *81.65 *88.17 *
586
- 2DUNet 89.87 93.08 93.06 87.71* 90.49* 93.43 83.23* 91.09* 79.47 86.41 89.61* 85.21 86.48* 86.94 77.24 85.94*
587
- 3DUNet 86.34* 90.17* 92.28* 86.77* 87.58* 92.34* 81.29* 88.78* 76.11* 85.20* 87.90* 86.63* 82.77* 74.18* 76.38 84.04*
588
- 3DUNet+SCN 90.09 92.63* 93.43 89.01 90.49* 91.61 84.05 91.18* 78.36* 85.77* 89.84* 88.09 87.19 84.35 76.85 86.12*
589
- HDOurs (S) 14.41 10.72 10.41 13.80 11.63 6.59 7.88 16.95 16.39 12.12 11.93 13.93 14.76 7.19 9.12 19.97
590
- Ours (V) 14.40 8.18* 6.87* 12.46 *9.55* 5.54* 8.45 16.63* 15.96 *10.16 *8.97* 12.56 *12.46 * 7.39 9.14 18.91 *
591
- HeartFFDNet 14.20 8.74* 7.66* 13.24 10.44* 6.17 8.35 16.57 18.21* 12.43 12.57 15.57 16.36 9.26* 11.36* 22.28*
592
- MeshDeformNet 14.39 10.41 10.33 13.67 13.36 5.27* 9.16* 17.62 16.92 12.22 11.63 15.05 14.73 6.05* 7.79* 21.08
593
- 2DUNet 9.98* 9.34 6.10* 13.78 10.39 5.63 8.38 16.37 20.34 10.78* 10.62 17.43 16.32 6.98 8.09 27.54*
594
- 3DUNet 13.64 11.47 10.12 16.56* 16.17* 6.88 9.88* 19.91* 34.97* 33.81* 36.28* 24.36* 27.72* 15.52* 10.13 51.54*
595
- 3DUNet+SCN 13.78 12.80 9.86 17.02 17.70* 6.57 7.77 28.04* 39.30* 34.87* 21.06* 28.37* 28.20* 8.05 8.49 53.76*
596
- Fig. 6. Example segmentation results for CT and MR images from different methods. The CT or MR images that our method had the best,
597
- the median, and the worst Dice scores among the CT or MR test data were selected, thus illustrating best, typical, and worst segmentation
598
- results, respectively. The gold arrows indicate locations of artifacts unsuitable for simulations, such as gaps between adjacent cardiac structures for
599
- MeshDeformNet, missing structures, holes, noisy boundaries, and isolated islands for the segmentation-based methods.
600
- the effectiveness of encouraging the consistency between
601
- directly mapped mesh points ( S) and deformed mesh vertex
602
- locations ( V), respectively. We trained two additional models
603
- while removing supervision on Vor on S, respectively, to
604
- validate the effectiveness of supervising both meshes together.
605
- Table III presents the effect of removing individual loss
606
- components on the whole heart segmentation performance
607
evaluated on the MMWHS test dataset. Removing any of the aforementioned objectives resulted in a significant decrease
608
- in whole heart Dice scores for both CT and MR data, as
609
- well as decreased Dice scores for most cardiac structures. The
610
- Hausdorff distances increased significantly for most cardiac
611
- structures when supervision on the smoothly deformed mesh
612
- template Vwas removed. However, there were no significant
613
- changes in Hausdorff distances for most cardiac structures
614
following the removal of other objectives.
615
- TABLE II
616
A COMPARISON OF WHOLE-HEART SEGMENTATION PERFORMANCE, DSC (↑) AND HD (MM) (↓) ON THE MMWHS CT AND MR TEST DATASETS,
WHEN USING DIFFERENT NUMBERS OF CONTROL HANDLES DURING TRAINING. B1/B2/B3 DENOTES THE NUMBERS OF CONTROL HANDLES
USED IN THE THREE DEFORMATION BLOCKS. * DENOTES SIGNIFICANT DIFFERENCE OF OUR FINAL NETWORK MODEL USING "75/75/600"
CONTROL HANDLES FROM THE OTHERS (P-VALUES < 0.05)
620
- CT MR
621
- B1/B2/B3 Myo LA LV RA RV Ao PA WH Myo LA LV RA RV Ao PA WH
622
- DCS75/75/600 90.07 93.18 93.47 89.48 91.48 93.33 85.60 91.76 80.45 86.98 91.61 88.08 88.09 85.76 78.14 87.41
623
- 75/75/75 88.08* 92.48* 92.54* 88.42* 90.98* 92.76 84.77 90.77* 79.51 86.57 90.71* 87.42 87.55 83.21* 74.40* 86.33*
624
- 75/75/150 88.37* 92.59* 92.37* 88.26* 90.65* 92.78 84.20* 90.69* 78.40* 85.84* 90.64* 86.99* 86.84* 82.81* 74.67* 85.89*
625
- 75/75/300 88.31* 93.47 93.28 89.20 91.28 93.96 84.84 91.34* 80.23 87.98 92.39 * 87.08* 87.94 86.84 78.98 87.46
626
- 75/75/900 89.04* 93.20 93.36 89.54 91.64 93.97 *85.99 91.66 78.64* 87.07 91.54 86.91* 86.77 84.73 75.22 86.48*
627
- 75/300/600 89.91 93.22 93.27 89.17 90.73* 93.83 85.48 91.52 79.74 86.00* 91.18 88.26 87.61 83.94 73.17* 86.66*
628
- 600/600/600 89.53 93.01 93.07 88.43* 90.87* 93.32 84.95 91.25* 79.87 86.32 91.33 87.20* 87.34 83.42* 73.59* 86.55*
629
- HD75/75/600 14.41 10.72 10.41 13.80 11.63 6.59 7.88 16.95 16.39 12.12 11.93 13.93 14.76 7.19 9.12 19.97
630
- 75/75/75 14.33 10.89 11.18 14.14 11.21 6.87 7.88 16.76 16.24 12.19 12.43 14.62 14.18 8.22 9.90 20.00
631
- 75/75/150 14.33 9.98 9.75 15.38* 12.33* 5.86* 9.40* 17.59 16.54 12.33 11.89 15.27* 15.31 8.48* 10.86* 20.53
632
- 75/75/300 14.29 10.88 10.67 14.70 11.64 6.35 7.80 17.74 16.95 12.45 12.78 15.44* 13.55* 7.03 8.77 21.45*
633
- 75/75/900 14.61 11.02 11.01 13.44 11.64 6.30 8.93* 16.84 17.93* 12.88 12.94 14.19 13.99 8.12 10.50* 21.37*
634
- 75/300/600 14.50 11.20 10.90 13.79 11.29 5.82* 8.32 17.36 16.63 12.00 12.03 13.53 12.30 * 8.17 10.46* 20.38
635
- 600/600/600 14.36 10.60 10.28 14.58 12.06 7.04 8.16 17.42 17.06 12.92* 13.60* 14.42 14.07 8.33 10.25* 20.70
636
- TABLE III
637
IMPACT OF INDIVIDUAL LOSS COMPONENTS OF L_total ON THE PREDICTION ACCURACY ON MMWHS MR AND CT TEST DATASETS. * DENOTES
SIGNIFICANT DIFFERENCE OF OUR FINAL NETWORK MODEL "OURS (S)" FROM THE OTHERS (P-VALUES < 0.05)
639
- CT MR
640
- Models Myo LA LV RA RV Ao PA WH Myo LA LV RA RV Ao PA WH
641
- DCSOurs (S) 90.07 93.18 93.47 89.48 91.48 93.33 85.60 91.76 80.45 86.98 91.61 88.08 88.09 85.76 78.14 87.41
642
- w/o segmentation 87.11* 93.32 92.32* 88.74 90.65* 92.90 85.09 90.68* 77.96* 86.31 90.81* 87.48 87.03* 85.47 77.28 86.28*
643
- w/o L2 88.85* 92.94 92.57* 88.90* 90.96* 93.68 83.87* 91.10* 79.57* 85.96 90.56* 86.75* 87.01 84.46 75.00* 86.23*
644
- w/o L2+V 85.89* 93.18 92.34* 89.11 90.83* 93.41 85.38 90.57* 79.08* 86.92 91.93 87.51* 87.01 85.02 75.30* 86.66*
645
- w/o L2+S 86.12* 92.73* 90.84* 88.78* 90.41* 91.99* 83.81* 90.05* 78.11* 87.09 90.94* 87.63 87.34 85.04 76.77 86.58*
646
- HDOurs (S) 14.41 10.72 10.41 13.80 11.63 6.59 7.88 16.95 16.39 12.12 11.93 13.93 14.76 7.19 9.12 19.97
647
- w/o segmentation 14.41 9.98 10.44 15.04* 12.06 6.80 8.44 17.50 17.62 12.71 13.60* 14.57 15.70 7.89 8.91 20.73
648
- w/o L2 14.32 10.86 10.49 14.63* 12.39 6.34 8.38 17.26 16.54 11.84 11.46 14.74 14.75 8.17* 9.86 20.52
649
- w/o L2+V 14.42 10.41 10.27 14.43 12.34 6.30 6.78 * 17.55 17.09 12.47 11.79 14.40 14.39 7.51 9.93 21.19*
650
- w/o L2+S 14.17 12.20* 12.58* 16.31* 13.54* 8.17* 9.46* 17.82 16.65 13.72* 14.35* 16.19* 16.07* 8.91* 10.37* 20.74
651
- B. Construction of Cardiac Meshes for CFD Simulations
652
- TABLE IV
653
- ABLATION STUDY OF MESH REGULARIZATION LOSSES ON VESSEL INLET
654
- AND OUTLET STRUCTURES ON CT TEST DATASET (N=20).
655
- CoP+
656
- Ortho+HWCoP+Ortho CoP None
657
- Cap-Wall
658
- Orthogonality ( ↓)LA 0.128 ±0.121 0.032±0.012 0.365 ±0.265 0.273 ±0.266
659
- RA 0.023 ±0.008 0.012±0.008 0.105 ±0.038 0.066 ±0.026
660
- Ao 0.019 ±0.023 0.005±0.006 0.467 ±0.117 0.127 ±0.024
661
- Cap Coplanarity
662
- (mm) (↓)LA 0.228 ±0.041 0.256 ±0.029 0.12±0.024 0.312 ±0.058
663
- RA 0.34 ±0.073 0.339 ±0.055 0.185±0.043 0.466 ±0.114
664
- Ao 0.447 ±0.115 0.429 ±0.063 0.263±0.068 0.852 ±0.16
665
- Wall Chamfer
666
- Distance (mm) ( ↓)LA 2.093 ±0.803 2.715 ±1.105 2.487 ±0.898 2.042±0.857
667
- RA 2.021 ±1.176 2.66 ±1.301 2.231 ±0.983 1.899±0.952
668
- 1) Ablation Study of Individual Loss Components on Vessel
669
- Inlet/Outlet Structures: CFD simulations of cardiac flow re-
670
- quires well-defined inlet and outlet vessel structures to pre-
671
- scribe boundary conditions for the inflow and outflow. Figure
672
- 7 and table IV demonstrate the effect of applying individual
673
- regularization loss components on the predicted inlet and
674
- outlet geometries (pulmonary veins, vena cava, and aorta).
675
- Without any of the regularization losses, the predicted vessel
676
structure lacked well-defined caps. Indeed, our ground truth
677
- meshes were generated from manual segmentations where
678
- vessels were not truncated precisely orthogonal to the vessel
679
- walls by the human observers, and the caps were not co-
680
- planar due to necessary smoothing steps to filter out the
681
- Fig. 7. Visualization of example whole heart surface predictions follow-
682
- ing addition of regularization losses on vessel inlet/outlet structures. The
683
- yellow regions highlight the ”caps” where the regularization losses were
684
- applied.
685
- staircase artifacts. The coplanar loss and the orthogonal loss
686
- succeeded in producing more planar cap geometries that were
687
more orthogonal to vessel walls. Owing to the imperfect
688
- ground truth vessel meshes, although adding regularization
689
- losses to the training objective improved the structural quality
690
- of inlet geometries, it slightly reduced the geometric accuracy
691
- in terms of Chamfer distances compared with the ground truth.
692
- Applying a higher weight on the inlet mesh vertices in the
693
geometric consistency loss was able to improve the geometric
694
- TABLE V
695
COMPARISON OF DSC (↑) AND HD (MM) (↓) OF PREDICTIONS
FROM DIFFERENT METHODS ON 4D CT TEST IMAGES (N=20). *
DENOTES SIGNIFICANT DIFFERENCE OF "OURS (S)" FROM THE
698
- OTHERS (P<0.05)
699
- Myo LA LV RA RV Ao PA WH
700
- DiceOurs (S) 89.53 93.30 94.48 92.91 94.32 96.20 85.31 93.14
701
- Ours (V) 88.27* 91.60* 93.21* 92.18* 93.21* 95.62* 83.48* 91.97*
702
- HeartFFDNet 84.37* 88.38* 91.41* 90.26* 90.19* 93.03* 70.44* 88.94*
703
- MeshDeformNet 90.58 *95.18 *95.85 *93.50 94.63 *97.50 * 80.21* 93.94 *
704
- HD (mm)Ours (S) 6.04 10.21 4.95 10.04 6.61 3.80 19.71 16.02
705
- Ours (V) 5.91* 10.64 5.36 10.32 7.03* 4.16 19.14 15.69
706
- HeartFFDNet 6.78* 12.01* 6.37 10.85* 8.77* 5.13* 23.20 16.46
707
- MeshDeformNet 5.98 9.29 4.39 10.42 6.35*3.42 23.25 15.77
708
- accuracy of vessel inlets while maintaining satisfactory mesh
709
- quality for CFD simulations.
710
- Fig. 8. Qualitative comparison of whole heart surfaces from different
711
- methods at the end-diastolic phase and the end-systolic phase of a set
712
- of time-series image data. The colormap for end-systolic surfaces shows
713
- vertex displacement magnitude from end-systole to end-diastole.
714
- 2) Comparison with Other Methods on Time-Series CT Data:
715
- Table V compares the reconstruction accuracy between our
716
- method and the other baseline methods on end-diastolic and
717
- end-systolic phases of a cardiac cycle. Overall, our method
718
- demonstrated high accuracy comparable to the prior state-
719
- of-the-art approach MeshDeformNet, both in terms of Dice
720
- scores and Hausdorff distances. Figure 8 shows a qualitative
721
- comparison of the reconstructed whole heart surfaces at end-
722
- systolic and end-diastolic phases and the estimated surface
723
- motion by computing the displacements of mesh vertices
724
- over time. MeshDeformNet produced gaps between cardiac
725
- structures as well as overly smoothed pulmonary veins and
726
- vena cava geometries, since that method is biased by the
727
- use of sphere templates rather than a more fitting template
728
- of the whole heart. In contrast, our method produced high-
729
- quality geometries of the vessel inlets and outlets as well
730
- as whole heart geometries that better match with the ground
731
- truth. Furthermore, our method demonstrated a more accurate
732
- estimation of surface deformation over time, which is required
733
- for prescribing boundary conditions for CFD simulations.
734
- Figure 9 provides further qualitative comparisons between
735
- using FFD and using biharmonic coordinates to deform the
736
- template. Using biharmonic coordinates enables more flexible
737
- Fig. 9. Comparison of whole heart surface predictions between using
738
- control handles as in our approach and using FFD as in HeartFFDNet.
739
- deformation and can thus more closely capture detailed ge-
740
- ometries such as the left atrial appendage. In contrast, geome-
741
- tries of left atrial appendage predicted from HeartFFDNet were
742
- strongly biased by the geometries of the template, although it
743
- used far more control points (4096) than our method (600).
744
- Furthermore, our method was able to predict cardiac structures
745
- that were not covered in the image data. Namely, thanks to
746
- the augmentation pipeline, our method generated reasonable
747
- geometries of the pulmonary arteries and pulmonary veins.
748
- In contrast, manual segmentation can only produce surface
749
- meshes of the cardiac structures captured in the images and
750
- HeartFFDNet predicted flat and unphysiological geometries
751
- despite starting from a realistic whole heart template.
752
- We further quantified the accuracy of the predicted shape
753
- of the heart outside the input image data. Since most images
754
- from the time-series CT dataset did not cover the entire
755
- heart, we selected 28 images that covered the entire heart
756
- from the MMWHS CT test dataset, and cropped them above
757
- various axial planes to evaluate the accuracy of the predicted
758
- whole heart reconstruction when increasing portions of cardiac
759
- structures were uncovered in the input images. Figure 10
760
- (top) compares the average point-to-point distance errors for
761
- each cardiac structures when the image volume was cropped
762
- along the axial view before and after applying the proposed
763
- random-cropping augmentation method and weighted losses
764
- on the meshes. When the random-cropping augmentation
765
- was applied, our method produced more accurate aorta and
766
- pulmonary arteries. As expected, the distance errors increased
767
- when more image data were removed. When as much as
768
- 30% of the image volume was removed, the average distance
769
- errors of pulmonary arteries and aorta were around 1 cm with
770
- the augmentation, whereas the average distance errors were
771
- around 2.5 cm without the augmentation. Figure 10 (bottom)
772
- visualizes three examples of the reconstruction results where
773
- 15%,22.5%, and 30% of the image data were removed. Our
774
- method produced reasonable geometries of the pulmonary
775
- arteries, pulmonary veins, and aorta in all cases. Although10 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. XX, NO. XX, XXXX 2020
776
- our method tended to predict a shorter aorta when the image
777
- data had limited coverage of the aorta, we note that the length
778
- of aortic outlet is often arbitrary when creating meshes for
779
- CFD simulations and thus does not significantly impact the
780
- simulation results.
781
- Fig. 10. Accuracy of shape completion of the heart when image data
782
- has limited coverage for aorta, pulmonary arteries and veins. Top: A
783
- comparison of point-to-point L2 distance between the meshes predicted
784
using uncropped input images and using cropped input images at various
percentages, for network models trained without and with the random-
cropping augmentation. Bottom: Example prediction results for three
cases using uncropped input images and using cropped input images at
various percentages. The color map on the meshes for uncropped images
789
- indicates different cardiac structures whereas those for cropped images
790
- indicates L2 distance in mm.
791
- Table VI compares the quality of the predicted inlet and
792
- outlet geometries as well as the percentage face intersection of
793
- the whole heart meshes. Besides comparing with our baselines,
794
- HeartFFDNet and MeshDeformNet, we also compared our
795
- method with the surface meshes generated from applying the
796
- Marching Cube algorithm on manual ground truth segmen-
797
- tations, where the vessel inlet and outlet geometries were
798
- manually trimmed by human experts. Our method produced
799
- significantly better vessel inlet and outlet geometries than
800
- HeartFFDNet and MeshDeformNet. Also, our method outper-
801
- formed the manual segmentation in terms of Cap-Wall Or-
802
- thogonality. When deforming the mesh template using control
803
- handles, our method achieved the lowest percentage of face
804
- intersection than other deep-learning methods, and the small
805
- amount of face intersections that occurred could be readily
806
corrected by a few iterations of Laplacian smoothing.
C. CFD Simulations of Cardiac Flow
807
- We were able to successfully conduct CFD simulations
808
- using the automatically constructed LV meshes for all 10
809
- patients, as well as for 9 of 10 patients with the 4-chamber
810
- meshes. The 1 failed case had structure penetrations between
811
- two pulmonary veins, causing the simulation to diverge. Figure
812
- 11 displays the simulation results of the velocity streamlines
813
- at multiple time steps during diastole for 2 different patients.
814
- The simulation results demonstrate the formation of typical
815
- vortex flow during ventricle filling.
816
- Fig. 11. Velocity streamlines from CFD simulations of 2 different
817
- patients using the predicted 4D meshes.
818
- Fig. 12. Quantitative comparisons of the % errors in LV volume, volume
819
- averaged KE density, mean velocity near the MO during diastole and
820
- mean velocity near the AO during systole among different methods.
821
- Lines show the mean values and shades show the 95% confidence
822
- intervals.
823
- Figure 12 provides quantitative comparisons of the ac-
824
- curacy of CFD simulation results of LV flow. Both our
825
- approach and HeartFFDNet significantly outperformed the
826
- image-registration-based approach in terms of all metrics.
827
- Namely, the image-registration-based method significantly un-
828
- derestimated the LV volume during diastole since the recon-
829
- structed meshes did not capture the large deformation of LV
830
- from systole to diastole. Our proposed approach demonstrated
831
- comparable or slightly better accuracy than HeartFFDNet in
832
- general, with smaller volume errors throughout the cardiac
833
- cycle and smaller errors in average kinetic energy and mean
834
aortic flow velocity during systole.
835
- TABLE VI
836
A COMPARISON OF THE QUALITY OF THE INLET/OUTLET GEOMETRIES AND WHOLE HEART SURFACE QUALITY FROM DIFFERENT METHODS.
837
- Cap-Wall Orthogonality ( ↓) Cap Coplanarity (mm) ( ↓) % Face Intersection ( ↓)
838
- LA RA Ao LA RA Ao WH
839
- Ours (V) 0.038±0.046 0.013±0.007 0.013±0.012 0.22±0.024 0.284±0.044 0.292±0.088 0.018±0.022
840
- HeartFFDNet 0.137 ±0.08 0.228 ±0.182 0.494 ±0.386 0.398 ±0.068 0.45 ±0.125 0.949 ±0.557 0.262 ±0.191
841
- MeshDeformNet 0.106 ±0.104 0.044 ±0.038 0.209 ±0.117 1.145 ±0.165 0.917 ±0.379 0.36 ±0.216 0.034 ±0.068
842
- Manual 0.04 ±0.04 0.034 ±0.054 0.025 ±0.023 0.037 ±0.009 0.035 ±0.007 0.02 ±0.003 0.0 ±0.0
843
- Fig. 13. Qualitative comparisons of the simulated flow pattern from different methods at different time phases during a cardiac cycle for an example
844
- case. Left: Streamlines within the left ventricle models. Right: Contours of the velocity magnitude at the same clipping plane. Color map shows the
845
- velocity magnitude
846
- Figure 13 qualitatively compares the simulated LV flow
847
- pattern during both systole and diastole using meshes automat-
848
- ically constructed by our proposed approach and HeartFFD-
849
- Net, semi-automatically constructed by conventional image
850
- registration and manually constructed by human observers.
851
- Image registration underestimated the LV expansion from end
852
- systole to diastole, leading to underestimated flow velocity
853
- and disparate flow pattern compared with the ground truth.
854
- Both of our approaches generally produced similar vortex
855
- structures during diastole and converging flow during systole,
856
- with moderate differences in flow velocity and vortex locations
857
- compared with the ground truth.
858
V. DISCUSSION
859
- Automated image-based reconstruction of cardiac meshes is
860
- important for computational simulation of cardiac physiology.
861
- While deep-learning-based methods have demonstrated suc-
862
cess in tasks such as image segmentation and registration, few studies have addressed the end-to-end learning between images
863
- and meshes for modeling applications. Furthermore, prior
864
- learning-based mesh reconstruction approaches suffer from
865
- a number of limitations such as using decoupled meshes of
866
- individual cardiac structures and assumed mesh topology, thus
867
- unable to directly support different cardiac simulations without
868
- additional efforts [14], [15]. We addressed this challenge
869
- herein using a novel approach that trains a neural network
870
- to learn the translation of a small set of control handles to
871
- deform the space enclosing a whole heart template to fit
872
- the cardiac structures in volumetric patient image data. Our
873
- method demonstrated promising whole-heart reconstruction
874
- accuracy and was able to generate simulation-ready meshes
875
- from time-series image data for CFD simulations of cardiac
876
- flow.
877
- Our approach achieved comparable geometric accuracy to
878
- the prior state-of-the-art whole heart mesh reconstruction
879
- method MeshDeformNet [15] while having the additional
880
advantage of directly enabling various cardiac simulations. We
881
- TABLE VII
882
COMPARISON OF MODEL SIZE, TRAINING AND TESTING TIME.
883
- Ours HeartFFDNet MeshDeformNet 2D UNet 3D UNet
884
- # of Parameters 8.7M 8.5M 16.8M 31.1M 18.6M
885
- Training Time 18 hrs 26 hrs 32 hrs 7 hrs 37 hrs
886
- Test Time 0.230s 0.177s 0.425s 1.555s 0.367s
887
- note that our approach used fewer parameters in the CNN
888
- encoder compared to MeshDeformNet (Table VII) and the use
889
- of biharmonic coordinates naturally ensures the smoothness of
890
- deformation without using explicit mesh regularization (e.g.,
891
- Laplacian and/or edge length loss constraints [15]). This is im-
892
- portant since mesh regularization schemes can complicate the
893
- optimization process [37], whereas we observed our approach
894
- to converge significantly faster than MeshDeformNet (18 vs
895
- 32 hrs on a GTX2080Ti GPU).
896
- For CFD simulations requiring the time-dependent mo-
897
- tion of the heart over the cardiac cycle, our method has
898
- the advantage of deforming the template mesh in a tem-
899
- porally consistent manner, enabling automated construction
900
- of dynamic cardiac meshes within minutes on a standard
901
- desktop computer. Registration-based approaches, in contrast,
902
- often require test time optimizations that are computationally
903
- expensive and prone to local minimums, which often lead
904
- to inaccurate registration results such as underestimation of
905
- large deformation. Although deep-learning approaches have
906
- been proposed to speed-up the registration process [38], [39],
907
- large-deformation registration on cardiac images remains chal-
908
- lenging for learning-based approaches. The establishment of
909
- temporal feature correspondence of our method is due to
910
- similar features of time-series images naturally being encoded
911
- into similar feature vectors by the CNN encoders and does
912
- not require explicit training. Nevertheless, ground truth data
913
- of anatomical landmarks could be incorporated in the future
914
- during training to further improve the accuracy of feature
915
- correspondence across different time frames or patients.
916
- Blood flow simulations developed from our automated mesh
917
- generation process demonstrated circulatory flow patterns dur-
918
- ing diastole and converging flow patterns during systole in the
919
- ventricular cavity consistent with prior studies [2], [40]. How-
920
- ever, we observed an average of 15-25% error in the simulated
921
- mean velocity and kinetic energy, despite a promising mean
922
LV Dice score of 93% and a mean volume error of 6%. This amount of volume error is consistent with the inter- and intra-
924
- observer variations of manual LV segmentation [10], [15].
925
- Indeed, simulation of blood flow is sensitive to uncertainties in
926
- geometry, such as inflow directions and vessel and/or LV wall
927
- smoothness [40], [41]. We plan to conduct intra- and inter-
928
- observer studies on the ground truth meshes to further un-
929
- derstand the relationship between prediction uncertainties and
930
- the accuracy of CFD simulations. Nevertheless, our approach
931
- is among the first to enable creation of simulation-suitable
932
- meshes from patient images. And our design of using template
933
- meshes and control handles could support shape editing and
934
- analysis to study the effect of geometric variations on CFD
935
- simulations.
936
- Our proposed method has the following limitations. First,
937
the testing images we used from the benchmark MMWHS test dataset and the time-series CT dataset do not cover the
938
- full variations of cardiac abnormalities observed clinically. For
939
- example, our test datasets did not contain patients with con-
940
- genital heart defects and thus the performance of our trained
941
network for those patients was not evaluated. Furthermore,
942
image data used for training and testing were acquired using relatively similar
943
- imaging protocols and parameters, such as slice thickness,
944
- field of view, and spatial orientation. The above limitations
945
- can be addressed by retraining the model with data more
946
- representative of particular use cases (e.g. particular abnor-
947
- malities or scanner protocol). A related limitation is that the
948
- whole-heart template used may need to be modified to capture
949
- richer variations of cardiac malformations. For example, the
950
- current template assumes four separate and distinct pulmonary
951
- vein ostia and thus may not fully capture pulmonary veins
952
- with alternate branching patterns, which can be important
953
- for preoperative planning of pulmonary and cardiac surgery
954
- [42]. Similarly, the template used would not be suitable for
955
- cardiac malformations such as single-ventricle patients with
956
- congenital heart diseases since the structures of the heart
957
- are significantly different from our current training template.
958
- Nonetheless, this framework could still be utilized if sufficient
959
- training data of, say, single-ventricle patients were available
960
- and a corresponding single-ventricle mesh template were used.
961
- To better handle the above applications, in future work we aim
962
- to add a template retrieval module to automatically select a
963
- template that best suits the application. Furthermore, implicit
964
- shape representation [43] can be combined with our learning-
965
- based shape deformation approach to predict cardiac structures
966
- with different anatomies.
967
VI. CONCLUSION
968
- We proposed a novel deep-learning approach that directly
969
- constructs simulation-ready whole heart meshes from cardiac
970
- image data and allows switching of template meshes to
971
- accommodate different modeling requirements. Our method
972
- leverages a graph convolutional network to predict the trans-
973
- lations of a small set of control handles to smoothly deform
974
- a whole heart template using biharmonic coordinates. Our
975
- method consistently outperformed prior state-of-the-art meth-
976
- ods in constructing simulation-ready meshes of the heart, and
977
- was able to produce geometries that better satisfy modeling
978
- requirements for cardiac flow simulations. We demonstrated
979
- application of our method on constructing dynamic whole
980
- heart meshes from time-series CT image data to simulate the
981
- ventricular flow driven by the cardiac motion. The presented
982
- approach is able to automatically construct whole heart meshes
983
- within seconds on a modern desktop computer and has the
984
- potential in facilitating high-throughput, large-cohort valida-
985
- tion of patient-specific cardiac modeling, as well as its future
986
- clinical applications.
987
- REFERENCES
988
- [1] N. A. Trayanova, J. Constantino, and V . Gurev, “Electromechanical
989
- models of the ventricles,” American Journal of Physiology-Heart and
990
Circulatory Physiology, vol. 301, no. 2, pp. H279–H286, 2011.
991
- [2] R. Mittal, J. H. Seo, V . Vedula, Y . Choi, H. Liu, H. Huang, S. Jain,
992
- L. Younes, T. Abraham, and R. George, “Computational modeling of
993
- cardiac hemodynamics: Current status and future outlook,” Journal of
994
- Computational Physics , vol. 305, 11 2015.
995
- [3] L. Marx, M. A. F. Gsell, A. Rund, F. Caforio, A. J. Prassl, G. Toth-Gayor,
996
- T. Kuehne, C. M. Augustin, and G. Plank, “Personalization of electro-
997
- mechanical models of the pressure-overloaded left ventricle: fitting of
998
- windkessel-type afterload models,” Philosophical Transactions of the
999
- Royal Society A: Mathematical, Physical and Engineering Sciences , vol.
1000
- 378, no. 2173, p. 20190342, 2020.
1001
- [4] E. Karabelas, M. Gsell, C. Augustin, L. Marx, A. Neic, A. Prassl,
1002
- L. Goubergrits, T. Kuehne, and G. Plank, “Towards a computational
1003
- framework for modeling the impact of aortic coarctations upon left
1004
- ventricular load,” Frontiers in Physiology , vol. 9, p. 538, 05 2018.
1005
- [5] A. Prakosa, H. J. Arevalo, D. Deng, P. M. Boyle, P. P. Nikolov,
1006
- H. Ashikaga, J. J. E. Blauer, E. Ghafoori, C. J. Park, R. C. Blake,
1007
- F. T. Han, R. S. MacLeod, H. R. Halperin, D. J. Callans, R. Ranjan,
1008
- J. Chrispin, S. Nazarian, and N. A. Trayanova, “Personalized virtual-
1009
- heart technology for guiding the ablation of infarct-related ventricular
1010
- tachycardia,” Nature Biomedical Engineering , vol. 2, no. 10, pp. 732–
1011
- 740, Oct 2018.
1012
- [6] E. Kung, A. Baretta, C. Baker, G. Arbia, G. Biglino, C. Corsini,
1013
- S. Schievano, I. E. Vignon-Clementel, G. Dubini, G. Pennati, A. Taylor,
1014
- A. Dorfman, A. M. Hlavacek, A. L. Marsden, T.-Y . Hsia, and F. Migli-
1015
- avacca, “Predictive modeling of the virtual hemi-fontan operation for
1016
- second stage single ventricle palliation: Two patient-specific cases,”
1017
- Journal of Biomechanics , vol. 46, no. 2, pp. 423 – 429, 2013.
1018
- [7] K. S. McDowell, F. Vadakkumpadan, R. Blake, J. Blauer, G. Plank,
1019
- R. S. MacLeod, and N. A. Trayanova, “Methodology for patient-specific
1020
- modeling of atrial fibrosis as a substrate for atrial fibrillation,” Journal
1021
- of Electrocardiology , vol. 45, no. 6, pp. 640 – 645, 2012.
1022
- [8] F. Kong, A. Caballero, R. McKay, and W. Sun, “Finite element analysis
1023
- of mitraclip procedure on a patient-specific model with functional mitral
1024
- regurgitation,” Journal of Biomechanics , vol. 104, p. 109730, 02 2020.
1025
- [9] M. Strocchi, C. M. Augustin, M. A. F. Gsell, E. Karabelas, A. Neic,
1026
- K. Gillette, O. Razeghi, A. J. Prassl, E. J. Vigmond, J. M. Behar,
1027
- J. Gould, B. Sidhu, C. A. Rinaldi, M. J. Bishop, G. Plank, and S. A.
1028
- Niederer, “A publicly available virtual cohort of four-chamber heart
1029
- meshes for cardiac electro-mechanics simulations,” PLOS ONE , vol. 15,
1030
- no. 6, pp. 1–26, 06 2020.
1031
- [10] X. Zhuang, L. Li, C. Payer, D. Štern, M. Urschler, M. P. Heinrich,
1032
- J. Oster, C. Wang, Örjan Smedby, C. Bian, X. Yang, P.-A. Heng, A. Mor-
1033
- tazi, U. Bagci, G. Yang, C. Sun, G. Galisot, J.-Y . Ramel, T. Brouard,
1034
- Q. Tong, W. Si, X. Liao, G. Zeng, Z. Shi, G. Zheng, C. Wang,
1035
- T. MacGillivray, D. Newby, K. Rhode, S. Ourselin, R. Mohiaddin,
1036
- J. Keegan, D. Firmin, and G. Yang, “Evaluation of algorithms for multi-
1037
- modality whole heart segmentation: An open-access grand challenge,”
1038
- Medical Image Analysis , vol. 58, p. 101537, 2019.
1039
- [11] F. Kong and S. C. Shadden, “Automating Model Generation for Image-
1040
- Based Cardiac Flow Simulation,” Journal of Biomechanical Engineer-
1041
- ing, vol. 142, no. 11, 09 2020.
1042
- [12] M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural
1043
- networks on graphs with fast localized spectral filtering,” in Advances
1044
- in Neural Information Processing Systems , D. Lee, M. Sugiyama,
1045
- U. Luxburg, I. Guyon, and R. Garnett, Eds., vol. 29, 2016, pp. 3844–
1046
- 3852.
1047
- [13] M. M. Bronstein, J. Bruna, Y . LeCun, A. Szlam, and P. Vandergheynst,
1048
- “Geometric deep learning: Going beyond euclidean data,” IEEE Signal
1049
- Processing Magazine , vol. 34, no. 4, pp. 18–42, 2017.
1050
- [14] U. Wickramasinghe, E. Remelli, G. Knott, and P. Fua, “V oxel2mesh:
1051
- 3d mesh model generation from volumetric data,” in Medical Im-
1052
- age Computing and Computer Assisted Intervention , A. L. Martel,
1053
- P. Abolmaesumi, D. Stoyanov, D. Mateus, M. A. Zuluaga, S. K. Zhou,
1054
- D. Racoceanu, and L. Joskowicz, Eds., 2020, pp. 299–308.
1055
- [15] F. Kong, N. Wilson, and S. Shadden, “A deep-learning approach for
1056
- direct whole-heart mesh reconstruction,” Medical Image Analysis , p.
1057
- 102222, 2021.
1058
- [16] R. Attar, M. Perea ˜nez, C. Bowles, S. K. Piechnik, S. Neubauer, S. E.
1059
- Petersen, and A. F. Frangi, “3d cardiac shape prediction with deep neural
1060
- networks: Simultaneous use of images and patient metadata,” in MICCAI
1061
- 2019 , D. Shen, T. Liu, T. M. Peters, L. H. Staib, C. Essert, S. Zhou,
1062
- P.-T. Yap, and A. Khan, Eds., 2019, pp. 586–594.
1063
- [17] T. W. Sederberg and S. R. Parry, “Free-form deformation of solid
1064
- geometric models,” Proceedings of the 13th annual conference on
1065
- Computer graphics and interactive techniques , 1986.
1066
- [18] J. R. Nieto and A. Susín, “Cage based deformations: A survey,” 2013.
- [19] O. Sorkine-Hornung, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl, and
1067
- H.-P. Seidel, “Laplacian surface editing,” in SGP ’04 , 2004.
1068
- [20] A. Jacobson, I. Baran, J. Popovi ´c, and O. Sorkine-Hornung, “Bounded
1069
- biharmonic weights for real-time deformation,” Communications of the
1070
- ACM , vol. 57, pp. 99 – 106, 2014.
1071
- [21] Y . Wang, A. Jacobson, J. Barbic, and L. Kavan, “Linear subspace
1072
- design for real-time shape deformation,” ACM Transactions on Graphics ,
1073
- vol. 34, pp. 1 – 11, 2015.
1074
- [22] A. Kurenkov, J. Ji, A. Garg, V . Mehta, J. Gwak, C. B. Choy, and
1075
- S. Savarese, “Deformnet: Free-form deformation network for 3d shape
1076
- reconstruction from a single image,” 2018 IEEE Winter Conference on
1077
- Applications of Computer Vision , pp. 858–866, 2018.
1078
- [23] M. Liu, M. Sung, R. Mech, and H. Su, “Deepmetahandles: Learning
1079
- deformation meta-handles of 3d meshes with biharmonic coordinates,”
1080
- 2021 IEEE/CVF Conference on Computer Vision and Pattern Recogni-
1081
- tion, pp. 12–21, 2021.
1082
- [24] Y . Wang, N. Aigerman, V . G. Kim, S. Chaudhuri, and O. Sorkine-
1083
- Hornung, “Neural cages for detail-preserving 3d deformations,” 2020
1084
- IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
1085
- pp. 72–80, 2020.
1086
- [25] D. H. Pak, M. Liu, T. Kim, L. Liang, R. McKay, W. Sun, and J. S.
1087
- Duncan, “Distortion energy for deep learning-based volumetric finite
1088
- element mesh generation for aortic valves,” in MICCAI , 2021.
1089
- [26] F. Kong and S. C. Shadden, “Automatic whole heart meshes generation
1090
- for image-based computational simulations by learning free-from defor-
1091
- mations,” International Conference on Medical Image Computing and
1092
- Computer Assisted Intervention , 2021.
1093
- [27] F. Isensee and K. Maier-Hein, “An attempt at beating the 3d u-net,”
1094
- ArXiv , vol. abs/1908.02182, 2019.
1095
- [28] J. M. Wolterink, T. Leiner, B. D. de V os, J.-L. Coatrieux, B. M. Kelm,
1096
- S. Kondo, R. A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R. A. P. Takx,
1097
- L. J. van Vliet, T. van Walsum, T. P. Willems, G. Yang, Y . Zheng, M. A.
1098
- Viergever, and I. I ˇsgum, “An evaluation of automatic coronary artery
1099
- calcium scoring methods with cardiac ct using the orcascore framework,”
1100
- Medical Physics , vol. 43, no. 5, pp. 2361–2373, 2016.
1101
- [29] R. Karim, L.-E. Blake, J. Inoue, Q. Tao, S. Jia, R. J. Housden,
1102
- P. Bhagirath, J.-L. Duval, M. Varela, J. M. Behar, L. Cadour, R. J. van der
1103
- Geest, H. Cochet, M. Drangova, M. Sermesant, R. Razavi, O. Aslanidi,
1104
- R. Rajani, and K. Rhode, “Algorithms for left atrial wall segmentation
1105
- and thickness – evaluation on an open-source ct and mri image database,”
1106
- Medical Image Analysis , vol. 50, pp. 36 – 53, 2018.
1107
- [30] C. Tobon-Gomez, A. J. Geers, J. Peters, J. Weese, K. Pinto, R. Karim,
1108
- M. Ammar, A. Daoudi, J. Margeta, Z. Sandoval, B. Stender, Y . Zheng,
1109
- M. A. Zuluaga, J. Betancur, N. Ayache, M. A. Chikh, J. Dillenseger,
1110
- B. M. Kelm, S. Mahmoudi, S. Ourselin, A. Schlaefer, T. Schaeffter,
1111
- R. Razavi, and K. S. Rhode, “Benchmark for algorithms segmenting the
1112
- left atrium from 3d ct and mri datasets,” IEEE Transactions on Medical
1113
- Imaging , vol. 34, no. 7, pp. 1460–1473, 2015.
1114
- [31] G. Taubin, T. Zhang, and G. H. Golub, “Optimal surface smoothing as
1115
- filter design,” in ECCV , 1996.
1116
- [32] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks
1117
- for biomedical image segmentation,” in Medical Image Computing and
1118
- Computer-Assisted Intervention , N. Navab, J. Hornegger, W. M. Wells,
1119
- and A. F. Frangi, Eds., 2015, pp. 234–241.
1120
- [33] C. Payer, D. ˇStern, H. Bischof, and M. Urschler, “Multi-label whole
1121
- heart segmentation using cnns and anatomical label configurations,” in
1122
- Statistical Atlases and Computational Models of the Heart. ACDC and
1123
- MMWHS Challenges . Cham: Springer, 2018, pp. 190–198.
1124
- [34] A. Updegrove, N. Wilson, J. Merkow, H. Lan, A. Marsden, and
1125
- S. Shadden, “Simvascular: An open source pipeline for cardiovascular
1126
- simulation,” Annals of Biomedical Engineering , vol. 45, 12 2016.
1127
- [35] H. Si, “Tetgen, a delaunay-based quality tetrahedral mesh generator,”
1128
- ACM Trans. Math. Softw. , vol. 41, no. 2, Feb. 2015.
1129
- [36] C. Zhu, V . Vedula, D. Parker, N. Wilson, S. Shadden, and A. Marsden,
1130
- “svfsi: a multiphysics package for integrated cardiac modeling,” Under
1131
- review for the Journal of Open Source Software , 2022.
1132
- [37] K. Gupta and M. Chandraker, “Neural mesh flow: 3d manifold mesh
1133
- generation via diffeomorphic flows,” Advances in neural information
1134
- processing systems .
1135
- [38] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. V . Guttag, and A. V .
1136
- Dalca, “V oxelmorph: A learning framework for deformable medical
1137
- image registration,” IEEE Transactions on Medical Imaging , vol. 38,
1138
- pp. 1788–1800, 2019.
1139
- [39] T. C. W. Mok and A. C. S. Chung, “Large deformation diffeomor-
1140
- phic image registration with laplacian pyramid networks,” ArXiv , vol.
1141
- abs/2006.16148, 2020.
1142
- [40] V . Vedula, J. H. Seo, A. Lardo, and R. Mittal, “Effect of trabeculae and
1143
- papillary muscles on the hemodynamics of the left ventricle,” Theoretical
1144
- and Computational Fluid Dynamics , vol. 30, 05 2015.
1145
- [41] S. Celi, E. Vignali, K. Capellini, and E. Gasparotti, “On the role and
1146
- effects of uncertainties in cardiovascular in silico analyses,” Frontiers in
1147
- Medical Technology , vol. 3, 2021.
1148
- [42] A. Kandathil and M. R. Chamarthy, “Pulmonary vascular anatomy &
1149
- anatomical variants.” Cardiovascular diagnosis and therapy , vol. 8 3,
1150
- pp. 201–207, 2018.
1151
- [43] Y . Deng, J. Yang, and X. Tong, “Deformed implicit field: Modeling 3d
1152
- shapes with learned dense correspondence,” 2021 IEEE/CVF Conference
1153
- on Computer Vision and Pattern Recognition , pp. 10 281–10 291, 2021.
 
txt/2204.03592.txt DELETED
The diff for this file is too large to render. See raw diff
 
txt/2204.14047.txt DELETED
@@ -1,944 +0,0 @@
1
- A Deep Learning based No-reference Quality Assessment Model
2
- for UGC Videos
3
- Wei Sun
4
- Shanghai Jiao Tong University
5
- Shanghai, China
6
7
- Shanghai Jiao Tong University
8
- Shanghai, China
9
10
- Wei Lu
11
- Shanghai Jiao Tong University
12
- Shanghai, China
13
- [email protected] Zhai∗
14
- Shanghai Jiao Tong University
15
- Shanghai, China
16
17
- ABSTRACT
18
- Quality assessment for User Generated Content (UGC) videos plays
19
- an important role in ensuring the viewing experience of end-users.
20
- Previous UGC video quality assessment (VQA) studies either use
21
- the image recognition model or the image quality assessment (IQA)
22
- models to extract frame-level features of UGC videos for quality
23
- regression, which are regarded as the sub-optimal solutions be-
24
- cause of the domain shifts between these tasks and the UGC VQA
25
- task. In this paper, we propose a very simple but effective UGC
26
- VQA model, which tries to address this problem by training an
27
- end-to-end spatial feature extraction network to directly learn the
28
- quality-aware spatial feature representation from raw pixels of the
29
- video frames. We also extract the motion features to measure the
30
- temporal-related distortions that the spatial features cannot model.
31
- The proposed model utilizes very sparse frames to extract spatial
32
- features and dense frames (i.e. the video chunk) with a very low
33
- spatial resolution to extract motion features, which thereby has
34
- low computational complexity. With the better quality-aware fea-
35
- tures, we only use the simple multilayer perceptron (MLP)
36
- network to regress them into the chunk-level quality scores, and
37
- then the temporal average pooling strategy is adopted to obtain
38
- the video-level quality score. We further introduce a multi-scale
39
- quality fusion strategy to solve the problem of VQA across differ-
40
- ent spatial resolutions, where the multi-scale weights are obtained
41
- from the contrast sensitivity function of the human visual system.
42
- The experimental results show that the proposed model achieves
43
- the best performance on five popular UGC VQA databases, which
44
- demonstrates the effectiveness of the proposed model. The code is
45
- available at https://github.com/sunwei925/SimpleVQA.
46
- CCS CONCEPTS
47
- •Computing methodologies →Modeling methodologies .
48
- ∗Corresponding author: Guangtao Zhai.
49
- Permission to make digital or hard copies of all or part of this work for personal or
50
- classroom use is granted without fee provided that copies are not made or distributed
51
- for profit or commercial advantage and that copies bear this notice and the full citation
52
- on the first page. Copyrights for components of this work owned by others than ACM
53
- must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
54
- to post on servers or to redistribute to lists, requires prior specific permission and/or a
55
- fee. Request permissions from [email protected].
56
- MM ’22, October 10–14, 2022, Lisboa, Portugal
57
- ©2022 Association for Computing Machinery.
58
- ACM ISBN 978-1-4503-9203-7/22/10. . . $15.00
59
- https://doi.org/10.1145/3503161.3548329KEYWORDS
60
- video quality assessment, UGC videos, deep learning, feature fusion
61
- ACM Reference Format:
62
- Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai∗. 2022. A Deep Learn-
63
- ing based No-reference Quality Assessment Model for UGC Videos. In Pro-
64
- ceedings of the 30th ACM International Conference on Multimedia (MM ’22),
65
- October 10–14, 2022, Lisboa, Portugal. ACM, New York, NY, USA, 10 pages.
66
- https://doi.org/10.1145/3503161.3548329
67
- 1 INTRODUCTION
68
- With the proliferation of mobile devices and wireless networks in
69
- recent years, User Generated Content (UGC) videos have exploded
70
- over the Internet. It has become a popular daily activity for the gen-
71
- eral public to create, view, and share UGC videos through various
72
- social media applications such as YouTube, TikTok, etc. However,
73
- UGC videos are captured by a wide variety of consumers, ranging
74
- from professional photographers to amateur users, which makes
75
- the visual quality of UGC videos vary greatly. In order to ensure
76
- the Quality of Experience (QoE) of end-users, the service providers
77
- need to monitor the quality of UGC videos in the entire streaming
78
- media link, including but not limited to video uploading, compress-
79
- ing, post-processing, transmitting, etc. Therefore, with billions of
80
- video viewing and millions of newly uploaded UGC videos every
81
- day, an effective and efficient video quality assessment (VQA) model
82
- is needed to measure the perceptual quality of UGC videos.
83
- Objective VQA can be divided into full-reference (FR), reduced-
84
- reference (RR), and no-reference (NR) according to the amount of
85
- pristine video information needed. Since there is no reference video
86
- for in-the-wild UGC videos, only NR VQA models are qualified for
87
- evaluating their quality. Although NR VQA algorithms [ 21,23,26]
88
- have been studied for many years, most of them were developed
89
- for Professionally Generated Content (PGC) videos with synthetic
90
- distortions, where the pristine PGC videos are shot by photogra-
91
- phers using professional devices and are normally of high quality,
92
- and the distorted PGC videos are then degraded by specific video
93
- processing algorithms such as video compression, transmission, etc.
94
- So, previous VQA studies mainly focus on modeling several types
95
- of distortions caused by specific algorithms, which makes them less
96
- effective for UGC videos with in-the-wild distortions. To be more
97
- specific, the emerging UGC videos pose the following challenges
98
- to the existing VQA algorithms for PGC videos:
99
- First, the distortion types of UGC videos are diverse. A mass
100
- of UGC videos are captured by amateur users, which may suffer
101
- various distortion types such as under/over exposure, low visibility,
102
- jitter, noise, color shift, etc. These authentic distortions are intro-
103
- duced in the shooting processing and cannot be modeled by the
104
- single distortion type, which thereby requires that the VQA models
105
- have a more strong feature representation ability to qualify the
106
- authentic distortions. Second, the content and forms of UGC videos
107
- are extremely rich. UGC videos can be natural scenes, animation
108
- [35], games [ 46,47], screen content, etc. Note that the statistics
109
- characteristics of different video content vary greatly. For example,
110
- the natural scenes statistics (NSS) features [ 22–24,26] are com-
111
- monly used in the previous VQA studies to measure the distortions
112
- of natural scene content, but they may be ineffective for computer-
113
- generated content like animation or games. In addition, live videos,
114
- videoconferencing, etc. are also ubiquitous for UGC videos nowa-
115
- days, whose quality is severely affected by the network bandwidth.
116
- Third, due to the advancement of shooting devices, more high res-
117
- olution [ 18] and high frame rate [ 19,52,53] videos have emerged
118
- on the Internet. The various kinds of resolutions and frame rates
119
- are also important factors for video quality. What’s more, users
120
- can view the UGC videos through mobile devices anywhere and at
121
- any time, so the display [ 25] and the viewing environment such as
122
- ambient luminance [ 29], etc. also affect the perceptual quality of
123
- UGC videos to a certain extent. However, these factors are rarely
124
- considered by previous studies.
125
- The recently released large-scale UGC VQA databases such as
126
- KoNViD-1k [ 8], YouTube UGC [ 36], LSVQ [ 44], etc. have greatly
127
- promoted the development of UGC VQA. Several deep learning
128
- based NR VQA models [ 14,15,37,40,44] have been proposed to
129
- solve some challenges mentioned above and achieve pretty good
130
- performance. However, there are still some problems that need to
131
- be addressed. First, the previous studies either use the image recog-
132
- nition model [ 15][44] or the pretrained image quality assessment
133
- (IQA) models [ 37][40][14] to extract frame-level features, which
134
- lacks an end-to-end learning method to learn the quality-aware
135
- spatial feature representation from raw pixels of video frames. Sec-
136
- ond, previous studies usually extract the features from all video
137
- frames and have a very high computational complexity, making
138
- them difficult to apply to real-world scenarios. Since there is much
139
- redundant spatial information between adjacent frames, we
140
- argue that it is not necessary to extract the features from all
141
- frames. Third, the spatial resolution and frame rate of UGC videos
142
- as well as other factors such as the display, viewing environment,
143
- etc. are still rarely considered by these studies. However, these
144
- factors are very important for the perceptual quality of UGC videos
145
- since the contrast sensitivity of the human visual system (HVS) is
146
- affected by them.
147
- In this paper, to address the challenges mentioned above, we
148
- propose a very simple but effective deep learning based VQA model
149
- for UGC videos. The proposed framework is illustrated in Figure 1,
150
- which consists of the feature extraction module, the quality regres-
151
- sion module, and the quality pooling module. For the feature ex-
152
- traction module, we extract quality-aware features from the spatial
153
- domain and the spatial-temporal domain to respectively measure
154
- the spatial distortions and motion distortions. Instead of using the pretrained model to extract the spatial features in the previous stud-
155
- ies, we propose to train an end-to-end spatial feature extraction
156
- network to learn quality-aware feature representation in the spatial
157
- domain, which thereby makes full use of various video content
158
- and distortion types in current UGC VQA databases. We then uti-
159
- lize the action recognition network to extract the motion features,
160
- which can make up the temporal-related distortions that the spatial
161
- features cannot model. Considering that the spatial features are
162
- sensitive to the resolution while the motion features are sensitive
163
- to the frame rate, we first split the video into continuous chunks
164
- and then extract the spatial features and motion features by using a
165
- key frame of each chunk and all frames of each chunk but at a low
166
- spatial resolution respectively. So, the computational complexity of
167
- the proposed model can be greatly reduced.
168
- For the quality regression module, we use the multilayer per-
169
- ceptron (MLP) network to map the quality-aware features into
170
- the chunk-level quality scores, and the temporal average pooling
171
- strategy is adopted to obtain the final video quality. In order to
172
- solve the problem of quality assessment across different resolu-
173
- tions, we introduce a multi-scale quality fusion strategy to fuse
174
- the quality scores of the videos with different resolutions, where
175
- the multi-scale weights are obtained from the contrast sensitivity
176
- function (CSF) of HVS by considering the viewing environment
177
- information. The proposed models are validated on five popular
178
- UGC VQA databases and the experimental results show that the
179
- proposed model outperforms other state-of-the-art VQA models
180
- by a large margin. What’s more, the proposed model trained on a
181
- large-scale database such as LSVQ [ 44] achieves remarkable perfor-
182
- mance when tested on the other databases without any fine-tuning,
183
- which further demonstrates the effectiveness and generalizability
184
- of the proposed model.
185
- In summary, this paper makes the following contributions:
186
- (1)We propose an effective and efficient deep learning based
187
- model for UGC VQA, which includes the feature extraction
188
- module, the quality regression module, and the quality pool-
189
- ing module. The proposed model not only achieves remark-
190
- able performance on the five popular UGC VQA databases
191
- but also has a low computational complexity, which makes
192
- it very suitable for practical applications.
193
- (2)The feature extraction module extracts two kinds of quality-
194
- aware features, the spatial features for spatial distortions and
195
- the spatial-temporal features for motion distortions, where
196
- the spatial features are learned from raw pixels of video
197
- frames via an end-to-end manner and the spatial-temporal
198
- features are extracted by a pretrained action recognition
199
- network.
200
- (3)We introduce a multi-scale quality fusion strategy to solve
201
- the problem of quality assessment across different resolu-
202
- tions, where the multi-scale weights are obtained from the
203
- contrast sensitivity function of the human visual system by
204
- considering the viewing environment information.
205
- 2 RELATED WORK
206
- 2.1 Handcrafted feature based NR VQA Models
207
- A naive NR VQA method is to compute the quality of each frame
208
- via popular NR IQA methods such as NIQE [ 24], BRISQUE [ 22],
209
- Input video
210
- Frame extraction
211
- 2D framesChunks
212
- Global Average and STD Pooling
213
- Spatial feature extractionStage 1Stage 2 Stage 3 Stage 4Motion feature extraction
214
- Quality regressionQuality
215
- ScoreMotion feature Spatial feature
216
- Pooling3D CNN
217
- Figure 1: The network architecture of the proposed model. The proposed model contains the feature extraction module, the
218
- quality regression module, and the quality pooling module. The feature extraction module extracts two kinds of features, the
219
- spatial features and the motion features.
220
- CORNIA [ 42] etc., and then pool them into the video quality score.
221
- A comparative study of various temporal pooling strategies on pop-
222
- ular NR IQA methods can refer to [ 32]. The temporal information
223
- is very important for VQA. V-BLIINDS [ 26] is a spatio-temporal
224
- natural scene statistics (NSS) model for videos by quantifying the
225
- NSS feature of frame-differences and motion coherency character-
226
- istics. Mittal et al. [23] propose a training-free blind VQA model
227
- named VIIDEO that exploits intrinsic statistics regularities of natu-
228
- ral videos to quantify disturbances introduced due to distortions.
229
- TLVQM [ 12] extracts abundant spatio-temporal features such as
230
- motion, jerkiness, blurriness, noise, blockiness, color, etc. at two
231
- levels of high and low complexity. VIDEVAL [ 33] further combines
232
- the selected features from typical NR I/VQA methods to train a SVR
233
- model to regress them into the video quality. Since video content
234
- also affects its quality, especially for UGC videos, understanding the
235
- video content is beneficial to NR VQA. Previous handcrafted feature
236
- based methods are difficult to understand semantic information.
237
- Hence, some studies [ 13,34] try to combine the handcrafted features
238
- with the semantic-level features extracted by the pretrained CNN
239
- model to improve the performance of NR VQA models. For example,
240
- CNN-TLVQM [ 13] combines the handcrafted statistical temporal
241
- features from TLVQM and spatial features extracted by 2D-CNN
242
- model trained for IQA. RAPIQUE [ 34] utilizes the quality-aware
243
- scene statistics features and semantics-aware deep CNN features
244
- to achieve a rapid and accurate VQA model for UGC videos.
245
- 2.2 Deep learning based NR VQA Models
246
- With the release of several large-scale VQA databases [ 8,36,44],
247
- deep learning based NR VQA models [ 2,11,14,15,31,37,40,43,44]
248
- attract many researchers’ attention. Liu et al. [17] propose a multi-
249
- task BVQA model V-MEON by jointly optimizing the 3D-CNN
250
- for quality assessment and compression distortion classification.
251
- VSFA [ 15] first extracts the semantic features from a pre-trained
252
- CNN model and then uses a gated recurrent unit (GRU) network
253
- to model the temporal relationship between the semantic features
254
- of video frames. The authors of VSFA further propose MDVSFA[16], which trains the VSFA model on the multiple VQA databases
255
- to improve its performance and generalization. RIRNet [ 4] exploits
256
- the effect of motion information extracted from the multi-scale
257
- temporal frequencies for video quality assessment. Ying et al. [44]
258
- propose a local-to-global region-based NR VQA model that com-
259
- bines the spatial features extracted from a 2D-CNN model and the
260
- spatial-temporal features from a 3D-CNN network. Wang et al. [37]
261
- propose a feature-rich VQA model for UGC videos, which measures
262
- the quality from three aspects, compression level, video content,
263
- and distortion type and each aspect is evaluated by an individual
264
- neural network. Xu et al. [40] first extract the spatial feature of
265
- the video frame from a pre-trained IQA model and use the graph
266
- convolution to extract and enhance these features, then extract
267
- motion information from the optical flow domain, and finally inte-
268
- grated the spatial feature and motion information via a bidirectional
269
- long short-term memory network. Li et al. [14] also utilize the IQA
270
- model pre-trained on multiple databases to extract quality-aware
271
- spatial features and the action recognition model to extract tem-
272
- poral features, and then a GRU network is used to model spatial
273
- and temporal features and regress them into the quality score. Wen
274
- and Wang [ 39] propose a baseline I/VQA model for UGC videos,
275
- which calculates the video quality by averaging the scores of each
276
- frame and frame-level quality scores are obtained by a simple CNN
277
- network.
278
- 3 PROPOSED MODEL
279
- The framework of the proposed NR VQA model is shown in Fig-
280
- ure 1, which consists of the feature extraction module, the quality
281
- regression module, and the quality pooling module. First, we ex-
282
- tract the quality-aware features from the spatial domain and the
283
- spatial-temporal domain via the feature extraction module, which
284
- are utilized to evaluate the spatial distortions and motion distor-
285
- tions respectively. Then, the quality regression module is used to
286
- map the quality-aware features into chunk-level quality scores. Fi-
287
- nally, we perform the quality pooling module to obtain the video
288
- quality score.
289
- 3.1 Feature Extraction Module
290
- In this section, we expect to extract the quality-aware features that
291
- can represent the impact of various distortion types and content
292
- on visual quality. The types of video distortion can be roughly
293
- divided into two categories: the spatial distortions and the motion
294
- distortions. The spatial distortions refer to the artifacts introduced
295
- in the video frames, such as noise, blur, compression, low visibility,
296
- etc. The motion distortions refer to jitter, lagging, etc., which
297
- are mainly caused by unstable shooting equipment, fast-moving
298
- objects, the low network bandwidth, etc. Therefore, we need to
299
- extract the quality-aware features from these two aspects.
300
- Note that the characteristics of the spatial features and motion
301
- features are quite different. The spatial features are sensitive to
302
- the video resolution but insensitive to the video frame rate since
303
- the adjacent frames of the video contain lots of redundancy spatial
304
- information and higher resolution can represent more abundant
305
- high-frequency information, while motion features are the opposite
306
- because the motion distortions are reflected on the temporal dimen-
307
- sion and these features are usually consistent for local regions of
308
- the frames.
309
- Therefore, considering these characteristics, given a video $V$
310
- whose number of frames and frame rate are $l$ and $r$ respectively,
311
- we first split the video $V$ into $N_c$ continuous chunks
312
- $c = \{c_i\}_{i=1}^{N_c}$ at
313
- a time interval $\tau$, where $N_c = l/(r \cdot \tau)$, and there are
314
- $N_f = r \cdot \tau$ frames in each chunk $c_i$, denoted as
315
- $c_i = \{x_{i,j}\}_{j=1}^{N_f}$. Then
316
- we only choose a key frame 𝑥𝑖,𝑘𝑒𝑦 in each chunk to extract the
317
- spatial features and the motion features of each chunk are extracted
318
- using all frames in 𝑐𝑖but at a very low spatial resolution. As a
319
- result, we can greatly reduce the computation complexity of the
320
- VQA model with little performance degradation.
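A minimal sketch of this chunking scheme; the helper name and tensor layout are assumptions, while the key-frame rule (first frame of each chunk) follows the implementation details reported later in the paper:

```python
import torch

def split_into_chunks(frames, fps, tau=1.0):
    """Split a video into continuous chunks of tau seconds each.

    frames: (T, C, H, W) tensor of decoded frames
    Returns a list of (key_frame, chunk) pairs; the key frame feeds the spatial
    branch and the full chunk (later resized to a low resolution) feeds the
    motion branch.
    """
    n_f = int(round(fps * tau))          # frames per chunk, N_f = r * tau
    n_c = frames.shape[0] // n_f         # number of chunks, N_c
    pairs = []
    for i in range(n_c):
        chunk = frames[i * n_f:(i + 1) * n_f]
        key_frame = chunk[0]             # first frame of the chunk as key frame
        pairs.append((key_frame, chunk))
    return pairs
```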
321
- 3.1.1 Spatial Feature Extraction Module. Given a frame 𝑥, we de-
322
- note𝑓𝑤(𝑥)as the output of the CNN model 𝑓with trainable pa-
323
- rameters𝑤={𝑤𝑘}applied on the frame 𝑥. Assume that there are
324
- 𝑁𝑠stages in the CNN model, and 𝑓𝑘𝑤(𝑥)is the output feature maps
325
- extracted from the 𝑘-th stage, where 𝑓𝑘𝑤(𝑥)∈R𝐻𝑘×𝑊𝑘×𝐶𝑘, and𝐻𝑘,
326
- 𝑊𝑘, and𝐶𝑘are the height, width, and the number of channels of
327
- the feature maps 𝑓𝑘𝑤(𝑥)respectively. In the following, we use the
328
- 𝑓𝑘𝑤to replace the 𝑓𝑘𝑤(𝑥)for simplicity.
329
- It is well known that the features extracted by the deep lay-
330
- ers of the CNN model contain rich semantic information, and are
331
- suitable for representing content-aware features for UGC VQA.
332
- Moreover, previous studies indicate that the features extracted by
333
- the shallow layers of the CNN models contain low-level informa-
334
- tion [ 28,48], which responds to low-level features such as edges,
335
- corners, textures, etc. The low-level information is easily affected
336
- by the distortion and is therefore distortion-aware. Hence, we ex-
337
- tract the quality-aware features via calculating the global mean
338
- and standard deviation of feature maps extracted from all stages of the
339
- CNN model. Then, we apply global average and standard deviation
340
- pooling operations on the feature maps $f_w^k$:
341
- $\mu_{f_w^k} = \mathrm{GP}_{\mathrm{avg}}(f_w^k),$
342
- $\sigma_{f_w^k} = \mathrm{GP}_{\mathrm{std}}(f_w^k),$  (1)
343
- where $\mu_{f_w^k}$ and $\sigma_{f_w^k}$ are the global mean and standard deviation of
344
- the feature maps $f_w^k$ respectively. Finally, we concatenate $\mu_{f_w^k}$ and
345
- $\sigma_{f_w^k}$ to derive the spatial feature representation of our NR VQA model:
346
- $F_s^k = \mathrm{cat}([\mu_{f_w^k}, \sigma_{f_w^k}]),$
347
- $F_s = \mathrm{cat}(\{F_s^k\}_{k=1}^{N_s}).$  (2)
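A hedged sketch of Eqs. (1)-(2) using the four stages of a torchvision ResNet-50 as the backbone; the class name, stage split, and the weight identifier (torchvision >= 0.13) are assumptions made for illustration:

```python
import torch
import torchvision

class SpatialFeatureExtractor(torch.nn.Module):
    """Global mean/std pooling of feature maps from every ResNet-50 stage."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        self.stem = torch.nn.Sequential(backbone.conv1, backbone.bn1,
                                        backbone.relu, backbone.maxpool)
        self.stages = torch.nn.ModuleList(
            [backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])

    def forward(self, x):                      # x: (B, 3, H, W) key frames
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            mu = x.mean(dim=(2, 3))            # global average pooling, Eq. (1)
            sigma = x.std(dim=(2, 3))          # global standard deviation pooling
            feats.append(torch.cat([mu, sigma], dim=1))
        return torch.cat(feats, dim=1)         # F_s, concatenated over stages, Eq. (2)
```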
352
- 3.1.2 Motion Feature Extraction Module. We extract the motion
353
- features as the complementary quality-aware features since UGC
354
- videos are commonly degraded by the motion distortions caused
355
- by the unstable shooting equipment or low bit rates in live
356
- streaming or videoconferencing. The spatial features are difficult
357
- to handle these distortions because they are extracted by the intra-
358
- frames while motion distortions occur in the interframes. Therefore,
359
- the motion features are also necessary for evaluating the quality
360
- of UGC videos. Here, we utilize the pretrained action recognition
361
- model as the motion feature extractor to obtain the motion features
362
- of each video chunk. The action recognition model is designed
363
- to detect different kinds of action classes, so the feature represen-
364
- tation of the action recognition network can reflect the motion
365
- information of the video to a certain extent. Therefore, given the
366
- video chunk 𝑐and the action recognition network MOTION , we
367
- can obtain the motion features:
368
- 𝐹𝑚=MOTION(c) (3)
369
- where 𝐹𝑚 represents the motion features extracted by the action
370
- recognition network.
371
- Therefore, given the video chunk 𝑐, we first select a key frame
372
- in the chunk to calculate the spatial features 𝐹𝑠. Then, we calculate
373
- the motion features 𝐹𝑚using the whole frames but at a low spatial
374
- resolution in the video chunk. Finally, we obtain the quality-aware
375
- features for the video chunk 𝑐by concatenating the spatial features
376
- and motion features:
377
- 𝐹=cat([𝐹𝑠,𝐹𝑚]), (4)
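A sketch of the motion branch and the concatenation of Eq. (4). The paper uses a SlowFast R50 pretrained on Kinetics 400; the runnable stand-in below uses torchvision's r3d_18 with its classifier removed, which is an assumption for illustration rather than the authors' exact extractor:

```python
import torch
import torchvision

class MotionFeatureExtractor(torch.nn.Module):
    """3D-CNN features of a low-resolution chunk (stand-in for the SlowFast extractor)."""

    def __init__(self):
        super().__init__()
        net = torchvision.models.video.r3d_18(weights="KINETICS400_V1")
        net.fc = torch.nn.Identity()    # keep the pooled feature, drop the classifier
        self.net = net.eval()           # the motion extractor is kept frozen

    @torch.no_grad()
    def forward(self, chunk):           # chunk: (B, 3, N_f, 224, 224)
        return self.net(chunk)          # F_m

def chunk_features(spatial_feats, motion_feats):
    """Eq. (4): concatenate spatial and motion features of one chunk."""
    return torch.cat([spatial_feats, motion_feats], dim=1)
```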
378
- 3.2 Quality Regression Module
379
- After extracting quality-aware feature representation by the feature
380
- extraction module, we need to map these features to the quality
381
- scores via a regression model. In this paper, we use the multi-layer
382
- perceptron (MLP) as the regression model to obtain the chunk-level
383
- quality due to its simplicity and effectiveness. The MLP consists of
384
- two fully connected layers and there are 128 and 1 neuron in each
385
- layer respectively. Therefore, we can obtain the chunk-level quality
386
- score via
387
- $q = f_{w_{FC}}(F),$  (5)
388
- where $f_{w_{FC}}$ denotes the function of the two FC layers and $q$ is the
389
- quality of the video chunk.
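A sketch of the two-layer MLP regressor described above (128 and 1 neurons); the ReLU between the two fully connected layers is an assumption, since the paper does not state the activation:

```python
import torch

class QualityRegressor(torch.nn.Module):
    """Two fully connected layers mapping chunk features to a chunk-level score."""

    def __init__(self, in_dim):
        super().__init__()
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 128),
            torch.nn.ReLU(),            # assumed activation between the two layers
            torch.nn.Linear(128, 1),
        )

    def forward(self, F):               # F: (B, in_dim) concatenated chunk features
        return self.fc(F).squeeze(-1)   # q: (B,) chunk-level quality scores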
390
- 3.3 Quality Pooling Module
391
- As stated in Section 3.1, we split the video $V$ into $N_c$ continuous
392
- chunks $\{c_i\}_{i=1}^{N_c}$. For the chunk $c_i$, we can obtain its chunk-level
394
- quality score 𝑞𝑖via the feature extraction module and the quality
395
- regression module. Then, it is necessary to pool the chunk-level
396
- scores into the video level. Though many temporal pooling methods
397
- have been proposed in literature [ 32][15], we find that the temporal
398
- Table 1: Summary of the benchmark UGC VQA databases. Time duration: Seconds.
399
- Database Videos Scenes Resolution Time Duration Format Distortion Type DATA Environment
400
- KoNViD-1k [8] 1,200 1,200 540p 8 MP4 Authentic MOS + 𝜎 Crowd
401
- YouTube-UGC [36] 1500 1500 360p-4K 20 YUV, MP4 Authentic MOS + 𝜎 Crowd
402
- LSVQ [44] 38,811 38,811 99p-4K 5-12 MP4 Authentic MOS + 𝜎 Crowd
403
- LBVD [3] 1,013 1,013 240p-540p 10 MP4 Authentic, Transmission MOS + 𝜎 In-lab
404
- LIVE-YT-Gaming [45] 600 600 360p-1080p 8-9 MP4 Authentic MOS Crowd
405
- averaging pooling achieves the best performance from Section 4.3.2.
406
- Therefore, the video-level quality is calculated as:
407
- $Q = \frac{1}{N_c}\sum_{i=1}^{N_c} q_i,$  (6)
410
- where𝑞𝑖is the quality of the 𝑖-th chunk and 𝑄is the video quality
411
- evaluated by the proposed model.
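Eq. (6) then amounts to a plain average over the chunk scores, e.g.:

```python
def video_quality(chunk_scores):
    """Eq. (6): temporal average pooling of chunk-level scores."""
    return sum(chunk_scores) / len(chunk_scores)
```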
412
- 3.4 Loss Function
413
- The loss function used to optimize the proposed models consists of
414
- two parts: the mean absolute error (MAE) loss and rank loss [ 39].
415
- The MAE loss is used to make the evaluated quality scores close to
416
- the ground truth, which is defined as:
417
- $L_{MAE} = \frac{1}{N}\sum_{i=1}^{N} \left|Q_i - \hat{Q}_i\right|,$  (7)
420
- where the ˆ𝑄𝑖is the ground truth quality score of the 𝑖-th video in a
421
- mini-batch and 𝑁is the number of videos in the mini-batch.
422
- The rank loss is further introduced to make the model distinguish
423
- the relative quality of videos better, which is very useful for the
424
- model to evaluate the videos with similar quality. Since the rank
425
- value between two video quality is non-differentiable, we use the
426
- following formula to approximate the rank value:
427
- $L_{rank}^{ij} = \max\!\left(0,\ \left|\hat{Q}_i - \hat{Q}_j\right| - e(\hat{Q}_i, \hat{Q}_j)\cdot(Q_i - Q_j)\right),$  (8)
429
- where𝑖and𝑗are two video indexes in a mini-batch, and 𝑒(ˆ𝑄𝑖,ˆ𝑄𝑗)
430
- is formulated as:
431
- $e(\hat{Q}_i, \hat{Q}_j) = \begin{cases} 1, & \hat{Q}_i \ge \hat{Q}_j, \\ -1, & \hat{Q}_i < \hat{Q}_j, \end{cases}$  (9)
434
- Then,𝐿𝑟𝑎𝑛𝑘 is calculated via:
435
- $L_{rank} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} L_{rank}^{ij}.$  (10)
440
- Finally, the loss function can be obtained by:
441
- $L = L_{MAE} + \lambda \cdot L_{rank},$  (11)
442
- where𝜆is a hyper-parameter to balance the MAE loss and the rank
443
- loss.
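A sketch of the combined loss of Eqs. (7)-(11) over a mini-batch of predicted and ground-truth scores; the vectorized pairwise formulation is an implementation choice, not taken from the authors' code:

```python
import torch

def vqa_loss(pred, target, lam=1.0):
    """MAE plus pairwise rank loss for a mini-batch of video quality scores."""
    l_mae = torch.mean(torch.abs(pred - target))                  # Eq. (7)

    # Pairwise differences over all ordered pairs (i, j) in the mini-batch.
    diff_pred = pred.unsqueeze(1) - pred.unsqueeze(0)             # [i, j] = Q_i - Q_j
    diff_gt = target.unsqueeze(1) - target.unsqueeze(0)           # [i, j] = Q^hat_i - Q^hat_j
    e = torch.where(diff_gt >= 0,
                    torch.ones_like(diff_gt), -torch.ones_like(diff_gt))  # Eq. (9)
    l_rank = torch.clamp(diff_gt.abs() - e * diff_pred, min=0).mean()     # Eqs. (8), (10)

    return l_mae + lam * l_rank                                   # Eq. (11)
```

The diagonal terms (i = j) are zero, so averaging over the full N x N matrix matches the 1/N^2 normalization of Eq. (10).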
444
- 3.5 Multi-scale Quality Fusion Strategy
445
- Previous studies evaluate the video quality either using the original
446
- spatial resolution or a fixed resized spatial resolution, which ig-
447
- nore that videos are naturally multi-scale [ 53]. Some existing work
448
- [38][25][20] shows that considering the multi-scale characteristics
449
- can improve the performance of image quality assessment. So, we
- propose a multi-scale quality fusion strategy to further improve
450
- the evaluation accuracy of the VQA model and this strategy is
451
- very useful to compare the quality of videos with different spatial
452
- resolutions.
453
- 3.5.1 Multi-scale Video Quality Scores. We first resize the resolu-
454
- tion of the video into three fixed spatial scales, which are 540p,
455
- 720p, and 1080p, respectively. We do not downscale the video from
456
- the original scale to several lower resolution scales, which is a
457
- more common practice in previous studies. That is because when
458
- users watch videos in an application, the resolution of videos is
459
- actually adapted to the resolution of the playback device, and the
460
- modern display resolution is normally larger than 1080p. So, the
461
- perceptual quality of the low-resolution videos is also affected by
462
- the up-sampling artifacts, which also need to be considered by VQA
463
- models. Therefore, given a VQA model, we can derive three qual-
464
- ity of videos at three scales, which are denoted as 𝑄1,𝑄2, and𝑄3
465
- respectively.
466
- 3.5.2 Adaptive Multi-scale Weights. The weight of each scale is
467
- obtained by considering the human psychological behaviors and
468
- the visual sensitivity characteristics. It is noted that the contrast
469
- perception ability of the HVS depends on the spatial frequency
470
- of the visual signal, which is modeled by the contrast sensitivity
471
- function (CSF). Specifically, we first define a viewing resolution
472
- factor𝜉as:
473
- $\xi = \dfrac{\pi \cdot d \cdot n}{180 \cdot h_s \cdot 2},$  (12)
475
- where the unit of 𝜉is cycles per degree of visual angle (cpd), 𝑑is
476
- the viewing distance (inch), ℎ𝑠is the height of the screen (inch),
477
- and𝑛denotes the number of pixels in the vertical direction of the
478
- screen. For the above three spatial scales of video, we can obtain the
479
- corresponding 𝜉, which are denoted as 𝜉1,𝜉2, and𝜉3respectively.
480
- We use𝜉to divide the spatial frequency range of the corresponding
481
- scale, which covers one section of the CSF formulated by:
482
- $S(u) = \dfrac{5200\, e^{-0.0016 u^2 (1+100/L)^{0.08}}}{\sqrt{\left(1 + \frac{144}{X_0^2} + 0.64 u^2\right)\left(\frac{63}{L^{0.83}} + \frac{1}{1 - e^{-0.02 u^2}}\right)}}$  (13)
489
- where $u$, $L$, and $X_0^2$ indicate spatial frequency (cpd), luminance
491
- (cd/m2), and angular object area (squared degrees), respectively.
492
- The weight of each scale is calculated as the area under the CSF
493
- within the corresponding frequency covering range:
494
- $w_i = \frac{1}{Z}\int_{\xi_{i-1}}^{\xi_i} S(u)\,\mathrm{d}u, \quad i \in \{1, 2, 3\},$  (14)
497
- where𝑖from 1 to 3 corresponds the finest to coarsest scale respec-
498
- tively, and𝜉0corresponds the viewing resolution factor of 0. 𝑍is a
499
- normalization factor such that $\sum_i w_i = 1$.
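A sketch of Eqs. (12)-(14): viewing-resolution factors for the three scales and normalized CSF-area weights. The default constants (d, h_s, L, X_0^2) are the values reported in the implementation details; the numerical integration and function names are assumptions, so the resulting numbers are not guaranteed to reproduce the paper's weights exactly:

```python
import numpy as np

def csf(u, L=200.0, X0_sq=606.0):
    """Contrast sensitivity function of Eq. (13)."""
    num = 5200.0 * np.exp(-0.0016 * u**2 * (1.0 + 100.0 / L) ** 0.08)
    den = np.sqrt((1.0 + 144.0 / X0_sq + 0.64 * u**2)
                  * (63.0 / L**0.83 + 1.0 / (1.0 - np.exp(-0.02 * u**2))))
    return num / den

def multiscale_weights(heights=(540, 720, 1080), d=35.0, h_s=11.3):
    """Eq. (12) and Eq. (14): viewing-resolution factors and CSF-area weights."""
    xi = [np.pi * d * n / (180.0 * h_s * 2.0) for n in heights]   # Eq. (12)
    bounds = [0.0] + xi
    areas = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        u = np.linspace(lo + 1e-6, hi, 2048)      # avoid the singularity at u = 0
        areas.append(np.trapz(csf(u), u))         # area under the CSF in this band
    w = np.array(areas) / np.sum(areas)           # normalization factor Z
    return xi, w
```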
501
- Table 2: Performance of the SOTA models and the proposed model on the KoNViD-1k, YouTube-UGC, LBVD, and LIVE-YT-
502
- Gaming databases. W.A. means the weight average results. The best performing model is highlighted in each column.
503
- TypeDatabase KoNViD-1k YouTube-UGC LBVD LIVE-YT-Gaming W.A.
504
- Criterion SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC
505
- IQANIQE 0.542 0.553 0.238 0.278 0.327 0.387 0.280 0.304 0.359 0.393
506
- BRISQUE 0.657 0.658 0.382 0.395 0.435 0.446 0.604 0.638 0.513 0.525
507
- GM-LOG 0.658 0.664 0.368 0.392 0.314 0.304 0.312 0.317 0.433 0.440
508
- VGG19 0.774 0.785 0.703 0.700 0.676 0.673 0.678 0.658 0.714 0.712
509
- ResNet50 0.802 0.810 0.718 0.710 0.715 0.717 0.729 0.768 0.744 0.751
510
- KonCept512 0.735 0.749 0.587 0.594 0.626 0.636 0.643 0.649 0.650 0.660
511
- VQAV-BLIINDS 0.710 0.704 0.559 0.555 0.527 0.558 0.357 0.403 0.566 0.578
512
- TLVQM 0.773 0.769 0.669 0.659 0.614 0.590 0.748 0.756 0.699 0.689
513
- VIDEVAL 0.783 0.780 0.779 0.773 0.707 0.697 0.807 0.812 0.766 0.762
514
- RAPIQUE 0.803 0.818 0.759 0.768 0.712 0.725 0.803 0.825 0.767 0.781
515
- VSFA 0.773 0.775 0.724 0.743 0.622 0.642 0.776 0.801 0.721 0.736
516
- Li et al. 0.836 0.834 0.831 0.819 - - - - - -
517
- Pro. 0.856 0.860 0.847 0.856 0.844 0.846 0.861 0.866 0.851 0.856
518
- Table 3: Performance of the SOTA models and the proposed
519
- models on the LSVQ database. Pro. M.S. refers to the pro-
520
- posed model implemented by the multi-scale quality fusion
521
- strategy. W.A. means the weighted average results. The best
522
- performing model is highlighted in each column.
523
- Database Test Test-1080p W.A.
524
- Criterion SRCC PLCC SRCC PLCC SRCC PLCC
525
- TLVQM 0.772 0.774 0.589 0.616 0.712 0.722
526
- VIDEVAL 0.794 0.783 0.545 0.554 0.712 0.707
527
- VSFA 0.801 0.796 0.675 0.704 0.759 0.766
528
- PVQ 0.827 0.828 0.711 0.739 0.789 0.799
529
- Li et al. 0.852 0.854 0.772 0.788 0.825 0.832
530
- Pro. 0.864 0.861 0.756 0.801 0.829 0.841
531
- Pro. M.S. 0.867 0.861 0.764 0.803 0.833 0.842
532
- Therefore, the multi-scale fusion quality score 𝑄𝑚is calculated
533
- as:
534
- $Q_m = \prod_{i=1}^{3} Q_i^{w_i}.$  (15)
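Eq. (15) is then a weighted geometric mean of the per-scale scores, e.g.:

```python
import numpy as np

def fuse_multiscale(scores, weights):
    """Eq. (15): weighted geometric mean of the per-scale quality scores."""
    return float(np.prod(np.power(scores, weights)))
```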
537
- 4 EXPERIMENTAL VALIDATION
538
- 4.1 Experimental Protocol
539
- 4.1.1 Test Databases. We test the proposed model on the five UGC
540
- VQA database: KoNViD-1k [ 8], YouTube-UGC [ 36], LSVQ [ 44],
541
- LBVD [ 3], and LIVE-YT-Gaming [ 45]. We summarize the main infor-
542
- mation of the databases in Table 1. The LSVQ database is the largest
543
- UGC VQA database so far, and there are 15 video categories such
544
- as animation, gaming, HDR, live music, sports, etc. in the YouTube-
545
- UGC database, which is more diverse than other databases. The
546
- LBVD database focuses on the live broadcasting videos, of which
547
- the videos are degraded by the authentic transmission distortions.
548
- The LIVE-YT-Gaming database consists of streamed gaming videos,
549
- where the video content is generated by computer graphics.
- 4.1.2 Implementation Details. We use the ResNet50 [ 7] as the back-
550
- bone of the spatial feature extraction module and the SlowFast R50
551
- [6] as the motion feature extraction model for the whole experi-
552
- ments. The weights of the ResNet50 are initialized by training on
553
- the ImageNet dataset [ 5], the weights of the SlowFast R50 are fixed
554
- by training on the Kinetics 400 dataset [ 10], and other weights are
555
- randomly initialized. For the spatial feature extraction module, we
556
- resize the resolution of the minimum dimension of key frames as
557
- 520 while maintaining their aspect ratios. In the training stage, the
558
- input frames are randomly cropped with the resolution of 448 ×448.
559
- If we do not use the multi-scale quality fusion strategy, we crop the
560
- center patch with the same resolutions of 448 ×448 in the testing
561
- stage. Note that we only validate the multi-scale quality fusion
562
- strategy on the model trained by the LSVQ database since there
563
- are enough videos with various spatial resolutions in it. For the
564
- motion feature extraction module, the resolution of the videos is
565
- resized to 224×224 for both the training and testing stages. We use
566
- PyTorch to implement the proposed models. The Adam optimizer
567
- with the initial learning rate 0.00001 and batch size 8 are used for
568
- training the proposed model on a server with NVIDIA V100. The
569
- hyper-parameter 𝜆is set as 1. For simplicity, we select the first
570
- frame in each chunk as the key frame. For the multi-scale quality
571
- fusion strategy, we set $d=35$, $n=1080$, $h_s=11.3$, $L=200$, and
572
- $X_0^2=606$, and the final multi-scale weights for UGC videos are
573
- $w_1=0.8317$, $w_2=0.0939$, and $w_3=0.0745$.
575
- 4.1.3 Comparing Algorithms. We compare the proposed method
576
- with the following no-reference models:
577
- •IQA models: NIQE [ 24], BRISQUE [ 22], GM-LOG [ 41], VGG19
578
- [27], ResNet50 [7], and KonCept512 [9].
579
- •VQA models: V-BLIINDS [ 26], TLVQM [ 12], VIDEVAL [ 33],
580
- RAPIQUE [34], VSFA [15], PVQ [44], and Li et al. [14].
581
- Since the number of videos in the LSVQ database is relatively
582
- large, we only compare some representative VQA models on the
583
- LSVQ database and omit the methods which perform poorly on the
584
- other four UGC databases.
585
- 4.1.4 Evaluation Criteria. We adopt two criteria to evaluate the
586
- performance of VQA models, which are Pearson linear correlation
587
- coefficient (PLCC) and Spearman rank-order correlation coefficient
588
- (SRCC). PLCC reflects the prediction linearity of the VQA algorithm
589
- and SRCC indicates the prediction monotonicity. An excellent VQA
590
- model should obtain the value of SRCC and PLCC close to 1. Before
591
- calculating the PLCC, we follow the same procedure in [ 1] to map
592
- the objective score to the subject score using a four-parameter
593
- logistic function.
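A sketch of this evaluation protocol; the exact four-parameter logistic in [1] may be parameterized differently, so the form below is one common choice rather than necessarily the authors':

```python
import numpy as np
from scipy import stats, optimize

def logistic4(x, b1, b2, b3, b4):
    # One common four-parameter logistic for mapping objective to subjective scores.
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

def evaluate(pred, mos):
    """Return (SRCC, PLCC) with the logistic mapping applied before PLCC."""
    srcc = stats.spearmanr(pred, mos)[0]
    p0 = [np.max(mos), np.min(mos), np.mean(pred), np.std(pred) + 1e-6]
    beta, _ = optimize.curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
    plcc = stats.pearsonr(logistic4(pred, *beta), mos)[0]
    return srcc, plcc
```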
594
- For KoNViD-1k, YouTube-UGC, LBVD, and LIVE-YT-Gaming
595
- databases, we randomly split these databases into the training set
596
- with 80% videos and the test set with 20% videos for 10 times, and
597
- report the median values of SRCC and PLCC. For the LSVQ database,
598
- we follow the same training and test split suggested by [ 44] and
599
- report the performance on the test and test-1080p subsets.
600
- 4.2 Performance Comparison with the SOTA
601
- Models
602
- The performance results of the VQA models on the KoNViD-1k,
603
- YouTube-UGC, LBVD, and LIVE-YT-Gaming databases are listed in
604
- Table 2, and on the LSVQ database are listed in Table 3. From Table
605
- 2 and Table 3, we observe that the proposed model achieves the best
606
- performance on all five UGC VQA databases and leads by a large
607
- margin, which demonstrates that the proposed model does have a
608
- strong ability to measure the perceptual quality of various kinds of
609
- UGC videos. For the test-1080p subset of the LSVQ database, the
610
- proposed model is inferior to Li et al., which may be because the spa-
611
- tial resolution of most videos in the test-1080p subset is larger than
612
- 1080p while the proposed model resizes the spatial resolution of test
613
- videos into 448×448, so the proposed model has a relatively poor
614
- ability to represent the characteristics of high-resolution videos.
615
- Through the multi-scale quality weighting fusion strategy, the pro-
616
- posed model can significantly improve the performance on the
617
- test-1080p subset.
618
- Then, most handcrafted feature based IQA models perform poorly
619
- on these UGC VQA databases especially for the LBVD and LIVE-
620
- YT-Gaming databases since they are designed for natural scene
621
- images with synthetic distortions and are difficult to handle the
622
- complex in-the-wild distortions and other video types such as gaming,
623
- live video, etc. It is worth noting that through fine-tuning the deep
624
- CNN baseline i.e. ResNet50 on the VQA databases, it can achieve a
625
- pretty good performance, which also indicates that spatial features
626
- are very important for VQA tasks. For the NR VQA methods, the
627
- hand-crafted feature based NR VQA methods such as TLVQM and
628
- VIDEVAL achieve fairly good performance by incorporating the
629
- rich spatial and temporal quality features, such as NSS features,
630
- motion features, etc., but they are inferior to the deep learning
631
- based NR VQA methods due to the strong feature representation
632
- ability of CNN. VSFA extracts the spatial features from the pre-
633
- trained image recognition model, which are not quality-aware, and
634
- achieves relatively poor performance when compared with other
635
- deep learning based methods. PVQ and Li et al. methods both uti-
636
- lize the pretrained IQA model and pretrained action recognition
637
- model to extract spatial and motion features respectively, and they
638
- perform better than other compared NR I/VQA methods but are
639
- inferior to the proposed model. Through training an end-to-end
- Table 4: The results of ablation studies on the LSVQ data-
640
- base. S and M means the spatial features and motion features
641
- respectively, and S∗means that the spatial features are ex-
642
- tracted by the pretrained image classification network.
643
- Database Test Test-1080p
644
- Criterion SRCC PLCC SRCC PLCC
645
- FeatureS∗+M 0.847 0.841 0.732 0.774
646
- S 0.827 0.829 0.702 0.757
647
- M 0.660 0.669 0.569 0.621
648
- RegressionGRU 0.858 0.855 0.735 0.788
649
- Transformer 0.860 0.861 0.753 0.799
650
- PoolingMethod in [15] 0.860 0.858 0.733 0.786
651
- 1D CNN based 0.864 0.862 0.739 0.790
652
- Table 5: The SRCC results of cross-database evaluation. The
653
- model is trained on the LSVQ database.
654
- Database KoNViD-1k YouTube-UGC LBVD LIVE-YT-Gaming
655
- Pro. 0.860 0.789 0.689 0.642
656
- Pro. M.S. 0.859 0.822 0.711 0.683
657
- spatial feature extractor, the proposed model can take advantage of
658
- various video content and distortion types in the UGC databases
659
- and learn a better quality-aware feature representation. As a result,
660
- the proposed model achieves the best performance on all five UGC
661
- VQA databases.
662
- 4.3 Ablation Studies
663
- In this section, we conduct several ablation studies to investigate
664
- the effectiveness of each module in the proposed model, including
665
- the feature extraction module, and the quality regression module.
666
- All the experiments are tested on the LSVQ database since it is the
667
- largest UGC VQA database and is more representative.
668
- 4.3.1 Feature Extraction Module. The proposed model consists
669
- of the spatial feature extractor that learns the end-to-end spatial
670
- quality-aware features and the motion feature extractor that utilizes
671
- a pretrained action recognition model to represent motion informa-
672
- tion. Therefore, we first do not train the spatial feature extractor
673
- and directly use the weights trained on the ImageNet database to
674
- study the effect of the end-to-end training strategy for the spatial
675
- feature extractor. Then, we only use the end-to-end trained spatial
676
- features or the pretrained motion features to evaluate the quality of
677
- UGC videos to investigate the effect of these two kinds of features.
678
- The results are listed in Table 4. First, it is observed that the model
679
- using the motion features is inferior to the model using the spatial
680
- features and both of them are inferior to the proposed model, which
681
- indicates that both spatial and motion features are beneficial to the
682
- UGC VQA task and the spatial features are more important. Then,
683
- we find that end-to-end training for the spatial feature extractor can
684
- significantly improve the evaluation performance, which demon-
685
- strates that end-to-end trained spatial features represent better than
686
- those extracted by the pretrained image classification model.
687
- Table 6: Comparison of computational complexity for the six VQA models and two proposed models. Time: Second.
688
- Methods V-BLIINDS TLVQM VIDEVAL VSFA RAPIQUE Li et al. Pro. Pro. M.S.
689
- Time 61.982 219.992 561.408 56.424 38.126 61.971 6.929 8.448
690
- 4.3.2 Quality Regression Module. In this paper, we use the MLP
691
- as the regression model to derive the chunk-level quality scores.
692
- However, in previous studies, some sequential models such as GRU
693
- [15], Transformer [ 14], etc. are also adopted to further consider the
694
- influence of the features extracted from adjacent frames. Here, we
695
- also adopt these methods as a comparison to investigate whether
696
- sequential models can improve the performance of the proposed
697
- models. Specifically, we replace the MLP module with the GRU and
698
- Transformer and keep other experimental setups the same. The
699
- results are listed in Table 4. We observe that models using GRU
700
- and Transformer are both inferior to the proposed model, which
701
- means that the MLP module is enough to regress the quality-aware
702
- features to quality scores though it is very simple. This conclusion
703
- is also consistent with [ 37]. The reason is that the proposed model
704
- and the model in [ 37] calculate the chunk-level quality score and
705
- the effect of adjacent frames are considered in the quality-aware
706
- features (i.e. motion features), while other VQA models [ 15] [14]
707
- calculate the frame-level quality scores, which may need to consider
708
- the effect of adjacent frames in the quality regression module.
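As an illustration of this design choice, a chunk-level MLP regression head can be as small as the following sketch; the input dimensionality and hidden width are placeholders rather than the paper's exact values.

```python
import torch
import torch.nn as nn


class ChunkQualityRegressor(nn.Module):
    """Illustrative two-layer MLP mapping quality-aware features to one chunk-level score."""

    def __init__(self, in_dim: int = 4352, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, chunk_features: torch.Tensor) -> torch.Tensor:
        # chunk_features: (B, in_dim) -> (B,) chunk-level quality scores
        return self.mlp(chunk_features).squeeze(-1)
```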
709
- 4.3.3 Quality Pooling Module. The proposed model uses the tem-
710
- poral average pooling method to fuse the chunk-level quality scores
711
- into the video level. It is noted that previous studies also propose
712
- several temporal pooling methods for VQA. In this section, we test
713
- two temporal pooling methods, which are the subjectively-inspired
714
- method introduced in [ 15] and a learning based temporal pooling
715
- method using the 1D CNN. The results are listed in Table 4. From
716
- Table 4, we observe that the average pooling strategy achieves sim-
717
- ilar performance to the learning based pooling method, and both of
718
- them are superior to the subjectively-inspired methods. Since the
719
- average pooling strategy is simpler and does not increase the extra
720
- parameters, we use the temporal average pooling method in this
721
- paper.
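The temporal average pooling described above reduces to a single mean over the chunk dimension, e.g.:

```python
import torch


def video_quality_score(chunk_scores: torch.Tensor) -> torch.Tensor:
    """Temporal average pooling: fuse per-chunk scores (B, n_chunks) into video-level scores (B,)."""
    return chunk_scores.mean(dim=1)
```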
722
- 4.4 Cross-Database Evaluation
723
- UGC videos may contain various kinds of distortions and content,
724
- most of which may not exist in the training set. Hence, the gen-
725
- eralization ability of the UGC VQA model is very important. In
726
- this section, we use the cross-database evaluation to test the gener-
727
- alization ability of the proposed model. Specifically, we train the
728
- proposed model on the LSVQ database and test the trained model
729
- on the other four UGC VQA databases. We list the results in Table
730
- 5. It is observed that the proposed model achieves excellent per-
731
- formance in cross-database evaluation. The SRCC results on the
732
- KoNViD-1k and YouTube-UGC databases both exceed 0.8, which
733
- have surpassed most VQA models trained on the corresponding
734
- database. We find that the multi-scale quality fusion strategy can
735
- significantly improve the performance on the databases containing
736
- videos with different spatial resolutions (YouTube-UGC, LBVD, and
737
- LIVE-YT-Gaming), which further demonstrates its effectiveness. It is also observed that the performance on the LBVD and LIVE-
- YT-Gaming databases is not as good as on the other two databases. The
739
- reason is that the LBVD and LIVE-YT-Gaming databases contain
740
- live broadcasting and gaming videos respectively, which may rarely
741
- exist in the LSVQ database. Since the single database can not cover
742
- all kinds of video types and distortions, we may further improve the
743
- generalization ability of the proposed model via the multiple data-
744
- base training strategy [ 30] [51] or the continual learning manner
745
- [49] [50].
746
- 4.5 Computational Complexity
747
- The computational complexity is a very important factor that needs
748
- to be considered in practical applications. Hence, we test the com-
749
- putational complexity in this section. All models are tested on a
750
- computer with i7-6920HQ CPU, 16G RAM, and NVIDIA Quadro
751
- P400. The deep learning based models and the handcrafted based
752
- models are tested using the GPU and CPU respectively. We report
753
- the running time for a video with the resolution of 1920 ×1080 and
754
- time duration of eight seconds in Table 6. It is seen that the proposed
755
- model has a considerably low running time compared with other
756
- VQA models. The reason is that we use very sparse frames to calcu-
757
- late the spatial features while other deep learning based methods
758
- need dense frames. Moreover, we extract the motion features at a
759
- very low resolution, which only adds little computational complex-
760
- ity to the proposed model. The very low computational complexity
761
- makes the proposed model suitable for practical applications.
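For reference, a rough way to reproduce such wall-clock measurements for a GPU model is sketched below; the synchronization calls matter because CUDA kernels run asynchronously. This is a generic timing sketch, not the authors' benchmarking script.

```python
import time

import torch


def measure_runtime(model: torch.nn.Module, video_tensor: torch.Tensor, device: str = "cuda") -> float:
    """Wall-clock time of one forward pass; synchronize so asynchronous GPU work is included."""
    model = model.to(device).eval()
    video_tensor = video_tensor.to(device)
    with torch.no_grad():
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        start = time.perf_counter()
        model(video_tensor)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
    return time.perf_counter() - start
```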
762
- 5 CONCLUSION
763
- In this paper, we propose an effective and efficient NR VQA model
764
- for UGC videos. The proposed model extracts the quality-aware
765
- features from the spatial domain and the spatial-temporal domain to
766
- measure the spatial distortions and motion distortions respectively.
767
- We train the spatial feature extractor in an end-to-end training
768
- manner, so the proposed model can make full use of the various
769
- spatial distortions and content in the current VQA database. Then,
770
- the quality-aware features are regressed into the quality scores
771
- by the MLP network, and the temporal average pooling is used
772
- to obtain the video-level quality scores. We further introduce the
773
- multi-scale quality fusion strategy to address the problem of quality
774
- assessment across different spatial resolutions. The experimental
775
- results show that the proposed model can effectively measure the
776
- quality of UGC videos.
777
- ACKNOWLEDGMENTS
778
- This work was supported by the National Natural Science Foun-
779
- dation of China (61831015, 61901260) and the National Key R&D
780
- Program of China 2021YFE0206700.
781
- REFERENCES
782
- [1]Jochen Antkowiak, T Jamal Baina, France Vittorio Baroncini, Noel Chateau,
783
- France FranceTelecom, Antonio Claudio França Pessoa, F Stephanie Colonnese,
784
- Italy Laura Contin, Jorge Caviedes, and France Philips. 2000. Final report from
785
- the video quality experts group on the validation of objective models of video
786
- quality assessment march 2000. (2000).
787
- [2]Yuqin Cao, Xiongkuo Min, Wei Sun, and Guangtao Zhai. 2021. Deep Neural Net-
788
- works For Full-Reference And No-Reference Audio-Visual Quality Assessment.
789
- In2021 IEEE International Conference on Image Processing (ICIP) . IEEE, 1429–1433.
790
- [3]Pengfei Chen, Leida Li, Yipo Huang, Fengfeng Tan, and Wenjun Chen. 2019. QoE
791
- evaluation for live broadcasting video. In 2019 IEEE International Conference on
792
- Image Processing (ICIP) . IEEE, 454–458.
793
- [4]Pengfei Chen, Leida Li, Lei Ma, Jinjian Wu, and Guangming Shi. 2020. RIRNet:
794
- Recurrent-in-recurrent network for video quality assessment. In Proceedings of
795
- the 28th ACM International Conference on Multimedia . 834–842.
796
- [5]Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet:
797
- A large-scale hierarchical image database. In 2009 IEEE conference on computer
798
- vision and pattern recognition . Ieee, 248–255.
799
- [6]Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slow-
800
- fast networks for video recognition. In Proceedings of the IEEE/CVF international
801
- conference on computer vision . 6202–6211.
802
- [7]Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual
803
- learning for image recognition. In Proceedings of the IEEE conference on computer
804
- vision and pattern recognition . 770–778.
805
- [8]Vlad Hosu, Franz Hahn, Mohsen Jenadeleh, Hanhe Lin, Hui Men, Tamás Szirányi,
806
- Shujun Li, and Dietmar Saupe. 2017. The Konstanz natural video database
807
- (KoNViD-1k). In 2017 Ninth international conference on quality of multimedia
808
- experience (QoMEX) . IEEE, 1–6.
809
- [9]Vlad Hosu, Hanhe Lin, Tamas Sziranyi, and Dietmar Saupe. 2020. KonIQ-10k: An
810
- ecologically valid database for deep learning of blind image quality assessment.
811
- IEEE Transactions on Image Processing 29 (2020), 4041–4056.
812
- [10] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra
813
- Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al .2017.
814
- The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017).
815
- [11] Woojae Kim, Jongyoo Kim, Sewoong Ahn, Jinwoo Kim, and Sanghoon Lee. 2018.
816
- Deep video quality assessor: From spatio-temporal visual sensitivity to a convo-
817
- lutional neural aggregation network. In Proceedings of the European Conference
818
- on Computer Vision (ECCV) . 219–234.
819
- [12] Jari Korhonen. 2019. Two-level approach for no-reference consumer video quality
820
- assessment. IEEE Transactions on Image Processing 28, 12 (2019), 5923–5938.
821
- [13] Jari Korhonen, Yicheng Su, and Junyong You. 2020. Blind natural video quality
822
- prediction via statistical temporal features and deep spatial features. In Proceed-
823
- ings of the 28th ACM International Conference on Multimedia . 3311–3319.
824
- [14] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. 2021.
825
- Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and
826
- Motion Perception. arXiv preprint arXiv:2108.08505 (2021).
827
- [15] Dingquan Li, Tingting Jiang, and Ming Jiang. 2019. Quality assessment of in-the-
828
- wild videos. In Proceedings of the 27th ACM International Conference on Multimedia .
829
- 2351–2359.
830
- [16] Dingquan Li, Tingting Jiang, and Ming Jiang. 2021. Unified quality assessment
831
- of in-the-wild videos with mixed datasets training. International Journal of
832
- Computer Vision 129, 4 (2021), 1238–1257.
833
- [17] Wentao Liu, Zhengfang Duanmu, and Zhou Wang. 2018. End-to-End Blind
834
- Quality Assessment of Compressed Videos Using Deep Neural Networks.. In
835
- ACM Multimedia . 546–554.
836
- [18] Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Jun He, Qiyuan Wang,
837
- Zicheng Zhang, Tao Wang, and Guangtao Zhai. 2022. Deep Neural Network for
838
- Blind Visual Quality Assessment of 4K Content. arXiv preprint arXiv:2206.04363
839
- (2022).
840
- [19] Pavan C Madhusudana, Xiangxu Yu, Neil Birkbeck, Yilin Wang, Balu Adsumilli,
841
- and Alan C Bovik. 2021. Subjective and objective quality assessment of high
842
- frame rate videos. IEEE Access 9 (2021), 108069–108082.
843
- [20] Xiongkuo Min, Kede Ma, Ke Gu, Guangtao Zhai, Zhou Wang, and Weisi Lin.
844
- 2017. Unified blind quality assessment of compressed natural, graphic, and screen
845
- content images. IEEE Transactions on Image Processing 26, 11 (2017), 5462–5474.
846
- [21] Xiongkuo Min, Guangtao Zhai, Jiantao Zhou, Mylene CQ Farias, and Alan Conrad
847
- Bovik. 2020. Study of subjective and objective quality assessment of audio-visual
848
- signals. IEEE Transactions on Image Processing 29 (2020), 6054–6068.
849
- [22] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. 2012. No-
850
- reference image quality assessment in the spatial domain. IEEE Transactions on
851
- image processing 21, 12 (2012), 4695–4708.
852
- [23] Anish Mittal, Michele A Saad, and Alan C Bovik. 2015. A completely blind video
853
- integrity oracle. IEEE Transactions on Image Processing 25, 1 (2015), 289–300.
854
- [24] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. 2012. Making a “completely
855
- blind” image quality analyzer. IEEE Signal processing letters 20, 3 (2012), 209–212.
856
- [25] Abdul Rehman, Kai Zeng, and Zhou Wang. 2015. Display device-adapted video
857
- quality-of-experience assessment. In Human Vision and Electronic Imaging XX ,Vol. 9394. International Society for Optics and Photonics, 939406.
858
- [26] Michele A Saad, Alan C Bovik, and Christophe Charrier. 2014. Blind prediction
859
- of natural video quality. IEEE Transactions on Image Processing 23, 3 (2014),
860
- 1352–1365.
861
- [27] Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks
862
- for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
863
- [28] Wei Sun, Xiongkuo Min, Guangtao Zhai, Ke Gu, Huiyu Duan, and Siwei Ma. 2019.
864
- MC360IQA: a multi-channel CNN for blind 360-degree image quality assessment.
865
- IEEE Journal of Selected Topics in Signal Processing 14, 1 (2019), 64–77.
866
- [29] Wei Sun, Xiongkuo Min, Guangtao Zhai, Ke Gu, Siwei Ma, and Xiaokang Yang.
867
- 2020. Dynamic backlight scaling considering ambient luminance for mobile
868
- videos on lcd displays. IEEE Transactions on Mobile Computing (2020).
869
- [30] Wei Sun, Xiongkuo Min, Guangtao Zhai, and Siwei Ma. 2021. Blind quality
870
- assessment for in-the-wild images via hierarchical feature fusion and iterative
871
- mixed database training. arXiv preprint arXiv:2105.14550 (2021).
872
- [31] Wei Sun, Tao Wang, Xiongkuo Min, Fuwang Yi, and Guangtao Zhai. 2021. Deep
873
- learning based full-reference and no-reference quality assessment models for
874
- compressed ugc videos. In 2021 IEEE International Conference on Multimedia &
875
- Expo Workshops (ICMEW) . IEEE, 1–6.
876
- [32] Zhengzhong Tu, Chia-Ju Chen, Li-Heng Chen, Neil Birkbeck, Balu Adsumilli,
877
- and Alan C Bovik. 2020. A comparative evaluation of temporal pooling methods
878
- for blind video quality assessment. In 2020 IEEE International Conference on Image
879
- Processing (ICIP) . IEEE, 141–145.
880
- [33] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik.
881
- 2021. UGC-VQA: Benchmarking blind video quality assessment for user generated
882
- content. IEEE Transactions on Image Processing (2021).
883
- [34] Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and
884
- Alan C Bovik. 2021. Rapique: Rapid and accurate video quality prediction of user
885
- generated content. arXiv preprint arXiv:2101.10955 (2021).
886
- [35] Tao Wang, Zicheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao
887
- Zhai. 2022. Subjective Quality Assessment for Images Generated by Computer
888
- Graphics. arXiv preprint arXiv:2206.05008 (2022).
889
- [36] Yilin Wang, Sasi Inguva, and Balu Adsumilli. 2019. YouTube UGC dataset for video
890
- compression research. In 2019 IEEE 21st International Workshop on Multimedia
891
- Signal Processing (MMSP) . IEEE, 1–5.
892
- [37] Yilin Wang, Junjie Ke, Hossein Talebi, Joong Gon Yim, Neil Birkbeck, Balu
893
- Adsumilli, Peyman Milanfar, and Feng Yang. 2021. Rich features for percep-
894
- tual quality assessment of UGC videos. In Proceedings of the IEEE/CVF Conference
895
- on Computer Vision and Pattern Recognition . 13435–13444.
896
- [38] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. 2003. Multiscale structural sim-
897
- ilarity for image quality assessment. In The Thrity-Seventh Asilomar Conference
898
- on Signals, Systems & Computers, 2003 , Vol. 2. Ieee, 1398–1402.
899
- [39] Shaoguo Wen and Junle Wang. 2021. A strong baseline for image and video
900
- quality assessment. arXiv preprint arXiv:2111.07104 (2021).
901
- [40] Jiahua Xu, Jing Li, Xingguang Zhou, Wei Zhou, Baichao Wang, and Zhibo Chen.
902
- 2021. Perceptual Quality Assessment of Internet Videos. In Proceedings of the
903
- 29th ACM International Conference on Multimedia . 1248–1257.
904
- [41] Wufeng Xue, Xuanqin Mou, Lei Zhang, Alan C Bovik, and Xiangchu Feng. 2014.
905
- Blind image quality assessment using joint statistics of gradient magnitude and
906
- Laplacian features. IEEE Transactions on Image Processing 23, 11 (2014), 4850–
907
- 4862.
908
- [42] Peng Ye, Jayant Kumar, Le Kang, and David Doermann. 2012. Unsupervised
909
- feature learning framework for no-reference image quality assessment. In 2012
910
- IEEE conference on computer vision and pattern recognition . IEEE, 1098–1105.
911
- [43] Fuwang Yi, Mianyi Chen, Wei Sun, Xiongkuo Min, Yuan Tian, and Guangtao
912
- Zhai. 2021. Attention Based Network For No-Reference UGC Video Quality
913
- Assessment. In 2021 IEEE International Conference on Image Processing (ICIP) .
914
- IEEE, 1414–1418.
915
- [44] Zhenqiang Ying, Maniratnam Mandal, Deepti Ghadiyaram, and Alan Bovik. 2021.
916
- Patch-VQ:’Patching Up’the Video Quality Problem. In Proceedings of the IEEE/CVF
917
- Conference on Computer Vision and Pattern Recognition . 14019–14029.
918
- [45] Xiangxu Yu, Zhenqiang Ying, Neil Birkbeck, Yilin Wang, Balu Adsumilli, and
919
- Alan C Bovik. 2022. Subjective and Objective Analysis of Streamed Gaming
920
- Videos. arXiv preprint arXiv:2203.12824 (2022).
921
- [46] Saman Zadtootaghaj, Nabajeet Barman, Steven Schmidt, Maria G Martini, and
922
- Sebastian Möller. 2018. NR-GVQM: A no reference gaming video quality metric.
923
- In2018 IEEE International Symposium on Multimedia (ISM) . IEEE, 131–134.
924
- [47] Saman Zadtootaghaj, Steven Schmidt, Saeed Shafiee Sabet, Sebastian Möller, and
925
- Carsten Griwodz. 2020. Quality estimation models for gaming video streaming
926
- services using perceptual video quality dimensions. In Proceedings of the 11th
927
- ACM Multimedia Systems Conference . 213–224.
928
- [48] Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolu-
929
- tional networks. In European conference on computer vision . Springer, 818–833.
930
- [49] Weixia Zhang, Dingquan Li, Chao Ma, Guangtao Zhai, Xiaokang Yang, and Kede
931
- Ma. 2021. Continual learning for blind image quality assessment. arXiv preprint
932
- arXiv:2102.09717 (2021).
933
- [50] Weixia Zhang, Kede Ma, Guangtao Zhai, and Xiaokang Yang. 2021. Task-specific
934
- normalization for continual learning of blind image quality models. arXiv preprint
935
- arXiv:2107.13429 (2021).
936
- [51] Weixia Zhang, Kede Ma, Guangtao Zhai, and Xiaokang Yang. 2021. Uncertainty-
937
- aware blind image quality assessment in the laboratory and wild. IEEE Transac-
938
- tions on Image Processing 30 (2021), 3474–3486.
939
- [52] Qi Zheng, Zhengzhong Tu, Yibo Fan, Xiaoyang Zeng, and Alan C Bovik. 2022.
940
- No-Reference Quality Assessment of Variable Frame-Rate Videos Using Tem-
941
- poral Bandpass Statistics. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1795–1799.
942
- [53] Qi Zheng, Zhengzhong Tu, Pavan C Madhusudana, Xiaoyang Zeng, Alan C Bovik,
943
- and Yibo Fan. 2022. FAVER: Blind Quality Prediction of Variable Frame Rate
944
- Videos. arXiv preprint arXiv:2201.01492 (2022).
txt/2205.05391.txt DELETED
@@ -1,362 +0,0 @@
1
- Query-Based Keyphrase Extraction from Long Documents
2
- Martin Docekal, Pavel Smrz
3
- Brno University of Technology
4
- idocekal@fit.vutbr.cz, smrz@fit.vutbr.cz
5
- Abstract
6
- Transformer-based architectures in natural language process-
7
- ing force input size limits that can be problematic when long
8
- documents need to be processed. This paper overcomes this
9
- issue for keyphrase extraction by chunking the long docu-
10
- ments while keeping a global context as a query defining the
11
- topic for which relevant keyphrases should be extracted. The
12
- developed system employs a pre-trained BERT model and
13
- adapts it to estimate the probability that a given text span
14
- forms a keyphrase. We experimented using various context
15
- sizes on two popular datasets, Inspec and SemEval, and a
16
- large novel dataset. The presented results show that a shorter
17
- context with a query overcomes a longer one without the
18
- query on long documents.1
19
- Introduction
20
- Keyphrase refers to a short language expression describ-
21
- ing the content of a longer text. Due to their concise form,
22
- keyphrases can be used for a quick familiarization with
23
- a document. They also improve the findability of docu-
24
- ments or passages within them. In the bibliographic records,
25
- keyphrase descriptors enable flexible indexing.
26
- Whether a text span is a keyphrase depends on the context
27
- of that span because a keyphrase for a specific topic may
28
- not be a keyphrase for another topic. The presented work
29
- builds on the idea that the topic can be explicitly given as
30
- an input to the keyphrase extraction algorithm in the form
31
- of a query. We approximate such a query with a document’s
32
- title in our experiments. We also investigate the influence of
33
- context size and document structure on the results.
34
- Related Work
35
- Traditional approaches to keyphrase extraction involve
36
- graph-based methods, such as TextRank (Mihalcea and Ta-
37
- rau 2004) and RAKE (Rose et al. 2010). Recently, many
38
- types of neural networks have been used for the task (Lin
39
- and Wang 2019; Sahrawat et al. 2020). Most of the deep
40
- learning work assumes the existence of a title and an abstract
41
- of the document and extracts keyphrases from them because
42
- Copyright © 2022 by the authors. All rights reserved.
43
- 1The code is available at https://github.com/KNOT-FIT-
44
- BUT/QBEK.
- they struggle with longer inputs such as whole scientific ar-
45
- ticles (Kontoulis, Papagiannopoulou, and Tsoumakas 2021).
46
- Some works try to overcome this limitation by first creating
47
- a document summary and then extracting keyphrases from
48
- it (Kontoulis, Papagiannopoulou, and Tsoumakas 2021; Liu
49
- and Iwaihara 2021). Our research follows an alternative
50
- path, compensating for the limited context by a query spec-
51
- ifying a topic.
52
- Model
53
- First, a document is split into parts (contexts), which are fur-
54
- ther processed independently. Then, the devised model esti-
55
- mates the probability that a given continuous text span forms
56
- a keyphrase. It looks for boundaries t_s and t_e, corresponding
57
- to the text span’s start and end, respectively. The inspiration
58
- for this approach comes from the task of reading compre-
59
- hension, where a similar technique is used to search for po-
60
- tential answers to a question in an input text (Devlin et al.
61
- 2019). Formally, the model estimates probability:
62
- P(t_s, t_e | x) = P(t_s | x) P(t_e | x),    (1)
- where x is the input sequence. It assumes that the start and
- the end of a text span <t_s, t_e> are independent. The prob-
- abilities P(t_s | x) and P(t_e | x) are obtained in the following
66
- way:
67
- P(t_* | x) = sigmoid(w_*^T BERT(x)[*] + b),    (2)
- where * stands for the start and end positions. Weights
- w_s and w_e are learnable vectors, b is the scalar bias and
- BERT(x)[i] is the BERT (Devlin et al. 2019) vector representa-
- tion of a token from sequence x on position i. See the model
73
- illustration in Figure 1.
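A minimal sketch of Eq. (1)-(2) with the Hugging Face transformers API is given below; packing the start and end scorers into one linear layer with two outputs is an implementation convenience, and the checkpoint name is the one mentioned later in the experimental setup, so treat the details as assumptions rather than the exact released code.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class SpanScorer(nn.Module):
    """Illustrative head for Eq. (1)-(2): per-token start/end probabilities over BERT outputs."""

    def __init__(self, model_name: str = "bert-base-multilingual-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        # One linear layer with two outputs plays the role of w_s, w_e and their biases.
        self.start_end = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        probs = torch.sigmoid(self.start_end(h))         # (B, L, 2)
        p_start, p_end = probs[..., 0], probs[..., 1]     # P(t_s | x), P(t_e | x) per token
        # Eq. (1): the score of a span (i, j) is P(t_s = i | x) * P(t_e = j | x).
        span_scores = p_start.unsqueeze(2) * p_end.unsqueeze(1)  # (B, L, L)
        return p_start, p_end, span_scores
```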
74
- The task of predicting whether a given token is the start
75
- token of a span or the end token could be seen as binary clas-
76
- sification with two classes start/end andnot start/not end ,
77
- respectively. The binary cross-entropy ( BCE ) loss function
78
- is used for training in the following way:
79
- BCE(v_s, g_s) + BCE(v_e, g_e),    (3)
- where v_s is a vector of probabilities that a token is the start
- token of a span, for each token in the input, and v_e is a vec-
- tor computed analogously, but for the ends. The g_s and g_e
- are vectors of ground truth probabilities of starts or ends,
84
- respectively.
- [Figure 1 diagram: the input x = [CLS] title [SEP] context [SEP] is encoded by BERT; a linear layer followed by a sigmoid over each token representation BERT(x)[i] yields P(t_s | x) and P(t_e | x).]
- Figure 1: Illustration of our model with query at the input.
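The training objective in Eq. (3) can be written directly with binary cross-entropy on probabilities; a minimal sketch, assuming v_s, v_e, g_s, g_e are tensors of per-token probabilities of identical shape:

```python
import torch.nn.functional as F


def span_boundary_loss(p_start, p_end, g_start, g_end):
    """Eq. (3): binary cross-entropy on per-token start and end probabilities.
    p_* are predictions in [0, 1]; g_* are 0/1 float tensors of the same shape."""
    return F.binary_cross_entropy(p_start, g_start) + F.binary_cross_entropy(p_end, g_end)
```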
91
- We work with two types of inputs. One consists of a text
92
- fragment such as a sentence, and the other uses a query (doc-
93
- ument title) and a text segment, separated by a special token.
94
- Various context sizes are explored in our experiments. The
95
- context size determines how large a part of the document the
- model sees at once. Every context part of a document is pro-
97
- cessed independently. The final list of keyphrases is created
98
- by collecting keyphrase spans with their scores and selecting
99
- the top ones.
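A simplified sketch of this aggregation step is shown below; keeping only the best score for a repeated phrase is an assumption made for illustration, not necessarily the exact deduplication rule used in the released code.

```python
from typing import List, Tuple


def top_keyphrases(scored_spans: List[Tuple[str, float]], k: int = 10) -> List[str]:
    """Pool spans scored independently in every context and keep the k best unique phrases."""
    best = {}
    for phrase, score in scored_spans:
        if score > best.get(phrase, float("-inf")):
            best[phrase] = score  # keep the highest score seen for a repeated phrase
    ranked = sorted(best.items(), key=lambda item: item[1], reverse=True)
    return [phrase for phrase, _ in ranked[:k]]
```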
100
- Datasets
101
- Besides two standard datasets for keyphrase extraction, we
102
- created and used a novel dataset of long documents, referred
103
- to as Library, and we also prepared an unstructured version
104
- of the SemEval-2010 dataset. A comparison of the datasets
105
- is given in Table 1.
106
- dataset                     train    val.   test   sentences (train)
- SemEval-2010                  130      14    100   66 428 (72%)
- Unstructured-SemEval-2010     130      14    100   45 346 (67%)
- Inspec                      1 000     500    500   5 894 (25%)
- Library                    48 879     499    499   298 217 589 (94%)
113
- Table 1: The number of documents in each split along with
114
- the total number of sentences in a train set. The percentage in
115
- the sentences column is the proportion of sentences without
116
- keyphrases.
117
- We had to annotate the spans that represent given
118
- keyphrases in the text as the discussed datasets provide just a
119
- list of associated keyphrases with no information about their
120
- actual positions. The search was case insensitive and the
121
- Porter stemmer was utilized for the SemEval and Hulth2003
122
- (Inspec) datasets. For the Library dataset, as it is in Czech,
123
- theMorphoDiTa lemmatizer2was used.
124
- SemEval-2010 (Kim et al. 2010) consists of whole
125
- plain texts from scientific articles. The dataset provides
126
- keyphrases provided by authors and readers. As it is
127
- common practice (Kim et al. 2010; Kontoulis, Papa-
128
- giannopoulou, and Tsoumakas 2021), we use a combina-
129
- tion of both in our experiments. Our validation dataset was
130
- 2http://hdl.handle.net/11858/00-097C-0000-0023-43CD-0
- created by randomly choosing a subset of the train set. As
131
- the original data source does not explicitly provide the ti-
132
- tles, which we need to formulate a query, we have manually
133
- extracted the title of each document from the plain text.
134
- Documents in this dataset have a well-formed structure.
135
- They contain a title and abstract and are divided into sections
136
- introduced with a headline. As we want to investigate the
137
- influence of such structure on results, we have made an un-
138
- structured version of this dataset. We downloaded the orig-
139
- inal PDFs and used the GROBID3to get a structured XML
140
- version of them. We kept only the text from the document’s
141
- main body while the parts such as title, abstract, authors,
142
- or section headlines were removed. Nevertheless, document
143
- keyphrase annotations remain the same. We call this dataset
144
- Unstructured-SemEval-2010. The name SemEval is used to
145
- name these two collectively.
146
- Inspec (Hulth 2003) contains a set of title-abstract pairs
147
- collected from scientific articles. For each abstract, there are
148
- two sets of keyphrases — controlled , which are restricted to
149
- the Inspec thesaurus, and uncontrolled that can contain any
150
- suitable keyphrase. To be comparable with previous works
151
- (Hulth 2003; Liu and Iwaihara 2021), we used only the un-
152
- controlled set in our experiments.
153
- Library is a newly created dataset that takes advantage
154
- of a large body of scanned documents, provided by Czech
155
- libraries, that were converted to text by OCR software. This
156
- way of getting the text is unreliable, so the dataset contains
157
- many errors on the word and character level. The dataset
158
- builds on the documents where the language was recognized
159
- as ‘Czech’ by the OCR software.
160
- All used documents in the original data source are rep-
161
- resented by their significant content (the average number
162
- of characters per document is 529 276) and metadata. The
163
- metadata contains (not for all) keyphrases and document lan-
164
- guage annotations. We did not ask annotators to annotate
165
- each document. Instead, we selected metadata fields used by
166
- librarians as keyphrase annotations. So, our data and meta-
167
- data come from the real-world environment of Czech li-
168
- braries. We have filtered out all documents with less than
169
- five keyphrases.
170
- Documents come from more than 290 categories. Vari-
171
- ous topics such as mathematics, psychology, belles lettres,
172
- music, and computer science are covered. Not all anno-
173
- tated keyphrases can be extracted from the text. Consid-
174
- ering the lemmatization, the test set annotations contain
175
- about 53% of extractive keyphrases. Bibliographic field Title
176
- (MARC 2454) is used as the query. Note that the field may
177
- contain additional information to the title, such as authors.
178
- Experimental Setup
179
- The implemented system builds on PyTorch5and PyTorch
180
- Lightning6tools. The BERT part of the model uses the
181
- 3https://github.com/kermitt2/grobid
182
- 4https://www.loc.gov/marc/bibliographic/bd245.html
183
- 5https://pytorch.org/
184
- 6https://www.pytorchlightning.ai/
- [Figure 2 plot: F1@10 against input size in sentences; curves for sentences only and sentences + query, on both the structured and unstructured (unst) versions]
187
- Figure 2: Results for SemEval-2010 and Unstructured-
188
- SemEval-2010 test set. The light red and blue areas are con-
189
- fidence intervals with a confidence level of 0.95. Each point
190
- corresponds to an average of five runs.
191
- implementation of BERT by Hugging Face7and it is ini-
192
- tialized with pre-trained weights of bert-base-multilingual-
193
- cased . These weights are also optimized during fine-tuning.
194
- The Adam optimiser with weight decay (Loshchilov and
195
- Hutter 2017) is used in all the experiments. The evaluation
196
- during training on the validation set is done every 4 000 op-
197
- timization steps for the Library dataset and every 50 steps
198
- for Inspec (25 for whole abstracts with titles). For SemEval
199
- datasets, the number of steps differs among experiments.
200
- Early stopping with patience 3 is applied, so the training
201
- ends when the model stops improving. Batch size 128 is
202
- used for experiments with the Library dataset, and batch
203
- size 32 is used for Inspec and SemEval datasets. The learn-
204
- ing rate 1E-06 is used for the experiments with SemEval
205
- datasets, while it is set to the value of 1E-05 for all other
206
- datasets. Inputs longer than a maximum input size are split
207
- into sequences of roughly the same size in a way that for-
208
- bids splitting of keyphrase spans. In edge cases (split is not
209
- possible), the input is truncated. No predicted span is longer
210
- than 6 tokens.
211
- The official script for SemEval-2010 is used for evalua-
212
- tion. However, the normalization of keyphrases is different
213
- for the Library dataset as we have used the mentioned Czech
214
- lemmatizer instead of the original stemmer. We use the F1
215
- over the top five (F1@5) candidates for the Library dataset
216
- and over the top ten (F1@10) for the rest.
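For clarity, F1@k over normalized predictions and gold keyphrases can be computed as in the following sketch; the normalize argument stands in for the Porter stemmer or the Czech lemmatizer, and the official SemEval script handles additional details not reproduced here.

```python
def f1_at_k(predicted, gold, k=10, normalize=lambda s: s.lower()):
    """Compute F1@k between the top-k normalized predictions and the gold keyphrases."""
    pred = {normalize(p) for p in predicted[:k]}
    ref = {normalize(g) for g in gold}
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(ref)
    return 2 * precision * recall / (precision + recall)
```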
217
- Experiments
218
- The performed experiments investigate the influence of
219
- queries on four different datasets, the output quality with
220
- various context sizes, and the impact of the document struc-
221
- ture.
222
- The first set of experiments is performed on long docu-
223
- ments with a well-formed structure from the SemEval-2010
224
- 7https://huggingface.co/
- [Figure 3 plot: percentage of too long inputs against input size in sentences; curves for sentences only and sentences + query]
226
- Figure 3: The proportion of inputs longer than the maximum
227
- input size for SemEval-2010 train set.
228
- [Figure 4 plot: proportion [%] against input size in sentences]
230
- Figure 4: The proportion of contexts containing at least one
231
- section headline as a substring for SemEval-2010 test set.
232
- dataset and compares them with SemEval’s unstructured
233
- version. Figure 2 shows that inputs with a query are bet-
- ter than those without a query, except for the last point. For the
- structured input, it can be seen that from the point with 19
- sentences, the performance of the input with a query stops
- growing quickly. This correlates with Figure 3, which shows the sat-
238
- uration of the model input. Notice that from 19 sentences,
239
- the input becomes more saturated, and the splitting strategy
240
- starts shrinking contexts.
241
- It is not surprising that the nominal values are lower for
242
- unstructured inputs. On the other hand, it is clear that the
243
- query has a bigger influence on the unstructured version, es-
244
- pecially for short context sizes, because the average abso-
245
- lute difference among results (with- and without a query) for
246
- each context size is 2.2% compared to 1.29% for the struc-
247
- tured one.
248
- Looking at the curve of results with a query on an un-
249
- structured version, we assume that the model can exploit a
250
- document structure without explicitly tagging it with special
251
- tokens because additional context size above the three sen-
252
- tences is not much beneficial compared to the case with the
253
- document structure. This hypothesis is supported by the fact
254
- that the proportion of the context containing structured infor-
255
- mation grows with context size, as is demonstrated in Figure
256
- 4 showing the proportion of contexts containing a section
257
- headline.
258
- The second set of experiments was performed on our Li-
259
- brary dataset. The results can be seen in Figure 5. We have
260
- chosen F1@5 because only approximately half of the doc-
261
- uments have ten and more keyphrases. Again, the results
262
- show that queries are beneficial. Also, it can be seen that
263
- the shape of the query curve is similar to Unstructured-
264
- SemEval-2010. The average absolute difference between the
265
- version with and without query is now 3.1%. For F1@10, it1359 19 370:10:15
266
- input size [sentences]F1@5sentences only sentences + query
267
- Figure 5: Results for Library test set for various context
268
- sizes. The light red area symbolizes a confidence interval
269
- with a confidence level of 0.95. Each point is average from
270
- three runs.
271
- Inspec SemEval 2010
272
- F1@10 F1@10
273
- TextRank 15.28 6.55
274
- KFBS + BK-Rank 46.62 15.59
275
- DistilRoBERTa + TF-IDF - 16.2
276
- context
277
- [sentences]query
278
- whole document 7 39.67 -
279
- 17 40.26 14.96
280
- 3 39.95 16.06
281
- 197 - 17.18
282
- 3 - 18.56
283
- Table 2: Comparison of achieved results with other work.
284
- KFBS + BK-Rank and TextRank is from (Liu and Iwaihara
285
- 2021). The DistilRoBERTa + TF-IDF is from (Kontoulis,
286
- Papagiannopoulou, and Tsoumakas 2021). Our results are
287
- averages from five runs.
288
- is 2.3, which is close to the value for the unstructured version
289
- of SemEval.
290
- The last set of experiments is done on the Inspec dataset,
291
- which has only titles and abstracts. The purpose is to in-
292
- vestigate the influence of a query on short inputs containing
293
- mainly salient sentences. Results are summarized in Table 2,
294
- which also compares our results with other works. It shows
295
- that the results for a single sentence, a single sentence with
296
- a title, and whole abstract with a title are similar. The ex-
297
- planation can be that the abstract contains mainly salient
298
- sentences containing keyphrases, and also, the abstract it-
299
- self defines the topic of the article. A similar observation
300
- is presented in (Liu and Iwaihara 2021), where the version
301
- without summarization gives similar results as the extraction
302
- performed on a summary.
303
- Conclusions
304
- We have conducted experiments that show that query-based
305
- keyphrase extraction is promising for processing long doc-
306
- uments. Our experiments show the relationship between
307
- the context size and the performance of the BERT-based keyphrase extractor. The developed model was evaluated on
308
- four datasets; one of them is non-English. The datasets al-
309
- lowed us to find when the query-based approach is benefi-
310
- cial and when not. It was shown that a query gives no bene-
311
- fit when extracting keyphrases from abstracts. On the other
312
- hand, it is beneficial for long documents, particularly those
313
- without a well-formed document structure on short context
314
- sizes.
315
- Acknowledgment
316
- This work was supported by the Technology Agency of the
317
- Czech Republic, Grant FW03010656 – MASAPI: Multilin-
318
- gual assistant for searching, analysing and processing infor-
319
- mation and decision support. The computation used the in-
320
- frastructure supported by the Ministry of Education, Youth
321
- and Sports of the Czech Republic through the e-INFRA CZ
322
- (ID:90140).
323
- References
324
- Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019.
325
- BERT: Pre-training of deep bidirectional transformers for lan-
326
- guage understanding. In Proceedings of the 2019 NAACL Con-
327
- ference: Human Language Technologies, Volume 1 (Long and
328
- Short Papers) , 4171–4186. Minneapolis, Minnesota: Associa-
329
- tion for Computational Linguistics.
330
- Hulth, A. 2003. Improved automatic keyword extraction given
331
- more linguistic knowledge. In Proceedings of the 2003 EMNLP
332
- Conference , 216–223.
333
- Kim, S. N.; Medelyan, O.; Kan, M.-Y .; and Baldwin, T. 2010.
334
- SemEval-2010 task 5 : Automatic keyphrase extraction from
335
- scientific articles. In Proceedings of the 5th SemEval , 21–26.
336
- Uppsala, Sweden: Association for Computational Linguistics.
337
- Kontoulis, C. G.; Papagiannopoulou, E.; and Tsoumakas, G.
338
- 2021. Keyphrase extraction from scientific articles via extrac-
339
- tive summarization. In Proceedings of the Second SDP Work-
340
- shop , 49–55. Online: Association for Computational Linguis-
341
- tics.
342
- Lin, Z.-L., and Wang, C.-J. 2019. Keyword extraction
343
- with character-level convolutional neural tensor networks. In
344
- Pacific-Asia Conference on Knowledge Discovery and Data
345
- Mining , 400–413. Springer.
346
- Liu, T., and Iwaihara, M. 2021. Supervised learning of
347
- keyphrase extraction utilizing prior summarization. In Ke, H.-
348
- R.; Lee, C. S.; and Sugiyama, K., eds., Towards Open and
349
- Trustworthy Digital Societies , 157–166. Cham: Springer In-
350
- ternational Publishing.
351
- Loshchilov, I., and Hutter, F. 2017. Decoupled weight decay
352
- regularization. arXiv preprint arXiv:1711.05101 .
353
- Mihalcea, R., and Tarau, P. 2004. Textrank: Bringing order
354
- into text. In Proceedings of the 2004 conference on EMNLP ,
355
- 404–411.
356
- Rose, S.; Engel, D.; Cramer, N.; and Cowley, W. 2010. Au-
357
- tomatic keyword extraction from individual documents. Text
358
- mining: applications and theory 1:1–20.
359
- Sahrawat, D.; Mahata, D.; Zhang, H.; Kulkarni, M.; Sharma,
360
- A.; Gosangi, R.; Stent, A.; Kumar, Y .; Shah, R. R.; and Zim-
361
- mermann, R. 2020. Keyphrase extraction as sequence labeling
362
- using contextualized embeddings. In ECIR , 328–335. Springer.
txt/2205.08365.txt DELETED
@@ -1,823 +0,0 @@
1
- Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval
2
- based Computer-aided Diagnosis
3
- Yufeng Shi1, Shuhuang Chen1, Xinge You1*, Qinmu Peng1, Weihua Ou2, Yue Zhao3
4
- 1Huazhong University of Science and Technology
5
- 2Guizhou Normal University
6
- 3Hubei University
7
- fyufengshi17, shuhuangchen, youxg, pengqinmu [email protected], [email protected], [email protected]
8
- Abstract
9
- Mapping X-ray images, radiology reports, and other medi-
10
- cal data as binary codes in the common space, which can
11
- assist clinicians to retrieve pathology-related data from het-
12
- erogeneous modalities (i.e., hashing-based cross-modal med-
13
- ical data retrieval), provides a new view to promote computer-
14
- aided diagnosis. Nevertheless, there remains a barrier to boost
15
- medical retrieval accuracy: how to reveal the ambiguous se-
16
- mantics of medical data without the distraction of superflu-
17
- ous information. To circumvent this drawback, we propose
18
- Deep Supervised Information Bottleneck Hashing (DSIBH),
19
- which effectively strengthens the discriminability of hash
20
- codes. Specifically, the Deep Deterministic Information Bot-
21
- tleneck (Yu, Yu, and Pr ´ıncipe 2021) for single modality is
22
- extended to the cross-modal scenario. Benefiting from this,
23
- the superfluous information is reduced, which facilitates the
24
- discriminability of hash codes. Experimental results demon-
25
- strate the superior accuracy of the proposed DSIBH com-
26
- pared with state-of-the-arts in cross-modal medical data re-
27
- trieval tasks.
28
- The rapid development of medical technology not only
29
- provides diverse medical examinations but also produces
30
- tremendous amounts of medical data, ranging from X-ray
31
- images to radiology reports. It is an experience-demanding,
32
- time-consuming, and error-prone job to manually assess
33
- medical data and diagnose disease. To reduce the work
34
- burden of physicians and optimize the diagnostic process,
35
- computer-aided diagnosis (CAD) systems including classi-
36
- fier based CAD(Shi et al. 2020; In ´es et al. 2021) and content-
37
- based image retrieval (CBIR) based CAD(Yang et al. 2020;
38
- Fang, Fu, and Liu 2021) have been designed to automatically
39
- identify illness. Although the two types of methods greatly
40
- promote the development of CAD, existing systems ignore
41
- the character of current medical data, which is diverse in
42
- modality and huge in terms of scale. Therefore, we introduce
43
- cross-modal retrieval (CMR) (Wang et al. 2016) techniques
44
- and construct a CMR-based CAD method using semantic
45
- hashing (Wang et al. 2017) to handle the above challenges.
46
- With the help of CMR that projects multimodal data into
47
- the common space, samples from different modalities can be
48
- directly matched without the interference of heterogeneity.
49
- *Contact Author
50
- Accepted by the AAAI-22 Workshop on Information Theory for
51
- Deep Learning (IT4DL).
- Therefore, CMR-based CAD can not only retrieve the se-
52
- mantically similar clinical profiles in heterogeneous modal-
53
- ities but also provide diagnosis results according to the pre-
54
- vious medical advice. Compared with the classifier-based
55
- CAD that only provides diagnosis results, CMR-based CAD
56
- is more acceptable due to the interpretability brought by
57
- retrieved profiles. Compared with the CBIR-based CAD,
58
- CMR-based CAD wins on its extended sight of multi-modal
59
- data, which meets the requirement of current medical data.
60
- Recently, extensive work on hashing-based CMR that
61
- maps data from different modalities into the same hamming
62
- space, has been done by researchers to achieve CMR (Li
63
- et al. 2018; Zhu et al. 2020; Yu et al. 2021). Due to its com-
64
- pact binary codes and XOR distance calculation, hashing-
65
- based CMR possesses low memory usage and high query
66
- speed (Wang et al. 2017), which is also compatible with
67
- the huge volume of current medical data. In terms of ac-
68
- curacy, the suitable hashing-based solutions for CMR-based
69
- CAD are deep supervised hashing (DSH) methods (Xie et al.
70
- 2020; Zhan et al. 2020; Yao et al. 2021). With the guid-
71
- ance of manual annotations, deep supervised methods usu-
72
- ally perform hash code learning based on the original data
73
- via neural networks. Inspired by the information bottle-
74
- neck principle (Tishby, Pereira, and Bialek 1999), the above-
75
- mentioned optimization procedure can be viewed as build-
76
- ing hash code G about a semantic label Y through samples
- in different modalities X = {X^1, X^2}, which can be for-
- mulated as:
- max L = I(G; Y) − βI(G; X),    (1)
- where I(·;·) represents the mutual information, and β is a
83
- hyper-parameter. As quantified by I(G;Y), current DSH
84
- methods model the semantic annotations to establish pair-
85
- wise, triplet-wise or class-wise relations, and maximize the
86
- correlation between hash codes and the semantic relations.
87
- Despite the consideration of semantic relations, the neglect
88
- ofI(G;X)will result in the retention of redundant infor-
89
- mation in the original data, thus limiting the improvement
90
- of the retrieval accuracy. I(G;X)measures the correlation
91
- between the hash code Gand the data from two modalities
92
- X, which can be used to reduce the superfluous informa-
93
- tion from medical data, and constrain the hash code to grasp
94
- the correct semantics from annotations. Therefore, it can be
95
- expected that the optimization of Eq. (1) can strengthen the discriminability of hash codes, which improves the accuracy
96
- of CMR-based CAD.
97
- To perform CMR-based CAD, we design a novel method
98
- named Deep Supervised Information Bottleneck Hash-
99
- ing (DSIBH), which optimizes the information bottleneck
100
- to strengthen the discriminability of hash codes. Specifi-
101
- cally, to avoid variational inference and distribution assump-
102
- tion, we extend the Deep Deterministic Information Bot-
103
- tleneck (DDIB) (Yu, Yu, and Pr ´ıncipe 2021) from single
104
- modality to the cross-modal scenario for hash code learning.
105
- To summarize, our main contributions are fourfold:
106
- • The cross-modal retrieval technique based on semantic
107
- hashing is introduced to establish computer-aided diag-
108
- nosis systems, which is suitable for the current large-
109
- scale multi-modal medical data.
110
- • A deep hashing method named DSIBH, which optimizes
111
- the hash code learning procedure following the informa-
112
- tion bottleneck principle to reduce the distraction of su-
113
- perfluous information, is proposed for CMR-based CAD.
114
- • To reduce the adverse impact of variational inference
115
- or distribution assumption, the Deep Deterministic In-
116
- formation Bottleneck is elegantly extended to the cross-
117
- modal scenario for hash code learning.
118
- • Experiments on the large-scale multi-modal medical
119
- dataset MIMIC-CXR show that DSIBH can strengthen
120
- the discriminability of hash codes more effectively than
121
- other methods, thus boosting the retrieval accuracy.
122
- Related Work
123
- In this section, representative CAD approaches and hashing-
124
- based solutions of cross-modal retrieval are briefly reviewed.
125
- To make it easier for readers to understand our work, some knowl-
126
- edge of the DDIB is also introduced.
127
- Computer-aided Diagnosis
128
- CAD approaches generally fall into two types including
129
- classifier-based CAD and CBIR-based CAD. Thanks to the
130
- rapid progress of deep learning, classifier-based CAD meth-
131
- ods (Zhang et al. 2019; de La Torre, Valls, and Puig 2020)
132
- can construct task-specific neural networks to categorize
133
- histopathology images and employ the outcomes as the diag-
134
- nosis. On the other side, CBIR-based CAD can provide clin-
135
- ical evidence since they retrieve and visualize images with
136
- the most similar morphological profiles. According to the
137
- data type of representations, existing CBIR methods can be
138
- divided into continuous value CBIR (Erfankhah et al. 2019;
139
- Zhen et al. 2020) and hashing-based CBIR (Hu et al. 2020;
140
- Yang et al. 2020). In the age of big data, the latter increas-
141
- ingly become mainstream due to the low memory usage and
142
- high query speed brought by hashing. Although substantial
143
- efforts have been made to analyse clinical images, medical
144
- data such as radiology reports in other modalities are ig-
145
- nored. Consequently, CAD is restricted in single modality
146
- and the cross-modal relevance between different modalities
147
- still waits to be explored.
- Cross-modal Retrieval
148
- Cross-modal hashing has made remarkable progress in han-
149
- dling cross-modal retrieval, and this kind of methods can
150
- be roughly divided into two major types including unsuper-
151
- vised approaches and supervised approaches in terms of the
152
- consideration of semantic information. Due to the absence of
153
- semantic information, the former usually relies on data dis-
154
- tributions to align semantic similarities of different modali-
155
- ties (Liu et al. 2020; Yu et al. 2021). For example, Collec-
156
- tive Matrix Factorization Hashing (Ding et al. 2016) learns
157
- unified hash codes by collective matrix factorization with
158
- a latent factor model to capture instance-level correlations.
159
- Recently, Deep Graph-neighbor Coherence Preserving Net-
160
- work (Yu et al. 2021) extra explores graph-neighbor coher-
161
- ence to describe the complex data relationships. Although
162
- data distributions indeed help to solve cross-modal retrieval
163
- to some extent, one should note that unsupervised methods
164
- fail to manage the high-level semantic relations due to the
165
- neglect of manual annotations.
166
- Supervised hashing methods are thereafter proposed to
167
- perform hash code learning with the guidance of manual
168
- annotations. Data points are encoded to express semantic
169
- similarity such as pair-wise(Shen et al. 2017; Wang et al.
170
- 2019), triplet-wise (Hu et al. 2019; Song et al. 2021) or
171
- multi-wise similarity relations(Cao et al. 2017; Li et al.
172
- 2018). As an early attempt with deep learning, Deep Cross-
173
- modal Hashing (Jiang and Li 2017) directly encodes origi-
174
- nal data points by minimizing the negative log likelihood of
175
- the cross-modal similarities. To discover high-level semantic
176
- information, Self-Supervised Adversarial Hashing (Li et al.
177
- 2018) harnesses a self-supervised semantic network to pre-
178
- serve the pair-wise relationships. Although various relations
179
- have been built between the hash code and the semantic la-
180
- bels, the aforementioned algorithms still suffer from the dis-
181
- traction of superfluous information, which is caused by the
182
- connections between the hash code and the original data.
183
- Consequently, for CMR-based CAD, there remains a need
184
- for a deep hashing method which can reduce the superfluous
185
- information to strengthen the discriminability of hash codes.
186
- Deep Deterministic Information Bottleneck
187
- Despite great efforts to handle the ambiguous semantics of
188
- medical data, the discriminability of hash codes still needs
189
- to be strengthened. To alleviate such limitation, a promising
190
- solution is Deep Deterministic Information Bottleneck (Yu
191
- et al. 2021) that has been proved to reduce the superfluous
192
- information during feature extraction. Before elaborating on
193
- our solution, we introduce basic knowledge on DDIB below.
194
- DDIB intends to adopt a neural network to parameterize
195
- information bottleneck (Tishby, Pereira, and Bialek 1999),
196
- which considers extracting information about a target signal
197
- Ythrough a correlated observable X. The extracted infor-
198
- mation is represented as a variable T. The information ex-
199
- traction process can be formulated as:
200
- max L_IB = I(T; Y) − βI(T; X).    (2)
- When the above objective is optimized with a neural net-
- work, T is the output of one hidden layer. To update the
- parameters of networks, the second item in Eq. (2) is cal-
- culated with the differentiable matrix-based Rényi's α-order
- mutual information:
- I_α(X; T) = H_α(X) + H_α(T) − H_α(X, T),    (3)
- where H_α(·) indicates the matrix-based analogue to Rényi's
- α-entropy and H_α(·,·) is the matrix-based analogue to
- Rényi's α-order joint-entropy. More details of the matrix-
- based Rényi's α-order entropy functional can be found in
210
- (Yu et al. 2019).
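To make the estimator concrete, below is a minimal PyTorch sketch of the matrix-based Rényi α-order entropy and the mutual information of Eq. (3), assuming Gaussian kernels on mini-batch features; the kernel widths, α value, and normalization details are illustrative assumptions rather than the exact settings of Yu et al.

```python
import torch


def normalized_gram(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Gaussian Gram matrix of a mini-batch x (N, d), normalized so that its trace is 1."""
    d2 = torch.cdist(x, x) ** 2
    k = torch.exp(-d2 / (2.0 * sigma ** 2))
    k = k / torch.sqrt(torch.outer(torch.diag(k), torch.diag(k)))
    return k / k.trace()


def renyi_entropy(a: torch.Tensor, alpha: float = 1.01) -> torch.Tensor:
    """Matrix-based analogue of Renyi's alpha-entropy: H_a(A) = log2(sum_i lambda_i^a) / (1 - a)."""
    eigvals = torch.linalg.eigvalsh(a).clamp_min(0.0)
    return torch.log2((eigvals ** alpha).sum()) / (1.0 - alpha)


def renyi_mutual_information(x: torch.Tensor, t: torch.Tensor,
                             sigma_x: float = 1.0, sigma_t: float = 1.0,
                             alpha: float = 1.01) -> torch.Tensor:
    """Eq. (3): I_a(X; T) = H_a(X) + H_a(T) - H_a(X, T), where the joint entropy
    uses the trace-normalized Hadamard product of the two Gram matrices."""
    a_x = normalized_gram(x, sigma_x)
    a_t = normalized_gram(t, sigma_t)
    joint = a_x * a_t
    joint = joint / joint.trace()
    return renyi_entropy(a_x, alpha) + renyi_entropy(a_t, alpha) - renyi_entropy(joint, alpha)
```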
211
- For the first item in Eq. (2), since I(T; Y) = H(Y) −
- H(Y|T), the maximization of I(T; Y) is converted to
- the minimization of H(Y|T). Given the training set
- {x_i, y_i}_{i=1}^N, the average cross-entropy loss is adopted to
- minimize the H(Y|T):
- (1/N) Σ_{i=1}^N E_{t∼p(t|x_i)}[−log p(y_i|t)],    (4)
220
- Therefore, DDIB indicates that the optimization of Infor-
221
- mation Bottleneck in single modality can be achieved with
222
- a cross-entropy loss and a differentiable mutual informa-
223
- tion itemI(T;X). Obviously, the differentiable optimiza-
224
- tion strategy of information bottleneck in DDIB can benefit
225
- DSH methods in terms of superfluous information reduction.
226
- Method
227
- In this section, we first present the problem definition, and
228
- then detail our DSIBH method. The optimization is finally
229
- given. For illustration purposes, our DSIBH is applied in X-
230
- ray images and radiology reports.
231
- Notation and problem definition
232
- Matrix and vector used in this paper are represented by bold-
233
- face uppercase letter (e.g., G) and boldface lowercase let-
234
- ter (e.g., g) respectively. ‖·‖ denotes the 2-norm of vectors.
- sign(·) is defined as the sign function, which outputs 1 if its
236
- input is positive else outputs -1.
237
- Let X^1 = {x^1_i}_{i=1}^N and X^2 = {x^2_j}_{j=1}^N symbolize X-
- ray images and radiology reports in the training set, where
- x^1_i ∈ R^{d_1}, x^2_j ∈ R^{d_2}. Their semantic labels that indicate
- the existence of pathology are represented by Y = {y_l}_{l=1}^N,
- where y_l = {y_{l1}, y_{l2}, ..., y_{ld_3}} ∈ R^{d_3}. Following (Cao et al.
- 2016; Jiang and Li 2017; Li et al. 2018), we define the se-
- mantic affinities S_{N×N} between x^1_i and x^2_j using semantic
- labels. If x^1_i and x^2_j share at least one category label, they
- are semantically similar and S_ij = 1. Otherwise, they are
- semantically dissimilar and thus S_ij = 0.
260
- The goal of the proposed DSIBH is to learn hash functions
- f^1(θ^1; X^1): R^{d_1} → R^{d_c} and f^2(θ^2; X^2): R^{d_2} → R^{d_c},
- which can map X-ray images and radiology reports as ap-
- proximate binary codes G^1 and G^2 in the same continuous
- space respectively. Later, binary codes can be generated by
- applying a sign function to G^{1,2}.
- Meanwhile, the hamming distance D(g^1_i, g^2_j) between hash
- codes g^1_i and g^2_j needs to indicate the semantic similarity
- S_ij between x^1_i and x^2_j, which can be formulated as:
- S_ij ∝ −D(g^1_i, g^2_j).    (5)
285
- Information Bottleneck in Cross-modal Scenario
286
- To improve the accuracy of CMR-based CAD, the super-
- fluous information from the medical data should be reduced
- during hash code learning via the information bottleneck
- principle. The information bottleneck principle, defined for a
- single modality, therefore has to be extended to the cross-
- modal scenario, in which one instance has descriptions in
- different modalities.
293
- Analysis starts from the hash code learning processes for
- X-ray images and radiology reports, respectively. Follow-
- ing the information bottleneck principle, the basic objective
- functions can be formulated as:
- max L_IB1 = I(G1; Y1) − βI(G1; X1),
- max L_IB2 = I(G2; Y2) − βI(G2; X2). (6)
307
- In the cross-modal scenario, the X-ray images and radiology re-
- ports in the training set are collected to describe a com-
- mon pathology. Therefore, the image-report pairs share the
- same semantic label. To implement this idea, Eq. (6) is trans-
- formed to:
- max L_IB1 = I(G1; Y) − βI(G1; X1),
- max L_IB2 = I(G2; Y) − βI(G2; X2). (7)
322
- Furthermore, the same hash code should be assigned to the
- paired samples to guarantee consistency across
- modalities, which is achieved with the ℓ2 loss:
- min L_CONS = E[‖g1 − gy‖²] + E[‖g2 − gy‖²], (8)
- where Gy represents the modality-invariant hash codes for
- the image-report pairs.
330
- Incorporating Eq. (7) and Eq. (8), the overall objective of
- the information bottleneck principle in the cross-modal scenario
- is formulated as:
- max L_IBC = (I(G1; Y) + I(G2; Y)) (9)
-   − β (I(G1; X1) + I(G2; X2))
-   − γ (E[‖g1 − gy‖²] + E[‖g2 − gy‖²]).
348
- Deep Supervised Information Bottleneck Hashing
349
- Following the information bottleneck principle in the cross-
- modal scenario (i.e., Eq. (9)), three variables, namely
- Gy, G1 and G2, have to be optimized. To obtain the modality-
- invariant Gy, we build a labNet fy that directly transforms se-
- mantic labels into pair-level hash codes. The labNet is
- a two-layer Multi-Layer Perceptron (MLP) with
- 4096 and c nodes. We then build imgNet f1 and txtNet
- f2 as hash functions that generate the hash codes G1 and G2. For
- X-ray images, imgNet is built by modifying CNN-F (Chatfield et al. 2014),
- chosen for its moderate network scale:
- to obtain c-bit hash codes, the last fully-connected
- layer of the original CNN-F is replaced by a c-node fully-
- connected layer. For radiology reports, we first use the multi-
- scale network of (Li et al. 2018) to extract multi-scale fea-
- tures, followed by a two-layer MLP with 4096 and c nodes that
- transforms them into hash codes. The last layer of each network
- uses tanh to approximate the sign(·) function; all other layers
- use ReLU activations. To improve generalization, Local Response Normal-
- ization (LRN) (Krizhevsky, Sutskever, and Hinton 2012) is
- applied between the layers of all MLPs. Note that
- CNN-F (Chatfield et al. 2014) and the multi-scale
- network (Li et al. 2018) are used only for illustration; any
- other networks can be integrated into DSIBH as back-
- bones of imgNet and txtNet.
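A minimal PyTorch sketch of the MLP-based components is given below. It assumes the stated 4096 → c layout with tanh at the output, omits the LRN layers for brevity, and stands in for, rather than reproduces, the actual labNet and txtNet; the imgNet would analogously be a CNN backbone whose final layer is replaced by a c-node fully-connected layer followed by tanh.

import torch.nn as nn

class LabNet(nn.Module):
    # Two-layer MLP (label_dim -> 4096 -> c); tanh approximates sign(.) at the output.
    def __init__(self, label_dim, code_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(label_dim, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, code_len),
            nn.Tanh(),
        )

    def forward(self, y):
        return self.net(y)

class TxtNet(nn.Module):
    # Stand-in for the text branch: pre-extracted multi-scale features -> 4096 -> c.
    def __init__(self, feat_dim, code_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, code_len),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# imgNet (not shown) would be a CNN backbone whose last fully-connected layer is
# replaced by a code_len-node layer followed by tanh.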
373
- As described before, semantic labels are encoded as hash
- codes Gy. To preserve semantic similarity, the loss function
- of labNet is:
- min_{Gy, θy} L^y = L^y_1 + η L^y_2 (10)
-   = −Σ_{l,j} (S_lj Δ_lj − log(1 + e^{Δ_lj})) + η Σ_{l=1}^{N} ‖g^y_l − f^y(θ^y; y_l)‖²,
- s.t. Gy = {g^y_l}_{l=1}^{N} ∈ {−1, 1}^c,
- where Δ_lj = f^y(θ^y; y_l)ᵀ f^y(θ^y; y_j), f^y(θ^y; y_l) is the
- output of labNet for y_l, g^y_l is the hash code of f^y(θ^y; y_l)
- handled by sign(·), and η aims to adjust the weight of the loss
- items.
399
- The first term of Eq. (10) intends to minimize the nega-
- tive log-likelihood of the semantic similarity under the likelihood
- function, which is defined as follows:
- p(S_lj | f^y(θ^y; y_l), f^y(θ^y; y_j)) = σ(Δ_lj) if S_lj = 1, and 1 − σ(Δ_lj) if S_lj = 0, (11)
- where σ(Δ_lj) = 1/(1 + e^{−Δ_lj}) is the sigmoid function. Mean-
- while, the second term restricts the outputs of labNet to be ap-
- proximately binary, as required for hash codes.
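A hedged sketch of the labNet loss in Eq. (10) follows, with log(1 + e^Δ) computed through a numerically stable softplus; variable names and shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def labnet_loss(h, g_y, s, eta=1.0):
    # h  : (N, c) real-valued labNet outputs f_y(theta_y; y_l)
    # g_y: (N, c) current binary codes in {-1, 1}
    # s  : (N, N) semantic affinity matrix with entries in {0, 1}
    delta = h @ h.t()                              # Delta_lj
    nll = -(s * delta - F.softplus(delta)).sum()   # -(S_lj*Delta_lj - log(1 + exp(Delta_lj)))
    quant = ((g_y - h) ** 2).sum()                 # binarisation penalty ||g_y - f_y||^2
    return nll + eta * quant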
412
- After the optimization of labNet, the modality-invariant
- hash code Gy is obtained. The next step is to optimize
- imgNet and txtNet to generate G1 and G2, respectively, fol-
- lowing Eq. (9). For the first item in Eq. (9), DDIB interprets
- it as a cross-entropy loss (i.e., Eq. (4)). In our implementation, Gy
- is also used as class-level weights in the cross-entropy loss,
- which encourages G1 and G2 to inherit the semantic sim-
- ilarity of the modality-invariant hash code. Specifically, the
- non-redundant multi-label annotations are transformed into
- Ny-class annotations {ȳ_l}_{l=1}^{Ny}, and their corresponding hash
- codes are regarded as the class-level weights. The weighted
- cross-entropy loss is formulated as:
- min_{θm} L^m_1 = −(1/N) Σ_{i=1}^{N} Σ_{l=1}^{Ny} ȳ_l log(a_il), (12)
- a_il ≜ exp((ḡ^y_l)ᵀ g^m_i) / Σ_{l'=1}^{Ny} exp((ḡ^y_{l'})ᵀ g^m_i),
- where m indicates the modality (1 or 2).
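The weighted cross-entropy of Eq. (12) amounts to a softmax over inner products with the class-level hash codes; a minimal sketch (names are assumptions) is:

import torch

def weighted_cross_entropy(g_m, class_codes, y_bar):
    # g_m        : (N, c)  hash outputs of imgNet or txtNet
    # class_codes: (Ny, c) hash codes of the Ny distinct label combinations
    # y_bar      : (N, Ny) class indicators for each sample
    logits = g_m @ class_codes.t()                # inner products (g_bar_y_l)^T g_m_i
    log_probs = torch.log_softmax(logits, dim=1)  # log a_il
    return -(y_bar * log_probs).sum(dim=1).mean()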
440
- For the second item in Eq. (9), we adopt the differen-
- tiable matrix-based Rényi's α-order mutual information as the
- estimate:
- min_{θm} L^m_2 = I(G^m; X^m). (13)
446
- For the third item in Eq. (9), the ℓ2 loss is directly used:
- min_{θm} L^m_3 = Σ_{i=1}^{N} ‖g^y_i − g^m_i‖². (14)
455
- By merging Eqs. (12), (13) and (14) together, we obtain
- the loss function of imgNet (or txtNet), formulated as the
- following minimization problem:
- min_{θm} L^m = L^m_1 + β L^m_2 + γ L^m_3, (15)
- where β and γ are hyper-parameters that are used to adjust
- the weights of the loss items.
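Combining the pieces, Eq. (15) can be sketched as below, reusing the illustrative weighted_cross_entropy and renyi_mutual_information functions given earlier; this is a sketch under those assumptions, not the authors' implementation.

def modality_loss(g_m, x_m, class_codes, y_bar, g_y, beta=0.1, gamma=1.0):
    l1 = weighted_cross_entropy(g_m, class_codes, y_bar)   # Eq. (12)
    l2 = renyi_mutual_information(x_m, g_m)                # Eq. (13)
    l3 = ((g_y - g_m) ** 2).sum()                          # Eq. (14)
    return l1 + beta * l2 + gamma * l3                     # Eq. (15)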
465
- Optimization
466
- The optimization of our DSIBH includes two parts: learning
- the modality-invariant hash code Gy, and learning the hash
- codes G1 and G2 for X-ray images and radiology reports,
- respectively. Learning Gy amounts to optimizing θy; for the hash
- codes of modality m, θm needs to be optimized. The whole
- optimization procedure is summarized in Algorithm 1.
- For θy of labNet, Eq. (10) is differentiable. Therefore, the back-
- propagation algorithm (BP) with mini-batch stochastic gra-
- dient descent (mini-batch SGD) is applied to update it. As for g^y_l,
- Eq. (16) is used for the update:
- g^y_l = sign(f^y(θ^y; y_l)). (16)
- For imgNet and txtNet, we also use BP with mini-batch SGD
- to update θ1 and θ2.
- Once Algorithm 1 converges, the well-trained imgNet and
- txtNet, followed by sign(·), are used to handle out-of-sample data
- points from modality m:
- g^m_i = sign(f^m(θ^m; x^m_i)). (17)
487
- Experiments
488
- In this section, we first introduce the dataset used for assess-
- ment and specify the experimental setting. Following this,
- we demonstrate that the proposed DSIBH achieves
- state-of-the-art performance on CMR-based CAD.
- Table 1: Comparison with baselines in terms of MAP on CMR-based CAD. The best results are marked in bold.
- Method | X→R: 16 bits, 32 bits, 64 bits, 128 bits | R→X: 16 bits, 32 bits, 64 bits, 128 bits
- CCA (Hotelling 1992) | 0.3468, 0.3354, 0.3273, 0.3215 | 0.3483, 0.3368, 0.3288, 0.3230
- CMSSH (Bronstein et al. 2010) | 0.4224, 0.4020, 0.3935, 0.3896 | 0.3899, 0.3967, 0.3646, 0.3643
- SCM (Zhang and Li 2014) | 0.4581, 0.4648, 0.4675, 0.4684 | 0.4516, 0.4574, 0.4604, 0.4611
- STMH (Wang et al. 2015) | 0.3623, 0.3927, 0.4211, 0.4387 | 0.3980, 0.4183, 0.4392, 0.4453
- CMFH (Ding et al. 2016) | 0.3649, 0.3673, 0.3736, 0.3760 | 0.4130, 0.4156, 0.4303, 0.4309
- SePH (Lin et al. 2016) | 0.4684, 0.4776, 0.4844, 0.4903 | 0.4475, 0.4555, 0.4601, 0.4658
- DCMH (Jiang and Li 2017) | 0.4834, 0.4878, 0.4885, 0.4839 | 0.4366, 0.4513, 0.4561, 0.4830
- SSAH (Li et al. 2018) | 0.4894, 0.4999, 0.4787, 0.4624 | 0.4688, 0.4806, 0.4832, 0.4833
- EGDH (Shi et al. 2019) | 0.4821, 0.5010, 0.4996, 0.5096 | 0.4821, 0.4943, 0.4982, 0.5041
- DSIBH | 0.5001, 0.5018, 0.5116, 0.5172 | 0.4898, 0.4994, 0.4997, 0.5084
504
- [Figure 1, referenced below, contains two panels: (a) query X-ray images with the radiology reports retrieved by DSIBH, and (b) query radiology reports with the retrieved X-ray images. The example report texts and pathology labels displayed inside the figure are omitted here.]
559
- Figure 1: The top 4 profiles retrieved by our DSIBH on the MIMIC-CXR dataset with 128 bits.
560
- Experimental setting
561
- The large-scale chest X-ray and radiology report dataset
562
- MIMIC-CXR (Johnson et al. 2019) is used to evaluate the
563
- performance of DSIBH. Some statistics of this dataset are
564
- introduced as follows.
565
- MIMIC-CXR (https://physionet.org/content/mimic-cxr/2.0.0/) consists of chest
- X-ray images and radiology reports sourced from the Beth Israel Deaconess
- Medical Center between 2011 and 2016. Each radiology report is as-
568
- sociated with at least one X-ray image and annotated with
569
- a 14-dimensional label indicating the existence of pathol-
570
- ogy or lack of pathology. To evaluate the performance of
571
- CMR-based CAD, we adopt 73876 image-report pairs for
572
- assessment. During the comparison process, radiology re-
573
- ports are represented as bag-of-word vectors according to
574
- the top 617 most-frequent words. In the testing phase, we
575
- randomly sample 762 image-report pairs as query set and
576
- regard the rest as retrieval set. In the training phase, 14000
577
- pairs from the retrieval set are used as training set.
578
- The proposed DSIBH is compared with nine state-of-the-art
- hashing-based CMR methods: CCA (Hotelling
- 1992), CMSSH (Bronstein et al. 2010), SCM (Zhang and
- Li 2014), STMH (Wang et al. 2015), CMFH (Ding et al.
- 2016), SePH (Lin et al. 2016), DCMH (Jiang and Li 2017),
- SSAH (Li et al. 2018), and EGDH (Shi et al. 2019). CCA,
584
- STMH and CMFH are unsupervised approaches that depend
585
- on data distributions, whereas the other six are supervised
586
- methods that take semantic labels into account. For fair com-
587
- parison with shallow-structure-based baselines, we use the
588
- training set of MIMIC-CXR to optimize a CNN-F network for
- classification and extract 4096-dimensional features to rep-
- resent X-ray images. The hyper-parameters are set to η = 1,
- β = 0.1 and γ = 1 for MIMIC-CXR. In the optimization
- phase, the batch size is set to 128 and three Adam solvers
- with different learning rates are applied (10^-3 for labNet,
- 10^-4.5 for imgNet and 10^-3.5 for txtNet).
595
- Algorithm 1: The Optimization Procedure of DSIBH
- Input: X-ray images X1, radiology reports X2, semantic labels Y,
- learning rates λy, λ1, λ2, and iteration numbers Ty, T1, T2.
- Output: Parameters θ1 and θ2 of imgNet and txtNet.
- 1: Randomly initialize θy, θ1, θ2 and Gy.
- 2: repeat
- 3:   for iter = 1 to Ty do
- 4:     Update θy by the BP algorithm: θy ← θy − λy · ∇θy L^y
- 5:     Update Gy by Eq. (16)
- 6:   end for
- 7:   for iter = 1 to T1 do
- 8:     Update θ1 by the BP algorithm: θ1 ← θ1 − λ1 · ∇θ1 L^1
- 9:   end for
- 10:  for iter = 1 to T2 do
- 11:    Update θ2 by the BP algorithm: θ2 ← θ2 − λ2 · ∇θ2 L^2
- 12:  end for
- 13: until convergence
- Mean average precision (MAP) is adopted to evaluate the
- performance of hashing-based CMR methods. MAP is the
- most widely used metric for retrieval accuracy,
- and is computed as follows:
- MAP = (1/|Q|) Σ_{i=1}^{|Q|} (1/r_{qi}) Σ_{j=1}^{R} P_{qi}(j) δ_{qi}(j), (18)
- where |Q| indicates the size of the query set, r_{qi} represents
- the number of instances in the database set that are correlated
- with query q_i, R is the retrieval radius, P_{qi}(j) denotes the precision of
- the top j retrieved samples, and δ_{qi}(j) indicates whether the
- j-th returned sample is correlated with the i-th query entity. To
- reflect the overall property of the rankings, the size of the database
- set is used as the retrieval radius.
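For reference, MAP with the whole database as retrieval radius can be computed as in the following sketch (binary codes in {−1, 1}, multi-hot labels); this is an illustrative implementation, not the authors' evaluation script.

import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels):
    # Rank the whole database for each query by Hamming distance, then average
    # the per-query average precision, as in Eq. (18).
    aps = []
    for q, ql in zip(query_codes, query_labels):
        dist = (q != db_codes).sum(axis=1)            # Hamming distance
        order = np.argsort(dist)
        relevant = (db_labels[order] @ ql) > 0        # shares at least one label
        if relevant.sum() == 0:
            continue
        ranks = np.arange(1, len(order) + 1)
        prec_at_j = np.cumsum(relevant) / ranks       # P_qi(j)
        aps.append((prec_at_j * relevant).sum() / relevant.sum())
    return float(np.mean(aps))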
632
- The efficacy of DSIBH in CMR-based CAD
633
- CMR-based CAD involves two retrieval directions: using
634
- X-ray images to retrieve radiology reports ( X→R) and
635
- using radiology reports to retrieve X-ray images ( R→X).
636
- In experiments, we set bit length as 16, 32, 64 and 128 bits.
637
- Table 1 reports the MAP results on the MIMIC-CXR
638
- dataset. As can be seen, unsupervised methods fail to pro-
639
- vide reasonable retrieval results due to the neglect of se-
640
- mantic information. CCA performs the worst among these
641
- unsupervised methods due to the naive management of data
642
- distribution. Compared with CCA, STMH and CMFH can
643
- achieve a better retrieval accuracy, which we argue can
644
- be attributed to the coverage of data correlation. By con-
645
- trast, shallow-structure-based supervised methods includ-
646
- ing CMSSH, SCM, and SePH achieve a large performance
647
- gain over unsupervised methods by further considering se-
648
- mantic information to express semantic similarity with hash
649
- codes. Benefiting from their nonlinear fitting abil-
650
- ity and self-adjusting feature extraction ability, deep super-
651
- vised methods including DCMH, SSAH and EGDH out-
652
- perform the six shallow methods overall. Due to the
653
- extra consideration of superfluous information reduction,
654
- our DSIBH can achieve the best accuracy. Specifically,
655
- compared with the recently proposed deep hashing method
656
- EGDH by MAP, our DSIBH achieves average absolute in-
657
- creases of 0.96%/0.47% on the MIMIC-CXR dataset.
658
- Meanwhile, Figure 1 visualizes the top 4 medical
- profiles retrieved by our DSIBH in the X→R and R→X directions
- on the MIMIC-CXR dataset. These results further confirm
- that DSIBH retrieves pathology-related heterogeneous medical data.
662
- Conclusion
663
- In this paper, to perform computer-aided diagnosis (CAD)
- based on large-scale multi-modal medical data, the
- cross-modal retrieval (CMR) technique based on semantic
- hashing is introduced. Inspired by Deep Deterministic In-
- formation Bottleneck, a novel method named Deep Super-
- vised Information Bottleneck Hashing (DSIBH) is designed
- to perform CMR-based CAD. Experiments are conducted
- on the large-scale medical dataset MIMIC-CXR. Compared
- with other state-of-the-art methods, our DSIBH reduces the dis-
- traction of superfluous information, which strengthens
- the discriminability of hash codes in CMR-based CAD.
674
- Acknowledgements
675
- This work is partially supported by NSFC (62101179,
676
- 61772220), Key R&D Plan of Hubei
677
- Province (2020BAB027) and Project of Hubei Univer-
678
- sity School (202011903000002).
679
- References
680
- Bronstein, M. M.; Bronstein, A. M.; Michel, F.; and Para-
681
- gios, N. 2010. Data fusion through cross-modality metric
682
- learning using similarity-sensitive hashing. In 2010 IEEE
683
- computer society conference on computer vision and pattern
684
- recognition , 3594–3601. IEEE.
685
- Cao, Y .; Long, M.; Wang, J.; and Liu, S. 2017. Collec-
686
- tive deep quantization for efficient cross-modal retrieval. In
687
- Thirty-First AAAI Conference on Artificial Intelligence .
688
- Cao, Y .; Long, M.; Wang, J.; and Zhu, H. 2016. Correlation
689
- autoencoder hashing for supervised cross-modal search. In
690
- Proceedings of the 2016 ACM on International Conference
691
- on Multimedia Retrieval , 197–204. ACM.
692
- Chatfield, K.; Simonyan, K.; Vedaldi, A.; and Zisserman, A.
693
- 2014. Return of the devil in the details: Delving deep into
694
- convolutional nets. arXiv preprint arXiv:1405.3531 .
695
- de La Torre, J.; Valls, A.; and Puig, D. 2020. A deep learn-
696
- ing interpretable classifier for diabetic retinopathy disease
697
- grading. Neurocomputing , 396: 465–476.
698
- Ding, G.; Guo, Y .; Zhou, J.; and Gao, Y . 2016. Large-
699
- scale cross-modality search via collective matrix factoriza-
700
- tion hashing. IEEE Transactions on Image Processing ,
701
- 25(11): 5427–5440.
702
- Erfankhah, H.; Yazdi, M.; Babaie, M.; and Tizhoosh, H. R.
703
- 2019. Heterogeneity-aware local binary patterns for retrieval
704
- of histopathology images. IEEE Access , 7: 18354–18367.
705
- Fang, J.; Fu, H.; and Liu, J. 2021. Deep triplet hashing net-
706
- work for case-based medical image retrieval. Medical Image
707
- Analysis , 69: 101981.
708
- Hotelling, H. 1992. Relations between two sets of variates.
709
- InBreakthroughs in statistics , 162–190. Springer.
710
- Hu, H.; Xie, L.; Hong, R.; and Tian, Q. 2020. Creating
711
- Something From Nothing: Unsupervised Knowledge Distil-
712
- lation for Cross-Modal Hashing. In 2020 IEEE/CVF Confer-
713
- ence on Computer Vision and Pattern Recognition (CVPR) .Hu, Z.; Liu, X.; Wang, X.; Cheung, Y .-m.; Wang, N.; and
714
- Chen, Y . 2019. Triplet Fusion Network Hashing for Un-
715
- paired Cross-Modal Retrieval. In Proceedings of the 2019
716
- on International Conference on Multimedia Retrieval , 141–
717
- 149.
718
- In´es, A.; Dom ´ınguez, C.; Heras, J.; Mata, E.; and Pascual, V .
719
- 2021. Biomedical image classification made easier thanks to
720
- transfer and semi-supervised learning. Computer Methods
721
- and Programs in Biomedicine , 198: 105782.
722
- Jiang, Q.-Y .; and Li, W.-J. 2017. Deep cross-modal hashing.
723
- InProceedings of the IEEE conference on computer vision
724
- and pattern recognition , 3232–3240.
725
- Johnson, A. E.; Pollard, T. J.; Greenbaum, N. R.; Lungren,
726
- M. P.; Deng, C.-y.; Peng, Y .; Lu, Z.; Mark, R. G.; Berkowitz,
727
- S. J.; and Horng, S. 2019. MIMIC-CXR-JPG, a large pub-
728
- licly available database of labeled chest radiographs. arXiv
729
- preprint arXiv:1901.07042 .
730
- Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Im-
731
- agenet classification with deep convolutional neural net-
732
- works. In Advances in neural information processing sys-
733
- tems, 1097–1105.
734
- Li, C.; Deng, C.; Li, N.; Liu, W.; Gao, X.; and Tao, D.
735
- 2018. Self-supervised adversarial hashing networks for
736
- cross-modal retrieval. In Proceedings of the IEEE con-
737
- ference on computer vision and pattern recognition , 4242–
738
- 4251.
739
- Lin, Z.; Ding, G.; Han, J.; and Wang, J. 2016. Cross-view re-
740
- trieval via probability-based semantics-preserving hashing.
741
- IEEE transactions on cybernetics , 47(12): 4342–4355.
742
- Liu, S.; Qian, S.; Guan, Y .; Zhan, J.; and Ying, L.
743
- 2020. Joint-modal Distribution-based Similarity Hashing
744
- for Large-scale Unsupervised Deep Cross-modal Retrieval.
745
- InProceedings of the International ACM SIGIR Conference
746
- on Research and Development in Information Retrieval ,
747
- 1379–1388.
748
- Shen, F.; Gao, X.; Liu, L.; Yang, Y .; and Shen, H. T. 2017.
749
- Deep asymmetric pairwise hashing. In Proceedings of the
750
- ACM international conference on Multimedia , 1522–1530.
751
- Shi, X.; Su, H.; Xing, F.; Liang, Y .; Qu, G.; and
752
- Yang, L. 2020. Graph temporal ensembling based semi-
753
- supervised convolutional neural network with noisy labels
754
- for histopathology image analysis. Medical Image Analysis ,
755
- 60: 101624.
756
- Shi, Y .; You, X.; Zheng, F.; Wang, S.; and Peng, Q. 2019.
757
- Equally-guided discriminative hashing for cross-modal re-
758
- trieval. In Proceedings of the 28th International Joint Con-
759
- ference on Artificial Intelligence , 4767–4773.
760
- Song, G.; Tan, X.; Zhao, J.; and Yang, M. 2021. Deep Ro-
761
- bust Multilevel Semantic Hashing for Multi-Label Cross-
762
- Modal Retrieval. Pattern Recognition , 108084.
763
- Tishby, N.; Pereira, F. C.; and Bialek, W. 1999. The infor-
764
- mation bottleneck method. 368–377.
765
- Wang, D.; Gao, X.; Wang, X.; and He, L. 2015. Seman-
766
- tic topic multimodal hashing for cross-media retrieval. In
767
- Twenty-Fourth International Joint Conference on Artificial
768
- Intelligence .Wang, J.; Zhang, T.; Sebe, N.; Shen, H. T.; et al. 2017. A
769
- survey on learning to hash. IEEE transactions on pattern
770
- analysis and machine intelligence , 40(4): 769–790.
771
- Wang, K.; Yin, Q.; Wang, W.; Wu, S.; and Wang, L. 2016.
772
- A comprehensive survey on cross-modal retrieval. arXiv
773
- preprint arXiv:1607.06215 .
774
- Wang, L.; Zhu, L.; Yu, E.; Sun, J.; and Zhang, H. 2019.
775
- Fusion-supervised deep cross-modal hashing. In IEEE Inter-
776
- national Conference on Multimedia and Expo , 37–42. IEEE.
777
- Xie, D.; Deng, C.; Li, C.; Liu, X.; and Tao, D. 2020.
778
- Multi-Task Consistency-Preserving Adversarial Hashing for
779
- Cross-Modal Retrieval. IEEE Transactions on Image Pro-
780
- cessing , 29: 3626–3637.
781
- Yang, E.; Yao, D.; Cao, B.; Guan, H.; Yap, P.-T.; Shen, D.;
782
- and Liu, M. 2020. Deep disentangled hashing with momen-
783
- tum triplets for neuroimage search. In International Confer-
784
- ence on Medical Image Computing and Computer-Assisted
785
- Intervention , 191–201. Springer.
786
- Yao, H.-L.; Zhan, Y .-W.; Chen, Z.-D.; Luo, X.; and Xu,
787
- X.-S. 2021. TEACH: Attention-Aware Deep Cross-Modal
788
- Hashing. In Proceedings of the 2021 International Confer-
789
- ence on Multimedia Retrieval , ICMR ’21, 376–384. New
790
- York, NY , USA: Association for Computing Machinery.
791
- ISBN 9781450384636.
792
- Yu, J.; Zhou, H.; Zhan, Y .; and Tao, D. 2021. Deep Graph-
793
- neighbor Coherence Preserving Network for Unsupervised
794
- Cross-modal Hashing. In Proceedings of the AAAI Confer-
795
- ence on Artificial Intelligence , volume 35, 4626–4634.
796
- Yu, S.; Giraldo, L. G. S.; Jenssen, R.; and Principe, J. C.
797
- 2019. Multivariate Extension of Matrix-Based Rényi's
798
- α-Order Entropy Functional. IEEE transactions on pattern
799
- analysis and machine intelligence , 42(11): 2960–2966.
800
- Yu, X.; Yu, S.; and Príncipe, J. C. 2021. Deep Deterministic
801
- Information Bottleneck with Matrix-Based Entropy Func-
802
- tional. In ICASSP 2021-2021 IEEE International Confer-
803
- ence on Acoustics, Speech and Signal Processing (ICASSP) ,
804
- 3160–3164. IEEE.
805
- Zhan, Y .-W.; Luo, X.; Wang, Y .; and Xu, X.-S. 2020. Su-
806
- pervised Hierarchical Deep Hashing for Cross-Modal Re-
807
- trieval. In Proceedings of the 28th ACM International Con-
808
- ference on Multimedia , MM ’20, 3386–3394. New York,
809
- NY , USA: Association for Computing Machinery. ISBN
810
- 9781450379885.
811
- Zhang, D.; and Li, W.-J. 2014. Large-scale supervised mul-
812
- timodal hashing with semantic correlation maximization. In
813
- Twenty-Eighth AAAI Conference on Artificial Intelligence .
814
- Zhang, J.; Xie, Y .; Xia, Y .; and Shen, C. 2019. Attention
815
- residual learning for skin lesion classification. IEEE trans-
816
- actions on medical imaging , 38(9): 2092–2103.
817
- Zhen, L.; Hu, P.; Wang, X.; and Peng, D. 2020. Deep Super-
818
- vised Cross-Modal Retrieval. In 2019 IEEE/CVF Confer-
819
- ence on Computer Vision and Pattern Recognition (CVPR) .
820
- Zhu, L.; Lu, X.; Cheng, Z.; Li, J.; and Zhang, H. 2020. Flex-
821
- ible multi-modal hashing for scalable multimedia retrieval.
822
- ACM Transactions on Intelligent Systems and Technology
823
- (TIST) , 11(2): 1–20.
txt/2205.10663.txt DELETED
@@ -1,265 +0,0 @@
1
- Transformer based Generative Adversarial
2
- Network for Liver Segmentation
3
- Ugur Demir*, Zheyuan Zhang*, Bin Wang, Matthew Antalek, Elif Keles,
4
- Debesh Jha, Amir Borhani, Daniela Ladner, and Ulas Bagci
5
- Northwestern University, IL 60201, USA
6
7
- Abstract. Automated liver segmentation from radiology scans (CT,
8
- MRI) can improve surgery and therapy planning and follow-up assess-
9
- ment in addition to conventional use for diagnosis and prognosis. Al-
10
- though convolutional neural networks (CNNs) have became the stan-
11
- dard image segmentation tasks, more recently this has started to change
12
- towards Transformers based architectures because Transformers are tak-
13
- ing advantage of capturing long range dependence modeling capability in
14
- signals, so called attention mechanism. In this study, we propose a new
15
- segmentation approach using a hybrid approach combining the Trans-
16
- former(s) with the Generative Adversarial Network (GAN) approach.
17
- The premise behind this choice is that the self-attention mechanism of the
18
- Transformers allows the network to aggregate the high dimensional fea-
19
- ture and provide global information modeling. This mechanism provides
20
- better segmentation performance compared with traditional methods.
21
- Furthermore, we encode this generator into the GAN based architecture
22
- so that the discriminator network in the GAN can classify the credibility
23
- of the generated segmentation masks compared with the real masks com-
24
- ing from human (expert) annotations. This allows us to extract the high
25
- dimensional topology information in the mask for biomedical image seg-
26
- mentation and provide more reliable segmentation results. Our model
27
- achieved a high dice coecient of 0.9433, recall of 0.9515, and preci-
28
- sion of 0.9376 and outperformed other Transformer based approaches.
29
- The implementation details of the proposed architecture can be found
30
- athttps://github.com/UgurDemir/tranformer_liver_segmentation .
31
- Keywords: Liver segmentation ·Transformer ·Generative adversarial
32
- network
33
- 1 Introduction
34
- Liver cancer is among the leading causes of cancer-related deaths, accounting for
35
- 8.3% of cancer mortality [14]. The high variability in shape, size, appearance,
36
- and local orientations makes liver (and liver diseases such as tumors, brosis)
37
- challenging to analyze from radiology scans for which the image segmentation is
38
- ∗Those authors contribute equally to this paper.arXiv:2205.10663v2 [eess.IV] 28 May 20222 Demir, Zhang et al.
39
- often necessary [3]. An accurate organ and lesion segmentation could facilitate
40
- reliable diagnosis and therapy planning including prognosis [5].
41
- As a solution to biomedical image segmentation, the literature is vast and
42
- rich. The self-attention mechanism is nowadays widely used in the biomedical
43
- image segmentation eld where long-range dependencies and context dependent
44
- features are essential. By capturing such information, transformer based seg-
45
- mentation architectures (for example, SwinUNet [2]) have achieved promising
46
- performance on many vision tasks including biomedical image segmentation [7,
47
- 15].
48
- In parallel to the all advances in Transformers, generative methods have
49
- achieved remarkable progresses in almost all elds of computer vision too [4].
50
- For example, Generative Adversarial Networks (GAN) [6] is a widely used tool
51
- for generating one target image from one source image. GAN has been applied to
52
- the image segmentation framework to distinguish the credibility of the generated
53
- masks like previous studies [11, 9]. The high dimensional topology information
54
- is an important feature for pixel levell classi cation, thus segmentation. For ex-
55
- ample, the segmented mask should recognize the object location, orientation,
56
- and scale prior to delineation procedure, but most current segmentation en-
57
- gines are likely to provide false positives outside the target region or conversely
58
- false negatives within the target region due to an inappropriate recognition of
59
- the target regions. By introducing the discriminator architecture (as a part of
60
- GAN) to distinguish whether the segmentation mask is high quality or not, we
61
- could proactively screen poor predictions from the segmentation model. Fur-
62
- thermore, this strategy can also allow us to take advantage of many unpaired
63
- segmentation masks which can be easily acquired or even simulated in the seg-
64
- mentation targets. To this end, in this paper, we propose a Transformer based
65
- GAN architecture as well as a Transformer based CycleGAN architecture for au-
66
- tomatic liver segmentation, a very improtant clinical precursor for liver diseases.
67
- By combining two strong algorithms, we aim to achieve both good recognition
68
- (localization) of the target region and high quality delineations.
69
- 2 Proposed method
70
- We rst investigated the transformer architecture to solve the liver segmenta-
71
- tion problem from radiology scans, CT in particular due to its widespread use
72
- and being the rst choice in most liver disease quanti cation. The self-attention
73
- mechanism of the Transformers has been demonstrated to be very e ective ap-
74
- proach when nding long range dependencies as stated before. This can be quite
75
- bene cial for the liver segmentation problem especially because the object of
76
- interest (liver) is large and pixels constituting the same object are far from each
77
- other. We also utilized an adversarial training approach to boost the segmenta-
78
- tion model performance. For this, we have devised a conditional image generator
79
- in a vanilla-GAN that learns a mapping between the CT slices and the segmen-
80
- tation maps (i.e., surrogate of the truths or reference standard). The adversarial
81
- training forces the generator model to predict more realistic segmentation out-Transformer based Generative Adversarial Network for Liver Segmentation 3
82
- Ground
83
- Truth
84
- Predicted
85
- Mask
86
- Real
87
- Fake Discriminator Transformer
88
- Blocks Transformer
89
- (Generator)
90
- Decoder Encoder
91
- Transformer
92
- Blocks Image to Mask
93
- Generator
94
- Decoder Encoder Transformer
95
- Blocks Mask to Image
96
- Generator
97
- Decoder Encoder
98
- Real
99
- Mask
100
- Fake
101
- Mask Mask
102
- Discriminator Real
103
- Image
104
- Fake
105
- Image Image
106
- Discriminator
107
- Ground
108
- Truth
109
- Input Input
110
- Predicted
111
- Mask
112
- Input a) Transformer-GAN
113
- b) Transformer-CycleGAN
114
- Fig. 1: Block diagram of the Transformer GAN. (a) Vanilla GAN and (b) Cycle-
115
- GAN with Transformer generator architectures.
116
- comes. In addition to vanilla-GAN, we have also utilized the CycleGAN [17], [13]
117
- approach to investigate the e ect of cycle consistency constraint on the segmen-
118
- tation task. Figure 1 demonstrates the general overview of the proposed method.
119
- 2.1 Transformer based GAN
120
- Like other GAN architectures [10], Transformer based GAN architecture is com-
121
- posed of two related sub-architectures: the generator and the discriminator. The
122
- generator part could generate the segmentation mask from the raw image (i.e.,
123
- segmentation task itself), while the discriminator tries to distinguish predictions
124
- from the human annotated ground truth. GAN provides a better way to dis-
125
- tinguish the high-dimensional morphology information. The discriminator can
126
- provide the similarity between the predicted masks and the ground truth (i.e.,
127
- surrogate truth) masks. Vanilla GAN considers the whole segmentation to decide
128
- whether it is fake or not.
129
- 2.2 Transformer based CycleGAN
130
- One alternative extension to the standard GAN approach is to use transformer
131
- based segmentation model within the CycleGAN setup. Unlike a standard GAN,
132
- CycleGAN consists of two generators and two discriminator networks. While the4 Demir, Zhang et al.
133
- rst generator accepts the raw images as input and predicts the segmentation
134
- masks, the second generator takes the predicted segmentation maps as input
135
- and maps them back to the input image. The rst discriminator classi es the
136
- segmentation masks as either real or fake, and the second discriminator distin-
137
- guishes the real and the reconstructed image. Figure 1 illustrates this procedure
138
- with liver segmentation from CT scans.
139
- To embed transformers within the CycleGAN, we utilized the encoder-decoder
140
- style convolutional transformer model [13]. The premise behind this idea was
141
- that the encoder module takes the input image and decreases the spatial dimen-
142
- sions while extracting features with convolution layers. This allowed processing
143
- of large-scale images. The core transformer module consisted of several stacked
144
- linear layers and self-attention blocks. The decoder part increased the spatial
145
- dimension of the intermediate features and makes the nal prediction. For the
146
- discriminator network, we tried three convolutional architectures. The vanilla-
147
- GAN discriminator evaluates the input image as a whole. Alternatively, we have
148
- adopted PatchGAN discriminator architecture [8] to focus on small mask patches
149
- to decide the realness of each region. It splits the input masks into NxN regions
150
- and asses their quality individually. When we set the patch size to a pixel,
151
- PatchGAN can be considered as pixel level discriminator. W have observed that
152
- the pixel level discriminator tends to surpass other architecture for segmenta-
153
- tion. Figure 1 demonstrates the network overview. In all of the experiments,
154
- the segmentation model uses the same convolutional transformer and pixel level
155
- discriminator architectures.
156
- 3 Experimental setup
157
- We have used Liver Tumor Segmentation Challenge (LiTS)[1] dataset. LiTS
158
- consists of 131 CT scans. This dataset is publicly available under segmentation
159
- challenge website and approved IRB by the challenge organizers. More informa-
160
- tion about the dataset and challenge can be found here1.
161
- All our models were trained on NVIDIA RTX A6000 GPU after implemented
162
- using the PyTorch [12] framework. We have used 95 samples for training and
163
- 36 samples for testing. All models are trained on the same hyperparameters
164
- con guration with a learning rate of 2 e4, and Adam optimizer with beta1
165
- being 0.5 and beta2 being 0.999. All of the discriminators use the pixel level
166
- discriminator in both GAN and CycleGAN experiments. We have used recall,
167
- precision, and dice coecient for quantitative evaluations of the segmentation.
168
- Further, segmentation results were qualitatively evaluated by the participating
169
- physicians. Our algorithms are available for public use.
170
- 4 Results
171
- We presented the evaluation results in Table 1. Our best performing method
172
- was Transformer based GAN architecture, achieved a highest dice coecient
173
- 1https://competitions.codalab.org/competitions/17094#learn_the_detailsTransformer based Generative Adversarial Network for Liver Segmentation 5
174
- Input Segmentation
175
- Input Segmentation
176
- Fig. 2: Transformer based GAN liver segmentation results. Green: True positive,
177
- Red: False Positive, Blue: False Negative.
178
- Table 1: Performance of Transformer based methods on the LITS dataset. [1]
179
- Method Dice coecient Precision Recall
180
- Transformer [16, 13] 0.9432 0.9464 0.9425
181
- Transformer - CycleGAN (ours) 0.9359 0.9539 0.9205
182
- Transformer - GAN (ours) 0.9433 0.9376 0.9515
183
- of 0.9433 and recall rate of 0.9515. Similarly, our transformer based CycleGAN
184
- architecture has the highest precision, 0.9539. With Transformer based GAN, we
185
- achieved 0.9% improvement in recall and 0.01% improvement in dice coecient
186
- with respect to the vanilla Transformers. It is to be noted that we have used also
187
- post-processing technique which boosts the performance for "all" the baselines
188
- to avoid biases one from each other.6 Demir, Zhang et al.
189
- Figure 2 shows our qualitative results for the liver segmentation. We have ex-
190
- amined all the liver segmentation results one-by-one and no failure were identi ed
191
- by the participating physicians. Hence, visual results agreed with the quantita-
192
- tive results as described in Table 1.
193
- 5 Conclusion
194
- In this study, we explored the use of transformer-based GAN architectures for
195
- medical image segmentation. Speci cally, we used a self-attention mechanism
196
- and designed a discriminator for classifying the credibility of generated segmen-
197
- tation masks. Our experimental result showed that the proposed new segmenta-
198
- tion architectures could provide accurate and reliable segmentation performance
199
- as compared to the baseline Transfomers. Although we have shown our results
200
- in an important clinical problem for liver diseases where image-based quanti -
201
- cation is vital, the proposed hybrid architecture (i.e., combination of GAN and
202
- Transformers) can potentially be applied to various medical image segmentation
203
- tasks beyond liver CTs as the algorithms are generic, reproducible, and carries
204
- similarities with the other segmentation tasks in biomedical imaging eld. We
205
- anticipate that our architecture can also be applied to medical scans within the
206
- semi-supervised learning, planned as a future work.
207
- Acknowledgement
208
- This study is partially supported by NIH NCI grants R01-CA246704 and R01-
209
- R01-CA240639.
210
- References
211
- 1. Bilic, P., Christ, P.F., Vorontsov, E., Chlebus, G., Chen, H., Dou, Q., Fu, C.W.,
212
- Han, X., Heng, P.A., Hesser, J., et al.: The liver tumor segmentation benchmark
213
- (lits). arXiv preprint arXiv:1901.04056 (2019)
214
- 2. Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., Wang, M.: Swin-
215
- unet: Unet-like pure transformer for medical image segmentation. arXiv preprint
216
- arXiv:2105.05537 (2021)
217
- 3. Chlebus, G., Schenk, A., Moltz, J.H., van Ginneken, B., Hahn, H.K., Meine, H.:
218
- Automatic liver tumor segmentation in ct with fully convolutional neural networks
219
- and object-based postprocessing. Scienti c reports 8(1), 1{7 (2018)
220
- 4. Chuquicusma, M.J., Hussein, S., Burt, J., Bagci, U.: How to fool radiologists with
221
- generative adversarial networks? a visual turing test for lung cancer diagnosis. In:
222
- 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). pp.
223
- 240{244. IEEE (2018)
224
- 5. Cornelis, F., Martin, M., Saut, O., Buy, X., Kind, M., Palussiere, J., Colin, T.:
225
- Precision of manual two-dimensional segmentations of lung and liver metastases
226
- and its impact on tumour response assessment using recist 1.1. European radiology
227
- experimental 1(1), 1{7 (2017)Transformer based Generative Adversarial Network for Liver Segmentation 7
228
- 6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S.,
229
- Courville, A., Bengio, Y.: Generative adversarial nets. Advances in neural infor-
230
- mation processing systems 27(2014)
231
- 7. Huang, X., Deng, Z., Li, D., Yuan, X.: Missformer: An e ective medical image
232
- segmentation transformer. arXiv preprint arXiv:2109.07162 (2021)
233
- 8. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with condi-
234
- tional adversarial networks. CVPR (2017)
235
- 9. Khosravan, N., Mortazi, A., Wallace, M., Bagci, U.: PAN: Projective Adversar-
236
- ial Network for Medical Image Segmentation. In: Medical Image Computing and
237
- Computer Assisted Intervention { MICCAI 2019 - 22nd International Conference,
238
- Proceedings (2019)
239
- 10. Liu, Y., Khosravan, N., Liu, Y., Stember, J., Shoag, J., Bagci, U., Jambawalikar,
240
- S.: Cross-modality knowledge transfer for prostate segmentation from ct scans. In:
241
- Domain adaptation and representation transfer and medical image learning with
242
- less labels and imperfect data, pp. 63{71. Springer (2019)
243
- 11. Luc, P., Couprie, C., Chintala, S., Verbeek, J.: Semantic Segmentation using Ad-
244
- versarial Networks. In: NIPS Workshop on Adversarial Training. Barcelona, Spain
245
- (Dec 2016), https://hal.inria.fr/hal-01398049
246
- 12. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen,
247
- T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-
248
- performance deep learning library. Advances in neural information processing sys-
249
- tems32(2019)
250
- 13. Ristea, N.C., Miron, A.I., Savencu, O., Georgescu, M.I., Verga, N., Khan, F.S.,
251
- Ionescu, R.T.: Cytran: Cycle-consistent transformers for non-contrast to contrast
252
- ct translation. arXiv preprint arXiv:2110.06400 (2021)
253
- 14. Sung, H., Ferlay, J., Siegel, R.L., Laversanne, M., Soerjomataram, I., Jemal, A.,
254
- Bray, F.: Global cancer statistics 2020: Globocan estimates of incidence and mor-
255
- tality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians
256
- 71(3), 209{249 (2021)
257
- 15. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser,
258
- L., Polosukhin, I.: Attention is all you need. Advances in neural information pro-
259
- cessing systems 30(2017)
260
- 16. Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., Zhang, L.: Cvt: In-
261
- troducing convolutions to vision transformers. In: Proceedings of the IEEE/CVF
262
- International Conference on Computer Vision (ICCV). pp. 22{31 (October 2021)
263
- 17. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation
264
- using cycle-consistent adversarial networks. In: Proceedings of the IEEE interna-
265
- tional conference on computer vision. pp. 2223{2232 (2017)