{"doc_id": "BkCxP2Fez", "text": ["The paper presents a Depthwise Separable Graph Convolution network that aims at generalizing Depthwise convolutions, that exhibit a nice performance in image related tasks, to the graph domain. ", "In particular it targets Graph Convolutional Networks.", "In the abstract the authors mention that the Depthwise Separable Graph Convolution that they propose is the key to understand the connections between geometric convolution methods and traditional 2D ones. ", "I am afraid I have to disagree ", "as the proposed approach is not giving any better understanding of what needs to be done and why. ", "It is an efficient way to mimic what has worked so far for the planar domain ", "but I would not consider it as fundamental in \"closing the gap\".", "I feel that the text is often redundant and that it could be simplified a lot.", "For example the authors state in various parts that DSC does not work on non-Euclidean data. ", "Section 2 should be clearer and used to better explain related approaches to motivate the proposed one.", "In fact, the entire motivation, at least for me, never went beyond the simple fact that this happens to be a good way to improve performance. ", "The intuition given is not sufficient to substantiate some of the claims on generality and understanding of graph based DL.", "In 3.1, at point (2), the authors mention that DSC filters are learned from the data whereas GC uses a constant matrix. ", "This is not correct, ", "as also reported in equation 2. ", "The matrix U is learned from the data as well.", "Equation (4) shows that the proposed approach would weight Q different GC layers. ", "In practical terms this is a linear combination of these graph convolutional layers.", "What is not clear is the \\Delta_{ij} definition. ", "It is first introduced in 2.3 and described as the relative position of pixel i and pixel j on the image, but then used in the context of a graph in (4). ", "What is the coordinate system used by the authors in this case? ", "This is a very important point that should be made clearer.", "Why is the Related Work section at the end? ", "I would put it at the front.", "The experiments compare with the recent relevant literature. ", "I think that having less number of parameters is a good thing in this setting ", "as the data is scarce,", "however I would like to see a more in-depth comparison with respect to the number of features produced by the model itself. ", "For example GCN has a representation space (latent) much smaller than DSCG.", "No statistics over multiple runs are reported, ", "and given the high variance of results on these datasets I would like them to be reported.", "I think the separability of the filters in this case brings the right level of simplification to the learning task, ", "however as it also holds for the planar case it is not clear whether this is necessarily the best way forward.", "What are the underlying mathematical insights that lead towards selecting separable convolutions?", "Overall I found the paper interesting but not ground-breaking. ", "A nice application of the separable principle to GCN. 
", "Results are also interesting ", "but should be further verified by multiple runs."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "HyxmggJbM", "text": ["This paper proposes a new way of sampling data for updates in deep-Q networks. ", "The basic principle is to update Q values starting from the end of the episode in order to facility quick propagation of rewards back along the episode.", "The paper is interesting, ", "but it lacks the proper comparisons to previously published techniques.", "The results presented by this paper shows improvement over the baseline. ", "But the Atari results is still significantly worse than the current SOTA.", "In the non-tabular case, the authors have actually moved away from Q learning and defined an objective that is both on and off-policy. ", "Some (theoretical) analysis would be nice. ", "It is hard to judge whether the objective defined in the non-tabular defines a contraction operator at all in the tabular case.", "There has been a number of highly relevant papers. ", "Prioritized replay, for example, could have a very similar effect to proposed approach in the tabular case.", "In the non-tabular case, the Retrace algorithm, tree backup, Watkin's Q learning all bear significant resemblance to the proposed method. ", "Although the proposed algorithm is different from all 3, ", "the authors should still have compared to at least one of them as a baseline. ", "The Retrace algorithm specifically has also been shown to help significantly in the Atari case, ", "and it defines a convergent update rule."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact"]}
{"doc_id": "H1WORsdlG", "text": ["This paper addresses the important problem of understanding mathematically how GANs work. ", "The approach taken here is to look at GAN through the lense of the scattering transform.", "Unfortunately the manuscrit submitted is very poorly written.", "Introduction and flow of thoughts is really hard to follow.", "In method sections, the text jumps from one concept to the next without proper definitions.", "Sorry I stopped reading on page 3.", "I suggest to rewrite this work before sending it to review.", "Among many things: - For citations use citep and not citet to have () at the right places.", "- Why does it seems -> Why does it seem etc."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request"]}
{"doc_id": "H1JzYwcxM", "text": ["=== SUMMARY === The paper considers a combination of Reinforcement Learning (RL) and Imitation Learning (IL), in the infinite horizon discounted MDP setting.", "The IL part is in the form of an oracle that returns a value function V^e, which is an approximation of the optimal value function. ", "The paper defines a new cost (or reward) function based on V^e, through shaping (Eq. 1). ", "It is known that shaping does not change the optimal policy.", "A key aspect of this paper is to consider a truncated horizon problem (say horizon k) with the reshaped cost function, instead of an infinite horizon MDP.", "For this truncated problem, one can write the (dis)advantage function as a k-step sum of reward plus the value returned by the oracle at the k-th step (cf. Eq. 5).", "Theorem 3.3 shows that the value of the optimal policy of the truncated MDP w.r.t. the original MDP is only O(gamma^k eps) worse than the optimal policy of the original problem (gamma is the discount factor and eps is the error between V^e and V*).", "This suggests two things: 1) Having an oracle that is accurate (small eps) leads to good performance. ", "If oracle is the same as the optimal value function, we do not need to plan more than a single step ahead.", "2) By planning for k steps ahead, one can decrease the error in the oracle geometrically fast. ", "In the limit of k \u2014> inf, the error in the oracle does not matter.", "Based on this insight, the paper suggests an actor-critic-like algorithm called THOR (Truncated HORizon policy search) that minimizes the total cost over a truncated horizon with a modified cost function.", "Through a series of experiments on several benchmark problems (inverted pendulum, swimmer, etc.), the paper shows the effect of planning horizon k.", "=== EVALUATION & COMMENTS === I like the main idea of this paper. ", "The paper is also well-written. ", "But one of the main ideas of this paper (truncating the planning horizon and replacing it with approximation of the optimal value function) is not new and has been studied before, ", "but has not been properly cited and discussed.", "There are a few papers that discuss truncated planning. ", "Most closely is the following paper:", "Farahmand, Nikovski, Igarashi, and Konaka, \u201cTruncated Approximate Dynamic Programming With Task-Dependent Terminal Value,\u201d AAAI, 2016.", "The motivation of AAAI 2016 paper is different from this work. ", "The goal there is to speedup the computation of finite, but large, horizon problem with a truncated horizon planning. ", "The setting there is not the combination of RL and IL, but multi-task RL. ", "An approximation of optimal value function for each task is learned off-line and then used as the terminal cost. ", "The important point is that the learned function there plays the same role as the value provided by the oracle V^e in this work. ", "They both are used to shorten the planning horizon. ", "That paper theoretically shows the effect of various error terms, including terms related to the approximation in the planning process (this paper does not do that).", "Nonetheless, the resulting algorithms are quite different. ", "The result of this work is an actor-critic type of algorithm. ", "AAAI 2016 paper is an approximate dynamic programming type of algorithm.", "There are some other papers that have ideas similar to this work in relation to truncating the horizon. 
", "For example, the multi-step lookahead policies and the use of approximate value function as the terminal cost in the following paper:", "Bertsekas, \u201cDynamic Programming and Suboptimal Control: A Survey from ADP to MPC,\u201d European Journal of Control, 2005.", "The use of learned value function to truncate the rollout trajectory in a classification-based approximate policy iteration method has been studied by Gabillon, Lazaric, Ghavamzadeh, and Scherrer, \u201cClassification-based Policy Iteration with a Critic,\u201d ICML, 2011.", "Or in the context of Monte Carlo Tree Search planning, the following paper is relevant:", "Silver et al., \u201cMastering the game of Go with deep neural networks and tree search,\u201d Nature, 2016.", "Their \u201cvalue network\u201d has a similar role to V^e. ", "It provides an estimate of the states at the truncated horizon to shorten the planning depth.", "Note that even though these aforementioned papers are not about IL, ", "this paper\u2019s stringent requirement of having access to V^e essentially make it similar to those papers.", "In short, a significant part of this work\u2019s novelty has been explored before. ", "Even though not being completely novel is totally acceptable, ", "it is important that the paper better position itself compared to the prior art.", "Aside this main issue, there are some other comments: - Theorem 3.1 is not stated clearly and may suggest more than what is actually shown in the proof. ", "The problem is that it is not clear about the fact the choice of eps is not arbitrary.", "The proof works only for eps that is larger than 0.5. ", "With the construction of the proof, if eps is smaller than 0.5, there would not be any error, i.e., J(\\hat{pi}^*) = J(pi^*).", "The theorem basically states that if the error is very large (half of the range of value function), the agent does not not perform well. ", "Is this an interesting case?", "- In addition to the papers I mentioned earlier, there are some results suggesting that shorter horizons might be beneficial and/or sufficient under certain conditions. ", "A related work is a theorem in the PhD dissertation of Ng:", "Andrew Ng, Shaping and Policy Search in Reinforcement Learning, PhD Dissertation, 2003.", "(Theorem 5 in Appendix 3.B: Learning with a smaller horizon).", "It is shown that if the error between Phi (equivalent to V^e here) and V* is small, one may choose a discount factor gamma\u2019 that is smaller than gamma of the original MDP, and still have some guarantees. ", "As the discount factor has an interpretation of the effective planning horizon, ", "this result is relevant. ", "The result, however, is not directly comparable to this work ", "as the planning horizon appears implicitly in the form of 1/(1-gamma\u2019) instead of k,", "but I believe it is worth to mention and possibly compare.", "- The IL setting in this work is that an oracle provides V^e, which is the same as (Ross & Bagnell, 2014). ", "I believe this setting is relatively restrictive ", "as in many problems we only have access to (state, action) pairs, or sequence thereof, and not the associated value function. ", "For example, if a human is showing how a robot or a car should move, we do not easily have access to V^e (unless the reward function is known and we estimate the value with rollouts; which requires us having a long trajectory). 
", "This is not a deal breaker, ", "and I would not consider this as a weakness of the work, ", "but the paper should be more clear and upfront about this.", "- The use of differential operator nabla instead of gradient of a function (a vector field) in Equations (10), (14), (15) is non-standard.", "- Figures are difficult to read, ", "as the colors corresponding to confidence regions of different curves are all mixed up. ", "Maybe it is better to use standard error instead of standard deviation."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "reference", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "reference", "fact", "fact", "reference", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "reference", "quote", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "fact", "evaluation", "fact", "request"]}
{"doc_id": "BktJHw_lM", "text": ["The paper discusses a setting in which an existing dataset/trained model is augmented/refined by adding additional datapoints.", "Issues of how to price the new data are discussed in a high level, abstract way, and arguments against retrieving the new data for free or encrypting it are presented.", "Overall, the paper is of an expository nature,", "discussing high-level ideas rather than actually implementing them,", "and does not experimentally or theoretically substantiate any of its claims.", "This makes the technical contribution rather shallow.", "Interesting questions do arise, such as how to assess the value of new data and how to price datapoints,", "but these questions are never addressed (neither theoretically nor empirically).", "Though main points are valid,", "the paper is also rife with informal statements and logical jumps,", "perhaps due to the expository/high-level approach taken in discussing these issues.", "Detailed comments:The (informal) information theoretic argument has a few holes.", "The claim is roughly that every datapoint (~1Mbyte image) contributes ~1M bits of changes in a model,", "which can be quite revealing.", "As a result, there is no benefit from encrypting the datapoint, as the mapping from inputs to changes is insecure (in an information-theoretic sense) in itself.", "This assumes that every step of stochastic gradient descent (one step per image) is done in the clear;", "this is not what one would consider secure in cryptography literature.", "A secure function evaluation (SFE) would encrypt the data and the computation in an end-to-end fashion;", "in particular, it would only reveal the final outcome of SGD over all images in the dataset without revealing any intermediate steps.", "Presuming that the new dataset is large (i.e., having N images), the \"information theoretic\" limit becomes ~N x 1Mbyte inputs for ~1M function outputs (the finally-trained model).", "In this sense, this argument that \"encryption is hopeless\" is somewhat brittle.", "Encryption-issues aside, the paper would have been much stronger if it spent more effort in formalizing or evaluating different methods for assessing the value of data.", "The authors approach this by treating the ML algorithm as a blackbox, and using influence functions (a la Bastani 2017) to assess the impact of different inputs on the finally trained model", "(again, this is proposed but not implemented/explored/evaluated in any way).", "This is a design choice, but it is not obvious.", "There is extensive literature in statistics and machine learning on the areas of experimental design and active learning.", "Both are active, successful research areas, and both can be provide tools to formally reason about the value of data/labels not yet seen;", "the paper summarily ignores this literature.", "Examples of imprecise/informal statements: \"The fairness in the pricing is highly questionable\"", "\"implicit contracts get difficult to verify\"", "\"The fairness in the pricing is dubious\"", "\"As machine learning models become more and more complicated, its (sic) capability can outweigh the privacy guarantees encryption gives us\"", "\"as an image classifier's model architecture changes, all the data would need to be collected and purchased again\"", "\"Interpretability solutions aim to alleviate the notoriety of reasonability of neural networks\""], "labels": ["fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", 
"evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "quote", "quote", "quote", "quote", "quote", "quote"]}
{"doc_id": "BkEYMCPlG", "text": ["The authors present RDA, the Recurrent Discounted Attention unit, that improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. ", "While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), ", "the RDA brings it more on-par with the standard methods.", "On the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original. ", "On the negative side, in almost all tasks the RDA is on par or worse than the standard GRU - ", "except for MultiCopy where it trains faster, but not to better results ", "and it looks like the difference is between few and very-few training steps anyway. ", "The most interesting result is language modeling on Hutter Prize Wikipedia, ", "where RDA very significantly improves upon RWA - ", "but again, only matches a standard GRU or LSTM. ", "So the results are not strongly convincing, ", "and the paper lacks any mention of newer work on attention. ", "This year strong improvements over state-of-the-art have been achieved using attention for translation (\"Attention is All You Need\") and image classification (e.g., Non-local Neural Networks, but also others in ImageNet competition). ", "To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "request"]}
{"doc_id": "rkZAtAaxM", "text": ["This manuscript is fairly well-written, ", "and discusses how the batch normalization step helps to stabilize the scale of the gradients. ", "Intriguingly, the analysis suggests that using a shallower but wider resnet should provide competitive performance, which is supported by empirical evidence. ", "This work should help elucidate the structure in the learning, and help to support efforts to improve both learning algorithms and the architecture.", "Pros: Clean, simple analysis", "Empirical support suggests that theory captures reasonable effects behind learning", "Cons: The reasonableness of the assumptions used in the analysis needs a more careful analysis. ", "In particular, the assumption that all weights are independent is valid only at the first random iteration. ", "Therefore, the utility of this theory during initialization seems reasonable, ", "but during learning the theory seems quite tenuous. ", "I would encourage the authors to discuss their assumptions, and talk about how the math would change as a result of relaxing the assumptions.", "The empirical support does provide evidence that the theory is reasonable. ", "However, it is limited to a single dataset. ", "It would be nice to see that the effect happens more generally. ", "Second, it is clear that shallow+wide networks may be better than deep+narrow networks, ", "but it's not clear about how the width is evaluated and supported. ", "I would encourage the authors to do more extensive experiments and evaluate the architecture further."], "labels": ["evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "BknXbsdxG", "text": ["In this paper, an number of very strong (even extraordinary) claims are made:", "* The abstract promises \"a framework to understand the unprecedented performance and robustness of deep neural networks using field theory.\"", "* Page 8 states that this is \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"", "* Page 2 promises the use of the \"Goldstone theorem\" (no less) to understand phase transition in deep learning", "* It also claim that many \"seemingly different experimental results can be explained by the presence of these zero eigenvalue weights.\"", "* Three important results are stated as \"theorem\", with a statement like \"Deep feedforward networks learn by breaking symmetries\" proven in 5 lines, with no formal mathematics.", "These are extraordinary claims,", "but when reaching page 5, one sees that the basis of these claims seems to be the Lagrangian of a simple phi-4 theory,", "and Fig. 1 shows the standard behaviour of the so-called mexican hat in physics, the basis of the second-order transition.", "Given physicists have been working on neural network for more than three or four decades,", "I am surprise that this would enough to solve all these problems!", "I tried to understand these many results,", "but I am afraid I cannot really understand or see them.", "In many case, the explanation seems to be a vague analogy.", "These are not without interest,", "and maybe there is indeed something deep in this paper, but it is so far hidden by the hype.", "Still, I fail to see how the fact that phase transitions and negative direction in the landscape is a new phenomena, and how it explains all the stated phenomenology.", "Beside, there are quite a lot of things known about the landscape of these problems", "Maybe I am indeed missing something,", "but i clearly suspect the authors are simply overselling physics results.", "I have been wrong many times,", "but I beleive that the authors should probably precise their claim, and clarify the relation between their results and both the physics AND statistics litterature, or better, with the theoretical physics litterature applied to learning, which is ---astonishing-- absent in the paper.", "About the content: The main problem for me is that the whole construction using field theory seems to be used to advocate for the appearence of a phase transition in neural nets and in learning.", "This rises three comments: (1) So we really need to use quantum field theory for this?", "I do not see what should be quantum here", "(despite the very vague remarks page 12 \"WHY QUANTUM FIELD THEORY?\")", "(2) This is not new.", "Phase transitions in learning in neural nets are being discussed since aboutn 40 years, see for instance all the pionnering work of Sompolinky et al.", "one can see for instance the nice review in https://arxiv.org/abs/1710.09553", "In non aprticular order, phase transition and symmetry breaking are discussed in * \"Statistical mechanics of learning from examples\", Phys. Rev. A 45, 6056 \u2013 Published 1 April 1992", "* \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 \u2013 Published 1 April 1993", "* Phase transitions in the generalization behaviour of multilayer neural networks", "http://iopscience.iop.org/article/10.1088/0305-4470/28/16/010/meta", "* Note that some of these results are now rigourous,", "as shown in \"Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models\", https://arxiv.org/abs/1708.03395", "* The landscape of these problems has been studied quite extensivly,", "see for instance \"Identifying and attacking the saddle point problem in high-dimensional non-convex optimization\", https://arxiv.org/abs/1406.2572", "(3) There is nothing particular about deep neural net and neural nets about this.", "Negative direction in the Hessian in learning problems appears in matrix and tensor factorizaion, where phase transition are well understood (even rigorously, see for instance, https://arxiv.org/abs/1711.05424 ) or in problems such as unsupervised learning, as e.g.:", "https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.2174", "https://journals.aps.org/pre/pdf/10.1103/PhysRevE.50.1766", "Here are additional comments: PAGE 1: * \"It has been discovered that the training process ceases when it goes through an information bottleneck (ShwartzZiv & Tishby, 2017)\".", "While this paper indeed make a nice suggestion, I would not call it a discovery yet as this has never been shown on a large network.", "Beside, another paper in the conference is claiming exacly the opposite,", "see : \"On the Information Bottleneck Theory of Deep Learning\".", "This is still subject of discussion.", "* \"In statistical terms, a quantum theory describes errors from the mean of random variables. \"", "Last time I studied quantum theory, it was a theory that aim to explain the physical behaviours at the molecular, atomic and sub-atomic levels, usinge either on the wave function (Schrodinger) or the Matrix operatir formalism (Hesienbger) (or if you want, the path integral formalism of Feynman).", "It is certainly NOT a theory that describes errors from the mean of random variables.", "This is, i beleive, the field of \"statistics\" or \"probability\" for correlated variables.", "It is certianly used in physics, and heavily both in statistical physics and in quantum thoery,", "but this is not what the theory is about in the first place.", "Beside, there is little quantum in this paper,", "I think most of what the authors say apply to a statistical field theory", "( https://en.wikipedia.org/wiki/Statistical_field_theory )", "* \"In the limit of a continuous sample space, the quantum theory becomes a quantum field theory.\"", "Again, what is quantum about all this?", "This true for a field theory, as well for continous theories of, say, mechanics, fracture, etc...", "PAGE 2: * \"Using a scalar field theory we show that a phase transition must exist towards the end of training based on empirical results.\"", "So it is a scalar classical field theory after all.", "This sounds a little bit less impressive that a quantum field theory.", "Note that the fact that phase transition arises in learning, and in a statistical theory applied to any learning process, is an old topic, with a classical litterature.", "The authors might be interested by the review \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 \u2013 Published 1 April 1993", "PAGE 8: * \"In this work we solved one of the most puzzling mysteries of deep learning by showing that deep neural networks undergo spontaneous symmetry breaking.\"", "I am afraid I fail to see what is so mysterious about this nor what the authors showed about it.", "In any case, gradient descent break symmetry spontaneously in many systems, including phi-4, the Ising model or (in learning problems) the community detection problem", "(see eg https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047).", "I am afraid I miss what is new there...", "* \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"", "Given there seems to be little quantum in the paper,", "I fail to see the relevance of the statement.", "Secondly, I beleive that field theory has been used, many times and in greater lenght, both for statistical and dynamical problems in neural nets, see eg.", "* http://iopscience.iop.org/article/10.1088/0305-4470/27/6/016/meta", "* https://arxiv.org/pdf/q-bio/0701042.pdf", "* http://www.lps.ens.fr/~derrida/PAPIERS/1987/gardner-zippelius-87.pdf", "* http://iopscience.iop.org/article/10.1088/0305-4470/21/1/030/meta", "* https://arxiv.org/pdf/cond-mat/9805073.pdf"], "labels": ["evaluation", "fact", "quote", "fact", "quote", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "evaluation", "non-arg", "fact", "fact", "reference", "reference", "reference", "reference", "reference", "fact", "reference", "fact", "reference", "evaluation", "fact", "reference", "reference", "quote", "evaluation", "fact", "reference", "evaluation", "quote", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "reference", "quote", "non-arg", "fact", "quote", "fact", "evaluation", "fact", "evaluation", "quote", "evaluation", "fact", "reference", "evaluation", "quote", "evaluation", "evaluation", "evaluation", "reference", "reference", "reference", "reference", "reference"]}
{"doc_id": "SksrEW9eG", "text": ["Summary:The paper proposes a new dialog model combining both retrieval-based and generation-based modules. ", "Answers are produced in three phases: a retrieval-based model extracts candidate answers; a generator model, conditioned on retrieved answers, produces an additional candidate; a reranker outputs the best among all candidates.", "The approach is interesting: ", "the proposed ensemble can improve on both the retrieval module and the generation module, ", "since it does not restrict modeling power (e.g. the generator is not forced to be consistent with the candidates). ", "I am not aware of similar approaches for this task. ", "One work that comes to mind regarding the blend of retrieval and generation is Memory Networks ", "(e.g. https://arxiv.org/pdf/1606.03126.pdf and references): ", "given a query, a set of relevant memories is extracted from a KB using an inverted index and the memories are fed into the generator. ", "However, the extracted items in the current work are candidate answers which are used both to feed the generator and to participate in reranking.", "The experimental section focuses on the task of building conversational systems. ", "The performance measures used are 1) a human evaluation score with three volunteers and 2) BLUE scores. ", "While these methods are not very satisfying, ", "effective evaluation of such systems is a known difficulty. ", "The results show that the ensemble outperforms the individual modules, indicating that: ", "the multi-seq2seq models have learned to use the new inputs as needed and that the ranker is correlated with the evaluation metrics.", "However, the results themselves do not look impressive to me: ", "the subjective evaluation is close to the \"borderline\" score; ", "in the examples provided, one is good, the other is borderline/bad, and the baseline always provides something very short. ", "Does the LSTM work particularly poor on this dataset? ", "Given that this is a novel dataset, I don't know what the state-of-the-art should be. ", "Could you provide more insight? ", "Have you considered adding a benchmark dataset (e.g. a QA dataset)?", "Specific questions:1. The paper motivates conditioning on the candidates in two ways. ", "First, that the candidates bring additional information which the decoder can use (e.g. read from the candidates locations, actions, etc.). ", "Second, that the probability of universal replies must decrease due to the additional condition. ", "I think the second argument depends on how the conditioning is performed: ", "if the candidates are simply appended to the input, the model can learn to ignore them.", "2. The copy mechanism is a nice touch, encouraging the decoder to use the provided queries. ", "Why not copy from the query too, e.g. with some answers reusing part of the query <\"Where are you going?\", \"I'm going to the park\">?", "3. How often does the model select the generated answer vs. the extracted answers? 
", "In both examples provided the selected answer is the one merging the candidate answers.", "Minor issues:- Section 3.2: using and the state", "- Section 3.2: more than one replies", "- last sentence on page 3: what are the \"following principles\"?"], "labels": ["fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "reference", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "request", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "request", "fact", "fact", "fact", "request"]}
{"doc_id": "BytyNwclz", "text": ["This paper presents an analysis of the communication systems that arose when neural network based agents played simple referential games. ", "The set up is that a speaker and a listener engage in a game where both can see a set of possible referents (either represented symbolically in terms of features, or represented as simple images) and the speaker produces a message consisting of a sequence of numbers while the listener has to make the choice of which referent the speaker intends. ", "This is a set up that has been used in a large amount of previous work, ", "and the authors summarize some of this work. ", "The main novelty in this paper is the choice of models to be used by speaker and listener, ", "which are based on LSTMs and convolutional neural networks. ", "The results show that the agents generate effective communication systems, ", "and some analysis is given of the extent to which these communications systems develop compositional properties ", "\u2013 a question that is currently being explored in the literature on language creation.", "This is an interesting question, ", "and it is nice to see worker playing modern neural network models to his question and exploring the properties of the solutions of the phone. ", "However, there are also a number of issues with the work.", "1. One of the key question is the extent to which the constructed communication systems demonstrate compositionality. ", "The authors note that there is not a good quantitative measure of this. ", "However, this is been the topic of much research of the literature and language evolution. ", "This work has resulted in some measures that could be applied here, ", "see for example Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf", "2. In general the results occurred be more quantitative. ", "In section 3.3.2 it would be nice to see statistical tests used to evaluate the claims. ", "Minimally I think it is necessary to calculate a null distribution for the statistics that are reported.", "3. As noted above the main novelty of this work is the use of contemporary network models. ", "One of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. ", "However, unfortunately the image example that is used is still very artificial being based on a small set of synthetically generated images.", "Overall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, ", "but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "reference", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "Hyl2iJgGG", "text": ["This paper examines the very popular and useful ADAM optimization algorithm, and locates a mistake in its proof of convergence (for convex problems).", "Not only that, the authors also show a specific toy convex problem on which ADAM fails to converge.", "Once the problem was identified to be the decrease in v_t (and increase in learning rate), they modified the algorithm to solve that problem.", "They then show the modified algorithm does indeed converge and show some experimental results comparing it to ADAM.", "The paper is well written, interesting and very important given the popularity of ADAM.", "Remarks: - The fact that your algorithm cannot increase the learning rate seems like a possible problem in practice.", "A large gradient at the first steps due to bad initialization can slow the rest of training.", "The experimental part is limited,", "as you state \"preliminary\",", "which is a unfortunate for a work with possibly an important practical implication.", "Considering how easy it is to run experiments with standard networks using open-source software,", "this can easily improve the paper.", "That being said, I understand that the focus of this work is theoretical and well deserves to be accepted based on the theoretical work.", "- On page 14 the fourth inequality not is clear to me.", "- On page 6 you talk about an alternative algorithm using smoothed gradients which you do not mention anywhere else", "and this isn't that clear (more then one way to smooth).", "A simple pseudo-code in the appendix would be welcome.", "Minor remarks:- After the proof of theorem 1 you jump to the proof of theorem 6", "(which isn't in the paper)", "and then continue with theorem 2.", "It is a bit confusing.", "- Page 16 at the bottom v_t= ... sum beta^{t-1-i}g_i should be g_i^2", "- Page 19 second line, you switch between j&t and it is confusing.", "Better notation would help.", "- The cifarnet uses LRN layer that isn't used anymore."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact"]}
{"doc_id": "rynqOnBez", "text": ["My problem with this paper that all the theoretical contributions / the new approach refer to 2 arXiv papers, ", "what's then left is an application of that approach to learning form imperfect demonstrations.", "Quality ====== The approach seems sound ", "but the paper does not provide many details on the underlying approach. ", "The application to learning from (partially adversarial) demonstrations is a cool idea ", "but effectively is a very straightforward application based on the insight that the approach can handle truly off-policy samples. ", "The experiments are OK ", "but I would have liked a more thorough analysis.", "Clarity ===== The paper reads well, ", "but it is not really clear what the claimed contribution is.", "Originality ========= The application seems original.", "Significance ========== Having an RL approach that can benefit from truly off-policy samples is highly relevant.", "Pros and Cons ============ + good results", "+ interesting idea of using the algorithm for RLfD", "- weak experiments for an application paper", "- not clear what's new"], "labels": ["evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BkE3cW5gG", "text": ["Summary: This paper presents a thorough examination of the effects of pruning on model performance. ", "Importantly, they compare the performance of \"large-sparse\" models (large models that underwent pruning in order to reduce memory footprint of model) and \"small-dense\" models, showing that \"large-sparse\" models typically perform better than the \"small-dense\" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). ", "They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). ", "They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. ", "They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), ", "although there does come a point where too much sparsity degrades the model performance considerably; ", "this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity.", "Review: Quality: The quality of the work is high ", "--- the experiments are extensive and thorough. ", "I would have liked to see \"small-dense\" vs. \"large-sparse\" comparisons on Inception (only large-sparse results are reported).", "Clarity: The paper is clearly written, ", "though there is room for improvement. ", "For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). ", "Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to. ", "A small section (before \"Methods\") describing the datasets/features in detail would be helpful. ", "Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix).", "Originality: Although the experiments are informative, ", "the work as a whole is not very original. ", "The method proposed of using a sparsity schedule to perform pruning is simple and effective, ", "but is a rather incremental contribution. ", "The primary contribution of this paper is its experiments, which for the most part compare known methods.", "Significance: The paper makes a nice contribution, ", "though it is not particularly significant or surprising. 
", "The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint.", "(2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, ", "which suggests that there is a \"sweet-spot\" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint).", "Result #1 is not very surprising, ", "given that Han et al (2016) were able to show significant compression without loss in accuracy; ", "thus, because one would expect a smaller dense model to perform worse than the large dense model, ", "it would also perform worse than the large sparse model.", "Result #2 had already been seen in Han et al (2016) (for example, in Figure 6).", "Pros: - Very thorough experiments across a number of domains", "Cons: - Methodological contributions are minor.", "- Results are not surprising, and are in line with previous papers."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "HypMNiy-G", "text": ["Training GAN in a hierarchical optimization schedule shows promising performance recently (e.g. Zhao et al., 2016). ", "However, these works utilize the prior knowledge of the data (e.g. image) ", "and it's hard to generalize it to other data types (e.g. text). ", "The paper aims to learn these hierarchies directly instead of designing by human. ", "However, several parts are missing and not well-explained. ", "Also, many claims in paper are not proved properly by theory results or empirical results. ", "(1) It is not clear to me how to train the proposed algorithm. ", "My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. ", "Do the authors use a separate training ? or a joint training algorithms. ", "The authors should provide a more clear and rigorous objective function. ", "It would be even better to have a pseudo code. ", "(2) In abstract, the authors claim the theoretical results are provided. ", "I am not sure whether it is sec 3.2 ", "The claims is not clear and limited. ", "For example, what's the theory statement of [Johnsone 200; Baik 2005]. ", "What is the error measure used in the paper? ", "For different error, the matrix concentration bound might be different. ", "Also, the union bound discussed in sec 3.2 is also problematic. ", "Lats, for using simple standard GAN to learn mixture of Gaussian, the rigorous theory result doesn't seem easy (e.g. [1]) ", "The author should strive for this results if they want to claim any theory guarantee.", "(3) The experiments part is not complete. ", "The experiment settings are not described clearly. ", "Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. ", "Also, the authors claims it is applicable to text data in Section 1, this part is missing in the experiment. ", "Also, the idea of \"local\" disentangled LV is not well justified to be useful.", "[1] On the limitations of first order approximation in GAN dynamics, ICLR 2018 under review"], "labels": ["evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "reference"]}
{"doc_id": "BkQD60b-f", "text": ["The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, ", "that is a nice and straightforward idea. ", "From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. ", "Unfortunately, the experiments are inconclusive. ", "Quality: The key question of the proposed scheme is the role of the auxiliary dataset. ", "In the EMNIST experiment, the results for the \u201cexact same\u201d and \u201cpartly same\u201d situations are good, ", "but it seems that for the \u201cmutually exclusive\u201d situation the generated samples look like letters, not numbers, ", "and raises the question on the interpolation ability of the generator. ", "In the FaceScrub experiment is even more difficult to interpret the results, ", "basically because we do not even know the full list of person identities. ", "It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. ", "Does this imply that the GAN models a biased probability distribution of the image class? ", "What is the result when the auxiliary dataset comes from a different kind of images? ", "Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work.", "Clarity: The paper is well structured and written, ", "but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments. ", "Some typos on Appendix A should be corrected.", "Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. ", "Significance: If additional simulations confirm the author\u2019s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "request", "evaluation", "request", "request", "evaluation", "evaluation"]}
{"doc_id": "rJ74wm5xM", "text": ["The paper describes a neural network-based approach to active localization based upon RGB images. ", "The framework employs Bayesian filtering to maintain an estimate of the agent's pose using a convolutional network model for the measurement (perception) function. ", "A convolutional network models the policy that governs the action of the agent. ", "The architecture is trained in an end-to-end manner via reinforcement learning. ", "The architecture is evaluated in 2D and 3D simulated environments of varying complexity and compared favorably to traditional (structured) approaches to passive and active localization.", "As the paper correctly points out, there is large body of work on map-based localization, ", "but relatively little attention has been paid to decision theoretic formulations to localization, whereby the agent's actions are chosen in order to improve localization accuracy. ", "More recent work instead focuses on the higher level objective of navigation, whereby any effort act in an effort to improve localization are secondary to the navigation objective. ", "The idea of incorporating learned representations with a structured Bayesian filtering approach is interesting, ", "but it's utility could be better motivated. ", "What are the practical benefits to learning the measurement and policy model beyond (i) the temptation to apply neural networks to this problem and (ii) the ability to learn these in an end-to-end fashion? ", "That's not to say that there aren't benefits, but rather that they aren't clearly demonstrated here. ", "Further, the paper seems to assume (as noted below) that there is no measurement uncertainty and, with the exception of the 3D evaluations, no process noise.", "The evaluation demonstrates that the proposed method yields estimates that are more accurate according to the proposed metric than the baseline methods, with a significant reduction in computational cost. ", "However, the environments considered are rather small by today's standards ", "and the baseline methods almost 20 years old. ", "Further, the evaluation makes a number of simplifying assumptions, the largest being that the measurements are not subject to noise ", "(the only noise that is present is in the motion for the 3D experiments). ", "This assumption is clearly not valid in practice. ", "Further, it is not clear from the evaluation whether the resulting distribution that is maintained is consistent (e.g., are the estimates over-/under-confident?). ", "This has important implications if the system were to actually be used on a physical system. ", "Further, while the computational requirements at test time are significantly lower than the baselines, ", "the time required for training is likely very large. ", "While this is less of an issue in simulation, it is important for physical deployments. ", "Ideally, the paper would demonstrate performance when transferring a policy trained in simulation to a physical environment (e.g., using diversification, which has proven effective at simulation-to-real transfer).", "Comments/Questions:* The nature of the observation space is not clear.", "* Recent related work has focused on learning neural policies for navigation, and any localization-specific actions are secondary to the objective of reaching the goal. 
", "It would be interesting to discuss how one would balance the advantages of choosing actions that improve localization with those in the context of a higher-level task (or at least including a cost on actions as with the baseline method of Fox et al.).", "* The evaluation that assigns different textures to each wall is unrealistic.", "* It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.", "* The 3D evaluation states that a 360 deg view is available. ", "What happens when the agent can only see in one (forward) direction?", "* AML includes a cost term in the objective. ", "Did the author(s) experiment with setting this cost to zero?", "* The 3D environments rely upon a particular belief size (70 x 70) being suitable for all environments. ", "What would happen if the test environment was larger than those encountered in training?", "* The comment that the PoseNet and VidLoc methods \"lack a strainghtforward method to utilize past map data to do localization in a new environment\" is unclear.", "* The environments that are considered are quite small compared to the domains currently considered for", "* Minor: It might be better to move Section 3 into Section 4 after introducing notation (to avoid redundancy).", "* The paper should be proofread for grammatical errors (e.g., \"bayesian\" --> \"Bayesian\", \"gaussian\" --> \"Gaussian\")"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "request", "fact", "non-arg", "fact", "request", "evaluation", "evaluation", "request", "request"]}
{"doc_id": "S1EAO5qxM", "text": ["In the centre loss, the centre is learned. ", "Now it's calculated as the average of the last layer's features", "To enable training with SGD, the authors calculate the centre within a mini batch"], "labels": ["fact", "fact", "fact"]}
{"doc_id": "SkDHZUXlG", "text": ["The authors train an RNN to perform deduced reckoning (ded reckoning) for spatial navigation, ", "and then study the responses of the model neurons in the RNN. ", "They find many properties reminiscent of neurons in the mammalian entorhinal cortex (EC): grid cells, border cells, etc. ", "When regularization of the network is not used during training, the trained RNNs no longer resemble the EC. ", "This suggests that those constraints (lower overall connectivity strengths, and lower metabolic costs) might play a role in the EC's navigation function. ", "The paper is overall quite interesting and the study is pretty thorough: ", "no major cons come to mind. ", "Some suggestions / criticisms are given below.", "1) The findings seem conceptually similar to the older sparse coding ideas from the visual cortex. ", "That connection might be worth discussing ", "because removing the regularizing (i.e., metabolic cost) constraint from your RNNS makes them learn representations that differ from the ones seen in EC. ", "The sparse coding models see something similar: ", "without sparsity constraints, the image representations do not resemble those seen in V1, ", "but with sparsity, the learned representations match V1 quite well. ", "That the same observation is made in such disparate brain areas (V1, EC) suggests that sparsity / efficiency might be quite universal constraints on the neural code.", "2) The finding that regularizing the RNN makes it more closely match the neural code is also foreshadowed somewhat by the 2015 Nature Neuro paper by Susillo et al. ", "That could be worthy of some (brief) discussion.", "Sussillo, D., Churchland, M. M., Kaufman, M. T., & Shenoy, K. V. (2015). A neural network that finds a naturalistic solution for the production of muscle activity. Nature neuroscience, 18(7), 1025-1033.", "3) Why the different initializations for the recurrent weights for the hexagonal vs other environments? ", "I'm guessing it's because the RNNs don't \"work\" in all environments with the same initialization (i.e., they either don't look like EC, or they don't obtain small errors in the navigation task). ", "That seems important to explain more thoroughly than is done in the current text.", "4) What happens with ongoing training? ", "Animals presumably continue to learn throughout their lives. ", "With on-going (continous) training, do the RNN neurons' spatial tuning remain stable, or do they continue to \"drift\" (so that border cells turn into grid cells turn into irregular cells, or some such)? ", "That result could make some predictions for experiment, ", "that would be testable with chronic methods (like Ca2+ imaging) that can record from the same neurons over multiple experimental sessions.", "5) It would be nice to more quantitatively map out the relation between speed tuning, direction tuning, and spatial tuning (illustrated in Fig. 3). ", "Specifically, I would quantify the cells' direction tuning using the circular variance methods that people use for studying retinal direction selective neurons. ", "And I would quantify speed tuning via something like the slope of the firing rate vs speed curves. ", "And quantify spatial tuning somehow (a natural method would be to use the sparsity measures sometimes applied to neural data to quantify how selective the spatial profile is to one or a few specific locations). ", "Then make scatter plots of these quantities against each other. 
", "Basically, I'd love to see the trends for how these types of tuning relate to each other over the whole populations: ", "those trends could then be tested against experimental data (possibly in a future study)."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "reference", "non-arg", "evaluation", "request", "non-arg", "evaluation", "non-arg", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "evaluation"]}
{"doc_id": "BynVEQJGM", "text": ["This paper considers the problem of autonomous lane changing for self-driving cars in multi-lane multi-agent slot car setting. ", "The authors propose a new learning strategy called Q-masking which couples well a defined low level controller with a high level tactical decision making policy.", "The authors rightly say that one of the skills an autonomous car must have is the ability to change lanes, ", "however this task is not one of the most difficult for autonomous vehicles to achieve and this ability has already been implemented in real vehicles. ", "Real vehicles also decouple wayfinding with local vehicle control, similar to the strategy employed here. ", "To make a stronger case for this research being relevant to the real autonomous driving problem, the authors would need to compare their algorithm to a real algorithm and prove that it is more \u201cdata efficient.\u201d ", "This is a difficult comparison ", "since the sensing strategies employed by real vehicles \u2013 LIDAR, computer vision, recorded, labeled real maps are vastly different from the slot car model proposed by the authors. ", "In term of impact, this is a theoretical paper looking at optimizing a sandbox problem where the results may be one day applicable to the real autonomous driving case.", "In this paper the authors investigate \u201cthe use and place\u201d of deep reinforcement learning in solving the autonomous lane change problem they propose a framework that uses Q-learning to learn \u201chigh level tactical decisions\u201d and introduce \u201cQ-masking\u201d a way of limiting the problem that the agent has to learn to force it to learn in a subspace of the Q-values.", "The authors claim that \u201cBy relying on a controller for low-level decisions we are also able to completely eliminate collisions during training or testing, which makes it a possibility to perform training directly on real systems.\u201d ", "I am not sure what is meant by this since in this paper the authors never test their algorithm on real systems ", "and in real systems it is not possible to completely eliminate collisions. ", "If it were, this would be a much sought breakthrough. ", "Additionally for their experiment authors use the SUMO top view driving simulator. ", "This choice makes their algorithm not currently relevant to most autonomous vehicles that use ego-centric sensing. ", "This paper presents a learning algorithm that can \u201coutperform a greedy baseline in terms of efficiency\u201d and \u201chumans driving the simulator in terms of safety and success\u201d within their top view driving game. ", "The game can be programmed to have an \u201cn\u201d lane highway, where n could reasonable go up to five to represent larger highways. ", "The authors limit the problem by specifying that all simulated cars must operate between a preset minimum and maximum and follow a target (random) speed within these limits. ", "Cars follow a fixed model of behavior, do not collide with each other and cannot switch lanes. ", "It is unclear if the simulator extends beyond a single straight section of highway, as shown in Figure 1. ", "The agent is tasked with driving the ego-car down the n-lane highway and stopping at \u201cthe exit\u201d in the right hand lane D km from the start position. ", "The authors use deep Q learning from Mnih et al 2015 to learn their optimal policy. 
", "They use a sparse reward function of +10 for reaching the goal and -10x(lane difference from desired lane) as a penalty for failure. ", "This simple reward function is possible because the authors do not require the ego car to obey speed limits or avoid collisions. ", "The authors limit what the car is able to do ", "\u2013 for example it is not allowed to take actions that would get it off the highway. ", "This makes the high level learning strategy more efficient ", "because it does not have to explore these possibilities (Q-masking). ", "The authors claim that this limitation of the simulation is made valid by the ability of the low level controller to incorporate prior knowledge and perfectly limit these actions. ", "In the real world, however, it is unlikely that any low level controller would be able to do this perfectly.", "In terms of evaluation, the authors do not compare their result against any other method. ", "Instead, using only one set of test parameters, the authors compare their algorithm to a \u201cgreedy baseline\u201d policy that is specified a \u201calways try to change lanes to the right until the lane is correct\u201d then it tries to go as fast as possible while obeying the speed limit and not colliding with any car in front. ", "It seems that baseline is additionally constrained vs the ego car due to the speed limit and the collision avoidance criteria and is not a fair comparison. ", "So given a fixed policy and these constraints it is not surprising that it underperforms the Q-masked Q-learning algorithm. ", "With respect to the comparison vs. human operators of the car simulation, the human operators were not experts. ", "They were only given \u201ca few trials\u201d to learn how to operate the controls before the test. ", "It was reported that the human participants \u201cdid not feel comfortable\u201d with the low level controller on, ", "possibly indicating that the user experience of controlling the car was less than ideal. ", "With the low level controller off, collisions became possible. ", "It is possibly not a fair claim to say that human drivers were \u201cless safe\u201d but rather that it was difficult to play the game or control the car with the safety module on. ", "This could be seen as a game design issue. ", "It was not clear from this presentation how the human participants were rewarded for their performance. ", "In more typical HCI experiments the gender distribution and ages ranges of participants are specified as well as how participants were recruited and how the game was motivated, including compensation (reward) are specified. ", "Overall, this paper presents an overly simplified game simulation with a weak experimental result."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation"]}
{"doc_id": "HJSdXVqxG", "text": ["This paper creates a layered representation in order to better learn segmentation from unlabeled images. ", "It is well motivated, ", "as Fig. 1 clearly shows the idea that if the segmentation was removed properly, the result would still be a natural image. ", "However, the method itself as described in the paper leaves many questions about whether they can achieve the proposed goal.", "I cannot see from the formulation why would this model work as it is advertised. ", "The formulation (3-4) looks like a standard GAN, with some twist about measuring the GAN loss in the z space (this has been used in e.g. PPGN and CVAE-GAN). ", "I don't see any term that would guarantee:1) Each layer is a natural image. ", "This was advertised in the paper, ", "but the loss function is only on the final product G_K. ", "The way it is written in the paper, the result of each layer does not need to go through a discriminator. ", "Nothing seems to have been done to ensure that each layer outputs a natural image.", "2) None of the layers is degenerate. ", "There does not seem to be any constraint either regularizing the content in each layer, or preventing any layer to be non-degenerate.", "3) The mask being contiguous. ", "I don't see any term ensuring the mask being contiguous, ", "I imagine normally without such terms doing such kinds of optimization would lead to a lot of fragmented small areas being considered as the mask.", "The claim that this paper is for unsupervised semantic segmentation is overblown. ", "A major problem is that when conducting experiments, all the images seem to be taken from a single category, this implicitly uses the label information of the category. ", "In that regard, this cannot be viewed as an unsupervised algorithm.", "Even with that, the results definitely looked too good to be true. ", "I have a really difficult time believing why such a standard GAN optimization would not generate any of the aforementioned artifacts and would perform exactly as the authors advertised. ", "Even if it does work as advertised, the utilization of implicit labels would make it subject to comparisons with a lot of weakly-supervised learning papers with far better results than shown in this paper. ", "Hence I am pretty sure that this is not up to the standards of ICLR."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "B1oFM1FeG", "text": ["This paper presents, and analyzes, a method for learning word relationships based on co-occurrence. ", "In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text. ", "The paper shows the start of some interesting ideas, ", "but needs revisions and much more extensive experiments.", "On the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation. ", "The experiments in Table 1 can only be considered preliminary, however. ", "They only evaluate over a small number of relationships (three) ", "-- looking at 20 or so different relationships would greatly improve confidence in the conclusions.", "Beyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence). ", "An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, ", "whereas this paper presents only a handful of examples for most of its claims. ", "Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question.", "I appreciate the clarity of Assumption 1 and Proposition 1, ", "but ultimately this formalism is not used ", "and because Assumption 1 about which nouns are \"semantically related\" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong ", "-- the paper would be better off without it. ", "Also Assumption 1 does not actually claim what the text says it claims ", "(the text says words outside the window are *not* semantically related, but the assumption does not actually say this) ", "and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used. ", "I think the description of the algorithm could be retained without including Assumption 1.", "minor: References to numbered algorithms or assumptions should be capitalized in the text.", "what the introduction means about the \"dynamics\" of the vector equation is a little unclear", "A submission shouldn't have acknowledgments, and in particular with names that undermine anonymity", "MLE has a particular technical meaning that is not utilized here, ", "I would just refer to the most frequent words as \"most related nouns\" or similar", "In Table 1, are the \"same dataset\" results with w2v for the nouns-only corpus, or with all the other words?", "The argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data.", "will likely by observed -> will likely be observed", "lions:dolphins probably ends up that way because of \"sea lions\"", "Table 4 caption: frequencies -> currencies", "Table 2 -- claim is that improvements from k=10 to k=20 are 'nominal' but they look non-negligible to me", "I did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C."], "labels": ["fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "fact", "fact", "fact", "request", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request", "evaluation", "evaluation"]}
{"doc_id": "r1IWuK2lf", "text": ["The paper presents a method for navigating in an unknown and partially observed environment is presented.", "The proposed approach splits planning into two levels: 1) local planning based on the observed space and 2) a global planner which receives the local plan, observation features, and access to an addressable memory to decide on which action to select and what to write into memory.", "The contribution of this work is the use of value iteration networks (VINs) for local planning on a locally observed map that is fed into a learned global controller that references history and a differential neural computer (DNC), local policy, and observation features select an action and update the memory.", "The core concept of learned local planner providing additional cues for a global, memory-based planner is a clever idea", "and the thorough analysis clearly demonstrates the benefit of the approach.", "The proposed method is tested against three problems: a gridworld, a graph search, and a robot environment.", "In each case the proposed method is more performant than the baseline methods.", "The ablation study of using LSTM instead of the DNC and the direct comparison of CNN + LSTM support the authors\u2019 hypothesis about the benefits of the two components of their method.", "While the author\u2019s compare to DRL methods with limited horizon (length 4), there is no comparison to memory-based RL techniques.", "Furthermore, a comparison of related memory-based visual navigation techniques on domains for which they are applicable should be considered", "as such an analysis would illuminate the relative performance over the overlapping portions problem domains", "For example, analysis of the metric map approaches on the grid world or of MACN on their tested environments.", "Prior work in visual navigation in partially observed and unknown environments have used addressable memory (e.g., Oh et al.) and used VINs (e.g., Gupta et al.) 
to plan as noted.", "In discussing these methods, the authors state that these works are not comparable as they operate strictly on discretized 2d spaces.", "However, it appears to the reviewer that several of these methods can be adapted to higher dimensions and be applicable at least a subclass (for the euclidean/metric map approaches) or the full class of the problems (for Oh et al.),", "which appears to be capable to solve non-euclidean tasks like the graph search problem.", "If this assessment is correct, the authors should differentiate between these approaches more thoroughly and consider empirical comparisons.", "The authors should further consider contrasting their approach with \u201cNeural SLAM\u201d by Zhang et al.", "A limitation of the presented method is requirement that the observation \u201creveals the labeling of nearby states.\u201d", "This assumption holds in each of the examples presented: the neighborhood map in the gridworld and graph examples and the lidar sensor in the robot navigation example.", "It would be informative for the authors to highlight this limitation and/or identify how to adapt the proposed method under weaker assumptions such as a sensor that doesn\u2019t provide direct metric or connectivity information such as a RGB camera.", "Many details of the paper are missing and should be included to clarify the approach and ensure reproducible results.", "The reviewer suggests providing both more details in the main section of the paper and providing the precise architecture including hyperparameters in the supplementary materials section."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "request", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "fact", "request", "request", "request"]}
{"doc_id": "BkTXGMKlf", "text": ["This paper proposes a family of first-order stochastic optimization schemes", "based on (1) normalizing (batches of) stochastic gradient descents and (2) choosing from a step size updating scheme. ", "The authors argue that iterative first-order optimization algorithms can be interpreted as a choice of an update direction and a step size, ", "so they suggest that one should always normalize the gradient when computing the direction and then choose a step size using the normalized gradient. ", "\\n\\nThe presentation in the paper is clear, ", "and the exposition is easy to follow.", "The authors also do a good job of presenting related work and putting their ideas in the proper context. ", "The authors also test their proposed method on many datasets,", "which is appreciated.\\n\\n", "However, I didn't find the main idea of the paper to be particularly compelling. ", "The proposed technique is reasonable on its own, ", "but the empirical results do not come with any measure of statistical significance. ", "The authors also do not analyze the sensitivity of the different optimization algorithms to hyperparameter choice, opting to only use the default. ", "Moreover, some algorithms were used as benchmarks on some datasets but not others. ", "For a primarily empirical paper, every state-of-the-art algorithm should be used as a point of comparison on every dataset considered. ", "These factors altogether render the experiments uninformative in comparing the proposed suite of algorithms to state-of-the-art methods. ", "The theoretical result in the convex setting is also not data-dependent, despite the fact that it is the normalized gradient version of AdaGrad, which does come with a data-dependent convergence guarantee.\\n\\n", "Given the suite of optimization algorithms in the literature and in use today, any new optimization framework should either demonstrate improved (or at least matching) guarantees in some common (e.g. convex) settings or definitively outperform state-of-the-art methods on problems that are of widespread interest. ", "Unfortunately, this paper does neither. ", "\\n\\nBecause of these points, I do not feel the quality, originality, and significance of the work to be high enough to merit acceptance. ", "\\n\\nSome specific comments: \\np. 2: \\\"adaptive feature-dependent step size has attracted lots of attention\\\". ", "When you apply feature-dependent step sizes, you are effectively changing the direction of the gradient, ", "so your meta learning formulation, as posed, doesn't make as much sense.", "\\np. 2: \"we hope the resulting methods can benefit from both techniques\\\". ", "What reason do you have to hope for this? ", "Why should they be complimentary? ", "Existing optimization techniques are based on careful design and coupling of gradients or surrogate gradients, with specific learning rate schedules. ", "Arbitrarily mixing the two doesn't seem to be theoretically well-motivated.", "\\np. 2: \\\"numerical results shows that normalized gradient always helps to improve the performance of the original methods when the network structure is deep\\\". ", "It would be great to provide some intuition for this. ", "\\np. 2: \\\"we also provide a convergence proof under this framework when the problem is convex and the stepsize is adaptive\\\". ", "The result that you prove guarantees a \\\\theta(\\\\sqrt{T}) convergence rate. 
", "On the other hand, the AdaGrad algorithm guarantees a data-dependent bound that is O(\\\\sqrt{T}) ", "but can also be much smaller. ", "This suggests that there is no theoretical motivation to use NGD with an adaptive step size over AdaGrad.", "\\np. 2-3: \\\"NGD can find a \\\\eps-optimal solution....when the objective function is quasi-convex. ....extended NGD for upper semi-continuous quasiconvex objective functions...\\\". ", "This seems like a typo. ", "How are results that go from quasi-convex to upper semi-continuous quasi-convex an extension?", "\\np. 3: There should be a reference for RMSProp.", "\\np. 3: \\\"where each block of parameters x^i can be viewed as parameters associated to the ith layer in the network\\\". ", "Why is layer parametrization (and later on normalization) a good way idea? ", "There should be either a reference or an explanation.", "\\np. 4: \\\"x=(x_1, x_2, \\\\ldots, x_B)\\\". ", "Should these subscripts be superscripts?", "\\np. 4: \\\"For all the algorithms, we use their default settings.\\\" ", "This seems insufficient for an empirical paper, ", "since most problems often involve some amount of hyperparameter tuning. ", "How sensitive is each method to the choice of hyperparameters? ", "What about the impact of initialization?", "\\np. 4-8: None of the experimental results have error bars or any measure of statistical significance.", "\\np. 5: \\\"NG... is a variant of the NG_{UNIT} method\\\". ", "This method is never motivated.", "\\np. 5-6: Why are SGD and Adam used for MNIST but not on CIFAR? ", "\\np. 5: \\\"we chose the best heyper-paerameter from the 56 layer residual network.\\\" ", "Apart from the typos, are these parameters chosen from the training set or the test set? ", "\\np. 6: Why isn't Adam tested on ImageNet?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "fact", "request", "fact", "evaluation", "quote", "fact", "evaluation", "quote", "non-arg", "non-arg", "fact", "evaluation", "quote", "request", "quote", "fact", "fact", "fact", "evaluation", "quote", "evaluation", "evaluation", "request", "quote", "non-arg", "request", "quote", "non-arg", "quote", "evaluation", "fact", "non-arg", "non-arg", "fact", "quote", "fact", "non-arg", "quote", "non-arg", "non-arg"]}
{"doc_id": "Bk6nbuf-M", "text": ["The authors use deep learning to learn a surrogate model for the motion vector in the advection-diffusion equation that they use to forecast sea surface temperature.", "In particular, they use a CNN encoder-decoder to learn a motion field, and a warping function from the last component to provide forecasting.", "I like the idea of using deep learning for physical equations.", "I would like to see a description of the algorithm with the pseudo-code in order to understand the flow of the method.", "I got confused at several points", "because it was not clear what was exactly being estimated with the CNN.", "Having an algorithmic environment would make the description easier.", "I know that authors are going to publish the code,", "but this is not enough at this point of the revision.", "Physical processes in Machine learning have been studied from the perspective of Gaussian processes.", "Just to mention a couple of references \u201cLinear latent force models using Gaussian processes\u201d and \"Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations\"", "In Theorem 2, do you need to care about boundary conditions for your equation?", "I didn\u2019t see any mention to those in the definition for I(x,t).", "You only mention initial conditions.", "How do you estimate the diffusion parameter D?", "Are you assuming isotropic diffusion?", "Is that realistic?", "Can you provide more details about how you run the data assimilation model in the experiments?", "Did you use your own code?"], "labels": ["fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "reference", "request", "fact", "fact", "request", "request", "evaluation", "request", "non-arg"]}
{"doc_id": "H1g6bb9gG", "text": ["The approach solves an important problem ", "as getting labelled data is hard. ", "The focus is on the key aspect, which is generalisation across heteregeneous data. ", "The novel idea is the dataset embedding ", "so that their RL policy can be trained to work across diverse datasets.", "Pros: 1. The approach performs well against all the baselines, and also achieves good cross-task generalisation in the tasks they evaluated on. ", "2. In particular, they alsoevaluated on test datasets with fairly different statistics from the training datasets, which isnt very common in most meta-learning papers today, ", "so it\u2019s encouraging that the method works in that regime.", "Cons: 1. The embedding strategy, especially the representative and discriminative histograms, is complicated. ", "It is unclear if the strategy is general enough to work on harder problems / larger datasets, or with higher dimensional data like images. ", "More evidence in the paper for why it would work on harder problems would be great. ", "2. The policy network would have to output a probability for each datapoint in the dataset U, ", "which could be fairly large, ", "thus the method is computationally much more expensive than random sampling. ", "A section devoted to showing what practical problems could be potentially solved by this method would be useful.", "3. It is unclear to me if the results in table 3 and 4 are achieved by retraining from scratch with an RBF SVM, or by freezing the policy network trained on a linear SVM and directly evaluating it with a RBF SVM base learner.", "Significance/Conclusion: The idea of meta-learning or learning to learn is fairly common now. ", "While they do show good performance, ", "it\u2019s unclear if the specific embedding strategy suggested in this paper will generalise to harder tasks. ", "Comments: There\u2019s lots of typos, ", "please proof read to improve the paper."], "labels": ["evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "rJMoToYlz", "text": ["The authors present a derivation of previous work of [1].", "In particular they propose the method of using the error signal of a dynamics model as curiosity for exploration, such as [1], but without any additionaly auxiliary methods.", "This the author call Curiosity by Bootstrapping Feature (CBF).\\n", "\\nIn particular they show over a set of auxiliary learning methods (hindsight ER, inverse dynamics model[1]) there is\\nnot a clear cut edge one method has over the other (or over using no auxilirary method all, that is CBF).\\n\\n", "Overall I think the novelty is too limited for acceptance.", "The main point of the authors (heterogeneous results\\nover different auxilirary learning methods), is not suprising at all, and to be expected.", "The method the authors introduce\\nis just a submodule of already published results[1].\\n\\n", "For instance, section 4 discusses challenges related to these class of approaches such as the presence of stochasticity.", "Had the authors proposed a solution to these challenges that would have benefited the paper greatly.\\n\\n", "Minor: The light green link color make the paper hard on the eye,", "I suggest using [hidelinks] for hyperref.\\n", "Figure 2 is very small and hard to read.\\n\\n\\n", "[1] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by\\nself-supervised prediction. In ICML, 2017"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "request", "evaluation", "reference"]}
{"doc_id": "BJ9DfkxWM", "text": ["The paper is clear and well written.", "It is an incremental modification of prior work (ResNeXt) that performs better on several experiments selected by the author; ", "comparisons are only included relative to ResNeXt.", "This paper is not about gating (c.f., gates in LSTMs, mixture of experts, etc) but rather about masking or perhaps a kind of block sparsity, ", "as the \"gates\" of the paper do not depend upon the input: ", "they are just fixed masking matrices (see eq (2)).", "The main contribution appears to be the optimisation procedure for the binary masking tensor g. ", "But this procedure is not justified: ", "does each step minimise the loss? ", "This seems unlikely due to the sampling. ", "Can the authors show that the procedure will always converge? ", "It would be good to contrast this with other attempts to learn discrete random variables ", "(for example, The Concrete Distribution: Continuous Relaxation of Continuous Random Variables, Maddison et al, ICLR 2017)."], "labels": ["evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "request", "request", "reference"]}
{"doc_id": "BJiW7IkZM", "text": ["In this work, the objective is to analyze the robustness of a neural network to any sort of attack.", "This is measured by naturally linking the robustness of the network to the local Lipschitz properties of the network function. ", "This approach is quite standard in learning theory, ", "I am not aware of how original this point of view is within the deep learning community.", "This is estimated by obtaining values of the norm of the gradient (also naturally linked to the Lipschitz properties of the function) by backpropagation. ", "This is again a natural idea."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "Hy7Gjh9eM", "text": ["The authors proposed to supplement adversarial training with an additional regularization that forces the embeddings of clean and adversarial inputs to be similar.", "The authors demonstrate on MNIST and CIFAR that the added regularization leads to more robustness to various kinds of attacks.", "The authors further propose to enhance the network with cascaded adversarial training, that is, learning against iteratively generated adversarial inputs, and showed improved performance against harder attacks.", "The idea proposed is fairly straight-forward.", "Despite being a simple approach, the experimental results are quite promising.", "The analysis on the gradient correlation coefficient and label leaking phenomenon provide some interesting insights.", "As pointed out in section 4.2, increasing the regularization coefficient leads to degenerated embeddings.", "Have the authors consider distance metrics that are less sensitive to the magnitude of the embeddings, for example, normalizing the inputs before sending it to the bidirectional or pivot loss, or use cosine distance etc.?", "Table 4 and 5 seem to suggest that cascaded adversarial learning have more negative impact on test set with one-step attacks than clean test set,", "which is a bit counter-intuitive.", "Do the authors have any insight on this?", "Comments: 1. The writing of the paper could be improved.", "For example, \"Transferability analysis\" in section 1 is barely understandable;", "2. Arrow in Figure 3 are not quite readable;", "3. The paper is over 11 pages.", "The authors might want to consider shrink it down the recommended length."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "fact", "request"]}
{"doc_id": "ryjxrEwlM", "text": ["The authors propose a mechanism for learning task-specific region embeddings for use in text classification. ", "Specifically, this comprises a standard word embedding an accompanying local context embedding. ", "The key idea here is the introduction of a (h x c x v) tensor K, where h is the embedding dim (same as the word embedding size), c is a fixed window size around a target word, and v is the vocabulary size. ", "Each word in v is then associated with an (h x c) matrix that is meant to encode how it affects nearby words, ", "in particular this may be viewed as parameterizing a projection to be applied to surrounding word embeddings. ", "The authors propose two specific variants of this approach, which combine the K matrix and constituent word embeddings (in a given region) in different ways. ", "Region embeddings are then composed (summed) and fed through a standard model. ", "Strong points--- + The proposed approach is simple and largely intuitive: ", "essentially the context matrix allows word-specific contextualization. ", "Further, the work is clearly presented.", "+ At the very least the model does seem comparable in performance to various recent methods (as per Table 2), ", "however as noted below the gains are marginal ", "and I have some questions on the setup.", "+ The authors perform ablation experiments, ", "which are always nice to see. ", "Weak points--- - I have a critical question for clarification in the experiments. ", "The authors write 'Optimal hyperparameters are tuned with 10% of the training set on Yelp Review Full dataset, and identical hyperparameters are applied to all datasets' ", "-- is this true for *all* models, or only the proposed approach? ", "- The gains here appear to be consistent, ", "but they seem marginal. ", "The biggest gain achieved over all datasets is apparently .7, ", "and most of the time the model very narrowly performs better (.2-.4 range). ", "Moreoever, it is not clear if these results are averaged over multiple runs of SGD or not ", "(variation due to initialization and stochastic estimation can account for up to 1 point in variance ", "-- see \"A sensitivity analysis of (and practitioners guide to) CNNs...\" Zhang and Wallace, 2015.)", "- The related work section seems light. ", "For instance, there is no discussion at all of LSTMs and their application to text classificatio (e.g., Tang et al., EMNLP 2015) ", "-- although it is noted that the authors do compare against D-LSTM, or char-level CNNs for the same (see Zhang et al., NIPs 2015). ", "Other relevant work not discussed includes Iyyer et al. (ACL 2015). ", "In their respective ways, these papers address some of the same issues the authors consider here. ", "- The two approaches to inducing the final region embedding (word-context and then context-word in sections 3.2 and 3.3, respectively) feel a bit ad-hoc. ", "I would have appreciated more intuition behind these approaches. ", "Small comments---There is a typo in Figure 4 -- \"Howerver\" should be \"However\""], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "fact", "fact", "fact", "fact", "evaluation", "fact", "reference", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request"]}
{"doc_id": "H1Mbr8b4f", "text": ["General comment ============== Low-rank decomposing convolutional filters has been used to speedup convolutional networks at the cost of a drop in prediction performance.", "The authors a) extended existing decomposition techniques by an iterative method for decomposition and fine-tuning convolutional filter weights,", "and b) and algorithm to determine the rank of each convolutional filter.", "The authors show that their method enables a higher speedup and lower accuracy drop than existing methods when applied to VGG16.", "The proposed method is a useful extension of existing methods but needs to evaluated more rigorously.", "The manuscript is hard to read due to unclear descriptions and grammatical errors.", "Major comments ============= 1. The authors authors showed that their method enables a higher speedup and lower drop in accuracy than existing methods when applied to VGG16.", "The authors should analyze if this also holds true for ResNet and Inception, which are more widely used than VGG16.", "2. The authors measured the actual speedup on a single CPU (Intel Core i5).", "The authors should measure the actual speedup also on a single GPU.", "3. It is unclear how the actual speedup was measured.", "Does it correspond to the seconds per update step or the overall training time?", "In the latter case, how long were models trained?", "4. How and which hyper-parameters were optimized?", "The authors should use the same hyper-parameters for all methods (Jaderberg, Zhang, Rank selection).", "The authors should also analyze the sensitivity of speedup and accuracy drop depending on the learning rate for \u2018Rank selection\u2019.", "5. Figure 4: the authors should show the same plot for more convolutional layers at varying depth from both VGG and ResNet.", "6. The manuscript is hard to understand and not written clearly enough.", "In the abstract, what does \u2018two-pass decomposition\u2019, \u2018proper ranks\u2019, \u2018the instability problem\u2019, or \u2018systematic\u2019 mean?", "What are \u2018edge devices\u2019, \u2018vanilla parameters\u2019?", "The authors should also avoid uninformative adjectives, clutter, and vague terms throughout the manuscript such as \u2018vital importance\u2019 or \u2018little room for fine-tuning\u2019.", "Minor comments ============= 1. The authors should use \u2018significantly\u2019 only if a statistical hypothesis was performed.", "2. The manuscript contains several typos and grammatical flaws,", "e.g. \u2018have been widely applied to have the breakthrough\u2019, \u2018The CP decomposition factorizes the tensors into a sum of series rank-one tensors.\u2019, \u2018Our two-pass decomposition provides the better result as compared with the original CP decomposition\u2019.", "3. For clarity, the authors should express equation 5 in terms of Y_1, Y_2, Y_3, and Y_4.", "4. Equation 2, bottom: C_in, W_f, H_f, and C_out are undefined at this point."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "request", "request", "request", "request", "request", "request", "evaluation", "non-arg", "non-arg", "request", "request", "evaluation", "quote", "request", "fact"]}
{"doc_id": "Hyx7bEPez", "text": ["In this paper, the authors studied the problem of semi-supervised few-shot classification, by extending the prototypical networks into the setting of semi-supervised learning with examples from distractor classes. ", "The studied problem is interesting, ", "and the paper is well-written. ", "Extensive experiments are performed to demonstrate the effectiveness of the proposed methods. ", "While the proposed method is a natural extension of the existing works (i.e., soft k-means and meta-learning).", "On top of that, It seems the authors have over-claimed their model capability at the first place ", "as the proposed model cannot properly classify the distractor examples but just only consider them as a single class of outliers. ", "Overall, I would like to vote for a weakly acceptance regarding this paper."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "Byw1O6Fgz", "text": ["The paper is interesting, ", "but needs more work, ", "and should provide clear and fair comparisons. ", "Per se, the model is incrementally new, ", "but it is not clear what the strengths are, ", "and the presentations needs to be done more carefully.", "In detail: - please fix several typos throughout the manuscript, and have a native speaker (and preferably an ASR expert) proofread the paper", "Introduction - please define HMM/GMM model (and other abbreviations that will be introduced later), ", "it cannot be assumed that the reader is familiar with all of them (\"ASG\" is used before it is defined, ...)", "- The standard units that most ASR systems use can be called \"senones\", ", "and they are context dependent sub-phonetic units (see http://ssli.ee.washington.edu/~mhwang/), not phonetic states. ", "Also the units that generate the alignment and the units that are trained on an alignment can be different ", "(I can use a system with 10000 states to write alignments for a system with 3000 states) ", "- this needs to be corrected.", "- When introducing CNNs, please also cite Waibel and TDNNs ", "- they are *the same* as 1-d CNNs, and predate them. ", "They have been extended to 2-d later on (Spatio-temporal TDNNs)", "- The most influential deep learning paper here might be Seide, Li, Yu Interspeech 2011 on CD-DNN-HMMs, rather than overview articles", "- Many papers get rid of the HMM pipeline, ", "I would add https://arxiv.org/abs/1408.2873, which predates Deep Speech", "- What is a \"sequence-level variant of CTC\"? ", "CTC is a sequence training criterion", "- The reason that Deep Speech 2 is better on noisy test sets is not only the fact they trained on more data, but they also trained on \"noisy\" (matched) data", "- how is this an end-to-end approach if you are using an n-gram language model for decoding? ", "Architecture - MFSC are log Filterbanks ...", "- 1D CNNs would be TDNNs", "- Figure 2: can you plot the various transition types (normalized, un-normalized, ...) in the plots? ", "not sure if it would help, but it might", "- Maybe provide a reference for HMM/GMM and EM (forward backward training)", "- MMI was also widely used in HMM/GMM systems, not just NN systems", "- the \"blank\" states do *not* model \"garbage\" frames, ", "if one wants to interpret them, they might be said to model \"non-stationary\" frames between CTC \"peaks\", ", "but these are different from silence, garbage, noise, ...", "- what is the relationship of the presented ASG criterion to MMI? ", "the form of equation (3) looks like an MMI criterion to me?", "Experiments - Many of the previous comments still hold, ", "please proofread", "- you say there is no \"complexity\" incrase when using \"logadd\" ", "- how do you measure this? ", "number of operations? 
", "is there an implementation of \"logadd\" that is (absolutely) as fast as \"add\"?", "- There is discussion as to what i-vectors model (speaker or environment information) ", "- I would leave out this discussion entirely here, ", "it is enough to mention that other systems use adaptation, and maybe re-run an unadapted baselien for comparsion", "- There are techniques for incremental adaptation and a constrained MLLR (feature adaptation) approaches that are very eficient, if one wnats to get into this", "- it may also be interesting to discuss the role of the language model to see which factors influence system performance", "- some of the other papers might use data augmentation, which would increase noise robustness ", "(did not check, but this might explain some of the results in table 4)", "- I am confused by the references in the caption of Table 3 ", "- surely the Waibel reference is meant to be for TDNNs ", "(and should appear earlier in the paper), ", "while p-norm came later ", "(Povey used it first for ASR, I think) ", "and is related to Maxout", "- can you also compare the training times? ", "Conculsion - can you show how your approach is not so computationally expensive as RNN based approaches? ", "either in terms of FLOPS or measured times"], "labels": ["evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "fact", "fact", "fact", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "request", "non-arg", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "request", "evaluation", "fact", "fact", "fact", "request", "fact", "evaluation", "request", "fact", "request", "request", "request", "fact", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "fact", "evaluation", "request", "request", "request"]}
{"doc_id": "HJ_m58weG", "text": ["This paper proposes to use neural network and gradient descent to automatically design for engineering tasks.", "It uses two networks, parameterization network and prediction network to model the mapping from design parameters to fitness.", "It uses back propagation (gradient descent) to improve the design.", "The method is evaluated on heat sink design and airfoil design.", "This paper targets at a potentially very useful application of neural networks that can have real world impacts.", "However, I have three main concerns: 1) Presentation. The organization of the paper could be improved.", "It mixes the method, the heat sink example and the airfoil example throughout the entire paper.", "Sometimes I am very confused about what is being described.", "My suggestion would be to completely separate these three parts:", "present a general method first,", "then use heat sink as the first experiment and airfoil as the second experiment.", "This organization would make the writing much clearer.", "2) In the paragraph above Section 4.1, the paper made two arguments.", "I might be wrong, but I do not agree with either of them in general.", "First of all, \"neural networks are good at generalizing to examples outside their train set\".", "This depends entirely on whether the sample distribution of training and testing are similar and whether you have enough training examples that cover important sample space.", "This is especially critical if a deep neural network is used since overfitting is a real issue.", "Second, \"it is easy to imagine a hybrid system where a network is trained on a simulation and fine tuned ...\".", "Implementing such a hybrid system is nontrivial due to the reality gap.", "There is an entire research field about closing the reality gap and transfer learning.", "So I am not convinced by these two arguments made by this paper.", "They might be true for a narrow field of application.", "But in general, I think they are not quite correct.", "3) The key of this paper is to approximate the dynamics using neural network (which is a continuous mapping) and take advantage of its gradient computation.", "However, many of dynamic systems are inherently discontinuous (collision/contact dynamics) or chaotic (turbulent flow).", "In those scenarios, the proposed method might not work well and we may have to resort to the gradient free methods.", "It seems that the proposed method works well for heat sink problem and the steady flow around airfoil,", "both of which do not fall into the more complex physics regime.", "It would be great that the paper could be more explicit about its limitations.", "In summary, I like the idea, the application and the result of this paper.", "The writing could be improved.", "But more importantly, I think that the proposed method has its limitation about what kind of physical systems it can model.", "These limitation should be discussed more explicitly and more thoroughly."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "request", "fact", "evaluation", "request", "request", "request", "evaluation", "fact", "evaluation", "quote", "evaluation", "evaluation", "quote", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "request", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "SkOj779lM", "text": ["This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance.", "It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases.", "It is disappointing, because this paper is named as \"finding optimal representation spaces ...\".", "In addition, one of the contributions claimed in this paper is about introducing the \"formalism\" of an optimal representation space.", "However, I didn't see any formal definition of this concept or theoretical justification.", "About FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5", "--- I think this simple fact is missed in this paper.", "The experimental results are not convincing,", "because I didn't find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space.", "There are statements in this paper that I didn't agree with", "1) Distributional hypothesis from Harris (1954) is about words not sentences.", "2) Not sure the following line makes sense:", "\"However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar\""], "labels": ["fact", "request", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "quote"]}
{"doc_id": "rkCp66Tef", "text": ["The paper proposes a deep learning framework called DeePa that supports multiple dimensions of parallelism in computation to accelerate training of convolutional neural networks.", "Whereas the majority of work on parallel or distributed deep learning partitions training over bootstrap samples of training data (called image parallelism in the paper),", "DeePa is able to additionally partition the operations over image height, width and channel.", "This gives more options to parallelize different parts of the neural network.", "For example, the best DeePa configurations studied in the paper for AlexNet, VGG-16, and Inception-v3 typically use image parallelism for the initial layers, reduce GPU utilization for the deeper layers to reduce data transfer overhead, and use model parallelism on a smaller number of GPUs for fully connected layers.", "The net is that DeePa allows such configurations to be created that provide an increase in training throughput and lower data transfer in practice for training these networks.", "These configurations for parellism are not easily programmed in other frameworks like TensorFlow and PyTorch.", "The paper can potentially be improved in a few ways.", "One is to explore more demanding training workloads that require larger-scale distribution and parallelism.", "The ImageNet 22-K would be a good example and would really highlight the benefits of the DeePa in practice.", "Beyond that, more complex workloads like 3D CNNs for video modeling would also provide a strong motivation for having multiple dimensions of the data for partitioning operations."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "Bk_UdcKxf", "text": ["*Summary* The paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. ", "The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters to the weights of the model we wish to tune.", "The paper gives a theoretical justification of its approach, ", "and then describes several variants of its core algorithm which mix the training of the hyper-networks together with the optimization of the hyper-parameters themselves. ", "Finally, experiments based on MNIST illustrate the properties of the proposed approach.", "While the core idea may appear as appealing, ", "the paper suffers from several flaws (as further detailed afterwards):", "-Insufficient related work", "-Correctness/rigor of Theorem 2.1", "-Clarity of the paper (e.g., Sec. 2.4)", "-Experiments look somewhat artificial", "-How scalable is the proposed approach in the perspective of tuning models way larger/more complex than those treated in the experiments?", "*Detailed comments* -\"...and training the model to completion.\" and \"This is wasteful, since it trains the model from scratch each time...\" (and similar statement in Sec. 2.1): ", "Those statements are quite debatable. ", "There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). ", "The paper should reformulate its statements in the light of this literature.", "-\"Uncertainty could conceivably be incorporated into the hypernet...\". ", "This seems indeed an important point, ", "but it does not appear as clear how to proceed (e.g., uncertainty on w_phi(lambda) which later needs to propagated to L_val); ", "could the authors perhaps further elaborate?", "-I am concerned about the rigor/correctness of Theorem 2.1; ", "for instance, how is the continuity of the best-response exploited? ", "Also, throughout the paper, the argmin is defined as if it was a singleton ", "while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), ", "which is unlikely to be the case given the nature of the considered neural-net model). ", "In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; ", "how does it precisely translate into the paper's setting (convexity, which function g, etc.)? ", "-Specify in Alg. 1 that \"hyperopt\" refers to a generic hyper-parameter procedure.", "-More details should be provided to better understand Sec. 2.4. ", "At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.", "-The training procedure in Sec. 4.2 seems quite ad hoc; ", "how sensitive was the overall performance with respect to the optimization strategy? ", "For instance, in 4.2 and 4.3, different optimization parameters are chosen.", "-typo: \"weight decay is applied the...\" --> \"weight decay is applied to the...\"", "-\"a standard Bayesian optimization implementation from sklearn\": Could more details be provided? 
", "(there does not seem to be implementation there http://scikit-learn.org/stable/model_selection.html to the best of my knowledge)", "-The experimental set up looks a bit far-fetched and unrealistic: ", "first scalar, than diagonal and finally matrix-weighted regularization schemes. ", "While the first two may be used in practice, ", "the third scheme is not used in practice to the best of my knowledge.", "-typo: \"fit a hypernet same dataset.\" --> \"fit a hypernet on the same dataset.\"", "-(Franceschi2017) could be added to the related work section.", "*References* (Domhan2014) Domhan, T.; Springenberg, T. & Hutter, F. Extrapolating learning curves of deep neural networks ICML 2014 AutoML Workshop, 2014", "(Franceschi2017) Franceschi, L.; Donini, M.; Frasconi, P. & Pontil, M. Forward and Reverse Gradient-Based Hyperparameter Optimization preprint arXiv:1703.01785, 2017", "(Klein2017) Klein, A.; Falkner, S.; Springenberg, J. T. & Hutter, F. Learning curve prediction with Bayesian neural networks International Conference on Learning Representations (ICLR), 2017, 17", "(Li2016) Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A. & Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization preprint arXiv:1603.06560, 2016", "(Swersky2014) Swersky, K.; Snoek, J. & Adams, R. P. Freeze-Thaw Bayesian Optimization preprint arXiv:1406.3896, 2014"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "fact", "request", "quote", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "request", "request", "request", "evaluation", "evaluation", "request", "fact", "fact", "request", "fact", "evaluation", "fact", "fact", "fact", "request", "request", "reference", "reference", "reference", "reference", "reference"]}
{"doc_id": "r1cczyqef", "text": ["The authors propose a new network architecture for RL that contains some relevant inductive biases about planning.", "This fits into the recent line of work on implicit planning where forms of models are learned to be useful for a prediction/planning task.", "The proposed architecture performs something analogous to a full-width tree search using an abstract model (learned end-to-end).", "This is done by expanding all possible transitions to a fixed depth before performing a max backup on all expanded nodes.", "The final backup value is the Q-value prediction for a given state, or can represent a policy through a softmax.", "I thought the paper was clear and well-motivated.", "The architecture (and various associated tricks like state vector normalization) are well-described for reproducibility.", "Experimental results seem promising", "but I wasn\u2019t fully convinced of its conclusions.", "In both domains, TreeQN and AtreeC are compared to a DQN architecture,", "but it wasn\u2019t clear to me that this is the right baseline.", "Indeed TreeQN and AtreeC share the same conv stack in the encoder (I think?),", "but also have the extra capacity of the tree on top.", "Can the performance gain we see in the Push task as a function of tree depth be explained by the added network capacity?", "Same comment in Atari,", "but there it\u2019s not really obvious that the proposed architecture is helping.", "Baselines could include unsharing the weights in the tree, removing the max backup, having a regular MLP with similar capacity, etc.", "Page 5, the auxiliary loss on reward prediction seems appropriate,", "but it\u2019s not clear from the text and experiments whether it actually was necessary.", "Is it that makes interpretability of the model easier (like we see in Fig 5c)?", "Or does it actually lead to better performance?", "Despite some shortcomings in the result section, I believe this is good work and worth communicating as is."], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request", "request", "evaluation"]}
{"doc_id": "B1_TQ-clG", "text": ["This paper studies learning to play two-player general-sum games with state (Markov games).", "The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains.", "Generally, in repeated prisoner's dilemma, one can punish one's opponent for noncooperation.", "In this paper, they design an apporach to learn to cooperate in a more complex game, like a hybrid pong meets prisoner's dilemma game.", "This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view.", "From a game-theoretic point of view, the paper begins with somewhat sloppy definitions followed by a theorem that is not very surprising.", "It is basically a straightforward generalization of the idea of punishing, which is common in \"folk theorems\" from game theory, to give a particular equilibrium for cooperating in Markov games.", "Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do.", "Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so.", "When the game is symmetric, this might be \"the natural\" solution", "but in general it is far from clear why all players would want to maximize the total payoff.", "The paper follows with some fun experiments implementing these new game theory notions.", "Unfortunately, since the game theory was not particularly well-motivated,", "I did not find the overall story compelling.", "It is perhaps interesting that one can make deep learning learn to cooperate,", "but one could have illustrated the game theory equally well with other techniques.", "In contrast, the paper \"Coco-Q: Learning in Stochastic Games with Side Payments\" by Sodomka et. al. is an example where they took a well-motivated game theoretic cooperative solution concept and explored how to implement that with reinforcement learning.", "I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.", "It should also be noted that I was asked to review another ICLR submission entitled \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\"", "which amazingly introduced the same \"Pong Player\u2019s Dilemma\" game as in this paper.", "Notice the following suspiciously similar paragraphs from the two papers:From \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\":", "We also look at an environment where strategies must be learned from raw pixels.", "We use the method of Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a point they receive a reward of 1 and the other player receives \u22122.", "We refer to this game as the Pong Player\u2019s Dilemma (PPD).", "In the PPD the only (jointly) winning move is not to play.", "However, a fully cooperative agent can be exploited by a defector.", "From \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\":", "To demonstrate this we follow the method of Tampuu et al. 
(2017) to construct a version of Atari Pong which makes the game into a social dilemma.", "In what we call the Pong Player\u2019s Dilemma (PPD) when an agent scores they gain a reward of 1 but the partner receives a reward of \u22122.", "Thus, in the PPD the only (jointly) winning move is not to play,", "but selfish agents are again tempted to defect and try to score points even though this decreases total social reward.", "We see that CCC is a successful, robust, and simple strategy in this game."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "non-arg", "fact", "reference", "quote", "quote", "quote", "quote", "quote", "reference", "quote", "quote", "quote", "quote", "quote"]}
{"doc_id": "BJBWMqqlf", "text": ["This paper proposes a new theoretically-motivated method for combining reinforcement learning and imitation learning for acquiring policies that are as good as or superior to the expert. ", "The method assumes access to an expert value function (which could be trained using expert roll-outs) and uses the value function to shape the reward function and allow for truncated-horizon policy search. ", "The algorithm can gracefully handle suboptimal demonstrations/value functions, ", "since the demonstrations are only used for reward shaping, ", "and the experiments demonstrate faster convergence and better performance compared to RL and AggreVaTeD on a range of simulated control domains. ", "The paper is well-written and easy to understand.", "My main feedback is with regard to the experiments: I appreciate that the experiments used 25 random seeds! ", "This provides a convincing evaluation.", "It would be nice to see experimental results on even higher dimensional domains such as the ant, humanoid, or vision-based tasks, ", "since the experiments seem to suggest that the benefit of the proposed method is diminished in the swimmer and hopper domains compared to the simpler settings.", "Since the method uses demonstrations, ", "it would be nice to see three additional comparisons: (a) training with supervised learning on the expert roll-outs, (b) initializing THOR and AggreVaTeD (k=1) with a policy trained with supervised learning, and (c) initializing TRPO with a policy trained with supervised learning. ", "There doesn't seem to be any reason not to initialize in such a way, when expert demonstrations are available, ", "and such an initialization should likely provide a significant speed boost in training for all methods.", "How many demonstrations were used for training the value function in each domain? ", "I did not see this information in the paper.", "With regard to the method and discussion: The paper discusses the connection between the proposed method and short-horizon imitation and long-horizon RL, describing the method as a midway point. ", "It would also be interesting to see a discussion of the relation to inverse RL, ", "which considers long-term outcomes from expert demonstrations. ", "For example, MacGlashn & Littman propose a midway point between imitation and inverse RL [1].", "Theoretically, would it make sense to anneal k from small to large? (to learn the most effectively from the smallest amount of experience)", "[1] https://www.ijcai.org/Proceedings/15/Papers/519.pdf", "Minor feedback: - The RHS of the first inequality in the proof of Thm 3.3 seems to have an error in the indexing of i and exponent, which differs from the line before and line after"], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "fact", "fact", "request", "fact", "fact", "non-arg", "reference", "fact"]}
{"doc_id": "B1A7YkceM", "text": ["The authors propose a procedure to generate an ensemble of sparse structured models. ", "To do this, the authors propose to (1) sample models using SG-MCMC with group sparse prior, (2) prune hidden units with small weights, (3) and retrain weights by optimizing each pruned model. ", "The ensemble is applied to MNIST classification and language modelling on PTB dataset. ", "I have two major concerns on the paper. ", "First, the proposed procedure is quite empirically designed. ", "So, it is difficult to understand why it works well in some problems. ", "Particularly. the justification on the retraining phase is weak. ", "It seems more like to use SG-MCMC to *initialize* models which will then be *optimized* to find MAP with the sparse-model constraints. ", "The second problem is about the baselines in the MNIST experiments. ", "The FNN-300-100 model without dropout, batch-norm, etc. seems unreasonably weak baseline. ", "So, the results on Table 1 on this small network is not much informative practically. ", "Lastly, I also found a significant effort is also desired to improve the writing. ", "The following reference also needs to be discussed in the context of using SG-MCMC in RNN.", "- \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "reference"]}
{"doc_id": "B1P-gBclf", "text": ["The quality of the paper is good, and clarity is mostly good. ", "The proposed metric is interesting, ", "but it is hard to judge the significance without more thorough experiments demonstrating that it works in practice.", "Pros:- clear definitions of terms", " - overall outline of paper is good", " - novel metric", "Cons - text is a bit over-wordy, and flow/meaning sometimes get lost. ", "A strict editor would be helpful, ", "because the underlying content is good", " - odd that your definition of generalization in GANs appears immediately preceding the section titled \"Generalisation in GANs\"", " - the paragraph at the end of the \"Generalisation in GANs\" section is confusing. ", "I think this section and the previous (\"The objective of unsupervised learning\") could be combined, removing some repetition, adding some subtitles to improve clarity. ", "This would cut down the text a bit to make space for more experiments.", " - why is your definition of generalization that the test set distance is strictly less than training set ? ", "I would think this should be less-than-or-equal", " - there is a sentence that doesn't end at the top of p.3: \"... the original GAN paper showed that [ends here]\"", " - should state in the abstract what your \"notion of generalization\" for gans is, instead of being vague about it", " - more experiments showing a comparison of the proposed metric to others (e.g. inception score, Mturk assessments of sample quality, etc.) would be necessary to find the metric convincing", " - what is a \"pushforward measure\"? (p.2)", " - the related work section is well-written and interesting, ", "but it's a bit odd to have it at the end. ", "Earlier in the work (e.g. before experiments and discussion) would allow the comparison with MMD to inform the context of the introduction", " - there are some errors in figures that I think were all mentioned by previous commentators."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "non-arg", "evaluation", "fact", "request", "request", "non-arg", "evaluation", "evaluation", "request", "evaluation"]}
{"doc_id": "rkCi3T3lG", "text": ["Summary: The paper proposes a new dataset for reading comprehension, called DuoRC. ", "The questions and answers in the DuoRC dataset are created from different versions of a movie plot narrating the same underlying story. ", "The DuoRC dataset offers the following challenges compared to the existing reading comprehension (RC) datasets \u2013 ", "1) low lexical overlap between questions and their corresponding passages, ", "2) requires use of common-sense knowledge to answer the question, ", "3) requires reasoning across multiples sentences to answer the question, ", "4) consists of those questions as well that cannot be answered from the given passage. ", "The paper experiments with two types of models ", "\u2013 1) a model which only predicts the span in a document and ", "2) a model which generates the answer after predicting the span. ", "Both these models are built off of an existing model on SQuAD \u2013 the Bidirectional Attention Flow (BiDAF) model. ", "The experimental results show that the span based model performs better than the model which generates the answers. ", "But the accuracy of both the models is significantly lower than that of their base model (BiDAF) on SQuAD, demonstrating the difficulty of the DuoRC dataset. ", "Strengths:1.\tThe data collection process is interesting. ", "The challenges in the proposed dataset as outlined in the paper seem worth pushing for.", "2.\tThe paper is well written making it easy to follow.", "3.\tThe experiments and analysis presented in the paper are insightful.", "Weaknesses:1.\tIt would be good if the paper can throw some more light on the comparison between the existing MovieQA dataset and the proposed DuoRC dataset, other than the size.", "2.\tThe dataset is motivated as consisting of four challenges (described in the summary above) that do not exist in the existing RC datasets.", "However, the paper lacks an analysis on what percentage of questions in the proposed dataset belong to each category of the four challenges. ", "Such an analysis would helpful to accurately get an estimate of the proportion of these challenges in the dataset.", "3.\tIt is not clear from the paper how should the questions which are unanswerable be evaluated. ", "As in, what should be the ground-truth answer against which the answers should such questions be evaluated. ", "Clearly, string matching would not work ", "because a model could say \u201cdon\u2019t know\u201d whereas some other model could say \u201cunanswerable\u201d. ", "So, does the training data have a particular string as the ground truth answer for such questions, so that a model can just be trained to spit out that particular string when it thinks it can\u2019t answer the questions? ", "4.\tOne of the observations made in the paper is that \u201ctraining on one dataset and evaluating on the other results in a drop in the performance.\u201d ", "However, in table 4, evaluating on Paraphrase RC is better when trained on Self RC as opposed to when trained on Paraphrase RC. ", "This seems to be in conflict with the observation drawn in the paper. ", "Could authors please clarify this? 
", "Also, could authors please throw some light on why this might be happening?", "5.\tIn the third phase of data collection (Paraphrase RC), was waiting for 2-3 weeks the only step taken in order to ensure that the workers for this stage are different from those in stage 2, or was something more sophisticated implemented which did not allow a worker who has worked in stage 2 to be able to participate in stage 3?", "6.\tTypo: Dataset section, phrases --> phases", "Overall: The challenges proposed in the DuoRC dataset are interesting. ", "The paper is well written ", "and the experiments are interesting. ", "However, there are some questions (as mentioned in the Weaknesses section) which need to be clarified before I can recommend acceptance for the paper."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "request", "evaluation", "evaluation", "fact", "fact", "non-arg", "fact", "fact", "fact", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "S1GVQk5gG", "text": ["This paper is about rethinking how to use encoder-decoder architectures for representation learning when the training objective contains a similarity between the decoder output and the encoding of something else.", "For example, for the skip-thought RNN encoder-decoder that encodes a sentence and decodes neighboring sentences: rather than use the final encoder hidden state as the representation of the sentence, the paper uses some function of the decoder,", "since the training objective is to maximize each dot product between a decoder hidden state and the embedding of a context word.", "If dot product (or cosine similarity) is going to be used as the similarity function for the representation, then it makes more sense, the paper argues, to use the decoder hidden state(s) as the representation of the input sentence.", "The paper considers both averaging and concatenating hidden states.", "One difficulty here is that the neighboring sentences are typically not available in downstream tasks,", "so the paper runs the decoder to produce a predicted sentence one word-at-a-time, using the predicted words as inputs to the decoder RNNs.", "Then those decoder RNN hidden states are used via averaging or concatenation", "as the representation of a sentence in downstream tasks.", "This paper is a source of contributions,", "but I think in its current form it is not yet ready for publication.", "Pros: I think it makes sense to pay attention to the training objective when deciding how to use the model for downstream tasks.", "I like the empirical investigation of combining RNN and BOW encoders and decoders.", "The experimental results show that a single encoder-decoder model can be trained and then two different functions of it can be used at test time for different kinds of tasks (RNN-RNN for supervised transfer and RNN-RNN-mean for unsupervised transfer).", "I think this is an interesting result.", "Cons: I have several concerns.", "The first relate to the theoretical arguments and their empirical support.", "Regarding the theoretical arguments: First, the paper discusses the notion of an \"optimal representation space\" and describes the argument as theoretical,", "but I don't see much of a theoretical argument here.", "As far as I can tell, the paper does not formally define its terms or define in what sense the representation space is \"optimal\".", "I can only find heuristic statements like those in the paragraph in Sec 3.2 that begins \"These observations...\".", "What exactly is meant formally by statements like \"any model where the decoder is log-linear with respect to the encoder\" or \"that distance is optimal with respect to the model\u2019s objective\"?", "It seems like the paper may want to start with formal definitions of an encoder and a decoder, then define what is meant by a \"decoder that is log-linear with respect to the encoder\", and define what it means for a distance to be optimal with respect to a training objective.", "That seems necessary in order to provide the foundation to make any theoretical statement about choices for encoders, decoders, and training objectives.", "I am still not exactly sure what that theoretical statement might look like,", "but maybe defining the terms would help the authors get started in heading toward the goal of defining a statement to prove.", "Second, the paper's theoretical story seems to diverge almost immediately from the choices used in the model and experimental procedure.", "For example, in Sec. 
3.2, it is stated that cosine similarity \"is the appropriate similarity measure in the case of log-linear decoders.\"", "But the associated footnote (footnote 2) seems to admit a contradiction here by noting that actually the appropriate similarity measure is dot product:", "\"Evidently, the correct measure is actually the dot product.\"", "This is a bit confusing.", "It also raises a question: If cosine similarity will be used later for computing similarity, then why not try using cosine similarity in place of dot product in the model?", "That is, replace \"u_w \\cdot h_i\" in Eq. (2) with \"cos(u_w, h_i)\".", "If the paper's story is correct (and if I understand the ideas correctly), training with cosine similarity should work better than training with dot product,", "because the similarity function used during training is more similar to that used in testing.", "This seems like a natural experiment to try.", "Other natural experiments would be to vary both the similarity function used in the model during training and the similarity function used at test time.", "The authors' claims could be validated if the optimal choices always use the same choice for the training and test-time similarity functions.", "That is, if Euclidean distance is used during training, then will Euclidean distance be the best choice at test time?", "Another example of the divergence lies in the use of the skip-thought decoder on downstream tasks.", "Since the decoder hidden states depend on neighboring sentences and these are considered to be unavailable at test time,", "the paper \"unrolls\" the decoder for several steps by using it to predict words which are then used as inputs on the next time step.", "To me, this is a potentially very significant difference between training and testing.", "Since much of the paper is about reconciling training and testing conditions in terms of the representation space and similarity function,", "this difference feels like a divergence from the theoretical story.", "It is only briefly mentioned at the end of Sec. 3.3 and then discussed again later in the experiments section.", "I think this should be described in more detail in Section 3.3", "because it is an important note about how the model will be used in practice.", "It would be nice to be able to quantify the impact (of unrolling the decoder with predicted words) by, for example, using the decoder on a downstream evaluation dataset that has neighboring sentences in it.", "Then the actual neighboring sentences can be used as inputs to the decoder when it is unrolled, which would be closer to the training conditions", "and we could empirically see the difference.", "Perhaps there is an evaluation dataset with ordered sentences so that the authors could empirically compare using real vs predicted inputs to the decoder on a downstream task?", "The above experiments might help to better connect the experiments section with the theoretical arguments.", "Other concerns, including more specific points, are below: Sec. 
2: When describing inferior performance of RNN-based models on unsupervised sentence similarity tasks, the paper states: \"While this shortcoming of SkipThought and RNN-based models in general has been pointed out, to the best of our knowledge, it has never been systematically addressed in the literature before.\"", "The authors may want to check Wieting & Gimpel (2017) (and its related work) which investigates the inferiority of LSTMs compared to word averaging for unsupervised sentence similarity tasks.", "They found that averaging the encoder hidden states can work better than using the final encoder hidden state;", "the authors may want to try that as well.", "Sec. 3.2: When describing FastSent, the paper includes \"Due to the model's simplicity, it is particularly fast to train and evaluate, yet has shown state-of-the-art performance in unsupervised similarity tasks (Hill et al., 2015).\"", "I don't think it makes much sense to cite the SimLex-999 paper in this context,", "as that is a word similarity task and that paper does not include any results of FastSent.", "Maybe the Hill et al (2016) FastSent citation was meant instead?", "But in that case, I don't think it is quite accurate to make the claim that FastSent is SOTA on unsupervised similarity tasks.", "In the original FastSent paper (Hill et al., 2016), FastSent is not as good as CPHRASE or \"DictRep BOW+embs\" on average across the unsupervised sentence similarity evaluations.", "FastSent is also not as good as sent2vec from Pagliardini et al (2017) or charagram-phrase from Wieting et al. (2016).", "Sec. 3.3:In describing skip-thought, the paper states: \"While computationally complex, it is currently the state-of-the-art model for supervised transfer tasks (Hill et al., 2016).\"", "I don't think it is accurate to state that skip-thought is still state-of-the-art for supervised transfer tasks, in light of recent work (Conneau et al., 2017; Gan et al., 2017).", "Sec. 3.3:When discussing averaging the decoder hidden states, the paper states: \"Intuitively, this corresponds to destroying the word order information the decoder has learned.\"", "I'm not sure this strong language can be justified here.", "Is there any evidence to suggest that averaging the decoder hidden states will destroy word order information?", "The hidden states may be representing word order information in a way that is robust to averaging, i.e., in a way such that the average of the hidden states can still lead to the reconstruction of the word order.", "Sec. 4: What does it mean to use an RNN encoder and a BOW decoder?", "This seems to be a strongly-performing setting and competitive with RNN-mean,", "but I don't know exactly what this means.", "Minor things:Sec. 3.1:When defining v_w, it would be helpful to make explicit that it's in \\mathbb{R}^d.", "Sec. 4: For TREC question type classification, I think the correct citation should be Li & Roth (2002) instead of Vorhees (2002).", "Sec. 5:I think there's a typo in the following sentence: \"Our results show that, for example, the raw encoder output for SkipThought (RNN-RNN) achieves strong performance on supervised transfer, whilst its mean decoder output (RNN-mean) achieves strong performance on supervised transfer.\"", "I think \"unsupervised\" was meant in the latter mention.", "References: Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. 
EMNLP.", "Gan, Z., Pu, Y., Henao, R., Li, C., He, X., & Carin, L. (2017). Learning generic sentence representations using convolutional neural networks. EMNLP.", "Li, X., & Roth, D. (2002). Learning question classifiers. COLING.", "Pagliardini, M., Gupta, P., & Jaggi, M. (2018). Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. arXiv preprint arXiv:1703.02507.", "Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2016). Charagram: Embedding words and sentences via character n-grams. EMNLP.", "Wieting, J., & Gimpel, K. (2017). Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings. ACL."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "non-arg", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "quote", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "request", "request", "request", "non-arg", "evaluation", "fact", "request", "fact", "request", "fact", "evaluation", "fact", "non-arg", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "request", "reference", "reference", "reference", "reference", "reference", "reference"]}
{"doc_id": "BJggQbceG", "text": ["Summary: This paper proposes an adversarial learning framework for machine comprehension task. ", "Specifically, authors consider a reader network which learns to answer the question by reading the passage and a narrator network which learns to obfuscate the passage so that the reader can fail in its task. ", "Authors report results in 3 different reading comprehension datasets ", "and the proposed learning framework results in improving the performance of GMemN2N.", "My Comments: This paper is a direct application of adversarial learning to the task of reading comprehension. ", "It is a reasonable idea ", "and authors indeed show that it works.", "1. The paper needs a lot of editing. ", "Please check the minor comments.", "2. Why is the adversary called narrator network? ", "It is bit confusing ", "because the job of that network is to obfuscate the passage.", "3. Why do you motivate the learning method using self-play? ", "This is just using the idea of adversarial learning (like GAN) and it is not related to self-play.", "4. In section 2, first paragraph, authors mention that the narrator prevents catastrophic forgetting. ", "How is this happening? ", "Can you elaborate more?", "5. The learning framework is not explained in a precise way. ", "What do you mean by re-initializing and retraining the narrator? ", "Isn\u2019t it costly to reinitialize the network and retrain it for every turn? ", "How many such epochs are done? ", "You say that test set also contains obfuscated documents. ", "Is it only for the validation set? ", "Can you please explain if you use obfuscation when you report the final test performance too? ", "It would be more clear if you can provide a complete pseudo-code of the learning procedure.", "6. How does the narrator choose which word to obfuscate? ", "Do you run the narrator model with all possible obfuscations and pick the best choice?", "7. Why don\u2019t you treat number of hops as a hyper-parameter and choose it based on validation set? ", "I would like to see the results in Table 1 where you choose number of hops for each of the three models based on validation set.", "8. In figure 2, how are rounds constructed? ", "Does the model sees the same document again and again for 100 times or each time it sees a random document and you sample documents with replacement? ", "This will be clear if you provide the pseudo-code for learning.", "9. I do not understand author's\u2019 justification for figure-3. ", "Is it the case that the model learns to attend to last sentences for all the questions? ", "Or where it attends varies across examples?", "10. Are you willing to release the code for reproducing the results?", "Minor comments: Page 1, \u201cexploit his own decision\u201d should be \u201cexploit its own decision\u201d", "In page 2, section 2.1, sentence starting with \u201cIndeed, a too low percentage \u2026\u201d needs to be fixed.", "Page 3, \u201cforgetting is compensate\u201d should be \u201cforgetting is compensated\u201d.", "Page 4, \u201cfor one sentences\u201d needs to be fixed.", "Page 4, \u201cunknow\u201d should be \u201cunknown\u201d.", "Page 4, \u201c??\u201d needs to be fixed.", "Page 5, \u201cfor the two first datasets\u201d needs to be fixed.", "Table 1, \u201cGMenN2N\u201d should be \u201cGMemN2N\u201d. 
", "In caption, is it mean accuracy or maximum accuracy?", "Page 6, \u201cdataset was achieves\u201d needs to be fixed.", "Page 7, \u201cdocument by obfuscated this word\u201d needs to be fixed.", "Page 7, \u201coverall aspect of the two first readers\u201d needs to be fixed.", "Page 8, last para, references needs to be fixed.", "Page 9, first sentence, please check grammar.", "Section 6.2, last sentence is irrelevant."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "fact", "request", "fact", "fact", "request", "request", "evaluation", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "evaluation", "request", "request", "non-arg", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "evaluation"]}
{"doc_id": "HJv0cb5xG", "text": ["This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks.", "The technical approach seems well motivated, plausible, and potentially a good contribution,", "but the experimental work has numerous weaknesses which limit the significance of the work in current form.", "For one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting.", "It would be far more convincing if the author\u2019s used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results.", "Otherwise, the reader is left with concerns that the author\u2019s may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below).", "One weakness with the experiments is that it is not clear that they were fair to RNN or LSTM,", "for example, in terms of giving them the same computation as the TT-RNNs.", "Section Hyper-parameter Analysis\u201d on page 7 explains that they determined best TT rank and lags via grid search.", "But presumably larger values for rank and lag require more computation,", "so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods.", "As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies,", "so this seems like a very serious and plausible concern.", "Also, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, and thus dismissing direct comparisons of ARMA against TT-RNN.", "But they are unclear whether they gave ARMA as much hyper-parameter tuning (e.g. for number of lags) via grid search as their proposed TT-RNN benefited from.", "Again, the concern here is that the experiments are plausibly not being fair to all methods equally.", "So, due to the weaknesses in the experimental work,", "this work seems a bit premature.", "It should more clearly establish that their proposed TT-RNN are indeed performing well compared to existing SOTA."], "labels": ["fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "HkZ8Gb9eG", "text": ["This paper is well constructed and written.", "It consists of a number of broad ideas regarding density estimation using transformations of autoregressive networks.", "Specifically, the authors examine models involving linear maps from past states (LAM) and recurrence relationships (RAM).", "The critical insight is that the hidden states in the LAM are not coupled allowing considerable flexibility between consecutive conditional distributions.", "This is at the expense of an increased number of parameters and a lack of information sharing.", "In contrast, the RAM transfers information between conditional densities via the coupled hidden states allowing for more constrained smooth transitions.", "The authors then explored a variety of transformations designed to increase the expressiveness of LAM and RAM.", "The authors importantly note that one important restriction on the class of transformations is the ability to evaluate the Jacobian of the transformation efficiently.", "A composite of transformations coupled with the LAM/RAM networks provides a highly expressive model for modelling arbitrary joint densities but retaining interpretable conditional structure.", "There is a rich variety of synthetic and real data studies which demonstrate that LAM and RAM consistently rank amongst the top models demonstrating potential utility for this class of models.", "Whilst the paper provides no definitive solutions, this is not the point of the work which seeks to provide a description of a general class of potentially useful models."], "labels": ["evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "SyZQxkmxG", "text": ["CONTRIBUTION The main contribution of the paper is not clearly stated. ", "To the reviewer, It seems \u201clife-long learning\u201d is the same as \u201conline learning\u201d. ", "However, the whole paper does not define what \u201clife-long learning\u201d is.", "The limited budget scheme is well established in the literature. ", "1. J. Hu, H. Yang, I. King, M. R. Lyu, and A. M.-C. So. Kernelized online imbalanced learning with fixed budgets. In AAAI, Austin Texas, USA, Jan. 25-30 2015. \u2028", "2. Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52(8):2275\u20132285, 2004.", "It is not clear what the new proposal in the paper.", "WRITING QUALITY The paper is not well written in a good shape. ", "Many meanings of the equations are not stated clearly, e.g., $phi$ in eq. (7). ", "Furthermore, the equation in algorithm 2 is not well formatted. ", "DETAILED COMMENTS 1. The mapping function $phi$ appears in Eq. (1) without definition.", "2. The last equation in pp. 3 defines the decision function f by an inner product. ", "In the equation, the notation x_t and i_t is not clearly defined. ", "More seriously, a comma is missed in the definition of the inner product.", "3. Some equations are labeled but never referenced, e.g., Eq. (4).", "4. The physical meaning of Eq.(7) is unclear. ", "However, this equation is the key proposal of the paper. ", "For example, what is the output of the Eq. (7)? ", "What is the main objective of Eq. (7)? ", "Moreover, what support vectors should be removed by optimizing Eq. (7)? ", "One main issue is that the notation $phi$ is not clearly defined. ", "The computation of f-y_r\\phi(s_r) makes it hard to understand. ", "Especially, the dimension of $phi$ in Eq.(7) is unknown. ", "ABOUT EXPERIMENTS 1.\tIt is unclear how to tune the hyperparameters.", "2.\tIn Table 1, the results only report the standard deviation of AUC. ", "No standard deviations of nSV and Time are reported."], "labels": ["evaluation", "evaluation", "fact", "fact", "reference", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact"]}
{"doc_id": "Hy4_ANE-f", "text": ["This paper studies new off-policy policy optimization algorithm using relative entropy objective and use EM algorithm to solve it. ", "The general idea is not new, aka, formulating the MDP problem as a probabilistic inference problem. ", "There are some technical questions: 1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; ", "However, for nonparametric EM case, there is no guarantee for that. ", "This is the biggest concern I have for the theoretical justification of the paper.", "2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. ", "This is not true. ", "The Retrace algorithm, is per se, a value iteration algorithm. ", "I think the author could say using the policy evaluation version of Retrace, or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.", "Besides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as \u201cConvergent Tree-Backup and Retrace with Function Approximation\u201d. ", "But this is a minor point if the author doesn\u2019t emphasize too much about off-policy stability.", "3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. ", "Usually, an in-depth analysis between the choice of \\lambda in multiplier formulation and the \\epsilon in the constraint should be discussed, ", "which is necessary for further theoretical analysis. ", "4. The experimental conclusions are conducted without sound evidence. ", "For example, the author claims the method to be 'highly data efficient' compared with existing approaches, ", "however, there is no strong evidence supporting this claim. ", "Overall, although the motivation of this paper is interesting, ", "I think there is still a lot of details to improve."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "request", "fact", "evaluation", "request", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "HyjN-YPlz", "text": ["The manuscript proposes two objective functions based on the manifold assumption as defense mechanisms against adversarial examples. ", "The two objective functions are based on assigning low confidence values to points that are near or off the underlying (learned) data manifold while assigning high confidence values to points lying on the data manifold. ", "In particular, for an adversarial example that is distinguishable from the points on the manifold and assigned a low confidence by the model, is projected back onto the designated manifold such that the model assigns it a high confidence value. ", "The authors claim that the two objective functions proposed in this manuscript provide such a projection onto the desired manifold and assign high confidence for these adversarial points. ", "These mechanisms, together with the so-called shell wrapper around the model (a deep learning model in this case) will provide the desired defense mechanism against adversarial examples.", "The manuscript at the current stage seems to be a preliminary work that is not well matured yet. ", "The manuscript is overly verbose and the arguments seem to be weak and not fully developed yet. ", "More importantly, the experiments are very preliminary and there is much more room to deliver more comprehensive and compelling experiments."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "SJ2P_-YgG", "text": ["The main idea of this paper is to replace the feedforward summation", "y = f(W*x + b) where x,y,b are vectors, W is a matrix by an integral \\y = f(\\int W \\x + \\b) where \\x,\\y,\\b are functions, and W is a kernel. ", "A deep neural network with this integral feedforward is called a deep function machine. ", "The motivation is along the lines of functional PCA: ", "if the vector x was obtained by discretization of some function \\x, then one encounters the curse of dimensionality as one obtains finer and finer discretization. ", "The idea of functional PCA is to view \\x as a function is some appropriate Hilbert space, and expands it in some appropriate basis. ", "This way, finer discretization does not increase the dimension of \\x (nor its approximation), but rather improves the resolution. ", "This paper takes this idea and applies it to deep neural networks. ", "Unfortunately, beyond rather obvious approximation results, the paper does not get major mileage out of this idea. ", "This approach amounts to a change of basis - ", "and therefore the resolution invariance is not surprising. ", "In the experiments, results of this method should be compared not against NNs trained on the data directly, but against NNs trained on dimension reduced version of the data (eg: first fixed number of PCA components). ", "Unfortunately, this was not done. ", "I suspect that in this case, the results would be very similar."], "labels": ["evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation"]}
{"doc_id": "rJ6Z7prxf", "text": ["This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C.", "They obtain a substantial performance improvement over the baseline algorithms, without explaining clearly why.", "The general concept is nice,", "the paper is well written", "and the experiments are convincing,", "so to me this paper should be accepted, despite a weak analysis.", "Below are my comments for the authors. ---------------------------------", "General, conceptual comments: The second paragraph of the intro is rather nice,", "but it might be updated with recent work about exploration in RL.", "Note that more than 30 papers are submitted to ICLR 2018 mentionning this topic,", "and many things have happened since this paper was posted on arxiv (see the \"official comments\" too).", "p2: \"our NoisyNet approach requires only one extra parameter per weight\"", "Parameters in a NN are mostly weights and biases,", "so from this sentence one may understand that you close-to-double the number of parameters, which is not so few!", "If this is not what you mean, you should reformulate...", "p2: \"Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\"", "Two ideas seem to be collapsed here:", "the idea of diminishing noise over an experiment, exploring first and exploiting later,", "and the idea of adapting the amount of noise to a specific problem.", "It should be made clearer whether NoisyNet can address both issues and whether other algorithms do so too...", "In particular, an algorithm may adapt noise along an experiment or from an experiment to the next.", "From Fig.3, one can see that having the same initial noise in all environments is not a good idea,", "so the second mechanism may help much.", "BTW, the short section in Appendix B about initialization of noisy networks should be moved into the main text.", "p4: the presentation of NoisyNets is not so easy to follow and could be clarified in several respects:", "- a picture could be given to better explain the structure of parameters, particularly in the case of factorised (factorized, factored?) Gaussian noise.", "- I would start with the paragraph \"Considering a linear layer [...] 
below)\" and only after this I would introduce \\theta and \\xi as a more synthetic notation.", "Later in the paper, you then have to state \"...are now noted \\xi\" several times, which I found rather clumsy.", "p5: Why do you use option (b) for DQN and Dueling and option (a) for A3C?", "The reason why (if any) should be made clear from the clearer presentation required above.", "By the way, a wild question: if you wanted to use NoisyNets in an actor-critic architecture like DDPG, would you put noise both in the actor and the critic?", "The paragraph above Fig3 raises important questions which do not get a satisfactory answer.", "Why is it that, in deterministic environments, the network does not converge to a deterministic policy, which should be able to perform better?", "Why is it that the adequate level of noise changes depending on the environment?", "By the way, are we sure that the curves of Fig3 correspond to some progress in noise tuning (that is, is the level of noise really \"better\" through time with these curves, or they they show something poorly correlated with the true reasons of success?)?", "Finally, I would be glad to see the effect of your technique on algorithms like TRPO and PPO which require a stochastic policy for exploration,", "and where I believe that the role of the KL divergence bound is mostly to prevent the level of stochasticity from collasping too quickly.", "-----------------------------------Local comments: The first sentence may make the reader think you only know about 4-5 old works about exploration.", "Pp. 1-2 : \"the approach differs ... from variational inference. [...] It also differs variational inference...\"", "If you mean it differs from variational inference in two ways, the paragraph should be reorganized.", "p2: \"At a high level our algorithm induces a randomised network for exploration, with care exploration via randomised value functions can be provably-efficient with suitable linear basis (Osband et al., 2014)\"", "=> I don't understand this sentence at all.", "At the top of p3, you may update your list with PPO and ACKTR, which are now \"classical\" baselines too.", "Appendices A1 and A2 are a lot redundant with the main text (some sentences and equations are just copy-pasted),", "this should be improved.", "The best would be to need to reject nothing to the Appendix.", "--------------------------------------- Typos, language issues: p2 the idea ... the optimization process have been => has", "p2 Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.", "=> you should make a sentence...", "p3 the the double-DQN", "several times, an equation is cut over two lines, a line finishing with \"=\",", "which is inelegant", "You should deal better with appendices:", "Every \"Sec. 
Ax/By/Cz\" should be replaced by \"Appendix Ax/By/Cz\".", "Besides, the big table and the list of performances figures should themselves be put in two additional appendices", "and you should refer to them as Appendix D or E rather than \"the Appendix\"."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "request", "fact", "fact", "quote", "fact", "fact", "request", "quote", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "request", "request", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "quote", "request", "quote", "evaluation", "request", "evaluation", "request", "evaluation", "request", "quote", "request", "quote", "fact", "evaluation", "request", "request", "request", "request"]}
{"doc_id": "Bkp-xJ5xf", "text": ["This paper presents a so-called cross-view training for semi-supervised deep models.", "Experiments were conducted on various data sets", "and experimental results were reported.", "Pros:* Studying semi-supervised learning techniques for deep models is of practical significance.", "Cons:* The novelty of this paper is marginal.", "The use of unlabeled data is in fact a self-training process.", "Leveraging the sub-regions of the image to improve performance is not new and has been widely-studied in image classification and retrieval.", "* The proposed approach suffers from a technical weakness or flaw.", "For the self-labeled data, the prediction of each view is enforced to be same as the assigned self-labeling.", "However, since each view related to a sub-region of the image (especially when the model is not so deep), it is less likely for this region to contain the representation of the concepts", "(e.g., some local region of an image with a horse may exhibit only grass);", "enforcing the prediction of this view to be the same self-labeled concepts (e.g,\u201chorse\u201d) may drive the prediction away from what it should be", "( e..g, it will make the network to predict grass as horse).", "Such a flaw may affect the final performance of the proposed approach.", "* The word \u201cview\u201d in this paper is misleading.", "The \u201cview\u201d in this paper is corresponding to actually sub-regions in the images", "* The experimental results indicate that the proposed approach fails to perform better than the compared baselines in table 2, which reduces the practical significance of the proposed approach."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "H1q18tjxM", "text": ["This paper presents a nearest-neighbor based continuous control policy.", "Two algorithms are presented: NN-1 runs open-loop trajectories from the beginning state,", "and NN-2 runs a state-condition policy that retrieves nearest state-action tuples for each state.", "The overall algorithm is very simple to implement and can do reasonably well on some simple control tasks,", "but quickly gets overwhelmed by higher-dimensional and stochastic environments.", "It is very similar to \"Learning to Steer on Winding Tracks Using Semi-Parametric Control Policies\" and is effectively an indirect form of tile coding (each could be seen as a fixed voronoi cell).", "I am sure this idea has been tried before in the 90s", "but I am not familiar enough with all the literature to find it", "(A quick google search brings this up: Reinforcement Learning of Active Recognition Behaviors, with a chapter on nearest-neighbor lookup for policies: https://people.eecs.berkeley.edu/~trevor/papers/1997-045/node3.html).", "Although I believe there is work to be done in the current round of RL research using nearest neighbor policies,", "I don't believe this paper delves very far into pushing new ideas", "(even a simple adaptive distance metric could have provided some interesting results, nevermind doing a learned metric in a latent space to allow for rapid retrainig of a policy on new domains....),", "and for that reason I don't think it has a place as a conference paper at ICLR.", "I would suggest its submission to a workshop where it might have more use triggering discussion of further work in this area."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "reference", "evaluation", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "BkYwge9ef", "text": ["There could be an interesting idea here, ", "but the limitations and applicability of the proposed approach are not clear yet. ", "More analysis should be done to clarify its potential. ", "Besides, the paper seriously needs to be reworked. ", "The text in general, but also the notation, should be improved.", "In my opinion, the authors should explain how to apply their algorithm to more general network architectures, and test it, in particular to convnets. ", "An experiment on a modern dataset beyond MNIST would also be a welcome addition.", "Some comments:- The method is present as a fully-connected network training procedure. ", "But the resulting network is not really fully-connected, but modular. ", "This is clear in Fig. 1 and in the explanation in Sect. 3.1. ", "The newly added hidden neurons at every iteration do not project to the previous pool of hidden neurons. ", "It should be stressed that the networks end up with this non-conventional \u201ctiled\u201d architecture. ", "Are there studies where the capacity of such networks is investigated, when all the weights are trained concurrently.", "- It wasn\u2019t clear to me whether the memory reallocation could be easily implemented in hardware. ", "A few references or remarks on this issue would be welcome.", "- The work \u201cEfficient supervised learning in networks with binary synapses\u201d by Baldassi et al. (PNAS 2007) should be cited. ", "Although usually ignored by the deep learning community, it actually was a pioneering study on the use of low resolution weights during inference while allowing for auxiliary variables during learning.", "- Coming back my main point above, I didn\u2019t really get the discussion on Sect. 5.3. ", "Why didn\u2019t the authors test their algorithm on a convnet? ", "Are there any obstacles in doing so? ", "It seems quite important to understand this point, ", "as the paper appeals to technical applications and convolution seems hard to sidestep currently.", "- Fig. 3: xx-axis: define storage efficiency and storage requirement.", "- Fig. 4: What\u2019s an RSBL? ", "Acronyms should be defined.", "- Overall, language and notation should really be refined. ", "I had a hard time reading Algorithm 1, ", "as the notation is not even defined anywhere. ", "And this problem extends throughout the paper.", "For example, just looking at Sect. 4.1, \u201ctraining and testing data x is normalized\u2026\u201d, if x is not properly defined, it\u2019s best to omit it; ", "\u201c\u2026 2-dimentonal\u2026\u201d, at least major typos should be scanned and corrected."], "labels": ["evaluation", "evaluation", "request", "request", "request", "request", "request", "fact", "fact", "fact", "fact", "fact", "non-arg", "evaluation", "request", "request", "fact", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "fact", "request", "request"]}
{"doc_id": "HJfRKPFeM", "text": ["SUMMARY. The paper presents an extension of word2vec for structured features.", "The authors introduced a new compatibility function between features and, as in the skipgram approach, they propose a variation of negative sampling to deal with structured features.", "The learned representation of features is tested on a recommendation-like task. ", "---------- OVERALL JUDGMENT The paper is not clear ", "and thus I am not sure what I can learn from it.", "From what is written on the paper I have trouble to understand the definition of the model the authors propose and also an actual NLP task where the representation induced by the model can be useful.", "For this reason, I would suggest the authors make clear with a more formal notation, and the use of examples, what the model is supposed to achieve.", "---------- DETAILED COMMENTS When the authors refer to word2vec is not clear if they are referring to skipgram or cbow algorithm, ", "please make it clear.", "Bottom of page one: \"a positive example is 'semantic'\", ", "please, use another expression to describe observable examples, ", "'semantic' does not make sense in this context.", "Levi and Goldberg (2014) do not say anything about factorization machines, ", "could the authors clarify this point?", "Equation (4), what do i and j stand for? ", "what does \\beta represent? ", "is it the embedding vector? ", "How is this formula related to skipgram or cbow?", "The introduction of structured deep-in factorization machine should be more clear with examples that give the intuition on the rationale of the model.", "The experimental section is rather poor, ", "first, the authors only compare themselves with word2ve (cbow), ", "it is not clear what the reader should learn from the results the authors got.", "Finally, the most striking flaw of this paper is the lack of references to previous works on word embeddings and feature representation, ", "I would suggest the author check and compare themselves with previous work on this topic."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "quote", "request", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "request"]}
{"doc_id": "HJzYCPDlf", "text": ["Twin Networks: Using the Future as a Regularizer", "** PAPER SUMMARY ** The authors propose to regularize RNN for sequence prediction by forcing states of the main forward RNN to match the state of a secondary backward RNN.", "Both RNNs are trained jointly and only the forward model is used at test time.", "Experiments on conditional generation (speech recognition, image captioning), and unconditional generation (MNIST pixel RNN, language models) show the effectiveness of the regularizer.", "** REVIEW SUMMARY ** The paper reads well, has sufficient reference.", "The idea is simple and well explained.", "Positive empirial results support the proposed regularizer.", "** DETAILED REVIEW ** Overall, this is a good paper.", "I have a few suggestions along the text but nothing major.", "In related work, I would cite co-training approaches.", "In effect, you have two view of a point in time, its past and its future and you force these two views to agree,", "see (Blum and Mitchell, 1998) or Xu, Chang, Dacheng Tao, and Chao Xu. \"A survey on multi-view learning.\" arXiv preprint arXiv:1304.5634 (2013).", "I would also relate your work to distillation/model compression which tries to get one network to behave like another.", "On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained?", "In section 2, it is not obvious to me that the regularizer (4) would not be ignored in absence of regularization on the output matrix.", "I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings.", "Could you comment why this would not happen?", "In Section 4.2, you need to refer to Table 2 in the text.", "You also need to define the evaluation metrics used.", "In this section, why are you not reporting the results from the original Show&Tell paper?", "How does your implementation compare to the original work?", "On unconditional generation, your hypothesis on uncertainty is interesting and could be tested.", "You could inject uncertainty in the captioning task for instance, e.g. consider that multiple version of each word e.g. dogA, dogB, docC which are alternatively used instead of dog with predefined substitution rates.", "Would your regularizer still be helpful there?", "At which point would it break?"], "labels": ["non-arg", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "reference", "request", "request", "evaluation", "fact", "request", "request", "request", "request", "request", "evaluation", "request", "request", "request"]}
{"doc_id": "ry9RWezWM", "text": ["The authors purpose a method for creating mini batches for a student network by using a second learned representation space to dynamically selecting examples by their 'easiness and true diverseness'. ", "The framework is detailed ", "and results on MNIST, cifar10 and fashion-MNIST are presented. ", "The work presented is novel but there are some notable omissions: ", " - there are no specific numbers presented to back up the improvement claims; ", "graphs are presented but not specific numeric results", "- there is limited discussion of the computational cost of the framework presented ", "- there is no comparison to a baseline in which the additional learning cycles used for learning the embedding are used for training the student model.", "- only small data sets are evaluated. ", "This is unfortunate ", "because if there are to be large gains from this approach, it seems that they are more likely to be found in the domain of large scale problems, than toy data sets like mnist."], "labels": ["fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "HyvZDmueM", "text": ["This paper proposed a new optimization framework for semi-supervised learning based on derived inversion scheme for deep neural networks. ", "The numerical experiments show a significant improvement in accuracy of the approach."], "labels": ["fact", "evaluation"]}
{"doc_id": "ByvABDcxz", "text": ["The authors present an algorithm for training ensembles of policy networks that regularly mixes different policies in the ensemble together by distilling a mixture of two policies into a single policy network, adding it to the ensemble and selecting the strongest networks to remain (under certain definitions of a \"strong\" network). ", "The experiments compare favorably against PPO and A2C baselines on a variety of MuJoCo tasks, ", "although I would appreciate a wall-time comparison as well, ", "as training the \"crossover\" network is presumably time-consuming.", "It seems that for much of the paper, the authors could dispense with the genetic terminology altogether - and I mean that as a compliment. ", "There are few if any valuable ideas in the field of evolutionary computing ", "and I am glad to see the authors use sensible gradient-based learning for GPO, even if it makes it depart from what many in the field would consider \"evolutionary\" computing. ", "Another point on terminology that is important to emphasize - the method for training the crossover network by direct supervised learning from expert trajectories is technically not imitation learning but behavioral cloning. ", "I would perhaps even call this a distillation network rather than a crossover network. ", "In many robotics tasks behavioral cloning is known for overfitting to expert trajectories, ", "but that may not be a problem in this setting as \"expert\" trajectories can be generated in unlimited quantities."], "labels": ["fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact"]}
{"doc_id": "BkSq8vBxG", "text": ["The authors report a number of experiments using off-the-shelf sentence embedding methods for performing extractive summarisation, using a number of simple methods for choosing the extracted sentences. ", "Unfortunately the contribution is too minor, and the work too incremental, to be worthy of a place at a top-tier international conference such as ICLR. ", "The overall presentation is also below the required standard. ", "The work would be better suited for a focused summarisation workshop, ", "where there would be more interest from the participants.", "Some of the statements motivating the work are questionable. ", "I don't know if sentence vectors *in particular* have been especially successful in recent NLP (unless we count neural MT with attention as using \"sentence vectors\"). ", "It's also not the case that the sentence reordering and text simplification problems have been solved, as is suggested on p.2. ", "The most effective method is a simple greedy technique. ", "I'm not sure I'd describe this as being \"based on fundamental principles of vector semantics\" (p.4).", "The citations often have the authors mentioned twice.", "The reference to \"making or breaking applications\" in the conclusion strikes me as premature to say the least."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "HkN9lyRxG", "text": ["This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding. ", "Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. ", "The human evaluation is straight-forward and meaningful to compensate for the well-known inaccuracies of automatic evaluation. ", "While the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, ", "it is not entirely correct that the objective is being learnt as claimed in portions of the paper. ", "I would like this point to be clarified better in the paper. ", "I think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. ", "I would like to see comparisons on these tasks."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "request"]}
{"doc_id": "Sye2eNDxM", "text": ["This paper aims to learn hierarchical policies by using a recursive policy structure regulated by a stochastic temporal grammar.", "The experiments show that the method is better than a flat policy for learning a simple set of block-related skills in minecraft (find, get, put, stack) ", "and generalizes better to a modification of the environment (size of room).", "The sequence of subtasks generated by the policy are interpretable.\\n\\n", "Strengths:\\n- The grammar and policies are trained using a sparse reward upon task completion. ", "\\n- The method is well ablated; ", "Figures 4 and 5 answered most questions I had while reading.\\n", "- Theoretically, the method makes few assumptions about the environment and the relationships between tasks.\\n", "- The interpretability of the final behaviors is a good result. ", "\\n\\nWeaknesses:\\n- The implementation gives the agent a -0.5 reward if it generates a currently unexecutable goal g\\u2019. ", "Providing this reward requires knowing the full state of the world. ", "If this hack is required, then this method would not be useful in a real world setting, ", "defeating the purpose of the sparse reward mentioned above. ", "I would really like to see how the method performs without this hack. \\n", "- There are no comparisons to other multitask or hierarchical methods. ", "Progressive Networks or Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning seem like natural comparisons.\\n", "- A video to show what the environments and tasks look like during execution would be helpful.\\n", "- The performances of the different ablations are rather close. ", "Please a standard deviation over multiple training runs. ", "Also, why does figure 4.b not include a flat policy?\\n", "- The stages are ordered in a semantically meaningful order (find is the first stage), ", "but the authors claim that the order is arbitrary. ", "If this claim is going to be included in the paper, it needs to be proven (results shown for random orderings) ", "because right now I do not believe it. ", "\\n\\nQuality:\\nThe method does provide hierarchical and interpretable policies for executing instructions, ", "this is a meaningful direction to work on.", "\\n\\nClarity:\\nAlthough the method is complicated, the paper was understandable.", "\\n\\nOriginality and significance:\\nAlthough the method is interesting, I am worried that the environment has been too tailored for the method, ", "and that it would fail in realistic scenarios. ", "The results would be more significant if the tasks had an additional degree of complexity,", "e.g. \\u201cput blue block next to the green block\\u201d \\u201cget the blue block in room 2\\u201d. ", "Then the sequences of subtasks would be a bit less linear", "(e.g., first need to find blue, then get, then find green, then put). ", "At the moment the tasks are barely more than the actions provided in the environment.", "\\n\\nAnother impedance to the paper\\u2019s significance is the number of hacks to make the method work ", "(ordering of stages, alternating policy optimization, first training each stage on only tasks of previous stage). 
", "Because the method is only evaluated on one simple environment, ", "it unclear which hacks are for the method generally, and which hacks are for the method to work on the environment."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "request", "request", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "non-arg", "evaluation", "fact", "evaluation", "evaluation", "non-arg", "fact", "evaluation"]}
{"doc_id": "rk-GXLRgz", "text": ["This paper suggests a simple yet effective approach for learning with weak supervision. ", "This learning scenario involves two datasets, one with clean data (i.e., labeled by the true function) and one with noisy data, collected using a weak source of supervision. ", "The suggested approach assumes a teacher and student networks, ", "and builds the final representation incrementally, by taking into account the \"fidelity\" of the weak label when training the student at the final step. ", "The fidelity score is given by the teacher, after being trained over the clean data, ", "and it's used to build a cost-sensitive loss function for the students. ", "The suggested method seems to work well on several document classification tasks. ", "Overall, I liked the paper. ", "I would like the authors to consider the following questions - - Over the last 10 years or so, many different frameworks for learning with weak supervision were suggested (e.g., indirect supervision, distant supervision, response-based, constraint-based, to name a few). ", "First, I'd suggest acknowledging these works and discussing the differences to your work. ", "Second - Is your approach applicable to these frameworks? ", "It would be an interesting to compare to one of those methods (e.g., distant supervision for relation extraction using a knowledge base), and see if by incorporating fidelity score, results improve. ", "- Can this approach be applied to semi-supervised learning? ", "Is there a reason to assume the fidelity scores computed by the teacher would not improve the student in a self-training framework?", "- The paper emphasizes that the teacher uses the student's initial representation, when trained over the clean data. ", "Is it clear that this step in needed? ", "Can you add an additional variant of your framework when the fidelity score are computed by the teacher when trained from scratch? ", "using different architecture than the student?"], "labels": ["evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "request", "request", "request", "request", "fact", "request", "request", "request"]}
{"doc_id": "Sy4HaTtlz", "text": ["Quality: The work focuses on a novel problem of generating text sample using GAN and a novel in-filling mechanism of words. ", "Using GAN to generate samples in adversarial setup in texts has been limited due to the mode collapse and training instability issues. ", "As a remedy to these problems an in-filling-task conditioning on the surrounding text has been proposed. ", "But, the use of the rewards at every time step (RL mechanism) to employ the actor-critic training procedure could be challenging computationally challenging.", "Clarity: The mechanism of generating the text samples using the proposed methodology has been described clearly. ", "However the description of the reinforcement learning step could have been made a bit more clear.", "Originality: The work indeed use a novel mechanism of in-filling via a conditioning approach to overcome the difficulties of GAN training in text settings. ", "There has been some work using GAN to generate adversarial examples in textual context too to check the robustness of classifiers. ", "How this current work compares with the existing such literature?", "Significance: The research problem is indeed significant ", "since the use of GAN in generating adversarial examples in image analysis has been more prevalent compared to text settings. ", "Also, the proposed actor-critic training procedure via RL methodology is indeed significant from its application in natural language processing.", "pros: (a) Human evaluations applications to several datasets show the usefulness of MaskGen over the maximum likelihood trained model in generating more realistic text samples.", "(b) Using a novel in-filling procedure to overcome the complexities in GAN training.", "(c) generation of high quality samples even with higher perplexity on ground truth set.", "cons: (a) Use of rewards at every time step to the actor-critic training procure could be computationally expensive.", "(b) How to overcome the situation where in-filling might introduce implausible text sequences with respect to the surrounding words?", "(c) Depending on the Mask quality GAN can produce low quality samples. ", "Any practical way of choosing the mask?"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "request"]}
{"doc_id": "H1_YBgxZz", "text": ["This paper presents a method for clustering based on latent representations learned from the classification of transformed data after pseudo-labellisation corresponding to applied transformation.", "Pipeline: -Data are augmented with domain-specific transformations.", "For instance, in the case of MNIST, rotations with different degrees are applied.", "All data are then labelled as \"original\" or \"transformed by ...(specific transformation)\".", "-Classification task is performed with a neural network on augmented dataset according to the pseudo-labels.", "-In parallel of the classification, the neural network also learns the latent representation in an unsupervised fashion.", "-k-means clustering is performed on the representation space observed in the hidden layer preceding the augmented softmax layer.", "Detailed Comments: (*) Pros -The method outperforms the state-of-art regarding unsupervised methods for handwritten digits clustering on MNIST.", "-Use of ACOL and GAR is interesting, also the idea to make \"labeled\" data from unlabelled ones by using data augmentation.", "(*) Cons -minor: in the title, I find the expression \"unsupervised clustering\" uselessly redundant since clustering is by definition unsupervised.", "-Choice of datasets: we already obtained very good accuracy for the classification or clustering of handwritten digits.", "This is not a very challenging task.", "And just because something works on MNIST, does not mean it works in general.", "What are the performances on more challenging datasets like colored images (CIFAR-10, labelMe, ImageNet, etc.)?", "-This is not clear what is novel here", "since ACOL and GAR already exist.", "The novelty seems to be in the adaptation to GAR from the semi-supervised to the unsupervised setting with labels indicating if data have been transformed or not."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "evaluation"]}
{"doc_id": "ByR8Gr5gf", "text": ["The paper proposed a copula-based modification to an existing deep variational information bottleneck model, such that the marginals of the variables of interest (x, y) are decoupled from the DVIB latent variable model, allowing the latent space to be more compact when compared to the non-modified version. ", "The experiments verified the relative compactness of the latent space, and also qualitatively shows that the learned latent features are more 'disentangled'. ", "However, I wonder how sensitive are the learned latent features to the hyper-parameters and optimizations?", "Quality: Ok. ", "The claims appear to be sufficiently verified in the experiments. ", "However, it would have been great to have an experiment that actually makes use of the learned features to make predictions. ", "I struggle a little to see the relevance of the proposed method without a good motivating example.", "Clarity: Below average. ", "Section 3 is a little hard to understand. ", "Is q(t|x) in Fig 1 a typo? ", "How about t_j in equation (5)? ", "There is a reference that appeared twice in the bibliography (1st and 2nd).", "Originality and Significance: Average. ", "The paper (if I understood it correctly) appears to be mainly about borrowing the key ideas from Rey et. al. 2014 and applying it to the existing DVIB model."], "labels": ["fact", "fact", "non-arg", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "fact", "evaluation", "evaluation"]}
{"doc_id": "BkC-HgcxG", "text": ["In this paper, the authors present an analysis of SGD within an SDE framework. ", "The ideas and the presented results are interesting and are clearly of interest to the deep learning community. ", "The paper is well-written overall.", "However, the paper has important problems. ", "1) The analysis is widely based on the recent paper by Mandt et al. ", "While being an interesting work on its own, the assumptions made in that paper are very strict and not very realistic. ", "For instance, the assumption that the stochastic gradient noise being Gaussian is very restrictive and trying to justify it just by the usual CLT is not convincing especially when the parameter space is extremely large, ", "the setting that is considered in the paper.", "2) There is a mistake in the proof Theorem 1. ", "Even with the assumption that the gradient of sigma is bounded, eq 20 cannot be justified and the equality can only be \"approximately equal to\". ", "The result will only hold if sigma does not depend on theta. ", "However, letting sigma depend on theta is the only difference from Mandt et al. ", "On the other hand, with constant sigma the result is very trivial and can be found in any text book on SDEs (showing the Gibbs distribution). ", "Therefore, presenting it as a new result is misleading. ", "3) Even if the sigma is taken constant and theorem 1 is corrected, I don't think theorem 2 is conclusive. ", "Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex, however it is taken as quadratic) ", "and the result just boils down to computing some probability under a Gaussian distribution, ", "which is still quite trivial. ", "Apart from this assumption not being very realistic, ", "the result does not justify the claims on \"the probability of ending in a certain minimum\" ", "-- which is on the other hand a vague statement. ", "First of all \"ending in\" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes etc. ", "Also it is not very surprising that the inverse image of a wider Gaussian density is larger than of a pointy one. ", "This again does not justify the claims. ", "For instance consider a GMM with two components, where the means of the individual components are close to each other, but one component having a very large variance and a smaller weight, and the other one having a lower variance and higher weight. ", "With authors' claim, the algorithm should spend more time on the wider one, ", "however it is evident that this will not be the case. ", "4) There is a conceptual mistake that the authors assume that SGD will attain the exact stationary distribution even when the SDE is simulated by the fixed step-size Euler integrator. ", "As soon as one uses eta>0 the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course with several smoothness, growth assumptions). ", "The error between the ideal distribution and the empirical distribution will be usually O(eta) depending on the assumption ", "and therefore changing eta will result in a different distribution than the ideal one. ", "With this in mind the stationary distributions for (eta/S) and (2eta/2S) will be clearly different. ", "The experiments are very interesting and I do not underestimate their value. 
", "However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "ByIyxIKef", "text": ["In the Following, pros and cons of the paper are presented.", "Pros ------- 1. Many real-world applications.", "2. Simple architecture and can be reproduced (if given enough details.)", "Cons----------------------1. Ablation study showing whether bidirectional LSTM contributing to the similarity would be helpful.", "2. Baseline is not strong. ", "How about using just LSTM?", "4. It is suprising to see that only concatenation with MLP is used for optimization of capturing regularities across languages. ", "5. Equation-11 looks like softplus function more than vanilla ReLU.", "6. How are the similarity assessments made in the gold standard dataset. ", "The cost function used only suggest binary assessments. ", "Please refer to some SemEval tasks for cross-lingual or cross-level assessments. ", "As binary assessments may not be a right measure to compare articles of two different lengths or languages.", "Minor issues------------1. SNS is meant to be social networking sites?", "2. In Section 2.2, it denote that 'as the figure demonstrates'. ", "No reference to the figure.", "3. In Section 3, 'discussed in detail' pointed to Section 2.1 related work section. ", "Not clear what is discussed in detail there.", "4. Reference to Google Translate API is wrong.", "The paper requires more experimental analysis to judge the significance of the approach presented."], "labels": ["non-arg", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "request", "fact", "request", "fact", "fact", "fact", "evaluation", "fact", "request"]}
{"doc_id": "Sy_JLe5lz", "text": ["This paper uses an information geometric view on hierarchical models to discuss a bias - variance decomposition in Boltzmann machines, ", "presenting interesting conclusions, ", "whereby some more care appears to be needed for making these claims. ", "The paper arrives at the main conclusion that it is possible to reduce both the bias and the variance in a hierarchical model. ", "The discussion is not specific to deep learning nor to Boltzmann machines, but actually addresses hierarchical exponential family models. ", "The methods pertaining hierarchical models are interesting and presented in a clear way. ", "My concern are the following points: The main theorem presents only a lower bound, meaning that it provides no guarantee that the variance can indeed be reduced. ", "The paper seems to ignore that a model with hidden variables may be singular, ", "in which case the Fisher metric is not positive definite and the Cramer Rao bound has no meaning. ", "This interferes with the claims and derivations made in the paper in the case of models with hidden variables. ", "The problem seems to lie in the fact that the presented derivations assume that an optimal distribution in the data manifold is given (see Theorem 1 and proof), effectively making this a discussion about a fully observed hierarchical model. ", "In particular, it is not further specified how to obtain \u03b8\u02c6B(s) in page 6 before (13). ", "Also, in page 5 the paper states that ``it is known that the EM-algorithm can obtain the global optimum of Equation (12) (Amari, 2016, Section 8.1.3)''. ", "However, what is shown in that reference is only that: (Theorem 8.2., Amari, 2016) ``The KL-divergence decreases monotonically by repeating the E-step and the M-step. Hence, the algorithm converges to an equilibrium.'' ", "A model with hidden variables can have several global and local optimisers ", "(see, e.g. https://arxiv.org/abs/1709.05276). ", "The critical points of the EM algorithm can have a non trivial structure, ", "as has been observed in the case of non negative rank matrix varieties ", "(see, e.g., https://arxiv.org/pdf/1312.5634.pdf). ", "OTHER In page 3, ``S_\\beta is e-flat and S_\\alpha ... '', should this not be the other way around? ", "(See also page 5 last paragraph of Section 2.) ", "Please also indicate the precise location in the provided reference. ", "All pages up to page 5 are introduction. ", "Section 2.3. as presented is very vague and does not add much to the discussion. ", "In page 7, please explain E \u03c8(\u03b8\u02c6 )^2 \u2212\u03c8(\u03b8\u2217 )^2=0"], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "reference", "fact", "fact", "reference", "request", "request", "request", "evaluation", "evaluation", "request"]}
{"doc_id": "HyciX9dxM", "text": ["1) Summary This paper proposes a recurrent neural network (RNN) training formulation for encouraging RNN the hidden representations to contain information useful for predicting future timesteps reliably. ", "The authors propose to train a forward and backward RNN in parallel. ", "The forward RNN predicts forward in time and the backward RNN predicts backwards in time. ", "While the forward RNN is trained to predict the next timestep, ", "its hidden representation is forced to be similar to the representation of the backward RNN in the same optimization step. ", "In experiments, it is shown that the proposed method improves training speed in terms of number of training iterations, achieves 0.8 CIDEr points improvement over baselines using the proposed training, and also achieves improved performance for the task of speech recognition.", "2) Pros:+ Novel idea that makes sense for learning a more robust representation for predicting the future and prevent only local temporal correlations learned.", "+ Informative analysis for clearly identifying the strengths of the proposed method and where it is failing to perform as expected.", "+ Improved performance in speech recognition task.", "+ The idea is clearly explained and well motivated.", "3) Cons:Image captioning experiment:In the experimental section, there is an image captioning result in which the proposed method is used on top of two baselines. ", "This experiment shows improvement over such baselines, ", "however, the performance is still worse compared against baselines such as Lu et al, 2017 and Yao et al, 2016. ", "It would be optimal if the authors can use their training method on such baselines and show improved performance, or explain why this cannot be done.", "Unconditioned generation experiments:In these experiments, sequential pixel-by-pixel MNIST generation is performed in which the proposed method did not help. ", "Because of this, two conditioned set ups are performed: 1) 25% of pixels are given before generation, and 2) 75% of pixels are given before generation. ", "The proposed method performs similar to the baseline in the 25% case, and better than the baseline in the 75% case. ", "For completeness, and to come to a stronger conclusion on how much uncertainty really affects the proposed method, this experiment needs a case in which 50% of the pixels are given. ", "Observing 25% of the pixels gives almost no information about the identity of the digit ", "and it makes sense that it\u2019s hard to encode the future, ", "however, 50% of the pixels give a good idea of what the digit identity is. ", "If the authors believe that the 50% case is not necessary, please feel free to explain why.", "Additional comments:The method is shown to converge faster compared to the baselines, ", "however, it is possible that the baseline may finish training faster ", "(the authors do acknowledge the additional computation needed in the backward RNN).", "It would be informative for the research community to see the relationship of training time (how long it takes in hours) versus how fast it learns (iterations taken to learn).", "Experiments on RL planning tasks would be interesting to see (Maybe on a simple/predictable environment).", "4) Conclusion The paper proposes a method for training RNN architectures to better model the future in its internal state supervised by another RNN modeling the future in reverse. 
", "Correctly modeling the future is very important for tasks that require making decisions of what to do in the future based on what we predict from the past. ", "The proposed method presents a possible way of better modeling the future, ", "however, some the results do not clearly back up the claim yet. ", "The given score will improve if the authors are able to address the stated issues."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "fact", "request", "request", "fact", "evaluation", "fact", "evaluation", "evaluation"]}
{"doc_id": "H1W1OsYxG", "text": ["The paper introduces a method for learning graph representations (i.e., vector representations for graphs). ", "An existing node embedding method is used to learn vector representations for the nodes. ", "The node embeddings are then projected into a 2-dimensional space by PCA. ", "The 2-dimensional space is binned using an imposed grid structure. ", "The value for a bin is the (normalized) number of nodes falling into the corresponding region. ", "The idea is simple and easily explained in a few minutes. ", "That is an advantage. ", "Also, the experimental results look quite promising. ", "It seems that the methods outperforms existing methods for learning graph representations. ", "The problem with the approach is that it is very ad-hoc. ", "There are several (existing) ideas of how to combine node representations into a representation for the entire graph. ", "For instance, averaging the node embeddings is something that has shown promising results in previous work. ", "Since the methods is so ad-hoc (node2vec -> PCA -> discretized density map -> CNN architecure) and since a theoretical understanding of why the approach works is missing, it is especially important to compare your method more thoroughly to simpler methods. ", "Again, pooling operations (average, max, etc.) on the learned node2vec embeddings are examples of simpler alternatives. ", "The experimental results are also not explained thoroughly enough. ", "For instance, since two runs of node2vec will give you highly varying embeddings (depending on the initialization), you will have to run node2vec several times to reduce the variance of your resulting discretized density maps. ", "How many times did you run node2vec on each graph?"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "non-arg"]}
{"doc_id": "Hyr9bdveG", "text": ["In this paper the authors give a nice review of clustering methods with deep learning and a systematic taxonomy for existing methods.", "Finally, the authors propose a new method by using one unexplored combination of taxonomy features.", "The paper is well-written and easy to follow.", "The proposed combination is straightforward,", "but lack of novelty.", "From table 1, it seems that the only differences between the proposed method and DEPICK is whether the method uses balanced assignment and pretraining.", "I am not convinced that these changes will lead to a significant difference.", "The performance of the proposed method and DEPICK are also similar in table 1.", "In addition, the experiments section is not comprehensive enough as well.", "the author only tested on two datasets.", "More datasets should be tested for evaluation.", "In addition, It seems that nearly all the experiments results from comparison methods are borrowed from the original publications.", "The authors should finish the experiments on comparison methods and fill the entries in Table 1.", "In summary, the proposed method is lack of novelty compare to existing methods.", "The survey part is nice,", "however extensive experiments should be conducted by running existing methods on different datasets and analyzing the pros and cons of the methods and their application scenarios.", "Therefore, I think the paper cannot be accepted at this stage."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "request", "evaluation"]}
{"doc_id": "B1IwI-2xz", "text": ["This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. ", "Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (corresponding to a low-dimensional subspace within the original parameter), and then training the original network while restricting the projections of its parameters to lie within this subspace. ", "Performance on this subspace is then evaluated relative to that over the full parameter space (the baseline). ", "As an empirical standard, the authors focus on the subspace dimension that achieves a performance of 90% of the baseline. ", "The authors then test out their measure of intrinsic dimensionality for fully-connected networks and convolutional networks, for several well-known datasets, ", "and draw some interesting conclusions.", "Pros:* This paper continues the recent research trend towards a better characterization of neural networks and their performance. ", "The authors show a good awareness of the recent literature, ", "and to the best of my knowledge, their empirical characterization of the number of latent parameters is original. ", "* The characterization of the number of latent variables is an important one, ", "and their measure does perform in a way that one would intuitively expect. ", "For example, as reported by the authors, when training a fully-connected network on the MNIST image dataset, shuffling pixels does not result in a change in their intrinsic dimensionality. ", "For a convolutional network the observed 3-fold rise in intrinsic dimension is explained by the authors as due to the need to accomplish the classification task while respecting the structural constraints of the convnet.", "* The proposed measures seem very practical - ", "training on random projections uses far fewer parameters than in the original space (the baseline),", "and presumably the cost of determining the intrinsic dimensionality would presumably be only a fraction of the cost of this baseline training.", "* Except for the occasional typo or grammatical error, the paper is well-written and organized. ", "The issues are clearly identified, for the most part (but see below...).", "Cons:* In the main paper, the authors perform experiments and draw conclusions without taking into account the variability of performance across different random projections. ", "Variance should be taken into account explicitly, in presenting experimental results and in the definition and analysis of the empirical intrinsic dimension itself. ", "How often does a random projection lead to a high-quality solution, and how often does it not?", "* The authors are careful to point out that training in restricted subspaces cannot lead to an optimal solution for the full parameter domain unless the subspace intersects the optimal solution region (which in general cannot be guaranteed). ", "In their experiments (FC networks of varying depths and layer widths for the MNIST dataset), between projected and original solutions achieving 90% of baseline performance, they find an order of magnitude gap in the number of parameters needed. 
", "This calls into question the validity of random projection as an empirical means of categorizing the intrinsic dimensionality of a neural network.", "* The authors then go on to propose that compression of the network be achieved by random projection to a subspace of dimensionality greater than or equal to the intrinsic dimension. ", "However, I don't think that they make a convincing case for this approach. ", "Again, variation is the difficulty: ", "two different projective subspaces of the same dimensionality can lead to solutions that are extremely different in character or quality. ", "How then can we be sure that our compressed network can be reconstituted into a solution of reasonable quality, even when its dimensionality greatly exceeds the intrinsic dimension?", "* The authors argue for a relationship between intrinsic dimensionality and the minimum description length (MDL) of their solution, ", "in that the intrinsic dimensionality should serve as an upper bound on the MDL. ", "However they don't formally acknowledge that there is no standard relationship between the number of parameters and the actual number of bits needed to represent the model - ", "it varies from setting to setting, with some parameters potentially requiring many more bits than others. ", "And given this uncertain connection, and given the lack of consideration given to variation in the proposed measure of intrinsic dimensionality, it is hard to accept that \"there is some rigor behind\" their conclusion that LeNet is better than FC networks for classification on MNIST ", "because its empirical intrinsic dimensionality score is lower.", "* The experimental validation of their measure of intrinsic dimension could be made more extensive. ", "In the main paper, they use three image datasets - MNIST, CIFAR-10 and ImageNet. ", "In the supplemental information, they report intrinsic dimensions for reinforcement learning and other training tasks on four other sets.", "Overall, I think that this characterization does have the potention to give insights into the performance of neural networks, provided that variation across projections is properly taken into account. ", "For now, more work is needed."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "non-arg", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "SkHg5PQxf", "text": ["While I acknowledge that training generative models with binary latent variables is hard, ", "I'm not sure this paper really makes valuable progress in this direction. ", "The only results that seem promising are those on binarized MNIST, for the non-convolutional architecture, ", "and this setting isn't particularly exciting. ", "All other experiments seem to suggest that the proposed model/algorithm is behind the state of the art. ", "Moreover, the proposed approach is fairly incremental, compared to existing work on RWS, VIMCO, etc.", "So while this work seem to have been seriously and thoughtfully executed, ", "I think it falls short of the ICLR acceptance bar."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SJUEXlDxf", "text": ["Summary This paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory, a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. ", "The architecture is tested for tracking entities in recipes, as well as generating the natural language description for the next step in a recipe. ", "It is compared against a suit of baselines, such as GRUs, Recurrent Entity Networks, Seq2Seq and the Neural Checklist Model. ", "While I liked the overall paper, ", "I am worried about the generality of the model, the qualitative analysis, as well as a fair comparison to Recurrent Entity Networks and non-neural baselines.", "Strengths I believe the authors made a good effort in comparing against existing neural baselines (Recurrent Entity Networks, Neural Checklist Model) *for their task*. ", "That said, it is unclear to me how generally applicable the method is and whether the comparison against Recurrent Entity Networks is fair (see Weaknesses).", "I like the ablation study.", "Weaknesses While I find the Neural Process Networks architecture interesting ", "and I acknowledge that it outperforms Recurrent Entity Networks for the presented tasks, ", "after reading the paper it is not clear to me how generally applicable the architecture is. ", "Some design choices seem rather tailored to the task at hand (manual collection of actions MTurk annotation in section 3.1) ", "and I am wondering where else the authors see their method being applied given that the architecture relies on all entities and actions being known in advance. ", "My understanding is that the architecture could be applied to bAbI and CBT (the two tasks used in the Recurrent Entity Networks paper). ", "If that is the case, a fair comparison to Recurrent Entity Networks would have been to test against Recurrent Entity Networks on these tasks too. ", "If they the architecture cannot be applied in these tasks, the authors should explain why.", "I am not convinced by the qualitative analysis. ", "Table 2 tells me that even for the best model the entity selection performance is rather unreliable (only 55.39% F1), ", "yet all examples shown in Table 3 look really good, missing only the two entities oil (1) and sprinkles (3). ", "This suggests that these examples were cherry-picked ", "and I would like to see examples that are sampled randomly from the dev set. ", "I have a similar concern regarding the generation task. ", "First, it is not mentioned where the examples in Table 6 are taken from \u2013 is it the train, dev or test set? ", "Second, the overall BLEU score seems quite low even for the best model, ", "yet the examples in Table 6 look really good. ", "In my opinion, a good qualitative analysis should also discuss failure cases. ", "Since the BLEU score is so low here, ", "you might also want to compare perplexity of the models.", "The qualitative analysis in Table 5 is not convincing either. ", "In Appendix A.1 it is mentioned that word embeddings are initialized from word2vec trained on the training set. ", "My suspicion is that one would get the clustering in Table 4 already from those pretrained vectors, maybe even when pretrained on the Google news corpus. 
", "Hence, it is not clear what propagating gradients through the Neural Process Networks into the action embeddings adds, or put differently, why does it have to be a differentiable architecture when an NLP pipeline might be enough? ", "This could easily be tested by another ablation where action embeddings are pretrained using word2vec and then fixed during training of the Neural Process Network. ", "Moreover, in 3.3 it is mentioned that even the Action Selection is pretrained, ", "which makes me wonder what is actually trained jointly in the architecture and what is not.", "I think the difficulty of the task at hand needs to be discussed at some point, ideally early in the paper. ", "Until examples on page 7 are shown, I did not have a sense for why a neural architecture is chosen. ", "For example, in 2.3 it is mentioned that for \"wash and cut\" the two functions fwash and fcut need to be selected. ", "For this example, this seems trivial ", "as the functions have the same name ", "(and you could even have a function per name!). ", "As far as I understand, the point of the action selector is to only have a fixed number of learned actions and multiple words (cut, slice etc.) should select the same action fcut. ", "Otherwise (if there is little language ambiguity) I would not see the need for a complex neural architecture. ", "Related to that, a non-neural baseline for the entity selection task that in my opinion definitely needs to be added is extracting entities using a pretrained NER system and returning all of them as the selection.", "p2 Footnote 1: So if I understand this correctly, this work builds upon a dataset of over 65k recipes from Kiddon et al. (2016), but only for 875 of those detailed annotations were created?", "Minor Comments p1: The statement \"most natural language understanding algorithms do not have the capacity \u2026\" should be backed by reference.", "p2: \"context representation ht\" \u2013 I would directly mention that this is a sentence encoding.", "p3: 2.4: I have the impression what you are describing here is known in the literature as entity linking.", "p3 Eq.3: Isn't c3*0 always a vector of zeros?", "p4 Eq.6: W4 is an order-3 tensor, correct?", "p4 Eq.8: What is YC and WC here and what are their dimensions? ", "I am confused by the softmax, ", "as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), ", "so this should be a sigmoid to allow for multiple actions to attain a probability of 1?", "p5: \"See Appendix for details\" -> \"see Appendix C for details\"", "p5 3.3: Could you elaborate on the heuristic for extracting verb mentions? ", "Is only one verb mention per sentence extracted?", "p5: \"trained to minimize cross-entropy loss\" -> \"trained to minimize the cross-entropy loss\"", "p5 3.3: What is the global loss?", "p6: \"been read (\u00a72.5.\" -> \"been read (\u00a72.5).\"", "p6: \"We encode these vectors using a bidirectional GRU\" \u2013 I think you composing a fixed-dimensional vector from the entity vectors? ", "What's eI?", "p7: For which statement is (Kim et al. 2016) the reference? 
", "Surely, they did not invent the Hadamard product.", "p8: \"Our model, in contrast\" use\" -> \"Our model, in contrast, uses\".", "p8 Related Work: I think it is important to mention that existing architectures such as Memory Netwroks could, in principle, learn to track entities and devote part of their parameters to learn the effect of actions. ", "What Neural Process Networks are providing is a strong inductive bias for tracking entities and learning the effect of actions that is useful for the task considered in this paper. ", "As mentioned in the weaknesses, this might however come at the price of a less general model, ", "which should be discussed."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "non-arg", "request", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "request", "request", "request", "request", "request", "request", "request", "request", "request", "fact", "request", "request", "evaluation", "fact", "request"]}
{"doc_id": "Hy9zmitlG", "text": ["This paper presents a novel method for spike based learning that aims at reducing the needed computation during learning and testing when classifying temporal redundant data.", "This approach extends the method presented on Arxiv on Sigma delta quantized networks", "(Peter O\u2019Connor and Max Welling. Sigma delta quantized networks. arXiv preprint arXiv:1611.02024, 2016b.).", "Overall, the paper is interesting and promising;", "only a few works tackle the problem of learning with spikes showing the potential advantages of such form of computing.", "The paper, however, is not flawless.", "The authors demonstrate the method on just two datasets, and effectively they show results of training only for Feed-Forward Neural Nets", "(the authors claim that \u201cthe entire spiking network end-to-end works\u201d referring to their pre-trained VGG19,", "but this paper presents only training for the three top layers).", "Furthermore, even if suitable datasets are not available, the authors could have chosen to train different architectures.", "The first dataset is the well-known benchmark MNIST also presented in a customized Temporal-MNIST.", "Although it is a common base-line, some choices are not clear:", "why using a FFNN instead that a CNN which performs better on this dataset;", "how data is presented in terms of temporal series \u2013", "this applies to the Temporal MNIST too;", "why performances for Temporal MNIST \u2013 which should be a more suitable dataset \u2014 are worse than for the standard MNIST;", "what is the meaning of the right column of Figure 5", "since it\u2019s just a linear combination of the GOps results.", "For the second dataset, some points are not clear too:", "why the labels and the pictures seem not to match (in appendix E);", "why there are more training iterations with spikes w.r.t. the not-spiking case.", "Overall, the paper is mathematically sound,", "except for the \u201cfuture updates\u201d meaning which probably deserves a clearer explanation.", "Moreover, I don\u2019t see why the learning rule equations (14-15) are described in the appendix,", "while they are referred constantly in the main text.", "The final impression is that the problem of the dynamical range of the hidden layer activations is not fully resolved by the empirical solution described in Appendix D:", "perhaps this problem affects CCNs more than FFN.", "Finally, there are some minor issues here and there", "(the authors show quite some lack of attention for just 7 pages):", "-\tTwo times \u201cget\u201d in \u201cwe get get a decoding scheme\u201d in the introduction;", "-\tTwo times \u201cupdate\u201d in \u201cour true update update as\u201d in Sec. 2.6;", "-\tPag3 correct the capital S in 2.3.1", "-\tPag4 Figure 1 increase font size (also for Figure2);", "close bracket after Equation 3;", "N (number of spikes) is not defined", "-\tPag5 \u201cone-hot\u201d or \u201conehot\u201d;", "-\tin the inline equation the sum goes from n=1 to S, while in eq.(8) it goes from n=1 to N;", "-\tEq(10)(11)(12) and some lines have a typo (a \\cdot) just before some of the ws;", "-\tPag6 k_{beta} is not defined in the main text;", "-\tPag7 there are two \u201cso that\u201d in 3.1;", "capital letter \u201cIt used 32x10^12..\u201d;", "beside, here, why do not report the difference in computation w.r.t. 
not-spiking nets?", "-\tPag7 in 3.2 \u201cdiscussed in 1\u201d is section 1?", "-\tPag14 Appendix E, why the labels don\u2019t match the pictures;", "-\tPag14 Appendix F, explain better the architecture used for this experiment."], "labels": ["fact", "fact", "reference", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "request", "fact", "evaluation", "request", "request", "request", "request", "request", "fact", "evaluation", "request", "request", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "request", "request", "fact", "request", "fact", "evaluation", "fact", "fact", "fact", "request", "request", "request", "request"]}
{"doc_id": "H1qiNa1HM", "text": ["This paper examines the effects of RL in an augmented action space which includes sequences of actions (e.g. meta actions) as well as the primitive actions defined by the MDP.", "The authors extend a GPU-based A3C implementation to include meta actions", "and show that their algorithm can achieve better sample complexity / and higher performance in most cases.", "The paper is well written,", "but fails to mention the relationship between meta-actions and the Options framework (Sutton et al 99).", "In particular, it seems that meta-actions can just be viewed as a set of predefined options given to the agent.", "Much prior work has studied how to combine options with Deep RL.", "To name a few: Multi-Level Discovery of Deep Options (Fox et al 17),", "Classifying Options for Deep RL (Arulkumaran 16),", "and Deep Exploration via Bootstrapped DQN (Osband et al).", "The former even learns the options rather than pre-defining them.", "This connection needs to be made explicit.", "I have concerns regarding the results in this paper:", "\u2022\t Why on Qbert is the switching agent able to do so much better than both IU and DU?", "I suspect the curves may not be averaged over enough trials and results may be noisy,", "as it seems this shouldn\u2019t be possible.", "Results curves should show the standard deviation or variance of the 3 runs.", "\u2022\tI am concerned this approach will not scale to games that have more actions than the 4 games explored.", "The concern is that A4C exponentially increases the size of the action space as a function of k.", "Cartpole as well as the explored Atari games all have relatively small action spaces,", "so I think it is critical to show that it scales to games with larger spaces as well.", "My concern is that in larger actions spaces, the gains A4C gets from meta-actions will be outweighed by the difficulty of having to learn with so many different actions.", "\u2022\tWhat is the value for k used in Atari experiments?", "Overall, I was very excited about this paper after reading the introduction.", "I really like the idea of allowing the network to decide on sequences of actions,", "and I think many games do have opportunity to identify and re-use combos of primitive actions (e.g. 
to stay between lanes in Beam Rider).", "However, I don\u2019t think the architecture, algorithm, and results live up to this motivation.", "Simply augmenting the action space with all possible sequences of actions begs for a better solution.", "Pros:\u2022\tThe authors show that in certain domains, exploration can be aided by pre-defined meta actions.", "\u2022\tThe authors introduce an algorithm to squeeze more gradient updates out of a meta-action (DU-A4C).", "This is related to the insight that meta actions can be thought of as sequences of primitive actions.", "Cons:\u2022\tRelationship to Options is not identified.", "\u2022\tResults are only given for games with small action spaces.", "Unclear how the method scales to larger action spaces.", "\u2022\tMethod for augmenting action space is not particularly interesting.", "\u2022\tHeuristic switching is somewhat undesirable.", "It would be nice to understand why DU stops working well and how to improve it."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "reference", "reference", "reference", "fact", "request", "evaluation", "non-arg", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "request", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "Bysyyl5eM", "text": ["This paper proposed to jointly train a multilingual skip-gram model and a cross-lingual sentence similarity model to construct sentence embeddings. ", "They used cross-lingual classification tasks for evaluation. ", "This idea is fairly simple but interesting. ", "Their results on some language pairs showed that the joint training is effective ", "(results on table 1 showed that sent-LSTM worked best with dim=128). ", "The downside of this paper is that their results could not outperform state-of-the-art results. ", "Some detailed comments: -\tThe authors should weaken some of the statements, e.g. \u2018since our multilingual skip-gram and cross-lingual sentence similarity models are trained jointly, they can inform each other through the shared word embedding layer and promote the compositionality of learned word embeddings at training time\u2019. ", "Actually, there are no experimental results and evidences in this paper supporting this statement.", "-\tI don\u2019t see that \u2018Amenable to Multi-task modeling\u2019 is a contribution of this paper. ", "The authors should report additional experimental results to prove this statement."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation", "request"]}
{"doc_id": "HJpgrTKxf", "text": ["Summary: The paper proposes a learnable skimming mechanism for RNN.", "The model decides whether to send the word to a larger heavy-weight RNN or a light-weight RNN.", "The heavy-weight and the light-weight RNN each controls a portion of the hidden state.", "The paper finds that with the proposed skimming method, they achieve a significant reduction in terms of FLOPS.", "Although it doesn\u2019t contribute to much speedup on modern GPU hardware, there is a good speedup on CPU,", "and it is more power efficient.", "Contribution: - The paper proposes to use a small RNN to read unimportant text.", "Unlike (Yu et al., 2017), which skips the text, here the model decides between small and large RNN.", "Pros: - Models that dynamically decide the amount of computation make intuitive sense and are of general interests.", "- The paper presents solid experimentation on various text classification and question answering datasets.", "- The proposed method has shown reasonable reduction in FLOPS and CPU speedup with no significant accuracy degradation (increase in accuracy in some tasks).", "- The paper is well written, and the presentation is good.", "Cons: - Each model component is not novel.", "The authors propose to use Gumbel softmax, but does compare other gradient estimators.", "It would be good to use REINFORCE to do a fair comparison with (Yu et al., 2017 ) to see the benefit of using small RNN.", "- The authors report that training from scratch results in unstable skim rate, while Half pretrain seems to always work better than fully pretrained ones.", "This makes the success of training a bit adhoc,", "as one need to actively tune the number of pretraining steps.", "- Although there is difference from (Yu et al., 2017),", "the contribution of this paper is still incremental.", "Questions: - Although it is out of the scope for this paper to achieve GPU level speedup,", "I am curious to know some numbers on GPU speedup.", "- One recommended task would probably be text summarization, in which the attended text can contribute to the output of the summary.", "Conclusion: - Based on the comments above, I recommend Accept"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "request", "evaluation"]}
{"doc_id": "Bk1-V2plG", "text": ["The paper proposes a technique for training quantized neural networks, where the precision (number of bits) varies per layer and is learned in an end-to-end fashion.", "The idea is to add two terms to the loss, one representing quantization error, and the other representing the number of discrete values the quantization can support (or alternatively the number of bits used).", "Updates are made to the parameter representing the # of bits via the sign of its gradient.", "Experiments are conducted using a LeNet-inspired architecture on MNIST and CIFAR10.", "Overall, the idea is interesting, as providing an end-to-end trainable technique for distributing the precision across layers of a network would indeed be quite useful.", "I have a few concerns: First, I find the discussion around the training methodology insufficient.", "Inherently, the objective is discontinuous since # of bits is a discrete parameter.", "This is worked around by updating the parameter using the sign of its gradient.", "This is assuming the local linear approximation given by the derivative is accurate enough one integer away;", "this may or may not be true,", "but it's not clear and there is little discussion of whether this is reasonable to assume.", "It's also difficult for me to understand how this interacts with the other terms in the objective (quantization error and loss).", "We'd like the number of bits parameter to trade off between accuracy (at least in terms of quantization error, and ideally overall loss as well) and precision.", "But it's not at all clear that the gradient of either the loss or the quantization error w.r.t. the number of bits will in general suggest increasing the number of bit (thus requiring the bit regularization term).", "This will clearly not be the case when the continuous weights coincide with the quantized values for the current bit setting.", "More generally, the direction of the gradient will be highly dependent on the specific setting of the current weights.", "It's unclear to me how effectively accuracy and precision are balanced by this training strategy,", "and there isn't any discussion of this point either.", "I would be less concerned about the above points if I found the experiments compelling.", "Unfortunately, although I am quite sympathetic to the argument that state of the art results or architectures aren't necessary for a paper of this kind,", "the results on MNIST and CIFAR10 are so poor that they give me some concern about how the training was performed and whether the results are meaningful.", "Performance on MNIST in the 7-11% test error range is comparable to a simple linear logistic regression model; for a CNN that is extremely bad.", "Similarly, 40% error on CIFAR10 is worse than what some very simple fully connected models can achieve.", "Overall, while I like the and think the goal is good,", "I think the motivation and discussion for the training methodology is insufficient, and the empirical work is concerning.", "I can't recommend acceptance."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ByV24asxM", "text": ["Monte-Carlo Tree Search is a reasonable and promising approach to hyperparameter optimization or algorithm configuration in search spaces that involve conditional structure.", "This paper must acknowledge more explicitly that it is not the first to take a graph-search approach. ", "The cited work related to SMAC and Hyperopt / TPE addresses this problem similarly. ", "The technique of separating a description language from the optimization algorithm is also used in both of these projects / lines of research. ", "The [mis-cited] paper titled \u201cMaking a science of model search \u2026\u201d is about using TPE to configure 1, 2, and 3 layer convnets for several datasets, including CIFAR-10. ", "SMAC and Hyperopt have been used to search large search spaces involving pre-processing and classification algorithms (e.g. auto-sklearn, autoweka, hyperopt-sklearn). ", "There have been near-annual workshops on AutoML and Bayesian optimization at NIPS and ICML (see e.g. automl.org).", "There is a benchmark suite of hyperparameter optimization problems that would be a better way to evaluate MCTS as a hyperparameter optimization algorithm: http://www.ml4aad.org/automl/hpolib/"], "labels": ["evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "evaluation"]}
{"doc_id": "S1UrbZQ-f", "text": ["(Score before author revision: 4)\\n (Score after author revision: 7)\\n\\n", "I think the authors have taken both the feedback of reviewers as well as anonymous commenters thoroughly into account, running several ablations as well as reporting nice results on an entirely new dataset (MultiNLI)", "where they show how their multi level fusion mechanism improves a baseline significantly.", "I think this is nice", "since it shows how their mechanism helps on two different tasks (question answering and natural language inference).\\n\\n", "Therefore I would now support accepting this paper.\\n\\n", "------------(Original review below) -----------------------\\n\\n", "The authors present an enhancement to the attention mechanism called \\\"multi-level fusion\\\"", "that they then incorporate into a reading comprehension system.", "It basically takes into account a richer context of the word at different levels in the neural net to compute various attention scores.\\n\\n", "i.e. the authors form a vector \\\"HoW\\\" (called history of the word),", "that is defined as a concatenation of several vectors:\\n\\n HoW_i = [g_i, c_i, h_i^l, h_i^h]\\n\\nwhere g_i = glove embeddings, c_i = COVE embeddings there is no predicate put with 15 ", "(McCann et al. 2017),", " and h_i^l and h_i^h are different LSTM states for that word.\\n\\n", "The attention score is then a function of these concatenated vectors i.e. \\\\alpha_{ij} = \\\\exp(S(HoW_i^C, HoW_j^Q))\\n\\n it cannot stay by itself merge with 21 ", "Results on SQuAD show a small gain in accuracy (75.7->76.0 Exact Match).", "The gains on the adversarial set are larger", "but that is because some of the higher performing, more recent baselines don't seem to have adversarial numbers.\\n\\n", "The authors also compare various attention functions (Table 5) showing a particular one (Symmetric + ReLU) works the best.\\n\\n", "Comments:\\n\\n-I feel overall the contribution is not very novel.", "The general neural architecture that the authors propose in Section 3 is generally quite similar to the large number of neural architectures developed for this dataset ", "(e.g. some combination of attention between question/context and LSTMs over question/context).", "The only novelty is these \\\"HoW\\\" inputs to the extra attention mechanism", "that takes a richer word representation into account.\\n\\n", "-I feel the model is seems overly complicated for the small gain (i.e. 75.7->76.0 Exact Match),", "especially on a relatively exhausted dataset (SQuAD) that is known to have lots of pecularities (see anonymous comment below).", "It is possible the gains just come from having more parameters.\\n\\n", "-The authors (on page 6) claim that that by running attention multiple times with different parameters but different inputs (i.e. \\\\alpha_{ij}^l, \\\\alpha_{ij}^h, \\\\alpha_{ij}^u) it will learn to attend to \\\"different regions for different level\\\".", "However, there is nothing enforcing this", "and the gains just probably come from having more parameters/complexity."], "labels": ["non-arg", "evaluation", "fact", "evaluation", "fact", "evaluation", "non-arg", "fact", "fact", "fact", "fact", "fact", "reference", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation"]}
{"doc_id": "S1VwmoFxz", "text": ["In this paper, the authors investigate variance reduction techniques for agents with multi-dimensional policy outputs, in particular when they are conditionally independent ('factored').", "With the increasing focus on applying RL methods to continuous control problems and RTS type games, this is an important problem", "and this technique seems like an important addition to the RL toolbox.", "The paper is well written, the method is easy to implement,", "and the algorithm seems to have clear positive impact on the presented experiments.", "- The derivations in pages 4-6 are somewhat disconnected from the rest of the paper:", "the optimal baseline derivation is very standard (even if adapted to the slightly different situation situated here),", "and for reasons highlighted by the authors in this paper, they are not often used;", "the 'marginalized' baseline is more common, and indeed, the authors adopt this one as well.", "In light of this (and of the paper being quite a bit over the page limit)-", "is this material (4.2->4.4) mostly not better suited for the appendix?", "Same for section 4.6 (which I believe is not used in the experiments).", "- The experimental section is very strong;", "regarding the partial observability experiments, assuming actions are here factored as well, I could see four baselines", "(two choices for whether the baseline has access to the goal location or not, and two choices for whether the baseline has access to the vector $a_{-i}$).", "It's not clear which two baselines are depicted in 5b -", "is it possible to disentangle the effect of providing $a_{-i}$ and the location of the hole to the baseline?", "(side note: it is an interesting idea to include information not available to the agent as input to the baseline though it does feel a bit 'iffy' ;", "the agent requires information to train,", "but is not provided the information to act.", "Out of curiosity, is it intended as an experiment to verify the need for better baselines?", "Or as a 'fair' training procedure?)", "- Minor: in equation 2- is the correct exponent not t'?", "Also since $\\rho_\\pi$ is define with a scaling $(1-\\gamma)$ (to make it an actual distribution),", "I believe the definition of $\\eta$ should also be multiplied by $(1-\\gamma)$ (as well as equation 2)."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "fact", "non-arg", "non-arg", "evaluation", "fact", "request"]}
{"doc_id": "BkTles9xM", "text": ["The paper proposes a new evaluation measure for evaluating GANs.", "Specifically, the paper proposes generating synthetic images using GAN, training a classifier (for an auxiliary task, not the real vs fake discriminator) and measuring the performance of this classifier on held out real data.", "While the idea of using a downstream classification task to evaluate the quality of generative models has been explored before (e.g. semi-supervised learning),", "I think that this is the first paper to evaluate GANs using such an evaluation metric.", "I'm not super convinced that this is an useful evaluation metric as the absolute number is somewhat to interpret and dependent on the details of the classifier used.", "The results in Table 1 change quite a bit depending on the classifier.", "It would be useful to add a discussion of the failure modes of the proposed metric.", "It seems like a generator which generates samples close to the classification boundary (but drops examples far away from the boundary) could still achieve a high score under this metric.", "In the experiments, were different architectures used for different GAN variants?", "I think the mode-collapse evaluation metrics in MR-GAN are worth discussing in Section 2.1", "Mode Regularized Generative Adversarial Networks https://arxiv.org/abs/1612.02136"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "non-arg", "request", "reference"]}
{"doc_id": "SJ22faFez", "text": ["There is no scientific consensus on whether quantum annealers such as the D-Wave 2000Q that use the transverse-field Ising models yield any gains over classical methods ", "(c.f. https://arxiv.org/abs/1703.00622). ", "However, it is an exciting research area ", "and this paper is an interesting demonstration of the feasibility of using quantum annealers for reinforcement learning. ", "This paper builds on Crawford et al. (2016), an unpublished preprint, who develop a quantum Boltzmann machine reinforcement learning algorithm (QBM-RL). ", "A QBM consists of adding a transverse field term to the RBM Hamiltonian (negative log likelihood), ", "but the benefits of this for unsupervised tasks are unclear ", "(c.f. https://arxiv.org/abs/1601.02036, another unpublished preprint). ", "QBM-RL consists of using a QBM to model the state-action variables: ", "it is an undirected graphical model whose visible nodes are clamped to observed state-action pairs. ", "The hidden nodes model dependencies between states and actions, and the weights of the model are updated to maximize the free energy or Q function (value of the state-action pair).", "The authors extend QBM-RL to work with quantum annealers such as the D-Wave 2000Q, which has a specific bipartite graph structure and requires special consideration ", "because it can only yield samples of hidden variables in a fixed basis. ", "To overcome this, the authors develop a Suzuki-Trotter expansion and call it 'replica stacking', where a classical Hamiltonian in one dimension higher is used to approximate the quantum Hamiltonian. ", "This enables the use of quantum annealers. ", "The authors compare their method to standard baselines in a grid world environment.", "Overall, I do not want to criticize the work. ", "It is an interesting proof of concept. ", "But given the high price of quantum annealers, limited applicability of the technique, and unclear benefits of the authors' method, I do not think it is relevant to this specific conference. ", "It may be better suited to a workshop specific to quantum machine learning methods. ", "======================================= + please add an algorithm box for your method. ", "It deviates significantly from QBM-RL. ", "For example, something like: (1) init weights of boltzmann machine randomly (2) sample c_eff ~ C from the pool of configurations sampled from the transverse-field Ising model using a quantum annealer with chimera graph (3) using the samples, calculate effective classical hamiltonian used to approximate the quantum system (4) use the weight update rules derived from Bellman equations (spell out the rules). ", "+ moving the details of sampling into the appendix would help; ", "they are not important for understanding the main ingredients of your method", "There are so many moving parts in your system, ", "and someone without a physics background will struggle to understand it. ", "Clarifying the algorithm in terms familiar to machine learning researchers will go a long way toward helping people understand your method. ", "+ the benefits of your method is unclear - ", "it looks like the method works, but doesn't outperform the others. ", "this is fine, ", "but it is better to be straightforward about this and bill it as a 'proof of concept' ", "+ perhaps consider rebranding the paper as something like 'RL using replica stacking for sampling from quantum boltzmann machines with quantum annealers'. 
", "Elucidating why replica stacking is a crucial contribution of your work would be helpful, and could be of broad interest in the machine learning community. ", "Right now it is too dense to be useful for the average person without a physics background: ", "what difficulties are intrinsic to a quantum Hamiltonian? ", "What is the intuition behind the Suzuki-Trotter decomposition you develop? ", "What is the 'quantum' Boltzmann machine in machine learning terms (hidden-hidden connections in an undirected graphical model!)? ", "What is replica-stacking in graphical model terms ", "(this would be a great ML contribution in its own right!)? ", "Really spelling these things out in detail (or in the appendix) would help", "========================================== 1) eq 14 is malformed", "2) references are not well-formatted", "3) need factor of 1/2 to avoid double counting in sums over nearest neighbors (please be precise)"], "labels": ["fact", "reference", "evaluation", "evaluation", "fact", "fact", "evaluation", "reference", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request"]}
{"doc_id": "H1--71dlz", "text": ["The paper proposes to improve the kernel approximation of random features by using quadratures, in particular, stochastic spherical-radial rules.", "The quadrature rules have smaller variance given the same number of random features,", "and experiments show its reconstruction error and classification accuracies are better than existing algorithms.", "It is an interesting paper,", "but it seems the authors are not aware of some existing works [1, 2] on quadrature for random features.", "Given these previous works, the contribution and novelty of the paper is limited.", "[1] Francis Bach. On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions. JMLR, 2017.", "[2] Tri Dao, Christopher De Sa, Christopher R\u00e9. Gaussian Quadrature for Kernel Features. NIPS 2017"], "labels": ["fact", "fact", "fact", "evaluation", "fact", "evaluation", "reference", "reference"]}
{"doc_id": "B10Nn-jlf", "text": ["The authors consider new attacks for generating adversarial samples against neural networks.", "In particular, they are interested in approximating gradient-based white-box attacks such as FGSM in a black-box setting by estimating gradients from queries to the classifier.", "They assume that the attacker is able to query, for any example x, the vector of probabilities p(x) corresponding to each class.", "Given such query access it\u2019s trivial to estimate the gradients of p using finite differences.", "As a consequence one can implement FGSM using these estimates assuming cross-entropy loss, as well as a logit-based loss.", "They consider both iterative and single-step FGSM attacks in the targeted (i.e. the adversary\u2019s goal is to switch the example\u2019s label to a specific alternative label) and un-targeted settings (any mislabelling is a success).", "They compare themselves to transfer black-box attacks, where the adversary trains a proxy model and generates the adversarial sample by running a white-box attack on that model.", "For a number of target classifiers on both MNIST and CIFAR-10, they show that these attacks outperform the transfer-based attacks, and are comparable to white-box attacks, while maintaining low distortion on the attack samples.", "One drawback of estimating gradients using finite differences is that the number of queries required scales with the dimensionality of the examples,", "which can be prohibitive in the case of images.", "They therefore describe two practical approaches for query reduction \u2014 one based on random feature grouping, and the other on PCA (which requires access to training data).", "They once again demonstrate the effectiveness of these methods across a number of models and datasets, including models deploying adversarially trained defenses.", "Finally, they demonstrate compelling real-world deployment against Clarifai classification models designed to flag \u201cNot Safe for Work\u201d content.", "Overall, the paper provides a very thorough experimental examination of a practical black-box attack that can be deployed against real-world systems.", "While there are some similarities with Chen et al. with respect to utilizing finite-differences to estimate gradients, I believe the work is still valuable for its very thorough experimental verification, as well as the practicality of their methods.", "The authors may want to be more explicit about their claim in the Related Work section that the running time of their attack is \u201c40x\u201d less than that of Chen et al.", "While this is believable, there is no running time comparison in the body of the paper."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "fact"]}
{"doc_id": "Bk_WgJqgM", "text": ["Authors proposed a neural network based machine translation method between two programming languages. ", "The model is based on both source/target syntax trees and performs an attentional encoder-decoder style network over the tree structure.", "The new things in the paper are the task definition and using the tree-style network in both encoder and decoder. ", "Although each structure of encoder/decoder/attention network is based on the application of some well-known components, ", "unfortunately, the paper pays much space to describe them. ", "On the other hand, the whole model structure looks to be easily generalized to other tree-to-tree tasks and might have some potential to contribute this kind of problems.", "In experimental settings, there are many shortages of the description. ", "First, it is unclear that what the linearization method of the syntax tree is, ", "which could affect the final model accuracy. ", "Second, it is also unclear what the method to generate train/dev/test data is. ", "Are those generated completely randomly? ", "If so, there could be many meaningless (e.g., inexecutable) programs in each dataset. ", "What is the reasonableness of training such kind of data, or are they already avoided from the data? ", "Third, the evaluation metrics \"token/program accuracy\" looks insufficient about measuring the correctness of the program ", "because it has sensitivity about meaningless differences between identifier names and some local coding styles.", "Authors also said that CoffeeScript has a succinct syntax and Javascript has a verbose one without any agreement about what the syntax complexity is. ", "Since any CoffeeScript programs can be compiled into the corresponding Javascript programs, ", "we should assume that CoffeeScript is the only subset of Javascript (without physical difference of syntax), ", "and this translation task may never capture the whole tendency of Javascript. ", "In addition, authors had generated the source CoffeeScript codes, ", "which seems that this task is only one of \"synthetic\" task and no longer capture any real world's programs.", "If authors were interested in the tendency of real program translation task, they should arrange the experiment by collecting parallel corpora between some unrelated programming languages using resources in the real world.", "Global attention mechanism looks somewhat not suitable for this task. ", "Probably we can suppress the range of each attention by introducing some prior knowledge about syntax trees (e.g., only paying attention to the descendants in a specific subtree).", "Suggestion: After capturing the motivation of the task, I suspect that the traditional tree-to-tree (also X-to-tree) \"statistical\" machine translation methods still can also work correctly in this task. ", "The traditional methods are basically based on the rule matching, which constructs a target tree by selecting source/target subtree pairs and arranging them according to the actual connections between each subtree in the source tree. ", "This behavior might be suitable to transform syntax trees while keeping their whole structure, and also be able to treat the OOV (e.g., identifier names) problem by a trivial modification. 
", "Although it is not necessary, it would like to apply those methods to this task as another baseline if authors are interested in."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "fact", "request", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "request"]}
{"doc_id": "rJAS034ez", "text": ["This paper presents a framework to recover a set of independent mechanisms. ", "In order to do so it uses a set of experts each one made out of a GAN.", "\\n\\nMy main concern with this work is that I don't see any mechanism in the framework that prevents an expert (or few of them) to win all examples except its own learning capacities. ", "p7 authors have also noticed that several experts fail to specialize ", "and I bet that is the reason why.", "\\nThus, authors should analyze how well we can have all/most experts specialize in a pool vs expert capacity/architecture.", "\\nIt would also be great to integrate a direct regularization mechanism in the cost in order to do so. ", "Like for example a penalty in how many examples a expert has catched.", "\\n\\nMoreover, the discrimator D (which is trained to discriminate between real or fake examples) seems to be directly used to tell if an example is throw from the targeted distribution. ", "It is not the same task. ", "How D will handle an example far from fake or real ones ? ", "Why will D answer negatively (or positively) on this example ?"], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "request", "evaluation", "evaluation", "non-arg", "non-arg"]}
{"doc_id": "S1gFMVoeM", "text": ["This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity.", "The authors then introduce a greedy algorithm", "that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively.", "The application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers:", "most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers.", "This indicates that common heuristics to divide capacity over the layers of a network are suboptimal,", "as they tend to put most parameters in later layers.", "It's also nice that simpler tasks yield smaller models (e.g. MNIST vs. CIFAR in figure 3).", "The experimental section is comprehensive and the results are convincing.", "I especially appreciate the detailed analysis of the results (figure 3 is great).", "Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100,", "the paper also includes some promising preliminary results on ImageNet,", "which nicely demonstrates that the technique scales to more practical problems as well.", "That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification.", "I also like the alternative perspective compared to pruning approaches,", "which most research seems to have been focused on in the past.", "The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect,", "but hindsight is 20/20 and the fact is that apparently this hasn't been tried before.", "It is definitely surprising that a simple method like this ends up working this well.", "The fact that all parameters are reinitialised whenever any layer width changes seems odd at first,", "but I think it is sufficiently justified.", "It would be nice to see some comparison experiments as well though,", "as the intuitive thing to do would be to just keep the existing weights as they are.", "Other remarks: Formula (2) seems needlessly complicated because of all the additional indices.", "Maybe removing some of those would make things easier to parse.", "It would also help to mention that it is basically just a normalised cross-correlation.", "This is mentioned two paragraphs down,", "but should probably be mentioned right before the formula is given instead.", "page 6, section 3.1: \"it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced\",", "I guess this should be \"reduced\" instead of \"introduced\"."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "request", "fact", "request", "reference", "request"]}
{"doc_id": "Hkcb6tG-M", "text": ["The paper proposes a technique for quantizing the weights of a neural network, with bit-depth/precision varying on a per-parameter basis. ", "The main idea is to minimize the number of bits used in the quantization while constraining the loss to remain below a specified upper bound. ", "This is achieved by formulating an upper bound on the number of bits used via a set of \"tolerances\"; ", "this upper bound is then minimized while estimating any increase in loss using a first order Taylor approximation.", "I have a number of questions and concerns about the proposed approach. ", "First, at a high level, there are many details that aren't clear from the text. ", "Quantization has some bookkeeping associated with it: In a per-parameter quantization setup it will be necessary to store not just the quantized parameter, but also the number of bits used in the quantization (takes e.g. 4-5 extra bits), and there will be some metadata necessary to encode how the quantized value should be converted back to floating point (e.g., for 8-bit quantization of a layer of weights, usually the min and max are stored). ", "From Algorithm 1 it appears the quantization assumes parameters in the range [0, 1]. ", "Don't negative values require another bit? ", "What happens to values larger than 1? ", "How are even bit depths and associated asymmetries w.r.t. 0 handled ", "(e.g., three bits can represent -1, 0, and 1, but 4 must choose to either not represent 0 or drop e.g. -1)? ", "None of these details are clearly discussed in the paper, ", "and it's not at all clear that the estimates of compression are correct if these bookkeeping matters aren't taken into account properly.", "Additionally the paper implies that this style of quantization has benefits for compute in addition to memory savings. ", "This is highly dubious, ", "since the method will require converting all parameters to a standard bit-depth on the fly (probably back to floating point, since some parameters may have been quantized with bit depth up to 32). ", "Alternatively custom GEMM/conv routines would be required which are impossible to make efficient for weights with varying bit depths. ", "So there are likely not runtime compute or memory savings from such an approach.", "I have a few other specific questions: Are the gradients used to compute \\mu computed on the whole dataset or minibatches? ", "How would this scale to larger datasets? ", "I am confused by the equality in Equation 8: What happens for values shared by many different quantization bit depths ", "(e.g., representing 0 presumably requires 1 bit, but may be associated with a much finer tolerance)? ", "Should \"minimization in equation 4\" refer to equation 3?", "In the end, while do like the general idea of utilizing the gradient to identify how sensitive the model might be to quantization of various parameters, ", "there are significant clarity issues in the paper, ", "I am a bit uneasy about some of the compression results claimed without clearer description of the bookkeeping, ", "and I don't believe an approach of this kind has any significant practical relevance for saving runtime memory or compute resources."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "request", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "request", "request", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ByjrTO5ef", "text": ["Recently some interesting work on a role of prior in deep generative models has been presented.", "The choice of prior may have an impact on the expressiveness of the model", "[Hoffman and Johnson, 2016].", "A few existing work presents methods for learning priors from data for variational autoencoders", "[Goyal et al., 2017]", "[Tomczak and Welling, 2017].", "The work, \"VAE with a VampPrior,\" [Tomczak and Welling, 2017] is missing in references.", "The current work focuses on adversarial autoencoder (AAE) and introduces a code generator network to transform a simple prior into one that together with the generator can better fit the data distribution.", "Adversarial loss is used to train the code generator network, allowing the output of the network could be any distribution.", "I think the method is quite simple but interesting approach to improve AAEs without hurting the reconstruction.", "The paper is well written and is easy to read.", "The method is well described.", "However, what is missing in this paper is an analysis of learned priors,", "which help us to better understand its behavior.", "The model is evaluated qualitatively only.", "What about quantitative evaluation?"], "labels": ["evaluation", "fact", "reference", "fact", "reference", "reference", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "request"]}
{"doc_id": "HJSqWxjez", "text": ["The authors consider a Neural Network where the neurons are treated as rational agents. ", "In this model, the neurons must pay to observe the activation of neurons upstream. ", "Thus, each individual neuron seeks to maximize the sum of payments it receives from other neurons minus the cost for observing the activations of other neurons (plus an external reward for success at the task). ", "While this is an interesting idea on its surface, ", "the paper suffers from many problems in clarity, motivation, and technical presentation. ", "It would require very major editing to be fit for publication. ", "The major problem with this paper is its clarity. ", "See detailed comments below for problems just in the introduction. ", "More generally, the paper is riddled with non sequiturs. ", "The related work section mentions Generative Adversarial Nets. ", "As far as I can tell, this paper has nothing to do with GANs. ", "The Background section introduces notation for POMDPs, never to be used again in the entirety of the paper, before launching into a paragraph about apoptosis in glial cells. ", "There is also a general lack of attention to detail. ", "For example, the entire network receives an external reward (R_t^{ex}), presumably for its performance on some task. ", "This reward is dispersed to the the individual agents who receive individual external rewards (R_{it}^{ex}). ", "It is never explained how this reward is allocated even in the authors\u2019 own experiments. ", "The authors state that all units playing NOOP is an equilibrium. ", "While this is certainly believable/expected, ", "such a result would depend on the external rewards R_{it}^{ex}, the observation costs \\sigma_{jit}, and the network topology. ", "None of this is discussed. ", "The authors discuss Pareto optimality without ever formally describing what multi-objective function defines this supposed Pareto boundary. ", "This is pervasive throughout the paper, ", "and is detrimental to the reader\u2019s understanding. ", "While this might be lost because of the clarity problems described above, ", "the model itself is also never really motivated. ", "Why is this an interesting problem? ", "There are many ways to create rational incentives for neurons in a neural net. ", "Why is paying to observe activations the one chosen here? ", "The neuroscientific motivation is not very convincing to me, considering that ultimately these neurons have to hold an auction. ", "Is there an economic motivation? ", "Is it just a different way to train a NN? ", "Detailed Comments: \u201cIn the of NaaA\u201d => remove \u201cof\u201d?", "\u201cpassing its activation to the unit as cost\u201d ", "=> Unclear. ", "What does this mean?", "\u201cperformance decreases if we naively consider units as agents\u201d ", "=> Performance on what?", "\u201c.. we demonstrate that the agent obeys to maximize its counterfactual return as the Nash Equilibrium\u201c ", "=> Perhaps, this should be rewritten as \u201cAgents maximize their counterfactual return in equilibrium. ", "\u201cSubsequently, we present that learning counterfactual return leads the model to learning optimal topology\u201d ", "=> Do you mean \u2028\u201cmaximizing\u201d instead of learning. 
", "Optimal with respect to what task?", "\u201cpure-randomly\u201d => \u201crandomly\u201d", " \u201cwith adaptive algorithm\u201d => \u201cwith an adaptive algorithm\u201d", "\u201cthe connection\u201d => \u201cconnections\u201d", "\u201cIn game theory, the outcome maximizing overall reward is named Pareto optimality.\u201d ", "=> This is simply incorrect."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "request", "request", "request", "quote", "evaluation", "request", "quote", "request", "quote", "request", "quote", "request", "request", "request", "request", "request", "quote", "fact"]}
{"doc_id": "BJ3VB6_xG", "text": ["The paper proposes an interesting alternative to recent approaches to learning from logged bandit feedback, and validates their contribution in a reasonable experimental comparison.", "The clarity of writing can be improved", "(several typos in the manuscript, notation used before defining, missing words, poorly formatted citations, etc.).", "Implementing the approach using recent f-GANs is an interesting contribution and may spur follow-up work.", "There are several lingering concerns about the approach (detailed below) that detract from the quality of their contributions.", "[Major] In Lemma 1, L(z) is used before defining it.", "Crucially, additional assumptions on L(z) are necessary (e.g. |L(z)| <= 1 for all z.", "If not, a trivial counter-example is: set L(z) >> 1 for all z and Lemma 1 is violated).", "It is unclear how crucially this additional assumption is required in practice", "(their expts with Hamming losses clearly do not satisfy such an assumption).", "[Minor] Typo: Section 3.2, first equation; the integral equals D_f(...) + 1 (not -1).", "[Crucial!] Eqn10: Expected some justification on why it is fruitful to *lower-bound* the divergence term,", "which contributes to an *upper-bound* on the true risk.", "[Crucial!] Algorithm1: How is the condition of the while loop checked in a tractable manner?", "[Minor] Typos: Initilization -> Initialization, Varitional -> Variational", "[Major] Expected an additional \"baseline\" in the expts -- Supervised but with the neural net policy architecture", "(NN approaches outperforming Supervised on LYRL dataset was baffling before realizing that Supervised is implemented using a linear CRF).", "[Major] Is there any guidance for picking the new regularization hyper-parameters (or at least, a sensible range for them)?", "[Minor] The derived bounds depend on M,", "an a priori upper bound on the Renyi divergence between the logging policy and any new policy.", "It's unclear that such a bound can be tractably guessed", "(in contrast, prior work uses an upper bound on the importance weight", "-- which is simply 1/(Min action selection prob. by logging policy) )."], "labels": ["fact", "request", "fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "fact", "request", "fact", "request", "request", "request", "evaluation", "request", "fact", "fact", "evaluation", "fact", "fact"]}
{"doc_id": "HyvB9RKez", "text": ["This paper provides theoretical and empirical motivations for removing the top few principle components of commonly-used word embeddings.", "The paper is well-written and I enjoyed reading it. ", "However, it does not explain how significant this result is beyond that of (Bullinaria and Levy, 2012), ", "who also removed the top N dimensions when benchmarking SVD-factorized word embeddings. ", "From what I can see, this paper provides a more detailed explanation of the phenomenon (\"why\" it works), ", "which is supported with both theoretical results and a series of empirical analyses, as well as \"updating\" the benchmarks and methods from the pre-neural era. ", "Although this contribution is relatively incremental, ", "I find the depth of this work very interesting, ", "and I think future work could perhaps rely on these insights to create better embedding algorithms that directly enforce isotropy.", "I have two concerns regarding the empirical section, which may be resolvable fairly quickly:", "1) Are the embedding vectors L2 normalized before using them in each task? ", "This is known to significantly affect performance. ", "I am curious whether removing the top PCs is redundant or not given L2 normalization.", "2) Most of the benchmarks used in this paper are \"toy\" tasks. ", "As Schnabel et al (2015) and Tsvetkov et al (2015) showed, there is often little correlation between success on these benchmarks and improvement of downstream NLP tasks. ", "I would like to measure the change in performance on a major NLP task that heavily relies on pre-trained word embeddings such as SQuAD.", "Minor Comments:* The last sentence in the first paragraph (\"The success comes from the geometry of the representations...\") is not true; ", "the success stems from the ability to capture lexical similarity. ", "Levy and Goldberg (2014) showed that searching for the closest word vector to (king - man + woman) is equivalent to optimizing a linear combination of 3 similarity terms [+(x,king), -(x,man), +(x, woman)]. ", "This explanation was further demonstrated by Linzen (2016) who showed that even when removing the negative term (x, man), many analogies can still be solved, i.e. by looking for a word that is similar both to \"king\" and to \"woman\". ", "Add to that the fact that the analogy trick works best when the vectors are L2 normalized; ", "if they are all on the unit sphere, what is the geometric interpretation of (king - man + woman), which is not on the unit sphere? ", "I suggest removing this sentence and other references to linguistic regularities from this paper, ", "since they are controversial at best, and distract from the main findings.", "* This is also related to Bullinaria and Levy's (2012) finding that downweighting the eigenvalue matrix in SVD-based methods improves their performance. ", "Levy et al (2015) showed that keeping the original eigenvalues can actually degenerate SVD-based embeddings. ", "Perhaps there is a connection to the findings in this paper?"], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "non-arg", "request", "evaluation", "fact", "fact", "evaluation"]}
{"doc_id": "Sy7QPPYxM", "text": ["The paper proposes an adaptation of existing Graph ConvNets and evaluates this formulation on a several existing benchmarks of the graph neural network community.", "In particular, a tree structured LSTM is taken and modified.", "The authors describe this as adapting it to general graphs, stacking, followed by adding edge gates and residuality.", "My biggest concern is novelty, as the modifications are minor.", "In particular, the formulation can be seen in a different way.", "As I see it, instead of adapting Tree LSTMs to arbitary graphs, it can be seen as taking the original formulation by Scarselli and replacing the RNN by a gated version, i.e. adding the known LSTM gates (input, output, forget gate).", "This is a minor modification.", "Adding stacking and residuality are now standard operations in deep learning,", "and edge-gates have also already been introduced in the literature, as described in the paper.", "A second concern is the presentation of the paper, which can be confusing at some points.", "A major example is the mathematical description of the methods.", "When reading the description as given, one should actually infer that Graph ConvNets and Graph RNNs are the same thing, which can be seen by the fact that equations (1) and (6) are equivalent.", "Another example, after (2), the important point to raise is the difference to classical (sequential) RNNs, namely the fact that the dependence graph of the model is not a DAG anymore, which introduces cyclic dependencies.", "Generally, a clear introduction of the problem is also missing.", "What are the inputs,", "what are the outputs,", "what kind of problems should be solved?", "The update equations for the hidden states are given for all models,", "but how is the output calculated given the hidden states from variable numbers of nodes of an irregular graph?", "The model has been evaluated on standard datasets with a performance,", "which seems to be on par, or a slight edge, which could probably be due to the newly introduced residuality.", "A couple of details :- the length of a graph is not defined. The size of the set of nodes might be meant.", "- at the beginning of section 2.1 I do not understand the reference to word prediction and natural language processing.", "RNNs are not restricted to NLP", "and I think there is no need to introduce an application at this point.", "- It is unclear what does the following sentence means: \"ConvNets are more pruned to deep networks than RNNs\"?", "- What are \"heterogeneous graph domains\"?"], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "SymYit2xf", "text": ["The paper shows that several recently proposed interpretation techniques for neural network are performing similar processing and yield similar results.", "The authors show that these techniques can all be seen as a product of input activations and a modified gradient, where the local derivative of the activation function at each neuron is replaced by some fixed function.", "A second part of the paper looks at whether explanations are global or local.", "The authors propose a metric called sensitivity-n for that purpose,", "and make some observations about the optimality of some interpretation techniques with respect to this metric in the linear case.", "The behavior of each explanation w.r.t. these properties is then tested on multiple DNN models tested on real-world datasets.", "Results further outline the resemblance between the compared methods.", "In the appendix, the last step of the proof below Eq. 7 is unclear.", "As far as I can see, the variable g_i^LRP wasn\u2019t defined,", "and the use of Eq. 5 to achieve this last could be better explained.", "There also seems to be some issues with the ordering i,j, where these indices alternatively describe the lower/higher layers, or the higher/lower layers."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation"]}
{"doc_id": "ryfzqnhxz", "text": ["Summary: The paper propose a method for generating adversarial examples in image recognition problems. ", "The Adversarial scheme is inspired in the one proposed by Goodgellow et al 2015 (AT) that introduces small perturbations to the data in the direction that increases the error. ", "Such a perturbations are random (they have not structure) and lack of interpretation for a human user. ", "The proposal is to limit the perturbations to just three kind of global motion fields: shift, centered rotation and scale (zoom in/out). ", "Since the motions are small in scale, ", "the authors use a first-order Taylor series approximation (as in classical optical flow). ", "This approximation allows to obtain close formulas for the perturbed examples; i.e. the correction factor of the Back-propagation computed derivatives w.r.t. original example. ", "As result, the method is computational efficient respect to the AT and the perturbations are interpretable. ", "Experiments demonstrate that with the MNIST database is not obtained an improvement in the error reduction but a reduction of the computational time. ", "However, with ta more general recognition problem conducted with the CIFAR-10 database, the use of the proposed method improves both the error and the computational time, when compared with AT and Virtual Adversarial Train. ", "Comments:1. The paper presents a series os typos: FILED (title), obouve, freedm, nerual,; please check carfully.", "2. The Derivation of eq. (13) should be explained, ", "It could be said that (12) can be casted as a eigenvalue problem [for example: $ max_{\\tilde v} \\| \\nabla_p L^T \\tilde v \\|^2 \\;\\; s.t. \\| v\\|=1 $] and (13) is the largest eigenvalue of $ \\nabla_p L \\nabla_p L^T $]", "3. The improvement in the error results in the db CIFAR-10 is good enough to see merit in the proposal approach. ", "Maybe other perturbations with closed formula could be considered and linear combinations of them"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "request", "request", "fact", "evaluation", "request"]}
{"doc_id": "Bk3B7T5gf", "text": ["The authors present a deep neural network that evaluates plate numbers. ", "The relevance of this problem is that there are auctions for plate numbers in Hong Kong, and predicting their value is a sensible activity in that context. ", "I find that the description of the applied problem is quite interesting; ", "in fact overall the paper is well written and very easy to follow. ", "There are some typos and grammatical problems (indicated below), but nothing really serious.", "So, the paper is relevant and well presented. ", "However, I find that the proposed solution is an application of existing techniques, ", "so it lacks on novelty and originality. ", "Even though the significance of the work is apparent given the good results of the proposed neural network, ", "I believe that such material is more appropriate to a focused applied meeting. ", "However, even for that sort of setting I think the paper requires some additional work, ", "as some final parts of the paper have not been tested yet ", "(the interesting part of explanations). ", "Hence I don't think the submission is ready for publication at this moment.", "Concerning the text, some questions/suggestions: - Abstract, line 1: I suppose \"In the Chinese society...\"--- are there many Chinese societies?", "- The references are not properly formatted; ", "they should appear at (XXX YYY) but appear as XXX (YYY) in many cases, mixed with the main text. ", "- Footnote 1, line 2: \"an exchange\".", "- Page 2, line 12: \"prices. Among\".", "- Please add commas/periods at the end of equations.", "- There are problems with capitalization in the references."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "quote", "quote", "request", "evaluation"]}
{"doc_id": "rySfFbFgz", "text": ["In this paper, the authors propose a novel tracking loss to convert the RPN to a tracker. ", "The internal structure of top layer features of RPN is exploited to treat feature points discriminatively. ", "In addition, the proposed compression network speeds up the tracking algorithm. ", "The experimental results on the VOT2016 dataset demonstrate its efficiency in tracking. ", "This work is the combination of Faster R-CNN (Ren et al. PAMI 2015) and tracking-by-detection framework. ", "The main contributions proposed in this paper are new tracking loss, network compression and results. ", "There are numerous concerns with this work:", "1.\tThe new tracking loss shown in equation 2 is similar with the original Faster R-CNN loss shown in equation 1. ", "The only difference is to replace the regression loss with a predefined mask selection loss, ", "which is of little sense that the feature processing can be further fulfilled through one-layer CNN. ", "The empirical operation shown in figure 2 seems arbitrary and lack of theoretical explanation. ", "There is no insight of why doing so. ", "Simply showing the numbers in table 1 does not imply the necessity, ", "which ought to be put in the experiment sections. ", "2.\tThe network compression is engineering and lack insight as well. ", "To remove part of the CNN and retrain is a common strategy in the CNN compression methods [a] [b]. ", "There is a lack of discussion with the relationship with prior arts.", "3.\tThe organization is not clear. ", "Section 3.4 should be set in the experiments ", "and Section 3.5 should be set at the beginning of the algorithm. ", "The description of the network compression is not clear enough, especially the training details. ", "Meanwhile, the presentation is hard to follow. ", "There is no clear expression of how the tracker performs in practice.", "4.\tIn addition, VOT 2016, the method should evaluate on the OTB dataset with the following trackers [c] [d].", "5.\tThe evaluation is not fair. ", "In Sec 6, the authors indicate that MDNet runs at 1FPS while the proposed tracker runs at 1.6FPS. ", "However, MDNet is based on Matlab ", "and the proposed tracker is based on C++ (i.e., Caffe).", "Reference:[a] On Compressing Deep Models by Low Rank and Sparse Decomposition. Yu et al. CVPR 2017.", "[b] Designing Energy-Efficient Convolutional Neural Network Using Energy-Aware Pruning. Yang et al. CVPR 2017.", "[c] ECO: Efficient Convolution Operators for Tracking. Danelljan et al. CVPR 2017.", "[d] Multi-Task Correlation Particle Filter For Robust Object Tracking. Zhang et al. CVPR 2017."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "fact", "reference", "reference", "reference", "reference"]}
{"doc_id": "S15xOyjgf", "text": ["This paper proposes an evolutionary algorithm for solving the variational E step in expectation-maximization algorithm for probabilistic models with binary latent variables. ", "This is done by (i) considering the bit-vectors of the latent states as genomes of individuals, and by (ii) defining the fitness of the individuals as the log joint distribution of the parameters and the latent space.", "Pros:The paper is well written and the methodology presented is largely clear.", "Cons:While the reviewer is essentially fine with the idea of the method, ", "the reviewer is much less convinced of the empirical study. ", "There is no comparison with other methods such as Monte carlo sampling.", "It is not clear how computationally Evolutionary EM performs comparing to Variational EM algorithm ", "and there is neither experimental results nor analysis for the computational complexity of the proposed model.", "The datasets used in the experiments are quite old. ", "The reviewer is concerned that these datasets may not be representative of real problems.", "The applicability of the method is quite limited. ", "The proposed model is only applicable for the probabilistic models with binary latent variables, ", "hence it cannot be applied to more realistic complex model with real-valued latent variables."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact"]}
{"doc_id": "H13MWgq4M", "text": ["This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). ", "The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. ", "This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. ", "The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.", "(significance) This is a promising idea. ", "This paper builds on the information theoretic perspective of representation learning, ", "and makes progress towards characterizing what makes for a good representation. ", "Invariance to transforms of the marginal distributions is clearly a useful property, ", "and the proposed method seems effective in this regard.", "Unfortunately, I do not believe the paper is ready for publication as it stands, ", "as it suffers from lack of clarity and the experimentation is limited in scope.", "(clarity) While Section 3.3 clearly defines the explicit form of the algorithm ", "(where data and labels are essentially pre-processed via a copula transform), ", "details regarding the \u201cimplicit form\u201d are very scarce. ", "From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? ", "Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? ", "There are also many missing details in the experimental section: ", "how were the number of \u201cactive\u201d components selected ? ", "Which versions of the algorithm (explicit/implicit) were used for which experiments ? ", "I believe explicit was used for Section 4.1, and implicit for 4.2 ", "but again this needs to be spelled out more clearly. ", "I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.", "(quality) The experiments are interesting and seem well executed. ", "Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. ", "While the gap in performance is significant on the synthetic task, ", "this gap appears to shrink significantly when moving to the UCI dataset. ", "How does this method perform for more realistic data, even e.g. MNIST ? ", "I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. ", "Similarly, the representation analyzed in Figure 7 is promising, ", "but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. ", "I would have also liked to see a more direct and systemic validation of the claims made in the paper. ", "For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. 
", "A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.", "Pros:* Theoretically well motivated", "* Promising results on synthetic task", "* Potential for impact", "Cons:* Paper suffers from lack of clarity (method and experimental section)", "* Lack of ablative / introspective experiments", "* Weak empirical results (small or toy datasets only)."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "evaluation", "request", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "Hk_LRZ5gG", "text": ["This paper proposes several client-server neural network gradient update strategies aimed at reducing uplink usage while maintaining prediction performance.", "The main approaches fall into two categories: structured, where low-rank/sparse updates are learned,", "and sketched, where full updates are either sub-sampled or compressed before being sent to the central server.", "Experiments are based on the federated averaging algorithm.", "The work is valuable, but has room for improvement.", "The paper is mainly an empirical comparison of several approaches, rather than from theoretically motivated algorithms.", "This is not a criticism,", "however, it is difficult to see the reason for including the structured low-rank experiments in the paper", "(itAs a reader, I found it difficult to understand the actual procedures used.", "For example, what is the difference between the random mask update and the subsampling update", "(why are there no random mask experiments after figure 1, even though they performed very well)?", "How is the structured update \"learned\"?", "It would be very helpful to include algorithms.", "It seems like a good strategy is to subsample, perform Hadamard rotation, then quantise.", "For quantization, it appears that the HD rotation is essential for CIFAR, but less important for the reddit data.", "It would be interesting to understand when HD works and why,", "and perhaps make the paper more focused on this winning strategy, rather than including the low-rank algo.", "If convenient, could the authors comment on a similarly motivated paper under review at iclr 2018:", "VARIANCE-BASED GRADIENT COMPRESSION FOR EFFICIENT DISTRIBUTED DEEP LEARNING", "pros:- good use of intuition to guide algorithm choices", "- good compression with little loss of accuracy on best strategy", "- good problem for FA algorithm / well motivated", "cons:- some experiment choices do not appear well motivated / inclusion is not best choice", "- explanations of algos / lack of 'algorithms' adds to confusion", "a useful reference: Strom, Nikko. \"Scalable distributed dnn training using commodity gpu cloud computing.\" Sixteenth Annual Conference of the International Speech Communication Association. 2015."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "evaluation", "non-arg", "request", "request", "evaluation", "request", "request", "request", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "reference"]}
{"doc_id": "BJDxbMvez", "text": ["The authors propose a generative method that can produce images along a hierarchy of specificity, i.e. both when all relevant attributes are specified, and when some are left undefined, creating a more abstract generation task.", "Pros:+ The results demonstrating the method's ability to generate results for (1) abstract and (2) novel/unseen attribute descriptions, are generally convincing.", "Both quantitative and qualitative results are provided.", "+ The paper is fairly clear.", "Cons:- It is unclear how to judge diversity qualitatively, e.g. in Fig. 4(b).", "- Fig. 5 could be more convincing;", "\"bushy eyebrows\" is a difficult attribute to judge,", "and in the abstract generation when that is the only attribute specified, it is not clear how good the results are."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "rycISJNgz", "text": ["Quality The method description, particularly about reference ambiguity, I found difficult to follow.", "The experiments and analysis look solid,", "although it would be nice to see experiments on more challenging natural image datasets.", "Clarity \u201cIn general this is not possible\u2026 \u201c -", "you are saying it is not possible to learn an encoder that recovers disentangled factors of variation?", "But that seems to be one of the main goals of the paper.", "It is not clear at all what is meant here or what the key problem is,", "which detracts from the paper\u2019s motivation.", "What is the purpose of R_v and R_c in eq 2?", "Why can these not be collapsed into the encoders N_v and N_c?", "What does \u201cdifferent common factor\u201d mean?", "What is f_c in proof of proposition 1?", "Previously f (no subscript) was referred to as a rendering engine.", "T(v,c) ~ p_v and c ~ p_c are said to be independent.", "But T(v,c) is explicitly defined in terms of c (equation 6).", "So which is correct?", "Overall the argument seems plausible -", "pairs of images in which a single factor of variation changes have a reference ambiguity -", "but the details are unclear.", "Originality The model is very similar to Mathieu et al, although using image pairs rather than category labels directly.", "The idea of weakly-supervised disentangling has also been explored in many other papers,", "e.g. \u201cWeakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis\u201d, Yang et al.", "The description of reference ambiguity seems new and potentially valuable,", "but I did not find it easy to follow.", "Significance Disentangling factors of variation with weak supervision is an important problem,", "and this paper makes a modest advance in terms of the model and potentially in terms of the theory.", "The analysis in figure 3 I found particularly interesting - illustrating that the encoder embedding dimension can have a drastic effect on the shortcut problem.", "Overall I think this can be a significant contribution if the exposition can be improved.", "Pros- Proposed method allows disentangling two factors of variation given a training set of image pairs with one factor of variation matching and the other non-matching.", "- A challenge inherent to weakly supervised disentangling called reference ambiguity is described.", "Cons- Only two factors of variation are studied,", "and the datasets are fairly simple.", "- The method description and the description of reference ambiguity are unclear."], "labels": ["evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "evaluation", "fact", "request", "request", "request", "request", "fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "SJdWxzoxz", "text": ["Summary:The paper presents a novel method for answering \u201cHow many \u2026?\u201d questions in the VQA datasets. ", "Unlike previously proposed approaches, the proposed method uses an iterative sequential decision process for counting the relevant entity. ", "The proposed model makes discrete choices about what to count at each time step. ", "Another qualitative difference compared to existing approaches is that the proposed method returns bounding boxes for the counted object. ", "The training and evaluation of the proposed model and baselines is done on a subset of the existing VQA dataset that consists of \u201cHow many \u2026?\u201d questions. ", "The experimental results show that the proposed model outperforms the baselines discussed in the paper.", "Strengths:1.\tThe idea of sequential counting is novel and interesting.", "2.\tThe analysis of model performance by grouping the questions as per frequency with which the counting object appeared in the training data is insightful. ", "Weaknesses:1.\tThe proposed dataset consists of 17,714 QA pairs in the dev set, whereas only 5,000 QA pairs in the test set. ", "Such a 3.5:1 split of dev and test seems unconventional. ", "Also, the size of the test set seems pretty small given the diversity of the questions in the VQA dataset.", "2.\tThe paper lacks quantitative comparison with existing models for counting such as with Chattopadhyay et al. ", "This would require the authors to report the accuracies of existing models by training and evaluating on the same subset as that used for the proposed model. ", "Absence of such a comparison makes it difficult to judge how well the proposed model is performing compared to existing models.", "3.\tThe paper lacks analysis on how much of performance improvement is due to visual genome data augmentation and pre-training? ", "When comparing with existing models (as suggested in above), this analysis should be done, so as to identify the improvements coming from the proposed model alone.", "4.\tThe paper does not report the variation in model performance when changing the weights of the various terms involved in the loss function (equations 15 and 16).", "5.\tRegarding Chattopadhyay et al. the paper says that \u201cHowever, their analysis was limited to the specific subset of examples where their approach was applicable.\u201d", "It would be good it authors could elaborate on this a bit more.", "6.\tThe relation prediction part of the vision module in the proposed model seems quite similar to the Relation Networks, ", "but the paper does not mention Relation Networks. ", "It would be good to cite the Relation Networks paper and state clearly if the motivation is drawn from Relation Networks.", "7.\tIt is not clear what are the 6 common relationships that are being considered in equation 1. ", "Could authors please specify these?", "8.\tIn equation 1, if only 6 relationships are being considered, then why does f^R map to R^7 instead of R^6?", "9.\tIn equations 4 and 5, it is not clarified what each symbol represents, making it difficult to understand.", "10.\tWhat is R in equation 15? ", "Is it reward?", "Overall:The paper proposes a novel and interesting idea for solving counting questions in the Visual Question Answering tasks. ", "However, the writing of the paper needs to be improved to make is easier to follow. ", "The experimental set-up \u2013 the size of the test dataset seems too small. 
", "And lastly, the paper needs to add comparisons with existing models on the same datasets as used for the proposed model. ", "So, the paper seems to be not ready for the publication yet."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "request", "fact", "fact", "request", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "request", "non-arg", "evaluation", "request", "evaluation", "request", "evaluation"]}
{"doc_id": "SyKUVctlM", "text": ["This paper proposes a recurrent neural network for visual question answering. ", "The recurrent neural network is equipped with a carefully designed recurrent unit called MAC (Memory, Attention and Control) cell, which encourages sequential reasoning by restraining interaction between inputs and its hidden states. ", "The proposed model shows the state-of-the-art performance on CLEVR and CLEVR-Humans dataset, which are standard benchmarks for visual reasoning problem. ", "Additional experiments with limited training data shows the data efficiency of the model, which supports its strong generalization ability.", "The proposed model in this paper is designed with reasonable motivations and shows strong experimental results in terms of overall accuracy and the data efficiency. ", "However, an issue in the writing, usage of external component and lack of experimental justification of the design choices hinder the clear understanding of the proposed model.", "An issue in the writing Overall, the paper is well written and easy to understand, ", "but Section 3.2.3 (The Write Unit) has contradictory statements about their implementation. ", "Specifically, they proposed three different ways to update the memory (simple update, self attention and memory gate), ", "but it is not clear which method is used in the end.", "Usage of external component The proposed model uses pretrained word vectors called GloVE, which has boosted the performance on visual question answering. ", "This experimental setting makes fair comparison with the previous works difficult ", "as the pre-trained word vectors are not used for the previous works. ", "To isolate the strength of the proposed reasoning module, I ask to provide experiments without pretrained word vectors.", "Lack of experimental justification of the design choices The proposed recurrent unit contains various design choices such as separation of three different units (control unit, read unit and memory unit), attention based input processing and different memory updates stem from different motivations. ", "However, these design choices are not justified well ", "because there is neither ablation study nor visualization of internal states. ", "Any analysis or empirical study on these design choices is necessary to understand the characteristics of the model. ", "Here, I suggest to provide few visualizations of attention weights and ablation study that could support indispensability of the design choices."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "request", "fact", "evaluation", "fact", "request", "request"]}
{"doc_id": "Bk8FeZjgf", "text": ["Instead of either optimization-based variational EM or an amortized inference scheme implemented via a neural network as in standard VAE models, this paper proposes a hybrid approach that essentially combines the two.", "In particular, the VAE inference step, i.e., estimation of q(z|x), is conducted via application of a recent learning-to-learn paradigm", "(Andrychowicz et al., 2016),", "whereby direct gradient ascent on the ELBO criteria with respect to moments of q(z|x) is replaced with a neural network that iteratively outputs new parameter estimates using these gradients.", "The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate.", "Although probably difficult for someone to understand that is not already familiar with VAE models,", "I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context.", "From a novelty standpoint though, the paper is not especially strong", "given that it represents a fairly straightforward application of", "(Andrychowicz et al., 2016).", "Indeed the paper perhaps anticipates this perspective and preemptively offers that \"variational inference is a qualitatively different optimization problem\" than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work.", "But to me, these are rather minor differentiating factors,", "since learning-to-learn is a quite general concept already,", "and the exact model structure is not the key novel ingredient.", "That being said, the present use for variational inference nonetheless seems like a nice application,", "and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.", "Beyond background and model development, the paper presents a few experiments comparing the proposed iterative inference scheme against both variational EM, and pure amortized inference as in the original, standard VAE.", "While these results are enlightening,", "most of the conclusions are not entirely unexpected.", "For example, given that the model is directly trained with the iterative inference criteria in place,", "the reconstructions from Fig. 4 seem like exactly what we would anticipate, with the last iteration producing the best result.", "It would certainly seem strange if this were not the case.", "And there is no demonstration of reconstruction quality relative to existing models,", "which could be helpful for evaluating relative performance.", "Likewise for Fig. 6,", "where faster convergence over traditional first-order methods is demonstrated;", "but again, these results are entirely expected", "as this phenomena has already been well-documented in", "(Andrychowicz et al., 2016).", "In terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO critera;", "however, is this really an apples-to-apples comparison?", "For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs?", "Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. 
(5) just happens to implicitly create a better neural network architecture for the few examples under consideration.", "In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.", "Other minor comment:* In Fig. 5(a), it seems like the performance of the standard inference model is still improving", "but the iterative inference model has mostly saturated.", "* A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time,", "whereas the standard VAE model would not."], "labels": ["fact", "fact", "reference", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "reference", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact"]}
{"doc_id": "H1wVDrtgM", "text": ["This paper tried to analyze the subspaces of the adversarial examples neighborhood. ", "More specifically, the authors used Local Intrinsic Dimensionality to analyze the intrinsic dimensional property of the subspaces. ", "The characteristics and theoretical analysis of the proposed method are discussed and explained. ", "This paper helps others to better understand the vulnerabilities of DNNs."], "labels": ["fact", "fact", "fact", "evaluation"]}
{"doc_id": "BkMvqjYgG", "text": ["This paper focuses on the problem of \"machine teaching\", i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. ", "The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, and use policy gradient to update the teacher parameters. ", "I find the definition of the \"state\" in this case very interesting. ", "The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster. ", "Overall I think that this paper is decent. ", "The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). ", "The problem formulation is mostly reasonable, ", "and the evaluation seems quite convincing. ", "The paper is well-written: ", "I enjoyed the mathematical formulation (Section 3). ", "The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does. ", "At the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. ", "Details below. ", "1. As much as I enjoyed reading Section 3, it is very redundant. ", "In some cases it is good to outline a powerful and generic framework (like the authors did here with defining \"teaching\" in a very broad sense, including selecting good loss functions and hypothesis spaces) and then explain that the current work focuses on one aspect (selecting training data points). ", "However, I do not see it being the case here. ", "In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one use a pre-defined set of possible loss functions and select from it. ", "But that is not very interesting ", "(if you can propose new loss functions, that would be way cooler). ", "I also do not see how to define an intuitive set of \"states\" in that case. ", "Therefore, I think this section should be shortened. ", "I also think that the authors should not discuss the general framework and rather focus on \"data teaching\", ", "which is the only focus of the current paper. ", "The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions. ", "2. The authors should do a better job at explaining the details of the state definition, especially the student model features and the combination of data and current learner model. ", "3. There is only one definition of the reward - related to batch number when the accuracy first exceeds a threshold. ", "Is accuracy stable, can it drop back down below the threshold in the next epoch? ", "The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? ", "Is this a problem in practice (it seems to happen on your curves)? ", "What about other potential reward definitions? ", "And what would they potentially lead to? ", "4. Experimental results are averaged over 5 repeated runs ", "- a bit too small in my opinion. ", "5. Can the authors show convergence of the teacher parameter \\theta? ", "I think it is important to see how fast the teacher model converges, too. ", "6. 
In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). ", "Why is this the case? ", "Does it mean that you have not run enough iterations for the baseline methods? ", "My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, so the task is just to get there quicker. ", "I understand that since the learner algorithm is an NN, ", "this is not the case ", "- but more explanation is necessary here ", "- does your method also reduces the empirical possibility to get stuck in local minima? ", "7. More explanation is needed towards Fig.4c. ", "In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). ", "Why?", "8. Although in terms of \"effective training data points\" the proposed method outperforms the other methods, ", "in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). ", "More explanation needed here."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "request", "request", "fact", "request", "request", "request", "request", "request", "fact", "evaluation", "request", "evaluation", "fact", "request", "request", "evaluation", "fact", "fact", "request", "request", "request", "fact", "request", "fact", "evaluation", "request"]}
{"doc_id": "S1KIF7olf", "text": ["This paper presents an empirical study of whether data augmentation can be a substitute for explicit regularization of weight decay and dropout.", "It is a well written and well organized paper.", "However, overall I do not find the authors\u2019 premises and conclusions to be well supported by the results and", "would suggest further investigations.", "In particular: a) Data augmentation is a very domain specific process and limits of augmentation are often not clear.", "For example, in financial data or medical imaging data it is often not clear how data augmentation should be carried out and how much is too much.", "On the other hand model regularization is domain agnostic", "(has to be tuned for each task, but the methodology is consistent and well known).", "Thus advocating that data augmentation can universally replace explicit regularization does not seem correct.", "b) I find the results to be somewhat inconsistent.", "For example, on CIFAR-10, for 100% data regularization+augmentation is better than augmentation alone for both models,", "whereas for 80% data augmentation alone seems to be better.", "Similarly on CIFAR-100 the WRN model shows mixed trends,", "and this model is significantly better than the All-CNN model in performance.", "These results also seem inconsistent with authors statement", "\u201c\u2026and conclude that data augmentation alone - without any other explicit regularization techniques - can achieve the same performance to higher as regularized models\u2026\u201d"], "labels": ["fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "quote"]}
{"doc_id": "rJAxUSLSM", "text": ["The paper consider a method for \"weight normalization\" of layers of a neural network. ", "The weight matrix is maintained normalized, which helps accuracy. ", "However, the simplest way to normalize on a fully connected layer is quadratic (adding squares of weights and taking square root).", "The paper proposes \"FastNorm\", which is a way to implicitly maintain the normalized weight matrix using much less computation. ", "Essentially, a normalization vector is maintained an updated separately.", "Pros: Natural method to do weight normalization efficeintly", "Cons: A very natural and simple solution that is fairly obvious.", "Limited experiments"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ry2OdYCeM", "text": ["Paper presents an interesting attention mechanism for fine-grained image classification.", "Introduction states that the method is simple and easy to understand.", "However, the presentation of the method is bit harder to follow.", "It is not clear to me if the attention modules are applied over all pooling layers.", "How they are combined?", "Why use cross -correlation as the regulariser?", "Why not much stronger constraint such as orthogonality over elements of M in equation 1?", "What is the impact of this regularisation?", "Why use soft-max in equation 1?", "One may use a Sigmoid as well?", "Is it better to use soft-max?", "Equation 9 is not entirely clear to me.", "Undefined notations.", "In Table 2, why stop from AD= 2 and AW=2?", "What is the performance of AD=1, AW=1 with G?", "Why not perform this experiment over all 5 datasets?", "Is this performances, dataset specific?", "The method is compared against 5 datasets.", "Obtained results are quite good."], "labels": ["evaluation", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "request", "non-arg", "non-arg", "evaluation", "fact", "request", "request", "request", "request", "fact", "evaluation"]}
{"doc_id": "HJ2pirpxG", "text": ["This paper considers the problem of improving sequence generation by learning better metrics. ", "Specifically, it focuses on addressing the exposure bias problem, where traditional methods such as SeqGAN uses GAN framework and reinforcement learning. ", "Different from these work, this paper does not use GAN framework. ", "Instead, it proposed an expert-based reward function training, which trains the reward function (the discriminator) from data that are generated by randomly modifying parts of the expert trajectories. ", "Furthermore, it also introduces partial reward function that measures the quality of the subsequences of different lengths in the generated data. ", "This is similar to the idea of hierarchical RL, which divide the problem into potential subtasks, which could alleviate the difficulty of reinforcement learning from sparse rewards. ", "The idea of the paper is novel. ", "However, there are a few points to be clarified.", "In Section 3.2 and in (4) and (5), the authors explains how the action value Q_{D_i} is modeled and estimated for the partial reward function D_i of length L_{D_i}. ", "But the authors do not explain how the rewards (or action value functions) of different lengths are aggregated together to update the model using policy gradient. ", "Is it a simple sum of all of them?", "It is not clear why the future subsequences that do not contain y_{t+1} are ignored for estimating the action value function Q in (4) and (5). ", "The authors stated that it is for reducing the computation complexity. ", "But it is not clear why specifically dropping the sequences that do not contain y_{t+1}. ", "Please clarify more on this point."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "non-arg", "evaluation", "fact", "evaluation", "request"]}
{"doc_id": "HygXOMDxf", "text": ["The authors propose an approach to dynamically generating filters in a CNN based on the input image. ", "The filters are generated as linear combinations of a basis set of filters, based on features extracted by an auto-encoder. ", "The authors test the approach on recognition tasks on three datasets: MNIST, MTFL (facial landmarks) and CIFAR10, and show a small improvement over baselines without dynamic filters.", "Pros: 1) I have not seen this exact approach proposed before.", "2) There method is evaluated on three datasets and two tasks: classification and facial landmark detection.", "Cons: 1) The authors are not the first to propose dynamically generating filters, ", "and they clearly mention that the work of De Brabandere et al. is closely related. ", "Yet, there is no comparison to other methods for dynamic weight generation. ", "2) Related to that, there is no ablation study, ", "so it is unclear if the authors\u2019 contributions are useful. ", "I appreciate the analysis in Tables 1 and 2, ", "but this is not sufficient. ", "Why the need for the autoencoder - why can\u2019t the whole network be trained end-to-end on the goal task? ", "Why generate filters as linear combination - is this just for computational reasons, or also accuracy? ", "This should be analyzed empirically.", "3) The experiments are somewhat substandard:", "- On MNIST the authors use a tiny poorly-performance network, ", "and it is no surprise that one can beat it with a bigger dynamic filter network.", "- The MTFL experiments look most convincing ", "(although this might be because I am not familiar with SoTA on the dataset), ", "but still there is no control for the number of parameters, ", "and the performance improvements are not huge", "- On CIFAR10 - there is a marginal improvement in performance, ", "which, as the authors admit, can also be reached by using a deeper model. ", "The baseline models are far from SoTA ", "- the authors should look at more modern architecture such as AllCNN (not particularly new or good, but very simple), ResNet, wide ResNet, DenseNet, etc.", "As a comment, I don\u2019t think classification is a good task for showcasing such an architecture ", "- classification is already working extremely well. ", "Many other tasks - for instance, detection, tracking, few-shot learning - seem much more promising.", "To conclude, the authors propose a new approach to learning convolutional networks with dynamic input-conditioned filters. ", "Unfortunately, the authors fail to demonstrate the value of the proposed method. ", "I therefore recommend rejection."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation"]}
{"doc_id": "H15qgiFgf", "text": ["This work identifies a mistake in the existing proof of convergence of Adam, ", "which is among the most popular optimization methods in deep learning. ", "Moreover, it gives a simple 1-dimensional counterexample with linear losses on which Adam does not converge. ", "The same issue also affects RMSprop, ", "which may be viewed as a special case of Adam without momentum. ", "The problem with Adam is that the \"learning rate\" matrices V_t^{1/2}/alpha_t are not monotonically decreasing. ", "A new method, called AMSGrad is therefore proposed, which modifies Adam by forcing these matrices to be decreasing. ", "It is then shown that AMSGrad does satisfy essentially the same convergence bound as the one previously claimed for Adam. ", "Experiments and simulations are provided that support the theoretical analysis.", "Apart from some issues with the technical presentation (see below), ", "the paper is well-written.", "Given the popularity of Adam, I consider this paper to make a very interesting observation. ", "I further believe all issues with the technical presentation can be readily addressed.", "Issues with Technical Presentation:- All theorems should explicitly state the conditions they require instead of referring to \"all the conditions in (Kingma & Ba, 2015)\".", "- Theorem 2 is a repetition of Theorem 1 (except for additional conditions).", "- The proof of Theorem 3 assumes there are no projections, ", "so this should be stated as part of its conditions. ", "(The claim in footnote 2 that they can be handled seems highly plausible, ", "but you should be up front about the limitations of your results.)", "- The regret bound Theorem 4 establishes convergence of the optimization method, ", "so it plays the role of a sanity check. ", "However, it is strictly worse than the regret bound O(sqrt{T}) for online gradient descent [Zinkevich,2003], ", "so it cannot explain why the proposed AMSgrad method might be adaptive. ", "(The method may indeed be adaptive in some sense; ", "I am just saying the *bound* does not express that.", "This is also not a criticism of the current paper; ", "the same remark also applies to the previously claimed regret bound for Adam.)", "- The discussion following Corollary 1 suggests that sum_i hat{v}_{T,i}^{1/2} might be much smaller than d G_infty. ", "This is true, ", "but we should always expect it to be at least a constant, ", "because hat{v}_{t,i} is monotonically increasing by definition of the algorithm, ", "so the bound does not get better than O(sqrt(T)).", "It is also suggested that sum_i ||g_{1:T,i}|| = sqrt{sum_{t=1}^T g_{t,i}^2} might be much smaller than dG_infty, ", "but this is very unlikely, ", "because this term will typically grow like O(sqrt{T}), unless the data are extremely sparse, ", "so we should at least expect some dependence on T.", "- In the proof of Theorem 1, the initial point is taken to be x_1 = 1,", "which is perfectly fine, ", "but it is not \"without loss of generality\", as claimed. ", "This should be stated in the statement of the Theorem.", "- The proof of Theorem 6 in appendix B only covers epsilon=1. ", "If it is \"easy to show\" that the same construction also works for other epsilon, as claimed, then please provide the proof for general epsilon.", "Other remarks:- Theoretically, nonconvergence of Adam seems a severe problem. 
", "Can you speculate on why this issue has not prevented its widespread adoption?", "Which factors might mitigate the issue in practice?", "- Please define g_t \\circ g_t and g_{1:T,i}", "- I would recommend sticking with standard linear algebra notation for the sqrt and the inverse of a matrix and simply using A^{-1} and A^{1/2} instead of 1/A and sqrt{A}.", "- In theorems 1,2,3, I would recommend stating the dimension (d=1) of your counterexamples, ", "which makes them very nice!", "Minor issues:- Check accent on Nicol\\`o Cesa-Bianchi in bibliography.", "- Near the end of the proof of Theorem 6: I believe you mean Adam suffers a \"regret\" instead of a \"loss\" of at least 2C-4.", "Also 2C-4=2C-4 is trivial in the second but last display."], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "request", "evaluation", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "fact", "request", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "request", "evaluation"]}
{"doc_id": "ryoWUP5lz", "text": ["This work proposes an approach for transcription factor binding site prediction using a multi-label classification formulation. ", "It is a very interesting problem ", "and application and the approach is interesting. ", "Novelty: The method is quite similar to matching networks (Vinyals, 2016) with a few changes in the matching approach. ", "As such, in order to establish its broader applicability there should be additional evaluation on other benchmark datasets. ", "The MNIST performance comparison is inadequate ", "and there are other papers that do better on it. ", "They should clearly list what the contributions are w.r.t to the work by Vinyals et al 2016.", "They should also cite works that learn embeddings in a multi-label setting such as StarSpace.", "Impact: In its current form the paper seems to be most relevant to the computational biology / TFBS community. ", "However, there is no comparison to the exact networks used in the prior works DeepBind/DeepSea/DanQ/Basset/DeepLift or bidirectional LSTMs. ", "Further there is no comparison to existing one-shot learning techniques either. ", "This greatly limits the impact of the work.", "For biological impact, a comparison to any of the motif learning approaches that are popular in the biology/comp-bio community will help (for instance, HOMER, FIMO).", "Cons: The authors claim they can learn TF-TF interactions and it is one of the main biological contributions, ", "but there is no evidence of why ", "(beyond very preliminary evaluation using the Trrust database). ", "Their examples are 200-bp long which does not mean that all TFs binding in that window are involved in cooperative binding. ", "The prototype loss is too simplistic to capture co-binding tendencies ", "and the combinationLSTM is not well motivated. ", "One interesting source of information they could tap into for TF-TF interactions is CAP-SELEX (Jolma et al, Nature 2015).", "One of the main drawbacks is the lack of interpretability of their model where approaches like DanQ/DeepLift etc benefit. ", "The PWM-like filters in some of the prior works help understand what type of sequence properties contribute to binding events. ", "Can their model lead to an understanding of this sort?", "Evaluation: The empirical evaluation itself is not very strong ", "as there are only modest improvements over simple baselines. ", "Further there are no error-bars etc to indicate the variance in their performance numbers.", "It will be useful to have a TF-level performance split-up to get an idea of which TFs benefit most.", "Clarity: The paper can benefit from more clarity in the technical aspects. ", "It is hard to follow for anyone not already familiar with matching networks. ", "The objective function, parameters need to be clearly introduced in one place. ", "For instance, what is y_i in their multi-label framework?", "Various choices are not well motivated; for instance cosine similarity, the value of hyperparameter epsilon.", "The prototype vectors are not motif-like at all -- ", "can the authors motivate this aspect better?"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "request", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "non-arg", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "request", "request", "evaluation", "evaluation", "request"]}
{"doc_id": "S1ck4rYxM", "text": ["[Overview]In this paper, the authors proposed a novel model called MemoryGAN, which integrates memory network with GAN.", "As claimed by the authors, MemoryGAN is aimed at addressing two problems of GAN training:", "1) difficult to model the structural discontinuity between disparate classes in the latent space;", "2) catastrophic forgetting problem during the training of discriminator about the past synthesized samples by the generator.", "It exploits the life-long memory network and adapts it to GAN.", "It consists of two parts, discriminative memory network (DMN) and Memory Conditional Generative Network (MCGN).", "DMN is used for discriminating input samples by integrating the memory learnt in the memory network, and MCGN is used for generating images based on random vector and the sampled memory from the memory network.", "In the experiments, the authors evaluated memoryGAN on three datasets, CIFAR-10, affine-MNIST and Fashion-MNIST, and demonstrated the superiority to previous models.", "Through ablation study, the authors further showed the effects of separate components in memoryGAN.", "[Strengths] 1. This paper is well-written.", "All modules in the proposed model and the experiments were explained clearly.", "I enjoyed much to read the paper.", "2. The paper presents a novel method called MemoryGAN for GAN training.", "To address the two infamous problems mentioned in the paper, the authors proposed to integrate a memory network into GAN.", "Through memory network, MemoryGAN can explicitly learn the data distribution of real images and fake images.", "I think this is a very promising and meaningful extension to the original GAN.", "3. With MemoryGAN, the authors achieved best Inception Score on CIFAR-10.", "By ablation study, the authors demonstrated each part of the model helps to improve the final performance.", "[Comments] My comments are mainly about the experiment part:", "1. In Table 2, the authors show the Inception Score of images generated by DCGAN at the last row.", "On CIFAR-10, it is ~5.35.", "As the authors mentioned, removing EM, MCGCN and Memory will result in a conventional DCGAN.", "However, as far as I know, DCGAN could achieve > 6.5 Inception Score in general.", "I am wondering what makes such a big difference between the reported numbers in this paper and other papers?", "2. In the experiments, the authors set N = 16,384, and M = 512, and z is with dimension 16.", "I did not understand why the memory size is such large.", "Take CIFAR-10 as the example, its training set contains 50k images.", "Using such a large memory size, each memory slot will merely count for several samples.", "Is a large memory size necessary to make MemoryGAN work?", "If not, the authors should also show ablated study on the effect of different memory size;", "If it is true, please explain why is that.", "Also, the authors should mention the training time compared with DCGAN.", "Updating memory with such a large size seems very time-consuming.", "3. Still on the memory size in this model.", "I am curious about the results if the size is decreased to the same or comparable number of image categories in the training set.", "As the author claimed, if the memory network could learn to cluster training data into different category, we should be able to see some interesting results by sampling the keys and generate categoric images.", "4. The paper should be compared with InfoGAN (Chen et al. 
2016),", "and the authors should explain the differences between two models in the related work.", "Similar to MemoryGAN, InfoGAN also did not need any data annotations, but could learn the latent code flexibly.", "[Summary]", "This paper proposed a new model called MemoryGAN for image generation.", "It combined memory network with GAN, and achieved state-of-art performance on CIFAR-10.", "The arguments that MemoryGAN could solve the two infamous problem make sense.", "As I mentioned above, I did not understand why the authors used such large memory size.", "More explanations and experiments should be conducted to justify this setting.", "Overall, I think MemoryGAN opened a new direction of GAN and worth to further explore."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "non-arg", "fact", "fact", "fact", "fact", "non-arg", "fact", "evaluation", "fact", "fact", "non-arg", "request", "request", "request", "fact", "non-arg", "non-arg", "fact", "request", "request", "fact", "non-arg", "fact", "fact", "evaluation", "non-arg", "request", "evaluation"]}
{"doc_id": "HJ1MEAYxG", "text": ["The authors are motivated by two problems: Inputting non-Euclidean data (such as graphs) into deep CNNs, and analyzing optimization properties of deep networks.", "In particular, they look at the problem of maze testing, where, given a grid of black and white pixels, the goal is to answer whether there is a path from a designated starting point to an ending point.", "They choose to analyze mazes because they have many nice statistical properties from percolation theory.", "For one, the problem is solvable with breadth first search in O(L^2) time, for an L x L maze.", "They show that a CNN can essentially encode a BFS,", "so theoretically a CNN should be able to solve the problem.", "Their architecture is a deep feedforward network where each layer takes as input two images: one corresponding to the original maze (a skip connection), and the output of the previous layer.", "Layers alternate between convolutional and sigmoidal.", "The authors discuss how this architecture can solve the problem exactly.", "The pictorial explanation for how the CNN can mimic BFS is interesting", "but I got a little lost in the 3 cases on page 4.", "For example, what is r?", "And what is the relation of the black/white and orange squares?", "I thought this could use a little more clarity.", "Though experiments, they show that there are two kinds of minima, depending on whether we allow negative initializations in the convolution kernels.", "When positive initializations are enforced, the network can more or less mimic the BFS behavior, but never when initializations can be negative.", "They offer a rigorous analysis into the behavior of optimization in each of these cases, concluding that there is an essential singularity in the cost function around the exact solution,", "yet learning succumbs to poor optima due to poor initial predictions in training.", "I thought this was an impressive paper that looked at theoretical properties of CNNs.", "The problem was very well-motivated,", "and the analysis was sharp and offered interesting insights into the problem of maze solving.", "What I thought was especially interesting is how their analysis can be extended to other graph problems;", "while their analysis was specific to the problem of maze solving, they offer an approach -- e.g. that of finding \"bugs\" when dealing with graph objects -- that can extend to other problems.", "I would be excited to see similar analysis of other toy problems involving graphs.", "One complaint I had was inconsistent clarity:", "while a lot was well-motivated and straightforward to understand,", "I got lost in some of the details (as an example, the figure on page 4 did not initially make much sense to me).", "Also, in the experiments, the authors mention multiple attempt with the same settings --", "are these experiments differentiated only by their initialization?", "Finally, there were various typos throughout", "(one example is \"neglect minimua\" on page 2 should be \"neglect minima\").", "Pros: Rigorous analysis,", "well motivated problem,", "generalizable results to deep learning theory", "Cons: Clarity"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "SJQVdQ5lG", "text": ["This paper describes an extension to the recently introduced Transformer networks which shows better convergence properties and also improves results on standard machine translation benchmarks. ", "This is a great paper ", "-- it introduces a relatively simple extension of Transformer networks which only adds very few parameters and speeds up convergence and achieves better results. ", "It would have been good to also add a motivation for doing this ", "(for example, this idea can be interpreted as having a variable number of attention heads which can be blended in and out with a single learned parameter, hence making it easier to use the parameters where they are needed). ", "Also, it would be interesting to see how important the concatenation weight and the addition weight are relative to each other -- ", "do you possibly get the same results even without the concatenation weight? ", "A suggested improvement: Please check the references in the introduction and see if you can find earlier ones -- ", "for example, language modeling with RNNs has been done for a very long time, not just since 2017 which are the ones you list; ", "similar for speech recognition etc. (which probably has been done since 1993!)."], "labels": ["fact", "evaluation", "fact", "request", "fact", "request", "request", "request", "fact", "fact"]}
{"doc_id": "BJQD_I_eM", "text": ["The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning. ", "Specifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the ones previously learned when solving a source task. ", "+ The paper is easy to read and well organized", "+ The advantage of the proposed regularization against the more standard L2 regularization is clearly visible from the experiments", "- The idea per se is not new: ", "there is a list of shallow learning methods for transfer learning based on the same L2 regularization choice", "[Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007]", "[Learning categories from few examples with multi model knowledge transfer, PAMI 2014]", "[From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013]", "I believe this literature should be discussed in the related work section", "- It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a fixed task, ", "however, this solution seems to work quite well in the proposed experimental settings. ", "From my understanding L2-SP-Fisher can be considered the best competitor of L2-SP ", "so I think the paper should dedicate more space to the analysis of their difference and similarities both from the theoretical and experimental point of view. ", "For instance: -- adding the L2-SP-Fisher results in table 2", "-- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "reference", "reference", "reference", "request", "fact", "evaluation", "evaluation", "request", "request", "request"]}
{"doc_id": "SyFscqngM", "text": ["This paper essentially uses CycleGANs for Domain Adaptation.", "My biggest concern is that it doesn't adequately compare to similar papers that perform adaptation at the pixel level", "(eg. Shrivastava et al-'Learning from Simulated and Unsupervised Images through Adversarial Training'", "and Bousmalis et al - 'Unsupervised Pixel-level Domain Adaptation with GANs',", "two similar papers published in CVPR 2017 -the first one was even a best paper- and available on arXiv since December 2016-before CycleGANs).", "I believe the authors should have at least done an ablation study to see if they cycle-consistency loss truly makes a difference on top of these works-that would be the biggest selling point of this paper.", "The experimental section had many experiments, which is great.", "However I think for semantic segmentation it would be very interesting to see whether using the adapted synthetic GTA5 samples would improve the SOTA on Cityscapes.", "It wouldn't be unsupervised domain adaptation,", "but it would be very impactful.", "Finally I'm not sure the oracle (train on target) mIoU on Table 2 is SOTA,", "and I believe the proposed model's performance is really far from SOTA.", "Pros: * CycleGANs for domain adaptation!", "Great idea!", "* I really like the work on semantic segmentation,", "I think this is a very important direction", "Cons: * I don't think Domain separation networks is a pixel-level transformation-", "that's a feature-level transformation,", "you probably mean to use Bousmalis et al. 2017.", "Also Shrivastava et al is missing from the image-level papers.", "* the authors claim that Bousmalis et al, Liu & Tuzel and Shrivastava et al ahve only been shown to work for small image sizes.", "There's a recent work by Bousmalis et al. (Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping) that shows these methods working well (w/o cycle-consistency) for settings similar to semantic segmentation at a relatively high resolution.", "Also it was mentioned that these methods do not necessarily preserve content, when pixel-da explicitly accounts for that with a task loss (identical to the semantic loss used in this submission)", "* The authors talk about the content similarity loss on the foreground in Bousmalis et al. 
2017,", "but they could compare to this method w/o using the content similarity or using a different content similarity tailored to the semantic segmentation tasks, which would be trivial.", "* Math seems wrong in (4) and (6).", "(4) should be probably have a minus instead of a plus.", "(6) has an argmin of a min,", "not sure what is being optimized here.", "In fact, I'm not sure if eg you use the gradients of f_T for training the generators?", "* The authors mention that the pixel-da approach cross validates with some labeled data.", "Although I agree that is not an ideal validation,", "I'm not sure if it's equivalent or not the authors' validation setting,", "as they don't describe what that is.", "* The authors present the semantic loss as novel,", "however this is the task loss proposed by the pixel-da paper.", "* I didn't understand what pixel-only and feat-only meant in tables 2, 3, 4.", "I couldn't find an explanation in captions or in text"], "labels": ["fact", "evaluation", "reference", "reference", "fact", "request", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation"]}
{"doc_id": "B1BFRS7ZM", "text": ["There may be some interesting ideas here, ", "but I think in many places the mathematical\\ndescription is very confusing and/or flawed.", "To give some examples:\\n\\n* Just before section 2.1.1, P(T) = \\\\prod_{p \\\\in Path(T)} ... : it's not clear \\nat all clear that this defines a valid distribution over trees.", "There is an\\nimplicit order over the paths in Path(T)", "that is simply not defined", "(otherwise\\nhow for x^p could we decide which symbols x^1 ... x^{p-1} to condition\\nupon?)\\n\\n", "\\\"We can write S -> O | v | \\\\epsilon...\\\" ", "with S, O and v defined as sets.\\n", "This is certainly non-standard notation,", "more explanation is needed.\\n\\n", "\\\"The observation is generated by the sequence of left most \\nproduction rules\\\".", "This appears to be related to the idea of left-most\\nderivations in context-free grammars. ", "But no discussion is given, ", "and\\nthe writing is again vague/imprecise.\\n\\n", "\\\"Although the above grammar is not, in general, context free\\\"", "- I'm not\\nsure what is being referred to here. ", "Are the authors referring to the underlying grammar,\\nor the lack of independence assumptions in the model? ", "The grammar\\nis clearly context-free; ", "the lack of independence assumptions is a separate\\nissue.\\n\\n", "\\\"In a probabilistic context-free grammar (PCFG), all production rules are\\nindependent\\\": ", "this is not an accurate statement, ", "it's not clear what is meant\\nby production rules being independent. ", "More accurate would be to say that\\nthe choice of rule is conditionally independent of all other information \\nearlier in the derivation, once the non-terminal being expanded is\\nconditioned upon."], "labels": ["evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "quote", "fact", "evaluation", "request", "quote", "evaluation", "fact", "evaluation", "quote", "non-arg", "non-arg", "evaluation", "fact", "quote", "evaluation", "evaluation", "request"]}
{"doc_id": "B1MeHT3rG", "text": ["This paper proposes a model for generating pop music melodies with a recurrent neural network conditioned on chord and part (song section) information.", "They train their model on a small dataset and compare it to a few existing models in a human evaluation.", "I think this paper has many issues, which I describe below.", "As a broad overview, the use of what the authors call \"word\" representations of notes is not novel (appearing first in BachBot and the PerformanceRNN);", "I suspect the model may be outputting sequences from the training set;", "and the dataset is heavily constrained in a way that will make producing pleasing melodies easily but heavily limits the possible outputs of the model.", "The paper is also missing important references and is confusingly laid out (e.g. introducing a GAN model in a few paragraphs in the experiments).", "Specific criticism: - \"often producing works that are indistinguishable from human works (Goodfellow et al. (2014); Radford et al. (2016); Potash et al. (2015)).\" I would definitely not say that any of the cited papers produce anything that could be confused as \"real\";", "e.g. the early GAN papers you cite were not even close (maybe somewhat close for images of bedrooms, which is a limited domain and certainly cannot be considered a \"work\").", "- There are many unsubstantiated claims in the second paragraph.", "E.g. \"there is yet a certain aspect about it that makes it sound like (or not sound like) human-written music.\"", "What is it?", "What evidence do we have that this is true?", "\" notes in the chorus part generally tend to be more high-pitched\"", "Really?", "Where was this measured?", "\"music is not merely a series of notes, but entails an overall structure of its own\"", "Sure, but natural images are not merely a series of pixels either, and they certainly have structure, but we are making lots of good progress modeling them. Etc.", "- Your related work section is lacking.", "For example, Eck & Schmidhuber in 2002 proposed using LSTMs for music composition, which is not much later than works from \"the early days\" despite having not \"employed rule or template based approach\".", "Your note/time offset/duration representation is very similar to that of BachBot (by Liang) and Magenta's PerformanceRNN.", "GANs were also previously applied to piano roll generation,", "see MuseGAN (Dong et al), MidiNet (Yang et al), etc.", "Your critique of Jaques et al. 
is misleading;", "\"they defined a number of music-theory based rules to set up the reward function\"", "is the whole point - this is an optional step which improves results, and there is no reason a priori to think that hand-designing regularizers is better than hand-designing RL objectives.", "- The analogy to image captioning models is interesting,", "but this type of image captioning model is not only model which is effectively a conditional language model - any sequence-to-sequence model can be looked at this way.", "I don't think that these image captioning models are even the most commonly known example,", "so I'm not sure why the proposed approach is being proposed in analogy to image captioning.", "- I don't think you need to reproduce the LSTM equations in your text, they are well-known.", "- You should define early on what you mean by \"part\",", "I think you mean the song's section (verse, chorus, etc)", "but I have also heard this used to refer to the different instruments in a song.", "I don't think you should expect this term to be known outside of musical communities (e.g. the ICLR community).", "- It seems simpler (and more in keeping with the current zeigeist, e.g. the image captioning models you refer to) to replace your HMM with a model that", "- The regularization is interesting,", "but a simpler way to enforce this constraint would be to just only allow the model to produce notes within that predefined range.", "Since you effectively constrain it to an octave,", "it would be simple to wrap all notes in your training data into this octave.", "This baseline is probably worth comparing to", "since it is substantially simpler than your regularizer.", "- You write that the softmax cost should have \\frac{\\partial E}{\\partial p_i} \\mu added to it for the regularizer.", "First, you don't define E anywhere, you only introduce it in its derivative", "(and of course you can't \"define\" the derivative of an expression, it's an analytically computed quantity).", "Second, are you sure you mean that the partial derivative should be added, and not the cost C itself?", "- Your results showing that human raters preferred your models are impressive,", "but you have made the task easier for yourself in various ways:", "1) Constraining the training data to pop music", "2) Making all of the training data in a single (major) key", "3) Effectively limiting the melody range to within a single octave.", "- It sounds very much like your model is repeating bars, e.g. 
it generates a melody of length N bars, then repeats this melody.", "Is this something you hard-coded into the model?", "It would be very surprising if it learned to exhibit this behavior on its own.", "If you hard-coded it into the model, I would expect it to sound better to human raters,", "but this is a strong heuristic.", "- I'd suggest you provide example melodies from your model in isolation (more like the \"varying number of bars\" examples) rather than as part of a full music mix", "- this makes it easier to judge the quality of the model's output.", "- The GAN experiments are interesting but come as a big surprise and are largely orthogonal to the other model;", "why not include this in your model description section?", "The model and training details are not adequately described", "and I don't think it adds much to the paper to include it.", "Furthermore it's quite similar to the MidiNet and MuseGAN, so maybe it should be introduced as a baseline instead.", "- How did you order the notes for chords?", "If three notes occur simultaneously (in a chord), there's no a priory correct way to list them sequentially (two with an interval of length zero between notes).", "- \"Generated instruments sound fairly in tune individually, confirming that our proposed model is applicable to other instruments as well\" Assuming you are still using C-major-only melodies, it's not surprising that the generations sound in tune!", "- It is not surprising that your model ends up overfitting", "because your dataset is very small,", "your model is very powerful,", "and your regularizer does not really limit the model's capacity much.", "I suspect that your model is overfitting even earlier than you think.", "You should check that none of the sequences output by your model appear in the training set.", "You could easily compute n-gram overlap of the generated sequences vs. the training set.", "At what point did you stop training before running the human evaluation?", "If you let your model overfit, then of course it will generate very human-sounding melodies,", "but this is not a terribly interesting generative model."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "quote", "evaluation", "evaluation", "quote", "fact", "fact", "quote", "fact", "evaluation", "fact", "evaluation", "fact", "reference", "evaluation", "quote", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "non-arg", "evaluation", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "non-arg", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "non-arg", "fact", "evaluation"]}
{"doc_id": "Hk96V1clf", "text": ["This paper generates adversarial examples using the fast gradient sign (FGS) and iterated fast gradient sign (IFGS) methods, but replacing the gradient computation with finite differences or another gradient approximation method. ", "Since finite differences is expensive in high dimensions, ", "the authors propose using directional derivatives based on random feature groupings or PCA. ", "This paper would be much stronger if it surveyed a wider variety of gradient-free optimization methods. ", "Notably, there's two important black-box optimization baselines that were not included: ", "simultaneous perturbation stochastic approximation ( https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation), which avoids computing the gradient explicitly, and evolutionary strategies ( https://blog.openai.com/evolution-strategies/ ), a similar method that uses several random directions to estimate a better descent direction.", "The gradient approximation methods proposed in this paper may or may not be better than SPSA or ES. ", "Without a direct comparison, it's hard to know. ", "Thus, the main contribution of this paper is in demonstrating that gradient approximation methods are sufficient for generating good adversarial attacks and applying those attacks to Clarifai models. ", "That's interesting and useful to know, but is still a relatively small contribution, making this paper borderline. ", "I lean towards rejection, ", "since the paper proposes new methods without comparing to or even mentioning well-known alternatives."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact"]}
{"doc_id": "Sy4mWsOeG", "text": ["Many black-box optimization problems are \"multi-fidelity\", in which it is possible to acquire data with different levels of cost and associated uncertainty.", "The training of machine learning models is a common example, in which more data and/or more training may lead to more precise measurements of the quality of a hyperparameter configuration.", "This has previously been referred to as a special case of \"multi-task\" Bayesian optimization, in which the tasks can be constructed to reflect different fidelities.", "The present paper examines this construction with three twists: using the knowledge gradient acquisition function, using batched function evaluations, and incorporating derivative observations.", "Broadly speaking, the idea is to allow fidelity to be represented as a point in a hypercube and then include this hypercube as a covariate in the Gaussian process.", "The knowledge gradient acquisition function then becomes \"knowledge gradient per unit cost\" the KG equivalent to the \"expected improvement per unit cost\" discussed in Snoek et al (2012),", "although that paper did not consider treating fidelity separately.", "I don't understand the claim that this is \"the first multi-fidelity algorithm that can leverage gradients\".", "Can't any Gaussian process model use gradient observations trivially, as discussed in the Rasmussen and Williams book?", "Why can't any EI or entropy search method also use gradient observations?", "This doesn't usually come up in hyperparameter optimization,", "but it seems like a grandiose claim.", "Similarly, although I don't know of a paper that explicitly does \"A + B\" for multi-fidelity BO and parallel BO,", "it is an incremental contribution to combine them, not least because no other parallel BO methods get evaluated as baselines.", "Figure 1 does not make sense to me.", "How can the batched algorithm outperform the sequential algorithm on total cost?", "The sequential cfKG algorithm should always be able to make better decisions with its remaining budget than 8-cfKG.", "Is the answer that \"cost\" here means \"wall-clock time when parallelism is available\"?", "If that's the case, then it is necessary to include plots of parallelized EI, entropy search, and KG.", "The same is true for Figure 2; other parallel BO algorithms need to appear."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "non-arg", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "request", "request"]}
{"doc_id": "rJGK3urgz", "text": ["In this paper, the authors trains a large number of MNIST classifier networks with differing attributes (batch-size, activation function, no. layers etc.) ", "and then utilises the inputs and outputs of these networks to predict said attributes successfully. ", "They then show that they are able to use the methods developed to predict the family of Imagenet-trained networks and use this information to improve adversarial attack.", "I enjoyed reading this paper. ", "It is a very interesting set up, and a novel idea.", "A few comments:The paper is easy to read, and largely written well. ", "The article is missing from the nouns quite often though ", "so this is something that should be amended. ", "There are a few spelling slip ups ", "(\"to a certain extend\" --> \"to a certain extent\", ", "\"as will see\" --> \"as we will see\")", "It appears that the output for kennen-o is a discrete probability vector for each attribute, where each entry corresponds to a possibility ", "(for example, for \"batch-size\" it is a length 3 vector where the first entry corresponds to 64, the second 128, and the third 256). ", "What happens if you instead treat it as a regression task, would it then be able to hint at intermediates (a batch size of 96) or extremes (say, 512).", "A flaw of this paper is that kennen-i and io appear to require gradients from the network being probed (you do mention this in passing), which realistically you would never have access to. ", "(Please do correct me if I have misunderstood this)", "It would be helpful if Section 4 had a paragraph as to your thoughts regarding why certain attributes are easier/harder to predict. ", "Also, the caption for Table 2 could contain more information regarding the network outputs.", "You have jumped from predicting 12 attributes on MNIST to 1 attribute on Imagenet. ", "It could be beneficial to do an intermediate experiment (a handful of attributes on a middling task).", "I think this paper should be accepted ", "as it is interesting and novel.", "Pros ------ - Interesting idea", "- Reads well", "- Fairly good experimental results", "Cons ------ - kennen-i seems like it couldn't be realistically deployed", "- lack of an intermediate difficulty task"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "request", "request", "fact", "fact", "request", "fact", "non-arg", "request", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact"]}
{"doc_id": "HJmKXVcgz", "text": ["This paper proposes a ranking-based similarity metric for distributional semantic models. ", "The main idea is to learn \"baseline\" word embeddings, retrofitting those and applying localized centering, to then calculate similarity using a measure called \"Ranking-based Exponential Similarity Measure\" (RESM), which is based on the recently proposed APSyn measure.", "I think the work has several important issues:", "1. The work is very light on references. ", "There is a lot of previous work on evaluating similarity in word embeddings (e.g. Hill et al, a lot of the papers in RepEval workshops, etc.); specialization for similarity of word embeddings (e.g. Kiela et al., Mrksic et al., and many others); multi-sense embeddings (e.g. from Navigli's group); and the hubness problem (e.g. Dinu et al.). ", "For the localized centering approach, Hara et al.'s introduced that method. ", "None of this work is cited, which I find inexcusable.", "2. The evaluation is limited, in that the standard evaluations (e.g. SimLex would be a good one to add, as well as many others, please refer to the literature) are not used and there is no comparison to previous work. ", "The results are also presented in a confusing way, ", "with the current state of the art results separate from the main results of the paper. ", "It is unclear what exactly helps, in which case, and why.", "3. There are technical issues with what is presented, with some seemingly factual errors. ", "For example, \"In this case we could apply the inversion, however it is much more convinient [sic] to take the negative of distance. Number 1 in the equation stands for the normalizing, hence the similarity is defined as follows\" ", "- the 1 does not stand for normalizing, that is the way to invert the cosine distance ", "(put differently, cosine distance is 1-cosine similarity, which is a metric in Euclidean space due to the properties of the dot product). ", "Another example, \"are obtained using the GloVe vector, not using PPMI\" ", "- there are close relationships between what GloVe learns and PPMI, ", "which the authors seem unaware of (see e.g. the GloVe paper and Omer Levy's work).", "4. Then there is the additional question, why should we care? ", "The paper does not really motivate why it is important to score well on these tests: ", "these kinds of tests are often used as ways to measure the quality of word embeddings, ", "but in this case the main contribution is the similarity metric used *on top* of the word embeddings. ", "In other words, what is supposed to be the take-away, and why should we care?", "As such, I do not recommend it for acceptance - ", "it needs significant work before it can be accepted at a conference.", "Minor points:- Typo in Eq 10", "- Typo on page 6 (/cite instead of \\cite)"], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "quote", "fact", "fact", "quote", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact"]}
{"doc_id": "H1E1RgqxM", "text": ["# Summary This paper presents a new external-memory-based neural network (Neural Map) for handling partial observability in reinforcement learning. ", "The proposed memory architecture is spatially-structured so that the agent can read/write from/to specific positions in the memory. ", "The results on several memory-related tasks in 2D and 3D environments show that the proposed method outperforms existing baselines such as LSTM and MQN/FRMQN. ", "[Pros] - The overall direction toward more flexible/scalable memory is an important research direction in RL.", "- The proposed memory architecture is new. ", "- The paper is well-written.", "[Cons] - The proposed memory architecture is new but a bit limited to 2D/3D navigation tasks.", "- Lack of analysis of the learned memory behavior.", "# Novelty and Significance The proposed idea is novel in general. ", "Though [Gupta et al.] proposed an ego-centric neural memory in the RL context, ", "the proposed memory architecture is still new in that read/write operations are flexible enough for the agent to write any information to the memory, ", "whereas [Gupta et al.] designed the memory specifically for predicting free space. ", "On the other hand, the proposed method is also specific to navigation tasks in 2D or 3D environment, ", "which is hard to apply to more general memory-related tasks in non-spatial environments. ", "But, it is still interesting to see that the ego-centric neural memory works well on challenging tasks in a 3D environment.", "# Quality The experiment does not show any analysis of the learned memory read/write behavior especially for ego-centric neural map and the 3D environment. ", "It is hard to understand how the agent utilizes the external memory without such an analysis. ", "# Clarity The paper is overall clear and easy-to-follow except for the following. ", "In the introduction section, the paper claims that \"the expert must set M to a value that is larger than the time horizon of the currently considered task\" when mentioning the limitation of the previous work. ", "In some sense, however, Neural Map also requires an expert to specify the proper size of the memory based on prior knowledge about the task."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "HJUMdjteM", "text": ["The authors propose a model for learning physical interaction skills through trial and error.", "They use end-to-end deep reinforcement learning - the DQN model - including the task goal as an input in order to to improve generalization over several tasks, and shaping the reward depending on the visual differences between the goal state and the current state.", "They show that the task performance of their model is better than the DQN on two simulated tasks.", "The paper is well-written, clarity is good,", "it could be slightly improved by updating the title \"Toy example with Goal integration\" to make it consistent with the naming \"navigation task\" used elsewhere.", "If the proposed model is new given the reviewer's knowledge, the contribution is small.", "The biggest change compared to the DQN model is the addition of information in the input.", "The authors initially claim that \"In this paper, [they] study how an artificial agent can autonomously acquire this intuition through interaction with the environment\",", "however the proposed tasks present little to no realistic physical interaction:", "the navigation task is a toy problem where no physics is simulated.", "In the stacking task, only part of the simulation actually use the physical simulation result.", "Given that machine learning methods are in general good at finding optimal policies that exploit simulation limitations,", "this problem seems a threat to the significance of this work.", "The proposed GDQN model shows better performance than the DQN model.", "However, as the authors do not provide in-depth analysis of what the network learns (e.g. by testing policies in the absence of an explicit goal),", "it is difficult to judge if the network learnt a meaningful representation of the world's physics.", "This limitation along with potential other are not discussed in the paper.", "Finally, more than a third (10/26) references point to Arxiv papers.", "Despite Arxiv definitely being an important tool for paper availability, it is not peer-reviewed and there are also work that are non-finished or erroneous.", "It is thus a necessary condition that all Arxiv references are replaced by the peer-reviewed material when it exist (e.g. Lerer 2016 in ICML or Denil 2016 in ICLR 2017), once again to strengthen the author's claim."], "labels": ["fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "quote", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request"]}
{"doc_id": "S1uLIj8lG", "text": ["* sec.2.2 is about label-preserving translation ", "and many notations are introduced. ", "However, it is not clear what label here refers to, ", "and it does not shown in the notation so far at all. ", "Only until the end of sec.2.2, the function F(.) is introduced and its revelation - Google Search as label function is discussed only at Fig.4 and sec.2.3.", "* pp.5 first paragraph: when assuming D_X and D_Y being perfect, why L_GAN_forward = L_GAN_backward = 0? ", "To trace back, in fact it is helpful to have at least a simple intro/def. to the functions D(.) and G(.) of Eq.(1). ", "* Somehow there is a feeling that the notations in sec.2.1 and sec.2.2 are not well aligned. ", "It is helpful to start providing the math notations as early as sec.2.1, ", "so labels, pseudo labels, the algorithm illustrated in Fig.2 etc. can be consistently integrated with the rest notations. ", "* F() is firstly shown in Fig.2 the beginning of pp.3, and is mentioned in the main text as late as of pp.5.", "* Table 2: The CNN baseline gives an error rate of 7.80 ", "while the proposed variants are 7.73 and 7.60 respectively. ", "The difference of 0.07/0.20 are not so significant. ", "Any explanation for that?", "Minor issues: * The uppercase X in the sentence before Eq.(2) should be calligraphic X"], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "request"]}
{"doc_id": "Byz0IGvgz", "text": ["This paper combines the tensor contraction method and the tensor regression method and applies them to CNN.", "This paper is well written and easy to read.", "However, I cannot find a strong or unique contribution from this paper.", "Both of the methods (tensor contraction and tensor decomposition) are well developed in the existing studies,", "and combining these ideas does not seem non-trivial.", "--Main question Why authors focus on the combination of the methods?", "Both of the two methods can perform independently.", "Is there a special synergy effect?", "--Minor question The performance of the tensor contraction method depends on a size of tensors.", "Is there any effective way to determine the size of tensors?"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "fact", "non-arg"]}
{"doc_id": "rk156h2gf", "text": ["The manuscript proposes a new framework for inference in RNN based upon the Bayes by Backprop (BBB) algorithm. ", "In particular, the authors propose a new framework to \"sharpen\" the posterior.", "In particular, the hierarchical prior in (6) and (7) frame an interesting modification to directly learning a multivariate normal variational approximation. ", "In the experimental results, it seems clear that this approach is beneficial, but it's not clear as to why. ", "In particular, how does the variational posterior change as a result of the hierarchical prior? ", "It seems that (7) would push the center of the variational structure back towards the MAP point and reduces the variance of the output of the hierarchical prior; ", "however, with the two layers in the prior it's unclear what actually is happening. ", "Carefully explaining *what* the authors believe is happening and exploring how it changes the variational approximation in a classic modeling framework would be beneficial to understanding the proposed change and evaluating it. ", "As a final point, the authors state, \"as long as the improvement along the gradient is great than the KL loss incurred...this method is guaranteed to make progress towards optimizing L.\" ", "Do the authors mean that the negative log-likelihood will be improved in this case? ", "Or the actual optimization? ", "Improving the negative log-likelihood seems straightforward, ", "but I am confused by what the authors mean by optimization.", "The new evaluation metric proposed in Section 6.1.1 is confusing, ", "and I do not understand what the metric is trying to capture. ", "This needs significantly more detail and explanation. ", "Also, it is unclear to me what would happen when you input data examples that are opposite to the original input sequence; ", "in particular, for many neural networks the predictions are unstable outside of the input domain and inputting infeasible data leads to unusable outputs. ", "It's completely feasible that these outputs would just be highly uncertain, ", "and I'm not sure how you can ascribe meaning to them. ", "The authors should not compare to the uniform prior as a baseline for entropy. ", "It's much more revealing to compare it to the empirical likelihoods of the words."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "quote", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request"]}
{"doc_id": "ryIbx22yz", "text": ["The authors perform a set of experiments in which they inspect the Hessian matrix of the loss of a neural network, and observe that most of the eigenvalues are very close to zero. ", "This is a potentially important observation, ", "and the experiments were well worth performing, ", "but I don't find them fully convincing ", "(partly because I was confused by the presentation).", "They perform four sets of experiments:", "1) In section 3.1, they show on simulated data that for data drawn from k clusters, there are roughly k significant eigenvalues in the Hessian of the solution.", "2) In section 3.2, they show on MNIST that the solution contains few large eigenvalues, and also that there are negative eigenvalues.", "3) In section 3.3, they show (again on MNIST) that at their respective solutions, large batch and small batch methods find solutions with similar numbers of large eigenvalues, but that for the large batch method the magnitudes are larger.", "4) In section 4.1, they train (on CIFAR10) using a large batch method, and then transition to a small batch method, and argue that the second solution appears to be better than the first, but that they are a part of the same basin ", "(since linearly while interpolating between them they don't run into any barriers).", "I'm not fully convinced by the second and third experiments, ", "partly because I didn't fully understand the plots (more on this below), ", "but also because it isn't clear to me what we should expect from the spectrum of a Hessian, ", "so I don't know whether the observed specra have fewer large eigenvalues, or more large eigenvalues, then would be \"natural\". ", "In other words, there isn't a *baseline*.", "For the fourth experiment, it's unsurprising that the small batch method winds up in a different location in the same basin as the large batch method, ", "since it was initialized to the large batch method's solution ", "(and it doesn't appear to me, in figure 9, that the small batch solution is significantly different).", "Section 2.1 is said to contain an argument that the second term of equation 5 can be ignored, but only says that if \\ell' and \\nabla^2 of f are uncorrelated, then it can be ignored. ", "I don't see any reason that these two quantities should be correlated, ", "but this is not an argument that they are uncorrelated. ", "Also, it isn't clear to me where this approximation was used--everywhere? ", "In section 3.2, it sounds as if the exact Hessian is used, ", "and at the end of this section the authors say that figure 6 demonstrates that the effect of this second term is small, ", "but I don't see why this is, ", "and it isn't explained.", "My main complaint is that I had a great deal of difficulty interpreting the plots: ", "it often wasn't clear to me what exactly was being plotted, ", "and most of the language describing them was frustratingly vague. ", "For example, figure 6 is captioned \"left edge of the spectrum, eigenvalues are scaled by their ratio\". ", "The text explains that \"left edge of the spectrum\" means \"small but negative eigenvalues\" ", "(this would be better in the caption), ", "but what are the ratios? ", "Ratio of what to what? 
", "I think it would greatly enhance clarity if every plot caption described exactly, and unambiguously, what quantities were plotted on the horizontal and vertical axes.", "Some minor notes:There are a number of places where \"it's\" is used, where it should be \"its\".", "In the introduction, the definition of \\mathcal{L}' is slightly confusing, ", "since it's an expectation, ", "but the use of \"'\" makes one expect a derivative. ", "Perhaps use \\hat{\\mathcal{L}} for the empirical loss, and \\mathcal{L} for the expected one?", "On the bottom of page 4, \"if \\ell' and \\nabla f are not correlated\": I think the \\nabla should be \\nabla^2.", "It's \"principal components\", not \"principle components\"."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "non-arg", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "fact", "evaluation", "request", "request", "request"]}
{"doc_id": "Hy4tIW5xf", "text": ["The paper \"IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT\" proposes to define an agent to guide users in information retrieval tasks.", "By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search.", "The proposed agent is learned via reinforcement learning.", "My concern with this paper is about the experiments that are only based on simulated agents, as it is the case for learning.", "While it can be questionable for learning", "(but we understand why it is difficult to overcome),", "it is very problematic for the experiments to not have anything that demonstrates the usability of the approach in a real-world scenario.", "I have serious doubts about the performances of such an artificially learned approach for achieving real-world search tasks.", "Also, for me the experimental section is not sufficiently detailed, which lead to not reproducible results.", "Moreover, authors should have considered baselines", "(only the two proposed agents are compared which is clearly not sufficient).", "Also, both models have some issues from my point of view.", "First, the Q-learning methods looks very complex:", "how could we expect to get an accurate model with 10^7 states ?", "No generalization about the situations is done here,", "examples of trajectories have to be collected for each individual considered state,", "which looks very huge (especially if we think about the number of possible trajectories in such an MDP).", "The second model is able to generalize from similar situations thanks to the neural architecture that is proposed.", "However, I have some concerns about it:", "why keeping the history of actions in the inputs since it is captured by the LSTM cell ?", "It is a redondant information that might disturb the process.", "Secondly, the proposed loss looks very heuristic for me,", "it is difficult to understand what is really optimized here.", "Particularly, the loss entropy function looks strange to me.", "Is it classical ?", "Are there some references of such a method to maintain some exploration ability.", "I understand the need of exploration,", "but including it in the loss function reduces the interpretability of the objective", "(wouldn't it be preferable to use a more classical loss but with an epsilon greedy policy?).", "Other remarks: - In the begining of \"varying memory capacity\" section, what is \"100, 150 and 250\" ?", "Time steps ?", "What is the unit ?", "Seconds ?", "- I did not understand the \"Capturing seach context at local and global level\" at all", "- In the loss entropy formula, the two negation signs could be removed", "- Wouldn't it be possible to use REINFORCE or other policy gradient method rather than roll-outs used in the paper (which lead to biased gradient updates) ?"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "evaluation", "request", "request"]}
{"doc_id": "H1rLr8ZNM", "text": ["This paper proposed to combine three kinds of data sources: real, simulated and unlabeled, to help solve \"small\" data issue occurring in packet stream.", "A directed information flow graph was constructed,", "a multi-headed network was trained by using Keras and GAN library.", "Its use on the packet sequence classification can archive comparable accuracy while relieve operation engineers from heavy background learning.", "The presentation of this paper can be improved.", "* With the missing citations as \"(?)\" and not clearly defined concepts, including property of function H (any function? convex?) in (3),", "full name of TCP/abbr of GAN when first appear, etc.", "reader might need to make guesses to follow.", "* P2: You can draw your audience by expanding the \"related work\" like a story:", "more background of GAN etc. and one or two highlight formula to help clear the idea", "* P3: What's the purpose of inserting \"dummy packets to denote the timeout between two time stamps\"?", "* P3: Help sell to \"non-engineer\" by maybe having image example or even plainer language to describe the meaning (deep difference/purpose) of \"3 levels of feature engineering\"; and when addressing features, mentioned as 1,2,3, while in Table 1, shown as Feature=0,1,2;", "* P6: section 4.2 mentioned \"only metrics cared by operators\", is this what you mean by \"relieve operation engineers ...\",", "and which is or how to decide the cutoff accuracy the engineers should make a Go or No Go decision?"], "labels": ["fact", "fact", "fact", "evaluation", "request", "request", "request", "evaluation", "request", "request", "non-arg", "request", "non-arg", "non-arg"]}
{"doc_id": "HJNeoqYNG", "text": ["This paper focuses on the problem of \\\"machine teaching\\\", ", "i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. ", "The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, ", "and use policy gradient to update the teacher parameters. ", "I find the definition of the \\\"state\\\" in this case very interesting. ", "The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster. ", "\\n\\nOverall I think that this paper is decent. ", "The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). ", "The problem formulation is mostly reasonable, ", "and the evaluation seems quite convincing. ", "The paper is well-written: ", "I enjoyed the mathematical formulation (Section 3). ", "The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does. ", "\\n\\nAt the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. ", "Details below. \\n\\n1. As much as I enjoyed reading Section 3, it is very redundant. ", "In some cases it is good to outline a powerful and generic framework", "(like the authors did here with defining \\\"teaching\\\" in a very broad sense, including selecting good loss functions and hypothesis spaces) ", "and then explain that the current work focuses on one aspect (selecting training data points). ", "However, I do not see it being the case here. ", "In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one use a pre-defined set of possible loss functions and select from it. ", "But that is not very interesting", "(if you can propose new loss functions, that would be way cooler). ", "I also do not see how to define an intuitive set of \\\"states\\\" in that case. ", "Therefore, I think this section should be shortened. ", "I also think that the authors should not discuss the general framework and rather focus on \\\"data teaching\\\",", "which is the only focus of the current paper. ", "The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions. ", "\\n2. The authors should do a better job at explaining the details of the state definition,", "especially the student model features and the combination of data and current learner model. ", "\\n3. There is only one definition of the reward \u2013 related to batch number when the accuracy first exceeds a threshold. ", "Is accuracy stable,", "can it drop back down below the threshold in the next epoch? ", "The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? ", "Is this a problem in practice (it seems to happen on your curves)? ", "What about other potential reward definitions? ", "And what would they potentially lead to? ", "\\n4. Experimental results are averaged over 5 repeated runs ", "- a bit too small in my opinion. ", "\\n5. Can the authors show convergence of the teacher parameter \\\\theta? ", "I think it is important to see how fast the teacher model converges, too. ", "\\n6. 
In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). ", "Why is this the case? ", "Does it mean that you have not run enough iterations for the baseline methods? ", "My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, ", "so the task is just to get there quicker. ", "I understand that since the learner algorithm is an NN, this is not the case \u2013 ", "but more explanation is necessary here \u2013 ", "does your method also reduces the empirical possibility to get stuck in local minima? ", "\\n7. More explanation is needed towards Fig.4c. ", "In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). Why?\\n ", "8. Although in terms of \\\"effective training data points\\\" the proposed method outperforms the other methods,", "in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). ", "More explanation needed here. ", "\\n\\nRead the rebuttal and revision and slightly increased my rating."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "request", "request", "request", "fact", "non-arg", "request", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "evaluation", "request", "request", "fact", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "request", "non-arg", "request", "non-arg", "fact", "evaluation", "request", "non-arg"]}
{"doc_id": "BkaINb9xz", "text": ["The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications.", "The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting.", "This paper is mostly an applications paper.", "The method itself seems like a fairly simple extension for a particular application,", "although perhaps the authors have not clearly highlighted details of methodological innovation.", "I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines.", "However, as an an applications paper, the bread of experiments are a little bit lacking", "-- with only that one potentially interesting dataset, which happens to proprietary.", "Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful.", "The writing of the paper is a bit loose with comments like:", "\u201cBesides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.\u201d", "Parts of the also appear rush written, with some sentences half finished:", "\u201c\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.\u201d\u201d", "As a minor comment, the statement", "\u201chowever, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).\u201d", "Is a bit too broad.", "It depends where the Gaussianity appears.", "If the likelihood is non-Gaussian, then it often doesn\u2019t matter if there are latent Gaussian variables."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "quote", "evaluation", "quote", "non-arg", "quote", "evaluation", "fact", "fact"]}
{"doc_id": "Hk0lS3teG", "text": ["The authors analyze show theoretical shortcomings in previous methods of explaining neural networks and propose an elegant way to remove these shortcomings in their methods PatternNet and PatternAttribution.", "The quest of visualizing neural network decision is now a very active field with many contributions.", "The contribution made by the authors stands out due to its elegant combination of theoretical insights and improved performance in application.", "The work is very detailed and reads very well.", "I am missing at least one figure with comparison with more state-of-the-art methods", "(e.g. I would love to see results from the method by Zintgraf et al. 2017 which unlike all included prior methods seems to produce much crisper visualizations and also is very related because it learns from the data, too).", "Minor questions and comments:* Fig 3: Why is the random method so good at removing correlation from fc6?", "And the S_w even better?", "Something seems special about fc6.", "* Fig 4: Why is the identical estimator better than the weights estimator and that one better than S_a?", "* It would be nice to compare the image degradation experiment with using the ranking provided by the work from Zintgraf which should by definition function as a kind of gold standard", "* Figure 5, 4th row (mailbox): It looks like the umbrella significantly contributes to the network decision to classify the image as \"mailbox\" which doesn't make too much sense.", "Is is a problem of the visualization (maybe there is next to no weight on the umbrella), of PatternAttribution or a strange but interesting a artifact of the analyzed network?", "* page 8 \"... closed form solutions (Eq (4) and Eq. (7))\"", "The first reference seems to be wrong.", "I guess Eq 4. should instead reference the unnumbered equation after Eq. 3."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "request", "evaluation", "evaluation", "quote", "fact", "request"]}
{"doc_id": "B1Fe0Zqxz", "text": ["The paper presents a way to regularize a sequence generator by making the hidden states also predict the hidden states of an RNN working backward.", "Applied to sequence-to-sequence networks, the approach requires training one encoder, and two separate decoders, that generate the target sequence in forward and reversed orders. ", "A penalty term is added that forces an agreement between the hidden states of the two decoders. ", "During model evaluation only the forward decoder is used, with the backward operating decoder discarded. ", "The method can be interpreted to generalize other recurrent network regularizers, such as putting an L2 loss on the hidden states.", "Experiments indicate that the approach is most successful when the regularized RNNs are conditional generators, which emit sequences of low entropy, such as decoders of a seq2seq speech recognition network. ", "Negative results were reported when the proposed regularization technique was applied to language models, whose output distribution has more entropy.", "The proposed regularization is evaluated with positive results on a speech recognition task and on an image captioning task, and with negative results (no improvement, but also no deterioration) on a language modeling and sequential MNIST digit generation tasks.", "I have one question about baselines: is the proposed approach better than training to forward generators and force an agreement between them (in the spirit of the concurrent ICLR submission https://openreview.net/forum?id=rkr1UDeC-)? ", "Also, would using the backward RNN, e.g. for rescoring, bring another advantage? ", "In other words, what is (and is there) a gap between an ensemble of a forward and backward rnn and the forward-rnn only, but trained with the state-matching penalty?", "Quality:The proposed approach is well motivated ", "and the experiments show the limits of applicability range of the technique.", "Clarity:The paper is clearly written.", "Originality:The presented idea seems novel.", "Significance:The method may prove to be useful to regularize recurrent networks, ", "however I would like to see a comparison with ensemble methods. ", "Also, as the authors note the method seems to be limited to conditional sequence generators.", "Pros and cons:Pros: the method is simple to implement, ", "the paper lists for what kind of datasets it can be used.", "Cons: the method needs to be compared with typical ensembles of models going only forward in time, ", "it may turn that it using the backward RNN is not necessary"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "non-arg", "non-arg", "non-arg", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "fact", "request", "evaluation"]}
{"doc_id": "HydgKG5ez", "text": ["The paper proposes a CNN-based based approach for speech processing using raw waveforms as input. ", "An analysis of convolution and pooling layers applied on waveforms is first presented. ", "An architecture called SimpleNet is then presented and evaluated on two speech tasks: emotion recognition and gender classification. ", "This paper propose a theoretical analysis of convolution and pooling layers to motivate the SimpleNet architecture. ", "To my understanding, the analysis is flawed (see comments below). ", "The SimpleNet approach is interesting but not sufficiently backed with experimental results. ", "The network analysis is minimal and provides almost no insights. ", "I therefore recommend to reject the paper. ", "Detailed comments:Section 1:* \u201cTherefore, it remains unknown what actual features CNNs learn from waveform\u201d. ", "This is not true, ", "several works on speech recognition have shown that a convolution layer taking raw speech as input can be seen as a bank of learned filters. ", "For instance in the context of speech recognition, [9] showed that the filters learn phoneme-specific responses, ", "[10] showed that the learned filters are close to Mel filter banks ", "and [7] showed that the learned filters are related to MRASTA features and Gabor filters. ", "The authors should discuss these previous works in the paper.", "Section 2:* Section 2.1 seems unnecessary, ", "I think it\u2019s safe to assume that the Shannon-Nyquist theorem and the definition of convolution are known by the reader.", "* Section 2.2.2 & 2.2.3: I don't follow the justification that stacking convolutions are not needed: ", "the example provided is correct if two convolutions are directly stacked without non-linearity, but the conclusion does not hold with a non-linearity and/or a pooling layer between the convolutions: ", "two stacked convolutions with non-linearities are not equivalent to a single convolution. ", "To my understanding, the same problem is present for the pooling layer: ", "the presented conclusion that pooling introduces aliasing is only valid for two directly stacked pooling layers and is not correct for stacked blocks of convolution/pooling/non-linearity.", "* Section 2.2.5: The ReLU can be seen as a half-wave rectifier if it is applied directly to the waveform. ", "However, it is usually not the case ", "as it is applied on the output of the convolution and/or pooling layers. Therefore I don\u2019t see the point of this section. ", "* Section 2.2.6: In this section, the authors discuss the differences between spectrogram-based and waveforms-based approaches, assuming that spectrogram-based approach have fixed filters. ", "But spectrogram can also be used as input to CNNs (i.e. using learned filters) for instance in speech recognition [1] or emotion recognition [11]. ", "Thus the comparison could be more interesting if it was between spectrogram-based and raw waveform-based approaches when the filters are learned in both cases. ", "Section 3:* Figure 4 is very interesting, ", "and is in my opinion a stronger motivation for SimpleNet that the analysis presented in Section 2.", "* Using known filterbanks such as Mel or Gammatone filters as initialization point for the convolution layer is not novel and has been already investigated in [7,8,10] in the context of speech recognition. 
", "Section 4:* On emotion recognition, the results show that the proposed approach is slightly better, ", "but there is some issues: the average recall metric is usually used for this task due to class imbalance (see [1] for instance). ", "Could the authors provide results with this metric ? ", "Also IEMOCAP is a well-used corpus for this task, ", "could the authors provide some baselines performance for comparison (e.g. [11]) ? ", "* For gender classification, there is no gain from SimpleNet compared to the baselines. ", "The authors also mention that some utterances have overlapping speech. ", "These utterances are easy to find from the annotations provided with the corpus, ", "so it should be easy to remove them for the train and test set. ", "Overall, in the current form, the results are not convincing.", "* Section 4.3: The analysis is minimal: ", "it shows that filters changed after training (as already presented in Figure 4). ", "I don't follow completely the argument that the filters should focus on low frequency. ", "It is more informative, ", "but one could expect that the filters will specialized, thus some of them will focus on high frequencies, to model the high frequency events such as consonants or unvoiced event. ", "It could be very interesting to relate the learned filters to the labels: ", "are some filters learned to model specific emotions ? ", "For gender classification, are some filters focusing on the average pitch frequency of male and female speaker ?", "* Finally, it would be nice to see if the claims in Section 2 about the fact that only one convolution layer is needed and that stacking pooling layers can hurt the performance are verified experimentally: for instance, experiments with more than one pair of convolution/pooling could be presented.", "Minor comments:* More references for raw waveforms-based approach for speech recognition should be added [3,4,6,7,8,9] in the introduction.", "* I don\u2019t understand the first sentence of the paper: \u201cIn the field of speech and audio processing, due to the lack of tools to directly process high dimensional data \u2026\u201d. ", "Is this also true for any pattern recognition fields ? ", "* For the MFCCs reference in 2.2.2, the authors should cite [12].", "* Figure 6: Only half of the spectrum should be presented.", "References: [1] H. Lee, P. Pham, Y. Largman, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems 22, pages 1096\u20131104, 2009.", "[2] Schuller, Bj\u00f6rn, Stefan Steidl, and Anton Batliner. \"The interspeech 2009 emotion challenge.\" Tenth Annual Conference of the International Speech Communication Association. 2009.", "[3] N. Jaitly, G. Hinton, Learning a better representation of speech sound waves using restricted Boltzmann machines, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 5884\u20135887.", "[4] D. Palaz, R. Collobert, and M. Magimai.-Doss. Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks, INTERSPEECH 2013, pages 1766\u20131770.", "[5] Van den Oord, Aaron, Sander Dieleman, and Benjamin Schrauwen. \"Deep content-based music recommendation.\" Advances in neural information processing systems. 
2013.", "[6] Z.Tuske, P.Golik, R.Schluter, H.Ney, Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), Singapore, 2014, pp. 890\u2013894.", "[7] P. Golik, Z. Tuske, R. Schlu \u0308ter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26\u201330.", "[8] Yedid Hoshen and Ron Weiss and Kevin W Wilson, Speech Acoustic Modeling from Raw Multichannel Waveforms, International Conference on Acoustics, Speech, and Signal Processing, 2015.", "[9] D. Palaz, M. Magimai-Doss, and R. Collobert. Analysis of CNN-based Speech Recognition System using Raw Speech as Input, INTERSPEECH 2015, pages 11\u201315.", "[10] T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.", "[11] Satt, Aharon & Rozenberg, Shai & Hoory, Ron. (2017). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. 1089-1093. Interspeech 2017.", "[12] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):357\u2013366, 1980."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "evaluation", "evaluation", "request", "request", "reference", "reference", "reference", "reference", "reference", "reference", "reference", "reference", "reference", "reference", "reference", "reference"]}
{"doc_id": "S1SG_l5gz", "text": ["This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such.", "Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures.", "The tested networks seem to perform reasonably well on the task.", "Cons There is little novelty in the proposed method/models ", "-- the paper is primarily focused on comparing existing models on a new task.", "The descriptions of the different architectures compared are overly verbose ", "-- they are all simple standard convnet / RNN architectures. ", "The code specifying the models is also excessive for the main text ", "-- it should be moved to an appendix or even left for a code release.", "The comparisons between various architectures are not very enlightening ", "as they aren\u2019t done in a controlled way ", "-- there are a large number of differences between any pair of models ", "so it\u2019s hard to tell where the performance differences come from. ", "It\u2019s also difficult to compare the learning curves among the different models (Fig 1) ", "as they are in separate plots with differently scaled axes.", "The proposed problem is an explicitly adversarial setting ", "and adversarial examples are a well-known issue with deep networks and other models, ", "but this issue is not addressed or analyzed in the paper. ", "(In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) ", "For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign.", "While this paper addresses an important problem, ", "in its current form the novelty and analysis are limited ", "and the paper has some presentation issues."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BJQGTw5lM", "text": ["This manuscript explores the idea of adding noise to the adversary's play in GAN dynamics over an RKHS. ", "This is equivalent to adding noise to the gradient update, using the duality of reproducing kernels. ", "Unfortunately, the evaluation here is wholly unsatisfactory to justify the manuscript's claims. ", "No concrete practical algorithm specification is given (only a couple of ideas to inject noise listed), ", "only a qualitative one on a 2-dimensional latent space in MNIST, and an inconclusive one using the much-doubted Parzen window KDE method. ", "The idea as stated in the abstract and introduction may well be worth pursuing, ", "but not on the evidence provided by the rest of the manuscript."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "SyuPmP3lM", "text": ["The collaborative block that authors propose is a generalized module that can be inserted in deep architectures for better multi-task learning.", "The problem is relevant as we are pushing deep networks to learn representation for multiple tasks.", "The proposed method while simple is novel.", "The few places where the paper needs improvement are: 1. The authors should test their collaborative block on multiple tasks where the tasks are less related.", "Ex: Scene and object classification.", "The current datasets where the model is evaluated is limited to Faces which is a constrained setting.", "It would be great if Authors provide more experiments beyond Faces to test the universality of the proposed approach.", "2. The Face datasets are rather small.", "I wonder if the accuracy improvements hold on larger datasets and if authors can comment on any large scale experiments they have done using the proposed architecture.", "In it's current form I would say the experiment section and large scale experiments are two places where the paper falls short."], "labels": ["fact", "fact", "evaluation", "request", "request", "fact", "request", "request", "request", "evaluation"]}
{"doc_id": "B1ja8-9lf", "text": ["This paper presents a novel approach to calibrate classifiers for out of distribution samples.", "In additional to the original cross entropy loss, the \u201cconfidence loss\u201d was proposed to guarantee the out of distribution points have low confidence in the classifier.", "As out of distribution samples are hard to obtain,", "authors also propose to use GAN generating \u201cboundary\u201d samples as out of distribution samples.", "The problem setting is new and objective (1) is interesting and reasonable.", "However, I am not very convinced that objective (3) will generate boundary samples.", "Suppose that theta is set appropriately so that p_theta (y|x) gives a uniform distribution over labels for out of distribution samples.", "Because of the construction of U(y), which uniformly assign labels to generated out of distribution samples,", "the conditional probability p_g (y|x) should always be uniform so p_g (y|x) divided by p_theta (y|x) is almost always 1.", "The KL divergence in (a) of (3) should always be approximately 0 no matter what samples are generated.", "I also have a few other concerns: 1. There seems to be a related work:", "[1] Perello-Nieto et al., Background Check: A general technique to build more reliable and versatile classifiers, ICDM 2016,", "Where authors constructed a classifier, which output K+1 labels and the K+1-th label is the \u201cbackground noise\u201d label for this classification problem.", "Is the method in [1] applicable to this paper\u2019s setting?", "Moreover, [1] did not seem to generate any out of distribution samples.", "2. I am not so sure that how the actual out of distribution detection was done", "(did I miss something here?).", "Authors repeatedly mentioned \u201cmaximum prediction values\u201d,", "but it was not defined throughout the paper.", "Algorithm 1. is called \u201cminimization for detection and generating out of distribution (samples)\u201d,", "but this is only gradient descent, right?", "I do not see a detection procedure.", "Given the title also contains \u201cdetecting\u201d, I feel authors should write explicitly how the detection is done in the main body."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "reference", "fact", "request", "evaluation", "evaluation", "non-arg", "fact", "fact", "fact", "fact", "fact", "request"]}
{"doc_id": "Bk-6h6Txz", "text": ["The article \"Contextual Explanation Networks\" introduces the class of models which learn the intermediate explanations in order to make final predictions.", "The contexts can be learned by, in principle, any model including neural networks,", "while the final predictions are supposed to be made by some simple models like linear ones.", "The probabilistic model allows for the simultaneous training of explanation and prediction parts as opposed to some recent post-hoc methods.", "The experimental part of the paper considers variety of experiments, including classification on MNIST, CIFAR-10, IMDB and also some experiments on survival analysis.", "I should note, that the quality of the algorithm is in general similar to other methods considered (as expected).", "However, while in some cases the CEN algorithm is slightly better, in other cases it appears to sufficiently loose, see for example left part of Figure 3(b) for MNIST data set.", "It would be interesting to know the explanation.", "Also, it would be interesting to have more examples of qualitative analysis to see, that the learned explanations are really useful.", "I am a bit worried, that while we have interpretability with respect to intermediate features, these features theirselves might be very hard to interpret.", "To sum up, I think that the general idea looks very natural and the results are quite supportive.", "However, I don't feel myself confident enough in this area of research to make strong conclusion on the quality of the paper."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "request", "evaluation", "evaluation", "non-arg"]}
{"doc_id": "S1kxi6OlM", "text": ["In general I find this to be a good paper and vote for acceptance. ", "The paper is well-written and easy to follow. ", "The proposed approach is a useful addition to existing literature.", "Besides that I have not much to say except one point I would like to discuss: ", "In 4.2 I am not fully convinced of using an adversial model for goal generation. ", "RL algorithms generally suffer from poor stability ", "and GANs themselves can have convergence issues. ", "This imposes another layer of possible instability. ", "Besides, generating useful reward function, while not trivial, can be seen as easier than solving the full RL problem. ", "Can the authors argue why this model class was chosen over other, more simple, generative models? ", "Furthermore, did the authors do experiments with simpler models?", "Related: \"We found that the LSGAN works better than other forms of GAN for our problem.\" ", "Was this improvement minor, or major, or didn't even work with other GAN types? ", "This question is important, ", "because for me the big question is if this model is universal and stable in a lot of applications or requires careful fine-tuning and monitoring."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "quote", "request", "evaluation", "evaluation"]}
{"doc_id": "S1gH28vgM", "text": ["1) Summary This paper proposed a new method for predicting multiple future frames in videos. ", "A new formulation is proposed where the frames\u2019 inherent noise is modeled separate from the uncertainty of the future. ", "This separation allows for directly modeling the stochasticity in the sequence through a random variable z ~ p(z) where the posterior q(z | past and future frames) is approximated by a neural network, ", "and as a result, sampling of a random future is possible through sampling from the prior p(z) during testing. ", "The random variable z can be modeled in a time-variant and time-invariant way. ", "Additionally, this paper proposes a training procedure to prevent their method from ignoring the stochastic phenomena modeled by z. ", "In the experimental section, the authors highlight the advantages of their method in 1) a synthetic dataset of shapes meant to clearly show the stochasticity in the prediction, 2) two robotic arm datasets for video prediction given and not given actions, and 3) A challenging human action dataset in which they perform future prediction only given previous frames.", "2) Pros: + Novel/Sound future frame prediction formulation and training for modeling the stochasticity of future prediction.", "+ Experiments on the synthetic shapes and robotic arm datasets highlight the proposed method\u2019s power of multiple future frame prediction possible.", "+ Good analysis on the number of samples improving the chance of outputting the correct future, the modeling power of the posterior for reconstructing the future, and a wide variety of qualitative examples.", "+ Work is significant for the problem of modeling the stochastic nature of future frame prediction in videos.", "3) Cons: Approximate posterior in non-synthetic datasets: The variable z seems to not be modeling the future very well. ", "In the robot arm qualitative experiments, the robot motion is well modeled, however, the background is not. ", "Given that for the approximate posterior computation the entire sequence is given (e.g. reconstruction is performed), ", "I would expect the background motion to also be modeled well. ", "This issue is more evident in the Human 3.6M experiments, ", "as it seems to output blurriness regardless of the true future being observed. ", "This problem may mean the method is failing to model a large variety of objects and clearly works for the robotic arm ", "because a very similar large shape (e.g. robot arm) is seen in the training data. ", "Do you have any comments on this?", "Finn et al 2016 PNSR performance on Human 3.6M: ", "Is the same exact data, pre-processing, training, and architecture being utilized? ", "In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38.", "Additional evaluation on Human 3.6M: PSNR is not a good evaluation metric for frame prediction ", "as it is biased towards blurriness, ", "and also SSIM does not give us an objective evaluation in the sense of semantic quality of predicted frames. ", "It would be good if the authors present additional quantitative evaluation to show that the predicted frames contain useful semantic information [1, 2, 3, 4]. 
", "For example, evaluating the predicted frames for the Human 3.6M dataset to see if the human is still detectable in the image or if the expected action is being predicted could be useful to verify that the predicted frames contain the expected meaningful information compared to the baselines.", "Additional comments: Are all 15 actions being used for the Human 3.6M experiments? ", "If so, the fact of the time-invariant model performs better than the time-variant one may not be the consistent action being performed (last sentence of 5.2). ", "The motion performed by the actors in each action highly overlaps (talking on the phone action may go from sitting to walking a little to sitting again, and so on). ", "Unless actions such as walking and discussion were only used, it is unlikely the time-invariant z is performing better because of consistent action. ", "Do you have any comments on this?", "4) Conclusion This paper proposes an interesting novel approach for predicting multiple futures in videos, ", "however, the results are not fully convincing in all datasets. ", "If the authors can provide additional quantitative evaluation besides PSNR and SSIM (e.g. evaluation on semantic quality), and also address the comments above, the current score will improve.", "References: [1] Emily Denton and Vighnesh Birodkar. Unsupervised Learning of Disentangled Representations from Video. In NIPS, 2017.", "[2] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.", "[3] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv preprint arXiv:1710.10196, 2017.", "[4] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In NIPS, 2017."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "reference", "request", "fact", "evaluation", "fact", "fact", "request", "request", "non-arg", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "request", "reference", "reference", "reference", "reference"]}
{"doc_id": "B1ZlEVXyf", "text": ["Summary ======== The authors present CLEVER, an algorithm which consists in evaluating the (local) Lipschitz constant of a trained network around a data point. ", "This is used to compute a lower-bound on the minimal perturbation of the data point needed to fool the network.", "The method proposed in the paper already exists for classical function, ", "they only transpose it to neural networks. ", "Moreover, the lower bound comes from basic results in the analysis of Lipschitz continuous functions.", "Clarity ===== The paper is clear and well-written.", "Originality ========= This idea is not new: ", "if we search for \"Lipschitz constant estimation\" in google scholar, we get for example Wood, G. R., and B. P. Zhang. \"Estimation of the Lipschitz constant of a function.\" (1996)", "which presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull).", "Technical quality ============== The main theoretical result in the paper is the analysis of the lower-bound on \\delta, the smallest perturbation to apply on a data point to fool the network. ", "This result is obtained almost directly by writing the bound on Lipschitz-continuous function | f(y)-f(x) | < L || y-x || where x = x_0 and y = x_0 + \\delta.", "Comments: - Lemma 3.1: why citing Paulavicius and Zilinskas for the definition of Lipschitz continuity? ", "Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x=0). ", "Indeed, this constant can be easier obtained if the gradient exists, ", "but this is not a requirement.", "- (Flaw?) Theorem 3.2 : This theorem works for fixed target-class ", "since g = f_c - f_j for fixed g. ", "However, once g = min_j f_c - f_j, this theorem is not clear with the constant Lq. ", "Indeed, the function g should be g(x) = min_{k \\neq c} f_c(x) - f_k(x).", "Thus its Lipschitz constant is different, potentially equal to L_q = max_{k} \\| L_q^k \\|, where L_q^k is the Lipschitz constant of f_c-f_k. ", "If the theorem remains unchanged after this modification, you should clarify the proof. ", "Otherwise, the theorem will work with the maximum over all Lipschitz constants but the theoretical result will be weakened.", "- Theorem 4.1: I do not see the purpose of this result in this paper. ", "This should be better motivated.", "Numerical experiments ==================== Globally, the numerical experiments are in favor of the presented method. ", "The authors should also add information about the time it takes to compute the bound, the evolution of the bound in function of the number of samples and the distribution of the relative gap between the lower-bound and the best adversarial example.", "Moreover, the numerical experiments look to be realized in the context of targeted attack. ", "To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower-bound in the context of non-targeted attack."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "ByhgguzeM", "text": ["The paper presents a method to parametrize unitary matrices in an RNN as a Kronecker product of smaller matrices. ", "Given N inputs and output, this method allows one to specify a linear transformation with O(log(N)) parameters, and perform a forward and backward pass in O(Nlog(N)) time. ", "In addition a relaxation is performed allowing each constituent to deviate a bit from unitarity (\u201csoft unitary constraint\u201d).", "The paper shows nice results on a number of small tasks. ", "The idea is original to the best of my knowledge and is presented clearly.", "I especially like the idea of \u201csoft unitary constraint\u201d which can be applied very efficiently in this factorized setup. ", "I think this is the main contribution of this work.", "However the paper in its current form has a number of problems:", "- The authors state that a major shortcoming of previous (efficient) unitary RNN methods is the lack of ability to span the entire space of unitary matrices. ", "This method presents a family that can span the entire space, but the efficient parts of this family (which give the promised speedup) only span a tiny fraction of it, ", "as they require only O(log(N)) params to specify an O(N^2) unitary matrix. ", "Indeed in the experimental section only those members are tested.", "- Another claim that is made is that complex numbers are key, and again the argument is the need to span the entire space of unitary matrices, ", "but the same comment still hold - that is not the space this work is really dealing with, ", "and no experimental evidence is provided that using complex numbers was really needed.", "- In the experimental section an emphasis is made as to how small the number of recurrent params are, ", "but at the same time the input/output projections are very large, leaving the reader wondering if the workload simply shifted from the RNN to the projections. ", "This needs to be addressed.", "- Another aspect of the previous points is that it\u2019s not clear if stacking KRU layers will work well. ", "This is important ", "as stacking LSTMs is a common practice. ", "Efficient KRU span a restricted subspace whose elements might not compose into structures that are expressive enough. ", "One way to overcome this potential problem is to add projection matrices between layers that will do some mixing, ", "but this will blow the number of parameters. ", "This needs to be explored.", "- The authors claim that the soft unitary constraint was key for the success of the network, ", "yet no details are provided as to how this constraint was applied, ", "and no analysis was made for its significance."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "fact", "fact", "fact"]}
{"doc_id": "ryx2q7_eG", "text": ["This paper proposes for training a question answering model from answers only and a KB by learning latent trees that capture the syntax and learn the semantic of words, including referential terms like \"red\" and also compositional operators like \"not\".", "I think this model is elegant, beautiful and timely.", "The authors do a good job of explaining it clearly.", "I like the modules of composition that seem to make a very intuitive sense for the \"algebra\" that is required and the parsing algorithm is clean.", "However, I think that the evaluation is lacking, and in some sense the model exposes the weakness of the dataset that it uses for evaluation.", "I have 2.5 major issues with the paper and a few minor comments:", "Parsing: * The authors don't really say what is the base case for \\Psi that scores tokens", "(unless I missed it and if indeed it is missing it really needs to be added)", "and only provide the recursive case.", "From that I understand that the only features that they use are whether a certain word makes sense in a certain position of the rule application in the context of the question.", "While these features are based on Durrett et al.'s neural syntactic parser it seems like a pretty weak signal to learn from.", "This makes me wonder, how does the parser learn whether one parse is better than the other?", "Only based on this signal?", "It makes me suspicious that the distribution of language is not very ambiguous and that as long as you can construct a tree in some context you can do it in almost any other context.", "This is probably due to the fact that the CLEVR dataset was generated mostly using templates and is not really natural utterances produced by people.", "Of course many people have published on CLEVR although of its language limitations,", "but I was a bit surprised that only these features are enough to solve the problem completely,", "and this makes me curious as to how hard is it to reverse-engineer the way that the language was generated with a context-free mechanism that is similar to how the data was produced.", "* Related to that is that the decision for a score of a certain type t for a span (i,j) is the sum for all possible rule applications, rather than a max, which again means that there is no competition between different parse trees that result with the same type of a single span.", "Can the authors say something about what the parser learns?", "Does it learn to extract from the noise clear parse trees?", "What is the distribution of rules in those sums?", "is there some rule that is more preferred than others usually?", "It seems like there is loss of information in the sum", "and it is unclear what is the effect of that in the paper.", "Evaluation: * Related to that is indeed the fact that they use CLEVR only.", "There is now the Cornell NLVR dataset that is more challenging from a language perspective", "and it would be great to have an evaluation there as well.", "Also the authors only compare to 3 baselines where 2 don't even see the entire KB,", "so the only \"real\" baseline is relation net.", "The authors indeed state that it is state-of-the-art on clevr.", "* It is worth noting that relation net is reported to get 95.5 accuracy while the authors have 89.4.", "They use a subset so this might be the reason,", "but I am not sure how they compared to relation net exactly.", "Did they re-tune parameters once you have the new dataset?", "This could make a difference in the final accuracy and cause an 
unfair advantage.", "* I would really appreciate more analysis on the trees that one gets.", "Are sub-trees interpretable?", "Can one trace the process of composition?", "This could have been really nice if one could do that.", "The authors have a figure of a purported tree, but where does this tree come from?", "From the mode?", "Form the authors?", "Scalability: * How much of a problem would it be to scale this?", "Will this work in larger domains?", "It seems they compute an attention score over every entity and also over a matrix that is squared in the number of entities.", "So it seems if the number of entities is large that could be very problematic.", "Once one moves to larger KBs it might become hard to maintain full differentiability which is one of the main selling points of the paper.", "Minor comments: * I think the phrase \"attention\" is a bit confusing -", "I thought of a distribution over entities at first.", "* The feature function is not super clearly written I think - perhaps clarify in text a bit more what it does.", "* I did not get how the denotation that is based on a specific rule applycation t_1 + t_2 --> t works.", "Is it by looking at the grounding that is the result of that rule application?", "* Authors say that the neural enquirer and neural symbolic machines produce flat programs -", "that is not really true, the programs are just a linearized form of a tree,", "so there is nothing very flat about it in my opinion.", "Overall, I really enjoyed reading the paper,", "but I was left wondering whether the fact that it works so well mostly attests to the way the data was generated and am still wondering how easy it would be to make this work in for more natural language or when the KB is large."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "request", "fact", "fact", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "fact", "fact", "request", "fact", "evaluation", "fact", "fact", "fact", "non-arg", "non-arg", "fact", "request", "non-arg", "non-arg", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BJ0qmr9xf", "text": ["The paper solves the problem of how to do autonomous resets, ", "which is an important problem in real world RL. ", "The method is novel, ", "the explanation is clear, ", "and has good experimental results.", "Pros: 1. The approach is simple, solves a task of practical importance, and performs well in the experiments. ", "2. The experimental section performs good ablation studies wrt fewer reset thresholds, reset attempts, use of ensembles.", "Cons: 1. The method is evaluated only for 3 tasks, which are all in simulation, and on no real world tasks. ", "Additional tasks could be useful, especially for qualitative analysis of the learned reset policies.", "2. It seems that while the method does reduce hard resets, ", "it would be more convincing if it can solve tasks which a model without a reset policy couldnt. ", "Right now, the methods without the reset policy perform about equally well on final reward.", "3. The method wont be applicable to RL environments where we will need to take multiple non-invertible actions to achieve the goal (an analogy would be multiple levels in a game). ", "In such situations, one might want to use the reset policy to go back to intermediate \u201cstart\u201d states from where we can continue again, rather than the original start state always.", "Conclusion/Significance: The approach is a step in the right direction, ", "and further refinements can make it a significant contribution to robotics work."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ry07SzQgG", "text": ["This paper investigates human priors for playing video games.", "Considering a simple video game, where an agent receives a reward when she completes a game board, this paper starts by stating that: -\tFirstly, the humans perform better than an RL agent to complete the game board.", "-\tSecondly, with a simple modification of textures the performances of human players collapse, while those of a RL agent stay the same.", "If I have no doubts about these results, I have a concern about the method. ", "In the case of human players the time needed to complete the game is plotted, ", "and in the case of a RL agent the number of steps needed to complete the game is plotted (fig 1). ", "Formally, we cannot conclude that one minute is lesser than 4 million of steps.", "This issue could be easily fixed. ", "Unfortunately, I have other concerns about the method and the conclusions.", "For instance, masking where objects are or suppressing visual similarity between similar objects should also deteriorate the performance of a RL agent. ", "So it cannot be concluded that the change of performances is due to human priors. ", "In these cases, I think that the change of performances is due to the increased difficulty of the game.", "The authors have to include RL agent in all their experiments to be able to dissociate what is due to human priors and what is due to the noise introduced in the game."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "request"]}
{"doc_id": "S1ZbRMqlM", "text": ["The paper suggests taking GloVe word vectors, adjust them, and then use a non-Euclidean similarity function between them.", "The idea is tested on very small data sets (80 and 50 examples, respectively).", "The proposed techniques are a combination of previously published steps,", "and the new algorithm fails to reach state-of-the-art on the tiny data sets.", "It isn't clear what the authors are trying to prove,", "nor whether they have successfully proven what they are trying to prove.", "Is the point that GloVe is a bad algorithm?", "That these steps are general?", "If the latter, then the experimental results are far weaker than what I would find convincing.", "Why not try on multiple different word embeddings?", "What happens if you start with random vectors?", "What happens when you try a bigger data set or a more complex problem?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "non-arg", "non-arg"]}
{"doc_id": "BympCwwgf", "text": ["This paper presents a method to cope with adversarial examples in classification tasks, leveraging a generative model of the inputs.", "Given an accurate generative model of the input, this approach first projects the input onto the manifold learned by the generative model", "(the idea being that inputs on this manifold reflect the non-adversarial input distribution).", "This projected input is then used to produce the classification probabilities.", "The authors test their method on various adversarially constructed inputs (with varying degrees of noise).", "Questions/Comments: - I am interested in unpacking the improvement of Defense-GAN over the MagNet auto-encoder based method.", "Is the MagNet auto-encoder suffering lower accuracy because the projection of an adversarial image is based on an encoding function that is learned only on true data?", "If the decoder from the MagNet approach were treated purely as a generative model, and the same optimization-based projection approach (proposed in this work) was followed, would the results be comparable?", "- Is there anything special about the GAN approach, versus other generative approaches?", "- In the black-box vs. white-box scenarios, can the attacker know the GAN parameters?", "Is that what is meant by the \"defense network\" (in experiments bullet 2)?", "- How computationally expensive is this approach take compared to MagNet or other adversarial approaches?", "Quality: The method appears to be technically correct.", "Clarity: This paper clearly written;", "both method and experiments are presented well.", "Originality: I am not familiar enough with adversarial learning to assess the novelty of this approach.", "Significance: I believe the main contribution of this method is the optimization-based approach to project onto a generative model's manifold.", "I think this kernel has the potential to be explored further (e.g. computational speed-up, projection metrics)."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation"]}
{"doc_id": "B1g5pBTxz", "text": ["The article \"Do GANs Learn the Distribution? Some Theory and Empirics\" considers the important problem of quantifying whether the distributions obtained from generative adversarial networks come close to the actual distribution of images.", "The authors argue that GANs in fact generate the distributions with fairly low support.", "The proposed approach relies on so-called birthday paradox", "which allows to estimate the number of objects in the support by counting number of matching (or very similar) pairs in the generated sample.", "This test is expected to experimentally support the previous theoretical analysis by Arora et al. (2017).", "The further theoretical analysis is also performed showing that for encoder-decoder GAN architectures the distributions with low support can be very close to the optimum of the specific (BiGAN) objective.", "The experimental part of the paper considers the CelebA and CIFAR-10 datasets.", "We definitely see many very similar images in fairly small sample generated.", "So, the general claim is supported.", "However, if you look closely at some pictures, you can see that they are very different though reported as similar.", "For example, some deer or truck pictures.", "That's why I would recommend to reevaluate the results visually,", "which may lead to some change in the number of near duplicates and consequently the final support estimates.", "To sum up, I think that the general idea looks very natural and the results are supportive.", "On theoretical side, the results seem fair (though I didn't check the proofs)", "and, being partly based on the previous results of Arora et al. (2017), clearly make a step further."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "r1RTd8hgG", "text": ["The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model). ", "The novel classifier is the result of the optimisation of a loss function ", "(composed of a part similar to a logistic regression model and a part being the disparate impact). ", "Hence, it can be interpreted as a logistic loss with a fairness regularisation.", "The results are promising and the applications are very important for the acceptance of ML approaches in the society.", "\u2028However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied.", "Finally, this paper used uncommon vocabulary (for the machine learning community) ", "and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity).", "When reading the submitted paper, it was unclear (until section 6) how deferring could help fairness. ", "Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss).", "To conclude, although the application is of high interest and the numerical results encouraging, ", "the methodological approach does not seem to be very novel.", "Minor comment : - The list of authors of the reference \u201cMachine bias : theres software\u2026\u201d apperars incorrectly (some comma may be missing in the .bib file) ", "and there is a small typo in the title.", "Possible extensions :- The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) ", "- It could also be generalised to a mixture of biased classifier (more than on DM)."], "labels": ["evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "request", "request"]}
{"doc_id": "Hk6aJkmWM", "text": ["This paper proposes to use RGANs and RCGANS to generate synthetic sequences of actual data. ", "They demonstrate the quality of the sequences on sine waves, MNIST, and ICU telemetry data.", "The authors demonstrate novel approaches for generating real-valued sequences using adversarial training, a train on synthetic, test of real and vice versa method for evaluating GANS, generating synthetic medical time series data, and an empirical privacy analysis. ", "Major - the medical use case is not motivating. ", "de-identifying the 4 telemetry measures is extremely easy ", "and there is little evidence to show that it is even possible to reidentify individuals using these 4 measures. ", "our institutional review board would certainly allow self-certification of the data (i.e. removing the patient identifiers and publishing the first 4 hours of sequences).", "- the labels selected by the authors for the icu example are to forecast the next 15 minutes and whether a critical value is reached. ", "Please add information about how this critical value was generated. ", "Also it would be very useful to say that a physician was consulted and that the critical values were \"clinically\" useful.", "- the changes in performance of TSTR are large enough that I would have difficulty trusting any experiments using the synthetic data. ", "If I optimized a method using this synthetic data, I would still need to assess the result on real data.", "- In addition it is unclear whether this synthetic process would actually generate results that are clinically useful. ", "The authors certainly make a convincing statement about the internal validity of the method. ", "An externally valid measure would strengthen the results. ", "I'm not quite sure how the authors could externally validate the synthetic data ", "as this would also require generating synthetic outcome measures. ", "I think it would be possible for the synthetic sequence to also generate an outcome measure (i.e. death) based on the first 4 hours of stay.", "Minor- write in the description for table 1 what task the accuracies correspond.", "Summary The authors present methods for generating synthetic sequences. ", "The MNIST example is compelling. ", "However the ICU example has some pitfalls which need to be addressed."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "fact", "evaluation", "evaluation"]}
{"doc_id": "rJOVWxjez", "text": ["The authors describe a new defense mechanism against adversarial attacks on classifiers (e.g., FGSM).", "They propose utilizing Generative Adversarial Networks (GAN),", "which are usually used for training generative models for an unknown distribution,", "but have a natural adversarial interpretation.", "In particular, a GAN consists of a generator NN G which maps a random vector z to an example x, and a discriminator NN D which seeks to discriminate between an examples produced by G and examples drawn from the true distribution.", "The GAN is trained to minimize the max min loss of D on this discrimination task, thereby producing a G (in the limit) whose outputs are indistinguishable from the true distribution by the best discriminator.", "Utilizing a trained GAN, the authors propose the following defense at inference time.", "Given a sample x (which has been adversarially perturbed), first project x onto the range of G by solving the minimization problem z* = argmin_z ||G(z) - x||_2.", "This is done by SGD.", "Then apply any classifier trained on the true distribution on the resulting x* = G(z*).", "In the case of existing black-box attacks, the authors argue (convincingly) that the method is both flexible and empirically effective.", "In particular, the defense can be applied in conjunction with any classifier (including already hardened classifiers), and does not assume any specific attack model.", "Nevertheless, it appears to be effective against FGSM attacks, and competitive with adversarial training specifically to defend against FGSM.", "The authors provide less-convincing evidence that the defense is effective against white-box attacks.", "In particular, the method is shown to be robust against FGSM, RAND+FGSM, and CW white-box attacks.", "However, it is not clear to me that the method is invulnerable to novel white-box attacks.", "In particular, it seems that the attacker can design an x which projects onto some desired x* (using some other method entirely), which then fools the classifier downstream.", "Nevertheless, the method is shown to be an effective tool for hardening any classifier against existing black-box attacks", "(which is arguably of great practical value).", "It is novel and should generate further research with respect to understanding its vulnerabilities more completely.", "Minor Comments: The sentence starting \u201cUnless otherwise specified\u2026\u201d at the top of page 7 is confusing given the actual contents of Tables 1 and 2, which are clarified only by looking at Table 5 in the appendix.", "This should be fixed."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "SJxF3VsxG", "text": ["This paper describes computationally efficient methods for training adversarially robust deep neural networks for image classification.", "(These methods may extend to other machine learning models and domains as well, but that's beyond the scope of this paper.)", "The former standard method for generating adversarially images quickly and using them in training was to do a single gradient step to increase the loss of the true label or decrease the loss of an alternate label.", "This paper shows that such training methods only lead to robustness against these \"weak\" adversarial examples, leaving the adversarially-trained models vulnerable to multi-step white-box attacks and black-box attacks (adversarial examples generated to attack alternate models).", "There are two proposed solutions.", "The first is to generate additional adversarial examples from other models and use them in training.", "This seems to yield robustness against black-box attacks from held-out models as well.", "Of course, it requires that you have a somewhat diverse group of models to choose from.", "If that's the case, why not directly build an ensemble of all the models?", "An ensemble of neural networks can still be represented as a neural network, although a more computationally costly one.", "Thus, while this heuristic appears to be useful with current models against current attacks,", "I don't know how well it will hold up in the future.", "The second solution is to add random noise before taking the gradient step.", "This yields more effective adversarial examples, both for attacking models and for training,", "because it relies less on the local gradient.", "This is another simple idea that appears to be effective.", "However, I would be interested to see a comparison to a 2-step gradient-based attack.", "R+Step-LL can be viewed as a 2-step attack: a random step followed by a gradient step.", "What if both steps were gradient steps instead?", "This interpolates between Step-LL and I-Step-LL, with an intermediate computational cost.", "It would be very interesting to know if R+Step-LL is more or less effective than 2+Step-LL, and how large the difference is.", "I like that this paper demonstrates the weakness of previous methods, including extensive experiments and a very nice visualization of the loss landscape in two adversarial dimensions.", "The proposed heuristics seem effective in practice,", "but they're somewhat ad hoc", "and there is no analysis of how these heuristics might or might not be vulnerable to future attacks."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "non-arg", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact"]}
{"doc_id": "r1rOlgOlz", "text": ["Authors describe a procedure of building their production recommender system from scratch, begining with formulating the recommendation problem, label data formation, model construction and learning. ", "They use several different evaluation techniques to show how successful their model is (offline metrics, A/B test results, etc.)", "Most of the originality comes from integrating time decay of purchases into the learning framework. ", "Rest of presented work is more or less standard.", "Paper may be useful to practitioners who are looking to implement something like this in production."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "rJBLYC--f", "text": ["The paper proposes a novel approach on estimating the parameters \\nof Mean field games (MFG).", "The key of the method is a reduction of the unknown parameter MFG to an unknown parameter Markov Decision Process (MDP).\\n\\n", "This is an important class of models", "and I recommend the acceptance of the paper.\\n\\n", "I think that the general discussion about the collective behavior application should be more carefully presented", "and some better examples of applications should be easy to provide.", "In addition the authors may want to enrich their literature review", "and give references to alternative work on unknown MDP estimation methods cf. [1], [2] below. \\n\\n", "[1] Burnetas, A. N., & Katehakis, M. N. (1997). Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1), 222-255.\\n\\n", "[2] Budhiraja, A., Liu, X., & Shwartz, A. (2012). Action time sharing policies for ergodic control of Markov chains. SIAM Journal on Control and Optimization, 50(1), 171-195."], "labels": ["fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "reference", "reference"]}
{"doc_id": "B1LfYs_gf", "text": ["This paper proposes to use 3D conditional GAN models to generate fMRI scans. ", "Using the generated images, paper reports improvement in classification accuracy on various tasks.", "One claim of the paper is that a generative model of fMRI data can help to caracterize and understand variability of scans across subjects.", "Article is based on recent works such as Wasserstein GANs and AC-GANs by (Odena et al., 2016).", "Despite the rich literature of this recent topic ", "the related work section is rather convincing.", "Model presented extends IW-GAN by using 3D convolution and also by supervising the generator using sample labels.", "Major: - The size of the generated images is up to 26x31x22 ", "which is limited (about half the size of the actual resolution of fMRI data). ", "As a consequence results on decoding learning task using low resolution images can end up worse than with the actual data (as pointed out).", "What it means is that the actual impact of the work is probably limited.", "- Generating high resolution images with GANs even on faces for which there is almost infinite data is still a challenge. ", "Here a few thousand data points are used. ", "So it raises too concerns: First is it enough?", "Using so-called learning curves is a good way to answer this. ", "Second is what are the contributions to the state-of-the-art of the 2 methods introduced? ", "Said differently, as there is no classification results using images produced by an another GAN architecture ", "it is hard to say that the extra complexity proposed here (which is a bit contribution of the work) is actually necessary.", "Minor: - Fonts in figure 4 are too small."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request"]}
{"doc_id": "Hy4cMGVlf", "text": ["The authors build on the work of Tang et al. (2017), ", "who made a minor change to the skip-thought model by decoding only the next sentence, rather than the previous one also. ", "The additional minor change in this paper is to use a CNN, rather than RNN, decoder.", "I am sympathetic to the goals of the work, and believe this sort of work should be carried out, ", "but I see the contribution as too minor to constitute a paper at the conference track of a leading international conference such as ICLR. ", "Given the incremental nature of the work, I think this would be a good fit for something like a short paper at *ACL.", "I found the more theoretical motivation of the CNN decoder not terribly convincing, and somewhat post-hoc. ", "I feel as though analogous arguments could just as easily be made for an RNN decoder.", " Ultimately I see these questions - such as CNN vs. RNN for the decoder - as empirical ones.", "Finally, the authors have admirably attempted a thorough comparison with existing work, in the related work section, ", "but this section takes up a large chunk of the paper at the end, ", "and again I would have preferred this section to be much shorter and more concise.", "Summary: worthwhile empirical goal, ", "but the paper could have been easily written using half as much space."], "labels": ["evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation"]}
{"doc_id": "ryU7ZMsgf", "text": ["This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. ", "It tests this attack variation on against Inception-family classifiers on ImageNet. ", "It shows some experimental robustness to JPEG encoding defense.", "Specifically about the method: Instead of perturbating a feature x_i by delta_i, as in other attacks, with delta_i in range [-Delta_i, Delta_i], they propose to perturbate x_i^*, which is recentered in the domain of x_i through a heuristic ((x_i \u00b1 Delta_i + domain boundary that would be clipped)/2), and have a similar heuristic for computing a Delta_i^*. ", "Instead of perturbating x_i^* directly by delta_i, they compute the perturbed x by x_i^* + Delta_i^* * g(r_i), ", "so they follow the gradient of loss to misclassify w.r.t. r (instead of delta). ", "+/-: + The presentation of the method is clear.", "+ ImageNet is a good dataset to benchmark on.", "- (!) The (ensemble) white-box attack is effective ", "but the results are not compared to anything else, e.g. it could be compared to (vanilla) FGSM nor C&W.", "- The other attack demonstrated is actually a grey-box attack, ", "as 4 out of the 5 classifiers are known, they are attacking the 5th, ", "but in particular all the 5 classifiers are Inception-family models.", "- The experimental section is a bit sloppy at times (e.g. enumerating more than what is actually done, starting at 3.1.1.).", "- The results on their JPEG approximation scheme seem too explorative (early in their development) to be properly compared.", "I think that the paper need some more work, in particular to make more convincing experiments that the benefit lies in CIA (baselines comparison), and that it really is robust across these defenses shown in the paper."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request"]}
{"doc_id": "H1k_ZpFlf", "text": ["Summary: The paper proposes to learn new priors for latent codes z for GAN training.", "for this the paper shows that there is a mismatch between the gaussian prior and an estimated of the latent codes of real data by reversal of the generator .", "To fix this the paper proposes to learn a second GAN to learn the prior distributions of \"real latent code\" of the first GAN.", "The first GAN then uses the second GAN as prior to generate the z codes.", "Quality/clarity: The paper is well written and easy to follow.", "Originality:pros: -The paper while simple sheds some light on important problem with the prior distribution used in GAN.", "- the second GAN solution trained on reverse codes from real data is interesting", "- In general the topic is interesting, the solution presented is simple but needs more study", "cons: - It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement.", "- The solution presented is not end to end", "(learning a prior generator on learned models have been done in many previous works on encoder/decoder)", "General Review: More experimentation with the latent codes will be interesting:", "- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator?", "Is this data low rank?", "how does this change depending on the dimensionality of the latent codes?", "Maybe adding plots to the paper can help.", "- the prior agreement score is interesting", "but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate.", "Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?", "- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc?", "Maybe also rotating the codes with the singular vector matrix V or \\Sigma^{0.5} V?", "- What architecture did you use for the prior generator GAN?", "- Have you thought of an end to end way to learn the prior generator GAN?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "request", "request", "request", "request", "evaluation", "evaluation", "request", "request", "request", "request", "request"]}
{"doc_id": "HJ9LXfvlz", "text": ["Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions.", "It focuses on a setting with three crucial simplifications:", "- data is linearly separable", "- model is 1-hidden layer feed forward network with homogenous activations", "- **only input-hidden layer weights** are trained, while the hidden-output layer's weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1))", "While the last assumption does not limit the expressiveness of the model in any way,", "as homogenous activations have the property of f(ax)=af(x) (for positive a)", "and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network.", "However, learning dynamics of a model of form z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d and \"standard\" neural model z(x) = Vg(Wx+b)+c can be completely different.", "Consequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched.", "In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper.", "The results themselves are interesting,", "but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model.", "I am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice.", "In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter,but make proofs too technical - at least experimental section should show this.", "If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in.", "Pros: - tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence", "- very nice result of \"phase change\" dependend on the size of hidden layer in section 7", "Cons: - simplification with non-trainable second layer is currently not well studied in the paper;", "and while not affecting expressive power - it is something that can change learning dynamics completely"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "HkeOU0qgf", "text": ["The author unveils some properties of the resnets, for example, the cosine loss and l2 ratio of the layers. ", "I think the author should place more focus to study \"real\" iterative inference with shared parameters rather than analyzing original resnets.", "In resnet without sharing parameters, it is quite ambiguous to say whether it is doing representation learning or iterative refinement.", "1. The cosine loss is not meaningful in the sense that the classification layer is trained on the output of the last residual block and fixed. ", "Moving the classification layer to early layers will definitely result in accuracy loss. ", "Even in non-residual network, we can always say that the vector h_{i+1} - h_i is refining h_i towards the negative gradient direction. ", "The motivation of iterative inference would be to generate a feature that is easier to classify rather than to match the current fixed classifier. ", "Thus the final classification layer should be retrained for every addition or removal of residual blocks.", "2. The l2 ratio. The l2 ratio is small for higher residual layers, I'm not sure how much this phenomenon can prove that resnet is actually doing iterative inference.", "3. In section 4.4 it is shown that unrolling the layers can improve the performance of the network. ", "However, the same can be achieved by adding more unshared layers. ", "I think the study should focus more on whether shared or unshared is better.", "4. Section 4.5 is a bit weak in experiments, ", "my conclusion is that currently it is still limited by batch normalization and optimization, ", "the evidence is still not strong enough to show that iterative inference is advantageous / disadvantageous.", "The the above said, I think the more important thing is how we can benefit from iterative inference interpretation, which is relatively weak in this paper."], "labels": ["fact", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SJw9gV2ZM", "text": ["This paper draws an interesting connection between deep neural networks and theories of quantum entanglement.", "They leveraged the tool for analyzing quantum entanglement to deep neural networks,", "and proposed a graph theoretical analysis for neural networks.", "They demonstrated how their theory can help designing neural network architectures on the MNIST dataset.", "I think the theoretical findings are novel", "and may contribute to the important problem on understanding neural networks theoretically.", "I am not familiar with the theory for quantum entanglement though."], "labels": ["evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact"]}
{"doc_id": "BkEcWHKlf", "text": ["Pros: 1. It provided theoretic analysis why larger feature norm is preferred in feature representation learning.", "2. A new regularization method (feature incay) is proposed.", "Cons: It seems there is not much comparison between this proposed method and the concurrent work", "\"COCO(Liu et al. (2017c))\"."], "labels": ["fact", "fact", "fact", "reference"]}
{"doc_id": "SJTAcW5xf", "text": ["This paper describes a method for computing representations for out-of-vocabulary words, e.g. based on their spelling or dictionary definitions. ", "The main difference from previous approaches is that the model is that the embeddings are trained end-to-end for a specific task, rather than trying to produce generically useful embeddings. ", "The method leads to better performance than using no external resources, but not as high performance as using Glove embeddings. ", "The paper is clearly written, and has useful ablation experiments. ", "However, I have a couple of questions/concerns: - Most of the gains seem to come from using the spelling of the word. ", "As the authors note, this kind of character level modelling has been used in many previous works. ", "- I would be slightly surprised if no previous work has used external resources for training word representations using an end-task loss, ", "but I don\u2019t know the area well enough to make specific suggestions ", "- I\u2019m a little skeptical about how often this method would really be useful in practice. ", "It seems to assume that you don\u2019t have much unlabelled text (or you\u2019d use Glove), ", "but you probably need a large labelled dataset to learn how to read dictionary definitions well. ", "All the experiments use large tasks ", "- it would be helpful to have an experiment showing an improvement over character-level modelling on a smaller task.", "- The results on SQUAD seem pretty weak - 52-64%, compared to the SOTA of 81. ", "It seems like the proposed method is quite generic, ", "so why not apply it to a stronger baseline?"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request"]}
{"doc_id": "Hk2dO8ngz", "text": ["This very well written paper covers the span between W-GAN and VAE.", "For a reviewer who is not an expert in the domain, it reads very well,", "and would have been of tutorial quality if space had allowed for more detailed explanations.", "The appendix are very useful, and tutorial paper material (especially A).", "While I am not sure description would be enough to reproduce and no code is provided, every aspect of the architecture, if not described, if referred as similar to some previous work.", "There are also some notation shortcuts (not explained) in the proof of theorems that can lead to initial confusion, but they turn out to be non-ambiguous.", "One that could be improved is P(P_X, P_G) where one loses the fact that the second random variable is Y.", "This work contains plenty of novel material, which is clearly compared to previous work:", "- The main consequence of the use of Wasserstein distance is the surprisingly simple and useful Theorem 1.", "I could not verify its novelty, but this seems to be a great contribution.", "- Blending GAN and auto-encoders has been tried in the past,", "but the authors claim better theoretical foundations that lead to solutions that do not rquire min-max", "- The use of MMD in the context of GANs has also been tried.", "The authors claim that their use in the latent space makes it more practival", "The experiments are very convincing, both numerically and visually.", "Source of confusion: in algorithm 1 and 2, \\tilde{z} is \"sampled\" from Q_TH(Z|xi),", "some one is lead to believe that this is the sampling process as in VAEs, while in reality Q_TH(Z|xi) is deterministic in the experiments."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BkviGptxG", "text": ["This paper presents an alternative approach to constructing variational lower bounds on data log likelihood in deep, directed generative models with latent variables.", "Specifically, the authors propose using approximate posteriors shared across groups of examples, rather than posteriors which treat examples independently.", "The group-wise posteriors allow amortization of the information cost KL(group posterior || prior) across all examples in the group,", "which the authors liken to the \"KL annealing\" tricks that are sometimes used to avoid posterior collapse when training models with strong decoders p(x|z) using current techniques for approximate variational inference in deep nets.", "The presentation of the core idea is solid,", "though it did take two read-throughs before the equations really clicked for me.", "I think the paper could be improved by spending more time on a detailed description of the model for the Omniglot experiments (as illustrated in Figure 3).", "E.g., explicitly describing how group-wise and per-example posteriors are composed in this model, using Equations and pseudo-code for the main training loop, would have saved me some time.", "For readers less familiar with amortized variational inference in deep nets, the benefit would be larger.", "I appreciate that the authors developed extensions of the core method to more complex group structures,", "though I didn't find the related experiments particularly convincing.", "Overall, I like this paper", "and think the underlying group-wise posterior construction trick is worth exploring further.", "Of course, the elephant in the room is how to determine the groups across which the posteriors can be shared and their information costs amortized."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "non-arg", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SylxFWcgG", "text": ["This paper extends an existing thread of neural computation research focused on learning resuable subprocedures (or options in RL-speak). ", "Instead of simply input and output examples, as in most of the work in neural computation, they follow in the vein of the Neural Programmer-Interpreter (Reed and de Freitas, 2016) and Li et. al., 2017, ", "where the supervision contains the full sequence of elementary actions in the domain for all samples, and some samples also contain the hierarchy of subprocedure calls.", "The main focus of their work is learning from fewer fully annotated samples than prior work. ", "They introduce two new ideas in order to enable this:", "1. They limit the memory state of each level in the program heirarchy to simply a counter indicating the number of elementary actions/subprocedure calls taken so far (rather than a full RNN embedded hidden/cell state as in prior work). ", "They also limit the subprocedures such that they do not accept any arguments.", "2. By considering this very limited set of possible hidden states, they can compute the gradients using a dynamic program that seems to be more accurate than the approximate dynamic program used in Li et. al., 2017. ", "The main limitation of the work is this extremely limited memory state, and the lack of arguments. ", "Without arguments, everything that parameterizes the subprocedures must be in the visible world state. ", "In both of their domains, this is true, ", "but this places a significant limitation on the algorithms which can be modeled with this technique. ", "Furthermore, the limited memory state means that the only way a subprocedure can remember anything about the current observation is to call a different subprocedure. ", "Again, their two evalation tasks fit into this paradigm, ", "but this places very significant limitations on the set of applicable domains. ", "I would have like to see more discussion on how constraining these limitations would be in practice. ", "For example, it seems it would be impossible for this architecture to perform the Nanocraft task if the parameters of the task (width, height, etc.) were only provided in the first observation, rather than every observation. ", "None-the-less I think this work is an important step in our understanding of the learning dynamics for neural programs. ", "In particular, while the RNN hidden state memory used by the prior work enables the learning of more complicted programs *in theory*, this has not been shown in practice. ", "So, it's possible that all the prior work is doing is learning to approixmate a much simpler architecture of this form. ", "Specifically, I think this work can act as a great base-line by forcing future work to focus on domains which cannot be easily solved by a simpler architecture of this form. ", "This limitation will also force the community to think about which tasks require a more complicated form of memory, and which can be solved with a very simple memory of this form.", "I also have the following additional concerns about the paper: 1. I found the current explanation of the algorithm to be very difficult to understand. ", "It's extremely difficult to understand the core method without reading the appendix, ", "and even with the appendix I found the explanation of the level-by-level decomposition to be too terse.", "2. It's not clear how their gradient approximation compares to the technique used by Li et. al. 
", "They obviously get better results on the addition and Nanocraft domains, ", "but I would have liked a more clear explanation and/or some experiments providing insights into what enables these improvements (or at least an admission by the authors that they don't really understand what enabled the performance improvements)."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request"]}
{"doc_id": "HJmMNVDlz", "text": ["This paper proposes a new model for the general task of inducing document representations (embeddings).", "The approach uses a CNN architecture, distinguishing it from the majority of prior efforts on this problem, which have tended to use RNNs.", "This affords obvious computational advantages, as training may be parallelized.", "Overall, the model presented is relatively simple (a good thing, in my view) and it indeed seems fast.", "I can thus see potential practical uses of this CNN based approach to document embedding in future work on language tasks.", "The training strategy, which entails selecting documents and then indexes within them stochastically, is also neat.", "Furthermore, the work is presented relatively clearly.", "That said, my main concerns regarding this paper are that: (1) there's not much new here, and,", "(2) the experimental setup may be flawed,", "in that it would seem model hyperparams were tuned for the proposed approach but not for the baselines;", "I elaborate on these concerns below.", "Specific comments:---- It's hard to tease out exactly what's new here:", "the various elements used are all well known.", "But perhaps there is merit in putting the specific pieces together.", "Essentially, the novelty is using a CNN rather than an RNN to induce document embeddings.", "- In Section 4.1, the authors write that they report results for their after running \"parameter sweeps ...\" --", "I presume that these were performed on a validation set,", "but the authors should say so.", "In any case, a very potential weakness here: were analagous parameter sweeps for this dataset performed for the baseline models?", "It would seem not, as the authors write \"the IMDB training data using the default hyper-parameters\" for skip-thought.", "Surely it is unfair comparison if one model has been tuned to a given dataset while others use only the default hyper-parameters?", "- Many important questions were left unaddressed in the experiments.", "For example, does one really need to use the gating mechanism borrowed from the Dauphin et al. paper?", "What happens if not?", "How big of an effect does the stochastic sampling of document indices have on the learned embeddings?", "Does the specific underlying CNN architecture affect results, and how much?", "None of these questions are explored.", "- I was left a bit confused regarding how the v_{1:i-1} embedding is actually estimated;", "I think the details here are insufficient in the current presentation.", "The authors write that this is a \"function of all words up to w_{i-1}\".", "This would seem to imply that at test time, prediction is not in fact parallelizable, no?", "Yet this seems to be one of the main arguments the authors make in favor of the model (in contrast to RNN based methods).", "In fact, I think the authors are proposing using the (aggregated) filter activation vectors (h^l(x)) in eq. 5,", "but for some reason this is not made explicit.", "Minor comments:- In Eq. 
4, should the product be element-wise to realize the desired gating (as per the Dauhpin paper)?", "This should be made explicit in the notation.", "- On the bottom of page 3, the authors claim \"Expanding the prediction to multiple words makes the problem more difficult since the only way to achieve that is by 'understanding' the preceding sequence.\"", "This claim should either by made more precise or removed.", "It is not clear exactly what is meant here, nor what evidence supports it.", "- Commas are missing in a few.", "For example on page 2, probably want a comma after \"in parallel\" (before \"significantly\"); also after \"parallelize\" above \"Approach\".", "- Page 4: \"In contrast, our model addresses only requires\"", "--> drop the \"addresses\"."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "request", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "fact", "evaluation", "request", "fact", "fact", "fact", "fact", "fact", "request", "request", "fact", "request", "evaluation", "fact", "request", "quote", "request"]}
{"doc_id": "HJZM5e9eM", "text": ["Summary This article considers neural networks over time-series, defined as a succession of convolutions and fully-connected layers with Leaky ReLU activations.", "The authors provide relatively general conditions for transformations described by such networks to admit a Lipschitz-continuous inverse.", "They extend these results to the case where the first layer is a convolution with irregular sampling.", "Finally, they show that the first convolutional filters can be chosen so as to represent a discrete wavelet transform, and provide some numerical experiments.", "Main remarks While the introduction seemed promising,", "and I enjoyed the writing style,", "I was disappointed with this article.", "(1) There are many mistakes in the mathematical statements.", "First, in Theorem 1.1, I do not think that phi_L \\circ ... \\circ phi_1 \\circ F is a non-linear frame,", "because I do not see why it should be of the form of Definition 1.2 (what would be the functions psi_n?).", "For the same reason, I also do not understand Theorem 1.2.", "In Proof 1.4, the line of equalities after \u00ab Also with the Plancherel formula \u00bb is, in my opinion, not true,", "because the L^2 norm of a product of functions is not the product of the L^2 norms of the functions.", "It also seems to me that Theorem 1.3, from [Benedetto, 1992], is incorrect:", "it is not the limit of t_n/n that must be larger than 2R, but the limit of N_n/n (with N_n the number of t_i's that belong to the interval [-n;n]),", "and there must probably be a compatibility condition between (t_n)_n and R_1, not only between (t_n)_n and R.", "In Proposition 1.6, I think that the equality should be a strict inequality.", "Additionally, I do not say that Proof 2.1 is not true,", "but the fact that the undersampling by a factor 2 does not prevent the operator from being a frame should be justified.", "(2) The authors do not justify, in the introduction, why admitting a continuous inverse should be a crucial criterion of quality for the representation described by a neural network.", "Additionally, the existence of this continous inverse relies on the fact that the non-linearity that is used is a Leaky ReLU,", "which looks a bit like \"cheating\" to me,", "because the Lipschitz constant of the inverse of a Leaky ReLU, although finite, is large,", "so it seems to me that cascading several layers with Leaky ReLUs could encode a transformation with strictly positive, but still very poor frame bounds.", "(3) I also do not understand why having \"orthogonal outputs\", as in Section 2, is really desirable;", "I think that it should be better justified.", "Also, there are probably other ways to achieve orthogonality than using wavelets in the first layer,", "so the fact that wavelets achieve orthogonality does not really justify why using wavelets in the first layer is a good choice, compared to other filters.", "(4) I had understood in the introduction that the authors would explain how to define a (good) deep representation for data of the form (x_n)_{n\\in\\N}, where each x_n would be the value of a time series at instant t_n, with the t_n non-uniformly spaced.", "But all the representations considered in the article seem to be applicable to functions in L^2(\\R) only (like in Theorem 1.4 and Theorem 2.2), and not to sequences (x_n)_{n\\in\\N}.", "There is something that I did not get here.", "Minor remarks - Fourth paragraph, third line: \"this generalization frames\"?", "- Last paragraph before \"Contributions & 
Organization\": \"that that\".", "- Paragraph about notations: it seems to me that what is defined as l^2(R) is denoted as l^2(Z) after the introduction.", "- Last line of this paragraph: R^d_1 should be R^{d_1}, and R^d_2 R^{d_2}.", "- I think \"smooth\" could be replaced by \"continuous\"", "(smoothness implies a notion of differentiability).", "- Paragraph before Proposition 1.1: \\sqrt{s} is not defined, and \"is supported\" should be \"are supported\".", "- Theorem 1.1: the f_k should be phi_k.", "- Definition 1.4: \"piece-linear\" -> \"piecewise linear\"?", "- Lemma 1.2 and Proof 1.4: there are indices missing to \\tilde h and \\tilde g.", "- Proof 1.4: \"and finally\" -> \"And finally\".", "- Proof 1.5: I do not understand the grammatical structure of the second sentence.", "- Proposition 1.4: the definition of a RNN is the same as definition 1.2 (except for the frame bounds);", "I do not see why such transformations should model RNNs.", "- Paragraph before Proposition 1.5: \"in,formation\".", "- Proposition 1.6: it should be said on which space the frame is injective.", "- On page 8, \"Lipschitz\" is erroneously written (twice).", "- Proposition 1.7: \"ProjW,l\"?", "- Definition 2.1: in the \"nested\" property, I think that the inclusion should be the other way around.", "- Before Theorem 2.1, the sentence \"Such Riesz basis is proven\" is unclear to me.", "- Theorem 2.1: \"filters convolution filters\".", "- I think the architecture described in Theorem 2.2 could be clarified;", "I am not exactly sure where all the arrows start from.", "- First line of Subsection 2.3: \". is always\" -> \"is always\".", "- First paragraph of Subsection 3.2: \"the the\".", "- Paragraph 3.2: could the previous algorithms developed for this dataset be described in slightly more detail?", "I also do not understand the meaning of \"must solely leverage the temporal structure\".", "- I think that the section about numerical experiments could be slightly rewritten, so that the architecture used in each experiment is clearer.", "In Paragraph 3.2 in particular, I did not get why the architecture presented in Figure 6 has far fewer parameters than the one in Figure 5;", "it would help if the authors clearly precised how many parameters each layer contains.", "- Conclusion: \"we can to\" -> \"we can\".", "- Definition 4.1: p_v(s) -> p_v(t)."], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "non-arg", "request", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "request", "evaluation", "request", "request", "evaluation", "request", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "non-arg", "request", "fact", "non-arg", "request", "evaluation", "non-arg", "request", "evaluation", "request", "non-arg", "request", "evaluation", "request", "evaluation", "request", "request", "request"]}
{"doc_id": "Hyd9YyOlf", "text": ["The paper studies the problem of DNN loss function design for reducing intra-class variance in the output feature space. ", "The key contribution is proposing an isotropic variant of the softmax loss that can balance the accuracy of classification and compactness of individual class. ", "The proposed loss has been compared extensively against a number of closely related approaches in methodology. ", "Numerical results on benchmark datasets show some improvement of the proposed loss over softmax loss and center loss (Wen et al., 2016), when applied to distance-based classifiers such as k-NN and k-means. ", "Pros: - The idea of isotropic normalization for enhancing compactness of class is well motivated", "- The paper is mostly clearly organized and presented.", "- Numerical study shows some promise of the proposed method.", "Cons: - The novelty of method is mostly incremental given the prior work of (Wen et al., 2016) which has provided a slightly different isotropic variant of softmax loss.", "- The training procedure of the proposed method remains unclear in this paper."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "By-CxBKgz", "text": ["This paper presents Defense-GAN: a GAN that used at test time to map the input generate an image (G(z)) close (in MSE(G(z), x)) to the input image (x), by applying several steps of gradient descent of this MSE. ", "The GAN is a WGAN trained on the train set (only to keep the generator). ", "The goal of the whole approach is to be robust to adversarial examples, without having to change the (downstream task) classifier, only swapping in the G(z) for the x.", "+ The paper is easy to follow.", "+ It seems (but I am not an expert in adversarial examples) to cite the relevant litterature (that I know of) and compare to reasonably established attacks and defenses.", "+ Simple/directly applicable approach that seems to work experimentally, ", "but - A missing baseline is to take the nearest neighbour of the (perturbed) x from the training set.", "- Only MNIST-sized images, and MNIST-like (60k train set, 10 labels) datasets: MNIST and F-MNIST.", "- Between 0.043sec and 0.825 sec to reconstruct an MNIST-sized image.", "? MagNet results were very often worse than no defense in Table 4, ", "could you comment on that?", "- In white-box attacks, it seems to me like L steps of gradient descent on MSE(G(z), x) should be directly extended to L steps of (at least) FGSM-based attacks, at least as a control."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "non-arg", "request"]}
{"doc_id": "r1ke1YDlz", "text": ["SIGNIFICANCE AND ORIGINALITY: The authors propose to accelerate the learning of complex tasks by exploiting traces of experts.", "Unlike the most common form of imitation learning or behavioral cloning, the authors formulate their solution in the case where the expert\u2019s state trajectory is observable, but the expert\u2019s actions are not. ", "This is an important and useful problem in robotics and other applications. ", "Within this specific setting the authors differentiate their approach from others by developing a solution that does NOT estimate an explicit dynamics model ( e.g., P( S\u2019 | S, A ) ).", "The benefits of not estimating an explicit action model are not really demonstrated in a clear way.", "The author\u2019s articulate a specific solution that provides heuristic guidance rewards that cause the learner to favor actions that achieve subgoals calculated from expert behavior and refactors the representation of the Q function so that it has a component that is a function of the subgoal extracted from the expert.", "These subgoals are linear functions of the expert\u2019s change in state (or change in state features).", "The resultant policy is a function of the expert traces on which it depends.", "The authors show they can retrain a new policy that does not require the expert traces.", "As far as I am aware, this is a novel approach to the problem. ", "The authors claim that this factorization is important and useful ", "but the paper doesn\u2019t really illustrate this well.", "They demonstrate the usefulness of the algorithm against a DQN baseline on Doom game problems.", "The algorithm learns faster than unassisted DQN as shown by learning curve plots. ", "They also evaluate the algorithms on the quality of the final policies for their approach, DQN, and a supervised learning from demonstration approach ( LfD ) that requires expert actions.", "The proposed approach does as well or better than competing approaches.", "QUALITY Ablation studies show that the guidance rewards are important to achieving the improved performance of the proposed method which is important confirmation that the architecture is working in the intended way. ", "However, it would also be useful to do an ablation study of the \u201cfactorization\u201d of action values. ", "Is this important to achieving better results as well or is the guidance reward enough? ", "This seems like a key claim to establish.", "CLARITY The details of the memory based kernel density estimation and neural gradient training seemed complicated by the way that the process was implemented. 
", "Is it possible to communicate the intuitions behind what is going on?", "I was able to work out the intuitions behind the heuristic rewards, but I still don\u2019t clearly get what the Q-value factorization is providing:", "To keep my text readable, I assume we are working in feature space instead of state space and use different letters for learner and expert:", "Learner: S = \\phi(s) Expert\u2019s i^th state visit: Ei = \\phi( \\hat{s}_i } where Ei\u2019 is the successor state to Ei", "The paper builds upon approximate n-step discrete-action Q-learning where the Q value for an action is a linear function of the state features: Qp(S,a) = Wa S + Ba where parameters p = ( Wa, Ba ).", "After observing an experience ( S,A,R,S\u2019 ) we use Bellman Error as a loss function to optimize Qp for parameter p.", "I ignore the complexities of n-step learning and discount factors for clarity.", "Loss = E[ R + MAXa\u2019 Qp(S\u2019,a\u2019) - Qp(S,a) ] ", "The authors suggest we can augment the environment reward R with a heuristic reward Rh proportional to the similarity between the learner \u201csubgoal\" and the expert \u201csubgoal\" in similar states. ", "The authors propose to use cosine distance between representations of what they call the \u201csubgoals\u201d of learner and expert. ", "A subgoal is defined as a linear transformation of the distance traveled by an agent during a transition.", "The heuristic reward is proportional to the cosine distance between the learner and expert \u201csubgoals\" Rh = B < Wv LearnerDirectionInStateS, Wv ExpectedExpertDirectionInStatesSimilarToS > The learner\u2019s direction in state S is just (S-S\u2019) in feature space.", "The authors model the behavior of the expert as a kernel density type approximator giving the expected direction of the expert starting from a states similar to the one the learner is in. ", "Let < Wk S, Wk Ej > be a weighted similarity between learner state features S and expert state features Ej and Ej\u2019 be the successor state features encountered by the expert.", "Then the expected expert direction for learner state S is: SUMj < Wk S, Wk Ej > ( Ej - Ej\u2019 ) ", "Presumably the linear Wk transform helps us pick out the important dimensions of similarity between S and Ej.", "Mapping the learner and expert directions into subgoal space using Wv, the heuristic reward is Rh = B < Wv (S-S\u2019), Wv SUMj < Wk S, Wk Ej > ( Ej - Ej\u2019 ) >", "I ignore the ReLU here, but I assume that is operates element-wise and just clips negative values?", "There is only one layer here ", "so we don\u2019t have complex non-linear things going on?", "In addition to introducing a heuristic reward term, the authors propose to alter the Q-function to be specific to the subgoal.", "Q( s,a,g ) = g(S) Wa S + Ba", "The subgoal is the same as the first part, namely a linear transform of the expected expert direction in states similar to state S.", "g(S) = Wv SUMj < Wk S, Wk Ej > ( Ej - Ej\u2019 ) ", "So in some sense, the Q function is really just a function of S, as g is calculated from S.", "Q( S,a ) = g(S) Wa S + Ba ", "So this allows the Q-function more flexibility to capture each subgoal in a different linear space?", "I don\u2019t really get the intuition behind this formulation. ", "It allows the subgoal to adjust the value of the underlying model? ", "Essentially the expert defines a new Q-value problem at every state for the learner? 
", "In some sense are we are defining a model for the action taken by the expert?", "ADDITIONAL THOUGHTS While the authors compare to an unassisted baseline, they don\u2019t compare to methods that use an action model", "which is not a fatal flaw but would have been nice. ", "One can imagine there might be scenarios where the local guidance rewards of this form could be problematic, particularly in scenarios where the expert and learner are not identical", "and it is possible to return to previous states, such as the grid worlds the authors discuss:", "If the expert\u2019s first few transitions were easily approximable, the learner would get local rewards that cause it to mimic expert behavior.", "However, if the next step in the expert\u2019s path was difficult to approximate, then the reward for imitating the expert would be lower.", "Would the learner then just prefer to go back towards those states that it can approximate and endlessly loop?", "In this case, perhaps expressing heuristic rewards as potentials as described in Ng\u2019s shaping paper might solve the problem.", "PROS AND CONS Important problem generally. ", "Avoiding the estimation of a dynamics model was stated as a given, but perhaps more could be put into motivating this goal. ", "Hopefully it is possible to streamline the methodology section to communicate the intuitions more easily."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "request", "evaluation", "non-arg", "non-arg", "evaluation", "fact", "non-arg", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "request", "fact", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "request", "evaluation", "request", "request"]}
{"doc_id": "ByGPUUYgz", "text": ["This paper attacks an important problems with an interesting and promising methodology. ", "The authors deal with inference in models of collective behavior, specifically at how to infer the parameters of a mean field game representation of collective behavior. ", "The technique the authors innovate is to specify a mean field game as a model, and then use inverse reinforcement learning to learn the reward functions of agents in the mean field game.", "This work has many virtues, and could be an impactful piece. ", "There is still minimal work at the intersection of machine learning and collective behavior, ", "and this paper could help to stimulate the growth of that intersection. ", "The application to collective behavior could be an interesting novel application to many in machine learning, ", "and conversely the inference techniques that are innovated should be novel to many researchers in collective behavior.", "At the same time, the scientific content of the work has critical conceptual flaws. ", "Most fundamentally, the authors appear to implicitly center their work around highly controversial claims about the ontological status of group optimization, without the careful justification necessary to make this kind of argument. ", "In addition to that, the authors appear to implicitly assume that utility function inference can be used for causal inference. ", "That is, there are two distinct mistakes the authors make in their scientific claims:", "1) The authors write as if mean field games represent population optimization ", "(Mean field games are not about what a _group_ optimizes; they are about what _individuals_ optimize, and this individual optimization leads to certain patterns in collective behaviors)", "2) The authors write as if utility/reward function inference alone can provide causal understanding of collective or individual behavior", "1 - I should say that I am highly sympathetic to the claim that many types of collective behavior can be viewed as optimizing some kind of objective function. ", "However, this claim is far from mainstream, and is in fact highly contested. ", "For instance, many prominent pieces of work in the study of collective behavior have highlighted its irrational aspects, from the madness of crowds to herding in financial markets.", "Since it is so fringe to attribute causal agency to groups, let alone optimal agency, ", "in the remainder of my review I will give the authors the benefit of the doubt and assume when they say things like \"population behavior may be optimal\", they mean \"the behavior of individuals within a population may be optimal\". ", "If the authors do mean to say this, they should be more careful about their language use in this regard (individuals are the actors, not populations). ", "If the authors do indeed mean to attribute causal agency to groups (as suggested in their MDP representation), they will run into all the criticisms I would have about an individual-level analysis and more. ", "Suffice it to say, mean field games themselves don't make claims about aggregate-level optimization. ", "A Nash equilibrium achieves a balance between individual-level reward functions. ", "These reward functions are only interpretable at the individual level. ", "There is no objective function the group itself in aggregate is optimizing in mean field games. 
", "For instance, even though the mean field game model of the Mexican wave produces wave solutions, ", "the model is premised on people having individual utility functions that lead to emergent wave behavior. ", "The model does not have the representational capacity to explain that people actually intend to create the emergent behavior of a wave (even though in this case they do). ", "Furthermore, the fact that mean field games aggregate to a single-agent MDP does not imply that that the group can rightfully be thought of as an agent optimizing the reward function, ", "because there is an exact correspondence between the rewards of the individual agents in the MFG and of the aggregate agent in the MDP by construction.", "2 - The authors also claim that their inference methods can help explain why people choose to talk about certain topics. ", "As far as the extent to which utility / reward function inference can provide causal explanations of individual (or collective) behavior, the argument that is invariably brought against a claim of optimization is that almost any behavior can be explained as optimal post-hoc with enough degrees of freedom in the utiliy function of the behavioral model. ", "Since optimization frameworks are so flexible, ", "they have little explanatory power and are hard to falsify. ", "In fact, there is literally no way that the modeling framework of the authors even affords the possibility that individual/collective behavior is not optimal. ", "Optimality is taken as an assumption that allows the authors to infer what reward function is being optimized. ", "The authors state that the reward function they infer helps to interpret collective behavior ", "because it reveals what people are optimizing. ", "However, the reward function actually discovered is not interpretable at all. ", "It is simply a summary of the statistical properties of changes in popularity of the topics of conversation in the Twitter data the authors' study. ", "To quote the authors' insights: \"The learned reward function reveals that a real social media population favors states characterized by a highly non-uniform distribution with negative mass gradient in decreasing order of topic popularity, as well as transitions that increase this distribution imbalance.\" ", "The authors might as well have simply visualized the topic popularities and changes in popularities to arrive at such an insight. ", "To take the authors claims literally, we would say that people have an intrinsic preference for everyone to arbitrarily be talking about the same thing, regardless of the content or relevance of that topic. ", "To draw an analogy, this is like observing that on some days everybody on the street is carrying open umbrellas and on other days not, and inferring that the people on the street have a preference for everyone having their umbrellas open together (and the model would then predict that if one person opens an umbrella on a sunny day, everybody else will too).", "To the authors credit, they do make a brief attempt to present empirical evidence for their optimization view, stating succinctly: \"The high prediction accuracy of the learned policy provides evidence that real population behavior can be understood and modeled as the result of an emergent population-level optimization with respect to a reward function.\" ", "Needless to say, this one-sentence argument for a highly controversial scientific claims falls flat on closer inspection. 
", "Setting aside the issues of correlation versus causation, predictive accuracy does not in and of itself provide scientific plausibility. ", "When an n-gram model produces text that is in the style of a particular writer, we do not conclude that the writer must have been composing based on the n-gram's generative mechanism. ", "Predictive accuracy only provides evidence when combined in the first place with scientific plausibility through other avenues of evidence.", "The authors could attempt to address these issues by making what is called an \"as-if\" argument, ", "but it's not even clear such an argument could work here in general. ", "With all this in mind, it would be more instructive to show that the inference method the authors introduce could infer the correct utility functions used in standard mean field games, such as modeling traffic congestion and the Mexican wave. ", "-- All that said, the general approach taken in the authors' work is highly promising, ", "and there are many fruitful directions I would be exicted to see this work taken --- e.g., combining endogenous and exogenous rewards or looking at more complex applications. ", "As a technical contribution, the paper is wonderful, ", "and I would enthusiastically support acceptance. ", "The authors simply either need to be much more careful with the scientific claims about collective behavior they make, or limit the scope of the contribution of the paper to be modeling / inference in the area of collective behavior. ", "Mean field games are an important class of models in collective behavior, ", "and being able to infer their parameters is a nice step forward purely due to the importance of that class of games. ", "Identifying where the authors' inference method could be applied to draw valid scientific conclusions about collective behavior could then be an avenue for future work. ", "Examples of plausible scientific applications might include parameter inference in settings where mean field games are already typically applied in order to improve the fit of those models or to learn about trade-offs people make in their utility functions in those settings.", "-- Other minor comments: - (Introduction) It is not clear at all how the Arab Spring, Black Lives Matter, and fake news are similar --- i.e., whether a single model could provide insight into these highly heterogeneous events ", "--- nor is it clear what end the authors hope to achieve by modeling them ", "--- the ethics of modeling protests in a field crowded with powerful institutional actors is worth carefully considering.", "- If I understand correctly, the fact that the authors assume a factored reward function seems limiting. ", "Isn't the major benefit of game theory it's ability to accommodate utility functions that depend on the actions of others?", "- The authors state that one of their essential insights is that \"solving the optimization problem of a single-agent MDP is equivalent to solving the inference problem of an MFG.\" ", "This statement feels a bit too cute at the expense of clarity. ", "The authors perform inference via inverse-RL, ", "so it's more clear to say the authors are attempting to use statistical inference to figure out what is being optimized.", "- The relationship between MFGs and a single-agent MDP is nice and a fine observation, but not as surprising as the authors frame it as. 
", "Any multiagent MDP can be naively represented as a single-agent MDP where the agent has control over the entire population, ", "and we already know that stochastic games are closely related to MDPs. ", "It's therefore hard to imagine that there woudn't be some sort of correspondence."], "labels": ["evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "quote", "request", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation"]}
{"doc_id": "B1fZIQcxM", "text": ["The paper is not anonymized.", "In page 2, the first line, the authors revealed [15] is a self-citation", "and [15] is not anonumized in the reference list."], "labels": ["evaluation", "fact", "fact"]}
{"doc_id": "H1asng9lG", "text": ["This paper introduces a new exploration policy for Reinforcement Learning for agents on the web called \"Workflow Guided Exploration\".", "Workflows are defined through a DSL unique to the domain.", "The paper is clear, very well written, and well-motivated.", "Exploration is still a challenging problem for RL.", "The workflows remind me of options though in this paper they appear to be hand-crafted.", "In that sense, I wonder if this has been done before in another domain.", "The results suggest that WGE sometimes helps but not consistently.", "While the experiments show that DOMNET improves over Shi et al, that could be explained as not having to train on raw pixels or not enough episodes."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "non-arg", "fact", "evaluation"]}
{"doc_id": "HkgrJeEgM", "text": ["This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima?", "The approach in this paper is to study a standard MNN with one hidden layer. ", "They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially. ", "They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution. ", "Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice. ", "This result seems interesting, ", "although it is clearly not sufficient to explain even the success on the setting studied in this paper, ", "since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: ", "to estimate the latter, the size of each basin of attraction should be taken into account. ", "The authors are aware of this point and mention it as a disadvantage. ", "However, since this question in general is a difficult one, ", "any progress might be considered interesting. ", "Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima.", "The paper is well written and well presented, ", "and the limitations of the approach, as well as its advantages over previous work, are clearly explained. ", "As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work. ", "I did not verify the proofs in the appendix."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg"]}
{"doc_id": "SJs7uYYeM", "text": ["At the heart of the paper, there is a single idea: to decouple the weight decay from the number of steps taken by the optimization process (the paragraph at the end of page 2 is the key to the paper). ", "This is an important and largely overlooked area of implementation ", "and most off-the-shelf optimization algorithms, unfortunately, miss this point, too. ", "I think that the proposed implementation should be taken seriously, especially in conjunction with the discussion that has been carried out with the work of Wilson et al., 2017 ", "(https://arxiv.org/abs/1705.08292).", "The introduction does a decent job explaining why it is necessary to pay attention to the norm of the weights as the training progresses within its scope. ", "However, I would like to add a couple more points to the discussion: - \"Optimal weight decay is a function (among other things) of the total number of epochs / batch passes.\" ", "in principle, it is a function of weight updates. ", "Clearly, it depends on the way the decay process is scheduled. ", "However, there is a bad habit in DL where time is scaled by the number of epochs rather than the number of weight updates which sometimes lead to misleading plots (for instance, when comparing two algorithms with different batch sizes).", "- Another ICLR 2018 submission has an interesting take on the norm of the weights and the algorithm ", "(https://openreview.net/forum?id=HkmaTz-0W¬eId=HkmaTz-0W). ", "Figure 3 shows the histograms of SGD/ADAM with and without WD (the *un-fixed* version), ", "and it clearly shows how the landscape appear misleadingly different when one doesn't pay attention to the weight distribution in visualizations. ", "- In figure 2, it appears that the training process has three phases, an initial decay, a steady progress, and a final decay that is more pronounced in AdamW. ", "This final decay also correlates with the better test error of the proposed method. ", "This third part also seems to correspond to the difference between Adam and AdamW through the way they branch out after following similar curves. ", "One wonders what causes this branching and whether the key the desired effects are observed at the bottom of the landscape.", "- The paper concludes with \"Advani & Saxe (2017) analytically showed that in the limited data regime of deep networks the presence of eigenvalues that are zero forms a frozen subspace in which no learning occurs and thus smaller (e.g., zero) initial weight norms should be used to achieve best generalization results.\" ", "Related to this there is another ICLR 2018 submission ", "(https://openreview.net/forum?id=rJrTwxbCb), ", "figure 1 shows that the eigenvalues of the Hessian of the loss have zero forms at the bottom of the landscape, not at the beginning. ", "Back to the previous point, maybe that discussion should focus on the second and third phases of the training, not the beginning. ", "- Finally, it would also be interesting to discuss the relation of the behavior of the weights at the last parts of the training and its connection to pruning. ", "I'm aware that one can easily go beyond the scope of the paper by adding more material. ", "Therefore, it is not completely reasonable to expect all such possible discussions to take place at once. ", "The paper as it stands is reasonably self-contained and to the point. 
", "Just a minor last point that is irrelevant to the content of the work: The slash punctuation mark that is used to indicate 'or' should be used without spaces as in 'epochs/batch'."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "reference", "evaluation", "quote", "fact", "fact", "evaluation", "evaluation", "reference", "fact", "evaluation", "fact", "fact", "fact", "request", "fact", "fact", "reference", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "HkdTXw1bM", "text": ["The paper takes an interesting approach to solve the existing problems of GAN training, using Coulomb potential for addressing the learning problem. ", "It is also well written with a clear presentation of the motivation of the problems it is trying to address, the background and proves the optimality of the suggested solution. ", "My understanding and validity of the proof is still an educated guess. ", "I have been through section A.2 , but I'm unfamiliar with the earlier literature on the similar topics so I would not be able to comment on it. ", "Overall, I think this is a good paper that provides a novel way of looking at and solving problems in GANs. ", "I just had a couple of points in the paper that I would like some clarification on : ", "* In section 2.2.1 : The notion of the generated a_i not disappearing is something I did not follow. ", "What does it mean for a generated sample to \"not disappear\" ? ", "and this directly extends to the continuity equation in (2). ", "* In section 1 : in the explanation of the 3rd problem that GANs exhibit i.e. the generator not being able to generalize the distribution of the input samples, I was hoping if you could give a bit more motivation as to why this happens. ", "I don't think this needs to be included in the paper, ", "but would like to have it for a personal clarification."], "labels": ["evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "request", "fact", "request", "evaluation", "request"]}
{"doc_id": "rycZrCJef", "text": ["Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. ", "Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; ", "TTNs exhibit same increase level of abstractions as CNN or DBN; ", "Fidelity and entanglement entropy can be considered as some measurements of the network.", "Authors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, ", "which suits more the 2-D nature of images. ", "In section 2, authors stated that the choice of feature function is arbitrary, ", "and a specific feature map was introduced in Section 4. ", "However, it is not straightforward to connect (10) to (1) or (2). ", "It is better to clarify this connection ", "because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability. ", "For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand. ", "Is it related to specific feature map? ", "How about the complexity of eigen-decomposition for one tensor at each iterates. ", "And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm? ", "More details on these problems are required for readers\u2019 better understanding.", "From Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond ", "because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion. ", "It is better to clarify these relationships with either clear explanation or better examples.", "From Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning. ", "However, from Fig 3 only, it is hard to make this conclusion. ", "First, there are not too many differences from Fig 3(a) to Fig 3(e). ", "Second, there is no visualization result reported from deep learning on the same data for comparison. ", "Hence, it is not convincing to draw this conclusion only from Fig 3. ", "In Section 4.2, what strategy is used to obtain these parameters in Table 1?", "In Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "request"]}
{"doc_id": "H1PuapUef", "text": ["*Paper summary* The paper considers GANs from a theoretical point of view. ", "The authors approach GANs from the 3-Wasserstein point of view and provide several insights for a very specific setting. ", "In my point of view, the main novel contribution of the paper is to notice the following fact: (*) It is well known that the 2-Wasserstein distance W2(PY,QY) between multivariate Gaussian PY and its empirical version QY scales as $n^{-2/d}$, i.e. converges very slow as the dimensionality of the space $d$ increases. ", "In other words, QY is not such a good way to estimate PY in this setting. ", "A somewhat better way is use a Gaussian distribution PZ with covariance matrix S computed as a sample covariance of QY. ", "In this case W2(PY, PZ) scales as $\\sqrt{d/n}$.", "The paper introduces this observation in a very strange way within the context of GANs. ", "Moreover, I think the final conclusion of the paper (Eq. 19) has a mistake, ", "which makes it hard to see why (*) has any relation to GANs at all.", "There are several other results presented in the paper regarding relation between PCA and the 2-Wasserstein minimization for Gaussian distributions (Lemma 1 & Theorem 1). ", "This is indeed an interesting point, ", "however the proof is almost trivial ", "and I am not sure if this provides any significant contribution for the future research.", "Overall, I think the paper contains several novel ideas, ", "but its structure requires a *significant* rework ", "and in the current form it is not ready for being published. ", "*Detailed comments* In the first part of the paper (Section 2) the authors propose to use the optimal transport distance Wc(PY, g(PX)) between the data distribution PY (or its empirical version QY) and the model as the objective for GAN optimization. ", "This idea is not novel: ", "WGAN [1] proposed (and successfully implemented) to minimize the particular case of W1 distance by going through the dual form, ", "[2] proposed to approach any Wc using auto-encoder reformulation of the primal (and also shoed that [5] is doing exactly W2 minimization), ", "and [3] proposed the same using Sinkhorn algorithm. ", "So this point does not seem to be novel.", "The rest of the paper only considers 2-Wasserstein distance with Gaussian PY and Gaussian g(PX) (which I will abbreviate with R), ", "which looks like an extremely limited scenario (and certainly has almost no connection to the applications of GANs).", "Section 3 first establishes a relation between PCA and minimizing 2-Wasserstein distance for Gaussian distributions (Lemma 1, Theorem 1). ", "Then the authors show that if R minimizes W2(PY, R) and QR minimizes W2(QY, QR) then the excess loss W2(PY, QR) - W2(PY, R) approaches zero at the rate $n^{-2/d}$ (both for linear and unconstrained generators). ", "This result basically provides an upper bound showing that GANs need exponentially many samples to minimize W2 distance. ", "I don't find these results novel, ", "as they already appeared in [4] with a matching lower bound for the case of Gaussians ", "(Theorem B.1 in Appendix can be modified easily to show this). ", "As the authors note in the conclusion of Section 3, these results have little to do with GANs, ", "as GANs are known to learn quite quickly ", "(which contradicts the theory of Section 3).", "Finally, in Section 4 the authors approach the same W2 problem from its dual form and notice that for the LQG model the optimal discriminator is quadratic. 
", "Based on this they reformulate the W2 minimization for LQG as the constrained optimization with respect to p.d. matrix A (Eq 16). ", "The same conclusion does not work unfortunately for W2(QY, R), ", "which is the real training objective of GANs. ", "Theorem 3 shows that nevertheless, if we still constrain discriminator in the dual form of W2(QY, R) to be quadratic, the resulting soliton QR* performs the empirical PCA of Pn. ", "This leads to the final conclusion of the paper, ", "which I think contains a mistake. ", "In Eq 19 the first equation, according to the definitions of the authors, reads \\[W2(PY, QR) = W2(PY, PZ), (**)\\] where QR is trained to minimize min_R W2(QY, R) and PZ is as defined in (*) in the beginning of these notes. ", "However, PZ is not the solution of min_R W2(QY, R) ", "as the authors notice in the 2nd paragraph of page 8. ", "Thus (**) is not true ", "(at least, it is not proved in the current version of the text). ", "PZ is a solution of min_R W2(QY, R) *where the discriminator is constrained to be quadratic*. ", "This mismatch is especially strange, ", "given the authors emphasize in the introduction that they provide bounds on divergences which are the same as used during the training (see 2nd paragraph on page 2) ", "--- here the bound is on W2, but the empirical GAN actually does a regularized training (with constrained discriminator).", "Finally, I don't think the experiments provide any convincing insights, ", "because the authors use W1-minimization to illustrate properties of the W2. ", "Essentially the authors say \"we don't have a way to perform W2 minimization, so we rather do the W1 minimization and assume that these two are kind of similar\".", "* Other comments * (1) Discussion in Section 2.1 seems to never play a role in the paper.", "(2) Page 4: in p-Wasserstein distance, ||.|| does not need to be a Euclidean metric. ", "It can be any metric.", "(3) Lemma 2 seems to repeat the result from (Canas and Rosasco, 2012) ", "as later cited by authors on page 7?", "(4) It is not obvious how does Theorem 2 translate to the excess loss? ", "(5) Section 4. I am wondering how exactly the authors are going to compute the conjugate of the discriminator, given the discriminator most likely is a deep neural network?", "[1] Arjovsky et al., Wasserstein GAN, 2017", "[2] Bousquet et al, From optimal transport to generative modeling: the VEGAN cookbook, 2017", "[3] Genevay et al., Learning Generative Models with Sinkhorn Divergences, 2017", "[4] Arora et al, Generalization and equilibrium in GANs, 2017", "[5] Makhazani et al., Adversarial Autoencoders, 2015"], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "reference", "reference", "reference", "reference", "reference"]}
{"doc_id": "rk_xMk8ef", "text": ["Summary This paper presents a dataset of mathematical equations and applies TreeLSTMs to two tasks: verifying and completing mathematical equations. ", "For these tasks, TreeLSTMs outperform TreeNNs and RNNs. ", "In my opinion, the main contribution of this paper is this potentially useful dataset, as well as an interesting way of representing fixed-precision floats. ", "However, the application of TreeNNs and TreeLSTMs is rather straight-forward, ", "so in my (subjective) view there are only a few insights salvageable for the ICLR community ", "and compared to Allamanis et al. (2017) this paper is a rather incremental extension.", "Strengths The authors present a new datasets for mathematical identities. ", "The method for generating additional correct identities could be useful for future research in this area.", "I find the representation of fixed-precision floats presented in this paper intriguing. ", "I believe this contribution should be emphasized more ", "as it allows the model to generalize to unseen numbers ", "and I am wondering whether the authors see some wider application of this representation for neural programming models.", "I liked the categorization of the related work.", "Weaknesses p2: It is mentioned that the framework is the first to combine symbolic expressions with black-box function evaluations, ", "but I would argue that Neural Programmer-Interpreters (NPI; Reed & De Freitas) are already doing that ", "(see Fig 1 in that paper where the execution trace is a symbolic expression and some expressions \"Act(LEFT)\" are black-box function applications directly changing the image).", "The differences to Allamanis et al. (2017) are not worked out well. ", "For instance, the authors use the TreeNN model from that paper as a baseline ", "but the EqNet model is not mentioned at all. ", "The obvious question is whether EqNets can be applied to the two tasks (verifying and completing mathematical equations) and if so why this has not been done.", "The contribution regarding black box function application is unclear to me. ", "On page 6, it is unclear to me what \"handles [\u2026] function evaluation expressions\". ", "As far as I understand, the TreeLSTM learns to the return value of function evaluation expressions in order to predict equality of equations, ", "but this should be clarified.", "I find the connection of the proposed model and task to \"neural programming\" weak. ", "For instance, as far as I understand there is no support for stateful programs. ", "Furthermore, it would be interesting to hear how this work can be applied to existing programming languages such as Haskell. ", "What are the limitations of the architecture? ", "Could it learn to identify equality of two lists in Haskell?", "p6: The paragraph on baseline models is rather uninformative. ", "TreeLSTMs have been shown to outperform Tree NN's in various prior work. ", "The statement that \"LSTM cell [\u2026] helps the model to have a better understanding of the underlying functions in the domain\" is vague. ", "LSTM cells compared to fully-connected layers in Tree NNs ameliorate vanishing and exploding gradients along paths in the tree. ", "Furthermore, I would like to see a qualitative analysis of the reasoning capabilities that are mentioned here. 
", "Did you observe any systematic differences in the ~4% of equations where the TreeLSTM fails to generalize (Table 3; first column).", "Minor Comments Abstract: \"Our framework generalizes significantly better\" I think it would be good to already mention in comparison to what this statement is.", "p1: \"aim to solve tasks such as learn mathematical\" -> \"aim to solve tasks such as learning mathematical\"", "p2: You could add a citation for Theano, Tensorflow and Mxnet.", "p2: Could you elaborate how equation completion is used in Mathematical Q&A?", "p3: Could you expand on \"mathematical equation verification and completion [\u2026] has broader applicability\" by maybe giving some concrete examples.", "p3 Eq. 5: What precision do you consider? ", "Two digits?", "p3: \"division because that they can\" -> \"division because they can\"", "p4 Fig. 1: Is there a reason 1 is represented as 10^0 here? ", "Do you need the distinction between 1 (the integer) and 1.0 (the float)?", "p5: \"we include set of changes\" -> \"we include the set of changes\"", "p5: In my view there is enough space to move appendix A to section 2. ", "In addition, it would be great to see more examples of generated identities at this stage (including negative ones).", "p5: \"We generate all possible equations (with high probability)\"", "\u2013 what is probabilistic about this?", "p5: I don't understand why function evaluation results in identities of depth 2 and 3. ", "Is it both or one of them?", "p6: The modules \"symbol\" and \"number\" are not shown in the figure. ", "I assume they refer to projections using Wsymb and Wnum?", "p6: \"tree structures neural networks\" -> \"tree structured neural networks\"", "p6: A reference for the ADAM optimizer should be added.", "p6: Which method was used for optimizing these hyperparameters? ", "If a grid search was used, what intervals were used?", "p7: \"the superiority of Tree LSTM to Tree NN shows that is important to incorporate cells that have memory\" is not a novel insight.", "p8: When you mention \"you give this set of equations to the models look at the top k predictions\" I assume you ranked the substituted equations by the probability that the respective model assigns to it?", "p8: Do you have an intuition why prediction function evaluations for \"cos\" seem to plateau certain points? ", "Furthermore, it would be interesting to see what effect the choice of non-linearity on the output of the TreeLSTM has on how accurately it can learn to evaluate functions. ", "For instance, one could replace the tanh with cos and might expect that the model has now an easy time to learn to evaluate cos(x).", "p8 Fig 4b; p9: Relating to the question regarding plateaus in the function evaluation: \"in Figure 4b [\u2026] the top prediction (0.28) is the correct value for tan with precision 2, but even other predictions are quite close\" \u2013 they are all the same and this bad, right?", "p9: \"of the state-of-the-art neural reasoning systems\" is very broad and in my opinion misleading too. ", "First, there are other reasoning tasks (machine reading/Q&A, Visual Q&A, knowledge base inference etc.) too ", "and it is not obvious how ideas from this paper translate to these domains. 
", "Second, for other tasks TreeLSTMs are likely not state-of-the-art ", "(see for example models on the SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/) .", "p9: \"exploring recent neural models that explicitly use memory cells\" ", "\u2013 I think what you mean is models with addressable differentiable memory."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "request", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "request", "request", "request", "evaluation", "fact", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "request", "request", "non-arg", "non-arg", "request", "request", "request", "request", "request", "quote", "request", "request", "request", "fact", "request", "request", "request", "request", "request", "evaluation", "non-arg", "non-arg", "request", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "reference", "quote", "fact"]}
{"doc_id": "Hym3oxKlf", "text": ["In this paper, the authors have proposed a GAN based method to conduct data augmentation. ", "The cross-class transformations are mapped to a low dimensional latent space using conditional GAN. ", "The paper is technically sound and the novelty is significant. ", "The motivation of the proposed methods is clearly illustrated. ", "Experiments on three datasets demonstrate the advantage of the proposed framework. ", "However, this paper still suffers from some drawbacks as below:", "(1)\tThe illustration of the framework is not clear enough. ", "For example, in figure 3, it says the GAN is designed for \u201cclass c\u201d, which is ambiguous whether the authors trained only one network for all class or trained multiple networks and each is trained on one class.", "(2)\tSome details is not clearly given, such as the dimension of the Gaussian distribution, the dimension of the projected noise and .", "(3)\tThe proposed method needs to sample image pairs in each class. ", "As far as I am concerned, in most cases sampling strategy will affect the performance to some extent. ", "The authors need to show the robustness to sampling strategy of the proposed method."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request"]}
{"doc_id": "SJyXoTtlG", "text": ["This paper introduces a generative approach for 3D point clouds.", "More specifically, two Generative Adversarial approaches are introduced: Raw point cloud GAN, and Latent-space GAN (r-GAN and l-GAN as referred to in the paper).", "In addition, a GMM sampling + GAN decoder approach to generation is also among the experimented variations.", "The results look convincing for the generation experiments in the paper, both from class-specific (Figure 1) and multi-class generators (Figure 6).", "The quantitative results also support the visuals.", "One question that arises is whether the point cloud approaches to generation is any more valuable compared to voxel-grid based approaches.", "Especially Octree based approaches [1-below] show very convincing and high-resolution shape generation results,", "whereas the details seem to be washed out for the point cloud results presented in this paper.", "I would like to see comparison experiments with voxel based approaches in the next update for the paper.", "[1] @article{tatarchenko2017octree, title={Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs}, author={Tatarchenko, Maxim and Dosovitskiy, Alexey and Brox, Thomas}, journal={arXiv preprint arXiv:1703.09438}, year={2017} }"], "labels": ["fact", "fact", "fact", "evaluation", "fact", "request", "evaluation", "fact", "request", "reference"]}
{"doc_id": "HkBIjt2xz", "text": ["Summary: This paper presents a derivation which links a DNN to recursive application of maximum entropy model fitting. ", "The mathematical notation is unclear, ", "and in one cases the lemmas are circular (i.e. two lemmas each assume the other is correct for their proof). ", "Additionally the main theorem requires complete independence, ", "but the second theorem provides pairwise independence, ", "and the two are not the same.", "Major comments: - The second condition of the maximum entropy equivalence theorem requires that all T are conditionally independent of Y. ", "This statement is unclear, ", "as it could mean pairwise independence, or it could mean jointly independent (i.e. for all pairs of non-overlapping subsets A & B of T I(T_A;T_B|Y) = 0).", "This is the same as saying the mapping X->T is making each dimension of T orthogonal, as otherwise it would introduce correlations. ", "The proof of the theorem assumes that pairwise independence induces joint independence ", "and this is not correct.", "- Section 4.1 makes an analogy to EM, ", "but gradient descent is not like this process as all the parameters are updated at once, and only optimised by a single (noisy) step. ", "The optimisation with respect to a single layer is conditional on all the other layers remaining fixed, ", "but the gradient information is stale ", "(as it knows about the previous step of the parameters in the layer above). ", "This means that gradient descent does all 1..L steps in parallel, ", "and this is different to the definition given.", "- The proofs in Appendix C which are used for the statement I(T_i;T_j) >=I(T_i;T_j|Y) are incomplete, ", "and in generate this statement is not true, ", "so requires proof.", "- Lemma 1 appears to assume Lemma 2, and Lemma 2 appears to assume Lemma 1.", "Either these lemmas are circular or the derivations of both of them are unclear.", "- In Lemma 3 what is the minimum taken over for the left hand side? ", "Elsewhere the minimum is taken over T, but T does not appear on the left hand side.", "Explicit minimums help the reader to follow the logic, ", "and implicit ones should only be used when it is obvious what the minimum is over.", "- In Lemma 5, what does \"T is only related to X\" mean? ", "The proof states that Y -> T -> X forms a Markov chain, ", "but this implies that T is a function of Y, not X.", "Minor comments:- I assume that the E_{P(X,Y)} notation is the expectation of that probability distribution, ", "but this notation is uncommon,", "and should be replaced with a more explicit one.", "- Markov is usually romanized with a \"k\" not a \"c\".", "- The paper is missing numerous prepositions and articles, ", "and contains multiple spelling mistakes & typos."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "fact", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact"]}
{"doc_id": "SkWQLvebf", "text": ["This paper proposes a deep learning (DL) approach (pre-trained CNNs) to the analysis of histopathological images for disease localization.", "It correctly identifies the problem that DL usually requires large image databases to provide competitive results,", "while annotated histopathological data repositories are costly to produce and not on that size scale.", "It also correctly identifies that this is a daunting task for human medical experts", "and therefore one that could surely benefit from the use of automated methods like the ones proposed.", "The study seems sound from a technical viewpoint to me", "and its contribution is incremental, as it builds on existing research,", "which is correctly identified.", "Results are not always too impressive,", "but authors seem intent on making them useful for pathogists in practice", "(an intention that is always worth the effort).", "I think the paper would benefit from a more explicit statement of its original contributions (against contextual published research)", "Minor issues: Revise typos (e.g. title of section 2)", "Please revise list of references", "(right now a mess in terms of format, typos, incompleteness"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "evaluation"]}
{"doc_id": "ry_xOQ5ef", "text": ["This paper creates adversarial images by imposing a flow field on an image such that the new spatially transformed image fools the classifier. ", "They minimize a total variation loss in addition to the adversarial loss to create perceptually plausible adversarial images, ", "this is claimed to be better than the normal L2 loss functions.", "Experiments were done on MNIST, CIFAR-10, and ImageNet, ", "which is very useful to see that the attack works with high dimensional images. ", "However, some numbers on ImageNet would be helpful ", "as the high resolution of it make it potentially different than the low-resolution MNIST and CIFAR.", "It is a bit concerning to see some parts of Fig. 2. ", "Some of Fig. 2 (especially (b)) became so dotted that it no longer seems an adversarial that a human eye cannot detect. ", "And model B in the appendix looks pretty much like a normal model. ", "It might needs some experiments, either human studies, or to test it against an adversarial detector, to ensure that the resulting adversarials are still indeed adversarials to the human eye. ", "Another good thing to run would be to try the 3x3 average pooling restoration mechanism in the following paper:", "Xin Li, Fuxin Li. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics . ICCV 2017.", "to see whether this new type of adversarial example can still be restored by a 3x3 average pooling the image ", "(I suspect that this is harder to restore by such a simple method than the previous FGSM or OPT-type, but we need some numbers).", "I also don't think FGSM and OPT are this bad in Fig. 4. ", "Are the authors sure that if more regularization are used these 2 methods no longer fool the corresponding classifiers?", "I like the experiment showing the attention heat maps for different attacks. ", "This experiment shows that the spatial transforming attack (stAdv) changes the attention of the classifier for each target class, and is robust to adversarially trained Inception v3 unlike other attacks like FGSM and CW. ", "I would likely upgrade to a 7 if those concerns are addressed."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "request", "request", "reference", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "non-arg"]}
{"doc_id": "r1cIB5Fxf", "text": ["Paper proposes to use a convolutional network with 3 layers (convolutional + maxpoolong + fully connected layers) to embed time series in a new space such that an Euclidian distance is effective to perform a classification. ", "The algorithm is simple and experiments show that it is effective on a limited benchmark. ", "It would be interesting to enlarge the dataset to be able to compare statistically the results with state-of-the-art algorithms. ", "In addition, Authors compare themselves with time series metric learning and generalization of DTW algorithms. ", "It would also be interesting to compare with other types of time series classification algorithms (Bagnall 2016) ."], "labels": ["fact", "evaluation", "request", "fact", "request"]}
{"doc_id": "B1m1clFlM", "text": ["This paper presents MAd-RL, a method for decomposition of a single-agent RL problem into a simple sub-problems, and aggregating them back together.", "Specifically, the authors propose a novel local planner - emphatic, and analyze the newly proposed local planner along of two existing ones - egocentric and agnostic.", "The MAd-RL, and theoretical analysis, is evaluated on the Pac-Boy task, and compared to DQN and Q-learning with function approximation.", "Pros: 1. The paper is well written, and well-motivated.", "2. The authors did an extraordinary job in building the intuition for the theoretical work, and giving appropriate examples where needed.", "3. The theoretical analysis of the paper is extremely interesting.", "The observation that a linearly weighted reward, implies linearly weighted Q function, analysis of different policies, and local minima that result is the strongest and the most interesting points of this paper.", "Cons:1. The paper is too long.", "14 pages total - 4 extra pages (in appendix) over the 8 page limit,", "and 1 extra page of references.", "That is 50% overrun in the context,", "and 100% overrun in the references.", "The most interesting parts and the most of the contributions are in the Appendix,", "which makes it hard to assess the contributions of the paper.", "There are two options: 1.1 If the paper is to be considered as a whole, the excessive overrun gives this paper unfair advantage over other ICLR papers.", "The flavor and scope and quality of the problems that can be tackled with 50% more space is substantially different from what can be addressed within the set limit.", "If the extra space is necessary, perhaps this paper is better suited for another publication?", "1.2 If the paper is assessed only based on the main part without Appendix, then the only novelty is emphatic planner, and the theoretical claims with no proofs.", "The results are interesting,", "but are lacking implementation details.", "Overall, a substandard paper.", "2. Experiments are disjoint from the method\u2019s section.", "For example:2.1 Section 5.1 is completely unrelated with the material presented in Section 4.", "2.2 The noise evaluation in Section 5.3 is nice,", "but not related with the Section 4.", "This is problematic because, it is not clear if the focus of the paper is on evaluating MAd-RL and performance on the Ms.PacMan task, or experimentally demonstrating claims in Section 4.", "Recommendations:1. Shorten the paper to be within (or close to the recommended length) including Appendix.", "2. Focus paper on the analysis of the advisors,", "and Section 5. on demonstrating the claims.", "3. Be more explicit about the contributions.", "4. How does the negative reward influence the behavior the agent?", "The agent receives negative reward when near ghosts.", "5. Move the short (or all) proofs from Appendix into the main text.", "6. Move implementation details of the experiments (in particular the short ones) into the main text.", "7. Use the standard terminology (greedy and random policies vs. egoistic and agnostic) where possible.", "The new terms for well-established make the paper needlessly more complex.", "8. Focus the literature review on the most relevant work, and contrast the proposed work with existing peer reviewed methods.", "9. 
Revise the literature to emphasize more recent peer reviewed references.", "Only three references are recent (less than 5 years), peer reviewed references,", "while there are 12 historic references.", "Try to reduce dependencies on non-peer reviewed references (~10 of them).", "10. Make a pass through the paper, and decouple it from the van Seijen et al., 2017a", "11. Minor: Some claims need references:", "11.1 Page 5: \u201cegocentric sub-optimality does not come from the actions that are equally good, nor from the determinism of the policy, since adding randomness\u2026\u201d -", "Wouldn\u2019t adding epsilon-greediness get the agent unstuck?", "11.2 Page 1. \u201cIt is shown on the navigation task \u2026.\u201d -", "This seems to be shown later in the results,", "but in the intro it is not clear if some other work, or this one shows it.", "12. Minor:12.1 Mix genders when talking about people.", "Don\u2019t assume all people that make \u201ccomplex and important problems\u201d, or who are \u201cconsulted for advice\u201d, are male.", "12.2 Typo: Page 5: a_0 sine die", "12.3 Page 7 - omit results that are not shown", "12.4 Make Figures larger - it is difficult, if not impossible to see", "12.5 What is the difference between Pac-Boy and Ms. Pacman task? And why not use Ms. Packman?"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "non-arg", "fact", "request", "request", "request", "evaluation", "request", "request", "evaluation", "fact", "request", "request", "request", "quote", "non-arg", "quote", "fact", "evaluation", "fact", "request", "fact", "fact", "request", "non-arg"]}
{"doc_id": "HJIPOSAbf", "text": ["The paper develops an interesting approach for solving multi-class classification with softmax loss.", "The key idea is to reformulate the problem as a convex minimization of a \"double-sum\" structure via a simple conjugation trick. ", "SGD is applied to the reformulation: in each step samples a subset of the training samples and labels, which appear both in the double sum. ", "The main contributions of this paper are: \"U-max\" idea (for numerical stability reasons) and an \"\"proposing an \"implicit SGD\" idea.", "Unlike the first review, I see what the term \"exact\" in the title is supposed to mean. ", "I believe this was explained in the paper. ", "I agree with the second reviewer that the approach is interesting. ", "However, I also agree with the criticism ", "(double sum formulations exist in the literature; ", "comments about experiments); ", "and will not repeat it here. ", "I will stress though that the statement about Newton in the paper is not justified. ", "Newton method does not converge globally with linear rate. ", "Cubic regularisation is needed for global convergence. ", "Local rate is quadratic. ", "I believe the paper could warrant acceptance if all criticism raised by reviewer 2 is addressed.", "I apologise for short and late review: I got access to the paper only after the original review deadline."], "labels": ["evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "non-arg", "non-arg", "fact", "fact", "fact", "fact", "evaluation", "non-arg"]}
{"doc_id": "HJecicqxG", "text": ["In conventional boosting methods, one puts a weight on each sample.", "The wrongly classified samples get large weights such that in the next round those samples will be more likely to get right.", "Thus the learned weak learner at this round will make different mistakes.", "This idea however is difficult to be applied to deep learning with a large amount of data.", "This paper instead designed a new boosting method which puts large weights on the category with large error in this round.", "In other words samples in the same category will have the same weight", "Error bound is derived.", "Experiments show its usefulness", "though experiments are limited"], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation"]}
{"doc_id": "rJ9BTHFez", "text": ["Summary ******* The paper provides a collection of existing results in statistics.", "Comments ******** Page 1: references to Q-learning and Policy-gradients look awkwardly recent, ", "given that these have been around for several decades.", "I dont get what is the novelty in this paper. ", "There is no doubt that all the tools that are detailed here are extremely useful and powerful results in mathematical statistics. ", "But they are all known.", "The Gibbs variational principle is folklore, ", "Proposition 1,2 are available in all good text books on the topic, ", "and Proposition 4 is nothing but a transportation Lemma.", "Now, Proposition 3 is about soft-Bellman operators. ", "This perhaps is less standard ", "because contraction property of soft-Bellman operator in infinite norm is more recent than for Bellman operators.", "But as mentioned by the authors, this is not new either. ", "Also I don't really see the point of providing the proofs of these results in the main material, and not for instance in appendix, ", "as there is no novelty either in the proof techniques.", "I don't get the sentence \"we have restricted so far the proof in the bandit setting\": ", "bandits are not even mentioned earlier.", "Decision ******** I am sorry but unless I missed something (that then should be clarified) this seems to be an empty paper: Strong reject."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "BJ1X3tYgf", "text": ["The paper treats the interesting problem of long term video prediction in complex video streams. ", "I think the approach of adding more structure to their representation before making longer term prediction is also a reasonable one. ", "Their approach combines an RNN that predicts an encoding of scene and then generating an image prediction using a VAN (Reed et al.). ", "They show some results on the Human3.6M and the Robot Push dataset. ", "I find the submission lacking clarity in many places. ", "The main lack of clarity source I think is about what the contribution is. ", "There are sparse mentions in the introduction ", "but I think it would be much more forceful and clear if they would present VAN or Villegas et al method separately and then put the pieces together for their method in a separate section. ", "This would allow the author to clearly delineate their contribution and maybe why those choices were made. ", "Also the use of hierarchical is non-standard and leads to confusion I recommend maybe \"semantical\" or better \"latent structured\" instead. ", "Smaller ambiguities in wording are also in the paper : ", "e.g. related work -> long term prediction \"in this work\" refers to the work mentioned but could as well be the work that they are presenting. ", "I find some of the claims not clearly backed by a thorough evaluation and analysis. ", "Claiming to be able to produce encodings of scenes that work well at predicting many steps into the future is a very strong claim. ", "I find the few images provided very little evidence for that fact. ", "I think a toy example where this is clearly the case ", "because we know exactly the factors of variations and they are inferred by the algorithm automatically or some better ones are discovered by the algorithm, ", "that would make it a very strong submission. ", "Reed et al. have a few examples that could be adapted to this setting and the resulting representation, analyzed appropriately, would shed some light into whether this is the right approach for long term video prediction and what are the nobs that should be tweaked in this system. ", "In the current format, I think that the authors are on a good path ", "and I hope my suggestions will help them improve their submission, ", "but as it stands I recommend rejection from this conference."], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SJ45Qm8Zz", "text": ["The paper makes a striking connection between two apparently unrelated problems: the problem of designing neural networks to handle a certain type of correlation and the problem of designing a structure to represent wave-function with quantum entanglement.", "In the wave-function context, the Schmidt decomposition of the wave function is an inner product of tensors.", "Thus, the mathematical glue connecting the neural networks and quantum entanglement is shown to be tensor networks,", "which can represent higher order tensors through inner product of lower-order tensors.", "The main technical contribution in the paper is to map convolutional networks with product pooling function (called ConvACs) to a tensor network.", "Given this mapping, the authors exploit results in tensor networks (in particular the quantum max-flow min-cut theorem) to calculate the rank of the matricized tensor between a pair of vertex sets using the (appropriately defined) min-cut.", "The connection has potential to yield fruitful new results,", "however, the potential is not manifested (yet) in the paper.", "The main application in deep convolutional networks proposed by the paper is to model how much correlation between certain partition of input variables can be captured by a given convolutional network design.", "However, it is unclear how to use Theorem 1 to design neural networks that capture a certain correlation.", "A simple example is given in the experiment where the wider layers can be either early in the the neural network or at the later stages; demonstrating that one does better than the other in a certain regime.", "It seems that there is an obvious intuition that explains this phenomenon: wider base networks with large filters are better suited to the global task and narrow base networks that have more parameters later down have more local early filters suited to the local task.", "The experiments do not quite reveal the power of the proposed approach,", "and it is unclear how, if at all, the proposed approach can be applied to more complicated networks.", "In summary, this paper is of high theoretical interest and has potential for future applications."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SkPNib9ez", "text": ["This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.", "This paper requires a large amount of background knowledge", "as it depends on understanding program synthesis as it is done in the programming languages community.", "Moreover the work mentions a neurally-guided search,", "but little time is spent on that portion of their contribution.", "I am not even clear how their system is trained.", "The experimental results do show the programs can be faster but only if the user is willing to suffer a loss in accuracy.", "It is difficult to conclude overall if the technique helps in synthesis."], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "HJ0Hc82gM", "text": ["Review for Deformation of Bregman divergence and its application", "Summary:This paper considers parameter estimation for discrete probability models.", "The authors propose an estimator that is computed by minimizing a deformed Bregman divergence.", "The authors prove that the proposed estimator is more computationally efficient and more robust than the maximal likelihood estimator (MLE), both in theory and simulation.", "Major Comments:1. After the definition 1, the likelihood $L(\\theta)$ is defined to be the sum of $\\log \\bar{q}_{\\theta}(x_i).", "$ Why the gradient of $L(\\theta)$ is a related to $\\tilde{p}$.", "2. After the equation (4), when the authors say $f=(U\u2019)^{-1}$ the authors assume that the first order derivative of $U$ should be a strictly increasing function", "(otherwise the inverse function is not well defined, at least in classic notations).", "I would like to know whether we only need assume the convexity of $U$.", "Are there other assumptions?", "3. In Proposition 1, I think the \u201cFisher consistent\u201d means that (6) holds for any reasonable $U$ and $f$ just as the authors said before Proposition 1.", "It is better to add this in the statement of Proposition 1 too.", "4. The \u201cProof 1\u201d is better to be replaced with \u201cProof of Proposition 1\u201d", "(same issues for \u201cProof 2\u201d, \u201cProof 3\u201d, etc).", "5. In the statement of Theorem 1, do the authors have any constraint for $U$?", "6. $\\xi_{U,f}$ appears in Theorem 2 without a clear definition.", "Even if it seems to be defined in (17), it is better to be defined again.", "7. Why Theorem 2 indicates that \u201cthe estimator (5) is not influenced so much by the outlier\u201d?", "8. How to solve (5)?", "Is it trivial?", "I expect to see something like \u201cWe use \u2026 algorithm or toolbox to solve (5)\u201c.", "Minor Comments:1. In Example 2, I suggest use some more beautiful symbol like $\\top$ to denote the transpose instead of $T$.", "2. The length of the equations should not exceed the line-width (e.g., (4) and (7)).", "3. In page 5, \u201cWe find some examples satisfying 25 in Theorem 2\u201d.", "The \u201c25\u201d should be \u201c(25)\u201d."], "labels": ["non-arg", "fact", "fact", "fact", "fact", "non-arg", "fact", "evaluation", "non-arg", "non-arg", "fact", "request", "request", "request", "request", "evaluation", "request", "evaluation", "request", "request", "request", "request", "request", "quote", "request"]}
{"doc_id": "ry9X12Fgz", "text": ["The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. ", "On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. ", "A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. ", "In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. ", "At least one of those papers ought to be cited. ", "It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. ", "Without that baseline it's not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. ", "The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. ", "A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. ", "A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot."], "labels": ["fact", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "request"]}
{"doc_id": "SyOiDTtef", "text": ["The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss. ", "The proposed method is applied to two large-scale datasets ", "and showed to perform better than other baselines such as label smoothing, and the standard ensemble. ", "The paper is clearly written and was easy to understand. ", "My major concern is the significance and originality of the proposed method. ", "As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et. al (2017), at scale. ", "But, because from Zhang's method, I don't see any significant difficulty in applying to large-scale problems, ", "I'm not sure that this can be a significant contribution. ", "Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models. ", "In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated. ", "Usually, ensemble becomes stronger as the number of model increases."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation"]}
{"doc_id": "ByPQQOX1G", "text": ["Summary ======== The authors present a new regularization term, inspired from game theory, which encourages the discriminator's gradient to have a norm equal to one.", "This leads to reduce the number of local minima,", "so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum games with convex-concave functions.", "Clarity ====== Overall, the paper is clear and well-written.", "However, the authors should motivate better the regularization introduced in section 2.3.", "Originality ========= The idea is novel and interesting.", "In addition, it is easy to implement it for any GANs since it requires only an additional regularization term.", "Moreover, the numerical experiments are in favor of the proposed method.", "Comments ========= - Why should the norm of the gradient should to be equal to 1 and not another value?", "Is this possible to improve the performance if we put an additional hyper-parameter instead?", "- Are the performances greatly impacted by other value of lambda and c (the suggested parameter values are lambda = c = 10)?", "- As mentioned in the paper, the regularization affects the modeling performance.", "Maybe the authors should add a comparison between different regularization parameters to illustrate the real impact of lambda and c on the performance.", "- GANs performance is usually worse on very big dataset such as Imagenet.", "Does this regularization trick makes their performance better?"], "labels": ["fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "request", "request", "request", "fact", "request", "fact", "request"]}
{"doc_id": "SJbxHpnHz", "text": ["Summary ------- This paper proposes a generative model of symbolic (MIDI) melody in western popular music.", "The model uses an LSTM architecture to map sequences of chord symbols and structural identifiers (e.g., verse or chorus) to predicted note sequences which constitute the melody line.", "The key innovation proposed in this work is to jointly encode note symbols along with timing and duration information to form musical \"words\" from which melodies are composed.", "The proposed model compares compared favorably to prior work in listener preference and Turing-test studies, and performs.", "Quality ------- Overall, I found the paper interesting, and the provided examples generated by the model sound relatively good.", "The quantitative evaluations seem promising,", "though difficult to interpret fully due to a lack of provided detail (see below).", "Apart from clarification issues enumerated below, where the paper could be most substantially improved is in the evaluation of the various ingredients of the model.", "Many ideas are presented with some abstract motivation,", "but there is no comparative evaluation to demonstrate what happens when any one piece is removed from the system.", "Some examples:- How important is the \"song part\" contextual input?", "- What happens if the duration or timing information is not encoded with the note?", "- How important is the pitch range regularization?", "Since the authors claim these ideas as novel,", "I would expect to see more evaluation of their independent impact on the resulting system.", "Without such an evaluation, it is difficult to take any general lessons away from this paper.", "Clarity ------- While the main ideas of this paper are presented clearly,", "I found the details difficult to follow.", "Specifically, the following points need to be substantially clarified: - The note encoding described in Section 3.1: \"w_i = (p_i, t_i, l_i)\" describes the pitch, timing, and duration of the i'th chord, but it is not explained how time and duration are represented.", "Since these are derived from MIDI, I would expect either ticks or seconds -- or maybe a tempo-normalized variant --", "but Figure 2 suggests staff notation, which is not explicitly coded in MIDI.", "Please explain precisely how the data is represented.", "- Also in Section 3, several references are made to a \"previous\" model, but no citation is given.", "Was this a specific published work?", "- Equation 1 is missing a variable (j) for the range of the summation.", "It took a few passes for me to parse what was going on here.", "One could easily mistake it for summing over i to describe partial subsequences,", "but I don't think this is what is intended.", "- \"... 
our model does not have to consider intervals that do not contain notes\" --", "this contradicts the implication of Figure 2b, where a rest is explicitly notated in the generated sequence.", "Since MIDI does not explicitly encode rests", "(they must be inferred from the absence of note events),", "I'd suggest wording this more carefully, and being more explicit about what is produced by the model and what is notational embellishment for expository purposes.", "- Equation 2 describes the LSTM gate equations,", "but there is no concrete description of the model architecture used in this paper.", "How are the hidden states mapped to note predictions?", "What is the loss function and optimizer?", "These details are necessary to facilitate replication.", "- Equation 3 and the accompanying text implies that song part states (x_i) are conditionally independent given the current chord state (z_i).", "Is that correct?", "If so, it seems like a strange choice,", "since I would expect a part state to persist across multiple chord state transitions.", "Please explain this part in more detail.", "Also a typo in the second factor: p(z_N | z_{N-1}) should be p(z_n | z_{n-1}); likewise p(x_n | z_N).", "- The regularization penalty (Alg. 1) is also difficult to follow.", "Is S derived from P by viterbi decoding, or independent (point-wise argmax) decoding?", "What exactly is the \"E\" that results in the derivative at step 8, and why does the derivative for p_i depend on the total sum C?", "This all seems non-obvious, and worth describing in more detail since it seems critical to the performance of the model.", "- Table 2: what does \"# samples\" mean in this context?", "Why is it different from \"# songs\"?", "- Section 4.2: the description of the evaluation suggests that the proposed model's output was always played before the baseline.", "Is that correct?", "If so, does that bias the results?", "- Section 4.2: are the examples provided to listeners just the melody lines, or full mixes on top of the input chord sequence?", "It's unclear from the text,", "and it seems like a relevant detail to correctly assess the fairness of the comparison to the baselines.", "- Section 4.2: how many generated examples were included in this evaluation?", "Should this instead be in or out of key, since the tuning is presumably fixed by MIDI synthesis?", "Originality ----------- As far as I know, the proposed method is novel, though strongly related to (cited) prior work.", "The key idea seems to be encoding of notes and properties as analogous to \"words\".", "I find this analogy a little bit of a stretch,", "since even with timing and duration included, it's hard to argue that a single note event has semantic content in the way that a word does.", "A little more development of this idea, and some more concrete motivation for the specific choices of which properties to include, would go a long way in strengthening the paper.", "Significance ------------ The significance of this work is difficult to assess without independent evaluation of the proposed novel components."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "request", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "request", "request", "non-arg", "fact", "non-arg", "evaluation", "evaluation", "quote", "fact", "fact", "fact", "request", "fact", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", 
"evaluation", "request", "request", "evaluation", "request", "request", "evaluation", "request", "request", "fact", "request", "request", "request", "evaluation", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BJCXSFZgz", "text": ["This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy. ", "It provides two main contributions: pre-training the policy network of a DDPG agent in a supervised manner so that it begins in reasonable state-action distribution and regalurizing the Q-updates of the q-network to be biased towards existing actions. ", "The authors use the TORCS enviroment to demonstrate the performance of their method both in final cumulative return of the policy and speed of learning.", "This paper is easy to understand but has a couple shortcomings and some fatal (but reparable) flaws:.", "1) When using RL please try to standardize your notation to that used by the community, ", "it makes things much easier to read. ", "I would strongly suggest avoiding your notation a(x|\\Theta) and using \\pi(x) ", "(subscripting theta or making conditional is somewhat less important). ", "Your a(.) function seems to be the policy here, ", "which is invariable denoted \\pi in the RL literature. ", "There has been recent effort to clean up RL notation which is presented here: ", "https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf. ", "You have no obligation to use this notation but it does make reading of your paper much easier on others in the community. ", "This is more of a shortcoming than a fundamental issue.", "2) More fatally, you have failed to compare your algorithm's performance against benchline implementations of similar algorithms. ", "It is almost trivial to run DDPG on Torcs using the openAI baselines package ", "[https://github.com/openai/baselines]. ", "I would have loved, for example, to see the effects of simply pre-training the DDPG actor on supervised data, vs. adding your mixture loss on the critic. ", "Using the baselines would have (maybe) made a very compelling graph showing DDPG, DDPG + actor pre-training, and then your complete method.", "3) And finally, perhaps complementary to point 2), you really need to provide examples on more than one environment. ", "Each of these simulated environments has its own pathologies linked to determenism, reward structure, and other environment particularities. ", "Almost every algorithm I've seen published will often beat baselines on one environment and then fail to improve or even be wors on others, ", "so it is important to at least run on a series of these. ", "Mujoco + AI Gym should make this really easy to do ", "(for reference, I have no relatinship with OpenAI). ", "Running at least cartpole (which is a very well understood control task), and then perhaps reacher, swimmer, half-cheetah etc. using a known contoller as your behavior policy (behavior policy is a good term for your data-generating policy.)", "4) In terms of state of the art you are very close to Todd Hester et. al's paper on imitation learning, ", "and although you cite it, you should contrast your approach more clearly with the one in that paper. ", "Please also have a look at some more recent work my Matej Vecerik, Todd Hester & Jon Scholz: 'Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards' for an approach that is pretty similar to yours.", "Overall I think your intuitions and ideas are good, ", "but the paper does not do a good enough job justifying empirically that your approach provides any advantages over existing methods. 
", "The idea of pre-training the policy net has been tried before ", "(although I can't find a published reference) ", "and in my experience will help on certain problems, and hinder on others, ", "primarily because the policy network is already 'overfit' somewhat to the expert, and may have a hard time moving to a more optimal space. ", "Because of this experience I would need more supporting evidence that your method actually generalizes to more than one RL environment."], "labels": ["fact", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "reference", "request", "evaluation", "evaluation", "evaluation", "reference", "request", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "reference", "request", "evaluation", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "evaluation", "request"]}
{"doc_id": "BJHcawFxM", "text": ["This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. ", "The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner product between the input and the mean of the weight distribution and a variance given by the inner product between the squared input and the variance of the weight distribution. ", "As a result, the parameters of the underlying discrete distribution can be optimized via backpropagation by sampling the neuron pre-activations with the reparametrization trick. ", "The authors further propose appropriate initialisation schemes and regularization techniques to either prevent the violation of the CLT or to prevent underfitting. ", "The method is evaluated on multiple experiments.", "This paper proposed a relatively simple idea for training networks with discrete weights that seems to work in practice. ", "My main issue is that while the authors argue about novelty, ", "the first application of CLT for sampling neuron pre-activations at neural networks with discrete r.v.s is performed at [1]. ", "While [1] was only interested in faster convergence and not on optimization of the parameters of the underlying distribution, ", "the extension was very straightforward. ", "I would thus suggest that the authors update the paper accordingly. ", "Other than that, I have some other comments: - The L2 regularization on the distribution parameters for the ternary weights is a bit ad-hoc; ", "why not penalise according to the entropy of the distribution which is exactly what you are trying to achieve? ", "- For the binary setting you mentioned that you had to reduce the entropy thus added a \u201cbeta density regulariser\u201d. ", "Did you add R(p) or log R(p) to the objective function? ", "Also, with alpha, beta = 2 the beta density is unimodal with a peak at p=0.5; ", "essentially this will force the probabilities to be close to 0.5, i.e. exactly what you are trying to avoid. ", "To force the probability near the endpoints you have to use alpha, beta < 1 which results into a \u201cbowl\u201d shaped Beta distribution. ", "I thus wonder whether any gains you observed from this regulariser are just an artifact of optimization.", "- I think that a baseline (at least for the binary case) where you learn the weights with a continuous relaxation, such as the concrete distribution, and not via CLT would be helpful. ", "Maybe for the network to properly converge the entropy for some of the weights needs to become small (hence break the CLT). ", "[1] Wang & Manning, Fast Dropout Training."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact", "non-arg", "fact", "fact", "fact", "fact", "request", "fact", "reference"]}
{"doc_id": "Hy2FsQKef", "text": ["This paper addresses the problem of one class classification.", "The authors suggest a few techniques to learn how to classify samples as negative (out of class) based on tweaking the GAN learning process to explore large areas of the input space which are out of the objective class.", "The suggested techniques are nice and show promising results.", "But I feel a lot can still be done to justify them, even just one of them.", "For instance, the authors manipulate the objective of G using a new parameter alpha_new and divide heuristically the range of its values.", "But, in the experimental section results are shown only for a single value, alpha_new=0.9", "The authors also suggest early stopping", "but again (as far as I understand) only a single value for the number of iterations was tested.", "The writing of the paper is also very unclear, with several repetitions and many typos e.g.:", "'we first introduce you a'", "'architexture'", "'future work remain to'", "'it self'", "I believe there is a lot of potential in the approach(es) presented in the paper.", "In my view a much stronger experimental section together with a clearer presentation and discussion could overcome the lack of theoretical discussion."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "quote", "quote", "quote", "quote", "evaluation", "request"]}
{"doc_id": "H18ZJWAgG", "text": ["Summary The paper is well-written ", "but does not make deep technical contributions and does not present a comprehensive evaluation or highly insightful empirical results.", "Abstract / Intro I get the entire focus of the paper is some variant of Pac-Man which has received attention in the RL literature for Atari games, ", "but for the most part the impressive advances of previous Atari/RL papers are in the setting that the raw video is provided as input, ", "which is much different than solving the underlying clean mathematically abstracted problem (as a grid world with obstacles) as done here and evident in the videos. ", "Further it is honestly hard for me to be strongly motivated about a paper that focuses on the need to decompose Pac-man into sub-agents/advisor value functions.", "Section 2 Another historically well-cited paper for MDP decomposition:", "Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems, Ronald Parr. UAI 98. https://dslpitt.org/uai/papers/98/p422-parr.pdf", "Section 3 Is the additive reward decomposition a required part of the problem specification? ", "It seems so, i.e., there is no obvious method for automatically decomposing a monolithic reward function over advisors.", "Section 4 * Egocentric: Definition 1: Sure, the problem will have local optima (attractors) when decomposed suboptimally ", "-- I'm not sure what new insight we've gained from this analysis... ", "it is a general problem with any function approximation scheme that does not guarantee that the rank ordering of actions for a state is preserved.", "* Agnostic Other than approximating some type of myopic rollout, I really don't see why this approach would be reasonable? ", "I am surprised it works at all ", "though my guess is that this could simply be an artifact of evaluating on a single domain with a specific structure.", "* Empathic This appears to be the key contribution ", "though related work certainly infringes on its novelty. ", "Is this paper then an empirical evaluation of previous methods in a single Pac-man grid world variant?", "I wonder if the theory of DEC-MDPs would have any relevance for novel analysis here?", "Section 5 I'm disappointed that the authors only evaluate on a single domain; ", "presumably the empathic approach has applications beyond Pac-Man?", "The fact that empathic generally performs better is not at all surprising. ", "The fact that a modified discount factor for egocentric can also perform well is not surprising given that lower discount factors have often been shown to improve approximated MDP solutions, e.g.,", "Biasing Approximate Dynamic Programming with a Lower Discount Factor Marek Petrik, Bruno Scherrer (NIPS-08). 
http://marek.petrik.us/pub/Petrik2009a.pdf", "***Side note: The following part is somewhat orthogonal to the review above in that I would not expect the authors to address this on revision, *but* at the same time I think it provides a connection to the special case of concurrent action decomposition into advisors, ", "which could potentially provide a high impact direction of application for this work ", "(i.e., concurrent problems are hard and show up in numerous operations research problems covering inventory control, logistics, epidemic response).", "For the special case that each advisor is assigned to one action in a factored space of concurrent actions, the egocentric algorithm would be very close to the Hindsight approximation in Section 6 of this paper (including an additive decomposition of rewards):", "Planning in Factored Action Spaces with Symbolic Dynamic Programming Aswin Nadamuni Raghavan, Alan Fern, Prasad Tadepalli, Roni Khardon, and Saket Joshi (AAAI-12). https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5012/5336", "This simple algorithm is hard to beat ", "for the following reason that connects some details of your egocentric and empathic settings: rather than decomposing a concurrent MDP into independent problems per concurrent action, the optimization of each action (by each advisor) is done in sequence (advisors are ordered) and gets to condition on the previously selected advisor actions. ", "So it provides an alternate paradigm where advisors actually get to see and condition their policy on what other advisors are doing. ", "In my own work comparing optimal concurrent solutions to this approach, I have found this approach to be near-optimal and much more efficient to solve since it exploits decomposition.", "Why is this relevant to this work? ", "Because (a) it suggests another variant of the advisor decomposition that at least makes sense in the case of concurrent actions (and perhaps shared actions though this would require some extension) ", "and (b) it suggests there are more options than just the full egocentric and empathic settings in this important class of concurrent action problems that are necessarily solved in practice for large action spaces by some form of decomposition. ", "This could be an interesting direction for future exploration of the ideas in this work, where there might be additional technical novelty and more space for empirical contributions and observations."], "labels": ["evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "reference", "request", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "reference", "non-arg", "evaluation", "evaluation", "evaluation", "reference", "evaluation", "fact", "fact", "non-arg", "non-arg", "fact", "fact", "evaluation"]}
{"doc_id": "S1ufxZqlG", "text": ["The authors propose an objective whose Lagrangian dual admits a variety of modern objectives from variational auto-encoders and generative adversarial networks. ", "They describe tradeoffs between flexibility and computation in this objective leading to different approaches. ", "Unfortunately, I'm not sure what specific contributions come out, ", "and the paper seems to meander in derivations and remarks that I didn't understand what the point was.", "First, it's not clear what this proposed generalization offers. ", "It's a very nuanced and not insightful construction (eq. 3) and with a specific choice of a weighted sum of mutual informations subject to a combinatorial number of divergence measure constraints, each possibly held in expectation (eq. 5) to satisfy the chosen subclass of VAEs and GANs; and with or without likelihoods (eq. 7). ", "What specific insights come from this that isn't possible without the proposed generalization?", "It's also not clear with many GAN algorithms that reasoning with their divergence measure in the limit of infinite capacity discriminators is even meaningful ", "(e.g., Arora et al., 2017; ", "Fedus et al., 2017). ", "It's only true for consistent objectives such as MMD-GANs.", "Section 4 seems most pointed in explaining potential insights. ", "However, it only introduces hyperparameters and possible combinatorial choices with no particular guidance in mind. ", "For example, there are no experiments demonstrating the usefulness of this approach except for a toy mixture of Gaussians and binarized MNIST, explaining what is already known with the beta-VAE and infoGAN. ", "It would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.", "Misc + The \"feature marginal\" is also known as the aggregate posterior (Makhzani et al., 2015) and average encoding distribution (Hoffman and Johnson, 2016); also see Tomczak and Welling (2017)."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "reference", "reference", "fact", "evaluation", "fact", "fact", "request", "fact"]}
{"doc_id": "S1omqSUSM", "text": ["This paper proposes to use a hybrid of convolutional and recurrent networks to predict the DSL specification of a GUI given a screenshot of the GUI.", "Pros:The paper is clear ", "and the proposed problem is novel and well-defined.", "The training data is synthetic, allowing for arbitrarily large training sets to be generated. ", "The authors have made their synthetic dataset publicly available.", "The method seems to work well based on the samples and ROC curves presented.", "Cons: This is mostly an application of an existing method to a new domain -- as stated in the related work section, effectively the same convnet+RNN architecture has been in common use for image captioning and other vision applications.", "The UIs that are represented in the dataset seem quite simple; ", "it\u2019s not clear that this will transfer to arbitrarily complex and multi-page UIs.", "The main motivation for the proposed system seems to be for non-technical designers to be able to implement UIs just by drawing a mockup screenshot. ", "However, the paper hasn\u2019t shown that this is necessarily possible assuming the hand-designed mockups aren\u2019t pixel-for-pixel matches with a screenshot that could be generated by the \u201cDSL code -> screenshot\u201d mapping that this system learns to invert.", "There exist a number of \u201cdrag and drop\u201d style UI design products (at least for HTML) that would seem to accomplish the same basic goal as the proposed system in a more reliable way. ", "(Though the proposed system does have the advantage of only requiring a screenshot created using any software, rather than being restricted to a particular piece of software.)", "Overall, the paper is well-written but the novelty and applicability seems a bit limited."], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation"]}
{"doc_id": "B129GzFxf", "text": ["This paper proposes a new method for reverse curriculum generation by gradually reseting the environment in phases and classifying states that tend to lead to success. ", "It additionally proposes a mechanism for learning from human-provided \"key states\".", "The ideas in this paper are quite nice, ", "but the paper has significant issues with regard to clarity and applicability to real-world problems:", "First, it is unclear is the proposed method requires access only high-dimensional observations (e.g. images) during training or if it additionally requires low-dimensional states (e.g. sufficient information to reset the environment). ", "In most compelling problems settings where a low-dimensional representation that sufficiently explains the current state of the world is available during training, then it is also likely that one can write down a nicely shaped reward function using that state information during training, in which case, it makes sense to use such a reward function. ", "This paper seems to require access to low-dimensional states, and specifically considers the sparse-reward setting, ", "which seems contrived.", "Second, the paper states that the assumption \"when resetting, the agent can be reset to any state\" can be satisfied in problems such as real-world robotic manipulation. ", "This is not correct. ", "If the robot could autonomously reset to any state, then we would have largely solved robotic manipulation. ", "Further, it is not always realistic to assume access to low-dimensional state information during training on a real robotic system (e.g. knowing the poses of all of the objects in the world).", "Third, the experiments section lacks crucial information needed to understand the experiments. ", "What is the state, observation, and action space for each problem setting? ", "What is the reward function for each problem setting? ", "What reinforcement learning algorithm is used in combination with the curriculum and tendency rewards? ", "Are the states and actions continuous or discrete? ", "Without this information, it is difficult to judge the merit of the experimental setting.", "Fourth, the proposed method seems to lack motivation, making the proposed scheme seem a bit ad hoc. ", "Could each of the components be motivated further through more discussion and/or ablative studies?", "Finally, the main text of the paper is substantially longer than the recommended page limit. ", "It should be shortened by making the writing more concise.", "Beyond my feedback on clarity and significance, here are further pieces of feedback with regard to the technical content, experiments, and related work:I'm wondering -- can the reward shaping in Equation 2 be made to satisfy the property of not affecting the final policy? ", "(see Ng et al. '09) ", "If so, such a reward shaping would make the method even more appealing.", "How do the experiments in section 5.4 compare to prior methods and ablations? ", "Without such a comparison, it is impossible to judge the performance of the proposed method and the level of difficulty of these tasks. ", "At the very least, the paper should compare the performance of the proposed method to the performance a random policy.", "The paper is missing some highly relevant references. ", "First, how does the proposed method compare to hindsight experience replay? ", "[1] Second, learning from keyframes (rather than demonstrations) has been explored in the past [1]. 
", "It would be preferable to use the standard terminology of \"keyframe\".", "[1] Andrychowicz et al. Hindsight Experience Replay. 2017", "[2] Akgun et al. Keyframe-based Learning from Demonstration. 2012", "In summary, I think this paper has a number of promising ideas and experimental results, ", "but given the significant issues in clarity and significance to real world problems, I don't think that the current version of this paper is suitable for publication in ICLR.", "More minor feedback on clarity and correctness:- Abstract: \"Deep RL algorithms have proven successful in a vast variety of domains\" -- This is an overstatement.", "- The introduction should be more clear with regard to the assumptions. ", "In particular, it would be helpful to see discussion of requiring human-provided keyframes. ", "As is, it is unclear what is meant by \"checkpoint scheme\", ", "which is not commonly used terminology.", "- \"This kind of spare reward, goal-oriented tasks are considered the most difficult challenges\" -- This is also an overstatement. ", "Long-horizon tasks and high-dimensional observations are also very difficult. ", "Also, the sentence is not grammatically correct.", "- \"That is, environment\" -> \"That is, the environment\"", "- In the last paragraph of the intro, it would be helpful to more clearly state what the experiments can accomplish. ", "Can they handle raw pixel inputs?", "- \"diverse domains\" -> \"diverse simulated domains\"", "- \"a robotic grasping task\" -> \"a simulated robotic grasping task\"", "- There are a number of issues and errors in citations, e.g. missing the year, including the first name, incorrect reference", "- Assumption 1: \\mathcal{P} has not yet been defined.", "- The last two paragraphs of section 3.2 are very difficult to understand without reading the method yet", "- \"conventional RL solver tend\" -> \"conventional RL tend\", ", "also should mention sparse reward in this sentence.", "- Algorithm 1 and Figure 1 are not referenced in the text anywhere, and should be", "- The text in Figure 1 and Figure 3 is extremely small", "- The text in Figure 3 is extremely small"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "request", "fact", "request", "request", "reference", "evaluation", "request", "fact", "request", "evaluation", "request", "fact", "request", "reference", "reference", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "request", "request", "request", "fact", "fact", "evaluation", "request", "request", "request", "request", "request"]}
{"doc_id": "SkSMlWcgG", "text": ["The paper provides methods for training deep networks using half-precision floating point numbers without losing model accuracy or changing the model hyper-parameters. ", "The main ideas are to use a master copy of weights when updating the weights, scaling the loss before back-prop and using full precision variables to store products. ", "Experiments are performed on a large number of state-of-art deep networks, tasks and datasets ", "which show that the proposed mixed precision training does provide the same accuracy at half the memory.", "Positives - The experimental evaluation is fairly exhaustive on a large number of deep networks, tasks and datasets ", "and the proposed training preserves the accuracy of all the tested networks at half the memory cost.", "Negatives - The overall technical contribution is fairly small and are ideas that are regularly implemented when optimizing systems.", "- The overall advantage is only a 2x reduction in memory which can be gained by using smaller batches at the cost of extra compute."], "labels": ["fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact"]}
{"doc_id": "HkCNqISxM", "text": ["The authors try to use continuous time generalizations of normalizing flows for improving upon VAE-like models or for standard density estimation problems.", "Clarity: the text is mathematically very sloppy / hand-wavy.", "1. I do not understand proposition (1). ", "I do not think that the proof is correct ", "(e.g. the generator L needs to be applied to a function ", "-- the notation L(x) does not make too much sense): ", "indeed, in the case when the volatility is zero (or very small), this proposition would imply that any vector field induces a volume preserving transformation, which is indeed false.", "2. I do not really see how the sequence of minimization Eq(5) helps in practice. ", "The Wasserstein term is difficult to hand.", "3. in Equation (6), I do not really understand what $\\log(\\bar{\\rho})$ is if $\\bar{\\rho}$ is an empirical distribution. ", "One really needs $\\bar{\\rho}$ to be a probability density to make sense of that."], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "BkzesZcxG", "text": ["The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. ", "This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets.", "A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation.", "This is also not dissimilar to ideas used in 'Bayesian Representation Learning With Oracle Constraints' Karaletsos et al 2016 ", "where similar contextual features c are learned to disentangle representations over observations and implicit supervision.", "The authors provide a clean variational inference algorithm to learn their model. ", "However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference. ", "The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach. ", "In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization.", "This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. ", "It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings.", "Can the model recover richer structure that was imposed during data generation? ", "How limiting is the learning of a?", "How does the likelihood of the model behave under the circumstances?", "The experiments do not really convey how well this all will work in practice."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "request", "request", "request", "fact"]}
{"doc_id": "H1Pyl4sxM", "text": ["Summary of paper: The paper proposes an RNN-based neural network architecture for embedding programs, focusing on the semantics of the program rather than the syntax. ", "The application is to predict errors made by students on programming tasks. ", "This is achieved by creating training data based on program traces obtained by instrumenting the program by adding print statements. ", "The neural network is trained using this program traces with an objective for classifying the student error pattern (e.g. list indexing, branching conditions, looping bounds).", "---Quality: The experiments compare the three proposed neural network architectures with two syntax-based architectures. ", "It would be good to see a comparison with some techniques from Reed & De Freitas (2015) ", "as this work also focuses on semantics-based embeddings.", "Clarity: The paper is clearly written.", "Originality: This work doesn't seem that original from an algorithmic point of view ", "since Reed & De Freitas (2015) and Cai et. al (2017) among others have considered using execution traces. ", "However the application to program repair is novel (as far as I know).", "Significance: This work can be very useful for an educational platform ", "though a limitation is the need for adding instrumentation print statements by hand.", "--- Some questions/comments: - Do we need to add the print statements for any new programs that the students submit? ", "What if the structure of the submitted program doesn't match the structure of the intended solution and hence adding print statements cannot be automated?", "---References Cai, J., Shin, R., & Song, D. (2017). Making Neural Programming Architectures Generalize via Recursion. In International Conference on Learning Representations (ICLR)."], "labels": ["fact", "fact", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "request", "request", "reference"]}
{"doc_id": "S1c4VEXWz", "text": ["This paper provides an overview of the Deep Voice 3 text-to-speech system. ", "It describes the system in a fair amount of detail and discusses some trade-offs w.r.t. audio quality and computational constraints. ", "Some experimental validation of certain architectural choices is also provided.", "My main concern with this work is that it reads more like a tech report: ", "it describes the workings and design choices behind one particular system in great detail, ", "but often these choices are simply stated as fact and not really motivated, or compared to alternatives. ", "This makes it difficult to tell which of these aspects are crucial to get good performance, and which are just arbitrary choices that happen to work okay.", "As this system was clearly developed with actual deployment in mind (and not purely as an academic pursuit), ", "all of these choices must have been well-deliberated. ", "It is unfortunate that the paper doesn't demonstrate this. ", "I think this makes the work less interesting overall to an ICLR audience. ", "That said, it is perhaps useful to get some insight into what types of models are actually used in practice.", "An exception to this is the comparison of \"converters\", model components that convert the model's internal representation of speech into waveforms. ", "This comparison is particularly interesting ", "because some of the results are remarkable, i.e. Griffin-Lim spectrogram inversion and the WORLD vocoder achieving very similar MOS scores in some cases (Table 2). ", "I wish there would be more of that kind of thing in the paper. ", "The comparison of attention mechanisms is also useful.", "I'm on the fence as I think it is nice to get some insight into a practical pipeline which benefits from many current trends in deep learning research (autoregressive models, monotonic attention, ...), ", "but I also feel that the paper is a bit meager when it comes to motivating all the architectural aspects. ", "I think the paper is well written ", "so I've tentatively recommended acceptance.", "Other comments: - The separation of the \"decoder\" and \"converter\" stage is not entirely clear to me. ", "It seems that the decoder is trained to predict spectrograms autoregressively, but its final layer is then discarded and its hidden representation is then used as input to the converter stage instead? ", "The motivation for doing this is unclear to me, ", "surely it would be better to train everything end-to-end, including the converter? ", "This seems like an unnecessary detour, ", "what's the reasoning behind this?", "- At the bottom of page 2 it is said that \"the whole model is trained end-to-end, excluding the vocoder\", ", "which I think is an unfortunate turn of phrase. ", "It's either end-to-end, or it isn't.", "- In Section 3.3, the point of mixing of h_k and h_e is unclear to me. ", "Why is this done?", "- The gated linear unit in Figure 2a shows that speaker embedding information is only injected in the linear part. ", "Has this been experimentally validated to work better than simpler mechanisms such as adding conditioning-dependent biases/gains?", "- When the decoder is trained to do autoregressive prediction of spectrograms, is it autoregressive only in time, or also in frequency? ", "I'm guessing it's the former, ", "but this means there is an implicit independence assumption ", "(the intensities in different frequency bins are conditionally independent, given all past timesteps). 
", "Has this been taken into consideration? ", "Maybe it doesn't matter because the decoder is never used directly anyway, and this is only a \"feature learning\" stage of sorts?", "- Why use the L1 loss on spectrograms?", "- The recent work on Parallel WaveNet may allow for speeding up WaveNet when used as a vocoder, ", "this could be worth looking into seeing as inference speed is used as an argument to choose different vocoder strategies (with poorer audio quality as a result).", "- The title heavily emphasizes that this model can do multi-speaker TTS with many (2000) speakers, ", "but that seems to be only a minor aspect that is only discussed briefly in the paper. ", "And it is also something that preceding systems were already capable of ", "(although maybe it hasn't been tested with a dataset of this size before). ", "It might make sense to rethink the title to emphasize some of the more relevant and novel aspects of this work."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "request", "fact", "request", "request", "evaluation", "fact", "fact", "request", "non-arg", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "request"]}
{"doc_id": "rJfo7HsxG", "text": ["The paper proposes a VAE inference network for a non-parametric topic model.", "The model on page 4 is confusing to me ", "since this is a topic model, ", "so document-specific topic distributions are required, ", "but what is shown is only stick-breaking for a mixture model.", "From what I can tell, the model itself is not new, only the fact that a VAE is used to approximate the posterior. ", "In this case, if the model is nonparametric, then comparing with Wang, et al (2011) seems the most relevant non-deep approach. ", "Given the factorization used in that paper, the q distributions are provably optimal by the standard method. ", "Therefore, something must be gained by the VAE due to a non-factorized q. ", "This would be best shown by comparing with the corresponding non-deep version of the model rather than LDA and other deep models."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "r1zEZ9ief", "text": ["The paper proposes a method which jointly learns the label embedding (in the form of class similarity) and a classification model. ", "While the motivation of the paper makes sense, ", "the model is not properly justified, ", "and I learned very little after reading the paper.", "There are 5 terms in the proposed objective function. ", "There are also several other parameters associated with them: for example, the label temperature of z_2\u2019\u2019 and and parameter alpha in the second last term etc.", "For all the experiments, the same set of parameters are used, ", "and it is claimed that \u201cthe method is robust in our experiment and simply works without fine tuning\u201d. ", "While I agree that a robust and fine-tuning-free model is ideal ", "1) this has to be justified by experiment. ", "2) showing the experiment with different parameters will help us understand the role each component plays. ", "This is perhaps more important than improving the baseline method by a few point, ", "especially given that the goal of this work is not to beat the state-of-the-art."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "quote", "evaluation", "request", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SkKuc-Kef", "text": ["Proposal is to restrict the feasible parameters to ones that have produce a function with small variance over pre-defined groups of images that should be classified the same. ", "As authors note, this constraint can be converted into a KKT style penalty with KKT multiplier lambda. ", "Thus this is very similar to other regularizers that increase smoothness of the function, such as total variation or a graph Laplacian defined with graph edges connecting the examples in each group, as well as manifold regularization (see e.g. Belkin, Niyogi et al. JMLR). ", "Heck, in practie ridge regularization will also do something similar for many function classes. ", "Experiments didn't compare to any similar smoothness regularization ", "(and my preferred would have been a comparison to graph Laplacian or total variation on graphs formed by the same clustered examples). ", "It's also not clear either how important it is that they hand-define the groups over which to minimize variance or if just generally adding smoothness regularization would have achieved the same results. ", "That made it hard to get excited about the results in a vacuum. ", "Would this proposed strategy have thwarted the Russian tank legend problem? ", "Would it have fixed the Google gorilla problem? ", "Why or why not?", "Overall, I found the writing a bit bombastic for a strategy that seems to require the user to hand-define groups/clusters of examples. ", "Page 2: calling additional instances of the same person \u201ccounterfactual observations\u201d didn\u2019t seem consistent with the usual definition of that term\u2026 ", "maybe I am just missing the semantic link here, ", "but this isn't how we usually use the term counterfactual in my corner of the field.", "Re: \u201cone creates additional samples by modifying\u2026\u201d ", "be nice to quote more of the early work doing this, ", "I believe the first work of this sort was Scholkopf\u2019s, he called it \u201cvirtual examples\u201d ", "and I\u2019m pretty sure he specifically did it for rotation MNIST images (and if not exactly that, it was implied). ", "I think the right citation is \u201cIncorporating invariances in support vector learning machines\u201c Scholkopf, Burges, Vapnik 1996, but also see Decoste * Scholkopf 2002 \u201cTraining invariant support vector machines.\u201d"], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "request", "fact", "evaluation", "request"]}
{"doc_id": "ryBhOOXlM", "text": ["The authors ask when the hidden layer units of a multi-layer feed-forward neural network will display selectivity to object categories.", "They train 3-layer ANNs to categorize binary patterns,", "and find that typically at least some of the hidden layer units are category selective.", "The number of category selective (\"localist\") units varies depending on the size of the hidden layer, the structure of the outputs the network is trained to return (i.e., one-hot vs distributed), the neurons' activation functions, and the level of dropout-induced noise in the training procedure.", "Overall, I find the work to hint at an interesting phenomenon.", "However, the paper as presented uses an overly-simplistic task for the ANNs,", "and the work is sloppily presented.", "These factors detract from my enthusiasm.", "My specific criticisms are as follows: 1) The binary pattern classification seems overly simplistic a task for this study.", "If you want to compare to the medial temporal lobe's Jennifer Aniston cells (i.e., the Quiroga result), then an object recognition task seems much more meaningful, as does a deeper network structure.", "Likewise, to inform the representations we see in deep object recognition networks, it is better to just study those networks, instead of simple shallow binary classification networks.", "Or, at least show that the findings apply to those richer settings, where the networks do \"real\" tasks.", "2) The paper is somewhat sloppy, and could use a thorough proofreading.", "For example, what are \"figures 3, ?? and 6\"?", "And which is Figure 3.3.1?", "3) What formula is used to quantify the selectivity?", "And do the results depend on the cut-off used to label units as \"selective\" or not (i.e., using a higher or lower cutoff than 0.05)?", "Given that the 0.05 number is somewhat arbitrary, this seems worth checking.", "4) I don't think that very many people would argue that the presence of distributed representations strictly excludes the possibility of some of the units having some category selectivity.", "Consequently, I find the abstract and introduction to be a bit off-putting, coming off almost as a rant against PDP.", "This is a minor stylistic thing, but I'd encourage the authors to tone it down a bit.", "5) The finding that more of the selective units arise in the hidden layer in the presence of higher levels of noise is interesting,", "and the authors provide some nice intuition for this phenomenon (i.e., getting redundant local representations makes the system robust to the dropout).", "This seems interesting in light of the Quiroga findings of Jennifer Aniston cells: the fact that the (small number of) units they happened to record from showed such selectivity suggests that many neurons in the brain would have this selectivity, so there must be a large number of category selective units.", "Does that finding, coupled with the result from Fig. 6, imply that those \"grandmother cell\" observations might reflect an adaptation to increase robustness to noise?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "evaluation", "request", "request", "request", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "non-arg"]}
{"doc_id": "r1ajkbceM", "text": ["The authors present a new RL algorithm for sparse reward tasks. ", "The work is fairly novel in its approach, ", "combining a learned reward estimator with a contextual bandit algorithm for exploration/exploitation. ", "The paper was mostly clear in its exposition, ", "however some additional information of the motivation for why the said reduction is better than simpler alternatives would help. ", "\\n\\nPros\\n1. The results on bandit structured prediction problems are pretty good\\n", "2. The idea of a learnt credit assignment function, and using that to separate credit assignment from the exploration/exploitation tradeoff is good. ", "\\n\\nCons: \\n1. The method seems fairly more complicated than PPO / A2C, ", "yet those methods seem to perform equally well on the RL problems (Figure 2.). ", "It also seems to be designed only for discrete action spaces.\\n", "2. Reslope Boltzmann performs much worse than Reslope Bootstrap, ", "thus having a bag of policies helps. ", "However, in the comparison in Figures 2 and 3, the policy gradient methods dont have the advantage of using a bag of policies. ", "A fairer comparison would be to compare with methods that use ensembles of Q-functions. ", "(like this https://arxiv.org/abs/1706.01502 by Chen et al.). ", "The Q learning methods in general would also have better sample efficiency than the policy gradient methods.\\n", "3. The method claims to learn an internal representation of a denser reward function for the sparse reward problem, ", "however the experimental analysis of this is pretty limited (Section 5.3). ", "It would be useful to do a more thorough investigation of whether it learnt a good credit assignment function in the games. ", "One way to do this would be to check the qualitative aspects of the function in a well understood game, like Blackjack.\\n\\n", "Suggestions:\\n1. What is the advantage of the method over a simple RL method that predicts a reward at every step (such that the dense rewards add up to match the sparse reward for the episode), and uses this predicted dense reward to perform RL? ", "This, and also a bigger discussion on prior bandit learning methods like LOLS will help under the context for why we\\u2019re performing the reduction stated in the paper.", "\\n\\nSignificance: While the method is novel and interesting, the experimental analysis and the explanations in the paper leave it unclear as to whether its significant compared to prior work."], "labels": ["fact", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "request", "reference", "fact", "fact", "evaluation", "request", "request", "non-arg", "request", "evaluation"]}
{"doc_id": "rkHhxN2lG", "text": ["This paper focuses on the density estimation when the amount of data available for training is low. ", "The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. ", "The paper presents two independent method.", "The first method is effectively a PixelCNN combined with an attention module. ", "Specifically, the support set is convolved to generate two sets of feature maps, the so called \"key\" and the \"value\" feature maps. ", "The key feature map is used from the model to compute the attention in particular regions in the support images to generate the pixels for the new \"target\" image. ", "The value feature maps are used to copmpute the local encoding, which is used to generate the respective pixels for the new target image, taking into account also the attention values. ", "The second method is simpler, ", "and very similar to fine-tuning the basis network on the few new samples provided during training. ", "Despite some interesting elements, ", "the paper has problems.", "First, the novelty is rather limited. ", "The first method seems to be slightly more novel, ", "although it is unclear whether the contribution by combining different models is significant. ", "The second method is too similar to fine-tuning: ", "although the authors claim that \\mathcal{L}_inner can be any function that minimizes the total loss \\mathcal{L}, ", "in the end it is clear that the log-likelihood is used. ", "How is this approach (much) different from standard fine-tuning, ", "since the quantity P(x; \\theta') is anyways unknown and cannot be \"trained\" to be maximized.", "Besides the limited novelty, ", "the submission leaves several parts unclear. ", "First, why are the convolutional features of the support set in the first methods divided into \"key\" and \"value\" feature maps as in p_key=p[:, 0:P], p_value=p[:, P:2*P]? ", "Is this division arbitrary, or is there a more basic reason? ", "Also, is there any different between key and value? ", "Why not use the same feature map for computing the attention and computing eq (7)?", "Also, in the first model it is suggested that an additional feature can be having a 1-of-K channel for the supporting image label: ", "the reason is that you might have multiple views of objects, and knowing which view contributes to the attention can help learning the density. ", "However, this assumes that the views are ordered, namely that the recording stage has a very particular format. ", "Isn't this a bit unrealistic, given the proposed setup anyways?", "Regarding the second method, it is not clear why leaving this room for flexibility (by allowing L_inner to be any function) to the model is a good idea. ", "Isn't this effectively opening the doors to massive overfitting? ", "Besides, isn't the statement that the function \\mathcal{L}_inner void? ", "At the end of the day one can also claim the same for gradient descent: you don't need to have the true gradients of the true loss, as long as the objective function obtains gradually lower and lower values?", "Last, it is unclear what is the connection between the first and the second model. ", "Are these two independent models that solve the same problem? ", "Or are they connected?", "Regarding the evaluation of the models, the nature of the task makes the evaluation hard: ", "for real data like images one cannot know the true distribution of particular support examples. 
", "Surrogate tasks are explored, first image flipping, then likelihood estimation of Omniglot characters, then image generation. ", "Image flipping does not sound a very relevant task to density estimation, given that the task is deterministic. ", "Perhaps, what would make more sense would be to generate a new image given that the support set has images of a particular orientation, meaning that the model must learn how to learn densities from arbitrary rotations. ", "Regarding Omniglot character generation, the surrogate task of computing likelihood of known samples gives a bit better, ", "however, this is to be expected when combining a model without attention, with an attention module.", "All in all, the paper has some interesting ideas. ", "I encourage the authors to work more on their submission and think of a better evaluation and resubmit."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "r1Kg9atxz", "text": ["The authors extend the approach proposed in the \"Reverse Curriculum Learning for Reinforcement Learning\" paper by adding a discriminator that gives a bonus reward to a state based on how likely it thinks the current policy is to reach the goal from said state. ", "The discriminator is a potentially interesting mechanism to approximate multi-step backups in sparse-reward environments. ", "The approach of this paper seems severely severely limited by the assumptions made by the authors, mainly assuming a deterministic environment, known goal states and the ability to sample anywhere in the state space. ", "Some of these assumptions may be reasonable in domains such as robotics, ", "but they seem very restrictive in the domains like the games considered in the paper.", "Additional Comments: -The authors demonstrate some benefits of using Tendency rewards, ", "but made little attempt to explain why it leads to accelerated learning. ", "Results are pure performance results.", "-The authors should probably structure the tendency reward as potential based instead of using the Gaussian kernel hack they introduce in section 4.2", "- Presentation: There are several mistakes and formatting issues in References", "- Assumption 2 transformations -> transitions?", "-Need to add assumption 3: advance knowledge of goal state", "- the use of gamma as a scale factor in equation 2 is confusion, ", "it was already introduced as the discount factor ( which is default notation in RL). ", "It also isn't clear what the notation r_f denotes (is it the same as r^f in appendix?).", "-It is nice to see that the authors compare their method with alternative approaches. ", "Unfortunately, the proposed method does not seem to offer many benefits."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "Skoq_d6gz", "text": ["This paper presents a semi-supervised extension for applying GANs to regression tasks.", "The authors propose two architectures: one adds a supervised regression loss to the standard unsupervised GAN discriminator loss.", "The other replaces the real/fake output of the discriminator with only a real-valued output and then applies a kernel on top of this output to predict if samples are real or fake.", "The methods are evaluated on a public driving dataset,", "and are shown to outperform an Improved-GAN which predicts the real-valued labels discretized into 10 classes.", "This is a nice idea,", "but I am not completely convinced by the experimental results.", "The proposed method is compared to Improved-GAN where the real-valued labels are discretized into 10 classes.", "Why 10?", "How was this chosen?", "The authors rightfully state that \"[...] this discretization will add some unavoidable quantization error to our training\" (Sec 5.2) and then again in the conclusion \"determining the number of [discretization] classes for each application is non-trivial\",", "yet nowhere do they explore the effects of this.", "Surely, this is a very important part of the evaluation?", "And surely as we improve the discretization-resolution the gap between the two will close?", "This needs to be evaluated.", "Also, the main motivation for a GAN-based regression model is based on the paucity of labeled training data.", "However, this is another place where the argument would greatly benefit from some empirical backing.", "I.e., I would really at least like to see how a discriminative regression model (e.g. a pretrained convnet fine-tuned for regression) compares to the proposed technique when trained (fine-tuned) only on the (smaller) labeled data set, perhaps augmented with standard image augmentation techniques to increase the size.", "Overall, I found the paper a little hard to read (especially understanding how Architecture 2 works and moreover what its motivation is)", "and empirical evaluation a bit lacking.", "I also found the claims of \"solving\" the regression task using GANs unfounded based on the experimental results presented.", "In conclusion, while the technique looks promising, the novelty seems fairly low", "and the evaluation can benefit from one or more additional baselines", "(at the very least showing how varying the discretization resolution of the Improved-GAN affects the results, but preferably one or more discriminative baselines),", "and also perhaps on one or more additional data sets to showcase the technique's generality.", "Nits:Several part are quite repetitive and can benefit from a rewrite.", "Particularly the last paragraphs in the Introduction.", "Section 3: notation seems inconsistent (p_z(z) vs P_z(z) directly below in Eqn 1)", "The second architecture needs to be explained a little better, and motivated a little better.", "Eqn 5: I think it should be 0 \\geq \\hat{y}, and not 0 \\leq \\hat{y}"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "non-arg", "non-arg", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "request"]}
{"doc_id": "HyKlaaFxf", "text": ["Overall, the paper is well-written ", "and the proposed model is quite intuitive. ", "Specifically, the idea is to represent entailment as a product of continuous functions over possible worlds. ", "Specifically, the idea is to generate possible worlds, and compute the functions that encode entailment in those worlds. ", "The functions themselves are designed as tree neural networks to take advantage of logical structure. ", "Several different encoding benchmarks of the entailment task are designed to compare against the performance of the proposed model, using a newly created dataset. ", "The results seem very impressive with > 99% accuracy on tests sets.", "One weakness with the paper was that it was only tested on 1 dataset. ", "Also, should some form of cross-validation be applied to smooth out variance in the evaluation results. ", "I am not sure if there are standard \"shared\" datasets for this task, ", "which would make the results much stronger.", "Also how about the tradeoff, i.e., does training time significantly increase when we \"imagine\" more worlds. ", "Also, in general, a discussion on the efficiency of training the proposed model as compared to TreeNN would be helpful.", "The size of the world vectors, I would believe is quite important, ", "so maybe a more detailed analysis on how this was chosen is important to replicate the results.", "This problem, I think, is quite related to model counting. ", "There has been a lot of work on model counting. ", "a discussion on how this relates to those lines of work would be interesting."], "labels": ["evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "non-arg", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request"]}
{"doc_id": "Bk-lFRWWz", "text": ["The authors propose reducing the number of parameters learned by a deep network by setting up sparse connection weights in classification layers. ", "Numerical experiments show that such sparse networks can have similar performance to fully connected ones. ", "They introduce a concept of \u201cscatter\u201d that correlates with network performance. ", "Although I found the results useful and potentially promising, ", "I did not find much insight in this paper.", "It was not clear to me why scatter (the way it is defined in the paper) would be a useful performance proxy anywhere but the first classification layer. ", "Once the signals from different windows are intermixed, how do you even define the windows? ", "Minor Second line of Section 2.1: \u201clesser\u201d -> less or fewer"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "Sk6i-Szbz", "text": ["This paper proposes to use Cross-Corpus training for biomedical relationship extraction from text. ", "- Many wording issues, like citation formats, grammar mistakes, missing words, e.g., Page 2: it as been", "- The description of the methods should be improved. ", "For instance, why the input has only two entities? ", "In many biomedical sentences, there are more than two entities. ", "How can the proposed two models handle these cases? ", "- The paper just presents to train on a larger labeled corpus and test on a task with a smaller labeled set. ", "Why is this novel? ", "Nothing is novel in the deep models (CNN and TreeLSTM). ", "- Missing refs, like: A simple neural network module for relational reasoning, Arxiv 2017"], "labels": ["fact", "fact", "request", "request", "fact", "non-arg", "fact", "evaluation", "evaluation", "fact"]}
{"doc_id": "H1IrTpFxz", "text": ["The paper addresses the problem of learning the form of the activation functions in neural networks.", "The authors propose to place Gaussian process (GP) priors on the functional form of each activation function (each associated with a hidden layer and unit) in the neural net.", "This somehow allows to non-parametrically infer from the data the \"shape\" of the activation functions needed for a specific problem.", "The paper then proposes an inference framework (to approximately marginalize out all GP functions) based on sparse GP methods that use inducing points and variational inference.", "The inducing point approximation used here is very efficient since all GP functions depend on a scalar input (as any activation function!)", "and therefore by just placing the inducing points in a dense grid gives a fast and accurate representation/compression of all GPs in terms of the inducing function values (denoted by U in the paper).", "Of course then inference involves approximating the finite posterior over inducing function values U", "and the paper make use of the standard Gaussian approximations.", "In general I like the idea", "and I believe that it can lead to a very useful model.", "However, I have found the current paper quite preliminary and incomplete.", "The authors need to address the following:", "First (very important): You need to show experimentally how your method compares against regular neural nets (with specific fixed forms for their activation functions such relus etc).", "At the moment in the last section you mention", "\"We have validated networks of Gaussian Process Neurons in a set of experiments, the details of which we submit in a subsequent publication. In those experiments, our model shows to be significantly less prone to overfitting than a traditional feed-forward network of same size, despite having more parameters.\"", "===> Well all this needs to be included in the same paper.", "Secondly: Discuss the connection with Deep GPs (Damianou and Lawrence 2013).", "Your method seems to be connected with Deep GPs", "although there appear to be important differences as well.", "E.g. you place GPs on the scalar activation functions in an otherwise heavily parametrized neural network (having interconnection weights between layers) while deep GPs model the full hidden layer mapping as a single GP (which does not require interconnection weights).", "Thirdly: You need to better explain the propagation of uncertainly in section 3.2.2 and the central limit of distribution in section 3.2.1.", "This is the technical part of your paper which is a non-standard approximation.", "I will suggest to give a better intuition of the whole idea and move a lot of mathematical details to the appendix."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "quote", "request", "request", "evaluation", "evaluation", "fact", "request", "evaluation", "request"]}
{"doc_id": "Bk9oIe5gG", "text": ["The paper investigates different representation learning methods to create a latent space for intrinsic goal generation in guided exploration algorithms.", "The research is in principle very important and interesting.", "The introduction discusses a great deal about intrinsic motivations and about goal generating algorithms.", "This is really great,", "just that the paper only focuses on a very small aspect of learning a state representation in an agent that has no intrinsic motivation other than trying to achieve random goals.", "I think the paper (not only the Intro) could be a bit condensed to more concentrate on the actual contribution.", "The contribution is that the quality of the representation and the sampling of goals is important for the exploration performance and that classical methods like ISOMap are better than Autoencoder-type methods.", "Also, it is written in the Conclusions (and in other places): \"[..] we propose a new intrinsically Motivated goal exploration strategy....\".", "This is not really true.", "There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space.", "Also, there is no intrinsic motivation.", "I also think the title is misleading.", "The paper is in principle interesting.", "However, I doubt that the experimental evaluations are substantial enough for profound conclusion.", "Several points of critic: - the input space was very simple in all experiments, not suitable for distinguishing between the algorithms,", "for instance, ISOMap typically suffers from noise and higher dimensional manifolds, etc.", "- only the ball/arrow was in the input image, not the robotic arm.", "I understand this because in phase 1 the robot would not move,", "but this connects to the next point:- The representation learning is only a preprocessing step requiring a magic first phase.", "-> Representation is not updated during exploration", "- The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment.", "- I am skeptical about the VAE and RFVAE results.", "The difference between Gaussian sampling and the KDE is a bit alarming,", "as the KL in the VAE training is supposed to match the p(z) with N(0,1).", "Given the power of the encoder/decoder it should be possible to properly represent the simple embedded 2D/3D manifold and not just a very small part of it as suggested by Fig 10.", "I have a hard time believing these results.", "I urge you to check for any potential errors made.", "If there are not mistakes then this is indeed alarming.", "Questions: - Is it true that the robot always starts from same initial condition?!", "Context=Emptyset.", "- For ISOMap etc, you also used a 10dim embedding?", "Suggestion: - The main problem seems to be that some algorithms are not representing the whole input space.", "- an additional measure that quantifies the difference between true input distribution and reproduced input distribution could tier the algorithms apart and would measure more what seems to be relevant here.", "One could for instance measure the KL-divergence between the true input and the sampled (reconstructed) input (using samples and KDE or the like).", "- This could be evaluated on many different inputs (also those with a bit more complicated structure) without actually performing the goal finding.", "- BTW: I think Fig 10 is rather illustrative and should be somehow in the main part of the paper", "On the positive side, the paper provides lots 
of details in the Appendix.", "Also, it uses many different Representation Learning algorithms and uses measures from manifold learning to access their quality.", "In the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:J. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.", "and G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.", "Typos and small details:p3 par2: for PCA you cited Bishop.", "Not critical, but either cite one the original papers or maybe remove the cite altogether", "p4 par-2: has multiple interests...: interests -> purposes?", "p4 par-1: Outcome Space to the agent is is ...", "Sec 2.2 par1: are rapidly mentioned... -> briefly", "Sec 2.3 ...Outcome Space O, we can rewrite the architecture as: and then comes the algorithm.", "This is a bit weird", "Sec 3: par1: experimental campaign -> experiments?", "p7: Context Space: the object was reset to a random position or always to the same position?", "Footnote 14: superior to -> larger than", "p8 par2: Exploration Ratio Ratio_expl... probably also want to add (ER) as it is later used", "Sec 4: slightly underneath -> slightly below", "p9 par1: unfinished sentence: It is worth noting that the....", "one sentence later: RP architecture? RPE?", "Fig 3: the error of the methods (except FI) are really bad.", "An MSE of 1 means hardly any performance!", "p11 par2: for e.g. with the SAGG..... grammar?", "Plots in general: use bigger font sizes."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "fact", "reference", "reference", "fact", "request", "request", "fact", "request", "request", "evaluation", "request", "request", "request", "request", "request", "fact", "request", "evaluation", "evaluation", "request", "request"]}
{"doc_id": "BJJTve9gM", "text": ["This paper proposes to adapt convnet representations to new tasks", "while avoiding catastrophic forgetting by learning a per-task \u201ccontroller\u201d specifying weightings of the convolution-al filters throughout the network", "while keeping the filters themselves fixed.", "Pros The proposed approach is novel and broadly applicable.", "By definition it maintains the exact performance on the original task,", "and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network).", "The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one.", "A number of different fine-tuning regimes are explored.", "The paper is mostly clear and well-written", "(though with a few typos that should be fixed).", "Cons/Questions/Suggestions The distinction between the convolutional and fully-connected layers (called \u201cclassifiers\u201d) in the approach description (sec 3) is somewhat arbitrary", "-- after all, convolutional layers are a generalization of fully-connected layers.", "(This is hinted at by the mention of fully convolutional networks.)", "The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights.", "A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, \u2026, N}) and learning independent weights for the latter N-K layers,", "but I understand this would be a rather large experimental burden.", "When discussing the controller initialization (sec 4.3), it\u2019s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results.", "Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning?", "It\u2019s not immediately clear to me whether this is the case", "-- it could help to clarify this in the text.", "If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an \u201cidentity+noise\u201d initialization of the controller matrix,", "which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation.", "(Equivalently, one could treat the controller-weighted filters as a \u201cresidual\u201d term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W\\crossF) rather than just W\\crossF.)", "The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting.", "It would be interesting to see how this compares with the hard thresholding method used here.", "(As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.)", "Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "non-arg", "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", 
"request", "evaluation"]}
{"doc_id": "Sy8Kdltgz", "text": ["This paper proposes a method of learning sparse dictionary learning by introducing new types of priors. ", "Specifically, they designed a novel idea of defining a metric to measure discriminative properties along with the quality of presentations.", "It is also presented the power of the proposed method in comparison with the existing methods in the literature.", "Overall, the paper deals with an important issue in dictionary learning and proposes a novel idea of utilizing a set of priors. ", "To this reviewer\u2019s understanding, the thresholding parameter $\\tau_{c}$ is specific for a class $c$ only, ", "thus different classes have different $\\tau$ vectors. ", "If so, Eq. (6) for approximation of the measure $D(\\cdot)$ is not clear how the similarity measure between ${\\bf y}_{c,k}$ and ${\\bf y}_{c1,k1}$, \\ie, $\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}+\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}$ and $\\left\\|{\\bf y}_{c,k}\\odot{\\bf y}_{c1,k1}\\right\\|_{2}^{2}$, works to approximate it. ", "It would be appreciated to give more detailed description on it and geometric illustration, if possible.", "There are many typos and grammatical errors, ", "which distract from reading and understanding the manuscript."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation"]}
{"doc_id": "S12o7fqlM", "text": ["This paper tackles the task of learning embeddings of multi-relational graphs using a neural network.", "As much of previous work, the proposed architecture works on triples (h, r, t) wth h, t entities and r the relation type.", "Despite interesting experimental results, I find that the paper carries too many imprecisions as is.", "* One of the main originality of the approach is to be able for a given input triple to train by sequentially removing in turn the head h, then the tail t and finally the relation r.", "(called multi-shot in the paper).", "However, most (if not all) approaches learning embeddings of multi-relational graphs also create multiple examples given a triple.", "And that, at least since \"Learning Structured Embeddings of Knowledge Bases\" by Bordes et al. 2011 that was predicting h and t (not r).", "The only difference is that here it is done sequentially", "while most methods sample one case each time.", "Not really meaningful or at least not proved meaningful here.", "* The sequential/RNN-like structure is unclear and it is hard to see how it relates to the data.", "* Writing that the proposed method \"unsupervised, which is distinctly different from previous works\" is not true or should be rephrased.", "The only difference comes from that the prediction function (softmax and not ranking for instance) and the loss used.", "But none of the methods compared in the experiments use more information than GEN (the original graph).", "GEN is not the only model using a softmax by the way.", "* The fact of predicting indistinctly a fact or its reverse seems rather worrying to me.", "Predicting that \"John is_father_of Paul\" or that \"John is_child_of Paul\" is not the same..!", "How is assessed the fact that a prediction is conceptually correct?", "Using types?", "* The bottom part of Table 2 is surprising.", "How come for the task of predicting Head, the model trained only at predicting heads (GEN(t,r => h)) performs worse than the model trained only at predicting tails (GEN(h,r => t))?"], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact", "request", "non-arg", "evaluation", "fact"]}
{"doc_id": "BynUQBQZM", "text": ["This paper proposes a regularization to the softmax layer, which try to make the distribution of feature representation (inputs fed to the softmax layer) more meaningful according to the Euclidean distance.", "The proposed isotropic loss in equation 3 tries to equalize the squared distances from each point to the mean,", "so the features are encouraged to lie close to a sphere.", "Overall, the proposed method is a relatively simple tweak to softmax.", "The authors show that empirically, features learned under softmax loss + isotropic regularization outperforms other features in Euclidean metric-based tasks.", "My main concern with this paper is the motivation:", "what are the practical scenarios in which one would want to used proposed method?", "1. It is true that features learned with the pure softmax loss may not presents the ideal similarity under the Euclidean metric (e.g. the problem depicted in Figure 1),", "because they are not trained to do so:", "their purpose is just to predict the correct label.", "While the proposed regularization does lead to a nicer Euclidean geometry,", "there is not sufficient motivation and evidence showing this regularization improves classification accuracy.", "2. In table 2, the authors seem to indicate that not using the label information in the definition of Isotropic loss is an advantage.", "But this does not matter", "since you already use the labels in the softmax loss.", "3. I can not easily think of scenarios in which, we would like to perform KNN in the feature space (Table 3) after training a softmax layer.", "In fact, Table 3 shows KNN is almost always worse than softmax in terms of classification accuracy.", "4. Running kmeans or agglomerative clustering in the feature space (Table 5) *using the Euclidean metric* is again ill-posed,", "because the softmax layer is not trained to do this.", "If one really wants good clustering performance, one shall always try to learn a good metric, or ,", "why do not you perform clustering on the softmax output (a probability vector?)", "5. The experiments on adversarial robustness and face verification seems more interesting to me,", "but the tasks were not carefully explained for someone not familiar with that literature.", "Perhaps for these tasks, multi-class classification is not the most correct objective, and maybe the proposed regularization can help,", "but the motivations are not given."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "evaluation", "non-arg", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact"]}
{"doc_id": "ryT2f8KgM", "text": ["This paper continues a trend of incremental improvements to Wasserstein GANs (WGAN), ", "where the latter were proposed in order to alleviate the difficulties encountered in training GANs. ", "Originally, Arjovsky et al. [1] argued that the Wasserstein distance was superior to many others typically used for GANs. ", "An important feature of WGANs is the requirement for the discriminator to be 1-Lipschitz, ", "which [1] achieved simply by clipping the network weights. ", "Recently, Gulrajani et al. [2] proposed a gradient penalty \"encouraging\" the discriminator to be 1-Lipschitz. ", "However, their approach estimated continuity on points between the generated and the real samples, ", "and thus could fail to guarantee Lipschitz-ness at the early training stages. ", "The paper under review overcomes this drawback by estimating the continuity on perturbations of the real samples. ", "Together with various technical improvements, this leads to state-of-the-art practical performance both in terms of generated images and in semi-supervised learning. ", "In terms of novelty, the paper provides one core conceptual idea followed by several tweaks aimed at improving the practical performance of GANs. ", "The key conceptual idea is to perturb each data point twice and use a Lipschitz constant to bound the difference in the discriminator\u2019s response on the perturbed points. ", "The proposed method is used in eq. (6) together with the gradient penalty from [2]. ", "The authors found that directly perturbing the data with Gaussian noise led to inferior results ", "and therefore propose to perturb the hidden layers using dropout. ", "For supervised learning they demonstrate less overfitting for both MNIST and CIFAR 10. ", "They also extend their framework to the semi-supervised setting of Salismans et al 2016 and report improved image generation. ", "The authors do an excellent comparative job in presenting their experiments. ", "They compare numerous techniques (e.g., Gaussian noise, dropout) and demonstrates the applicability of the approach for a wide range of tasks. ", "They use several criteria to evaluate their performance (images, inception score, semi-supervised learning, overfitting, weight histogram) and compare against a wide range of competing papers. ", "Where the paper could perhaps be slightly improved is writing clarity. ", "In particular, the discussion of M and M' is vital to the point of the paper, ", "but could be written in a more transparent manner. ", "The same goes for the semi-supervised experiment details and the CIFAR-10 augmentation process. ", "Finally, the title seems uninformative. ", "Almost all progress is incremental, ", "and the authors modestly give credit to both [1] and [2], ", "but the title is neither memorable nor useful in expressing the novel idea. ", "[1] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein gan.", "[2] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "request", "evaluation", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "reference", "reference"]}
{"doc_id": "r1OoL_Yxz", "text": ["The authors suggest using a mixture of shared and individual rewards within a MARL environment to induce cooperation among independent agents.", "They show that on their specific application this can lead to a better overall global performance than purely sharing the global signal, or using just the independent rewards.", "The paper is a little too focused on the packet routing example domain and fails to deliver much in terms of a general theory of reward design for cooperative behaviours beyond showing that mixed rewards can lead to improved results in their domain.", "They discuss what and how rewards,", "and this could be made more formal, as well as (at the very least) some guiding principles to follow when mixing rewards.", "It feels like there is a missing section between sections 2 and 3, where this methodological content could be described.", "The rest of the paper has similar issues, with key intuition and concepts either missing entirely or under-represented.", "The technical content often assumes that the reader is familiar with certain terms,", "and it is difficult to see what meaningful conclusions can be drawn from the evaluation.", "On a minor note, the use of the term cooperative in this paper could be better defined.", "In game theory, cooperative games are those in which agents share rewards.", "Non-cooperative (game theory) games are those where agents have general reward signals (not necessarily cooperature or adversarial).", "Conventionally (yes there is existing reward design/shaping literature for MARL) people have used the same terms in MARL.", "Perhaps the authors could define their approach as weakly cooperative, or emergent cooperation.", "The related work could be better described.", "There are existing papers on MARL and the issues with cooperation among independent learners,", "and this could be referenced.", "This includes reward shaping and reward potential.", "I would also have expected to see brief mention of empowerment in this section too (the agent favouring states where it has the power to control outcomes in an information theoretic sense), as an underyling principle for intrinsic reward.", "However, more importantly, the authors really needed to do more to synthesize this into an overall picture of what principles are at play and what ideas/methods exist that have tried to exploit some of these principles.", "Detailed comments: \u2022 [p2] the authors say \"We set the meta reward signals as 1 - max(U l ).\", before they define what U_l is.", "\u2022 [p2] we have \"As many applications in the real world can be modeled using similar methods, we expect that other fields can also benefit from this work.\"", "This statement is too vague,", "and the authors could do more to identify which application areas might benefit.", "\u2022 [p3, first para] \"However, the reward design studies for MARL is so limited.\"", "Drop the word 'so'.", "Also, I would argue that there have been quite a few (non-deep) discussions about reward design in MARL, cooperative, non-cooperative and competitive domains.", "\u2022 [p3, sec 2.2] \"This makes the diligent agents confuse about...\"", "should be \"confused\", and I would advise against anthropomorphism at least when the meaning is obscured.", "\u2022 [p3, sec 3] \"After having considered several other options, we finally choose the Packet Routing Domain as our experimental environments.\"", "Not sure what useful information is being conveyed here.", "\u2022 [sec 3] THe domain could be 
better described with intuition and formal descriptions, e.g. link utilization ratio, etc, before.", "\u2022 [p6] \"Importantly, the proposed blR seems to have similar capacity with dlR,\"", "The discussion here is all in terms of the reward acronyms with very little call on intuition or other such assistance to the reader.", "\u2022 [p7] \"We firstly try gR without any thinking\"", "The language could be better here."], "labels": ["fact", "fact", "evaluation", "fact", "request", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "request", "request", "request", "fact", "quote", "evaluation", "request", "quote", "request", "evaluation", "quote", "request", "quote", "evaluation", "request", "quote", "evaluation", "quote", "request"]}
{"doc_id": "SyJaBw1eG", "text": ["Summary: The paper considers second-order optimization methods for training of neural networks.", "In particular, the contribution of the paper is a Hessian-free method that works on blocks of parameters ", "(this is a user defined splitting of the parameters in blocks, e.g., parameters of each layer is one block, or parameters in several layers could constitute a block). ", "This results into a block-diagonal approximation to the curvature matrix, in order to improve Hessian-free convergence properties: ", "in the latter, a single step might require many CG steps, ", "so the benefit from using second-order information is not apparent.", "This is mainly an experimental work, ", "where the authors show the merits of their approach on deep autoencoders, convolutional networks and LSTMs: ", "results show favourable performance compared to the original Hessian-free approach and the Adam method.", "Originality: The paper is based on the works of Collobert (2004) and Le Roux et al. (2008), as well as the work of Martens: ", "the twist is that each layer of the neural network is considered a parameter block, ", "so that gradient interactions among weights in a single layer are more useful than those between weights in different layers. ", "This increases the separability of the problem and reduces the complexity. ", "Importance: Understanding the difference between first- and second-order methods for NN training is an important topic. ", "Using second-order methods could be considered at its infancy, compared to the wide variety of first-order methods. ", "Having new results on second-order methods with interesting results would definitely attract some attention at the conference. ", "Presentation/Clarity: The paper is well structured and well written. ", "The authors clearly place their work w.r.t. state of the art and previous works, ", "so that it is clear what is new and what is known.", "Comments: 1. It is not clear why the deficiency of first-order methods on training NNs with big batches motivates us to turn into second-order methods. ", "Is there a reasoning for this statement? ", "Or is it just because second-order methods are kind-of the only other alternative we have?", "2. Assuming we can perform a second-order method, like Newton's method, on a deep NN. ", "Since originally Newton's method was designed to find solutions that have gradient equal to zero, ", "and since NNs have saddle points (probably many more than local minima), ", "even if we could perfectly perform second-order Newton motions, there is no guarantee whether we converge to a local minimum or a saddle point. ", "However, since we perform Newton's method approximately in practice, ", "this might help escaping saddle points. ", "Any comment on this aspect ", "(I'm not aware whether this is already commented in Schraudolph 2002, where the Gauss-Newton matrix was proposed instead of the Hessian)?"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "non-arg"]}
{"doc_id": "B1pcOYBlG", "text": ["Quality This paper demonstrates that convolutional and relational neural networks fail to solve visual relation problems by training networks on artificially generated visual relation data. ", "This points at important limitations of current neural network architectures where architectures depend mainly on rote memorization.", "Clarity The rationale in the paper is straightforward. ", "I do think that breakdown of networks by testing on increasing image variability is expected given that there is no reason that networks should generalize well to parts of input space that were never encountered before.", "Originality While others have pointed out limitations before, ", "this paper considers relational networks for the first time.", "Significance This work demonstrates failures of relational networks on relational tasks, ", "which is an important message. ", "At the same time, no new architectures are presented to address these limitations.", "Pros Important message about network limitations.", "Cons Straightforward testing of network performance on specific visual relation tasks. ", "No new theory development. ", "Conclusions drawn by testing on out of sample data may not be completely valid."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact"]}
{"doc_id": "ry6owJ9lM", "text": ["This paper introduces a simple extension to parallelize Hyperband. ", "Points in favor of the paper:* Addresses an important problem", "Points against:* Only 5-fold speedup by parallelization with 5 x 25 workers, and worse performance in the same budget than Google Vizier (even though that treats the problem as a black box)", "* Limited methodological contribution/novelty", "The paper's methodological contribution is quite limited: ", "it amounts to a straight-forward parallelization of successive halving (SHA). ", "Specifically, whenever a worker frees up, do a new run on it, at the highest rung possible while making sure to not run too many runs for too high rungs. ", "(I am pretty sure that is the idea, even though Algorithm 1, which is supposed to give the details, appears to have a bug in Procedure get_job ", "-- it would always either pick the highest rung or the lowest!)", "Empirically, the paper strangely does not actually evaluate a parallel version of Hyperband, but only evaluates the 5 parallel variants of SHA that Hyperband would run, each of them with all workers. ", "The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. ", "I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? ", "I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? ", "If I understand this correctly, I would clearly call this a negative result.", "Likewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, ", "and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. ", "Parallel Hyperband would run the 5 SHA variants in parallel, ", "so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. ", "This would obtain a perplexity of above 90, ", "which is quite a bit worse than Vizier's result of about 82. ", "In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; ", "this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. ", "Again, this is apparently a negative result. ", "(For Figure 3, right, no results for Vizier are given yet.)", "If I understand correctly, the experiment in Section 4.4 does not involve any run of Hyperband, but merely plots predictions of Qi et al.'s Paelo framework of how many models could be evaluated with a growing number of GPUs.", "Therefore, all empirical results for parallel Hyperband reported in the paper appear to be negative. ", "This confuses me, ", "especially since the authors seem to take them as positive results. ", "Because the original Hyperband paper argued that Bayesian optimization does not parallelize as well as random search / Hyperband, ", "and because Hyperband has been reported to work much better than Bayesian optimization on a single node, ", "I would have expected clear improvements of parallel Hyperband over parallel Bayesian optimization (=Vizier in the authors' setup). ", "However, this is not what I see in the results. ", "Am I mistaken somewhere? 
", "If not, based on these negative results the paper does not seem to quite clear the bar for ICLR.", "Details, in order of appearance in the paper:- Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm? ", "The Vizier paper by Golovin et al (2017) states that for large budgets other optimizers often perform better, and the budget in the large scale experiments is as high as 5000 function evaluations. ", "Also, isn't there an automatic choice built into Vizier to pick the optimizer expected to be best? ", "I think using a suboptimal version of Vizier would be a problem for the experimental setup.", "- Algorithm 1: this needs some improvement; in particular fixing the bug I mentioned above.", "- Section 3.1: Li et al (2017) do not analyze any algorithm theoretically. ", "They also do not discuss finite vs. infinite horizon. ", "I believe the authors meant Li et al's arXiv paper (2016) in both of these cases.", "- Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. ", "Can you please make this clearer?", "- \"A complete theoretical treatment of asynchronous SHA is out of the scope of this paper\" ", "-> is some theoretical treatment in scope?", "- Section 4.1: It seems very useful to already recommend configurations in each rung of Hyperband, ", "and I am surprised that the methods section does not mention this. ", "From the text in this experiments section, it feels a little like that was always part of Hyperband; ", "I didn't think it was, ", "so I checked the original papers and blog posts, ", "and both the ICLR 2017 and the arXiv 2016 paper state \"In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R.\" ", "and Kevin Jamieson's blog post on Hyperband (https://people.eecs.berkeley.edu/~kjamieson/hyperband.html) explicitly states: \"While random and the Bayesian Optimization algorithms output their first recommendation after max_iter iterations, Hyperband does not output anything until about max_iter(logeta(max_iter)+1) iterations [...]\"", "Therefore, recommending after each rung seems to be a contribution of this paper, ", "and I think it would be nice to read about this in the methods section. ", "- Experiment 1 (SVM) used dataset size as a budget, which is what Fabolas (\"Fast Bayesian optimization on large datasets\") is designed for according to Klein et al (2017). ", "On the other hand, Experiments (2) and (3) used the number of epochs as a budget, and Fabolas is not designed for that ", "(one would want to use a different kernel, for epochs, e.g., like Freeze-Thaw Bayesian optimization (FTBO) by Swersky et al (2014), instead of a kernel made for dataset sizes). ", "Therefore, it is not surprising that Fabolas does not work as well in those cases. ", "The case of number of epochs as a budget would be the domain of FTBO. ", "I know that there is no reference implementation of FTBO, ", "so I am not asking for a comparison, but the comparison against Fabolas is misleading for Experiments (2) and (3). ", "This doesn't really change anything for the paper: ", "the authors could still make the case that Fabolas hasn't been designed for this case and that (to the best of my knowledge) there simply isn't an implementation of a BO algorithm that is. 
", "Fabolas is arguably the closest thing, ", "so the results could still be reported, just not as an apples-to-apples comparison; probably best as \"Fabolas-like, with dataset size kernel\" in the figure. ", "The justification to not compare against Fabolas in the parallel regime is clearly valid.", "- A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? ", "And not on any runs of Hyperband, correct? ", "Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? ", "How does this relate to the results in the cited paper \"Accurate, Large-batch SGD: Training ImageNet in 1 Hour\" (https://arxiv.org/abs/1706.02677)? ", "Quoting from its abstract: \"Using commodity hardware, our implementation achieves \u223c 90% scaling efficiency when moving from 8 to 256 GPUs.\" ", "That seems like a very good utilization of parallel computing power?", "- There is no conclusion / future work."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "request", "quote", "request", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "fact", "fact", "request", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "quote", "evaluation", "fact"]}
{"doc_id": "HkeBFwYgf", "text": ["This paper introduces a new toolbox for deep neural networks learning and evaluation.", "The central idea is to include time in the processing of all the units in the network.", "For this, the authors propose a paradigm switch: form layerwise-sequential networks, where at every time frame the network is evaluated by updating each layer \u2013 from bottom to top \u2013 sequentially; to layerwise-parallel networks, where all the neurons are updated in parallel.", "The new paradigm implies that the layer update is achieved by using the stored previous state and the corresponding previous state of the previous layer.", "This has three consequences.", "First, every layer now use memory,", "a condition that already applies for RNNs in layerwise-sequential networks.", "Second, in order to have a consistent output, the information has to flow in the network for a number of time frames equal to the number of layers.", "In Neuroscience, this concept is known as reaction time.", "Third, since the network is not synchronized in terms of the information that is processed in a specific time frame, there are discrepancies w.r.t. the layerwise-sequential networks computation: all the techniques used to train deep NNs have to be reconsidered.", "Overall, the concept is interesting and timely especially for the rising field of spiking neural networks or for large and distributed architectures.", "The paper, however, should probably provide more examples and results in terms of architectures that can been implemented with the toolbox in comparison with other toolboxes.", "The paper presents a single example in which either the accuracy and the training time are not reported.", "While I understand that the main result of this work is the toolbox itself, more examples and results would improve the clarity and the implications for such paradigm switch.", "Another concern comes from the choice to use Theano as back-end,", "since it's known that it is going to be discontinued.", "Finally I suggest to improve the clarity and description of Figure 2,", "which is messy and confusing especially if printed in B&W."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "fact", "request", "evaluation"]}
{"doc_id": "SyrOMN9eM", "text": ["The authors propose WAGE, which discretized weights, activations, gradients, and errors at both training and testing time. ", "By quantization and shifting, SGD training without momentum, and removing the softmax at output layer as well, the model managed to remove all cumbersome computations from every aspect of the model, ", "thus eliminating the need for a floating point unit completely. ", "Moreover, by keeping up to 8-bit accuracy, the model performs even better than previously proposed models. ", "I am eager to see a hardware realization for this method because of its promising results. ", "The model makes a unified discretization scheme for 4 different kinds of components, ", "and the accuracy for each of the kind becomes independently adjustable. ", "This makes the method quite flexible and has the potential to extend to more complicated networks, such as attention or memory. ", "One caveat is that there seem to be some conflictions in the results shown in Table 1, especially ImageNet. ", "Given the number of bits each of the WAGE components asked for, a 28.5% top 5 error rate seems even lower than XNOR. ", "I suspect it is due to the fact that gradients and errors need higher accuracy for real-valued input, ", "but if that is the case, accuracies on SVHN and CIFAR-10 should also reflect that. ", "Or, maybe it is due to hyperparameter setting or insufficient training time?", "Also, dropout seems not conflicting with the discretization. ", "If there are no other reasons, it would make sense to preserve the dropout in the network as well.", "In general, the paper was writ ten in good quality and in detail, ", "I would recommend a clear accept."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "HkKWeUCef", "text": ["Adversarial example is studied on one synthetic data.", "A neural networks classifier is trained on this synthetic data. ", "Average distances and norms of errorneous perturbations are computed. ", "It is observed that small perturbation (chosen in a right direction) is sufficient to cause misclassification. ", "CONS: The writing is bad and hard to follow, with typos: ", "for example what is a period just before section 3.1 for? ", "Another example is \"Red lines indicate the range of needed for perfect classification\", which does not make sense. ", "Yet another example is the period at the end of Proposition 4.1. ", "Another example is \"One counter-intuitive property of adversarial examples is it that nearly \". ", "It looks as if the paper was written in a hurry, and it shows in the writing. ", "At the beginning of Section 3, Figure 1 is discussed. ", "It points out that there exists adversarial directions that are very bad. ", "But I don't see how it is relevant to adversarial examples. ", "If one was interested in studying adversarial examples, then one would have done the following. ", "Under the setting of Figure 1, pick a test data randomly from the distribution (and one of the classes), and find an adversarial direction", "I do not see how Section 3.1 fits in with other parts of the paper. ", "Is it related to any experiment? ", "Why it defining a manifold attack?", "Putting a \"conjecture\" on a paper has to be accompanied by the depth of the insight that brought the conjecture. ", "Having an unjustified conjecture 5.1 would poison the field of adversarial examples, ", "and it must be removed.", "This paper is a list of experiments and observations, that are not coherent and does not give much insight into the topics of \"adversarial examples\". ", "The only main messages are that on ONE synthetic dataset, random perturbation does not cause misclassification and targeted classification can cause misclassification. ", "And, expected loss is good while worst-case loss is bad. ", "This, in my opinion, is not enough to be published at a conference."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation"]}
{"doc_id": "BkC87_Cgz", "text": ["The paper proposes to augment (traditional) text-based sentence generation/dialogue approaches by incorporating visual information. ", "The idea is that associating visual information with input text, and using that associated visual information as additional input will produce better output text than using only the original input text.", "The basic idea is to collect a bunch of data consisting of both text and associated images or video. ", "Here, this was done using Japanese news programs. ", "The text+image/video is used to train a model that requires both as input and that encodes both as context vectors, which are then combined and decoded into output text. ", "Next, the image inputs are eliminated, with the encoded image context vector being instead associatively predicted directly from the encoded text context vector (why not also use the input text to help predict the visual context?), which is still obtained from the text input, as before. ", "The result is a model that can make use of the text-visual associations without needing visual stimuli. ", "This is a nice idea.", "Actually, based on the brief discussion in Section 2.2.2, it occurs to me that the model might not really be learning visual context vectors associatively, or, that this doesn't really have meaning in some sense. ", "Does it make sense to say that what it is really doing is just learning to associate other concepts/words with the input text, and that it is using the augmenting visual information in the training data to provide those associations? ", "Is this worth talking about?", "Unfortunately, while the idea has merit, and I'd like to see it pursued, ", "the paper suffers from a fatal lack of validation/evaluation, ", "which is very curious, given the amount of data that was collected, the fact that the authors have both a training and a test set, and that there are several natural ways such an evaluation might be performed. ", "The two examples of Fig 3 and the additional four examples in the appendix are nice for demonstrating some specific successes or weaknesses of the model, ", "but they are in no way sufficient for evaluation of the system, to demonstrate its accuracy or value in general.", "Perhaps the most obvious thing that should be done is to report the model's accuracy for reproducing the news dialogue, that is, how accurately is the next sentence predicted by the baseline and ACM models over the training instances and over the test data? ", "How does this compare with other state-of-the-art models for dialogue generation trained on this data (perhaps trained only on the textual part of the data in some cases)?", "Second, some measure of accuracy for recall of the associative image context vector should be reported; for example, on average, how close (cosine similarity or some other appropriate measure) is the associatively recalled image context vector to the target image context vector? ", "On average? ", "Best case? ", "Worst case? ", "How often is this associative vector closer to a confounding image vector than an appropriate one?", "A third natural kind of validation would be some form of study employing human subjects to test it's quality as a generator of dialogue.", "One thing to note, the example of learning to associate the snowy image with the text about university entrance exams demonstrates that the model is memorizing rather than generalizing. 
", "In general, this is a false association ", "(that is, in general, there is no reason that snow should be associated with exams on the 14th and 15th\u2014the month is not mentioned, which might justify such an association.)", "Another thought: did you try not retraining the decoder and attention mechanisms for step 3? ", "In theory, if step 2 is successful, the retraining should not be necessary. ", "To the extent that it is necessary, step 2 has failed to accurately predict visual context from text. ", "This seems like an interesting avenue to explore (and is obviously related to the second type of validation suggested above). ", "Also, in addition to the baseline model, it seems like it would be good to compare a model that uses actual visual input and the model of step 1 against the model of step 3 (possibly bot retrained and not retrained) to see the effect on the outputs generated\u2014how well do each of these do at predicting the next sentence on both training and test sets?", "Other concerns:1. The paper is too long by almost a page in main content.", "2. The paper exhibits significant English grammar and usage issues ", "and should be carefully proofed by a native speaker.", "3. There are lots of undefined variables in the Eqs. (s, W_s, W_c, b_s, e_t,i, etc.) ", "Given the context and associated discussion, it is almost possible to sort out what all of them mean, ", "but brief careful definitions should be given for clarity. ", "4. Using news broadcasts as a substitute for true dialogue data seems kind of problematic, ", "though I see why it was done."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "request", "fact", "fact", "fact", "non-arg", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "evaluation"]}
{"doc_id": "Byqj1QtlM", "text": ["A DeepRL algorithm is presented that represents distributions over Q values, as applied to DDPG, and in conjunction with distributed evaluation across multiple actors, prioritized experience replay, and N-step look-aheads.", "The algorithm is called Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG.", "SOTA results are generated for a number of challenging continuous domain learning problems, as compared to benchmarks that include DDPG and PPO, in terms of wall-clock time, and also (most often) in terms of sample efficiency.", "pros/cons + the paper provides a thorough investigation of the distributional approach, as applied to difficult continuous action problems, and in conjunction with a set of other improvements (with ablation tests)", "- the story is a bit mixed in terms of the benefits, as compared to the non-distributional approach, D3PG", "- it is not clear which of the baselines are covered in detail in the cited paper:", "\"Anonymous. Distributed prioritized experience replay. In submission, 2017.\",", "i.e., should readers assume that D3PG already exists and is attributable to this other submission?", "Overall, I believe that the community will find this to be interesting work.", "Is a video of the results available?", "It seems that the distributional model often does not make much of a difference, as compared to D3PG non-prioritized.", "However, sometimes it does make a big difference, i.e., 3D parkour; acrobot.", "Do the examples where it yields the largest payoff share a particular characteristic?", "The benefit of the distributional models is quite different between the 1-step and 5-step versions.", "Any ideas why?", "Occasionally, D4PG with N=1 fails very badly, e.g., fish, manipulator (bring ball), swimmer.", "Why would that be?", "Shouldn't it do at least as well as D3PG in general?", "How many atoms are used for the categorical representation?", "As many as [Bellemare et al.], i.e., 51 ?", "How much \"resolution\" is necessary here in order to gain most of the benefits of the distributional representation?", "As far as I understand, V_min and V_max are not the global values, but are specific to the current distribution.", "Hence the need for the projection.", "Is that correct?", "Would increasing the exploration noise result in a larger benefit for the distributional approach?", "Figure 2: DDPG performs suprisingly poorly in most examples.", "Any comments on this, or is DDPG best avoided in normal circumstances for continuous problems? 
:-)", "Is the humanoid stand so easy because of large (or unlimited) torque limits?", "The wall-clock times are for a cluster with K=32 cores for Figure 1?", "\"we utilize a network architecture as specified in Figure 1 which processes the terrain info in order to reduce its dimensionality\"", "Figure 1 provides no information about the reduced dimensionality of the terrain representation,", "unless I am somehow failing to see this.", "\"the full critic architecture is completed by attaching a critic head as defined in Section A\"", "I could find no further documenation in the paper with regard to the \"head\" or a separate critic for the \"head\".", "It is not clear to me why multiple critics are needed.", "Do you have an intuition as to why prioritized replay might be reducing performance in many cases?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "reference", "request", "evaluation", "non-arg", "evaluation", "fact", "non-arg", "fact", "non-arg", "fact", "non-arg", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "non-arg", "non-arg", "quote", "fact", "evaluation", "quote", "evaluation", "evaluation", "non-arg"]}
{"doc_id": "BJAg3e7ZM", "text": ["1) Summary This paper proposes a flow-based neural network architecture and adversarial training for multi-step video prediction.", "The neural network in charge of predicting the next frame in a video implicitly generates flow that is used to transform the previously observed frame into the next.", "Additionally, this paper proposes a new quantitative evaluation criteria based on the observed flow in the prediction in comparison to the groundtruth.", "Experiments are performed on a new robot arm dataset proposed in the paper where they outperform the used baselines.", "2) Pros:+ New quantitative evaluation criteria based on motion accuracy.", "+ New dataset for robot arm pushing objects.", "3) Cons:Overall architectural prediction network differences with baseline are unclear:", "The differences between the proposed prediction network and [1] seem very minimal.", "In Figure 3, it is mentioned that the network uses a U-Net with recurrent connections.", "This seems like a very minimal change in the overall architecture proposed.", "Additionally, there is a paragraph of \u201carchitecture improvements\u201d which also are minimal changes.", "Based on the title of section 3, it seems that there is a novelty on the \u201cprediction with flow\u201d part of this method.", "If this is a fact, there is no equation describing how this flow is computed.", "However, if this \u201cflow\u201d is computed the same way [1] does it, then the title is misleading.", "Adversarial training objective alone is not new as claimed by the authors:", "The adversarial objective used in this paper is not new.", "Works such as [2,3] have used this objective function for single step and multi-step frame prediction training, respectively.", "If the authors refer to the objective being new in the sense of using it with an action conditioned video prediction network, then this is again an extremely minimal contribution.", "Essentially, the authors just took the previously used objective function and used it with a different network.", "If the authors feel otherwise, please comment on why this is the case.", "Incomplete experiments:The authors only show experiments on videos containing objects that have already been seen,", "but no experiments with objects never seen before.", "The missing experiment concerns me in the sense that the network could just be memorizing previously seen objects.", "Additionally, the authors present evaluation based on PSNR and SSIM on the overall predicted video, but not in a per-step paradigm.", "However, the authors show this per-step evaluation in the Amazon Mechanical Turk, and predicted object position evaluations.", "Unclear evaluation:The way the Amazon Mechanical Turk experiments are performed are unclear and/or not suited for the task at hand.", "Based on the explanation of how these experiments are performed, the authors show individual images to mechanical turkers.", "If we are evaluating the video prediction task for having real or fake looking videos, the turkers need to observe the full video and judge based on that.", "If we are just showing images, then they are evaluating image synthesis, which do not necessarily contain the desired properties in videos such as temporal coherence.", "Additional comments:The paper needs a considerable amount of polishing.", "4) Conclusion:This paper seems to contain very minimal changes in comparison to the baseline by [1].", "The adversarial objective is not novel as mentioned by the authors and has been used 
in [2,3].", "Evaluation is unclear and incomplete.", "References:[1] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.", "[2] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.", "[3] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee. Decomposing Motion and Content for Natural Video Sequence Prediction. In ICLR, 2017"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "request", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "reference", "reference", "reference"]}
{"doc_id": "B104VQCgM", "text": ["The paper basically propose keep using the typical data-augmentation transformations done during training also in evaluation time, to prevent adversarial attacks.", "In the paper they analyze only 2 random resizing and random padding,", "but I suppose others like random contrast, random relighting, random colorization, ... could be applicable.", "\\n\\nSome of the pros of the proposed tricks is that it doesn't require re-training existing models,", "although as the authors pointed out re-training for adversarial images is necessary to obtain good results.", "\\n\\n\\nTypically images have different sizes", ", however in the Dataset are described as having 299x299x3 size,", "are all the test images resized before hand?", "How would this method work with variable size images?", "\\n\\nThe proposed defense requires increasing the size of the input images,", "have you analyzed the impact in performance?", "Also it would be good to know how robust is the method for smaller sizes.", "\\n\\nSection 4.6.2 seems to indicate that 1 pixel padding or just resizing 1 pixel is enough to get most of the benefit,", "please provide an analysis of how results improve as the padding or size increase.", "\\n\\nIn section 5 for the challenge authors used a lot more evaluations per image,", "could you provide how much extra computation is needed for that model?"], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "non-arg", "fact", "non-arg", "request", "evaluation", "request", "evaluation", "request"]}
{"doc_id": "B1y7_3YgM", "text": ["This submission proposes a new seq2sel solution by adopting two new techniques, a sequence-to-set model and column attention mechanism. ", "They show performance improve over existing studies on WikiSQL dataset.", "While the paper is written clearly, ", "the contributions of the work heavily depends on the WikiSQL dataset. ", "It is not sure if the approach is generally applicable to other sequence-to-sql workloads. ", "Detailed comments are listed below: 1. WikiSQL dataset contains only a small class of SQL queries, with aggregation over single table and various filtering conditions. ", "It does not involve any complex operator in relational database system, e.g., join and groupby. ", "Due to its simple structure, the problem of sequence-to-sql translation over WikiSQL is actually simplified as a parameter selection problem for a fixed template. ", "This greatly limits the generalization of approaches only applicable to WikiSQL. ", "The authors are encouraged to explore other datasets available in the literature.", "2. The \"order-matters\" motivation is not very convincing. ", "It is straightforward to employ a global ordering approach to rank the columns and filtering conditions based on certain rules, e.g., alphabetical order. ", "That could ensure the orders in the SQL results are always consistent.", "3. The experiments do not fully verify how the approaches bring performance improvements. ", "In the current version, the authors only report superficial accuracy results on final outcomes, without any deep investigation into why and how their approach works. ", "For instance, they could verify how much accuracy improvement is due to the insensitivity to order in filtering expressions.", "4. They do not compare against state-of-the-art solution on column and expression selection. ", "While their attention mechanism over the columns could bring performance improvement, ", "they should have included experiments over existing solutions designed for similar purpose. ", "In (Yin, et al., IJCAI 2016), for example, representations over the columns are learned to generate better column selection.", "As a conclusion, I find the submission contains certain interesting ideas but lacks serious research investigations. ", "The quality of the paper could be much enhanced, if the authors deepen their studies on this direction."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "request", "fact", "evaluation", "evaluation"]}
{"doc_id": "HkshYX9xz", "text": ["In this work, discrete-weight NNs are trained using the variational Bayesian framework, achieving similar results to other state-of-the-art models.", "Weights use 3 bits on the first layer and are ternary on the remaining layers.", "- Pros: The paper is well-written and connections with the literature properly established.", "The approach to training discrete-weights NNs, which is variational inference, is more principled than previous works (but see below).", "- Cons: The authors depart from the original motivation when the central limit theorem is invoked.", "Once we approximate the activations with Gaussians, do we have any guarantee that the new approximate lower bound is actually a lower bound?", "This is not discussed.", "If it is not a lower bound, what is the rationale behind maximizing it?", "This seems to place this work very close to previous works, and not in the \"more principled\" regime the authors claim to seek.", "The likelihood weighting seems hacky.", "The authors claim \"there are usually many more NN weights than there are data samples\".", "If that is the case, then it seems that the prior dominating is indeed the desired outcome.", "A different, more flat prior (or parameter sharing), can be used,", "but the described reweighting seems to be actually breaking a good property of Bayesian inference,", "which is defecting to the prior when evidence is lacking.", "In terms of performance (Table 1), the proposed method seems to be on par with existing ones.", "It is unclear then what the advantage of this proposal is.", "Sparsity figures are provided for the current approach,", "but those are not contrasted with existing approaches.", "Speedup is claimed with respect to an NN with real weights, but not with respect existing NNs with binary weights,", "which is the appropriate baseline.", "- Minor comments: Page 3: Subscript t and variable t is used for the targets,", "but I can't find where it is defined.", "Only the names of the datasets used in the experiments are given,", "but they are not described, or even better, shown in pictures (maybe in a supplementary).", "The title of the paper says \"discrete-valued NNs\".", "The weights are discrete, but the activations and outputs are continuous,", "so I find it confusing.", "As a contrast, I would be less surprised to hear a sigmoid belief network called a \"discrete-valued NN\", even though its weights are continuous."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "non-arg", "fact", "fact", "fact", "fact", "evaluation", "evaluation"]}
{"doc_id": "BJFxOpcez", "text": ["This paper explores learning dynamic filters for CNNs. ", "The filters are generated by using the features of an autoencoder on the input image, and linearly combining a set of base filters for each layer.", " This addresses an interesting problem which has been looked at a lot before, but with some small new parts.", " There is a lot of prior work in this area ", "that should be cited in the area of dynamic filters and steerable filters. ", "There are also parallels to ladder networks that should be highlighted. ", "The results indicate improvement over baselines, ", "however baselines are not strong baselines. ", "A key question is what happens when this method is combined with VGG11 which the authors train as a baseline? ", "What is the effect of the reconstruction loss? ", "Can it be removed? ", "There should be some ablation study here.", "Figure 5 is unclear what is being displayed, ", "there are no labels.", "Overall I would advise the authors to address these questions and suggest this as a paper suitable for a workshop submission."], "labels": ["fact", "fact", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "non-arg", "non-arg", "request", "evaluation", "fact", "request"]}
{"doc_id": "ry-q9ZOlf", "text": ["This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step.", "This is a clever and useful idea, ", "and the paper is very well written. ", "The proposed method is very clearly motivated, both intuitively and mathematically, ", "and the authors also provide theoretical guarantees on its convergence behavior. ", "I particularly liked the analogy with the damped harmonic oscillator.", "The experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique. ", "I believe that the method proposed in this paper will have a significant impact in the area of GAN training.", "I have only one minor question: in the prediction step, why not use a step size, say $\\bar{u}_k+1 = u_{k+1} + \\gamma_k (u_{k+1} \u2212 u_k)$, such that the \"amount of predition\" may be adjusted?"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "non-arg"]}
{"doc_id": "S1QsSa1-M", "text": ["Authors provide an interesting loss function approach for clustering using a deep neural network. ", "They optimize Kuiper-based nonparametric loss and apply the approach on a large social network data-set. ", "However, the details of the deep learning approach are not well described. ", "Some specific comments are given below.", "1.Further details on use of 10-fold cross validation need to be discussed including over-fitting aspect.", "2. Details on deep learning, number of hidden layers, number of hidden units, activation functions, weight adjustment details on each learning methods should be included.", "3. Conclusion section is very brief ", "and can be expanded by including a discussion on results comparison and over fitting aspects in cross validation. ", "Use of Kuiper-based nonparametric loss should also be justified as there are other loss functions can be used under these settings."], "labels": ["fact", "fact", "evaluation", "non-arg", "request", "request", "evaluation", "request", "request"]}
{"doc_id": "BkuT3b9ef", "text": ["This paper investigates meta-learning strategy for automated architecture search in the context of RNN. ", "To constraint the architecture search space, authors propose a DSL that specifies the RNN recurrent operations. ", "This DSL allows to explore RNN architectures using either random search or a reinforcement-learning strategy. ", "Candidate architectures are ranked using a TreeLSTM that tries to predict the architecture performances. ", "The top-k architectures are then evaluated by fully training them on a given task.", "Authors evaluate their approach on PTB/Wikitext 2 language modeling and Multi30k/IWSLT'16 machine translation. ", "In both experiments, authors show that their approach obtains competitive results and can sometime outperforms RNN cells such as GRU/LSTM. ", "In the PTB experiment, their architecture however underperforms other LSTM variant in the literatures.", "- Quality/Clarity The paper is overall well written and pleasant to read.", "Few details can be clarified. ", "In particular how did you initialize the weight and bias for both the LSTM/GRU baselines and the found architectures? ", "Is there other works leveraging RNN that report results on the Multi30k/IWSLT datasets?", "You state in paragraph 3.2 that human experts can inject the previous best known architecture when training the ranking networks. ", "Did you use this in the experiments? ", "If yes, what was the impact of this online learning strategy on the final results? ", "- Originality The idea of using DSL + ranking for architecture search seems novel.", "- Significance Automated architecture search is a promising way to design new networks. ", "However, it is not clear why the proposed approach is not able to outperforms other LSTM-based architectures on the PTB task. ", "Could the problem arise from the DSL that constraint too much the search space ? ", "It would be nice to have other tasks that are commonly used as benchmark for RNN to see where this approach stand.", "In addition, authors propose both a DSL, a random and RL generator and a ranking function. ", "It would be nice to disentangle the contributions of the different components. ", "In particular, did the authors compare the random search vs the RL based generator or the performances of the RL-based generator when the ranking network is not used?", "Although authors do show that they outperform NAScell in one setting, ", "it would be nice to have an extended evaluation (using character level PTB for instance)."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "fact", "request"]}
{"doc_id": "HJI6Rf1eG", "text": ["This writeup describes an application of recurrent autoencoder to analysis of multidimensional time series. ", "The quality of writing, experimentation and scholarship is clearly below than what is expected from a scientific article. ", "The method is explained in a very unclear way, ", "there is no mention of any related work. ", "I would encourage the authors to take a look at other ICLR submissions and see how rigorously written they are, how they position the reported research among comparable works."], "labels": ["fact", "evaluation", "evaluation", "fact", "request"]}
{"doc_id": "H1caL6tXz", "text": ["The paper proposes to use a regularizer for tensor completion problems that can be written in a similar fashion as the variational factorization formulation of the trace norm aka nuclear norm for matrices.", "The paper introduces the regularizer", "with the nice argument that the gradient of the L3 norm to the power of 3rd will be easy to compute,", "but if we were raising the L2 norm to the 3rd power it would not be the case.", "They mention that their argument can generalize from D=3 to higher order tensors.", "Authors mention the paper by Friedland and Lim that introduces this norm and provides first theoretical results on it.", "Authors develop on the tensor equivalent of the matrix max norm", "which is built with the motivation of bringing robustness to heavy nodes in the graph (very popular content).", "This is again straightforward on the technical side.", "Empirical results are fine but do not show huge improvements compared to baselines", "so I do not think this is a strong argument for accepting the paper.", "On the scalability, authors do not show that their approach is better suited than baselines."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact"]}
{"doc_id": "BJu6VdYlf", "text": ["The paper suggests an importance sampling based Coreset construction for Support Vector Machines (SVM). ", "To understand the results, we need to understand Coreset and importance sampling: ", "Coreset: In the context of SVMs, a Coreset is a (weighted) subset of given dataset such that for any linear separator, the cost of the separator with respect to the given dataset X is approximately (there is an error parameter \\eps) the same as the cost with respect to the weighted subset. ", "The main idea is that if one can find a small coreset, then finding the optimal separator (maximum margin etc.) over the coreset might be sufficient. ", "Since the computation is done over a small subset of points, one hopes to gain in terms of the running time.", "Importance sampling: This is based on the theory developed in Feldman and Langberg, 2011 (and some of the previous works such as Langberg and Schulman 2010, the reference of which is missing). ", "The idea is to define a quantity called sensitivity of a data-point that captures how important this datapoint is with respect to contributing to the cost function. ", "Then a subset of datapoint are sampled based on the sensitivity and the sampled data point is given weight proportional to inverse of the sampling probability. ", "As per the theory developed in these past works, sampling a subset of size proportional to the sum of sensitivities gives a coreset for the given problem.", "So, the main contribution of the paper is to do all the sensitivity calculations with respect to SVM problem and then use the importance sampling theory to obtain bounds on the coreset size. ", "One interesting point of this construction is that Coreset construction involves solving the SVM problem on the given dataset which may seem like beating the purpose. ", "However, the authors note that one only needs to compute the Coreset of small batches of the given dataset and then use standard procedures (available in streaming literature) to combine the Coresets into a single Coreset. ", "This should give significant running time benefits. ", "The paper also compares the results against the simple procedure where a small uniform sample from the dataset is used for computation. ", "Evaluation: Significance: Coresets give significant running time benefits when working with very big datasets. ", "Coreset construction in the context of SVMs is a relevant problem and should be considered significant.", "Clarity: The paper is reasonably well-written. ", "The problem has been well motivated and all the relevant issues point out for the reader. ", "The theoretical results are clearly stated as lemmas a theorems that one can follow without looking at proofs. ", "Originality: The paper uses previously developed theory of importance sampling. ", "However, the sensitivity calculations in the SVM context is new as per my knowledge. ", "It is nice to know the bounds given in the paper and to understand the theoretical conditions under which we can obtain running time benefits using corsets. ", "Quality: The paper gives nice theoretical bounds in the context of SVMs. ", "One aspect in which the paper is lacking is the empirical analysis. ", "The paper compares the Coreset construction with simple uniform sampling. 
", "Since Coreset construction is being sold as a fast alternative to previous methods for training SVMs, ", "it would have been nice to see the running time and cost comparison with other training methods that have been discussed in section 2."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request"]}
{"doc_id": "By2zFR_gz", "text": ["Quality: Although the research problem is an interesting direction ", "the quality of the work is not of a high standard. ", "My main conservation is that the idea of perturbation in semantic latent space has not been described in an explicit way. ", "How different it will be compared to a perturbation in an input space? ", "Clarity: The use of the term \"adversarial\" is not quite clear in the context ", "as in many of those example classification problems the perturbation completely changes the class label (e.g. from \"church\" to \"tower\" or vice-versa)", "Originality: The generation of adversarial examples in black-box classifiers has been looked in GAN literature as well and gradient based perturbations are studied too. ", "What is the main benefit of the proposed mechanism compared to the existing ones?", "Significance: The research problem is indeed a significant one ", "as it is very important to understand the robustness of the modern machine learning methods by exposing them to adversarial scenarios where they might fail.", "pros: (a) An interesting problem to evaluate the robustness of black-box classifier systems", "(b) generating adversarial examples for image classification as well as text analysis.", "(c) exploiting the recent developments in GAN literature to build the framework forge generating adversarial examples.", "cons:(a) The proposed search algorithm in the semantic latent space could be computationally intensive. ", "any remedy for this problem?", "(b) Searching in the latent space z could be strongly dependent on the matching inverter $I_\\gamma(.)$. ", "any comment on this?", "(c) The application of the search algorithm in case of imbalanced classes could be something that require further investigation."], "labels": ["evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "request"]}
{"doc_id": "BkuDb6tgf", "text": ["This work attempts to improve the global consistency of samples generated by generative adversarial networks by replacing the discriminator with an autoregressive model in an encoded feature space. ", "The log likelihood of the classification model is then replaced with the log likelihood of the feature space autoregressive model. ", "It's not clear what can be said with respect to the convergence properties of this class of models, ", "and this is not discussed.", "The method is quite similar in spirit to Denoising Feature Matching of Warde-Farley & Bengio (2017), ", "as both estimate a density model in feature space -- this method via a constrained autoregressive model and DFM via an estimator of the score function, ", "although DFM was used in conjunction with the standard criterion whereas this method replaces it. ", "This is certainly worth mentioning and discussing. ", "In particular the section in Warde-Farley & Bengio regarding the feature space transformation of the data density seems quite relevant in this work.", "Unfortunately the only quantitative measurements reporter are Inception scores, ", "which is known to be a poor measure ", "(and the scores presented are not particularly high, either); ", "Frechet Inception distance or log likelihood estimates via AIS on some dataset would be more convincing. ", "On the plus side, the authors report an average over Inception scores for multiple runs. ", "On the other hand, it sounds as though the stopping criterion was still qualitative."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "evaluation"]}
{"doc_id": "B1B3e0Oef", "text": ["This work introduces a particular parametrization of a stochastic policy (a uniform mixture of deterministic policies).", "They find this parametrization, when trained with stochastic value gradient outperforms DDPG on several OpenAI gym benchmarks.", "This paper unfortunately misses many significant pieces of prior work training stochastic policies.", "The most relevant is [1] which should definitely be cited.", "The algorithm here can be seen as SVG(0) with a particular parametrization of the policy.", "However, numerous other works have examined stochastic policies including [2] (A3C which also used the Torcs environment) and [3].", "The wide use of stochastic policies in prior work makes the introductory explanation of the potential benefits for stochastic policies distracting,", "instead the focus should be on the particular choice and benefits of the particular stochastic parametrization chosen here and the choice of stochastic value gradient as a training method (as opposed to many on-policy methods).", "The empirical comparison is also hampered by only comparing with DDPG,", "there are numerous stochastic policy algorithms that have been compared on these environments.", "Additionally, the DDPG performance here is lower for several environments than the results reported in Henderson et al. 2017 (cited in the paper, table 2 here, table 3 Henderson)", "which should be explained.", "While this particular parametrization may provide some benefits, the lack of engagement with relevant prior work and other stochastic baselines significant limits the impact of this work and makes assessing its significance difficult.", "This work would benefit from careful copyediting.", "[1] Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., & Tassa, Y. (2015). Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (pp. 2944-2952).", "[2] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., ... & Kavukcuoglu, K. (2016, June). Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (pp. 1928-1937).", "[3] Schulman, J., Moritz, P., Levine, S., Jordan, M., & Abbeel, P. (2015). High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438."], "labels": ["fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "request", "evaluation", "fact", "fact", "request", "evaluation", "request", "reference", "reference", "reference"]}
{"doc_id": "SJJrhg5lf", "text": ["## Review Summary Overall, the paper's paper core claim, that increasing batch sizes at a linear rate during training is as effective as decaying learning rates, isinteresting ", "but doesn't seem to be too surprising given other recent work in this space. ", "The most useful part of the paper is the empirical evidence to backup this claim, ", "which I can't easily find in previous literature. ", "I wish the paper had explored a wider variety of dataset tasks and models to better show how well this claim generalizes, better situated the practical benefits of the approach ", "(how much wallclock time is actually saved? ", "how well can it be integrated into a distributed workflow?), ", "and included some comparisons with other recent recommended ways to increase batch size over time.", "## Pros / Strengths + effort to assess momentum / Adam / other modern methods", "+ effort to compare to previous experimental setups", "## Cons / Limitations - lack of wallclock measurements in experiments", "- only ~2 models / datasets examined, ", "so difficult to assess generalization", "- lack of discussion about distributed/asynchronous SGD", "## Significance Many recent previous efforts have looked at the importance of batch sizes during training, ", "so topic is relevant to the community. ", "Smith and Le (2017) present a differential equation model for the scale of gradients in SGD,finding a linear scaling rule proportional to eps N/B, where eps = learning rate, N = training set size, and B = batch size. ", "Goyal et al (2017) show how to train deep models on ImageNet effectively with large (but fixed) batch sizes by using a linear scaling rule.", "A few recent works have directly tested increasing batch sizes during training. ", "De et al (AISTATS 2017) have a method for gradually increasing batch sizes, as do Friedlander and Schmidt (2012). ", "Thus, it is already reasonable to practitioners that the proposed linear scaling of batch sizes during training would be effective.", "While increasing batch size at the proposed linear scale is simple and seems to be effective, ", "a careful reader will be curious how much more could be gained from the backtracking line search method proposed in De et al.", "## Quality Overall, only single training runs from a random initialization are used. ", "It would be better to take the best of many runs or to somehow show error bars,", "to avoid the reader wondering whether gains are due to changes in algorithm or to poor exploration due to bad initialization. ", "This happens a lot in Sec. 5.2.", "Some of the experimental setting seem a bit haphazard and not very systematic.", "In Sec. 5.2, only two learning rate scales are tested (0.1 and 0.5). ", "Why not examine a more thorough range of values?", "Why not report actual wallclock times? ", "Of course having reduced number of parameter updates is useful, ", "but it's difficult to tell how big of a win this could be.", "What about distributed SGD or asyncronous SGD (hogwild)? ", "Small batch sizes sometimes make it easier for many machines to be working simultaneously. ", "If we scale up to batch sizes of ~ N/10, we can only get 10x speedups in parallelization (in terms of number of parameter updates). 
", "I think there is some subtle but important discussion needed on how this framework fits into modern distributed systems for SGD.", "## Clarity Overall the paper reads reasonably well.", "Offering a related work \"feature matrix\" that helps readers keep track of how previous efforts scale learning rates or minibatch sizes for specific experiments could be valueable. ", "Right now, lots of this information is just provided in text, ", "so it's not easy to make head-to-head comparisons.", "Several figure captions should be updated to clarify which model and dataset are studied. ", "For example, when skimming Fig. 3's caption there is no such information.", "## Paper Summary The paper examines the influence of batch size on the behavior of stochastic gradient descent to minimize cost functions. ", "The central thesis is that instead of the \"conventional wisdom\" to fix the batch size during training and decay the learning rate, it is equally effective (in terms of training/test error reached) to gradually increase batch size during training while fixing the learning rate. ", "These two strategies are thus \"equivalent\". ", "Furthermore, using larger batches means fewer parameter updates per epoch, ", "so training is potentially much faster.", "Section 2 motivates the suggested linear scaling using previous SGD analysis from Smith and Le (2017). ", "Section 3 makes connections to previous work on finding optimal batch sizes to close the generaization gap. ", "Section 4 extends analysis to include SGD methods with momentum.", "In Section 5.1, experiments training a 16-4 ResNet on CIFAR-10 compare three possible SGD schedules: ", "* increasing batch size * decaying learning rate * hybrid (increasing batch size and decaying learning rate) ", "Fig. 2, 3 and 4 show that across a range of SGD variants (+/- momentum, etc) these three schedules have similar error vs. epoch curves. ", "This is the core claimed contribution: empirical evidence that these strategies are \"equivalent\".", "In Section 5.3, experiments look at Inception-ResNet-V2 on ImageNet, ", "showing the proposed approach can reach comparable accuracies to previous work at even fewer parameter updates (2500 here, vs. \u223c14000 for Goyal et al 2007)"], "labels": ["evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact"]}
{"doc_id": "rkZd9y9xz", "text": ["The paper proposes a novel way of compressing gradient updates for distributed SGD, in order to speed up overall execution.", "While the technique is novel as far as I know (eq. (1) in particular),", "many details in the paper are poorly explained (I am unable to understand)", "and experimental results do not demonstrate that the problem targeted is actually alleviated.", "More detailed remarks: 1: Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...", "4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.", "4.2: This section is not fully comprehensible to me.", "- It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper).", "What is \"maximum value of gradients in a matrix\"?", "Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.", "- Rounding: do you use deterministic or random rounding?", "Do you then again store the inaccuracy?", "- I don't understand definition of d.", "It seems you subtract logarithm of a gradient from a scalar.", "- In total, I really don't know what is the object that actually gets communicated,", "and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it.", "This section has to be thoroughly explained, perhaps with some illustrative examples.", "4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?", "4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so.", "Possible because the problems above.", "5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance.", "6: Given that I don't understand how you compress the information being communicated, it is hard to believe the utility of the method.", "The objective was to speed up training time because communication is bottleneck.", "If you provide 12,000x compression, is it any more practically useful than providing 120x compression?", "What would be the difference in runtime?", "Such questions are never discussed.", "Further, if in the implementation you discuss masking mantissa,", "I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code.", "I think the soundness of work addressing this particular problem is damaged if not implemented properly (compared to other kinds of works in current ML related research).", "Therefore I highly recommend including proper time comparison with a baseline in the future.", "Further, I don't understand 2 things about the Tables.", "a) how do you combine the proposed method with Momentum in SGD?", "This is not discussed as far as I can see.", "b) What is \"QSGD, 2bit\"", "If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters."], "labels": ["fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "fact", 
"request", "evaluation"]}
{"doc_id": "ryD53e9xG", "text": ["This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques.", "Overall, the evaluation concludes with recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized against the initial net with an L2 penalty during further training.", "Relationship to prior work: Regularizing a target model against a source model is not a new idea.", "The authors miss key connections to A-SVM [1] and PMT-SVM [2] -- two proposed transfer learning models applied to SVM weights, but otherwise very much the same as the proposed solution in this paper.", "Though the study here may offer new insights for deep nets,", "it is critical to mention prior work which also does analysis of these regularization techniques.", "Significance: As the majority of visual recognition problems are currently solved using variants of fine-tuning,", "if the findings reported in this paper generalize, then it could present a simple new regularization which improves the training of new models.", "The change is both conceptually simple and easy to implement so could be quickly integrated by many people.", "Clarity and Questions: The purpose of the paper is clear,", "however, some questions remain unanswered.", "1) How is the regularization weight of 0.01 chosen?", "This is likely a critical parameter.", "In an experimental paper, I would expect to see a plot of performance for at least one experiment as this regularization weighting parameter is varied.", "2) How does the use of L2 regularization on the last layer effect the regularization choice of other layers?", "What happens if you use no regularization on the last layer?", "L1 regularization?", "3) Figure 1 is difficult to read.", "Please at least label the test sets on each sub-graph.", "4) There seems to be some issue with the freezing experiment in Figure 2.", "Why does performance of L2 regularization improve as you freeze more and more layers, but is outperformed by un-freezing all.", "5) Figure 3 and the discussion of linear dependence with the original model in general seems does not add much to the paper.", "It is clear that regularizing against the source model weights instead of 0 should result in final weights that are more similar to the initial source weights.", "I would rather the authors use this space to provide a deeper analysis of why this property should help performance.", "6) Initializing with a source model offers a strong starting point so full from scratch learning isn\u2019t necessary -- meaning fewer examples are needed for the continued learning (fine-tuning) phase.", "In a similar line of reasoning, does regularizing against the source further reduce the number of labeled points needed for fine-tuning?", "Can you recover L2 fine-tuning performance with fewer examples when you use L2-SP?", "[1] J. Yang, R. Yan, and A. Hauptmann. Adapting svm classifiers to data with shifted distributions. In ICDM Workshops, 2007.", "[2] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011."], "labels": ["fact", "fact", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "request", "request", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "request", "request", "reference", "reference"]}
{"doc_id": "H1JAev9gz", "text": ["The authors present 3 architectures for learning representations of programs from execution traces. ", "In the variable trace embedding, the input to the model is given by a sequence of variable values. ", "The state trace embedding combines embeddings for variable traces using a second recurrent encoder. ", "The dependency enforcement embedding performs element-wise multiplication of embeddings for parent variables to compute the input of the GRU to compute the new hidden state of a variable. ", "The authors evaluate their architectures on the task of predicting error patterns for programming assignments from Microsoft DEV204.1X (an introduction to C# offered on edx) and problems on the Microsoft CodeHunt platform. ", "They additionally use their embeddings to decrease the search time for the Sarfgen program repair system.", "This is a fairly strong paper. ", "The proposed models make sense ", "and the writing is for the most part clear, ", "though there are a few places where ambiguity arises:", "- The variable \"Evidence\" in equation (4) is never defined. ", "- The authors refer to \"predicting the error patterns\", ", "but again don't define what an error pattern is. ", "The appendix seems to suggest that the authors are simply performing multilabel classification based on a predefined set of classes of errors, ", "is this correct? ", "- It is not immediately clear from Figures 3 and 4 that the architectures employed are in fact recurrent.", "- Figure 5 seems to suggest that dependencies are only enforced at points in a program where assignment is performed for a variable, ", "is this correct?", "Assuming that the authors can address these clarity issues, I would in principle be happy for the paper to appear."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "evaluation"]}
{"doc_id": "BkH22ZFxG", "text": ["This paper proposes two regularization terms to encourage learning disentangled representations. ", "One term is applied to weight parameters of a layer just like weight decay. ", "The other is applied to the activations of the target layer (e.g., the penultimate layer). ", "The core part of both regularization terms is a compound hinge loss of which the input is the KL divergence between two softmax-normalized input arguments. ", "Experiments demonstrate the proposed regularization terms are helpful in learning representations which significantly facilitate clustering performance.", "Pros: (1) This paper is clearly written and easy to follow.", "(2) Authors proposed multiple variants of the regularization term which cover both supervised and unsupervised settings.", "(3) Authors did a variety of classification experiments ranging from time serials, image and text data.", "Cons: (1) The design choice of the compound hinge loss is a bit arbitrary. ", "KL divergence is a natural similarity measure for probability distribution. ", "However, it seems that authors use softmax to force the weights or the activations of neural networks to be probability distributions just for the purpose of using KL divergence. ", "Have you compared with other choices of similarity measure, e.g., cosine similarity? ", "I think the comparison as an additional experiment would help explain the design choice of the proposed function.", "(2) In the binary classification experiments, it is very strange to almost randomly group several different classes of images into the same category. ", "I would suggest authors look into datasets where the class hierarchy is already provided, e.g., ImageNet or a combination of several fine-grained image classification datasets.", "Additionally, I have the following questions: (1) I am curious how the proposed method compares to other competitors in terms of the original classification setting, e.g., 10-class classification accuracy on CIFAR10. ", "(2) What will happen for the multi-layer loss if the network architecture is very large such that you can not use large batch size, e.g., less than 10? ", "(3) In drawing figure 2 and 3, if the nonlinear activation function is not ReLU, how would you exam the same behavior? ", "Have you tried multi-class classification for the case \u201cwithout proposed loss component\u201d and does the similar pattern still happen or not?", "Some typos: (1) In introduction, \u201cwhen the cosine between the vectors 1\u201d should be \u201cwhen the cosine between the vectors is 1\u201d.", "(2) In section 4.3, \u201cwe used the DBPedia ontology dataset dataset\u201d should be \u201cwe used the DBPedia ontology dataset\u201d. ", "I would like to hear authors\u2019 feedback on the issues I raised."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "non-arg"]}
{"doc_id": "rksMwz9xG", "text": ["This paper presents a new reinforcement learning architecture called Reactor by combining various improvements in deep reinforcement learning algorithms and architectures into a single model.", "The main contributions of the paper are to achieve a better bias-variance trade-off in policy gradient updates, multi-step off-policy updates withdistributional RL, and prioritized experience replay for transition sequences.", "The different modules are integrated well and the empirical results are very promising.", "The experiments (though limited to Atari) are well carried out and the evaluation is performed on both sample efficiency and training time.", "Pros: 1. Nice integration of several recent improvements in deep RL, along with a few novel tricks to improve training.", "2. The empirical results on 57 Atari games are impressive, in terms of final scores as well as real-time training speed.", "Cons: 1. Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames.", "While the reactor trains much faster, it does use more parallel compute,", "so the comparison with Rainbow on wall clock time is not entirely fair.", "Would a distributed version of Rainbow perform better in this respect?", "2. Empirical comparisons are restricted to the Atari domain.", "The conclusions of the paper will be much stronger if results are also shown on other environments like Mujoco/Vizdoom/Deepmind Lab.", "3. Since the paper introduces a few new ideas like prioritized sequence replay,", "it would help if a more detailed analysis was performed on the impact of these individual schemes, even if in a model simpler than the Reactor.", "For instance, one could investigate the impact of prioritized sequence replay in models like multi-step DQN or recurrent DQN.", "This will help us understand the impact of each of these ideas in a more comprehensive fashion."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "request", "request", "evaluation"]}
{"doc_id": "B1qbgHcxz", "text": ["Summary:This paper proposes a simple recipe to preserve proximity to zero mean for activations in deep neural networks.", "The proposal is to replace the non-linearity in half of the units in each layer with its \"bipolar\" version --", "one that is obtained by flipping the function on both axes.", "The technique is tested on deep stacks of recurrent layers, and on convolutional networks with depth of 28, showing that improved results over the baseline networks are obtained.", "Clarity: The paper is easy to read.", "The plots in Fig. 2 and the appendix are quite helpful in improving presentation.", "The experimental setups are explained in detail.", "Quality and significance: The main idea from this paper is simple and intuitive.", "However, the experiments to support the idea do not seem to match the motivation of the paper.", "As stated in the beginning of the paper, the motivation behind having close to zero mean activations is that this is expected to speed up training using gradient descent.", "However, the presented results focus on the performance on held-out data instead of improvements in training speed.", "This is especially the case for the RNN experiments.", "For the CIFAR-10 experiment, the training loss curves do show faster initial progress in learning.", "However, it is unclear that overall training time can be reduced with the help of this technique.", "To evaluate this speed up effect, the dependence on the choice of learning rate and other hyperparameters should also be considered.", "Nevertheless, it is interesting to note the result that the proposed approach converts a deep network that does not train into one which does in many cases.", "The method appears to improve the training for moderately deep convolutional networks without batch normalization", "(although this is tested on a single dataset),", "but is not practically useful yet", "since the regularization benefits of Batch Normalization are also taken away."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact"]}
{"doc_id": "SkaNEl9xM", "text": ["Summary of paper and review: The paper presents the instability issue of training GANs for semi-supervised learning. ", "Then, they propose to essentially utilize a wgan for semi-supervised learning. ", "The novelty of the paper is minor, ", "since similar approaches have been done before. ", "The analysis is poor, ", "the text seems to contain mistakes, ", "and the results don't seem to indicate any advantage or promise of the proposed algorithm.", "Detailed comments: - Unless I'm grossly mistaken the loss function (2) is clearly wrong. ", "There is a cross-entropy term used by Salimans et al. clearly missing.", "- As well, if equation (4) is referring to feature matching, the expectation should be inside the norm and not outside ", "(this amounts to matching random specific random fake examples to specific random real examples, an imbalanced form of MMD).", "- Theorem 2.1 is an almost literal rewrite of Theorem 2.4 of [1], without proper attribution. ", "Furthermore, Theorem 2.1 is not sufficient to demonstrate existence of this issues. ", "This is why [1] provides an extensive batch of targeted experiments to verify this assumptions. ", "Analogous experiments are clearly missing. ", "A detailed analysis of these assumptions and its implications are missing.", "- In section 3, the authors propose a minor variation of the Improved GAN approach by using a wgan on the unsupervised part of the loss. ", "Remarkably similar algorithms (where the two discriminators are two separate heads) to this have been done before (see for example, [2], but other approaches exist after that, see for examples papers citing [2]).", "- Theorem 3.1 is a trivial consequence of Theorem 3 from WGAN.", "- The experiments leave much to be desired. ", "It is widely known that MNIST is a bad benchmark at this point, ", "and that no signal can be established from a minor success in this dataset. ", "Furthermore, the results in CIFAR don't seem to bring any advantage, considering the .1% difference in accuracy is 1/100 of chance in this dataset.", "[1]: Arjovsky & Bottou, Towards Principled Methods for Training Generative Adversarial Networks, ICLR 2017", "[2]: Mroueh & Sercu, Goel, McGan: Mean and Covariance Feature Matching GAN, ICML 2017"], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "reference", "reference"]}
{"doc_id": "r1Q8qCdgf", "text": ["The authors investigate different message passing schedules for GNN learning. ", "Their proposed approach is to partition the graph into disjoint subregions, pass many messages on the sub regions and pass fewer messages between regions (an approach that is already considered in related literature, e.g., the BP literature), with the goal of minimizing the number of messages that need to be passed to convey information between all pairs of nodes in the network. ", "Experimentally, the proposed approach seems to perform comparably to existing methods (or slightly worse on average in some settings). ", "The paper is well-written and easy to read. ", "My primary concern is with novelty. ", "Many similar ideas have been floating around in a variety of different message-passing communities. ", "With no theoretical reason to prefer the proposed approach, it seems like it may be of limited interest to the community if speed is its only benefit (see detailed comments below).", "Specific comments:1) \"When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved.\"", "Perhaps it is my misunderstanding of the way in which GNNs work, but isn't the objective actually to reach a set of fixed point equations. ", "If so, then simply propagating information from one side of the graph may not be sufficient.", "2) The experimental results in Section 4.4 are almost impossible to interpret. ", "Perhaps it is better to plot number of edges updated versus accuracy? ", "This at least would put them on equal footing. ", "In addition, the experiments that use randomness should be repeated and plotted on average (just in case you happened to pick a bad schedule).", "3) More generally, why not consider random schedules (i.e., just pick a random edge, update, repeat) or random partitions? ", "I'm not certain that a fixed set will perform best independent of the types of updates being considered, and random schedules, like the fully synchronous case for an important baseline (especially if update speed is all you care about).", "Typos: -pg. 6, \"Thm. 2\" -> \"Table 2\""], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "evaluation", "request", "fact", "request", "request", "evaluation", "request"]}
{"doc_id": "Hy0xrQegf", "text": ["This paper's main thesis is that automatic metrics like BLEU, ROUGE, or METEOR is suitable for task-oriented natural language generation (NLG). ", "In particular, the paper presents a counterargument to \"How NOT To Evaluate Your Dialogue System...\" ", "where Wei et al argue that automatic metrics are not correlated or only weakly correlated with human eval on dialogue generation. ", "The authors here show that the performance of various NN models as measured by automatic metrics like BLEU and METEOR is correlated with human eval.", "Overall, this paper presents a useful conclusion: use METEOR for evaluating task oriented NLG. ", "However, there isn't enough novel contribution in this paper to warrant a publication. ", "Many of the details unnecessary: ", "1) various LSTM model descriptions are unhelpful ", "given the base LSTM model does just as well on the presented tasks ", "2) Many embedding based eval methods are proposed ", "but no conclusions are drawn from any of these techniques."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact"]}
{"doc_id": "BJHwtGogM", "text": ["The paper proposes data augmentation as an alternative to commonly used regularisation techniques like weight decay and dropout, and shows for a few reference models / tasks that the same generalization performance can be achieved using only data augmentation.", "I think it's a great idea to investigate the effects of data augmentation more thoroughly.", "While it is a technique that is often used in literature,", "there hasn't really been any work that provides rigorous comparisons with alternative approaches and insights into its inner workings.", "Unfortunately I feel that this paper falls short of achieving this.", "Experiments are conducted on two fairly similar tasks (image classification on CIFAR-10 and CIFAR-100), with two different network architectures.", "This is a bit meager to be able to draw general conclusions about the properties of data augmentation.", "Given that this work tries to provide insight into an existing common practice,", "I think it is fair to expect a much stronger experimental section.", "In section 2.1.1 it is stated that this was a conscious choice because simplicity would lead to clearer conclusions,", "but I think the conclusions would be much more valuable if variety was the objective instead of simplicity, and if larger-scale tasks were also considered.", "Another concern is that the narrative of the paper pits augmentation against all other regularisation techniques, whereas more typically these will be used in conjunction.", "It is however very interesting that some of the results show that augmentation alone can sometimes be enough.", "I think extending the analysis to larger datasets such as ImageNet, as is suggested at the end of section 3, and probably also to different problems than image classification, is going to be essential to ensure that the conclusions drawn hold weight.", "Comments:- The distinction between \"explicit\" and \"implicit\" regularisation is never clearly enunciated.", "A bunch of examples are given for both,", "but I found it tricky to understand the difference from those.", "Initially I thought it reflected the intention behind the use of a given technique;", "i.e. weight decay is explicit because clearly regularisation is its primary purpose --", "whereas batch normalisation is implicit because its regularisation properties are actually a side effect.", "However, the paper then goes on to treat data augmentation as distinct from other explicit regularisation techniques,", "so I guess this is not the intended meaning.", "Please clarify this, as the terms crop up quite often throughout the paper.", "I suspect that the distinction is somewhat arbitrary and not that meaningful.", "- In the abstract, it is already implied that data augmentation is superior to certain other regularisation techniques because it doesn't actually reduce the capacity of the model.", "But this ignores the fact that some of the model's excess capacity will be used to model out-of-distribution data (w.r.t. 
the original training distribution) instead.", "Data augmentation always modifies the distribution of the training data.", "I don't think it makes sense to imply that this is always preferable over reducing model capacity explicitly.", "This claim is referred to a few times throughout the work.", "- It could be more clearly stated that the reason for the regularising effect of batch normalisation is the noise in the batch estimates for mean and variance.", "- Some parts of the introduction could be removed", "because they are obvious, at least to an ICLR audience (like \"the model would not be regularised if alpha (the regularisation parameter) equals 0\").", "- The experiments with smaller dataset sizes would be more interesting if smaller percentages were used.", "50% / 80% / 100% are all on the same order of magnitude", "and this setting is not very realistic.", "In practice, when a dataset is \"too small\" to be able to train a network that solves a problem reliably, it will generally be one or more orders of magnitude too small, not 2x too small.", "- The choices of hyperparameters for \"light\" and \"heavy\" motivation seem somewhat arbitrary and are not well motivated.", "Some parameters which are sampled uniformly at random should be probably be sampled log-uniformly instead,", "because they represent scale factors.", "It should also be noted that much more extreme augmentation strategies have been used for this particular task in literature, in combination with padding (for example by Graham).", "It would be interesting to include this setting in the experiments as well.", "- On page 7 it is stated that \"when combined with explicit regularization, the results are much worse than without it\",", "but these results are omitted from the table.", "This is unfortunate", "because it is a very interesting observation, that runs counter to the common practice of combining all these regularisation techniques together (e.g. L2 + dropout + data augmentation is a common combination).", "Delving deeper into this could make the paper a lot stronger.", "- It is not entirely true that augmentation parameters depend only on the training data and not the architecture (last paragraph of section 2.4).", "Clearly more elaborate architectures benefit more from data augmentation, and might need heavier augmentation to perform optimally", "because they are more prone to overfitting", "(this is in fact stated earlier on in the paper as well).", "It is of course true that these hyperparameters tend to be much more robust to architecture changes than those of other regularisation techniques such as dropout and weight decay.", "This increased robustness is definitely useful", "and I think this is also adequately demonstrated in the experiments.", "- Phrases like \"implicit regularization operates more effectively at capturing reality\" are too vague to be meaningful.", "- Note that weight decay has also been found to have side effects related to optimization", "(e.g. 
in \"Imagenet classification with deep convolutional neural networks\", Krizhevsky et al.)"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "request", "fact", "fact", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "reference"]}
{"doc_id": "BkYfM_Rgz", "text": ["It is clear that the problem studied in this paper is interesting. ", "However, after reading through the manuscript, it is not clear to me what are the real contributions made in this paper.", " I also failed to find any rigorous results on generalization bounds. ", "In this case, I cannot recommend the acceptance of this paper."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "SkYNcg5xz", "text": ["The paper addresses the problem of learners forgetting rare states and revisiting catastrophic danger states. ", "The authors propose to train a predictive \u2018fear model\u2019 that penalizes states that lead to catastrophes. ", "The proposed technique is validated both empirically and theoretically. ", "Experiments show a clear advantage during learning when compared with a vanilla DQN. ", "Nonetheless, there are some criticisms than can be made of both the method and the evaluations:", "The fear radius threshold k_r seems to add yet another hyperparameter that needs tuning. ", "Judging from the description of the experiments this parameter is important to the performance of the method and needs to be set experimentally. ", "There seems to be no way of a priori determine a good distance ", "as there is no way to know in advance when a catastrophe becomes unavoidable. ", "No empirical results on the effect of the parameter are given.", "The experimental results support the claim that this technique helps to avoid catastrophic states during initial learning.", "The paper however, also claims to address the longer term problem of revisiting these states once the learner forgets about them, ", "since they are no longer part of the data generated by (close to) optimal policies. ", "This problem does not seem to be really solved by this method. ", "Danger and safe state replay memories are kept, but are only used to train the catastrophe classifier. ", "While the catastrophe classifier can be seen as an additional external memory, ", "it seems that the learner will still drift away from the optimal policy and then need to be reminded by the classifier through penalties. ", "As such the method wouldn\u2019t prevent catastrophic forgetting, ", "it would just prevent the worst consequences by penalizing the agent before it reaches a danger state. ", "It would therefore be interesting to see some long running experiments and analyse how often catastrophic states (or those close to them) are visited. ", "Overall, the current evaluations focus on performance and give little insight into the behaviour of the method. ", "The paper also does not compare to any other techniques that attempt to deal with catastrophic forgetting and/or the changing state distribution ([1,2]).", "In general the explanations in the paper often often use confusing and imprecise language, even in formal derivations, e.g. \u2018if the fear model reaches arbitrarily high accuracy\u2019 or \u2018if the probability is negligible\u2019.", "It is wasn\u2019t clear to me that the properties described in Theorem 1 actually hold. ", "The motivation in the appendix is very informal and no clear derivation is provided. ", "The authors seem to indicate that a minimal return can be guaranteed because the optimal policy spends a maximum of epsilon amount of time in the catastrophic states and the alternative policy simply avoids these states. ", "However, as the alternative policy is learnt on a different reward, ", "it can have a very different state distribution, even for the non-catastrophics states. ", "It might attach all its weight to a very poor reward state in an effort to avoid the catastrophe penalty. 
", "It is therefore not clear to me that any claims can be made about its performance without additional assumptions.", "It seems that one could construct a counterexample using a 3-state chain problem (no_reward,danger, goal) where the only way to get to the single goal state is to incur a small risk of visiting the danger state. ", "Any optimal policy would therefore need to spend some time e in the danger state, on average. ", "A policy that learns to avoid the danger state would then also be unable to reach the goal state and receive rewards. ", "E.g pi* has stationary distribution (0,e,1-e) and return 0*0+e*Rmin + (1-e)*Rmax. ", "By adding a sufficiently high penalty, policy pi~ can learn to avoid the catastrophic state with distribution (1,0,0) and then gets return 1*0+ 0*Rmin+0*Rmax= 0 < n*_M - e (Rmax - Rmin) = e*Rmin + (1-e)*Rmax - e (Rmax - Rmin). ", "This seems to contradict the theorem. ", "It wasn\u2019t clear what assumptions the authors make to exclude situations like this.", "[1] T. de Bruin, J. Kober, K. Tuyls and R. Babu\u0161ka, \"Improved deep reinforcement learning for robotics through distribution-based experience retention,\" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 3947-3952.", "[2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "reference", "reference"]}
{"doc_id": "H1kAEtYlz", "text": ["The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. ", "It should be noted that the framework does not allow to compute a vectorial representation of a set of nodes not already given as an hyperedge. ", "A set of methods is presented : the first one is based on an auto-encoder technique ; ", "the second one is based on tensor decomposition ; ", "the third one derives from sentence embedding methods. ", "The fourth one extends over node embedding techniques ", "and the last one use spectral methods. ", "The two first methods use plainly the set structure of hyperedges. ", "Experimental results are provided on semi-supervised regression tasks. ", "They show very similar performance for all methods and variants. ", "Also run-times are compared ", "and the results are expected. ", "In conclusion, the paper gives an overview of methods for computing hypernode embeddings. ", "This is interesting in its own. ", "Nevertheless, as the target problem on hypergraphs is left unspecified, ", "it is difficult to infer conclusions from the study. ", "Therefore, I am not convinced that the paper should be published in ICLR'18.", "* typos * Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al", "* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. ", "A graph N_a is defined but not used in the paper.", "* Section 3.1. the procedure for sampling hyperedges in the lattice shoud be given. ", "At least, you should explain how it is made efficient when the number of nodes is large.", "* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. ", "This is discussed in Section 3.6 ", "but the discussion is not convincing enough.", "* Section 3.3 The term Sen2vec is not common knowledge", "* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. ", "How can you deal with large k ?", "* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. ", "I.e. there should be mentions on the clique expansion and star expansion of hypergraphs. ", "This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.", "* Section 4.1. Only hyperedeges of cardinality in [2,6] are considered. ", "This seems a rather strong limitation ", "and this hypothesis does not seem pertinent in many applications. ", "* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. ", "One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.", "* Section 5. Seems redundant with the Introduction."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "fact", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation"]}
{"doc_id": "rkFUZ2uxf", "text": ["The authors introduce a set of very simple tasks that are meant to illustrate the challenges of learning visual relations.", "They then evaluate several existing network architectures on these tasks,", "and show that results are not as impressive as others might have assumed they would be.", "They show that while recent approaches (e.g. relational networks) can generalize reasonably well on some tasks, these results do not generalize as well to held-out-object scenarios as might have been assumed.", "Clarity: The paper is fairly clearly written.", "I think I mostly followed it.", "Quality: I'm intrigued by but a little uncomfortable with the generalization metrics that the authors use.", "The authors estimate the performance of algorithms by how well they generalize to new image scenarios when trained on other image conditions.", "The authors state that \". . . the effectiveness of an architecture to learn visual-relation problems should be measured in terms of generalization over multiple variants of the same problem, not over multiple splits of the same dataset.\"", "Taken literally, this would rule out a lot of modern machine learning, even obviously very good work.", "On the other hand, it's clear that at some point, generalization needs to occur in testing ability to understand relationships.", "I'm a little worried that it's \"in the eye of the beholder\" whether a given generalization should be expected to work or not.", "There are essentially three scenarios of generalization discussed in the paper: (a) various generalizations of image parameters in the PSVRT dataset (b) various hold-outs of the image parameters in the sort-of-CLEVR dataset (c) from sort-of-CLEVR \"objects\" to PSVRT bit patterns", "The result that existing architectures didn't do very well at these generalizations (especially b and c) *may* be important -- or it may not.", "Perhaps if CNN+RN were trained on a quite rich real-world training set with a variety of real-world three-D objects beyond those shown in sort-of-CLEVR, it would generalize to most other situations that might be encountered.", "After all, when we humans generalize to understanding relationships, exactly what variability is present in our \"training sets\" as compared to our \"testing\" situations?", "How do the authors know that humans are effectively generalizing rather than just \"interpolating\" within their (very rich) training set?", "It's not totally clear to me that if totally naive humans (who had never seen spatial relationships before) were evaluated on exactly the training/testing scenarios described above, that they would generalize particularly well either.", "I don't think it can just be assumed a priori that humans would be super good this form of generalization.", "So how should authors handle this criticism?", "What would be useful would either be some form of positive control.", "Either human training data showing very effective generalization (if one could somehow make \"novel\" relationships unfamiliar to humans), or a different network architecture that was obviously superior in generalization to CNN+RN.", "If such were present, I'd rate this paper significantly higher.", "Also, I can't tell if I really fully believe the results of this paper.", "I don't doubt that the authors saw the results they report.", "However, I think there's some chance that if the same tasks were in the hands of people who *wanted* CNNs or CNN+RN to work well, the results might have been different.", "I can't point 
to exactly what would have to be different to make things \"work\",", "because it's really hard to do that ahead of actually trying to do the work.", "However, this suspicion on my part is actually a reason I think it might be *good* for this paper to be published at ICLR.", "This will give the people working on (e.g.) CNN+RN somewhat more incentive to try out the current paper's benchmarks and either improve their architecture or show that the the existing one would have totally worked if only tried correctly.", "I myself am very curious about what would happen and would love to see this exchange catalyzed.", "Originality and Significance: The area of relation extraction seems to me to be very important and probably a bit less intensively worked on that it should be.", "However, as the authors here note, there's been some recent work (e.g. Santoro 2017) in the area.", "I think that the introduction of baselines benchmark challenge datasets such as the ones the authors describe here is very useful, and is a somewhat novel contribution."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "HJ3OcT3gG", "text": ["In this paper, the authors propose a novel method for generating adversarial examples when the model is a black-box and we only have access to its decisions (and a positive example). ", "It iteratively takes steps along the decision boundary while trying to minimize the distance to the original positive example.", "Pros:- Novel method that works under much stricter and more realistic assumptions.", "- Fairly thorough evaluation.", "- The paper is clearly written.", "Cons:- Need a fair number of calls to generate a small perturbation. ", "Would like to see more analysis of this.", "- Attack works for making something outside the boundary (not X), ", "but is less clear how to generate image to meet a specific classification (X). ", "3.2 attempts this slightly by using an image in the class, ", "but is less clear for something like FaceID.", "- Unclear how often the images generated look reasonable. ", "Do different random initializations given different quality examples?"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "request"]}
{"doc_id": "Bk9Z3ZQlG", "text": ["The paper proposes LSD-NET, an active vision method for object classification. ", "In the proposed method, based on a given view of an object, the algorithm can decide to either classify the object or to take a discrete action step which will move the camera in order to acquire a different view of the object. ", "Following this procedure the algorithm iteratively moves around the object until reaching a maximum number of allowed moves or until a object view favorable for classification is reached.", "The main contribution of the paper is a hierarchical action space that distinguishes between camera-movement actions and classification actions. ", "At the top-level of the hierarchy, the algorithm decides whether to perform a movement or a classification -type action. ", "At the lower-level, the algorithm either assign a specific class label (for the case of classification actions) or performs a camera movement (for the case of camera-movement actions). ", "This hierarchical action space results in reduced bias towards classification actions.", "Strong Points - The content is clear and easy to follow.", "- The proposed method achieves competitive performance w.r.t. existing work.", "Weak Points- Some aspects of the proposed method could have been evaluated better.", "- A deeper evaluation/analysis of the proposed method is missing.", "Overall the proposed method is sound and the paper has a good flow and is easy to follow. ", "The proposed method achieves competitive results, ", "and up to some extent, shows why it is important to have the proposed hierarchical action space.", "My main concerns with this manuscript are the following:", "In some of the tables a LSTM variant? of the proposed method is mentioned. ", "However it is never introduced properly in the text. ", "Can you indicate how this LSTM-based method differs from the proposed method?", "At the end of Section 5.2 the manuscript states: \"In comparison to other methods, our method is agnostic of the starting point i.e. it can start randomly on any image and it would get similar testing accuracies.\" ", "This suggests that the method has been evaluated over different trials considering different random initializations. ", "However, this is unclear based on the evaluation protocol presented in Section 5. ", "If this is not the case, perhaps this is an experiment that should be conducted.", "In Section 3.2 it is mentioned that different from typical deep reinforcement learning methods, the proposed method uses a deeper AlexNet-like network. ", "In this context, it would be useful to drop a comment on the computation costs added in training/testing by this deeper model.", "Table 3 shows the number of correctly and wrongly classified objects as a function of the number of steps taken. ", "Here we can notice that around 50% of the objects are in the step 1 and 12, ", "which as correctly indicated by the manuscript, suggests that movement does not help for those cases. ", "Would it be possible to have more class-specific (or classes grouped into intermediate categories) visualization of the results? ", "This would provide a better insight of what is going on and when exactly actions related to camera movements really help to get better classification performance. ", "On the presentation side, I would recommend displaying the content of Table 3 in a plot. ", "This may display the trends more clearly. 
", "Moreover, I would recommend to visualize the classification accuracy as a function of the step taken by the method. ", "In this regard, a deeper analysis of the effect of the proposed hierarchical action space is a must.", "I would encourage the authors to address the concerns raised on my review."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "evaluation", "request", "quote", "evaluation", "evaluation", "request", "fact", "request", "fact", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "request"]}
{"doc_id": "rydWCNKxz", "text": ["Summary: This paper empirically studies adversarial perturbations dx and what the effects are of adversarial training (AT) with respect to shared (dx fools for many x) and singular (only for a single x) perturbations.", "Experiments use a (previously published) iterative fast-gradient-sign-method and use a Resnet on CIFAR.", "The authors conclude that in this experimental setting: - AT seems to defend models against shared dx's.", "- This is visible on universal perturbations,", "which become less effective as more AT is applied.", "- AT decreases the effectiveness of adversarial perturbations, e.g. AT decreases the number of adversarial perturbations that fool both an input x and x with e.g. a contrast change.", "- Singular perturbations are easily detected by a detector model,", "as such perturbations don't change much when applying AT.", "Pro:- Paper addresses an important problem: qualitative / quantitative understanding of the behavior of adversarial perturbations is still lacking.", "- The visualizations of universal perturbations as they change during AT are nice.", "- The basic observation wrt the behavior of AT is clearly communicated.", "Con:- The experiments performed are interesting directions, although unfocused and rather limited in scope.", "For instance, does the same phenomenon happen for different datasets?", "Different models?", "- What happens when we use adversarial attacks different from FGSM?", "Do we get similar results?", "- The papers lacks a more in-depth theoretical analysis.", "Is there a principled reason AT+FGSM defends against universal perturbations?", "Overall:- As is, it seems to me the paper lacks a significant central message (due to limited and unfocused experiments) or significant new theoretical insight into the effect of AT.", "A number of questions addressed are interesting starting points towards a deeper understanding of *how* the observations can be explained and more rigorous empirical investigations.", "Detailed: -"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "non-arg"]}
{"doc_id": "SJzxBpKeM", "text": ["SUMMARY: This work is about learning the validity of a sequences in specific application domains like SMILES strings for chemical compounds. ", "In particular, the main emphasis is on predicting if a prefix sequence could possibly be extended to a complete valid sequence. ", "In other words, one tries to predict if there exists a valid suffix sequence, and based on these predictions, the goal is to train a generative model that always produces valid sequences. ", "In the proposed reinforcement learning setting, a neural network models the probability that a certain action (adding a symbol) will result in a valid full sequence. ", "For training the network, a large set of (validity-)labelled sequences would be needed. ", "To overcome this problem, the authors introduce an active learning strategy, where the information gain is re-expressed as the conditional mutual information between the the label y and the network weights w, and this mutual information is maximized in a greedy sequential manner. ", "EVALUATION: CLARITY & NOVELTY: In principle, the paper is easy to read. ", "Unfortunately, however, for the reader is is not easy to find out what the authors consider their most relevant contribution. ", "Every single part of the model seems to be quite standard (basically a network that predicts the probability of a valid sequence and an information-gain based active learning strategy) ", "- so is the specific application to SMILES strings what makes the difference here? ", "Or is is the specific greedy approximation to the mutual information criterion in the active learning part? ", "Or is it the way how you augment the dataset? ", "All these aspects might be interesting, ", "but somehow I am missing a coherent picture.", "SIGNIFICANCE: it is not entirely clear to me if the proposed \"pruning\" strategy for the completion of prefix sequences can indeed be generally applied to sequence modelling problems, ", "because in more general domains it might be very difficult to come up with reasonable validity estimates for prefixes that are significantly shorter than the whole sequence. ", "I am not so familiar with SMILES strings ", "-- but could it be that the experimental success reported here is mainly a result of the very specific structure of valid SMILES strings? ", "But then, what can be learned for general sequence validation problems?"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request"]}
{"doc_id": "HJ75M8ogM", "text": ["The authors have addressed the problem of translating natural language queries to SQL queries. ", "They proposed a deep neural network based solution which combines the attention based neural semantic parser and pointer networks. ", "They also released a new dataset WikiSQL for the problem. ", "The proposed method outperforms the existing semantic parsing baselines on WikiSQL dataset.", "Pros:1. The idea of using pointer networks for reducing search space of generated queries is interesting. ", "Also, using extrinsic evaluation of generated queries handles the possibility of paraphrasing SQL queries.", "2. A new dataset for the problem.", "3. The experiments report a significant boost in the performance compared to the baseline. ", "The ablation study is helpful for understanding the contribution of different component of the proposed method.", "Cons:1. It would have been better to see performance of the proposed method in other datasets (wherever possible). ", "This is my main concern about the paper.", "2. Extrinsic evaluation can slow down the overall training. ", "Comparison of running times would have been helpful.", "3. More details about training procedure (specifically for the RL part) would have been better."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "request"]}
{"doc_id": "rkejdYtxz", "text": ["Summary:This work is about model evaluation for molecule generation and design. ", "19 benchmarks are proposed, small data sets are expanded to a large, standardized data set ", "and it is explored how to apply new RL techniques effectively for molecular design.", "on the positive side: The paper is well written, quality and clarity of the work are good. ", "The work provides a good overview about how to apply new reinforcement learning techniques for sequence generation. ", "It is investigated how several RL strategies perform on a large, standardized data set. ", "Different RL models like Hillclimb-MLE, PPO, GAN, A2C are investigated and discussed. ", "An implementation of 19 suggested benchmarks of relevance for de novo design will be provided as open source as an OpenAI Gym. ", "on the negative side: There is no new novel contribution on the methods side. ", "minor comments: Section 2.1. see Fig.2 \u2014> see Fig.1", "page 4just before equation 8: the the"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "request"]}
{"doc_id": "ByCeFUNgz", "text": ["The authors propose a new episodic reinforcement learning algorithm based on contextual bandit oracles.", "The key specificity of this algorithm is its ability to deal with the credit assignment problem by learning automatically a progressive \"reward shaping\" (the residual losses) from a feedback that is only provided at the end of the epochs.", "The paper is dense but well written.", "The theoretical grounding is a bit thin or hard to follow.", "The authors provide a few regret theoretical results (that I did not check deeply) obtained by reduction to \"value-aware\" contextual bandits.", "The experimental section is solid.", "The method is evaluated on several RL environments against state of the art RL algorithms.", "It is also evaluated on bandit structured prediction tasks.", "An interesting synthetic experiment (Figure 4) is also proposed to study the ability of the algorithm to work on both decomposable and non-decomposable structured prediction tasks.", "Question 1: The credit assignment approach you propose seems way more sophisticated than eligibility traces in TD learning.", "But sometimes old and simple methods are not that bad.", "Could you develop a bit on the relation between RESLOPE and eligibility traces ?", "Question 2: RESLOPE is built upon contextual bandits which require a stationary environment.", "Does RESLOPE inherit from this assumption?", "Typos: page 1 \"scalar loss that output.\" -> \"scalar loss.\"", "\", effectively a representation\" -> \". By effective we mean effective in term of credit assignment.\"", "page 5 \"and MTR\" -> \"and DR\"", "page 6 \"in simultaneously.\" -> ???", "\".In greedy\" -> \". In greedy\""], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request"]}
{"doc_id": "rkSREOYgM", "text": ["This paper proposes a device placement algorithm to place operations of tensorflow on devices. ", "Pros: 1. It is a novel approach which trains the placement end to end.", "2. The experiments are solid to demonstrate this method works very well.", "3. The writing is easy to follow.", "4. This would be a very useful tool for the community if open sourced.", "Cons: 1. It is not very clear in the paper whether the training happens for each model yielding separate agents, or a shared agent is trained and used for all kinds of models. ", "The latter would be more exciting. ", "The adjacency matrix varies size for different graphs, ", "so I guess a separate agent is trained for each graph? ", "However, if the agent is not shared, why not just use integer to represent each operation in the graph, ", "since overfitting would be more desirable in this case.", "2. Averaging the embedding is hard to understand especially for the output sizes and number of outputs.", "3. It is not clear how the adjacency information is used."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "request", "fact", "evaluation", "evaluation"]}
{"doc_id": "Bycjn6tef", "text": ["Paper summary: Existing works on multi-task neural networks typically use hand-tuned weights for weighing losses across different tasks.", "This work proposes a dynamic weight update scheme that updates weights for different task losses during training time by making use of the loss ratios of different tasks.", "Experiments on two different network indicate that the proposed scheme is better than using hand-tuned weights for multi-task neural networks.", "Paper Strengths:- The proposed technique seems simple yet effective for multi-task learning.", "- Experiments on two different network architectures showcasing the generality of the proposed method.", "Major Weaknesses:- The main weakness of this work is the unclear exposition of the proposed technique.", "Entire technique is explained in a short section-3.1 with many important details missing.", "There is no clear basis for the main equations 1 and 2.", "How does equation-2 follow from equation-1?", "Where is the expectation coming from?", "What exactly does \u2018F\u2019 refer to?", "There is dependency of \u2018F\u2019 on only one of sides in equations 1 and 2?", "More importantly, how does the gradient normalization relate to loss weight update?", "It is very difficult to decipher these details from the short descriptions given in the paper.", "- Also, several details are missing in toy experiments.", "What is the task here?", "What are input and output distributions and what is the relation between input and output?", "Are they just random noises?", "If so, is the network learning to overfit to the data as there is no relationship between input and output?", "Minor Weaknesses:- There are no training time comparisons between the proposed technique and the standard fixed loss learning.", "- Authors claim that they operate directly on the gradients inside the network.", "But, as far as I understood, the authors only update loss weights in this paper.", "Did authors also experiment with gradient normalization in the intermediate CNN layers?", "- No comparison with state-of-the-art techniques on the experimented tasks and datasets.", "Clarifications:- See the above mentioned issues with the exposition of the technique.", "- In the experiments, why are the input images downsampled to 320x320?", "- What does it mean by \u2018unofficial dataset\u2019 (page-4).", "Any references here?", "- Why is 'task normalized' test-time loss as good measure for comparison between models in the toy example (Section 4)?", "The loss ratios depend on initial loss,", "which is not important for the final performance of the system.", "Suggestions:- I strongly suggest the authors to clearly explain the proposed technique to get this into a publishable state.", "- The term \u2019GradNorm\u2019 seem to be not defined anywhere in the paper.", "Review Summary:Despite promising results, the proposed technique is quite unclear from the paper.", "With its poor exposition of the technique, it is difficult to recommend this paper for publication."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "evaluation", "fact", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "evaluation", "request", "fact", "evaluation", "evaluation"]}
{"doc_id": "HyWGBr5lf", "text": ["Summary: The authors proposed an unsupervised time series clustering methods built with deep neural networks.", "The proposed model is equipped with an encoder-decoder and a clustering model.", "First, the encoder employs CNN to shorten the time series and extract local temporal features,", "and the CNN is followed by bidirectional LSTMs to get the encoded representations.", "A temporal clustering model and a DCNN decoder are applied on the encoded representations and jointly trained.", "An additional heatmap generator component can be further included in the clustering model.", "The authors compared the proposed method with hierarchical clustering with 4 different temporal similarity methods on several univariate time series datasets.", "Detailed comments:The problem of unsupervised time series clustering is important and challenging.", "The idea of utilizing deep learning models to learn encoded representations for clustering is interesting and could be a promising solution.", "One potential limitation of the proposed method is that it is only designed for univariate time series of the same temporal length,", "which limits the usage of this model in practice.", "In addition, given that the input has fixed length, clustering baselines for static data can be easily applied", "and should be compared to demonstrate the necessity of temporal clustering.", "Some important details are missing or lack of explanations.", "For example, what is the size of each layer and the dimension of the encoded space?", "How much does the model shorten the input time series and how is this be determined?", "How does the model combine the heatmap output (which is a sequence of the same length as the time series) and the clustering output (which is a vector of size K) in Figure 1?", "The heatmap shown in Figure 3 looks like the negation of the decoded output (i.e., lower value in time series -> higher value in heatmap).", "How do we interpret the generated heatmap?", "From the experimental results, it is difficult to judge which method/metric is the best.", "For example, in Figure 4, all 4 DTC-methods achieved the best performance on one or two datasets.", "Though several datasets are evaluated in experiments, they are relatively small.", "Even the largest dataset (Phalanges OutlinesCorrect) has only 2 thousand samples,", "and the best performance is achieved by one of the baseline, with AUC score only 0.586 for binary classification.", "Minor suggestion: In Figure 3, instead of showing the decoded output (reconstruction), it may be more helpful to visualize the encoded time series", "since the clustering method is applied directly on those encoded representations."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "non-arg", "evaluation", "fact", "evaluation", "fact", "fact", "request", "fact"]}
{"doc_id": "Hk7vlKsxz", "text": ["Summary: The authors present a simple variation of vanilla recurrent neural networks, which use ReLU hiddens and a fixed identity matrix that is added to the hidden-to-hidden weight matrix. ", "This identity connection acts as a \u201csurrogate memory\u201d component, preserving hidden activations over time steps. ", "The experiments demonstrate that this architecture reliably solves the addition task for up to 400 input frames. ", "It also achieves a very good performance on sequential and permuted MNIST and achieves SOTA performance on bAbI.", "The authors observe that the proposed recurrent identity network (RIN) is relatively robust to hyperparameter choices. ", "After Le et al. (2015), the paper presents another convincing case for the application of ReLUs in RNNs.", "Review: I very much like the paper. ", "The motivation and architecture is presented very clearly ", "and I am happy to also see explorations of simpler recurrent architectures in parallel to research of gated architectures!", "I have a few comments and questions:1) Clarification: In Section 2.2, do you really mean bit-wise multiplication or element-wise? ", "If bit-wise, can you elaborate why? ", "I might have missed something.", "2) Why does the learning curve of the IRNN stop around epoch 270 in Figure 2c? ", "Also some curves in the appendix stop abruptly without visible explosions. ", "Were these experiments run until completion? ", "If so, would it be possible to plot the complete curves?", "3) I think for a fair comparison with LSTMs and IRNNs a limited hyperparameter search should be performed separately on all three architectures at least for the addition task. ", "Optimal hyperparameters are usually model-specific. ", "Admittedly, the authors mention that they do not intend to make claims about superior performance to LSTMs, ", "however the competitive performance of small RINs is mentioned a couple of times in the manuscript.", "Le et al. (2015) for instance perform a coarse grid search for each model.", "4) I wouldn't say that ResNets are Gated Neural Networks, ", "as the branches are just summed up. ", "There is no (multiplicative) gating as in Highway Networks.", "5) I think what enables the training of very deep networks or LSTMs on long sequences is the presence of a (close-to-)identity component in forward/backward propagation, not the gating. ", "The use of ReLU activations in IRNNs (with identity initialization of the hidden-to-hidden weights) and RINs (effectively initialized with identity plus some noise) makes the recurrence more linear than with squashing activation functions.", "6) Regarding the absence of gating in RINs: What is your intuition on how the model would perform in tasks for which conditional forgetting is useful. ", "Consider for example a task with long sequences, outputs at every time step and hidden activations not necessarily being encouraged to estimate last step hidden activations. ", "Would RINs readily learn to reset parts of the hidden state?", "7) Henaff et al. (2016) might be related, ", "as they are also looking into the addition task with long sequences.", "Overall, the presented idea is novel to the best of my knowledge ", "and the manuscript is well-written. ", "I would recommend it for acceptance, ", "but would like to see the above points addressed (especially 1-3 and some comments on 4-6). ", "After a revision I would consider to increase the score.", "References: Henaff, Mikael, Arthur Szlam, and Yann LeCun. 
\"Recurrent orthogonal networks and long-memory tasks.\" In International Conference on Machine Learning, pp. 2034-2042. 2016.", "Le, Quoc V., Navdeep Jaitly, and Geoffrey E. Hinton. \"A simple way to initialize recurrent networks of rectified linear units.\" arXiv preprint arXiv:1504.00941 (2015)."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "non-arg", "request", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "reference", "reference"]}
{"doc_id": "SJEDvEvez", "text": ["This reviewer has found the proposed approach quite compelling, ", "but the empirical validation requires significant improvements:", "1) you should include in your comparison Query-by- Bagging & Boosting, ", "which are two of the best out-of-the-box active learning strategies", "2) in your empirical validation you have (arbitrarily) split the 14 datasets in 7 training and testing ones, ", "but many questions are still unanswered:", " - would any 7-7 split work just as well (ie, cross-validate over the 14 domains)", " - do you what happens if you train on 1, 2, 3, 8, 10, or 13 domains? are the results significantly different? ", "OTHER COMMENTS: - p3: both images in Figure 1 are labeled Figure 1.a", "- p3: typo \"theis\" --> \"this\" ", "Abe & Mamitsuksa (ICML-1998). Query Learning Strategies Using Boosting and Bagging."], "labels": ["evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "request", "request", "fact", "request", "reference"]}
{"doc_id": "BkOfh_eWM", "text": ["The paper discusses the problem of optimizing neural networks with hard threshold and proposes a novel solution to it.", "The problem is of significance because in many applications one requires deep networks which uses reduced computation and limited energy.", "The authors frame the problem of optimizing such networks to fit the training data as a convex combinatorial problems.", "However since the complexity of such a problem is exponential, the authors propose a collection of heuristics/approximations to solve the problem.", "These include, a heuristic for setting the targets at each layer, using a soft hinge loss, mini-batch training and such.", "Using these modifications the authors propose an algorithm (Algorithm 2 in appendix) to train such models efficiently.", "They compare the performance of a bunch of models trained by their algorithm against the ones trained using straight-through-estimator (SSTE) on a couple of datasets, namely, CIFAR-10 and ImageNet.", "They show superiority of their algorithm over SSTE.", "I thought the paper is very well written and provides a really nice exposition of the problem of training deep networks with hard thresholds.", "The authors formulation of the problem as one of combinatorial optimization", "and proposing Algorithm 1 is also quite interesting.", "The results are moderately convincing in favor of the proposed approach.", "Though a disclaimer here is that I'm not 100% sure that SSTE is the state of the art for this problem.", "Overall i like the originality of the paper and feel that it has a potential of reasonable impact within the research community.", "There are a few flaws/weaknesses in the paper though, making it somewhat lose.", "- The authors start of by posing the problem as a clean combinatorial optimization problem and propose Algorithm 1.", "Realizing the limitations of the proposed algorithm, given the assumptions under which it was conceived in,", "the authors relax those assumptions in the couple of paragraphs before section 3.1", "and pretty much throw away all the nice guarantees, such as checks for feasibility, discussed earlier.", "- The result of this is another algorithm (I guess the main result of the paper), which is strangely presented in the appendix as opposed to the main text, which has no such guarantees.", "- There is no theoretical proof that the heuristic for setting the target is a good one, other than a rough intuition", "- The authors do not discuss at all the impact on generalization ability of the model trained using the proposed approach.", "The entire discussion revolves around fitting the training set and somehow magically everything seem to generalize and not overfit."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation"]}
{"doc_id": "Bk8udEEeM", "text": ["Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term.", "The model is then trained with CelebA under different parameters settings and results are analyzed.", "Quality and significance: This is quite a technical paper, written in a very compressed form and is a bit hard to follow.", "Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models.", "Clarity: I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented.", "Originality: Seems original.", "Pros: * Interesting energy formulation and variation over BEGAN", "Cons: * Not a clear paper", "* results are only partially motivated and analyzed"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ryVd3dFgf", "text": ["This paper proposes a method for parameter space noise in exploration.", "Rather than the \"baseline\" epsilon-greedy (that sometimes takes a single action at random)... this paper presents an method for perturbations to the policy.", "In some domains this can be a much better approach and this is supported by experimentation.", "There are several things to like about the paper:", "- Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) ", "and there are clearly some examples where this approach does much better.", "- The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper.", "- This is clearly a very practical and extensible idea... ", "the authors present good results on a whole suite of tasks.", "- The paper is clear and well written, ", "it has a narrative and the plots/experiments tend to back this up.", "- I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances).", "However, there are also a few things to be cautious of... and some of them serious:", "- At many points in the paper the claims are quite overstated. ", "Parameter noise on the policy won't necessarily get you efficient exploration... ", "and in some cases it can even be *worse* than epsilon-greedy... ", "if you just read this paper you might think that this was a truly general \"statistically efficient\" method for exploration (in the style of UCRL or even E^3/Rmax etc).", "- For instance, the example in 4.2 only works because the optimal solution is to go \"right\" in every timestep... ", "if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work... ", "By contrast, methods such as UCRL/PSRL and RLSVI https://arxiv.org/abs/1402.0635 *are* able to learn polynomially in this type of environment. ", "I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of \"deep exploration\" ", "and you should be clear that your parameter noise does *not* address this issue.", "- That said I think that the example in 4.2 is *great* to include... ", "you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration. ", "Essentially you perform a local exploration rule in parameter space... ", "and sometimes this is great ", "- but you should be careful to distinguish this type of method from other approaches. ", "This must be mentioned in section 4.2 \"does parameter space noise explore efficiently\" ", "because the answer you seem to imply is \"yes\" ... when the answer is clearly NOT IN GENERAL... but it can still be good sometimes ;D", "- The demarcation of \"RL\" and \"evolutionary strategies\" suggests a pretty poor understanding of the literature and associated concepts. ", "I can't really support the conclusion \"RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually\". ", "This sort of sentence is clearly wrong and for many separate reasons:", " - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! 
", "It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as \"RL\" and that's just really not a good way to think about it.", " - Parameter noise exploration can be *extremely* bad relative to efficient exploration methods ", "(see section 2.4.3 https://searchworks.stanford.edu/view/11891201)", "Overall, I like the paper, I like the algorithm ", "and I think it is a valuable contribution.", "I think the value in this paper comes from a practical/simple way to do policy randomization in deep RL.", "In some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy.", "However, I hope that you address some of the concerns I have raised in this review.", "You shouldn't claim such a universal revolution to exploration / RL / evolution ", "because I don't think that it's correct.", "Further, I don't think that clarifying that this method is *not* universal/general really hurts the paper... ", "you could just add a section in 4.2 pointing out that the \"chain\" example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform \"deep exploration\").", "I vote accept."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "evaluation"]}
{"doc_id": "BkcUX-5eG", "text": ["This paper investigates learning representations for the problem of nearest neighbor (NN) search by exploring various deep learning architectural choices.", "The crux of the paper is the connection between NN and the angles between the closest neighbors --", "the higher this angle, more data points need to be explored for finding the nearest one, and thus more computational expense.", "Thus, the paper proposes to learn a network that tries to reduce the angles between the inputs and the corresponding class vectors in a supervised framework using softmax cross-entropy loss.", "Three architectural choices are investigated,", "(i) controlling the norm of output layers of the CNN (using batch norm essentially),", "(ii) removing relu so that the outputs are well-distributed in both positive and negative orthants,", "and (iii) normalizing the class vectors.", "Experiments are given on multiMNIST and Sports 1M and show improvements.", "Pros: 1) The paper explores different architectural choices for the deep network to some depth and show extensive results.", "2) The results do demonstrate clearly the advantage of the various choices and is useful", "3) The theoretical connections between data angles and query times are quite interesting,", "Cons: 1) Unclear Problem Statement.", "I find the problem statement a bit vague.", "Standard NN search finds a data point in the database closest to a query under some distance metric.", "While, the current paper uses the cosine similarity as the distance, the deep framework is trained on class vectors using cross-entropy loss.", "I do not think class labels are usually assumed to be given in the standard definition of NN,", "and it is not clear to me how the proposed setup can accommodate NN without class labels.", "Thus as such, I see this paper is perhaps proposing a classification problem and not an NN problem per se.", "2) Lacks Focus The paper lacks a good organization in my opinion.", "Things that are perhaps technically important are moved to the Appendix.", "For example, I find the theoretical part of the paper (e.g., Theorem 1) quite elegant and perhaps the main innovation in this paper.", "However, that is moved completely to the Appendix.", "So it cannot be really considered a contribution.", "It is also not clear if those theoretical results are novel.", "2) Disconnect/Unclear Assumptions There seems to be some disconnect between LSH and deep learning architectures explored in Sections 2 and 3 respectively.", "Are the assumptions used in the theoretical results for LSH also assumed in the deep networks?", "For example, as far as I know, the standard LSH works assumes the projection hyperplanes are randomly chosen and the theoretical results are based on such assumptions.", "It is not clear how a softmax output of a CNN, which is trained in a supervised way, follow such assumptions.", "It would be important if the paper could clarify such assumptions to make sure the sections are congruent.", "3) No Related Work", "There have been several efforts for adapting deep frameworks into KNN.", "The paper ignores all such works.", "Thus, it is not clear how significant is the proposed contribution.", "There are also not comparisons what-so-ever to competitive prior works.", "4) Novelty The main contribution of this paper is basically a set of experiments looking into architectural choices.", "However, the results of this study do not provide any surprises.", "It appears that batch normalization is essential for good 
performances,", "while using RELU is not so when one wants to use all directions for effective data encoding.", "Thus, as such, the novelty or the contributions of this paper are minor.", "Overall, while I find there are some interesting theoretical bits in this paper,", "it lacks focus,", "the experiments do not offer any surprises,", "and there are no comparisons with prior literature.", "Thus, I do not think this paper is ready to be accepted in its present form."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation"]}
{"doc_id": "HJyXsRtef", "text": ["This paper presents a new approach to determining what to measure and when to measure it, using a novel deep learning architecture.", "The problem addressed is important and timely", "and advances here may have an impact on many application areas outside medicine.", "The approach is evaluated on real-world medical datasets and has increased accuracy over the other methods compared against.", "+ A key advantage of the approach is that it continually learns from the collected data, using new measurements to update the model, and that it runs efficiently even on large real-world datasets.", "-However, the related work section is significantly underdeveloped, making it difficult to really compare the approach to the state of the art.", "The paper is ambitious and claims to address a variety of problems,", "but as a result each segment of related work seems to have been shortchanged.", "In particular, the section on missing data is missing a large amount of recent and related work.", "Normally, methods for handling missing data are categorized based on the missingness model (MAR/MCAR/MNAR).", "The paper seems to assume all data are missing at random, which is also a significant limitation of the methods.", "-The paper is organized in a nonstandard way, with the methods split across two sections, separated by the related work.", "It would be easier to follow with a more common intro/related work/methods structure.", "Questions: -One of the key motivations for the approach is sensing in medicine.", "However, many tests come as a group (e.g. the chem-7 or other panels).", "In this case, even if the only desired measurement is glucose, others will be included as well.", "Is it possible to incorporate this?", "It may change the threshold for the decision, as a combination of measures can be obtained for the same cost."], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "fact", "fact", "request", "evaluation"]}
{"doc_id": "S1jezarxG", "text": ["The paper offers a formal proof that gradient descent on the logistic loss converges very slowly to the hard SVM solution in the case where the data are linearly separable.", "This result should be viewed in the context of recent attempts at trying to understand the generalization ability of neural networks, which have turned to trying to understand the implicit regularization bias that comes from the choice of optimizer.", "Since we do not even understand the regularization bias of optimizers for the simpler case of linear models,", "I consider the paper's topic very interesting and timely.", "The overall discussion of the paper is well written,", "but on a more detailed level the paper gives an unpolished impression, and has many technical issues.", "Although I suspect that most (or even all) of these issues can be resolved, they interfere with checking the correctness of the results.", "Unfortunately, in its current state I therefore do not consider the paper ready for publication.", "Technical Issues: The statement of Lemma 5 has a trivial part and for the other part the proof is incorrect: Let x_u = ||nabla L(w(u))||^2.", "- Then the statement sum_{u=0}^t x_u < infinity is trivial,", "because it follows directly from ||nabla L(w(u))||^2 < infinity for all u.", "I would expect the intended statement to be sum_{u=0}^infinity x_u < infinity,", "which actually follows from the proof of the lemma.", "- The proof of the claim that t*x_t -> 0 is incorrect:", "sum_{u=0}^t x_u < infinity does not in itself imply that t*x_t -> 0, as claimed.", "For instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and x_t = 0 for all other t.", "Definition of tilde{w} in Theorem 4: - Why would tilde{w} be unique?", "In particular, if the support vectors do not span the space, because all data lie in the same lower-dimensional hyperplane, then this is not the case.", "- The KKT conditions do not rule out the case that \\hat{w}^top x_n = 1, but alpha_n = 0 (i.e. a support vector that touches the margin, but does not exert force against it).", "Such n are then included in cal{S}, but lead to problems in (2.7),", "because they would require tilde{w}^top x_n = infinity, which is not possible.", "In the proof of Lemma 6, case 2. 
at the bottom of p.14: - After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be C_0^2 t^{-epsilon_+}", "- After the second inequality the part between brackets is missing an additional term C_0^2 t^{-\\epsilon_+}.", "- In addition, the label (1) should be on the previous inequality and it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0 (otherwise it might be false).", "In the proof of Lemma 6, case 2 in the middle of p.15: - In the line of inequality (1) there is a t^{-epsilon_-} missing.", "In the next line there is a factor t^{-epsilon_-} too much.", "- In addition, the inequality e^x >= 1 + x holds for all x, so no need to mention that x > 0.", "In Lemma 1: - claim (3) should be lim_{t \\to \\infty} w(t)^\\top x_n = infinity", "- In the proof: w(t)^top x_n > 0 only holds for large enough t.", "Remarks: p.4 The claim that \"we can expect the population (or test) misclassification error of w(t) to improve\" because \"the margin of w(t) keeps improving\" is worded a little too strongly,", "because it presumes that the maximum margin solution will always have the best generalization error.", "In the proof sketch (p.3): - Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?", "- \"converges to some limit\". Mention that you call this limit w_infinity", "Minor Issues: In (2.4): add \"for all n\".", "p.10, footnote: Shouldn't \"P_1 = X_s X_s^+\" be something like \"P_1 = (X_s^top X_s)^+\"?", "A.9: ell should be ell'", "The paper needs a round of copy editing.", "For instance: - top of p.4: \"where tilde{w} A is the unique\"", "- p.10: \"the solution tilde{w} to TO eq. A.2\"", "- p.10: \"might BOT be unique\"", "- p.10: \"penrose-moorse pseudo inverse\" -> \"Moore-Penrose pseudoinverse\"", "In the bibliography, Kingma and Ba is cited twice, with different years."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "request", "request", "request", "request", "request", "request", "fact", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "quote", "quote", "quote", "request", "fact"]}
{"doc_id": "BJ2J7pFgf", "text": ["This paper presents a method for classifying Tumblr posts with associated images according to associated single emotion word hashtags.", "The method relies on sentiment pre-processing from GloVe and image pre-processing from Inception.", "My strongest criticism for this paper is against the claim that Tumblr post represent self-reported emotions and that this method sheds new insight on emotion representation", "and my secondary criticism is a lack of novelty in the method,", "which seems to be simply a combination of previously published sentiment analysis module and previously published image analysis module, fused in an output layer.", "The authors claim that the hashtags represent self-reported emotions,", "but this is not true in the way that psychologists query participants regarding emotion words in psychology studies.", "Instead these are emotion words that a person chooses to broadcast along with an associated announcement.", "As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory.", "It is quite common for everyday people to use emotion words this way e.g. using #love to express strong approval rather than an actual feeling of love.", "In their analysis the authors claim:", "\u201cThe 15 emotions retained were those with high relative frequencies on Tumblr among the PANAS-X scale (Watson & Clark, 1999)\u201d.", "However five of the words the authors retain: bored, annoyed, love, optimistic, and pensive are not in fact found in the PANAS-X scale:", "Reference: The PANAS-X Scale: https://wiki.aalto.fi/download/attachments/50102838/PANAS-X-scale_spec.pdf", "Also the longer version that the authors cited:", "https://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf", "It should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the \u201cX\u201d is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect,", "they are not collections of \"core\" emotion words,", "but rather words that are colloquially attached to either positive or negative sentiment.", "For example PANAS-X includes words like:\u201cstrong\u201d ,\u201cactive\u201d, \u201chealthy\u201d, \u201csleepy\u201d which are not considered emotion words by psychology.", "If the authors stated goal is \"different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment\" they should be aware that this is exactly what PANAS is designed to do -", "not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.", "The work of representing emotions had been an field in psychology for over a hundred years", "and it is still continuing.", "https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions.", "One of the most popular theories of emotion is the theory that there exist \u201cbasic\u201d emotions: Anger, Disgust, Fear, Happiness (enjoyment), Sadness and Surprise", "(Paul Ekman, cited by the authors).", "These are short duration sates lasting only seconds.", "They are also fairly specific,", "for example \u201csurprise\u201d is sudden reaction to something unexpected,", "which is it exactly the same as seeing a flower on your car and expressing \u201cwhat a nice surprise.\u201d", "The surprise would be the initial reaction of \u201cwhat\u2019s that on my car? 
Is it dangerous?\u201d", "but after identifying the object as non-threatening, the emotion of \u201csurprise\u201d would likely pass and be replaced with appreciation.", "The Circumplex Model of Emotions (Posner et al 2005) the authors refer to actually stands in opposition to the theories of Ekman.", "From the cited paper by Posner et al :", "\"The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion.\"", "From my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology\u2019s view of emotion representation", "and this work would not likely contribute to a new understanding of the latent structure of peoples\u2019 emotions.", "In the PCA result, it is not \"clear\" that the first axis represents valence,", "as \"sad\" has a slight positive on this scale", "and \"sad\" is one of the emotions most clearly associated with negative valence.", "With respect to the rest of the paper, the level of novelty and impact is \"ok, but not good enough.\"", "This analysis does not seem very different from Twitter analysis,", "because although Tumblr posts are allowed to be longer than Twitter posts,", "the authors truncate the posts to 50 characters.", "Additionally, the images do not seem to add very much to the classification.", "The authors algorithm also seems to be essentially a combination of two other, previously published algorithms.", "For me the novelty of this paper was in its application to the realm of emotion theory,", "but I do not feel there is a contribution here.", "This paper is more about classifying Tumblr posts according to emotion word hashtags than a paper that generates a new insights into emotion representation or that can infer latent emotional state."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "quote", "fact", "reference", "fact", "reference", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "reference", "fact", "reference", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "quote", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation"]}
{"doc_id": "ByRWmWAxM", "text": ["This paper proposes an extremely simple methodology to improve the network's performance by adding extra random perturbations (resizing/padding) at evaluation time.", "Although the paper is very basic, ", "it creates a good baseline for defending about various types of attacks and got good results in kaggle competition.", "The main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent.", "Cons of the paper: there is not much novel insight or really exciting new ideas presented.", "Pros: It gives a convincing very simple baseline ", "and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, ", "since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree. ", "So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation"]}
{"doc_id": "B1eq0Hqlz", "text": ["The authors investigate a modified input layer that results in color invariant networks. ", "The proposed methods are evaluated on two car datasets. ", "It is shown that certain color invariant \"input\" layers can improve accuracy for test-images from a different color distribution than the training images.", "The proposed assumptions are not well motivated and seem arbitrary. ", "Why is using a permutation of each pixels' color a good idea?", "The paper is very hard to read. ", "The message is unclear ", "and the experiments to prove it are of very limited scope, i.e. one small dataset with the only experiment purportedly showing generalization to red cars.", "Some examples of specific issues:- the abstract is almost incomprehensible and it is not clear what the contributions are", "- Some references to Figures are missing the figure number, eg. 3.2 first paragraph, ", "- It is not clear how many input channels the color invariant functions use, eg. p1 does it use only one channel and hence has fewer parameters?", "- are the training and testing sets all disjoint (sec 4.3)?", "- at random points figures are put in the appendix, even though they are described in the paper and seem to show key results (eg \"tested on nored-test\")", "- Sec 4.6: The explanation for why the accuracy drops for all models is not clear. ", "Is it because the total number of training images drops? If that's the case the whole experimental setup seems flawed.", "- Sec 4.6: the authors refer to the \"order net\" beating the baseline, ", "however, from Fig 8 (right most) it appears as if all models beat the baseline. ", "In the conclusion they say that weighted order net beats the baseline on all three test sets w/o red cars in the training set. ", "Is that Fig 8 @0%? ", "The baseline seems to be best performing on \"all cars\" and \"non-red cars\"", "In order to be at an appropriate level for any publication the experiments need to be much more general in scope."], "labels": ["fact", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "fact", "request"]}
{"doc_id": "B1RZJ1cxG", "text": ["The paper explores momentum SGD and an adaptive version of momentum SGD which the authors name YF (Yellow Fin).", "They compare YF to hand tuned momentumSGD and to Adam in several deep learning applications.", "I found the first part which discusses the theoretical motivation behind YF to be very confusing and misleading:", "Based on the analysis of 1-dimensional problems, the authors design a framework and an algorithm that supposedly ensures accelerated convergence.", "There are two major problems with this approach:-First: Exploring 1-dim functions is indeed a nice way to get some intuition.", "Yet, algorithms that work in the 1-dim case do not trivially generalize to high dimensions,", "and such reasoning might lead to very bad solutions.", "-Second: Accelerated GD does not benefit over GD in the 1-dim case.", "And therefore, this is not an appropriate setting to explore acceleration.", "Concretely, the definition of the generalized condition number $\\nu$, and relating it to the standard definition of the condition number $\\kappa$, is very misleading.", "This is since $\\kappa =1$ for 1-dim problems,", "and therefore accelerated GD does not have any benefits over non accelerated GD in this case.", "However, $\\nu$ might be much larger than 1 even in the 1-dim case.", "Regarding the algorithm itself: there are too many hyper-parameters (which depend on each other) that are tuned (per-dimension).", "And as I have mentioned, the design of the algorithm is inspired by the analysis of 1-dim quadratic functions.", "Thus, it is very hard for me to believe that this algorithm works in practice unless very careful fine tuning is employed.", "The authors mention that their experiments were done without tuning or with very little tuning, which is very mysterious for me.", "In contrast to the theoretical part, the experiments seems very encouraging.", "Showing YF to perform very well on several deep learning tasks without (or with very little) tuning.", "Again, this seems a bit magical or even too good to be truth.", "I suggest the authors to perform a experiment with say a qaudratic high dimensional function, which is not aligned with the axes in order to illustrate how their method behaves and try to give intuition."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
{"doc_id": "B1qhp-qeG", "text": ["The paper investigates the iterative estimation view on gated recurrent networks (GNN). ", "Authors observe that the average estimation error between a given hidden state and the last hidden state gradually decreases toward zeros. ", "This suggest that GNN are bias toward an identity mapping and learn to preserve the activation through time.", "Given this observation, authors then propose RIN, a new RNN parametrization where the hidden to hidden matrix is decomposed as a learnable weight matrix plus the identity matrix.", "Authors evaluate their RIN on the adding, sequential MNIST and the baby tasks and show that their IRNN outperforms the IRNN and LSTM models.", "Questions:- Section 2 suggests that use of the gate in GNNs encourages to learn an identity mapping. ", "Does the average iteration error behaves differently in case of a tanh-RNN ?", "- It seems from Figure 4 (a) that the average estimation error is higher for RIN than IRNN and LSTM and only decrease toward zero at the very end.", "What could explain this phenomenon?", "- While the LSTM baseline matches the results of Le et al., ", "later work such as Recurrent Batch Normalization or Unitary Evolution RNN have demonstrated much better performance with a vanilla LSTM on those tasks (outperforming both IRNN and RIN). ", "What could explain this difference in the performances?", "- Unless I am mistaken, Gated Orthogonal Recurrent Units: On Learning to Forget from Jing et al. also reports better performances for the LSTM (and GRU) baselines that outperform RIN on the baby tasks with mean performances of 58.2 and 56.0 for GRU and LSTM respectively?", "- Quality/Clarity:The paper is well written and pleasant to read", "- Originality:Looking at RNN from an iterative refinement point of view seems novel.", "- Significance:While looking at RNN from an iterative estimation is interesting, ", "the experimental part does not really show what are the advantages of the propose RIN. ", "In particular, the LSTM baseline seems to weak compared to other works."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "fact", "fact", "non-arg", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact"]}
{"doc_id": "HyM5JWhgG", "text": ["This paper discusses an application of survival analysis in social networks.", "While the application area seems to be pertinent, the statistics as presented in this paper are suboptimal at best.", "There is no useful statistical setup described (what is random? etc etc),", "the interplay between censoring and end-of-life is left rather fuzzy,", "and mentioned clustering approaches are extensively studied in the statistical literature in so-called frailty analysis.", "The setting is also covered in statistics in the extensive literature on repeated measurements and even time-series analysis.", "It's up to the authors discuss similarities and differences of results of the present approach and those areas.", "The numerical result is not assessing the different design decisions of the approach (why use a Kuyper loss?) in this empirical paper."], "labels": ["fact", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation"]}
{"doc_id": "Hyu5lW5xf", "text": ["This paper proposes a method, Dual-AC, for optimizing the actor(policy) and critic(value function) simultaneously which takes the form of a zero-sum game resulting in a principled method for using the critic to optimize the actor. ", "In order to achieve that, they take the linear programming approach of solving the bellman optimality equations, outline the deficiencies of this approach, and propose solutions to mitigate those problems. ", "The discussion on the deficiencies of the naive LP approach is mostly well done. ", "Their main contribution is extending the single step LP formulation to a multi-step dual form that reduces the bias and makes the connection between policy and value function optimization much clearer without loosing convexity by applying a regularization. ", "They perform an empirical study in the Inverted Double Pendulum domain to conclude that their extended algorithm outperforms the naive linear programming approach without the improvements. ", "Lastly, there are empirical experiments done to conclude the superior performance of Dual-AC in contrast to other actor-critic algorithms. ", "Overall, this paper could be a significant algorithmic contribution, with the caveat for some clarifications on the theory and experiments. ", "Given these clarifications in an author response, I would be willing to increase the score. ", "For the theory, there are a few steps that need clarification and further clarification on novelty. ", "For novelty, it is unclear if Theorem 2 and Theorem 3 are both being stated as novel results. ", "It looks like Theorem 2 has already been shown in \"Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear Running Time\u201d. ", "There is a statement that \u201cChen & Wang (2016); Wang (2017) apply stochastic first-order algorithms (Nemirovski et al., 2009) for the one-step Lagrangian of the LP problem in reinforcement learning setting. However, as we discussed in Section 3, their algorithm is restricted to tabular parametrization\u201d. ", "Is you Theorem 2 somehow an extension? ", "Is Theorem 3 completely new?", "This is particularly called into question due to the lack of assumptions about the function class for value functions. ", "It seems like the value function is required to be able to represent the true value function, ", "which can be almost as restrictive as requiring tabular parameterizations (which can represent the true value function). ", "This assumption seems to be used right at the bottom of Page 17, where U^{pi*} = V^*. ", "Further, eta_v must be chosen to ensure that it does not affect (constrain) the optimal solution, ", "which implies it might need to be very small. ", "More about conditions on eta_v would be illuminating. ", "There is also one step in the theorem that I cannot verify. ", "On Page 18, how is the squared removed for difference between U and Upi? ", "The transition from the second line of the proof to the third line is not clear. ", "It would also be good to more clearly state on page 14 how you get the first inequality, for || V^* ||_{2,mu}^2. ", "For the experiments, the following should be addressed.", "1. It would have been better to also show the performance graphs with and without the improvements for multiple domains.", "2. The central contribution is extending the single step LP to a multi-step formulation. 
", "It would be beneficial to empirically demonstrate how increasing k (the multi-step parameter) affects the performance gains.", "3. Increasing k also comes at a computational cost. ", "I would like to see some discussions on this and how long dual-AC takes to converge in comparison to the other algorithms tested (PPO and TRPO).", "4. The authors concluded the presence of local convexity based on hessian inspection due to the use of path regularization. ", "It was also mentioned that increasing the regularization parameter size increases the convergence rate. ", "Empirically, how does changing the regularization parameter affect the performance in terms of reward maximization? ", "In the experimental section of the appendix, it is mentioned that multiple regularization settings were tried but their performance is not mentioned. ", "Also, for the regularization parameters that were tried, based on hessian inspection, did they all result in local convexity? ", "A bit more discussion on these choices would be helpful. ", "Minor comments:1. Page 2: In equation 5, there should not be a 'ds' in the dual variable constraint"], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "quote", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "request", "evaluation", "request", "fact", "request", "fact", "fact", "request", "fact", "request", "request", "request"]}