SoAyBench / 39.jsonl
{"Query": "University of Technology Sydney的Zhibin Li的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Zhibin Li at University of Technology Sydney?", "Answer": "Crowd Flow Prediction (CFP) is one major challenge in the intelligent transportation systems of the Sydney Trains Network. However, most advanced CFP methods only focus on entrance and exit flows at the major stations or a few subway lines, neglecting Crowd Flow Distribution (CFD) forecasting problem across the entire city network. CFD prediction plays an irreplaceable role in metro management as a tool that can help authorities plan route schedules and avoid congestion. In this paper, we propose three online non-negative matrix factorization (ONMF) models. ONMF-AO incorporates an Average Optimization strategy that adapts to stable passenger flows. ONMF-MR captures the Most Recent trends to achieve better performance when sudden changes in crowd flow occur. The Hybrid model, ONMF-H, integrates both ONMF-AO and ONMF-MR to exploit the strengths of each model in different scenarios and enhance the models' applicability to real-world situations. Given a series of CFD snapshots, both models learn the latent attributes of the train stations and, therefore, are able to capture transition patterns from one timestamp to the next by combining historic guidance. Intensive experiments on a large-scale, real-world dataset containing transactional data demonstrate the superiority of our ONMF models.\n\n", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Zhibin Li", "organization": "University of Technology Sydney", "interest": "3DV"}}
{"Query": "Department of Computer Science, George Mason University的Jessica Lin的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jessica Lin at Department of Computer Science, George Mason University?", "Answer": "The parallel explosions of interest in streaming data, and data mining of time series have had surprisingly little intersection. This is in spite of the fact that time series data are typically streaming data. The main reason for this apparent paradox is the fact that the vast majority of work on streaming data explicitly assumes that the data is discrete, whereas the vast majority of time series data is real valued.Many researchers have also considered transforming real valued time series into symbolic representations, nothing that such representations would potentially allow researchers to avail of the wealth of data structures and algorithms from the text processing and bioinformatics communities, in addition to allowing formerly \"batch-only\" problems to be tackled by the streaming community. While many symbolic representations of time series have been introduced over the past decades, they all suffer from three fatal flaws. Firstly, the dimensionality of the symbolic representation is the same as the original data, and virtually all data mining algorithms scale poorly with dimensionality. Secondly, although distance measures can be defined on the symbolic approaches, these distance measures have little correlation with distance measures defined on the original time series. Finally, most of these symbolic approaches require one to have access to all the data, before creating the symbolic representation. This last feature explicitly thwarts efforts to use the representations with streaming algorithms.In this work we introduce a new symbolic representation of time series. Our representation is unique in that it allows dimensionality/numerosity reduction, and it also allows distance measures to be defined on the symbolic approach that lower bound corresponding distance measures defined on the original series. As we shall demonstrate, this latter feature is particularly exciting because it allows one to run certain data mining algorithms on the efficiently manipulated symbolic representation, while producing identical results to the algorithms that operate on the original data. Finally, our representation allows the real valued data to be converted in a streaming fashion, with only an infinitesimal time and space overhead.We will demonstrate the utility of our representation on the classic data mining tasks of clustering, classification, query by content and anomaly detection.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Jessica Lin", "organization": "Department of Computer Science, George Mason University", "interest": "Time Series"}}
{"Query": "Amazon的Jun Ma的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jun Ma at Amazon?", "Answer": " Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories. ", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Jun Ma", "organization": "Amazon", "interest": "Knowledge Extraction"}}
{"Query": "4Paradigm Inc.的Yongqi Zhang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Yongqi Zhang at 4Paradigm Inc.?", "Answer": "Knowledge graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations in the graph into low dimensional vector space, which can be used for subsequent algorithms. Negative sampling, which samples negative triplets from non-observed ones in the training data, is an important step in KG embedding. Recently, generative adversarial network (GAN), has been introduced in negative sampling. By sampling negative triplets with large scores, these methods avoid the problem of vanishing gradient and thus obtain better performance. However, using GAN makes the original model more complex and harder to train, where reinforcement learning must be used. In this paper, motivated by the observation that negative triplets with large scores are important but rare, we propose to directly keep track of them with cache. However, how to sample from and update the cache are two important questions. We carefully design the solutions, which are not only efficient but also achieve good balance between exploration and exploitation. In this way, our method acts as a \"distilled\" version of previous GAN-based methods, which does not waste training time on additional parameters to fit the full distribution of negative triplets. The extensive experiments show that our method can gain significant improvement on various KG embedding models, and outperform the state-of-the-arts negative sampling methods based on GAN.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Yongqi Zhang", "organization": "4Paradigm Inc.", "interest": "Knowledge Graph"}}
{"Query": "Criteo的David Rohde的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of David Rohde at Criteo?", "Answer": "We present the largest catalogue to date of optical counterparts for H i radio-selected galaxies, HOPCAT. Of the 4315 H i radio-detected sources from the H i Parkes All Sky Survey (HIPASS) catalogue, we find optical counterparts for 3618 (84 per cent) galaxies. Of these, 1798 (42 per cent) have confirmed optical velocities and 848 (20 per cent) are single matches without confirmed velocities. Some...", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "David Rohde", "organization": "Criteo", "interest": "Flow-comap"}}
{"Query": "College of Computer Science and Technology, Zhejiang University的Jianling Sun的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jianling Sun at College of Computer Science and Technology, Zhejiang University?", "Answer": "Defect prediction is a very meaningful topic, particularly at change-level. Change-level defect prediction, which is also referred as just-in-time defect prediction, could not only ensure software quality in the development process, but also make the developers check and fix the defects in time. Nowadays, deep learning is a hot topic in the machine learning literature. Whether deep learning can be used to improve the performance of just-in-time defect prediction is still uninvestigated. In this paper, to bridge this research gap, we propose an approach Deeper which leverages deep learning techniques to predict defect-prone changes. We first build a set of expressive features from a set of initial change features by leveraging a deep belief network algorithm. Next, a machine learning classifier is built on the selected features. To evaluate the performance of our approach, we use datasets from six large open source projects, i.e., Bugzilla, Columba, JDT, Platform, Mozilla, and PostgreSQL, containing a total of 137,417 changes. We compare our approach with the approach proposed by Kamei et al. The experimental results show that on average across the 6 projects, Deeper could discover 32.22% more bugs than Kamei et al's approach (51.04% versus 18.82% on average). In addition, Deeper can achieve F1-scores of 0.22-0.63, which are statistically significantly higher than those of Kamei et al.'s approach on 4 out of the 6 projects.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Jianling Sun", "organization": "College of Computer Science and Technology, Zhejiang University", "interest": "Software Maintenance"}}
{"Query": "School of Computer Science and Technology, University of Science and Technology of China的Runlong Yu的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Runlong Yu at School of Computer Science and Technology, University of Science and Technology of China?", "Answer": "As users implicitly express their preferences to items on many real-world applications, the implicit feedback based collaborative filtering has attracted much attention in recent years. Pairwise methods have shown state-of-the-art solutions for dealing with the implicit feedback, with the assumption that users prefer the observed items to the unobserved items. However, for each user, the huge unobserved items are not equal to represent her preference. In this paper, we propose a Multiple Pairwise Ranking (MPR) approach, which relaxes the simple pairwise preference assumption in previous works by further tapping the connections among items with multiple pairwise ranking criteria. Specifically, we exploit the preference difference among multiple pairs of items by dividing the unobserved items into different parts. Empirical studies show that our algorithms outperform the state-of-the-art methods on real-world datasets.\n\n", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Runlong Yu", "organization": "School of Computer Science and Technology, University of Science and Technology of China", "interest": "One-class Collaborative Filtering"}}
{"Query": "Department of Computer Science and Engineering, University of South Carolina的Yuewei Lin的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Yuewei Lin at Department of Computer Science and Engineering, University of South Carolina?", "Answer": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the MI body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Yuewei Lin", "organization": "Department of Computer Science and Engineering, University of South Carolina", "interest": "Feature Extraction"}}
{"Query": "LIAAD - INESC TEC的Bruno Veloso的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Bruno Veloso at LIAAD - INESC TEC?", "Answer": "The widespread usage of smart devices and sensors together with the ubiquity of the Internet access is behind the exponential growth of data streams. Nowadays, there are hundreds of machine learning algorithms able to process high-speed data streams. However, these algorithms rely on human expertise to perform complex processing tasks like hyper-parameter tuning. This paper addresses the problem of data variability modelling in data streams. Specifically, we propose and evaluate a new parameter tuning algorithm called Self Parameter Tuning (SPT). SPT consists of an online adaptation of the Nelder u0026 Mead optimisation algorithm for hyper-parameter tuning. The method explores a dynamic size sample method to evaluate the current solution, and uses the Nelder u0026 Mead operators to update the current set of parameters. The main contribution is the adaptation of the Nelder-Mead algorithm to automatically tune regression hyper-parameters for data streams. Additionally, whenever concept drifts occur in the data stream, it re-initiates the search for new hyper-parameters. The proposed method has been evaluated on regression scenario. Experiments with well known time-evolving data streams show that the proposed SPT hyper-parameter optimisation outperforms the results of previous expert hyper-parameter tuning efforts.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Bruno Veloso", "organization": "LIAAD - INESC TEC", "interest": "Profiling"}}
{"Query": "University of Pittsburgh的Shengyu Chen的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Shengyu Chen at University of Pittsburgh?", "Answer": "Identification of replication origins is playing a key role in understanding the mechanism of DNA replication. This task is of great significance in DNA sequence analysis. Because of its importance, some computational approaches have been introduced. Among these predictors, the iRO-3wPseKNC predictor is the first discriminative method that is able to correctly identify the entire replication origins. For further improving its predictive performance, we proposed the Pseudo k-tuple GC Composition (PsekGCC) approach to capture the \"GC asymmetry bias\" of yeast species by considering both the GC skew and the sequence order effects of k-tuple GC Composition (k-GCC) in this study. Based on PseKGCC, we proposed a new predictor called iRO-PsekGCC to identify the DNA replication origins. Rigorous jackknife test on two yeast species benchmark datasets (Saccharomyces cerevisiae, Pichia pastoris) indicated that iRO-PsekGCC outperformed iRO-3wPseKNC. It can be anticipated that iRO-PsekGCC will be a useful tool for DNA replication origin identification. Availability and implementation: The web-server for the iRO-PsekGCC predictor was established, and it can be accessed at http://bliulab.net/iRO-PsekGCC/.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Shengyu Chen", "organization": "University of Pittsburgh", "interest": "Transmission Electron Microscopy"}}
{"Query": "South China University of Technology的Hengjie Song的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Hengjie Song at South China University of Technology?", "Answer": "Transfer learning has been proven to be effective for the problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source domain and the target domain, many previous studies have been focused on designing and optimizing objective functions with the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications, the Euclidean distance may be inappropriate to capture the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper, we propose a metric transfer learning framework (MTLF) to encode metric learning in transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while Mahalanobis distance is learned simultaneously to maximize the intra-class distances and minimize the inter-class distances for the target domain. Unlike previous work where instance weights and Mahalanobis distance are trained in a pipelined framework that potentially leads to error propagation across different components, MTLF attempts to learn instance weights and a Mahalanobis distance in a parallel framework to make knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF, respectively. We conduct extensive experiments on several real-world datasets on object recognition, handwriting recognition, and WiFi location to verify the effectiveness of MTLF compared with a number of state-of-the-art methods.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Hengjie Song", "organization": "South China University of Technology", "interest": "Metric Learning"}}
{"Query": "Department of Computer Science and Technology, Tsinghua University的Wenwu Zhu的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Wenwu Zhu at Department of Computer Science and Technology, Tsinghua University?", "Answer": "Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Wenwu Zhu", "organization": "Department of Computer Science and Technology, Tsinghua University", "interest": "Internet"}}
{"Query": "RobustNet Lab, University of Michigan, Ann Arbor的Jiachen Sun的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jiachen Sun at RobustNet Lab, University of Michigan, Ann Arbor?", "Answer": "Perception plays a pivotal role in autonomous driving systems, which utilizes onboard sensors like cameras and LiDARs (Light Detection and Ranging) to assess surroundings. Recent studies have demonstrated that LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car by strategically transmitting laser signals to the victim's LiDAR sensor. However, existing attacks suffer from effectiveness and generality limitations. In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks. We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates on all target models. We perform the first defense study, proposing CARLO to mitigate LiDAR spoofing attacks. CARLO detects spoofed data by treating ignored occlusion patterns as invariant physical features, which reduces the mean attack success rate to 5.5%. Meanwhile, we take the first step towards exploring a general architecture for robust LiDAR-based perception, and propose SVF that embeds the neglected physical features into end-to-end learning. SVF further reduces the mean attack success rate to around 2.3%.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Jiachen Sun", "organization": "RobustNet Lab, University of Michigan, Ann Arbor", "interest": "Barrier"}}
{"Query": "School of Computer Science and Technology, University of Science and Technology of China的Zhenya Huang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Zhenya Huang at School of Computer Science and Technology, University of Science and Technology of China?", "Answer": "For offering proactive services (e.g., personalized exercise recommendation) to the students in computer supported intelligent education, one of the fundamental tasks is predicting student performance (e.g., scores) on future exercises, where it is necessary to track the change of each student's knowledge acquisition during her exercising activities. Unfortunately, to the best of our knowledge, existing approaches can only exploit the exercising records of students, and the problem of extracting rich information existed in the materials (e.g., knowledge concepts, exercise content) of exercises to achieve both more precise prediction of student performance and more interpretable analysis of knowledge acquisition remains underexplored. To this end, in this paper, we present a holistic study of student performance prediction. To directly achieve the primary goal of performance prediction, we first propose a general Exercise-Enhanced Recurrent Neural Network (EERNN) framework by exploring both student's exercising records and the text content of corresponding exercises. In EERNN, we simply summarize each student's state into an integrated vector and trace it with a recurrent neural network, where we design a bidirectional LSTM to learn the encoding of each exercise from its content. For making final predictions, we design two implementations on the basis of EERNN with different prediction strategies, i.e., EERNNM with Markov property and EERNNA with Attention mechanism. Then, to explicitly track student's knowledge acquisition on multiple knowledge concepts, we extend EERNN to an explainable Exercise-aware Knowledge Tracing (EKT) framework by incorporating the knowledge concept information, where the student's integrated state vector is now extended to a knowledge state matrix. In EKT, we further develop a memory network for quantifying how much each exercise can affect the mastery of students on multiple knowledge concepts during the exercising process. Finally, we conduct extensive experiments and evaluate both EERNN and EKT frameworks on a large-scale real-world data. The results in both general and cold-start scenarios clearly demonstrate the effectiveness of two frameworks in student performance prediction as well as the superior interpretability of EKT.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Zhenya Huang", "organization": "School of Computer Science and Technology, University of Science and Technology of China", "interest": "Knowledge Tracing"}}
{"Query": "U.S. Army Research Laboratory的Ananthram Swami的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Ananthram Swami at U.S. Army Research Laboratory?", "Answer": "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Ananthram Swami", "organization": "U.S. Army Research Laboratory", "interest": "Signal Processing"}}
{"Query": "Nanjing University的Zhi-Hua Zhou的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Zhi-Hua Zhou at Nanjing University?", "Answer": "We study the problem of segmenting a sequence into k pieces so that the resulting segmentation satisfies monotonicity or unimodality constraints. Unimodal functions can be used to model phenomena in which a measured variable first increases to ...", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Zhi-Hua Zhou", "organization": "Nanjing University", "interest": "Machine Learning"}}
{"Query": "Department of Computer Science, Princeton University的Vishvak Murahari的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Vishvak Murahari at Department of Computer Science, Princeton University?", "Answer": "Deep Learning methods have become very attractive in the wider, wearables-based human activity recognition (HAR) research community. The majority of models are based on either convolutional or explicitly temporal models, or combinations of both. In this paper we introduce attention models into HAR research as a data driven approach for exploring relevant temporal context. Attention models learn a set of weights over input data, which we leverage to weight the temporal context being considered to model each sensor reading. We construct attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM) and evaluate our approach on benchmark datasets achieving significant increase in performance. Finally, we visualize the learned weights to better understand what constitutes relevant temporal context.\n\n", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Vishvak Murahari", "organization": "Department of Computer Science, Princeton University", "interest": "Visual Dialog"}}
{"Query": "Max-Planck-Institut fur Informatik的Panagiotis Mandros的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Panagiotis Mandros at Max-Planck-Institut fur Informatik?", "Answer": "Given a database and a target attribute of interest, how can we tell whether there exists a functional, or approximately functional dependence of the target on any set of other attributes in the data? How can we reliably, without bias to sample size or dimensionality, measure the strength of such a dependence? And, how can we efficiently discover the optimal or α-approximate top-k dependencies? These are exactly the questions we answer in this paper. As we want to be agnostic on the form of the dependence, we adopt an information-theoretic approach, and construct a reliable, bias correcting score that can be efficiently computed. Moreover, we give an effective optimistic estimator of this score, by which for the first time we can mine the approximate functional dependencies from data with guarantees of optimality. Empirical evaluation shows that the derived score achieves a good bias for variance trade-off, can be used within an efficient discovery algorithm, and indeed discovers meaningful dependencies. Most important, it remains reliable in the face of data sparsity.", "Base_Question_zh": "X机构的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX at X institution?", "Inputs": "name, organization", "Outputs": "abstract", "Entity_Information": {"name": "Panagiotis Mandros", "organization": "Max-Planck-Institut fur Informatik", "interest": "Information Theory"}}